From xen-devel-bounces@lists.xenproject.org Thu Oct 01 00:07:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 00:07:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.972.3283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNm7c-0006J3-V1; Thu, 01 Oct 2020 00:06:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 972.3283; Thu, 01 Oct 2020 00:06:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNm7c-0006Iw-S3; Thu, 01 Oct 2020 00:06:56 +0000
Received: by outflank-mailman (input) for mailman id 972;
 Thu, 01 Oct 2020 00:06:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=thWI=DI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kNm7b-0006Ir-7k
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 00:06:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc9086a4-b977-4f98-99e2-0fedf366f8ef;
 Thu, 01 Oct 2020 00:06:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6B3022067C;
 Thu,  1 Oct 2020 00:06:53 +0000 (UTC)
X-Inumbo-ID: cc9086a4-b977-4f98-99e2-0fedf366f8ef
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601510813;
	bh=MMeaSunKQGlKKUocp6u14CEm8/aY9P233Y0l8uhIKnw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mBo86eV9QJlMDK7ysLcwHjCbQ0qaY1iEf62hHf6Xp4GH4vzLmx3+M/bHW1rQ6ifmY
	 3Xe6MbJMJ2xAQLN4+IqN7pGa44IYnJQeh9YemiG4mLk6Qe6+4wqdUNxsnJb0ehm8n2
	 NH/xoZcm74HfV2v50ultqVuWtMhax2sIFJ4lRcAc=
Date: Wed, 30 Sep 2020 17:06:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH 1/4] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
In-Reply-To: <20200926205542.9261-2-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2009301659350.10908@sstabellini-ThinkPad-T480s>
References: <20200926205542.9261-1-julien@xen.org> <20200926205542.9261-2-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 26 Sep 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
> while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
> 
> Currently, the former are still containing x86 specific code.
> 
> To avoid this rather strange split, the generic helpers are reworked so
> they are arch-agnostic. This requires the introduction of a new helper
> __acpi_os_unmap_memory() that will undo any mapping done by
> __acpi_os_map_memory().
> 
> Currently, the arch-helper for unmap is basically a no-op so it only
> returns whether the mapping was arch specific. But this will change
> in the future.
> 
> Note that the x86 version of acpi_os_map_memory() was already able to
> map the 1MB region. Hence there is no addition of new code.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/arch/arm/acpi/lib.c | 10 ++++++++++
>  xen/arch/x86/acpi/lib.c | 18 ++++++++++++++++++
>  xen/drivers/acpi/osl.c  | 34 ++++++++++++++++++----------------
>  xen/include/xen/acpi.h  |  1 +
>  4 files changed, 47 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
> index 4fc6e17322c1..2192a5519171 100644
> --- a/xen/arch/arm/acpi/lib.c
> +++ b/xen/arch/arm/acpi/lib.c
> @@ -30,6 +30,10 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>      unsigned long base, offset, mapped_size;
>      int idx;
>  
> +    /* No arch specific implementation after early boot */
> +    if ( system_state >= SYS_STATE_boot )
> +        return NULL;
> +
>      offset = phys & (PAGE_SIZE - 1);
>      mapped_size = PAGE_SIZE - offset;
>      set_fixmap(FIXMAP_ACPI_BEGIN, maddr_to_mfn(phys), PAGE_HYPERVISOR);
> @@ -49,6 +53,12 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>      return ((char *) base + offset);
>  }
>  
> +bool __acpi_unmap_table(void *ptr, unsigned long size)
> +{
> +    return ( vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) &&
> +             vaddr < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) );
> +}

vaddr or ptr?  :-)

lib.c: In function '__acpi_unmap_table':
lib.c:58:14: error: 'vaddr' undeclared (first use in this function)
     return ( vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) &&
              ^
lib.c:58:14: note: each undeclared identifier is reported only once for each function it appears in
lib.c:60:1: error: control reaches end of non-void function [-Werror=return-type]
 }
 ^
cc1: all warnings being treated as errors

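For reference, the build error above comes from using `vaddr` without declaring it; presumably the intent (mirroring the x86 hunk later in the patch) is to derive it from the `ptr` argument. A standalone sketch with made-up fixmap addresses, purely illustrative of the likely fix:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for Xen's fixmap constants -- not real values */
#define PAGE_SIZE         4096UL
#define ACPI_FIXMAP_START 0xffff0000UL                      /* FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) */
#define ACPI_FIXMAP_LAST  (ACPI_FIXMAP_START + 3 * PAGE_SIZE) /* FIXMAP_ADDR(FIXMAP_ACPI_END) */

/* Likely intended shape: declare vaddr from ptr, then do the range check. */
static bool acpi_unmap_table_sketch(void *ptr, unsigned long size)
{
    unsigned long vaddr = (unsigned long)ptr;

    (void)size;  /* unused by this early version of the helper */
    return vaddr >= ACPI_FIXMAP_START &&
           vaddr < (ACPI_FIXMAP_LAST + PAGE_SIZE);
}
```

With `vaddr` declared, both diagnostics go away: the identifier exists, and every path returns a value.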


>  /* True to indicate PSCI 0.2+ is implemented */
>  bool __init acpi_psci_present(void)
>  {
> diff --git a/xen/arch/x86/acpi/lib.c b/xen/arch/x86/acpi/lib.c
> index 265b9ad81905..77803f4d4c63 100644
> --- a/xen/arch/x86/acpi/lib.c
> +++ b/xen/arch/x86/acpi/lib.c
> @@ -46,6 +46,10 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>  	if ((phys + size) <= (1 * 1024 * 1024))
>  		return __va(phys);
>  
> +	/* No arch specific implementation after early boot */
> +	if (system_state >= SYS_STATE_boot)
> +		return NULL;
> +
>  	offset = phys & (PAGE_SIZE - 1);
>  	mapped_size = PAGE_SIZE - offset;
>  	set_fixmap(FIX_ACPI_END, phys);
> @@ -66,6 +70,20 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>  	return ((char *) base + offset);
>  }
>  
> +bool __acpi_unmap_table(void *ptr, unsigned long size)
> +{
> +	unsigned long vaddr = (unsigned long)ptr;
> +
> +	if (vaddr >= DIRECTMAP_VIRT_START &&
> +	    vaddr < DIRECTMAP_VIRT_END) {
> +		ASSERT(!((__pa(ptr) + size - 1) >> 20));
> +		return true;
> +	}
> +
> +	return (vaddr >= __fix_to_virt(FIX_ACPI_END)) &&
> +		(vaddr < (__fix_to_virt(FIX_ACPI_BEGIN) + PAGE_SIZE));
> +}
> +
>  unsigned int acpi_get_processor_id(unsigned int cpu)
>  {
>  	unsigned int acpiid, apicid;
> diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> index 4c8bb7839eda..100eee72def2 100644
> --- a/xen/drivers/acpi/osl.c
> +++ b/xen/drivers/acpi/osl.c
> @@ -92,27 +92,29 @@ acpi_physical_address __init acpi_os_get_root_pointer(void)
>  void __iomem *
>  acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
>  {
> -	if (system_state >= SYS_STATE_boot) {
> -		mfn_t mfn = _mfn(PFN_DOWN(phys));
> -		unsigned int offs = phys & (PAGE_SIZE - 1);
> -		/* The low first Mb is always mapped on x86. */
> -		if (IS_ENABLED(CONFIG_X86) && !((phys + size - 1) >> 20))
> -			return __va(phys);
> -		return __vmap(&mfn, PFN_UP(offs + size), 1, 1,
> -			      ACPI_MAP_MEM_ATTR, VMAP_DEFAULT) + offs;
> -	}
> -	return __acpi_map_table(phys, size);
> +	void *ptr;
> +	mfn_t mfn = _mfn(PFN_DOWN(phys));
> +	unsigned int offs = phys & (PAGE_SIZE - 1);
> +
> +	/* Try the arch specific implementation first */
> +	ptr = __acpi_map_table(phys, size);
> +	if (ptr)
> +		return ptr;
> +
> +	/* No common implementation for early boot map */
> +	if (unlikely(system_state < SYS_STATE_boot))
> +	     return NULL;
> +
> +	ptr = __vmap(&mfn, PFN_UP(offs + size), 1, 1,
> +		     ACPI_MAP_MEM_ATTR, VMAP_DEFAULT);
> +
> +	return !ptr ? NULL : (ptr + offs);
>  }
>  
>  void acpi_os_unmap_memory(void __iomem * virt, acpi_size size)
>  {
> -	if (IS_ENABLED(CONFIG_X86) &&
> -	    (unsigned long)virt >= DIRECTMAP_VIRT_START &&
> -	    (unsigned long)virt < DIRECTMAP_VIRT_END) {
> -		ASSERT(!((__pa(virt) + size - 1) >> 20));
> +	if (__acpi_unmap_table(virt, size))
>  		return;
> -	}
>  
>  	if (system_state >= SYS_STATE_boot)
>  		vunmap((void *)((unsigned long)virt & PAGE_MASK));
> diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
> index c945ab05c864..5a84a4bf54e0 100644
> --- a/xen/include/xen/acpi.h
> +++ b/xen/include/xen/acpi.h
> @@ -68,6 +68,7 @@ typedef int (*acpi_table_entry_handler) (struct acpi_subtable_header *header, co
>  
>  unsigned int acpi_get_processor_id (unsigned int cpu);
>  char * __acpi_map_table (paddr_t phys_addr, unsigned long size);
> +bool __acpi_unmap_table(void *ptr, unsigned long size);
>  int acpi_boot_init (void);
>  int acpi_boot_table_init (void);
>  int acpi_numa_init (void);
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 00:30:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 00:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.976.3299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNmUM-0000NQ-1l; Thu, 01 Oct 2020 00:30:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 976.3299; Thu, 01 Oct 2020 00:30:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNmUL-0000NJ-Tp; Thu, 01 Oct 2020 00:30:25 +0000
Received: by outflank-mailman (input) for mailman id 976;
 Thu, 01 Oct 2020 00:30:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=thWI=DI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kNmUL-0000NE-3t
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 00:30:25 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81ab1160-02fb-41d7-9c38-ab476fdc1c08;
 Thu, 01 Oct 2020 00:30:24 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E1E9A2184D;
 Thu,  1 Oct 2020 00:30:22 +0000 (UTC)
X-Inumbo-ID: 81ab1160-02fb-41d7-9c38-ab476fdc1c08
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601512223;
	bh=4PcswBWRr6GsvfgvbHrpwY1GkLAGkSmFWDxYJRxe9Ss=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bjS95EOXuOzPq9TjClXwKrxFFDNV0vUODwM97RUZ8LUGyiJ/QmDVj2t7P8GguNxTN
	 Eglm/+IqA3oiMIYWaJmlfRgDiZ0stRklvsg430Fnqwsf/aiW2Uo+HbR6DUnDENGxRX
	 t+So276Ex0JmZyzYUQ3AGMPtegpxAYtqQdoysTJk=
Date: Wed, 30 Sep 2020 17:30:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Wei Xu <xuwei5@hisilicon.com>
Subject: Re: [PATCH 2/4] xen/arm: acpi: The fixmap area should always be
 cleared during failure/unmap
In-Reply-To: <20200926205542.9261-3-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2009301711190.10908@sstabellini-ThinkPad-T480s>
References: <20200926205542.9261-1-julien@xen.org> <20200926205542.9261-3-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 26 Sep 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Commit 022387ee1ad3 "xen/arm: mm: Don't open-code Xen PT update in
> {set, clear}_fixmap()" enforced that each set_fixmap() should be
> paired with a clear_fixmap(). Any failure to follow the model would
> result to a platform crash.
> 
> Unfortunately, the use of fixmap in the ACPI code was overlooked as it
> is calling set_fixmap() but not clear_fixmap().
> 
> The function __acpi_os_map_table() is reworked so:
>     - We know before the mapping whether the fixmap region is big
>     enough for the mapping.
>     - It will fail if the fixmap is always inuse.

I take it you mean "it will fail if the fixmap is *already* in use"?

If so, can that be a problem? Or is the expectation that in practice
__acpi_os_map_table() will only get called once before SYS_STATE_boot?

Looking at the code it would seem that even before this patch
__acpi_os_map_table() wasn't able to handle multiple calls before
SYS_STATE_boot.


> 
> The function __acpi_os_unmap_table() will now call clear_fixmap().
> 
> Reported-by: Wei Xu <xuwei5@hisilicon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> The discussion on the original thread [1] suggested to also zap it on
> x86. This is technically not necessary today, so it is left alone for
> now.
> 
> I looked at making the fixmap code common but the index are inverted
> between Arm and x86.
> 
> [1] https://lore.kernel.org/xen-devel/5E26C935.9080107@hisilicon.com/
> ---
>  xen/arch/arm/acpi/lib.c | 75 +++++++++++++++++++++++++++++++----------
>  1 file changed, 58 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
> index 2192a5519171..eebaca695562 100644
> --- a/xen/arch/arm/acpi/lib.c
> +++ b/xen/arch/arm/acpi/lib.c
> @@ -25,38 +25,79 @@
>  #include <xen/init.h>
>  #include <xen/mm.h>
>  
> +static bool fixmap_inuse;
> +
>  char *__acpi_map_table(paddr_t phys, unsigned long size)
>  {
> -    unsigned long base, offset, mapped_size;
> -    int idx;
> +    unsigned long base, offset;
> +    mfn_t mfn;
> +    unsigned int idx;
>  
>      /* No arch specific implementation after early boot */
>      if ( system_state >= SYS_STATE_boot )
>          return NULL;
>  
>      offset = phys & (PAGE_SIZE - 1);
> -    mapped_size = PAGE_SIZE - offset;
> -    set_fixmap(FIXMAP_ACPI_BEGIN, maddr_to_mfn(phys), PAGE_HYPERVISOR);
> -    base = FIXMAP_ADDR(FIXMAP_ACPI_BEGIN);
> +    base = FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) + offset;
> +
> +    /* Check the fixmap is big enough to map the region */
> +    if ( (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - base) < size )
> +        return NULL;
> +
> +    /* With the fixmap, we can only map one region at the time */
> +    if ( fixmap_inuse )
> +        return NULL;
>  
> -    /* Most cases can be covered by the below. */
> +    fixmap_inuse = true;
> +
> +    size += offset;
> +    mfn = maddr_to_mfn(phys);
>      idx = FIXMAP_ACPI_BEGIN;
> -    while ( mapped_size < size )
> -    {
> -        if ( ++idx > FIXMAP_ACPI_END )
> -            return NULL;    /* cannot handle this */
> -        phys += PAGE_SIZE;
> -        set_fixmap(idx, maddr_to_mfn(phys), PAGE_HYPERVISOR);
> -        mapped_size += PAGE_SIZE;
> -    }
>  
> -    return ((char *) base + offset);
> +    do {
> +        set_fixmap(idx, mfn, PAGE_HYPERVISOR);
> +        size -= min(size, (unsigned long)PAGE_SIZE);
> +        mfn = mfn_add(mfn, 1);
> +        idx++;
> +    } while ( size > 0 );
> +
> +    return (char *)base;
>  }
>  
>  bool __acpi_unmap_table(void *ptr, unsigned long size)
>  {
> -    return ( vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) &&
> -             vaddr < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) );
> +    vaddr_t vaddr = (vaddr_t)ptr;
> +    unsigned int idx;
> +
> +    /* We are only handling fixmap address in the arch code */
> +    if ( vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) ||
> +         vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END) )

The "+ PAGE_SIZE" got lost

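To spell out the off-by-one: FIXMAP_ADDR(FIXMAP_ACPI_END) is the *start* of the last fixmap page, so a pointer into that last page is still a valid fixmap mapping and the upper bound must extend one page further. A quick standalone check with made-up addresses:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for the fixmap layout -- not real values */
#define PAGE_SIZE  4096UL
#define ACPI_BEGIN 0xffff0000UL                  /* FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) */
#define ACPI_END   (ACPI_BEGIN + 3 * PAGE_SIZE)  /* FIXMAP_ADDR(FIXMAP_ACPI_END): start of last page */

/* Range check as posted: misses addresses inside the last fixmap page... */
static bool in_region_posted(unsigned long vaddr)
{
    return vaddr >= ACPI_BEGIN && vaddr < ACPI_END;
}

/* ...and with the "+ PAGE_SIZE" restored, covering the whole region. */
static bool in_region_fixed(unsigned long vaddr)
{
    return vaddr >= ACPI_BEGIN && vaddr < (ACPI_END + PAGE_SIZE);
}
```

An address one byte past ACPI_END falls inside the last mapped page, so the posted check would wrongly report it as outside the fixmap region.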

> +        return false;
> +
> +    /*
> +     * __acpi_map_table() will always return a pointer in the first page
> +     * for the ACPI fixmap region. The caller is expected to free with
> +     * the same address.
> +     */
> +    ASSERT((vaddr & PAGE_MASK) == FIXMAP_ADDR(FIXMAP_ACPI_BEGIN));
> +
> +    /* The region allocated fit in the ACPI fixmap region. */
> +    ASSERT(size < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - vaddr));
> +    ASSERT(fixmap_inuse);
> +
> +    fixmap_inuse = false;
> +
> +    size += FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) - vaddr;

Sorry I got confused.. Shouldn't this be:

  size += vaddr - FIXMAP_ADDR(FIXMAP_ACPI_BEGIN);

?

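For what it's worth, the suggested form adds the mapping's page offset back into `size`, so the clear_fixmap() loop covers every slot that was mapped; the posted form yields a negative (huge unsigned) value whenever vaddr is above the region base. A quick arithmetic check of the suggested correction (page size and base address are made up):

```c
#include <assert.h>

/* Illustrative constants -- not Xen's real fixmap values */
#define PAGE_SIZE  4096UL
#define ACPI_BEGIN 0xffff0000UL  /* stand-in for FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) */

/* Number of fixmap slots the unmap loop clears for a mapping that
 * __acpi_map_table() returned at vaddr (= ACPI_BEGIN + offset). */
static unsigned long pages_to_clear(unsigned long vaddr, unsigned long size)
{
    unsigned long n = 0;

    size += vaddr - ACPI_BEGIN;  /* the suggested correction: add the offset back */
    do {
        size -= (size < PAGE_SIZE) ? size : PAGE_SIZE;
        n++;
    } while (size > 0);

    return n;
}
```

A table at offset 0x100 spanning 0x1000 bytes straddles two pages, and the corrected arithmetic clears exactly two slots.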

> +    idx = FIXMAP_ACPI_BEGIN;
> +
> +    do
> +    {
> +        clear_fixmap(idx);
> +        size -= min(size, (unsigned long)PAGE_SIZE);
> +        idx++;
> +    } while ( size > 0 );
> +
> +    return true;
>  }
>  
>  /* True to indicate PSCI 0.2+ is implemented */
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 01:13:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 01:13:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.981.3315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNn9P-0004hM-Bm; Thu, 01 Oct 2020 01:12:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 981.3315; Thu, 01 Oct 2020 01:12:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNn9P-0004hF-8e; Thu, 01 Oct 2020 01:12:51 +0000
Received: by outflank-mailman (input) for mailman id 981;
 Thu, 01 Oct 2020 01:12:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DWMX=DI=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kNn9N-0004h9-Ne
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 01:12:50 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6835e8ae-ca98-47e5-b8bd-b803a0fd0424;
 Thu, 01 Oct 2020 01:12:48 +0000 (UTC)
Received: from compute7.internal (compute7.nyi.internal [10.202.2.47])
 by mailout.nyi.internal (Postfix) with ESMTP id B82E15C0193;
 Wed, 30 Sep 2020 21:12:48 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute7.internal (MEProxy); Wed, 30 Sep 2020 21:12:48 -0400
Received: from mail-itl (unknown [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id A32673280059;
 Wed, 30 Sep 2020 21:12:47 -0400 (EDT)
X-Inumbo-ID: 6835e8ae-ca98-47e5-b8bd-b803a0fd0424
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=content-type:date:from:message-id
	:mime-version:subject:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; bh=guF5N7HBSXLVEMnupTPI2tYk4lCVe
	rMDQ/jjNCH/my8=; b=Zb1wxXZ9ulR+/fg3dOVzIsgX6blqBiyYxHWtnUirGBFmw
	PUByTS75W+kcPePv7oVMHfO9MDSjhNydQh1tNB1DXvTgbWzIWwNPimrC5RU/THuq
	rgQ0YJUtM2/X/NPNSzkpUsNZPdGoFf6g2NOckPd3UNStrPqUbysZhiMeQZmFj+4T
	izOExNbz64e16QV8DhGCn9jOmPNJMOJfWu0CJg8s9LlGYOiA/zD5s4cmqdy6rVDO
	aIeeTa5qCqTkiNHOI63OEOKr6RXQu6iS6Pp20am/FqqgWuHPu9fjlD/zh2bpxai9
	sBUVoR6UYNH7zxEgMoirUQeWwQLUba/49rlkSgPGw==
X-ME-Sender: <xms:EC11X2Q-UuhWa8V40tDU56TYtMpJzT4qoQ2K4egIaaSC-MyW8HSbww>
    <xme:EC11X7yEOFPwCbtZMsa3NaCIAK_PK0bhkuLqSwlcQQIX07DlJm3rLZJZaD6OGn6KN
    sUvjNhBzZTfbQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrfeefgdeghecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkgggtugesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcuofgr
    rhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvihhsih
    gslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnheptddugfetudev
    udeiveevgfetueejlefggffghffhhfehtdfffeefgfduueegfefhnecukfhppeeluddrie
    egrddujedtrdekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhl
    fhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtoh
    hm
X-ME-Proxy: <xmx:EC11Xz3EAZPYB5_IVFmDSNTRybt0MmIjvjzDKgI2CX2X46Q7LS6K0A>
    <xmx:EC11XyDILkGVLRWwWTlj08ya17LbCdfyZqZM4-_WkmkaoxdnKyTwcw>
    <xmx:EC11X_i5e0J4gxGOKPMIr9cXHEKBZiNGmCch63D8SrFAdfX6grfVLQ>
    <xmx:EC11X6uRHrSaf7f9qNzcM6YKDvx1vww4f28jfXiFGK5ILU7Nxf9Mqw>
Date: Thu, 1 Oct 2020 03:12:45 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Yet another S3 issue in Xen 4.14
Message-ID: <20201001011245.GL3962@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="t5C3/nrmPumNj5sH"
Content-Disposition: inline


--t5C3/nrmPumNj5sH
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Yet another S3 issue in Xen 4.14

Hi,

After patching the previous issue ("x86/S3: Fix Shadow Stack resume
path") I still encounter issues resuming from S3.
Since I had it working on Xen 4.13 on this particular hardware (Thinkpad
P52), I bisected it and got this:

commit 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Dec 11 20:59:19 2019 +0000

    x86/S3: Drop {save,restore}_rest_processor_state() completely

    There is no need to save/restore FS/GS/XCR0 state.  It will be handled
    suitably on the context switch away from the idle.

    The CR4 restoration in restore_rest_processor_state() was actually fighting
    later code in enter_state() which tried to keep CR4.MCE clear until everything
    was set up.  Delete the intermediate restoration, and defer final restoration
    until after MCE is reconfigured.

    Restoring PAT can be done earlier, and ideally before paging is enabled.  By
    moving it into the trampoline during the setup for 64bit, the call can be
    dropped from cpu_init().  The EFI boot path doesn't disable paging, so
    make the adjustment when switching onto Xen's pagetables.

    The only remaining piece of restoration is load_system_tables(), so suspend.c
    can be deleted in its entirety.

    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

The parent of this commit suspends and resumes just fine. With this commit
applied, it (I think) panics; at least I get a reboot after 5s. Sadly, I
don't have a serial console there.

I also tried master and stable-4.14 with this commit reverted (and the
other fix applied), but it doesn't work. In this case I get a hang on
resume (power LED still flashing, but the fan woke up). There are probably
some other dependencies.

Any idea?

PS: This is different from the "Xen crash after S3 suspend - Xen 4.13"
thread, as this one broke with the 4.13 -> 4.14 update.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--t5C3/nrmPumNj5sH
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl91LQwACgkQ24/THMrX
1yyPHQf+KtQAHIHZmQw6/Pz7aQCdVQl7Y9JCjWjdoPCRBake7bYAXTtD75zEjCCJ
EUE4uysnItG6thJYXYjiWSSqEdoGkP+v0DfDoMQtd0l+gGIMtStCgq34V9zrWn4I
MS9wUI8wm+fTE803P5JtwiFZg27N2dZDGib5dfgzOWlbByg+VndriVeWo230rxZL
1gMbvZuBDLwJ62YnwLWq1smkY5y0WzT/Nby10TjQgw2Yd4qxbCAMTW6H6QGP+U3i
nWhPYdokM5JLm49TfQtUBvuSr7vBRY8XI398wxaXjeVlNqFQ+2RPkWCrl/ONVp+Y
lS9KqWCJu7Hav3sdSLYMaG9sfOt4BQ==
=IW0k
-----END PGP SIGNATURE-----

--t5C3/nrmPumNj5sH--


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 01:41:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 01:41:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.989.3334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNnb1-0007Kd-Qo; Thu, 01 Oct 2020 01:41:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 989.3334; Thu, 01 Oct 2020 01:41:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNnb1-0007KW-Mm; Thu, 01 Oct 2020 01:41:23 +0000
Received: by outflank-mailman (input) for mailman id 989;
 Thu, 01 Oct 2020 01:41:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNnb0-0007KR-8y
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 01:41:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f9063a4-35d3-40da-8119-ce8ea0b7575b;
 Thu, 01 Oct 2020 01:41:15 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNnat-0002Rq-Bw; Thu, 01 Oct 2020 01:41:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNnas-0000zW-Qv; Thu, 01 Oct 2020 01:41:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNnas-0008Ri-Ov; Thu, 01 Oct 2020 01:41:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kNnb0-0007KR-8y
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 01:41:22 +0000
X-Inumbo-ID: 9f9063a4-35d3-40da-8119-ce8ea0b7575b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9f9063a4-35d3-40da-8119-ce8ea0b7575b;
	Thu, 01 Oct 2020 01:41:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TFEagftJAk811FCKZx9vAXe7jNsygVPfOfIv2h3pzL0=; b=p3xMlonUSlRYMrHc66D3Sx738G
	3kRJxEaiaeRC1BS/x5cwhf0VC/uJjMeKGUjoyt0GYDEv3nRPTWFXqfTVHdoETlcUK7lFF+5gqbes7
	Y2l8EOGuvowHZVVGGReISxR7+F/cU6IB/T0bt3U3prvkvzpdZpb3jXqnVfMP3ra04EUI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155098-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155098: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-pygrub:guest-localmigrate/x10:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b150cb8f67bf491a49a1cb1c7da151eeacbdbcc9
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 01:41:14 +0000

flight 155098 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155098/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 11 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-pygrub      17 guest-localmigrate/x10   fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-start/freebsd.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                b150cb8f67bf491a49a1cb1c7da151eeacbdbcc9
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   41 days
Failing since        152659  2020-08-21 14:07:39 Z   40 days   73 attempts
Testing same since   155098  2020-09-29 16:14:11 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 33686 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 02:55:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 02:55:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1000.3363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNokV-0004e3-QI; Thu, 01 Oct 2020 02:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1000.3363; Thu, 01 Oct 2020 02:55:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNokV-0004du-Id; Thu, 01 Oct 2020 02:55:15 +0000
Received: by outflank-mailman (input) for mailman id 1000;
 Thu, 01 Oct 2020 02:55:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNokU-0004dp-8Z
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 02:55:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8bdf1546-7efd-4720-8a52-204d8bdc1c27;
 Thu, 01 Oct 2020 02:55:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNokR-0000lO-6m; Thu, 01 Oct 2020 02:55:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNokQ-0004V3-U9; Thu, 01 Oct 2020 02:55:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNokQ-0008IY-Tc; Thu, 01 Oct 2020 02:55:10 +0000
X-Inumbo-ID: 8bdf1546-7efd-4720-8a52-204d8bdc1c27
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ayUTBGej28j7L4nOrSJRRF9o+jYw+/VqVGoTa4fomXc=; b=XTm0oKiCwcAiVW5ZuOlzjfVeot
	hWKSmqYncX9u29IOM3c3aV1th0MrGVmXiTNd/ByUnZiw832j1MHG80kIPOHDg1wMnz5ltVtn9biVB
	gno42nHQghyU2hdqeNPT7nBtQJPlzRiE4i4CJ/KSCPNsIBqr+KOLFMRe0lLzfHriiQlw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155176-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155176: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=11852c7bb070a18c3708b4c001772a23e7d4fc27
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 02:55:10 +0000

flight 155176 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155176/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  11852c7bb070a18c3708b4c001772a23e7d4fc27
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    0 days
Testing same since   155144  2020-09-30 16:01:24 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom Xenstore should set the maximum number of
    grants needed via a call of xengnttab_set_max_grants(), as otherwise
    the number of domains which can be supported will be 128 only (the
    default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory is
    limited to 8TB due to the 32bit value. Adjust the code to use 64bit
    values as input. All callers already use some form of 64bit as input,
    so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to use meson, and the
    ./configure step with meson is more restrictive than it used to
    be: most installation paths must be within the prefix, otherwise we
    get this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    To work around the limitation, we will set the prefix to the same
    one as for the rest of the Xen installation, and set all the other paths.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 03:35:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 03:35:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1010.3391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNpNR-00089s-3a; Thu, 01 Oct 2020 03:35:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1010.3391; Thu, 01 Oct 2020 03:35:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNpNQ-00089l-Vl; Thu, 01 Oct 2020 03:35:28 +0000
Received: by outflank-mailman (input) for mailman id 1010;
 Thu, 01 Oct 2020 03:35:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNpNP-00089D-HK
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 03:35:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ca9628a-4eb1-4f69-941f-45e0c2115883;
 Thu, 01 Oct 2020 03:35:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNpNI-0001a0-Eo; Thu, 01 Oct 2020 03:35:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNpNI-00069J-66; Thu, 01 Oct 2020 03:35:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNpNI-0006LT-5c; Thu, 01 Oct 2020 03:35:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kNpNP-00089D-HK
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 03:35:27 +0000
X-Inumbo-ID: 0ca9628a-4eb1-4f69-941f-45e0c2115883
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0ca9628a-4eb1-4f69-941f-45e0c2115883;
	Thu, 01 Oct 2020 03:35:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KFKy2k5SOocHWFDaKFFT/sj52wpFIq3EP+ZrrwPBgLc=; b=kqV2aHXzP7KRCFHkfWxOhzXhCu
	Z+NMQJs6Kcb+B3DIz6kN8PWyTUF5npCbpDUGiCIzuPjgyPQFE46J0av1NoAZpH4JiuKCldzDOUvXB
	tILmDcURdODAV6iLHfQnH9/t4nKxoz/fhp4h+G+4smoUkF381FKEDJnOYGnLTrIj7mIU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kNpNI-0001a0-Eo; Thu, 01 Oct 2020 03:35:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kNpNI-00069J-66; Thu, 01 Oct 2020 03:35:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kNpNI-0006LT-5c; Thu, 01 Oct 2020 03:35:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155123-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155123: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):starved:nonblocking
    libvirt:build-i386-pvops:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=9c2ba74ad6e8cb5e76fc7954ae5445fc314f71e4
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 03:35:20 +0000

flight 155123 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155123/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) starved n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               starved  n/a
 build-i386-pvops              2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              9c2ba74ad6e8cb5e76fc7954ae5445fc314f71e4
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   82 days
Failing since        151818  2020-07-11 04:18:52 Z   81 days   76 attempts
Testing same since   155123  2020-09-30 04:19:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Martin Kletzander <mkletzan@redhat.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            starved 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  starved 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      starved 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 starved 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 16946 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 04:03:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 04:03:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1016.3410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNpoM-0002Oq-BE; Thu, 01 Oct 2020 04:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1016.3410; Thu, 01 Oct 2020 04:03:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNpoM-0002Oj-8J; Thu, 01 Oct 2020 04:03:18 +0000
Received: by outflank-mailman (input) for mailman id 1016;
 Thu, 01 Oct 2020 04:03:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x1nv=DI=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1kNpoL-0002Oc-7P
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 04:03:17 +0000
Received: from mail-oi1-x22c.google.com (unknown [2607:f8b0:4864:20::22c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f533039-30e8-4bed-9911-3b2e75da8fe0;
 Thu, 01 Oct 2020 04:03:15 +0000 (UTC)
Received: by mail-oi1-x22c.google.com with SMTP id 26so4174902ois.5
 for <xen-devel@lists.xenproject.org>; Wed, 30 Sep 2020 21:03:15 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=x1nv=DI=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
	id 1kNpoL-0002Oc-7P
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 04:03:17 +0000
X-Inumbo-ID: 5f533039-30e8-4bed-9911-3b2e75da8fe0
Received: from mail-oi1-x22c.google.com (unknown [2607:f8b0:4864:20::22c])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5f533039-30e8-4bed-9911-3b2e75da8fe0;
	Thu, 01 Oct 2020 04:03:15 +0000 (UTC)
Received: by mail-oi1-x22c.google.com with SMTP id 26so4174902ois.5
        for <xen-devel@lists.xenproject.org>; Wed, 30 Sep 2020 21:03:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to:cc
         :content-transfer-encoding;
        bh=Y+/JvufSK3ZxV7jymrqViOSalYUL22yHRhepeIvQBGo=;
        b=OeO/StWSE/Uf2wczzfj71mYtKhB/cZGQGDy3DLfA+BHAwRg8RCzcTUViHffiLz0U6g
         /2wVuthgtyO2c4XjIR46c5AbfPPCKdW6XiqR6FS6PdDjukTHXqlMdzkWyh/ed03uW3Ba
         cMgejNXfQx7qgVeqpuzC/fJ5sQiw0LykTesaeDNmX96o7410I8CR55bMMvZMUlF063Lv
         ibXmDuOxDZEPqwKfHzsSzVfsdtt0dCTduOcbWavohiKxWUD1eY5AwMHlqAb5ZLMebWBi
         S4vPxzIVpaY3exywVXIV9JP/spiocjONN9ErO+C/cQiRASjf4mtrqhr7vKUO7eDmFxl2
         EZPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc
         :content-transfer-encoding;
        bh=Y+/JvufSK3ZxV7jymrqViOSalYUL22yHRhepeIvQBGo=;
        b=oGcqFlqDJ2Kp8quf1way4+YbxA7CFoP9JO/8cy942kZcy320n9RaICU9xkQ4lRU63k
         +qjAd9S/vERA8vi0qQibJ/mQvsIY5ALnYOpFd1kpSoTflt2fm/M2wSmtCfBNd40+v8Ry
         bXzzkGBnqVTyGXTQJTnzUQrD07mIZZgO712vSRofCfmIjtR/C+dxgsbCasoN75grSqA8
         u16WjA0R6sOXo604M2xOIDVCip7Dslo2/ZUq9IRqoaIqd4f/n8mx6FV0IRW9CJbRD5n1
         jqfKEyouXqvHGsJ4i2b0ex4RGlPei5crKeeDobylMR2ssAmq8N0/6QRoJIdkMSVJU4p2
         LmNw==
X-Gm-Message-State: AOAM532oO2LJJl73RhGNGuF9nXCquCb4fynZZIoA9dyloVgOWDzGMdFK
	J5nexw4mRYtFIhpyn/svV2MQGrOb+G/tU7x8rFyHOfwug3A=
X-Google-Smtp-Source: ABdhPJxXa6s/BzXJWyCtpG7WaoF40ixkeUdXoUBrrHCKpuUoH0Gg45gBUxXPk5NLMVq9rEFcK56kKoWjso1Hw/qfqfA=
X-Received: by 2002:aca:e08b:: with SMTP id x133mr711222oig.20.1601524994729;
 Wed, 30 Sep 2020 21:03:14 -0700 (PDT)
MIME-Version: 1.0
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Wed, 30 Sep 2020 21:03:03 -0700
Message-ID: <CACMJ4GaWcF74zE5qt31MDvcX1mx1HSW7eaOXpfpWJ2KzQZOg=Q@mail.gmail.com>
Subject: VirtIO & Argo: a Linux VirtIO transport driver on Xen
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Rich Persaud <persaur@gmail.com>, Daniel Smith <dpsmith@apertussolutions.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello

Following up on a topic introduced in last month's community call,
here are some notes on combining existing Linux guest virtio drivers
with Argo inter-VM communication on Xen.  If feasible, this would
combine the compatibility of tested Linux drivers with the mandatory
access control properties of Argo, which could help meet functional
safety and security requirements for Xen on Arm and x86.  This
development work is not resourced, but the initial investigation has
been encouraging.  We are sharing here for comment, aiming to work
with those in the Xen and wider Linux communities who may have similar
requirements.

Christopher
- 30th September 2020

---
This document describes a proposal for development of a new Linux device
driver to introduce Hypervisor-Mediated data eXchange (HMX) into the
data transport of the popular VirtIO suite of Linux virtual device
drivers, by using Argo with Xen. This will provide a way to use VirtIO
device drivers within Xen guest VMs with strong isolation properties.

This work has been developed by Christopher Clark, Daniel Smith and Rich
Persaud, with Eric Chanudet and Nick Krasnoff.
Christopher is the primary author of this version of this document.

----
Contents:

= Context: Introduction to VirtIO
== VirtIO Architecture Overview
=== VirtIO front-end driver classes
=== VirtIO transport drivers
= VirtIO with Argo transport
== Using VirtIO with the new Argo transport driver
=== Host platform software
==== QEMU
==== Linux Argo driver
==== Toolstack
=== Functionality
=== Mechanisms
=== From last discussion
= References

----
= Context: Introduction to VirtIO

VirtIO is a virtual device driver standard developed originally for the
Linux kernel, drawing upon the lessons learned during the development of
paravirtualized device drivers for Xen, KVM and other hypervisors. It
aimed to become a “de-facto standard for virtual I/O devices”, and to
some extent has succeeded in doing so. VirtIO is now widely implemented
in both software and hardware; it is commonly the first choice for
virtual driver implementation in new virtualization technologies, and
the specification is now maintained under the governance of the OASIS
open standards organization.

VirtIO’s system architecture abstracts device-specific and
device-class-specific interfaces and functionality from the transport
mechanisms that move data and issue notifications within the kernel and
across virtual machine boundaries. It is attractive to developers
seeking to implement new drivers for a virtual device because VirtIO
provides documented specified interfaces with a well-designed, efficient
and maintained common core implementation that can significantly reduce
the amount of work required to develop a new virtual device driver.

VirtIO follows the Xen PV driver model of split-device drivers, where a
front-end device driver runs within the guest virtual machine to provide
the device abstraction to the guest kernel, and a back-end driver runs
outside the VM, in platform-provided software - eg. within a QEMU device
emulator - to communicate with the front-end driver and provide mediated
access to physical device resources.

A critical property of the current common VirtIO implementations is that
their use of shared memory for data transport prevents enforcement of
strong isolation between the front-end and back-end virtual machines,
since the back-end VirtIO device driver is required to be able to obtain
direct access to the memory owned by the virtual machine running the
front-end VirtIO device driver; i.e. the VM hosting the back-end driver
has significant privilege over any VM running a front-end driver.

Xen’s PV drivers use the grant-table mechanism to confine shared memory
access to specific pages, and permission to access those pages is
explicitly granted by the driver in the VM that owns the memory. Argo
goes further and achieves stronger isolation, since it requires no
memory sharing between communicating virtual machines.

In contrast to Xen’s driver transport options, the current
implementations of VirtIO transports pass memory addresses directly
across the VM boundary, under the assumption of shared memory access,
and thereby require the back-end to have sufficient privilege to
directly access any memory that the front-end driver refers to. This has
presented a challenge for the suitability of using VirtIO drivers for
Xen deployments where isolation is a requirement. Fortunately, a path
exists for integration of the Argo transport into VirtIO which can
address this and enable use of the existing body of VirtIO device
drivers with isolation maintained and mandatory access control enforced:
consequently this system architecture is significantly differentiated
from other options for virtual devices.

== VirtIO Architecture Overview

In addition to the front-end / back-end split device driver model, there
are further standard elements of VirtIO system architecture.

For detailed reference, VirtIO is described in the “VirtIO 1.1
specification” OASIS standards document. [1]

The front-end device driver architecture imposes tighter constraints on
the implementation direction for this project, since it is already
implemented in the wide body of existing VirtIO device drivers whose
use we aim to enable.

The back-end software is implemented in platform-provided software -
i.e. the hypervisor, toolstack, a platform-provided VM or a device
emulator, etc. - where we have more flexibility in implementation
options, and the interface is determined by both the host virtualization
platform and the new transport driver that we intend to create.

=== VirtIO front-end driver classes

There are multiple classes of VirtIO device driver within the Linux
kernel; these include the general class of front-end VirtIO device
drivers, which provide function-specific logic to implement virtual
devices - e.g. a virtual block device driver for storage - and the
_transport_ VirtIO device drivers, which are responsible for device
discovery with the platform and provision of data transport across the
VM boundary between the front-end drivers and the corresponding remote
back-end driver running outside the virtual machine.

=== VirtIO transport drivers

There are several implementations of VirtIO transport device drivers in
Linux, each implementing a common interface within the kernel; they are
designed to be interchangeable and compatible with the VirtIO front-end
drivers: so the same front-end driver can use different transports on
different systems. Transports can coexist: different virtual devices can
be using different transports within the same virtual machine at the
same time.

= VirtIO with Argo transport

Enabling VirtIO to use the Argo interdomain communication mechanism for
data transport across the VM boundary will address critical requirements:

* Preserve strong isolation between the two ends of the split device driver

    * i.e. remove the need for any shared memory between domains or any
      privilege to map the memory belonging to the other domain

* Enable enforcement of granular mandatory access control policy over
  the communicating endpoints

    * i.e. use Xen’s existing XSM/Flask control over Argo communication,
      and leverage any new Argo MAC capabilities as they are introduced
      to govern VirtIO devices

The proposal is to implement a new VirtIO transport driver for Linux
that utilizes Argo. It will be used within guest virtual machines, and
be source-compatible with the existing VirtIO front-end device drivers
in Linux.

It will be paired with a corresponding new VirtIO-Argo back-end to run
within the QEMU device emulator, in the same fashion as the existing
VirtIO transport back-ends, and the back-end will use a non-VirtIO Linux
driver to access and utilize Argo.

Open Source VirtIO drivers for Windows are available, enabling Windows
guest VMs to run with virtual devices provided by the VirtIO back-ends
in QEMU. The Windows VirtIO device drivers share the same transport
abstraction and separate driver structure, so an Argo transport driver
can also be developed for Windows, source-compatible with the Windows
VirtIO device drivers.

== Using VirtIO with the new Argo transport driver

VirtIO device drivers are included in the mainline Linux kernel and
enabled in most modern Linux distributions. Adding the new Linux-Argo
guest driver to the upstream Linux kernel will enable seamless
deployment of modern Linux guest VMs on VirtIO-Argo hypervisor
platforms.

=== Host platform software

==== QEMU

The QEMU device emulator implements the back-end side of the VirtIO
transport that the front-end connects to. QEMU 5.0 implements both the
common virtio-pci and virtio-mmio transports.

==== Linux Argo driver

For QEMU to be able to use Argo, it will need an Argo Linux kernel
device driver, with similar functionality to the existing Argo Linux
driver.

==== Toolstack

The toolstack of the hypervisor is responsible for configuring and
establishing the back-end devices according to the virtual machine
configuration. It will need to be aware of the VirtIO-Argo transport
and initialize the back-ends for each VM with a suitable configuration.
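
Purely for illustration, the kind of per-device information the
toolstack would need to convey could look like the following sketch.
The key names and syntax here are invented for this example (there is
no existing xl configuration syntax for VirtIO-Argo), and the image
path is a placeholder:

```
# Hypothetical per-VM configuration fragment (invented syntax, not xl):
# one virtual disk served over the VirtIO-Argo transport, naming the
# backing image and the domain hosting the QEMU back-end.
virtio_argo_disk = [ "backend=0, image=/var/lib/images/guest.img, vdev=vda" ]
```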

Alternatively, in systems that do not run with a toolstack, the DomB
launch domain (when available) can perform any necessary initialization.

=== Functionality

Adding Argo as a transport for VirtIO will retain Argo's MAC policy
checks on all data movement, via Xen's XSM/Flask, while allowing use of
the VirtIO virtual device drivers and device implementations.

With the VirtIO virtual device drivers using the VirtIO-Argo transport
driver, the existing Xen PV drivers, which use the grant tables and
event channels, are no longer required. Replacing them removes shared
memory from the data path of the device drivers in use and makes the
virtual device driver data path HMX-compliant.

In addition, as new virtual device classes in Linux gain VirtIO
drivers, these should transparently be enabled with Mandatory Access
Control via the existing virtio-argo transport driver, potentially
without further effort required; note, though, that for some cases
(eg. graphics) optimizing performance characteristics may require
additional effort.

=== Mechanisms

VirtIO transport drivers are responsible for virtual device enumeration
and for triggering driver initialization for those devices. We propose
to use ACPI tables to surface the device data to the guest for the new
transport driver to parse for this purpose. Device tree has been raised
as an option for this on Arm and will be evaluated.

The VirtIO device drivers will retain use of the virtqueues, but the
descriptors passed between domains by the new transport driver will not
contain guest physical addresses; instead they will reference data that
has been exchanged via Argo. Each transmission via XEN_ARGO_OP_sendv is
subject to MAC checks by XSM.

=== From last discussion

* Design of how virtual devices are surfaced to the guest VM for
  enumeration by the VirtIO-Argo transport driver

    * Current plan, from the initial x86 development focus, is to
      populate ACPI tables
    * Interest in using Device Tree for static configurations on Arm
      was raised on last month's Xen Community Call.
        * It is being considered as development of DomB progresses.
    * It does not need to be dom0 that populates this for the guest;
      just some domain with suitable permissions to do so


= References
1. The VirtIO 1.1 specification, OASIS
https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html

2. virtio: Towards a De-Facto Standard For Virtual I/O Devices
Rusty Russell, IBM OzLabs; ACM SIGOPS Operating Systems Review, 2008.
https://ozlabs.org/~rusty/virtio-spec/virtio-paper.pdf

3. Xen and the Art of Virtualization
Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris,
Alex Ho, Rolf Neugebauer, Ian Pratt, Andrew Warfield; ACM Symposium on
Operating System Principles, 2003
https://www.cl.cam.ac.uk/research/srg/netos/papers/2003-xensosp.pdf

4. Project ACRN: 1.0 chose to implement virtio
https://projectacrn.org/acrn-project-releases-version-1-0/

5. Solo5 unikernel runtime: chose to implement virtio
https://mirage.io/blog/introducing-solo5

6. Windows VirtIO drivers
https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers
https://github.com/virtio-win/kvm-guest-drivers-windows


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 04:03:40 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155121-mainreport@xen.org>
Subject: [ovmf test] 155121: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d8ab884fe9b4dd148980bf0d8673187f8fb25887
X-Osstest-Versions-That:
    ovmf=2793a49565488e419d10ba029c838f4b7efdba38
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 04:03:37 +0000

flight 155121 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155121/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d8ab884fe9b4dd148980bf0d8673187f8fb25887
baseline version:
 ovmf                 2793a49565488e419d10ba029c838f4b7efdba38

Last test of basis   155045  2020-09-28 21:39:39 Z    2 days
Testing same since   155121  2020-09-30 03:52:34 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Jian J Wang <jian.j.wang@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Nikita <sh1r4s3@mail.si-head.nl>
  Nikita Ermakov <sh1r4s3@mail.si-head.nl>
  Patrick Henz <patrick.henz@hpe.com>
  Vladimir Olovyannikov <vladimir.olovyannikov@broadcom.com>
  Wang, Jian J <jian.j.wang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2793a49565..d8ab884fe9  d8ab884fe9b4dd148980bf0d8673187f8fb25887 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 04:22:33 2020
From: Zi Yan <ziy@nvidia.com>
To: Thomas Gleixner <tglx@linutronix.de>
CC: Wei Liu <wei.liu@kernel.org>, Joerg Roedel <jroedel@suse.de>,
	<x86@kernel.org>, <iommu@lists.linux-foundation.org>,
	<linux-hyperv@vger.kernel.org>, <linux-pci@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>
Subject: Boot crash due to "x86/msi: Consolidate MSI allocation"
Date: Wed, 30 Sep 2020 21:29:57 -0400
X-Mailer: MailMate (1.13.2r5673)
Message-ID: <A838FF2B-11FC-42B9-87D7-A76CF46E0575@nvidia.com>


Hi Thomas,

I am running linux-next on my Dell R630 and the system crashed at boot
time. I bisected linux-next and got to your commit:

commit 3b9c1d377d67072d1d8a2373b4969103cca00dab
Author: Thomas Gleixner <tglx@linutronix.de>
Date:   Wed Aug 26 13:16:46 2020 +0200

    x86/msi: Consolidate MSI allocation

    Convert the interrupt remap drivers to retrieve the pci device from the msi
    descriptor and use info::hwirq.

    This is the first step to prepare x86 for using the generic MSI domain ops.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Wei Liu <wei.liu@kernel.org>
    Acked-by: Joerg Roedel <jroedel@suse.de>
    Link: https://lore.kernel.org/r/20200826112332.466405395@linutronix.de

The crash log is below and my .config is attached.

[   11.742181] RAPL PMU: hw unit of domain dram 2^-16 Joules
[   11.768742] check: Scanning for low memory corruption every 60 seconds
[   11.781653] Initialise system trusted keyrings
[   11.786858] workingset: timestamp_bits=40 max_order=26 bucket_order=0
[   11.800237] zbud: loaded
[   11.807003] fuse: init (API version 7.31)
[   11.812713] Key type asymmetric registered
[   11.817286] Asymmetric key parser 'x509' registered
[   11.822750] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 247)
[   11.831235] io scheduler mq-deadline registered
[   11.837559] BUG: kernel NULL pointer dereference, address: 0000000000000018
[   11.840905] #PF: supervisor read access in kernel mode
[   11.840905] #PF: error_code(0x0000) - not-present page
[   11.840905] PGD 0 P4D 0
[   11.840905] Oops: 0000 [#1] SMP PTI
[   11.840905] CPU: 0 PID: 5 Comm: kworker/0:0 Not tainted 5.9.0-rc5-1gb-thp+ #57
[   11.840905] Hardware name: Dell Inc. PowerEdge R630/02C2CP, BIOS 2.11.0 11/02/2019
[   11.840905] Workqueue: events work_for_cpu_fn
[   11.840905] RIP: 0010:msi_desc_to_pci_dev+0x5/0x10
[   11.840905] Code: c7 98 79 35 a3 e8 89 89 b5 ff e9 bb eb ff ff cc cc cc cc cc cc 0f 1f 44 00 00 8b 05 65 c3 14 01 c3 0f 1f 40 00 0f 1f 44 00 00 <48> 8b 47 18 48 2d b0 00 00 00 c3 0f 1f 44 00 00 48 8b 47 18 48 8b
[   11.840905] RSP: 0000:ffffbd1bc00cfc88 EFLAGS: 00010297
[   11.840905] RAX: 0000000000000000 RBX: ffff972073e10000 RCX: 0000000000000000
[   11.840905] RDX: ffffbd1bc00cfc98 RSI: 0000000000000000 RDI: 0000000000000000
[   11.840905] RBP: ffff9720778d7080 R08: ffff972073e10338 R09: ffff972073e10338
[   11.840905] R10: 000000000000000d R11: 0000000000000000 R12: 0000000000000002
[   11.840905] R13: 0000000000000000 R14: ffff972073e100b0 R15: ffff972073e10000
[   11.840905] FS:  0000000000000000(0000) GS:ffff97207f800000(0000) knlGS:0000000000000000
[   11.840905] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   11.840905] CR2: 0000000000000018 CR3: 0000001989612001 CR4: 00000000003706f0
[   11.840905] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   11.840905] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   11.840905] Call Trace:
[   11.840905]  intel_get_irq_domain+0x24/0xb0
[   11.840905]  native_setup_msi_irqs+0x3b/0x90
[   11.840905]  __pci_enable_msi_range+0x314/0x500
[   11.840905]  pci_alloc_irq_vectors_affinity+0xd2/0x130
[   11.840905]  pcie_port_device_register+0x135/0x550
[   11.840905]  ? dequeue_entity+0xa4/0x430
[   11.840905]  ? newidle_balance+0x265/0x3e0
[   11.840905]  pcie_portdrv_probe+0x2d/0xb0
[   11.840905]  local_pci_probe+0x42/0x80
[   11.840905]  ? __schedule+0x29d/0x8b0
[   11.840905]  work_for_cpu_fn+0x16/0x20
[   11.840905]  process_one_work+0x1ab/0x3a0
[   11.840905]  worker_thread+0x1df/0x3c0
[   11.840905]  ? rescuer_thread+0x3b0/0x3b0
[   11.840905]  kthread+0x134/0x150
[   11.840905]  ? __kthread_bind_mask+0x60/0x60
[   11.840905]  ret_from_fork+0x22/0x30
[   11.840905] Modules linked in:
[   11.840905] CR2: 0000000000000018
[   11.840905] ---[ end trace d9574f52cc5ec884 ]---
[   11.840905] RIP: 0010:msi_desc_to_pci_dev+0x5/0x10
[   11.840905] Code: c7 98 79 35 a3 e8 89 89 b5 ff e9 bb eb ff ff cc cc cc cc cc cc 0f 1f 44 00 00 8b 05 65 c3 14 01 c3 0f 1f 40 00 0f 1f 44 00 00 <48> 8b 47 18 48 2d b0 00 00 00 c3 0f 1f 44 00 00 48 8b 47 18 48 8b
[   11.840905] RSP: 0000:ffffbd1bc00cfc88 EFLAGS: 00010297
[   11.840905] RAX: 0000000000000000 RBX: ffff972073e10000 RCX: 0000000000000000
[   11.840905] RDX: ffffbd1bc00cfc98 RSI: 0000000000000000 RDI: 0000000000000000
[   11.840905] RBP: ffff9720778d7080 R08: ffff972073e10338 R09: ffff972073e10338
[   11.840905] R10: 000000000000000d R11: 0000000000000000 R12: 0000000000000002
[   11.840905] R13: 0000000000000000 R14: ffff972073e100b0 R15: ffff972073e10000
[   11.840905] FS:  0000000000000000(0000) GS:ffff97207f800000(0000) knlGS:0000000000000000
[   11.840905] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   11.840905] CR2: 0000000000000018 CR3: 0000001989612001 CR4: 00000000003706f0
[   11.840905] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   11.840905] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[   11.840905] Kernel panic - not syncing: Fatal exception
[   11.840905] Kernel Offset: 0x21000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[   11.840905] ---[ end Kernel panic - not syncing: Fatal exception ]---


--
Best Regards,
Yan Zi

Content-Disposition: attachment; filename=.config

#
# Automatically generated file; DO NOT EDIT.
# Linux/x86 5.9.0-rc5 Kernel Configuration
#
CONFIG_CC_VERSION_TEXT=3D"gcc (Debian 10.2.0-9) 10.2.0"
CONFIG_CC_IS_GCC=3Dy
CONFIG_GCC_VERSION=3D100200
CONFIG_LD_VERSION=3D235010000
CONFIG_CLANG_VERSION=3D0
CONFIG_CC_CAN_LINK=3Dy
CONFIG_CC_CAN_LINK_STATIC=3Dy
CONFIG_CC_HAS_ASM_GOTO=3Dy
CONFIG_CC_HAS_ASM_INLINE=3Dy
CONFIG_IRQ_WORK=3Dy
CONFIG_BUILDTIME_TABLE_SORT=3Dy
CONFIG_THREAD_INFO_IN_TASK=3Dy

#
# General setup
#
CONFIG_INIT_ENV_ARG_LIMIT=3D32
# CONFIG_COMPILE_TEST is not set
CONFIG_LOCALVERSION=3D"-1gb-thp"
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_BUILD_SALT=3D""
CONFIG_HAVE_KERNEL_GZIP=3Dy
CONFIG_HAVE_KERNEL_BZIP2=3Dy
CONFIG_HAVE_KERNEL_LZMA=3Dy
CONFIG_HAVE_KERNEL_XZ=3Dy
CONFIG_HAVE_KERNEL_LZO=3Dy
CONFIG_HAVE_KERNEL_LZ4=3Dy
CONFIG_HAVE_KERNEL_ZSTD=3Dy
# CONFIG_KERNEL_GZIP is not set
# CONFIG_KERNEL_BZIP2 is not set
# CONFIG_KERNEL_LZMA is not set
CONFIG_KERNEL_XZ=3Dy
# CONFIG_KERNEL_LZO is not set
# CONFIG_KERNEL_LZ4 is not set
# CONFIG_KERNEL_ZSTD is not set
CONFIG_DEFAULT_INIT=3D""
CONFIG_DEFAULT_HOSTNAME=3D"(none)"
CONFIG_SWAP=3Dy
CONFIG_SYSVIPC=3Dy
CONFIG_SYSVIPC_SYSCTL=3Dy
CONFIG_POSIX_MQUEUE=3Dy
CONFIG_POSIX_MQUEUE_SYSCTL=3Dy
# CONFIG_WATCH_QUEUE is not set
CONFIG_CROSS_MEMORY_ATTACH=3Dy
CONFIG_USELIB=3Dy
CONFIG_AUDIT=3Dy
CONFIG_HAVE_ARCH_AUDITSYSCALL=3Dy
CONFIG_AUDITSYSCALL=3Dy

#
# IRQ subsystem
#
CONFIG_GENERIC_IRQ_PROBE=3Dy
CONFIG_GENERIC_IRQ_SHOW=3Dy
CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK=3Dy
CONFIG_GENERIC_PENDING_IRQ=3Dy
CONFIG_GENERIC_IRQ_MIGRATION=3Dy
CONFIG_HARDIRQS_SW_RESEND=3Dy
CONFIG_IRQ_DOMAIN=3Dy
CONFIG_IRQ_DOMAIN_HIERARCHY=3Dy
CONFIG_GENERIC_MSI_IRQ=3Dy
CONFIG_GENERIC_MSI_IRQ_DOMAIN=3Dy
CONFIG_IRQ_MSI_IOMMU=3Dy
CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR=3Dy
CONFIG_GENERIC_IRQ_RESERVATION_MODE=3Dy
CONFIG_IRQ_FORCED_THREADING=3Dy
CONFIG_SPARSE_IRQ=3Dy
# CONFIG_GENERIC_IRQ_DEBUGFS is not set
# end of IRQ subsystem

CONFIG_CLOCKSOURCE_WATCHDOG=3Dy
CONFIG_ARCH_CLOCKSOURCE_INIT=3Dy
CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE=3Dy
CONFIG_GENERIC_TIME_VSYSCALL=3Dy
CONFIG_GENERIC_CLOCKEVENTS=3Dy
CONFIG_GENERIC_CLOCKEVENTS_BROADCAST=3Dy
CONFIG_GENERIC_CLOCKEVENTS_MIN_ADJUST=3Dy
CONFIG_GENERIC_CMOS_UPDATE=3Dy
CONFIG_HAVE_POSIX_CPU_TIMERS_TASK_WORK=3Dy
CONFIG_POSIX_CPU_TIMERS_TASK_WORK=3Dy

#
# Timers subsystem
#
CONFIG_TICK_ONESHOT=3Dy
CONFIG_NO_HZ_COMMON=3Dy
# CONFIG_HZ_PERIODIC is not set
CONFIG_NO_HZ_IDLE=3Dy
# CONFIG_NO_HZ_FULL is not set
CONFIG_NO_HZ=3Dy
CONFIG_HIGH_RES_TIMERS=3Dy
# end of Timers subsystem

# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=3Dy
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_COUNT=3Dy

#
# CPU/Task time and stats accounting
#
CONFIG_TICK_CPU_ACCOUNTING=3Dy
# CONFIG_VIRT_CPU_ACCOUNTING_GEN is not set
# CONFIG_IRQ_TIME_ACCOUNTING is not set
CONFIG_BSD_PROCESS_ACCT=3Dy
CONFIG_BSD_PROCESS_ACCT_V3=3Dy
CONFIG_TASKSTATS=3Dy
CONFIG_TASK_DELAY_ACCT=3Dy
CONFIG_TASK_XACCT=3Dy
CONFIG_TASK_IO_ACCOUNTING=3Dy
# CONFIG_PSI is not set
# end of CPU/Task time and stats accounting

CONFIG_CPU_ISOLATION=3Dy

#
# RCU Subsystem
#
CONFIG_TREE_RCU=3Dy
# CONFIG_RCU_EXPERT is not set
CONFIG_SRCU=3Dy
CONFIG_TREE_SRCU=3Dy
CONFIG_TASKS_RCU_GENERIC=3Dy
CONFIG_TASKS_RUDE_RCU=3Dy
CONFIG_RCU_STALL_COMMON=3Dy
CONFIG_RCU_NEED_SEGCBLIST=3Dy
# end of RCU Subsystem

CONFIG_BUILD_BIN2C=3Dy
CONFIG_IKCONFIG=3Dm
# CONFIG_IKCONFIG_PROC is not set
# CONFIG_IKHEADERS is not set
CONFIG_LOG_BUF_SHIFT=3D18
CONFIG_LOG_CPU_MAX_BUF_SHIFT=3D12
CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT=3D13
CONFIG_HAVE_UNSTABLE_SCHED_CLOCK=3Dy

#
# Scheduler features
#
# CONFIG_UCLAMP_TASK is not set
# end of Scheduler features

CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=3Dy
CONFIG_ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH=3Dy
CONFIG_CC_HAS_INT128=3Dy
CONFIG_ARCH_SUPPORTS_INT128=3Dy
CONFIG_NUMA_BALANCING=3Dy
CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=3Dy
CONFIG_CGROUPS=3Dy
CONFIG_PAGE_COUNTER=3Dy
CONFIG_MEMCG=3Dy
CONFIG_MEMCG_SWAP=3Dy
CONFIG_MEMCG_KMEM=3Dy
CONFIG_BLK_CGROUP=3Dy
CONFIG_CGROUP_WRITEBACK=3Dy
CONFIG_CGROUP_SCHED=3Dy
CONFIG_FAIR_GROUP_SCHED=3Dy
CONFIG_CFS_BANDWIDTH=3Dy
# CONFIG_RT_GROUP_SCHED is not set
# CONFIG_CGROUP_PIDS is not set
# CONFIG_CGROUP_RDMA is not set
CONFIG_CGROUP_FREEZER=3Dy
CONFIG_CGROUP_HUGETLB=3Dy
CONFIG_CPUSETS=3Dy
CONFIG_PROC_PID_CPUSET=3Dy
CONFIG_CGROUP_DEVICE=3Dy
CONFIG_CGROUP_CPUACCT=3Dy
CONFIG_CGROUP_PERF=3Dy
# CONFIG_CGROUP_BPF is not set
# CONFIG_CGROUP_DEBUG is not set
CONFIG_SOCK_CGROUP_DATA=3Dy
CONFIG_NAMESPACES=3Dy
CONFIG_UTS_NS=3Dy
CONFIG_TIME_NS=3Dy
CONFIG_IPC_NS=3Dy
CONFIG_USER_NS=3Dy
CONFIG_PID_NS=3Dy
CONFIG_NET_NS=3Dy
CONFIG_CHECKPOINT_RESTORE=3Dy
CONFIG_SCHED_AUTOGROUP=3Dy
# CONFIG_SYSFS_DEPRECATED is not set
CONFIG_RELAY=3Dy
CONFIG_BLK_DEV_INITRD=3Dy
CONFIG_INITRAMFS_SOURCE=3D""
CONFIG_RD_GZIP=3Dy
CONFIG_RD_BZIP2=3Dy
CONFIG_RD_LZMA=3Dy
CONFIG_RD_XZ=3Dy
CONFIG_RD_LZO=3Dy
CONFIG_RD_LZ4=3Dy
CONFIG_RD_ZSTD=3Dy
# CONFIG_BOOT_CONFIG is not set
CONFIG_CC_OPTIMIZE_FOR_PERFORMANCE=3Dy
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_SYSCTL=3Dy
CONFIG_HAVE_UID16=3Dy
CONFIG_SYSCTL_EXCEPTION_TRACE=3Dy
CONFIG_HAVE_PCSPKR_PLATFORM=3Dy
CONFIG_BPF=3Dy
CONFIG_EXPERT=3Dy
CONFIG_UID16=3Dy
CONFIG_MULTIUSER=3Dy
CONFIG_SGETMASK_SYSCALL=3Dy
CONFIG_SYSFS_SYSCALL=3Dy
CONFIG_FHANDLE=3Dy
CONFIG_POSIX_TIMERS=3Dy
CONFIG_PRINTK=3Dy
CONFIG_PRINTK_NMI=3Dy
CONFIG_BUG=3Dy
CONFIG_ELF_CORE=3Dy
CONFIG_PCSPKR_PLATFORM=3Dy
CONFIG_BASE_FULL=3Dy
CONFIG_FUTEX=3Dy
CONFIG_FUTEX_PI=3Dy
CONFIG_EPOLL=3Dy
CONFIG_SIGNALFD=3Dy
CONFIG_TIMERFD=3Dy
CONFIG_EVENTFD=3Dy
CONFIG_SHMEM=3Dy
CONFIG_AIO=3Dy
CONFIG_IO_URING=3Dy
CONFIG_ADVISE_SYSCALLS=3Dy
CONFIG_MEMBARRIER=3Dy
CONFIG_KALLSYMS=3Dy
CONFIG_KALLSYMS_ALL=3Dy
CONFIG_KALLSYMS_ABSOLUTE_PERCPU=3Dy
CONFIG_KALLSYMS_BASE_RELATIVE=3Dy
# CONFIG_BPF_LSM is not set
CONFIG_BPF_SYSCALL=3Dy
CONFIG_ARCH_WANT_DEFAULT_BPF_JIT=3Dy
# CONFIG_BPF_JIT_ALWAYS_ON is not set
CONFIG_BPF_JIT_DEFAULT_ON=3Dy
# CONFIG_USERFAULTFD is not set
CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE=3Dy
CONFIG_RSEQ=3Dy
# CONFIG_DEBUG_RSEQ is not set
# CONFIG_EMBEDDED is not set
CONFIG_HAVE_PERF_EVENTS=3Dy
# CONFIG_PC104 is not set

#
# Kernel Performance Events And Counters
#
CONFIG_PERF_EVENTS=3Dy
# CONFIG_DEBUG_PERF_USE_VMALLOC is not set
# end of Kernel Performance Events And Counters

CONFIG_VM_EVENT_COUNTERS=3Dy
CONFIG_SLUB_DEBUG=3Dy
# CONFIG_SLUB_MEMCG_SYSFS_ON is not set
# CONFIG_COMPAT_BRK is not set
# CONFIG_SLAB is not set
CONFIG_SLUB=3Dy
# CONFIG_SLOB is not set
CONFIG_SLAB_MERGE_DEFAULT=3Dy
# CONFIG_SLAB_FREELIST_RANDOM is not set
# CONFIG_SLAB_FREELIST_HARDENED is not set
# CONFIG_SHUFFLE_PAGE_ALLOCATOR is not set
CONFIG_SLUB_CPU_PARTIAL=3Dy
CONFIG_SYSTEM_DATA_VERIFICATION=3Dy
CONFIG_PROFILING=3Dy
CONFIG_TRACEPOINTS=3Dy
# end of General setup

CONFIG_64BIT=3Dy
CONFIG_X86_64=3Dy
CONFIG_X86=3Dy
CONFIG_INSTRUCTION_DECODER=3Dy
CONFIG_OUTPUT_FORMAT=3D"elf64-x86-64"
CONFIG_LOCKDEP_SUPPORT=3Dy
CONFIG_STACKTRACE_SUPPORT=3Dy
CONFIG_MMU=3Dy
CONFIG_ARCH_MMAP_RND_BITS_MIN=3D28
CONFIG_ARCH_MMAP_RND_BITS_MAX=3D32
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MIN=3D8
CONFIG_ARCH_MMAP_RND_COMPAT_BITS_MAX=3D16
CONFIG_GENERIC_ISA_DMA=3Dy
CONFIG_GENERIC_BUG=3Dy
CONFIG_GENERIC_BUG_RELATIVE_POINTERS=3Dy
CONFIG_ARCH_MAY_HAVE_PC_FDC=3Dy
CONFIG_GENERIC_CALIBRATE_DELAY=3Dy
CONFIG_ARCH_HAS_CPU_RELAX=3Dy
CONFIG_ARCH_HAS_CACHE_LINE_SIZE=3Dy
CONFIG_ARCH_HAS_FILTER_PGPROT=3Dy
CONFIG_HAVE_SETUP_PER_CPU_AREA=3Dy
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=3Dy
CONFIG_NEED_PER_CPU_PAGE_FIRST_CHUNK=3Dy
CONFIG_ARCH_HIBERNATION_POSSIBLE=3Dy
CONFIG_ARCH_SUSPEND_POSSIBLE=3Dy
CONFIG_ARCH_WANT_GENERAL_HUGETLB=3Dy
CONFIG_ZONE_DMA32=3Dy
CONFIG_AUDIT_ARCH=3Dy
CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC=3Dy
CONFIG_HAVE_INTEL_TXT=3Dy
CONFIG_X86_64_SMP=3Dy
CONFIG_ARCH_SUPPORTS_UPROBES=3Dy
CONFIG_FIX_EARLYCON_MEM=3Dy
CONFIG_PGTABLE_LEVELS=3D4
CONFIG_CC_HAS_SANE_STACKPROTECTOR=3Dy

#
# Processor type and features
#
CONFIG_ZONE_DMA=y
CONFIG_SMP=y
CONFIG_X86_FEATURE_NAMES=y
CONFIG_X86_X2APIC=y
CONFIG_X86_MPPARSE=y
# CONFIG_GOLDFISH is not set
CONFIG_RETPOLINE=y
# CONFIG_X86_CPU_RESCTRL is not set
CONFIG_X86_EXTENDED_PLATFORM=y
CONFIG_X86_NUMACHIP=y
# CONFIG_X86_VSMP is not set
# CONFIG_X86_UV is not set
# CONFIG_X86_GOLDFISH is not set
# CONFIG_X86_INTEL_MID is not set
CONFIG_X86_INTEL_LPSS=y
CONFIG_X86_AMD_PLATFORM_DEVICE=y
CONFIG_IOSF_MBI=y
CONFIG_IOSF_MBI_DEBUG=y
CONFIG_X86_SUPPORTS_MEMORY_FAILURE=y
CONFIG_SCHED_OMIT_FRAME_POINTER=y
CONFIG_HYPERVISOR_GUEST=y
CONFIG_PARAVIRT=y
CONFIG_PARAVIRT_XXL=y
# CONFIG_PARAVIRT_DEBUG is not set
CONFIG_PARAVIRT_SPINLOCKS=y
CONFIG_X86_HV_CALLBACK_VECTOR=y
CONFIG_XEN=y
CONFIG_XEN_PV=y
CONFIG_XEN_PV_SMP=y
CONFIG_XEN_DOM0=y
CONFIG_XEN_PVHVM=y
CONFIG_XEN_PVHVM_SMP=y
CONFIG_XEN_512GB=y
CONFIG_XEN_SAVE_RESTORE=y
# CONFIG_XEN_DEBUG_FS is not set
CONFIG_XEN_PVH=y
CONFIG_KVM_GUEST=y
CONFIG_ARCH_CPUIDLE_HALTPOLL=y
CONFIG_PVH=y
# CONFIG_PARAVIRT_TIME_ACCOUNTING is not set
CONFIG_PARAVIRT_CLOCK=y
# CONFIG_JAILHOUSE_GUEST is not set
# CONFIG_ACRN_GUEST is not set
# CONFIG_MK8 is not set
# CONFIG_MPSC is not set
# CONFIG_MCORE2 is not set
# CONFIG_MATOM is not set
CONFIG_GENERIC_CPU=y
CONFIG_X86_INTERNODE_CACHE_SHIFT=6
CONFIG_X86_L1_CACHE_SHIFT=6
CONFIG_X86_TSC=y
CONFIG_X86_CMPXCHG64=y
CONFIG_X86_CMOV=y
CONFIG_X86_MINIMUM_CPU_FAMILY=64
CONFIG_X86_DEBUGCTLMSR=y
CONFIG_IA32_FEAT_CTL=y
CONFIG_X86_VMX_FEATURE_NAMES=y
CONFIG_PROCESSOR_SELECT=y
CONFIG_CPU_SUP_INTEL=y
CONFIG_CPU_SUP_AMD=y
CONFIG_CPU_SUP_HYGON=y
CONFIG_CPU_SUP_CENTAUR=y
CONFIG_CPU_SUP_ZHAOXIN=y
CONFIG_HPET_TIMER=y
CONFIG_HPET_EMULATE_RTC=y
CONFIG_DMI=y
CONFIG_GART_IOMMU=y
# CONFIG_MAXSMP is not set
CONFIG_NR_CPUS_RANGE_BEGIN=2
CONFIG_NR_CPUS_RANGE_END=512
CONFIG_NR_CPUS_DEFAULT=64
CONFIG_NR_CPUS=256
CONFIG_SCHED_SMT=y
CONFIG_SCHED_MC=y
CONFIG_SCHED_MC_PRIO=y
CONFIG_X86_LOCAL_APIC=y
CONFIG_X86_IO_APIC=y
CONFIG_X86_REROUTE_FOR_BROKEN_BOOT_IRQS=y
CONFIG_X86_MCE=y
# CONFIG_X86_MCELOG_LEGACY is not set
CONFIG_X86_MCE_INTEL=y
CONFIG_X86_MCE_AMD=y
CONFIG_X86_MCE_THRESHOLD=y
# CONFIG_X86_MCE_INJECT is not set
CONFIG_X86_THERMAL_VECTOR=y

#
# Performance monitoring
#
CONFIG_PERF_EVENTS_INTEL_UNCORE=y
CONFIG_PERF_EVENTS_INTEL_RAPL=y
CONFIG_PERF_EVENTS_INTEL_CSTATE=y
# CONFIG_PERF_EVENTS_AMD_POWER is not set
# end of Performance monitoring

CONFIG_X86_16BIT=y
CONFIG_X86_ESPFIX64=y
CONFIG_X86_VSYSCALL_EMULATION=y
CONFIG_X86_IOPL_IOPERM=y
# CONFIG_I8K is not set
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
CONFIG_MICROCODE_AMD=y
CONFIG_MICROCODE_OLD_INTERFACE=y
CONFIG_X86_MSR=y
CONFIG_X86_CPUID=y
# CONFIG_X86_5LEVEL is not set
CONFIG_X86_DIRECT_GBPAGES=y
# CONFIG_X86_CPA_STATISTICS is not set
# CONFIG_AMD_MEM_ENCRYPT is not set
CONFIG_NUMA=y
CONFIG_AMD_NUMA=y
CONFIG_X86_64_ACPI_NUMA=y
CONFIG_NUMA_EMU=y
CONFIG_NODES_SHIFT=6
CONFIG_ARCH_SPARSEMEM_ENABLE=y
CONFIG_ARCH_SPARSEMEM_DEFAULT=y
CONFIG_ARCH_SELECT_MEMORY_MODEL=y
CONFIG_ARCH_MEMORY_PROBE=y
CONFIG_ARCH_PROC_KCORE_TEXT=y
CONFIG_ILLEGAL_POINTER_VALUE=0xdead000000000000
CONFIG_X86_PMEM_LEGACY_DEVICE=y
CONFIG_X86_PMEM_LEGACY=y
CONFIG_X86_CHECK_BIOS_CORRUPTION=y
CONFIG_X86_BOOTPARAM_MEMORY_CORRUPTION_CHECK=y
CONFIG_X86_RESERVE_LOW=64
CONFIG_MTRR=y
CONFIG_MTRR_SANITIZER=y
CONFIG_MTRR_SANITIZER_ENABLE_DEFAULT=1
CONFIG_MTRR_SANITIZER_SPARE_REG_NR_DEFAULT=1
CONFIG_X86_PAT=y
CONFIG_ARCH_USES_PG_UNCACHED=y
CONFIG_ARCH_RANDOM=y
CONFIG_X86_SMAP=y
CONFIG_X86_UMIP=y
CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS=y
CONFIG_X86_INTEL_TSX_MODE_OFF=y
# CONFIG_X86_INTEL_TSX_MODE_ON is not set
# CONFIG_X86_INTEL_TSX_MODE_AUTO is not set
CONFIG_EFI=y
CONFIG_EFI_STUB=y
CONFIG_EFI_MIXED=y
CONFIG_SECCOMP=y
# CONFIG_HZ_100 is not set
CONFIG_HZ_250=y
# CONFIG_HZ_300 is not set
# CONFIG_HZ_1000 is not set
CONFIG_HZ=250
CONFIG_SCHED_HRTICK=y
CONFIG_KEXEC=y
CONFIG_KEXEC_FILE=y
CONFIG_ARCH_HAS_KEXEC_PURGATORY=y
# CONFIG_KEXEC_SIG is not set
CONFIG_CRASH_DUMP=y
CONFIG_KEXEC_JUMP=y
CONFIG_PHYSICAL_START=0x1000000
CONFIG_RELOCATABLE=y
CONFIG_RANDOMIZE_BASE=y
CONFIG_X86_NEED_RELOCS=y
CONFIG_PHYSICAL_ALIGN=0x1000000
CONFIG_DYNAMIC_MEMORY_LAYOUT=y
CONFIG_RANDOMIZE_MEMORY=y
CONFIG_RANDOMIZE_MEMORY_PHYSICAL_PADDING=0xa
CONFIG_HOTPLUG_CPU=y
# CONFIG_BOOTPARAM_HOTPLUG_CPU0 is not set
# CONFIG_DEBUG_HOTPLUG_CPU0 is not set
# CONFIG_COMPAT_VDSO is not set
CONFIG_LEGACY_VSYSCALL_EMULATE=y
# CONFIG_LEGACY_VSYSCALL_XONLY is not set
# CONFIG_LEGACY_VSYSCALL_NONE is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_MODIFY_LDT_SYSCALL=y
CONFIG_HAVE_LIVEPATCH=y
CONFIG_LIVEPATCH=y
# end of Processor type and features

CONFIG_ARCH_HAS_ADD_PAGES=y
CONFIG_ARCH_ENABLE_MEMORY_HOTPLUG=y
CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE=y
CONFIG_USE_PERCPU_NUMA_NODE_ID=y
CONFIG_ARCH_ENABLE_SPLIT_PMD_PTLOCK=y
CONFIG_ARCH_ENABLE_HUGEPAGE_MIGRATION=y
CONFIG_ARCH_ENABLE_THP_MIGRATION=y

#
# Power management and ACPI options
#
CONFIG_ARCH_HIBERNATION_HEADER=y
CONFIG_SUSPEND=y
CONFIG_SUSPEND_FREEZER=y
# CONFIG_SUSPEND_SKIP_SYNC is not set
CONFIG_HIBERNATE_CALLBACKS=y
CONFIG_HIBERNATION=y
CONFIG_HIBERNATION_SNAPSHOT_DEV=y
CONFIG_PM_STD_PARTITION=""
CONFIG_PM_SLEEP=y
CONFIG_PM_SLEEP_SMP=y
# CONFIG_PM_AUTOSLEEP is not set
CONFIG_PM_WAKELOCKS=y
CONFIG_PM_WAKELOCKS_LIMIT=100
CONFIG_PM_WAKELOCKS_GC=y
CONFIG_PM=y
CONFIG_PM_DEBUG=y
CONFIG_PM_ADVANCED_DEBUG=y
# CONFIG_PM_TEST_SUSPEND is not set
CONFIG_PM_SLEEP_DEBUG=y
# CONFIG_DPM_WATCHDOG is not set
CONFIG_PM_TRACE=y
CONFIG_PM_TRACE_RTC=y
CONFIG_PM_CLK=y
CONFIG_WQ_POWER_EFFICIENT_DEFAULT=y
# CONFIG_ENERGY_MODEL is not set
CONFIG_ARCH_SUPPORTS_ACPI=y
CONFIG_ACPI=y
CONFIG_ACPI_LEGACY_TABLES_LOOKUP=y
CONFIG_ARCH_MIGHT_HAVE_ACPI_PDC=y
CONFIG_ACPI_SYSTEM_POWER_STATES_SUPPORT=y
# CONFIG_ACPI_DEBUGGER is not set
CONFIG_ACPI_SPCR_TABLE=y
CONFIG_ACPI_LPIT=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_REV_OVERRIDE_POSSIBLE=y
# CONFIG_ACPI_EC_DEBUGFS is not set
CONFIG_ACPI_AC=y
CONFIG_ACPI_BATTERY=y
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_FAN=y
# CONFIG_ACPI_TAD is not set
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_CPU_FREQ_PSS=y
CONFIG_ACPI_PROCESSOR_CSTATE=y
CONFIG_ACPI_PROCESSOR_IDLE=y
CONFIG_ACPI_CPPC_LIB=y
CONFIG_ACPI_PROCESSOR=y
# CONFIG_ACPI_IPMI is not set
CONFIG_ACPI_HOTPLUG_CPU=y
# CONFIG_ACPI_PROCESSOR_AGGREGATOR is not set
CONFIG_ACPI_THERMAL=y
CONFIG_ACPI_CUSTOM_DSDT_FILE=""
CONFIG_ARCH_HAS_ACPI_TABLE_UPGRADE=y
CONFIG_ACPI_TABLE_UPGRADE=y
# CONFIG_ACPI_DEBUG is not set
CONFIG_ACPI_PCI_SLOT=y
CONFIG_ACPI_CONTAINER=y
CONFIG_ACPI_HOTPLUG_MEMORY=y
CONFIG_ACPI_HOTPLUG_IOAPIC=y
# CONFIG_ACPI_SBS is not set
CONFIG_ACPI_HED=y
# CONFIG_ACPI_CUSTOM_METHOD is not set
CONFIG_ACPI_BGRT=y
# CONFIG_ACPI_REDUCED_HARDWARE_ONLY is not set
# CONFIG_ACPI_NFIT is not set
CONFIG_ACPI_NUMA=y
# CONFIG_ACPI_HMAT is not set
CONFIG_HAVE_ACPI_APEI=y
CONFIG_HAVE_ACPI_APEI_NMI=y
CONFIG_ACPI_APEI=y
CONFIG_ACPI_APEI_GHES=y
CONFIG_ACPI_APEI_PCIEAER=y
CONFIG_ACPI_APEI_MEMORY_FAILURE=y
# CONFIG_ACPI_APEI_EINJ is not set
# CONFIG_ACPI_APEI_ERST_DEBUG is not set
# CONFIG_DPTF_POWER is not set
# CONFIG_ACPI_EXTLOG is not set
# CONFIG_PMIC_OPREGION is not set
# CONFIG_ACPI_CONFIGFS is not set
CONFIG_X86_PM_TIMER=y
CONFIG_SFI=y

#
# CPU Frequency scaling
#
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_GOV_ATTR_SET=y
CONFIG_CPU_FREQ_GOV_COMMON=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_DEFAULT_GOV_PERFORMANCE=y
# CONFIG_CPU_FREQ_DEFAULT_GOV_POWERSAVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_USERSPACE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_CONSERVATIVE is not set
# CONFIG_CPU_FREQ_DEFAULT_GOV_SCHEDUTIL is not set
CONFIG_CPU_FREQ_GOV_PERFORMANCE=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
CONFIG_CPU_FREQ_GOV_SCHEDUTIL=y

#
# CPU frequency scaling drivers
#
CONFIG_X86_INTEL_PSTATE=y
CONFIG_X86_PCC_CPUFREQ=y
CONFIG_X86_ACPI_CPUFREQ=y
CONFIG_X86_ACPI_CPUFREQ_CPB=y
CONFIG_X86_POWERNOW_K8=y
# CONFIG_X86_AMD_FREQ_SENSITIVITY is not set
CONFIG_X86_SPEEDSTEP_CENTRINO=y
# CONFIG_X86_P4_CLOCKMOD is not set

#
# shared options
#
# end of CPU Frequency scaling

#
# CPU Idle
#
CONFIG_CPU_IDLE=y
CONFIG_CPU_IDLE_GOV_LADDER=y
CONFIG_CPU_IDLE_GOV_MENU=y
# CONFIG_CPU_IDLE_GOV_TEO is not set
# CONFIG_CPU_IDLE_GOV_HALTPOLL is not set
CONFIG_HALTPOLL_CPUIDLE=y
# end of CPU Idle

CONFIG_INTEL_IDLE=y
# end of Power management and ACPI options

#
# Bus options (PCI etc.)
#
CONFIG_PCI_DIRECT=y
CONFIG_PCI_MMCONFIG=y
CONFIG_PCI_XEN=y
CONFIG_MMCONF_FAM10H=y
# CONFIG_PCI_CNB20LE_QUIRK is not set
# CONFIG_ISA_BUS is not set
CONFIG_ISA_DMA_API=y
CONFIG_AMD_NB=y
# CONFIG_X86_SYSFB is not set
# end of Bus options (PCI etc.)

#
# Binary Emulations
#
CONFIG_IA32_EMULATION=y
CONFIG_X86_X32=y
CONFIG_COMPAT_32=y
CONFIG_COMPAT=y
CONFIG_COMPAT_FOR_U64_ALIGNMENT=y
CONFIG_SYSVIPC_COMPAT=y
# end of Binary Emulations

#
# Firmware Drivers
#
CONFIG_EDD=y
CONFIG_EDD_OFF=y
CONFIG_FIRMWARE_MEMMAP=y
CONFIG_DMIID=y
# CONFIG_DMI_SYSFS is not set
CONFIG_DMI_SCAN_MACHINE_NON_EFI_FALLBACK=y
# CONFIG_ISCSI_IBFT is not set
# CONFIG_FW_CFG_SYSFS is not set
# CONFIG_GOOGLE_FIRMWARE is not set

#
# EFI (Extensible Firmware Interface) Support
#
CONFIG_EFI_VARS=y
CONFIG_EFI_ESRT=y
CONFIG_EFI_VARS_PSTORE=m
# CONFIG_EFI_VARS_PSTORE_DEFAULT_DISABLE is not set
CONFIG_EFI_RUNTIME_MAP=y
# CONFIG_EFI_FAKE_MEMMAP is not set
CONFIG_EFI_RUNTIME_WRAPPERS=y
CONFIG_EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER=y
# CONFIG_EFI_BOOTLOADER_CONTROL is not set
# CONFIG_EFI_CAPSULE_LOADER is not set
# CONFIG_EFI_TEST is not set
# CONFIG_APPLE_PROPERTIES is not set
# CONFIG_RESET_ATTACK_MITIGATION is not set
# CONFIG_EFI_RCI2_TABLE is not set
# CONFIG_EFI_DISABLE_PCI_DMA is not set
# end of EFI (Extensible Firmware Interface) Support

CONFIG_UEFI_CPER=y
CONFIG_UEFI_CPER_X86=y
CONFIG_EFI_EARLYCON=y
CONFIG_EFI_CUSTOM_SSDT_OVERLAYS=y

#
# Tegra firmware driver
#
# end of Tegra firmware driver
# end of Firmware Drivers

CONFIG_HAVE_KVM=y
CONFIG_HAVE_KVM_IRQCHIP=y
CONFIG_HAVE_KVM_IRQFD=y
CONFIG_HAVE_KVM_IRQ_ROUTING=y
CONFIG_HAVE_KVM_EVENTFD=y
CONFIG_KVM_MMIO=y
CONFIG_KVM_ASYNC_PF=y
CONFIG_HAVE_KVM_MSI=y
CONFIG_HAVE_KVM_CPU_RELAX_INTERCEPT=y
CONFIG_KVM_VFIO=y
CONFIG_KVM_GENERIC_DIRTYLOG_READ_PROTECT=y
CONFIG_KVM_COMPAT=y
CONFIG_HAVE_KVM_IRQ_BYPASS=y
CONFIG_HAVE_KVM_NO_POLL=y
CONFIG_KVM_XFER_TO_GUEST_WORK=y
CONFIG_VIRTUALIZATION=y
CONFIG_KVM=m
CONFIG_KVM_WERROR=y
CONFIG_KVM_INTEL=m
# CONFIG_KVM_AMD is not set
# CONFIG_KVM_MMU_AUDIT is not set
CONFIG_AS_AVX512=y
CONFIG_AS_SHA1_NI=y
CONFIG_AS_SHA256_NI=y
CONFIG_AS_TPAUSE=y

#
# General architecture-dependent options
#
CONFIG_CRASH_CORE=y
CONFIG_KEXEC_CORE=y
CONFIG_HOTPLUG_SMT=y
CONFIG_GENERIC_ENTRY=y
# CONFIG_OPROFILE is not set
CONFIG_HAVE_OPROFILE=y
CONFIG_OPROFILE_NMI_TIMER=y
CONFIG_KPROBES=y
CONFIG_JUMP_LABEL=y
# CONFIG_STATIC_KEYS_SELFTEST is not set
CONFIG_OPTPROBES=y
CONFIG_KPROBES_ON_FTRACE=y
CONFIG_UPROBES=y
CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS=y
CONFIG_ARCH_USE_BUILTIN_BSWAP=y
CONFIG_KRETPROBES=y
CONFIG_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_IOREMAP_PROT=y
CONFIG_HAVE_KPROBES=y
CONFIG_HAVE_KRETPROBES=y
CONFIG_HAVE_OPTPROBES=y
CONFIG_HAVE_KPROBES_ON_FTRACE=y
CONFIG_HAVE_FUNCTION_ERROR_INJECTION=y
CONFIG_HAVE_NMI=y
CONFIG_HAVE_ARCH_TRACEHOOK=y
CONFIG_HAVE_DMA_CONTIGUOUS=y
CONFIG_GENERIC_SMP_IDLE_THREAD=y
CONFIG_ARCH_HAS_FORTIFY_SOURCE=y
CONFIG_ARCH_HAS_SET_MEMORY=y
CONFIG_ARCH_HAS_SET_DIRECT_MAP=y
CONFIG_HAVE_ARCH_THREAD_STRUCT_WHITELIST=y
CONFIG_ARCH_WANTS_DYNAMIC_TASK_STRUCT=y
CONFIG_HAVE_ASM_MODVERSIONS=y
CONFIG_HAVE_REGS_AND_STACK_ACCESS_API=y
CONFIG_HAVE_RSEQ=y
CONFIG_HAVE_FUNCTION_ARG_ACCESS_API=y
CONFIG_HAVE_HW_BREAKPOINT=y
CONFIG_HAVE_MIXED_BREAKPOINTS_REGS=y
CONFIG_HAVE_USER_RETURN_NOTIFIER=y
CONFIG_HAVE_PERF_EVENTS_NMI=y
CONFIG_HAVE_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HAVE_PERF_REGS=y
CONFIG_HAVE_PERF_USER_STACK_DUMP=y
CONFIG_HAVE_ARCH_JUMP_LABEL=y
CONFIG_HAVE_ARCH_JUMP_LABEL_RELATIVE=y
CONFIG_MMU_GATHER_TABLE_FREE=y
CONFIG_MMU_GATHER_RCU_TABLE_FREE=y
CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG=y
CONFIG_HAVE_ALIGNED_STRUCT_PAGE=y
CONFIG_HAVE_CMPXCHG_LOCAL=y
CONFIG_HAVE_CMPXCHG_DOUBLE=y
CONFIG_ARCH_WANT_COMPAT_IPC_PARSE_VERSION=y
CONFIG_ARCH_WANT_OLD_COMPAT_IPC=y
CONFIG_HAVE_ARCH_SECCOMP_FILTER=y
CONFIG_SECCOMP_FILTER=y
CONFIG_HAVE_ARCH_STACKLEAK=y
CONFIG_HAVE_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES=y
CONFIG_HAVE_CONTEXT_TRACKING=y
CONFIG_HAVE_VIRT_CPU_ACCOUNTING_GEN=y
CONFIG_HAVE_IRQ_TIME_ACCOUNTING=y
CONFIG_HAVE_MOVE_PMD=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE=y
CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD=y
CONFIG_HAVE_ARCH_HUGE_VMAP=y
CONFIG_ARCH_WANT_HUGE_PMD_SHARE=y
CONFIG_HAVE_ARCH_SOFT_DIRTY=y
CONFIG_HAVE_MOD_ARCH_SPECIFIC=y
CONFIG_MODULES_USE_ELF_RELA=y
CONFIG_ARCH_HAS_ELF_RANDOMIZE=y
CONFIG_HAVE_ARCH_MMAP_RND_BITS=y
CONFIG_HAVE_EXIT_THREAD=y
CONFIG_ARCH_MMAP_RND_BITS=28
CONFIG_HAVE_ARCH_MMAP_RND_COMPAT_BITS=y
CONFIG_ARCH_MMAP_RND_COMPAT_BITS=8
CONFIG_HAVE_ARCH_COMPAT_MMAP_BASES=y
CONFIG_HAVE_STACK_VALIDATION=y
CONFIG_HAVE_RELIABLE_STACKTRACE=y
CONFIG_OLD_SIGSUSPEND3=y
CONFIG_COMPAT_OLD_SIGACTION=y
CONFIG_COMPAT_32BIT_TIME=y
CONFIG_HAVE_ARCH_VMAP_STACK=y
CONFIG_VMAP_STACK=y
CONFIG_ARCH_HAS_STRICT_KERNEL_RWX=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_ARCH_HAS_STRICT_MODULE_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HAVE_ARCH_PREL32_RELOCATIONS=y
CONFIG_ARCH_USE_MEMREMAP_PROT=y
# CONFIG_LOCK_EVENT_COUNTS is not set
CONFIG_ARCH_HAS_MEM_ENCRYPT=y

#
# GCOV-based kernel profiling
#
# CONFIG_GCOV_KERNEL is not set
CONFIG_ARCH_HAS_GCOV_PROFILE_ALL=y
# end of GCOV-based kernel profiling

CONFIG_HAVE_GCC_PLUGINS=y
# end of General architecture-dependent options

CONFIG_RT_MUTEXES=y
CONFIG_BASE_SMALL=0
CONFIG_MODULE_SIG_FORMAT=y
CONFIG_MODULES=y
# CONFIG_MODULE_FORCE_LOAD is not set
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
CONFIG_MODVERSIONS=y
CONFIG_ASM_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
CONFIG_MODULE_SIG=y
# CONFIG_MODULE_SIG_FORCE is not set
CONFIG_MODULE_SIG_ALL=y
# CONFIG_MODULE_SIG_SHA1 is not set
# CONFIG_MODULE_SIG_SHA224 is not set
# CONFIG_MODULE_SIG_SHA256 is not set
# CONFIG_MODULE_SIG_SHA384 is not set
CONFIG_MODULE_SIG_SHA512=y
CONFIG_MODULE_SIG_HASH="sha512"
# CONFIG_MODULE_COMPRESS is not set
# CONFIG_MODULE_ALLOW_MISSING_NAMESPACE_IMPORTS is not set
CONFIG_UNUSED_SYMBOLS=y
CONFIG_MODULES_TREE_LOOKUP=y
CONFIG_BLOCK=y
CONFIG_BLK_SCSI_REQUEST=y
CONFIG_BLK_CGROUP_RWSTAT=y
CONFIG_BLK_DEV_BSG=y
CONFIG_BLK_DEV_BSGLIB=y
CONFIG_BLK_DEV_INTEGRITY=y
CONFIG_BLK_DEV_INTEGRITY_T10=y
# CONFIG_BLK_DEV_ZONED is not set
CONFIG_BLK_DEV_THROTTLING=y
# CONFIG_BLK_DEV_THROTTLING_LOW is not set
CONFIG_BLK_CMDLINE_PARSER=y
# CONFIG_BLK_WBT is not set
# CONFIG_BLK_CGROUP_IOLATENCY is not set
# CONFIG_BLK_CGROUP_IOCOST is not set
CONFIG_BLK_DEBUG_FS=y
# CONFIG_BLK_SED_OPAL is not set
# CONFIG_BLK_INLINE_ENCRYPTION is not set

#
# Partition Types
#
CONFIG_PARTITION_ADVANCED=y
# CONFIG_ACORN_PARTITION is not set
CONFIG_AIX_PARTITION=y
CONFIG_OSF_PARTITION=y
CONFIG_AMIGA_PARTITION=y
CONFIG_ATARI_PARTITION=y
CONFIG_MAC_PARTITION=y
CONFIG_MSDOS_PARTITION=y
CONFIG_BSD_DISKLABEL=y
CONFIG_MINIX_SUBPARTITION=y
CONFIG_SOLARIS_X86_PARTITION=y
CONFIG_UNIXWARE_DISKLABEL=y
CONFIG_LDM_PARTITION=y
# CONFIG_LDM_DEBUG is not set
CONFIG_SGI_PARTITION=y
CONFIG_ULTRIX_PARTITION=y
CONFIG_SUN_PARTITION=y
CONFIG_KARMA_PARTITION=y
CONFIG_EFI_PARTITION=y
CONFIG_SYSV68_PARTITION=y
CONFIG_CMDLINE_PARTITION=y
# end of Partition Types

CONFIG_BLOCK_COMPAT=y
CONFIG_BLK_MQ_PCI=y
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_BLK_PM=y

#
# IO Schedulers
#
CONFIG_MQ_IOSCHED_DEADLINE=y
# CONFIG_MQ_IOSCHED_KYBER is not set
# CONFIG_IOSCHED_BFQ is not set
# end of IO Schedulers

CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_ASN1=y
CONFIG_UNINLINE_SPIN_UNLOCK=y
CONFIG_ARCH_SUPPORTS_ATOMIC_RMW=y
CONFIG_MUTEX_SPIN_ON_OWNER=y
CONFIG_RWSEM_SPIN_ON_OWNER=y
CONFIG_LOCK_SPIN_ON_OWNER=y
CONFIG_ARCH_USE_QUEUED_SPINLOCKS=y
CONFIG_QUEUED_SPINLOCKS=y
CONFIG_ARCH_USE_QUEUED_RWLOCKS=y
CONFIG_QUEUED_RWLOCKS=y
CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE=y
CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE=y
CONFIG_ARCH_HAS_SYSCALL_WRAPPER=y
CONFIG_FREEZER=y

#
# Executable file formats
#
CONFIG_BINFMT_ELF=y
CONFIG_COMPAT_BINFMT_ELF=y
CONFIG_ELFCORE=y
CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS=y
CONFIG_BINFMT_SCRIPT=y
# CONFIG_BINFMT_MISC is not set
CONFIG_COREDUMP=y
# end of Executable file formats

#
# Memory Management options
#
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_SPARSEMEM=y
CONFIG_NEED_MULTIPLE_NODES=y
CONFIG_SPARSEMEM_EXTREME=y
CONFIG_SPARSEMEM_VMEMMAP_ENABLE=y
CONFIG_SPARSEMEM_VMEMMAP=y
CONFIG_HAVE_FAST_GUP=y
CONFIG_NUMA_KEEP_MEMINFO=y
CONFIG_MEMORY_ISOLATION=y
CONFIG_HAVE_BOOTMEM_INFO_NODE=y
CONFIG_MEMORY_HOTPLUG=y
CONFIG_MEMORY_HOTPLUG_SPARSE=y
# CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE is not set
CONFIG_MEMORY_HOTREMOVE=y
CONFIG_SPLIT_PTLOCK_CPUS=4
CONFIG_MEMORY_BALLOON=y
CONFIG_BALLOON_COMPACTION=y
CONFIG_COMPACTION=y
CONFIG_PAGE_REPORTING=y
CONFIG_MIGRATION=y
CONFIG_CONTIG_ALLOC=y
CONFIG_PHYS_ADDR_T_64BIT=y
CONFIG_BOUNCE=y
CONFIG_VIRT_TO_BUS=y
CONFIG_MMU_NOTIFIER=y
CONFIG_KSM=y
CONFIG_DEFAULT_MMAP_MIN_ADDR=65536
CONFIG_ARCH_SUPPORTS_MEMORY_FAILURE=y
CONFIG_MEMORY_FAILURE=y
# CONFIG_HWPOISON_INJECT is not set
CONFIG_TRANSPARENT_HUGEPAGE=y
CONFIG_TRANSPARENT_HUGEPAGE_ALWAYS=y
# CONFIG_TRANSPARENT_HUGEPAGE_MADVISE is not set
CONFIG_ARCH_WANTS_THP_SWAP=y
CONFIG_THP_SWAP=y
CONFIG_CLEANCACHE=y
CONFIG_FRONTSWAP=y
CONFIG_CMA=y
# CONFIG_CMA_DEBUG is not set
# CONFIG_CMA_DEBUGFS is not set
CONFIG_CMA_AREAS=7
CONFIG_MEM_SOFT_DIRTY=y
CONFIG_ZSWAP=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_DEFLATE is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZO=y
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_842 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4 is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_LZ4HC is not set
# CONFIG_ZSWAP_COMPRESSOR_DEFAULT_ZSTD is not set
CONFIG_ZSWAP_COMPRESSOR_DEFAULT="lzo"
CONFIG_ZSWAP_ZPOOL_DEFAULT_ZBUD=y
# CONFIG_ZSWAP_ZPOOL_DEFAULT_Z3FOLD is not set
# CONFIG_ZSWAP_ZPOOL_DEFAULT_ZSMALLOC is not set
CONFIG_ZSWAP_ZPOOL_DEFAULT="zbud"
# CONFIG_ZSWAP_DEFAULT_ON is not set
CONFIG_ZPOOL=y
CONFIG_ZBUD=y
# CONFIG_Z3FOLD is not set
CONFIG_ZSMALLOC=y
# CONFIG_ZSMALLOC_PGTABLE_MAPPING is not set
# CONFIG_ZSMALLOC_STAT is not set
CONFIG_GENERIC_EARLY_IOREMAP=y
# CONFIG_DEFERRED_STRUCT_PAGE_INIT is not set
# CONFIG_IDLE_PAGE_TRACKING is not set
CONFIG_ARCH_HAS_PTE_DEVMAP=y
# CONFIG_ZONE_DEVICE is not set
CONFIG_ARCH_USES_HIGH_VMA_FLAGS=y
CONFIG_ARCH_HAS_PKEYS=y
# CONFIG_PERCPU_STATS is not set
# CONFIG_GUP_BENCHMARK is not set
# CONFIG_READ_ONLY_THP_FOR_FS is not set
CONFIG_ARCH_HAS_PTE_SPECIAL=y
# end of Memory Management options

CONFIG_NET=y
CONFIG_NET_INGRESS=y
CONFIG_SKB_EXTENSIONS=y

#
# Networking options
#
CONFIG_PACKET=y
# CONFIG_PACKET_DIAG is not set
CONFIG_UNIX=y
CONFIG_UNIX_SCM=y
# CONFIG_UNIX_DIAG is not set
# CONFIG_TLS is not set
# CONFIG_XFRM_USER is not set
# CONFIG_NET_KEY is not set
# CONFIG_XDP_SOCKETS is not set
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_FIB_TRIE_STATS=y
CONFIG_IP_MULTIPLE_TABLES=y
CONFIG_IP_ROUTE_MULTIPATH=y
CONFIG_IP_ROUTE_VERBOSE=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IP_PNP_BOOTP is not set
# CONFIG_IP_PNP_RARP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE_DEMUX is not set
CONFIG_IP_MROUTE_COMMON=y
CONFIG_IP_MROUTE=y
# CONFIG_IP_MROUTE_MULTIPLE_TABLES is not set
CONFIG_IP_PIMSM_V1=y
CONFIG_IP_PIMSM_V2=y
CONFIG_SYN_COOKIES=y
# CONFIG_NET_IPVTI is not set
# CONFIG_NET_FOU is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_DIAG is not set
CONFIG_TCP_CONG_ADVANCED=y
# CONFIG_TCP_CONG_BIC is not set
CONFIG_TCP_CONG_CUBIC=y
# CONFIG_TCP_CONG_WESTWOOD is not set
# CONFIG_TCP_CONG_HTCP is not set
# CONFIG_TCP_CONG_HSTCP is not set
# CONFIG_TCP_CONG_HYBLA is not set
# CONFIG_TCP_CONG_VEGAS is not set
# CONFIG_TCP_CONG_NV is not set
# CONFIG_TCP_CONG_SCALABLE is not set
# CONFIG_TCP_CONG_LP is not set
# CONFIG_TCP_CONG_VENO is not set
# CONFIG_TCP_CONG_YEAH is not set
# CONFIG_TCP_CONG_ILLINOIS is not set
# CONFIG_TCP_CONG_DCTCP is not set
# CONFIG_TCP_CONG_CDG is not set
# CONFIG_TCP_CONG_BBR is not set
CONFIG_DEFAULT_CUBIC=y
# CONFIG_DEFAULT_RENO is not set
CONFIG_DEFAULT_TCP_CONG="cubic"
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
# CONFIG_IPV6_OPTIMISTIC_DAD is not set
# CONFIG_INET6_AH is not set
# CONFIG_INET6_ESP is not set
# CONFIG_INET6_IPCOMP is not set
# CONFIG_IPV6_MIP6 is not set
# CONFIG_IPV6_ILA is not set
# CONFIG_IPV6_VTI is not set
# CONFIG_IPV6_SIT is not set
# CONFIG_IPV6_TUNNEL is not set
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_IPV6_SUBTREES=y
CONFIG_IPV6_MROUTE=y
CONFIG_IPV6_MROUTE_MULTIPLE_TABLES=y
CONFIG_IPV6_PIMSM_V2=y
# CONFIG_IPV6_SEG6_LWTUNNEL is not set
# CONFIG_IPV6_SEG6_HMAC is not set
# CONFIG_IPV6_RPL_LWTUNNEL is not set
CONFIG_NETLABEL=y
# CONFIG_MPTCP is not set
CONFIG_NETWORK_SECMARK=y
# CONFIG_NETWORK_PHY_TIMESTAMPING is not set
CONFIG_NETFILTER=y
CONFIG_NETFILTER_ADVANCED=y
CONFIG_BRIDGE_NETFILTER=m

#
# Core Netfilter Configuration
#
CONFIG_NETFILTER_INGRESS=y
CONFIG_NETFILTER_NETLINK=y
CONFIG_NETFILTER_FAMILY_BRIDGE=y
CONFIG_NETFILTER_FAMILY_ARP=y
# CONFIG_NETFILTER_NETLINK_ACCT is not set
# CONFIG_NETFILTER_NETLINK_QUEUE is not set
# CONFIG_NETFILTER_NETLINK_LOG is not set
# CONFIG_NETFILTER_NETLINK_OSF is not set
CONFIG_NF_CONNTRACK=y
# CONFIG_NF_LOG_NETDEV is not set
# CONFIG_NF_CONNTRACK_MARK is not set
# CONFIG_NF_CONNTRACK_SECMARK is not set
# CONFIG_NF_CONNTRACK_ZONES is not set
CONFIG_NF_CONNTRACK_PROCFS=y
# CONFIG_NF_CONNTRACK_EVENTS is not set
# CONFIG_NF_CONNTRACK_TIMEOUT is not set
# CONFIG_NF_CONNTRACK_TIMESTAMP is not set
# CONFIG_NF_CONNTRACK_LABELS is not set
CONFIG_NF_CT_PROTO_DCCP=y
CONFIG_NF_CT_PROTO_SCTP=y
CONFIG_NF_CT_PROTO_UDPLITE=y
# CONFIG_NF_CONNTRACK_AMANDA is not set
# CONFIG_NF_CONNTRACK_FTP is not set
# CONFIG_NF_CONNTRACK_H323 is not set
# CONFIG_NF_CONNTRACK_IRC is not set
# CONFIG_NF_CONNTRACK_NETBIOS_NS is not set
# CONFIG_NF_CONNTRACK_SNMP is not set
# CONFIG_NF_CONNTRACK_PPTP is not set
# CONFIG_NF_CONNTRACK_SANE is not set
# CONFIG_NF_CONNTRACK_SIP is not set
# CONFIG_NF_CONNTRACK_TFTP is not set
# CONFIG_NF_CT_NETLINK is not set
CONFIG_NF_NAT=y
CONFIG_NF_NAT_REDIRECT=y
CONFIG_NF_NAT_MASQUERADE=y
CONFIG_NF_TABLES=y
CONFIG_NF_TABLES_INET=y
# CONFIG_NF_TABLES_NETDEV is not set
# CONFIG_NFT_NUMGEN is not set
# CONFIG_NFT_CT is not set
# CONFIG_NFT_COUNTER is not set
# CONFIG_NFT_CONNLIMIT is not set
# CONFIG_NFT_LOG is not set
# CONFIG_NFT_LIMIT is not set
# CONFIG_NFT_MASQ is not set
# CONFIG_NFT_REDIR is not set
CONFIG_NFT_NAT=y
# CONFIG_NFT_TUNNEL is not set
# CONFIG_NFT_OBJREF is not set
# CONFIG_NFT_QUOTA is not set
# CONFIG_NFT_REJECT is not set
CONFIG_NFT_COMPAT=y
# CONFIG_NFT_HASH is not set
# CONFIG_NFT_SOCKET is not set
# CONFIG_NFT_OSF is not set
# CONFIG_NFT_TPROXY is not set
# CONFIG_NFT_SYNPROXY is not set
# CONFIG_NF_FLOW_TABLE is not set
CONFIG_NETFILTER_XTABLES=y

#
# Xtables combined modules
#
# CONFIG_NETFILTER_XT_MARK is not set
# CONFIG_NETFILTER_XT_CONNMARK is not set

#
# Xtables targets
#
# CONFIG_NETFILTER_XT_TARGET_AUDIT is not set
# CONFIG_NETFILTER_XT_TARGET_CLASSIFY is not set
# CONFIG_NETFILTER_XT_TARGET_CONNMARK is not set
# CONFIG_NETFILTER_XT_TARGET_HMARK is not set
# CONFIG_NETFILTER_XT_TARGET_IDLETIMER is not set
# CONFIG_NETFILTER_XT_TARGET_LED is not set
# CONFIG_NETFILTER_XT_TARGET_LOG is not set
# CONFIG_NETFILTER_XT_TARGET_MARK is not set
CONFIG_NETFILTER_XT_NAT=y
CONFIG_NETFILTER_XT_TARGET_NETMAP=y
# CONFIG_NETFILTER_XT_TARGET_NFLOG is not set
# CONFIG_NETFILTER_XT_TARGET_NFQUEUE is not set
# CONFIG_NETFILTER_XT_TARGET_RATEEST is not set
CONFIG_NETFILTER_XT_TARGET_REDIRECT=y
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=y
# CONFIG_NETFILTER_XT_TARGET_TEE is not set
# CONFIG_NETFILTER_XT_TARGET_SECMARK is not set
# CONFIG_NETFILTER_XT_TARGET_TCPMSS is not set

#
# Xtables matches
#
CONFIG_NETFILTER_XT_MATCH_ADDRTYPE=y
# CONFIG_NETFILTER_XT_MATCH_BPF is not set
# CONFIG_NETFILTER_XT_MATCH_CGROUP is not set
# CONFIG_NETFILTER_XT_MATCH_CLUSTER is not set
# CONFIG_NETFILTER_XT_MATCH_COMMENT is not set
# CONFIG_NETFILTER_XT_MATCH_CONNBYTES is not set
# CONFIG_NETFILTER_XT_MATCH_CONNLABEL is not set
# CONFIG_NETFILTER_XT_MATCH_CONNLIMIT is not set
# CONFIG_NETFILTER_XT_MATCH_CONNMARK is not set
CONFIG_NETFILTER_XT_MATCH_CONNTRACK=y
# CONFIG_NETFILTER_XT_MATCH_CPU is not set
# CONFIG_NETFILTER_XT_MATCH_DCCP is not set
# CONFIG_NETFILTER_XT_MATCH_DEVGROUP is not set
# CONFIG_NETFILTER_XT_MATCH_DSCP is not set
# CONFIG_NETFILTER_XT_MATCH_ECN is not set
# CONFIG_NETFILTER_XT_MATCH_ESP is not set
# CONFIG_NETFILTER_XT_MATCH_HASHLIMIT is not set
# CONFIG_NETFILTER_XT_MATCH_HELPER is not set
# CONFIG_NETFILTER_XT_MATCH_HL is not set
# CONFIG_NETFILTER_XT_MATCH_IPCOMP is not set
# CONFIG_NETFILTER_XT_MATCH_IPRANGE is not set
# CONFIG_NETFILTER_XT_MATCH_L2TP is not set
# CONFIG_NETFILTER_XT_MATCH_LENGTH is not set
# CONFIG_NETFILTER_XT_MATCH_LIMIT is not set
# CONFIG_NETFILTER_XT_MATCH_MAC is not set
# CONFIG_NETFILTER_XT_MATCH_MARK is not set
# CONFIG_NETFILTER_XT_MATCH_MULTIPORT is not set
# CONFIG_NETFILTER_XT_MATCH_NFACCT is not set
# CONFIG_NETFILTER_XT_MATCH_OSF is not set
# CONFIG_NETFILTER_XT_MATCH_OWNER is not set
# CONFIG_NETFILTER_XT_MATCH_PHYSDEV is not set
# CONFIG_NETFILTER_XT_MATCH_PKTTYPE is not set
# CONFIG_NETFILTER_XT_MATCH_QUOTA is not set
# CONFIG_NETFILTER_XT_MATCH_RATEEST is not set
# CONFIG_NETFILTER_XT_MATCH_REALM is not set
# CONFIG_NETFILTER_XT_MATCH_RECENT is not set
# CONFIG_NETFILTER_XT_MATCH_SCTP is not set
# CONFIG_NETFILTER_XT_MATCH_SOCKET is not set
# CONFIG_NETFILTER_XT_MATCH_STATE is not set
# CONFIG_NETFILTER_XT_MATCH_STATISTIC is not set
# CONFIG_NETFILTER_XT_MATCH_STRING is not set
# CONFIG_NETFILTER_XT_MATCH_TCPMSS is not set
# CONFIG_NETFILTER_XT_MATCH_TIME is not set
# CONFIG_NETFILTER_XT_MATCH_U32 is not set
# end of Core Netfilter Configuration

# CONFIG_IP_SET is not set
# CONFIG_IP_VS is not set

#
# IP: Netfilter Configuration
#
CONFIG_NF_DEFRAG_IPV4=y
# CONFIG_NF_SOCKET_IPV4 is not set
# CONFIG_NF_TPROXY_IPV4 is not set
CONFIG_NF_TABLES_IPV4=y
# CONFIG_NFT_DUP_IPV4 is not set
# CONFIG_NFT_FIB_IPV4 is not set
# CONFIG_NF_TABLES_ARP is not set
# CONFIG_NF_DUP_IPV4 is not set
# CONFIG_NF_LOG_ARP is not set
# CONFIG_NF_LOG_IPV4 is not set
CONFIG_NF_REJECT_IPV4=y
CONFIG_IP_NF_IPTABLES=y
# CONFIG_IP_NF_MATCH_AH is not set
# CONFIG_IP_NF_MATCH_ECN is not set
# CONFIG_IP_NF_MATCH_TTL is not set
CONFIG_IP_NF_FILTER=y
CONFIG_IP_NF_TARGET_REJECT=y
# CONFIG_IP_NF_TARGET_SYNPROXY is not set
CONFIG_IP_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
# CONFIG_IP_NF_MANGLE is not set
# CONFIG_IP_NF_RAW is not set
# CONFIG_IP_NF_SECURITY is not set
CONFIG_IP_NF_ARPTABLES=y
# CONFIG_IP_NF_ARPFILTER is not set
# CONFIG_IP_NF_ARP_MANGLE is not set
# end of IP: Netfilter Configuration

#
# IPv6: Netfilter Configuration
#
# CONFIG_NF_SOCKET_IPV6 is not set
# CONFIG_NF_TPROXY_IPV6 is not set
CONFIG_NF_TABLES_IPV6=y
# CONFIG_NFT_DUP_IPV6 is not set
# CONFIG_NFT_FIB_IPV6 is not set
# CONFIG_NF_DUP_IPV6 is not set
CONFIG_NF_REJECT_IPV6=y
# CONFIG_NF_LOG_IPV6 is not set
CONFIG_IP6_NF_IPTABLES=y
# CONFIG_IP6_NF_MATCH_AH is not set
# CONFIG_IP6_NF_MATCH_EUI64 is not set
# CONFIG_IP6_NF_MATCH_FRAG is not set
# CONFIG_IP6_NF_MATCH_OPTS is not set
# CONFIG_IP6_NF_MATCH_HL is not set
# CONFIG_IP6_NF_MATCH_IPV6HEADER is not set
# CONFIG_IP6_NF_MATCH_MH is not set
# CONFIG_IP6_NF_MATCH_RT is not set
# CONFIG_IP6_NF_MATCH_SRH is not set
CONFIG_IP6_NF_FILTER=y
CONFIG_IP6_NF_TARGET_REJECT=y
# CONFIG_IP6_NF_TARGET_SYNPROXY is not set
# CONFIG_IP6_NF_MANGLE is not set
# CONFIG_IP6_NF_RAW is not set
# CONFIG_IP6_NF_SECURITY is not set
# CONFIG_IP6_NF_NAT is not set
# end of IPv6: Netfilter Configuration

CONFIG_NF_DEFRAG_IPV6=y
# CONFIG_NF_TABLES_BRIDGE is not set
# CONFIG_NF_CONNTRACK_BRIDGE is not set
# CONFIG_BRIDGE_NF_EBTABLES is not set
# CONFIG_BPFILTER is not set
# CONFIG_IP_DCCP is not set
# CONFIG_IP_SCTP is not set
# CONFIG_RDS is not set
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_L2TP is not set
CONFIG_STP=y
CONFIG_BRIDGE=y
CONFIG_BRIDGE_IGMP_SNOOPING=y
# CONFIG_BRIDGE_MRP is not set
CONFIG_HAVE_NET_DSA=y
# CONFIG_NET_DSA is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
CONFIG_LLC=y
# CONFIG_LLC2 is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_PHONET is not set
# CONFIG_6LOWPAN is not set
# CONFIG_IEEE802154 is not set
CONFIG_NET_SCHED=y

#
# Queueing/Scheduling
#
# CONFIG_NET_SCH_CBQ is not set
# CONFIG_NET_SCH_HTB is not set
# CONFIG_NET_SCH_HFSC is not set
# CONFIG_NET_SCH_PRIO is not set
# CONFIG_NET_SCH_MULTIQ is not set
# CONFIG_NET_SCH_RED is not set
# CONFIG_NET_SCH_SFB is not set
# CONFIG_NET_SCH_SFQ is not set
# CONFIG_NET_SCH_TEQL is not set
# CONFIG_NET_SCH_TBF is not set
# CONFIG_NET_SCH_CBS is not set
# CONFIG_NET_SCH_ETF is not set
# CONFIG_NET_SCH_TAPRIO is not set
# CONFIG_NET_SCH_GRED is not set
# CONFIG_NET_SCH_DSMARK is not set
# CONFIG_NET_SCH_NETEM is not set
# CONFIG_NET_SCH_DRR is not set
# CONFIG_NET_SCH_MQPRIO is not set
# CONFIG_NET_SCH_SKBPRIO is not set
# CONFIG_NET_SCH_CHOKE is not set
# CONFIG_NET_SCH_QFQ is not set
# CONFIG_NET_SCH_CODEL is not set
# CONFIG_NET_SCH_FQ_CODEL is not set
# CONFIG_NET_SCH_CAKE is not set
# CONFIG_NET_SCH_FQ is not set
# CONFIG_NET_SCH_HHF is not set
# CONFIG_NET_SCH_PIE is not set
# CONFIG_NET_SCH_INGRESS is not set
# CONFIG_NET_SCH_PLUG is not set
# CONFIG_NET_SCH_ETS is not set
# CONFIG_NET_SCH_DEFAULT is not set

#
# Classification
#
CONFIG_NET_CLS=y
# CONFIG_NET_CLS_BASIC is not set
# CONFIG_NET_CLS_TCINDEX is not set
# CONFIG_NET_CLS_ROUTE4 is not set
# CONFIG_NET_CLS_FW is not set
# CONFIG_NET_CLS_U32 is not set
# CONFIG_NET_CLS_RSVP is not set
# CONFIG_NET_CLS_RSVP6 is not set
# CONFIG_NET_CLS_FLOW is not set
# CONFIG_NET_CLS_CGROUP is not set
# CONFIG_NET_CLS_BPF is not set
# CONFIG_NET_CLS_FLOWER is not set
# CONFIG_NET_CLS_MATCHALL is not set
CONFIG_NET_EMATCH=y
CONFIG_NET_EMATCH_STACK=32
# CONFIG_NET_EMATCH_CMP is not set
# CONFIG_NET_EMATCH_NBYTE is not set
# CONFIG_NET_EMATCH_U32 is not set
# CONFIG_NET_EMATCH_META is not set
# CONFIG_NET_EMATCH_TEXT is not set
# CONFIG_NET_EMATCH_IPT is not set
CONFIG_NET_CLS_ACT=y
# CONFIG_NET_ACT_POLICE is not set
# CONFIG_NET_ACT_GACT is not set
# CONFIG_NET_ACT_MIRRED is not set
# CONFIG_NET_ACT_SAMPLE is not set
# CONFIG_NET_ACT_IPT is not set
# CONFIG_NET_ACT_NAT is not set
# CONFIG_NET_ACT_PEDIT is not set
# CONFIG_NET_ACT_SIMP is not set
# CONFIG_NET_ACT_SKBEDIT is not set
# CONFIG_NET_ACT_CSUM is not set
# CONFIG_NET_ACT_MPLS is not set
# CONFIG_NET_ACT_VLAN is not set
# CONFIG_NET_ACT_BPF is not set
# CONFIG_NET_ACT_SKBMOD is not set
# CONFIG_NET_ACT_IFE is not set
# CONFIG_NET_ACT_TUNNEL_KEY is not set
# CONFIG_NET_ACT_GATE is not set
# CONFIG_NET_TC_SKB_EXT is not set
CONFIG_NET_SCH_FIFO=y
CONFIG_DCB=y
CONFIG_DNS_RESOLVER=y
# CONFIG_BATMAN_ADV is not set
# CONFIG_OPENVSWITCH is not set
# CONFIG_VSOCKETS is not set
# CONFIG_NETLINK_DIAG is not set
CONFIG_MPLS=y
# CONFIG_NET_MPLS_GSO is not set
# CONFIG_MPLS_ROUTING is not set
# CONFIG_NET_NSH is not set
# CONFIG_HSR is not set
# CONFIG_NET_SWITCHDEV is not set
# CONFIG_NET_L3_MASTER_DEV is not set
# CONFIG_QRTR is not set
# CONFIG_NET_NCSI is not set
CONFIG_RPS=y
CONFIG_RFS_ACCEL=y
CONFIG_XPS=y
CONFIG_CGROUP_NET_PRIO=y
CONFIG_CGROUP_NET_CLASSID=y
CONFIG_NET_RX_BUSY_POLL=y
CONFIG_BQL=y
CONFIG_BPF_JIT=y
CONFIG_NET_FLOW_LIMIT=y

#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_NET_DROP_MONITOR is not set
# end of Network testing
# end of Networking options

CONFIG_HAMRADIO=y

#
# Packet Radio protocols
#
# CONFIG_AX25 is not set
# CONFIG_CAN is not set
# CONFIG_BT is not set
# CONFIG_AF_RXRPC is not set
# CONFIG_AF_KCM is not set
CONFIG_FIB_RULES=y
CONFIG_WIRELESS=y
# CONFIG_CFG80211 is not set

#
# CFG80211 needs to be enabled for MAC80211
#
CONFIG_MAC80211_STA_HASH_MAX_SIZE=0
# CONFIG_WIMAX is not set
CONFIG_RFKILL=y
CONFIG_RFKILL_LEDS=y
CONFIG_RFKILL_INPUT=y
# CONFIG_RFKILL_GPIO is not set
# CONFIG_NET_9P is not set
# CONFIG_CAIF is not set
# CONFIG_CEPH_LIB is not set
# CONFIG_NFC is not set
# CONFIG_PSAMPLE is not set
# CONFIG_NET_IFE is not set
# CONFIG_LWTUNNEL is not set
CONFIG_PAGE_POOL=y
CONFIG_FAILOVER=y
CONFIG_ETHTOOL_NETLINK=y
CONFIG_HAVE_EBPF_JIT=y

#
# Device Drivers
#
CONFIG_HAVE_EISA=3Dy
# CONFIG_EISA is not set
CONFIG_HAVE_PCI=3Dy
CONFIG_PCI=3Dy
CONFIG_PCI_DOMAINS=3Dy
CONFIG_PCIEPORTBUS=3Dy
CONFIG_HOTPLUG_PCI_PCIE=3Dy
CONFIG_PCIEAER=3Dy
# CONFIG_PCIEAER_INJECT is not set
# CONFIG_PCIE_ECRC is not set
CONFIG_PCIEASPM=y
CONFIG_PCIEASPM_DEFAULT=y
# CONFIG_PCIEASPM_POWERSAVE is not set
# CONFIG_PCIEASPM_POWER_SUPERSAVE is not set
# CONFIG_PCIEASPM_PERFORMANCE is not set
CONFIG_PCIE_PME=y
# CONFIG_PCIE_DPC is not set
# CONFIG_PCIE_PTM is not set
# CONFIG_PCIE_BW is not set
CONFIG_PCI_MSI=y
CONFIG_PCI_MSI_IRQ_DOMAIN=y
CONFIG_PCI_QUIRKS=y
# CONFIG_PCI_DEBUG is not set
CONFIG_PCI_REALLOC_ENABLE_AUTO=y
# CONFIG_PCI_STUB is not set
# CONFIG_PCI_PF_STUB is not set
# CONFIG_XEN_PCIDEV_FRONTEND is not set
CONFIG_PCI_ATS=y
CONFIG_PCI_LOCKLESS_CONFIG=y
CONFIG_PCI_IOV=y
CONFIG_PCI_PRI=y
CONFIG_PCI_PASID=y
CONFIG_PCI_LABEL=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
# CONFIG_HOTPLUG_PCI_ACPI_IBM is not set
CONFIG_HOTPLUG_PCI_CPCI=y
# CONFIG_HOTPLUG_PCI_CPCI_ZT5550 is not set
# CONFIG_HOTPLUG_PCI_CPCI_GENERIC is not set
# CONFIG_HOTPLUG_PCI_SHPC is not set

#
# PCI controller drivers
#
# CONFIG_VMD is not set

#
# DesignWare PCI Core Support
#
# CONFIG_PCIE_DW_PLAT_HOST is not set
# CONFIG_PCI_MESON is not set
# end of DesignWare PCI Core Support

#
# Mobiveil PCIe Core Support
#
# end of Mobiveil PCIe Core Support

#
# Cadence PCIe controllers support
#
# end of Cadence PCIe controllers support
# end of PCI controller drivers

#
# PCI Endpoint
#
# CONFIG_PCI_ENDPOINT is not set
# end of PCI Endpoint

#
# PCI switch controller drivers
#
# CONFIG_PCI_SW_SWITCHTEC is not set
# end of PCI switch controller drivers

# CONFIG_PCCARD is not set
CONFIG_RAPIDIO=y
# CONFIG_RAPIDIO_TSI721 is not set
CONFIG_RAPIDIO_DISC_TIMEOUT=30
# CONFIG_RAPIDIO_ENABLE_RX_TX_PORTS is not set
CONFIG_RAPIDIO_DMA_ENGINE=y
# CONFIG_RAPIDIO_DEBUG is not set
# CONFIG_RAPIDIO_ENUM_BASIC is not set
# CONFIG_RAPIDIO_CHMAN is not set
# CONFIG_RAPIDIO_MPORT_CDEV is not set

#
# RapidIO Switch drivers
#
# CONFIG_RAPIDIO_TSI57X is not set
# CONFIG_RAPIDIO_CPS_XX is not set
# CONFIG_RAPIDIO_TSI568 is not set
# CONFIG_RAPIDIO_CPS_GEN2 is not set
# CONFIG_RAPIDIO_RXS_GEN3 is not set
# end of RapidIO Switch drivers

#
# Generic Driver Options
#
CONFIG_UEVENT_HELPER=y
CONFIG_UEVENT_HELPER_PATH=""
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
CONFIG_PREVENT_FIRMWARE_BUILD=y

#
# Firmware loader
#
CONFIG_FW_LOADER=y
CONFIG_EXTRA_FIRMWARE=""
# CONFIG_FW_LOADER_USER_HELPER is not set
# CONFIG_FW_LOADER_COMPRESS is not set
CONFIG_FW_CACHE=y
# end of Firmware loader

CONFIG_ALLOW_DEV_COREDUMP=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_DEBUG_DEVRES is not set
# CONFIG_DEBUG_TEST_DRIVER_REMOVE is not set
# CONFIG_TEST_ASYNC_DRIVER_PROBE is not set
CONFIG_SYS_HYPERVISOR=y
CONFIG_GENERIC_CPU_AUTOPROBE=y
CONFIG_GENERIC_CPU_VULNERABILITIES=y
CONFIG_REGMAP=y
CONFIG_REGMAP_I2C=y
CONFIG_REGMAP_SPI=y
CONFIG_REGMAP_MMIO=y
CONFIG_REGMAP_IRQ=y
CONFIG_DMA_SHARED_BUFFER=y
# CONFIG_DMA_FENCE_TRACE is not set
# end of Generic Driver Options

#
# Bus devices
#
# CONFIG_MHI_BUS is not set
# end of Bus devices

CONFIG_CONNECTOR=y
CONFIG_PROC_EVENTS=y
# CONFIG_GNSS is not set
# CONFIG_MTD is not set
# CONFIG_OF is not set
CONFIG_ARCH_MIGHT_HAVE_PC_PARPORT=y
# CONFIG_PARPORT is not set
CONFIG_PNP=y
# CONFIG_PNP_DEBUG_MESSAGES is not set

#
# Protocols
#
CONFIG_PNPACPI=y
CONFIG_BLK_DEV=y
# CONFIG_BLK_DEV_NULL_BLK is not set
# CONFIG_BLK_DEV_FD is not set
CONFIG_CDROM=y
# CONFIG_BLK_DEV_PCIESSD_MTIP32XX is not set
# CONFIG_ZRAM is not set
# CONFIG_BLK_DEV_UMEM is not set
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_LOOP_MIN_COUNT=8
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_DRBD is not set
# CONFIG_BLK_DEV_NBD is not set
# CONFIG_BLK_DEV_SKD is not set
# CONFIG_BLK_DEV_SX8 is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=16
CONFIG_BLK_DEV_RAM_SIZE=65536
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
CONFIG_XEN_BLKDEV_FRONTEND=y
# CONFIG_XEN_BLKDEV_BACKEND is not set
CONFIG_VIRTIO_BLK=y
# CONFIG_BLK_DEV_RBD is not set
# CONFIG_BLK_DEV_RSXX is not set

#
# NVME Support
#
# CONFIG_BLK_DEV_NVME is not set
# CONFIG_NVME_FC is not set
# end of NVME Support

#
# Misc devices
#
# CONFIG_AD525X_DPOT is not set
# CONFIG_DUMMY_IRQ is not set
# CONFIG_IBM_ASM is not set
# CONFIG_PHANTOM is not set
# CONFIG_TIFM_CORE is not set
# CONFIG_ICS932S401 is not set
# CONFIG_ENCLOSURE_SERVICES is not set
# CONFIG_HP_ILO is not set
# CONFIG_APDS9802ALS is not set
# CONFIG_ISL29003 is not set
# CONFIG_ISL29020 is not set
# CONFIG_SENSORS_TSL2550 is not set
# CONFIG_SENSORS_BH1770 is not set
# CONFIG_SENSORS_APDS990X is not set
# CONFIG_HMC6352 is not set
# CONFIG_DS1682 is not set
# CONFIG_LATTICE_ECP3_CONFIG is not set
CONFIG_SRAM=y
# CONFIG_PCI_ENDPOINT_TEST is not set
# CONFIG_XILINX_SDFEC is not set
# CONFIG_PVPANIC is not set
# CONFIG_C2PORT is not set

#
# EEPROM support
#
# CONFIG_EEPROM_AT24 is not set
# CONFIG_EEPROM_AT25 is not set
# CONFIG_EEPROM_LEGACY is not set
# CONFIG_EEPROM_MAX6875 is not set
# CONFIG_EEPROM_93CX6 is not set
# CONFIG_EEPROM_93XX46 is not set
# CONFIG_EEPROM_IDT_89HPESX is not set
# CONFIG_EEPROM_EE1004 is not set
# end of EEPROM support

# CONFIG_CB710_CORE is not set

#
# Texas Instruments shared transport line discipline
#
# CONFIG_TI_ST is not set
# end of Texas Instruments shared transport line discipline

# CONFIG_SENSORS_LIS3_I2C is not set
# CONFIG_ALTERA_STAPL is not set
CONFIG_INTEL_MEI=m
CONFIG_INTEL_MEI_ME=m
# CONFIG_INTEL_MEI_TXE is not set
# CONFIG_VMWARE_VMCI is not set

#
# Intel MIC & related support
#
# CONFIG_INTEL_MIC_BUS is not set
# CONFIG_SCIF_BUS is not set
# CONFIG_VOP_BUS is not set
# end of Intel MIC & related support

# CONFIG_GENWQE is not set
# CONFIG_ECHO is not set
# CONFIG_MISC_ALCOR_PCI is not set
# CONFIG_MISC_RTSX_PCI is not set
# CONFIG_MISC_RTSX_USB is not set
# CONFIG_HABANA_AI is not set
# CONFIG_UACCE is not set
# end of Misc devices

CONFIG_HAVE_IDE=y
# CONFIG_IDE is not set

#
# SCSI device support
#
CONFIG_SCSI_MOD=y
# CONFIG_RAID_ATTRS is not set
CONFIG_SCSI=y
CONFIG_SCSI_DMA=y
CONFIG_SCSI_PROC_FS=y

#
# SCSI support type (disk, tape, CD-ROM)
#
CONFIG_BLK_DEV_SD=y
# CONFIG_CHR_DEV_ST is not set
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=y
# CONFIG_CHR_DEV_SCH is not set
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SCAN_ASYNC=y

#
# SCSI Transports
#
# CONFIG_SCSI_SPI_ATTRS is not set
# CONFIG_SCSI_FC_ATTRS is not set
# CONFIG_SCSI_ISCSI_ATTRS is not set
# CONFIG_SCSI_SAS_ATTRS is not set
# CONFIG_SCSI_SAS_LIBSAS is not set
# CONFIG_SCSI_SRP_ATTRS is not set
# end of SCSI Transports

CONFIG_SCSI_LOWLEVEL=y
# CONFIG_ISCSI_TCP is not set
# CONFIG_ISCSI_BOOT_SYSFS is not set
# CONFIG_SCSI_CXGB3_ISCSI is not set
# CONFIG_SCSI_CXGB4_ISCSI is not set
# CONFIG_SCSI_BNX2_ISCSI is not set
# CONFIG_BE2ISCSI is not set
# CONFIG_BLK_DEV_3W_XXXX_RAID is not set
# CONFIG_SCSI_HPSA is not set
# CONFIG_SCSI_3W_9XXX is not set
# CONFIG_SCSI_3W_SAS is not set
# CONFIG_SCSI_ACARD is not set
# CONFIG_SCSI_AACRAID is not set
# CONFIG_SCSI_AIC7XXX is not set
# CONFIG_SCSI_AIC79XX is not set
# CONFIG_SCSI_AIC94XX is not set
# CONFIG_SCSI_MVSAS is not set
# CONFIG_SCSI_MVUMI is not set
# CONFIG_SCSI_DPT_I2O is not set
# CONFIG_SCSI_ADVANSYS is not set
# CONFIG_SCSI_ARCMSR is not set
# CONFIG_SCSI_ESAS2R is not set
CONFIG_MEGARAID_NEWGEN=y
# CONFIG_MEGARAID_MM is not set
# CONFIG_MEGARAID_LEGACY is not set
# CONFIG_MEGARAID_SAS is not set
# CONFIG_SCSI_MPT3SAS is not set
# CONFIG_SCSI_MPT2SAS is not set
# CONFIG_SCSI_SMARTPQI is not set
# CONFIG_SCSI_UFSHCD is not set
# CONFIG_SCSI_HPTIOP is not set
# CONFIG_SCSI_BUSLOGIC is not set
# CONFIG_SCSI_MYRB is not set
# CONFIG_SCSI_MYRS is not set
# CONFIG_VMWARE_PVSCSI is not set
# CONFIG_XEN_SCSI_FRONTEND is not set
# CONFIG_SCSI_SNIC is not set
# CONFIG_SCSI_DMX3191D is not set
# CONFIG_SCSI_FDOMAIN_PCI is not set
# CONFIG_SCSI_GDTH is not set
# CONFIG_SCSI_ISCI is not set
# CONFIG_SCSI_IPS is not set
# CONFIG_SCSI_INITIO is not set
# CONFIG_SCSI_INIA100 is not set
# CONFIG_SCSI_STEX is not set
# CONFIG_SCSI_SYM53C8XX_2 is not set
# CONFIG_SCSI_IPR is not set
# CONFIG_SCSI_QLOGIC_1280 is not set
# CONFIG_SCSI_QLA_ISCSI is not set
# CONFIG_SCSI_DC395x is not set
# CONFIG_SCSI_AM53C974 is not set
# CONFIG_SCSI_WD719X is not set
# CONFIG_SCSI_DEBUG is not set
# CONFIG_SCSI_PMCRAID is not set
# CONFIG_SCSI_PM8001 is not set
# CONFIG_SCSI_VIRTIO is not set
# CONFIG_SCSI_DH is not set
# end of SCSI device support

CONFIG_ATA=y
CONFIG_SATA_HOST=y
CONFIG_PATA_TIMINGS=y
CONFIG_ATA_VERBOSE_ERROR=y
CONFIG_ATA_FORCE=y
CONFIG_ATA_ACPI=y
CONFIG_SATA_ZPODD=y
CONFIG_SATA_PMP=y

#
# Controllers with non-SFF native interface
#
CONFIG_SATA_AHCI=m
CONFIG_SATA_MOBILE_LPM_POLICY=0
CONFIG_SATA_AHCI_PLATFORM=m
# CONFIG_SATA_INIC162X is not set
CONFIG_SATA_ACARD_AHCI=m
# CONFIG_SATA_SIL24 is not set
CONFIG_ATA_SFF=y

#
# SFF controllers with custom DMA interface
#
# CONFIG_PDC_ADMA is not set
# CONFIG_SATA_QSTOR is not set
# CONFIG_SATA_SX4 is not set
CONFIG_ATA_BMDMA=y

#
# SATA SFF controllers with BMDMA
#
CONFIG_ATA_PIIX=y
# CONFIG_SATA_DWC is not set
# CONFIG_SATA_MV is not set
# CONFIG_SATA_NV is not set
# CONFIG_SATA_PROMISE is not set
# CONFIG_SATA_SIL is not set
# CONFIG_SATA_SIS is not set
# CONFIG_SATA_SVW is not set
# CONFIG_SATA_ULI is not set
# CONFIG_SATA_VIA is not set
# CONFIG_SATA_VITESSE is not set

#
# PATA SFF controllers with BMDMA
#
# CONFIG_PATA_ALI is not set
# CONFIG_PATA_AMD is not set
# CONFIG_PATA_ARTOP is not set
# CONFIG_PATA_ATIIXP is not set
# CONFIG_PATA_ATP867X is not set
# CONFIG_PATA_CMD64X is not set
# CONFIG_PATA_CYPRESS is not set
# CONFIG_PATA_EFAR is not set
# CONFIG_PATA_HPT366 is not set
# CONFIG_PATA_HPT37X is not set
# CONFIG_PATA_HPT3X2N is not set
# CONFIG_PATA_HPT3X3 is not set
# CONFIG_PATA_IT8213 is not set
# CONFIG_PATA_IT821X is not set
# CONFIG_PATA_JMICRON is not set
# CONFIG_PATA_MARVELL is not set
# CONFIG_PATA_NETCELL is not set
# CONFIG_PATA_NINJA32 is not set
# CONFIG_PATA_NS87415 is not set
# CONFIG_PATA_OLDPIIX is not set
# CONFIG_PATA_OPTIDMA is not set
# CONFIG_PATA_PDC2027X is not set
# CONFIG_PATA_PDC_OLD is not set
# CONFIG_PATA_RADISYS is not set
# CONFIG_PATA_RDC is not set
# CONFIG_PATA_SCH is not set
# CONFIG_PATA_SERVERWORKS is not set
# CONFIG_PATA_SIL680 is not set
CONFIG_PATA_SIS=y
# CONFIG_PATA_TOSHIBA is not set
# CONFIG_PATA_TRIFLEX is not set
# CONFIG_PATA_VIA is not set
# CONFIG_PATA_WINBOND is not set

#
# PIO-only SFF controllers
#
# CONFIG_PATA_CMD640_PCI is not set
# CONFIG_PATA_MPIIX is not set
# CONFIG_PATA_NS87410 is not set
# CONFIG_PATA_OPTI is not set
# CONFIG_PATA_PLATFORM is not set
# CONFIG_PATA_RZ1000 is not set

#
# Generic fallback / legacy drivers
#
# CONFIG_PATA_ACPI is not set
CONFIG_ATA_GENERIC=y
# CONFIG_PATA_LEGACY is not set
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
# CONFIG_MD_LINEAR is not set
# CONFIG_MD_RAID0 is not set
# CONFIG_MD_RAID1 is not set
# CONFIG_MD_RAID10 is not set
# CONFIG_MD_RAID456 is not set
# CONFIG_MD_MULTIPATH is not set
# CONFIG_MD_FAULTY is not set
# CONFIG_BCACHE is not set
CONFIG_BLK_DEV_DM_BUILTIN=y
CONFIG_BLK_DEV_DM=y
# CONFIG_DM_DEBUG is not set
# CONFIG_DM_UNSTRIPED is not set
# CONFIG_DM_CRYPT is not set
# CONFIG_DM_SNAPSHOT is not set
# CONFIG_DM_THIN_PROVISIONING is not set
# CONFIG_DM_CACHE is not set
# CONFIG_DM_WRITECACHE is not set
# CONFIG_DM_EBS is not set
# CONFIG_DM_ERA is not set
# CONFIG_DM_CLONE is not set
# CONFIG_DM_MIRROR is not set
# CONFIG_DM_RAID is not set
# CONFIG_DM_ZERO is not set
# CONFIG_DM_MULTIPATH is not set
# CONFIG_DM_DELAY is not set
# CONFIG_DM_DUST is not set
# CONFIG_DM_INIT is not set
CONFIG_DM_UEVENT=y
# CONFIG_DM_FLAKEY is not set
# CONFIG_DM_VERITY is not set
# CONFIG_DM_SWITCH is not set
# CONFIG_DM_LOG_WRITES is not set
# CONFIG_DM_INTEGRITY is not set
# CONFIG_TARGET_CORE is not set
CONFIG_FUSION=y
# CONFIG_FUSION_SPI is not set
# CONFIG_FUSION_SAS is not set
CONFIG_FUSION_MAX_SGE=128
CONFIG_FUSION_LOGGING=y

#
# IEEE 1394 (FireWire) support
#
# CONFIG_FIREWIRE is not set
# CONFIG_FIREWIRE_NOSY is not set
# end of IEEE 1394 (FireWire) support

CONFIG_MACINTOSH_DRIVERS=y
# CONFIG_MAC_EMUMOUSEBTN is not set
CONFIG_NETDEVICES=y
CONFIG_NET_CORE=y
# CONFIG_BONDING is not set
# CONFIG_DUMMY is not set
# CONFIG_WIREGUARD is not set
# CONFIG_EQUALIZER is not set
CONFIG_NET_FC=y
# CONFIG_IFB is not set
# CONFIG_NET_TEAM is not set
# CONFIG_MACVLAN is not set
# CONFIG_IPVLAN is not set
# CONFIG_VXLAN is not set
# CONFIG_GENEVE is not set
# CONFIG_BAREUDP is not set
# CONFIG_GTP is not set
# CONFIG_MACSEC is not set
# CONFIG_NETCONSOLE is not set
# CONFIG_RIONET is not set
CONFIG_TUN=y
# CONFIG_TUN_VNET_CROSS_LE is not set
# CONFIG_VETH is not set
CONFIG_VIRTIO_NET=y
# CONFIG_NLMON is not set
# CONFIG_ARCNET is not set

#
# Distributed Switch Architecture drivers
#
# end of Distributed Switch Architecture drivers

CONFIG_ETHERNET=y
CONFIG_NET_VENDOR_3COM=y
# CONFIG_VORTEX is not set
# CONFIG_TYPHOON is not set
CONFIG_NET_VENDOR_ADAPTEC=y
# CONFIG_ADAPTEC_STARFIRE is not set
CONFIG_NET_VENDOR_AGERE=y
# CONFIG_ET131X is not set
CONFIG_NET_VENDOR_ALACRITECH=y
# CONFIG_SLICOSS is not set
CONFIG_NET_VENDOR_ALTEON=y
# CONFIG_ACENIC is not set
# CONFIG_ALTERA_TSE is not set
CONFIG_NET_VENDOR_AMAZON=y
# CONFIG_ENA_ETHERNET is not set
CONFIG_NET_VENDOR_AMD=y
# CONFIG_AMD8111_ETH is not set
# CONFIG_PCNET32 is not set
# CONFIG_AMD_XGBE is not set
CONFIG_NET_VENDOR_AQUANTIA=y
# CONFIG_AQTION is not set
CONFIG_NET_VENDOR_ARC=y
CONFIG_NET_VENDOR_ATHEROS=y
# CONFIG_ATL2 is not set
# CONFIG_ATL1 is not set
# CONFIG_ATL1E is not set
# CONFIG_ATL1C is not set
# CONFIG_ALX is not set
# CONFIG_NET_VENDOR_AURORA is not set
CONFIG_NET_VENDOR_BROADCOM=y
# CONFIG_B44 is not set
# CONFIG_BCMGENET is not set
# CONFIG_BNX2 is not set
# CONFIG_CNIC is not set
CONFIG_TIGON3=m
CONFIG_TIGON3_HWMON=y
# CONFIG_BNX2X is not set
# CONFIG_SYSTEMPORT is not set
# CONFIG_BNXT is not set
CONFIG_NET_VENDOR_BROCADE=y
# CONFIG_BNA is not set
CONFIG_NET_VENDOR_CADENCE=y
# CONFIG_MACB is not set
CONFIG_NET_VENDOR_CAVIUM=y
# CONFIG_THUNDER_NIC_PF is not set
# CONFIG_THUNDER_NIC_VF is not set
# CONFIG_THUNDER_NIC_BGX is not set
# CONFIG_THUNDER_NIC_RGX is not set
# CONFIG_LIQUIDIO is not set
# CONFIG_LIQUIDIO_VF is not set
CONFIG_NET_VENDOR_CHELSIO=y
# CONFIG_CHELSIO_T1 is not set
# CONFIG_CHELSIO_T3 is not set
# CONFIG_CHELSIO_T4 is not set
# CONFIG_CHELSIO_T4VF is not set
CONFIG_NET_VENDOR_CISCO=y
# CONFIG_ENIC is not set
CONFIG_NET_VENDOR_CORTINA=y
# CONFIG_CX_ECAT is not set
# CONFIG_DNET is not set
CONFIG_NET_VENDOR_DEC=y
CONFIG_NET_TULIP=y
# CONFIG_DE2104X is not set
# CONFIG_TULIP is not set
# CONFIG_DE4X5 is not set
# CONFIG_WINBOND_840 is not set
# CONFIG_DM9102 is not set
# CONFIG_ULI526X is not set
CONFIG_NET_VENDOR_DLINK=y
# CONFIG_DL2K is not set
# CONFIG_SUNDANCE is not set
CONFIG_NET_VENDOR_EMULEX=y
# CONFIG_BE2NET is not set
CONFIG_NET_VENDOR_EZCHIP=y
CONFIG_NET_VENDOR_GOOGLE=y
# CONFIG_GVE is not set
CONFIG_NET_VENDOR_HUAWEI=y
# CONFIG_HINIC is not set
CONFIG_NET_VENDOR_I825XX=y
CONFIG_NET_VENDOR_INTEL=y
# CONFIG_E100 is not set
# CONFIG_E1000 is not set
# CONFIG_E1000E is not set
# CONFIG_IGB is not set
# CONFIG_IGBVF is not set
# CONFIG_IXGB is not set
# CONFIG_IXGBE is not set
# CONFIG_IXGBEVF is not set
# CONFIG_I40E is not set
# CONFIG_I40EVF is not set
# CONFIG_ICE is not set
# CONFIG_FM10K is not set
# CONFIG_IGC is not set
# CONFIG_JME is not set
CONFIG_NET_VENDOR_MARVELL=y
# CONFIG_MVMDIO is not set
# CONFIG_SKGE is not set
# CONFIG_SKY2 is not set
CONFIG_NET_VENDOR_MELLANOX=y
# CONFIG_MLX4_EN is not set
# CONFIG_MLX5_CORE is not set
# CONFIG_MLXSW_CORE is not set
# CONFIG_MLXFW is not set
CONFIG_NET_VENDOR_MICREL=y
# CONFIG_KS8842 is not set
# CONFIG_KS8851 is not set
# CONFIG_KS8851_MLL is not set
# CONFIG_KSZ884X_PCI is not set
CONFIG_NET_VENDOR_MICROCHIP=y
# CONFIG_ENC28J60 is not set
# CONFIG_ENCX24J600 is not set
# CONFIG_LAN743X is not set
CONFIG_NET_VENDOR_MICROSEMI=y
CONFIG_NET_VENDOR_MYRI=y
# CONFIG_MYRI10GE is not set
# CONFIG_FEALNX is not set
CONFIG_NET_VENDOR_NATSEMI=y
# CONFIG_NATSEMI is not set
# CONFIG_NS83820 is not set
CONFIG_NET_VENDOR_NETERION=y
# CONFIG_S2IO is not set
# CONFIG_VXGE is not set
CONFIG_NET_VENDOR_NETRONOME=y
# CONFIG_NFP is not set
CONFIG_NET_VENDOR_NI=y
# CONFIG_NI_XGE_MANAGEMENT_ENET is not set
CONFIG_NET_VENDOR_8390=y
# CONFIG_NE2K_PCI is not set
CONFIG_NET_VENDOR_NVIDIA=y
# CONFIG_FORCEDETH is not set
CONFIG_NET_VENDOR_OKI=y
# CONFIG_ETHOC is not set
CONFIG_NET_VENDOR_PACKET_ENGINES=y
# CONFIG_HAMACHI is not set
# CONFIG_YELLOWFIN is not set
CONFIG_NET_VENDOR_PENSANDO=y
# CONFIG_IONIC is not set
CONFIG_NET_VENDOR_QLOGIC=y
# CONFIG_QLA3XXX is not set
# CONFIG_QLCNIC is not set
# CONFIG_NETXEN_NIC is not set
# CONFIG_QED is not set
CONFIG_NET_VENDOR_QUALCOMM=y
# CONFIG_QCOM_EMAC is not set
# CONFIG_RMNET is not set
CONFIG_NET_VENDOR_RDC=y
# CONFIG_R6040 is not set
CONFIG_NET_VENDOR_REALTEK=y
# CONFIG_8139CP is not set
# CONFIG_8139TOO is not set
# CONFIG_R8169 is not set
CONFIG_NET_VENDOR_RENESAS=y
CONFIG_NET_VENDOR_ROCKER=y
CONFIG_NET_VENDOR_SAMSUNG=y
# CONFIG_SXGBE_ETH is not set
CONFIG_NET_VENDOR_SEEQ=y
CONFIG_NET_VENDOR_SOLARFLARE=y
# CONFIG_SFC is not set
# CONFIG_SFC_FALCON is not set
CONFIG_NET_VENDOR_SILAN=y
# CONFIG_SC92031 is not set
CONFIG_NET_VENDOR_SIS=y
# CONFIG_SIS900 is not set
# CONFIG_SIS190 is not set
CONFIG_NET_VENDOR_SMSC=y
# CONFIG_EPIC100 is not set
# CONFIG_SMSC911X is not set
# CONFIG_SMSC9420 is not set
CONFIG_NET_VENDOR_SOCIONEXT=y
CONFIG_NET_VENDOR_STMICRO=y
# CONFIG_STMMAC_ETH is not set
CONFIG_NET_VENDOR_SUN=y
# CONFIG_HAPPYMEAL is not set
# CONFIG_SUNGEM is not set
# CONFIG_CASSINI is not set
# CONFIG_NIU is not set
CONFIG_NET_VENDOR_SYNOPSYS=y
# CONFIG_DWC_XLGMAC is not set
CONFIG_NET_VENDOR_TEHUTI=y
# CONFIG_TEHUTI is not set
CONFIG_NET_VENDOR_TI=y
# CONFIG_TI_CPSW_PHY_SEL is not set
# CONFIG_TLAN is not set
CONFIG_NET_VENDOR_VIA=y
# CONFIG_VIA_RHINE is not set
# CONFIG_VIA_VELOCITY is not set
CONFIG_NET_VENDOR_WIZNET=y
# CONFIG_WIZNET_W5100 is not set
# CONFIG_WIZNET_W5300 is not set
CONFIG_NET_VENDOR_XILINX=y
# CONFIG_XILINX_AXI_EMAC is not set
# CONFIG_XILINX_LL_TEMAC is not set
CONFIG_FDDI=y
# CONFIG_DEFXX is not set
# CONFIG_SKFP is not set
# CONFIG_HIPPI is not set
# CONFIG_NET_SB1000 is not set
CONFIG_MDIO_DEVICE=y
CONFIG_MDIO_BUS=y
CONFIG_MDIO_DEVRES=y
# CONFIG_MDIO_BCM_UNIMAC is not set
# CONFIG_MDIO_BITBANG is not set
# CONFIG_MDIO_MSCC_MIIM is not set
# CONFIG_MDIO_MVUSB is not set
# CONFIG_MDIO_THUNDER is not set
# CONFIG_MDIO_XPCS is not set
CONFIG_PHYLIB=y
CONFIG_SWPHY=y
# CONFIG_LED_TRIGGER_PHY is not set

#
# MII PHY device drivers
#
# CONFIG_ADIN_PHY is not set
# CONFIG_AMD_PHY is not set
# CONFIG_AQUANTIA_PHY is not set
# CONFIG_AX88796B_PHY is not set
# CONFIG_BCM7XXX_PHY is not set
# CONFIG_BCM87XX_PHY is not set
# CONFIG_BROADCOM_PHY is not set
# CONFIG_BCM54140_PHY is not set
# CONFIG_BCM84881_PHY is not set
# CONFIG_CICADA_PHY is not set
# CONFIG_CORTINA_PHY is not set
# CONFIG_DAVICOM_PHY is not set
# CONFIG_DP83822_PHY is not set
# CONFIG_DP83TC811_PHY is not set
# CONFIG_DP83848_PHY is not set
# CONFIG_DP83867_PHY is not set
# CONFIG_DP83869_PHY is not set
CONFIG_FIXED_PHY=y
# CONFIG_ICPLUS_PHY is not set
# CONFIG_INTEL_XWAY_PHY is not set
# CONFIG_LSI_ET1011C_PHY is not set
# CONFIG_LXT_PHY is not set
# CONFIG_MARVELL_PHY is not set
# CONFIG_MARVELL_10G_PHY is not set
# CONFIG_MICREL_PHY is not set
# CONFIG_MICROCHIP_PHY is not set
# CONFIG_MICROCHIP_T1_PHY is not set
# CONFIG_MICROSEMI_PHY is not set
# CONFIG_NATIONAL_PHY is not set
# CONFIG_NXP_TJA11XX_PHY is not set
# CONFIG_AT803X_PHY is not set
# CONFIG_QSEMI_PHY is not set
# CONFIG_REALTEK_PHY is not set
# CONFIG_RENESAS_PHY is not set
# CONFIG_ROCKCHIP_PHY is not set
# CONFIG_SMSC_PHY is not set
# CONFIG_STE10XP is not set
# CONFIG_TERANETICS_PHY is not set
# CONFIG_VITESSE_PHY is not set
# CONFIG_XILINX_GMII2RGMII is not set
# CONFIG_MICREL_KS8995MA is not set
CONFIG_PPP=y
# CONFIG_PPP_BSDCOMP is not set
# CONFIG_PPP_DEFLATE is not set
CONFIG_PPP_FILTER=y
# CONFIG_PPP_MPPE is not set
CONFIG_PPP_MULTILINK=y
# CONFIG_PPPOE is not set
# CONFIG_PPP_ASYNC is not set
# CONFIG_PPP_SYNC_TTY is not set
# CONFIG_SLIP is not set
CONFIG_SLHC=y
# CONFIG_USB_NET_DRIVERS is not set
CONFIG_WLAN=y
# CONFIG_WIRELESS_WDS is not set
CONFIG_WLAN_VENDOR_ADMTEK=y
CONFIG_WLAN_VENDOR_ATH=y
# CONFIG_ATH_DEBUG is not set
# CONFIG_ATH5K_PCI is not set
CONFIG_WLAN_VENDOR_ATMEL=y
CONFIG_WLAN_VENDOR_BROADCOM=y
CONFIG_WLAN_VENDOR_CISCO=y
CONFIG_WLAN_VENDOR_INTEL=y
CONFIG_WLAN_VENDOR_INTERSIL=y
# CONFIG_HOSTAP is not set
# CONFIG_PRISM54 is not set
CONFIG_WLAN_VENDOR_MARVELL=y
CONFIG_WLAN_VENDOR_MEDIATEK=y
CONFIG_WLAN_VENDOR_MICROCHIP=y
CONFIG_WLAN_VENDOR_RALINK=y
CONFIG_WLAN_VENDOR_REALTEK=y
CONFIG_WLAN_VENDOR_RSI=y
CONFIG_WLAN_VENDOR_ST=y
CONFIG_WLAN_VENDOR_TI=y
CONFIG_WLAN_VENDOR_ZYDAS=y
CONFIG_WLAN_VENDOR_QUANTENNA=y

#
# Enable WiMAX (Networking options) to see the WiMAX drivers
#
CONFIG_WAN=y
# CONFIG_HDLC is not set
# CONFIG_DLCI is not set
# CONFIG_SBNI is not set
CONFIG_XEN_NETDEV_FRONTEND=y
# CONFIG_XEN_NETDEV_BACKEND is not set
# CONFIG_VMXNET3 is not set
# CONFIG_FUJITSU_ES is not set
# CONFIG_NETDEVSIM is not set
CONFIG_NET_FAILOVER=y
CONFIG_ISDN=y
# CONFIG_MISDN is not set
# CONFIG_NVM is not set

#
# Input device support
#
CONFIG_INPUT=y
# CONFIG_INPUT_LEDS is not set
# CONFIG_INPUT_FF_MEMLESS is not set
# CONFIG_INPUT_POLLDEV is not set
# CONFIG_INPUT_SPARSEKMAP is not set
# CONFIG_INPUT_MATRIXKMAP is not set

#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
CONFIG_INPUT_MOUSEDEV_PSAUX=y
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set

#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
# CONFIG_KEYBOARD_ADP5520 is not set
# CONFIG_KEYBOARD_ADP5588 is not set
# CONFIG_KEYBOARD_ADP5589 is not set
# CONFIG_KEYBOARD_APPLESPI is not set
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_QT1050 is not set
# CONFIG_KEYBOARD_QT1070 is not set
# CONFIG_KEYBOARD_QT2160 is not set
# CONFIG_KEYBOARD_DLINK_DIR685 is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_GPIO is not set
# CONFIG_KEYBOARD_GPIO_POLLED is not set
# CONFIG_KEYBOARD_TCA6416 is not set
# CONFIG_KEYBOARD_TCA8418 is not set
# CONFIG_KEYBOARD_MATRIX is not set
# CONFIG_KEYBOARD_LM8323 is not set
# CONFIG_KEYBOARD_LM8333 is not set
# CONFIG_KEYBOARD_MAX7359 is not set
# CONFIG_KEYBOARD_MCS is not set
# CONFIG_KEYBOARD_MPR121 is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_KEYBOARD_OPENCORES is not set
# CONFIG_KEYBOARD_SAMSUNG is not set
# CONFIG_KEYBOARD_STOWAWAY is not set
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_TM2_TOUCHKEY is not set
# CONFIG_KEYBOARD_TWL4030 is not set
# CONFIG_KEYBOARD_XTKBD is not set
CONFIG_INPUT_MOUSE=y
# CONFIG_MOUSE_PS2 is not set
# CONFIG_MOUSE_SERIAL is not set
# CONFIG_MOUSE_APPLETOUCH is not set
# CONFIG_MOUSE_BCM5974 is not set
# CONFIG_MOUSE_CYAPA is not set
# CONFIG_MOUSE_ELAN_I2C is not set
# CONFIG_MOUSE_VSXXXAA is not set
# CONFIG_MOUSE_GPIO is not set
# CONFIG_MOUSE_SYNAPTICS_I2C is not set
# CONFIG_MOUSE_SYNAPTICS_USB is not set
CONFIG_INPUT_JOYSTICK=y
# CONFIG_JOYSTICK_ANALOG is not set
# CONFIG_JOYSTICK_A3D is not set
# CONFIG_JOYSTICK_ADI is not set
# CONFIG_JOYSTICK_COBRA is not set
# CONFIG_JOYSTICK_GF2K is not set
# CONFIG_JOYSTICK_GRIP is not set
# CONFIG_JOYSTICK_GRIP_MP is not set
# CONFIG_JOYSTICK_GUILLEMOT is not set
# CONFIG_JOYSTICK_INTERACT is not set
# CONFIG_JOYSTICK_SIDEWINDER is not set
# CONFIG_JOYSTICK_TMDC is not set
# CONFIG_JOYSTICK_IFORCE is not set
# CONFIG_JOYSTICK_WARRIOR is not set
# CONFIG_JOYSTICK_MAGELLAN is not set
# CONFIG_JOYSTICK_SPACEORB is not set
# CONFIG_JOYSTICK_SPACEBALL is not set
# CONFIG_JOYSTICK_STINGER is not set
# CONFIG_JOYSTICK_TWIDJOY is not set
# CONFIG_JOYSTICK_ZHENHUA is not set
# CONFIG_JOYSTICK_AS5011 is not set
# CONFIG_JOYSTICK_JOYDUMP is not set
# CONFIG_JOYSTICK_XPAD is not set
# CONFIG_JOYSTICK_PSXPAD_SPI is not set
# CONFIG_JOYSTICK_PXRC is not set
# CONFIG_JOYSTICK_FSIA6B is not set
CONFIG_INPUT_TABLET=y
# CONFIG_TABLET_USB_ACECAD is not set
# CONFIG_TABLET_USB_AIPTEK is not set
# CONFIG_TABLET_USB_GTCO is not set
# CONFIG_TABLET_USB_HANWANG is not set
# CONFIG_TABLET_USB_KBTAB is not set
# CONFIG_TABLET_USB_PEGASUS is not set
# CONFIG_TABLET_SERIAL_WACOM4 is not set
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_PROPERTIES=y
# CONFIG_TOUCHSCREEN_88PM860X is not set
# CONFIG_TOUCHSCREEN_ADS7846 is not set
# CONFIG_TOUCHSCREEN_AD7877 is not set
# CONFIG_TOUCHSCREEN_AD7879 is not set
# CONFIG_TOUCHSCREEN_ATMEL_MXT is not set
# CONFIG_TOUCHSCREEN_AUO_PIXCIR is not set
# CONFIG_TOUCHSCREEN_BU21013 is not set
# CONFIG_TOUCHSCREEN_BU21029 is not set
# CONFIG_TOUCHSCREEN_CHIPONE_ICN8505 is not set
# CONFIG_TOUCHSCREEN_CY8CTMA140 is not set
# CONFIG_TOUCHSCREEN_CY8CTMG110 is not set
# CONFIG_TOUCHSCREEN_CYTTSP_CORE is not set
# CONFIG_TOUCHSCREEN_CYTTSP4_CORE is not set
# CONFIG_TOUCHSCREEN_DA9034 is not set
# CONFIG_TOUCHSCREEN_DA9052 is not set
# CONFIG_TOUCHSCREEN_DYNAPRO is not set
# CONFIG_TOUCHSCREEN_HAMPSHIRE is not set
# CONFIG_TOUCHSCREEN_EETI is not set
# CONFIG_TOUCHSCREEN_EGALAX_SERIAL is not set
# CONFIG_TOUCHSCREEN_EXC3000 is not set
# CONFIG_TOUCHSCREEN_FUJITSU is not set
# CONFIG_TOUCHSCREEN_GOODIX is not set
# CONFIG_TOUCHSCREEN_HIDEEP is not set
# CONFIG_TOUCHSCREEN_ILI210X is not set
# CONFIG_TOUCHSCREEN_S6SY761 is not set
# CONFIG_TOUCHSCREEN_GUNZE is not set
# CONFIG_TOUCHSCREEN_EKTF2127 is not set
# CONFIG_TOUCHSCREEN_ELAN is not set
# CONFIG_TOUCHSCREEN_ELO is not set
# CONFIG_TOUCHSCREEN_WACOM_W8001 is not set
# CONFIG_TOUCHSCREEN_WACOM_I2C is not set
# CONFIG_TOUCHSCREEN_MAX11801 is not set
# CONFIG_TOUCHSCREEN_MCS5000 is not set
# CONFIG_TOUCHSCREEN_MMS114 is not set
# CONFIG_TOUCHSCREEN_MELFAS_MIP4 is not set
# CONFIG_TOUCHSCREEN_MTOUCH is not set
# CONFIG_TOUCHSCREEN_INEXIO is not set
# CONFIG_TOUCHSCREEN_MK712 is not set
# CONFIG_TOUCHSCREEN_PENMOUNT is not set
# CONFIG_TOUCHSCREEN_EDT_FT5X06 is not set
# CONFIG_TOUCHSCREEN_TOUCHRIGHT is not set
# CONFIG_TOUCHSCREEN_TOUCHWIN is not set
# CONFIG_TOUCHSCREEN_PIXCIR is not set
# CONFIG_TOUCHSCREEN_WDT87XX_I2C is not set
# CONFIG_TOUCHSCREEN_WM831X is not set
# CONFIG_TOUCHSCREEN_USB_COMPOSITE is not set
# CONFIG_TOUCHSCREEN_TOUCHIT213 is not set
# CONFIG_TOUCHSCREEN_TSC_SERIO is not set
# CONFIG_TOUCHSCREEN_TSC2004 is not set
# CONFIG_TOUCHSCREEN_TSC2005 is not set
# CONFIG_TOUCHSCREEN_TSC2007 is not set
# CONFIG_TOUCHSCREEN_PCAP is not set
# CONFIG_TOUCHSCREEN_RM_TS is not set
# CONFIG_TOUCHSCREEN_SILEAD is not set
# CONFIG_TOUCHSCREEN_SIS_I2C is not set
# CONFIG_TOUCHSCREEN_ST1232 is not set
# CONFIG_TOUCHSCREEN_STMFTS is not set
# CONFIG_TOUCHSCREEN_SURFACE3_SPI is not set
# CONFIG_TOUCHSCREEN_SX8654 is not set
# CONFIG_TOUCHSCREEN_TPS6507X is not set
# CONFIG_TOUCHSCREEN_ZET6223 is not set
# CONFIG_TOUCHSCREEN_ZFORCE is not set
# CONFIG_TOUCHSCREEN_ROHM_BU21023 is not set
# CONFIG_TOUCHSCREEN_IQS5XX is not set
CONFIG_INPUT_MISC=y
# CONFIG_INPUT_88PM860X_ONKEY is not set
# CONFIG_INPUT_AD714X is not set
# CONFIG_INPUT_BMA150 is not set
# CONFIG_INPUT_E3X0_BUTTON is not set
CONFIG_INPUT_PCSPKR=m
# CONFIG_INPUT_MAX77693_HAPTIC is not set
# CONFIG_INPUT_MAX8925_ONKEY is not set
# CONFIG_INPUT_MAX8997_HAPTIC is not set
# CONFIG_INPUT_MMA8450 is not set
# CONFIG_INPUT_APANEL is not set
# CONFIG_INPUT_GPIO_BEEPER is not set
# CONFIG_INPUT_GPIO_DECODER is not set
# CONFIG_INPUT_GPIO_VIBRA is not set
# CONFIG_INPUT_ATLAS_BTNS is not set
# CONFIG_INPUT_ATI_REMOTE2 is not set
# CONFIG_INPUT_KEYSPAN_REMOTE is not set
# CONFIG_INPUT_KXTJ9 is not set
# CONFIG_INPUT_POWERMATE is not set
# CONFIG_INPUT_YEALINK is not set
# CONFIG_INPUT_CM109 is not set
# CONFIG_INPUT_REGULATOR_HAPTIC is not set
# CONFIG_INPUT_TWL4030_PWRBUTTON is not set
# CONFIG_INPUT_TWL4030_VIBRA is not set
# CONFIG_INPUT_TWL6040_VIBRA is not set
CONFIG_INPUT_UINPUT=y
# CONFIG_INPUT_PALMAS_PWRBUTTON is not set
# CONFIG_INPUT_PCF8574 is not set
# CONFIG_INPUT_PWM_BEEPER is not set
# CONFIG_INPUT_PWM_VIBRA is not set
# CONFIG_INPUT_GPIO_ROTARY_ENCODER is not set
# CONFIG_INPUT_DA9052_ONKEY is not set
# CONFIG_INPUT_DA9055_ONKEY is not set
# CONFIG_INPUT_DA9063_ONKEY is not set
# CONFIG_INPUT_WM831X_ON is not set
# CONFIG_INPUT_PCAP is not set
# CONFIG_INPUT_ADXL34X is not set
# CONFIG_INPUT_IMS_PCU is not set
# CONFIG_INPUT_IQS269A is not set
# CONFIG_INPUT_CMA3000 is not set
# CONFIG_INPUT_XEN_KBDDEV_FRONTEND is not set
# CONFIG_INPUT_IDEAPAD_SLIDEBAR is not set
# CONFIG_INPUT_DRV260X_HAPTICS is not set
# CONFIG_INPUT_DRV2665_HAPTICS is not set
# CONFIG_INPUT_DRV2667_HAPTICS is not set
# CONFIG_RMI4_CORE is not set

#
# Hardware I/O ports
#
CONFIG_SERIO=y
CONFIG_ARCH_MIGHT_HAVE_PC_SERIO=y
CONFIG_SERIO_I8042=y
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_CT82C710 is not set
# CONFIG_SERIO_PCIPS2 is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_SERIO_ALTERA_PS2 is not set
# CONFIG_SERIO_PS2MULT is not set
# CONFIG_SERIO_ARC_PS2 is not set
# CONFIG_SERIO_GPIO_PS2 is not set
# CONFIG_USERIO is not set
# CONFIG_GAMEPORT is not set
# end of Hardware I/O ports
# end of Input device support

#
# Character devices
#
CONFIG_TTY=y
CONFIG_VT=y
CONFIG_CONSOLE_TRANSLATIONS=y
CONFIG_VT_CONSOLE=y
CONFIG_VT_CONSOLE_SLEEP=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=0
CONFIG_LDISC_AUTOLOAD=y

#
# Serial drivers
#
CONFIG_SERIAL_EARLYCON=y
CONFIG_SERIAL_8250=y
# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
CONFIG_SERIAL_8250_PNP=y
# CONFIG_SERIAL_8250_16550A_VARIANTS is not set
# CONFIG_SERIAL_8250_FINTEK is not set
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_DMA=y
CONFIG_SERIAL_8250_PCI=y
CONFIG_SERIAL_8250_EXAR=y
CONFIG_SERIAL_8250_NR_UARTS=48
CONFIG_SERIAL_8250_RUNTIME_UARTS=32
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_SERIAL_8250_DETECT_IRQ is not set
CONFIG_SERIAL_8250_RSA=y
CONFIG_SERIAL_8250_DWLIB=y
# CONFIG_SERIAL_8250_DW is not set
# CONFIG_SERIAL_8250_RT288X is not set
CONFIG_SERIAL_8250_LPSS=y
# CONFIG_SERIAL_8250_MID is not set

#
# Non-8250 serial port support
#
CONFIG_SERIAL_KGDB_NMI=y
# CONFIG_SERIAL_MAX3100 is not set
CONFIG_SERIAL_MAX310X=y
# CONFIG_SERIAL_UARTLITE is not set
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_CONSOLE_POLL=y
# CONFIG_SERIAL_JSM is not set
# CONFIG_SERIAL_LANTIQ is not set
CONFIG_SERIAL_SCCNXP=y
CONFIG_SERIAL_SCCNXP_CONSOLE=y
# CONFIG_SERIAL_SC16IS7XX is not set
# CONFIG_SERIAL_ALTERA_JTAGUART is not set
# CONFIG_SERIAL_ALTERA_UART is not set
# CONFIG_SERIAL_IFX6X60 is not set
# CONFIG_SERIAL_ARC is not set
# CONFIG_SERIAL_RP2 is not set
# CONFIG_SERIAL_FSL_LPUART is not set
# CONFIG_SERIAL_FSL_LINFLEXUART is not set
# CONFIG_SERIAL_SPRD is not set
# end of Serial drivers

CONFIG_SERIAL_MCTRL_GPIO=y
CONFIG_SERIAL_NONSTANDARD=y
# CONFIG_ROCKETPORT is not set
# CONFIG_CYCLADES is not set
# CONFIG_MOXA_INTELLIO is not set
# CONFIG_MOXA_SMARTIO is not set
# CONFIG_SYNCLINK is not set
# CONFIG_SYNCLINKMP is not set
# CONFIG_SYNCLINK_GT is not set
# CONFIG_ISI is not set
# CONFIG_N_HDLC is not set
# CONFIG_N_GSM is not set
# CONFIG_NOZOMI is not set
# CONFIG_NULL_TTY is not set
# CONFIG_TRACE_SINK is not set
CONFIG_HVC_DRIVER=y
CONFIG_HVC_IRQ=y
CONFIG_HVC_XEN=y
CONFIG_HVC_XEN_FRONTEND=y
# CONFIG_SERIAL_DEV_BUS is not set
CONFIG_TTY_PRINTK=y
CONFIG_TTY_PRINTK_LEVEL=6
CONFIG_VIRTIO_CONSOLE=y
CONFIG_IPMI_HANDLER=m
CONFIG_IPMI_DMI_DECODE=y
CONFIG_IPMI_PLAT_DATA=y
# CONFIG_IPMI_PANIC_EVENT is not set
CONFIG_IPMI_DEVICE_INTERFACE=m
CONFIG_IPMI_SI=m
# CONFIG_IPMI_SSIF is not set
# CONFIG_IPMI_WATCHDOG is not set
# CONFIG_IPMI_POWEROFF is not set
CONFIG_HW_RANDOM=y
# CONFIG_HW_RANDOM_TIMERIOMEM is not set
# CONFIG_HW_RANDOM_INTEL is not set
# CONFIG_HW_RANDOM_AMD is not set
# CONFIG_HW_RANDOM_BA431 is not set
# CONFIG_HW_RANDOM_VIA is not set
# CONFIG_HW_RANDOM_VIRTIO is not set
# CONFIG_APPLICOM is not set
# CONFIG_MWAVE is not set
CONFIG_DEVMEM=y
# CONFIG_DEVKMEM is not set
# CONFIG_NVRAM is not set
# CONFIG_RAW_DRIVER is not set
CONFIG_DEVPORT=y
CONFIG_HPET=y
CONFIG_HPET_MMAP=y
CONFIG_HPET_MMAP_DEFAULT=y
# CONFIG_HANGCHECK_TIMER is not set
CONFIG_TCG_TPM=y
# CONFIG_HW_RANDOM_TPM is not set
CONFIG_TCG_TIS_CORE=y
CONFIG_TCG_TIS=y
# CONFIG_TCG_TIS_SPI is not set
# CONFIG_TCG_TIS_I2C_ATMEL is not set
# CONFIG_TCG_TIS_I2C_INFINEON is not set
# CONFIG_TCG_TIS_I2C_NUVOTON is not set
# CONFIG_TCG_NSC is not set
# CONFIG_TCG_ATMEL is not set
# CONFIG_TCG_INFINEON is not set
# CONFIG_TCG_XEN is not set
CONFIG_TCG_CRB=y
# CONFIG_TCG_VTPM_PROXY is not set
# CONFIG_TCG_TIS_ST33ZP24_I2C is not set
# CONFIG_TCG_TIS_ST33ZP24_SPI is not set
# CONFIG_TELCLOCK is not set
# CONFIG_XILLYBUS is not set
# end of Character devices

# CONFIG_RANDOM_TRUST_CPU is not set
# CONFIG_RANDOM_TRUST_BOOTLOADER is not set

#
# I2C support
#
CONFIG_I2C=y
CONFIG_ACPI_I2C_OPREGION=y
CONFIG_I2C_BOARDINFO=y
CONFIG_I2C_COMPAT=y
CONFIG_I2C_CHARDEV=y
# CONFIG_I2C_MUX is not set
CONFIG_I2C_HELPER_AUTO=y
CONFIG_I2C_ALGOBIT=m

#
# I2C Hardware Bus support
#

#
# PC SMBus host controller drivers
#
# CONFIG_I2C_ALI1535 is not set
# CONFIG_I2C_ALI1563 is not set
# CONFIG_I2C_ALI15X3 is not set
# CONFIG_I2C_AMD756 is not set
# CONFIG_I2C_AMD8111 is not set
# CONFIG_I2C_AMD_MP2 is not set
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_ISCH is not set
# CONFIG_I2C_ISMT is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_NVIDIA_GPU is not set
# CONFIG_I2C_SIS5595 is not set
# CONFIG_I2C_SIS630 is not set
# CONFIG_I2C_SIS96X is not set
# CONFIG_I2C_VIA is not set
# CONFIG_I2C_VIAPRO is not set

#
# ACPI drivers
#
# CONFIG_I2C_SCMI is not set

#
# I2C system bus drivers (mostly embedded / system-on-chip)
#
# CONFIG_I2C_CBUS_GPIO is not set
CONFIG_I2C_DESIGNWARE_CORE=y
# CONFIG_I2C_DESIGNWARE_SLAVE is not set
CONFIG_I2C_DESIGNWARE_PLATFORM=y
# CONFIG_I2C_DESIGNWARE_BAYTRAIL is not set
# CONFIG_I2C_DESIGNWARE_PCI is not set
# CONFIG_I2C_EMEV2 is not set
# CONFIG_I2C_GPIO is not set
# CONFIG_I2C_OCORES is not set
# CONFIG_I2C_PCA_PLATFORM is not set
# CONFIG_I2C_SIMTEC is not set
# CONFIG_I2C_XILINX is not set

#
# External I2C/SMBus adapter drivers
#
# CONFIG_I2C_DIOLAN_U2C is not set
# CONFIG_I2C_ROBOTFUZZ_OSIF is not set
# CONFIG_I2C_TAOS_EVM is not set
# CONFIG_I2C_TINY_USB is not set

#
# Other I2C/SMBus bus drivers
#
# CONFIG_I2C_MLXCPLD is not set
# end of I2C Hardware Bus support

# CONFIG_I2C_STUB is not set
# CONFIG_I2C_SLAVE is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
# end of I2C support

# CONFIG_I3C is not set
CONFIG_SPI=y
# CONFIG_SPI_DEBUG is not set
CONFIG_SPI_MASTER=y
# CONFIG_SPI_MEM is not set

#
# SPI Master Controller Drivers
#
# CONFIG_SPI_ALTERA is not set
# CONFIG_SPI_AXI_SPI_ENGINE is not set
# CONFIG_SPI_BITBANG is not set
# CONFIG_SPI_CADENCE is not set
# CONFIG_SPI_DESIGNWARE is not set
# CONFIG_SPI_NXP_FLEXSPI is not set
# CONFIG_SPI_GPIO is not set
# CONFIG_SPI_LANTIQ_SSC is not set
# CONFIG_SPI_OC_TINY is not set
# CONFIG_SPI_PXA2XX is not set
# CONFIG_SPI_ROCKCHIP is not set
# CONFIG_SPI_SC18IS602 is not set
# CONFIG_SPI_SIFIVE is not set
# CONFIG_SPI_MXIC is not set
# CONFIG_SPI_XCOMM is not set
# CONFIG_SPI_XILINX is not set
# CONFIG_SPI_ZYNQMP_GQSPI is not set
# CONFIG_SPI_AMD is not set

#
# SPI Multiplexer support
#
# CONFIG_SPI_MUX is not set

#
# SPI Protocol Masters
#
# CONFIG_SPI_SPIDEV is not set
# CONFIG_SPI_LOOPBACK_TEST is not set
# CONFIG_SPI_TLE62X0 is not set
# CONFIG_SPI_SLAVE is not set
CONFIG_SPI_DYNAMIC=y
# CONFIG_SPMI is not set
# CONFIG_HSI is not set
# CONFIG_PPS is not set

#
# PTP clock support
#
# CONFIG_PTP_1588_CLOCK is not set

#
# Enable PHYLIB and NETWORK_PHY_TIMESTAMPING to see the additional clocks.
#
# end of PTP clock support

CONFIG_PINCTRL=y
CONFIG_PINMUX=y
CONFIG_PINCONF=y
CONFIG_GENERIC_PINCONF=y
# CONFIG_DEBUG_PINCTRL is not set
CONFIG_PINCTRL_AMD=y
# CONFIG_PINCTRL_MCP23S08 is not set
CONFIG_PINCTRL_SX150X=y
CONFIG_PINCTRL_BAYTRAIL=y
# CONFIG_PINCTRL_CHERRYVIEW is not set
# CONFIG_PINCTRL_LYNXPOINT is not set
# CONFIG_PINCTRL_BROXTON is not set
# CONFIG_PINCTRL_CANNONLAKE is not set
# CONFIG_PINCTRL_CEDARFORK is not set
# CONFIG_PINCTRL_DENVERTON is not set
# CONFIG_PINCTRL_EMMITSBURG is not set
# CONFIG_PINCTRL_GEMINILAKE is not set
# CONFIG_PINCTRL_ICELAKE is not set
# CONFIG_PINCTRL_JASPERLAKE is not set
# CONFIG_PINCTRL_LEWISBURG is not set
# CONFIG_PINCTRL_SUNRISEPOINT is not set
# CONFIG_PINCTRL_TIGERLAKE is not set
CONFIG_GPIOLIB=y
CONFIG_GPIOLIB_FASTPATH_LIMIT=512
CONFIG_GPIO_ACPI=y
CONFIG_GPIOLIB_IRQCHIP=y
# CONFIG_DEBUG_GPIO is not set
CONFIG_GPIO_SYSFS=y

#
# Memory mapped GPIO drivers
#
# CONFIG_GPIO_AMDPT is not set
# CONFIG_GPIO_DWAPB is not set
# CONFIG_GPIO_EXAR is not set
# CONFIG_GPIO_GENERIC_PLATFORM is not set
# CONFIG_GPIO_ICH is not set
# CONFIG_GPIO_MB86S7X is not set
# CONFIG_GPIO_VX855 is not set
# CONFIG_GPIO_XILINX is not set
# CONFIG_GPIO_AMD_FCH is not set
# end of Memory mapped GPIO drivers

#
# Port-mapped I/O GPIO drivers
#
# CONFIG_GPIO_F7188X is not set
# CONFIG_GPIO_IT87 is not set
# CONFIG_GPIO_SCH is not set
# CONFIG_GPIO_SCH311X is not set
# CONFIG_GPIO_WINBOND is not set
# CONFIG_GPIO_WS16C48 is not set
# end of Port-mapped I/O GPIO drivers

#
# I2C GPIO expanders
#
# CONFIG_GPIO_ADP5588 is not set
# CONFIG_GPIO_MAX7300 is not set
# CONFIG_GPIO_MAX732X is not set
# CONFIG_GPIO_PCA953X is not set
# CONFIG_GPIO_PCA9570 is not set
# CONFIG_GPIO_PCF857X is not set
# CONFIG_GPIO_TPIC2810 is not set
# end of I2C GPIO expanders

#
# MFD GPIO expanders
#
# CONFIG_GPIO_ADP5520 is not set
# CONFIG_GPIO_CRYSTAL_COVE is not set
# CONFIG_GPIO_DA9052 is not set
# CONFIG_GPIO_DA9055 is not set
CONFIG_GPIO_PALMAS=y
CONFIG_GPIO_RC5T583=y
CONFIG_GPIO_TPS6586X=y
CONFIG_GPIO_TPS65910=y
# CONFIG_GPIO_TPS65912 is not set
# CONFIG_GPIO_TWL4030 is not set
# CONFIG_GPIO_TWL6040 is not set
# CONFIG_GPIO_WM831X is not set
# CONFIG_GPIO_WM8350 is not set
# end of MFD GPIO expanders

#
# PCI GPIO expanders
#
# CONFIG_GPIO_AMD8111 is not set
# CONFIG_GPIO_BT8XX is not set
# CONFIG_GPIO_ML_IOH is not set
# CONFIG_GPIO_PCI_IDIO_16 is not set
# CONFIG_GPIO_PCIE_IDIO_24 is not set
# CONFIG_GPIO_RDC321X is not set
# end of PCI GPIO expanders

#
# SPI GPIO expanders
#
# CONFIG_GPIO_MAX3191X is not set
# CONFIG_GPIO_MAX7301 is not set
# CONFIG_GPIO_MC33880 is not set
# CONFIG_GPIO_PISOSR is not set
# CONFIG_GPIO_XRA1403 is not set
# end of SPI GPIO expanders

#
# USB GPIO expanders
#
# end of USB GPIO expanders

# CONFIG_GPIO_AGGREGATOR is not set
# CONFIG_GPIO_MOCKUP is not set
# CONFIG_W1 is not set
CONFIG_POWER_AVS=y
# CONFIG_QCOM_CPR is not set
CONFIG_POWER_RESET=y
CONFIG_POWER_RESET_RESTART=y
CONFIG_POWER_SUPPLY=y
# CONFIG_POWER_SUPPLY_DEBUG is not set
CONFIG_POWER_SUPPLY_HWMON=y
# CONFIG_PDA_POWER is not set
# CONFIG_MAX8925_POWER is not set
# CONFIG_WM831X_BACKUP is not set
# CONFIG_WM831X_POWER is not set
# CONFIG_WM8350_POWER is not set
# CONFIG_TEST_POWER is not set
# CONFIG_BATTERY_88PM860X is not set
# CONFIG_CHARGER_ADP5061 is not set
# CONFIG_BATTERY_CW2015 is not set
# CONFIG_BATTERY_DS2780 is not set
# CONFIG_BATTERY_DS2781 is not set
# CONFIG_BATTERY_DS2782 is not set
# CONFIG_BATTERY_SBS is not set
# CONFIG_CHARGER_SBS is not set
# CONFIG_BATTERY_BQ27XXX is not set
# CONFIG_BATTERY_DA9030 is not set
# CONFIG_BATTERY_DA9052 is not set
# CONFIG_BATTERY_MAX17040 is not set
# CONFIG_BATTERY_MAX17042 is not set
# CONFIG_CHARGER_ISP1704 is not set
# CONFIG_CHARGER_MAX8903 is not set
# CONFIG_CHARGER_LP8727 is not set
# CONFIG_CHARGER_GPIO is not set
CONFIG_CHARGER_MANAGER=y
# CONFIG_CHARGER_LT3651 is not set
# CONFIG_CHARGER_MAX14577 is not set
# CONFIG_CHARGER_MAX77693 is not set
# CONFIG_CHARGER_BQ2415X is not set
# CONFIG_CHARGER_BQ24190 is not set
# CONFIG_CHARGER_BQ24257 is not set
# CONFIG_CHARGER_BQ24735 is not set
# CONFIG_CHARGER_BQ2515X is not set
# CONFIG_CHARGER_BQ25890 is not set
# CONFIG_CHARGER_SMB347 is not set
# CONFIG_CHARGER_TPS65090 is not set
# CONFIG_BATTERY_GAUGE_LTC2941 is not set
# CONFIG_CHARGER_RT9455 is not set
# CONFIG_CHARGER_BD99954 is not set
CONFIG_HWMON=y
# CONFIG_HWMON_DEBUG_CHIP is not set

#
# Native drivers
#
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_ABITUGURU3 is not set
# CONFIG_SENSORS_AD7314 is not set
# CONFIG_SENSORS_AD7414 is not set
# CONFIG_SENSORS_AD7418 is not set
# CONFIG_SENSORS_ADM1021 is not set
# CONFIG_SENSORS_ADM1025 is not set
# CONFIG_SENSORS_ADM1026 is not set
# CONFIG_SENSORS_ADM1029 is not set
# CONFIG_SENSORS_ADM1031 is not set
# CONFIG_SENSORS_ADM1177 is not set
# CONFIG_SENSORS_ADM9240 is not set
# CONFIG_SENSORS_ADT7310 is not set
# CONFIG_SENSORS_ADT7410 is not set
# CONFIG_SENSORS_ADT7411 is not set
# CONFIG_SENSORS_ADT7462 is not set
# CONFIG_SENSORS_ADT7470 is not set
# CONFIG_SENSORS_ADT7475 is not set
# CONFIG_SENSORS_AS370 is not set
# CONFIG_SENSORS_ASC7621 is not set
# CONFIG_SENSORS_AXI_FAN_CONTROL is not set
# CONFIG_SENSORS_K8TEMP is not set
# CONFIG_SENSORS_K10TEMP is not set
# CONFIG_SENSORS_FAM15H_POWER is not set
# CONFIG_SENSORS_AMD_ENERGY is not set
# CONFIG_SENSORS_APPLESMC is not set
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_ASPEED is not set
# CONFIG_SENSORS_ATXP1 is not set
# CONFIG_SENSORS_DRIVETEMP is not set
# CONFIG_SENSORS_DS620 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_DELL_SMM is not set
# CONFIG_SENSORS_DA9052_ADC is not set
# CONFIG_SENSORS_DA9055 is not set
# CONFIG_SENSORS_I5K_AMB is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_SENSORS_F71882FG is not set
# CONFIG_SENSORS_F75375S is not set
# CONFIG_SENSORS_FSCHMD is not set
# CONFIG_SENSORS_FTSTEUTATES is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_G760A is not set
# CONFIG_SENSORS_G762 is not set
# CONFIG_SENSORS_HIH6130 is not set
# CONFIG_SENSORS_IBMAEM is not set
# CONFIG_SENSORS_IBMPEX is not set
# CONFIG_SENSORS_I5500 is not set
CONFIG_SENSORS_CORETEMP=m
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_JC42 is not set
# CONFIG_SENSORS_POWR1220 is not set
# CONFIG_SENSORS_LINEAGE is not set
# CONFIG_SENSORS_LTC2945 is not set
# CONFIG_SENSORS_LTC2947_I2C is not set
# CONFIG_SENSORS_LTC2947_SPI is not set
# CONFIG_SENSORS_LTC2990 is not set
# CONFIG_SENSORS_LTC4151 is not set
# CONFIG_SENSORS_LTC4215 is not set
# CONFIG_SENSORS_LTC4222 is not set
# CONFIG_SENSORS_LTC4245 is not set
# CONFIG_SENSORS_LTC4260 is not set
# CONFIG_SENSORS_LTC4261 is not set
# CONFIG_SENSORS_MAX1111 is not set
# CONFIG_SENSORS_MAX16065 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_MAX1668 is not set
# CONFIG_SENSORS_MAX197 is not set
# CONFIG_SENSORS_MAX31722 is not set
# CONFIG_SENSORS_MAX31730 is not set
# CONFIG_SENSORS_MAX6621 is not set
# CONFIG_SENSORS_MAX6639 is not set
# CONFIG_SENSORS_MAX6642 is not set
# CONFIG_SENSORS_MAX6650 is not set
# CONFIG_SENSORS_MAX6697 is not set
# CONFIG_SENSORS_MAX31790 is not set
# CONFIG_SENSORS_MCP3021 is not set
# CONFIG_SENSORS_TC654 is not set
# CONFIG_SENSORS_ADCXX is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM70 is not set
# CONFIG_SENSORS_LM73 is not set
# CONFIG_SENSORS_LM75 is not set
# CONFIG_SENSORS_LM77 is not set
# CONFIG_SENSORS_LM78 is not set
# CONFIG_SENSORS_LM80 is not set
# CONFIG_SENSORS_LM83 is not set
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_LM93 is not set
# CONFIG_SENSORS_LM95234 is not set
# CONFIG_SENSORS_LM95241 is not set
# CONFIG_SENSORS_LM95245 is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_PC87427 is not set
# CONFIG_SENSORS_NTC_THERMISTOR is not set
# CONFIG_SENSORS_NCT6683 is not set
# CONFIG_SENSORS_NCT6775 is not set
# CONFIG_SENSORS_NCT7802 is not set
# CONFIG_SENSORS_NCT7904 is not set
# CONFIG_SENSORS_NPCM7XX is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_PMBUS is not set
# CONFIG_SENSORS_SHT15 is not set
# CONFIG_SENSORS_SHT21 is not set
# CONFIG_SENSORS_SHT3x is not set
# CONFIG_SENSORS_SHTC1 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_DME1737 is not set
# CONFIG_SENSORS_EMC1403 is not set
# CONFIG_SENSORS_EMC2103 is not set
# CONFIG_SENSORS_EMC6W201 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_SMSC47M192 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SCH5627 is not set
# CONFIG_SENSORS_SCH5636 is not set
# CONFIG_SENSORS_STTS751 is not set
# CONFIG_SENSORS_SMM665 is not set
# CONFIG_SENSORS_ADC128D818 is not set
# CONFIG_SENSORS_ADS7828 is not set
# CONFIG_SENSORS_ADS7871 is not set
# CONFIG_SENSORS_AMC6821 is not set
# CONFIG_SENSORS_INA209 is not set
# CONFIG_SENSORS_INA2XX is not set
# CONFIG_SENSORS_INA3221 is not set
# CONFIG_SENSORS_TC74 is not set
# CONFIG_SENSORS_THMC50 is not set
# CONFIG_SENSORS_TMP102 is not set
# CONFIG_SENSORS_TMP103 is not set
# CONFIG_SENSORS_TMP108 is not set
# CONFIG_SENSORS_TMP401 is not set
# CONFIG_SENSORS_TMP421 is not set
# CONFIG_SENSORS_TMP513 is not set
# CONFIG_SENSORS_VIA_CPUTEMP is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_VT1211 is not set
# CONFIG_SENSORS_VT8231 is not set
# CONFIG_SENSORS_W83773G is not set
# CONFIG_SENSORS_W83781D is not set
# CONFIG_SENSORS_W83791D is not set
# CONFIG_SENSORS_W83792D is not set
# CONFIG_SENSORS_W83793 is not set
# CONFIG_SENSORS_W83795 is not set
# CONFIG_SENSORS_W83L785TS is not set
# CONFIG_SENSORS_W83L786NG is not set
# CONFIG_SENSORS_W83627HF is not set
# CONFIG_SENSORS_W83627EHF is not set
# CONFIG_SENSORS_WM831X is not set
# CONFIG_SENSORS_WM8350 is not set
# CONFIG_SENSORS_XGENE is not set

#
# ACPI drivers
#
CONFIG_SENSORS_ACPI_POWER=m
# CONFIG_SENSORS_ATK0110 is not set
CONFIG_THERMAL=y
# CONFIG_THERMAL_NETLINK is not set
# CONFIG_THERMAL_STATISTICS is not set
CONFIG_THERMAL_EMERGENCY_POWEROFF_DELAY_MS=0
CONFIG_THERMAL_HWMON=y
CONFIG_THERMAL_WRITABLE_TRIPS=y
CONFIG_THERMAL_DEFAULT_GOV_STEP_WISE=y
# CONFIG_THERMAL_DEFAULT_GOV_FAIR_SHARE is not set
# CONFIG_THERMAL_DEFAULT_GOV_USER_SPACE is not set
CONFIG_THERMAL_GOV_FAIR_SHARE=y
CONFIG_THERMAL_GOV_STEP_WISE=y
CONFIG_THERMAL_GOV_BANG_BANG=y
CONFIG_THERMAL_GOV_USER_SPACE=y
# CONFIG_DEVFREQ_THERMAL is not set
CONFIG_THERMAL_EMULATION=y

#
# Intel thermal drivers
#
CONFIG_INTEL_POWERCLAMP=m
CONFIG_X86_PKG_TEMP_THERMAL=m
# CONFIG_INTEL_SOC_DTS_THERMAL is not set

#
# ACPI INT340X thermal drivers
#
# CONFIG_INT340X_THERMAL is not set
# end of ACPI INT340X thermal drivers

# CONFIG_INTEL_PCH_THERMAL is not set
# end of Intel thermal drivers

CONFIG_WATCHDOG=y
CONFIG_WATCHDOG_CORE=y
# CONFIG_WATCHDOG_NOWAYOUT is not set
CONFIG_WATCHDOG_HANDLE_BOOT_ENABLED=y
CONFIG_WATCHDOG_OPEN_TIMEOUT=0
# CONFIG_WATCHDOG_SYSFS is not set

#
# Watchdog Pretimeout Governors
#
# CONFIG_WATCHDOG_PRETIMEOUT_GOV is not set

#
# Watchdog Device Drivers
#
# CONFIG_SOFT_WATCHDOG is not set
# CONFIG_DA9052_WATCHDOG is not set
# CONFIG_DA9055_WATCHDOG is not set
# CONFIG_DA9063_WATCHDOG is not set
# CONFIG_WDAT_WDT is not set
# CONFIG_WM831X_WATCHDOG is not set
# CONFIG_WM8350_WATCHDOG is not set
# CONFIG_XILINX_WATCHDOG is not set
# CONFIG_ZIIRAVE_WATCHDOG is not set
# CONFIG_CADENCE_WATCHDOG is not set
# CONFIG_DW_WATCHDOG is not set
# CONFIG_TWL4030_WATCHDOG is not set
# CONFIG_MAX63XX_WATCHDOG is not set
# CONFIG_ACQUIRE_WDT is not set
# CONFIG_ADVANTECH_WDT is not set
# CONFIG_ALIM1535_WDT is not set
# CONFIG_ALIM7101_WDT is not set
# CONFIG_EBC_C384_WDT is not set
# CONFIG_F71808E_WDT is not set
# CONFIG_SP5100_TCO is not set
# CONFIG_SBC_FITPC2_WATCHDOG is not set
# CONFIG_EUROTECH_WDT is not set
# CONFIG_IB700_WDT is not set
# CONFIG_IBMASR is not set
# CONFIG_WAFER_WDT is not set
# CONFIG_I6300ESB_WDT is not set
# CONFIG_IE6XX_WDT is not set
CONFIG_ITCO_WDT=m
CONFIG_ITCO_VENDOR_SUPPORT=y
# CONFIG_IT8712F_WDT is not set
# CONFIG_IT87_WDT is not set
# CONFIG_HP_WATCHDOG is not set
# CONFIG_SC1200_WDT is not set
# CONFIG_PC87413_WDT is not set
# CONFIG_NV_TCO is not set
# CONFIG_60XX_WDT is not set
# CONFIG_CPU5_WDT is not set
# CONFIG_SMSC_SCH311X_WDT is not set
# CONFIG_SMSC37B787_WDT is not set
# CONFIG_TQMX86_WDT is not set
# CONFIG_VIA_WDT is not set
# CONFIG_W83627HF_WDT is not set
# CONFIG_W83877F_WDT is not set
# CONFIG_W83977F_WDT is not set
# CONFIG_MACHZ_WDT is not set
# CONFIG_SBC_EPX_C3_WATCHDOG is not set
# CONFIG_INTEL_MEI_WDT is not set
# CONFIG_NI903X_WDT is not set
# CONFIG_NIC7018_WDT is not set
# CONFIG_MEN_A21_WDT is not set
# CONFIG_XEN_WDT is not set

#
# PCI-based Watchdog Cards
#
# CONFIG_PCIPCWATCHDOG is not set
# CONFIG_WDTPCI is not set

#
# USB-based Watchdog Cards
#
# CONFIG_USBPCWATCHDOG is not set
CONFIG_SSB_POSSIBLE=y
# CONFIG_SSB is not set
CONFIG_BCMA_POSSIBLE=y
# CONFIG_BCMA is not set

#
# Multifunction device drivers
#
CONFIG_MFD_CORE=y
CONFIG_MFD_AS3711=y
CONFIG_PMIC_ADP5520=y
CONFIG_MFD_AAT2870_CORE=y
# CONFIG_MFD_BCM590XX is not set
# CONFIG_MFD_BD9571MWV is not set
# CONFIG_MFD_AXP20X_I2C is not set
# CONFIG_MFD_MADERA is not set
CONFIG_PMIC_DA903X=y
CONFIG_PMIC_DA9052=y
CONFIG_MFD_DA9052_SPI=y
CONFIG_MFD_DA9052_I2C=y
CONFIG_MFD_DA9055=y
# CONFIG_MFD_DA9062 is not set
CONFIG_MFD_DA9063=y
# CONFIG_MFD_DA9150 is not set
# CONFIG_MFD_DLN2 is not set
# CONFIG_MFD_MC13XXX_SPI is not set
# CONFIG_MFD_MC13XXX_I2C is not set
# CONFIG_MFD_MP2629 is not set
# CONFIG_HTC_PASIC3 is not set
CONFIG_HTC_I2CPLD=y
# CONFIG_MFD_INTEL_QUARK_I2C_GPIO is not set
CONFIG_LPC_ICH=m
# CONFIG_LPC_SCH is not set
CONFIG_INTEL_SOC_PMIC=y
# CONFIG_INTEL_SOC_PMIC_CHTWC is not set
# CONFIG_INTEL_SOC_PMIC_CHTDC_TI is not set
# CONFIG_MFD_INTEL_LPSS_ACPI is not set
# CONFIG_MFD_INTEL_LPSS_PCI is not set
# CONFIG_MFD_INTEL_PMC_BXT is not set
# CONFIG_MFD_IQS62X is not set
# CONFIG_MFD_JANZ_CMODIO is not set
# CONFIG_MFD_KEMPLD is not set
# CONFIG_MFD_88PM800 is not set
# CONFIG_MFD_88PM805 is not set
CONFIG_MFD_88PM860X=y
CONFIG_MFD_MAX14577=y
CONFIG_MFD_MAX77693=y
CONFIG_MFD_MAX77843=y
# CONFIG_MFD_MAX8907 is not set
CONFIG_MFD_MAX8925=y
CONFIG_MFD_MAX8997=y
CONFIG_MFD_MAX8998=y
# CONFIG_MFD_MT6360 is not set
# CONFIG_MFD_MT6397 is not set
# CONFIG_MFD_MENF21BMC is not set
CONFIG_EZX_PCAP=y
# CONFIG_MFD_VIPERBOARD is not set
# CONFIG_MFD_RETU is not set
# CONFIG_MFD_PCF50633 is not set
# CONFIG_MFD_RDC321X is not set
# CONFIG_MFD_RT5033 is not set
CONFIG_MFD_RC5T583=y
CONFIG_MFD_SEC_CORE=y
# CONFIG_MFD_SI476X_CORE is not set
# CONFIG_MFD_SM501 is not set
# CONFIG_MFD_SKY81452 is not set
CONFIG_ABX500_CORE=y
CONFIG_AB3100_CORE=y
# CONFIG_AB3100_OTP is not set
CONFIG_MFD_SYSCON=y
# CONFIG_MFD_TI_AM335X_TSCADC is not set
# CONFIG_MFD_LP3943 is not set
CONFIG_MFD_LP8788=y
# CONFIG_MFD_TI_LMU is not set
CONFIG_MFD_PALMAS=y
# CONFIG_TPS6105X is not set
# CONFIG_TPS65010 is not set
# CONFIG_TPS6507X is not set
# CONFIG_MFD_TPS65086 is not set
CONFIG_MFD_TPS65090=y
# CONFIG_MFD_TPS68470 is not set
# CONFIG_MFD_TI_LP873X is not set
CONFIG_MFD_TPS6586X=y
CONFIG_MFD_TPS65910=y
CONFIG_MFD_TPS65912=y
CONFIG_MFD_TPS65912_I2C=y
CONFIG_MFD_TPS65912_SPI=y
CONFIG_MFD_TPS80031=y
CONFIG_TWL4030_CORE=y
CONFIG_MFD_TWL4030_AUDIO=y
CONFIG_TWL6040_CORE=y
# CONFIG_MFD_WL1273_CORE is not set
# CONFIG_MFD_LM3533 is not set
# CONFIG_MFD_TQMX86 is not set
# CONFIG_MFD_VX855 is not set
# CONFIG_MFD_ARIZONA_I2C is not set
# CONFIG_MFD_ARIZONA_SPI is not set
CONFIG_MFD_WM8400=y
CONFIG_MFD_WM831X=y
CONFIG_MFD_WM831X_I2C=y
CONFIG_MFD_WM831X_SPI=y
CONFIG_MFD_WM8350=y
CONFIG_MFD_WM8350_I2C=y
# CONFIG_MFD_WM8994 is not set
# end of Multifunction device drivers

CONFIG_REGULATOR=y
# CONFIG_REGULATOR_DEBUG is not set
# CONFIG_REGULATOR_FIXED_VOLTAGE is not set
# CONFIG_REGULATOR_VIRTUAL_CONSUMER is not set
# CONFIG_REGULATOR_USERSPACE_CONSUMER is not set
# CONFIG_REGULATOR_88PG86X is not set
# CONFIG_REGULATOR_88PM8607 is not set
# CONFIG_REGULATOR_ACT8865 is not set
# CONFIG_REGULATOR_AD5398 is not set
# CONFIG_REGULATOR_AAT2870 is not set
# CONFIG_REGULATOR_AB3100 is not set
# CONFIG_REGULATOR_AS3711 is not set
# CONFIG_REGULATOR_DA903X is not set
# CONFIG_REGULATOR_DA9052 is not set
# CONFIG_REGULATOR_DA9055 is not set
# CONFIG_REGULATOR_DA9210 is not set
# CONFIG_REGULATOR_DA9211 is not set
# CONFIG_REGULATOR_FAN53555 is not set
# CONFIG_REGULATOR_GPIO is not set
# CONFIG_REGULATOR_ISL9305 is not set
# CONFIG_REGULATOR_ISL6271A is not set
# CONFIG_REGULATOR_LP3971 is not set
# CONFIG_REGULATOR_LP3972 is not set
# CONFIG_REGULATOR_LP872X is not set
# CONFIG_REGULATOR_LP8755 is not set
# CONFIG_REGULATOR_LP8788 is not set
# CONFIG_REGULATOR_LTC3589 is not set
# CONFIG_REGULATOR_LTC3676 is not set
# CONFIG_REGULATOR_MAX14577 is not set
# CONFIG_REGULATOR_MAX1586 is not set
# CONFIG_REGULATOR_MAX8649 is not set
# CONFIG_REGULATOR_MAX8660 is not set
# CONFIG_REGULATOR_MAX8925 is not set
# CONFIG_REGULATOR_MAX8952 is not set
# CONFIG_REGULATOR_MAX8997 is not set
# CONFIG_REGULATOR_MAX8998 is not set
# CONFIG_REGULATOR_MAX77693 is not set
# CONFIG_REGULATOR_MAX77826 is not set
# CONFIG_REGULATOR_MP8859 is not set
# CONFIG_REGULATOR_MT6311 is not set
# CONFIG_REGULATOR_PALMAS is not set
# CONFIG_REGULATOR_PCA9450 is not set
# CONFIG_REGULATOR_PCAP is not set
# CONFIG_REGULATOR_PFUZE100 is not set
# CONFIG_REGULATOR_PV88060 is not set
# CONFIG_REGULATOR_PV88080 is not set
# CONFIG_REGULATOR_PV88090 is not set
# CONFIG_REGULATOR_PWM is not set
# CONFIG_REGULATOR_RC5T583 is not set
# CONFIG_REGULATOR_S2MPA01 is not set
# CONFIG_REGULATOR_S2MPS11 is not set
# CONFIG_REGULATOR_S5M8767 is not set
# CONFIG_REGULATOR_SLG51000 is not set
# CONFIG_REGULATOR_TPS51632 is not set
# CONFIG_REGULATOR_TPS62360 is not set
# CONFIG_REGULATOR_TPS65023 is not set
# CONFIG_REGULATOR_TPS6507X is not set
# CONFIG_REGULATOR_TPS65090 is not set
# CONFIG_REGULATOR_TPS65132 is not set
# CONFIG_REGULATOR_TPS6524X is not set
# CONFIG_REGULATOR_TPS6586X is not set
# CONFIG_REGULATOR_TPS65910 is not set
# CONFIG_REGULATOR_TPS65912 is not set
# CONFIG_REGULATOR_TPS80031 is not set
# CONFIG_REGULATOR_TWL4030 is not set
# CONFIG_REGULATOR_WM831X is not set
# CONFIG_REGULATOR_WM8350 is not set
# CONFIG_REGULATOR_WM8400 is not set
CONFIG_RC_CORE=y
CONFIG_RC_MAP=y
# CONFIG_LIRC is not set
CONFIG_RC_DECODERS=y
CONFIG_IR_NEC_DECODER=y
CONFIG_IR_RC5_DECODER=y
CONFIG_IR_RC6_DECODER=y
CONFIG_IR_JVC_DECODER=y
CONFIG_IR_SONY_DECODER=y
CONFIG_IR_SANYO_DECODER=y
CONFIG_IR_SHARP_DECODER=y
CONFIG_IR_MCE_KBD_DECODER=y
CONFIG_IR_XMP_DECODER=y
# CONFIG_IR_IMON_DECODER is not set
# CONFIG_IR_RCMM_DECODER is not set
# CONFIG_RC_DEVICES is not set
# CONFIG_MEDIA_CEC_SUPPORT is not set
# CONFIG_MEDIA_SUPPORT is not set

#
# Graphics support
#
CONFIG_AGP=y
CONFIG_AGP_AMD64=y
CONFIG_AGP_INTEL=y
# CONFIG_AGP_SIS is not set
CONFIG_AGP_VIA=y
CONFIG_INTEL_GTT=y
CONFIG_VGA_ARB=y
CONFIG_VGA_ARB_MAX_GPUS=16
CONFIG_VGA_SWITCHEROO=y
CONFIG_DRM=m
# CONFIG_DRM_DP_AUX_CHARDEV is not set
# CONFIG_DRM_DEBUG_SELFTEST is not set
CONFIG_DRM_KMS_HELPER=m
CONFIG_DRM_KMS_FB_HELPER=y
# CONFIG_DRM_DEBUG_DP_MST_TOPOLOGY_REFS is not set
CONFIG_DRM_FBDEV_EMULATION=y
CONFIG_DRM_FBDEV_OVERALLOC=100
# CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM is not set
CONFIG_DRM_LOAD_EDID_FIRMWARE=y
# CONFIG_DRM_DP_CEC is not set
CONFIG_DRM_GEM_SHMEM_HELPER=y

#
# I2C encoder or helper chips
#
# CONFIG_DRM_I2C_CH7006 is not set
# CONFIG_DRM_I2C_SIL164 is not set
# CONFIG_DRM_I2C_NXP_TDA998X is not set
# CONFIG_DRM_I2C_NXP_TDA9950 is not set
# end of I2C encoder or helper chips

#
# ARM devices
#
# end of ARM devices

# CONFIG_DRM_RADEON is not set
# CONFIG_DRM_AMDGPU is not set
# CONFIG_DRM_NOUVEAU is not set
# CONFIG_DRM_I915 is not set
# CONFIG_DRM_VGEM is not set
# CONFIG_DRM_VKMS is not set
# CONFIG_DRM_VMWGFX is not set
# CONFIG_DRM_GMA500 is not set
# CONFIG_DRM_UDL is not set
# CONFIG_DRM_AST is not set
CONFIG_DRM_MGAG200=m
# CONFIG_DRM_QXL is not set
# CONFIG_DRM_BOCHS is not set
# CONFIG_DRM_VIRTIO_GPU is not set
CONFIG_DRM_PANEL=y

#
# Display Panels
#
# end of Display Panels

CONFIG_DRM_BRIDGE=y
CONFIG_DRM_PANEL_BRIDGE=y

#
# Display Interface Bridges
#
# CONFIG_DRM_ANALOGIX_ANX78XX is not set
# end of Display Interface Bridges

# CONFIG_DRM_ETNAVIV is not set
# CONFIG_DRM_CIRRUS_QEMU is not set
# CONFIG_DRM_GM12U320 is not set
# CONFIG_TINYDRM_HX8357D is not set
# CONFIG_TINYDRM_ILI9225 is not set
# CONFIG_TINYDRM_ILI9341 is not set
# CONFIG_TINYDRM_ILI9486 is not set
# CONFIG_TINYDRM_MI0283QT is not set
# CONFIG_TINYDRM_REPAPER is not set
# CONFIG_TINYDRM_ST7586 is not set
# CONFIG_TINYDRM_ST7735R is not set
# CONFIG_DRM_XEN is not set
# CONFIG_DRM_VBOXVIDEO is not set
# CONFIG_DRM_LEGACY is not set
CONFIG_DRM_PANEL_ORIENTATION_QUIRKS=y

#
# Frame buffer Devices
#
CONFIG_FB_CMDLINE=y
CONFIG_FB_NOTIFY=y
CONFIG_FB=y
CONFIG_FIRMWARE_EDID=y
CONFIG_FB_BOOT_VESA_SUPPORT=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
CONFIG_FB_SYS_FILLRECT=m
CONFIG_FB_SYS_COPYAREA=m
CONFIG_FB_SYS_IMAGEBLIT=m
# CONFIG_FB_FOREIGN_ENDIAN is not set
CONFIG_FB_SYS_FOPS=m
CONFIG_FB_DEFERRED_IO=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y

#
# Frame buffer hardware drivers
#
# CONFIG_FB_CIRRUS is not set
# CONFIG_FB_PM2 is not set
# CONFIG_FB_CYBER2000 is not set
# CONFIG_FB_ARC is not set
CONFIG_FB_ASILIANT=y
CONFIG_FB_IMSTT=y
# CONFIG_FB_VGA16 is not set
# CONFIG_FB_UVESA is not set
CONFIG_FB_VESA=y
CONFIG_FB_EFI=y
# CONFIG_FB_N411 is not set
# CONFIG_FB_HGA is not set
# CONFIG_FB_OPENCORES is not set
# CONFIG_FB_S1D13XXX is not set
# CONFIG_FB_NVIDIA is not set
# CONFIG_FB_RIVA is not set
# CONFIG_FB_I740 is not set
# CONFIG_FB_LE80578 is not set
# CONFIG_FB_INTEL is not set
# CONFIG_FB_MATROX is not set
# CONFIG_FB_RADEON is not set
# CONFIG_FB_ATY128 is not set
# CONFIG_FB_ATY is not set
# CONFIG_FB_S3 is not set
# CONFIG_FB_SAVAGE is not set
# CONFIG_FB_SIS is not set
# CONFIG_FB_VIA is not set
# CONFIG_FB_NEOMAGIC is not set
# CONFIG_FB_KYRO is not set
# CONFIG_FB_3DFX is not set
# CONFIG_FB_VOODOO1 is not set
# CONFIG_FB_VT8623 is not set
# CONFIG_FB_TRIDENT is not set
# CONFIG_FB_ARK is not set
# CONFIG_FB_PM3 is not set
# CONFIG_FB_CARMINE is not set
# CONFIG_FB_SMSCUFX is not set
# CONFIG_FB_UDL is not set
# CONFIG_FB_IBM_GXT4500 is not set
# CONFIG_FB_VIRTUAL is not set
# CONFIG_XEN_FBDEV_FRONTEND is not set
# CONFIG_FB_METRONOME is not set
# CONFIG_FB_MB862XX is not set
CONFIG_FB_SIMPLE=y
# CONFIG_FB_SM712 is not set
# end of Frame buffer Devices

#
# Backlight & LCD device support
#
# CONFIG_LCD_CLASS_DEVICE is not set
# CONFIG_BACKLIGHT_CLASS_DEVICE is not set
# end of Backlight & LCD device support

CONFIG_HDMI=y

#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_DUMMY_CONSOLE_COLUMNS=80
CONFIG_DUMMY_CONSOLE_ROWS=25
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
# CONFIG_FRAMEBUFFER_CONSOLE_DEFERRED_TAKEOVER is not set
# end of Console display driver support

# CONFIG_LOGO is not set
# end of Graphics support

# CONFIG_SOUND is not set

#
# HID support
#
# CONFIG_HID is not set

#
# USB HID support
#
# CONFIG_USB_HID is not set
CONFIG_HID_PID=y

#
# USB HID Boot Protocol drivers
#
# CONFIG_USB_KBD is not set
# CONFIG_USB_MOUSE is not set
# end of USB HID Boot Protocol drivers
# end of USB HID support

#
# I2C HID support
#
# CONFIG_I2C_HID is not set
# end of I2C HID support

#
# Intel ISH HID support
#
# CONFIG_INTEL_ISH_HID is not set
# end of Intel ISH HID support
# end of HID support

CONFIG_USB_OHCI_LITTLE_ENDIAN=y
CONFIG_USB_SUPPORT=y
CONFIG_USB_COMMON=y
CONFIG_USB_LED_TRIG=y
# CONFIG_USB_ULPI_BUS is not set
# CONFIG_USB_CONN_GPIO is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB=y
CONFIG_USB_PCI=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y

#
# Miscellaneous USB options
#
CONFIG_USB_DEFAULT_PERSIST=y
CONFIG_USB_DYNAMIC_MINORS=y
# CONFIG_USB_OTG is not set
# CONFIG_USB_OTG_PRODUCTLIST is not set
# CONFIG_USB_OTG_DISABLE_EXTERNAL_HUB is not set
# CONFIG_USB_LEDS_TRIGGER_USBPORT is not set
CONFIG_USB_AUTOSUSPEND_DELAY=2
# CONFIG_USB_MON is not set

#
# USB Host Controller Drivers
#
# CONFIG_USB_C67X00_HCD is not set
CONFIG_USB_XHCI_HCD=y
# CONFIG_USB_XHCI_DBGCAP is not set
CONFIG_USB_XHCI_PCI=y
# CONFIG_USB_XHCI_PCI_RENESAS is not set
# CONFIG_USB_XHCI_PLATFORM is not set
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_EHCI_TT_NEWSCHED=y
CONFIG_USB_EHCI_PCI=y
# CONFIG_USB_EHCI_FSL is not set
CONFIG_USB_EHCI_HCD_PLATFORM=y
# CONFIG_USB_OXU210HP_HCD is not set
# CONFIG_USB_ISP116X_HCD is not set
# CONFIG_USB_FOTG210_HCD is not set
# CONFIG_USB_MAX3421_HCD is not set
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_OHCI_HCD_PCI=y
CONFIG_USB_OHCI_HCD_PLATFORM=y
CONFIG_USB_UHCI_HCD=y
# CONFIG_USB_SL811_HCD is not set
# CONFIG_USB_R8A66597_HCD is not set
# CONFIG_USB_HCD_TEST_MODE is not set

#
# USB Device Class drivers
#
# CONFIG_USB_ACM is not set
# CONFIG_USB_PRINTER is not set
# CONFIG_USB_WDM is not set
# CONFIG_USB_TMC is not set

#
# NOTE: USB_STORAGE depends on SCSI but BLK_DEV_SD may
#

#
# also be needed; see USB_STORAGE Help for more info
#
# CONFIG_USB_STORAGE is not set

#
# USB Imaging devices
#
# CONFIG_USB_MDC800 is not set
# CONFIG_USB_MICROTEK is not set
# CONFIG_USBIP_CORE is not set
# CONFIG_USB_CDNS3 is not set
# CONFIG_USB_MUSB_HDRC is not set
# CONFIG_USB_DWC3 is not set
CONFIG_USB_DWC2=y
CONFIG_USB_DWC2_HOST=y

#
# Gadget/Dual-role mode requires USB Gadget support to be enabled
#
CONFIG_USB_DWC2_PCI=y
# CONFIG_USB_DWC2_DEBUG is not set
# CONFIG_USB_DWC2_TRACK_MISSED_SOFS is not set
# CONFIG_USB_CHIPIDEA is not set
# CONFIG_USB_ISP1760 is not set

#
# USB port drivers
#
# CONFIG_USB_SERIAL is not set

#
# USB Miscellaneous drivers
#
# CONFIG_USB_EMI62 is not set
# CONFIG_USB_EMI26 is not set
# CONFIG_USB_ADUTUX is not set
# CONFIG_USB_SEVSEG is not set
# CONFIG_USB_LEGOTOWER is not set
# CONFIG_USB_LCD is not set
# CONFIG_USB_CYPRESS_CY7C63 is not set
# CONFIG_USB_CYTHERM is not set
# CONFIG_USB_IDMOUSE is not set
# CONFIG_USB_FTDI_ELAN is not set
# CONFIG_USB_APPLEDISPLAY is not set
# CONFIG_APPLE_MFI_FASTCHARGE is not set
# CONFIG_USB_SISUSBVGA is not set
# CONFIG_USB_LD is not set
# CONFIG_USB_TRANCEVIBRATOR is not set
# CONFIG_USB_IOWARRIOR is not set
# CONFIG_USB_TEST is not set
# CONFIG_USB_EHSET_TEST_FIXTURE is not set
# CONFIG_USB_ISIGHTFW is not set
# CONFIG_USB_YUREX is not set
# CONFIG_USB_EZUSB_FX2 is not set
# CONFIG_USB_HUB_USB251XB is not set
# CONFIG_USB_HSIC_USB3503 is not set
# CONFIG_USB_HSIC_USB4604 is not set
# CONFIG_USB_LINK_LAYER_TEST is not set
# CONFIG_USB_CHAOSKEY is not set

#
# USB Physical Layer drivers
#
CONFIG_USB_PHY=y
CONFIG_NOP_USB_XCEIV=y
# CONFIG_USB_GPIO_VBUS is not set
# CONFIG_USB_ISP1301 is not set
# end of USB Physical Layer drivers

# CONFIG_USB_GADGET is not set
# CONFIG_TYPEC is not set
# CONFIG_USB_ROLE_SWITCH is not set
CONFIG_MMC=y
# CONFIG_MMC_BLOCK is not set
# CONFIG_SDIO_UART is not set
# CONFIG_MMC_TEST is not set

#
# MMC/SD/SDIO Host Controller Drivers
#
# CONFIG_MMC_DEBUG is not set
# CONFIG_MMC_SDHCI is not set
# CONFIG_MMC_WBSD is not set
# CONFIG_MMC_TIFM_SD is not set
# CONFIG_MMC_SPI is not set
# CONFIG_MMC_CB710 is not set
# CONFIG_MMC_VIA_SDMMC is not set
# CONFIG_MMC_VUB300 is not set
# CONFIG_MMC_USHC is not set
# CONFIG_MMC_USDHI6ROL0 is not set
# CONFIG_MMC_CQHCI is not set
# CONFIG_MMC_HSQ is not set
# CONFIG_MMC_TOSHIBA_PCI is not set
# CONFIG_MMC_MTK is not set
# CONFIG_MEMSTICK is not set
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
# CONFIG_LEDS_CLASS_FLASH is not set
# CONFIG_LEDS_CLASS_MULTICOLOR is not set
# CONFIG_LEDS_BRIGHTNESS_HW_CHANGED is not set

#
# LED drivers
#
# CONFIG_LEDS_88PM860X is not set
# CONFIG_LEDS_APU is not set
# CONFIG_LEDS_LM3530 is not set
# CONFIG_LEDS_LM3532 is not set
# CONFIG_LEDS_LM3642 is not set
# CONFIG_LEDS_PCA9532 is not set
# CONFIG_LEDS_GPIO is not set
# CONFIG_LEDS_LP3944 is not set
# CONFIG_LEDS_LP3952 is not set
# CONFIG_LEDS_LP8788 is not set
# CONFIG_LEDS_CLEVO_MAIL is not set
# CONFIG_LEDS_PCA955X is not set
# CONFIG_LEDS_PCA963X is not set
# CONFIG_LEDS_WM831X_STATUS is not set
# CONFIG_LEDS_WM8350 is not set
# CONFIG_LEDS_DA903X is not set
# CONFIG_LEDS_DA9052 is not set
# CONFIG_LEDS_DAC124S085 is not set
# CONFIG_LEDS_PWM is not set
# CONFIG_LEDS_REGULATOR is not set
# CONFIG_LEDS_BD2802 is not set
# CONFIG_LEDS_INTEL_SS4200 is not set
# CONFIG_LEDS_ADP5520 is not set
# CONFIG_LEDS_TCA6507 is not set
# CONFIG_LEDS_TLC591XX is not set
# CONFIG_LEDS_MAX8997 is not set
# CONFIG_LEDS_LM355x is not set

#
# LED driver for blink(1) USB RGB LED is under Special HID drivers (HID_THINGM)
#
# CONFIG_LEDS_BLINKM is not set
# CONFIG_LEDS_MLXCPLD is not set
# CONFIG_LEDS_MLXREG is not set
# CONFIG_LEDS_USER is not set
# CONFIG_LEDS_NIC78BX is not set
# CONFIG_LEDS_TI_LMU_COMMON is not set

#
# LED Triggers
#
CONFIG_LEDS_TRIGGERS=y
# CONFIG_LEDS_TRIGGER_TIMER is not set
# CONFIG_LEDS_TRIGGER_ONESHOT is not set
# CONFIG_LEDS_TRIGGER_DISK is not set
# CONFIG_LEDS_TRIGGER_HEARTBEAT is not set
# CONFIG_LEDS_TRIGGER_BACKLIGHT is not set
CONFIG_LEDS_TRIGGER_CPU=y
# CONFIG_LEDS_TRIGGER_ACTIVITY is not set
# CONFIG_LEDS_TRIGGER_GPIO is not set
# CONFIG_LEDS_TRIGGER_DEFAULT_ON is not set

#
# iptables trigger is under Netfilter config (LED target)
#
# CONFIG_LEDS_TRIGGER_TRANSIENT is not set
# CONFIG_LEDS_TRIGGER_CAMERA is not set
# CONFIG_LEDS_TRIGGER_PANIC is not set
# CONFIG_LEDS_TRIGGER_NETDEV is not set
# CONFIG_LEDS_TRIGGER_PATTERN is not set
# CONFIG_LEDS_TRIGGER_AUDIO is not set
# CONFIG_ACCESSIBILITY is not set
# CONFIG_INFINIBAND is not set
CONFIG_EDAC_ATOMIC_SCRUB=y
CONFIG_EDAC_SUPPORT=y
CONFIG_EDAC=y
# CONFIG_EDAC_LEGACY_SYSFS is not set
# CONFIG_EDAC_DEBUG is not set
# CONFIG_EDAC_DECODE_MCE is not set
# CONFIG_EDAC_GHES is not set
# CONFIG_EDAC_E752X is not set
# CONFIG_EDAC_I82975X is not set
# CONFIG_EDAC_I3000 is not set
# CONFIG_EDAC_I3200 is not set
# CONFIG_EDAC_IE31200 is not set
# CONFIG_EDAC_X38 is not set
# CONFIG_EDAC_I5400 is not set
# CONFIG_EDAC_I7CORE is not set
# CONFIG_EDAC_I5000 is not set
# CONFIG_EDAC_I5100 is not set
# CONFIG_EDAC_I7300 is not set
CONFIG_EDAC_SBRIDGE=m
# CONFIG_EDAC_SKX is not set
# CONFIG_EDAC_I10NM is not set
# CONFIG_EDAC_PND2 is not set
CONFIG_RTC_LIB=y
CONFIG_RTC_MC146818_LIB=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_HCTOSYS=y
CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
CONFIG_RTC_SYSTOHC=y
CONFIG_RTC_SYSTOHC_DEVICE="rtc0"
# CONFIG_RTC_DEBUG is not set
CONFIG_RTC_NVMEM=y

#
# RTC interfaces
#
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_INTF_PROC=y
CONFIG_RTC_INTF_DEV=y
# CONFIG_RTC_INTF_DEV_UIE_EMUL is not set
# CONFIG_RTC_DRV_TEST is not set

#
# I2C RTC drivers
#
# CONFIG_RTC_DRV_88PM860X is not set
# CONFIG_RTC_DRV_ABB5ZES3 is not set
# CONFIG_RTC_DRV_ABEOZ9 is not set
# CONFIG_RTC_DRV_ABX80X is not set
# CONFIG_RTC_DRV_DS1307 is not set
# CONFIG_RTC_DRV_DS1374 is not set
# CONFIG_RTC_DRV_DS1672 is not set
# CONFIG_RTC_DRV_LP8788 is not set
# CONFIG_RTC_DRV_MAX6900 is not set
# CONFIG_RTC_DRV_MAX8925 is not set
# CONFIG_RTC_DRV_MAX8998 is not set
# CONFIG_RTC_DRV_MAX8997 is not set
# CONFIG_RTC_DRV_RS5C372 is not set
# CONFIG_RTC_DRV_ISL1208 is not set
# CONFIG_RTC_DRV_ISL12022 is not set
# CONFIG_RTC_DRV_X1205 is not set
# CONFIG_RTC_DRV_PCF8523 is not set
# CONFIG_RTC_DRV_PCF85063 is not set
# CONFIG_RTC_DRV_PCF85363 is not set
# CONFIG_RTC_DRV_PCF8563 is not set
# CONFIG_RTC_DRV_PCF8583 is not set
# CONFIG_RTC_DRV_M41T80 is not set
# CONFIG_RTC_DRV_BQ32K is not set
# CONFIG_RTC_DRV_PALMAS is not set
# CONFIG_RTC_DRV_TPS6586X is not set
# CONFIG_RTC_DRV_TPS65910 is not set
# CONFIG_RTC_DRV_TPS80031 is not set
# CONFIG_RTC_DRV_RC5T583 is not set
# CONFIG_RTC_DRV_S35390A is not set
# CONFIG_RTC_DRV_FM3130 is not set
# CONFIG_RTC_DRV_RX8010 is not set
# CONFIG_RTC_DRV_RX8581 is not set
# CONFIG_RTC_DRV_RX8025 is not set
# CONFIG_RTC_DRV_EM3027 is not set
# CONFIG_RTC_DRV_RV3028 is not set
# CONFIG_RTC_DRV_RV8803 is not set
# CONFIG_RTC_DRV_S5M is not set
# CONFIG_RTC_DRV_SD3078 is not set

#
# SPI RTC drivers
#
# CONFIG_RTC_DRV_M41T93 is not set
# CONFIG_RTC_DRV_M41T94 is not set
# CONFIG_RTC_DRV_DS1302 is not set
# CONFIG_RTC_DRV_DS1305 is not set
# CONFIG_RTC_DRV_DS1343 is not set
# CONFIG_RTC_DRV_DS1347 is not set
# CONFIG_RTC_DRV_DS1390 is not set
# CONFIG_RTC_DRV_MAX6916 is not set
# CONFIG_RTC_DRV_R9701 is not set
# CONFIG_RTC_DRV_RX4581 is not set
# CONFIG_RTC_DRV_RX6110 is not set
# CONFIG_RTC_DRV_RS5C348 is not set
# CONFIG_RTC_DRV_MAX6902 is not set
# CONFIG_RTC_DRV_PCF2123 is not set
# CONFIG_RTC_DRV_MCP795 is not set
CONFIG_RTC_I2C_AND_SPI=y

#
# SPI and I2C RTC drivers
#
# CONFIG_RTC_DRV_DS3232 is not set
# CONFIG_RTC_DRV_PCF2127 is not set
# CONFIG_RTC_DRV_RV3029C2 is not set

#
# Platform RTC drivers
#
CONFIG_RTC_DRV_CMOS=y
# CONFIG_RTC_DRV_DS1286 is not set
# CONFIG_RTC_DRV_DS1511 is not set
# CONFIG_RTC_DRV_DS1553 is not set
# CONFIG_RTC_DRV_DS1685_FAMILY is not set
# CONFIG_RTC_DRV_DS1742 is not set
# CONFIG_RTC_DRV_DS2404 is not set
# CONFIG_RTC_DRV_DA9052 is not set
# CONFIG_RTC_DRV_DA9055 is not set
# CONFIG_RTC_DRV_DA9063 is not set
# CONFIG_RTC_DRV_STK17TA8 is not set
# CONFIG_RTC_DRV_M48T86 is not set
# CONFIG_RTC_DRV_M48T35 is not set
# CONFIG_RTC_DRV_M48T59 is not set
# CONFIG_RTC_DRV_MSM6242 is not set
# CONFIG_RTC_DRV_BQ4802 is not set
# CONFIG_RTC_DRV_RP5C01 is not set
# CONFIG_RTC_DRV_V3020 is not set
# CONFIG_RTC_DRV_WM831X is not set
# CONFIG_RTC_DRV_WM8350 is not set
# CONFIG_RTC_DRV_AB3100 is not set

#
# on-CPU RTC drivers
#
# CONFIG_RTC_DRV_FTRTC010 is not set
# CONFIG_RTC_DRV_PCAP is not set

#
# HID Sensor RTC drivers
#
CONFIG_DMADEVICES=y
# CONFIG_DMADEVICES_DEBUG is not set

#
# DMA Devices
#
CONFIG_DMA_ENGINE=y
CONFIG_DMA_ACPI=y
# CONFIG_ALTERA_MSGDMA is not set
# CONFIG_INTEL_IDMA64 is not set
# CONFIG_INTEL_IDXD is not set
CONFIG_INTEL_IOATDMA=m
# CONFIG_PLX_DMA is not set
# CONFIG_XILINX_ZYNQMP_DPDMA is not set
# CONFIG_QCOM_HIDMA_MGMT is not set
# CONFIG_QCOM_HIDMA is not set
CONFIG_DW_DMAC_CORE=y
# CONFIG_DW_DMAC is not set
CONFIG_DW_DMAC_PCI=y
# CONFIG_DW_EDMA is not set
# CONFIG_DW_EDMA_PCIE is not set
# CONFIG_SF_PDMA is not set

#
# DMA Clients
#
CONFIG_ASYNC_TX_DMA=y
# CONFIG_DMATEST is not set
CONFIG_DMA_ENGINE_RAID=y

#
# DMABUF options
#
CONFIG_SYNC_FILE=y
# CONFIG_SW_SYNC is not set
# CONFIG_UDMABUF is not set
# CONFIG_DMABUF_MOVE_NOTIFY is not set
# CONFIG_DMABUF_SELFTESTS is not set
# CONFIG_DMABUF_HEAPS is not set
# end of DMABUF options

CONFIG_DCA=m
CONFIG_AUXDISPLAY=y
# CONFIG_HD44780 is not set
# CONFIG_IMG_ASCII_LCD is not set
# CONFIG_CHARLCD_BL_OFF is not set
# CONFIG_CHARLCD_BL_ON is not set
CONFIG_CHARLCD_BL_FLASH=y
# CONFIG_UIO is not set
# CONFIG_VFIO is not set
CONFIG_IRQ_BYPASS_MANAGER=m
CONFIG_VIRT_DRIVERS=y
# CONFIG_VBOXGUEST is not set
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MENU=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_PCI_LEGACY=y
# CONFIG_VIRTIO_PMEM is not set
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MEM=m
# CONFIG_VIRTIO_INPUT is not set
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y
# CONFIG_VDPA is not set
CONFIG_VHOST_MENU=y
# CONFIG_VHOST_NET is not set
# CONFIG_VHOST_CROSS_ENDIAN_LEGACY is not set

#
# Microsoft Hyper-V guest support
#
# CONFIG_HYPERV is not set
# end of Microsoft Hyper-V guest support

#
# Xen driver support
#
CONFIG_XEN_BALLOON=y
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG=y
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT=512
CONFIG_XEN_SCRUB_PAGES_DEFAULT=y
# CONFIG_XEN_DEV_EVTCHN is not set
CONFIG_XEN_BACKEND=y
# CONFIG_XENFS is not set
CONFIG_XEN_SYS_HYPERVISOR=y
CONFIG_XEN_XENBUS_FRONTEND=y
# CONFIG_XEN_GNTDEV is not set
# CONFIG_XEN_GRANT_DEV_ALLOC is not set
# CONFIG_XEN_GRANT_DMA_ALLOC is not set
CONFIG_SWIOTLB_XEN=y
# CONFIG_XEN_PCIDEV_BACKEND is not set
# CONFIG_XEN_PVCALLS_FRONTEND is not set
# CONFIG_XEN_PVCALLS_BACKEND is not set
CONFIG_XEN_PRIVCMD=m
CONFIG_XEN_ACPI_PROCESSOR=y
CONFIG_XEN_MCE_LOG=y
CONFIG_XEN_HAVE_PVMMU=y
CONFIG_XEN_EFI=y
CONFIG_XEN_AUTO_XLATE=y
CONFIG_XEN_ACPI=y
CONFIG_XEN_HAVE_VPMU=y
# end of Xen driver support

# CONFIG_GREYBUS is not set
CONFIG_STAGING=y
# CONFIG_COMEDI is not set
# CONFIG_RTL8192U is not set
# CONFIG_RTLLIB is not set
# CONFIG_RTS5208 is not set
# CONFIG_FB_SM750 is not set
CONFIG_STAGING_MEDIA=y

#
# Android
#
# end of Android

# CONFIG_LTE_GDM724X is not set
# CONFIG_GS_FPGABOOT is not set
CONFIG_UNISYSSPAR=y
# CONFIG_FB_TFT is not set
# CONFIG_KS7010 is not set
# CONFIG_PI433 is not set

#
# Gasket devices
#
# CONFIG_STAGING_GASKET_FRAMEWORK is not set
# end of Gasket devices

# CONFIG_FIELDBUS_DEV is not set
# CONFIG_QLGE is not set
CONFIG_X86_PLATFORM_DEVICES=y
CONFIG_ACPI_WMI=m
# CONFIG_WMI_BMOF is not set
# CONFIG_ALIENWARE_WMI is not set
# CONFIG_HUAWEI_WMI is not set
# CONFIG_INTEL_WMI_SBL_FW_UPDATE is not set
# CONFIG_INTEL_WMI_THUNDERBOLT is not set
# CONFIG_MXM_WMI is not set
# CONFIG_PEAQ_WMI is not set
# CONFIG_XIAOMI_WMI is not set
# CONFIG_ACERHDF is not set
# CONFIG_ACER_WIRELESS is not set
# CONFIG_ASUS_WIRELESS is not set
CONFIG_DCDBAS=m
# CONFIG_DELL_SMBIOS is not set
# CONFIG_DELL_RBTN is not set
# CONFIG_DELL_RBU is not set
# CONFIG_DELL_SMO8800 is not set
# CONFIG_DELL_WMI_AIO is not set
# CONFIG_DELL_WMI_LED is not set
# CONFIG_AMILO_RFKILL is not set
# CONFIG_FUJITSU_TABLET is not set
# CONFIG_GPD_POCKET_FAN is not set
# CONFIG_HP_ACCEL is not set
# CONFIG_HP_WIRELESS is not set
# CONFIG_HP_WMI is not set
# CONFIG_IBM_RTL is not set
# CONFIG_SENSORS_HDAPS is not set
# CONFIG_INTEL_ATOMISP2_PM is not set
# CONFIG_INTEL_HID_EVENT is not set
# CONFIG_INTEL_INT0002_VGPIO is not set
# CONFIG_INTEL_MENLOW is not set
# CONFIG_INTEL_VBTN is not set
# CONFIG_SURFACE3_WMI is not set
# CONFIG_SURFACE_3_POWER_OPREGION is not set
# CONFIG_SURFACE_PRO3_BUTTON is not set
# CONFIG_PCENGINES_APU2 is not set
# CONFIG_SAMSUNG_Q10 is not set
# CONFIG_TOSHIBA_BT_RFKILL is not set
# CONFIG_TOSHIBA_HAPS is not set
# CONFIG_TOSHIBA_WMI is not set
# CONFIG_ACPI_CMPC is not set
# CONFIG_LG_LAPTOP is not set
# CONFIG_SYSTEM76_ACPI is not set
# CONFIG_TOPSTAR_LAPTOP is not set
# CONFIG_I2C_MULTI_INSTANTIATE is not set
# CONFIG_MLX_PLATFORM is not set
# CONFIG_INTEL_IPS is not set
# CONFIG_INTEL_RST is not set
# CONFIG_INTEL_SMARTCONNECT is not set

#
# Intel Speed Select Technology interface support
#
# CONFIG_INTEL_SPEED_SELECT_INTERFACE is not set
# end of Intel Speed Select Technology interface support

# CONFIG_INTEL_TURBO_MAX_3 is not set
# CONFIG_INTEL_UNCORE_FREQ_CONTROL is not set
# CONFIG_INTEL_PMC_CORE is not set
# CONFIG_INTEL_PUNIT_IPC is not set
# CONFIG_INTEL_SCU_PCI is not set
# CONFIG_INTEL_SCU_PLATFORM is not set
CONFIG_PMC_ATOM=y
# CONFIG_MFD_CROS_EC is not set
CONFIG_CHROME_PLATFORMS=y
# CONFIG_CHROMEOS_LAPTOP is not set
# CONFIG_CHROMEOS_PSTORE is not set
# CONFIG_CHROMEOS_TBMC is not set
# CONFIG_CROS_EC is not set
# CONFIG_CROS_KBD_LED_BACKLIGHT is not set
# CONFIG_MELLANOX_PLATFORM is not set
CONFIG_HAVE_CLK=y
CONFIG_CLKDEV_LOOKUP=y
CONFIG_HAVE_CLK_PREPARE=y
CONFIG_COMMON_CLK=y
# CONFIG_COMMON_CLK_WM831X is not set
# CONFIG_COMMON_CLK_MAX9485 is not set
# CONFIG_COMMON_CLK_SI5341 is not set
# CONFIG_COMMON_CLK_SI5351 is not set
# CONFIG_COMMON_CLK_SI544 is not set
# CONFIG_COMMON_CLK_CDCE706 is not set
# CONFIG_COMMON_CLK_CS2000_CP is not set
# CONFIG_COMMON_CLK_S2MPS11 is not set
# CONFIG_CLK_TWL6040 is not set
# CONFIG_COMMON_CLK_PALMAS is not set
# CONFIG_COMMON_CLK_PWM is not set
# CONFIG_HWSPINLOCK is not set

#
# Clock Source drivers
#
CONFIG_CLKEVT_I8253=y
CONFIG_I8253_LOCK=y
CONFIG_CLKBLD_I8253=y
# end of Clock Source drivers

CONFIG_MAILBOX=y
CONFIG_PCC=y
# CONFIG_ALTERA_MBOX is not set
CONFIG_IOMMU_IOVA=y
CONFIG_IOASID=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y

#
# Generic IOMMU Pagetable Support
#
# end of Generic IOMMU Pagetable Support

# CONFIG_IOMMU_DEBUGFS is not set
# CONFIG_IOMMU_DEFAULT_PASSTHROUGH is not set
CONFIG_IOMMU_DMA=y
CONFIG_AMD_IOMMU=y
# CONFIG_AMD_IOMMU_V2 is not set
CONFIG_DMAR_TABLE=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_SVM is not set
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON is not set
CONFIG_IRQ_REMAP=y

#
# Remoteproc drivers
#
# CONFIG_REMOTEPROC is not set
# end of Remoteproc drivers

#
# Rpmsg drivers
#
# CONFIG_RPMSG_QCOM_GLINK_RPM is not set
# CONFIG_RPMSG_VIRTIO is not set
# end of Rpmsg drivers

# CONFIG_SOUNDWIRE is not set

#
# SOC (System On Chip) specific Drivers
#

#
# Amlogic SoC drivers
#
# end of Amlogic SoC drivers

#
# Aspeed SoC drivers
#
# end of Aspeed SoC drivers

#
# Broadcom SoC drivers
#
# end of Broadcom SoC drivers

#
# NXP/Freescale QorIQ SoC drivers
#
# end of NXP/Freescale QorIQ SoC drivers

#
# i.MX SoC drivers
#
# end of i.MX SoC drivers

#
# Qualcomm SoC drivers
#
# end of Qualcomm SoC drivers

CONFIG_SOC_TI=y

#
# Xilinx SoC drivers
#
# CONFIG_XILINX_VCU is not set
# end of Xilinx SoC drivers
# end of SOC (System On Chip) specific Drivers

CONFIG_PM_DEVFREQ=y

#
# DEVFREQ Governors
#
CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND=y
CONFIG_DEVFREQ_GOV_PERFORMANCE=y
CONFIG_DEVFREQ_GOV_POWERSAVE=y
CONFIG_DEVFREQ_GOV_USERSPACE=y
# CONFIG_DEVFREQ_GOV_PASSIVE is not set

#
# DEVFREQ Drivers
#
CONFIG_PM_DEVFREQ_EVENT=y
CONFIG_EXTCON=y

#
# Extcon Device Drivers
#
# CONFIG_EXTCON_FSA9480 is not set
# CONFIG_EXTCON_GPIO is not set
# CONFIG_EXTCON_INTEL_INT3496 is not set
# CONFIG_EXTCON_MAX14577 is not set
# CONFIG_EXTCON_MAX3355 is not set
# CONFIG_EXTCON_MAX77693 is not set
# CONFIG_EXTCON_MAX77843 is not set
# CONFIG_EXTCON_MAX8997 is not set
# CONFIG_EXTCON_PALMAS is not set
# CONFIG_EXTCON_PTN5150 is not set
# CONFIG_EXTCON_RT8973A is not set
# CONFIG_EXTCON_SM5502 is not set
# CONFIG_EXTCON_USB_GPIO is not set
CONFIG_MEMORY=y
# CONFIG_IIO is not set
# CONFIG_NTB is not set
CONFIG_VME_BUS=y

#
# VME Bridge Drivers
#
# CONFIG_VME_CA91CX42 is not set
# CONFIG_VME_TSI148 is not set
# CONFIG_VME_FAKE is not set

#
# VME Board Drivers
#
# CONFIG_VMIVME_7805 is not set

#
# VME Device Drivers
#
# CONFIG_VME_USER is not set
CONFIG_PWM=y
CONFIG_PWM_SYSFS=y
# CONFIG_PWM_DEBUG is not set
# CONFIG_PWM_CRC is not set
# CONFIG_PWM_LPSS_PCI is not set
# CONFIG_PWM_LPSS_PLATFORM is not set
# CONFIG_PWM_PCA9685 is not set
# CONFIG_PWM_TWL is not set
# CONFIG_PWM_TWL_LED is not set

#
# IRQ chip support
#
# end of IRQ chip support

# CONFIG_IPACK_BUS is not set
CONFIG_RESET_CONTROLLER=y
# CONFIG_RESET_BRCMSTB_RESCAL is not set
# CONFIG_RESET_TI_SYSCON is not set

#
# PHY Subsystem
#
CONFIG_GENERIC_PHY=y
# CONFIG_BCM_KONA_USB2_PHY is not set
# CONFIG_PHY_PXA_28NM_HSIC is not set
# CONFIG_PHY_PXA_28NM_USB2 is not set
# CONFIG_PHY_SAMSUNG_USB2 is not set
# CONFIG_PHY_INTEL_EMMC is not set
# end of PHY Subsystem

CONFIG_POWERCAP=y
# CONFIG_INTEL_RAPL is not set
# CONFIG_IDLE_INJECT is not set
# CONFIG_MCB is not set

#
# Performance monitor support
#
# end of Performance monitor support

CONFIG_RAS=y
# CONFIG_RAS_CEC is not set
# CONFIG_USB4 is not set

#
# Android
#
# CONFIG_ANDROID is not set
# end of Android

CONFIG_LIBNVDIMM=y
# CONFIG_BLK_DEV_PMEM is not set
# CONFIG_ND_BLK is not set
CONFIG_ND_CLAIM=y
CONFIG_BTT=y
CONFIG_NVDIMM_KEYS=y
CONFIG_DAX=y
# CONFIG_DEV_DAX is not set
CONFIG_NVMEM=y
CONFIG_NVMEM_SYSFS=y

#
# HW tracing support
#
# CONFIG_STM is not set
# CONFIG_INTEL_TH is not set
# end of HW tracing support

# CONFIG_FPGA is not set
# CONFIG_TEE is not set
CONFIG_PM_OPP=y
# CONFIG_UNISYS_VISORBUS is not set
# CONFIG_SIOX is not set
# CONFIG_SLIMBUS is not set
# CONFIG_INTERCONNECT is not set
# CONFIG_COUNTER is not set
# end of Device Drivers

#
# File systems
#
CONFIG_DCACHE_WORD_ACCESS=y
# CONFIG_VALIDATE_FS_PARSER is not set
CONFIG_FS_IOMAP=y
# CONFIG_EXT2_FS is not set
# CONFIG_EXT3_FS is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_USE_FOR_EXT2=y
CONFIG_EXT4_FS_POSIX_ACL=y
CONFIG_EXT4_FS_SECURITY=y
# CONFIG_EXT4_DEBUG is not set
CONFIG_JBD2=y
# CONFIG_JBD2_DEBUG is not set
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_XFS_FS is not set
# CONFIG_GFS2_FS is not set
# CONFIG_BTRFS_FS is not set
# CONFIG_NILFS2_FS is not set
# CONFIG_F2FS_FS is not set
CONFIG_FS_DAX=y
CONFIG_FS_POSIX_ACL=y
CONFIG_EXPORTFS=y
# CONFIG_EXPORTFS_BLOCK_OPS is not set
CONFIG_FILE_LOCKING=y
CONFIG_MANDATORY_FILE_LOCKING=y
# CONFIG_FS_ENCRYPTION is not set
# CONFIG_FS_VERITY is not set
CONFIG_FSNOTIFY=y
CONFIG_DNOTIFY=y
CONFIG_INOTIFY_USER=y
CONFIG_FANOTIFY=y
CONFIG_FANOTIFY_ACCESS_PERMISSIONS=y
CONFIG_QUOTA=y
CONFIG_QUOTA_NETLINK_INTERFACE=y
# CONFIG_PRINT_QUOTA_WARNING is not set
# CONFIG_QUOTA_DEBUG is not set
# CONFIG_QFMT_V1 is not set
# CONFIG_QFMT_V2 is not set
CONFIG_QUOTACTL=y
CONFIG_QUOTACTL_COMPAT=y
# CONFIG_AUTOFS4_FS is not set
CONFIG_AUTOFS_FS=m
CONFIG_FUSE_FS=y
# CONFIG_CUSE is not set
# CONFIG_VIRTIO_FS is not set
CONFIG_OVERLAY_FS=y
# CONFIG_OVERLAY_FS_REDIRECT_DIR is not set
CONFIG_OVERLAY_FS_REDIRECT_ALWAYS_FOLLOW=y
# CONFIG_OVERLAY_FS_INDEX is not set
# CONFIG_OVERLAY_FS_XINO_AUTO is not set
# CONFIG_OVERLAY_FS_METACOPY is not set

#
# Caches
#
# CONFIG_FSCACHE is not set
# end of Caches

#
# CD-ROM/DVD Filesystems
#
# CONFIG_ISO9660_FS is not set
# CONFIG_UDF_FS is not set
# end of CD-ROM/DVD Filesystems

#
# DOS/FAT/EXFAT/NT Filesystems
#
CONFIG_FAT_FS=y
# CONFIG_MSDOS_FS is not set
CONFIG_VFAT_FS=y
CONFIG_FAT_DEFAULT_CODEPAGE=437
CONFIG_FAT_DEFAULT_IOCHARSET="iso8859-1"
# CONFIG_FAT_DEFAULT_UTF8 is not set
# CONFIG_EXFAT_FS is not set
# CONFIG_NTFS_FS is not set
# end of DOS/FAT/EXFAT/NT Filesystems

#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_PROC_KCORE=y
CONFIG_PROC_VMCORE=y
# CONFIG_PROC_VMCORE_DEVICE_DUMP is not set
CONFIG_PROC_SYSCTL=y
CONFIG_PROC_PAGE_MONITOR=y
CONFIG_PROC_CHILDREN=y
CONFIG_PROC_PID_ARCH_STATUS=y
CONFIG_KERNFS=y
CONFIG_SYSFS=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
CONFIG_TMPFS_XATTR=y
# CONFIG_TMPFS_INODE64 is not set
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_MEMFD_CREATE=y
CONFIG_ARCH_HAS_GIGANTIC_PAGE=y
# CONFIG_CONFIGFS_FS is not set
CONFIG_EFIVAR_FS=y
# end of Pseudo filesystems

CONFIG_MISC_FILESYSTEMS=y
# CONFIG_ORANGEFS_FS is not set
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
CONFIG_ECRYPT_FS=y
CONFIG_ECRYPT_FS_MESSAGING=y
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_CRAMFS is not set
# CONFIG_SQUASHFS is not set
# CONFIG_VXFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_OMFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_QNX6FS_FS is not set
# CONFIG_ROMFS_FS is not set
CONFIG_PSTORE=y
CONFIG_PSTORE_DEFLATE_COMPRESS=y
# CONFIG_PSTORE_LZO_COMPRESS is not set
# CONFIG_PSTORE_LZ4_COMPRESS is not set
# CONFIG_PSTORE_LZ4HC_COMPRESS is not set
# CONFIG_PSTORE_842_COMPRESS is not set
# CONFIG_PSTORE_ZSTD_COMPRESS is not set
CONFIG_PSTORE_COMPRESS=y
CONFIG_PSTORE_DEFLATE_COMPRESS_DEFAULT=y
CONFIG_PSTORE_COMPRESS_DEFAULT="deflate"
# CONFIG_PSTORE_CONSOLE is not set
# CONFIG_PSTORE_PMSG is not set
# CONFIG_PSTORE_FTRACE is not set
# CONFIG_PSTORE_RAM is not set
# CONFIG_PSTORE_BLK is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
# CONFIG_EROFS_FS is not set
CONFIG_NETWORK_FILESYSTEMS=y
# CONFIG_NFS_FS is not set
# CONFIG_NFSD is not set
# CONFIG_CEPH_FS is not set
# CONFIG_CIFS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="utf8"
CONFIG_NLS_CODEPAGE_437=y
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_MAC_ROMAN is not set
# CONFIG_NLS_MAC_CELTIC is not set
# CONFIG_NLS_MAC_CENTEURO is not set
# CONFIG_NLS_MAC_CROATIAN is not set
# CONFIG_NLS_MAC_CYRILLIC is not set
# CONFIG_NLS_MAC_GAELIC is not set
# CONFIG_NLS_MAC_GREEK is not set
# CONFIG_NLS_MAC_ICELAND is not set
# CONFIG_NLS_MAC_INUIT is not set
# CONFIG_NLS_MAC_ROMANIAN is not set
# CONFIG_NLS_MAC_TURKISH is not set
# CONFIG_NLS_UTF8 is not set
# CONFIG_UNICODE is not set
CONFIG_IO_WQ=y
# end of File systems

#
# Security options
#
CONFIG_KEYS=y
# CONFIG_KEYS_REQUEST_CACHE is not set
CONFIG_PERSISTENT_KEYRINGS=y
CONFIG_TRUSTED_KEYS=y
CONFIG_ENCRYPTED_KEYS=y
# CONFIG_KEY_DH_OPERATIONS is not set
# CONFIG_SECURITY_DMESG_RESTRICT is not set
CONFIG_SECURITY=y
CONFIG_SECURITY_WRITABLE_HOOKS=y
CONFIG_SECURITYFS=y
CONFIG_SECURITY_NETWORK=y
CONFIG_PAGE_TABLE_ISOLATION=y
CONFIG_SECURITY_PATH=y
CONFIG_INTEL_TXT=y
CONFIG_LSM_MMAP_MIN_ADDR=0
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
# CONFIG_HARDENED_USERCOPY is not set
# CONFIG_FORTIFY_SOURCE is not set
# CONFIG_STATIC_USERMODEHELPER is not set
CONFIG_SECURITY_SELINUX=y
CONFIG_SECURITY_SELINUX_BOOTPARAM=y
CONFIG_SECURITY_SELINUX_DISABLE=y
CONFIG_SECURITY_SELINUX_DEVELOP=y
CONFIG_SECURITY_SELINUX_AVC_STATS=y
CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
CONFIG_SECURITY_SELINUX_SIDTAB_HASH_BITS=9
CONFIG_SECURITY_SELINUX_SID2STR_CACHE_SIZE=256
CONFIG_SECURITY_SMACK=y
# CONFIG_SECURITY_SMACK_BRINGUP is not set
CONFIG_SECURITY_SMACK_NETFILTER=y
# CONFIG_SECURITY_SMACK_APPEND_SIGNALS is not set
CONFIG_SECURITY_TOMOYO=y
CONFIG_SECURITY_TOMOYO_MAX_ACCEPT_ENTRY=2048
CONFIG_SECURITY_TOMOYO_MAX_AUDIT_LOG=1024
# CONFIG_SECURITY_TOMOYO_OMIT_USERSPACE_LOADER is not set
CONFIG_SECURITY_TOMOYO_POLICY_LOADER="/sbin/tomoyo-init"
CONFIG_SECURITY_TOMOYO_ACTIVATION_TRIGGER="/sbin/init"
# CONFIG_SECURITY_TOMOYO_INSECURE_BUILTIN_SETTING is not set
CONFIG_SECURITY_APPARMOR=y
CONFIG_SECURITY_APPARMOR_HASH=y
CONFIG_SECURITY_APPARMOR_HASH_DEFAULT=y
# CONFIG_SECURITY_APPARMOR_DEBUG is not set
# CONFIG_SECURITY_LOADPIN is not set
CONFIG_SECURITY_YAMA=y
# CONFIG_SECURITY_SAFESETID is not set
# CONFIG_SECURITY_LOCKDOWN_LSM is not set
CONFIG_INTEGRITY=y
CONFIG_INTEGRITY_SIGNATURE=y
CONFIG_INTEGRITY_ASYMMETRIC_KEYS=y
CONFIG_INTEGRITY_TRUSTED_KEYRING=y
CONFIG_INTEGRITY_AUDIT=y
CONFIG_IMA=y
CONFIG_IMA_MEASURE_PCR_IDX=10
CONFIG_IMA_LSM_RULES=y
# CONFIG_IMA_TEMPLATE is not set
CONFIG_IMA_NG_TEMPLATE=y
# CONFIG_IMA_SIG_TEMPLATE is not set
CONFIG_IMA_DEFAULT_TEMPLATE="ima-ng"
CONFIG_IMA_DEFAULT_HASH_SHA1=y
# CONFIG_IMA_DEFAULT_HASH_SHA256 is not set
# CONFIG_IMA_DEFAULT_HASH_SHA512 is not set
CONFIG_IMA_DEFAULT_HASH="sha1"
# CONFIG_IMA_WRITE_POLICY is not set
# CONFIG_IMA_READ_POLICY is not set
CONFIG_IMA_APPRAISE=y
# CONFIG_IMA_ARCH_POLICY is not set
# CONFIG_IMA_APPRAISE_BUILD_POLICY is not set
CONFIG_IMA_APPRAISE_BOOTPARAM=y
# CONFIG_IMA_APPRAISE_MODSIG is not set
CONFIG_IMA_TRUSTED_KEYRING=y
# CONFIG_IMA_BLACKLIST_KEYRING is not set
# CONFIG_IMA_LOAD_X509 is not set
CONFIG_IMA_MEASURE_ASYMMETRIC_KEYS=y
CONFIG_IMA_QUEUE_EARLY_BOOT_KEYS=y
# CONFIG_IMA_SECURE_AND_OR_TRUSTED_BOOT is not set
CONFIG_EVM=y
CONFIG_EVM_ATTR_FSUUID=y
CONFIG_EVM_EXTRA_SMACK_XATTRS=y
# CONFIG_EVM_ADD_XATTRS is not set
# CONFIG_EVM_LOAD_X509 is not set
# CONFIG_DEFAULT_SECURITY_SELINUX is not set
# CONFIG_DEFAULT_SECURITY_SMACK is not set
# CONFIG_DEFAULT_SECURITY_TOMOYO is not set
CONFIG_DEFAULT_SECURITY_APPARMOR=y
# CONFIG_DEFAULT_SECURITY_DAC is not set
CONFIG_LSM="lockdown,yama,loadpin,safesetid,integrity,apparmor,selinux,smack,tomoyo,bpf"

#
# Kernel hardening options
#

#
# Memory initialization
#
CONFIG_INIT_STACK_NONE=y
# CONFIG_INIT_ON_ALLOC_DEFAULT_ON is not set
# CONFIG_INIT_ON_FREE_DEFAULT_ON is not set
# end of Memory initialization
# end of Kernel hardening options
# end of Security options

CONFIG_CRYPTO=y

#
# Crypto core or helper
#
CONFIG_CRYPTO_ALGAPI=y
CONFIG_CRYPTO_ALGAPI2=y
CONFIG_CRYPTO_AEAD=m
CONFIG_CRYPTO_AEAD2=y
CONFIG_CRYPTO_SKCIPHER=y
CONFIG_CRYPTO_SKCIPHER2=y
CONFIG_CRYPTO_HASH=y
CONFIG_CRYPTO_HASH2=y
CONFIG_CRYPTO_RNG=y
CONFIG_CRYPTO_RNG2=y
CONFIG_CRYPTO_AKCIPHER2=y
CONFIG_CRYPTO_AKCIPHER=y
CONFIG_CRYPTO_KPP2=y
CONFIG_CRYPTO_ACOMP2=y
CONFIG_CRYPTO_MANAGER=y
CONFIG_CRYPTO_MANAGER2=y
# CONFIG_CRYPTO_USER is not set
CONFIG_CRYPTO_MANAGER_DISABLE_TESTS=y
# CONFIG_CRYPTO_NULL is not set
CONFIG_CRYPTO_NULL2=y
# CONFIG_CRYPTO_PCRYPT is not set
CONFIG_CRYPTO_CRYPTD=m
# CONFIG_CRYPTO_AUTHENC is not set
# CONFIG_CRYPTO_TEST is not set
CONFIG_CRYPTO_SIMD=m
CONFIG_CRYPTO_GLUE_HELPER_X86=m

#
# Public-key cryptography
#
CONFIG_CRYPTO_RSA=y
# CONFIG_CRYPTO_DH is not set
# CONFIG_CRYPTO_ECDH is not set
# CONFIG_CRYPTO_ECRDSA is not set
# CONFIG_CRYPTO_CURVE25519 is not set
# CONFIG_CRYPTO_CURVE25519_X86 is not set

#
# Authenticated Encryption with Associated Data
#
# CONFIG_CRYPTO_CCM is not set
# CONFIG_CRYPTO_GCM is not set
# CONFIG_CRYPTO_CHACHA20POLY1305 is not set
# CONFIG_CRYPTO_AEGIS128 is not set
# CONFIG_CRYPTO_AEGIS128_AESNI_SSE2 is not set
# CONFIG_CRYPTO_SEQIV is not set
# CONFIG_CRYPTO_ECHAINIV is not set

#
# Block modes
#
CONFIG_CRYPTO_CBC=y
# CONFIG_CRYPTO_CFB is not set
# CONFIG_CRYPTO_CTR is not set
# CONFIG_CRYPTO_CTS is not set
CONFIG_CRYPTO_ECB=y
# CONFIG_CRYPTO_LRW is not set
# CONFIG_CRYPTO_OFB is not set
# CONFIG_CRYPTO_PCBC is not set
# CONFIG_CRYPTO_XTS is not set
# CONFIG_CRYPTO_KEYWRAP is not set
# CONFIG_CRYPTO_NHPOLY1305_SSE2 is not set
# CONFIG_CRYPTO_NHPOLY1305_AVX2 is not set
# CONFIG_CRYPTO_ADIANTUM is not set
# CONFIG_CRYPTO_ESSIV is not set

#
# Hash modes
#
# CONFIG_CRYPTO_CMAC is not set
CONFIG_CRYPTO_HMAC=y
# CONFIG_CRYPTO_XCBC is not set
# CONFIG_CRYPTO_VMAC is not set

#
# Digest
#
CONFIG_CRYPTO_CRC32C=y
CONFIG_CRYPTO_CRC32C_INTEL=y
# CONFIG_CRYPTO_CRC32 is not set
CONFIG_CRYPTO_CRC32_PCLMUL=m
# CONFIG_CRYPTO_XXHASH is not set
# CONFIG_CRYPTO_BLAKE2B is not set
# CONFIG_CRYPTO_BLAKE2S is not set
# CONFIG_CRYPTO_BLAKE2S_X86 is not set
CONFIG_CRYPTO_CRCT10DIF=y
CONFIG_CRYPTO_CRCT10DIF_PCLMUL=m
# CONFIG_CRYPTO_GHASH is not set
# CONFIG_CRYPTO_POLY1305 is not set
# CONFIG_CRYPTO_POLY1305_X86_64 is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_MICHAEL_MIC is not set
# CONFIG_CRYPTO_RMD128 is not set
# CONFIG_CRYPTO_RMD160 is not set
# CONFIG_CRYPTO_RMD256 is not set
# CONFIG_CRYPTO_RMD320 is not set
CONFIG_CRYPTO_SHA1=y
# CONFIG_CRYPTO_SHA1_SSSE3 is not set
# CONFIG_CRYPTO_SHA256_SSSE3 is not set
# CONFIG_CRYPTO_SHA512_SSSE3 is not set
CONFIG_CRYPTO_SHA256=y
CONFIG_CRYPTO_SHA512=y
# CONFIG_CRYPTO_SHA3 is not set
# CONFIG_CRYPTO_SM3 is not set
# CONFIG_CRYPTO_STREEBOG is not set
# CONFIG_CRYPTO_TGR192 is not set
# CONFIG_CRYPTO_WP512 is not set
# CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL is not set

#
# Ciphers
#
CONFIG_CRYPTO_AES=y
# CONFIG_CRYPTO_AES_TI is not set
CONFIG_CRYPTO_AES_NI_INTEL=m
# CONFIG_CRYPTO_ANUBIS is not set
# CONFIG_CRYPTO_ARC4 is not set
# CONFIG_CRYPTO_BLOWFISH is not set
# CONFIG_CRYPTO_BLOWFISH_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA is not set
# CONFIG_CRYPTO_CAMELLIA_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAMELLIA_AESNI_AVX2_X86_64 is not set
# CONFIG_CRYPTO_CAST5 is not set
# CONFIG_CRYPTO_CAST5_AVX_X86_64 is not set
# CONFIG_CRYPTO_CAST6 is not set
# CONFIG_CRYPTO_CAST6_AVX_X86_64 is not set
# CONFIG_CRYPTO_DES is not set
# CONFIG_CRYPTO_DES3_EDE_X86_64 is not set
# CONFIG_CRYPTO_FCRYPT is not set
# CONFIG_CRYPTO_KHAZAD is not set
# CONFIG_CRYPTO_SALSA20 is not set
# CONFIG_CRYPTO_CHACHA20 is not set
# CONFIG_CRYPTO_CHACHA20_X86_64 is not set
# CONFIG_CRYPTO_SEED is not set
# CONFIG_CRYPTO_SERPENT is not set
# CONFIG_CRYPTO_SERPENT_SSE2_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX_X86_64 is not set
# CONFIG_CRYPTO_SERPENT_AVX2_X86_64 is not set
# CONFIG_CRYPTO_SM4 is not set
# CONFIG_CRYPTO_TEA is not set
# CONFIG_CRYPTO_TWOFISH is not set
# CONFIG_CRYPTO_TWOFISH_X86_64 is not set
# CONFIG_CRYPTO_TWOFISH_X86_64_3WAY is not set
# CONFIG_CRYPTO_TWOFISH_AVX_X86_64 is not set

#
# Compression
#
CONFIG_CRYPTO_DEFLATE=y
CONFIG_CRYPTO_LZO=y
# CONFIG_CRYPTO_842 is not set
# CONFIG_CRYPTO_LZ4 is not set
# CONFIG_CRYPTO_LZ4HC is not set
# CONFIG_CRYPTO_ZSTD is not set

#
# Random Number Generation
#
# CONFIG_CRYPTO_ANSI_CPRNG is not set
# CONFIG_CRYPTO_DRBG_MENU is not set
# CONFIG_CRYPTO_JITTERENTROPY is not set
# CONFIG_CRYPTO_USER_API_HASH is not set
# CONFIG_CRYPTO_USER_API_SKCIPHER is not set
# CONFIG_CRYPTO_USER_API_RNG is not set
# CONFIG_CRYPTO_USER_API_AEAD is not set
CONFIG_CRYPTO_HASH_INFO=y

#
# Crypto library routines
#
CONFIG_CRYPTO_LIB_AES=y
# CONFIG_CRYPTO_LIB_BLAKE2S is not set
# CONFIG_CRYPTO_LIB_CHACHA is not set
# CONFIG_CRYPTO_LIB_CURVE25519 is not set
CONFIG_CRYPTO_LIB_POLY1305_RSIZE=11
# CONFIG_CRYPTO_LIB_POLY1305 is not set
# CONFIG_CRYPTO_LIB_CHACHA20POLY1305 is not set
CONFIG_CRYPTO_LIB_SHA256=y
CONFIG_CRYPTO_HW=y
CONFIG_CRYPTO_DEV_PADLOCK=y
# CONFIG_CRYPTO_DEV_PADLOCK_AES is not set
# CONFIG_CRYPTO_DEV_PADLOCK_SHA is not set
# CONFIG_CRYPTO_DEV_ATMEL_ECC is not set
# CONFIG_CRYPTO_DEV_ATMEL_SHA204A is not set
CONFIG_CRYPTO_DEV_CCP=y
# CONFIG_CRYPTO_DEV_CCP_DD is not set
# CONFIG_CRYPTO_DEV_QAT_DH895xCC is not set
# CONFIG_CRYPTO_DEV_QAT_C3XXX is not set
# CONFIG_CRYPTO_DEV_QAT_C62X is not set
# CONFIG_CRYPTO_DEV_QAT_DH895xCCVF is not set
# CONFIG_CRYPTO_DEV_QAT_C3XXXVF is not set
# CONFIG_CRYPTO_DEV_QAT_C62XVF is not set
# CONFIG_CRYPTO_DEV_NITROX_CNN55XX is not set
# CONFIG_CRYPTO_DEV_VIRTIO is not set
# CONFIG_CRYPTO_DEV_SAFEXCEL is not set
# CONFIG_CRYPTO_DEV_AMLOGIC_GXL is not set
CONFIG_ASYMMETRIC_KEY_TYPE=y
CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE=y
# CONFIG_ASYMMETRIC_TPM_KEY_SUBTYPE is not set
CONFIG_X509_CERTIFICATE_PARSER=y
# CONFIG_PKCS8_PRIVATE_KEY_PARSER is not set
CONFIG_PKCS7_MESSAGE_PARSER=y
# CONFIG_PKCS7_TEST_KEY is not set
# CONFIG_SIGNED_PE_FILE_VERIFICATION is not set

#
# Certificates for signature checking
#
CONFIG_MODULE_SIG_KEY="certs/signing_key.pem"
CONFIG_SYSTEM_TRUSTED_KEYRING=y
CONFIG_SYSTEM_TRUSTED_KEYS=""
# CONFIG_SYSTEM_EXTRA_CERTIFICATE is not set
# CONFIG_SECONDARY_TRUSTED_KEYRING is not set
# CONFIG_SYSTEM_BLACKLIST_KEYRING is not set
# end of Certificates for signature checking

CONFIG_BINARY_PRINTF=y

#
# Library routines
#
CONFIG_LINEAR_RANGES=y
# CONFIG_PACKING is not set
CONFIG_BITREVERSE=y
CONFIG_GENERIC_STRNCPY_FROM_USER=y
CONFIG_GENERIC_STRNLEN_USER=y
CONFIG_GENERIC_NET_UTILS=y
CONFIG_GENERIC_FIND_FIRST_BIT=y
# CONFIG_CORDIC is not set
# CONFIG_PRIME_NUMBERS is not set
CONFIG_RATIONAL=y
CONFIG_GENERIC_PCI_IOMAP=y
CONFIG_GENERIC_IOMAP=y
CONFIG_ARCH_USE_CMPXCHG_LOCKREF=y
CONFIG_ARCH_HAS_FAST_MULTIPLIER=y
CONFIG_ARCH_USE_SYM_ANNOTATIONS=y
CONFIG_CRC_CCITT=y
CONFIG_CRC16=y
CONFIG_CRC_T10DIF=y
# CONFIG_CRC_ITU_T is not set
CONFIG_CRC32=y
# CONFIG_CRC32_SELFTEST is not set
CONFIG_CRC32_SLICEBY8=y
# CONFIG_CRC32_SLICEBY4 is not set
# CONFIG_CRC32_SARWATE is not set
# CONFIG_CRC32_BIT is not set
# CONFIG_CRC64 is not set
# CONFIG_CRC4 is not set
# CONFIG_CRC7 is not set
CONFIG_LIBCRC32C=y
# CONFIG_CRC8 is not set
CONFIG_XXHASH=y
# CONFIG_RANDOM32_SELFTEST is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
CONFIG_LZO_COMPRESS=y
CONFIG_LZO_DECOMPRESS=y
CONFIG_LZ4_DECOMPRESS=y
CONFIG_ZSTD_DECOMPRESS=y
CONFIG_XZ_DEC=y
CONFIG_XZ_DEC_X86=y
CONFIG_XZ_DEC_POWERPC=y
CONFIG_XZ_DEC_IA64=y
CONFIG_XZ_DEC_ARM=y
CONFIG_XZ_DEC_ARMTHUMB=y
CONFIG_XZ_DEC_SPARC=y
CONFIG_XZ_DEC_BCJ=y
# CONFIG_XZ_DEC_TEST is not set
CONFIG_DECOMPRESS_GZIP=y
CONFIG_DECOMPRESS_BZIP2=y
CONFIG_DECOMPRESS_LZMA=y
CONFIG_DECOMPRESS_XZ=y
CONFIG_DECOMPRESS_LZO=y
CONFIG_DECOMPRESS_LZ4=y
CONFIG_DECOMPRESS_ZSTD=y
CONFIG_GENERIC_ALLOCATOR=y
CONFIG_INTERVAL_TREE=y
CONFIG_XARRAY_MULTI=y
CONFIG_ASSOCIATIVE_ARRAY=y
CONFIG_HAS_IOMEM=y
CONFIG_HAS_IOPORT_MAP=y
CONFIG_HAS_DMA=y
CONFIG_DMA_OPS=y
CONFIG_NEED_SG_DMA_LENGTH=y
CONFIG_NEED_DMA_MAP_STATE=y
CONFIG_ARCH_DMA_ADDR_T_64BIT=y
CONFIG_SWIOTLB=y
# CONFIG_DMA_CMA is not set
# CONFIG_DMA_API_DEBUG is not set
CONFIG_SGL_ALLOC=y
CONFIG_IOMMU_HELPER=y
CONFIG_CPU_RMAP=y
CONFIG_DQL=y
CONFIG_GLOB=y
# CONFIG_GLOB_SELFTEST is not set
CONFIG_NLATTR=y
CONFIG_CLZ_TAB=y
# CONFIG_IRQ_POLL is not set
CONFIG_MPILIB=y
CONFIG_SIGNATURE=y
CONFIG_OID_REGISTRY=y
CONFIG_UCS2_STRING=y
CONFIG_HAVE_GENERIC_VDSO=y
CONFIG_GENERIC_GETTIMEOFDAY=y
CONFIG_GENERIC_VDSO_TIME_NS=y
CONFIG_FONT_SUPPORT=y
# CONFIG_FONTS is not set
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
CONFIG_SG_POOL=y
CONFIG_ARCH_HAS_PMEM_API=y
CONFIG_MEMREGION=y
CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE=y
CONFIG_ARCH_HAS_UACCESS_MCSAFE=y
CONFIG_ARCH_STACKWALK=y
CONFIG_SBITMAP=y
# CONFIG_STRING_SELFTEST is not set
# end of Library routines

#
# Kernel hacking
#

#
# printk and dmesg options
#
CONFIG_PRINTK_TIME=y
# CONFIG_PRINTK_CALLER is not set
CONFIG_CONSOLE_LOGLEVEL_DEFAULT=7
CONFIG_CONSOLE_LOGLEVEL_QUIET=4
CONFIG_MESSAGE_LOGLEVEL_DEFAULT=4
CONFIG_BOOT_PRINTK_DELAY=y
CONFIG_DYNAMIC_DEBUG=y
CONFIG_DYNAMIC_DEBUG_CORE=y
CONFIG_SYMBOLIC_ERRNAME=y
CONFIG_DEBUG_BUGVERBOSE=y
# end of printk and dmesg options

#
# Compile-time checks and compiler options
#
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_INFO_REDUCED is not set
# CONFIG_DEBUG_INFO_COMPRESSED is not set
# CONFIG_DEBUG_INFO_SPLIT is not set
CONFIG_DEBUG_INFO_DWARF4=y
# CONFIG_DEBUG_INFO_BTF is not set
CONFIG_GDB_SCRIPTS=y
# CONFIG_ENABLE_MUST_CHECK is not set
CONFIG_FRAME_WARN=1024
# CONFIG_STRIP_ASM_SYMS is not set
# CONFIG_READABLE_ASM is not set
# CONFIG_HEADERS_INSTALL is not set
# CONFIG_DEBUG_SECTION_MISMATCH is not set
CONFIG_SECTION_MISMATCH_WARN_ONLY=y
# CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_32B is not set
CONFIG_STACK_VALIDATION=y
# CONFIG_DEBUG_FORCE_WEAK_PER_CPU is not set
# end of Compile-time checks and compiler options

#
# Generic Kernel Debugging Instruments
#
CONFIG_MAGIC_SYSRQ=y
CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE=0x1
CONFIG_MAGIC_SYSRQ_SERIAL=y
CONFIG_MAGIC_SYSRQ_SERIAL_SEQUENCE=""
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_FS_ALLOW_ALL=y
# CONFIG_DEBUG_FS_DISALLOW_MOUNT is not set
# CONFIG_DEBUG_FS_ALLOW_NONE is not set
CONFIG_HAVE_ARCH_KGDB=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# CONFIG_KGDB_TESTS is not set
CONFIG_KGDB_LOW_LEVEL_TRAP=y
CONFIG_KGDB_KDB=y
CONFIG_KDB_DEFAULT_ENABLE=0x1
CONFIG_KDB_KEYBOARD=y
CONFIG_KDB_CONTINUE_CATASTROPHIC=0
CONFIG_ARCH_HAS_EARLY_DEBUG=y
CONFIG_ARCH_HAS_UBSAN_SANITIZE_ALL=y
# CONFIG_UBSAN is not set
# end of Generic Kernel Debugging Instruments

CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MISC=y

#
# Memory Debugging
#
CONFIG_PAGE_EXTENSION=y
# CONFIG_DEBUG_PAGEALLOC is not set
# CONFIG_PAGE_OWNER is not set
# CONFIG_PAGE_POISONING is not set
# CONFIG_DEBUG_PAGE_REF is not set
# CONFIG_DEBUG_RODATA_TEST is not set
CONFIG_ARCH_HAS_DEBUG_WX=y
# CONFIG_DEBUG_WX is not set
CONFIG_GENERIC_PTDUMP=y
# CONFIG_PTDUMP_DEBUGFS is not set
# CONFIG_DEBUG_OBJECTS is not set
# CONFIG_SLUB_DEBUG_ON is not set
# CONFIG_SLUB_STATS is not set
CONFIG_HAVE_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK=y
CONFIG_DEBUG_KMEMLEAK_MEM_POOL_SIZE=16000
# CONFIG_DEBUG_KMEMLEAK_TEST is not set
# CONFIG_DEBUG_KMEMLEAK_DEFAULT_OFF is not set
CONFIG_DEBUG_KMEMLEAK_AUTO_SCAN=y
# CONFIG_DEBUG_STACK_USAGE is not set
CONFIG_SCHED_STACK_END_CHECK=y
CONFIG_ARCH_HAS_DEBUG_VM_PGTABLE=y
CONFIG_DEBUG_VM=y
# CONFIG_DEBUG_VM_VMACACHE is not set
# CONFIG_DEBUG_VM_RB is not set
# CONFIG_DEBUG_VM_PGFLAGS is not set
CONFIG_DEBUG_VM_PGTABLE=y
CONFIG_ARCH_HAS_DEBUG_VIRTUAL=y
# CONFIG_DEBUG_VIRTUAL is not set
CONFIG_DEBUG_MEMORY_INIT=y
# CONFIG_DEBUG_PER_CPU_MAPS is not set
CONFIG_HAVE_ARCH_KASAN=y
CONFIG_HAVE_ARCH_KASAN_VMALLOC=y
CONFIG_CC_HAS_KASAN_GENERIC=y
CONFIG_CC_HAS_WORKING_NOSANITIZE_ADDRESS=y
# CONFIG_KASAN is not set
# end of Memory Debugging

# CONFIG_DEBUG_SHIRQ is not set

#
# Debug Oops, Lockups and Hangs
#
CONFIG_PANIC_ON_OOPS=y
CONFIG_PANIC_ON_OOPS_VALUE=1
CONFIG_PANIC_TIMEOUT=0
CONFIG_LOCKUP_DETECTOR=y
CONFIG_SOFTLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_SOFTLOCKUP_PANIC_VALUE=0
CONFIG_HARDLOCKUP_DETECTOR_PERF=y
CONFIG_HARDLOCKUP_CHECK_TIMESTAMP=y
CONFIG_HARDLOCKUP_DETECTOR=y
# CONFIG_BOOTPARAM_HARDLOCKUP_PANIC is not set
CONFIG_BOOTPARAM_HARDLOCKUP_PANIC_VALUE=0
CONFIG_DETECT_HUNG_TASK=y
CONFIG_DEFAULT_HUNG_TASK_TIMEOUT=120
# CONFIG_BOOTPARAM_HUNG_TASK_PANIC is not set
CONFIG_BOOTPARAM_HUNG_TASK_PANIC_VALUE=0
# CONFIG_WQ_WATCHDOG is not set
# CONFIG_TEST_LOCKUP is not set
# end of Debug Oops, Lockups and Hangs

#
# Scheduler Debugging
#
CONFIG_SCHED_DEBUG=y
CONFIG_SCHED_INFO=y
CONFIG_SCHEDSTATS=y
# end of Scheduler Debugging

# CONFIG_DEBUG_TIMEKEEPING is not set

#
# Lock Debugging (spinlocks, mutexes, etc...)
#
CONFIG_LOCK_DEBUGGING_SUPPORT=y
# CONFIG_PROVE_LOCKING is not set
# CONFIG_LOCK_STAT is not set
# CONFIG_DEBUG_RT_MUTEXES is not set
CONFIG_DEBUG_SPINLOCK=y
CONFIG_DEBUG_MUTEXES=y
# CONFIG_DEBUG_WW_MUTEX_SLOWPATH is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_LOCK_ALLOC is not set
CONFIG_DEBUG_ATOMIC_SLEEP=y
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
# CONFIG_LOCK_TORTURE_TEST is not set
# CONFIG_WW_MUTEX_SELFTEST is not set
# end of Lock Debugging (spinlocks, mutexes, etc...)

CONFIG_STACKTRACE=y
# CONFIG_WARN_ALL_UNSEEDED_RANDOM is not set
# CONFIG_DEBUG_KOBJECT is not set

#
# Debug kernel data structures
#
CONFIG_DEBUG_LIST=y
# CONFIG_DEBUG_PLIST is not set
# CONFIG_DEBUG_SG is not set
# CONFIG_DEBUG_NOTIFIERS is not set
# CONFIG_BUG_ON_DATA_CORRUPTION is not set
# end of Debug kernel data structures

# CONFIG_DEBUG_CREDENTIALS is not set

#
# RCU Debugging
#
# CONFIG_RCU_PERF_TEST is not set
# CONFIG_RCU_TORTURE_TEST is not set
# CONFIG_RCU_REF_SCALE_TEST is not set
CONFIG_RCU_CPU_STALL_TIMEOUT=60
# CONFIG_RCU_TRACE is not set
# CONFIG_RCU_EQS_DEBUG is not set
# end of RCU Debugging

# CONFIG_DEBUG_WQ_FORCE_RR_CPU is not set
# CONFIG_DEBUG_BLOCK_EXT_DEVT is not set
# CONFIG_CPU_HOTPLUG_STATE_CONTROL is not set
# CONFIG_LATENCYTOP is not set
CONFIG_USER_STACKTRACE_SUPPORT=y
CONFIG_NOP_TRACER=y
CONFIG_HAVE_FUNCTION_TRACER=y
CONFIG_HAVE_FUNCTION_GRAPH_TRACER=y
CONFIG_HAVE_DYNAMIC_FTRACE=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_HAVE_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_HAVE_FTRACE_MCOUNT_RECORD=y
CONFIG_HAVE_SYSCALL_TRACEPOINTS=y
CONFIG_HAVE_FENTRY=y
CONFIG_HAVE_C_RECORDMCOUNT=y
CONFIG_TRACER_MAX_TRACE=y
CONFIG_TRACE_CLOCK=y
CONFIG_RING_BUFFER=y
CONFIG_EVENT_TRACING=y
CONFIG_CONTEXT_SWITCH_TRACER=y
CONFIG_TRACING=y
CONFIG_GENERIC_TRACER=y
CONFIG_TRACING_SUPPORT=y
CONFIG_FTRACE=y
# CONFIG_BOOTTIME_TRACING is not set
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_DYNAMIC_FTRACE_WITH_REGS=y
CONFIG_DYNAMIC_FTRACE_WITH_DIRECT_CALLS=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_STACK_TRACER=y
# CONFIG_IRQSOFF_TRACER is not set
CONFIG_SCHED_TRACER=y
# CONFIG_HWLAT_TRACER is not set
CONFIG_MMIOTRACE=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_TRACER_SNAPSHOT=y
# CONFIG_TRACER_SNAPSHOT_PER_CPU_SWAP is not set
CONFIG_BRANCH_PROFILE_NONE=y
# CONFIG_PROFILE_ANNOTATED_BRANCHES is not set
# CONFIG_PROFILE_ALL_BRANCHES is not set
CONFIG_BLK_DEV_IO_TRACE=y
CONFIG_KPROBE_EVENTS=y
# CONFIG_KPROBE_EVENTS_ON_NOTRACE is not set
CONFIG_UPROBE_EVENTS=y
CONFIG_BPF_EVENTS=y
CONFIG_DYNAMIC_EVENTS=y
CONFIG_PROBE_EVENTS=y
# CONFIG_BPF_KPROBE_OVERRIDE is not set
CONFIG_FTRACE_MCOUNT_RECORD=y
# CONFIG_SYNTH_EVENTS is not set
# CONFIG_HIST_TRIGGERS is not set
# CONFIG_TRACE_EVENT_INJECT is not set
# CONFIG_TRACEPOINT_BENCHMARK is not set
# CONFIG_RING_BUFFER_BENCHMARK is not set
# CONFIG_TRACE_EVAL_MAP_FILE is not set
# CONFIG_FTRACE_STARTUP_TEST is not set
# CONFIG_RING_BUFFER_STARTUP_TEST is not set
# CONFIG_MMIOTRACE_TEST is not set
# CONFIG_PREEMPTIRQ_DELAY_TEST is not set
# CONFIG_KPROBE_EVENT_GEN_TEST is not set
# CONFIG_PROVIDE_OHCI1394_DMA_INIT is not set
# CONFIG_SAMPLES is not set
CONFIG_HAVE_ARCH_KCSAN=y
CONFIG_ARCH_HAS_DEVMEM_IS_ALLOWED=y
CONFIG_STRICT_DEVMEM=y
# CONFIG_IO_STRICT_DEVMEM is not set

#
# x86 Debugging
#
CONFIG_TRACE_IRQFLAGS_SUPPORT=y
CONFIG_TRACE_IRQFLAGS_NMI_SUPPORT=y
CONFIG_EARLY_PRINTK_USB=y
# CONFIG_X86_VERBOSE_BOOTUP is not set
CONFIG_EARLY_PRINTK=y
CONFIG_EARLY_PRINTK_DBGP=y
# CONFIG_EARLY_PRINTK_USB_XDBC is not set
# CONFIG_EFI_PGT_DUMP is not set
CONFIG_DEBUG_TLBFLUSH=y
# CONFIG_IOMMU_DEBUG is not set
CONFIG_HAVE_MMIOTRACE_SUPPORT=y
# CONFIG_X86_DECODER_SELFTEST is not set
# CONFIG_IO_DELAY_0X80 is not set
CONFIG_IO_DELAY_0XED=y
# CONFIG_IO_DELAY_UDELAY is not set
# CONFIG_IO_DELAY_NONE is not set
# CONFIG_DEBUG_BOOT_PARAMS is not set
# CONFIG_CPA_DEBUG is not set
# CONFIG_DEBUG_ENTRY is not set
# CONFIG_DEBUG_NMI_SELFTEST is not set
CONFIG_X86_DEBUG_FPU=y
# CONFIG_PUNIT_ATOM_DEBUG is not set
CONFIG_UNWINDER_ORC=y
# CONFIG_UNWINDER_FRAME_POINTER is not set
# CONFIG_UNWINDER_GUESS is not set
# end of x86 Debugging

#
# Kernel Testing and Coverage
#
# CONFIG_KUNIT is not set
# CONFIG_NOTIFIER_ERROR_INJECTION is not set
CONFIG_FUNCTION_ERROR_INJECTION=y
# CONFIG_FAULT_INJECTION is not set
CONFIG_ARCH_HAS_KCOV=y
CONFIG_CC_HAS_SANCOV_TRACE_PC=y
# CONFIG_KCOV is not set
CONFIG_RUNTIME_TESTING_MENU=y
# CONFIG_LKDTM is not set
# CONFIG_TEST_LIST_SORT is not set
# CONFIG_TEST_MIN_HEAP is not set
# CONFIG_TEST_SORT is not set
# CONFIG_KPROBES_SANITY_TEST is not set
# CONFIG_BACKTRACE_SELF_TEST is not set
# CONFIG_RBTREE_TEST is not set
# CONFIG_REED_SOLOMON_TEST is not set
# CONFIG_INTERVAL_TREE_TEST is not set
# CONFIG_PERCPU_TEST is not set
# CONFIG_ATOMIC64_SELFTEST is not set
# CONFIG_TEST_HEXDUMP is not set
# CONFIG_TEST_STRING_HELPERS is not set
# CONFIG_TEST_STRSCPY is not set
# CONFIG_TEST_KSTRTOX is not set
# CONFIG_TEST_PRINTF is not set
# CONFIG_TEST_BITMAP is not set
# CONFIG_TEST_BITFIELD is not set
# CONFIG_TEST_UUID is not set
# CONFIG_TEST_XARRAY is not set
# CONFIG_TEST_OVERFLOW is not set
# CONFIG_TEST_RHASHTABLE is not set
# CONFIG_TEST_HASH is not set
# CONFIG_TEST_IDA is not set
# CONFIG_TEST_LKM is not set
# CONFIG_TEST_BITOPS is not set
# CONFIG_TEST_VMALLOC is not set
# CONFIG_TEST_USER_COPY is not set
# CONFIG_TEST_BPF is not set
# CONFIG_TEST_BLACKHOLE_DEV is not set
# CONFIG_FIND_BIT_BENCHMARK is not set
# CONFIG_TEST_FIRMWARE is not set
# CONFIG_TEST_SYSCTL is not set
# CONFIG_TEST_UDELAY is not set
# CONFIG_TEST_STATIC_KEYS is not set
# CONFIG_TEST_KMOD is not set
# CONFIG_TEST_MEMCAT_P is not set
# CONFIG_TEST_LIVEPATCH is not set
# CONFIG_TEST_STACKINIT is not set
# CONFIG_TEST_MEMINIT is not set
# CONFIG_TEST_FPU is not set
CONFIG_MEMTEST=y
# end of Kernel Testing and Coverage
# end of Kernel hacking

--=_MailMate_C9C3D367-79B1-4D8D-812F-500ACD3AF7C3_=--

--=_MailMate_89ADE28D-2C7F-4143-A045-29B75B277CAD_=
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQJDBAEBCgAtFiEEh7yFAW3gwjwQ4C9anbJR82th+ooFAl91MRUPHHppeUBudmlk
aWEuY29tAAoJEJ2yUfNrYfqKOYAQAJQ0D9sYQyCoEcLOSjQuYzgctLVTZX3lVDMC
XBxvnw5s8jmhNYTCKHrBmTThaIugID5FB4K4KLL4XtEJoQMGKXL89E6dg9XT02K5
IGkkOBY+Ui9jPgMzcmZwh804FCV6ccy/PV+qdzBZU//MvqkT69t+M9nXclQTaTVy
6JVUESwmYSF/O/0GyE5fUuxvVAsOH1QzCPt219sjEE2RDzGz1WegNEu8k/BZpSJx
fWRseEOer2zh8jAn6JEjFNLyyqbf46Vn7+xX5Hwu3SNldPfk8cRJDeu+V585dz4Y
u8h126k+66YwcDApKVr3gpdGbWNd4zDJbtX33q1LCE5CQeWRwtAz/gnq36mIHNGX
BIVuHuANQ/SkIlr5havGdhixltChHxYuyKt0Qk91y+yitXsrRFq52HchJCTiORej
PaXInRdGY3PZPMZfUw4VR6hMv89+zyF9V+0yQMu5RIOScBvoSdNBE4c2YrHx7gMl
Qh7LdcvIw0/GM9Z/pdiFt/5pMSUL580whSB4OX+Nttr3dG3x+qAuNwHmwztJevkF
Ty3NKGlg4go2wUN61PcOLadfp89oxZmjLrfaKlq+RrknIuWqQUcVNfkqxHMIjWI6
F278Z3PQiNl7N+Iv1R/5/vbdWoTFvTt0BE8Q5amdDXvM+lsi/n5i2HjH0qj26qGw
hdIiLaYK
=g/xH
-----END PGP SIGNATURE-----

--=_MailMate_89ADE28D-2C7F-4143-A045-29B75B277CAD_=--


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 05:11:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 05:11:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1033.3467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNqsW-0000hM-L8; Thu, 01 Oct 2020 05:11:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1033.3467; Thu, 01 Oct 2020 05:11:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNqsW-0000hF-IB; Thu, 01 Oct 2020 05:11:40 +0000
Received: by outflank-mailman (input) for mailman id 1033;
 Thu, 01 Oct 2020 05:11:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNqsV-0000gn-Es
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 05:11:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6be891c-009e-4b87-a182-ba55464247d6;
 Thu, 01 Oct 2020 05:11:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNqsN-00041E-La; Thu, 01 Oct 2020 05:11:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNqsN-0003Lz-Bg; Thu, 01 Oct 2020 05:11:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNqsN-00053a-BA; Thu, 01 Oct 2020 05:11:31 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=77nZoaZEVVtY5c/p8mszNzm2I6cS9zofczDHPazkQek=; b=WMzDfSs9NNVOlrDKlfzT/cP+wO
	dWxqWsBt9pGVl4903KQiajCKNAgJivOzQkvwHhR3mIpP3EH4UkADVDQil7Ai8PltRaw/vqs2LaoEM
	Zw7MwMKTD+Mzt8Gxb/e1e3ywsSVuaML7kgQXwUo0JWw8WQknKe06ZiAClnNvmayqT6cI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kNqsN-00041E-La; Thu, 01 Oct 2020 05:11:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kNqsN-0003Lz-Bg; Thu, 01 Oct 2020 05:11:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kNqsN-00053a-BA; Thu, 01 Oct 2020 05:11:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155187-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155187: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=11852c7bb070a18c3708b4c001772a23e7d4fc27
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 05:11:31 +0000

flight 155187 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155187/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  11852c7bb070a18c3708b4c001772a23e7d4fc27
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    0 days
Testing same since   155144  2020-09-30 16:01:24 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom, Xenstore should set the maximum number of
    grants needed via a call to xengnttab_set_max_grants(); otherwise
    the number of domains that can be supported is limited to 128 (the
    default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
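
The numbers in the message above can be sanity-checked: one grant page per
served domain means the grant limit is also the domain limit. A small Python
sketch of that arithmetic (illustrative only; the actual fix is a C call to
xengnttab_set_max_grants(), and DOMID_FIRST_RESERVED is 0x7ff0 in Xen's
public headers):

```python
# One grant page is needed per served domain, so for a stubdom
# xenstored the grant limit is also the domain limit.
MINIOS_DEFAULT_GRANTS = 128    # Mini-OS default, per the commit message
DOMID_FIRST_RESERVED = 0x7FF0  # first reserved domid in Xen's public ABI

def max_domains(max_grants):
    """Domains a stubdom xenstored can serve with this many grants."""
    return min(max_grants, DOMID_FIRST_RESERVED)

# Without the fix the stubdom is capped at the Mini-OS default of 128;
# after raising the limit to DOMID_FIRST_RESERVED, the theoretical
# maximum of 32752 domains becomes reachable.
```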

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory
    that can be tracked is limited to 8TB by the 32-bit bit index.
    Adjust the code to use 64-bit values as input. All callers already
    pass some form of 64-bit value, so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
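
The 8TB figure follows from the index width: with a signed 32-bit bit index
at most 2**31 bits are addressable, and each bit tracks one 4KiB page. A
back-of-the-envelope check (Python, illustrative; the actual change is to
the C bitops in libxc):

```python
PAGE_SIZE = 4096  # bytes tracked per bitmap bit (one x86 page)

def max_trackable_bytes(index_bits, signed=True):
    """Guest memory representable when each bitmap bit tracks one
    page and bit positions are held in an integer of this width."""
    usable = index_bits - 1 if signed else index_bits
    return (1 << usable) * PAGE_SIZE

# signed 32-bit index: 2**31 pages * 4 KiB = 8 TiB -- the old limit
# 64-bit index: far beyond any machine the code will encounter
```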

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to meson, and the
    ./configure step is now more restrictive than it used to be: most
    installation paths must lie within the prefix, otherwise we get
    this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    To work around the limitation, we set the prefix to the same one
    used for the rest of the Xen installation, and set all the other
    paths explicitly.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>
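
The constraint meson applies here is essentially a path-prefix test: every
installation directory must resolve to somewhere under the prefix. A minimal
Python model of that rule (illustrating the behaviour described above, not
meson's actual code; the paths are the ones from the error message):

```python
import os.path

def under_prefix(prefix, path):
    """True if `path` lies inside `prefix` -- the constraint meson
    enforces for options such as datadir."""
    prefix = os.path.abspath(prefix)
    path = os.path.abspath(path)
    return os.path.commonpath([prefix, path]) == prefix

# The failing combination from the error message:
#   under_prefix("/usr/lib/xen", "/usr/share/qemu-xen") -> False
# The workaround makes every install path a subdirectory of the prefix.
```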

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)
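
The control flow the writev_exact fix touches can be modelled in a few
lines. This is a Python sketch of the idea only (the real code is C in
libxc; `alloc` here stands in for the malloc of the re-packed tail buffer):

```python
import errno

def writev_exact(write, chunks, alloc=bytes):
    """Write every chunk fully, retrying partial writes.

    `write(buf)` returns the number of bytes written (possibly short).
    `alloc(buf)` models the malloc needed to re-pack the unwritten
    tail; if it fails, the error is reported to the caller as -ENOMEM
    instead of being silently swallowed (the bug fixed here).
    """
    for chunk in chunks:
        off = 0
        while off < len(chunk):
            n = write(chunk[off:])
            if n <= 0:
                return -errno.EIO
            off += n
            if off < len(chunk):          # partial write: copy the tail
                try:
                    chunk, off = alloc(chunk[off:]), 0
                except MemoryError:
                    return -errno.ENOMEM  # notify the caller, don't hide it
    return 0
```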


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 06:35:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 06:35:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1045.3499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNsBe-0007jO-5w; Thu, 01 Oct 2020 06:35:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1045.3499; Thu, 01 Oct 2020 06:35:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNsBe-0007jH-2g; Thu, 01 Oct 2020 06:35:30 +0000
Received: by outflank-mailman (input) for mailman id 1045;
 Thu, 01 Oct 2020 06:35:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zSVJ=DI=epam.com=prvs=8543a5f0e3=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kNsBd-0007jC-6M
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 06:35:29 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d696e29-d2f7-47fd-a3bc-ad9ea0d95371;
 Thu, 01 Oct 2020 06:35:26 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 0916PqT4005540; Thu, 1 Oct 2020 06:35:24 GMT
Received: from eur03-am5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2056.outbound.protection.outlook.com [104.47.8.56])
 by mx0a-0039f301.pphosted.com with ESMTP id 33v1ddeeka-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 01 Oct 2020 06:35:24 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB4883.eurprd03.prod.outlook.com (2603:10a6:208:103::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.37; Thu, 1 Oct
 2020 06:35:21 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::853d:1bd6:75a0:a7d7]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::853d:1bd6:75a0:a7d7%7]) with mapi id 15.20.3433.035; Thu, 1 Oct 2020
 06:35:21 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OTvy8zOXkptVHHxrcHGxpZczLC0AkBlQThkrZ5qnSOZcZhXy+o3Ij5fA8uy8toXVK4OQ/pe0aGT0wunTRO/VZCrgFmEdnU7PhTQwghm1hKc51mUIJMzH0s/8SyxMU3wnoU2ELuIWSZJGxbagipPvIozpFwAME2ixD3mw+Vc3GQVhuAquySS/r8Q+6JcjLmo619Nvzh/7ApjZYVrMyMRBXdR0WzYJB+5VxdZaQ+fmNJb1HpTHvUCsFQ+rUSUR0P/mkrPnt98oGF/lhoHGVaUo29OCfA+/ONkZVBcJtRBed/Uhkcj/DyPg551ZywP0o6c+NlxcKC2d+9F3aNlEoVZ+eA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5FEU9ZQooRXHWAlapeeODbcgKDZ9yKhSZJPo3TeKfaA=;
 b=SA/l+MwjVgpGU3pmuoYnHNDPiPjXFiCrANFsah0iRzYA37Ed2ekzZAB5kqzlRtDv2xxmeVNJtMVUnKk6f2iRca1tUpscTWwDoNouuWLhCPLubVFBULFneYpSnQ/jpX9xvCJyPzsLqicANg+vYoNkG5ya4C/VF35/9nluvOO1EIpLmJOA1QjJ5s6qwwlDZQYgOUSIbDu3IMeTv4q8Aatyww2gdVOSWP/XGi1Vg2QHOPMt2jcuqukE1XL0VZCW00hO/BZloEd5XL+fALtK+Lx9dQ/hNfwcdpdRqDjOacGsORS58Nu+SqMtlYce2dBspOIop6mzFVx+Kspo6a4kerJUZQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5FEU9ZQooRXHWAlapeeODbcgKDZ9yKhSZJPo3TeKfaA=;
 b=H6Hps3Z0c4chM/DOF2u+3s5xeqVDWfeIm0kVDe8DOMLmvUzrMjUT+5I2rSLNlSGmVFkSMvYYhuqiax5X8LrNka812oJln+tVPj17o73FEj+h3LdMhf01SdgxIT0aNhaxYZUlsW3PmXVAeePbTQpKVwb5dKL6vuYh/oUQg3ulEJt4CvkXX+MuGOsjjsT/NLc8Z7YOMeXOedRJ5xopMjGquo6zbTuhEyNn3cPxUlz70QocgK/KenEz/rn1wJpTVav99xEuRW2rs8/wPooUccuIdnm+UufMjNdKUf+zNyABnWssjIHxFiVPqtr1jc7JWMb6ySOjVnTi0VImViib2ljt7w==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Ian Jackson <iwj@xenproject.org>,
        Oleksandr Andrushchenko
	<andr2000@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "jgross@suse.com" <jgross@suse.com>,
        "wei.liu2@citrix.com"
	<wei.liu2@citrix.com>,
        "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>
Subject: Re: [PATCH 2/2] libgnttab: Add support for Linux dma-buf offset
Thread-Topic: [PATCH 2/2] libgnttab: Add support for Linux dma-buf offset
Thread-Index: AQHWLoWvWdjq7ozlqkeaZNP0F/2zoKl++MQAgAQkQQA=
Date: Thu, 1 Oct 2020 06:35:21 +0000
Message-ID: <9e64a880-02ce-e04b-8e36-eb63fbfbd975@epam.com>
References: <20200520090425.28558-1-andr2000@gmail.com>
 <20200520090425.28558-3-andr2000@gmail.com>
 <24433.65344.748102.591216@mariner.uk.xensource.com>
In-Reply-To: <24433.65344.748102.591216@mariner.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xenproject.org; dkim=none (message not signed)
 header.d=none;xenproject.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 13868225-c8b3-46ef-3900-08d865d42bd0
x-ms-traffictypediagnostic: AM0PR03MB4883:
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <DAAA0C6EFB9D8C42B0070C7A502ACF8F@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 13868225-c8b3-46ef-3900-08d865d42bd0
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Oct 2020 06:35:21.4179
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: tPESW3T56GA7vX0njXwKipqIP/gPEkbrPM27BWUbIXMWDBO6wVffTok9fgaMFDo6OWluBxvxQjgaYuDJIe3Mki/kSi9s2A4d+S3SlSnjhrDYoFLXwIrIG0tIzpWfC8qT
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4883

Hi,

On 9/28/20 6:20 PM, Ian Jackson wrote:
> Oleksandr Andrushchenko writes ("[PATCH 2/2] libgnttab: Add support for Linux dma-buf offset"):
>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
>>
>> Add version 2 of the dma-buf ioctls which adds the data_ofs parameter.
>>
>> dma-buf is backed by a scatter-gather table and has an offset parameter
>> which tells where the actual data starts. Relevant ioctls are extended
>> to support that offset:
>>    - when dma-buf is created (exported) from grant references then
>>      data_ofs is used to set the offset field in the scatter list
>>      of the new dma-buf
>>    - when dma-buf is imported and grant references provided then
>>      data_ofs is used to report that offset to user-space
> Thanks.  I'm not a DMA expert, but I think this is probably going in
> roughly the right direction.  I will probably want a review from a DMA
> expert too, but let me get on with my questions:
>
> When you say "the protocol changes are already accepted" I think you
> mean the Linux ioctl changes ?  If not, what *do* you mean ?

I mean that the relevant protocol changes are already part of both the Xen [1]
and Linux trees [2]. What is missing is the ioctl implementation in the kernel and
its support in Xen's tools. This is why I have marked the patch as RFC, in order
to get some view on the matter from the Xen community. Once we agree on the
naming, structure etc. I'll send patches for both Xen and Linux.

>
>> +/*
>> + * Version 2 of the ioctls adds @data_ofs parameter.
>> + *
>> + * dma-buf is backed by a scatter-gather table and has offset
>> + * parameter which tells where the actual data starts.
>> + * Relevant ioctls are extended to support that offset:
>> + *   - when dma-buf is created (exported) from grant references then
>> + *     @data_ofs is used to set the offset field in the scatter list
>> + *     of the new dma-buf
>> + *   - when dma-buf is imported and grant references are provided then
>> + *     @data_ofs is used to report that offset to user-space
>> + */
>> +#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2 \
>> +    _IOC(_IOC_NONE, 'G', 13, \
> I think this was copied from a Linux header file ?  If so please quote
> the precise file and revision in the commit message.
This is not upstream yet, please see the explanation above.
>    And be sure to
> copy the copyright information appropriately.
>
>> +int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
>> +                                         uint32_t flags, uint32_t count,
>> +                                         const uint32_t *refs,
>> +                                         uint32_t *dmabuf_fd, uint32_t data_ofs)
>> +{
>> +    abort();
> I'm pretty sure this is wrong.

First of all, Linux dma-bufs are only supported on Linux, so neither FreeBSD nor Mini-OS
will have that. If you are referring to "abort()" here, I am just aligning with what was
there previously, e.g. all non-relevant dma-buf OS specifics were implemented like that.

>
> This leads me to ask about compatibility, both across versions of the
> various components, and API compatibility across different platforms.
>
> libxengnttab is supposed to have a stable API and ABI.  This means
> that old programs should work with the new library - which I think you
> have achieved.
Yes
>
> But I think it also means that it should work with new programs, and
> the new library, on old kernels.  What is your compatibility story
> here ?  What is the intended mode of use by an application ?

Well, this is a tough story. If we have new software and a new library, but an old
kernel, it means that the offset we are trying to get with the new ioctl will be
unavailable to that new software. In most cases we can use an offset of 0, but some
platforms (iMX8) use an offset of 64. So, we can work around that for most(?) platforms
by reporting offset 0, but some platforms will fail. I am not sure if it is good to state
that this combination of software (as described above) "will mostly work", or to just let
the system fail at run-time, by letting Linux return ENOTSUPP for the new ioctl.
By fail I mean that the display backend may decide whether to use the previous version
of the ioctl without the offset field.

>
> And the same application code should be useable, so far as possible,
> across different platforms that support Xen.
>
> What fallback would be possible for an application if the v2 function
> is not available ?  I think that fallback action needs to be
> selectable at runtime, to support new userspace on old kernels.

Well, as I said before, for the platforms with offset 0 we are "fine" ignoring the offset and
using v1 of the ioctl without the offset field. For the platforms with a non-zero offset it
results at least in slight screen distortion, and they do need v2 of the ioctl.

>
> What architectures is the new Linux ioctl available on ?
x86/ARM
>
>> diff --git a/tools/libs/gnttab/include/xengnttab.h b/tools/libs/gnttab/include/xengnttab.h
>> index 111fc88caeb3..0956bd91e0df 100644
>> --- a/tools/libs/gnttab/include/xengnttab.h
>> +++ b/tools/libs/gnttab/include/xengnttab.h
>> @@ -322,12 +322,19 @@ int xengnttab_grant_copy(xengnttab_handle *xgt,
>>    * Returns 0 if dma-buf was successfully created and the corresponding
>>    * dma-buf's file descriptor is returned in @fd.
>>    *
>> +
>> + * Version 2 also accepts @data_ofs offset of the data in the buffer.
>> + *
>>    * [1] https://elixir.bootlin.com/linux/latest/source/Documentation/driver-api/dma-buf.rst
>>    */
>>   int xengnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
>>                                      uint32_t flags, uint32_t count,
>>                                      const uint32_t *refs, uint32_t *fd);
>>
>> +int xengnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
>> +                                      uint32_t flags, uint32_t count,
>> +                                      const uint32_t *refs, uint32_t *fd,
>> +                                      uint32_t data_ofs);
> I think the information about the meaning of @data_ofs must be in the
> doc comment.  Indeed, that should be the primary location.
Sure
>
> Conversely there is no need to duplicate information between the patch
> contents, and the commit message.

It's just me always wanting the doc at a handy location so I don't need to dig for
the commit messages. But at the same time the commit message should allow one to
quickly understand what's in there. So, I would prefer to have more description in
the patch, then.

>
> Is _v2 really the best name for this ?  Are we likely to want to
> extend this again in future ?  Perhaps it should be called ..._offset
> or something ?  Please think about this and tell me your opinion.

I don't actually like v2. Nor can I produce anything more cute ;)
On the other hand, it is easier to understand that v2 actually extends/removes/changes
something that was here before. Say, if you have two ioctls yyy and ddd, you need to
compare the two to understand which is more relevant at the moment. Having an explicit
version in the name leaves no doubt about which is newer.

>
>> +int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
>> +                                         uint32_t flags, uint32_t count,
>> +                                         const uint32_t *refs,
>> +                                         uint32_t *dmabuf_fd,
>> +                                         uint32_t data_ofs)
>> +{
>> +    struct ioctl_gntdev_dmabuf_exp_from_refs_v2 *from_refs_v2 = NULL;
>> +    int rc = -1;
>> +
>> +    if ( !count )
>> +    {
>> +        errno = EINVAL;
>> +        goto out;
>> +    }
>> +
>> +    from_refs_v2 = malloc(sizeof(*from_refs_v2) +
>> +                          (count - 1) * sizeof(from_refs_v2->refs[0]));
>> +    if ( !from_refs_v2 )
>> +    {
>> +        errno = ENOMEM;
>> +        goto out;
>> +    }
>> +
>> +    from_refs_v2->flags = flags;
>> +    from_refs_v2->count = count;
>> +    from_refs_v2->domid = domid;
>> +    from_refs_v2->data_ofs = data_ofs;
>> +
>> +    memcpy(from_refs_v2->refs, refs, count * sizeof(from_refs_v2->refs[0]));
>> +
>> +    if ( (rc = ioctl(xgt->fd, IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2,
>> +                     from_refs_v2)) )
>> +    {
>> +        GTERROR(xgt->logger, "ioctl DMABUF_EXP_FROM_REFS_V2 failed");
>> +        goto out;
>> +    }
> This seems just a fairly obvious wrapper for this ioctl.  I think it
> would be best for me to review this in detail with reference to the
> ioctl documentation (which you helpfully refer to - thank you!) after
> I see the answers to my other questions.

Well, I have little to add, as the only change and the reason is that the scatter-gather
table's offset must be honored, which was not a problem until we faced the iMX8 platform,
which has a non-zero offset. Frankly, lots of software assumes it is zero...

>
>> +int osdep_gnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
>> +                                       uint32_t fd, uint32_t count,
>> +                                       uint32_t *refs,
>> +                                       uint32_t *data_ofs)
>> +{
> This function is very similar to the previous one.  I'm uncomfortable
> with the duplication, but I see that
>     osdep_gnttab_dmabuf_{imp_to,exp_from}_refs
> are very duplicative already, so I am also somewhat uncomfortable with
> asking you to clean this up with refactoring.  But perhaps if you felt
> like thinking about combining some of this, that might be nice.

I hate having code duplication as well: less code, less maintenance. But in this case
the common code makes that function full of "if"s, so finally I gave up and made a
copy-paste. No strong opinion here: if you think "if"s are still better I'll rework that.

>
> What do my co-maintainers think ?
>
>
> Regards,
> Ian.

Thank you for the review and your time,

Oleksandr

[1] https://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=c27a184225eab54d20435c8cab5ad0ef384dc2c0

[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=6f92337b6bffb3d9e509024d6ef5c3f2b112757d
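[Editorial note: the runtime-selectable fallback discussed in this thread — try the v2 ioctl wrapper, and fall back to v1 with an assumed zero offset on kernels that lack it — could be sketched roughly as below. This is a minimal illustration, not the library's actual API: `exp_from_refs_v1`/`exp_from_refs_v2` are hypothetical stand-ins that simulate an old kernel rejecting the v2 call.]

```c
#include <errno.h>
#include <stdint.h>

/* Hypothetical stand-ins for the two library wrappers discussed above.
 * They simulate an old kernel: the v2 ioctl is unknown (ENOTTY), while
 * the v1 ioctl succeeds but cannot report a data offset. */
static int exp_from_refs_v2(uint32_t *fd, uint32_t *data_ofs)
{
    (void)fd;
    (void)data_ofs;
    errno = ENOTTY;              /* kernel has no v2 ioctl */
    return -1;
}

static int exp_from_refs_v1(uint32_t *fd)
{
    *fd = 42;                    /* pretend the export succeeded */
    return 0;
}

/* Runtime-selectable fallback: try v2 first; if the kernel rejects the
 * ioctl as unknown, retry with v1 and assume a zero data offset (fine
 * on most platforms, wrong on e.g. iMX8 with its offset of 64). */
int export_dmabuf(uint32_t *fd, uint32_t *data_ofs)
{
    if ( exp_from_refs_v2(fd, data_ofs) == 0 )
        return 0;

    if ( errno != ENOTTY && errno != EOPNOTSUPP )
        return -1;               /* real failure, not a missing ioctl */

    *data_ofs = 0;               /* v1 cannot report the offset */
    return exp_from_refs_v1(fd);
}
```

With this shape, new userspace linked against the new library still works on old kernels; the display backend can then decide, as described above, whether a zero offset is acceptable on its platform.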


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 07:00:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 07:00:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1051.3515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNsZJ-0001C3-9t; Thu, 01 Oct 2020 06:59:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1051.3515; Thu, 01 Oct 2020 06:59:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNsZJ-0001Bw-5s; Thu, 01 Oct 2020 06:59:57 +0000
Received: by outflank-mailman (input) for mailman id 1051;
 Thu, 01 Oct 2020 06:59:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=937a=DI=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kNsZH-0001Br-Ol
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 06:59:55 +0000
Received: from mail-wm1-x32a.google.com (unknown [2a00:1450:4864:20::32a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6533c48-da5e-41da-8d3a-298a2425e4db;
 Thu, 01 Oct 2020 06:59:54 +0000 (UTC)
Received: by mail-wm1-x32a.google.com with SMTP id w2so1769306wmi.1
 for <xen-devel@lists.xenproject.org>; Wed, 30 Sep 2020 23:59:54 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-232.amazon.com. [54.240.197.232])
 by smtp.gmail.com with ESMTPSA id t203sm7160772wmg.43.2020.09.30.23.59.51
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 30 Sep 2020 23:59:52 -0700 (PDT)
X-Inumbo-ID: b6533c48-da5e-41da-8d3a-298a2425e4db
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=/2QLOoIB2PGnNVH5UAQp2lhtkL+ccrvHD5Cl+TFmmWc=;
        b=Gy8OB9RLLg/qKRM9xSuEyp1GLAzt4XYRQvz4hNJVjeMPnu7PeHaEvNb8WihROhEDL6
         hRd8gzDYiBFBdSVBQ+fOiCPgqPQZ3XmZGNxSwxzPnxzdK5JlJTNz1rZ6XYlf19xUECvX
         gHoalGtxSYBIpr2WpzoFxatvFGHRUaAgfetcRnoKpL7YzL31MaJYO698qOPzvcO2q+xl
         i3uh4q4lEE/3Yfexy/HxeSPYZgTNauApBB68cTZ4uTathgURt0FUJ0/tWwKs+d9ZE/uz
         k4gPiWTIj6ck3K0UOowoPy613qgnCF0zSrRCm0ouotlV68HH8nKpFvrIOYlhh89eB9uU
         lduw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=/2QLOoIB2PGnNVH5UAQp2lhtkL+ccrvHD5Cl+TFmmWc=;
        b=XHZbfx8gL7ARG643kuAzdXzAedKrcPubD/1M6dB+/b0K2aA3CfIxmrbI90xZq2nFgf
         SwXnf22MbY/4aGFjBfd0Vhlu/wP8a/2S5ZT9vs9/F/yB4bkntQtjsQrlZayRZ/O6hzPM
         0bXGF7WZlu106Eikrx65xokuvWMCbyALfbTWj7sOWyN+kKnbz9dR2IT9n5OQ7vcWfB4a
         Is2xL0OYZUXlYjya9WXxhFAR3DGhVvld7w3MyjYHSnJDXj8wvl2OmDoRf2MP2UOuL+vu
         S81yXuk6Qr7jcqdvkN5AhRMD9CABTdzQyzcHqNWVa896Q9vqZvvS0gwRHKYfyQwZoDos
         mhuQ==
X-Gm-Message-State: AOAM533gtL+cifAaDZRKiz+L2JV5dD+bOUSdRP+vKu4hMriVlLbspMw8
	Ca9y6NhaGgtbV6hwEwNrtM0=
X-Google-Smtp-Source: ABdhPJz3ZLvC+5lBbW/R/WXDQrx4i75tnvS2iYuoZFMiY7aGj5t+2TOuuztNSCyoqdhQRIt0Zoz+iw==
X-Received: by 2002:a1c:66c4:: with SMTP id a187mr6667378wmc.148.1601535593389;
        Wed, 30 Sep 2020 23:59:53 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	"'Oleksandr'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <ian.jackson@eu.citrix.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Jun Nakajima'" <jun.nakajima@intel.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	"'Tim Deegan'" <tim@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> <1599769330-17656-3-git-send-email-olekstysh@gmail.com> <3997a705-ccb1-4b8f-41ca-c5507360c759@xen.org> <000201d69314$97bd8fa0$c738aee0$@xen.org> <c9131bce-f028-2824-9ffc-b4db08017569@gmail.com> <2cbe7efd-f356-0f1b-0bb1-bfb2243f180c@xen.org>
In-Reply-To: <2cbe7efd-f356-0f1b-0bb1-bfb2243f180c@xen.org>
Subject: RE: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
Date: Thu, 1 Oct 2020 07:59:51 +0100
Message-ID: <008701d697c0$763c32e0$62b498a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIQ4beZ3sUmYihKXwfoVgy4BRIJGwHfeJxHAedMvHMBMIgdOgK34vRDAgTQpPuowBZCsA==

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 30 September 2020 18:48
> To: Oleksandr <olekstysh@gmail.com>; xen-devel@lists.xenproject.org
> Cc: paul@xen.org; 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Ian Jackson'
> <ian.jackson@eu.citrix.com>; 'Jan Beulich' <jbeulich@suse.com>; 'Stefano Stabellini'
> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Roger Pau Monné' <roger.pau@citrix.com>; 'Jun
> Nakajima' <jun.nakajima@intel.com>; 'Kevin Tian' <kevin.tian@intel.com>; 'Tim Deegan' <tim@xen.org>;
> 'Julien Grall' <julien.grall@arm.com>
> Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
>
> Hi,
>
> On 30/09/2020 14:39, Oleksandr wrote:
> >
> > Hi Julien
> >
> > On 25.09.20 11:19, Paul Durrant wrote:
> >>> -----Original Message-----
> >>> From: Julien Grall <julien@xen.org>
> >>> Sent: 24 September 2020 19:01
> >>> To: Oleksandr Tyshchenko <olekstysh@gmail.com>;
> >>> xen-devel@lists.xenproject.org
> >>> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Andrew
> >>> Cooper <andrew.cooper3@citrix.com>;
> >>> George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> >>> <ian.jackson@eu.citrix.com>; Jan Beulich
> >>> <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei
> >>> Liu <wl@xen.org>; Roger Pau
> >>> Monné <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>; Jun
> >>> Nakajima <jun.nakajima@intel.com>;
> >>> Kevin Tian <kevin.tian@intel.com>; Tim Deegan <tim@xen.org>; Julien
> >>> Grall <julien.grall@arm.com>
> >>> Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
> >>>
> >>>
> >>>
> >>> On 10/09/2020 21:21, Oleksandr Tyshchenko wrote:
> >>>> +static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
> >>>> +{
> >>>> +    unsigned int prev_state = STATE_IOREQ_NONE;
> >>>> +    unsigned int state = p->state;
> >>>> +    uint64_t data = ~0;
> >>>> +
> >>>> +    smp_rmb();
> >>>> +
> >>>> +    /*
> >>>> +     * The only reason we should see this condition be false is when an
> >>>> +     * emulator dying races with I/O being requested.
> >>>> +     */
> >>>> +    while ( likely(state != STATE_IOREQ_NONE) )
> >>>> +    {
> >>>> +        if ( unlikely(state < prev_state) )
> >>>> +        {
> >>>> +            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
> >>>> +                     prev_state, state);
> >>>> +            sv->pending = false;
> >>>> +            domain_crash(sv->vcpu->domain);
> >>>> +            return false; /* bail */
> >>>> +        }
> >>>> +
> >>>> +        switch ( prev_state = state )
> >>>> +        {
> >>>> +        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
> >>>> +            p->state = STATE_IOREQ_NONE;
> >>>> +            data = p->data;
> >>>> +            break;
> >>>> +
> >>>> +        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
> >>>> +        case STATE_IOREQ_INPROCESS:
> >>>> +            wait_on_xen_event_channel(sv->ioreq_evtchn,
> >>>> +                                      ({ state = p->state;
> >>>> +                                         smp_rmb();
> >>>> +                                         state != prev_state; }));
> >>>> +            continue;
> >>> As I pointed out previously [1], this helper was implemented with the
> >>> expectation that wait_on_xen_event_channel() will not return if the vCPU
> >>> got rescheduled.
> >>>
> >>> However, this assumption doesn't hold on Arm.
> >>>
> >>> I can see two solutions:
> >>>      1) Re-execute the caller
> >>>      2) Prevent an IOREQ from disappearing until the loop finishes.
> >>>
> >>> @Paul any opinions?
> >> The ioreq control plane is largely predicated on there being no
> >> pending I/O when the state of a server is modified, and it is assumed
> >> that domain_pause() is sufficient to achieve this. If that assumption
> >> doesn't hold then we need additional synchronization.
>
> I don't think this assumption even holds on x86, because domain_pause()
> will not wait for I/O to finish.
>
> On x86, the context switch will reset the stack and therefore
> wait_on_xen_event_channel() is not going to return. Instead,
> handle_hvm_io_completion() will be called from the tail callback in
> context_switch(). get_pending_vcpu() would return NULL as the IOREQ
> server disappeared. Although, it is not clear whether the vCPU will
> continue to run (or not).
>
> Did I miss anything?
>
> Regarding the fix itself, I am not sure what sort of synchronization we
> can do. Are you suggesting to wait for the I/O to complete? If so, how
> do we handle the case where the IOREQ server died?
>

s/IOREQ server/emulator, but that is a good point. If domain_pause() did wait for I/O to complete then this would always have been a problem so, with hindsight, it should have been obvious this was not the case.

Digging back, it looks like things would probably have been ok before 125833f5f1f0 "x86: fix ioreq-server event channel vulnerability" because wait_on_xen_event_channel() and the loop condition above it did not dereference anything that would disappear with IOREQ server destruction (they used the shared page, which at this point was always a magic page and hence part of the target domain's memory). So things have probably been broken since 2014.

To fix the problem I think it is sufficient that we go back to a wait loop that can tolerate the IOREQ server disappearing between iterations and deals with that as a completed emulation (albeit returning f's for reads and sinking writes).

  Paul

> > May I please clarify whether the concern still stands (with what was said
> > above) and we need additional synchronization on Arm?
>
> Yes, the concern is still there (see above).
>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 07:23:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 07:23:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1058.3534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNsvq-0003kz-AN; Thu, 01 Oct 2020 07:23:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1058.3534; Thu, 01 Oct 2020 07:23:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNsvq-0003ks-7E; Thu, 01 Oct 2020 07:23:14 +0000
Received: by outflank-mailman (input) for mailman id 1058;
 Thu, 01 Oct 2020 07:23:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=937a=DI=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kNsvo-0003kn-EP
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 07:23:12 +0000
Received: from mail-ed1-x536.google.com (unknown [2a00:1450:4864:20::536])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23f95e66-aa54-40dd-8b7e-63aae811e498;
 Thu, 01 Oct 2020 07:23:11 +0000 (UTC)
Received: by mail-ed1-x536.google.com with SMTP id j2so4524444eds.9
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 00:23:11 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-224.amazon.com. [54.240.197.224])
 by smtp.gmail.com with ESMTPSA id v25sm3367939edr.29.2020.10.01.00.23.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 01 Oct 2020 00:23:10 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=937a=DI=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kNsvo-0003kn-EP
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 07:23:12 +0000
X-Inumbo-ID: 23f95e66-aa54-40dd-8b7e-63aae811e498
Received: from mail-ed1-x536.google.com (unknown [2a00:1450:4864:20::536])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 23f95e66-aa54-40dd-8b7e-63aae811e498;
	Thu, 01 Oct 2020 07:23:11 +0000 (UTC)
Received: by mail-ed1-x536.google.com with SMTP id j2so4524444eds.9
        for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 00:23:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=wYXYRhwbqmixpIy79JPI8rc1fWn7WjBc8vOTcyZaeAo=;
        b=Jvn+7g9Zqz6rvpx76xXMbXzUXANqARBmBiB5JoGKr6YjBgGdIf0wX5vJZeKvfaptuj
         6js7CDLuAEzYLUT817IyWvYrZaqErBG9xP/ylC3x4J0vjMMBD1OpeQrZ5woUdAx+PgRu
         5FxI8KJUxGrUbtq1WzUzHccMGYdOHvnkakkMo9cgQtecD9wwWKbF94Gfnid8/ZcJ/6Zc
         DcvQRFazcFJdYe12Lp5COdAix6Y3SK5NJ91gsOd1ulqTgX4BKglPEli+t8OY42NhcdCX
         bUiEjshZT85HKbRQpA20dmiBb0p/vaZazjWdLwI69PF6R+SRby3eX2WJCNpm/qqjQM39
         F2mw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=wYXYRhwbqmixpIy79JPI8rc1fWn7WjBc8vOTcyZaeAo=;
        b=NQ4rcbw1BkLX36nz2kKnKGwH+vNIQggFZBKrQlMk7O0INfUt8aiELgPQCd8XtS/tMw
         p9HmDHZU5ZcKogn/NT93J/gYfJnX+MOoHT9wmokExtH+WeI2nv4theyWjAzT64NvCPxi
         DVDSuJsyGRjS+PXW0n+ZMpQ9ImdDDYTtrHTbmNCugg7l+JpsjgnuQlqQDeo7l1WHkgD7
         zzmaphm8agK8hIGo3NRA5NfK+2jnvmrTO0zsCIZSrqbEQhfUv4xklkRDT5/rj6MEz9WM
         SMuqjGJ4wAWL1o5AdwswAGfWLFnAytbtI5K2+xYnoUhHwXlUEExvXgwgW6PCKz5yHgNb
         uU6Q==
X-Gm-Message-State: AOAM5312eo5VDSPM19yJLuMxKECxHpOsZ/kUH98FuuyXoZciHDb6POvk
	Dk0Gi766S5PjC5cWySGDhtg=
X-Google-Smtp-Source: ABdhPJxJQ6M7A6YJ4r8gH3r6d3bHIxyT0A18LDHuTXv9xgXakxgl4JJCrgPt7ED/odJub3sxWzD7xw==
X-Received: by 2002:a50:cd48:: with SMTP id d8mr4406454edj.286.1601536990871;
        Thu, 01 Oct 2020 00:23:10 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-224.amazon.com. [54.240.197.224])
        by smtp.gmail.com with ESMTPSA id v25sm3367939edr.29.2020.10.01.00.23.09
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 01 Oct 2020 00:23:10 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: <paul@xen.org>,
	<xen-devel@lists.xenproject.org>,
	"'Ian Jackson'" <ian.jackson@eu.citrix.com>,
	<wei.liu@kernel.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>
References: <20200915141007.25965-1-paul@xen.org> <001001d69328$0d2790c0$2776b240$@xen.org>
In-Reply-To: <001001d69328$0d2790c0$2776b240$@xen.org>
Subject: RE: [PATCH v2 0/2] fix 'xl block-detach'
Date: Thu, 1 Oct 2020 08:23:08 +0100
Message-ID: <008801d697c3$b73a3370$25ae9a50$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJeoUus0OdpJ4MR5bCYHg0A3ED8pwIrGjDEqGDrr3A=

Toolstack maintainers...

Ping+1

> -----Original Message-----
> From: Paul Durrant <xadimgnik@gmail.com>
> Sent: 25 September 2020 11:39
> To: xen-devel@lists.xenproject.org
> Cc: 'Paul Durrant' <pdurrant@amazon.com>
> Subject: RE: [PATCH v2 0/2] fix 'xl block-detach'
> 
> Ping? AFAICT this series is fully acked. Can it be committed soon?
> 
>   Paul
> 
> > -----Original Message-----
> > From: Paul Durrant <paul@xen.org>
> > Sent: 15 September 2020 15:10
> > To: xen-devel@lists.xenproject.org
> > Cc: Paul Durrant <pdurrant@amazon.com>
> > Subject: [PATCH v2 0/2] fix 'xl block-detach'
> >
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > This series makes it behave as the documentation states it should.
> >
> > Paul Durrant (2):
> >   libxl: provide a mechanism to define a device 'safe remove'
> >     function...
> >   xl: implement documented '--force' option for block-detach
> >
> >  docs/man/xl.1.pod.in         |  4 ++--
> >  tools/libxl/libxl.h          | 33 +++++++++++++++++++++++++--------
> >  tools/libxl/libxl_device.c   |  9 +++++----
> >  tools/libxl/libxl_disk.c     |  4 +++-
> >  tools/libxl/libxl_domain.c   |  2 +-
> >  tools/libxl/libxl_internal.h | 30 +++++++++++++++++++++++-------
> >  tools/xl/xl_block.c          | 21 ++++++++++++++++-----
> >  tools/xl/xl_cmdtable.c       |  3 ++-
> >  8 files changed, 77 insertions(+), 29 deletions(-)
> >
> > --
> > 2.20.1
> 




From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:03:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:03:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1096.3593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtYp-00089Z-Nc; Thu, 01 Oct 2020 08:03:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1096.3593; Thu, 01 Oct 2020 08:03:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtYp-00089S-Jd; Thu, 01 Oct 2020 08:03:31 +0000
Received: by outflank-mailman (input) for mailman id 1096;
 Thu, 01 Oct 2020 08:03:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=937a=DI=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kNtYn-00089N-Vl
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:03:30 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4cbf1905-828f-475f-9b13-84d3264b6b0a;
 Thu, 01 Oct 2020 08:03:28 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id k15so4505948wrn.10
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 01:03:28 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-224.amazon.com. [54.240.197.224])
 by smtp.gmail.com with ESMTPSA id l5sm3022012wrv.24.2020.10.01.01.03.26
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 01 Oct 2020 01:03:26 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=937a=DI=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kNtYn-00089N-Vl
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:03:30 +0000
X-Inumbo-ID: 4cbf1905-828f-475f-9b13-84d3264b6b0a
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4cbf1905-828f-475f-9b13-84d3264b6b0a;
	Thu, 01 Oct 2020 08:03:28 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id k15so4505948wrn.10
        for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 01:03:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=OTrEnhNBN5R4+dyvqauB0Y5OwRj53IzJUx91Bcl+i78=;
        b=imOcekMOnoMaah3UJlqcpTJVMYjVhigkPyz40HYVmXhEJ4xWp4/P1BNO1ZVr9chui+
         +7spsDUSjId3ROYCISatnycAeCmnB5oOcaliX8csdcJswDj4+NH+z5Ohvk7XiF0OkzTz
         OSI3COJMjpjLSEzPBcyVgJVXNjY1duYV1xTSTjMmCgkSNIAY9ESlRqo/TLU4ftp7leDD
         0jJsmgC0EyDWxq2Uzx//u2JmVSZ1n2dRFb2FR1Jsrr4NjkNUPyuw14gS+/5m7juQi6Xl
         a5+POY0TW6Opv4xfeLe/6oniu0rGle8XxdIYJpdgK/NJDvPc205QeL+Q3l+rH85YCNp3
         jvpw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=OTrEnhNBN5R4+dyvqauB0Y5OwRj53IzJUx91Bcl+i78=;
        b=NRluWBUnSGnyRWQ+/gnEA/9hFrUvzWXQSyGr8Uj1ABf626T95eZC23qLpyMsJRXgO6
         3IdyxMK2/PvY6DmlFrlSGKDrx/gU0nNuM0MoedHBrU0+qj6MhVlIukzhXum3U6W/ZB+z
         YAd7s1ucsepWxWL6cFL5LruBwfc4jYHskdJNJsoZeHrTfwrvoCvUWMMFa5UpU+iGxgtN
         PL39Ihv8MYqqeMJHKABVJsGCntJFd+ZZHMLk9NQfkfzSLqdG/gRaL3erhQl+cbtxYUHT
         1nf1XNNrTXrBV7zMLMi+pMg+6QNRq7nagHE1a29wMIbrSZMOt0ZcMHIMc2tvkrCMNA2K
         KXzg==
X-Gm-Message-State: AOAM531l0+2wzOczcC23Uqn08Zn4M01l2kepIhS4JlNx/4QZ35JmC9/o
	9YMfodW8PIotK6WhKRxngA8=
X-Google-Smtp-Source: ABdhPJwan3nlRJEN7HKE69zar2eImtVpRpYFlhfsKXozjX7PA3CNyjJoF03MdM73/nxESr0z2OvvMw==
X-Received: by 2002:a5d:5583:: with SMTP id i3mr7158520wrv.119.1601539407387;
        Thu, 01 Oct 2020 01:03:27 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-224.amazon.com. [54.240.197.224])
        by smtp.gmail.com with ESMTPSA id l5sm3022012wrv.24.2020.10.01.01.03.26
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 01 Oct 2020 01:03:26 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Anthony PERARD'" <anthony.perard@citrix.com>
Cc: <xen-devel@lists.xenproject.org>,
	<qemu-devel@nongnu.org>,
	"'Paul Durrant'" <pdurrant@amazon.com>,
	"'Jerome Leseinne'" <jerome.leseinne@gmail.com>,
	"'Edwin Torok'" <edvin.torok@citrix.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>
References: <20200923155731.29528-1-paul@xen.org> <20200930114235.GL2024@perard.uk.xensource.com>
In-Reply-To: <20200930114235.GL2024@perard.uk.xensource.com>
Subject: RE: [PATCH] xen-bus: reduce scope of backend watch
Date: Thu, 1 Oct 2020 09:03:25 +0100
Message-ID: <008c01d697c9$579520a0$06bf61e0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFwzHOYQKlt73iXJ/fB9mI5UZbIzgHcxvtVqj8StGA=

> -----Original Message-----
> From: Anthony PERARD <anthony.perard@citrix.com>
> Sent: 30 September 2020 12:43
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; qemu-devel@nongnu.org; Paul Durrant <pdurrant@amazon.com>; Jerome
> Leseinne <jerome.leseinne@gmail.com>; Edwin Torok <edvin.torok@citrix.com>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: Re: [PATCH] xen-bus: reduce scope of backend watch
> 
> On Wed, Sep 23, 2020 at 04:57:31PM +0100, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Currently a single watch on /local/domain/X/backend is registered by each
> > QEMU process running in service domain X (where X is usually 0). The purpose
> > of this watch is to ensure that QEMU is notified when the Xen toolstack
> > creates a new device backend area.
> > Such a backend area is specific to a single frontend area created for a
> > specific guest domain and, since each QEMU process is also created to service
> > a specific guest domain, it is unnecessary and inefficient to notify all QEMU
> > processes.
> > Only the QEMU process associated with the same guest domain needs to
> > receive the notification. This patch re-factors the watch registration code
> > such that notifications are targeted appropriately.
> >
> > Reported-by: Jerome Leseinne <jerome.leseinne@gmail.com>
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> >
> > diff --git a/hw/xen/xen-backend.c b/hw/xen/xen-backend.c
> > index 10199fb58d..f2711fe4a7 100644
> > --- a/hw/xen/xen-backend.c
> > +++ b/hw/xen/xen-backend.c
> > @@ -41,6 +41,11 @@ static void xen_backend_table_add(XenBackendImpl *impl)
> >      g_hash_table_insert(xen_backend_table_get(), (void *)impl->type, impl);
> >  }
> >
> > +static void **xen_backend_table_keys(unsigned int *count)
> > +{
> > +    return g_hash_table_get_keys_as_array(xen_backend_table_get(), count);
> 
> That could be cast to (const gchar **) as the GLib doc suggests, or (const
> char **) since gchar and char are the same.
> https://developer.gnome.org/glib/stable/glib-Hash-Tables.html#g-hash-table-get-keys-as-array
> 

Ok, I'll re-arrange the const-ing to cast at the inner level.

> > +}
> > +
> >  static const XenBackendImpl *xen_backend_table_lookup(const char *type)
> >  {
> >      return g_hash_table_lookup(xen_backend_table_get(), type);
> > diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
> > index 9ce1c9540b..c83da93bf3 100644
> > --- a/hw/xen/xen-bus.c
> > +++ b/hw/xen/xen-bus.c
> > @@ -430,7 +430,13 @@ static void xen_bus_unrealize(BusState *bus)
> >      trace_xen_bus_unrealize();
> >
> >      if (xenbus->backend_watch) {
> > -        xen_bus_remove_watch(xenbus, xenbus->backend_watch, NULL);
> > +        unsigned int i;
> > +
> > +        for (i = 0; i < xenbus->backend_types; i++) {
> > +            xen_bus_remove_watch(xenbus, xenbus->backend_watch[i], NULL);
> 
> We should check if backend_watch[i] is NULL.
> 

Yes, I'll add a check.

> > +        }
> > +
> > +        g_free(xenbus->backend_watch);
> >          xenbus->backend_watch = NULL;
> >      }
> >
> 
> The rest of the patch looks fine. Next improvement is to look at only
> one backend type in xen_bus_backend_changed() since there is now a
> watch per backend type :-), but that would be for another day.
> 

Might not be too tricky. I'll see if I can come up with a follow-up patch.

  Paul

> Cheers,
> 
> --
> Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:15:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:15:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1101.3609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtk1-0000jK-Pm; Thu, 01 Oct 2020 08:15:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1101.3609; Thu, 01 Oct 2020 08:15:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtk1-0000jD-MU; Thu, 01 Oct 2020 08:15:05 +0000
Received: by outflank-mailman (input) for mailman id 1101;
 Thu, 01 Oct 2020 08:15:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q5fG=DI=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kNtk0-0000j7-GE
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:15:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 332a8b70-41db-4a28-b8c9-7bb8836f2e7d;
 Thu, 01 Oct 2020 08:15:03 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kNtjy-0008NO-IX; Thu, 01 Oct 2020 08:15:02 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kNtjy-000244-8G; Thu, 01 Oct 2020 08:15:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=q5fG=DI=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kNtk0-0000j7-GE
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:15:04 +0000
X-Inumbo-ID: 332a8b70-41db-4a28-b8c9-7bb8836f2e7d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 332a8b70-41db-4a28-b8c9-7bb8836f2e7d;
	Thu, 01 Oct 2020 08:15:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:Message-Id:Date:
	Subject:Cc:To:From; bh=FbjlksMu36p7p6zRsPPG2Wo7gSzgDilHr5PW/goywzg=; b=hilq0i
	uzkQAjHMtDMOkBwWo43MPEvjvYipGJgep2B5L2FmfnfwnZK9wyo3jCx9MuD6NGa9UJZuERGXmP8g0
	piILENWxKcwz26eM2X+AzCU2pc07uAvKWdcBEIdS+hvVRF0qI/NiSS8s8crDlSaXGmG/lmc2naT89
	7Q8NQ2jNLK8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kNtjy-0008NO-IX; Thu, 01 Oct 2020 08:15:02 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kNtjy-000244-8G; Thu, 01 Oct 2020 08:15:02 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org,
	qemu-devel@nongnu.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jerome Leseinne <jerome.leseinne@gmail.com>,
	Edwin Torok <edvin.torok@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>
Subject: [PATCH v2] xen-bus: reduce scope of backend watch
Date: Thu,  1 Oct 2020 09:15:00 +0100
Message-Id: <20201001081500.1026-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Currently a single watch on /local/domain/X/backend is registered by each
QEMU process running in service domain X (where X is usually 0). The purpose
of this watch is to ensure that QEMU is notified when the Xen toolstack
creates a new device backend area.
Such a backend area is specific to a single frontend area created for a
specific guest domain and, since each QEMU process is also created to service
a specific guest domain, it is unnecessary and inefficient to notify all QEMU
processes.
Only the QEMU process associated with the same guest domain needs to
receive the notification. This patch re-factors the watch registration code
such that notifications are targeted appropriately.

Reported-by: Jerome Leseinne <jerome.leseinne@gmail.com>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Edwin Torok <edvin.torok@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>

v2:
 - Re-work casting
 - Check for a NULL watch before trying to remove it
---
 hw/xen/xen-backend.c         | 11 ++++++++++
 hw/xen/xen-bus.c             | 40 ++++++++++++++++++++++++++++--------
 include/hw/xen/xen-backend.h |  1 +
 include/hw/xen/xen-bus.h     |  3 ++-
 4 files changed, 46 insertions(+), 9 deletions(-)

diff --git a/hw/xen/xen-backend.c b/hw/xen/xen-backend.c
index 10199fb58d..5b0fb76eae 100644
--- a/hw/xen/xen-backend.c
+++ b/hw/xen/xen-backend.c
@@ -41,6 +41,12 @@ static void xen_backend_table_add(XenBackendImpl *impl)
     g_hash_table_insert(xen_backend_table_get(), (void *)impl->type, impl);
 }
 
+static const char **xen_backend_table_keys(unsigned int *count)
+{
+    return (const char **)g_hash_table_get_keys_as_array(
+        xen_backend_table_get(), count);
+}
+
 static const XenBackendImpl *xen_backend_table_lookup(const char *type)
 {
     return g_hash_table_lookup(xen_backend_table_get(), type);
@@ -70,6 +76,11 @@ void xen_backend_register(const XenBackendInfo *info)
     xen_backend_table_add(impl);
 }
 
+const char **xen_backend_get_types(unsigned int *count)
+{
+    return xen_backend_table_keys(count);
+}
+
 static QLIST_HEAD(, XenBackendInstance) backend_list;
 
 static void xen_backend_list_add(XenBackendInstance *backend)
diff --git a/hw/xen/xen-bus.c b/hw/xen/xen-bus.c
index 9ce1c9540b..8c588920d9 100644
--- a/hw/xen/xen-bus.c
+++ b/hw/xen/xen-bus.c
@@ -430,7 +430,15 @@ static void xen_bus_unrealize(BusState *bus)
     trace_xen_bus_unrealize();
 
     if (xenbus->backend_watch) {
-        xen_bus_remove_watch(xenbus, xenbus->backend_watch, NULL);
+        unsigned int i;
+
+        for (i = 0; i < xenbus->backend_types; i++) {
+            if (xenbus->backend_watch[i]) {
+                xen_bus_remove_watch(xenbus, xenbus->backend_watch[i], NULL);
+            }
+        }
+
+        g_free(xenbus->backend_watch);
         xenbus->backend_watch = NULL;
     }
 
@@ -446,8 +454,11 @@ static void xen_bus_unrealize(BusState *bus)
 
 static void xen_bus_realize(BusState *bus, Error **errp)
 {
+    char *key = g_strdup_printf("%u", xen_domid);
     XenBus *xenbus = XEN_BUS(bus);
     unsigned int domid;
+    const char **type;
+    unsigned int i;
     Error *local_err = NULL;
 
     trace_xen_bus_realize();
@@ -469,19 +480,32 @@ static void xen_bus_realize(BusState *bus, Error **errp)
 
     module_call_init(MODULE_INIT_XEN_BACKEND);
 
-    xenbus->backend_watch =
-        xen_bus_add_watch(xenbus, "", /* domain root node */
-                          "backend", xen_bus_backend_changed, &local_err);
-    if (local_err) {
-        /* This need not be treated as a hard error so don't propagate */
-        error_reportf_err(local_err,
-                          "failed to set up enumeration watch: ");
+    type = xen_backend_get_types(&xenbus->backend_types);
+    xenbus->backend_watch = g_new(XenWatch *, xenbus->backend_types);
+
+    for (i = 0; i < xenbus->backend_types; i++) {
+        char *node = g_strdup_printf("backend/%s", type[i]);
+
+        xenbus->backend_watch[i] =
+            xen_bus_add_watch(xenbus, node, key, xen_bus_backend_changed,
+                              &local_err);
+        if (local_err) {
+            /* This need not be treated as a hard error so don't propagate */
+            error_reportf_err(local_err,
+                              "failed to set up '%s' enumeration watch: ",
+                              type[i]);
+        }
+
+        g_free(node);
     }
 
+    g_free(type);
+    g_free(key);
     return;
 
 fail:
     xen_bus_unrealize(bus);
+    g_free(key);
 }
 
 static void xen_bus_unplug_request(HotplugHandler *hotplug,
diff --git a/include/hw/xen/xen-backend.h b/include/hw/xen/xen-backend.h
index 010d712638..aac2fd454d 100644
--- a/include/hw/xen/xen-backend.h
+++ b/include/hw/xen/xen-backend.h
@@ -31,6 +31,7 @@ void xen_backend_set_device(XenBackendInstance *backend,
 XenDevice *xen_backend_get_device(XenBackendInstance *backend);
 
 void xen_backend_register(const XenBackendInfo *info);
+const char **xen_backend_get_types(unsigned int *nr);
 
 void xen_backend_device_create(XenBus *xenbus, const char *type,
                                const char *name, QDict *opts, Error **errp);
diff --git a/include/hw/xen/xen-bus.h b/include/hw/xen/xen-bus.h
index 3df696136f..6bdbf3ff82 100644
--- a/include/hw/xen/xen-bus.h
+++ b/include/hw/xen/xen-bus.h
@@ -66,7 +66,8 @@ struct XenBus {
     domid_t backend_id;
     struct xs_handle *xsh;
     XenWatchList *watch_list;
-    XenWatch *backend_watch;
+    unsigned int backend_types;
+    XenWatch **backend_watch;
     QLIST_HEAD(, XenDevice) inactive_devices;
 };
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:23:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:23:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1104.3620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtsG-0001e8-Mg; Thu, 01 Oct 2020 08:23:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1104.3620; Thu, 01 Oct 2020 08:23:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtsG-0001e1-Jj; Thu, 01 Oct 2020 08:23:36 +0000
Received: by outflank-mailman (input) for mailman id 1104;
 Thu, 01 Oct 2020 08:23:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VHLv=DI=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kNtsE-0001dw-UI
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:23:35 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b638129b-a334-42ca-ad13-400fce1a8748;
 Thu, 01 Oct 2020 08:22:56 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VHLv=DI=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
	id 1kNtsE-0001dw-UI
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:23:35 +0000
X-Inumbo-ID: b638129b-a334-42ca-ad13-400fce1a8748
Received: from galois.linutronix.de (unknown [193.142.43.55])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b638129b-a334-42ca-ad13-400fce1a8748;
	Thu, 01 Oct 2020 08:22:56 +0000 (UTC)
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1601540555;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ONHqAv4qT+OTc2BgimstCJPvlJyn9PqWstJJf6uzEpc=;
	b=NgazHTI+SJGDs8egXRGtdo+QGi3xh0JlmycrGchBYrODbgZTjUWPuqO6ut+FnIDPq/gdSJ
	vByj+411k82ifrt2BEVuNPgqX27Y3wm+6Dmn4KcscKE05+QXauqWGac/+IkXdSlflKGPVQ
	gqO3sNFZEbbRG+38SCAe6Uy+7ikGtCs4oqeZOia8yJeiTw1hywgKpiq12HtknxenkgKzRe
	OJrPufOitZDre5pRXFtSqz9yo9w/AoYjWbk8swkUhUsvsq/Dr/oz8+/DUOp6P8ZPNkir9G
	eIp6uHls/G0PP4zrn3niGbMtU2e4Jej+7XYXpi+l/7lKt0TktRFbV900KLGnEw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1601540555;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ONHqAv4qT+OTc2BgimstCJPvlJyn9PqWstJJf6uzEpc=;
	b=IOZyUThnP0eFdoTtp5uGay28riwQzj9H86PK6wCN00tcNvQN7l480emm6WUDFykV9hpp72
	zqYBI/TNYKqiSDAg==
To: Zi Yan <ziy@nvidia.com>
Cc: Wei Liu <wei.liu@kernel.org>, Joerg Roedel <jroedel@suse.de>, x86@kernel.org, iommu@lists.linux-foundation.org, linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: Boot crash due to "x86/msi: Consolidate MSI allocation"
In-Reply-To: <A838FF2B-11FC-42B9-87D7-A76CF46E0575@nvidia.com>
References: <A838FF2B-11FC-42B9-87D7-A76CF46E0575@nvidia.com>
Date: Thu, 01 Oct 2020 10:22:35 +0200
Message-ID: <874knegxtg.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

Yan,

On Wed, Sep 30 2020 at 21:29, Zi Yan wrote:
> I am running linux-next on my Dell R630 and the system crashed at boot
> time. I bisected linux-next and got to your commit:
>
>     x86/msi: Consolidate MSI allocation
>
> The crash log is below and my .config is attached.
>
> [   11.840905]  intel_get_irq_domain+0x24/0xb0
> [   11.840905]  native_setup_msi_irqs+0x3b/0x90

This is not really helpful because that's in the middle of the queue and
that code is gone at the very end. Yes, it's unfortunate that this
breaks bisection.

Can you please test:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/irq

which contains fixes, and if it still crashes, provide the dmesg from it.

Thanks,

        tglx



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:24:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:24:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1107.3636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtsx-0001mw-3p; Thu, 01 Oct 2020 08:24:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1107.3636; Thu, 01 Oct 2020 08:24:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtsw-0001mp-Vx; Thu, 01 Oct 2020 08:24:18 +0000
Received: by outflank-mailman (input) for mailman id 1107;
 Thu, 01 Oct 2020 08:24:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNtsv-0001lm-S5
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:24:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 182b7f6b-c65c-4414-a3a9-3360eee31a7a;
 Thu, 01 Oct 2020 08:24:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNtso-000074-9Y; Thu, 01 Oct 2020 08:24:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNtsn-0005X6-T6; Thu, 01 Oct 2020 08:24:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNtsn-00013W-SQ; Thu, 01 Oct 2020 08:24:09 +0000
X-Inumbo-ID: 182b7f6b-c65c-4414-a3a9-3360eee31a7a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=n5nmKahzNnsMelv07x9yK89Uoc0+eU1+CF/mpH03aTw=; b=gGgvNrzEZU3N5Di+v/mHhi//Vn
	i+YxQuJmLtP9yFHwdaKLqMOWAUhVmNqO9LScBp7UDHu/8yaor6wyj748FKsAjveECijXJXNQIEIDE
	YImLDGJmKwhmrfA5kvYVo5fxcyX8Py25gY0xZwMP8qFoCzd4yx3f0oC6C0p0IIdhbRA0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155113-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155113: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=5dba8c2f23049aa68b777a9e7e9f76c12dd00012
X-Osstest-Versions-That:
    xen=d4ed1d4132f5825a795d5a78505811ecd2717b5e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 08:24:09 +0000

flight 155113 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155113/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154611
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154611
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154611
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154611
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 154611
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 154611
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 154611
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 154611
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 154611
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 154611
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 154611
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 154611
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 154611
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  5dba8c2f23049aa68b777a9e7e9f76c12dd00012
baseline version:
 xen                  d4ed1d4132f5825a795d5a78505811ecd2717b5e

Last test of basis   154611  2020-09-22 11:26:05 Z    8 days
Failing since        154634  2020-09-23 05:59:56 Z    8 days    3 attempts
Testing same since   155113  2020-09-29 23:49:08 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 872 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:24:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:24:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1108.3649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtt0-0001p1-C9; Thu, 01 Oct 2020 08:24:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1108.3649; Thu, 01 Oct 2020 08:24:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNtt0-0001os-83; Thu, 01 Oct 2020 08:24:22 +0000
Received: by outflank-mailman (input) for mailman id 1108;
 Thu, 01 Oct 2020 08:24:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNtsz-0001mK-JU
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:24:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fb4d1723-1e22-4c65-8089-b457b6e5fc58;
 Thu, 01 Oct 2020 08:24:15 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNtst-00007p-DJ; Thu, 01 Oct 2020 08:24:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNtst-0005XZ-2B; Thu, 01 Oct 2020 08:24:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNtst-0001MU-1Y; Thu, 01 Oct 2020 08:24:15 +0000
X-Inumbo-ID: fb4d1723-1e22-4c65-8089-b457b6e5fc58
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=IpY0NU6HeH5TDFRT8EP21uCUj7HfyYGEad5EmyhsYig=; b=T8kz79ezQkzbSlIlyrJWflUMl7
	clTBII7aFLKH8tfTHe4RK84h2TOe2JWiqhOblxCvJEwSwtUnT9A/J8XBzDUpx6ZGFja+xBnfjryRN
	K9DPeb6dRucIGLHn7VSmrD5SBzG6PkYJ3O8NQgUOo/X3jue8+crdhqwxQXvvMm1mXtTU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.12-testing bisection] complete test-amd64-i386-xl-xsm
Message-Id: <E1kNtst-0001MU-1Y@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 08:24:15 +0000

branch xen-4.12-testing
xenbranch xen-4.12-testing
job test-amd64-i386-xl-xsm
testid guest-start

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Bug not present: 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155205/


  commit 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:10:32 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.12-testing/test-amd64-i386-xl-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.12-testing/test-amd64-i386-xl-xsm.guest-start --summary-out=tmp/155205.bisection-summary --basis-template=154601 --blessings=real,real-bisect xen-4.12-testing test-amd64-i386-xl-xsm guest-start
Searching for failure / basis pass:
 155075 fail [host=pinot1] / 154601 [host=fiano1] 154121 [host=chardonnay0] 152525 [host=fiano1] 151715 [host=albana0] 151388 [host=rimava1] 151367 [host=huxelrebe1] 151341 [host=chardonnay1] 151316 [host=chardonnay0] 151292 [host=pinot0] 151276 [host=fiano1] 151248 [host=albana0] 151227 [host=huxelrebe0] 151184 [host=fiano0] 151161 [host=elbling0] 151128 [host=albana1] 151082 ok.
Failure / basis pass flights: 155075 / 151082
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 0186e76a62f7409804c2e4785d5a11e7f82a7c52
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 2e3de6253422112ae43e608661ba94ea6b345694 d58c48df8c6ca819f5e6e6f1740bb114f24f024f
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985-dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484\
 fe09f50876798-d0d8ad39ecb51cd7497cd524484fe09f50876798 git://xenbits.xen.org/qemu-xen.git#8023a62081ffbe3f734019076ec1a2b4213142bb-8023a62081ffbe3f734019076ec1a2b4213142bb git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-155821a1990b6de78dde5f98fa5ab90e802021e0 git://xenbits.xen.org/xen.git#d58c48df8c6ca819f5e6e6f1740bb114f24f024f-0186e76a62f7409804c2e4785d5a11e7f82a7c52
Loaded 12581 nodes in revision graph
Searching for test results:
 151058 [host=elbling1]
 151082 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 2e3de6253422112ae43e608661ba94ea6b345694 d58c48df8c6ca819f5e6e6f1740bb114f24f024f
 151161 [host=elbling0]
 151128 [host=albana1]
 151227 [host=huxelrebe0]
 151184 [host=fiano0]
 151248 [host=albana0]
 151276 [host=fiano1]
 151292 [host=pinot0]
 151341 [host=chardonnay1]
 151316 [host=chardonnay0]
 151367 [host=huxelrebe1]
 151388 [host=rimava1]
 151715 [host=albana0]
 152525 [host=fiano1]
 154121 [host=chardonnay0]
 154601 [host=fiano1]
 154622 fail irrelevant
 154663 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155014 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155077 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 2e3de6253422112ae43e608661ba94ea6b345694 d58c48df8c6ca819f5e6e6f1740bb114f24f024f
 155138 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155142 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c2db6a86a25508725db8018c62dd39f92ae6ee79 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb d9c812dda519a1a73e8370e1b81ddf46eb22ed16 19e0bbb4eba8d781b972448ec01ede6ca7fa22cb
 155145 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 20da7ca42a33d3ef767ce4129f11496af7f67c9f d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933
 155075 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155149 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cdfc7ed34fd1ddfc9cb1dfbc339f940950638f8d d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933
 155154 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ed0dce7d5466b6b22ff9e0923f3a3e885540bbfc d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 0446e3db13671032b05d19f6117d902f5c5c76fa
 155163 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 320e7a7369245d4304ac822e67740a7ea147e7a2
 155172 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 253a1e64d30e09ae089a060e364a01b4d442d550
 155178 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 8e25d522a3fc236c0c7a02541e8071afa031386b
 155181 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155186 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
 155191 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155195 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
 155201 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155205 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
Searching for interesting versions
 Result found: flight 151082 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c, results HASH(0x55d81ddbf088) HASH(0x55d81eb1ca80) HASH(0x55d81eb091c0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 253a1e64d30e09ae089a060e364a01b4d442d550, results HASH(0x55d81eae5778) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f73401907\
 6ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 320e7a7369245d4304ac822e67740a7ea147e7a2, results HASH(0x55d81eaf5300) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ed0dce7d5466b6b22ff9e0923f3a3e885540bbfc d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 0446e3db13671032b05d19f6117d902f5c5c76fa, results HASH(0x55d81eaf03c8) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cdfc7ed34fd1ddfc9cb1dfbc339f940950638f8d d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933, results HASH(0x55d81eae7780) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 20da7ca42a33d3ef767c\
 e4129f11496af7f67c9f d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933, results HASH(0x55d81eae6078) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c2db6a86a25508725db8018c62dd39f92ae6ee79 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb d9c812dda519a1a73e8370e1b81ddf46eb2\
 2ed16 19e0bbb4eba8d781b972448ec01ede6ca7fa22cb, results HASH(0x55d81eaea6b0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3ee4f6cb360a877d171f2f9bb76b0d46d2cfa985 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 2e3de6253422112ae43e608661ba94ea6b345694 d58c48df8c6ca819f5e6e6f1740bb114f24f024f, results HASH(0x55d81eaf6a08) HASH(0x55d81eafd948) Result found: flight 154663 (fail), for \
 basis failure (at ancestor ~874)
 Repro found: flight 155077 (pass), for basis pass
 Repro found: flight 155138 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
No revisions left to test, checking graph state.
 Result found: flight 155181 (pass), for last pass
 Result found: flight 155186 (fail), for first failure
 Repro found: flight 155191 (pass), for last pass
 Repro found: flight 155195 (fail), for first failure
 Repro found: flight 155201 (pass), for last pass
 Repro found: flight 155205 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Bug not present: 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155205/


  commit 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:10:32 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 97 colors found
Revision graph left in /home/logs/results/bisect/xen-4.12-testing/test-amd64-i386-xl-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155205: tolerable ALL FAIL

flight 155205 xen-4.12-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155205/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-xsm       12 guest-start             fail baseline untested


jobs:
 test-amd64-i386-xl-xsm                                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:49:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:49:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1117.3667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuGu-0003ql-Jq; Thu, 01 Oct 2020 08:49:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1117.3667; Thu, 01 Oct 2020 08:49:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuGu-0003qe-G8; Thu, 01 Oct 2020 08:49:04 +0000
Received: by outflank-mailman (input) for mailman id 1117;
 Thu, 01 Oct 2020 08:49:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bOcq=DI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kNuGt-0003qZ-ED
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:49:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb6f5109-3802-4d8b-9998-06c7722580c8;
 Thu, 01 Oct 2020 08:49:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9DCD0B112;
 Thu,  1 Oct 2020 08:49:01 +0000 (UTC)
X-Inumbo-ID: bb6f5109-3802-4d8b-9998-06c7722580c8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601542141;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v+MoeYfLNvInenjJr0Ucx2IezWcBplPP6iKHM4JqmFo=;
	b=KQrtL8cA7Z7RMntCRqwoJETZNHAydfERJfiIfXpFpAiI0Gu2Q5H2T2V8L2WTpyHs2iESX4
	Srguj0bhs/VgXQPpkZsKfV4/LngPNN66SEpHJulg8Baj82FNwx3rTj5fJ5Z/X1NYcj8VcD
	GHGs/QMgs+rDLCyk7PLCN+w4Sm2MwhY=
Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
To: Julien Grall <julien@xen.org>
Cc: Oleksandr <olekstysh@gmail.com>, xen-devel@lists.xenproject.org,
 paul@xen.org, 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Jun Nakajima' <jun.nakajima@intel.com>, 'Kevin Tian'
 <kevin.tian@intel.com>, 'Tim Deegan' <tim@xen.org>,
 'Julien Grall' <julien.grall@arm.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
 <1599769330-17656-3-git-send-email-olekstysh@gmail.com>
 <3997a705-ccb1-4b8f-41ca-c5507360c759@xen.org>
 <000201d69314$97bd8fa0$c738aee0$@xen.org>
 <c9131bce-f028-2824-9ffc-b4db08017569@gmail.com>
 <2cbe7efd-f356-0f1b-0bb1-bfb2243f180c@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0f768df9-6e28-a0ed-92e7-b17303c24996@suse.com>
Date: Thu, 1 Oct 2020 10:49:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <2cbe7efd-f356-0f1b-0bb1-bfb2243f180c@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 19:47, Julien Grall wrote:
> Regarding the fix itself, I am not sure what sort of synchronization we 
> can do. Are you suggesting to wait for the I/O to complete? If so, how 
> do we handle the case the IOREQ server died?

In simple cases retrying the entire request may be an option. However,
if the server died after some parts of a multi-part operation were
done already, I guess the resulting loss of state is bad enough to
warrant crashing the guest. This shouldn't be much different from e.g.
a device disappearing from a bare metal system - any partial I/O done
to/from it will leave the machine in an unpredictable state, which may
be too difficult to recover from without rebooting. (Of course,
staying with this analogy, it may also be okay to simply consider
the operation "complete", leaving it to the guest to recover. The
main issue on the hypervisor side then would be to ensure we don't
expose any uninitialized [due to not having got written to] data to
the guest.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1119.3678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuIn-0004dW-Vz; Thu, 01 Oct 2020 08:51:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1119.3678; Thu, 01 Oct 2020 08:51:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuIn-0004dP-Sz; Thu, 01 Oct 2020 08:51:01 +0000
Received: by outflank-mailman (input) for mailman id 1119;
 Thu, 01 Oct 2020 08:51:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=937a=DI=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kNuIl-0004dJ-V6
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:51:00 +0000
Received: from mail-wm1-x333.google.com (unknown [2a00:1450:4864:20::333])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 749ab561-18b6-4499-8285-8b8fd128a0fe;
 Thu, 01 Oct 2020 08:50:59 +0000 (UTC)
Received: by mail-wm1-x333.google.com with SMTP id t17so2128532wmi.4
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 01:50:59 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-224.amazon.com. [54.240.197.224])
 by smtp.gmail.com with ESMTPSA id f12sm7179254wmf.26.2020.10.01.01.50.56
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 01 Oct 2020 01:50:57 -0700 (PDT)
X-Inumbo-ID: 749ab561-18b6-4499-8285-8b8fd128a0fe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=AkzjiZMLafZ+x+Gnujvlxg9Ab8orm1L/PW9GureQJlk=;
        b=A1wv8b/ygyYpy8m5iEo++faNrqMj1tDIbMHjiOJ7JTiB/iwfFizWzFVGEk9vjmaAR+
         1FNQIYG9zhkSv2VAmUTF8KhDZBFBG225pTvn9f/nazzbV5oS1WM0IQZhefKewesqlL05
         tg7A06TGiYFbhCUapp0GtMtisGsnDc/5ps+KieZ/jm0wriqF4H5kimCf7s+qgz/9f3Wq
         wetXF/U6Es+y+4PTZFi/X8zNZ3Ghe/VjFsD8hj4c5/Qq4QE3WZHyD/HgTqckY4dPNb8C
         gEcM5soUTUKVM9AuAsF3O2Y/hEcM1q1iat/XUgkDRZvojE3ht7n28vmILb8nHlosDsO5
         5MOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=AkzjiZMLafZ+x+Gnujvlxg9Ab8orm1L/PW9GureQJlk=;
        b=WvFPTZUvxPgkgpaRJclu2pA0eMrf+eJUGid8PghbD8aJ5ytA6TMHyWoRrn+rYS1yFZ
         WClKqoJsAgcxQmI/Hk01IliXBrE+DVXeOw9yrjrnD4zLcMAK4D58qTM6HThCKIcIkYnu
         OjqcyAs19A9uXL+esgrLnv+nqotROPk/Q091ndFuhpkPvOB8srvm5iMQrGoAl/dsnujf
         QKgBBUi561jqZPjKe16/CtujHv2QsbOEqTwAb8qrx2IjH3YQucgyYkdOa2HASMqPMPmb
         hAflDrbbviLHsgr9JG+uA9N2kneWUJCVJOMWVo3QDMPBtQ1VB1dGdlHULG2cMuj7xutA
         d/zA==
X-Gm-Message-State: AOAM532avC7YqMQTiopuxXOD2s5GHgoW3CsZbscp3McuQraeG4AK2N8p
	oiy8UzuvbQ2gSoasv3HNP/w=
X-Google-Smtp-Source: ABdhPJzqUELKOVgcuxd7fs1ucASFfmPoe3qv6taL24vhR4w2IqTD2F5DOrbvdP0JpukC2RyIs5e4ug==
X-Received: by 2002:a1c:4909:: with SMTP id w9mr5996628wma.133.1601542258243;
        Thu, 01 Oct 2020 01:50:58 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>
Cc: "'Oleksandr'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>,
	"'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <ian.jackson@eu.citrix.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Jun Nakajima'" <jun.nakajima@intel.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	"'Tim Deegan'" <tim@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com> <1599769330-17656-3-git-send-email-olekstysh@gmail.com> <3997a705-ccb1-4b8f-41ca-c5507360c759@xen.org> <000201d69314$97bd8fa0$c738aee0$@xen.org> <c9131bce-f028-2824-9ffc-b4db08017569@gmail.com> <2cbe7efd-f356-0f1b-0bb1-bfb2243f180c@xen.org> <0f768df9-6e28-a0ed-92e7-b17303c24996@suse.com>
In-Reply-To: <0f768df9-6e28-a0ed-92e7-b17303c24996@suse.com>
Subject: RE: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
Date: Thu, 1 Oct 2020 09:50:55 +0100
Message-ID: <008d01d697cf$fac24b30$f046e190$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIQ4beZ3sUmYihKXwfoVgy4BRIJGwHfeJxHAedMvHMBMIgdOgK34vRDAgTQpPsBrefRNqiyzR4g

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 01 October 2020 09:49
> To: Julien Grall <julien@xen.org>
> Cc: Oleksandr <olekstysh@gmail.com>; xen-devel@lists.xenproject.org; paul@xen.org; 'Oleksandr
> Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George
> Dunlap' <george.dunlap@citrix.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>; 'Stefano Stabellini'
> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Roger Pau Monné' <roger.pau@citrix.com>; 'Jun
> Nakajima' <jun.nakajima@intel.com>; 'Kevin Tian' <kevin.tian@intel.com>; 'Tim Deegan' <tim@xen.org>;
> 'Julien Grall' <julien.grall@arm.com>
> Subject: Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common
>
> On 30.09.2020 19:47, Julien Grall wrote:
> > Regarding the fix itself, I am not sure what sort of synchronization we
> > can do. Are you suggesting to wait for the I/O to complete? If so, how
> > do we handle the case the IOREQ server died?
>
> In simple cases retrying the entire request may be an option. However,
> if the server died after some parts of a multi-part operation were
> done already, I guess the resulting loss of state is bad enough to
> warrant crashing the guest. This shouldn't be much different from e.g.
> a device disappearing from a bare metal system - any partial I/O done
> to/from it will leave the machine in an unpredictable state, which may
> be too difficult to recover from without rebooting. (Of course,
> staying with this analogy, it may also be okay to simply consider
> the operation "complete", leaving it to the guest to recover. The
> main issue on the hypervisor side then would be to ensure we don't
> expose any uninitialized [due to not having got written to] data to
> the guest.)
>

I'll try to take a look today and come up with a patch.

  Paul



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:56:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:56:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1123.3690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuNI-0004pl-Jq; Thu, 01 Oct 2020 08:55:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1123.3690; Thu, 01 Oct 2020 08:55:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuNI-0004pe-Gi; Thu, 01 Oct 2020 08:55:40 +0000
Received: by outflank-mailman (input) for mailman id 1123;
 Thu, 01 Oct 2020 08:55:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNuNH-0004pZ-9w
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:55:39 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f521877-3378-482c-acd8-d32c9151a533;
 Thu, 01 Oct 2020 08:55:37 +0000 (UTC)
X-Inumbo-ID: 6f521877-3378-482c-acd8-d32c9151a533
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601542537;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=fZ/6nqIlBBmYIEPpFga4/sXEDw1y/kPBr/n5agM7Ykk=;
  b=IDPFRsEwv+J8TofM+1ydJB/+U8IcxQ0YnX/Qklbqp0n5zbEfj87Rofg1
   yc3gX3heNqhh9bJZURsQxow++dg80GTduvngOWo4SdFQNfHW+fmwbFxZ0
   deUY8Ek6tjhJNLAkr0WhTAJkrZKpnCsjjxQ/zH12DnyZuhxNnDi/Iokv3
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: e86dpylu3t/sVz9OJy1J0l5XeXK90IxnIms7ObHsXNdgR6p9tTrAQMXIboMTdx5R9otPraU40X
 A/a1y7jv6dhk1DRz09jNWsBSknrvm24RXsTsD91Prie9iOjcfEhSeqhZaDpoCb5LC8dXgMlhZA
 N+LNIgT6JSIrrUvUJepAlpzWFAnKSwfzscE39MOwaSKgsrWwg2QRpEVfsQjmC+DqRQXHO2/MAG
 AkUJglmqjHj2ucL9BpuqsVCOnOB0TramYYajzLGDytAQd07zJulogTsl89eI2vAFdG84BTRhbD
 KBk=
X-SBRS: None
X-MesageID: 28382312
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28382312"
Date: Thu, 1 Oct 2020 10:55:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Christopher Clark <christopher.w.clark@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Rich Persaud
	<persaur@gmail.com>, Daniel Smith <dpsmith@apertussolutions.com>
Subject: Re: VirtIO & Argo: a Linux VirtIO transport driver on Xen
Message-ID: <20201001085500.GX19254@Air-de-Roger>
References: <CACMJ4GaWcF74zE5qt31MDvcX1mx1HSW7eaOXpfpWJ2KzQZOg=Q@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CACMJ4GaWcF74zE5qt31MDvcX1mx1HSW7eaOXpfpWJ2KzQZOg=Q@mail.gmail.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 09:03:03PM -0700, Christopher Clark wrote:
> Hello
> 
> Following up on a topic introduced in last month's community call,
> here are some notes on combining existing Linux guest virtio drivers
> with Argo inter-VM communication on Xen.  If feasible, this would
> combine the compatibility of tested Linux drivers with the mandatory
> access control properties of Argo, which could help meet functional
> safety and security requirements for Xen on Arm and x86.  This
> development work is not resourced, but the initial investigation has
> been encouraging.  We are sharing it here for comment, aiming to work
> with those in the Xen and wider Linux communities who may have similar
> requirements.
> 
> Christopher
> - 30th September 2020
> 
> ---
> This document describes a proposal for development of a new Linux device
> driver to introduce Hypervisor-Mediated data eXchange (HMX) into the
> data transport of the popular VirtIO suite of Linux virtual device
> drivers, by using Argo with Xen. This will provide a way to use VirtIO
> device drivers within Xen guest VMs with strong isolation properties.
> 
> This work has been developed by Christopher Clark, Daniel Smith and Rich
> Persaud, with Eric Chanudet and Nick Krasnoff.
> Christopher is the primary author of this version of this document.
> 
> ----
> Contents:
> 
> = Context: Introduction to VirtIO
> == VirtIO Architecture Overview
> === VirtIO front-end driver classes
> === VirtIO transport drivers
> = VirtIO with Argo transport
> == Using VirtIO with the new Argo transport driver
> === Host platform software
> ==== QEMU
> ==== Linux Argo driver
> ==== Toolstack
> === Functionality
> === Mechanisms
> === From last discussion
> = References
> 
> ----
> = Context: Introduction to VirtIO
> 
> VirtIO is a virtual device driver standard developed originally for the
> Linux kernel, drawing upon the lessons learned during the development of
> paravirtualized device drivers for Xen, KVM and other hypervisors. It
> aimed to become a “de-facto standard for virtual I/O devices”, and to
> some extent has succeeded in doing so. VirtIO is now widely implemented
> in both software and hardware; it is commonly the first choice for
> virtual driver implementation in new virtualization technologies, and
> the specification is now maintained under governance of the OASIS open
> standards organization.
> 
> VirtIO’s system architecture abstracts device-specific and
> device-class-specific interfaces and functionality from the transport
> mechanisms that move data and issue notifications within the kernel and
> across virtual machine boundaries. It is attractive to developers
> seeking to implement new drivers for a virtual device because VirtIO
> provides documented, specified interfaces with a well-designed, efficient
> and maintained common core implementation that can significantly reduce
> the amount of work required to develop a new virtual device driver.
> 
> VirtIO follows the Xen PV driver model of split-device drivers, where a
> front-end device driver runs within the guest virtual machine to provide
> the device abstraction to the guest kernel, and a back-end driver runs
> outside the VM, in platform-provided software - eg. within a QEMU device
> emulator - to communicate with the front-end driver and provide mediated
> access to physical device resources.
> 
> A critical property of the current common VirtIO implementations is that
> their use of shared memory for data transport prevents enforcement of
> strong isolation between the front-end and back-end virtual machines,
> since the back-end VirtIO device driver is required to be able to obtain
> direct access to the memory owned by the virtual machine running the
> front-end VirtIO device driver. ie. The VM hosting the back-end driver
> has significant privilege over any VM running a front-end driver.
> 
> Xen’s PV drivers use the grant-table mechanism to confine shared memory
> access to specific memory pages, and permission to access those pages is
> explicitly granted by the driver in the VM that owns the memory. Argo
> goes further and achieves stronger isolation than this, since it
> requires no memory sharing between communicating virtual machines.

Since there's no memory sharing, all data must be copied between
buffers (by the hypervisor I assume). Will this result in a noticeable
performance penalty?

OTOH no memory sharing means no need to map foreign memory on the
backend, which is costly.

> In contrast to Xen’s current driver transport options, the current
> implementations of VirtIO transports pass memory addresses directly
> across the VM boundary, under the assumption of shared memory access,
> and thereby require the back-end to have sufficient privilege to
> directly access any memory that the front-end driver refers to. This has
> presented a challenge for the suitability of using VirtIO drivers for
> Xen deployments where isolation is a requirement. Fortunately, a path
> exists for integrating the Argo transport into VirtIO which can address
> this and enable use of the existing body of VirtIO device drivers while
> maintaining isolation and enforcing mandatory access control:
> consequently, this system architecture is significantly differentiated
> from other options for virtual devices.
> 
> == VirtIO Architecture Overview
> 
> In addition to the front-end / back-end split device driver model, there
> are further standard elements of VirtIO system architecture.
> 
> For reference, VirtIO is described in detail in the “VirtIO 1.1
> specification” OASIS standards document. [1]
> 
> The front-end device driver architecture imposes tighter constraints on
> implementation direction for this project, since the front-end side is
> what is already implemented in the wide body of existing VirtIO device
> drivers that we are aiming to enable.
> 
> The back-end software is implemented in the platform-provided software -
> ie.  the hypervisor, toolstack, a platform-provided VM or a device
> emulator, etc. - where we have more flexibility in implementation
> options, and the interface is determined by both the host virtualization
> platform and the new transport driver that we are intending to create.
> 
> === VirtIO front-end driver classes
> 
> There are multiple classes of VirtIO device driver within the Linux
> kernel; these include the general class of front-end VirtIO device
> drivers, which provide function-specific logic to implement virtual
> devices - eg. a virtual block device driver for storage - and the
> _transport_ VirtIO device drivers, which are responsible for device
> discovery with the platform and provision of data transport across the
> VM boundary between the front-end drivers and the corresponding remote
> back-end driver running outside the virtual machine.
> 
> === VirtIO transport drivers
> 
> There are several implementations of VirtIO transport device drivers in
> Linux, each implementing a common interface within the kernel, and they
> are designed to be interchangeable and compatible with the VirtIO
> front-end drivers: so the same front-end driver can use different
> transports on different systems. Transports can coexist: different
> virtual devices can use different transports within the same virtual
> machine at the same time.

Does this transport layer also define how the device configuration is
exposed?

> 
> = VirtIO with Argo transport
> 
> Enabling VirtIO to use the Argo interdomain communication mechanism for
> data transport across the VM boundary will address critical requirements:
> 
> * Preserve strong isolation between the two ends of the split device driver
> 
>     * ie. remove the need for any shared memory between domains or any
>       privilege to map the memory belonging to the other domain
> 
> * Enable enforcement of granular mandatory access control policy over
>   the communicating endpoints
> 
>     * ie. Use Xen’s XSM/Flask existing control over Argo communication,
>       and leverage any new Argo MAC capabilities as they are introduced
>       to govern VirtIO devices
> 
> The proposal is to implement a new VirtIO transport driver for Linux
> that utilizes Argo. It will be used within guest virtual machines, and
> be source-compatible with the existing VirtIO front-end device drivers
> in Linux.
> 
> It will be paired with a corresponding new VirtIO-Argo back-end to run
> within the QEMU device emulator, in the same fashion as the existing
> VirtIO transport back-ends, and the back-end will use a non-VirtIO Linux
> driver to access and utilize Argo.

IMO it would be better if we could implement the backends in a more
lightweight tool, something like kvmtool.

> Open Source VirtIO drivers for Windows are available, and enable Windows
> guest VMs to run with virtual devices provided by the VirtIO backends in
> QEMU. The Windows VirtIO device drivers have the same transport
> abstraction and separate driver structure, so an Argo transport driver
> can also be developed for Windows for source-compatibility with Windows
> VirtIO device drivers.
> 
> == Using VirtIO with the new Argo transport driver
> 
> VirtIO device drivers are included in the mainline Linux kernel and
> enabled in most modern Linux distributions. Adding the new Linux-Argo
> guest driver to the upstream Linux kernel will enable seamless
> deployment of modern Linux guest VMs on VirtIO-Argo hypervisor
> platforms.
> 
> === Host platform software
> 
> ==== QEMU
> 
> The QEMU device emulator implements the VirtIO transport that the
> front-end will connect to. Current QEMU 5.0 implements both the
> virtio-pci and virtio-mmio common transports.

Oh, I think that answers my question from above, and you would indeed
expose the device configuration using an Argo specific virtio device
bus.

Would it be possible to expose the device configuration using
virtio-{pci,mmio} but do the data transfer using Argo?

> 
> ==== Linux Argo driver
> 
> For QEMU to be able to use Argo, it will need an Argo Linux kernel
> device driver, with similar functionality to the existing Argo Linux
> driver.
> 
> ==== Toolstack
> 
> The toolstack of the hypervisor is responsible for configuring and
> establishing the back-end devices according to the virtual machine
> configuration. It will need to be aware of the VirtIO-Argo transport and
> initialize the back-ends for each VM with a suitable configuration for
> it.
> 
> Alternatively, in systems that do not run with a toolstack, the DomB
> launch domain (when available) can perform any necessary initialization.
> 
> === Functionality
> 
> Adding Argo as a transport for VirtIO will retain Argo’s MAC policy
> checks on all data movement, via Xen's XSM/Flask, while allowing use of
> the VirtIO virtual device drivers and device implementations.
> 
> With the VirtIO virtual device drivers using the VirtIO-Argo transport
> driver, the existing Xen PV drivers, which use the grant tables and
> event channels, are not required, and their substitution enables removal
> of shared memory from the data path of the device drivers in use and
> makes the virtual device driver data path HMX-compliant.
> 
> In addition, as VirtIO drivers are implemented for new virtual device
> classes in Linux, these should transparently gain Mandatory Access
> Control via the existing virtio-argo transport driver, potentially
> without further work required - although note that for some cases
> (eg. graphics) optimizing performance characteristics may require
> additional effort.
> 
> === Mechanisms
> 
> VirtIO transport drivers are responsible for virtual device enumeration
> and for triggering driver initialization for those devices. We are
> proposing to use ACPI tables to surface the data for the new driver to
> parse for this purpose. Device tree has been raised as an option for
> this on Arm and it will be evaluated.
> 
> The VirtIO device drivers will retain use of the virtqueues, but the
> descriptors passed between domains by the new transport driver will not
> contain guest physical addresses; instead they will reference data that
> has been exchanged via Argo. Each transmission via XEN_ARGO_OP_sendv is
> subject to MAC checks by XSM.
> 
> === From last discussion
> 
> * Design of how virtual devices are surfaced to the guest VM for
>   enumeration by the VirtIO-Argo transport driver
> 
>     * Current plan, from the initial x86 development focus, is to
>       populate ACPI tables

Do we plan to introduce a new ACPI table for virtio-argo devices? Or
are there plans to expand an existing table?

I'm asking because all this would need to be discussed with the UEFI
Forum in order to get whatever is needed into the spec (IIRC a
separate table is easier because the specification doesn't need to be
part of the official ACPI spec).

>     * Interest in using Device Tree, for static configurations on Arm,
>       was raised on last month's Xen Community Call.
>         * is being considered with development of DomB in progress
>     * Does not need to be dom0 that populates this for the guest;
>       just some domain with suitable permissions to do so

Hm, while it's true that ACPI tables don't need to be populated by
dom0 itself, it must be a domain that has write access to the guest
memory, so that it can copy the created ACPI tables into the guest
physmap.

I think this is all fine, but it seems like a very non-trivial amount of
work that depends on other entities (the UEFI Forum for ACPI changes
and OASIS for the virtio ones). It also worries me a bit that we are
jumping from not having virtio support at all on Xen to introducing
our own transport layer, but I guess that's partly due to the existing
virtio transports not being suitable for Xen use-cases.

Regarding the specific usage of Argo itself, it's my understanding
that a normal frontend would look like:

virtio front -> virtio argo transport -> Argo hypervisor specific driver

Would it be possible to implement an Argo interface that's hypervisor
agnostic? I think that way you would have a much easier time trying
to get something like this accepted upstream, if the underlying Argo
interface could be implemented by any hypervisor _without_ having to
add a new Argo hypervisor specific driver.

On x86 I guess you could implement the Argo interface using MSRs,
which would then allow any hypervisor to provide the same interface
and thus there would be no component tied to Xen or any specific
hypervisor.

Finally, has this been raised with or commented on by OASIS? I'm at
least not familiar at all with virtio, and I bet that applies to most
of the Xen community, so I think their opinion is likely even more
important than the Xen community's.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 08:58:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 08:58:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1127.3705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuQ3-00050J-3i; Thu, 01 Oct 2020 08:58:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1127.3705; Thu, 01 Oct 2020 08:58:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuQ3-00050C-0q; Thu, 01 Oct 2020 08:58:31 +0000
Received: by outflank-mailman (input) for mailman id 1127;
 Thu, 01 Oct 2020 08:58:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNuQ1-0004za-Qu
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 08:58:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb512de8-dead-4962-a726-0aebe6dc9c71;
 Thu, 01 Oct 2020 08:58:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNuPv-0000rM-01; Thu, 01 Oct 2020 08:58:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNuPu-0007lI-Ok; Thu, 01 Oct 2020 08:58:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNuPu-0008Kn-OG; Thu, 01 Oct 2020 08:58:22 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bzfKTJ1sthiYPpfHA4kiYhLGun+MCNlqrn3cFsYh/gU=; b=Ufj9Y9ogur8yh3HHaMmTSmx/07
	9hldk4B65mvZSB8BvVxEV16wnl2CGbwWHKBTlWHDyTHRgbcJhofWreMdi2/oBp5qLl0FYodPn6CZJ
	R+/6C4tLcaHKNpZBwmOjmR+sPtrzoY0LyrZkImQrB3rz2vDTfWcLY0b0E2xSpiJH5iyA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155200-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155200: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=11852c7bb070a18c3708b4c001772a23e7d4fc27
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 08:58:22 +0000

flight 155200 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155200/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  11852c7bb070a18c3708b4c001772a23e7d4fc27
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    1 days
Testing same since   155144  2020-09-30 16:01:24 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom Xenstore should set the maximum number of
    grants needed via a call of xengnttab_set_max_grants(), as otherwise
    the number of domains which can be supported will be 128 only (the
    default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory is
    limited to 8TB due to the 32bit value. Adjust the code to use 64bit
    values as input. All callers already use some form of 64bit as input,
    so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU as recently switch its build system to use meson and the
    ./configure step with meson is more restrictive that the step used to
    be, most installation path wants to be within prefix, otherwise we
    have this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    In order to workaround the limitation, we will set prefix to the same
    one as for the rest of Xen installation, and set all the other paths.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 09:06:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 09:06:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1132.3716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuXS-0005zF-29; Thu, 01 Oct 2020 09:06:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1132.3716; Thu, 01 Oct 2020 09:06:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuXR-0005z8-VY; Thu, 01 Oct 2020 09:06:09 +0000
Received: by outflank-mailman (input) for mailman id 1132;
 Thu, 01 Oct 2020 09:06:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ighY=DI=epam.com=prvs=854382b084=anastasiia_lukianenko@srs-us1.protection.inumbo.net>)
 id 1kNuXQ-0005z1-E7
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 09:06:08 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c53b9c4-648c-4e1a-8b6a-7eb57b4ec612;
 Thu, 01 Oct 2020 09:06:07 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 091960d3022678; Thu, 1 Oct 2020 09:06:05 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173])
 by mx0b-0039f301.pphosted.com with ESMTP id 33w0f19b8n-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 01 Oct 2020 09:06:05 +0000
Received: from AM7PR03MB6531.eurprd03.prod.outlook.com (2603:10a6:20b:1c2::6)
 by AM6PR0302MB3445.eurprd03.prod.outlook.com (2603:10a6:209:1b::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Thu, 1 Oct
 2020 09:06:02 +0000
Received: from AM7PR03MB6531.eurprd03.prod.outlook.com
 ([fe80::9439:23f1:1063:ad8]) by AM7PR03MB6531.eurprd03.prod.outlook.com
 ([fe80::9439:23f1:1063:ad8%5]) with mapi id 15.20.3433.035; Thu, 1 Oct 2020
 09:06:02 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z8kFqoiEMbrQZ8knMQOSDTKVxcA/kj0WUiJQCSKLhmJR8WfD7sx4e4phkqEhadmxj9CNfWGPvt1JM0Jidq1bpbbRHrJCUWo0GbO/hDCPG0YZbQrbR8tbyJNr0GYdWB5u7u+FYrMHfFGRY/I51VW5hJ0PVDqQib4u4I9jvdrVW+bM4RVd5PT0W/2xdFEhwG8aJNDQZzYELjx4AUvi7mjMjRNzNnyfIi4ps8Mw+FAwIhcnGtyJVu2zW9V3772dMAGgjRcc8dysKaszSFo+b8BhBm3rc2myNDnmYGsvTGoTlaWwyt3reyLB7aLv5KP2nDJPDknr7WdH+gGTc/rsYXEpJQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4doJ2qYtM5XtWt+iExX4kzM70iPQkK6QRgn3W6ui5bA=;
 b=D2NxG27ns2tvzZ6OpYyzQHGwh6C8dkBuXhSEEOXIlKFCwfo3+G10nMC/KsprtavSU9SDaaKLqYmhGovMFdBZtE6wJlukz5OTZkFj+ujB6n1Bvn5wnfXe22aYP/aaAqB7b0JMW8ucYL/sXp2u721G1uupogoVASqJTE3nQYnZdm7Jv93eHTtPO1nQur/l785WawUt+9mAfNwDAYBCZhsofAkRJQsBwmYrk8flWyyDTc/644XYlY+H4lds8gEFm9hRXEalviRNMeUJ+UUuHN238PwiW6J55TwdRDzspz6ac3hTCJgkQxMSiPNVInd1Kow9CsbdgyI4AR8D2m4nO+f+dQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4doJ2qYtM5XtWt+iExX4kzM70iPQkK6QRgn3W6ui5bA=;
 b=o9bQTVvP13/ZB88/5Mt4y9eg4YWGuEm8UeHYqtKqTj5dIj4ydEFoYjTT0aeoNoei60vJodW8GkhvDhWFSuSu8FFMyWZV922bzml/z2VuwLiO60BT7COjLDNEXuJyBWLTZiynu6iKB52RCVqulzEBqcB5HbNbC1slIMfJmtxG3wdqsXbnUqEwyG9kdu6wHW5t86M/EKG7haek4EoA66cwLVRYqsthQ6SXDHLZNnxoHXjD0P0Junz+i73XeFvyGQDNileTcAMzDILQiGmfJMU5q1CnDG5P1MHRQ6Qd2RIv4qHgEYPoyubBbyjygNCMD1gMWDQYQt6AkhQvDlbecddC5w==
From: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
To: "jbeulich@suse.com" <jbeulich@suse.com>,
        "George.Dunlap@citrix.com"
	<George.Dunlap@citrix.com>
CC: Artem Mygaiev <Artem_Mygaiev@epam.com>,
        "committers@xenproject.org"
	<committers@xenproject.org>,
        "julien@xen.org" <julien@xen.org>,
        "vicooodin@gmail.com" <vicooodin@gmail.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>,
        "viktor.mitin.19@gmail.com"
	<viktor.mitin.19@gmail.com>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Xen Coding style and clang-format
Thread-Topic: Xen Coding style and clang-format
Thread-Index: AQHWlwq4nKYEhMN38U+xmvwRsutq+amA8joAgAAHUgCAAXyKAA==
Date: Thu, 1 Oct 2020 09:06:02 +0000
Message-ID: <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
	 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
	 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
In-Reply-To: <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.213.80]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 82db19e4-f993-4154-825f-08d865e938e7
x-ms-traffictypediagnostic: AM6PR0302MB3445:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM6PR0302MB34458431BA9B79EF93DB9433F2300@AM6PR0302MB3445.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5236;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 8sq3aTegJqMFZU/f7zQig+7Rhq9GPzi6KfPO4TLrILwwuThP8WoaZidnGStRUY+RJJR68/FTny3VByok7hVulRPyl4rIEl/wrLlkH5ANTIlVSltpL4VR1R8NEldDq+789YpNbnFQY48jrqjTVzfP+K5lNE7iUUI4NTa7755zrfGD9tBvxPKYVUCAlT8i/aWsIvAXdZs41UvdfdPt5WWMtgK1r6QXWHTJhrwqjFP4HeNaJvMm3UKmatytincuOE7Rc/k6/nmEWUqB2UzyPw6BDJ1NjzF+NeXJCT2DJS62lWDaFMmmOsje0Pw0r56iJ6LV6PS/WqPpIMjqFg045NWPAQ==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM7PR03MB6531.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(136003)(376002)(346002)(366004)(54906003)(2616005)(91956017)(66946007)(4326008)(71200400001)(64756008)(316002)(76116006)(110136005)(6486002)(6506007)(53546011)(8936002)(86362001)(36756003)(5660300002)(66476007)(66446008)(66556008)(107886003)(8676002)(6512007)(186003)(55236004)(2906002)(478600001)(26005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 UcCEUmGzWzNYd0YxhsHHiARhr49ynTXtseGH796RzcCWFCNfd6+ufiCWRZ/2dzyZqaGcOWKe+J1QAIKo3RCQSUoPHQ6YjN59ruyBLW9HJ/1ouoNagyGMpOf26Tb0H4wNkvbnfk81nLrzlcTYajdlTM5JiaOdElhmw9Xvo3eJQgKczwOsCR02f60YhSw7shMuTT76OpU1NRwXrZahb6DBatBcUWGWr7h8aGIhTrqa0v6zrqXB51DF6gAjDGvmfzW4JoagCNLTYWwcYnyQ+LX9BshCixdZdK7YVqXeOUn3K8h28fUfcoMOWxjFpiMRrOgUhEdqLJKUWsPA47eXbq1tQUdo4ZKUxqvDXqTS6SqsEEY9GlOtLGa7YxPcxDuiLJha6HxUoW7r7dhGC+lXgzpDXkdJvyjUQJsgz5xGixvIjXP0x6guCo+8QJ4ocvsvtIajO5z1P8SWA3dJdWUvxN6klMIotDFLP7RbdmFLnWLPkP46lt7Jl6CH2MvuJA4a/HUOhqe+ucfBuYh4JVfKT1TICmMFAQIwSbNsb+2wH6wnrV0rt5137rSwirfqVF/iNFgmW4qmdQul8adOgMvGOPIWUKjJQFvdZz8xtYuc+h8j1W0ekR02A+YDXb+UHPATcPOs6BxJOD/MTuKlv6/m4t/WRg==
Content-Type: text/plain; charset="utf-8"
Content-ID: <B31A096870AB204CAC770E10381A24A9@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM7PR03MB6531.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 82db19e4-f993-4154-825f-08d865e938e7
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Oct 2020 09:06:02.8102
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Ht5TLRfMAWXfD3sNiPXwe4dzmuzjzMb30r9wDcFjqHF8KQKI21B3+slAmnP8AOVC/vjhfzZXEM7wxs/hc3Xk9G80QRRAWCyAE3kGEZ6KuRc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR0302MB3445
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687
 definitions=2020-10-01_02:2020-10-01,2020-10-01 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 impostorscore=0 mlxscore=0
 phishscore=0 spamscore=0 suspectscore=0 mlxlogscore=999 bulkscore=0
 malwarescore=0 clxscore=1011 lowpriorityscore=0 adultscore=0
 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2010010080

Hi,

On Wed, 2020-09-30 at 10:24 +0000, George Dunlap wrote:
> > On Sep 30, 2020, at 10:57 AM, Jan Beulich <jbeulich@suse.com>
> > wrote:
> > 
> > On 30.09.2020 11:18, Anastasiia Lukianenko wrote:
> > > I would like to know your opinion on the following coding style
> > > cases.
> > > Which option do you think is correct?
> > > 1) Function prototype when the string length is longer than the
> > > allowed
> > > one
> > > -static int __init
> > > -acpi_parse_gic_cpu_interface(struct acpi_subtable_header
> > > *header,
> > > -                             const unsigned long end)
> > > +static int __init acpi_parse_gic_cpu_interface(
> > > +    struct acpi_subtable_header *header, const unsigned long
> > > end)
> > 
> > Both variants are deemed valid style, I think (same also goes for
> > function calls with this same problem). In fact you mix two
> > different style aspects together (placement of parameter
> > declarations and placement of return type etc) - for each
> > individually both forms are deemed acceptable, I think.
> 
> If we're going to have a tool go through and report (correct?) all
> these coding style things, it's an opportunity to think if we want to
> add new coding style requirements (or change existing requirements).
> 

I am ready to discuss new requirements and implement them in rules of
the Xen Coding style checker.

>  -George

Regards,
Anastasiia
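The two prototype-wrapping styles debated in this thread can both be written as compilable C. The sketch below uses a stand-in struct rather than Xen's real acpi_subtable_header (whose actual layout lives in the ACPI headers and is irrelevant to the formatting question), purely to show that the two layouts are presentation-only variants of the same declaration:

```c
#include <assert.h>

/* Stand-in for Xen's struct acpi_subtable_header; the real layout is
 * in the ACPI headers and does not matter for the formatting point. */
struct acpi_subtable_header {
    unsigned char type;
    unsigned char length;
};

/* Style 1: return type and qualifiers on their own line, continuation
 * parameters aligned under the opening parenthesis. */
static int
parse_style_one(struct acpi_subtable_header *header,
                const unsigned long end)
{
    return header->length <= end;
}

/* Style 2: the whole declarator up to '(' on one line, parameters
 * wrapped onto the next line with a fixed four-space indent. */
static int parse_style_two(
    struct acpi_subtable_header *header, const unsigned long end)
{
    return header->length <= end;
}

/* Both spellings declare identical functions; a style checker only
 * has to accept either layout, not pick a winner. */
int wrapping_styles_agree(void)
{
    struct acpi_subtable_header h = { 0, 8 };

    return parse_style_one(&h, 16) == parse_style_two(&h, 16);
}
```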


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 09:08:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 09:08:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1136.3733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuZf-00068q-Hy; Thu, 01 Oct 2020 09:08:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1136.3733; Thu, 01 Oct 2020 09:08:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuZf-00068j-Eo; Thu, 01 Oct 2020 09:08:27 +0000
Received: by outflank-mailman (input) for mailman id 1136;
 Thu, 01 Oct 2020 09:08:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bOcq=DI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kNuZe-00068c-Bi
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 09:08:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d6b89f3-251c-41a2-a226-27f398b3a356;
 Thu, 01 Oct 2020 09:08:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AA725AD49;
 Thu,  1 Oct 2020 09:08:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601543304;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GkpkHDPoQcdt8alyC1VjT+FnO+DiQ4YeDc5Nv31DKPE=;
	b=ANoD8lf7ad8iCbzaG+rl0esRDtyb8rBqVhN9ubdJaV9VxOqON+K/5HNsbQdclwJxrpae8i
	p6+8C0cTPN5m5ej2ykkpfkdk8xDmz53eWSP+fHD7ZevkXpWB2q92v+RlYrMmWXJLAURmN6
	bJbIMYrWDxjdAVmvTtg8PD1n8XSYbSc=
Subject: Re: [xen-unstable-smoke test] 155187: regressions - FAIL
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-155187-mainreport@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3d90fa45-9097-f58c-f9b3-818bf0dd0be4@suse.com>
Date: Thu, 1 Oct 2020 11:08:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <osstest-155187-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.10.2020 07:11, osstest service owner wrote:
> flight 155187 xen-unstable-smoke real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/155187/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

All I can see in the serial log is

Oct  1 03:49:33.145346 [   16.993554] virbr0: port 1(virbr0-nic) entered disabled state
Oct  1 03:49:33.145426 [   16.993584] device virbr0-nic entered promiscuous mode
Oct  1 03:49:33.157353 [   16.993662] systemd-udevd[1538]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Oct  1 03:49:33.169370 [   16.993840] systemd-udevd[1541]: Using default interface naming scheme 'v240'.
Oct  1 03:49:33.169467 [   16.998341] systemd-udevd[1538]: Process 'ifupdown-hotplug' failed with exit code 1.
Oct  1 03:49:33.181371 [   16.999735] systemd-udevd[1541]: Process 'ifupdown-hotplug' failed with exit code 1.
Oct  1 03:49:33.193293 [   17.287282] device virbr0-nic left promiscuous mode
Oct  1 03:49:33.433389 [   17.287356] virbr0: port 1(virbr0-nic) entered disabled state
Oct  1 03:49:33.433466 [   17.289383] systemd-udevd[1541]: Process 'ifupdown-hotplug' failed with exit code 1.
Oct  1 03:49:33.445371 [   17.294929] systemd-udevd[1541]: Process 'ifupdown-hotplug' failed with exit code 1.

accompanied by multiple instances of

# Warning: iptables-legacy tables present, use iptables-legacy to see them
iptables: Operation not supported.

and

# Warning: iptables-legacy tables present, use iptables-legacy to see them
iptables: Invalid argument. Run `dmesg' for more information.

in libvirtd.log. What I can't tell is whether this is the cause of the
test failure. In any event the domain was (still?) paused at the time
the debug keys got sent. What I also can't guess at all is which of the
five commits in question might be responsible here (or whether the issue
is an environment one).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 09:27:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 09:27:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1140.3749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuro-0007xM-TQ; Thu, 01 Oct 2020 09:27:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1140.3749; Thu, 01 Oct 2020 09:27:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNuro-0007xF-QC; Thu, 01 Oct 2020 09:27:12 +0000
Received: by outflank-mailman (input) for mailman id 1140;
 Thu, 01 Oct 2020 09:27:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNurn-0007xA-AU
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 09:27:11 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 58dceebb-06ad-476e-98f3-24ae2b48aa2b;
 Thu, 01 Oct 2020 09:27:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601544428;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=hF4JWm2e7NDZ5Xh7ZZWnWl0AevZZigjbED9dD/hQ5oU=;
  b=HfeP5JsT/unY+4e5+iQyB04IyvXALOX1l4v0dJtx7x7R2eFUQ9GvRYAo
   ZvS0zMmEzCRa4e81rleLW6GW4MJBlnZVOSOuQ1chjbKvBSzL9v0UhRlW1
   wgzlNG7GJxhePjUjybYzbR+jvghneVm0mGfHyIe+QH+SCYo/RVufGLV2S
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: YLR2UGa1tE/4RMGxGxWxNWWSOwGUipKOqwsyH3eBqcYZjFEq6f63Dql0AuQVpWWc8J5UVkWSdj
 08njLDOgm0tgc62yUVqmg3zOEZ3E2+FUFUyYtucP/bO6tM6nzLIz9YeiZf/V1XOuBt9Mc59Bke
 tPLBAOyw1Fpdi608BbI8st/nsxAPQffcqHvudLfS+PDU4ujYohsT6s9Z/5OeP0S/xhAILZSqT6
 /zdmADLoNm4tgPXNHSRAEKgdUIcaiNBqetH+bvISmUDaf6KZcCbH9KN5vIwD1GiiWrwpIu5PNW
 PWY=
X-SBRS: None
X-MesageID: 28055755
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28055755"
Date: Thu, 1 Oct 2020 11:26:58 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: Re: [PATCH 1/8] tools/libxl: Simplify DOMCTL_CDF_ flags handling in
 libxl__domain_make()
Message-ID: <20201001092658.GY19254@Air-de-Roger>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-2-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200930134248.4918-2-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 02:42:41PM +0100, Andrew Cooper wrote:
> The use of the ternary operator serves only to obfuscate the code.  Rewrite it
> in more simple terms, avoiding the need to conditionally OR zero into the
> flags.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Might be worth adding to the log that it's a non-functional change.

Thanks, Roger.
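The shape of the rewrite under review can be illustrated with placeholder flags (the names and values below are hypothetical, not the actual DOMCTL_CDF_ definitions): the ternary form conditionally ORs zero into the flags, while the rewritten form accumulates them with plain if statements.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag values for illustration only. */
#define CDF_EXAMPLE_HVM (1u << 0)
#define CDF_EXAMPLE_HAP (1u << 1)

/* Ternary form: each condition ORs either the flag or zero. */
static uint32_t build_flags_ternary(int hvm, int hap)
{
    return (hvm ? CDF_EXAMPLE_HVM : 0) | (hap ? CDF_EXAMPLE_HAP : 0);
}

/* Rewritten form: start from zero and OR in each flag when its
 * condition holds -- the same result, but easier to read and extend. */
static uint32_t build_flags_plain(int hvm, int hap)
{
    uint32_t flags = 0;

    if ( hvm )
        flags |= CDF_EXAMPLE_HVM;
    if ( hap )
        flags |= CDF_EXAMPLE_HAP;

    return flags;
}

/* The two forms agree on every input combination. */
int flag_forms_agree(void)
{
    for ( int hvm = 0; hvm <= 1; hvm++ )
        for ( int hap = 0; hap <= 1; hap++ )
            if ( build_flags_ternary(hvm, hap) !=
                 build_flags_plain(hvm, hap) )
                return 0;

    return 1;
}
```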


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 09:37:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 09:37:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1145.3768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNv1h-0000Vp-02; Thu, 01 Oct 2020 09:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1145.3768; Thu, 01 Oct 2020 09:37:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNv1g-0000Vi-TP; Thu, 01 Oct 2020 09:37:24 +0000
Received: by outflank-mailman (input) for mailman id 1145;
 Thu, 01 Oct 2020 09:37:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNv1f-0000Uw-Na
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 09:37:23 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8d2bfc4-a7d1-4583-a8fb-d9265447f5c0;
 Thu, 01 Oct 2020 09:37:17 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id k15so4844360wrn.10;
 Thu, 01 Oct 2020 02:37:17 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 185sm8475760wma.18.2020.10.01.02.37.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 02:37:15 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=LLZQPOYWmcB5OcvPbTDTfwETRWYxrmdG96TWBsyI9VY=;
        b=fQsXnkvgkJ67zoE2Ehr9TRD7DFy7o+SQPLBHd7yOvQVvVS6f775Un+AFCJ9tuMFiii
         +XuoU8oOsvuIOVyTlV32Yto8BdlS32gQatTex1u0xUH15R5JkELLljbWdmgTJ51couY6
         7uqys0pwkBCQAHZUUbHLoRxz4ePzmmLNI0Y23qY+12NiAZlrrcIEkwE8KCeerl5pbR0T
         gXdrA0hU82lP/tAFaRzHlE5c5gv5iA8BhgNsDZQYyEvQKzxAorQKwHtRizJSm+WpGLup
         LfekMbbrDN+74pvlDbh/LldElZH7fwFnXFN3SpaJ22JvlQKrj+tHPvz5TTIhMX9nlcr9
         yFow==
X-Gm-Message-State: AOAM531Vo1qiH5g2hzalrL34n5xFbqjaHe72SBO0xDoIbxHDBYxGleQw
	VZPV9IkM7GHbJ9iwl45uJZQ=
X-Google-Smtp-Source: ABdhPJzE23z/Iabn8OLDLbEXKIeu7gT8UV4Ljqia8gjLDC2AmQKhZJdscRBdzLVrvgBBaC+yFPE1Lg==
X-Received: by 2002:adf:a3d4:: with SMTP id m20mr8403169wrb.29.1601545036696;
        Thu, 01 Oct 2020 02:37:16 -0700 (PDT)
Date: Thu, 1 Oct 2020 09:37:14 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: minios-devel@lists.xenproject.org, xen-devel@lists.xenproject.org,
	samuel.thibault@ens-lyon.org, wl@xen.org
Subject: Re: [PATCH 0/2] mini-os: netfront: fix some issues
Message-ID: <20201001093714.bcfs7rf4myle6t7g@liuwe-devbox-debian-v2>
References: <20200922105826.26274-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200922105826.26274-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Tue, Sep 22, 2020 at 12:58:24PM +0200, Juergen Gross wrote:
> Fix two issues in mini-os netfront:
> 
> - undo init_netfront interface change and replace it with an alternative
> - fix mini-os suspend/resume handling in netfront
> 
> Juergen Gross (2):
>   mini-os: netfront: retrieve netmask and gateway via extra function
>   mini-os: netfront: fix suspend/resume handling
> 
>  include/netfront.h |   4 +-
>  lwip-net.c         |   4 +-
>  netfront.c         | 173 ++++++++++++++++++++-------------------------
>  test.c             |   2 +-
>  4 files changed, 84 insertions(+), 99 deletions(-)

Pushed to mini-os.

Wei.

> 
> -- 
> 2.26.2
> 


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 09:39:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 09:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1150.3783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNv47-0000h9-Fn; Thu, 01 Oct 2020 09:39:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1150.3783; Thu, 01 Oct 2020 09:39:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNv47-0000h2-C6; Thu, 01 Oct 2020 09:39:55 +0000
Received: by outflank-mailman (input) for mailman id 1150;
 Thu, 01 Oct 2020 09:39:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNv45-0000gw-Un
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 09:39:53 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1522ba7-08c0-4757-8027-20d3e9bf0d46;
 Thu, 01 Oct 2020 09:39:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601545193;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=Sy9RhktHhKCZPoO1SvVPs/IAQLIURM7MzqGmOAdgZzg=;
  b=EEgUIGJAAMwbGJQhAj1uMboWp1dlKzQFuFPypLgisbQu2XL9Gp6ZmU9S
   /R0qZm3p8n1U9u6D9FuHrFdlUPH3O5aV3UEG5i3RPFu0nrdTx98uNtMa9
   VQEjPdlvWQzZxcQiEAGKxAe6/dg4OY+TC+6b72c7PA9381jN+314SksfZ
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: IaA5xlRhdTAheeutYCL7jlOOzM7g/xhjvC2zQvciPT/5e2O0ZyHRykRBh66w9otBVBHHV4HnPV
 puy9aUKniZdoDi3ldpH+JDHdTRQeViKOeG3fm0t5dO6leVvsZZMVwjGi1bdOjEh0lwujjoL0gn
 WMXCYk1bxbKHBhPVsGQ+G5aoqt0K92wlaMw4jVi7musKK1uUjbMx3RtfZtmbJh0UVacMpVKqzq
 LbZ9uL8RYJsLRTdTOiWUgXgk3PI6J4ec3ePtuRsH7XNfaBHz9VAwnIpweBkP3SrICF9FHMDmy0
 U5E=
X-SBRS: None
X-MesageID: 28323557
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28323557"
Date: Thu, 1 Oct 2020 11:39:40 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/8] xen/domctl: Simplify DOMCTL_CDF_ checking logic
Message-ID: <20201001093940.GZ19254@Air-de-Roger>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-3-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200930134248.4918-3-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 02:42:42PM +0100, Andrew Cooper wrote:
> Introduce some local variables to make the resulting logic easier to follow.
> Join the two IOMMU checks in sanitise_domain_config().  Tweak some of the
> terminology for better accuracy.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

On an unrelated note, we don't seem to sanitize iommu_opts in
sanitise_domain_config like we do for flags.

Thanks, Roger.
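The kind of sanitising Roger refers to -- rejecting any bits a caller sets that the hypervisor does not understand -- can be sketched as follows. The masks and the helper here are hypothetical stand-ins, not the actual sanitise_domain_config() logic:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Hypothetical masks of the currently defined bits; real code would
 * derive these from the DOMCTL_CDF_* and IOMMU option definitions. */
#define EXAMPLE_CDF_MASK   0x3fu
#define EXAMPLE_IOMMU_MASK 0x01u

/* Reject a field if the caller set bits outside the known mask. */
static int check_mask(uint32_t value, uint32_t known)
{
    return (value & ~known) ? -EINVAL : 0;
}

int sanitise_example(uint32_t flags, uint32_t iommu_opts)
{
    int rc = check_mask(flags, EXAMPLE_CDF_MASK);

    /* Applying the same unknown-bits check to iommu_opts would close
     * the gap noted in the review. */
    if ( !rc )
        rc = check_mask(iommu_opts, EXAMPLE_IOMMU_MASK);

    return rc;
}
```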


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 09:56:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 09:56:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1159.3817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvJr-0002YB-79; Thu, 01 Oct 2020 09:56:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1159.3817; Thu, 01 Oct 2020 09:56:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvJr-0002Y4-3Z; Thu, 01 Oct 2020 09:56:11 +0000
Received: by outflank-mailman (input) for mailman id 1159;
 Thu, 01 Oct 2020 09:56:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CxPN=DI=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kNvJp-0002Xz-EG
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 09:56:09 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06a245ec-cfa3-461f-a2a4-b8cf61f07638;
 Thu, 01 Oct 2020 09:56:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601546166;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=mOHDXwmaZvhMduxoxAz5aybHHaFT/taKe8MmCgjjiBU=;
  b=ewInA9ilQP+pwUL4hbSt6/C1OyRHjrZ7+8KlvJKiRkCcw0OVmK5eAi9T
   Hff6PfxU66rAULbqwACxWhj8Wc2eCYCqgQ4h4lN2GW+sbuomJhnbUI9AZ
   IRnHZI1yiynQq2x+7G3fbpJdQrG1fbDh+xaiZ1raXEnPOZMHSJS22ZuBE
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: aKHWpBhcwBw5XC2xOvbHRxNRXBVfQ5nBOTNUtrKAip5/CUsdXHuspzpyTDp6R+WXB7gLgkg9iB
 8iBBrU1bUaEXIuAsEKFJAgxTk/tB5RTR4lmPnm7TbCFc0HIsRrPdWT1772Mrh/qsa4z4dUPWDc
 93Zu2wZw1j3sRBm4dNiPVMGAQH1UmwiOJMnh9dNk14gAHM8hNT1D1x7J56xckZbFzGVu4ChduD
 vtrNZJHaKfQMtYpyVK7xXpRGoW72fZSF4bv0OhV30UsMlRsalxmcIfxeMT8Nf54SZ2fdlNu9tO
 BN0=
X-SBRS: None
X-MesageID: 29065524
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="29065524"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MpoyqVL1uTKqrmbBEpJtVF3aqnc83g2s4vx9yI/REzkml5sDRVIBqKHxJyT48JGPqIXO135QyoyfUtW9XO9NVBY0gHzKH1EkiZYigE5Tfn52u+CIUant8jvGkrnMwql7qSBAbgCzi0ojsrcCYZpZvO+kHts+88C8zOb6Ug95RnfDl/PtS3O/aY/cxGPZknQ7lm78nCcat4agG0OKcl/mNaz6GviUutBIGbsrxx1RMoNNlsaSNrhSpvuiyb5bLSyXQ9Xt7BKnbDyO5gFTBpNkzgfcX6JJ92i/JL9O9+5hW1ehU1Ukgx2YkoVuMEDJhEf8TJogeo0buF33d0pxitbxSQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mOHDXwmaZvhMduxoxAz5aybHHaFT/taKe8MmCgjjiBU=;
 b=V7/Q62lddfUzy7x9zuSs3SavoMB0/rembD6UG878TN8d4y4Zhy7KIjj/DQFh91Efh/jFz3+6/qj3eHeZ56d/UVhu/1LefsTgqiL0U2FQ/fSlKXA5oHI55LREYPo0qyr5hULf12Gs1aSlKX0NpCGqa6atFsG35kzj0iRa3NIkzvdn//ZC/lNCHIDt8V/V6SI+8t32c4Rfrrh9TXT2i5PgbaGf2HaTql2YPmCuc1Labctr0qI7Gr/Xa8sUlTCvXGIueUITJR1MiOf9mjfruntCCLNjn6l3+c8nnMwc0L4oaHQBWr+xeJbk5lggec7RK4HsJ0GGhSbe4w08tcmVjJQO3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mOHDXwmaZvhMduxoxAz5aybHHaFT/taKe8MmCgjjiBU=;
 b=U8mCzBDgQgf335atzXYrJ+A6bNHRErocOlzGswdRjULMxOT5roOv+6ipJ/RYiOrK9uzPaMR5nI4GbrvaO7PAAavwviv5m4inKm8AEFarOhhcCxo398En5Eu+5AbXr2Ohb9jWxzY/jKke1EtQ41cNJtTng6LsW9OmiDgijrq7cHE=
From: George Dunlap <George.Dunlap@citrix.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<Ian.Jackson@citrix.com>, Wei Liu <wl@xen.org>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Rich Persaud <persaur@gmail.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
Subject: Re: [PATCH RFC] docs: Add minimum version depencency policy document
Thread-Topic: [PATCH RFC] docs: Add minimum version depencency policy document
Thread-Index: AQHWlylRG8SsjpT4JUGR4W/Ewa0YcqmBoNCAgADi/AA=
Date: Thu, 1 Oct 2020 09:55:59 +0000
Message-ID: <63FFD578-F249-404B-9829-687A42360A76@citrix.com>
References: <20200930125736.95203-1-george.dunlap@citrix.com>
 <alpine.DEB.2.21.2009301321431.10908@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2009301321431.10908@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.1)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: cec56e3a-3431-47ba-cdd8-08d865f032f5
x-ms-traffictypediagnostic: BYAPR03MB4597:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB45978E675CC19AE2CB73516599300@BYAPR03MB4597.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: qVxHlbnUmf7+2p5yuMyTROSTO/E30BSF/R5mr7KRYbHLeXnchcwV2XehYX6M0a2baIWDPP3VmpwpNql4odmEfbaTRIvb53R10krE6iWNaxsQrsukoMtaPD64PrfWINKINGZp07wb1YNrSHPa098zo54kBb8f1BNBorIsogqRSZAXmg1oTeB6t25pER0IHCzi+AP+D1oWPA6aRTMApNu8UY/Z218kUgRODGnvNjciKP0df/M0hXHFZa/V79qxxajUIHHvVvHGErzXZlcr1gepkAgKmZG2e0I+Y8PgaFp/i/3QY8JiMXR+6t0a6dtE1364UlvW9HRuWekTbc07qvDaOSrmT4xjgAWhuOn+jI9BNe9Ydlxti2dYQHLCDw+FdE0+xGRC2EAkqGwGLNHUGYcTAA==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(346002)(366004)(376002)(396003)(2906002)(6916009)(71200400001)(36756003)(4326008)(6486002)(478600001)(2616005)(966005)(91956017)(33656002)(54906003)(316002)(83080400001)(5660300002)(83380400001)(66476007)(6512007)(86362001)(6506007)(26005)(53546011)(8676002)(66946007)(55236004)(76116006)(8936002)(64756008)(66446008)(66556008)(186003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: rZ0qn5yT0RIBhVAILUZvGd8W5rKBS6HZQvj6/obaq+abKmN36v6mFck2cPsTKH3+ROtkYdRm+XJtCEPb0QZXqO9T/l6SAxJ9wM8Hf18eGR1oyOyJyH4vBicusFMsSKq+tyjpQgDpslmqg4ivnATSq1kUSs64qDn2fg8aX3mHvO06BtGg6BXc1+6bcqe61Q91JUm++ifZ4G6YKsxQhkJKg5sE8GmHtqvO74fmCct+mLJOH4hxLdXqTMTWlxzCFUCJLilNoSI6kAP6HX1DTJpSzcwbZnjzawsR7/V33GqEulGNjpC/MuHvtJCgVJ/fc0j50g88MXwByHyxipUsNNlVVhdVZcBu0mukS75ULZ3Vy1lPlu/sOVzUMMMuSkKVsbL+cOi43fmzaJs7KQSxBBnmDg8sGVTzSD7eSO4XWiJ2vtKa4L+hXHTFWQPVObqK/2dle4ijAWHJXhXccQ/IP2MCWKvaKZMwK8sDU0oap20EyLrLjQCG/IYTz9je6quKPUXx8WkCsRXWb29IODL+GzMuvhs41DiSLbwHHh6lQVswrYXRLNNyAMCRRyKVHq0DSnxg5gZJAvSJ6MvpZtU9kSDs4QivvcF5I0Q8R4ZUJtNi1WlW0whTvL/sni3g/TLWD/TVYrbvtjqfrruKR7t1yMI1pA==
Content-Type: text/plain; charset="utf-8"
Content-ID: <62D7E47AF8C5A14F8EA682D56F6BF436@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cec56e3a-3431-47ba-cdd8-08d865f032f5
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Oct 2020 09:55:59.3072
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: nYnJIUBCPQjDYESvI70IJx6YduJWaELt/rJ0fUPhjCVQirUk7no/BNSsNhVuyU6pw1meSbAIjoEMzz1bg0mzdmXzOG/QUx/CsUvnfDYfaYk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4597
X-OriginatorOrg: citrix.com



> On Sep 30, 2020, at 9:23 PM, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Wed, 30 Sep 2020, George Dunlap wrote:
>> Define a specific criteria for how we determine what tools and
>> libraries to be compatible with.  This will clarify issues such as,
>> "Should we continue to support Python 2.4" moving forward.
>>
>> Note that CentOS 7 is set to stop receiving "normal" maintenance
>> updates in "Q4 2020"; assuming that 4.15 is released after that, we
>> only need to support CentOS / RHEL 8.
>>
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>>
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien@xen.org>
>> CC: Rich Persaud <persaur@gmail.com>
>> CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
>> ---
>> docs/index.rst                        |  2 +
>> docs/policies/dependency-versions.rst | 76 +++++++++++++++++++++++++++
>> 2 files changed, 78 insertions(+)
>> create mode 100644 docs/policies/dependency-versions.rst
>>
>> diff --git a/docs/index.rst b/docs/index.rst
>> index b75487a05d..ac175eacc8 100644
>> --- a/docs/index.rst
>> +++ b/docs/index.rst
>> @@ -57,5 +57,7 @@ Miscellanea
>> -----------
>>
>> .. toctree::
>> +   :maxdepth: 1
>>
>> +   policies/dependency-versions
>>    glossary
>> diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
>> new file mode 100644
>> index 0000000000..d5eeb848d8
>> --- /dev/null
>> +++ b/docs/policies/dependency-versions.rst
>> @@ -0,0 +1,76 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Build and runtime dependencies
>> +==============================
>> +
>> +Xen depends on other programs and libraries to build and to run.
>> +Chosing a minimum version of these tools to support requires a careful
>> +balance: Supporting older versions of these tools or libraries means
>> +that Xen can compile on a wider variety of systems; but means that Xen
>> +cannot take advantage of features available in newer versions.
>> +Conversely, requiring newer versions means that Xen can take advantage
>> +of newer features, but cannot work on as wide a variety of systems.
>> +
>> +Specific dependencies and versions for a given Xen release will be
>> +listed in the toplevel README, and/or specified by the ``configure``
>> +system.  This document lays out the principles by which those versions
>> +should be chosen.
>> +
>> +The general principle is this:
>> +
>> +    Xen should build on currently-supported versions of major distros
>> +    when released.
>> +
>> +"Currently-supported" means whatever that distro's version of "full
>> +support".  For instance, at the time of writing, CentOS 7 and 8 are
>> +listed as being given "Full Updates", but CentOS 6 is listed as
>> +"Maintenance updates"; under this criterium, we would try to ensure
>> +that Xen could build on CentOS 7 and 8, but not on CentOS 6.
>> +
>> +Exceptions for specific distros or tools may be made when appropriate.
>> +
>> +One exception to this is compiler versions for the hypervisor.
>> +Support for new instructions, and in particular support for new safety
>> +features, may require a newer compiler than many distros support.
>> +These will be specified in the README.
>> +
>> +Distros we consider when deciding minimum versions
>> +--------------------------------------------------
>> +
>> +We currently aim to support Xen building and running on the following distributions:
>> +Debian_,
>> +Ubuntu_,
>> +OpenSUSE_,
>> +Arch Linux,
>> +SLES_,
>> +Yocto_,
>> +CentOS_,
>> +and RHEL_.
>
> Alpine Linux should be in the list (consider its usage in container
> environment.)

Sure, we can add that one in.  Although, we might consider requiring that distros on this list be first be added to the Gitlab CI loop if possible.

> I am still on Alpine Linux 3.7, so I am sure that one works. Probably
> other versions work too.

Right, but the question is, if someone posts a patch which causes it to no longer build on 3.7, would we reject it or accept it?

According to https://wiki.alpinelinux.org/wiki/Alpine_Linux:Releases, only 3.12 is currently receiving bug fixes; so by the criteria above, we would only reject a changeset if it caused a build regression for 3.12.

I would argue that this is the right approach: It doesn’t make sense for us to spend more effort keeping an old distro working than that community it self spends keeping it working.  The Ubuntu community spends effort keeping Ubuntu 16.04 in working shape, so it makes sense for us to spend effort making sure Xen 4.15 builds and runs on it.  The Alpine Linux community doesn’t promise to spend any effort to fix any more bugs in Alpine Linux 3.11, so it doesn’t make any sense for us to spend effort making sure Xen 4.15 runs on it.

Obviously if it builds on Ubuntu 16.04 there’s a pretty high probability that it will also build on Alpine Linux 3.4+ (released around the same time); we just don’t want to promise that.

 -George



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:01:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:01:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1162.3829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvOx-0003WS-0Y; Thu, 01 Oct 2020 10:01:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1162.3829; Thu, 01 Oct 2020 10:01:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvOw-0003WL-Ta; Thu, 01 Oct 2020 10:01:26 +0000
Received: by outflank-mailman (input) for mailman id 1162;
 Thu, 01 Oct 2020 10:01:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNvOw-0003WG-0V
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:01:26 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 46f04faa-c426-4b63-8754-f1e68a2ec947;
 Thu, 01 Oct 2020 10:01:25 +0000 (UTC)
X-Inumbo-ID: 46f04faa-c426-4b63-8754-f1e68a2ec947
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601546485;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=ZYocmHT58hop1hnQW2D1AUwv9Zu75PWwT27jkJ3ptsc=;
  b=PIyIXrrnIKhcUIrNaiWW0fx0jhl9EbFGEpxx10NRnTVRLt729mJLhEvR
   bE+6n8IrX5BLcNkIGRkwU3OIpZphubd3OWnMVrXpJkJeI6TQgfLqCEPh4
   8EiC4obIZW925FbMJ4r+6zMZEkhnjxTF3hZ/HICnKrmlKx790wIw0Wx5J
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: QwHfmwhnvOFgDSO8W0D017sZa0xJ1izLrZuPcb17SwPQy3fkozHKKoxTnIydLUqj/WbEAuy1P3
 IpjUTDDTtURA7LizvDmrYRAbSa+Zj754Y/gFTSOokGStL8ZWsgk+DgaaaqwvHCFciMtso72OgW
 N/hk/mDYoOjuwhhJT6fiZKx90QOsUAusTpQznUxa4BwzVS7Bvhc4fHwyXIC4Yx4OSt6fQ4aEAk
 3UsQdBRS7pwEod3lrXaD19QfUcC3jg/t+/XFiozFv5HyLI7a/HZ9tlOwkr9JOU3pzTsJzDrmTO
 5Hc=
X-SBRS: None
X-MesageID: 29065832
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="29065832"
Date: Thu, 1 Oct 2020 12:01:01 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	Christian Lindig <christian.lindig@citrix.com>, Edwin Török
 <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 3/8] xen/domctl: Introduce and use
 XEN_DOMCTL_CDF_nested_virt
Message-ID: <20201001100101.GA19254@Air-de-Roger>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-4-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200930134248.4918-4-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 02:42:43PM +0100, Andrew Cooper wrote:
> Like other major areas of functionality, nested virt (or not) needs to be
> known at domain creation time for sensible CPUID handling, and wants to be
> known this early for sensible infrastructure handling in Xen.
> 
> Introduce XEN_DOMCTL_CDF_nested_virt and modify libxl to set it appropriately
> when creating domains.  There is no need to adjust the ARM logic to reject the
> use of this new flag.
> 
> No functional change yet.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:06:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:06:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1164.3841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvTn-0003io-Ka; Thu, 01 Oct 2020 10:06:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1164.3841; Thu, 01 Oct 2020 10:06:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvTn-0003ih-HE; Thu, 01 Oct 2020 10:06:27 +0000
Received: by outflank-mailman (input) for mailman id 1164;
 Thu, 01 Oct 2020 10:06:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CxPN=DI=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kNvTm-0003ic-CO
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:06:26 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63677afb-644e-40fc-adbb-451ff683f780;
 Thu, 01 Oct 2020 10:06:25 +0000 (UTC)
X-Inumbo-ID: 63677afb-644e-40fc-adbb-451ff683f780
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601546785;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=/UAhn6V5sQFFpeKUVPG2DKvo68iuogAeGXi0cF0ZRLA=;
  b=UumGVCg6hc6Lxr3O9SnZrlanaeYKihCdprJeeX2cjQ3G604EojG2W2/u
   AdF+gOxGszlyvNGwJiLW6wurWshs9KehMsFzCmIVx9vxVnY8LB64lcOHc
   3Z8OVUeVgsaXJmOX+28b1wPtEML2eaPp70qJdPxF8cmYGHZ/ACAV80Ddr
   o=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: dH5ekr5PSIQNc0O9zF19lXXz/rnUyuiMVeOkZ6NLI875zIp7BZu82889yqe9LfhMdqn4AkyZwD
 +7pB5hlWSKNICdJnUKS+LBrs6miVzLG1RXV/CjDnte0WGv7gEzBBBPZULafJURGEiTqZgjTsck
 oypZfz7sH0TrW49IehhgymszVOWn73dQTio4ipIlCfSy2dDHGnhEGqNtWudyfXBCyq/I+S/WBe
 9Wg0aD14nFLGsvhZ/zHUJ/+jyoU8B+kBjkDLgi75IwNiLdOgIL6r4FVVzviS5XxnQ+63kSuKUj
 YTM=
X-SBRS: None
X-MesageID: 28144524
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28144524"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=B+he/qXg7uS64agHxgf90uckSMUJ8pWMqulNW8mRz5hsydPrEMbx5Sog4N+N6zi4HIZ653YHFqE14VeaXvRRmiC9cGZpXWuLFNf3tCbKWYhjcElrgcfj6ltr4Ra8OClpvSUVKUgmoVv9mdz7kgzmdyqPHeFFaTmpEsm9XaVC/ZwSfzYpOj1BoSUEITGVpRN2idNHirCaw5TOTYPz5TGaPSIVxYzjfIj/1MqVRt4j6Exsua6Umc/SPk2O7pr9Ha9dx9L1t9zAcRoKiP/K7hix2pywCH67juHTjJOwofYJFKmDoMh5RHr52s3l4KAoE5lqjFOP9U6HjS4g71Up6F7p7w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/UAhn6V5sQFFpeKUVPG2DKvo68iuogAeGXi0cF0ZRLA=;
 b=XBgxFXEHr+TcOnlw0X9dzC7FF2EVfXpHc8vFzewdmxMp1YOVspx4fHu6xxCcKSEX6xLK/nU4YltzzZreckdVZUYyKDKioEN34T3k9XyIfRleB9z3DR1ZejTDv3GHYNi8piw/7bjh8Cp+YkpLmv3IN/Z19k1CWiPWtaptBJ8yEHKFkzDCGXkqdnh3VFePy6qQmaxy6iSSAN7R1wfaEmCY0muUiNdm0ajIBBJ3HU8w/bqFZvrT22NvmFwu84sTuea/IMON6p1vWh9S6sIDuhjLZiudJN7EHbhJ+sekYRhx3uVYltcd93noqXxKR7yer5y+C6pqQkagHTZI/yqgRN4aVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/UAhn6V5sQFFpeKUVPG2DKvo68iuogAeGXi0cF0ZRLA=;
 b=fv10gasLOrbKVrPB5Q+GpG7pLM2Z6a5SkHa1cw5VYF4AvfQZirYtOUAQRXw76+uSb1lRccMANM8J7Qw9BiRC/0tlfnU0yaKvh7D7WwX3r6CikhwjyflogueiqHPh5G/p7fT2I5ByiTdICirLQKglQqrDfAJYgDKeg/TWu59emBE=
From: George Dunlap <George.Dunlap@citrix.com>
To: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
CC: "jbeulich@suse.com" <jbeulich@suse.com>, Artem Mygaiev
	<Artem_Mygaiev@epam.com>, "committers@xenproject.org"
	<committers@xenproject.org>, "julien@xen.org" <julien@xen.org>,
	"vicooodin@gmail.com" <vicooodin@gmail.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "viktor.mitin.19@gmail.com"
	<viktor.mitin.19@gmail.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Xen Coding style and clang-format
Thread-Topic: Xen Coding style and clang-format
Thread-Index: AQHWlwq4nKYEhMN38U+xmvwRsutq+amA8joAgAAHUgCAAXyKAIAAENkA
Date: Thu, 1 Oct 2020 10:06:21 +0000
Message-ID: <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
In-Reply-To: <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.1)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 2fe55158-6529-4ede-ef8f-08d865f1a58f
x-ms-traffictypediagnostic: BYAPR03MB3639:
x-microsoft-antispam-prvs: <BYAPR03MB3639DB9D34923523AEA3F29299300@BYAPR03MB3639.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8273;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: RALu3ktj9yQKXiUgizG+OVZtCPY6YUQbhxV/ABP8EfjgKbJQUAXCKxZX811W7LhRFBuAP6+Vcxi5TUAjsdJbjSmdHOpc6+Nn6T4gbuMtamgPqV+I0fRjC5stAistxS6pQf2U4rqryceR++T8OhhBzVzFYkQXh/CwgOlejRp9ZtkJZI/KG2lGBMKevWa1VXfng+GFKdan+9hlWsVruILH/IyfBdH9zbipvXi0I5vbdtNwKyIY99DG46SCg7mnb6Y+dzxhD3c9c2loXLSijbKPcAhdcY2AmMj+tYCy55pXeB8GvBkLNqPbwU4766IZKdhZifeiUXPmuLVJ99Ie7dwx6w==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(136003)(376002)(396003)(39860400002)(66476007)(66946007)(54906003)(2906002)(53546011)(2616005)(55236004)(91956017)(6506007)(5660300002)(26005)(76116006)(316002)(6512007)(6916009)(33656002)(64756008)(6486002)(4326008)(86362001)(36756003)(186003)(8936002)(478600001)(8676002)(71200400001)(66446008)(66556008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: L+HxWkD0wE8ipSvelLfDChuFXcHRq7Wo2/r1k/2lUwbbKBUoG8t6Bzd8m2iqQhTtWOvZBkzv/Zfab/GTVnppNaL+BgkoueDNHEQCBLUqsGBAIHRBvOxv0xKOG6M5AjA4/9p5eIS8VpqvY84ifBjNwKDRY2nZYTyU3bilUyT4dd+A3wivMtKjCxpiITUOkEPxDIYKbaWde3SjjBsgQ9QBMkQCcMvhnZKpih7JmV/lSZ9yWeNkZgx1BvIBIJyCPImHDn26WeRpc+5DkMJ+gICJLEQkZd2okGR4TyW6IYLmEHN3NY4DeYyNCBOB3hLz1LOGRK8ijYsc65F0WFDcHf/DDvxWF4CiYdZkrsB0vDpgjn7mETXpIlyMw4ntYBd2eLsaCp3puM4cQY8/NNxnXFuYomZptFXsBUhqvp6MPmnxMZGLdjrJXmek0G3ZcMZAswNnqV5Z8SQ9+ZfpZEpN6gRcviaFVV6FewT27dgzF0GJQnbCqvXnpZlsXK/iIxP856zZe0n6R+xbaqR/tjsX8evLxhDGGWzCnXs2OSuotV/XZKx28CqvSjvJHFi2YH6/+HryPb+Y4XTqhEZvvXatOJI55i8q0pIvPDEnsJTceYuNuZ0FqeyydU/jhUcDQHVE/++R+FtrMReKU0npEo0gY02tUA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6A93C568F7B01E4AB9A95C4E3D0AB4EA@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2fe55158-6529-4ede-ef8f-08d865f1a58f
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Oct 2020 10:06:21.1352
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ursIv/6Id6lOFXEw/I+GIHTdlJ9qVhJxiKlmi6msNp82zLfUdIS38oRMlBBhGQ1ZXDXETYV1eHKG/5QHpUZm/MxBvMSVxFoc7Bj35jdUAnc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3639
X-OriginatorOrg: citrix.com



> On Oct 1, 2020, at 10:06 AM, Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com> wrote:
>
> Hi,
>
> On Wed, 2020-09-30 at 10:24 +0000, George Dunlap wrote:
>>> On Sep 30, 2020, at 10:57 AM, Jan Beulich <jbeulich@suse.com>
>>> wrote:
>>>
>>> On 30.09.2020 11:18, Anastasiia Lukianenko wrote:
>>>> I would like to know your opinion on the following coding style
>>>> cases.
>>>> Which option do you think is correct?
>>>> 1) Function prototype when the string length is longer than the
>>>> allowed
>>>> one
>>>> -static int __init
>>>> -acpi_parse_gic_cpu_interface(struct acpi_subtable_header
>>>> *header,
>>>> -                             const unsigned long
>>>> end)
>>>> +static int __init acpi_parse_gic_cpu_interface(
>>>> +    struct acpi_subtable_header *header, const unsigned long
>>>> end)
>>>
>>> Both variants are deemed valid style, I think (same also goes for
>>> function calls with this same problem). In fact you mix two
>>> different style aspects together (placement of parameter
>>> declarations and placement of return type etc) - for each
>>> individually both forms are deemed acceptable, I think.
>>
>> If we’re going to have a tool go through and report (correct?) all
>> these coding style things, it’s an opportunity to think if we want to
>> add new coding style requirements (or change existing requirements).
>>
>
> I am ready to discuss new requirements and implement them in rules of
> the Xen Coding style checker.

Thank you. :-)  But what I meant was: Right now we don’t require one approach or the other for this specific instance.  Do we want to choose one?

I think in this case it makes sense to do the easiest thing.  If it’s easy to make the current tool accept both styles, let’s just do that for now.  If the tool currently forces you to choose one of the two styles, let’s choose one.

 -George


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:07:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:07:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1166.3853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvUQ-0003pz-Uh; Thu, 01 Oct 2020 10:07:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1166.3853; Thu, 01 Oct 2020 10:07:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvUQ-0003ps-RV; Thu, 01 Oct 2020 10:07:06 +0000
Received: by outflank-mailman (input) for mailman id 1166;
 Thu, 01 Oct 2020 10:07:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNvUP-0003pj-KO
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:07:05 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72ae7b56-c6e0-4650-b615-248ae41a477a;
 Thu, 01 Oct 2020 10:07:03 +0000 (UTC)
X-Inumbo-ID: 72ae7b56-c6e0-4650-b615-248ae41a477a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601546823;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=Zv4obAIWlRsLAjQN+kpVRABTv5ZEK3WTLA2T+jlpnug=;
  b=JlF4ttl+47NV2d9iDYanEZ+TvjP/2h33KsfFE82TbNL6xjJ76rGV2uzp
   RGXNHyGL+ys25fXb/dmFBcrAVhtJ5XpJzM1sAeFIOl1PE0Qqb7dRzjdOu
   n/NQQdc5B0dUUCRAvgZdfaihDZFAyeqv4CYpPpb85LeYpBfiKQX06gFrW
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 5HGXiubGCZ27aigNbjMnPTH/jFw/iCcePLrnmG/5I6tllqxLQVYKhlGX1gLoWvVoaJumSQbPb1
 tnCL5smZCVGXrxqbQ2qOMkDR/mJV2T/FoUCatxPXgfXZluky9EmcpfbTpih8MJlgkGGNqMJyil
 C5O14pQZcQOWyjyhhOsIFrXyl8blz3GHZLn3nU9P2+Qtc3lIgTr2Xk6AI9YsiZ61bO/UXCUT0N
 P1VJlrVqp5omZfhlicFxANhHmQL7AMYM4kuTs1fE6OJQjRvjsNn4Ma6CIEks3P/Vh+MshPTv0W
 BDk=
X-SBRS: None
X-MesageID: 28058006
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28058006"
Date: Thu, 1 Oct 2020 12:06:54 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: Re: [PATCH 4/8] tools/cpuid: Plumb nested_virt down into
 xc_cpuid_apply_policy()
Message-ID: <20201001100654.GB19254@Air-de-Roger>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-5-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200930134248.4918-5-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 02:42:44PM +0100, Andrew Cooper wrote:
> Nested Virt is the final special case in legacy CPUID handling.  Pass the
> (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
> the semantic dependency on HVM_PARAM_NESTEDHVM.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:19:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:19:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1170.3867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvgO-0004s8-69; Thu, 01 Oct 2020 10:19:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1170.3867; Thu, 01 Oct 2020 10:19:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvgO-0004s1-2X; Thu, 01 Oct 2020 10:19:28 +0000
Received: by outflank-mailman (input) for mailman id 1170;
 Thu, 01 Oct 2020 10:19:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNvgM-0004rT-NX
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:19:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f932c87-4fd8-46e6-9b83-e69c8a7eccd3;
 Thu, 01 Oct 2020 10:19:15 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNvgA-0002dm-TJ; Thu, 01 Oct 2020 10:19:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNvgA-0003BU-MO; Thu, 01 Oct 2020 10:19:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNvgA-0007iS-Lv; Thu, 01 Oct 2020 10:19:14 +0000
X-Inumbo-ID: 6f932c87-4fd8-46e6-9b83-e69c8a7eccd3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=h33NxTNIqDNfNAgAQVod/cMTwZznxccnlROSgC5Pl+o=; b=thy0XorDmc43qvMWAkzJY65FFV
	/zJaDhjLuM81N/dYuwsIXfCOz9U7DtpfPMrxIKZ0nWTAMwev0fh4ZwIlMZoKLQDY0kf0J1OM9LHzS
	1y2SmktjF/NS2jGsa/XKKHHS44ZS6jw83TwKYvFqyIr7mUczha27lKIhTOXGYg4nUchg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.14-testing bisection] complete test-amd64-amd64-xl-qemut-debianhvm-i386-xsm
Message-Id: <E1kNvgA-0007iS-Lv@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 10:19:14 +0000

branch xen-4.14-testing
xenbranch xen-4.14-testing
job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  2ee270e126458471b178ca1e5d7d8d0afc48be39
  Bug not present: 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155217/


  commit 2ee270e126458471b178ca1e5d7d8d0afc48be39
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:14:56 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemut-debianhvm-i386-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemut-debianhvm-i386-xsm.debian-hvm-install --summary-out=tmp/155217.bisection-summary --basis-template=154350 --blessings=real,real-bisect xen-4.14-testing test-amd64-amd64-xl-qemut-debianhvm-i386-xsm debian-hvm-install
Searching for failure / basis pass:
 155087 fail [host=huxelrebe1] / 154350 [host=godello0] 154116 [host=albana1] 152545 [host=albana0] 152537 [host=elbling1] 152531 [host=chardonnay0] 152153 ok.
Failure / basis pass flights: 155087 / 152153
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 f37a1cf023b277d0d49323bf322ce3ff0c92262d
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9132a31b9c8381197eee75eb66c809182b264110 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 6ada2285d9918859699c92e09540e023e0a16054 456957aaa1391e0dfa969e2dd97b87c51a79444e
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#9132a31b9c8381197eee75eb66c809182b264110-dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/osstest/seabios.git#6ada2285d9918859699c92e09540e023e0a16054-155821a1990b6de78dde5f98fa5ab90e802021e0 git://xenbits.xen.org/xen.git#456957aaa1391e0dfa969e2dd97b87c51a79444e-f37a1cf023b277d0d49323bf322ce3ff0c92262d
From git://cache:9419/git://xenbits.xen.org/xen
   707eb41ae2..de16a8fa0d  staging    -> origin/staging
Loaded 12581 nodes in revision graph
Searching for test results:
 152153 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9132a31b9c8381197eee75eb66c809182b264110 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 6ada2285d9918859699c92e09540e023e0a16054 456957aaa1391e0dfa969e2dd97b87c51a79444e
 152531 [host=chardonnay0]
 152537 [host=elbling1]
 152545 [host=albana0]
 154116 [host=albana1]
 154148 [host=godello0]
 154350 [host=godello0]
 154617 [host=huxelrebe0]
 154641 [host=huxelrebe0]
 155016 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155091 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9132a31b9c8381197eee75eb66c809182b264110 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 6ada2285d9918859699c92e09540e023e0a16054 456957aaa1391e0dfa969e2dd97b87c51a79444e
 155147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155150 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0020157a9825e5f5784ff014044f11c0558c92fe 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf
 155153 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 63d92674d240ab4ecab94f98e1e198842bb7de00 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf
 155159 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5c065855284bd0ca65784d313e094054e23685bb 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 28855ebcdbfa437e60bc16c761405476fe16bc39
 155164 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 28855ebcdbfa437e60bc16c761405476fe16bc39
 155087 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155168 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 b8c2efbe7b3e8fa5f0b0a3679afccd1204949070
 155177 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 03019c20b516be53ba0cd393f5291974a9a6c9a8
 155185 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155190 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155199 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155204 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155207 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155212 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155217 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 2ee270e126458471b178ca1e5d7d8d0afc48be39
Searching for interesting versions
 Result found: flight 152153 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 9b9fc8e391b6d5afa83f90271fdbd0e13871e841, results HASH(0x55bfb24b7df8) HASH(0x55bfb24a60c0) HASH(0x55bfb247aab0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 b8c2efbe7b3e8fa5f0b0a3679afccd1204949070, results HASH(0x55bfb24aacf8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c\
 3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 28855ebcdbfa437e60bc16c761405476fe16bc39, results HASH(0x55bfb2492fa8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5c065855284bd0ca65784d313e094054e23685bb 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 28855ebcdbfa437e60bc16c761405476fe16bc39, results HASH(0x55bfb247cdb8) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 63d92674d240ab4ecab94f98e1e198842bb7de00 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf, results HASH(0x55bfb2496cb8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 0020157a9825e5f5784f\
 f014044f11c0558c92fe 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf, results HASH(0x55bfb24770a0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9132a31b9c8381197eee75eb66c809182b264110 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 6ada2285d9918859699c92e09540e023e0a\
 16054 456957aaa1391e0dfa969e2dd97b87c51a79444e, results HASH(0x55bfb2483d20) HASH(0x55bfb2474470) Result found: flight 155016 (fail), for basis failure (at ancestor ~394)
 Repro found: flight 155091 (pass), for basis pass
 Repro found: flight 155147 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
No revisions left to test, checking graph state.
 Result found: flight 155190 (pass), for last pass
 Result found: flight 155199 (fail), for first failure
 Repro found: flight 155204 (pass), for last pass
 Repro found: flight 155207 (fail), for first failure
 Repro found: flight 155212 (pass), for last pass
 Repro found: flight 155217 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  2ee270e126458471b178ca1e5d7d8d0afc48be39
  Bug not present: 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155217/


  commit 2ee270e126458471b178ca1e5d7d8d0afc48be39
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:14:56 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 145 colors found
Revision graph left in /home/logs/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemut-debianhvm-i386-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
155217: tolerable ALL FAIL

flight 155217 xen-4.14-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155217/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:23:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:23:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1176.3884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvkP-0005km-Vw; Thu, 01 Oct 2020 10:23:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1176.3884; Thu, 01 Oct 2020 10:23:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNvkP-0005kf-Sy; Thu, 01 Oct 2020 10:23:37 +0000
Received: by outflank-mailman (input) for mailman id 1176;
 Thu, 01 Oct 2020 10:23:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bOcq=DI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kNvkO-0005ka-6Y
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:23:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a095e3c1-577d-43aa-bb6c-fb77c3437125;
 Thu, 01 Oct 2020 10:23:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6848BAC2F;
 Thu,  1 Oct 2020 10:23:33 +0000 (UTC)
X-Inumbo-ID: a095e3c1-577d-43aa-bb6c-fb77c3437125
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601547813;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0XKbdNlXp2ozLMoGydc/k1xhOaAQYLPePup+XqkVwjo=;
	b=tj7IaousHeu17VcXsmkFV86bDCBgpdXFeBxWYQeQwkqm1gPsCfmN1glwyxSGkYrxjO79+O
	0WcAOgZyuvye7ba44mIgRTZXfmSReuebQTtxecg97ZaNPvrauxoovBAyMl45QcBE4IT1fA
	CR/lH35g/9olbY8+NWDMgvnSs8dPZnU=
Subject: Re: [PATCH 3/8] xen/domctl: Introduce and use
 XEN_DOMCTL_CDF_nested_virt
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
 Christian Lindig <christian.lindig@citrix.com>,
 =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
 Rob Hoes <Rob.Hoes@citrix.com>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <32a955ad-9995-6df2-3d7b-6b3eb7b1d656@suse.com>
Date: Thu, 1 Oct 2020 12:23:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930134248.4918-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 15:42, Andrew Cooper wrote:
> @@ -667,6 +668,12 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>           */
>          config->flags |= XEN_DOMCTL_CDF_oos_off;
>  
> +    if ( nested_virt && !hap )
> +    {
> +        dprintk(XENLOG_INFO, "Nested virt not supported without HAP\n");
> +        return -EINVAL;
> +    }

Initially I was merely puzzled by this not being accompanied by
any removal of code elsewhere. But when I started looking I couldn't
find any such enforcement; instead I found e.g. nsvm_vcpu_hostrestore()
covering the shadow mode case. For this to be "No functional change"
as the description claims, could you point me at where this
restriction is currently enforced?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:40:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:40:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1179.3897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNw0X-0007Wu-Fk; Thu, 01 Oct 2020 10:40:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1179.3897; Thu, 01 Oct 2020 10:40:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNw0X-0007Wn-Cd; Thu, 01 Oct 2020 10:40:17 +0000
Received: by outflank-mailman (input) for mailman id 1179;
 Thu, 01 Oct 2020 10:40:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9aU5=DI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kNw0V-0007Wi-TR
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:40:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5a1c4db-e72b-480c-afe3-55a1b5a76587;
 Thu, 01 Oct 2020 10:40:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4F3AABBD;
 Thu,  1 Oct 2020 10:40:13 +0000 (UTC)
X-Inumbo-ID: f5a1c4db-e72b-480c-afe3-55a1b5a76587
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601548814;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vdMeXAVxFJALT1KETj5jBE9MDFZXtlda6dsqMXQoKoQ=;
	b=hCz7Kqa/ySLuLCaUAZHgklGIXsktJvnRCnqxcgmRewjGvU9X6OPmXXprz0Qs1B3/1iUfeP
	S79yTKJV5aNsDLr0ZbrXKMblKIEtJtqApuGM2PWlorFy2RcZtsIcYGbdMtCPWcxRx283jw
	9TBbSucwzuiwgDnAkRQPpGxskQr8Am8=
Subject: Re: [PATCH 2/3] tools/init-xenstore-domain: support xenstore pvh
 stubdom
To: Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
References: <20200923064541.19546-1-jgross@suse.com>
 <20200923064541.19546-3-jgross@suse.com>
 <20200930154611.xqzdumwec7nlnidl@liuwe-devbox-debian-v2>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <38de2f90-d6b6-af4b-2653-58119cef927d@suse.com>
Date: Thu, 1 Oct 2020 12:40:13 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930154611.xqzdumwec7nlnidl@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.20 17:46, Wei Liu wrote:
> On Wed, Sep 23, 2020 at 08:45:40AM +0200, Juergen Gross wrote:
>> Instead of creating the xenstore-stubdom domain first and parsing the
>> kernel later, do it the other way round. This makes it possible to probe
>> for the domain type supported by the xenstore-stubdom and to support
>> both pv and pvh type stubdoms.
>>
>> Try to parse the stubdom image as a PVH image first; if this fails,
>> fall back to PV. Then create the domain with the appropriate type
>> selected.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> [...]
>> +    dom->container_type = XC_DOM_HVM_CONTAINER;
>> +    rv = xc_dom_parse_image(dom);
>> +    if ( rv )
>> +    {
>> +        dom->container_type = XC_DOM_PV_CONTAINER;
>> +        rv = xc_dom_parse_image(dom);
>> +        if ( rv )
>> +        {
>> +            fprintf(stderr, "xc_dom_parse_image failed\n");
>> +            goto err;
>> +        }
>> +    }
>> +    else
>> +    {
>> +        config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
>> +        config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
>> +        dom->target_pages = mem_size >> XC_PAGE_SHIFT;
>> +        dom->mmio_size = GB(4) - LAPIC_BASE_ADDRESS;
>> +        dom->lowmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
>> +                          LAPIC_BASE_ADDRESS : mem_size;
>> +        dom->highmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
>> +                           GB(4) + mem_size - LAPIC_BASE_ADDRESS : 0;
>> +        dom->mmio_start = LAPIC_BASE_ADDRESS;
>> +        dom->max_vcpus = 1;
>> +        e820[0].addr = 0;
>> +        e820[0].size = dom->lowmem_end;
>> +        e820[0].type = E820_RAM;
>> +        e820[1].addr = LAPIC_BASE_ADDRESS;
>> +        e820[1].size = dom->mmio_size;
>> +        e820[1].type = E820_RESERVED;
>> +        e820[2].addr = GB(4);
>> +        e820[2].size = dom->highmem_end - GB(4);
> 
> Do you not want to check if highmem_end is larger than GB(4) before
> putting in this region?

Oh, indeed I should.

> 
>> +        e820[2].type = E820_RAM;
>> +    }
> 
> This hardcoded e820 map doesn't seem very flexible, but we
> control the guest kernel anyway so I think this should be fine.
> 
> The rest of this patch looks okay to me.

Thanks.


Juergen



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:46:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:46:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1181.3909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNw68-0007jg-3W; Thu, 01 Oct 2020 10:46:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1181.3909; Thu, 01 Oct 2020 10:46:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNw68-0007jZ-0W; Thu, 01 Oct 2020 10:46:04 +0000
Received: by outflank-mailman (input) for mailman id 1181;
 Thu, 01 Oct 2020 10:46:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNw65-0007jU-St
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:46:01 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03c6abc4-b95a-4d27-b81a-d4e5c923de3e;
 Thu, 01 Oct 2020 10:46:01 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id z4so5120985wrr.4
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:46:00 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q12sm8370065wrs.48.2020.10.01.03.45.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:45:59 -0700 (PDT)
X-Inumbo-ID: 03c6abc4-b95a-4d27-b81a-d4e5c923de3e
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=1w/r3HI2q9NYFccaKwJBM3EJgliC7iBndo7ofno+z6E=;
        b=UHWLEyBs4C2yB/dDiJHNMZLTNgabzWG7wJ6WS1yk7tUbUgqIC4b6+ATsgd0AyQiIt0
         hi4Lz9ShUNEuKkBXfKGA02Q9hyX4RZ/mknDI8jhprLNKn/MmLR0O5Wbw0stN8YDv2IcS
         Wm1fdmrIleSxZNENxrBUJ5hcWMu8R1/G2eKsZQ1CdD28w9KfI5WPZDf6Z12/wU6bnALc
         HaHX0PHD0Qb0cJUQ8aClsTEI4PYVfttMyr2DkTzLS+QdsEkCCM75TkYELPc7oltzMivn
         BsvLkKquvvKdcaVpNADGtxPf1kg7lCLpQoj63gUqzhTPVvP6FOsDN4lJ0t/c8k08cB5I
         sfAw==
X-Gm-Message-State: AOAM532/2w1IE75HMGKobigwemgNZn7ve56aHnyXoKUbdRb4j2ConRCy
	4J/BSNPJq+tkniHBDDb+N48=
X-Google-Smtp-Source: ABdhPJxzjIK4hfFjtzblzYemBPxNxr75vZfECxRUpb4oxofsmcJ96ZvfWlIt+Mjl7ODRkKWqQXfMwg==
X-Received: by 2002:a5d:44cc:: with SMTP id z12mr8445689wrr.189.1601549160198;
        Thu, 01 Oct 2020 03:46:00 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:45:57 +0000
From: Wei Liu <wl@xen.org>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v4 29/31] tools: rename global libxlutil make variables
Message-ID: <20201001104557.5scj3po3s3fsx6tx@liuwe-devbox-debian-v2>
References: <20200828150747.25305-1-jgross@suse.com>
 <20200828150747.25305-30-jgross@suse.com>
 <20200907155511.jhpucgrvmthhzlmv@liuwe-devbox-debian-v2>
 <c908bdee-3ab6-6a4b-0c93-e38116a98a5c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c908bdee-3ab6-6a4b-0c93-e38116a98a5c@suse.com>
User-Agent: NeoMutt/20180716

On Mon, Sep 07, 2020 at 06:16:32PM +0200, Jürgen Groß wrote:
> On 07.09.20 17:55, Wei Liu wrote:
> > On Fri, Aug 28, 2020 at 05:07:45PM +0200, Juergen Gross wrote:
> > > Rename *_libxlutil make variables to *_libxenutil in order to avoid
> > > nasty indirections when moving libxlutil under the tools/libs
> > > infrastructure.
> > 
> > xl means xenlight.
> > 
> > So I think the name should be libxenlightutil here.
> 
> I don't really mind, but given that the name is completely internal
> to the build system I wonder whether the shorter name isn't more
> pleasant.

Oh, yes. It is just an internal name. That's fine then. There is no
point in quibbling about this further.

Acked-by: Wei Liu <wl@xen.org>

> 
> 
> Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:46:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:46:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1182.3921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNw6L-0007nR-DB; Thu, 01 Oct 2020 10:46:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1182.3921; Thu, 01 Oct 2020 10:46:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNw6L-0007nJ-9m; Thu, 01 Oct 2020 10:46:17 +0000
Received: by outflank-mailman (input) for mailman id 1182;
 Thu, 01 Oct 2020 10:46:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNw6K-0007n5-Jm
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:46:16 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f1ec110-1ac2-434a-9b54-392be6bea932;
 Thu, 01 Oct 2020 10:46:15 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id s12so5099502wrw.11
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:46:15 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id d9sm7582785wmb.30.2020.10.01.03.46.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:46:14 -0700 (PDT)
X-Inumbo-ID: 0f1ec110-1ac2-434a-9b54-392be6bea932
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=21emveKsZWWUjET4SNbQI/1Lw/S6aESA+s+8y2MW3cw=;
        b=eChJpIxTKecYOsSxO88o1bGu0iCRkXPffk3+VOdVRpfflXK9pdMdjXxUM64JtniZlN
         ZPrXQxVq1vcjOewvdZQpkLZHqj6kiVvgRF3+J1CWMIfsYMy+/6+Pq9+arqWnTX17Klem
         qrDpFfrADT9Y8HbzK0tOoUhB4XSQcFXHjNCHn+GtL4a8Sa69LnIR/HW9QBXPhwHf5QQM
         KvPznqC9/ea1AfGVF/ziz4HryCL8XzvyTc8UViRW1hDXDiLNKNyRYiBQopyalA5SpHLD
         cdiZyXhapTc/xQejeOCWc6t+mTa2sV0uDTNdcDkh6pk0slG5qZJPEc367lRqmjYoxp5P
         MysQ==
X-Gm-Message-State: AOAM532DeMq11o0Cngke+c5IRYKAGzZPt0Wx0RgeKMLUiFmYNTGOgx2/
	5Rr6ptpvHIPBua+YeXuj7uWUVJnt0dU=
X-Google-Smtp-Source: ABdhPJxDEqk44uJT+jSjXi6HdrUAJO+P4d9MWxFvGIepTYDNSLfZZMXfTk8zuLsuAASpMNiTV15Vig==
X-Received: by 2002:adf:f986:: with SMTP id f6mr7836499wrr.270.1601549175229;
        Thu, 01 Oct 2020 03:46:15 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:46:13 +0000
From: Wei Liu <wl@xen.org>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v4 30/31] tools/libs: add option for library names not
 starting with libxen
Message-ID: <20201001104613.az4ylmbaa43u637f@liuwe-devbox-debian-v2>
References: <20200828150747.25305-1-jgross@suse.com>
 <20200828150747.25305-31-jgross@suse.com>
 <20200828160053.b7misof3qmmkskur@liuwe-devbox-debian-v2>
 <aa1b4641-0ce5-9c72-19a0-2e27ff1fe704@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <aa1b4641-0ce5-9c72-19a0-2e27ff1fe704@suse.com>
User-Agent: NeoMutt/20180716

On Sat, Aug 29, 2020 at 06:38:40AM +0200, Jürgen Groß wrote:
> On 28.08.20 18:00, Wei Liu wrote:
> > On Fri, Aug 28, 2020 at 05:07:46PM +0200, Juergen Gross wrote:
> > > libxlutil doesn't follow the standard name pattern of all other Xen
> > > libraries, so add another make variable which can be used to allow
> > > other names.
> > > 
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > 
> > Is this still needed?
> 
> Yes. It's for the installed files, as those need to stay as they are
> today.

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:46:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1183.3933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNw6l-0007w8-NT; Thu, 01 Oct 2020 10:46:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1183.3933; Thu, 01 Oct 2020 10:46:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNw6l-0007w1-KL; Thu, 01 Oct 2020 10:46:43 +0000
Received: by outflank-mailman (input) for mailman id 1183;
 Thu, 01 Oct 2020 10:46:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNw6k-0007vq-AI
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:46:42 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4b5413fb-a503-473a-85ce-d38e9bd8fdab;
 Thu, 01 Oct 2020 10:46:39 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id q9so2403437wmj.2
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:46:39 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 91sm8802020wrq.9.2020.10.01.03.46.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:46:38 -0700 (PDT)
X-Inumbo-ID: 4b5413fb-a503-473a-85ce-d38e9bd8fdab
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=CzaZ9JDBNdOdIu7CxEVCZBj55MHCI4EpDDb+mosYShI=;
        b=Z76FnEIvcQSeP6RnRIDQb2IjJhIRU8WWNKK8EmPE+hwXXJ8ShnZ5Yhu6vJJpLrabMs
         U/XLJh7Bea0J7VXnPlbIB+f+efmpAfRjTUUH2BQ7QvZIP1Cjwh2PmjTrgvxxF004FC04
         6wqBCUYs9mrr3wTSSL146pqZe3zUMrWVz1TJRx5IFNGK5b2ilgTFQKFQ+zgHFv+fwCxF
         7uWWmS3hLO15i18hJ+VYz9zpsr6nKVt2T0V8bc5biI7CdPR3Q64z6nYahDTgaHnnkZoT
         ukWllKetKi1xXFRkpj85ibbjD1pHtY31JWWlDHfsSwO6ygiASbMMyq173/tvy5P4LuAS
         CUOw==
X-Gm-Message-State: AOAM5324ibDfjLcTWkJr79Bg649tD+z/62DzMEFgQyzvjYwDREosG//4
	xOv3wQNorSFTmc5EwBTdG+g=
X-Google-Smtp-Source: ABdhPJxN2KzpQLzJgqlelXUo3j40y3mBH+Iq5Sl5Fd7qgc8vDQt3I9bnPOMKTD+5B3DNBRk+WBE1LA==
X-Received: by 2002:a05:600c:2312:: with SMTP id 18mr7636998wmo.141.1601549198524;
        Thu, 01 Oct 2020 03:46:38 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:46:36 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v4 31/31] tools: move libxlutil to tools/libs/util
Message-ID: <20201001104636.n2st2lszfvumokl5@liuwe-devbox-debian-v2>
References: <20200828150747.25305-1-jgross@suse.com>
 <20200828150747.25305-32-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200828150747.25305-32-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Fri, Aug 28, 2020 at 05:07:47PM +0200, Juergen Gross wrote:
> Move the libxlutil source to tools/libs/util and delete tools/libxl.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:53:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:53:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1187.3945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwDM-0000R4-GV; Thu, 01 Oct 2020 10:53:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1187.3945; Thu, 01 Oct 2020 10:53:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwDM-0000Qx-CH; Thu, 01 Oct 2020 10:53:32 +0000
Received: by outflank-mailman (input) for mailman id 1187;
 Thu, 01 Oct 2020 10:53:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNwDL-0000Qs-4S
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:53:31 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6452a83-6d3b-423b-b455-a76b5d47ade8;
 Thu, 01 Oct 2020 10:53:30 +0000 (UTC)
X-Inumbo-ID: a6452a83-6d3b-423b-b455-a76b5d47ade8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601549610;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=RjRT/Hg2ogLdWLHO37jJVlulrni7VJqaoYtk8G3MeKo=;
  b=TITiAFQwKdfZ34TiwV0h3wKqdB7TL7VW3BbzaFifLnQX3/grvLj9aY9K
   kOAhnY9n4Kz7r2EHCl5EYTaSnHBiEH7i29f3QWgLoz29KMCO3xnoJiauM
   mfWsm/cfqVJtp9Kq6oa6SD80bC+raPNZDHeTwhvsG+dhkKmgsanTcGUBs
   M=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CXQeLGjRalIG7+p2fYoyLLu2Vx/Drg/73Co9r/bOyhH49yahScRdTKBTahAAiTAfBe/FYO9cdo
 1TXwfkayYbcKecY3W0iarfN6MY5Du11TBHdqnHMZAu5qAm810cXC9V6LvIdZATF1XDoEnEF0SB
 b9UoeMdUGutRB3Dh1HHfehBdbly30R3ad/iSs+jcLqJqlrpSEl79ny6y3zdt530ZnXz2sjohU0
 0tLoY99FponZydJBzSLhfc7liH65Sm8Sru3S5o19vQLWiS8E8Oe693+WVk2zC2LIn2p8I7+Jp1
 AaM=
X-SBRS: None
X-MesageID: 29068941
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="29068941"
Date: Thu, 1 Oct 2020 12:53:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 5/8] x86/hvm: Obsolete the use of HVM_PARAM_NESTEDHVM
Message-ID: <20201001105321.GC19254@Air-de-Roger>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-6-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200930134248.4918-6-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 02:42:45PM +0100, Andrew Cooper wrote:
> With XEN_DOMCTL_CDF_nested_virt now passed properly to domain_create(),
> reimplement nestedhvm_enabled() to use the property which is fixed for the
> lifetime of the domain.
> 
> This makes the call to nestedhvm_vcpu_initialise() from hvm_vcpu_initialise()
> no longer dead.  It became logically dead with the Xend => XL transition, as
> they initialise HVM_PARAM_NESTEDHVM in opposite orders with respect to
> XEN_DOMCTL_max_vcpus.
> 
> There is one opencoded user of nestedhvm_enabled() in HVM_PARAM_ALTP2M's
> safety check.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

It's nice that the logic is already present in hvm_vcpu_initialise :).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:54:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:54:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1188.3957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwES-0000ZK-RZ; Thu, 01 Oct 2020 10:54:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1188.3957; Thu, 01 Oct 2020 10:54:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwES-0000ZD-Nx; Thu, 01 Oct 2020 10:54:40 +0000
Received: by outflank-mailman (input) for mailman id 1188;
 Thu, 01 Oct 2020 10:54:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNwER-0000Z6-2W
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:54:39 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9867bb9-4b91-42f9-8fe4-ef94a6c609e1;
 Thu, 01 Oct 2020 10:54:37 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id d4so2400725wmd.5
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:54:37 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u17sm9484415wri.45.2020.10.01.03.54.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:54:36 -0700 (PDT)
X-Inumbo-ID: b9867bb9-4b91-42f9-8fe4-ef94a6c609e1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=XzwOsbTGpM4SQ7tKKfoY0aledk1mN59a58RYZaRsyCw=;
        b=GRpazTvMpbqIBKvdbRFeo7BLnfoOmpf31OtfJVkx23UncxzCMBm0l+ZU7ns9IJDkBB
         jwo7Z2V3dDJLNJGn5CHpOjNPK6xiGCC6o6U18omxIU8bArhBTmP042xGTEPK35pNBRz9
         BpmhLmkVmBKSoBa6DjwjnvHnvL8QXZHr1xxH2GNQyPxSFEwdR/Nt726oZNCqqEga6sjy
         h92QtjMhXW9PaPJtJVFQlOV0wL5SsrjgT/Bq4Zt5RBjGcEmQxqwRldIP0dgcR+SOWVvd
         7r7FJe+vLREptHW7BV1eWxtwGfVm3zoPXMc6QU0bDqC2t9wzgdsn/28ae1COtYAfv4SG
         94Zg==
X-Gm-Message-State: AOAM530Q98yqWbNABBPjj2R/i3ayssbPf07BfFL8jj8Zjto8fR4RsDxX
	EmBk4liGIUKGrhBPVIyShxg=
X-Google-Smtp-Source: ABdhPJxlem8yu9aucJ6sD2dvviDiI2124AYmP+mYc9avuuKlntDd7L1OFtC9OidoW4uLFeDvjuy7Ng==
X-Received: by 2002:a7b:c103:: with SMTP id w3mr7661286wmi.24.1601549676510;
        Thu, 01 Oct 2020 03:54:36 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:54:34 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 1/8] tools/libxl: Simplify DOMCTL_CDF_ flags handling in
 libxl__domain_make()
Message-ID: <20201001105434.j2x4ephrfq23p6il@liuwe-devbox-debian-v2>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-2-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200930134248.4918-2-andrew.cooper3@citrix.com>
User-Agent: NeoMutt/20180716

On Wed, Sep 30, 2020 at 02:42:41PM +0100, Andrew Cooper wrote:
> The use of the ternary operator serves only to obfuscate the code.  Rewrite it
> in simpler terms, avoiding the need to conditionally OR zero into the
> flags.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:55:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:55:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1189.3969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwEp-0000fW-3o; Thu, 01 Oct 2020 10:55:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1189.3969; Thu, 01 Oct 2020 10:55:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwEp-0000fP-0j; Thu, 01 Oct 2020 10:55:03 +0000
Received: by outflank-mailman (input) for mailman id 1189;
 Thu, 01 Oct 2020 10:55:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNwEo-0000fG-Jh
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:55:02 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3224e030-c294-42e1-9b8c-c9e573b2e348;
 Thu, 01 Oct 2020 10:55:01 +0000 (UTC)
X-Inumbo-ID: 3224e030-c294-42e1-9b8c-c9e573b2e348
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601549701;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=c8WJl4FCPhwG3wElqcuyPsTZPYQsAKsKXAsqf6rKdYs=;
  b=NAvhKYCaPjsza2Ev4zuVDqvNLtx1N+E+E5koNybfXDKxbd3k2D5a19wc
   Mr3KQ0TfzfRQtXrc6DQSZatTtQWPfP1udVP1x4qucON74+Kdz8Cu1QuqZ
   LgW+NGy5M5NVqxdlOHIgj40HFz1lfVzMiQexHtOzMoQ+cK2CLETSwimMi
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: QgiMuZgrebGWDPm9zs87Pe4SM90UdhXPtn0f6GTAohN3O2jqqJsg57tUn4yv5O2i7ypZSbpOIi
 dE3Yj6Q8NGni7jFtHJGDRPF5n5fmDcgBdMojQAipzxcg2X8AuntWWhyGwDyFxZ5j7cMxoyi3di
 rZjGQPBGeMoOQKXiCuFcoR2rmDfLfc7KjTPzympfDxBlle1a39xEDq+xKMfsFknxweFESRHP1P
 x4yPtGbyUuaR7e2+sZ8LdcgseDTNFVvA67fYMaGteI7P60d3rjMcmQ5MXMhpUjGfYvinca8lqq
 nuM=
X-SBRS: None
X-MesageID: 28328134
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28328134"
Date: Thu, 1 Oct 2020 12:54:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 6/8] xen/xsm: Drop xsm_hvm_param_nested()
Message-ID: <20201001105453.GD19254@Air-de-Roger>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-7-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200930134248.4918-7-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 02:42:46PM +0100, Andrew Cooper wrote:
> The sole caller has been removed.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:55:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:55:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1191.3980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwFA-0000lx-Hr; Thu, 01 Oct 2020 10:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1191.3980; Thu, 01 Oct 2020 10:55:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwFA-0000lq-Ep; Thu, 01 Oct 2020 10:55:24 +0000
Received: by outflank-mailman (input) for mailman id 1191;
 Thu, 01 Oct 2020 10:55:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNwF8-0000lb-Vu
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:55:23 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63d146d4-b5be-4c8c-b82d-01a716981bf8;
 Thu, 01 Oct 2020 10:55:22 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id k15so5125258wrn.10
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:55:22 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id t16sm2977793wmi.18.2020.10.01.03.55.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:55:20 -0700 (PDT)
X-Inumbo-ID: 63d146d4-b5be-4c8c-b82d-01a716981bf8
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=7NEmW18qKW/Lt+bOZTfoQ7sXNngWHPAc1cKQm0Bllf4=;
        b=gtFWXPzzP1QV9XoORSr23bJN9J2QO2Fe1IxB7GkNPUkCKf2/1jC4082ffPA8tTJmMa
         LvCkuCZNXhC5wG2YSZgwHriwg4Ln5DQYp5RksyzWfxlfSi12EsfPn4KUqyxOGnRxW/H0
         ZM3jBDK3r8sDUEX2C4QNtcagbLp+YgCSodkxHRaQ3ip9DQyeyEpi+24rUsCcDgGEqIC3
         4qhOOuBEIJSa1URsU+meeSm+4D1mAjNU/wTulg+7qgpUcXlsaU81z0ZNbfO4FI7Uvis/
         g46Z3NTgL/nrij2aC+08mA3+RskEuyQUB06NRjZEuVhSigid4socBtXPXhowpkdjgJQs
         /bWQ==
X-Gm-Message-State: AOAM53063HVm6az9zv9qKqKd16TUBSQ0TTCanX9AO55G12odvfby7kA9
	tKVKirH7pCS8BHNn3lSvqIg=
X-Google-Smtp-Source: ABdhPJzAvaNwaA2eDu6aLqCQgdSsWIX2fX85ot72QR93lbKv0BNQOVqnzJnprfe5H57cVzQmjs8TNg==
X-Received: by 2002:adf:a3d4:: with SMTP id m20mr8835398wrb.29.1601549721445;
        Thu, 01 Oct 2020 03:55:21 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:55:19 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/8] xen/domctl: Simplify DOMCTL_CDF_ checking logic
Message-ID: <20201001105519.mqguww6gqk5rwasy@liuwe-devbox-debian-v2>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-3-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200930134248.4918-3-andrew.cooper3@citrix.com>
User-Agent: NeoMutt/20180716

On Wed, Sep 30, 2020 at 02:42:42PM +0100, Andrew Cooper wrote:
> Introduce some local variables to make the resulting logic easier to follow.
> Join the two IOMMU checks in sanitise_domain_config().  Tweak some of the
> terminology for better accuracy.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wl@xen.org>
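
The kind of cleanup under review can be sketched like this. The structure,
flag names, and the second constraint are simplified stand-ins, not the real
sanitise_domain_config() logic:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the real XEN_DOMCTL_CDF_* flags. */
#define CDF_hvm    (1u << 0)
#define CDF_hap    (1u << 1)
#define CDF_iommu  (1u << 2)

struct domain_config {
    uint32_t flags;
};

/* Hoisting each flag test into a named boolean once, up front, lets the
 * subsequent checks read as prose rather than repeated bit fiddling. */
static int sanitise_config(const struct domain_config *cfg)
{
    bool hvm   = cfg->flags & CDF_hvm;
    bool hap   = cfg->flags & CDF_hap;
    bool iommu = cfg->flags & CDF_iommu;

    if ( hap && !hvm )
        return -22; /* -EINVAL: HAP only makes sense for HVM guests. */

    if ( iommu && hap && !hvm )
        return -22; /* -EINVAL: made-up combined check, for illustration. */

    return 0;
}
```

The point is purely readability: each condition names what it tests, and
related checks can be joined without re-deriving the same expressions.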


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:56:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:56:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1195.3992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwGF-0000wb-Sm; Thu, 01 Oct 2020 10:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1195.3992; Thu, 01 Oct 2020 10:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwGF-0000wU-Po; Thu, 01 Oct 2020 10:56:31 +0000
Received: by outflank-mailman (input) for mailman id 1195;
 Thu, 01 Oct 2020 10:56:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNwGE-0000wK-Jv
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:56:30 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e73a7f5-bbf2-40f2-b6c9-424fc8206d95;
 Thu, 01 Oct 2020 10:56:29 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id x23so2425109wmi.3
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:56:29 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id n6sm7973552wmd.22.2020.10.01.03.56.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:56:28 -0700 (PDT)
X-Inumbo-ID: 6e73a7f5-bbf2-40f2-b6c9-424fc8206d95
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=NuIINmqUvhxVyBOdo9CdFrllv4t6Yy+33Fu7/wllaRU=;
        b=iu2z+IwzrzgokbkbptExKFg3jUqEEawmqEwexSwsawI1DnRcF06r/aUlwhQEr4/LCS
         qdwwSz1j4gLx47f6ir+oyazspXaeFmj7IIskYlRhu7dVMhutZU5iIrKdBH1v/bph/vwF
         qmU/7YiUE8kHYxRdDoJQ7NgNY1Po9b7Dmxwl2I6h6UlmQRPKPUCeHUh+czSLptzyc0LQ
         7xWDVcPdbFVofXL6bPY+FnNIMUSukJriprA9xni4ujdInnpWldYbrizbvXcNj1/aUXXY
         u/kcszSGKXDZNkq8iWLFrrvGKZaMVplykG3a5Eij9A8ejW5R3mnTZdeetVSYn4DehL4X
         +M0A==
X-Gm-Message-State: AOAM532Z0+3EPw5mILAaKhSHXePNl6ag8MDwk9xav3NOTDUANTgbaKyy
	Vip83rjcNwtylByPhehE3lY=
X-Google-Smtp-Source: ABdhPJwSWC1dnrjUlNTTc7SPjLnTgdftNMRLgorWyooJhdiSDwzYi/DAG6TVECmLeQBeoBNKmwB1EA==
X-Received: by 2002:a1c:98d8:: with SMTP id a207mr7569434wme.157.1601549789151;
        Thu, 01 Oct 2020 03:56:29 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:56:27 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	Edwin =?utf-8?B?VMO2csO2aw==?= <edvin.torok@citrix.com>,
	Rob Hoes <Rob.Hoes@citrix.com>
Subject: Re: [PATCH 3/8] xen/domctl: Introduce and use
 XEN_DOMCTL_CDF_nested_virt
Message-ID: <20201001105627.4rckkcl67jbw4ekr@liuwe-devbox-debian-v2>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-4-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200930134248.4918-4-andrew.cooper3@citrix.com>
User-Agent: NeoMutt/20180716

On Wed, Sep 30, 2020 at 02:42:43PM +0100, Andrew Cooper wrote:
> Like other major areas of functionality, nested virt (or not) needs to be
> known at domain creation time for sensible CPUID handling, and wants to be
> known this early for sensible infrastructure handling in Xen.
> 
> Introduce XEN_DOMCTL_CDF_nested_virt and modify libxl to set it appropriately
> when creating domains.  There is no need to adjust the ARM logic to reject the
> use of this new flag.
> 
> No functional change yet.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wl@xen.org>
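
The shape of a creation-time check for such a flag can be sketched as below;
the flag values, the helper name, and the error handling are illustrative
stand-ins rather than the actual hypervisor code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CDF_hvm          (1u << 0)
#define CDF_nested_virt  (1u << 1) /* stand-in for XEN_DOMCTL_CDF_nested_virt */

/* Nested virt only makes sense for HVM guests, so the creation-time
 * sanity check can reject the combination before the domain exists. */
static int check_creation_flags(uint32_t flags)
{
    bool hvm    = flags & CDF_hvm;
    bool nested = flags & CDF_nested_virt;

    if ( nested && !hvm )
        return -22; /* -EINVAL */

    return 0;
}
```

Because the flag is validated and fixed at creation, later code can rely on
it without re-checking mutable per-domain state.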


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:56:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:56:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1197.4004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwGh-00014I-4s; Thu, 01 Oct 2020 10:56:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1197.4004; Thu, 01 Oct 2020 10:56:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwGh-00014B-1u; Thu, 01 Oct 2020 10:56:59 +0000
Received: by outflank-mailman (input) for mailman id 1197;
 Thu, 01 Oct 2020 10:56:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNwGf-000140-GB
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:56:57 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89ae8e78-3b30-440d-8da1-5d39ff710382;
 Thu, 01 Oct 2020 10:56:56 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id w5so5137633wrp.8
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:56:56 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 76sm8804381wma.42.2020.10.01.03.56.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:56:55 -0700 (PDT)
X-Inumbo-ID: 89ae8e78-3b30-440d-8da1-5d39ff710382
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=1XWcXD6FFUKsSbEt2l0rT2x4DcMcLdIMbmuD92O09kM=;
        b=g+3plM+bY/dAQX5U4p0NnkOUZf8mc0c/as/7JWYT3iZ52bwDWOHzgtK1rFOgZkgSq5
         oVoizG2BvO1niR43GuM1nD4SmFjcWuT140P38gLt8g+CQjrIXkvSN5CY0bLrg1kz+1iO
         R+Pm+pT90sUMRXK4hEp5bh7ulSY0JyOlB+zHEKVonUaGBPDlYcBVULjXj3yUOk00JQho
         k3xA70rO5JkwLqWtiRp0tK1FC2o7fU8jMs6MjmYi4vC1Wjxqs1rQVcGVrGW+xcxRD8ie
         fCd8qyD9/kOPKM25Hi/cGd0wtzcGTl4oMiNwHzRiXq3J1q6t2VMrnX0/6+YDx35Qc5p3
         qItg==
X-Gm-Message-State: AOAM532uGxm8BtNePe1EYMu6g3aNmhbir0ebArn8W4sRjsy+DtwefGgn
	Ydjm2v45AsDqh+bKj93fNHo=
X-Google-Smtp-Source: ABdhPJyOPqp5FcwJ+Ph8VeXnlUdUKpXUf9Ltc/lGefwL1TxBLNHKjGkUP+Tv5huKOtEs16Mdavg8bA==
X-Received: by 2002:a5d:4603:: with SMTP id t3mr8011710wrq.424.1601549816158;
        Thu, 01 Oct 2020 03:56:56 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:56:54 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 4/8] tools/cpuid: Plumb nested_virt down into
 xc_cpuid_apply_policy()
Message-ID: <20201001105654.kpx2sxehk6ca5vc7@liuwe-devbox-debian-v2>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-5-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200930134248.4918-5-andrew.cooper3@citrix.com>
User-Agent: NeoMutt/20180716

On Wed, Sep 30, 2020 at 02:42:44PM +0100, Andrew Cooper wrote:
> Nested Virt is the final special case in legacy CPUID handling.  Pass the
> (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
> the semantic dependency on HVM_PARAM_NESTEDHVM.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

This has gone in.
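
Plumbing a setting through as an explicit parameter, rather than having the
library infer it from guest state, looks roughly like this. The type and
function here are simplified stand-ins; the real xc_cpuid_apply_policy()
takes many more arguments:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in CPUID policy: one feature bit controlled by nested virt. */
struct cpuid_policy {
    bool vmx;
};

/* After the change: the caller states its intent directly, instead of
 * the library reading HVM_PARAM_NESTEDHVM back out of the guest. */
static void apply_policy(struct cpuid_policy *p, bool nested_virt)
{
    p->vmx = nested_virt;
}
```

The benefit is that the function's behaviour now depends only on its inputs,
not on mutable state set elsewhere in the toolstack.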


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:57:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1198.4017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwHM-0001BS-FS; Thu, 01 Oct 2020 10:57:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1198.4017; Thu, 01 Oct 2020 10:57:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwHM-0001BL-Bf; Thu, 01 Oct 2020 10:57:40 +0000
Received: by outflank-mailman (input) for mailman id 1198;
 Thu, 01 Oct 2020 10:57:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNwHL-0001BD-7G
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:57:39 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac66f2ce-4205-4de9-82e1-ecc1ec75aed1;
 Thu, 01 Oct 2020 10:57:38 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id e2so2549043wme.1
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:57:38 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q13sm8259640wra.93.2020.10.01.03.57.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:57:37 -0700 (PDT)
X-Inumbo-ID: ac66f2ce-4205-4de9-82e1-ecc1ec75aed1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=dzol3yVXVExT+qqaUvqTP0armzVQ4jkLbrkE6c3+vCE=;
        b=G964OkoIZYYCkw9m3njlT64p4BIhk6FwqDU2pHdMOfboLVSrlro1JbV4eJeRpqw7uK
         DGgFIhLODS8vo3+opx0UFNIl8hvLoBNo8MVfscFPOGyZ3OYpN3znrvbJJwVflf+vS2hB
         LDEwZVHG4gMJDoSK0cXvH9PcQVZeA/zvunUWFyceSMPmf+IEdiu+Eoy75rHdklsZ7rLk
         n+DmCHaPIjBGOQdzq5HFTklEGeZQkm6Bu9YaNKYgmyDJiK+jNAW1GP/PiFWCBKaoRX3c
         t7d4Lw1dfh3RokDpLDVT2moySoACQeTBRgQ5SheqH2Ox17/RvrldsLAOn4ZqP+Lkv6j8
         gHIw==
X-Gm-Message-State: AOAM5332uAdJw6+sVQkT/UqinoD7rGE1/+vrEumVARsID/CTnx26BuU5
	nqB8BPJnWfmAPBrT1ZePaNw=
X-Google-Smtp-Source: ABdhPJyq91TaOQK0xGPuaA9/N+yacyJhyHZmyvQdsU5rZDY9lmb4B9yy1HjQMVX0vTocC38I8Td48Q==
X-Received: by 2002:a7b:cbd4:: with SMTP id n20mr8311038wmi.105.1601549857646;
        Thu, 01 Oct 2020 03:57:37 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:57:35 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 5/8] x86/hvm: Obsolete the use of HVM_PARAM_NESTEDHVM
Message-ID: <20201001105735.oiywarm7jjkbbjho@liuwe-devbox-debian-v2>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-6-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200930134248.4918-6-andrew.cooper3@citrix.com>
User-Agent: NeoMutt/20180716

On Wed, Sep 30, 2020 at 02:42:45PM +0100, Andrew Cooper wrote:
> With XEN_DOMCTL_CDF_nested_virt now passed properly to domain_create(),
> reimplement nestedhvm_enabled() to use the property which is fixed for the
> lifetime of the domain.
> 
> This makes the call to nestedhvm_vcpu_initialise() from hvm_vcpu_initialise()
> no longer dead.  It became logically dead with the Xend => XL transition, as
> they initialise HVM_PARAM_NESTEDHVM in opposite orders with respect to
> XEN_DOMCTL_max_vcpus.
> 
> There is one opencoded user of nestedhvm_enabled() in HVM_PARAM_ALTP2M's
> safety check.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wl@xen.org>
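
A rough sketch of the direction described above: derive the predicate from
an immutable creation-time flag rather than a mutable HVM param. The types,
field names, and flag values are simplified stand-ins for the real
structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define CDF_hvm          (1u << 0)
#define CDF_nested_virt  (1u << 1)

struct domain {
    uint32_t options;   /* creation flags, fixed for the domain's lifetime */
    uint64_t params[8]; /* mutable HVM params; index 0 stands in for
                           HVM_PARAM_NESTEDHVM */
};

/* Before: the answer could change whenever the param was rewritten. */
static bool nestedhvm_enabled_old(const struct domain *d)
{
    return (d->options & CDF_hvm) && d->params[0];
}

/* After: derived purely from the immutable creation flags. */
static bool nestedhvm_enabled_new(const struct domain *d)
{
    return (d->options & CDF_hvm) && (d->options & CDF_nested_virt);
}
```

With the new form, code paths such as vCPU initialisation can query the
predicate at any point after domain_create() and get a stable answer,
regardless of the order in which the toolstack sets params.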


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 10:58:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 10:58:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1199.4029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwHi-0001IS-PU; Thu, 01 Oct 2020 10:58:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1199.4029; Thu, 01 Oct 2020 10:58:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwHi-0001IK-M2; Thu, 01 Oct 2020 10:58:02 +0000
Received: by outflank-mailman (input) for mailman id 1199;
 Thu, 01 Oct 2020 10:58:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNwHi-0001ID-3o
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 10:58:02 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ffffc56b-aa11-4110-9b88-07263098518f;
 Thu, 01 Oct 2020 10:58:01 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id m6so5162644wrn.0
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 03:58:01 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q15sm8212783wrr.8.2020.10.01.03.57.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 03:58:00 -0700 (PDT)
X-Inumbo-ID: ffffc56b-aa11-4110-9b88-07263098518f
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=JXGMFEmYY83EHbsXJJPiEpwV2Khmtulu6FOAUZZx04A=;
        b=bISffOZMOjdX6uzw86Tun3IXlQIo2cOWaSflxxIFi70GelPA9aBft8a2no1uAGuI/q
         aldbnZ8cvv5hy6zI4f+mYxsw/8RHAclhXozhXQGVRwdiGTCIKwFKTngBG/vx3HqxytEv
         PXA2byUqWvAfIY3HtfenL/mMqXfgY3tjGoldIGeq38jt4sjlxJXxELTNBi/yPWBz6nTA
         i5GIktfbX1K93oLbwnCq2Zs7fMEu8x7UaDojAkg3Cvi9y2d3T2HgbjVseDsNK7gH3Z1j
         uWwINaVDVwSOb4ZV5YE8bZfDowLIJfaWyykitbnfS9hPDdfJfOE0yifdd3hG4eh8jwoh
         kzGA==
X-Gm-Message-State: AOAM532snDoQNWBFZOhXLTIImyPB6DEfbHkx3rsTmTpBvR1+nwsKzgdC
	Fca+4j+uI65OKQ06Ye0w2as=
X-Google-Smtp-Source: ABdhPJzLvpdzOpN2WAdA8DuW8oTWuAj33IBdqntKaPAnC5m+GT+ZV/j4b0HgjcxY1Da+PM5rIyBw/Q==
X-Received: by 2002:a5d:40c4:: with SMTP id b4mr8269104wrq.151.1601549880514;
        Thu, 01 Oct 2020 03:58:00 -0700 (PDT)
Date: Thu, 1 Oct 2020 10:57:58 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 6/8] xen/xsm: Drop xsm_hvm_param_nested()
Message-ID: <20201001105758.ung7wpw2klgcpmq6@liuwe-devbox-debian-v2>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-7-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200930134248.4918-7-andrew.cooper3@citrix.com>
User-Agent: NeoMutt/20180716

On Wed, Sep 30, 2020 at 02:42:46PM +0100, Andrew Cooper wrote:
> The sole caller has been removed.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1204.4041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwJs-0002Ct-7O; Thu, 01 Oct 2020 11:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1204.4041; Thu, 01 Oct 2020 11:00:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwJs-0002Cm-2r; Thu, 01 Oct 2020 11:00:16 +0000
Received: by outflank-mailman (input) for mailman id 1204;
 Thu, 01 Oct 2020 11:00:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNwJq-0002Cg-DI
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:00:14 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f4a098c-e0bb-43b8-b6bc-4835b20eba53;
 Thu, 01 Oct 2020 11:00:11 +0000 (UTC)
X-Inumbo-ID: 0f4a098c-e0bb-43b8-b6bc-4835b20eba53
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601550011;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=g8PD7aAPS14JnU9fXeNLGO0c4542Mu0jtMFj9/IPPQo=;
  b=SC4TjOHXOZ1qQRafNmKAi5QOCUMI3gTv0ax2IYJ9pdx3nTkqbtWbkxw2
   QKqjPfBsQdC3y+NaaafHN5SN3iiI/nOsefGWXOc2IhF8RCgC5kTS6UkVo
   NXF8LmEr6kYAl637f0d7vVNXQ7XNr/YdcbpOWFgmIltYJLKDN4/PCnbXi
   w=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 2+LqwWxtJVIFsOQGy1B7LY2+XDm41mI5i0AdsvFcWqh11SCL+vgw6vWgWu/VFF42uIjLkrAzIY
 x8RmoSQz0Q2tIUqFzMw2Xb8CuOcKRF6ts9GUYqEOx4NJGKj8HMb+44/nwFaKoU+VQEJkcmLgRN
 uM9T8bn7Yb+nfAqxPjvIAiXYgYh31D3rUXk5/eepxM0L6y/wI4+ruWQkvKQxahjQr3zhkiZElO
 +8Z0dxuwOoeoHqYHuD2Af7N8wqM9bwfsm4u7gnDzgoyWsOAulVzaYOHlSH5aN8Pw6FYbjfziDh
 Iow=
X-SBRS: None
X-MesageID: 29069257
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="29069257"
Date: Thu, 1 Oct 2020 13:00:03 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH 7/8] x86/hvm: Drop restore boolean from
 hvm_cr4_guest_valid_bits()
Message-ID: <20201001110003.GE19254@Air-de-Roger>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-8-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200930134248.4918-8-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 02:42:47PM +0100, Andrew Cooper wrote:
> Previously, migration was reordered so the CPUID data was available before
> register state.  nestedhvm_enabled() has recently been made accurate for the
> entire lifetime of the domain.
> 
> Therefore, we can drop the bodge in hvm_cr4_guest_valid_bits() which existed
> previously to tolerate a guest's CR4 being set/restored before
> HVM_PARAM_NESTEDHVM.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, just one nit below.

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Jun Nakajima <jun.nakajima@intel.com>
> CC: Kevin Tian <kevin.tian@intel.com>
> ---
>  xen/arch/x86/hvm/domain.c       | 2 +-
>  xen/arch/x86/hvm/hvm.c          | 8 ++++----
>  xen/arch/x86/hvm/svm/svmdebug.c | 6 ++++--
>  xen/arch/x86/hvm/vmx/vmx.c      | 2 +-
>  xen/arch/x86/hvm/vmx/vvmx.c     | 2 +-
>  xen/include/asm-x86/hvm/hvm.h   | 2 +-
>  6 files changed, 12 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
> index 8e3375265c..0ce132b308 100644
> --- a/xen/arch/x86/hvm/domain.c
> +++ b/xen/arch/x86/hvm/domain.c
> @@ -275,7 +275,7 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
>      if ( v->arch.hvm.guest_efer & EFER_LME )
>          v->arch.hvm.guest_efer |= EFER_LMA;
>  
> -    if ( v->arch.hvm.guest_cr[4] & ~hvm_cr4_guest_valid_bits(d, false) )
> +    if ( v->arch.hvm.guest_cr[4] & ~hvm_cr4_guest_valid_bits(d) )
>      {
>          gprintk(XENLOG_ERR, "Bad CR4 value: %#016lx\n",
>                  v->arch.hvm.guest_cr[4]);
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 101a739952..54e32e4fe8 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -972,14 +972,14 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
>          X86_CR0_CD | X86_CR0_PG)))
>  
>  /* These bits in CR4 can be set by the guest. */
> -unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore)
> +unsigned long hvm_cr4_guest_valid_bits(const struct domain *d)
>  {
>      const struct cpuid_policy *p = d->arch.cpuid;
>      bool mce, vmxe;
>  
>      /* Logic broken out simply to aid readability below. */
>      mce  = p->basic.mce || p->basic.mca;
> -    vmxe = p->basic.vmx && (restore || nestedhvm_enabled(d));
> +    vmxe = p->basic.vmx && nestedhvm_enabled(d);
>  
>      return ((p->basic.vme     ? X86_CR4_VME | X86_CR4_PVI : 0) |
>              (p->basic.tsc     ? X86_CR4_TSD               : 0) |
> @@ -1033,7 +1033,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>          return -EINVAL;
>      }
>  
> -    if ( ctxt.cr4 & ~hvm_cr4_guest_valid_bits(d, true) )
> +    if ( ctxt.cr4 & ~hvm_cr4_guest_valid_bits(d) )
>      {
>          printk(XENLOG_G_ERR "HVM%d restore: bad CR4 %#" PRIx64 "\n",
>                 d->domain_id, ctxt.cr4);
> @@ -2425,7 +2425,7 @@ int hvm_set_cr4(unsigned long value, bool may_defer)
>      struct vcpu *v = current;
>      unsigned long old_cr;
>  
> -    if ( value & ~hvm_cr4_guest_valid_bits(v->domain, false) )
> +    if ( value & ~hvm_cr4_guest_valid_bits(v->domain) )
>      {
>          HVM_DBG_LOG(DBG_LEVEL_1,
>                      "Guest attempts to set reserved bit in CR4: %lx",
> diff --git a/xen/arch/x86/hvm/svm/svmdebug.c b/xen/arch/x86/hvm/svm/svmdebug.c
> index ba26b6a80b..f450391df4 100644
> --- a/xen/arch/x86/hvm/svm/svmdebug.c
> +++ b/xen/arch/x86/hvm/svm/svmdebug.c
> @@ -106,6 +106,7 @@ bool svm_vmcb_isvalid(const char *from, const struct vmcb_struct *vmcb,
>      unsigned long cr0 = vmcb_get_cr0(vmcb);
>      unsigned long cr3 = vmcb_get_cr3(vmcb);
>      unsigned long cr4 = vmcb_get_cr4(vmcb);
> +    unsigned long valid;

Could you init valid here at definition time? Also cr4_valid might be
a better name since the scope of the variable is quite wide.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:02:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:02:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1206.4052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwMJ-0002Lb-K0; Thu, 01 Oct 2020 11:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1206.4052; Thu, 01 Oct 2020 11:02:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwMJ-0002LU-Gz; Thu, 01 Oct 2020 11:02:47 +0000
Received: by outflank-mailman (input) for mailman id 1206;
 Thu, 01 Oct 2020 11:02:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uQij=DI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kNwMH-0002LP-Mg
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:02:45 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 288b56f7-752b-4029-a158-64a97d0ea117;
 Thu, 01 Oct 2020 11:02:44 +0000 (UTC)
X-Inumbo-ID: 288b56f7-752b-4029-a158-64a97d0ea117
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601550164;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=XmSD0smz/pk8Ati3b8Dzz9Mct/ibGnDTXyZisU7AXgQ=;
  b=IeuA8tmchugigdcFZSGLHNSjmPKbYJYHtFS6PjcEwBBy7OhehVQ8+kP5
   rBUB3JrFljr59n7Tm4Bbjq7e+LiS0KSucna5htLYvHi+wkYa/DZ20JdlT
   uoq1mX20kfKLxdzi/13Z5VBkiKBw0f8FkR6FfYoENMdJoHTXPuUVPm5oz
   g=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: cro7dUeUxmZSoDaz9kczXxIT6905YTfydMRownwBVdbW+To5OxBBS+IyZ2cSRSxN0WemvhJd6d
 04GETImC/F1H4gH3JtHAlKeZ6v//6mqQ0bLL/NVNLbxFfHdxQnOSo2i/4lm6O78jiWGhDOuqDE
 9yMJW8lIE8qpQDcCaWHTm27PiDYoCkMzRs4gAl1uCfsuUePvwmfmxqM+mwk1GXhUeN7zYSk+eq
 Pi906Adolxm40XMNA3fv1twxTUzRn6F3p/WbxWRuPq99GLILyYliO4pQ7tdjfzTtXvrUQkvWpP
 YUI=
X-SBRS: None
X-MesageID: 28147959
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28147959"
Subject: Re: [PATCH 3/8] xen/domctl: Introduce and use
 XEN_DOMCTL_CDF_nested_virt
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Christian Lindig
	<christian.lindig@citrix.com>, =?UTF-8?B?RWR3aW4gVMO2csO2aw==?=
	<edvin.torok@citrix.com>, Rob Hoes <Rob.Hoes@citrix.com>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-4-andrew.cooper3@citrix.com>
 <32a955ad-9995-6df2-3d7b-6b3eb7b1d656@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8e25a67b-21f1-5203-6531-4ed378a74bfb@citrix.com>
Date: Thu, 1 Oct 2020 12:02:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <32a955ad-9995-6df2-3d7b-6b3eb7b1d656@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 01/10/2020 11:23, Jan Beulich wrote:
> On 30.09.2020 15:42, Andrew Cooper wrote:
>> @@ -667,6 +668,12 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>           */
>>          config->flags |= XEN_DOMCTL_CDF_oos_off;
>>  
>> +    if ( nested_virt && !hap )
>> +    {
>> +        dprintk(XENLOG_INFO, "Nested virt not supported without HAP\n");
>> +        return -EINVAL;
>> +    }
> Initially I was merely puzzled by this not being accompanied by
> any removal of code elsewhere. But when I started looking I couldn't
> find any such enforcement, but e.g. did find nsvm_vcpu_hostrestore()
> covering the shadow mode case. For this to be "No functional change
> yet" as the description claims, could you point me at where this
> restriction is currently enforced?

Currently enforced in the HVM_PARAM_NESTEDHVM write side effect, which
is deleted in patch 5.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:06:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:06:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1209.4064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwPY-0002XV-C4; Thu, 01 Oct 2020 11:06:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1209.4064; Thu, 01 Oct 2020 11:06:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwPY-0002XO-8k; Thu, 01 Oct 2020 11:06:08 +0000
Received: by outflank-mailman (input) for mailman id 1209;
 Thu, 01 Oct 2020 11:06:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNwPX-0002XI-F5
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:06:07 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bb6d98a-9eba-4b87-972e-f5e9c7ad2ef1;
 Thu, 01 Oct 2020 11:06:06 +0000 (UTC)
X-Inumbo-ID: 2bb6d98a-9eba-4b87-972e-f5e9c7ad2ef1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601550367;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=WFxzw03ykNV/tSUsQJQrg4smpq/kZ88dl3LIJ7m3gco=;
  b=BzlBUeeC1KI/L+zXJrd3O8nVFHgSf4WqV9v4ZsjWS5Rb3rVqkNYuYNjb
   sEVyD0IqPAawLNRTYsA63+pTUiA4nvTDSHo1mcJeuAqFn5NdBweRCH+KZ
   ftWZ5bIAEbYDCnrZGVY6r/KwY8W2Fd4f4x55evJIB6/8ubZehzhKKTf/w
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: u8/irix7pqyCTWtGfl8AH+40goQQGWiUhwUQISqQf0FLUVAp+68pOxHlEjNUPYedgxNobgSZLz
 Vw7N9tNGqUbRnS9UTiS26jchyseetwmSjHmo+tNdH3DnXlJ9T5if3uAzu/CMtrF7axrfnu/2sX
 i5YgGYUXCAXwlOnf3VTFxv6dLxIPEImUn+sxqsqz7tOddQ44Rpv3ZXHeclOXfF6y2A5cwlHU7H
 xkAC7qXEs/FWI5xRrc4M27nj7JNUn42xtXfsbJ786j/gEjdXv1nTMuPh0ma+y/jQ3AoXhD97Z/
 K/E=
X-SBRS: None
X-MesageID: 28037187
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28037187"
Date: Thu, 1 Oct 2020 13:04:59 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 8/8] x86/cpuid: Move VMX/SVM out of the default policy
Message-ID: <20201001110459.GF19254@Air-de-Roger>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-9-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200930134248.4918-9-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 02:42:48PM +0100, Andrew Cooper wrote:
> Nested virt is still experimental, and requires explicitly opting in at
> domain create time.  The VMX/SVM features should not be visible by default.
> 
> Also narrow their exposure from all HVM guests to just HAP-enabled guests.  This has
> been the restriction for SVM right from the outset (c/s e006a0e0aaa), while
> VMX was first introduced supporting shadow mode (c/s 9122c69c8d3) but later
> adjusted to HAP-only (c/s 77751ed79e3).
> 
> There is deliberately no adjustment to xc_cpuid_apply_policy() for pre-4.14
> migration compatibility.  The migration stream doesn't contain the required
> architectural state for either VMX/SVM, and a nested virt VM which migrates
> will explode in weird and wonderful ways.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:09:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:09:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1212.4077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwSY-0002k5-Qi; Thu, 01 Oct 2020 11:09:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1212.4077; Thu, 01 Oct 2020 11:09:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwSY-0002jy-NY; Thu, 01 Oct 2020 11:09:14 +0000
Received: by outflank-mailman (input) for mailman id 1212;
 Thu, 01 Oct 2020 11:09:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tj+q=DI=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kNwSX-0002jt-GP
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:09:13 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6fc8a135-3b0e-45df-acdc-e2f0ec0d9e73;
 Thu, 01 Oct 2020 11:09:12 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id w5so5181897wrp.8
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 04:09:12 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id y6sm8582534wrn.41.2020.10.01.04.09.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 04:09:10 -0700 (PDT)
X-Inumbo-ID: 6fc8a135-3b0e-45df-acdc-e2f0ec0d9e73
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=BBC6qGccP+Ahz8ySynzSc+nxZwUQo0EtEHZiNRfDzg4=;
        b=DvHd+EydUN4HcA9VhOp7x4jyIgS/I/H9Qz7bhSI++HcfM57mmC7XmsK5kzdO54Aij5
         urlEBGKSJe0UojNTCAvLGlzIQt5KPMIAaIEOeWkXOd+gvSVpNLGGASDsFXDyeDc6FrKp
         sXqwhiazQDnb7PqZXYR0rwY4qEHPfPi0BJUBtB9a/BQOPBqw7Z/59dUGIzwhw0aZseX2
         VIgSzyWIFGHf1yLV4aFZHxaJbXIFBAMJO5bVxzOP+41mcEfNyczogojIaHDKx05XGfO4
         /DtHE9KBPZ7aPY9WUGJN2EmVnvxkZL2vH8lkHaKto0WmYjfMgFwzAX1LnY76ZuPfqlK9
         8luQ==
X-Gm-Message-State: AOAM532f4bLkGNFAr8IAW7mmwYknmxEZOOapECsUv6DIVJIreDbcLcvv
	AWutb6ZHS+VhoxZvy4oV100=
X-Google-Smtp-Source: ABdhPJyHKjMmpiHBTt9+mp8HApLeCYs+jv/ifZ3LgII3as7ROD0o41hOyAp+hdwQS7zpglzwALhBAg==
X-Received: by 2002:adf:f903:: with SMTP id b3mr8580081wrr.142.1601550551801;
        Thu, 01 Oct 2020 04:09:11 -0700 (PDT)
Date: Thu, 1 Oct 2020 11:09:09 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [PATCH v2 0/3] Fix and cleanup xenguest.h
Message-ID: <20201001110909.hcmtwajognnegkqf@liuwe-devbox-debian-v2>
References: <20200925062031.12200-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200925062031.12200-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Fri, Sep 25, 2020 at 08:20:28AM +0200, Juergen Gross wrote:
> This series fixes builds of libxenguest users outside the Xen build
> system and it cleans up the xenguest.h header by merging xenctrl_dom.h
> into it.
> 
> Juergen Gross (3):
>   tools/libs: merge xenctrl_dom.h into xenguest.h
>   tools/libxenguest: make xc_dom_loader interface private to libxenguest
>   tools/lixenguest: hide struct elf_dom_parms layout from users

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:09:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:09:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1214.4090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwSs-0002pC-3T; Thu, 01 Oct 2020 11:09:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1214.4090; Thu, 01 Oct 2020 11:09:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwSs-0002p5-0X; Thu, 01 Oct 2020 11:09:34 +0000
Received: by outflank-mailman (input) for mailman id 1214;
 Thu, 01 Oct 2020 11:09:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNwSq-0002nk-NJ
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:09:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ccf22787-0452-4d0d-97e7-2e8b7c63eb13;
 Thu, 01 Oct 2020 11:09:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNwSj-0003kO-OT; Thu, 01 Oct 2020 11:09:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNwSj-0005mE-Hu; Thu, 01 Oct 2020 11:09:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNwSj-0005Ui-Gv; Thu, 01 Oct 2020 11:09:25 +0000
X-Inumbo-ID: ccf22787-0452-4d0d-97e7-2e8b7c63eb13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NvxmvG4fbOWjfswkrc9poPXMiGTXq9RAWyh6Q5SCr6M=; b=cSdfnUODN3RHYdqQxeDIdT1WBd
	+MNQ4/yND8XFs0/r3AYxRWRiIiCOzvii9nKutIzLeAx8JrkLFKFBDPSGAuKC2JTZU951xSLSevjtC
	pWCbMn20DOeKJ7oGbe/AMlt6VE2pnllhRScLy+sLJ395PXJunLzgj0kHADKW1CW5xB6E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155213-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155213: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=11852c7bb070a18c3708b4c001772a23e7d4fc27
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 11:09:25 +0000

flight 155213 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155213/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  11852c7bb070a18c3708b4c001772a23e7d4fc27
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    1 days
Testing same since   155144  2020-09-30 16:01:24 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom, Xenstore should set the maximum number of
    grants needed via a call to xengnttab_set_max_grants(), as otherwise
    the number of domains which can be supported is limited to 128 (the
    default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory
    that can be covered is limited to 8TB due to the 32bit bit count.
    Adjust the code to use 64bit values as input. All callers already
    use some form of 64bit as input, so no further adjustment is
    required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to meson, and the
    ./configure step with meson is more restrictive than it used to
    be: most installation paths must be within the prefix, otherwise
    we get this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    To work around the limitation, we set the prefix to the same one
    as for the rest of the Xen installation, and set all the other
    paths explicitly.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:22:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1221.4111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwf8-0004Ya-CG; Thu, 01 Oct 2020 11:22:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1221.4111; Thu, 01 Oct 2020 11:22:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNwf8-0004YT-8n; Thu, 01 Oct 2020 11:22:14 +0000
Received: by outflank-mailman (input) for mailman id 1221;
 Thu, 01 Oct 2020 11:22:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LTW6=DI=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kNwf6-0004YO-DT
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:22:12 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17e36e31-a73e-40f2-b3d5-a46f68827c57;
 Thu, 01 Oct 2020 11:22:10 +0000 (UTC)
X-Inumbo-ID: 17e36e31-a73e-40f2-b3d5-a46f68827c57
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601551330;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=qnExyjv72xS4XPByzvVUXHpFl+5PJR+AivecgqaXQ5o=;
  b=cCpQkg5mtQKecwDAIG0g8bjptfBRflNiI+6YHzcXSSPk6K55c6xu934n
   c9QH/qYt6NravygJqMDQ2Uq4MOX2j1Fu9R7mo4b5qdnXoLmYPqW5+u1l0
   4Vp6yqf4akiysCVltlXs/JLYI38GdmeYyJ1Q11ndL4lzBDmDO25sKEJ4a
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: pVs85hELVVepnvZSSMzlegb7wyVcQz9y9CR7mht92eybKcAka+cvI+/dTnHazZuU+Lcmo+QTz+
 ogfjKeXQANkkK3NqDNws5mJBLZXe8qjb7jT4r4gAJaoFJs6zt1JoOR7cbin/SWiI0bm25C5xzL
 Kr7mf+FIs9T2qJjnRUvz200nRwfx9qM0xNrRo+QoDBfJwZ3l3tZ/nEvYvf3CACmHopNedTsQje
 OoPALkS7I4q8xQiQsik1Zs8s/d9Bc6ZS7tlu65NLJLOo/3XqhJMOThrPIwq30g4VmWwLLdxCmK
 Jn0=
X-SBRS: None
X-MesageID: 28038335
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28038335"
Date: Thu, 1 Oct 2020 12:22:03 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Paul Durrant <paul@xen.org>
CC: <xen-devel@lists.xenproject.org>, <qemu-devel@nongnu.org>, Paul Durrant
	<pdurrant@amazon.com>, Jerome Leseinne <jerome.leseinne@gmail.com>, "Edwin
 Torok" <edvin.torok@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2] xen-bus: reduce scope of backend watch
Message-ID: <20201001112203.GM2024@perard.uk.xensource.com>
References: <20201001081500.1026-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20201001081500.1026-1-paul@xen.org>

On Thu, Oct 01, 2020 at 09:15:00AM +0100, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently a single watch on /local/domain/X/backend is registered by each
> QEMU process running in service domain X (where X is usually 0). The purpose
> of this watch is to ensure that QEMU is notified when the Xen toolstack
> creates a new device backend area.
> Such a backend area is specific to a single frontend area created for a
> specific guest domain and, since each QEMU process is also created to service
> a specific guest domain, it is unnecessary and inefficient to notify all
> QEMU processes. Only the QEMU process associated with the same guest
> domain needs to receive the notification. This patch re-factors the watch
> registration code such that notifications are targeted appropriately.
> 
> Reported-by: Jerome Leseinne <jerome.leseinne@gmail.com>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:44:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:44:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1228.4126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNx0U-0006Sd-62; Thu, 01 Oct 2020 11:44:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1228.4126; Thu, 01 Oct 2020 11:44:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNx0U-0006SW-34; Thu, 01 Oct 2020 11:44:18 +0000
Received: by outflank-mailman (input) for mailman id 1228;
 Thu, 01 Oct 2020 11:44:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kNx0S-0006SR-1I
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:44:16 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d298d769-59a1-48ff-8b23-1550c7644c63;
 Thu, 01 Oct 2020 11:44:14 +0000 (UTC)
X-Inumbo-ID: d298d769-59a1-48ff-8b23-1550c7644c63
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601552655;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=tJRjOyj67coC9I1KBFkaDAyK+2JtNm5ot/Bl9MArNYU=;
  b=V8YptN4AKjZump4ntlhmSJwuGESSwLmsN4McI+tvPoU4h0ToCs4WjTIy
   wremLWisit5byV1a6zn4p0b45HtblohcJrN+c8NaY353LAQhRD70hgkWa
   3ZwnV8F9hujVkLfNShzTFvzT2IItKYKopKHWat3+fepq8SWl4KGcOC8kc
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: /sz+1GR8BlwRGpqm2nq9eXQ6gZrPBqn/w/JdlMRYO0ty1LGTNKNAR1m2MStChVAupFloO6AA60
 Yggz88sq8W1yJVPl2r5ZwXjyTLWjfPErlzmTKw2hKfkpRiN75xaK9UBAvOQONLpdgpJSkRWddT
 BV8TFzp8U7wswV4WEMRfuWIBpdNf1zwOOHvn2psmK+IoiYwCIiVf/03eT6Ss+JL69X1vk6eezh
 unwtGmr2qBLcs7A0Gb+Y90HUw1VZkxk7dHa5Zoq6XQjdqhNDbm83ji8vZGni4uOcx+xjx27g3V
 XLk=
X-SBRS: None
X-MesageID: 28393153
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28393153"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH] xen/domain: check IOMMU options doesn't contain unknown bits set
Date: Thu, 1 Oct 2020 13:44:07 +0200
Message-ID: <20201001114407.44532-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/common/domain.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 8cfa2e0b6b..c4a480fa14 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -310,6 +310,12 @@ static int sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }
 
+    if ( config->iommu_opts & ~XEN_DOMCTL_IOMMU_no_sharept )
+    {
+        dprintk(XENLOG_INFO, "Unknown IOMMU options %#x\n", config->iommu_opts);
+        return -EINVAL;
+    }
+
     if ( !(config->flags & XEN_DOMCTL_CDF_iommu) && config->iommu_opts )
     {
         dprintk(XENLOG_INFO,
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:51:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:51:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1230.4138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNx7R-0007Me-VQ; Thu, 01 Oct 2020 11:51:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1230.4138; Thu, 01 Oct 2020 11:51:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNx7R-0007MX-SS; Thu, 01 Oct 2020 11:51:29 +0000
Received: by outflank-mailman (input) for mailman id 1230;
 Thu, 01 Oct 2020 11:51:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bOcq=DI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kNx7Q-0007MS-PJ
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:51:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e4e6e1b-571e-4343-9046-c2b5535d3c01;
 Thu, 01 Oct 2020 11:51:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 12D18AFBF;
 Thu,  1 Oct 2020 11:51:27 +0000 (UTC)
X-Inumbo-ID: 6e4e6e1b-571e-4343-9046-c2b5535d3c01
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601553087;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DiM82XdN6Au6EVTl0sxOpxzjPI5q031P49dRd7AVWT8=;
	b=jD0RYAQDojg0PBw2BztGPq4U8006WfZAV+DZDdLCiW1XDTlgsz3Gp0e096OXUJuWgpT3sM
	OyFz0A8rX5iTvxOtfnmv797sOiI1plOUt5cvmxoLdVt0JPzpy/PebYyp6MF2t7kJfNrR/Z
	tjt6XyCZNIYbn0MoVXsTznCILPtg/fA=
Subject: Re: [PATCH] xen/domain: check IOMMU options doesn't contain unknown
 bits set
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201001114407.44532-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <70b44d77-8c49-2394-ac62-a81fa52cb049@suse.com>
Date: Thu, 1 Oct 2020 13:51:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201001114407.44532-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.10.2020 13:44, Roger Pau Monne wrote:
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 11:59:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 11:59:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1232.4151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxFG-0007dk-RP; Thu, 01 Oct 2020 11:59:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1232.4151; Thu, 01 Oct 2020 11:59:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxFG-0007dd-Nw; Thu, 01 Oct 2020 11:59:34 +0000
Received: by outflank-mailman (input) for mailman id 1232;
 Thu, 01 Oct 2020 11:59:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bOcq=DI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kNxFF-0007dY-Ce
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 11:59:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 024ccc64-2281-4118-ab57-301af51e97cd;
 Thu, 01 Oct 2020 11:59:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 15EAAB4BD;
 Thu,  1 Oct 2020 11:59:31 +0000 (UTC)
X-Inumbo-ID: 024ccc64-2281-4118-ab57-301af51e97cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601553571;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=p2VZgp2dT8N5z2eZt1FOZQ2yn/BCkhnnb7PdoRhqEdI=;
	b=CEYh6CUo5ht1o5uwL83EpP6PClJqDYcHkIIr0ApHRIWG3TiOixOJ4PVAw6iiGjT3/Vt78t
	cwyXa3PJofESVtTaZ+s1s/Y3NbJVK05/KbDxdELGma16fLeV+QoP1IpXqGZYBv64jRzhf1
	ja/MVr5stAkcqUP6D83HBkGItdGHXBM=
Subject: Re: Yet another S3 issue in Xen 4.14
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
References: <20201001011245.GL3962@mail-itl>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>
Date: Thu, 1 Oct 2020 13:59:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201001011245.GL3962@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.10.2020 03:12, Marek Marczykowski-Górecki wrote:
> After patching the previous issue ("x86/S3: Fix Shadow Stack resume
> path") I still encounter issues resuming from S3.
> Since I had it working on Xen 4.13 on this particular hardware (Thinkpad
> P52), I bisected it and got this:
> 
> commit 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1
> Author: Andrew Cooper <andrew.cooper3@citrix.com>
> Date:   Wed Dec 11 20:59:19 2019 +0000
> 
>     x86/S3: Drop {save,restore}_rest_processor_state() completely
>     
>     There is no need to save/restore FS/GS/XCR0 state.  It will be handled
>     suitably on the context switch away from the idle.
>     
>     The CR4 restoration in restore_rest_processor_state() was actually fighting
>     later code in enter_state() which tried to keep CR4.MCE clear until everything
>     was set up.  Delete the intermediate restoration, and defer final restoration
>     until after MCE is reconfigured.
>     
>     Restoring PAT can be done earlier, and ideally before paging is enabled.  By
>     moving it into the trampoline during the setup for 64bit, the call can be
>     dropped from cpu_init().  The EFI boot path doesn't disable paging, so
>     make the adjustment when switching onto Xen's pagetables.
>     
>     The only remaining piece of restoration is load_system_tables(), so suspend.c
>     can be deleted in its entirety.
>     
>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>     Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Parent of this commit suspends and resumes just fine. With this commit
> applied, it (I think) panics; at least I get a reboot after 5s. Sadly, I
> don't have a serial console there.
> 
> I also tried master and stable-4.14 with this commit reverted (and the
> other fix applied), but it doesn't work. In this case I get a hang on
> resume (the power LED is still flashing, but the fan woke up). There are
> probably some other dependencies.

Since bisection may also point you at some intermediate breakage, which
these last results of yours seem to support, could you check whether
55f8c389d434, placed immediately on top of the above commit, makes a
difference, and if so resume bisecting from there?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 12:00:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 12:00:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1235.4163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxFy-0008Sf-Jr; Thu, 01 Oct 2020 12:00:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1235.4163; Thu, 01 Oct 2020 12:00:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxFy-0008SY-GA; Thu, 01 Oct 2020 12:00:18 +0000
Received: by outflank-mailman (input) for mailman id 1235;
 Thu, 01 Oct 2020 12:00:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DWMX=DI=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kNxFx-0008SP-K3
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 12:00:17 +0000
Received: from out2-smtp.messagingengine.com (unknown [66.111.4.26])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 319ff8f1-d79c-41fa-8bca-7b67391b9f42;
 Thu, 01 Oct 2020 12:00:16 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 83F275C02CC;
 Thu,  1 Oct 2020 08:00:16 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Thu, 01 Oct 2020 08:00:16 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 0DE763064691;
 Thu,  1 Oct 2020 08:00:14 -0400 (EDT)
X-Inumbo-ID: 319ff8f1-d79c-41fa-8bca-7b67391b9f42
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; bh=2ydlEz
	OZ+usLXd4GScthKQJDQB3Tg8b90AhwjSeRP0I=; b=LHoDqwv40ptS4gKDkbnM2O
	Lxl47rf+AJChWNwl2KUzxFuogdQhX5meWeCDX5HPljo6OXH2QXxw7RnISLSAVgpJ
	WVdEVLFih1WrNZEC0IHuNnUDqW8uugOhoF4wBoePCp6uL2xAqNHcvVSAkO3C75VB
	B5XQ3BlKNJFd6dppRzKNCq7JLc7Ca8lzgbM0euAY8G33PQbHJqFK1tf5ENsrDzc8
	R1sI/Pft84CC7SrXkUwChMQLsCG4A1fJSSLk+Ffgty8k7Hq665/Wpfd9Gi4VgqY7
	Hea9bxJ5eUxLNRteAwSv4Lk4XMMKGgZ2DWfDRnYzeWuemOhcy+ONGSao5FtWVxjw
	==
X-ME-Sender: <xms:z8R1X45Hh5sWEb8ASk-WDJzEc9nrOK1hbdnSqFpcauPk5uVpDsBe6w>
    <xme:z8R1X54nwSWQwpB349XNqUiPTqCiV_Jkp9eSsGvNkVo62RPUBZlej9mIHQetdq9mu
    mEJiCfkI1UjwA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrfeeggdegjecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteevffei
    gffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppeelud
    drieegrddujedtrdekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
    ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:z8R1X3cq3h1qqh0xnjDHRzUYl8V95AyuNy9IwR8V_YGF4BtdD4DuCA>
    <xmx:z8R1X9JqVu62zYW93prOY-d3Y-3SzBSnp6MZ-N-QXCb4ZwdmjejZZg>
    <xmx:z8R1X8Lrn_DbT24RWIXrxIwzFPjMSQbxRgnG7nIJspTA2nNVEMpIoA>
    <xmx:0MR1XxXWtyfkgiVVgm2AAzHvyNmwEwrwMwOc9q5HsR8qfEKMNgHpeA>
Date: Thu, 1 Oct 2020 14:00:10 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Yet another S3 issue in Xen 4.14
Message-ID: <20201001120010.GI1482@mail-itl>
References: <20201001011245.GL3962@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="hWKK+vUwUWSC3a08"
Content-Disposition: inline
In-Reply-To: <20201001011245.GL3962@mail-itl>


--hWKK+vUwUWSC3a08
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
Subject: Re: Yet another S3 issue in Xen 4.14

On Thu, Oct 01, 2020 at 03:12:47AM +0200, Marek Marczykowski-Górecki wrote:
> Hi,
> 
> After patching the previous issue ("x86/S3: Fix Shadow Stack resume
> path") I still encounter issues resuming from S3.
> Since I had it working on Xen 4.13 on this particular hardware (Thinkpad
> P52), I bisected it and got this:
> 
> commit 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1
> Author: Andrew Cooper <andrew.cooper3@citrix.com>
> Date:   Wed Dec 11 20:59:19 2019 +0000
> 
>     x86/S3: Drop {save,restore}_rest_processor_state() completely
>     
>     There is no need to save/restore FS/GS/XCR0 state.  It will be handled
>     suitably on the context switch away from the idle.
>     
>     The CR4 restoration in restore_rest_processor_state() was actually fighting
>     later code in enter_state() which tried to keep CR4.MCE clear until everything
>     was set up.  Delete the intermediate restoration, and defer final restoration
>     until after MCE is reconfigured.
>     
>     Restoring PAT can be done earlier, and ideally before paging is enabled.  By
>     moving it into the trampoline during the setup for 64bit, the call can be
>     dropped from cpu_init().  The EFI boot path doesn't disable paging, so
>     make the adjustment when switching onto Xen's pagetables.
>     
>     The only remaining piece of restoration is load_system_tables(), so suspend.c
>     can be deleted in its entirety.
>     
>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>     Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> The parent of this commit suspends and resumes just fine. With this commit
> applied it (I think) panics; at least I get a reboot after 5s. Sadly, I
> don't have a serial console there.

Reading the patch and the discussion about it, I think the important detail
is that this uses EFI boot.
That may also explain why I haven't seen it while bisecting the other
S3 issue, where I used the legacy boot path.
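(The bisection described above is a binary search over commit history. As a minimal illustrative sketch only, here is the search logic that `git bisect` automates; the revision list, culprit hash, and test predicate below are synthetic stand-ins, not taken from xen.git.)

```python
def first_bad(revisions, is_bad):
    """Return the first revision for which is_bad() is True.

    Assumes revisions[0] is good, revisions[-1] is bad, and the history
    flips from good to bad exactly once (the bisect precondition).
    """
    lo, hi = 0, len(revisions) - 1      # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid                    # bug present: first bad is at or before mid
        else:
            lo = mid                    # bug absent: first bad is after mid
    return revisions[hi]

# Synthetic history; "4304ff42" stands in for the culprit commit.
history = ["aaa", "bbb", "ccc", "4304ff42", "ddd", "eee"]
bad_from = history.index("4304ff42")
assert first_bad(history, lambda r: history.index(r) >= bad_from) == "4304ff42"
```

(In practice `git bisect run <test-script>` performs exactly this loop, with the test script - here, an S3 suspend/resume attempt - standing in for the predicate.)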

> I also tried master and stable-4.14 with this commit reverted (and also
> the other fix applied), but it doesn't work. In that case I get a hang on
> resume (power LED still flashing, but the fan woke up). There are probably
> some other dependencies.
> 
> Any idea?
> 
> PS This is different from the "Xen crash after S3 suspend - Xen 4.13"
> thread, as this one broke with the 4.13 -> 4.14 update.
> 



-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--hWKK+vUwUWSC3a08
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl91xMsACgkQ24/THMrX
1yxHIQf+K/Omvv7Z88//dZRx5ByyMvNXfIDiz2VZrsyAnqtVZ95k7LwEFajI68Fh
xXlpgT1+tbRY/zVahDOKIFQ938HLhDaXE+ONITQjca102GB2Sz20aK4lGRCuLhZk
CoUgf6TIzetqt5LoF9hYjeKNoWSI0SA13qgs+NYPtFFUZ/tzQkiiFJHM+4MzDj26
btPAkAhCGBqYQ9PFqtsIeKCcksvvjJhHfb6FBC+ycrdUXUDRYPe/4EkthAscSZHX
DvxSMWDZIlwo0ITRLz0U2w2q+7xfiQ13Ws0kkczewG15FjS11L4gGnr9NnyrJB+T
Uea/eo5hV67k75EVgB4/9UTszxba0Q==
=LOsR
-----END PGP SIGNATURE-----

--hWKK+vUwUWSC3a08--


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 12:09:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 12:09:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1239.4175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxOy-0000Oq-Ho; Thu, 01 Oct 2020 12:09:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1239.4175; Thu, 01 Oct 2020 12:09:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxOy-0000Oj-Dx; Thu, 01 Oct 2020 12:09:36 +0000
Received: by outflank-mailman (input) for mailman id 1239;
 Thu, 01 Oct 2020 12:09:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bOcq=DI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kNxOx-0000Oe-Ji
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 12:09:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f2150f8-94ab-4ce3-bd4a-ea1cd3a41fed;
 Thu, 01 Oct 2020 12:09:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B7D72AFB5;
 Thu,  1 Oct 2020 12:09:33 +0000 (UTC)
X-Inumbo-ID: 4f2150f8-94ab-4ce3-bd4a-ea1cd3a41fed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601554173;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AP2W93sOhCiXU1dh4UqXiv31c1fkggMfrQDeqedXe3Y=;
	b=j7kei7Iusr8Oz8nVP7o8kx0qaw5Jdb+msi21noCw4U3o0s4RpwU0nMGxP9VSo8wSURsReS
	JP6+/L0AsqvCuNryjidPPH4gxTtujMmJ0iEmdutU2T1jvpgxlE3GiegCk7k2pt8QusI5G5
	5cD2Z3xh18SGuOev9He+Led35teZ0sc=
Subject: Re: Yet another S3 issue in Xen 4.14
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
References: <20201001011245.GL3962@mail-itl> <20201001120010.GI1482@mail-itl>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ab492756-8f41-fecf-4062-5d0272be1e7f@suse.com>
Date: Thu, 1 Oct 2020 14:09:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201001120010.GI1482@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.10.2020 14:00, Marek Marczykowski-Górecki wrote:
> On Thu, Oct 01, 2020 at 03:12:47AM +0200, Marek Marczykowski-Górecki wrote:
>> After patching the previous issue ("x86/S3: Fix Shadow Stack resume
>> path") I still encounter issues resuming from S3.
>> Since I had it working on Xen 4.13 on this particular hardware (Thinkpad
>> P52), I bisected it and got this:
>>
>> commit 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1
>> Author: Andrew Cooper <andrew.cooper3@citrix.com>
>> Date:   Wed Dec 11 20:59:19 2019 +0000
>>
>>     x86/S3: Drop {save,restore}_rest_processor_state() completely
>>     
>>     There is no need to save/restore FS/GS/XCR0 state.  It will be handled
>>     suitably on the context switch away from the idle.
>>     
>>     The CR4 restoration in restore_rest_processor_state() was actually fighting
>>     later code in enter_state() which tried to keep CR4.MCE clear until everything
>>     was set up.  Delete the intermediate restoration, and defer final restoration
>>     until after MCE is reconfigured.
>>     
>>     Restoring PAT can be done earlier, and ideally before paging is enabled.  By
>>     moving it into the trampoline during the setup for 64bit, the call can be
>>     dropped from cpu_init().  The EFI boot path doesn't disable paging, so
>>     make the adjustment when switching onto Xen's pagetables.
>>     
>>     The only remaining piece of restoration is load_system_tables(), so suspend.c
>>     can be deleted in its entirety.
>>     
>>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>     Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> The parent of this commit suspends and resumes just fine. With this commit
>> applied it (I think) panics; at least I get a reboot after 5s. Sadly, I
>> don't have a serial console there.
> 
> Reading the patch and the discussion about it, I think the important detail
> is that this uses EFI boot.
> That may also explain why I haven't seen it while bisecting the other
> S3 issue, where I used the legacy boot path.

Hmm, interesting. I didn't think, though, that the code paths altered
by the commit above would be in any way dependent upon the way Xen
was booted. But if that's the observation, then perhaps there's
something I'm unaware of.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 12:14:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 12:14:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1242.4189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxTU-0001Ga-4x; Thu, 01 Oct 2020 12:14:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1242.4189; Thu, 01 Oct 2020 12:14:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxTU-0001GT-1s; Thu, 01 Oct 2020 12:14:16 +0000
Received: by outflank-mailman (input) for mailman id 1242;
 Thu, 01 Oct 2020 12:14:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNxTS-0001G1-JK
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 12:14:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11d9fd0e-c1f0-455b-9eaf-42350183237e;
 Thu, 01 Oct 2020 12:14:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNxTK-000570-VH; Thu, 01 Oct 2020 12:14:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNxTK-0001JJ-Ln; Thu, 01 Oct 2020 12:14:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNxTK-0000bE-LI; Thu, 01 Oct 2020 12:14:06 +0000
X-Inumbo-ID: 11d9fd0e-c1f0-455b-9eaf-42350183237e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SiHsQf7BTcnd1Pn/aVFgWPWeVjSfYlJxU3bYrx1/xjU=; b=wm6ESjsE/zkzpP/sFJsjDLTozg
	tWuSR57V5an/jTjrOCPxO0iM+Q4fRxVK15K9/ka7Qw+Yo+4zwxstnw8NrOXwpH6+kL8+OfliQNYWh
	Oojq7/bfzTj66Lzsg5RJZOImtedxthS96X8B9Ana8OSlqNBfB+OmiKuWA1V0LLPgLOS0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155126-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 155126: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:<job status>:broken:regression
    xen-4.10-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.10-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:broken:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=f58caa40cd8c2a3dbed705d90b6a22facc281afb
X-Osstest-Versions-That:
    xen=93be943e7d759015bd5db41a48f6dce58e580d5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 12:14:06 +0000

flight 155126 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155126/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx    <job status>                 broken
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151728
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 151728
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 151728
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151728

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               broken never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151728
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 151728
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  f58caa40cd8c2a3dbed705d90b6a22facc281afb
baseline version:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a

Last test of basis   151728  2020-07-08 01:17:09 Z   85 days
Testing same since   154621  2020-09-22 16:07:00 Z    8 days   17 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 broken  
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-thunderx broken
broken-step test-arm64-arm64-xl-thunderx hosts-allocate

Not pushing.

(No revision log; it would be 341 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 12:31:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 12:31:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1247.4205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxkH-00034v-RT; Thu, 01 Oct 2020 12:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1247.4205; Thu, 01 Oct 2020 12:31:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxkH-00034o-NL; Thu, 01 Oct 2020 12:31:37 +0000
Received: by outflank-mailman (input) for mailman id 1247;
 Thu, 01 Oct 2020 12:31:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DWMX=DI=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kNxkG-00034j-Ck
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 12:31:36 +0000
Received: from out2-smtp.messagingengine.com (unknown [66.111.4.26])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a491113e-0b4c-49e2-afa8-8001c88599d1;
 Thu, 01 Oct 2020 12:31:35 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 62FB35C0224;
 Thu,  1 Oct 2020 08:31:35 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Thu, 01 Oct 2020 08:31:35 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 2D9053064674;
 Thu,  1 Oct 2020 08:31:34 -0400 (EDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; bh=ZJ8pnZ
	Cnd0BhXzb3YzxNYDH8Wp/cgwuHStISOhnoTmY=; b=LE8rlfKpKB4UGtbuz3bPbB
	BAV4IJS/RvUK10zzusdWsud1w2LPSM/+viNbZGqKIkFjY6bBmixrB18e3eNjbd3Y
	ZWLoQMtZH2Lc/T5ZZpL+cXUVXJV3JrRlu1s8OOAt1JqVPU0Ke3RfjsA2TFJ0KGK/
	Q1b3O3KbH0VjadklG8ivSMKhD65xIgvvviVlXYpMFWyHz8x/2aKCE5Oo4tRBXPHt
	5SeA5cGbdaWIKrhqGQ+1RMOcouUAohs4XjRQSSP01sMzFUCewtg9rXkt//+Q1DuT
	/e9+j5dAOdNxJXC+FBQ/yH9STrkGMBReO8PY6GoFSGlxZ0tASOOsJVEX625Ej0zA
	==
X-ME-Sender: <xms:J8x1X39Vg5Tk8834dswaFFDGTqKpWEFk1AJYk6wYK9dP0PQ6agRSBA>
    <xme:J8x1XzudxXXoc-jNF229l3FvSAIYBLhHHUt7ZLWEkHzBdGe0-hDR4CaLPazpX1pf6
    qzBj9jj6Jx0HA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrfeeggdehgecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteevffei
    gffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppeelud
    drieegrddujedtrdekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
    ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:J8x1X1BMK8SGTXlxAQN2La2XLx35AAXn10Sk_aiwrWKt_8T-g2n3Wg>
    <xmx:J8x1XzfKomttmLrSTx-4_P27vyo2hIRbxGrisSNRTNat6DWZVyL_Ow>
    <xmx:J8x1X8NH9W4z9bMZyK-2po4IiYreuHnSj65HPHD12uQNqsLK6g4Dlg>
    <xmx:J8x1X3YcbLmHno6dwBdlVvbhUE7_KNHmTdZT0cBQNcHpiXecm_BvPg>
Date: Thu, 1 Oct 2020 14:31:29 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Yet another S3 issue in Xen 4.14
Message-ID: <20201001123129.GJ1482@mail-itl>
References: <20201001011245.GL3962@mail-itl>
 <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="J2PFMayAIDoBVGKc"
Content-Disposition: inline
In-Reply-To: <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>


--J2PFMayAIDoBVGKc
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: Yet another S3 issue in Xen 4.14

On Thu, Oct 01, 2020 at 01:59:32PM +0200, Jan Beulich wrote:
> On 01.10.2020 03:12, Marek Marczykowski-Górecki wrote:
> > After patching the previous issue ("x86/S3: Fix Shadow Stack resume
> > path") I still encounter issues resuming from S3.
> > Since I had it working on Xen 4.13 on this particular hardware (Thinkpad
> > P52), I bisected it and got this:
> > 
> > commit 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1
> > Author: Andrew Cooper <andrew.cooper3@citrix.com>
> > Date:   Wed Dec 11 20:59:19 2019 +0000
> > 
> >     x86/S3: Drop {save,restore}_rest_processor_state() completely
> > 
> >     There is no need to save/restore FS/GS/XCR0 state.  It will be handled
> >     suitably on the context switch away from the idle.
> > 
> >     The CR4 restoration in restore_rest_processor_state() was actually fighting
> >     later code in enter_state() which tried to keep CR4.MCE clear until everything
> >     was set up.  Delete the intermediate restoration, and defer final restoration
> >     until after MCE is reconfigured.
> > 
> >     Restoring PAT can be done earlier, and ideally before paging is enabled.  By
> >     moving it into the trampoline during the setup for 64bit, the call can be
> >     dropped from cpu_init().  The EFI boot path doesn't disable paging, so
> >     make the adjustment when switching onto Xen's pagetables.
> > 
> >     The only remaining piece of restoration is load_system_tables(), so suspend.c
> >     can be deleted in its entirety.
> > 
> >     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >     Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > 
> > The parent of this commit suspends and resumes just fine. With this commit
> > applied, it (I think) panics; at least I get a reboot after 5s. Sadly, I
> > don't have a serial console there.
> > 
> > I also tried master and stable-4.14 with this commit reverted (and also
> > the other fix applied), but it doesn't work. In that case I get a hang on
> > resume (power LED still flashing, but the fan woke up). There are probably
> > some other dependencies.
> 
> Since bisection may also point you at some intermediate breakage, which
> these last results of yours seem to support, could you check whether
> 55f8c389d434 put immediately on top of the above commit makes a difference,
> and if so resume bisecting from there?

Nope, 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1 with 55f8c389d434 on top
still hangs on resume.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--J2PFMayAIDoBVGKc
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl91zCIACgkQ24/THMrX
1yyDiwf9H36STPxH5ydOvf3T1opuGrYOFgTq4c/IapfGco6EbqRk9h5Gb7lkXuYj
cWytmEia0a1ZrbnGzHPFoMccuU7vnOdwXgYZzjzNkSeNN4miYk4S5JNv/QmlKIPd
1hqxvxrUye/zgVG7wHVx+xmeAFpHee6E9lrLNeemggWmV+X9cdBwPxgoRdLDIDqR
As7IBqhSjJ301YMXN5GZIXPfrZhMuMRP/NvIQE0rgLrGGzBHlqu7FmjzMDkcxtno
R0HGHdBnY4ZYbbOGMJc87fOcEB8WDREGgpXoiMdxJFxbyIGiKXnykr7vrI9yfz3u
eHvN7u5t3sgFZe1/POgARNYJCY5+7Q==
=DaQb
-----END PGP SIGNATURE-----

--J2PFMayAIDoBVGKc--
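The bisection workflow described in the message above can be sketched with a
small, self-contained example. Everything below is synthetic: a throwaway
repository whose "commit 5" stands in for the offending Xen commit, and a
grep standing in for a real suspend/resume test. It only illustrates the
`git bisect run` mechanics, not the actual Xen session:

```shell
#!/bin/sh
# Synthetic "git bisect run" demo: find the first commit introducing a
# marker string, the same way a real bisect would find the commit that
# broke S3 resume (with the grep replaced by an actual suspend test).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email editor@example.invalid
git config user.name "Example User"

for i in 1 2 3 4 5 6 7 8; do
    echo "$i" > state
    if [ "$i" -ge 5 ]; then
        echo broken >> state    # the regression first appears at commit 5
    fi
    git add state
    git commit -qm "commit $i"
done

# bad = current tip, good = oldest commit; bisect halves the range.
git bisect start HEAD HEAD~7 >/dev/null
# The command given to "run" must exit 0 for good commits, non-zero for bad.
git bisect run sh -c '! grep -q broken state' >/dev/null
git bisect log | grep 'first bad commit'
```

The final line reports the first bad commit ("commit 5" in this sketch); in
the real session that role was played by 4304ff420e51.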


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 12:44:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 12:44:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1254.4227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxwG-0004A0-3j; Thu, 01 Oct 2020 12:44:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1254.4227; Thu, 01 Oct 2020 12:44:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxwG-00049r-0O; Thu, 01 Oct 2020 12:44:00 +0000
Received: by outflank-mailman (input) for mailman id 1254;
 Thu, 01 Oct 2020 12:43:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uQij=DI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kNxwF-00049h-Df
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 12:43:59 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0453461c-8dd5-41bb-8705-ef9a7c515a29;
 Thu, 01 Oct 2020 12:43:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601556237;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=d4T2tjy0itaxm2AaSPrOmsLNIShrIBiR0eAWxmxC0qI=;
  b=KYLfnaux8gPR6S9f/jDIMHFAf1bPCfat2uxsmwZKMj9V4YvS1G3kiZue
   wDaGkmvs6XttCD2YObyw01QhakOxo9SSyLcUR3U86QGmV3rSZki+00dHS
   dX/Mp4ZCV6OWFnJzabui3xNxe0t7AH+zTxUrKXAdIA3wbGSDbW+3MBo21
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: WOn/9cbOVMCFAfKu4pPAN5eUuxTIy+sxxlKrXa63dfN4I4+El8Xopr9kqIsWw3VWVNPGfa5f/4
 KVsmg80Zt6i3+ECiXvIHraqirBMxs4fGDfTIK/rb3hZ1TUZGIKStIgvpmOv0S4upMuUhtc6wqn
 NeXYkgsEMyH2jisoeE/WKAYUoPxlZL9AzfCHKSzb3qgMXk9+PpRIrxpNzkuzEOgmn1cTyz2QFc
 lJtyBgMXfmhBfuI55I7keFkNecEDh1c+P8Tsh+VpQS4LOTNbw5YAZskL2vhlyc5OofrWdjoftU
 h7Q=
X-SBRS: None
X-MesageID: 29078407
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="29078407"
Subject: Re: Yet another S3 issue in Xen 4.14
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>
CC: xen-devel <xen-devel@lists.xenproject.org>
References: <20201001011245.GL3962@mail-itl>
 <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>
 <20201001123129.GJ1482@mail-itl>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1e596ccc-a875-93f1-2619-e4dbcbd88b4d@citrix.com>
Date: Thu, 1 Oct 2020 13:43:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201001123129.GJ1482@mail-itl>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 01/10/2020 13:31, Marek Marczykowski-Górecki wrote:
> On Thu, Oct 01, 2020 at 01:59:32PM +0200, Jan Beulich wrote:
>> On 01.10.2020 03:12, Marek Marczykowski-Górecki wrote:
>>> After patching the previous issue ("x86/S3: Fix Shadow Stack resume
>>> path") I still encounter issues resuming from S3.
>>> Since I had it working on Xen 4.13 on this particular hardware (Thinkpad
>>> P52), I bisected it and got this:
>>>
>>> commit 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1
>>> Author: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Date:   Wed Dec 11 20:59:19 2019 +0000
>>>
>>>     x86/S3: Drop {save,restore}_rest_processor_state() completely
>>>
>>>     There is no need to save/restore FS/GS/XCR0 state.  It will be handled
>>>     suitably on the context switch away from the idle.
>>>
>>>     The CR4 restoration in restore_rest_processor_state() was actually fighting
>>>     later code in enter_state() which tried to keep CR4.MCE clear until everything
>>>     was set up.  Delete the intermediate restoration, and defer final restoration
>>>     until after MCE is reconfigured.
>>>
>>>     Restoring PAT can be done earlier, and ideally before paging is enabled.  By
>>>     moving it into the trampoline during the setup for 64bit, the call can be
>>>     dropped from cpu_init().  The EFI boot path doesn't disable paging, so
>>>     make the adjustment when switching onto Xen's pagetables.
>>>
>>>     The only remaining piece of restoration is load_system_tables(), so suspend.c
>>>     can be deleted in its entirety.
>>>
>>>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>     Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> The parent of this commit suspends and resumes just fine. With this commit
>>> applied, it (I think) panics; at least I get a reboot after 5s. Sadly, I
>>> don't have a serial console there.
>>>
>>> I also tried master and stable-4.14 with this commit reverted (and also
>>> the other fix applied), but it doesn't work. In that case I get a hang on
>>> resume (power LED still flashing, but the fan woke up). There are probably
>>> some other dependencies.
>> Since bisection may also point you at some intermediate breakage, which
>> these last results of yours seem to support, could you check whether
>> 55f8c389d434 put immediately on top of the above commit makes a difference,
>> and if so resume bisecting from there?
> Nope, 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1 with 55f8c389d434 on top
> still hangs on resume.

Ok.  I'll see about breaking the change apart so we can bisect which
specific bit of code movement broke things.

~Andrew



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 12:45:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 12:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1258.4240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxxS-0004HN-Fk; Thu, 01 Oct 2020 12:45:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1258.4240; Thu, 01 Oct 2020 12:45:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNxxS-0004HG-Cb; Thu, 01 Oct 2020 12:45:14 +0000
Received: by outflank-mailman (input) for mailman id 1258;
 Thu, 01 Oct 2020 12:45:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gvi1=DI=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kNxxQ-0004HB-VX
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 12:45:13 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80afe216-9c7c-4108-a467-e902c1a5c813;
 Thu, 01 Oct 2020 12:45:12 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 091Chq1s130044;
 Thu, 1 Oct 2020 12:44:19 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 33swkm5pva-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 01 Oct 2020 12:44:19 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 091CQ6ec104197;
 Thu, 1 Oct 2020 12:44:18 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3030.oracle.com with ESMTP id 33tfk1gcdq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 01 Oct 2020 12:44:18 +0000
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 091Ci4s6018950;
 Thu, 1 Oct 2020 12:44:05 GMT
Received: from [10.74.86.152] (/10.74.86.152)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 01 Oct 2020 05:44:04 -0700
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=dRio5ZMU9WDz6PjNY704BdYKLFPP9TtV0vYjDf32cpI=;
 b=L1tXWifpPI2khL7ZWsxd+W1OzbP5uZquW/uyQ5eKyU/GjTGSHQi30vn6r3rT7PpE/9Jd
 1UCv0ETLyOoJYqF9c0WsJVQJTp/y2FpkcoVBlK+ec42xTHc/LafZA0UXKAYkLtOY7ujM
 Gnxq5MbZIjL3GsNTAhwyceyOtdjSI5kJioc9HSNhxr64tFD9dCRDFWuAMO30IbelxaDN
 LVKCtugIDxlclIM7JSu/2UKbH9tOGUVemGFchOa5of7QiKH2ySYHPjihtWRlaI5ETzwu
 VKdojWeV1D7mXjHmrrlmUo3Mi/+h9yjiL8I62ff7BIPsaxYMYEVVGWMnND2ps+MGeZ4i kg== 
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
Cc: tglx@linutronix.de, mingo@redhat.com, bp@alien8.de, hpa@zytor.com,
        x86@kernel.org, jgross@suse.com, linux-pm@vger.kernel.org,
        linux-mm@kvack.org, kamatam@amazon.com, sstabellini@kernel.org,
        konrad.wilk@oracle.com, roger.pau@citrix.com, axboe@kernel.dk,
        davem@davemloft.net, rjw@rjwysocki.net, len.brown@intel.com,
        pavel@ucw.cz, peterz@infradead.org, eduval@amazon.com,
        sblbir@amazon.com, xen-devel@lists.xenproject.org, vkuznets@redhat.com,
        netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
        dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <e9b94104-d20a-b6b2-cbe0-f79b1ed09c98@oracle.com>
 <20200915180055.GB19975@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <5f1e4772-7bd9-e6c0-3fe6-eef98bb72bd8@oracle.com>
 <20200921215447.GA28503@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <e3e447e5-2f7a-82a2-31c8-10c2ffcbfb2c@oracle.com>
 <20200922231736.GA24215@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <20200925190423.GA31885@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <274ddc57-5c98-5003-c850-411eed1aea4c@oracle.com>
 <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
Date: Thu, 1 Oct 2020 08:43:58 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9760 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 spamscore=0 mlxscore=0
 phishscore=0 adultscore=0 bulkscore=0 mlxlogscore=980 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2010010109
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9760 signatures=668680
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 mlxscore=0 phishscore=0
 suspectscore=0 mlxlogscore=979 clxscore=1015 priorityscore=1501
 impostorscore=0 lowpriorityscore=0 bulkscore=0 spamscore=0 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2006250000
 definitions=main-2010010110


>>>>>>> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
>>>>>>> bandwidth to dive deep into the issue and fix it.
>>>> So what's the plan there? You first mentioned this issue early this year, and judging by your response it is not clear whether you will ever spend time looking at it.
>>>>
>>> I do want to fix it, and did do some debugging earlier this year; I just
>>> haven't gotten back to it. Also, I wanted to understand whether the issue is a
>>> blocker for this series.
>>
>> Integrating code with known bugs is less than ideal.
>>
> So for this series to be accepted, KASLR needs to be fixed, along with
> addressing the other comments, of course?


Yes, please.



>>> I had some theories when debugging this, e.g. whether the random base address picked by KASLR for the
>>> resuming kernel mismatches the suspended kernel's; jogging my memory, I didn't find that to be the case.
>>> Another hunch was that the physical address of the vcpu info registered at boot differs from what the suspended kernel
>>> has, and that this can cause CPUs to get stuck when coming online.
>>
>> I'd think if this were the case you'd have a 100% failure rate. We are also re-registering vcpu info on Xen restore, and I am not aware of any failures due to KASLR.
>>
> What I meant there wrt VCPU info was that the VCPU info is not unregistered during hibernation,
> so Xen still remembers the old physical addresses for the VCPU information created by the
> booting kernel. But the hibernation kernel may have different physical
> addresses for the VCPU info, and if a mismatch happens it may cause issues with resume.
> During hibernation, the VCPU info register hypercall is not invoked again.


I still don't think that's the cause but it's certainly worth having a look.


-boris



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 12:58:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 12:58:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1269.4257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNyAc-0005My-Sh; Thu, 01 Oct 2020 12:58:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1269.4257; Thu, 01 Oct 2020 12:58:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNyAc-0005Mr-PU; Thu, 01 Oct 2020 12:58:50 +0000
Received: by outflank-mailman (input) for mailman id 1269;
 Thu, 01 Oct 2020 12:58:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uQij=DI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kNyAb-0005Mm-Kq
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 12:58:49 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd519763-a09f-4943-8908-0f14a9b74855;
 Thu, 01 Oct 2020 12:58:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601557128;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=tC8IZ2rjKb4/mSW5V73QWviVu64ioiomUQkuFzhd1wQ=;
  b=hzbs3XjxEqLCRph5teqaMkrf8x5fkXQ7o2VYcRPIJi8FYocCIgkcQC+9
   hDMBbM6zF4SMX+XwLGMrI0ahbBw+zWCGvjRhhcYixBAMG9v3uOwSzUAj6
   DJGEB6e13GeH0QZB/PKlxP2tLtuQkBXsJkKHqaw4GD2n2E2qypbnvYOdk
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vC2etBxWYWR1ygoadZP4GIkq5pGHw14rA3hKun27GlbE7nzJrlX5mWivm+EGA8NPDFb0GGnTUU
 vYpG3una6dg3ZG47cTZP3Nonik4p4SCh1mch51PEF9HgAZXQDyCnumvvwq1XgLX5JCcolg6UML
 I1KjWMYC+TsQefUG6B0Us59v0zRqUx4UoWXrOgfWu9w02+hWtxSAWMMNyhTSM6I2ZMEzu9lgw0
 +blvHj/hob3T91CYbpXBrHjVcM994xOJq4pNo4NOxx0AQdcTihTG3BGFWddXeKZjygw9HMiU85
 S3M=
X-SBRS: None
X-MesageID: 28071257
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28071257"
Subject: Re: [PATCH RFC] docs: Add minimum version depencency policy document
To: George Dunlap <george.dunlap@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Rich Persaud <persaur@gmail.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
References: <20200930125736.95203-1-george.dunlap@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <868b25bd-ab2c-7f33-1dc2-9476c86d8050@citrix.com>
Date: Thu, 1 Oct 2020 13:58:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200930125736.95203-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 30/09/2020 13:57, George Dunlap wrote:
> Define specific criteria for how we determine which tools and
> libraries to be compatible with.  This will clarify issues such as
> "Should we continue to support Python 2.4?" moving forward.

Luckily that one is settled.  Arguably a better option might be "what is
the minimum toolchain to support"?

> Note that CentOS 7 is set to stop receiving "normal" maintenance
> updates in "Q4 2020"; assuming that 4.15 is released after that, we
> only need to support CentOS / RHEL 8.

While I appreciate that this doesn't mean "we'll break CentOS 7 in Q4",
I'm going to have some substantial development issues if C7 actually
stops working, at least in the short to medium term.

>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
>
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Rich Persaud <persaur@gmail.com>
> CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
> ---
>  docs/index.rst                        |  2 +
>  docs/policies/dependency-versions.rst | 76 +++++++++++++++++++++++++++
>  2 files changed, 78 insertions(+)
>  create mode 100644 docs/policies/dependency-versions.rst
>
> diff --git a/docs/index.rst b/docs/index.rst
> index b75487a05d..ac175eacc8 100644
> --- a/docs/index.rst
> +++ b/docs/index.rst
> @@ -57,5 +57,7 @@ Miscellanea
>  -----------
>  
>  .. toctree::
> +   :maxdepth: 1
>  
> +   policies/dependency-versions

I think it is great that this is going into Sphinx.

However, I'd prefer to avoid proliferating random things at the top
level, to try and keep everything in a coherent structure.


For better or worse, I guesstimated at "admin guide" (end user and
sysadmin guide), "guest docs" (VM ABI, and guest kernel developers), and
"hypervisor docs" (hacking Xen).

I'm happy to shuffle the dividing lines if a better arrangement becomes
obvious.  This particular doc logically lives with "building Xen from
source".

Alternatively, I considered putting in an explicit "unsorted" section in
the short term, so content can get added while still being clearly
marked as not yet in its final resting place.

>     glossary
> diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
> new file mode 100644
> index 0000000000..d5eeb848d8
> --- /dev/null
> +++ b/docs/policies/dependency-versions.rst
> @@ -0,0 +1,76 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Build and runtime dependencies
> +==============================
> +
> +Xen depends on other programs and libraries to build and to run.
> +Choosing a minimum version of these tools to support requires a careful
> +balance: Supporting older versions of these tools or libraries means
> +that Xen can compile on a wider variety of systems, but it also means
> +that Xen cannot take advantage of features available in newer versions.
> +Conversely, requiring newer versions means that Xen can take advantage
> +of newer features, but cannot work on as wide a variety of systems.
> +
> +Specific dependencies and versions for a given Xen release will be
> +listed in the toplevel README, and/or specified by the ``configure``
> +system.  This document lays out the principles by which those versions
> +should be chosen.
> +
> +The general principle is this:
> +
> +    Xen should build on currently-supported versions of major distros
> +    when released.
> +
> +"Currently-supported" means whatever that distro considers "full
> +support".  For instance, at the time of writing, CentOS 7 and 8 are
> +listed as being given "Full Updates", but CentOS 6 is listed as
> +"Maintenance updates"; under this criterion, we would try to ensure
> +that Xen could build on CentOS 7 and 8, but not on CentOS 6.
> +
> +Exceptions for specific distros or tools may be made when appropriate.
> +
> +One exception to this is compiler versions for the hypervisor.
> +Support for new instructions, and in particular support for new safety
> +features, may require a newer compiler than many distros support.
> +These will be specified in the README.

The problem we have is that xen.git contains two very different things. 
There is the hypervisor itself, which is embedded, and can easily be
cross compiled, and there is the content of tools/ which depends on a
lot of distro infrastructure.

We expect tools/ to work in any supported distro, without having to do
weird toolchain gymnastics.

For xen/ we currently have very obsolete toolchain requirements, and
this is holding us back in some areas.  We're looking to bring that
forward, and may decide that requiring something newer than some of the
older distros ship is necessary.

At the moment however, we have quite a lot of functionality which is
dependent on being able to detect a suitable toolchain.  GCOV and CET-SS
are examples.  These features will turn themselves off in older distros,
so while you can "build" Xen that far back, you might not get everything.

For CET in particular, there is no feasible way to support it on older
toolchains, unless someone comes up with an extremely convincing way of
hand-crafting memory operands using raw .byte's in inline assembler.

I definitely don't think it is unreasonable for us to require the use of
(potentially) bleeding edge toolchains if users want (potentially)
bleeding edge features.  CET-SS isn't bleeding edge any more, but
CET-IBT is, due to the additional linker work required to make it
function.  A future one which we need to do something about is Control
Flow Integrity, which is Clang specific, depends on LTO, and caused
Linux to raise their minimum supported Clang version to 10.0.1, which
was when all the bugfixes got merged.

> +
> +Distros we consider when deciding minimum versions
> +--------------------------------------------------
> +
> +We currently aim to support Xen building and running on the following distributions:
> +Debian_,
> +Ubuntu_,
> +OpenSUSE_,
> +Arch Linux,

No link for Arch?

> +SLES_,
> +Yocto_,
> +CentOS_,
> +and RHEL_.
> +
> +.. _Debian: https://www.debian.org/releases/
> +.. _Ubuntu: https://wiki.ubuntu.com/Releases
> +.. _OpenSUSE: https://en.opensuse.org/Lifetime
> +.. _SLES: https://www.suse.com/lifecycle/
> +.. _Yocto: https://wiki.yoctoproject.org/wiki/Releases
> +.. _CentOS: https://wiki.centos.org/About/Product
> +.. _RHEL: https://access.redhat.com/support/policy/updates/errata
> +
> +Specific distro versions supported in this release
> +--------------------------------------------------
> +
> +======== ==================
> +Distro   Supported releases
> +======== ==================
> +Debian   10 (Buster)
> +Ubuntu   20.10 (Groovy Gorilla), 20.04 (Focal Fossa), 18.04 (Bionic Beaver), 16.04 (Xenial Xerus)
> +OpenSUSE Leap 15.2
> +SLES     SLES 11, 12, 15
> +Yocto    3.1 (Dunfell)
> +CentOS   8
> +RHEL     8
> +======== ==================

How about a 3rd column for "supported until" ?  It would stop this page
becoming stale simply over time.

> +
> +.. note::
> +
> +   We also support Arch Linux, but as it's a rolling distribution, the
> +   concept of "security supported releases" doesn't really apply.

Should we rationalise this list with the docker containers?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 13:05:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 13:05:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1272.4268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNyGN-0006K0-O1; Thu, 01 Oct 2020 13:04:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1272.4268; Thu, 01 Oct 2020 13:04:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNyGN-0006Jt-L7; Thu, 01 Oct 2020 13:04:47 +0000
Received: by outflank-mailman (input) for mailman id 1272;
 Thu, 01 Oct 2020 13:04:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bOcq=DI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kNyGM-0006Jo-U1
 for xen-devel@lists.xen.org; Thu, 01 Oct 2020 13:04:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14da7d49-6b15-4817-99d1-30dbf6edc49d;
 Thu, 01 Oct 2020 13:04:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8648AAFA9;
 Thu,  1 Oct 2020 13:04:43 +0000 (UTC)
X-Inumbo-ID: 14da7d49-6b15-4817-99d1-30dbf6edc49d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601557483;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SOCFfrREEMYYyFsM+7JREQAW///aGVj3cvAR5WZfCjQ=;
	b=rrr6gfJLKznionkoyvJCCPXYloRaw/QT5Qu5PbDGFFaylNQFqO23iOlISq3JWIf3g1CayO
	69QZejwr6GOqyBNzctF2n9D/7DcLRiiSSsOVggA5U0ApPrLbda73LmpVLlJxUVEswv+dhA
	kbuYTY73axkyi9Ls/vU+c9G+P5pt1Fw=
Subject: Re: [XEN PATCH v14 5/8] xen: Add vmware_port support
To: Don Slutz <don.slutz@gmail.com>
Cc: xen-devel@lists.xen.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Ian Jackson <iwj@xenproject.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Tim Deegan <tim@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <cover.1597854907.git.don.slutz@gmail.com>
 <5d2e424a19ea4934be3be962cdbe6a0ec8db9a6c.1597854907.git.don.slutz@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9131ac71-e7e8-3a73-10fa-46d7bb5b60ca@suse.com>
Date: Thu, 1 Oct 2020 15:04:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <5d2e424a19ea4934be3be962cdbe6a0ec8db9a6c.1597854907.git.don.slutz@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.08.2020 18:51, Don Slutz wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -504,6 +504,8 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>  
>  static bool emulation_flags_ok(const struct domain *d, uint32_t emflags)
>  {
> +    uint32_t all_emflags = emflags & XEN_X86_EMU_ALL;
> +
>  #ifdef CONFIG_HVM
>      /* This doesn't catch !CONFIG_HVM case but it is better than nothing */
>      BUILD_BUG_ON(X86_EMU_ALL != XEN_X86_EMU_ALL);
> @@ -512,14 +514,15 @@ static bool emulation_flags_ok(const struct domain *d, uint32_t emflags)
>      if ( is_hvm_domain(d) )
>      {
>          if ( is_hardware_domain(d) &&
> -             emflags != (X86_EMU_VPCI | X86_EMU_LAPIC | X86_EMU_IOAPIC) )
> +             all_emflags != (X86_EMU_VPCI | X86_EMU_LAPIC | X86_EMU_IOAPIC) )
>              return false;
>          if ( !is_hardware_domain(d) &&
> -             emflags != (X86_EMU_ALL & ~X86_EMU_VPCI) &&
> -             emflags != X86_EMU_LAPIC )
> +             all_emflags != (X86_EMU_ALL & ~X86_EMU_VPCI) &&
> +             all_emflags != X86_EMU_LAPIC )
>              return false;
>      }
> -    else if ( emflags != 0 && emflags != X86_EMU_PIT )
> +    else if ( emflags & XEN_X86_EMU_VMWARE_PORT ||
> +              (all_emflags != 0 && all_emflags != X86_EMU_PIT) )
>      {
>          /* PV or classic PVH. */
>          return false;
> @@ -581,7 +584,7 @@ int arch_domain_create(struct domain *d,
>      if ( is_hardware_domain(d) && is_pv_domain(d) )
>          emflags |= XEN_X86_EMU_PIT;
>  
> -    if ( emflags & ~XEN_X86_EMU_ALL )
> +    if ( emflags & ~(XEN_X86_EMU_ALL | XEN_X86_EMU_VMWARE_PORT) )
>      {
>          printk(XENLOG_G_ERR "d%d: Invalid emulation bitmap: %#x\n",
>                 d->domain_id, emflags);

Seeing code churn like this, I'm inclined to suggest this shouldn't
be part of this field.  Either take it from the top bits of the field
you add in patch 3, or add yet another field.  See how the various
Viridian sub-features also didn't go here.

> @@ -600,6 +603,8 @@ int arch_domain_create(struct domain *d,
>      if ( is_hvm_domain(d) )
>      {
>          d->arch.hvm.vmware_hwver = config->arch.vmware_hwver;
> +        d->arch.hvm.is_vmware_port_enabled =
> +            !!(emflags & XEN_X86_EMU_VMWARE_PORT);

While I expect this to move anyway, as a general remark: No need for
!! when the lvalue is of type bool. But then why have the separate
boolean anyway? With how you have things now, you could as well
look at d->arch.emulation_flags, and with the change suggested above
you'd again have another field where the information is already
present.

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -697,6 +697,9 @@ int hvm_domain_initialise(struct domain *d)
>      if ( hvm_tsc_scaling_supported )
>          d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
>  
> +    if ( d->arch.hvm.is_vmware_port_enabled )
> +        vmport_register(d);
> +
>      rc = viridian_domain_init(d);
>      if ( rc )
>          goto fail2;
> @@ -4214,6 +4217,12 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
>          rc = xsm_hvm_param_nested(XSM_PRIV, d);
>          if ( rc )
>              break;
> +        /* Prevent nestedhvm enable with vmport */
> +        if ( value && d->arch.hvm.is_vmware_port_enabled )
> +        {
> +            rc = -EOPNOTSUPP;
> +            break;
> +        }

Be aware that this case block is about to disappear.

> --- a/xen/arch/x86/hvm/vmware/Makefile
> +++ b/xen/arch/x86/hvm/vmware/Makefile
> @@ -1 +1,2 @@
>  obj-y += vmware.o
> +obj-y += vmport.o

Alphabetically sorted please, again.

> --- /dev/null
> +++ b/xen/arch/x86/hvm/vmware/vmport.c
> @@ -0,0 +1,148 @@
> +/*
> + * HVM VMPORT emulation
> + *
> + * Copyright (C) 2012 Verizon Corporation
> + *
> + * This file is free software; you can redistribute it and/or modify it
> + * under the terms of the GNU General Public License Version 2 (GPLv2)
> + * as published by the Free Software Foundation.
> + *
> + * This file is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details. <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/lib.h>
> +#include <asm/hvm/hvm.h>
> +#include <asm/hvm/support.h>
> +
> +#include "backdoor_def.h"
> +
> +static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> +{
> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
> +
> +    /*
> +     * While VMware expects only 32-bit in, they do support using
> +     * other sizes and out.  However they do require only the 1 port
> +     * and the correct value in eax.  Since some of the data
> +     * returned in eax is smaller than 32 bits and/or you only need
> +     * the other registers, the dir and bytes do not need any
> +     * checking.  The caller will handle the bytes, and dir is
> +     * handled below for eax.
> +     */
> +    if ( port == BDOOR_PORT && regs->eax == BDOOR_MAGIC )
> +    {
> +        uint32_t new_eax = ~0u;
> +        uint64_t value;
> +        struct vcpu *curr = current;
> +        struct domain *currd = curr->domain;

Both of these ought to be possible to gain const.

> +        /*
> +         * VMware changes the other (non eax) registers ignoring dir
> +         * (IN vs OUT).  It also changes only the 32-bit part
> +         * leaving the high 32-bits unchanged, unlike what one would
> +         * expect to happen.
> +         */
> +        switch ( regs->ecx & 0xffff )
> +        {
> +        case BDOOR_CMD_GETMHZ:
> +            new_eax = currd->arch.tsc_khz / 1000;
> +            break;
> +
> +        case BDOOR_CMD_GETVERSION:
> +            /* MAGIC */
> +            regs->ebx = BDOOR_MAGIC;

I don't think the comment is of much use here.

> +            /* VERSION_MAGIC */
> +            new_eax = 6;

Didn't the earlier patch talk about version 7?

> +            /* Claim we are an ESX. VMX_TYPE_SCALABLE_SERVER */
> +            regs->ecx = 2;
> +            break;
> +
> +        case BDOOR_CMD_GETHWVERSION:
> +            /* vmware_hw */
> +            new_eax = currd->arch.hvm.vmware_hwver;
> +            /*
> +             * Returning zero is not the best.  VMware was not at
> +             * all consistent in the handling of this command until
> +             * VMware hardware version 4.  So it is better to claim
> +             * 4 than 0.  This should only happen in strange configs.
> +             */
> +            if ( !new_eax )
> +                new_eax = 4;

Doesn't ->arch.hvm.vmware_hwver == 0 mean "VMware emulation disabled"?
Or are the two settings indeed meant to be entirely independent?

> +            break;
> +
> +        case BDOOR_CMD_GETHZ:
> +        {
> +            struct segment_register sreg;
> +
> +            hvm_get_segment_register(curr, x86_seg_ss, &sreg);
> +            if ( sreg.dpl == 0 )

Do you perhaps mean hvm_get_cpl() here?

> +            {
> +                value = currd->arch.tsc_khz * 1000;

Even though value is uint64_t, you'll only ever get a 32-bit
value calculated here unless you e.g. use 1000UL.

> +                /* apic-frequency (bus speed) */
> +                regs->ecx = 1000000000ULL / APIC_BUS_CYCLE_NS;
> +                /* High part of tsc-frequency */
> +                regs->ebx = value >> 32;
> +                /* Low part of tsc-frequency */
> +                new_eax = value;
> +            }
> +            break;
> +
> +        }
> +        case BDOOR_CMD_GETTIME:
> +            value = get_localtime_us(currd) -
> +                currd->time_offset.seconds * 1000000ULL;

Whereas I don't see the need for the ULL here - seconds is a 64-bit
type already, and you'll wrongly convert from signed to unsigned.

> +            /* hostUsecs */
> +            regs->ebx = value % 1000000UL;
> +            /* hostSecs */
> +            new_eax = value / 1000000ULL;

Why once UL and once ULL? Neither of the suffixes seems necessary,
but at the very least you want to be consistent (unless there's a
reason not to be). (This, the previous, and the next comment apply
again further down.)

> +            /* maxTimeLag */
> +            regs->ecx = 1000000;

And this value is coming from where? If it can't be calculated,
please have the comment say how it was determined.

> +            /* offset to GMT in minutes */
> +            regs->edx = currd->time_offset.seconds / 60;
> +            break;
> +
> +        case BDOOR_CMD_GETTIMEFULL:
> +            /* BDOOR_MAGIC */
> +            new_eax = BDOOR_MAGIC;

Again, the comment isn't very helpful.

> +            value = get_localtime_us(currd) -
> +                currd->time_offset.seconds * 1000000ULL;
> +            /* hostUsecs */
> +            regs->ebx = value % 1000000UL;
> +            /* hostSecs low 32 bits */
> +            regs->edx = value / 1000000ULL;
> +            /* hostSecs high 32 bits */
> +            regs->esi = (value / 1000000ULL) >> 32;
> +            /* maxTimeLag */
> +            regs->ecx = 1000000;
> +            break;
> +
> +        default:
> +            /* Let backing DM handle */
> +            return X86EMUL_UNHANDLEABLE;

If so here, why not also ...

> +        }
> +        if ( dir == IOREQ_READ )
> +            *val = new_eax;
> +    }
> +    else if ( dir == IOREQ_READ )
> +        *val = ~0u;

... here?

> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -162,6 +162,9 @@ struct hvm_domain {
>      spinlock_t             uc_lock;
>      bool_t                 is_in_uc_mode;
>  
> +    /* VMware backdoor port available */
> +    bool_t                 is_vmware_port_enabled;

While as per above I assume this will go away again, as a general remark:
"bool" please in new additions, or even when just touching existing lines.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 13:15:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 13:15:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1274.4281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNyQV-0007Ig-Q3; Thu, 01 Oct 2020 13:15:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1274.4281; Thu, 01 Oct 2020 13:15:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNyQV-0007IZ-Lq; Thu, 01 Oct 2020 13:15:15 +0000
Received: by outflank-mailman (input) for mailman id 1274;
 Thu, 01 Oct 2020 13:15:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNyQU-0007IU-LZ
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 13:15:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7cc6b0bb-c1d7-42b1-8901-5e1ba26ad032;
 Thu, 01 Oct 2020 13:15:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNyQR-0006MZ-Ny; Thu, 01 Oct 2020 13:15:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNyQR-00050E-HQ; Thu, 01 Oct 2020 13:15:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNyQR-0007lC-Gv; Thu, 01 Oct 2020 13:15:11 +0000
X-Inumbo-ID: 7cc6b0bb-c1d7-42b1-8901-5e1ba26ad032
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ANxel3gnx6hA+Bx5DNPAW7Wn/zwNzTngGJGmHo83XlE=; b=iLwlAs3e6ZYaLK9We2GkqGK01B
	sH7bRmIBqBCTXKEm8aTCupAp+T/wXBpX9Ds4hsWiwcWp59/VAFPL70aYbh+9na+Bh+GA2K1bQJ6sE
	zjoQBIVOmsW8RdBHdZzwrlrpp4jugfJ+dohOeLbxadOnG4SOcEPE26hHyuetbe9bzq6I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155125-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155125: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=02de58b24d2e1b2cf947d57205bd2221d897193c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 13:15:11 +0000

flight 155125 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155125/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair          8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair          9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-raw        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  6 xen-install            fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  6 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                02de58b24d2e1b2cf947d57205bd2221d897193c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   61 days
Failing since        152366  2020-08-01 20:49:34 Z   60 days  108 attempts
Testing same since   155125  2020-09-30 06:07:55 Z    1 days    1 attempts

------------------------------------------------------------
2449 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 330552 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 13:39:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 13:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1287.4327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNyoG-000122-KL; Thu, 01 Oct 2020 13:39:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1287.4327; Thu, 01 Oct 2020 13:39:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNyoG-00011v-HB; Thu, 01 Oct 2020 13:39:48 +0000
Received: by outflank-mailman (input) for mailman id 1287;
 Thu, 01 Oct 2020 13:39:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uKIp=DI=nvidia.com=ziy@srs-us1.protection.inumbo.net>)
 id 1kNyoF-00011q-Oj
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 13:39:47 +0000
Received: from hqnvemgate24.nvidia.com (unknown [216.228.121.143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa3044b0-fd37-405a-aa11-1de46bdb0959;
 Thu, 01 Oct 2020 13:39:47 +0000 (UTC)
Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by
 hqnvemgate24.nvidia.com (using TLS: TLSv1.2, AES256-SHA)
 id <B5f75dbba0004>; Thu, 01 Oct 2020 06:38:02 -0700
Received: from [10.2.161.39] (10.124.1.5) by HQMAIL107.nvidia.com
 (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Thu, 1 Oct
 2020 13:39:44 +0000
X-Inumbo-ID: aa3044b0-fd37-405a-aa11-1de46bdb0959
From: Zi Yan <ziy@nvidia.com>
To: Thomas Gleixner <tglx@linutronix.de>
CC: Wei Liu <wei.liu@kernel.org>, Joerg Roedel <jroedel@suse.de>,
	<x86@kernel.org>, <iommu@lists.linux-foundation.org>,
	<linux-hyperv@vger.kernel.org>, <linux-pci@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: Boot crash due to "x86/msi: Consolidate MSI allocation"
Date: Thu, 1 Oct 2020 09:39:42 -0400
X-Mailer: MailMate (1.13.2r5673)
Message-ID: <2F4EC354-C0BB-44BD-86A5-07F321590C31@nvidia.com>
In-Reply-To: <874knegxtg.fsf@nanos.tec.linutronix.de>
References: <A838FF2B-11FC-42B9-87D7-A76CF46E0575@nvidia.com>
 <874knegxtg.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: multipart/signed;
	boundary="=_MailMate_E37D7855-5714-45DB-89F0-E1847DF19DAF_=";
	micalg=pgp-sha512; protocol="application/pgp-signature"
X-Originating-IP: [10.124.1.5]
X-ClientProxiedBy: HQMAIL111.nvidia.com (172.20.187.18) To
 HQMAIL107.nvidia.com (172.20.187.13)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1;
	t=1601559483; bh=sFTfs2wKiAAdF9G/OK6dAZg/42/8CaV6qHMuyOsZ5z8=;
	h=From:To:CC:Subject:Date:X-Mailer:Message-ID:In-Reply-To:
	 References:MIME-Version:Content-Type:X-Originating-IP:
	 X-ClientProxiedBy;
	b=TPHGplKHkL8cUhcoay8Vx9C6+fu8GLE1UKqjQd6vyy68K1+6s48tbdNX4Ixo22ipS
	 WefYfhwsODoZoqNxCtcVZ/XS1m+h9X/xjtquPVVQpptUXb5Hh1mzGojUag9Ifrx2yM
	 CcLqu3+MWgws6V6CjhPPn9ZJhyfDmDIhYBRrnrapdfldkQiq9WH8eIbRbSJmK5xNor
	 dl5stqBjUQfaHYUUSwlKTrVBRA5DwcHGVYv6iRlep5m8731W9A02mP3EqZsNIGJd5a
	 lxBXeGTgYkTuEwRr9JMV6JIWe8/nGcC1wKDVdvQdQPL4dPrwKWpG9kAjTtUV3dZyor
	 FntxzCcwEWz+Q==

--=_MailMate_E37D7855-5714-45DB-89F0-E1847DF19DAF_=
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On 1 Oct 2020, at 4:22, Thomas Gleixner wrote:

> Yan,
>
> On Wed, Sep 30 2020 at 21:29, Zi Yan wrote:
>> I am running linux-next on my Dell R630 and the system crashed at boot
>> time. I bisected linux-next and got to your commit:
>>
>>     x86/msi: Consolidate MSI allocation
>>
>> The crash log is below and my .config is attached.
>>
>> [   11.840905]  intel_get_irq_domain+0x24/0xb0
>> [   11.840905]  native_setup_msi_irqs+0x3b/0x90
>
> This is not really helpful because that's in the middle of the queue and
> that code is gone at the very end. Yes, it's unfortunate that this
> breaks bisection.
>
> Can you please test:
>
>    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/irq
>
> which contains fixes and if it still crashes provide the dmesg of it.
>

My system boots without any problem using this tree. Thanks.

—
Best Regards,
Yan Zi

--=_MailMate_E37D7855-5714-45DB-89F0-E1847DF19DAF_=
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQJDBAEBCgAtFiEEh7yFAW3gwjwQ4C9anbJR82th+ooFAl913B4PHHppeUBudmlk
aWEuY29tAAoJEJ2yUfNrYfqKh4EP+wa4YAP+3OSFbCe0zBYGBxknODe/+Ns2BfM/
VWWcPZpcXCkaGo+f8EeYWM8tSgeYE42mDsJcG69uCRmKMJLM352iud3Kz1JRZm2Q
vxl1pWEvmvbiYmoAYbzp8B9564jX0uWJy9zKOzXkuh/jmD2m58zviAK9adapw5b0
PvVuTaa+FYYHVYxitrag3xgZsEXqK7bgJq3fCGTZ7+lRGrmM9atX616B6Dn/xx4F
/qamJ3Ema0UE45AbrrIJ076P9RJEzaTmD6SzdDfJ0Ygc2quCjskTnm1pq+MSC9fc
5Tm8IH+O+vGPeIXwZGMIj3hYWG93yDusIHPbLgTKxj1n6ygsvtEUteBB1tQ7jtKB
dK/u/L37LOZsBwhmjN2vNjpxyawj8SmwbSOcsPhGRDNY+9vO5ZAyF0NL/42urRHD
RP+m3xqp3PJOWZASZe6RNi7y1C92d9HbCHVZnc0zOz6/Ko2VtWImx9mDmsQEqD/N
cq7tq04HF9FOGeVgrmfviH+68K/FdmZct454G2y4k7eJ3Nth6MzSVopV8PWzzqrP
Jx+WQen0XUW7f1z+mf7ZmmjWpyooywgzCBHMSEoRCsA1TRgMN1pFaAfrX4LorzON
Bl1HnvJTZH4yxyOAb1L+a+aIrqKJmwMIWT1Gm8e0FGSKuH6wPyE/CrQdXvQkbmub
3YTfhW6e
=3Jrz
-----END PGP SIGNATURE-----

--=_MailMate_E37D7855-5714-45DB-89F0-E1847DF19DAF_=--


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 13:53:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 13:53:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1290.4341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNz1D-0002jQ-TL; Thu, 01 Oct 2020 13:53:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1290.4341; Thu, 01 Oct 2020 13:53:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNz1D-0002jJ-QH; Thu, 01 Oct 2020 13:53:11 +0000
Received: by outflank-mailman (input) for mailman id 1290;
 Thu, 01 Oct 2020 13:53:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VHLv=DI=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kNz1C-0002jE-FF
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 13:53:10 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fe44ec7-aa2d-4ae7-af51-5a6dd8c21b65;
 Thu, 01 Oct 2020 13:53:08 +0000 (UTC)
X-Inumbo-ID: 8fe44ec7-aa2d-4ae7-af51-5a6dd8c21b65
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1601560387;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8QlmQU4U707WUFGOJ0XG+Q1YjhMDkPVgYfkVUA/WoFc=;
	b=jDpfVmtaeTkDdW/IvYOq5JTlvoEDvMPuXHKdBkqxDhj9NtP6sTCT7g/kMy5ECqhZUGy2+4
	++hAJ4Lf+UXj9X848nWTQXsWOUQwa0nSp8X6CBBBUzXU5W59I9y+JhIX+EAI1RusI4giOx
	WoE//JGbOJ/9aovK5UwQd1DPj/14mcwlcfKH5BL9ZmpEPYc59kRKDsf6dp6TQ7Zewxpo4q
	yL8kwmNhWt7cTao5A/py29YkkltUhHrvrLanp3xl0L310qxEaveU9Sh4MqjbrIFJvjOInj
	eMAYvmm8xoIqfs4bQEuXBHHNbgDKRhrRhhHHt1xe5fMSTnynEXYGFUqVvcIPNA==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1601560387;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=8QlmQU4U707WUFGOJ0XG+Q1YjhMDkPVgYfkVUA/WoFc=;
	b=VUPBvkhvJ43I0rw6XwXLBzv6cBppoaj+or1uLt5D2g6BaWB0D6BqJUpq/u8iIurpo4fr5G
	aeKaeW8Pag4Xo9Bg==
To: Zi Yan <ziy@nvidia.com>
Cc: Wei Liu <wei.liu@kernel.org>, Joerg Roedel <jroedel@suse.de>, x86@kernel.org, iommu@lists.linux-foundation.org, linux-hyperv@vger.kernel.org, linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: Boot crash due to "x86/msi: Consolidate MSI allocation"
In-Reply-To: <2F4EC354-C0BB-44BD-86A5-07F321590C31@nvidia.com>
References: <A838FF2B-11FC-42B9-87D7-A76CF46E0575@nvidia.com> <874knegxtg.fsf@nanos.tec.linutronix.de> <2F4EC354-C0BB-44BD-86A5-07F321590C31@nvidia.com>
Date: Thu, 01 Oct 2020 15:53:07 +0200
Message-ID: <87h7ref3y4.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

Yan,

On Thu, Oct 01 2020 at 09:39, Zi Yan wrote:
> On 1 Oct 2020, at 4:22, Thomas Gleixner wrote:
>> Can you please test:
>>
>>    git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/irq
>>
>> which contains fixes and if it still crashes provide the dmesg of it.
>
> My system boots without any problem using this tree. Thanks.

linux-next of today contains these fixes, so that should work now as
well.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 14:39:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 14:39:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1300.4362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNzjV-0006UQ-N8; Thu, 01 Oct 2020 14:38:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1300.4362; Thu, 01 Oct 2020 14:38:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNzjV-0006UJ-KA; Thu, 01 Oct 2020 14:38:57 +0000
Received: by outflank-mailman (input) for mailman id 1300;
 Thu, 01 Oct 2020 14:38:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e2ni=DI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kNzjT-0006Tn-LS
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 14:38:55 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.61]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd93f433-6cd1-4e84-a3c8-3454135f7275;
 Thu, 01 Oct 2020 14:38:53 +0000 (UTC)
Received: from DB6PR0202CA0038.eurprd02.prod.outlook.com (2603:10a6:4:a5::24)
 by HE1PR0802MB2300.eurprd08.prod.outlook.com (2603:10a6:3:c5::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Thu, 1 Oct
 2020 14:38:50 +0000
Received: from DB5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a5:cafe::d3) by DB6PR0202CA0038.outlook.office365.com
 (2603:10a6:4:a5::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32 via Frontend
 Transport; Thu, 1 Oct 2020 14:38:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT003.mail.protection.outlook.com (10.152.20.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Thu, 1 Oct 2020 14:38:50 +0000
Received: ("Tessian outbound 7fc8f57bdedc:v64");
 Thu, 01 Oct 2020 14:38:50 +0000
Received: from 59a62720c051.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CDAAF065-18DE-415C-9192-CE5D502960CB.1; 
 Thu, 01 Oct 2020 14:31:08 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 59a62720c051.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 01 Oct 2020 14:31:08 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3692.eurprd08.prod.outlook.com (2603:10a6:10:30::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Thu, 1 Oct
 2020 14:31:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Thu, 1 Oct 2020
 14:31:07 +0000
X-Inumbo-ID: cd93f433-6cd1-4e84-a3c8-3454135f7275
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fOKLAeyUWz4Ug8BSxfPKBhSvBLMnNmCHuvGaMrq1An0=;
 b=vENXuakr/R9+SYqmOiGpPY17hW6PHMOrYQc3m9pERoymjZ03LxYgg1YogfAndUeH7zZA/UttoQrf2vJO01gKbDZcU2iWGOZtJz9xaN/ov0hWo3pGA7SqDtFs7n+pdwba9ykU3TrUyTjvhIGgfrgJFnMLXzsTLOFaOsRR9afqvZI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT003.mail.protection.outlook.com (10.152.20.157) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Thu, 1 Oct 2020 14:38:50 +0000
Received: ("Tessian outbound 7fc8f57bdedc:v64"); Thu, 01 Oct 2020 14:38:50 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 84e36ada68b97d24
X-CR-MTA-TID: 64aa7808
Received: from 59a62720c051.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id CDAAF065-18DE-415C-9192-CE5D502960CB.1;
	Thu, 01 Oct 2020 14:31:08 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 59a62720c051.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Thu, 01 Oct 2020 14:31:08 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d+EHIAoPpTuKd0LliO6vjeLusJAPQsS3cg0eh72Q8gi65ad5EAXv0O7AHdNWByPOEE4JSK3hk9O1Pymo1FCLz137uo8kdWu8dXAooQ9PBC2Ufb5D+h59VOOC5o9Ww7v0H/FUEhTrbY4y8TIMwNrFORdO5mw8kK7vIGONt7Uxnf9AUuRwoNXjvJQq1LBiF7RV8qvv2PeFpD/HA1lA432RV6nnHyJ7NQsfTZO0cYRw2pHXb7BqbXebnHAo+2ptc8jbUydEOQWvhyJQDMR5yIsVQwZ/6Q3hDcsEolhJ/NkaqDIz/p2HPFNDsPWGsfp5JVYtDZq57i3buHQl6XjcpzXjQA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fOKLAeyUWz4Ug8BSxfPKBhSvBLMnNmCHuvGaMrq1An0=;
 b=Nh1U+6vSHprhFknP9fjxUDbPwUDD9dlQu3pvHN6fVol0IM1JN8pw8jGNvFYEVTa2QqJTKGc2vDd05IQykfOXbKsz1A9HUKrJrsrNlF0c+t9GDycvYGjbsV3vnTW/94+NHydIHXgNz4mBOh0w/jiMI3FiNNV7/ezLe/nrheTJSfYJnKC7RcD3OedHF0N0NgaFw7+lR8BtsTXxzTUO51YntmvYWjfNe6u6sjskkxRD3z2XHq5ti3G/MuKQN1ZfiZBHweNGgmHH0B2gmibFBNA+9JjGMY1172HW5jKkHQzxWuOdenCPKcKyf5hld4DcbrMs6AZ+yuh6mFOFZ48UlOAphw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fOKLAeyUWz4Ug8BSxfPKBhSvBLMnNmCHuvGaMrq1An0=;
 b=vENXuakr/R9+SYqmOiGpPY17hW6PHMOrYQc3m9pERoymjZ03LxYgg1YogfAndUeH7zZA/UttoQrf2vJO01gKbDZcU2iWGOZtJz9xaN/ov0hWo3pGA7SqDtFs7n+pdwba9ykU3TrUyTjvhIGgfrgJFnMLXzsTLOFaOsRR9afqvZI=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3692.eurprd08.prod.outlook.com (2603:10a6:10:30::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Thu, 1 Oct
 2020 14:31:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Thu, 1 Oct 2020
 14:31:07 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: George Dunlap <george.dunlap@citrix.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<ian.jackson@citrix.com>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Rich
 Persaud <persaur@gmail.com>, Christopher Clark
	<christopher.w.clark@gmail.com>
Subject: Re: [PATCH RFC] docs: Add minimum version dependency policy document
Thread-Topic: [PATCH RFC] docs: Add minimum version dependency policy document
Thread-Index: AQHWlylr89SNNL419USyeHY3I7qFNKmC0KsA
Date: Thu, 1 Oct 2020 14:31:07 +0000
Message-ID: <683E2686-1551-493B-A3AE-D0707C937155@arm.com>
References: <20200930125736.95203-1-george.dunlap@citrix.com>
In-Reply-To: <20200930125736.95203-1-george.dunlap@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi George,

+ Christopher Clark to have his view on what to put for Yocto.

> On 30 Sep 2020, at 13:57, George Dunlap <george.dunlap@citrix.com> wrote:
>
> Define specific criteria for how we determine what tools and
> libraries to be compatible with.  This will clarify issues such as,
> "Should we continue to support Python 2.4" moving forward.
>
> Note that CentOS 7 is set to stop receiving "normal" maintenance
> updates in "Q4 2020"; assuming that 4.15 is released after that, we
> only need to support CentOS / RHEL 8.
>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
>
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Rich Persaud <persaur@gmail.com>
> CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
> ---
> docs/index.rst                        |  2 +
> docs/policies/dependency-versions.rst | 76 +++++++++++++++++++++++++++
> 2 files changed, 78 insertions(+)
> create mode 100644 docs/policies/dependency-versions.rst
>
> diff --git a/docs/index.rst b/docs/index.rst
> index b75487a05d..ac175eacc8 100644
> --- a/docs/index.rst
> +++ b/docs/index.rst
> @@ -57,5 +57,7 @@ Miscellanea
> -----------
>
> .. toctree::
> +   :maxdepth: 1
>
> +   policies/dependency-versions
>    glossary
> diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
> new file mode 100644
> index 0000000000..d5eeb848d8
> --- /dev/null
> +++ b/docs/policies/dependency-versions.rst
> @@ -0,0 +1,76 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Build and runtime dependencies
> +==============================
> +
> +Xen depends on other programs and libraries to build and to run.
> +Choosing a minimum version of these tools to support requires a careful
> +balance: supporting older versions of these tools or libraries means
> +that Xen can compile on a wider variety of systems, but also means that Xen
> +cannot take advantage of features available in newer versions.
> +Conversely, requiring newer versions means that Xen can take advantage
> +of newer features, but cannot work on as wide a variety of systems.
> +
> +Specific dependencies and versions for a given Xen release will be
> +listed in the toplevel README, and/or specified by the ``configure``
> +system.  This document lays out the principles by which those versions
> +should be chosen.
> +
> +The general principle is this:
> +
> +    Xen should build on currently-supported versions of major distros
> +    when released.
> +
> +"Currently-supported" means whatever that distro considers "full
> +support".  For instance, at the time of writing, CentOS 7 and 8 are
> +listed as being given "Full Updates", but CentOS 6 is listed as
> +"Maintenance updates"; under this criterion, we would try to ensure
> +that Xen could build on CentOS 7 and 8, but not on CentOS 6.
> +
> +Exceptions for specific distros or tools may be made when appropriate.
> +
> +One exception to this is compiler versions for the hypervisor.
> +Support for new instructions, and in particular support for new safety
> +features, may require a newer compiler than many distros support.
> +These will be specified in the README.
> +
> +Distros we consider when deciding minimum versions
> +--------------------------------------------------
> +
> +We currently aim to support Xen building and running on the following distributions:
> +Debian_,
> +Ubuntu_,
> +OpenSUSE_,
> +Arch Linux,
> +SLES_,
> +Yocto_,
> +CentOS_,
> +and RHEL_.
> +
> +.. _Debian: https://www.debian.org/releases/
> +.. _Ubuntu: https://wiki.ubuntu.com/Releases
> +.. _OpenSUSE: https://en.opensuse.org/Lifetime
> +.. _SLES: https://www.suse.com/lifecycle/
> +.. _Yocto: https://wiki.yoctoproject.org/wiki/Releases
> +.. _CentOS: https://wiki.centos.org/About/Product
> +.. _RHEL: https://access.redhat.com/support/policy/updates/errata
> +
> +Specific distro versions supported in this release
> +--------------------------------------------------
> +
> +======== ==================
> +Distro   Supported releases
> +======== ==================
> +Debian   10 (Buster)
> +Ubuntu   20.10 (Groovy Gorilla), 20.04 (Focal Fossa), 18.04 (Bionic Beaver), 16.04 (Xenial Xerus)
> +OpenSUSE Leap 15.2
> +SLES     SLES 11, 12, 15
> +Yocto    3.1 (Dunfell)

Yocto only supports one version of Xen (as there is usually only one xen recipe
per yocto version), which can make the version here a bit tricky:
Yocto 3.1 (Dunfell) supports only Xen 4.12.
Yocto 3.2 (Gatesgarth) supports only Xen 4.14, but has a Yocto master recipe
which should actually be used by someone who wants to try Xen master.

So I would suggest putting Yocto 3.2 here only.

@Christopher: what is your view here ?

Cheers
Bertrand

> +CentOS   8
> +RHEL     8
> +======== ==================
> +
> +.. note::
> +
> +   We also support Arch Linux, but as it's a rolling distribution, the
> +   concept of "security supported releases" doesn't really apply.
> -- 
> 2.25.1
>
>



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 14:41:48 2020
Subject: Re: [XEN PATCH v14 7/8] Add IOREQ_TYPE_VMWARE_PORT
To: Don Slutz <don.slutz@gmail.com>
Cc: xen-devel@lists.xen.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Ian Jackson <iwj@xenproject.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Tim Deegan <tim@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Paul Durrant <paul@xen.org>
References: <cover.1597854907.git.don.slutz@gmail.com>
 <bfe0b9bb7b283657bc33edb7c4b425930564ca46.1597854908.git.don.slutz@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e7581f3a-71eb-3181-9128-01e22653a47e@suse.com>
Date: Thu, 1 Oct 2020 16:41:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <bfe0b9bb7b283657bc33edb7c4b425930564ca46.1597854908.git.don.slutz@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.08.2020 18:52, Don Slutz wrote:
> This adds synchronization of the 6 vcpu registers (only 32bits of
> them) that QEMU's vmport.c and vmmouse.c needs between Xen and QEMU.
> This is how VMware defined the use of these registers.
> 
> This is to avoid a 2nd and 3rd exchange between QEMU and Xen to
> fetch and put these 6 vcpu registers used by the code in QEMU's
> vmport.c and vmmouse.c

I'm unconvinced this warrants a new ioreq type, and all the overhead
associated with it. I'd be curious to know what Paul or the qemu
folks think here.

> --- a/tools/libxc/xc_dom_x86.c
> +++ b/tools/libxc/xc_dom_x86.c
> @@ -67,6 +67,7 @@
>  #define SPECIALPAGE_IOREQ    5
>  #define SPECIALPAGE_IDENT_PT 6
>  #define SPECIALPAGE_CONSOLE  7
> +#define SPECIALPAGE_VMPORT_REGS 8
>  #define special_pfn(x) \
>      (X86_HVM_END_SPECIAL_REGION - X86_HVM_NR_SPECIAL_PAGES + (x))
>  
> @@ -657,6 +658,8 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
>                       special_pfn(SPECIALPAGE_BUFIOREQ));
>      xc_hvm_param_set(xch, domid, HVM_PARAM_IOREQ_PFN,
>                       special_pfn(SPECIALPAGE_IOREQ));
> +    xc_hvm_param_set(xch, domid, HVM_PARAM_VMPORT_REGS_PFN,
> +                     special_pfn(SPECIALPAGE_VMPORT_REGS));

I don't think we want to see new special PFNs appear. This ought to
be made to work through the acquire_resource interface instead.

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -28,6 +28,8 @@
>  #include <asm/iocap.h>
>  #include <asm/vm_event.h>
>  
> +vmware_regs_t *get_vmport_regs_any(struct hvm_ioreq_server *s, struct vcpu *v);

Prototypes need to be in scope for both consumer and producer, to
ensure changes done on either side get reflected on the other (or
suitably diagnosed by the compiler).

> @@ -173,6 +175,8 @@ static int hvmemul_do_io(
>      };
>      void *p_data = (void *)data;
>      int rc;
> +    bool_t is_vmware = !is_mmio && !data_is_addr &&
> +        vmport_check_port(p.addr, p.size);

As to the data_is_addr part - what about REP INS / REP OUTS?

> @@ -189,11 +193,17 @@ static int hvmemul_do_io(
>      case STATE_IOREQ_NONE:
>          break;
>      case STATE_IORESP_READY:
> +    {
> +        uint8_t calc_type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO;
> +
> +        if ( is_vmware )
> +            calc_type = IOREQ_TYPE_VMWARE_PORT;
> +
>          vio->io_req.state = STATE_IOREQ_NONE;
>          p = vio->io_req;
>  
>          /* Verify the emulation request has been correctly re-issued */
> -        if ( (p.type != (is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO)) ||
> +        if ( (p.type != calc_type) ||
>               (p.addr != addr) ||
>               (p.size != size) ||
>               (p.count > *reps) ||
> @@ -202,7 +212,7 @@ static int hvmemul_do_io(
>               (p.data_is_ptr != data_is_addr) ||
>               (data_is_addr && (p.data != data)) )
>              domain_crash(currd);
> -
> +    }
>          if ( data_is_addr )
>              return X86EMUL_UNHANDLEABLE;
>  
> @@ -322,6 +332,49 @@ static int hvmemul_do_io(
>              }
>          }
>  
> +        if ( unlikely(is_vmware) )
> +        {
> +            vmware_regs_t *vr;
> +
> +            BUILD_BUG_ON(sizeof(ioreq_t) < sizeof(vmware_regs_t));
> +
> +            p.type = vio->io_req.type = IOREQ_TYPE_VMWARE_PORT;
> +            s = hvm_select_ioreq_server(currd, &p);
> +            vr = get_vmport_regs_any(s, curr);

The function tries to give you vr even if s is NULL - at best you're
going to have inconsistent pointers in the end. I think the function
either wants to return NULL for NULL input, or you want to avoid
calling the function when s is NULL.
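A minimal sketch of the first option (NULL in, NULL out), with stand-in
types rather than the real Xen structures; the caller's "!s || !vr" check
then collapses into a single consistent condition:

```c
#include <stddef.h>

/* Stand-in types for illustration; not the actual Xen structures. */
struct regs { int ebx; };
struct server { struct regs r; };

/* Propagate NULL: a NULL server can never yield a non-NULL regs
 * pointer, so server and regs pointers stay consistent. */
static struct regs *get_regs(struct server *s)
{
    return s ? &s->r : NULL;
}
```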

> +            /*
> +             * If there is no suitable backing DM, just ignore accesses.  If
> +             * we do not have access to registers to pass to QEMU, just
> +             * ignore access.
> +             */
> +            if ( !s || !vr )
> +            {
> +                rc = hvm_process_io_intercept(&null_handler, &p);
> +                vio->io_req.state = STATE_IOREQ_NONE;
> +            }
> +            else
> +            {
> +                const struct cpu_user_regs *regs = guest_cpu_user_regs();
> +
> +                p.data = regs->rax;
> +                /* The code in QEMU that uses these registers,
> +                 * vmport.c and vmmouse.c, only uses the 32bit part
> +                 * of the register.  This is how VMware defined the
> +                 * use of these registers.
> +                 */

Comment style (also elsewhere).

> +                vr->ebx = regs->ebx;
> +                vr->ecx = regs->ecx;
> +                vr->edx = regs->edx;
> +                vr->esi = regs->esi;
> +                vr->edi = regs->edi;

In the description you talk about 6 registers. Is ebp missing here
(and below)?

> +                rc = hvm_send_ioreq(s, &p, 0);
> +                if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
> +                    vio->io_req.state = STATE_IOREQ_NONE;
> +            }
> +            break;
> +        }
> +
>          if ( !s )
>              s = hvm_select_ioreq_server(currd, &p);

Please consider moving most of the body of the if() above to below this
point, so this remains a single, common call. Presumably even some more
code below here should remain common. The more code you duplicate, the
higher the risk of things getting updated in one place but not the
other.

> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -108,6 +108,44 @@ static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
>      return NULL;
>  }
>  
> +static vmware_regs_t *get_vmport_regs_one(struct hvm_ioreq_server *s,
> +                                          struct vcpu *v)
> +{
> +    struct hvm_ioreq_vcpu *sv;
> +
> +    list_for_each_entry ( sv, &s->ioreq_vcpu_list, list_entry )
> +    {
> +        if ( sv->vcpu == v )
> +        {
> +            shared_vmport_iopage_t *p = s->vmport_ioreq.va;
> +            if ( !p )
> +                return NULL;
> +            return &p->vcpu_vmport_regs[v->vcpu_id];
> +        }
> +    }
> +    return NULL;
> +}
> +
> +vmware_regs_t *get_vmport_regs_any(struct hvm_ioreq_server *s, struct vcpu *v)
> +{
> +    struct domain *d = v->domain;
> +    unsigned int id;
> +
> +    ASSERT((v == current) || !vcpu_runnable(v));
> +
> +    if ( s )
> +        return get_vmport_regs_one(s, v);
> +
> +    FOR_EACH_IOREQ_SERVER(d, id, s)
> +    {
> +        vmware_regs_t *ret = get_vmport_regs_one(s, v);
> +
> +        if ( ret )
> +            return ret;
> +    }
> +    return NULL;
> +}

I think the naming wants improving, to take less reference to
get_ioreq() and more to more modern, properly prefixed naming.
E.g. vmport_get_regs() with the static helper becoming
_vmport_get_regs_one() or just _vmport_get_regs() (but as per
above I'm unconvinced the helper is needed).

Also for both functions (and generally)
- add const to pointed to types whenever possible,
- have a blank line between declarations and statements,
- put a blank line before a function's main return.
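A small generic sketch (not the patch code itself) with the three style
points applied: const-qualified pointed-to types, a blank line between
declarations and statements, and a blank line before the main return:

```c
/* Count the non-zero entries of an array, styled per the review notes. */
static unsigned int count_nonzero(const int *vals, unsigned int n)
{
    unsigned int i, count = 0;

    for ( i = 0; i < n; i++ )
        if ( vals[i] )
            count++;

    return count;
}
```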

> @@ -206,6 +244,26 @@ bool handle_hvm_io_completion(struct vcpu *v)
>          return handle_mmio();
>  
>      case HVMIO_pio_completion:
> +        if ( vio->io_req.type == IOREQ_TYPE_VMWARE_PORT )
> +        {
> +            vmware_regs_t *vr = get_vmport_regs_any(NULL, v);

Why NULL? Isn't s the server you're after? Also - const.

> @@ -233,16 +291,28 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
>      unsigned int i;
>  
>      BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
> +    BUILD_BUG_ON(HVM_PARAM_VMPORT_REGS_PFN != HVM_PARAM_BUFIOREQ_PFN + 1);
>  
>      for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )

Without this for() loop changing, I don't see why you put the
BUILD_BUG_ON() here.

>      {
> -        if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
> +        if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )

I can't believe this to be a correct change, or if there is a bug
to be fixed here, for this to belong here.

> @@ -293,9 +363,29 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
>      }
>  }
>  
> -static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
> +typedef enum {
> +    ioreq_pt_ioreq,
> +    ioreq_pt_bufioreq,
> +    ioreq_pt_vmport,
> +} ioreq_pt_;

Why the trailing underscore? And may I ask what "pt" stands for?

> +static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, ioreq_pt_ pt)
>  {
> -    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +    struct hvm_ioreq_page *iorp = NULL;
> +
> +    switch ( pt )
> +    {
> +    case ioreq_pt_ioreq:
> +        iorp = &s->ioreq;
> +        break;
> +    case ioreq_pt_bufioreq:
> +        iorp = &s->bufioreq;
> +        break;
> +    case ioreq_pt_vmport:
> +        iorp = &s->vmport_ioreq;
> +        break;
> +    }
> +    ASSERT(iorp);
>  
>      if ( gfn_eq(iorp->gfn, INVALID_GFN) )
>          return;

For an ASSERT() like this, please take a look at the bottom of ./CODING_STYLE.
I think you want a default case in the switch() body with ASSERT_UNREACHABLE()
and "return" instead. (Just in case this won't go away anyway, or some similar
construct then appears elsewhere.)
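A sketch of the suggested shape, with assert(0) standing in for Xen's
ASSERT_UNREACHABLE() and stand-in type/field names throughout:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-ins for the patch's ioreq_pt_* enum and server pages. */
typedef enum { pt_ioreq, pt_bufioreq, pt_vmport } page_type;

struct pages { int ioreq, bufioreq, vmport; };

/* A default arm flags the impossible case and bails out, instead of
 * initialising the pointer to NULL and ASSERT()ing after the switch. */
static int *select_page(struct pages *p, page_type pt)
{
    switch ( pt )
    {
    case pt_ioreq:
        return &p->ioreq;

    case pt_bufioreq:
        return &p->bufioreq;

    case pt_vmport:
        return &p->vmport;

    default:
        assert(0); /* ASSERT_UNREACHABLE() in Xen */
        return NULL;
    }
}
```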

> @@ -329,7 +433,10 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
>      if ( d->is_dying )
>          return -EINVAL;
>  
> -    iorp->gfn = hvm_alloc_ioreq_gfn(s);
> +    if ( pt == ioreq_pt_vmport )
> +        iorp->gfn = hvm_alloc_legacy_vmport_gfn(s);
> +    else
> +        iorp->gfn = hvm_alloc_ioreq_gfn(s);

I'm unconvinced the separate function is warranted, in case this stays in the
first place.

> @@ -645,12 +844,38 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>      for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
>      {
>          char *name;
> +        char *type_name = NULL;
> +        unsigned int limit;
>  
> -        rc = asprintf(&name, "ioreq_server %d %s", id,
> -                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
> -                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
> -                      (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" :
> -                      "");
> +        switch ( i )
> +        {
> +        case XEN_DMOP_IO_RANGE_PORT:
> +            type_name = "port";
> +            limit = MAX_NR_IO_RANGES;
> +            break;
> +        case XEN_DMOP_IO_RANGE_MEMORY:
> +            type_name = "memory";
> +            limit = MAX_NR_IO_RANGES;
> +            break;
> +        case XEN_DMOP_IO_RANGE_PCI:
> +            type_name = "pci";
> +            limit = MAX_NR_IO_RANGES;
> +            break;
> +        case XEN_DMOP_IO_RANGE_VMWARE_PORT:
> +            type_name = "VMware port";
> +            limit = 1;
> +            break;
> +        case XEN_DMOP_IO_RANGE_TIMEOFFSET:
> +            type_name = "timeoffset";
> +            limit = 1;
> +            break;

Personally I'd prefer if you simply added a single line to the
asprintf() invocation above. I don't see at all why the time offset
thingy is appearing here (and elsewhere below) all of a sudden.
And there's no point for the limit variable afaict, as you ...

> @@ -663,7 +888,11 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
>          if ( !s->range[i] )
>              goto fail;
>  
> -        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
> +        rangeset_limit(s->range[i], limit);
> +
> +        /* VMware port */
> +        if ( i == XEN_DMOP_IO_RANGE_VMWARE_PORT && s->vmport_enabled )
> +            rc = rangeset_add_range(s->range[i], 1, 1);

... add the only wanted range here and don't allow further additions.
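The single-line alternative suggested above can be sketched like this
(snprintf() stands in for asprintf(), and the enum constants are
stand-ins for the XEN_DMOP_IO_RANGE_* values):

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Stand-in constants for the XEN_DMOP_IO_RANGE_* range types. */
enum { RANGE_PORT, RANGE_MEMORY, RANGE_PCI, RANGE_VMWARE_PORT };

/* Keep the existing conditional-operator chain and add one more arm,
 * rather than rewriting the name selection as a switch block. */
static void range_name(int id, int i, char *buf, size_t len)
{
    snprintf(buf, len, "ioreq_server %d %s", id,
             i == RANGE_PORT ? "port" :
             i == RANGE_MEMORY ? "memory" :
             i == RANGE_PCI ? "pci" :
             i == RANGE_VMWARE_PORT ? "VMware port" :
             "");
}
```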

> @@ -714,7 +945,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
>  }
>  
>  static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
> -                                 struct domain *d, int bufioreq_handling,
> +                                 struct domain *d, int flags,

Can these flags have a negative value passed?

> @@ -1282,8 +1525,9 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>      }
>      else
>      {
> -        type = (p->type == IOREQ_TYPE_PIO) ?
> -                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
> +        type = (p->type == IOREQ_TYPE_PIO) ? XEN_DMOP_IO_RANGE_PORT : 
> +            (p->type == IOREQ_TYPE_VMWARE_PORT) ? XEN_DMOP_IO_RANGE_VMWARE_PORT :
> +            XEN_DMOP_IO_RANGE_MEMORY;

Indentation.

> @@ -23,6 +24,32 @@ static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
>  {
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>  
> +#define port_overlap(p, n) \
> +    ((p + n > BDOOR_PORT) && (p + n <= BDOOR_PORT + 4) ? 1 : \
> +    (BDOOR_PORT + 4 > p) && (BDOOR_PORT + 4 <= p + n) ? 1 : 0)

It's only used in BUILD_BUG_ON(), but I still think this is far more
involved than it needs to be. The typical overlap check goes along
the lines of (start1 < end2 && start2 < end1), and there shouldn't
be any use of a conditional operator needed when all you want are
values of 0 and 1 (or true/false).

> +    BUILD_BUG_ON(port_overlap(PIT_BASE, 4));
> +    BUILD_BUG_ON(port_overlap(0x61, 1));
> +    BUILD_BUG_ON(port_overlap(XEN_HVM_DEBUGCONS_IOPORT, 1));
> +    BUILD_BUG_ON(port_overlap(0xcf8, 4));
> +/* #define TMR_VAL_ADDR_V0  (ACPI_PM_TMR_BLK_ADDRESS_V0) */
> +    BUILD_BUG_ON(port_overlap(ACPI_PM_TMR_BLK_ADDRESS_V0, 4));
> +/* #define PM1a_STS_ADDR_V0 (ACPI_PM1A_EVT_BLK_ADDRESS_V0) */
> +    BUILD_BUG_ON(port_overlap(ACPI_PM1A_EVT_BLK_ADDRESS_V0, 4));

What are these comments about?

> +    BUILD_BUG_ON(port_overlap(RTC_PORT(0), 2));
> +    BUILD_BUG_ON(port_overlap(0x3c4, 2));
> +    BUILD_BUG_ON(port_overlap(0x3ce, 2));
> +/*
> + * acpi_smi_cmd can not be checked at build time:
> + *   xen/include/asm-x86/acpi.h:extern u32 acpi_smi_cmd;
> + *   xen/arch/x86/acpi/boot.c: acpi_smi_cmd = fadt->smi_command;
> + BUILD_BUG_ON(port_overlap(acpi_smi_cmd, 1));

In this case I think the BUILD_BUG_ON() would still be better to
align with the others, even if commented out.

> +*/
> +    BUILD_BUG_ON(port_overlap(0x20, 2));
> +    BUILD_BUG_ON(port_overlap(0xa0, 2));
> +    BUILD_BUG_ON(port_overlap(0x4d0, 1));
> +    BUILD_BUG_ON(port_overlap(0x4d1, 1));

#undef port_overlap

> @@ -137,6 +164,15 @@ void vmport_register(struct domain *d)
>      register_portio_handler(d, BDOOR_PORT, 4, vmport_ioport);
>  }
>  
> +bool_t vmport_check_port(unsigned int port, unsigned int bytes)

bool

> +{
> +    struct domain *currd = current->domain;

const

> @@ -66,11 +69,25 @@ struct ioreq {
>  };
>  typedef struct ioreq ioreq_t;
>  
> +struct vmware_regs {
> +    uint32_t esi;
> +    uint32_t edi;
> +    uint32_t ebx;
> +    uint32_t ecx;
> +    uint32_t edx;
> +};
> +typedef struct vmware_regs vmware_regs_t;
> +
>  struct shared_iopage {
>      struct ioreq vcpu_ioreq[1];
>  };
>  typedef struct shared_iopage shared_iopage_t;
>  
> +struct shared_vmport_iopage {
> +    struct vmware_regs vcpu_vmport_regs[1];
> +};
> +typedef struct shared_vmport_iopage shared_vmport_iopage_t;

I wonder if this layout wouldn't better include some padding, so that
entries are a multiple of 16 bytes apart, to reduce cache line bouncing.

> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -94,8 +94,8 @@
>  #define HVM_PARAM_STORE_EVTCHN 2
>  
>  #define HVM_PARAM_IOREQ_PFN    5
> -
>  #define HVM_PARAM_BUFIOREQ_PFN 6
> +#define HVM_PARAM_VMPORT_REGS_PFN 7

Is it just lucky coincidence that 7 was unused, or are you risking
collision with some old piece of software? (But as said earlier, this
is likely to go away anyway.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 14:49:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 14:49:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1326.4395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNztp-0007ds-8D; Thu, 01 Oct 2020 14:49:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1326.4395; Thu, 01 Oct 2020 14:49:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNztp-0007dl-4e; Thu, 01 Oct 2020 14:49:37 +0000
Received: by outflank-mailman (input) for mailman id 1326;
 Thu, 01 Oct 2020 14:49:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kNzto-0007dg-8w
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 14:49:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 865ed569-e313-41a6-86d9-0a875009feb0;
 Thu, 01 Oct 2020 14:49:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNztl-0008PD-GP; Thu, 01 Oct 2020 14:49:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kNztl-00019Z-AU; Thu, 01 Oct 2020 14:49:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kNztl-000200-A1; Thu, 01 Oct 2020 14:49:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155225-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155225: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 14:49:33 +0000

flight 155225 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155225/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    1 days
Failing since        155144  2020-09-30 16:01:24 Z    0 days    7 attempts
Testing same since   155225  2020-10-01 12:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Sep 21 13:17:30 2020 +0100

    x86: Use LOCK ADD instead of MFENCE for smp_mb()
    
    MFENCE is overly heavyweight for SMP semantics on WB memory, because it also
    orders weaker cached writes, and flushes the WC buffers.
    
    This technique was used as an optimisation in Java[1], and later adopted by
    Linux[2] where it was measured to have a 60% performance improvement in VirtIO
    benchmarks.
    
    The stack is used because it is hot in the L1 cache, and a -4 offset is used
    to avoid creating a false data dependency on live data.
    
    For 64bit userspace, the Red Zone needs to be considered.  Use -32 to allow
    for a reasonable quantity of Red Zone data, but still have a 50% chance of
    hitting the same cache line as %rsp.
    
    Fix up the 32 bit definitions in HVMLoader and libxc to avoid a false data
    dependency.
    
    [1] https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
    [2] https://git.kernel.org/torvalds/c/450cbdd0125cfa5d7bbf9e2a6b6961cc48d29730
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 707eb41ae2dde4636261f631224c97e9c0b16b56
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:07 2020 +0100

    xl: implement documented '--force' option for block-detach
    
    The manpage for 'xl' documents an option to force a block device to be
    released even if the domain to which it is attached does not co-operate.
    The documentation also states that, if the force flag is not specified, the
    block-detach operation should fail.
    
    Currently the force option is not implemented and a non-forced block-detach
    will auto-force after a time-out of 10s. This patch implements the force
    option and also stops auto-forcing a non-forced block-detach by calling
    libxl_device_disk_safe_remove() rather than libxl_device_disk_remove(),
    allowing the operation to fail cleanly as per the documented behaviour.
    
    NOTE: The documentation is also adjusted since the normal positioning of
          options is before compulsory parameters. It is also noted that use of
          the --force option may lead to a guest crash.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 6df07f9fbe1e9b65a40183f79a6171200dc877dd
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:06 2020 +0100

    libxl: provide a mechanism to define a device 'safe remove' function...
    
    ... and use it to define libxl_device_disk_safe_remove().
    
    This patch builds on the existing macro magic by using a new value of the
    'force' field in libxl__ao_device.
    It is currently defined as an int but is used in a boolean manner where
    1 means the operation is forced and 0 means it is not (but is actually forced
    after a 10s time-out). In adding a third value, this patch re-defines 'force'
    as a struct type (libxl__force) with a single 'flag' field taking an
    enumerated value:
    
    LIBXL__FORCE_AUTO - corresponding to the old 0 value
    LIBXL__FORCE_ON   - corresponding to the old 1 value
    LIBXL__FORCE_OFF  - the new value
    
    The LIBXL_DEFINE_DEVICE_REMOVE() macro is then modified to define the
    libxl_device_<type>_remove() and libxl_device_<type>_destroy() functions,
    setting LIBXL__FORCE_AUTO and LIBXL__FORCE_ON (respectively) in the
    libxl__ao_device passed to libxl__initiate_device_generic_remove() and a
    new macro, LIBXL_DEFINE_DEVICE_SAFE_REMOVE(), is defined that sets
    LIBXL__FORCE_OFF instead. This macro is used to define the new
    libxl_device_disk_safe_remove() function.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom, Xenstore should set the maximum number of
    grants needed via a call to xengnttab_set_max_grants(); otherwise
    the number of domains which can be supported will be limited to 128
    (the default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory is
    limited to 8TB due to the 32bit value. Adjust the code to use 64bit
    values as input. All callers already use some form of 64bit as input,
    so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to use meson, and the
    ./configure step with meson is more restrictive than it used to
    be: most installation paths must be within the prefix, otherwise we
    get this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    To work around the limitation, we set the prefix to the same one as
    for the rest of the Xen installation, and set all the other paths
    explicitly.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 14:52:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 14:52:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1329.4409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNzwR-0008Sr-P0; Thu, 01 Oct 2020 14:52:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1329.4409; Thu, 01 Oct 2020 14:52:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kNzwR-0008Sk-Jz; Thu, 01 Oct 2020 14:52:19 +0000
Received: by outflank-mailman (input) for mailman id 1329;
 Thu, 01 Oct 2020 14:52:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CxPN=DI=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kNzwP-0008Sc-Dy
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 14:52:17 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15725e7d-bacc-4463-82fa-c196a98f17a0;
 Thu, 01 Oct 2020 14:52:16 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 28416443
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28416443"
From: George Dunlap <George.Dunlap@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<Ian.Jackson@citrix.com>, Wei Liu <wl@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Rich Persaud <persaur@gmail.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
Subject: Re: [PATCH RFC] docs: Add minimum version depencency policy document
Thread-Topic: [PATCH RFC] docs: Add minimum version depencency policy document
Thread-Index: AQHWlylRG8SsjpT4JUGR4W/Ewa0YcqmCtteAgAAfKgA=
Date: Thu, 1 Oct 2020 14:50:14 +0000
Message-ID: <F65DC414-FFA4-4990-84FF-A94503B38F3A@citrix.com>
References: <20200930125736.95203-1-george.dunlap@citrix.com>
 <868b25bd-ab2c-7f33-1dc2-9476c86d8050@citrix.com>
In-Reply-To: <868b25bd-ab2c-7f33-1dc2-9476c86d8050@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.1)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: e9eb69d4-5bdc-4dfe-512e-08d866194e20
x-ms-traffictypediagnostic: BYAPR03MB4789:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB47896580C8A55F33040A604999300@BYAPR03MB4789.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <F60D987C275E874C99F75866DCF3359C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e9eb69d4-5bdc-4dfe-512e-08d866194e20
X-MS-Exchange-CrossTenant-originalarrivaltime: 01 Oct 2020 14:50:14.2362
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 8FhKIVsINpkwp+xMCk/ppFU9K4+sChu9wOV7VgWtAy25v36FeeD+sPO89RGkL0C5LHbs6XhE9TR9u5l/aDZyOb4VgyfQhHZCu/TFgb3n5n8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4789
X-OriginatorOrg: citrix.com


> On Oct 1, 2020, at 1:58 PM, Andrew Cooper <Andrew.Cooper3@citrix.com> wrote:
> 
> On 30/09/2020 13:57, George Dunlap wrote:
>> Define a specific criteria for how we determine what tools and
>> libraries to be compatible with.  This will clarify issues such as,
>> "Should we continue to support Python 2.4" moving forward.
> 
> Luckily that one is settled.  Arguably a better option might be "what is
> the minimum toolchain to support" ?
> 
>> Note that CentOS 7 is set to stop receiving "normal" maintenance
>> updates in "Q4 2020"; assuming that 4.15 is released after that, we
>> only need to support CentOS / RHEL 8.
> 
> While I appreciate that this doesn't mean "we'll break CentOS 7 in Q4",
> I'm going to have some substantial development issues if C7 actually
> stops working, at least in the short to medium term.

You’re going to have development issues in the short-to-medium term as
C7 stops getting bug-fixes anyway.  I’m pretty sure you already
re-compile your own versions of a lot of the toolchains and libraries;
if you stay on C7 you’ll just have to add things in one by one (or use
updated versions from EPEL) as things come up.

>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> ---
>> 
>> CC: Ian Jackson <ian.jackson@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>> CC: Jan Beulich <jbeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien@xen.org>
>> CC: Rich Persaud <persaur@gmail.com>
>> CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
>> ---
>> docs/index.rst                        |  2 +
>> docs/policies/dependency-versions.rst | 76 ++++++++++++++++++++++++++++++++++++++
>> 2 files changed, 78 insertions(+)
>> create mode 100644 docs/policies/dependency-versions.rst
>> 
>> diff --git a/docs/index.rst b/docs/index.rst
>> index b75487a05d..ac175eacc8 100644
>> --- a/docs/index.rst
>> +++ b/docs/index.rst
>> @@ -57,5 +57,7 @@ Miscellanea
>> -----------
>> 
>> .. toctree::
>> +   :maxdepth: 1
>> 
>> +   policies/dependency-versions
> 
> I think it is great that this is going into Sphinx.
> 
> However, I'd prefer to avoid proliferating random things at the top
> level, to try and keep everything in a coherent structure.

I was hoping for your feedback on where to put this. :-)

> For better or worse, I guestimated at "admin guide" (end user and
> sysadmin guide), "guest docs" (VM ABI, and guest kernel developers), and
> "hypervisors docs" (hacking Xen).

Is “hypervisor” in this sense meant to mean the actual hypervisor
(xen.git/xen), or the whole hypervisor system (i.e., everything in
xen.git)?

It seems to me that we need something like the latter; in which case
maybe we should change that section to “developer documentation” or
something, with “hypervisor” as a section under that?

> I'm happy to shuffle the dividing lines if a better arrangement becomes
> obvious.  This particular doc logically lives with "building Xen from
> source".

I don’t see a “building Xen from source” section (except for Hypervisor
Documentation/Code Coverage/Compiling Xen, which is obviously specific
to code coverage).

If the main target of the page is to tell admins / downstreams what
distros we support, then it might go under “Admin Guide” somewhere.  If
the main target is to tell developers what versions they have to
support / don’t have to support, then putting it under a newly-created
“developer documentation” section would probably make the most sense.

I think I’d go with the latter, if you’re OK with it.


>>    glossary
>> diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
>> new file mode 100644
>> index 0000000000..d5eeb848d8
>> --- /dev/null
>> +++ b/docs/policies/dependency-versions.rst
>> @@ -0,0 +1,76 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Build and runtime dependencies
>> +==============================
>> +
>> +Xen depends on other programs and libraries to build and to run.
>> +Chosing a minimum version of these tools to support requires a careful
>> +balance: Supporting older versions of these tools or libraries means
>> +that X
ZW4gY2FuIGNvbXBpbGUgb24gYSB3aWRlciB2YXJpZXR5IG9mIHN5c3RlbXM7IGJ1dCBtZWFucyB0
aGF0IFhlbg0KPj4gK2Nhbm5vdCB0YWtlIGFkdmFudGFnZSBvZiBmZWF0dXJlcyBhdmFpbGFibGUg
aW4gbmV3ZXIgdmVyc2lvbnMuDQo+PiArQ29udmVyc2VseSwgcmVxdWlyaW5nIG5ld2VyIHZlcnNp
b25zIG1lYW5zIHRoYXQgWGVuIGNhbiB0YWtlIGFkdmFudGFnZQ0KPj4gK29mIG5ld2VyIGZlYXR1
cmVzLCBidXQgY2Fubm90IHdvcmsgb24gYXMgd2lkZSBhIHZhcmlldHkgb2Ygc3lzdGVtcy4NCj4+
ICsNCj4+ICtTcGVjaWZpYyBkZXBlbmRlbmNpZXMgYW5kIHZlcnNpb25zIGZvciBhIGdpdmVuIFhl
biByZWxlYXNlIHdpbGwgYmUNCj4+ICtsaXN0ZWQgaW4gdGhlIHRvcGxldmVsIFJFQURNRSwgYW5k
L29yIHNwZWNpZmllZCBieSB0aGUgYGBjb25maWd1cmVgYA0KPj4gK3N5c3RlbS4gIFRoaXMgZG9j
dW1lbnQgbGF5cyBvdXQgdGhlIHByaW5jaXBsZXMgYnkgd2hpY2ggdGhvc2UgdmVyc2lvbnMNCj4+
ICtzaG91bGQgYmUgY2hvc2VuLg0KPj4gKw0KPj4gK1RoZSBnZW5lcmFsIHByaW5jaXBsZSBpcyB0
aGlzOg0KPj4gKw0KPj4gKyAgICBYZW4gc2hvdWxkIGJ1aWxkIG9uIGN1cnJlbnRseS1zdXBwb3J0
ZWQgdmVyc2lvbnMgb2YgbWFqb3IgZGlzdHJvcw0KPj4gKyAgICB3aGVuIHJlbGVhc2VkLg0KPj4g
Kw0KPj4gKyJDdXJyZW50bHktc3VwcG9ydGVkIiBtZWFucyB3aGF0ZXZlciB0aGF0IGRpc3Rybydz
IHZlcnNpb24gb2YgImZ1bGwNCj4+ICtzdXBwb3J0Ii4gIEZvciBpbnN0YW5jZSwgYXQgdGhlIHRp
bWUgb2Ygd3JpdGluZywgQ2VudE9TIDcgYW5kIDggYXJlDQo+PiArbGlzdGVkIGFzIGJlaW5nIGdp
dmVuICJGdWxsIFVwZGF0ZXMiLCBidXQgQ2VudE9TIDYgaXMgbGlzdGVkIGFzDQo+PiArIk1haW50
ZW5hbmNlIHVwZGF0ZXMiOyB1bmRlciB0aGlzIGNyaXRlcml1bSwgd2Ugd291bGQgdHJ5IHRvIGVu
c3VyZQ0KPj4gK3RoYXQgWGVuIGNvdWxkIGJ1aWxkIG9uIENlbnRPUyA3IGFuZCA4LCBidXQgbm90
IG9uIENlbnRPUyA2Lg0KPj4gKw0KPj4gK0V4Y2VwdGlvbnMgZm9yIHNwZWNpZmljIGRpc3Ryb3Mg
b3IgdG9vbHMgbWF5IGJlIG1hZGUgd2hlbiBhcHByb3ByaWF0ZS4NCj4+ICsNCj4+ICtPbmUgZXhj
ZXB0aW9uIHRvIHRoaXMgaXMgY29tcGlsZXIgdmVyc2lvbnMgZm9yIHRoZSBoeXBlcnZpc29yLg0K
Pj4gK1N1cHBvcnQgZm9yIG5ldyBpbnN0cnVjdGlvbnMsIGFuZCBpbiBwYXJ0aWN1bGFyIHN1cHBv
cnQgZm9yIG5ldyBzYWZldHkNCj4+ICtmZWF0dXJlcywgbWF5IHJlcXVpcmUgYSBuZXdlciBjb21w
aWxlciB0aGFuIG1hbnkgZGlzdHJvcyBzdXBwb3J0Lg0KPj4gK1RoZXNlIHdpbGwgYmUgc3BlY2lm
aWVkIGluIHRoZSBSRUFETUUuDQo+IA0KPiBUaGUgcHJvYmxlbSB3ZSBoYXZlIGlzIHRoYXQgeGVu
LmdpdCBjb250YWlucyB0d28gdmVyeSBkaWZmZXJlbnQgdGhpbmdzLiANCj4gVGhlcmUgaXMgdGhl
IGh5cGVydmlzb3IgaXRzZWxmLCB3aGljaCBpcyBlbWJlZGRlZCwgYW5kIGNhbiBlYXNpbHkgYmUN
Cj4gY3Jvc3MgY29tcGlsZWQsIGFuZCB0aGVyZSBpcyB0aGUgY29udGVudCBvZiB0b29scy8gd2hp
Y2ggZGVwZW5kcyBvbiBhDQo+IGxvdCBvZiBkaXN0cm8gaW5mcmFzdHJ1Y3R1cmUuDQo+IA0KPiBX
ZSBleHBlY3QgdG9vbHMvIHRvIHdvcmsgaW4gYW55IHN1cHBvcnRlZCBkaXN0cm8sIHdpdGhvdXQg
aGF2aW5nIHRvIGRvDQo+IHdlaXJkIHRvb2xjaGFpbiBneW1uYXN0aWNzLg0KPiANCj4gRm9yIHhl
bi8gYXQgdGhlIG1vbWVudCB3ZSBoYXZlIGEgdmVyeSBvYnNvbGV0ZSB0b29sY2hhaW4gcmVxdWly
ZW1lbnRzLA0KPiBhbmQgdGhpcyBpcyBob2xkaW5nIHVzIGJhY2sgaW4gc29tZSBhcmVhcy4gIFdl
J3JlIGxvb2tpbmcgdG8gYnJpbmcgdGhhdA0KPiBmb3J3YXJkLCBhbmQgbWF5IGNvbnNpZGVyIHRo
YXQgYmVpbmcgbmV3ZXIgdGhhbiBzb21lIG9mIHRoZSBvbGQgZGlzdHJvcw0KPiBpcyBuZWNlc3Nh
cnkuDQo+IA0KPiBBdCB0aGUgbW9tZW50IGhvd2V2ZXIsIHdlIGhhdmUgcXVpdGUgYSBsb3Qgb2Yg
ZnVuY3Rpb25hbGl0eSB3aGljaCBpcw0KPiBkZXBlbmRlbnQgb24gYmVpbmcgYWJsZSB0byBkZXRl
Y3Qgc3VpdGFibGUgdG9vbGNoYWluLiAgR0NPViBhbmQgQ0VULVNTDQo+IGFyZSBleGFtcGxlcy4g
IFRoZXNlIGZlYXR1cmVzIHdpbGwgdHVybiB0aGVtc2VsdmVzIG9mZiBpbiBvbGRlciBkaXN0cm9z
LA0KPiBzbyB3aGlsZSB5b3UgY2FuICJidWlsZCIgWGVuIHRoYXQgZmFyIGJhY2ssIHlvdSBtaWdo
dCBub3QgZ2V0IGV2ZXJ5dGhpbmcuDQo+IA0KPiBGb3IgQ0VUIGluIHBhcnRpY3VsYXIsIHRoZXJl
IGlzIG5vIGZlYXNpYmxlIHdheSB0byBzdXBwb3J0IGl0IG9uIG9sZGVyDQo+IHRvb2xjaGFpbnMu
ICAodW5sZXNzIHNvbWVvbmUgY29tZXMgdXAgd2l0aCBhbiBleHRyZW1lbHkgY29udmluY2luZyB3
YXkNCj4gb2YgaGFuZC1jcmFmdGluZyBtZW1vcnkgb3BlcmFuZHMgdXNpbmcgcmF3IC5ieXRlJ3Mg
aW4gaW5saW5lIGFzc2VtYmxlci4pDQo+IA0KPiBJIGRlZmluaXRlbHkgZG9uJ3QgdGhpbmsgaXQg
aXMgdW5yZWFzb25hYmxlIGZvciB1cyB0byByZXF1aXJlIHRoZSB1c2Ugb2YNCj4gKHBvdGVudGlh
bGx5KSBibGVlZGluZyBlZGdlIHRvb2xjaGFpbnMgaWYgdGhleSB3YW50IHRvIHVzZSAocG90ZW50
aWFsbHkpDQo+IGJsZWVkaW5nIGVkZ2UgZmVhdHVyZXMuICBDRVQtU1MgaXNuJ3QgYmxlZWRpbmcg
ZWRnZSBhbnkgbW9yZSwgYnV0DQo+IENFVC1JQlQgaXMgZHVlIHRvIHRoZSBhZGRpdGlvbmFsIGxp
bmtlciB3b3JrIHJlcXVpcmVkIHRvIG1ha2UgaXQNCj4gZnVuY3Rpb24uICBBIGZ1dHVyZSBvbmUg
d2hpY2ggd2UgbmVlZCB0byBkbyBzb21ldGhpbmcgYWJvdXQgaXMgQ29udHJvbA0KPiBGbG93IElu
dGVncml0eSwgd2hpY2ggaXMgQ2xhbmcgc3BlY2lmaWMsIGRlcGVuZHMgb24gTFRPLCBhbmQgY2F1
c2VkDQo+IExpbnV4IHRvIHVwIHRoZWlyIG1pbmltdW0gc3VwcG9ydGVkIHZlcnNpb24gdG8gMTAu
MC4xIHdoaWNoIHdhcyB3aGVuIGFsbA0KPiB0aGUgYnVnZml4ZXMgZ290IG1lcmdlZC4NCg0KWW91
IHNlZW0gdG8gYmUgZXhwbGFpbmluZyB3aHkgSSB3cm90ZSB0aGlzIHBhcmFncmFwaC4gIERpZCB5
b3UgaGF2ZSBhbnkgc3BlY2lmaWMgY2hhbmdlcyB5b3Ugd2FudGVkIHRvIG1ha2U/IDotKQ0KDQo+
PiArDQo+PiArRGlzdHJvcyB3ZSBjb25zaWRlciB3aGVuIGRlY2lkaW5nIG1pbmltdW0gdmVyc2lv
bnMNCj4+ICstLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LQ0KPj4gKw0KPj4gK1dlIGN1cnJlbnRseSBhaW0gdG8gc3VwcG9ydCBYZW4gYnVpbGRpbmcgYW5k
IHJ1bm5pbmcgb24gdGhlIGZvbGxvd2luZyBkaXN0cmlidXRpb25zOg0KPj4gK0RlYmlhbl8sDQo+
PiArVWJ1bnR1XywNCj4+ICtPcGVuU1VTRV8sDQo+PiArQXJjaCBMaW51eCwNCj4gDQo+IE5vIGxp
bmsgZm9yIEFyY2g/DQoNClRoZSBsaW5rIHBvaW50cyB0byB0aGUgcGFnZSBkZXNjcmliaW5nIHRo
ZSByZWxlYXNlIGxpZmVjeWNsZXM7IEFyY2ggZG9lc27igJl0IHJlYWxseSBoYXZlIHRoYXQgY29u
Y2VwdCAoYXMgbm90ZWQgaW4gdGhlIG5leHQgc2VjdGlvbikuDQoNCkkgY291bGQgbWFrZSBpdCBz
byB0aGF0IGxpbmtzIHRvIHRoZSByZWxlYXNlIGxpZmVjeWNsZSBwYWdlIGlzIGluIHRoZSB0YWJs
ZSBiZWxvdyBpbnN0ZWFkLg0KDQo+IA0KPj4gK1NMRVNfLA0KPj4gK1lvY3RvXywNCj4+ICtDZW50
T1NfLA0KPj4gK2FuZCBSSEVMXy4NCj4+ICsNCj4+ICsuLiBfRGViaWFuOiBodHRwczovL3d3dy5k
ZWJpYW4ub3JnL3JlbGVhc2VzLw0KPj4gKy4uIF9VYnVudHU6IGh0dHBzOi8vd2lraS51YnVudHUu
Y29tL1JlbGVhc2VzDQo+PiArLi4gX09wZW5TVVNFOiBodHRwczovL2VuLm9wZW5zdXNlLm9yZy9M
aWZldGltZQ0KPj4gKy4uIF9TTEVTOiBodHRwczovL3d3dy5zdXNlLmNvbS9saWZlY3ljbGUvDQo+
PiArLi4gX1lvY3RvOiBodHRwczovL3dpa2kueW9jdG9wcm9qZWN0Lm9yZy93aWtpL1JlbGVhc2Vz
DQo+PiArLi4gX0NlbnRPUzogaHR0cHM6Ly93aWtpLmNlbnRvcy5vcmcvQWJvdXQvUHJvZHVjdA0K
Pj4gKy4uIF9SSEVMOiBodHRwczovL2FjY2Vzcy5yZWRoYXQuY29tL3N1cHBvcnQvcG9saWN5L3Vw
ZGF0ZXMvZXJyYXRhDQo+PiArDQo+PiArU3BlY2lmaWMgZGlzdHJvIHZlcnNpb25zIHN1cHBvcnRl
ZCBpbiB0aGlzIHJlbGVhc2UNCj4+ICstLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0tLS0t
LS0tLS0tLS0tLS0tLS0tLQ0KPj4gKw0KPj4gKz09PT09PT09ID09PT09PT09PT09PT09PT09PQ0K
Pj4gK0Rpc3RybyAgIFN1cHBvcnRlZCByZWxlYXNlcw0KPj4gKz09PT09PT09ID09PT09PT09PT09
PT09PT09PQ0KPj4gK0RlYmlhbiAgIDEwIChCdXN0ZXIpDQo+PiArVWJ1bnR1ICAgMjAuMTAgKEdy
b292eSBHb3JpbGxhKSwgMjAuMDQgKEZvY2FsIEZvc3NhKSwgMTguMDQgKEJpb25pYyBCZWF2ZXIp
LCAxNi4wNCAoWGVuaWFsIFhlcnVzKQ0KPj4gK09wZW5TVVNFIExlYXAgMTUuMg0KPj4gK1NMRVMg
ICAgIFNMRVMgMTEsIDEyLCAxNQ0KPj4gK1lvY3RvICAgIDMuMSAoRHVuZmVsbCkNCj4+ICtDZW50
T1MgICA4DQo+PiArUkhFTCAgICAgOA0KPj4gKz09PT09PT09ID09PT09PT09PT09PT09PT09PQ0K
PiANCj4gSG93IGFib3V0IGEgM3JkIGNvbHVtbiBmb3IgInN1cHBvcnRlZCB1bnRpbCIgPyAgSXQg
d291bGQgc3RvcCB0aGlzIHBhZ2UNCj4gYmVjb21pbmcgc3RhbGUgc2ltcGx5IG92ZXIgdGltZS4N
Cg0KSWYgd2UgZGlkIHRoYXQsIGl0IHdvdWxkIG1ha2UgdGhlIHRhYmxlIGxvbmdlciwgYXMgd2Xi
gJlkIGhhdmUgYSBzZXBhcmF0ZSByb3cgZm9yIGVhY2ggZGlzdHJvIHJlbGVhc2UgcmF0aGVyIHRo
YW4gZWFjaCBkaXN0cm8uDQoNClRoZSByZWxlYXNlIG1hbmFnZXIgbmVlZHMgdG8gbG9vayBhdCB0
aGlzIHRhYmxlIGJlZm9yZSB0aGUgcmVsZWFzZTsgZm9yIHRoYXQgdGhleeKAmWxsIGhhdmUgdG8g
Z28gdG8gdGhlIHJlbGVhc2UgbGlmZWN5Y2xlIHBhZ2Ugb2YgdGhlIHZhcmlvdXMgZGlzdHJvcyBh
bnl3YXksIHRvIHBpY2sgdXAgbmV3IHZlcnNpb25zIG9mIHRoZSBkaXN0cm8uICBTbyBJIGRvbuKA
mXQgdGhpbmsgaGF2aW5nIHRoZSBkYXRlIGhlcmUgYWRkcyB0aGF0IG11Y2guDQoNCj4gDQo+PiAr
DQo+PiArLi4gbm90ZTo6DQo+PiArDQo+PiArICAgV2UgYWxzbyBzdXBwb3J0IEFyY2ggTGludXgs
IGJ1dCBhcyBpdCdzIGEgcm9sbGluZyBkaXN0cmlidXRpb24sIHRoZQ0KPj4gKyAgIGNvbmNlcHQg
b2YgInNlY3VyaXR5IHN1cHBvcnRlZCByZWxlYXNlcyIgZG9lc24ndCByZWFsbHkgYXBwbHkuDQo+
IA0KPiBTaG91bGQgd2UgcmF0aW9uYWxpc2UgdGhpcyBsaXN0IHdpdGggdGhlIGRvY2tlciBjb250
YWluZXJzPw0KDQpXZSBkZWZpbml0ZWx5IHNob3VsZC4NCg0KRm9yIGRpc3Ryb3MgZm9yIHdoaWNo
IHdlIGNhbuKAmXQgZ2V0IGRvY2tlciBjb250YWluZXJzIGluIHRoZSBnaXRsYWIgQ0kgKGUuZy4s
IFNMRVMsIFJIRUwpLCBtYXliZSB3ZSBjb3VsZCBjb25zaWRlciByZXF1aXJpbmcgYSDigJxtYWlu
dGFpbmVy4oCdIHRvIHN0ZXAgdXAgYW5kIGJlIHJlc3BvbnNpYmxlIGZvciB0ZXN0aW5nIHRoZW0g
YmVmb3JlIGVhY2ggcmVsZWFzZS4NCg0KIC1HZW9yZ2U=


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 15:09:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 15:09:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1334.4421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0Cs-0001AZ-DU; Thu, 01 Oct 2020 15:09:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1334.4421; Thu, 01 Oct 2020 15:09:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0Cs-0001AS-A9; Thu, 01 Oct 2020 15:09:18 +0000
Received: by outflank-mailman (input) for mailman id 1334;
 Thu, 01 Oct 2020 15:09:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gWzX=DI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kO0Cr-0001AM-FN
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:09:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 652f7313-40e2-4098-9317-c7b8d1429665;
 Thu, 01 Oct 2020 15:09:15 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO0Ch-0000OS-Rq; Thu, 01 Oct 2020 15:09:07 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO0Ch-0002lw-Ga; Thu, 01 Oct 2020 15:09:07 +0000
X-Inumbo-ID: 652f7313-40e2-4098-9317-c7b8d1429665
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=dqak5iRuFQ+HAyvINf14dk18mjE6xKDgjI3hlVVwKkI=; b=H+HE/MBOqNYVw7r0aRZAipHk8P
	8VkmqIf9KFDp/GGSGSnm9VouNQZ1nVhf58f5WdAe5RkyZUZq21SAZ5eHCg0SFZqrnW9D+8xirviwJ
	N5J6g7zRiA/vDaTcR86NQQrlsL6xdJm0nsMva1OcDA9nKzFRtiRlnYkc4EwJ7/DWxmw0=;
Subject: Re: [PATCH 1/4] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200926205542.9261-1-julien@xen.org>
 <20200926205542.9261-2-julien@xen.org>
 <alpine.DEB.2.21.2009301659350.10908@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <764fb618-634b-18a5-8f45-e231c71c1bb5@xen.org>
Date: Thu, 1 Oct 2020 16:09:04 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2009301659350.10908@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 01/10/2020 01:06, Stefano Stabellini wrote:
> On Sat, 26 Sep 2020, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
>> while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
>>
>> Currently, the former are still containing x86 specific code.
>>
>> To avoid this rather strange split, the generic helpers are reworked so
>> they are arch-agnostic. This requires the introduction of a new helper
>> __acpi_os_unmap_memory() that will undo any mapping done by
>> __acpi_os_map_memory().
>>
>> Currently, the arch-helper for unmap is basically a no-op so it only
>> returns whether the mapping was arch specific. But this will change
>> in the future.
>>
>> Note that the x86 version of acpi_os_map_memory() was already able to
>> map the 1MB region. Hence why there is no addition of new code.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   xen/arch/arm/acpi/lib.c | 10 ++++++++++
>>   xen/arch/x86/acpi/lib.c | 18 ++++++++++++++++++
>>   xen/drivers/acpi/osl.c  | 34 ++++++++++++++++++----------------
>>   xen/include/xen/acpi.h  |  1 +
>>   4 files changed, 47 insertions(+), 16 deletions(-)
>>
>> diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
>> index 4fc6e17322c1..2192a5519171 100644
>> --- a/xen/arch/arm/acpi/lib.c
>> +++ b/xen/arch/arm/acpi/lib.c
>> @@ -30,6 +30,10 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>>       unsigned long base, offset, mapped_size;
>>       int idx;
>>   
>> +    /* No arch specific implementation after early boot */
>> +    if ( system_state >= SYS_STATE_boot )
>> +        return NULL;
>> +
>>       offset = phys & (PAGE_SIZE - 1);
>>       mapped_size = PAGE_SIZE - offset;
>>       set_fixmap(FIXMAP_ACPI_BEGIN, maddr_to_mfn(phys), PAGE_HYPERVISOR);
>> @@ -49,6 +53,12 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>>       return ((char *) base + offset);
>>   }
>>   
>> +bool __acpi_unmap_table(void *ptr, unsigned long size)
>> +{
>> +    return ( vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) &&
>> +             vaddr < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) );
>> +}
> 
> vaddr or ptr?  :-)
> 
> lib.c: In function '__acpi_unmap_table':
> lib.c:58:14: error: 'vaddr' undeclared (first use in this function)
>       return ( vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) &&
>                ^
> lib.c:58:14: note: each undeclared identifier is reported only once for each function it appears in
> lib.c:60:1: error: control reaches end of non-void function [-Werror=return-type]
>   }
>   ^
> cc1: all warnings being treated as errors

The whole series builds because 'vaddr' is added in the next patch. But 
I forgot to build patch by patch :(.

I will fix it in the next version.
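As an aside for readers following the thread: the split described in the commit message (arch-agnostic acpi_os_{un,}map_memory() delegating to arch hooks, with the unmap hook reporting whether it owned the mapping) can be sketched in plain C roughly as below. Everything here is a simplified stand-in (a static buffer instead of the fixmap, malloc/free instead of vmap/vunmap), not the actual Xen implementation:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

static char fixmap_slot[4096];   /* stand-in for the ACPI fixmap page */
static bool fixmap_mapped;

/* Arch hook: hand out the fixmap slot, or decline (NULL) if busy. */
static void *arch_map(void)
{
    if ( fixmap_mapped )
        return NULL;
    fixmap_mapped = true;
    return fixmap_slot;
}

/* Arch hook: returns true iff the pointer was an arch (fixmap) mapping. */
static bool arch_unmap(void *ptr)
{
    if ( ptr != (void *)fixmap_slot )
        return false;            /* not ours: let generic code handle it */
    fixmap_mapped = false;
    return true;
}

/* Generic helpers: arch-agnostic, mirroring the reworked shape. */
static void *acpi_map(void)
{
    void *p = arch_map();

    return p ? p : malloc(sizeof(fixmap_slot)); /* generic fallback */
}

static void acpi_unmap(void *ptr)
{
    if ( !arch_unmap(ptr) )      /* arch hook undoes its own mappings... */
        free(ptr);               /* ...otherwise the generic path does */
}
```

Calling acpi_map() twice hands out the fixmap once and then falls back to the generic allocator; acpi_unmap() reverses whichever path produced the pointer, which is the property the new __acpi_os_unmap_memory() return value encodes.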

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 15:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 15:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1338.4437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0IT-00023n-7H; Thu, 01 Oct 2020 15:15:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1338.4437; Thu, 01 Oct 2020 15:15:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0IT-00023g-2Q; Thu, 01 Oct 2020 15:15:05 +0000
Received: by outflank-mailman (input) for mailman id 1338;
 Thu, 01 Oct 2020 15:15:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gWzX=DI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kO0IR-00023a-Qa
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:15:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55192bde-89b2-4eb7-bab5-bf871795e01f;
 Thu, 01 Oct 2020 15:15:03 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO0IC-0000Vq-Nv; Thu, 01 Oct 2020 15:14:48 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO0IC-0003LG-4e; Thu, 01 Oct 2020 15:14:48 +0000
X-Inumbo-ID: 55192bde-89b2-4eb7-bab5-bf871795e01f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=9DvD4fxxOzIhhFkXL2N/eUTq9f0ag0x4wfuy5TxXWK8=; b=gc4Wt2mkigdAZ+w8wegIiq7faZ
	ZZAjrhamhEs83JcvllqDXKizhpx76renpbfObFUs2fYMOeDyfthm3syf/Bm6/ShmbcBuFwQriB0FC
	24YhbHs6BAaDypgg0FXvy3p86fu46xNDLuhGuS09gYRPkPrbSDk5gMGrobWfYzVCD0xA=;
Subject: Re: [PATCH 2/4] xen/arm: acpi: The fixmap area should always be
 cleared during failure/unmap
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Wei Xu <xuwei5@hisilicon.com>
References: <20200926205542.9261-1-julien@xen.org>
 <20200926205542.9261-3-julien@xen.org>
 <alpine.DEB.2.21.2009301711190.10908@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <15c30bad-0a01-bd89-325d-cf2a90a3070b@xen.org>
Date: Thu, 1 Oct 2020 16:14:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2009301711190.10908@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 01/10/2020 01:30, Stefano Stabellini wrote:
> On Sat, 26 Sep 2020, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Commit 022387ee1ad3 "xen/arm: mm: Don't open-code Xen PT update in
>> {set, clear}_fixmap()" enforced that each set_fixmap() should be
>> paired with a clear_fixmap(). Any failure to follow the model would
>> result in a platform crash.
>>
>> Unfortunately, the use of fixmap in the ACPI code was overlooked as it
>> is calling set_fixmap() but not clear_fixmap().
>>
>> The function __acpi_os_map_table() is reworked so:
>>      - We know before the mapping whether the fixmap region is big
>>      enough for the mapping.
>>      - It will fail if the fixmap is always inuse.
> 
> I take you mean "it will fail if the fixmap is *already* in use"?

Yes.

> 
> If so, can it be a problem? Or the expectation is that in practice
> __acpi_os_map_table() will only get called once before SYS_STATE_boot?
> 
> Looking at the code it would seem that even before this patch
> __acpi_os_map_table() wasn't able to handle multiple calls before
> SYS_STATE_boot.

Correct, I am not changing any expectation here. It is only making it 
clearer because before commit 022387ee1ad3 we would just overwrite the 
existing mapping with no warning.

After commit 022387ee1ad3, we would just hit the BUG_ON() in set_fixmap().

I will clarify it in the commit message.

[...]

>>   bool __acpi_unmap_table(void *ptr, unsigned long size)
>>   {
>> -    return ( vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) &&
>> -             vaddr < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) );
>> +    vaddr_t vaddr = (vaddr_t)ptr;
>> +    unsigned int idx;
>> +
>> +    /* We are only handling fixmap address in the arch code */
>> +    if ( vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) ||
>> +         vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END) )
> 
> The "+ PAGE_SIZE" got lost

Hmmm yes.

> 
> 
>> +        return false;
>> +
>> +    /*
>> +     * __acpi_map_table() will always return a pointer in the first page
>> +     * for the ACPI fixmap region. The caller is expected to free with
>> +     * the same address.
>> +     */
>> +    ASSERT((vaddr & PAGE_MASK) == FIXMAP_ADDR(FIXMAP_ACPI_BEGIN));
>> +
>> +    /* The region allocated fit in the ACPI fixmap region. */
>> +    ASSERT(size < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - vaddr));
>> +    ASSERT(fixmap_inuse);
>> +
>> +    fixmap_inuse = false;
>> +
>> +    size += FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) - vaddr;
> 
> Sorry I got confused.. Shouldn't this be:
> 
>    size += vaddr - FIXMAP_ADDR(FIXMAP_ACPI_BEGIN);
> 
> ?

It should be. :) I guess this was unnoticed because vaddr == 
FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) in my testing.

I will fix it.
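For the record, here is a standalone sketch of the arithmetic being corrected, using made-up constants rather than Xen's real FIXMAP_ADDR() values: the clear loop always starts at the beginning of the ACPI fixmap region, so the length it must cover is the mapping size plus the offset of vaddr into the region, i.e. `size += vaddr - BEGIN` (the patch had the subtraction reversed).

```c
#define PAGE_SIZE  4096UL
#define ACPI_BEGIN 0x100000UL  /* stand-in for FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) */

/*
 * Count how many fixmap slots the unmap loop clears, given a mapping of
 * 'size' bytes starting at 'vaddr' (somewhere inside the first slot).
 */
static unsigned long slots_to_clear(unsigned long vaddr, unsigned long size)
{
    unsigned long n = 0;

    size += vaddr - ACPI_BEGIN;  /* corrected direction of the subtraction */
    do
    {
        n++;
        size -= (size < PAGE_SIZE) ? size : PAGE_SIZE;
    } while ( size > 0 );

    return n;
}
```

With the operands the other way round the adjustment would be zero in the common vaddr == ACPI_BEGIN case (hiding the bug, as noted above) and wrap to a huge value otherwise.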

> 
> 
>> +    idx = FIXMAP_ACPI_BEGIN;
>> +
>> +    do
>> +    {
>> +        clear_fixmap(idx);
>> +        size -= min(size, (unsigned long)PAGE_SIZE);
>> +        idx++;
>> +    } while ( size > 0 );
>> +
>> +    return true;
>>   }
>>   
>>   /* True to indicate PSCI 0.2+ is implemented */
>> -- 
>> 2.17.1
>>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 15:17:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 15:17:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1341.4451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0Ks-0002Cu-Li; Thu, 01 Oct 2020 15:17:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1341.4451; Thu, 01 Oct 2020 15:17:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0Ks-0002Cn-HZ; Thu, 01 Oct 2020 15:17:34 +0000
Received: by outflank-mailman (input) for mailman id 1341;
 Thu, 01 Oct 2020 15:17:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uQij=DI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kO0Kq-0002Ch-Rz
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:17:32 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2180623e-ab2a-449f-a41e-227a19856e5b;
 Thu, 01 Oct 2020 15:17:31 +0000 (UTC)
X-Inumbo-ID: 2180623e-ab2a-449f-a41e-227a19856e5b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601565452;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=jBL60SOOVhAJRYV/tE73mmneTV6ZR28dsmbeF/fyPAQ=;
  b=RCbvnkVz8mOHj9BCTPVo/vkjYNjhAv7iO+/YUjgmpFHt3gVAQdxAOhL8
   KMGNA2FLjHsNmjWiwx7y/rDt7twSCwaL8Y9vkwB9tVR43qH33OcbsdVRE
   ARdXWvcm4KeXo4QzbfgiYnEEBHtgUr4hDkUlLZua47zQdzLKIRG3IFcRf
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: orGed4YpnhoXaCuiVmgKLT/A7L1lNC2kERQQDbDg5XF8v36HP3fXjdg1PYGZxpcyi0PqFtovW/
 fJadu9LobTE+A4POn6qfm93wwnIKLxXoV4xNqQZJNwVLfqTJmAEMUDRlIk0/7Gqu/f4JReSOmT
 LXaoSa6IWcDnuvG+h8QzsoLdFNp68egJ+a+6WUlzLpvuuvWbo7kjbn+NXnigXf4+GZEWP45yuO
 xXIANfE44XXRET1g1WzReomWVnuNX6Z98oY/7a08KjOUdIWlVSK0B1l8FZmmSoA18AgGeN33sG
 2SI=
X-SBRS: None
X-MesageID: 28420544
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28420544"
Subject: Re: [PATCH v9 8/8] tools/libxc: add DOMAIN_CONTEXT records to the
 migration stream...
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-9-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2d6ac6ea-a312-6bac-bc7b-ea26541c1168@citrix.com>
Date: Thu, 1 Oct 2020 16:17:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200924131030.1876-9-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 24/09/2020 14:10, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> ... and bump the version.
>
> This patch implements version 4 of the migration stream by adding the code
> necessary to save and restore DOMAIN_CONTEXT records, and removing the code
> to save the SHARED_INFO and TSC_INFO records (as these are deprecated in
> version 4).
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

This really needs to be at least 3 patches.

First to adjust tools/python/scripts/verify-stream-v2 to understand the
new changes in the stream.

My testing tends to include running the script over the result of `xl
save`, and using a script in place of libxl-save-helper which tees the
stream through the verifier, which lets you test in-line migration.  (I
wonder if it would be better to add a pass-through mode to the script
and give libxl a way of running it in the same way as it currently runs
convert-legacy-stream.)

Next, a patch updating the receive side only to understand the new
changes in the stream.  In particular, this makes it far easier to
confirm that backwards compatibility is maintained.

Finally, a patch updating the sending side, if applicable.  (I've got an
alternative suggestion to avoid burning a load of major version numbers,
but will follow up on a different patch with that).

> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wl@xen.org>
>
> v7:
>  - New in v7
> ---
>  tools/libs/guest/xg_sr_common.h       |  3 ++
>  tools/libs/guest/xg_sr_common_x86.c   | 20 -----------
>  tools/libs/guest/xg_sr_common_x86.h   |  6 ----
>  tools/libs/guest/xg_sr_restore.c      | 45 +++++++++++++++++++++--
>  tools/libs/guest/xg_sr_save.c         | 52 ++++++++++++++++++++++++++-
>  tools/libs/guest/xg_sr_save_x86_hvm.c |  5 ---
>  tools/libs/guest/xg_sr_save_x86_pv.c  | 22 ------------
>  7 files changed, 97 insertions(+), 56 deletions(-)
>
> diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
> index 13fcc47420..d440281cc1 100644
> --- a/tools/libs/guest/xg_sr_common.h
> +++ b/tools/libs/guest/xg_sr_common.h
> @@ -298,6 +298,9 @@ struct xc_sr_context
>  
>              /* Sender has invoked verify mode on the stream. */
>              bool verify;
> +
> +            /* Domain context blob. */
> +            struct xc_sr_blob context;

We already have

ctx->x86.hvm.restore.context

and are now gaining

ctx->restore.context

This is concerningly close to being ambiguous.  How about dom_context?

Also, you leak the memory allocation.  Free it in xg_sr_restore.c:cleanup().

> diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
> index b57a787519..453a383ba4 100644
> --- a/tools/libs/guest/xg_sr_restore.c
> +++ b/tools/libs/guest/xg_sr_restore.c
> @@ -529,6 +529,20 @@ static int send_checkpoint_dirty_pfn_list(struct xc_sr_context *ctx)
>      return rc;
>  }
>  
> +static int stream_complete(struct xc_sr_context *ctx)
> +{
> +    xc_interface *xch = ctx->xch;
> +    int rc;
> +
> +    rc = xc_domain_setcontext(xch, ctx->domid,
> +                              ctx->restore.context.ptr,
> +                              ctx->restore.context.size);
> +    if ( rc < 0 )
> +        PERROR("Unable to restore domain context");
> +
> +    return rc;
> +}

Please put this in the PV and HVM stream_complete() hooks.

This is somewhat of a layering violation, and it enforces an ordering
which might not be appropriate in all cases.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 15:34:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 15:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1347.4469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0bT-0003xg-6m; Thu, 01 Oct 2020 15:34:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1347.4469; Thu, 01 Oct 2020 15:34:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0bT-0003xZ-3Z; Thu, 01 Oct 2020 15:34:43 +0000
Received: by outflank-mailman (input) for mailman id 1347;
 Thu, 01 Oct 2020 15:34:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gWzX=DI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kO0bR-0003xU-3C
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:34:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 070ba41b-fccf-438a-89ec-5929db84264a;
 Thu, 01 Oct 2020 15:34:40 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO0bM-0000ul-23; Thu, 01 Oct 2020 15:34:36 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO0bL-00054I-OW; Thu, 01 Oct 2020 15:34:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gWzX=DI=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kO0bR-0003xU-3C
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:34:41 +0000
X-Inumbo-ID: 070ba41b-fccf-438a-89ec-5929db84264a
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 070ba41b-fccf-438a-89ec-5929db84264a;
	Thu, 01 Oct 2020 15:34:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=IHj7Ual2gVJMYG9+yiC0YLQbptxHutGKeHtZpaT2xRA=; b=rBm2fsaHkw++l9kbtpPtkqaYIK
	sqXGBQOKc0OQOk0NHSTfXDZY7nfv5kH9lSZjzSLyW0j8PPvEOrtPIuQmSoasPkt+Om32MbuL98ieW
	s3IBxUhYHqbPiqdW7VrcxJb2S29twqHgZZZ+ezs2Q1enzFhSOrH72ur0lBMxUKOaUDcc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kO0bM-0000ul-23; Thu, 01 Oct 2020 15:34:36 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kO0bL-00054I-OW; Thu, 01 Oct 2020 15:34:35 +0000
Subject: Re: [PATCH 4/4] xen/arm: Introduce fw_unreserved_regions() and use it
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20200926205542.9261-1-julien@xen.org>
 <20200926205542.9261-5-julien@xen.org>
 <alpine.DEB.2.21.2009301630250.10908@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <32212198-886c-4a44-ab27-8c650777b8d1@xen.org>
Date: Thu, 1 Oct 2020 16:34:33 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2009301630250.10908@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 01/10/2020 00:40, Stefano Stabellini wrote:
> On Sat, 26 Sep 2020, Julien Grall wrote:
> I have a small suggestion for improvement that could be done on commit:
> given that bootinfo is actually used on EFI systems (granted, not
> bootinfo.reserved_mem but bootinfo.mem, see
> xen/arch/arm/efi/efi-boot.h:efi_process_memory_map_bootinfo) so
> technically bootinfo could be in-use with ACPI, maybe we could add a
> comment on top of xen/include/asm-arm/setup.h:bootinfo to say that
> reserved_mem is device tree only?

That's fine with me. I will need to resend the rest of the series, so I 
will update it at the same time.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 15:36:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 15:36:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1348.4480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0d6-00045p-JP; Thu, 01 Oct 2020 15:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1348.4480; Thu, 01 Oct 2020 15:36:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0d6-00045i-GO; Thu, 01 Oct 2020 15:36:24 +0000
Received: by outflank-mailman (input) for mailman id 1348;
 Thu, 01 Oct 2020 15:36:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kO0d5-00045W-5W
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:36:23 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 454e3879-34f0-42be-9752-b7f769ec28f1;
 Thu, 01 Oct 2020 15:36:22 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=jQH2=DI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kO0d5-00045W-5W
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:36:23 +0000
X-Inumbo-ID: 454e3879-34f0-42be-9752-b7f769ec28f1
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 454e3879-34f0-42be-9752-b7f769ec28f1;
	Thu, 01 Oct 2020 15:36:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601566582;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=9/1+8AILDFhHJ17SHU4YqTmVOgEzlfLSZ8XgvI3ulq0=;
  b=dddyYhrPIfSarEWMMpDYERj04KUhG5V3PZ3dlgxVPReppdDVIbYWDkvM
   tf0WPZBUoD8N8/emBM/Mz3a+6avh75UAS81GgiO97+NYc73+qQp8rnv0O
   Y3+wtgcP7eD/52MZdATduqIVcv2JO+mqnrHdMSxjG8bII2O1P/qSpJhiU
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: i7SDzJzibXEPIMYH8BJAX+IT8uBW/cMj+idMK1F3wUDZf6G6eqPw5PTrSkLpZC/v8hbMGY8wmS
 s0CpCnYJGHvfos1L3sRY+EdYcA/P1PEQY98NTvdzroT7Kdtbrcar5C2Ao/lI8wQp2RGTnMzI/6
 qd0jO8UHKHBpiwv3oFDDDpdKOBtANitSB0YUea5rZEHBcBq9tvEwCwvk5DLsLkYON1HQ7us3Gc
 c+v5s+QVpGvHQxFMCLtBliKycFhig0eomLKjQIhmaZeMUFyq7XifE2vhkvHhIqyk5LOFmyTAeK
 vVw=
X-SBRS: None
X-MesageID: 28068741
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,323,1596513600"; 
   d="scan'208";a="28068741"
Date: Thu, 1 Oct 2020 17:36:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: George Dunlap <george.dunlap@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <ian.jackson@citrix.com>,
	Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Rich Persaud <persaur@gmail.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
Subject: Re: [PATCH RFC] docs: Add minimum version depencency policy document
Message-ID: <20201001153612.GG19254@Air-de-Roger>
References: <20200930125736.95203-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <20200930125736.95203-1-george.dunlap@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 30, 2020 at 01:57:36PM +0100, George Dunlap wrote:
> Define specific criteria for how we determine which tools and
> libraries to be compatible with.  This will clarify issues such as
> "Should we continue to support Python 2.4?" going forward.
> 
> Note that CentOS 7 is set to stop receiving "normal" maintenance
> updates in "Q4 2020"; assuming that 4.15 is released after that, we
> only need to support CentOS / RHEL 8.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> 
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Rich Persaud <persaur@gmail.com>
> CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
> ---
>  docs/index.rst                        |  2 +
>  docs/policies/dependency-versions.rst | 76 +++++++++++++++++++++++++++
>  2 files changed, 78 insertions(+)
>  create mode 100644 docs/policies/dependency-versions.rst
> 
> diff --git a/docs/index.rst b/docs/index.rst
> index b75487a05d..ac175eacc8 100644
> --- a/docs/index.rst
> +++ b/docs/index.rst
> @@ -57,5 +57,7 @@ Miscellanea
>  -----------
>  
>  .. toctree::
> +   :maxdepth: 1
>  
> +   policies/dependency-versions
>     glossary
> diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
> new file mode 100644
> index 0000000000..d5eeb848d8
> --- /dev/null
> +++ b/docs/policies/dependency-versions.rst
> @@ -0,0 +1,76 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Build and runtime dependencies
> +==============================
> +
> +Xen depends on other programs and libraries to build and to run.
> +Choosing a minimum version of these tools to support requires a careful
> +balance: Supporting older versions of these tools or libraries means
> +that Xen can compile on a wider variety of systems; but means that Xen
> +cannot take advantage of features available in newer versions.
> +Conversely, requiring newer versions means that Xen can take advantage
> +of newer features, but cannot work on as wide a variety of systems.
> +
> +Specific dependencies and versions for a given Xen release will be
> +listed in the toplevel README, and/or specified by the ``configure``
> +system.  This document lays out the principles by which those versions
> +should be chosen.
> +
> +The general principle is this:
> +
> +    Xen should build on currently-supported versions of major distros
> +    when released.
> +
> +"Currently-supported" means whatever that distro's definition of "full
> +support" is.  For instance, at the time of writing, CentOS 7 and 8 are
> +listed as being given "Full Updates", but CentOS 6 is listed as
> +"Maintenance updates"; under this criterion, we would try to ensure
> +that Xen could build on CentOS 7 and 8, but not on CentOS 6.
> +
> +Exceptions for specific distros or tools may be made when appropriate.
> +
> +One exception to this is compiler versions for the hypervisor.
> +Support for new instructions, and in particular support for new safety
> +features, may require a newer compiler than many distros support.
> +These will be specified in the README.
> +
> +Distros we consider when deciding minimum versions
> +--------------------------------------------------
> +
> +We currently aim to support Xen building and running on the following distributions:
> +Debian_,
> +Ubuntu_,
> +OpenSUSE_,
> +Arch Linux,
> +SLES_,
> +Yocto_,
> +CentOS_,
> +and RHEL_.

Could we add FreeBSD here?  I've been packaging Xen there for quite
some time now, and I try to keep it working.

It's an interesting target because it has quite a different toolchain:
it's fully LLVM-based (clang + lld), and it uses the ELF Toolchain.

https://www.freebsd.org/releases/

I'm not sure whether we want to rename the current section to "Linux
distros" and add a different one for other OSes.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 15:43:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 15:43:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1352.4493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0jI-0004zL-Ak; Thu, 01 Oct 2020 15:42:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1352.4493; Thu, 01 Oct 2020 15:42:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0jI-0004zE-7H; Thu, 01 Oct 2020 15:42:48 +0000
Received: by outflank-mailman (input) for mailman id 1352;
 Thu, 01 Oct 2020 15:42:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HoEC=DI=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1kO0jG-0004z9-Em
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:42:46 +0000
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b28433a5-057c-4b29-81f4-e8e07f927f69;
 Thu, 01 Oct 2020 15:42:44 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 7D135A2F67
 for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 779F2A2F59
 for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:42 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id cjohqrD-Xtwi for <xen-devel@lists.xenproject.org>;
 Thu,  1 Oct 2020 17:42:41 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id C786FA2F67
 for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:41 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Tmrf6RBOE5U8 for <xen-devel@lists.xenproject.org>;
 Thu,  1 Oct 2020 17:42:41 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id AA09FA2F59
 for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:41 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 9EEB720B6D
 for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:06 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id gWshB_rix9df for <xen-devel@lists.xenproject.org>;
 Thu,  1 Oct 2020 17:42:00 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id AC83A206C5
 for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:00 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id em-h3gH6oaI1 for <xen-devel@lists.xenproject.org>;
 Thu,  1 Oct 2020 17:42:00 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 8149720260
 for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:00 +0200 (CEST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HoEC=DI=cert.pl=michall@srs-us1.protection.inumbo.net>)
	id 1kO0jG-0004z9-Em
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:42:46 +0000
X-Inumbo-ID: b28433a5-057c-4b29-81f4-e8e07f927f69
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b28433a5-057c-4b29-81f4-e8e07f927f69;
	Thu, 01 Oct 2020 15:42:44 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
	by bagnar.nask.net.pl (Postfix) with ESMTP id 7D135A2F67
	for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:43 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
	by bagnar.nask.net.pl (Postfix) with ESMTP id 779F2A2F59
	for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:42 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
	by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
	with ESMTP id cjohqrD-Xtwi for <xen-devel@lists.xenproject.org>;
	Thu,  1 Oct 2020 17:42:41 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
	by bagnar.nask.net.pl (Postfix) with ESMTP id C786FA2F67
	for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:41 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
	by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
	with ESMTP id Tmrf6RBOE5U8 for <xen-devel@lists.xenproject.org>;
	Thu,  1 Oct 2020 17:42:41 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl [195.187.242.210])
	by bagnar.nask.net.pl (Postfix) with ESMTP id AA09FA2F59
	for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:41 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
	by belindir.nask.net.pl (Postfix) with ESMTP id 9EEB720B6D
	for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:06 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
	by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
	with ESMTP id gWshB_rix9df for <xen-devel@lists.xenproject.org>;
	Thu,  1 Oct 2020 17:42:00 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
	by belindir.nask.net.pl (Postfix) with ESMTP id AC83A206C5
	for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:00 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
	by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
	with ESMTP id em-h3gH6oaI1 for <xen-devel@lists.xenproject.org>;
	Thu,  1 Oct 2020 17:42:00 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
	by belindir.nask.net.pl (Postfix) with ESMTP id 8149720260
	for <xen-devel@lists.xenproject.org>; Thu,  1 Oct 2020 17:42:00 +0200 (CEST)
Date: Thu, 1 Oct 2020 17:42:00 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <1713934165.348347309.1601566920443.JavaMail.zimbra@cert.pl>
Subject: BUG: SIGSEGV in audio_pcm_sw_write with Windows 7 SP 1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - FF81 (Linux)/8.6.0_GA_1194)
Thread-Topic: SIGSEGV in audio_pcm_sw_write with Windows 7 SP 1
Thread-Index: F/7lr5jIhOMk/9GSjC42aY/5S1cqmw==

Hello,

I'm running the recent Xen master:
https://github.com/xen-project/xen/tree/d4ed1d4132f5825a795d5a78505811ecd2717b5e

When I install Windows 7 SP1, qemu-system-i386 crashes on the first
attempt to use the audio device (i.e. when Windows boots to the Desktop
and tries to play the log-in sound).

Is there some regression in qemu which is triggered by my configuration?

Enclosed: xl info, my xl.cfg, and the crash report from GDB.


Best regards,
Michał Leszczyński
CERT Polska

---

root@zen2:/opt/win7# xl info
host                   : zen2
release                : 4.19.0-10-amd64
version                : #1 SMP Debian 4.19.132-1 (2020-07-24)
machine                : x86_64
nr_cpus                : 4
max_cpu_id             : 3
nr_nodes               : 1
cores_per_socket       : 4
threads_per_core       : 1
cpu_mhz                : 3000.227
hw_caps                : bfebfbff:76faf3bf:2c100800:00000121:0000000f:029c67af:00000000:00000100
virt_caps              : pv hvm hvm_directio pv_directio hap shadow iommu_hap_pt_share
total_memory           : 16292
free_memory            : 4687
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 15
xen_extra              : -unstable
xen_version            : 4.15-unstable
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit2
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=8192M,max:8192M dom0_max_vcpus=4 dom0_vcpus_pin=1 force-ept=1 ept=pml=0 hap_1gb=0 hap_2mb=0 altp2m=1 smt=0
cc_compiler            : gcc (Debian 8.3.0-6) 8.3.0
cc_compile_by          : root
cc_compile_domain      : cert.pl
cc_compile_date        : Thu Oct  1 17:00:45 CEST 2020
build_id               : caeeb34d88d2f2bafc724be963a70ef68a9a552a
xend_config_format     : 4

---

arch = 'x86_64'
name = "vm-0"
maxmem = 3048
memory = 3048
vcpus = 2
maxvcpus = 2
builder = "hvm"
boot = "cd,menu=on,splash=/usr/share/drakrun/splash.jpg,splash-time=2000"
hap = 1
acpi = 1
on_poweroff = "destroy"
on_reboot = "restart"
on_crash = "destroy"
vnc=1
vnclisten="0.0.0.0:0,websocket=6400"
vncpasswd="sth123"
usb = 1
usbdevice = "tablet"
altp2m = 2
shadow_memory = 16
audio = 1
soundhw='hda'
cpuid="host,htt=0"
vga="stdvga"
vif = [ 'type=ioemu,model=e1000,bridge=drak0' ]
disk = [ "tap:qcow2:/var/lib/drakrun/volumes/vm-0.img,xvda,w", "file:/opt/win7/SW_DVD5_Win_Pro_7w_SP1_64BIT_Polish_-2_MLF_X17-59386.ISO,hdc:cdrom,r", "file:/var/lib/drakrun/volumes/unattended.iso,hdd:cdrom,r" ]
processor_trace_buf_kb=65536

---

Thread 1 "qemu-system-i38" received signal SIGSEGV, Segmentation fault.
audio_pcm_sw_write (sw=0x556c610f5330, buf=0x0, size=1612) at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/audio/audio.c:739
739             if (!sw->hw->pcm_ops->volume_out) {
(gdb) bt
#0  0x0000556c5e4716cb in audio_pcm_sw_write (sw=0x556c610f5330, buf=0x0, size=1612)
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/audio/audio.c:739
#1  0x0000556c5e47463e in audio_capture_mix_and_clear (hw=0x556c60f1c440, rpos=0, samples=403)
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/audio/audio.c:1069
#2  0x0000556c5e474c44 in audio_run_out (s=0x556c60f1c170)
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/audio/audio.c:1203
#3  0x0000556c5e47546e in audio_run (s=0x556c60f1c170, msg=0x556c5e9bf238 "timer")
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/audio/audio.c:1372
#4  0x0000556c5e473f35 in audio_timer (opaque=0x556c60f1c170)
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/audio/audio.c:846
#5  0x0000556c5e85f6a5 in timerlist_run_timers (timer_list=0x556c60557500)
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/util/qemu-timer.c:587
#6  0x0000556c5e85f74f in qemu_clock_run_timers (type=QEMU_CLOCK_VIRTUAL)
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/util/qemu-timer.c:601
#7  0x0000556c5e85fa0f in qemu_clock_run_all_timers ()
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/util/qemu-timer.c:687
#8  0x0000556c5e860384 in main_loop_wait (nonblocking=0)
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/util/main-loop.c:573
#9  0x0000556c5e3f1dfc in qemu_main_loop ()
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/softmmu/vl.c:1664
#10 0x0000556c5e7fda31 in main (argc=45, argv=0x7ffea849a5a8, envp=0x7ffea849a718)
    at /opt/drakvuf-sandbox/drakvuf/xen/tools/qemu-xen-dir/softmmu/main.c:49
(gdb)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 15:54:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 15:54:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1409.4505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0ut-00064X-Jd; Thu, 01 Oct 2020 15:54:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1409.4505; Thu, 01 Oct 2020 15:54:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO0ut-00064Q-GG; Thu, 01 Oct 2020 15:54:47 +0000
Received: by outflank-mailman (input) for mailman id 1409;
 Thu, 01 Oct 2020 15:54:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gWzX=DI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kO0ur-00064L-UZ
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 15:54:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13d237b5-5c42-4429-a738-e4aa19e51ad4;
 Thu, 01 Oct 2020 15:54:45 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO0uo-0001Jt-Hc; Thu, 01 Oct 2020 15:54:42 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO0uo-0006LT-94; Thu, 01 Oct 2020 15:54:42 +0000
X-Inumbo-ID: 13d237b5-5c42-4429-a738-e4aa19e51ad4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=RvZLGIdlQx8nb7ju7Dv24vnrsKcJyJjpVl7Cr0kXq/U=; b=VjMLVqR5fkW7kaOGTf8Hz9Nz2A
	5v/GyWT6bmrCiAumbJvWMlG7D/deg/6/rreBEdp9F68rjlqnL4mjsqxhBynU76tnH0ldXyIHZrP4W
	J5+cIcraRyuOuiTWcvWRMaSDtEqL4LU3kfQDblSZ046LPsHDXxArodcKD1ZafckHM2Ks=;
Subject: Re: [PATCH 07/12] evtchn: cut short evtchn_reset()'s loop in the
 common case
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <0d5ffc89-4b04-3e06-e950-f0cb171c7419@suse.com>
 <0577c62d-b349-6a60-d8bc-5b23a74342e0@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fd15073b-d9e6-541e-d4fa-2ae2f249f7cb@xen.org>
Date: Thu, 1 Oct 2020 16:54:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <0577c62d-b349-6a60-d8bc-5b23a74342e0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 28/09/2020 12:00, Jan Beulich wrote:
> The general expectation is that there are only a few open ports left
> when a domain asks its event channel configuration to be reset.
> Similarly on average half a bucket worth of event channels can be
> expected to be inactive. Try to avoid iterating over all channels, by
> utilizing usage data we're maintaining anyway.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
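[Editor's note: the shortcut described in the commit message — stop iterating once all known-active channels have been handled, using the usage counts the domain maintains anyway — can be sketched with a toy model. All names below (toy_domain, active_evtchns, toy_reset) are illustrative only, not the actual Xen code in xen/common/event_channel.c.]

```c
#include <stddef.h>

/* Toy model of a domain's event-channel bookkeeping. */
struct toy_domain {
    int valid_evtchns;        /* number of ports ever initialised */
    int active_evtchns;       /* number of ports currently open   */
    unsigned char open[1024]; /* 1 = port is open                 */
};

/*
 * Reset all channels, but cut the loop short: once we have closed as
 * many ports as were known to be active, the remaining ports cannot
 * be open, so there is no point visiting them.
 */
static int toy_reset(struct toy_domain *d)
{
    int port, closed = 0, iterated = 0;

    for ( port = 1; port < d->valid_evtchns; port++ )
    {
        if ( closed == d->active_evtchns )
            break;            /* nothing left open: stop early */
        iterated++;
        if ( d->open[port] )
        {
            d->open[port] = 0;
            closed++;
        }
    }
    d->active_evtchns = 0;
    return iterated;          /* ports actually visited */
}
```

With 1024 valid ports but only ports 1-3 open, the loop visits 3 ports instead of 1023, which is the "only a few open ports left" expectation from the commit message.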

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:03:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:03:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1412.4517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO134-0007WP-Ag; Thu, 01 Oct 2020 16:03:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1412.4517; Thu, 01 Oct 2020 16:03:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO134-0007WI-7i; Thu, 01 Oct 2020 16:03:14 +0000
Received: by outflank-mailman (input) for mailman id 1412;
 Thu, 01 Oct 2020 16:03:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bOcq=DI=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kO132-0007WC-Tl
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:03:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a6801f8-9719-46f2-9a27-28a0f58f23e7;
 Thu, 01 Oct 2020 16:03:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D59E5AD5F;
 Thu,  1 Oct 2020 16:03:07 +0000 (UTC)
X-Inumbo-ID: 6a6801f8-9719-46f2-9a27-28a0f58f23e7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601568187;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zueDiCrYxomDQCPHoxYGh9olkGatqPlUX0kc0xMr3d4=;
	b=Lo8tfwdfisR5LSdhtFaHHeHx0lDbxlait+JgOZTF+yiWZ1n9Mb23WC4iiDHDeVIv6E+dLB
	xzYYAlXyz33WmV/OKuDhdX4CnI/v59cFWsTCHbfux9n0SfpczZVPxd0wsx93r82Grv92uM
	4nwThj2ST7JRUPs3jDDrTWSKHnArJsw=
Subject: Ping: [PATCH 0/6] tools/include: adjustments to the population of
 xen/
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <2a9f86aa-9104-8a45-cd21-72acd693f924@suse.com>
Message-ID: <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
Date: Thu, 1 Oct 2020 18:03:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <2a9f86aa-9104-8a45-cd21-72acd693f924@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.09.2020 14:09, Jan Beulich wrote:
> While looking at what it would take to move around libelf/
> in the hypervisor subtree, I've run into this rule, which I
> think can do with a few improvements and some simplification.
> 
> 1: adjust population of acpi/
> 2: fix (drop) dependencies of when to populate xen/
> 3: adjust population of public headers into xen/
> 4: properly install Arm public headers
> 5: adjust x86-specific population of xen/
> 6: drop remaining -f from ln invocations

May I ask for an ack or otherwise here?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:05:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:05:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1415.4528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO14k-0007fc-Ms; Thu, 01 Oct 2020 16:04:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1415.4528; Thu, 01 Oct 2020 16:04:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO14k-0007fV-Jc; Thu, 01 Oct 2020 16:04:58 +0000
Received: by outflank-mailman (input) for mailman id 1415;
 Thu, 01 Oct 2020 16:04:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gWzX=DI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kO14j-0007fP-5U
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:04:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1bb75349-7a6d-4c72-87eb-135e2454df83;
 Thu, 01 Oct 2020 16:04:55 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO14b-00024g-GH; Thu, 01 Oct 2020 16:04:49 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO14b-0007Cv-8l; Thu, 01 Oct 2020 16:04:49 +0000
X-Inumbo-ID: 1bb75349-7a6d-4c72-87eb-135e2454df83
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=0erPlQYdx/puuWFhuIBlj1yB5OY2G4NsMEejUtCMH5g=; b=e2y8bxHcemq8W/reZfDahj933f
	Y+XdwMfIO5RqOsq4my5I6mFljfu7rdqLkqaoautwJIF3yK7ATu+FmGOvCiGraZUKHqvzE2m3Z/hY1
	pffd+fRtZ0SGh46IH0Q/0DrVNLsngPQtZFVbC6zHSg4Aqyf0tr9KrVkeVrONhhDh+jXw=;
Subject: Re: [PATCH] evtchn/Flask: pre-allocate node on send path
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Daniel de Graaf <dgdegra@tycho.nsa.gov>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <f633e95e-11e7-ccfc-07ce-7cc817fcd7fe@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e5d5dfee-aeee-ed3d-bcea-91e82198e04f@xen.org>
Date: Thu, 1 Oct 2020 17:04:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <f633e95e-11e7-ccfc-07ce-7cc817fcd7fe@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 24/09/2020 11:53, Jan Beulich wrote:
> xmalloc() & Co may not be called with IRQs off, or else check_lock()
> will have its assertion trigger about locks getting acquired
> inconsistently. Re-arranging the locking in evtchn_send() doesn't seem
> very reasonable, especially since the per-channel lock was introduced to
> avoid acquiring the per-domain event lock on the send paths. Issue a
> second call to xsm_evtchn_send() instead, before acquiring the lock, to
> give XSM / Flask a chance to pre-allocate whatever it may need.
> 
> As these nodes are used merely for caching earlier decisions' results,
> allocate just one node in AVC code despite two potentially being needed.
> Things will merely be not as performant if a second allocation was
> wanted, just like when the pre-allocation fails.
> 
> Fixes: c0ddc8634845 ("evtchn: convert per-channel lock to be IRQ-safe")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

As discussed on the community call, and with one comment below:

Acked-by: Julien Grall <jgrall@amazon.com>

> ---
> TBD: An even easier fix could be to simply guard xzalloc() by a
>       conditional checking local_irq_is_enabled(), but for a domain
>       sending only interdomain events this would mean AVC's node caching
>       would never take effect on the sending path, as allocation would
>       then always be avoided.
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -723,6 +723,12 @@ int evtchn_send(struct domain *ld, unsig
>       if ( !port_is_valid(ld, lport) )
>           return -EINVAL;
>   
> +    /*
> +     * As the call further down needs to avoid allocations (due to running
> +     * with IRQs off), give XSM a chance to pre-allocate if needed.
> +     */
> +    xsm_evtchn_send(XSM_HOOK, ld, NULL);

I would suggest adding a comment on top of the evtchn_send callback in 
the XSM hook. This would be helpful for any developer of a new XSM policy.
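[Editor's note: the pattern in the patch — allocate while IRQs are still enabled, then consume the cached node on the IRQs-off send path — can be sketched as below. The names cached_node, avc_prealloc_node and avc_get_node are illustrative only, not the real Flask/AVC code; the real patch caches a single node for the same reason.]

```c
#include <stdlib.h>

/* One cached node, mirroring the "allocate just one node" choice in
 * the patch: a second allocation merely costs performance, exactly as
 * if the pre-allocation had failed. */
static void *cached_node;

/* Called before taking the per-channel lock, with IRQs still on, so
 * calling the allocator here is safe. */
static void avc_prealloc_node(void)
{
    if ( !cached_node )
        cached_node = malloc(64);
}

/* Called on the send path with IRQs off: it must not allocate, so it
 * only consumes whatever was pre-allocated (possibly NULL). */
static void *avc_get_node(void)
{
    void *n = cached_node;
    cached_node = NULL;
    return n;
}
```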

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:10:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:10:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1423.4541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO19j-0008Og-BL; Thu, 01 Oct 2020 16:10:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1423.4541; Thu, 01 Oct 2020 16:10:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO19j-0008O3-7t; Thu, 01 Oct 2020 16:10:07 +0000
Received: by outflank-mailman (input) for mailman id 1423;
 Thu, 01 Oct 2020 16:10:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e2ni=DI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kO19h-000885-K5
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:10:05 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.46]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b12a177-f55d-4de2-b13d-9826d2160811;
 Thu, 01 Oct 2020 16:10:03 +0000 (UTC)
Received: from AM6P194CA0062.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::39)
 by VE1PR08MB5822.eurprd08.prod.outlook.com (2603:10a6:800:1a7::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3412.22; Thu, 1 Oct
 2020 16:10:01 +0000
Received: from AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::d9) by AM6P194CA0062.outlook.office365.com
 (2603:10a6:209:84::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35 via Frontend
 Transport; Thu, 1 Oct 2020 16:10:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT037.mail.protection.outlook.com (10.152.17.241) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Thu, 1 Oct 2020 16:10:01 +0000
Received: ("Tessian outbound 7161e0c2a082:v64");
 Thu, 01 Oct 2020 16:10:01 +0000
Received: from 79c45fb50dbd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 ECC83E17-10DC-443A-8563-C8B2826F9F00.1; 
 Thu, 01 Oct 2020 16:09:23 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 79c45fb50dbd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 01 Oct 2020 16:09:23 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3594.eurprd08.prod.outlook.com (2603:10a6:10:4e::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Thu, 1 Oct
 2020 16:09:22 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Thu, 1 Oct 2020
 16:09:22 +0000
X-Inumbo-ID: 5b12a177-f55d-4de2-b13d-9826d2160811
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cE87yPdEd0Vb6O8VXh/1P4LZ63afPFJQdYv776+yqXE=;
 b=zCxltG1DpFkRlyij2ZDSk3rUdu1IkUao/ATiGVE7gWFnyA7tJcmEtOMrL+w8nUhXzaRdjDKoqX/Hwc7ZA88IEO0H55fTI/gBFA5sHLqAx7h6VIDojnhdg5qdZA3bmwc4UzpjyoXJGY7/C4zqVWUNC+n2WBdJql0syNto3g4+CNM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 72bda2ce2a8f91dc
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U7xoXCIL72sRraiEpM/Qjnh5qetlxUC0j1kHCoVw8/JZ3n0ZhYzKGFj5zgmdnccANM/d3ATJ5eQG71WrFMx211Dzig7Uff37oYMS23MVwcSBsUl2G//5TS5rqGapl0Iqt+FdXKFTTRD1MyPaOJPN1hXws0SaNkAKYf61l3Ua7sdqzVYudXY4g4UVmOoqh5ot9P1WpgbM++5bbJ/IbSUU6l6Pjpdaw67QuWVABlj4QrpRi4gyfRQK1GeKC3c0NuE9HUvhbsyXWqI78VwgTrpyi8QnhtVZzGztO8qFqw0JNgukn3A6m+n/XXRvfbkbbttPoRoPdSaSdMsuQePK6hWjuQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cE87yPdEd0Vb6O8VXh/1P4LZ63afPFJQdYv776+yqXE=;
 b=D0842UJgdZ+NysDtJxGCXrLk2/uIcvKrvFqtIFCCFrW0+BO1Jk2i51sM5kpQ9tcJovmnEIogYYaLYFmkRfIjP+ezrpAi2t4962rXH7rAnwukCxtWiz2IcIOpBOd/04TmsFmH4Yx41brCgh/jN15tjoVSXia2tSp9onpxwz0gJxFN/7hr/5Tq91m1IgJ7saM0yqqHEUqSfoI+scpUmBjx00fmbKDo25zMS8SKqaKpti64BQIdb0XH3FErfBTlwov9hIbmXE++2kkWdB7nG7Jn0H6RE7ob/Aa8tEWxNAbhLtvOv6xnqiXpj9dvuN07aorXmM1cI06ZD3efwoz5OeASow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Juergen Gross <jgross@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Samuel Thibault
	<samuel.thibault@ens-lyon.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2 3/3] tools/lixenguest: hide struct elf_dom_parms layout
 from users
Thread-Topic: [PATCH v2 3/3] tools/lixenguest: hide struct elf_dom_parms
 layout from users
Thread-Index: AQHWkwRX9mhuhJlLT02HG0+RUuXJnKmC9GmA
Date: Thu, 1 Oct 2020 16:09:22 +0000
Message-ID: <AFBA8F7E-D421-4BF3-A8F0-9204A0A9BB6A@arm.com>
References: <20200925062031.12200-1-jgross@suse.com>
 <20200925062031.12200-4-jgross@suse.com>
In-Reply-To: <20200925062031.12200-4-jgross@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 86a842ba-227a-4e8a-480c-08d866247369
x-ms-traffictypediagnostic: DB7PR08MB3594:|VE1PR08MB5822:
X-Microsoft-Antispam-PRVS:
	<VE1PR08MB58223F463C9AB153567952779D300@VE1PR08MB5822.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 9e/tnVrNcMTTNe6zRVGKp6a1xG050e5aciGKFZCgoGB8OCVkoLQhW4S1PDigXHsdHnR88j+OQcpMqNpLog9cYPgwkXXyAcImsQnSo97ZVV3o1xKvqpmN7FJNlsWLInxAKbrnmd0cuiXDbblpk+CIcC3jfKPP7RhLzHgtwfBE0j8L5hXViQMuOL+yPsvYWbW95IvKKT7miUeBgAnwx05r7013WU9a4PcWl0WKEEG0B4859jQ8oHaz/l/z3NyWH9krqJyhMwokeVUMjYEqv97jobapRBV39CPBfqqKtIDjWlSwZm93xQeReEuGwA2HR0KI7LwyJyzx7D4Xu83HYjoltxG6yzD/CXTFSZK7OSFITv5c5MOmIkLkJipoa5q18B1WqTiFxpVKHOA3nx8aITvNnZ3M8xHWMlXcfxUQnZk6W7Y=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39850400004)(366004)(136003)(346002)(376002)(6506007)(186003)(66946007)(66446008)(54906003)(316002)(76116006)(33656002)(36756003)(64756008)(66556008)(66476007)(6486002)(86362001)(6916009)(30864003)(2906002)(6512007)(26005)(5660300002)(91956017)(8676002)(53546011)(2616005)(71200400001)(83380400001)(8936002)(478600001)(4326008)(26730200005)(19860200003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 XL6PQ4lFaAQ+wB5B0IxCul1bjT1kLwzCR9JesXzdN9dxqPeKc2P5meo7dpZnPT70ULyFj6Zue8hs1/BANO2qbEx6S8LCkmMDJWCJQJV9HoODGKo6BcEAET3teg0Ig7xMiEBW5I5wowYlyzob6eKOJCErvwml3upRPJuSt2TxEFSVtg3sd3aIQTk5Sv5+sIAL3UlioR2TmPUYAWaSUBKEyRNFvZynrAQ0L7k6Q1bXs730RqyGU5bacU+OZz9QEvFlbAOwUta8XCp5w6qCu6F6SbCenmlCDZuSMnAcwwFhdmD0LxFb/8ADJtvOAnY2B7ZX7+oFE+98FGArMgDFvBVpbKkBPk4B7oHKHZ+yfUIaX+/g0NSn1+iheRJBKHTBezdSFXsDPwtLEaacgU6D20BVG3aglUfJYgPZwkw0jus8GM1t4LYbwuNtck6JFV6i84zHaici/z6GsVMcPdN8uD3u3tZDm71Qv3d8H3hj7v5jjHy/UG+NM2r9qo2oQDTvg/TiB8/25YuJbBQ4I4GLt7Lfilb/p/IyNTwdvcKbOvsIiv/gi1DZXkePygFTp+hq30woaMvgxQQxJw3AiTPsdAvMdBWGVXp4wB6lXRv4+MqWhzS3wgZ33veUwHuWsTVXHevPLvaGimiGAJzu96g6s/eSAw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <37156746A6EBA84A92CDCC5BB2E67D1A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3594
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7c6cd08c-2abf-4856-9518-08d866245c66
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	F5F5lQYNMY2TdMydDFJMySG0DiiY9sX/OdPYK/pnTfKCs0qA+cc7e7okaY9SYwP8Yh2C0LyYz10PN608ii0h4S/5smBtywDYKH5RvwpNjXdMIBBr4YGGPw8qDf9kjFWjV0zTXofa1KbhNWMhz9r/kXM2VC9qa8P+ay4h7bCIL0iQ+vX5zBhJ1SYuM+H8tPJs46LsmvhWxSpguqOeKcxOCH2/XrWlsDhFHlbFtXDkdk+pr5lyGYG32zwLvMNbyibUxbEBMUZN22uUSC4tirJAxdcGQme0gQ4PUcnmKIr3sCYIXkW0lET7CBIImhUCZno/5nqcGbXj4vebMrsvxhlkuw9e3U0kZSLQwfIaVmV6xtJR/8qKArgBCFj37eVY79BS83QR7x+nsc5SmAvw+Xaw8tJWe68qTAKtvFPbhcXgpMC04noi6AGf9EH/Lzgr6Kh3sVQmrAQg/95cD9lZLFJGWCxV/O5x6syabWKv8U3qsjbccSu4Y/eWwRo5aV4zJ9//
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(376002)(39850400004)(346002)(46966005)(336012)(83380400001)(54906003)(6862004)(6486002)(356005)(8676002)(8936002)(6506007)(36756003)(53546011)(2616005)(4326008)(107886003)(82740400003)(81166007)(33656002)(70206006)(70586007)(30864003)(26005)(2906002)(478600001)(86362001)(5660300002)(316002)(47076004)(36906005)(186003)(82310400003)(6512007)(26730200005)(19860200003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Oct 2020 16:10:01.3340
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 86a842ba-227a-4e8a-480c-08d866247369
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5822

Hi Juergen,

> On 25 Sep 2020, at 07:20, Juergen Gross <jgross@suse.com> wrote:
> 
> Don't include struct elf_dom_parms in struct xc_dom_image, but rather
> use a pointer to reference it. Together with adding accessor functions
> for the externally needed elements this enables to drop including the
> Xen private header xen/libelf/libelf.h from xenguest.h.

There are several places in xg_dom_elfloader.c, xg_dom_arm.c and
xg_dom_armzimageloader.c that would also need to be fixed, as they
still use dom->parms directly.
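
[Editor's note: the accessor approach under discussion — dom->parms becomes a private pointer and callers go through getters such as xc_dom_virt_base()/xc_dom_virt_entry(), as seen in the quoted kexec.c hunks — looks roughly like the sketch below. The struct layouts are simplified and hypothetical; the real interface lives in tools/libs/guest.]

```c
#include <stdint.h>

/* Private to libxenguest once the patch lands; shown here only to
 * make the sketch self-contained. */
struct elf_dom_parms {
    uint64_t virt_base;
    uint64_t virt_entry;
};

/* Callers no longer see the layout of elf_dom_parms, only a pointer. */
struct xc_dom_image {
    struct elf_dom_parms *parms;
};

/* Accessors: the only way external code reaches the fields, so the
 * struct can change without breaking the library ABI. */
uint64_t xc_dom_virt_base(const struct xc_dom_image *dom)
{
    return dom->parms->virt_base;
}

uint64_t xc_dom_virt_entry(const struct xc_dom_image *dom)
{
    return dom->parms->virt_entry;
}
```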

Cheers
Bertrand

> 
> Fixes: 7e0165c19387 ("tools/libxc: untangle libxenctrl from libxenguest")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> stubdom/grub/kexec.c                | 18 +++---
> tools/libs/guest/include/xenguest.h | 29 +++-------
> tools/libs/guest/xg_dom_core.c      | 85 +++++++++++++++++++++++------
> tools/libs/guest/xg_private.h       |  1 +
> tools/libxl/libxl_x86_acpi.c        |  5 +-
> 5 files changed, 88 insertions(+), 50 deletions(-)
> 
> diff --git a/stubdom/grub/kexec.c b/stubdom/grub/kexec.c
> index e9a69d2a32..3da80b5b4a 100644
> --- a/stubdom/grub/kexec.c
> +++ b/stubdom/grub/kexec.c
> @@ -222,6 +222,7 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char
>     char features[] = "";
>     struct mmu_update *m2p_updates;
>     unsigned long nr_m2p_updates;
> +    uint64_t virt_base;
> 
>     DEBUG("booting with cmdline %s\n", cmdline);
>     xc_handle = xc_interface_open(0,0,0);
> @@ -294,10 +295,11 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char
>         goto out;
>     }
> 
> +    virt_base = xc_dom_virt_base(dom);
>     /* copy hypercall page */
>     /* TODO: domctl instead, but requires privileges */
> -    if (dom->parms.virt_hypercall != -1) {
> -        pfn = PHYS_PFN(dom->parms.virt_hypercall - dom->parms.virt_base);
> +    if (xc_dom_virt_hypercall(dom) != -1) {
> +        pfn = PHYS_PFN(xc_dom_virt_hypercall(dom) - virt_base);
>         memcpy((void *) pages[pfn], hypercall_page, PAGE_SIZE);
>     }
> 
> @@ -313,11 +315,11 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char
>     /* Move current console, xenstore and boot MFNs to the allocated place */
>     do_exchange(dom, dom->console_pfn, start_info.console.domU.mfn);
>     do_exchange(dom, dom->xenstore_pfn, start_info.store_mfn);
> -    DEBUG("virt base at %llx\n", dom->parms.virt_base);
> +    DEBUG("virt base at %llx\n", virt_base);
>     DEBUG("bootstack_pfn %lx\n", dom->bootstack_pfn);
> -    _boot_target = dom->parms.virt_base + PFN_PHYS(dom->bootstack_pfn);
> +    _boot_target = virt_base + PFN_PHYS(dom->bootstack_pfn);
>     DEBUG("_boot_target %lx\n", _boot_target);
> -    do_exchange(dom, PHYS_PFN(_boot_target - dom->parms.virt_base),
> +    do_exchange(dom, PHYS_PFN(_boot_target - virt_base),
>             virt_to_mfn(&_boot_page));
> 
>     if ( dom->arch_hooks->setup_pgtables )
> @@ -373,13 +375,13 @@ void kexec(void *kernel, long kernel_size, void *module, long module_size, char
>     _boot_oldpdmfn = virt_to_mfn(start_info.pt_base);
>     DEBUG("boot old pd mfn %lx\n", _boot_oldpdmfn);
>     DEBUG("boot pd virt %lx\n", dom->pgtables_seg.vstart);
> -    _boot_pdmfn = dom->pv_p2m[PHYS_PFN(dom->pgtables_seg.vstart - dom->parms.virt_base)];
> +    _boot_pdmfn = dom->pv_p2m[PHYS_PFN(dom->pgtables_seg.vstart - virt_base)];
>     DEBUG("boot pd mfn %lx\n", _boot_pdmfn);
>     _boot_stack = _boot_target + PAGE_SIZE;
>     DEBUG("boot stack %lx\n", _boot_stack);
> -    _boot_start_info = dom->parms.virt_base + PFN_PHYS(dom->start_info_pfn);
> +    _boot_start_info = virt_base + PFN_PHYS(dom->start_info_pfn);
>     DEBUG("boot start info %lx\n", _boot_start_info);
> -    _boot_start = dom->parms.virt_entry;
> +    _boot_start = xc_dom_virt_entry(dom);
>     DEBUG("boot start %lx\n", _boot_start);
> 
>     /* Keep only useful entries */
> diff --git a/tools/libs/guest/include/xenguest.h b/tools/libs/guest/inclu=
de/xenguest.h
> index dba6a21643..a9984dbea5 100644
> --- a/tools/libs/guest/include/xenguest.h
> +++ b/tools/libs/guest/include/xenguest.h
> @@ -22,8 +22,6 @@
> #ifndef XENGUEST_H
> #define XENGUEST_H
>=20
> -#include <xen/libelf/libelf.h>
> -
> #define XC_NUMA_NO_NODE   (~0U)
>=20
> #define XCFLAGS_LIVE      (1 << 0)
> @@ -109,7 +107,7 @@ struct xc_dom_image {
>     uint32_t f_requested[XENFEAT_NR_SUBMAPS];
>=20
>     /* info from (elf) kernel image */
> -    struct elf_dom_parms parms;
> +    struct elf_dom_parms *parms;
>     char *guest_type;
>=20
>     /* memory layout */
> @@ -390,6 +388,13 @@ void *xc_dom_pfn_to_ptr_retcount(struct xc_dom_image=
 *dom, xen_pfn_t first,
>                                  xen_pfn_t count, xen_pfn_t *count_out);
> void xc_dom_unmap_one(struct xc_dom_image *dom, xen_pfn_t pfn);
> void xc_dom_unmap_all(struct xc_dom_image *dom);
> +void *xc_dom_vaddr_to_ptr(struct xc_dom_image *dom,
> +                          xen_vaddr_t vaddr, size_t *safe_region_out);
> +uint64_t xc_dom_virt_base(struct xc_dom_image *dom);
> +uint64_t xc_dom_virt_entry(struct xc_dom_image *dom);
> +uint64_t xc_dom_virt_hypercall(struct xc_dom_image *dom);
> +char *xc_dom_guest_os(struct xc_dom_image *dom);
> +bool xc_dom_feature_get(struct xc_dom_image *dom, unsigned int nr);
>=20
> static inline void *xc_dom_seg_to_ptr_pages(struct xc_dom_image *dom,
>                                       struct xc_dom_seg *seg,
> @@ -411,24 +416,6 @@ static inline void *xc_dom_seg_to_ptr(struct xc_dom_=
image *dom,
>     return xc_dom_seg_to_ptr_pages(dom, seg, &dummy);
> }
>=20
> -static inline void *xc_dom_vaddr_to_ptr(struct xc_dom_image *dom,
> -                                        xen_vaddr_t vaddr,
> -                                        size_t *safe_region_out)
> -{
> -    unsigned int page_size =3D XC_DOM_PAGE_SIZE(dom);
> -    xen_pfn_t page =3D (vaddr - dom->parms.virt_base) / page_size;
> -    unsigned int offset =3D (vaddr - dom->parms.virt_base) % page_size;
> -    xen_pfn_t safe_region_count;
> -    void *ptr;
> -
> -    *safe_region_out =3D 0;
> -    ptr =3D xc_dom_pfn_to_ptr_retcount(dom, page, 0, &safe_region_count)=
;
> -    if ( ptr =3D=3D NULL )
> -        return ptr;
> -    *safe_region_out =3D (safe_region_count << XC_DOM_PAGE_SHIFT(dom)) -=
 offset;
> -    return ptr + offset;
> -}
> -
> static inline xen_pfn_t xc_dom_p2m(struct xc_dom_image *dom, xen_pfn_t pf=
n)
> {
>     if ( xc_dom_translated(dom) )
> diff --git a/tools/libs/guest/xg_dom_core.c b/tools/libs/guest/xg_dom_cor=
e.c
> index c0d4a0aa2f..f846d8e1ed 100644
> --- a/tools/libs/guest/xg_dom_core.c
> +++ b/tools/libs/guest/xg_dom_core.c
> @@ -735,6 +735,7 @@ void xc_dom_release(struct xc_dom_image *dom)
>         xc_dom_unmap_all(dom);
>     xc_dom_free_all(dom);
>     free(dom->arch_private);
> +    free(dom->parms);
>     free(dom);
> }
>=20
> @@ -753,6 +754,12 @@ struct xc_dom_image *xc_dom_allocate(xc_interface *x=
ch,
>     memset(dom, 0, sizeof(*dom));
>     dom->xch =3D xch;
>=20
> +    dom->parms =3D malloc(sizeof(*dom->parms));
> +    if (!dom->parms)
> +        goto err;
> +    memset(dom->parms, 0, sizeof(*dom->parms));
> +    dom->alloc_malloc +=3D sizeof(*dom->parms);
> +
>     dom->max_kernel_size =3D XC_DOM_DECOMPRESS_MAX;
>     dom->max_module_size =3D XC_DOM_DECOMPRESS_MAX;
>     dom->max_devicetree_size =3D XC_DOM_DECOMPRESS_MAX;
> @@ -762,12 +769,12 @@ struct xc_dom_image *xc_dom_allocate(xc_interface *=
xch,
>     if ( features )
>         elf_xen_parse_features(features, dom->f_requested, NULL);
>=20
> -    dom->parms.virt_base =3D UNSET_ADDR;
> -    dom->parms.virt_entry =3D UNSET_ADDR;
> -    dom->parms.virt_hypercall =3D UNSET_ADDR;
> -    dom->parms.virt_hv_start_low =3D UNSET_ADDR;
> -    dom->parms.elf_paddr_offset =3D UNSET_ADDR;
> -    dom->parms.p2m_base =3D UNSET_ADDR;
> +    dom->parms->virt_base =3D UNSET_ADDR;
> +    dom->parms->virt_entry =3D UNSET_ADDR;
> +    dom->parms->virt_hypercall =3D UNSET_ADDR;
> +    dom->parms->virt_hv_start_low =3D UNSET_ADDR;
> +    dom->parms->elf_paddr_offset =3D UNSET_ADDR;
> +    dom->parms->p2m_base =3D UNSET_ADDR;
>=20
>     dom->flags =3D SIF_VIRT_P2M_4TOOLS;
>=20
> @@ -920,8 +927,8 @@ int xc_dom_parse_image(struct xc_dom_image *dom)
>     for ( i =3D 0; i < XENFEAT_NR_SUBMAPS; i++ )
>     {
>         dom->f_active[i] |=3D dom->f_requested[i]; /* cmd line */
> -        dom->f_active[i] |=3D dom->parms.f_required[i]; /* kernel   */
> -        if ( (dom->f_active[i] & dom->parms.f_supported[i]) !=3D
> +        dom->f_active[i] |=3D dom->parms->f_required[i]; /* kernel   */
> +        if ( (dom->f_active[i] & dom->parms->f_supported[i]) !=3D
>              dom->f_active[i] )
>         {
>             xc_dom_panic(dom->xch, XC_INVALID_PARAM,
> @@ -1142,8 +1149,8 @@ int xc_dom_build_image(struct xc_dom_image *dom)
>         goto err;
>     }
>     page_size =3D XC_DOM_PAGE_SIZE(dom);
> -    if ( dom->parms.virt_base !=3D UNSET_ADDR )
> -        dom->virt_alloc_end =3D dom->parms.virt_base;
> +    if ( dom->parms->virt_base !=3D UNSET_ADDR )
> +        dom->virt_alloc_end =3D dom->parms->virt_base;
>=20
>     /* load kernel */
>     if ( xc_dom_alloc_segment(dom, &dom->kernel_seg, "kernel",
> @@ -1157,7 +1164,7 @@ int xc_dom_build_image(struct xc_dom_image *dom)
>     /* Don't load ramdisk / other modules now if no initial mapping requi=
red. */
>     for ( mod =3D 0; mod < dom->num_modules; mod++ )
>     {
> -        unmapped_initrd =3D (dom->parms.unmapped_initrd &&
> +        unmapped_initrd =3D (dom->parms->unmapped_initrd &&
>                            !dom->modules[mod].seg.vstart);
>=20
>         if ( dom->modules[mod].blob && !unmapped_initrd )
> @@ -1199,10 +1206,10 @@ int xc_dom_build_image(struct xc_dom_image *dom)
>=20
>     /* allocate other pages */
>     if ( !dom->arch_hooks->p2m_base_supported ||
> -         dom->parms.p2m_base >=3D dom->parms.virt_base ||
> -         (dom->parms.p2m_base & (XC_DOM_PAGE_SIZE(dom) - 1)) )
> -        dom->parms.p2m_base =3D UNSET_ADDR;
> -    if ( dom->arch_hooks->alloc_p2m_list && dom->parms.p2m_base =3D=3D U=
NSET_ADDR &&
> +         dom->parms->p2m_base >=3D dom->parms->virt_base ||
> +         (dom->parms->p2m_base & (XC_DOM_PAGE_SIZE(dom) - 1)) )
> +        dom->parms->p2m_base =3D UNSET_ADDR;
> +    if ( dom->arch_hooks->alloc_p2m_list && dom->parms->p2m_base =3D=3D =
UNSET_ADDR &&
>          dom->arch_hooks->alloc_p2m_list(dom) !=3D 0 )
>         goto err;
>     if ( dom->arch_hooks->alloc_magic_pages(dom) !=3D 0 )
> @@ -1228,7 +1235,7 @@ int xc_dom_build_image(struct xc_dom_image *dom)
>=20
>     for ( mod =3D 0; mod < dom->num_modules; mod++ )
>     {
> -        unmapped_initrd =3D (dom->parms.unmapped_initrd &&
> +        unmapped_initrd =3D (dom->parms->unmapped_initrd &&
>                            !dom->modules[mod].seg.vstart);
>=20
>         /* Load ramdisk / other modules if no initial mapping required. *=
/
> @@ -1247,11 +1254,11 @@ int xc_dom_build_image(struct xc_dom_image *dom)
>     }
>=20
>     /* Allocate p2m list if outside of initial kernel mapping. */
> -    if ( dom->arch_hooks->alloc_p2m_list && dom->parms.p2m_base !=3D UNS=
ET_ADDR )
> +    if ( dom->arch_hooks->alloc_p2m_list && dom->parms->p2m_base !=3D UN=
SET_ADDR )
>     {
>         if ( dom->arch_hooks->alloc_p2m_list(dom) !=3D 0 )
>             goto err;
> -        dom->p2m_seg.vstart =3D dom->parms.p2m_base;
> +        dom->p2m_seg.vstart =3D dom->parms->p2m_base;
>     }
>=20
>     return 0;
> @@ -1260,6 +1267,48 @@ int xc_dom_build_image(struct xc_dom_image *dom)
>     return -1;
> }
>=20
> +void *xc_dom_vaddr_to_ptr(struct xc_dom_image *dom,
> +                          xen_vaddr_t vaddr, size_t *safe_region_out)
> +{
> +    unsigned int page_size =3D XC_DOM_PAGE_SIZE(dom);
> +    xen_pfn_t page =3D (vaddr - dom->parms->virt_base) / page_size;
> +    unsigned int offset =3D (vaddr - dom->parms->virt_base) % page_size;
> +    xen_pfn_t safe_region_count;
> +    void *ptr;
> +
> +    *safe_region_out =3D 0;
> +    ptr =3D xc_dom_pfn_to_ptr_retcount(dom, page, 0, &safe_region_count)=
;
> +    if ( ptr =3D=3D NULL )
> +        return ptr;
> +    *safe_region_out =3D (safe_region_count << XC_DOM_PAGE_SHIFT(dom)) -=
 offset;
> +    return ptr + offset;
> +}
> +
> +uint64_t xc_dom_virt_base(struct xc_dom_image *dom)
> +{
> +    return dom->parms->virt_base;
> +}
> +
> +uint64_t xc_dom_virt_entry(struct xc_dom_image *dom)
> +{
> +    return dom->parms->virt_entry;
> +}
> +
> +uint64_t xc_dom_virt_hypercall(struct xc_dom_image *dom)
> +{
> +    return dom->parms->virt_hypercall;
> +}
> +
> +char *xc_dom_guest_os(struct xc_dom_image *dom)
> +{
> +    return dom->parms->guest_os;
> +}
> +
> +bool xc_dom_feature_get(struct xc_dom_image *dom, unsigned int nr)
> +{
> +    return elf_xen_feature_get(nr, dom->parms->f_supported);
> +}
> +
> /*
>  * Local variables:
>  * mode: C
> diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.=
h
> index 9940d554ef..fee3191cd4 100644
> --- a/tools/libs/guest/xg_private.h
> +++ b/tools/libs/guest/xg_private.h
> @@ -31,6 +31,7 @@
>=20
> #include <xen/memory.h>
> #include <xen/elfnote.h>
> +#include <xen/libelf/libelf.h>
>=20
> #ifndef ELFSIZE
> #include <limits.h>
> diff --git a/tools/libxl/libxl_x86_acpi.c b/tools/libxl/libxl_x86_acpi.c
> index 1a4e9e98de..3eca1c7a9f 100644
> --- a/tools/libxl/libxl_x86_acpi.c
> +++ b/tools/libxl/libxl_x86_acpi.c
> @@ -220,9 +220,8 @@ int libxl__dom_load_acpi(libxl__gc *gc,
>      * and so we need to put RSDP in location that can be discovered by A=
CPI's
>      * standard search method, in R-O BIOS memory (we chose last 64 bytes=
)
>      */
> -    if (strcmp(dom->parms.guest_os, "linux") ||
> -        elf_xen_feature_get(XENFEAT_linux_rsdp_unrestricted,
> -                            dom->parms.f_supported))
> +    if (strcmp(xc_dom_guest_os(dom), "linux") ||
> +        xc_dom_feature_get(dom, XENFEAT_linux_rsdp_unrestricted))
>         dom->acpi_modules[0].guest_addr_out =3D ACPI_INFO_PHYSICAL_ADDRES=
S +
>             (1 + acpi_pages_num) * libxl_ctxt.page_size;
>     else
> --=20
> 2.26.2
>=20
>=20



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:21:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:21:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1451.4557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1Ku-00018r-LP; Thu, 01 Oct 2020 16:21:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1451.4557; Thu, 01 Oct 2020 16:21:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1Ku-00018k-II; Thu, 01 Oct 2020 16:21:40 +0000
Received: by outflank-mailman (input) for mailman id 1451;
 Thu, 01 Oct 2020 16:21:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gWzX=DI=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kO1Kt-00018f-9y
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:21:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58177a19-272d-4f9d-bd5a-9fee4fc7ee5e;
 Thu, 01 Oct 2020 16:21:38 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO1Kp-0002Pc-R4; Thu, 01 Oct 2020 16:21:35 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kO1Kp-0008KT-H1; Thu, 01 Oct 2020 16:21:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gWzX=DI=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kO1Kt-00018f-9y
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:21:39 +0000
X-Inumbo-ID: 58177a19-272d-4f9d-bd5a-9fee4fc7ee5e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 58177a19-272d-4f9d-bd5a-9fee4fc7ee5e;
	Thu, 01 Oct 2020 16:21:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=rpw7CLr1ofwoC9m8MtvRNT24F+koh0FBsuE/Jf13sfE=; b=2Kcemwl0ddV+ybld1ZOm5FCZNF
	AdS0eiyouynTN/USgu1BZcekcm/M9gEt8V3tk+YZeNCHcNL1c4acYh58QlisN9w8JCmOd9oYOwoPj
	Z1l9UZRd+72kJopxfHyw1d0SbHiCNnV3lD7arSYcMcYj6smQ1mY9qCn1AvinUDNvVdpk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kO1Kp-0002Pc-R4; Thu, 01 Oct 2020 16:21:35 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kO1Kp-0008KT-H1; Thu, 01 Oct 2020 16:21:35 +0000
Subject: Re: [PATCH 11/12] evtchn: convert vIRQ lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>, paul@xen.org
Cc: xen-devel@lists.xenproject.org,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>
References: <0d5ffc89-4b04-3e06-e950-f0cb171c7419@suse.com>
 <6e529147-2a76-bc28-ac16-21fc9a2c8f03@suse.com>
 <004b01d696ff$76873e50$6395baf0$@xen.org>
 <92d2714b-d762-2f15-086f-58257e3336a8@suse.com>
 <006401d69707$062a5090$127ef1b0$@xen.org>
 <3626d65c-bd5d-f65e-61ca-451110761258@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f55cb87f-418d-61fa-65f1-0e746071fe37@xen.org>
Date: Thu, 1 Oct 2020 17:21:33 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <3626d65c-bd5d-f65e-61ca-451110761258@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 30/09/2020 11:16, Jan Beulich wrote:
> On 30.09.2020 10:52, Paul Durrant wrote:
>> Looking again, given that both send_guest_vcpu_virq() and
>> send_guest_global_virq() (rightly) hold the evtchn lock before
>> calling evtchn_port_set_pending() I think you could do away with
>> the virq lock by adding checks in those functions to verify
>> evtchn->state == ECS_VIRQ and u.virq == virq after having
>> acquired the channel lock but before calling
>> evtchn_port_set_pending().
> 
> I don't think so: The adjustment of v->virq_to_evtchn[] in
> evtchn_close() would then happen with just the domain's event
> lock held, which the sending paths don't use at all. The per-
> channel lock gets acquired in evtchn_close() a bit later only
> (and this lock can't possibly protect per-vCPU state).
> 
> In fact I'm now getting puzzled by evtchn_bind_virq() updating
> this array with (just) the per-domain lock held. Since it's
> the last thing in the function, there's technically no strict
> need for acquiring the vIRQ lock,

Well, we at least need to prevent the compiler from tearing the 
store/load. If we don't use a lock, then we should use ACCESS_ONCE() or 
{read,write}_atomic() for all the usage.
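
A minimal sketch of that idiom (modelled on the Linux/Xen ACCESS_ONCE()
macro; the variable below is a hypothetical stand-in for the
v->virq_to_evtchn[] slot, not the actual Xen code):

```c
#include <stdint.h>

/* Casting through a volatile pointer forces the compiler to emit
 * exactly one full-width load/store of the object, preventing it from
 * tearing the access into smaller pieces or repeating it. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* Stand-in for one v->virq_to_evtchn[virq] slot. */
static uint64_t virq_to_evtchn_slot;

/* Writer side: publish the event channel port with a single untorn store. */
static void publish_port(uint64_t port)
{
    ACCESS_ONCE(virq_to_evtchn_slot) = port;
}

/* Reader side: sample the slot exactly once. */
static uint64_t read_port(void)
{
    return ACCESS_ONCE(virq_to_evtchn_slot);
}
```

Note this only constrains the compiler; ordering against other CPUs
still depends on the barriers the locking discussion above is about.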

> but holding the event lock
> definitely doesn't help.

It helps because spin_unlock() and write_unlock() use the same barrier 
(arch_lock_release_barrier()). So ...

> All that looks to be needed is the
> barrier implied from write_unlock().

No barrier should be necessary, although I would suggest adding a 
comment explaining it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:24:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:24:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1454.4568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1Nz-0001JX-4c; Thu, 01 Oct 2020 16:24:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1454.4568; Thu, 01 Oct 2020 16:24:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1Nz-0001JQ-1Z; Thu, 01 Oct 2020 16:24:51 +0000
Received: by outflank-mailman (input) for mailman id 1454;
 Thu, 01 Oct 2020 16:24:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kO1Ny-0001JL-Ku
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:24:50 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 418437ea-7a37-43b9-8e44-19305a131d4a;
 Thu, 01 Oct 2020 16:24:49 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kO1Nx-00029L-5P; Thu, 01 Oct 2020 17:24:49 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kO1Ny-0001JL-Ku
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:24:50 +0000
X-Inumbo-ID: 418437ea-7a37-43b9-8e44-19305a131d4a
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 418437ea-7a37-43b9-8e44-19305a131d4a;
	Thu, 01 Oct 2020 16:24:49 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kO1Nx-00029L-5P; Thu, 01 Oct 2020 17:24:49 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 2/8] standalone: Use mkdir -p
Date: Thu,  1 Oct 2020 17:24:33 +0100
Message-Id: <20201001162439.18160-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201001162439.18160-1-iwj@xenproject.org>
References: <20201001162439.18160-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

These two mkdir calls could fail if
standalone-generate-dump-flight-runvars is run without an existing log
directory, because the check-then-create pattern was not
concurrency-safe: a parallel invocation could create the directory
between the test and the mkdir.

mkdir -p should fix that.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 standalone | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/standalone b/standalone
index 9553d6c9..4d1f3513 100755
--- a/standalone
+++ b/standalone
@@ -181,12 +181,8 @@ check_repos() {
 }
 
 ensure_logs() {
-    if [ ! -d "logs" ] ; then
-	mkdir "logs"
-    fi
-    if [ ! -d "logs/$flight" ] ; then
-	mkdir "logs/$flight"
-    fi
+    mkdir -p "logs"
+    mkdir -p "logs/$flight"
 }
 
 with_logging() {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:24:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:24:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1455.4581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1O4-0001Lt-EP; Thu, 01 Oct 2020 16:24:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1455.4581; Thu, 01 Oct 2020 16:24:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1O4-0001Lm-AW; Thu, 01 Oct 2020 16:24:56 +0000
Received: by outflank-mailman (input) for mailman id 1455;
 Thu, 01 Oct 2020 16:24:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kO1O3-0001JL-JR
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:24:55 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57916f4e-c27c-453e-bc92-0bb81ac20b09;
 Thu, 01 Oct 2020 16:24:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kO1Nx-00029L-Cc; Thu, 01 Oct 2020 17:24:49 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kO1O3-0001JL-JR
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:24:55 +0000
X-Inumbo-ID: 57916f4e-c27c-453e-bc92-0bb81ac20b09
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 57916f4e-c27c-453e-bc92-0bb81ac20b09;
	Thu, 01 Oct 2020 16:24:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kO1Nx-00029L-Cc; Thu, 01 Oct 2020 17:24:49 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 3/8] sg-run-job: Preserve step state "fail" if set by test script
Date: Thu,  1 Oct 2020 17:24:34 +0100
Message-Id: <20201001162439.18160-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201001162439.18160-1-iwj@xenproject.org>
References: <20201001162439.18160-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If the test script exits nonzero, but only after setting the step
status to 'fail', we can leave the status that way.  This is
particularly relevant if the iffail in the job spec says 'broken' or
something.  After this change, a step can decide to override that.

An alternative would be to have the step script exit zero, but of
course that would (generally) leave the job to continue running more
steps!

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 tcl/JobDB-Executive.tcl | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tcl/JobDB-Executive.tcl b/tcl/JobDB-Executive.tcl
index 4fe85696..38248823 100644
--- a/tcl/JobDB-Executive.tcl
+++ b/tcl/JobDB-Executive.tcl
@@ -325,6 +325,7 @@ proc step-set-status {flight job stepno st} {
                AND status<>'aborted'
                AND status<>'broken'
                AND status<>'starved'
+               AND status<>'fail'
         "
         set pause 0
         db-execute-array stopinfo "
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:25:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:25:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1456.4593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1O9-0001Px-N0; Thu, 01 Oct 2020 16:25:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1456.4593; Thu, 01 Oct 2020 16:25:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1O9-0001Pp-Ja; Thu, 01 Oct 2020 16:25:01 +0000
Received: by outflank-mailman (input) for mailman id 1456;
 Thu, 01 Oct 2020 16:25:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kO1O8-0001JL-Jf
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:00 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67a43fb5-0bab-4e4d-b4b0-b5c32ab605ac;
 Thu, 01 Oct 2020 16:24:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kO1Nx-00029L-KT; Thu, 01 Oct 2020 17:24:49 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kO1O8-0001JL-Jf
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:00 +0000
X-Inumbo-ID: 67a43fb5-0bab-4e4d-b4b0-b5c32ab605ac
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 67a43fb5-0bab-4e4d-b4b0-b5c32ab605ac;
	Thu, 01 Oct 2020 16:24:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kO1Nx-00029L-KT; Thu, 01 Oct 2020 17:24:49 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 4/8] ts-hosts-allocate-Executive: Allow to tolerate missing resources
Date: Thu,  1 Oct 2020 17:24:35 +0100
Message-Id: <20201001162439.18160-4-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201001162439.18160-1-iwj@xenproject.org>
References: <20201001162439.18160-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Now, a job can specify that lack of a suitable host should be treated
as a plain test failure (ie, subject to the usual regression analysis)
rather than as an infrastructure or configuration problem.

This will be useful for some tests which don't work in some branches
because of lack of suitable hardware.  We want to avoid encoding our
hardware availability situation in make-flight.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-hosts-allocate-Executive | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 698437c0..58d2a389 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -704,6 +704,10 @@ sub alloc_hosts () {
 	my ($ok, $bookinglist) = attempt_allocation({
             ts_hosts_allocate_precheck => 1,
         }, 0);
+	if ($ok == $alloc_starved_r && $r{hostalloc_missing_expected}) {
+	    broken 'no suitable hosts available (as possibly expected)',
+	      'fail';
+	}
 	die $ok if $ok>1;
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:25:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:25:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1457.4605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OF-0001Uo-28; Thu, 01 Oct 2020 16:25:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1457.4605; Thu, 01 Oct 2020 16:25:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OE-0001Uf-V9; Thu, 01 Oct 2020 16:25:06 +0000
Received: by outflank-mailman (input) for mailman id 1457;
 Thu, 01 Oct 2020 16:25:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kO1OD-0001JL-Jp
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:05 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffc40c9e-ce12-41c2-9097-de9fe89da053;
 Thu, 01 Oct 2020 16:24:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kO1Nx-00029L-SQ; Thu, 01 Oct 2020 17:24:49 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kO1OD-0001JL-Jp
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:05 +0000
X-Inumbo-ID: ffc40c9e-ce12-41c2-9097-de9fe89da053
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ffc40c9e-ce12-41c2-9097-de9fe89da053;
	Thu, 01 Oct 2020 16:24:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kO1Nx-00029L-SQ; Thu, 01 Oct 2020 17:24:49 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 5/8] cri-getplatforms: Give names to xenarch and suite
Date: Thu,  1 Oct 2020 17:24:36 +0100
Message-Id: <20201001162439.18160-5-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201001162439.18160-1-iwj@xenproject.org>
References: <20201001162439.18160-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

No functional change.  This will be useful in a moment.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cri-getplatforms | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/cri-getplatforms b/cri-getplatforms
index 2b8cee0b..1f206908 100755
--- a/cri-getplatforms
+++ b/cri-getplatforms
@@ -17,9 +17,11 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
 getplatforms () {
+	local xenarch=$1
+	local suite=$2
         perl -e '
                 use Osstest;
                 csreadconfig();
-                print join " ", $mhostdb->get_arch_platforms("'$blessing'", "'$1'", "'$2'") or die $!;
+                print join " ", $mhostdb->get_arch_platforms("'$blessing'", "'$xenarch'", "'$suite'") or die $!;
         '
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:25:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:25:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1458.4617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OK-0001a4-C1; Thu, 01 Oct 2020 16:25:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1458.4617; Thu, 01 Oct 2020 16:25:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OK-0001Zt-7X; Thu, 01 Oct 2020 16:25:12 +0000
Received: by outflank-mailman (input) for mailman id 1458;
 Thu, 01 Oct 2020 16:25:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kO1OI-0001JL-K1
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:10 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e49779b-04c5-4b07-8019-bb35150fe206;
 Thu, 01 Oct 2020 16:24:49 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kO1Nw-00029L-TM; Thu, 01 Oct 2020 17:24:49 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kO1OI-0001JL-K1
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:10 +0000
X-Inumbo-ID: 3e49779b-04c5-4b07-8019-bb35150fe206
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3e49779b-04c5-4b07-8019-bb35150fe206;
	Thu, 01 Oct 2020 16:24:49 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kO1Nw-00029L-TM; Thu, 01 Oct 2020 17:24:49 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 1/8] Executive: Fix an undef warning message
Date: Thu,  1 Oct 2020 17:24:32 +0100
Message-Id: <20201001162439.18160-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

$onhost can be undef too

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/Executive.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 80e70070..a0d9f81e 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1283,7 +1283,8 @@ END
     return sub {
         my ($job, $hostidname, $onhost, $uptoincl_testid) = @_;
 
-	my $memokey = "$job $hostidname $onhost ".($uptoincl_testid//"");
+	my $memokey = "$job $hostidname "
+	  .($onhost//"")." ".($uptoincl_testid//"");
 	my $memo = $our_memo->{$memokey};
 	return @$memo if $memo;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:25:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:25:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1459.4629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OO-0001eU-M6; Thu, 01 Oct 2020 16:25:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1459.4629; Thu, 01 Oct 2020 16:25:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OO-0001eN-Ie; Thu, 01 Oct 2020 16:25:16 +0000
Received: by outflank-mailman (input) for mailman id 1459;
 Thu, 01 Oct 2020 16:25:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kO1ON-0001JL-K9
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:15 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 827dfb01-4f3e-43a8-bbd4-e111ff0eb190;
 Thu, 01 Oct 2020 16:24:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kO1Ny-00029L-37; Thu, 01 Oct 2020 17:24:50 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kO1ON-0001JL-K9
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:15 +0000
X-Inumbo-ID: 827dfb01-4f3e-43a8-bbd4-e111ff0eb190
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 827dfb01-4f3e-43a8-bbd4-e111ff0eb190;
	Thu, 01 Oct 2020 16:24:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kO1Ny-00029L-37; Thu, 01 Oct 2020 17:24:50 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 6/8] cri-getplatforms: Honour new MF_SIMULATE_PLATFORMS env var
Date: Thu,  1 Oct 2020 17:24:37 +0100
Message-Id: <20201001162439.18160-6-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201001162439.18160-1-iwj@xenproject.org>
References: <20201001162439.18160-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is to be expanded by the shell, using eval, so that it can refer
to $xenarch, $suite and $blessing.

No functional change if this variable is unset, or empty.  If it is
set to a single space, cri-getplatforms produces no output (as it does
anyway in standalone mode).
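
As a sketch of how the eval expansion behaves (the variable values here
are invented for illustration, not taken from a real flight):

```shell
# The variable is expanded by the shell via eval, so it may refer to
# $xenarch, $suite and $blessing at the point of use.
xenarch=amd64
suite=buster
blessing=real
MF_SIMULATE_PLATFORMS='simplat-$xenarch-$suite'
eval "echo $MF_SIMULATE_PLATFORMS"
# prints: simplat-amd64-buster
```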

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cri-getplatforms | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/cri-getplatforms b/cri-getplatforms
index 1f206908..9f4cae56 100755
--- a/cri-getplatforms
+++ b/cri-getplatforms
@@ -19,6 +19,10 @@
 getplatforms () {
 	local xenarch=$1
 	local suite=$2
+	if [ "x$MF_SIMULATE_PLATFORMS" != x ]; then
+		eval "echo $MF_SIMULATE_PLATFORMS"
+		return
+	fi
         perl -e '
                 use Osstest;
                 csreadconfig();
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:25:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:25:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1460.4641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OU-0001k1-0i; Thu, 01 Oct 2020 16:25:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1460.4641; Thu, 01 Oct 2020 16:25:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OT-0001jq-SK; Thu, 01 Oct 2020 16:25:21 +0000
Received: by outflank-mailman (input) for mailman id 1460;
 Thu, 01 Oct 2020 16:25:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kO1OS-0001JL-KK
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:20 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 809d7148-c8f6-4f44-bfcf-d302cb04b56d;
 Thu, 01 Oct 2020 16:24:51 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kO1Ny-00029L-Ao; Thu, 01 Oct 2020 17:24:50 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kO1OS-0001JL-KK
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:20 +0000
X-Inumbo-ID: 809d7148-c8f6-4f44-bfcf-d302cb04b56d
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 809d7148-c8f6-4f44-bfcf-d302cb04b56d;
	Thu, 01 Oct 2020 16:24:51 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kO1Ny-00029L-Ao; Thu, 01 Oct 2020 17:24:50 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 7/8] standalone-generate-dump-flight-runvars: Simulate cri-getplatforms
Date: Thu,  1 Oct 2020 17:24:38 +0100
Message-Id: <20201001162439.18160-7-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201001162439.18160-1-iwj@xenproject.org>
References: <20201001162439.18160-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Set MF_SIMULATE_PLATFORMS to a suitable value if it is
not *set*.  (Distinguishing unset from set to empty.)
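
The unset-vs-empty distinction relies on the colon-less form of shell
default assignment; a minimal sketch (variable names invented):

```shell
# ${VAR=word} (no colon) assigns only when VAR is *unset*;
# a set-but-empty VAR is left alone.
unset DEMO
: ${DEMO='fallback'}
echo "$DEMO"        # unset -> default applied: fallback

DEMO2=''
: ${DEMO2='fallback'}
echo "<$DEMO2>"     # set to empty -> kept empty: <>
```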

I have verified that this, plus the preceding commits to
cri-getplatforms, produces no change in the output of
  MF_SIMULATE_PLATFORMS='' OSSTEST_CONFIG=standalone-config-example eatmydata ./standalone-generate-dump-flight-runvars

Without the MF_SIMULATE_PLATFORMS setting it adds several new jobs to
each flight, with names like this:
  test-amd64-$arch1-xl-simplat-$arch2-$suite

The purpose of this right now is to provide a way to dry-run test the
next change.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 standalone-generate-dump-flight-runvars | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/standalone-generate-dump-flight-runvars b/standalone-generate-dump-flight-runvars
index 5c93b0af..07b9c126 100755
--- a/standalone-generate-dump-flight-runvars
+++ b/standalone-generate-dump-flight-runvars
@@ -45,6 +45,9 @@ fi
 : ${AP_FETCH_PLACEHOLDERS:=y}
 export AP_FETCH_PLACEHOLDERS
 
+: ${MF_SIMULATE_PLATFORMS='simplat-$xenarch-$suite'}
+export MF_SIMULATE_PLATFORMS
+
 export OSSTEST_HOSTSLIST_DUMMY=1
 
 if [ "x$AP_FETCH_PLACEHOLDERS" != xy ]; then
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:25:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:25:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1461.4653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OZ-0001qQ-Di; Thu, 01 Oct 2020 16:25:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1461.4653; Thu, 01 Oct 2020 16:25:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1OZ-0001qG-85; Thu, 01 Oct 2020 16:25:27 +0000
Received: by outflank-mailman (input) for mailman id 1461;
 Thu, 01 Oct 2020 16:25:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kO1OX-0001JL-KY
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:25 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a44ed8a7-3c40-49e2-905e-6951fcd05921;
 Thu, 01 Oct 2020 16:24:52 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kO1Ny-00029L-Gk; Thu, 01 Oct 2020 17:24:50 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=j0qz=DI=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kO1OX-0001JL-KY
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:25:25 +0000
X-Inumbo-ID: a44ed8a7-3c40-49e2-905e-6951fcd05921
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a44ed8a7-3c40-49e2-905e-6951fcd05921;
	Thu, 01 Oct 2020 16:24:52 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kO1Ny-00029L-Gk; Thu, 01 Oct 2020 17:24:50 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 8/8] Tolerate lack of platform-specific hosts in old Xen branches
Date: Thu,  1 Oct 2020 17:24:39 +0100
Message-Id: <20201001162439.18160-8-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201001162439.18160-1-iwj@xenproject.org>
References: <20201001162439.18160-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Right now we have a situation where these can't all be made to work
because some older Xen branches are hard to make work on current
Debian stable, and we have some hardware (which we have tagged as
specific "platforms") which doesn't work with oldstable.

This seems like a general problem, so fix it this way.

Note that we still treat these failed allocations as failures, so they
are subject to regression analysis and ought not to appear willy-nilly
on existing branches.

Runvar dump shows the addition of this runvar
   hostalloc_missing_expected=1
to
   qemu-upstream-4.6-testing
   xen-4.6-testing
   ...
   qemu-upstream-4.14-testing
   xen-4.14-testing
inclusive.
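
The xenbranch test in this patch is an ordinary shell case match; a
minimal standalone sketch (the helper name is invented):

```shell
# Matches the stable-branch names that should tolerate a missing
# platform-specific host, per the patterns used in make-flight.
branch_missing_expected () {
	case "$1" in
	  xen-3.*-testing)  echo true ;;
	  xen-4.*-testing)  echo true ;;
	  *)                echo false ;;
	esac
}

branch_missing_expected xen-4.6-testing   # true
branch_missing_expected xen-unstable      # false
```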

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 make-flight | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/make-flight b/make-flight
index d5a3d99a..fb38bc50 100755
--- a/make-flight
+++ b/make-flight
@@ -643,13 +643,27 @@ do_pv_debian_tests () {
   for xsm in $xsms ; do
     # Basic PV Linux test with xl
     for platform in '' `getplatforms $xenarch $defsuite` ; do
+      platform_runvars=''
 
       # xsm test is not platform specific
       if [ x$xsm = xtrue -a x$platform != x ]; then
           continue
       fi
 
-      do_pv_debian_test_one xl '' xl "$platform" enable_xsm=$xsm
+      missing_expected=false
+      if [ x$platform != x ]; then
+        case "$xenbranch" in
+          xen-3.*-testing)  missing_expected=true ;;
+          xen-4.*-testing)  missing_expected=true ;;
+          *) ;;
+        esac
+      fi
+      if $missing_expected; then
+        platform_runvars+=hostalloc_missing_expected=1
+      fi
+
+      do_pv_debian_test_one xl '' xl "$platform" enable_xsm=$xsm \
+                            $platform_runvars
 
     done
   done
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:29:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1474.4665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1Sc-0002K6-Vz; Thu, 01 Oct 2020 16:29:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1474.4665; Thu, 01 Oct 2020 16:29:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1Sc-0002Jz-SS; Thu, 01 Oct 2020 16:29:38 +0000
Received: by outflank-mailman (input) for mailman id 1474;
 Thu, 01 Oct 2020 16:29:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e2ni=DI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kO1Sc-0002Ju-75
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:29:38 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.85]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d3ab11e-b1ab-48e1-a55b-c3dc854ecc6f;
 Thu, 01 Oct 2020 16:29:37 +0000 (UTC)
Received: from AM6PR01CA0068.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::45) by HE1PR0801MB1849.eurprd08.prod.outlook.com
 (2603:10a6:3:89::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Thu, 1 Oct
 2020 16:29:35 +0000
Received: from AM5EUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::56) by AM6PR01CA0068.outlook.office365.com
 (2603:10a6:20b:e0::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.36 via Frontend
 Transport; Thu, 1 Oct 2020 16:29:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT047.mail.protection.outlook.com (10.152.16.197) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Thu, 1 Oct 2020 16:29:35 +0000
Received: ("Tessian outbound 195a290eb161:v64");
 Thu, 01 Oct 2020 16:29:35 +0000
Received: from 67050f35ac7b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5578BA16-5339-44A9-95BA-D52C0AF14065.1; 
 Thu, 01 Oct 2020 16:29:28 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 67050f35ac7b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 01 Oct 2020 16:29:28 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR08MB2693.eurprd08.prod.outlook.com (2603:10a6:6:1c::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3412.23; Thu, 1 Oct
 2020 16:29:27 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Thu, 1 Oct 2020
 16:29:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=e2ni=DI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kO1Sc-0002Ju-75
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:29:38 +0000
X-Inumbo-ID: 5d3ab11e-b1ab-48e1-a55b-c3dc854ecc6f
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown [40.107.20.85])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5d3ab11e-b1ab-48e1-a55b-c3dc854ecc6f;
	Thu, 01 Oct 2020 16:29:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+xDoGWmtH2d2BMf+8KbENi8tLb+ymvojVAsNjpS/JRQ=;
 b=aXcyUknnMhXIcqPiF3me7i6KhQzFN39Or1smigrsOQ+7So37x3MxlL879zoNdflIVRJLlD6R2cJ/uDZOSBi6nDtgzbukSrUyLKPs6VhVOihJUq6k2FVfBrOJgYahZyTq8N7Txly9BZAWTG0BrY1yXPauLt6PcSsJX+xoWBm6CSc=
Received: from AM6PR01CA0068.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::45) by HE1PR0801MB1849.eurprd08.prod.outlook.com
 (2603:10a6:3:89::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Thu, 1 Oct
 2020 16:29:35 +0000
Received: from AM5EUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::56) by AM6PR01CA0068.outlook.office365.com
 (2603:10a6:20b:e0::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.36 via Frontend
 Transport; Thu, 1 Oct 2020 16:29:35 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT047.mail.protection.outlook.com (10.152.16.197) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Thu, 1 Oct 2020 16:29:35 +0000
Received: ("Tessian outbound 195a290eb161:v64"); Thu, 01 Oct 2020 16:29:35 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7d4e67d40d793941
X-CR-MTA-TID: 64aa7808
Received: from 67050f35ac7b.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 5578BA16-5339-44A9-95BA-D52C0AF14065.1;
	Thu, 01 Oct 2020 16:29:28 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 67050f35ac7b.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Thu, 01 Oct 2020 16:29:28 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F1krPdmyZzIOAP3fxor8vIoeU9EylVYyQCpQdOwSV3XfN4UG5CJCF0l33zWR8Fs/jD29QNU8ieIPXIwy+8+nZi7v8KQHDG61YYIS7TLRWqTrfUsEO9MV51603ZYSKURwxJA/lqF3Dcz1C61+N49FLKoek8SZ+UdCmWbnakKfGkHuyponaNsCNe0s9mQvVYfczo1samkzCY66QooNTHfETozYIhdTEs6TWZkubTPXlfbib/Yhzt3BD37EWX5ZsjfEPUSKvnZ7HHwnR9zXi9yvjCuTKwrG9hZDlOXhjv96TbrtJmW4D6zrq5yKmC83qwLkK19gJYgHvoHbkqVEauQ2HA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+xDoGWmtH2d2BMf+8KbENi8tLb+ymvojVAsNjpS/JRQ=;
 b=c8flpHVdEZ/fllMVJJQljuAlgCSdjGVtHJJKFl+2QetZ6RTc59zU9IH9m6ny8cLFCiyhoCAcF9DqBkzC1giybKT0EYHmHFC4nvKSXzWnEknEOosgHOmqAAGZ0rwYs9SCQls0HnLvQooZBXwOybTzObjUlx1BBZlt02y6sRDogMbFi6143azal9nzrB4O0HKQp4lAInsOYITsYn4F14YG9dXdQKtNnTFpxF//laxPiwzJgqNl1D602CBozzXIyz/FdpQD/w2mg+BKHJaNbbgYvym5s/M9wSdVLiTyS/8aHjpuebIGrCcv+8FH1DFDC0NsAsJg2C0MqQ4kAsFOIqyv8A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+xDoGWmtH2d2BMf+8KbENi8tLb+ymvojVAsNjpS/JRQ=;
 b=aXcyUknnMhXIcqPiF3me7i6KhQzFN39Or1smigrsOQ+7So37x3MxlL879zoNdflIVRJLlD6R2cJ/uDZOSBi6nDtgzbukSrUyLKPs6VhVOihJUq6k2FVfBrOJgYahZyTq8N7Txly9BZAWTG0BrY1yXPauLt6PcSsJX+xoWBm6CSc=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR08MB2693.eurprd08.prod.outlook.com (2603:10a6:6:1c::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3412.23; Thu, 1 Oct
 2020 16:29:27 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Thu, 1 Oct 2020
 16:29:27 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH 0/6] tools/include: adjustments to the population of
 xen/
Thread-Topic: Ping: [PATCH 0/6] tools/include: adjustments to the population
 of xen/
Thread-Index: AQHWh2uA51KvLc+DNUCjQhMwaVDUdqmDCd+AgAAHWYA=
Date: Thu, 1 Oct 2020 16:29:27 +0000
Message-ID: <9F53B61A-5A50-46DD-BF5B-75F48C91FCFC@arm.com>
References: <2a9f86aa-9104-8a45-cd21-72acd693f924@suse.com>
 <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
In-Reply-To: <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c1d7708c-7902-4e4d-f411-08d866272f31
x-ms-traffictypediagnostic: DB6PR08MB2693:|HE1PR0801MB1849:
X-Microsoft-Antispam-PRVS:
	<HE1PR0801MB1849DC43081A5E82E345DEF49D300@HE1PR0801MB1849.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 I/4If7INilnVzAj9b7wF9GxDiqzs4AnfY050BJdqHKvLFNJpeDe+u1yryOafdol4KTrRgWzozibRqGwWuS2gK2F4tZBX0I1PB4eXSZSuj/HpKv5VQtGWJ4ldE+iy1GoyUiTz0jYvRtuDpzxMQT8qdNdOMgD3mkbJLHUJdTa4UhxpbSlQxe+I3cdY75ybH5fjqjiiZDSLrQI22+6HhiIAdzhA5ji5SGLgDqbO3VXz16SSYqlLTbWMAQ8f4cBFQLGxJlJOH+mEb+4LGxrFNRTjO6l2pAxIOQgX+Ysx2AdZbeqEgA7cuGjgyCaaI6RslWttQ9MSgiANpbE/vI3SX31dpg3V1o4cy828QwMG7cnMyrY89qJcRugYDxegG94rC4gH
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39850400004)(376002)(396003)(346002)(366004)(33656002)(54906003)(86362001)(2906002)(8936002)(71200400001)(186003)(5660300002)(36756003)(6512007)(478600001)(2616005)(8676002)(64756008)(66446008)(66946007)(76116006)(91956017)(6506007)(6916009)(6486002)(53546011)(316002)(4326008)(66556008)(26005)(66476007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 iRdslZTQUWCzyVZ4flWm3mFmXku1x9Ism6TYU2bNGo25YaKqoGlnS4LOwcQ55mX1d5zqoDgm0Bbkt5X1Xu6L1dGumZTmlodvOA2pAA1ztZrONxBwiXBawj7esYrBTe9k4ZfDp1oOXmpV9dv7a5s4wv3UU9Os7prEq1RPa/OQbagkQ/aCx4cq3YQKf2I9V96G8VYng2OR1VOsykenpf9ET5W9n+o0fI2SuVZJzhUCDIm+nCb48JEeICvI4TYuTraxlJYlvt90H0hevX2x5+0KxjPW0bJguk40WTyKujR6qISjt+6NpJmQ74IYwjqk3haalIJ11cihBOQemegGMSSW+rojnAn8hWzQw34Zxa+11JaWDbRxu4irV6mFq1uywVQaxA0RcUPvL20oLobS27koNJ9NXlN4bj0ypUpUz3TCCypHNO7xKFPfITCAX4Lm/hJyA4Z5AzGfyyyTFomK9jpco6iZPfniULNgy3z48Nl6/gwR0zjo7a82GS7gygufN0QwMUGqoarSFYdj6mLytTuA0CmIIjKjfzLTCIdxVZz5DkO6FmiuFFZ6XYChj3RszjfZ6l8QGUqGbijIGT3bE8fjnefWI7tRCpkm83JmVz4S02PisXUPnDD/b/NPFykMNUAyZmcSKQA+tH3hciybq+LOSA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <D6BDF005340F094EB5C73C2E176AF8B5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2693
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a60316bf-19f6-47e1-13c3-08d866272a9f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	qTF8g85Xohjunur4V7BUewNN0kIku4gfwUXc/+np3BGwbt6iGCJocCWvqK31oew08GIErmQHb95n4xxpECv4g5wnZkxUo9bIQ6deTjSeHwK0IWQ/G8ECm4QrqGLr73CCphDhZGnLnxOekRzkugFIxyPAZ9KluvZRgRgeyR5mn6jTfhIHPJwtYeS6Ffwk1vCc9tCjUhXn5XLlQhUE/ns2QCDbe8NXSBQ/n9fuM17iI70bmkDj37Ns47mWB9HWiy2jF+YDRQ8UxPojy4E4NRHZJLo9JZZmx8nTauROSLQs3XEHL8rTz8p3NiziBxxHwb2nGdSGRsFD3kZTHd52VWz6tfUgUGlAh/KMBMafLzgekAO4fyF+Z4Uh6Upf67SoldfpmgvQiQyA1xiBij3SU6iIJA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(396003)(39850400004)(376002)(136003)(46966005)(6862004)(6506007)(8676002)(53546011)(26005)(5660300002)(186003)(36756003)(478600001)(70586007)(36906005)(316002)(54906003)(70206006)(47076004)(6486002)(6512007)(8936002)(356005)(86362001)(2906002)(82740400003)(82310400003)(81166007)(4326008)(2616005)(33656002)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Oct 2020 16:29:35.3906
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c1d7708c-7902-4e4d-f411-08d866272f31
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB1849

Hi Jan,

> On 1 Oct 2020, at 17:03, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 10.09.2020 14:09, Jan Beulich wrote:
>> While looking at what it would take to move around libelf/
>> in the hypervisor subtree, I've run into this rule, which I
>> think can do with a few improvements and some simplification.
>> 
>> 1: adjust population of acpi/
>> 2: fix (drop) dependencies of when to populate xen/
>> 3: adjust population of public headers into xen/
>> 4: properly install Arm public headers
>> 5: adjust x86-specific population of xen/
>> 6: drop remaining -f from ln invocations
> 
> May I ask for an ack or otherwise here?

This is going the right way, but with this series (on top of the current
staging status) I have a compilation error in Yocto while compiling qemu:
 In file included from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenguest.h:25,
|                  from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/hw/i386/xen/xen_platform.c:41:
| /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenctrl_dom.h:19:10: fatal error: xen/libelf/libelf.h: No such file or directory
|    19 | #include <xen/libelf/libelf.h>
|       |          ^~~~~~~~~~~~~~~~~~~~~
| compilation terminated.
| /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/rules.mak:69: recipe for target 'hw/i386/xen/xen_platform.o' failed

Xen is using xenctrl_dom.h, which needs the libelf.h header from Xen.

Regards
Bertrand


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:38:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:38:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1486.4676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1bS-0003HC-4h; Thu, 01 Oct 2020 16:38:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1486.4676; Thu, 01 Oct 2020 16:38:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1bS-0003H5-1Z; Thu, 01 Oct 2020 16:38:46 +0000
Received: by outflank-mailman (input) for mailman id 1486;
 Thu, 01 Oct 2020 16:38:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e2ni=DI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kO1bQ-0003H0-QT
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:38:44 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7d00::608])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c3e856f-c6fe-4619-8db5-2c049ffa6953;
 Thu, 01 Oct 2020 16:38:43 +0000 (UTC)
Received: from DB6PR0301CA0099.eurprd03.prod.outlook.com (2603:10a6:6:30::46)
 by AM6PR08MB3798.eurprd08.prod.outlook.com (2603:10a6:20b:82::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Thu, 1 Oct
 2020 16:38:41 +0000
Received: from DB5EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:30:cafe::be) by DB6PR0301CA0099.outlook.office365.com
 (2603:10a6:6:30::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32 via Frontend
 Transport; Thu, 1 Oct 2020 16:38:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT020.mail.protection.outlook.com (10.152.20.134) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Thu, 1 Oct 2020 16:38:40 +0000
Received: ("Tessian outbound 195a290eb161:v64");
 Thu, 01 Oct 2020 16:38:40 +0000
Received: from 1673e14c3b73.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3356F7C2-AE44-40E9-BD56-CDD3613DEF83.1; 
 Thu, 01 Oct 2020 16:38:03 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1673e14c3b73.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 01 Oct 2020 16:38:03 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3161.eurprd08.prod.outlook.com (2603:10a6:5:1d::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3391.15; Thu, 1 Oct
 2020 16:38:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Thu, 1 Oct 2020
 16:38:00 +0000
X-Inumbo-ID: 5c3e856f-c6fe-4619-8db5-2c049ffa6953
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WS/JUmBvufyqprbu3/XyDf6TeHR5WPzwrs0d6RgMQXU=;
 b=1KYgnpf6wMTYNF/9qsa52CKDVFcmKTkXtZI7CMNwxwAl/AYK7z8DwHZmbyqlkAMjvRqmM8WRbVNlres6Y8Xq6JjacL1J87stQKHw9XqwpNDqxOqtmVE9tWT64MoYR4aSrqc58n9fZzpQGiAlM/bFyq08Y6tK8XqJOcNtxLEzBdk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 810c34f65d629049
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TPUC2N42dRJznP9mQ0Y4ORsTEujOGCTZONbqSZ2nrPghdKrBSq2ujVdOoZb/lvK7n/ycD8ArvSLBtQ68BTYPQvKA3caVaPerELeCg0LLzMjoOHwg2ji6bUTw/YgU7+iuLYpXFqDTvpx8ezzLDfiuBabPBoCm0n6xveXPGsWFE6TSEGlOFY2AD9oGkhWYE15Uk5ZQwhHS5qj5cjTFzwftIfKuNgmu7tn/yJrFqN6e4G/H+JwV1ciImf/aX8QwuyO39o7My2eXtxw1yY82eXF5Z++UcR9AHfsBUohZJGYNVmRA3TXLWS/YJ0SLtoVSbRUbPbQGBbQ3UZAAnb/EvZaQZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WS/JUmBvufyqprbu3/XyDf6TeHR5WPzwrs0d6RgMQXU=;
 b=ZUfWhZZ2K8LgLdpCCzY+gHKaFmxhrTQz0V3HeaSsnY99NSTEhWRsggJQsXTWvQNp60ruTzqVLLZVQz1RkxtMCe5Krs7AOz0Wbkd7WRV2LIj2eV51SAci44CEYXMGO+JxCt8ofmGht6PwU9D84+PorkSt78hZ+pbM6aFADGJESKe449ySOGSBTQlGy6qnSXKRQNuiFkz1Jl+Duazaxn2pY8PL2w7woqzfAeo2g44ihzbyTdUt95erCnBSrsUb2qo6myjAicRIhy02ZpVH6CRwdQFHmL61WeObMkBKsVs1ZlbAA6RbNJZ0rYnjURqqlBjcIu2krOM7jxNkpFkNFWHagg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Juergen Gross <jgross@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
Thread-Topic: [PATCH v3] tools/libs/stat: fix broken build
Thread-Index: AQHWiQYtlvZlYAgRJ0WaZGJ4HThwo6ln+foAgBsWa4A=
Date: Thu, 1 Oct 2020 16:38:00 +0000
Message-ID: <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
In-Reply-To: <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c46c8f6e-36b9-4b4f-f896-08d86628744e
x-ms-traffictypediagnostic: DB7PR08MB3161:|AM6PR08MB3798:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB37980C1274C81AB85FCB26B39D300@AM6PR08MB3798.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6430;OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 cRhr/vm0+ut4EXp8fk5MGNsyMqs5lX6vxdP2Gy+7l4ZR2SfdyVzH84CzvoiOLjXCF+BAaeJLhhk1YZcdAKt+PfBRdZHe4hrhqtpgorWr38lmNMQSKGXowwbKFhTPmExhmk2NwyH/C/kDs/VrhuxyUqGlY/itIwYYkMCkKpzlDoGGTc4LaoNPwfrunSglRP7qGDTxmGg+pABJMs04j0w2UgnqisZblt+VWLl6YQ04UJ77D72g0Xh2ISu6dH4mg3m89ebQMZ65vpWOJOli/6h96vraKuNVo46KWUqW86I/Kv4TS6wE1ii0d8I9M41pozyPuceiwVvaPG+bHRodNQIJynSXEphXdoTG7Mf/kRvls1EtNIIsyklsOU0wvydn8Vzr
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39850400004)(396003)(376002)(136003)(366004)(346002)(8936002)(66446008)(76116006)(83380400001)(66946007)(86362001)(6512007)(6916009)(71200400001)(6486002)(33656002)(8676002)(64756008)(91956017)(66476007)(2906002)(4326008)(54906003)(478600001)(66556008)(6506007)(53546011)(5660300002)(2616005)(316002)(26005)(36756003)(186003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 ulUtT8mIOWoqQncAxUz55tbbIY0V+hWl84ZplcltvdhmEgK2XcXYrInNpjqaU6cr+tSMKZDQiAJCoQCicNPKbDukOU3rzqoUMWbeatyYibw6V+ESGvyfiMayTDZqWnFP0gY8itZe690qIuMhpbYFdkrKWV4Tz3bhl05FGrWOZSUE7bQgZSoFyjz46QJwixZtGOLT9NWtth7/0mVqpm4FHpOk41YCJOtHD9rwM1joLtSJZm+3IVHP+I66Laq99j9zeoSDmSUr5ldWtQNjpnp0ibcU/N/W+7jCYyIVmSFiz0npBN5/dOhcLp1FVVAGiRMLRAguZaxMJ/VzXiVF+gVRaNNouTX3ximN8NBf2AMm6lMzYkFxCHDjFAj9c4nFP8da3PFMHqcW9bBvQ/2li9OWWiDiBhGUynZGr4XDn68/Yrfb5ZSqk+C6kdmoAZKLDft8PFuhfJglpPTsktLp7ihluODHdQoyzq1T47cHPeIujadugjLztaDIyouNoPkpfib/EpBHeg+4/E00X2+zhDQ2+yMXSI8HQQKDRNqORuIK51nFEzCFk9lFm1D7YuvheAtE+z1mcjzW0jiyTq91GsNO8jCK5mOE9fxAtkxbccDmm9J/mBIRs4Ru1NPMu/YvphXUPrm7ZNSEFRZZCzGr1F9V4A==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <82D960DD67D6B04F9C874057903DFD08@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3161
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	dcbf303e-1277-4b1e-c573-08d866285c3f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ttn+VNeCisZImfRxTXgadcjNsuwmmVJOBdSz/9KI6v/B4purM4yY1FryqkznXCOksc9Ec71Rf+Xw5NPA26xy6i09BTk3c+xAn4r1GSUCqZd3+VADVvQ9ILCOGmP3q5/RhuWQVQKNY8AsnQvGR+0hkbg9GWZ7g1Ic4lXH8GiwdT5ttWAXhyhjJMVhi5uzCJnT7q/fXisTXUrzv6tPn16NK3khqSq6tuls/4i0bu+qkv6XoKEBUfZ+LSN0+1LiSXJTHDkGZe/J5srqpxy2e4vFQi55DALs5ZjU9j069c9sbU80rElw140TqhvToZpCK7adqLJE0qyOLD8diE/r99XUagXjZl7S1IJJa3CDRzGZK/uLiD2VOg2qrWvAzXkT/wYfVe60Z+F778AowoLUL31LfQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39850400004)(396003)(346002)(376002)(136003)(46966005)(6506007)(186003)(36756003)(70586007)(33656002)(54906003)(70206006)(316002)(8676002)(86362001)(82310400003)(6512007)(2906002)(26005)(4326008)(8936002)(6862004)(5660300002)(81166007)(6486002)(53546011)(2616005)(356005)(47076004)(83380400001)(82740400003)(478600001)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Oct 2020 16:38:40.8922
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c46c8f6e-36b9-4b4f-f896-08d86628744e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3798

Hi Juergen,

> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
> 
> 
> 
>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>> 
>> Making getBridge() static triggered a build error with some gcc versions:
>> 
>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>> length 255 [-Werror=stringop-truncation]
>> 
>> Fix that by using a buffer with 256 bytes instead.
>> 
>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Sorry, I have to come back on this one.

I still see an error when compiling this with Yocto:
|     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
| xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
|    81 |      strncpy(result, de->d_name, resultLen);
|       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To solve it, I need to define devBridge[257], the same size as devNoBridge.

Regards
Bertrand

> 
>> ---
>> tools/libs/stat/xenstat_linux.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>> 
>> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
>> index 793263f2b6..d2ee6fda64 100644
>> --- a/tools/libs/stat/xenstat_linux.c
>> +++ b/tools/libs/stat/xenstat_linux.c
>> @@ -78,7 +78,7 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>> 				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>> 
>> 				if (access(tmp, F_OK) == 0) {
>> -					strncpy(result, de->d_name, resultLen - 1);
>> +					strncpy(result, de->d_name, resultLen);
>> 					result[resultLen - 1] = 0;
>> 				}
>> 		}
>> @@ -264,7 +264,7 @@ int xenstat_collect_networks(xenstat_node * node)
>> {
>> 	/* Helper variables for parseNetDevLine() function defined above */
>> 	int i;
>> 	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[16] = { 0 }, devNoBridge[17] = { 0 };
>> +	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[256] = { 0 }, devNoBridge[257] = { 0 };
>> 	unsigned long long rxBytes, rxPackets, rxErrs, rxDrops, txBytes, txPackets, txErrs, txDrops;
>> 
>> 	struct priv_data *priv = get_priv_data(node->handle);
>> -- 
>> 2.26.2
>> 
>> 
> 



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:44:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:44:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1497.4689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1hB-0004Br-Rq; Thu, 01 Oct 2020 16:44:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1497.4689; Thu, 01 Oct 2020 16:44:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1hB-0004Bk-Oe; Thu, 01 Oct 2020 16:44:41 +0000
Received: by outflank-mailman (input) for mailman id 1497;
 Thu, 01 Oct 2020 16:44:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e2ni=DI=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kO1hA-0004Ba-CO
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:44:40 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe09::61e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2071d726-3cb5-4dd9-b120-ff122ce20b7c;
 Thu, 01 Oct 2020 16:44:38 +0000 (UTC)
Received: from DB8PR06CA0011.eurprd06.prod.outlook.com (2603:10a6:10:100::24)
 by HE1PR0802MB2553.eurprd08.prod.outlook.com (2603:10a6:3:df::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Thu, 1 Oct
 2020 16:44:36 +0000
Received: from DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:100:cafe::98) by DB8PR06CA0011.outlook.office365.com
 (2603:10a6:10:100::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32 via Frontend
 Transport; Thu, 1 Oct 2020 16:44:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT032.mail.protection.outlook.com (10.152.20.162) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Thu, 1 Oct 2020 16:44:35 +0000
Received: ("Tessian outbound a0bffebca527:v64");
 Thu, 01 Oct 2020 16:44:35 +0000
Received: from d293b38433e3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4D25FE3E-4A3B-4DA2-8C17-EAF98402597B.1; 
 Thu, 01 Oct 2020 16:43:57 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d293b38433e3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 01 Oct 2020 16:43:57 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4539.eurprd08.prod.outlook.com (2603:10a6:10:cf::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.37; Thu, 1 Oct
 2020 16:43:56 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Thu, 1 Oct 2020
 16:43:56 +0000
X-Inumbo-ID: 2071d726-3cb5-4dd9-b120-ff122ce20b7c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SrbNnulJmbLMPf40lFql3k8274MNQgxPLPCyIdAU0FA=;
 b=XelxDLBJ/wR6pYJ9LljxcPZbxCTxEr/z3VT9tBNGE23Tn8djfXis9T2snpwHemZBhaIId3c9GpYfU+bkQ3c2nBTN3s9jPJdckByRv1yfHxl8udCJasqrOwszB/uAtUMJTs0Jvy8FIokClBojo4oSUDxeNtVyrxX/uOvu0lfPlwQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 52b8647ebbecfdd7
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=itZ9GW0SvQR3N2PAsLOLnF66d1G6BFiZ66ARddBaR0z6Q1e7vugDmV6wewD+xudBjkfDmWGXxggNJKhf9OdJXWY6r2wjDel529PCK86KshPoPvQhg+5ZHbuskugUYXjVNfLzs6EhiwUyUqFGdvS81ZwkWeduH9K2nw4MhIcYDxtsNo2WgrWsm/frtnrfqPqI90VTE7DLMsbd0Ns23kNcWQ5fhOalXwoN7flVDL/HKvIjTxf793YaOrGqPrwPe73giIC4Om4VLBhUzpnLhV4uMFNo5yFZ+EIJme7btIhRNDm5a+8rOS60bnUfXNZN/VeDl3iSx4x+vyWR2xNmQE7Ncg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SrbNnulJmbLMPf40lFql3k8274MNQgxPLPCyIdAU0FA=;
 b=EHFQraqYwLXj8ewX9pzpMJFGIyzY7tSDIDZGhpNptagAcYouUJHTitfP/lVTAh96rb0/oLG5h8Uk8vOhkJqdyT8SwnrX3x2EzEBpFnDOje+VcIqGoHX5dB4VgTgh8Teu5asWBiHT6GOLOV5AjkcSIt70yv25Q40zv5WAtWTEXJCrRFqgyQRfH6yBez7E6R+dLVoRNUpQuLlQ8TcJ6I3SdRKTvw+A8JJcG+ebgaLJe4x2WDWqvjyiZG0TsFmpZHgHOMIB4KgitH8JCpY36E+wk/AeR/13xnIsKp6XgD4CgNqO1sQq32icP2bM64MBYkWMZmxX4q4cRYFp7+/ZnXbzkg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SrbNnulJmbLMPf40lFql3k8274MNQgxPLPCyIdAU0FA=;
 b=XelxDLBJ/wR6pYJ9LljxcPZbxCTxEr/z3VT9tBNGE23Tn8djfXis9T2snpwHemZBhaIId3c9GpYfU+bkQ3c2nBTN3s9jPJdckByRv1yfHxl8udCJasqrOwszB/uAtUMJTs0Jvy8FIokClBojo4oSUDxeNtVyrxX/uOvu0lfPlwQ=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4539.eurprd08.prod.outlook.com (2603:10a6:10:cf::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.37; Thu, 1 Oct
 2020 16:43:56 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Thu, 1 Oct 2020
 16:43:56 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH 0/6] tools/include: adjustments to the population of
 xen/
Thread-Topic: Ping: [PATCH 0/6] tools/include: adjustments to the population
 of xen/
Thread-Index: AQHWh2uA51KvLc+DNUCjQhMwaVDUdqmDCd+AgAAHWYCAAAQLgA==
Date: Thu, 1 Oct 2020 16:43:55 +0000
Message-ID: <6B9403A3-66DC-4A69-8006-096420649768@arm.com>
References: <2a9f86aa-9104-8a45-cd21-72acd693f924@suse.com>
 <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
 <9F53B61A-5A50-46DD-BF5B-75F48C91FCFC@arm.com>
In-Reply-To: <9F53B61A-5A50-46DD-BF5B-75F48C91FCFC@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f9174eec-a764-4833-1833-08d8662947c2
x-ms-traffictypediagnostic: DBBPR08MB4539:|HE1PR0802MB2553:
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB25539A4614543F9B8553DF439D300@HE1PR0802MB2553.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 4DGUnJQN2E9DaXkkVoVxcOzGRCbuy4giyChDqvujakDR6pseJ2z0b387ZkjYDhzY/RpLEz7vR7RNuWUw1zXtFE/Eb7eJw3V0utBJx/XVgPISg4o+0a+jwd8bInk7tVVQ0FciIwvfCcmGwUPJHt9GECVZFFKhJwvj40SrFX4abAgFfd9yfC1oja0VYrNOrBH6xfyehPZiTn3eynMRiYzSSvKRuCuj8HUHX+dkjWRRRXXycqMPXOJnjV29Pu4NLgXtShNQfMnCr/6J3nnKPAvxfVGz/Wq7gIKO0L6Wo94dQs2SGq0ZiMFymYoI6TvO3uNT
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(366004)(346002)(376002)(136003)(186003)(8936002)(4326008)(6486002)(36756003)(86362001)(66556008)(64756008)(71200400001)(54906003)(8676002)(478600001)(5660300002)(316002)(26005)(76116006)(2906002)(2616005)(66476007)(66946007)(53546011)(66446008)(91956017)(6506007)(6916009)(33656002)(6512007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 KOAFfol02fgDeblrR+kzGMkKtyRSZ7ddE4jsH5riMvP793/HhB9gzR4RRBsaLPjuF+lDVCF6XcteqI33HiRpvtUUD3v34n7bULysfN7s4fQ1j+nelwzO2qP7+FNIj8SKazsWS0ymLC1M5pdUAjrSLXp+GZVo1QB1TyOnTmZsm8JGP59FxON9KYbhHsxw6HnkMgr79jDDfCBpUv1Jfnz8cYlXxSqh/mWRmFPBhTP6hH/ChSY0UNg6p0pFNbXOLTvTWSqIiGuaJF8e9vaPUcI1P/Numw+FriNB17m6FIXKYV2NOvw2KpHj4W8mFJOtsAUb0YdVaWv1zuqTQNjdD741pK0MNHBrwVxeQxJBo6H+R0m9W2rpxwaenyqW8c31Una1UkLmM1VywTBX7T8YyCIi0i/s5r3jbh/1xwdEWKTdNgUTg0pbug+J9YqJMRDVTSWIT385dg6y7merYMfhQuyHW0bYGmjPqhWrD6fjeRgssKoEOS+V/2izE5YKlIZEw6jfqTSNupcEU1aY9Ig1n4DgHTfGyemIo+3Fio0btNSpWlFVVbUotrgb8Y0Ld0/RMxQw2HNhfheWXThLRDI2gUacTH/7ELpFwO02u/upK2wvQzyWBP4zNcsydg/tYPLwMl79ns7oF6GN3hvDQ7ZeRZEEiA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <027263E1EA339B44A92D54784E09208B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4539
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	93b4e7f8-95ad-4be8-3f2c-08d866293032
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	G1ivB3ETCJuxSo1zILCYS7YQ+OIZyRzE+vXHZ0sdhdy35qA0VK6ClTdG0ozJHlvsX9zzTkZsXQp2FRKnR3nMOvnWhOQNdriG3K79AMNJ1fhReK/mzcKt4MdBHur7oy3fc6NPILbgYqc2dU1nRSQYdjeAdt/R/NfhVL+elcNmE3D0ExXshB2OecI6rWalnoB2/YfKcA3HFhcNO+rGQouY9xuesD8uv+tE3+hZloWbIBDxMmAbW2mbvQgWXyqzm40Qlg56wDMXup+9ZfsIAq8QZ35aEZ9IOHWzJ2f3qNNxFcXZaan5LlQSfeFA9KN23iNulehYuQ3mqgT9l7H66VlZX+HZs5/4Cbxggxc2+WDQvnynCmPmTzaay51l+lBgJOGCJdLUuuwZ8d1Y5IGa8Z/nUw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(39850400004)(136003)(396003)(376002)(46966005)(4326008)(6862004)(8936002)(6506007)(5660300002)(82310400003)(356005)(81166007)(478600001)(53546011)(6512007)(70206006)(316002)(6486002)(2616005)(336012)(47076004)(36756003)(33656002)(8676002)(86362001)(70586007)(54906003)(26005)(186003)(82740400003)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 Oct 2020 16:44:35.6450
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f9174eec-a764-4833-1833-08d8662947c2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT032.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2553

Hi,

> On 1 Oct 2020, at 17:29, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
> 
> Hi Jan,
> 
>> On 1 Oct 2020, at 17:03, Jan Beulich <jbeulich@suse.com> wrote:
>> 
>> On 10.09.2020 14:09, Jan Beulich wrote:
>>> While looking at what it would take to move around libelf/
>>> in the hypervisor subtree, I've run into this rule, which I
>>> think can do with a few improvements and some simplification.
>>> 
>>> 1: adjust population of acpi/
>>> 2: fix (drop) dependencies of when to populate xen/
>>> 3: adjust population of public headers into xen/
>>> 4: properly install Arm public headers
>>> 5: adjust x86-specific population of xen/
>>> 6: drop remaining -f from ln invocations
>> 
>> May I ask for an ack or otherwise here?
> 
> This is going the right way but with this series (on top of current staging
> status), I have a compilation error in Yocto while compiling qemu:
> In file included from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenguest.h:25,
> |                  from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/hw/i386/xen/xen_platform.c:41:
> | /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenctrl_dom.h:19:10: fatal error: xen/libelf/libelf.h: No such file or directory
> |    19 | #include <xen/libelf/libelf.h>
> |       |          ^~~~~~~~~~~~~~~~~~~~~
> | compilation terminated.
> | /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/rules.mak:69: recipe for target 'hw/i386/xen/xen_platform.o' failed
> 
> Xen is using xenctrl_dom.h which needs the libelf.h header from xen.

Actually this is not coming from your series; the problem is already present on master.

Regards
Bertrand


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 16:50:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 16:50:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1522.4701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1mn-00056e-He; Thu, 01 Oct 2020 16:50:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1522.4701; Thu, 01 Oct 2020 16:50:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO1mn-00056X-EV; Thu, 01 Oct 2020 16:50:29 +0000
Received: by outflank-mailman (input) for mailman id 1522;
 Thu, 01 Oct 2020 16:50:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kO1mm-00056Q-Hp
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:50:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 960b719c-02ea-41ad-bdbd-6cf5a7cf68b8;
 Thu, 01 Oct 2020 16:50:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO1mj-00031L-NM; Thu, 01 Oct 2020 16:50:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO1mj-00083V-D7; Thu, 01 Oct 2020 16:50:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kO1mj-0005Dq-7P; Thu, 01 Oct 2020 16:50:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kO1mm-00056Q-Hp
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 16:50:28 +0000
X-Inumbo-ID: 960b719c-02ea-41ad-bdbd-6cf5a7cf68b8
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 960b719c-02ea-41ad-bdbd-6cf5a7cf68b8;
	Thu, 01 Oct 2020 16:50:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=fPQv49KslP/3k5egov6VpXy3gz/ZeyqQOWkjxqvWO6I=; b=h8MDKSvkPOY2Wzud5fhjjfzlPx
	7oHtsfv33Eytr+anFxipyQXdjtRQj1PzAghkpnUIDAzEGUrURMme+81WVTWvQ3qnHLbobTGTTDM+A
	8aQkZvvegwI3vRpl+JbMgtVoZ214ZksUw1iCbEma/MaotGdCRytLyPg7D9eyiTJ5I8DY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kO1mj-00031L-NM; Thu, 01 Oct 2020 16:50:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kO1mj-00083V-D7; Thu, 01 Oct 2020 16:50:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kO1mj-0005Dq-7P; Thu, 01 Oct 2020 16:50:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-amd64-xl-xsm
Message-Id: <E1kO1mj-0005Dq-7P@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 16:50:25 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-xsm
testid guest-start

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155250/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-xl-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-xsm.guest-start --summary-out=tmp/155250.bisection-summary --basis-template=154611 --blessings=real,real-bisect xen-unstable test-amd64-amd64-xl-xsm guest-start
Searching for failure / basis pass:
 155113 fail [host=albana1] / 154611 [host=elbling0] 154592 [host=chardonnay0] 154576 [host=albana0] 154556 [host=godello0] 154521 [host=fiano0] 154504 [host=fiano1] 154494 [host=huxelrebe1] 154481 [host=godello1] 154465 [host=pinot0] 154090 [host=chardonnay1] 154058 [host=pinot1] 154036 [host=huxelrebe0] 154016 [host=huxelrebe1] 153983 ok.
Failure / basis pass flights: 155113 / 153983
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 1e2d3be2e516e6f415ca6029f651b76a8563a27c
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc4\
 37c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#1e2d3be2e516e6f415ca6029f651b76a8563a27c-5dba8c2f23049aa68b777a9e7e9f76c12dd00012
Loaded 5001 nodes in revision graph
Searching for test results:
 153983 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 1e2d3be2e516e6f415ca6029f651b76a8563a27c
 154016 [host=huxelrebe1]
 154036 [host=huxelrebe0]
 154058 [host=pinot1]
 154090 [host=chardonnay1]
 154465 [host=pinot0]
 154481 [host=godello1]
 154494 [host=huxelrebe1]
 154504 [host=fiano1]
 154521 [host=fiano0]
 154556 [host=godello0]
 154576 [host=albana0]
 154592 [host=chardonnay0]
 154611 [host=elbling0]
 154634 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 2785b2a9e04abc148e1c5259f4faee708ea356f4
 155017 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5bcac985498ed83d89666959175ca9c9ed561ae1
 155120 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 1e2d3be2e516e6f415ca6029f651b76a8563a27c
 155175 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5bcac985498ed83d89666959175ca9c9ed561ae1
 155179 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b4e41b1750d550bf2b1ccf97ee46f4f682bdbb62
 155183 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6
 155188 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8e76aef72820435e766c7f339ed36da33da90c40
 155197 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 112992b05b2d2ca63f3c78eefe1cf8d192d7303a
 155202 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
 155113 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155208 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 62bcdc4edbf6d8c6e8a25544d48de22ccf75310d
 155215 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155221 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155227 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
 155230 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155233 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
 155245 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155250 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
Searching for interesting versions
 Result found: flight 153983 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325, results HASH(0x55a9ee49e840) HASH(0x55a9ee9edbf8) HASH(0x55a9ee4b04b8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1e\
 d79d824e605a70c3626bc437c386260 62bcdc4edbf6d8c6e8a25544d48de22ccf75310d, results HASH(0x55a9eea9a3c0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 112992b05b2d2ca63f3c78eefe1cf8d192d7303a, results HASH(0x55a9eea934b0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f\
 0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8e76aef72820435e766c7f339ed36da33da90c40, results HASH(0x55a9eea90a48) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6, results HASH(0x55a9eea8f340) For basis failure, parent search stopping at c3038e718a19\
 fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b4e41b1750d550bf2b1ccf97ee46f4f682bdbb62, results HASH(0x55a9eea8ca30) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 1e2d3be2e516e6f415ca6029f651b76a8563a27c, results HASH(0x55a9eea02aa\
 8) HASH(0x55a9eea0a7f0) Result found: flight 154634 (fail), for basis failure (at ancestor ~366)
 Repro found: flight 155120 (pass), for basis pass
 Repro found: flight 155215 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
No revisions left to test, checking graph state.
 Result found: flight 155221 (pass), for last pass
 Result found: flight 155227 (fail), for first failure
 Repro found: flight 155230 (pass), for last pass
 Repro found: flight 155233 (fail), for first failure
 Repro found: flight 155245 (pass), for last pass
 Repro found: flight 155250 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155250/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-amd64-xl-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155250: tolerable ALL FAIL

flight 155250 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155250/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-xsm      12 guest-start             fail baseline untested


jobs:
 test-amd64-amd64-xl-xsm                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu Oct 01 17:27:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 17:27:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1553.4724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO2Mh-0007vo-Mq; Thu, 01 Oct 2020 17:27:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1553.4724; Thu, 01 Oct 2020 17:27:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO2Mh-0007vh-Jt; Thu, 01 Oct 2020 17:27:35 +0000
Received: by outflank-mailman (input) for mailman id 1553;
 Thu, 01 Oct 2020 17:27:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44uA=DI=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kO2Mf-0007vc-VQ
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 17:27:34 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5544eef8-9db0-4123-accd-99d919307425;
 Thu, 01 Oct 2020 17:27:33 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id w3so5330127ljo.5
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 10:27:33 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=44uA=DI=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kO2Mf-0007vc-VQ
	for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 17:27:34 +0000
X-Inumbo-ID: 5544eef8-9db0-4123-accd-99d919307425
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5544eef8-9db0-4123-accd-99d919307425;
	Thu, 01 Oct 2020 17:27:33 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id w3so5330127ljo.5
        for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 10:27:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Tc1yjnWLTCX+45mL27c71YyFWgA6XWpoJIr9YEN/nLw=;
        b=k6RGaFrmzfkQGdMI2DRmbD0ZF66y/rdQvY2D+Ntekyo6rfQXxSYtwlKsGdhuinnYxF
         DvPZf4BAeTN00HHf6q6mp4H7Sk+UF7R24CxRRvYXb0qTr+ED0lN+/u7N9wbp+eAFk7T1
         /4bQiOhKp6WUxqqAGxpq6ZDyk8p6D9TMegW2JTTXY7nITEKzB9TK4pUiP2U8sTC60hhI
         WtoV/1BQXmOeleOj05CjlqCKqZJA0KNXWuahRa/2a+Ta6AB5knvFL0yl6cLgcIwd5+l0
         XTp6RK5soB3bul9DUy07CHj1r7sTLcZ5iowaE3WLNdGsfQWj5MF167K/kmbhAnKdEQ0H
         fG1w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Tc1yjnWLTCX+45mL27c71YyFWgA6XWpoJIr9YEN/nLw=;
        b=GASlI0rqoE2DOfK5H/DVPB73kM3DNfyKKZ68GD82Kid6Cwxy1cEPj52P81T/SyV4Sj
         eJY7efocsGnP/JM6OON6jrVRTAUeIK5ubE0Nfz7qCW5YBl0bPnS6qfqLOYfS+WwYxpP0
         rCKyoOxZc7UXrfBTOFFRJWLgH3Oq/LbkSMULdvL8xcCursdmy81P+wlqm4WjvrDz2SJY
         7clzQshHD98mdaJInDhd2bj9o017/ST1nUtyUgVEFI6s8Dscs7ovOSzMI6WbInt4z1Bj
         nut71B5cUj9skIh7me7uGxu7xkpca4CTxvnNE4NHwghoDOJUnupkeioj8iRaGsEC/1qu
         KEAg==
X-Gm-Message-State: AOAM531sa0R3iHytGVBBgy9XbxKzQNetGG5xIHrPk5nJIwCRcPm3sS3i
	GXck4+1HUXDqsLKscxjA6+UuC0sYgQ9t7BM3kdI=
X-Google-Smtp-Source: ABdhPJxp2b48DoNTYqyErD3PHQTMjrHJkKhWFjIzYhguxGidgivDdpTRm9n97ggW+fHzqEIksQWaTvkCzNaP+cB7SYI=
X-Received: by 2002:a2e:8782:: with SMTP id n2mr2899705lji.262.1601573252048;
 Thu, 01 Oct 2020 10:27:32 -0700 (PDT)
MIME-Version: 1.0
References: <f633e95e-11e7-ccfc-07ce-7cc817fcd7fe@suse.com> <e5d5dfee-aeee-ed3d-bcea-91e82198e04f@xen.org>
In-Reply-To: <e5d5dfee-aeee-ed3d-bcea-91e82198e04f@xen.org>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 1 Oct 2020 13:27:20 -0400
Message-ID: <CAKf6xpuhCBAAHkk4D8NKiKH3EopQTYpeN9n3E60rJUY=5_SpGg@mail.gmail.com>
Subject: Re: [PATCH] evtchn/Flask: pre-allocate node on send path
To: Julien Grall <julien@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Daniel de Graaf <dgdegra@tycho.nsa.gov>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <George.Dunlap@eu.citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 1, 2020 at 12:05 PM Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 24/09/2020 11:53, Jan Beulich wrote:
> > xmalloc() & Co may not be called with IRQs off, or else check_lock()
> > will have its assertion trigger about locks getting acquired
> > inconsistently. Re-arranging the locking in evtchn_send() doesn't seem
> > very reasonable, especially since the per-channel lock was introduced to
> > avoid acquiring the per-domain event lock on the send paths. Issue a
> > second call to xsm_evtchn_send() instead, before acquiring the lock, to
> > give XSM / Flask a chance to pre-allocate whatever it may need.
> >
> > As these nodes are used merely for caching earlier decisions' results,
> > allocate just one node in AVC code despite two potentially being needed.
> > Things will merely be not as performant if a second allocation was
> > wanted, just like when the pre-allocation fails.
> >
> > Fixes: c0ddc8634845 ("evtchn: convert per-channel lock to be IRQ-safe")
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> As discussed on the community call with one comment below:
>
> Acked-by: Julien Grall <jgrall@amazon.com>

Tested-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 17:43:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 17:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1557.4736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO2c6-0001De-41; Thu, 01 Oct 2020 17:43:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1557.4736; Thu, 01 Oct 2020 17:43:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO2c6-0001DX-0y; Thu, 01 Oct 2020 17:43:30 +0000
Received: by outflank-mailman (input) for mailman id 1557;
 Thu, 01 Oct 2020 17:43:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kO2c4-0001DS-8Q
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 17:43:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa7988a2-75b6-4d27-8f0d-958a3cbb6264;
 Thu, 01 Oct 2020 17:43:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO2c1-00046l-G2; Thu, 01 Oct 2020 17:43:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO2c1-00025O-8c; Thu, 01 Oct 2020 17:43:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kO2c1-0002GO-88; Thu, 01 Oct 2020 17:43:25 +0000
X-Inumbo-ID: fa7988a2-75b6-4d27-8f0d-958a3cbb6264
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=trRZ21lR95FT/gypyltIDVNQhgG59oXi6gN/0HFma00=; b=VBV4pXTAoFUb9snNRX6mj1RgKg
	oU1iuxLQj+z0Z2uNioLKNC+tn0oKxviUsKo7KOc6xfd6ow1HEkzoKWq6jOfSNmerv/AWNf43jvqXd
	BnWte25JYwQcfOlgGcUZD9IjWmL8vIKiccqUDfz1UXM2oH26850QzGy2pweXEkOoIzxI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155132-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 155132: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=88f5b414ac0f8008c1e2b26f93c3d980120941f7
X-Osstest-Versions-That:
    xen=c663fa577b42e7f4731bb33fc7f94f7ffb05a1ef
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 17:43:25 +0000

flight 155132 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155132/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154358
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154358
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154358
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154358
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154358

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  88f5b414ac0f8008c1e2b26f93c3d980120941f7
baseline version:
 xen                  c663fa577b42e7f4731bb33fc7f94f7ffb05a1ef

Last test of basis   154358  2020-09-15 09:40:09 Z   16 days
Failing since        154602  2020-09-22 02:37:01 Z    9 days    8 attempts
Testing same since   154625  2020-09-22 20:06:06 Z    8 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Chen <wei.chen@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 564 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 17:45:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 17:45:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1560.4751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO2e1-0001Nd-JP; Thu, 01 Oct 2020 17:45:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1560.4751; Thu, 01 Oct 2020 17:45:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO2e1-0001NW-Fc; Thu, 01 Oct 2020 17:45:29 +0000
Received: by outflank-mailman (input) for mailman id 1560;
 Thu, 01 Oct 2020 17:45:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kO2e0-0001NQ-3K
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 17:45:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45cf8d2f-b5f4-4e4e-afe1-e17407b028fa;
 Thu, 01 Oct 2020 17:45:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO2dx-00048t-2H; Thu, 01 Oct 2020 17:45:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO2dw-0002B9-O6; Thu, 01 Oct 2020 17:45:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kO2dw-0004ET-Nb; Thu, 01 Oct 2020 17:45:24 +0000
X-Inumbo-ID: 45cf8d2f-b5f4-4e4e-afe1-e17407b028fa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MvaBuOhn21SNcGYw/r0AwiGkDR6m2N4s5xePTrLfgTg=; b=A6MpvXtKJdxgGz7mu2p7DlulCd
	i9o/nGvG7ued9f3jMWfrQernMJdSgKfpPCUvXve097J8I41NlqEF1HB1zuWMbPWz5yPwtk1lETN4G
	anqsLThfaoLWbJywSeyaiXXsk3FNTg7C12/YVqLEh4BlJv9W6JsDzvSxZ5AkhfzBjLjo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155136-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 155136: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    seabios=849c5e50b6f474df6cc113130575bcdccfafcd9e
X-Osstest-Versions-That:
    seabios=41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 17:45:24 +0000

flight 155136 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155136/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 155049
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 155049
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 155049
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 155049
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              849c5e50b6f474df6cc113130575bcdccfafcd9e
baseline version:
 seabios              41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5

Last test of basis   155049  2020-09-28 22:09:36 Z    2 days
Testing same since   155136  2020-09-30 11:09:37 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  weitaowang-oc@zhaoxin.com <weitaowang-oc@zhaoxin.com>
  WeitaoWangoc <WeitaoWang-oc@zhaoxin.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   41289b8..849c5e5  849c5e50b6f474df6cc113130575bcdccfafcd9e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 18:32:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 18:32:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1574.4779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO3NF-0005nR-JW; Thu, 01 Oct 2020 18:32:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1574.4779; Thu, 01 Oct 2020 18:32:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO3NF-0005nK-GV; Thu, 01 Oct 2020 18:32:13 +0000
Received: by outflank-mailman (input) for mailman id 1574;
 Thu, 01 Oct 2020 18:32:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kO3NE-0005mm-5G
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 18:32:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7957f11d-2125-4f55-b388-5b006d470e4b;
 Thu, 01 Oct 2020 18:32:04 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO3N6-0005BI-8l; Thu, 01 Oct 2020 18:32:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO3N6-000557-0r; Thu, 01 Oct 2020 18:32:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kO3N6-0007ru-0L; Thu, 01 Oct 2020 18:32:04 +0000
X-Inumbo-ID: 7957f11d-2125-4f55-b388-5b006d470e4b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tIj95CVhi7YxQ+wDSPYZOv4uWEzyzt0PJlRZjvVRx6I=; b=qzfzzMmwwRaVGw9z81qn1YtT3e
	/v6cRfNsDBlNMlwMIQNSpC+NhF/VXUSPs6yocSwhp0Nqr0dXUqoSbFnVXs9Y1PWsrcSgk/AVrK3cw
	gUFbjaE7fF2cPROp7iJKLVxXlGTNomzTDE+sJFIk5DmXTZ66ZtBHKbnISaMP2PyZCU/w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155246-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155246: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 18:32:04 +0000

flight 155246 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155246/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    1 days
Failing since        155144  2020-09-30 16:01:24 Z    1 days    8 attempts
Testing same since   155246  2020-10-01 15:01:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: hide struct elf_dom_parms layout from users
    
    Don't include struct elf_dom_parms in struct xc_dom_image, but rather
    use a pointer to reference it. Together with adding accessor functions
    for the externally needed elements, this makes it possible to drop the
    inclusion of the Xen-private header xen/libelf/libelf.h from xenguest.h.
    
    Fixes: 7e0165c19387 ("tools/libxc: untangle libxenctrl from libxenguest")
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7f186b1996dea2992c8ed3606b38d73222293c37
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: make xc_dom_loader interface private to libxenguest
    
    The pluggable kernel loader interface is only used internally by
    libxenguest, so make it private. This removes a dependency on the Xen
    internal header xen/libelf/libelf.h from xenguest.h.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 77a09716f251ac0e35ddd4bfda8f35fe639f432b
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libs: merge xenctrl_dom.h into xenguest.h
    
    Today xenctrl_dom.h is part of libxenctrl as it is included by
    xc_private.c. This seems not to be needed, so merge xenctrl_dom.h into
    xenguest.h where its contents really should be.
    
    Replace all #includes of xenctrl_dom.h by xenguest.h ones or drop them
    if xenguest.h is already included.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 3ae0d316f01c08903a96f6b5b39275c67b823264
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: move libxlutil to tools/libs/util
    
    Move the libxlutil source to tools/libs/util and delete tools/libxl.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit b22b9b9a1df865e1dd9e4f6950ae6be7081be010
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libs: add option for library names not starting with libxen
    
    libxlutil doesn't follow the standard name pattern of all other Xen
    libraries, so add another make variable which can be used to allow
    other names.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bc01c73018689e066e06515b26181d463a3f2a40
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: rename global libxlutil make variables
    
    Rename *_libxlutil make variables to *_libxenutil in order to avoid
    nasty indirections when moving libxlutil under the tools/libs
    infrastructure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 41aea82de2b581c61482aeddab151ecf3b1bca25
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libxl: move libxenlight to tools/libs/light
    
    Carve out all libxenlight related sources and move them to
    tools/libs/light in order to use the generic library build environment.
    
    The closely related sources for libxl-save-helper and the libxl test
    environment are being moved, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Sep 21 13:17:30 2020 +0100

    x86: Use LOCK ADD instead of MFENCE for smp_mb()
    
    MFENCE is overly heavyweight for SMP semantics on WB memory, because it also
    orders weaker cached writes, and flushes the WC buffers.
    
    This technique was used as an optimisation in Java[1], and later adopted by
    Linux[2] where it was measured to have a 60% performance improvement in VirtIO
    benchmarks.
    
    The stack is used because it is hot in the L1 cache, and a -4 offset is used
    to avoid creating a false data dependency on live data.
    
    For 64bit userspace, the Red Zone needs to be considered.  Use -32 to allow
    for a reasonable quantity of Red Zone data, but still have a 50% chance of
    hitting the same cache line as %rsp.
    
    Fix up the 32 bit definitions in HVMLoader and libxc to avoid a false data
    dependency.
    
    [1] https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
    [2] https://git.kernel.org/torvalds/c/450cbdd0125cfa5d7bbf9e2a6b6961cc48d29730
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 707eb41ae2dde4636261f631224c97e9c0b16b56
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:07 2020 +0100

    xl: implement documented '--force' option for block-detach
    
    The manpage for 'xl' documents an option to force a block device to be
    released even if the domain to which it is attached does not co-operate.
    The documentation also states that, if the force flag is not specified, the
    block-detach operation should fail.
    
    Currently the force option is not implemented and a non-forced block-detach
    will auto-force after a time-out of 10s. This patch implements the force
    option and also stops auto-forcing a non-forced block-detach by calling
    libxl_device_disk_safe_remove() rather than libxl_device_disk_remove(),
    allowing the operation to fail cleanly as per the documented behaviour.
    
    NOTE: The documentation is also adjusted since the normal positioning of
          options is before compulsory parameters. It is also noted that use of
          the --force option may lead to a guest crash.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 6df07f9fbe1e9b65a40183f79a6171200dc877dd
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:06 2020 +0100

    libxl: provide a mechanism to define a device 'safe remove' function...
    
    ... and use it to define libxl_device_disk_safe_remove().
    
    This patch builds on the existing macro magic by using a new value of the
    'force' field in libxl__ao_device.
    It is currently defined as an int but is used in a boolean manner, where
    1 means the operation is forced and 0 means it is not (though it is in fact
    forced after a 10s time-out). To add a third value, this patch re-defines 'force'
    as a struct type (libxl__force) with a single 'flag' field taking an
    enumerated value:
    
    LIBXL__FORCE_AUTO - corresponding to the old 0 value
    LIBXL__FORCE_ON   - corresponding to the old 1 value
    LIBXL__FORCE_OFF  - the new value
    
    The LIBXL_DEFINE_DEVICE_REMOVE() macro is then modified to define the
    libxl_device_<type>_remove() and libxl_device_<type>_destroy() functions,
    setting LIBXL__FORCE_AUTO and LIBXL__FORCE_ON (respectively) in the
    libxl__ao_device passed to libxl__initiate_device_generic_remove() and a
    new macro, LIBXL_DEFINE_DEVICE_SAFE_REMOVE(), is defined that sets
    LIBXL__FORCE_OFF instead. This macro is used to define the new
    libxl_device_disk_safe_remove() function.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom, Xenstore should set the maximum number of
    grants needed via a call to xengnttab_set_max_grants(), as otherwise
    the number of domains which can be supported is limited to 128 (the
    default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory it
    can describe is limited to 8TB due to the 32-bit bit index. Adjust the
    code to use 64-bit values as input. All callers already pass some form
    of 64-bit value, so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to meson, and the
    ./configure step with meson is more restrictive than it used to be:
    most installation paths must be within the prefix, otherwise we
    get this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    To work around the limitation, we set the prefix to the same one as
    for the rest of the Xen installation, and set all the other paths
    explicitly.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 20:20:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 20:20:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1588.4811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO53c-0006JT-Lg; Thu, 01 Oct 2020 20:20:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1588.4811; Thu, 01 Oct 2020 20:20:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO53c-0006JH-Ha; Thu, 01 Oct 2020 20:20:04 +0000
Received: by outflank-mailman (input) for mailman id 1588;
 Thu, 01 Oct 2020 20:20:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=thWI=DI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kO53a-0006DV-R6
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 20:20:02 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1382da9b-f7bf-4fbc-a3a0-7ab9508ebe45;
 Thu, 01 Oct 2020 20:20:02 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B5B632074B;
 Thu,  1 Oct 2020 20:20:00 +0000 (UTC)
X-Inumbo-ID: 1382da9b-f7bf-4fbc-a3a0-7ab9508ebe45
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601583601;
	bh=BCHPe3DSjUKxXWAheX0guF7GWZEJzxNcrQ928PIqqIs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=uDdRvGbY7QOfkwYAYHYI55dLASN3oBHhnHyagbjT41G6uQNpe59kGPAd2rTjIhpp6
	 5wkP6EZwXQLTGGtbCiHvS3eVk1dtoNrOucUtwc3IquN0ygtiRwz+m/MeymUTlCsVAP
	 Z7kzmTgnUmYRq01hSSSJeQsY3Zeq9ioeE5ga/PkY=
Date: Thu, 1 Oct 2020 13:19:59 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: George Dunlap <George.Dunlap@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    Ian Jackson <Ian.Jackson@citrix.com>, Wei Liu <wl@xen.org>, 
    Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Rich Persaud <persaur@gmail.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH RFC] docs: Add minimum version dependency policy
 document
In-Reply-To: <63FFD578-F249-404B-9829-687A42360A76@citrix.com>
Message-ID: <alpine.DEB.2.21.2010011319540.10908@sstabellini-ThinkPad-T480s>
References: <20200930125736.95203-1-george.dunlap@citrix.com> <alpine.DEB.2.21.2009301321431.10908@sstabellini-ThinkPad-T480s> <63FFD578-F249-404B-9829-687A42360A76@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1407231164-1601583600=:10908"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1407231164-1601583600=:10908
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 1 Oct 2020, George Dunlap wrote:
> > On Sep 30, 2020, at 9:23 PM, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Wed, 30 Sep 2020, George Dunlap wrote:
> >> Define specific criteria for determining which tools and
> >> libraries we aim to be compatible with.  This will clarify issues such as,
> >> "Should we continue to support Python 2.4" moving forward.
> >> 
> >> Note that CentOS 7 is set to stop receiving "normal" maintenance
> >> updates in "Q4 2020"; assuming that 4.15 is released after that, we
> >> only need to support CentOS / RHEL 8.
> >> 
> >> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> >> ---
> >> 
> >> CC: Ian Jackson <ian.jackson@citrix.com>
> >> CC: Wei Liu <wl@xen.org>
> >> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> >> CC: Jan Beulich <jbeulich@suse.com>
> >> CC: Stefano Stabellini <sstabellini@kernel.org>
> >> CC: Julien Grall <julien@xen.org>
> >> CC: Rich Persaud <persaur@gmail.com>
> >> CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
> >> ---
> >> docs/index.rst                        |  2 +
> >> docs/policies/dependency-versions.rst | 76 +++++++++++++++++++++++++++
> >> 2 files changed, 78 insertions(+)
> >> create mode 100644 docs/policies/dependency-versions.rst
> >> 
> >> diff --git a/docs/index.rst b/docs/index.rst
> >> index b75487a05d..ac175eacc8 100644
> >> --- a/docs/index.rst
> >> +++ b/docs/index.rst
> >> @@ -57,5 +57,7 @@ Miscellanea
> >> -----------
> >> 
> >> .. toctree::
> >> +   :maxdepth: 1
> >> 
> >> +   policies/dependency-versions
> >>    glossary
> >> diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
> >> new file mode 100644
> >> index 0000000000..d5eeb848d8
> >> --- /dev/null
> >> +++ b/docs/policies/dependency-versions.rst
> >> @@ -0,0 +1,76 @@
> >> +.. SPDX-License-Identifier: CC-BY-4.0
> >> +
> >> +Build and runtime dependencies
> >> +==============================
> >> +
> >> +Xen depends on other programs and libraries to build and to run.
> >> +Choosing a minimum version of these tools to support requires a careful
> >> +balance: Supporting older versions of these tools or libraries means
> >> +that Xen can compile on a wider variety of systems; but means that Xen
> >> +cannot take advantage of features available in newer versions.
> >> +Conversely, requiring newer versions means that Xen can take advantage
> >> +of newer features, but cannot work on as wide a variety of systems.
> >> +
> >> +Specific dependencies and versions for a given Xen release will be
> >> +listed in the toplevel README, and/or specified by the ``configure``
> >> +system.  This document lays out the principles by which those versions
> >> +should be chosen.
> >> +
> >> +The general principle is this:
> >> +
> >> +    Xen should build on currently-supported versions of major distros
> >> +    when released.
> >> +
> >> +"Currently-supported" means whatever that distro considers "full
> >> +support".  For instance, at the time of writing, CentOS 7 and 8 are
> >> +listed as being given "Full Updates", but CentOS 6 is listed as
> >> +"Maintenance updates"; under this criterion, we would try to ensure
> >> +that Xen could build on CentOS 7 and 8, but not on CentOS 6.
> >> +
> >> +Exceptions for specific distros or tools may be made when appropriate.
> >> +
> >> +One exception to this is compiler versions for the hypervisor.
> >> +Support for new instructions, and in particular support for new safety
> >> +features, may require a newer compiler than many distros support.
> >> +These will be specified in the README.
> >> +
> >> +Distros we consider when deciding minimum versions
> >> +--------------------------------------------------
> >> +
> >> +We currently aim to support Xen building and running on the following distributions:
> >> +Debian_,
> >> +Ubuntu_,
> >> +OpenSUSE_,
> >> +Arch Linux,
> >> +SLES_,
> >> +Yocto_,
> >> +CentOS_,
> >> +and RHEL_.
> > 
> > Alpine Linux should be in the list (consider its usage in container
> > environment.)
> 
> Sure, we can add that one in.  Although, we might consider requiring that distros on this list first be added to the Gitlab CI loop if possible.
> 
> > I am still on Alpine Linux 3.7, so I am sure that one works. Probably
> > other versions work too.
> 
> Right, but the question is, if someone posts a patch which causes it to no longer build on 3.7, would we reject it or accept it?
> 
> According to https://wiki.alpinelinux.org/wiki/Alpine_Linux:Releases, only 3.12 is currently receiving bug fixes; so by the criteria above, we would only reject a changeset if it caused a build regression for 3.12.
> 
> I would argue that this is the right approach: It doesn’t make sense for us to spend more effort keeping an old distro working than that community itself spends keeping it working.  The Ubuntu community spends effort keeping Ubuntu 16.04 in working shape, so it makes sense for us to spend effort making sure Xen 4.15 builds and runs on it.  The Alpine Linux community doesn’t promise to spend any effort to fix any more bugs in Alpine Linux 3.11, so it doesn’t make any sense for us to spend effort making sure Xen 4.15 runs on it.
> 
> Obviously if it builds on Ubuntu 16.04 there’s a pretty high probability that it will also build on Alpine Linux 3.4+ (released around the same time); we just don’t want to promise that.

OK


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 21:41:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 21:41:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1596.4835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO6KU-00058v-Ok; Thu, 01 Oct 2020 21:41:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1596.4835; Thu, 01 Oct 2020 21:41:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO6KU-00058o-Li; Thu, 01 Oct 2020 21:41:34 +0000
Received: by outflank-mailman (input) for mailman id 1596;
 Thu, 01 Oct 2020 21:41:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GG2I=DI=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kO6KT-00058j-7E
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 21:41:33 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fe964da6-9714-4b64-9351-a846f2468cde;
 Thu, 01 Oct 2020 21:41:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO6KQ-0000du-5A; Thu, 01 Oct 2020 21:41:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO6KP-0008EF-Uq; Thu, 01 Oct 2020 21:41:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kO6KP-0004PM-JM; Thu, 01 Oct 2020 21:41:29 +0000
X-Inumbo-ID: fe964da6-9714-4b64-9351-a846f2468cde
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=H/qvz2RZxPkiA6HV8o/g6Kt4rK4h/fHcL+zL4iuNF1c=; b=YdVpTkOSPlm5BVMvuD59gyBabI
	lKdsjlh0WWOSp2QWWMyst9q91fnLgeeZq27TPgLa9HC8W/tjbR2Y/fAHIX7ZOtzfz2+WY8sn3orPm
	XQpDrWa6RhQQLXUJbA3cDsoBjpMajF/zLqfJuFXEtRrGdAGaS563Yr+RdhDpEbQOoOm4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155262-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155262: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 01 Oct 2020 21:41:29 +0000

flight 155262 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155262/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    1 days
Failing since        155144  2020-09-30 16:01:24 Z    1 days    9 attempts
Testing same since   155246  2020-10-01 15:01:28 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: hide struct elf_dom_parms layout from users
    
    Don't include struct elf_dom_parms in struct xc_dom_image, but rather
    use a pointer to reference it. Together with adding accessor functions
    for the externally needed elements, this makes it possible to drop the
    inclusion of the Xen private header xen/libelf/libelf.h from xenguest.h.
    
    Fixes: 7e0165c19387 ("tools/libxc: untangle libxenctrl from libxenguest")
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7f186b1996dea2992c8ed3606b38d73222293c37
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: make xc_dom_loader interface private to libxenguest
    
    The pluggable kernel loader interface is only used internally by
    libxenguest, so make it private. This removes a dependency on the Xen
    internal header xen/libelf/libelf.h from xenguest.h.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 77a09716f251ac0e35ddd4bfda8f35fe639f432b
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libs: merge xenctrl_dom.h into xenguest.h
    
    Today xenctrl_dom.h is part of libxenctrl as it is included by
    xc_private.c. This seems not to be needed, so merge xenctrl_dom.h into
    xenguest.h where its contents really should be.
    
    Replace all #includes of xenctrl_dom.h by xenguest.h ones or drop them
    if xenguest.h is already included.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 3ae0d316f01c08903a96f6b5b39275c67b823264
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: move libxlutil to tools/libs/util
    
    Move the libxlutil source to tools/libs/util and delete tools/libxl.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit b22b9b9a1df865e1dd9e4f6950ae6be7081be010
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libs: add option for library names not starting with libxen
    
    libxlutil doesn't follow the standard name pattern of all other Xen
    libraries, so add another make variable which can be used to allow
    other names.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bc01c73018689e066e06515b26181d463a3f2a40
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: rename global libxlutil make variables
    
    Rename *_libxlutil make variables to *_libxenutil in order to avoid
    nasty indirections when moving libxlutil under the tools/libs
    infrastructure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 41aea82de2b581c61482aeddab151ecf3b1bca25
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libxl: move libxenlight to tools/libs/light
    
    Carve out all libxenlight related sources and move them to
    tools/libs/light in order to use the generic library build environment.
    
    The closely related sources for libxl-save-helper and the libxl test
    environment are being moved, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Sep 21 13:17:30 2020 +0100

    x86: Use LOCK ADD instead of MFENCE for smp_mb()
    
    MFENCE is overly heavyweight for SMP semantics on WB memory, because it also
    orders weaker cached writes, and flushes the WC buffers.
    
    This technique was used as an optimisation in Java[1], and later adopted by
    Linux[2] where it was measured to have a 60% performance improvement in VirtIO
    benchmarks.
    
    The stack is used because it is hot in the L1 cache, and a -4 offset is used
    to avoid creating a false data dependency on live data.
    
    For 64bit userspace, the Red Zone needs to be considered.  Use -32 to allow
    for a reasonable quantity of Red Zone data, but still have a 50% chance of
    hitting the same cache line as %rsp.
    
    Fix up the 32 bit definitions in HVMLoader and libxc to avoid a false data
    dependency.
    
    [1] https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
    [2] https://git.kernel.org/torvalds/c/450cbdd0125cfa5d7bbf9e2a6b6961cc48d29730
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 707eb41ae2dde4636261f631224c97e9c0b16b56
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:07 2020 +0100

    xl: implement documented '--force' option for block-detach
    
    The manpage for 'xl' documents an option to force a block device to be
    released even if the domain to which it is attached does not co-operate.
    The documentation also states that, if the force flag is not specified, the
    block-detach operation should fail.
    
    Currently the force option is not implemented and a non-forced block-detach
    will auto-force after a time-out of 10s. This patch implements the force
    option and also stops auto-forcing a non-forced block-detach by calling
    libxl_device_disk_safe_remove() rather than libxl_device_disk_remove(),
    allowing the operation to fail cleanly as per the documented behaviour.
    
    NOTE: The documentation is also adjusted since the normal positioning of
          options is before compulsory parameters. It is also noted that use of
          the --force option may lead to a guest crash.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 6df07f9fbe1e9b65a40183f79a6171200dc877dd
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:06 2020 +0100

    libxl: provide a mechanism to define a device 'safe remove' function...
    
    ... and use it to define libxl_device_disk_safe_remove().
    
    This patch builds on the existing macro magic by using a new value of the
    'force' field in libxl__ao_device.
    It is currently defined as an int but is used in a boolean manner where
    1 means the operation is forced and 0 means it is not (but is actually forced
    after a 10s time-out). In adding a third value, this patch re-defines 'force'
    as a struct type (libxl__force) with a single 'flag' field taking an
    enumerated value:
    
    LIBXL__FORCE_AUTO - corresponding to the old 0 value
    LIBXL__FORCE_ON   - corresponding to the old 1 value
    LIBXL__FORCE_OFF  - the new value
    
    The LIBXL_DEFINE_DEVICE_REMOVE() macro is then modified to define the
    libxl_device_<type>_remove() and libxl_device_<type>_destroy() functions,
    setting LIBXL__FORCE_AUTO and LIBXL__FORCE_ON (respectively) in the
    libxl__ao_device passed to libxl__initiate_device_generic_remove() and a
    new macro, LIBXL_DEFINE_DEVICE_SAFE_REMOVE(), is defined that sets
    LIBXL__FORCE_OFF instead. This macro is used to define the new
    libxl_device_disk_safe_remove() function.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom, Xenstore should set the maximum number of
    grants needed via a call to xengnttab_set_max_grants(), as otherwise
    only 128 domains can be supported (the default number of grants
    supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory is
    limited to 8TB due to the 32bit value. Adjust the code to use 64bit
    values as input. All callers already use some form of 64bit as input,
    so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to meson, and the
    ./configure step with meson is more restrictive than it used to be:
    most installation paths must be within the prefix, otherwise we
    get this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    To work around the limitation, we will set the prefix to the same one
    as for the rest of the Xen installation, and set all the other paths
    explicitly.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 23:52:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 23:52:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1611.4872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO8NK-0007mJ-MC; Thu, 01 Oct 2020 23:52:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1611.4872; Thu, 01 Oct 2020 23:52:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO8NK-0007mC-J7; Thu, 01 Oct 2020 23:52:38 +0000
Received: by outflank-mailman (input) for mailman id 1611;
 Thu, 01 Oct 2020 23:52:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=thWI=DI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kO8NI-0007m7-F4
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 23:52:36 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1cb19832-a094-4723-a08b-824f9fef119c;
 Thu, 01 Oct 2020 23:52:35 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 87E3A20888;
 Thu,  1 Oct 2020 23:52:34 +0000 (UTC)
X-Inumbo-ID: 1cb19832-a094-4723-a08b-824f9fef119c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601596354;
	bh=IqxFP7AJA3oMC+9OwVQQ7v+lPsVFwhfYf/dialoN8CM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GiWdc6HVhFQrXhMyi079bBR0/aiBDPVVWeIc6Zi/6dp3TXoYkErTMr9lVNRjelFe9
	 7kAxrq88Gvp/+OmGX8S746TTe1y0ahd4giYg/rKk/+F5iHEuzT6fI5HrFevF1jHp70
	 Mrl9t6oI5znwNzILIGPpqnLcrEHa4284por/MNtY=
Date: Thu, 1 Oct 2020 16:52:33 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Laurentiu Tudor <laurentiu.tudor@nxp.com>
cc: sstabellini@kernel.org, julien@xen.org, xen-devel@lists.xenproject.org, 
    Volodymyr_Babchuk@epam.com, will@kernel.org, diana.craciun@nxp.com, 
    anda-alexandra.dorneanu@nxp.com
Subject: Re: [PATCH] arm,smmu: match start level of page table walk with
 P2M
In-Reply-To: <20200928135157.3170-1-laurentiu.tudor@nxp.com>
Message-ID: <alpine.DEB.2.21.2010011647020.10908@sstabellini-ThinkPad-T480s>
References: <20200928135157.3170-1-laurentiu.tudor@nxp.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 28 Sep 2020, laurentiu.tudor@nxp.com wrote:
> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Don't hardcode the lookup start level of the page table walk to 1
> and instead match the one used in P2M. This should fix scenarios
> involving SMMU where the start level is different from 1.
> 
> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>

Thank you for the patch, I think it is correct, except that smmu.c today
can be enabled even on arm32 builds, where p2m_root_level would be
uninitialized.

We need to initialize p2m_root_level at the beginning of
setup_virt_paging under the #ifdef CONFIG_ARM_32. We can statically
initialize it to 1 in that case. Or...


> ---
>  xen/arch/arm/p2m.c                 | 2 +-
>  xen/drivers/passthrough/arm/smmu.c | 2 +-
>  xen/include/asm-arm/p2m.h          | 1 +
>  3 files changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index ce59f2b503..0181b09dc0 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -18,7 +18,6 @@
>  
>  #ifdef CONFIG_ARM_64
>  static unsigned int __read_mostly p2m_root_order;
> -static unsigned int __read_mostly p2m_root_level;
>  #define P2M_ROOT_ORDER    p2m_root_order
>  #define P2M_ROOT_LEVEL p2m_root_level
>  static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
> @@ -39,6 +38,7 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>   * restricted by external entity (e.g. IOMMU).
>   */
>  unsigned int __read_mostly p2m_ipa_bits = 64;
> +unsigned int __read_mostly p2m_root_level;

... we could p2m_root_level = 1; here


>  /* Helpers to lookup the properties of each level */
>  static const paddr_t level_masks[] =
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 94662a8501..85709a136f 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1152,7 +1152,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>  	      (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
>  
>  	if (!stage1)
> -		reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
> +		reg |= (2 - p2m_root_level) << TTBCR_SL0_SHIFT;
>  
>  	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
>  
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 5fdb6e8183..97b5eada2b 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -12,6 +12,7 @@
>  
>  /* Holds the bit size of IPAs in p2m tables.  */
>  extern unsigned int p2m_ipa_bits;
> +extern unsigned int p2m_root_level;
>  
>  struct domain;
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Oct 01 23:54:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 01 Oct 2020 23:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1612.4884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO8On-0007tb-1n; Thu, 01 Oct 2020 23:54:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1612.4884; Thu, 01 Oct 2020 23:54:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO8Om-0007tU-V7; Thu, 01 Oct 2020 23:54:08 +0000
Received: by outflank-mailman (input) for mailman id 1612;
 Thu, 01 Oct 2020 23:54:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=44uA=DI=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kO8Om-0007tO-Il
 for xen-devel@lists.xenproject.org; Thu, 01 Oct 2020 23:54:08 +0000
Received: from mail-qv1-xf43.google.com (unknown [2607:f8b0:4864:20::f43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ac46d3f-093b-4a95-a847-b7afba73396f;
 Thu, 01 Oct 2020 23:54:07 +0000 (UTC)
Received: by mail-qv1-xf43.google.com with SMTP id p15so428059qvk.5
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 16:54:07 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:d1f7:ee69:1dfe:5706])
 by smtp.gmail.com with ESMTPSA id t26sm7338124qkt.29.2020.10.01.16.54.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 01 Oct 2020 16:54:05 -0700 (PDT)
X-Inumbo-ID: 3ac46d3f-093b-4a95-a847-b7afba73396f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=yFF6fvVarLXwIeCYKkEhI6nrhgmMv9+Zhw6bLLwScMI=;
        b=j7K9mQwWIquOybazSniet4mYmJelUM2Z9+ggruaaKw3DSeYhMse9D6zxuDkfqzeQZ0
         AHEKZsgiDZox17KvmWmWaE+ZIT65MGemT/CKYQHuXzMNb3AAUo0VSUqFsq00Nxd9KlyH
         6XwhpZ7RFFrzvQNX+wH4yeg49Pw6cXTiTtTwbwVRs6RIqijGFMrTlJRsrH87AvuSn6UY
         NyM2sFJTIkOH+sQTmEC0Ofv/HTkUm+Rzz9/CCKNXXuKCAmf1p6c4LlvKIKmGJygyjl3w
         /q4vOxIM1zwMT/jpaYmhUYG9q54nTUNqPA9yesqqvC0cTdF6ie1wozKF/WdafDQxFRDu
         Il3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=yFF6fvVarLXwIeCYKkEhI6nrhgmMv9+Zhw6bLLwScMI=;
        b=JyMFtjQ5ylocwTVflsVhyBhiszmkU3DwoKdsfSP2X+RtzfS1iFpJkYOXNogHnjewQN
         UbtLGxzi+GqkcljX/NUFM7OMeqcib5Si366fHrVrsGVEK29dpE0goFZ+al+rDQ/pCQ7+
         VoN/zPY/n0yQqfp3Ev3gePo9hhMaBSoxVRXoRT0KPdcpPWaejrzfZptIuJfhK1XgcaTT
         J78loxVL+vl4/xE5BfrCdTB0nqGzk9o4KgB1DbDhfb/W28BP4nhAI4qARzU+rT3q7q6Q
         jYfdUW3Ivym58q2nFrngvcJ0AQPin2hZS6/Mel87nrv/VF6zweKyvaOeJYLkkjJUQxZe
         hb4A==
X-Gm-Message-State: AOAM532gwhaMAK3tZxZu7VCJxbu/SC6xtQPu/OFI+kcQfWyzRNXdOhWt
	oWZheb2FTHgT52iKg38CDNXrXzcPews=
X-Google-Smtp-Source: ABdhPJwEcoQIPfytlq+Ki8ylOdbQBMIdEsMnTEQCOD0g8ih9DEOXf2KpQO/nlAAiuSMjrvNa+dI14Q==
X-Received: by 2002:a0c:b3dd:: with SMTP id b29mr10280317qvf.59.1601596446938;
        Thu, 01 Oct 2020 16:54:06 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] libxl: only query VNC when enabled
Date: Thu,  1 Oct 2020 19:53:37 -0400
Message-Id: <20201001235337.83948-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

QEMU without VNC support (configure --disable-vnc) will return an error
when VNC is queried over QMP since it does not recognize the QMP
command.  This will cause libxl to fail starting the domain even if VNC
is not enabled.  Therefore only query QEMU for VNC support when using
VNC, so a VNC-less QEMU will function in this configuration.

'goto out' jumps to the call to device_model_postconfig_done(), the
final callback after the chain of VNC queries.  This bypasses all the
QMP VNC queries.
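
For illustration only (not part of the patch): a QEMU built with
--disable-vnc rejects the query with a CommandNotFound error over QMP.
The exchange looks roughly like the following; the exact "desc" string
may vary by QEMU version:

```json
{"execute": "query-vnc"}
{"error": {"class": "CommandNotFound",
           "desc": "The command query-vnc has not been found"}}
```

libxl treats such an error reply as a failure of the postconfig chain,
which is why the query must be skipped entirely when VNC is unused.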

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_dm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index a944181781..d1ff35dda3 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -3140,6 +3140,7 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
 {
     EGC_GC;
     libxl__dm_spawn_state *dmss = CONTAINER_OF(qmp, *dmss, qmp);
+    const libxl_vnc_info *vnc = libxl__dm_vnc(dmss->guest_config);
     const libxl__json_object *item = NULL;
     const libxl__json_object *o = NULL;
     int i = 0;
@@ -3197,6 +3198,9 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
         if (rc) goto out;
     }
 
+    if (!vnc)
+        goto out;
+
     qmp->callback = device_model_postconfig_vnc;
     rc = libxl__ev_qmp_send(egc, qmp, "query-vnc", NULL);
     if (rc) goto out;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 01:02:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 01:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1623.4912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO9Sm-0000ar-Ma; Fri, 02 Oct 2020 01:02:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1623.4912; Fri, 02 Oct 2020 01:02:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO9Sm-0000Zz-JL; Fri, 02 Oct 2020 01:02:20 +0000
Received: by outflank-mailman (input) for mailman id 1623;
 Fri, 02 Oct 2020 01:02:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kO9Sl-00008p-EF
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 01:02:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c30bfcd1-bb7c-40aa-86cc-99ad4364afeb;
 Fri, 02 Oct 2020 01:02:16 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO9Si-0001Yw-26; Fri, 02 Oct 2020 01:02:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO9Sh-0003UY-O6; Fri, 02 Oct 2020 01:02:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kO9Sh-0001k8-NZ; Fri, 02 Oct 2020 01:02:15 +0000
X-Inumbo-ID: c30bfcd1-bb7c-40aa-86cc-99ad4364afeb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FwrCQc/ZAclklDQl5I3TJlWjg+lXXgVa+1YC6u1rXN4=; b=wmGB4AIA8LiVZCtXZ+QXIrIOa6
	tOaE1k530MqxfNzlARgfPibmFMdaO18i07QPYTOOKOXrCYxHkr0SJ+elkrZu6QBnY9POkI4bQ571I
	54uAG6MsOsq/rBbJI1krvhy6zHKsiOAJA3OfDDjo+C3rFGm4iKRhxsvcG1/Ba4nCHN0U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155140-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 155140: regressions - FAIL
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=3263f257caf8e4465e9dca84a88fa0e68be74280
X-Osstest-Versions-That:
    xen=ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 01:02:15 +0000

flight 155140 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155140/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 151714
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151714
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151714
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 151714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  3263f257caf8e4465e9dca84a88fa0e68be74280
baseline version:
 xen                  ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d

Last test of basis   151714  2020-07-07 13:35:55 Z   86 days
Testing same since   154619  2020-09-22 15:37:30 Z    9 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 359 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 01:05:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 01:05:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1626.4925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO9W3-0000vT-E4; Fri, 02 Oct 2020 01:05:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1626.4925; Fri, 02 Oct 2020 01:05:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kO9W3-0000vM-At; Fri, 02 Oct 2020 01:05:43 +0000
Received: by outflank-mailman (input) for mailman id 1626;
 Fri, 02 Oct 2020 01:05:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kO9W1-0000vG-Q7
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 01:05:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 29c2bc06-63f8-47b4-a7b0-cd5627762984;
 Fri, 02 Oct 2020 01:05:39 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO9Vy-000857-LT; Fri, 02 Oct 2020 01:05:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kO9Vy-0003dl-Cm; Fri, 02 Oct 2020 01:05:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kO9Vy-0006PO-CF; Fri, 02 Oct 2020 01:05:38 +0000
X-Inumbo-ID: 29c2bc06-63f8-47b4-a7b0-cd5627762984
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uIeOQp+jjNH21o+zUNDm73H+4nAD4aArkvsz+X8misE=; b=luTBIfnl5vbFVT8Yw7v9XG6Hl/
	Ig/VgHPEmDrZ9fUmj9bSwETAYd0g4+0VcQXjk+cdw/pH/PnQP+/keNc5+WTKJIUGdaBCDwcKp8V62
	SIVu+N5iBo9We0eLYPljKZy0yl27ZjBABV9t8JaVwrcEfo9jFnwtAyxGFk0t3nvZtR44=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155271-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155271: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 01:05:38 +0000

flight 155271 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155271/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    1 days
Failing since        155144  2020-09-30 16:01:24 Z    1 days   10 attempts
Testing same since   155246  2020-10-01 15:01:28 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: hide struct elf_dom_parms layout from users
    
    Don't include struct elf_dom_parms in struct xc_dom_image, but rather
    use a pointer to reference it. Together with adding accessor functions
    for the externally needed elements, this makes it possible to drop the
    include of the Xen private header xen/libelf/libelf.h from xenguest.h.
    
    Fixes: 7e0165c19387 ("tools/libxc: untangle libxenctrl from libxenguest")
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7f186b1996dea2992c8ed3606b38d73222293c37
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: make xc_dom_loader interface private to libxenguest
    
    The pluggable kernel loader interface is used only internally by
    libxenguest, so make it private. This removes a dependency on the
    Xen-internal header xen/libelf/libelf.h from xenguest.h.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 77a09716f251ac0e35ddd4bfda8f35fe639f432b
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libs: merge xenctrl_dom.h into xenguest.h
    
    Today xenctrl_dom.h is part of libxenctrl as it is included by
    xc_private.c. This seems not to be needed, so merge xenctrl_dom.h into
    xenguest.h where its contents really should be.
    
    Replace all #includes of xenctrl_dom.h with xenguest.h, or drop them
    if xenguest.h is already included.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 3ae0d316f01c08903a96f6b5b39275c67b823264
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: move libxlutil to tools/libs/util
    
    Move the libxlutil source to tools/libs/util and delete tools/libxl.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit b22b9b9a1df865e1dd9e4f6950ae6be7081be010
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libs: add option for library names not starting with libxen
    
    libxlutil doesn't follow the standard name pattern of all other Xen
    libraries, so add another make variable which can be used to allow
    other names.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bc01c73018689e066e06515b26181d463a3f2a40
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: rename global libxlutil make variables
    
    Rename *_libxlutil make variables to *_libxenutil in order to avoid
    nasty indirections when moving libxlutil under the tools/libs
    infrastructure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 41aea82de2b581c61482aeddab151ecf3b1bca25
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libxl: move libxenlight to tools/libs/light
    
    Carve out all libxenlight related sources and move them to
    tools/libs/light in order to use the generic library build environment.
    
    The closely related sources for libxl-save-helper and the libxl test
    environment are being moved, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Sep 21 13:17:30 2020 +0100

    x86: Use LOCK ADD instead of MFENCE for smp_mb()
    
    MFENCE is overly heavyweight for SMP semantics on WB memory, because it also
    orders weaker cached writes, and flushes the WC buffers.
    
    This technique was used as an optimisation in Java[1], and later adopted by
    Linux[2] where it was measured to have a 60% performance improvement in VirtIO
    benchmarks.
    
    The stack is used because it is hot in the L1 cache, and a -4 offset is used
    to avoid creating a false data dependency on live data.
    
    For 64bit userspace, the Red Zone needs to be considered.  Use -32 to allow
    for a reasonable quantity of Red Zone data, but still have a 50% chance of
    hitting the same cache line as %rsp.
    
    Fix up the 32 bit definitions in HVMLoader and libxc to avoid a false data
    dependency.
    
    [1] https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
    [2] https://git.kernel.org/torvalds/c/450cbdd0125cfa5d7bbf9e2a6b6961cc48d29730
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 707eb41ae2dde4636261f631224c97e9c0b16b56
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:07 2020 +0100

    xl: implement documented '--force' option for block-detach
    
    The manpage for 'xl' documents an option to force a block device to be
    released even if the domain to which it is attached does not co-operate.
    The documentation also states that, if the force flag is not specified, the
    block-detach operation should fail.
    
    Currently the force option is not implemented and a non-forced block-detach
    will auto-force after a time-out of 10s. This patch implements the force
    option and also stops auto-forcing a non-forced block-detach by calling
    libxl_device_disk_safe_remove() rather than libxl_device_disk_remove(),
    allowing the operation to fail cleanly as per the documented behaviour.
    
    NOTE: The documentation is also adjusted since the normal positioning of
          options is before compulsory parameters. It is also noted that use of
          the --force option may lead to a guest crash.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 6df07f9fbe1e9b65a40183f79a6171200dc877dd
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:06 2020 +0100

    libxl: provide a mechanism to define a device 'safe remove' function...
    
    ... and use it to define libxl_device_disk_safe_remove().
    
    This patch builds on the existing macro magic by using a new value of
    the 'force' field in libxl__ao_device.
    It is currently defined as an int but is used in a boolean manner where
    1 means the operation is forced and 0 means it is not (but is actually forced
    after a 10s time-out). In adding a third value, this patch re-defines 'force'
    as a struct type (libxl__force) with a single 'flag' field taking an
    enumerated value:
    
    LIBXL__FORCE_AUTO - corresponding to the old 0 value
    LIBXL__FORCE_ON   - corresponding to the old 1 value
    LIBXL__FORCE_OFF  - the new value
    
    The LIBXL_DEFINE_DEVICE_REMOVE() macro is then modified to define the
    libxl_device_<type>_remove() and libxl_device_<type>_destroy() functions,
    setting LIBXL__FORCE_AUTO and LIBXL__FORCE_ON (respectively) in the
    libxl__ao_device passed to libxl__initiate_device_generic_remove() and a
    new macro, LIBXL_DEFINE_DEVICE_SAFE_REMOVE(), is defined that sets
    LIBXL__FORCE_OFF instead. This macro is used to define the new
    libxl_device_disk_safe_remove() function.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom, Xenstore should set the maximum number of
    grants needed via a call to xengnttab_set_max_grants(), as otherwise
    the number of domains which can be supported will be limited to 128
    (the default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory is
    limited to 8TB due to the 32bit value. Adjust the code to use 64bit
    values as input. All callers already use some form of 64bit as input,
    so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to meson, and the
    ./configure step with meson is more restrictive than it used to be:
    most installation paths must be within the prefix, otherwise we get
    this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    To work around the limitation, we set the prefix to the same one as
    for the rest of the Xen installation, and set all the other paths
    explicitly.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 02:22:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 02:22:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1640.4960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOAi4-00080y-MX; Fri, 02 Oct 2020 02:22:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1640.4960; Fri, 02 Oct 2020 02:22:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOAi4-00080r-H4; Fri, 02 Oct 2020 02:22:12 +0000
Received: by outflank-mailman (input) for mailman id 1640;
 Fri, 02 Oct 2020 02:22:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOAi2-00080m-S2
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 02:22:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb8106cd-e5a2-4e5e-b3a6-d6598431ca70;
 Fri, 02 Oct 2020 02:22:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOAhz-0001dW-IE; Fri, 02 Oct 2020 02:22:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOAhz-0007qo-97; Fri, 02 Oct 2020 02:22:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOAhz-0006XB-8d; Fri, 02 Oct 2020 02:22:07 +0000
X-Inumbo-ID: eb8106cd-e5a2-4e5e-b3a6-d6598431ca70
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=d0Sk779KNn0V21V51JW5B+TZ7vkfiqfCDqtdbNIyHtY=; b=jn9OEuGmJZjeuYvQsgzsrZ4wuj
	fJzdK9V5JS95bxCNi2OwMTpkHAA+69ahnZ03OJbXQh1gxZaoGJ/Y+qlJyD5DOoKcKoO3GYk4DST2x
	Zu0DYpaK3LiJrgSaZWhofJtLLkqT3oS4R4+kfaDw4C5QNlo3L8UunTrCg5N2jGxTMScE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete test-amd64-amd64-libvirt
Message-Id: <E1kOAhz-0006XB-8d@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 02:22:07 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job test-amd64-amd64-libvirt
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
  Bug not present: 50a5215f30e964a6f16165ab57925ca39f31a849
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155282/


  commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Tue Sep 29 14:48:52 2020 +0100
  
      tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
      
      Nested Virt is the final special case in legacy CPUID handling.  Pass the
      (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
      the semantic dependency on HVM_PARAM_NESTEDHVM.
      
      No functional change.
      
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Acked-by: Wei Liu <wl@xen.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/test-amd64-amd64-libvirt.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/test-amd64-amd64-libvirt.guest-start --summary-out=tmp/155282.bisection-summary --basis-template=155128 --blessings=real,real-bisect xen-unstable-smoke test-amd64-amd64-libvirt guest-start
Searching for failure / basis pass:
 155271 fail [host=huxelrebe1] / 155128 [host=godello0] 155089 [host=elbling1] 155080 [host=huxelrebe0] 155048 [host=chardonnay1] 155035 [host=huxelrebe0] 154728 [host=elbling0] 154637 [host=fiano0] 154615 [host=pinot0] 154609 [host=pinot1] 154581 [host=huxelrebe0] 154569 ok.
Failure / basis pass flights: 155271 / 154569
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6-bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
Loaded 5001 nodes in revision graph
Searching for test results:
 154569 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6
 154581 [host=huxelrebe0]
 154609 [host=pinot1]
 154615 [host=pinot0]
 154637 [host=fiano0]
 154728 [host=elbling0]
 155022 []
 155035 [host=huxelrebe0]
 155048 [host=chardonnay1]
 155080 [host=huxelrebe0]
 155089 [host=elbling1]
 155128 [host=godello0]
 155144 [host=pinot0]
 155156 [host=pinot0]
 155162 [host=pinot0]
 155167 [host=pinot0]
 155157 [host=fiano0]
 155174 [host=pinot0]
 155176 [host=godello1]
 155180 [host=fiano0]
 155189 [host=godello1]
 155192 [host=godello1]
 155194 [host=godello1]
 155187 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 11852c7bb070a18c3708b4c001772a23e7d4fc27
 155196 [host=godello1]
 155198 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6
 155203 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 11852c7bb070a18c3708b4c001772a23e7d4fc27
 155206 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5bcac985498ed83d89666959175ca9c9ed561ae1
 155200 [host=chardonnay0]
 155209 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b18b8801d0457881282b9dde46ca1100bd5e6476
 155214 [host=chardonnay0]
 155213 [host=albana0]
 155216 [host=chardonnay0]
 155220 [host=albana0]
 155228 [host=albana0]
 155225 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
 155232 [host=albana0]
 155248 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 e301a706eb679a0246cf98324958deb3781c886a
 155253 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 0d8d289af7a679c028462c4ed5d98586f9ef9648
 155257 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 50a5215f30e964a6f16165ab57925ca39f31a849
 155246 [host=huxelrebe0]
 155260 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
 155264 [host=huxelrebe0]
 155262 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
 155269 [host=huxelrebe0]
 155273 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 50a5215f30e964a6f16165ab57925ca39f31a849
 155275 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
 155271 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
 155279 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 50a5215f30e964a6f16165ab57925ca39f31a849
 155282 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Searching for interesting versions
 Result found: flight 154569 (pass), for basis pass
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 50a5215f30e964a6f16165ab57925ca39f31a849, results HASH(0x56452d21d058) HASH(0x56452cc6a8c8) HASH(0x56452cc78998) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef8\
 28bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 0d8d289af7a679c028462c4ed5d98586f9ef9648, results HASH(0x56452d236218) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98\
 c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 e301a706eb679a0246cf98324958deb3781c886a, results HASH(0x56452d223518) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b18b8801d0457881282b9dde46ca1100bd5e6476, results HASH(0x56452d224418) For basis\
  failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5bcac985498ed83d89666959175ca9c9ed561ae1, results HASH(0x56452d227290) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b\
 1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6, results HASH(0x56452d228818) HASH(0x56452d234390) Result found: flight 155187 (fail), for basis failure (at ancestor ~386)
 Repro found: flight 155198 (pass), for basis pass
 Repro found: flight 155262 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 50a5215f30e964a6f16165ab57925ca39f31a849
No revisions left to test, checking graph state.
 Result found: flight 155257 (pass), for last pass
 Result found: flight 155260 (fail), for first failure
 Repro found: flight 155273 (pass), for last pass
 Repro found: flight 155275 (fail), for first failure
 Repro found: flight 155279 (pass), for last pass
 Repro found: flight 155282 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
  Bug not present: 50a5215f30e964a6f16165ab57925ca39f31a849
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155282/


  commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
  Author: Andrew Cooper <andrew.cooper3@citrix.com>
  Date:   Tue Sep 29 14:48:52 2020 +0100
  
      tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
      
      Nested Virt is the final special case in legacy CPUID handling.  Pass the
      (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
      the semantic dependency on HVM_PARAM_NESTEDHVM.
      
      No functional change.
      
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
      Acked-by: Wei Liu <wl@xen.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/test-amd64-amd64-libvirt.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155282: tolerable FAIL

flight 155282 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155282/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start             fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 02:26:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 02:26:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1644.4973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOAm7-0008EF-DH; Fri, 02 Oct 2020 02:26:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1644.4973; Fri, 02 Oct 2020 02:26:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOAm7-0008E8-A2; Fri, 02 Oct 2020 02:26:23 +0000
Received: by outflank-mailman (input) for mailman id 1644;
 Fri, 02 Oct 2020 02:26:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOAm5-0008E3-Sx
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 02:26:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e0c6b28-0ef3-4f54-aa23-517928d7e308;
 Fri, 02 Oct 2020 02:26:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOAm3-0001hX-Ay; Fri, 02 Oct 2020 02:26:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOAm3-0008Af-3k; Fri, 02 Oct 2020 02:26:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOAm3-0004vb-3E; Fri, 02 Oct 2020 02:26:19 +0000
X-Inumbo-ID: 3e0c6b28-0ef3-4f54-aa23-517928d7e308
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dl02qGNPOd+4YTTF8QQ+b6xWJRi4ARYTbUQESlVbCQs=; b=poejJgnYprPZjZ65uLXqZCrFfl
	g7K8D9uovAsbFKBMY4wa50vCJigo8eBtzZBaVM912ssSvichny1emDscfQvI/d1LSeFPhxCuED9sa
	CfFcHTPuO4SU1DONRri53mc9mYCFN9G4xyfXZQid7hSiTuV/jdgoN1GKUwcv69VvLvBI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155152-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 155152: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=0186e76a62f7409804c2e4785d5a11e7f82a7c52
X-Osstest-Versions-That:
    xen=0446e3db13671032b05d19f6117d902f5c5c76fa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 02:26:19 +0000

flight 155152 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155152/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154601
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154601
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154601
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154601
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154601

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    17 guest-localmigrate/x10       fail  like 154601
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  0186e76a62f7409804c2e4785d5a11e7f82a7c52
baseline version:
 xen                  0446e3db13671032b05d19f6117d902f5c5c76fa

Last test of basis   154601  2020-09-22 02:37:00 Z    9 days
Testing same since   154622  2020-09-22 16:36:57 Z    9 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 473 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 04:28:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 04:28:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1657.5008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOCfk-0001lA-RY; Fri, 02 Oct 2020 04:27:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1657.5008; Fri, 02 Oct 2020 04:27:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOCfk-0001l3-NY; Fri, 02 Oct 2020 04:27:56 +0000
Received: by outflank-mailman (input) for mailman id 1657;
 Fri, 02 Oct 2020 04:27:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOCfj-0001ky-To
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 04:27:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cde595ca-9253-4f39-9d72-0ab85d759a4c;
 Fri, 02 Oct 2020 04:27:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1239AAC19;
 Fri,  2 Oct 2020 04:27:54 +0000 (UTC)
X-Inumbo-ID: cde595ca-9253-4f39-9d72-0ab85d759a4c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601612874;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=n5hLMCuQD4OyqugOBeWO8nJ7I2YjMYO1ImRm1jNVsq8=;
	b=gbDYFv1dLPC/dlkm4uHXrs07qez51pZfSyyED8yvIFcWPVhNQdr2Ar9aXBCHOZw/5dHgo3
	O0eMx0tHQXL/f1svEH9UfChGq1dHbPOom8jx2MxmGYoIOTqXIyD/yKDofF19i4cjAzdlz0
	DuvCEyjpuPEm9wCfjcfI6fAxf8G9ugY=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 1239AAC19;
	Fri,  2 Oct 2020 04:27:54 +0000 (UTC)
Subject: Re: Ping: [PATCH 0/6] tools/include: adjustments to the population of
 xen/
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <2a9f86aa-9104-8a45-cd21-72acd693f924@suse.com>
 <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
 <9F53B61A-5A50-46DD-BF5B-75F48C91FCFC@arm.com>
 <6B9403A3-66DC-4A69-8006-096420649768@arm.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <dea68b56-990d-a13f-a2c4-171e67eaaf73@suse.com>
Date: Fri, 2 Oct 2020 06:27:53 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <6B9403A3-66DC-4A69-8006-096420649768@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 01.10.20 18:43, Bertrand Marquis wrote:
> Hi,
> 
>> On 1 Oct 2020, at 17:29, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>
>> Hi Jan,
>>
>>> On 1 Oct 2020, at 17:03, Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 10.09.2020 14:09, Jan Beulich wrote:
>>>> While looking at what it would take to move around libelf/
>>>> in the hypervisor subtree, I've run into this rule, which I
>>>> think can do with a few improvements and some simplification.
>>>>
>>>> 1: adjust population of acpi/
>>>> 2: fix (drop) dependencies of when to populate xen/
>>>> 3: adjust population of public headers into xen/
>>>> 4: properly install Arm public headers
>>>> 5: adjust x86-specific population of xen/
>>>> 6: drop remaining -f from ln invocations
>>>
>>> May I ask for an ack or otherwise here?
>>
>> This is going the right way but with this series (on top of current staging
>> status), I have a compilation error in Yocto while compiling qemu:
>> In file included from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenguest.h:25,
>> |                  from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/hw/i386/xen/xen_platform.c:41:
>> | /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenctrl_dom.h:19:10: fatal error: xen/libelf/libelf.h: No such file or directory
>> |    19 | #include <xen/libelf/libelf.h>
>> |       |          ^~~~~~~~~~~~~~~~~~~~~
>> | compilation terminated.
>> | /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/rules.mak:69: recipe for target 'hw/i386/xen/xen_platform.o' failed
>>
>> Xen is using xenctrl_dom.h which needs the libelf.h header from xen.
> 
> Actually this is not coming from your series; it is a problem already present on master.

... and fixed on staging.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 04:50:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 04:50:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1661.5024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOD1l-0004Pg-Qr; Fri, 02 Oct 2020 04:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1661.5024; Fri, 02 Oct 2020 04:50:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOD1l-0004PZ-Mu; Fri, 02 Oct 2020 04:50:41 +0000
Received: by outflank-mailman (input) for mailman id 1661;
 Fri, 02 Oct 2020 04:50:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOD1j-0004PU-QG
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 04:50:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c38b9e92-c5e8-470d-9eb9-3413a05eee0e;
 Fri, 02 Oct 2020 04:50:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 450E9AD04;
 Fri,  2 Oct 2020 04:50:36 +0000 (UTC)
X-Inumbo-ID: c38b9e92-c5e8-470d-9eb9-3413a05eee0e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601614236;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=J/idnfpv/lIDFEdq+x6upDoD3zsCzNiwA4jbuPtNVBQ=;
	b=HKxElXgq8DYYV/9IkUwE2TxRxC46woRPr7S/o6FFNNi5MHDqW7kuZma7sIjDNTU2vPduKP
	u4In4r/Cb0KFYS6Zm6agqMor98qGWMtd0qD7FzYXjFi/Kfi3D8zSV15OZQG6e0bkqVWSE4
	MEckAO/zGyNmFip6mPLJTgoDuDeXO3U=
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
Date: Fri, 2 Oct 2020 06:50:35 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.10.20 18:38, Bertrand Marquis wrote:
> Hi Juergen,
> 
>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>
>>
>>
>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>
>>> Making getBridge() static triggered a build error with some gcc versions:
>>>
>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>> length 255 [-Werror=stringop-truncation]
>>>
>>> Fix that by using a buffer with 256 bytes instead.
>>>
>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Sorry i have to come back on this one.
> 
> I still see an error compiling with Yocto on this one:
> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
> |    81 |      strncpy(result, de->d_name, resultLen);
> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> To solve it, I need to define devBridge[257], like devNoBridge.

IMHO this is a genuine compiler bug.

de->d_name is an array of 256 bytes, so doing strncpy() from that to
another array of 256 bytes with a length of 256 won't truncate anything.

Making devBridge one byte longer would be dangerous, as this would do
a strncpy with length of 257 from a source with a length of 256 bytes
only.

BTW, I think Andrew? has tested my patch with a recent gcc which emitted
the original warning without my patch, and it was fine with the patch.
Either your compiler (assuming you are using gcc) has gained that bogus
warning, or you are missing an update fixing it.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 04:54:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 04:54:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1665.5037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOD5n-0004cU-J4; Fri, 02 Oct 2020 04:54:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1665.5037; Fri, 02 Oct 2020 04:54:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOD5n-0004cN-Fk; Fri, 02 Oct 2020 04:54:51 +0000
Received: by outflank-mailman (input) for mailman id 1665;
 Fri, 02 Oct 2020 04:54:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOD5m-0004bv-TG
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 04:54:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9ca36da2-92b5-4aff-be8c-3aa78b54301a;
 Fri, 02 Oct 2020 04:54:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOD5e-0004t2-Sl; Fri, 02 Oct 2020 04:54:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOD5e-0007T8-KA; Fri, 02 Oct 2020 04:54:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOD5e-0003DG-Jf; Fri, 02 Oct 2020 04:54:42 +0000
X-Inumbo-ID: 9ca36da2-92b5-4aff-be8c-3aa78b54301a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bwtFwj3TMSDwl2GbfpTuxC/5g7VgtrxuvIowwEyDtmQ=; b=tIpjLIfaqr0DUABtYEEKOnWvq8
	ctDMVkqYhK/fcTw8TNQgWc7wgl36kkFZldhqOC06pX4zwRz0wL0pQTDGyOojSMzzGDImIF5vCATVB
	ZMVaKy8sKvJXn2+DAOQVnCmNteDR/QGS86K/kaR1Hrfx8OV9AJwfb1VoenbzLbVkRS2g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155287-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155287: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 04:54:42 +0000

flight 155287 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155287/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    1 days
Failing since        155144  2020-09-30 16:01:24 Z    1 days   11 attempts
Testing same since   155246  2020-10-01 15:01:28 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: hide struct elf_dom_parms layout from users
    
    Don't include struct elf_dom_parms in struct xc_dom_image, but rather
    use a pointer to reference it. Together with adding accessor functions
    for the externally needed elements, this makes it possible to drop the
    inclusion of the Xen private header xen/libelf/libelf.h from xenguest.h.
    
    Fixes: 7e0165c19387 ("tools/libxc: untangle libxenctrl from libxenguest")
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7f186b1996dea2992c8ed3606b38d73222293c37
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: make xc_dom_loader interface private to libxenguest
    
    The pluggable kernel loader interface is used only internally by
    libxenguest, so make it private. This removes a dependency on the Xen
    internal header xen/libelf/libelf.h from xenguest.h.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 77a09716f251ac0e35ddd4bfda8f35fe639f432b
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libs: merge xenctrl_dom.h into xenguest.h
    
    Today xenctrl_dom.h is part of libxenctrl as it is included by
    xc_private.c. This seems not to be needed, so merge xenctrl_dom.h into
    xenguest.h where its contents really should be.
    
    Replace all #includes of xenctrl_dom.h by xenguest.h ones or drop them
    if xenguest.h is already included.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 3ae0d316f01c08903a96f6b5b39275c67b823264
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: move libxlutil to tools/libs/util
    
    Move the libxlutil source to tools/libs/util and delete tools/libxl.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit b22b9b9a1df865e1dd9e4f6950ae6be7081be010
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libs: add option for library names not starting with libxen
    
    libxlutil doesn't follow the standard name pattern of all other Xen
    libraries, so add another make variable which can be used to allow
    other names.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bc01c73018689e066e06515b26181d463a3f2a40
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: rename global libxlutil make variables
    
    Rename *_libxlutil make variables to *_libxenutil in order to avoid
    nasty indirections when moving libxlutil under the tools/libs
    infrastructure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 41aea82de2b581c61482aeddab151ecf3b1bca25
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libxl: move libxenlight to tools/libs/light
    
    Carve out all libxenlight related sources and move them to
    tools/libs/light in order to use the generic library build environment.
    
    The closely related sources for libxl-save-helper and the libxl test
    environment are being moved, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Sep 21 13:17:30 2020 +0100

    x86: Use LOCK ADD instead of MFENCE for smp_mb()
    
    MFENCE is overly heavyweight for SMP semantics on WB memory, because it also
    orders weaker cached writes, and flushes the WC buffers.
    
    This technique was used as an optimisation in Java[1], and later adopted by
    Linux[2] where it was measured to have a 60% performance improvement in VirtIO
    benchmarks.
    
    The stack is used because it is hot in the L1 cache, and a -4 offset is used
    to avoid creating a false data dependency on live data.
    
    For 64bit userspace, the Red Zone needs to be considered.  Use -32 to allow
    for a reasonable quantity of Red Zone data, but still have a 50% chance of
    hitting the same cache line as %rsp.
    
    Fix up the 32 bit definitions in HVMLoader and libxc to avoid a false data
    dependency.
    
    [1] https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
    [2] https://git.kernel.org/torvalds/c/450cbdd0125cfa5d7bbf9e2a6b6961cc48d29730
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 707eb41ae2dde4636261f631224c97e9c0b16b56
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:07 2020 +0100

    xl: implement documented '--force' option for block-detach
    
    The manpage for 'xl' documents an option to force a block device to be
    released even if the domain to which it is attached does not co-operate.
    The documentation also states that, if the force flag is not specified, the
    block-detach operation should fail.
    
    Currently the force option is not implemented and a non-forced block-detach
    will auto-force after a time-out of 10s. This patch implements the force
    option and also stops auto-forcing a non-forced block-detach by calling
    libxl_device_disk_safe_remove() rather than libxl_device_disk_remove(),
    allowing the operation to fail cleanly as per the documented behaviour.
    
    NOTE: The documentation is also adjusted since the normal positioning of
          options is before compulsory parameters. It is also noted that use of
          the --force option may lead to a guest crash.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 6df07f9fbe1e9b65a40183f79a6171200dc877dd
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:06 2020 +0100

    libxl: provide a mechanism to define a device 'safe remove' function...
    
    ... and use it to define libxl_device_disk_safe_remove().
    
    This patch builds on the existing macro magic by using a new value of the
    'force' field in libxl__ao_device.
    It is currently defined as an int but is used in a boolean manner where
    1 means the operation is forced and 0 means it is not (but is actually forced
    after a 10s time-out). In adding a third value, this patch re-defines 'force'
    as a struct type (libxl__force) with a single 'flag' field taking an
    enumerated value:
    
    LIBXL__FORCE_AUTO - corresponding to the old 0 value
    LIBXL__FORCE_ON   - corresponding to the old 1 value
    LIBXL__FORCE_OFF  - the new value
    
    The LIBXL_DEFINE_DEVICE_REMOVE() macro is then modified to define the
    libxl_device_<type>_remove() and libxl_device_<type>_destroy() functions,
    setting LIBXL__FORCE_AUTO and LIBXL__FORCE_ON (respectively) in the
    libxl__ao_device passed to libxl__initiate_device_generic_remove() and a
    new macro, LIBXL_DEFINE_DEVICE_SAFE_REMOVE(), is defined that sets
    LIBXL__FORCE_OFF instead. This macro is used to define the new
    libxl_device_disk_safe_remove() function.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom Xenstore should set the maximum number of
    grants needed via a call to xengnttab_set_max_grants(), as otherwise
    the number of domains which can be supported will be limited to 128
    (the default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory is
    limited to 8TB due to the 32bit value. Adjust the code to use 64bit
    values as input. All callers already use some form of 64bit as input,
    so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to meson, and the
    ./configure step with meson is more restrictive than it used to be:
    most installation paths must be within the prefix, otherwise we get
    this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    In order to work around the limitation, we set the prefix to the same
    one as for the rest of the Xen installation, and set all the other
    paths explicitly.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 06:06:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 06:06:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1678.5071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOED3-0002bS-9v; Fri, 02 Oct 2020 06:06:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1678.5071; Fri, 02 Oct 2020 06:06:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOED3-0002bL-72; Fri, 02 Oct 2020 06:06:25 +0000
Received: by outflank-mailman (input) for mailman id 1678;
 Fri, 02 Oct 2020 06:06:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOED2-0002a8-5r
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 06:06:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 625534d4-2494-4bc4-9a66-da7fbb0f09e3;
 Fri, 02 Oct 2020 06:06:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOECu-0006iR-RS; Fri, 02 Oct 2020 06:06:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOECu-0002WB-Dr; Fri, 02 Oct 2020 06:06:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOECu-0002Nn-DJ; Fri, 02 Oct 2020 06:06:16 +0000
X-Inumbo-ID: 625534d4-2494-4bc4-9a66-da7fbb0f09e3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bP8uXf2oVbdfiF83aTCzw6tioqUqM2VxZra3j/SyKJ0=; b=Z0wvuuPbk/To31BOVmbY6c4FRz
	Jx8XKLK9iP73OjDR+jyr29lQuT7E8xghSwDIt5lLRalkaZZXn0jdJkqeu90/x9oXepG15Y+k0Cv1h
	auglre9WUC2q7Qd8Jct9trx9kvqSBSSiBOEyj2YBOT7TDFNmomtA+CokJn300vIkidek=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155193-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155193: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):starved:nonblocking
    libvirt:build-i386-pvops:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=a63b48c5ecef077bf0f909a85f453a605600cf05
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 06:06:16 +0000

flight 155193 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155193/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) starved n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               starved  n/a
 build-i386-pvops              2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              a63b48c5ecef077bf0f909a85f453a605600cf05
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   84 days
Failing since        151818  2020-07-11 04:18:52 Z   83 days   77 attempts
Testing same since   155193  2020-10-01 04:18:47 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Martin Kletzander <mkletzan@redhat.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            starved 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  starved 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      starved 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 starved 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17006 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 06:12:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 06:12:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1680.5084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEIt-0003Tb-1m; Fri, 02 Oct 2020 06:12:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1680.5084; Fri, 02 Oct 2020 06:12:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEIs-0003TU-UE; Fri, 02 Oct 2020 06:12:26 +0000
Received: by outflank-mailman (input) for mailman id 1680;
 Fri, 02 Oct 2020 06:12:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOEIr-0003TP-Bp
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 06:12:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 998311f9-3f0f-47fe-813d-7f903fea7e5b;
 Fri, 02 Oct 2020 06:12:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 502B8B312;
 Fri,  2 Oct 2020 06:12:23 +0000 (UTC)
X-Inumbo-ID: 998311f9-3f0f-47fe-813d-7f903fea7e5b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601619143;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8Hw7ZlxAQw/pdvyez1KAKHCb7RLR/cvQxCiac0pD7MM=;
	b=ipE3kbU7FzZQkYcOYhQwb3h8S7qNCnkQG7dYE7Z4PJyKo4KtLkwcdJ/tPpNKFvCsSS2AYl
	pydYiuKT5NGONK3PK+rMk9JgZAG4zWjhunJX21rb3dW+3Lyoz3YUYYMnToUl/YBhk5MH72
	IdBoLprPvG9Mwx/ix8G1oGfY8cp/P60=
Subject: Re: [PATCH 11/12] evtchn: convert vIRQ lock to an r/w one
To: Julien Grall <julien@xen.org>
Cc: paul@xen.org, xen-devel@lists.xenproject.org,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>
References: <0d5ffc89-4b04-3e06-e950-f0cb171c7419@suse.com>
 <6e529147-2a76-bc28-ac16-21fc9a2c8f03@suse.com>
 <004b01d696ff$76873e50$6395baf0$@xen.org>
 <92d2714b-d762-2f15-086f-58257e3336a8@suse.com>
 <006401d69707$062a5090$127ef1b0$@xen.org>
 <3626d65c-bd5d-f65e-61ca-451110761258@suse.com>
 <f55cb87f-418d-61fa-65f1-0e746071fe37@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <60f2ae90-160f-d7fe-9d5f-f9cd4651a93c@suse.com>
Date: Fri, 2 Oct 2020 08:12:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <f55cb87f-418d-61fa-65f1-0e746071fe37@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.10.2020 18:21, Julien Grall wrote:
> On 30/09/2020 11:16, Jan Beulich wrote:
>> On 30.09.2020 10:52, Paul Durrant wrote:
>>> Looking again, given that both send_guest_vcpu_virq() and
>>> send_guest_global_virq() (rightly) hold the evtchn lock before
>>> calling evtchn_port_set_pending() I think you could do away with
>>> the virq lock by adding checks in those functions to verify
>>> evtchn->state == ECS_VIRQ and u.virq == virq after having
>>> acquired the channel lock but before calling
>>> evtchn_port_set_pending().
>>
>> I don't think so: The adjustment of v->virq_to_evtchn[] in
>> evtchn_close() would then happen with just the domain's event
>> lock held, which the sending paths don't use at all. The per-
>> channel lock gets acquired in evtchn_close() a bit later only
>> (and this lock can't possibly protect per-vCPU state).
>>
>> In fact I'm now getting puzzled by evtchn_bind_virq() updating
>> this array with (just) the per-domain lock held. Since it's
>> the last thing in the function, there's technically no strict
>> need for acquiring the vIRQ lock,
> 
> Well, we at least need to prevent the compiler from tearing the
> store/load. If we don't use a lock, then we should use ACCESS_ONCE()
> or {read,write}_atomic() for all the usages.
> 
>> but holding the event lock
>> definitely doesn't help.
> 
> It helps because spin_unlock() and write_unlock() use the same barrier 
> (arch_lock_release_barrier()). So ...

I'm having trouble making this part of your reply fit ...

>> All that looks to be needed is the
>> barrier implied from write_unlock().
> 
> No barrier should be necessary, although I would suggest adding a
> comment explaining it.

... this. If we moved the update of v->virq_to_evtchn[] out of the
locked region (as the lock doesn't protect anything anymore at that
point), I think a barrier would need adding, such that the sending
paths will observe the update by the time evtchn_bind_virq()
returns (and hence sending of a respective vIRQ event can
legitimately be expected to actually work). Or did you possibly
just misunderstand what I wrote before? By putting in question the
utility of holding the event lock, I implied the write could be
moved out of the locked region ...

Jan
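For reference, the store/load-tearing concern discussed above can be sketched in isolation. This is a minimal illustration of the ACCESS_ONCE() idiom, not Xen's actual implementation: the macro name matches the one mentioned in the thread, but `virq_to_evtchn_slot` and the two helper functions are hypothetical stand-ins for one entry of v->virq_to_evtchn[] and the bind/send paths.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the ACCESS_ONCE() idiom: the volatile cast makes the
 * compiler perform exactly one load or store of the full object, so
 * it may neither tear the access into smaller pieces nor repeat it. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* Hypothetical stand-in for one slot of v->virq_to_evtchn[]:
 * written by the bind path, read locklessly by the send path. */
static uint32_t virq_to_evtchn_slot;

static void bind_path_store(uint32_t port)
{
    /* Single, untorn store visible to concurrent readers as a whole. */
    ACCESS_ONCE(virq_to_evtchn_slot) = port;
}

static uint32_t send_path_load(void)
{
    /* Single, untorn load; the compiler cannot re-read the slot. */
    return ACCESS_ONCE(virq_to_evtchn_slot);
}
```

Note this only constrains the compiler; ordering against other memory accesses (the barrier point debated above) would still rely on the lock-release barrier or an explicit one.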


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 06:14:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 06:14:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1685.5099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEKg-0003eP-Kj; Fri, 02 Oct 2020 06:14:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1685.5099; Fri, 02 Oct 2020 06:14:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEKg-0003eI-HW; Fri, 02 Oct 2020 06:14:18 +0000
Received: by outflank-mailman (input) for mailman id 1685;
 Fri, 02 Oct 2020 06:14:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOEKf-0003dV-NN
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 06:14:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f5a8ca5-bb8e-4a58-9a21-bd5dc6a7f184;
 Fri, 02 Oct 2020 06:14:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOEKY-0006tc-6N; Fri, 02 Oct 2020 06:14:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOEKX-0002lY-Tn; Fri, 02 Oct 2020 06:14:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOEKX-0005ut-T9; Fri, 02 Oct 2020 06:14:09 +0000
X-Inumbo-ID: 5f5a8ca5-bb8e-4a58-9a21-bd5dc6a7f184
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Eu573khhBIAmIv4xkOgDXOIEe2idJcArfdUNpGCMxsk=; b=CAL+A+NvkhhskxHr5u3ptiFhx2
	bFVmLCChBDsKCSLMBPI63y96eIHLgcwwit8bZap1PCUx6KVNBgBUQPOvbrSDgGVaMucjnkW3NHrQY
	1OLvkbxxX4XSsm7CAarLsf9cyREn87QzGVlil05u7rVR5wTb67knjA+c6FNkkPjitqNo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155173-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 155173: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=f37a1cf023b277d0d49323bf322ce3ff0c92262d
X-Osstest-Versions-That:
    xen=28855ebcdbfa437e60bc16c761405476fe16bc39
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 06:14:09 +0000

flight 155173 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155173/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154350
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154350
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154350
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154350

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  f37a1cf023b277d0d49323bf322ce3ff0c92262d
baseline version:
 xen                  28855ebcdbfa437e60bc16c761405476fe16bc39

Last test of basis   154350  2020-09-15 00:36:14 Z   17 days
Failing since        154617  2020-09-22 14:37:47 Z    9 days    5 attempts
Testing same since   154641  2020-09-23 10:32:29 Z    8 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 506 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 06:20:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 06:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1688.5114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEQO-0004XF-Cy; Fri, 02 Oct 2020 06:20:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1688.5114; Fri, 02 Oct 2020 06:20:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEQO-0004X8-8M; Fri, 02 Oct 2020 06:20:12 +0000
Received: by outflank-mailman (input) for mailman id 1688;
 Fri, 02 Oct 2020 06:20:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOEQM-0004X3-KO
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 06:20:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 859395e9-6e23-4669-ad3c-a1a5b5311647;
 Fri, 02 Oct 2020 06:20:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 06CC0AD2C;
 Fri,  2 Oct 2020 06:20:09 +0000 (UTC)
X-Inumbo-ID: 859395e9-6e23-4669-ad3c-a1a5b5311647
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601619609;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZbOzifHMrWmdV8hTzqw4L1Ltp2rraz8UHROP0ToP688=;
	b=duT1sp8uf2+cXRqnCIMZ6kFEhwyO/uvCWQRcjMSU6Ouf/FkYE2I/bjsZxp0nH6L73xAdRZ
	XOurxyEKX+2/N+p/LaAigeKrBBWOcolGRmG1wjZDgQ7sGoVw0va6mjGWpyfTY91tY5WrEb
	NMiPD8bA6Qq+hWrHoEcmLPaRI3Lc0lE=
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
Date: Fri, 2 Oct 2020 08:20:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.2020 06:50, Jürgen Groß wrote:
> On 01.10.20 18:38, Bertrand Marquis wrote:
>> Hi Juergen,
>>
>>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>
>>>
>>>
>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>
>>>> Making getBridge() static triggered a build error with some gcc versions:
>>>>
>>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>>> length 255 [-Werror=stringop-truncation]
>>>>
>>>> Fix that by using a buffer with 256 bytes instead.
>>>>
>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> Sorry I have to come back on this one.
>>
>> I still see an error compiling with Yocto on this one:
>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
>> |    81 |      strncpy(result, de->d_name, resultLen);
>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> To solve it, I need to define devBridge[257] as devNoBridge.
> 
> IMHO this is a real compiler error.
> 
> de->d_name is an array of 256 bytes, so doing strncpy() from that to
> another array of 256 bytes with a length of 256 won't truncate anything.

That's a matter of how you look at it, I think: If the original array
doesn't hold a nul-terminated string, the destination array won't
either, yet the common goal of strncpy() is to yield a properly
nul-terminated string. IOW the warning may be justified, since the
standard even has a specific footnote to point out this possible
pitfall.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 06:25:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 06:25:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1690.5125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEVr-0004jt-1D; Fri, 02 Oct 2020 06:25:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1690.5125; Fri, 02 Oct 2020 06:25:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEVq-0004jm-UT; Fri, 02 Oct 2020 06:25:50 +0000
Received: by outflank-mailman (input) for mailman id 1690;
 Fri, 02 Oct 2020 06:25:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G19b=DJ=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1kOEVp-0004jh-D2
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 06:25:49 +0000
Received: from mail-ot1-x343.google.com (unknown [2607:f8b0:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f5ced9a-b3fa-4b38-b02a-69e256b06704;
 Fri, 02 Oct 2020 06:25:47 +0000 (UTC)
Received: by mail-ot1-x343.google.com with SMTP id 60so413860otw.3
 for <xen-devel@lists.xenproject.org>; Thu, 01 Oct 2020 23:25:47 -0700 (PDT)
X-Inumbo-ID: 3f5ced9a-b3fa-4b38-b02a-69e256b06704
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=TgGLU5+vZtsKDgvmriYU+N6YNNe1OuuLxuc8yu///lw=;
        b=ZJCa28PmK9pfAQb1z/Fz5nQujMMj9GwOECW8lGHmILPMDdScECcutYS+9Mxuj/X5o7
         QjtfMVLXUHXXFk1t5i1TJShzT9a+YUlEPXN8yJ4A9p1dj8fUg6+n6TIC9QVlNy19K7QY
         JcUKsbgwPyTbvME8nmGR0fwh4gpbh9zB4gEg1hf/s+J64UxgUxpLBQd0KVDDfSuZU648
         p7HB5oEvdY9axpSx5bfJGc/JwyuMDMWHNZJxylUxjWPzc3oOivdXgbjXKRHVI26tHTYY
         VpEbpCFOawUgtIu9c2wSdibgFIE1KiuQV2KZkvoPmJd6GoUA1DreE3hxvnsCevo9ojxt
         J8Gg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=TgGLU5+vZtsKDgvmriYU+N6YNNe1OuuLxuc8yu///lw=;
        b=cgf3X22j7pYUxE03ITmjwVNrQX8XemxpQEahz5KiX3p5j23InRAgUbtqAQTJvAoDXH
         xEoE4wF+FljC/v26AjYSZXNmxKKeESiuoU2RoBI4Lpzmyy4RHw7gSWbj8JNMx/HJI39N
         0vfNuiJN9c21q0mEJsdY4roBFHH8oJw2Hb1D7QbNxnmc3kEE8kfv6SFPaOngFfnzc1fb
         /6dIzLG/GaRLu/5UgJ7ag76tX12F/hf4w65lVbOqwcWYiafQ31bl5LnZVmZT93D4v9xx
         3JRmiZw/6ooL9o5LGa8kCwr/sj8VT4VfWs0klEMt4/VL55y+OJgl0QHlO36YdolbLm8u
         69hA==
X-Gm-Message-State: AOAM5321q2FpOiA81ZzT3mu2kGJC0lDJHDjvJiKZN22DpU1iDu3CwSia
	Y5rF0QAWFCwALN64ECAKkKD04uSzJnV0Nlmx+b8=
X-Google-Smtp-Source: ABdhPJxmttwG977th61NKAuyvbxpggYMqZWaJhVHmnsWppEQt2W6DDhYGUBOAJgyTHNbsrXCylGzgFKHr6jp+OCRA4A=
X-Received: by 2002:a9d:20ca:: with SMTP id x68mr81402ota.80.1601619947380;
 Thu, 01 Oct 2020 23:25:47 -0700 (PDT)
MIME-Version: 1.0
References: <20200930125736.95203-1-george.dunlap@citrix.com> <683E2686-1551-493B-A3AE-D0707C937155@arm.com>
In-Reply-To: <683E2686-1551-493B-A3AE-D0707C937155@arm.com>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Thu, 1 Oct 2020 23:25:35 -0700
Message-ID: <CACMJ4Gac-rtoWqV=A-LT8VLU=SZQogSR009FjJiH3fF6rju5PQ@mail.gmail.com>
Subject: Re: [PATCH RFC] docs: Add minimum version depencency policy document
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: George Dunlap <george.dunlap@citrix.com>, 
	"open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson <ian.jackson@citrix.com>, 
	Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Rich Persaud <persaur@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 1, 2020 at 7:38 AM Bertrand Marquis
<Bertrand.Marquis@arm.com> wrote:
>
> Hi George,
>
> + Christopher Clark to have his view on what to put for Yocto.
>
> > On 30 Sep 2020, at 13:57, George Dunlap <george.dunlap@citrix.com> wrote:
> >
> > Define specific criteria for how we determine which tools and
> > libraries to be compatible with.  This will clarify issues such as,
> > "Should we continue to support Python 2.4" moving forward.
> >
> > Note that CentOS 7 is set to stop receiving "normal" maintenance
> > updates in "Q4 2020"; assuming that 4.15 is released after that, we
> > only need to support CentOS / RHEL 8.
> >
> > Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> > ---
> >
> > CC: Ian Jackson <ian.jackson@citrix.com>
> > CC: Wei Liu <wl@xen.org>
> > CC: Andrew Cooper <andrew.cooper3@citrix.com>
> > CC: Jan Beulich <jbeulich@suse.com>
> > CC: Stefano Stabellini <sstabellini@kernel.org>
> > CC: Julien Grall <julien@xen.org>
> > CC: Rich Persaud <persaur@gmail.com>
> > CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
> > ---
> > docs/index.rst                        |  2 +
> > docs/policies/dependency-versions.rst | 76 +++++++++++++++++++++++++++
> > 2 files changed, 78 insertions(+)
> > create mode 100644 docs/policies/dependency-versions.rst
> >
> > diff --git a/docs/index.rst b/docs/index.rst
> > index b75487a05d..ac175eacc8 100644
> > --- a/docs/index.rst
> > +++ b/docs/index.rst
> > @@ -57,5 +57,7 @@ Miscellanea
> > -----------
> >
> > .. toctree::
> > +   :maxdepth: 1
> >
> > +   policies/dependency-versions
> >    glossary
> > diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
> > new file mode 100644
> > index 0000000000..d5eeb848d8
> > --- /dev/null
> > +++ b/docs/policies/dependency-versions.rst
> > @@ -0,0 +1,76 @@
> > +.. SPDX-License-Identifier: CC-BY-4.0
> > +
> > +Build and runtime dependencies
> > +==============================
> > +
> > +Xen depends on other programs and libraries to build and to run.
> > +Chosing a minimum version of these tools to support requires a careful
> > +balance: Supporting older versions of these tools or libraries means
> > +that Xen can compile on a wider variety of systems; but means that Xen
> > +cannot take advantage of features available in newer versions.
> > +Conversely, requiring newer versions means that Xen can take advantage
> > +of newer features, but cannot work on as wide a variety of systems.
> > +
> > +Specific dependencies and versions for a given Xen release will be
> > +listed in the toplevel README, and/or specified by the ``configure``
> > +system.  This document lays out the principles by which those versions
> > +should be chosen.
> > +
> > +The general principle is this:
> > +
> > +    Xen should build on currently-supported versions of major distros
> > +    when released.
> > +
> > +"Currently-supported" means whatever that distro considers "full
> > +support".  For instance, at the time of writing, CentOS 7 and 8 are
> > +listed as being given "Full Updates", but CentOS 6 is listed as
> > +"Maintenance updates"; under this criterion, we would try to ensure
> > +that Xen could build on CentOS 7 and 8, but not on CentOS 6.
> > +
> > +Exceptions for specific distros or tools may be made when appropriate.
> > +
> > +One exception to this is compiler versions for the hypervisor.
> > +Support for new instructions, and in particular support for new safety
> > +features, may require a newer compiler than many distros support.
> > +These will be specified in the README.
> > +
> > +Distros we consider when deciding minimum versions
> > +--------------------------------------------------
> > +
> > +We currently aim to support Xen building and running on the following distributions:
> > +Debian_,
> > +Ubuntu_,
> > +OpenSUSE_,
> > +Arch Linux,
> > +SLES_,
> > +Yocto_,
> > +CentOS_,
> > +and RHEL_.
> > +
> > +.. _Debian: https://www.debian.org/releases/
> > +.. _Ubuntu: https://wiki.ubuntu.com/Releases
> > +.. _OpenSUSE: https://en.opensuse.org/Lifetime
> > +.. _SLES: https://www.suse.com/lifecycle/
> > +.. _Yocto: https://wiki.yoctoproject.org/wiki/Releases
> > +.. _CentOS: https://wiki.centos.org/About/Product
> > +.. _RHEL: https://access.redhat.com/support/policy/updates/errata
> > +
> > +Specific distro versions supported in this release
> > +--------------------------------------------------
> > +
> > +======== ==================
> > +Distro   Supported releases
> > +======== ==================
> > +Debian   10 (Buster)
> > +Ubuntu   20.10 (Groovy Gorilla), 20.04 (Focal Fossa), 18.04 (Bionic Beaver), 16.04 (Xenial Xerus)
> > +OpenSUSE Leap 15.2
> > +SLES     SLES 11, 12, 15
> > +Yocto    3.1 (Dunfell)
>
> Yocto only supports one version of Xen (as there is usually only one xen recipe per yocto version)

I'm not sure that's totally accurate: the Yocto Xen recipes are
written to support backwards compatibility with older versions of Xen.
In general, care and effort has been expended to support building with
multiple versions of Xen with the same recipes, via variable override
or bbappend, and it is expected to work.

> which can make the version here a bit tricky:
> Yocto 3.1 (Dunfell) supports only Xen 4.12
> Yocto 3.2 (Gatesgarth) supports only Xen 4.14 but has a Yocto master recipe which should actually be used
> by someone who would want to try Xen master.
>
> So I would suggest to put Yocto 3.2 here only.
>
> @Christopher: what is your view here ?

Thanks for asking. I don't quite agree with that recommendation: I
think Dunfell does belong, with an indication that it requires
Gatesgarth meta-virtualization. Dunfell is the LTS release, so a
compatibility statement about it is important, i.e.:

Yocto 3.1 (Dunfell + Gatesgarth meta-virtualization)

Effort has already been made within Yocto to make the Gatesgarth
meta-virtualization layer compatible with Dunfell open-embedded core,
specifically to allow newer Xen to be used with the LTS Dunfell
software from the core layers. I would hope that any issues that are
found with that configuration will be posted so they can be fixed.

thanks,

Christopher

>
> Cheers
> Bertrand
>
> > +CentOS   8
> > +RHEL     8
> > +======== ==================
> > +
> > +.. note::
> > +
> > +   We also support Arch Linux, but as it's a rolling distribution, the
> > +   concept of "security supported releases" doesn't really apply.
> > --
> > 2.25.1
> >
> >
>


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 06:51:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 06:51:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1704.5174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEuQ-0007RB-P9; Fri, 02 Oct 2020 06:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1704.5174; Fri, 02 Oct 2020 06:51:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOEuQ-0007R4-LI; Fri, 02 Oct 2020 06:51:14 +0000
Received: by outflank-mailman (input) for mailman id 1704;
 Fri, 02 Oct 2020 06:51:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOEuP-0007Qz-4o
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 06:51:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03f8de42-74f4-4e01-9065-ab618b563f7d;
 Fri, 02 Oct 2020 06:51:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2C769AB0E;
 Fri,  2 Oct 2020 06:51:10 +0000 (UTC)
X-Inumbo-ID: 03f8de42-74f4-4e01-9065-ab618b563f7d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601621470;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eFRzX9trc5MZuk77g1cmRgU+ryMN3oMc3H5lrCMOHAg=;
	b=PsSiq5fnx183NixNZQzo+UT4dsKQJftGzpcB8/rMqZ6u1jlMkU8FOIaD79H75ZInIFkDfj
	AnSdrMkWu1AsvelDsAL4JbZhDoowlTHmvkIXp1RXARsDR9zd/7EDVl9vdCwRaYatqsoP7c
	rz6pIbxOIqh5hp06HvI73HHe1rJXXfE=
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
To: Jan Beulich <jbeulich@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
 <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8ddad01e-cf1a-7752-1371-a505fb26dc47@suse.com>
Date: Fri, 2 Oct 2020 08:51:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.20 08:20, Jan Beulich wrote:
> On 02.10.2020 06:50, Jürgen Groß wrote:
>> On 01.10.20 18:38, Bertrand Marquis wrote:
>>> Hi Juergen,
>>>
>>>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>>
>>>>
>>>>
>>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>>
>>>>> Making getBridge() static triggered a build error with some gcc versions:
>>>>>
>>>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>>>> length 255 [-Werror=stringop-truncation]
>>>>>
>>>>> Fix that by using a buffer with 256 bytes instead.
>>>>>
>>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>
>>> Sorry I have to come back on this one.
>>>
>>> I still see an error compiling with Yocto on this one:
>>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
>>> |    81 |      strncpy(result, de->d_name, resultLen);
>>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>
>>> To solve it, I need to define devBridge[257] as devNoBridge.
>>
>> IMHO this is a real compiler error.
>>
>> de->d_name is an array of 256 bytes, so doing strncpy() from that to
>> another array of 256 bytes with a length of 256 won't truncate anything.
> 
> That's a matter of how you look at it, I think: If the original array
> doesn't hold a nul-terminated string, the destination array won't
> either, yet the common goal of strncpy() is to yield a properly
> nul-terminated string. IOW the warning may be justified, since the
> standard even has a specific footnote to point out this possible
> pitfall.

If the source doesn't hold a nul-terminated string there will still be
256 bytes copied, so there is no truncation done during strncpy().

In fact there is no way to use strncpy() in a safe way on a fixed-size
source array with the above semantics: either the target is larger than
the source and the length is at least sizeof(source) + 1, resulting in a
possible read beyond the end of the source, or the target is the same
length, leading to the error.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 06:59:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 06:59:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1706.5186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOF2k-0007iH-K3; Fri, 02 Oct 2020 06:59:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1706.5186; Fri, 02 Oct 2020 06:59:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOF2k-0007iA-Gq; Fri, 02 Oct 2020 06:59:50 +0000
Received: by outflank-mailman (input) for mailman id 1706;
 Fri, 02 Oct 2020 06:59:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOF2j-0007i5-Fn
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 06:59:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d83799f-3cc9-44c5-b044-c9f711ab4f01;
 Fri, 02 Oct 2020 06:59:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 920A6AF68;
 Fri,  2 Oct 2020 06:59:47 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kOF2j-0007i5-Fn
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 06:59:49 +0000
X-Inumbo-ID: 7d83799f-3cc9-44c5-b044-c9f711ab4f01
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7d83799f-3cc9-44c5-b044-c9f711ab4f01;
	Fri, 02 Oct 2020 06:59:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601621987;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=N+s+Wwos+DreJlRD4jiPthFzPNCYs2RfABt2kX87T/U=;
	b=BEi4wBE4ORF3UMywez9Dy9b2W18GxnWmQCZIZwbUdyJw04OQuqGLsp0+wBV34bJt36fx3H
	wLUXF5skJQsJNjho5smmVGKQunwdSQD7qpLfuGdhzhefBsd6r9Z209qZBhYAQEJEj19QL9
	jymxRT3eEEziUUeLVT2dgSH4nN9s0i8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 920A6AF68;
	Fri,  2 Oct 2020 06:59:47 +0000 (UTC)
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
 <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
 <8ddad01e-cf1a-7752-1371-a505fb26dc47@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <90a39759-63c1-28b9-f112-d8b3cc083565@suse.com>
Date: Fri, 2 Oct 2020 08:59:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <8ddad01e-cf1a-7752-1371-a505fb26dc47@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.2020 08:51, Jürgen Groß wrote:
> On 02.10.20 08:20, Jan Beulich wrote:
>> On 02.10.2020 06:50, Jürgen Groß wrote:
>>> On 01.10.20 18:38, Bertrand Marquis wrote:
>>>> Hi Juergen,
>>>>
>>>>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>>>
>>>>>
>>>>>
>>>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>>>
>>>>>> Making getBridge() static triggered a build error with some gcc versions:
>>>>>>
>>>>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>>>>> length 255 [-Werror=stringop-truncation]
>>>>>>
>>>>>> Fix that by using a buffer with 256 bytes instead.
>>>>>>
>>>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>
>>>> Sorry I have to come back on this one.
>>>>
>>>> I still see an error compiling with Yocto on this one:
>>>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>>>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
>>>> |    81 |      strncpy(result, de->d_name, resultLen);
>>>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>
>>>> To solve it, I need to define devBridge[257] as devNoBridge.
>>>
>>> IMHO this is a real compiler error.
>>>
>>> de->d_name is an array of 256 bytes, so doing strncpy() from that to
>>> another array of 256 bytes with a length of 256 won't truncate anything.
>>
>> That's a matter of how you look at it, I think: If the original array
>> doesn't hold a nul-terminated string, the destination array won't
>> either, yet the common goal of strncpy() is to yield a properly
>> nul-terminated string. IOW the warning may be warranted, since the
>> standard even has a specific footnote pointing out this pitfall.
> 
> If the source doesn't hold a nul-terminated string there will still be
> 256 bytes copied, so there is no truncation done during strncpy().
> 
> In fact there is no way to use strncpy() in a safe way on a fixed sized
> source array with the above semantics: either the target is larger than
> the source and length is at least sizeof(source) + 1, resulting in a
> possible read beyond the end of source, or the target is the same length
> leading to the error.

I agree with all of what you say, but I can also see why said footnote
alone may have motivated the emission of the warning.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 07:25:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 07:25:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1712.5206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOFRa-0001uR-RQ; Fri, 02 Oct 2020 07:25:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1712.5206; Fri, 02 Oct 2020 07:25:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOFRa-0001uK-Mc; Fri, 02 Oct 2020 07:25:30 +0000
Received: by outflank-mailman (input) for mailman id 1712;
 Fri, 02 Oct 2020 07:25:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOFRZ-0001uF-CC
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 07:25:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c24dd0a1-52e4-4af4-b6a6-feed95c5e50e;
 Fri, 02 Oct 2020 07:25:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2867ADA2;
 Fri,  2 Oct 2020 07:25:26 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kOFRZ-0001uF-CC
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 07:25:29 +0000
X-Inumbo-ID: c24dd0a1-52e4-4af4-b6a6-feed95c5e50e
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c24dd0a1-52e4-4af4-b6a6-feed95c5e50e;
	Fri, 02 Oct 2020 07:25:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601623527;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=maRMcan7GhT9AQxV5N7Yinx1MZOMibWup0ycBs8n9D8=;
	b=FSXnv1iJcFg/nnkHIdmUr9m2yYYSb4TVSobbIAIEfGIdx6Mpal9AZReoddGcitDIvrjF3O
	+i/84LASJf/VyqPXFHQBvOkn8tmnCF27o9JaabCj2ZIiQC+zbLU38HjfpHzLNXrD9SQPgd
	6t1rqRopFKiH/8Akdfk9qRGs4/WBbQw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E2867ADA2;
	Fri,  2 Oct 2020 07:25:26 +0000 (UTC)
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
To: Jan Beulich <jbeulich@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
 <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
 <8ddad01e-cf1a-7752-1371-a505fb26dc47@suse.com>
 <90a39759-63c1-28b9-f112-d8b3cc083565@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <558774ab-92cb-90ae-3936-4f9cc9d56fd0@suse.com>
Date: Fri, 2 Oct 2020 09:25:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <90a39759-63c1-28b9-f112-d8b3cc083565@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.20 08:59, Jan Beulich wrote:
> On 02.10.2020 08:51, Jürgen Groß wrote:
>> On 02.10.20 08:20, Jan Beulich wrote:
>>> On 02.10.2020 06:50, Jürgen Groß wrote:
>>>> On 01.10.20 18:38, Bertrand Marquis wrote:
>>>>> Hi Juergen,
>>>>>
>>>>>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>>>>
>>>>>>> Making getBridge() static triggered a build error with some gcc versions:
>>>>>>>
>>>>>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>>>>>> length 255 [-Werror=stringop-truncation]
>>>>>>>
>>>>>>> Fix that by using a buffer with 256 bytes instead.
>>>>>>>
>>>>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>
>>>>> Sorry I have to come back on this one.
>>>>>
>>>>> I still see an error compiling with Yocto on this one:
>>>>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>>>>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
>>>>> |    81 |      strncpy(result, de->d_name, resultLen);
>>>>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>
>>>>> To solve it, I need to define devBridge[257] as devNoBridge.
>>>>
>>>> IMHO this is a real compiler error.
>>>>
>>>> de->d_name is an array of 256 bytes, so doing strncpy() from that to
>>>> another array of 256 bytes with a length of 256 won't truncate anything.
>>>
>>> That's a matter of how you look at it, I think: If the original array
>>> doesn't hold a nul-terminated string, the destination array won't
>>> either, yet the common goal of strncpy() is to yield a properly
>>> nul-terminated string. IOW the warning may be warranted, since the
>>> standard even has a specific footnote pointing out this pitfall.
>>
>> If the source doesn't hold a nul-terminated string there will still be
>> 256 bytes copied, so there is no truncation done during strncpy().
>>
>> In fact there is no way to use strncpy() in a safe way on a fixed sized
>> source array with the above semantics: either the target is larger than
>> the source and length is at least sizeof(source) + 1, resulting in a
>> possible read beyond the end of source, or the target is the same length
>> leading to the error.
> 
> I agree with all of what you say, but I can also see why said footnote
> alone may have motivated the emission of the warning.

The motivation can be explained, yes, but it is wrong. strncpy() is not
limited to source arrays of unknown length. So this warning makes
strncpy() unusable for fixed-size source strings when building with
-Werror, and that is nothing a compiler should be allowed to do, hence a
compiler bug.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 07:51:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 07:51:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1716.5222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOFr4-0004RW-1T; Fri, 02 Oct 2020 07:51:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1716.5222; Fri, 02 Oct 2020 07:51:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOFr3-0004RP-UD; Fri, 02 Oct 2020 07:51:49 +0000
Received: by outflank-mailman (input) for mailman id 1716;
 Fri, 02 Oct 2020 07:51:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOFr2-0004RK-O1
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 07:51:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f758e9c4-58ec-42d6-9a49-d1918a675825;
 Fri, 02 Oct 2020 07:51:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOFqz-0000R8-V9; Fri, 02 Oct 2020 07:51:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOFqz-0006Mg-OO; Fri, 02 Oct 2020 07:51:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOFqz-0006mg-Nu; Fri, 02 Oct 2020 07:51:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOFr2-0004RK-O1
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 07:51:48 +0000
X-Inumbo-ID: f758e9c4-58ec-42d6-9a49-d1918a675825
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f758e9c4-58ec-42d6-9a49-d1918a675825;
	Fri, 02 Oct 2020 07:51:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=yVOXgfQiPkTjcqX0202x8ue26jBgTY+17IWpX6HcTv0=; b=xKHlUmGR5xo5xA9qqGnhDvxiYl
	1Iv9CjPGbiqKcThY3TRw6mixkYCMZ6IYP5tQEsPvKoZp8eBxwhyJ/oy4D3C1P2rGmR4zxhs+pE9Oi
	NakVKXwsiJYf48HukjZHVo5vK+UOsEBzfPeB35sEAVT4rrx8PRbxTBM/8L+kTYMZJ87w=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOFqz-0000R8-V9; Fri, 02 Oct 2020 07:51:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOFqz-0006Mg-OO; Fri, 02 Oct 2020 07:51:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOFqz-0006mg-Nu; Fri, 02 Oct 2020 07:51:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.13-testing bisection] complete test-amd64-amd64-xl-xsm
Message-Id: <E1kOFqz-0006mg-Nu@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 07:51:45 +0000

branch xen-4.13-testing
xenbranch xen-4.13-testing
job test-amd64-amd64-xl-xsm
testid guest-start

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  21054297bf832d8eacd73dc428f55168522b0d86
  Bug not present: a8122e991da70ac1ee9f88e34e003d2169a5b114
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155306/


  commit 21054297bf832d8eacd73dc428f55168522b0d86
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:26:01 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-xsm.guest-start --summary-out=tmp/155306.bisection-summary --basis-template=154358 --blessings=real,real-bisect xen-4.13-testing test-amd64-amd64-xl-xsm guest-start
Searching for failure / basis pass:
 155132 fail [host=albana1] / 154602 [host=rimava1] 154358 [host=godello1] 152528 ok.
Failure / basis pass flights: 155132 / 152528
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8834e10b30125daa47da9f6c5c1a41b4eafbae7f d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed d9c812dda519a1a73e8370e1b81ddf46eb22ed16 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#8834e10b30125daa47da9f6c5c1a41b4eafbae7f-2793a49565488e419d10ba029c838f4b7efdba38 git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484\
 fe09f50876798-d0d8ad39ecb51cd7497cd524484fe09f50876798 git://xenbits.xen.org/qemu-xen.git#730e2b1927e7d911bbd5350714054ddd5912f4ed-730e2b1927e7d911bbd5350714054ddd5912f4ed git://xenbits.xen.org/osstest/seabios.git#d9c812dda519a1a73e8370e1b81ddf46eb22ed16-41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 git://xenbits.xen.org/xen.git#9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f-88f5b414ac0f8008c1e2b26f93c3d980120941f7
Loaded 12583 nodes in revision graph
Searching for test results:
 152528 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8834e10b30125daa47da9f6c5c1a41b4eafbae7f d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed d9c812dda519a1a73e8370e1b81ddf46eb22ed16 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f
 154358 [host=godello1]
 154602 [host=rimava1]
 154625 fail irrelevant
 154667 fail irrelevant
 155015 []
 155062 fail irrelevant
 155133 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8834e10b30125daa47da9f6c5c1a41b4eafbae7f d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed d9c812dda519a1a73e8370e1b81ddf46eb22ed16 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f
 155218 fail irrelevant
 155224 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7f0b28415cb464832155d5b3ff6eb63612f58645 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 155821a1990b6de78dde5f98fa5ab90e802021e0 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f
 155229 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cdfc7ed34fd1ddfc9cb1dfbc339f940950638f8d d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 155821a1990b6de78dde5f98fa5ab90e802021e0 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f
 155242 pass irrelevant
 155247 pass irrelevant
 155132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 155256 pass irrelevant
 155261 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 155266 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 aa1d9a7dbfe07905f0b7218bcd433a513f762eb9
 155268 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114
 155272 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 155277 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 b015fbe509188dca47b6c7102a934a7b9ced2a9e
 155283 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 43572a4cd97902ba0155b922a4d2e99fb945ec2b
 155290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 21054297bf832d8eacd73dc428f55168522b0d86
 155294 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114
 155297 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 21054297bf832d8eacd73dc428f55168522b0d86
 155302 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114
 155306 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 21054297bf832d8eacd73dc428f55168522b0d86
Searching for interesting versions
 Result found: flight 152528 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114, results HASH(0x56474c5b0d78) HASH(0x56474c5a9b38) HASH(0x56474c5b1e80) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 aa1d9a7dbfe07905f0b7218bcd433a513f762eb9, results HASH(0x56474c5aed70) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cdfc7ed34fd1ddfc9cb1dfbc339f940950638f8d d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd53507\
 14054ddd5912f4ed 155821a1990b6de78dde5f98fa5ab90e802021e0 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f, results HASH(0x56474c5a5680) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7f0b28415cb464832155d5b3ff6eb63612f58645 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 155821a1990b6de78dde5f98fa5ab90e802021e0 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f, results HASH(0x56474c5a1670) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8834e10b30125daa47da9f6c5c1a41b4eafbae7f d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed d9c812dda519a1a73e8370e1b81ddf46eb22ed16 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f, results HASH(0x56474c58f7b8) HASH(0x56474c589478) Result found: flight 155132 (fail), for basis failure (at ancestor ~1648)
 Repro found: flight 155133 (pass), for basis pass
 Repro found: flight 155261 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114
No revisions left to test, checking graph state.
 Result found: flight 155268 (pass), for last pass
 Result found: flight 155290 (fail), for first failure
 Repro found: flight 155294 (pass), for last pass
 Repro found: flight 155297 (fail), for first failure
 Repro found: flight 155302 (pass), for last pass
 Repro found: flight 155306 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  21054297bf832d8eacd73dc428f55168522b0d86
  Bug not present: a8122e991da70ac1ee9f88e34e003d2169a5b114
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155306/


  commit 21054297bf832d8eacd73dc428f55168522b0d86
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:26:01 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 204 colors found
Revision graph left in /home/logs/results/bisect/xen-4.13-testing/test-amd64-amd64-xl-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155306: tolerable ALL FAIL

flight 155306 xen-4.13-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155306/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-xsm      12 guest-start             fail baseline untested


jobs:
 test-amd64-amd64-xl-xsm                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 07:55:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 07:55:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1722.5240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOFv1-0004f7-Oe; Fri, 02 Oct 2020 07:55:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1722.5240; Fri, 02 Oct 2020 07:55:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOFv1-0004f0-Lf; Fri, 02 Oct 2020 07:55:55 +0000
Received: by outflank-mailman (input) for mailman id 1722;
 Fri, 02 Oct 2020 07:55:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jYI3=DJ=nxp.com=laurentiu.tudor@srs-us1.protection.inumbo.net>)
 id 1kOFv0-0004ev-Nx
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 07:55:54 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.3.61]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 252cffd0-f0a6-4c87-b3be-7607a2e72c90;
 Fri, 02 Oct 2020 07:55:53 +0000 (UTC)
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com (2603:10a6:803:3::26)
 by VI1PR0402MB3421.eurprd04.prod.outlook.com (2603:10a6:803:5::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3412.24; Fri, 2 Oct
 2020 07:55:51 +0000
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b]) by VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b%7]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 07:55:51 +0000
Received: from [192.168.1.106] (86.123.62.1) by
 VI1PR0501CA0034.eurprd05.prod.outlook.com (2603:10a6:800:60::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34 via Frontend
 Transport; Fri, 2 Oct 2020 07:55:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=jYI3=DJ=nxp.com=laurentiu.tudor@srs-us1.protection.inumbo.net>)
	id 1kOFv0-0004ev-Nx
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 07:55:54 +0000
X-Inumbo-ID: 252cffd0-f0a6-4c87-b3be-7607a2e72c90
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown [40.107.3.61])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 252cffd0-f0a6-4c87-b3be-7607a2e72c90;
	Fri, 02 Oct 2020 07:55:53 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VCbdYdGyjJJ6KAl/Mn4RdmoMrTAL9GMr5/DQeqwzWiuDKHPF7VlMgT6Gjw0QVF+sdwY+PtWcrwWALrZ7v8PKEeIF9zI1y9WUxm5AattcIq1POqoVEAeRAhbiQyaznqPhOWjett9Q9zrEEWvJ/zY4fUoYZR1XCKkJAB7pnyEdjqQRUtEkQrK0nPPP11qmuP4u0tVhC/ryK1BowWN0NSEWF/7MkkJNvMKeCKzcCW2KHBV+isUUkEFTFyxJ94e+VsdRPizkXPVaEtTm1pqHjzc6mUWZnL05POhdgqKYMlAgQDjlRK8fbRaYS8Vjm9ONLv1PyHnqVuahS4PWclGn1mzr9A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=il+wQHf/vOJMi4/A137ryVjrMt4OuqFJ9IvT4CoF0SQ=;
 b=l4qWWZD+ietJLg9yn6MDerCyooUz2alyZHN570fdMkFyOLO83OMCUDBjQ6WcgUk7qUcYmeXy0TWN96JEc6Mg9ql5AbyzCX9SfJhKONuV2kPN8QaYL/FLZ11+ANfZib9qrg5g/a0pP2d+UhlAkX1z/l1s9g0NPyvltmdMUhO4PX/WKg2fdXC/nvNlc5eJ+LWGvrX9YluNm4xYhaQ8u5KHVu+eTUt2qHJcVQiiBIjzkWcfn5y5OloMEcnBQ9b8iAE/EeoBLpVri7/9IsKZhJQhhuflFFnBt8neYZL74MMX5HC3Ua8RwRvguGPaKkaEjC6F6tzqDpfmmNH81Q0XrLoVJA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=il+wQHf/vOJMi4/A137ryVjrMt4OuqFJ9IvT4CoF0SQ=;
 b=UhpO5JUryMzhvxLDe8zurPbuS74wVQn2UOxnrU5OS4+y1SpO1YwyFi2K3M2+3kzopYwit6MXh/Scp0Rr7NbyNlcoHM8aN86vtnSsqEWvCMfcqZY0KKjtsm9wfCLsBBkEQkdCMP7oOBrP8g99sYiG/spDgb8f+PFF3XcVlSOd89U=
Authentication-Results: nxp.com; dkim=none (message not signed)
 header.d=none;nxp.com; dmarc=none action=none header.from=nxp.com;
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com (2603:10a6:803:3::26)
 by VI1PR0402MB3421.eurprd04.prod.outlook.com (2603:10a6:803:5::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3412.24; Fri, 2 Oct
 2020 07:55:51 +0000
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b]) by VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b%7]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 07:55:51 +0000
Subject: Re: [PATCH] arm,smmu: match start level of page table walk with P2M
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: julien@xen.org, xen-devel@lists.xenproject.org,
 Volodymyr_Babchuk@epam.com, will@kernel.org, diana.craciun@nxp.com,
 anda-alexandra.dorneanu@nxp.com
References: <20200928135157.3170-1-laurentiu.tudor@nxp.com>
 <alpine.DEB.2.21.2010011647020.10908@sstabellini-ThinkPad-T480s>
From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Message-ID: <e591b03a-f05a-5e01-2fd6-f14f3c8e1039@nxp.com>
Date: Fri, 2 Oct 2020 10:55:48 +0300
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
In-Reply-To: <alpine.DEB.2.21.2010011647020.10908@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: VI1PR0501CA0034.eurprd05.prod.outlook.com
 (2603:10a6:800:60::20) To VI1PR0402MB3405.eurprd04.prod.outlook.com
 (2603:10a6:803:3::26)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.1.106] (86.123.62.1) by VI1PR0501CA0034.eurprd05.prod.outlook.com (2603:10a6:800:60::20) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 07:55:50 +0000
X-Originating-IP: [86.123.62.1]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 5aff353b-6b87-42ff-21f2-08d866a894e9
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3421:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3421C714FA542D73A787327AEC310@VI1PR0402MB3421.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gc7C0cczhflJaxwLypKfQ7bwEl27dMFahH2OvxB1IriI5Zfiw8C5KM0rF5ZqwYmCFPOEZqaTVB2hj0P4rWNXIGscaShDXgN+auRkXwze8LGaT77dO/lwdts5q83Uepo6h9O5opz22ns7oykVR1zofhCdc040q/YuYpKk+f//gDzTpxoWHxMbHQqt9W3Zo8S4iU0u3m35y8/QbZrBe1Ka67hRveLJ5aA1Rd0W/7jLFt4PgXDO5SrfWR6gwxgGkNcph3zD80pnKTMtEHFTOfxjE4llLs4iL758grkaDoAkoNNYUclC2DirptbbiWzQL1PAHjqKKanNgQUcmxdkKQ4PpfFKcVmltivXXosym9CWHOsWuqPJrW4lDOishEtqEWmLJP4xsGHWj3+67496cDHQqQTnSRn9hpsF5DNt69VBiV2cI8H3O6OyPfKJJM+ti7fy
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR0402MB3405.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(396003)(376002)(346002)(136003)(316002)(478600001)(6486002)(86362001)(16576012)(31696002)(2906002)(36756003)(66946007)(186003)(83380400001)(8936002)(5660300002)(16526019)(4326008)(31686004)(66476007)(66556008)(8676002)(26005)(52116002)(2616005)(956004)(44832011)(53546011)(6916009)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	X/BaLvPyokLZZXmeCBRzMKpYIBCVKnb+LxY7mMMBpr2YvSGVG/Z88K0t7eXg+Kh4LL5Nw/+on0c1SzDcEOTT3J8tljDxWNJnrfL7g9hY15gIqS9/gcNftMw5lRMiUlS/466p2J8BqEozBPdmS2Xuqx7noFoTI3vm+RnfBc71JMYxMHJU+Ug2140HmIIsfqsUM/mrIo+QklGrLQ/MQrtgAQhIxNGu6CLxeqCLJK2HqJH9qPsz+2Avk2KpLtSp0amEZqceBJAOUxVYnYK45Ni0hMXoH9ZhmAOu36WVcyjqGrzXjYE7kFDKKATUlj3AS+Cyqwb7pkGlWEjv8pPFTVR/B5z6YokDU0l81zBP8ZCZ6B/ErsX7yGoBHIZgiVeIjAv5/8+KQ+xK458/BmKVJQvbEpNEF1KKeMipsHof6/2JxkOwQruzwwuEs/Ll4BDGhiMSCVJinOMusNP8ohbcSqRFcuIce2M7ip21iYaEI71oQvxexLXDPmiUB6tk9SMvCjuSlOuU5Nww2mtVs/eOHeKQf5r+NTST5G/X7eymYGnZR8BTwwofiePRzi4M8QU/LaxNpkqLFmEbNQdJLGqC1OG3T2fa/mxTdv5pztO5mz14f4rkq4r8/7/Pn+hItlyEIPqveEnzoSbRE33mQFmMKT4/4A==
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5aff353b-6b87-42ff-21f2-08d866a894e9
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0402MB3405.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 07:55:51.3899
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fkVkHkgPFpqgHui763mdefcRLpfoIgtu6ASe3G+EyOJp4Foj1UhBSdsa5T++VIBP68qB21NWuU/H+KrAuc5pBw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3421



On 10/2/2020 2:52 AM, Stefano Stabellini wrote:
> On Mon, 28 Sep 2020, laurentiu.tudor@nxp.com wrote:
>> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>
>> Don't hardcode the lookup start level of the page table walk to 1
>> and instead match the one used in P2M. This should fix scenarios
>> involving SMMU where the start level is different from 1.
>>
>> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Thank you for the patch, I think it is correct, except that smmu.c today
> can be enabled even on arm32 builds, where p2m_root_level would be
> uninitialized.
> 
> We need to initialize p2m_root_level at the beginning of
> setup_virt_paging under the #ifdef CONFIG_ARM_32. We can statically
> initialize it to 1 in that case. Or...
> 
> 
>> ---
>>  xen/arch/arm/p2m.c                 | 2 +-
>>  xen/drivers/passthrough/arm/smmu.c | 2 +-
>>  xen/include/asm-arm/p2m.h          | 1 +
>>  3 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index ce59f2b503..0181b09dc0 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -18,7 +18,6 @@
>>  
>>  #ifdef CONFIG_ARM_64
>>  static unsigned int __read_mostly p2m_root_order;
>> -static unsigned int __read_mostly p2m_root_level;
>>  #define P2M_ROOT_ORDER    p2m_root_order
>>  #define P2M_ROOT_LEVEL p2m_root_level
>>  static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>> @@ -39,6 +38,7 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>   * restricted by external entity (e.g. IOMMU).
>>   */
>>  unsigned int __read_mostly p2m_ipa_bits = 64;
>> +unsigned int __read_mostly p2m_root_level;
> 
> ... we could p2m_root_level = 1; here
> 

This looks straightforward and in line with what we do with
p2m_ipa_bits. I'll send a v2 right away.

Thanks for the review.

---
Best Regards, Laurentiu

>>  /* Helpers to lookup the properties of each level */
>>  static const paddr_t level_masks[] =
>> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
>> index 94662a8501..85709a136f 100644
>> --- a/xen/drivers/passthrough/arm/smmu.c
>> +++ b/xen/drivers/passthrough/arm/smmu.c
>> @@ -1152,7 +1152,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>>  	      (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
>>  
>>  	if (!stage1)
>> -		reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
>> +		reg |= (2 - p2m_root_level) << TTBCR_SL0_SHIFT;
>>  
>>  	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
>>  
>> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
>> index 5fdb6e8183..97b5eada2b 100644
>> --- a/xen/include/asm-arm/p2m.h
>> +++ b/xen/include/asm-arm/p2m.h
>> @@ -12,6 +12,7 @@
>>  
>>  /* Holds the bit size of IPAs in p2m tables.  */
>>  extern unsigned int p2m_ipa_bits;
>> +extern unsigned int p2m_root_level;
>>  
>>  struct domain;
>>  
>> -- 
>> 2.17.1
>>
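
The quoted change computes TTBCR.SL0 from the P2M root level instead of
hardcoding level 1. A minimal, self-contained sketch of that encoding
(the TTBCR_SL0_SHIFT value and helper name here are illustrative; the
real constants live in xen/drivers/passthrough/arm/smmu.c):

```c
#include <stdint.h>

/* Illustrative value; the real definition is in smmu.c. */
#define TTBCR_SL0_SHIFT 6U

/*
 * For a 4KB stage-2 granule the architectural SL0 encoding is
 * "2 - start level": SL0=2 starts the walk at level 0, SL0=1 at
 * level 1 (the old hardcoded TTBCR_SL0_LVL_1 case), SL0=0 at level 2.
 */
static uint32_t ttbcr_sl0(unsigned int p2m_root_level)
{
    return (2U - p2m_root_level) << TTBCR_SL0_SHIFT;
}
```

With p2m_root_level == 1 this reproduces the old behaviour exactly,
which is why the patch is a no-op on configurations that start at
level 1.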


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 07:59:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 07:59:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1724.5251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOFyS-0004qa-8X; Fri, 02 Oct 2020 07:59:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1724.5251; Fri, 02 Oct 2020 07:59:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOFyS-0004qT-5D; Fri, 02 Oct 2020 07:59:28 +0000
Received: by outflank-mailman (input) for mailman id 1724;
 Fri, 02 Oct 2020 07:59:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOFyR-0004qO-2Q
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 07:59:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57fd3a1d-93fc-4249-b490-c2f40b35cf3e;
 Fri, 02 Oct 2020 07:59:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOFyN-0000br-Vu; Fri, 02 Oct 2020 07:59:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOFyN-0006eR-Nw; Fri, 02 Oct 2020 07:59:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOFyN-00025I-NU; Fri, 02 Oct 2020 07:59:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOFyR-0004qO-2Q
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 07:59:27 +0000
X-Inumbo-ID: 57fd3a1d-93fc-4249-b490-c2f40b35cf3e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 57fd3a1d-93fc-4249-b490-c2f40b35cf3e;
	Fri, 02 Oct 2020 07:59:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nUVcFk3FN1RWQB+r+Bxxgxm9DGy+EdlSuHFXBEqQ5DY=; b=lSGFLv4SCTFO1fkNmO9MmDuySr
	Dnybg+GOFcHYn2Qg2sBbS6FgU4OeXC47EMkjSJzB6qCBmmgQBJQoiZ5oFI8jw4Kxmre9iygZJgOLJ
	secwAKMO+JuvqS1JzkBsvaePrd2F6yIoqUNM5slFzmC3xLMY9B8485ZhtpYZEDkUqhcQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOFyN-0000br-Vu; Fri, 02 Oct 2020 07:59:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOFyN-0006eR-Nw; Fri, 02 Oct 2020 07:59:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOFyN-00025I-NU; Fri, 02 Oct 2020 07:59:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155295-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155295: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 07:59:23 +0000

flight 155295 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155295/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    1 days
Failing since        155144  2020-09-30 16:01:24 Z    1 days   12 attempts
Testing same since   155246  2020-10-01 15:01:28 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Juergen Gross <jgross@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/lixenguest: hide struct elf_dom_parms layout from users
    
    Don't include struct elf_dom_parms in struct xc_dom_image, but rather
    use a pointer to reference it. Together with adding accessor functions
    for the externally needed elements, this makes it possible to stop
    including the Xen private header xen/libelf/libelf.h from xenguest.h.
    
    Fixes: 7e0165c19387 ("tools/libxc: untangle libxenctrl from libxenguest")
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 7f186b1996dea2992c8ed3606b38d73222293c37
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libxenguest: make xc_dom_loader interface private to libxenguest
    
    The pluggable kernel loader interface is only used internally by
    libxenguest, so make it private. This removes a dependency on the Xen
    internal header xen/libelf/libelf.h from xenguest.h.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 77a09716f251ac0e35ddd4bfda8f35fe639f432b
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Oct 1 12:57:43 2020 +0200

    tools/libs: merge xenctrl_dom.h into xenguest.h
    
    Today xenctrl_dom.h is part of libxenctrl as it is included by
    xc_private.c. This seems not to be needed, so merge xenctrl_dom.h into
    xenguest.h where its contents really should be.
    
    Replace all #includes of xenctrl_dom.h by xenguest.h ones or drop them
    if xenguest.h is already included.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 3ae0d316f01c08903a96f6b5b39275c67b823264
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: move libxlutil to tools/libs/util
    
    Move the libxlutil source to tools/libs/util and delete tools/libxl.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit b22b9b9a1df865e1dd9e4f6950ae6be7081be010
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libs: add option for library names not starting with libxen
    
    libxlutil doesn't follow the standard name pattern of all other Xen
    libraries, so add another make variable which can be used to allow
    other names.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit bc01c73018689e066e06515b26181d463a3f2a40
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools: rename global libxlutil make variables
    
    Rename *_libxlutil make variables to *_libxenutil in order to avoid
    nasty indirections when moving libxlutil under the tools/libs
    infrastructure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 41aea82de2b581c61482aeddab151ecf3b1bca25
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 23 06:57:20 2020 +0200

    tools/libxl: move libxenlight to tools/libs/light
    
    Carve out all libxenlight related sources and move them to
    tools/libs/light in order to use the generic library build environment.
    
    The closely related sources for libxl-save-helper and the libxl test
    environment are being moved, too.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Sep 21 13:17:30 2020 +0100

    x86: Use LOCK ADD instead of MFENCE for smp_mb()
    
    MFENCE is overly heavyweight for SMP semantics on WB memory, because it also
    orders weaker cached writes, and flushes the WC buffers.
    
    This technique was used as an optimisation in Java[1], and later adopted by
    Linux[2] where it was measured to have a 60% performance improvement in VirtIO
    benchmarks.
    
    The stack is used because it is hot in the L1 cache, and a -4 offset is used
    to avoid creating a false data dependency on live data.
    
    For 64bit userspace, the Red Zone needs to be considered.  Use -32 to allow
    for a reasonable quantity of Red Zone data, but still have a 50% chance of
    hitting the same cache line as %rsp.
    
    Fix up the 32 bit definitions in HVMLoader and libxc to avoid a false data
    dependency.
    
    [1] https://shipilev.net/blog/2014/on-the-fence-with-dependencies/
    [2] https://git.kernel.org/torvalds/c/450cbdd0125cfa5d7bbf9e2a6b6961cc48d29730
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 707eb41ae2dde4636261f631224c97e9c0b16b56
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:07 2020 +0100

    xl: implement documented '--force' option for block-detach
    
    The manpage for 'xl' documents an option to force a block device to be
    released even if the domain to which it is attached does not co-operate.
    The documentation also states that, if the force flag is not specified, the
    block-detach operation should fail.
    
    Currently the force option is not implemented and a non-forced block-detach
    will auto-force after a time-out of 10s. This patch implements the force
    option and also stops auto-forcing a non-forced block-detach by calling
    libxl_device_disk_safe_remove() rather than libxl_device_disk_remove(),
    allowing the operation to fail cleanly as per the documented behaviour.
    
    NOTE: The documentation is also adjusted since the normal positioning of
          options is before compulsory parameters. It is also noted that use of
          the --force option may lead to a guest crash.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 6df07f9fbe1e9b65a40183f79a6171200dc877dd
Author: Paul Durrant <pdurrant@amazon.com>
Date:   Tue Sep 15 15:10:06 2020 +0100

    libxl: provide a mechanism to define a device 'safe remove' function...
    
    ... and use it to define libxl_device_disk_safe_remove().
    
    This patch builds on the existing macro magic by using a new value of the
    'force' field in libxl__ao_device.
    It is currently defined as an int but is used in a boolean manner where
    1 means the operation is forced and 0 means it is not (but is actually forced
    after a 10s time-out). In adding a third value, this patch re-defines 'force'
    as a struct type (libxl__force) with a single 'flag' field taking an
    enumerated value:
    
    LIBXL__FORCE_AUTO - corresponding to the old 0 value
    LIBXL__FORCE_ON   - corresponding to the old 1 value
    LIBXL__FORCE_OFF  - the new value
    
    The LIBXL_DEFINE_DEVICE_REMOVE() macro is then modified to define the
    libxl_device_<type>_remove() and libxl_device_<type>_destroy() functions,
    setting LIBXL__FORCE_AUTO and LIBXL__FORCE_ON (respectively) in the
    libxl__ao_device passed to libxl__initiate_device_generic_remove() and a
    new macro, LIBXL_DEFINE_DEVICE_SAFE_REMOVE(), is defined that sets
    LIBXL__FORCE_OFF instead. This macro is used to define the new
    libxl_device_disk_safe_remove() function.
    
    Signed-off-by: Paul Durrant <pdurrant@amazon.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Wei Liu <wl@xen.org>
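
The tri-state 'force' described above can be sketched as follows (the
names come from the commit message; the actual definitions and field
layout in libxl_internal.h may differ):

```c
/* Hedged sketch of the new three-valued 'force' type; illustrative only. */
typedef enum {
    LIBXL__FORCE_AUTO, /* old 0: not forced, but auto-forced after 10s */
    LIBXL__FORCE_ON,   /* old 1: forced removal */
    LIBXL__FORCE_OFF,  /* new: fail cleanly instead of ever forcing */
} libxl__force_flag;

typedef struct libxl__force {
    libxl__force_flag flag;
} libxl__force;
```

Wrapping the flag in a struct (rather than keeping a bare int) means
any code still treating 'force' as a boolean fails to compile, which
surfaces every call site that needs auditing for the new third value.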

commit 11852c7bb070a18c3708b4c001772a23e7d4fc27
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Sep 24 16:36:48 2020 +0200

    tools/xenstore: set maximum number of grants needed
    
    When running as a stubdom, Xenstore should set the maximum number of
    grants needed via a call to xengnttab_set_max_grants(), as otherwise
    the number of domains which can be supported will be limited to 128
    (the default number of grants supported by Mini-OS).
    
    We use one grant per domain so the theoretical maximum number is
    DOMID_FIRST_RESERVED.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
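
The sizing argument above is: one grant per connected domain, bounded
by the first reserved domain ID. A quick check of those numbers
(DOMID_FIRST_RESERVED is 0x7ff0 in xen/include/public/xen.h; the
Mini-OS default of 128 is taken from the commit message):

```c
#include <stdint.h>

/* Domain IDs below this value are ordinary domains, so it bounds the
 * number of grants Xenstore could ever need (one per domain). */
#define DOMID_FIRST_RESERVED 0x7ff0U

/* Mini-OS's default grant-table limit, which the commit raises. */
#define MINIOS_DEFAULT_GRANTS 128U
```

So without the xengnttab_set_max_grants() call, fewer than 1% of the
theoretically addressable domains could be served.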

commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Sep 29 14:48:52 2020 +0100

    tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
    
    Nested Virt is the final special case in legacy CPUID handling.  Pass the
    (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
    the semantic dependency on HVM_PARAM_NESTEDHVM.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 50a5215f30e964a6f16165ab57925ca39f31a849
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu Sep 24 20:08:43 2020 +0200

    libxc/bitops: increase potential size of bitmaps
    
    If the bitmap is used to represent domU pages, the amount of memory is
    limited to 8TB due to the 32bit value. Adjust the code to use 64bit
    values as input. All callers already use some form of 64bit as input,
    so no further adjustment is required.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
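
The 8TB figure in the commit message follows from the old 32-bit bit
index: one bit per 4KiB page and at most 2^31 indexable bits gives
2^31 x 4KiB = 8TiB of guest memory. A quick check of that arithmetic
(illustrative only; not code from libxc):

```c
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* Largest guest size describable when the bitmap's bit index is a
 * signed 32-bit int: one bit per page, at most 2^31 pages. */
static uint64_t old_bitmap_limit_bytes(void)
{
    return (1ULL << 31) * PAGE_SIZE; /* 8 TiB */
}
```

Widening the index to 64 bits removes this ceiling entirely, which is
why no caller needed further adjustment: they already passed 64-bit
values in.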

commit 27de84d3ae462bd8311c8267c642ec95afdcf47c
Author: Anthony PERARD <anthony.perard@citrix.com>
Date:   Wed Sep 23 12:03:23 2020 +0100

    tools: Fix configure of upstream QEMU
    
    QEMU has recently switched its build system to use meson, and the
    ./configure step with meson is more restrictive than it used to
    be: most installation paths must be within the prefix, otherwise we
    get this error message:
    
        ERROR: The value of the 'datadir' option is '/usr/share/qemu-xen' which must be a subdir of the prefix '/usr/lib/xen'.
    
    In order to work around the limitation, we set the prefix to the same
    one as for the rest of the Xen installation, and set all the other
    paths explicitly.
    
    For reference, a thread in qemu-devel:
        "configure with datadir outside of --prefix fails with meson"
        https://lore.kernel.org/qemu-devel/20200918133012.GH2024@perard.uk.xensource.com/t/
    
    And an issue in meson:
        "artificial limitation of directories (forced to be in prefix)"
        https://github.com/mesonbuild/meson/issues/2561
    
    Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
    Tested-by: Paul Durrant <paul@xen.org>
    Acked-by: Wei Liu <wl@xen.org>

commit 0d8d289af7a679c028462c4ed5d98586f9ef9648
Author: Olaf Hering <olaf@aepfle.de>
Date:   Wed Sep 23 08:48:40 2020 +0200

    tools/libxc: report malloc errors in writev_exact
    
    The caller of writev_exact should be notified about malloc errors
    when dealing with partial writes.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 08:15:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 08:15:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1734.5268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGDc-000777-TD; Fri, 02 Oct 2020 08:15:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1734.5268; Fri, 02 Oct 2020 08:15:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGDc-000770-Py; Fri, 02 Oct 2020 08:15:08 +0000
Received: by outflank-mailman (input) for mailman id 1734;
 Fri, 02 Oct 2020 08:15:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOGDb-00076v-He
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 08:15:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 184eced0-cefb-436b-857f-0d9b11c7d260;
 Fri, 02 Oct 2020 08:15:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E73C2AE37;
 Fri,  2 Oct 2020 08:15:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601626506;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GcYjzA6eXVsPOzZWJaiz0ZrJlZpu9hPVM/+LbzGJt4w=;
	b=OWxoMHGdMH11+4eHbwo1SBp89KT7VmTdzr+U0RhMSHjuRUaQB4Hs1RWci2syUbxGOQsmdO
	shfrwNhyl8JjqlAY8vNS/+AcQAsFFZ96vCbp8kfbqCITth+F+AN+E1PyArETwrIsKhTTW7
	RaeAwmorCx2vmH8WgLoTfUFURXG4sTo=
Subject: Re: [PATCH v8 1/5] efi/boot.c: add file.need_to_free
To: Trammell Hudson <hudson@trmm.net>
Cc: xen-devel@lists.xenproject.org, roger.pau@citrix.com,
 andrew.cooper3@citrix.com, wl@xen.org
References: <20200930120011.1622924-1-hudson@trmm.net>
 <20200930120011.1622924-2-hudson@trmm.net>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <76a21a35-0469-6d44-564a-c0fed9bf9bfd@suse.com>
Date: Fri, 2 Oct 2020 10:15:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930120011.1622924-2-hudson@trmm.net>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 14:00, Trammell Hudson wrote:
> The config file, kernel, initrd, etc should only be freed if they
> are allocated with the UEFI allocator.  On x86 the ucode, and on
> ARM the dtb, are also marked as need_to_free when allocated or
> expanded.
> 
> This also fixes a memory leak in ARM fdt_increase_size() if there
> is an error in building the new device tree.
> 
> Signed-off-by: Trammell Hudson <hudson@trmm.net>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
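
[Archive note: the need_to_free pattern described in the commit message can be
sketched in isolation roughly as below. This is a hypothetical stand-alone
illustration, not the actual xen/common/efi/boot.c code; malloc/free stand in
for the UEFI boot-services allocator, and all names are invented.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

struct file {
    void *ptr;
    size_t size;
    bool need_to_free;   /* true only if ptr came from the allocator */
};

/* A builtin (embedded) image: must never be handed back to the allocator. */
static void load_builtin(struct file *f, void *data, size_t size)
{
    f->ptr = data;
    f->size = size;
    f->need_to_free = false;
}

/* A dynamically allocated buffer: marked so release() will free it. */
static bool load_allocated(struct file *f, size_t size)
{
    f->ptr = malloc(size);   /* stand-in for the UEFI pool allocator */
    if ( !f->ptr )
        return false;
    f->size = size;
    f->need_to_free = true;
    return true;
}

/* Free only what was actually allocated; builtin data is left alone. */
static void release(struct file *f)
{
    if ( f->need_to_free )
        free(f->ptr);
    f->ptr = NULL;
    f->need_to_free = false;
}
```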



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 08:18:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 08:18:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1737.5280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGGi-0007HG-GZ; Fri, 02 Oct 2020 08:18:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1737.5280; Fri, 02 Oct 2020 08:18:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGGi-0007H9-Cr; Fri, 02 Oct 2020 08:18:20 +0000
Received: by outflank-mailman (input) for mailman id 1737;
 Fri, 02 Oct 2020 08:18:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=r7zU=DJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kOGGh-0007H3-34
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 08:18:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 725f5fcb-cdd3-42de-b267-2f279bb9d6dc;
 Fri, 02 Oct 2020 08:18:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOGGe-0001Yu-5p; Fri, 02 Oct 2020 08:18:16 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOGGd-0002CI-Tw; Fri, 02 Oct 2020 08:18:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Vgixw/CeBnwxRzcUqOB3iBR+0hSx4I1dY4tAddWw+3A=; b=ksH7FnUKdAOJJ8UA5vc0NvWUYc
	gwQXCP9nqQskafeVdiw1jpBDkrfBx+xvYk+RpKpJ06omQV+3xilWsPwLEDbUIayhOjLarVSCNnnFO
	aS9o5QGABGyZptp8uGekzQ48x2HsHAZniSLrJEe+nlhZdKUopibA5g5OQ8EilfRJB+Sk=;
Subject: Re: [PATCH] arm,smmu: match start level of page table walk with P2M
To: Stefano Stabellini <sstabellini@kernel.org>,
 Laurentiu Tudor <laurentiu.tudor@nxp.com>
Cc: xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com,
 will@kernel.org, diana.craciun@nxp.com, anda-alexandra.dorneanu@nxp.com
References: <20200928135157.3170-1-laurentiu.tudor@nxp.com>
 <alpine.DEB.2.21.2010011647020.10908@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <41f7c87b-0db9-5366-b25f-775bf3d6e3ce@xen.org>
Date: Fri, 2 Oct 2020 09:18:14 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010011647020.10908@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 02/10/2020 00:52, Stefano Stabellini wrote:
> On Mon, 28 Sep 2020, laurentiu.tudor@nxp.com wrote:
>> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>
>> Don't hardcode the lookup start level of the page table walk to 1
>> and instead match the one used in P2M. This should fix scenarios
>> involving SMMU where the start level is different from 1.
>>
>> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Thank you for the patch, I think it is correct, except that smmu.c today
> can be enabled even on arm32 builds, where p2m_root_level would be
> uninitialized.
> 
> We need to initialize p2m_root_level at the beginning of
> setup_virt_paging under the #ifdef CONFIG_ARM_32. We can statically
> initialize it to 1 in that case. Or...
> 
> 
>> ---
>>   xen/arch/arm/p2m.c                 | 2 +-
>>   xen/drivers/passthrough/arm/smmu.c | 2 +-
>>   xen/include/asm-arm/p2m.h          | 1 +
>>   3 files changed, 3 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>> index ce59f2b503..0181b09dc0 100644
>> --- a/xen/arch/arm/p2m.c
>> +++ b/xen/arch/arm/p2m.c
>> @@ -18,7 +18,6 @@
>>   
>>   #ifdef CONFIG_ARM_64
>>   static unsigned int __read_mostly p2m_root_order;
>> -static unsigned int __read_mostly p2m_root_level;
>>   #define P2M_ROOT_ORDER    p2m_root_order
>>   #define P2M_ROOT_LEVEL p2m_root_level
>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>> @@ -39,6 +38,7 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>    * restricted by external entity (e.g. IOMMU).
>>    */
>>   unsigned int __read_mostly p2m_ipa_bits = 64;
>> +unsigned int __read_mostly p2m_root_level;
> 
> ... we could p2m_root_level = 1; here

IMHO, this is going to make the code quite confusing given that only the 
SMMU would use this variable for arm32.

The P2M root level also cannot be changed by the SMMU (at least for 
now). So I would suggest introducing a helper (maybe
p2m_get_root_level()) and using it in the SMMU code.

An alternative would be to move the definition of P2M_ROOT_{ORDER,
LEVEL} into p2m.h.
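
[Archive note: for illustration only, the helper option might look roughly
like the sketch below. This is not actual Xen code; the name and the arm32
initialisation to level 1 are assumptions drawn from the discussion above.]

```c
#include <assert.h>

/* Sketch: on arm64 this would be set by setup_virt_paging(); on arm32
 * the P2M root level is fixed, so it can simply start out as 1. */
static unsigned int p2m_root_level = 1;

/* Hypothetical accessor, so that e.g. the SMMU driver need not reach
 * into the variable (or the P2M_ROOT_LEVEL macro) directly. */
static unsigned int p2m_get_root_level(void)
{
    return p2m_root_level;
}
```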

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 08:27:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 08:27:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1741.5295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGPJ-0008EJ-C9; Fri, 02 Oct 2020 08:27:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1741.5295; Fri, 02 Oct 2020 08:27:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGPJ-0008EC-97; Fri, 02 Oct 2020 08:27:13 +0000
Received: by outflank-mailman (input) for mailman id 1741;
 Fri, 02 Oct 2020 08:27:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOGPI-0008E7-8L
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 08:27:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b8d7f47-8287-4095-8da1-d4b62ee01b8e;
 Fri, 02 Oct 2020 08:27:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 989DBAF1A;
 Fri,  2 Oct 2020 08:27:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601627229;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=unyV64ypPcLYZl3kVcunweCkRWB8mM6GQ6rMWm8K83U=;
	b=Jx7Eu+gIsHAfaWRO1KGz/Arq5zMAF7tLSrIGhleO24npNNIhRKZlpb7kLExzegxcsW627X
	cTyQxX6OAI5R+Nun/atUevrjjYigSh+JrAJSv+plWCTqvqKayCdl3iVhgIJm/8H6PBcC9e
	Q6+KlHZ+O/o9IkO3VfRuovORkZ8C1VQ=
Subject: Re: [PATCH v8 4/5] efi: Enable booting unified
 hypervisor/kernel/initrd images
To: Trammell Hudson <hudson@trmm.net>
Cc: xen-devel@lists.xenproject.org, roger.pau@citrix.com,
 andrew.cooper3@citrix.com, wl@xen.org
References: <20200930120011.1622924-1-hudson@trmm.net>
 <20200930120011.1622924-5-hudson@trmm.net>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ab61cb4b-bcbe-fb61-50d7-8d93bcfca4ab@suse.com>
Date: Fri, 2 Oct 2020 10:27:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930120011.1622924-5-hudson@trmm.net>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 14:00, Trammell Hudson wrote:
> @@ -1215,9 +1231,11 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
>          /* Get the file system interface. */
>          dir_handle = get_parent_handle(loaded_image, &file_name);
>  
> -        /* Read and parse the config file. */
> -        if ( !cfg_file_name )
> +        if ( read_section(loaded_image, L"config", &cfg, NULL) )
> +            PrintStr(L"Using builtin config file\r\n");
> +        else if ( !cfg_file_name )
>          {
> +            /* Read and parse the config file. */

I'm sorry for noticing this only now, but I don't think this comment
should be moved. If no other need for a v9 arises, this can likely
be undone while committing.

> +static bool __init pe_name_compare(const struct PeSectionHeader *sect,
> +                                   const CHAR16 *name)
> +{
> +    size_t i;
> +
> +    if ( sect->Name[0] != '.' )
> +        return -1;

I was about to say "'true' please", but you really mean 'false'
now. (Could perhaps again be fixed while committing.)

> +    for ( i = 1; i < sizeof(sect->Name); i++ )
> +    {
> +        const char c = sect->Name[i];
> +        const CHAR16 cw = name[i - 1];
> +
> +        if ( cw == L'\0' && c == '\0' )
> +            return true;
> +        if ( cw != c )
> +            return false;

Just as a remark (and again spotting only now) this could be had
with one less comparison:

        if ( cw != c )
            return false;
        if ( c == '\0' )
            return true;

At which the need for cw also disappears.
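
[Archive note: put together, the restructured comparison might read as in the
stand-alone sketch below. This is not the patch's actual code: the CHAR16
typedef and the trailing full-width-name handling are assumptions made so the
snippet is self-contained.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint16_t CHAR16;               /* stand-in for the EFI type */

struct PeSectionHeader {
    char Name[8];                      /* fixed-width, NUL-padded */
};

static bool pe_name_compare(const struct PeSectionHeader *sect,
                            const CHAR16 *name)
{
    size_t i;

    if ( sect->Name[0] != '.' )
        return false;                  /* bool, not -1 */

    for ( i = 1; i < sizeof(sect->Name); i++ )
    {
        const char c = sect->Name[i];

        /* One comparison per iteration: any mismatch (including one
         * string ending before the other) fails; a matching NUL means
         * both strings ended together. */
        if ( c != name[i - 1] )
            return false;
        if ( c == '\0' )
            return true;
    }

    /* Name[] is full width with no NUL: match only if 'name' ends here. */
    return name[sizeof(sect->Name) - 1] == L'\0';
}
```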

With at least the earlier two issues addressed
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 08:43:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 08:43:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1744.5310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGfL-0001X0-Rd; Fri, 02 Oct 2020 08:43:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1744.5310; Fri, 02 Oct 2020 08:43:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGfL-0001Wt-OQ; Fri, 02 Oct 2020 08:43:47 +0000
Received: by outflank-mailman (input) for mailman id 1744;
 Fri, 02 Oct 2020 08:43:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=r7zU=DJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kOGfJ-0001Wo-Rd
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 08:43:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a8072d7-3a0d-4f00-be3d-0453f468471f;
 Fri, 02 Oct 2020 08:43:45 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOGfF-000251-QM; Fri, 02 Oct 2020 08:43:41 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOGfF-0003wF-IA; Fri, 02 Oct 2020 08:43:41 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ANlrpoRxsID5GC+he5mTg9W+5RYBepVE2BYuz7saSJw=; b=orRwihg47+T4v+nPdMAW405wEo
	Ww6NT6mbFOpo3voXvxVSC1DmA0nRgzhpCAyFSK3MH5yUmS5O8L2F0burC4m5VfwiIuejnNnb2Rdt9
	iDkMQZPp0rr5g5QDBsi5Nzvz7whucCv46s4eRxmKRBpuIm9I+3JpijJTPxzwzhFtfRMQ=;
Subject: Re: [PATCH 11/12] evtchn: convert vIRQ lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>
Cc: paul@xen.org, xen-devel@lists.xenproject.org,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>
References: <0d5ffc89-4b04-3e06-e950-f0cb171c7419@suse.com>
 <6e529147-2a76-bc28-ac16-21fc9a2c8f03@suse.com>
 <004b01d696ff$76873e50$6395baf0$@xen.org>
 <92d2714b-d762-2f15-086f-58257e3336a8@suse.com>
 <006401d69707$062a5090$127ef1b0$@xen.org>
 <3626d65c-bd5d-f65e-61ca-451110761258@suse.com>
 <f55cb87f-418d-61fa-65f1-0e746071fe37@xen.org>
 <60f2ae90-160f-d7fe-9d5f-f9cd4651a93c@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <049703a8-18ff-4b94-090d-aad948c03283@xen.org>
Date: Fri, 2 Oct 2020 09:43:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <60f2ae90-160f-d7fe-9d5f-f9cd4651a93c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 02/10/2020 07:12, Jan Beulich wrote:
> On 01.10.2020 18:21, Julien Grall wrote:
>> On 30/09/2020 11:16, Jan Beulich wrote:
>>> On 30.09.2020 10:52, Paul Durrant wrote:
>>>> Looking again, given that both send_guest_vcpu_virq() and
>>>> send_guest_global_virq() (rightly) hold the evtchn lock before
>>>> calling evtchn_port_set_pending() I think you could do away with
>>>> the virq lock by adding checks in those functions to verify
>>>> evtchn->state == ECS_VIRQ and u.virq == virq after having
>>>> acquired the channel lock but before calling
>>>> evtchn_port_set_pending().
>>>
>>> I don't think so: The adjustment of v->virq_to_evtchn[] in
>>> evtchn_close() would then happen with just the domain's event
>>> lock held, which the sending paths don't use at all. The per-
>>> channel lock gets acquired in evtchn_close() a bit later only
>>> (and this lock can't possibly protect per-vCPU state).
>>>
>>> In fact I'm now getting puzzled by evtchn_bind_virq() updating
>>> this array with (just) the per-domain lock held. Since it's
>>> the last thing in the function, there's technically no strict
>>> need for acquiring the vIRQ lock,
>>
>> Well, we at least need to prevent the compiler to tear the store/load.
>> If we don't use a lock, then we should use ACCESS_ONCE() or
>> {read,write}_atomic() for all the usage.
>>
>>> but holding the event lock
>>> definitely doesn't help.
>>
>> It helps because spin_unlock() and write_unlock() use the same barrier
>> (arch_lock_release_barrier()). So ...
> 
> I'm having trouble making this part of your reply fit ...
> 
>>> All that looks to be needed is the
>>> barrier implied from write_unlock().
>>
>> No barrier should be necessary, although I would suggest adding a
>> comment explaining it.
> 
> ... this. If we moved the update of v->virq_to_evtchn[] out of the
> locked region (as the lock doesn't protect anything anymore at that
> point), I think a barrier would need adding, such that the sending
> paths will observe the update by the time evtchn_bind_virq()
> returns (and hence sending of a respective vIRQ event can
> legitimately be expected to actually work). Or did you possibly
> just misunderstand what I wrote before? By putting in question the
> utility of holding the event lock, I implied the write could be
> moved out of the locked region ...

We are probably talking past each other... My point was that if we leave 
the write where it currently is, then we don't need an extra barrier 
because the spin_unlock() already contains the barrier we want.

Hence the suggestion to add a comment so a reader doesn't spend time 
wondering how this is safe...
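
[Archive note: for readers following along, ACCESS_ONCE() as used in this
thread is essentially a volatile cast. A minimal stand-alone illustration of
the publish/consume pattern under discussion follows -- this is not the Xen
implementation, and the array and function names are invented.]

```c
#include <assert.h>

/* Minimal ACCESS_ONCE: the volatile access forces the compiler to emit
 * exactly one load or store, so it cannot tear, re-read, or cache the
 * value -- the property discussed above for v->virq_to_evtchn[].
 * (__typeof__ is a GCC/clang extension.) */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

#define NR_VIRQS 8

static unsigned int virq_to_evtchn[NR_VIRQS];

/* Publisher (cf. evtchn_bind_virq()): a single, untorn store. */
static void bind_virq(unsigned int virq, unsigned int port)
{
    ACCESS_ONCE(virq_to_evtchn[virq]) = port;
}

/* Consumer (cf. the vIRQ sending paths): a single load of the slot. */
static unsigned int virq_port(unsigned int virq)
{
    return ACCESS_ONCE(virq_to_evtchn[virq]);
}
```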

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 08:48:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 08:48:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1746.5322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGjX-0001iR-DK; Fri, 02 Oct 2020 08:48:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1746.5322; Fri, 02 Oct 2020 08:48:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGjX-0001iK-9Q; Fri, 02 Oct 2020 08:48:07 +0000
Received: by outflank-mailman (input) for mailman id 1746;
 Fri, 02 Oct 2020 08:48:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOGjW-0001iF-TW
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 08:48:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5031ce30-d688-46f1-9c41-1977436e4e50;
 Fri, 02 Oct 2020 08:48:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6B436AC4D;
 Fri,  2 Oct 2020 08:48:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601628485;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CaIWTy0Xra05FhczbcrTApaziPrlp5SYm6vWFV29lVI=;
	b=DsT+GE1eqthxd1J7NOguwMD2Jah6ZiI+Qt1HgxEDXrLITjB0EixAsj+LNV6a+P126LJ7Ah
	dLjiAp1wzo7KJX8dtoYmz9X65TKrWaNznZ5x9wsjRW/N6msrJJ7xi51XOg/32D36UFKi3R
	Lq5GAMMwzGLVdu1I18MUTqqvi83FjGg=
Subject: Re: [PATCH v2 01/11] x86/hvm: drop vcpu parameter from vlapic EOI
 callbacks
To: Roger Pau Monne <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bafcd30e-f75b-79c8-2424-6a63cb0b96d4@suse.com>
Date: Fri, 2 Oct 2020 10:48:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 12:40, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -459,13 +459,10 @@ void vlapic_EOI_set(struct vlapic *vlapic)
>  
>  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
>  {
> -    struct vcpu *v = vlapic_vcpu(vlapic);
> -    struct domain *d = v->domain;
> -
>      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> -        vioapic_update_EOI(d, vector);
> +        vioapic_update_EOI(vector);
>  
> -    hvm_dpci_msi_eoi(d, vector);
> +    hvm_dpci_msi_eoi(vector);
>  }

What about viridian_synic_wrmsr() -> vlapic_EOI_set() ->
vlapic_handle_EOI()? You'd probably have noticed this if you
had tried to (consistently) drop the respective parameters from
the intermediate functions as well.

Question of course is to what extent viridian_synic_wrmsr() for
HV_X64_MSR_EOI makes sense when v != current. Paul, Wei?

A secondary question of course is whether passing around the
pointers isn't really cheaper than obtaining 'current'.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:02:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:02:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1748.5334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGx7-0003PM-LR; Fri, 02 Oct 2020 09:02:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1748.5334; Fri, 02 Oct 2020 09:02:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOGx7-0003PF-I0; Fri, 02 Oct 2020 09:02:09 +0000
Received: by outflank-mailman (input) for mailman id 1748;
 Fri, 02 Oct 2020 09:02:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOGx7-0003PA-0d
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:02:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fa5360e-5821-48a5-bfd4-b3cc519f415e;
 Fri, 02 Oct 2020 09:02:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7026DACC8;
 Fri,  2 Oct 2020 09:02:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601629326;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RcjHckGbXllsqJReI+1aVcVLvulFVVHyn63H6SXle4o=;
	b=KIOsfnTzEf2OKnNS5GPoQPj6AaTziN+P1gz9PIjVIejJGgDplKy/uAaLVv3AHQFjbtr4aQ
	eI5YjVMLZvOBIdiWI6UQdHZcI4Qzrunb67iGK/rFVU2fczlJtygNKuQu8ZSM0SlhHx1s6B
	eqEt5MMugdyPOHdGLaewDNS5P8JKr60=
Subject: Re: [PATCH v2 02/11] x86/hvm: drop domain parameter from vioapic/vpic
 EOI callbacks
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a30b4844-5d4c-c8d4-7f59-3ce3f51092cf@suse.com>
Date: Fri, 2 Oct 2020 11:02:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 12:40, Roger Pau Monne wrote:
> EOIs are always executed in guest vCPU context, so there's no reason to
> pass a domain parameter around as can be fetched from current->domain.

FAOD whether this is correct depends on what adjustments get made
to patch 1.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:22:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:22:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1752.5349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHGn-0005FY-Ed; Fri, 02 Oct 2020 09:22:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1752.5349; Fri, 02 Oct 2020 09:22:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHGn-0005FR-B5; Fri, 02 Oct 2020 09:22:29 +0000
Received: by outflank-mailman (input) for mailman id 1752;
 Fri, 02 Oct 2020 09:22:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOHGl-0005FM-IH
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:22:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ba5acbf-9073-4cfd-85ce-ec1625518d36;
 Fri, 02 Oct 2020 09:22:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 685E6AF8D;
 Fri,  2 Oct 2020 09:22:25 +0000 (UTC)
X-Inumbo-ID: 4ba5acbf-9073-4cfd-85ce-ec1625518d36
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601630545;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HTRfIj5jV+DogS9N9rvaGg5cDF4cbqcTi8FGrtp3xQA=;
	b=XTH4C4aZekEQ7x3anPzdhK6jZzA05faUoaG147+XKV8KxLCxTc5GZPVyXDeDLVpQXw/9NT
	Z3vDUlU1Ry2ELBVMQqL6QuRXIS1UrWonKzPGvE6yBxTNz7ic0FV/zNFo0Lg7rF3m0B3c5+
	76J5cVwaDrfSgSwXpYWfrStBkvH/dyg=
Subject: Re: [PATCH v2 03/11] x86/vlapic: introduce an EOI callback mechanism
To: paul@xen.org
Cc: 'Roger Pau Monne' <roger.pau@citrix.com>, xen-devel@lists.xenproject.org,
 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-4-roger.pau@citrix.com>
 <006e01d6971f$bb4e0080$31ea0180$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <40ba086b-308d-d126-d255-9a7096863dc2@suse.com>
Date: Fri, 2 Oct 2020 11:22:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <006e01d6971f$bb4e0080$31ea0180$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 13:49, Paul Durrant wrote:
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Roger Pau Monne
>> Sent: 30 September 2020 11:41
>>
>> @@ -159,8 +184,12 @@ void vlapic_set_irq(struct vlapic *vlapic, uint8_t vec, uint8_t trig)
>>      else
>>          vlapic_clear_vector(vec, &vlapic->regs->data[APIC_TMR]);
>>
>> +    if ( callback )
>> +        vlapic_set_callback(vlapic, vec, callback, data);
>> +
> 
> Can this not happen several times before an EOI? I.e. the vector could
> already be set in IRR, right?

Yes, but I take it the assumption is that it'll always be the same
callback that ought to get set here. Hence the warning printk() in
that function in case it isn't.

What I wonder while looking at this function is whether the TMR
handling is correct. The SDM says "Upon acceptance of an interrupt
into the IRR, ..." which I read as "when the IRR bit transitions
from 0 to 1" (but I can see room for reading this differently).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:25:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:25:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1754.5361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHJI-0005PB-S0; Fri, 02 Oct 2020 09:25:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1754.5361; Fri, 02 Oct 2020 09:25:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHJI-0005P4-P4; Fri, 02 Oct 2020 09:25:04 +0000
Received: by outflank-mailman (input) for mailman id 1754;
 Fri, 02 Oct 2020 09:25:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MyDR=DJ=amazon.co.uk=prvs=537cbcb7c=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kOHJG-0005Ox-Ny
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:25:02 +0000
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bbd5334f-5bf0-420f-808b-ca31ca0098d7;
 Fri, 02 Oct 2020 09:25:01 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1e-a70de69e.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 02 Oct 2020 09:25:00 +0000
Received: from EX13D32EUC004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1e-a70de69e.us-east-1.amazon.com (Postfix) with ESMTPS
 id 3F8CBA1CE3; Fri,  2 Oct 2020 09:24:58 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC004.ant.amazon.com (10.43.164.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 2 Oct 2020 09:24:58 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 2 Oct 2020 09:24:58 +0000
X-Inumbo-ID: bbd5334f-5bf0-420f-808b-ca31ca0098d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1601630701; x=1633166701;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=bZTRy/oiBMSi7X6q4j6KD9K3dvxlX9rhdFlk5EL90I0=;
  b=V79OG+WskMU8JkoQnKFrJQN9bWHDpBtX1xCrlxcaNz6JNL+gvbojAbRz
   IbRZ+n/Wh8hwuA0S11chyqaaE+w3hiaF5uN7K48forSfeGanVgAAIhS3z
   fwlGINg0DkFitkNnMNdk62auwrrAsdNOjVl9/za3dWWq0d8p6MUfUprlw
   o=;
X-IronPort-AV: E=Sophos;i="5.77,326,1596499200"; 
   d="scan'208";a="59026413"
Subject: RE: [PATCH v2 01/11] x86/hvm: drop vcpu parameter from vlapic EOI callbacks
Thread-Topic: [PATCH v2 01/11] x86/hvm: drop vcpu parameter from vlapic EOI callbacks
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>
Thread-Index: AQHWlxZTwfhKSEHIHk+Wx3kHFgYPGamEA1KAgAAIUXA=
Date: Fri, 2 Oct 2020 09:24:57 +0000
Message-ID: <59e20dff55464b7fbee9737348fae751@EX13D32EUC003.ant.amazon.com>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-2-roger.pau@citrix.com>
 <bafcd30e-f75b-79c8-2424-6a63cb0b96d4@suse.com>
In-Reply-To: <bafcd30e-f75b-79c8-2424-6a63cb0b96d4@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.68]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 02 October 2020 09:48
> To: Roger Pau Monne <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.cooper3@citrix.com>; Durrant, Paul
> <pdurrant@amazon.co.uk>
> Subject: RE: [EXTERNAL] [PATCH v2 01/11] x86/hvm: drop vcpu parameter from vlapic EOI callbacks
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 30.09.2020 12:40, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -459,13 +459,10 @@ void vlapic_EOI_set(struct vlapic *vlapic)
> >
> >  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> >  {
> > -    struct vcpu *v = vlapic_vcpu(vlapic);
> > -    struct domain *d = v->domain;
> > -
> >      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> > -        vioapic_update_EOI(d, vector);
> > +        vioapic_update_EOI(vector);
> >
> > -    hvm_dpci_msi_eoi(d, vector);
> > +    hvm_dpci_msi_eoi(vector);
> >  }
> 
> What about viridian_synic_wrmsr() -> vlapic_EOI_set() ->
> vlapic_handle_EOI()? You'd probably have noticed this if you
> had tried to (consistently) drop the respective parameters from
> the intermediate functions as well.
> 
> Question of course is in how far viridian_synic_wrmsr() for
> HV_X64_MSR_EOI makes much sense when v != current. Paul, Wei?
> 

I don't think it makes any sense. I think it would be fine to only do it if v == current.

  Paul

> A secondary question of course is whether passing around the
> pointers isn't really cheaper than the obtaining of 'current'.
> 
> Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:30:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:30:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1758.5376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHNx-0005dF-Lj; Fri, 02 Oct 2020 09:29:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1758.5376; Fri, 02 Oct 2020 09:29:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHNx-0005d8-H8; Fri, 02 Oct 2020 09:29:53 +0000
Received: by outflank-mailman (input) for mailman id 1758;
 Fri, 02 Oct 2020 09:29:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jYI3=DJ=nxp.com=laurentiu.tudor@srs-us1.protection.inumbo.net>)
 id 1kOHNw-0005d3-RU
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:29:53 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.45]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85e17d85-0b90-4cc2-9134-82d0f845b8ea;
 Fri, 02 Oct 2020 09:29:50 +0000 (UTC)
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com (2603:10a6:803:3::26)
 by VI1PR04MB6144.eurprd04.prod.outlook.com (2603:10a6:803:fd::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Fri, 2 Oct
 2020 09:29:48 +0000
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b]) by VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b%7]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 09:29:48 +0000
Received: from [192.168.1.106] (86.123.62.1) by
 AM0P190CA0007.EURP190.PROD.OUTLOOK.COM (2603:10a6:208:190::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.38 via Frontend Transport; Fri, 2 Oct 2020 09:29:47 +0000
X-Inumbo-ID: 85e17d85-0b90-4cc2-9134-82d0f845b8ea
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BBquCoP/E0UeVVc/d07eVsdJjZ+vfGwYCdrySGIgbYyhXRMiV/aT9vFWf6I2qexaFIMxkvHDlyCFX6xUxEOEN3xMaVBV4fT/SFbJaBCyMyUi4oJYTdsQas5xmk4n+hJomIkC0xopA0RKLfORgUPXQ5NVooAtypO1qFX63lOb+zj8hsruvdURjBrwhTXxnKldpg9alOV56FTrtVlkLN0d70jhdvll7GOys3UtlWEfGVBlEfoiC7EkLq5s4oxkHa6vATodGzNdZ32PaNopFdqaLj1qkdg6R/HCgcxu9zwS2xDu3WRvcrRP6Nv3ckIA45XbKJFkLCx1n8/LK3k6AZKyxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=42yWuh+1D2zPjR2QTr3Esq8M8tYvM4goIZALn7eda6I=;
 b=gNjMFPduUyapxEMYuR1HP2FrvpM0FmOPCxqMIohk5T/rBhezG2t3URF7oRgb0M76tiQzcKxt1G36oj25Lx9N0usUb5nxA4VRkZFMdJR7Ex8yRiW9hqZVSVTKO8YeDv1uEemDPsBgWLGntleG7OPfcj6nMosFoF8/sTM99Bezunta8aMlS8hnitpSgxhFAkJ4q8g1r++uiP+Z7ecIEr9qKjEwA8Be3rg1Ttjs3qZk/8X+p+pKQWsAFDXei3g8S+5VwPMItMUNEf1I+tOEZfDesCpGavJAxnlGi1TwRLjkdyqwqrPzcOnwKSZ7lDsRzvPMV/0ixR8d/An5bN+zVfwSgg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=42yWuh+1D2zPjR2QTr3Esq8M8tYvM4goIZALn7eda6I=;
 b=h5ScrgWoXLOiddUG9sP0ulXcQ/KmFxz60DxJ89YvikudB5vSQ97NyBU/EnCXkzgJYeAwOZAzQXd42rTwJGZ7nP39qgfpVE1wOZ1xlojrUKQV9bfET7bLLEzbemE6vhInBUm/khqWYK+XXPPjUyeqp2Nv2/cKlFJwueCwFt4cuwc=
Authentication-Results: nxp.com; dkim=none (message not signed)
 header.d=none;nxp.com; dmarc=none action=none header.from=nxp.com;
Subject: Re: [PATCH] arm,smmu: match start level of page table walk with P2M
To: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com,
 will@kernel.org, diana.craciun@nxp.com, anda-alexandra.dorneanu@nxp.com
References: <20200928135157.3170-1-laurentiu.tudor@nxp.com>
 <alpine.DEB.2.21.2010011647020.10908@sstabellini-ThinkPad-T480s>
 <41f7c87b-0db9-5366-b25f-775bf3d6e3ce@xen.org>
From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Message-ID: <625c1142-ae1c-7374-5e77-ab52eb2c326e@nxp.com>
Date: Fri, 2 Oct 2020 12:29:44 +0300
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
In-Reply-To: <41f7c87b-0db9-5366-b25f-775bf3d6e3ce@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: AM0P190CA0007.EURP190.PROD.OUTLOOK.COM
 (2603:10a6:208:190::17) To VI1PR0402MB3405.eurprd04.prod.outlook.com
 (2603:10a6:803:3::26)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-Originating-IP: [86.123.62.1]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: fbd0728b-1709-48f4-148f-08d866b5b4b3
X-MS-TrafficTypeDiagnostic: VI1PR04MB6144:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB6144AFB6DA5B78821E903078EC310@VI1PR04MB6144.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	luqWZyU7lLj0unIOp/Up5BJ4jK8KSWSLMLqnnaeVC/ImVZVu7qU+e0Yo/wpZIYfqCTxt2K/4DghnloEfX8Q7kdG47ZiEtC/lwyTf8UShG1+75r27oDljmiEWY7m5G3k0M2eTK4iELGbvljoyhILpdF4WtIl1jj/rwlBO1vJZw4Ol7RcIvQ6HKq8SmhsqH9gOj5rXKj6ZLuE7D5riRpDJeQx841GnE2uoUDgVZQZY0/LBHDoy1IQr/cO8U5z1Zp/uQXXXIAqAtL7tALweIW4/KkRPLBoGmQHxHEfKtlbLzc1IIbFojKeg8Jhe2Xu1/WctOflKIRXA+iCwZR85H6+GLjI2jXL6llwcabQ34oZHA3SbGj6CajVOafTY8I0iy7C/
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR0402MB3405.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(346002)(136003)(366004)(39860400002)(8676002)(53546011)(31696002)(66476007)(8936002)(44832011)(186003)(86362001)(16576012)(316002)(478600001)(36756003)(110136005)(66946007)(31686004)(6486002)(52116002)(66556008)(956004)(2616005)(4326008)(5660300002)(6666004)(83380400001)(2906002)(26005)(16526019)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	3nBY67g+2nqr2GhHHY008zr1cn4nTNAl51sOTqb1YtbyxHo0yKOlmw2TGqy31iHuEPkry5QnswSMKagnN5x7s2CQP1JiRJtDoM3cwUkPXGvS1a7+ew2rBme4rX0Ytluas7vHk/QbDpZMgvwEQpAxzbPm1juzjPUkAGGIjYYUbkdXI1aFVOTntkWtIu0R4IfILcYMzOQ421jNdiLQVv96TT3mHLrUIFcvIWmNPL4Srk8sTktg4GZPs07/0xW5KD98+MSA/QkBWj1zm0ora/zmmcNqcnuHCoTRWRkvEDrDUCYWWXw0cDxIiFQV57FZ7zwQdG30BACwrLXE1aZYIYzziT0qpn2KgFmYlC3jbPocLbNBh7mcVvgqPiA4aQ1leyMuLeDftiJ6jDMrr6iinv7zZmaRfMHdQAoTMp3Gw/uiGE5CJ+g3VjiUNhJ9n+CfsdMLtbe/BWXhQE4Id50Rm3VYttqqPuydhv5wqmcYFywReRHRoZu6CSXh/GDvH8sc5yArnHBfI5Qf4UFxEJlya6ZodpjiNJUVkLCJgXZNK4BtjZAPX3soHsX/MWxS2CZt+3OWmKE3hxyxuPxDNTtpdiRCE8ns7KmqaTJWP1XbuebLj5UrbBY3WMpDYbPVN9saSpYTkaT9erSOBssHSjC33zBhNA==
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fbd0728b-1709-48f4-148f-08d866b5b4b3
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0402MB3405.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 09:29:48.2822
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aJLElOfjmv+7fs4Vdpu7DiDYKxcVKvOBBb8rVgY6wkttquwhG+AeSKPaZzjoqz/VQ8wEobPAurRTe0lr9W3CFg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6144



On 10/2/2020 11:18 AM, Julien Grall wrote:
> Hi,
> 
> On 02/10/2020 00:52, Stefano Stabellini wrote:
>> On Mon, 28 Sep 2020, laurentiu.tudor@nxp.com wrote:
>>> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>>
>>> Don't hardcode the lookup start level of the page table walk to 1
>>> and instead match the one used in P2M. This should fix scenarios
>>> involving SMMU where the start level is different from 1.
>>>
>>> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>
>> Thank you for the patch, I think it is correct, except that smmu.c today
>> can be enabled even on arm32 builds, where p2m_root_level would be
>> uninitialized.
>>
>> We need to initialize p2m_root_level at the beginning of
>> setup_virt_paging under the #ifdef CONFIG_ARM_32. We can statically
>> initialize it to 1 in that case. Or...
>>
>>
>>> ---
>>>   xen/arch/arm/p2m.c                 | 2 +-
>>>   xen/drivers/passthrough/arm/smmu.c | 2 +-
>>>   xen/include/asm-arm/p2m.h          | 1 +
>>>   3 files changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>> index ce59f2b503..0181b09dc0 100644
>>> --- a/xen/arch/arm/p2m.c
>>> +++ b/xen/arch/arm/p2m.c
>>> @@ -18,7 +18,6 @@
>>>     #ifdef CONFIG_ARM_64
>>>   static unsigned int __read_mostly p2m_root_order;
>>> -static unsigned int __read_mostly p2m_root_level;
>>>   #define P2M_ROOT_ORDER    p2m_root_order
>>>   #define P2M_ROOT_LEVEL p2m_root_level
>>>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>> @@ -39,6 +38,7 @@ static unsigned int __read_mostly max_vmid =
>>> MAX_VMID_8_BIT;
>>>    * restricted by external entity (e.g. IOMMU).
>>>    */
>>>   unsigned int __read_mostly p2m_ipa_bits = 64;
>>> +unsigned int __read_mostly p2m_root_level;
>>
>> ... we could p2m_root_level = 1; here
> 
> IMHO, this is going to make the code quite confusing given that only the
> SMMU would use this variable for arm32.
> 
> The P2M root level also cannot be changed by the SMMU (at least for
> now). So I would suggest to introduce a helper (maybe
> p2m_get_root_level()) and use it in the SMMU code.
> 
> An alternative would be to move the definition of P2M_ROOT_{ORDER,
> LEVEL} in p2m.h

Alright, I'll go with this second option if that's ok with you.

---
Thanks & Best Regards, Laurentiu


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1759.5388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHOt-0006OE-Vz; Fri, 02 Oct 2020 09:30:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1759.5388; Fri, 02 Oct 2020 09:30:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHOt-0006O7-Se; Fri, 02 Oct 2020 09:30:51 +0000
Received: by outflank-mailman (input) for mailman id 1759;
 Fri, 02 Oct 2020 09:30:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7DnO=DJ=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kOHOs-0006O1-3i
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:30:50 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 153dcf84-0294-4d72-acfb-c422c54e1fd4;
 Fri, 02 Oct 2020 09:30:48 +0000 (UTC)
X-Inumbo-ID: 153dcf84-0294-4d72-acfb-c422c54e1fd4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601631049;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=yVmCpR6JSyBsvTOX22MCLXEH9gAlAvDBshycCsO1dy4=;
  b=I/78ZTQtDQS2tY+Yp+FUAZxIfk/PVzEYCKrD9w0vQaocYWIH5ekxT7Og
   IoDFwGZZNYDsH7qmLkvf5jNN2YQvw784FVp/0saxt6+pEC1nmbRiPr5DY
   LygGUsrnGgPEvD2XmhSqcvrItkLkpPPDyHpFn2N9rMFS/mkbk3s0GgNu8
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 3O+CMwRwDtk4ZZBdcs+69a2mPhxX3SNr+r3xusqAD5LGCCwxCiUO0PlBWJ1bIWLMb4U/m/5Cw+
 hIYyd81K+2wyE9C6o9xab6EvzDEQJxJvHGAQ5D3bxfsNd3JY/NP25AU1mLrx/WVMZnt6k1+e4c
 D6twOvHPxpE7Cau1j/Z7Z1gW9g7WIvwFqzFMG5RzUJCJG3mhfl/Akxowh8hUjhiDiAtH2mc9hg
 HAt9zb2T36LDkAga6we7z4HHtE5vSfTDpFPsynGk2ey9M6bcbn0RXE/63N/JVooAKZcq/X+Gmh
 5sA=
X-SBRS: None
X-MesageID: 28480848
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,327,1596513600"; 
   d="scan'208";a="28480848"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H4v2SqlNqAheAkDfxYZwJMeVLGpLP+InqQrG6wyjtddEKBhzMAa54+QCrOVFbLCuG0qVBfQhi6Df0djNmOTorxoZVK9v+pW8Jo2/VXAN1p4cjFa4SxrxCNZjca8HMWwP2yqFvVisBFO64EB5r7LhYyxjdyKbJMzaVU0ZkU0NXYqbTgx7p8LC6Q4n73rZU3RGu80+03HMBY0FjEhIRih12/SbOwYXT3bK22ash4XEOder2xbE0yQ8T1EfU5yOWf52cWUGP/23Gk7rxXTZp2o/owq9jPVpVbQd1yHhP5H31gBz1/Lx/b6GT9CIh+4tQ5vZs5bnlCdkHR+1/QUCJc+B9g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yVmCpR6JSyBsvTOX22MCLXEH9gAlAvDBshycCsO1dy4=;
 b=KLIu8IHk1iXOZSbqLm9J9LQ6AKnE5XrG8abbzfkPlCySdmKa7PhcHMyD8mvlYT0fTYAdrHNfsHMvEjGVrhoHY59S4BJIzXa/QR5iWYXJ7dhFhEew2+0Pz/XG/4LNbb+WZZCJQq2nRsasF1TttWTPLm06/tCDdKq5vBhwDR89e9NuMr7zh4cDECihDGN3tNrF5/yIKEl+uYCVrjc5zzCCdCA5w8jJHzkCXbnvuW7mEv+JTcZHMlGoPsKHilC7BtG+Ko3Qxjvs4f5NxGonoKfcaXVaNCYyNGg1UIywGyR7EAz/kA4A/qXRpGTGJ+vzfqEe4zKy7Z24uFvkZiH9S3KBGw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yVmCpR6JSyBsvTOX22MCLXEH9gAlAvDBshycCsO1dy4=;
 b=TFZ47lm/q8SQ6ycUKZw3j3dOWrTYEOyQpIiZYMhz3zrhhO82C9fieHxgQ5xn9HAxMzp5HcPka5Y1sgbRqzOeWLhEhu8FA7SWJ3bJSZYaULHap5i+YqZDRIlseXzNUbbYn3ufbYi4k7jBLBJPqbqvEn/l1aYOb2Y3UcOCLMIuQBc=
From: George Dunlap <George.Dunlap@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<Ian.Jackson@citrix.com>, Wei Liu <wl@xen.org>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, "Rich
 Persaud" <persaur@gmail.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH RFC] docs: Add minimum version dependency policy document
Thread-Topic: [PATCH RFC] docs: Add minimum version dependency policy document
Thread-Index: AQHWlylRG8SsjpT4JUGR4W/Ewa0YcqmC4twAgAEsN4A=
Date: Fri, 2 Oct 2020 09:30:44 +0000
Message-ID: <C3D24549-C5B1-4B90-A18D-FEBD1A4D0B93@citrix.com>
References: <20200930125736.95203-1-george.dunlap@citrix.com>
 <20201001153612.GG19254@Air-de-Roger>
In-Reply-To: <20201001153612.GG19254@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.1)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 6e5a3232-6f63-4c8b-285b-08d866b5d65f
x-ms-traffictypediagnostic: BY5PR03MB5234:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BY5PR03MB523463584B3D310ED4BB8C6899310@BY5PR03MB5234.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 1md2bxk78BclaDJstPabX0g82KzF4PJNArcoKyVMQYlcypH/ti4EYECXrHsLFwooJ1uNOqd1dnSVa9Efz0x+ZmyxmO9wlC5ti/Y0jhL+UrQuRYuVsO6x602OV5K08bg/Ayp/d4YnqqDGgFy4R2SO7P1XnGGPzKhSJOMi2vUTx1F47TVrP4EOaiEtgF2gkofph2go5V7IdQPR1sr9s9+7vJKgO2/xgx4j/KUnIdffEeitwe59QNzY9Fq1wieibwFhfFyBS9TVt4XUdenVsd9GsPfFxhGM2iLzTUY0hWJY2yUl7Ejqy17cJeIdYghKtBw4VnxlgFXKcF52BG7MC2HDmSh9TCaM3OJmzOVpqY9IsOuSehm8hWhbrtBk+ESL4cEknS3jy3WPacupanHd7r4jfQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(39860400002)(396003)(346002)(376002)(91956017)(2906002)(33656002)(76116006)(86362001)(71200400001)(4326008)(6512007)(6862004)(36756003)(83080400001)(478600001)(8936002)(54906003)(6636002)(966005)(8676002)(55236004)(6506007)(53546011)(316002)(6486002)(37006003)(66446008)(66556008)(64756008)(66476007)(2616005)(186003)(66946007)(5660300002)(26005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: HjhZtKbEdeYWtRvRLLZCYrD1+NmR1hyQFEjjZuZRZeRea+t+oflSZBxQ5cdSmaHOugUQpERyXLoN80Q7wHSLF+PlU7jaqisAIH5yJma4CSDkRSva+1rYmE0I9TT4zTdRZ1Vfxslvuc0+W8BnzpY9l1FB/TIMA25PDXbfK9E2TNzoB4ajBfyLc/ko13U/P0nDlPb9IVILF0Jvh3FQOKjxrDxKeyDPKYYOT7vJmzhE5m03a6XXaojK5g6COvZAQ8emEMY+dm1SkOtwXEqjPN0K9NXt59nfYNOwDP1SyuNPwDseDEyTW4yWYC3Piom+fnfHwXbcVeiBb0bLxlhPQvG1lk8ViyMGfujonodkLH93lM3XbNOWdiyCI1+lzYyFObsL2diYRmP1p6anSBGys2mvKY57tBx7jUK+X3sP71GTtI1gnkayP8VTCTsiosxDrdUizW6KWz3RYhQFOh+BUI0WC1EbR0jK5c75oAHjU6yNUHjOmZYYRKeAgwq9VzxEWFqayBT5K+gOu5zESo3lSBLpMo+V3BzY+BD9wsY6WAMs/ecCZVDQ6RXpxod7Yw+kuyAfRKdMs0c48tAMgMkAJbya2jfiNHg3WTE8pK2Vj8lhWARfBHPJuhLWuIzyIdCMtIUAdIWw7M17Q6D4GPmsmE3SdQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <FFCD7DBB617B6E44B18F6D3210797E16@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e5a3232-6f63-4c8b-285b-08d866b5d65f
X-MS-Exchange-CrossTenant-originalarrivaltime: 02 Oct 2020 09:30:44.3020
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: aWodXmMAaX4LAR75Hd6YQdsc7xyZF7BzK1c6K/1KxfRnUkT9vPQeaDRik7/PN5NypRZgDW0e48MHqPX2t/rm6+t1aJX/eIyuI/NuldhDeuU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5234
X-OriginatorOrg: citrix.com


> On Oct 1, 2020, at 4:36 PM, Roger Pau Monne <roger.pau@citrix.com> wrote:
> 
> On Wed, Sep 30, 2020 at 01:57:36PM +0100, George Dunlap wrote:
>> +
>> +Distros we consider when deciding minimum versions
>> +--------------------------------------------------
>> +
>> +We currently aim to support Xen building and running on the following distributions:
>> +Debian_,
>> +Ubuntu_,
>> +OpenSUSE_,
>> +Arch Linux,
>> +SLES_,
>> +Yocto_,
>> +CentOS_,
>> +and RHEL_.
> 
> Could we add FreeBSD here, I've been packaging Xen there for quite
> some time now, and try to keep it working.

Oh, yes of course.  And probably NetBSD as well.

> It's an interesting target because it has quite a different toolchain
> as it's fully llvm based (clang + lld) and then it's using the ELF
> Toolchain.
> 
> https://www.freebsd.org/releases/
> 
> Not sure if we want to rename the current section to Linux distros and
> add a different one for other OSes.

I don’t think it makes sense to list these separately.  I’ll try to make sure the naming encompasses both.

 -George


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:35:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:35:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1763.5401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHT4-0006cG-Jw; Fri, 02 Oct 2020 09:35:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1763.5401; Fri, 02 Oct 2020 09:35:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHT4-0006c9-Gp; Fri, 02 Oct 2020 09:35:10 +0000
Received: by outflank-mailman (input) for mailman id 1763;
 Fri, 02 Oct 2020 09:35:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=r7zU=DJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kOHT3-0006c4-Hc
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:35:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71199e7a-80d8-4089-821a-60ea5555dbc2;
 Fri, 02 Oct 2020 09:35:08 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOHT1-00039T-SI; Fri, 02 Oct 2020 09:35:07 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOHT1-0007lP-KB; Fri, 02 Oct 2020 09:35:07 +0000
X-Inumbo-ID: 71199e7a-80d8-4089-821a-60ea5555dbc2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=bzmlFRTi8qSySmtsAyZujowy57sk9C8IZDS0MYALyeg=; b=5yFJcpDUX0l1gDGQXxJkRMHSff
	mrTp35F0K8BAu1SU+hM/VgdGT/OWAg8KDQPyZTszukVkzgCVIsUwyq6zKrjjVwVvzWXRBqDG/VNmB
	ZeW5Xq4v1nMr8xBP/BqRYa5+YtiCu2l2EOBRNJvde69+l323PbKT6nKexeSQ80ikGxj0=;
Subject: Re: [PATCH] arm,smmu: match start level of page table walk with P2M
To: Laurentiu Tudor <laurentiu.tudor@nxp.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com,
 will@kernel.org, diana.craciun@nxp.com, anda-alexandra.dorneanu@nxp.com
References: <20200928135157.3170-1-laurentiu.tudor@nxp.com>
 <alpine.DEB.2.21.2010011647020.10908@sstabellini-ThinkPad-T480s>
 <41f7c87b-0db9-5366-b25f-775bf3d6e3ce@xen.org>
 <625c1142-ae1c-7374-5e77-ab52eb2c326e@nxp.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4c3ecf26-fd53-bc22-d1fb-56f3155f2a72@xen.org>
Date: Fri, 2 Oct 2020 10:35:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <625c1142-ae1c-7374-5e77-ab52eb2c326e@nxp.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 02/10/2020 10:29, Laurentiu Tudor wrote:
> 
> 
> On 10/2/2020 11:18 AM, Julien Grall wrote:
>> Hi,
>>
>> On 02/10/2020 00:52, Stefano Stabellini wrote:
>>> On Mon, 28 Sep 2020, laurentiu.tudor@nxp.com wrote:
>>>> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>>>
>>>> Don't hardcode the lookup start level of the page table walk to 1
>>>> and instead match the one used in P2M. This should fix scenarios
>>>> involving SMMU where the start level is different than 1.
>>>>
>>>> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>>
>>> Thank you for the patch, I think it is correct, except that smmu.c today
>>> can be enabled even on arm32 builds, where p2m_root_level would be
>>> uninitialized.
>>>
>>> We need to initialize p2m_root_level at the beginning of
>>> setup_virt_paging under the #ifdef CONFIG_ARM_32. We can statically
>>> initialize it to 1 in that case. Or...
>>>
>>>
>>>> ---
>>>>    xen/arch/arm/p2m.c                 | 2 +-
>>>>    xen/drivers/passthrough/arm/smmu.c | 2 +-
>>>>    xen/include/asm-arm/p2m.h          | 1 +
>>>>    3 files changed, 3 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
>>>> index ce59f2b503..0181b09dc0 100644
>>>> --- a/xen/arch/arm/p2m.c
>>>> +++ b/xen/arch/arm/p2m.c
>>>> @@ -18,7 +18,6 @@
>>>>      #ifdef CONFIG_ARM_64
>>>>    static unsigned int __read_mostly p2m_root_order;
>>>> -static unsigned int __read_mostly p2m_root_level;
>>>>    #define P2M_ROOT_ORDER    p2m_root_order
>>>>    #define P2M_ROOT_LEVEL p2m_root_level
>>>>    static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>>>> @@ -39,6 +38,7 @@ static unsigned int __read_mostly max_vmid =
>>>> MAX_VMID_8_BIT;
>>>>     * restricted by external entity (e.g. IOMMU).
>>>>     */
>>>>    unsigned int __read_mostly p2m_ipa_bits = 64;
>>>> +unsigned int __read_mostly p2m_root_level;
>>>
>>> ... we could p2m_root_level = 1; here
>>
>> IMHO, this is going to make the code quite confusing given that only the
>> SMMU would use this variable for arm32.
>>
>> The P2M root level also cannot be changed by the SMMU (at least for
>> now). So I would suggest to introduce a helper (maybe
>> p2m_get_root_level()) and use it in the SMMU code.
>>
>> An alternative would be to move the definition of P2M_ROOT_{ORDER,
>> LEVEL} in p2m.h
> 
> Alright, I'll go with this second option if that's ok with you.

I am fine with that.

Cheers,

-- 
Julien Grall
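[The second option settled on above (exposing the root level via p2m.h rather than a file-local variable) could be sketched as below. This is an illustrative sketch, not the actual Xen code: the `CONFIG_ARM_64` guard and the arm32 constant of 1 follow the quoted patch context, and `p2m_get_root_level()` is the hypothetical helper name floated in the thread.]

```c
#include <assert.h>

/*
 * Sketch only: on arm64 the P2M root level is computed at boot (so it
 * lives in a variable), while on arm32 it is fixed at 1 as noted in
 * the thread.
 */
#ifdef CONFIG_ARM_64
extern unsigned int p2m_root_level;
#define P2M_ROOT_LEVEL p2m_root_level
#else
#define P2M_ROOT_LEVEL 1U
#endif

/*
 * Hypothetical accessor so callers outside p2m.c (e.g. the SMMU
 * driver) never touch the arm64-only variable directly.
 */
static inline unsigned int p2m_get_root_level(void)
{
    return P2M_ROOT_LEVEL;
}
```

[With either shape, the SMMU code would start its walk at `p2m_get_root_level()` / `P2M_ROOT_LEVEL` instead of a hardcoded 1.]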


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:39:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:39:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1767.5418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHXc-0006pL-81; Fri, 02 Oct 2020 09:39:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1767.5418; Fri, 02 Oct 2020 09:39:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHXc-0006pE-4t; Fri, 02 Oct 2020 09:39:52 +0000
Received: by outflank-mailman (input) for mailman id 1767;
 Fri, 02 Oct 2020 09:39:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOHXa-0006p9-KD
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:39:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7669fb16-605d-4029-89d3-dbd7f3dff688;
 Fri, 02 Oct 2020 09:39:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8E8A0AC54;
 Fri,  2 Oct 2020 09:39:48 +0000 (UTC)
X-Inumbo-ID: 7669fb16-605d-4029-89d3-dbd7f3dff688
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601631588;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JjOM2wsX9un7MyyNRT0QnyhHaldtAv+oXZi3z52RjaY=;
	b=I79Gb8FNHptVGe26fFZc5ZF09u2kas+Cwm8g8XupsS4mA0cZr8yk+mGG71/Wr5A6m5O7mE
	KafoG6XKXoLXw6nUMWm9BRGr79POqhA8YdPSL8d2g10CY39AyCxPvuIRaeADJ5pOSV4oSg
	3kWLe9Y+NHm4JvDwVljsKXf4dWaC0ZM=
Subject: Re: [PATCH v2 03/11] x86/vlapic: introduce an EOI callback mechanism
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-4-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a6863a90-584a-af21-4a0a-1b104b750978@suse.com>
Date: Fri, 2 Oct 2020 11:39:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 12:41, Roger Pau Monne wrote:
> Add a new vlapic_set_irq_callback helper in order to inject a vector
> and set a callback to be executed when the guest performs the end of
> interrupt acknowledgment.

On v1 I did ask

"One thing I don't understand at all for now is how these
 callbacks are going to be re-instated after migration for
 not-yet-EOIed interrupts."

Afaics I didn't get an answer on this.

> ---
> RFC: should callbacks also be executed in vlapic_do_init (which is
> called by vlapic_reset). We would need to make sure ISR and IRR
> are cleared using some kind of test and clear atomic functionality to
> make this race free.

I guess this can't be decided at this point of the series, as it
may depend on what exactly the callbacks mean to do. It may even
be that whether a callback wants to do something depends on
whether it gets called "normally" or from vlapic_do_init().

> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -144,7 +144,32 @@ bool vlapic_test_irq(const struct vlapic *vlapic, uint8_t vec)
>      return vlapic_test_vector(vec, &vlapic->regs->data[APIC_IRR]);
>  }
>  
> -void vlapic_set_irq(struct vlapic *vlapic, uint8_t vec, uint8_t trig)
> +void vlapic_set_callback(struct vlapic *vlapic, unsigned int vec,
> +                         vlapic_eoi_callback_t *callback, void *data)
> +{
> +    unsigned long flags;
> +    unsigned int index = vec - 16;
> +
> +    if ( !callback || vec < 16 || vec >= X86_NR_VECTORS )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return;
> +    }
> +
> +    spin_lock_irqsave(&vlapic->callback_lock, flags);
> +    if ( vlapic->callbacks[index].callback &&
> +         vlapic->callbacks[index].callback != callback )
> +        printk(XENLOG_G_WARNING
> +               "%pv overriding vector %#x callback %ps (%p) with %ps (%p)\n",
> +               vlapic_vcpu(vlapic), vec, vlapic->callbacks[index].callback,
> +               vlapic->callbacks[index].callback, callback, callback);
> +    vlapic->callbacks[index].callback = callback;
> +    vlapic->callbacks[index].data = data;

Should "data" perhaps also be compared in the override check above?

> @@ -1629,9 +1672,23 @@ int vlapic_init(struct vcpu *v)
>      }
>      clear_page(vlapic->regs);
>  
> +    if ( !vlapic->callbacks )
> +    {
> +        vlapic->callbacks = xmalloc_array(typeof(*vlapic->callbacks),
> +                                          X86_NR_VECTORS - 16);
> +        if ( !vlapic->callbacks )
> +        {
> +            dprintk(XENLOG_ERR, "%pv: alloc vlapic callbacks error\n", v);
> +            return -ENOMEM;
> +        }
> +    }
> +    memset(vlapic->callbacks, 0, sizeof(*vlapic->callbacks) *
> +                                 (X86_NR_VECTORS - 16));

While it resembles code earlier in this function, it widens an
existing memory leak (I'll make a patch for that one) and also
makes it appear as if this function could be called more than
once for a vCPU (maybe I'll also make a patch for this).

Jan
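[The suggestion to also compare "data" in the override check could look like the sketch below. This is a simplified, self-contained illustration, not the patch itself: the structure layout and `set_callback()` name are reduced from the quoted code, and it returns whether an override was detected instead of issuing the printk().]

```c
#include <assert.h>
#include <stddef.h>

typedef void vlapic_eoi_callback_t(void *data);

struct eoi_callback {
    vlapic_eoi_callback_t *callback;
    void *data;
};

/* Example callback with the assumed signature. */
static void noop_cb(void *data)
{
    (void)data;
}

/*
 * Treat a registration as an override when either the function
 * pointer OR its data pointer differs from what is already set;
 * re-registering the identical (callback, data) pair is benign.
 */
static int set_callback(struct eoi_callback *cb,
                        vlapic_eoi_callback_t *callback, void *data)
{
    int overridden = cb->callback &&
                     (cb->callback != callback || cb->data != data);

    /* In the real code, the XENLOG_G_WARNING printk would go here. */
    cb->callback = callback;
    cb->data = data;

    return overridden;
}
```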


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:46:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:46:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1770.5429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHeI-0007jw-3R; Fri, 02 Oct 2020 09:46:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1770.5429; Fri, 02 Oct 2020 09:46:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHeI-0007jp-0U; Fri, 02 Oct 2020 09:46:46 +0000
Received: by outflank-mailman (input) for mailman id 1770;
 Fri, 02 Oct 2020 09:46:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOHeH-0007jk-6G
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:46:45 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::620])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5037ee84-d2de-4787-9996-c3c74ecb5f07;
 Fri, 02 Oct 2020 09:46:43 +0000 (UTC)
Received: from AM5PR0101CA0012.eurprd01.prod.exchangelabs.com
 (2603:10a6:206:16::25) by AM4PR0802MB2162.eurprd08.prod.outlook.com
 (2603:10a6:200:5c::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Fri, 2 Oct
 2020 09:46:41 +0000
Received: from VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:16:cafe::bc) by AM5PR0101CA0012.outlook.office365.com
 (2603:10a6:206:16::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38 via Frontend
 Transport; Fri, 2 Oct 2020 09:46:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT043.mail.protection.outlook.com (10.152.19.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 09:46:40 +0000
Received: ("Tessian outbound 195a290eb161:v64");
 Fri, 02 Oct 2020 09:46:40 +0000
Received: from 02b574daa939.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 82E596E6-9C02-45E1-AB2E-3A0B763E1DFD.1; 
 Fri, 02 Oct 2020 09:46:02 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 02b574daa939.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 09:46:02 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5845.eurprd08.prod.outlook.com (2603:10a6:10:1a5::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Fri, 2 Oct
 2020 09:46:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 09:46:00 +0000
X-Inumbo-ID: 5037ee84-d2de-4787-9996-c3c74ecb5f07
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8dNGoh+tApqUm8hZhKiDUFKe0zCuPtvL8QZGDsOurl8=;
 b=p4t2j5g8tswN0bJ5IqzyTz3XNnyt9ZwevRTa6Mh1YB3qie1yRqovXpZ11FvfscJ5jRyxve5N9L/y2DJvlLwCwD2rPCJtcTfAMQNBhGrdBASZaq4SsuQ7Dyec9haaONEO8db3a6NN+Z6zstrukkLtXFmdysXcO0LKsJYzwufU7DI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0eab010db4de8e86
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CDnuQsdEuRlFypqwBxCO5T6pPlvQBByFzKsn6FjW6qybMRNGnF6ysd67svi4g0asux20k7yBfvDKL/AdLAkC78dxzbOB9c2l5KXUIP5s4ppovM9uZ7FY9qd9bUZ+14KfkO7wgILldXSIQxK8m73PFlqZqZgBKm8cF24geWrIS9RolAfO15DFhUvJSDfIAa4m+4nb7dzAD3l2CH1beuVGpfHOUnX/+LgyZtcXiabwU+7LYwvIa9Lz+2pU4LRw6MQdMaIyX+Gr6opSSVWR2tBCapxBdbSLVDWmZqWX5SRQLtPvJNcXP7teVb0na9fbdOBniDaLVisecCxTL+0MPNjoPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8dNGoh+tApqUm8hZhKiDUFKe0zCuPtvL8QZGDsOurl8=;
 b=RVXlfCFMD/1zgVSQkUGJE529hP3g/NZ5bBUrgKn1u+bRhZYdTh/y1uzAShFu5db6KXYzNjUAEVA3URHNSfkLrOXpKGChaU8WjkQ1JA8qY2r6L22HB/OMk9nZDYa9+9+Dk7Q2Q9saVvJnlKKP3UIYQ58RZ5h/chqxt1zY8phuZ686DvqBRDmIWMV0TmSxsjO3XqQvpdvwyDZztnXkrcBATgk2qE4YDSh2IqkmexuzPfWO7TIyblb8bL6omm/8dYwYJkF5g8IZaaJY214YH5CMa3pRC6DUCnRT65MWd0x2TcjGCcRejLQarXJmDh6cWfOVplwngqYwpVrhF6abhIUm9g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: Jan Beulich <jbeulich@suse.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH 0/6] tools/include: adjustments to the population of
 xen/
Thread-Topic: Ping: [PATCH 0/6] tools/include: adjustments to the population
 of xen/
Thread-Index: AQHWh2uA51KvLc+DNUCjQhMwaVDUdqmDCd+AgAAHWYCAAAQLgIAAxK+AgABY4YA=
Date: Fri, 2 Oct 2020 09:46:00 +0000
Message-ID: <A4993E3A-6529-4239-ABF8-DD89A01A54D1@arm.com>
References: <2a9f86aa-9104-8a45-cd21-72acd693f924@suse.com>
 <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
 <9F53B61A-5A50-46DD-BF5B-75F48C91FCFC@arm.com>
 <6B9403A3-66DC-4A69-8006-096420649768@arm.com>
 <dea68b56-990d-a13f-a2c4-171e67eaaf73@suse.com>
In-Reply-To: <dea68b56-990d-a13f-a2c4-171e67eaaf73@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 42ce254b-7702-4ddf-72f6-08d866b81054
x-ms-traffictypediagnostic: DBAPR08MB5845:|AM4PR0802MB2162:
X-Microsoft-Antispam-PRVS:
	<AM4PR0802MB2162A778EC4B6669454B53EF9D310@AM4PR0802MB2162.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ouyY/0yEYlaxxWepTl4LSRtxk9DM7O93Tuah61Fdp3DWZaca7Snl7rx2O/OKlfLMjHlg9HJ3s9qh4MpG6oDr2JuLc6jguHylGovHGehIQpqJGfhSHXzNSC3yNA4Wjp7cCz8oKT186GWQ6ntCVTze9++NGRAwYH9x+nu713v7Lew/+3KRsah7roMyRqbN8SvFy3fer8FV4gvRxsCIOipKfTFHSOvewmON9YK6oDsEMsHwkWcoSZv5KhkMpZ17HITLTVxoaM+QA/wSHRlvxPt2sZvv9sqxtMrgZ4CPgrz2/kfdimJDKEvQL6nwOwJbgkgp
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(376002)(346002)(396003)(39860400002)(6486002)(5660300002)(71200400001)(76116006)(2906002)(316002)(91956017)(64756008)(66946007)(66446008)(66556008)(36756003)(66476007)(6512007)(66574015)(8936002)(53546011)(478600001)(86362001)(54906003)(2616005)(8676002)(26005)(33656002)(4326008)(6916009)(83380400001)(186003)(6506007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 Tk8UG03BQFlC11AM9SYWrNE89mBLhRSfOrhBCFf+vhGd3MJSV2Vt4yG4MsiSOEsCoSNZS4KGkgpBXxT46oNDC2WquLc03ZKokJdONYbHxiNBh4TBzuUqOqf4OQYd6tgNLlL2CMaqNnqe9f6VvF4HEzakViEonTtmCkQRpFi1NNXjh4zB7zDryd8JH1wPduRt8pH6OxUzB99FkWVyzFeNP6G12dWXCnGARs3XBsjZcEzssvZyiaXtwjV27WcOSheZTgeAnjSjQuIAvK+qdN9pFy1Lz43cs3Dg1s7EcLll1kOHfZASmi7eDCasnN4rdxYQ0aOCK9rK9dz0kBRsM+4r6PTPYZGizQlHKS2ZCDqnUtvVypHY6AI44esbJc+KFu/YPUIiuJdvHG9427cN4mSTX0zqvuHS6ZP5gPKbJbDkSFRznhuT5Z/B55O/JSgcy3sK/kZfFsdssW6/mZ02UpOjAhq/XpPi6Rx6HkMQ+6DddglNItKlc+VbC7Q4MSW1oM6eKi7ScSBtvNsNvTM4fqMO9PCZ3yOCgTmCumedLCdBZ5GAYOCO7fCtkHRZhNjGcHg5Xb7GNJkGMfJBtDjAE9lr2b8iUqvEHOz0QXC4CGo1vuXT6/wUprYYvPo4W2dA2Gpm7qT8siGCde1bhbvOk1MkaA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <43E708A7CE2FFF48BAD5992C2CD31ADC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5845
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	01d2dcbe-842e-44b1-4274-08d866b7f834
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 09:46:40.5791
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 42ce254b-7702-4ddf-72f6-08d866b81054
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0802MB2162

> On 2 Oct 2020, at 05:27, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 01.10.20 18:43, Bertrand Marquis wrote:
>> Hi,
>>> On 1 Oct 2020, at 17:29, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>> 
>>> Hi Jan,
>>> 
>>>> On 1 Oct 2020, at 17:03, Jan Beulich <jbeulich@suse.com> wrote:
>>>> 
>>>> On 10.09.2020 14:09, Jan Beulich wrote:
>>>>> While looking at what it would take to move around libelf/
>>>>> in the hypervisor subtree, I've run into this rule, which I
>>>>> think can do with a few improvements and some simplification.
>>>>> 
>>>>> 1: adjust population of acpi/
>>>>> 2: fix (drop) dependencies of when to populate xen/
>>>>> 3: adjust population of public headers into xen/
>>>>> 4: properly install Arm public headers
>>>>> 5: adjust x86-specific population of xen/
>>>>> 6: drop remaining -f from ln invocations
>>>> 
>>>> May I ask for an ack or otherwise here?
>>> 
>>> This is going the right way but with this serie (on top of current staging
>>> status), I have a compilation error in Yocto while compiling qemu:
>>> In file included from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenguest.h:25,
>>> |                  from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/hw/i386/xen/xen_platform.c:41:
>>> | /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenctrl_dom.h:19:10: fatal error: xen/libelf/libelf.h: No such file or directory
>>> |    19 | #include <xen/libelf/libelf.h>
>>> |       |          ^~~~~~~~~~~~~~~~~~~~~
>>> | compilation terminated.
>>> | /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/rules.mak:69: recipe for target 'hw/i386/xen/xen_platform.o' failed
>>> 
>>> Xen is using xenctrl_dom.h which need the libelf.h header from xen.
>> Actually this is not coming from your serie and this is actually a problem already present on master.
> 
> ... and fixed on staging.

I can confirm that with tonight staging status this issue is not present anymore.

Regards
Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:48:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1776.5442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHfU-0007qu-FX; Fri, 02 Oct 2020 09:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1776.5442; Fri, 02 Oct 2020 09:48:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHfU-0007qn-CM; Fri, 02 Oct 2020 09:48:00 +0000
Received: by outflank-mailman (input) for mailman id 1776;
 Fri, 02 Oct 2020 09:47:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jYI3=DJ=nxp.com=laurentiu.tudor@srs-us1.protection.inumbo.net>)
 id 1kOHfT-0007qg-04
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:47:59 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe02::61c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b23dea5-3134-4dde-b83f-b6fbeed984eb;
 Fri, 02 Oct 2020 09:47:57 +0000 (UTC)
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com (2603:10a6:803:3::26)
 by VI1PR0402MB3421.eurprd04.prod.outlook.com (2603:10a6:803:5::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3412.24; Fri, 2 Oct
 2020 09:47:54 +0000
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b]) by VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b%7]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 09:47:54 +0000
Received: from fsr-ub1864-101.ea.freescale.net (83.217.231.2) by
 AM0PR10CA0014.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:17c::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38 via Frontend
 Transport; Fri, 2 Oct 2020 09:47:53 +0000
X-Inumbo-ID: 5b23dea5-3134-4dde-b83f-b6fbeed984eb
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dkmYYkbuM/mO7NaxIBoMoPiavl8s89qmMEvNLN1/uTGlo5/QKpf/DXXFoIC9aY6TgDx4gWK7R8FJsyYWNghynOUKizIG8KuRaLFe+ZdVCTF03SA2PLSMhKkQogHwVnc2w1rskaQIeQUbecgWdkyi12QjZETphMh5osgUJijTfigbZqH29EI5g1GVqNkiJQfkwZMuJuR+I87EjFJvYK/AaSeGZsKb8Qs0fDN1weFzXSt7NO6jpCKtjnt3pQhmZlzxtxDwBrJnbGLI8xstl+hHF8cvAFwh9FIzxEH1CbBgbYG/rruueP24eAzawWd0r8Z7Faf5dql7kZKSVVHjusDgiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3jsiYolmuFMXOgRZZk1T4VfnT84ijQrqqp/hl7R8W0g=;
 b=CFU36d2Cg8mczxCZGAeNtfJGnAmbx0gOGbhYdfyrqEGJ2pfDA7C9+3wOgw0ovybyMOKj05s3HaTBy6rNHzsGL3eI4JIsovejmJsytXzlIhTt6fo3U5OagX1mLvfYjmCFVG/nJHvVz6fEQBN35PisIRuX7tkG31CevFY8nUEu7E9tyGpRdu1wNLxmCDxdSaDsKFt804J4iSOYFqeXc+/NwcLcqQSmz+kmtSuJF/MSbaqeMYyEfYen9cNLrFZE41uSj5hIFwpfeG5VcIBK0pjYm2kcZeDTiYmtQHkTUQBmly3tgSwxSwreJRjmrFdjEYPmzDxdDGSy30wAiOQDXvop4A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3jsiYolmuFMXOgRZZk1T4VfnT84ijQrqqp/hl7R8W0g=;
 b=f/Sziavxvc2IxBJokJNNxu5Asa4stV5q5mCbkwHuHBMAOAKw07Ugo3W1KfNINsoQyHD1z16wWaiilyd+OEa2iav7G8JJWOJc3o7nfdYeA6UMngQlYG8+dQi2k/T38p3/OP1T87HGAhgf0ms1M8VIBiicZybdcsdubEvsg5neD+E=
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=nxp.com;
From: laurentiu.tudor@nxp.com
To: sstabellini@kernel.org,
	julien@xen.org,
	xen-devel@lists.xenproject.org,
	Volodymyr_Babchuk@epam.com,
	will@kernel.org
Cc: diana.craciun@nxp.com,
	anda-alexandra.dorneanu@nxp.com,
	Laurentiu Tudor <laurentiu.tudor@nxp.com>
Subject: [PATCH v2] arm,smmu: match start level of page table walk with P2M
Date: Fri,  2 Oct 2020 12:47:37 +0300
Message-Id: <20201002094737.9803-1-laurentiu.tudor@nxp.com>
X-Mailer: git-send-email 2.17.1
Content-Type: text/plain; charset="us-ascii"
X-ClientProxiedBy: AM0PR10CA0014.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:17c::24) To VI1PR0402MB3405.eurprd04.prod.outlook.com
 (2603:10a6:803:3::26)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-Originating-IP: [83.217.231.2]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 30971ddc-5ab7-4ff8-c4ce-08d866b83bfb
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3421:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB34212E88C02DF023B0C5841FEC310@VI1PR0402MB3421.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 30971ddc-5ab7-4ff8-c4ce-08d866b83bfb
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0402MB3405.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 09:47:54.1914
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3421

From: Laurentiu Tudor <laurentiu.tudor@nxp.com>

Don't hardcode the lookup start level of the page table walk to 1;
instead, match the start level used by the P2M. This should fix
scenarios involving the SMMU where the P2M start level is different
from 1.
In order for the SMMU driver to also compile on arm32, move
P2M_ROOT_LEVEL into the p2m header file (and, for consistency,
P2M_ROOT_ORDER as well) and use the macro in the smmu driver.

Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
---
Changes in v2:
 - made smmu driver compile on arm32

 xen/arch/arm/p2m.c                 |  7 +------
 xen/drivers/passthrough/arm/smmu.c |  2 +-
 xen/include/asm-arm/p2m.h          | 10 ++++++++++
 3 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ce59f2b503..bb75f12486 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -18,16 +18,10 @@
 
 #ifdef CONFIG_ARM_64
 static unsigned int __read_mostly p2m_root_order;
-static unsigned int __read_mostly p2m_root_level;
-#define P2M_ROOT_ORDER    p2m_root_order
-#define P2M_ROOT_LEVEL p2m_root_level
 static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
 /* VMID is by default 8 bit width on AArch64 */
 #define MAX_VMID       max_vmid
 #else
-/* First level P2M is always 2 consecutive pages */
-#define P2M_ROOT_LEVEL 1
-#define P2M_ROOT_ORDER    1
 /* VMID is always 8 bit width on AArch32 */
 #define MAX_VMID        MAX_VMID_8_BIT
 #endif
@@ -39,6 +33,7 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
  * restricted by external entity (e.g. IOMMU).
  */
 unsigned int __read_mostly p2m_ipa_bits = 64;
+unsigned int __read_mostly p2m_root_level;
 
 /* Helpers to lookup the properties of each level */
 static const paddr_t level_masks[] =
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 94662a8501..4ba6d3ab94 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -1152,7 +1152,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 	      (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
 
 	if (!stage1)
-		reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
+		reg |= (2 - P2M_ROOT_LEVEL) << TTBCR_SL0_SHIFT;
 
 	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5fdb6e8183..ab02b36a03 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -12,6 +12,16 @@
 
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
+extern unsigned int p2m_root_level;
+
+#ifdef CONFIG_ARM_64
+#define P2M_ROOT_ORDER    p2m_root_order
+#define P2M_ROOT_LEVEL p2m_root_level
+#else
+/* First level P2M is always 2 consecutive pages */
+#define P2M_ROOT_ORDER    1
+#define P2M_ROOT_LEVEL 1
+#endif
 
 struct domain;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:48:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:48:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1778.5454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHfd-0007uC-P6; Fri, 02 Oct 2020 09:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1778.5454; Fri, 02 Oct 2020 09:48:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHfd-0007u5-LG; Fri, 02 Oct 2020 09:48:09 +0000
Received: by outflank-mailman (input) for mailman id 1778;
 Fri, 02 Oct 2020 09:48:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z+Ma=DJ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kOHfc-0007tg-0L
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:48:08 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc68222b-8951-456d-8e5e-6d0f0603b85d;
 Fri, 02 Oct 2020 09:48:05 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id x14so1055918wrl.12
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 02:48:05 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id k8sm1015650wrl.42.2020.10.02.02.48.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 02:48:03 -0700 (PDT)
X-Inumbo-ID: fc68222b-8951-456d-8e5e-6d0f0603b85d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=XBKf5rCOPo4NoQKS2bCbn6OHpqMgKghSnuzncLMaEsg=;
        b=NnPhOIiGXFfvJWiXpwMfYCrlYJj3jSqwhM3Fr7FAFgDtN8+LWpUPVsY9CPqHheoz3s
         duOjN3dl7KGdJpPoH/Rn+jfMpp1zUc3mpthUjoLgdoNEYm7Fr4xKD3gS0xfEr0+XOfv6
         XtSRuu65opavt6S+qKQAVPJTls41fPewz/rXA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=XBKf5rCOPo4NoQKS2bCbn6OHpqMgKghSnuzncLMaEsg=;
        b=bjwS1PUB2j6XcjWj8NXAyg/Jiakapp8/GZZhqN+tkJyt926RKTqi2oEDcQ6XbLBQ/L
         vWecOpIBZ1UYjWFlTl82AXj5PeQLYXbynrBSxq/bkBsqZ5HrOiTjPMBPFnJqbLVNudRg
         dq00K3+EILirgvKDsrPRwc42MTswNHIWVFm3cEKpGK28X596PH0bX05mrmgPwu3mg1sz
         xfD4W02OHAlnprZIbIvmUTMt6Sd+on9pmY0iE1EqJPdrI+kkYaIR2hBZBXREEipY/yV5
         Npz7Ve4Bzq12xnz+vf7ckxzVJi+KK/GRfnBMzwStmyj0XImtg1iTQ/IxxA7pTCgmVM8j
         7oZg==
X-Gm-Message-State: AOAM533+5Gvu7S3dBFR8oraXHMYAq3YJP5/EXeuho/EAOgiGhjyMAtYi
	wTHJqfvqgbpd3SMk3Mn1380qDw==
X-Google-Smtp-Source: ABdhPJwvtlXjY3abDVfehLG+NbtTw4ENbbSxjUqiw5J1WQgeZzduYuia7fGlp8u2Z6fywLN6MoeKFw==
X-Received: by 2002:adf:fed1:: with SMTP id q17mr1966851wrs.85.1601632084874;
        Fri, 02 Oct 2020 02:48:04 -0700 (PDT)
Date: Fri, 2 Oct 2020 11:48:00 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from
 internal kmap function
Message-ID: <20201002094800.GG438822@phenom.ffwll.local>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-2-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200929151437.19717-2-tzimmermann@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Tue, Sep 29, 2020 at 05:14:31PM +0200, Thomas Zimmermann wrote:
> The parameters map and is_iomem always have the same value. Remove them
> to prepare the function for conversion to struct dma_buf_map.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> ---
>  drivers/gpu/drm/drm_gem_vram_helper.c | 17 ++++++-----------
>  1 file changed, 6 insertions(+), 11 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 3fe4b326e18e..256b346664f2 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -382,16 +382,16 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
>  }
>  EXPORT_SYMBOL(drm_gem_vram_unpin);
>  
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> -				      bool map, bool *is_iomem)
> +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
>  {
>  	int ret;
>  	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> +	bool is_iomem;
>  
>  	if (gbo->kmap_use_count > 0)
>  		goto out;
>  
> -	if (kmap->virtual || !map)
> +	if (kmap->virtual)
>  		goto out;
>  
>  	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> @@ -399,15 +399,10 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
>  		return ERR_PTR(ret);
>  
>  out:
> -	if (!kmap->virtual) {
> -		if (is_iomem)
> -			*is_iomem = false;
> +	if (!kmap->virtual)
>  		return NULL; /* not mapped; don't increment ref */
> -	}
>  	++gbo->kmap_use_count;
> -	if (is_iomem)
> -		return ttm_kmap_obj_virtual(kmap, is_iomem);
> -	return kmap->virtual;
> +	return ttm_kmap_obj_virtual(kmap, &is_iomem);
>  }
>  
>  static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> @@ -452,7 +447,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
>  	ret = drm_gem_vram_pin_locked(gbo, 0);
>  	if (ret)
>  		goto err_ttm_bo_unreserve;
> -	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
> +	base = drm_gem_vram_kmap_locked(gbo);
>  	if (IS_ERR(base)) {
>  		ret = PTR_ERR(base);
>  		goto err_drm_gem_vram_unpin_locked;
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:51:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:51:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1788.5468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHic-0000R4-Fa; Fri, 02 Oct 2020 09:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1788.5468; Fri, 02 Oct 2020 09:51:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHic-0000Qx-C8; Fri, 02 Oct 2020 09:51:14 +0000
Received: by outflank-mailman (input) for mailman id 1788;
 Fri, 02 Oct 2020 09:51:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=r7zU=DJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kOHib-0000Qr-O0
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:51:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59d77d83-cbb6-4197-a2fd-21bfb9d08e86;
 Fri, 02 Oct 2020 09:51:12 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOHia-0003Un-0R; Fri, 02 Oct 2020 09:51:12 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOHiZ-0000XF-Or; Fri, 02 Oct 2020 09:51:11 +0000
X-Inumbo-ID: 59d77d83-cbb6-4197-a2fd-21bfb9d08e86
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=mzDA/S3dz8EW+2wSiGaselJ3IjD2FytsgKUO3naeTfg=; b=yhvS6/lk9H1yA/94pdJPoaM+Qk
	l2dJpvB+n2IP7JRzyhAbtdFYmJnzlXAut0Gf7dq3yJWEHmxHy6V1wNpyk2+ndp8lMsZWChbUn3XCp
	P4qJ/YnVVeSasnarqu5m0y93AiarNeaCJVudSXC4+4oM8ONrfUOcRUl84MpjCvOtCuMY=;
Subject: Re: [PATCH v2] arm,smmu: match start level of page table walk with
 P2M
To: laurentiu.tudor@nxp.com, sstabellini@kernel.org,
 xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com, will@kernel.org
Cc: diana.craciun@nxp.com, anda-alexandra.dorneanu@nxp.com
References: <20201002094737.9803-1-laurentiu.tudor@nxp.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b5bd3263-42fd-27a3-ebdc-2ae2b5b72f3a@xen.org>
Date: Fri, 2 Oct 2020 10:51:09 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201002094737.9803-1-laurentiu.tudor@nxp.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 02/10/2020 10:47, laurentiu.tudor@nxp.com wrote:
> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Don't hardcode the lookup start level of the page table walk to 1;
> instead, match the start level used by the P2M. This should fix
> scenarios involving the SMMU where the P2M start level is different
> from 1.
> In order for the SMMU driver to also compile on arm32, move
> P2M_ROOT_LEVEL into the p2m header file (and, for consistency,
> P2M_ROOT_ORDER as well) and use the macro in the smmu driver.
> 
> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> ---
> Changes in v2:
>   - made smmu driver compile on arm32
> 
>   xen/arch/arm/p2m.c                 |  7 +------
>   xen/drivers/passthrough/arm/smmu.c |  2 +-
>   xen/include/asm-arm/p2m.h          | 10 ++++++++++
>   3 files changed, 12 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index ce59f2b503..bb75f12486 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -18,16 +18,10 @@
>   
>   #ifdef CONFIG_ARM_64
>   static unsigned int __read_mostly p2m_root_order;
> -static unsigned int __read_mostly p2m_root_level;
> -#define P2M_ROOT_ORDER    p2m_root_order
> -#define P2M_ROOT_LEVEL p2m_root_level
>   static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>   /* VMID is by default 8 bit width on AArch64 */
>   #define MAX_VMID       max_vmid
>   #else
> -/* First level P2M is always 2 consecutive pages */
> -#define P2M_ROOT_LEVEL 1
> -#define P2M_ROOT_ORDER    1
>   /* VMID is always 8 bit width on AArch32 */
>   #define MAX_VMID        MAX_VMID_8_BIT
>   #endif
> @@ -39,6 +33,7 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>    * restricted by external entity (e.g. IOMMU).
>    */
>   unsigned int __read_mostly p2m_ipa_bits = 64;
> +unsigned int __read_mostly p2m_root_level;

This wants to stay in the #ifdef CONFIG_ARM_64 above and...

>   
>   /* Helpers to lookup the properties of each level */
>   static const paddr_t level_masks[] =
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 94662a8501..4ba6d3ab94 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1152,7 +1152,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>   	      (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
>   
>   	if (!stage1)
> -		reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
> +		reg |= (2 - P2M_ROOT_LEVEL) << TTBCR_SL0_SHIFT;
>   
>   	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
>   
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 5fdb6e8183..ab02b36a03 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -12,6 +12,16 @@
>   
>   /* Holds the bit size of IPAs in p2m tables.  */
>   extern unsigned int p2m_ipa_bits;
> +extern unsigned int p2m_root_level;

... this wants to be part of the #ifdef below.

> +
> +#ifdef CONFIG_ARM_64
> +#define P2M_ROOT_ORDER    p2m_root_order

As you move the define here, you should also move p2m_root_order.

> +#define P2M_ROOT_LEVEL p2m_root_level
> +#else
> +/* First level P2M is always 2 consecutive pages */
> +#define P2M_ROOT_ORDER    1
> +#define P2M_ROOT_LEVEL 1
> +#endif
>   
>   struct domain;

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:54:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:54:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1793.5482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHlD-0000ae-Vz; Fri, 02 Oct 2020 09:53:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1793.5482; Fri, 02 Oct 2020 09:53:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHlD-0000aX-Ra; Fri, 02 Oct 2020 09:53:55 +0000
Received: by outflank-mailman (input) for mailman id 1793;
 Fri, 02 Oct 2020 09:53:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOHlC-0000aR-DC
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:53:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 86440e6d-0b18-4323-952a-47cb42ee2ce4;
 Fri, 02 Oct 2020 09:53:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9FB2AAD39;
 Fri,  2 Oct 2020 09:53:52 +0000 (UTC)
X-Inumbo-ID: 86440e6d-0b18-4323-952a-47cb42ee2ce4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601632432;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mCHj6y8mdiCVDqDTzx/FzYO2RGdpOCYbUB/HCAvXXnE=;
	b=EdZqP89whwOCSUtnqob9vvPhW/FyqKbX5AF0QLUcR/Qy1c7F72gtR1kDiZO6uLcxxWbGyS
	dF7y9zAKHmm/Ixz9yzqePPZs4s2lJLJ+6KLbs/yV3jFMNn6hrwpz70psNXDJ54A1R4epI5
	6sLrQvMHPDBBVoDmIPwemyvLKV6Cm/0=
Subject: Re: Re: [Xen-devel] Xen Solaris support still required? Illumos/Dilos
 Xen
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Igor Kozhukhov <igor@dilos.org>
Cc: =?UTF-8?B?UGFzaSBLw6Rya2vDpGluZW4=?= <pasik@iki.fi>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
References: <9ee9bda5-0333-0482-75aa-81a4d352a77e@suse.com>
 <20161103135632.GF28824@reaktio.net> <20161204165715.GN28824@reaktio.net>
 <E1236D3A-24CA-4ECF-B7C8-547406C54911@dilos.org>
 <642cf596-12bc-6f94-3e2a-e0343a250abc@suse.com>
Message-ID: <746d05db-cbe0-4013-41fb-a4a5b9b71d5c@suse.com>
Date: Fri, 2 Oct 2020 11:53:52 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <642cf596-12bc-6f94-3e2a-e0343a250abc@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.12.16 06:32, Juergen Gross wrote:
> On 04/12/16 18:11, Igor Kozhukhov wrote:
>> Hi Pasi,
>>
>> i’m using both addresses, but probably my @gmail one missed some
>> emails from the mailing list.
>>
>> About DilOS + Xen.
>>
>> i’m using xen-3.4 - an old version that i backported to DilOS based on
>> the old OpenSolaris version and updated to use python2.7, plus some
>> other zfs updates :)
>> i tried to port Xen-4.3, but haven’t finished it yet, because i found
>> no sponsors and i moved to another job without DilOS/illumos
>> activities.
>> Trying to do it in my free time was/is too much overhead.
>>
>> i plan to come back and look at the latest Xen.
>>
>> right now i’m moving the DilOS build env to a more Debian-style build
>> env, with gcc-5.4 as the primary compiler.
>> Also, i have SPARC support in DilOS and it eats some additional free time.
>> please do not drop solaris support :) - i’ll use and update it soon -
>> probably next year.
> 
> Got it. Thanks for the note and good luck for the port!

As a followup after nearly 4 years:

It seems nothing has happened, and the Solaris-specific code in Xen is
bit-rotting further. The latest example is xenstored, which lost an
interface mandatory for Solaris about a year ago (nobody noticed, as the
Solaris-specific parts are neither built nor tested).

I stumbled over this while doing some reorganization of the Xen
libraries and checking all the dependencies between them.

I think at least the no-longer-working Solaris code in xenstored should
be removed now (in theory it would still be possible to use
xenstore-stubdom on Solaris), but I honestly think all the other Solaris
cruft in the Xen tools should go away, too, unless somebody really shows
some interest in it (e.g. by doing some basic build tests and maybe a
small functional test for each release of Xen).

So what does the realistic future of a Solaris dom0 look like? Is there
a non-negligible chance it will be revived in the near future, or can
we remove the Solaris abstractions?


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:55:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1796.5494 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHnA-0000ke-C9; Fri, 02 Oct 2020 09:55:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1796.5494; Fri, 02 Oct 2020 09:55:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHnA-0000kW-8g; Fri, 02 Oct 2020 09:55:56 +0000
Received: by outflank-mailman (input) for mailman id 1796;
 Fri, 02 Oct 2020 09:55:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=igfv=DJ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kOHn9-0000kP-1U
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:55:55 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d70a517e-a9a3-4481-aa9a-d373a94371de;
 Fri, 02 Oct 2020 09:55:53 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id z17so1105004lfi.12
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 02:55:53 -0700 (PDT)
Received: from [192.168.1.6] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id q6sm163532ljg.115.2020.10.02.02.55.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 02 Oct 2020 02:55:51 -0700 (PDT)
X-Inumbo-ID: d70a517e-a9a3-4481-aa9a-d373a94371de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:from:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=jilqFGMThcEMVY+QoTbmMwbZGFodYU3r0tziQEitg7c=;
        b=Jgx/LFTUryIDqsSVmlrq6amZbBqsQ7ZcC3ICyyH0LNxFyDJZfocuWyNHjthmj05Lcu
         jD8nDHK/sIHHFSQ2DJzsx70bs9YAkkWHZGQ7CU7BWHQ+H92bBH5PlUv8mSblH2jbhwbA
         7hBNbLy/ZuVh8Uqa8S0jqlvXKlr2XKtARFs+HFwGO9l5xzx6ufMfo0igr7FLIgvELIVC
         zliPnuIUjGfhfVoaxAWrTLFWLRLZPs5go8ilXJAilXpA71pHIjtoqeluk4avq0/GqNs5
         JfzTMCPNwGJ/roC990kMmiGmF/2105ANm+biYNawPGYp9zgrzbj70x8EYOIuVMlTN2tY
         KYQA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:from:to:cc:references:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=jilqFGMThcEMVY+QoTbmMwbZGFodYU3r0tziQEitg7c=;
        b=RPdBZ+wRd8lWogiUmVx1YirmNpHFzrzzGD8IddWxj32fqCFSUKRMayl03pZmI2pd5y
         XYbSTA70TWdyl1dEd9z9t+ZoqqNovztUvk1v+oIQpd8gZ6EJMz6dc5S45m56EfM0cfBx
         V/xk7EQCDtPmT4uEzyPAcL0pEc6yVtMo27jyH1jfXe+rZuUetkVwwTmoIywwikX9XN//
         qqCIMiwWTo9CfgMmKm1RozRDUCQVG1e9xlNRynRI7q5BzZ/HUA8rNvzauAiHZtXY+DNd
         EniBnuEOizQcGzzXk6dPVjBEeEnuIPM5yz30PIN1Gy+YY99mOAExzONSsIcWZxhleiXU
         WJSw==
X-Gm-Message-State: AOAM532i8hB7H0pRW8UMf7ah7ERwm1pis+cOGZIH0FJ2Tk4OucGYf2Fi
	4aZl/KDgLEbEeD6Cf+v4UeY=
X-Google-Smtp-Source: ABdhPJzmENOXsRROnhmjYAw3jX8ve5Y6KPzHhLxRv/1mI9AiFzz5icjkThsRPNfgtyNAEMsd8jFyzA==
X-Received: by 2002:a19:7006:: with SMTP id h6mr514725lfc.83.1601632552090;
        Fri, 02 Oct 2020 02:55:52 -0700 (PDT)
Subject: Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache
 handling common
From: Oleksandr <olekstysh@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Julien Grall <julien.grall@arm.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
 <1599769330-17656-14-git-send-email-olekstysh@gmail.com>
 <83dfb207-c191-8dad-1474-ce57b6d51102@suse.com>
 <2cab3ca5-0f2b-a813-099f-95bbf54bb9c8@gmail.com>
 <17f1c7d2-7a84-a6a5-4afb-f82e67bc9fd0@suse.com>
 <0fa6a31c-8da6-2a0a-b110-a697f4955702@gmail.com>
 <3abe3988-f1c0-9bbf-1ff9-ce3ae380c825@suse.com>
 <47ecdde7-6575-bee8-7981-7b1a31715a0b@gmail.com>
Message-ID: <0aa9a225-1231-fa98-f2a1-caf898a3ed86@gmail.com>
Date: Fri, 2 Oct 2020 12:55:45 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <47ecdde7-6575-bee8-7981-7b1a31715a0b@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


Hi Jan,


On 25.09.20 16:05, Oleksandr wrote:
>
> On 25.09.20 10:03, Jan Beulich wrote:
>
> Hi Jan.
>
>> On 24.09.2020 18:45, Oleksandr wrote:
>>> On 24.09.20 14:16, Jan Beulich wrote:
>>>
>>> Hi Jan
>>>
>>>> On 22.09.2020 21:32, Oleksandr wrote:
>>>>> On 16.09.20 11:50, Jan Beulich wrote:
>>>>>> On 10.09.2020 22:22, Oleksandr Tyshchenko wrote:
>>>>>>> --- a/xen/common/memory.c
>>>>>>> +++ b/xen/common/memory.c
>>>>>>> @@ -1651,6 +1651,11 @@ long do_memory_op(unsigned long cmd, 
>>>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>             break;
>>>>>>>         }
>>>>>>>     +#ifdef CONFIG_IOREQ_SERVER
>>>>>>> +    if ( op == XENMEM_decrease_reservation )
>>>>>>> +        curr_d->qemu_mapcache_invalidate = true;
>>>>>>> +#endif
>>>>>> I don't see why you put this right into decrease_reservation(). This
>>>>>> isn't just to avoid the extra conditional, but first and foremost to
>>>>>> avoid bypassing the earlier return from the function (in the case of
>>>>>> preemption). In the context of this I wonder whether the ordering of
>>>>>> operations in hvm_hypercall() is actually correct.
>>>>> Good point, indeed we may return earlier in case of preemption, I 
>>>>> missed
>>>>> that.
>>>>> Will move it to decrease_reservation(). But, we may return even 
>>>>> earlier
>>>>> in case of error...
>>>>> Now I am wondering whether we should move it to the very beginning
>>>>> of command processing or not?
>>>> In _this_ series I'd strongly recommend you keep things working as
>>>> they are. If independently you think you've found a reason to
>>>> re-order certain operations, then feel free to send a patch with
>>>> suitable justification.
>>> Of course, I will try to retain current behavior.
>>>
>>>
>>>>>> I'm also unconvinced curr_d is the right domain in all cases here;
>>>>>> while this may be a pre-existing issue in principle, I'm afraid it
>>>>>> gets more pronounced by the logic getting moved to common code.
>>>>> Sorry I didn't get your concern here.
>>>> Well, you need to be concerned whose qemu_mapcache_invalidate flag
>>>> you set.
>>> May I ask, in what cases the *curr_d* is the right domain?
>> When a domain does a decrease-reservation on itself. I thought
>> that's obvious. But perhaps your question was rather meant as to
>> whether a->domain ever is _not_ the right one?
> No, my question was about *curr_d*. I saw your answer
> > I'm also unconvinced curr_d is the right domain in all cases here;
> and just wanted to clarify these cases. Sorry if I was unclear.
>
>
>>
>>> We need to make sure that domain is using IOREQ server(s) at least.
>>> Hopefully, we have a helper for this
>>> which is hvm_domain_has_ioreq_server(). Please clarify, anything else I
>>> should be taking care of?
>> Nothing I can recall / think of right now, except that the change
>> may want to come under a different title and with a different
>> description. As indicated, I don't think this is correct for PVH
>> Dom0 issuing the request against a HVM DomU, and addressing this
>> will likely want this moved out of hvm_memory_op() anyway. Of
>> course an option is to split this into two patches - the proposed
>> bug fix (perhaps wanting backporting) and then the moving of the
>> field out of arch.hvm. If you feel uneasy about the bug fix part,
>> let me know and I (or maybe Roger) will see to put together a
>> patch.
>
> Thank you for the clarification.
>
> Yes, it would be really nice if you (or maybe Roger) could create a 
> patch for the bug fix part.


Thank you for your patch [1].

If I got it correctly, there won't be a suitable common place to set the
qemu_mapcache_invalidate flag anymore, as XENMEM_decrease_reservation is
not the only place where we need to decide whether to set it.
By analogy, on Arm we probably want to do so in
guest_physmap_remove_page (or maybe better in p2m_remove_mapping).
Julien, what do you think?


I will modify the current patch so that it does not alter the common code.


[1] https://patchwork.kernel.org/patch/11803383/

-- 

Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 09:58:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 09:58:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1798.5506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHpm-0000uV-Qh; Fri, 02 Oct 2020 09:58:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1798.5506; Fri, 02 Oct 2020 09:58:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHpm-0000uO-Nd; Fri, 02 Oct 2020 09:58:38 +0000
Received: by outflank-mailman (input) for mailman id 1798;
 Fri, 02 Oct 2020 09:58:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z+Ma=DJ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kOHpl-0000uI-Fx
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 09:58:37 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10fba5cb-e58b-40c8-905b-fccb8e83145b;
 Fri, 02 Oct 2020 09:58:35 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id z1so1116726wrt.3
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 02:58:35 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id q18sm1109551wre.78.2020.10.02.02.58.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 02:58:33 -0700 (PDT)
X-Inumbo-ID: 10fba5cb-e58b-40c8-905b-fccb8e83145b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=mEkpAjpBBFBC2OSbPf1rcPxheU+j8eHJKLr+XyTj9VA=;
        b=Kv9fFmwsLnH3rxc4YORHIlCmE7gMf2ANx+U1VpZfjN8Kq3n/95x+iLFFYY69i5o/VM
         aXKz1OGxaPgu4YEY2NOXjk9cQA0FvF45c+2KvuEJGmF6wRE5OG+7rgoxKuBnH3Yb2xng
         VP9WbgOJz1wPpF8BMrgVCxTdasm5/jVPq77KY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=mEkpAjpBBFBC2OSbPf1rcPxheU+j8eHJKLr+XyTj9VA=;
        b=mizpbUfPDmiBz9G6zMIHET9lZQVteRWY63KeZza28K/2ywI9aj0OF+Bs0vGJRMa7VC
         SHFKthympYplBBh+fiNsODklxUi0kb8xAALD0ppdUuN0UMFWoxpTdEDMtyjXglr+LREi
         idvt68daVjxRvQ1mnFjxcOi9q5ZhAGf2+T5SlaG79aj8qZSZHTgdZwkLk4C64onsBqPY
         rWOEWYRAEEzld3PdFsdyh/MuS7ibbeaX1/nJC+8p2YnE6SsrpgaRBLMr/GsFTHfD6BBr
         KjqoWbsEvtoiCK2GGPuaFuLudQFnCtDcai2gqOlCQreXFN2JfE/SDjsTZjXHrX4HW3Qq
         wRrQ==
X-Gm-Message-State: AOAM5335HFX+fuVJz4uWNvkTtliTxHWfLCM7zmG2G0BJODi/6uqUHWvD
	q0vb36RULYKWbC1WgXQYSqUPPg==
X-Google-Smtp-Source: ABdhPJyaxWM6azv8IiEEW0NJpIjAdBSSpixIyMrO0zJEmyGJ5I3RnUPpVbxcJ/Exwy9Zm9yRUkU4Bg==
X-Received: by 2002:a5d:5751:: with SMTP id q17mr2092527wrw.409.1601632714647;
        Fri, 02 Oct 2020 02:58:34 -0700 (PDT)
Date: Fri, 2 Oct 2020 11:58:30 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Christian =?iso-8859-1?Q?K=F6nig?= <christian.koenig@amd.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>,
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
	Maxime Ripard <mripard@kernel.org>, Dave Airlie <airlied@linux.ie>,
	Sam Ravnborg <sam@ravnborg.org>,
	Alex Deucher <alexander.deucher@amd.com>,
	Gerd Hoffmann <kraxel@redhat.com>,
	Lucas Stach <l.stach@pengutronix.de>,
	Russell King <linux+etnaviv@armlinux.org.uk>,
	Christian Gmeiner <christian.gmeiner@gmail.com>,
	Inki Dae <inki.dae@samsung.com>,
	Joonyoung Shim <jy0922.shim@samsung.com>,
	Seung-Woo Kim <sw0312.kim@samsung.com>,
	Kyungmin Park <kyungmin.park@samsung.com>,
	Kukjin Kim <kgene@kernel.org>,
	Krzysztof Kozlowski <krzk@kernel.org>, Qiang Yu <yuq825@gmail.com>,
	Ben Skeggs <bskeggs@redhat.com>, Rob Herring <robh@kernel.org>,
	Tomeu Vizoso <tomeu.vizoso@collabora.com>,
	Steven Price <steven.price@arm.com>,
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
	Sandy Huang <hjc@rock-chips.com>,
	Heiko =?iso-8859-1?Q?St=FCbner?= <heiko@sntech.de>,
	Hans de Goede <hdegoede@redhat.com>, Sean Paul <sean@poorly.run>,
	"Anholt, Eric" <eric@anholt.net>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Huang Rui <ray.huang@amd.com>,
	Sumit Semwal <sumit.semwal@linaro.org>,
	Emil Velikov <emil.velikov@collabora.com>,
	Luben Tuikov <luben.tuikov@amd.com>, apaneers@amd.com,
	Linus Walleij <linus.walleij@linaro.org>,
	Melissa Wen <melissa.srw@gmail.com>,
	"Wilson, Chris" <chris@chris-wilson.co.uk>,
	Qinglang Miao <miaoqinglang@huawei.com>,
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
	lima@lists.freedesktop.org,
	Nouveau Dev <nouveau@lists.freedesktop.org>,
	The etnaviv authors <etnaviv@lists.freedesktop.org>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	"open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
	"moderated list:DMA BUFFER SHARING FRAMEWORK" <linaro-mm-sig@lists.linaro.org>,
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	"open list:DRM DRIVER FOR QXL VIRTUAL GPU" <spice-devel@lists.freedesktop.org>,
	"moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	"open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
Message-ID: <20201002095830.GH438822@phenom.ffwll.local>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com>
 <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
 <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
 <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com>
 <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
> On Wed, Sep 30, 2020 at 2:34 PM Christian König
> <christian.koenig@amd.com> wrote:
> >
> > Am 30.09.20 um 11:47 schrieb Daniel Vetter:
> > > On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> > >> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann:
> > >>> Hi
> > >>>
> > >>> Am 30.09.20 um 10:05 schrieb Christian König:
> > >>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann:
> > >>>>> Hi Christian
> > >>>>>
> > >>>>> Am 29.09.20 um 17:35 schrieb Christian König:
> > >>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
> > >>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location
> > >>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> > >>>>>>> with these values. Helpful for TTM-based drivers.
> > >>>>>> We could completely drop that if we use the same structure inside TTM as
> > >>>>>> well.
> > >>>>>>
> > >>>>>> Additional to that which driver is going to use this?
> > >>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> > >>>>> retrieve the pointer via this function.
> > >>>>>
> > >>>>> I do want to see all that being more tightly integrated into TTM, but
> > >>>>> not in this series. This one is about fixing the bochs-on-sparc64
> > >>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> > >>>> I should have asked which driver you try to fix here :)
> > >>>>
> > >>>> In this case just keep the function inside bochs and only fix it there.
> > >>>>
> > >>>> All other drivers can be fixed when we generally pump this through TTM.
> > >>> Did you take a look at patch 3? This function will be used by VRAM
> > >>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> > >>> have to duplicate the functionality in each if these drivers. Bochs
> > >>> itself uses VRAM helpers and doesn't touch the function directly.
> > >> Ah, ok can we have that then only in the VRAM helpers?
> > >>
> > >> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> > >> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> > >>
> > >> What I want to avoid is to have another conversion function in TTM because
> > >> what happens here is that we already convert from ttm_bus_placement to
> > >> ttm_bo_kmap_obj and then to dma_buf_map.
> > > Hm I'm not really seeing how that helps with a gradual conversion of
> > > everything over to dma_buf_map and assorted helpers for access? There are
> > > too many places in ttm drivers where is_iomem and related stuff is used to
> > > be able to convert it all in one go. An intermediate state with a bunch of
> > > conversions seems fairly unavoidable to me.
> >
> > Fair enough. I would just have started bottom up and not top down.
> >
> > Anyway feel free to go ahead with this approach as long as we can remove
> > the new function again when we clean that stuff up for good.
> 
> Yeah I guess bottom up would make more sense as a refactoring. But the
> main motivation to land this here is to fix the __mmio vs normal
> memory confusion in the fbdev emulation helpers for sparc (and
> anything else that needs this). Hence the top down approach for
> rolling this out.

Ok, I started reviewing this a bit more in depth, and I think this is a
bit too much of a detour.

Looking through all the callers of ttm_bo_kmap, almost everyone maps the
entire object. Only vmwgfx maps less than that. Also, everyone just
immediately follows up by converting that full-object map into a
pointer.

So I think what we really want here is:
- new function

int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);

  The _vmap name is consistent with both the dma_buf functions and
  what's usually used to implement this. Outside of the ttm world, kmap
  usually just means single-page mappings using kmap() or its iomem
  sibling io_mapping_map*, so it's a rather confusing name for a
  function which is usually just used to set up a vmap of the entire
  buffer.

- a helper which can be used for the drm_gem_object_funcs vmap/vunmap
  functions for all ttm drivers. We should be able to make this fully
  generic because a) we now have dma_buf_map and b) drm_gem_object is
  embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
  and gem driver.

  This is maybe a good follow-up, since it should allow us to ditch quite
  a bit of the vram helper code for this more generic stuff. I also might
  have missed some special-cases here, but from a quick look everything
  just pins the buffer to the current location and that's it.

  Also this obviously requires Christian's generic ttm_bo_pin rework
  first.

- roll the above out to drivers.
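
Modulo all the kernel details, the tagged-pointer idea behind this could be
sketched in plain userspace C like so. This is only a sketch: fake_bo, its
fields and the -1 error return are made-up stand-ins, and there are no
__iomem annotations here; the names merely mirror the patch, this is not
the actual TTM/dma-buf code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * One structure carrying either a system-memory or an I/O-memory
 * pointer, plus a flag saying which member is valid.
 */
struct dma_buf_map {
	union {
		void *vaddr_iomem;
		void *vaddr;
	};
	bool is_iomem;
};

static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
					       void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}

static inline bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return map->vaddr == NULL;
}

/* Made-up stand-in for a ttm_buffer_object, just for this sketch. */
struct fake_bo {
	void *backing;		/* stands in for the vmap of the buffer */
	bool backing_is_iomem;	/* stands in for the placement check    */
};

/*
 * Hypothetical ttm_bo_vmap() as proposed above: map the whole buffer
 * and report the result through the dma_buf_map, so callers never have
 * to look at an is_iomem flag themselves.
 */
static int ttm_bo_vmap(struct fake_bo *bo, struct dma_buf_map *map)
{
	if (!bo->backing)
		return -1;	/* stands in for -ENOMEM */
	if (bo->backing_is_iomem)
		dma_buf_map_set_vaddr_iomem(map, bo->backing);
	else
		dma_buf_map_set_vaddr(map, bo->backing);
	return 0;
}
```

The point of the single entry point is that the iomem decision is made once,
inside the mapping function, instead of being re-derived by every caller.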

Christian/Thomas, thoughts on this?

I think for the immediate need of rolling this out for the vram helpers and
fbdev code we should be able to do this, and just postpone the driver-wide
roll-out for now.

Cheers, Daniel

> -Daniel
> 
> >
> > Christian.
> >
> > > -Daniel
> > >
> > >> Thanks,
> > >> Christian.
> > >>
> > >>> Best regards
> > >>> Thomas
> > >>>
> > >>>> Regards,
> > >>>> Christian.
> > >>>>
> > >>>>> Best regards
> > >>>>> Thomas
> > >>>>>
> > >>>>>> Regards,
> > >>>>>> Christian.
> > >>>>>>
> > >>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > >>>>>>> ---
> > >>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> > >>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> > >>>>>>>     2 files changed, 44 insertions(+)
> > >>>>>>>
> > >>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > >>>>>>> index c96a25d571c8..62d89f05a801 100644
> > >>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> > >>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> > >>>>>>> @@ -34,6 +34,7 @@
> > >>>>>>>     #include <drm/drm_gem.h>
> > >>>>>>>     #include <drm/drm_hashtab.h>
> > >>>>>>>     #include <drm/drm_vma_manager.h>
> > >>>>>>> +#include <linux/dma-buf-map.h>
> > >>>>>>>     #include <linux/kref.h>
> > >>>>>>>     #include <linux/list.h>
> > >>>>>>>     #include <linux/wait.h>
> > >>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
> > >>>>>>> ttm_bo_kmap_obj *map,
> > >>>>>>>         return map->virtual;
> > >>>>>>>     }
> > >>>>>>>     +/**
> > >>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> > >>>>>>> + *
> > >>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> > >>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> > >>>>>>> + *
> > >>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> > >>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> > >>>>>>> + */
> > >>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
> > >>>>>>> *kmap,
> > >>>>>>> +                           struct dma_buf_map *map)
> > >>>>>>> +{
> > >>>>>>> +    bool is_iomem;
> > >>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> > >>>>>>> +
> > >>>>>>> +    if (!vaddr)
> > >>>>>>> +        dma_buf_map_clear(map);
> > >>>>>>> +    else if (is_iomem)
> > >>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> > >>>>>>> +    else
> > >>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> > >>>>>>> +}
> > >>>>>>> +
> > >>>>>>>     /**
> > >>>>>>>      * ttm_bo_kmap
> > >>>>>>>      *
> > >>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > >>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> > >>>>>>> --- a/include/linux/dma-buf-map.h
> > >>>>>>> +++ b/include/linux/dma-buf-map.h
> > >>>>>>> @@ -45,6 +45,12 @@
> > >>>>>>>      *
> > >>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > >>>>>>>      *
> > >>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > >>>>>>> + *
> > >>>>>>> + * .. code-block:: c
> > >>>>>>> + *
> > >>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > >>>>>>> + *
> > >>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
> > >>>>>>>      * dma_buf_map_is_null().
> > >>>>>>>      *
> > >>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > >>>>>>> dma_buf_map *map, void *vaddr)
> > >>>>>>>         map->is_iomem = false;
> > >>>>>>>     }
> > >>>>>>>     +/**
> > >>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > >>>>>>> an address in I/O memory
> > >>>>>>> + * @map:        The dma-buf mapping structure
> > >>>>>>> + * @vaddr_iomem:    An I/O-memory address
> > >>>>>>> + *
> > >>>>>>> + * Sets the address and the I/O-memory flag.
> > >>>>>>> + */
> > >>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > >>>>>>> +                           void __iomem *vaddr_iomem)
> > >>>>>>> +{
> > >>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> > >>>>>>> +    map->is_iomem = true;
> > >>>>>>> +}
> > >>>>>>> +
> > >>>>>>>     /**
> > >>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > >>>>>>> for equality
> > >>>>>>>      * @lhs:    The dma-buf mapping structure
> > >>>>>> _______________________________________________
> > >>>>>> dri-devel mailing list
> > >>>>>> dri-devel@lists.freedesktop.org
> > >>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> > >>>>> _______________________________________________
> > >>>>> amd-gfx mailing list
> > >>>>> amd-gfx@lists.freedesktop.org
> > >>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >
> 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:08:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:08:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1802.5517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHzd-0001x9-0m; Fri, 02 Oct 2020 10:08:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1802.5517; Fri, 02 Oct 2020 10:08:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOHzc-0001x2-TR; Fri, 02 Oct 2020 10:08:48 +0000
Received: by outflank-mailman (input) for mailman id 1802;
 Fri, 02 Oct 2020 10:08:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=plqz=DJ=dilos.org=igor@srs-us1.protection.inumbo.net>)
 id 1kOHzb-0001ww-GK
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:08:47 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03a37920-58cb-4e82-8cc4-909f5c3a8658;
 Fri, 02 Oct 2020 10:08:46 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id w11so1185451lfn.2
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 03:08:46 -0700 (PDT)
Received: from [192.168.88.253] ([91.204.57.91])
 by smtp.gmail.com with ESMTPSA id q4sm210805lfm.46.2020.10.02.03.08.43
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 02 Oct 2020 03:08:44 -0700 (PDT)
From: Igor Kozhukhov <igor@dilos.org>
Message-Id: <01D5E481-0221-440F-865C-F362F344295C@dilos.org>
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_F890F14A-34A2-4694-B9AD-946C1BC81686"
Mime-Version: 1.0 (Mac OS X Mail 13.4 \(3608.120.23.2.1\))
Subject: Re: [Xen-devel] Xen Solaris support still required? Illumos/Dilos Xen
Date: Fri, 2 Oct 2020 13:08:43 +0300
In-Reply-To: <746d05db-cbe0-4013-41fb-a4a5b9b71d5c@suse.com>
Cc: =?utf-8?B?UGFzaSBLw6Rya2vDpGluZW4=?= <pasik@iki.fi>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <9ee9bda5-0333-0482-75aa-81a4d352a77e@suse.com>
 <20161103135632.GF28824@reaktio.net> <20161204165715.GN28824@reaktio.net>
 <E1236D3A-24CA-4ECF-B7C8-547406C54911@dilos.org>
 <642cf596-12bc-6f94-3e2a-e0343a250abc@suse.com>
 <746d05db-cbe0-4013-41fb-a4a5b9b71d5c@suse.com>
X-Mailer: Apple Mail (2.3608.120.23.2.1)


--Apple-Mail=_F890F14A-34A2-4694-B9AD-946C1BC81686
Content-Type: text/plain;
	charset=utf-8

Hi All,

sorry for the long delay with Xen.

we have plans for Xen support on DilOS.
i have made some changes on the dilos-illumos side for it.
but OpenZFS updates have priority, so Xen support will come a
little bit later.

we plan to work on Xen in 2021, but it all depends on business needs
and investment.

best regards,
-Igor

> On 2 Oct 2020, at 12:53, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 05.12.16 06:32, Juergen Gross wrote:
>> On 04/12/16 18:11, Igor Kozhukhov wrote:
>>> Hi Pasi,
>>>
>>> i'm using both addresses, but probably @gmail missed some emails with
>>> the maillist.
>>>
>>> About DilOS + Xen.
>>>
>>> i'm using xen-3.4 - an old version that i backported to DilOS based on
>>> an old opensolaris version and updated to use python2.7 and some other
>>> zfs updates - more updates :)
>>> i tried to port Xen-4.3, but haven't finished it yet because i found no
>>> sponsors and i moved to another job without DilOS/illumos activities.
>>> trying to do it in my free time was/is overhead.
>>>
>>> i have plans to return and look at the latest Xen.
>>>
>>> right now i'm trying to move the DilOS build env to a more Debian-style
>>> build env and to use gcc-5.4 as the primary compiler.
>>> Also, i have SPARC support in DilOS and it eats some additional free time.
>>> please do not drop solaris support :) - i'll use and update it soon -
>>> probably next year.
>> Got it. Thanks for the note and good luck for the port!
> 
> As a followup after nearly 4 years:
> 
> It seems nothing has happened, and Solaris specific coding in Xen is
> bit-rotting further. Last example is xenstored, which lost an interface
> mandatory for Solaris about 1 year ago (nobody noticed, as Solaris
> specific parts are neither built nor tested).
> 
> I stumbled over this one as I did some reorg of the Xen libraries and
> checked all the dependencies between those.
> 
> I think at least the no-longer-working Solaris stuff in xenstored should
> be removed now (in theory it would still be possible to use xenstore-
> stubdom in Solaris), but I honestly think all the other Solaris cruft in
> the Xen tools should go away, too, if nobody is really showing any
> interest in it (e.g. by doing some basic build tests and maybe a small
> functional test for each release of Xen).
> 
> So what does the realistic future of a Solaris dom0 look like? Is there
> a non-negligible chance it will be revived in the near future, or can
> we remove the Solaris abstractions?
> 
> 
> Juergen


word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: =
none; float: none; display: inline !important;" class=3D"">we remove the =
Solaris abstractions?</span><br style=3D"caret-color: rgb(0, 0, 0); =
font-family: Helvetica; font-size: 14px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none;" class=3D""><br style=3D"caret-color: rgb(0, 0, =
0); font-family: Helvetica; font-size: 14px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none;" class=3D""><br style=3D"caret-color: rgb(0, 0, =
0); font-family: Helvetica; font-size: 14px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none;" class=3D""><span style=3D"caret-color: rgb(0, 0, =
0); font-family: Helvetica; font-size: 14px; font-style: normal; =
font-variant-caps: normal; font-weight: normal; letter-spacing: normal; =
text-align: start; text-indent: 0px; text-transform: none; white-space: =
normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; =
text-decoration: none; float: none; display: inline !important;" =
class=3D"">Juergen</span></div></blockquote></div><br =
class=3D""></div></body></html>=

--Apple-Mail=_F890F14A-34A2-4694-B9AD-946C1BC81686--


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:12:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:12:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1804.5530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOI3W-0002nE-Kr; Fri, 02 Oct 2020 10:12:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1804.5530; Fri, 02 Oct 2020 10:12:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOI3W-0002n7-Fk; Fri, 02 Oct 2020 10:12:50 +0000
Received: by outflank-mailman (input) for mailman id 1804;
 Fri, 02 Oct 2020 10:12:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOI3V-0002n2-Og
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:12:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 445a1fdf-eea1-4dad-be5a-2de031773575;
 Fri, 02 Oct 2020 10:12:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOI3U-00043B-2t; Fri, 02 Oct 2020 10:12:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOI3T-0003Ue-SL; Fri, 02 Oct 2020 10:12:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOI3T-0007na-Ru; Fri, 02 Oct 2020 10:12:47 +0000
X-Inumbo-ID: 445a1fdf-eea1-4dad-be5a-2de031773575
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PaX3qRusO9IdW0r4SVT/XTrVk1U0EXmS9+15ELJaHyA=; b=BnPS+wUxgGlrn0gNVQWlKvVand
	caFjJKwcU6omFWEOznVPJI3qqGm4K7+ZjPLzZpSLx91Wf7U57SICEd3XBmnYXrm/5oyyQPf2ZjRL8
	qSga2pQgBKUEAkdNS4H1I8o17/KO73xuduBI6SpuyVSvLNyJEAoeVERajcFzJTin0gzE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155310-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155310: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=59b27f360e3d9dc0378c1288e67a91fa41a77158
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 10:12:47 +0000

flight 155310 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155310/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  59b27f360e3d9dc0378c1288e67a91fa41a77158
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    2 days
Failing since        155144  2020-09-30 16:01:24 Z    1 days   13 attempts
Testing same since   155310  2020-10-02 08:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 387 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:13:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:13:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1806.5544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOI3n-0002sF-5a; Fri, 02 Oct 2020 10:13:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1806.5544; Fri, 02 Oct 2020 10:13:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOI3n-0002s6-0x; Fri, 02 Oct 2020 10:13:07 +0000
Received: by outflank-mailman (input) for mailman id 1806;
 Fri, 02 Oct 2020 10:13:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOI3l-0002rk-NA
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:13:05 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.41]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea134896-99d4-4d85-94f4-cd72f2d7c819;
 Fri, 02 Oct 2020 10:13:04 +0000 (UTC)
Received: from DB6PR1001CA0029.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:55::15)
 by DB7PR08MB3756.eurprd08.prod.outlook.com (2603:10a6:10:79::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Fri, 2 Oct
 2020 10:13:01 +0000
Received: from DB5EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:55:cafe::d) by DB6PR1001CA0029.outlook.office365.com
 (2603:10a6:4:55::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32 via Frontend
 Transport; Fri, 2 Oct 2020 10:13:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT063.mail.protection.outlook.com (10.152.20.209) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 10:13:01 +0000
Received: ("Tessian outbound 34b830c8a0ef:v64");
 Fri, 02 Oct 2020 10:13:01 +0000
Received: from b62a67fa25e2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C40885B0-9C7E-438A-B0D7-C63B82DF1E75.1; 
 Fri, 02 Oct 2020 10:12:23 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b62a67fa25e2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 10:12:23 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5081.eurprd08.prod.outlook.com (2603:10a6:10:e5::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38; Fri, 2 Oct
 2020 10:12:22 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 10:12:22 +0000
X-Inumbo-ID: ea134896-99d4-4d85-94f4-cd72f2d7c819
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xM2AM7ZoE/80Nc9Pbia61OrulRWXjE18A6I508xxGRA=;
 b=HIhLB9+eOGxfJwM0YBAieYmiZ5FRipDQe90CvBbiIpHwh2zIe0Y8D7qCFMGvjqeCKewMCTmucMGB14X9ThZ+ysjd2dYKLV2+UvM0ts/EkDIvm+KxrUv36QhJB/nmpcikllTkw2cJH1nOPhJHgZBBBqToivbjFFqAxCpCxiDc7lU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 573baf49a94c395d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NwMKlqNfS1Qf2g3MwXhKbTeWFdtG7BEdpZ6LEqsQESGCrtOijtuvcb0MH8wufWiHgt+X0LbR4ezCEoc9EJQDt0pCtWlEcnteoYyBLYhUAUc6ffHNcW1S3/ehogRRpwOq53Ox8Bdxxe/Iin6cHdRILwNSM0DiCTwzilvl/mPFBBpc69QSHkfkPN86F2YHoWctjPY1VvnJ3kCm2ox//zHXIk/FSJ3GKJCv9Zgb+PBrqasb8EIasG8VrcoNCO+b+TegFYmpxiUwULk2Pz9IyL8UfIgZkLNiQ5CSVeMwSVnkf2NEYHbdukAGmY1EerGqewzB+Wsa3uJo6AGSv7K9iWDCnQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xM2AM7ZoE/80Nc9Pbia61OrulRWXjE18A6I508xxGRA=;
 b=TS8Ww+Tdd1CzwmYHrvFVwaSZDYcNAdTvavfX33v6IlbHrSmfI8lPL3bqcqwvW4hM6Uays9QlOka0U+N8y45O5Z647oYbSa65Xj1zRm8T8M45Dj1+nRL2zVqvjh7v+gFC8Lkmbz6CJaT3cZ61XzUFZVl5Ngmhvuug0Xg8oheZtiyng9wxQzmy/jjvrO8OS9Tu8wfuGcxjF86wlQJ27BgKALQbO+7wew3NWyKmYUH5/aS4FdGAqAz7tETwZmgSvM/ru4Ha8B7LfJrHuE9APkSSe64xG8drOgjUg5unp+N/auebKngK+IfmFU5TF7Ih27ZCahV+UsM+D130YmpZgaoMzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: Jan Beulich <jbeulich@suse.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
Thread-Topic: [PATCH v3] tools/libs/stat: fix broken build
Thread-Index:
 AQHWiQYtlvZlYAgRJ0WaZGJ4HThwo6ln+foAgBsWa4CAAMywgIAAGQiAgAAIpgCAAAJuAIAABycAgAAuo4A=
Date: Fri, 2 Oct 2020 10:12:22 +0000
Message-ID: <5B52FDF2-18DA-4342-9280-0D497FAB6532@arm.com>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
 <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
 <8ddad01e-cf1a-7752-1371-a505fb26dc47@suse.com>
 <90a39759-63c1-28b9-f112-d8b3cc083565@suse.com>
 <558774ab-92cb-90ae-3936-4f9cc9d56fd0@suse.com>
In-Reply-To: <558774ab-92cb-90ae-3936-4f9cc9d56fd0@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 93f184e1-37e8-4a63-401d-08d866bbbe79
x-ms-traffictypediagnostic: DB8PR08MB5081:|DB7PR08MB3756:
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB37563BEB7EEB4D5A5CD7C2F29D310@DB7PR08MB3756.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 wZQ2UKbqlvEq3QK69HqcV7XV/ODdPgx7HKqH6HHWLhAsTNf+sC/UovURphrvKW1YGgCfQHH05lRJ5HW7IxU0/ULLwEDG3gFJ7hG5/oM4PsIs6GgYjypILgRtwIOvJ/l6WOF+iLg4BT7V18jHx5THQIkC5Fyqqp1bFg5elOxphmbG7GWUHljajPeKXyO63EL+T0b67jmFri2DChvNn5UwbYD71j0PGNsn4YZmf5HtFDdGq6nYbWkd0IomM8UbhHrVJPIo+Imrt/zOmA/HE/dW9hLy8qbrdJCO3+wcV76/t23sLAK2Pz/VlRX8jiTLR2x8pvFxzFAj3szki7cK7/rgWA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(39860400002)(366004)(396003)(346002)(71200400001)(86362001)(8676002)(186003)(6486002)(478600001)(2906002)(64756008)(66446008)(66556008)(2616005)(4326008)(66946007)(6506007)(66476007)(53546011)(6916009)(26005)(6512007)(33656002)(54906003)(36756003)(83380400001)(66574015)(5660300002)(316002)(8936002)(76116006)(91956017);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 5l9647iNfr9lUEF76/PJQ0/Z/X3Vit9Fk4ysy0CAFv52wnLTsTzR1bUZxESS3+4Cy3KwAzCv8kuUVaHzkVNej+IJQ6xzipGFN6cSwpRmOjzw44iuotwDeraM7g/AGNh5nkkyN4aqcLLWCOZSv78P3bHZLSB8d6M74OZgVBSDs95+UhKy1elN+u6+xEttuEJXRSNKLe8fSiPmHL2I0oHTZZ+OARAQn6HiUwZR9nw8xst59jGta1Jsr0fskiEolo7Qqvg1oBsZkcSO6jhGNS3fPHQekOQb4oMkCEjeDg6EfiyUtPFT8AXUfr6rO5IlGUenYUnPn/FJchxzsfz6H1p7SICx9uyGc0EsElLHkgGARpkj9f4bfz6nOZ63514RK5wYWBaNijJ7Nao0E5vgBSeBTbslQyndVhXdHE0XrtNid3uPduyrsXEwi0Ep6OaN0H/LqWMyFTrScYAf55diF6QtjL8ZnPI7mV0ViL8SDmLRyb5G84kHH25+iZtImILleuE6pYgtlZEUuTf1YcVlS/Vrr/5GOELyv4lMG0DVuAeu5jbIAsKBM+NAVyrlbtmeh2pyHfM5rrV1EzuwZBibYL3RcsfMfFYLBq8D0RSs3Z3YTzKnQdALyXwzMdOAwpm724sQFvS50FTsYfCJU2vuAJRk9Q==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F1213323E79DFF4E8EBDD5A3AE0F9270@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5081
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	78465c39-d35a-4b4e-6d12-08d866bba74d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wg8iegOOmhicraXPl8sNLmeZo5XPeTCvl4NbRa/Q/aBGUxLWPtOW7ymyD8tSya6SC1yeUGRFw2afJXqx4lvsbw0jR09GBCNkGo+t2CkVedcb3cxZGGq8FaZb5Ki1T0oJiYP0osOQ7LjVvwrcy7Y2J86V9HA6xzyF0VoZyNBvaPk1d9yECgyShto0Lj7BYiHQoRCCkTMVmt2PgkiWjjRJIvN9BbHk3GwJqCbdLJ+j186psIqWizhUm7zAXKCygjKhOpjKquWdrM1KUFs1X2dTyObo5swK0vRYCwRn43LBP1jaHM40KSJJAJUF/64xRWkeSg3EHa2cnMomLxiRP5RApASxCWrAXBpJXCYxTvFIU8k6caZQtcFAgM33LiIiYAlW7u+FcD2QvNC4CNIOhFh7uw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(39850400004)(136003)(46966005)(83380400001)(478600001)(47076004)(82740400003)(70206006)(54906003)(26005)(356005)(70586007)(81166007)(316002)(2906002)(2616005)(6862004)(6486002)(33656002)(66574015)(8676002)(5660300002)(4326008)(6512007)(6506007)(82310400003)(336012)(86362001)(36756003)(8936002)(186003)(53546011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 10:13:01.3400
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 93f184e1-37e8-4a63-401d-08d866bbbe79
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3756

Hi,

> On 2 Oct 2020, at 08:25, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 02.10.20 08:59, Jan Beulich wrote:
>> On 02.10.2020 08:51, Jürgen Groß wrote:
>>> On 02.10.20 08:20, Jan Beulich wrote:
>>>> On 02.10.2020 06:50, Jürgen Groß wrote:
>>>>> On 01.10.20 18:38, Bertrand Marquis wrote:
>>>>>> Hi Juergen,
>>>>>> 
>>>>>>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>>>>> 
>>>>>>> 
>>>>>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>>>>> 
>>>>>>>> Making getBridge() static triggered a build error with some gcc versions:
>>>>>>>> 
>>>>>>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>>>>>>> length 255 [-Werror=stringop-truncation]
>>>>>>>> 
>>>>>>>> Fix that by using a buffer with 256 bytes instead.
>>>>>>>> 
>>>>>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>> 
>>>>>> Sorry i have to come back on this one.
>>>>>> 
>>>>>> I still see an error compiling with Yocto on this one:
>>>>>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>>>>>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
>>>>>> |    81 |      strncpy(result, de->d_name, resultLen);
>>>>>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>> 
>>>>>> To solve it, I need to define devBridge[257] as devNoBrideg.
>>>>> 
>>>>> IMHO this is a real compiler error.
>>>>> 
>>>>> de->d_name is an array of 256 bytes, so doing strncpy() from that to
>>>>> another array of 256 bytes with a length of 256 won't truncate anything.
>>>> 
>>>> That's a matter of how you look at it, I think: If the original array
>>>> doesn't hold a nul-terminated string, the destination array won't
>>>> either, yet the common goal of strncpy() is to yield a properly nul-
>>>> terminated string. IOW the warning may be since the standard even has
>>>> a specific foot note to point out this possible pitfall.
>>> 
>>> If the source doesn't hold a nul-terminated string there will still be
>>> 256 bytes copied, so there is no truncation done during strncpy().
>>> 
>>> In fact there is no way to use strncpy() in a safe way on a fixed sized
>>> source array with the above semantics: either the target is larger than
>>> the source and length is at least sizeof(source) + 1, resulting in a
>>> possible read beyond the end of source, or the target is the same length
>>> leading to the error.
>> I agree with all of what you say, but I can also see why said foot note
>> alone may have motivated the emission of the warning.
> 
> The motivation can be explained, yes, but it is wrong. strncpy() is not
> limited to source arrays of unknown length. So this warning is making
> strncpy() unusable for fixed sized source strings and -Werror. And that
> is nothing a compiler should be allowed to do, hence a compiler bug.

I do agree that in this case the compiler is doing too much.

We could also choose to turn off the warning, either using a pragma (which
I really do not like) or by adding a cflag for this specific file (but this
might hit us later in other places).

All in all, this currently makes Xen master and staging impossible to
compile with Yocto, so we need to find a solution, as this will also
come up in any distribution using a new compiler.

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:18:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:18:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1831.5556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOI8i-00039P-P5; Fri, 02 Oct 2020 10:18:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1831.5556; Fri, 02 Oct 2020 10:18:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOI8i-00039I-KQ; Fri, 02 Oct 2020 10:18:12 +0000
Received: by outflank-mailman (input) for mailman id 1831;
 Fri, 02 Oct 2020 10:18:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOI8g-00039D-F1
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:18:10 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.52]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8e6e044c-e586-442e-b840-3889dfbf8f21;
 Fri, 02 Oct 2020 10:18:09 +0000 (UTC)
Received: from AM6PR04CA0004.eurprd04.prod.outlook.com (2603:10a6:20b:92::17)
 by VI1PR08MB4128.eurprd08.prod.outlook.com (2603:10a6:803:e9::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Fri, 2 Oct
 2020 10:17:56 +0000
Received: from AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:92:cafe::cb) by AM6PR04CA0004.outlook.office365.com
 (2603:10a6:20b:92::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32 via Frontend
 Transport; Fri, 2 Oct 2020 10:17:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT012.mail.protection.outlook.com (10.152.16.161) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 10:17:56 +0000
Received: ("Tessian outbound 34b830c8a0ef:v64");
 Fri, 02 Oct 2020 10:17:55 +0000
Received: from 95ef19b0997f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BB8927D8-2852-49A4-8A5E-5098021B946A.1; 
 Fri, 02 Oct 2020 10:17:18 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 95ef19b0997f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 10:17:18 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB2008.eurprd08.prod.outlook.com (2603:10a6:4:77::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Fri, 2 Oct
 2020 10:17:16 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 10:17:16 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: Jan Beulich <jbeulich@suse.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH 0/6] tools/include: adjustments to the population of
 xen/
Thread-Topic: Ping: [PATCH 0/6] tools/include: adjustments to the population
 of xen/
Thread-Index:
 AQHWh2uA51KvLc+DNUCjQhMwaVDUdqmDCd+AgAAHWYCAAAQLgIAAxK+AgABY4YCAAAi8gA==
Date: Fri, 2 Oct 2020 10:17:16 +0000
Message-ID: <B41B6364-77EE-4CD2-AB1D-081A4EA099A0@arm.com>
References: <2a9f86aa-9104-8a45-cd21-72acd693f924@suse.com>
 <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
 <9F53B61A-5A50-46DD-BF5B-75F48C91FCFC@arm.com>
 <6B9403A3-66DC-4A69-8006-096420649768@arm.com>
 <dea68b56-990d-a13f-a2c4-171e67eaaf73@suse.com>
 <A4993E3A-6529-4239-ABF8-DD89A01A54D1@arm.com>
In-Reply-To: <A4993E3A-6529-4239-ABF8-DD89A01A54D1@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 77b31b13-8c0a-41da-d2f7-08d866bc6e35
x-ms-traffictypediagnostic: DB6PR0801MB2008:|VI1PR08MB4128:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB4128CC587FC9A90CD607275E9D310@VI1PR08MB4128.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <C0107AEB7603BA4681991B461E6399C4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB2008
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a326dbf2-0970-466a-c132-08d866bc56ac
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 10:17:56.1231
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 77b31b13-8c0a-41da-d2f7-08d866bc6e35
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4128



> On 2 Oct 2020, at 10:45, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>
>
>
>> On 2 Oct 2020, at 05:27, Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 01.10.20 18:43, Bertrand Marquis wrote:
>>> Hi,
>>>> On 1 Oct 2020, at 17:29, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>>
>>>> Hi Jan,
>>>>
>>>>> On 1 Oct 2020, at 17:03, Jan Beulich <jbeulich@suse.com> wrote:
>>>>>
>>>>> On 10.09.2020 14:09, Jan Beulich wrote:
>>>>>> While looking at what it would take to move around libelf/
>>>>>> in the hypervisor subtree, I've run into this rule, which I
>>>>>> think can do with a few improvements and some simplification.
>>>>>>
>>>>>> 1: adjust population of acpi/
>>>>>> 2: fix (drop) dependencies of when to populate xen/
>>>>>> 3: adjust population of public headers into xen/
>>>>>> 4: properly install Arm public headers
>>>>>> 5: adjust x86-specific population of xen/
>>>>>> 6: drop remaining -f from ln invocations
>>>>>
>>>>> May I ask for an ack or otherwise here?
>>>>
>>>> This is going the right way, but with this series (on top of current staging
>>>> status) I have a compilation error in Yocto while compiling qemu:
>>>> In file included from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenguest.h:25,
>>>> |                  from /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/hw/i386/xen/xen_platform.c:41:
>>>> | /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/recipe-sysroot/usr/include/xenctrl_dom.h:19:10: fatal error: xen/libelf/libelf.h: No such file or directory
>>>> |    19 | #include <xen/libelf/libelf.h>
>>>> |       |          ^~~~~~~~~~~~~~~~~~~~
>>>> | compilation terminated.
>>>> | /media/extend-drive/bermar01/Development/xen-dev/yocto-build/build/dom0-fvp.prj/tmp/work/armv8a-poky-linux/qemu/5.1.0-r0/qemu-5.1.0/rules.mak:69: recipe for target 'hw/i386/xen/xen_platform.o' failed
>>>>
>>>> Xen is using xenctrl_dom.h, which needs the libelf.h header from xen.
>>> Actually this is not coming from your series; it is a problem already present on master.
>>
>> ... and fixed on staging.
>
> I can confirm that with tonight's staging status this issue is not present anymore.

… and the series is building and working properly for Arm (including compiling
everything on Yocto).

So:
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>

And I think it is a good improvement.

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:28:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:28:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1847.5568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIIo-00048M-TG; Fri, 02 Oct 2020 10:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1847.5568; Fri, 02 Oct 2020 10:28:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIIo-00048F-Pt; Fri, 02 Oct 2020 10:28:38 +0000
Received: by outflank-mailman (input) for mailman id 1847;
 Fri, 02 Oct 2020 10:28:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOIIo-00048A-3j
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:28:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18f87fbd-97f6-4bde-a329-13d8d12fdeef;
 Fri, 02 Oct 2020 10:28:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9EEC6ACC6;
 Fri,  2 Oct 2020 10:28:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/3] x86: plug 2 vCPU creation resource leaks + some cleanup
Message-ID: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
Date: Fri, 2 Oct 2020 12:28:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: vLAPIC: don't leak regs page from vlapic_init() upon error
2: fix resource leaks on arch_vcpu_create() error path
3: vLAPIC: vlapic_init() runs only once for a vCPU

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:30:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:30:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1849.5580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIKh-0004wC-9x; Fri, 02 Oct 2020 10:30:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1849.5580; Fri, 02 Oct 2020 10:30:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIKh-0004w5-6i; Fri, 02 Oct 2020 10:30:35 +0000
Received: by outflank-mailman (input) for mailman id 1849;
 Fri, 02 Oct 2020 10:30:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOIKf-0004w0-Bi
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:30:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab73ea1a-e6fd-4233-9048-6f176f74fd74;
 Fri, 02 Oct 2020 10:30:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E20C1ACC6;
 Fri,  2 Oct 2020 10:30:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: [PATCH 1/3] x86/vLAPIC: don't leak regs page from vlapic_init() upon
 error
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
Message-ID: <b437de21-f108-c30e-4e0c-1137ad7d99fe@suse.com>
Date: Fri, 2 Oct 2020 12:30:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1625,6 +1625,7 @@ int vlapic_init(struct vcpu *v)
         vlapic->regs = __map_domain_page_global(vlapic->regs_page);
         if ( vlapic->regs == NULL )
         {
+            free_domheap_page(vlapic->regs_page);
             dprintk(XENLOG_ERR, "map vlapic regs error: %d/%d\n",
                     v->domain->domain_id, v->vcpu_id);
             return -ENOMEM;



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:30:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:30:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1850.5592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIL1-00050x-JJ; Fri, 02 Oct 2020 10:30:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1850.5592; Fri, 02 Oct 2020 10:30:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIL1-00050p-GH; Fri, 02 Oct 2020 10:30:55 +0000
Received: by outflank-mailman (input) for mailman id 1850;
 Fri, 02 Oct 2020 10:30:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOIL0-00050W-07
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:30:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82d7ffb1-86d2-4717-a892-1a75c77587a9;
 Fri, 02 Oct 2020 10:30:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 942A6ACC6;
 Fri,  2 Oct 2020 10:30:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: [PATCH 2/3] x86: fix resource leaks on arch_vcpu_create() error path
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
Message-ID: <77106fd6-96c5-4a62-5eee-8a37660db550@suse.com>
Date: Fri, 2 Oct 2020 12:30:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

{hvm,pv}_vcpu_initialise() have always been meant to be the final
possible source of errors in arch_vcpu_create(), hence not requiring
any unrolling of what they've done on the error path. (Of course this
may change once the various involved paths all have become idempotent.)

But even beyond this aspect I think it is more logical to do policy
initialization ahead of the calling of these two functions, as they may
in principle want to access it.

Fixes: 4187f79dc718 ("x86/msr: introduce struct msr_vcpu_policy")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -569,6 +569,9 @@ int arch_vcpu_create(struct vcpu *v)
         vmce_init_vcpu(v);
 
         arch_vcpu_regs_init(v);
+
+        if ( (rc = init_vcpu_msr_policy(v)) )
+            goto fail;
     }
     else if ( (rc = xstate_alloc_save_area(v)) != 0 )
         return rc;
@@ -594,9 +597,6 @@ int arch_vcpu_create(struct vcpu *v)
     {
         vpmu_initialise(v);
 
-        if ( (rc = init_vcpu_msr_policy(v)) )
-            goto fail;
-
         cpuid_policy_updated(v);
     }
 



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:31:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1851.5604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOILT-00057W-TN; Fri, 02 Oct 2020 10:31:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1851.5604; Fri, 02 Oct 2020 10:31:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOILT-00057O-Q9; Fri, 02 Oct 2020 10:31:23 +0000
Received: by outflank-mailman (input) for mailman id 1851;
 Fri, 02 Oct 2020 10:31:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOILS-00057E-U5
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:31:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b63be9c4-bfab-490c-a2d5-91617c0cb1db;
 Fri, 02 Oct 2020 10:31:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 66089ABF4;
 Fri,  2 Oct 2020 10:31:21 +0000 (UTC)
X-Inumbo-ID: b63be9c4-bfab-490c-a2d5-91617c0cb1db
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601634681;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=50n6CMMT53yibknPmzAVv0OIHH8++s0mbagd45BQItU=;
	b=rtpwkhk5lVRRsvdTEvZyzPS8JSiXLrgoPCsiIVkZ/StQYZmC0uAZy5XGtju1EdjBQkJOwA
	eMA9Bt0S1AlQ0ZrKV3nB8Q4iukZAlitNSEiNBlg8ZYFwIW0aAIB3gkwPBz497cBsZfU4fu
	au/eHXABS3oVUv47GNspz0P4RgexuY0=
Subject: [PATCH 3/3] x86/vLAPIC: vlapic_init() runs only once for a vCPU
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
Message-ID: <3735eb75-76ef-abff-1b05-aa89ddc39fcc@suse.com>
Date: Fri, 2 Oct 2020 12:31:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hence there's no need to guard the allocation / mapping with checks of
whether the same action was already carried out earlier. I assume this
was a transient change which should have been undone before
509529e99148 ("x86 hvm: Xen interface and implementation for virtual
S3") got committed.

While touching this code, switch the dprintk()s to use %pv.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1610,27 +1610,21 @@ int vlapic_init(struct vcpu *v)
 
     vlapic->pt.source = PTSRC_lapic;
 
-    if (vlapic->regs_page == NULL)
+    vlapic->regs_page = alloc_domheap_page(v->domain, MEMF_no_owner);
+    if ( !vlapic->regs_page )
     {
-        vlapic->regs_page = alloc_domheap_page(v->domain, MEMF_no_owner);
-        if ( vlapic->regs_page == NULL )
-        {
-            dprintk(XENLOG_ERR, "alloc vlapic regs error: %d/%d\n",
-                    v->domain->domain_id, v->vcpu_id);
-            return -ENOMEM;
-        }
+        dprintk(XENLOG_ERR, "%pv: alloc vlapic regs error\n", v);
+        return -ENOMEM;
     }
-    if (vlapic->regs == NULL) 
+
+    vlapic->regs = __map_domain_page_global(vlapic->regs_page);
+    if ( vlapic->regs == NULL )
     {
-        vlapic->regs = __map_domain_page_global(vlapic->regs_page);
-        if ( vlapic->regs == NULL )
-        {
-            free_domheap_page(vlapic->regs_page);
-            dprintk(XENLOG_ERR, "map vlapic regs error: %d/%d\n",
-                    v->domain->domain_id, v->vcpu_id);
-            return -ENOMEM;
-        }
+        free_domheap_page(vlapic->regs_page);
+        dprintk(XENLOG_ERR, "%pv: map vlapic regs error\n", v);
+        return -ENOMEM;
     }
+
     clear_page(vlapic->regs);
 
     vlapic_reset(vlapic);



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:34:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:34:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1855.5616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOINz-0005Ky-Ax; Fri, 02 Oct 2020 10:33:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1855.5616; Fri, 02 Oct 2020 10:33:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOINz-0005Kc-7e; Fri, 02 Oct 2020 10:33:59 +0000
Received: by outflank-mailman (input) for mailman id 1855;
 Fri, 02 Oct 2020 10:33:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jYI3=DJ=nxp.com=laurentiu.tudor@srs-us1.protection.inumbo.net>)
 id 1kOINx-0005Jg-Lh
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:33:57 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.52]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2171b041-8114-49b9-8d09-23d8c931f541;
 Fri, 02 Oct 2020 10:33:56 +0000 (UTC)
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com (2603:10a6:803:3::26)
 by VI1PR04MB6992.eurprd04.prod.outlook.com (2603:10a6:803:139::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Fri, 2 Oct
 2020 10:33:54 +0000
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b]) by VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b%7]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 10:33:54 +0000
Received: from fsr-ub1864-101.ea.freescale.net (83.217.231.2) by
 AM0PR04CA0024.eurprd04.prod.outlook.com (2603:10a6:208:122::37) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34 via Frontend
 Transport; Fri, 2 Oct 2020 10:33:53 +0000
X-Inumbo-ID: 2171b041-8114-49b9-8d09-23d8c931f541
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Hx1FryxjXikEYLljl/xKorKEBzIt7Vt0CNmGY6ZPu60urMCTtIgGK3Rp7cIEnBzOVdUr5GNecbD+MZn+z4L52cyOlla0BTS4IM265c4BT0Pz9QxGkwMG/rD8FoSewt9wXYDDYV3Gm3YSFaakqzRwbS57qwQzUhrLZAokdQHuLDHlSH8jSVmECWBKf90MYOEFzBiIbSBaYJ8NXmZk+O6E+4nLdEnbdRDIX06oqdFffNVtZj7XDk99WRxrKQ7CoKPpkte1ldD7GEYp7XZMBO7cGMFFd10ajzayfJn8Ouu28J5cgiznouMRourYcPUWv8l+mXwlbbv4BNFkMZlyVADDAw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=et3yJGNIONQ/i96PdAtaTJbBfeA0Mf6Odm6Mddf28Hc=;
 b=c+o5shtH1sZPV6NiIQrOaztVq9DLtJ+StnGHwdpvO+F2KzpbMZeashOvFSfa/UgAbFTcuocad/SnRsNm7Zk7I4E3hHX3oGS5koAmpb2VJ3XqHk/2mqR3OyYVbBGExBWckqK3SzbwNq/VmbGtkKYzj6kEGqaWFOPeLjeT9UakBBVC23RKG+4SSb0ER5Lkqyz7b6SNZ9ghApV9YCYZcGXJkTIpyDJG2VdAdgdgPqQIeGy2WrRt8QJuvWXfNforgn3lOKDZ23vFjUPMJmeq0+ghf+YhOeINBgsrxXFKbkcojM7nFC0j4knWEQthDY0mIb6Lw4EYVSml+aOE6WwLF8ZRbQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=et3yJGNIONQ/i96PdAtaTJbBfeA0Mf6Odm6Mddf28Hc=;
 b=jkoXFilwMpwpqNFjnSM4Oywv/As7Vjz8C/OuwBqRiINiDRH0Go0POh3nGpX2pM2TGSPn/OKF2eWHiQXA2KSFFkJm0zFgnJWia22u34pCmEtsZOhaSmleafupUPxYYsxjm9qPjMwhDv98PtWTiYZ3pFeANqgkADv4DaKLC4+exjM=
Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=nxp.com;
From: laurentiu.tudor@nxp.com
To: sstabellini@kernel.org,
	julien@xen.org,
	xen-devel@lists.xenproject.org,
	Volodymyr_Babchuk@epam.com,
	will@kernel.org
Cc: diana.craciun@nxp.com,
	anda-alexandra.dorneanu@nxp.com,
	Laurentiu Tudor <laurentiu.tudor@nxp.com>
Subject: [PATCH v3] arm,smmu: match start level of page table walk with P2M
Date: Fri,  2 Oct 2020 13:33:44 +0300
Message-Id: <20201002103344.13015-1-laurentiu.tudor@nxp.com>
X-Mailer: git-send-email 2.17.1
Content-Type: text/plain; charset="us-ascii"
X-ClientProxiedBy: AM0PR04CA0024.eurprd04.prod.outlook.com
 (2603:10a6:208:122::37) To VI1PR0402MB3405.eurprd04.prod.outlook.com
 (2603:10a6:803:3::26)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-Originating-IP: [83.217.231.2]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 57cdb606-bb2e-4fb2-e7e2-08d866bea921
X-MS-TrafficTypeDiagnostic: VI1PR04MB6992:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB6992F8DEBCAC0483C3573912EC310@VI1PR04MB6992.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	aGjuzOcRPkv1hyeedoIOEYhk+oDKl6UfeLKhSrA15xPGZ+fNxl4FUmmcSOZzzPeHoHW1zTv2sgk3KqN9MRyUVjKbnKpQWZOq1gBCL4x9RyL4hlpw+U553zP4v8xg8mzK/9KMRaN0Q4UE7vdX617IH5VynGvhPxQ28e11VVRrZC2ket7427byRvu6fokXSKnC0B9HvP54zNeu7g+j+MvdR6ap2HJDIGDwqvCuIh2D77FHjiPgeIXS4bKJW8N9noz7rh8+ov//dS0NLO8d2CMkn6E0UrvzIgYdxj1TtA0hSe51q5pSJzsZk62Zmzgr2AIzA0ukxUUx3yqzHzC3r5zhRg==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR0402MB3405.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(366004)(136003)(39860400002)(346002)(2616005)(478600001)(4326008)(956004)(6666004)(83380400001)(86362001)(316002)(52116002)(66556008)(66946007)(6506007)(36756003)(1076003)(9686003)(186003)(6512007)(5660300002)(16526019)(6486002)(66476007)(8676002)(2906002)(8936002)(26005);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	o6U+QrMq7DSy5RvfZJ5xjgmxD2pAvKN4AmYpfFe6yhLF1wHXTwPMbokyybuUNDs4VykSG9qEBHSrcucpuwVNY0ledbkf62QjRDPMB8AntxCgMpI6tgY+drR66GQKmine7wfSDUSMMzNlyOyxpd69/Qn4WWD/5n8CxVMA7FL2uDl28ahFVdfEq/NqDUUIznzgQ8JUxqq/godIntzJehEzLJq/eqYzIAYLqHesfnyTxEVi/4onQCCIrlYYcSjkGjnumSwQ5XaDIkRpClOnRkUUwhc9l/NQIPTknQxoG2YmHZskxxs6AAB1l0Lf/p+UYJ1Ata6TkwKQ7rJcX4zEL24EA1K4v2VM2e2wPZY2u3CLfgnx71IwcpDWoteecBkvFsVYbeOA2PhGUFw+VSK93uvRUc1j1a1SznniufrfK1S9h8QtIuE5BhTgbJKqQzf82pQMvaKb7NXiT5lLGvFXZnebp0I/4DiUMXB8iDB7IrME0VunsESsbxul63mF+URx/wXeUlUt8zI+LZIFTLhL635aRcUEgEXw3eWHdvCXNrIKIxLfW9wrjjVcs8feiTjPuUK46Vn8p4jutUDevMEwQS5R1jwRHoTNcH1yzM5Ancp9KkW1bdqhI/n9sTMxq/W7eJNZy1FZiDE8uvRBxb00if83+Q==
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 57cdb606-bb2e-4fb2-e7e2-08d866bea921
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0402MB3405.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 10:33:54.3564
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: l4dbFGq677PTnkKo2PESFpc2Tohk88/wEbQvA8IW9pTPe8MmvmsNocBZWUGct0SiYQy7MoB7hKCadIyqBRprcw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB6992

From: Laurentiu Tudor <laurentiu.tudor@nxp.com>

Don't hardcode the lookup start level of the page table walk to 1;
instead match the level used by the P2M. This should fix scenarios
involving the SMMU where the start level is different from 1.
In order for the SMMU driver to also compile on arm32, move
P2M_ROOT_LEVEL into the p2m header file (and, while at it,
P2M_ROOT_ORDER for consistency) and use the macro in the SMMU
driver.

Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
---
Changes in v3:
 - also export 'p2m_root_order'
 - moved the variables into their rightful #ifdef block

Changes in v2:
 - made smmu driver compile on arm32
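
The TTBCR hunk below relies on how the stage-2 SL0 field encodes the
starting lookup level. A small sketch of that relation follows; it is
an illustration, not driver code, and the shift value is an assumption
restating the driver's TTBCR_SL0_SHIFT (SL0 is taken to live in bits
[7:6] of the stage-2 TTBCR/VTCR).

```c
#include <assert.h>

/* Assumed to mirror the SMMU driver's TTBCR_SL0_SHIFT. */
#define TTBCR_SL0_SHIFT 6

/*
 * For the 4KB-granule stage-2 format, SL0 encodes the starting lookup
 * level as (2 - level): SL0 = 0 starts the walk at level 2, SL0 = 1 at
 * level 1, SL0 = 2 at level 0. Hence the patch computes
 * (2 - P2M_ROOT_LEVEL) instead of hardcoding the level-1 value.
 */
static unsigned int sl0_field(unsigned int start_level)
{
    return (2u - start_level) << TTBCR_SL0_SHIFT;
}
```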

 xen/arch/arm/p2m.c                 |  9 ++-------
 xen/drivers/passthrough/arm/smmu.c |  2 +-
 xen/include/asm-arm/p2m.h          | 11 +++++++++++
 3 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ce59f2b503..4eeb867ca1 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -17,17 +17,12 @@
 #define INVALID_VMID 0 /* VMID 0 is reserved */
 
 #ifdef CONFIG_ARM_64
-static unsigned int __read_mostly p2m_root_order;
-static unsigned int __read_mostly p2m_root_level;
-#define P2M_ROOT_ORDER    p2m_root_order
-#define P2M_ROOT_LEVEL p2m_root_level
+unsigned int __read_mostly p2m_root_order;
+unsigned int __read_mostly p2m_root_level;
 static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
 /* VMID is by default 8 bit width on AArch64 */
 #define MAX_VMID       max_vmid
 #else
-/* First level P2M is always 2 consecutive pages */
-#define P2M_ROOT_LEVEL 1
-#define P2M_ROOT_ORDER    1
 /* VMID is always 8 bit width on AArch32 */
 #define MAX_VMID        MAX_VMID_8_BIT
 #endif
diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
index 94662a8501..4ba6d3ab94 100644
--- a/xen/drivers/passthrough/arm/smmu.c
+++ b/xen/drivers/passthrough/arm/smmu.c
@@ -1152,7 +1152,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
 	      (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
 
 	if (!stage1)
-		reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
+		reg |= (2 - P2M_ROOT_LEVEL) << TTBCR_SL0_SHIFT;
 
 	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
 
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 5fdb6e8183..28ca9a838e 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -13,6 +13,17 @@
 /* Holds the bit size of IPAs in p2m tables.  */
 extern unsigned int p2m_ipa_bits;
 
+#ifdef CONFIG_ARM_64
+extern unsigned int p2m_root_order;
+extern unsigned int p2m_root_level;
+#define P2M_ROOT_ORDER    p2m_root_order
+#define P2M_ROOT_LEVEL p2m_root_level
+#else
+/* First level P2M is always 2 consecutive pages */
+#define P2M_ROOT_ORDER    1
+#define P2M_ROOT_LEVEL 1
+#endif
+
 struct domain;
 
 extern void memory_type_changed(struct domain *);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:40:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:40:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1861.5632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIUQ-0006Eo-75; Fri, 02 Oct 2020 10:40:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1861.5632; Fri, 02 Oct 2020 10:40:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIUQ-0006Eh-2F; Fri, 02 Oct 2020 10:40:38 +0000
Received: by outflank-mailman (input) for mailman id 1861;
 Fri, 02 Oct 2020 10:40:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOIUO-0006Ec-Dl
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:40:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89f7f057-68b8-4941-8509-4d9bf4957d10;
 Fri, 02 Oct 2020 10:40:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOIUK-0004cH-Sx; Fri, 02 Oct 2020 10:40:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOIUK-0004Mj-KT; Fri, 02 Oct 2020 10:40:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOIUK-0001ze-Jw; Fri, 02 Oct 2020 10:40:32 +0000
X-Inumbo-ID: 89f7f057-68b8-4941-8509-4d9bf4957d10
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z8Rv+lci3xO94COHToHyaUsHi3KR6pTc8XeAIZja3v0=; b=2RN15kBhd39XTfGEDDkiKrkJ2x
	CxcVtTiQA7/SWAzS40cq//LxbbPbAjyHBISkZ3Z1T0QZnWi9EGuaGiARzLyoaK0x2hmuTAWnJhLt1
	BJt2tdspHSL+63dV7kqDGajKT7AUllSpXAcI9+3Te6CxeEKZK0SekSy07aKEyuXQGiFY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155184-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155184: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=cbba3dc6ea3fc9aa66e9f9eb41051536e3ad7cd0
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 10:40:32 +0000

flight 155184 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155184/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 11 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-start/freebsd.repeat fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-start/freebsd.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                cbba3dc6ea3fc9aa66e9f9eb41051536e3ad7cd0
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   43 days
Failing since        152659  2020-08-21 14:07:39 Z   41 days   74 attempts
Testing same since   155184  2020-10-01 01:52:33 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 35036 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:42:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:42:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1865.5646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIWR-0006ON-Nw; Fri, 02 Oct 2020 10:42:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1865.5646; Fri, 02 Oct 2020 10:42:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIWR-0006OG-Kh; Fri, 02 Oct 2020 10:42:43 +0000
Received: by outflank-mailman (input) for mailman id 1865;
 Fri, 02 Oct 2020 10:42:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOIWQ-0006OB-76
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:42:42 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 7c2da763-13a7-4efd-bd07-f7bf653027ec;
 Fri, 02 Oct 2020 10:42:41 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id ABA9231B;
 Fri,  2 Oct 2020 03:42:40 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 351803F73B;
 Fri,  2 Oct 2020 03:42:39 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kOIWQ-0006OB-76
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:42:42 +0000
X-Inumbo-ID: 7c2da763-13a7-4efd-bd07-f7bf653027ec
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 7c2da763-13a7-4efd-bd07-f7bf653027ec;
	Fri, 02 Oct 2020 10:42:41 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id ABA9231B;
	Fri,  2 Oct 2020 03:42:40 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.198.23])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 351803F73B;
	Fri,  2 Oct 2020 03:42:39 -0700 (PDT)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH] build: always use BASEDIR for xen sub-directory
Date: Fri,  2 Oct 2020 11:42:09 +0100
Message-Id: <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.

This removes the dependency on the xen subdirectory, preventing the use
of a wrong configuration file when the xen subdirectory is duplicated
for compilation tests.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/common/Makefile                | 6 +++---
 xen/include/xen/lib/x86/Makefile   | 4 ++--
 xen/tools/kconfig/Makefile.kconfig | 2 +-
 xen/xsm/flask/Makefile             | 4 ++--
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/xen/common/Makefile b/xen/common/Makefile
index b3b60a1ba2..083f62acb6 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -78,14 +78,14 @@ obj-$(CONFIG_UBSAN) += ubsan/
 obj-$(CONFIG_NEEDS_LIBELF) += libelf/
 obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
 
-CONF_FILE := $(if $(patsubst /%,,$(KCONFIG_CONFIG)),$(XEN_ROOT)/xen/)$(KCONFIG_CONFIG)
+CONF_FILE := $(if $(patsubst /%,,$(KCONFIG_CONFIG)),$(BASEDIR)/)$(KCONFIG_CONFIG)
 config.gz: $(CONF_FILE)
 	gzip -c $< >$@
 
 config_data.o: config.gz
 
-config_data.S: $(XEN_ROOT)/xen/tools/binfile
-	$(SHELL) $(XEN_ROOT)/xen/tools/binfile $@ config.gz xen_config_data
+config_data.S: $(BASEDIR)/tools/binfile
+	$(SHELL) $(BASEDIR)/tools/binfile $@ config.gz xen_config_data
 
 clean::
 	rm -f config_data.S config.gz 2>/dev/null
diff --git a/xen/include/xen/lib/x86/Makefile b/xen/include/xen/lib/x86/Makefile
index 408d69c99e..f1229b9bc8 100644
--- a/xen/include/xen/lib/x86/Makefile
+++ b/xen/include/xen/lib/x86/Makefile
@@ -3,6 +3,6 @@ include $(XEN_ROOT)/Config.mk
 .PHONY: all
 all: cpuid-autogen.h
 
-cpuid-autogen.h: $(XEN_ROOT)/xen/include/public/arch-x86/cpufeatureset.h $(XEN_ROOT)/xen/tools/gen-cpuid.py
-	$(PYTHON) $(XEN_ROOT)/xen/tools/gen-cpuid.py -i $< -o $@.new
+cpuid-autogen.h: $(BASEDIR)/include/public/arch-x86/cpufeatureset.h $(BASEDIR)/tools/gen-cpuid.py
+	$(PYTHON) $(BASEDIR)/tools/gen-cpuid.py -i $< -o $@.new
 	$(call move-if-changed,$@.new,$@)
diff --git a/xen/tools/kconfig/Makefile.kconfig b/xen/tools/kconfig/Makefile.kconfig
index 065f4b8471..799321ec4d 100644
--- a/xen/tools/kconfig/Makefile.kconfig
+++ b/xen/tools/kconfig/Makefile.kconfig
@@ -9,7 +9,7 @@ Q :=
 kecho := :
 
 # eventually you'll want to do out of tree builds
-srctree := $(XEN_ROOT)/xen
+srctree := $(BASEDIR)
 objtree := $(srctree)
 src := tools/kconfig
 obj := $(src)
diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
index 50bec20a1e..637159ad82 100644
--- a/xen/xsm/flask/Makefile
+++ b/xen/xsm/flask/Makefile
@@ -35,8 +35,8 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
 obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
 flask-policy.o: policy.bin
 
-flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
-	$(SHELL) $(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
+flask-policy.S: $(BASEDIR)/tools/binfile
+	$(SHELL) $(BASEDIR)/tools/binfile -i $@ policy.bin xsm_flask_init_policy
 
 FLASK_BUILD_DIR := $(CURDIR)
 POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:44:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:44:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1866.5658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIXg-0006XA-4L; Fri, 02 Oct 2020 10:44:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1866.5658; Fri, 02 Oct 2020 10:44:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIXg-0006X1-0P; Fri, 02 Oct 2020 10:44:00 +0000
Received: by outflank-mailman (input) for mailman id 1866;
 Fri, 02 Oct 2020 10:43:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOIXf-0006WE-38
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:43:59 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1ba8aba-4a4b-4e9c-bd8b-d6ef16bc008a;
 Fri, 02 Oct 2020 10:43:58 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kOIXf-0006WE-38
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:43:59 +0000
X-Inumbo-ID: f1ba8aba-4a4b-4e9c-bd8b-d6ef16bc008a
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f1ba8aba-4a4b-4e9c-bd8b-d6ef16bc008a;
	Fri, 02 Oct 2020 10:43:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601635437;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=KqxlULH5hG1JKBjKDC1ojwnlQxBw52oERRltytFhc0w=;
  b=cZ+IidvkILjnzjbtbOXDFAfzQAWr745S+hWl17Ica2ex5S9Lanlyt/M0
   Ja1xJWlgWQY/LK/5fCdBZh/TIH7wzMd8gYC/VhsZ5YTKlI3BdxKccsmUo
   80HA0NqGEKuhSFbYE28QcN8vyCpM0KT1dYJpDRUSubptUoPZqtHI5cREI
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: lgv4ndG40uZ/fXaj7Ly6CF9/N5VJANF0YAmH0/mx2p4CYHYkXwi4Nh3KVau2ff4/GckrzQJzHR
 Gam4lOKUdntJvonYryqpXwzQ2k9QhHw8e5/j7WCmR/o7+ELrE0FgqYLIavUIXPswSMafJmMANt
 +0Dze8Lh4kRLSOCtKK/MUFqrNdFz4+PzbCc9Ahp64FCKJbnIaYaOSBSUJIVh0F8sXbSymANaGB
 WYS2VM13a3Ut8HxZi9vddyvvZgcZNvgAi1sR4uuNNpBdD6nCDddCpTpgsUp8oMdUkOCZeG13bf
 AQg=
X-SBRS: None
X-MesageID: 28418869
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,327,1596513600"; 
   d="scan'208";a="28418869"
Subject: Re: [PATCH 1/3] x86/vLAPIC: don't leak regs page from vlapic_init()
 upon error
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
 <b437de21-f108-c30e-4e0c-1137ad7d99fe@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <71e0720c-5a50-6844-5631-e1802dfb0b94@citrix.com>
Date: Fri, 2 Oct 2020 11:43:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b437de21-f108-c30e-4e0c-1137ad7d99fe@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 02/10/2020 11:30, Jan Beulich wrote:
> Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:44:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:44:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1871.5682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIYH-0006hR-KI; Fri, 02 Oct 2020 10:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1871.5682; Fri, 02 Oct 2020 10:44:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIYH-0006hI-Fb; Fri, 02 Oct 2020 10:44:37 +0000
Received: by outflank-mailman (input) for mailman id 1871;
 Fri, 02 Oct 2020 10:44:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOIYF-0006h7-T1
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:44:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1740d6cf-db0f-45f5-8233-8873f8171feb;
 Fri, 02 Oct 2020 10:44:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0ADE9ACAD;
 Fri,  2 Oct 2020 10:44:33 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kOIYF-0006h7-T1
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:44:35 +0000
X-Inumbo-ID: 1740d6cf-db0f-45f5-8233-8873f8171feb
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1740d6cf-db0f-45f5-8233-8873f8171feb;
	Fri, 02 Oct 2020 10:44:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601635473;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xgE+nqyEaQw5Ij1Kk5qp+kgeR+fceHPDhKSaD2W3teE=;
	b=aZUVPBCbfgEWLTEN/wSEYWzQ74GMA7uuGOeOecWow50KfllG8Sht9FfcfyU607ivYVXGi5
	hAz01rr56zHfu9VnfQ+eefrHkxJ4H+R8Js7gttRs7mUwPLoLuZYkgXh1vTrteczmQAh6Nz
	peygP0ENhdfeObyld8dmZeIKzcKslSw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0ADE9ACAD;
	Fri,  2 Oct 2020 10:44:33 +0000 (UTC)
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Jan Beulich <jbeulich@suse.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
 <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
 <8ddad01e-cf1a-7752-1371-a505fb26dc47@suse.com>
 <90a39759-63c1-28b9-f112-d8b3cc083565@suse.com>
 <558774ab-92cb-90ae-3936-4f9cc9d56fd0@suse.com>
 <5B52FDF2-18DA-4342-9280-0D497FAB6532@arm.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <75346ac2-20f3-c868-4ac9-0d5a2e65d436@suse.com>
Date: Fri, 2 Oct 2020 12:44:32 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <5B52FDF2-18DA-4342-9280-0D497FAB6532@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.20 12:12, Bertrand Marquis wrote:
> Hi,
> 
>> On 2 Oct 2020, at 08:25, Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 02.10.20 08:59, Jan Beulich wrote:
>>> On 02.10.2020 08:51, Jürgen Groß wrote:
>>>> On 02.10.20 08:20, Jan Beulich wrote:
>>>>> On 02.10.2020 06:50, Jürgen Groß wrote:
>>>>>> On 01.10.20 18:38, Bertrand Marquis wrote:
>>>>>>> Hi Juergen,
>>>>>>>
>>>>>>>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>>>>>>
>>>>>>>>> Making getBridge() static triggered a build error with some gcc versions:
>>>>>>>>>
>>>>>>>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>>>>>>>> length 255 [-Werror=stringop-truncation]
>>>>>>>>>
>>>>>>>>> Fix that by using a buffer with 256 bytes instead.
>>>>>>>>>
>>>>>>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>>>
>>>>>>> Sorry, I have to come back on this one.
>>>>>>>
>>>>>>> I still see an error compiling with Yocto on this one:
>>>>>>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>>>>>>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
>>>>>>> |    81 |      strncpy(result, de->d_name, resultLen);
>>>>>>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>>>
>>>>>>> To solve it, I need to define devBridge[257], like devNoBridge.
>>>>>>
>>>>>> IMHO this is a real compiler error.
>>>>>>
>>>>>> de->d_name is an array of 256 bytes, so doing strncpy() from that to
>>>>>> another array of 256 bytes with a length of 256 won't truncate anything.
>>>>>
>>>>> That's a matter of how you look at it, I think: If the original array
>>>>> doesn't hold a nul-terminated string, the destination array won't
>>>>> either, yet the common goal of strncpy() is to yield a properly
>>>>> nul-terminated string. IOW the warning may be intentional, since the
>>>>> standard even has a specific footnote to point out this possible
>>>>> pitfall.
>>>>
>>>> If the source doesn't hold a nul-terminated string there will still be
>>>> 256 bytes copied, so there is no truncation done during strncpy().
>>>>
>>>> In fact there is no way to use strncpy() safely on a fixed-size
>>>> source array with the above semantics: either the target is larger
>>>> than the source and the length is at least sizeof(source) + 1,
>>>> resulting in a possible read beyond the end of the source, or the
>>>> target is the same length, leading to the error.
>>> I agree with all of what you say, but I can also see why said foot note
>>> alone may have motivated the emission of the warning.
>>
>> The motivation can be explained, yes, but it is wrong. strncpy() is not
>> limited to source arrays of unknown length. So this warning makes
>> strncpy() unusable for fixed-size source strings combined with -Werror.
>> And that is nothing a compiler should be allowed to do, hence a
>> compiler bug.
> 
> I do agree that in this case the compiler is doing too much.

It is plain wrong here. Rendering a POSIX-defined function unusable for
a completely legal use case is in no way a matter of taste or of "doing
too much". It is a bug.

> We could also choose to turn off the warning, either using a pragma (which
> I really do not like) or by adding a cflag for this specific file (but this
> might hit us later in other places).
> 
> All in all, this currently makes Xen master and staging impossible to
> compile with Yocto, so we need to find a solution, as this will also
> come up in any distribution using a newer compiler.

A variant you didn't mention would be open-coding strncpy() (or
having a related inline function in a common header). This route would
be the one I'd prefer in case the compiler guys insist on the behavior
being fine.

You didn't tell us which compiler is being used and whether it really is
up to date. A workaround might be to set EXTRA_CFLAGS_XEN_TOOLS to
"-Wno-stringop-truncation" for the build.
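A minimal sketch of such an open-coded helper (hypothetical name; assuming the destination size is passed in and the result should always be nul-terminated) might look like:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: bounded copy that always nul-terminates and
 * never reads more than dstsize - 1 bytes of src, sidestepping the
 * -Wstringop-truncation warning for fixed-size source arrays. */
static void safe_strcpy(char *dst, const char *src, size_t dstsize)
{
    size_t len = strnlen(src, dstsize - 1);

    memcpy(dst, src, len);
    dst[len] = '\0';
}
```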


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:52:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:52:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1884.5701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIg0-0007fu-GA; Fri, 02 Oct 2020 10:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1884.5701; Fri, 02 Oct 2020 10:52:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIg0-0007fn-CX; Fri, 02 Oct 2020 10:52:36 +0000
Received: by outflank-mailman (input) for mailman id 1884;
 Fri, 02 Oct 2020 10:52:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U/dH=DJ=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kOIfy-0007fi-TK
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:52:35 +0000
Received: from mail-40134.protonmail.ch (unknown [185.70.40.134])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 184f53c7-8e07-4cfb-b81d-b714945d0b18;
 Fri, 02 Oct 2020 10:52:32 +0000 (UTC)
Date: Fri, 02 Oct 2020 10:52:27 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=trmm.net;
	s=protonmail; t=1601635951;
	bh=yd/hwExLFuNJpHJStPSBc7XEKqrOAe/xdydm8JCy+EA=;
	h=Date:To:From:Cc:Reply-To:Subject:In-Reply-To:References:From;
	b=grYwN+vEBaAbokQoYpH1Gg+wSszaZhiWpXl1OO98go0nKtX6oH9Uf1/uEDtvKbekr
	 oqhhzR/LnUXhmCme+41neRdgHPhtj96B7IyyiLG7z1ZVlWl+u/K+NW2SLpySgrxd1a
	 GFvNz8F1dg2EdcTYJfAlUzQSganDE0gBNJYXDuLA=
To: Jan Beulich <jbeulich@suse.com>
From: Trammell Hudson <hudson@trmm.net>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "roger.pau@citrix.com" <roger.pau@citrix.com>, "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "wl@xen.org" <wl@xen.org>
Reply-To: Trammell Hudson <hudson@trmm.net>
Subject: Re: [PATCH v8 4/5] efi: Enable booting unified hypervisor/kernel/initrd images
Message-ID: <s3f2INKZyF2RmtGWXAIMvThTOOBmykDEZvbEAlTOvNW6J3GaMSr7Q5oMo-IXI2E9cXGOzyqefPTMt6BhBL3-M0B40Otjgw0ANKS-Iuo3q7g=@trmm.net>
In-Reply-To: <ab61cb4b-bcbe-fb61-50d7-8d93bcfca4ab@suse.com>
References: <20200930120011.1622924-1-hudson@trmm.net> <20200930120011.1622924-5-hudson@trmm.net> <ab61cb4b-bcbe-fb61-50d7-8d93bcfca4ab@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF shortcircuit=no
	autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

On Friday, October 2, 2020 4:27 AM, Jan Beulich <jbeulich@suse.com> wrote:
> On 30.09.2020 14:00, Trammell Hudson wrote:
> > -              /* Read and parse the config file. */
>
> I'm sorry for noticing this only now, but I don't think this comment
> should be moved. If no other need for a v9 arises, this can likely
> be undone while committing.

I'll relocate it.

> > -   if ( sect->Name[0] != '.' )
> > -          return -1;
>
> I was about to say "'true' please", but you really mean 'false'
> now. (Could perhaps again be fixed while committing.)

Oops, yes, that is a mistake. It should be false; I'll
fix it.

> [...]
> Just as a remark (again spotted only now): this could be had
>
> if ( cw != c )
>     return false;
> if ( c == '\0' )
>     return true;
>
> At which the need for cw also disappears.

Sure.  I'll fix that, too.
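For illustration, the shape Jan suggests could look like the following standalone sketch (the UCS-2 section name is modelled here as unsigned short purely to keep the example self-contained; in the patch it would be the PE section's wide-char name):

```c
#include <stdbool.h>

/* One comparison per character pair, per Jan's remark; no separate
 * cached copy (cw) of the wide character is needed. Returns true only
 * if both strings end at the same place with all characters equal. */
static bool name_equal(const unsigned short *wname, const char *name)
{
    for ( ; ; wname++, name++ )
    {
        if ( *wname != (unsigned char)*name )
            return false;
        if ( *name == '\0' )
            return true;
    }
}
```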

Since there are a few fixups to the patch, I'll send out a v9 so
that none of the changes we agreed on gets forgotten.

--
Trammell



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:55:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:55:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1886.5714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIiO-0007py-VB; Fri, 02 Oct 2020 10:55:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1886.5714; Fri, 02 Oct 2020 10:55:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIiO-0007pr-Ql; Fri, 02 Oct 2020 10:55:04 +0000
Received: by outflank-mailman (input) for mailman id 1886;
 Fri, 02 Oct 2020 10:55:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MkBu=DJ=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kOIiN-0007pk-Es
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:55:03 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d26940dc-e28b-4a65-8e5a-1ad379f9ba60;
 Fri, 02 Oct 2020 10:55:02 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id z4so1311135wrr.4
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 03:55:02 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id m3sm1295808wrs.83.2020.10.02.03.55.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 03:55:00 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=N6VpQiXKj+GORzzS9h7PJ+smoR5FAKDuenaJpH2axqk=;
        b=bCmecWwnOngQlMm4j0KeBCYnhZKb0IZFd8wxIa1hg4r++yDFlvXqLN4hjfnKTuXQ2o
         MCXrSvpW1KoQqs2G6HjGc6zdbfYyNQQLpO2PgZbKdGzFTMCZHX9y5iyjv8XY86vn2tka
         qUxrBzkhkI9c7GvTFyn0+bru35TCaNBXFwpdSrWBKWDG00GioCGKR9XM4TkVe5J6Aq6Z
         CiFWvgrJde2CAVJWR/Hc8vwfvoUNgya2eOknS+lrWbiwTOi1EHLwYwOPoCcCE3zCrLCu
         LT+duL1tbDQ7laE80Fn+uvcLHP+NpBIq2far8tzTcbzt+OSgVfREawN7CgOpvIZmEOiM
         2baQ==
X-Gm-Message-State: AOAM532CcbsBKLksZB4l2IjEiRuTj2SxA5+OLtWjVALACnnq5VEIw5oP
	jLSs97yN8SSt9TdjDMvIhXg=
X-Google-Smtp-Source: ABdhPJwRIvygLgHC0qIPX84xXNJKjN74tbZ7gVzKVem/rriQ3ZIsiFcaw52T2n/VDF5msL9sPcuGQg==
X-Received: by 2002:a5d:4a48:: with SMTP id v8mr2350397wrs.304.1601636101244;
        Fri, 02 Oct 2020 03:55:01 -0700 (PDT)
Date: Fri, 2 Oct 2020 10:54:59 +0000
From: Wei Liu <wl@xen.org>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 01/11] x86/hvm: drop vcpu parameter from vlapic EOI
 callbacks
Message-ID: <20201002105459.ka366qj7bxaz5tea@liuwe-devbox-debian-v2>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-2-roger.pau@citrix.com>
 <bafcd30e-f75b-79c8-2424-6a63cb0b96d4@suse.com>
 <59e20dff55464b7fbee9737348fae751@EX13D32EUC003.ant.amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <59e20dff55464b7fbee9737348fae751@EX13D32EUC003.ant.amazon.com>
User-Agent: NeoMutt/20180716

On Fri, Oct 02, 2020 at 09:24:57AM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Jan Beulich <jbeulich@suse.com>
> > Sent: 02 October 2020 09:48
> > To: Roger Pau Monne <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> > Cc: xen-devel@lists.xenproject.org; Andrew Cooper <andrew.cooper3@citrix.com>; Durrant, Paul
> > <pdurrant@amazon.co.uk>
> > Subject: RE: [EXTERNAL] [PATCH v2 01/11] x86/hvm: drop vcpu parameter from vlapic EOI callbacks
> > 
> > 
> > On 30.09.2020 12:40, Roger Pau Monne wrote:
> > > --- a/xen/arch/x86/hvm/vlapic.c
> > > +++ b/xen/arch/x86/hvm/vlapic.c
> > > @@ -459,13 +459,10 @@ void vlapic_EOI_set(struct vlapic *vlapic)
> > >
> > >  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> > >  {
> > > -    struct vcpu *v = vlapic_vcpu(vlapic);
> > > -    struct domain *d = v->domain;
> > > -
> > >      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> > > -        vioapic_update_EOI(d, vector);
> > > +        vioapic_update_EOI(vector);
> > >
> > > -    hvm_dpci_msi_eoi(d, vector);
> > > +    hvm_dpci_msi_eoi(vector);
> > >  }
> > 
> > What about viridian_synic_wrmsr() -> vlapic_EOI_set() ->
> > vlapic_handle_EOI()? You'd probably have noticed this if you
> > had tried to (consistently) drop the respective parameters from
> > the intermediate functions as well.
> > 
> > Question of course is in how far viridian_synic_wrmsr() for
> > HV_X64_MSR_EOI makes much sense when v != current. Paul, Wei?
> > 
> 
> I don't think it makes any sense. I think it would be fine to only do it if v == current.

Yes, I agree.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 10:57:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 10:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1887.5726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIkX-0007z0-BV; Fri, 02 Oct 2020 10:57:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1887.5726; Fri, 02 Oct 2020 10:57:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIkX-0007yt-8T; Fri, 02 Oct 2020 10:57:17 +0000
Received: by outflank-mailman (input) for mailman id 1887;
 Fri, 02 Oct 2020 10:57:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOIkW-0007yn-0s
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 10:57:16 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe07::624])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08491e15-78f9-4871-8f44-d0dd4b6841d8;
 Fri, 02 Oct 2020 10:57:14 +0000 (UTC)
Received: from AM6P195CA0104.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:86::45)
 by PR3PR08MB5785.eurprd08.prod.outlook.com (2603:10a6:102:89::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Fri, 2 Oct
 2020 10:57:12 +0000
Received: from AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:86:cafe::59) by AM6P195CA0104.outlook.office365.com
 (2603:10a6:209:86::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.36 via Frontend
 Transport; Fri, 2 Oct 2020 10:57:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT059.mail.protection.outlook.com (10.152.17.193) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 10:57:12 +0000
Received: ("Tessian outbound 34b830c8a0ef:v64");
 Fri, 02 Oct 2020 10:57:12 +0000
Received: from 341fb1d3e8ae.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4FC73809-D430-445D-A3FB-AA6DF07289FD.1; 
 Fri, 02 Oct 2020 10:57:06 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 341fb1d3e8ae.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 10:57:06 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3388.eurprd08.prod.outlook.com (2603:10a6:10:41::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.37; Fri, 2 Oct
 2020 10:57:04 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 10:57:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QFx1hvLj5u5NPzRr2Sqx1W5LO2zEcHYNaQEfD4/FiiA=;
 b=pXEbPZaK+Nbd2HE2P492JeTREQxgd75yoiXJhM5hBijBmuf4PiTY4dLZv5tMza5AFCCSiZrVKBYHxEtuv1pyDNgSMuPN74zCHesXkFZxO7jer7vz2nqpnt9a99dv+lPZwgnlj65fn+GyA+yTssBWyLdkfzoC3s++K96Aae7c+3A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: ae4ca4db6ca62e1d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=noL26mWYVgZ62JlSkAYF5LcuDPMnaNp0wiebMrGli/uKWWlZh2k2AqaC8LLWoLGsEw7unh1JXohRd7ehm219lwL7DsRPHI/MIE/nEodXQf6ad48teO5sIJli6X+poWbTzhhW97BB/p7D/ZaiSQhbh799O2cw2il3v0e8PVPyk3IKYzp9ADi3wn8sGl9MODWm/92ScJrIturcWt3eiKYj1Qq6YC8cq1AWBHblKd/EQpxCpqdI6x5tmhaj3z7UVc86dK4KAAteb7CeMdCqYH+LNJYCHhyarMTHYvPkuv1D+wIuODXAQcj5hn4I2zRGl3EdLP9pt1MSGN7i5kQcV0TvSQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QFx1hvLj5u5NPzRr2Sqx1W5LO2zEcHYNaQEfD4/FiiA=;
 b=SNQSAsTFU23EUKCZHDq1vu4wnL5qDz70jlzBrvI4aqwINyeHVwuAKxXgkRHl2OO261EtMbANquigPZiUtrcBwDyvs/Jnrc3bqV2rZuh7TVzfWgLq481Zcfwj8IESs7WtRXqr8XPqbEz7NwiE31bvOeFgpx0LcTWsXpT0FS3oyX/lD4N3xv/7mGbhyGSe2070gYCCykAX5kJlegxVfDw5l2XyRt35AyBcFeq/Z08wXfe6HsR5h9psbC4I83I3N/ncaRyUUgnGS3XDQJvgRmgawi59dtIKZxxqbl8piVSrHSCrcS05DqYgY1F2SAzqQ+SANcIJ/DwtCMwONpmC3SMWBQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Christopher Clark <christopher.w.clark@gmail.com>
CC: George Dunlap <george.dunlap@citrix.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Ian Jackson <ian.jackson@citrix.com>, Wei
 Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Rich Persaud <persaur@gmail.com>
Subject: Re: [PATCH RFC] docs: Add minimum version depencency policy document
Thread-Topic: [PATCH RFC] docs: Add minimum version depencency policy document
Thread-Index: AQHWlylr89SNNL419USyeHY3I7qFNKmC0KsAgAEKroCAAEvZgA==
Date: Fri, 2 Oct 2020 10:57:03 +0000
Message-ID: <3679A867-715B-4C74-89B2-6E347F59708F@arm.com>
References: <20200930125736.95203-1-george.dunlap@citrix.com>
 <683E2686-1551-493B-A3AE-D0707C937155@arm.com>
 <CACMJ4Gac-rtoWqV=A-LT8VLU=SZQogSR009FjJiH3fF6rju5PQ@mail.gmail.com>
In-Reply-To:
 <CACMJ4Gac-rtoWqV=A-LT8VLU=SZQogSR009FjJiH3fF6rju5PQ@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 06033e38-0be6-4b02-9ebd-08d866c1eab6
x-ms-traffictypediagnostic: DB7PR08MB3388:|PR3PR08MB5785:
X-Microsoft-Antispam-PRVS:
	<PR3PR08MB5785B6190E58F25C266C3E009D310@PR3PR08MB5785.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 n1tLTgEQflQTTsrFgY/bFsCPRez4/RFAO07CKTb8d8MKq826TslXXg7TsFCsi+WOa4Qg8b/UVwYfF9roEsejws2IaHYl/YgPHLD3HTGg9qm70LkxRqvCr//C/Zmpl3iEtASJCZZHhEWW7FFwt+6GdmTnc1STfv9GVrW9D1gg8EUMyz3njI5HlQMWmO3I3pJZ+5RQtDQNJ+atEr56LmHXvGstrXZScldSw8Lwu8SyIAcGSGwoeshv3YxSSWmnq6XDSFMR05QFm9rr4fJsb0jQj0ExJ7+DkR28xDtX9c0vYtqO/nrMPUu/u3VmuknsLKkbD9Hb0rFD+1W73g9AYQhMZzjplXeaGUslo6rymZQV9aN34v97/y1BMQgqcIjVZ303kMb2/fRJO5OpD7qbicfCk/t4cG4jcjVm3DqDkmRlDrALK7ErhrJ3Pes81pNL4yaC8UNojN9EiwSqzdBEQfQckg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(39850400004)(396003)(366004)(346002)(136003)(64756008)(76116006)(66946007)(91956017)(66556008)(66476007)(6506007)(53546011)(966005)(86362001)(2616005)(83380400001)(478600001)(33656002)(7416002)(2906002)(36756003)(6916009)(8936002)(6512007)(186003)(8676002)(4326008)(316002)(83080400001)(66446008)(71200400001)(6486002)(54906003)(26005)(5660300002)(266184004);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 HicB9/grW8tJ4683TKqL9B0eSMQ5xhQvyTdLqx+c3C8ums9WQ9PfVZRfDU/EsphSfvhGuHvtjlOejmJdlIDr/PPF+UHf70WMVf5jjaDP7gaQKPszqSx8F1VmP4kmRJcztPGJpUmIWoCUu8FuQ8ns2CI4W5xig2GZsAD7cDJHKamh3xvj4B5g0EhUpcsKlkHmel/BsKyENpc0ry8JHoi1Va5g6W/LQ56o1i0brWdsnx+j8Ybr1SiJQCajuJTrplMV/lDeLLrqShkpLq5Ac7IKHyZAvEvTrEV/8++/mnpD0M4lTV5klfyNUCunSgoR+8vTvT1dScDkfeFBAwoojEmolE0eULAD1t39NazIJsizEWrklVJSYSQCy98Yl4SSYGc2iLAAO01EPBibsoPPGWC1gL8g8N4ciXzamjwnH+GkBxxZ5epkhSKTlmEAbsnDG3sQJPibYlLyT9azJ79fn69clQ6nSFXjgAE3zdKoyZXUZK/y0fvca2mnMGdC5vCzKrUmXGaPp6pcgNS4eX9ds7gHmwYNo6puz7uG+QDoh8Go+JMJIDm5B/jOcjcC4H1vO3dz9hXKp7eqfzvcLOGKq6YonMiUwIBPbKw6Qw0E4wmrfjkqIIbB3s3iYlifV46gdWV5GWYzc4i/uJT6HD42ASl7Ig==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <E5BDB9D54BE0A742AAA0C2518088FA42@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3388
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d81d2d84-ef7b-4e69-aea6-08d866c1e5aa
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YaI8A7Xo+z7xgmarpM9TJU1YhRhabmlsAJfiBQEk7F6ig1PS0lZb66X+kj7N8GB71YBPm7mLRhNTSE8YR1J/4W33g+wQkWzS89UKVCv6O2L6UNXQ1+H1cBa5EpJET5i5OVjUCX+0NZoKQFbCFYWGNopwTMPJKZ7zqqOJgnei8pkgD+jxlXpvlXvLWJIDTqD9Y1IFLEkI7CHwinxMfLN6IYGb1ivSlEbNuOWx/6jgjgvrMOYZXeF9Bn/hNwPQyIwkQsWdViH4RSuD2XwUichLz5DAp6AIHyo/zsBxaiBafFHIwhs9enyaNs/1HRrI/GzbF5XatV3l9fC2/U2DK6b7VIMi3elc5xDhoEGcA1wCaQU5ly/Wt4CUCAMllhvaqIaBwUfCOxYQKoZBn3NW1aXdaVfzH+IhNsMHKrN1ZiyZrodL8FEfuKGhcwyVLNa6jf8JCv+UXwzU5xyq23pOklH1XKJzjg/RK6QMqs0wlwPFgnq1NLvNsFuB45xHjK2iKFOX
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(39850400004)(136003)(396003)(46966005)(5660300002)(478600001)(81166007)(8676002)(966005)(26005)(82310400003)(54906003)(186003)(316002)(4326008)(36906005)(83380400001)(2616005)(70586007)(70206006)(6486002)(6512007)(33656002)(83080400001)(36756003)(8936002)(356005)(6862004)(107886003)(82740400003)(53546011)(336012)(6506007)(2906002)(47076004)(86362001)(266184004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 10:57:12.4553
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 06033e38-0be6-4b02-9ebd-08d866c1eab6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT059.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5785

Hi Christopher,

> On 2 Oct 2020, at 07:25, Christopher Clark <christopher.w.clark@gmail.com> wrote:
> 
> On Thu, Oct 1, 2020 at 7:38 AM Bertrand Marquis
> <Bertrand.Marquis@arm.com> wrote:
>> 
>> Hi George,
>> 
>> + Christopher Clark to have his view on what to put for Yocto.
>> 
>>> On 30 Sep 2020, at 13:57, George Dunlap <george.dunlap@citrix.com> wrote:
>>> 
>>> Define a specific criteria for how we determine what tools and
>>> libraries to be compatible with.  This will clarify issues such as,
>>> "Should we continue to support Python 2.4" moving forward.
>>> 
>>> Note that CentOS 7 is set to stop receiving "normal" maintenance
>>> updates in "Q4 2020"; assuming that 4.15 is released after that, we
>>> only need to support CentOS / RHEL 8.
>>>=20
>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>> ---
>>> 
>>> CC: Ian Jackson <ian.jackson@citrix.com>
>>> CC: Wei Liu <wl@xen.org>
>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Jan Beulich <jbeulich@suse.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Julien Grall <julien@xen.org>
>>> CC: Rich Persaud <persaur@gmail.com>
>>> CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
>>> ---
>>> docs/index.rst                        |  2 +
>>> docs/policies/dependency-versions.rst | 76 +++++++++++++++++++++++++++
>>> 2 files changed, 78 insertions(+)
>>> create mode 100644 docs/policies/dependency-versions.rst
>>> 
>>> diff --git a/docs/index.rst b/docs/index.rst
>>> index b75487a05d..ac175eacc8 100644
>>> --- a/docs/index.rst
>>> +++ b/docs/index.rst
>>> @@ -57,5 +57,7 @@ Miscellanea
>>> -----------
>>> 
>>> .. toctree::
>>> +   :maxdepth: 1
>>> 
>>> +   policies/dependency-versions
>>>   glossary
>>> diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
>>> new file mode 100644
>>> index 0000000000..d5eeb848d8
>>> --- /dev/null
>>> +++ b/docs/policies/dependency-versions.rst
>>> @@ -0,0 +1,76 @@
>>> +.. SPDX-License-Identifier: CC-BY-4.0
>>> +
>>> +Build and runtime dependencies
>>> +==============================
>>> +
>>> +Xen depends on other programs and libraries to build and to run.
>>> +Choosing a minimum version of these tools to support requires a careful
>>> +balance: Supporting older versions of these tools or libraries means
>>> +that Xen can compile on a wider variety of systems; but means that Xen
>>> +cannot take advantage of features available in newer versions.
>>> +Conversely, requiring newer versions means that Xen can take advantage
>>> +of newer features, but cannot work on as wide a variety of systems.
>>> +
>>> +Specific dependencies and versions for a given Xen release will be
>>> +listed in the toplevel README, and/or specified by the ``configure``
>>> +system.  This document lays out the principles by which those versions
>>> +should be chosen.
>>> +
>>> +The general principle is this:
>>> +
>>> +    Xen should build on currently-supported versions of major distros
>>> +    when released.
>>> +
>>> +"Currently-supported" means whatever that distro considers "full
>>> +support".  For instance, at the time of writing, CentOS 7 and 8 are
>>> +listed as being given "Full Updates", but CentOS 6 is listed as
>>> +"Maintenance updates"; under this criterion, we would try to ensure
>>> +that Xen could build on CentOS 7 and 8, but not on CentOS 6.
>>> +
>>> +Exceptions for specific distros or tools may be made when appropriate.
>>> +
>>> +One exception to this is compiler versions for the hypervisor.
>>> +Support for new instructions, and in particular support for new safety
>>> +features, may require a newer compiler than many distros support.
>>> +These will be specified in the README.
>>> +
>>> +Distros we consider when deciding minimum versions
>>> +--------------------------------------------------
>>> +
>>> +We currently aim to support Xen building and running on the following distributions:
>>> +Debian_,
>>> +Ubuntu_,
>>> +OpenSUSE_,
>>> +Arch Linux,
>>> +SLES_,
>>> +Yocto_,
>>> +CentOS_,
>>> +and RHEL_.
>>> +
>>> +.. _Debian: https://www.debian.org/releases/
>>> +.. _Ubuntu: https://wiki.ubuntu.com/Releases
>>> +.. _OpenSUSE: https://en.opensuse.org/Lifetime
>>> +.. _SLES: https://www.suse.com/lifecycle/
>>> +.. _Yocto: https://wiki.yoctoproject.org/wiki/Releases
>>> +.. _CentOS: https://wiki.centos.org/About/Product
>>> +.. _RHEL: https://access.redhat.com/support/policy/updates/errata
>>> +
>>> +Specific distro versions supported in this release
>>> +--------------------------------------------------
>>> +
>>> +======== ==================
>>> +Distro   Supported releases
>>> +======== ==================
>>> +Debian   10 (Buster)
>>> +Ubuntu   20.10 (Groovy Gorilla), 20.04 (Focal Fossa), 18.04 (Bionic Beaver), 16.04 (Xenial Xerus)
>>> +OpenSUSE Leap 15.2
>>> +SLES     SLES 11, 12, 15
>>> +Yocto    3.1 (Dunfell)
>> 
>> Yocto only supports one version of Xen (as there is usually only one xen recipe per yocto version)
> 
> I'm not sure that's totally accurate: the Yocto Xen recipes are
> written to support backwards compatibility with older versions of Xen.
> In general, care and effort has been expended to support building with
> multiple versions of Xen with the same recipes, via variable override
> or bbappend, and it is expected to work.

I agree when the latest version of meta-virtualization is used (at least
for now).

> 
>> which can make the version here a bit tricky:
>> Yocto 3.1 (Dunfell) supports only Xen 4.12
>> Yocto 3.2 (Gatesgarth) supports only Xen 4.14 but has a Yocto master recipe which should actually be used
>> by someone who would want to try Xen master.
>> 
>> So I would suggest putting Yocto 3.2 here only.
>> 
>> @Christopher: what is your view here ?
> 
> Thanks for asking. I don't quite agree with that recommendation: I
> think Dunfell does belong, with an indication that it requires
> Gatesgarth meta-virtualization. Dunfell is the LTS release, so a
> compatibility statement about it is important. ie:
> 
> Yocto 3.1 (Dunfell + Gatesgarth meta-virtualization)

Agree this is what we should say and add:

Yocto 3.2 (Gatesgarth)

> 
> Effort has already been made within Yocto to make the Gatesgarth
> meta-virtualization layer compatible with Dunfell open-embedded core,
> specifically to allow newer Xen to be used with the LTS Dunfell
> software from the core layers. I would hope that any issues that are
> found with that configuration will be posted so they can be fixed.
> 

I must admit I never tested the combination that way, but I will check
whether such a scenario could be added to the tests we define internally at Arm.

Thanks for the answers

Cheers
Bertrand

> thanks,
> 
> Christopher
> 
>> 
>> Cheers
>> Bertrand
>> 
>>> +CentOS   8
>>> +RHEL     8
>>> +======== ==================
>>> +
>>> +.. note::
>>> +
>>> +   We also support Arch Linux, but as it's a rolling distribution, the
>>> +   concept of "security supported releases" doesn't really apply.
>>> --
>>> 2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:04:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:04:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1904.5746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIrB-0000Y9-Cb; Fri, 02 Oct 2020 11:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1904.5746; Fri, 02 Oct 2020 11:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIrB-0000Y2-7x; Fri, 02 Oct 2020 11:04:09 +0000
Received: by outflank-mailman (input) for mailman id 1904;
 Fri, 02 Oct 2020 11:04:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOIr9-0000Xx-Fz
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:04:07 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7d00::626])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d75aed4-af75-4e70-b00d-73cd2cba7b0f;
 Fri, 02 Oct 2020 11:04:06 +0000 (UTC)
Received: from DB7PR05CA0002.eurprd05.prod.outlook.com (2603:10a6:10:36::15)
 by PR3PR08MB5626.eurprd08.prod.outlook.com (2603:10a6:102:81::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Fri, 2 Oct
 2020 11:04:03 +0000
Received: from DB5EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:36:cafe::53) by DB7PR05CA0002.outlook.office365.com
 (2603:10a6:10:36::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32 via Frontend
 Transport; Fri, 2 Oct 2020 11:04:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT011.mail.protection.outlook.com (10.152.20.95) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 11:04:03 +0000
Received: ("Tessian outbound 7161e0c2a082:v64");
 Fri, 02 Oct 2020 11:04:03 +0000
Received: from f293c959b325.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 637136FA-FA8B-4DEE-AB7C-D9EE771DF13D.1; 
 Fri, 02 Oct 2020 11:03:58 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f293c959b325.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 11:03:58 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2566.eurprd08.prod.outlook.com (2603:10a6:4:a2::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38; Fri, 2 Oct
 2020 11:03:56 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 11:03:56 +0000
X-Inumbo-ID: 3d75aed4-af75-4e70-b00d-73cd2cba7b0f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XmvwajEXxGbQ5LSycLATrQubKMcuPuTfBR8p711VpQk=;
 b=V1oGIsNoTxFWEnANm1BCq3xa5Emm4ebVaLWnkCr/16pg+oecx37sOBXVV6YW6NuJkZvnhEJMAPDjp0oTzjwlKEr0iSWgJ6mYjw1+AASKGq4j37DcynPDl4G03ZDUlrxYtZNf2WE5Ro3jnPlmdIY3wX7gpkuBgy8qBs8+3Lkd8vc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: b901a574756df2a6
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Sn2KHq/UQdVcNDHF4XDuZ+kikLT2kSpT60bbFsQHbxr19C5xnbnha6UzhdRqYFeVITT2VCNlhD3kur5vQYZ6FXw27CDBCIJAeabNDN2QqapFVBQ5r9buIEHfV5LxfZHWNvX5p8cf+Zs1p6rngbJRE84hSOEXI653/bNLXaQwttwqASpiBUE/CsBJSrobQXMiourtE6s+6EDM++JMgR03lJr5aFrQ1aZarMkEeToTelO02onxpuvS0L4Bfm6Pjay1Matoms9KY7QWltma2qhmDjKpnokevpcA0nE/UWHEqlRsldI1ShriFO5EfoTkDts747MzMl814NinT4OfsUpVLw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XmvwajEXxGbQ5LSycLATrQubKMcuPuTfBR8p711VpQk=;
 b=WKsTCc0cJesXVcekAaQJXUTfWHY3MUBt6Af/swam91JlwxyyRuH+hFcK1cJYSNcQhI5JhMjSTK0tgb5MTQtvQ7VPhzQEcgwPakGhbk6pJgsGBUnX4ETF5rKdDCP+M7+GB5xteidwhzXG7gx8NLltQ3XNyfeMJgScTMYVCLZSXuAoUUxC8WjtynLMVmm9ia3AYrZkZqxdWRmRFHT0f33+AIRYzAsipfYMrj7LVgCgrf6LRuX7ql7+rSKilNpmXDJZCX4KrMA1peS1b0V4dfZ64pT1Z6MOOW8D+j3K6TF8ynV/D1jqCqn2BVfGnz867aL9ahnrXTDaJeXMRBijLinstg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: Jan Beulich <jbeulich@suse.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
Thread-Topic: [PATCH v3] tools/libs/stat: fix broken build
Thread-Index:
 AQHWiQYtlvZlYAgRJ0WaZGJ4HThwo6ln+foAgBsWa4CAAMywgIAAGQiAgAAIpgCAAAJuAIAABycAgAAuo4CAAAj+AIAABWkA
Date: Fri, 2 Oct 2020 11:03:55 +0000
Message-ID: <4F380E40-9AEB-4579-ADF7-833CCB5C5D54@arm.com>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
 <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
 <8ddad01e-cf1a-7752-1371-a505fb26dc47@suse.com>
 <90a39759-63c1-28b9-f112-d8b3cc083565@suse.com>
 <558774ab-92cb-90ae-3936-4f9cc9d56fd0@suse.com>
 <5B52FDF2-18DA-4342-9280-0D497FAB6532@arm.com>
 <75346ac2-20f3-c868-4ac9-0d5a2e65d436@suse.com>
In-Reply-To: <75346ac2-20f3-c868-4ac9-0d5a2e65d436@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 227313b2-6193-4dc3-b9a0-08d866c2dfbc
x-ms-traffictypediagnostic: DB6PR0802MB2566:|PR3PR08MB5626:
X-Microsoft-Antispam-PRVS:
	<PR3PR08MB56261EA9ADCE74E3D03F06D19D310@PR3PR08MB5626.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 dHM4PRr1b+ReQJYgvch5AG1f0po4VauMxD9IeS2Uf0pivtTaTuoRYfR7hlzXTSwWFruds1yc8SIu/jiuJ+u6oxx+GrFhfOJP1sBuaI8Fbd7ZlOEltVAA2iLirsgx2WYTgl6Do0k2axU85Ga2prgAaPtNl5hf8CBRfcNw0nfyGA2O1tPiYl+v+QjazkGWM5Z7mUxpPnXcEMhbBLeAH38ods3GZXZXLON6X6AJpNlVZPBzXbJ9NcsfQgisLf7JfuVxohHODMyJqn03MQVVU1utCHKJFIiCOuwJTiWqh3XTw9YUAcleEqWxWUCACt4e4wzceW3btMs29JMAQwT9BDWUoA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(39850400004)(376002)(366004)(346002)(66574015)(6512007)(5660300002)(316002)(186003)(54906003)(4326008)(26005)(478600001)(2616005)(83380400001)(86362001)(36756003)(71200400001)(53546011)(8936002)(2906002)(91956017)(76116006)(66946007)(66476007)(64756008)(66446008)(66556008)(6486002)(6916009)(8676002)(6506007)(33656002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 BYWG8fBE58JtkAruiqjEhDwr0dbVDjylT7vozU+fU5wDvmruZXT87Glvz1pxdzy5m6M2dW0472PEqgm99FuMhCCwmSAeUtMScYZ3ufONC3gZ4N3FvIKqEYIzV7VoX9i4cQRIXCSnuNLAOAxfJ95nSWuN3TaIR8CR+wezyWcpz35lVzHqVPpg8iswa4ZqH4as7Nvl/1pb+lBUFjBAj8jHPAwnKF5gKstGK2XEB1kLNbgd07wCF7b8QgqmyRiZ9f/62ZBVTHIPECUQzcQ65xOwDDc6LjTOPkfANJ2z7jFjpl6OTV6JzpeOzmX3+r42ZfFOJL/BROpADhKDc9xCHmH8yVKQwVd6j0pfnzdZu0T+uKWU7Ke4DUwFbeOnVLELKQ0m1AEwkG1/BYUxx+IwBcykY4DGt2pH1CRYezr3NwU5fJmD16KYYcdNKFdi11S/aJtdVIa9euwun9NwGEDYv6LkjKO5w5OKQ/l/YLKC5uIyWjqzIYQA+BBQ8jOzbmXCDwHttqdyj1fR7HwoAowmKk1PMwIz3Nn6SHNM03PWOIo0jJQ2oesABNwzp5/XAiMBtCSrSNTaaLsim7s7H/ZvoVwGe0QPu5ENrQzsweqjNU6LE285pUPO6GIo+amPbefiHCm0hqXN9OiAMYMNAow17Fjujw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <BFCFC14934BB4C449F31112CCDF70B4F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2566
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	19abe19b-fac5-4beb-af9a-08d866c2db3f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fK2hBCBt/vFgRTBVfLPnYjnZakTeY4GUqOydWae9+lyophZZgTGynm4RHr26pmRuxtE6jbWgLQYGGPo0VLjmqBHsR0c6dNxz9xO8vANwuR6DcaHINsED5Qpw4lT4AGqU9dVmpJpba9TtlIlaUe9zb3lIb2lh9z9uysX4BPwIT9KG4HIoJfmrE2o+q5+unMXPu5JyizMpUu3JnIPNID2iuv3R4mD7qwbAFnew6hUJHb8gOwmfFhihQYf9IQrvKPICJd2zo2j+x9f3OWpgcKGMpcIfr0l3qN4bsX1Z5H68JCH3woCMrk8e3NklpL0xZ5r+e5toCrpLR+C2mfGcJTMFB01xV8oaPX3qs5MgTzZ5TzQQzmhR/UXTQvhQJOG45FYv9oickdSQddJwhVzt4mB4Cg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(376002)(39850400004)(346002)(136003)(46966005)(70586007)(70206006)(8936002)(4326008)(2906002)(478600001)(186003)(316002)(54906003)(356005)(86362001)(6862004)(66574015)(336012)(2616005)(53546011)(33656002)(82310400003)(83380400001)(6506007)(6486002)(26005)(81166007)(8676002)(5660300002)(36756003)(6512007)(82740400003)(47076004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 11:04:03.6221
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 227313b2-6193-4dc3-b9a0-08d866c2dfbc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR3PR08MB5626



> On 2 Oct 2020, at 11:44, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 02.10.20 12:12, Bertrand Marquis wrote:
>> Hi,
>>> On 2 Oct 2020, at 08:25, Jürgen Groß <jgross@suse.com> wrote:
>>> 
>>> On 02.10.20 08:59, Jan Beulich wrote:
>>>> On 02.10.2020 08:51, Jürgen Groß wrote:
>>>>> On 02.10.20 08:20, Jan Beulich wrote:
>>>>>> On 02.10.2020 06:50, Jürgen Groß wrote:
>>>>>>> On 01.10.20 18:38, Bertrand Marquis wrote:
>>>>>>>> Hi Juergen,
>>>>>>>> 
>>>>>>>>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>>>>>>> 
>>>>>>>>>> Making getBridge() static triggered a build error with some gcc versions:
>>>>>>>>>> 
>>>>>>>>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>>>>>>>>> length 255 [-Werror=stringop-truncation]
>>>>>>>>>> 
>>>>>>>>>> Fix that by using a buffer with 256 bytes instead.
>>>>>>>>>> 
>>>>>>>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>>>> 
>>>>>>>> Sorry i have to come back on this one.
>>>>>>>> 
>>>>>>>> I still see an error compiling with Yocto on this one:
>>>>>>>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>>>>>>>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
>>>>>>>> |    81 |      strncpy(result, de->d_name, resultLen);
>>>>>>>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>>>> 
>>>>>>>> To solve it, I need to define devBridge[257] as devNoBrideg.
>>>>>>> 
>>>>>>> IMHO this is a real compiler error.
>>>>>>> 
>>>>>>> de->d_name is an array of 256 bytes, so doing strncpy() from that to
>>>>>>> another array of 256 bytes with a length of 256 won't truncate anything.
>>>>>> 
>>>>>> That's a matter of how you look at it, I think: If the original array
>>>>>> doesn't hold a nul-terminated string, the destination array won't
>>>>>> either, yet the common goal of strncpy() is to yield a properly nul-
>>>>>> terminated string. IOW the warning may be since the standard even has
>>>>>> a specific foot note to point out this possible pitfall.
>>>>> 
>>>>> If the source doesn't hold a nul-terminated string there will still be
>>>>> 256 bytes copied, so there is no truncation done during strncpy().
>>>>> 
>>>>> In fact there is no way to use strncpy() in a safe way on a fixed sized
>>>>> source array with the above semantics: either the target is larger than
>>>>> the source and length is at least sizeof(source) + 1, resulting in a
>>>>> possible read beyond the end of source, or the target is the same length
>>>>> leading to the error.
>>>> I agree with all of what you say, but I can also see why said foot note
>>>> alone may have motivated the emission of the warning.
>>> 
>>> The motivation can be explained, yes, but it is wrong. strncpy() is not
>>> limited to source arrays of unknown length. So this warning is making
>>> strncpy() unusable for fixed sized source strings and -Werror. And that
>>> is nothing a compiler should be allowed to do, hence a compiler bug.
>> I do agree that in this case the compiler is doing to much.
> 
> It is plain wrong here. Rendering a Posix defined function unusable for
> a completely legal use case is in no way a matter of taste or of "doing
> to much". It is a bug.

Agree.

> 
>> We could also choose to turn off the warning either using pragma (which
>> i really do not like) or by adding a cflag for this specific file (but this might
>> hit us later in other places).
>> All in all this currently makes Xen master and staging not possible to
>> compile with Yocto so we need to find a solution as this will also
>> come in any distribution using a new compiler,
> 
> A variant you didn't mention would be open coding of strncpy() (or
> having a related inline function in a common header). This route would
> be the one I'd prefer in case the compiler guys insist on the behavior
> being fine.

True this possible, even though I do not like to modify the code that deeply
for one specific compiler.

> 
> You didn't tell us which compiler is being used and whether it really is
> up to date. A workaround might be to set EXTRA_CFLAGS_XEN_TOOLS to
> "-Wno-stringop-truncation" for the build.

That's what i meant by "adding a cflag", as you suggest we could also
make it global (and not limit it to this file).

The compiler I am using is the one from Yocto Gatesgarth (release coming in
october): gcc version 10.2.0 (released in july 2020).

Cheers

Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:05:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:05:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1911.5758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIsS-0000fE-Mi; Fri, 02 Oct 2020 11:05:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1911.5758; Fri, 02 Oct 2020 11:05:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOIsS-0000f7-JV; Fri, 02 Oct 2020 11:05:28 +0000
Received: by outflank-mailman (input) for mailman id 1911;
 Fri, 02 Oct 2020 11:05:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOIsR-0000f1-8T
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:05:27 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf28328b-1d65-45d5-a35b-fd764b9abca8;
 Fri, 02 Oct 2020 11:05:26 +0000 (UTC)
X-Inumbo-ID: bf28328b-1d65-45d5-a35b-fd764b9abca8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601636726;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=LlGMt2gR70LijQzCMasO19ZCOl1CsAHLpIhOPQuW0eM=;
  b=FRJUVZqMI1DccePmoMYy0qUx2/CsuAaswHaUeT7sDz3QUntYri+2u6cp
   jLEPGW13ojGVdDQuVI7wFq9H2w9pGoe93rvc8pr2HCpaAXiS1CWGt5nPS
   PhiLtxShW60aynAC0k2SYS4IDIyePDiBnswx9TniZK7WaXdg6SYWc/kK8
   k=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ZzOvdlpDCPo1BFRgQe3Ch+tkC/gsBLxEDueoj12E3vfP/UWVt7mBYJZsdn+w92KwRnW9eq+7TW
 NYZTNTkR3H5W1lA1voZwKdTBNVKOEVTfB+zQfw5FCewF5Oul66e6iCZsNBrrCjYAt02jKkHfBJ
 nGdGgh5pV4Ving88bGNxtSTrfHuKGeGft3jk6j5uh3b20m+kL50bgJIL0aTUzJ9S0wnAOoHADJ
 Yxo1zRgs9fTaxjZaylrs61LMSu5xq2PxeZgJTgfz3Lms4WPBYBCpj6Kp1aqYExfZN68XhOHbXP
 0Ew=
X-SBRS: None
X-MesageID: 28240433
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,327,1596513600"; 
   d="scan'208";a="28240433"
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <06775f15-da14-c4eb-7076-8ebe8d964339@citrix.com>
Date: Fri, 2 Oct 2020 12:05:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 02/10/2020 05:50, Jürgen Groß wrote:
> On 01.10.20 18:38, Bertrand Marquis wrote:
>> Hi Juergen,
>>
>>> On 14 Sep 2020, at 11:58, Bertrand Marquis
>>> <bertrand.marquis@arm.com> wrote:
>>>
>>>
>>>
>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>
>>>> Making getBridge() static triggered a build error with some gcc
>>>> versions:
>>>>
>>>> error: 'strncpy' output may be truncated copying 15 bytes from a
>>>> string of
>>>> length 255 [-Werror=stringop-truncation]
>>>>
>>>> Fix that by using a buffer with 256 bytes instead.
>>>>
>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new
>>>> tools/libs/stat directory")
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> Sorry, I have to come back on this one.
>>
>> I still see an error compiling with Yocto on this one:
>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated
>> copying 255 bytes from a string of length 255
>> [-Werror=stringop-truncation]
>> |    81 |      strncpy(result, de->d_name, resultLen);
>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> To solve it, I need to define devBridge[257], like devNoBridge.
>
> IMHO this is a genuine compiler bug.
>
> de->d_name is an array of 256 bytes, so doing strncpy() from that to
> another array of 256 bytes with a length of 256 won't truncate anything.
>
> Making devBridge one byte longer would be dangerous, as this would do
> a strncpy with length of 257 from a source with a length of 256 bytes
> only.
>
> BTW, I think Andrew? has tested my patch with a recent gcc which threw
> the original error without my patch, and it was fine with the patch.
> Either your compiler (assuming you are using gcc) has gained that error
> or you are missing an update fixing it.

All I was doing was using the gitlab CI, and reporting the failing tests.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:14:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:14:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1935.5787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ1L-0001oP-VR; Fri, 02 Oct 2020 11:14:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1935.5787; Fri, 02 Oct 2020 11:14:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ1L-0001oI-SM; Fri, 02 Oct 2020 11:14:39 +0000
Received: by outflank-mailman (input) for mailman id 1935;
 Fri, 02 Oct 2020 11:14:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOJ1K-0001o8-Bb
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:14:38 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8231079-a6db-4922-9812-b6655889c356;
 Fri, 02 Oct 2020 11:14:37 +0000 (UTC)
X-Inumbo-ID: f8231079-a6db-4922-9812-b6655889c356
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601637277;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=GFnw3e1iFTPxYR1b9DtWHRUiA4+esND/Ff+pFeDntaU=;
  b=RkBeCHTtNsEoFXZl/AWChwLCcfN+UTFsMLHUr5NB5gofThbr1sUzLZfQ
   L2Gkjt4V+BKhPuq3z4NZfGJTFPJ1fiqQilPLVe3AzQy+7ThoUUgTRn8li
   I/ehZPPQSpKiTZ6DcNkVY/b8Qj3hOvmZZb3f4fml4/HBPztBOO/+BxYY+
   M=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: f/pLEhB4TAsLSU9Nomz70Xt+CII5FRAqlZHbo4qYL1uGk4OkyDPoLPFhKZnoBabTaXJnqw18hW
 B0Bg7LTTNaiLlmm4UxG7lWMHHc3XZaOR4rq4PvDSJYLSpGRogl73FcHNHfYnzVGEmTVXi2HEKF
 kPN8vgw9sCTrkPItrFX8WXMoAU8e2rpRT3QDMmkSx4Fyx1n8q0v0nSMyLQyTgZkLCOGj3MzMHs
 yj8kRlDJmqgsVBTi9notaZcpKMASgjbxtAh1gAmBdN629rECGKZZLbyF2m/zHZadP0zBuaX2yP
 o0Q=
X-SBRS: None
X-MesageID: 28485448
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,327,1596513600"; 
   d="scan'208";a="28485448"
Subject: Re: [PATCH 2/3] x86: fix resource leaks on arch_vcpu_create() error
 path
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
 <77106fd6-96c5-4a62-5eee-8a37660db550@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <df700f00-9458-c7f8-90fc-65dc31850b48@citrix.com>
Date: Fri, 2 Oct 2020 12:13:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <77106fd6-96c5-4a62-5eee-8a37660db550@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 02/10/2020 11:30, Jan Beulich wrote:
> {hvm,pv}_vcpu_initialise() have always been meant to be the final
> possible source of errors in arch_vcpu_create(), hence not requiring
> any unrolling of what they've done on the error path. (Of course this
> may change once the various involved paths all have become idempotent.)

I'd agree that the way the code was previously laid out expected
{hvm,pv}_vcpu_initialise() to be the final failing option.

I don't think "have always been meant to be" is reasonable, because where
is the code comment explaining this design choice?

The idempotent plans will definitely be removing this misbehaviour, and
the memory leaks it causes.

> But even beyond this aspect I think it is more logical to do policy
> initialization ahead of the calling of these two functions, as they may
> in principle want to access it.

Not these MSRs.  They're currently a block of zeroes, and while that
will eventually change, it will still be a bunch of MSRs in their RESET
state.

The interesting MSRs are the domain ones, not the vCPU ones.

>
> Fixes: 4187f79dc718 ("x86/msr: introduce struct msr_vcpu_policy")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> although I'd
prefer some adjustment to the commit message along the indicated lines.


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:18:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:18:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1955.5800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5C-00022W-J4; Fri, 02 Oct 2020 11:18:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1955.5800; Fri, 02 Oct 2020 11:18:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5C-00022P-EU; Fri, 02 Oct 2020 11:18:38 +0000
Received: by outflank-mailman (input) for mailman id 1955;
 Fri, 02 Oct 2020 11:18:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U/dH=DJ=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kOJ5B-00022K-BG
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:18:37 +0000
Received: from mx1a.swcp.com (unknown [216.184.2.64])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f100fe75-2bff-446d-a52f-5d082d97d84c;
 Fri, 02 Oct 2020 11:18:35 +0000 (UTC)
Received: from ame7.swcp.com (ame7.swcp.com [216.184.2.70])
 by mx1a.swcp.com (8.14.4/8.14.4/Debian-4) with ESMTP id 092BIWU5016902;
 Fri, 2 Oct 2020 05:18:32 -0600
Received: from diamond.fritz.box (62-251-112-184.ip.xs4all.nl [62.251.112.184])
 by ame7.swcp.com (8.15.2/8.15.2) with ESMTP id 092BIOXn008607;
 Fri, 2 Oct 2020 05:18:29 -0600 (MDT) (envelope-from hudson@trmm.net)
X-Inumbo-ID: f100fe75-2bff-446d-a52f-5d082d97d84c
Received-SPF: neutral (ame7.swcp.com: 62.251.112.184 is neither permitted nor denied by domain of hudson@trmm.net) receiver=ame7.swcp.com; client-ip=62.251.112.184; helo=diamond.fritz.box; envelope-from=hudson@trmm.net; x-software=spfmilter 2.001 http://www.acme.com/software/spfmilter/ with libspf2-1.2.10;
X-Authentication-Warning: ame7.swcp.com: Host 62-251-112-184.ip.xs4all.nl [62.251.112.184] claimed to be diamond.fritz.box
From: Trammell Hudson <hudson@trmm.net>
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, andrew.cooper3@citrix.com,
        wl@xen.org
Subject: [PATCH v9 0/4] efi: Unified Xen hypervisor/kernel/initrd images
Date: Fri,  2 Oct 2020 07:18:18 -0400
Message-Id: <20201002111822.42142-1-hudson@trmm.net>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.83
X-Greylist: Message whitelisted by DRAC access database, not delayed by milter-greylist-4.6.2 (ame7.swcp.com [216.184.2.128]); Fri, 02 Oct 2020 05:18:30 -0600 (MDT)
X-Virus-Scanned: clamav-milter 0.100.2 at ame7
X-Virus-Status: Clean
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ame7.swcp.com
X-Spam-Status: No, hits=0.7 tests=NO_RECEIVED,NO_RELAYS,SPF_NEUTRAL
	version=3.4.2
X-Spam-Level: 

This patch series adds support for bundling the xen.efi hypervisor,
the xen.cfg configuration file, the Linux kernel and initrd, as well
as the XSM policy and architecture-specific files into a single
"unified" EFI executable.  This allows an administrator to update the
components independently without having to rebuild xen, as well as to
replace the components in an existing image.

The resulting EFI executable can be invoked directly from the UEFI Boot
Manager, removing the need to use a separate loader like grub as well
as removing dependencies on local filesystem access.  And since it is
a single file, it can be signed and validated by UEFI Secure Boot without
requiring the shim protocol.

It is inspired by systemd-boot's unified kernel technique and borrows the
function to locate PE sections from systemd's LGPL'ed code.  During EFI
boot, Xen looks at its own loaded image to locate the PE sections for
the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
(`.ramdisk`), and XSM config (`.xsm`), which are included after building
xen.efi using objcopy to add named sections for each input file.
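
The objcopy step described above might look something like the following sketch; the input file names (xen.cfg, vmlinuz, initrd.gz, xsm.cfg) and the output name are illustrative only, and any section-VMA handling the real build needs is omitted -- the series' docs/misc/efi.pandoc documents the exact recipe.

```shell
# Append each component as a named PE section of the built xen.efi;
# Xen locates these sections in its own loaded image at EFI boot.
objcopy \
    --add-section .config=xen.cfg \
    --add-section .kernel=vmlinuz \
    --add-section .ramdisk=initrd.gz \
    --add-section .xsm=xsm.cfg \
    xen.efi xen-unified.efi
```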

Trammell Hudson (4):
  efi/boot.c: add file.need_to_free
  efi/boot.c: add handle_file_info()
  efi: Enable booting unified hypervisor/kernel/initrd images
  efi: Do not use command line if unified config is included

 .gitignore                  |   1 +
 docs/misc/efi.pandoc        |  49 ++++++++++++
 xen/arch/arm/efi/efi-boot.h |  36 ++++++---
 xen/arch/x86/efi/Makefile   |   2 +-
 xen/arch/x86/efi/efi-boot.h |  13 ++-
 xen/common/efi/boot.c       | 140 ++++++++++++++++++++++++---------
 xen/common/efi/efi.h        |   3 +
 xen/common/efi/pe.c         | 152 ++++++++++++++++++++++++++++++++++++
 8 files changed, 347 insertions(+), 49 deletions(-)
 create mode 100644 xen/common/efi/pe.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:18:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:18:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1956.5812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5F-00024I-16; Fri, 02 Oct 2020 11:18:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1956.5812; Fri, 02 Oct 2020 11:18:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5E-000248-UD; Fri, 02 Oct 2020 11:18:40 +0000
Received: by outflank-mailman (input) for mailman id 1956;
 Fri, 02 Oct 2020 11:18:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U/dH=DJ=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kOJ5E-00023m-1p
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:18:40 +0000
Received: from mx1a.swcp.com (unknown [216.184.2.64])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ca75921-f737-44f8-890b-af011fc82ecf;
 Fri, 02 Oct 2020 11:18:38 +0000 (UTC)
Received: from ame7.swcp.com (ame7.swcp.com [216.184.2.70])
 by mx1a.swcp.com (8.14.4/8.14.4/Debian-4) with ESMTP id 092BIa6F016911;
 Fri, 2 Oct 2020 05:18:36 -0600
Received: from diamond.fritz.box (62-251-112-184.ip.xs4all.nl [62.251.112.184])
 by ame7.swcp.com (8.15.2/8.15.2) with ESMTP id 092BIOXo008607;
 Fri, 2 Oct 2020 05:18:33 -0600 (MDT) (envelope-from hudson@trmm.net)
X-Inumbo-ID: 0ca75921-f737-44f8-890b-af011fc82ecf
Received-SPF: neutral (ame7.swcp.com: 62.251.112.184 is neither permitted nor denied by domain of hudson@trmm.net) receiver=ame7.swcp.com; client-ip=62.251.112.184; helo=diamond.fritz.box; envelope-from=hudson@trmm.net; x-software=spfmilter 2.001 http://www.acme.com/software/spfmilter/ with libspf2-1.2.10;
X-Authentication-Warning: ame7.swcp.com: Host 62-251-112-184.ip.xs4all.nl [62.251.112.184] claimed to be diamond.fritz.box
From: Trammell Hudson <hudson@trmm.net>
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, andrew.cooper3@citrix.com,
        wl@xen.org
Subject: [PATCH v9 1/4] efi/boot.c: add file.need_to_free
Date: Fri,  2 Oct 2020 07:18:19 -0400
Message-Id: <20201002111822.42142-2-hudson@trmm.net>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201002111822.42142-1-hudson@trmm.net>
References: <20201002111822.42142-1-hudson@trmm.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.83
X-Greylist: Message whitelisted by DRAC access database, not delayed by milter-greylist-4.6.2 (ame7.swcp.com [216.184.2.128]); Fri, 02 Oct 2020 05:18:34 -0600 (MDT)
X-Virus-Scanned: clamav-milter 0.100.2 at ame7
X-Virus-Status: Clean
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ame7.swcp.com
X-Spam-Status: No, hits=0.7 tests=NO_RECEIVED,NO_RELAYS,SPF_NEUTRAL
	version=3.4.2
X-Spam-Level: 

The config file, kernel, initrd, etc. should only be freed if they
were allocated with the UEFI allocator.  On x86 the ucode, and on
ARM the dtb, are also marked as need_to_free when allocated or
expanded.

This also fixes a memory leak in ARM fdt_increase_size() if there
is an error in building the new device tree.

Signed-off-by: Trammell Hudson <hudson@trmm.net>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/arm/efi/efi-boot.h | 11 +++++++++--
 xen/arch/x86/efi/efi-boot.h |  2 +-
 xen/common/efi/boot.c       | 10 ++++++----
 3 files changed, 16 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index 27dd0b1a94..c6200fda0e 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -314,7 +314,10 @@ static void __init *fdt_increase_size(struct file *fdtfile, int add_size)
     if ( fdt_size )
     {
         if ( fdt_open_into(dtbfile.ptr, new_fdt, pages * EFI_PAGE_SIZE) )
+        {
+            efi_bs->FreePages(fdt_addr, pages);
             return NULL;
+        }
     }
     else
     {
@@ -326,7 +329,10 @@ static void __init *fdt_increase_size(struct file *fdtfile, int add_size)
          * system table that is passed in the FDT.
          */
         if ( fdt_create_empty_tree(new_fdt, pages * EFI_PAGE_SIZE) )
+        {
+            efi_bs->FreePages(fdt_addr, pages);
             return NULL;
+        }
     }
 
     /*
@@ -335,12 +341,13 @@ static void __init *fdt_increase_size(struct file *fdtfile, int add_size)
      * code will free it.  If the original FDT came from a configuration
      * table, we don't own that memory and can't free it.
      */
-    if ( dtbfile.size )
+    if ( dtbfile.need_to_free )
         efi_bs->FreePages(dtbfile.addr, PFN_UP(dtbfile.size));
 
     /* Update 'file' info for new memory so we clean it up on error exits */
     dtbfile.addr = fdt_addr;
     dtbfile.size = pages * EFI_PAGE_SIZE;
+    dtbfile.need_to_free = true;
     return new_fdt;
 }
 
@@ -546,7 +553,7 @@ static void __init efi_arch_cpu(void)
 
 static void __init efi_arch_blexit(void)
 {
-    if ( dtbfile.addr && dtbfile.size )
+    if ( dtbfile.need_to_free )
         efi_bs->FreePages(dtbfile.addr, PFN_UP(dtbfile.size));
     if ( memmap )
         efi_bs->FreePool(memmap);
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index eef3f52789..1025000afd 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -689,7 +689,7 @@ static void __init efi_arch_cpu(void)
 
 static void __init efi_arch_blexit(void)
 {
-    if ( ucode.addr )
+    if ( ucode.need_to_free )
         efi_bs->FreePages(ucode.addr, PFN_UP(ucode.size));
 }
 
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 8123523194..9d6dc8ff4f 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -102,6 +102,7 @@ union string {
 
 struct file {
     UINTN size;
+    bool need_to_free;
     union {
         EFI_PHYSICAL_ADDRESS addr;
         char *str;
@@ -287,13 +288,13 @@ void __init noreturn blexit(const CHAR16 *str)
     if ( !efi_bs )
         efi_arch_halt();
 
-    if ( cfg.addr )
+    if ( cfg.need_to_free )
         efi_bs->FreePages(cfg.addr, PFN_UP(cfg.size));
-    if ( kernel.addr )
+    if ( kernel.need_to_free )
         efi_bs->FreePages(kernel.addr, PFN_UP(kernel.size));
-    if ( ramdisk.addr )
+    if ( ramdisk.need_to_free )
         efi_bs->FreePages(ramdisk.addr, PFN_UP(ramdisk.size));
-    if ( xsm.addr )
+    if ( xsm.need_to_free )
         efi_bs->FreePages(xsm.addr, PFN_UP(xsm.size));
 
     efi_arch_blexit();
@@ -588,6 +589,7 @@ static bool __init read_file(EFI_FILE_HANDLE dir_handle, CHAR16 *name,
     }
     else
     {
+        file->need_to_free = true;
         file->size = size;
         if ( file != &cfg )
         {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:18:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1957.5824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5K-00028P-Aq; Fri, 02 Oct 2020 11:18:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1957.5824; Fri, 02 Oct 2020 11:18:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5K-00028G-6y; Fri, 02 Oct 2020 11:18:46 +0000
Received: by outflank-mailman (input) for mailman id 1957;
 Fri, 02 Oct 2020 11:18:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U/dH=DJ=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kOJ5J-00023m-11
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:18:45 +0000
Received: from mx1a.swcp.com (unknown [216.184.2.64])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 825efd7e-2a1d-408d-bf97-5299f161be0c;
 Fri, 02 Oct 2020 11:18:41 +0000 (UTC)
Received: from ame7.swcp.com (ame7.swcp.com [216.184.2.70])
 by mx1a.swcp.com (8.14.4/8.14.4/Debian-4) with ESMTP id 092BIdD6016914;
 Fri, 2 Oct 2020 05:18:39 -0600
Received: from diamond.fritz.box (62-251-112-184.ip.xs4all.nl [62.251.112.184])
 by ame7.swcp.com (8.15.2/8.15.2) with ESMTP id 092BIOXp008607;
 Fri, 2 Oct 2020 05:18:36 -0600 (MDT) (envelope-from hudson@trmm.net)
X-Inumbo-ID: 825efd7e-2a1d-408d-bf97-5299f161be0c
Received-SPF: neutral (ame7.swcp.com: 62.251.112.184 is neither permitted nor denied by domain of hudson@trmm.net) receiver=ame7.swcp.com; client-ip=62.251.112.184; helo=diamond.fritz.box; envelope-from=hudson@trmm.net; x-software=spfmilter 2.001 http://www.acme.com/software/spfmilter/ with libspf2-1.2.10;
X-Authentication-Warning: ame7.swcp.com: Host 62-251-112-184.ip.xs4all.nl [62.251.112.184] claimed to be diamond.fritz.box
From: Trammell Hudson <hudson@trmm.net>
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, andrew.cooper3@citrix.com,
        wl@xen.org
Subject: [PATCH v9 2/4] efi/boot.c: add handle_file_info()
Date: Fri,  2 Oct 2020 07:18:20 -0400
Message-Id: <20201002111822.42142-3-hudson@trmm.net>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201002111822.42142-1-hudson@trmm.net>
References: <20201002111822.42142-1-hudson@trmm.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.83
X-Greylist: Message whitelisted by DRAC access database, not delayed by milter-greylist-4.6.2 (ame7.swcp.com [216.184.2.128]); Fri, 02 Oct 2020 05:18:38 -0600 (MDT)
X-Virus-Scanned: clamav-milter 0.100.2 at ame7
X-Virus-Status: Clean
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ame7.swcp.com
X-Spam-Status: No, hits=0.7 tests=NO_RECEIVED,NO_RELAYS,SPF_NEUTRAL
	version=3.4.2
X-Spam-Level: 

Add a separate function to display the address ranges used by
the files and call `efi_arch_handle_module()` on the modules.

Signed-off-by: Trammell Hudson <hudson@trmm.net>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/efi/boot.c | 27 +++++++++++++++++----------
 1 file changed, 17 insertions(+), 10 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 9d6dc8ff4f..bd629eb658 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -547,6 +547,22 @@ static char * __init split_string(char *s)
     return NULL;
 }
 
+static void __init handle_file_info(const CHAR16 *name,
+                                    const struct file *file, const char *options)
+{
+    if ( file == &cfg )
+        return;
+
+    PrintStr(name);
+    PrintStr(L": ");
+    DisplayUint(file->addr, 2 * sizeof(file->addr));
+    PrintStr(L"-");
+    DisplayUint(file->addr + file->size, 2 * sizeof(file->addr));
+    PrintStr(newline);
+
+    efi_arch_handle_module(file, name, options);
+}
+
 static bool __init read_file(EFI_FILE_HANDLE dir_handle, CHAR16 *name,
                              struct file *file, const char *options)
 {
@@ -591,16 +607,7 @@ static bool __init read_file(EFI_FILE_HANDLE dir_handle, CHAR16 *name,
     {
         file->need_to_free = true;
         file->size = size;
-        if ( file != &cfg )
-        {
-            PrintStr(name);
-            PrintStr(L": ");
-            DisplayUint(file->addr, 2 * sizeof(file->addr));
-            PrintStr(L"-");
-            DisplayUint(file->addr + size, 2 * sizeof(file->addr));
-            PrintStr(newline);
-            efi_arch_handle_module(file, name, options);
-        }
+        handle_file_info(name, file, options);
 
         ret = FileHandle->Read(FileHandle, &file->size, file->str);
         if ( !EFI_ERROR(ret) && file->size != size )
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:18:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:18:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1958.5836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5O-0002Cj-MR; Fri, 02 Oct 2020 11:18:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1958.5836; Fri, 02 Oct 2020 11:18:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5O-0002CY-Il; Fri, 02 Oct 2020 11:18:50 +0000
Received: by outflank-mailman (input) for mailman id 1958;
 Fri, 02 Oct 2020 11:18:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U/dH=DJ=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kOJ5N-0002Bd-FU
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:18:49 +0000
Received: from mx1a.swcp.com (unknown [216.184.2.64])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c5715368-7461-4dd4-9fd0-dec4a226d280;
 Fri, 02 Oct 2020 11:18:47 +0000 (UTC)
Received: from ame7.swcp.com (ame7.swcp.com [216.184.2.70])
 by mx1a.swcp.com (8.14.4/8.14.4/Debian-4) with ESMTP id 092BIjXa016917;
 Fri, 2 Oct 2020 05:18:45 -0600
Received: from diamond.fritz.box (62-251-112-184.ip.xs4all.nl [62.251.112.184])
 by ame7.swcp.com (8.15.2/8.15.2) with ESMTP id 092BIOXq008607;
 Fri, 2 Oct 2020 05:18:40 -0600 (MDT) (envelope-from hudson@trmm.net)
X-Inumbo-ID: c5715368-7461-4dd4-9fd0-dec4a226d280
Received-SPF: neutral (ame7.swcp.com: 62.251.112.184 is neither permitted nor denied by domain of hudson@trmm.net) receiver=ame7.swcp.com; client-ip=62.251.112.184; helo=diamond.fritz.box; envelope-from=hudson@trmm.net; x-software=spfmilter 2.001 http://www.acme.com/software/spfmilter/ with libspf2-1.2.10;
X-Authentication-Warning: ame7.swcp.com: Host 62-251-112-184.ip.xs4all.nl [62.251.112.184] claimed to be diamond.fritz.box
From: Trammell Hudson <hudson@trmm.net>
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, andrew.cooper3@citrix.com,
        wl@xen.org
Subject: [PATCH v9 3/4] efi: Enable booting unified hypervisor/kernel/initrd images
Date: Fri,  2 Oct 2020 07:18:21 -0400
Message-Id: <20201002111822.42142-4-hudson@trmm.net>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201002111822.42142-1-hudson@trmm.net>
References: <20201002111822.42142-1-hudson@trmm.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.83
X-Greylist: Message whitelisted by DRAC access database, not delayed by milter-greylist-4.6.2 (ame7.swcp.com [216.184.2.128]); Fri, 02 Oct 2020 05:18:42 -0600 (MDT)
X-Virus-Scanned: clamav-milter 0.100.2 at ame7
X-Virus-Status: Clean
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ame7.swcp.com
X-Spam-Status: No, hits=0.7 tests=NO_RECEIVED,NO_RELAYS,SPF_NEUTRAL
	version=3.4.2
X-Spam-Level: 

This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
configuration file, the Linux kernel and initrd, as well as the XSM and
architecture-specific files, into a single "unified" EFI executable.
This allows an administrator to update the components independently
without rebuilding Xen, and to replace the components in an existing
image.

The resulting EFI executable can be invoked directly from the UEFI Boot
Manager, removing the need for a separate loader such as GRUB as well
as the dependency on local filesystem access.  And since it is
a single file, it can be signed and validated by UEFI Secure Boot without
requiring the shim protocol.

It is inspired by systemd-boot's unified kernel technique and borrows the
function to locate PE sections from systemd's LGPL'ed code.  During EFI
boot, Xen looks at its own loaded image to locate the PE sections for
the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
(`.ramdisk`), and XSM config (`.xsm`), which are included after building
xen.efi using objcopy to add named sections for each input file.

On x86 systems the CPU microcode can be included in a section named `.ucode`,
which is loaded during the efi_arch_cfg_file_late() stage of the boot process.

On ARM systems the Device Tree can be included in a section named
`.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
the boot process.

Note that the system will fall back to loading files from disk if
the named sections do not exist. This allows distributions to continue
with the status quo if they want a signed kernel + config, while still
allowing a user-provided initrd (which is how the shim protocol currently
works as well).
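
The fallback order described above boils down to: try the named PE
section in the loaded image first, and only consult the config file and
disk when that section is absent. A small Python sketch of the policy,
with illustrative (hypothetical) containers standing in for the EFI
calls, not the actual loader code:

```python
def load_component(sections, disk_files, section_name, file_name):
    """Prefer a section embedded in the unified image; fall back to the
    file named in the config. 'sections' and 'disk_files' are dicts
    standing in for read_section()/read_file() (names illustrative)."""
    data = sections.get(section_name)
    if data is not None:
        return ("builtin", data)        # section present in the image
    if file_name is not None and file_name in disk_files:
        return ("disk", disk_files[file_name])
    return (None, None)                 # component not available
```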

This patch also adds constness to the section parameter of
efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
changes pe_find_section() to use a const CHAR16 section name,
and adds pe_name_compare() to match section names.
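
The section lookup performed by pe_find_section() follows the standard
PE layout: read the DOS header for the PE offset, check the "PE\0\0"
magic, then scan the section table comparing the fixed 8-byte CHAR8
name field. A minimal Python sketch of that walk (not the actual C
implementation; the machine-type and bounds checks of the real code are
elided), assuming a well-formed image:

```python
import struct

def pe_find_section(image: bytes, name: str):
    """Locate a PE section by name; return (VirtualAddress, VirtualSize)
    or None. DOS header -> PE header -> section table scan."""
    if len(image) < 0x40 or image[:2] != b"MZ":
        return None
    (pe_off,) = struct.unpack_from("<I", image, 0x3C)  # DosFileHeader.ExeHeader
    if image[pe_off:pe_off + 4] != b"PE\0\0":
        return None
    nsect = struct.unpack_from("<H", image, pe_off + 6)[0]
    opt_size = struct.unpack_from("<H", image, pe_off + 20)[0]
    off = pe_off + 24 + opt_size        # skip COFF header + optional header
    for i in range(nsect):
        base = off + i * 40             # sizeof(struct PeSectionHeader)
        sect_name = image[base:base + 8].rstrip(b"\0").decode("ascii", "replace")
        vsize, vaddr = struct.unpack_from("<II", image, base + 8)
        if sect_name == name:
            return (vaddr, vsize)
    return None
```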

Signed-off-by: Trammell Hudson <hudson@trmm.net>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 .gitignore                  |   1 +
 docs/misc/efi.pandoc        |  49 ++++++++++++
 xen/arch/arm/efi/efi-boot.h |  25 ++++--
 xen/arch/x86/efi/Makefile   |   2 +-
 xen/arch/x86/efi/efi-boot.h |  11 ++-
 xen/common/efi/boot.c       |  62 +++++++++++----
 xen/common/efi/efi.h        |   3 +
 xen/common/efi/pe.c         | 152 ++++++++++++++++++++++++++++++++++++
 8 files changed, 277 insertions(+), 28 deletions(-)
 create mode 100644 xen/common/efi/pe.c

diff --git a/.gitignore b/.gitignore
index 188495783e..f6865c9cd8 100644
--- a/.gitignore
+++ b/.gitignore
@@ -327,6 +327,7 @@ xen/arch/*/efi/boot.c
 xen/arch/*/efi/compat.c
 xen/arch/*/efi/ebmalloc.c
 xen/arch/*/efi/efi.h
+xen/arch/*/efi/pe.c
 xen/arch/*/efi/runtime.c
 xen/common/config_data.S
 xen/common/config.gz
diff --git a/docs/misc/efi.pandoc b/docs/misc/efi.pandoc
index 23c1a2732d..ac3cd58cae 100644
--- a/docs/misc/efi.pandoc
+++ b/docs/misc/efi.pandoc
@@ -116,3 +116,52 @@ Filenames must be specified relative to the location of the EFI binary.
 
 Extra options to be passed to Xen can also be specified on the command line,
 following a `--` separator option.
+
+## Unified Xen kernel image
+
+The "Unified" kernel image can be generated by adding additional
+sections to the Xen EFI executable with objcopy, similar to how
+[systemd-boot uses the stub to add them to the Linux kernel](https://wiki.archlinux.org/index.php/systemd-boot#Preparing_a_unified_kernel_image).
+
+The sections for the xen configuration file, the dom0 kernel, dom0 initrd,
+XSM and CPU microcode should be added after the Xen `.pad` section, the
+ending address of which can be located with:
+
+```
+objdump -h xen.efi \
+	| perl -ane '/\.pad/ && printf "0x%016x\n", hex($F[2]) + hex($F[3])'
+```
+
+In the examples below, the `.pad` section ends at 0xffff82d041000000.
+All the sections are optional (`.config`, `.kernel`, `.ramdisk`, `.xsm`,
+`.ucode` (x86) and `.dtb` (ARM)) and the order does not matter.
+The virtual addresses do not need to be contiguous, although they should
+not overlap and should all be greater than the last virtual address of the
+hypervisor components.
+
+```
+objcopy \
+	--add-section .config=xen.cfg \
+	--change-section-vma .config=0xffff82d041000000 \
+	--add-section .ucode=ucode.bin \
+	--change-section-vma .ucode=0xffff82d041010000 \
+	--add-section .xsm=xsm.cfg \
+	--change-section-vma .xsm=0xffff82d041080000 \
+	--add-section .kernel=vmlinux \
+	--change-section-vma .kernel=0xffff82d041100000 \
+	--add-section .ramdisk=initrd.img \
+	--change-section-vma .ramdisk=0xffff82d042000000 \
+	xen.efi \
+	xen.unified.efi
+```
+
+The unified executable can be signed with sbsigntool to make
+it usable with UEFI Secure Boot:
+
+```
+sbsign \
+	--key signing.key \
+	--cert cert.pem \
+	--output xen.signed.efi \
+	xen.unified.efi
+```
diff --git a/xen/arch/arm/efi/efi-boot.h b/xen/arch/arm/efi/efi-boot.h
index c6200fda0e..f64a6604af 100644
--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -382,27 +382,36 @@ static void __init noreturn efi_arch_post_exit_boot(void)
     efi_xen_start(fdt, fdt_totalsize(fdt));
 }
 
-static void __init efi_arch_cfg_file_early(EFI_FILE_HANDLE dir_handle, char *section)
+static void __init efi_arch_cfg_file_early(const EFI_LOADED_IMAGE *image,
+                                           EFI_FILE_HANDLE dir_handle,
+                                           const char *section)
 {
     union string name;
 
     /*
      * The DTB must be processed before any other entries in the configuration
-     * file, as the DTB is updated as modules are loaded.
+     * file, as the DTB is updated as modules are loaded.  Prefer the one
+     * stored as a PE section in a unified image, and fall back to a file
+     * on disk if the section is not present.
      */
-    name.s = get_value(&cfg, section, "dtb");
-    if ( name.s )
+    if ( !read_section(image, L"dtb", &dtbfile, NULL) )
     {
-        split_string(name.s);
-        read_file(dir_handle, s2w(&name), &dtbfile, NULL);
-        efi_bs->FreePool(name.w);
+        name.s = get_value(&cfg, section, "dtb");
+        if ( name.s )
+        {
+            split_string(name.s);
+            read_file(dir_handle, s2w(&name), &dtbfile, NULL);
+            efi_bs->FreePool(name.w);
+        }
     }
     fdt = fdt_increase_size(&dtbfile, cfg.size + EFI_PAGE_SIZE);
     if ( !fdt )
         blexit(L"Unable to create new FDT");
 }
 
-static void __init efi_arch_cfg_file_late(EFI_FILE_HANDLE dir_handle, char *section)
+static void __init efi_arch_cfg_file_late(const EFI_LOADED_IMAGE *image,
+                                          EFI_FILE_HANDLE dir_handle,
+                                          const char *section)
 {
 }
 
diff --git a/xen/arch/x86/efi/Makefile b/xen/arch/x86/efi/Makefile
index 770438a029..e857c0f2cc 100644
--- a/xen/arch/x86/efi/Makefile
+++ b/xen/arch/x86/efi/Makefile
@@ -8,7 +8,7 @@ cmd_objcopy_o_ihex = $(OBJCOPY) -I ihex -O binary $< $@
 
 boot.init.o: buildid.o
 
-EFIOBJ := boot.init.o ebmalloc.o compat.o runtime.o
+EFIOBJ := boot.init.o pe.init.o ebmalloc.o compat.o runtime.o
 
 $(call cc-option-add,cflags-stack-boundary,CC,-mpreferred-stack-boundary=4)
 $(EFIOBJ): CFLAGS-stack-boundary := $(cflags-stack-boundary)
diff --git a/xen/arch/x86/efi/efi-boot.h b/xen/arch/x86/efi/efi-boot.h
index 1025000afd..2541ba1f32 100644
--- a/xen/arch/x86/efi/efi-boot.h
+++ b/xen/arch/x86/efi/efi-boot.h
@@ -272,14 +272,21 @@ static void __init noreturn efi_arch_post_exit_boot(void)
     unreachable();
 }
 
-static void __init efi_arch_cfg_file_early(EFI_FILE_HANDLE dir_handle, char *section)
+static void __init efi_arch_cfg_file_early(const EFI_LOADED_IMAGE *image,
+                                           EFI_FILE_HANDLE dir_handle,
+                                           const char *section)
 {
 }
 
-static void __init efi_arch_cfg_file_late(EFI_FILE_HANDLE dir_handle, char *section)
+static void __init efi_arch_cfg_file_late(const EFI_LOADED_IMAGE *image,
+                                          EFI_FILE_HANDLE dir_handle,
+                                          const char *section)
 {
     union string name;
 
+    if ( read_section(image, L"ucode", &ucode, NULL) )
+        return;
+
     name.s = get_value(&cfg, section, "ucode");
     if ( !name.s )
         name.s = get_value(&cfg, "global", "ucode");
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index bd629eb658..bacd551bb5 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -122,6 +122,8 @@ static CHAR16 *s2w(union string *str);
 static char *w2s(const union string *str);
 static bool read_file(EFI_FILE_HANDLE dir_handle, CHAR16 *name,
                       struct file *file, const char *options);
+static bool read_section(const EFI_LOADED_IMAGE *image, const CHAR16 *name,
+                         struct file *file, const char *options);
 static size_t wstrlen(const CHAR16 * s);
 static int set_color(u32 mask, int bpp, u8 *pos, u8 *sz);
 static bool match_guid(const EFI_GUID *guid1, const EFI_GUID *guid2);
@@ -631,6 +633,20 @@ static bool __init read_file(EFI_FILE_HANDLE dir_handle, CHAR16 *name,
     return true;
 }
 
+static bool __init read_section(const EFI_LOADED_IMAGE *image,
+                                const CHAR16 *name, struct file *file,
+                                const char *options)
+{
+    file->ptr = pe_find_section(image->ImageBase, image->ImageSize,
+                                name, &file->size);
+    if ( !file->ptr )
+        return false;
+
+    handle_file_info(name, file, options);
+
+    return true;
+}
+
 static void __init pre_parse(const struct file *cfg)
 {
     char *ptr = cfg->str, *end = ptr + cfg->size;
@@ -1216,7 +1232,9 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
         dir_handle = get_parent_handle(loaded_image, &file_name);
 
         /* Read and parse the config file. */
-        if ( !cfg_file_name )
+        if ( read_section(loaded_image, L"config", &cfg, NULL) )
+            PrintStr(L"Using builtin config file\r\n");
+        else if ( !cfg_file_name )
         {
             CHAR16 *tail;
 
@@ -1266,29 +1284,39 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
         if ( !name.s )
             blexit(L"No Dom0 kernel image specified.");
 
-        efi_arch_cfg_file_early(dir_handle, section.s);
+        efi_arch_cfg_file_early(loaded_image, dir_handle, section.s);
 
         option_str = split_string(name.s);
-        read_file(dir_handle, s2w(&name), &kernel, option_str);
-        efi_bs->FreePool(name.w);
-
-        if ( !EFI_ERROR(efi_bs->LocateProtocol(&shim_lock_guid, NULL,
-                        (void **)&shim_lock)) &&
-             (status = shim_lock->Verify(kernel.ptr, kernel.size)) != EFI_SUCCESS )
-            PrintErrMesg(L"Dom0 kernel image could not be verified", status);
 
-        name.s = get_value(&cfg, section.s, "ramdisk");
-        if ( name.s )
+        if ( !read_section(loaded_image, L"kernel", &kernel, option_str) )
         {
-            read_file(dir_handle, s2w(&name), &ramdisk, NULL);
+            read_file(dir_handle, s2w(&name), &kernel, option_str);
             efi_bs->FreePool(name.w);
+
+            if ( !EFI_ERROR(efi_bs->LocateProtocol(&shim_lock_guid, NULL,
+                            (void **)&shim_lock)) &&
+                 (status = shim_lock->Verify(kernel.ptr, kernel.size)) != EFI_SUCCESS )
+                PrintErrMesg(L"Dom0 kernel image could not be verified", status);
         }
 
-        name.s = get_value(&cfg, section.s, "xsm");
-        if ( name.s )
+        if ( !read_section(loaded_image, L"ramdisk", &ramdisk, NULL) )
         {
-            read_file(dir_handle, s2w(&name), &xsm, NULL);
-            efi_bs->FreePool(name.w);
+            name.s = get_value(&cfg, section.s, "ramdisk");
+            if ( name.s )
+            {
+                read_file(dir_handle, s2w(&name), &ramdisk, NULL);
+                efi_bs->FreePool(name.w);
+            }
+        }
+
+        if ( !read_section(loaded_image, L"xsm", &xsm, NULL) )
+        {
+            name.s = get_value(&cfg, section.s, "xsm");
+            if ( name.s )
+            {
+                read_file(dir_handle, s2w(&name), &xsm, NULL);
+                efi_bs->FreePool(name.w);
+            }
         }
 
         /*
@@ -1324,7 +1352,7 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
             }
         }
 
-        efi_arch_cfg_file_late(dir_handle, section.s);
+        efi_arch_cfg_file_late(loaded_image, dir_handle, section.s);
 
         efi_bs->FreePages(cfg.addr, PFN_UP(cfg.size));
         cfg.addr = 0;
diff --git a/xen/common/efi/efi.h b/xen/common/efi/efi.h
index 4845d84913..663a8b5000 100644
--- a/xen/common/efi/efi.h
+++ b/xen/common/efi/efi.h
@@ -47,3 +47,6 @@ const CHAR16 *wmemchr(const CHAR16 *s, CHAR16 c, UINTN n);
 /* EFI boot allocator. */
 void *ebmalloc(size_t size);
 void free_ebmalloc_unused_mem(void);
+
+const void *pe_find_section(const void *image_base, const size_t image_size,
+                            const CHAR16 *section_name, UINTN *size_out);
diff --git a/xen/common/efi/pe.c b/xen/common/efi/pe.c
new file mode 100644
index 0000000000..a84992df9a
--- /dev/null
+++ b/xen/common/efi/pe.c
@@ -0,0 +1,152 @@
+/*
+ * xen/common/efi/pe.c
+ *
+ * PE executable header parser.
+ *
+ * Derived from https://github.com/systemd/systemd/blob/master/src/boot/efi/pe.c
+ * commit 07d5ed536ec0a76b08229c7a80b910cb9acaf6b1
+ *
+ * Copyright (C) 2015 Kay Sievers <kay@vrfy.org>
+ * Copyright (C) 2020 Trammell Hudson <hudson@trmm.net>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU Lesser General Public License as published by
+ * the Free Software Foundation; either version 2.1 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ */
+
+
+#include "efi.h"
+
+struct DosFileHeader {
+    UINT8   Magic[2];
+    UINT16  LastSize;
+    UINT16  nBlocks;
+    UINT16  nReloc;
+    UINT16  HdrSize;
+    UINT16  MinAlloc;
+    UINT16  MaxAlloc;
+    UINT16  ss;
+    UINT16  sp;
+    UINT16  Checksum;
+    UINT16  ip;
+    UINT16  cs;
+    UINT16  RelocPos;
+    UINT16  nOverlay;
+    UINT16  reserved[4];
+    UINT16  OEMId;
+    UINT16  OEMInfo;
+    UINT16  reserved2[10];
+    UINT32  ExeHeader;
+};
+
+#if defined(__arm__) || defined (__aarch64__)
+#define PE_HEADER_MACHINE 0xaa64
+#elif defined(__x86_64__)
+#define PE_HEADER_MACHINE 0x8664
+#else
+#error "Unknown architecture"
+#endif
+
+struct PeFileHeader {
+    UINT16  Machine;
+    UINT16  NumberOfSections;
+    UINT32  TimeDateStamp;
+    UINT32  PointerToSymbolTable;
+    UINT32  NumberOfSymbols;
+    UINT16  SizeOfOptionalHeader;
+    UINT16  Characteristics;
+};
+
+struct PeHeader {
+    UINT8   Magic[4];
+    struct PeFileHeader FileHeader;
+};
+
+struct PeSectionHeader {
+    CHAR8   Name[8];
+    UINT32  VirtualSize;
+    UINT32  VirtualAddress;
+    UINT32  SizeOfRawData;
+    UINT32  PointerToRawData;
+    UINT32  PointerToRelocations;
+    UINT32  PointerToLinenumbers;
+    UINT16  NumberOfRelocations;
+    UINT16  NumberOfLinenumbers;
+    UINT32  Characteristics;
+};
+
+static bool __init pe_name_compare(const struct PeSectionHeader *sect,
+                                   const CHAR16 *name)
+{
+    size_t i;
+
+    if ( sect->Name[0] != '.' )
+        return false;
+
+    for ( i = 1; i < sizeof(sect->Name); i++ )
+    {
+        const char c = sect->Name[i];
+
+        if ( c != name[i - 1] )
+            return false;
+        if ( c == '\0' )
+            return true;
+    }
+
+    return name[i - 1] == L'\0';
+}
+
+const void *__init pe_find_section(const void *image, const UINTN image_size,
+                                   const CHAR16 *section_name, UINTN *size_out)
+{
+    const struct DosFileHeader *dos = image;
+    const struct PeHeader *pe;
+    const struct PeSectionHeader *sect;
+    UINTN offset, i;
+
+    if ( image_size < sizeof(*dos) ||
+         memcmp(dos->Magic, "MZ", 2) != 0 )
+        return NULL;
+
+    offset = dos->ExeHeader;
+    pe = image + offset;
+
+    offset += sizeof(*pe);
+    if ( image_size < offset ||
+         memcmp(pe->Magic, "PE\0\0", 4) != 0 )
+        return NULL;
+
+    if ( pe->FileHeader.Machine != PE_HEADER_MACHINE )
+        return NULL;
+
+    offset += pe->FileHeader.SizeOfOptionalHeader;
+
+    for ( i = 0; i < pe->FileHeader.NumberOfSections; i++ )
+    {
+        sect = image + offset;
+        if ( image_size < offset + sizeof(*sect) )
+            return NULL;
+
+        if ( !pe_name_compare(sect, section_name) )
+        {
+            offset += sizeof(*sect);
+            continue;
+        }
+
+        if ( image_size < sect->VirtualSize + sect->VirtualAddress )
+            blexit(L"PE invalid section size + address");
+
+        if ( size_out )
+            *size_out = sect->VirtualSize;
+
+        return image + sect->VirtualAddress;
+    }
+
+    return NULL;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:18:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:18:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1959.5848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5R-0002Gp-Cy; Fri, 02 Oct 2020 11:18:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1959.5848; Fri, 02 Oct 2020 11:18:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJ5R-0002Gg-84; Fri, 02 Oct 2020 11:18:53 +0000
Received: by outflank-mailman (input) for mailman id 1959;
 Fri, 02 Oct 2020 11:18:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=U/dH=DJ=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kOJ5Q-0002FX-A8
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:18:52 +0000
Received: from mx1a.swcp.com (unknown [216.184.2.64])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5196ee21-fce5-47d8-a0fc-72ff6a653c0f;
 Fri, 02 Oct 2020 11:18:51 +0000 (UTC)
Received: from ame7.swcp.com (ame7.swcp.com [216.184.2.70])
 by mx1a.swcp.com (8.14.4/8.14.4/Debian-4) with ESMTP id 092BIn6T016920;
 Fri, 2 Oct 2020 05:18:49 -0600
Received: from diamond.fritz.box (62-251-112-184.ip.xs4all.nl [62.251.112.184])
 by ame7.swcp.com (8.15.2/8.15.2) with ESMTP id 092BIOXr008607;
 Fri, 2 Oct 2020 05:18:46 -0600 (MDT) (envelope-from hudson@trmm.net)
X-Inumbo-ID: 5196ee21-fce5-47d8-a0fc-72ff6a653c0f
Received-SPF: neutral (ame7.swcp.com: 62.251.112.184 is neither permitted nor denied by domain of hudson@trmm.net) receiver=ame7.swcp.com; client-ip=62.251.112.184; helo=diamond.fritz.box; envelope-from=hudson@trmm.net; x-software=spfmilter 2.001 http://www.acme.com/software/spfmilter/ with libspf2-1.2.10;
X-Authentication-Warning: ame7.swcp.com: Host 62-251-112-184.ip.xs4all.nl [62.251.112.184] claimed to be diamond.fritz.box
From: Trammell Hudson <hudson@trmm.net>
To: xen-devel@lists.xenproject.org
Cc: roger.pau@citrix.com, jbeulich@suse.com, andrew.cooper3@citrix.com,
        wl@xen.org
Subject: [PATCH v9 4/4] efi: Do not use command line if unified config is included
Date: Fri,  2 Oct 2020 07:18:22 -0400
Message-Id: <20201002111822.42142-5-hudson@trmm.net>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201002111822.42142-1-hudson@trmm.net>
References: <20201002111822.42142-1-hudson@trmm.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.83
X-Greylist: Message whitelisted by DRAC access database, not delayed by milter-greylist-4.6.2 (ame7.swcp.com [216.184.2.128]); Fri, 02 Oct 2020 05:18:48 -0600 (MDT)
X-Virus-Scanned: clamav-milter 0.100.2 at ame7
X-Virus-Status: Clean
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on ame7.swcp.com
X-Spam-Status: No, hits=0.7 tests=NO_RECEIVED,NO_RELAYS,SPF_NEUTRAL
	version=3.4.2
X-Spam-Level: 

If a unified Xen image is used, then the bundled configuration,
Xen command line, dom0 kernel, and ramdisk are preferred over
any files listed in the config file or provided on the command line.

Unlike shim-based verification, the PE signature on a unified image
covers all of the Xen+config+kernel+initrd modules linked into the
unified image. This also ensures that, on properly configured UEFI
Secure Boot platforms, the entire runtime will be measured into the
TPM for unsealing secrets or remote attestation.
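
The enablement rule the loader applies (mirroring the Linux logic it is
synced with) is compact: Secure Boot is in force iff the 'SecureBoot'
variable reads 1 and 'SetupMode' reads 0; a value above 1 in either
variable is malformed and aborts the boot. A hedged Python restatement
of that decision table, not the EFI code itself:

```python
def secure_boot_enabled(secboot, setupmode):
    """Return True iff Secure Boot is in force. None models a
    GetVariable() failure (variable absent): treated as disabled.
    Values above 1 are invalid and rejected, as in the loader."""
    if secboot is None or setupmode is None:
        return False
    if secboot > 1 or setupmode > 1:
        raise ValueError("invalid SecureBoot/SetupMode variables")
    return secboot == 1 and setupmode == 0
```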

Signed-off-by: Trammell Hudson <hudson@trmm.net>
---
 xen/common/efi/boot.c | 43 ++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 38 insertions(+), 5 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index bacd551bb5..8c11475214 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -950,6 +950,35 @@ static void __init setup_efi_pci(void)
     efi_bs->FreePool(handles);
 }
 
+/*
+ * Logic should remain sync'ed with linux/arch/x86/xen/efi.c
+ * Secure Boot is enabled iff 'SecureBoot' is set and the system is
+ * not in Setup Mode.
+ */
+static bool __init efi_secure_boot(void)
+{
+    static __initdata EFI_GUID global_guid = EFI_GLOBAL_VARIABLE;
+    uint8_t secboot, setupmode;
+    UINTN secboot_size = sizeof(secboot);
+    UINTN setupmode_size = sizeof(setupmode);
+    EFI_STATUS rc;
+
+    rc = efi_rs->GetVariable(L"SecureBoot", &global_guid,
+                             NULL, &secboot_size, &secboot);
+    if ( rc != EFI_SUCCESS )
+        return false;
+
+    rc = efi_rs->GetVariable(L"SetupMode", &global_guid,
+                             NULL, &setupmode_size, &setupmode);
+    if ( rc != EFI_SUCCESS )
+        return false;
+
+    if ( secboot > 1 || setupmode > 1 )
+        blexit(L"Invalid SecureBoot/SetupMode variables");
+
+    return secboot == 1 && setupmode == 0;
+}
+
 static void __init efi_variables(void)
 {
     EFI_STATUS status;
@@ -1126,15 +1155,15 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
     static EFI_GUID __initdata shim_lock_guid = SHIM_LOCK_PROTOCOL_GUID;
     EFI_LOADED_IMAGE *loaded_image;
     EFI_STATUS status;
-    unsigned int i, argc;
-    CHAR16 **argv, *file_name, *cfg_file_name = NULL, *options = NULL;
+    unsigned int i, argc = 0;
+    CHAR16 **argv = NULL, *file_name, *cfg_file_name = NULL, *options = NULL;
     UINTN gop_mode = ~0;
     EFI_SHIM_LOCK_PROTOCOL *shim_lock;
     EFI_GRAPHICS_OUTPUT_PROTOCOL *gop = NULL;
     union string section = { NULL }, name;
     bool base_video = false;
     const char *option_str;
-    bool use_cfg_file;
+    bool use_cfg_file, secure;
 
     __set_bit(EFI_BOOT, &efi_flags);
     __set_bit(EFI_LOADER, &efi_flags);
@@ -1153,8 +1182,10 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
         PrintErrMesg(L"No Loaded Image Protocol", status);
 
     efi_arch_load_addr_check(loaded_image);
+    secure = efi_secure_boot();
 
-    if ( use_cfg_file )
+    if ( use_cfg_file &&
+         !read_section(loaded_image, L"config", &cfg, NULL) )
     {
         UINTN offset = 0;
 
@@ -1212,6 +1243,8 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
 
     PrintStr(L"Xen " __stringify(XEN_VERSION) "." __stringify(XEN_SUBVERSION)
              XEN_EXTRAVERSION " (c/s " XEN_CHANGESET ") EFI loader\r\n");
+    if ( secure )
+        PrintStr(L"UEFI Secure Boot enabled\r\n");
 
     efi_arch_relocate_image(0);
 
@@ -1232,7 +1265,7 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
         dir_handle = get_parent_handle(loaded_image, &file_name);
 
         /* Read and parse the config file. */
-        if ( read_section(loaded_image, L"config", &cfg, NULL) )
+        if ( cfg.ptr )
             PrintStr(L"Using builtin config file\r\n");
         else if ( !cfg_file_name )
         {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:23:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1969.5860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJA9-0003Na-1O; Fri, 02 Oct 2020 11:23:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1969.5860; Fri, 02 Oct 2020 11:23:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJA8-0003NT-Tp; Fri, 02 Oct 2020 11:23:44 +0000
Received: by outflank-mailman (input) for mailman id 1969;
 Fri, 02 Oct 2020 11:23:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOJA8-0003NO-39
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:23:44 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edbac935-f7f8-4281-9afb-ad06861a9622;
 Fri, 02 Oct 2020 11:23:43 +0000 (UTC)
X-Inumbo-ID: edbac935-f7f8-4281-9afb-ad06861a9622
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601637824;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=r52eNszNUG5QuCOdbKV4ykZ4CTpHi/cpnRL2ZcKCQQ0=;
  b=GyV0g45M3mJ7ztvjdCqEi+BSkDyureHfoWK7PQcqeSBXk03qATyQjxl9
   h3IrbkK+6yrn7xYtodkiRbDYs8pU9po71IcApF/1XvardYjTkpYxSgxva
   gs18jBsZAuhso3EKawv8pHsKklJGuHnXf02UI2SunlTl/1/pn1/2loqb0
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: EgwAj4Uoxcnso4oYxvx+hPzAfj28TInscbiawn2N8SXzRlRThilRKWcCtEZU2/Fer+pljT5c8e
 O6eylcsT3gwU7MgoQ/mnKluyFej/MG/yGqU0nxM2PPdpxFHuA1xB5qMWG1IDNM7X6lB5RRoXGG
 2dag17paIhioFO9v7/T3D6lV6Sf8KNqXRRw8VLXkDYkVdYtVf14giGT89kNx8zj/OFvtWAt+Hb
 oVp8GMK1KnIQYn1lp2cHTKz+9D6vauYskdKy0BH8HXRiNBLKNKtTbcYuRIPaUYf/lkRRMWwuu6
 7GI=
X-SBRS: None
X-MesageID: 28129791
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,327,1596513600"; 
   d="scan'208";a="28129791"
Subject: Re: [PATCH 3/3] x86/vLAPIC: vlapic_init() runs only once for a vCPU
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
 <3735eb75-76ef-abff-1b05-aa89ddc39fcc@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <03ab138c-5608-ba4e-90ae-5d7bcdfd6bd9@citrix.com>
Date: Fri, 2 Oct 2020 12:19:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <3735eb75-76ef-abff-1b05-aa89ddc39fcc@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 02/10/2020 11:31, Jan Beulich wrote:
> Hence there's no need to guard allocation / mapping by checks whether
> the same action has been done before. I assume this was a transient
> change which should have been undone before 509529e99148 ("x86 hvm: Xen
> interface and implementation for virtual S3") got committed.
>
> While touching this code, switch dprintk()-s to use %pv.

Logging ENOMEM, especially without actually saying ENOMEM, is quite
pointless.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, preferably with
the printk()s dropped.


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:30:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:30:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1973.5872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJGf-0004I2-O1; Fri, 02 Oct 2020 11:30:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1973.5872; Fri, 02 Oct 2020 11:30:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJGf-0004Hv-Ks; Fri, 02 Oct 2020 11:30:29 +0000
Received: by outflank-mailman (input) for mailman id 1973;
 Fri, 02 Oct 2020 11:30:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MJP2=DJ=gmail.com=ckoenig.leichtzumerken@srs-us1.protection.inumbo.net>)
 id 1kOJGe-0004HC-46
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:30:28 +0000
Received: from mail-ej1-x644.google.com (unknown [2a00:1450:4864:20::644])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c6a57a3-222d-4a2f-9a2d-e05323d396ca;
 Fri, 02 Oct 2020 11:30:26 +0000 (UTC)
Received: by mail-ej1-x644.google.com with SMTP id nw23so1477279ejb.4
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 04:30:26 -0700 (PDT)
Received: from ?IPv6:2a02:908:1252:fb60:be8a:bd56:1f94:86e7?
 ([2a02:908:1252:fb60:be8a:bd56:1f94:86e7])
 by smtp.gmail.com with ESMTPSA id jp3sm968602ejb.125.2020.10.02.04.30.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 02 Oct 2020 04:30:24 -0700 (PDT)
X-Inumbo-ID: 5c6a57a3-222d-4a2f-9a2d-e05323d396ca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=reply-to:subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=3Yv1p91kdY6MztMHEbXAX2LIhnqCDycHHWK1s9rlIpk=;
        b=cebDEAs7j0BDWj7XygZm8zBV9UtA+AAQH7jIMa3COVQavNGQ+xmaBn9oMoiStb2f2h
         KWrSq2RIjXarUg9cmmsx6e/IvJprHToa4zB4n03/MjsEDUWHyGrhyxSqnN4QN74MCEWv
         F+qLVD6ryJnlLP8vbzJLsvM8pEx84IxEmpnoZ+bibT8IC2fY5x+WMqpAPumoMtkcL/45
         G/iif/RHXgdKh9DArmcH/vzWsL81H6c2t9CDpNdYmjQRRguBB4YtWane1V8BhhhBxe0I
         Nh4niqMUMOyIDFLfEGby2I/PFheLBfKCxpKk74E4KKaHjYLHDqqOJLGfJFBDuUwVEuK6
         mzHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:reply-to:subject:to:cc:references:from
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-transfer-encoding:content-language;
        bh=3Yv1p91kdY6MztMHEbXAX2LIhnqCDycHHWK1s9rlIpk=;
        b=AwXQtVBzDgYR9RLy3H3MgC6IJcWQql4EtRLUs/e35ArzH732nXOQ2Q3HTn4BDngWcG
         fesjqQDVj/CwHnW330kwAARatEh2J7+rB17zfdmhdbawVrgilpDHo+ZBA/vvJyIYKIlh
         ty0+PFoExDgADdjWJ2tE5cNimX68PKRPlh3Cezj911mYDt4yDVQKMM4O8nWdWC8ek5/6
         +ry8FLEz/M5iXXxJmyShgzwuxM/7Bb7GjBV67LdMG+2Dtr+dQpvCyLrUsgsVcii9DGBr
         3DkTVGeOhMG/k4yOeOjTbZhuNGzS6dcc09AopC6sKsCaJuQELQ9F8RzeWnr9zQ7aZHhU
         ffvQ==
X-Gm-Message-State: AOAM531pOM7aI8Ogwuh+oFSIixkH9wlHXfqqmat3KSptsVCShIwrpimS
	YKTKNK25Vo2m3GvAz/+C3zI=
X-Google-Smtp-Source: ABdhPJxApAPeHRcp+JFrWFCuy4bAcO8RMtt2ySgmt8bQlcWwipN6gyDihDz/j48Bs0NqfRAKqlXWUA==
X-Received: by 2002:a17:906:c7d5:: with SMTP id dc21mr1070990ejb.308.1601638225441;
        Fri, 02 Oct 2020 04:30:25 -0700 (PDT)
Reply-To: christian.koenig@amd.com
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
To: Daniel Vetter <daniel@ffwll.ch>,
 =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Cc: Luben Tuikov <luben.tuikov@amd.com>, =?UTF-8?Q?Heiko_St=c3=bcbner?=
 <heiko@sntech.de>, Dave Airlie <airlied@linux.ie>,
 Nouveau Dev <nouveau@lists.freedesktop.org>,
 Linus Walleij <linus.walleij@linaro.org>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Melissa Wen <melissa.srw@gmail.com>, "Anholt, Eric" <eric@anholt.net>,
 Huang Rui <ray.huang@amd.com>, Gerd Hoffmann <kraxel@redhat.com>,
 Sam Ravnborg <sam@ravnborg.org>, Sumit Semwal <sumit.semwal@linaro.org>,
 Emil Velikov <emil.velikov@collabora.com>, Rob Herring <robh@kernel.org>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Krzysztof Kozlowski <krzk@kernel.org>, Steven Price <steven.price@arm.com>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 Kukjin Kim <kgene@kernel.org>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>,
 Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 Maxime Ripard <mripard@kernel.org>, Inki Dae <inki.dae@samsung.com>,
 Hans de Goede <hdegoede@redhat.com>,
 Christian Gmeiner <christian.gmeiner@gmail.com>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 Sean Paul <sean@poorly.run>, apaneers@amd.com,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>,
 Kyungmin Park <kyungmin.park@samsung.com>,
 Qinglang Miao <miaoqinglang@huawei.com>, Qiang Yu <yuq825@gmail.com>,
 Thomas Zimmermann <tzimmermann@suse.de>,
 Alex Deucher <alexander.deucher@amd.com>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>,
 Lucas Stach <l.stach@pengutronix.de>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com>
 <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
 <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
 <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com>
 <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <ckoenig.leichtzumerken@gmail.com>
Message-ID: <f6dcba12-8be8-b867-ac9b-a1ba50567fca@gmail.com>
Date: Fri, 2 Oct 2020 13:30:20 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201002095830.GH438822@phenom.ffwll.local>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US

Am 02.10.20 um 11:58 schrieb Daniel Vetter:
> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>> <christian.koenig@amd.com> wrote:
>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter:
>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann:
>>>>>> Hi
>>>>>>
>>>>>> Am 30.09.20 um 10:05 schrieb Christian König:
>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann:
>>>>>>>> Hi Christian
>>>>>>>>
>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian König:
>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>> well.
>>>>>>>>>
>>>>>>>>> Additional to that which driver is going to use this?
>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>> retrieve the pointer via this function.
>>>>>>>>
>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>> I should have asked which driver you're trying to fix here :)
>>>>>>>
>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>
>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>> Ah, ok can we have that then only in the VRAM helpers?
>>>>>
>>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>
>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>> Hm I'm not really seeing how that helps with a gradual conversion of
>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>> conversions seems fairly unavoidable to me.
>>> Fair enough. I would just have started bottom up and not top down.
>>>
>>> Anyway feel free to go ahead with this approach as long as we can remove
>>> the new function again when we clean that stuff up for good.
>> Yeah I guess bottom up would make more sense as a refactoring. But the
>> main motivation to land this here is to fix the __mmio vs normal
>> memory confusion in the fbdev emulation helpers for sparc (and
>> anything else that needs this). Hence the top down approach for
>> rolling this out.
> Ok I started reviewing this a bit more in-depth, and I think this is a bit
> too much of a de-tour.
>
> Looking through all the callers of ttm_bo_kmap, almost everyone maps the
> entire object. Only vmwgfx maps less than that. Also, everyone just
> immediately follows up with converting that full object map into a
> pointer.
>
> So I think what we really want here is:
> - new function
>
> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>
>    _vmap name since that's consistent with both dma_buf functions and
>    what's usually used to implement this. Outside of the ttm world kmap
>    usually just means single-page mappings using kmap() or its iomem
>    sibling io_mapping_map*, so it's a rather confusing name for a function which
>    usually is just used to set up a vmap of the entire buffer.
>
> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>    functions for all ttm drivers. We should be able to make this fully
>    generic because a) we now have dma_buf_map and b) drm_gem_object is
>    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>    and gem driver.
>
>    This is maybe a good follow-up, since it should allow us to ditch quite
>    a bit of the vram helper code for this more generic stuff. I also might
>    have missed some special-cases here, but from a quick look everything
>    just pins the buffer to the current location and that's it.
>
>    Also this obviously requires Christian's generic ttm_bo_pin rework
>    first.
>
> - roll the above out to drivers.
>
> Christian/Thomas, thoughts on this?

Calling this vmap instead of kmap certainly makes sense.

Not 100% sure about the generic helpers, but it sounds like this should 
indeed look rather clean in the end.

Christian.

>
> I think for the immediate need of rolling this out for vram helpers and
> fbdev code we should be able to do this, but just postpone the driver wide
> roll-out for now.
>
> Cheers, Daniel
>
>> -Daniel
>>
>>> Christian.
>>>
>>>> -Daniel
>>>>
>>>>> Thanks,
>>>>> Christian.
>>>>>
>>>>>> Best regards
>>>>>> Thomas
>>>>>>
>>>>>>> Regards,
>>>>>>> Christian.
>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Thomas
>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Christian.
>>>>>>>>>
>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>> ---
>>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>      2 files changed, 44 insertions(+)
>>>>>>>>>>
>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>      #include <drm/drm_gem.h>
>>>>>>>>>>      #include <drm/drm_hashtab.h>
>>>>>>>>>>      #include <drm/drm_vma_manager.h>
>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>      #include <linux/kref.h>
>>>>>>>>>>      #include <linux/list.h>
>>>>>>>>>>      #include <linux/wait.h>
>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>          return map->virtual;
>>>>>>>>>>      }
>>>>>>>>>>      +/**
>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>> + *
>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>> + *
>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>> + */
>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>> *kmap,
>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>> +{
>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>> +
>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>> +    else
>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>>      /**
>>>>>>>>>>       * ttm_bo_kmap
>>>>>>>>>>       *
>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>       *
>>>>>>>>>>       *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>>>>>>>       *
>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>> + *
>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>> + *
>>>>>>>>>>       *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>>>>>>>> + *
>>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>       * dma_buf_map_is_null().
>>>>>>>>>>       *
>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>          map->is_iomem = false;
>>>>>>>>>>      }
>>>>>>>>>>      +/**
>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>> an address in I/O memory
>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>> + *
>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>> + */
>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>> +{
>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>>      /**
>>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>> for equality
>>>>>>>>>>       * @lhs:    The dma-buf mapping structure
>>>>>>>>> _______________________________________________
>>>>>>>>> dri-devel mailing list
>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>> _______________________________________________
>>>>>>>> amd-gfx mailing list
>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>>> _______________________________________________
>>>>>>> dri-devel mailing list
>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>
>>>>>> _______________________________________________
>>>>>> amd-gfx mailing list
>>>>>> amd-gfx@lists.freedesktop.org
>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:30:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:30:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1974.5883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJGq-0004KV-24; Fri, 02 Oct 2020 11:30:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1974.5883; Fri, 02 Oct 2020 11:30:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJGp-0004KO-Tu; Fri, 02 Oct 2020 11:30:39 +0000
Received: by outflank-mailman (input) for mailman id 1974;
 Fri, 02 Oct 2020 11:30:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOJGo-0004Jz-6z
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:30:38 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a78ad48-1773-4da8-8fef-805ca647fb72;
 Fri, 02 Oct 2020 11:30:37 +0000 (UTC)
X-Inumbo-ID: 4a78ad48-1773-4da8-8fef-805ca647fb72
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601638237;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=zCR6Dv3onCS8k84WLM6YzxWdsEqTYK3W+Ynyk/DsUJU=;
  b=PfheIrpwkEZII+GxkPZJaevi+N0SyrhRAu1LiRpJeNuQWa1ghwypAtzt
   QB8tJbqUsJxhQl/UcxqhSkbUGrG1rxjudGB89ZkndFmsp7IKHJn2901U1
   J6XJP8In3KpCia2/k/L3gCApW7Bxen1+YbX9XMjB/JK4m7rJVVrVNJKO2
   w=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: goJ9Yi/246u1J3b5DSRDg54+JZMFdCm17xpnULr0EByx8xUAKQXtjNAAfzMZ70FivXxFnyGwMZ
 lqfP7VFAHAGCKbBM2cT3H0CsN6/PC6IIStMhA8YFKaq5xlrXVzAOYBPpBtykF2F93TGAuhuAIA
 XGG0NqKB4ZFD1PBTmTsBlEHS/Q+izF90UnkhHvfYdvmkN2V+rrhn0PIO9Ro7E2plJU6ynKdCVP
 O61pfmvLtHojPlPd7gtiKgKnTniv9vzRtKPXMZoul6igIOPUByWLHmB8rqSM/ZYT0zMBrXCluI
 +Qo=
X-SBRS: None
X-MesageID: 29170748
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,327,1596513600"; 
   d="scan'208";a="29170748"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/hvm: Correct error message in check_segment()
Date: Fri, 2 Oct 2020 12:30:12 +0100
Message-ID: <20201002113012.29932-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The error message is wrong (given AMD's older interpretation of what a NUL
segment should contain, attribute-wise), and actively unhelpful because you
only get it in response to a hypercall where the one piece of information you
cannot provide is the segment selector.

Fix the message to talk about segment attributes, rather than the selector.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hvm/domain.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
index 8e3375265c..ffe952c2df 100644
--- a/xen/arch/x86/hvm/domain.c
+++ b/xen/arch/x86/hvm/domain.c
@@ -39,7 +39,7 @@ static int check_segment(struct segment_register *reg, enum x86_segment seg)
     {
         if ( seg != x86_seg_ds && seg != x86_seg_es )
         {
-            gprintk(XENLOG_ERR, "Null selector provided for CS, SS or TR\n");
+            gprintk(XENLOG_ERR, "Empty segment attributes for CS, SS or TR\n");
             return -EINVAL;
         }
         return 0;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:53:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:53:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.1998.5914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJcU-0006RI-EV; Fri, 02 Oct 2020 11:53:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 1998.5914; Fri, 02 Oct 2020 11:53:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJcU-0006RB-BR; Fri, 02 Oct 2020 11:53:02 +0000
Received: by outflank-mailman (input) for mailman id 1998;
 Fri, 02 Oct 2020 11:53:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MkBu=DJ=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kOJcS-0006R6-RI
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:53:00 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2855a5f1-261f-4f52-9280-8a7e6496c1bc;
 Fri, 02 Oct 2020 11:53:00 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id o5so1478066wrn.13
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 04:53:00 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q4sm1475935wru.65.2020.10.02.04.52.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 04:52:58 -0700 (PDT)
X-Inumbo-ID: 2855a5f1-261f-4f52-9280-8a7e6496c1bc
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=QwGq+Eoo4NWSuRYXk18oyT964WvNmcCXLZuBK/a1N+Y=;
        b=e0gAOBfD5A8hm96S+gcQUcmPouL0pQiR8HVzceL2Cp20GmqVTJ/AtYO9KjdLRrAw7c
         yEKq2pjY0369ei0ZqFju2D/3qOvkGglhTxdUiL7o/8aeurdFLxOR1nhlh1aAIWe0mnpW
         Ve812g76CoIIjJUBjKOxhZIk4XJGCMF7OvOIut7pFJxeHCofRhz6iHavdxtpD3seBA4r
         s1uMxx6zFe6KXeW596rOvxqH8IpQj0F7nV/qmI9KwSWPhqQmmwTgBBijg1ocw6nRrmiq
         u4J4qLk2+tsS6HjHcJCz5eOXD7PZtjU71rodgTI7RsJsBweA8xlWMRDODKFJbkUkKAvF
         d0PA==
X-Gm-Message-State: AOAM531THPgLeZHo6YsukAcPN9mNfaMkrpqs1iuXrKpDxCWxJt+Ko8rL
	YlkYatDLv3pK0Yhjs6iPykU=
X-Google-Smtp-Source: ABdhPJw+mzkq3L6I/6Zxq/N+UzMAWQCTQC6L6Y7/0OrU2dJ7RwxRz648PXe9Rdy+SocFAGo0CjX8lw==
X-Received: by 2002:adf:eecb:: with SMTP id a11mr2653510wrp.356.1601639579188;
        Fri, 02 Oct 2020 04:52:59 -0700 (PDT)
Date: Fri, 2 Oct 2020 11:52:57 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH 0/6] tools/include: adjustments to the population
 of xen/
Message-ID: <20201002115257.crrhxpz6u5jsd34c@liuwe-devbox-debian-v2>
References: <2a9f86aa-9104-8a45-cd21-72acd693f924@suse.com>
 <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <673fdaf3-e770-67c8-0a6c-6cdec79df38a@suse.com>
User-Agent: NeoMutt/20180716

On Thu, Oct 01, 2020 at 06:03:09PM +0200, Jan Beulich wrote:
> On 10.09.2020 14:09, Jan Beulich wrote:
> > While looking at what it would take to move around libelf/
> > in the hypervisor subtree, I've run into this rule, which I
> > think can do with a few improvements and some simplification.
> > 
> > 1: adjust population of acpi/
> > 2: fix (drop) dependencies of when to populate xen/
> > 3: adjust population of public headers into xen/
> > 4: properly install Arm public headers
> > 5: adjust x86-specific population of xen/
> > 6: drop remaining -f from ln invocations
> 
> May I ask for an ack or otherwise here?

While I think I agree with Andrew that getting rid of symlinks is better,
we're nowhere near that.

This series is an improvement over the status quo, so:

Acked-by: Wei Liu <wl@xen.org>

> 
> Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 11:55:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 11:55:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2000.5926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJea-0006ap-Ra; Fri, 02 Oct 2020 11:55:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2000.5926; Fri, 02 Oct 2020 11:55:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJea-0006ai-ON; Fri, 02 Oct 2020 11:55:12 +0000
Received: by outflank-mailman (input) for mailman id 2000;
 Fri, 02 Oct 2020 11:55:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOJeY-0006ac-W9
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:55:11 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23cfc5dd-6ab1-49e7-9449-4e15921ea93b;
 Fri, 02 Oct 2020 11:55:09 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kOJeY-0006ac-W9
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 11:55:11 +0000
X-Inumbo-ID: 23cfc5dd-6ab1-49e7-9449-4e15921ea93b
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 23cfc5dd-6ab1-49e7-9449-4e15921ea93b;
	Fri, 02 Oct 2020 11:55:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601639709;
  h=subject:to:references:from:cc:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=xwzCT+Copx3aEgviGeHmA5hn+Ofm5MhoBu1WshWrOo0=;
  b=T2PlnndAte8/IcwXSUkIj4q9mn2v/Crj6706JxBSidAIqoXRaxPFpkBp
   x8IPokJgfSJP/LIZHLqSn6Awc1kRNJ8MtpznhYA8nMXqXkbMcxNSVwep0
   GbsEf7XOLcUmmo/s5Qo2FQJJNLELynWLAVaeRE+yaultlkr3xBBgTOti1
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: v2PcWNnaYGcL/ylgiJ3SMF2nrZ44JYoMXlYVj3UarFnixThafqGqEEWseMMnGXf+XwtVbQYWQ9
 sGPKR0C8Nke/DPgivMhCWS7EW6yXLyYIWYZXu+2giWQ0snUzsu+ieZ3wgdcXx5XeYVVojn7ZI2
 cqsuIe7yoCdZu61wI4pikQizba81ZR0uXfQ+zcrbum6Q5tOX6MIXHgbq1vy9Bf/Ud5NZIGlgEL
 2j6cvqFxONdjP14C9gl0EKtB0Zfr4AZspRObN/m6BbKmNe2nRoLklthcV7Fw0kd0W0ae9FO4FC
 x68=
X-SBRS: None
X-MesageID: 29171978
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,327,1596513600"; 
   d="scan'208";a="29171978"
Subject: Re: [xen-unstable-smoke bisection] complete test-amd64-amd64-libvirt
To: <xen-devel@lists.xenproject.org>
References: <E1kOAhz-0006XB-8d@osstest.test-lab.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jim Fehlig <jfehlig@suse.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Anthony PERARD <anthony.perard@citrix.com>
Message-ID: <f5c5e9ea-aab2-4295-9a68-56a1cc07645e@citrix.com>
Date: Fri, 2 Oct 2020 12:55:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <E1kOAhz-0006XB-8d@osstest.test-lab.xenproject.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 02/10/2020 03:22, osstest service owner wrote:
> *** Found and reproduced problem changeset ***
>
>   Bug is in tree:  xen git://xenbits.xen.org/xen.git
>   Bug introduced:  bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
>   Bug not present: 50a5215f30e964a6f16165ab57925ca39f31a849
>   Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155282/
>
>
>   commit bfcc97c08c2258316d1cd92c23a441d97ad6ff4e
>   Author: Andrew Cooper <andrew.cooper3@citrix.com>
>   Date:   Tue Sep 29 14:48:52 2020 +0100
>   
>       tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()
>       
>       Nested Virt is the final special case in legacy CPUID handling.  Pass the
>       (poorly named) nested_hvm setting down into xc_cpuid_apply_policy() to break
>       the semantic dependency on HVM_PARAM_NESTEDHVM.
>       
>       No functional change.
>       
>       Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>       Acked-by: Wei Liu <wl@xen.org>

This is totally bizarre.

From
http://logs.test-lab.xenproject.org/osstest/logs/155282/test-amd64-amd64-libvirt/huxelrebe1---var-log-libvirt-libvirtd.log.gz
we get a bunch of:

debug : libvirt_vmessage:71 : libvirt_vmessage: context='libxl'
format='%s%s%s%s%s%s'

lines, which I suspect means that libxl has tried logging an error, and
it's not been rendered correctly.


The only possible change in libxl is side effects from the extra call to
libxl_defbool_val() which asserts that the boolean isn't in its default
form.

However, by this point in booting, libxl__domain_build_info_setdefault()
should have already forced it to true or false.

~Andrew, still very much confused


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2005.5938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJs8-0007iN-HG; Fri, 02 Oct 2020 12:09:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2005.5938; Fri, 02 Oct 2020 12:09:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJs8-0007iG-Cr; Fri, 02 Oct 2020 12:09:12 +0000
Received: by outflank-mailman (input) for mailman id 2005;
 Fri, 02 Oct 2020 12:09:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOJs6-0007iB-V6
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:09:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 065e8ae6-58c7-47e1-bf9d-e337fad4b2f5;
 Fri, 02 Oct 2020 12:09:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E43CFB453;
 Fri,  2 Oct 2020 12:09:08 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kOJs6-0007iB-V6
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:09:11 +0000
X-Inumbo-ID: 065e8ae6-58c7-47e1-bf9d-e337fad4b2f5
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 065e8ae6-58c7-47e1-bf9d-e337fad4b2f5;
	Fri, 02 Oct 2020 12:09:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601640549;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eWzi5sBq03i78dd13VMcr/YCIEfgD99M/BA7lsnrOy4=;
	b=AEQuF+O/8NhXCph2hbwxHL226XTk1wSAz3BDTwA2FoaNVdcfWUy8slItD0NX9s8i3nc54d
	gWs725DMP0j+TzK+Dp+nLzAh/tcQ4qjedQV1NGEyaXM3zoRIDfhHKYmXHA+I2JYv5/hwAX
	LK1xfCxzWHhxQ7DMAho7Ks8/OZUWL+E=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E43CFB453;
	Fri,  2 Oct 2020 12:09:08 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: Correct error message in check_segment()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201002113012.29932-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <986e7bdf-1ba3-3f6f-fdfb-e8ab23afbc6f@suse.com>
Date: Fri, 2 Oct 2020 14:09:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201002113012.29932-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.10.2020 13:30, Andrew Cooper wrote:
> The error message is wrong (given AMD's older interpretation of what a NUL
> segment should contain, attribute wise), and actively unhelpful because you
> only get it in response to a hypercall where the one piece of information you
> cannot provide is the segment selector.
> 
> Fix the message to talk about segment attributes, rather than the selector.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
albeit ...

> --- a/xen/arch/x86/hvm/domain.c
> +++ b/xen/arch/x86/hvm/domain.c
> @@ -39,7 +39,7 @@ static int check_segment(struct segment_register *reg, enum x86_segment seg)
>      {
>          if ( seg != x86_seg_ds && seg != x86_seg_es )
>          {
> -            gprintk(XENLOG_ERR, "Null selector provided for CS, SS or TR\n");
> +            gprintk(XENLOG_ERR, "Empty segment attributes for CS, SS or TR\n");

... may I suggest "Null" or "Zero" instead of "Empty"?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:12:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:12:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2007.5950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJvD-00004y-0b; Fri, 02 Oct 2020 12:12:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2007.5950; Fri, 02 Oct 2020 12:12:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOJvC-00004r-TK; Fri, 02 Oct 2020 12:12:22 +0000
Received: by outflank-mailman (input) for mailman id 2007;
 Fri, 02 Oct 2020 12:12:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOJvA-0008WQ-VX
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:12:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b17feca-028a-4d5d-9935-fb823db06c4d;
 Fri, 02 Oct 2020 12:12:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 11793B3E7;
 Fri,  2 Oct 2020 12:12:19 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kOJvA-0008WQ-VX
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:12:21 +0000
X-Inumbo-ID: 0b17feca-028a-4d5d-9935-fb823db06c4d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0b17feca-028a-4d5d-9935-fb823db06c4d;
	Fri, 02 Oct 2020 12:12:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601640739;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9hl3nd521JQPp97b9xvIAPpFbab/f8AzmjO/a/iZmWA=;
	b=aAYZgnyxulPkJdZy44OygSw59IafJTrP/8M3HYo+pa+geTaUVCTOHXD9ZqyT3fsl+rof7j
	h8gkz1WK6Cr3DecBNV/Z9575hAfLOQEnD1osovz9kFITE4E2vCqYPyGKFbjX5PW52v+D1V
	CkbCeaePRIOyKmUHvdxECw0177B5WDc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 11793B3E7;
	Fri,  2 Oct 2020 12:12:19 +0000 (UTC)
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
To: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
Date: Fri, 2 Oct 2020 14:12:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.10.2020 12:42, Bertrand Marquis wrote:
> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
> 
> This removes the dependency on the xen subdirectory, preventing use of a
> wrong configuration file when the xen subdirectory is duplicated for
> compilation tests.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

(but more for the slight tidying than the purpose you name)

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:22:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:22:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2012.5962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK4P-000139-UY; Fri, 02 Oct 2020 12:21:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2012.5962; Fri, 02 Oct 2020 12:21:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK4P-000132-Qp; Fri, 02 Oct 2020 12:21:53 +0000
Received: by outflank-mailman (input) for mailman id 2012;
 Fri, 02 Oct 2020 12:21:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2MsM=DJ=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
 id 1kOK4N-00012x-VJ
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:21:52 +0000
Received: from mail-oi1-x243.google.com (unknown [2607:f8b0:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7506339-ec27-4045-9644-be2dc1e62c52;
 Fri, 02 Oct 2020 12:21:50 +0000 (UTC)
Received: by mail-oi1-x243.google.com with SMTP id 26so1024622ois.5
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 05:21:50 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2MsM=DJ=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
	id 1kOK4N-00012x-VJ
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:21:52 +0000
X-Inumbo-ID: c7506339-ec27-4045-9644-be2dc1e62c52
Received: from mail-oi1-x243.google.com (unknown [2607:f8b0:4864:20::243])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c7506339-ec27-4045-9644-be2dc1e62c52;
	Fri, 02 Oct 2020 12:21:50 +0000 (UTC)
Received: by mail-oi1-x243.google.com with SMTP id 26so1024622ois.5
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 05:21:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=nq01YQB/Ez2vk6AHKtzJbl9d7KNOzLFVdp79wE51CO8=;
        b=J12pA5IJOFyOzjsDJPKdd6yyPZIZfVAEncJfiqCeIoXA7GJuyjOTP3zk36pG4sXCDo
         BOHRSyus9vk6/D/9OKVWSYZ/rY3ITRY7ycjHPZy2rk4H1c6WIH6oGLrJESrODULrP2i2
         GziR9uC9x/6das0oQJVqKJFpyGG1FXcoaBc90=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=nq01YQB/Ez2vk6AHKtzJbl9d7KNOzLFVdp79wE51CO8=;
        b=E3AC+oTvGgRMKqFtoRP6bQkShXVInSwCranDtdyZ2yNE514/BiT79m83nnhgOdcFLh
         51RJiQdA+ss+l0gcWg2xt+3gcy9CT2Mcw5P6M37K2zer3dJAiYc7MpLRALH09oAt4sKw
         UP2BZbsJInQxZcZPBU+2QTyDQ9hRAQZYnTsoo4BHdUzZMM1VfECYZZ7tHfX3f0J7ZAsl
         Yx6fr+wZh9nk2G0I0oMLGtZmkqK+qiEdnf2JYJE6GJSatUDUFII00uCz10pVXCX2DDwK
         0EgJ0KUl4r0s7JiMmJDFWJFaKjz0SV71Do6sd+zIwa/qCjomw7ub8zFyqIB0m8VsZV5G
         ISEQ==
X-Gm-Message-State: AOAM530MB52oT9lo3KXVNpsQ+dsNhLBRMSacZeqydzpKfjSlWEG7SjKX
	+bBGT35ydsjdhE70H++Vu6p9Tt8h9sQngybOopwRJg==
X-Google-Smtp-Source: ABdhPJz13UBI/oM9lY3EXtCelg6/RnKqvJM+aoU58aNwPxLtuE8ga2NpAfnpgTIovLJFzgeyQpyPKfJaWsGFjW9eTPc=
X-Received: by 2002:a05:6808:206:: with SMTP id l6mr1062489oie.128.1601641309381;
 Fri, 02 Oct 2020 05:21:49 -0700 (PDT)
MIME-Version: 1.0
References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com> <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local> <f6dcba12-8be8-b867-ac9b-a1ba50567fca@gmail.com>
In-Reply-To: <f6dcba12-8be8-b867-ac9b-a1ba50567fca@gmail.com>
From: Daniel Vetter <daniel@ffwll.ch>
Date: Fri, 2 Oct 2020 14:21:38 +0200
Message-ID: <CAKMK7uHMU9X_V_gHmnVB=Jabb_p-01MQcQ4bZAnN1Sk1JMqkKg@mail.gmail.com>
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
To: =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>
Cc: Luben Tuikov <luben.tuikov@amd.com>, =?UTF-8?Q?Heiko_St=C3=BCbner?= <heiko@sntech.de>, 
	Dave Airlie <airlied@linux.ie>, Nouveau Dev <nouveau@lists.freedesktop.org>, 
	Linus Walleij <linus.walleij@linaro.org>, dri-devel <dri-devel@lists.freedesktop.org>, 
	"Wilson, Chris" <chris@chris-wilson.co.uk>, Melissa Wen <melissa.srw@gmail.com>, 
	"Anholt, Eric" <eric@anholt.net>, Huang Rui <ray.huang@amd.com>, Gerd Hoffmann <kraxel@redhat.com>, 
	Sam Ravnborg <sam@ravnborg.org>, Sumit Semwal <sumit.semwal@linaro.org>, 
	Emil Velikov <emil.velikov@collabora.com>, Rob Herring <robh@kernel.org>, 
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>, 
	Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Krzysztof Kozlowski <krzk@kernel.org>, 
	Steven Price <steven.price@arm.com>, 
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>, Kukjin Kim <kgene@kernel.org>, 
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>, 
	Russell King <linux+etnaviv@armlinux.org.uk>, 
	"open list:DRM DRIVER FOR QXL VIRTUAL GPU" <spice-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>, 
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, 
	The etnaviv authors <etnaviv@lists.freedesktop.org>, Maxime Ripard <mripard@kernel.org>, 
	Inki Dae <inki.dae@samsung.com>, Hans de Goede <hdegoede@redhat.com>, 
	Christian Gmeiner <christian.gmeiner@gmail.com>, 
	"moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>, 
	"open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>, Sean Paul <sean@poorly.run>, 
	apaneers@amd.com, Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	"moderated list:DMA BUFFER SHARING FRAMEWORK" <linaro-mm-sig@lists.linaro.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, Tomeu Vizoso <tomeu.vizoso@collabora.com>, 
	Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>, 
	Kyungmin Park <kyungmin.park@samsung.com>, Qinglang Miao <miaoqinglang@huawei.com>, 
	Qiang Yu <yuq825@gmail.com>, Thomas Zimmermann <tzimmermann@suse.de>, 
	Alex Deucher <alexander.deucher@amd.com>, 
	"open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>, Lucas Stach <l.stach@pengutronix.de>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, Oct 2, 2020 at 1:30 PM Christian König
<ckoenig.leichtzumerken@gmail.com> wrote:
>
> On 02.10.20 at 11:58, Daniel Vetter wrote:
> > On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
> >> On Wed, Sep 30, 2020 at 2:34 PM Christian König
> >> <christian.koenig@amd.com> wrote:
> >>> On 30.09.20 at 11:47, Daniel Vetter wrote:
> >>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> >>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
> >>>>>> Hi
> >>>>>>
> >>>>>> On 30.09.20 at 10:05, Christian König wrote:
> >>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
> >>>>>>>> Hi Christian
> >>>>>>>>
> >>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
> >>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
> >>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location
> >>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> >>>>>>>>>> with these values. Helpful for TTM-based drivers.
> >>>>>>>>> We could completely drop that if we use the same structure inside
> >>>>>>>>> TTM as well.
> >>>>>>>>>
> >>>>>>>>> In addition to that, which driver is going to use this?
> >>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> >>>>>>>> retrieve the pointer via this function.
> >>>>>>>>
> >>>>>>>> I do want to see all that being more tightly integrated into TTM, but
> >>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
> >>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> >>>>>>> I should have asked which driver you try to fix here :)
> >>>>>>>
> >>>>>>> In this case just keep the function inside bochs and only fix it there.
> >>>>>>>
> >>>>>>> All other drivers can be fixed when we generally pump this through TTM.
> >>>>>> Did you take a look at patch 3? This function will be used by VRAM
> >>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> >>>>>> have to duplicate the functionality in each of these drivers. Bochs
> >>>>>> itself uses VRAM helpers and doesn't touch the function directly.
> >>>>> Ah, ok can we have that then only in the VRAM helpers?
> >>>>>
> >>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> >>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> >>>>>
> >>>>> What I want to avoid is to have another conversion function in TTM because
> >>>>> what happens here is that we already convert from ttm_bus_placement to
> >>>>> ttm_bo_kmap_obj and then to dma_buf_map.
> >>>> Hm I'm not really seeing how that helps with a gradual conversion of
> >>>> everything over to dma_buf_map and assorted helpers for access? There's
> >>>> too many places in ttm drivers where is_iomem and related stuff is used to
> >>>> be able to convert it all in one go. An intermediate state with a bunch of
> >>>> conversions seems fairly unavoidable to me.
> >>> Fair enough. I would just have started bottom up and not top down.
> >>>
> >>> Anyway feel free to go ahead with this approach as long as we can remove
> >>> the new function again when we clean that stuff up for good.
> >> Yeah I guess bottom up would make more sense as a refactoring. But the
> >> main motivation to land this here is to fix the __mmio vs normal
> >> memory confusion in the fbdev emulation helpers for sparc (and
> >> anything else that needs this). Hence the top down approach for
> >> rolling this out.
> > Ok I started reviewing this a bit more in-depth, and I think this is a bit
> > too much of a detour.
> >
> > Looking through all the callers of ttm_bo_kmap almost everyone maps the
> > entire object. Only vmwgfx uses it to map less than that. Also, everyone just
> > immediately follows up with converting that full object map into a
> > pointer.
> >
> > So I think what we really want here is:
> > - new function
> >
> > int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> >
> >    _vmap name since that's consistent with both dma_buf functions and
> >    what's usually used to implement this. Outside of the ttm world kmap
> >    usually just means single-page mappings using kmap() or its iomem
> >    sibling io_mapping_map*, so rather confusing name for a function which
> >    usually is just used to set up a vmap of the entire buffer.
> >
> > - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
> >    functions for all ttm drivers. We should be able to make this fully
> >    generic because a) we now have dma_buf_map and b) drm_gem_object is
> >    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
> >    and gem driver.
> >
> >    This is maybe a good follow-up, since it should allow us to ditch quite
> >    a bit of the vram helper code for this more generic stuff. I also might
> >    have missed some special-cases here, but from a quick look everything
> >    just pins the buffer to the current location and that's it.
> >
> >    Also this obviously requires Christian's generic ttm_bo_pin rework
> >    first.
> >
> > - roll the above out to drivers.
> >
> > Christian/Thomas, thoughts on this?
>
> Calling this vmap instead of kmap certainly makes sense.
>
> Not 100% sure about the generic helpers, but it sounds like this should
> indeed look rather clean in the end.

Yeah generic helper is probably better left for a later step, after
we've rolled ttm_bo_vmap out everywhere.
-Daniel

>
> Christian.
>
> >
> > I think for the immediate need of rolling this out for vram helpers and
> > fbdev code we should be able to do this, but just postpone the driver wide
> > roll-out for now.
> >
> > Cheers, Daniel
> >
> >> -Daniel
> >>
> >>> Christian.
> >>>
> >>>> -Daniel
> >>>>
> >>>>> Thanks,
> >>>>> Christian.
> >>>>>
> >>>>>> Best regards
> >>>>>> Thomas
> >>>>>>
> >>>>>>> Regards,
> >>>>>>> Christian.
> >>>>>>>
> >>>>>>>> Best regards
> >>>>>>>> Thomas
> >>>>>>>>
> >>>>>>>>> Regards,
> >>>>>>>>> Christian.
> >>>>>>>>>
> >>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >>>>>>>>>> ---
> >>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> >>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> >>>>>>>>>>      2 files changed, 44 insertions(+)
> >>>>>>>>>>
> >>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
> >>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> @@ -34,6 +34,7 @@
> >>>>>>>>>>      #include <drm/drm_gem.h>
> >>>>>>>>>>      #include <drm/drm_hashtab.h>
> >>>>>>>>>>      #include <drm/drm_vma_manager.h>
> >>>>>>>>>> +#include <linux/dma-buf-map.h>
> >>>>>>>>>>      #include <linux/kref.h>
> >>>>>>>>>>      #include <linux/list.h>
> >>>>>>>>>>      #include <linux/wait.h>
> >>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct ttm_bo_kmap_obj *map,
> >>>>>>>>>>          return map->virtual;
> >>>>>>>>>>      }
> >>>>>>>>>>      +/**
> >>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> >>>>>>>>>> + *
> >>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> >>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> >>>>>>>>>> + *
> >>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> >>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> >>>>>>>>>> + */
> >>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj *kmap,
> >>>>>>>>>> +                           struct dma_buf_map *map)
> >>>>>>>>>> +{
> >>>>>>>>>> +    bool is_iomem;
> >>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> >>>>>>>>>> +
> >>>>>>>>>> +    if (!vaddr)
> >>>>>>>>>> +        dma_buf_map_clear(map);
> >>>>>>>>>> +    else if (is_iomem)
> >>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> >>>>>>>>>> +    else
> >>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>>      /**
> >>>>>>>>>>       * ttm_bo_kmap
> >>>>>>>>>>       *
> >>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> >>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> >>>>>>>>>> --- a/include/linux/dma-buf-map.h
> >>>>>>>>>> +++ b/include/linux/dma-buf-map.h
> >>>>>>>>>> @@ -45,6 +45,12 @@
> >>>>>>>>>>       *
> >>>>>>>>>>       *    dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> >>>>>>>>>>       *
> >>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> >>>>>>>>>> + *
> >>>>>>>>>> + * .. code-block:: c
> >>>>>>>>>> + *
> >>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> >>>>>>>>>> + *
> >>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
> >>>>>>>>>>       * dma_buf_map_is_null().
> >>>>>>>>>>       *
> >>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> >>>>>>>>>>          map->is_iomem = false;
> >>>>>>>>>>      }
> >>>>>>>>>>      +/**
> >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> >>>>>>>>>> + * @map:        The dma-buf mapping structure
> >>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
> >>>>>>>>>> + *
> >>>>>>>>>> + * Sets the address and the I/O-memory flag.
> >>>>>>>>>> + */
> >>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> >>>>>>>>>> +                           void __iomem *vaddr_iomem)
> >>>>>>>>>> +{
> >>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> >>>>>>>>>> +    map->is_iomem = true;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>>      /**
> >>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> >>>>>>>>>>       * @lhs:    The dma-buf mapping structure
> >>>>>>>>> _______________________________________________
> >>>>>>>>> dri-devel mailing list
> >>>>>>>>> dri-devel@lists.freedesktop.org
> >>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>>>>>>> _______________________________________________
> >>>>>>>> amd-gfx mailing list
> >>>>>>>> amd-gfx@lists.freedesktop.org
> >>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> http://blog.ffwll.ch
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:22:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:22:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2013.5974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK4y-000183-9W; Fri, 02 Oct 2020 12:22:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2013.5974; Fri, 02 Oct 2020 12:22:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK4y-00017w-4Y; Fri, 02 Oct 2020 12:22:28 +0000
Received: by outflank-mailman (input) for mailman id 2013;
 Fri, 02 Oct 2020 12:22:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK4w-00017n-Cu
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:27 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c2d6583c-de31-4dd0-b795-842e1da97e38;
 Fri, 02 Oct 2020 12:22:24 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4c-0003K8-4x; Fri, 02 Oct 2020 12:22:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK4w-00017n-Cu
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:27 +0000
X-Inumbo-ID: c2d6583c-de31-4dd0-b795-842e1da97e38
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c2d6583c-de31-4dd0-b795-842e1da97e38;
	Fri, 02 Oct 2020 12:22:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=MTQ4vzPn+TUgUKKcoWUoAB8fcnSJCckabiOfgd7ojV4=; b=u+R89293VKjrL950PesPnUDIDe
	Xgc19Gn/aakqGvWaf4OfcNjY+u4Rg5jRhl8xog5FaXEtduHCxwy7x81D97JMWgEx0DpXFyn6Nkoge
	qMeAHTiEugkNO43SVoVkXGxSQcW9CJRLuexYC/zmXRysmDKTlkzsYuqt6uTWHc/LCKpHvQHAGG4ry
	9GmXzBAEH0N3vvw3bXPJrQ5YGZ4/zWMewOsOlwyZLOZCljy/cw7bfRH4n9HSRYjkXChFlLUhSXmrl
	hOFwMnVf9EAUA8EtaEogxdb5y3fDWHiEAxsHulGIoXiirixmt0EolobjUTME1kfxKY53hjeOYZ6Al
	SumZaBZw==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4c-0003K8-4x; Fri, 02 Oct 2020 12:22:06 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: remove alloc_vm_area v4
Date: Fri,  2 Oct 2020 14:21:53 +0200
Message-Id: <20201002122204.1534411-1-hch@lst.de>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Hi Andrew,

this series removes alloc_vm_area, which was left over from the big
vmalloc interface rework.  It is a rather arcane interface, basically
the equivalent of get_vm_area + actually faulting in all PTEs in
the allocated area.  It was originally added for Xen (which isn't
modular to start with), and then grew users in zsmalloc and i915,
which seem to mostly qualify as abuses of the interface, especially
for i915 as a random driver should not set up PTE bits directly.

A git tree is also available here:

    git://git.infradead.org/users/hch/misc.git alloc_vm_area

Gitweb:

    http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/alloc_vm_area

Changes since v3:
 - rebased to include the two conflicting i915 changes in drm-tip /
   linux-next again.  No changes outside of drivers/gpu/drm/i915/

Changes since v2:
 - add another missing i initialization
 - rebased to mainline instead of drm-tip again

Changes since v1:
 - fix a bug in the zsmalloc changes
 - fix a bug and rebase to include the recent changes in i915
 - add a new vmap flag that allows to free the page array and pages
   using vfree
 - add a vfree documentation updated from Matthew

Diffstat:
 arch/x86/xen/grant-table.c                |   27 ++++--
 drivers/gpu/drm/i915/Kconfig              |    1 
 drivers/gpu/drm/i915/gem/i915_gem_pages.c |  131 +++++++++++++-----------------
 drivers/gpu/drm/i915/gt/shmem_utils.c     |   76 ++++-------------
 drivers/xen/xenbus/xenbus_client.c        |   30 +++---
 include/linux/vmalloc.h                   |    7 -
 mm/Kconfig                                |    3 
 mm/memory.c                               |   16 ++-
 mm/nommu.c                                |    7 -
 mm/vmalloc.c                              |  123 ++++++++++++++--------------
 mm/zsmalloc.c                             |   10 +-
 11 files changed, 200 insertions(+), 231 deletions(-)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:22:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:22:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2014.5986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK52-0001BZ-K9; Fri, 02 Oct 2020 12:22:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2014.5986; Fri, 02 Oct 2020 12:22:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK52-0001BO-Gc; Fri, 02 Oct 2020 12:22:32 +0000
Received: by outflank-mailman (input) for mailman id 2014;
 Fri, 02 Oct 2020 12:22:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK51-00017n-AC
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:31 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33e35223-29dd-4193-bab3-92d155df5e4b;
 Fri, 02 Oct 2020 12:22:24 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4e-0003KI-Qf; Fri, 02 Oct 2020 12:22:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK51-00017n-AC
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:31 +0000
X-Inumbo-ID: 33e35223-29dd-4193-bab3-92d155df5e4b
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 33e35223-29dd-4193-bab3-92d155df5e4b;
	Fri, 02 Oct 2020 12:22:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=71LvYnHcMs7/rXGs508r2vzZrQvLA4mfS9GwJnz8e8I=; b=Fz/pOMH6+ie/e1OSMd6JkfXPQC
	JC5co7zAbmFu2k8+DrdE5486C8h+YCa1pn1Er8OGy8W+fjduwDJ6SfjCMYntlhgAc7GV7WLJPQnLV
	tzwMDO1lMTe11hfYJrkRsnUwp7CQ8EmZBLhkvWMNMm4+v8ISlzfj3Y20uMh1T6tF4J6ngyYK4FcLe
	qEuFVI944dPgAm4aI1KxiQu6NSETk1qw4RVOc+zM8uSpTHd/IT/VtB2Qxcbes3rwO8SoDqfr16DLT
	+r0rTdtFHGOSLhe4UYcXEzoVUwdRYJOrErYqhu1cWi57CeoX1M2VxX5KVZ/orcBMld06RXibyZiwb
	9onDIZrw==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4e-0003KI-Qf; Fri, 02 Oct 2020 12:22:09 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: [PATCH 02/11] mm: add a VM_MAP_PUT_PAGES flag for vmap
Date: Fri,  2 Oct 2020 14:21:55 +0200
Message-Id: <20201002122204.1534411-3-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Add a flag so that vmap takes ownership of the passed in page array.
When vfree is called on such an allocation it will put one reference
on each page, and free the page array itself.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/vmalloc.h | 1 +
 mm/vmalloc.c            | 9 +++++++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 0221f852a7e1a3..b899681e3ff9f0 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -24,6 +24,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040      /* don't add guard page */
 #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
+#define VM_MAP_PUT_PAGES	0x00000100	/* put pages and free array in vfree */
 
 /*
  * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC.
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 8770260419af06..ffad65f052c3f9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2377,8 +2377,11 @@ EXPORT_SYMBOL(vunmap);
  * @flags: vm_area->flags
  * @prot: page protection for the mapping
  *
- * Maps @count pages from @pages into contiguous kernel virtual
- * space.
+ * Maps @count pages from @pages into contiguous kernel virtual space.
+ * If @flags contains %VM_MAP_PUT_PAGES the ownership of the pages array itself
+ * (which must be kmalloc or vmalloc memory) and one reference per page in it
+ * are transferred from the caller to vmap(), and will be freed / dropped when
+ * vfree() is called on the return value.
  *
  * Return: the address of the area or %NULL on failure
  */
@@ -2404,6 +2407,8 @@ void *vmap(struct page **pages, unsigned int count,
 		return NULL;
 	}
 
+	if (flags & VM_MAP_PUT_PAGES)
+		area->pages = pages;
 	return area->addr;
 }
 EXPORT_SYMBOL(vmap);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:22:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:22:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2015.5997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK57-0001FS-TY; Fri, 02 Oct 2020 12:22:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2015.5997; Fri, 02 Oct 2020 12:22:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK57-0001FL-QG; Fri, 02 Oct 2020 12:22:37 +0000
Received: by outflank-mailman (input) for mailman id 2015;
 Fri, 02 Oct 2020 12:22:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK56-00017n-AK
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:36 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7bda441c-bf52-4162-be6f-03adaa8d9a51;
 Fri, 02 Oct 2020 12:22:25 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4h-0003Kw-HE; Fri, 02 Oct 2020 12:22:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK56-00017n-AK
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:36 +0000
X-Inumbo-ID: 7bda441c-bf52-4162-be6f-03adaa8d9a51
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7bda441c-bf52-4162-be6f-03adaa8d9a51;
	Fri, 02 Oct 2020 12:22:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=/d6+Dk3KKsbjd+NK9IaRJO1MZ62p/5JmeyWovJ8r20c=; b=CwHiRIYoh86oLfT/E7Hp6PlAXn
	orneAaYjAGFE5y7mA+d1X4uG3djweEapj8WoFYECeAJwsEW7kqTUhv/7X6Cxi4VXQqnhsZK6xA4iG
	z0i0gVSNX4dhHQRq1mS97BxZRRMmWtUjIsKZl3cHjILQLLgcM+qyqasKpJ8OkRF+TMmebC6MB2b4q
	aSpnR/qzjMeFBmuZQ3BxBaHDmIP4RRFFvbNmwYunJ5/JDM1a7UjbmMd6OlyGEDVf1vpMpQJrsymWx
	aPexCjXV64HyyK3H2T4hCNQ3yO9v52ScOVGEBxP4zFAnbZbGyzlMIdkPbA8QzzziWEjPWem3uhkF+
	iFNoe1Ug==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4h-0003Kw-HE; Fri, 02 Oct 2020 12:22:11 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: [PATCH 04/11] mm: allow a NULL fn callback in apply_to_page_range
Date: Fri,  2 Oct 2020 14:21:57 +0200
Message-Id: <20201002122204.1534411-5-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Besides calling the callback on each page, apply_to_page_range also has
the effect of pre-faulting all PTEs for the range.  To support callers
that only need the pre-faulting, make the callback optional.

Based on a patch from Minchan Kim <minchan@kernel.org>.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/memory.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fcfc4ca36eba80..dcf2bb69fbf847 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2420,13 +2420,15 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
 
 	arch_enter_lazy_mmu_mode();
 
-	do {
-		if (create || !pte_none(*pte)) {
-			err = fn(pte++, addr, data);
-			if (err)
-				break;
-		}
-	} while (addr += PAGE_SIZE, addr != end);
+	if (fn) {
+		do {
+			if (create || !pte_none(*pte)) {
+				err = fn(pte++, addr, data);
+				if (err)
+					break;
+			}
+		} while (addr += PAGE_SIZE, addr != end);
+	}
 	*mask |= PGTBL_PTE_MODIFIED;
 
 	arch_leave_lazy_mmu_mode();
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:22:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:22:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2016.6009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5D-0001KI-6S; Fri, 02 Oct 2020 12:22:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2016.6009; Fri, 02 Oct 2020 12:22:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5D-0001KA-39; Fri, 02 Oct 2020 12:22:43 +0000
Received: by outflank-mailman (input) for mailman id 2016;
 Fri, 02 Oct 2020 12:22:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5B-00017n-AR
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45dbe476-bfd4-4559-b217-e94ac278b155;
 Fri, 02 Oct 2020 12:22:25 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4d-0003KC-Iy; Fri, 02 Oct 2020 12:22:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5B-00017n-AR
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:41 +0000
X-Inumbo-ID: 45dbe476-bfd4-4559-b217-e94ac278b155
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 45dbe476-bfd4-4559-b217-e94ac278b155;
	Fri, 02 Oct 2020 12:22:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=EtVpkGxd0Kyhqnh/t2FVgPRi015JMv+YUxF5ITQDDnw=; b=YC9YSxhdMUjpAx+qZIqAtDibYf
	Ae57cee8hHW/s/lmclebw/+iy3/ZV0P1PMww16N92quPotDMxPBxb07u6e1KMiM9d2bOULOFjvktb
	edlyZ2ZS9sflTzcALRfF8JhArSlU0gXB0oVeULJ9otfAOLgM8GBmICIrOciBrAuS8iIc/qF+mTOZ4
	lMledsjicdXf60HJF7roTT2KjBQa2XH9zg+ogmvw2nsfbkk2h+DmtNF+0QHQDHWWi+Lg51UeQg8JV
	DX8S8Y22HEu2e2tNQOe0eTRtvQUuJACsJJyEx9C1jlsunp+HbZ+s0jWAyv2M7c+oqiwMIckAYtyvl
	S9PNbd9w==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4d-0003KC-Iy; Fri, 02 Oct 2020 12:22:07 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: [PATCH 01/11] mm: update the documentation for vfree
Date: Fri,  2 Oct 2020 14:21:54 +0200
Message-Id: <20201002122204.1534411-2-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

 * Document that you can call vfree() on an address returned from vmap()
 * Remove the note about the minimum size -- the minimum size of a vmalloc
   allocation is one page
 * Add a Context: section
 * Fix capitalisation
 * Reword the prohibition on calling from NMI context to avoid a double
   negative

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/vmalloc.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index be4724b916b3e7..8770260419af06 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2321,20 +2321,21 @@ static void __vfree(const void *addr)
 }
 
 /**
- * vfree - release memory allocated by vmalloc()
- * @addr:  memory base address
+ * vfree - Release memory allocated by vmalloc()
+ * @addr:  Memory base address
  *
- * Free the virtually continuous memory area starting at @addr, as
- * obtained from vmalloc(), vmalloc_32() or __vmalloc(). If @addr is
- * NULL, no operation is performed.
+ * Free the virtually continuous memory area starting at @addr, as obtained
+ * from one of the vmalloc() family of APIs.  This will usually also free the
+ * physical memory underlying the virtual allocation, but that memory is
+ * reference counted, so it will not be freed until the last user goes away.
  *
- * Must not be called in NMI context (strictly speaking, only if we don't
- * have CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG, but making the calling
- * conventions for vfree() arch-depenedent would be a really bad idea)
+ * If @addr is NULL, no operation is performed.
  *
+ * Context:
  * May sleep if called *not* from interrupt context.
- *
- * NOTE: assumes that the object at @addr has a size >= sizeof(llist_node)
+ * Must not be called in NMI context (strictly speaking, it could be
+ * if we have CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG, but making the calling
+ * conventions for vfree() arch-dependent would be a really bad idea).
  */
 void vfree(const void *addr)
 {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:22:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:22:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2017.6021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5H-0001PA-LR; Fri, 02 Oct 2020 12:22:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2017.6021; Fri, 02 Oct 2020 12:22:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5H-0001P0-HU; Fri, 02 Oct 2020 12:22:47 +0000
Received: by outflank-mailman (input) for mailman id 2017;
 Fri, 02 Oct 2020 12:22:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5G-00017n-Ae
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:46 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aae1feaa-e1bd-4984-8c6f-e553fa319250;
 Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4l-0003Lt-Ax; Fri, 02 Oct 2020 12:22:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5G-00017n-Ae
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:46 +0000
X-Inumbo-ID: aae1feaa-e1bd-4984-8c6f-e553fa319250
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id aae1feaa-e1bd-4984-8c6f-e553fa319250;
	Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=PSl4e4bvc0UrKwYaparRx8nOFITZgZOLYb17gUiAPo4=; b=X1Hf2bS6r1+v79yUY1wuVYC3Gq
	MDZqVWuDYZIWaD+3Zo3tFXAHdjNllVDLCvouVYbnYiYiXGQ1VMywtp0KVLqWzFw02dYW0sL98F1By
	IaoTnx4rLl+ffDuiMESwxTkw6+rvS6LQPkgODQZJYZ+judGYGe4IYI4IRSyuGRNM0L6mNqjh1X8Kq
	TtgiXyGM6+W2jZDvV/4Ky5QmllYMo81DYxgnngX4OU0qbM/sHtVWe2BAT+k5Otq1Xp/wS3yDw8Yz+
	mG7pmSLrSYvwpUf06azhbpWXvpjo2QwON7nRAHpYbHal6VH5+vW1Zj+MUB5KqTriVc7BOpXyqBaOt
	Y7hhMztw==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4l-0003Lt-Ax; Fri, 02 Oct 2020 12:22:15 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org,
	Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Subject: [PATCH 07/11] drm/i915: stop using kmap in i915_gem_object_map
Date: Fri,  2 Oct 2020 14:22:00 +0200
Message-Id: <20201002122204.1534411-8-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

kmap for !PageHighMem pages is just a convoluted way of calling page_address,
and kunmap is a no-op in that case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gem/i915_gem_pages.c | 7 ++-----
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index d6eeefab3d018b..6550c0bc824ea2 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -162,8 +162,6 @@ static void unmap_object(struct drm_i915_gem_object *obj, void *ptr)
 {
 	if (is_vmalloc_addr(ptr))
 		vunmap(ptr);
-	else
-		kunmap(kmap_to_page(ptr));
 }
 
 struct sg_table *
@@ -277,11 +275,10 @@ static void *i915_gem_object_map(struct drm_i915_gem_object *obj,
 		 * forever.
 		 *
 		 * So if the page is beyond the 32b boundary, make an explicit
-		 * vmap. On 64b, this check will be optimised away as we can
-		 * directly kmap any page on the system.
+		 * vmap.
 		 */
 		if (!PageHighMem(page))
-			return kmap(page);
+			return page_address(page);
 	}
 
 	mem = stack;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:22:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:22:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2018.6034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5N-0001Uz-0i; Fri, 02 Oct 2020 12:22:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2018.6034; Fri, 02 Oct 2020 12:22:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5M-0001Up-Sz; Fri, 02 Oct 2020 12:22:52 +0000
Received: by outflank-mailman (input) for mailman id 2018;
 Fri, 02 Oct 2020 12:22:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5L-00017n-Ac
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:51 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2104f18a-e603-4c19-991c-da1cb83a157d;
 Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4i-0003LQ-O5; Fri, 02 Oct 2020 12:22:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5L-00017n-Ac
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:51 +0000
X-Inumbo-ID: 2104f18a-e603-4c19-991c-da1cb83a157d
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2104f18a-e603-4c19-991c-da1cb83a157d;
	Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=ggovDsPrY2tPPXwEEhp2BvATO0PndCnAzmG0yva7UFo=; b=Tj33JCJUX19+gC9K2zBUyM8lNn
	7XlD88P+JGNEKhd/mdMC9AiMPqnZSBlOBX690hs6gCM3ciOXom3s2u8Ke0OjaLx3YinIgLt7mImm8
	UmjTvrKNrSJYGEVTht5vkZgfbNFTsKzLIt4H4yJRQkSZ3GDvS1aeS1MkChysP6OZqNpza2Wl8qG1y
	02oWiQAil8ZxcXE/BzMpRO3LwlTESNaZROOSlh6z+eO2Wj/NYpc0NqRPw6TMjrzU0yqzYvy1/oCBt
	SUQqJ/8eIxAUEAVMv2l1zNgwum1Gn/b4cxLdXsSFWojo5hZJcx/OEXTBbmeKA2LtfYkYsVcdXDxh/
	FCkcqWfA==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4i-0003LQ-O5; Fri, 02 Oct 2020 12:22:13 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: [PATCH 05/11] zsmalloc: switch from alloc_vm_area to get_vm_area
Date: Fri,  2 Oct 2020 14:21:58 +0200
Message-Id: <20201002122204.1534411-6-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Just manually pre-fault the PTEs using apply_to_page_range.

Co-developed-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/zsmalloc.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index c36fdff9a37131..918c7b019b3d78 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1122,10 +1122,16 @@ static inline int __zs_cpu_up(struct mapping_area *area)
 	 */
 	if (area->vm)
 		return 0;
-	area->vm = alloc_vm_area(PAGE_SIZE * 2, NULL);
+	area->vm = get_vm_area(PAGE_SIZE * 2, 0);
 	if (!area->vm)
 		return -ENOMEM;
-	return 0;
+
+	/*
+	 * Populate PTEs in advance to avoid PTE allocation with GFP_KERNEL
+	 * in the non-preemptible context of zs_map_object.
+	 */
+	return apply_to_page_range(&init_mm, (unsigned long)area->vm->addr,
+			PAGE_SIZE * 2, NULL, NULL);
 }
 
 static inline void __zs_cpu_down(struct mapping_area *area)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:22:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:22:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2019.6046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5S-0001b9-Bc; Fri, 02 Oct 2020 12:22:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2019.6046; Fri, 02 Oct 2020 12:22:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5S-0001b2-7u; Fri, 02 Oct 2020 12:22:58 +0000
Received: by outflank-mailman (input) for mailman id 2019;
 Fri, 02 Oct 2020 12:22:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5Q-00017n-Ai
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:56 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ba6aeb5-b50e-4b94-8716-650ff2962d61;
 Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4m-0003Mg-NL; Fri, 02 Oct 2020 12:22:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5Q-00017n-Ai
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:22:56 +0000
X-Inumbo-ID: 7ba6aeb5-b50e-4b94-8716-650ff2962d61
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7ba6aeb5-b50e-4b94-8716-650ff2962d61;
	Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=CnZvH3y9ZKFn7BHhh1hHnXEglNnxRMJ6++dbgBNxsaA=; b=wEYR4zDtkU0e31wuybXa3t0si8
	j25UL7GpUbh11fVdTW3LoaXZAfP7qiqyxjE1RXpcuEO3HhcCtrlYAsckRXMdnShsOU3d8gmcR55q2
	L0NsLiVx8xjjQEFoMQ4uOQuvnBkkdz+AJLpVrLFfe4NsPWdAhG0EflGapb/ePS0/weV2DpOroCkK6
	qqV061i4RwJrGFi3Vn9Sg4HFKTM0nTGaZY8lLbpTSqEg8khPRcpBTR4rss8tfbAhdoSwU0xlMxSw9
	IepKlrjkujAD2CsUecn6BZMcSxGImtaoWTD7tU8kS4Oi8//3yuVxsxjTYWMoF63tWc/2dJxYc5Nc5
	+R6tfRMQ==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4m-0003Mg-NL; Fri, 02 Oct 2020 12:22:17 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org,
	Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Subject: [PATCH 08/11] drm/i915: use vmap in i915_gem_object_map
Date: Fri,  2 Oct 2020 14:22:01 +0200
Message-Id: <20201002122204.1534411-9-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

i915_gem_object_map implements fairly low-level vmap functionality in
a driver.  Split it into two helpers, one for remapping kernel memory
which can use vmap, and one for I/O memory that uses vmap_pfn.

The only practical difference is that alloc_vm_area prefaults the
vmalloc area PTEs, which doesn't seem to be required here for the
kernel memory case (and could be added to vmap using a flag if actually
required).

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/Kconfig              |   1 +
 drivers/gpu/drm/i915/gem/i915_gem_pages.c | 127 ++++++++++------------
 2 files changed, 60 insertions(+), 68 deletions(-)

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index 9afa5c4a6bf006..1e1cb245fca778 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -25,6 +25,7 @@ config DRM_I915
 	select CRC32
 	select SND_HDA_I915 if SND_HDA_CORE
 	select CEC_CORE if CEC_NOTIFIER
+	select VMAP_PFN
 	help
 	  Choose this option if you have a system that has "Intel Graphics
 	  Media Accelerator" or "HD Graphics" integrated graphics,
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_pages.c b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
index 6550c0bc824ea2..f60ca6dc911f29 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_pages.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_pages.c
@@ -232,34 +232,21 @@ int __i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
 	return err;
 }
 
-static inline pte_t iomap_pte(resource_size_t base,
-			      dma_addr_t offset,
-			      pgprot_t prot)
-{
-	return pte_mkspecial(pfn_pte((base + offset) >> PAGE_SHIFT, prot));
-}
-
 /* The 'mapping' part of i915_gem_object_pin_map() below */
-static void *i915_gem_object_map(struct drm_i915_gem_object *obj,
-				 enum i915_map_type type)
+static void *i915_gem_object_map_page(struct drm_i915_gem_object *obj,
+		enum i915_map_type type)
 {
-	unsigned long n_pte = obj->base.size >> PAGE_SHIFT;
-	struct sg_table *sgt = obj->mm.pages;
-	pte_t *stack[32], **mem;
-	struct vm_struct *area;
+	unsigned long n_pages = obj->base.size >> PAGE_SHIFT, i;
+	struct page *stack[32], **pages = stack, *page;
+	struct sgt_iter iter;
 	pgprot_t pgprot;
+	void *vaddr;
 
-	if (!i915_gem_object_has_struct_page(obj) && type != I915_MAP_WC)
-		return NULL;
-
-	if (GEM_WARN_ON(type == I915_MAP_WC &&
-			!static_cpu_has(X86_FEATURE_PAT)))
-		return NULL;
-
-	/* A single page can always be kmapped */
-	if (n_pte == 1 && type == I915_MAP_WB) {
-		struct page *page = sg_page(sgt->sgl);
-
+	switch (type) {
+	default:
+		MISSING_CASE(type);
+		fallthrough;	/* to use PAGE_KERNEL anyway */
+	case I915_MAP_WB:
 		/*
 		 * On 32b, highmem using a finite set of indirect PTE (i.e.
 		 * vmap) to provide virtual mappings of the high pages.
@@ -277,30 +264,8 @@ static void *i915_gem_object_map(struct drm_i915_gem_object *obj,
 		 * So if the page is beyond the 32b boundary, make an explicit
 		 * vmap.
 		 */
-		if (!PageHighMem(page))
-			return page_address(page);
-	}
-
-	mem = stack;
-	if (n_pte > ARRAY_SIZE(stack)) {
-		/* Too big for stack -- allocate temporary array instead */
-		mem = kvmalloc_array(n_pte, sizeof(*mem), GFP_KERNEL);
-		if (!mem)
-			return NULL;
-	}
-
-	area = alloc_vm_area(obj->base.size, mem);
-	if (!area) {
-		if (mem != stack)
-			kvfree(mem);
-		return NULL;
-	}
-
-	switch (type) {
-	default:
-		MISSING_CASE(type);
-		fallthrough;	/* to use PAGE_KERNEL anyway */
-	case I915_MAP_WB:
+		if (n_pages == 1 && !PageHighMem(sg_page(obj->mm.pages->sgl)))
+			return page_address(sg_page(obj->mm.pages->sgl));
 		pgprot = PAGE_KERNEL;
 		break;
 	case I915_MAP_WC:
@@ -308,30 +273,50 @@ static void *i915_gem_object_map(struct drm_i915_gem_object *obj,
 		break;
 	}
 
-	if (i915_gem_object_has_struct_page(obj)) {
-		struct sgt_iter iter;
-		struct page *page;
-		pte_t **ptes = mem;
+	if (n_pages > ARRAY_SIZE(stack)) {
+		/* Too big for stack -- allocate temporary array instead */
+		pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
+		if (!pages)
+			return NULL;
+	}
 
-		for_each_sgt_page(page, iter, sgt)
-			**ptes++ = mk_pte(page, pgprot);
-	} else {
-		resource_size_t iomap;
-		struct sgt_iter iter;
-		pte_t **ptes = mem;
-		dma_addr_t addr;
+	i = 0;
+	for_each_sgt_page(page, iter, obj->mm.pages)
+		pages[i++] = page;
+	vaddr = vmap(pages, n_pages, 0, pgprot);
+	if (pages != stack)
+		kvfree(pages);
+	return vaddr;
+}
 
-		iomap = obj->mm.region->iomap.base;
-		iomap -= obj->mm.region->region.start;
+static void *i915_gem_object_map_pfn(struct drm_i915_gem_object *obj,
+		enum i915_map_type type)
+{
+	resource_size_t iomap = obj->mm.region->iomap.base -
+		obj->mm.region->region.start;
+	unsigned long n_pfn = obj->base.size >> PAGE_SHIFT;
+	unsigned long stack[32], *pfns = stack, i;
+	struct sgt_iter iter;
+	dma_addr_t addr;
+	void *vaddr;
+
+	if (type != I915_MAP_WC)
+		return NULL;
 
-		for_each_sgt_daddr(addr, iter, sgt)
-			**ptes++ = iomap_pte(iomap, addr, pgprot);
+	if (n_pfn > ARRAY_SIZE(stack)) {
+		/* Too big for stack -- allocate temporary array instead */
+		pfns = kvmalloc_array(n_pfn, sizeof(*pfns), GFP_KERNEL);
+		if (!pfns)
+			return NULL;
 	}
 
-	if (mem != stack)
-		kvfree(mem);
-
-	return area->addr;
+	i = 0;
+	for_each_sgt_daddr(addr, iter, obj->mm.pages)
+		pfns[i++] = (iomap + addr) >> PAGE_SHIFT;
+	vaddr = vmap_pfn(pfns, n_pfn, pgprot_writecombine(PAGE_KERNEL_IO));
+	if (pfns != stack)
+		kvfree(pfns);
+	return vaddr;
 }
 
 /* get, pin, and map the pages of the object into kernel space */
@@ -383,7 +368,13 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
 	}
 
 	if (!ptr) {
-		ptr = i915_gem_object_map(obj, type);
+		if (GEM_WARN_ON(type == I915_MAP_WC &&
+				!static_cpu_has(X86_FEATURE_PAT)))
+			ptr = NULL;
+		else if (i915_gem_object_has_struct_page(obj))
+			ptr = i915_gem_object_map_page(obj, type);
+		else
+			ptr = i915_gem_object_map_pfn(obj, type);
 		if (!ptr) {
 			err = -ENOMEM;
 			goto err_unpin;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:23:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:23:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2020.6058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5W-0001h2-UW; Fri, 02 Oct 2020 12:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2020.6058; Fri, 02 Oct 2020 12:23:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5W-0001gn-RA; Fri, 02 Oct 2020 12:23:02 +0000
Received: by outflank-mailman (input) for mailman id 2020;
 Fri, 02 Oct 2020 12:23:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5V-00017n-At
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:01 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 179dfc9a-3d16-425d-9d37-207ea1f972f4;
 Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4k-0003Lh-1o; Fri, 02 Oct 2020 12:22:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5V-00017n-At
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:01 +0000
X-Inumbo-ID: 179dfc9a-3d16-425d-9d37-207ea1f972f4
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 179dfc9a-3d16-425d-9d37-207ea1f972f4;
	Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=FBifTnPA+3ODKD/Pp+vW9/Ge5GD+C3AXZd2dc0iyUCM=; b=riy56kysOqQh/DlF7h2scz8Jav
	gYCGSV9Rki8rEaoHZeb0dw7qkEwaXSa+0I7OSnnE/Znap8oXVn6s8jjDKj3VD413tpKR7jV94w4Vv
	wiJyRh3kWoEK0eiAOTZMsMcDCgSl562tViz0ZtRC9AGolzXvuqMB9he9OPqNj4mLVentg5PxKKhEs
	ka0sldyW2JLFmlL3TGfidtPEmKqjQMKOKf2rLW9mjfed47D8kwh/cBml+fvSLs8iyakhkadcfWTJd
	+eagdYTFhGw/m3lNBTCXmCo0dPSXkOIuHmGz+Qa9Sj4iV4tBdMCAQGIcAzH8QtTWayFDdGiPSv42L
	JsKV5Z9g==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4k-0003Lh-1o; Fri, 02 Oct 2020 12:22:14 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org,
	Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Subject: [PATCH 06/11] drm/i915: use vmap in shmem_pin_map
Date: Fri,  2 Oct 2020 14:21:59 +0200
Message-Id: <20201002122204.1534411-7-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

shmem_pin_map somewhat awkwardly reimplements vmap using
alloc_vm_area and manual pte setup.  The only practical difference
is that alloc_vm_area prefaults the vmalloc area PTEs, which doesn't
seem to be required here (and could be added to vmap using a flag if
actually required).  Switch to use vmap, and use vfree to free both the
vmalloc mapping and the page array, as well as dropping the references
to each page.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 drivers/gpu/drm/i915/gt/shmem_utils.c | 76 +++++++--------------------
 1 file changed, 18 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index 43c7acbdc79dea..f011ea42487e11 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -49,80 +49,40 @@ struct file *shmem_create_from_object(struct drm_i915_gem_object *obj)
 	return file;
 }
 
-static size_t shmem_npte(struct file *file)
-{
-	return file->f_mapping->host->i_size >> PAGE_SHIFT;
-}
-
-static void __shmem_unpin_map(struct file *file, void *ptr, size_t n_pte)
-{
-	unsigned long pfn;
-
-	vunmap(ptr);
-
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (!WARN_ON(IS_ERR(page))) {
-			put_page(page);
-			put_page(page);
-		}
-	}
-}
-
 void *shmem_pin_map(struct file *file)
 {
-	const size_t n_pte = shmem_npte(file);
-	pte_t *stack[32], **ptes, **mem;
-	struct vm_struct *area;
-	unsigned long pfn;
-
-	mem = stack;
-	if (n_pte > ARRAY_SIZE(stack)) {
-		mem = kvmalloc_array(n_pte, sizeof(*mem), GFP_KERNEL);
-		if (!mem)
-			return NULL;
-	}
+	struct page **pages;
+	size_t n_pages, i;
+	void *vaddr;
 
-	area = alloc_vm_area(n_pte << PAGE_SHIFT, mem);
-	if (!area) {
-		if (mem != stack)
-			kvfree(mem);
+	n_pages = file->f_mapping->host->i_size >> PAGE_SHIFT;
+	pages = kvmalloc_array(n_pages, sizeof(*pages), GFP_KERNEL);
+	if (!pages)
 		return NULL;
-	}
 
-	ptes = mem;
-	for (pfn = 0; pfn < n_pte; pfn++) {
-		struct page *page;
-
-		page = shmem_read_mapping_page_gfp(file->f_mapping, pfn,
-						   GFP_KERNEL);
-		if (IS_ERR(page))
+	for (i = 0; i < n_pages; i++) {
+		pages[i] = shmem_read_mapping_page_gfp(file->f_mapping, i,
+						       GFP_KERNEL);
+		if (IS_ERR(pages[i]))
 			goto err_page;
-
-		**ptes++ = mk_pte(page,  PAGE_KERNEL);
 	}
 
-	if (mem != stack)
-		kvfree(mem);
-
+	vaddr = vmap(pages, n_pages, VM_MAP_PUT_PAGES, PAGE_KERNEL);
+	if (!vaddr)
+		goto err_page;
 	mapping_set_unevictable(file->f_mapping);
-	return area->addr;
-
+	return vaddr;
 err_page:
-	if (mem != stack)
-		kvfree(mem);
-
-	__shmem_unpin_map(file, area->addr, pfn);
+	while (i--)
+		put_page(pages[i]);
+	kvfree(pages);
 	return NULL;
 }
 
 void shmem_unpin_map(struct file *file, void *ptr)
 {
 	mapping_clear_unevictable(file->f_mapping);
-	__shmem_unpin_map(file, ptr, shmem_npte(file));
+	vfree(ptr);
 }
 
 static int __shmem_rw(struct file *file, loff_t off,
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:23:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:23:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2022.6070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5c-0001nE-Bw; Fri, 02 Oct 2020 12:23:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2022.6070; Fri, 02 Oct 2020 12:23:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5c-0001n2-6k; Fri, 02 Oct 2020 12:23:08 +0000
Received: by outflank-mailman (input) for mailman id 2022;
 Fri, 02 Oct 2020 12:23:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5a-00017n-B4
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:06 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6b44618-3e4c-4ac2-8e7b-d11f3a1d5c89;
 Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4g-0003Ka-6R; Fri, 02 Oct 2020 12:22:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5a-00017n-B4
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:06 +0000
X-Inumbo-ID: c6b44618-3e4c-4ac2-8e7b-d11f3a1d5c89
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c6b44618-3e4c-4ac2-8e7b-d11f3a1d5c89;
	Fri, 02 Oct 2020 12:22:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=cxDf+nMgR/LOq5Vlf/2efNcelt6NT7vAwUR//5r7v5s=; b=YWVEcg8rBuVcb/YCFrZd73afgv
	8kolTlJ8R1RDS3A0cxYp59Z9H2fZ+Hjonp8ijA1/VZEo6qEkS3vqATvBIAhoKIE6J9A9oSJ1a42/Q
	TSf8rRuHIPTZZvnJCiioCHfyVHDOgG+xW0uE1iLjeFwb0p+E4moW9CYRlGQbP87UVhWqAdrioH4iI
	OxqSCaozeYCqqo4NTY4zOI04hXGmIpjZY/WikJC07xo/QZrnRlEUfhKeDqsHj05nW0XTHD714Ug7d
	iQ8YZwg1G5Wsgtl/Ut6unH3VppjZUzCpijkq8iyJWKWmzLpogAjAl2rTfrTAUMuS15swV9Ex8RbpY
	YrfWcXcw==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4g-0003Ka-6R; Fri, 02 Oct 2020 12:22:10 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: [PATCH 03/11] mm: add a vmap_pfn function
Date: Fri,  2 Oct 2020 14:21:56 +0200
Message-Id: <20201002122204.1534411-4-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Add a proper helper to remap PFNs into kernel virtual space so that
drivers don't have to abuse alloc_vm_area and open-code PTE
manipulation for it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/vmalloc.h |  1 +
 mm/Kconfig              |  3 +++
 mm/vmalloc.c            | 45 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 49 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index b899681e3ff9f0..c77efeac242514 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -122,6 +122,7 @@ extern void vfree_atomic(const void *addr);
 
 extern void *vmap(struct page **pages, unsigned int count,
 			unsigned long flags, pgprot_t prot);
+void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot);
 extern void vunmap(const void *addr);
 
 extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
diff --git a/mm/Kconfig b/mm/Kconfig
index 6c974888f86f97..6fa7ba1199eb1e 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -815,6 +815,9 @@ config DEVICE_PRIVATE
 	  memory; i.e., memory that is only accessible from the device (or
 	  group of devices). You likely also want to select HMM_MIRROR.
 
+config VMAP_PFN
+	bool
+
 config FRAME_VECTOR
 	bool
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index ffad65f052c3f9..e2a2ded8d93478 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2413,6 +2413,51 @@ void *vmap(struct page **pages, unsigned int count,
 }
 EXPORT_SYMBOL(vmap);
 
+#ifdef CONFIG_VMAP_PFN
+struct vmap_pfn_data {
+	unsigned long	*pfns;
+	pgprot_t	prot;
+	unsigned int	idx;
+};
+
+static int vmap_pfn_apply(pte_t *pte, unsigned long addr, void *private)
+{
+	struct vmap_pfn_data *data = private;
+
+	if (WARN_ON_ONCE(pfn_valid(data->pfns[data->idx])))
+		return -EINVAL;
+	*pte = pte_mkspecial(pfn_pte(data->pfns[data->idx++], data->prot));
+	return 0;
+}
+
+/**
+ * vmap_pfn - map an array of PFNs into virtually contiguous space
+ * @pfns: array of PFNs
+ * @count: number of pages to map
+ * @prot: page protection for the mapping
+ *
+ * Maps @count PFNs from @pfns into contiguous kernel virtual space and returns
+ * the start address of the mapping.
+ */
+void *vmap_pfn(unsigned long *pfns, unsigned int count, pgprot_t prot)
+{
+	struct vmap_pfn_data data = { .pfns = pfns, .prot = pgprot_nx(prot) };
+	struct vm_struct *area;
+
+	area = get_vm_area_caller(count * PAGE_SIZE, VM_IOREMAP,
+			__builtin_return_address(0));
+	if (!area)
+		return NULL;
+	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
+			count * PAGE_SIZE, vmap_pfn_apply, &data)) {
+		free_vm_area(area);
+		return NULL;
+	}
+	return area->addr;
+}
+EXPORT_SYMBOL_GPL(vmap_pfn);
+#endif /* CONFIG_VMAP_PFN */
+
 static void *__vmalloc_area_node(struct vm_struct *area, gfp_t gfp_mask,
 				 pgprot_t prot, int node)
 {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:23:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:23:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2023.6082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5g-0001sN-Mo; Fri, 02 Oct 2020 12:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2023.6082; Fri, 02 Oct 2020 12:23:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK5g-0001sE-IY; Fri, 02 Oct 2020 12:23:12 +0000
Received: by outflank-mailman (input) for mailman id 2023;
 Fri, 02 Oct 2020 12:23:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5f-00017n-BH
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:11 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8bed786-c621-4d91-b93e-c7faa84c8410;
 Fri, 02 Oct 2020 12:22:27 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4o-0003N0-7i; Fri, 02 Oct 2020 12:22:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5f-00017n-BH
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:11 +0000
X-Inumbo-ID: e8bed786-c621-4d91-b93e-c7faa84c8410
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e8bed786-c621-4d91-b93e-c7faa84c8410;
	Fri, 02 Oct 2020 12:22:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=xnYvbbr90efHFoDVTt6qGpKYl4GIRbFbSmGm2q0tkCk=; b=eksxHgIDln7mOp7Kfyf4bN4BQ5
	Q9cQ8yAmnI2rLeJJwRkx3qFl+sECDJCa1ENjoWbPN8Q4EfQWrWT/bD+2rmcRXOmJTApjVK/drorPv
	Pz4bVzKchyYOgmmAhNdVXPkpw5Zc2koWci+xzekUeE2b7FJS0b0UZl1Pq1qlPMY3iWQsodbAjXDZY
	AvytpmYxppQo0EL3QCBbwn0nqWMyjusHl3fmRJzgfKNdsvAVOOtaAVRlhU83H10mwmU07iosy6A4/
	0V+AUMPvxV5tez86h/WnaoYoBfRGGMjJSyl9k8iaqL1pv7tKHFRxw+7lI1Z4DXuVU1kdIM1lDboVC
	+kVi56hQ==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4o-0003N0-7i; Fri, 02 Oct 2020 12:22:18 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: [PATCH 09/11] xen/xenbus: use apply_to_page_range directly in xenbus_map_ring_pv
Date: Fri,  2 Oct 2020 14:22:02 +0200
Message-Id: <20201002122204.1534411-10-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Replacing alloc_vm_area with get_vm_area + apply_to_page_range
allows filling in the phys_addr values directly instead of doing
another loop over all addresses.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 drivers/xen/xenbus/xenbus_client.c | 30 ++++++++++++++++--------------
 1 file changed, 16 insertions(+), 14 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 2690318ad50f48..fd80e318b99cc7 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -73,16 +73,13 @@ struct map_ring_valloc {
 	struct xenbus_map_node *node;
 
 	/* Why do we need two arrays? See comment of __xenbus_map_ring */
-	union {
-		unsigned long addrs[XENBUS_MAX_RING_GRANTS];
-		pte_t *ptes[XENBUS_MAX_RING_GRANTS];
-	};
+	unsigned long addrs[XENBUS_MAX_RING_GRANTS];
 	phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
 
 	struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
 	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
 
-	unsigned int idx;	/* HVM only. */
+	unsigned int idx;
 };
 
 static DEFINE_SPINLOCK(xenbus_valloc_lock);
@@ -686,6 +683,14 @@ int xenbus_unmap_ring_vfree(struct xenbus_device *dev, void *vaddr)
 EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);
 
 #ifdef CONFIG_XEN_PV
+static int map_ring_apply(pte_t *pte, unsigned long addr, void *data)
+{
+	struct map_ring_valloc *info = data;
+
+	info->phys_addrs[info->idx++] = arbitrary_virt_to_machine(pte).maddr;
+	return 0;
+}
+
 static int xenbus_map_ring_pv(struct xenbus_device *dev,
 			      struct map_ring_valloc *info,
 			      grant_ref_t *gnt_refs,
@@ -694,18 +699,15 @@ static int xenbus_map_ring_pv(struct xenbus_device *dev,
 {
 	struct xenbus_map_node *node = info->node;
 	struct vm_struct *area;
-	int err = GNTST_okay;
-	int i;
-	bool leaked;
+	bool leaked = false;
+	int err = -ENOMEM;
 
-	area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, info->ptes);
+	area = get_vm_area(XEN_PAGE_SIZE * nr_grefs, VM_IOREMAP);
 	if (!area)
 		return -ENOMEM;
-
-	for (i = 0; i < nr_grefs; i++)
-		info->phys_addrs[i] =
-			arbitrary_virt_to_machine(info->ptes[i]).maddr;
-
+	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
+				XEN_PAGE_SIZE * nr_grefs, map_ring_apply, info))
+		goto failed;
 	err = __xenbus_map_ring(dev, gnt_refs, nr_grefs, node->handles,
 				info, GNTMAP_host_map | GNTMAP_contains_pte,
 				&leaked);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:26:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:26:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2036.6094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK94-0002Pe-69; Fri, 02 Oct 2020 12:26:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2036.6094; Fri, 02 Oct 2020 12:26:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK94-0002PX-2n; Fri, 02 Oct 2020 12:26:42 +0000
Received: by outflank-mailman (input) for mailman id 2036;
 Fri, 02 Oct 2020 12:26:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5k-00017n-BT
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:16 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3cb94ff-30c9-4a7c-bb16-9f9603b5c507;
 Fri, 02 Oct 2020 12:22:29 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4p-0003NC-E7; Fri, 02 Oct 2020 12:22:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5k-00017n-BT
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:16 +0000
X-Inumbo-ID: a3cb94ff-30c9-4a7c-bb16-9f9603b5c507
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a3cb94ff-30c9-4a7c-bb16-9f9603b5c507;
	Fri, 02 Oct 2020 12:22:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=aKbcvSTUYHnc5Kwat+nQ3mcktPzxpqXoVaxpdMprQBc=; b=B5KIXiMOT+bUgSt1GIGcOgCEjp
	KxrLVOKXkjiZh9nsUjXrZIlf4IkgoawYmYAki6NhuzjSiIFsxcxlaCsOvoZlGkgbjTPrXr3wsU70G
	py1rrHtiaqjQaApAFvYoihjieul5RVuE7O+6SjJioBBfm+PIwZYdtR+2SMdjuYu1Sp4n+FsVejSyL
	UXdlLjQD/rxohxIguUWxxVAkVtqjw1uHGlM1swtKpIiApsFHYCGukN/6ttSaUKj1/R8XhI+9c7kr2
	CHlBs2NnJcbjsxHWA7WSo2qepwoMiF6W71c/naCdia2qhrwSV7pd55VEB5WyfMw+C7q+NXxIYTfma
	JMbWWlRA==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4p-0003NC-E7; Fri, 02 Oct 2020 12:22:19 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: [PATCH 10/11] x86/xen: open code alloc_vm_area in arch_gnttab_valloc
Date: Fri,  2 Oct 2020 14:22:03 +0200
Message-Id: <20201002122204.1534411-11-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

Replace the last call to alloc_vm_area with an open-coded version using
an iterator in struct gnttab_vm_area instead of the triple-indirection
magic in alloc_vm_area.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 arch/x86/xen/grant-table.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/grant-table.c b/arch/x86/xen/grant-table.c
index 4988e19598c8a5..1e681bf62561a0 100644
--- a/arch/x86/xen/grant-table.c
+++ b/arch/x86/xen/grant-table.c
@@ -25,6 +25,7 @@
 static struct gnttab_vm_area {
 	struct vm_struct *area;
 	pte_t **ptes;
+	int idx;
 } gnttab_shared_vm_area, gnttab_status_vm_area;
 
 int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes,
@@ -90,19 +91,31 @@ void arch_gnttab_unmap(void *shared, unsigned long nr_gframes)
 	}
 }
 
+static int gnttab_apply(pte_t *pte, unsigned long addr, void *data)
+{
+	struct gnttab_vm_area *area = data;
+
+	area->ptes[area->idx++] = pte;
+	return 0;
+}
+
 static int arch_gnttab_valloc(struct gnttab_vm_area *area, unsigned nr_frames)
 {
 	area->ptes = kmalloc_array(nr_frames, sizeof(*area->ptes), GFP_KERNEL);
 	if (area->ptes == NULL)
 		return -ENOMEM;
-
-	area->area = alloc_vm_area(PAGE_SIZE * nr_frames, area->ptes);
-	if (area->area == NULL) {
-		kfree(area->ptes);
-		return -ENOMEM;
-	}
-
+	area->area = get_vm_area(PAGE_SIZE * nr_frames, VM_IOREMAP);
+	if (!area->area)
+		goto out_free_ptes;
+	if (apply_to_page_range(&init_mm, (unsigned long)area->area->addr,
+			PAGE_SIZE * nr_frames, gnttab_apply, area))
+		goto out_free_vm_area;
 	return 0;
+out_free_vm_area:
+	free_vm_area(area->area);
+out_free_ptes:
+	kfree(area->ptes);
+	return -ENOMEM;
 }
 
 static void arch_gnttab_vfree(struct gnttab_vm_area *area)
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:27:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:27:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2037.6106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK9K-0002U5-Ei; Fri, 02 Oct 2020 12:26:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2037.6106; Fri, 02 Oct 2020 12:26:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOK9K-0002Ty-BC; Fri, 02 Oct 2020 12:26:58 +0000
Received: by outflank-mailman (input) for mailman id 2037;
 Fri, 02 Oct 2020 12:26:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kOK5p-00017n-Bh
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:21 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6428aced-d01c-499f-9085-ccb677ca3771;
 Fri, 02 Oct 2020 12:22:52 +0000 (UTC)
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kOK4r-0003NP-07; Fri, 02 Oct 2020 12:22:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Wm18=DJ=casper.srs.infradead.org=batv+27a5ecbc8e1e54150000+6249+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kOK5p-00017n-Bh
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:23:21 +0000
X-Inumbo-ID: 6428aced-d01c-499f-9085-ccb677ca3771
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6428aced-d01c-499f-9085-ccb677ca3771;
	Fri, 02 Oct 2020 12:22:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-Type:Content-ID:Content-Description;
	bh=ix+nhROFgS2R/TvryqxFuXEm6pOX6uuPKFVKed32kPg=; b=ZxPASWNJXBFtPRb6kog90k5rff
	3YKB7yxnbzt6Zj9BldP8QccHlvTh/kXpMqPC0FnoXkc0CUUN/TcEv02oKe4+ZvbNMY5mS4aY9UIAz
	XbqfauNpkrOSsljQLJsh7dJ36oClmVKIOXm9fiW24TgJ8429cBrjiAAXlkjeBqGINNXYZBLvCO52B
	ox58udQfQp3gEqPfQRaXx3aajTYzMkiSvQvprJW2VaLIJiuCFd/XjFE7CZTG6+XRf1W4nEYSTrVVf
	vZbY2Q/c13c52ovQG034bnaM00iM/NMs53oAq644X4PbdJYw70/NqqPet1yajl8ZAQpbIOVQdoEVV
	76W/0tDg==;
Received: from [2001:4bb8:180:7b62:f738:1861:1acc:15c8] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOK4r-0003NP-07; Fri, 02 Oct 2020 12:22:21 +0000
From: Christoph Hellwig <hch@lst.de>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jani Nikula <jani.nikula@linux.intel.com>,
	Joonas Lahtinen <joonas.lahtinen@linux.intel.com>,
	Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>,
	Chris Wilson <chris@chris-wilson.co.uk>,
	Matthew Auld <matthew.auld@intel.com>,
	Rodrigo Vivi <rodrigo.vivi@intel.com>,
	Minchan Kim <minchan@kernel.org>,
	Matthew Wilcox <willy@infradead.org>,
	Nitin Gupta <ngupta@vflare.org>,
	x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	intel-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	linux-mm@kvack.org
Subject: [PATCH 11/11] mm: remove alloc_vm_area
Date: Fri,  2 Oct 2020 14:22:04 +0200
Message-Id: <20201002122204.1534411-12-hch@lst.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201002122204.1534411-1-hch@lst.de>
References: <20201002122204.1534411-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

All users are gone now.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 include/linux/vmalloc.h |  5 +----
 mm/nommu.c              |  7 ------
 mm/vmalloc.c            | 48 -----------------------------------------
 3 files changed, 1 insertion(+), 59 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index c77efeac242514..938eaf9517e266 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -169,6 +169,7 @@ extern struct vm_struct *__get_vm_area_caller(unsigned long size,
 					unsigned long flags,
 					unsigned long start, unsigned long end,
 					const void *caller);
+void free_vm_area(struct vm_struct *area);
 extern struct vm_struct *remove_vm_area(const void *addr);
 extern struct vm_struct *find_vm_area(const void *addr);
 
@@ -204,10 +205,6 @@ static inline void set_vm_flush_reset_perms(void *addr)
 }
 #endif
 
-/* Allocate/destroy a 'vmalloc' VM area. */
-extern struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes);
-extern void free_vm_area(struct vm_struct *area);
-
 /* for /dev/kmem */
 extern long vread(char *buf, char *addr, unsigned long count);
 extern long vwrite(char *buf, char *addr, unsigned long count);
diff --git a/mm/nommu.c b/mm/nommu.c
index 75a327149af127..9272f30e4c4726 100644
--- a/mm/nommu.c
+++ b/mm/nommu.c
@@ -354,13 +354,6 @@ void vm_unmap_aliases(void)
 }
 EXPORT_SYMBOL_GPL(vm_unmap_aliases);
 
-struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes)
-{
-	BUG();
-	return NULL;
-}
-EXPORT_SYMBOL_GPL(alloc_vm_area);
-
 void free_vm_area(struct vm_struct *area)
 {
 	BUG();
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e2a2ded8d93478..3bc5b832451ef2 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3083,54 +3083,6 @@ int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
 }
 EXPORT_SYMBOL(remap_vmalloc_range);
 
-static int f(pte_t *pte, unsigned long addr, void *data)
-{
-	pte_t ***p = data;
-
-	if (p) {
-		*(*p) = pte;
-		(*p)++;
-	}
-	return 0;
-}
-
-/**
- * alloc_vm_area - allocate a range of kernel address space
- * @size:	   size of the area
- * @ptes:	   returns the PTEs for the address space
- *
- * Returns:	NULL on failure, vm_struct on success
- *
- * This function reserves a range of kernel address space, and
- * allocates pagetables to map that range.  No actual mappings
- * are created.
- *
- * If @ptes is non-NULL, pointers to the PTEs (in init_mm)
- * allocated for the VM area are returned.
- */
-struct vm_struct *alloc_vm_area(size_t size, pte_t **ptes)
-{
-	struct vm_struct *area;
-
-	area = get_vm_area_caller(size, VM_IOREMAP,
-				__builtin_return_address(0));
-	if (area == NULL)
-		return NULL;
-
-	/*
-	 * This ensures that page tables are constructed for this region
-	 * of kernel virtual address space and mapped into init_mm.
-	 */
-	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
-				size, f, ptes ? &ptes : NULL)) {
-		free_vm_area(area);
-		return NULL;
-	}
-
-	return area;
-}
-EXPORT_SYMBOL_GPL(alloc_vm_area);
-
 void free_vm_area(struct vm_struct *area)
 {
 	struct vm_struct *ret;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:28:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:28:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2041.6118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKB9-0002fw-Vc; Fri, 02 Oct 2020 12:28:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2041.6118; Fri, 02 Oct 2020 12:28:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKB9-0002fp-SA; Fri, 02 Oct 2020 12:28:51 +0000
Received: by outflank-mailman (input) for mailman id 2041;
 Fri, 02 Oct 2020 12:28:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOKB8-0002fj-Li
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:28:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b1044f9-9a90-486c-9ed4-3b5089c75bc0;
 Fri, 02 Oct 2020 12:28:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C1922AC6D;
 Fri,  2 Oct 2020 12:28:48 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kOKB8-0002fj-Li
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:28:50 +0000
X-Inumbo-ID: 7b1044f9-9a90-486c-9ed4-3b5089c75bc0
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7b1044f9-9a90-486c-9ed4-3b5089c75bc0;
	Fri, 02 Oct 2020 12:28:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601641728;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yYm16sD4KHefZ2j8SxaiXlxlMEdlEuTzKyyWhgvMbnE=;
	b=c2K4kOZvGetFDNSdJPBOtMLhEZpL7IJGEsL23DYXqmGx1Bb2pzv7hNqUVpeTh5flK+ur5J
	tKY4+sj3d2uwQAPwhKuRwUFoHTS2uj2i0rrG/HZaHwAVVm3VuIhGlwupBYvknxqoKClF0C
	2Eq3Uk5puW/lVXVXQMIV7ZNrZrWWl4M=
Subject: Re: [PATCH 2/3] x86: fix resource leaks on arch_vcpu_create() error
 path
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
 <77106fd6-96c5-4a62-5eee-8a37660db550@suse.com>
 <df700f00-9458-c7f8-90fc-65dc31850b48@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <45c132b8-2a86-2e3b-6fe4-76041787d2d4@suse.com>
Date: Fri, 2 Oct 2020 14:28:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <df700f00-9458-c7f8-90fc-65dc31850b48@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.2020 13:13, Andrew Cooper wrote:
> On 02/10/2020 11:30, Jan Beulich wrote:
>> {hvm,pv}_vcpu_initialise() have always been meant to be the final
>> possible source of errors in arch_vcpu_create(), hence not requiring
>> any unrolling of what they've done on the error path. (Of course this
>> may change once the various involved paths all have become idempotent.)
> 
> I'd agree that the way the code was previously laid out expected
> {hvm,pv}_vcpu_initialise() to be the final failing option.
> 
> I don't think "has always meant to be" is reasonable, because where is
> the code comment explaining this design choice?

It's probably more a "happened to be that way and then it was easiest
to keep it like this", but I recall the behavior having been the subject
of discussions, with the outcome that it's at least "kind of" intended.

Would adding "kind of" make things look better to you?

>> But even beyond this aspect I think it is more logical to do policy
>> initialization ahead of the calling of these two functions, as they may
>> in principle want to access it.
> 
> Not these MSRs.  They're currently a block of zeroes, and while that
> will eventually change, it will still be a bunch of MSRs in their RESET
> state.
> 
> The interesting MSRs are the domain ones, not the vCPU ones.

If you had said "The more interesting ...", I'd have agreed. What I
was thinking of as possible uses (be it reading or writing) is,
e.g., reset state that may depend on certain further properties.

Furthermore, I was thinking of code paths that vCPU initialization
may simply re-use, and which expect the policy to at least be
available, irrespective of the individual MSRs' values still
being their reset ones.

>> Fixes: 4187f79dc718 ("x86/msr: introduce struct msr_vcpu_policy")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com> although I'd
> prefer some adjustment to the commit message along the indicated lines.

Thanks. As far as adjustments go, I don't really see how to better
reflect what you want, considering my replies above. If you have
any hints ... (I'll hold off committing this for a little while,
but I think I'd like to put it in before I leave for weekend and
vacation.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:31:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:31:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2046.6135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKDs-0003Ww-Es; Fri, 02 Oct 2020 12:31:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2046.6135; Fri, 02 Oct 2020 12:31:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKDs-0003Wp-Bx; Fri, 02 Oct 2020 12:31:40 +0000
Received: by outflank-mailman (input) for mailman id 2046;
 Fri, 02 Oct 2020 12:31:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOKDr-0003Wk-Oa
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:31:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c417eb15-30b8-4870-8f05-b7feebfccf27;
 Fri, 02 Oct 2020 12:31:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5140DAC6D;
 Fri,  2 Oct 2020 12:31:38 +0000 (UTC)
X-Inumbo-ID: c417eb15-30b8-4870-8f05-b7feebfccf27
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601641898;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gNOz+1P9pqyTZrGccFA7iVYQln8SAIGLyY/tKcHVFV8=;
	b=asYBeV9nWCwzptOPNc9Nzp7dDj7y2zuqoWyQ79uPmsDAH9FcAgAwEYHQhOC1GzpQmKVhLO
	ENhnUNkfLY+kcjhKY63SuGyzy4vTT24nV3W70dqDwIT9sQAuYquprVFB2ytnznoNBGtzLj
	JVtqh/rOQlT7xcgznP2rnZnRE6TfqVI=
Subject: Re: [PATCH 3/3] x86/vLAPIC: vlapic_init() runs only once for a vCPU
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1a55f2f0-f0aa-4a33-1219-1091ed9150df@suse.com>
 <3735eb75-76ef-abff-1b05-aa89ddc39fcc@suse.com>
 <03ab138c-5608-ba4e-90ae-5d7bcdfd6bd9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cd4bd18e-d179-7ad1-edd2-3ad1520268e7@suse.com>
Date: Fri, 2 Oct 2020 14:31:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <03ab138c-5608-ba4e-90ae-5d7bcdfd6bd9@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.10.2020 13:19, Andrew Cooper wrote:
> On 02/10/2020 11:31, Jan Beulich wrote:
>> Hence there's no need to guard allocation / mapping by checks whether
>> the same action has been done before. I assume this was a transient
>> change which should have been undone before 509529e99148 ("x86 hvm: Xen
>> interface and implementation for virtual S3") got committed.
>>
>> While touching this code, switch dprintk()-s to use %pv.
> 
> Logging ENOMEM, especially without actually saying ENOMEM, is quite
> pointless.
> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>, preferably with
> the printk()s dropped.

Thanks, and sure - I'll be happy to drop them. Just didn't want to
make more of a change than needed, and them being dprintk()-s didn't
make them look all that awful.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:34:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:34:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2049.6149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKGn-0003j3-0k; Fri, 02 Oct 2020 12:34:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2049.6149; Fri, 02 Oct 2020 12:34:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKGm-0003iw-Tl; Fri, 02 Oct 2020 12:34:40 +0000
Received: by outflank-mailman (input) for mailman id 2049;
 Fri, 02 Oct 2020 12:34:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Anut=DJ=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kOKGm-0003ip-7c
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:34:40 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e960f30a-c63c-42b7-961c-fdfa7243362a;
 Fri, 02 Oct 2020 12:34:38 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id A8EFB67373; Fri,  2 Oct 2020 14:34:36 +0200 (CEST)
X-Inumbo-ID: e960f30a-c63c-42b7-961c-fdfa7243362a
Date: Fri, 2 Oct 2020 14:34:36 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <stefano.stabellini@xilinx.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org
Subject: xen-swiotlb vs phys_to_dma
Message-ID: <20201002123436.GA30329@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.17 (2007-11-01)

Hi Stefano,

I've looked over xen-swiotlb in linux-next, that is with your recent
changes to take dma offsets into account.  One thing that puzzles me
is that xen_swiotlb_map_page passes virt_to_phys(xen_io_tlb_start) as
the tbl_dma_addr argument to swiotlb_tbl_map_single, despite the fact
that the argument is a dma_addr_t and both other callers translate
from a physical to the dma address.  Was this an oversight?


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:35:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2050.6162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKHE-0003oV-Am; Fri, 02 Oct 2020 12:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2050.6162; Fri, 02 Oct 2020 12:35:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKHE-0003oO-6n; Fri, 02 Oct 2020 12:35:08 +0000
Received: by outflank-mailman (input) for mailman id 2050;
 Fri, 02 Oct 2020 12:35:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOKHC-0003oE-Jw
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:35:06 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.73]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7507b35-dce9-4d55-9331-634770fffe8a;
 Fri, 02 Oct 2020 12:35:04 +0000 (UTC)
Received: from AM5PR1001CA0047.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:15::24) by DB7PR08MB3913.eurprd08.prod.outlook.com
 (2603:10a6:10:7c::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Fri, 2 Oct
 2020 12:35:02 +0000
Received: from AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:15:cafe::8a) by AM5PR1001CA0047.outlook.office365.com
 (2603:10a6:206:15::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34 via Frontend
 Transport; Fri, 2 Oct 2020 12:35:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT017.mail.protection.outlook.com (10.152.16.89) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 12:35:02 +0000
Received: ("Tessian outbound 7161e0c2a082:v64");
 Fri, 02 Oct 2020 12:35:01 +0000
Received: from ad437ae59b25.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7233D458-06F5-4CB8-9D87-28A6EC470D32.1; 
 Fri, 02 Oct 2020 12:34:56 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ad437ae59b25.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 12:34:56 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5372.eurprd08.prod.outlook.com (2603:10a6:10:f9::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38; Fri, 2 Oct
 2020 12:34:54 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 12:34:54 +0000
X-Inumbo-ID: c7507b35-dce9-4d55-9331-634770fffe8a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=T6oIfde5Tuosez1/RFkRLH/eRHMbDK4rxnVpTtiuz/0=;
 b=zAG6fkP0mvjWR73PclkdS9E4ETD1uW7oZIuv2E3TRsjr9eYSWZID1y9uNBuM6BU14A7Gf42oJNgJttrCweqDyhooun9a/yNzTmjAkuqlP87k8Wq8NKz7obW0vSMtT+XCoZg/bjMJJ/c7gfmY/LEA/UlRKz1wApAMK0murdWRpLo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 72989764b2f3f0ec
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e+hQlATvMvjoiLTeYkjX8REyrOsvcW/s2at4W2oJ6M0FnyjccyTajhXKFooDNhjVB7gkveOjxdrVbF5e0Y6utH1SED67nXEBInWzx2+57pA3r/De3lCtU0JFf4HAEj/Ck9MI0FrEdkuVO2Wkd5E8Aq7xQ/4pv89FSywI2mkIyVlVaPpdnCN/KOIAmnGngebOLfmx48JEElla+fF/j/pAUKRGMZa7lDBD5dMZ2UxXXz2aNdCn7U3g4Lnuyp+4ggBv68f7UIC8ZwjWwgGd4kHZcfBtKOXYpD9SGefozwdu9NfzxFSQP1olng0iwXiucPUVzIoIUDGUflqz+AXhOBOvMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=T6oIfde5Tuosez1/RFkRLH/eRHMbDK4rxnVpTtiuz/0=;
 b=Vx8B4EEIrAJwerT/s25wqRQt6+Uk9vmko0KAAnFbnMtgWsLsKAp6qLZkceno3VeIINnN3fspKDRgrVGO6RKFuNusmUwicYIV0AvHgYD9jsfSvyQPhIkybGXho+InOU6BX2zuwZbGmaplAdqqeWcaSRnfilJM8fZv9+FLqGtKCEaedE4K75sBqHVpll59Hhglu1Yd/0y5bntzv8d9zlqrHKlK4iQ3fJEvd+raZAjdLOcLENJJQuItwzGruEcOdHy8ZDJYH9ShB5YfvFQcNv2za0Y1SKI+vccoyBO1GYoJ/OUUEJisKq/oia4Z9jPl8WgA4bt/0q1buRTb0fFcB6xfvg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Topic: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Index: AQHWmKjTj7VmlPYcLkSjqrrhNFG5S6mEOTwAgAAGTYA=
Date: Fri, 2 Oct 2020 12:34:54 +0000
Message-ID: <31FC9BB1-F4C4-4203-94C1-1134607E49C2@arm.com>
References:
 <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
 <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
In-Reply-To: <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: aa6c1eb1-0a6b-4039-f57b-08d866cf9545
x-ms-traffictypediagnostic: DB8PR08MB5372:|DB7PR08MB3913:
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB39130CFBA92499C8148A381E9D310@DB7PR08MB3913.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2089;OLM:2089;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 liFyhRZx5GRMB7NyuX0u/31WQgTVDmLxsUTBQYzwQHIpvPUmSPKCvraYBWsr+lzAfPz2AxmrIz4mZ1BehcJXkD6gFCaWNSWB3fUec0tjrPp5uEzuRnZ2T7JTY8FDKl8QCqhkQvaiO+wjGqRZ45HPoRpa46zrBaTiZXsos6P/9TPadr0QiGApovaoZ9H7sXjer53PblGY15Ll+t5JJk5pmEcFTzAAz/mQahpZuuESI7cO3JYEEWQ1q9BtLruo4eGfqCwctV8+0T6b4LCd8pZfnJ/XbpSQx7eqChv0HlVJ1pL80wxX4zKhgUFAHCX0H2P+Zuw0t6hhDDESavM5Z0Wl0w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(376002)(136003)(346002)(396003)(86362001)(36756003)(6916009)(8936002)(6486002)(6512007)(33656002)(316002)(186003)(2616005)(8676002)(71200400001)(26005)(2906002)(83380400001)(4744005)(5660300002)(478600001)(76116006)(66946007)(64756008)(53546011)(6506007)(66446008)(4326008)(66476007)(91956017)(66556008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 1hNQs71bYPySFiE58YKfiitK4h4bWIqj+oy4uDgv/DggO5grzJ/KQm8liqVJ+pxhIwloBEcLenlgsbLCCA0zsN9OwcGmGUGPvdXaFIBG18mP9W+SSR17zUmpnmAGVWdBUryzcbhr02zB2S1D3qocvNyXURWsAQkLVhG750HHbpMlzEcwHwvPCLxS9YvVVo5fEKcZRwngUuIfCdZB42omBB29YBrp3oNrY6hAlXfWoSe6z+Atuso5X2KGoncnMU4fbQmH52CzUgKUMo2zgDjwO1agnjgHL82BtJ4obuAUuVNHc0vdYFjJI0T5lJXkeF+dOgjeYi6NLb6Krstek/IHKE07bAqpBV+jKHTO6BBzDJNWUWocwSAqt1Rap1QceRI43WBmwRFFQnJUV9waZMt+zzc3W/9y7gZ9arDzvBR8dGsMpSliGli2qebLuqyn7eL+dyR+NV6EZ1G+CdytOlTqisrG4W5tfUaUQEqwxFmVM5fKs9xdH0uV5eGMrB+olWNdeFyQEXbUHo7hGJVyp1iTqcabpNsujEAKiGj51tp2eD3TeoFkkPz15IhQU/3J5O4i20E8HW1Rg4DkTLCQbjTS09QJu8dkw1zYnHU9NEkU2WMWnicb3EcqvHKXnnegzGTIPcVX0f8TnDSwll2JnV/obg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8C1D3F543177D34B977B752E39735A40@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5372
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	23074908-f3da-457c-de14-08d866cf90c1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	BxYwFPQpzhgO5spRJuu9Z2voI6aFvslNx12sfT3XlDV/eRchiE6zpeitAwzUGA1AbkyJ2C/rE/K2XnDJvCXGAIzVACPESjnAT4qTvzrQXfjCTwnGA6/nM0c3AfD1ZkZWuqcFJbkGZ9bdtO48ZHiKOWMgTjD4vtEusj8NLcCkHCfJlnYwHUzOC3551mpn18wRWr4r6U2YxgDMNNlVfWqL1C92SJl+wpXv9XrTWWKKsz7eb5X5mxOaQnWtFQOD/cB6yjItFVYZ2a904eenLGax1tRasmFWYNrhSqqVUl85eqlxi1FMNPC098nRDLrSX63QtJU6dLNVR1kgtAlxqxfj2V3LGPY9+2n7Sk63kvYq0/7MGLKj/Rm24o0VKoFjt58w6L8scjME1JmLVnXyvtP59g==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(39850400004)(376002)(346002)(46966005)(82310400003)(4326008)(33656002)(5660300002)(6862004)(8936002)(8676002)(83380400001)(36756003)(6486002)(2616005)(4744005)(53546011)(81166007)(6512007)(6506007)(70586007)(70206006)(316002)(478600001)(36906005)(356005)(86362001)(82740400003)(336012)(26005)(47076004)(186003)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 12:35:02.0958
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: aa6c1eb1-0a6b-4039-f57b-08d866cf9545
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3913



> On 2 Oct 2020, at 13:12, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 02.10.2020 12:42, Bertrand Marquis wrote:
>> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
>>
>> This removes the dependency on the xen subdirectory, preventing the
>> use of a wrong configuration file when the xen subdirectory is
>> duplicated for compilation tests.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks :-)

>
> (but more for the slight tidying than the purpose you name)

Feel free to remove the justification from the commit message if
you think it is not useful.

Cheers
Bertrand
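A hypothetical before/after fragment of the kind of change being acked
(the included file name is illustrative, not a specific hunk from the
patch):

```make
# Before: reaching through the tree root; a copied xen/ subtree would
# still pick up the original tree's files and configuration.
include $(XEN_ROOT)/xen/Rules.mk

# After: relative to the build's own base directory, so a duplicated
# xen/ subtree is self-contained.
include $(BASEDIR)/Rules.mk
```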



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2062.6174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKKh-00041b-Qv; Fri, 02 Oct 2020 12:38:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2062.6174; Fri, 02 Oct 2020 12:38:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKKh-00041U-NZ; Fri, 02 Oct 2020 12:38:43 +0000
Received: by outflank-mailman (input) for mailman id 2062;
 Fri, 02 Oct 2020 12:38:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOKKg-00041P-HM
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:38:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3681209e-73be-4bf7-8747-fb4cf5a3596b;
 Fri, 02 Oct 2020 12:38:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF58FAD1E;
 Fri,  2 Oct 2020 12:38:39 +0000 (UTC)
X-Inumbo-ID: 3681209e-73be-4bf7-8747-fb4cf5a3596b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601642320;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=11vWN8IBf60RKOh7w7YSKD4HFojYZshkeYiDIrDSJF0=;
	b=l/tpRyeCoaSw/46YX87J8EmYFb8KsLA5BXJUykoGAYtEV2JJgmnp2Zx65ZH8hfRmeDyaXF
	XBIHZ6Bvt/Tn+cJFz+0NNYzl72IjpEaODIKcidIqcGYA1IYcTYf7JErazl9qk9dkRo92BN
	spXkEWjChlVyVV3FDCyttZ7NE1uwLPU=
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>
References: <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
 <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
 <31FC9BB1-F4C4-4203-94C1-1134607E49C2@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e5da46d7-07ee-84b8-fbd8-e2c246c014de@suse.com>
Date: Fri, 2 Oct 2020 14:38:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <31FC9BB1-F4C4-4203-94C1-1134607E49C2@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02.10.2020 14:34, Bertrand Marquis wrote:
>> On 2 Oct 2020, at 13:12, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 02.10.2020 12:42, Bertrand Marquis wrote:
>>> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
>>>
>>> This removes the dependency on the xen subdirectory, preventing the
>>> use of a wrong configuration file when the xen subdirectory is
>>> duplicated for compilation tests.
>>>
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> Thanks :-)
> 
>>
>> (but more for the slight tidying than the purpose you name)
> 
> Feel free to remove the justification from the commit message if
>> you think it is not useful.

Oh, no, it's not like I consider it not useful. It shows how you
arrived at making the change. It's just that I didn't consider
making copies of xen/ something we mean to be supported. I wouldn't
be surprised if it got broken again ...

Jan
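For illustration, the kind of substitution the patch makes looks roughly like
this (a sketch using a representative include, not an actual hunk from the
patch):

```make
# Before: the path is anchored at the repository root, so a duplicated
# xen/ tree built elsewhere still reaches back into the original tree
# (and can pick up its configuration).
include $(XEN_ROOT)/xen/Rules.mk

# After: the path is anchored at the xen/ tree actually being built,
# so a copied xen/ directory is self-contained.
include $(BASEDIR)/Rules.mk
```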


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 12:44:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 12:44:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2064.6186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKQN-0004uS-Hj; Fri, 02 Oct 2020 12:44:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2064.6186; Fri, 02 Oct 2020 12:44:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKQN-0004uL-E6; Fri, 02 Oct 2020 12:44:35 +0000
Received: by outflank-mailman (input) for mailman id 2064;
 Fri, 02 Oct 2020 12:44:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOKQM-0004uG-3W
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 12:44:34 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.46]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fc1012e-f81c-458d-b320-73a8a7108e08;
 Fri, 02 Oct 2020 12:44:32 +0000 (UTC)
Received: from DB6PR1001CA0033.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:55::19)
 by AM0PR08MB3555.eurprd08.prod.outlook.com (2603:10a6:208:da::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Fri, 2 Oct
 2020 12:44:26 +0000
Received: from DB5EUR03FT015.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:55:cafe::23) by DB6PR1001CA0033.outlook.office365.com
 (2603:10a6:4:55::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34 via Frontend
 Transport; Fri, 2 Oct 2020 12:44:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT015.mail.protection.outlook.com (10.152.20.145) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 12:44:26 +0000
Received: ("Tessian outbound bac899b43a54:v64");
 Fri, 02 Oct 2020 12:44:26 +0000
Received: from 490aa17f3f22.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 77B0C950-E3B8-49DF-9A41-A2A999E50C8C.1; 
 Fri, 02 Oct 2020 12:44:08 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 490aa17f3f22.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 12:44:08 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5686.eurprd08.prod.outlook.com (2603:10a6:10:1a1::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Fri, 2 Oct
 2020 12:44:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 12:44:07 +0000
X-Inumbo-ID: 9fc1012e-f81c-458d-b320-73a8a7108e08
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EpxC1hAnH3eF30dURg+E+4988ogzfy2wrQ97gpqhEH8=;
 b=zhKQb2bxNEUOiKSxK/5T0RiGXgTPTCkkEnEY5jwVGBFCgcvOYOBQSV+dckTGSQ+QJvqO8ZZACEZYv77QqyzN8zj1+6L8tI/5ErorV4Ps4xDwvBAKZFuWBslPcV0+cF7PnNjT1sFOVpxNot1IH9QVmGB14Ilai6gqklqTdCB1nzY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1037fa3d5acd32ea
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NltaIem2qVqcU+fRuF9rAjaTz/A8QZay6aq5PClzRkpGNyVrs0UucAAVJ1Ctd00tdQ5JSoZi2IXb6cSNUHTpvskDZisXA9FeLMrFiEdtWTItRHGdxfyc/G+4HhwCeSflIzHdKFnoxgiRWC25RDLVXij5CDh3zZxFtjB9ee+iONvGccbTcaOsXtU3MAEu1GzFKXXCvRNUKI4Vv7cxLm3B9JeJGSTKQk8yuVDtUc2g6KfXbqId0W5Knaxck3nyxc4/hXWsWyti2wx7uhQuyo4Lge4yriq/PbWS7u12S1IJd2xskE7tlS2I4Gg2luEiCmMVAj4K9bg2oDDg9EbjVHjq8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EpxC1hAnH3eF30dURg+E+4988ogzfy2wrQ97gpqhEH8=;
 b=IdSSY+hUipx5eNTIJW1FDbIeApNXGcF+ju7nopupe7W+Jb0yCQ9g+4O/to4SR0gs0KJSif1FQpA9Mmr8xk37G2MJXqaK9HF2knQvc+LO0zOAxVn/5KRbAa2qEjYVgopQYVI0q8RJS0CNtqmvKc+6Y3n1r2qsp2qRkRhu559EvOlR0A9krdg/gt1U3xSH7+FRWQoJJziWu9F302w5QMkQdPypCNleN2jJmITH9sXyrUTX/52azaRnqhNS6/NMGopNM+UAH8HfgstEZNQRt+1s2LkHh4kRVbojQjrZ61qrXwnoClAn+U1znce+jDJujRk6LcniVLATEeAPy5eQb+4k1Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Topic: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Index: AQHWmKjTj7VmlPYcLkSjqrrhNFG5S6mEOTwAgAAGTYCAAAEPgIAAAYQA
Date: Fri, 2 Oct 2020 12:44:07 +0000
Message-ID: <547D8B47-C521-4F43-976F-D1723470AD3C@arm.com>
References:
 <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
 <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
 <31FC9BB1-F4C4-4203-94C1-1134607E49C2@arm.com>
 <e5da46d7-07ee-84b8-fbd8-e2c246c014de@suse.com>
In-Reply-To: <e5da46d7-07ee-84b8-fbd8-e2c246c014de@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 9c53bfb8-051e-416e-fed0-08d866d0e5b8
x-ms-traffictypediagnostic: DBAPR08MB5686:|AM0PR08MB3555:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3555551C8648BA6FCA13FD039D310@AM0PR08MB3555.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2657;OLM:2657;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 w8x5BB6CYLRV2E/2Ovind4uNA+o/6Pm3bw9LFBJrOFk1CGni4d2kTcoBs+JAj1PrQqUL9wg2OtOTR3IXjXwdEM1Smtgf/YGXY0oY4ajxBMBBKg48mXbllZBr7Q08Y/rEk5z82pGorEzTHTwu2StfVeNbL+S3xS1p35x+NtCnXZqTxN94U0cYKBWmBuI0VwsdnJBg7pc7QOQu00k9um8G27/VJVSO0ziHZfOWjEUBdaATC7XclzeV7OeeR8pgU1qO87333NKkirx89CscVHZVFHx5crRMIoCrT5BEXoYHn/GornMmxayBi2iAMs2NdmgJ1fSJH+d6I/eOqLCEueOmQA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39850400004)(346002)(396003)(376002)(136003)(366004)(8676002)(86362001)(91956017)(186003)(6916009)(316002)(478600001)(66446008)(76116006)(64756008)(5660300002)(66476007)(53546011)(66946007)(71200400001)(6506007)(33656002)(26005)(36756003)(66556008)(8936002)(4326008)(6512007)(2906002)(6486002)(83380400001)(2616005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 95RMjbv6O18+yuBOncLkLa7O+seDkBM2WHY6JsEKFb6CgmV2Wy7Gktm3iyXIPtqR51rfA+rr4J2y7BNcdBnRc16gqBWUKPCEMrYpFeWEzF7i7YEp3lEgC9SOV3eWwPl5MMZdFSZfU2dXAhg2ndxkAVQijX8nCu379rHFCKvFpAowhkz6fJUaofS4k32ISfwqsls+3aSCs5xdt1um+9Mqxb7cENBoVUCuhI4e6M7M1e1PGUEoA9tL50Fg1QVCPf/pV9octDNYXCWFKYsESgJ4rEUmJqqUh1qAf2lmtXKQl8UF13mUTXBMb1afD9jmjlZltw7bmpk0fWqIyirtoUubx8Uv2rRWGc52HLN6hOnbk14NOW6JdHE0KjN3i3/mQHoPiDKUkkBJxXWslD331Ad2kkcImDn1A3lAzy+TSY2ClGIy4a1ZyL1woQvcy/GIoLmWwwVSvaHrPobVpAZ1+k5l8FDPowiPWPywMmsh2C7KHyHxnyTV3vARKmEpJU2vvFYWBPF+24/GV2a6fCzsmQ/gCETNL3PXcqWxDkw1omgrHkbyeEf/P6jtklhQPlhrHn0q1OaUNEe8STAk+pbzhb4dIICfXd+BrX/gRqDwgp67JiBijUe3j8fgOVJu09R/tA2XVF5zeMaSUqCArlr7EMZAOg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <0FD8C7DF6B794D4EBBD8ACF4FE39FD04@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5686
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT015.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	535ec197-2f4b-4c38-ddb0-08d866d0da20
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	y4hJvNPfuW+NVUwhwzEr0Dq/IJWhCfNf7HWNSIrEwy3Dgx1Zp7Ur7LKMQHHLLllGy/P4/Gt4+KrYZ6X1hAyJoqJSNwSCTcjE8XeWYLJh/3l3sv/dUPZoOUnirLaFxtJ4ZFV84NSZUMOTQWArz7koWPbEZL7Fw9RCVCTW/wDB0p1jsXk3KfYwAIvLhyQRb3yc0m33HaKt8soLSM3Aw8I1q6BSjkjMRtqCNpdG/lOG4XTnCCWBaaxkhm2V8pWqRxEirIwdrCg4zsEmqa61lEEzV6fbLvt6B1h+ykcsy/RNELhSLLYEjSbE1nuKUl2Dbi4Cbc0p8SwJ+YDwTby55MKYxKfE9qg3VJIlgH98LGIPsja4aVkdd02yX5R4OF5sLInXmP6I8HRS6DqNIf58oUDFsw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39850400004)(376002)(346002)(136003)(46966005)(356005)(316002)(86362001)(47076004)(83380400001)(336012)(26005)(82310400003)(82740400003)(478600001)(33656002)(81166007)(70206006)(6506007)(4326008)(6486002)(8936002)(2906002)(70586007)(6862004)(6512007)(5660300002)(8676002)(36756003)(53546011)(186003)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 12:44:26.6192
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9c53bfb8-051e-416e-fed0-08d866d0e5b8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT015.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3555


> On 2 Oct 2020, at 13:38, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 02.10.2020 14:34, Bertrand Marquis wrote:
>>> On 2 Oct 2020, at 13:12, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 02.10.2020 12:42, Bertrand Marquis wrote:
>>>> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
>>>> 
>>>> This removes the dependency on the xen subdirectory, preventing use of
>>>> a wrong configuration file when the xen subdirectory is duplicated for
>>>> compilation tests.
>>>> 
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> 
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>> 
>> Thanks :-)
>> 
>>> 
>>> (but more for the slight tidying than the purpose you name)
>> 
>> Feel free to remove the justification from the commit message if
>> you think it is not useful.
> 
> Oh, no, it's not like I consider it not useful. It shows how you
> arrived at making the change. It's just that I didn't consider
> making copies of xen/ something we mean to be supported. I wouldn't
> be surprised if it got broken again ...

Basically I do a “cp -rs” of the xen subdirectory, so that I can have
directories in which xen is compiled for x86, arm32 and arm64, and recompile
all of them quickly without having to go through distclean, config and make
each time, or modify the original tree.

If it gets broken I will fix it :-)

In the long term it is a step toward out-of-tree compilation (with some of
the changes already started dealing with links and other stuff).

Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:03:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:03:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2087.6198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKi7-0006hY-9r; Fri, 02 Oct 2020 13:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2087.6198; Fri, 02 Oct 2020 13:02:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKi7-0006hR-6J; Fri, 02 Oct 2020 13:02:55 +0000
Received: by outflank-mailman (input) for mailman id 2087;
 Fri, 02 Oct 2020 13:02:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z+Ma=DJ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kOKi5-0006hM-Ug
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:02:54 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c5e1490-a271-405f-98da-2d8d35fdad97;
 Fri, 02 Oct 2020 13:02:48 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id c18so1737327wrm.9
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:02:48 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id v17sm1807477wrc.23.2020.10.02.06.02.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:02:45 -0700 (PDT)
X-Inumbo-ID: 3c5e1490-a271-405f-98da-2d8d35fdad97
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=ZH0UeM3EQaOgODg8f5u4JykkLSw99ZDNsJ/LGTxMsA4=;
        b=WSBqSLyS29Z73Uw9yIPiz/9uRVkqEBxNG1QIwgn3TSiO6xBXzzZJJTJk50brQ+OXzd
         fSMJxFF9myx78gyhhKm1yfd4C7i1NYeRrSM/KjfbXaMMpDidKEXo7wDmgFOk4+iCYmSU
         4OgQVgIvy9pVPEW4ohUlXRKTIO1C9dyOG8rls=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=ZH0UeM3EQaOgODg8f5u4JykkLSw99ZDNsJ/LGTxMsA4=;
        b=rmNfmvTanyg8v5eaDID6kE4NAA7E6lHLAl21FBNlzOD0CCaP0oMSFSXSF/65oi1u8E
         vXpRmTXe9qda2RKYKAWPctAybnZkPvM7LvLjMV8JO4knoUIGyIxJmu7pkjgxN1MrPvDI
         oP5K3F72HMsJ5zGoEGQi12m9gjltFqzDJx2KEVNccLfxFp+uJPEG9yDgwmwtks9g0HNn
         nKI1+TpHHle344GubyHkHwdvt6rne3Uiq6nFk91D2lDGzzWRkoZhIZWdbsFzNquqzeqG
         H/lJQ5AcqVSL/FTgYvWe0RKCNf3gF6aNYmDdSLceYJKEHBR8WUtz79VIYreqK27kweQd
         hP6A==
X-Gm-Message-State: AOAM533vTEEYrd0R7N6wM/9uf4lQK9+soKRvkHuM5CgYnRXHTWtPxXwz
	jPXV8NHB2tga/EP3E1ttTtSAaA==
X-Google-Smtp-Source: ABdhPJwks2N4hhijJnxG0wAKt0M+00xtLsCvcKckl9YRlXeiIrk80ur/Elt/HK8bvnXgk8veVkRL2g==
X-Received: by 2002:adf:ec90:: with SMTP id z16mr2811904wrn.145.1601643766914;
        Fri, 02 Oct 2020 06:02:46 -0700 (PDT)
Date: Fri, 2 Oct 2020 15:02:42 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v3 3/7] drm/gem: Use struct dma_buf_map in GEM vmap ops
 and convert GEM backends
Message-ID: <20201002130242.GJ438822@phenom.ffwll.local>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-4-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200929151437.19717-4-tzimmermann@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Tue, Sep 29, 2020 at 05:14:33PM +0200, Thomas Zimmermann wrote:
> This patch replaces vmap/vunmap's use of raw pointers in GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well.
> 
> For most GEM backends, this simply changes the returned type. GEM VRAM
> helpers are also updated to indicate whether the returned framebuffer
> address is in system or I/O memory.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 14 ++--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |  4 +-
>  drivers/gpu/drm/ast/ast_cursor.c            | 29 +++----
>  drivers/gpu/drm/ast/ast_drv.h               |  7 +-
>  drivers/gpu/drm/drm_gem.c                   | 22 ++---
>  drivers/gpu/drm/drm_gem_cma_helper.c        | 14 ++--
>  drivers/gpu/drm/drm_gem_shmem_helper.c      | 48 ++++++-----
>  drivers/gpu/drm/drm_gem_vram_helper.c       | 90 +++++++++++----------
>  drivers/gpu/drm/etnaviv/etnaviv_drv.h       |  4 +-
>  drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 11 ++-
>  drivers/gpu/drm/exynos/exynos_drm_gem.c     |  6 +-
>  drivers/gpu/drm/exynos/exynos_drm_gem.h     |  4 +-
>  drivers/gpu/drm/lima/lima_gem.c             |  6 +-
>  drivers/gpu/drm/lima/lima_sched.c           | 11 ++-
>  drivers/gpu/drm/mgag200/mgag200_mode.c      | 12 +--
>  drivers/gpu/drm/nouveau/nouveau_gem.h       |  4 +-
>  drivers/gpu/drm/nouveau/nouveau_prime.c     |  9 ++-
>  drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 ++--
>  drivers/gpu/drm/qxl/qxl_display.c           | 13 +--
>  drivers/gpu/drm/qxl/qxl_draw.c              | 16 ++--
>  drivers/gpu/drm/qxl/qxl_drv.h               |  8 +-
>  drivers/gpu/drm/qxl/qxl_object.c            | 23 +++---
>  drivers/gpu/drm/qxl/qxl_object.h            |  2 +-
>  drivers/gpu/drm/qxl/qxl_prime.c             | 12 +--
>  drivers/gpu/drm/radeon/radeon_gem.c         |  4 +-
>  drivers/gpu/drm/radeon/radeon_prime.c       |  9 ++-
>  drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +++--
>  drivers/gpu/drm/rockchip/rockchip_drm_gem.h |  4 +-
>  drivers/gpu/drm/tiny/cirrus.c               | 10 ++-
>  drivers/gpu/drm/tiny/gm12u320.c             | 10 ++-
>  drivers/gpu/drm/udl/udl_modeset.c           |  8 +-
>  drivers/gpu/drm/vboxvideo/vbox_mode.c       | 11 ++-
>  drivers/gpu/drm/vc4/vc4_bo.c                |  6 +-
>  drivers/gpu/drm/vc4/vc4_drv.h               |  2 +-
>  drivers/gpu/drm/vgem/vgem_drv.c             | 16 ++--
>  drivers/gpu/drm/xen/xen_drm_front_gem.c     | 18 +++--
>  drivers/gpu/drm/xen/xen_drm_front_gem.h     |  6 +-
>  include/drm/drm_gem.h                       |  5 +-
>  include/drm/drm_gem_cma_helper.h            |  4 +-
>  include/drm/drm_gem_shmem_helper.h          |  4 +-
>  include/drm/drm_gem_vram_helper.h           |  4 +-
>  41 files changed, 304 insertions(+), 222 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..de7d0cfe1b93 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -44,13 +44,14 @@
>  /**
>   * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
>   * @obj: GEM BO
> + * @map: The virtual address of the mapping.
>   *
>   * Sets up an in-kernel virtual mapping of the BO's memory.
>   *
>   * Returns:
> - * The virtual address of the mapping or an error pointer.
> + * 0 on success, or a negative errno code otherwise.
>   */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> +int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
>  	int ret;
> @@ -58,19 +59,20 @@ void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
>  	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
>  			  &bo->dma_buf_vmap);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
> +	ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map);

I guess with the ttm_bo_vmap idea all the ttm changes here will look a bit
different.

>  
> -	return bo->dma_buf_vmap.virtual;
> +	return 0;
>  }
>  
>  /**
>   * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
>   * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> + * @map: Virtual address (unused)
>   *
>   * Tears down the in-kernel virtual mapping of the BO's memory.
>   */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
>  
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..622642793064 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
>  					    struct dma_buf *dma_buf);
>  bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
>  				      struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
>  			  struct vm_area_struct *vma);
>  
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..459a3774e4e1 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>  
>  	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
>  		gbo = ast->cursor.gbo[i];
> -		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> +		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
>  		drm_gem_vram_unpin(gbo);
>  		drm_gem_vram_put(gbo);
>  	}
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
>  	struct drm_device *dev = &ast->base;
>  	size_t size, i;
>  	struct drm_gem_vram_object *gbo;
> -	void __iomem *vaddr;
> +	struct dma_buf_map map;
>  	int ret;
>  
>  	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
>  			drm_gem_vram_put(gbo);
>  			goto err_drm_gem_vram_put;
>  		}
> -		vaddr = drm_gem_vram_vmap(gbo);
> -		if (IS_ERR(vaddr)) {
> -			ret = PTR_ERR(vaddr);
> +		ret = drm_gem_vram_vmap(gbo, &map);
> +		if (ret) {
>  			drm_gem_vram_unpin(gbo);
>  			drm_gem_vram_put(gbo);
>  			goto err_drm_gem_vram_put;
>  		}
>  
>  		ast->cursor.gbo[i] = gbo;
> -		ast->cursor.vaddr[i] = vaddr;
> +		ast->cursor.map[i] = map;
>  	}
>  
>  	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
>  	while (i) {
>  		--i;
>  		gbo = ast->cursor.gbo[i];
> -		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> +		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
>  		drm_gem_vram_unpin(gbo);
>  		drm_gem_vram_put(gbo);
>  	}
> @@ -170,8 +169,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
>  {
>  	struct drm_device *dev = &ast->base;
>  	struct drm_gem_vram_object *gbo;
> +	struct dma_buf_map map;
>  	int ret;
> -	void *src;
>  	void __iomem *dst;
>  
>  	if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) ||
> @@ -183,18 +182,16 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
>  	ret = drm_gem_vram_pin(gbo, 0);
>  	if (ret)
>  		return ret;
> -	src = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(src)) {
> -		ret = PTR_ERR(src);
> +	ret = drm_gem_vram_vmap(gbo, &map);
> +	if (ret)
>  		goto err_drm_gem_vram_unpin;
> -	}
>  
> -	dst = ast->cursor.vaddr[ast->cursor.next_index];
> +	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>  
>  	/* do data transfer to cursor BO */
> -	update_cursor_image(dst, src, fb->width, fb->height);
> +	update_cursor_image(dst, map.vaddr, fb->width, fb->height);

I don't think digging around in the pointer is a good idea, imo this
should get a 

	/* TODO: Use mapping abstraction properly */

or similar. Same for all the other uses of map.vaddr added to drivers
below (the stuff in helpers that the next patches will change again you
can, I think, leave as-is; it'll go away).

I'm also wondering whether we should prefix all members of struct
dma_buf_map with an underscore to make it clear they shouldn't be
touched directly, i.e. map._vaddr and map._is_iomem.

Also, all of these need a todo.rst entry; there's a lot of them from
looking through this patch.
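For reference, Documentation/gpu/todo.rst entries follow a simple
pattern (the wording below is just a hypothetical sketch, not proposed
text):

	Convert drivers to use struct dma_buf_map helpers
	-------------------------------------------------

	Several drivers still dereference struct dma_buf_map members
	directly instead of going through the mapping abstraction.
	They should be converted to the helper functions.

	Contact: (maintainer)

	Level: Intermediate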

>  
> -	drm_gem_vram_vunmap(gbo, src);
> +	drm_gem_vram_vunmap(gbo, &map);
>  	drm_gem_vram_unpin(gbo);
>  
>  	return 0;
> @@ -257,7 +254,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
>  	u8 __iomem *sig;
>  	u8 jreg;
>  
> -	dst = ast->cursor.vaddr[ast->cursor.next_index];
> +	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>  
>  	sig = dst + AST_HWC_SIZE;
>  	writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
>  #ifndef __AST_DRV_H__
>  #define __AST_DRV_H__
>  
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
>  #include <linux/i2c.h>
>  #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>  
>  #include <drm/drm_connector.h>
>  #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>  
>  	struct {
>  		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> -		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> +		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
>  		unsigned int next_index;
>  	} cursor;
>  
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..0c4a66dea5c2 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1207,26 +1207,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>  
>  void *drm_gem_vmap(struct drm_gem_object *obj)
>  {
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	if (obj->funcs->vmap)
> -		vaddr = obj->funcs->vmap(obj);
> -	else
> -		vaddr = ERR_PTR(-EOPNOTSUPP);
> +	if (!obj->funcs->vmap)
> +		return ERR_PTR(-EOPNOTSUPP);
>  
> -	if (!vaddr)
> -		vaddr = ERR_PTR(-ENOMEM);
> +	ret = obj->funcs->vmap(obj, &map);
> +	if (ret)
> +		return ERR_PTR(ret);
> +	else if (dma_buf_map_is_null(&map))
> +		return ERR_PTR(-ENOMEM);
>  
> -	return vaddr;
> +	return map.vaddr;
>  }
>  
>  void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
>  {
> +	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
>  	if (!vaddr)
>  		return;
>  
>  	if (obj->funcs->vunmap)
> -		obj->funcs->vunmap(obj, vaddr);
> +		obj->funcs->vunmap(obj, &map);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 2165633c9b9e..e87cd36518d3 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>   * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
>   *     address space
>   * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + *       store.
>   *
>   * This function maps a buffer exported via DRM PRIME into the kernel's
>   * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>   * driver's &drm_gem_object_funcs.vmap callback.
>   *
>   * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
>   */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>  
> -	return cma_obj->vaddr;
> +	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> +	return 0;
>  }
>  EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>  
> @@ -541,14 +545,14 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>   * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
>   *     address space
>   * @obj: GEM object
> - * @vaddr: kernel virtual address where the CMA GEM object was mapped
> + * @map: Kernel virtual address where the CMA GEM object was mapped
>   *
>   * This function removes a buffer exported via DRM PRIME from the kernel's
>   * virtual address space. This is a no-op because CMA buffers cannot be
>   * unmapped from kernel space. Drivers using the CMA helpers should set this
>   * as their &drm_gem_object_funcs.vunmap callback.
>   */
> -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	/* Nothing to do */
>  }
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_unpin);
>  
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
> -	struct dma_buf_map map;
>  	int ret = 0;
>  
> -	if (shmem->vmap_use_count++ > 0)
> -		return shmem->vaddr;
> +	if (shmem->vmap_use_count++ > 0) {
> +		dma_buf_map_set_vaddr(map, shmem->vaddr);
> +		return 0;
> +	}
>  
>  	if (obj->import_attach) {
> -		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> -		if (!ret)
> -			shmem->vaddr = map.vaddr;
> +		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> +		if (!ret) {
> +			if (WARN_ON(map->is_iomem)) {
> +				ret = -EIO;
> +				goto err_put_pages;
> +			}
> +			shmem->vaddr = map->vaddr;
> +		}
>  	} else {
>  		pgprot_t prot = PAGE_KERNEL;
>  
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  				    VM_MAP, prot);
>  		if (!shmem->vaddr)
>  			ret = -ENOMEM;
> +		else
> +			dma_buf_map_set_vaddr(map, shmem->vaddr);
>  	}
>  
>  	if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  		goto err_put_pages;
>  	}
>  
> -	return shmem->vaddr;
> +	return 0;
>  
>  err_put_pages:
>  	if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>  err_zero_use:
>  	shmem->vmap_use_count = 0;
>  
> -	return ERR_PTR(ret);
> +	return ret;
>  }
>  
>  /*
>   * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
>   * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + *       store.
>   *
>   * This function makes sure that a contiguous kernel virtual address mapping
>   * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>   * Returns:
>   * 0 on success or a negative error code on failure.
>   */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> -	void *vaddr;
>  	int ret;
>  
>  	ret = mutex_lock_interruptible(&shmem->vmap_lock);
>  	if (ret)
> -		return ERR_PTR(ret);
> -	vaddr = drm_gem_shmem_vmap_locked(shmem);
> +		return ret;
> +	ret = drm_gem_shmem_vmap_locked(shmem, map);
>  	mutex_unlock(&shmem->vmap_lock);
>  
> -	return vaddr;
> +	return ret;
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_vmap);
>  
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> +					struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = &shmem->base;
> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>  
>  	if (WARN_ON_ONCE(!shmem->vmap_use_count))
>  		return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>  		return;
>  
>  	if (obj->import_attach)
> -		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> +		dma_buf_vunmap(obj->import_attach->dmabuf, map);
>  	else
>  		vunmap(shmem->vaddr);
>  
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>  /*
>   * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object
>   * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
>   *
>   * This function cleans up a kernel virtual address mapping acquired by
>   * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>   * also be called by drivers directly, in which case it will hide the
>   * differences between dma-buf imported and natively allocated objects.
>   */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>  
>  	mutex_lock(&shmem->vmap_lock);
> -	drm_gem_shmem_vunmap_locked(shmem);
> +	drm_gem_shmem_vunmap_locked(shmem, map);
>  	mutex_unlock(&shmem->vmap_lock);
>  }
>  EXPORT_SYMBOL(drm_gem_shmem_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 256b346664f2..6a5b932e0d06 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0-or-later
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/module.h>
>  
>  #include <drm/drm_debugfs.h>
> @@ -382,11 +383,11 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
>  }
>  EXPORT_SYMBOL(drm_gem_vram_unpin);
>  
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> +				    struct dma_buf_map *map)
>  {
>  	int ret;
>  	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> -	bool is_iomem;
>  
>  	if (gbo->kmap_use_count > 0)
>  		goto out;
> @@ -396,17 +397,30 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
>  
>  	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>  
>  out:
> -	if (!kmap->virtual)
> -		return NULL; /* not mapped; don't increment ref */
> +	if (!kmap->virtual) {
> +		dma_buf_map_clear(map);
> +		return 0; /* not mapped; don't increment ref */
> +	}
>  	++gbo->kmap_use_count;
> -	return ttm_kmap_obj_virtual(kmap, &is_iomem);
> +	ttm_kmap_obj_to_dma_buf_map(kmap, map);
> +	return 0;
>  }
>  
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> +				       struct dma_buf_map *map)
>  {
> +	struct drm_device *dev = gbo->bo.base.dev;
> +	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> +	struct dma_buf_map kmap_map;
> +
> +	ttm_kmap_obj_to_dma_buf_map(kmap, &kmap_map);
> +
> +	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&kmap_map, map)))
> +		return; /* BUG: map not mapped from this BO */
> +
>  	if (WARN_ON_ONCE(!gbo->kmap_use_count))
>  		return;
>  	if (--gbo->kmap_use_count > 0)
> @@ -423,7 +437,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
>  /**
>   * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
>   *                       space
> - * @gbo:	The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + *       store.
>   *
>   * The vmap function pins a GEM VRAM object to its current location, either
>   * system or video memory, and maps its buffer into kernel address space.
> @@ -432,48 +448,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
>   * unmap and unpin the GEM VRAM object.
>   *
>   * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
>   */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
>  {
>  	int ret;
> -	void *base;
>  
>  	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>  
>  	ret = drm_gem_vram_pin_locked(gbo, 0);
>  	if (ret)
>  		goto err_ttm_bo_unreserve;
> -	base = drm_gem_vram_kmap_locked(gbo);
> -	if (IS_ERR(base)) {
> -		ret = PTR_ERR(base);
> +	ret = drm_gem_vram_kmap_locked(gbo, map);
> +	if (ret)
>  		goto err_drm_gem_vram_unpin_locked;
> -	}
>  
>  	ttm_bo_unreserve(&gbo->bo);
>  
> -	return base;
> +	return 0;
>  
>  err_drm_gem_vram_unpin_locked:
>  	drm_gem_vram_unpin_locked(gbo);
>  err_ttm_bo_unreserve:
>  	ttm_bo_unreserve(&gbo->bo);
> -	return ERR_PTR(ret);
> +	return ret;
>  }
>  EXPORT_SYMBOL(drm_gem_vram_vmap);
>  
>  /**
>   * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo:	The GEM VRAM object to unmap
> - * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
>   *
>   * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
>   * the documentation for drm_gem_vram_vmap() for more information.
>   */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
>  {
>  	int ret;
>  
> @@ -481,7 +493,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
>  	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
>  		return;
>  
> -	drm_gem_vram_kunmap_locked(gbo);
> +	drm_gem_vram_kunmap_locked(gbo, map);
>  	drm_gem_vram_unpin_locked(gbo);
>  
>  	ttm_bo_unreserve(&gbo->bo);
> @@ -829,37 +841,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
>  }
>  
>  /**
> - * drm_gem_vram_object_vmap() - \
> -	Implements &struct drm_gem_object_funcs.vmap
> - * @gem:	The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + *	Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + *       store.
>   *
>   * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
>   */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
>  {
>  	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> -	void *base;
>  
> -	base = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(base))
> -		return NULL;
> -	return base;
> +	return drm_gem_vram_vmap(gbo, map);
>  }
>  
>  /**
> - * drm_gem_vram_object_vunmap() - \
> -	Implements &struct drm_gem_object_funcs.vunmap
> - * @gem:	The GEM object to unmap
> - * @vaddr:	The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + *	Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
>   */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> -				       void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
>  {
>  	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>  
> -	drm_gem_vram_vunmap(gbo, vaddr);
> +	drm_gem_vram_vunmap(gbo, map);
>  }
>  
>  /*
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 914f0867ff71..3d1eb8065fce 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,8 +51,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
>  int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>  int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
>  struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
>  			   struct vm_area_struct *vma);
>  struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 135fbff6fecf..36c03e287e29 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,12 +22,17 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
>  	return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
>  }
>  
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
> -	return etnaviv_gem_vmap(obj);
> +	void *vaddr = etnaviv_gem_vmap(obj);
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>  }
>  
> -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	/* TODO msm_gem_vunmap() */
>  }
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index e7a6eb96f692..2c74e06669fa 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -471,12 +471,12 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
>  	return &exynos_gem->base;
>  }
>  
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
> +int exynos_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
> -	return NULL;
> +	return -ENOMEM;
>  }
>  
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	/* Nothing to do */
>  }

Might want to just start out with a patch that deletes these; we
generally don't keep dummy functions around.

> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> index 74e926abeff0..ecfd048fd91d 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> @@ -107,8 +107,8 @@ struct drm_gem_object *
>  exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
>  				     struct dma_buf_attachment *attach,
>  				     struct sg_table *sgt);
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int exynos_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
>  			      struct vm_area_struct *vma);
>  
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
>  	return drm_gem_shmem_pin(obj);
>  }
>  
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct lima_bo *bo = to_lima_bo(obj);
>  
>  	if (bo->heap_size)
> -		return ERR_PTR(-EINVAL);
> +		return -EINVAL;
>  
> -	return drm_gem_shmem_vmap(obj);
> +	return drm_gem_shmem_vmap(obj, map);
>  }
>  
>  static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0 OR MIT
>  /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/kthread.h>
>  #include <linux/slab.h>
>  #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
>  	struct lima_dump_chunk_buffer *buffer_chunk;
>  	u32 size, task_size, mem_size;
>  	int i;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	mutex_lock(&dev->error_task_list_lock);
>  
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
>  		} else {
>  			buffer_chunk->size = lima_bo_size(bo);
>  
> -			data = drm_gem_shmem_vmap(&bo->base.base);
> -			if (IS_ERR_OR_NULL(data)) {
> +			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> +			if (ret) {
>  				kvfree(et);
>  				goto out;
>  			}
>  
> -			memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> +			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>  
> -			drm_gem_shmem_vunmap(&bo->base.base, data);
> +			drm_gem_shmem_vunmap(&bo->base.base, &map);
>  		}
>  
>  		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..ae4c8cb33fae 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
>   */
>  
>  #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>  
>  #include <drm/drm_atomic_helper.h>
>  #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,16 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
>  		      struct drm_rect *clip)
>  {
>  	struct drm_device *dev = &mdev->base;
> -	void *vmap;
> +	struct dma_buf_map map;
> +	int ret;
>  
> -	vmap = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (drm_WARN_ON(dev, !vmap))
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (drm_WARN_ON(dev, ret))
>  		return; /* BUG: SHMEM BO should always be vmapped */
>  
> -	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
> +	drm_fb_memcpy_dstclip(mdev->vram, map.vaddr, fb, clip);
>  
> -	drm_gem_shmem_vunmap(fb->obj[0], vmap);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>  
>  	/* Always scanout image at VRAM offset 0 */
>  	mgag200_set_startadd(mdev, (u32)0);
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..e780b6b1763d 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,7 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
>  extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
>  extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
>  	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
> +extern int nouveau_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +extern void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..75e973a5675a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,7 +35,7 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
>  	return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
>  }
>  
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> +int nouveau_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
>  	int ret;
> @@ -43,12 +43,13 @@ void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
>  	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
>  			  &nvbo->dma_buf_vmap);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
> +	ttm_kmap_obj_to_dma_buf_map(&nvbo->dma_buf_vmap, map);
>  
> -	return nvbo->dma_buf_vmap.virtual;
> +	return 0;
>  }
>  
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
>  
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
>  #include <drm/drm_gem_shmem_helper.h>
>  #include <drm/panfrost_drm.h>
>  #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
>  #include <linux/iopoll.h>
>  #include <linux/pm_runtime.h>
>  #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>  {
>  	struct panfrost_file_priv *user = file_priv->driver_priv;
>  	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> +	struct dma_buf_map map;
>  	struct drm_gem_shmem_object *bo;
>  	u32 cfg, as;
>  	int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>  		goto err_close_bo;
>  	}
>  
> -	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> -	if (IS_ERR(perfcnt->buf)) {
> -		ret = PTR_ERR(perfcnt->buf);
> +	ret = drm_gem_shmem_vmap(&bo->base, &map);
> +	if (ret)
>  		goto err_put_mapping;
> -	}
> +	perfcnt->buf = map.vaddr;
>  
>  	/*
>  	 * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>  	return 0;
>  
>  err_vunmap:
> -	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> +	drm_gem_shmem_vunmap(&bo->base, &map);
>  err_put_mapping:
>  	panfrost_gem_mapping_put(perfcnt->mapping);
>  err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
>  {
>  	struct panfrost_file_priv *user = file_priv->driver_priv;
>  	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> +	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>  
>  	if (user != perfcnt->user)
>  		return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
>  		  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>  
>  	perfcnt->user = NULL;
> -	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> +	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
>  	perfcnt->buf = NULL;
>  	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
>  	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 6063f3a15329..ed0d22fa0161 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>  
>  #include <linux/crc32.h>
>  #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>  
>  #include <drm/drm_drv.h>
>  #include <drm/drm_atomic.h>
> @@ -581,7 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>  	struct drm_gem_object *obj;
>  	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
>  	int ret;
> -	void *user_ptr;
> +	struct dma_buf_map user_map;
> +	struct dma_buf_map cursor_map;
>  	int size = 64*64*4;
>  
>  	ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
> @@ -595,7 +597,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>  		user_bo = gem_to_qxl_bo(obj);
>  
>  		/* pinning is done in the prepare/cleanup framevbuffer */
> -		ret = qxl_bo_kmap(user_bo, &user_ptr);
> +		ret = qxl_bo_kmap(user_bo, &user_map);
>  		if (ret)
>  			goto out_free_release;
>  
> @@ -613,7 +615,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>  		if (ret)
>  			goto out_unpin;
>  
> -		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> +		ret = qxl_bo_kmap(cursor_bo, &cursor_map);
>  		if (ret)
>  			goto out_backoff;
>  
> @@ -627,7 +629,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>  		cursor->chunk.next_chunk = 0;
>  		cursor->chunk.prev_chunk = 0;
>  		cursor->chunk.data_size = size;
> -		memcpy(cursor->chunk.data, user_ptr, size);
> +		memcpy(cursor->chunk.data, user_map.vaddr, size);
>  		qxl_bo_kunmap(cursor_bo);
>  		qxl_bo_kunmap(user_bo);
>  
> @@ -1138,6 +1140,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
>  {
>  	int ret;
>  	struct drm_gem_object *gobj;
> +	struct dma_buf_map map;
>  	int monitors_config_size = sizeof(struct qxl_monitors_config) +
>  		qxl_num_crtc * sizeof(struct qxl_head);
>  
> @@ -1154,7 +1157,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
>  	if (ret)
>  		return ret;
>  
> -	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> +	qxl_bo_kmap(qdev->monitors_config_bo, &map);
>  
>  	qdev->monitors_config = qdev->monitors_config_bo->kptr;
>  	qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..1bf4f465ecf4 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
>   * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
>   */
>  
> +#include <linux/dma-buf-map.h>
> +
>  #include <drm/drm_fourcc.h>
>  
>  #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
>  					      unsigned int num_clips,
>  					      struct qxl_bo *clips_bo)
>  {
> +	struct dma_buf_map map;
>  	struct qxl_clip_rects *dev_clips;
>  	int ret;
>  
> -	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> -	if (ret) {
> +	ret = qxl_bo_kmap(clips_bo, &map);
> +	if (ret)
>  		return NULL;
> -	}
> +
> +	dev_clips = map.vaddr;
>  	dev_clips->num_rects = num_clips;
>  	dev_clips->chunk.next_chunk = 0;
>  	dev_clips->chunk.prev_chunk = 0;
> @@ -142,7 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
>  	int stride = fb->pitches[0];
>  	/* depth is not actually interesting, we don't mask with it */
>  	int depth = fb->format->cpp[0] * 8;
> -	uint8_t *surface_base;
> +	struct dma_buf_map surface_map;
>  	struct qxl_release *release;
>  	struct qxl_bo *clips_bo;
>  	struct qxl_drm_image *dimage;
> @@ -197,11 +201,11 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
>  	if (ret)
>  		goto out_release_backoff;
>  
> -	ret = qxl_bo_kmap(bo, (void **)&surface_base);
> +	ret = qxl_bo_kmap(bo, &surface_map);
>  	if (ret)
>  		goto out_release_backoff;
>  
> -	ret = qxl_image_init(qdev, release, dimage, surface_base,
> +	ret = qxl_image_init(qdev, release, dimage, surface_map.vaddr,
>  			     left - dumb_shadow_offset,
>  			     top, width, height, depth, stride);
>  	qxl_bo_kunmap(bo);
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..a9e9da4f4605 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -50,6 +50,8 @@
>  
>  #include "qxl_dev.h"
>  
> +struct dma_buf_map;
> +
>  #define DRIVER_AUTHOR		"Dave Airlie"
>  
>  #define DRIVER_NAME		"qxl"
> @@ -335,7 +337,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
>  void qxl_gem_object_close(struct drm_gem_object *obj,
>  			  struct drm_file *file_priv);
>  void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>  
>  /* qxl_dumb.c */
>  int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +446,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
>  struct drm_gem_object *qxl_gem_prime_import_sg_table(
>  	struct drm_device *dev, struct dma_buf_attachment *attach,
>  	struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> +			  struct dma_buf_map *map);
>  int qxl_gem_prime_mmap(struct drm_gem_object *obj,
>  				struct vm_area_struct *vma);
>  
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index d3635e3e3267..2d8ae3b10b1c 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
>   *          Alon Levy
>   */
>  
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
>  #include "qxl_drv.h"
>  #include "qxl_object.h"
>  
> -#include <linux/io-mapping.h>
>  static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
>  {
>  	struct qxl_bo *bo;
> @@ -150,24 +152,22 @@ int qxl_bo_create(struct qxl_device *qdev,
>  	return 0;
>  }
>  
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
>  {
> -	bool is_iomem;
>  	int r;
>  
>  	if (bo->kptr) {
> -		if (ptr)
> -			*ptr = bo->kptr;
>  		bo->map_count++;
> -		return 0;
> +		goto out;
>  	}
>  	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
>  	if (r)
>  		return r;
> -	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> -	if (ptr)
> -		*ptr = bo->kptr;
>  	bo->map_count = 1;
> +	bo->kptr = bo->kmap.virtual;
> +
> +out:
> +	ttm_kmap_obj_to_dma_buf_map(&bo->kmap, map);
>  	return 0;
>  }
>  
> @@ -178,6 +178,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
>  	void *rptr;
>  	int ret;
>  	struct io_mapping *map;
> +	struct dma_buf_map bo_map;
>  
>  	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
>  		map = qdev->vram_mapping;
> @@ -194,11 +195,11 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,

Uh, this fallback is wild. Not exactly sure this is a good idea or
anything, but also it's here already :-)

>  		return rptr;
>  	}
>  
> -	ret = qxl_bo_kmap(bo, &rptr);
> +	ret = qxl_bo_kmap(bo, &bo_map);
>  	if (ret)
>  		return NULL;
>  
> -	rptr += page_offset * PAGE_SIZE;
> +	rptr = bo_map.vaddr + page_offset * PAGE_SIZE;
>  	return rptr;
>  }
>  
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
>  			 bool kernel, bool pinned, u32 domain,
>  			 struct qxl_surface *surf,
>  			 struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
>  extern void qxl_bo_kunmap(struct qxl_bo *bo);
>  void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
>  void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
>  	return ERR_PTR(-ENOSYS);
>  }
>  
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct qxl_bo *bo = gem_to_qxl_bo(obj);
> -	void *ptr;
>  	int ret;
>  
> -	ret = qxl_bo_kmap(bo, &ptr);
> +	ret = qxl_bo_kmap(bo, map);
>  	if (ret < 0)
> -		return ERR_PTR(ret);
> +		return ret;
>  
> -	return ptr;
> +	return 0;
>  }
>  
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> +			  struct dma_buf_map *map)
>  {
>  	struct qxl_bo *bo = gem_to_qxl_bo(obj);
>  
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..ac51517bdfcd 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -40,8 +40,8 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
>  struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
>  int radeon_gem_prime_pin(struct drm_gem_object *obj);
>  void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int radeon_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void radeon_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>  
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..a1a358de5448 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,7 +39,7 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
>  	return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
>  }
>  
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> +int radeon_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct radeon_bo *bo = gem_to_radeon_bo(obj);
>  	int ret;
> @@ -47,12 +47,13 @@ void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
>  	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
>  			  &bo->dma_buf_vmap);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
> +	ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map);
>  
> -	return bo->dma_buf_vmap.virtual;
> +	return 0;
>  }
>  
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void radeon_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct radeon_bo *bo = gem_to_radeon_bo(obj);
>  
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
>  	return ERR_PTR(ret);
>  }
>  
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>  
> -	if (rk_obj->pages)
> -		return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> -			    pgprot_writecombine(PAGE_KERNEL));
> +	if (rk_obj->pages) {
> +		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> +				  pgprot_writecombine(PAGE_KERNEL));
> +		if (!vaddr)
> +			return -ENOMEM;
> +		dma_buf_map_set_vaddr(map, vaddr);
> +		return 0;
> +	}
>  
>  	if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> -		return NULL;
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>  
> -	return rk_obj->kvaddr;
> +	return 0;
>  }
>  
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>  
>  	if (rk_obj->pages) {
> -		vunmap(vaddr);
> +		vunmap(map->vaddr);
>  		return;
>  	}
>  
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
>  rockchip_gem_prime_import_sg_table(struct drm_device *dev,
>  				   struct dma_buf_attachment *attach,
>  				   struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  /* drm driver mmap file operations */
>  int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..6dc013f4b236 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
>   */
>  
>  #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
>  #include <linux/module.h>
>  #include <linux/pci.h>
>  
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>  			       struct drm_rect *rect)
>  {
>  	struct cirrus_device *cirrus = to_cirrus(fb->dev);
> +	struct dma_buf_map map;
>  	void *vmap;
>  	int idx, ret;
>  
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>  	if (!drm_dev_enter(&cirrus->dev, &idx))
>  		goto out;
>  
> -	ret = -ENOMEM;
> -	vmap = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (!vmap)
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret)
>  		goto out_dev_exit;
> +	vmap = map.vaddr;
>  
>  	if (cirrus->cpp == fb->format->cpp[0])
>  		drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>  	else
>  		WARN_ON_ONCE("cpp mismatch");
>  
> -	drm_gem_shmem_vunmap(fb->obj[0], vmap);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>  	ret = 0;
>  
>  out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..5865027a1667 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>  {
>  	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
>  	struct drm_framebuffer *fb;
> +	struct dma_buf_map map;
>  	void *vaddr;
>  	u8 *src;
>  
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>  	y1 = gm12u320->fb_update.rect.y1;
>  	y2 = gm12u320->fb_update.rect.y2;
>  
> -	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (IS_ERR(vaddr)) {
> -		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret) {
> +		GM12U320_ERR("failed to vmap fb: %d\n", ret);
>  		goto put_fb;
>  	}
> +	vaddr = map.vaddr;
>  
>  	if (fb->obj[0]->import_attach) {
>  		ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>  			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
>  	}
>  vunmap:
> -	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>  put_fb:
>  	drm_framebuffer_put(fb);
>  	gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..9c8ace1aa647 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>  	struct urb *urb;
>  	struct drm_rect clip;
>  	int log_bpp;
> +	struct dma_buf_map map;
>  	void *vaddr;
>  
>  	ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>  			return ret;
>  	}
>  
> -	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (IS_ERR(vaddr)) {
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret) {
>  		DRM_ERROR("failed to vmap fb\n");
>  		goto out_dma_buf_end_cpu_access;
>  	}
> +	vaddr = map.vaddr;
>  
>  	urb = udl_get_urb(dev);
>  	if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>  	ret = 0;
>  
>  out_drm_gem_shmem_vunmap:
> -	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>  out_dma_buf_end_cpu_access:
>  	if (import_attach) {
>  		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 4fcc0a542b8a..6040b9ec747f 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
>   *          Michael Thayer <michael.thayer@oracle.com,
>   *          Hans de Goede <hdegoede@redhat.com>
>   */
> +
> +#include <linux/dma-buf-map.h>
>  #include <linux/export.h>
>  
>  #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>  	u32 height = plane->state->crtc_h;
>  	size_t data_size, mask_size;
>  	u32 flags;
> +	struct dma_buf_map map;
> +	int ret;
>  	u8 *src;
>  
>  	/*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>  
>  	vbox_crtc->cursor_enabled = true;
>  
> -	src = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(src)) {
> +	ret = drm_gem_vram_vmap(gbo, &map);
> +	if (ret) {
>  		/*
>  		 * BUG: we should have pinned the BO in prepare_fb().
>  		 */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>  		DRM_WARN("Could not map cursor bo, skipping update\n");
>  		return;
>  	}

I don't think digging around in the pointer is a good idea, imo this
should get a 

	/* FIXME: Use mapping abstraction properly */

or similar.

> +	src = map.vaddr;
>  
>  	/*
>  	 * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>  	data_size = width * height * 4 + mask_size;
>  
>  	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> -	drm_gem_vram_vunmap(gbo, src);
> +	drm_gem_vram_vunmap(gbo, &map);
>  
>  	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
>  		VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index f432278173cd..250266fb437e 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -786,16 +786,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>  	return drm_gem_cma_prime_mmap(obj, vma);
>  }
>  
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct vc4_bo *bo = to_vc4_bo(obj);
>  
>  	if (bo->validated_shader) {
>  		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> -		return ERR_PTR(-EINVAL);
> +		return -EINVAL;
>  	}
>  
> -	return drm_gem_cma_prime_vmap(obj);
> +	return drm_gem_cma_prime_vmap(obj, map);
>  }
>  
>  struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index a22478a35199..6af453c84777 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -804,7 +804,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>  struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
>  						 struct dma_buf_attachment *attach,
>  						 struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  int vc4_bo_cache_init(struct drm_device *dev);
>  void vc4_bo_cache_destroy(struct drm_device *dev);
>  int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
>  	return &obj->base;
>  }
>  
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>  	long n_pages = obj->size >> PAGE_SHIFT;
>  	struct page **pages;
> +	void *vaddr;
>  
>  	pages = vgem_pin_pages(bo);
>  	if (IS_ERR(pages))
> -		return NULL;
> +		return PTR_ERR(pages);
> +
> +	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
>  
> -	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> +	return 0;
>  }
>  
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
>  	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>  
> -	vunmap(vaddr);
> +	vunmap(map->vaddr);
>  	vgem_unpin_pages(bo);
>  }
>  
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>  	return gem_mmap_obj(xen_obj, vma);
>  }
>  
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
>  {
>  	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> +	void *vaddr;
>  
>  	if (!xen_obj->pages)
> -		return NULL;
> +		return -ENOMEM;
>  
>  	/* Please see comment in gem_mmap_obj on mapping and attributes. */
> -	return vmap(xen_obj->pages, xen_obj->num_pages,
> -		    VM_MAP, PAGE_KERNEL);
> +	vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> +		     VM_MAP, PAGE_KERNEL);
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>  }
>  
>  void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> -				    void *vaddr)
> +				    struct dma_buf_map *map)
>  {
> -	vunmap(vaddr);
> +	vunmap(map->vaddr);
>  }
>  
>  int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
>  #define __XEN_DRM_FRONT_GEM_H
>  
>  struct dma_buf_attachment;
> +struct dma_buf_map;
>  struct drm_device;
>  struct drm_gem_object;
>  struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>  
>  int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>  
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> +				 struct dma_buf_map *map);
>  
>  void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> -				    void *vaddr);
> +				    struct dma_buf_map *map);
>  
>  int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
>  				 struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>  
>  #include <drm/drm_vma_manager.h>
>  
> +struct dma_buf_map;
>  struct drm_gem_object;
>  
>  /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
>  	 *
>  	 * This callback is optional.
>  	 */
> -	void *(*vmap)(struct drm_gem_object *obj);
> +	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  	/**
>  	 * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
>  	 *
>  	 * This callback is optional.
>  	 */
> -	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> +	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  	/**
>  	 * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index 2bfa2502607a..34a7f72879c5 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,8 +103,8 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>  				  struct sg_table *sgt);
>  int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>  			   struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  struct drm_gem_object *
>  drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
>  void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
>  int drm_gem_shmem_pin(struct drm_gem_object *obj);
>  void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>  
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..0c43b8f17ee9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
>  s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
>  int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
>  int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>  
>  int drm_gem_vram_fill_create_dumb(struct drm_file *file,
>  				  struct drm_device *dev,
> -- 
> 2.28.0

Bit of a big patch, I can't think of a way to split it up either.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:04:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:04:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2089.6209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKjw-0006rM-QU; Fri, 02 Oct 2020 13:04:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2089.6209; Fri, 02 Oct 2020 13:04:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKjw-0006rF-NZ; Fri, 02 Oct 2020 13:04:48 +0000
Received: by outflank-mailman (input) for mailman id 2089;
 Fri, 02 Oct 2020 13:04:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z+Ma=DJ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kOKju-0006r9-Up
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:04:46 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c661e77-5e9a-45c1-891b-33f8b3ffb6d3;
 Fri, 02 Oct 2020 13:04:45 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id t10so1775955wrv.1
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:04:45 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id d13sm1695267wrp.44.2020.10.02.06.04.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:04:43 -0700 (PDT)
X-Inumbo-ID: 4c661e77-5e9a-45c1-891b-33f8b3ffb6d3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=BRdG+Sv9lbvsBV80MeJ5uYgZhZMNFbaqhIia52Zq8gE=;
        b=ld4XU9dC3LDxekGn7GamKDrYPL7jSfU1sRt03W6y6ZG7Aj0ZYozjOKbQG2CqnDAQ+a
         Wm85y42hH9jUGUKeb3Qq0XMhu73F73vVo9TsKURGxDLRrOQhbtc2wTcM+8/StKpZWy9v
         y7aSSQFwbsgDSwiglVqPKSeUy1LEYeYYNmeDw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=BRdG+Sv9lbvsBV80MeJ5uYgZhZMNFbaqhIia52Zq8gE=;
        b=tDcF/sukv+Xs8dwA4JCAWIpp79QQ2p/Au/12McGJF1s4j/sqVs7hmQk1Ldpkmjyp8B
         AiK3/9kDIjQuxqZkNBF77xLKpSh9e/rl6pZaFojLrVc2MjSzDZPQkFuL20PDx+5XG+8U
         EWivGVxoTC4unjvPN8nFc4h02G12h9vVEfrs8SjgNH2XPr1YPNTHteFnY/wQLZiqxzla
         s25cKuXTT3kMB9EERoP7+bMuu1xloCXEEaOvDS+WoX8gJtZLC/CSKfn9oFECcaTw70c4
         yCaDPMDAriKy2aaRBLI9RO5+1reN6bjN4d67cqYJ4KPU2hs1beF3Z7Z0iYwsi/HHnMSw
         eYsA==
X-Gm-Message-State: AOAM533ui1cdt/Zowc0EKqTBhS9LCJS5QvoQ6IkfJaXhmB8zXgYEXAwt
	Lgpf2vzqsBdQq99g2+GPshuVZA==
X-Google-Smtp-Source: ABdhPJyzB9ctK1WKL4a99n8wsCE482hZW17HPxIJTgRxw4A/5rqlGoYgwqEr/r2ZqiiE1CWrWADjIQ==
X-Received: by 2002:adf:e8c3:: with SMTP id k3mr3054113wrn.228.1601643884841;
        Fri, 02 Oct 2020 06:04:44 -0700 (PDT)
Date: Fri, 2 Oct 2020 15:04:40 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v3 4/7] drm/gem: Update internal GEM vmap/vunmap
 interfaces to use struct dma_buf_map
Message-ID: <20201002130440.GK438822@phenom.ffwll.local>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-5-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200929151437.19717-5-tzimmermann@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Tue, Sep 29, 2020 at 05:14:34PM +0200, Thomas Zimmermann wrote:
> GEM's vmap and vunmap interfaces now wrap memory pointers in struct
> dma_buf_map.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_client.c   | 18 +++++++++++-------
>  drivers/gpu/drm/drm_gem.c      | 28 ++++++++++++++--------------
>  drivers/gpu/drm/drm_internal.h |  5 +++--
>  drivers/gpu/drm/drm_prime.c    | 14 ++++----------
>  4 files changed, 32 insertions(+), 33 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> index 495f47d23d87..ac0082bed966 100644
> --- a/drivers/gpu/drm/drm_client.c
> +++ b/drivers/gpu/drm/drm_client.c
> @@ -3,6 +3,7 @@
>   * Copyright 2018 Noralf Trønnes
>   */
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/list.h>
>  #include <linux/module.h>
>  #include <linux/mutex.h>
> @@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
>   */
>  void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  {
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	if (buffer->vaddr)
>  		return buffer->vaddr;
> @@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  	 * fd_install step out of the driver backend hooks, to make that
>  	 * final step optional for internal users.
>  	 */
> -	vaddr = drm_gem_vmap(buffer->gem);
> -	if (IS_ERR(vaddr))
> -		return vaddr;
> +	ret = drm_gem_vmap(buffer->gem, &map);
> +	if (ret)
> +		return ERR_PTR(ret);
>  
> -	buffer->vaddr = vaddr;
> +	buffer->vaddr = map.vaddr;
>  
> -	return vaddr;
> +	return map.vaddr;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vmap);
>  
> @@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
>   */
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
>  {
> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> +	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
> +
> +	drm_gem_vunmap(buffer->gem, &map);
>  	buffer->vaddr = NULL;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 0c4a66dea5c2..f2b2f37d41c4 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -1205,32 +1205,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>  		obj->funcs->unpin(obj);
>  }
>  
> -void *drm_gem_vmap(struct drm_gem_object *obj)
> +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
> -	struct dma_buf_map map;
>  	int ret;
>  
> -	if (!obj->funcs->vmap) {
> -		return ERR_PTR(-EOPNOTSUPP);
> +	if (!obj->funcs->vmap)
> +		return -EOPNOTSUPP;
>  
> -	ret = obj->funcs->vmap(obj, &map);
> +	ret = obj->funcs->vmap(obj, map);
>  	if (ret)
> -		return ERR_PTR(ret);
> -	else if (dma_buf_map_is_null(&map))
> -		return ERR_PTR(-ENOMEM);
> +		return ret;
> +	else if (dma_buf_map_is_null(map))
> +		return -ENOMEM;
>  
> -	return map.vaddr;
> +	return 0;
>  }
>  
> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>  {
> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> -
> -	if (!vaddr)
> +	if (dma_buf_map_is_null(map))
>  		return;
>  
>  	if (obj->funcs->vunmap)
> -		obj->funcs->vunmap(obj, &map);
> +		obj->funcs->vunmap(obj, map);
> +
> +	/* Always set the mapping to NULL. Callers may rely on this. */
> +	dma_buf_map_clear(map);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
> index b65865c630b0..58832d75a9bd 100644
> --- a/drivers/gpu/drm/drm_internal.h
> +++ b/drivers/gpu/drm/drm_internal.h
> @@ -33,6 +33,7 @@
>  
>  struct dentry;
>  struct dma_buf;
> +struct dma_buf_map;
>  struct drm_connector;
>  struct drm_crtc;
>  struct drm_framebuffer;
> @@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
>  
>  int drm_gem_pin(struct drm_gem_object *obj);
>  void drm_gem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_vmap(struct drm_gem_object *obj);
> -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>  
>  /* drm_debugfs.c drm_debugfs_crc.c */
>  #if defined(CONFIG_DEBUG_FS)
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 89e2a2496734..cb8fbeeb731b 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
>   *
>   * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
>   * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
> + * The kernel virtual address is returned in @map.
>   *
> - * Returns the kernel virtual address or NULL on failure.
> + * Returns 0 on success or a negative errno code otherwise.
>   */
>  int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
> -	void *vaddr;
>  
> -	vaddr = drm_gem_vmap(obj);
> -	if (IS_ERR(vaddr))
> -		return PTR_ERR(vaddr);
> -
> -	dma_buf_map_set_vaddr(map, vaddr);
> -
> -	return 0;
> +	return drm_gem_vmap(obj, map);
>  }
>  EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
>  
> @@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
>  {
>  	struct drm_gem_object *obj = dma_buf->priv;
>  
> -	drm_gem_vunmap(obj, map->vaddr);
> +	drm_gem_vunmap(obj, map);
>  }
>  EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);

Some of the transitional stuff disappearing!
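For context, the new struct dma_buf_map bundles the mapping address with a flag recording whether it points into system or I/O memory, so one type can replace the bare void * that drm_gem_vmap() used to return. A simplified userspace model of that idea (an illustrative sketch with hypothetical buf_map names, not the actual linux/dma-buf-map.h header):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the kernel's struct dma_buf_map: a union of
 * the two possible address kinds, tagged by is_iomem. In the kernel the
 * I/O variant additionally carries the __iomem annotation. */
struct buf_map {
	union {
		void *vaddr;        /* valid when !is_iomem */
		void *vaddr_iomem;  /* valid when is_iomem */
	};
	bool is_iomem;
};

/* Record a system-memory mapping, mirroring dma_buf_map_set_vaddr(). */
void buf_map_set_vaddr(struct buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* True when no mapping is recorded, mirroring dma_buf_map_is_null(). */
bool buf_map_is_null(const struct buf_map *map)
{
	return map->vaddr == NULL;
}

/* Reset the mapping, mirroring dma_buf_map_clear(); the reworked
 * drm_gem_vunmap() performs such a clear so callers always see a
 * NULL mapping afterwards. */
void buf_map_clear(struct buf_map *map)
{
	map->vaddr = NULL;
	map->is_iomem = false;
}
```

The point of the tag is that callers branch once on is_iomem and then use the matching accessor, instead of passing untyped pointers around.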

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>  
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:05:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2091.6221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKku-0006yN-51; Fri, 02 Oct 2020 13:05:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2091.6221; Fri, 02 Oct 2020 13:05:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKku-0006yG-1w; Fri, 02 Oct 2020 13:05:48 +0000
Received: by outflank-mailman (input) for mailman id 2091;
 Fri, 02 Oct 2020 13:05:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z+Ma=DJ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kOKks-0006y5-04
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:05:46 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 714462d7-2a41-4bb4-9076-cb6df4b48436;
 Fri, 02 Oct 2020 13:05:44 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id e16so1774738wrm.2
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:05:44 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id k6sm1698980wmi.1.2020.10.02.06.05.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:05:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=dmSoMZsVp3YAqhv8Y0loIpdOiydVu0bIAY3p4JFgF7A=;
        b=AV57jYtbiDDUj/wY9l7Fv3SBOn5+BEvUjclpKKiUzjamkoeuUdu5XUIQeXdtqVoe4u
         RUgbNoQ1FIg3QVAEHmAugBg7ut2EtX2DV4qkpq1+xaxQlR3yGEkNqq77ciVXDZ0nWwIz
         iazTIBuZLCMcEaA0D/hmMTAcA7+HAkojRZOhY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=dmSoMZsVp3YAqhv8Y0loIpdOiydVu0bIAY3p4JFgF7A=;
        b=X+3s8eYNUIh/W6QRkMwhP34AANV/8JoenFNoVY84SfteVEIU/RQol6TzXDtNLu4w+f
         +engieg/4VSvY1PC0vkvqr4RBBymm4lvtN1NLaw+ocdglQPrhWbvlf6vQ5ZNHIughlSK
         eOhPCEr6Pyao4zBP9a46dmm9G7kBhN/yT+0wIHkGSFH9MG6D+StpA54l3gMSRVvTw/wd
         zRdR2A7BeRo2mHg7TfN9bSzCYb47bfs0Nb9qtehV+h7PhHe/Z4weZaAZPfsNdBnMQ+rE
         9aaAn4YWrZ2chag8lTqY81rAYyTlsnkDAkJkOcnW/bBwglw02CZsSoDqzBC5BrRp3Nz7
         bAow==
X-Gm-Message-State: AOAM532jamdvJW43k1DUtjTO4SUIk9Aqk4QsgKmrQTSbHMjmjf4nHy12
	6Lf4GCD2sdGNOQH/wK3wyLoNew==
X-Google-Smtp-Source: ABdhPJw4B8Uh8JocUse1fCeaSMDHBFwcJ3sqMM0TRtXtxZYpHdNTQf6MuSSoFxFHMqa33V2JAFy1OA==
X-Received: by 2002:a5d:4910:: with SMTP id x16mr3248620wrq.204.1601643943480;
        Fri, 02 Oct 2020 06:05:43 -0700 (PDT)
Date: Fri, 2 Oct 2020 15:05:39 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v3 5/7] drm/gem: Store client buffer mappings as struct
 dma_buf_map
Message-ID: <20201002130539.GL438822@phenom.ffwll.local>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-6-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200929151437.19717-6-tzimmermann@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Tue, Sep 29, 2020 at 05:14:35PM +0200, Thomas Zimmermann wrote:
> Kernel DRM clients now store their framebuffer address in an instance
> of struct dma_buf_map. Depending on the buffer's location, the address
> refers to system or I/O memory.
> 
> Callers of drm_client_buffer_vmap() receive a copy of the mapping in
> the caller-supplied argument. It can be accessed and modified with the
> dma_buf_map interfaces.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
>  drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
>  include/drm/drm_client.h        |  7 ++++---
>  3 files changed, 38 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> index ac0082bed966..fe573acf1067 100644
> --- a/drivers/gpu/drm/drm_client.c
> +++ b/drivers/gpu/drm/drm_client.c
> @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
>  {
>  	struct drm_device *dev = buffer->client->dev;
>  
> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> +	drm_gem_vunmap(buffer->gem, &buffer->map);
>  
>  	if (buffer->gem)
>  		drm_gem_object_put(buffer->gem);
> @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
>  /**
>   * drm_client_buffer_vmap - Map DRM client buffer into address space
>   * @buffer: DRM client buffer
> + * @map_copy: Returns the mapped memory's address
>   *
>   * This function maps a client buffer into kernel address space. If the
> - * buffer is already mapped, it returns the mapping's address.
> + * buffer is already mapped, it returns the existing mapping's address.
>   *
>   * Client buffer mappings are not ref'counted. Each call to
>   * drm_client_buffer_vmap() should be followed by a call to
>   * drm_client_buffer_vunmap(); or the client buffer should be mapped
>   * throughout its lifetime.
>   *
> + * The returned address is a copy of the internal value. In contrast to
> + * other vmap interfaces, you don't need it for the client's vunmap
> + * function. So you can modify it at will during blit and draw operations.
> + *
>   * Returns:
> - *	The mapped memory's address
> + *	0 on success, or a negative errno code otherwise.
>   */
> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> +int
> +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
>  {
> -	struct dma_buf_map map;
> +	struct dma_buf_map *map = &buffer->map;
>  	int ret;
>  
> -	if (buffer->vaddr)
> -		return buffer->vaddr;
> +	if (dma_buf_map_is_set(map))
> +		goto out;
>  
>  	/*
>  	 * FIXME: The dependency on GEM here isn't required, we could
> @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  	 * fd_install step out of the driver backend hooks, to make that
>  	 * final step optional for internal users.
>  	 */
> -	ret = drm_gem_vmap(buffer->gem, &map);
> +	ret = drm_gem_vmap(buffer->gem, map);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>  
> -	buffer->vaddr = map.vaddr;
> +out:
> +	*map_copy = *map;
>  
> -	return map.vaddr;
> +	return 0;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vmap);
>  
> @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
>   */
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
>  {
> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
> +	struct dma_buf_map *map = &buffer->map;
>  
> -	drm_gem_vunmap(buffer->gem, &map);
> -	buffer->vaddr = NULL;
> +	drm_gem_vunmap(buffer->gem, map);
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
>  
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 8697554ccd41..343a292f2c7c 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -394,7 +394,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->vaddr + offset;
> +	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> @@ -416,7 +416,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  	struct drm_clip_rect *clip = &helper->dirty_clip;
>  	struct drm_clip_rect clip_copy;
>  	unsigned long flags;
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	spin_lock_irqsave(&helper->dirty_lock, flags);
>  	clip_copy = *clip;
> @@ -429,8 +430,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  
>  		/* Generic fbdev uses a shadow buffer */
>  		if (helper->buffer) {
> -			vaddr = drm_client_buffer_vmap(helper->buffer);
> -			if (IS_ERR(vaddr))
> +			ret = drm_client_buffer_vmap(helper->buffer, &map);
> +			if (ret)
>  				return;
>  			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>  		}
> @@ -2076,7 +2077,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>  	struct drm_framebuffer *fb;
>  	struct fb_info *fbi;
>  	u32 format;
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
>  		    sizes->surface_width, sizes->surface_height,
> @@ -2112,11 +2114,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>  		fb_deferred_io_init(fbi);
>  	} else {
>  		/* buffer is mapped for HW framebuffer */
> -		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
> -		if (IS_ERR(vaddr))
> -			return PTR_ERR(vaddr);
> +		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
> +		if (ret)
> +			return ret;
> +		if (map.is_iomem)
> +			fbi->screen_base = map.vaddr_iomem;
> +		else
> +			fbi->screen_buffer = map.vaddr;
>  
> -		fbi->screen_buffer = vaddr;
>  		/* Shamelessly leak the physical address to user-space */
>  #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
>  		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
> index 7aaea665bfc2..f07f2fb02e75 100644
> --- a/include/drm/drm_client.h
> +++ b/include/drm/drm_client.h
> @@ -3,6 +3,7 @@
>  #ifndef _DRM_CLIENT_H_
>  #define _DRM_CLIENT_H_
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/lockdep.h>
>  #include <linux/mutex.h>
>  #include <linux/types.h>
> @@ -141,9 +142,9 @@ struct drm_client_buffer {
>  	struct drm_gem_object *gem;
>  
>  	/**
> -	 * @vaddr: Virtual address for the buffer
> +	 * @map: Virtual address for the buffer
>  	 */
> -	void *vaddr;
> +	struct dma_buf_map map;
>  
>  	/**
>  	 * @fb: DRM framebuffer
> @@ -155,7 +156,7 @@ struct drm_client_buffer *
>  drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
>  
>  int drm_client_modeset_create(struct drm_client_dev *client);

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
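The behavioral detail worth noting in this patch: drm_client_buffer_vmap() now fills a caller-provided copy of the buffer's cached mapping, so the caller may freely advance its copy during blit and draw operations while vunmap still operates on the buffer's own value. A standalone toy model of that pattern (hypothetical names, not the DRM code itself):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the drm_client_buffer_vmap() pattern: the
 * buffer caches its mapping, and vmap hands the caller a copy of it. */
struct map {
	void *vaddr;
	bool is_iomem;
};

struct client_buffer {
	struct map map;    /* cached mapping, used by vunmap */
	char backing[64];  /* stands in for the GEM object's memory */
};

/* Map the buffer if needed and return a copy of the cached mapping. */
int client_buffer_vmap(struct client_buffer *buf, struct map *map_copy)
{
	if (!buf->map.vaddr)           /* not mapped yet */
		buf->map.vaddr = buf->backing;
	*map_copy = buf->map;          /* caller gets a private copy */
	return 0;
}

/* Unmap using the cached value; the caller's copies are irrelevant. */
void client_buffer_vunmap(struct client_buffer *buf)
{
	buf->map.vaddr = NULL;
}
```

Because the copy is independent, a blit loop can bump map_copy.vaddr per scanline without corrupting the address that vunmap will later use.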

> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:17:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:17:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2104.6238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKwa-0007zk-Ae; Fri, 02 Oct 2020 13:17:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2104.6238; Fri, 02 Oct 2020 13:17:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOKwa-0007zd-7T; Fri, 02 Oct 2020 13:17:52 +0000
Received: by outflank-mailman (input) for mailman id 2104;
 Fri, 02 Oct 2020 13:17:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOKwY-0007zY-QM
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:17:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e54f8b33-d105-456f-b109-3305d53996be;
 Fri, 02 Oct 2020 13:17:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 524BBB1AD;
 Fri,  2 Oct 2020 13:17:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601644668;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PX4dwCD9Cc3WL0Wjr7dU6lXpu5u9Qb4TT68DJnkaZbc=;
	b=V0+tjxSVIVqqoAj7wBjEwYP5sN4T/KmyWwNn84xihdfYaoB6TN8T0VEY3VokpwTzN3PI/O
	NnGwlQtTaOGfDkyYEBpDY3W04WMlrSYCdLq9zdxgYT//bbR1etVC1eyfdMGkA6FesvH0Tz
	O3Cm9C+/DrcQ0HHRNK5jCFo9NGre6iU=
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>
References: <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
 <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
 <31FC9BB1-F4C4-4203-94C1-1134607E49C2@arm.com>
 <e5da46d7-07ee-84b8-fbd8-e2c246c014de@suse.com>
 <547D8B47-C521-4F43-976F-D1723470AD3C@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c09ae112-a94a-7b3c-f31a-46acc5098d5c@suse.com>
Date: Fri, 2 Oct 2020 15:17:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <547D8B47-C521-4F43-976F-D1723470AD3C@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.2020 14:44, Bertrand Marquis wrote:
> 
> 
>> On 2 Oct 2020, at 13:38, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 02.10.2020 14:34, Bertrand Marquis wrote:
>>>> On 2 Oct 2020, at 13:12, Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 02.10.2020 12:42, Bertrand Marquis wrote:
>>>>> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
>>>>>
>>>>> This removes the dependency on the xen subdirectory, preventing the use
>>>>> of a wrong configuration file when the xen subdirectory is duplicated for
>>>>> compilation tests.
>>>>>
>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>
>>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> Thanks :-)
>>>
>>>>
>>>> (but more for the slight tidying than the purpose you name)
>>>
>>> Feel free to remove the justification from the commit message if
>>> you think it is not useful.
>>
>> Oh, no, it's not like I consider it not useful. It shows how you
>> arrived at making the change. It's just that I didn't consider
>> making copies of xen/ something we mean to be supported. I wouldn't
>> be surprised if it got broken again ...
> 
> basically I do a “cp -rs” of the xen subdirectory so that I can have directories
> in which xen is compiled for x86, arm32 and arm64, and recompile all of them
> quickly without having to go through distclean, config and make each time, or
> modify the original tree.

But then you must have adjustments also in the top level makefile,
such that besides "make xen", "make xen-xyz" also works.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:22:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:22:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2108.6256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOL1J-0000SX-FP; Fri, 02 Oct 2020 13:22:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2108.6256; Fri, 02 Oct 2020 13:22:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOL1J-0000SJ-A6; Fri, 02 Oct 2020 13:22:45 +0000
Received: by outflank-mailman (input) for mailman id 2108;
 Fri, 02 Oct 2020 13:19:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
 id 1kOKy8-00088z-3H
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:19:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5995f4e2-e520-477e-a19e-092aae43c9b8;
 Fri, 02 Oct 2020 13:19:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2E567AEF5;
 Fri,  2 Oct 2020 13:19:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601644766;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dXMHbVb6pgN544nxYenHHL6okl1GWDOP/Q//L6bG8YE=;
	b=aVfeYjVsEAey7qf6Ex3QHusanTZYPmzDsjnCo7TrMCRYIJ5KC3AfGMHH9a4LEDOf16/V1B
	Tj2FscxbH8Jro5Mj811UIxo+NCihU8j+Cu5ZAVUjd7kJnPdMpQBL3ARs4+b6GNQApReUkf
	1nvxW1E7gsIjlG9xufW/BUyvmVHoMsk=
Date: Fri, 2 Oct 2020 15:19:24 +0200
From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Oscar Salvador <osalvador@suse.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Dave Hansen <dave.hansen@intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Mike Rapoport <rppt@kernel.org>,
	Scott Cheloha <cheloha@linux.ibm.com>,
	Michael Ellerman <mpe@ellerman.id.au>
Subject: Re: [PATCH v1 2/5] mm/page_alloc: place pages to tail in
 __putback_isolated_page()
Message-ID: <20201002131924.GH4555@dhcp22.suse.cz>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-3-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200928182110.7050-3-david@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon 28-09-20 20:21:07, David Hildenbrand wrote:
> __putback_isolated_page() already documents that pages will be placed to
> the tail of the freelist - this is, however, not the case for
> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
> the case for all existing users.
> 
> This change affects two users:
> - free page reporting
> - page isolation, when undoing the isolation (including memory onlining).
> 
> This behavior is desirable for pages that haven't really been touched
> lately, i.e. exactly the two users that don't actually read/write page
> content, but rather move untouched pages.
> 
> The new behavior is especially desirable for memory onlining, where we
> allow allocation of newly onlined pages via undo_isolate_page_range()
> in online_pages(). Right now, we always place them to the head of the
> free list, resulting in undesirable behavior: Assume we add
> individual memory chunks via add_memory() and online them right away to
> the NORMAL zone. We create a dependency chain of unmovable allocations
> e.g., via the memmap. The memmap of the next chunk will be placed onto
> previous chunks - if the last block cannot get offlined+removed, all
> dependent ones cannot get offlined+removed. While this can already be
> observed with individual DIMMs, it's more of an issue for virtio-mem
> (and I suspect also ppc DLPAR).
> 
> Document that this should only be used for optimizations, and no code
> should rely on this for correctness (in case the order of the free page
> lists ever changes).
> 
> We won't care about page shuffling: memory onlining already properly
> shuffles after onlining. Free page reporting doesn't care about
> physically contiguous ranges, and there are already cases where page
> isolation will simply move (physically close) free pages to (currently)
> the head of the freelists via move_freepages_block() instead of
> shuffling. If this becomes ever relevant, we should shuffle the whole
> zone when undoing isolation of larger ranges, and after
> free_contig_range().
> 
> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Scott Cheloha <cheloha@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 18 ++++++++++++++++--
>  1 file changed, 16 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index daab90e960fe..9e3ed4a6f69a 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -89,6 +89,18 @@ typedef int __bitwise fop_t;
>   */
>  #define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
>  
> +/*
> + * Place the (possibly merged) page to the tail of the freelist. Will ignore
> + * page shuffling (relevant code - e.g., memory onlining - is expected to
> + * shuffle the whole zone).
> + *
> + * Note: No code should rely on this flag for correctness - it's purely
> + *       to allow for optimizations when handing back either fresh pages
> + *       (memory onlining) or untouched pages (page isolation, free page
> + *       reporting).
> + */
> +#define FOP_TO_TAIL		((__force fop_t)BIT(1))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION	(8)
> @@ -1038,7 +1050,9 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>  done_merging:
>  	set_page_order(page, order);
>  
> -	if (is_shuffle_order(order))
> +	if (fop_flags & FOP_TO_TAIL)
> +		to_tail = true;
> +	else if (is_shuffle_order(order))
>  		to_tail = shuffle_pick_tail();
>  	else
>  		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
> @@ -3300,7 +3314,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>  
>  	/* Return isolated page to tail of freelist. */
>  	__free_one_page(page, page_to_pfn(page), zone, order, mt,
> -			FOP_SKIP_REPORT_NOTIFY);
> +			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
>  }
>  
>  /*
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs
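[Editorial note: the FOP_TO_TAIL precedence added by the hunk above (the flag wins over shuffling, which wins over the buddy-merge heuristic) can be sketched in plain user-space C. This is an illustrative stub, not kernel code: `is_shuffle_order()`, `shuffle_pick_tail()` and `buddy_merge_likely()` are faked, and `SHUFFLE_ORDER` is a made-up threshold.]

```c
/* User-space sketch of the to_tail decision in __free_one_page() after
 * this patch. The kernel's __bitwise annotation is a sparse-only check,
 * so fop_t degrades to a plain int here. */
#include <assert.h>
#include <stdbool.h>

typedef int fop_t;
#define FOP_SKIP_REPORT_NOTIFY  ((fop_t)(1 << 0))
#define FOP_TO_TAIL             ((fop_t)(1 << 1))

#define SHUFFLE_ORDER 10  /* hypothetical stand-in for the shuffle threshold */

static bool is_shuffle_order(unsigned int order) { return order >= SHUFFLE_ORDER; }
static bool shuffle_pick_tail(void) { return false; }  /* stub: really randomized */
static bool buddy_merge_likely(void) { return false; } /* stub: really a heuristic */

/* FOP_TO_TAIL takes priority over both shuffling and the merge heuristic. */
static bool pick_to_tail(fop_t fop_flags, unsigned int order)
{
	if (fop_flags & FOP_TO_TAIL)
		return true;
	else if (is_shuffle_order(order))
		return shuffle_pick_tail();
	return buddy_merge_likely();
}
```

[The point of the ordering: a caller that explicitly asks for tail placement (page isolation undo, memory onlining) must not be overridden by randomized shuffling.]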


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:22:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:22:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2105.6249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOL1J-0000Rw-3g; Fri, 02 Oct 2020 13:22:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2105.6249; Fri, 02 Oct 2020 13:22:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOL1J-0000Rp-0Y; Fri, 02 Oct 2020 13:22:45 +0000
Received: by outflank-mailman (input) for mailman id 2105;
 Fri, 02 Oct 2020 13:17:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
 id 1kOKwb-0007zP-5v
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:17:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb73828b-6662-410b-b684-640228c415e3;
 Fri, 02 Oct 2020 13:17:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 09108ACAD;
 Fri,  2 Oct 2020 13:17:45 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
	id 1kOKwb-0007zP-5v
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:17:53 +0000
X-Inumbo-ID: bb73828b-6662-410b-b684-640228c415e3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bb73828b-6662-410b-b684-640228c415e3;
	Fri, 02 Oct 2020 13:17:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601644665;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=CU5N1SfvQtRpkg+8Zog93ExPg0p+BzvwAJwHCdS+2JY=;
	b=FiceQ6Kkjz7QkfPSMSOPtuyU7GKljTHYOsd1o/wNW28hnm2YsFn/XKHbTzs7kr+fWv9n8d
	Oq4lBN4dyXC+WH92QTQt0Mm9h1WVkRPZovHwQIkeec1BNFgADdysKCnh2T56obsi8aM666
	OfC+ku5GWdMUWqNHGo+M0w1MMBJ+bas=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 09108ACAD;
	Fri,  2 Oct 2020 13:17:45 +0000 (UTC)
Date: Fri, 2 Oct 2020 15:17:43 +0200
From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Oscar Salvador <osalvador@suse.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Dave Hansen <dave.hansen@intel.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Mike Rapoport <rppt@kernel.org>
Subject: Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of
 __free_one_page() to a proper flag
Message-ID: <20201002131743.GG4555@dhcp22.suse.cz>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-2-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200928182110.7050-2-david@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon 28-09-20 20:21:06, David Hildenbrand wrote:
> Let's prepare for additional flags and avoid long parameter lists of bools.
> Follow-up patches will also make use of the flags in __free_pages_ok();
> however, I wasn't able to come up with a better name for the type - it
> should be good enough for internal purposes.
> 
> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Hopefully this will not wreck the generated code. But considering we
would need another parameter, there is not much choice left.

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 28 ++++++++++++++++++++--------
>  1 file changed, 20 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df90e3654f97..daab90e960fe 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -77,6 +77,18 @@
>  #include "shuffle.h"
>  #include "page_reporting.h"
>  
> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
> +typedef int __bitwise fop_t;
> +
> +/* No special request */
> +#define FOP_NONE		((__force fop_t)0)
> +
> +/*
> + * Skip free page reporting notification for the (possibly merged) page. (will
> + * *not* mark the page reported, only skip the notification).
> + */
> +#define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION	(8)
> @@ -948,10 +960,9 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
>   * -- nyc
>   */
>  
> -static inline void __free_one_page(struct page *page,
> -		unsigned long pfn,
> -		struct zone *zone, unsigned int order,
> -		int migratetype, bool report)
> +static inline void __free_one_page(struct page *page, unsigned long pfn,
> +				   struct zone *zone, unsigned int order,
> +				   int migratetype, fop_t fop_flags)
>  {
>  	struct capture_control *capc = task_capc(zone);
>  	unsigned long buddy_pfn;
> @@ -1038,7 +1049,7 @@ static inline void __free_one_page(struct page *page,
>  		add_to_free_list(page, zone, order, migratetype);
>  
>  	/* Notify page reporting subsystem of freed page */
> -	if (report)
> +	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
>  		page_reporting_notify_free(order);
>  }
>  
> @@ -1379,7 +1390,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  		if (unlikely(isolated_pageblocks))
>  			mt = get_pageblock_migratetype(page);
>  
> -		__free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
> +		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FOP_NONE);
>  		trace_mm_page_pcpu_drain(page, 0, mt);
>  	}
>  	spin_unlock(&zone->lock);
> @@ -1395,7 +1406,7 @@ static void free_one_page(struct zone *zone,
>  		is_migrate_isolate(migratetype))) {
>  		migratetype = get_pfnblock_migratetype(page, pfn);
>  	}
> -	__free_one_page(page, pfn, zone, order, migratetype, true);
> +	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
>  	spin_unlock(&zone->lock);
>  }
>  
> @@ -3288,7 +3299,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>  	lockdep_assert_held(&zone->lock);
>  
>  	/* Return isolated page to tail of freelist. */
> -	__free_one_page(page, page_to_pfn(page), zone, order, mt, false);
> +	__free_one_page(page, page_to_pfn(page), zone, order, mt,
> +			FOP_SKIP_REPORT_NOTIFY);
>  }
>  
>  /*
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs
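[Editorial note: the bool-to-typed-flags conversion in the patch above is a common C pattern; here is a minimal user-space sketch under stated assumptions - the freeing logic is reduced to just the reporting hook, and `reports` is a made-up counter standing in for the page reporting subsystem.]

```c
/* Sketch of replacing "bool report" with a fop_t flags word. Passing
 * FOP_NONE keeps the old report=true behavior; FOP_SKIP_REPORT_NOTIFY
 * suppresses only the notification. */
#include <assert.h>

typedef int fop_t;
#define FOP_NONE                ((fop_t)0)
#define FOP_SKIP_REPORT_NOTIFY  ((fop_t)(1 << 0))

static int reports;  /* counts notifications, for illustration only */

static void page_reporting_notify_free(unsigned int order)
{
	(void)order;
	reports++;
}

/* Was: free_one_page(..., bool report). A flags word keeps call sites
 * self-documenting and leaves room for more flags (e.g. FOP_TO_TAIL). */
static void free_one_page(unsigned int order, fop_t fop_flags)
{
	/* ... merge into buddy, add to the freelist ... */
	if (!(fop_flags & FOP_SKIP_REPORT_NOTIFY))
		page_reporting_notify_free(order);
}
```

[Compared with a growing list of bools, `free_one_page(order, FOP_NONE)` reads unambiguously at the call site, which is the motivation the commit message gives.]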


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:24:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:24:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2111.6274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOL2f-0000hl-2B; Fri, 02 Oct 2020 13:24:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2111.6274; Fri, 02 Oct 2020 13:24:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOL2e-0000he-U2; Fri, 02 Oct 2020 13:24:08 +0000
Received: by outflank-mailman (input) for mailman id 2111;
 Fri, 02 Oct 2020 13:24:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
 id 1kOL2e-0000hZ-0x
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:24:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f509b698-bf6e-4429-a7cc-5b1421b3d3d7;
 Fri, 02 Oct 2020 13:24:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3A021AD1B;
 Fri,  2 Oct 2020 13:24:05 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
	id 1kOL2e-0000hZ-0x
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:24:08 +0000
X-Inumbo-ID: f509b698-bf6e-4429-a7cc-5b1421b3d3d7
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f509b698-bf6e-4429-a7cc-5b1421b3d3d7;
	Fri, 02 Oct 2020 13:24:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601645045;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=SHCZQQnCgrWRgQEkle9YXyRy7OdPc64LZr3cWnoNdJc=;
	b=Luf6Auunkbx5fBqPV97ffK3OdeZkpz/ly5jWOF7KyyetRTTl0tHL9c4OOQtgL7oJJlF45Y
	YEYwgkmH5oy7tRQOvB1vtvGozU6cW7b0IYGvMyQ1H7axTEbrQU2O/T7YuPsAakX2XMTi21
	f/QwyNhqMXIZaJ6nlhuX3SOiaoeqPKE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 3A021AD1B;
	Fri,  2 Oct 2020 13:24:05 +0000 (UTC)
Date: Fri, 2 Oct 2020 15:24:04 +0200
From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Oscar Salvador <osalvador@suse.de>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Dave Hansen <dave.hansen@intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Mike Rapoport <rppt@kernel.org>,
	Scott Cheloha <cheloha@linux.ibm.com>,
	Michael Ellerman <mpe@ellerman.id.au>
Subject: Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of
 the freelist in unset_migratetype_isolate()
Message-ID: <20201002132404.GI4555@dhcp22.suse.cz>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-4-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200928182110.7050-4-david@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
> Page isolation doesn't actually touch the pages, it simply isolates
> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
> 
> We already place pages to the tail of the freelists when undoing
> isolation via __putback_isolated_page(), let's do it in any case
> (e.g., if order <= pageblock_order) and document the behavior.
> 
> Add a "to_tail" parameter to move_freepages_block() but introduce a
> new move_to_free_list_tail() - similar to add_to_free_list_tail().
> 
> This change results in all pages onlined via online_pages() being
> placed at the tail of the freelist.

Is there anything preventing us from doing this unconditionally? Or, in
other words, do any of the existing callers of move_freepages_block
benefit from adding to the head?

> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Scott Cheloha <cheloha@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  include/linux/page-isolation.h |  4 ++--
>  mm/page_alloc.c                | 35 +++++++++++++++++++++++-----------
>  mm/page_isolation.c            | 12 +++++++++---
>  3 files changed, 35 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/page-isolation.h b/include/linux/page-isolation.h
> index 572458016331..3eca9b3c5305 100644
> --- a/include/linux/page-isolation.h
> +++ b/include/linux/page-isolation.h
> @@ -36,8 +36,8 @@ static inline bool is_migrate_isolate(int migratetype)
>  struct page *has_unmovable_pages(struct zone *zone, struct page *page,
>  				 int migratetype, int flags);
>  void set_pageblock_migratetype(struct page *page, int migratetype);
> -int move_freepages_block(struct zone *zone, struct page *page,
> -				int migratetype, int *num_movable);
> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
> +			 bool to_tail, int *num_movable);
>  
>  /*
>   * Changes migrate type in [start_pfn, end_pfn) to be MIGRATE_ISOLATE.
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 9e3ed4a6f69a..d5a5f528b8ca 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -905,6 +905,15 @@ static inline void move_to_free_list(struct page *page, struct zone *zone,
>  	list_move(&page->lru, &area->free_list[migratetype]);
>  }
>  
> +/* Used for pages which are on another list */
> +static inline void move_to_free_list_tail(struct page *page, struct zone *zone,
> +					  unsigned int order, int migratetype)
> +{
> +	struct free_area *area = &zone->free_area[order];
> +
> +	list_move_tail(&page->lru, &area->free_list[migratetype]);
> +}
> +
>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
>  					   unsigned int order)
>  {
> @@ -2338,9 +2347,9 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>   * Note that start_page and end_pages are not aligned on a pageblock
>   * boundary. If alignment is required, use move_freepages_block()
>   */
> -static int move_freepages(struct zone *zone,
> -			  struct page *start_page, struct page *end_page,
> -			  int migratetype, int *num_movable)
> +static int move_freepages(struct zone *zone, struct page *start_page,
> +			  struct page *end_page, int migratetype,
> +			  bool to_tail, int *num_movable)
>  {
>  	struct page *page;
>  	unsigned int order;
> @@ -2371,7 +2380,10 @@ static int move_freepages(struct zone *zone,
>  		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
>  
>  		order = page_order(page);
> -		move_to_free_list(page, zone, order, migratetype);
> +		if (to_tail)
> +			move_to_free_list_tail(page, zone, order, migratetype);
> +		else
> +			move_to_free_list(page, zone, order, migratetype);
>  		page += 1 << order;
>  		pages_moved += 1 << order;
>  	}
> @@ -2379,8 +2391,8 @@ static int move_freepages(struct zone *zone,
>  	return pages_moved;
>  }
>  
> -int move_freepages_block(struct zone *zone, struct page *page,
> -				int migratetype, int *num_movable)
> +int move_freepages_block(struct zone *zone, struct page *page, int migratetype,
> +			 bool to_tail, int *num_movable)
>  {
>  	unsigned long start_pfn, end_pfn;
>  	struct page *start_page, *end_page;
> @@ -2401,7 +2413,7 @@ int move_freepages_block(struct zone *zone, struct page *page,
>  		return 0;
>  
>  	return move_freepages(zone, start_page, end_page, migratetype,
> -								num_movable);
> +			      to_tail, num_movable);
>  }
>  
>  static void change_pageblock_range(struct page *pageblock_page,
> @@ -2526,8 +2538,8 @@ static void steal_suitable_fallback(struct zone *zone, struct page *page,
>  	if (!whole_block)
>  		goto single_page;
>  
> -	free_pages = move_freepages_block(zone, page, start_type,
> -						&movable_pages);
> +	free_pages = move_freepages_block(zone, page, start_type, false,
> +					  &movable_pages);
>  	/*
>  	 * Determine how many pages are compatible with our allocation.
>  	 * For movable allocation, it's the number of movable pages which
> @@ -2635,7 +2647,8 @@ static void reserve_highatomic_pageblock(struct page *page, struct zone *zone,
>  	    && !is_migrate_cma(mt)) {
>  		zone->nr_reserved_highatomic += pageblock_nr_pages;
>  		set_pageblock_migratetype(page, MIGRATE_HIGHATOMIC);
> -		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, NULL);
> +		move_freepages_block(zone, page, MIGRATE_HIGHATOMIC, false,
> +				     NULL);
>  	}
>  
>  out_unlock:
> @@ -2711,7 +2724,7 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
>  			 */
>  			set_pageblock_migratetype(page, ac->migratetype);
>  			ret = move_freepages_block(zone, page, ac->migratetype,
> -									NULL);
> +						   false, NULL);
>  			if (ret) {
>  				spin_unlock_irqrestore(&zone->lock, flags);
>  				return ret;
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index abfe26ad59fd..de44e1329706 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -45,7 +45,7 @@ static int set_migratetype_isolate(struct page *page, int migratetype, int isol_
>  		set_pageblock_migratetype(page, MIGRATE_ISOLATE);
>  		zone->nr_isolate_pageblock++;
>  		nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE,
> -									NULL);
> +						false, NULL);
>  
>  		__mod_zone_freepage_state(zone, -nr_pages, mt);
>  		spin_unlock_irqrestore(&zone->lock, flags);
> @@ -83,7 +83,7 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>  	 * Because freepage with more than pageblock_order on isolated
>  	 * pageblock is restricted to merge due to freepage counting problem,
>  	 * it is possible that there is free buddy page.
> -	 * move_freepages_block() doesn't care of merge so we need other
> +	 * move_freepages_block() doesn't care about merging, so we need another
>  	 * approach in order to merge them. Isolation and free will make
>  	 * these pages to be merged.
>  	 */
> @@ -106,9 +106,15 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>  	 * If we isolate freepage with more than pageblock_order, there
>  	 * should be no freepage in the range, so we could avoid costly
>  	 * pageblock scanning for freepage moving.
> +	 *
> +	 * We didn't actually touch any of the isolated pages, so place them
> +	 * to the tail of the freelist. This is an optimization for memory
> +	 * onlining - just onlined memory won't immediately be considered for
> +	 * allocation.
>  	 */
>  	if (!isolated_page) {
> -		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
> +		nr_pages = move_freepages_block(zone, page, migratetype, true,
> +						NULL);
>  		__mod_zone_freepage_state(zone, nr_pages, migratetype);
>  	}
>  	set_pageblock_migratetype(page, migratetype);
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs
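[Editorial note: the difference between move_to_free_list() and the new move_to_free_list_tail() in the patch above boils down to `list_move()` vs `list_move_tail()` on the kernel's circular doubly-linked lists. Below is a self-contained re-implementation of just those primitives from `include/linux/list.h`, for illustration.]

```c
/* Minimal circular doubly-linked list, mirroring list.h semantics.
 * The list head is a sentinel; head->next is the "head" of the queue
 * (next allocation candidate), head->prev is the tail. */
#include <assert.h>

struct list_head { struct list_head *next, *prev; };

static void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

static void __list_add(struct list_head *n,
		       struct list_head *prev, struct list_head *next)
{
	next->prev = n;
	n->next = next;
	n->prev = prev;
	prev->next = n;
}

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

/* Re-insert at the head: the page is handed out again soon. */
static void list_move(struct list_head *e, struct list_head *head)
{
	list_del(e);
	__list_add(e, head, head->next);
}

/* Re-insert at the tail: the page is considered for allocation last,
 * which is what undoing isolation wants for untouched pages. */
static void list_move_tail(struct list_head *e, struct list_head *head)
{
	list_del(e);
	__list_add(e, head->prev, head);
}
```

[With this picture, the patch is just threading a head-or-tail choice down from move_freepages_block() to the final `list_move`/`list_move_tail` call.]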


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:39:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:39:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2121.6290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHY-0001mL-EW; Fri, 02 Oct 2020 13:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2121.6290; Fri, 02 Oct 2020 13:39:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHY-0001mE-BJ; Fri, 02 Oct 2020 13:39:32 +0000
Received: by outflank-mailman (input) for mailman id 2121;
 Fri, 02 Oct 2020 13:39:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kOLHW-0001m9-Uu
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:31 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 77bda39c-0282-427d-baa6-7e104d0205b8;
 Fri, 02 Oct 2020 13:39:29 +0000 (UTC)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-537-Dq-uan-7NGKD4Nj_AWAQFg-1; Fri, 02 Oct 2020 09:39:27 -0400
Received: by mail-wm1-f70.google.com with SMTP id x6so540382wmi.1
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:27 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net.
 [83.53.161.74])
 by smtp.gmail.com with ESMTPSA id k8sm1632520wrl.42.2020.10.02.06.39.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:39:25 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kOLHW-0001m9-Uu
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:31 +0000
X-Inumbo-ID: 77bda39c-0282-427d-baa6-7e104d0205b8
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 77bda39c-0282-427d-baa6-7e104d0205b8;
	Fri, 02 Oct 2020 13:39:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601645969;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=gJa6lLjujz4Z6eodBHhDkQDnUdrQBiirVQpBE1f30O0=;
	b=jI1Qp3NSAg5mIkSR+LFYc60ABOHs570gXQRDW19u7oOFbWo6dc1mTLPNHlyE6CpZz5BoZ1
	/prq3xjomuOpGFXNgXhSJbZ41V1JgNVilznBxvg4i2K97T8xqzGuDljO/Mm8Bvx2Z8K7RD
	IFnFmVdqaCgOSoyom9DAQUmWOwV0+Hc=
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-537-Dq-uan-7NGKD4Nj_AWAQFg-1; Fri, 02 Oct 2020 09:39:27 -0400
X-MC-Unique: Dq-uan-7NGKD4Nj_AWAQFg-1
Received: by mail-wm1-f70.google.com with SMTP id x6so540382wmi.1
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:27 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=gJa6lLjujz4Z6eodBHhDkQDnUdrQBiirVQpBE1f30O0=;
        b=X5Kcelz8jKgrACm6S6oQqX3mnpHWEGi4GqOblFijlqHSOeNwBxDTgbW7N058HxG+0Y
         /ltr2/GZ59qqpeT6HfYg4eiIAg3l7TcIYWdCi+foJeaUu0MPDKDbumJnY0f5LbxVilBk
         9aaO/i6f8OSBSDG6NW0bEUqvU4dXcZHydlO56DEV+HOu6Jj14LMGARYs+ZEUz/goXMLn
         P2OshggXhtBFqoMdTsqh/oxSe+p6Yutq02nEwMOz2gIr9+KckByY/HhBTVYDXhVDG8Hp
         2xYvvuXq1bFuzdo0NS0i0Gv8QSi/4cNK8Ufp/qTovZJ0nQu8Nl66C8kVVVQztZRQ+oyx
         KjEg==
X-Gm-Message-State: AOAM530nSUPAqkmIyquGttjD/ccSQAHr4+JjHAWqB0GEIiGpU5U7a4xe
	+lg0vGDIifR5GJkIXmsutcrmtH0habpDrabAv+l11P9gMpvLytOCY4E7Q/Pg3X0sbtvgkHU84xS
	fuSAhGKfp5ipwjJCuB1km21B3Tgk=
X-Received: by 2002:adf:a418:: with SMTP id d24mr3198323wra.80.1601645966803;
        Fri, 02 Oct 2020 06:39:26 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwtxtNy6fW+4PHTbuPzJPXeHOxCiDquXwGv5njMeErxQZK42D+who95M+frGMmK92H78CfTVA==
X-Received: by 2002:adf:a418:: with SMTP id d24mr3198294wra.80.1601645966623;
        Fri, 02 Oct 2020 06:39:26 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net. [83.53.161.74])
        by smtp.gmail.com with ESMTPSA id k8sm1632520wrl.42.2020.10.02.06.39.25
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 02 Oct 2020 06:39:25 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <rth@twiddle.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Eric Blake <eblake@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH 0/5] qapi: Restrict machine (and migration) specific commands
Date: Fri,  2 Oct 2020 15:39:18 +0200
Message-Id: <20201002133923.1716645-1-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Reduce the machine code pulled into qemu-storage-daemon.

Philippe Mathieu-Daudé (5):
  qapi: Restrict 'inject-nmi' command to machine code
  qapi: Restrict 'system wakeup/reset/powerdown' commands to
    machine.json
  qapi: Restrict '(p)memsave' command to machine code
  qapi: Restrict 'query-kvm' command to machine code
  qapi: Restrict Xen migration commands to migration.json

 qapi/machine.json      | 168 +++++++++++++++++++++++++++++++++
 qapi/migration.json    |  41 ++++++++
 qapi/misc.json         | 209 -----------------------------------------
 accel/stubs/xen-stub.c |   2 +-
 hw/i386/xen/xen-hvm.c  |   2 +-
 migration/savevm.c     |   1 -
 softmmu/cpus.c         |   1 +
 ui/gtk.c               |   1 +
 ui/cocoa.m             |   1 +
 9 files changed, 214 insertions(+), 212 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:39:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:39:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2122.6302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHf-0001oS-Ny; Fri, 02 Oct 2020 13:39:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2122.6302; Fri, 02 Oct 2020 13:39:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHf-0001oL-KQ; Fri, 02 Oct 2020 13:39:39 +0000
Received: by outflank-mailman (input) for mailman id 2122;
 Fri, 02 Oct 2020 13:39:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kOLHe-0001o1-BS
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:38 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 65ec9fd5-b877-4355-ae15-291e198d59bc;
 Fri, 02 Oct 2020 13:39:35 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-274-DSmf6SVhOiOntOuoPZ_fqA-1; Fri, 02 Oct 2020 09:39:33 -0400
Received: by mail-wr1-f69.google.com with SMTP id l9so554526wrq.20
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:33 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net.
 [83.53.161.74])
 by smtp.gmail.com with ESMTPSA id 91sm1979455wrq.9.2020.10.02.06.39.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:39:31 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kOLHe-0001o1-BS
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:38 +0000
X-Inumbo-ID: 65ec9fd5-b877-4355-ae15-291e198d59bc
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 65ec9fd5-b877-4355-ae15-291e198d59bc;
	Fri, 02 Oct 2020 13:39:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601645975;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iLAVKEu0s8buMQigPSDDQRm3nwhcH7vFMDI5aBMruk4=;
	b=Z6DK7UlR9jzeTDWaDTeu0zBR6OjAZvmi4ldqzQlM7EUa0/VtQVTZ2c6ap5A/xQeV59MsZm
	d6X+Do8ZcgkfOInXJC6Gh3l7oILwJ5N3R4nCHN4Vi2hA2yM1sr2CbqKBgh09jVej7Cp0xm
	L1mtKfoRAJjKKGpDZQE7wKjr1xHVAx0=
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-274-DSmf6SVhOiOntOuoPZ_fqA-1; Fri, 02 Oct 2020 09:39:33 -0400
X-MC-Unique: DSmf6SVhOiOntOuoPZ_fqA-1
Received: by mail-wr1-f69.google.com with SMTP id l9so554526wrq.20
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:33 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=iLAVKEu0s8buMQigPSDDQRm3nwhcH7vFMDI5aBMruk4=;
        b=qvzefYhpkF7n6MMR4XIkxvhtD2H+CTP0Ot9/REeG70SLmktzCVki+qeOIHps/066pF
         /LCcxR6DzDKIEWQR3tNow4C4W4t54SD2neNhO9foUwURzOM4lJUPUi661ZSvtNAl0zuF
         p8leW8pmNDYF8NFi6eMRNUbvPziBrwVxzPizZPZCrb+EHTYsyXUx9VlPb6hm9IhziyMJ
         Dipxvx/f4hPRqGFS7HFkxRS6gXCZArBGzj2dhehibUKNyf7/Q2yFHFnsmjzJr8MwWYBw
         ryzxsO6sQei/wNcy9uonhV8y1QzMjGHtALZqqJzyeQ1RY0FHFmxMAfbY/EosuYSqP4W9
         Ulkg==
X-Gm-Message-State: AOAM530l6iD3jWmhdbyGVixljgzy8/JeN16MzQ/h3EP5Cdfeh0zssPrl
	JbKbVV9+dkFvYHpaujXdQrzOh2BiV5kjDH+buJ4wP8vuHy45Q8yN16JcFYwNg1/CxCHNqzTV6mO
	x5Wk1PzLUItwVyjcUgLuhNOTXNrY=
X-Received: by 2002:a7b:cc02:: with SMTP id f2mr2894069wmh.1.1601645972085;
        Fri, 02 Oct 2020 06:39:32 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxqQv1qlyQpjgjGTEtLJvUYyUltyIpiDUFPSX1enMbbtlK6dt49SmY4VWfHBJPv80MAhGkeEw==
X-Received: by 2002:a7b:cc02:: with SMTP id f2mr2894034wmh.1.1601645971831;
        Fri, 02 Oct 2020 06:39:31 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net. [83.53.161.74])
        by smtp.gmail.com with ESMTPSA id 91sm1979455wrq.9.2020.10.02.06.39.30
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 02 Oct 2020 06:39:31 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <rth@twiddle.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Eric Blake <eblake@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH 1/5] qapi: Restrict 'inject-nmi' command to machine code
Date: Fri,  2 Oct 2020 15:39:19 +0200
Message-Id: <20201002133923.1716645-2-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002133923.1716645-1-philmd@redhat.com>
References: <20201002133923.1716645-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Restricting 'inject-nmi' to machine.json pulls slightly
less QAPI-generated code into user-mode and tools.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 qapi/machine.json | 20 ++++++++++++++++++++
 qapi/misc.json    | 20 --------------------
 softmmu/cpus.c    |  1 +
 3 files changed, 21 insertions(+), 20 deletions(-)
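The move is schema-file bookkeeping only; on the wire a QMP client still issues the command exactly as the doc comment's example shows. A minimal, tooling-free sketch of that exchange (plain JSON as carried over the QMP socket; no QEMU Python packages are assumed):

```python
import json

def qmp_command(name, arguments=None):
    """Serialize a QMP 'execute' request for the given command name."""
    msg = {"execute": name}
    if arguments is not None:
        msg["arguments"] = arguments
    return json.dumps(msg)

# The exchange from the inject-nmi doc comment:
request = qmp_command("inject-nmi")     # wire form: {"execute": "inject-nmi"}
reply = json.loads('{ "return": {} }')  # success reply: empty "return" object
```

The same helper works for any argument-less command in these schema files, which is why only the generated header include changes on the C side.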

diff --git a/qapi/machine.json b/qapi/machine.json
index 756dacb06f..073b1c98b2 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -484,6 +484,26 @@
 { 'enum': 'LostTickPolicy',
   'data': ['discard', 'delay', 'slew' ] }
 
+##
+# @inject-nmi:
+#
+# Injects a Non-Maskable Interrupt into the default CPU (x86/s390) or all CPUs (ppc64).
+# The command fails when the guest does not support NMI injection.
+#
+# Returns:  If successful, nothing
+#
+# Since:  0.14.0
+#
+# Note: prior to 2.1, this command was only supported for x86 and s390 VMs
+#
+# Example:
+#
+# -> { "execute": "inject-nmi" }
+# <- { "return": {} }
+#
+##
+{ 'command': 'inject-nmi' }
+
 ##
 # @NumaOptionsType:
 #
diff --git a/qapi/misc.json b/qapi/misc.json
index 694d2142f3..37b3e04cec 100644
--- a/qapi/misc.json
+++ b/qapi/misc.json
@@ -341,26 +341,6 @@
 ##
 { 'command': 'system_wakeup' }
 
-##
-# @inject-nmi:
-#
-# Injects a Non-Maskable Interrupt into the default CPU (x86/s390) or all CPUs (ppc64).
-# The command fails when the guest doesn't support injecting.
-#
-# Returns:  If successful, nothing
-#
-# Since:  0.14.0
-#
-# Note: prior to 2.1, this command was only supported for x86 and s390 VMs
-#
-# Example:
-#
-# -> { "execute": "inject-nmi" }
-# <- { "return": {} }
-#
-##
-{ 'command': 'inject-nmi' }
-
 ##
 # @human-monitor-command:
 #
diff --git a/softmmu/cpus.c b/softmmu/cpus.c
index ac8940d52e..bd040d6cdd 100644
--- a/softmmu/cpus.c
+++ b/softmmu/cpus.c
@@ -29,6 +29,7 @@
 #include "migration/vmstate.h"
 #include "monitor/monitor.h"
 #include "qapi/error.h"
+#include "qapi/qapi-commands-machine.h"
 #include "qapi/qapi-commands-misc.h"
 #include "qapi/qapi-events-run-state.h"
 #include "qapi/qmp/qerror.h"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:39:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:39:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2123.6314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHk-0001rR-7d; Fri, 02 Oct 2020 13:39:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2123.6314; Fri, 02 Oct 2020 13:39:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHk-0001rJ-3h; Fri, 02 Oct 2020 13:39:44 +0000
Received: by outflank-mailman (input) for mailman id 2123;
 Fri, 02 Oct 2020 13:39:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kOLHj-0001o1-7l
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:43 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id bfb246c0-b6fd-40ae-bcf1-97b5a21e218e;
 Fri, 02 Oct 2020 13:39:40 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-484-8QquVv6XN-WG38wS1ghzDg-1; Fri, 02 Oct 2020 09:39:38 -0400
Received: by mail-wm1-f69.google.com with SMTP id i9so95878wml.2
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:38 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net.
 [83.53.161.74])
 by smtp.gmail.com with ESMTPSA id r19sm1784306wmh.7.2020.10.02.06.39.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:39:36 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kOLHj-0001o1-7l
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:43 +0000
X-Inumbo-ID: bfb246c0-b6fd-40ae-bcf1-97b5a21e218e
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id bfb246c0-b6fd-40ae-bcf1-97b5a21e218e;
	Fri, 02 Oct 2020 13:39:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601645980;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kbK1m63mIh7G9YoSTt+lY9Rt1IngmCv52ys5F1Xv+Ck=;
	b=Uu3UBB/eBIZLRBxr/GKPc14VcSn2csCaxmXks06WflkrmIpYV2LU+DmKnQ+akzhTI+nipx
	QuR9bXjyODJjqrv8WyqmIQw0oopDoIHWDh/3k5bJcmDoTSjvROhQFa82GfXqqxp+Vhecey
	Cff8hZKjnDACwarcTk8YZGmgaNWQclc=
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-484-8QquVv6XN-WG38wS1ghzDg-1; Fri, 02 Oct 2020 09:39:38 -0400
X-MC-Unique: 8QquVv6XN-WG38wS1ghzDg-1
Received: by mail-wm1-f69.google.com with SMTP id i9so95878wml.2
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:38 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=kbK1m63mIh7G9YoSTt+lY9Rt1IngmCv52ys5F1Xv+Ck=;
        b=lc4bklQB/9mlIdD253n0ziNqrLJ2/OXTx8wk1FNM0UB8xR8mE5kkizU8f4WKYl5Z+Z
         NfUleXm6S6/fC3lNQADHNgAjJpGBhBGVJLvn85c0cSt39f2qL6epWwDHGmoKTb2hEzCB
         akkgxtMTth+KUrVtQoUZM5NGb6ACWU0UMHtYrWNDHRHFL70fBeR+/+dzqDDJ/HSyX6Ei
         VqZMOIA56UXy75fYOXQukbN4s5mGg2psRTw/EnDtMAY3t7yt3ERq0GIEv+ivZtIuG6wY
         lS1uqfh7fAg8qtqXW9/N16Czv1r81PaIeYFeUp1S5c/piN01cAn/XEFHbvRXajBIYgzK
         aqWQ==
X-Gm-Message-State: AOAM531w7h/gP5lYIIlEgiBEMsikbjCDKksU3W9wGaz4CEvXR0wB7PJW
	z1oQrvyG74XwsS9l+9oiVO5ZfAffeTjaTAfgFSMLL+R3tVljx6Pj1+o0qRnCCCsKZ7dHjQHoT/U
	jXS8osdorQHDwCdBEwF1sKeCoZ4A=
X-Received: by 2002:a1c:63c1:: with SMTP id x184mr3042796wmb.138.1601645977326;
        Fri, 02 Oct 2020 06:39:37 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwD/heYmqaLVMQtumsMQOWyseI2hOgaXTBn2HXZs+0th/9VfQZ/6fqWSoQPr8a9Q2Va+FrerA==
X-Received: by 2002:a1c:63c1:: with SMTP id x184mr3042765wmb.138.1601645977099;
        Fri, 02 Oct 2020 06:39:37 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net. [83.53.161.74])
        by smtp.gmail.com with ESMTPSA id r19sm1784306wmh.7.2020.10.02.06.39.35
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 02 Oct 2020 06:39:36 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <rth@twiddle.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Eric Blake <eblake@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH 2/5] qapi: Restrict 'system wakeup/reset/powerdown' commands to machine.json
Date: Fri,  2 Oct 2020 15:39:20 +0200
Message-Id: <20201002133923.1716645-3-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002133923.1716645-1-philmd@redhat.com>
References: <20201002133923.1716645-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Restricting system_wakeup/system_reset/system_powerdown to
machine.json pulls slightly less QAPI-generated code into
user-mode and tools.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 qapi/machine.json | 57 +++++++++++++++++++++++++++++++++++++++++++++++
 qapi/misc.json    | 57 -----------------------------------------------
 ui/gtk.c          |  1 +
 ui/cocoa.m        |  1 +
 4 files changed, 59 insertions(+), 57 deletions(-)
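As with patch 1, the wire protocol is untouched; clients keep issuing the three commands as before. One behavioral detail from the doc comments is worth illustrating: since 4.0, system_wakeup fails when the guest is not suspended, and a QMP failure reply carries an "error" object instead of "return". A small sketch (the "desc" text below is made up for illustration, not taken from QEMU):

```python
import json

def qmp_command(name):
    """Serialize a QMP 'execute' request (none of these commands take arguments)."""
    return json.dumps({"execute": name})

requests = [qmp_command(n)
            for n in ("system_reset", "system_powerdown", "system_wakeup")]

# A failure reply has "error" (class + desc) where a success reply has "return".
error_reply = json.loads(
    '{"error": {"class": "GenericError", "desc": "guest not suspended"}}')
```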

diff --git a/qapi/machine.json b/qapi/machine.json
index 073b1c98b2..55328d4f3c 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -452,6 +452,63 @@
 ##
 { 'command': 'query-vm-generation-id', 'returns': 'GuidInfo' }
 
+##
+# @system_reset:
+#
+# Performs a hard reset of a guest.
+#
+# Since: 0.14.0
+#
+# Example:
+#
+# -> { "execute": "system_reset" }
+# <- { "return": {} }
+#
+##
+{ 'command': 'system_reset' }
+
+##
+# @system_powerdown:
+#
+# Requests that a guest perform a powerdown operation.
+#
+# Since: 0.14.0
+#
+# Notes: A guest may or may not respond to this command.  This command
+#        returning does not indicate that a guest has accepted the request or
+#        that it has shut down.  Many guests will respond to this command by
+#        prompting the user in some way.
+# Example:
+#
+# -> { "execute": "system_powerdown" }
+# <- { "return": {} }
+#
+##
+{ 'command': 'system_powerdown' }
+
+##
+# @system_wakeup:
+#
+# Wake up the guest from suspend. If the guest has wake-up from
+# suspend support enabled (the wakeup-suspend-support flag from
+# query-current-machine), the guest is woken up if it is in the
+# SUSPENDED state. An error is returned otherwise.
+#
+# Since:  1.1
+#
+# Returns:  nothing.
+#
+# Note: prior to 4.0, this command did nothing if the guest
+#       was not suspended.
+#
+# Example:
+#
+# -> { "execute": "system_wakeup" }
+# <- { "return": {} }
+#
+##
+{ 'command': 'system_wakeup' }
+
 ##
 # @LostTickPolicy:
 #
diff --git a/qapi/misc.json b/qapi/misc.json
index 37b3e04cec..cce2e71e9c 100644
--- a/qapi/misc.json
+++ b/qapi/misc.json
@@ -177,40 +177,6 @@
 ##
 { 'command': 'stop' }
 
-##
-# @system_reset:
-#
-# Performs a hard reset of a guest.
-#
-# Since: 0.14.0
-#
-# Example:
-#
-# -> { "execute": "system_reset" }
-# <- { "return": {} }
-#
-##
-{ 'command': 'system_reset' }
-
-##
-# @system_powerdown:
-#
-# Requests that a guest perform a powerdown operation.
-#
-# Since: 0.14.0
-#
-# Notes: A guest may or may not respond to this command.  This command
-#        returning does not indicate that a guest has accepted the request or
-#        that it has shut down.  Many guests will respond to this command by
-#        prompting the user in some way.
-# Example:
-#
-# -> { "execute": "system_powerdown" }
-# <- { "return": {} }
-#
-##
-{ 'command': 'system_powerdown' }
-
 ##
 # @memsave:
 #
@@ -318,29 +284,6 @@
 ##
 { 'command': 'x-exit-preconfig', 'allow-preconfig': true }
 
-##
-# @system_wakeup:
-#
-# Wake up guest from suspend. If the guest has wake-up from suspend
-# support enabled (wakeup-suspend-support flag from
-# query-current-machine), wake-up guest from suspend if the guest is
-# in SUSPENDED state. Return an error otherwise.
-#
-# Since:  1.1
-#
-# Returns:  nothing.
-#
-# Note: prior to 4.0, this command does nothing in case the guest
-#       isn't suspended.
-#
-# Example:
-#
-# -> { "execute": "system_wakeup" }
-# <- { "return": {} }
-#
-##
-{ 'command': 'system_wakeup' }
-
 ##
 # @human-monitor-command:
 #
diff --git a/ui/gtk.c b/ui/gtk.c
index b11594d817..a752aa22be 100644
--- a/ui/gtk.c
+++ b/ui/gtk.c
@@ -33,6 +33,7 @@
 #include "qemu/osdep.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-control.h"
+#include "qapi/qapi-commands-machine.h"
 #include "qapi/qapi-commands-misc.h"
 #include "qemu/cutils.h"
 
diff --git a/ui/cocoa.m b/ui/cocoa.m
index 0910b4a716..f32adc3074 100644
--- a/ui/cocoa.m
+++ b/ui/cocoa.m
@@ -35,6 +35,7 @@
 #include "sysemu/cpu-throttle.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-block.h"
+#include "qapi/qapi-commands-machine.h"
 #include "qapi/qapi-commands-misc.h"
 #include "sysemu/blockdev.h"
 #include "qemu-version.h"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:39:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:39:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2124.6326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHm-0001uF-Kx; Fri, 02 Oct 2020 13:39:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2124.6326; Fri, 02 Oct 2020 13:39:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHm-0001u5-Ff; Fri, 02 Oct 2020 13:39:46 +0000
Received: by outflank-mailman (input) for mailman id 2124;
 Fri, 02 Oct 2020 13:39:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOLHk-0001sI-OP
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:44 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.87]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6f9c6a3-d29e-49cb-b837-85fddd82ccf8;
 Fri, 02 Oct 2020 13:39:43 +0000 (UTC)
Received: from AM0PR10CA0076.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:15::29)
 by VI1PR08MB2672.eurprd08.prod.outlook.com (2603:10a6:802:1c::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.36; Fri, 2 Oct
 2020 13:39:41 +0000
Received: from AM5EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:15:cafe::bd) by AM0PR10CA0076.outlook.office365.com
 (2603:10a6:208:15::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35 via Frontend
 Transport; Fri, 2 Oct 2020 13:39:41 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT064.mail.protection.outlook.com (10.152.17.53) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 13:39:41 +0000
Received: ("Tessian outbound 7fc8f57bdedc:v64");
 Fri, 02 Oct 2020 13:39:40 +0000
Received: from a6a43c6ac990.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9B899E1B-AFA1-4D63-9ACE-D52B78EF4AA9.1; 
 Fri, 02 Oct 2020 13:39:33 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a6a43c6ac990.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 13:39:33 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1799.eurprd08.prod.outlook.com (2603:10a6:4:3a::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38; Fri, 2 Oct
 2020 13:39:32 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 13:39:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kOLHk-0001sI-OP
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:44 +0000
X-Inumbo-ID: b6f9c6a3-d29e-49cb-b837-85fddd82ccf8
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown [40.107.8.87])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b6f9c6a3-d29e-49cb-b837-85fddd82ccf8;
	Fri, 02 Oct 2020 13:39:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H+S3Vu23I+R+RrBoxjqBi0zqDdivtx/4Um+qF6Mi2hE=;
 b=R0B8H9IKCyVh/ZhRQZlAMgsDmKOMXzfIsI+v6U7PufjUCYRb0tADfVRj4VDPxabLwB9607R7ESzo2bI4mhGOsADQyiGojjwgD+FBs3S7/hTw6HfeePN1oEqcO6hBx6rb5Y/J7Do8t1sw/lRFKDfoDguEtfkCpBVVXAZ1JvcwoKo=
Received: from AM0PR10CA0076.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:15::29)
 by VI1PR08MB2672.eurprd08.prod.outlook.com (2603:10a6:802:1c::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.36; Fri, 2 Oct
 2020 13:39:41 +0000
Received: from AM5EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:15:cafe::bd) by AM0PR10CA0076.outlook.office365.com
 (2603:10a6:208:15::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35 via Frontend
 Transport; Fri, 2 Oct 2020 13:39:41 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT064.mail.protection.outlook.com (10.152.17.53) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 13:39:41 +0000
Received: ("Tessian outbound 7fc8f57bdedc:v64"); Fri, 02 Oct 2020 13:39:40 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 76510ece6912c68e
X-CR-MTA-TID: 64aa7808
Received: from a6a43c6ac990.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 9B899E1B-AFA1-4D63-9ACE-D52B78EF4AA9.1;
	Fri, 02 Oct 2020 13:39:33 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a6a43c6ac990.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Fri, 02 Oct 2020 13:39:33 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Gh/zgC4Wry1lNzN+PDBdeOA5pMOHIxZBBmXxUb+aE14zop5GiRzUFDMrAz0UclhZ2qhdie4a/BqLwn2M9rxLrUTRYD4FWDC+J5agCX+RLgMKI6T0X9g/EfBXuh8nDXF7MiDVrPXjoE9pL8k2KCcwc5/3OaapOCMHBdKFB1EPxQ1ygn/vvh/qEK1uq6oabypfZoPF/nPO6SR2MWt4ce+noC+uQEy1luzWDi0UVXF8RmvHfot2EfnE9YVEFMXHlMR4/mdiPU7rUGyNNtbqs7cago4xVQTsVwsN3APLLvEdOA3ge79A0rB/wdf2jbV0WA5wbck/ldaFzYs/gz8JApFL4g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H+S3Vu23I+R+RrBoxjqBi0zqDdivtx/4Um+qF6Mi2hE=;
 b=FxqK3ilEBAtrQfTWkEXuuLPobZ+RdltHcBnPAe4LE5KXv10h7kQIKmedk+WSBT6ldpuhIVjpI9ZHisIgB/wGgQRY0e7zpFxnJEJEfGWwpuwb0gktq4K04UY65sR+BTSM2newVbNK09RLifYgcUnKrJol3+R/kKZtFLXvdA/UMlXc7vTBpm7nFujuP9AMxFljIeuHYhajvxOzViEu9L8fjslHi7R2Ux84w5JMFDKTwAT2DRg8RUkISZrU6iYrB+nQRcjKghZGgcIaw08GsLv2IOtenb8Q2pg/i5dxjXb1T/m3KI6yHOgkXdO4cBD7qadcK4WKHaondBhTiFzEv9kmOw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H+S3Vu23I+R+RrBoxjqBi0zqDdivtx/4Um+qF6Mi2hE=;
 b=R0B8H9IKCyVh/ZhRQZlAMgsDmKOMXzfIsI+v6U7PufjUCYRb0tADfVRj4VDPxabLwB9607R7ESzo2bI4mhGOsADQyiGojjwgD+FBs3S7/hTw6HfeePN1oEqcO6hBx6rb5Y/J7Do8t1sw/lRFKDfoDguEtfkCpBVVXAZ1JvcwoKo=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1799.eurprd08.prod.outlook.com (2603:10a6:4:3a::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38; Fri, 2 Oct
 2020 13:39:32 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 13:39:32 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Topic: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Index:
 AQHWmKjTj7VmlPYcLkSjqrrhNFG5S6mEOTwAgAAGTYCAAAEPgIAAAYQAgAAJbQCAAAYOgA==
Date: Fri, 2 Oct 2020 13:39:32 +0000
Message-ID: <9EE4FA49-D4A1-47A7-A6F7-F60F1BE97DCD@arm.com>
References:
 <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
 <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
 <31FC9BB1-F4C4-4203-94C1-1134607E49C2@arm.com>
 <e5da46d7-07ee-84b8-fbd8-e2c246c014de@suse.com>
 <547D8B47-C521-4F43-976F-D1723470AD3C@arm.com>
 <c09ae112-a94a-7b3c-f31a-46acc5098d5c@suse.com>
In-Reply-To: <c09ae112-a94a-7b3c-f31a-46acc5098d5c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 25d2b61e-2062-4bed-8c40-08d866d89d46
x-ms-traffictypediagnostic: DB6PR0801MB1799:|VI1PR08MB2672:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB267202E3906637C783427CDC9D310@VI1PR08MB2672.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:3631;OLM:3631;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 2SnLZ0YkAxlgnrGoJ+KOojskR80kfbzFHJkIRxTIJTRU9PoOF7AGCKu6DBPwv7oUCOaorIWjyp05Vt1OiJ3rWs+CZ83npH76vRIl4YMtXF5EKvqVf14F1GQew4LzLJ2D5ubLM04QemSpsl8M5RC5ft52Ecn4IwIltq7bxzn8qCBA5nFg6KVY2/hyA1ZYzLp6VsbLGLz+98x1lCMSMGYilHXkBh/595M8mzt4XXBMebY9m3xqr6MEQYslCeEVZdTgd2O7ZwnAqm6whf6vmJ4yL0Gi3EYZuCDiEBtrpe6a4bLCGH0SLu3SOmz+HFx9H8mIgAnxewFXIP7komkBZKRqIg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(136003)(376002)(346002)(396003)(186003)(2906002)(6506007)(53546011)(83380400001)(91956017)(33656002)(71200400001)(4326008)(26005)(8676002)(6486002)(66446008)(66556008)(8936002)(66476007)(86362001)(64756008)(66946007)(478600001)(6512007)(316002)(6916009)(36756003)(76116006)(5660300002)(2616005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 qZu4mfd9ixicFX7AChSvxbBOg1hWRO86hwuHJVRl3onAm1DAqHQtx8Lhj9ue0i5xHpaOmm1vDnHH3xorVgpIS4fZyZ2nmUV52EhTg1dAPFE6HgSOk+BMypwi2Ay0Acps/g+Rz3TyBWTiP+LqIaLCKbs8Rb0xjgtEIUZ4rnrjJ9vYfvvpCKP1YlNl3GK+qmHECf4RhlgfPaM1AQwtleqZfWSKB8wqjs4+9N8K1KXcSqgErSEveycFO9JvqXwfhyAdMgUsDEJHlINlaFtkPVutjSys0AkQjH8VYLM/a9sblEnzwhhaOZB53bwCJurhDTUfA4YJ2j0oc9HnBwJfKkOSdD6vKjvAeafYd471X9OXmdfeZsQuwN/aopH+5CwUOH50o04MZiVFH0F/y2KezKwYnyzkylaftAtUSQRhBYT5xZGVaCHTuWvRxox4BOR6ykzMosCEnLZW2742Hg0RAiH35Z5/gP3PLSw93KsjW/iU0yYdJU/bqLfCy+EB2UDjX1mRSKtYmqark3uJbdC7Ex9Bt+6ufXi2QRea5p1/YLUWYJ9fSy/M1LZ9krBgEWooYsL5shyTWI2B/zrrU4NbNPgyrkhZoY34weayu+ZIeCq4VMC+4ntrFi6L/0fwyv5VpsoJ5edtsHDYlWyoOVcdk7WTPQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <1133418347CE694FAF9D2B20F2577599@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1799
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	11cddfa8-5c61-4825-efef-08d866d89853
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	20Mx3A2MKfI4Z6bQOB8sEMwlB7zC1i6qdRaE2wth0zJmknxN4uk8UgR1p9r6WgK6TfdHTLeFTxY4W2iz0H1gcScJ45+eoRDGjX9anDpXhov5y20VnqLBWHmQkyONE0AZdkQaIMHA91OAp2RaegrLPgnWAoGYip3dJUk/J5Qdzxlb3NMXoyAkwIsv9B6HUs1SEGqBjPE3b8zmMFF59TYsw7xn+lYQ4zmkQVzyFS0+P+D63CJgXGo9yeKWWkXxnb0ZjI9XwMQzsoIlw884/x+ZGkbo0/KbWciiSf03Xn7KipM3du3wYuJENg6hXLDcfBuARkT15Dklyb2yJD9j0Ftq3oybS60SbcpQbuYTfNx45KbIbHdE4arAq2i2yCnnMO7BXDlOLM+t9nH+5PgQR6yhog==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(376002)(346002)(39860400002)(46966005)(2906002)(81166007)(8936002)(70206006)(70586007)(82310400003)(6506007)(478600001)(8676002)(356005)(83380400001)(6486002)(6512007)(5660300002)(336012)(33656002)(53546011)(186003)(316002)(26005)(36756003)(86362001)(6862004)(82740400003)(47076004)(36906005)(2616005)(4326008);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 13:39:41.0000
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 25d2b61e-2062-4bed-8c40-08d866d89d46
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2672

> On 2 Oct 2020, at 14:17, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 02.10.2020 14:44, Bertrand Marquis wrote:
>> 
>> 
>>> On 2 Oct 2020, at 13:38, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 02.10.2020 14:34, Bertrand Marquis wrote:
>>>>> On 2 Oct 2020, at 13:12, Jan Beulich <jbeulich@suse.com> wrote:
>>>>> 
>>>>> On 02.10.2020 12:42, Bertrand Marquis wrote:
>>>>>> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
>>>>>> 
>>>>>> This is removing the dependency to xen subdirectory preventing using a
>>>>>> wrong configuration file when xen subdirectory is duplicated for
>>>>>> compilation tests.
>>>>>> 
>>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>> 
>>>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>>> 
>>>> Thanks :-)
>>>> 
>>>>> 
>>>>> (but more for the slight tidying than the purpose you name)
>>>> 
>>>> Feel free to remove the justification from the commit message if
>>>> you think it is not usefull.
>>> 
>>> Oh, no, it's not like I consider it not useful. It shows how you
>>> arrived at making the change. It's just that I didn't consider
>>> making copies of xen/ something we mean to be supported. I wouldn't
>>> be surprised if it got broken again ...
>> 
>> basically i do this a “cp -rs” of xen subdirectory so that i can have directories
>> in which xen is compiled for x86, arm32 and arm64 and recompile all of them
>> quickly without having to go through distclean, config, make each time or modify
>> the original tree.
> 
> But then you must have adjustments also in the top level makefile,
> such that besides "make xen", "make xen-xyz" also works.

No there i am only compiling the hypervisor, not the tools, so only xen subdir.

Bertrand
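The per-architecture build-tree duplication discussed in this thread (symlink-copying the xen/ subdirectory so each copy keeps its own configuration) can be sketched roughly as below; the directory names and the stand-in source tree are illustrative assumptions, not taken from the thread.

```shell
# Rough sketch of the "cp -rs" per-architecture build-tree workflow
# (directory names are illustrative, not taken from the thread).
set -e

SRC="$PWD/xen"                             # the pristine hypervisor subdirectory
mkdir -p "$SRC" && touch "$SRC/Makefile"   # stand-in tree so the sketch is runnable

# Symlink-copy the tree once per target; each copy then keeps its own
# .config and build artifacts while sharing the original sources.
# Note: cp -s needs an absolute source path, hence $PWD above.
for arch in x86_64 arm32 arm64; do
    rm -rf "build-$arch"
    cp -rs "$SRC" "build-$arch"            # -s: make symlinks instead of copying
done

ls -d build-x86_64 build-arm32 build-arm64
```

Each copy can then be configured and rebuilt independently (for example `make -C build-arm64 XEN_TARGET_ARCH=arm64`) without running distclean in the original tree; the $(BASEDIR) change discussed above is what keeps a duplicated copy from picking up the original tree's configuration file.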


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:39:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:39:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2125.6338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHo-0001xN-U2; Fri, 02 Oct 2020 13:39:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2125.6338; Fri, 02 Oct 2020 13:39:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHo-0001xA-PU; Fri, 02 Oct 2020 13:39:48 +0000
Received: by outflank-mailman (input) for mailman id 2125;
 Fri, 02 Oct 2020 13:39:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kOLHn-0001sI-8c
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:47 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 071378c0-3579-45d2-b599-f6a9d517486b;
 Fri, 02 Oct 2020 13:39:46 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-359-yxHcaOtHPt6Z8bCmYwWpEQ-1; Fri, 02 Oct 2020 09:39:44 -0400
Received: by mail-wm1-f71.google.com with SMTP id y83so538355wmc.8
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:43 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net.
 [83.53.161.74])
 by smtp.gmail.com with ESMTPSA id 13sm1358682wmk.30.2020.10.02.06.39.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:39:41 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kOLHn-0001sI-8c
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:47 +0000
X-Inumbo-ID: 071378c0-3579-45d2-b599-f6a9d517486b
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 071378c0-3579-45d2-b599-f6a9d517486b;
	Fri, 02 Oct 2020 13:39:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601645986;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l++0FncV4p/balcOSro5u4OfFvevG6nmzhJ8fBAOs6w=;
	b=A62p7juO5/oCRzYcD+UnjZCgVl4O/baHfJCuA60JD6GY0wx84Vx9dEvMnYvJGn2W+Qb1Mh
	kcJ5wz9eJ0wfDaY2MNdFVa//r7G0kZPxYZABdbdemrzxAoJ7/kRg9K6WgUH0IL4k48XrRF
	LbpfmxnIzQ1ib2+dZkR0+o3fuWoa4VI=
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-359-yxHcaOtHPt6Z8bCmYwWpEQ-1; Fri, 02 Oct 2020 09:39:44 -0400
X-MC-Unique: yxHcaOtHPt6Z8bCmYwWpEQ-1
Received: by mail-wm1-f71.google.com with SMTP id y83so538355wmc.8
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:43 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=l++0FncV4p/balcOSro5u4OfFvevG6nmzhJ8fBAOs6w=;
        b=psaPcu/UzDhKUJKdh55I2jd2eCCbEYKknfAo6zPTDG9+Zi1/ur7lQuSyHqAoUMYjyC
         /h7zKctrrz+GrC6U08CW/pKsZgT/KMUNfzRwZyNLe7JuniGdmOer1e8ebamSHO3aHJco
         wCCTC9sQkBUnz61ICzWEPHp4ZKj4+f1Xic6mX/MSWtcx2DJk9KUNhtwBAM/zzYinTZs+
         0cfIti12g2diov5PzXtZSkPI/fbZn7xzKq/ehqMd2lpDUQFJ25IoqCfBztGvOnaZ+S1N
         iAVkX5ci8PGLXFmDybgdaHPUn1kDCkRXViNboS3kRpyLsO4KNN8XvIaPQtDKBvIt8rFt
         Y0hg==
X-Gm-Message-State: AOAM5324HpPxBVLhdhXyEx1PkxtD7GJA3/yCcAac3QhLIFUSjTGYXx4I
	k9sp+v8BF4uSQKYCoWDI30CR9QgBnvvVnnfwh9g6W4/ShmgbIubU5ywAmPeMgaXbmrNlKW96l64
	naimJnnw0Fj0Egf6dfVWPh7K/5ng=
X-Received: by 2002:a1c:9a0c:: with SMTP id c12mr3069190wme.85.1601645982655;
        Fri, 02 Oct 2020 06:39:42 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJytnflmvzG2NbHjeDXeHd5cEjLZ7ujLVl3vbAMVyGFRFjZMyWEyqKmjFZCL5TRb0S/3cpVZiw==
X-Received: by 2002:a1c:9a0c:: with SMTP id c12mr3069171wme.85.1601645982462;
        Fri, 02 Oct 2020 06:39:42 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net. [83.53.161.74])
        by smtp.gmail.com with ESMTPSA id 13sm1358682wmk.30.2020.10.02.06.39.40
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 02 Oct 2020 06:39:41 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <rth@twiddle.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Eric Blake <eblake@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH 3/5] qapi: Restrict '(p)memsave' command to machine code
Date: Fri,  2 Oct 2020 15:39:21 +0200
Message-Id: <20201002133923.1716645-4-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002133923.1716645-1-philmd@redhat.com>
References: <20201002133923.1716645-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Restricting memsave/pmemsave to machine.json pulls slightly
less QAPI-generated code into user-mode and tools.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 qapi/machine.json | 61 +++++++++++++++++++++++++++++++++++++++++++++++
 qapi/misc.json    | 61 -----------------------------------------------
 2 files changed, 61 insertions(+), 61 deletions(-)

diff --git a/qapi/machine.json b/qapi/machine.json
index 55328d4f3c..5a3bbcae01 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -887,6 +887,67 @@
 { 'enum': 'HostMemPolicy',
   'data': [ 'default', 'preferred', 'bind', 'interleave' ] }
 
+##
+# @memsave:
+#
+# Save a portion of guest memory to a file.
+#
+# @val: the virtual address of the guest to start from
+#
+# @size: the size of memory region to save
+#
+# @filename: the file to save the memory to as binary data
+#
+# @cpu-index: the index of the virtual CPU to use for translating the
+#             virtual address (defaults to CPU 0)
+#
+# Returns: Nothing on success
+#
+# Since: 0.14.0
+#
+# Notes: Errors were not reliably returned until 1.1
+#
+# Example:
+#
+# -> { "execute": "memsave",
+#      "arguments": { "val": 10,
+#                     "size": 100,
+#                     "filename": "/tmp/virtual-mem-dump" } }
+# <- { "return": {} }
+#
+##
+{ 'command': 'memsave',
+  'data': {'val': 'int', 'size': 'int', 'filename': 'str', '*cpu-index': 'int'} }
+
+##
+# @pmemsave:
+#
+# Save a portion of guest physical memory to a file.
+#
+# @val: the physical address of the guest to start from
+#
+# @size: the size of memory region to save
+#
+# @filename: the file to save the memory to as binary data
+#
+# Returns: Nothing on success
+#
+# Since: 0.14.0
+#
+# Notes: Errors were not reliably returned until 1.1
+#
+# Example:
+#
+# -> { "execute": "pmemsave",
+#      "arguments": { "val": 10,
+#                     "size": 100,
+#                     "filename": "/tmp/physical-mem-dump" } }
+# <- { "return": {} }
+#
+##
+{ 'command': 'pmemsave',
+  'data': {'val': 'int', 'size': 'int', 'filename': 'str'} }
+
 ##
 # @Memdev:
 #
diff --git a/qapi/misc.json b/qapi/misc.json
index cce2e71e9c..2a5d03a69e 100644
--- a/qapi/misc.json
+++ b/qapi/misc.json
@@ -177,67 +177,6 @@
 ##
 { 'command': 'stop' }
 
-##
-# @memsave:
-#
-# Save a portion of guest memory to a file.
-#
-# @val: the virtual address of the guest to start from
-#
-# @size: the size of memory region to save
-#
-# @filename: the file to save the memory to as binary data
-#
-# @cpu-index: the index of the virtual CPU to use for translating the
-#             virtual address (defaults to CPU 0)
-#
-# Returns: Nothing on success
-#
-# Since: 0.14.0
-#
-# Notes: Errors were not reliably returned until 1.1
-#
-# Example:
-#
-# -> { "execute": "memsave",
-#      "arguments": { "val": 10,
-#                     "size": 100,
-#                     "filename": "/tmp/virtual-mem-dump" } }
-# <- { "return": {} }
-#
-##
-{ 'command': 'memsave',
-  'data': {'val': 'int', 'size': 'int', 'filename': 'str', '*cpu-index': 'int'} }
-
-##
-# @pmemsave:
-#
-# Save a portion of guest physical memory to a file.
-#
-# @val: the physical address of the guest to start from
-#
-# @size: the size of memory region to save
-#
-# @filename: the file to save the memory to as binary data
-#
-# Returns: Nothing on success
-#
-# Since: 0.14.0
-#
-# Notes: Errors were not reliably returned until 1.1
-#
-# Example:
-#
-# -> { "execute": "pmemsave",
-#      "arguments": { "val": 10,
-#                     "size": 100,
-#                     "filename": "/tmp/physical-mem-dump" } }
-# <- { "return": {} }
-#
-##
-{ 'command': 'pmemsave',
-  'data': {'val': 'int', 'size': 'int', 'filename': 'str'} }
-
 ##
 # @cont:
 #
-- 
2.26.2
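The QMP exchanges shown in the schema comments above can be scripted against a running QEMU monitor socket. A minimal sketch follows; the socket path and guest address are assumptions for illustration, not values from the patch.

```shell
# Minimal sketch of issuing the memsave command over QMP (socket path and
# guest address are illustrative; assumes QEMU was started with
# -qmp unix:/tmp/qmp.sock,server,nowait).
CAPS='{ "execute": "qmp_capabilities" }'
CMD='{ "execute": "memsave",
       "arguments": { "val": 4096, "size": 100,
                      "filename": "/tmp/virtual-mem-dump" } }'

# A QMP session must negotiate capabilities first, then send the command:
#   printf '%s\n%s\n' "$CAPS" "$CMD" | socat - UNIX-CONNECT:/tmp/qmp.sock
printf '%s\n%s\n' "$CAPS" "$CMD"
```

The same pattern applies to pmemsave, with the physical address in `val` and no `cpu-index` argument.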



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:39:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:39:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2126.6350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHt-00023E-Go; Fri, 02 Oct 2020 13:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2126.6350; Fri, 02 Oct 2020 13:39:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHt-000236-CR; Fri, 02 Oct 2020 13:39:53 +0000
Received: by outflank-mailman (input) for mailman id 2126;
 Fri, 02 Oct 2020 13:39:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kOLHs-000223-FI
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:52 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0514069d-5379-4000-bbbf-b4a2b6d0123c;
 Fri, 02 Oct 2020 13:39:51 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-596-vfdZeNnmMf-TehhOv5_q6A-1; Fri, 02 Oct 2020 09:39:50 -0400
Received: by mail-wr1-f72.google.com with SMTP id r16so554198wrm.18
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:50 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net.
 [83.53.161.74])
 by smtp.gmail.com with ESMTPSA id d18sm1779795wrm.10.2020.10.02.06.39.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:39:46 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kOLHs-000223-FI
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:52 +0000
X-Inumbo-ID: 0514069d-5379-4000-bbbf-b4a2b6d0123c
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 0514069d-5379-4000-bbbf-b4a2b6d0123c;
	Fri, 02 Oct 2020 13:39:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601645991;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tvVXKSfjWkoZJFhCHk1dGuX00kfrmBZkcI+Ka1f3ccA=;
	b=Y9UNoL+P7MRObMQELlSDzu+JhTudkEbGjlHpGCFRvvloKQZNg78znvFafm/XjHFhgLtM5w
	JWIeRVQnOL7dZqbgLLEcrun1643CiEcltViAa7G+pdmKJw+sRiQIo1gQHq3/lU24i3PuNt
	kQDUv8x4Iq3rIeUWVVybXptUDd1UkQE=
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-596-vfdZeNnmMf-TehhOv5_q6A-1; Fri, 02 Oct 2020 09:39:50 -0400
X-MC-Unique: vfdZeNnmMf-TehhOv5_q6A-1
Received: by mail-wr1-f72.google.com with SMTP id r16so554198wrm.18
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:50 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=tvVXKSfjWkoZJFhCHk1dGuX00kfrmBZkcI+Ka1f3ccA=;
        b=YtcOiJ5f2FZZhMGhjzYpYlsciTMrkFzBnT0/L9b+vPhFpHqL8I+4arbPA/VA7JBAAI
         90rY0EBA4IVWzhi/sFeMedQl6nmQDTIngXkMEQaY0Oj37HO4R8ZYsb26mFbeuAHx96J4
         y7zhO6LzkGVpyy1jZ2qtQPm87fjkAXH0UftLm4GfyWhhYY25eagcEcaCYQs/BTp0RkTa
         M23mGQ07bHgJKyGieJ8J3MqIl6JkSuhu9YhK/0lP0wL5+n/IsB/4lr9XQEyrVjp2CERF
         wyiuyYbDcV/Mu9ObX1LMIZLXz082oJZiu+0+LnvgPUQjmIz1yCwua2N67OW3GOpIWDaj
         4/rQ==
X-Gm-Message-State: AOAM530GkqkUIdDhqexjKHi2obhXUFWrN8S+rFZ7j1xBYortsVEtVNyI
	cy40MXQXRS4U2RFhBWKwPc12dXosmBlNKehULdgnstSq4kDivIzQqwg9mVq7cawasnUr7dkjSjD
	VtNE/pM106XhsulTIOaJx7Berrzw=
X-Received: by 2002:adf:9bcf:: with SMTP id e15mr3075488wrc.93.1601645987716;
        Fri, 02 Oct 2020 06:39:47 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwXZ0DUtM8ag+Gzw12Jr6+YpFZwVTHU0tycGVTY/5KbE5Wb3wWL4PNJqnB0Xi/zQrZCrmxEzA==
X-Received: by 2002:adf:9bcf:: with SMTP id e15mr3075463wrc.93.1601645987503;
        Fri, 02 Oct 2020 06:39:47 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net. [83.53.161.74])
        by smtp.gmail.com with ESMTPSA id d18sm1779795wrm.10.2020.10.02.06.39.46
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 02 Oct 2020 06:39:46 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <rth@twiddle.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Eric Blake <eblake@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH 4/5] qapi: Restrict 'query-kvm' command to machine code
Date: Fri,  2 Oct 2020 15:39:22 +0200
Message-Id: <20201002133923.1716645-5-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002133923.1716645-1-philmd@redhat.com>
References: <20201002133923.1716645-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Restricting query-kvm to machine.json pulls slightly
less QAPI-generated code into user-mode and tools.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 qapi/machine.json | 30 ++++++++++++++++++++++++++++++
 qapi/misc.json    | 30 ------------------------------
 2 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/qapi/machine.json b/qapi/machine.json
index 5a3bbcae01..7c9a263778 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -561,6 +561,36 @@
 ##
 { 'command': 'inject-nmi' }
 
+##
+# @KvmInfo:
+#
+# Information about support for KVM acceleration
+#
+# @enabled: true if KVM acceleration is active
+#
+# @present: true if KVM acceleration is built into this executable
+#
+# Since: 0.14.0
+##
+{ 'struct': 'KvmInfo', 'data': {'enabled': 'bool', 'present': 'bool'} }
+
+##
+# @query-kvm:
+#
+# Returns information about KVM acceleration
+#
+# Returns: @KvmInfo
+#
+# Since: 0.14.0
+#
+# Example:
+#
+# -> { "execute": "query-kvm" }
+# <- { "return": { "enabled": true, "present": true } }
+#
+##
+{ 'command': 'query-kvm', 'returns': 'KvmInfo' }
+
 ##
 # @NumaOptionsType:
 #
diff --git a/qapi/misc.json b/qapi/misc.json
index 2a5d03a69e..9813893269 100644
--- a/qapi/misc.json
+++ b/qapi/misc.json
@@ -68,36 +68,6 @@
 ##
 { 'command': 'query-name', 'returns': 'NameInfo', 'allow-preconfig': true }
 
-##
-# @KvmInfo:
-#
-# Information about support for KVM acceleration
-#
-# @enabled: true if KVM acceleration is active
-#
-# @present: true if KVM acceleration is built into this executable
-#
-# Since: 0.14.0
-##
-{ 'struct': 'KvmInfo', 'data': {'enabled': 'bool', 'present': 'bool'} }
-
-##
-# @query-kvm:
-#
-# Returns information about KVM acceleration
-#
-# Returns: @KvmInfo
-#
-# Since: 0.14.0
-#
-# Example:
-#
-# -> { "execute": "query-kvm" }
-# <- { "return": { "enabled": true, "present": true } }
-#
-##
-{ 'command': 'query-kvm', 'returns': 'KvmInfo' }
-
 ##
 # @IOThreadInfo:
 #
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:40:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:40:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2127.6362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHz-0002Be-UF; Fri, 02 Oct 2020 13:39:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2127.6362; Fri, 02 Oct 2020 13:39:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLHz-0002BS-PY; Fri, 02 Oct 2020 13:39:59 +0000
Received: by outflank-mailman (input) for mailman id 2127;
 Fri, 02 Oct 2020 13:39:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kOLHy-0002AB-Hy
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:58 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 23309c66-c2e2-446e-8bd4-41bf2412830e;
 Fri, 02 Oct 2020 13:39:57 +0000 (UTC)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-426-8CPeljkbOza0gEeRdeNhlA-1; Fri, 02 Oct 2020 09:39:55 -0400
Received: by mail-wm1-f69.google.com with SMTP id 13so437872wmf.0
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:55 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net.
 [83.53.161.74])
 by smtp.gmail.com with ESMTPSA id f6sm1818116wro.5.2020.10.02.06.39.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 06:39:52 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vuVU=DJ=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kOLHy-0002AB-Hy
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:39:58 +0000
X-Inumbo-ID: 23309c66-c2e2-446e-8bd4-41bf2412830e
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 23309c66-c2e2-446e-8bd4-41bf2412830e;
	Fri, 02 Oct 2020 13:39:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601645997;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8IRn/Y0aJt1eGijTbQmkn/6JoyPalvnCOEAPCHm0k58=;
	b=I7tw93m2OGcrMYGGWQLAFY0xbZJnomqp+6GXbHzFOIipZtec+FuqDQUVgXrsLshYvZ6r9C
	LvjkQDUVl9gAR6Fw3k9JNT7UnC5z/lrt/p4EaQsratPdhuE8jU12H2scPyePYlHFBYZOD6
	WbXFW7Os8Qn3R2hwv99SAJSbTy0At7E=
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-426-8CPeljkbOza0gEeRdeNhlA-1; Fri, 02 Oct 2020 09:39:55 -0400
X-MC-Unique: 8CPeljkbOza0gEeRdeNhlA-1
Received: by mail-wm1-f69.google.com with SMTP id 13so437872wmf.0
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 06:39:55 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=8IRn/Y0aJt1eGijTbQmkn/6JoyPalvnCOEAPCHm0k58=;
        b=PwZzHvpZk/1VvvaYlJkBaea/N/LqGBLLcVNb+VY1UKJ+mbca3e1qjISqOKDxAfFlDI
         PwcCVUpl8gnLu9WYRs6DVfwZdAgvQ+JdE5l7paN85nf0xdx394t5Kh1/VhXGfnkoYPN+
         ztnLTmCCCbxrlzOCwDLj8sIBBZSONf5O8X2ufGUcUa97siliXFwvqUAj859cg2kROe2y
         kX+YdGFQA1ztPkPXLMmcOqIgZhpXeDQX/MSUGHqrBl1qcfHrbh8AFViRse34LqcVM62f
         1Gt0lZe4mVgCT0cw4vPF015aR5lAG0kucP4QAtGHXUWJTTxSj1HLcF7gxEGcNhfdiyNL
         pgHg==
X-Gm-Message-State: AOAM5300aRY+Bbj+0og9u66Lf3/SNTuqmnuCFrkN2tKMglBb7WNoAV42
	i8TRxzLojydRJ/fbh5psuIsbZw5rVho68zMsKhJdz5oz/qIqdr7XrRGh6C9dFokPHL8ojVLpC49
	VGaYsUp6ZYx95/oc+PujAZ5rvoms=
X-Received: by 2002:a5d:43cf:: with SMTP id v15mr3203831wrr.269.1601645993157;
        Fri, 02 Oct 2020 06:39:53 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyrtjDxbvNT2qZx2AnpZX/U8SDHXTxctCni5ZD+jg0EWfaGOVq2AOPJli+/iQHk1imFPhnyVw==
X-Received: by 2002:a5d:43cf:: with SMTP id v15mr3203803wrr.269.1601645992898;
        Fri, 02 Oct 2020 06:39:52 -0700 (PDT)
Received: from localhost.localdomain (74.red-83-53-161.dynamicip.rima-tde.net. [83.53.161.74])
        by smtp.gmail.com with ESMTPSA id f6sm1818116wro.5.2020.10.02.06.39.51
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 02 Oct 2020 06:39:52 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Richard Henderson <rth@twiddle.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	"Dr. David Alan Gilbert" <dgilbert@redhat.com>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Eric Blake <eblake@redhat.com>,
	Paul Durrant <paul@xen.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH 5/5] qapi: Restrict Xen migration commands to migration.json
Date: Fri,  2 Oct 2020 15:39:23 +0200
Message-Id: <20201002133923.1716645-6-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002133923.1716645-1-philmd@redhat.com>
References: <20201002133923.1716645-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Restricting the xen-set-global-dirty-log and xen-load-devices-state
commands to migration.json pulls slightly less QAPI-generated code
into user-mode and tools.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 qapi/migration.json    | 41 +++++++++++++++++++++++++++++++++++++++++
 qapi/misc.json         | 41 -----------------------------------------
 accel/stubs/xen-stub.c |  2 +-
 hw/i386/xen/xen-hvm.c  |  2 +-
 migration/savevm.c     |  1 -
 5 files changed, 43 insertions(+), 44 deletions(-)

diff --git a/qapi/migration.json b/qapi/migration.json
index 7f5e6fd681..cb30f4c729 100644
--- a/qapi/migration.json
+++ b/qapi/migration.json
@@ -1551,6 +1551,47 @@
 { 'command': 'xen-save-devices-state',
   'data': {'filename': 'str', '*live':'bool' } }
 
+##
+# @xen-set-global-dirty-log:
+#
+# Enable or disable the global dirty log mode.
+#
+# @enable: true to enable, false to disable.
+#
+# Returns: nothing
+#
+# Since: 1.3
+#
+# Example:
+#
+# -> { "execute": "xen-set-global-dirty-log",
+#      "arguments": { "enable": true } }
+# <- { "return": {} }
+#
+##
+{ 'command': 'xen-set-global-dirty-log', 'data': { 'enable': 'bool' } }
+
+##
+# @xen-load-devices-state:
+#
+# Load the state of all devices from file. The RAM and the block devices
+# of the VM are not loaded by this command.
+#
+# @filename: the file to load the state of the devices from as binary
+#            data. See xen-save-devices-state.txt for a description of the binary
+#            format.
+#
+# Since: 2.7
+#
+# Example:
+#
+# -> { "execute": "xen-load-devices-state",
+#      "arguments": { "filename": "/tmp/resume" } }
+# <- { "return": {} }
+#
+##
+{ 'command': 'xen-load-devices-state', 'data': {'filename': 'str'} }
+
 ##
 # @xen-set-replication:
 #
diff --git a/qapi/misc.json b/qapi/misc.json
index 9813893269..afe936b45b 100644
--- a/qapi/misc.json
+++ b/qapi/misc.json
@@ -287,26 +287,6 @@
   'data': {'device': 'str', 'target': 'str', '*arg': 'str'},
   'features': [ 'deprecated' ] }
 
-##
-# @xen-set-global-dirty-log:
-#
-# Enable or disable the global dirty log mode.
-#
-# @enable: true to enable, false to disable.
-#
-# Returns: nothing
-#
-# Since: 1.3
-#
-# Example:
-#
-# -> { "execute": "xen-set-global-dirty-log",
-#      "arguments": { "enable": true } }
-# <- { "return": {} }
-#
-##
-{ 'command': 'xen-set-global-dirty-log', 'data': { 'enable': 'bool' } }
-
 ##
 # @getfd:
 #
@@ -606,24 +586,3 @@
 ##
 { 'enum': 'ReplayMode',
   'data': [ 'none', 'record', 'play' ] }
-
-##
-# @xen-load-devices-state:
-#
-# Load the state of all devices from file. The RAM and the block devices
-# of the VM are not loaded by this command.
-#
-# @filename: the file to load the state of the devices from as binary
-#            data. See xen-save-devices-state.txt for a description of the binary
-#            format.
-#
-# Since: 2.7
-#
-# Example:
-#
-# -> { "execute": "xen-load-devices-state",
-#      "arguments": { "filename": "/tmp/resume" } }
-# <- { "return": {} }
-#
-##
-{ 'command': 'xen-load-devices-state', 'data': {'filename': 'str'} }
diff --git a/accel/stubs/xen-stub.c b/accel/stubs/xen-stub.c
index 7ba0b697f4..7054965c48 100644
--- a/accel/stubs/xen-stub.c
+++ b/accel/stubs/xen-stub.c
@@ -7,7 +7,7 @@
 
 #include "qemu/osdep.h"
 #include "sysemu/xen.h"
-#include "qapi/qapi-commands-misc.h"
+#include "qapi/qapi-commands-migration.h"
 
 bool xen_allowed;
 
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index f3ababf33b..9519c33c09 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -24,7 +24,7 @@
 #include "hw/xen/xen-bus.h"
 #include "hw/xen/xen-x86.h"
 #include "qapi/error.h"
-#include "qapi/qapi-commands-misc.h"
+#include "qapi/qapi-commands-migration.h"
 #include "qemu/error-report.h"
 #include "qemu/main-loop.h"
 #include "qemu/range.h"
diff --git a/migration/savevm.c b/migration/savevm.c
index 34e4b71052..1fdf3f76c2 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -42,7 +42,6 @@
 #include "postcopy-ram.h"
 #include "qapi/error.h"
 #include "qapi/qapi-commands-migration.h"
-#include "qapi/qapi-commands-misc.h"
 #include "qapi/qmp/qerror.h"
 #include "qemu/error-report.h"
 #include "sysemu/cpus.h"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:41:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:41:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2140.6374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLJI-0003Ed-CO; Fri, 02 Oct 2020 13:41:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2140.6374; Fri, 02 Oct 2020 13:41:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLJI-0003EW-8P; Fri, 02 Oct 2020 13:41:20 +0000
Received: by outflank-mailman (input) for mailman id 2140;
 Fri, 02 Oct 2020 13:41:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
 id 1kOLJG-0003EC-BP
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:41:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 117bebe5-2da1-4548-87ea-cbd6c1e09877;
 Fri, 02 Oct 2020 13:41:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F40DAAD12;
 Fri,  2 Oct 2020 13:41:15 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
	id 1kOLJG-0003EC-BP
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:41:18 +0000
X-Inumbo-ID: 117bebe5-2da1-4548-87ea-cbd6c1e09877
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 117bebe5-2da1-4548-87ea-cbd6c1e09877;
	Fri, 02 Oct 2020 13:41:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601646076;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Z3ByYrh+VAhRxeH0eK8Na0KVOwY/vMeBd7zzpOzuyh8=;
	b=odvEcVR3MGe4ScG+CTx7+dM1KQK8SJGRd5CwKYZrKZOXio34wK3PH2wl9CUz9kMHYeYQk3
	BZPLzK9nIIqiothIjrit3Slk86uffisO5vN5J/leVvbHUXfwbIMT2Ft1ZKqM3IeJkWjqwQ
	TOwk2hwVtjDVFO77HnCPzt1saLmpGVM=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id F40DAAD12;
	Fri,  2 Oct 2020 13:41:15 +0000 (UTC)
Date: Fri, 2 Oct 2020 15:41:15 +0200
From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Oscar Salvador <osalvador@suse.de>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Dave Hansen <dave.hansen@intel.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Mike Rapoport <rppt@kernel.org>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>
Subject: Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in
 __free_pages_core()
Message-ID: <20201002134115.GJ4555@dhcp22.suse.cz>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-5-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200928182110.7050-5-david@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon 28-09-20 20:21:09, David Hildenbrand wrote:
> __free_pages_core() is used when exposing fresh memory to the buddy
> during system boot and when onlining memory in generic_online_page().
> 
> generic_online_page() is used in two cases:
> 
> 1. Direct memory onlining in online_pages().
> 2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>    balloon and virtio-mem), when parts of a section are kept
>    fake-offline to be fake-onlined later on.
> 
> In 1, we already place pages to the tail of the freelist. Pages will be
> freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
> via undo_isolate_page_range().
> 
> In 2, we currently don't implement a proper rule. In case of virtio-mem,
> where we currently always online MAX_ORDER - 1 pages, the pages will be
> placed to the HEAD of the freelist - undesirable. While the hyper-v
> balloon calls generic_online_page() with single pages, usually it will
> call it on successive single pages in a larger block.
> 
> The pages are fresh, so place them to the tail of the freelists and avoid
> the PCP. In __free_pages_core(), remove the now superfluous call to
> set_page_refcounted() and add a comment regarding page initialization and
> the refcount.
> 
> Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
> is usually of limited use in virtualized environments), we might want to
> shuffle after a sequence of generic_online_page() calls in the
> relevant callers.

It took some time to get through all the freeing paths with their subtle
differences, but this looks reasonable. You mention that this also
influences boot-time free memory ordering, but only very briefly. I do
not expect that to make a huge difference, but who knows. It makes some
sense to add pages in the order they appear in physical address space.

> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: "K. Y. Srinivasan" <kys@microsoft.com>
> Cc: Haiyang Zhang <haiyangz@microsoft.com>
> Cc: Stephen Hemminger <sthemmin@microsoft.com>
> Cc: Wei Liu <wei.liu@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>

That being said I do not see any fundamental problems.
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/page_alloc.c | 37 ++++++++++++++++++++++++-------------
>  1 file changed, 24 insertions(+), 13 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index d5a5f528b8ca..8a2134fe9947 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -270,7 +270,8 @@ bool pm_suspended_storage(void)
>  unsigned int pageblock_order __read_mostly;
>  #endif
>  
> -static void __free_pages_ok(struct page *page, unsigned int order);
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +			    fop_t fop_flags);
>  
>  /*
>   * results with 256, 32 in the lowmem_reserve sysctl:
> @@ -682,7 +683,7 @@ static void bad_page(struct page *page, const char *reason)
>  void free_compound_page(struct page *page)
>  {
>  	mem_cgroup_uncharge(page);
> -	__free_pages_ok(page, compound_order(page));
> +	__free_pages_ok(page, compound_order(page), FOP_NONE);
>  }
>  
>  void prep_compound_page(struct page *page, unsigned int order)
> @@ -1419,17 +1420,15 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  	spin_unlock(&zone->lock);
>  }
>  
> -static void free_one_page(struct zone *zone,
> -				struct page *page, unsigned long pfn,
> -				unsigned int order,
> -				int migratetype)
> +static void free_one_page(struct zone *zone, struct page *page, unsigned long pfn,
> +			  unsigned int order, int migratetype, fop_t fop_flags)
>  {
>  	spin_lock(&zone->lock);
>  	if (unlikely(has_isolate_pageblock(zone) ||
>  		is_migrate_isolate(migratetype))) {
>  		migratetype = get_pfnblock_migratetype(page, pfn);
>  	}
> -	__free_one_page(page, pfn, zone, order, migratetype, FOP_NONE);
> +	__free_one_page(page, pfn, zone, order, migratetype, fop_flags);
>  	spin_unlock(&zone->lock);
>  }
>  
> @@ -1507,7 +1506,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
>  	}
>  }
>  
> -static void __free_pages_ok(struct page *page, unsigned int order)
> +static void __free_pages_ok(struct page *page, unsigned int order,
> +			    fop_t fop_flags)
>  {
>  	unsigned long flags;
>  	int migratetype;
> @@ -1519,7 +1519,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
>  	migratetype = get_pfnblock_migratetype(page, pfn);
>  	local_irq_save(flags);
>  	__count_vm_events(PGFREE, 1 << order);
> -	free_one_page(page_zone(page), page, pfn, order, migratetype);
> +	free_one_page(page_zone(page), page, pfn, order, migratetype,
> +		      fop_flags);
>  	local_irq_restore(flags);
>  }
>  
> @@ -1529,6 +1530,11 @@ void __free_pages_core(struct page *page, unsigned int order)
>  	struct page *p = page;
>  	unsigned int loop;
>  
> +	/*
> +	 * When initializing the memmap, init_single_page() sets the refcount
> +	 * of all pages to 1 ("allocated"/"not free"). We have to set the
> +	 * refcount of all involved pages to 0.
> +	 */
>  	prefetchw(p);
>  	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
>  		prefetchw(p + 1);
> @@ -1539,8 +1545,12 @@ void __free_pages_core(struct page *page, unsigned int order)
>  	set_page_count(p, 0);
>  
>  	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> -	set_page_refcounted(page);
> -	__free_pages(page, order);
> +
> +	/*
> +	 * Bypass PCP and place fresh pages right to the tail, primarily
> +	 * relevant for memory onlining.
> +	 */
> +	__free_pages_ok(page, order, FOP_TO_TAIL);
>  }
>  
>  #ifdef CONFIG_NEED_MULTIPLE_NODES
> @@ -3171,7 +3181,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
>  	 */
>  	if (migratetype >= MIGRATE_PCPTYPES) {
>  		if (unlikely(is_migrate_isolate(migratetype))) {
> -			free_one_page(zone, page, pfn, 0, migratetype);
> +			free_one_page(zone, page, pfn, 0, migratetype,
> +				      FOP_NONE);
>  			return;
>  		}
>  		migratetype = MIGRATE_MOVABLE;
> @@ -5063,7 +5074,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
>  	if (order == 0)		/* Via pcp? */
>  		free_unref_page(page);
>  	else
> -		__free_pages_ok(page, order);
> +		__free_pages_ok(page, order, FOP_NONE);
>  }
>  
>  void __free_pages(struct page *page, unsigned int order)
> -- 
> 2.26.2
> 

-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:41:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:41:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2144.6386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLJS-0003Iu-ME; Fri, 02 Oct 2020 13:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2144.6386; Fri, 02 Oct 2020 13:41:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLJS-0003Il-Im; Fri, 02 Oct 2020 13:41:30 +0000
Received: by outflank-mailman (input) for mailman id 2144;
 Fri, 02 Oct 2020 13:41:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QOZ3=DJ=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kOLJQ-0003Hu-1W
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:41:29 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d7d1c44-1feb-4b6b-afa1-a9ecb2f100f8;
 Fri, 02 Oct 2020 13:41:25 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kOLJG-0008Pc-Cw; Fri, 02 Oct 2020 13:41:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=QOZ3=DJ=infradead.org=willy@srs-us1.protection.inumbo.net>)
	id 1kOLJQ-0003Hu-1W
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:41:29 +0000
X-Inumbo-ID: 1d7d1c44-1feb-4b6b-afa1-a9ecb2f100f8
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1d7d1c44-1feb-4b6b-afa1-a9ecb2f100f8;
	Fri, 02 Oct 2020 13:41:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=YAy0v19KwKZlKr9ru/zVpzMUsVTEeJ2e3pekJyodo14=; b=iexghG7snc7QxKkez1o5os1/3C
	VMsVigSDzAim81w0WG62063q1/2M6oleyQVttaq/DwGZpvUko1TAcHMiTc7Y71fOc776brlCHNpL9
	TF/rDja+6ZlYfSRQyLAmdcF816yQQD7Gh4cjTtX9zkHPhiQblkCbSro/WbJRcAl55o/FbkG8U4+cK
	ODEbYlsH/oL9/v8WuT6CkyJmXG3JPi8Quc8roDPCf51ax85lpPy9eX0zXlMe9YyE+ArPm6I6HfE9l
	/uwdBF13nEBfPVG49VpiveihT4b0xj7wbQ3rHDVvRI8ViBmr5WRPZYCia8pMZ2UvYjRxjqdaMXw/s
	NITHbeMw==;
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kOLJG-0008Pc-Cw; Fri, 02 Oct 2020 13:41:18 +0000
Date: Fri, 2 Oct 2020 14:41:18 +0100
From: Matthew Wilcox <willy@infradead.org>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Oscar Salvador <osalvador@suse.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Mike Rapoport <rppt@kernel.org>
Subject: Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of
 __free_one_page() to a proper flag
Message-ID: <20201002134118.GA20115@casper.infradead.org>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-2-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200928182110.7050-2-david@redhat.com>

On Mon, Sep 28, 2020 at 08:21:06PM +0200, David Hildenbrand wrote:
> Let's prepare for additional flags and avoid long parameter lists of bools.
> Follow-up patches will also make use of the flags in __free_pages_ok(),
> however, I wasn't able to come up with a better name for the type - should
> be good enough for internal purposes.

> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
> +typedef int __bitwise fop_t;

That invites confusion with f_op.  There's no reason to use _t as a suffix
here ... why not free_f?

> +/*
> + * Skip free page reporting notification for the (possibly merged) page. (will
> + * *not* mark the page reported, only skip the notification).

... Don't you mean "will not skip marking the page as reported, only
skip the notification"?

*reads code*

No, I'm still confused.  What does this sentence mean?

Would it help to have a FOP_DEFAULT that has FOP_REPORT_NOTIFY set and
then a FOP_SKIP_REPORT_NOTIFY define that is 0?

> -static inline void __free_one_page(struct page *page,
> -		unsigned long pfn,
> -		struct zone *zone, unsigned int order,
> -		int migratetype, bool report)
> +static inline void __free_one_page(struct page *page, unsigned long pfn,
> +				   struct zone *zone, unsigned int order,
> +				   int migratetype, fop_t fop_flags)

Please don't over-indent like this.

static inline void __free_one_page(struct page *page, unsigned long pfn,
		struct zone *zone, unsigned int order, int migratetype,
		fop_t fop_flags)

reads just as well and then if someone needs to delete the 'static'
later, they don't need to fiddle around with subsequent lines getting
the whitespace to line up again.



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:41:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2146.6398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLJp-0003SR-8m; Fri, 02 Oct 2020 13:41:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2146.6398; Fri, 02 Oct 2020 13:41:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLJp-0003SK-4l; Fri, 02 Oct 2020 13:41:53 +0000
Received: by outflank-mailman (input) for mailman id 2146;
 Fri, 02 Oct 2020 13:41:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
 id 1kOLJo-0003Ry-5B
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:41:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d9b034f0-2f56-46c5-aa08-a35edeb56b40;
 Fri, 02 Oct 2020 13:41:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8FBA6AD12;
 Fri,  2 Oct 2020 13:41:50 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=c/W5=DJ=suse.com=mhocko@srs-us1.protection.inumbo.net>)
	id 1kOLJo-0003Ry-5B
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:41:52 +0000
X-Inumbo-ID: d9b034f0-2f56-46c5-aa08-a35edeb56b40
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d9b034f0-2f56-46c5-aa08-a35edeb56b40;
	Fri, 02 Oct 2020 13:41:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601646110;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Ex35TTNhtQo4+POapz9XDyIVapheLz/KPghDOJAZTmI=;
	b=oB4MvVDI1JsMmEp5fS6ih0IDJ3PgAF/1fiyI66oaGiIt+W4qf/INGGv0iE18qGvXKec0xO
	qLuuPJacZ9haS8eAxNIVDEfYmij/eqsIT1yxOLKQW7XVaOvlwlgI+fJex9EAJ+KK4SDYFi
	OdM6RJjLLtPQt3GbReg3chiA056/S3g=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 8FBA6AD12;
	Fri,  2 Oct 2020 13:41:50 +0000 (UTC)
Date: Fri, 2 Oct 2020 15:41:49 +0200
From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Dave Hansen <dave.hansen@intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Oscar Salvador <osalvador@suse.de>, Mike Rapoport <rppt@kernel.org>
Subject: Re: [PATCH v1 5/5] mm/memory_hotplug: update comment regarding zone
 shuffling
Message-ID: <20201002134149.GK4555@dhcp22.suse.cz>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-6-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200928182110.7050-6-david@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon 28-09-20 20:21:10, David Hildenbrand wrote:
> As we no longer shuffle via generic_online_page() and when undoing
> isolation, we can simplify the comment.
> 
> We now effectively shuffle only once (properly) when onlining new
> memory.
> 
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memory_hotplug.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 9db80ee29caa..c589bd8801bb 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -859,13 +859,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>  	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
>  
>  	/*
> -	 * When exposing larger, physically contiguous memory areas to the
> -	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
> -	 * them either to the head or the tail of the freelist) is only helpful
> -	 * for maintaining the shuffle, but not for creating the initial
> -	 * shuffle. Shuffle the whole zone to make sure the just onlined pages
> -	 * are properly distributed across the whole freelist. Make sure to
> -	 * shuffle once pageblocks are no longer isolated.
> +	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
> +	 * the tail of the freelist when undoing isolation). Shuffle the whole
> +	 * zone to make sure the just onlined pages are properly distributed
> +	 * across the whole freelist - to create an initial shuffle.
>  	 */
>  	shuffle_zone(zone);
>  
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 13:56:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 13:56:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2161.6412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLY3-0004bY-LA; Fri, 02 Oct 2020 13:56:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2161.6412; Fri, 02 Oct 2020 13:56:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLY3-0004bR-HF; Fri, 02 Oct 2020 13:56:35 +0000
Received: by outflank-mailman (input) for mailman id 2161;
 Fri, 02 Oct 2020 13:56:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOLY2-0004aC-9j
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:56:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d2089a5-7260-4ad0-a7bc-059e25f0a43c;
 Fri, 02 Oct 2020 13:56:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOLXv-0000Og-QT; Fri, 02 Oct 2020 13:56:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOLXv-00073K-It; Fri, 02 Oct 2020 13:56:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOLXv-0008Mh-IO; Fri, 02 Oct 2020 13:56:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOLY2-0004aC-9j
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 13:56:34 +0000
X-Inumbo-ID: 8d2089a5-7260-4ad0-a7bc-059e25f0a43c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8d2089a5-7260-4ad0-a7bc-059e25f0a43c;
	Fri, 02 Oct 2020 13:56:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BD2QsPq1BNEYN4CzERArdnc7qSnsztltfUsNI+5S7Zk=; b=RQRfMMJjjdv7ATXpAjfyqnS+xC
	kCqhkjSw43pGsOSH1AsffSQaPLPw5U2bXIiVcGnJCHYqiSDse31UYRKDdrQjIPuMu8f7LXarsvGxt
	KZqZPyRH6EyNgDXEs87EbFDKSJjsilPhvYmOXfFsIzJ1TWLwLCXbuOIZHO+bzFhSaXB8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOLXv-0000Og-QT; Fri, 02 Oct 2020 13:56:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOLXv-00073K-It; Fri, 02 Oct 2020 13:56:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOLXv-0008Mh-IO; Fri, 02 Oct 2020 13:56:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155321-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155321: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=59b27f360e3d9dc0378c1288e67a91fa41a77158
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 13:56:27 +0000

flight 155321 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155321/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  59b27f360e3d9dc0378c1288e67a91fa41a77158
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    2 days
Failing since        155144  2020-09-30 16:01:24 Z    1 days   14 attempts
Testing same since   155310  2020-10-02 08:01:29 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 387 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 14:08:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 14:08:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2166.6428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLj5-0005eM-OS; Fri, 02 Oct 2020 14:07:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2166.6428; Fri, 02 Oct 2020 14:07:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLj5-0005eF-L4; Fri, 02 Oct 2020 14:07:59 +0000
Received: by outflank-mailman (input) for mailman id 2166;
 Fri, 02 Oct 2020 14:07:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3+am=DJ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kOLj4-0005eA-JN
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:07:58 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::602])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fb5181d-317d-45a6-abc7-ae94184ac23d;
 Fri, 02 Oct 2020 14:07:57 +0000 (UTC)
Received: from AM0PR10CA0032.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:20b:150::12)
 by AM6PR08MB4231.eurprd08.prod.outlook.com (2603:10a6:20b:73::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Fri, 2 Oct
 2020 14:07:54 +0000
Received: from VE1EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:150:cafe::b2) by AM0PR10CA0032.outlook.office365.com
 (2603:10a6:20b:150::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.36 via Frontend
 Transport; Fri, 2 Oct 2020 14:07:54 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT031.mail.protection.outlook.com (10.152.18.69) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Fri, 2 Oct 2020 14:07:54 +0000
Received: ("Tessian outbound a0bffebca527:v64");
 Fri, 02 Oct 2020 14:07:53 +0000
Received: from 9cfead6c9a6f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D0787F39-83C2-4CAC-9B58-129735500F6B.1; 
 Fri, 02 Oct 2020 14:07:48 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9cfead6c9a6f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 02 Oct 2020 14:07:48 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3385.eurprd08.prod.outlook.com (2603:10a6:10:4b::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.37; Fri, 2 Oct
 2020 14:07:46 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cccc:2933:d4d3:1a9e%6]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 14:07:46 +0000
X-Inumbo-ID: 6fb5181d-317d-45a6-abc7-ae94184ac23d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2v1Px/xqa6tlO61N6zSUpvVyb9yNcQ6ljMu/IvTiRa8=;
 b=Zbd9bO/kRUTCX8j+IF9bDCBhSM/AQv1kLDkpp9EoMSWCcms0FEl/T7/oazxkB1c8EN4gowpyAoPJwQxbnMseAFu2cxwZyFq2gTVu4/ORMc1eP8Hc+NZ7exdWBQOX11uLt9PTzpjQVbpy3YOx2VumV2AQSL9N/p5mDrA18bEqtio=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7f2a98bb643b2f3c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dXEHUMR0bDiup6celqdkGbX/U0UvHRdxj4iE3AbYxRFvaYszGbLnEiw9d0aqWYPkufcfnGbbERBzC3DuQVm1T+IJLtF6IImiKmQliFykhRuaFFXX0yVnQeTNEcalXXAv6t2Rzadm0iK2nswwf4xjdSxSdajkJ+58FSWJpZ8SgwkQ+IaA1z81AM7Cbt0DP0mVO/+7dw0grt7MNxrT0HrUiWcEDJSxCHUcknaxVEPLV6Z93lp8iR6qAiZXsy3SboAV43ccMxWmLqD3nWrQS92QmVNroQ2WQg9vD/7ioak3P5pfWMQS/bfqDGk3AD/+1PKdvZgQX9k6Itv2wfz2DNpZBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2v1Px/xqa6tlO61N6zSUpvVyb9yNcQ6ljMu/IvTiRa8=;
 b=lAcCL6tNiqXcxRD99IAGYlqChkWBR31Ttfs8cOhLqS+ZXg4e4FQIuuX60/jUSE6CSrUOV+MU2dXsc8IBefKmoVm5wpH3KQrITc/whWEhs95i8t7oD/n3Xu4wXwLwJP1Ntc3pta8dqyEMLYg7fV0QOdBb0UKaEelNDM/DqmB0/J3JoibM4UC55K8XcMMxfIu6xZmbXQa0bxRjaCEBTHox2xLu4IlXr2FsQC0Yk/nuCcwOSDNrotWC1zRdQ+VtcOkdk7hkQeJ9keHYYC4cSbCCvUl1k4qsdSHlNA5m1PCN9VpMMopzco0uHLoxbd6Ohll1YDvnMk3t0zr6NFwAaLg4bQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: Jan Beulich <jbeulich@suse.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v3] tools/libs/stat: fix broken build
Thread-Topic: [PATCH v3] tools/libs/stat: fix broken build
Thread-Index:
 AQHWiQYtlvZlYAgRJ0WaZGJ4HThwo6ln+foAgBsWa4CAAMywgIAAGQiAgAAIpgCAAAJuAIAABycAgAAuo4CAAAj+AIAABWkAgAAzXoA=
Date: Fri, 2 Oct 2020 14:07:45 +0000
Message-ID: <270E5CCF-6331-4C0A-848D-9F7305DFFA44@arm.com>
References: <20200912130836.11024-1-jgross@suse.com>
 <5232FD74-9636-4EF4-81F8-2EF7EE21D326@arm.com>
 <87CA2B55-B372-458C-82CC-2423B8AC3EEE@arm.com>
 <f12092a1-119f-ce68-8804-1a8772f1a923@suse.com>
 <f6853e47-27bd-efcd-71ae-b28e7ea1dd4d@suse.com>
 <8ddad01e-cf1a-7752-1371-a505fb26dc47@suse.com>
 <90a39759-63c1-28b9-f112-d8b3cc083565@suse.com>
 <558774ab-92cb-90ae-3936-4f9cc9d56fd0@suse.com>
 <5B52FDF2-18DA-4342-9280-0D497FAB6532@arm.com>
 <75346ac2-20f3-c868-4ac9-0d5a2e65d436@suse.com>
 <4F380E40-9AEB-4579-ADF7-833CCB5C5D54@arm.com>
In-Reply-To: <4F380E40-9AEB-4579-ADF7-833CCB5C5D54@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: dcafab15-9df0-4e53-4ab6-08d866dc8e9f
x-ms-traffictypediagnostic: DB7PR08MB3385:|AM6PR08MB4231:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB4231B9DD2EF237EF7B6D99A59D310@AM6PR08MB4231.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 vkPd/I6yY1ZL6cqd3s09o8Q/nHdNwkJm2YvBXVkD0gOlyGo1mHu1cVsOryh681jWCbuhAT/Koo6VZmzORvrKJ6V5acZSibE0YVWahkanCR9JyYA0pBDp0WJnOsOhZV4LC47SW9jK7cPR9r4/chCOWbh93SxnNglA9i5a7T4YpxJ0WUFlaPWN+67+kZO7kbA+TxEuMwtmM+EFspjG0RAhbDEnbcPFPT4evELVdOZfq61AsNR5yjqgmRzB6rkrhbhM/XVkvS5ygPqOBo0RqoSI//ZlYY7Lli7/iju2uWRFLwVrsHYL+atyevAfRHopqI2WL5PGdtc3apPxLuWY5+P2AA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(396003)(376002)(346002)(136003)(8936002)(66574015)(83380400001)(2616005)(6486002)(6916009)(2906002)(478600001)(8676002)(6512007)(86362001)(33656002)(66946007)(66556008)(36756003)(4326008)(66476007)(66446008)(64756008)(5660300002)(26005)(76116006)(91956017)(186003)(71200400001)(54906003)(6506007)(53546011)(316002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 9q+EVNEGjTvJUFeoGWKZCGtWD3HWQJNMbihhSh94qH38zLqby2fP1z8wU8z7lZilgqkNLZX2dMAx8NqVu818hxAETcyb0mXw2l8S9WYo0/u7sQj9pEKkOexIu+SUqRNRgzIip7DFQherCtAk4P+cRKG3AQHXlwWZt2vPHmHRqGtJJtJU8+y1eYwWYJ+Tsa7Q3d4f3z6UTxQPTpxzqV3ELvs3alHVEZJxpJE883QaeTqbH8RmrYgDmnbmSGlVXFuEz7YGIv3st/ZKfFjDOv7RmfPHu1SyiymLCfrfBs0590NoyHJ73U+IyIjLkkP3phFvkbyC7OdvRU9sA+BK64ql2RCn1nWbcoluC1VIfmsbmC5qDnwlPtzBYusvDGZ1nrsrtncBAbkiCB97u8OXSAZrU8aCcuWph21J9lyMbmzbH6ivpjiEBOAq20tpyFt3TsClPhiTOiY4GvPseLh/+X85COSCOMx9zuXUdAhNnFrSCX8NqPco2kfom6vNHnh3yaAm04LJga0Q2nVD6lwYw6LWulN9zGhEXbHwvLGKnKG63yZ4/kfHmVNQS87g2qbY+dlIg1EqtIwjI8qavDcH/mkngZ2zz2coX2sptglzOzXVlkgFjUqjBjfj6ZopZtjuKnOfy2RJ3Dg3LWHlHRoIb8ZZFQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <1A0515FE9AE4624381B61152792B5A25@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3385
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8d39e78f-83cb-4549-78f6-08d866dc89b9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	1jMpzhustVDv/JVvLj2gS4SpLXnbL0+XUGRdXdIQrvabtiOjQaRjgdxtO0st/YnFXXMofdMJZjmG8GqcE0hbWL99PUO6OvjhgmvpQNABIwjXySqTX1G9mD73xhEhLreZh6HIK1lYa1EZiWHB8UPFa2J5kVtvD5BoVRLWWTmY0dgxxWzBhIHDS+NCuCXw/4XFUZuPwFFhbi56r3fzKTe8vI0ENnYI89GN3h6aKR+/wdhPAaMiChN4VDcAw7fasJz+ljC8sQ4/qJaERtMeL+mRpySM46Rpg2/WL+a5PZSQj6K/CCWJl8N2ss/WBLSjCyy2y1Fn0lbZo7VjSgLJI/g9JHwetoQOzy20JTXTMObArdvDxR5v7NDXaHnTvKCg11rXy6adGqeKjtLjy6jDzK8ZMg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(39860400002)(376002)(46966005)(53546011)(6506007)(2616005)(4326008)(6512007)(83380400001)(336012)(82310400003)(5660300002)(8676002)(8936002)(86362001)(36756003)(70206006)(478600001)(26005)(36906005)(70586007)(356005)(47076004)(66574015)(186003)(6486002)(82740400003)(316002)(6862004)(54906003)(33656002)(81166007)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 14:07:54.3389
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dcafab15-9df0-4e53-4ab6-08d866dc8e9f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4231


> On 2 Oct 2020, at 12:03, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
> 
> 
> 
>> On 2 Oct 2020, at 11:44, Jürgen Groß <jgross@suse.com> wrote:
>> 
>> On 02.10.20 12:12, Bertrand Marquis wrote:
>>> Hi,
>>>> On 2 Oct 2020, at 08:25, Jürgen Groß <jgross@suse.com> wrote:
>>>> 
>>>> On 02.10.20 08:59, Jan Beulich wrote:
>>>>> On 02.10.2020 08:51, Jürgen Groß wrote:
>>>>>> On 02.10.20 08:20, Jan Beulich wrote:
>>>>>>> On 02.10.2020 06:50, Jürgen Groß wrote:
>>>>>>>> On 01.10.20 18:38, Bertrand Marquis wrote:
>>>>>>>>> Hi Juergen,
>>>>>>>>> 
>>>>>>>>>> On 14 Sep 2020, at 11:58, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>>> On 12 Sep 2020, at 14:08, Juergen Gross <jgross@suse.com> wrote:
>>>>>>>>>>> 
>>>>>>>>>>> Making getBridge() static triggered a build error with some gcc versions:
>>>>>>>>>>> 
>>>>>>>>>>> error: 'strncpy' output may be truncated copying 15 bytes from a string of
>>>>>>>>>>> length 255 [-Werror=stringop-truncation]
>>>>>>>>>>> 
>>>>>>>>>>> Fix that by using a buffer with 256 bytes instead.
>>>>>>>>>>> 
>>>>>>>>>>> Fixes: 6d0ec053907794 ("tools: split libxenstat into new tools/libs/stat directory")
>>>>>>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>>>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>>>>> 
>>>>>>>>> Sorry i have to come back on this one.
>>>>>>>>> 
>>>>>>>>> I still see an error compiling with Yocto on this one:
>>>>>>>>> |     inlined from 'xenstat_collect_networks' at xenstat_linux.c:306:2:
>>>>>>>>> | xenstat_linux.c:81:6: error: 'strncpy' output may be truncated copying 255 bytes from a string of length 255 [-Werror=stringop-truncation]
>>>>>>>>> |    81 |      strncpy(result, de->d_name, resultLen);
>>>>>>>>> |       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>>>>>>> 
>>>>>>>>> To solve it, I need to define devBridge[257] as devNoBrideg.
>>>>>>>> 
>>>>>>>> IMHO this is a real compiler error.
>>>>>>>> 
>>>>>>>> de->d_name is an array of 256 bytes, so doing strncpy() from that to
>>>>>>>> another array of 256 bytes with a length of 256 won't truncate anything.
>>>>>>> 
>>>>>>> That's a matter of how you look at it, I think: If the original array
>>>>>>> doesn't hold a nul-terminated string, the destination array won't
>>>>>>> either, yet the common goal of strncpy() is to yield a properly nul-
>>>>>>> terminated string. IOW the warning may be since the standard even has
>>>>>>> a specific foot note to point out this possible pitfall.
>>>>>> 
>>>>>> If the source doesn't hold a nul-terminated string there will still be
>>>>>> 256 bytes copied, so there is no truncation done during strncpy().
>>>>>> 
>>>>>> In fact there is no way to use strncpy() in a safe way on a fixed sized
>>>>>> source array with the above semantics: either the target is larger than
>>>>>> the source and length is at least sizeof(source) + 1, resulting in a
>>>>>> possible read beyond the end of source, or the target is the same length
>>>>>> leading to the error.
>>>>> I agree with all of what you say, but I can also see why said foot note
>>>>> alone may have motivated the emission of the warning.
>>>> 
>>>> The motivation can be explained, yes, but it is wrong. strncpy() is not
>>>> limited to source arrays of unknown length. So this warning is making
>>>> strncpy() unusable for fixed sized source strings and -Werror. And that
>>>> is nothing a compiler should be allowed to do, hence a compiler bug.
>>> I do agree that in this case the compiler is doing to much.
>> 
>> It is plain wrong here. Rendering a Posix defined function unusable for
>> a completely legal use case is in no way a matter of taste or of "doing
>> to much". It is a bug.
> 
> Agree.
> 
>> 
>>> We could also choose to turn off the warning either using pragma (which
>>> i really do not like) or by adding a cflag for this specific file (but this might
>>> hit us later in other places).
>>> All in all this currently makes Xen master and staging not possible to
>>> compile with Yocto so we need to find a solution as this will also
>>> come in any distribution using a new compiler,
>> 
>> A variant you didn't mention would be open coding of strncpy() (or
>> having a related inline function in a common header). This route would
>> be the one I'd prefer in case the compiler guys insist on the behavior
>> being fine.
> 
> True this possible, even though I do not like to modify the code that deeply
> for one specific compiler.
> 
>> 
>> You didn't tell us which compiler is being used and whether it really is
>> up to date. A workaround might be to set EXTRA_CFLAGS_XEN_TOOLS to
>> "-Wno-stringop-truncation" for the build.
> 
> That’s what i meant by “adding a cflag”, as you suggest we could also
> make it global (and not limit it to this file).
> 
> The compiler I am using is the one from Yocto Gatesgarth (release coming in
> october): gcc version 10.2.0 (released in july 2020).

After some discussion, I will propose a patch to solve the issue using memcpy
instead of strncpy as in this specific case we know that we can truncate the
string.
This will also allow to revert the big buffers in stack.

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 14:22:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 14:22:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2189.6467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLx3-0007Pi-Ov; Fri, 02 Oct 2020 14:22:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2189.6467; Fri, 02 Oct 2020 14:22:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLx3-0007PY-Lp; Fri, 02 Oct 2020 14:22:25 +0000
Received: by outflank-mailman (input) for mailman id 2189;
 Fri, 02 Oct 2020 14:22:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOLx1-0007Ll-Gs
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:22:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3e0e85b-5246-4578-aa00-1090ff0bacdc;
 Fri, 02 Oct 2020 14:22:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C7007B1B8;
 Fri,  2 Oct 2020 14:22:16 +0000 (UTC)
X-Inumbo-ID: e3e0e85b-5246-4578-aa00-1090ff0bacdc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601648537;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Y4XP1g9Mr04Iv/bDO3Rv5p5eaw1xz0yirSPBgYk+Q6U=;
	b=jW1NFAJsHym+mJgJO96Iz0WZ2kfTf/UrhdoT7ZSh5PNTGGJ8B+6PFAtsOdJuUXqaB4ltDl
	dDNOWMbEOLI9KJM15SmZec/VwVyRq8dMF52ENylgmmRkvUmIgZqhGU7ZeM2QAdqdNTOc95
	deufqX0FV1yhlRe6N6rihAH9vZ2rzaY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>
Subject: [PATCH 1/3] tools/libs: move official headers to common directory
Date: Fri,  2 Oct 2020 16:22:12 +0200
Message-Id: <20201002142214.3438-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002142214.3438-1-jgross@suse.com>
References: <20201002142214.3438-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of each library having its own include directory, move the
official headers to tools/include. This removes the need to link those
headers into tools/include, and there is no longer any need for
library-specific include paths when building Xen.

While at it, remove the setting of the unused variable
PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                                    |  5 ++--
 stubdom/mini-os.mk                            |  2 +-
 tools/Rules.mk                                |  5 ++--
 tools/{libs/vchan => }/include/libxenvchan.h  |  0
 tools/{libs/light => }/include/libxl.h        |  0
 tools/{libs/light => }/include/libxl_event.h  |  0
 tools/{libs/light => }/include/libxl_json.h   |  0
 tools/{libs/light => }/include/libxl_utils.h  |  0
 tools/{libs/light => }/include/libxl_uuid.h   |  0
 tools/{libs/util => }/include/libxlutil.h     |  0
 tools/{libs/call => }/include/xencall.h       |  0
 tools/{libs/ctrl => }/include/xenctrl.h       |  0
 .../{libs/ctrl => }/include/xenctrl_compat.h  |  0
 .../devicemodel => }/include/xendevicemodel.h |  0
 tools/{libs/evtchn => }/include/xenevtchn.h   |  0
 .../include/xenforeignmemory.h                |  0
 tools/{libs/gnttab => }/include/xengnttab.h   |  0
 tools/{libs/guest => }/include/xenguest.h     |  0
 tools/{libs/hypfs => }/include/xenhypfs.h     |  0
 tools/{libs/stat => }/include/xenstat.h       |  0
 .../compat => include/xenstore-compat}/xs.h   |  0
 .../xenstore-compat}/xs_lib.h                 |  0
 tools/{libs/store => }/include/xenstore.h     |  0
 tools/{xenstore => include}/xenstore_lib.h    |  0
 .../{libs/toolcore => }/include/xentoolcore.h |  0
 .../include/xentoolcore_internal.h            |  0
 tools/{libs/toollog => }/include/xentoollog.h |  0
 tools/libs/call/Makefile                      |  3 ---
 tools/libs/ctrl/Makefile                      |  3 ---
 tools/libs/devicemodel/Makefile               |  3 ---
 tools/libs/evtchn/Makefile                    |  2 --
 tools/libs/foreignmemory/Makefile             |  3 ---
 tools/libs/gnttab/Makefile                    |  3 ---
 tools/libs/guest/Makefile                     |  3 ---
 tools/libs/hypfs/Makefile                     |  3 ---
 tools/libs/libs.mk                            | 10 +++----
 tools/libs/light/Makefile                     | 27 +++++++++----------
 tools/libs/stat/Makefile                      |  2 --
 tools/libs/store/Makefile                     | 11 +++-----
 tools/libs/toolcore/Makefile                  |  9 +++----
 tools/libs/toollog/Makefile                   |  2 --
 tools/libs/util/Makefile                      |  3 ---
 tools/libs/vchan/Makefile                     |  3 ---
 tools/ocaml/libs/xentoollog/Makefile          |  2 +-
 tools/ocaml/libs/xentoollog/genlevels.py      |  2 +-
 45 files changed, 30 insertions(+), 76 deletions(-)
 rename tools/{libs/vchan => }/include/libxenvchan.h (100%)
 rename tools/{libs/light => }/include/libxl.h (100%)
 rename tools/{libs/light => }/include/libxl_event.h (100%)
 rename tools/{libs/light => }/include/libxl_json.h (100%)
 rename tools/{libs/light => }/include/libxl_utils.h (100%)
 rename tools/{libs/light => }/include/libxl_uuid.h (100%)
 rename tools/{libs/util => }/include/libxlutil.h (100%)
 rename tools/{libs/call => }/include/xencall.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl_compat.h (100%)
 rename tools/{libs/devicemodel => }/include/xendevicemodel.h (100%)
 rename tools/{libs/evtchn => }/include/xenevtchn.h (100%)
 rename tools/{libs/foreignmemory => }/include/xenforeignmemory.h (100%)
 rename tools/{libs/gnttab => }/include/xengnttab.h (100%)
 rename tools/{libs/guest => }/include/xenguest.h (100%)
 rename tools/{libs/hypfs => }/include/xenhypfs.h (100%)
 rename tools/{libs/stat => }/include/xenstat.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs_lib.h (100%)
 rename tools/{libs/store => }/include/xenstore.h (100%)
 rename tools/{xenstore => include}/xenstore_lib.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore_internal.h (100%)
 rename tools/{libs/toollog => }/include/xentoollog.h (100%)

diff --git a/.gitignore b/.gitignore
index 188495783e..0515ca0aca 100644
--- a/.gitignore
+++ b/.gitignore
@@ -143,7 +143,6 @@ tools/libs/light/test_timedereg
 tools/libs/light/test_fdderegrace
 tools/libs/light/tmp.*
 tools/libs/light/xenlight.pc
-tools/libs/light/include/_*.h
 tools/libs/stat/_paths.h
 tools/libs/stat/headers.chk
 tools/libs/stat/libxenstat.map
@@ -153,7 +152,6 @@ tools/libs/store/list.h
 tools/libs/store/utils.h
 tools/libs/store/xenstore.pc
 tools/libs/store/xs_lib.c
-tools/libs/store/include/xenstore_lib.h
 tools/libs/util/*.pc
 tools/libs/util/_paths.h
 tools/libs/util/libxlu_cfg_y.output
@@ -231,7 +229,8 @@ tools/hotplug/Linux/xendomains
 tools/hotplug/NetBSD/rc.d/xencommons
 tools/hotplug/NetBSD/rc.d/xendriverdomain
 tools/include/acpi
-tools/include/*.h
+tools/include/_libxl*.h
+tools/include/_xentoolcore_list.h
 tools/include/xen/*
 tools/include/xen-xsm/*
 tools/include/xen-foreign/*.(c|h|size)
diff --git a/stubdom/mini-os.mk b/stubdom/mini-os.mk
index 420e9a8771..7e4968e026 100644
--- a/stubdom/mini-os.mk
+++ b/stubdom/mini-os.mk
@@ -5,7 +5,7 @@
 # XEN_ROOT
 # MINIOS_TARGET_ARCH
 
-XENSTORE_CPPFLAGS = -isystem $(XEN_ROOT)/tools/libs/store/include
+XENSTORE_CPPFLAGS = -isystem $(XEN_ROOT)/tools/include
 TOOLCORE_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore
 TOOLLOG_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog
 EVTCHN_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn
diff --git a/tools/Rules.mk b/tools/Rules.mk
index f3e0078927..f61da81f4a 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -87,7 +87,7 @@ endif
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
- CFLAGS_libxen$(1) = -I$$(XEN_libxen$(1))/include $$(CFLAGS_xeninclude)
+ CFLAGS_libxen$(1) = $$(CFLAGS_xeninclude)
  SHDEPS_libxen$(1) = $$(foreach use,$$(USELIBS_$(1)),$$(SHLIB_libxen$$(use)))
  LDLIBS_libxen$(1) = $$(SHDEPS_libxen$(1)) $$(XEN_libxen$(1))/lib$$(FILENAME_$(1))$$(libextension)
  SHLIB_libxen$(1) = $$(SHDEPS_libxen$(1)) -Wl,-rpath-link=$$(XEN_libxen$(1))
@@ -97,8 +97,7 @@ $(foreach lib,$(LIBS_LIBS),$(eval $(call LIB_defs,$(lib))))
 
 # code which compiles against libxenctrl get __XEN_TOOLS__ and
 # therefore sees the unstable hypercall interfaces.
-CFLAGS_libxenctrl += $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) -D__XEN_TOOLS__
-CFLAGS_libxenguest += $(CFLAGS_libxenevtchn) $(CFLAGS_libxenforeignmemory)
+CFLAGS_libxenctrl += -D__XEN_TOOLS__
 
 ifeq ($(CONFIG_Linux),y)
 LDLIBS_libxenstore += -ldl
diff --git a/tools/libs/vchan/include/libxenvchan.h b/tools/include/libxenvchan.h
similarity index 100%
rename from tools/libs/vchan/include/libxenvchan.h
rename to tools/include/libxenvchan.h
diff --git a/tools/libs/light/include/libxl.h b/tools/include/libxl.h
similarity index 100%
rename from tools/libs/light/include/libxl.h
rename to tools/include/libxl.h
diff --git a/tools/libs/light/include/libxl_event.h b/tools/include/libxl_event.h
similarity index 100%
rename from tools/libs/light/include/libxl_event.h
rename to tools/include/libxl_event.h
diff --git a/tools/libs/light/include/libxl_json.h b/tools/include/libxl_json.h
similarity index 100%
rename from tools/libs/light/include/libxl_json.h
rename to tools/include/libxl_json.h
diff --git a/tools/libs/light/include/libxl_utils.h b/tools/include/libxl_utils.h
similarity index 100%
rename from tools/libs/light/include/libxl_utils.h
rename to tools/include/libxl_utils.h
diff --git a/tools/libs/light/include/libxl_uuid.h b/tools/include/libxl_uuid.h
similarity index 100%
rename from tools/libs/light/include/libxl_uuid.h
rename to tools/include/libxl_uuid.h
diff --git a/tools/libs/util/include/libxlutil.h b/tools/include/libxlutil.h
similarity index 100%
rename from tools/libs/util/include/libxlutil.h
rename to tools/include/libxlutil.h
diff --git a/tools/libs/call/include/xencall.h b/tools/include/xencall.h
similarity index 100%
rename from tools/libs/call/include/xencall.h
rename to tools/include/xencall.h
diff --git a/tools/libs/ctrl/include/xenctrl.h b/tools/include/xenctrl.h
similarity index 100%
rename from tools/libs/ctrl/include/xenctrl.h
rename to tools/include/xenctrl.h
diff --git a/tools/libs/ctrl/include/xenctrl_compat.h b/tools/include/xenctrl_compat.h
similarity index 100%
rename from tools/libs/ctrl/include/xenctrl_compat.h
rename to tools/include/xenctrl_compat.h
diff --git a/tools/libs/devicemodel/include/xendevicemodel.h b/tools/include/xendevicemodel.h
similarity index 100%
rename from tools/libs/devicemodel/include/xendevicemodel.h
rename to tools/include/xendevicemodel.h
diff --git a/tools/libs/evtchn/include/xenevtchn.h b/tools/include/xenevtchn.h
similarity index 100%
rename from tools/libs/evtchn/include/xenevtchn.h
rename to tools/include/xenevtchn.h
diff --git a/tools/libs/foreignmemory/include/xenforeignmemory.h b/tools/include/xenforeignmemory.h
similarity index 100%
rename from tools/libs/foreignmemory/include/xenforeignmemory.h
rename to tools/include/xenforeignmemory.h
diff --git a/tools/libs/gnttab/include/xengnttab.h b/tools/include/xengnttab.h
similarity index 100%
rename from tools/libs/gnttab/include/xengnttab.h
rename to tools/include/xengnttab.h
diff --git a/tools/libs/guest/include/xenguest.h b/tools/include/xenguest.h
similarity index 100%
rename from tools/libs/guest/include/xenguest.h
rename to tools/include/xenguest.h
diff --git a/tools/libs/hypfs/include/xenhypfs.h b/tools/include/xenhypfs.h
similarity index 100%
rename from tools/libs/hypfs/include/xenhypfs.h
rename to tools/include/xenhypfs.h
diff --git a/tools/libs/stat/include/xenstat.h b/tools/include/xenstat.h
similarity index 100%
rename from tools/libs/stat/include/xenstat.h
rename to tools/include/xenstat.h
diff --git a/tools/libs/store/include/compat/xs.h b/tools/include/xenstore-compat/xs.h
similarity index 100%
rename from tools/libs/store/include/compat/xs.h
rename to tools/include/xenstore-compat/xs.h
diff --git a/tools/libs/store/include/compat/xs_lib.h b/tools/include/xenstore-compat/xs_lib.h
similarity index 100%
rename from tools/libs/store/include/compat/xs_lib.h
rename to tools/include/xenstore-compat/xs_lib.h
diff --git a/tools/libs/store/include/xenstore.h b/tools/include/xenstore.h
similarity index 100%
rename from tools/libs/store/include/xenstore.h
rename to tools/include/xenstore.h
diff --git a/tools/xenstore/xenstore_lib.h b/tools/include/xenstore_lib.h
similarity index 100%
rename from tools/xenstore/xenstore_lib.h
rename to tools/include/xenstore_lib.h
diff --git a/tools/libs/toolcore/include/xentoolcore.h b/tools/include/xentoolcore.h
similarity index 100%
rename from tools/libs/toolcore/include/xentoolcore.h
rename to tools/include/xentoolcore.h
diff --git a/tools/libs/toolcore/include/xentoolcore_internal.h b/tools/include/xentoolcore_internal.h
similarity index 100%
rename from tools/libs/toolcore/include/xentoolcore_internal.h
rename to tools/include/xentoolcore_internal.h
diff --git a/tools/libs/toollog/include/xentoollog.h b/tools/include/xentoollog.h
similarity index 100%
rename from tools/libs/toollog/include/xentoollog.h
rename to tools/include/xentoollog.h
diff --git a/tools/libs/call/Makefile b/tools/libs/call/Makefile
index 81c7478efd..4ed201b3b3 100644
--- a/tools/libs/call/Makefile
+++ b/tools/libs/call/Makefile
@@ -12,6 +12,3 @@ SRCS-$(CONFIG_NetBSD)  += netbsd.c
 SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxencall)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/ctrl/Makefile b/tools/libs/ctrl/Makefile
index 0071226d2a..4185dc3f22 100644
--- a/tools/libs/ctrl/Makefile
+++ b/tools/libs/ctrl/Makefile
@@ -62,9 +62,6 @@ $(eval $(genpath-target))
 
 $(LIB_OBJS) $(PIC_OBJS): _paths.h
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenctrl)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 clean: cleanlocal
 
 .PHONY: cleanlocal
diff --git a/tools/libs/devicemodel/Makefile b/tools/libs/devicemodel/Makefile
index 42417958f2..b67fc0fac1 100644
--- a/tools/libs/devicemodel/Makefile
+++ b/tools/libs/devicemodel/Makefile
@@ -12,6 +12,3 @@ SRCS-$(CONFIG_NetBSD)  += compat.c
 SRCS-$(CONFIG_MiniOS)  += compat.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxendevicemodel)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/evtchn/Makefile b/tools/libs/evtchn/Makefile
index aec76641e8..ad01a17b3d 100644
--- a/tools/libs/evtchn/Makefile
+++ b/tools/libs/evtchn/Makefile
@@ -12,5 +12,3 @@ SRCS-$(CONFIG_NetBSD)  += netbsd.c
 SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenevtchn)/include
diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
index cf444d3c1a..13850f7988 100644
--- a/tools/libs/foreignmemory/Makefile
+++ b/tools/libs/foreignmemory/Makefile
@@ -12,6 +12,3 @@ SRCS-$(CONFIG_NetBSD)  += compat.c netbsd.c
 SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenforeignmemory)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
index d8d4d55e27..d86c49d243 100644
--- a/tools/libs/gnttab/Makefile
+++ b/tools/libs/gnttab/Makefile
@@ -14,6 +14,3 @@ SRCS-$(CONFIG_SunOS)   += gnttab_unimp.c gntshr_unimp.c
 SRCS-$(CONFIG_NetBSD)  += gnttab_unimp.c gntshr_unimp.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxengnttab)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index f24732fbcd..5b4ad313cc 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -113,9 +113,6 @@ xc_private.h: _paths.h
 
 $(LIB_OBJS) $(PIC_OBJS): $(LINK_FILES)
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenctrl)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 .PHONY: cleanlocal
 cleanlocal:
 	rm -f libxenguest.map
diff --git a/tools/libs/hypfs/Makefile b/tools/libs/hypfs/Makefile
index 668d68853f..39feca87e8 100644
--- a/tools/libs/hypfs/Makefile
+++ b/tools/libs/hypfs/Makefile
@@ -9,6 +9,3 @@ APPEND_LDFLAGS += -lz
 SRCS-y                 += core.c
 
 include ../libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenhypfs)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 325b7b7cea..959ff91a56 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -47,10 +47,10 @@ endif
 PKG_CONFIG_LOCAL := $(PKG_CONFIG_DIR)/$(PKG_CONFIG)
 
 LIBHEADER ?= $(LIB_FILE_NAME).h
-LIBHEADERS = $(foreach h, $(LIBHEADER), include/$(h))
-LIBHEADERSGLOB = $(foreach h, $(LIBHEADER), $(XEN_ROOT)/tools/include/$(h))
+LIBHEADERS = $(foreach h, $(LIBHEADER), $(XEN_INCLUDE)/$(h))
 
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_INCLUDE)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 
 .PHONY: all
@@ -74,14 +74,11 @@ else
 .PHONY: headers.chk
 endif
 
-headers.chk: $(LIBHEADERSGLOB) $(AUTOINCS)
+headers.chk: $(AUTOINCS)
 
 libxen$(LIBNAME).map:
 	echo 'VERS_$(MAJOR).$(MINOR) { global: *; };' >$@
 
-$(LIBHEADERSGLOB): $(LIBHEADERS)
-	for i in $(realpath $(LIBHEADERS)); do ln -sf $$i $(XEN_ROOT)/tools/include; done
-
 lib$(LIB_FILE_NAME).a: $(LIB_OBJS)
 	$(AR) rc $@ $^
 
@@ -123,7 +120,6 @@ clean:
 	rm -f lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) lib$(LIB_FILE_NAME).so.$(MAJOR)
 	rm -f headers.chk
 	rm -f $(PKG_CONFIG)
-	rm -f $(LIBHEADERSGLOB)
 	rm -f _paths.h
 
 .PHONY: distclean
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index f58a3214e5..fbaffdf5a2 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -152,7 +152,7 @@ LIBXL_TEST_OBJS += $(foreach t, $(LIBXL_TESTS_INSIDE),libxl_test_$t.opic)
 TEST_PROG_OBJS += $(foreach t, $(LIBXL_TESTS_PROGS),test_$t.o) test_common.o
 TEST_PROGS += $(foreach t, $(LIBXL_TESTS_PROGS),test_$t)
 
-AUTOINCS = _libxl_list.h _paths.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
+AUTOINCS = $(XEN_INCLUDE)/_libxl_list.h _paths.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
 AUTOSRCS = _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
 
 CLIENTS = testidl libxl-save-helper
@@ -165,9 +165,6 @@ NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(CURDIR)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 LDUSELIBS-y += $(PTYFUNCS_LIBS)
 LDUSELIBS-$(CONFIG_LIBNL) += $(LIBNL3_LIBS)
 LDUSELIBS-$(CONFIG_Linux) += -luuid
@@ -185,7 +182,7 @@ libxl_x86_acpi.o libxl_x86_acpi.opic: CFLAGS += -I$(XEN_ROOT)/tools
 $(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxenguest)
 
 testidl.o: CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenlight)
-testidl.c: libxl_types.idl gentest.py include/libxl.h $(AUTOINCS)
+testidl.c: libxl_types.idl gentest.py $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
 	$(PYTHON) gentest.py libxl_types.idl testidl.c.new
 	mv testidl.c.new testidl.c
 
@@ -200,15 +197,15 @@ libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
 	touch $@
 
-_%.api-for-check: include/%.h $(AUTOINCS)
-	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -E $< $(APPEND_CFLAGS) \
+_libxl.api-for-check: $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
+	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_libxl.o) -c -E $< $(APPEND_CFLAGS) \
 		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
 		>$@.new
 	mv -f $@.new $@
 
-_libxl_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
-	$(PERL) $^ --prefix=libxl >$@.new
-	$(call move-if-changed,$@.new,$@)
+$(XEN_INCLUDE)/_libxl_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
+	$(PERL) $^ --prefix=libxl >$(notdir $@).new
+	$(call move-if-changed,$(notdir $@).new,$@)
 
 _libxl_save_msgs_helper.c _libxl_save_msgs_callout.c \
 _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
@@ -216,13 +213,13 @@ _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
 	$(PERL) -w $< $@ >$@.new
 	$(call move-if-changed,$@.new,$@)
 
-include/libxl.h: _libxl_types.h _libxl_list.h
-include/libxl_json.h: _libxl_types_json.h
+$(XEN_INCLUDE)/libxl.h: $(XEN_INCLUDE)/_libxl_types.h $(XEN_INCLUDE)/_libxl_list.h
+$(XEN_INCLUDE)/libxl_json.h: $(XEN_INCLUDE)/_libxl_types_json.h
 libxl_internal.h: _libxl_types_internal.h _libxl_types_private.h _libxl_types_internal_private.h _paths.h
 libxl_internal_json.h: _libxl_types_internal_json.h
 xl.h: _paths.h
 
-$(LIB_OBJS) $(PIC_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): include/libxl.h
+$(LIB_OBJS) $(PIC_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): $(XEN_INCLUDE)/libxl.h
 $(LIB_OBJS) $(PIC_OBJS) $(LIBXL_TEST_OBJS): libxl_internal.h
 
 _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_type%.idl gentypes.py idl.py
@@ -234,8 +231,8 @@ _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_
 	$(call move-if-changed,__libxl_type$(stem)_json.h,_libxl_type$(stem)_json.h)
 	$(call move-if-changed,__libxl_type$(stem).c,_libxl_type$(stem).c)
 
-include/_%.h: _%.h
-	cp $< $@
+$(XEN_INCLUDE)/_%.h: _%.h
+	$(call move-if-changed,_$*.h,$(XEN_INCLUDE)/_$*.h)
 
 libxenlight_test.so: $(PIC_OBJS) $(LIBXL_TEST_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LDUSELIBS) $(APPEND_LDFLAGS)
diff --git a/tools/libs/stat/Makefile b/tools/libs/stat/Makefile
index 5463f5f7ca..8353e96946 100644
--- a/tools/libs/stat/Makefile
+++ b/tools/libs/stat/Makefile
@@ -30,8 +30,6 @@ APPEND_LDFLAGS += $(LDLIBS-y)
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstat)/include
-
 $(LIB_OBJS): _paths.h
 
 PYLIB=bindings/swig/python/_xenstat.so
diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
index 4da502646e..930e763de9 100644
--- a/tools/libs/store/Makefile
+++ b/tools/libs/store/Makefile
@@ -21,12 +21,12 @@ CFLAGS += $(CFLAGS_libxentoolcore)
 CFLAGS += -DXEN_LIB_STORED="\"$(XEN_LIB_STORED)\""
 CFLAGS += -DXEN_RUN_STORED="\"$(XEN_RUN_STORED)\""
 
-LINK_FILES = xs_lib.c include/xenstore_lib.h list.h utils.h
+LINK_FILES = xs_lib.c list.h utils.h
 
 $(LIB_OBJS): $(LINK_FILES)
 
 $(LINK_FILES):
-	ln -sf $(XEN_ROOT)/tools/xenstore/$(notdir $@) $@
+	ln -sf $(XEN_ROOT)/tools/xenstore/$@ $@
 
 xs.opic: CFLAGS += -DUSE_PTHREAD
 ifeq ($(CONFIG_Linux),y)
@@ -35,9 +35,6 @@ else
 PKG_CONFIG_REMOVE += -ldl
 endif
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstore)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 .PHONY: install
 install: install-headers
 
@@ -45,8 +42,8 @@ install: install-headers
 install-headers:
 	$(INSTALL_DIR) $(DESTDIR)$(includedir)
 	$(INSTALL_DIR) $(DESTDIR)$(includedir)/xenstore-compat
-	$(INSTALL_DATA) include/compat/xs.h $(DESTDIR)$(includedir)/xenstore-compat/xs.h
-	$(INSTALL_DATA) include/compat/xs_lib.h $(DESTDIR)$(includedir)/xenstore-compat/xs_lib.h
+	$(INSTALL_DATA) $(XEN_INCLUDE)/xenstore-compat/xs.h $(DESTDIR)$(includedir)/xenstore-compat/xs.h
+	$(INSTALL_DATA) $(XEN_INCLUDE)/xenstore-compat/xs_lib.h $(DESTDIR)$(includedir)/xenstore-compat/xs_lib.h
 	ln -sf xenstore-compat/xs.h  $(DESTDIR)$(includedir)/xs.h
 	ln -sf xenstore-compat/xs_lib.h $(DESTDIR)$(includedir)/xs_lib.h
 
diff --git a/tools/libs/toolcore/Makefile b/tools/libs/toolcore/Makefile
index 5819bbc8ee..1cf30733c9 100644
--- a/tools/libs/toolcore/Makefile
+++ b/tools/libs/toolcore/Makefile
@@ -3,18 +3,17 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR	= 1
 MINOR	= 0
-AUTOINCS := include/_xentoolcore_list.h
+AUTOINCS := $(XEN_INCLUDE)/_xentoolcore_list.h
 
 SRCS-y	+= handlereg.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
 PKG_CONFIG_DESC := Central support for Xen Hypervisor userland libraries
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxentoolcore)/include
 
 $(LIB_OBJS): $(AUTOINCS)
 $(PIC_OBJS): $(AUTOINCS)
 
-include/_xentoolcore_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
-	$(PERL) $^ --prefix=xentoolcore >$@.new
-	$(call move-if-changed,$@.new,$@)
+$(XEN_INCLUDE)/_xentoolcore_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
+	$(PERL) $^ --prefix=xentoolcore >$(notdir $@).new
+	$(call move-if-changed,$(notdir $@).new,$@)
diff --git a/tools/libs/toollog/Makefile b/tools/libs/toollog/Makefile
index 3f986835d6..dce1b2de85 100644
--- a/tools/libs/toollog/Makefile
+++ b/tools/libs/toollog/Makefile
@@ -8,5 +8,3 @@ SRCS-y	+= xtl_core.c
 SRCS-y	+= xtl_logger_stdio.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxentoollog)/include
diff --git a/tools/libs/util/Makefile b/tools/libs/util/Makefile
index 0c9db8027d..b739360be7 100644
--- a/tools/libs/util/Makefile
+++ b/tools/libs/util/Makefile
@@ -39,9 +39,6 @@ NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenutil)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 $(LIB_OBJS) $(PIC_OBJS): $(AUTOINCS) _paths.h
 
 %.c %.h:: %.y
diff --git a/tools/libs/vchan/Makefile b/tools/libs/vchan/Makefile
index 5e18d5b196..83a45d2817 100644
--- a/tools/libs/vchan/Makefile
+++ b/tools/libs/vchan/Makefile
@@ -12,9 +12,6 @@ NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenvchan)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 clean: cleanlocal
 
 .PHONY: cleanlocal
diff --git a/tools/ocaml/libs/xentoollog/Makefile b/tools/ocaml/libs/xentoollog/Makefile
index 8ae0a784fd..593f9e9e9d 100644
--- a/tools/ocaml/libs/xentoollog/Makefile
+++ b/tools/ocaml/libs/xentoollog/Makefile
@@ -49,7 +49,7 @@ xentoollog.mli: xentoollog.mli.in _xtl_levels.mli.in
 
 libs: $(LIBS)
 
-_xtl_levels.ml.in _xtl_levels.mli.in _xtl_levels.inc: genlevels.py $(XEN_ROOT)/tools/libs/toollog/include/xentoollog.h
+_xtl_levels.ml.in _xtl_levels.mli.in _xtl_levels.inc: genlevels.py $(XEN_INCLUDE)/xentoollog.h
 	$(PYTHON) genlevels.py _xtl_levels.mli.in _xtl_levels.ml.in _xtl_levels.inc
 
 .PHONY: install
diff --git a/tools/ocaml/libs/xentoollog/genlevels.py b/tools/ocaml/libs/xentoollog/genlevels.py
index f9cf853e26..11a623e459 100755
--- a/tools/ocaml/libs/xentoollog/genlevels.py
+++ b/tools/ocaml/libs/xentoollog/genlevels.py
@@ -6,7 +6,7 @@ import sys
 from functools import reduce
 
 def read_levels():
-	f = open('../../../libs/toollog/include/xentoollog.h', 'r')
+	f = open('../../../include/xentoollog.h', 'r')
 
 	levels = []
 	record = False
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 14:22:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 14:22:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2188.6450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLwy-0007MT-Cr; Fri, 02 Oct 2020 14:22:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2188.6450; Fri, 02 Oct 2020 14:22:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLwy-0007MJ-7H; Fri, 02 Oct 2020 14:22:20 +0000
Received: by outflank-mailman (input) for mailman id 2188;
 Fri, 02 Oct 2020 14:22:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOLww-0007Lm-Pa
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:22:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0feb651b-b438-4b37-8118-968db8f95830;
 Fri, 02 Oct 2020 14:22:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BE0A2B1AE;
 Fri,  2 Oct 2020 14:22:16 +0000 (UTC)
X-Inumbo-ID: 0feb651b-b438-4b37-8118-968db8f95830
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601648536;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=WPM2AT1uQM7p93phq/IbIoSKYFbGSwK1cRULNOU6AoU=;
	b=p2zGeqeKJGEMgaLbws1pXjMNnVi1Je3n/wKpU1lAHpzs2FNjPk/QgibuZIy0IbQo47f0jg
	XRJKXIRRTyRxY+0GVYxY0jdwOFkhfTdaLYrg54eg+r0Hn9zs/DhqcExUbsYHT+YPxwb2NN
	EHpbATdQJn1WF1pwYPduHKOhooOw90c=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>
Subject: [PATCH 0/3] tools: avoid creating symbolic links during make
Date: Fri,  2 Oct 2020 16:22:11 +0200
Message-Id: <20201002142214.3438-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The rework of the Xen library build introduced the creation of some
additional symbolic links during the build process.

This series undoes that by moving all official Xen library headers
to tools/include, and by using include paths and the vpath directive
where access to private headers of another directory is needed.
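To illustrate the vpath approach (a hypothetical Makefile fragment, not
taken from this series; the xs_lib.c example and paths are assumptions):

```make
# Sketch: instead of symlinking xs_lib.c into the build directory,
# let make find the source in its original directory via vpath.
vpath %.c $(XEN_ROOT)/tools/xenstore

# Official headers now live in the common directory, so a single
# include path replaces the per-library include/ directories.
CFLAGS += -I$(XEN_ROOT)/tools/include

# make resolves the xs_lib.c prerequisite through the vpath above;
# no symbolic link is created in the build directory.
xs_lib.o: xs_lib.c
	$(CC) $(CFLAGS) -c -o $@ $<
```

With this, the `ln -sf` link rules and the corresponding clean/.gitignore
entries for the linked files can be dropped.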

Juergen Gross (3):
  tools/libs: move official headers to common directory
  tools/libs/guest: don't use symbolic links for xenctrl headers
  tools/libs/store: don't use symbolic links for external files

 .gitignore                                    |  5 ++--
 stubdom/mini-os.mk                            |  2 +-
 tools/Rules.mk                                |  5 ++--
 tools/{libs/vchan => }/include/libxenvchan.h  |  0
 tools/{libs/light => }/include/libxl.h        |  0
 tools/{libs/light => }/include/libxl_event.h  |  0
 tools/{libs/light => }/include/libxl_json.h   |  0
 tools/{libs/light => }/include/libxl_utils.h  |  0
 tools/{libs/light => }/include/libxl_uuid.h   |  0
 tools/{libs/util => }/include/libxlutil.h     |  0
 tools/{libs/call => }/include/xencall.h       |  0
 tools/{libs/ctrl => }/include/xenctrl.h       |  0
 .../{libs/ctrl => }/include/xenctrl_compat.h  |  0
 .../devicemodel => }/include/xendevicemodel.h |  0
 tools/{libs/evtchn => }/include/xenevtchn.h   |  0
 .../include/xenforeignmemory.h                |  0
 tools/{libs/gnttab => }/include/xengnttab.h   |  0
 tools/{libs/guest => }/include/xenguest.h     |  0
 tools/{libs/hypfs => }/include/xenhypfs.h     |  0
 tools/{libs/stat => }/include/xenstat.h       |  0
 .../compat => include/xenstore-compat}/xs.h   |  0
 .../xenstore-compat}/xs_lib.h                 |  0
 tools/{libs/store => }/include/xenstore.h     |  0
 tools/{xenstore => include}/xenstore_lib.h    |  0
 .../{libs/toolcore => }/include/xentoolcore.h |  0
 .../include/xentoolcore_internal.h            |  0
 tools/{libs/toollog => }/include/xentoollog.h |  0
 tools/libs/call/Makefile                      |  3 ---
 tools/libs/ctrl/Makefile                      |  3 ---
 tools/libs/devicemodel/Makefile               |  3 ---
 tools/libs/evtchn/Makefile                    |  2 --
 tools/libs/foreignmemory/Makefile             |  3 ---
 tools/libs/gnttab/Makefile                    |  3 ---
 tools/libs/guest/Makefile                     | 12 ++-------
 tools/libs/hypfs/Makefile                     |  3 ---
 tools/libs/libs.mk                            | 10 +++----
 tools/libs/light/Makefile                     | 27 +++++++++----------
 tools/libs/stat/Makefile                      |  2 --
 tools/libs/store/Makefile                     | 15 +++--------
 tools/libs/toolcore/Makefile                  |  9 +++----
 tools/libs/toollog/Makefile                   |  2 --
 tools/libs/util/Makefile                      |  3 ---
 tools/libs/vchan/Makefile                     |  3 ---
 tools/ocaml/libs/xentoollog/Makefile          |  2 +-
 tools/ocaml/libs/xentoollog/genlevels.py      |  2 +-
 45 files changed, 32 insertions(+), 87 deletions(-)
 rename tools/{libs/vchan => }/include/libxenvchan.h (100%)
 rename tools/{libs/light => }/include/libxl.h (100%)
 rename tools/{libs/light => }/include/libxl_event.h (100%)
 rename tools/{libs/light => }/include/libxl_json.h (100%)
 rename tools/{libs/light => }/include/libxl_utils.h (100%)
 rename tools/{libs/light => }/include/libxl_uuid.h (100%)
 rename tools/{libs/util => }/include/libxlutil.h (100%)
 rename tools/{libs/call => }/include/xencall.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl_compat.h (100%)
 rename tools/{libs/devicemodel => }/include/xendevicemodel.h (100%)
 rename tools/{libs/evtchn => }/include/xenevtchn.h (100%)
 rename tools/{libs/foreignmemory => }/include/xenforeignmemory.h (100%)
 rename tools/{libs/gnttab => }/include/xengnttab.h (100%)
 rename tools/{libs/guest => }/include/xenguest.h (100%)
 rename tools/{libs/hypfs => }/include/xenhypfs.h (100%)
 rename tools/{libs/stat => }/include/xenstat.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs_lib.h (100%)
 rename tools/{libs/store => }/include/xenstore.h (100%)
 rename tools/{xenstore => include}/xenstore_lib.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore_internal.h (100%)
 rename tools/{libs/toollog => }/include/xentoollog.h (100%)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 14:22:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 14:22:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2187.6444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLwy-0007M2-2y; Fri, 02 Oct 2020 14:22:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2187.6444; Fri, 02 Oct 2020 14:22:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLwx-0007Lv-Vv; Fri, 02 Oct 2020 14:22:19 +0000
Received: by outflank-mailman (input) for mailman id 2187;
 Fri, 02 Oct 2020 14:22:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOLww-0007Ll-KW
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:22:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 115fcdda-7f1e-4de6-a0a0-a351702fd735;
 Fri, 02 Oct 2020 14:22:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 07C60B1B5;
 Fri,  2 Oct 2020 14:22:17 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kOLww-0007Ll-KW
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:22:18 +0000
X-Inumbo-ID: 115fcdda-7f1e-4de6-a0a0-a351702fd735
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 115fcdda-7f1e-4de6-a0a0-a351702fd735;
	Fri, 02 Oct 2020 14:22:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601648537;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QVStcnVJjPF4gY7YOMoOTncos7ovJW/Nu+5aH+zrqlg=;
	b=dxMoJeXeQtm5fkVKhE6U4HksCWMlADYRMJjM0ZG93Uxe4cqmFgR2zQE1a2bSpXazxKF+uN
	eEm4XHd0VpwKzMnlHFt0zQa+GvuFLtjzLUW6NHfHYfOLnzk2jk3/2BvM8ZApxKRDS2jy1y
	nq3+WAgAjOovi3yTatXyghgCcUDSSvs=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 07C60B1B5;
	Fri,  2 Oct 2020 14:22:17 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/3] tools/libs/store: don't use symbolic links for external files
Date: Fri,  2 Oct 2020 16:22:14 +0200
Message-Id: <20201002142214.3438-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002142214.3438-1-jgross@suse.com>
References: <20201002142214.3438-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using symbolic links to include files from xenstored, use
the vpath directive and an include path.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/store/Makefile | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
index 930e763de9..bc89b9cd70 100644
--- a/tools/libs/store/Makefile
+++ b/tools/libs/store/Makefile
@@ -21,12 +21,8 @@ CFLAGS += $(CFLAGS_libxentoolcore)
 CFLAGS += -DXEN_LIB_STORED="\"$(XEN_LIB_STORED)\""
 CFLAGS += -DXEN_RUN_STORED="\"$(XEN_RUN_STORED)\""
 
-LINK_FILES = xs_lib.c list.h utils.h
-
-$(LIB_OBJS): $(LINK_FILES)
-
-$(LINK_FILES):
-	ln -sf $(XEN_ROOT)/tools/xenstore/$@ $@
+vpath xs_lib.c $(XEN_ROOT)/tools/xenstore
+CFLAGS += -I $(XEN_ROOT)/tools/xenstore
 
 xs.opic: CFLAGS += -DUSE_PTHREAD
 ifeq ($(CONFIG_Linux),y)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 14:22:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 14:22:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2190.6475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLx4-0007QO-93; Fri, 02 Oct 2020 14:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2190.6475; Fri, 02 Oct 2020 14:22:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOLx3-0007QC-Vg; Fri, 02 Oct 2020 14:22:25 +0000
Received: by outflank-mailman (input) for mailman id 2190;
 Fri, 02 Oct 2020 14:22:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOLx1-0007Lm-Lf
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:22:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bcd64eb-b6be-4a07-a4af-1257aa1a8d7a;
 Fri, 02 Oct 2020 14:22:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E0646AF9E;
 Fri,  2 Oct 2020 14:22:16 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kOLx1-0007Lm-Lf
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:22:23 +0000
X-Inumbo-ID: 6bcd64eb-b6be-4a07-a4af-1257aa1a8d7a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 6bcd64eb-b6be-4a07-a4af-1257aa1a8d7a;
	Fri, 02 Oct 2020 14:22:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601648537;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B0hKhMItdbTVWeOo4rAaRHPuJTQcqrPyKeVdwhrfnHU=;
	b=Q/soOzGRi3o/4B1TWIxWhWnRfSHi5vETOP8P1RfBQ25k4cokrJjYEKMawPAlwKR16XGPjf
	xvc4fVhl7zsCYwrchgZ9U2EEXsHASH/AEm+OX4Nxj4tGpGC6FSgmdVackChNa+CiPlnY+x
	mpT4/nEVfo0xEiIaazk+bCZsV2Tv9EA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E0646AF9E;
	Fri,  2 Oct 2020 14:22:16 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/3] tools/libs/guest: don't use symbolic links for xenctrl headers
Date: Fri,  2 Oct 2020 16:22:13 +0200
Message-Id: <20201002142214.3438-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002142214.3438-1-jgross@suse.com>
References: <20201002142214.3438-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using symbolic links for accessing the xenctrl private
headers, use an include path.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/guest/Makefile | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index 5b4ad313cc..1c729040b3 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -6,11 +6,6 @@ ifeq ($(CONFIG_LIBXC_MINIOS),y)
 override CONFIG_MIGRATE := n
 endif
 
-LINK_FILES := xc_private.h xc_core.h xc_core_x86.h xc_core_arm.h xc_bitops.h
-
-$(LINK_FILES):
-	ln -sf $(XEN_ROOT)/tools/libs/ctrl/$(notdir $@) $@
-
 SRCS-y += xg_private.c
 SRCS-y += xg_domain.c
 SRCS-y += xg_suspend.c
@@ -29,6 +24,8 @@ else
 SRCS-y += xg_nomigrate.c
 endif
 
+CFLAGS += -I$(XEN_libxenctrl)
+
 vpath %.c ../../../xen/common/libelf
 CFLAGS += -I../../../xen/common/libelf
 
@@ -111,8 +108,6 @@ $(eval $(genpath-target))
 
 xc_private.h: _paths.h
 
-$(LIB_OBJS) $(PIC_OBJS): $(LINK_FILES)
-
 .PHONY: cleanlocal
 cleanlocal:
 	rm -f libxenguest.map
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 14:48:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 14:48:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2207.6492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMM4-0001BY-Gb; Fri, 02 Oct 2020 14:48:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2207.6492; Fri, 02 Oct 2020 14:48:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMM4-0001BR-DZ; Fri, 02 Oct 2020 14:48:16 +0000
Received: by outflank-mailman (input) for mailman id 2207;
 Fri, 02 Oct 2020 14:48:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sK/c=DJ=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kOMM3-0001BL-4I
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:48:15 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 044dae2d-a4af-478f-b111-0a1570f5d0d9;
 Fri, 02 Oct 2020 14:48:13 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-342-GAWETC74OFW0LAD_iYSeKQ-1; Fri, 02 Oct 2020 10:48:12 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id EF4995705A;
 Fri,  2 Oct 2020 14:48:08 +0000 (UTC)
Received: from [10.36.113.228] (ovpn-113-228.ams2.redhat.com [10.36.113.228])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 81D7C78803;
 Fri,  2 Oct 2020 14:48:05 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=sK/c=DJ=redhat.com=david@srs-us1.protection.inumbo.net>)
	id 1kOMM3-0001BL-4I
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:48:15 +0000
X-Inumbo-ID: 044dae2d-a4af-478f-b111-0a1570f5d0d9
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 044dae2d-a4af-478f-b111-0a1570f5d0d9;
	Fri, 02 Oct 2020 14:48:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601650093;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=IM4pBIGYx/SKM1Dy3xGL2Rgl9xgWx7mTiMJTemJf/pE=;
	b=Fk5HKHa9A14CVtiB0XFYC/JobBoBQCVJekuDAfrW5dUm2I1MId5ti0RQpIDMn5gYUVTG6I
	iJfz8SMSFtFwig2ke2v7NvkZ7bMzg4At1qAeblo2J2SKpVRQnOeg8G+ZjxfvLfALOE7IW8
	kNdSmFbe3FKmbuCpcZQQ+UnOAoCdb+I=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-342-GAWETC74OFW0LAD_iYSeKQ-1; Fri, 02 Oct 2020 10:48:12 -0400
X-MC-Unique: GAWETC74OFW0LAD_iYSeKQ-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com [10.5.11.11])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mimecast-mx01.redhat.com (Postfix) with ESMTPS id EF4995705A;
	Fri,  2 Oct 2020 14:48:08 +0000 (UTC)
Received: from [10.36.113.228] (ovpn-113-228.ams2.redhat.com [10.36.113.228])
	by smtp.corp.redhat.com (Postfix) with ESMTP id 81D7C78803;
	Fri,  2 Oct 2020 14:48:05 +0000 (UTC)
Subject: Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of
 __free_one_page() to a proper flag
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-acpi@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
 Alexander Duyck <alexander.h.duyck@linux.intel.com>,
 Vlastimil Babka <vbabka@suse.cz>, Oscar Salvador <osalvador@suse.de>,
 Mel Gorman <mgorman@techsingularity.net>, Michal Hocko <mhocko@kernel.org>,
 Dave Hansen <dave.hansen@intel.com>,
 Wei Yang <richard.weiyang@linux.alibaba.com>, Mike Rapoport <rppt@kernel.org>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-2-david@redhat.com>
 <20201002134118.GA20115@casper.infradead.org>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <2b1baab8-861d-06a3-8eab-75c4e9e1b19d@redhat.com>
Date: Fri, 2 Oct 2020 16:48:04 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201002134118.GA20115@casper.infradead.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11

On 02.10.20 15:41, Matthew Wilcox wrote:
> On Mon, Sep 28, 2020 at 08:21:06PM +0200, David Hildenbrand wrote:
>> Let's prepare for additional flags and avoid long parameter lists of bools.
>> Follow-up patches will also make use of the flags in __free_pages_ok(),
>> however, I wasn't able to come up with a better name for the type - should
>> be good enough for internal purposes.
> 
>> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
>> +typedef int __bitwise fop_t;
> 
> That invites confusion with f_op.  There's no reason to use _t as a suffix
> here ... why not free_f?

git grep "bitwise" | grep typedef | grep include/linux

indicates that "_t" is the right thing to do.

I want a name that highlights that it is for the internal variants of
free_page(); free_f / free_t is too generic.

fpi_t (Free Page Internal) ?

> 
>> +/*
>> + * Skip free page reporting notification for the (possibly merged) page. (will
>> + * *not* mark the page reported, only skip the notification).
> 
> ... Don't you mean "will not skip marking the page as reported, only
> skip the notification"?

Yeah, I can use that.

The way free page reporting works is that

1. The free page reporting infrastructure gets notified about a newly
freed page after buddy merging.

2. Once a certain threshold of free pages is reached, it will pull pages
from the freelist, report them, and mark them as reported. (see
mm/page_reporting.c)

During 2., we didn't actually free a "new page"; we only temporarily
removed it from the list, which is why we have to skip the notification.

What we do here is skip 1., not 2.

> 
> *reads code*
> 
> No, I'm still confused.  What does this sentence mean?
> 
> Would it help to have a FOP_DEFAULT that has FOP_REPORT_NOTIFY set and
> then a FOP_SKIP_REPORT_NOTIFY define that is 0?

Hmm, I'm not entirely sure that improves the situation. Then I need
three defines instead of two, and "inverse" documentation for
FOP_REPORT_NOTIFY.

> 
>> -static inline void __free_one_page(struct page *page,
>> -		unsigned long pfn,
>> -		struct zone *zone, unsigned int order,
>> -		int migratetype, bool report)
>> +static inline void __free_one_page(struct page *page, unsigned long pfn,
>> +				   struct zone *zone, unsigned int order,
>> +				   int migratetype, fop_t fop_flags)
> 
> Please don't over-indent like this.
> 
> static inline void __free_one_page(struct page *page, unsigned long pfn,
> 		struct zone *zone, unsigned int order, int migratetype,
> 		fop_t fop_flags)
> 
> reads just as well and then if someone needs to delete the 'static'
> later, they don't need to fiddle around with subsequent lines getting
> the whitespace to line up again.
> 

I don't care too much about this specific instance and can fix it up.
(this is clearly a matter of personal taste)

Thanks!

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 14:57:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 14:57:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2210.6505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMUk-00027A-Cr; Fri, 02 Oct 2020 14:57:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2210.6505; Fri, 02 Oct 2020 14:57:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMUk-000273-9g; Fri, 02 Oct 2020 14:57:14 +0000
Received: by outflank-mailman (input) for mailman id 2210;
 Fri, 02 Oct 2020 14:57:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sK/c=DJ=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kOMUj-00026y-Jx
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:57:13 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6dad3684-cd3a-48b6-b3a6-dfd46182e2f2;
 Fri, 02 Oct 2020 14:57:12 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-212-DaFlpBKmN3OElTBquMAWlw-1; Fri, 02 Oct 2020 10:57:08 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B4FDA8D1AE0;
 Fri,  2 Oct 2020 14:57:06 +0000 (UTC)
Received: from [10.36.113.228] (ovpn-113-228.ams2.redhat.com [10.36.113.228])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C83CD1992F;
 Fri,  2 Oct 2020 14:57:03 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=sK/c=DJ=redhat.com=david@srs-us1.protection.inumbo.net>)
	id 1kOMUj-00026y-Jx
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 14:57:13 +0000
X-Inumbo-ID: 6dad3684-cd3a-48b6-b3a6-dfd46182e2f2
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 6dad3684-cd3a-48b6-b3a6-dfd46182e2f2;
	Fri, 02 Oct 2020 14:57:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601650632;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=IlPnHBv3CyZouuYGTMm8nTfNaMqhdIMUchrOU2xlWpg=;
	b=V5Spuhl4DaFmEqLYTYmEyQNKuxDlJQrjwr5+BEnQ4if/w3QV8+sik7nQHdRt+k+3AoccD8
	/EA1njr3cza6HhJX5NfZGfiZBGq6Y8birllrD+DyMt2onyZWBLChW0qL8YDznjgtYmdI6h
	cIUkd/u6EgDKMgZHZRvNFy0Jf5lhfZ8=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-212-DaFlpBKmN3OElTBquMAWlw-1; Fri, 02 Oct 2020 10:57:08 -0400
X-MC-Unique: DaFlpBKmN3OElTBquMAWlw-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B4FDA8D1AE0;
	Fri,  2 Oct 2020 14:57:06 +0000 (UTC)
Received: from [10.36.113.228] (ovpn-113-228.ams2.redhat.com [10.36.113.228])
	by smtp.corp.redhat.com (Postfix) with ESMTP id C83CD1992F;
	Fri,  2 Oct 2020 14:57:03 +0000 (UTC)
Subject: Re: [PATCH v1 1/5] mm/page_alloc: convert "report" flag of
 __free_one_page() to a proper flag
From: David Hildenbrand <david@redhat.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-acpi@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
 Alexander Duyck <alexander.h.duyck@linux.intel.com>,
 Vlastimil Babka <vbabka@suse.cz>, Oscar Salvador <osalvador@suse.de>,
 Mel Gorman <mgorman@techsingularity.net>, Michal Hocko <mhocko@kernel.org>,
 Dave Hansen <dave.hansen@intel.com>,
 Wei Yang <richard.weiyang@linux.alibaba.com>, Mike Rapoport <rppt@kernel.org>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-2-david@redhat.com>
 <20201002134118.GA20115@casper.infradead.org>
 <2b1baab8-861d-06a3-8eab-75c4e9e1b19d@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <9a420e97-cc3d-da6b-5955-e6c31d96164c@redhat.com>
Date: Fri, 2 Oct 2020 16:57:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <2b1baab8-861d-06a3-8eab-75c4e9e1b19d@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

On 02.10.20 16:48, David Hildenbrand wrote:
> On 02.10.20 15:41, Matthew Wilcox wrote:
>> On Mon, Sep 28, 2020 at 08:21:06PM +0200, David Hildenbrand wrote:
>>> Let's prepare for additional flags and avoid long parameter lists of bools.
>>> Follow-up patches will also make use of the flags in __free_pages_ok(),
>>> however, I wasn't able to come up with a better name for the type - should
>>> be good enough for internal purposes.
>>
>>> +/* Free One Page flags: for internal, non-pcp variants of free_pages(). */
>>> +typedef int __bitwise fop_t;
>>
>> That invites confusion with f_op.  There's no reason to use _t as a suffix
>> here ... why not free_f?
> 
> git grep "bitwise" | grep typedef | grep include/linux
> 
> indicates that "_t" is the right thing to do.
> 
> I want a name that highlights that it is for the internal variants of
> free_page(); free_f / free_t is too generic.
> 
> fpi_t (Free Page Internal) ?
> 
>>
>>> +/*
>>> + * Skip free page reporting notification for the (possibly merged) page. (will
>>> + * *not* mark the page reported, only skip the notification).
>>
>> ... Don't you mean "will not skip marking the page as reported, only
>> skip the notification"?
> 
> Yeah, I can use that.

Reading again, it doesn't quite fit. Marking pages as reported is
handled by mm/page_reporting.c.

/*
 * Skip free page reporting notification for the (possibly merged) page.
 * This does not hinder free page reporting from grabbing the page,
 * reporting it and marking it "reported" - it only skips notifying
 * the free page reporting infrastructure about a newly freed page. For
 * example, used when temporarily pulling a page from the freelist and
 * putting it back unmodified.
 */

Is that clearer?

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:09:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2213.6520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMgG-00038f-Jd; Fri, 02 Oct 2020 15:09:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2213.6520; Fri, 02 Oct 2020 15:09:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMgG-00038Y-Ed; Fri, 02 Oct 2020 15:09:08 +0000
Received: by outflank-mailman (input) for mailman id 2213;
 Fri, 02 Oct 2020 15:09:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bUGt=DJ=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kOMgF-00038T-MJ
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:09:07 +0000
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 707efc57-a3bd-488d-802c-6ba9609a5b8e;
 Fri, 02 Oct 2020 15:09:05 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 3DC3E5C00EE;
 Fri,  2 Oct 2020 11:09:05 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 02 Oct 2020 11:09:05 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id B7E0E3064688;
 Fri,  2 Oct 2020 11:09:03 -0400 (EDT)
X-Inumbo-ID: 707efc57-a3bd-488d-802c-6ba9609a5b8e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; bh=1C1lvw
	3bIIFsQ68c3jWIJPGVmAXKj8uzwfQOdS6cth8=; b=UagfYQYyX5zdSAaCIWrNnv
	bCRiP0wNOWdsw4XxAquusv8aj0Lx2fXpJhlPUCq/YBCBGd1zp+p9s5bkCR0l9NLR
	7MnsJdQw7G0PoSshgFob7ET8omAVkEHuhIpBrkfvX5Chjplft5K8W0KRgJun62f1
	xINwzBWPoDsuFNilZ2ZIuYe+kHrRdtmwkiAAcjtYZv4i4rXtML4nMqeQYHOyp1Hy
	g6QnHi79zT688i/byzu9MbWhHx5mKo6jmfth6z8T+al9qDAsWQb6TSas5lKUd3ib
	3LU+sxhyh62uXzalmwmVgP/SHvNLjCs9p3ZP+51sSnr2sNCGVHzVbODNChcTsAGg
	==
X-ME-Sender: <xms:kEJ3Xwr1vH23wLYZ5iq-yXzcGjCXw9dVZoBA5v8oXF2rFyXbGKrTNw>
    <xme:kEJ3X2ptySa99uFwIS9yWh453BTGoBOtWLN3j7it82aQetNHYLwhvN_3w_F3NIn7f
    7aEhoxpYKa9Fw>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrfeeigdekfecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepjeduieeg
    ieffkefguddvteeffeehhfelfeduveeljeevteffudegkeejudffhfefnecuffhomhgrih
    hnpeigvghnrdhorhhgnecukfhppeeluddrieegrddujedtrdekleenucevlhhushhtvghr
    ufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:kEJ3X1MqKT0g2H_934bK5SgRQVRGELYUaJJrfYqIWoP9Lg_d3Feo0A>
    <xmx:kEJ3X34D9UkQmiL0ILbDWVaXnSPkw9N97dY0WK2vFUeBg6nQLitFTw>
    <xmx:kEJ3X_504VboYh8IBXFhHPufImOVF3z4P46-C_COwzyUeMSVB12maQ>
    <xmx:kUJ3X4UjAjGIyaOqMQV1szmhKBA0PwedVGp_EQeCtWNt1SodnWV9Ig>
Date: Fri, 2 Oct 2020 17:08:59 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Yet another S3 issue in Xen 4.14
Message-ID: <20201002150859.GM3962@mail-itl>
References: <20201001011245.GL3962@mail-itl>
 <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>
 <20201001123129.GJ1482@mail-itl>
 <1e596ccc-a875-93f1-2619-e4dbcbd88b4d@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="cGfB/trNgB3WtPHu"
Content-Disposition: inline
In-Reply-To: <1e596ccc-a875-93f1-2619-e4dbcbd88b4d@citrix.com>


--cGfB/trNgB3WtPHu
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: Yet another S3 issue in Xen 4.14

On Thu, Oct 01, 2020 at 01:43:52PM +0100, Andrew Cooper wrote:
> On 01/10/2020 13:31, Marek Marczykowski-Górecki wrote:
> > On Thu, Oct 01, 2020 at 01:59:32PM +0200, Jan Beulich wrote:
> >> On 01.10.2020 03:12, Marek Marczykowski-Górecki wrote:
> >>> After patching the previous issue ("x86/S3: Fix Shadow Stack resume
> >>> path") I still encounter issues resuming from S3.
> >>> Since I had it working on Xen 4.13 on this particular hardware (Thinkpad
> >>> P52), I bisected it and got this:
> >>>
> >>> commit 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1
> >>> Author: Andrew Cooper <andrew.cooper3@citrix.com>
> >>> Date:   Wed Dec 11 20:59:19 2019 +0000
> >>>
> >>>     x86/S3: Drop {save,restore}_rest_processor_state() completely
> >>>
> >>>     There is no need to save/restore FS/GS/XCR0 state.  It will be handled
> >>>     suitably on the context switch away from the idle.
> >>>
> >>>     The CR4 restoration in restore_rest_processor_state() was actually fighting
> >>>     later code in enter_state() which tried to keep CR4.MCE clear until everything
> >>>     was set up.  Delete the intermediate restoration, and defer final restoration
> >>>     until after MCE is reconfigured.
> >>>
> >>>     Restoring PAT can be done earlier, and ideally before paging is enabled.  By
> >>>     moving it into the trampoline during the setup for 64bit, the call can be
> >>>     dropped from cpu_init().  The EFI boot path doesn't disable paging, so
> >>>     make the adjustment when switching onto Xen's pagetables.
> >>>
> >>>     The only remaining piece of restoration is load_system_tables(), so suspend.c
> >>>     can be deleted in its entirety.
> >>>
> >>>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>>     Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >>>
> >>> Parent of this commit suspends and resumes just fine. With this commit
> >>> applied, it (I think) panics, at least I get a reboot after 5s. Sadly, I
> >>> don't have a serial console there.
> >>>
> >>> I tried also master and stable-4.14 with this commit reverted (and also
> >>> the other fix applied), but it doesn't work. In this case I get a hang on
> >>> resume (power led still flashing, but fan woke up). There are probably
> >>> some other dependencies.
> >> Since bisection may also point you at some intermediate breakage, which
> >> these last results of yours seem to support, could you check whether
> >> 55f8c389d434 put immediately on top of the above commit makes a difference,
> >> and if so resume bisecting from there?
> > Nope, 4304ff420e51b973ec9eb9dafd64a917dd9c0fb1 with 55f8c389d434 on top
> > it still hangs on resume.
>
> Ok. I'll see about breaking the change apart so we can bisect which
> specific bit of code movement broke things.

I've done another bisect on the commit broken up into separate changes
(https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/dbg-s3)
and the bad part seems to be this:

From dbdb32f8c265295d6af7cd4cd0aa12b6d04a0430 Mon Sep 17 00:00:00 2001
From: Andrew Cooper <andrew.cooper3@citrix.com>
Date: Fri, 2 Oct 2020 15:40:22 +0100
Subject: [PATCH 1/1] CR4

---
 xen/arch/x86/acpi/power.c   | 9 ++++-----
 xen/arch/x86/acpi/suspend.c | 3 ---
 2 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 6dfd4c7891..0cda362045 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -195,7 +195,6 @@ static int enter_state(u32 state)
     unsigned long flags;
     int error;
     struct cpu_info *ci;
-    unsigned long cr4;
 
     if ( (state <= ACPI_STATE_S0) || (state > ACPI_S_STATES_MAX) )
         return -EINVAL;
@@ -270,15 +269,15 @@ static int enter_state(u32 state)
 
     system_state = SYS_STATE_resume;
 
-    /* Restore CR4 and EFER from cached values. */
-    cr4 = read_cr4();
-    write_cr4(cr4 & ~X86_CR4_MCE);
+    /* Restore EFER from cached value. */
     write_efer(read_efer());
 
     device_power_up(SAVED_ALL);
 
     mcheck_init(&boot_cpu_data, false);
-    write_cr4(cr4);
+
+    /* Restore CR4 from cached value, now MCE is set up. */
+    write_cr4(read_cr4());
=20
     printk(XENLOG_INFO "Finishing wakeup from ACPI S%d state.\n", state);
=20
diff --git a/xen/arch/x86/acpi/suspend.c b/xen/arch/x86/acpi/suspend.c
index 060a9313b6..ca987d9019 100644
--- a/xen/arch/x86/acpi/suspend.c
+++ b/xen/arch/x86/acpi/suspend.c
@@ -23,7 +23,4 @@ void save_rest_processor_state(void)
 void restore_rest_processor_state(void)
 {
     load_system_tables();
-
-    /* Restore full CR4 (inc MCE) now that the IDT is in place. */
-    write_cr4(mmu_cr4_features);
 }
-- 
2.20.1

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--cGfB/trNgB3WtPHu
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl93QosACgkQ24/THMrX
1yw2Nwf9EZTu//vxGTNaY+7efJeVkx1/Cg93/vXrihs+AOeS2ZUtZxS9rKnQHjbV
QpQ4DMtXi9yPINwY0cgUaDGnmqEKkeYEStiljJ/5VObCdlbvCSEDnG19ie13FSgb
eabCv3WQgzaYMNgojHhPcwA9jQGp0ojLpNqeHizcF4PbOlhHQqwkb9MYFbqzHItL
fyYT6QiCDvRiTf3k0b0A/MYttm767K7Xy7JHPqNtX+uXYpe4MHarqOMxxbBi7+9c
+R1a6I9ViwKrDHWUPGm7PuviJQdnOtZpYjxg5UayhSL8H/DYN/FnNZfF94daxSXx
lwHZSfwBMxa/r1OAuts3EaPZVRS7Xw==
=p/3W
-----END PGP SIGNATURE-----

--cGfB/trNgB3WtPHu--


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:10:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:10:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2215.6532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMhN-0003v0-3d; Fri, 02 Oct 2020 15:10:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2215.6532; Fri, 02 Oct 2020 15:10:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMhM-0003ut-V1; Fri, 02 Oct 2020 15:10:16 +0000
Received: by outflank-mailman (input) for mailman id 2215;
 Fri, 02 Oct 2020 15:10:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sK/c=DJ=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kOMhL-0003un-MG
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:10:15 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 6612e177-224e-4361-8d1b-bce6dafd5429;
 Fri, 02 Oct 2020 15:10:14 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-318-S07uizV9Mt-88Er9MY70BA-1; Fri, 02 Oct 2020 11:10:10 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C9F42188C122;
 Fri,  2 Oct 2020 15:10:07 +0000 (UTC)
Received: from [10.36.113.228] (ovpn-113-228.ams2.redhat.com [10.36.113.228])
 by smtp.corp.redhat.com (Postfix) with ESMTP id F12A75C22E;
 Fri,  2 Oct 2020 15:10:03 +0000 (UTC)
X-Inumbo-ID: 6612e177-224e-4361-8d1b-bce6dafd5429
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601651414;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=qA3VtU+ihW5kV76AISqiMz521DoBLG66Fqr4dX/AO88=;
	b=PMkRZLpoEJU/LDTzHAMxCSw0BpRAxFXg3J97MAnbskwbZrXxS13wAFAEQvzx1c2xSZ/H0Q
	7b5kBhILP7qMcg5NOdZ+rPRcyG5VVLO28W2EPSMR6Gn2hjcBElDbZssjBd0A7I5RfCp2lp
	4gr/U1y2oSnTQAj6pT7XHbFm8dgJLt8=
X-MC-Unique: S07uizV9Mt-88Er9MY70BA-1
Subject: Re: [PATCH v1 4/5] mm/page_alloc: place pages to tail in
 __free_pages_core()
To: Michal Hocko <mhocko@suse.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-acpi@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
 Vlastimil Babka <vbabka@suse.cz>, Oscar Salvador <osalvador@suse.de>,
 Alexander Duyck <alexander.h.duyck@linux.intel.com>,
 Mel Gorman <mgorman@techsingularity.net>, Dave Hansen
 <dave.hansen@intel.com>, Wei Yang <richard.weiyang@linux.alibaba.com>,
 Mike Rapoport <rppt@kernel.org>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>,
 Stephen Hemminger <sthemmin@microsoft.com>, Wei Liu <wei.liu@kernel.org>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-5-david@redhat.com>
 <20201002134115.GJ4555@dhcp22.suse.cz>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <7bf5d426-1fdd-4525-25da-a68d92b6c11d@redhat.com>
Date: Fri, 2 Oct 2020 17:10:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201002134115.GJ4555@dhcp22.suse.cz>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16

On 02.10.20 15:41, Michal Hocko wrote:
> On Mon 28-09-20 20:21:09, David Hildenbrand wrote:
>> __free_pages_core() is used when exposing fresh memory to the buddy
>> during system boot and when onlining memory in generic_online_page().
>>
>> generic_online_page() is used in two cases:
>>
>> 1. Direct memory onlining in online_pages().
>> 2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
>>    balloon and virtio-mem), when parts of a section are kept
>>    fake-offline to be fake-onlined later on.
>>
>> In 1, we already place pages to the tail of the freelist. Pages will be
>> freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
>> via undo_isolate_page_range().
>>
>> In 2, we currently don't implement a proper rule. In case of virtio-mem,
>> where we currently always online MAX_ORDER - 1 pages, the pages will be
>> placed to the HEAD of the freelist - undesirable. While the Hyper-V
>> balloon calls generic_online_page() with single pages, usually it will
>> call it on successive single pages in a larger block.
>>
>> The pages are fresh, so place them to the tail of the freelists and avoid
>> the PCP. In __free_pages_core(), remove the now superfluous call to
>> set_page_refcounted() and add a comment regarding page initialization and
>> the refcount.
>>
>> Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
>> is usually of limited use in virtualized environments), we might want to
>> shuffle after a sequence of generic_online_page() calls in the
>> relevant callers.
> 
> It took some time to get through all the freeing paths with subtle
> differences but this looks reasonable. You are mentioning that this
> influences a boot time free memory ordering as well but only very
> briefly. I do not expect this to make a huge difference but who knows.
> It makes some sense to add pages in the order they show up in the
> physical address ordering.

I think boot memory is mostly exposed in physical address order.
In that case, higher addresses will now be less likely to be used
immediately after this patch. I also don't think it's an issue - and if
it ever turns out to be one, it's fairly easy to change again.

Thanks!

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:20:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:20:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2219.6545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMrB-0004sT-0h; Fri, 02 Oct 2020 15:20:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2219.6545; Fri, 02 Oct 2020 15:20:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMrA-0004sM-Tz; Fri, 02 Oct 2020 15:20:24 +0000
Received: by outflank-mailman (input) for mailman id 2219;
 Fri, 02 Oct 2020 15:20:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sK/c=DJ=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kOMr8-0004sH-SG
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:20:22 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 43246d4b-c9f3-412f-a9ff-ba76d09e16f3;
 Fri, 02 Oct 2020 15:20:21 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-272-w6qMtHLYOdqGZmEqgP1rMQ-1; Fri, 02 Oct 2020 11:20:16 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C928D109106D;
 Fri,  2 Oct 2020 15:20:13 +0000 (UTC)
Received: from [10.36.113.228] (ovpn-113-228.ams2.redhat.com [10.36.113.228])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 593875D9E4;
 Fri,  2 Oct 2020 15:20:10 +0000 (UTC)
X-Inumbo-ID: 43246d4b-c9f3-412f-a9ff-ba76d09e16f3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601652021;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=tLF6ZatCZ5zrVPBFeigNCXJI7/8w0ttDqPTuxtDX8zs=;
	b=Jg+oWXGzxnkPLOKoihHRwxS/LtWf2eIlADJ33zOzwRsWwt3oUuCoTzYXp4iqtvSgB1QXv8
	SgFgvTeKvWRbmGX32Kx8HdhCNGkeQCOl4jYxCA3qRBOcNqbsUTEVVqegMQ9ekzmJQamKYY
	Wrw6dpZ8d3Ksd+4RL5Tx23ZyOK/rh2U=
X-MC-Unique: w6qMtHLYOdqGZmEqgP1rMQ-1
Subject: Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of
 the freelist in unset_migratetype_isolate()
To: Michal Hocko <mhocko@suse.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-acpi@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>,
 Oscar Salvador <osalvador@suse.de>,
 Alexander Duyck <alexander.h.duyck@linux.intel.com>,
 Mel Gorman <mgorman@techsingularity.net>, Dave Hansen
 <dave.hansen@intel.com>, Vlastimil Babka <vbabka@suse.cz>,
 Wei Yang <richard.weiyang@linux.alibaba.com>, Mike Rapoport
 <rppt@kernel.org>, Scott Cheloha <cheloha@linux.ibm.com>,
 Michael Ellerman <mpe@ellerman.id.au>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-4-david@redhat.com>
 <20201002132404.GI4555@dhcp22.suse.cz>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <df0c45bf-223f-1f0b-ce3d-f2b2e05626bd@redhat.com>
Date: Fri, 2 Oct 2020 17:20:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201002132404.GI4555@dhcp22.suse.cz>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14

On 02.10.20 15:24, Michal Hocko wrote:
> On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
>> Page isolation doesn't actually touch the pages, it simply isolates
>> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>>
>> We already place pages to the tail of the freelists when undoing
>> isolation via __putback_isolated_page(), let's do it in any case
>> (e.g., if order <= pageblock_order) and document the behavior.
>>
>> Add a "to_tail" parameter to move_freepages_block() but introduce a
>> new move_to_free_list_tail() - similar to add_to_free_list_tail().
>>
>> This change results in all pages onlined via online_pages() being
>> placed at the tail of the freelist.
> 
> Is there anything preventing to do this unconditionally? Or in other
> words is any of the existing callers of move_freepages_block benefiting
> from adding to the head?

1. mm/page_isolation.c:set_migratetype_isolate()

We move stuff to the MIGRATE_ISOLATE list; we don't care about the order
there.

2. steal_suitable_fallback():

I don't think we care much about the order when we are already stealing
pageblocks ... and the freelist is empty at that point, I guess?

3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()

Not sure if we really care.

Good question; I tried to be careful about what I touch. Thoughts?

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:25:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2224.6562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMwB-00055Y-Pq; Fri, 02 Oct 2020 15:25:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2224.6562; Fri, 02 Oct 2020 15:25:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOMwB-00055R-LD; Fri, 02 Oct 2020 15:25:35 +0000
Received: by outflank-mailman (input) for mailman id 2224;
 Fri, 02 Oct 2020 15:25:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOMwA-00055M-Qo
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:25:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 066b3f6d-5c1b-4e3a-b620-15ee2e4fa3a7;
 Fri, 02 Oct 2020 15:25:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 059D8ACB8;
 Fri,  2 Oct 2020 15:25:33 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kOMwA-00055M-Qo
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:25:34 +0000
X-Inumbo-ID: 066b3f6d-5c1b-4e3a-b620-15ee2e4fa3a7
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 066b3f6d-5c1b-4e3a-b620-15ee2e4fa3a7;
	Fri, 02 Oct 2020 15:25:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601652333;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mtFxudx+iYl41OTBQnvwHpaaaMKl9kzkEWSgv2PKrMo=;
	b=U5EnO3aySqa76DL5dWAjrLe4r16g/SXS3R0mnfssxuX5dTPaGDTjyNicXapRASNd6OotFw
	l4m76j+K0QzYlBpVUT81lveARCOxadfYJn/7TbUIU76KS/5Kcjr2F/SrciEjT88/WBJezR
	eGyJssAWvN7R5pXHukv9RSkpg+2Svj8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 059D8ACB8;
	Fri,  2 Oct 2020 15:25:33 +0000 (UTC)
Subject: Re: [PATCH v2 04/11] x86/vmsi: use the newly introduced EOI callbacks
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-5-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <785f80d6-3a0a-6a58-fd9a-05d8ff87f6fe@suse.com>
Date: Fri, 2 Oct 2020 17:25:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-5-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 12:41, Roger Pau Monne wrote:
> Remove the unconditional call to hvm_dpci_msi_eoi in vlapic_handle_EOI
> and instead use the newly introduced EOI callback mechanism in order
> to register a callback for MSI vectors injected from passed through
> devices.

What I'm kind of missing here is a word on why this is an improvement:
After all ...

> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -496,8 +496,6 @@ void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
>      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
>          vioapic_update_EOI(vector);
>  
> -    hvm_dpci_msi_eoi(vector);

... you're exchanging this direct call for a more complex model with
an indirect one (to the same function).

> @@ -119,7 +126,8 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
>  
>      ASSERT(pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI);
>  
> -    vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
> +    vmsi_deliver_callback(d, vector, dest, dest_mode, delivery_mode, trig_mode,
> +                          hvm_dpci_msi_eoi, NULL);
>  }

While I agree with your reply to Paul regarding Dom0, I still think
the entire if() in hvm_dpci_msi_eoi() should be converted into a
conditional here. There's no point registering the callback if it's
not going to do anything.

However, looking further, the "!hvm_domain_irq(d)->dpci &&
!is_hardware_domain(d)" can be simply dropped altogether, right away.
It's now fulfilled by the identical check at the top of
hvm_dirq_assist(), thus guarding the sole call site of this function.

The !is_iommu_enabled(d) is slightly more involved to prove, but it
should also be possible to simply drop. What might help here is a
separate change to suppress opening of HVM_DPCI_SOFTIRQ when there's
no IOMMU in the system, as then it becomes obvious that this part of
the condition is guaranteed by hvm_do_IRQ_dpci(), being the only
site where the softirq can get raised (apart from the softirq
handler itself).

To sum up - the call above can probably stay as is, but the callback
can be simplified as a result of the change.

> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -874,7 +874,7 @@ static int _hvm_dpci_msi_eoi(struct domain *d,
>      return 0;
>  }
>  
> -void hvm_dpci_msi_eoi(unsigned int vector)
> +void hvm_dpci_msi_eoi(unsigned int vector, void *data)
>  {
>      struct domain *d = current->domain;

Instead of passing NULL for data and latching d from current, how
about making the registration pass d, so it can more easily be used here?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:39:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:39:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2227.6576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kON9u-00067S-2y; Fri, 02 Oct 2020 15:39:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2227.6576; Fri, 02 Oct 2020 15:39:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kON9t-00067L-W4; Fri, 02 Oct 2020 15:39:45 +0000
Received: by outflank-mailman (input) for mailman id 2227;
 Fri, 02 Oct 2020 15:39:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kON9s-00067G-VS
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:39:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 190bc2d8-a250-4476-a866-8a26bb7cff8e;
 Fri, 02 Oct 2020 15:39:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 86C5AB240;
 Fri,  2 Oct 2020 15:39:42 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5pZ8=DJ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kON9s-00067G-VS
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:39:45 +0000
X-Inumbo-ID: 190bc2d8-a250-4476-a866-8a26bb7cff8e
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 190bc2d8-a250-4476-a866-8a26bb7cff8e;
	Fri, 02 Oct 2020 15:39:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601653182;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qrfGihDGcHqZgbuB5NmYzKqBrdZfeb7UvtkAF/YVzOc=;
	b=szWhYp29GskhiIwbTY+qk43k+yl4yZtGm+HLtbWoN2VlO9lS/iutCE21G3ImAUEz3ilCOu
	GZZMAw+Y2DvFJM/m8D/IhX5zbSgjz3bzD3l/0bRVicgUkQgs2bAKf0KJoQBlb3f7VgePVj
	00EN5jzmQZj3TYqJI2LMnudYnAAg6zA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 86C5AB240;
	Fri,  2 Oct 2020 15:39:42 +0000 (UTC)
Subject: Re: Yet another S3 issue in Xen 4.14
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <20201001011245.GL3962@mail-itl>
 <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>
 <20201001123129.GJ1482@mail-itl>
 <1e596ccc-a875-93f1-2619-e4dbcbd88b4d@citrix.com>
 <20201002150859.GM3962@mail-itl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <454ac9ce-012f-f2e7-722d-c5304fd3146f@suse.com>
Date: Fri, 2 Oct 2020 17:39:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201002150859.GM3962@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.2020 17:08, Marek Marczykowski-Górecki wrote:
> I've done another bisect on the commit broken up in separate changes
> (https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/dbg-s3)
> and the bad part seems to be this:
> 
> From dbdb32f8c265295d6af7cd4cd0aa12b6d04a0430 Mon Sep 17 00:00:00 2001
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Date: Fri, 2 Oct 2020 15:40:22 +0100
> Subject: [PATCH 1/1] CR4

Interesting - I was wildly guessing in this direction yesterday, but
couldn't come up with even a vague reason why this would be the case.
I think you could split it up further:

> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -195,7 +195,6 @@ static int enter_state(u32 state)
>      unsigned long flags;
>      int error;
>      struct cpu_info *ci;
> -    unsigned long cr4;
>  
>      if ( (state <= ACPI_STATE_S0) || (state > ACPI_S_STATES_MAX) )
>          return -EINVAL;
> @@ -270,15 +269,15 @@ static int enter_state(u32 state)
>  
>      system_state = SYS_STATE_resume;
>  
> -    /* Restore CR4 and EFER from cached values. */
> -    cr4 = read_cr4();
> -    write_cr4(cr4 & ~X86_CR4_MCE);
> +    /* Restore EFER from cached value. */
>      write_efer(read_efer());

This one should be possible to leave in place despite ...

>      device_power_up(SAVED_ALL);
>  
>      mcheck_init(&boot_cpu_data, false);
> -    write_cr4(cr4);
> +
> +    /* Restore CR4 from cached value, now MCE is set up. */
> +    write_cr4(read_cr4());

... this change.

Further, while I can't see how the set_in_cr4() in mcheck_init()
could badly interact with the CR4 writes here, another option
might be to suppress it when system_state == SYS_STATE_resume
&& c == &boot_cpu_data (or !bsp && c == &boot_cpu_data).

> --- a/xen/arch/x86/acpi/suspend.c
> +++ b/xen/arch/x86/acpi/suspend.c
> @@ -23,7 +23,4 @@ void save_rest_processor_state(void)
>  void restore_rest_processor_state(void)
>  {
>      load_system_tables();
> -
> -    /* Restore full CR4 (inc MCE) now that the IDT is in place. */
> -    write_cr4(mmu_cr4_features);
>  }

This one should be possible to leave in place despite the other
changes.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:41:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:41:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2230.6590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONBs-0006vP-Fb; Fri, 02 Oct 2020 15:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2230.6590; Fri, 02 Oct 2020 15:41:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONBs-0006vI-Cb; Fri, 02 Oct 2020 15:41:48 +0000
Received: by outflank-mailman (input) for mailman id 2230;
 Fri, 02 Oct 2020 15:41:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kONBq-0006v4-NJ
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:41:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9cfa185-7e39-4417-9dfb-6b072cc7461f;
 Fri, 02 Oct 2020 15:41:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2180AB260;
 Fri,  2 Oct 2020 15:41:45 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kONBq-0006v4-NJ
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:41:46 +0000
X-Inumbo-ID: a9cfa185-7e39-4417-9dfb-6b072cc7461f
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a9cfa185-7e39-4417-9dfb-6b072cc7461f;
	Fri, 02 Oct 2020 15:41:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601653305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=oIK93nxrDael7j58S693ug7pAufuwxo+3VASfk/e1JA=;
	b=kO9YB7cXFNm5J4pvSpdHHYbuW0pNHvZZGX3lrQWMOtSEe85+YlX5W5rc29lD/7Z8BVPeCQ
	oJrB0/WznV4DUGhLL+NYOGO3TVFSBbrS/0ZN1JHNrXehJJoXJeaYbNeWoOf2XwNDd8yIBY
	R9oYp0WugEp1qHPHVNGewYKn2WR2ngA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2180AB260;
	Fri,  2 Oct 2020 15:41:45 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH 0/5] tools/xenstore: remove read-only socket
Date: Fri,  2 Oct 2020 17:41:36 +0200
Message-Id: <20201002154141.11677-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The read-only socket of Xenstore is usable in the daemon case only,
and even there it is not really worth keeping: not all Xenstore
operations changing the state of Xenstore are blocked, oxenstored
ignores the read-only semantics completely, and the privileges
required for using the ro-socket are the same as for the normal
rw-socket.

So remove this feature, switching the related use cases to the
Xenstore-type-agnostic open and close functions.

Juergen Gross (5):
  tools/xenstore: remove socket-only option from xenstore client
  tools/libs/store: ignore XS_OPEN_SOCKETONLY flag
  tools/libs/store: drop read-only functionality
  tools: drop all deprecated usages of xs_*_open() and friends in tools
  tools/xenstore: drop creation of read-only socket in xenstored

 docs/man/xenstore-chmod.1.pod           |  4 --
 docs/man/xenstore-ls.1.pod              |  4 --
 docs/man/xenstore-read.1.pod            |  4 --
 docs/man/xenstore-write.1.pod           |  4 --
 tools/console/client/main.c             |  2 +-
 tools/console/daemon/utils.c            |  4 +-
 tools/libs/light/libxl.c                |  6 +--
 tools/libs/light/libxl_exec.c           |  6 +--
 tools/libs/light/libxl_fork.c           |  2 +-
 tools/libs/stat/xenstat.c               |  4 +-
 tools/libs/store/include/xenstore.h     | 10 -----
 tools/libs/store/xs.c                   |  9 ++--
 tools/libs/vchan/init.c                 | 10 ++---
 tools/misc/xen-lowmemd.c                |  4 +-
 tools/python/xen/lowlevel/xs/xs.c       |  6 +--
 tools/tests/mce-test/tools/xen-mceinj.c |  4 +-
 tools/xenbackendd/xenbackendd.c         |  4 +-
 tools/xenpmd/xenpmd.c                   |  6 +--
 tools/xenstore/xenstore_client.c        |  8 +---
 tools/xenstore/xenstored_core.c         | 55 +++++--------------------
 tools/xenstore/xenstored_core.h         |  3 --
 tools/xenstore/xenstored_domain.c       |  4 +-
 tools/xenstore/xs_lib.c                 |  8 +---
 23 files changed, 46 insertions(+), 125 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:41:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:41:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2232.6604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONBu-0006xa-Ux; Fri, 02 Oct 2020 15:41:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2232.6604; Fri, 02 Oct 2020 15:41:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONBu-0006xT-Qj; Fri, 02 Oct 2020 15:41:50 +0000
Received: by outflank-mailman (input) for mailman id 2232;
 Fri, 02 Oct 2020 15:41:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kONBs-0006v4-SS
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:41:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1913937-ddf9-40d7-ae63-0f09078a303c;
 Fri, 02 Oct 2020 15:41:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 992C7B264;
 Fri,  2 Oct 2020 15:41:45 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kONBs-0006v4-SS
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:41:48 +0000
X-Inumbo-ID: e1913937-ddf9-40d7-ae63-0f09078a303c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e1913937-ddf9-40d7-ae63-0f09078a303c;
	Fri, 02 Oct 2020 15:41:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601653305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KbrEsrqdNvWQhuDT/WAXgzByQyKfzuLyEZut1PWtipw=;
	b=OmTqquRryuHi+lVJnfyqJwl4ttcKsNieBVa6/IZlRTuW2hu1FvWSMDDifafqjMjySb+fDx
	qfjonnZ17444IlZXaT9sRUEs1vKQZ0jjZrBzGWEo3BjNFWavlgE87Ddek5RdPaX1quKESc
	SrJmjjR+SjnFYQ5qXk91H/qR0eMGWIw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 992C7B264;
	Fri,  2 Oct 2020 15:41:45 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH 4/5] tools: drop all deprecated usages of xs_*_open() and friends in tools
Date: Fri,  2 Oct 2020 17:41:40 +0200
Message-Id: <20201002154141.11677-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002154141.11677-1-jgross@suse.com>
References: <20201002154141.11677-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Switch all usages of xs_daemon_open*() and xs_domain_open() to use
xs_open() instead. While at it, switch xs_daemon_close() users to
xs_close().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/console/client/main.c             |  2 +-
 tools/console/daemon/utils.c            |  4 ++--
 tools/libs/light/libxl.c                |  6 ++----
 tools/libs/light/libxl_exec.c           |  6 +++---
 tools/libs/light/libxl_fork.c           |  2 +-
 tools/libs/stat/xenstat.c               |  4 ++--
 tools/libs/vchan/init.c                 | 10 ++++------
 tools/misc/xen-lowmemd.c                |  4 ++--
 tools/python/xen/lowlevel/xs/xs.c       |  6 +++---
 tools/tests/mce-test/tools/xen-mceinj.c |  4 ++--
 tools/xenbackendd/xenbackendd.c         |  4 ++--
 tools/xenpmd/xenpmd.c                   |  6 +++---
 12 files changed, 27 insertions(+), 31 deletions(-)

diff --git a/tools/console/client/main.c b/tools/console/client/main.c
index f92ad3d8cf..088be28dff 100644
--- a/tools/console/client/main.c
+++ b/tools/console/client/main.c
@@ -398,7 +398,7 @@ int main(int argc, char **argv)
 		exit(EINVAL);
 	}
 
-	xs = xs_daemon_open();
+	xs = xs_open(0);
 	if (xs == NULL) {
 		err(errno, "Could not contact XenStore");
 	}
diff --git a/tools/console/daemon/utils.c b/tools/console/daemon/utils.c
index 97d7798b33..f9dd8a60c5 100644
--- a/tools/console/daemon/utils.c
+++ b/tools/console/daemon/utils.c
@@ -104,7 +104,7 @@ void daemonize(const char *pidfile)
 bool xen_setup(void)
 {
 	
-	xs = xs_daemon_open();
+	xs = xs_open(0);
 	if (xs == NULL) {
 		dolog(LOG_ERR,
 		      "Failed to contact xenstore (%m).  Is it running?");
@@ -131,7 +131,7 @@ bool xen_setup(void)
 
  out:
 	if (xs)
-		xs_daemon_close(xs);
+		xs_close(xs);
 	if (xc)
 		xc_interface_close(xc);
 	return false;
diff --git a/tools/libs/light/libxl.c b/tools/libs/light/libxl.c
index 621acc88f3..d2a87157a2 100644
--- a/tools/libs/light/libxl.c
+++ b/tools/libs/light/libxl.c
@@ -103,9 +103,7 @@ int libxl_ctx_alloc(libxl_ctx **pctx, int version,
         rc = ERROR_FAIL; goto out;
     }
 
-    ctx->xsh = xs_daemon_open();
-    if (!ctx->xsh)
-        ctx->xsh = xs_domain_open();
+    ctx->xsh = xs_open(0);
     if (!ctx->xsh) {
         LOGEV(ERROR, errno, "cannot connect to xenstore");
         rc = ERROR_FAIL; goto out;
@@ -171,7 +169,7 @@ int libxl_ctx_free(libxl_ctx *ctx)
 
     if (ctx->xch) xc_interface_close(ctx->xch);
     libxl_version_info_dispose(&ctx->version_info);
-    if (ctx->xsh) xs_daemon_close(ctx->xsh);
+    if (ctx->xsh) xs_close(ctx->xsh);
     if (ctx->xce) xenevtchn_close(ctx->xce);
 
     libxl__poller_put(ctx, ctx->poller_app);
diff --git a/tools/libs/light/libxl_exec.c b/tools/libs/light/libxl_exec.c
index 47c9c8f1ba..a8b949b193 100644
--- a/tools/libs/light/libxl_exec.c
+++ b/tools/libs/light/libxl_exec.c
@@ -178,7 +178,7 @@ int libxl__xenstore_child_wait_deprecated(libxl__gc *gc,
     unsigned int num;
     char **l = NULL;
 
-    xsh = xs_daemon_open();
+    xsh = xs_open(0);
     if (xsh == NULL) {
         LOG(ERROR, "Unable to open xenstore connection");
         goto err;
@@ -206,7 +206,7 @@ int libxl__xenstore_child_wait_deprecated(libxl__gc *gc,
 
         free(p);
         xs_unwatch(xsh, path, path);
-        xs_daemon_close(xsh);
+        xs_close(xsh);
         return rc;
 again:
         free(p);
@@ -226,7 +226,7 @@ again:
     LOG(ERROR, "%s not ready", what);
 
     xs_unwatch(xsh, path, path);
-    xs_daemon_close(xsh);
+    xs_close(xsh);
 err:
     return -1;
 }
diff --git a/tools/libs/light/libxl_fork.c b/tools/libs/light/libxl_fork.c
index 9a4709b9a4..5d47dceb8a 100644
--- a/tools/libs/light/libxl_fork.c
+++ b/tools/libs/light/libxl_fork.c
@@ -663,7 +663,7 @@ int libxl__ev_child_xenstore_reopen(libxl__gc *gc, const char *what) {
     int rc;
 
     assert(!CTX->xsh);
-    CTX->xsh = xs_daemon_open();
+    CTX->xsh = xs_open(0);
     if (!CTX->xsh) {
         LOGE(ERROR, "%s: xenstore reopen failed", what);
         rc = ERROR_FAIL;  goto out;
diff --git a/tools/libs/stat/xenstat.c b/tools/libs/stat/xenstat.c
index 6f93d4e982..e49689aa2d 100644
--- a/tools/libs/stat/xenstat.c
+++ b/tools/libs/stat/xenstat.c
@@ -107,7 +107,7 @@ xenstat_handle *xenstat_init(void)
 		return NULL;
 	}
 
-	handle->xshandle = xs_daemon_open_readonly(); /* open handle to xenstore*/
+	handle->xshandle = xs_open(0); /* open handle to xenstore*/
 	if (handle->xshandle == NULL) {
 		perror("unable to open xenstore");
 		xc_interface_close(handle->xc_handle);
@@ -125,7 +125,7 @@ void xenstat_uninit(xenstat_handle * handle)
 		for (i = 0; i < NUM_COLLECTORS; i++)
 			collectors[i].uninit(handle);
 		xc_interface_close(handle->xc_handle);
-		xs_daemon_close(handle->xshandle);
+		xs_close(handle->xshandle);
 		free(handle->priv);
 		free(handle);
 	}
diff --git a/tools/libs/vchan/init.c b/tools/libs/vchan/init.c
index ad4b64fbe3..c8510e6ce9 100644
--- a/tools/libs/vchan/init.c
+++ b/tools/libs/vchan/init.c
@@ -251,7 +251,7 @@ static int init_xs_srv(struct libxenvchan *ctrl, int domain, const char* xs_base
 	char ref[16];
 	char* domid_str = NULL;
 	xs_transaction_t xs_trans = XBT_NULL;
-	xs = xs_domain_open();
+	xs = xs_open(0);
 	if (!xs)
 		goto fail;
 	domid_str = xs_read(xs, 0, "domid", NULL);
@@ -293,7 +293,7 @@ retry_transaction:
 	}
  fail_xs_open:
 	free(domid_str);
-	xs_daemon_close(xs);
+	xs_close(xs);
  fail:
 	return ret;
 }
@@ -408,9 +408,7 @@ struct libxenvchan *libxenvchan_client_init(struct xentoollog_logger *logger,
 	ctrl->write.order = ctrl->read.order = 0;
 	ctrl->is_server = 0;
 
-	xs = xs_daemon_open();
-	if (!xs)
-		xs = xs_domain_open();
+	xs = xs_open(0);
 	if (!xs)
 		goto fail;
 
@@ -452,7 +450,7 @@ struct libxenvchan *libxenvchan_client_init(struct xentoollog_logger *logger,
 
  out:
 	if (xs)
-		xs_daemon_close(xs);
+		xs_close(xs);
 	return ctrl;
  fail:
 	libxenvchan_close(ctrl);
diff --git a/tools/misc/xen-lowmemd.c b/tools/misc/xen-lowmemd.c
index 79ad34cb4a..a3a2741242 100644
--- a/tools/misc/xen-lowmemd.c
+++ b/tools/misc/xen-lowmemd.c
@@ -24,7 +24,7 @@ void cleanup(void)
     if (xch)
         xc_interface_close(xch);
     if (xs_handle)
-        xs_daemon_close(xs_handle);
+        xs_close(xs_handle);
 }
 
 /* Never shrink dom0 below 1 GiB */
@@ -102,7 +102,7 @@ int main(int argc, char *argv[])
         return 2;
     }
 
-    xs_handle = xs_daemon_open();
+    xs_handle = xs_open(0);
     if (xs_handle == NULL)
     {
         perror("Failed to open xenstore connection");
diff --git a/tools/python/xen/lowlevel/xs/xs.c b/tools/python/xen/lowlevel/xs/xs.c
index b7d4b6ef5d..0dad7fa5f2 100644
--- a/tools/python/xen/lowlevel/xs/xs.c
+++ b/tools/python/xen/lowlevel/xs/xs.c
@@ -791,7 +791,7 @@ static PyObject *xspy_close(XsHandle *self)
         PySequence_SetItem(self->watches, i, Py_None);
     }
 
-    xs_daemon_close(xh);
+    xs_close(xh);
     self->xh = NULL;
 
     Py_INCREF(Py_None);
@@ -985,7 +985,7 @@ xshandle_init(XsHandle *self, PyObject *args, PyObject *kwds)
                                      &readonly))
         goto fail;
 
-    self->xh = (readonly ? xs_daemon_open_readonly() : xs_daemon_open());
+    self->xh = xs_open(0);
     if (!self->xh)
         goto fail;
 
@@ -999,7 +999,7 @@ xshandle_init(XsHandle *self, PyObject *args, PyObject *kwds)
 static void xshandle_dealloc(XsHandle *self)
 {
     if (self->xh) {
-        xs_daemon_close(self->xh);
+        xs_close(self->xh);
         self->xh = NULL;
     }
 
diff --git a/tools/tests/mce-test/tools/xen-mceinj.c b/tools/tests/mce-test/tools/xen-mceinj.c
index 380e42190c..1187d01e5f 100644
--- a/tools/tests/mce-test/tools/xen-mceinj.c
+++ b/tools/tests/mce-test/tools/xen-mceinj.c
@@ -411,13 +411,13 @@ static long xs_get_dom_mem(int domid)
     unsigned int plen;
     struct xs_handle *xs;
 
-    xs = xs_daemon_open();
+    xs = xs_open(0);
     if (!xs)
         return -1;
 
     sprintf(path, "/local/domain/%d/memory/target", domid);
     memstr = xs_read(xs, XBT_NULL, path, &plen);
-    xs_daemon_close(xs);
+    xs_close(xs);
 
     if (!memstr || !plen)
         return -1;
diff --git a/tools/xenbackendd/xenbackendd.c b/tools/xenbackendd/xenbackendd.c
index b6d92984e0..21884af772 100644
--- a/tools/xenbackendd/xenbackendd.c
+++ b/tools/xenbackendd/xenbackendd.c
@@ -122,7 +122,7 @@ usage(void)
 static int
 xen_setup(void)
 {
-	xs = xs_daemon_open();
+	xs = xs_open(0);
 	if (xs == NULL) {
 		dolog(LOG_ERR,
 		    "Failed to contact xenstore (%s).  Is it running?",
@@ -138,7 +138,7 @@ xen_setup(void)
 
  out:
 	if (xs) {
-		xs_daemon_close(xs);
+		xs_close(xs);
 		xs = NULL;
 	}
 	return -1;
diff --git a/tools/xenpmd/xenpmd.c b/tools/xenpmd/xenpmd.c
index 1c801caa71..35fd1c931a 100644
--- a/tools/xenpmd/xenpmd.c
+++ b/tools/xenpmd/xenpmd.c
@@ -502,18 +502,18 @@ int main(int argc, char *argv[])
 #ifndef RUN_STANDALONE
     daemonize();
 #endif
-    xs = (struct xs_handle *)xs_daemon_open();
+    xs = xs_open(0);
     if ( xs == NULL ) 
         return -1;
 
     if ( write_one_time_battery_info() == 0 ) 
     {
-        xs_daemon_close(xs);
+        xs_close(xs);
         return -1;
     }
 
     wait_for_and_update_battery_status_request();
-    xs_daemon_close(xs);
+    xs_close(xs);
     return 0;
 }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:41:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2233.6616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONBx-000708-8i; Fri, 02 Oct 2020 15:41:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2233.6616; Fri, 02 Oct 2020 15:41:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONBx-000700-43; Fri, 02 Oct 2020 15:41:53 +0000
Received: by outflank-mailman (input) for mailman id 2233;
 Fri, 02 Oct 2020 15:41:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kONBv-0006v4-Lm
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:41:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6be06e3-4dd6-46e8-99d2-81f82e469325;
 Fri, 02 Oct 2020 15:41:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B62D0B26A;
 Fri,  2 Oct 2020 15:41:45 +0000 (UTC)
X-Inumbo-ID: d6be06e3-4dd6-46e8-99d2-81f82e469325
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601653305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IVXyqd8+BJ9r6/80OoSoimGVYzfPs4ujpYo9jzSMuLs=;
	b=lGKrfw3ovFXY/f0HnxIqVDT9WUvjzxqKl5mkUEO2DdqbN7vwbeFGwpigFPMLDAIq8iQHxh
	2BgPOERbcaKx/yQJxlihiJIPA2K/9pohTxv68O+RxGDgtUBu8bnJ6DWOkbb59nQj8lUi1+
	MMCN6fS6iw7rzh31iqkZ/2m7gjLkzAQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 5/5] tools/xenstore: drop creation of read-only socket in xenstored
Date: Fri,  2 Oct 2020 17:41:41 +0200
Message-Id: <20201002154141.11677-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002154141.11677-1-jgross@suse.com>
References: <20201002154141.11677-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With xs_daemon_open_readonly() no longer using the read-only socket,
the creation of that socket can be dropped.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   | 55 +++++++------------------------
 tools/xenstore/xenstored_core.h   |  3 --
 tools/xenstore/xenstored_domain.c |  4 +--
 tools/xenstore/xs_lib.c           |  8 +----
 4 files changed, 14 insertions(+), 56 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 9700772d40..b4be374d3f 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -71,7 +71,6 @@ static unsigned int current_array_size;
 static unsigned int nr_fds;
 
 static int sock = -1;
-static int ro_sock = -1;
 
 static bool verbose = false;
 LIST_HEAD(connections);
@@ -311,8 +310,7 @@ fail:
 	return -1;
 }
 
-static void initialize_fds(int *p_sock_pollfd_idx, int *p_ro_sock_pollfd_idx,
-			   int *ptimeout)
+static void initialize_fds(int *p_sock_pollfd_idx, int *ptimeout)
 {
 	struct connection *conn;
 	struct wrl_timestampt now;
@@ -325,8 +323,6 @@ static void initialize_fds(int *p_sock_pollfd_idx, int *p_ro_sock_pollfd_idx,
 
 	if (sock != -1)
 		*p_sock_pollfd_idx = set_fd(sock, POLLIN|POLLPRI);
-	if (ro_sock != -1)
-		*p_ro_sock_pollfd_idx = set_fd(ro_sock, POLLIN|POLLPRI);
 	if (reopen_log_pipe[0] != -1)
 		reopen_log_pipe0_pollfd_idx =
 			set_fd(reopen_log_pipe[0], POLLIN|POLLPRI);
@@ -472,9 +468,6 @@ static enum xs_perm_type perm_for_conn(struct connection *conn,
 	unsigned int i;
 	enum xs_perm_type mask = XS_PERM_READ|XS_PERM_WRITE|XS_PERM_OWNER;
 
-	if (!conn->can_write)
-		mask &= ~XS_PERM_WRITE;
-
 	/* Owners and tools get it all... */
 	if (!domain_is_unprivileged(conn) || perms[0].id == conn->id
                 || (conn->target && perms[0].id == conn->target->id))
@@ -1422,7 +1415,6 @@ struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
 	new->pollfd_idx = -1;
 	new->write = write;
 	new->read = read;
-	new->can_write = true;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
 	INIT_LIST_HEAD(&new->watches);
@@ -1435,7 +1427,7 @@ struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
 }
 
 #ifdef NO_SOCKETS
-static void accept_connection(int sock, bool canwrite)
+static void accept_connection(int sock)
 {
 }
 #else
@@ -1477,7 +1469,7 @@ static int readfd(struct connection *conn, void *data, unsigned int len)
 	return rc;
 }
 
-static void accept_connection(int sock, bool canwrite)
+static void accept_connection(int sock)
 {
 	int fd;
 	struct connection *conn;
@@ -1487,10 +1479,9 @@ static void accept_connection(int sock, bool canwrite)
 		return;
 
 	conn = new_connection(writefd, readfd);
-	if (conn) {
+	if (conn)
 		conn->fd = fd;
-		conn->can_write = canwrite;
-	} else
+	else
 		close(fd);
 }
 #endif
@@ -1794,28 +1785,21 @@ static void destroy_fds(void)
 {
 	if (sock >= 0)
 		close(sock);
-	if (ro_sock >= 0)
-		close(ro_sock);
 }
 
 static void init_sockets(void)
 {
 	struct sockaddr_un addr;
 	const char *soc_str = xs_daemon_socket();
-	const char *soc_str_ro = xs_daemon_socket_ro();
 
 	/* Create sockets for them to listen to. */
 	atexit(destroy_fds);
 	sock = socket(PF_UNIX, SOCK_STREAM, 0);
 	if (sock < 0)
 		barf_perror("Could not create socket");
-	ro_sock = socket(PF_UNIX, SOCK_STREAM, 0);
-	if (ro_sock < 0)
-		barf_perror("Could not create socket");
 
 	/* FIXME: Be more sophisticated, don't mug running daemon. */
 	unlink(soc_str);
-	unlink(soc_str_ro);
 
 	addr.sun_family = AF_UNIX;
 
@@ -1825,17 +1809,10 @@ static void init_sockets(void)
 	if (bind(sock, (struct sockaddr *)&addr, sizeof(addr)) != 0)
 		barf_perror("Could not bind socket to %s", soc_str);
 
-	if(strlen(soc_str_ro) >= sizeof(addr.sun_path))
-		barf_perror("socket string '%s' too long", soc_str_ro);
-	strcpy(addr.sun_path, soc_str_ro);
-	if (bind(ro_sock, (struct sockaddr *)&addr, sizeof(addr)) != 0)
-		barf_perror("Could not bind socket to %s", soc_str_ro);
-
-	if (chmod(soc_str, 0600) != 0
-	    || chmod(soc_str_ro, 0660) != 0)
+	if (chmod(soc_str, 0600) != 0)
 		barf_perror("Could not chmod sockets");
 
-	if (listen(sock, 1) != 0 || listen(ro_sock, 1) != 0)
+	if (listen(sock, 1) != 0)
 		barf_perror("Could not listen on sockets");
 }
 #endif
@@ -1893,7 +1870,7 @@ int priv_domid = 0;
 int main(int argc, char *argv[])
 {
 	int opt;
-	int sock_pollfd_idx = -1, ro_sock_pollfd_idx = -1;
+	int sock_pollfd_idx = -1;
 	bool dofork = true;
 	bool outputpid = false;
 	bool no_domain_init = false;
@@ -2010,7 +1987,7 @@ int main(int argc, char *argv[])
 		tracefile = talloc_strdup(NULL, tracefile);
 
 	/* Get ready to listen to the tools. */
-	initialize_fds(&sock_pollfd_idx, &ro_sock_pollfd_idx, &timeout);
+	initialize_fds(&sock_pollfd_idx, &timeout);
 
 	/* Tell the kernel we're up and running. */
 	xenbus_notify_running();
@@ -2051,21 +2028,11 @@ int main(int argc, char *argv[])
 				barf_perror("sock poll failed");
 				break;
 			} else if (fds[sock_pollfd_idx].revents & POLLIN) {
-				accept_connection(sock, true);
+				accept_connection(sock);
 				sock_pollfd_idx = -1;
 			}
 		}
 
-		if (ro_sock_pollfd_idx != -1) {
-			if (fds[ro_sock_pollfd_idx].revents & ~POLLIN) {
-				barf_perror("ro sock poll failed");
-				break;
-			} else if (fds[ro_sock_pollfd_idx].revents & POLLIN) {
-				accept_connection(ro_sock, false);
-				ro_sock_pollfd_idx = -1;
-			}
-		}
-
 		if (xce_pollfd_idx != -1) {
 			if (fds[xce_pollfd_idx].revents & ~POLLIN) {
 				barf_perror("xce_handle poll failed");
@@ -2128,7 +2095,7 @@ int main(int argc, char *argv[])
 			}
 		}
 
-		initialize_fds(&sock_pollfd_idx, &ro_sock_pollfd_idx, &timeout);
+		initialize_fds(&sock_pollfd_idx, &timeout);
 	}
 }
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index c4c32bc88f..1df6ad94ab 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -77,9 +77,6 @@ struct connection
 	/* Who am I? 0 for socket connections. */
 	unsigned int id;
 
-	/* Is this a read-only connection? */
-	bool can_write;
-
 	/* Buffered incoming data. */
 	struct buffered_data *in;
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 0d5495745b..a2f144f6dd 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -372,7 +372,7 @@ int do_introduce(struct connection *conn, struct buffered_data *in)
 	if (get_strings(in, vec, ARRAY_SIZE(vec)) < ARRAY_SIZE(vec))
 		return EINVAL;
 
-	if (domain_is_unprivileged(conn) || !conn->can_write)
+	if (domain_is_unprivileged(conn))
 		return EACCES;
 
 	domid = atoi(vec[0]);
@@ -438,7 +438,7 @@ int do_set_target(struct connection *conn, struct buffered_data *in)
 	if (get_strings(in, vec, ARRAY_SIZE(vec)) < ARRAY_SIZE(vec))
 		return EINVAL;
 
-	if (domain_is_unprivileged(conn) || !conn->can_write)
+	if (domain_is_unprivileged(conn))
 		return EACCES;
 
 	domid = atoi(vec[0]);
diff --git a/tools/xenstore/xs_lib.c b/tools/xenstore/xs_lib.c
index 3e43f8809d..9f1dc6d559 100644
--- a/tools/xenstore/xs_lib.c
+++ b/tools/xenstore/xs_lib.c
@@ -63,13 +63,7 @@ const char *xs_daemon_socket(void)
 
 const char *xs_daemon_socket_ro(void)
 {
-	static char buf[PATH_MAX];
-	const char *s = xs_daemon_path();
-	if (s == NULL)
-		return NULL;
-	if (snprintf(buf, sizeof(buf), "%s_ro", s) >= PATH_MAX)
-		return NULL;
-	return buf;
+	return xs_daemon_path();
 }
 
 const char *xs_domain_dev(void)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:41:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2234.6625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONBx-000718-SO; Fri, 02 Oct 2020 15:41:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2234.6625; Fri, 02 Oct 2020 15:41:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONBx-00070g-FG; Fri, 02 Oct 2020 15:41:53 +0000
Received: by outflank-mailman (input) for mailman id 2234;
 Fri, 02 Oct 2020 15:41:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kONBw-0006vD-9T
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:41:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 763a8954-9454-401c-b98d-f5c3d1e992f8;
 Fri, 02 Oct 2020 15:41:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 33DA2ACC8;
 Fri,  2 Oct 2020 15:41:45 +0000 (UTC)
X-Inumbo-ID: 763a8954-9454-401c-b98d-f5c3d1e992f8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601653305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6NXymCcIW4Fjm67kMAu1GZlcSOUTMK3m1fQFdWrEW6A=;
	b=ht1KggQYReLOBEjjwP8uq8Grnk1/oAKhl8LjXUDMC35tiXMauCdt+3LQX/Dd9mbJ5L9EVe
	c2c7BPdeMhSo/q+1HDhGueqb09UTbBu62wTChL9mndUUJaT/1N8NBnpraUp2ITS1YjzaRi
	uM21An7wQm2n4kUXcnaIe4QMd9krXz0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/5] tools/xenstore: remove socket-only option from xenstore client
Date: Fri,  2 Oct 2020 17:41:37 +0200
Message-Id: <20201002154141.11677-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002154141.11677-1-jgross@suse.com>
References: <20201002154141.11677-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Xenstore access commands (xenstore-*) offer an option ("-s") to
restrict the connection to Xenstore to a local socket. This option
makes no sense at all: either there is only a socket, in which case
the option is a no-op, or there is no socket at all (in case Xenstore
is running in a stubdom or the client is called in a domU), in which
case specifying the option just leads to failure.

So drop that option completely.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/man/xenstore-chmod.1.pod    | 4 ----
 docs/man/xenstore-ls.1.pod       | 4 ----
 docs/man/xenstore-read.1.pod     | 4 ----
 docs/man/xenstore-write.1.pod    | 4 ----
 tools/xenstore/xenstore_client.c | 8 ++------
 5 files changed, 2 insertions(+), 22 deletions(-)

diff --git a/docs/man/xenstore-chmod.1.pod b/docs/man/xenstore-chmod.1.pod
index d76f34723d..d221f5dffc 100644
--- a/docs/man/xenstore-chmod.1.pod
+++ b/docs/man/xenstore-chmod.1.pod
@@ -46,10 +46,6 @@ write, and set permissions).
 
 Apply the permissions to the key and all its I<children>.
 
-=item B<-s>
-
-Connect to the Xenstore daemon using a local socket only.
-
 =item B<-u>
 
 Apply the permissions to the key and all its I<parents>.
diff --git a/docs/man/xenstore-ls.1.pod b/docs/man/xenstore-ls.1.pod
index 8dac931e94..a9f8b32653 100644
--- a/docs/man/xenstore-ls.1.pod
+++ b/docs/man/xenstore-ls.1.pod
@@ -50,10 +50,6 @@ I<and> the permissions for any domain not explicitly listed in
 subsequent entries.  The key owner always has full access (read,
 write, and set permissions).
 
-=item B<-s>
-
-Connect to the Xenstore daemon using a local socket only.
-
 =back
 
 =head1 BUGS
diff --git a/docs/man/xenstore-read.1.pod b/docs/man/xenstore-read.1.pod
index f5a7bb7e46..c7768cbbe5 100644
--- a/docs/man/xenstore-read.1.pod
+++ b/docs/man/xenstore-read.1.pod
@@ -16,10 +16,6 @@ Read values of one or more Xenstore I<PATH>s.
 
 Prefix value with key name.
 
-=item B<-s>
-
-Connect to the Xenstore daemon using a local socket only.
-
 =item B<-R>
 
 Read raw value, skip escaping non-printable characters (\x..).
diff --git a/docs/man/xenstore-write.1.pod b/docs/man/xenstore-write.1.pod
index d1b011236a..a0b1bca333 100644
--- a/docs/man/xenstore-write.1.pod
+++ b/docs/man/xenstore-write.1.pod
@@ -13,10 +13,6 @@ provided to write them at once - in one Xenstore transaction.
 
 =over
 
-=item B<-s>
-
-Connect to the Xenstore daemon using a local socket only.
-
 =item B<-R>
 
 Write raw value, skip parsing escaped characters (\x..).
diff --git a/tools/xenstore/xenstore_client.c b/tools/xenstore/xenstore_client.c
index ae7ed3eb9e..8015bfe5be 100644
--- a/tools/xenstore/xenstore_client.c
+++ b/tools/xenstore/xenstore_client.c
@@ -530,7 +530,7 @@ main(int argc, char **argv)
 {
     struct xs_handle *xsh;
     xs_transaction_t xth = XBT_NULL;
-    int ret = 0, socket = 0;
+    int ret = 0;
     int prefix = 0;
     int tidy = 0;
     int upto = 0;
@@ -565,7 +565,6 @@ main(int argc, char **argv)
 	static struct option long_options[] = {
 	    {"help",    0, 0, 'h'},
 	    {"flat",    0, 0, 'f'}, /* MODE_ls */
-	    {"socket",  0, 0, 's'},
 	    {"prefix",  0, 0, 'p'}, /* MODE_read || MODE_list || MODE_ls */
 	    {"tidy",    0, 0, 't'}, /* MODE_rm */
 	    {"upto",    0, 0, 'u'}, /* MODE_chmod */
@@ -593,9 +592,6 @@ main(int argc, char **argv)
 		usage(mode, switch_argv, argv[0]);
 	    }
             break;
-        case 's':
-            socket = 1;
-            break;
 	case 'p':
 	    if ( mode == MODE_read || mode == MODE_list || mode == MODE_ls )
 		prefix = 1;
@@ -675,7 +671,7 @@ main(int argc, char **argv)
 	    max_width = ws.ws_col - 2;
     }
 
-    xsh = xs_open(socket ? XS_OPEN_SOCKETONLY : 0);
+    xsh = xs_open(0);
     if (xsh == NULL) err(1, "xs_open");
 
 again:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:41:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:41:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2235.6640 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONC3-00077l-2Y; Fri, 02 Oct 2020 15:41:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2235.6640; Fri, 02 Oct 2020 15:41:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONC2-00077c-Td; Fri, 02 Oct 2020 15:41:58 +0000
Received: by outflank-mailman (input) for mailman id 2235;
 Fri, 02 Oct 2020 15:41:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kONC1-0006vD-9j
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:41:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4c67228-f86e-46c4-a739-4b1b0520460d;
 Fri, 02 Oct 2020 15:41:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5D3F0B265;
 Fri,  2 Oct 2020 15:41:45 +0000 (UTC)
X-Inumbo-ID: d4c67228-f86e-46c4-a739-4b1b0520460d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601653305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gcLMBVGwJcyyJDDSc6Pbxmq7yh49BQo92DhFtrzVoNM=;
	b=cXGOQcxRf2cFvKIIkdGEJ5gTvTQYqgbCwNcJpjgsizwJvcz+x7s2n3yA7QRwCR214ol4s5
	g06OLukDAvd++ZJxbOoRL8KLOxK6gl0AzQaQJb7XPTs5pmp49iVC07/ifS/gu7+6doJxe3
	Ry/QPwjF2Z2YjFj2uQmn8KcNSdl/Hrc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/5] tools/libs/store: ignore XS_OPEN_SOCKETONLY flag
Date: Fri,  2 Oct 2020 17:41:38 +0200
Message-Id: <20201002154141.11677-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002154141.11677-1-jgross@suse.com>
References: <20201002154141.11677-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When opening a connection to Xenstore via xs_open() it makes no sense
to limit the connection to the socket-based transport. So just ignore
the XS_OPEN_SOCKETONLY flag.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/store/include/xenstore.h | 2 --
 tools/libs/store/xs.c               | 2 +-
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/tools/libs/store/include/xenstore.h b/tools/libs/store/include/xenstore.h
index 25b31881c8..cbc7206a0f 100644
--- a/tools/libs/store/include/xenstore.h
+++ b/tools/libs/store/include/xenstore.h
@@ -66,8 +66,6 @@ typedef uint32_t xs_transaction_t;
  * * Connections made with xs_open(0) (which might be shared page or
  *   socket based) are only guaranteed to work in the parent after
  *   fork.
- * * Connections made with xs_open(XS_OPEN_SOCKETONLY) will be usable
- *   in either the parent or the child after fork, but not both.
  * * xs_daemon_open*() and xs_domain_open() are deprecated synonyms
  *   for xs_open(0).
  * * XS_OPEN_READONLY has no bearing on any of this.
diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index aa1d24b8b9..320734416f 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -319,7 +319,7 @@ struct xs_handle *xs_open(unsigned long flags)
 	else
 		xsh = get_handle(xs_daemon_socket());
 
-	if (!xsh && !(flags & XS_OPEN_SOCKETONLY))
+	if (!xsh)
 		xsh = get_handle(xs_domain_dev());
 
 	if (xsh && (flags & XS_UNWATCH_FILTER))
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:42:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:42:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2236.6652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONC7-0007Dh-Jl; Fri, 02 Oct 2020 15:42:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2236.6652; Fri, 02 Oct 2020 15:42:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONC7-0007DU-FH; Fri, 02 Oct 2020 15:42:03 +0000
Received: by outflank-mailman (input) for mailman id 2236;
 Fri, 02 Oct 2020 15:42:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kONC6-0006vD-9v
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:42:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0fb1f41-52ea-41f8-ad4f-6c9b0555a9f3;
 Fri, 02 Oct 2020 15:41:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79682B269;
 Fri,  2 Oct 2020 15:41:45 +0000 (UTC)
X-Inumbo-ID: f0fb1f41-52ea-41f8-ad4f-6c9b0555a9f3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601653305;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FNF+aRfDbKHMZCseaVDAhmaSRPG1QxwjSNnLmsmdBpU=;
	b=s3zAfuAk3ObNwVSkUaMaGpk4An9Odhdv0ughwfA6YdsIF1MFhYKHPsYLqpmhPVWfui+5p/
	TKm1nFm2CI/IkuxovkbX7hD3Rup6TTu9giyVecUuqTnPGZRkDFesJzFKGmQHfOg9p3BfSU
	5UAlA7q0eJySJUqoxPrndPrgGACaNs0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 3/5] tools/libs/store: drop read-only functionality
Date: Fri,  2 Oct 2020 17:41:39 +0200
Message-Id: <20201002154141.11677-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201002154141.11677-1-jgross@suse.com>
References: <20201002154141.11677-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today it is possible to open the connection in read-only mode via
xs_daemon_open_readonly(). This works only with Xenstore running as a
daemon in the same domain as the user. Additionally it doesn't add any
security, as accessing the socket used for that functionality requires
the same privileges as the socket used for full Xenstore access.

So drop the read-only semantics in all cases, keeping the interface
only for compatibility reasons. This in turn requires ignoring the
XS_OPEN_READONLY flag.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/store/include/xenstore.h | 8 --------
 tools/libs/store/xs.c               | 7 ++-----
 2 files changed, 2 insertions(+), 13 deletions(-)

diff --git a/tools/libs/store/include/xenstore.h b/tools/libs/store/include/xenstore.h
index cbc7206a0f..158e69ef83 100644
--- a/tools/libs/store/include/xenstore.h
+++ b/tools/libs/store/include/xenstore.h
@@ -60,15 +60,12 @@ typedef uint32_t xs_transaction_t;
 /* Open a connection to the xs daemon.
  * Attempts to make a connection over the socket interface,
  * and if it fails, then over the  xenbus interface.
- * Mode 0 specifies read-write access, XS_OPEN_READONLY for
- * read-only access.
  *
  * * Connections made with xs_open(0) (which might be shared page or
  *   socket based) are only guaranteed to work in the parent after
  *   fork.
  * * xs_daemon_open*() and xs_domain_open() are deprecated synonyms
  *   for xs_open(0).
- * * XS_OPEN_READONLY has no bearing on any of this.
  *
  * Returns a handle or NULL.
  */
@@ -83,11 +80,6 @@ void xs_close(struct xs_handle *xsh /* NULL ok */);
  */
 struct xs_handle *xs_daemon_open(void);
 struct xs_handle *xs_domain_open(void);
-
-/* Connect to the xs daemon (readonly for non-root clients).
- * Returns a handle or NULL.
- * Deprecated, please use xs_open(XS_OPEN_READONLY) instead
- */
 struct xs_handle *xs_daemon_open_readonly(void);
 
 /* Close the connection to the xs daemon.
diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index 320734416f..4ac73ec317 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -302,7 +302,7 @@ struct xs_handle *xs_daemon_open(void)
 
 struct xs_handle *xs_daemon_open_readonly(void)
 {
-	return xs_open(XS_OPEN_READONLY);
+	return xs_open(0);
 }
 
 struct xs_handle *xs_domain_open(void)
@@ -314,10 +314,7 @@ struct xs_handle *xs_open(unsigned long flags)
 {
 	struct xs_handle *xsh = NULL;
 
-	if (flags & XS_OPEN_READONLY)
-		xsh = get_handle(xs_daemon_socket_ro());
-	else
-		xsh = get_handle(xs_daemon_socket());
+	xsh = get_handle(xs_daemon_socket());
 
 	if (!xsh)
 		xsh = get_handle(xs_domain_dev());
-- 
2.26.2
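The net effect of the patch above is that read-only opens collapse onto the read-write path: XS_OPEN_READONLY no longer selects a separate socket, and xs_daemon_open_readonly() behaves like xs_open(0). A toy caller-side sketch of that flow (the stub bodies and the socket path here are illustrative stand-ins, not the real libxenstore implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy model of the patched xs_open() flow: XS_OPEN_READONLY no longer
 * selects a separate read-only socket; every open uses the RW path.
 * get_handle() and the socket path below are illustrative stand-ins. */
#define XS_OPEN_READONLY (1UL << 0)

static const char *opened_path;          /* records which path was used */

static int get_handle(const char *path)
{
    opened_path = path;
    return 1;                            /* pretend the connect succeeded */
}

static int model_xs_open(unsigned long flags)
{
    (void)flags;                         /* READONLY now has no effect */
    return get_handle("/run/xenstored/socket");
}

static int model_xs_daemon_open_readonly(void)
{
    return model_xs_open(0);             /* deprecated synonym for xs_open(0) */
}
```

In this model, passing XS_OPEN_READONLY and passing 0 reach the same socket, mirroring the header-comment removals in the patch.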



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 15:43:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 15:43:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2239.6664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOND2-0007Zy-0t; Fri, 02 Oct 2020 15:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2239.6664; Fri, 02 Oct 2020 15:42:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOND1-0007Zr-U1; Fri, 02 Oct 2020 15:42:59 +0000
Received: by outflank-mailman (input) for mailman id 2239;
 Fri, 02 Oct 2020 15:42:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOND0-0007ZV-0w
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 15:42:58 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8458d17a-c2e9-4f3a-8306-311a986c7892;
 Fri, 02 Oct 2020 15:42:56 +0000 (UTC)

DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601653376;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=2vB/BhgmcK/033yAkis2WFflAsrHtymEDb+pmWYoL2o=;
  b=dtowS8WRLEXGxqd8C02Y+S2IfGjviDsqJApFM1qehE/x0TUdGTazemjg
   yNeA7RsWuPcnBpjnVruR2N5TzvqflUVo7Z4iGcTLSZItrmOnP1XUrGjZK
   cvi6PgPkNUh109g/E8DDMT+sus3B5DCWdEPLo45udxv4iRQMaPAj7tnSr
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: nd7SNFVxjW5sRcXrQRWBKfq5bq1a61ky3GiRdH5dh+nokrGVxvpLLORvv6oJFes4fAKeHJKH7G
 FObcTadzFUsgQHvrmzvgnQ5pw3YcBWMmTeXYkVR41vjBlZ4tgw6kCoJ/Oi33UWfuQSb+SUq0b4
 xWIpgEyVpWAyX32iAfNY5cDPQYlJ3YzO3Ph1cbWeveQIJCkYkBqyfHGbzLPkBYy5m9CGug14CK
 1S65GYSgu+A/SpnWLKWMJ9o17/Y6ACkZ3pH8k5k/pDoC7uI85JRXv0ox8v4KZY7MjkugiCK4uu
 6AY=
X-SBRS: None
X-MesageID: 28440549
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,328,1596513600"; 
   d="scan'208";a="28440549"
Subject: Re: Yet another S3 issue in Xen 4.14
To: Jan Beulich <jbeulich@suse.com>,
	=?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
CC: xen-devel <xen-devel@lists.xenproject.org>
References: <20201001011245.GL3962@mail-itl>
 <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>
 <20201001123129.GJ1482@mail-itl>
 <1e596ccc-a875-93f1-2619-e4dbcbd88b4d@citrix.com>
 <20201002150859.GM3962@mail-itl>
 <454ac9ce-012f-f2e7-722d-c5304fd3146f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <aa5e1a7b-3724-bdc1-a313-0598aabd181f@citrix.com>
Date: Fri, 2 Oct 2020 16:42:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <454ac9ce-012f-f2e7-722d-c5304fd3146f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 02/10/2020 16:39, Jan Beulich wrote:
> On 02.10.2020 17:08, Marek Marczykowski-Górecki wrote:
>> I've done another bisect on the commit broken up in separate changes
>> (https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/dbg-s3)
>> and the bad part seems to be this:
>>
>> From dbdb32f8c265295d6af7cd4cd0aa12b6d04a0430 Mon Sep 17 00:00:00 2001
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Date: Fri, 2 Oct 2020 15:40:22 +0100
>> Subject: [PATCH 1/1] CR4
> Interesting - I was guessing along these lines yesterday, but couldn't
> come up with even a vague reason why this would be. I think you could
> split it up further:
>
>> --- a/xen/arch/x86/acpi/power.c
>> +++ b/xen/arch/x86/acpi/power.c
>> @@ -195,7 +195,6 @@ static int enter_state(u32 state)
>>      unsigned long flags;
>>      int error;
>>      struct cpu_info *ci;
>> -    unsigned long cr4;
>>  
>>      if ( (state <= ACPI_STATE_S0) || (state > ACPI_S_STATES_MAX) )
>>          return -EINVAL;
>> @@ -270,15 +269,15 @@ static int enter_state(u32 state)
>>  
>>      system_state = SYS_STATE_resume;
>>  
>> -    /* Restore CR4 and EFER from cached values. */
>> -    cr4 = read_cr4();
>> -    write_cr4(cr4 & ~X86_CR4_MCE);
>> +    /* Restore EFER from cached value. */
>>      write_efer(read_efer());
> This one should be possible to leave in place despite ...
>
>>      device_power_up(SAVED_ALL);
>>  
>>      mcheck_init(&boot_cpu_data, false);
>> -    write_cr4(cr4);
>> +
>> +    /* Restore CR4 from cached value, now MCE is set up. */
>> +    write_cr4(read_cr4());
> ... this change.
>
> Further, while I can't see how the set_in_cr4() in mcheck_init()
> could badly interact with the CR4 writes here, another option
> might be to suppress it when system_state == SYS_STATE_resume
> && c == &boot_cpu_data (or !bsp && c == &boot_cpu_data).
>
>> --- a/xen/arch/x86/acpi/suspend.c
>> +++ b/xen/arch/x86/acpi/suspend.c
>> @@ -23,7 +23,4 @@ void save_rest_processor_state(void)
>>  void restore_rest_processor_state(void)
>>  {
>>      load_system_tables();
>> -
>> -    /* Restore full CR4 (inc MCE) now that the IDT is in place. */
>> -    write_cr4(mmu_cr4_features);
>>  }
> This one should be possible to leave in place despite the other
> changes.

We're continuing to debug in private.  mmu_cr4_features and read_cr4()
are equivalent (as expected), but very different from MINIMAL_CR4, which
is what the trampoline configures, so I think we're suffering a
CR4-related #UD/#GP somewhere in device_power_up() or mcheck_init().

It's not INVPCID.  Trying others.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 16:07:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 16:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2251.6680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONaF-0001fo-5f; Fri, 02 Oct 2020 16:06:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2251.6680; Fri, 02 Oct 2020 16:06:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONaF-0001fh-2f; Fri, 02 Oct 2020 16:06:59 +0000
Received: by outflank-mailman (input) for mailman id 2251;
 Fri, 02 Oct 2020 16:06:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=17n0=DJ=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kONaC-0001fb-Tm
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 16:06:56 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67ec1aad-eedb-4650-b28c-667b475d21ed;
 Fri, 02 Oct 2020 16:06:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601654815;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=kWlD97+J77zfqraf9mQurnjq4WYZN4aMKHgQBs/w+3g=;
  b=L9KoOo44yexTKWWhspGx3mWDy5U9kR7PLBi/YL2esN66A2m/axn/DtNb
   BmGHwAcZvlTDB3+1PomwLlHoD+1wki1RS9Odyf0ta+aMiMKKVEvVXY4Wq
   2dUo9w2E3QhHo6c6h2sKZzuGLNYegqKmrVZP0wCjvzFWSEjOuaS5XX+QW
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 9DTmAvoikkF1lSy5cgeBhWYvScaBL9pRd6NMkUNF6NQJ5LGerhEoKwSwPDX9GUeSnVgZxIlmTG
 CoQg4Duw0oDF4o7zGAUtNdbkaHZUSBYAXGBWJWMviJJXfBZ+4qqOJD5sAG1oj/AIzh2WH6vkh8
 ztNg4/Y7PE2LcgZRKO7eTBlsBD/GgbhSj4t3tcmq+kfWxA/M9GvgsBa648ZpRGW2vaPbeDucdz
 5ydJQMxcoyYduMp7B25mXxxn+VzC3Qmx4txheLHmL+S+LTgvo/dtxmiUMHxowq/k5PwtGb5Xx7
 W0Q=
X-SBRS: None
X-MesageID: 28442812
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,328,1596513600"; 
   d="scan'208";a="28442812"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v1 1/1] tools/ocaml/xenstored: drop the creation of the RO socket
Date: Fri, 2 Oct 2020 17:06:32 +0100
Message-ID: <0cc19ced022e2a302fccf42bf9521c61dd0dca8a.1601654648.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1601654648.git.edvin.torok@citrix.com>
References: <cover.1601654648.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The read-only flag was propagated but ignored, so the RO socket was
essentially equivalent to an RW socket.

C xenstored is dropping its RO socket as well, so drop it from
oxenstored too.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/xenstored/connections.ml |  2 +-
 tools/ocaml/xenstored/define.ml      |  1 -
 tools/ocaml/xenstored/xenstored.ml   | 15 ++++++---------
 3 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/tools/ocaml/xenstored/connections.ml b/tools/ocaml/xenstored/connections.ml
index f02ef6b526..f2c4318c88 100644
--- a/tools/ocaml/xenstored/connections.ml
+++ b/tools/ocaml/xenstored/connections.ml
@@ -31,7 +31,7 @@ let create () = {
 	watches = Trie.create ()
 }
 
-let add_anonymous cons fd _can_write =
+let add_anonymous cons fd =
 	let xbcon = Xenbus.Xb.open_fd fd in
 	let con = Connection.create xbcon None in
 	Hashtbl.add cons.anonymous (Xenbus.Xb.get_fd xbcon) con
diff --git a/tools/ocaml/xenstored/define.ml b/tools/ocaml/xenstored/define.ml
index 2965c08534..ea9e1b7620 100644
--- a/tools/ocaml/xenstored/define.ml
+++ b/tools/ocaml/xenstored/define.ml
@@ -18,7 +18,6 @@ let xenstored_major = 1
 let xenstored_minor = 0
 
 let xs_daemon_socket = Paths.xen_run_stored ^ "/socket"
-let xs_daemon_socket_ro = Paths.xen_run_stored ^ "/socket_ro"
 
 let default_config_dir = Paths.xen_config_dir
 
diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index 5b96f1852a..7e7824761b 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -242,12 +242,11 @@ let _ =
 		()
 	);
 
-	let rw_sock, ro_sock =
+	let rw_sock =
 		if cf.disable_socket then
-			None, None
+			None
 		else
-			Some (Unix.handle_unix_error Utils.create_unix_socket Define.xs_daemon_socket),
-			Some (Unix.handle_unix_error Utils.create_unix_socket Define.xs_daemon_socket_ro)
+			Some (Unix.handle_unix_error Utils.create_unix_socket Define.xs_daemon_socket)
 		in
 
 	if cf.daemonize then
@@ -320,15 +319,14 @@ let _ =
 
 	let spec_fds =
 		(match rw_sock with None -> [] | Some x -> [ x ]) @
-		(match ro_sock with None -> [] | Some x -> [ x ]) @
 		(if cf.domain_init then [ Event.fd eventchn ] else [])
 		in
 
 	let process_special_fds rset =
-		let accept_connection can_write fd =
+		let accept_connection fd =
 			let (cfd, _addr) = Unix.accept fd in
 			debug "new connection through socket";
-			Connections.add_anonymous cons cfd can_write
+			Connections.add_anonymous cons cfd
 		and handle_eventchn _fd =
 			let port = Event.pending eventchn in
 			debug "pending port %d" (Xeneventchn.to_int port);
@@ -348,8 +346,7 @@ let _ =
 			if List.mem fd set then
 				fct fd in
 
-		maybe (fun fd -> do_if_set fd rset (accept_connection true)) rw_sock;
-		maybe (fun fd -> do_if_set fd rset (accept_connection false)) ro_sock;
+		maybe (fun fd -> do_if_set fd rset accept_connection) rw_sock;
 		do_if_set (Event.fd eventchn) rset (handle_eventchn)
 	in
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 16:07:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 16:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2252.6692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONaL-0001hK-FE; Fri, 02 Oct 2020 16:07:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2252.6692; Fri, 02 Oct 2020 16:07:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONaL-0001hC-Bi; Fri, 02 Oct 2020 16:07:05 +0000
Received: by outflank-mailman (input) for mailman id 2252;
 Fri, 02 Oct 2020 16:07:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=17n0=DJ=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1kONaK-0001gv-Gj
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 16:07:04 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e88d9db-6cf0-44f0-a0de-a0f8054ffa4f;
 Fri, 02 Oct 2020 16:07:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601654824;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=YKOodhsXvB02Ask8P/bYJ7DcA/SCje8952finnXygN8=;
  b=Adfy8Q6QmhbZf6CeLc0aLeOqdzp69tYRUiaQMqdAnyGx+/2h2rK5uXao
   nB7myvu/W+LTVtsuPZMDBPOuXIAigLeSrrVNVh2XHWRb5IbYKqf7qY53A
   HpIj0YzTMDR+avBMaZL00WHm3ttMzswMGp24Hn8ab/aze7EPU/6JgwD3J
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jkfLi41Mw6kXks3+zS3hBAiwUsO1vqRJcLQBt+TqcR9jcT9WbWHMf2rugUVKz8A6mfwvWKowMa
 DQdq15tAfMkBufQcT/NVcSharOn/gnOjXH+N6ofDWOnz5plbAJ3nBoYMZ0NfDNB6+0ZkHDdW/5
 GXjF67VG7+dZQxCUvxeFW7McAm1y8bnTY4j9Tjmf/LZv30mEsHkJywLuc6uCcJ/uK8G3vJSx0/
 jV8YEEDdWzRxrSMhOqw0UsYB65lykj5nDM/SoaXRlZjKaQiIH1P5Tpsx5b5NM/7DRSX6guZmn7
 xOs=
X-SBRS: None
X-MesageID: 28508078
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,328,1596513600"; 
   d="scan'208";a="28508078"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v1 0/1] drop RO socket from oxenstored
Date: Fri, 2 Oct 2020 17:06:31 +0100
Message-ID: <cover.1601654648.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

See https://lore.kernel.org/xen-devel/20201002154141.11677-6-jgross@suse.com/T/#u

Edwin Török (1):
  tools/ocaml/xenstored: drop the creation of the RO socket

 tools/ocaml/xenstored/connections.ml |  2 +-
 tools/ocaml/xenstored/define.ml      |  1 -
 tools/ocaml/xenstored/xenstored.ml   | 15 ++++++---------
 3 files changed, 7 insertions(+), 11 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 16:15:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 16:15:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2255.6704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONhw-0002h8-9u; Fri, 02 Oct 2020 16:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2255.6704; Fri, 02 Oct 2020 16:14:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kONhw-0002h1-6w; Fri, 02 Oct 2020 16:14:56 +0000
Received: by outflank-mailman (input) for mailman id 2255;
 Fri, 02 Oct 2020 16:14:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2E3y=DJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kONhv-0002gw-4J
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 16:14:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f41fca36-0233-44e7-8b7d-186726f98aca;
 Fri, 02 Oct 2020 16:14:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AA811AC65;
 Fri,  2 Oct 2020 16:14:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601655292;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=q38OKd37Qumz8X0LRWp9yU1q7XBpoxN1vrVgvyYeBSI=;
	b=IHxFJBrX+0b6MyNm9AoDGxzbfYrY/k59fCC/N+qy3DRBNQIGo/dZwqqcImPwf6QtJzLrPA
	fiOp6ueKhO/DQ6GBC+QG9aHXObq++iHrEpr58wbUMYExa/9+VfoYckvuM2jGqWEg2YvnWi
	0MnWrxNlq6MhgBMq9zVZJQsCfGrprXk=
Subject: Re: [PATCH v1 0/1] drop RO socket from oxenstored
To: =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
 xen-devel@lists.xenproject.org
Cc: Christian Lindig <christian.lindig@citrix.com>,
 David Scott <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <cover.1601654648.git.edvin.torok@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a363b6ed-86e0-2fe8-1753-f29ec55508aa@suse.com>
Date: Fri, 2 Oct 2020 18:14:52 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <cover.1601654648.git.edvin.torok@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.20 18:06, Edwin Török wrote:
> See https://lore.kernel.org/xen-devel/20201002154141.11677-6-jgross@suse.com/T/#u
> 
> Edwin Török (1):
>    tools/ocaml/xenstored: drop the creation of the RO socket
> 
>   tools/ocaml/xenstored/connections.ml |  2 +-
>   tools/ocaml/xenstored/define.ml      |  1 -
>   tools/ocaml/xenstored/xenstored.ml   | 15 ++++++---------
>   3 files changed, 7 insertions(+), 11 deletions(-)
> 

FWIW:

Acked-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 16:36:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 16:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2257.6716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOO33-0004UW-5D; Fri, 02 Oct 2020 16:36:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2257.6716; Fri, 02 Oct 2020 16:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOO33-0004UP-2F; Fri, 02 Oct 2020 16:36:45 +0000
Received: by outflank-mailman (input) for mailman id 2257;
 Fri, 02 Oct 2020 16:36:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=r7zU=DJ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kOO31-0004UK-Vh
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 16:36:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ccc6b92e-473b-425a-8f47-347babf8ec51;
 Fri, 02 Oct 2020 16:36:42 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOO30-0004Lj-23; Fri, 02 Oct 2020 16:36:42 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kOO2z-0004zM-QM; Fri, 02 Oct 2020 16:36:41 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=9hKnUM0EhS5OmQFTKA/pPLX9GRxxho6+EFeFecFz2AI=; b=axTIL75PJ3gewzmv36pQAy7hhT
	O0e1JKXOBHINLMT0NcX6VHftQ9TnLFbfic9eir3RT2zxerZOJrh1FFiLpt54cGM0gMidDY+DBFvPG
	DA47JAF2eIvFcY96b0/qtBcHPGKXHHpVgxcYhJCaNMGeuSilHaxMjCB2mvpjg1Oc9MG8=;
Subject: Re: [PATCH v3] arm,smmu: match start level of page table walk with
 P2M
To: laurentiu.tudor@nxp.com, sstabellini@kernel.org,
 xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com, will@kernel.org
Cc: diana.craciun@nxp.com, anda-alexandra.dorneanu@nxp.com
References: <20201002103344.13015-1-laurentiu.tudor@nxp.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5e64ee7a-436f-03ba-9516-f4d5639b93ba@xen.org>
Date: Fri, 2 Oct 2020 17:36:39 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201002103344.13015-1-laurentiu.tudor@nxp.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 02/10/2020 11:33, laurentiu.tudor@nxp.com wrote:
> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Don't hardcode the lookup start level of the page table walk to 1;
> instead, match the one used in the P2M. This should fix scenarios
> involving the SMMU where the start level is different from 1.
> In order for the SMMU driver to also compile on arm32, move
> P2M_ROOT_LEVEL into the p2m header file (and, while at it, for
> consistency, P2M_ROOT_ORDER too) and use the macro in the SMMU
> driver.
> 
> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 16:38:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 16:38:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2259.6728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOO4w-0004cQ-J7; Fri, 02 Oct 2020 16:38:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2259.6728; Fri, 02 Oct 2020 16:38:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOO4w-0004cJ-FY; Fri, 02 Oct 2020 16:38:42 +0000
Received: by outflank-mailman (input) for mailman id 2259;
 Fri, 02 Oct 2020 16:38:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jYI3=DJ=nxp.com=laurentiu.tudor@srs-us1.protection.inumbo.net>)
 id 1kOO4u-0004cD-QE
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 16:38:41 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.60]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c00ff61-f01a-4993-b7cb-bd9becdfc7ee;
 Fri, 02 Oct 2020 16:38:39 +0000 (UTC)
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com (2603:10a6:803:3::26)
 by VI1PR0402MB3422.eurprd04.prod.outlook.com (2603:10a6:803:6::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3412.23; Fri, 2 Oct
 2020 16:38:37 +0000
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b]) by VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b%7]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 16:38:37 +0000
Received: from [192.168.1.106] (86.123.62.1) by
 AM4PR0202CA0008.eurprd02.prod.outlook.com (2603:10a6:200:89::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32 via Frontend
 Transport; Fri, 2 Oct 2020 16:38:35 +0000
X-Inumbo-ID: 2c00ff61-f01a-4993-b7cb-bd9becdfc7ee
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Wc1cgADXlo22tBLRuaCLGpGhncQwEZcRZNz9jS9XlKjK4LJd0H573zPktcue/KJcigv0iRmnO/fMgadKa9z20l7lZC+3/W4Z9/4ABeRRqBLilsGR6KFwMcmSRnVPQK3Oxkvtj0xU0itaJ+InfH6tXnL89G7JkV8GpCfmwhmjF2mklE/hu4G2m0inxhyCcs51OLDj2FkdIWwrTOtMwWFB2a4Cvi37POykEHCaunOU8OLn8NQxSM+ykFxjBhf6St/6l0FJwkN1qaf3v4pMIlAfkZT5teNk3e6GC9oOR+d6sYJW9vLblXeZOplNqDcZHoPsTwit3uGalu+7czZ8VCo4Cw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mTHHU8JHyLqSMsxY5XKoXaIlmoeMNz1VJwB0pfiKP0A=;
 b=Ci3XbvQF05tYe61B0BIwqBvqqYyBq0+40JWSZewGo5RIGVMCxKUw54Ua0EpvNat5l1UAWxJHNHuwir+XGzVTlcSs02CT0d/JteqHzPn94bKVhjoXmc59N0f4AGPqYHyB7AKdwTp4iQ+zXfAGk926nPaz4Xh7ZnDpx3CHanCe+VWPRLNlHOw3vYhrXiQK87vdLzFgh58f8bLhZ+iUiQ4QuGdFrQFOmOfQGOsOvAcDnWmkOnLJcyr5xAn5uGNpTVCAoFgI1hXbnu1uQ2wqnHGVQZgjcbV3R8x7SxhIu7C5zgsSbB2B0Hi2J87ghFUdegfbkakNM8g+ROU8eI+RAL8jng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mTHHU8JHyLqSMsxY5XKoXaIlmoeMNz1VJwB0pfiKP0A=;
 b=Y3HBvxTL+BUDXDEU1WcK7BAcgvpu+M1T6CYcTccpYIBfC3HR2fNd7z3sf9oll6z4J+JBBI08AJDVfMKv4WMnIA+OpzesZU3Zo3UFVhr3cHOYkKU4VcQ3Lam5Z7tibdxo9oz9ZxRITU5WKYJDA45WjgFjDeJqRmirhyR5CRpVIS4=
Authentication-Results: nxp.com; dkim=none (message not signed)
 header.d=none;nxp.com; dmarc=none action=none header.from=nxp.com;
Subject: Re: [PATCH v3] arm,smmu: match start level of page table walk with
 P2M
To: Julien Grall <julien@xen.org>, sstabellini@kernel.org,
 xen-devel@lists.xenproject.org, Volodymyr_Babchuk@epam.com, will@kernel.org
Cc: diana.craciun@nxp.com, anda-alexandra.dorneanu@nxp.com
References: <20201002103344.13015-1-laurentiu.tudor@nxp.com>
 <5e64ee7a-436f-03ba-9516-f4d5639b93ba@xen.org>
From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Message-ID: <7aa85c5d-2853-deda-3929-cf6e65ea4d1b@nxp.com>
Date: Fri, 2 Oct 2020 19:38:33 +0300
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
In-Reply-To: <5e64ee7a-436f-03ba-9516-f4d5639b93ba@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM4PR0202CA0008.eurprd02.prod.outlook.com
 (2603:10a6:200:89::18) To VI1PR0402MB3405.eurprd04.prod.outlook.com
 (2603:10a6:803:3::26)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-Originating-IP: [86.123.62.1]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c87482d6-c5a7-4a0a-4e00-08d866f19c44
X-MS-TrafficTypeDiagnostic: VI1PR0402MB3422:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0402MB3422D5287939999D388FF318EC310@VI1PR0402MB3422.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	NuMq/uAe2kN2INZvykWPpVUQUsY1C0VRyr3b4BGMkajVR9umuc3RXp9/WhM1BOt7Ac8hYsU4chAOKfbSQwwMOwmqiNGIEppiKJrLV6cWwdEUJVhlaDklUKrCm9Vgw8dwAqR0P9PLijh6TIpRdYM8qtO1Occ16uE5PCBqdeylu67vAnRs5JCVrP1ECi/L+Fkb5A+4wPu3WZEdzMqYHwxCFr58gIh3aytzK3De9O+FV6MZ29JGbGyKaTjWIfa7uPvbSSgBM7xqrWhk4Zup2mf0V6VYcio/zRqae7E8DMyuinGWq/ayVPtmJwSTFfUoYLbMRWUyxQBjydFIZPq1zyYSMMw2fpBhZmFxZLT2sGhY06d+0N/Oe7jWyjdcTS3g6ReTH3owLw/1KZ3oRbsheYlH48NV3CRfHLx5gAO/CI/4CrBIQdk39lsJ2fK0jqytYD08
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR0402MB3405.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(376002)(346002)(366004)(39860400002)(8676002)(53546011)(86362001)(4744005)(26005)(66476007)(4326008)(478600001)(316002)(16576012)(31686004)(66556008)(8936002)(6486002)(83380400001)(31696002)(5660300002)(44832011)(52116002)(36756003)(956004)(186003)(2906002)(66946007)(16526019)(2616005)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	hSBXJMaHvNelOLgxUzVP/7dPbilCYfjauzXReE/bwyOCexFQfMKQ1xXCoQaVGgxXZxUBHF30WqRJ2Bu3mHfkJjlqobmCDnM5g//3nxMUZyhba+f0j3vh9Li3D4dzikkx0Syk8xQC2+9EiGF/Vjm/qfGkKn+N2JPIl8pcOBPCSA/UcF4zUjyEbCwpK+xRbNVZlq1Qnn0wc9+liErmj3CRyGLXLML/qL1e/UA01PN6gTWMlFfE4odf2vxZTSYPGoDPZcCQFQWbDFssd2oMkG0fOJTg+IWhEckhYhNX9bnimyUZ1Z+D/AqMG6y7mP9Tlpb3VCmJLPXU//nTVCYD40w6tNHToC93TPwTlYYHJHMj1Y5QDl2qtyqZN6cmEodLAget9cGJ9lWa1rkfQhPIs7PZeCtBlfmO3Siehx1DFLg0RlYPGdudXlg4NbjKvAgBbyGcnF2j0g3prljMYRllGN53Sw7yNBeQ+HgCbZYsHb4FSOXbz+2nxGOBcbpsBrCm9+GnfPMeA4eNVI+WTyOOLKmF1hAANO+IDGLcuP810nRjAX7/LJdlgWvqPxH3dfOyd9cUvhNsd1D4abcsYgS3IhHvnY9aS/zXieNZN56NBgzzt4zA1KKzE+7tWT0ZcwJB1TDjKtOEZQybqPi7iEWHI0cHQQ==
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c87482d6-c5a7-4a0a-4e00-08d866f19c44
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0402MB3405.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 16:38:37.0007
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Em9+gfekvWrqnBR4act4vZhV1ug9BVSB1h9A5yKHI9xHLr1Soph/YoFzZpmgXDohyyZsEMDN1MM91Xeztem6xw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0402MB3422



On 10/2/2020 7:36 PM, Julien Grall wrote:
> Hi,
> 
> On 02/10/2020 11:33, laurentiu.tudor@nxp.com wrote:
>> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>
>> Don't hardcode the lookup start level of the page table walk to 1;
>> instead, match the one used in the P2M. This should fix scenarios
>> involving the SMMU where the start level is different from 1.
>> In order for the SMMU driver to also compile on arm32, move
>> P2M_ROOT_LEVEL into the p2m header file (and, while at it,
>> P2M_ROOT_ORDER for consistency) and use the macro in the SMMU
>> driver.
>>
>> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>
> 

Thanks, Julien!

--
Best Regards, Laurentiu
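
For context on the patch discussed above: with a 4KB granule, each
translation table level resolves 9 bits of address, so the number of walk
levels (and hence the lookup start level) follows from the IPA width. A
minimal sketch of that relationship — simplified and illustrative only; the
real computation in Xen's P2M setup also accounts for concatenated
root-level tables:

```python
GRANULE_BITS = 12    # 4KB pages: 12 bits of page offset
BITS_PER_LEVEL = 9   # 512 entries per translation table

def start_level(ipa_bits):
    """Lookup start level of a stage-2 walk covering ipa_bits of IPA.

    Level 0 is the top of a full 4-level walk. This is a sketch: it
    ignores Xen's use of concatenated root tables at the top level.
    """
    levels = -(-(ipa_bits - GRANULE_BITS) // BITS_PER_LEVEL)  # ceil div
    return 4 - levels

# A 48-bit IPA needs 4 levels, so the walk starts at level 0, while a
# 39-bit IPA fits in 3 levels and starts at level 1 -- which is why a
# hardcoded start level of 1 breaks larger IPA configurations.
```

This only illustrates why the SMMU driver has to take its start level from
the P2M rather than assume level 1.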


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 17:02:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 17:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2266.6744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOORb-0007Ah-Pb; Fri, 02 Oct 2020 17:02:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2266.6744; Fri, 02 Oct 2020 17:02:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOORb-0007Aa-MH; Fri, 02 Oct 2020 17:02:07 +0000
Received: by outflank-mailman (input) for mailman id 2266;
 Fri, 02 Oct 2020 17:02:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOORb-0007AV-0b
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 17:02:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3246d4b-9581-471b-8679-15ffe9d4dddf;
 Fri, 02 Oct 2020 17:02:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOORY-0004uM-ND; Fri, 02 Oct 2020 17:02:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOORY-0008OE-EM; Fri, 02 Oct 2020 17:02:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOORY-0002Ge-Dr; Fri, 02 Oct 2020 17:02:04 +0000
X-Inumbo-ID: d3246d4b-9581-471b-8679-15ffe9d4dddf
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1EG3lUTtJ3/+gY0btHjrKq7llknwtdBK7mOaTpkfS8w=; b=174p8aC2LqoPT16z3cSoDJ1nXs
	QBuZfrpxyEcwEDDqmVEG0azrq661oJCqyZsIBJPpWTOlei4r1Lx9nN23Ib4sy6YaoMqkJ3/nH+9ey
	I0KCA2eADH/dmqW8kNqNTjAL8V8L3kNi5VZmK0XdaI8k8ZHRqRLyjaMzVnhp7RPkf4V4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155327-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155327: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=59b27f360e3d9dc0378c1288e67a91fa41a77158
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 17:02:04 +0000

flight 155327 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155327/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 155128

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  59b27f360e3d9dc0378c1288e67a91fa41a77158
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    2 days
Failing since        155144  2020-09-30 16:01:24 Z    2 days   15 attempts
Testing same since   155310  2020-10-02 08:01:29 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 387 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 17:04:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 17:04:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2270.6760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOTu-0007Kn-7j; Fri, 02 Oct 2020 17:04:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2270.6760; Fri, 02 Oct 2020 17:04:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOTu-0007Kg-4c; Fri, 02 Oct 2020 17:04:30 +0000
Received: by outflank-mailman (input) for mailman id 2270;
 Fri, 02 Oct 2020 17:04:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OBi8=DJ=redhat.com=dgilbert@srs-us1.protection.inumbo.net>)
 id 1kOOTs-0007Kb-Md
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 17:04:28 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c79bbc2d-e2b6-4667-a841-b5f3a534b2e5;
 Fri, 02 Oct 2020 17:04:27 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-454-pVeLMFzAOZyKlcgqQOAWmg-1; Fri, 02 Oct 2020 13:04:25 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id EE963801AC2;
 Fri,  2 Oct 2020 17:04:23 +0000 (UTC)
Received: from work-vm (ovpn-114-192.ams2.redhat.com [10.36.114.192])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id D311E73669;
 Fri,  2 Oct 2020 17:04:14 +0000 (UTC)
X-Inumbo-ID: c79bbc2d-e2b6-4667-a841-b5f3a534b2e5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601658267;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Xgj0hpVcffvhbEeRWVxqdaYvCtQRdJemk3XrZAdr0Dg=;
	b=C2VSKjEhIEUbOICaFQk2+RdBHLC6stFA0Xgc9L1mnS2Np8Ksnt483YAwEfS7yp6NDtQCfT
	nFQq9abJuwTdD8ZvLPiz18iqLa4qKekb5xfEmdE0ublgmsk/vUUHsolSfz8f5juSVYRYe9
	q91rqQ3zWlkh1Wvop7NNXOtMYv0Q6I0=
X-MC-Unique: pVeLMFzAOZyKlcgqQOAWmg-1
Date: Fri, 2 Oct 2020 18:04:12 +0100
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
To: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Richard Henderson <rth@twiddle.net>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	xen-devel@lists.xenproject.org, Gerd Hoffmann <kraxel@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Juan Quintela <quintela@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Eric Blake <eblake@redhat.com>, Paul Durrant <paul@xen.org>,
	Peter Maydell <peter.maydell@linaro.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>
Subject: Re: [PATCH 5/5] qapi: Restrict Xen migration commands to
 migration.json
Message-ID: <20201002170412.GJ3286@work-vm>
References: <20201002133923.1716645-1-philmd@redhat.com>
 <20201002133923.1716645-6-philmd@redhat.com>
MIME-Version: 1.0
In-Reply-To: <20201002133923.1716645-6-philmd@redhat.com>
User-Agent: Mutt/1.14.6 (2020-07-11)
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=dgilbert@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

* Philippe Mathieu-Daudé (philmd@redhat.com) wrote:
> Restricting the xen-set-global-dirty-log and xen-load-devices-state
> commands to migration.json pulls slightly less QAPI-generated code
> into user-mode and tools.
> 
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>

Looks OK; for migration


Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>

> ---
>  qapi/migration.json    | 41 +++++++++++++++++++++++++++++++++++++++++
>  qapi/misc.json         | 41 -----------------------------------------
>  accel/stubs/xen-stub.c |  2 +-
>  hw/i386/xen/xen-hvm.c  |  2 +-
>  migration/savevm.c     |  1 -
>  5 files changed, 43 insertions(+), 44 deletions(-)
> 
> diff --git a/qapi/migration.json b/qapi/migration.json
> index 7f5e6fd681..cb30f4c729 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -1551,6 +1551,47 @@
>  { 'command': 'xen-save-devices-state',
>    'data': {'filename': 'str', '*live':'bool' } }
>  
> +##
> +# @xen-set-global-dirty-log:
> +#
> +# Enable or disable the global dirty log mode.
> +#
> +# @enable: true to enable, false to disable.
> +#
> +# Returns: nothing
> +#
> +# Since: 1.3
> +#
> +# Example:
> +#
> +# -> { "execute": "xen-set-global-dirty-log",
> +#      "arguments": { "enable": true } }
> +# <- { "return": {} }
> +#
> +##
> +{ 'command': 'xen-set-global-dirty-log', 'data': { 'enable': 'bool' } }
> +
> +##
> +# @xen-load-devices-state:
> +#
> +# Load the state of all devices from file. The RAM and the block devices
> +# of the VM are not loaded by this command.
> +#
> +# @filename: the file to load the state of the devices from as binary
> +#            data. See xen-save-devices-state.txt for a description of the binary
> +#            format.
> +#
> +# Since: 2.7
> +#
> +# Example:
> +#
> +# -> { "execute": "xen-load-devices-state",
> +#      "arguments": { "filename": "/tmp/resume" } }
> +# <- { "return": {} }
> +#
> +##
> +{ 'command': 'xen-load-devices-state', 'data': {'filename': 'str'} }
> +
>  ##
>  # @xen-set-replication:
>  #
> diff --git a/qapi/misc.json b/qapi/misc.json
> index 9813893269..afe936b45b 100644
> --- a/qapi/misc.json
> +++ b/qapi/misc.json
> @@ -287,26 +287,6 @@
>    'data': {'device': 'str', 'target': 'str', '*arg': 'str'},
>    'features': [ 'deprecated' ] }
>  
> -##
> -# @xen-set-global-dirty-log:
> -#
> -# Enable or disable the global dirty log mode.
> -#
> -# @enable: true to enable, false to disable.
> -#
> -# Returns: nothing
> -#
> -# Since: 1.3
> -#
> -# Example:
> -#
> -# -> { "execute": "xen-set-global-dirty-log",
> -#      "arguments": { "enable": true } }
> -# <- { "return": {} }
> -#
> -##
> -{ 'command': 'xen-set-global-dirty-log', 'data': { 'enable': 'bool' } }
> -
>  ##
>  # @getfd:
>  #
> @@ -606,24 +586,3 @@
>  ##
>  { 'enum': 'ReplayMode',
>    'data': [ 'none', 'record', 'play' ] }
> -
> -##
> -# @xen-load-devices-state:
> -#
> -# Load the state of all devices from file. The RAM and the block devices
> -# of the VM are not loaded by this command.
> -#
> -# @filename: the file to load the state of the devices from as binary
> -#            data. See xen-save-devices-state.txt for a description of the binary
> -#            format.
> -#
> -# Since: 2.7
> -#
> -# Example:
> -#
> -# -> { "execute": "xen-load-devices-state",
> -#      "arguments": { "filename": "/tmp/resume" } }
> -# <- { "return": {} }
> -#
> -##
> -{ 'command': 'xen-load-devices-state', 'data': {'filename': 'str'} }
> diff --git a/accel/stubs/xen-stub.c b/accel/stubs/xen-stub.c
> index 7ba0b697f4..7054965c48 100644
> --- a/accel/stubs/xen-stub.c
> +++ b/accel/stubs/xen-stub.c
> @@ -7,7 +7,7 @@
>  
>  #include "qemu/osdep.h"
>  #include "sysemu/xen.h"
> -#include "qapi/qapi-commands-misc.h"
> +#include "qapi/qapi-commands-migration.h"
>  
>  bool xen_allowed;
>  
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index f3ababf33b..9519c33c09 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -24,7 +24,7 @@
>  #include "hw/xen/xen-bus.h"
>  #include "hw/xen/xen-x86.h"
>  #include "qapi/error.h"
> -#include "qapi/qapi-commands-misc.h"
> +#include "qapi/qapi-commands-migration.h"
>  #include "qemu/error-report.h"
>  #include "qemu/main-loop.h"
>  #include "qemu/range.h"
> diff --git a/migration/savevm.c b/migration/savevm.c
> index 34e4b71052..1fdf3f76c2 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -42,7 +42,6 @@
>  #include "postcopy-ram.h"
>  #include "qapi/error.h"
>  #include "qapi/qapi-commands-migration.h"
> -#include "qapi/qapi-commands-misc.h"
>  #include "qapi/qmp/qerror.h"
>  #include "qemu/error-report.h"
>  #include "sysemu/cpus.h"
> -- 
> 2.26.2
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK
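
The QMP examples in the schema above can be reproduced with a tiny payload
builder. This is a hypothetical helper, not part of QEMU, using only the
Python standard library:

```python
import json

def qmp_command(name, **arguments):
    """Build the JSON wire payload for a QMP command, e.g. the
    xen-set-global-dirty-log example shown in the schema above.

    Underscores in keyword names are mapped to dashes, since QMP
    argument names may contain dashes (a convention of this helper,
    not of QMP itself).
    """
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = {k.replace("_", "-"): v
                            for k, v in arguments.items()}
    return json.dumps(cmd)

print(qmp_command("xen-set-global-dirty-log", enable=True))
# -> {"execute": "xen-set-global-dirty-log", "arguments": {"enable": true}}
```

On a live QEMU instance the resulting string would be sent over the QMP
socket after the capabilities handshake; here it merely mirrors the
documented request shape.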



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 17:10:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 17:10:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2274.6775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOZw-0008EV-WF; Fri, 02 Oct 2020 17:10:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2274.6775; Fri, 02 Oct 2020 17:10:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOZw-0008EO-TG; Fri, 02 Oct 2020 17:10:44 +0000
Received: by outflank-mailman (input) for mailman id 2274;
 Fri, 02 Oct 2020 17:10:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOOZv-0008DI-3d
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 17:10:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ce581ec-2c1c-4a24-abe2-e352fe5a9911;
 Fri, 02 Oct 2020 17:10:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOOZn-00055V-EZ; Fri, 02 Oct 2020 17:10:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOOZn-0000Sb-0b; Fri, 02 Oct 2020 17:10:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOOZn-0005gQ-09; Fri, 02 Oct 2020 17:10:35 +0000
X-Inumbo-ID: 1ce581ec-2c1c-4a24-abe2-e352fe5a9911
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/UxcopIS/1YR0A/lbESgQMfLXRRia8VoukPzrFZW8HQ=; b=ydCwyU4aQuJ58o1UAgfQAnYqGH
	rgmAs8KRDNGuZqiUd0e2fDMKD6zhJVZQ6ixou0uq6pJETfbsbQ/WIOEuVxE10yaiKbUX6N5bQ8yYz
	nhJxioadHlZrZgvC7lbwkR7Po8iwS0l+WKNBBa6Dj6lnF3Sw900kbpiK0Ktc69S+dXbg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155211-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155211: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
X-Osstest-Versions-That:
    xen=d4ed1d4132f5825a795d5a78505811ecd2717b5e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 17:10:35 +0000

flight 155211 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155211/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154611
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154611
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154611
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154611
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 154611
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 154611
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 154611
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 154611
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 154611
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 154611
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 154611
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 154611
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 154611
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f
baseline version:
 xen                  d4ed1d4132f5825a795d5a78505811ecd2717b5e

Last test of basis   154611  2020-09-22 11:26:05 Z   10 days
Failing since        154634  2020-09-23 05:59:56 Z    9 days    4 attempts
Testing same since   155211  2020-10-01 08:26:48 Z    1 day      1 attempt

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 951 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 17:25:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 17:25:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2279.6792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOoD-0000ri-IO; Fri, 02 Oct 2020 17:25:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2279.6792; Fri, 02 Oct 2020 17:25:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOoD-0000rb-Eq; Fri, 02 Oct 2020 17:25:29 +0000
Received: by outflank-mailman (input) for mailman id 2279;
 Fri, 02 Oct 2020 17:25:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uixu=DJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kOOoD-0000rW-0D
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 17:25:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f52e129c-f4cd-48e4-9e33-325491c69d40;
 Fri, 02 Oct 2020 17:25:27 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id EF06820679;
 Fri,  2 Oct 2020 17:25:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601659527;
	bh=Jld5reVxu3BbDu07KuraQqLBn984l1q0gk9XV7HZc1g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=w2oRU3vOC2ZWubOmM+c7H0qCU+yygAMcwmaen0/NbpYcVQeD9Dvg/AuUpPLxwO4h8
	 UuFAQny41NrQhAKaXQO4kptWBRSafll6Rr8e0gQJcgMQIu9BM/kRmAnCo0Oala3nsC
	 fXHMd0vbgiYBIY/3W0Xulg53LhAqsKL5UxWfnrcc=
Date: Fri, 2 Oct 2020 10:25:26 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Laurentiu Tudor <laurentiu.tudor@nxp.com>
cc: sstabellini@kernel.org, julien@xen.org, xen-devel@lists.xenproject.org, 
    Volodymyr_Babchuk@epam.com, will@kernel.org, diana.craciun@nxp.com, 
    anda-alexandra.dorneanu@nxp.com
Subject: Re: [PATCH v3] arm,smmu: match start level of page table walk with
 P2M
In-Reply-To: <20201002103344.13015-1-laurentiu.tudor@nxp.com>
Message-ID: <alpine.DEB.2.21.2010021025170.10908@sstabellini-ThinkPad-T480s>
References: <20201002103344.13015-1-laurentiu.tudor@nxp.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 2 Oct 2020, laurentiu.tudor@nxp.com wrote:
> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Don't hardcode the lookup start level of the page table walk to 1;
> instead, match the level used in the P2M. This should fix scenarios
> involving the SMMU where the start level is different from 1.
> In order for the SMMU driver to also compile on arm32, move
> P2M_ROOT_LEVEL into the p2m header file (and, for consistency,
> P2M_ROOT_ORDER as well) and use the macro in the smmu driver.
> 
> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
> Changes in v3:
>  - also export 'p2m_root_order'
>  - moved variables in their rightful #ifdef block
> 
> Changes in v2:
>  - made smmu driver compile on arm32
> 
>  xen/arch/arm/p2m.c                 |  9 ++-------
>  xen/drivers/passthrough/arm/smmu.c |  2 +-
>  xen/include/asm-arm/p2m.h          | 11 +++++++++++
>  3 files changed, 14 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index ce59f2b503..4eeb867ca1 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -17,17 +17,12 @@
>  #define INVALID_VMID 0 /* VMID 0 is reserved */
>  
>  #ifdef CONFIG_ARM_64
> -static unsigned int __read_mostly p2m_root_order;
> -static unsigned int __read_mostly p2m_root_level;
> -#define P2M_ROOT_ORDER    p2m_root_order
> -#define P2M_ROOT_LEVEL p2m_root_level
> +unsigned int __read_mostly p2m_root_order;
> +unsigned int __read_mostly p2m_root_level;
>  static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>  /* VMID is by default 8 bit width on AArch64 */
>  #define MAX_VMID       max_vmid
>  #else
> -/* First level P2M is always 2 consecutive pages */
> -#define P2M_ROOT_LEVEL 1
> -#define P2M_ROOT_ORDER    1
>  /* VMID is always 8 bit width on AArch32 */
>  #define MAX_VMID        MAX_VMID_8_BIT
>  #endif
> diff --git a/xen/drivers/passthrough/arm/smmu.c b/xen/drivers/passthrough/arm/smmu.c
> index 94662a8501..4ba6d3ab94 100644
> --- a/xen/drivers/passthrough/arm/smmu.c
> +++ b/xen/drivers/passthrough/arm/smmu.c
> @@ -1152,7 +1152,7 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain)
>  	      (TTBCR_RGN_WBWA << TTBCR_IRGN0_SHIFT);
>  
>  	if (!stage1)
> -		reg |= (TTBCR_SL0_LVL_1 << TTBCR_SL0_SHIFT);
> +		reg |= (2 - P2M_ROOT_LEVEL) << TTBCR_SL0_SHIFT;
>  
>  	writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR);
>  
> diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
> index 5fdb6e8183..28ca9a838e 100644
> --- a/xen/include/asm-arm/p2m.h
> +++ b/xen/include/asm-arm/p2m.h
> @@ -13,6 +13,17 @@
>  /* Holds the bit size of IPAs in p2m tables.  */
>  extern unsigned int p2m_ipa_bits;
>  
> +#ifdef CONFIG_ARM_64
> +extern unsigned int p2m_root_order;
> +extern unsigned int p2m_root_level;
> +#define P2M_ROOT_ORDER    p2m_root_order
> +#define P2M_ROOT_LEVEL p2m_root_level
> +#else
> +/* First level P2M is always 2 consecutive pages */
> +#define P2M_ROOT_ORDER    1
> +#define P2M_ROOT_LEVEL 1
> +#endif
> +
>  struct domain;
>  
>  extern void memory_type_changed(struct domain *);
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 17:33:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 17:33:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2284.6812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOvu-0001ml-FA; Fri, 02 Oct 2020 17:33:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2284.6812; Fri, 02 Oct 2020 17:33:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOvu-0001me-Be; Fri, 02 Oct 2020 17:33:26 +0000
Received: by outflank-mailman (input) for mailman id 2284;
 Fri, 02 Oct 2020 17:33:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOOvt-0001mY-KR
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 17:33:25 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd031275-61be-42a9-a50c-9373a991b7fc;
 Fri, 02 Oct 2020 17:33:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601660004;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=5ijSWnf0ZluClQuPI9UzrQI0WNYZ7aTO7Rm1qkH0l7g=;
  b=QAcJKzMxsbPE7YscW9TiVgPBFCyrSiiEjyakN06KX9Z4hSp1JcIgIJ7x
   p8HZYqjXT/bmaX/r085zgCqJOo14pkfMhzyeag6NFRS7jPN+1OnMt7Rgi
   DR/WVoZDaUjqmtgwlU04VWyd5pjp+qiD27FjWEirHPJf5fUzJeX6nq7zY
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: iSvM/BbexMhPgj/TdHXXgsohZGzUu3Yb9HHZJr8+Lavk26K0iouKVKdTETpvQTe3RG8+B1gho5
 toQH15h7EEi+/+3xIvCOXltWI5+7cerioS5JgIW90++y1OsA7Ervp4z7brfjzEtkhimels+61i
 EHLKPF9tnRX6QdLiKKoN+yAs7le2gULZ3CTKX7NxQBvwHwHnxkU3afJM1Nt4sxmMt6rwZxKn5F
 h2V61e6XWR5RK2gKHYjSCV6FmvXca6+reCXvjORk5U51F9VLP9sBoTd9EYpPiMXebx2CtV6UQg
 gIk=
X-SBRS: None
X-MesageID: 28183010
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,328,1596513600"; 
   d="scan'208";a="28183010"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH] tools/libxl: Work around libvirt breakage in libxl__cpuid_legacy()
Date: Fri, 2 Oct 2020 18:32:59 +0100
Message-ID: <20201002173259.19702-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain

OSSTest reports that libvirt is reliably regressed.

The only plausible cause is a side effect of using libxl_defbool_val(), which
can only be the assert() within.  Unfortunately, libvirt actually crashes in
__vfscanf_internal() while presumably trying to render some form of error.

Opencode the check without the assert() to unblock staging, while we
investigate what is going on with libvirt.  This will want reverting at some
point in the future.

Not-really-fixes: bfcc97c08c ("tools/cpuid: Plumb nested_virt down into xc_cpuid_apply_policy()"), which reliably breaks libvirt.
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Ian Jackson <iwj@xenproject.org>
CC: Wei Liu <wl@xen.org>
CC: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_cpuid.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 08e85dcffb..16c077cceb 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -422,7 +422,15 @@ void libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
 {
     bool pae = true;
     bool itsc;
-    bool nested_virt = libxl_defbool_val(info->nested_hvm);
+
+    /*
+     * Gross hack.  Using libxl_defbool_val() here causes libvirt to crash in
+     * __vfscanf_internal(), which is probably collateral damage from a side
+     * effect of the assert().
+     *
+     * Unblock things for now by opencoding without the assert.
+     */
+    bool nested_virt = info->nested_hvm.val > 0;
 
     /*
      * For PV guests, PAE is Xen-controlled (it is the 'p' that differentiates
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 17:36:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 17:36:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2286.6824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOz6-0001yS-0Y; Fri, 02 Oct 2020 17:36:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2286.6824; Fri, 02 Oct 2020 17:36:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOOz5-0001yL-Ta; Fri, 02 Oct 2020 17:36:43 +0000
Received: by outflank-mailman (input) for mailman id 2286;
 Fri, 02 Oct 2020 17:36:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jYI3=DJ=nxp.com=laurentiu.tudor@srs-us1.protection.inumbo.net>)
 id 1kOOz4-0001yG-UE
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 17:36:42 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0c::60d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 830e3b57-e76f-47c1-b478-1aa4f45fdc6c;
 Fri, 02 Oct 2020 17:36:41 +0000 (UTC)
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com (2603:10a6:803:3::26)
 by VI1PR04MB4238.eurprd04.prod.outlook.com (2603:10a6:803:4e::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Fri, 2 Oct
 2020 17:36:40 +0000
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b]) by VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b%7]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 17:36:39 +0000
Received: from [192.168.1.106] (86.123.62.1) by
 AM0PR04CA0013.eurprd04.prod.outlook.com (2603:10a6:208:122::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35 via Frontend
 Transport; Fri, 2 Oct 2020 17:36:37 +0000
X-Inumbo-ID: 830e3b57-e76f-47c1-b478-1aa4f45fdc6c
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fgUTESy/kXFaZC2Ramf0ExnaEg9WE7ufSDcMIgdRvPm6dCeWQu6/01a4AnJwjeQ/5bFAwU0hCMYHGYmnMe18vcUQQmwhJCy/p1fVKaWcQ1+3gJ+lnv7dmNwcY5AyUC/kVbAPcnCdyhSbiElKfWf4JIMm6HcwrzwY67OVx7QZGVzGWl43zzfZtvyuh2NLNkAc0PFMwZhz7sNcl9UKU4J+t7OBR46zGycZ8k8x3qJvrakj68IviYlrXlPELYzJ9CR7PJF9vp6L6IEKosFYUKpxwTl7H53UtBAu4awpFN1TQu16z9wWP5LHa6ATw/fSG7IU7lky4YP+YWk80j5iTZSX3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5KmmPazVlfHshnJbIHEva0ACD450kflohBpRredIuHc=;
 b=OUpFxSdIUlAZid6AmXxEWgEb6qdLhrTomwfN4lxK7cgUZ3BSoV0kO3zPApq+aVt8WEQmTTJU9fCB9UdQRXmKusnbfI+0Vot043LdFvPN78CfPAw8weWXGqr6wSDovTXgyAe8ixVJjsm8fBGLY/AetGpSEYzBhw5LCDP/hH2sN9eBDn7lCblOFDM07f4qxKBcfQY7al0XBjb8+E/ULLtVrCugn7nEX9zY93dpr/vK0AwhUv7mQnBAE7eOOyusRgoR1OYejN8LkrDsYFsEBlbfyryF59wW2L9TFuhkIk4mbMDzPQyQf/p/UKH/ccGuVoERc3n/l1BhJ1p0IDCYBGY5Ew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5KmmPazVlfHshnJbIHEva0ACD450kflohBpRredIuHc=;
 b=FtZLb3jqt7XDEM3BVOcXFI5Efd0Y6OlJHq2FYI+qVm5HcwAP34pn9QDEme2/1zh4Xsa8HKKjgSLjbUPBlAbiFtXp9NgljXwabPKL1izgPzdFmKDVuP3fSjY1rVa+yMgDF5b4TBAhBA42ubJWsX2QUKCgsVP6UXX3ecxlVnT+/Iw=
Authentication-Results: nxp.com; dkim=none (message not signed)
 header.d=none;nxp.com; dmarc=none action=none header.from=nxp.com;
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com (2603:10a6:803:3::26)
 by VI1PR04MB4238.eurprd04.prod.outlook.com (2603:10a6:803:4e::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Fri, 2 Oct
 2020 17:36:40 +0000
Received: from VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b]) by VI1PR0402MB3405.eurprd04.prod.outlook.com
 ([fe80::f960:c16d:16a5:6e7b%7]) with mapi id 15.20.3412.029; Fri, 2 Oct 2020
 17:36:39 +0000
Subject: Re: [PATCH v3] arm,smmu: match start level of page table walk with
 P2M
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: julien@xen.org, xen-devel@lists.xenproject.org,
 Volodymyr_Babchuk@epam.com, will@kernel.org, diana.craciun@nxp.com,
 anda-alexandra.dorneanu@nxp.com
References: <20201002103344.13015-1-laurentiu.tudor@nxp.com>
 <alpine.DEB.2.21.2010021025170.10908@sstabellini-ThinkPad-T480s>
From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Message-ID: <c26d6cdb-93b7-c184-793f-8631b165716e@nxp.com>
Date: Fri, 2 Oct 2020 20:36:35 +0300
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
In-Reply-To: <alpine.DEB.2.21.2010021025170.10908@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: AM0PR04CA0013.eurprd04.prod.outlook.com
 (2603:10a6:208:122::26) To VI1PR0402MB3405.eurprd04.prod.outlook.com
 (2603:10a6:803:3::26)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.1.106] (86.123.62.1) by AM0PR04CA0013.eurprd04.prod.outlook.com (2603:10a6:208:122::26) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35 via Frontend Transport; Fri, 2 Oct 2020 17:36:37 +0000
X-Originating-IP: [86.123.62.1]
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 29c97d47-0581-45bc-92f7-08d866f9b754
X-MS-TrafficTypeDiagnostic: VI1PR04MB4238:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR04MB42380D22523C0FEABAE5BE1DEC310@VI1PR04MB4238.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	czG1IPHmPJcpA9E4eq8bbzajn7CmLziXAVGTWf96CtnuTn26MKZf97loV+UqsZTvPHeZOYy3EJGa8Y9ozzBdckoSoO8oKtCn4uIpxBcVBPpb97UZHw9efGdOxGcrD4DPeCuH8GDiWP95XWKk/+X37iBWiEjdcZXDWXvClXStQ9bZIWzJHW/dJhrs1uNu9QvxIfWqECHDQrZ5UTeiauwq8IHsPcRlDRpSFVu0Olahtx4reUeCvXuIbTz78pqaf5yQdE1Us1sHfqbwg8P9lw5vT+NuuRCThHlwF7leyqHZfzNuv/OSY7Urjw45SJDrKVXTIvK5E7lJzX6unhwTt/9C6PuHCtD+Zxz5zVm4VU2QDP3Z2EXrtwRU13NXCsr8UUqr
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR0402MB3405.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(39850400004)(396003)(366004)(376002)(16526019)(186003)(36756003)(2906002)(2616005)(8676002)(6486002)(6916009)(4326008)(956004)(26005)(8936002)(86362001)(4744005)(31696002)(16576012)(53546011)(44832011)(52116002)(5660300002)(478600001)(66556008)(66946007)(66476007)(83380400001)(31686004)(316002)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	BmMpa6CouGmPIG9cLN0eoJKIEAYzF9yjsOfHi6eoST/dacnIlkrtIoli7qC+nx8td2w1SA5KxDYV8xjyg5PY/fCp66BL2wftEGMsOuSDfc3Q9QFHgFtsHKC8+YrGX0b5pI1P+pszAi5F4lpViLdWzDIFh6WY3KuavDgzyRT7vhi8MrNAavnDiupcl40XBZJk4UTgbhty3PR3OeGOj2wIXxBsWvzJDIOgcHAizerIzhDuwW9Q012LPAHtqnobxgrwITAUiqgn0nhejvtM/9AoOZm4zxtFCZgCr9Qi2pWkV2Uix3EQ6Fe09CSqb+fBlItwpgekwqZPAkiWTGqyOeUZpJxR16m5kllJR2aTO8R3AtelNZRS8AljUMKmZTLhQMG8+Btc5h9vmaXux/FUj+xC92rH/fUAGVIRDM+D2JFcvAf4lHBNPL7ARxERkOtzUsFmlwf3FTz58NpXoX9G4LG/lDQJaRBnCMX59uJ+9xrERmXZlgk+D5kTehLJGBNpNITTQYr5ehczqs1IQGMu6DXIustpip0mlH4E2zcpWZb6xO0KUoVl1jHSCc2noEs7mSPFRZDZ9diMEUaTH6E2dZmn4PdMYi1E2e94aBEcnP+X9GUbGrz3nOtiQpnD3LELNMStdFIBiIe4Via61sopi2n2kw==
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 29c97d47-0581-45bc-92f7-08d866f9b754
X-MS-Exchange-CrossTenant-AuthSource: VI1PR0402MB3405.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 17:36:39.7149
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1clifj5pD/OU/WlE/T4OThXtm0wvSynQOS3akAizI81dGKUNpF96yc8LTVzAZymROFYb1CcXui+oxLB/bHW1zA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR04MB4238


On 10/2/2020 8:25 PM, Stefano Stabellini wrote:
> On Fri, 2 Oct 2020, laurentiu.tudor@nxp.com wrote:
>> From: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>
>> Don't hardcode the lookup start level of the page table walk to 1
>> and instead match the one used in P2M. This should fix scenarios
>> involving SMMU where the start level is different than 1.
>> In order for the SMMU driver to also compile on arm32 move the
>> P2M_ROOT_LEVEL in the p2m header file (while at it, for
>> consistency also P2M_ROOT_ORDER) and use the macro in the smmu
>> driver.
>>
>> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 

Thanks, Stefano!

---
Best Regards, Laurentiu


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 17:49:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 17:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2289.6838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOPBY-0002xz-AP; Fri, 02 Oct 2020 17:49:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2289.6838; Fri, 02 Oct 2020 17:49:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOPBY-0002xs-5N; Fri, 02 Oct 2020 17:49:36 +0000
Received: by outflank-mailman (input) for mailman id 2289;
 Fri, 02 Oct 2020 17:49:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOPBW-0002xK-OU
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 17:49:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47d46537-e3ff-49f7-9109-1f6a9e5c4344;
 Fri, 02 Oct 2020 17:49:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOPBO-0005sS-Ae; Fri, 02 Oct 2020 17:49:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOPBO-000304-4K; Fri, 02 Oct 2020 17:49:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOPBO-0001rQ-3q; Fri, 02 Oct 2020 17:49:26 +0000
X-Inumbo-ID: 47d46537-e3ff-49f7-9109-1f6a9e5c4344
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=PeV25iYY84fcI1ARdpMT8AQsObe/shS1MbapH8yCWNQ=; b=1cpu8ZmFunZ1Eat5w6dtBWJaYL
	R2GUNWfa7n3msZvaBM1F3KETSbcAqLIK19zLrtBMBYFtY5sCgw6k6r/FLe89POBZ3+EYa47dypsjv
	JTxOK9tKqXPYwFbIsw0/lwmYc4Bz0ZjT47aly3xDyO29SQGHbDPP16BOmlvJ+vWQ3P2A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.11-testing bisection] complete test-amd64-i386-xl-qemut-debianhvm-i386-xsm
Message-Id: <E1kOPBO-0001rQ-3q@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 17:49:26 +0000

branch xen-4.11-testing
xenbranch xen-4.11-testing
job test-amd64-i386-xl-qemut-debianhvm-i386-xsm
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  3def8466383ab5abd17f1436d085348c2994722b
  Bug not present: cc1561a3a4e6c1b4125953703338c545ba6d14fb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155344/


  commit 3def8466383ab5abd17f1436d085348c2994722b
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:21:27 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.11-testing/test-amd64-i386-xl-qemut-debianhvm-i386-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.11-testing/test-amd64-i386-xl-qemut-debianhvm-i386-xsm.debian-hvm-install --summary-out=tmp/155344.bisection-summary --basis-template=151714 --blessings=real,real-bisect xen-4.11-testing test-amd64-i386-xl-qemut-debianhvm-i386-xsm debian-hvm-install
Searching for failure / basis pass:
 155140 fail [host=albana1] / 151714 [host=fiano1] 151318 [host=fiano1] 151295 [host=chardonnay0] 151279 [host=elbling0] 151260 [host=fiano0] 151234 [host=pinot1] 151204 [host=pinot0] 151166 [host=huxelrebe0] 151140 [host=rimava1] 151093 [host=albana0] 151061 [host=chardonnay1] 151035 ok.
Failure / basis pass flights: 155140 / 151035
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 3263f257caf8e4465e9dca84a88fa0e68be74280
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 2e3de6253422112ae43e608661ba94ea6b345694 9be79927a6395f12c9e24afaccf6acbaf81d402e
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#dafce295e6f447ed8905db4e29241e2c6c2a4389-2793a49565488e419d10ba029c838f4b7efdba38 git://xenbits.xen.org/qemu-xen-traditional.git#c8ea0457495342c417c3dc033bb\
 a25148b279f60-c8ea0457495342c417c3dc033bba25148b279f60 git://xenbits.xen.org/qemu-xen.git#06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad-06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 git://xenbits.xen.org/xen.git#9be79927a6395f12c9e24afaccf6acbaf81d402e-3263f257caf8e4465e9dca84a88fa0e68be74280
Loaded 12583 nodes in revision graph
Searching for test results:
 151035 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 2e3de6253422112ae43e608661ba94ea6b345694 9be79927a6395f12c9e24afaccf6acbaf81d402e
 151012 []
 151061 [host=chardonnay1]
 151093 [host=albana0]
 151140 [host=rimava1]
 151166 [host=huxelrebe0]
 151204 [host=pinot0]
 151260 [host=fiano0]
 151234 [host=pinot1]
 151295 [host=chardonnay0]
 151279 [host=elbling0]
 151318 [host=fiano1]
 151714 [host=fiano1]
 154619 fail irrelevant
 154649 fail irrelevant
 154740 fail irrelevant
 155013 fail irrelevant
 155066 fail irrelevant
 155171 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 2e3de6253422112ae43e608661ba94ea6b345694 9be79927a6395f12c9e24afaccf6acbaf81d402e
 155259 fail irrelevant
 155263 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2eea9c6fdf948902c7f2d3ce7f1a69a22ef48870 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad d9c812dda519a1a73e8370e1b81ddf46eb22ed16 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
 155267 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 437eb3f7a8db7681afe0e6064d3a8edb12abb766 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 155821a1990b6de78dde5f98fa5ab90e802021e0 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
 155274 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cbccf995920a28071f5403b847f29ebf8b732fa9 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 155821a1990b6de78dde5f98fa5ab90e802021e0 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
 155140 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 3263f257caf8e4465e9dca84a88fa0e68be74280
 155278 pass irrelevant
 155286 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 3263f257caf8e4465e9dca84a88fa0e68be74280
 155292 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7faece69854cbcc593643182581b5d7f99b7dab6 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
 155296 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 6e9de083d801104f50a78f5d8e872778a776c682
 155305 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 aec99d9bc3f7459e457e3346b493e534ccbdee8a c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3263f257caf8e4465e9dca84a88fa0e68be74280
 155311 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3e565a9c603daebcf50e067c07aed7f0c4b2a6e0
 155314 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3def8466383ab5abd17f1436d085348c2994722b
 155317 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb
 155324 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3def8466383ab5abd17f1436d085348c2994722b
 155326 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb
 155329 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3def8466383ab5abd17f1436d085348c2994722b
 155332 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb
 155344 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3def8466383ab5abd17f1436d085348c2994722b
Searching for interesting versions
 Result found: flight 151035 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb, results HASH(0x55dd946fca08) HASH(0x55dd946ea070) HASH(0x55dd939a38c0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 6e9de083d801104f50a78f5d8e872778a776c682, results HASH(0x55dd939b5928) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7faece69854cbcc593643182581b5d7f99b7dab6 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad7\
 4c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d, results HASH(0x55dd946f83d0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cbccf995920a28071f5403b847f29ebf8b732fa9 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 155821a1990b6de78dde5f98fa5ab90e802021e0 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d, results HASH(0x55dd946e9148) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 437eb3f7a8db7681afe0e6064d3a8edb12abb766 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 155821a1990b6de78dde5f98fa5ab90e802021e0 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d, results HASH(0x55dd946bf9c0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2eea9c6fdf948902c7f2\
 d3ce7f1a69a22ef48870 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad d9c812dda519a1a73e8370e1b81ddf46eb22ed16 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d, results HASH(0x55dd946d0170) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dafce295e6f447ed8905db4e29241e2c6c2a4389 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 2e3de6253422112ae43e608661ba94ea6b3\
 45694 9be79927a6395f12c9e24afaccf6acbaf81d402e, results HASH(0x55dd946cde68) HASH(0x55dd946b9ca8) Result found: flight 155140 (fail), for basis failure (at ancestor ~927)
 Repro found: flight 155171 (pass), for basis pass
 Repro found: flight 155286 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb
No revisions left to test, checking graph state.
 Result found: flight 155317 (pass), for last pass
 Result found: flight 155324 (fail), for first failure
 Repro found: flight 155326 (pass), for last pass
 Repro found: flight 155329 (fail), for first failure
 Repro found: flight 155332 (pass), for last pass
 Repro found: flight 155344 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  3def8466383ab5abd17f1436d085348c2994722b
  Bug not present: cc1561a3a4e6c1b4125953703338c545ba6d14fb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155344/


  commit 3def8466383ab5abd17f1436d085348c2994722b
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:21:27 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 129 colors found
Revision graph left in /home/logs/results/bisect/xen-4.11-testing/test-amd64-i386-xl-qemut-debianhvm-i386-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
155344: tolerable ALL FAIL

flight 155344 xen-4.11-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155344/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 18:05:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 18:05:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2294.6856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOPQb-0004mj-S5; Fri, 02 Oct 2020 18:05:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2294.6856; Fri, 02 Oct 2020 18:05:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOPQb-0004mc-Ox; Fri, 02 Oct 2020 18:05:09 +0000
Received: by outflank-mailman (input) for mailman id 2294;
 Fri, 02 Oct 2020 18:05:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z+Ma=DJ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kOPQZ-0004mX-R2
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 18:05:08 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 993fffcf-8143-4833-90b3-874684c73df1;
 Fri, 02 Oct 2020 18:05:05 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id x14so2735295wrl.12
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 11:05:05 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id c16sm2793734wrx.31.2020.10.02.11.05.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 11:05:03 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Z+Ma=DJ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
	id 1kOPQZ-0004mX-R2
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 18:05:08 +0000
X-Inumbo-ID: 993fffcf-8143-4833-90b3-874684c73df1
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 993fffcf-8143-4833-90b3-874684c73df1;
	Fri, 02 Oct 2020 18:05:05 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id x14so2735295wrl.12
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 11:05:05 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=nAsVSdK/7E/mt3Oq7A35InRFZW35+phEzXg0tEKoM3E=;
        b=RoyUTHTN+MQaWVa1oL+ggEUiHJrt0jXB9juJNkrZmpVPsnzyomqBSiw9aTb98u4ct8
         4gbNx0B18lhFeinSoQ5/Lf/Is46husNbuW9r2SIYlaiPK/bXtSzrymO+4wWzaa+BZGKB
         Ka6gAP8r0jtntAFfHsdOUll3NCZPa58Wf2iq4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=nAsVSdK/7E/mt3Oq7A35InRFZW35+phEzXg0tEKoM3E=;
        b=Py56w/OqeOZgiyLMkUmYknVyKP6gVXbuPWIuIzwTlHoY+K3WL6Y0sueaMjSAuV6pDt
         ciDfw/+lnso8CnwdRGQFAsbNoja8+GeURVYGZB8AlkUhWuBTJcbxyDk+/VjM7UMwinjK
         MaO9Z762Uu0eg6y9rHeJIRvnbucIWwDGyNb09cJWwvB99pT3zZBwxNfBFXWB92AmEYnH
         wPBAG/He2QH0WY8IPuiqvVk8bMNkq4TEcC1ypCKIETCkOt5kxIJCkw6LQQP58Ha/W1c6
         2vEIDD79bgFGKUe89ACrTu4twrW0eZ1560IrYYw547U6zfCzJQpbQVa44/LqRd9T7o+X
         1xsA==
X-Gm-Message-State: AOAM5315br0J8Azk6Lh1YIZ5SaQcLBE8OLnCp1n3aloGAJeviYJO/KSx
	aTwamKS7gPv42CSfsusRlaFwOw==
X-Google-Smtp-Source: ABdhPJz81eHAkz0Pqdl3fYidIh3cTsK0B1hmHy8AkNXaM5ryIfRr9H9pfts0XTgN4zmYd73luhZ50Q==
X-Received: by 2002:a5d:52ca:: with SMTP id r10mr4133433wrv.195.1601661904725;
        Fri, 02 Oct 2020 11:05:04 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
        by smtp.gmail.com with ESMTPSA id c16sm2793734wrx.31.2020.10.02.11.05.02
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 02 Oct 2020 11:05:03 -0700 (PDT)
Date: Fri, 2 Oct 2020 20:05:00 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
Message-ID: <20201002180500.GM438822@phenom.ffwll.local>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-7-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200929151437.19717-7-tzimmermann@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
> 
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
> 
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
> 
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is no longer required. The patch removes the
> respective code from both bochs and fbdev.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
>  include/drm/drm_mode_config.h     |  12 --
>  include/linux/dma-buf-map.h       |  72 ++++++++--
>  4 files changed, 265 insertions(+), 37 deletions(-)
> 
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>  	bochs->dev->mode_config.preferred_depth = 24;
>  	bochs->dev->mode_config.prefer_shadow = 0;
>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> -	bochs->dev->mode_config.fbdev_use_iomem = true;
>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>  
>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 343a292f2c7c..f345a314a437 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>  }
>  
>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> -					  struct drm_clip_rect *clip)
> +					  struct drm_clip_rect *clip,
> +					  struct dma_buf_map *dst)
>  {
>  	struct drm_framebuffer *fb = fb_helper->fb;
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> -	for (y = clip->y1; y < clip->y2; y++) {
> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> -			memcpy(dst, src, len);
> -		else
> -			memcpy_toio((void __iomem *)dst, src, len);
> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>  
> +	for (y = clip->y1; y < clip->y2; y++) {
> +		dma_buf_map_memcpy_to(dst, src, len);
> +		dma_buf_map_incr(dst, fb->pitches[0]);
>  		src += fb->pitches[0];
> -		dst += fb->pitches[0];
>  	}
>  }
>  
> @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
>  			if (ret)
>  				return;
> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>  		}
> +
>  		if (helper->fb->funcs->dirty)
>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>  						 &clip_copy, 1);
> @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
>  }
>  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>  
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> +				      size_t count, loff_t *ppos)
> +{
> +	unsigned long p = *ppos;
> +	u8 *dst;
> +	u8 __iomem *src;
> +	int c, err = 0;
> +	unsigned long total_size;
> +	unsigned long alloc_size;
> +	ssize_t ret = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	total_size = info->screen_size;
> +
> +	if (total_size == 0)
> +		total_size = info->fix.smem_len;
> +
> +	if (p >= total_size)
> +		return 0;
> +
> +	if (count >= total_size)
> +		count = total_size;
> +
> +	if (count + p > total_size)
> +		count = total_size - p;
> +
> +	src = (u8 __iomem *)(info->screen_base + p);
> +
> +	alloc_size = min(count, PAGE_SIZE);
> +
> +	dst = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!dst)
> +		return -ENOMEM;
> +
> +	while (count) {
> +		c = min(count, alloc_size);
> +
> +		memcpy_fromio(dst, src, c);
> +		if (copy_to_user(buf, dst, c)) {
> +			err = -EFAULT;
> +			break;
> +		}
> +
> +		src += c;
> +		*ppos += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(dst);
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> +				       size_t count, loff_t *ppos)
> +{
> +	unsigned long p = *ppos;
> +	u8 *src;
> +	u8 __iomem *dst;
> +	int c, err = 0;
> +	unsigned long total_size;
> +	unsigned long alloc_size;
> +	ssize_t ret = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	total_size = info->screen_size;
> +
> +	if (total_size == 0)
> +		total_size = info->fix.smem_len;
> +
> +	if (p > total_size)
> +		return -EFBIG;
> +
> +	if (count > total_size) {
> +		err = -EFBIG;
> +		count = total_size;
> +	}
> +
> +	if (count + p > total_size) {
> +		/*
> +		 * The framebuffer is too small. We do the
> +		 * copy operation, but return an error code
> +		 * afterwards. Taken from fbdev.
> +		 */
> +		if (!err)
> +			err = -ENOSPC;
> +		count = total_size - p;
> +	}
> +
> +	alloc_size = min(count, PAGE_SIZE);
> +
> +	src = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!src)
> +		return -ENOMEM;
> +
> +	dst = (u8 __iomem *)(info->screen_base + p);
> +
> +	while (count) {
> +		c = min(count, alloc_size);
> +
> +		if (copy_from_user(src, buf, c)) {
> +			err = -EFAULT;
> +			break;
> +		}
> +		memcpy_toio(dst, src, c);
> +
> +		dst += c;
> +		*ppos += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(src);
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
>  /**
>   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
>   * @info: fbdev registered by the helper
> @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  		return -ENODEV;
>  }
>  
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> +				 size_t count, loff_t *ppos)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		return drm_fb_helper_sys_read(info, buf, count, ppos);
> +	else
> +		return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> +				  size_t count, loff_t *ppos)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		return drm_fb_helper_sys_write(info, buf, count, ppos);
> +	else
> +		return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> +				  const struct fb_fillrect *rect)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_fillrect(info, rect);
> +	else
> +		drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> +				  const struct fb_copyarea *area)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_copyarea(info, area);
> +	else
> +		drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> +				   const struct fb_image *image)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_imageblit(info, image);
> +	else
> +		drm_fb_helper_cfb_imageblit(info, image);
> +}

I think a todo entry to make these new generic functions the canonical
ones, with drivers no longer using the sys/cfb variants directly, would be
a good addition.
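[Editorial note: the drm_fbdev_fb_* wrappers above all follow one dispatch pattern: a mapping carries an is_iomem flag, and helpers choose between system-memory and I/O-memory accessors. The sketch below is a self-contained userspace model of that pattern, not the kernel API: struct dma_buf_map is reproduced in simplified form, and memcpy_toio() is stubbed with plain memcpy() since real I/O accessors are architecture-specific.]

```c
/*
 * Userspace model of the is_iomem dispatch pattern used by the patch.
 * Kernel primitives are stubbed; this illustrates control flow only.
 */
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct dma_buf_map {
	union {
		void *vaddr;       /* valid when !is_iomem */
		void *vaddr_iomem; /* valid when is_iomem (stubbed here) */
	};
	bool is_iomem;
};

/* Stand-in for memcpy_toio(); the real accessor differs per architecture. */
static void fake_memcpy_toio(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

/* Copy from system memory into the mapping, picking the right accessor. */
static void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src,
				  size_t len)
{
	if (dst->is_iomem)
		fake_memcpy_toio(dst->vaddr_iomem, src, len);
	else
		memcpy(dst->vaddr, src, len);
}

/* Advance the stored address, whichever union member is active. */
static void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
{
	if (map->is_iomem)
		map->vaddr_iomem = (char *)map->vaddr_iomem + incr;
	else
		map->vaddr = (char *)map->vaddr + incr;
}

/* Exercise both branches; returns 0 on success. */
static int demo_dispatch(void)
{
	char sysbuf[8] = { 0 };
	char iobuf[8] = { 0 };
	struct dma_buf_map sys = { .vaddr = sysbuf, .is_iomem = false };
	struct dma_buf_map io = { .vaddr_iomem = iobuf, .is_iomem = true };

	dma_buf_map_memcpy_to(&sys, "abcd", 4);
	dma_buf_map_incr(&sys, 4);
	dma_buf_map_memcpy_to(&sys, "efg", 4); /* copies trailing NUL too */

	dma_buf_map_memcpy_to(&io, "wxyz", 4);

	if (strcmp(sysbuf, "abcdefg") != 0)
		return 1;
	if (memcmp(iobuf, "wxyz", 4) != 0)
		return 2;
	return 0;
}
```

Hiding the branch inside the helpers is what lets one fb_ops table serve both memory types, which is the generalization the todo suggestion is about.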

> +
>  static const struct fb_ops drm_fbdev_fb_ops = {
>  	.owner		= THIS_MODULE,
>  	DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>  	.fb_release	= drm_fbdev_fb_release,
>  	.fb_destroy	= drm_fbdev_fb_destroy,
>  	.fb_mmap	= drm_fbdev_fb_mmap,
> -	.fb_read	= drm_fb_helper_sys_read,
> -	.fb_write	= drm_fb_helper_sys_write,
> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> +	.fb_read	= drm_fbdev_fb_read,
> +	.fb_write	= drm_fbdev_fb_write,
> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
>  };
>  
>  static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
>  	 */
>  	bool prefer_shadow_fbdev;
>  
> -	/**
> -	 * @fbdev_use_iomem:
> -	 *
> -	 * Set to true if framebuffer reside in iomem.
> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> -	 *
> -	 * FIXME: This should be replaced with a per-mapping is_iomem
> -	 * flag (like ttm does), and then used everywhere in fbdev code.
> -	 */
> -	bool fbdev_use_iomem;
> -
>  	/**
>  	 * @quirk_addfb_prefer_xbgr_30bpp:
>  	 *
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h

I think the below should be split out as a prep patch.

> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
>   * accessing the buffer. Use the returned instance and the helper functions
>   * to access the buffer's memory in the correct way.
>   *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
>   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
>   * considered bad style. Rather then accessing its fields directly, use one
>   * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
>   *
>   *	dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
>   *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + *	dma_buf_map_clear(&map);
> + *
>   * Test if a mapping is valid with either dma_buf_map_is_set() or
>   * dma_buf_map_is_null().
>   *
> @@ -73,17 +89,19 @@
>   *	if (dma_buf_map_is_equal(&sys_map, &io_map))
>   *		// always false
>   *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
>   *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + *	const void *src = ...; // source buffer
> + *	size_t len = ...; // length of src
> + *
> + *	dma_buf_map_memcpy_to(&map, src, len);
> + *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
>   */
>  
>  /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
>  	}
>  }
>  
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst:	The dma-buf mapping structure
> + * @src:	The source buffer
> + * @len:	The number of bytes in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> +	if (dst->is_iomem)
> +		memcpy_toio(dst->vaddr_iomem, src, len);
> +	else
> +		memcpy(dst->vaddr, src, len);
> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map:	The dma-buf mapping structure
> + * @incr:	The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> +	if (map->is_iomem)
> +		map->vaddr_iomem += incr;
> +	else
> +		map->vaddr += incr;
> +}
> +
>  #endif /* __DMA_BUF_MAP_H__ */
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
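[Editorial note: the drm_fb_helper_cfb_read() helper quoted in the mail above stages data through a bounce buffer because memcpy_fromio() and copy_to_user() cannot be fused into one call. The sketch below is a userspace model of that chunked-copy loop: both kernel primitives are stubbed with plain memcpy(), and a small constant stands in for PAGE_SIZE. It shows the loop shape only, not the kernel API.]

```c
/*
 * Userspace model of the bounce-buffer loop in drm_fb_helper_cfb_read():
 * copy from (modeled) I/O memory into a small staging buffer, then out
 * to the (modeled) user buffer, one chunk at a time.
 */
#include <stdlib.h>
#include <string.h>

#define CHUNK 4 /* stand-in for PAGE_SIZE */

/* Returns bytes copied, or -1 on allocation failure. */
static long bounce_read(char *user_buf, const char *io_src, size_t count)
{
	size_t alloc = count < CHUNK ? count : CHUNK;
	char *bounce = malloc(alloc);
	long ret = 0;

	if (!bounce)
		return -1;

	while (count) {
		size_t c = count < alloc ? count : alloc;

		memcpy(bounce, io_src, c);   /* models memcpy_fromio() */
		memcpy(user_buf, bounce, c); /* models copy_to_user() */

		io_src += c;
		user_buf += c;
		ret += c;
		count -= c;
	}

	free(bounce);
	return ret;
}

/* Copy 16 bytes through 4-byte chunks; returns 0 on success. */
static int demo_bounce(void)
{
	char src[] = "0123456789abcdef";
	char dst[17] = { 0 };

	if (bounce_read(dst, src, 16) != 16)
		return 1;
	return memcmp(dst, src, 16) != 0;
}
```

Capping the staging buffer at one page bounds kernel memory use no matter how large the read, at the cost of extra copies for big requests.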


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 18:45:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 18:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2298.6867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOQ3I-0008Cz-2L; Fri, 02 Oct 2020 18:45:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2298.6867; Fri, 02 Oct 2020 18:45:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOQ3H-0008Cs-Vb; Fri, 02 Oct 2020 18:45:07 +0000
Received: by outflank-mailman (input) for mailman id 2298;
 Fri, 02 Oct 2020 18:45:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2MsM=DJ=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
 id 1kOQ3G-0008Cn-EY
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 18:45:06 +0000
Received: from mail-oo1-xc43.google.com (unknown [2607:f8b0:4864:20::c43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44bf014e-9d5c-4573-bc79-63611568e82a;
 Fri, 02 Oct 2020 18:45:04 +0000 (UTC)
Received: by mail-oo1-xc43.google.com with SMTP id c4so592554oou.6
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 11:45:04 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2MsM=DJ=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
	id 1kOQ3G-0008Cn-EY
	for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 18:45:06 +0000
X-Inumbo-ID: 44bf014e-9d5c-4573-bc79-63611568e82a
Received: from mail-oo1-xc43.google.com (unknown [2607:f8b0:4864:20::c43])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 44bf014e-9d5c-4573-bc79-63611568e82a;
	Fri, 02 Oct 2020 18:45:04 +0000 (UTC)
Received: by mail-oo1-xc43.google.com with SMTP id c4so592554oou.6
        for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 11:45:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=lZSwV/TxvLgtRQyseSFuOz5Dh2qMehSXaFEvs66IqY4=;
        b=W7br+aRlzq45X9zGs9xUATcRIBCFJ3jObJlGFJ+idk77p7qcMftDn2rX4AzxzMUyaZ
         jPMnJ3BMZZ+WrbYH2Gy2fwnEK1jGm82A6yCC8yVPbio7O5XUhQjvdmjfyIcB4m/8rybu
         R71q+9FkhpfjEUqyO5Wt0gDXb1FGwVO9QwW6E=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=lZSwV/TxvLgtRQyseSFuOz5Dh2qMehSXaFEvs66IqY4=;
        b=LtRe4bRY8bsP9mqVoK7Wqnm0lLrb7cxIPFuSl6Hn4CXR6+n88TSSUMJQYvf6v27y3f
         wpyfPmKg2bX+0mOe6kaYSqW//YssrUC2ohHPWiJlvZnyegmnreEYhmb3Q/JCN+LvgRcI
         j/IvVfNBKmFP627tROUHevkRLWO5+/rE3TeRMWbvjcG+BROXofl9HYC25XjVtu8kyVB1
         ji4DTrNue33dLEDQ4Yycpj9ooLZZFsLRWx/PwuYG0lhRztfWkrNT5wIYYRAwCOdqzJuF
         36UZvwQtFEp+k+FFr1yCIeyhToK/VxSvs71bMN7ZznQsU26yQQBsPTphYyR5V3j0sI31
         w5hg==
X-Gm-Message-State: AOAM532lCyblLvfrqDe1LLVXb5xmz8k/t99766ZHbTkvm30TOtn1d6K/
	8GwqqEqjF7S+fI4H4Z+//P9tZ6YUqb/BJ5ZLijWmjw==
X-Google-Smtp-Source: ABdhPJxo3CwQVG7zrHO9r4gpqXHDOI3xSmfLTtpn1cgBaJLFnaDvBDsE/URziecJNATSH24FHSy7/SzIzMH/e9gCpWk=
X-Received: by 2002:a4a:3b44:: with SMTP id s65mr2903513oos.85.1601664303475;
 Fri, 02 Oct 2020 11:45:03 -0700 (PDT)
MIME-Version: 1.0
References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-7-tzimmermann@suse.de>
 <20201002180500.GM438822@phenom.ffwll.local>
In-Reply-To: <20201002180500.GM438822@phenom.ffwll.local>
From: Daniel Vetter <daniel@ffwll.ch>
Date: Fri, 2 Oct 2020 20:44:52 +0200
Message-ID: <CAKMK7uFVHrqBh1sqQHR56vp2JS77XoCs232B5mkJXXpLhgLW8Q@mail.gmail.com>
Subject: Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, Maxime Ripard <mripard@kernel.org>, 
	Dave Airlie <airlied@linux.ie>, Sam Ravnborg <sam@ravnborg.org>, 
	Alex Deucher <alexander.deucher@amd.com>, =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>, 
	Gerd Hoffmann <kraxel@redhat.com>, Lucas Stach <l.stach@pengutronix.de>, 
	Russell King <linux+etnaviv@armlinux.org.uk>, 
	Christian Gmeiner <christian.gmeiner@gmail.com>, Inki Dae <inki.dae@samsung.com>, 
	Joonyoung Shim <jy0922.shim@samsung.com>, Seung-Woo Kim <sw0312.kim@samsung.com>, 
	Kyungmin Park <kyungmin.park@samsung.com>, Kukjin Kim <kgene@kernel.org>, 
	Krzysztof Kozlowski <krzk@kernel.org>, Qiang Yu <yuq825@gmail.com>, Ben Skeggs <bskeggs@redhat.com>, 
	Rob Herring <robh@kernel.org>, Tomeu Vizoso <tomeu.vizoso@collabora.com>, 
	Steven Price <steven.price@arm.com>, Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>, 
	Sandy Huang <hjc@rock-chips.com>, =?UTF-8?Q?Heiko_St=C3=BCbner?= <heiko@sntech.de>, 
	Hans de Goede <hdegoede@redhat.com>, Sean Paul <sean@poorly.run>, "Anholt, Eric" <eric@anholt.net>, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Huang Rui <ray.huang@amd.com>, 
	Sumit Semwal <sumit.semwal@linaro.org>, Emil Velikov <emil.velikov@collabora.com>, 
	Luben Tuikov <luben.tuikov@amd.com>, apaneers@amd.com, 
	Linus Walleij <linus.walleij@linaro.org>, Melissa Wen <melissa.srw@gmail.com>, 
	"Wilson, Chris" <chris@chris-wilson.co.uk>, Qinglang Miao <miaoqinglang@huawei.com>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	"open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>, 
	The etnaviv authors <etnaviv@lists.freedesktop.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>, lima@lists.freedesktop.org, 
	Nouveau Dev <nouveau@lists.freedesktop.org>, 
	"open list:DRM DRIVER FOR QXL VIRTUAL GPU" <spice-devel@lists.freedesktop.org>, 
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>, 
	"moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>, 
	"open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>, 
	"moderated list:DMA BUFFER SHARING FRAMEWORK" <linaro-mm-sig@lists.linaro.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Oct 2, 2020 at 8:05 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>
> On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the
> > respective fb_sys_ or fb_cfb_ functions.
> >
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is no longer required. The patch removes the
> > respective code from both bochs and fbdev.
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Argh, I accidentally hit send before finishing this ...

> > ---
> >  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
> >  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
> >  include/drm/drm_mode_config.h     |  12 --
> >  include/linux/dma-buf-map.h       |  72 ++++++++--
> >  4 files changed, 265 insertions(+), 37 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > index 13d0d04c4457..853081d186d5 100644
> > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> >       bochs->dev->mode_config.preferred_depth = 24;
> >       bochs->dev->mode_config.prefer_shadow = 0;
> >       bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > -     bochs->dev->mode_config.fbdev_use_iomem = true;
> >       bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
> >
> >       bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 343a292f2c7c..f345a314a437 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> >  }
> >
> >  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > -                                       struct drm_clip_rect *clip)
> > +                                       struct drm_clip_rect *clip,
> > +                                       struct dma_buf_map *dst)
> >  {
> >       struct drm_framebuffer *fb = fb_helper->fb;
> >       unsigned int cpp = fb->format->cpp[0];
> >       size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> >       void *src = fb_helper->fbdev->screen_buffer + offset;
> > -     void *dst = fb_helper->buffer->map.vaddr + offset;
> >       size_t len = (clip->x2 - clip->x1) * cpp;
> >       unsigned int y;
> >
> > -     for (y = clip->y1; y < clip->y2; y++) {
> > -             if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > -                     memcpy(dst, src, len);
> > -             else
> > -                     memcpy_toio((void __iomem *)dst, src, len);
> > +     dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> >
> > +     for (y = clip->y1; y < clip->y2; y++) {
> > +             dma_buf_map_memcpy_to(dst, src, len);
> > +             dma_buf_map_incr(dst, fb->pitches[0]);
> >               src += fb->pitches[0];
> > -             dst += fb->pitches[0];
> >       }
> >  }
> >
> > @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >                       ret = drm_client_buffer_vmap(helper->buffer, &map);
> >                       if (ret)
> >                               return;
> > -                     drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > +                     drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> >               }
> > +
> >               if (helper->fb->funcs->dirty)
> >                       helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> >                                                &clip_copy, 1);
> > @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> >  }
> >  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > +                                   size_t count, loff_t *ppos)
> > +{
> > +     unsigned long p = *ppos;
> > +     u8 *dst;
> > +     u8 __iomem *src;
> > +     int c, err = 0;
> > +     unsigned long total_size;
> > +     unsigned long alloc_size;
> > +     ssize_t ret = 0;
> > +
> > +     if (info->state != FBINFO_STATE_RUNNING)
> > +             return -EPERM;
> > +
> > +     total_size = info->screen_size;
> > +
> > +     if (total_size == 0)
> > +             total_size = info->fix.smem_len;
> > +
> > +     if (p >= total_size)
> > +             return 0;
> > +
> > +     if (count >= total_size)
> > +             count = total_size;
> > +
> > +     if (count + p > total_size)
> > +             count = total_size - p;
> > +
> > +     src = (u8 __iomem *)(info->screen_base + p);
> > +
> > +     alloc_size = min(count, PAGE_SIZE);
> > +
> > +     dst = kmalloc(alloc_size, GFP_KERNEL);
> > +     if (!dst)
> > +             return -ENOMEM;
> > +
> > +     while (count) {
> > +             c = min(count, alloc_size);
> > +
> > +             memcpy_fromio(dst, src, c);
> > +             if (copy_to_user(buf, dst, c)) {
> > +                     err = -EFAULT;
> > +                     break;
> > +             }
> > +
> > +             src += c;
> > +             *ppos += c;
> > +             buf += c;
> > +             ret += c;
> > +             count -= c;
> > +     }
> > +
> > +     kfree(dst);
> > +
> > +     if (err)
> > +             return err;
> > +
> > +     return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > +                                    size_t count, loff_t *ppos)
> > +{
> > +     unsigned long p = *ppos;
> > +     u8 *src;
> > +     u8 __iomem *dst;
> > +     int c, err = 0;
> > +     unsigned long total_size;
> > +     unsigned long alloc_size;
> > +     ssize_t ret = 0;
> > +
> > +     if (info->state != FBINFO_STATE_RUNNING)
> > +             return -EPERM;
> > +
> > +     total_size = info->screen_size;
> > +
> > +     if (total_size == 0)
> > +             total_size = info->fix.smem_len;
> > +
> > +     if (p > total_size)
> > +             return -EFBIG;
> > +
> > +     if (count > total_size) {
> > +             err = -EFBIG;
> > +             count = total_size;
> > +     }
> > +
> > +     if (count + p > total_size) {
> > +             /*
> > +              * The framebuffer is too small. We do the
> > +              * copy operation, but return an error code
> > +              * afterwards. Taken from fbdev.
> > +              */
> > +             if (!err)
> > +                     err = -ENOSPC;
> > +             count = total_size - p;
> > +     }
> > +
> > +     alloc_size = min(count, PAGE_SIZE);
> > +
> > +     src = kmalloc(alloc_size, GFP_KERNEL);
> > +     if (!src)
> > +             return -ENOMEM;
> > +
> > +     dst = (u8 __iomem *)(info->screen_base + p);
> > +
> > +     while (count) {
> > +             c = min(count, alloc_size);
> > +
> > +             if (copy_from_user(src, buf, c)) {
> > +                     err = -EFAULT;
> > +                     break;
> > +             }
> > +             memcpy_toio(dst, src, c);
> > +
> > +             dst += c;
> > +             *ppos += c;
> > +             buf += c;
> > +             ret += c;
> > +             count -= c;
> > +     }
> > +
> > +     kfree(src);
> > +
> > +     if (err)
> > +             return err;
> > +
> > +     return ret;
> > +}

The duplication is a bit annoying here, but can't really be avoided. I
do think though that we should maybe go a bit further and have drm
implementations of this stuff instead of following fbdev concepts so
closely. So here roughly:

- if we have a shadow fb, construct a dma_buf_map for that, otherwise
take the one from the driver
- have a full generic implementation using that one directly (and
checking size limits against the underlying gem buffer)
- ideally also with some testcases added to the fbdev test we have in
igt (very bare-bones right now)

But I'm not really sure whether that's worth all the trouble. It's
just that the fbdev-ness here in this copied code sticks out a lot :-)

> > +
> >  /**
> >   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> >   * @info: fbdev registered by the helper
> > @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> >               return -ENODEV;
> >  }
> >
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > +                              size_t count, loff_t *ppos)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             return drm_fb_helper_sys_read(info, buf, count, ppos);
> > +     else
> > +             return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > +                               size_t count, loff_t *ppos)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             return drm_fb_helper_sys_write(info, buf, count, ppos);
> > +     else
> > +             return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > +                               const struct fb_fillrect *rect)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             drm_fb_helper_sys_fillrect(info, rect);
> > +     else
> > +             drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > +                               const struct fb_copyarea *area)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             drm_fb_helper_sys_copyarea(info, area);
> > +     else
> > +             drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > +                                const struct fb_image *image)
> > +{
> > +     struct drm_fb_helper *fb_helper = info->par;
> > +     struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +             drm_fb_helper_sys_imageblit(info, image);
> > +     else
> > +             drm_fb_helper_cfb_imageblit(info, image);
> > +}

I think a todo.rst entry would be a good addition: make the new generic
functions the real ones, with drivers no longer using the sys/cfb
variants. It's kinda covered by the move to the generic helpers, but
maybe we can convert a few more drivers over to these here. That would
also allow us to flatten the code a bit and use more of the dma_buf_map
stuff directly (instead of reusing crusty fbdev code written 20 years
ago or so).

> > +
> >  static const struct fb_ops drm_fbdev_fb_ops = {
> >       .owner          = THIS_MODULE,
> >       DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> >       .fb_release     = drm_fbdev_fb_release,
> >       .fb_destroy     = drm_fbdev_fb_destroy,
> >       .fb_mmap        = drm_fbdev_fb_mmap,
> > -     .fb_read        = drm_fb_helper_sys_read,
> > -     .fb_write       = drm_fb_helper_sys_write,
> > -     .fb_fillrect    = drm_fb_helper_sys_fillrect,
> > -     .fb_copyarea    = drm_fb_helper_sys_copyarea,
> > -     .fb_imageblit   = drm_fb_helper_sys_imageblit,
> > +     .fb_read        = drm_fbdev_fb_read,
> > +     .fb_write       = drm_fbdev_fb_write,
> > +     .fb_fillrect    = drm_fbdev_fb_fillrect,
> > +     .fb_copyarea    = drm_fbdev_fb_copyarea,
> > +     .fb_imageblit   = drm_fbdev_fb_imageblit,
> >  };
> >
> >  static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> >        */
> >       bool prefer_shadow_fbdev;
> >
> > -     /**
> > -      * @fbdev_use_iomem:
> > -      *
> > -      * Set to true if framebuffer reside in iomem.
> > -      * When set to true memcpy_toio() is used when copying the framebuffer in
> > -      * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > -      *
> > -      * FIXME: This should be replaced with a per-mapping is_iomem
> > -      * flag (like ttm does), and then used everywhere in fbdev code.
> > -      */
> > -     bool fbdev_use_iomem;
> > -
> >       /**
> >        * @quirk_addfb_prefer_xbgr_30bpp:
> >        *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h

I think the below should be split out as a prep patch.

> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> >   * accessing the buffer. Use the returned instance and the helper functions
> >   * to access the buffer's memory in the correct way.
> >   *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> > + * actually independent from the dma-buf infrastructure. When sharing buffers
> > + * among devices, drivers have to know the location of the memory to access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be generalized
> > + * and moved to a more prominent header file.
> > + *
> >   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> >   * considered bad style. Rather than accessing its fields directly, use one
> >   * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> >   *
> >   *   dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> >   *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + *   dma_buf_map_clear(&map);
> > + *
> >   * Test if a mapping is valid with either dma_buf_map_is_set() or
> >   * dma_buf_map_is_null().
> >   *
> > @@ -73,17 +89,19 @@
> >   *   if (dma_buf_map_is_equal(&sys_map, &io_map))
> >   *           // always false
> >   *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set up instance of struct dma_buf_map can be used to access or manipulate
> > + * the buffer memory. Depending on the location of the memory, the provided
> > + * helpers will pick the correct operations. Data can be copied into the memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> >   *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> > - * actually independent from the dma-buf infrastructure. When sharing buffers
> > - * among devices, drivers have to know the location of the memory to access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + *   const void *src = ...; // source buffer
> > + *   size_t len = ...; // length of src
> > + *
> > + *   dma_buf_map_memcpy_to(&map, src, len);
> > + *   dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> >   */
> >
> >  /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> >       }
> >  }
> >
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst:     The dma-buf mapping structure
> > + * @src:     The source buffer
> > + * @len:     The number of bytes in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> > +{
> > +     if (dst->is_iomem)
> > +             memcpy_toio(dst->vaddr_iomem, src, len);
> > +     else
> > +             memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> > + * @map:     The dma-buf mapping structure
> > + * @incr:    The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > +     if (map->is_iomem)
> > +             map->vaddr_iomem += incr;
> > +     else
> > +             map->vaddr += incr;
> > +}
> > +
> >  #endif /* __DMA_BUF_MAP_H__ */
> > --
> > 2.28.0

Aside from the details I think this all looks reasonable.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 18:46:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 18:46:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2300.6880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOQ4A-0008Ju-H3; Fri, 02 Oct 2020 18:46:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2300.6880; Fri, 02 Oct 2020 18:46:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOQ4A-0008Jn-Dx; Fri, 02 Oct 2020 18:46:02 +0000
Received: by outflank-mailman (input) for mailman id 2300;
 Fri, 02 Oct 2020 18:46:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z+Ma=DJ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kOQ49-0008Jh-U1
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 18:46:01 +0000
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15ce7678-8595-47bb-8de5-4f5116d095f0;
 Fri, 02 Oct 2020 18:46:00 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id t10so2874427wrv.1
 for <xen-devel@lists.xenproject.org>; Fri, 02 Oct 2020 11:46:00 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id n2sm2937452wma.29.2020.10.02.11.45.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 02 Oct 2020 11:45:57 -0700 (PDT)
X-Inumbo-ID: 15ce7678-8595-47bb-8de5-4f5116d095f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=cqLPfUuqb60ZIVoOQ31hnraU/Ifr39O5/ULDkN9IWmw=;
        b=LhoHpieZ6b1yu3Bgv5UVyYwE5gEJCucfXpQtCsua7dxm7nPU5GPxUoM22UNQQ6v2CK
         k0Y2QqxCtl6jlPSAgvR23+rIu6KvQsYlEl3KTS7hhk4KmBbGsS7srplNima8l2683HV3
         M+RegCjSI+WIRGmRnHJIv+DYHNTbkSdc4/ZqA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=cqLPfUuqb60ZIVoOQ31hnraU/Ifr39O5/ULDkN9IWmw=;
        b=Jc1+4nCBo5FjJTpGThV4qZJ49j1stclHysExxv06YHfLujHFTghSnaVt45sydp1Amh
         8TCoBuRzWVoggND7Q/fBq8yFayCxD+3uxxfIGzBcFIfUGnBRLH+Qs+USuNfhbCiqrY/Q
         Tqahlk0L+J41Pz0afo1csTmCGbS9/nIzz85Czmn5nQprKLy7H1TUUBuEyFe2br8nUuPZ
         PT0Z53s/5iACgj9pnK1wHiTJiQYQ+rMWiVsKs/mX8r5QwXpaA1ogUMOREmlRhj/S53lP
         Jp3AoExXsmMU+/13VL6Sc6z0D7T/FrswjI6VT9oQpzelEbTke+P1nCQ+ZcRWMw15v8Hu
         Snmg==
X-Gm-Message-State: AOAM532MC78XmKTVzynSnZz2Zyvxs5acQ5iwM78MPJNPAzi8qgcL4BrG
	AbR6BAkkK9Gt8NZr8xallMnJJg==
X-Google-Smtp-Source: ABdhPJzgHPhBsD8FZtMDQrfMIjGL4qmZj/8xNAuAHXm+0qz5n2h6pVB3S7OCKNZbK4qjctoXq1cGGA==
X-Received: by 2002:a5d:40cd:: with SMTP id b13mr4528517wrq.297.1601664359823;
        Fri, 02 Oct 2020 11:45:59 -0700 (PDT)
Date: Fri, 2 Oct 2020 20:45:54 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v3 7/7] drm/todo: Update entries around struct dma_buf_map
Message-ID: <20201002184554.GN438822@phenom.ffwll.local>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-8-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200929151437.19717-8-tzimmermann@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Tue, Sep 29, 2020 at 05:14:37PM +0200, Thomas Zimmermann wrote:
> Instances of struct dma_buf_map should be useful throughout DRM's
> memory management code. Furthermore, several drivers can now use
> generic fbdev emulation.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>

> ---
>  Documentation/gpu/todo.rst | 24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 3751ac976c3e..023626c1837b 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,8 +197,10 @@ Convert drivers to use drm_fbdev_generic_setup()
>  ------------------------------------------------
>  
>  Most drivers can use drm_fbdev_generic_setup(). Drivers have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>  
>  Contact: Maintainer of the driver you plan to convert
>  
> @@ -446,6 +448,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>  
>  Level: Intermediate
>  
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interfaces have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>  
>  Core refactorings
>  =================
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 19:20:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 19:20:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2308.6891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOQb0-0002e8-Am; Fri, 02 Oct 2020 19:19:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2308.6891; Fri, 02 Oct 2020 19:19:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOQb0-0002e1-7r; Fri, 02 Oct 2020 19:19:58 +0000
Received: by outflank-mailman (input) for mailman id 2308;
 Fri, 02 Oct 2020 19:19:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bUGt=DJ=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kOQaz-0002dw-AH
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 19:19:57 +0000
Received: from out2-smtp.messagingengine.com (unknown [66.111.4.26])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4c7b97c-6489-4acf-a0f7-7ef04b33e76b;
 Fri, 02 Oct 2020 19:19:56 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 5C12F5C004B;
 Fri,  2 Oct 2020 15:19:56 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Fri, 02 Oct 2020 15:19:56 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 1DAFE3280068;
 Fri,  2 Oct 2020 15:19:54 -0400 (EDT)
X-Inumbo-ID: e4c7b97c-6489-4acf-a0f7-7ef04b33e76b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; bh=H1/P9y
	Ei+hlyWeojnApVQmK2oZDybpCtFF/bpTEQcdk=; b=AHUYe9BOfO3qia84hBlUg4
	8WeEP+z4MIdAwPERybK9YI3DbjcdRabsVdXgvVlK9w00/DDwFRGA7MZlB0qIZjOX
	MHHtycIkhoxBOjAIcbvwIEzFJ7LKo8+n2C7bzDFafBMRpz04VOGGwUrv+kCFkoN+
	feqo7V/B+E8GPAZnqTn+AqbgsveDryL+bRJ2WpZI9EofsG7Fx8sOlYXoiHdjtIhW
	4URSBCtUSa9Y/iu97BWI8PokbzZZEiml2zVoMUmSUBiSaZy44ihsFITenIlpphfY
	QetwW+UIxDmLYVrjQEF2fD1P2hJWXHcU2ef1dqwlO89d1nf9Bw02SfNcVCHJutpA
	==
X-ME-Sender: <xms:XH13XwiyeAafWedLfav2N-KAQRauPNXb5GzGyYmDZTGqEGEZweX6Jw>
    <xme:XH13X5AZrYho6DP2C9-Vzr1LqJ8TgxEegd6b38QfZ11uaHSP5okeJXKdaVsqraRYu
    tYWz6qkp-T84w>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrfeeigddufeegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeejudei
    geeiffekgfduvdetfeefhefhleefudevleejveetffdugeekjedufffhfeenucffohhmrg
    hinhepgigvnhdrohhrghenucfkphepledurdeigedrudejtddrkeelnecuvehluhhsthgv
    rhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:XH13X4Gayx6hRaU4un0PdNJtY8N6BWXR8wRk1-KjLaXKBBiSila-YQ>
    <xmx:XH13XxQbUJshtxgYPYjsT9U2uLORrQn0g7Wz-12XNIduzr8h9LCkRA>
    <xmx:XH13X9xDI6SyE1LJqpQs_SMGxg9-kFuS7UuoW0WCT_6yCQPMszWnig>
    <xmx:XH13X2vuxLJwZ50LV1L2T_P2KK4qyWtNn32afO6MvzuVCpvkyw2lnw>
Date: Fri, 2 Oct 2020 21:19:51 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: Yet another S3 issue in Xen 4.14
Message-ID: <20201002191951.GA104059@mail-itl>
References: <20201001011245.GL3962@mail-itl>
 <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>
 <20201001123129.GJ1482@mail-itl>
 <1e596ccc-a875-93f1-2619-e4dbcbd88b4d@citrix.com>
 <20201002150859.GM3962@mail-itl>
 <454ac9ce-012f-f2e7-722d-c5304fd3146f@suse.com>
 <aa5e1a7b-3724-bdc1-a313-0598aabd181f@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="SLDf9lqlvOQaIe6s"
Content-Disposition: inline
In-Reply-To: <aa5e1a7b-3724-bdc1-a313-0598aabd181f@citrix.com>


--SLDf9lqlvOQaIe6s
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: Yet another S3 issue in Xen 4.14

On Fri, Oct 02, 2020 at 04:42:50PM +0100, Andrew Cooper wrote:
> On 02/10/2020 16:39, Jan Beulich wrote:
> > On 02.10.2020 17:08, Marek Marczykowski-Górecki wrote:
> >> I've done another bisect on the commit broken up in separate changes
> >> (https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen.git;a=shortlog;h=refs/heads/dbg-s3)
> >> and the bad part seems to be this:
> >>
> >> From dbdb32f8c265295d6af7cd4cd0aa12b6d04a0430 Mon Sep 17 00:00:00 2001
> >> From: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Date: Fri, 2 Oct 2020 15:40:22 +0100
> >> Subject: [PATCH 1/1] CR4
> > Interesting - I was wild guessing so yesterday, but couldn't come
> > up with even a vague reason why this would be. I think you could
> > further split it up:
> >
> >> --- a/xen/arch/x86/acpi/power.c
> >> +++ b/xen/arch/x86/acpi/power.c
> >> @@ -195,7 +195,6 @@ static int enter_state(u32 state)
> >>      unsigned long flags;
> >>      int error;
> >>      struct cpu_info *ci;
> >> -    unsigned long cr4;
> >>
> >>      if ( (state <= ACPI_STATE_S0) || (state > ACPI_S_STATES_MAX) )
> >>          return -EINVAL;
> >> @@ -270,15 +269,15 @@ static int enter_state(u32 state)
> >>
> >>      system_state = SYS_STATE_resume;
> >>
> >> -    /* Restore CR4 and EFER from cached values. */
> >> -    cr4 = read_cr4();
> >> -    write_cr4(cr4 & ~X86_CR4_MCE);
> >> +    /* Restore EFER from cached value. */
> >>      write_efer(read_efer());
> > This one should be possible to leave in place despite ...
> >
> >>      device_power_up(SAVED_ALL);
> >>
> >>      mcheck_init(&boot_cpu_data, false);
> >> -    write_cr4(cr4);
> >> +
> >> +    /* Restore CR4 from cached value, now MCE is set up. */
> >> +    write_cr4(read_cr4());
> > ... this change.
> >
> > Further, while I can't see how the set_in_cr4() in mcheck_init()
> > could badly interact with the CR4 writes here, another option
> > might be to suppress it when system_state == SYS_STATE_resume
> > && c == &boot_cpu_data (or !bsp && c == &boot_cpu_data).
> >
> >> --- a/xen/arch/x86/acpi/suspend.c
> >> +++ b/xen/arch/x86/acpi/suspend.c
> >> @@ -23,7 +23,4 @@ void save_rest_processor_state(void)
> >>  void restore_rest_processor_state(void)
> >>  {
> >>      load_system_tables();
> >> -
> >> -    /* Restore full CR4 (inc MCE) now that the IDT is in place. */
> >> -    write_cr4(mmu_cr4_features);
> >>  }
> > This one should be possible to leave in place despite the other
> > changes.
>
> We're continuing to debug in private. mmu_cr4_features and read_cr4()
> are equivalent (as expected), but very different from MINIMAL_CR4 which
> is what the trampoline configures, so I think we're suffering a
> CR4-related #UD/#GP somewhere in device_power_up() or mcheck_init().
>
> It's not INVPCID. Trying others.

Some update:
It's OSFXSR + probably other flags, the crash happens in
enter_state()->device_power_up()->time_resume()->efi_get_time()

This also explains the difference between legacy / UEFI boot.

Disabling efi_get_time() or setting CR4 earlier solves _this_ issue, but
applied on top of stable-4.14 still doesn't work. Looks like there is
yet another S3 breakage in between. I'm bisecting it further...

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--SLDf9lqlvOQaIe6s
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl93fVYACgkQ24/THMrX
1yyInwf7BXxK1fB2WTRqErOugHqtQ004pjpf9I8voItFKYB6DMwW23GHREcFoiJu
kHWl4SNKbhaPT3q8cH6t/VCZ9/e2qg3ENFMpVbinZHhu3fP5kBoQO/skf5Cds6d3
WJPg7q6sPf6toeI5eju4SQiBD/5CbZIKI+qHMbmDd9Y1OTtiHJzEzb2YciX0HzSU
tm4uGv9i3qTIP/n+GU8Y7Z6ypf4xV2VKdasud3zHRkkz8OON17wFOnavTdsz5GfH
n4hdaveIPpahSLwbf2Dt9UoqjCZgP6wt/gir/gRNH5NbuKx/YbuJvoRZoXi3Uc69
D00Yb/oHQS0p8IspgBVYtiqjC2gAEg==
=hwmh
-----END PGP SIGNATURE-----

--SLDf9lqlvOQaIe6s--


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 20:21:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 20:21:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2327.6922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kORYf-0000Hs-DC; Fri, 02 Oct 2020 20:21:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2327.6922; Fri, 02 Oct 2020 20:21:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kORYf-0000Hl-9j; Fri, 02 Oct 2020 20:21:37 +0000
Received: by outflank-mailman (input) for mailman id 2327;
 Fri, 02 Oct 2020 20:21:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WIB7=DJ=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1kORYe-0000Hg-7R
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 20:21:36 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (unknown
 [40.107.93.50]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd7c432c-5f8c-478f-8752-89fef17bc8c9;
 Fri, 02 Oct 2020 20:21:34 +0000 (UTC)
Received: from SN2PR01CA0047.prod.exchangelabs.com (2603:10b6:800::15) by
 MN2PR02MB6704.namprd02.prod.outlook.com (2603:10b6:208:1d6::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.32; Fri, 2 Oct 2020 20:21:32 +0000
Received: from SN1NAM02FT048.eop-nam02.prod.protection.outlook.com
 (2603:10b6:800:0:cafe::23) by SN2PR01CA0047.outlook.office365.com
 (2603:10b6:800::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32 via Frontend
 Transport; Fri, 2 Oct 2020 20:21:32 +0000
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 SN1NAM02FT048.mail.protection.outlook.com (10.152.72.202) with Microsoft SMTP
 Server id 15.20.3412.21 via Frontend Transport; Fri, 2 Oct 2020 20:21:31
 +0000
Received: from [149.199.38.66] (port=57914 helo=smtp.xilinx.com)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kORY6-00067A-3g; Fri, 02 Oct 2020 13:21:02 -0700
Received: from [127.0.0.1] (helo=localhost)
 by smtp.xilinx.com with smtp (Exim 4.63)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kORYZ-0008Sq-97; Fri, 02 Oct 2020 13:21:31 -0700
Received: from xsj-pvapsmtp01 (xsj-smtp.xilinx.com [149.199.38.66])
 by xsj-smtp-dlp2.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 092KLPNw003319; 
 Fri, 2 Oct 2020 13:21:26 -0700
Received: from [10.23.123.31] (helo=localhost)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <stefanos@xilinx.com>)
 id 1kORYT-0008SG-R0; Fri, 02 Oct 2020 13:21:25 -0700
X-Inumbo-ID: cd7c432c-5f8c-478f-8752-89fef17bc8c9
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CZOZ9C0uzv7vemfT0tAlbIYPqmCxa5zYSQG2pY0cdAkoCx715LErsisi+UJh256chVRvtOAxpAbKpRj8E4C4HhXSdCXJHWo2kYz/qV2ZlRDC+XTqagGjJL0MgxmE8yS2oEj2cACZPZuunraj4oOlOcu8zofeDk+HD9FBRsR+QpbywcppfnwpYed8Ed6aB96m0Feo1FWp5DQwvTCi4+a8gxqd2bep736t6SFSV03B1fqRW1406kX3b1KjLw8WGal5XFceR30OcrLHLPN56/b6GgnTdKu+gGJf1UEKAMhWzpxiYQagPkwcLY+V3H3Aj4MXstziwKszYy4i9w/PsMyfBg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K1sKfjfl6E0C1MGRjLxW/Z7cSOfeqva3SvzEf9ONgak=;
 b=Hu5/wln1emNdkIv6dZeGB8NgW7baZojPzkqUXGGmiI7xrFkH+1oK0G/l4aVYRqPBDNcZ6kjkB7lh7SRe8ph1THx23dRIPQl98Iw4t40MpiQJbc8w3OfD2++BTA+1a9jIJje81MVN7nRxrLHVAzDyVeC+6/XhPH8pEiLgP8qhLmTis8ay2M9ufbNsrInX8gpFNNoGjLmQJa5aYOMUXYvf8LPlgIwu5FXVFC9JNxJMet4YsOLEQlhxLSDdtbrW0vpbStc6w6EbGwWx0QFT0ArNack4Q5T7iftx1UHsUm9+8XBmrcY3KCk13UKEVFuWMWyxfcwSUYqfBcA86xkJ7aladw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=lists.linux-foundation.org
 smtp.mailfrom=xilinx.com; dmarc=bestguesspass action=none
 header.from=xilinx.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K1sKfjfl6E0C1MGRjLxW/Z7cSOfeqva3SvzEf9ONgak=;
 b=iZR3Dod7hFtywYjczyK2oph4wZxyuH2pmAGz7nMkxqbB7dCDxOvSVslpVtYiY76SeTVYnynsq/3lJgBmeOBGtvbrz90IpDvCO5o5RvqlE1DOrYn4MruFpe03V15HjGJPAJnToqPDuebhjcNVF1DJpJeDVGCoDv9Re7VmqPTDaAo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; lists.linux-foundation.org; dkim=none (message not
 signed) header.d=none;lists.linux-foundation.org; dmarc=bestguesspass
 action=none header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Date: Fri, 2 Oct 2020 13:21:25 -0700 (PDT)
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@lst.de>
cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org
Subject: Re: xen-swiotlb vs phys_to_dma
In-Reply-To: <20201002123436.GA30329@lst.de>
Message-ID: <alpine.DEB.2.21.2010021313010.10908@sstabellini-ThinkPad-T480s>
References: <20201002123436.GA30329@lst.de>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d7ce0efb-933a-43af-1ebe-08d86710c043
X-MS-TrafficTypeDiagnostic: MN2PR02MB6704:
X-Microsoft-Antispam-PRVS:
	<MN2PR02MB6704143E24A1D6CCD5D1B9C2A0310@MN2PR02MB6704.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	r1zBRk3209SYXRmyDcyzGa4vWfwiztIJnO2AAvN03JioxKopFBHwp8SCr8Ac6UNygK5a2OPAbIDrvB5fteZUMQ+XCajwu4bptHTEZ57XAy4s2efXyHDLNv44B3llFLCvVrYtrc6N+jJbAXdudO7gRmCfvM1ndcKSj/PDp3JXYYhRSiOTTDo2jpNVxfDTzen8G+Lzf27Y3dx9dY1vHgg91rlH3s0IJ7O1PkSRL7PVdvhiKCHJ/C6ndIwCIihtZveH6RDQf2/uvu3LtsF/vtUoYlxe4Cb1akJSRnavhvJAFyso9FbJNBhbZSduLRoD+aAq2hb4DE8aYvg/ZOjzOnTYQCO82DYVRZowNvxlo8BLa0OOn07Zt2sApvrm6w+3eI0lsCUuH/exGMQCZqvagU07TbBDLr31aM/JIV6CUUpzZmEQTzvZGFX1ELkiNfGCi4Jc2+zEJfYEaRcuFEqGSXGf0ZyQ6xJhVef5Y/P/vvGFv5pGBG/Y61xZ3mPd1mFRcNvM
X-Forefront-Antispam-Report:
	CIP:149.199.60.83;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapsmtpgw01;PTR:unknown-60-83.xilinx.com;CAT:NONE;SFS:(7916004)(396003)(39860400002)(376002)(136003)(346002)(46966005)(2906002)(8676002)(9786002)(70586007)(70206006)(82310400003)(6916009)(47076004)(4326008)(316002)(8936002)(9686003)(54906003)(426003)(26005)(44832011)(83380400001)(336012)(966005)(33716001)(83080400001)(186003)(81166007)(356005)(82740400003)(5660300002)(478600001)(6606295002);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 02 Oct 2020 20:21:31.5637
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d7ce0efb-933a-43af-1ebe-08d86710c043
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.60.83];Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-AuthSource:
	SN1NAM02FT048.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR02MB6704

On Fri, 2 Oct 2020, Christoph Hellwig wrote:
> Hi Stefano,
> 
> I've looked over xen-swiotlb in linux-next, that is with your recent
> changes to take dma offsets into account.  One thing that puzzles me
> is that xen_swiotlb_map_page passes virt_to_phys(xen_io_tlb_start) as
> the tbl_dma_addr argument to swiotlb_tbl_map_single, despite the fact
> that the argument is a dma_addr_t and both other callers translate
> from a physical to the dma address.  Was this an oversight?

Hi Christoph,

It was not an oversight; it was done on purpose, although I may have
been wrong. There was a brief discussion on this topic here:

https://marc.info/?l=linux-kernel&m=159011972107683&w=2
https://marc.info/?l=linux-kernel&m=159018047129198&w=2

I'll repeat and summarize here for convenience. 

swiotlb_init_with_tbl is called by xen_swiotlb_init, passing a virtual
address (xen_io_tlb_start), which is converted to a physical address and
stored in io_tlb_start at the beginning of swiotlb_init_with_tbl.

Afterwards, xen_swiotlb_map_page calls swiotlb_tbl_map_single. The
second parameter, dma_addr_t tbl_dma_addr, is used to calculate the
right slot in the swiotlb buffer to use, comparing it against
io_tlb_start.

Thus, I think it makes sense for xen_swiotlb_map_page to call
swiotlb_tbl_map_single passing an address meant to be compared with
io_tlb_start, which is __pa(xen_io_tlb_start), so
virt_to_phys(xen_io_tlb_start) seems to be what we want.

However, you are right that it is strange that tbl_dma_addr is a
dma_addr_t, and maybe it shouldn't be? Maybe the tbl_dma_addr parameter
to swiotlb_tbl_map_single should be a phys address instead?
Or it could be that swiotlb_init_with_tbl is wrong and should take a
dma address to initialize the swiotlb buffer.


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 20:47:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 20:47:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2337.6934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kORxZ-000297-Hh; Fri, 02 Oct 2020 20:47:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2337.6934; Fri, 02 Oct 2020 20:47:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kORxZ-000290-EZ; Fri, 02 Oct 2020 20:47:21 +0000
Received: by outflank-mailman (input) for mailman id 2337;
 Fri, 02 Oct 2020 20:47:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uixu=DJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kORxY-00028v-H2
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 20:47:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4032b295-2c3f-44bd-8209-69b6dbf0e164;
 Fri, 02 Oct 2020 20:47:19 +0000 (UTC)
Received: from localhost.localdomain (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 15CE22065D;
 Fri,  2 Oct 2020 20:47:19 +0000 (UTC)
X-Inumbo-ID: 4032b295-2c3f-44bd-8209-69b6dbf0e164
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601671639;
	bh=rl0lETfRrlGNYieqFPCA3nuFxZ3gGf2in63JNwjOHHs=;
	h=From:To:Cc:Subject:Date:From;
	b=JfK2fsdKKBsNiCygi0gs1TzCiS0zm/EFyg+xVvtY51fzr+gmSEACaKuFdH4tMgk7w
	 /r1fDAkVl5fQPFktMolG86TvtNtK1Iq+KQ/F9dj3FmAxOXw3TiNVITJv16P3lqaVfV
	 DVIonzMmGeD3XFXbO2hQBXFnV+mM/ossv4NZbk1E=
From: Stefano Stabellini <sstabellini@kernel.org>
To: julien@xen.org
Cc: sstabellini@kernel.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	roman@zededa.com
Subject: [PATCH v2] xen/rpi4: implement watchdog-based reset
Date: Fri,  2 Oct 2020 13:47:17 -0700
Message-Id: <20201002204717.14735-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

The preferred method to reboot RPi4 is PSCI. If it is not available,
touching the watchdog is required to be able to reboot the board.

The implementation is based on
drivers/watchdog/bcm2835_wdt.c:__bcm2835_restart in Linux v5.9-rc7.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
CC: roman@zededa.com
---
Changes in v2:
- improve commit message
- add Linux baseline to the commit message to be able to do comparisons
- order #includes alphabetically
- fix alignment of #defines values
- printk instead of dprintk for the "Cannot read watchdog register address" message
- uint32_t instead of u32
- newline after void __iomem *base = rpi4_map_watchdog();
---
 xen/arch/arm/platforms/brcm-raspberry-pi.c | 61 ++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
index f5ae58a7d5..1a51732da5 100644
--- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
+++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
@@ -17,6 +17,10 @@
  * GNU General Public License for more details.
  */
 
+#include <xen/delay.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/platform.h>
 
 static const char *const rpi4_dt_compat[] __initconst =
@@ -37,12 +41,69 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
      * The aux peripheral also shares a page with the aux UART.
      */
     DT_MATCH_COMPATIBLE("brcm,bcm2835-aux"),
+    /* Special device used for rebooting */
+    DT_MATCH_COMPATIBLE("brcm,bcm2835-pm"),
     { /* sentinel */ },
 };
 
+
+#define PM_PASSWORD                 0x5a000000
+#define PM_RSTC                     0x1c
+#define PM_WDOG                     0x24
+#define PM_RSTC_WRCFG_FULL_RESET    0x00000020
+#define PM_RSTC_WRCFG_CLR           0xffffffcf
+
+static void __iomem *rpi4_map_watchdog(void)
+{
+    void __iomem *base;
+    struct dt_device_node *node;
+    paddr_t start, len;
+    int ret;
+
+    node = dt_find_compatible_node(NULL, NULL, "brcm,bcm2835-pm");
+    if ( !node )
+        return NULL;
+
+    ret = dt_device_get_address(node, 0, &start, &len);
+    if ( ret )
+    {
+        printk("Cannot read watchdog register address\n");
+        return NULL;
+    }
+
+    base = ioremap_nocache(start & PAGE_MASK, PAGE_SIZE);
+    if ( !base )
+    {
+        dprintk(XENLOG_ERR, "Unable to map watchdog register!\n");
+        return NULL;
+    }
+
+    return base;
+}
+
+static void rpi4_reset(void)
+{
+    uint32_t val;
+    void __iomem *base = rpi4_map_watchdog();
+
+    if ( !base )
+        return;
+
+    /* use a timeout of 10 ticks (~150us) */
+    writel(10 | PM_PASSWORD, base + PM_WDOG);
+    val = readl(base + PM_RSTC);
+    val &= PM_RSTC_WRCFG_CLR;
+    val |= PM_PASSWORD | PM_RSTC_WRCFG_FULL_RESET;
+    writel(val, base + PM_RSTC);
+
+    /* No sleeping, possibly atomic. */
+    mdelay(1);
+}
+
 PLATFORM_START(rpi4, "Raspberry Pi 4")
     .compatible     = rpi4_dt_compat,
     .blacklist_dev  = rpi4_blacklist_dev,
+    .reset = rpi4_reset,
     .dma_bitsize    = 30,
 PLATFORM_END
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 21:20:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 21:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2346.6946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOSTe-0005VX-8v; Fri, 02 Oct 2020 21:20:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2346.6946; Fri, 02 Oct 2020 21:20:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOSTe-0005VQ-5X; Fri, 02 Oct 2020 21:20:30 +0000
Received: by outflank-mailman (input) for mailman id 2346;
 Fri, 02 Oct 2020 21:20:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOSTd-0005VL-3t
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 21:20:29 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f131e32-399d-4791-838b-c7ea291a6ecf;
 Fri, 02 Oct 2020 21:20:26 +0000 (UTC)
X-Inumbo-ID: 6f131e32-399d-4791-838b-c7ea291a6ecf
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601673627;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=81XjHYrAMvgypDyL5q0TaY41Xh6Ind6YN24lhcqH4Bw=;
  b=K2F5lFgsAl5jOMs4cqQLxictIeK9ZEnRR3Hd4aRbBbvAkQ/xa9MHUHSJ
   SMj84nCFJ5e6X4ZZjrtZVrWZ+221IUjD4SKDhQYmmzk2bPmgzPTeZNisb
   IdG66AFLNjUzIROGugLSBKhf0+DHtX+A7uAw5NUPwke6PfBqS2aYF6BTG
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CIJKWBY5sMVNEPxYNoEd7mxy2meVTX3cwE0Y1ZQBjItqGSUNANK+q9utXgy1Td7NKuVrkSB3ey
 bKmrHp2L6zR64uBMWIfjml5ygr6/HmPDWj5ZAdfRcRGViS2SNHP7vsBd/22HWE0Hp8b92YV9oL
 h7T3BrvG4rOhfLVUAjcUcrA7VKqsWVXwV1lDC312goNGrOeq6FxnupoSAVSw98PjVfLYsIA7cy
 9l6lpfFlQvCng5hYGdQstmMpoYsYHqIMJ6jt5DNcUNXh8IyCNS6DF1G/Wuw1CPyNtz4ADzqDJ3
 muc=
X-SBRS: None
X-MesageID: 28196970
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,329,1596513600"; 
   d="scan'208";a="28196970"
Subject: Re: [PATCH v9 1/8] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Julien Grall <julien@xen.org>, "Jan
 Beulich" <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <ian.jackson@eu.citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-2-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2e51a5cb-df0c-d564-2a7b-5f2abbb5872c@citrix.com>
Date: Fri, 2 Oct 2020 22:20:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200924131030.1876-2-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 24/09/2020 14:10, Paul Durrant wrote:
> diff --git a/xen/common/save.c b/xen/common/save.c
> new file mode 100644
> index 0000000000..841c4d0e4e
> --- /dev/null
> +++ b/xen/common/save.c
> @@ -0,0 +1,315 @@
> +/*
> + * save.c: Save and restore PV guest state common to all domain types.

This description will be stale by the time your work is complete.

> +int domain_save_data(struct domain_context *c, const void *src, size_t len)
> +{
> +    int rc = c->ops.save->append(c->priv, src, len);
> +
> +    if ( !rc )
> +        c->len += len;
> +
> +    return rc;
> +}
> +
> +#define DOMAIN_SAVE_ALIGN 8

This is part of the stream ABI.

> +
> +int domain_save_end(struct domain_context *c)
> +{
> +    struct domain *d = c->domain;
> +    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */

DOMAIN_SAVE_ALIGN - (c->len & (DOMAIN_SAVE_ALIGN - 1))

isn't vulnerable to overflow.

> +    int rc;
> +
> +    if ( len )
> +    {
> +        static const uint8_t pad[DOMAIN_SAVE_ALIGN] = {};
> +
> +        rc = domain_save_data(c, pad, len);
> +
> +        if ( rc )
> +            return rc;
> +    }
> +    ASSERT(IS_ALIGNED(c->len, DOMAIN_SAVE_ALIGN));
> +
> +    if ( c->name )
> +        gdprintk(XENLOG_INFO, "%pd save: %s[%u] +%zu (-%zu)\n", d, c->name,
> +                 c->desc.instance, c->len, len);

IMO, this is unhelpful to print out.  It also appears to be the only use
of the c->name field.

It also creates obscure and hard to follow logic based on dry_run.

> diff --git a/xen/include/public/save.h b/xen/include/public/save.h
> new file mode 100644
> index 0000000000..551dbbddb8
> --- /dev/null
> +++ b/xen/include/public/save.h
> @@ -0,0 +1,89 @@
> +/*
> + * save.h
> + *
> + * Structure definitions for common PV/HVM domain state that is held by
> + * Xen and must be saved along with the domain's memory.
> + *
> + * Copyright Amazon.com Inc. or its affiliates.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef XEN_PUBLIC_SAVE_H
> +#define XEN_PUBLIC_SAVE_H
> +
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +#include "xen.h"
> +
> +/* Entry data is preceded by a descriptor */
> +struct domain_save_descriptor {
> +    uint16_t typecode;
> +
> +    /*
> +     * Instance number of the entry (since there may be multiple of some
> +     * types of entries).
> +     */
> +    uint16_t instance;
> +
> +    /* Entry length not including this descriptor */
> +    uint32_t length;
> +};
> +
> +/*
> + * Each entry has a type associated with it. DECLARE_DOMAIN_SAVE_TYPE
> + * binds these things together, although it is not intended that the
> + * resulting type is ever instantiated.
> + */
> +#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
> +    struct DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };
> +
> +#define DOMAIN_SAVE_CODE(_x) \
> +    (sizeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->c))
> +#define DOMAIN_SAVE_TYPE(_x) \
> +    typeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->t)

I realise this is going to make me very unpopular, but NACK.

This is straight up obfuscation with no redeeming properties.  I know
you've copied it from the existing HVMCONTEXT infrastructure, but it is
obnoxious to use there (particularly in the domain builder) and not an
example worth copying.

Furthermore, the code will be simpler and easier to follow without it.

Secondly, and more importantly, I do not see anything in docs/specs/
describing the binary format of this stream,  and I'm going to insist
that one appears, ahead of this patch in the series.

In doing so, you're hopefully going to discover the bug with the older
HVMCONTEXT stream which makes the version field fairly pointless (more
below).

It should describe how to forward compatibly extend the stream, and
under what circumstances the version number can/should change.  It also
needs to describe the alignment and extending rules which ...

> +
> +/*
> + * All entries will be zero-padded to the next 64-bit boundary when saved,
> + * so there is no need to include trailing pad fields in structure
> + * definitions.
> + * When loading, entries will be zero-extended if the load handler reads
> + * beyond the length specified in the descriptor.
> + */

... shouldn't be this.

The current zero extending property was an emergency hack to fix an ABI
breakage which had gone unnoticed for a couple of releases.  The work to
implement it created several very hard to debug breakages in Xen.

A properly designed stream shouldn't need auto-extending behaviour, and
the legibility of the code is improved by not having it.

It is a trick which can stay up your sleeve for an emergency, in the
hope you'll never have to use it.

> +
> +/* Terminating entry */
> +struct domain_save_end {};
> +DECLARE_DOMAIN_SAVE_TYPE(END, 0, struct domain_save_end);
> +
> +#define DOMAIN_SAVE_MAGIC   0x53415645
> +#define DOMAIN_SAVE_VERSION 0x00000001
> +
> +/* Initial entry */
> +struct domain_save_header {
> +    uint32_t magic;                /* Must be DOMAIN_SAVE_MAGIC */
> +    uint16_t xen_major, xen_minor; /* Xen version */
> +    uint32_t version;              /* Save format version */
> +};
> +DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);

The layout problem with the stream is the fact that this header doesn't
come first.

In the eventual future where uint16_t won't be sufficient for the
instance field, and uint32_t might not be sufficient for len, the
version number is going to have to be bumped in order to change the
descriptor layout.


Overall, this patch needs to be split into a minimum of two.  First, a
written document which is the authoritative stream ABI, and second,
this implementation of it.  The header describing the stream format
should not be substantively different from xg_sr_stream_format.h

~Andrew

P.S. Another good reason for having extremely simple header files is for
the poor soul trying to write a Go/Rust/other binding for this in some
likely not-too-distant future.


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 21:37:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 21:37:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2350.6962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOSjv-0006ZF-PZ; Fri, 02 Oct 2020 21:37:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2350.6962; Fri, 02 Oct 2020 21:37:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOSjv-0006Z8-Ly; Fri, 02 Oct 2020 21:37:19 +0000
Received: by outflank-mailman (input) for mailman id 2350;
 Fri, 02 Oct 2020 21:37:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOSjt-0006Z0-EF
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 21:37:17 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7a44ed0-1738-491c-9f8e-d7a7dc6e6b11;
 Fri, 02 Oct 2020 21:37:16 +0000 (UTC)
X-Inumbo-ID: e7a44ed0-1738-491c-9f8e-d7a7dc6e6b11
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601674635;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=9XdDMMC3s8tS4ojhpvzgEXNjfjbFKoUyqBzWSCEDShc=;
  b=Tf0pz0yMjRWUO8AR2Z8oLJ1xclkWr3yWQyTGYLhyi7J37Tdu8G1YK8lY
   3H2qNSQ3xcvHIlyvvyc7vZC/FBwmcFVQ0Fiq9dy2Xkv5Onj6WMa7YIU2m
   kbX/7+kiJCIfR/mvH92AArqbQeDQxZmW//OTYScA5d3vTl7qjfbTpGZjb
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: LOKjwRwi/M3m+tachbBjD7pQS5M4HSs1Ov60tAq/bg3etPxr0DT4FvaCy/bxtmMPysh4pxltGS
 dVmmbnCUIbjNUAEbVYumSPTs7u2FhtZjVy486VgtyuYLJjjbYHykqcZFpBbSR7a3cd9ZOF/36l
 uZiGr1TS1p804Snkq0VlAOleswvY/oXZ2V4DZYeArv1va0GXe8YQt3MEDCTTtYK/ob/fFIm4wd
 vrXNGNp6WaiDG2GKuJIk5tlZ0ZiNneJL/qZOAK/hJkTdMOLTRwZDQpzRz/mlc22yAkIEhy6AC2
 taw=
X-SBRS: None
X-MesageID: 28463514
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,329,1596513600"; 
   d="scan'208";a="28463514"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
	<marmarek@invisiblethingslab.com>
Subject: [PATCH] x86/S3: Restore CR4 earlier during resume
Date: Fri, 2 Oct 2020 22:36:50 +0100
Message-ID: <20201002213650.2197-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

c/s 4304ff420e5 "x86/S3: Drop {save,restore}_rest_processor_state()
completely" moved CR4 restoration up into C, to account for the fact that MCE
was explicitly handled later.

However, time_resume() ends up making an EFI Runtime Service call, and EFI
explodes without OSFXSR, presumably when trying to spill %xmm registers onto
the stack.

Given this codepath, and the potential for other issues of a similar kind (TLB
flushing vs INVPCID, HVM logic vs VMXE, etc), restore CR4 in asm before
entering C.

Ignore the previous MCE special case, because it's not actually necessary.  The
handler is already suitably configured from before suspend.

Fixes: 4304ff420e5 ("x86/S3: Drop {save,restore}_rest_processor_state() completely")
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

This is one definite bug fix.  It doesn't appear to be the only S3 bug
however.
---
 xen/arch/x86/acpi/power.c       | 3 ---
 xen/arch/x86/acpi/wakeup_prot.S | 5 +++++
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
index 4fb1e7a148..7f162a4df9 100644
--- a/xen/arch/x86/acpi/power.c
+++ b/xen/arch/x86/acpi/power.c
@@ -276,9 +276,6 @@ static int enter_state(u32 state)
 
     mcheck_init(&boot_cpu_data, false);
 
-    /* Restore CR4 from cached value, now MCE is set up. */
-    write_cr4(read_cr4());
-
     printk(XENLOG_INFO "Finishing wakeup from ACPI S%d state.\n", state);
 
     if ( (state == ACPI_STATE_S3) && error )
diff --git a/xen/arch/x86/acpi/wakeup_prot.S b/xen/arch/x86/acpi/wakeup_prot.S
index c6b3fcc93d..1ee5551fb5 100644
--- a/xen/arch/x86/acpi/wakeup_prot.S
+++ b/xen/arch/x86/acpi/wakeup_prot.S
@@ -110,6 +110,11 @@ ENTRY(s3_resume)
 
         call    load_system_tables
 
+        /* Restore CR4 from the cpuinfo block. */
+        GET_STACK_END(bx)
+        mov     STACK_CPUINFO_FIELD(cr4)(%rbx), %rax
+        mov     %rax, %cr4
+
 .Lsuspend_err:
         pop     %r15
         pop     %r14
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 02 21:57:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 21:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2355.6975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOT2u-0008Ko-Ew; Fri, 02 Oct 2020 21:56:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2355.6975; Fri, 02 Oct 2020 21:56:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOT2u-0008Kh-By; Fri, 02 Oct 2020 21:56:56 +0000
Received: by outflank-mailman (input) for mailman id 2355;
 Fri, 02 Oct 2020 21:56:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOT2t-0008KD-1U
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 21:56:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2fbe4fd4-6f02-47b3-bb19-b2402fcb7a31;
 Fri, 02 Oct 2020 21:56:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOT2m-0002bh-5J; Fri, 02 Oct 2020 21:56:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOT2l-00020N-UX; Fri, 02 Oct 2020 21:56:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOT2l-0002C6-Tw; Fri, 02 Oct 2020 21:56:47 +0000
X-Inumbo-ID: 2fbe4fd4-6f02-47b3-bb19-b2402fcb7a31
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vofPTJILwT1ubxLz56+m4WquMlbuXa7ulEIYCagB2Jo=; b=NmwLrbxjDPHNkrgLc6OjvZ+5k/
	BywzUTaHyK7XsWaHAXBdaubSQj4Gc45mzW7p6KNHCkPMcVVIZoOzbJfCMjfuPU0AqFHi9QgF6ELhJ
	SAJX4xA4TkKlOhgp6yEtzBuHEIKRJGhi3AiUwD7Za+5j7GLg9ZL9Hx51jqJ0IM/AK9io=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155349-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155349: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8ef6345ef557cc2c47298217635a3088eaa59893
X-Osstest-Versions-That:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 21:56:47 +0000

flight 155349 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155349/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8ef6345ef557cc2c47298217635a3088eaa59893
baseline version:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f

Last test of basis   155128  2020-09-30 08:01:25 Z    2 days
Failing since        155144  2020-09-30 16:01:24 Z    2 days   16 attempts
Testing same since   155349  2020-10-02 18:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Laurentiu Tudor <laurentiu.tudor@nxp.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c73952831f..8ef6345ef5  8ef6345ef557cc2c47298217635a3088eaa59893 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 21:58:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 21:58:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2356.6987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOT4V-0008RW-Rd; Fri, 02 Oct 2020 21:58:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2356.6987; Fri, 02 Oct 2020 21:58:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOT4V-0008RP-Oh; Fri, 02 Oct 2020 21:58:35 +0000
Received: by outflank-mailman (input) for mailman id 2356;
 Fri, 02 Oct 2020 21:58:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOT4U-0008RK-65
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 21:58:34 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3b16c59-027b-467d-9ca4-6adc4e2e7fb0;
 Fri, 02 Oct 2020 21:58:32 +0000 (UTC)
X-Inumbo-ID: f3b16c59-027b-467d-9ca4-6adc4e2e7fb0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601675912;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=wB/CsIWii9C2rFRNqdshs8WDQFZ6Xc37HtFtY2Bt9Ms=;
  b=DplurLX/DcY9G9d4KdJVvS0tG/Gt9piLttiAD7YdmhsXy1UpEvMbjjXE
   hgdW9HTXxAgjVG7pvDn99TShC0Lz0tVtymG2P5RodaemLob1/izTKQoG0
   WU/1y9Xm+EjDhmJh9KSpmoCRQ93j5o8lYceN+fxM+aCKSbtrRRIwgJreN
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: SgbGO3XiL+6vwMkeJmlEmTxwcklwqJXaav01FpSTnt8nCBmopnf+1l3C6B4QZShsh66K8sGMEo
 hzo42CZjziiJT7GA0apeEG6bT6yTu7rys1TJt62eEmHXvn8EfIux9fXWVR5aT50Sd1lpzxUjjh
 6Zkz50E9Plxn0naF+flUOHNlLVMtfNYmpImN+VgwhZ77cHJGsPK2dpL5QjAEVhre+qXnG7A778
 ajFFgyQl1mkIuVr1JF53dMFlliT9vPxqKwWeoGHUVzsv0JBuB3vArDnmdJ4Ma1KSQgjcapZjKw
 Pnw=
X-SBRS: None
X-MesageID: 29215224
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,329,1596513600"; 
   d="scan'208";a="29215224"
Subject: Re: [PATCH v9 2/8] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Julien Grall <julien@xen.org>, "Daniel
 De Graaf" <dgdegra@tycho.nsa.gov>, Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich
	<jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-3-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <783f8b1b-f11f-d8ff-3643-d35f17c6c363@citrix.com>
Date: Fri, 2 Oct 2020 22:58:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200924131030.1876-3-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 24/09/2020 14:10, Paul Durrant wrote:
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 791f0a2592..743105181f 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1130,6 +1130,43 @@ struct xen_domctl_vuart_op {
>                                   */
>  };
>  
> +/*
> + * XEN_DOMCTL_getdomaincontext
> + * ---------------------------
> + *
> + * buffer (IN):   The buffer into which the context data should be
> + *                copied, or NULL to query the buffer size that should
> + *                be allocated.
> + * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
> + *                zero, and the value passed out will be the size of the
> + *                buffer to allocate.
> + *                If 'buffer' is non-NULL then the value passed in must
> + *                be the size of the buffer into which data may be copied.
> + *                The value passed out will be the size of data written.
> + */
> +struct xen_domctl_getdomaincontext {
> +    uint32_t size;

This series is full of mismatched 32/64bit sizes, with several
truncation bugs in the previous patch.

Just use a 64bit size here.  Life is too short to go searching for all
the other truncation bugs when this stream tips over 4G, and it's not
like there is a shortage of space in this structure.

> +    uint32_t pad;
> +    XEN_GUEST_HANDLE_64(void) buffer;
> +};
> +
> +/* XEN_DOMCTL_setdomaincontext
> + * ---------------------------
> + *
> + * buffer (IN):   The buffer from which the context data should be
> + *                copied.
> + * size (IN):     The size of the buffer from which data may be copied.
> + *                This data must include DOMAIN_SAVE_CODE_HEADER at the
> + *                start and terminate with a DOMAIN_SAVE_CODE_END record.
> + *                Any data beyond the DOMAIN_SAVE_CODE_END record will be
> + *                ignored.
> + */
> +struct xen_domctl_setdomaincontext {
> +    uint32_t size;
> +    uint32_t pad;
> +    XEN_GUEST_HANDLE_64(const_void) buffer;
> +};
> +
>  struct xen_domctl {
>      uint32_t cmd;
>  #define XEN_DOMCTL_createdomain                   1
> @@ -1214,6 +1251,8 @@ struct xen_domctl {
>  #define XEN_DOMCTL_vuart_op                      81
>  #define XEN_DOMCTL_get_cpu_policy                82
>  #define XEN_DOMCTL_set_cpu_policy                83
> +#define XEN_DOMCTL_getdomaincontext              84
> +#define XEN_DOMCTL_setdomaincontext              85

So, we've currently got:

#define XEN_DOMCTL_setvcpucontext                12
#define XEN_DOMCTL_getvcpucontext                13
#define XEN_DOMCTL_gethvmcontext                 33
#define XEN_DOMCTL_sethvmcontext                 34
#define XEN_DOMCTL_set_ext_vcpucontext           42
#define XEN_DOMCTL_get_ext_vcpucontext           43
#define XEN_DOMCTL_gethvmcontext_partial         55
#define XEN_DOMCTL_setvcpuextstate               62
#define XEN_DOMCTL_getvcpuextstate               63

which are doing alarmingly related things for vcpus.  (As an amusing
exercise for the reader, figure out which are PV-specific and which are
HVM-specific.  Hint: they're not disjoint sets.)


I know breaking with tradition is sacrilege, but at the very minimum,
can we get some underscores into that name, so you can at least read the
words which make it up more easily?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 22:01:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 22:01:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2359.6999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOT6q-0000u1-9O; Fri, 02 Oct 2020 22:01:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2359.6999; Fri, 02 Oct 2020 22:01:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOT6q-0000tu-6S; Fri, 02 Oct 2020 22:01:00 +0000
Received: by outflank-mailman (input) for mailman id 2359;
 Fri, 02 Oct 2020 22:00:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOT6o-0000tp-Tc
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 22:00:58 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1aa0ce90-fba4-4a3e-8869-0c2c5e87e189;
 Fri, 02 Oct 2020 22:00:56 +0000 (UTC)
X-Inumbo-ID: 1aa0ce90-fba4-4a3e-8869-0c2c5e87e189
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601676058;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Evz1SwkUqBuf45+wTDMpz48b1s9zX1y4O8zULGTKSOg=;
  b=C4ObpdQqE7tDAzxFC3Mul03PN3vjw9fadSGC31ErHWoBZZHmTJBMrQUB
   T8pOF9K2IPzOuU1Lz9IATL9cWax7tNa9ddtIjOOPJtvXk0lXv8lmzkmpU
   kUxEEW30mEBmisN3yXQqw6/KaukZ765YDHkdWOLS5lGfqrh894CHYbN/Z
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vOzgG2z4IciMTjfhhAm+83aZd8pYKg3psAOcPKsdMlw4nVxg5BLyexL2/+W+Q8SBi9ppFRahzX
 du7T24foJNZy1zhOz4wKqUNL0h79fVN8PACtFg46f+ByG03v5nY94t3ar4rR9B2QvQfdldXp1X
 mmcneeGyu1kRaTRCOmgzuTanr2qyO1uekHz3ZxAq0h8M76DPaDCzuRL+22aP3Q2AZWOVTsQMjX
 lfuJTXw/aYH/gX7ryI+xoM3f1m5BSr4qjuRhPu0wlXwadW4W4dQOb9/c0f54bhIPRgVdIuUBS2
 yFY=
X-SBRS: None
X-MesageID: 28172768
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,329,1596513600"; 
   d="scan'208";a="28172768"
Subject: Re: [PATCH v9 1/8] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Julien Grall <julien@xen.org>, "Jan
 Beulich" <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <ian.jackson@eu.citrix.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-2-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6e0caedd-ab81-bf91-a108-94b458961d72@citrix.com>
Date: Fri, 2 Oct 2020 23:00:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200924131030.1876-2-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 24/09/2020 14:10, Paul Durrant wrote:
> +/*
> + * The 'dry_run' flag indicates that the caller of domain_save() (see below)
> + * is not trying to actually acquire the data, only the size of the data.
> + * The save handler can therefore limit work to only that which is necessary
> + * to call domain_save_data() the correct number of times with accurate values
> + * for 'len'.
> + */
> +typedef int (*domain_save_handler)(const struct domain *d,
> +                                   struct domain_context *c,
> +                                   bool dry_run);

Sorry - missed this the first time around.  This cannot take a const domain.

Doing so prevents putting (amongst other things) event channel details
into the stream, because you won't be able to take the domain's event
lock, and having the domain paused isn't good enough protection.

Removing this const will reduce the churn in subsequent patches somewhat.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 22:32:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 22:32:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2363.7012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOTbB-0003Vn-VF; Fri, 02 Oct 2020 22:32:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2363.7012; Fri, 02 Oct 2020 22:32:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOTbB-0003Vg-QH; Fri, 02 Oct 2020 22:32:21 +0000
Received: by outflank-mailman (input) for mailman id 2363;
 Fri, 02 Oct 2020 22:32:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOTbA-0003Vb-Rk
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 22:32:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1902266-7472-4262-abd2-4c5402e36ffe;
 Fri, 02 Oct 2020 22:32:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOTb9-0003Nz-4R; Fri, 02 Oct 2020 22:32:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOTb8-000543-Tp; Fri, 02 Oct 2020 22:32:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOTb8-0000ab-TJ; Fri, 02 Oct 2020 22:32:18 +0000
X-Inumbo-ID: f1902266-7472-4262-abd2-4c5402e36ffe
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m9W0ZiwOauoI0jGKbDALJn5xIhFJ2m4CAVmbqd+/fMk=; b=UIO9pIZE6LhWyypBX7u+qLVYHn
	ElBkcONs6VYZMobNXbDKgBXJtqWUEsUxc+UYUT4UdjJcIq2Qk4tA+CqBfoNXumGYZ/AdCB3TzQLDo
	XLewomjvNBlOWPtX+KOnUso3myaOur1iGZ23URiit9luKwXj3v2fpxGZyVt3Au6fYKQU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155223-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155223: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=2d8ca4f90eaeb61bd7e9903b56bf412f0d187137
X-Osstest-Versions-That:
    ovmf=d8ab884fe9b4dd148980bf0d8673187f8fb25887
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 22:32:18 +0000

flight 155223 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155223/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137
baseline version:
 ovmf                 d8ab884fe9b4dd148980bf0d8673187f8fb25887

Last test of basis   155121  2020-09-30 03:52:34 Z    2 days
Testing same since   155223  2020-10-01 11:40:36 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Vladimir Olovyannikov <vladimir.olovyannikov@broadcom.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d8ab884fe9..2d8ca4f90e  2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 22:40:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 22:40:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2366.7026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOTin-0004Qv-Nh; Fri, 02 Oct 2020 22:40:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2366.7026; Fri, 02 Oct 2020 22:40:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOTin-0004Qo-KS; Fri, 02 Oct 2020 22:40:13 +0000
Received: by outflank-mailman (input) for mailman id 2366;
 Fri, 02 Oct 2020 22:40:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOTim-0004Qj-MO
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 22:40:12 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c4b2cbae-1b05-4553-8751-bf35517b65e7;
 Fri, 02 Oct 2020 22:40:11 +0000 (UTC)
X-Inumbo-ID: c4b2cbae-1b05-4553-8751-bf35517b65e7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601678411;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=ycMtR+kFyjIg1ssaKlwxGf0OECEDVKoeSG01xTSkcz8=;
  b=dtuuGyXKG6NLeigLdjgCJTi8g0wCo/DaLAUPcfgsvteztmjraew13TKl
   nSoAUJViBDTZ4uoNaCs5L3lEbIAvi6IhU6YdklumV3tbMrjkyrP4duIMX
   1b67nNRmTVrXx3pPw26P2f/9g6H3Tk6QWHPOooPjZskV6T8LhTK+MZXxV
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: SYRLOTH05nF4h2mk9XtLth5dBJ0yXQ/DfHwVjKKYdLxCV8kO8OV0/HlE9SpjXizCBaFRqTzx7f
 AFI86pdm2JkN1Lt3uINTPEAh4ig3TEHjVVnnTk9/nHl3Iz7d0SwYnnH4WMpx1t+MD1NboqhXwI
 Db4wf7WlmP1C40h2ibuXKcZxCykoSrbbaZZE8cGWB4Yqm3W/8G+t5eQNP3slNRoUb7YCLHx2i9
 +b1ulmvTSvS7TKG7yDKY2qeWyDH0letlDawVLnaiydyd6ZGuA1rAXOx6QcS0f22FctWVnEJa7J
 6i4=
X-SBRS: None
X-MesageID: 29217147
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,329,1596513600"; 
   d="scan'208";a="29217147"
Subject: Re: [PATCH v9 3/8] tools/misc: add xen-domctx to present domain
 context
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-4-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <241eda02-e7ab-a44c-8f1c-38eb85c2f8dc@citrix.com>
Date: Fri, 2 Oct 2020 23:39:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200924131030.1876-4-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 24/09/2020 14:10, Paul Durrant wrote:
> This tool is analogous to 'xen-hvmctx' which presents HVM context.
> Subsequent patches will add 'dump' functions when new records are
> introduced.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Wei Liu <wl@xen.org>
>
> NOTE: Ian requested ack from Andrew

I mean - it's a fairly throwaway piece of misc userspace, so ack.

However, it is largely superseded by the changes you need to make to
verify-stream-v2, so you might want to bear that in mind.

Also, I wonder if it is wise in general that we're throwing so many misc
debugging tools into sbin.

> +#include <inttypes.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <errno.h>
> +
> +#include <xenctrl.h>
> +#include <xen/xen.h>
> +#include <xen/domctl.h>
> +#include <xen/save.h>
> +
> +static void *buf = NULL;
> +static size_t len, off;
> +
> +#define GET_PTR(_x)                                                        \
> +    do {                                                                   \
> +        if ( len - off < sizeof(*(_x)) )                                   \
> +        {                                                                  \
> +            fprintf(stderr,                                                \
> +                    "error: need another %lu bytes, only %lu available\n", \

%zu is the correct format specifier for size_t.

> +                    sizeof(*(_x)), len - off);                             \
> +            exit(1);                                                       \

Your error handling will be far simpler using err() instead of
open-coding it.
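
For illustration, a minimal sketch of both suggestions (the helper
names here are hypothetical, not from the patch; errx() is used rather
than err() since no errno value is involved in this check):

```c
#include <err.h>      /* errx(3): print message to stderr and exit */
#include <stdio.h>
#include <stdlib.h>

static size_t len, off;

/* Open-coded error path, with the %lu specifiers corrected to %zu,
 * the standard length modifier for size_t. */
static void check_open_coded(size_t need)
{
    if ( len - off < need )
    {
        fprintf(stderr,
                "error: need another %zu bytes, only %zu available\n",
                need, len - off);
        exit(1);
    }
}

/* The same check via errx(3): prints the program name and the
 * formatted message, then exits with the given status. */
static void check_with_err(size_t need)
{
    if ( len - off < need )
        errx(1, "need another %zu bytes, only %zu available",
             need, len - off);
}
```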

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 22:42:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 22:42:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2369.7038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOTkn-0004Z3-69; Fri, 02 Oct 2020 22:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2369.7038; Fri, 02 Oct 2020 22:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOTkn-0004Yw-2t; Fri, 02 Oct 2020 22:42:17 +0000
Received: by outflank-mailman (input) for mailman id 2369;
 Fri, 02 Oct 2020 22:42:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iJBK=DJ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kOTkm-0004Yj-8n
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 22:42:16 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a2c0ea9-08ed-4947-ad80-98eb5d9181a7;
 Fri, 02 Oct 2020 22:42:14 +0000 (UTC)
X-Inumbo-ID: 6a2c0ea9-08ed-4947-ad80-98eb5d9181a7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601678534;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=7qdzCzvub0sfIC/kGzx1XCmdSUtuH2Dz3eRm6Ac/eI8=;
  b=iadlyHLUTjhLor9hSbY3gNzI+n0mvZh71/d9vDBACCg0qzdptkauSt94
   2CkMYvxbWn+vn/aPwErJ8P/dGw6XF8QAxuUf6qIKsiaeh1Vekr8Ci6ZZT
   SGRShMmonnJe+byzldc23dfycNldOGJUMosKUpSjxu7lE8eHUQQUgnnoI
   g=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jun3fDn2iHrWZekaRBipIyETex6SvD9y1OcfN6yVgpkOGKJxKc3/dGewUmReEcP7Zbq2LtrlBB
 IHPbf3scN/TBrRcRrao0vmCsRReuc0E0OPE6D0fQ+L0rzj9SpKMGrbx0enUajrLGXbcjwS+1lm
 hs6S0+jNQrcntUR9MEhN3Z7969ujg363lMrLUA0YEmXLX2NnwNS8foFIlC5dFcLyiGC2eyjuWN
 3BLHIoOwLWqO7e0pqb+8cD31NljfjWHz664BVSGGN5HBNuI/8dJ4+VDjG/BXFo+G1X2Hczxojh
 Ptw=
X-SBRS: None
X-MesageID: 28285138
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,329,1596513600"; 
   d="scan'208";a="28285138"
Subject: Re: [PATCH v9 4/8] docs/specs: add missing definitions to
 libxc-migration-stream
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-5-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fda9ae6f-e55f-2e62-44a9-acf4e6e2d09e@citrix.com>
Date: Fri, 2 Oct 2020 23:42:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200924131030.1876-5-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 24/09/2020 14:10, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> The STATIC_DATA_END, X86_CPUID_POLICY and X86_MSR_POLICY record types have
> sections explaining what they are but their values are not defined. Indeed
> their values are defined as "Reserved for future mandatory records."
>
> Also, the spec revision is adjusted to match the migration stream version
> and an END record is added to the description of a 'typical save record for
> an x86 HVM guest.'
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Fixes: 6f71b5b1506 ("docs/migration Specify migration v3 and STATIC_DATA_END")
> Fixes: ddd273d8863 ("docs/migration: Specify X86_{CPUID,MSR}_POLICY records")

Oops, sorry.  I swear I had these at one point.  I can only presume they
got swallowed by a rebase at some point.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 02 22:54:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 02 Oct 2020 22:54:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2374.7055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOTwQ-0005Zu-Af; Fri, 02 Oct 2020 22:54:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2374.7055; Fri, 02 Oct 2020 22:54:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOTwQ-0005Zn-7i; Fri, 02 Oct 2020 22:54:18 +0000
Received: by outflank-mailman (input) for mailman id 2374;
 Fri, 02 Oct 2020 22:54:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6tcj=DJ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOTwO-0005Zh-V3
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 22:54:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95b0f26c-862f-4a1b-b4f7-17b4d603fcfc;
 Fri, 02 Oct 2020 22:54:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOTwM-0003pL-EW; Fri, 02 Oct 2020 22:54:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOTwL-00069B-VA; Fri, 02 Oct 2020 22:54:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOTwL-0007hC-Uf; Fri, 02 Oct 2020 22:54:13 +0000
X-Inumbo-ID: 95b0f26c-862f-4a1b-b4f7-17b4d603fcfc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m4ANlX8IbKgE6sqa0tT5dsil40/UY3zt5qaGtytGxmg=; b=bRsrXszRUzogbg5zcC2h2Rs5AX
	YKyV1QVJ6ShJQUHk+WbvGirmpNaFu/2ZAfjd/MWo2dmnlupxARtQ+6XUXnBVe7Au03r5r4aXNH8Dz
	MIhc/NVn66LtXY7IyY5PuIH3afKWpc8eKGi7ejhuEEZfBdnZvnEClfKfiS5VadYDMwLQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155222-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 155222: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=a9518c1aec5b6a8e1a04bbd54e6ba9725ef0db4c
X-Osstest-Versions-That:
    linux=5d087e3578cf9cbd850a6f0a5c8b8169f22b5272
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 02 Oct 2020 22:54:13 +0000

flight 155222 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155222/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                a9518c1aec5b6a8e1a04bbd54e6ba9725ef0db4c
baseline version:
 linux                5d087e3578cf9cbd850a6f0a5c8b8169f22b5272

Last test of basis   155020  2020-09-28 13:40:35 Z    4 days
Testing same since   155222  2020-10-01 11:40:31 Z    1 days    1 attempts

------------------------------------------------------------
419 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   5d087e3578cf..a9518c1aec5b  a9518c1aec5b6a8e1a04bbd54e6ba9725ef0db4c -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 02:13:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 02:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2429.7135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOX3E-0007p6-Ax; Sat, 03 Oct 2020 02:13:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2429.7135; Sat, 03 Oct 2020 02:13:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOX3E-0007oz-7P; Sat, 03 Oct 2020 02:13:32 +0000
Received: by outflank-mailman (input) for mailman id 2429;
 Sat, 03 Oct 2020 02:13:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOX3C-0007ot-5a
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 02:13:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 60c3857c-f2db-4abb-9c00-4813df220fe2;
 Sat, 03 Oct 2020 02:13:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOX34-0000zW-BJ; Sat, 03 Oct 2020 02:13:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOX34-0000Sf-2t; Sat, 03 Oct 2020 02:13:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOX34-0006kw-2Q; Sat, 03 Oct 2020 02:13:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOX3C-0007ot-5a
	for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 02:13:30 +0000
X-Inumbo-ID: 60c3857c-f2db-4abb-9c00-4813df220fe2
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 60c3857c-f2db-4abb-9c00-4813df220fe2;
	Sat, 03 Oct 2020 02:13:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=qOy58dXuzXsYnISBtAqPESuThWlvttHf06DAH0XGIZs=; b=uK3d2/thYZE7/eZesbjjRLF0Mh
	bpqZGsQr24Uwz9tsksrTqhbSZsSmWSd+aij/l0N7DpPseXhskpQcqf117uhf6L2TDlBeAqHPMlp/L
	s5xx5uYfU0/vv4BowQPaJVOgCb+BOaf7X2p95V/5qx3+kU/VjhrT80Aeu3NK17/xAEh4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOX34-0000zW-BJ; Sat, 03 Oct 2020 02:13:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOX34-0000Sf-2t; Sat, 03 Oct 2020 02:13:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOX34-0006kw-2Q; Sat, 03 Oct 2020 02:13:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.14-testing bisection] complete test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm
Message-Id: <E1kOX34-0006kw-2Q@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 02:13:22 +0000

branch xen-4.14-testing
xenbranch xen-4.14-testing
job test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  2ee270e126458471b178ca1e5d7d8d0afc48be39
  Bug not present: 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155359/


  commit 2ee270e126458471b178ca1e5d7d8d0afc48be39
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:14:56 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install --summary-out=tmp/155359.bisection-summary --basis-template=154350 --blessings=real,real-bisect xen-4.14-testing test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm debian-hvm-install
Searching for failure / basis pass:
 155173 fail [host=huxelrebe0] / 154350 [host=chardonnay0] 154148 [host=albana1] 154116 [host=elbling0] 152545 [host=albana1] 152537 [host=huxelrebe1] 152531 [host=godello0] 152153 [host=albana0] 152124 [host=huxelrebe1] 152081 [host=pinot0] 152061 [host=godello1] 152043 [host=elbling1] 151922 [host=godello0] 151899 [host=fiano1] 151892 ok.
Failure / basis pass flights: 155173 / 151892
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 f37a1cf023b277d0d49323bf322ce3ff0c92262d
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d9a4084544134eea50f62e88d79c466ae91f0455 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#d9a4084544134eea50f62e88d79c466ae91f0455-2793a49565488e419d10ba029c838f4b7efdba38 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/osstest/seabios.git#88ab0c15525ced2eefe39220742efe4769089ad8-41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 git://xenbits.xen.org/xen.git#02d69864b51a4302a148c28d6d391238a6778b4b-f37a1cf023b277d0d49323bf322ce3ff0c92262d
Loaded 12583 nodes in revision graph
Searching for test results:
 151892 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d9a4084544134eea50f62e88d79c466ae91f0455 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 151899 [host=fiano1]
 151922 [host=godello0]
 152061 [host=godello1]
 152043 [host=elbling1]
 152081 [host=pinot0]
 152124 [host=huxelrebe1]
 152153 [host=albana0]
 152531 [host=godello0]
 152537 [host=huxelrebe1]
 152545 [host=albana1]
 154116 [host=elbling0]
 154148 [host=albana1]
 154350 [host=chardonnay0]
 154617 fail irrelevant
 154641 fail irrelevant
 155016 fail irrelevant
 155087 fail irrelevant
 155219 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d9a4084544134eea50f62e88d79c466ae91f0455 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b
 155280 fail irrelevant
 155285 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 422e93e1de6f265ff48eaacc8cf7c44d6401062e 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf
 155289 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a048af3c9073e4b8108e6cf920bbb35574059639 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf
 155293 pass irrelevant
 155173 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155298 pass irrelevant
 155304 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155307 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1b461403ee723dab01d5828714cca0b9396a6b3c 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 28855ebcdbfa437e60bc16c761405476fe16bc39
 155312 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5eab5f0543e4f5fc52f7e2d823a29a6b1567fc16
 155315 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 43eceee913cc76e533233568ca9390be3d080578
 155319 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 f5469067ee0260673ca1e554ff8888512a55ccfc
 155322 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155328 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 b8c2efbe7b3e8fa5f0b0a3679afccd1204949070
 155333 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155343 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155350 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155353 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155355 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155359 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2ee270e126458471b178ca1e5d7d8d0afc48be39
Searching for interesting versions
 Result found: flight 151892 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841, results HASH(0x559809207f90) HASH(0x55980921d858) HASH(0x5598091fcd18) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 b8c2efbe7b3e8fa5f0b0a3679afccd1204949070, results HASH(0x5598091f0098) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c\
 3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 f5469067ee0260673ca1e554ff8888512a55ccfc, results HASH(0x55980921fb60) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5eab5f0543e4f5fc52f7e2d823a29a6b1567fc16, results HASH(0x559809207968) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1b461403ee723dab01d5828714cca0b9396a6b3c 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 28855ebcdbfa437e60bc16c761405476fe16bc39, results HASH(0x5598077939a0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a048af3c9073e4b8108e\
 6cf920bbb35574059639 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf, results HASH(0x559809207068) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 422e93e1de6f265ff48eaacc8cf7c44d6401062e 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802\
 021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf, results HASH(0x559809205960) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d9a4084544134eea50f62e88d79c466ae91f0455 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 88ab0c15525ced2eefe39220742efe4769089ad8 02d69864b51a4302a148c28d6d391238a6778b4b, results HASH(0x5598091f83e0) HASH(0x5598091eb460) Result found: flight 155173 (fail), for \
 basis failure (at ancestor ~5143)
 Repro found: flight 155219 (pass), for basis pass
 Repro found: flight 155304 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
No revisions left to test, checking graph state.
 Result found: flight 155333 (pass), for last pass
 Result found: flight 155343 (fail), for first failure
 Repro found: flight 155350 (pass), for last pass
 Repro found: flight 155353 (fail), for first failure
 Repro found: flight 155355 (pass), for last pass
 Repro found: flight 155359 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  2ee270e126458471b178ca1e5d7d8d0afc48be39
  Bug not present: 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155359/


  commit 2ee270e126458471b178ca1e5d7d8d0afc48be39
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:14:56 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 117 colors found
Revision graph left in /home/logs/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
155359: tolerable ALL FAIL

flight 155359 xen-4.14-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155359/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Oct 03 02:15:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 02:15:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2432.7150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOX5a-0007zl-QW; Sat, 03 Oct 2020 02:15:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2432.7150; Sat, 03 Oct 2020 02:15:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOX5a-0007ze-NA; Sat, 03 Oct 2020 02:15:58 +0000
Received: by outflank-mailman (input) for mailman id 2432;
 Sat, 03 Oct 2020 02:15:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOX5Z-0007zZ-Em
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 02:15:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 379ef434-0200-482e-92f2-4b6b20cdba5b;
 Sat, 03 Oct 2020 02:15:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOX5W-00012S-3Q; Sat, 03 Oct 2020 02:15:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOX5V-0000Yc-Se; Sat, 03 Oct 2020 02:15:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOX5V-0000DK-SB; Sat, 03 Oct 2020 02:15:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOX5Z-0007zZ-Em
	for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 02:15:57 +0000
X-Inumbo-ID: 379ef434-0200-482e-92f2-4b6b20cdba5b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 379ef434-0200-482e-92f2-4b6b20cdba5b;
	Sat, 03 Oct 2020 02:15:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=Vml/OXjE2eKGjV3FSJX8Be4t/2vkqAJqjZNxg9ZcEBA=; b=AJSwPfV+IryH8IypwAIMcxjtm3
	R6mu7r7fxFTQXK9mMKitdo3jyPPz8f/+ZTagi2owRofANiQj9ZUFlQGdxzPeDNIprc766l9fspRCE
	qwac6qjnZ23nuKsX06erRAsV6ccmMCFzKtGVy82oBdZGirzxgx/bzchtpOzwNcZZRjA4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOX5W-00012S-3Q; Sat, 03 Oct 2020 02:15:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOX5V-0000Yc-Se; Sat, 03 Oct 2020 02:15:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOX5V-0000DK-SB; Sat, 03 Oct 2020 02:15:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.12-testing bisection] complete test-amd64-amd64-libvirt-xsm
Message-Id: <E1kOX5V-0000DK-SB@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 02:15:53 +0000

branch xen-4.12-testing
xenbranch xen-4.12-testing
job test-amd64-amd64-libvirt-xsm
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_gnulib https://git.savannah.gnu.org/git/gnulib.git/
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Bug not present: 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155357/


  commit 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:10:32 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.12-testing/test-amd64-amd64-libvirt-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.12-testing/test-amd64-amd64-libvirt-xsm.guest-start --summary-out=tmp/155357.bisection-summary --basis-template=154601 --blessings=real,real-bisect xen-4.12-testing test-amd64-amd64-libvirt-xsm guest-start
Searching for failure / basis pass:
 155152 fail [host=fiano0] / 154601 [host=godello0] 154121 [host=godello1] 152525 [host=chardonnay0] 151715 [host=pinot0] 151388 [host=elbling1] 151367 [host=albana0] 151341 [host=fiano1] 151316 [host=huxelrebe0] 151292 [host=godello0] 151276 [host=huxelrebe1] 151248 [host=elbling0] 151227 ok.
Failure / basis pass flights: 155152 / 151227
(tree with no url: minios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_gnulib https://git.savannah.gnu.org/git/gnulib.git/
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 0186e76a62f7409804c2e4785d5a11e7f82a7c52
Basis pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 2e3de6253422112ae43e608661ba94ea6b345694 06760c2bf322cef4622b56bafee74fe93ae01630
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#7a05c739c26decb8ff0eef4f6c75ce3ef729532d-7a05c739c26decb8ff0eef4f6c75ce3ef729532d https://git.savannah.gnu.org/git/gnulib.git/#8089c00979a5b089cff592c6b91420e595657167-8089c00979a5b089cff592c6b91420e595657167 https://gitlab.com/keycodemap/keycodemapdb.git#16e5b0787687d8904dad2c026107409eb9bfcb95-16e5b0787687d8904dad2c026107409eb9bfcb95 git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#8927e2777786a43cddfaa328b0f4c41a09c629c9-2793a49565488e419d10ba029c838f4b7efdba38 git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484fe09f50876798-d0d8ad39ecb51cd7497cd524484fe09f50876798 git://xenbits.xen.org/qemu-xen.git#8023a62081ffbe3f734019076ec1a2b4213142bb-8023a62081ffbe3f734019076ec1a2b4213142bb git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 git://xenbits.xen.org/xen.git#06760c2bf322cef4622b56bafee74fe93ae01630-0186e76a62f7409804c2e4785d5a11e7f82a7c52
Loaded 12583 nodes in revision graph
Searching for test results:
 151161 [host=albana1]
 151227 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 2e3de6253422112ae43e608661ba94ea6b345694 06760c2bf322cef4622b56bafee74fe93ae01630
 151184 [host=godello1]
 151248 [host=elbling0]
 151276 [host=huxelrebe1]
 151292 [host=godello0]
 151341 [host=fiano1]
 151316 [host=huxelrebe0]
 151367 [host=albana0]
 151388 [host=elbling1]
 151715 [host=pinot0]
 152525 [host=chardonnay0]
 154121 [host=godello1]
 154601 [host=godello0]
 154622 fail irrelevant
 154663 fail irrelevant
 155014 fail irrelevant
 155075 fail irrelevant
 155210 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 2e3de6253422112ae43e608661ba94ea6b345694 06760c2bf322cef4622b56bafee74fe93ae01630
 155265 fail irrelevant
 155270 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b4b9496b3c17c76fab3ebb5a59d4c8d9b6b5c505 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb d9c812dda519a1a73e8370e1b81ddf46eb22ed16 19e0bbb4eba8d781b972448ec01ede6ca7fa22cb
 155276 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fbc9cb4c8b4fc7c6aa63f47eae33d7c9849bf6e0 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933
 155152 fail 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155284 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f94345d9eae1b359c01761be975086870a4a9de9 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933
 155291 fail 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155299 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 0446e3db13671032b05d19f6117d902f5c5c76fa
 155308 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155313 fail 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155320 fail 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cfd61e688f9f1736ff0311f49040669f04ac1ea6
 155323 fail 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8e25d522a3fc236c0c7a02541e8071afa031386b
 155331 fail 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
 155346 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155351 fail 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
 155354 pass 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155357 fail 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
Searching for interesting versions
 Result found: flight 151227 (pass), for basis pass
 For basis failure, parent search stopping at 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c, results HASH(0x5581412e3f68) HASH(0x5581412c5130) HASH(0x5581412afb68)
 For basis failure, parent search stopping at 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 0446e3db13671032b05d19f6117d902f5c5c76fa, results HASH(0x5581412dff30)
 For basis failure, parent search stopping at 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f94345d9eae1b359c01761be975086870a4a9de9 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933, results HASH(0x5581412e1c38)
 For basis failure, parent search stopping at 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fbc9cb4c8b4fc7c6aa63f47eae33d7c9849bf6e0 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933, results HASH(0x5581412beaf0)
 For basis failure, parent search stopping at 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b4b9496b3c17c76fab3ebb5a59d4c8d9b6b5c505 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb d9c812dda519a1a73e8370e1b81ddf46eb22ed16 19e0bbb4eba8d781b972448ec01ede6ca7fa22cb, results HASH(0x5581412dfc30)
 For basis failure, parent search stopping at 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8927e2777786a43cddfaa328b0f4c41a09c629c9 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 2e3de6253422112ae43e608661ba94ea6b345694 06760c2bf322cef4622b56bafee74fe93ae01630, results HASH(0x5581412c8360) HASH(0x5581412af268)
 Result found: flight 155152 (fail), for basis failure (at ancestor ~875)
 Repro found: flight 155210 (pass), for basis pass
 Repro found: flight 155291 (fail), for basis failure
 0 revisions at 7a05c739c26decb8ff0eef4f6c75ce3ef729532d 8089c00979a5b089cff592c6b91420e595657167 16e5b0787687d8904dad2c026107409eb9bfcb95 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
No revisions left to test, checking graph state.
 Result found: flight 155308 (pass), for last pass
 Result found: flight 155331 (fail), for first failure
 Repro found: flight 155346 (pass), for last pass
 Repro found: flight 155351 (fail), for first failure
 Repro found: flight 155354 (pass), for last pass
 Repro found: flight 155357 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Bug not present: 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155357/


  commit 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:10:32 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 110 colors found
Revision graph left in /home/logs/results/bisect/xen-4.12-testing/test-amd64-amd64-libvirt-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155357: tolerable FAIL

flight 155357 xen-4.12-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155357/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm 12 guest-start             fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt-xsm                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Oct 03 03:16:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 03:16:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2442.7170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOY1o-0004hv-Dv; Sat, 03 Oct 2020 03:16:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2442.7170; Sat, 03 Oct 2020 03:16:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOY1o-0004ho-Ao; Sat, 03 Oct 2020 03:16:08 +0000
Received: by outflank-mailman (input) for mailman id 2442;
 Sat, 03 Oct 2020 03:16:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOY1n-0004hG-AL
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 03:16:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b65a1e9b-53db-491e-8a27-f135a23cd3bd;
 Sat, 03 Oct 2020 03:15:55 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOY1a-0002FH-JG; Sat, 03 Oct 2020 03:15:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOY1a-0002ZX-8d; Sat, 03 Oct 2020 03:15:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOY1a-0001Y2-8C; Sat, 03 Oct 2020 03:15:54 +0000
X-Inumbo-ID: b65a1e9b-53db-491e-8a27-f135a23cd3bd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TvO1Olc/2XEVUFGLKqKRmlu9Tv2ZthtHvoRz6Boz87A=; b=nysP3PSYpkDUcsxltQUMYB1k6q
	Xvz7BG6/oVItr1OxJGJVRBWpvccpVVUoXswvQA8ur2OWjGG3V+X4SmnGUhkUko1xiyIdIpLtrwrj8
	4YFJ8/HNG21aiBafbI4LD9z3QLzymLiOWnldO1oLl1QxoVQUYylQcG8eyLEMHUOiFaGY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155226-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 155226: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:<job status>:broken:regression
    xen-4.10-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.10-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-saverestore.2:fail:heisenbug
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:broken:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:leak-check/check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=f58caa40cd8c2a3dbed705d90b6a22facc281afb
X-Osstest-Versions-That:
    xen=93be943e7d759015bd5db41a48f6dce58e580d5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 03:15:54 +0000

flight 155226 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155226/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx    <job status>                 broken
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151728
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151728
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 151728
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 151728
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151728

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-win7-amd64 15 guest-saverestore.2 fail pass in 155126

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               broken never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail in 155126 like 151728
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop   fail in 155126 never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151728
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 22 leak-check/check          fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  f58caa40cd8c2a3dbed705d90b6a22facc281afb
baseline version:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a

Last test of basis   151728  2020-07-08 01:17:09 Z   87 days
Testing same since   154621  2020-09-22 16:07:00 Z   10 days   18 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 broken  
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-thunderx broken
broken-step test-arm64-arm64-xl-thunderx hosts-allocate

Not pushing.

(No revision log; it would be 341 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 06:05:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 06:05:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2454.7190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOafT-0002p4-RP; Sat, 03 Oct 2020 06:05:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2454.7190; Sat, 03 Oct 2020 06:05:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOafT-0002ox-NS; Sat, 03 Oct 2020 06:05:15 +0000
Received: by outflank-mailman (input) for mailman id 2454;
 Sat, 03 Oct 2020 06:05:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOafS-0002os-MB
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 06:05:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 52a8040c-367a-4174-9806-7dbbb0867238;
 Sat, 03 Oct 2020 06:05:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOafO-0006P8-ND; Sat, 03 Oct 2020 06:05:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOafO-0002wk-GK; Sat, 03 Oct 2020 06:05:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOafO-0008DH-Fm; Sat, 03 Oct 2020 06:05:10 +0000
X-Inumbo-ID: 52a8040c-367a-4174-9806-7dbbb0867238
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HeyNySgPpC1vp4cXiPb9P+r1mTTIVP3OltYakH0Tgh0=; b=ssv9jwanMzAHqXF4NXAVM/qWa9
	XFCvhRxfFAOIMkFlVYVx+5BuJgslTKRvxZzevj1AKI6FpJ2XiAp6DjPYRF8/AKrGJaEgVRd4bMFfV
	zO4mLOTQyGPFdEgmjbyyRxV1oZZG+hlREoqbw6JAef/uD/Xsik1q9xd6GsdV4lChhNB4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155231-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155231: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=60e720931556fc1034d0981460164dcf02697679
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 06:05:10 +0000

flight 155231 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155231/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair          8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair          9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  6 xen-install            fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  6 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                60e720931556fc1034d0981460164dcf02697679
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   63 days
Failing since        152366  2020-08-01 20:49:34 Z   62 days  109 attempts
Testing same since   155231  2020-10-01 13:18:14 Z    1 days    1 attempts

------------------------------------------------------------
2449 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 330712 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 07:33:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 07:33:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2462.7212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOc2L-0001yJ-9i; Sat, 03 Oct 2020 07:32:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2462.7212; Sat, 03 Oct 2020 07:32:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOc2L-0001yC-6P; Sat, 03 Oct 2020 07:32:57 +0000
Received: by outflank-mailman (input) for mailman id 2462;
 Sat, 03 Oct 2020 07:32:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jVkC=DK=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kOc2K-0001y7-83
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 07:32:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a704f1e-42a7-474e-ad0f-c94eb463351d;
 Sat, 03 Oct 2020 07:32:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5F068B1AC;
 Sat,  3 Oct 2020 07:32:54 +0000 (UTC)
X-Inumbo-ID: 2a704f1e-42a7-474e-ad0f-c94eb463351d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601710374;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=/+WtWwRXTD0BjBTHb1/FJ6JZp3AokR9+zD+bpTXewJY=;
	b=GlzssYPKtZlr4ZpQYxu6NlxGBF78a5Xni4mwOrQB7WCX/va+IwlAEog0YjtpuFXaUDFShD
	1K0y3l2xgYbm2Xi88Wu703O+Q2nb3QNN7CKYbUFxTeX+/9LqnI9nodyev5No1P9qJovkN1
	niF1OPRlribG7yIN7bEIt5vnx/CU60s=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.9-rc8
Date: Sat,  3 Oct 2020 09:32:53 +0200
Message-Id: <20201003073253.1861-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.9b-rc8-tag

xen: branch for v5.9-rc8

It contains a fix for a regression introduced in 5.9-rc3 which caused
a system running as a fully virtualized guest under Xen to crash when
using legacy devices like a floppy.

Thanks.

Juergen

 drivers/xen/events/events_base.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

Juergen Gross (1):
      xen/events: don't use chip_data for legacy IRQs


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 08:48:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 08:48:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2477.7232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOdDK-00007f-Et; Sat, 03 Oct 2020 08:48:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2477.7232; Sat, 03 Oct 2020 08:48:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOdDK-00007Y-BZ; Sat, 03 Oct 2020 08:48:22 +0000
Received: by outflank-mailman (input) for mailman id 2477;
 Sat, 03 Oct 2020 08:48:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KpvH=DK=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kOdDI-00007T-4y
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 08:48:20 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 5c03ba95-63a9-461a-89d6-283d6ddbe3ca;
 Sat, 03 Oct 2020 08:48:17 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-245-qRo-C7aTN4eJf8sKT9OdoA-1; Sat, 03 Oct 2020 04:48:13 -0400
Received: by mail-wr1-f72.google.com with SMTP id d13so1573359wrr.23
 for <xen-devel@lists.xenproject.org>; Sat, 03 Oct 2020 01:48:13 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:f6ef:6259:374d:b794?
 ([2001:b07:6468:f312:f6ef:6259:374d:b794])
 by smtp.gmail.com with ESMTPSA id u81sm1344487wmg.43.2020.10.03.01.48.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 03 Oct 2020 01:48:11 -0700 (PDT)
X-Inumbo-ID: 5c03ba95-63a9-461a-89d6-283d6ddbe3ca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601714897;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/yD2Xg+heLEf/P4MLYz+dS5vx/J1O0Ekd2CO4abRGUI=;
	b=IZBvuzTz3/gX1eODDowjdNzWz67prVyS566rsBHR6xJ6z39WoXCxpU9GBnILNG2YlvqG70
	Gmh4CaT9dInJiN/Da7Fko3okm7sBShHdYy7PKMrmlh5IFXY/STGLvWV1dV/ZjsEuCBcBDQ
	EpQDldvLOW7ykK9dupD0BxBluagVTJg=
X-MC-Unique: qRo-C7aTN4eJf8sKT9OdoA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=/yD2Xg+heLEf/P4MLYz+dS5vx/J1O0Ekd2CO4abRGUI=;
        b=Y3vTh6sJFMZEfxwlnxu88uVJfs5WtV6FYZJrTdH8DHTdH2tPyGsnOYPvEMLa2VBqvL
         wvo/VYEPdLA76oIcFlApDS3JcQRKCEcdhlLM0zRp6phXp8Xv+glqfU4RI91X8odgVR2Z
         QMr43juMp8wGai61WXfQMhFQHCn8T0PK15yRkFJd9sojfTf/XBU554eiuAfCTuoCfdGS
         85vfReNwNef+uJs+LoGNNG+/O+zq9oBkKn/d1vvvhdwaHIfPTfsVaPP12+nG2u0ancfx
         ArzQ04rkDM3M+OjuFGQL+fgi8zO07rUawH73DAiNtaL3IdG8CjXjvyhIL5+Z80ou00AO
         grvw==
X-Gm-Message-State: AOAM533Gw2UkrlqK/LJraTlIswXumZ6d1leCLC2GyXHWgo8GzknNNC6I
	bQVthaGEuhFeyV7qp6YGszuA434/qd5oWl0gCL8t6KK55jk4GV3+AKOy7lt7HUsSSf8ApUJ3XXM
	OmA8AvyBlFxH3nZZJWTZg6rt6pe4=
X-Received: by 2002:a5d:55c8:: with SMTP id i8mr7276909wrw.331.1601714892043;
        Sat, 03 Oct 2020 01:48:12 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwAIX0iUFCBJ4wyETu7XcFXy583sWva8tpa1gx50dhNU0Dwz4oNCcxqQkasJf6KRmPiXxW7Lw==
X-Received: by 2002:a5d:55c8:: with SMTP id i8mr7276871wrw.331.1601714891730;
        Sat, 03 Oct 2020 01:48:11 -0700 (PDT)
Subject: Re: [PATCH v3] qemu/atomic.h: rename atomic_ to qatomic_
To: Matthew Rosato <mjrosato@linux.ibm.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Fam Zheng <fam@euphon.net>, Peter Maydell <peter.maydell@linaro.org>,
 sheepdog@lists.wpkg.org, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>, David Hildenbrand <david@redhat.com>,
 Yuval Shaia <yuval.shaia.ml@gmail.com>, Markus Armbruster
 <armbru@redhat.com>, Max Filippov <jcmvbkbc@gmail.com>,
 Alistair Francis <Alistair.Francis@wdc.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Huacai Chen <chenhc@lemote.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Alberto Garcia <berto@igalia.com>, kvm@vger.kernel.org,
 Yoshinori Sato <ysato@users.sourceforge.jp>,
 Juan Quintela <quintela@redhat.com>, Jiri Slaby <jslaby@suse.cz>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Michael Roth <mdroth@linux.vnet.ibm.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
 Liu Yuan <namei.unix@gmail.com>, Richard Henderson <rth@twiddle.net>,
 Thomas Huth <thuth@redhat.com>, Eduardo Habkost <ehabkost@redhat.com>,
 Stefan Weil <sw@weilnetz.de>, Peter Lieven <pl@kamp.de>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, xen-devel@lists.xenproject.org, qemu-riscv@nongnu.org,
 Sunil Muthuswamy <sunilmut@microsoft.com>, John Snow <jsnow@redhat.com>,
 Hailiang Zhang <zhang.zhanghailiang@huawei.com>,
 Kevin Wolf <kwolf@redhat.com>, =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?=
 <berrange@redhat.com>, qemu-block@nongnu.org,
 Bastian Koppelmann <kbastian@mail.uni-paderborn.de>,
 Cornelia Huck <cohuck@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Sagar Karandikar <sagark@eecs.berkeley.edu>,
 Palmer Dabbelt <palmer@dabbelt.com>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Aurelien Jarno <aurelien@aurel32.net>
References: <20200923105646.47864-1-stefanha@redhat.com>
 <4e65d6fa-0a7e-015b-eb6f-5dd1cc3ddd91@linux.ibm.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <45ba3626-0e06-96c7-5ed8-ea561ae20f15@redhat.com>
Date: Sat, 3 Oct 2020 10:48:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <4e65d6fa-0a7e-015b-eb6f-5dd1cc3ddd91@linux.ibm.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02/10/20 18:43, Matthew Rosato wrote:
>> diff --git
>> a/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
>> b/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
>> index acd4c8346d..7b4062a1a1 100644
>> ---
>> a/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
>> +++
>> b/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
>> @@ -68,7 +68,7 @@ static inline int pvrdma_idx_valid(uint32_t idx,
>> uint32_t max_elems)
>>     static inline int32_t pvrdma_idx(int *var, uint32_t max_elems)
>>   {
>> -    const unsigned int idx = atomic_read(var);
>> +    const unsigned int idx = qatomic_read(var);
>>         if (pvrdma_idx_valid(idx, max_elems))
>>           return idx & (max_elems - 1);
>> @@ -77,17 +77,17 @@ static inline int32_t pvrdma_idx(int *var,
>> uint32_t max_elems)
>>     static inline void pvrdma_idx_ring_inc(int *var, uint32_t max_elems)
>>   {
>> -    uint32_t idx = atomic_read(var) + 1;    /* Increment. */
>> +    uint32_t idx = qatomic_read(var) + 1;    /* Increment. */
>>         idx &= (max_elems << 1) - 1;        /* Modulo size, flip gen. */
>> -    atomic_set(var, idx);
>> +    qatomic_set(var, idx);
>>   }
>>     static inline int32_t pvrdma_idx_ring_has_space(const struct
>> pvrdma_ring *r,
>>                             uint32_t max_elems, uint32_t *out_tail)
>>   {
>> -    const uint32_t tail = atomic_read(&r->prod_tail);
>> -    const uint32_t head = atomic_read(&r->cons_head);
>> +    const uint32_t tail = qatomic_read(&r->prod_tail);
>> +    const uint32_t head = qatomic_read(&r->cons_head);
>>         if (pvrdma_idx_valid(tail, max_elems) &&
>>           pvrdma_idx_valid(head, max_elems)) {
>> @@ -100,8 +100,8 @@ static inline int32_t
>> pvrdma_idx_ring_has_space(const struct pvrdma_ring *r,
>>   static inline int32_t pvrdma_idx_ring_has_data(const struct
>> pvrdma_ring *r,
>>                            uint32_t max_elems, uint32_t *out_head)
>>   {
>> -    const uint32_t tail = atomic_read(&r->prod_tail);
>> -    const uint32_t head = atomic_read(&r->cons_head);
>> +    const uint32_t tail = qatomic_read(&r->prod_tail);
>> +    const uint32_t head = qatomic_read(&r->cons_head);
>>         if (pvrdma_idx_valid(tail, max_elems) &&
>>           pvrdma_idx_valid(head, max_elems)) {
> 
> 
> It looks like the changes in this file are going to get reverted the
> next time someone does a linux header sync.

Source code should not be imported from Linux at all.  The hacks that
accommodate pvrdma in update-linux-headers.sh (like s/atomic_t/u32/)
really have no place there; the files in
include/standard-headers/drivers/infiniband/hw/vmw_pvrdma should all be
moved into hw/.

Paolo



From xen-devel-bounces@lists.xenproject.org Sat Oct 03 09:16:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 09:16:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2262.7244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOde0-0002hG-PL; Sat, 03 Oct 2020 09:15:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2262.7244; Sat, 03 Oct 2020 09:15:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOde0-0002h9-Lz; Sat, 03 Oct 2020 09:15:56 +0000
Received: by outflank-mailman (input) for mailman id 2262;
 Fri, 02 Oct 2020 16:44:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J9ZS=DJ=linux.ibm.com=mjrosato@srs-us1.protection.inumbo.net>)
 id 1kOOAf-0005Vv-9Y
 for xen-devel@lists.xenproject.org; Fri, 02 Oct 2020 16:44:37 +0000
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.156.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1d086cc1-494d-40ce-8e62-f5aef5229a6b;
 Fri, 02 Oct 2020 16:44:35 +0000 (UTC)
Received: from pps.filterd (m0098410.ppops.net [127.0.0.1])
 by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 092GVwrx006908; Fri, 2 Oct 2020 12:43:16 -0400
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0a-001b2d01.pphosted.com with ESMTP id 33x5rycn28-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 02 Oct 2020 12:43:16 -0400
Received: from m0098410.ppops.net (m0098410.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 092GWgtP009181;
 Fri, 2 Oct 2020 12:43:15 -0400
Received: from ppma05wdc.us.ibm.com (1b.90.2fa9.ip4.static.sl-reverse.com
 [169.47.144.27])
 by mx0a-001b2d01.pphosted.com with ESMTP id 33x5rycn1m-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 02 Oct 2020 12:43:15 -0400
Received: from pps.filterd (ppma05wdc.us.ibm.com [127.0.0.1])
 by ppma05wdc.us.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 092GcU34023331;
 Fri, 2 Oct 2020 16:43:13 GMT
Received: from b01cxnp22036.gho.pok.ibm.com (b01cxnp22036.gho.pok.ibm.com
 [9.57.198.26]) by ppma05wdc.us.ibm.com with ESMTP id 33sw99y7yv-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 02 Oct 2020 16:43:13 +0000
Received: from b01ledav006.gho.pok.ibm.com (b01ledav006.gho.pok.ibm.com
 [9.57.199.111])
 by b01cxnp22036.gho.pok.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 092GhDIh14615524
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 2 Oct 2020 16:43:13 GMT
Received: from b01ledav006.gho.pok.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 120D4AC05B;
 Fri,  2 Oct 2020 16:43:13 +0000 (GMT)
Received: from b01ledav006.gho.pok.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 4E754AC059;
 Fri,  2 Oct 2020 16:43:04 +0000 (GMT)
Received: from oc4221205838.ibm.com (unknown [9.163.4.25])
 by b01ledav006.gho.pok.ibm.com (Postfix) with ESMTP;
 Fri,  2 Oct 2020 16:43:04 +0000 (GMT)
X-Inumbo-ID: 1d086cc1-494d-40ce-8e62-f5aef5229a6b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=pp1;
 bh=2+A3Co7DfxkWQ8HV8UobuhVZsZR1j3dufeR3zFyNIJM=;
 b=ZL6lpAo76UUwp11NPj6zYDs8uDEd5Eg4s8Bnu+x6A9pDfY4v17ur2V5/VQAm1Xsa8WoZ
 jF57ZhyECCCEOAt7VEtCJ+X/62wSfpjwuK8OmQILaGAJlfyt2IHdnxk4i2PTbV8C5+y2
 E8QrbmluwZPKJ8nkfjZ4pV2KHv9Srg8dbS4C4+LjouCp1w1golKNt2NX64W5vFoFU3mx
 GraG/FKAidg+fls6cQa0tWqYA9UpnCVqQINCGpM6utTnAAyg9W8PYf1GW7PzOz6aQuMF
 ueSdxKKi3wtCWnYS16nDlnkvuNyIusRowWITviL71dULf4YjNRCfBzdgzDBNFIg1cgvz Tg== 
Subject: Re: [PATCH v3] qemu/atomic.h: rename atomic_ to qatomic_
To: Stefan Hajnoczi <stefanha@redhat.com>, qemu-devel@nongnu.org
Cc: Fam Zheng <fam@euphon.net>, Peter Maydell <peter.maydell@linaro.org>,
        sheepdog@lists.wpkg.org, Paul Durrant <paul@xen.org>,
        Jason Wang <jasowang@redhat.com>, David Hildenbrand <david@redhat.com>,
        Yuval Shaia <yuval.shaia.ml@gmail.com>,
        Markus Armbruster
 <armbru@redhat.com>,
        Max Filippov <jcmvbkbc@gmail.com>,
        Alistair Francis <Alistair.Francis@wdc.com>,
        Gerd Hoffmann <kraxel@redhat.com>, Huacai Chen <chenhc@lemote.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Alberto Garcia <berto@igalia.com>, kvm@vger.kernel.org,
        Yoshinori Sato <ysato@users.sourceforge.jp>,
        Juan Quintela <quintela@redhat.com>, Jiri Slaby <jslaby@suse.cz>,
        "Michael S. Tsirkin" <mst@redhat.com>,
        Michael Roth <mdroth@linux.vnet.ibm.com>,
        Halil Pasic <pasic@linux.ibm.com>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
        Anthony Perard <anthony.perard@citrix.com>,
        =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@redhat.com>,
        Liu Yuan <namei.unix@gmail.com>, Richard Henderson <rth@twiddle.net>,
        Thomas Huth <thuth@redhat.com>, Eduardo Habkost <ehabkost@redhat.com>,
        Stefan Weil <sw@weilnetz.de>, Peter Lieven <pl@kamp.de>,
        "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
        qemu-arm@nongnu.org, xen-devel@lists.xenproject.org,
        qemu-riscv@nongnu.org, Sunil Muthuswamy <sunilmut@microsoft.com>,
        John Snow <jsnow@redhat.com>,
        Hailiang Zhang <zhang.zhanghailiang@huawei.com>,
        Kevin Wolf <kwolf@redhat.com>,
        =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?=
 <berrange@redhat.com>,
        qemu-block@nongnu.org,
        Bastian Koppelmann <kbastian@mail.uni-paderborn.de>,
        Cornelia Huck <cohuck@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
        Max Reitz <mreitz@redhat.com>,
        Sagar Karandikar <sagark@eecs.berkeley.edu>,
        Palmer Dabbelt <palmer@dabbelt.com>,
        Paolo Bonzini <pbonzini@redhat.com>,
        Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
        Aurelien Jarno <aurelien@aurel32.net>
References: <20200923105646.47864-1-stefanha@redhat.com>
From: Matthew Rosato <mjrosato@linux.ibm.com>
Message-ID: <4e65d6fa-0a7e-015b-eb6f-5dd1cc3ddd91@linux.ibm.com>
Date: Fri, 2 Oct 2020 12:43:03 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200923105646.47864-1-stefanha@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687
 definitions=2020-10-02_10:2020-10-02,2020-10-02 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 suspectscore=0
 phishscore=0 adultscore=0 spamscore=0 lowpriorityscore=0 malwarescore=0
 clxscore=1011 bulkscore=0 priorityscore=1501 mlxlogscore=999
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2010020121

On 9/23/20 6:56 AM, Stefan Hajnoczi wrote:
> clang's C11 atomic_fetch_*() functions only take a C11 atomic type
> pointer argument. QEMU uses direct types (int, etc) and this causes a
> compiler error when QEMU code calls these functions in a source file
> that also includes <stdatomic.h> via a system header file:
> 
>    $ CC=clang CXX=clang++ ./configure ... && make
>    ../util/async.c:79:17: error: address argument to atomic operation must be a pointer to _Atomic type ('unsigned int *' invalid)
> 
> Avoid using atomic_*() names in QEMU's atomic.h since that namespace is
> used by <stdatomic.h>. Prefix QEMU's APIs with 'q' so that atomic.h
> and <stdatomic.h> can co-exist. I checked /usr/include on my machine and
> searched GitHub for existing "qatomic_" users but there seem to be none.
> 
> This patch was generated using:
> 
>    $ git grep -h -o '\<atomic\(64\)\?_[a-z0-9_]\+' include/qemu/atomic.h | \
>      sort -u >/tmp/changed_identifiers
>    $ for identifier in $(</tmp/changed_identifiers); do
>          sed -i "s%\<$identifier\>%q$identifier%g" \
>              $(git grep -I -l "\<$identifier\>")
>      done
> 
> I manually fixed line-wrap issues and misaligned rST tables.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
..snip..
> diff --git a/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h b/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
> index acd4c8346d..7b4062a1a1 100644
> --- a/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
> +++ b/include/standard-headers/drivers/infiniband/hw/vmw_pvrdma/pvrdma_ring.h
> @@ -68,7 +68,7 @@ static inline int pvrdma_idx_valid(uint32_t idx, uint32_t max_elems)
>   
>   static inline int32_t pvrdma_idx(int *var, uint32_t max_elems)
>   {
> -	const unsigned int idx = atomic_read(var);
> +	const unsigned int idx = qatomic_read(var);
>   
>   	if (pvrdma_idx_valid(idx, max_elems))
>   		return idx & (max_elems - 1);
> @@ -77,17 +77,17 @@ static inline int32_t pvrdma_idx(int *var, uint32_t max_elems)
>   
>   static inline void pvrdma_idx_ring_inc(int *var, uint32_t max_elems)
>   {
> -	uint32_t idx = atomic_read(var) + 1;	/* Increment. */
> +	uint32_t idx = qatomic_read(var) + 1;	/* Increment. */
>   
>   	idx &= (max_elems << 1) - 1;		/* Modulo size, flip gen. */
> -	atomic_set(var, idx);
> +	qatomic_set(var, idx);
>   }
>   
>   static inline int32_t pvrdma_idx_ring_has_space(const struct pvrdma_ring *r,
>   					      uint32_t max_elems, uint32_t *out_tail)
>   {
> -	const uint32_t tail = atomic_read(&r->prod_tail);
> -	const uint32_t head = atomic_read(&r->cons_head);
> +	const uint32_t tail = qatomic_read(&r->prod_tail);
> +	const uint32_t head = qatomic_read(&r->cons_head);
>   
>   	if (pvrdma_idx_valid(tail, max_elems) &&
>   	    pvrdma_idx_valid(head, max_elems)) {
> @@ -100,8 +100,8 @@ static inline int32_t pvrdma_idx_ring_has_space(const struct pvrdma_ring *r,
>   static inline int32_t pvrdma_idx_ring_has_data(const struct pvrdma_ring *r,
>   					     uint32_t max_elems, uint32_t *out_head)
>   {
> -	const uint32_t tail = atomic_read(&r->prod_tail);
> -	const uint32_t head = atomic_read(&r->cons_head);
> +	const uint32_t tail = qatomic_read(&r->prod_tail);
> +	const uint32_t head = qatomic_read(&r->cons_head);
>   
>   	if (pvrdma_idx_valid(tail, max_elems) &&
>   	    pvrdma_idx_valid(head, max_elems)) {


It looks like the changes in this file are going to get reverted the 
next time someone does a Linux header sync.



From xen-devel-bounces@lists.xenproject.org Sat Oct 03 12:21:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 12:21:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2533.7276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOgXY-0001ph-3t; Sat, 03 Oct 2020 12:21:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2533.7276; Sat, 03 Oct 2020 12:21:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOgXY-0001pa-0n; Sat, 03 Oct 2020 12:21:28 +0000
Received: by outflank-mailman (input) for mailman id 2533;
 Sat, 03 Oct 2020 12:21:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOgXW-0001p5-NS
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 12:21:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1cb9e3d2-e69e-4258-b835-096370804fd6;
 Sat, 03 Oct 2020 12:21:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOgXO-00069d-Ej; Sat, 03 Oct 2020 12:21:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOgXO-0007AG-7R; Sat, 03 Oct 2020 12:21:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOgXO-0005EJ-6x; Sat, 03 Oct 2020 12:21:18 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=ep+diG/tn68nTZTwQ2XC3XjKRW6cLUmfkoNOnOpCMV0=; b=zaUsS5RCYdWoe7zIdiZpyxh7vW
	k+jkbb3qKbwOMsDfJQwBy/rVgtjeKT06yr8z7prBqCfAdmzRlYnSTfvsvf38LR7brfLH2SLBQXBr0
	hHxn2kkVi0OP7xeLndSAVA73+/TAKMEeNicrw0HubGKFb2aaUF8fEP75k3A9zthrx9oM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-i386-libvirt-xsm
Message-Id: <E1kOgXO-0005EJ-6x@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 12:21:18 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-libvirt-xsm
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155373/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-i386-libvirt-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-i386-libvirt-xsm.guest-start --summary-out=tmp/155373.bisection-summary --basis-template=154611 --blessings=real,real-bisect xen-unstable test-amd64-i386-libvirt-xsm guest-start
Searching for failure / basis pass:
 155211 fail [host=albana1] / 154611 [host=pinot0] 154592 [host=elbling0] 154576 [host=fiano1] 154556 [host=chardonnay0] 154521 [host=huxelrebe0] 154504 [host=rimava1] 154494 [host=pinot1] 154481 [host=albana0] 154465 [host=fiano0] 154090 [host=huxelrebe1] 154058 [host=chardonnay0] 154036 ok.
Failure / basis pass flights: 155211 / 154036
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c73952831f0fc63a984e0d07dff1d20f8617b81f
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 89002866bb6c6f26024f015820c8f52012f95cf2
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#89002866bb6c6f26024f015820c8f52012f95cf2-c73952831f0fc63a984e0d07dff1d20f8617b81f
Loaded 5001 nodes in revision graph
Searching for test results:
 154036 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 89002866bb6c6f26024f015820c8f52012f95cf2
 154058 [host=chardonnay0]
 154090 [host=huxelrebe1]
 154465 [host=fiano0]
 154481 [host=albana0]
 154494 [host=pinot1]
 154504 [host=rimava1]
 154521 [host=huxelrebe0]
 154556 [host=chardonnay0]
 154576 [host=fiano1]
 154592 [host=elbling0]
 154611 [host=pinot0]
 154634 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 2785b2a9e04abc148e1c5259f4faee708ea356f4
 155017 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5bcac985498ed83d89666959175ca9c9ed561ae1
 155113 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155255 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 89002866bb6c6f26024f015820c8f52012f95cf2
 155316 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155325 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 155211 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c73952831f0fc63a984e0d07dff1d20f8617b81f
 155330 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 e71301ecd50f2d3bd1b960bbf7dcf850d02e7e8a
 155347 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c73952831f0fc63a984e0d07dff1d20f8617b81f
 155352 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 910093d54fc758e7d69261b344fdc8da3a7bd81e
 155356 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155358 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 e045199c7c9c5433d7f1461a741ed539a75cbfad
 155363 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
 155366 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155368 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
 155370 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155373 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
Searching for interesting versions
 Result found: flight 154036 (pass), for basis pass
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325, results HASH(0x565194fd05d8) HASH(0x565194f4b050) HASH(0x565194f30908) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef8\
 28bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 910093d54fc758e7d69261b344fdc8da3a7bd81e, results HASH(0x565194f3dad0) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98\
 c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 e71301ecd50f2d3bd1b960bbf7dcf850d02e7e8a, results HASH(0x565194f3e6f8) For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e, results HASH(0x565194f2e600) For basis\
  failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 89002866bb6c6f26024f015820c8f52012f95cf2, results HASH(0x565194f370c8) HASH(0x565194f9bfa0) Result found: flight 154634 (fail), for basis failure (at ancestor ~371)
 Repro found: flight 155255 (pass), for basis pass
 Repro found: flight 155347 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
No revisions left to test, checking graph state.
 Result found: flight 155356 (pass), for last pass
 Result found: flight 155363 (fail), for first failure
 Repro found: flight 155366 (pass), for last pass
 Repro found: flight 155368 (fail), for first failure
 Repro found: flight 155370 (pass), for last pass
 Repro found: flight 155373 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155373/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-i386-libvirt-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155373: tolerable FAIL

flight 155373 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155373/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-libvirt-xsm  12 guest-start             fail baseline untested


jobs:
 build-i386-libvirt                                           pass    
 test-amd64-i386-libvirt-xsm                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Oct 03 12:45:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 12:45:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2538.7293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOgul-0003hi-4y; Sat, 03 Oct 2020 12:45:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2538.7293; Sat, 03 Oct 2020 12:45:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOgul-0003hb-1n; Sat, 03 Oct 2020 12:45:27 +0000
Received: by outflank-mailman (input) for mailman id 2538;
 Sat, 03 Oct 2020 12:45:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDDO=DK=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kOguj-0003hU-V3
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 12:45:26 +0000
Received: from out2-smtp.messagingengine.com (unknown [66.111.4.26])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5903b2bf-f866-4e7a-8d27-aa04cd8a28af;
 Sat, 03 Oct 2020 12:45:25 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id EBD8D5C00D0;
 Sat,  3 Oct 2020 08:45:24 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Sat, 03 Oct 2020 08:45:24 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 79E7B3064680;
 Sat,  3 Oct 2020 08:45:23 -0400 (EDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm3; bh=bh8hCQ
	WdjNGiDYJPTf/ASdV70PU50sF3IrSRDLvbyzs=; b=Y0r5UnfOTNtSrLOhrjSO5+
	D7wgnsKaktsLYegSpNsS0fKCj5+YUoWwXT60YGmd/mCegHNIPPU/H6Jl3NbYxI/0
	VLcGyzcGT6BeGL8tXoXHCGegVGlrXR0B70p4Zxeu7qYY5CcB/9XPhfQUQtIKaoXm
	taALtYjz+omdWMwMEblYGQomwC7lGHkBBK7ush3B+o86gV/wRTybi4iLb0ESs+Fg
	3fedV4j17nj4B5uWgN4o7qBADWBjJiy1afga6jDXv0Cf9h3DPQiY4/MBGVvhedQs
	7x1Ofzr/+V8t0Ykv9n6097za71XfrXxjbcQpxQNafBMvRLAnVPy562kBNVDWPAZg
	==
X-ME-Sender: <xms:ZHJ4XyZYUnZlQz-3_yXmE6W1GDrRmerr-x0wvdOyF2HgusbL2_Va1Q>
    <xme:ZHJ4X1bIP9uG3apMqeJWfSc9KDw34g3g1kRvNYzPRnasV2J7VARv5zN3nnS7-W6Ax
    VIfoT01HxDuew>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrfeekgdehfecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteevffei
    gffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppeelud
    drieegrddujedtrdekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
    ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:ZHJ4X8_ozeIAHamaaSBzMnq_dnpQhX3dVUjAj_nJLYQXKj5QAKWFqg>
    <xmx:ZHJ4X0qGjfaSQYtTyod7qgzXcM4mXtusq3bqWhBOPDnSoLtLeWR1qg>
    <xmx:ZHJ4X9ptZFXGM4OOALKTWAJ3Q8nv8Pdb3e9PNwAqNMebBYhkc-H76g>
    <xmx:ZHJ4X-BQryjGadPzp-2nHEcwUsTeMA2tFCUd_LO5dx4e88K_ozYLEg>
Date: Sat, 3 Oct 2020 14:45:18 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jürgen Groß <jgross@suse.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: Re: Yet another S3 issue in Xen 4.14
Message-ID: <20201003124518.GA104458@mail-itl>
References: <20201001011245.GL3962@mail-itl>
 <a80ad59b-feb1-01c8-2b14-dbf6568d0ff5@suse.com>
 <20201001123129.GJ1482@mail-itl>
 <1e596ccc-a875-93f1-2619-e4dbcbd88b4d@citrix.com>
 <20201002150859.GM3962@mail-itl>
 <454ac9ce-012f-f2e7-722d-c5304fd3146f@suse.com>
 <aa5e1a7b-3724-bdc1-a313-0598aabd181f@citrix.com>
 <20201002191951.GA104059@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="+QahgC5+KEYLbs62"
Content-Disposition: inline
In-Reply-To: <20201002191951.GA104059@mail-itl>


--+QahgC5+KEYLbs62
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: Yet another S3 issue in Xen 4.14

On Fri, Oct 02, 2020 at 09:19:55PM +0200, Marek Marczykowski-Górecki wrote:
> Disabling efi_get_time() or setting CR4 earlier solves _this_ issue, but
> applied on top of stable-4.14 still doesn't work. Looks like there is
> yet another S3 breakage in between. I'm bisecting it further...

This time I get to this commit:

commit 8e2aa76dc1670e82eaa15683353853bc66bf54fc (refs/bisect/bad)
Author: Dario Faggioli <dfaggioli@suse.com>
Date:   Thu May 28 23:29:44 2020 +0200

    xen: credit2: limit the max number of CPUs in a runqueue

The failure mode after S3 resume is slightly different and not really
deterministic: sometimes it hangs immediately, sometimes the system stays
interactive for a few seconds and then hangs, and sometimes it crashes
(it looks like a panic).

I've tried switching to credit1, but that seems to be broken in yet
another way, much earlier (commits at which S3 works with credit2 crash
on S3 resume with credit1).

(few hours later)

I managed to set up a kdump kernel and got a copy of the vmcore after the
crash. Then I extracted the crash message using strings:

(XEN) Entering ACPI S3 state.
(XEN) [VT-D]Passed iommu=no-igfx option.  Disabling IGD VT-d engine.
(XEN) mce_intel.c:773: MCA Capability: firstbank 0, extended MCE MSR 0, BCAST, CMCI
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) [VT-D]intremap.c:564: MSI index (65535) has an empty entry
(XEN) Assertion 'c2rqd(sched_unit_master(unit)) == svc->rqd' failed at credit2.c:2273
(XEN) ----[ Xen-4.14-unstable  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    8
(XEN) RIP:    e008:[<ffff82d040242725>] credit2.c#csched2_unit_wake+0x14f/0x151
(XEN) RFLAGS: 0000000000010087   CONTEXT: hypervisor (d0v0)
(XEN) rax: ffff830250b609e0   rbx: ffff830250b18f10   rcx: 0000003210631000
(XEN) rdx: ffff830250b604a0   rsi: 0000000000000008   rdi: ffff830250b60846
(XEN) rbp: ffff830250ba7d98   rsp: ffff830250ba7d78   r8:  deadbeefdeadf00d
(XEN) r9:  deadbeefdeadf00d   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: ffff830250b0e040   r13: ffff82d04044abc0   r14: 0000000000000008
(XEN) r15: 2f3d053d56f91b80   cr0: 0000000080050033   cr4: 0000000000362660
(XEN) cr3: 0000000210270000   cr2: 0000000000000000
(XEN) fsb: 000077a6b25a2b80   gsb: ffff8881b5400000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d040242725> (credit2.c#csched2_unit_wake+0x14f/0x151):
(XEN)  df e8 dc bd ff ff eb ad <0f> 0b 55 48 89 e5 41 57 41 56 41 55 41 54 53 48
(XEN) Xen stack trace from rsp=ffff830250ba7d78:
(XEN)    ffff830250b10000 ffff830250b18f10 ffff830250b18f10 ffff830250b60840
(XEN)    ffff830250ba7de8 ffff82d04024b8eb 0000000000000202 ffff830250b60840
(XEN)    ffff830250b66018 0000000000000001 0000000000000000 0000000000000000
(XEN)    ffff830250b66018 ffff830250b10000 ffff830250ba7e58 ffff82d040207c3f
(XEN)    ffff82d0403673d4 ffff82d0403673c8 ffff82d0403673d4 ffff82d0403673c8
(XEN)    ffff82d0403673d4 ffff82d0403673c8 ffff82d0403673d4 ffff830250ba7ef8
(XEN)    0000000000000180 ffff830250b45000 deadbeefdeadf00d 0000000000000003
(XEN)    ffff830250ba7ee8 ffff82d0402e7759 0000000000000001 0000000000000005
(XEN)    0000000000000000 deadbeefdeadf00d deadbeefdeadf00d ffff82d0403673c8
(XEN)    ffff82d0403673d4 ffff82d0403673c8 ffff82d0403673d4 ffff82d0403673c8
(XEN)    ffff82d0403673d4 ffff830250b45000 0000000000000000 0000000000000000
(XEN)    0000000000000000 0000000000000000 00007cfdaf4580e7 ffff82d040367432
(XEN)    ffff888192157320 0000000000000000 ffffffff810eb370 0000000000000000
(XEN)    ffff8881b0a626c0 0000000000000005 0000000000000246 0000000000000001
(XEN)    ffffea0006087608 ffffea0006087608 0000000000000018 ffffffff8100230a
(XEN)    0000000000000000 0000000000000005 0000000000000001 0000010000000000
(XEN)    ffffffff8100230a 000000000000e033 0000000000000246 ffffc90002653cf0
(XEN)    000000000000e02b d2c2c2c2c2c2c2c2 c2c2c2c2c2c2c282 c2c2c2c2c2c2c2c2
(XEN)    c2e2c2c2c2c2c2c2 0000e01000000008 ffff830250b45000 0000003210631000
(XEN)    0000000000362660 0000000000000000 8000000250bc3002 0000060100000000
(XEN) Xen call trace:
(XEN)    [<ffff82d040242725>] R credit2.c#csched2_unit_wake+0x14f/0x151
(XEN)    [<ffff82d04024b8eb>] F vcpu_wake+0x105/0x52c
(XEN)    [<ffff82d040207c3f>] F do_vcpu_op+0x1b0/0x631
(XEN)    [<ffff82d0402e7759>] F pv_hypercall+0x28f/0x57d
(XEN)    [<ffff82d040367432>] F lstar_enter+0x112/0x120
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 8:
(XEN) Assertion 'c2rqd(sched_unit_master(unit)) == svc->rqd' failed at credit2.c:2273
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
(XEN) Executing kexec image on cpu8
(XEN) Shot down all CPUs

Looks pretty similar to the other thread "Xen crash after S3 suspend -
Xen 4.13" - adding J=C3=BCrgen. Since I've seen this one on Xen 4.13 before,
I think the commit I've found just makes it much more likely to happen.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--+QahgC5+KEYLbs62
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl94cl8ACgkQ24/THMrX
1ywxzAgAkyEB2F8wZ/8iqMKgnhMvOgxnS+KhrtVKyUUge+OGulwk35H3G9vBE0c/
iEnvQ+fgNNCsFRLZHo6ZeUAk2yzWmrVqfWlb/Jlk23QJpoHv1SbNiZyBY21yOzy+
jLUGigrJy520FrpxxPFqq2DZBjydSIR5ri6Ol8/bGwFbRL9oFLUOY4n3m1LwXeCt
hWBLT35rHCo58NyCGDgJnVdnaDKx07nbsqCAc/F9akCwXsz53bWDxehAJqb6Usus
h/vJIFHlWjdBoj8TFjTMAY5olnwJspvKQN2l6Zhw+nptzvotZDcrXiAYr3VfcnLG
JnxyV8AEf0fnplfh8/YTpvSDSBHK7Q==
=JiOQ
-----END PGP SIGNATURE-----

--+QahgC5+KEYLbs62--


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 13:43:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 13:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2541.7305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOhoU-0000Jk-Lu; Sat, 03 Oct 2020 13:43:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2541.7305; Sat, 03 Oct 2020 13:43:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOhoU-0000Jd-Iw; Sat, 03 Oct 2020 13:43:02 +0000
Received: by outflank-mailman (input) for mailman id 2541;
 Sat, 03 Oct 2020 13:43:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOhoS-0000JY-Qh
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 13:43:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60eeee9c-e1c4-4eef-93f1-b7462b8964dd;
 Sat, 03 Oct 2020 13:42:58 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOhoP-0007me-Ha; Sat, 03 Oct 2020 13:42:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOhoP-0002rK-9G; Sat, 03 Oct 2020 13:42:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOhoP-0002u3-74; Sat, 03 Oct 2020 13:42:57 +0000
X-Inumbo-ID: 60eeee9c-e1c4-4eef-93f1-b7462b8964dd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dB3D85zguHnkGougXDFRIgu4Tb7pbYqNoV430wBrh3Q=; b=IeeBte94moWFVSr7eBhSIDapNw
	A/mx5PQLFNtaBM9AyNZ+rdvIaMfnVvWQ3IBmxyUy1AZGPkLpTF7tczGwfi9uHZe5rOrXYcwfKYJqf
	aKaceQcYlFzaytMgGNbMtjjuFzGEn2BuEtMWAoaCgf8kCsDIlKH3NSpy5DiIKuU+gg+o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155258-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 155258: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-freebsd10-i386:guest-localmigrate:fail:heisenbug
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=88f5b414ac0f8008c1e2b26f93c3d980120941f7
X-Osstest-Versions-That:
    xen=c663fa577b42e7f4731bb33fc7f94f7ffb05a1ef
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 13:42:57 +0000

flight 155258 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155258/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154358
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154358
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154358
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154358
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154358
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154358

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-i386 15 guest-localmigrate       fail pass in 155132

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  88f5b414ac0f8008c1e2b26f93c3d980120941f7
baseline version:
 xen                  c663fa577b42e7f4731bb33fc7f94f7ffb05a1ef

Last test of basis   154358  2020-09-15 09:40:09 Z   18 days
Failing since        154602  2020-09-22 02:37:01 Z   11 days    9 attempts
Testing same since   154625  2020-09-22 20:06:06 Z   10 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Chen <wei.chen@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 564 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 13:57:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 13:57:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2551.7332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOi2U-0001P3-AL; Sat, 03 Oct 2020 13:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2551.7332; Sat, 03 Oct 2020 13:57:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOi2U-0001Ow-71; Sat, 03 Oct 2020 13:57:30 +0000
Received: by outflank-mailman (input) for mailman id 2551;
 Sat, 03 Oct 2020 13:57:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDDO=DK=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kOi2S-0001Or-TB
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 13:57:29 +0000
Received: from out2-smtp.messagingengine.com (unknown [66.111.4.26])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f25ac761-053c-4a73-bae2-3e23c350f7d4;
 Sat, 03 Oct 2020 13:57:28 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id E03EB5C007D;
 Sat,  3 Oct 2020 09:57:27 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Sat, 03 Oct 2020 09:57:27 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id CBEA4328005A;
 Sat,  3 Oct 2020 09:57:26 -0400 (EDT)
X-Inumbo-ID: f25ac761-053c-4a73-bae2-3e23c350f7d4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=content-type:date:from:message-id
	:mime-version:subject:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm3; bh=rRcHf2ZexvPvfifjdfKFn/Ku1DnpV
	Pe5zMmLzF2yg+o=; b=OKLIol0VMBmxMKDtRmrw4VieRh3yLsvHScrkm4qAznpRv
	1V2III7Arbn2QzfghGVI7Q0fgXeUotdsf8fyyVKh4O6o+XNkLiAVibl16Fn5FrpH
	O42pjsVLoV7i1iiyMyju5nIbgKGvrY0tvb/+imefxAu6zAtJuVbBzpEoeInKhrCm
	4TWGqxcHOxTnq/Sr/Mnu0+WcCRPI8gbJdlBF4sjwDLMhF1BOwd/gzKB6WzeceV99
	PWhqEsKNyab2d7RZWV5vFTl2KMY+hmYzSTw4ErDz4Z5/H14531LCe4InPLqw8TzB
	y3rPcDnschj7xGlSZdx1AGY6vKipQ22z4shgA/7yw==
X-ME-Sender: <xms:R4N4X_YvTv5CLYiJ2-dQhZ6AnTwi_hOM7klsPTq4q4_u40Czbp1gnw>
    <xme:R4N4X-aLH1mZeDQ81PUDAKrY7CidudsmpM6V51guPJZGRMpSggoF_F4FA1wm093S_
    cRVgbjbSTlC9g>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrfeekgdeilecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkgggtugesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcuofgr
    rhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvihhsih
    gslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnheptddugfetudev
    udeiveevgfetueejlefggffghffhhfehtdfffeefgfduueegfefhnecukfhppeeluddrie
    egrddujedtrdekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhl
    fhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtoh
    hm
X-ME-Proxy: <xmx:R4N4Xx-Nh639lyS6aS891YbP4FYdwC735ejGCufxb2awpz3kBTEr1A>
    <xmx:R4N4X1pc-Or02wp0hhN0NkqS44gVKbLWpqnZhmicr639AYiiwM60Mg>
    <xmx:R4N4X6ry85mNR_scEjodn2oJGs_V-bxFL34eSAUr6C_7oHnK1EoYZw>
    <xmx:R4N4X31jvJyrWV-wrvXOO8_CB9m1_j2GpFA8s5q5pSPpCf2P5ZiQbg>
Date: Sat, 3 Oct 2020 15:57:23 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: S3 resume crash in memguard_guard_stack (stable-4.14)
Message-ID: <20201003135723.GO3962@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="8WA4ILJSyYAmUzbY"
Content-Disposition: inline


--8WA4ILJSyYAmUzbY
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: S3 resume crash in memguard_guard_stack (stable-4.14)

Hi,

My journey to get S3 working on Xen 4.14 continues...
My current setup is:
 - stable-4.14 (28855ebcdb)
 - with "x86/S3: Fix Shadow Stack resume path"
 - with efi_get_time() disabled
 - with "write_cr4(read_cr4())" just after "system_state =
   SYS_STATE_resume" (should be more or less equivalent to "x86/S3:
   Restore CR4 earlier during resume")
 - with "xen: credit2: limit the max number of CPUs in a runqueue"
   reverted

With this, I get a crash on S3 resume:

(XEN) Preparing system for ACPI S3 state.
(XEN) Disabling non-boot CPUs ...
(XEN) Entering ACPI S3 state.
(XEN) [VT-D]Passed iommu=no-igfx option.  Disabling IGD VT-d engine.
(XEN) mce_intel.c:773: MCA Capability: firstbank 0, extended MCE MSR 0, BCAST, CMCI
(XEN) CPU0 CMCI LVT vector (0xf1) already installed
(XEN) Finishing wakeup from ACPI S3 state.
(XEN) Enabling non-boot CPUs  ...
(XEN) ----[ Xen-4.14.1-pre  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d040311090>] memguard_guard_stack+0x7/0x1a5
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
(XEN) rax: ffff830250ca03f8   rbx: 0000000000000001   rcx: ffff830250cb10b0
(XEN) rdx: 0000003210739000   rsi: 0000000000000001   rdi: ffff830250ca0000
(XEN) rbp: ffff830049a6fd70   rsp: ffff830049a6fd40   r8:  0000000000000001
(XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000002
(XEN) r12: 0000000000010000   r13: 0000000000000000   r14: 0000000000000001
(XEN) r15: ffff82d040598440   cr0: 000000008005003b   cr4: 00000000003526e0
(XEN) cr3: 0000000049a5d000   cr2: ffff830250ca03f8
(XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
(XEN) Xen code around <ffff82d040311090> (memguard_guard_stack+0x7/0x1a5):
(XEN)  c3 48 8d 87 f8 03 00 00 <48> 89 87 f8 03 00 00 48 8d 87 f8 07 00 00 48 89
(XEN) Xen stack trace from rsp=ffff830049a6fd40:
(XEN)    ffff82d040321c2e ffff82d040461b68 ffff82d040461b60 ffff82d040461240
(XEN)    0000000000000001 0000000000000000 ffff830049a6fdb8 ffff82d040221f9c
(XEN)    ffff830049a6fde0 0000000000000001 0000000000000000 00000000ffffffef
(XEN)    ffff830049a6fe08 0000000000000001 ffff830250b66000 ffff830049a6fdd0
(XEN)    ffff82d0402036cf 0000000000000001 ffff830049a6fdf8 ffff82d040203a4d
(XEN)    0000000000000000 0000000000000001 0000000000000010 ffff830049a6fe28
(XEN)    ffff82d040203d00 ffff830049a6fef8 0000000000000000 0000000000000003
(XEN)    0000000000000200 ffff830049a6fe58 ffff82d040270c9a ffff830250139f70
(XEN)    ffff830250b45000 0000000000000000 0000000000000000 ffff830049a6fe78
(XEN)    ffff82d040207064 ffff830250b451b8 ffff82d0405781b0 ffff830049a6fe90
(XEN)    ffff82d04022b7bb ffff82d0405781a0 ffff830049a6fec0 ffff82d04022ba9c
(XEN)    0000000000000000 ffff82d0405781b0 ffff82d04057ed00 ffff82d040598440
(XEN)    ffff830049a6fef0 ffff82d0402f33e3 ffff830252b0e000 ffff830250b45000
(XEN)    ffff830252b0f000 0000000000000000 ffff830049a6fdc8 ffff88818ce029e0
(XEN)    ffffc900026b7f08 0000000000000003 0000000000000000 0000000000003403
(XEN)    ffffffff8277a5a8 0000000000000246 0000000000000003 0000000000003403
(XEN)    0000000000003403 0000000000000000 ffffffff810020ea 0000000000003403
(XEN)    0000000000000010 deadbeefdeadf00d 0000010000000000 ffffffff810020ea
(XEN)    000000000000e033 0000000000000246 ffffc900026b7cb8 000000000000e02b
(XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
(XEN) Xen call trace:
(XEN)    [<ffff82d040311090>] R memguard_guard_stack+0x7/0x1a5
(XEN)    [<ffff82d040321c2e>] S smpboot.c#cpu_smpboot_callback+0xe5/0x6d5
(XEN)    [<ffff82d040221f9c>] F notifier_call_chain+0x6b/0x96
(XEN)    [<ffff82d0402036cf>] F cpu.c#cpu_notifier_call_chain+0x1b/0x33
(XEN)    [<ffff82d040203a4d>] F cpu_up+0x5f/0xd5
(XEN)    [<ffff82d040203d00>] F enable_nonboot_cpus+0xea/0x1fb
(XEN)    [<ffff82d040270c9a>] F power.c#enter_state_helper+0x152/0x606
(XEN)    [<ffff82d040207064>] F domain.c#continue_hypercall_tasklet_handler+0x4c/0xb9
(XEN)    [<ffff82d04022b7bb>] F tasklet.c#do_tasklet_work+0x76/0xa9
(XEN)    [<ffff82d04022ba9c>] F do_tasklet+0x58/0x8a
(XEN)    [<ffff82d0402f33e3>] F domain.c#idle_loop+0x40/0x96
(XEN)
(XEN) Pagetable walk from ffff830250ca03f8:
(XEN)  L4[0x106] = 8000000049a5b063 ffffffffffffffff
(XEN)  L3[0x009] = 0000000250cae063 ffffffffffffffff
(XEN)  L2[0x086] = 0000000250cad063 ffffffffffffffff
(XEN)  L1[0x0a0] = 8000000250ca0161 ffffffffffffffff
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) FATAL PAGE FAULT
(XEN) [error_code=0003]
(XEN) Faulting linear address: ffff830250ca03f8
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
(XEN) Executing kexec image on cpu0
(XEN) Shot down all CPUs
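
For reference, the error_code=0003 in the panic can be decoded as follows (a quick sketch of the standard x86 page-fault error-code bit layout; not part of the original report):

```python
# Decode an x86 page-fault error code, such as the one in the Xen panic above.
# Bit layout per the x86 architecture: bit 0 = P (page was present),
# bit 1 = W/R (fault was a write), bit 2 = U/S (fault in user mode),
# bit 3 = RSVD (reserved bit set), bit 4 = I/D (instruction fetch).
PF_BITS = ["present", "write", "user", "rsvd", "instr-fetch"]

def decode_pf_error(code: int) -> list[str]:
    return [name for i, name in enumerate(PF_BITS) if code & (1 << i)]

# error_code=0003 from the log: a write to a present (write-protected) page,
# consistent with faulting on a read-only guard/shadow-stack mapping.
print(decode_pf_error(0x0003))  # ['present', 'write']
```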

The code in question seems to belong to this commit:

commit 91d26ed304ff562f341824be12bf49bd78c39e39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Apr 23 20:20:59 2020 +0100

    x86/shstk: Create shadow stacks

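Decoding the bytes around RIP by hand (a sketch following the x86-64 REX/ModRM encoding rules; only these two instruction forms are handled) shows a write through rdi+0x3f8, which matches cr2 = rdi + 0x3f8 in the register dump above:

```python
# The trace shows: 48 8d 87 f8 03 00 00  <48> 89 87 f8 03 00 00
# i.e. lea rax,[rdi+0x3f8] followed by the faulting mov [rdi+0x3f8],rax.
import struct

def decode(insn: bytes) -> str:
    # Expect REX.W (0x48), opcode, then ModRM with mod=10, rm=111 (disp32 off rdi).
    rex, opc, modrm = insn[0], insn[1], insn[2]
    assert rex == 0x48 and (modrm >> 6) == 0b10 and (modrm & 7) == 0b111
    assert (modrm >> 3) & 7 == 0      # reg field 000 -> rax
    disp = struct.unpack("<i", insn[3:7])[0]   # little-endian signed disp32
    if opc == 0x8D:                   # LEA r64, m
        return f"lea rax, [rdi+{disp:#x}]"
    if opc == 0x89:                   # MOV r/m64, r64
        return f"mov [rdi+{disp:#x}], rax"
    raise ValueError(f"unhandled opcode {opc:#x}")

print(decode(bytes.fromhex("488d87f8030000")))  # lea rax, [rdi+0x3f8]
print(decode(bytes.fromhex("488987f8030000")))  # mov [rdi+0x3f8], rax
```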

Disabling Shadow Stack in Kconfig makes the issue go away - I got S3 resume
working on this machine, at least once. Then it hung after the second S3
resume (most likely yet another problem...).

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--8WA4ILJSyYAmUzbY
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl94g0EACgkQ24/THMrX
1ywnrwgAlTTUDuGrBn7ZBOkHA8BC+K8GQRNNIPEgP4ATXpitO/ZYYnF11sIUw61R
zeO5FNrx2SdA9Bhsvzp+2xmbDQU12ifb+ZRu0YIEp4c/TrC4qzRyI474BOCqKJAn
l4g/WbM18n7PsKrIGSmg31mYHBDw5JpWvzTF2NWn62iIYrvaUWGbgecJzsRPlKDP
yRx8lt9DZmDHvkKEwF8zH0u3St/WC/aGyG/FUzedSCiZV4CK6r/zN6KLGNRTsVPL
MWYsOYnv+FSspEYFBOsKyw9qC75hy8pAWSPdjdbYdo/Lc/pmifouxHwG64zInHAD
ZBktKrQiPwqvwh6q8ExBfVGlWJVg8A==
=evJ1
-----END PGP SIGNATURE-----

--8WA4ILJSyYAmUzbY--


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 14:33:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 14:33:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2553.7344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOib9-0004oS-6q; Sat, 03 Oct 2020 14:33:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2553.7344; Sat, 03 Oct 2020 14:33:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOib9-0004oL-2q; Sat, 03 Oct 2020 14:33:19 +0000
Received: by outflank-mailman (input) for mailman id 2553;
 Sat, 03 Oct 2020 14:33:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a7Ix=DK=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kOib8-0004oG-Ae
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 14:33:18 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 601600c8-62cc-4ae4-a01d-c7bad4098dd5;
 Sat, 03 Oct 2020 14:33:17 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id x23so4366797wmi.3
 for <xen-devel@lists.xenproject.org>; Sat, 03 Oct 2020 07:33:17 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u8sm5516615wmj.45.2020.10.03.07.33.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 03 Oct 2020 07:33:15 -0700 (PDT)
X-Inumbo-ID: 601600c8-62cc-4ae4-a01d-c7bad4098dd5
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=5W0mBf5cSyEzSR4NyeVbtidTD9BwgrQSkD8xcXhRsuo=;
        b=sI6Lk6PqDc4MO0qAGhFzG62V/vXxDLhrw4IikO5pbZbDGbags7EqSLx06VPkJ4Ffr6
         JVsgaqvH5yKX/gi9TNRCeGlgUK8UDaOSgEpvYx1+i2g1OuBCKldIZpJjKma9tP4kVksK
         2zjDhqwzpKFd3fCM6gmvwJ74eMu6+Yq8+ec2OX293x5J/60tnZCbHBbfZRH8I0kI0z6I
         7U/VpwsLbqaKKleyAdWkE+Uznk/XeeWyfqTroh8WlmVqVkFtcNNxV4R9VYueuSg2gdqE
         kDfkr3fQsmhKRh/8nuFCUlFZFG/c43fZ4EwTPKu6wKISKpgymytU5InY8lS7bnNde2/g
         BhqA==
X-Gm-Message-State: AOAM5336rTcHsZoFDhnWuVGwsBlAb3xLRSf+CYohQPT5XlHTjiJGOB+T
	PrYGwbNaH2IDE9z2cqhvWy4=
X-Google-Smtp-Source: ABdhPJx/b+JmNuoUlxWgCeR7GA3GaqLXAEggfmbFSpXPg4k667nKinWX86r31HWV7koYb3N8jQef1g==
X-Received: by 2002:a05:600c:2053:: with SMTP id p19mr7984264wmg.50.1601735596576;
        Sat, 03 Oct 2020 07:33:16 -0700 (PDT)
Date: Sat, 3 Oct 2020 14:33:14 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
	Paul Durrant <pdurrant@amazon.com>, Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH v9 1/8] xen/common: introduce a new framework for
 save/restore of 'domain' context
Message-ID: <20201003143314.vtviqi2mur5ajobd@liuwe-devbox-debian-v2>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-2-paul@xen.org>
 <2e51a5cb-df0c-d564-2a7b-5f2abbb5872c@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2e51a5cb-df0c-d564-2a7b-5f2abbb5872c@citrix.com>
User-Agent: NeoMutt/20180716

On Fri, Oct 02, 2020 at 10:20:18PM +0100, Andrew Cooper wrote:
[...]
> P.S. Another good reason for having extremely simple header files is for
> the poor soul trying to write a Go/Rust/other binding for this in some
> likely not-too-distant future.

For Rust, the header is going to be generated by a tool called bindgen.
It doesn't like nested macros, so I would be all for a simpler C header
file if we can manage it.

Wei.


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 17:37:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 17:37:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2570.7384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOlTL-0003db-4y; Sat, 03 Oct 2020 17:37:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2570.7384; Sat, 03 Oct 2020 17:37:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOlTL-0003dU-0X; Sat, 03 Oct 2020 17:37:27 +0000
Received: by outflank-mailman (input) for mailman id 2570;
 Sat, 03 Oct 2020 17:37:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOlTJ-0003dO-Lv
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 17:37:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 938791b2-6676-4198-aa64-5d2f6d1a9161;
 Sat, 03 Oct 2020 17:37:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOlTG-0004fj-D4; Sat, 03 Oct 2020 17:37:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOlTG-0008NV-3L; Sat, 03 Oct 2020 17:37:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOlTG-0004Rt-2q; Sat, 03 Oct 2020 17:37:22 +0000
X-Inumbo-ID: 938791b2-6676-4198-aa64-5d2f6d1a9161
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j8gvzrbvHEg4eLMrXE5JQu4mjhV1S4g4gvTEcSZ/a68=; b=x3Bxu5URPzTsyzfVLUs/gZUIOc
	rLBtsctfzR0l+w+/Qx+3CurKPHv5NcjH9A3myFkeK3eEIBPCcFw+ryNJPHIAuyzf5OrdtwLOccdEQ
	noAs42JOu9mwbBRHkfDQD6a6ihJt1H5GfXNLJ4VAR0yN6znahveZZDUNH/6Bhrw0p4YM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155281-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 155281: regressions - FAIL
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.11-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=3263f257caf8e4465e9dca84a88fa0e68be74280
X-Osstest-Versions-That:
    xen=ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 17:37:22 +0000

flight 155281 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155281/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 151714
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 151714
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 151714
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 151714
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 151714
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 151714

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  3263f257caf8e4465e9dca84a88fa0e68be74280
baseline version:
 xen                  ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d

Last test of basis   151714  2020-07-07 13:35:55 Z   88 days
Testing same since   154619  2020-09-22 15:37:30 Z   11 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 359 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 19:58:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 19:58:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2590.7429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOnfj-00075l-QE; Sat, 03 Oct 2020 19:58:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2590.7429; Sat, 03 Oct 2020 19:58:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOnfj-00075e-NG; Sat, 03 Oct 2020 19:58:23 +0000
Received: by outflank-mailman (input) for mailman id 2590;
 Sat, 03 Oct 2020 19:58:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3plm=DK=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1kOnfi-00075Z-Kx
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 19:58:22 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe7d3c7e-589c-4256-9443-d4a62689f3d9;
 Sat, 03 Oct 2020 19:58:22 +0000 (UTC)
X-Inumbo-ID: fe7d3c7e-589c-4256-9443-d4a62689f3d9
Subject: Re: [GIT PULL] xen: branch for v5.9-rc8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601755101;
	bh=LcL5cn9XLk5mudrBuslWNR7HhK+P5LDzqeGHvnZrkCQ=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=aMRjThDceaYwrUwlyauhNwGpf6lqCWx+LDH4EH9XF1FOyZKeNrqPzPLDo4VQQdm7/
	 UN+V5tmLjyIITJWx4GsS/PEj7k9z7F/i6ZeMaPs9OxITP23XXGVIN4ERGyTQuz1zQQ
	 i0jlWil1amaPWZqPwYKbxUlRMCg8KlqdgexfA2Qw=
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201003073253.1861-1-jgross@suse.com>
References: <20201003073253.1861-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20201003073253.1861-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.9b-rc8-tag
X-PR-Tracked-Commit-Id: 0891fb39ba67bd7ae023ea0d367297ffff010781
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 5ee56135b2f5886cb0333d18640043a8f73fa830
Message-Id: <160175510128.27812.2872072780024080723.pr-tracker-bot@kernel.org>
Date: Sat, 03 Oct 2020 19:58:21 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Sat,  3 Oct 2020 09:32:53 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.9b-rc8-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/5ee56135b2f5886cb0333d18640043a8f73fa830

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 20:10:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 20:10:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2593.7444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOnrb-0000OI-0T; Sat, 03 Oct 2020 20:10:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2593.7444; Sat, 03 Oct 2020 20:10:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOnra-0000OB-T8; Sat, 03 Oct 2020 20:10:38 +0000
Received: by outflank-mailman (input) for mailman id 2593;
 Sat, 03 Oct 2020 20:10:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOnrZ-0000Nd-4B
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 20:10:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f64276f2-c38a-48f8-8c6e-f8cf3d48574b;
 Sat, 03 Oct 2020 20:10:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOnrR-0007xL-1b; Sat, 03 Oct 2020 20:10:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOnrQ-0007Pz-QV; Sat, 03 Oct 2020 20:10:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOnrQ-0000Fw-Q1; Sat, 03 Oct 2020 20:10:28 +0000
X-Inumbo-ID: f64276f2-c38a-48f8-8c6e-f8cf3d48574b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f2XMM9y3UNOQmbgOa8ofiUdQZ0z4ltmU9ZssEsS4+kM=; b=PNQlKj7U97b4xz45oUx9Ng7WtL
	T4VjhKTy6MpB4rZQePWhZHqPEBcZy5T/S2M2H4Tg5v5yJChxyEnNjDeHFBTKIuCVEusxi1SLbkpmn
	QLajWP5myoB2C/EjIoucBwc2/iRJvX+ZA/xfBB9B44OvxTe4aBGl38JYTE4r7E+o/kSc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155301-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155301: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=c6c23415706ee303a9fbeee5326a4e504645fe3e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 20:10:28 +0000

flight 155301 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155301/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              c6c23415706ee303a9fbeee5326a4e504645fe3e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   85 days
Failing since        151818  2020-07-11 04:18:52 Z   84 days   78 attempts
Testing same since   155301  2020-10-02 06:09:12 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17936 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 21:08:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 21:08:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2601.7468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOokr-0004n5-3I; Sat, 03 Oct 2020 21:07:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2601.7468; Sat, 03 Oct 2020 21:07:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOokq-0004my-UL; Sat, 03 Oct 2020 21:07:44 +0000
Received: by outflank-mailman (input) for mailman id 2601;
 Sat, 03 Oct 2020 21:07:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+91D=DK=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kOokp-0004mt-E7
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 21:07:43 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c0b306d-3d40-44d6-99e6-90bb6b06ee03;
 Sat, 03 Oct 2020 21:07:42 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kOokm-000MW3-45; Sat, 03 Oct 2020 21:07:40 +0000
X-Inumbo-ID: 1c0b306d-3d40-44d6-99e6-90bb6b06ee03
Date: Sat, 3 Oct 2020 22:07:40 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH 0/2] x86/mm: {paging,sh}_{cmpxchg,write}_guest_entry()
 adjustments
Message-ID: <20201003210740.GA86473@deinos.phlegethon.org>
References: <a7d93ec1-ed89-fbdb-1b52-4091870f7fab@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <a7d93ec1-ed89-fbdb-1b52-4091870f7fab@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 13:56 +0200 on 28 Sep (1601301371), Jan Beulich wrote:
> 1: {paging,sh}_{cmpxchg,write}_guest_entry() cannot fault
> 2: remove some indirection from {paging,sh}_cmpxchg_guest_entry()

Acked-by: Tim Deegan <tim@xen.org>


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 21:53:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 21:53:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2606.7486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOpT6-0000Ya-Mr; Sat, 03 Oct 2020 21:53:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2606.7486; Sat, 03 Oct 2020 21:53:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOpT6-0000YT-Jb; Sat, 03 Oct 2020 21:53:28 +0000
Received: by outflank-mailman (input) for mailman id 2606;
 Sat, 03 Oct 2020 21:53:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOpT5-0000XC-K2
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 21:53:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff58e573-f7a4-4edd-90c4-7b467cce3a61;
 Sat, 03 Oct 2020 21:53:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOpSy-0001bt-03; Sat, 03 Oct 2020 21:53:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOpSx-0005O3-M9; Sat, 03 Oct 2020 21:53:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOpSx-0004A3-Le; Sat, 03 Oct 2020 21:53:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOpT5-0000XC-K2
	for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 21:53:27 +0000
X-Inumbo-ID: ff58e573-f7a4-4edd-90c4-7b467cce3a61
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ff58e573-f7a4-4edd-90c4-7b467cce3a61;
	Sat, 03 Oct 2020 21:53:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5iLcwXCaTRTgr1dDnsQAd+gBgDScyd9rKFapCogg0w8=; b=Zp5/Hz05aAxknJle+m9waflDv1
	IjThrWAt0sNFJhDsY7Ln+dG26o0QceByK4M43w3cB4WUS9whuXERyaPXJQfXWPKFNe52ievM/VCua
	6Gnd2r51aoy+JUv2PExFhA0AczkbRtCoj+M55mdfX9nnYJVAYZXVSKc8ICOLw9AuYBUQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155288-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 155288: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:guest-localmigrate:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl:guest-start/debian.repeat:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=0186e76a62f7409804c2e4785d5a11e7f82a7c52
X-Osstest-Versions-That:
    xen=0446e3db13671032b05d19f6117d902f5c5c76fa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 21:53:19 +0000

flight 155288 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155288/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154601
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154601
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154601
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154601
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154601
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154601

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-amd 16 guest-localmigrate        fail pass in 155152
 test-amd64-amd64-xl          20 guest-start/debian.repeat  fail pass in 155152

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    17 guest-localmigrate/x10       fail  like 154601
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  0186e76a62f7409804c2e4785d5a11e7f82a7c52
baseline version:
 xen                  0446e3db13671032b05d19f6117d902f5c5c76fa

Last test of basis   154601  2020-09-22 02:37:00 Z   11 days
Testing same since   154622  2020-09-22 16:36:57 Z   11 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 473 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 03 22:22:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 03 Oct 2020 22:22:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2613.7506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOpug-0003CC-7q; Sat, 03 Oct 2020 22:21:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2613.7506; Sat, 03 Oct 2020 22:21:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOpug-0003C5-4k; Sat, 03 Oct 2020 22:21:58 +0000
Received: by outflank-mailman (input) for mailman id 2613;
 Sat, 03 Oct 2020 22:21:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cIIx=DK=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOpue-0003C0-Jg
 for xen-devel@lists.xenproject.org; Sat, 03 Oct 2020 22:21:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43f82298-d77c-4b5d-aaea-5133fd8a80dd;
 Sat, 03 Oct 2020 22:21:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOpuc-0002Ci-CP; Sat, 03 Oct 2020 22:21:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOpuc-0006kL-4Z; Sat, 03 Oct 2020 22:21:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOpuc-0000QR-3s; Sat, 03 Oct 2020 22:21:54 +0000
X-Inumbo-ID: 43f82298-d77c-4b5d-aaea-5133fd8a80dd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/bBP8MPgg3itRc25j8wCLLQGM5H7xwDpKLWdUme3Hzo=; b=yEcXvONYXrASryUn6EOLjthFFW
	nnua9Zyc0m3eXUlvq8GqLsbFNKIaHQ+Cpc2IAl6z3LTBexkcrsM7XhAmSzWm4sPisON4JPD/W3LP3
	BXt5VZCzQJ2O/5249/yQNEW/j1E8IVo5sNLHNhsGXszP9KufjT6XhnjwxoPDwPfpVfgY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155397-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155397: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    libvirt:build-i386-libvirt:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):starved:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):starved:nonblocking
    libvirt:build-i386-pvops:hosts-allocate:starved:nonblocking
    libvirt:build-i386-xsm:hosts-allocate:starved:nonblocking
    libvirt:build-i386:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    libvirt=0bb796bda31103ebf54eefc11c471586c87b95fd
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 03 Oct 2020 22:21:54 +0000

flight 155397 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155397/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt       1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               starved  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) starved n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               starved  n/a
 build-i386-pvops              2 hosts-allocate               starved  n/a
 build-i386-xsm                2 hosts-allocate               starved  n/a
 build-i386                    2 hosts-allocate               starved  n/a

version targeted for testing:
 libvirt              0bb796bda31103ebf54eefc11c471586c87b95fd
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   85 days
Failing since        151818  2020-07-11 04:18:52 Z   84 days   79 attempts
Testing same since   155397  2020-10-03 20:12:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               starved 
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   starved 
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           starved 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            starved 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  starved 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      starved 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 starved 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18188 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 02:25:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 02:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2641.7567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOti5-0008Ns-5Q; Sun, 04 Oct 2020 02:25:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2641.7567; Sun, 04 Oct 2020 02:25:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOti5-0008Nl-2F; Sun, 04 Oct 2020 02:25:13 +0000
Received: by outflank-mailman (input) for mailman id 2641;
 Sun, 04 Oct 2020 02:25:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOti3-0008Ng-NM
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 02:25:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b37a8ad2-9c13-457e-a093-ec46f5b3ef7b;
 Sun, 04 Oct 2020 02:25:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOthz-0000FP-Fu; Sun, 04 Oct 2020 02:25:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOthy-0002EP-Q4; Sun, 04 Oct 2020 02:25:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOthy-0003XH-Ma; Sun, 04 Oct 2020 02:25:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOti3-0008Ng-NM
	for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 02:25:11 +0000
X-Inumbo-ID: b37a8ad2-9c13-457e-a093-ec46f5b3ef7b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b37a8ad2-9c13-457e-a093-ec46f5b3ef7b;
	Sun, 04 Oct 2020 02:25:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9JdZQVEZKy/MGdoou3inNUrp9mDTxYskNccZgxYQSUE=; b=Wr4bPTwmyNpiI9QjBMTaS7FPzm
	ROvM7B4OGIBuerbY9c7NzpYmxHWRHFk7rGEvqRuD+QTNOCL6Y2JoP3Fy+2urvYhwuNaJOZ5Ei+Ysu
	Klp0qDOEt2Ur4JMLfvZCE3BKEiaOuEvG24qsqGjSV7P0eeGDOGAST5+jEXFACMNq8C8M=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOthz-0000FP-Fu; Sun, 04 Oct 2020 02:25:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOthy-0002EP-Q4; Sun, 04 Oct 2020 02:25:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOthy-0003XH-Ma; Sun, 04 Oct 2020 02:25:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155303-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 155303: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=f37a1cf023b277d0d49323bf322ce3ff0c92262d
X-Osstest-Versions-That:
    xen=28855ebcdbfa437e60bc16c761405476fe16bc39
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 02:25:06 +0000

flight 155303 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155303/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154350
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154350
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154350
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154350
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154350

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  f37a1cf023b277d0d49323bf322ce3ff0c92262d
baseline version:
 xen                  28855ebcdbfa437e60bc16c761405476fe16bc39

Last test of basis   154350  2020-09-15 00:36:14 Z   19 days
Failing since        154617  2020-09-22 14:37:47 Z   11 days    6 attempts
Testing same since   154641  2020-09-23 10:32:29 Z   10 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 506 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 05:28:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 05:28:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2669.7630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOwYk-0007MB-Gh; Sun, 04 Oct 2020 05:27:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2669.7630; Sun, 04 Oct 2020 05:27:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOwYk-0007M4-C5; Sun, 04 Oct 2020 05:27:46 +0000
Received: by outflank-mailman (input) for mailman id 2669;
 Sun, 04 Oct 2020 05:27:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOwYj-0007Lz-Ib
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 05:27:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffcc0928-8533-4fd2-9321-3835551ba52c;
 Sun, 04 Oct 2020 05:27:41 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOwYf-0004P9-EV; Sun, 04 Oct 2020 05:27:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOwYe-0000UY-NE; Sun, 04 Oct 2020 05:27:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOwYe-0003BG-Mj; Sun, 04 Oct 2020 05:27:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOwYj-0007Lz-Ib
	for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 05:27:45 +0000
X-Inumbo-ID: ffcc0928-8533-4fd2-9321-3835551ba52c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ffcc0928-8533-4fd2-9321-3835551ba52c;
	Sun, 04 Oct 2020 05:27:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=TWXiAgo/uRCBlzw/NT/XJWBKEzBwt7G35vktre4gJ1Q=; b=C5Jun9dDpddmabx3oxnkZ+kP6C
	aGxb5Ce5PYSReKvY8j8XPnBF199JtMyIYW/rsfVPIp+h7K/InIDJMAkCDyYXdcqxje0ryEztarjZh
	0HC3mPpbPM0ga5/zIfO83TVXIDSnkqMJsyvGuw/HdAilDvd+mOAEDSDf1pAnaUTtRshI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.13-testing bisection] complete test-amd64-amd64-libvirt-xsm
Message-Id: <E1kOwYe-0003BG-Mj@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 05:27:40 +0000

branch xen-4.13-testing
xenbranch xen-4.13-testing
job test-amd64-amd64-libvirt-xsm
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_gnulib https://git.savannah.gnu.org/git/gnulib.git/
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  21054297bf832d8eacd73dc428f55168522b0d86
  Bug not present: a8122e991da70ac1ee9f88e34e003d2169a5b114
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155425/


  commit 21054297bf832d8eacd73dc428f55168522b0d86
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:26:01 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.13-testing/test-amd64-amd64-libvirt-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.13-testing/test-amd64-amd64-libvirt-xsm.guest-start --summary-out=tmp/155425.bisection-summary --basis-template=154358 --blessings=real,real-bisect xen-4.13-testing test-amd64-amd64-libvirt-xsm guest-start
Searching for failure / basis pass:
 155258 fail [host=fiano0] / 154602 [host=godello0] 154358 [host=godello1] 152528 [host=godello0] 151712 [host=huxelrebe1] 151337 [host=godello0] 151153 [host=godello1] 151048 [host=pinot0] 150944 [host=chardonnay0] 150177 [host=elbling1] 150073 [host=albana0] 150042 [host=godello1] 149836 [host=italia0] 149664 [host=debina1] 149647 ok.
Failure / basis pass flights: 155258 / 149647
(tree with no url: minios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_gnulib https://git.savannah.gnu.org/git/gnulib.git/
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
Basis pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bd6aa93296de36c5afabd34e4fa4083bccb8488d d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 6a3b59ab9c7dc00331c21346052dfa6a0df45aa3 b66ce5058ec9ce84418cedd39b2bf07b7c5a1f65
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#9d6920bd7de3f92be1894790adeb689060ab25eb-9d6920bd7de3f92be1894790adeb689060ab25eb https://git.savannah.gnu.org/git/gnulib.git/#1f6fb368c04919243e2c70f2aa514a5f88e95309-1f6fb368c04919243e2c70f2aa514a5f88e95309 https://gitlab.com/keycodemap/keycodemapdb.git#6280c94f306df6a20bbc100ba15a5a81af0366e6-6280c94f306df6a20bbc100ba15a5a81af0366e6 git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc\
 7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#bd6aa93296de36c5afabd34e4fa4083bccb8488d-d8ab884fe9b4dd148980bf0d8673187f8fb25887 git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484fe09f50876798-d0d8ad39ecb51cd7497cd524484fe09f50876798 git://xenbits.xen.org/qemu-xen.git#933ebad2470a169504799a1d95b8e410bd9847\
 ef-730e2b1927e7d911bbd5350714054ddd5912f4ed git://xenbits.xen.org/osstest/seabios.git#6a3b59ab9c7dc00331c21346052dfa6a0df45aa3-41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 git://xenbits.xen.org/xen.git#b66ce5058ec9ce84418cedd39b2bf07b7c5a1f65-88f5b414ac0f8008c1e2b26f93c3d980120941f7
Loaded 88564 nodes in revision graph
Searching for test results:
 149647 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bd6aa93296de36c5afabd34e4fa4083bccb8488d d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 6a3b59ab9c7dc00331c21346052dfa6a0df45aa3 b66ce5058ec9ce84418cedd39b2bf07b7c5a1f65
 149664 [host=debina1]
 149836 [host=italia0]
 150042 [host=godello1]
 150073 [host=albana0]
 150177 [host=elbling1]
 150944 [host=chardonnay0]
 151048 [host=pinot0]
 151153 [host=godello1]
 151337 [host=godello0]
 151712 [host=huxelrebe1]
 152528 [host=godello0]
 154358 [host=godello1]
 154602 [host=godello0]
 154625 fail irrelevant
 154667 fail irrelevant
 155015 []
 155062 fail irrelevant
 155132 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 155309 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 bd6aa93296de36c5afabd34e4fa4083bccb8488d d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 6a3b59ab9c7dc00331c21346052dfa6a0df45aa3 b66ce5058ec9ce84418cedd39b2bf07b7c5a1f65
 155364 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 155367 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 88ab0c15525ced2eefe39220742efe4769089ad8 1c7a98cab9101d8fedadd0bb0ccafc4498b37560
 155369 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 422e93e1de6f265ff48eaacc8cf7c44d6401062e d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 155821a1990b6de78dde5f98fa5ab90e802021e0 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f
 155372 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5ba203b54e5953572e279e5505cd65e4cc360e34 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 155821a1990b6de78dde5f98fa5ab90e802021e0 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f
 155258 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 155374 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7bcb021a6d54c5775c0fa1a3ea003b61f5c966ed d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 aa1d9a7dbfe07905f0b7218bcd433a513f762eb9
 155378 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 88f5b414ac0f8008c1e2b26f93c3d980120941f7
 155382 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 42fcdd42328f9819530f3f0350f9b851acc7c1a0
 155387 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 f63b20a213ecaa672cf40b4627eb1eea9542cb58
 155393 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 e1364e05f92d6c2f12cc77f100cea584354c66cb
 155398 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 43572a4cd97902ba0155b922a4d2e99fb945ec2b
 155403 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114
 155408 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 21054297bf832d8eacd73dc428f55168522b0d86
 155412 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114
 155416 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 21054297bf832d8eacd73dc428f55168522b0d86
 155421 pass 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114
 155425 fail 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 21054297bf832d8eacd73dc428f55168522b0d86
Searching for interesting versions
 Result found: flight 149647 (pass), for basis pass
 For basis failure, parent search stopping at 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114, results HASH(0x55c976943580) HASH(0x55c9769\
 2bfb0) HASH(0x55c97073e7e0) For basis failure, parent search stopping at 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 e1364e05f92d6c2f12cc77f100cea584354c66cb, results HASH(0x5\
 5c97690b888) For basis failure, parent search stopping at 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 42fcdd42328f9819530f3f0350f9b851acc7c1a0, results HASH(0x55c976917df0) Fo\
 r basis failure, parent search stopping at 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7bcb021a6d54c5775c0fa1a3ea003b61f5c966ed d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 aa1d9a7dbfe07905f0b7218bcd433a513f762eb9, results HASH(0x55c97695e680) For basis failure\
 , parent search stopping at 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5ba203b54e5953572e279e5505cd65e4cc360e34 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 155821a1990b6de78dde5f98fa5ab90e802021e0 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f, results HASH(0x55c976924448) For basis failure, parent search\
  stopping at 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 422e93e1de6f265ff48eaacc8cf7c44d6401062e d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 155821a1990b6de78dde5f98fa5ab90e802021e0 9b367b2b0b714f3ffb69ed6be0a118e8d3eac07f, results HASH(0x55c97694e7d0) For basis failure, parent search stopping at 9d\
 6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 627d1d6693b0594d257dbe1a3363a8d4bd4d8307 d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 88ab0c15525ced2eefe39220742efe4769089ad8 1c7a98cab9101d8fedadd0bb0ccafc4498b37560, results HASH(0x55c976947290) Result found: flight 155132 (fail), for basis failure (at ance\
 stor ~752)
 Repro found: flight 155309 (pass), for basis pass
 Repro found: flight 155378 (fail), for basis failure
 0 revisions at 9d6920bd7de3f92be1894790adeb689060ab25eb 1f6fb368c04919243e2c70f2aa514a5f88e95309 6280c94f306df6a20bbc100ba15a5a81af0366e6 c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 730e2b1927e7d911bbd5350714054ddd5912f4ed 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a8122e991da70ac1ee9f88e34e003d2169a5b114
No revisions left to test, checking graph state.
 Result found: flight 155403 (pass), for last pass
 Result found: flight 155408 (fail), for first failure
 Repro found: flight 155412 (pass), for last pass
 Repro found: flight 155416 (fail), for first failure
 Repro found: flight 155421 (pass), for last pass
 Repro found: flight 155425 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  21054297bf832d8eacd73dc428f55168522b0d86
  Bug not present: a8122e991da70ac1ee9f88e34e003d2169a5b114
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155425/


  commit 21054297bf832d8eacd73dc428f55168522b0d86
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:26:01 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.739278 to fit
pnmtopng: 73 colors found
Revision graph left in /home/logs/results/bisect/xen-4.13-testing/test-amd64-amd64-libvirt-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155425: tolerable FAIL

flight 155425 xen-4.13-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155425/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm 12 guest-start             fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt-xsm                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Oct 04 05:49:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 05:49:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2674.7648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOwtX-0000k4-6W; Sun, 04 Oct 2020 05:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2674.7648; Sun, 04 Oct 2020 05:49:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOwtX-0000jx-3E; Sun, 04 Oct 2020 05:49:15 +0000
Received: by outflank-mailman (input) for mailman id 2674;
 Sun, 04 Oct 2020 05:49:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOwtV-0000jV-9a
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 05:49:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5da8120d-b493-4fe5-81b9-2ac5e90e3e88;
 Sun, 04 Oct 2020 05:48:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOwtH-0004qT-Dp; Sun, 04 Oct 2020 05:48:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOwtH-0001Lw-0T; Sun, 04 Oct 2020 05:48:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOwtG-0007x1-W2; Sun, 04 Oct 2020 05:48:58 +0000
X-Inumbo-ID: 5da8120d-b493-4fe5-81b9-2ac5e90e3e88
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EdxBQp7EnZNY2HhMdY+bFRn+Rp44nv5zWeFXrcf5MdY=; b=YCqeYBYUzysf+0BnHq8uUor27T
	mmt56ivhoWQwHpBnM4vb2me4Qi87KQABOVtSgysSKpv45vas54q9aOXOAoKVFryMSGYPSq6KrDOS2
	ZYICQL/JvbpJZ6p6mhdwEkZDAaAeNMfQmikgDNMv4Zci5xwB0o9L3j1IQTkMMA8EpyhA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155318-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155318: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=b5ce42f5d138d7546f9faa2decbd6ee8702243a3
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 05:48:58 +0000

flight 155318 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155318/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 11 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-start/freebsd.repeat fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-start/freebsd.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass

version targeted for testing:
 qemuu                b5ce42f5d138d7546f9faa2decbd6ee8702243a3
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   44 days
Failing since        152659  2020-08-21 14:07:39 Z   43 days   75 attempts
Testing same since   155318  2020-10-02 10:42:26 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 37200 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 06:54:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 06:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2686.7678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOxuA-0006jh-Ix; Sun, 04 Oct 2020 06:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2686.7678; Sun, 04 Oct 2020 06:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOxuA-0006jM-Ew; Sun, 04 Oct 2020 06:53:58 +0000
Received: by outflank-mailman (input) for mailman id 2686;
 Sun, 04 Oct 2020 06:53:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOxu9-0006iN-5s
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 06:53:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf64e334-6679-47cb-b5bc-76c2b605e746;
 Sun, 04 Oct 2020 06:53:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOxu1-0006E2-2G; Sun, 04 Oct 2020 06:53:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOxu0-0006nM-DL; Sun, 04 Oct 2020 06:53:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOxu0-0004h6-Cs; Sun, 04 Oct 2020 06:53:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kOxu9-0006iN-5s
	for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 06:53:57 +0000
X-Inumbo-ID: cf64e334-6679-47cb-b5bc-76c2b605e746
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cf64e334-6679-47cb-b5bc-76c2b605e746;
	Sun, 04 Oct 2020 06:53:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=luJQLK3azXx60x7cV1nMmwWqHN4aj0y27wCxOLBJPKg=; b=Qcd7eBmnvLQa4cObgA1xPGrpAY
	9PeKRaTp6tNbxXpu/DkLe6qQ7a1YShSb+d6j8hN5PwiQ62CxkzSZcDCAhffxDqlDRR4PzRdr7koLI
	rIHIJ5kvMHUExLwF9H5eo2hBRq7MPbnA+FX90GSS+/KEO1qf2htr5RXCFjdTFS5bzBOE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOxu1-0006E2-2G; Sun, 04 Oct 2020 06:53:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOxu0-0006nM-DL; Sun, 04 Oct 2020 06:53:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kOxu0-0004h6-Cs; Sun, 04 Oct 2020 06:53:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.12-testing bisection] complete test-amd64-amd64-xl-xsm
Message-Id: <E1kOxu0-0004h6-Cs@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 06:53:48 +0000

branch xen-4.12-testing
xenbranch xen-4.12-testing
job test-amd64-amd64-xl-xsm
testid guest-start

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Bug not present: 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155435/


  commit 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:10:32 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.12-testing/test-amd64-amd64-xl-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.12-testing/test-amd64-amd64-xl-xsm.guest-start --summary-out=tmp/155435.bisection-summary --basis-template=154601 --blessings=real,real-bisect xen-4.12-testing test-amd64-amd64-xl-xsm guest-start
Searching for failure / basis pass:
 155288 fail [host=chardonnay1] / 154601 [host=godello0] 154121 [host=pinot1] 152525 [host=godello0] 151715 [host=godello1] 151388 ok.
Failure / basis pass flights: 155288 / 151388
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 849c5e50b6f474df6cc113130575bcdccfafcd9e 0186e76a62f7409804c2e4785d5a11e7f82a7c52
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4a2258a1fec66665481b0bd929b049921cb07a0 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb d11c75185276ded944f2ea0277532b7fee849bbc 050fe48dc981e0488de1f6c6c07d8110f3b7523b
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#a4a2258a1fec66665481b0bd929b049921cb07a0-d8ab884fe9b4dd148980bf0d8673187f8fb25887 git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484fe09f50876798-d0d8ad39ecb51cd7497cd524484fe09f50876798 git://xenbits.xen.org/qemu-xen.git#8023a62081ffbe3f734019076ec1a2b4213142bb-8023a62081ffbe3f734019076ec1a2b4213142bb git://xenbits.xen.org/osstest/seabios.git#d11c75185276ded944f2ea0277532b7fee849bbc-849c5e50b6f474df6cc113130575bcdccfafcd9e git://xenbits.xen.org/xen.git#050fe48dc981e0488de1f6c6c07d8110f3b7523b-0186e76a62f7409804c2e4785d5a11e7f82a7c52
Loaded 2982 nodes in revision graph
Searching for test results:
 151367 [host=fiano0]
 151388 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4a2258a1fec66665481b0bd929b049921cb07a0 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb d11c75185276ded944f2ea0277532b7fee849bbc 050fe48dc981e0488de1f6c6c07d8110f3b7523b
 151715 [host=godello1]
 152525 [host=godello0]
 154121 [host=pinot1]
 154601 [host=godello0]
 154622 fail irrelevant
 154663 fail irrelevant
 155014 fail irrelevant
 155075 fail irrelevant
 155152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155361 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4a2258a1fec66665481b0bd929b049921cb07a0 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb d11c75185276ded944f2ea0277532b7fee849bbc 050fe48dc981e0488de1f6c6c07d8110f3b7523b
 155376 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155379 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9275bb82caf95c31c2e58a5c14b3feabf46bdf0b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933
 155381 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 53b40c9c6d108e8c0e1500a288638623fee92bca d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933
 155384 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1b461403ee723dab01d5828714cca0b9396a6b3c d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a81e6557b9864e4288a63cbbbd3a6f98d3a74862
 155386 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 320e7a7369245d4304ac822e67740a7ea147e7a2
 155390 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cfd61e688f9f1736ff0311f49040669f04ac1ea6
 155395 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3e039e12ecfdefbf3ecbc5a63052620a1fe51ad5
 155288 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 849c5e50b6f474df6cc113130575bcdccfafcd9e 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155400 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155404 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 849c5e50b6f474df6cc113130575bcdccfafcd9e 0186e76a62f7409804c2e4785d5a11e7f82a7c52
 155407 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8e25d522a3fc236c0c7a02541e8071afa031386b
 155411 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
 155414 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155419 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
 155427 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
 155435 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
Searching for interesting versions
 Result found: flight 151388 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c, results HASH(0x56277e795578) HASH(0x56277e361238) HASH(0x56277e710bf8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1\
 e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3e039e12ecfdefbf3ecbc5a63052620a1fe51ad5, results HASH(0x56277e709690) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f73401907\
 6ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 320e7a7369245d4304ac822e67740a7ea147e7a2, results HASH(0x56277e7915d0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1b461403ee723dab01d5828714cca0b9396a6b3c d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a81e6557b9864e4288a63cbbbd3a6f98d3a74862, results HASH(0x56277e773140) For basis\
  failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 53b40c9c6d108e8c0e1500a288638623fee92bca d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933, results HASH(0x56277e7912d0) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9275bb82caf95c31c2e5\
 8a5c14b3feabf46bdf0b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 155821a1990b6de78dde5f98fa5ab90e802021e0 1336ca17742471fc4a59879ae2f637a59530a933, results HASH(0x56277e7502a8) For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a4a2258a1fec66665481b0bd929b049921cb07a0 d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb d11c75185276ded944f2ea0277532b7fee8\
 49bbc 050fe48dc981e0488de1f6c6c07d8110f3b7523b, results HASH(0x56277e714c08) HASH(0x56277e78abe8) Result found: flight 155152 (fail), for basis failure (at ancestor ~2981)
 Repro found: flight 155361 (pass), for basis pass
 Repro found: flight 155404 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b d0d8ad39ecb51cd7497cd524484fe09f50876798 8023a62081ffbe3f734019076ec1a2b4213142bb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
No revisions left to test, checking graph state.
 Result found: flight 155400 (pass), for last pass
 Result found: flight 155411 (fail), for first failure
 Repro found: flight 155414 (pass), for last pass
 Repro found: flight 155419 (fail), for first failure
 Repro found: flight 155427 (pass), for last pass
 Repro found: flight 155435 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Bug not present: 9dda47cb702ccb9663aec9c78ac3fdc3d4076b1c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155435/


  commit 9c2a02740f7f91543caa8fab6d2ab2bbc7c40742
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:10:32 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 123 colors found
Revision graph left in /home/logs/results/bisect/xen-4.12-testing/test-amd64-amd64-xl-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155435: tolerable ALL FAIL

flight 155435 xen-4.12-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155435/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-xsm      12 guest-start             fail baseline untested


jobs:
 test-amd64-amd64-xl-xsm                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Oct 04 07:39:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 07:39:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2693.7700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOybr-0001ug-9A; Sun, 04 Oct 2020 07:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2693.7700; Sun, 04 Oct 2020 07:39:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOybr-0001uZ-5n; Sun, 04 Oct 2020 07:39:07 +0000
Received: by outflank-mailman (input) for mailman id 2693;
 Sun, 04 Oct 2020 07:39:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nivo=DL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOybp-0001uU-Ho
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 07:39:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a077db5a-2f37-49f2-aab0-7271529a538d;
 Sun, 04 Oct 2020 07:39:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C79F8ACDB;
 Sun,  4 Oct 2020 07:39:02 +0000 (UTC)
X-Inumbo-ID: a077db5a-2f37-49f2-aab0-7271529a538d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601797142;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ADXyo3ab3pBs4ZZ1xq/Uzp7I/o62e/+Hfh2P56RTVyw=;
	b=ZVjBXAwRO9vdmHQhaWP4CRSK/l/HxWekNguOGTTtp4Thpm78yx/oFSqW1iArADFCwMK+hO
	sGxYIP3QKtcCL5hpjBi+FJf6DtjyOrqUTG7Qshbmwo9LFJZSsSN1iWPYe21vzkfGUc6CYE
	u+e7/k+ANYXvFcgRiY6NM+8uleo7PvU=
Subject: Re: [PATCH] x86/S3: Restore CR4 earlier during resume
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
References: <20201002213650.2197-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7d4e12ca-cb0d-3fbd-7d24-27bd46b8b95c@suse.com>
Date: Sun, 4 Oct 2020 09:38:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201002213650.2197-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 02.10.2020 23:36, Andrew Cooper wrote:
> c/s 4304ff420e5 "x86/S3: Drop {save,restore}_rest_processor_state()
> completely" moved CR4 restoration up into C, to account for the fact that MCE
> was explicitly handled later.
> 
> However, time_resume() ends up making an EFI Runtime Service call, and EFI
> explodes without OSFXSR, presumably when trying to spill %xmm registers onto
> the stack.
> 
> Given this codepath, and the potential for other issues of a similar kind (TLB
> flushing vs INVPCID, HVM logic vs VMXE, etc), restore CR4 in asm before
> entering C.
> 
> Ignore the previous MCE special case, because it's not actually necessary.  The
> handler is already suitably configured from before suspend.

Are you suggesting we could drop the call to mcheck_init() altogether?

> Fixes: 4304ff420e5 ("x86/S3: Drop {save,restore}_rest_processor_state() completely")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 07:49:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 07:49:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2698.7717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOylc-0002rx-BT; Sun, 04 Oct 2020 07:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2698.7717; Sun, 04 Oct 2020 07:49:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOylc-0002rq-8W; Sun, 04 Oct 2020 07:49:12 +0000
Received: by outflank-mailman (input) for mailman id 2698;
 Sun, 04 Oct 2020 07:49:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nivo=DL=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kOylb-0002rl-Na
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 07:49:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02868a1c-4f92-4777-8d21-18e81fd90842;
 Sun, 04 Oct 2020 07:49:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E294CAC4F;
 Sun,  4 Oct 2020 07:49:09 +0000 (UTC)
X-Inumbo-ID: 02868a1c-4f92-4777-8d21-18e81fd90842
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601797750;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bZKPBlisJQ0e/BqVkzd4Dke9AIEZYsp8PmP08KzYou0=;
	b=GOmwn9liJ8WnuwauzIrLFKSNHVoq3Vt0374amLcWvVE6d5e+SmPge7jnezRd3oVK6NNnle
	8EYSWo+X8bDMCKQjUil42LzTUADJOOsqDVJ9C2PvwtLOefU+whmNKAEf7OwUYvH0reUb0U
	455E/jrfFO7hA+va9ty1v/wD0hG3Fhc=
Subject: Re: S3 resume crash in memguard_guard_stack (stable-4.14)
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Andrew Cooper <andrew.cooper3@citrix.com>
References: <20201003135723.GO3962@mail-itl>
Cc: xen-devel <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <74e9a18e-4c50-5341-8f3b-e23a5382e515@suse.com>
Date: Sun, 4 Oct 2020 09:49:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201003135723.GO3962@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.10.2020 15:57, Marek Marczykowski-Górecki wrote:
> With this, I get a crash on S3 resume:
> 
> (XEN) Preparing system for ACPI S3 state.
> (XEN) Disabling non-boot CPUs ...
> (XEN) Entering ACPI S3 state.
> (XEN) [VT-D]Passed iommu=no-igfx option.  Disabling IGD VT-d engine.
> (XEN) mce_intel.c:773: MCA Capability: firstbank 0, extended MCE MSR 0, BCAST, CMCI
> (XEN) CPU0 CMCI LVT vector (0xf1) already installed
> (XEN) Finishing wakeup from ACPI S3 state.
> (XEN) Enabling non-boot CPUs  ...
> (XEN) ----[ Xen-4.14.1-pre  x86_64  debug=y   Not tainted ]----
> (XEN) CPU:    0
> (XEN) RIP:    e008:[<ffff82d040311090>] memguard_guard_stack+0x7/0x1a5
> (XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor
> (XEN) rax: ffff830250ca03f8   rbx: 0000000000000001   rcx: ffff830250cb10b0
> (XEN) rdx: 0000003210739000   rsi: 0000000000000001   rdi: ffff830250ca0000
> (XEN) rbp: ffff830049a6fd70   rsp: ffff830049a6fd40   r8:  0000000000000001
> (XEN) r9:  0000000000000000   r10: 0000000000000001   r11: 0000000000000002
> (XEN) r12: 0000000000010000   r13: 0000000000000000   r14: 0000000000000001
> (XEN) r15: ffff82d040598440   cr0: 000000008005003b   cr4: 00000000003526e0
> (XEN) cr3: 0000000049a5d000   cr2: ffff830250ca03f8
> (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: e008
> (XEN) Xen code around <ffff82d040311090> (memguard_guard_stack+0x7/0x1a5):
> (XEN)  c3 48 8d 87 f8 03 00 00 <48> 89 87 f8 03 00 00 48 8d 87 f8 07 00 00 48 89
> (XEN) Xen stack trace from rsp=ffff830049a6fd40:
> (XEN)    ffff82d040321c2e ffff82d040461b68 ffff82d040461b60 ffff82d040461240
> (XEN)    0000000000000001 0000000000000000 ffff830049a6fdb8 ffff82d040221f9c
> (XEN)    ffff830049a6fde0 0000000000000001 0000000000000000 00000000ffffffef
> (XEN)    ffff830049a6fe08 0000000000000001 ffff830250b66000 ffff830049a6fdd0
> (XEN)    ffff82d0402036cf 0000000000000001 ffff830049a6fdf8 ffff82d040203a4d
> (XEN)    0000000000000000 0000000000000001 0000000000000010 ffff830049a6fe28
> (XEN)    ffff82d040203d00 ffff830049a6fef8 0000000000000000 0000000000000003
> (XEN)    0000000000000200 ffff830049a6fe58 ffff82d040270c9a ffff830250139f70
> (XEN)    ffff830250b45000 0000000000000000 0000000000000000 ffff830049a6fe78
> (XEN)    ffff82d040207064 ffff830250b451b8 ffff82d0405781b0 ffff830049a6fe90
> (XEN)    ffff82d04022b7bb ffff82d0405781a0 ffff830049a6fec0 ffff82d04022ba9c
> (XEN)    0000000000000000 ffff82d0405781b0 ffff82d04057ed00 ffff82d040598440
> (XEN)    ffff830049a6fef0 ffff82d0402f33e3 ffff830252b0e000 ffff830250b45000
> (XEN)    ffff830252b0f000 0000000000000000 ffff830049a6fdc8 ffff88818ce029e0
> (XEN)    ffffc900026b7f08 0000000000000003 0000000000000000 0000000000003403
> (XEN)    ffffffff8277a5a8 0000000000000246 0000000000000003 0000000000003403
> (XEN)    0000000000003403 0000000000000000 ffffffff810020ea 0000000000003403
> (XEN)    0000000000000010 deadbeefdeadf00d 0000010000000000 ffffffff810020ea
> (XEN)    000000000000e033 0000000000000246 ffffc900026b7cb8 000000000000e02b
> (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> (XEN) Xen call trace:
> (XEN)    [<ffff82d040311090>] R memguard_guard_stack+0x7/0x1a5
> (XEN)    [<ffff82d040321c2e>] S smpboot.c#cpu_smpboot_callback+0xe5/0x6d5
> (XEN)    [<ffff82d040221f9c>] F notifier_call_chain+0x6b/0x96
> (XEN)    [<ffff82d0402036cf>] F cpu.c#cpu_notifier_call_chain+0x1b/0x33
> (XEN)    [<ffff82d040203a4d>] F cpu_up+0x5f/0xd5
> (XEN)    [<ffff82d040203d00>] F enable_nonboot_cpus+0xea/0x1fb
> (XEN)    [<ffff82d040270c9a>] F power.c#enter_state_helper+0x152/0x606
> (XEN)    [<ffff82d040207064>] F domain.c#continue_hypercall_tasklet_handler+0x4c/0xb9
> (XEN)    [<ffff82d04022b7bb>] F tasklet.c#do_tasklet_work+0x76/0xa9
> (XEN)    [<ffff82d04022ba9c>] F do_tasklet+0x58/0x8a
> (XEN)    [<ffff82d0402f33e3>] F domain.c#idle_loop+0x40/0x96
> (XEN) 
> (XEN) Pagetable walk from ffff830250ca03f8:
> (XEN)  L4[0x106] = 8000000049a5b063 ffffffffffffffff
> (XEN)  L3[0x009] = 0000000250cae063 ffffffffffffffff
> (XEN)  L2[0x086] = 0000000250cad063 ffffffffffffffff
> (XEN)  L1[0x0a0] = 8000000250ca0161 ffffffffffffffff

Now this one's pretty obvious: the call to memguard_unguard_stack() while
bringing down the APs is conditional (in cpu_smpboot_free()), and hence
memguard_guard_stack() may (at present) not assume the stack is writable
(by ordinary writes, i.e. write_sss_token()). I guess we may want something
like

    if ( stack_base[cpu] == NULL )
    {
        stack_base[cpu] = alloc_xenheap_pages(STACK_ORDER, memflags);
        if ( stack_base[cpu] == NULL )
            goto out;
    }
    else if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
        memguard_unguard_stack(stack_base[cpu]);

in cpu_smpboot_alloc(). But of course the question is whether the
conditions here and in cpu_smpboot_free() wouldn't better become
cpu_has_xen_shstk, since right now the breakage (afaict) needlessly
extends to systems that aren't CET-capable.

Jan


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 08:56:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 08:56:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2714.7744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOzoD-0000ur-1F; Sun, 04 Oct 2020 08:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2714.7744; Sun, 04 Oct 2020 08:55:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kOzoC-0000uk-Tm; Sun, 04 Oct 2020 08:55:56 +0000
Received: by outflank-mailman (input) for mailman id 2714;
 Sun, 04 Oct 2020 08:55:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kOzoB-0000uc-Nd
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 08:55:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6e54246-e83a-4df2-89c2-e6b4ed23ce91;
 Sun, 04 Oct 2020 08:55:52 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOzo8-0000nR-GN; Sun, 04 Oct 2020 08:55:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kOzo8-0005Qz-8Z; Sun, 04 Oct 2020 08:55:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kOzo8-0005qK-85; Sun, 04 Oct 2020 08:55:52 +0000
X-Inumbo-ID: a6e54246-e83a-4df2-89c2-e6b4ed23ce91
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=DyFXYlhgrHzMynMjPRP65FAD4rOzFOXYUGxRST5Pcpo=; b=KhddgY8AuoptCjbDTTtBs3mtCm
	+NJU5S3U/GVOiGM3VcwrCkZw1133Dh9sFAUSf22BQW40HD/22fUZGRTSWiwr83zXVEAgQLl397akA
	L5YFmMRucbRo/bOygnnSNJtaqe5ny4qfcglVp7SSKfNhmZKYrR92229CTW220sk7QDTg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.14-testing bisection] complete test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm
Message-Id: <E1kOzo8-0005qK-85@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 08:55:52 +0000

branch xen-4.14-testing
xenbranch xen-4.14-testing
job test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  2ee270e126458471b178ca1e5d7d8d0afc48be39
  Bug not present: 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155444/


  commit 2ee270e126458471b178ca1e5d7d8d0afc48be39
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:14:56 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install --summary-out=tmp/155444.bisection-summary --basis-template=154350 --blessings=real,real-bisect xen-4.14-testing test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm debian-hvm-install
Searching for failure / basis pass:
 155303 fail [host=godello1] / 154350 [host=albana1] 154148 [host=rimava1] 154116 [host=fiano1] 152545 [host=huxelrebe1] 152537 [host=rimava1] 152531 ok.
Failure / basis pass flights: 155303 / 152531
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 849c5e50b6f474df6cc113130575bcdccfafcd9e f37a1cf023b277d0d49323bf322ce3ff0c92262d
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8834e10b30125daa47da9f6c5c1a41b4eafbae7f 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 d9c812dda519a1a73e8370e1b81ddf46eb22ed16 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#8834e10b30125daa47da9f6c5c1a41b4eafbae7f-d8ab884fe9b4dd148980bf0d8673187f8fb25887 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db99\
 34a34f816d78b-3c659044118e34603161457db9934a34f816d78b git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/osstest/seabios.git#d9c812dda519a1a73e8370e1b81ddf46eb22ed16-849c5e50b6f474df6cc113130575bcdccfafcd9e git://xenbits.xen.org/xen.git#c3a0fc22af90ef28e68b116c6a49d9cec57f71cf-f37a1cf023b277d0d49323bf322ce3ff0c92262d
Loaded 12584 nodes in revision graph
Searching for test results:
 152531 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8834e10b30125daa47da9f6c5c1a41b4eafbae7f 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 d9c812dda519a1a73e8370e1b81ddf46eb22ed16 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf
 152537 [host=rimava1]
 152545 [host=huxelrebe1]
 154116 [host=fiano1]
 154148 [host=rimava1]
 154350 [host=albana1]
 154617 fail irrelevant
 154641 fail irrelevant
 155016 fail irrelevant
 155087 fail irrelevant
 155173 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155360 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8834e10b30125daa47da9f6c5c1a41b4eafbae7f 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 d9c812dda519a1a73e8370e1b81ddf46eb22ed16 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf
 155389 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155392 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 56aa9d19d81451bbecf57a97b9aab27243083c12 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf
 155394 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 317d84abe3bfbdff10ae1cc4f38b49307838c6c4 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 ceafff707c96ca5cf01a435e4cf6f64c2dfc9a4d
 155399 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e97c78c546b04247191490bb1a59db471cd0368d 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 28855ebcdbfa437e60bc16c761405476fe16bc39
 155401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 66cdf341428ae38f6426408a95de9830b5c9c83c
 155406 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 0bc4177e6b0d7a98464913af95d3bfe4b59b7a2c
 155410 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 b8c2efbe7b3e8fa5f0b0a3679afccd1204949070
 155303 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 849c5e50b6f474df6cc113130575bcdccfafcd9e f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155415 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155418 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 849c5e50b6f474df6cc113130575bcdccfafcd9e f37a1cf023b277d0d49323bf322ce3ff0c92262d
 155422 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155428 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155431 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155438 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2ee270e126458471b178ca1e5d7d8d0afc48be39
 155440 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
 155444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2ee270e126458471b178ca1e5d7d8d0afc48be39
Searching for interesting versions
 Result found: flight 152531 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841, results HASH(0x5579710e4908) HASH(0x5579710e7b38) HASH(0x5579710cd938)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 b8c2efbe7b3e8fa5f0b0a3679afccd1204949070, results HASH(0x557971110b40)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 0bc4177e6b0d7a98464913af95d3bfe4b59b7a2c, results HASH(0x557971110398)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e97c78c546b04247191490bb1a59db471cd0368d 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 28855ebcdbfa437e60bc16c761405476fe16bc39, results HASH(0x5579710c43f0)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 317d84abe3bfbdff10ae1cc4f38b49307838c6c4 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 ceafff707c96ca5cf01a435e4cf6f64c2dfc9a4d, results HASH(0x5579710d1f70)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 56aa9d19d81451bbecf57a97b9aab27243083c12 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 155821a1990b6de78dde5f98fa5ab90e802021e0 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf, results HASH(0x5579710f6dc0)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 8834e10b30125daa47da9f6c5c1a41b4eafbae7f 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 d9c812dda519a1a73e8370e1b81ddf46eb22ed16 c3a0fc22af90ef28e68b116c6a49d9cec57f71cf, results HASH(0x5579710e7538) HASH(0x5579710f0780)
 Result found: flight 155173 (fail), for basis failure (at ancestor ~347)
 Repro found: flight 155360 (pass), for basis pass
 Repro found: flight 155418 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b 3c659044118e34603161457db9934a34f816d78b ea6d3cd1ed79d824e605a70c3626bc437c386260 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
No revisions left to test, checking graph state.
 Result found: flight 155422 (pass), for last pass
 Result found: flight 155428 (fail), for first failure
 Repro found: flight 155431 (pass), for last pass
 Repro found: flight 155438 (fail), for first failure
 Repro found: flight 155440 (pass), for last pass
 Repro found: flight 155444 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  2ee270e126458471b178ca1e5d7d8d0afc48be39
  Bug not present: 9b9fc8e391b6d5afa83f90271fdbd0e13871e841
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155444/


  commit 2ee270e126458471b178ca1e5d7d8d0afc48be39
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 16:14:56 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 213 colors found
Revision graph left in /home/logs/results/bisect/xen-4.14-testing/test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
155444: tolerable ALL FAIL

flight 155444 xen-4.14-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155444/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Oct 04 09:46:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 09:46:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2724.7763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP0aq-0005Es-1S; Sun, 04 Oct 2020 09:46:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2724.7763; Sun, 04 Oct 2020 09:46:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP0ap-0005El-Ue; Sun, 04 Oct 2020 09:46:11 +0000
Received: by outflank-mailman (input) for mailman id 2724;
 Sun, 04 Oct 2020 09:46:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kP0ao-0005Eg-Ez
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 09:46:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba9cd145-ab77-41cb-a2d0-b63701a99f0c;
 Sun, 04 Oct 2020 09:46:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP0ak-0001nf-Vh; Sun, 04 Oct 2020 09:46:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP0ak-0008DX-Nd; Sun, 04 Oct 2020 09:46:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kP0ak-0008By-NC; Sun, 04 Oct 2020 09:46:06 +0000
X-Inumbo-ID: ba9cd145-ab77-41cb-a2d0-b63701a99f0c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=daLi4lNdYNPhqL1tYp8AgUy+dvDI4YGY2F9BtQHr5Fo=; b=SB3bR3z1L3Vz+Bc+rHt37exBg4
	/O8COo1vsKWRh27/4c7nApgHSw/XZbzVvxbGjR7iMoJ8CwHO9eqSltn5ItWPUbkG8UGgiFnGIlpqG
	ZlaKERSpZj98TfSCVL1ddHl8GGQigffYsTLsnhYq9lEq0FbLro0vtW9/2f/L0Iur6gIQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-4.11-testing bisection] complete test-amd64-i386-xl-qemuu-debianhvm-i386-xsm
Message-Id: <E1kP0ak-0008By-NC@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 09:46:06 +0000

branch xen-4.11-testing
xenbranch xen-4.11-testing
job test-amd64-i386-xl-qemuu-debianhvm-i386-xsm
testid debian-hvm-install

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  3def8466383ab5abd17f1436d085348c2994722b
  Bug not present: cc1561a3a4e6c1b4125953703338c545ba6d14fb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155446/


  commit 3def8466383ab5abd17f1436d085348c2994722b
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:21:27 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-4.11-testing/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-4.11-testing/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install --summary-out=tmp/155446.bisection-summary --basis-template=151714 --blessings=real,real-bisect xen-4.11-testing test-amd64-i386-xl-qemuu-debianhvm-i386-xsm debian-hvm-install
Searching for failure / basis pass:
 155281 fail [host=albana1] / 151714 [host=elbling0] 151318 [host=chardonnay0] 151295 [host=huxelrebe1] 151279 [host=fiano0] 151260 [host=elbling0] 151234 [host=elbling1] 151204 [host=huxelrebe0] 151166 [host=pinot0] 151140 ok.
Failure / basis pass flights: 155281 / 151140
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 849c5e50b6f474df6cc113130575bcdccfafcd9e 3263f257caf8e4465e9dca84a88fa0e68be74280
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 2e3de6253422112ae43e608661ba94ea6b345694 2b77729888fb851ab96e7f77bc854122626b4861
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#9af1064995d479df92e399a682ba7e4b2fc78415-d8ab884fe9b4dd148980bf0d8673187f8fb25887 git://xenbits.xen.org/qemu-xen-traditional.git#c8ea0457495342c417c3dc033bba25148b279f60-c8ea0457495342c417c3dc033bba25148b279f60 git://xenbits.xen.org/qemu-xen.git#06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad-06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad git://xenbits.xen.org/osstest/seabios.git#2e3de6253422112ae43e608661ba94ea6b345694-849c5e50b6f474df6cc113130575bcdccfafcd9e git://xenbits.xen.org/xen.git#2b77729888fb851ab96e7f77bc854122626b4861-3263f257caf8e4465e9dca84a88fa0e68be74280
Loaded 12584 nodes in revision graph
Searching for test results:
 151093 [host=chardonnay1]
 151140 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 2e3de6253422112ae43e608661ba94ea6b345694 2b77729888fb851ab96e7f77bc854122626b4861
 151166 [host=pinot0]
 151204 [host=huxelrebe0]
 151260 [host=elbling0]
 151234 [host=elbling1]
 151295 [host=huxelrebe1]
 151279 [host=fiano0]
 151318 [host=chardonnay0]
 151714 [host=elbling0]
 154619 fail irrelevant
 154649 fail irrelevant
 154740 fail irrelevant
 155013 fail irrelevant
 155066 fail irrelevant
 155140 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 3263f257caf8e4465e9dca84a88fa0e68be74280
 155348 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 2e3de6253422112ae43e608661ba94ea6b345694 2b77729888fb851ab96e7f77bc854122626b4861
 155380 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2793a49565488e419d10ba029c838f4b7efdba38 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 3263f257caf8e4465e9dca84a88fa0e68be74280
 155383 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d0c42fdf2cf1855b0a042ef82d848c7716adefe c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad d9c812dda519a1a73e8370e1b81ddf46eb22ed16 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
 155281 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 849c5e50b6f474df6cc113130575bcdccfafcd9e 3263f257caf8e4465e9dca84a88fa0e68be74280
 155385 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 647aa7110f9c744dd0d27b01a50b581eb3de2a57 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 155821a1990b6de78dde5f98fa5ab90e802021e0 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
 155391 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 849c5e50b6f474df6cc113130575bcdccfafcd9e 3263f257caf8e4465e9dca84a88fa0e68be74280
 155396 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f94345d9eae1b359c01761be975086870a4a9de9 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 155821a1990b6de78dde5f98fa5ab90e802021e0 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
 155405 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e97c78c546b04247191490bb1a59db471cd0368d c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
 155409 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 30b3f297603eebd7874ca3b5f9cbf7268b040046
 155413 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2fe163d70f3ae88c10f50d70a50513e395326a77
 155420 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 13f60bf98fcb3b5c5c216ee2ce536897d3a925d4
 155424 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb
 155432 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3def8466383ab5abd17f1436d085348c2994722b
 155436 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb
 155441 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3def8466383ab5abd17f1436d085348c2994722b
 155443 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb
 155446 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 3def8466383ab5abd17f1436d085348c2994722b
Searching for interesting versions
 Result found: flight 151140 (pass), for basis pass
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb, results HASH(0x559340221828) HASH(0x559340220300) HASH(0x55934022e7a8)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 13f60bf98fcb3b5c5c216ee2ce536897d3a925d4, results HASH(0x559340219cc0)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2fe163d70f3ae88c10f50d70a50513e395326a77, results HASH(0x55933f50f310)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 e97c78c546b04247191490bb1a59db471cd0368d c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d, results HASH(0x55933e7ce508)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f94345d9eae1b359c01761be975086870a4a9de9 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 155821a1990b6de78dde5f98fa5ab90e802021e0 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d, results HASH(0x559340229570)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 647aa7110f9c744dd0d27b01a50b581eb3de2a57 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 155821a1990b6de78dde5f98fa5ab90e802021e0 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d, results HASH(0x559340258d10)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d0c42fdf2cf1855b0a042ef82d848c7716adefe c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad d9c812dda519a1a73e8370e1b81ddf46eb22ed16 ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d, results HASH(0x55934024a3b0)
 For basis failure, parent search stopping at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9af1064995d479df92e399a682ba7e4b2fc78415 c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 2e3de6253422112ae43e608661ba94ea6b345694 2b77729888fb851ab96e7f77bc854122626b4861, results HASH(0x5593402361c8) HASH(0x55934024a6b0)
 Result found: flight 155140 (fail), for basis failure (at ancestor ~5542)
 Repro found: flight 155348 (pass), for basis pass
 Repro found: flight 155391 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 fb97626fe04747ec89599dce0992def9a36e2f6b c8ea0457495342c417c3dc033bba25148b279f60 06fbdaf7d6c43b55339d4ad74c77c9be84ae41ad 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 cc1561a3a4e6c1b4125953703338c545ba6d14fb
No revisions left to test, checking graph state.
 Result found: flight 155424 (pass), for last pass
 Result found: flight 155432 (fail), for first failure
 Repro found: flight 155436 (pass), for last pass
 Repro found: flight 155441 (fail), for first failure
 Repro found: flight 155443 (pass), for last pass
 Repro found: flight 155446 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  3def8466383ab5abd17f1436d085348c2994722b
  Bug not present: cc1561a3a4e6c1b4125953703338c545ba6d14fb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155446/


  commit 3def8466383ab5abd17f1436d085348c2994722b
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 17:21:27 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

pnmtopng: 127 colors found
Revision graph left in /home/logs/results/bisect/xen-4.11-testing/test-amd64-i386-xl-qemuu-debianhvm-i386-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
155446: tolerable ALL FAIL

flight 155446 xen-4.11-testing real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155446/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Oct 04 10:27:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 10:27:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2729.7781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP1F0-0000Nw-Cu; Sun, 04 Oct 2020 10:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2729.7781; Sun, 04 Oct 2020 10:27:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP1F0-0000Np-9r; Sun, 04 Oct 2020 10:27:42 +0000
Received: by outflank-mailman (input) for mailman id 2729;
 Sun, 04 Oct 2020 10:27:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kP1Ey-0000NN-JN
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 10:27:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03feadb1-74ad-4b80-8458-1d21f9c47593;
 Sun, 04 Oct 2020 10:27:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP1Eq-0002jP-TQ; Sun, 04 Oct 2020 10:27:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP1Eq-000290-M8; Sun, 04 Oct 2020 10:27:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kP1Eq-0003kH-Le; Sun, 04 Oct 2020 10:27:32 +0000
X-Inumbo-ID: 03feadb1-74ad-4b80-8458-1d21f9c47593
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vZI5vswbISsTpqlrx3pgUhSz1+WQOnjHJBn/sAytXO0=; b=hRPmGrj9JmsGOiu7vXZPetLttG
	DKU6qasQGshwfsysumIbb/2oPFluabE/oktOdW8rX7yPDBv9I8cJpm0kh5w3/2UgyroIYZo49R+UR
	9roIIImq29S9rWfLo9/OmK4x6OP6+pKcUUQ5pMAq2uXkepYPk6c2co2HVWUWIWhnTiD8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155448-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 155448: all pass - PUSHED
X-Osstest-Versions-This:
    xen=8ef6345ef557cc2c47298217635a3088eaa59893
X-Osstest-Versions-That:
    xen=5dba8c2f23049aa68b777a9e7e9f76c12dd00012
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 10:27:32 +0000

flight 155448 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155448/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  8ef6345ef557cc2c47298217635a3088eaa59893
baseline version:
 xen                  5dba8c2f23049aa68b777a9e7e9f76c12dd00012

Last test of basis   155131  2020-09-30 09:18:28 Z    4 days
Testing same since   155448  2020-10-04 09:18:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Laurentiu Tudor <laurentiu.tudor@nxp.com>
  Olaf Hering <olaf@aepfle.de>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5dba8c2f23..8ef6345ef5  8ef6345ef557cc2c47298217635a3088eaa59893 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 10:56:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 10:56:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2732.7794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP1gt-0002xw-Lq; Sun, 04 Oct 2020 10:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2732.7794; Sun, 04 Oct 2020 10:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP1gt-0002xp-HQ; Sun, 04 Oct 2020 10:56:31 +0000
Received: by outflank-mailman (input) for mailman id 2732;
 Sun, 04 Oct 2020 10:56:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kP1gs-0002xk-9h
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 10:56:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 531504ac-88b1-449d-9d45-647de852e4a2;
 Sun, 04 Oct 2020 10:56:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP1go-0003Hs-Ry; Sun, 04 Oct 2020 10:56:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP1go-0003ky-JM; Sun, 04 Oct 2020 10:56:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kP1go-0006Qq-GN; Sun, 04 Oct 2020 10:56:26 +0000
X-Inumbo-ID: 531504ac-88b1-449d-9d45-647de852e4a2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rl6D/s5dR+vmFRZ/gsQmNGKMre+F+K69kjgvcnhZuEw=; b=Ee6rIbs36ChkYnHBZyGntxh2OF
	87qUI6GtwtUsBMTNzuSuVQi0eBLuR+LzfGykwwJuteM1sAPTpqCB69UOMQbZB6mSYEs8OdnYHjTbr
	oSr6U+qxR3NFXtSl3P2GsfM9rEzr0TbAJ1UsOPSrgYZlCTGI5ELQJ4au7h66CPEr+iKo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155345-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155345: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-i386-xl-xsm:guest-start:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=c73952831f0fc63a984e0d07dff1d20f8617b81f
X-Osstest-Versions-That:
    xen=d4ed1d4132f5825a795d5a78505811ecd2717b5e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 10:56:26 +0000

flight 155345 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155345/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm 12 guest-start              fail REGR. vs. 154611
 test-amd64-amd64-xl-xsm      12 guest-start              fail REGR. vs. 154611
 test-amd64-i386-libvirt-xsm  12 guest-start              fail REGR. vs. 154611
 test-amd64-i386-xl-xsm       12 guest-start              fail REGR. vs. 154611
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 154611

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 154611
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 154611
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 154611
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 154611
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 154611
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 154611
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 154611
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 154611
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 154611
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  c73952831f0fc63a984e0d07dff1d20f8617b81f
baseline version:
 xen                  d4ed1d4132f5825a795d5a78505811ecd2717b5e

Last test of basis   154611  2020-09-22 11:26:05 Z   11 days
Failing since        154634  2020-09-23 05:59:56 Z   11 days    5 attempts
Testing same since   155211  2020-10-01 08:26:48 Z    3 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 951 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 13:21:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 13:21:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2741.7816 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP3wi-00078x-RS; Sun, 04 Oct 2020 13:21:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2741.7816; Sun, 04 Oct 2020 13:21:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP3wi-00078q-NU; Sun, 04 Oct 2020 13:21:00 +0000
Received: by outflank-mailman (input) for mailman id 2741;
 Sun, 04 Oct 2020 13:20:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kP3wh-00078l-Gq
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 13:20:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a98d81a-40f7-4c84-b696-d7653c1af536;
 Sun, 04 Oct 2020 13:20:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP3we-0006DH-Ju; Sun, 04 Oct 2020 13:20:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP3we-00024v-BH; Sun, 04 Oct 2020 13:20:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kP3we-0002Vm-Ac; Sun, 04 Oct 2020 13:20:56 +0000
X-Inumbo-ID: 5a98d81a-40f7-4c84-b696-d7653c1af536
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Qnx7egqxu0Hh6Du7SHUhRjCDAYRkcCMLeEth7U+cVpk=; b=RsE843ceSjfgZ0juq2ml7mD8ei
	MYONClb1cgwd0Cte/8mWV+Giiwi6YbqDu0qWiQJ2BMf7ybhTCZXAel/VRnJTqN/tk7yQl653nbqwi
	RMwFmcdq9DmcZZS8QS5v1tpuBxUg1MC/8duaQ3cMPDY/Xu/5ook1T35WVeHYO6C1b+h0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155362-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 155362: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:<job status>:broken:regression
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:broken:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=1719f79a0efd36d15837c51982173dd1c287dced
X-Osstest-Versions-That:
    xen=93be943e7d759015bd5db41a48f6dce58e580d5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 13:20:56 +0000

flight 155362 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155362/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx    <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               broken never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151728
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 151728
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  1719f79a0efd36d15837c51982173dd1c287dced
baseline version:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a

Last test of basis   151728  2020-07-08 01:17:09 Z   88 days
Failing since        154621  2020-09-22 16:07:00 Z   11 days   19 attempts
Testing same since   155362  2020-10-03 03:17:48 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 broken  
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-thunderx broken
broken-step test-arm64-arm64-xl-thunderx hosts-allocate

Not pushing.

(No revision log; it would be 368 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 16:11:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 16:11:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2748.7838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP6b9-0005Fo-8Q; Sun, 04 Oct 2020 16:10:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2748.7838; Sun, 04 Oct 2020 16:10:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP6b9-0005Fh-5E; Sun, 04 Oct 2020 16:10:55 +0000
Received: by outflank-mailman (input) for mailman id 2748;
 Sun, 04 Oct 2020 16:10:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kP6b8-0005Fc-61
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 16:10:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84831578-2422-42f0-ac65-dcb2f625d4e1;
 Sun, 04 Oct 2020 16:10:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP6b5-0001lQ-Ab; Sun, 04 Oct 2020 16:10:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kP6b4-0001vB-Vs; Sun, 04 Oct 2020 16:10:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kP6b4-00027P-VK; Sun, 04 Oct 2020 16:10:50 +0000
X-Inumbo-ID: 84831578-2422-42f0-ac65-dcb2f625d4e1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VfUS+HLHMuS1r134cz7812TaEZ4wxlJ4bqP6jE55xWs=; b=35hZBVTk9QmizhYuoe8wZ8TAR/
	R2KABfjay4zjybcStUcx75XsFCTIM068uqhBoUtDUDoJVZrhDqkDyxqMqjbeMOk7BsHQxoIJQjsbX
	g+r+ePnO0dhRXFd6CWiiW2FrmayucG9TOaiS2NRCDtJ6X41RQsgMMupp9lGobgpFmNJ8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155365-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155365: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d3d45f8220d60a0b2aaaacf8fb2be4e6ffd9008e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 16:10:50 +0000

flight 155365 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155365/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair          8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair          9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  6 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  6 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d3d45f8220d60a0b2aaaacf8fb2be4e6ffd9008e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   64 days
Failing since        152366  2020-08-01 20:49:34 Z   63 days  110 attempts
Testing same since   155365  2020-10-03 06:08:12 Z    1 days    1 attempts

------------------------------------------------------------
2466 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 333061 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 18:14:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 18:14:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2756.7860 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP8WE-00076L-Gy; Sun, 04 Oct 2020 18:13:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2756.7860; Sun, 04 Oct 2020 18:13:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kP8WE-00075o-DQ; Sun, 04 Oct 2020 18:13:58 +0000
Received: by outflank-mailman (input) for mailman id 2756;
 Sun, 04 Oct 2020 18:13:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GwjW=DL=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kP8WC-00075g-Lh
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 18:13:56 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ac7e565-6998-497d-a3d0-92ee7b6f2610;
 Sun, 04 Oct 2020 18:13:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601835235;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=pMCF59CDIOu7GsydgexNuPw99R4/maZ6+vmTv+bIM0Y=;
  b=g46GQJ9OXXIk5jpBBpRecj/O4O+QhnH7oMwKVQTMzGv/f1nm9XJQ3Ol3
   X8EAnIBenX1Dg1mleaCCERAnFvlGWkwhYTqpkAOYv34YWsWQoSoJVwUJP
   TbqAggbIVAFyd7Qh1Bc4wdyQea2Czh/cdS/uZvqNk+Hcx437qdShta5ww
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: VObHLUTJ/uVKs5WEuwwrHYowB9elvFPwJctnxCsAZuB9lOgxccicanLnxreseAMDHM0PCTfBhZ
 25JZySJgxeumN8YvhB85AHkdje4kFrlRmgNG1IydbdKnWOM7qe0dfv605Ib+IJlvXLwSjAs/Po
 mqiiiIAJsTjjmQ1ujSzscfwFoDIzEJizN0tRHckTJyVrMpqvP7N99Z4k+3CFsdqDgAcZ+oQvlp
 94x57kej2dhQfQ6IJqTzdrqiWP+DF98mYJMfHTT1R7h+WMkwbGogmaye0mhQJvmE4ZdUhin2Q8
 m5w=
X-SBRS: None
X-MesageID: 28248639
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,335,1596513600"; 
   d="scan'208";a="28248639"
Subject: Re: [PATCH] x86/S3: Restore CR4 earlier during resume
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
	<marmarek@invisiblethingslab.com>
References: <20201002213650.2197-1-andrew.cooper3@citrix.com>
 <7d4e12ca-cb0d-3fbd-7d24-27bd46b8b95c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <90366a1e-3301-e614-b91e-f6f932b51c55@citrix.com>
Date: Sun, 4 Oct 2020 19:12:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7d4e12ca-cb0d-3fbd-7d24-27bd46b8b95c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 04/10/2020 08:38, Jan Beulich wrote:
> On 02.10.2020 23:36, Andrew Cooper wrote:
>> c/s 4304ff420e5 "x86/S3: Drop {save,restore}_rest_processor_state()
>> completely" moved CR4 restoration up into C, to account for the fact that MCE
>> was explicitly handled later.
>>
>> However, time_resume() ends up making an EFI Runtime Service call, and EFI
>> explodes without OSFXSR, presumably when trying to spill %xmm registers onto
>> the stack.
>>
>> Given this codepath, and the potential for other issues of a similar kind (TLB
>> flushing vs INVPCID, HVM logic vs VMXE, etc), restore CR4 in asm before
>> entering C.
>>
>> Ignore the previous MCE special case, because it's not actually necessary.  The
>> handler is already suitably configured from before suspend.
> Are you suggesting we could drop the call to mcheck_init() altogether?

Not completely.  It reconfigures some of the MCE bank controls, which
probably won't survive S3, but the #MC handler itself is fully intact
once the IDT is re-established.

It probably wants splitting in two, but I think some part of it needs to
remain.

>
>> Fixes: 4304ff420e5 ("x86/S3: Drop {save,restore}_rest_processor_state() completely")
>> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

~Andrew


From xen-devel-bounces@lists.xenproject.org Sun Oct 04 21:10:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 04 Oct 2020 21:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2767.7890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPBH7-0005Su-Cj; Sun, 04 Oct 2020 21:10:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2767.7890; Sun, 04 Oct 2020 21:10:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPBH7-0005Sn-9Y; Sun, 04 Oct 2020 21:10:33 +0000
Received: by outflank-mailman (input) for mailman id 2767;
 Sun, 04 Oct 2020 21:10:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMX8=DL=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPBH5-0005Si-C0
 for xen-devel@lists.xenproject.org; Sun, 04 Oct 2020 21:10:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4991e719-a703-4826-9dd5-42371ed67dca;
 Sun, 04 Oct 2020 21:10:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPBH0-0007w2-LB; Sun, 04 Oct 2020 21:10:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPBH0-00026B-Dc; Sun, 04 Oct 2020 21:10:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPBH0-0008Bz-D6; Sun, 04 Oct 2020 21:10:26 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=5buAfc/5ABhdC/5gU5eVSryI9GR8N/GISJoMMF8n9gw=; b=dc/avQN5PHepi1ry5rVoaaLu+I
	2Qz0+Wh8XvljG5rM1b3DWUXNm/7sC7+yKQgtt50kPZ6d3AFBEfKH3OYW648KAhkCRFyHUP84gv5dd
	toc/5vBDluepnRM+/WJDaeI2Yz2nx/9G0UH3yUL6pthCPpB30MMe2dsGGmQ7iK6aWH6I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable bisection] complete test-amd64-amd64-libvirt-xsm
Message-Id: <E1kPBH0-0008Bz-D6@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 04 Oct 2020 21:10:26 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt-xsm
testid guest-start

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155462/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable/test-amd64-amd64-libvirt-xsm.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable/test-amd64-amd64-libvirt-xsm.guest-start --summary-out=tmp/155462.bisection-summary --basis-template=154611 --blessings=real,real-bisect xen-unstable test-amd64-amd64-libvirt-xsm guest-start
Searching for failure / basis pass:
 155345 fail [host=huxelrebe1] / 154611 [host=fiano1] 154592 [host=godello1] 154576 [host=godello0] 154556 [host=chardonnay0] 154521 [host=pinot0] 154504 [host=chardonnay1] 154494 [host=pinot1] 154481 [host=elbling1] 154465 [host=albana0] 154090 [host=elbling0] 154058 [host=godello1] 154036 [host=godello0] 154016 [host=huxelrebe0] 153983 [host=albana1] 153957 ok.
Failure / basis pass flights: 155345 / 153957
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c73952831f0fc63a984e0d07dff1d20f8617b81f
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b11910082d90bb1597f6679524eb726a33306672
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#b11910082d90bb1597f6679524eb726a33306672-c73952831f0fc63a984e0d07dff1d20f8617b81f
Loaded 5001 nodes in revision graph
Searching for test results:
 153906 [host=fiano0]
 153931 [host=fiano1]
 153957 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b11910082d90bb1597f6679524eb726a33306672
 153983 [host=albana1]
 154016 [host=huxelrebe0]
 154036 [host=godello0]
 154058 [host=godello1]
 154090 [host=elbling0]
 154465 [host=albana0]
 154481 [host=elbling1]
 154494 [host=pinot1]
 154504 [host=chardonnay1]
 154521 [host=pinot0]
 154556 [host=chardonnay0]
 154576 [host=godello0]
 154592 [host=godello1]
 154611 [host=fiano1]
 154634 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 2785b2a9e04abc148e1c5259f4faee708ea356f4
 155017 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5bcac985498ed83d89666959175ca9c9ed561ae1
 155113 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155211 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c73952831f0fc63a984e0d07dff1d20f8617b81f
 155375 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b11910082d90bb1597f6679524eb726a33306672
 155430 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c73952831f0fc63a984e0d07dff1d20f8617b81f
 155437 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b4e41b1750d550bf2b1ccf97ee46f4f682bdbb62
 155442 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6
 155445 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8e76aef72820435e766c7f339ed36da33da90c40
 155345 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c73952831f0fc63a984e0d07dff1d20f8617b81f
 155449 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 112992b05b2d2ca63f3c78eefe1cf8d192d7303a
 155452 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
 155453 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 62bcdc4edbf6d8c6e8a25544d48de22ccf75310d
 155455 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155456 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
 155458 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155459 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
 155460 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
 155462 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c0ddc8634845aba50774add6e4b73fdaffc82656
Searching for interesting versions
 Result found: flight 153957 (pass), for basis pass
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325, results HASH(0x55892bb62fd0) HASH(0x55892bac32f0) HASH(0x55892bb64888)
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 62bcdc4edbf6d8c6e8a25544d48de22ccf75310d, results HASH(0x55892bacfb58)
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 112992b05b2d2ca63f3c78eefe1cf8d192d7303a, results HASH(0x55892bb5b088)
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8e76aef72820435e766c7f339ed36da33da90c40, results HASH(0x55892bb65008)
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 c7e3021a71fdb4f2d5dbad90ba83ce35bc21cda6, results HASH(0x55892bb63750)
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b4e41b1750d550bf2b1ccf97ee46f4f682bdbb62, results HASH(0x55892bab8ca0)
 For basis failure, parent search stopping at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 b11910082d90bb1597f6679524eb726a33306672, results HASH(0x55892bacc628) HASH(0x55892bad8320)
 Result found: flight 154634 (fail), for basis failure (at ancestor ~371)
 Repro found: flight 155375 (pass), for basis pass
 Repro found: flight 155430 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 8d385b247bca40ece40c9279391054bc98934325
No revisions left to test, checking graph state.
 Result found: flight 155455 (pass), for last pass
 Result found: flight 155456 (fail), for first failure
 Repro found: flight 155458 (pass), for last pass
 Repro found: flight 155459 (fail), for first failure
 Repro found: flight 155460 (pass), for last pass
 Repro found: flight 155462 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155462/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>
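
      [Editor's note: for readers unfamiliar with the fix being bisected to,
      an "IRQ-safe" lock is one taken with interrupts disabled and the prior
      interrupt state saved, so that code running in interrupt context (here,
      the virq senders) can also take it without risk of self-deadlock. The
      following is a toy user-space model of that spin_lock_irqsave pattern,
      not the actual Xen patch; all names below are illustrative only.]

      ```c
      #include <assert.h>
      #include <stdbool.h>
      #include <stdio.h>

      /* Toy model: a global "interrupts enabled" flag and a lock flag.
       * Real kernels disable CPU interrupts; here we just record state. */
      static bool irqs_enabled = true;
      static bool lock_held = false;

      static unsigned long local_irq_save(void)
      {
          unsigned long flags = irqs_enabled;  /* remember prior state */
          irqs_enabled = false;                /* "disable interrupts" */
          return flags;
      }

      static void local_irq_restore(unsigned long flags)
      {
          irqs_enabled = flags;                /* restore prior state */
      }

      /* The IRQ-safe acquire: disable interrupts first, then take the lock,
       * so an interrupt handler on this CPU cannot preempt the holder and
       * spin forever on the same lock. */
      static void toy_spin_lock_irqsave(bool *lock, unsigned long *flags)
      {
          *flags = local_irq_save();
          *lock = true;
      }

      static void toy_spin_unlock_irqrestore(bool *lock, unsigned long flags)
      {
          *lock = false;
          local_irq_restore(flags);
      }

      int main(void)
      {
          unsigned long flags;

          toy_spin_lock_irqsave(&lock_held, &flags);
          assert(lock_held && !irqs_enabled);  /* held, interrupts "off" */

          toy_spin_unlock_irqrestore(&lock_held, flags);
          assert(!lock_held && irqs_enabled);  /* released, state restored */

          printf("irq-safe lock pattern ok\n");
          return 0;
      }
      ```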

Revision graph left in /home/logs/results/bisect/xen-unstable/test-amd64-amd64-libvirt-xsm.guest-start.{dot,ps,png,html,svg}.
----------------------------------------
155462: tolerable FAIL

flight 155462 xen-unstable real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155462/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt-xsm 12 guest-start             fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt-xsm                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 00:13:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 00:13:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2779.7922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPE7V-0004Nb-Ns; Mon, 05 Oct 2020 00:12:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2779.7922; Mon, 05 Oct 2020 00:12:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPE7V-0004NU-JN; Mon, 05 Oct 2020 00:12:49 +0000
Received: by outflank-mailman (input) for mailman id 2779;
 Mon, 05 Oct 2020 00:12:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPE7T-0004NO-G2
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 00:12:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 965df638-2b31-4ffa-ba88-c81441bfe796;
 Mon, 05 Oct 2020 00:12:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPE7R-0003lF-Vl; Mon, 05 Oct 2020 00:12:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPE7R-0004JT-Nj; Mon, 05 Oct 2020 00:12:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPE7R-0006xr-ND; Mon, 05 Oct 2020 00:12:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kPE7T-0004NO-G2
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 00:12:47 +0000
X-Inumbo-ID: 965df638-2b31-4ffa-ba88-c81441bfe796
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 965df638-2b31-4ffa-ba88-c81441bfe796;
	Mon, 05 Oct 2020 00:12:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bsMnS9TguqUZtXCJkm0vf23ayX9Isble8XpLI91VpvQ=; b=ejG1QqBFTjBReHeY7kqi2pqZnx
	BujE5xIzXAhuH3n+HcHjrtX7K/+3v/Jx2oULfYrdYgr4gUq2U5Q5wPI43yr1gYeMy9UqBYfRGU8in
	kKmI5LRSoetAMEDtYp10yPzO6l0Q+Y2P/6VIhiyxP26W3Fj6IVS360IW7LxhBf0+zrds=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPE7R-0003lF-Vl; Mon, 05 Oct 2020 00:12:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPE7R-0004JT-Nj; Mon, 05 Oct 2020 00:12:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPE7R-0006xr-ND; Mon, 05 Oct 2020 00:12:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155429-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155429: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=0bb796bda31103ebf54eefc11c471586c87b95fd
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 05 Oct 2020 00:12:45 +0000

flight 155429 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155429/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              0bb796bda31103ebf54eefc11c471586c87b95fd
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   86 days
Failing since        151818  2020-07-11 04:18:52 Z   85 days   80 attempts
Testing same since   155397  2020-10-03 20:12:11 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 




Not pushing.

(No revision log; it would be 18188 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 01:06:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 01:06:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2788.7946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPEx1-0001AK-7Q; Mon, 05 Oct 2020 01:06:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2788.7946; Mon, 05 Oct 2020 01:06:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPEx1-0001AD-3u; Mon, 05 Oct 2020 01:06:03 +0000
Received: by outflank-mailman (input) for mailman id 2788;
 Mon, 05 Oct 2020 01:06:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPEwz-00019f-Gr
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 01:06:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 473cc1da-0208-4e76-b886-16df288faed9;
 Mon, 05 Oct 2020 01:05:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPEwr-0005G6-Vl; Mon, 05 Oct 2020 01:05:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPEwr-0006EM-Nw; Mon, 05 Oct 2020 01:05:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPEwr-00028q-ND; Mon, 05 Oct 2020 01:05:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kPEwz-00019f-Gr
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 01:06:01 +0000
X-Inumbo-ID: 473cc1da-0208-4e76-b886-16df288faed9
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 473cc1da-0208-4e76-b886-16df288faed9;
	Mon, 05 Oct 2020 01:05:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=I9jXr9Jv7sucdB0G8TFtE5RhbngFjCoKICEZSDpene8=; b=bYXPI9qqOXJbCdSgvBwjRJoPfW
	s9hOu3FdOuHUzmgHI3CalaZuubCNhSLAWvu9qiw5CBA6rI+cr7RMzAew46yPDIW7jcac1IdMhMURk
	n2ADbHmlcSEtfHq7ELFNy0oI4dksuFbq/Vpj5LdgyXroFNgYA8aeXIcGwqop8KwNy4kg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPEwr-0005G6-Vl; Mon, 05 Oct 2020 01:05:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPEwr-0006EM-Nw; Mon, 05 Oct 2020 01:05:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPEwr-00028q-ND; Mon, 05 Oct 2020 01:05:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155377-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 155377: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8e7e5857a203c9d9df7733fd68768555c7e76839
X-Osstest-Versions-That:
    xen=c663fa577b42e7f4731bb33fc7f94f7ffb05a1ef
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 05 Oct 2020 01:05:53 +0000

flight 155377 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155377/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 154358

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8e7e5857a203c9d9df7733fd68768555c7e76839
baseline version:
 xen                  c663fa577b42e7f4731bb33fc7f94f7ffb05a1ef

Last test of basis   154358  2020-09-15 09:40:09 Z   19 days
Failing since        154602  2020-09-22 02:37:01 Z   12 days   10 attempts
Testing same since   155377  2020-10-03 13:48:36 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Chen <wei.chen@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c663fa577b..8e7e5857a2  8e7e5857a203c9d9df7733fd68768555c7e76839 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 05:25:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 05:25:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2850.8133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPIzR-0000VY-LH; Mon, 05 Oct 2020 05:24:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2850.8133; Mon, 05 Oct 2020 05:24:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPIzR-0000VR-ID; Mon, 05 Oct 2020 05:24:49 +0000
Received: by outflank-mailman (input) for mailman id 2850;
 Mon, 05 Oct 2020 05:24:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPIzQ-0000VM-AW
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 05:24:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 56ad7e94-3fc2-4a2c-8cb6-8af33ab3013a;
 Mon, 05 Oct 2020 05:24:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPIzO-0003XU-6y; Mon, 05 Oct 2020 05:24:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPIzN-0004Jf-Tp; Mon, 05 Oct 2020 05:24:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPIzN-0005eX-TH; Mon, 05 Oct 2020 05:24:45 +0000
X-Inumbo-ID: 56ad7e94-3fc2-4a2c-8cb6-8af33ab3013a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yK7TKux1+hM2mH6rtOPL6gohgLHeAQ7acVs3DFxE9UA=; b=hJ2t2JMC2BAdLDJxQ+TwFX9Tqs
	SoDcVBe32dEabZtFTm0fAR/IN1fQIuJ6vHaAfKHdxBoni/69wBkElkeQUXBXb24dGYLQv16u7yVJO
	IQBxoo8MFBoCoDCLCjXaNAI77trTHPW1/I8ZCwq/30GP4YjGS4/cetayzzOVnx9vtM7s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155388-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 155388: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=3630a367854c98bbf8e747d09eeab7e68f370003
X-Osstest-Versions-That:
    xen=ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 05 Oct 2020 05:24:45 +0000

flight 155388 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155388/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  3630a367854c98bbf8e747d09eeab7e68f370003
baseline version:
 xen                  ddaaccbbab6b19bf21ed2c097f3055a3c2544c8d

Last test of basis   151714  2020-07-07 13:35:55 Z   89 days
Failing since        154619  2020-09-22 15:37:30 Z   12 days    8 attempts
Testing same since   155388  2020-10-03 17:40:53 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ddaaccbbab..3630a36785  3630a367854c98bbf8e747d09eeab7e68f370003 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 06:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 06:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2858.8154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPKQY-0008Ra-MQ; Mon, 05 Oct 2020 06:56:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2858.8154; Mon, 05 Oct 2020 06:56:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPKQY-0008RT-JB; Mon, 05 Oct 2020 06:56:54 +0000
Received: by outflank-mailman (input) for mailman id 2858;
 Mon, 05 Oct 2020 06:56:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=g78D=DM=suse.com=mhocko@srs-us1.protection.inumbo.net>)
 id 1kPKQW-0008RO-Vk
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 06:56:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id effc1066-6765-4f7d-ad47-fd830e025187;
 Mon, 05 Oct 2020 06:56:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93222B224;
 Mon,  5 Oct 2020 06:56:50 +0000 (UTC)
X-Inumbo-ID: effc1066-6765-4f7d-ad47-fd830e025187
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601881010;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=UNEm7Tn9UNlLqhlrULq94v99iXKf+1bbfGtGmTKX+7I=;
	b=S66gtJ8axc/8vLsDG0Y3zJ3DfLtemEkgIHkSLUIWIDdOmJTl5ozJHZlvC9eijR1y/3zccB
	UdINSjYVIGbyFe6FVLK6h+OkKKunrDQK6t3reRK6hhZJGiR0LMbXvjHxmXk9olvfWeq9+p
	FIB95YWX0cll+IESTIBsni+toXRKRnQ=
Date: Mon, 5 Oct 2020 08:56:48 +0200
From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand <david@redhat.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Vlastimil Babka <vbabka@suse.cz>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Oscar Salvador <osalvador@suse.de>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Mike Rapoport <rppt@kernel.org>,
	Scott Cheloha <cheloha@linux.ibm.com>,
	Michael Ellerman <mpe@ellerman.id.au>
Subject: Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of
 the freelist in unset_migratetype_isolate()
Message-ID: <20201005065648.GO4555@dhcp22.suse.cz>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-4-david@redhat.com>
 <20201002132404.GI4555@dhcp22.suse.cz>
 <df0c45bf-223f-1f0b-ce3d-f2b2e05626bd@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <df0c45bf-223f-1f0b-ce3d-f2b2e05626bd@redhat.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Fri 02-10-20 17:20:09, David Hildenbrand wrote:
> On 02.10.20 15:24, Michal Hocko wrote:
> > On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
> >> Page isolation doesn't actually touch the pages, it simply isolates
> >> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
> >>
> >> We already place pages to the tail of the freelists when undoing
> >> isolation via __putback_isolated_page(), let's do it in any case
> >> (e.g., if order <= pageblock_order) and document the behavior.
> >>
> >> Add a "to_tail" parameter to move_freepages_block() but introduce a
> >> new move_to_free_list_tail() - similar to add_to_free_list_tail().
> >>
> >> This change results in all pages onlined via online_pages() being
> >> placed at the tail of the freelist.
> > 
> > Is there anything preventing to do this unconditionally? Or in other
> > words is any of the existing callers of move_freepages_block benefiting
> > from adding to the head?
> 
> 1. mm/page_isolation.c:set_migratetype_isolate()
> 
> We move stuff to the MIGRATE_ISOLATE list, we don't care about the order
> there.
> 
> 2. steal_suitable_fallback():
> 
> I don't think we care too much about the order when already stealing
> pageblocks ... and the freelist is empty I guess?
> 
> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()
> 
> Not sure if we really care.

Honestly, I have no idea. I can imagine that some atomic high order
workloads (e.g. in net) might benefit from cache line hot pages but I am
not sure this is really observable. If yes, it would likely be better to
have this documented than relying on a wild guess. If we do not have any
evidence then I would vote for simplicity first and go with an
unconditional add_to_tail, which would simplify your patch a bit.

Maybe Vlastimil or Mel would have a better picture.

-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 08:01:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 08:01:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2867.8165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPLR6-0006c3-0P; Mon, 05 Oct 2020 08:01:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2867.8165; Mon, 05 Oct 2020 08:01:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPLR5-0006bw-TQ; Mon, 05 Oct 2020 08:01:31 +0000
Received: by outflank-mailman (input) for mailman id 2867;
 Mon, 05 Oct 2020 08:01:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=37d/=DM=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1kPLR4-0006bo-L3
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 08:01:30 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1b69d07c-75ba-4e74-b628-943968820c68;
 Mon, 05 Oct 2020 08:01:29 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-47-JsmJSc14P12pSFbh1NIEpg-1; Mon, 05 Oct 2020 04:01:25 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 34FCA8030D2;
 Mon,  5 Oct 2020 08:01:24 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-101.ams2.redhat.com
 [10.36.112.101])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id E190B5C225;
 Mon,  5 Oct 2020 08:01:16 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 55FF310C7796; Mon,  5 Oct 2020 10:01:15 +0200 (CEST)
X-Inumbo-ID: 1b69d07c-75ba-4e74-b628-943968820c68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601884889;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wJwznd7k4BYmOPHwcumCxFnElmQZhM9x5nROGs39sU4=;
	b=GlQIPQqxzZKdqAtI7dwkz1XGR4aBnLsjpkZtmoeeHn+nG/qsEa2lSRiBU+6WnGx1bmqJYD
	N4JyFztSQYfkfytA9bkLvtmb9E69mPtDO3F3DdjAalbp4DJzUA5hRxlFtbwfkxh4RtazdY
	bHvNMxj5C2gDSCbXlY4vmkBYAncnRrg=
X-MC-Unique: JsmJSc14P12pSFbh1NIEpg-1
From: Markus Armbruster <armbru@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Cc: qemu-devel@nongnu.org,  Peter Maydell <peter.maydell@linaro.org>,
  Stefano Stabellini <sstabellini@kernel.org>,  Eduardo Habkost
 <ehabkost@redhat.com>,  Juan Quintela <quintela@redhat.com>,  Paul Durrant
 <paul@xen.org>,  "Dr. David Alan Gilbert" <dgilbert@redhat.com>,  "Michael
 S. Tsirkin" <mst@redhat.com>,  Gerd Hoffmann <kraxel@redhat.com>,  Paolo
 Bonzini <pbonzini@redhat.com>,  Anthony Perard
 <anthony.perard@citrix.com>,  xen-devel@lists.xenproject.org,  Richard
 Henderson <rth@twiddle.net>
Subject: Re: [PATCH 0/5] qapi: Restrict machine (and migration) specific
 commands
References: <20201002133923.1716645-1-philmd@redhat.com>
Date: Mon, 05 Oct 2020 10:01:15 +0200
In-Reply-To: <20201002133923.1716645-1-philmd@redhat.com> ("Philippe
	=?utf-8?Q?Mathieu-Daud=C3=A9=22's?= message of "Fri, 2 Oct 2020 15:39:18
 +0200")
Message-ID: <87wo05aypg.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Philippe Mathieu-Daudé <philmd@redhat.com> writes:

> Reduce the machine code pulled into qemu-storage-daemon.

I'm leaving review to Eduardo and Marcel for PATCH 1-4, and to David and
Juan for PATCH 5.  David already ACKed.

I can do the pull request.



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 08:03:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 08:03:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2869.8178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPLSw-0006jX-E2; Mon, 05 Oct 2020 08:03:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2869.8178; Mon, 05 Oct 2020 08:03:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPLSw-0006jQ-AT; Mon, 05 Oct 2020 08:03:26 +0000
Received: by outflank-mailman (input) for mailman id 2869;
 Mon, 05 Oct 2020 08:03:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cNwf=DM=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kPLSu-0006jJ-9c
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 08:03:24 +0000
Received: from mail-wr1-x42b.google.com (unknown [2a00:1450:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c96de68c-30e0-4469-906e-714a26d71a58;
 Mon, 05 Oct 2020 08:03:23 +0000 (UTC)
Received: by mail-wr1-x42b.google.com with SMTP id g12so2973378wrp.10
 for <xen-devel@lists.xenproject.org>; Mon, 05 Oct 2020 01:03:23 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-234.amazon.com. [54.240.197.234])
 by smtp.gmail.com with ESMTPSA id f14sm12854907wrv.72.2020.10.05.01.03.20
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 05 Oct 2020 01:03:21 -0700 (PDT)
X-Inumbo-ID: c96de68c-30e0-4469-906e-714a26d71a58
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=UwBSmE0LRswxIEXlwF4j5BDE85r77YcyH+Po0PgV3/I=;
        b=hypyxzjKb6HbXxW6eqafquoS60aJX7f43tExNsrEb2PnRLc00FyVV0Q3PDgpruFU4M
         RdGZ42dBxHkG9uTfygKMFLdVLA9ih6oINg7twNR9Y4e8E7TblMmfT7IEamXKtPA69f7I
         Sv4LLCJropZLejB1anXlBA0HOy2csneC65EhSZKsd08oMTqJU7YNcqudJktXaS32dBNr
         V4XSbHHp1HLJvzBzwx6tge0hr1zMHPK5h17B6AYqs6CXKwC1EIFKHF8le2TOao72eMWT
         axd+Zm+KIAkTN9rKpo9VZ78ald4vaU896H+Y1Sa9wugIPGG+QYt3kHVi9Bp8WpYwF3Ca
         TzcA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=UwBSmE0LRswxIEXlwF4j5BDE85r77YcyH+Po0PgV3/I=;
        b=P7P6LGGwdevTW7Hnz3zve+tk/hymlmqovUAjELY/Vs7yyZkMCFJlC/kagX5Gnj22Mc
         FevACAYgTK2Vkmfu3Pp4n42N1lZ9+yAKWGUGZot/wpCGch5naEYLaTS6aYpI7g5iNdQT
         9cGkXQQfgDrmG9kvdIn5BrSDGx38T4ko/REGIYVy0R57CKNNXaJcwx4WH8K8STJ0ON2y
         5pB3owUy0MciXKNmkUw9hcPgexBCVt1jI5Eu8Abj2SwcsXy4+IKY/D/VD5qGRMdGNuUE
         FhkEBgn3Zip/s3gkuCXfTemJC+2UgvWI82E5VAzJeMRmPuHkz5Pc1hRffXPc3mK+SRFK
         zUBA==
X-Gm-Message-State: AOAM5325EWqVJsLH7kUI7iacc7loSIv/OC5P12tMoTHLPcXrfD+l7+9t
	GkbT2AOCzH08LIKZaoSH8QA=
X-Google-Smtp-Source: ABdhPJxpw6M9W6tleuXP6FByjHfOPxm+7lQomO7/cy0cupGS0LIgmSUPuPC4dKfNyA2/LlpIiJCNnQ==
X-Received: by 2002:adf:f3d2:: with SMTP id g18mr564336wrp.367.1601885002201;
        Mon, 05 Oct 2020 01:03:22 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <ian.jackson@eu.citrix.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
References: <20200924131030.1876-1-paul@xen.org> <20200924131030.1876-2-paul@xen.org> <2e51a5cb-df0c-d564-2a7b-5f2abbb5872c@citrix.com>
In-Reply-To: <2e51a5cb-df0c-d564-2a7b-5f2abbb5872c@citrix.com>
Subject: RE: [PATCH v9 1/8] xen/common: introduce a new framework for save/restore of 'domain' context
Date: Mon, 5 Oct 2020 09:03:20 +0100
Message-ID: <000201d69aed$fe07a990$fa16fcb0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQHMaZuCwoR0ssB24MBKMpsJEDSJHwFAw5g3Ahr3hNSpgiPokA==



> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 02 October 2020 22:20
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Julien Grall <julien@xen.org>; Jan Beulich
> <jbeulich@suse.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>;
> Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: Re: [PATCH v9 1/8] xen/common: introduce a new framework for save/restore of 'domain' context
>
> On 24/09/2020 14:10, Paul Durrant wrote:
> > diff --git a/xen/common/save.c b/xen/common/save.c
> > new file mode 100644
> > index 0000000000..841c4d0e4e
> > --- /dev/null
> > +++ b/xen/common/save.c
> > @@ -0,0 +1,315 @@
> > +/*
> > + * save.c: Save and restore PV guest state common to all domain types.
>
> This description will be stale by the time your work is complete.
>

True now, I'll just drop the 'PV'

> > +int domain_save_data(struct domain_context *c, const void *src, size_t len)
> > +{
> > +    int rc = c->ops.save->append(c->priv, src, len);
> > +
> > +    if ( !rc )
> > +        c->len += len;
> > +
> > +    return rc;
> > +}
> > +
> > +#define DOMAIN_SAVE_ALIGN 8
>
> This is part of the stream ABI.
>

And what's actually the problem with defining it here?

> > +
> > +int domain_save_end(struct domain_context *c)
> > +{
> > +    struct domain *d = c->domain;
> > +    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */
>
> DOMAIN_SAVE_ALIGN - (c->len & (DOMAIN_SAVE_ALIGN - 1))
>
> isn't vulnerable to overflow.
>

...and significantly uglier code. What's actually wrong with what I wrote?

> > +    int rc;
> > +
> > +    if ( len )
> > +    {
> > +        static const uint8_t pad[DOMAIN_SAVE_ALIGN] = {};
> > +
> > +        rc = domain_save_data(c, pad, len);
> > +
> > +        if ( rc )
> > +            return rc;
> > +    }
> > +    ASSERT(IS_ALIGNED(c->len, DOMAIN_SAVE_ALIGN));
> > +
> > +    if ( c->name )
> > +        gdprintk(XENLOG_INFO, "%pd save: %s[%u] +%zu (-%zu)\n", d, c->name,
> > +                 c->desc.instance, c->len, len);
>
> IMO, this is unhelpful to print out.  It also appears to be the only use
> of the c->name field.
>
> It also creates obscure and hard to follow logic based on dry_run.
>

I'll drop it to debug. I personally find it helpful and would prefer to keep it.

> > diff --git a/xen/include/public/save.h b/xen/include/public/save.h
> > new file mode 100644
> > index 0000000000..551dbbddb8
> > --- /dev/null
> > +++ b/xen/include/public/save.h
> > @@ -0,0 +1,89 @@
> > +/*
> > + * save.h
> > + *
> > + * Structure definitions for common PV/HVM domain state that is held by
> > + * Xen and must be saved along with the domain's memory.
> > + *
> > + * Copyright Amazon.com Inc. or its affiliates.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a copy
> > + * of this software and associated documentation files (the "Software"), to
> > + * deal in the Software without restriction, including without limitation the
> > + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> > + * sell copies of the Software, and to permit persons to whom the Software is
> > + * furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> > + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> > + * DEALINGS IN THE SOFTWARE.
> > + */
> > +
> > +#ifndef XEN_PUBLIC_SAVE_H
> > +#define XEN_PUBLIC_SAVE_H
> > +
> > +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> > +
> > +#include "xen.h"
> > +
> > +/* Entry data is preceded by a descriptor */
> > +struct domain_save_descriptor {
> > +    uint16_t typecode;
> > +
> > +    /*
> > +     * Instance number of the entry (since there may be multiple of some
> > +     * types of entries).
> > +     */
> > +    uint16_t instance;
> > +
> > +    /* Entry length not including this descriptor */
> > +    uint32_t length;
> > +};
> > +
> > +/*
> > + * Each entry has a type associated with it. DECLARE_DOMAIN_SAVE_TYPE
> > + * binds these things together, although it is not intended that the
> > + * resulting type is ever instantiated.
> > + */
> > +#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
> > +    struct DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };
> > +
> > +#define DOMAIN_SAVE_CODE(_x) \
> > +    (sizeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->c))
> > +#define DOMAIN_SAVE_TYPE(_x) \
> > +    typeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->t)
>
> I realise this is going to make me very unpopular, but NACK.
>
> This is straight up obfuscation with no redeeming properties.  I know
> you've copied it from the existing HVMCONTEXT infrastructure, but it is
> obnoxious to use there (particularly in the domain builder) and not an
> example wanting copying.
>
> Furthermore, the code will be simpler and easier to follow without it.
>

OK, I can drop it if you so vehemently object.

> Secondly, and more importantly, I do not see anything in docs/specs/
> describing the binary format of this stream,  and I'm going to insist
> that one appears, ahead of this patch in the series.
>

I can certainly put something there if you wish.

> In doing so, you're hopefully going to discover the bug with the older
> HVMCONTEXT stream which makes the version field fairly pointless (more
> below).
>
> It should describe how to forward compatibly extend the stream, and
> under what circumstances the version number can/should change.  It also
> needs to describe the alignment and extending rules which ...
>
> > +
> > +/*
> > + * All entries will be zero-padded to the next 64-bit boundary when saved,
> > + * so there is no need to include trailing pad fields in structure
> > + * definitions.
> > + * When loading, entries will be zero-extended if the load handler reads
> > + * beyond the length specified in the descriptor.
> > + */
>
> ... shouldn't be this.
>

Auto-padding was explicitly requested by Julien and extending (with zeroes
or otherwise) is the necessary corollary (since the save handlers are not
explicitly padding to the alignment boundary).

> The current zero extending property was an emergency hack to fix an ABI
> breakage which had gone unnoticed for a couple of releases.  The work to
> implement it created several very hard to debug breakages in Xen.
>
> A properly designed stream shouldn't need auto-extending behaviour, and
> the legibility of the code is improved by not having it.
>
> It is a trick which can stay up your sleeve for an emergency, in the
> hope you'll never have to use it.
>

The zero-extending here is different; it does not form part of the
record. It is merely there to make sure the alignment constraint is met.

> > +
> > +/* Terminating entry */
> > +struct domain_save_end {};
> > +DECLARE_DOMAIN_SAVE_TYPE(END, 0, struct domain_save_end);
> > +
> > +#define DOMAIN_SAVE_MAGIC   0x53415645
> > +#define DOMAIN_SAVE_VERSION 0x00000001
> > +
> > +/* Initial entry */
> > +struct domain_save_header {
> > +    uint32_t magic;                /* Must be DOMAIN_SAVE_MAGIC */
> > +    uint16_t xen_major, xen_minor; /* Xen version */
> > +    uint32_t version;              /* Save format version */
> > +};
> > +DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
>
> The layout problem with the stream is the fact that this header doesn't
> come first.
>

? It most certainly does come first, as is evident from the load and save
functions. But I will add a document that states it, as requested.

> In the eventual future where uint16_t won't be sufficient for instance,
> and uint32_t might not be sufficient for len, the version number is
> going to have to be bumped, in order to change the descriptor layout.
>
>
> Overall, this patch needs to be a minimum of two.  First a written
> document which is the authoritative stream ABI, and the second which is
> this implementation.  The header describing the stream format should not
> be substantively different from xg_sr_stream_format.h
>

Ok.

> ~Andrew
>
> P.S. Another good reason for having extremely simple header files is for
> the poor soul trying to write a Go/Rust/other binding for this in some
> likely not-too-distant future.

Fine. I'm happy to drop the macro/type magic if no-one feels it is
necessary.

  Paul



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 08:21:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 08:21:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2872.8191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPLjq-00005k-VV; Mon, 05 Oct 2020 08:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2872.8191; Mon, 05 Oct 2020 08:20:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPLjq-00005Y-SX; Mon, 05 Oct 2020 08:20:54 +0000
Received: by outflank-mailman (input) for mailman id 2872;
 Mon, 05 Oct 2020 08:20:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UT+F=DM=techsingularity.net=mgorman@srs-us1.protection.inumbo.net>)
 id 1kPLjp-00005S-Ef
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 08:20:53 +0000
Received: from outbound-smtp22.blacknight.com (unknown [81.17.249.190])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37f84843-e545-47c8-ae8e-5b6d8cddffb6;
 Mon, 05 Oct 2020 08:20:52 +0000 (UTC)
Received: from mail.blacknight.com (pemlinmail01.blacknight.ie [81.17.254.10])
 by outbound-smtp22.blacknight.com (Postfix) with ESMTPS id
 2AFBE184056
 for <xen-devel@lists.xenproject.org>; Mon,  5 Oct 2020 09:20:51 +0100 (IST)
Received: (qmail 11945 invoked from network); 5 Oct 2020 08:20:50 -0000
Received: from unknown (HELO techsingularity.net)
 (mgorman@techsingularity.net@[84.203.22.4])
 by 81.17.254.9 with ESMTPSA (AES256-SHA encrypted, authenticated);
 5 Oct 2020 08:20:50 -0000
Date: Mon, 5 Oct 2020 09:20:49 +0100
From: Mel Gorman <mgorman@techsingularity.net>
To: Michal Hocko <mhocko@suse.com>
Cc: David Hildenbrand <david@redhat.com>, Vlastimil Babka <vbabka@suse.cz>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Oscar Salvador <osalvador@suse.de>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Mike Rapoport <rppt@kernel.org>,
	Scott Cheloha <cheloha@linux.ibm.com>,
	Michael Ellerman <mpe@ellerman.id.au>
Subject: Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of
 the freelist in unset_migratetype_isolate()
Message-ID: <20201005082049.GI3227@techsingularity.net>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-4-david@redhat.com>
 <20201002132404.GI4555@dhcp22.suse.cz>
 <df0c45bf-223f-1f0b-ce3d-f2b2e05626bd@redhat.com>
 <20201005065648.GO4555@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-15
Content-Disposition: inline
In-Reply-To: <20201005065648.GO4555@dhcp22.suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Mon, Oct 05, 2020 at 08:56:48AM +0200, Michal Hocko wrote:
> On Fri 02-10-20 17:20:09, David Hildenbrand wrote:
> > On 02.10.20 15:24, Michal Hocko wrote:
> > > On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
> > >> Page isolation doesn't actually touch the pages, it simply isolates
> > >> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
> > >>
> > >> We already place pages to the tail of the freelists when undoing
> > >> isolation via __putback_isolated_page(), let's do it in any case
> > >> (e.g., if order <= pageblock_order) and document the behavior.
> > >>
> > >> Add a "to_tail" parameter to move_freepages_block() but introduce a
> > >> new move_to_free_list_tail() - similar to add_to_free_list_tail().
> > >>
> > >> This change results in all pages onlined via online_pages() being
> > >> placed to the tail of the freelist.
> > > 
> > > Is there anything preventing to do this unconditionally? Or in other
> > > words is any of the existing callers of move_freepages_block benefiting
> > > from adding to the head?
> > 
> > 1. mm/page_isolation.c:set_migratetype_isolate()
> > 
> > We move stuff to the MIGRATE_ISOLATE list, we don't care about the order
> > there.
> > 
> > 2. steal_suitable_fallback():
> > 
> > I don't think we care too much about the order when already stealing
> > pageblocks ... and the freelist is empty I guess?
> > 
> > 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()
> > 
> > Not sure if we really care.
> 
> Honestly, I have no idea. I can imagine that some atomic high order
> workloads (e.g. in net) might benefit from cache line hot pages but I am
> not sure this is really observable.

The highatomic reserve is more concerned about the allocation
succeeding than it is about cache hotness.

-- 
Mel Gorman
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 08:32:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 08:32:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2878.8210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPLuk-00016n-6X; Mon, 05 Oct 2020 08:32:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2878.8210; Mon, 05 Oct 2020 08:32:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPLuk-00016g-3X; Mon, 05 Oct 2020 08:32:10 +0000
Received: by outflank-mailman (input) for mailman id 2878;
 Mon, 05 Oct 2020 08:32:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xuOL=DM=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1kPLui-00016b-SP
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 08:32:09 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9be9baae-a1a3-4ca8-a155-912478111101;
 Mon, 05 Oct 2020 08:32:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601886726;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=EECC2hHZI4ec+eb9JInRiWAWRRbgpMEQSCnMH2DnY+U=;
  b=MdwhCt42zrYEbwewesPbJxV6fMELs5gZ+ZxkeFnvJq6DC80twJ3SD+9t
   rVW33HtBW+c/bwlWBRuJTr/w3U2E7ps1BRNZV0gxZKDYzYKMPpaBIqu93
   k76FRnUqvljqJBk1Jq7XkAax2QJ9RNouFZIsq/atpyyC1NNVTfo1AogfJ
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 2Gn5YxardgT9ITyvFDjEAECuok5dQsNr7Kc5kYhqXRnrKu5uWYTqgyVsF4iqgAitcf7s88IDtF
 vxfkNUTfKe1j3Cag8giODm2KbiOAct5If+Ho5H6Ah7Rkk7ECMDJrcXLmHrdO6RmLig4433jZau
 qKq9XzLtViFV4th236uUdk0rlD1NUUdvnM7L8A6hJXUsWQrGw9BSSoKZQP0f3Z6vsUdymeU0IM
 zGOSlk4afIXUs+nHLmyrw95Dcm4DfKg2JKCVkVB+3kxkTEBOcgVwp/XANUzXlsn0OQtLQCjPfB
 nXk=
X-SBRS: None
X-MesageID: 28535591
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,338,1596513600"; 
   d="scan'208";a="28535591"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HNNb3cdN3e2TDH9RvCj4e6njZCHlTBhXyERKbStKrZ5DSLMwjCmMxJsUjGn/Wl2Rw6DYHER+r6Wb1x5jx/AHpNNn6oF/aGjcDmzcly9rzOciaRsMebNXhiPnA/HJYi1urCQTnDTxmtVXvFKWkBgMgss/quJehudCETpFJ6mcuh1bc6bLYlWRONI7v2g+Sh/7tJPiQavnpkDFqGByWoyrMbang3sKDlflEDg2crBOG9KwX71oTZ3SbkKru+drml0ro1jSu8mqLMl3QU0gzngAExVQj8bE002PNjHOGh4T3TUCerbgs2yFtlBjplrSLOO1OcIDmGSK3nwdhEBEzU+s6Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hBFWe4FusilT2++nnrktdpnGTHNllFZN9SNle7DHB/k=;
 b=U01yngFiqAaPcggZZNtVfhKxLoKS3sj01y83/Hdlxco0p7GiLEW2MOhp3PSjhJdacyWkEQlmnASOdycSgfk0zl9lWgbVmWwRZ0m7jtD7SD9G/+5nUsKoSudqjFvECi7H1/jwjjDkA4t1es6174IYw8iWM+OnaxgK7DRD/6EbQCSnHJoII7CxK/HnpZeoSKB5Rcd3xsZ1JPGytk9jxMBsTvCQYfaZ/qUAnHnUFt4PBTR89xEjBzH6Ld+Whtxm5uSOp91OceiDlh2FhOX1qYSQQGu/HihGabUVVIrF9YQ9pf/nx2tNsYjbjMIvMi8o/5VpRaHqlMCxI6eRvW9fwKswRg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hBFWe4FusilT2++nnrktdpnGTHNllFZN9SNle7DHB/k=;
 b=X65X+WewrkkPDn4Q+kBArwG+F3sC2pcfHCY3YRMY0kBuCOKH84ZKKw1ezaLQcVkFUuiHFvab3URclbahhYZP+Ll5MZDUHIF72XqicxGXT5t2LBWYzi2MEtXAJUR5uma6dqlfWwa9M68viYtG93+pb+AsIkvLLlI509cqoTR6TzQ=
From: Christian Lindig <christian.lindig@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Roger Pau Monne
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, Edwin Torok <edvin.torok@citrix.com>, Rob Hoes
	<Rob.Hoes@citrix.com>
Subject: Re: [PATCH 3/8] xen/domctl: Introduce and use
 XEN_DOMCTL_CDF_nested_virt
Thread-Topic: [PATCH 3/8] xen/domctl: Introduce and use
 XEN_DOMCTL_CDF_nested_virt
Thread-Index: AQHWlzAJkAo6XskIDU6SR0iyiMzfOamCi3aAgAAK6ICABh8oLg==
Date: Mon, 5 Oct 2020 08:32:03 +0000
Message-ID: <DS7PR03MB56550E490547FB1E15C6EBB6F60C0@DS7PR03MB5655.namprd03.prod.outlook.com>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-4-andrew.cooper3@citrix.com>
 <32a955ad-9995-6df2-3d7b-6b3eb7b1d656@suse.com>,<8e25a67b-21f1-5203-6531-4ed378a74bfb@citrix.com>
In-Reply-To: <8e25a67b-21f1-5203-6531-4ed378a74bfb@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 76ad5313-dacb-438d-9a4d-08d8690922ea
x-ms-traffictypediagnostic: DM5PR03MB3355:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <DM5PR03MB3355F4C86A051EEEA0D6B457F60C0@DM5PR03MB3355.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: k6NPbBzCZIV6nrksy05UXdt4WP/gz8LycQ0RGkamLOnoUNHT5Tt37quFYaaMSgwFNWF5X8vdsNSjFiMW9qJt8Rzo7aoM3BjU/MxMz98uXm9ZIYKlaVAbHGwJ6Ku+peBi5hn63QY/Ta43J2jKRsH7VGpH6aA2MJ2yiEYzdkpSAWK1uVGSXoQ17rwFOEx4ZelxLv7hwDaqUJzQAfEqqZywpBfTth/HathavZRNibUh/yYD67fn0HhQZtsFIRR3a2WhcGsaz0QkM+cn2VoyN1s3KdyZBMWX/o1gAO6tLK0MqUc3SKg93QXmXEGplEXLhpfTlLzCS5lCFAS4ivqLS2kDWA==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5655.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(346002)(396003)(39850400004)(136003)(55016002)(52536014)(66476007)(9686003)(8936002)(66946007)(8676002)(316002)(4326008)(2906002)(107886003)(5660300002)(478600001)(83380400001)(110136005)(54906003)(33656002)(6506007)(53546011)(76116006)(91956017)(26005)(7696005)(186003)(55236004)(44832011)(64756008)(66556008)(71200400001)(86362001)(66446008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: P97zFfbJOMHA6PnCr0BK05vU7xNEN1tTfLqosnzH18UKNZ34IQKOLl4YDkN80B487u5yq5xC4hZfEHvKkEfWYYx1NlSfaXmXK2LfZgn5kDZxijF+E/O70DH7jePk1/ZspaxSR/KBiwNNJMq9g113BCEsIYFwIPHm9VG5una6s7kryMKfZdPM6L/h55PrxSgFxb5c0reYJM09JM0OL+WV7LEoc4232VqZuRsO79cBW1BILa7gMOq7uADkIxZIi1gHB7OBASA/w90M76Yus0Wd0OoDW8IHpTTKSQ0J84rV7sJLAPKgxqc/K8WXtbvj1BxjjzzswnwZobpjsgbB0b1iFOc9Oj3rZhFF40Q/+JalqBIXH/YhtuwWyxwn7tm/7u6RSjbfJdfC57wQvSGFr49Ups29HDCzEu3WeUVsyby2PgQ8kpet7zPqSFAm5JiIZL6A5SAh71JS5zq/sVYxxBstRViOSJGS5Wy5qozrHjloil+ZanqdFMst+N5AE6pZYb12yC6qUvevPNEqytUqRBJ0wAdecJ5Hb8HKin4yGA7mJhOoUF4hT+xx04UiAs+uIoPM+8upk+6dlNBBV1qiqmNUMcvEm6IX1z8JAcQd4uW2E1kn0++7kMGuajppVKLRHDQm+n4tIVlpTQc5Ie+7478fcg==
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5655.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 76ad5313-dacb-438d-9a4d-08d8690922ea
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Oct 2020 08:32:03.1987
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: LSdfAVAiDPKzwGcxECw6XnbGw+Vu6NJKvYPgF9abmNYPMDEfJcMNCk96cibBplRHba25a5NnkKrisEzE9lA7AnFSC5qQaTUceoXt7sq0sYQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3355
X-OriginatorOrg: citrix.com

--
Acked-by: Christian Lindig <christian.lindig@citrix.com>

________________________________________
From: Andrew Cooper <Andrew.Cooper3@citrix.com>
Sent: 01 October 2020 12:02
To: Jan Beulich
Cc: Xen-devel; Roger Pau Monne; Wei Liu; Ian Jackson; Christian Lindig; Edwin Torok; Rob Hoes
Subject: Re: [PATCH 3/8] xen/domctl: Introduce and use XEN_DOMCTL_CDF_nested_virt

On 01/10/2020 11:23, Jan Beulich wrote:
> On 30.09.2020 15:42, Andrew Cooper wrote:
>> @@ -667,6 +668,12 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>>           */
>>          config->flags |= XEN_DOMCTL_CDF_oos_off;
>>
>> +    if ( nested_virt && !hap )
>> +    {
>> +        dprintk(XENLOG_INFO, "Nested virt not supported without HAP\n");
>> +        return -EINVAL;
>> +    }
> Initially I was merely puzzled by this not being accompanied by
> any removal of code elsewhere. But when I started looking I couldn't
> find any such enforcement, but e.g. did find nsvm_vcpu_hostrestore()
> covering the shadow mode case. For this to be "No functional change
> yet" as the description claims, could you point me at where this
> restriction is currently enforced?

Currently enforced in the HVM_PARAM_NESTEDHVM write side effect, which
is deleted in patch 5.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 08:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 08:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2881.8222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPM0q-0001N1-2v; Mon, 05 Oct 2020 08:38:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2881.8222; Mon, 05 Oct 2020 08:38:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPM0p-0001Mu-VL; Mon, 05 Oct 2020 08:38:27 +0000
Received: by outflank-mailman (input) for mailman id 2881;
 Mon, 05 Oct 2020 08:38:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xuOL=DM=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1kPM0p-0001Ml-CH
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 08:38:27 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fab6de1-2a61-4374-a893-8e47797d6455;
 Mon, 05 Oct 2020 08:38:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601887106;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=tBvVHmIF2GIM2x9PCRwp2kyvhDrTNgt8GBG7kUlBSrk=;
  b=aU12GS/4cjyLIyoapzevpoZZMkebMkpXD4go17jU3ZjmRxXET5F8qvad
   FihXMqmm3GJH3A4iYwQvmLgP7NEqS4HCneYjZ8u2ODMU5B6CfcmpeoOar
   x9/h3yOQCkB6q8IZAgDChtPrSUWeV3Hy8f48Z0N9/exroF7rZ0ZO1xHJB
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: BWjk+RJ3oYM8yWG1+I6x0Jm01RO8rTtQ0dN0gPolIovZM9EwhA5loLzvf25aEjljqRSt00WHlp
 4YhntDWS+yKg6qsp2UVJeWGOi5gibT8FeEapLe2cDLjxU3WXW5vgMgkL9QOFBiwwpDesZ/uAMf
 vpNXDfG7yfEmXqzQV+bFBVACHUE2YtUIYEQIJoMRKlrQ5lO021/mC7I4x25V9nxDRb1JE0of7s
 nDzBKxKl51x8PDzV96YC5Xju4uSBjYjHNKxnUpqeiQN4ZfpMsUAlBb20gNyP7ol9FOkBrxXfB4
 13g=
X-SBRS: None
X-MesageID: 28604362
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,338,1596513600"; 
   d="scan'208";a="28604362"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jae9MzZU52FU9xvFPQW8uAo18BfTRWK65VI3z+5ZMRQdopxlLZ2tGN/UO2JzoI5zTroDSYqJIruRt/mpf6x2YDgqo8Z3mg37ZMu1egm5w6XO+f2+lCaOHCAHo//J3A68uBMhCSMXYvRDVXV/AXa4NlPCVthaxMTkmHR32LOeYgRAw17HITFroCAR7B8tz92CjCad4HNBD4oPNO7R7tnGrToT61SzxisEGQiJLh8By392hhq9I6tgRRfMWKPftSI2KqnKyakvKtvYA4K6+gDNEVsXLWbzdF2M0M3+Vd1B3eVJs6uJy0rnff8JdeYIMBJpANwFfKEuksw94jo/MAtYhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DwSU15XZe15isGxfsYaPjURkiPDmKzVN9Hq6/ps0fgA=;
 b=hLBZTMETG/EdZxWjqStXISeN8RZQ9VZHhfJXvs1a9NIQ3XacE9pK4i6N2eopWb4/Mw0Fo1BZFy+8kDFgYD5uj4WXNZ6XiRTbmvdTfuZW86dtI+WHw4Jm+osqUlTCjhlEXkEzBTACx2QQ9BKqL3qO87htI3vOqkq+joAYP/hJb0+dIts0eHwVa9/MWuz52RFGfdba0FijZEi2y6lJAezKxgZcVB5wFyJQrBCF5Wdh9fA4oOs07MnwY3CIKYnBHl7RwI0Vafo4YzRySDvb+3lPmOP7p/q7jtlUw/wIdkDpCQ0CKtltoV7FqrY5+NPcrNeO3VqiiNZYEs54Fb2xA2vVTg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DwSU15XZe15isGxfsYaPjURkiPDmKzVN9Hq6/ps0fgA=;
 b=UmgZtKQYrYtTnqHGeD1V4cxhYlynHOLGX2vrBgJvJKO83oUIR+1clOKMK/7fAzxMtrKCQGoEsgrU1PcJIAqbPmh4QUOJJEZhkdXGfM+UtovN8Tz8C1lJ/P/x/YhTwi2Kp3RFBT6kxglRHNyq8D9CaN9kfBByHcgou7mBLvQXPAs=
From: Christian Lindig <christian.lindig@citrix.com>
To: Juergen Gross <jgross@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>, George Dunlap
	<George.Dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Samuel Thibault
	<samuel.thibault@ens-lyon.org>, David Scott <dave@recoil.org>
Subject: Re: [PATCH 0/3] tools: avoid creating symbolic links during make
Thread-Topic: [PATCH 0/3] tools: avoid creating symbolic links during make
Thread-Index: AQHWmMdzMWgM7QI3dEeJGaSzpEsf9qmItC1q
Date: Mon, 5 Oct 2020 08:38:21 +0000
Message-ID: <DS7PR03MB5655DE94A4B952B2FAD9C81EF60C0@DS7PR03MB5655.namprd03.prod.outlook.com>
References: <20201002142214.3438-1-jgross@suse.com>
In-Reply-To: <20201002142214.3438-1-jgross@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 4c7b82d3-f6d9-4086-36e9-08d8690a0469
x-ms-traffictypediagnostic: DM5PR03MB3355:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <DM5PR03MB33559AC5A84F6F762886FAF0F60C0@DM5PR03MB3355.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:4714;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: vEmQgMQ6rZKHkmJmiMiwSBEdom6T9O4b0qeLZpTji7vR9A1WTERPu9zw9+iQEFjw5Yc08rkpoQlD8k9+oQHELnzZVTdQqFuCW1+stnF0gCZqE3RQ8P/AygbHOpEkaA3M6bkojz4LFipR/lE8Ix//zSBMWV7YjAB2tm3akNojjczr0vx+grGDih4bGNWluFPCosD1AE5mF3CIM8RfibSp8q9m14Xa08Y3VGzQqM/dsPdwvV1DAn8vyZByHDmv3/JLEUuylLaMzoeiUpW6w9rKsh8ws8lpwZI8VhsiJYlamBQfOHkkqke/hYldxZ5sHHxI4tPE6XZ54pfahhyU9tkfaQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5655.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(346002)(396003)(39850400004)(136003)(55016002)(52536014)(66476007)(9686003)(8936002)(66946007)(8676002)(316002)(4326008)(2906002)(5660300002)(478600001)(83380400001)(110136005)(54906003)(33656002)(6506007)(53546011)(76116006)(91956017)(26005)(7696005)(186003)(55236004)(44832011)(64756008)(66556008)(71200400001)(86362001)(66446008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: zmhOzbMg3hq7JLn9p/zL5V6KwCGnQlxuL14Hf+ApmO0Ef6qxuaZhMUSia3HPsvF4qPLLAxyodz+8/lBdUewE0BtJxjHymNWqr7pBtsYVkmHcU6QwLjMrqQkJToh8+qDHeKSDr/bQ5zBGj1/GyN/Z3UO2G29pkehpI5VGwN0XsWWG2r0Rh1XxT4HCaXIdFdG4HkNYJ1pFsxxeeyidgWQQBPtZ6U8C5xuPNXpB2juv0gLeLqMCDD8RfG5IokJDKR7wV66fdnO71p9k85LJeOM1wO9ihND3WPxJ95407w0VAlyFkqWSlRWTXPE0pUYUc4xOoiAG6G8jFYRSgQPXGWvSpnETOSPAo1tzXKBbJqRlZr6BZV0k9iD01Omj99iKpxlrr5ZtxS0feveSGlLs5/ImGAnzobyPKDzIZaZXUXgIEZX2+vfcZTAmJQxNye4KBG5CEqlRpaLnke1PZ9+mYVLw5Kzu86U6Rm5MykX4lTtj/ftZXjDawFg2hZWvHdslwzIZgp7p5t+JUa4xs8xBb66HiXNxOjE1Y0yQ+xAI565focFEUxi0t9L6tw+z1XYxT7FIORjSVfHesN3xXAFqFnAyCIBKK56X/V2Bw6dXhoNqv9BYFH/C/UctEo540TNB/ADZFQeEbsBcvLBJMUBGptI2hQ==
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5655.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4c7b82d3-f6d9-4086-36e9-08d8690a0469
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Oct 2020 08:38:21.5886
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: DEgkGDoJ7BzUHwv7vQl9Ppq1drS6CO+jSYgj/0J1akAMkmUHRUWnn9B+3HG2FccPGWTpbx6+raQMgrN51W9TkAtJvYYqDmo0V2GfHzxwQn8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3355
X-OriginatorOrg: citrix.com

--
Acked-by: Christian Lindig <christian.lindig@citrix.com>

________________________________________
From: Juergen Gross <jgross@suse.com>
Sent: 02 October 2020 15:22
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross; Andrew Cooper; George Dunlap; Ian Jackson; Jan Beulich; Julien Grall; Stefano Stabellini; Wei Liu; Samuel Thibault; Christian Lindig; David Scott
Subject: [PATCH 0/3] tools: avoid creating symbolic links during make

The rework of the Xen library build introduced creating some additional
symbolic links during the build process.

This series is undoing that by moving all official Xen library headers
to tools/include and by using include paths and the vpath directive
when access to some private headers of another directory is needed.

Juergen Gross (3):
  tools/libs: move official headers to common directory
  tools/libs/guest: don't use symbolic links for xenctrl headers
  tools/libs/store: don't use symbolic links for external files

 .gitignore                                    |  5 ++--
 stubdom/mini-os.mk                            |  2 +-
 tools/Rules.mk                                |  5 ++--
 tools/{libs/vchan => }/include/libxenvchan.h  |  0
 tools/{libs/light => }/include/libxl.h        |  0
 tools/{libs/light => }/include/libxl_event.h  |  0
 tools/{libs/light => }/include/libxl_json.h   |  0
 tools/{libs/light => }/include/libxl_utils.h  |  0
 tools/{libs/light => }/include/libxl_uuid.h   |  0
 tools/{libs/util => }/include/libxlutil.h     |  0
 tools/{libs/call => }/include/xencall.h       |  0
 tools/{libs/ctrl => }/include/xenctrl.h       |  0
 .../{libs/ctrl => }/include/xenctrl_compat.h  |  0
 .../devicemodel => }/include/xendevicemodel.h |  0
 tools/{libs/evtchn => }/include/xenevtchn.h   |  0
 .../include/xenforeignmemory.h                |  0
 tools/{libs/gnttab => }/include/xengnttab.h   |  0
 tools/{libs/guest => }/include/xenguest.h     |  0
 tools/{libs/hypfs => }/include/xenhypfs.h     |  0
 tools/{libs/stat => }/include/xenstat.h       |  0
 .../compat => include/xenstore-compat}/xs.h   |  0
 .../xenstore-compat}/xs_lib.h                 |  0
 tools/{libs/store => }/include/xenstore.h     |  0
 tools/{xenstore => include}/xenstore_lib.h    |  0
 .../{libs/toolcore => }/include/xentoolcore.h |  0
 .../include/xentoolcore_internal.h            |  0
 tools/{libs/toollog => }/include/xentoollog.h |  0
 tools/libs/call/Makefile                      |  3 ---
 tools/libs/ctrl/Makefile                      |  3 ---
 tools/libs/devicemodel/Makefile               |  3 ---
 tools/libs/evtchn/Makefile                    |  2 --
 tools/libs/foreignmemory/Makefile             |  3 ---
 tools/libs/gnttab/Makefile                    |  3 ---
 tools/libs/guest/Makefile                     | 12 ++-------
 tools/libs/hypfs/Makefile                     |  3 ---
 tools/libs/libs.mk                            | 10 +++----
 tools/libs/light/Makefile                     | 27 +++++++++----------
 tools/libs/stat/Makefile                      |  2 --
 tools/libs/store/Makefile                     | 15 +++--------
 tools/libs/toolcore/Makefile                  |  9 +++----
 tools/libs/toollog/Makefile                   |  2 --
 tools/libs/util/Makefile                      |  3 ---
 tools/libs/vchan/Makefile                     |  3 ---
 tools/ocaml/libs/xentoollog/Makefile          |  2 +-
 tools/ocaml/libs/xentoollog/genlevels.py      |  2 +-
 45 files changed, 32 insertions(+), 87 deletions(-)
 rename tools/{libs/vchan => }/include/libxenvchan.h (100%)
 rename tools/{libs/light => }/include/libxl.h (100%)
 rename tools/{libs/light => }/include/libxl_event.h (100%)
 rename tools/{libs/light => }/include/libxl_json.h (100%)
 rename tools/{libs/light => }/include/libxl_utils.h (100%)
 rename tools/{libs/light => }/include/libxl_uuid.h (100%)
 rename tools/{libs/util => }/include/libxlutil.h (100%)
 rename tools/{libs/call => }/include/xencall.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl_compat.h (100%)
 rename tools/{libs/devicemodel => }/include/xendevicemodel.h (100%)
 rename tools/{libs/evtchn => }/include/xenevtchn.h (100%)
 rename tools/{libs/foreignmemory => }/include/xenforeignmemory.h (100%)
 rename tools/{libs/gnttab => }/include/xengnttab.h (100%)
 rename tools/{libs/guest => }/include/xenguest.h (100%)
 rename tools/{libs/hypfs => }/include/xenhypfs.h (100%)
 rename tools/{libs/stat => }/include/xenstat.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs_lib.h (100%)
 rename tools/{libs/store => }/include/xenstore.h (100%)
 rename tools/{xenstore => include}/xenstore_lib.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore_internal.h (100%)
 rename tools/{libs/toollog => }/include/xentoollog.h (100%)

--
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 08:41:55 2020
From: Christian Lindig <christian.lindig@citrix.com>
To: Edwin Torok <edvin.torok@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: David Scott <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v1 1/1] tools/ocaml/xenstored: drop the creation of the RO
 socket
Date: Mon, 5 Oct 2020 08:40:31 +0000
Message-ID: <DS7PR03MB56552313737C8007598620F1F60C0@DS7PR03MB5655.namprd03.prod.outlook.com>
References: <cover.1601654648.git.edvin.torok@citrix.com>,<0cc19ced022e2a302fccf42bf9521c61dd0dca8a.1601654648.git.edvin.torok@citrix.com>
In-Reply-To: <0cc19ced022e2a302fccf42bf9521c61dd0dca8a.1601654648.git.edvin.torok@citrix.com>

--
Acked-by: Christian Lindig <christian.lindig@citrix.com>

________________________________________
From: Edwin Török <edvin.torok@citrix.com>
Sent: 02 October 2020 17:06
To: xen-devel@lists.xenproject.org
Cc: Edwin Torok; Christian Lindig; David Scott; Ian Jackson; Wei Liu
Subject: [PATCH v1 1/1] tools/ocaml/xenstored: drop the creation of the RO socket

The readonly flag was propagated to the connection but then ignored, so the
RO socket was essentially equivalent to a RW socket.

C xenstored is dropping its RO socket too, so drop it from oxenstored as well.
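For readers skimming the diff below, the shape of the change can be sketched in plain OCaml: the daemon goes from a pair of optional listening sockets (RW and RO) to a single optional RW socket, and `accept_connection` loses its `can_write` flag. The helpers here (`maybe`, the stubbed `add_anonymous`) are illustrative stand-ins, not the daemon's real code.

```ocaml
(* Sketch only: mirrors the post-patch accept path with stub helpers. *)

(* Apply f to the payload of an option, or do nothing. *)
let maybe f = function None -> () | Some x -> f x

(* No can_write parameter any more: every anonymous connection is RW. *)
let accept_connection add_anonymous sock =
  let (cfd, _addr) = Unix.accept sock in
  add_anonymous cfd

let () =
  let disable_socket = false in
  (* A single optional RW socket, instead of the former (rw, ro) pair. *)
  let rw_sock =
    if disable_socket then None
    else Some (Unix.handle_unix_error
                 (fun () -> Unix.socket Unix.PF_UNIX Unix.SOCK_STREAM 0) ())
  in
  maybe (accept_connection (fun cfd -> Unix.close cfd)) rw_sock
```

With only one socket left, the `true`/`false` arguments previously threaded through `accept_connection` disappear, which is exactly what the xenstored.ml hunk below does.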

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/xenstored/connections.ml |  2 +-
 tools/ocaml/xenstored/define.ml      |  1 -
 tools/ocaml/xenstored/xenstored.ml   | 15 ++++++---------
 3 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/tools/ocaml/xenstored/connections.ml b/tools/ocaml/xenstored/connections.ml
index f02ef6b526..f2c4318c88 100644
--- a/tools/ocaml/xenstored/connections.ml
+++ b/tools/ocaml/xenstored/connections.ml
@@ -31,7 +31,7 @@ let create () = {
        watches = Trie.create ()
 }

-let add_anonymous cons fd _can_write =
+let add_anonymous cons fd =
        let xbcon = Xenbus.Xb.open_fd fd in
        let con = Connection.create xbcon None in
        Hashtbl.add cons.anonymous (Xenbus.Xb.get_fd xbcon) con
diff --git a/tools/ocaml/xenstored/define.ml b/tools/ocaml/xenstored/define.ml
index 2965c08534..ea9e1b7620 100644
--- a/tools/ocaml/xenstored/define.ml
+++ b/tools/ocaml/xenstored/define.ml
@@ -18,7 +18,6 @@ let xenstored_major = 1
 let xenstored_minor = 0

 let xs_daemon_socket = Paths.xen_run_stored ^ "/socket"
-let xs_daemon_socket_ro = Paths.xen_run_stored ^ "/socket_ro"

 let default_config_dir = Paths.xen_config_dir

diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index 5b96f1852a..7e7824761b 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -242,12 +242,11 @@ let _ =
                ()
        );

-       let rw_sock, ro_sock =
+       let rw_sock =
                if cf.disable_socket then
-                       None, None
+                       None
                else
-                       Some (Unix.handle_unix_error Utils.create_unix_socket Define.xs_daemon_socket),
-                       Some (Unix.handle_unix_error Utils.create_unix_socket Define.xs_daemon_socket_ro)
+                       Some (Unix.handle_unix_error Utils.create_unix_socket Define.xs_daemon_socket)
                in

        if cf.daemonize then
@@ -320,15 +319,14 @@ let _ =

        let spec_fds =
                (match rw_sock with None -> [] | Some x -> [ x ]) @
-               (match ro_sock with None -> [] | Some x -> [ x ]) @
                (if cf.domain_init then [ Event.fd eventchn ] else [])
                in

        let process_special_fds rset =
-               let accept_connection can_write fd =
+               let accept_connection fd =
                        let (cfd, _addr) = Unix.accept fd in
                        debug "new connection through socket";
-                       Connections.add_anonymous cons cfd can_write
+                       Connections.add_anonymous cons cfd
                and handle_eventchn _fd =
                        let port = Event.pending eventchn in
                        debug "pending port %d" (Xeneventchn.to_int port);
@@ -348,8 +346,7 @@ let _ =
                        if List.mem fd set then
                                fct fd in

-               maybe (fun fd -> do_if_set fd rset (accept_connection true)) rw_sock;
-               maybe (fun fd -> do_if_set fd rset (accept_connection false)) ro_sock;
+               maybe (fun fd -> do_if_set fd rset accept_connection) rw_sock;
                do_if_set (Event.fd eventchn) rset (handle_eventchn)
        in

--
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 08:46:15 2020
Subject: Re: [PATCH 0/5] qapi: Restrict machine (and migration) specific
 commands
To: Markus Armbruster <armbru@redhat.com>,
 Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Juan Quintela <quintela@redhat.com>,
 Paul Durrant <paul@xen.org>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Gerd Hoffmann <kraxel@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Richard Henderson <rth@twiddle.net>
References: <20201002133923.1716645-1-philmd@redhat.com>
 <87wo05aypg.fsf@dusky.pond.sub.org>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <0c54aa06-372c-ab81-0974-34340adb7b55@redhat.com>
Date: Mon, 5 Oct 2020 10:46:02 +0200
In-Reply-To: <87wo05aypg.fsf@dusky.pond.sub.org>

On 05/10/20 10:01, Markus Armbruster wrote:
> Philippe Mathieu-Daudé <philmd@redhat.com> writes:
> 
>> Reduce the machine code pulled into qemu-storage-daemon.
> I'm leaving review to Eduardo and Marcel for PATCH 1-4, and to David and
> Juan for PATCH 5.  David already ACKed.
> 
> Can do the pull request.
> 

If it counts, :) for patch 1-4:

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Generally these patches to remove code from user-mode emulators
fall into the "if it builds it's fine" bucket, since I assume
we want the "misc" subschema to be as small as possible.

Paolo



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 08:49:14 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Juergen Gross <jgross@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Subject: Re: [PATCH 0/3] tools: avoid creating symbolic links during make
Date: Mon, 5 Oct 2020 08:48:54 +0000
Message-ID: <9D05474D-F19D-406A-B5F5-44658F49C924@arm.com>
References: <20201002142214.3438-1-jgross@suse.com>
In-Reply-To: <20201002142214.3438-1-jgross@suse.com>
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C216FC234041044582D5BDD005B95995@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4711
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1163d8b7-b162-483b-daf2-08d8690b7d9a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	TShRSjHESBw9JzChCNpgcEoCK7vGxvTolH8PDse0u2mE+CjoP0KvtfXi+JS7xQ7ss+gzbGnJ5AIR/MtK9kgGGFvDu6D3e8xxLJBDmSyYsurooh+qCgO3Ou1deNWiukuxNb1QPQ72M0cbSf7wdo8ZYaLTnZjQ/7eMB2g6uPdQXtyYYXHDuCKehGIGjWsZG+UWfdJhhdRaK1wRYOKsqsv4XgmhDJo//IfslUsdc0usUUroGrLsZ47lKqhKUxoXVlEVXKKExttylZqhRqlh1HE/rB4Ewhm41D91OLJUUTnCDr5RLeRoOxOXrheHWHLSdJZNwbjL4v1VUi+Q/8xETOMcTEPW5ABaL4q32TS+LZZUk5eTnCwqCD6pmUBcQzO6ohrTTrXpNPNHRytJvWaAEVaRvA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(39860400002)(346002)(376002)(46966005)(82310400003)(478600001)(8676002)(316002)(356005)(36756003)(82740400003)(2616005)(47076004)(36906005)(5660300002)(6486002)(2906002)(107886003)(83380400001)(8936002)(6512007)(186003)(26005)(336012)(70586007)(6862004)(4326008)(70206006)(33656002)(81166007)(86362001)(54906003)(53546011)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Oct 2020 08:49:02.5519
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b5a4f10e-fee3-4285-a157-08d8690b826c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3065

--
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>

Works on Yocto Gatesgarth (including qemu and tools compilation).

Cheers
Bertrand

> On 2 Oct 2020, at 15:22, Juergen Gross <jgross@suse.com> wrote:
> 
> The rework of the Xen library build introduced creating some additional
> symbolic links during the build process.
> 
> This series is undoing that by moving all official Xen library headers
> to tools/include and by using include paths and the vpath directive
> when access to some private headers of another directory is needed.
> 
> Juergen Gross (3):
>  tools/libs: move official headers to common directory
>  tools/libs/guest: don't use symbolic links for xenctrl headers
>  tools/libs/store: don't use symbolic links for external files
> 
> .gitignore                                    |  5 ++--
> stubdom/mini-os.mk                            |  2 +-
> tools/Rules.mk                                |  5 ++--
> tools/{libs/vchan => }/include/libxenvchan.h  |  0
> tools/{libs/light => }/include/libxl.h        |  0
> tools/{libs/light => }/include/libxl_event.h  |  0
> tools/{libs/light => }/include/libxl_json.h   |  0
> tools/{libs/light => }/include/libxl_utils.h  |  0
> tools/{libs/light => }/include/libxl_uuid.h   |  0
> tools/{libs/util => }/include/libxlutil.h     |  0
> tools/{libs/call => }/include/xencall.h       |  0
> tools/{libs/ctrl => }/include/xenctrl.h       |  0
> .../{libs/ctrl => }/include/xenctrl_compat.h  |  0
> .../devicemodel => }/include/xendevicemodel.h |  0
> tools/{libs/evtchn => }/include/xenevtchn.h   |  0
> .../include/xenforeignmemory.h                |  0
> tools/{libs/gnttab => }/include/xengnttab.h   |  0
> tools/{libs/guest => }/include/xenguest.h     |  0
> tools/{libs/hypfs => }/include/xenhypfs.h     |  0
> tools/{libs/stat => }/include/xenstat.h       |  0
> .../compat => include/xenstore-compat}/xs.h   |  0
> .../xenstore-compat}/xs_lib.h                 |  0
> tools/{libs/store => }/include/xenstore.h     |  0
> tools/{xenstore => include}/xenstore_lib.h    |  0
> .../{libs/toolcore => }/include/xentoolcore.h |  0
> .../include/xentoolcore_internal.h            |  0
> tools/{libs/toollog => }/include/xentoollog.h |  0
> tools/libs/call/Makefile                      |  3 ---
> tools/libs/ctrl/Makefile                      |  3 ---
> tools/libs/devicemodel/Makefile               |  3 ---
> tools/libs/evtchn/Makefile                    |  2 --
> tools/libs/foreignmemory/Makefile             |  3 ---
> tools/libs/gnttab/Makefile                    |  3 ---
> tools/libs/guest/Makefile                     | 12 ++-------
> tools/libs/hypfs/Makefile                     |  3 ---
> tools/libs/libs.mk                            | 10 +++----
> tools/libs/light/Makefile                     | 27 +++++++++----------
> tools/libs/stat/Makefile                      |  2 --
> tools/libs/store/Makefile                     | 15 +++--------
> tools/libs/toolcore/Makefile                  |  9 +++----
> tools/libs/toollog/Makefile                   |  2 --
> tools/libs/util/Makefile                      |  3 ---
> tools/libs/vchan/Makefile                     |  3 ---
> tools/ocaml/libs/xentoollog/Makefile          |  2 +-
> tools/ocaml/libs/xentoollog/genlevels.py      |  2 +-
> 45 files changed, 32 insertions(+), 87 deletions(-)
> rename tools/{libs/vchan => }/include/libxenvchan.h (100%)
> rename tools/{libs/light => }/include/libxl.h (100%)
> rename tools/{libs/light => }/include/libxl_event.h (100%)
> rename tools/{libs/light => }/include/libxl_json.h (100%)
> rename tools/{libs/light => }/include/libxl_utils.h (100%)
> rename tools/{libs/light => }/include/libxl_uuid.h (100%)
> rename tools/{libs/util => }/include/libxlutil.h (100%)
> rename tools/{libs/call => }/include/xencall.h (100%)
> rename tools/{libs/ctrl => }/include/xenctrl.h (100%)
> rename tools/{libs/ctrl => }/include/xenctrl_compat.h (100%)
> rename tools/{libs/devicemodel => }/include/xendevicemodel.h (100%)
> rename tools/{libs/evtchn => }/include/xenevtchn.h (100%)
> rename tools/{libs/foreignmemory => }/include/xenforeignmemory.h (100%)
> rename tools/{libs/gnttab => }/include/xengnttab.h (100%)
> rename tools/{libs/guest => }/include/xenguest.h (100%)
> rename tools/{libs/hypfs => }/include/xenhypfs.h (100%)
> rename tools/{libs/stat => }/include/xenstat.h (100%)
> rename tools/{libs/store/include/compat => include/xenstore-compat}/xs.h (100%)
> rename tools/{libs/store/include/compat => include/xenstore-compat}/xs_lib.h (100%)
> rename tools/{libs/store => }/include/xenstore.h (100%)
> rename tools/{xenstore => include}/xenstore_lib.h (100%)
> rename tools/{libs/toolcore => }/include/xentoolcore.h (100%)
> rename tools/{libs/toolcore => }/include/xentoolcore_internal.h (100%)
> rename tools/{libs/toollog => }/include/xentoollog.h (100%)
> 
> -- 
> 2.26.2
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:12:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:12:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2902.8270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMXW-0005AX-OH; Mon, 05 Oct 2020 09:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2902.8270; Mon, 05 Oct 2020 09:12:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMXW-0005AQ-Kx; Mon, 05 Oct 2020 09:12:14 +0000
Received: by outflank-mailman (input) for mailman id 2902;
 Mon, 05 Oct 2020 09:12:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kPMXV-0005AJ-4p
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:12:13 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 9caff9b9-20c3-4021-9615-d50571bb0b5a;
 Mon, 05 Oct 2020 09:12:12 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-259-YxpvOLN6NPWKnYXBtXMdiw-1; Mon, 05 Oct 2020 05:12:08 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DC8451084D94;
 Mon,  5 Oct 2020 09:11:15 +0000 (UTC)
Received: from [10.36.114.222] (ovpn-114-222.ams2.redhat.com [10.36.114.222])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 2F0E060BFA;
 Mon,  5 Oct 2020 09:11:08 +0000 (UTC)
X-Inumbo-ID: 9caff9b9-20c3-4021-9615-d50571bb0b5a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601889132;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=NXwve0WxqZeoBW32EWOjz60HzfWSOgOTQAftpXofwqM=;
	b=Vz8KQleqFOYNvn6hQ2AJdms3vAdVE/U0Mxxp8Z8TAXOM5CM+X6ml2VypAWvLN8aA1ZA3lc
	JabO7yxnWq3K6NazVxvjZnzJ+LIECL69YhguASJqcOwWUIc8Sw1YrjnvyQOi06TzqnzYcr
	oMim0zQrtq62EKlPMn2jFZiAZYJUkfw=
X-MC-Unique: YxpvOLN6NPWKnYXBtXMdiw-1
Subject: Re: [PATCH v1 3/5] mm/page_alloc: always move pages to the tail of
 the freelist in unset_migratetype_isolate()
To: Mel Gorman <mgorman@techsingularity.net>, Michal Hocko <mhocko@suse.com>
Cc: Vlastimil Babka <vbabka@suse.cz>, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
 Andrew Morton <akpm@linux-foundation.org>, Oscar Salvador
 <osalvador@suse.de>, Alexander Duyck <alexander.h.duyck@linux.intel.com>,
 Dave Hansen <dave.hansen@intel.com>,
 Wei Yang <richard.weiyang@linux.alibaba.com>, Mike Rapoport
 <rppt@kernel.org>, Scott Cheloha <cheloha@linux.ibm.com>,
 Michael Ellerman <mpe@ellerman.id.au>
References: <20200928182110.7050-1-david@redhat.com>
 <20200928182110.7050-4-david@redhat.com>
 <20201002132404.GI4555@dhcp22.suse.cz>
 <df0c45bf-223f-1f0b-ce3d-f2b2e05626bd@redhat.com>
 <20201005065648.GO4555@dhcp22.suse.cz>
 <20201005082049.GI3227@techsingularity.net>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63W5Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAjwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat GmbH
Message-ID: <d466322e-054c-9303-5eb3-833dce410651@redhat.com>
Date: Mon, 5 Oct 2020 11:11:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201005082049.GI3227@techsingularity.net>
Content-Type: text/plain; charset=iso-8859-15
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12

On 05.10.20 10:20, Mel Gorman wrote:
> On Mon, Oct 05, 2020 at 08:56:48AM +0200, Michal Hocko wrote:
>> On Fri 02-10-20 17:20:09, David Hildenbrand wrote:
>>> On 02.10.20 15:24, Michal Hocko wrote:
>>>> On Mon 28-09-20 20:21:08, David Hildenbrand wrote:
>>>>> Page isolation doesn't actually touch the pages, it simply isolates
>>>>> pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>>>>>
>>>>> We already place pages to the tail of the freelists when undoing
>>>>> isolation via __putback_isolated_page(), let's do it in any case
>>>>> (e.g., if order <= pageblock_order) and document the behavior.
>>>>>
>>>>> Add a "to_tail" parameter to move_freepages_block() but introduce
>>>>> a new move_to_free_list_tail() - similar to add_to_free_list_tail().
>>>>>
>>>>> This change results in all pages onlined via online_pages() being
>>>>> placed at the tail of the freelist.
>>>>
>>>> Is there anything preventing to do this unconditionally? Or in other
>>>> words is any of the existing callers of move_freepages_block benefiting
>>>> from adding to the head?
>>>
>>> 1. mm/page_isolation.c:set_migratetype_isolate()
>>>
>>> We move stuff to the MIGRATE_ISOLATE list, we don't care about the order
>>> there.
>>>
>>> 2. steal_suitable_fallback():
>>>
>>> I don't think we care too much about the order when already stealing
>>> pageblocks ... and the freelist is empty I guess?
>>>
>>> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock()
>>>
>>> Not sure if we really care.
>>
>> Honestly, I have no idea. I can imagine that some atomic high order
>> workloads (e.g. in net) might benefit from cache line hot pages but I am
>> not sure this is really observable.
> 
> The highatomic reserve is more concerned about the allocation
> succeeding than it is about cache hotness.
> 

Thanks Mel and Michal. I'll simplify this patch then - and if it turns
out to be an actual problem, we can change that one instance, adding a
proper comment.

Thanks!

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:12:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:12:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2903.8282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMXz-0005Fv-1F; Mon, 05 Oct 2020 09:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2903.8282; Mon, 05 Oct 2020 09:12:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMXy-0005Fo-UF; Mon, 05 Oct 2020 09:12:42 +0000
Received: by outflank-mailman (input) for mailman id 2903;
 Mon, 05 Oct 2020 09:12:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vcsX=DM=epam.com=prvs=85474143a1=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1kPMXx-0005FZ-8c
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:12:41 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 851728b0-46e4-46fe-a50a-dc65a2d327ae;
 Mon, 05 Oct 2020 09:12:38 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09595GJv023244; Mon, 5 Oct 2020 09:12:35 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2058.outbound.protection.outlook.com [104.47.12.58])
 by mx0a-0039f301.pphosted.com with ESMTP id 33xjp5uvvg-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 05 Oct 2020 09:12:35 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com (2603:10a6:800:17e::20)
 by VI1PR03MB4847.eurprd03.prod.outlook.com (2603:10a6:803:bb::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Mon, 5 Oct
 2020 09:12:32 +0000
Received: from VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::71d4:858b:cc47:7da0]) by VI1PR03MB6400.eurprd03.prod.outlook.com
 ([fe80::71d4:858b:cc47:7da0%4]) with mapi id 15.20.3433.043; Mon, 5 Oct 2020
 09:12:32 +0000
X-Inumbo-ID: 851728b0-46e4-46fe-a50a-dc65a2d327ae
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dxFwJSIKbHI5On33/5odsW1RgwsLb3Z+kwLGSlvn18dpWN3dWekVpAjv1IBzdMjPPCpMs5GirIxeQ705yCThy0svh1FGwZg9hgcScJZ28zZ6zlMx5vx7V2gvNV/4lmHflPvDUeFegPUNERCHkHfY9qNvEaKT0urUQDViHoCZ3/DIY6Cw4qkJRSBB/eO2wImniyf31vTjQy6a0DLiCaZojndIlGqAzBZTXMc412AuZd7q4LBs7dTRCXRaZFtxPscIVKRMDqxnDMFR7+amv/f0FYKWj7NICSYo+3uP5IL14YXQ/c3NONe6L8qsNzoIADm+vnL/zx/+YdbrvzTd5uQb7w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QoxAd8gi+GZNq9Lcq2t5WvdxqX9WQ9ewfqYfmUe3/Mg=;
 b=BZuvlwsunZenyl8uEQd3PxNATQhW+5NoxKiCei6xT4HNkYr+dBLMrY4mS51mtxI0mxanRE12krxmuBckspRPHRG3QsmMB+/tzF4YJTONcpFYJiLNFpTEF48/wpcpKLTz8/1KBkRa+LJxHE2+5IAUUHI8bfhOI2o7XEOA9HRQjw/T18wnmpW+T/ofuv8uicysdpz1gecb5Tt60ms+Eg8YZ634yVARfeqsZuJSuPtyjLKXok0fCRvW3vWsyxsFomvoYo9i2sYwETpsMkS/dgpg3FHhKcmRIJEk2OUfLRReOdLcX1RN6lo3PVwSW8GBi6bqV6VsqFS3DRheFijWWljLDA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QoxAd8gi+GZNq9Lcq2t5WvdxqX9WQ9ewfqYfmUe3/Mg=;
 b=b22kCCOkLOSCyXq0tCq/K7cvt3R2eLRnhicT6DECqIrPlQ4DCAnop25MnBja9a1J3mqpMZ+oePrlp4NrXwqmOs9yH059MSgWmuYHVRyl5qXXcHbKA42bOYKfeVUp9V/fCreSNkPo15FVIc7FmQ4HZ8bvL31QpXmZA2twiC6XrA4+n9B2pvb8loU1tdLEGYxS74Pt9eB49tTh9IsPVyuEM0BrAoHSEyFDJbne5IOHo+OkUxHwxz/kgK4pNsOL47TAukFq+Jutw8kKg2ZWDFl88QCnRvj+AZabaKemCkY1GU1QOzoLU6zVxnPUtM7UuYwYzdFawpLsUdTpg0RcURxclQ==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano
 Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
CC: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: [PATCH] arm: optee: don't print warning about "wrong" RPC buffer
Thread-Topic: [PATCH] arm: optee: don't print warning about "wrong" RPC buffer
Thread-Index: AQHWmveoJ/DTTDqDgE+9I2W9l7TFKg==
Date: Mon, 5 Oct 2020 09:12:32 +0000
Message-ID: <20201005091212.186934-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: c0ef2115-fa02-446d-61dd-08d8690ecab4
x-ms-traffictypediagnostic: VI1PR03MB4847:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <VI1PR03MB4847247E34BE2B423D378C51E60C0@VI1PR03MB4847.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 Wq5JTLAzaUCyzWTIK3VmAWaRLSimcljm8mQFqnEzZOeYZL/fn4w9NY1NA0xfnUDc03+0RzZamvpCtmLa2PhHS5wwPfzHeLZmqV5QlVA2qhanuwmO81RSR0WV/vUVxT6TNANQS6Pz5UxY/4d4/YdnMrD7/whl78abgY3AjMi4M1IBL1nlF2nw+a2yT6VgXNpdYOwn8NJ4AlvEo72u8yUX33T8CV5FRQAU0mMrmy3H2rRXnVIPMIKPzc/Jz9T39NViwx0+WIQcX/bvz52dLUy2KkBQfEVUzCa0166dazdrSZDOHAqzneNKQ+/sfm81vWSqs/3kJdQQF9WpZimexC4W0w==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR03MB6400.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39850400004)(376002)(396003)(136003)(346002)(6512007)(478600001)(107886003)(8676002)(2906002)(4326008)(83380400001)(26005)(8936002)(76116006)(5660300002)(316002)(54906003)(36756003)(71200400001)(110136005)(66446008)(64756008)(86362001)(6486002)(91956017)(6506007)(55236004)(66476007)(66946007)(1076003)(186003)(2616005)(66556008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 H8hp6AX6W847xNUKd9xpQozoVhucOq1LfOt2Rhh1L5DYgunV8xS0BTnimQht3sguG7qNUglky6pGVMochgUdKvPW/eXr36p0kh5Vi33D+RHOxTFyCmgxQjJR/e9qlWzQ2yJ66Db+Kr1N6Vb/1+yZihkUOfn5gwU7SGMVACAqHGFjgyP4Bt69axUErsiwz1AS7fZgYI4HhEPNZLmw88gHyhjvoa+EGFvHUd5QvPiISpycTRRzdS8LJFzK1jkSl5wdiZvwoH2wZZMXQsACOsJLj3uSKJItKjr/WLvHbHNQluC7m1uI6ql3uhaGM1rjqotwa8GwA/0KNqJksFJppCM0priE/Xx+a/D8V3+FaPByQWOVZvzOjHj6TN9wAzCJy2Af/T09dHqN6QE6iv9YumM7WV9cr8u9SxYdM33d/ISPFQGo4Y+tMgnWZEpuCsFosOlo7hwKZd4yTg7ebEgwA9M06J5CRr93NdwSu3Rd5inuJ4sgvhvQMKlnuzeX9ixNiNg6gB7pVJwaeV62PzMnyV4mxJA6FzHasookwP+shxw4Ai3Tivwjg7AglZI72uSnvwSEOSZn7QXzDbN762UOBYPMAaOOjbok9NoiGf0I2k8iAA127jSh7VQM3K/oeNyPbI0qUJPV+xPiY/XzeP8GfNX5sw==
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: VI1PR03MB6400.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c0ef2115-fa02-446d-61dd-08d8690ecab4
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Oct 2020 09:12:32.3147
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ksyOC71X2v+cSGO/5Jp3/JqA4yUUhb7TkrsDtr0rqieWja+UhnGHdP4YMLSsdee1XTgw2bbIn69241OX1X790/tG87dwjQtlQd/F48PaUQ4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB4847
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687
 definitions=2020-10-05_06:2020-10-02,2020-10-05 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 phishscore=0
 priorityscore=1501 spamscore=0 mlxlogscore=999 malwarescore=0
 lowpriorityscore=0 mlxscore=0 adultscore=0 bulkscore=0 suspectscore=0
 impostorscore=0 clxscore=1011 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2006250000 definitions=main-2010050069

The OP-TEE mediator tracks the cookie value of the last buffer
requested by OP-TEE. This tracked value serves one important purpose:
if OP-TEE requests another buffer, we know that it has finished
importing the previous one, so we can free the page list associated
with it.

We also had the false premise that OP-TEE would free the requested
buffers in reverse order. So we checked rpc_data_cookie while handling
the OPTEE_RPC_CMD_SHM_FREE call and printed a warning if the cookie of
the buffer being freed differed from the last allocated one.

While testing the RPMB FS service I discovered that the RPMB code
frees request and response buffers in the same order it allocated
them. This is perfectly fine, actually.

So, remove the mentioned warning message in Xen, as it is perfectly
normal to free buffers in arbitrary order.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/tee/optee.c | 20 +-------------------
 1 file changed, 1 insertion(+), 19 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 8a39fe33b0..ee85359742 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -1127,25 +1127,7 @@ static int handle_rpc_return(struct optee_domain *ctx,
          */
         if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
         {
-            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
-
-            free_optee_shm_buf(ctx, cookie);
-
-            /*
-             * OP-TEE asks to free the buffer, but this is not the same
-             * buffer we previously allocated for it. While nothing
-             * prevents OP-TEE from asking this, it is the strange
-             * situation. This may or may not be caused by a bug in
-             * OP-TEE or mediator. But is better to print warning.
-             */
-            if ( call->rpc_data_cookie && call->rpc_data_cookie != cookie )
-            {
-                gprintk(XENLOG_ERR,
-                        "Saved RPC cookie does not corresponds to OP-TEE's (%"PRIx64" != %"PRIx64")\n",
-                        call->rpc_data_cookie, cookie);
-
-                WARN();
-            }
+            free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
             call->rpc_data_cookie = 0;
         }
         unmap_domain_page(shm_rpc->xen_arg);
-- 
2.27.0


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:14:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:14:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2904.8294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMZO-0005Pk-CO; Mon, 05 Oct 2020 09:14:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2904.8294; Mon, 05 Oct 2020 09:14:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMZO-0005Pc-9D; Mon, 05 Oct 2020 09:14:10 +0000
Received: by outflank-mailman (input) for mailman id 2904;
 Mon, 05 Oct 2020 09:14:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f4wr=DM=amazon.co.uk=prvs=540ed4173=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kPMZN-0005PT-1n
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:14:09 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f57f70e7-b0c6-4a96-8246-396bbefc26c9;
 Mon, 05 Oct 2020 09:14:08 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2a-53356bf6.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 05 Oct 2020 09:14:07 +0000
Received: from EX13D32EUC002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2a-53356bf6.us-west-2.amazon.com (Postfix) with ESMTPS
 id B4BCBA0460; Mon,  5 Oct 2020 09:14:05 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 5 Oct 2020 09:14:04 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 5 Oct 2020 09:14:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=f4wr=DM=amazon.co.uk=prvs=540ed4173=pdurrant@srs-us1.protection.inumbo.net>)
	id 1kPMZN-0005PT-1n
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:14:09 +0000
X-Inumbo-ID: f57f70e7-b0c6-4a96-8246-396bbefc26c9
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f57f70e7-b0c6-4a96-8246-396bbefc26c9;
	Mon, 05 Oct 2020 09:14:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1601889249; x=1633425249;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=ip993wRWsklUdrvJKBBrCrWudN+UV+bDYZ1jj5g8XPQ=;
  b=s6n0KDRPSWcWT/W27GNdk6ZBTUvzs4B1+JpbncESwsnhbwntk1DWAkIq
   2GgGB60xMLIIBg1oEDeskkkZL2CFKSdGd3vMWiJGG+TO1XHIHLu12Lw4l
   S2F3tSj3Zy+Joe2/enw9P7kzhN7J5NJhrEdwrDvmdlLmZn9qnmxlflj3R
   4=;
X-IronPort-AV: E=Sophos;i="5.77,338,1596499200"; 
   d="scan'208";a="57755502"
Subject: RE: [PATCH v9 4/8] docs/specs: add missing definitions to
 libxc-migration-stream
Thread-Topic: [PATCH v9 4/8] docs/specs: add missing definitions to libxc-migration-stream
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO email-inbound-relay-2a-53356bf6.us-west-2.amazon.com) ([10.43.8.2])
  by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP; 05 Oct 2020 09:14:07 +0000
Received: from EX13D32EUC002.ant.amazon.com (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
	by email-inbound-relay-2a-53356bf6.us-west-2.amazon.com (Postfix) with ESMTPS id B4BCBA0460;
	Mon,  5 Oct 2020 09:14:05 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 5 Oct 2020 09:14:04 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 5 Oct 2020 09:14:03 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Thread-Index: AQHMaZuCwoR0ssB24MBKMpsJEDSJHwKSrMFxAhJ0Esapd/NkMA==
Date: Mon, 5 Oct 2020 09:14:03 +0000
Message-ID: <f9c14acb769f433c95b46f9837ca8205@EX13D32EUC003.ant.amazon.com>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-5-paul@xen.org>
 <fda9ae6f-e55f-2e62-44a9-acf4e6e2d09e@citrix.com>
In-Reply-To: <fda9ae6f-e55f-2e62-44a9-acf4e6e2d09e@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.78]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 02 October 2020 23:42
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Jan Beulich <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: RE: [EXTERNAL] [PATCH v9 4/8] docs/specs: add missing definitions to libxc-migration-stream
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> On 24/09/2020 14:10, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > The STATIC_DATA_END, X86_CPUID_POLICY and X86_MSR_POLICY record types have
> > sections explaining what they are but their values are not defined. Indeed
> > their values are defined as "Reserved for future mandatory records."
> >
> > Also, the spec revision is adjusted to match the migration stream version
> > and an END record is added to the description of a 'typical save record for
> > and x86 HVM guest.'
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > Fixes: 6f71b5b1506 ("docs/migration Specify migration v3 and STATIC_DATA_END")
> > Fixes: ddd273d8863 ("docs/migration: Specify X86_{CPUID,MSR}_POLICY records")
> 
> Oops sorry.  I swear I had these at one point.  I can only presume it
> got swallowed by a rebase at some point.
> 

Can I take that as an R-b?

  Paul

> ~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:16:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:16:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2908.8306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMbk-0005ab-QS; Mon, 05 Oct 2020 09:16:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2908.8306; Mon, 05 Oct 2020 09:16:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMbk-0005aU-N2; Mon, 05 Oct 2020 09:16:36 +0000
Received: by outflank-mailman (input) for mailman id 2908;
 Mon, 05 Oct 2020 09:16:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f4wr=DM=amazon.co.uk=prvs=540ed4173=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kPMbi-0005aP-OH
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:16:34 +0000
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id d6939e18-bf19-4f1a-8b94-39cf447491e9;
 Mon, 05 Oct 2020 09:16:34 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-f14f4a47.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 05 Oct 2020 09:16:31 +0000
Received: from EX13D32EUC003.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2a-f14f4a47.us-west-2.amazon.com (Postfix) with ESMTPS
 id 4A266A22EA; Mon,  5 Oct 2020 09:16:30 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 5 Oct 2020 09:16:29 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 5 Oct 2020 09:16:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=f4wr=DM=amazon.co.uk=prvs=540ed4173=pdurrant@srs-us1.protection.inumbo.net>)
	id 1kPMbi-0005aP-OH
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:16:34 +0000
X-Inumbo-ID: d6939e18-bf19-4f1a-8b94-39cf447491e9
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id d6939e18-bf19-4f1a-8b94-39cf447491e9;
	Mon, 05 Oct 2020 09:16:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1601889394; x=1633425394;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=biWVEUxOxoT7FbQmoUuhVhRixnq6eVg2MHDB9GOycSU=;
  b=SdxzF3uepKUlknvCbLa6H8UdNPCpqvmqgPCDYyygALIFL7pgoIqSMCUN
   hDLTH6FuIfaqinQSS3PLHrDk+QW07TEDTH+3mRciCYD5oiNx2AIV5+Rxy
   JGu9VHOS0PDD7/V7nUN1ku4Ow0/dq1Ec812zgJeZbDhLrNetSvUBtJ1lq
   4=;
X-IronPort-AV: E=Sophos;i="5.77,338,1596499200"; 
   d="scan'208";a="81546000"
Subject: RE: [PATCH v9 3/8] tools/misc: add xen-domctx to present domain context
Thread-Topic: [PATCH v9 3/8] tools/misc: add xen-domctx to present domain context
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO email-inbound-relay-2a-f14f4a47.us-west-2.amazon.com) ([10.47.23.38])
  by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP; 05 Oct 2020 09:16:31 +0000
Received: from EX13D32EUC003.ant.amazon.com (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
	by email-inbound-relay-2a-f14f4a47.us-west-2.amazon.com (Postfix) with ESMTPS id 4A266A22EA;
	Mon,  5 Oct 2020 09:16:30 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 5 Oct 2020 09:16:29 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 5 Oct 2020 09:16:28 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Thread-Index: AQHMaZuCwoR0ssB24MBKMpsJEDSJHwIbeBJ6AX8UZfCpgEhR8A==
Date: Mon, 5 Oct 2020 09:16:28 +0000
Message-ID: <54c3b27f69314aeeaf9e5e17d1406f06@EX13D32EUC003.ant.amazon.com>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-4-paul@xen.org>
 <241eda02-e7ab-a44c-8f1c-38eb85c2f8dc@citrix.com>
In-Reply-To: <241eda02-e7ab-a44c-8f1c-38eb85c2f8dc@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.78]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 02 October 2020 23:40
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu
> <wl@xen.org>
> Subject: RE: [EXTERNAL] [PATCH v9 3/8] tools/misc: add xen-domctx to present domain context
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> On 24/09/2020 14:10, Paul Durrant wrote:
> > This tool is analogous to 'xen-hvmctx' which presents HVM context.
> > Subsequent patches will add 'dump' functions when new records are
> > introduced.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> > ---
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: Wei Liu <wl@xen.org>
> >
> > NOTE: Ian requested ack from Andrew
> 
> I mean - its a fairly throwaway piece of misc userspace, so ack.
> 
> However, it is largely superseded by the changes you need to make to
> verify-stream-v2 so you might want to bear that in mind.
> 
> Also, I wonder if it is wise in general that we're throwing so many misc
> debugging tools into sbin.
> 

The intention is to eventually replace hvm context so we'll need a parallel tool to replace xen-hvmctx, unless we want to retire it.

> > +#include <inttypes.h>
> > +#include <stdio.h>
> > +#include <stdlib.h>
> > +#include <string.h>
> > +#include <errno.h>
> > +
> > +#include <xenctrl.h>
> > +#include <xen/xen.h>
> > +#include <xen/domctl.h>
> > +#include <xen/save.h>
> > +
> > +static void *buf = NULL;
> > +static size_t len, off;
> > +
> > +#define GET_PTR(_x)                                                     \
> > +    do {                                                                \
> > +        if ( len - off < sizeof(*(_x)) )                                \
> > +        {                                                               \
> > +            fprintf(stderr,                                             \
> > +                    "error: need another %lu bytes, only %lu available\n", \
> 
> %zu is the correct formatter for size_t.
> 

True, missed that.

> > +                    sizeof(*(_x)), len - off);                          \
> > +            exit(1);                                                    \
> 
> Your error handling will be far more simple by using err() instead of
> opencoding it.
> 

Ok.

  Paul

> ~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2915.8318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMdT-0005mL-Dk; Mon, 05 Oct 2020 09:18:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2915.8318; Mon, 05 Oct 2020 09:18:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMdT-0005mE-9h; Mon, 05 Oct 2020 09:18:23 +0000
Received: by outflank-mailman (input) for mailman id 2915;
 Mon, 05 Oct 2020 09:18:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f4wr=DM=amazon.co.uk=prvs=540ed4173=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kPMdS-0005m7-H0
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:18:22 +0000
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d27969d-b2d2-4b1d-8b3f-2ad6c9b0edeb;
 Mon, 05 Oct 2020 09:18:21 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2c-168cbb73.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 05 Oct 2020 09:18:16 +0000
Received: from EX13D32EUC003.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2c-168cbb73.us-west-2.amazon.com (Postfix) with ESMTPS
 id 0113FA05BB; Mon,  5 Oct 2020 09:18:16 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 5 Oct 2020 09:18:14 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 5 Oct 2020 09:18:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=f4wr=DM=amazon.co.uk=prvs=540ed4173=pdurrant@srs-us1.protection.inumbo.net>)
	id 1kPMdS-0005m7-H0
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:18:22 +0000
X-Inumbo-ID: 6d27969d-b2d2-4b1d-8b3f-2ad6c9b0edeb
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6d27969d-b2d2-4b1d-8b3f-2ad6c9b0edeb;
	Mon, 05 Oct 2020 09:18:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1601889501; x=1633425501;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=24K6ASBYbBwXMp617UE/zbq+UrTDR2pL9xZfJNs/tDU=;
  b=CMjCej+QNMN6Y9I4NqiGgqMvSFWZXKPaiK6D4m4tiUkPBeRyhuLfOLq/
   I21jcDwYhBMHW650dn11WUjYfDCzEMw/qZUIqm3aotgjBGSWeUrnlsvN9
   R0l/HeTpZMdbSRShY65/DTBNOb8VPHw3rP/3MPGryZnzF31e8DoigghxN
   U=;
X-IronPort-AV: E=Sophos;i="5.77,338,1596499200"; 
   d="scan'208";a="73375476"
Subject: RE: [PATCH v9 2/8] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
Thread-Topic: [PATCH v9 2/8] xen/common/domctl: introduce XEN_DOMCTL_get/setdomaincontext
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO email-inbound-relay-2c-168cbb73.us-west-2.amazon.com) ([10.47.23.38])
  by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP; 05 Oct 2020 09:18:16 +0000
Received: from EX13D32EUC003.ant.amazon.com (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
	by email-inbound-relay-2c-168cbb73.us-west-2.amazon.com (Postfix) with ESMTPS id 0113FA05BB;
	Mon,  5 Oct 2020 09:18:16 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 5 Oct 2020 09:18:14 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 5 Oct 2020 09:18:14 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Julien Grall <julien@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>
Thread-Index: AQHMaZuCwoR0ssB24MBKMpsJEDSJHwEKq81GAaD/e3mph7/jwA==
Date: Mon, 5 Oct 2020 09:18:14 +0000
Message-ID: <cc549f2902da4ca9b02247f21dfb103c@EX13D32EUC003.ant.amazon.com>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-3-paul@xen.org>
 <783f8b1b-f11f-d8ff-3643-d35f17c6c363@citrix.com>
In-Reply-To: <783f8b1b-f11f-d8ff-3643-d35f17c6c363@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.78]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 02 October 2020 22:58
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Julien Grall <julien@xen.org>; Daniel De Graaf
> <dgdegra@tycho.nsa.gov>; Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>; George Dunlap
> <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: RE: [EXTERNAL] [PATCH v9 2/8] xen/common/domctl: introduce XEN_DOMCTL_get/setdomaincontext
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> On 24/09/2020 14:10, Paul Durrant wrote:
> > diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> > index 791f0a2592..743105181f 100644
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -1130,6 +1130,43 @@ struct xen_domctl_vuart_op {
> >                                   */
> >  };
> >
> > +/*
> > + * XEN_DOMCTL_getdomaincontext
> > + * ---------------------------
> > + *
> > + * buffer (IN):   The buffer into which the context data should be
> > + *                copied, or NULL to query the buffer size that should
> > + *                be allocated.
> > + * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
> > + *                zero, and the value passed out will be the size of the
> > + *                buffer to allocate.
> > + *                If 'buffer' is non-NULL then the value passed in must
> > + *                be the size of the buffer into which data may be copied.
> > + *                The value passed out will be the size of data written.
> > + */
> > +struct xen_domctl_getdomaincontext {
> > +    uint32_t size;
> 
> This series is full of mismatched 32/64bit sizes, with several
> truncation bugs in the previous patch.
> 
> Just use a 64bit size here.  Life is too short to go searching for all
> the other truncation bug when this stream tips over 4G, and its not like
> there is a shortage of space in this structure.
> 

Ok.

> > +    uint32_t pad;
> > +    XEN_GUEST_HANDLE_64(void) buffer;
> > +};
> > +
> > +/* XEN_DOMCTL_setdomaincontext
> > + * ---------------------------
> > + *
> > + * buffer (IN):   The buffer from which the context data should be
> > + *                copied.
> > + * size (IN):     The size of the buffer from which data may be copied.
> > + *                This data must include DOMAIN_SAVE_CODE_HEADER at the
> > + *                start and terminate with a DOMAIN_SAVE_CODE_END record.
> > + *                Any data beyond the DOMAIN_SAVE_CODE_END record will be
> > + *                ignored.
> > + */
> > +struct xen_domctl_setdomaincontext {
> > +    uint32_t size;
> > +    uint32_t pad;
> > +    XEN_GUEST_HANDLE_64(const_void) buffer;
> > +};
> > +
> >  struct xen_domctl {
> >     uint32_t cmd;
> >  #define XEN_DOMCTL_createdomain                   1
> > @@ -1214,6 +1251,8 @@ struct xen_domctl {
> >  #define XEN_DOMCTL_vuart_op                        81
> >  #define XEN_DOMCTL_get_cpu_policy                  82
> >  #define XEN_DOMCTL_set_cpu_policy                  83
> > +#define XEN_DOMCTL_getdomaincontext               84
> > +#define XEN_DOMCTL_setdomaincontext               85
> 
> So, we've currently got:
> 
> #define XEN_DOMCTL_setvcpucontext                12
> #define XEN_DOMCTL_getvcpucontext                13
> #define XEN_DOMCTL_gethvmcontext                 33
> #define XEN_DOMCTL_sethvmcontext                 34
> #define XEN_DOMCTL_set_ext_vcpucontext           42
> #define XEN_DOMCTL_get_ext_vcpucontext           43
> #define XEN_DOMCTL_gethvmcontext_partial         55
> #define XEN_DOMCTL_setvcpuextstate               62
> #define XEN_DOMCTL_getvcpuextstate               63
> 
> which are doing alarmingly related things for vcpus.  (As an amusing
> exercise to the reader, figure out which are PV specific and which are
> HVM specific.  Hint: they're not disjoint sets.)
> 

Yes, hence the desire to come up with something common.

> 
> I know breaking with tradition is sacrilege, but at the very minimum,
> can we get some underscores in that name so you can at least read the
> words which make it up more easily.
> 

Sure.

  Paul

> ~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:31:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:31:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2920.8330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMqT-0007TK-LT; Mon, 05 Oct 2020 09:31:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2920.8330; Mon, 05 Oct 2020 09:31:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPMqT-0007TD-Ho; Mon, 05 Oct 2020 09:31:49 +0000
Received: by outflank-mailman (input) for mailman id 2920;
 Mon, 05 Oct 2020 09:31:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPMqS-0007T8-Px
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:31:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d19bb80-f14e-48bb-9998-82c10beeb154;
 Mon, 05 Oct 2020 09:31:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPMqQ-0000jw-4U; Mon, 05 Oct 2020 09:31:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPMqP-00013G-Ps; Mon, 05 Oct 2020 09:31:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPMqP-00020P-PM; Mon, 05 Oct 2020 09:31:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kPMqS-0007T8-Px
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:31:48 +0000
X-Inumbo-ID: 5d19bb80-f14e-48bb-9998-82c10beeb154
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5d19bb80-f14e-48bb-9998-82c10beeb154;
	Mon, 05 Oct 2020 09:31:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TD2CyFjKbIUthdP/FAC/U2XFG75jWBQhovaOjAG4STA=; b=iMBXOWxppVlZu4D0JDj0Jcstz3
	DEKi6/Muwii/+hJHwjYCoLt6oqNwMu1faqkopmABCCGbABu2VdMf0WUa7YCZw7KomgoZeY1OTr3y5
	5EEI02+gggDGoobpqdZtJFbMgh0dCudZIDtg0MUBI4KFqUxRY0N2l2BcLfIfnUyAOMxw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPMqQ-0000jw-4U; Mon, 05 Oct 2020 09:31:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPMqP-00013G-Ps; Mon, 05 Oct 2020 09:31:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPMqP-00020P-PM; Mon, 05 Oct 2020 09:31:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155402-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 155402: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=6888017392ac25b5e588554030642affac25a95d
X-Osstest-Versions-That:
    xen=0446e3db13671032b05d19f6117d902f5c5c76fa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 05 Oct 2020 09:31:45 +0000

flight 155402 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155402/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    17 guest-localmigrate/x10       fail  like 154601
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  6888017392ac25b5e588554030642affac25a95d
baseline version:
 xen                  0446e3db13671032b05d19f6117d902f5c5c76fa

Last test of basis   154601  2020-09-22 02:37:00 Z   13 days
Failing since        154622  2020-09-22 16:36:57 Z   12 days   10 attempts
Testing same since   155402  2020-10-03 21:55:13 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0446e3db13..6888017392  6888017392ac25b5e588554030642affac25a95d -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:49:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2946.8424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7L-0000bB-Im; Mon, 05 Oct 2020 09:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2946.8424; Mon, 05 Oct 2020 09:49:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7L-0000b4-FM; Mon, 05 Oct 2020 09:49:15 +0000
Received: by outflank-mailman (input) for mailman id 2946;
 Mon, 05 Oct 2020 09:49:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kPN7J-0000a7-FF
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 455f9f3a-be08-41d9-971d-d689238c76cc;
 Mon, 05 Oct 2020 09:49:11 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7E-00018P-J6; Mon, 05 Oct 2020 09:49:08 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7E-0007gW-5C; Mon, 05 Oct 2020 09:49:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kPN7J-0000a7-FF
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:13 +0000
X-Inumbo-ID: 455f9f3a-be08-41d9-971d-d689238c76cc
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 455f9f3a-be08-41d9-971d-d689238c76cc;
	Mon, 05 Oct 2020 09:49:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=CY7G9pKcwkwHIqeom2cVCCCQbNB2Y8JOhpahS3c7oEw=; b=WEJhJ+M1fiy15yd9f7A4qUUk0q
	YGDPj8j3pEZ2vAz4QEjmZGRyF1L7S8iWjhUiEEIu+vdhG9ye7pUu0vh3nKBlmvkpc/QVDcmOYd7h5
	hDbwzIIBO+KXBXP9ihFuZIAaSjZ9I+pG1pNGhMrw8FaRzyvcdmWf4INHQGO2pyppF2f0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kPN7E-00018P-J6; Mon, 05 Oct 2020 09:49:08 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kPN7E-0007gW-5C; Mon, 05 Oct 2020 09:49:08 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/5] iommu page-table memory pool
Date: Mon,  5 Oct 2020 10:49:00 +0100
Message-Id: <20201005094905.2929-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This series introduces a pool of memory analogous to the shadow/HAP pool,
accounted to the guest domain, from which IOMMU page-tables are allocated.
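
[Editorial aside: the accounting model can be sketched as follows. This is not the series' actual code; the names pt_pool, pt_pool_set_allocation and pt_pool_alloc are hypothetical, and real pool management must also handle concurrency and page ownership:

```c
/* Illustrative sketch of a per-domain page pool in the spirit of the
 * shadow/HAP pool: the toolstack sizes the pool up front, so IOMMU
 * page-table memory is accounted to the guest rather than drawn
 * unbounded from the free heap. */
#include <assert.h>
#include <stddef.h>

struct pt_pool {
    size_t total;   /* pages the toolstack has accounted to the domain */
    size_t in_use;  /* pages currently backing IOMMU page-tables       */
};

static void pt_pool_set_allocation(struct pt_pool *p, size_t pages)
{
    p->total = pages;
}

/* Allocation fails once the accounted quota is exhausted, bounding the
 * domain's page-table footprint. */
static int pt_pool_alloc(struct pt_pool *p)
{
    if (p->in_use >= p->total)
        return -1;      /* would exceed the domain's accounting */
    p->in_use++;
    return 0;
}

static void pt_pool_free(struct pt_pool *p)
{
    if (p->in_use)
        p->in_use--;
}
```
]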

Paul Durrant (5):
  libxl: remove separate calculation of IOMMU memory overhead
  iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
  libxl / iommu / domctl: introduce XEN_DOMCTL_IOMMU_SET_ALLOCATION...
  iommu: set 'hap_pt_share' and 'need_sync' flags earlier in
    iommu_domain_init()
  x86 / iommu: create a dedicated pool of page-table pages

 tools/flask/policy/modules/dom0.te    |   2 +
 tools/libs/ctrl/include/xenctrl.h     |   5 +
 tools/libs/ctrl/xc_domain.c           |  16 ++++
 tools/libs/light/libxl_create.c       |  22 +----
 tools/libs/light/libxl_x86.c          |  10 ++
 xen/arch/x86/domain.c                 |   4 +-
 xen/drivers/passthrough/iommu.c       |  63 +++++++++---
 xen/drivers/passthrough/x86/iommu.c   | 132 ++++++++++++++++++++++----
 xen/include/asm-arm/iommu.h           |   6 ++
 xen/include/asm-x86/iommu.h           |   7 +-
 xen/include/public/domctl.h           |  22 +++++
 xen/include/xsm/dummy.h               |  17 +++-
 xen/include/xsm/xsm.h                 |  26 +++--
 xen/xsm/dummy.c                       |   6 +-
 xen/xsm/flask/hooks.c                 |  26 +++--
 xen/xsm/flask/policy/access_vectors   |   7 ++
 xen/xsm/flask/policy/security_classes |   1 +
 17 files changed, 300 insertions(+), 72 deletions(-)
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:49:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2945.8412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7K-0000aJ-95; Mon, 05 Oct 2020 09:49:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2945.8412; Mon, 05 Oct 2020 09:49:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7K-0000aC-6A; Mon, 05 Oct 2020 09:49:14 +0000
Received: by outflank-mailman (input) for mailman id 2945;
 Mon, 05 Oct 2020 09:49:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kPN7I-0000a2-9z
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5ee281d-a4a4-454e-8da2-b8c551704b45;
 Mon, 05 Oct 2020 09:49:11 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7F-00018R-FE; Mon, 05 Oct 2020 09:49:09 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7F-0007gW-5T; Mon, 05 Oct 2020 09:49:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kPN7I-0000a2-9z
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:12 +0000
X-Inumbo-ID: d5ee281d-a4a4-454e-8da2-b8c551704b45
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d5ee281d-a4a4-454e-8da2-b8c551704b45;
	Mon, 05 Oct 2020 09:49:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=JndE2IT1FR1VzwIIZT6xeGdSTmev8bp7YBG/Bt30IPQ=; b=LHvT0/yKr0tJ+uY0azfHrBacfx
	T6bE7maz3Yo3mUsnUWLpLBpdZo8zOS9ZHmdAHl7kQsCRCxn5DAr60IRnS339n+uMzBKrN6ejqkkpV
	4TTPt5FCqCuCR8jUHZ9c5gkaiwd0rQj4rNdkacPLNpoDo33IF77b2PeMVCChiWcHd71c=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kPN7F-00018R-FE; Mon, 05 Oct 2020 09:49:09 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kPN7F-0007gW-5T; Mon, 05 Oct 2020 09:49:09 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 1/5] libxl: remove separate calculation of IOMMU memory overhead
Date: Mon,  5 Oct 2020 10:49:01 +0100
Message-Id: <20201005094905.2929-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

In 'shared_pt' mode the IOMMU uses the EPT PTEs directly. In 'sync_pt'
mode these PTEs are instead replicated into a separate set of page tables
for the IOMMU to use. Hence the memory overhead in the latter mode is
essentially another copy of the P2M.

This patch removes the independent calculation done in
libxl__get_required_iommu_memory() and instead simply uses 'shadow_memkb'
as the IOMMU overhead, since that is the estimated size of the P2M.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_create.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 9a6e92b3a5..f07ba84850 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1001,21 +1001,6 @@ static bool ok_to_default_memkb_in_create(libxl__gc *gc)
      */
 }
 
-static unsigned long libxl__get_required_iommu_memory(unsigned long maxmem_kb)
-{
-    unsigned long iommu_pages = 0, mem_pages = maxmem_kb / 4;
-    unsigned int level;
-
-    /* Assume a 4 level page table with 512 entries per level */
-    for (level = 0; level < 4; level++)
-    {
-        mem_pages = DIV_ROUNDUP(mem_pages, 512);
-        iommu_pages += mem_pages;
-    }
-
-    return iommu_pages * 4;
-}
-
 int libxl__domain_config_setdefault(libxl__gc *gc,
                                     libxl_domain_config *d_config,
                                     uint32_t domid /* for logging, only */)
@@ -1168,12 +1153,15 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
             libxl_get_required_shadow_memory(d_config->b_info.max_memkb,
                                              d_config->b_info.max_vcpus);
 
-    /* No IOMMU reservation is needed if passthrough mode is not 'sync_pt' */
+    /* No IOMMU reservation is needed if passthrough mode is not 'sync_pt';
+     * otherwise we need a reservation sufficient to accommodate a copy of
+     * the P2M.
+     */
     if (d_config->b_info.iommu_memkb == LIBXL_MEMKB_DEFAULT
         && ok_to_default_memkb_in_create(gc))
         d_config->b_info.iommu_memkb =
             (d_config->c_info.passthrough == LIBXL_PASSTHROUGH_SYNC_PT)
-            ? libxl__get_required_iommu_memory(d_config->b_info.max_memkb)
+            ? d_config->b_info.shadow_memkb
             : 0;
 
     ret = libxl__domain_build_info_setdefault(gc, &d_config->b_info);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:49:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2948.8448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7P-0000ep-66; Mon, 05 Oct 2020 09:49:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2948.8448; Mon, 05 Oct 2020 09:49:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7P-0000ef-1l; Mon, 05 Oct 2020 09:49:19 +0000
Received: by outflank-mailman (input) for mailman id 2948;
 Mon, 05 Oct 2020 09:49:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kPN7O-0000a7-94
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b0d67c1-f441-4482-81d8-0a448f5a7faf;
 Mon, 05 Oct 2020 09:49:13 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7G-00018X-Ub; Mon, 05 Oct 2020 09:49:10 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7G-0007gW-Lf; Mon, 05 Oct 2020 09:49:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kPN7O-0000a7-94
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:18 +0000
X-Inumbo-ID: 3b0d67c1-f441-4482-81d8-0a448f5a7faf
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 3b0d67c1-f441-4482-81d8-0a448f5a7faf;
	Mon, 05 Oct 2020 09:49:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=GCHQt2qpyFKezc4RlS7bJaSqJ0rTh54jgHtmbo053rM=; b=E2g7sHQL6sF7iRLQ5d3rqFKtlM
	u2WzzcXp5l6efaJe6jQBRqspjBtYvVfC3NxlgOQmfm+6tv+rrHJaZ8fB9/V8YUvKoOs52qZogLpBZ
	UCP2oh7QgEFrBdkRoE89KMpve1YNUYAvS1Z5CDgBkmPBkLJWJjaLmV/YRVi/Zo0yULXs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kPN7G-00018X-Ub; Mon, 05 Oct 2020 09:49:10 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kPN7G-0007gW-Lf; Mon, 05 Oct 2020 09:49:10 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
Date: Mon,  5 Oct 2020 10:49:02 +0100
Message-Id: <20201005094905.2929-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will introduce code to apply a cap to memory used for
IOMMU page-tables for a domain. This patch simply introduces the boilerplate
for the new domctl.

New sub-operations of the domctl will be implemented in iommu_ctl() and
hence, whilst iommu_ctl() initially only returns -EOPNOTSUPP, the code in
iommu_do_domctl() is already modified to set up a hypercall continuation
should iommu_ctl() return -ERESTART.

NOTE: The op code is passed into the newly introduced xsm_iommu_ctl()
      function, but for the moment only a single default 'ctl' perm is
      defined in the new 'iommu' class.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
 tools/flask/policy/modules/dom0.te    |  2 ++
 xen/drivers/passthrough/iommu.c       | 39 ++++++++++++++++++++++++---
 xen/include/public/domctl.h           | 14 ++++++++++
 xen/include/xsm/dummy.h               | 17 +++++++++---
 xen/include/xsm/xsm.h                 | 26 ++++++++++++------
 xen/xsm/dummy.c                       |  6 +++--
 xen/xsm/flask/hooks.c                 | 26 +++++++++++++-----
 xen/xsm/flask/policy/access_vectors   |  7 +++++
 xen/xsm/flask/policy/security_classes |  1 +
 9 files changed, 114 insertions(+), 24 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 0a63ce15b6..ab5eb682c7 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -69,6 +69,8 @@ auditallow dom0_t security_t:security { load_policy setenforce setbool };
 # Allow dom0 to report platform configuration changes back to the hypervisor
 allow dom0_t xen_t:resource setup;
 
+allow dom0_t xen_t:iommu ctl;
+
 admin_device(dom0_t, device_t)
 admin_device(dom0_t, irq_t)
 admin_device(dom0_t, ioport_t)
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 87f9a857bb..bef0405984 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -502,6 +502,27 @@ void iommu_resume()
         iommu_get_ops()->resume();
 }
 
+static int iommu_ctl(
+    struct xen_domctl *domctl, struct domain *d,
+    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    struct xen_domctl_iommu_ctl *ctl = &domctl->u.iommu_ctl;
+    int rc;
+
+    rc = xsm_iommu_ctl(XSM_HOOK, d, ctl->op);
+    if ( rc )
+        return rc;
+
+    switch ( ctl->op )
+    {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    return rc;
+}
+
 int iommu_do_domctl(
     struct xen_domctl *domctl, struct domain *d,
     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
@@ -511,14 +532,26 @@ int iommu_do_domctl(
     if ( !is_iommu_enabled(d) )
         return -EOPNOTSUPP;
 
+    switch ( domctl->cmd )
+    {
+    case XEN_DOMCTL_iommu_ctl:
+        ret = iommu_ctl(domctl, d, u_domctl);
+        if ( ret == -ERESTART )
+            ret = hypercall_create_continuation(__HYPERVISOR_domctl,
+                                                "h", u_domctl);
+        break;
+
+    default:
 #ifdef CONFIG_HAS_PCI
-    ret = iommu_do_pci_domctl(domctl, d, u_domctl);
+        ret = iommu_do_pci_domctl(domctl, d, u_domctl);
 #endif
 
 #ifdef CONFIG_HAS_DEVICE_TREE
-    if ( ret == -ENODEV )
-        ret = iommu_do_dt_domctl(domctl, d, u_domctl);
+        if ( ret == -ENODEV )
+            ret = iommu_do_dt_domctl(domctl, d, u_domctl);
 #endif
+        break;
+    }
 
     return ret;
 }
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 791f0a2592..75e855625a 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1130,6 +1130,18 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/*
+ * XEN_DOMCTL_iommu_ctl
+ *
+ * Control of VM IOMMU settings
+ */
+
+#define XEN_DOMCTL_IOMMU_INVALID 0
+
+struct xen_domctl_iommu_ctl {
+    uint32_t op; /* XEN_DOMCTL_IOMMU_* */
+};
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1214,6 +1226,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_iommu_ctl                     84
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1274,6 +1287,7 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+        struct xen_domctl_iommu_ctl         iommu_ctl;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 2368acebed..9825533c75 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -348,7 +348,15 @@ static XSM_INLINE int xsm_get_vnumainfo(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+static XSM_INLINE int xsm_iommu_ctl(XSM_DEFAULT_ARG struct domain *d,
+                                    unsigned int op)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
+#if defined(CONFIG_HAS_PCI)
 static XSM_INLINE int xsm_get_device_group(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
@@ -367,9 +375,9 @@ static XSM_INLINE int xsm_deassign_device(XSM_DEFAULT_ARG struct domain *d, uint
     return xsm_default_action(action, current->domain, d);
 }
 
-#endif /* HAS_PASSTHROUGH && HAS_PCI */
+#endif /* HAS_PCI */
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
 static XSM_INLINE int xsm_assign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
                                           const char *dtpath)
 {
@@ -384,7 +392,8 @@ static XSM_INLINE int xsm_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
     return xsm_default_action(action, current->domain, d);
 }
 
-#endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
+#endif /* HAS_DEVICE_TREE */
+#endif /* HAS_PASSTHROUGH */
 
 static XSM_INLINE int xsm_resource_plug_core(XSM_DEFAULT_VOID)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b21c3783d3..1a96d3502c 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -106,17 +106,19 @@ struct xsm_operations {
     int (*irq_permission) (struct domain *d, int pirq, uint8_t allow);
     int (*iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
     int (*iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
     int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+    int (*iommu_ctl) (struct domain *d, unsigned int op);
+#if defined(CONFIG_HAS_PCI)
     int (*get_device_group) (uint32_t machine_bdf);
     int (*assign_device) (struct domain *d, uint32_t machine_bdf);
     int (*deassign_device) (struct domain *d, uint32_t machine_bdf);
 #endif
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
     int (*assign_dtdevice) (struct domain *d, const char *dtpath);
     int (*deassign_dtdevice) (struct domain *d, const char *dtpath);
+#endif
 #endif
 
     int (*resource_plug_core) (void);
@@ -466,7 +468,14 @@ static inline int xsm_pci_config_permission (xsm_default_t def, struct domain *d
     return xsm_ops->pci_config_permission(d, machine_bdf, start, end, access);
 }
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+static inline int xsm_iommu_ctl(xsm_default_t def, struct domain *d,
+                                unsigned int op)
+{
+    return xsm_ops->iommu_ctl(d, op);
+}
+
+#if defined(CONFIG_HAS_PCI)
 static inline int xsm_get_device_group(xsm_default_t def, uint32_t machine_bdf)
 {
     return xsm_ops->get_device_group(machine_bdf);
@@ -481,9 +490,9 @@ static inline int xsm_deassign_device(xsm_default_t def, struct domain *d, uint3
 {
     return xsm_ops->deassign_device(d, machine_bdf);
 }
-#endif /* HAS_PASSTHROUGH && HAS_PCI) */
+#endif /* HAS_PCI */
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
 static inline int xsm_assign_dtdevice(xsm_default_t def, struct domain *d,
                                       const char *dtpath)
 {
@@ -496,7 +505,8 @@ static inline int xsm_deassign_dtdevice(xsm_default_t def, struct domain *d,
     return xsm_ops->deassign_dtdevice(d, dtpath);
 }
 
-#endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
+#endif /* HAS_DEVICE_TREE */
+#endif /* HAS_PASSTHROUGH */
 
 static inline int xsm_resource_plug_pci (xsm_default_t def, uint32_t machine_bdf)
 {
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index d4cce68089..a924f1dfd1 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -84,14 +84,16 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, get_vnumainfo);
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+    set_to_dummy_if_null(ops, iommu_ctl);
+#if defined(CONFIG_HAS_PCI)
     set_to_dummy_if_null(ops, get_device_group);
     set_to_dummy_if_null(ops, assign_device);
     set_to_dummy_if_null(ops, deassign_device);
 #endif
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
     set_to_dummy_if_null(ops, assign_dtdevice);
     set_to_dummy_if_null(ops, deassign_dtdevice);
+#endif
 #endif
 
     set_to_dummy_if_null(ops, resource_plug_core);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index b3addbf701..a2858fb0c0 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -624,6 +624,7 @@ static int flask_domctl(struct domain *d, int cmd)
      * These have individual XSM hooks
      * (drivers/passthrough/{pci,device_tree.c)
      */
+    case XEN_DOMCTL_iommu_ctl:
     case XEN_DOMCTL_get_device_group:
     case XEN_DOMCTL_test_assign_device:
     case XEN_DOMCTL_assign_device:
@@ -1278,7 +1279,15 @@ static int flask_mem_sharing(struct domain *d)
 }
 #endif
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+
+static int flask_iommu_ctl(struct domain *d, unsigned int op)
+{
+    /* All current ops are subject to default 'ctl' perm */
+    return current_has_perm(d, SECCLASS_IOMMU, IOMMU__CTL);
+}
+
+#if defined(CONFIG_HAS_PCI)
 static int flask_get_device_group(uint32_t machine_bdf)
 {
     u32 rsid;
@@ -1348,9 +1357,9 @@ static int flask_deassign_device(struct domain *d, uint32_t machine_bdf)
 
     return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__REMOVE_DEVICE, NULL);
 }
-#endif /* HAS_PASSTHROUGH && HAS_PCI */
+#endif /* HAS_PCI */
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
 static int flask_test_assign_dtdevice(const char *dtpath)
 {
     u32 rsid;
@@ -1410,7 +1419,8 @@ static int flask_deassign_dtdevice(struct domain *d, const char *dtpath)
     return avc_current_has_perm(rsid, SECCLASS_RESOURCE, RESOURCE__REMOVE_DEVICE,
                                 NULL);
 }
-#endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
+#endif /* HAS_DEVICE_TREE */
+#endif /* HAS_PASSTHROUGH */
 
 static int flask_platform_op(uint32_t op)
 {
@@ -1855,15 +1865,17 @@ static struct xsm_operations flask_ops = {
     .remove_from_physmap = flask_remove_from_physmap,
     .map_gmfn_foreign = flask_map_gmfn_foreign,
 
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
+#if defined(CONFIG_HAS_PASSTHROUGH)
+    .iommu_ctl = flask_iommu_ctl,
+#if defined(CONFIG_HAS_PCI)
     .get_device_group = flask_get_device_group,
     .assign_device = flask_assign_device,
     .deassign_device = flask_deassign_device,
 #endif
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
+#if defined(CONFIG_HAS_DEVICE_TREE)
     .assign_dtdevice = flask_assign_dtdevice,
     .deassign_dtdevice = flask_deassign_dtdevice,
+#endif
 #endif
 
     .platform_op = flask_platform_op,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index fde5162c7e..c017a38666 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -542,3 +542,10 @@ class argo
     # Domain sending a message to another domain.
     send
 }
+
+# Class iommu describes operations on the IOMMU resources of a domain
+class iommu
+{
+    # Miscellaneous control
+    ctl
+}
diff --git a/xen/xsm/flask/policy/security_classes b/xen/xsm/flask/policy/security_classes
index 50ecbabc5c..882968e79c 100644
--- a/xen/xsm/flask/policy/security_classes
+++ b/xen/xsm/flask/policy/security_classes
@@ -20,5 +20,6 @@ class grant
 class security
 class version
 class argo
+class iommu
 
 # FLASK
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:49:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2947.8436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7N-0000cr-Rq; Mon, 05 Oct 2020 09:49:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2947.8436; Mon, 05 Oct 2020 09:49:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7N-0000cj-O6; Mon, 05 Oct 2020 09:49:17 +0000
Received: by outflank-mailman (input) for mailman id 2947;
 Mon, 05 Oct 2020 09:49:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kPN7N-0000a2-4g
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc291b9a-82d7-464d-b8ff-200c3d85c9aa;
 Mon, 05 Oct 2020 09:49:13 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7J-00018j-D5; Mon, 05 Oct 2020 09:49:13 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7J-0007gW-5k; Mon, 05 Oct 2020 09:49:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kPN7N-0000a2-4g
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:17 +0000
X-Inumbo-ID: cc291b9a-82d7-464d-b8ff-200c3d85c9aa
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cc291b9a-82d7-464d-b8ff-200c3d85c9aa;
	Mon, 05 Oct 2020 09:49:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=0/TvnggoZgnvE3hOxtGaw8lHv49nlACtJLzfHJVFPDg=; b=daBGQ6wW7PKyicNrzxKeGW/4xr
	JsNo/tu6Lto8STSfVzRNs3wrlqCf6RgM9URkBR0sk+ytSsYT7z+u38waeP6sT0LGWMON6encXeLRU
	sTbJgN48uiUrZaprjjzECD/WIaQpuNxGHpPGeWuDoOTBF5Ag+iDqmhSfdz0HjcRaLMys=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kPN7J-00018j-D5; Mon, 05 Oct 2020 09:49:13 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kPN7J-0007gW-5k; Mon, 05 Oct 2020 09:49:13 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 4/5] iommu: set 'hap_pt_share' and 'need_sync' flags earlier in iommu_domain_init()
Date: Mon,  5 Oct 2020 10:49:04 +0100
Message-Id: <20201005094905.2929-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Set these flags prior to the calls to arch_iommu_domain_init() and the
implementation init() method. There is no reason to hide this information
from those functions, and the value of 'hap_pt_share' will be needed by a
modification to arch_iommu_domain_init() made in a subsequent patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
---
 xen/drivers/passthrough/iommu.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 642d5c8331..fd9705b3a9 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -174,15 +174,6 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
     hd->node = NUMA_NO_NODE;
 #endif
 
-    ret = arch_iommu_domain_init(d);
-    if ( ret )
-        return ret;
-
-    hd->platform_ops = iommu_get_ops();
-    ret = hd->platform_ops->init(d);
-    if ( ret || is_system_domain(d) )
-        return ret;
-
     /*
      * Use shared page tables for HAP and IOMMU if the global option
      * is enabled (from which we can infer the h/w is capable) and
@@ -202,7 +193,12 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
 
     ASSERT(!(hd->need_sync && hd->hap_pt_share));
 
-    return 0;
+    ret = arch_iommu_domain_init(d);
+    if ( ret )
+        return ret;
+
+    hd->platform_ops = iommu_get_ops();
+    return hd->platform_ops->init(d);
 }
 
 void __hwdom_init iommu_hwdom_init(struct domain *d)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:49:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:49:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2949.8460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7U-0000ld-JA; Mon, 05 Oct 2020 09:49:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2949.8460; Mon, 05 Oct 2020 09:49:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7U-0000lS-Fn; Mon, 05 Oct 2020 09:49:24 +0000
Received: by outflank-mailman (input) for mailman id 2949;
 Mon, 05 Oct 2020 09:49:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kPN7T-0000a7-9N
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66d5992c-4a93-4e63-a402-3dcd90abad74;
 Mon, 05 Oct 2020 09:49:14 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7I-00018e-Ki; Mon, 05 Oct 2020 09:49:12 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7I-0007gW-C8; Mon, 05 Oct 2020 09:49:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kPN7T-0000a7-9N
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:23 +0000
X-Inumbo-ID: 66d5992c-4a93-4e63-a402-3dcd90abad74
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 66d5992c-4a93-4e63-a402-3dcd90abad74;
	Mon, 05 Oct 2020 09:49:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/aaR7886EUtZE6QjtNx8lerjhqgir9MSpy1XgMZF4CQ=; b=DI9ymTP2D65+vmiOYBjbM//YKo
	UEJWIwtLaTCuNuH8oEgzyJs2GxIops+nzfjC+uJUuZMZXYdP6tlLXMss/APAlfHH5UukDZkx9Odyt
	W7aLmasRZAfakrHvBpaHu1lwvzZGq3x3qRe+XIPgerBnpJ1KZ5BJSRExOy7wiq6r4Hqw=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 3/5] libxl / iommu / domctl: introduce XEN_DOMCTL_IOMMU_SET_ALLOCATION...
Date: Mon,  5 Oct 2020 10:49:03 +0100
Message-Id: <20201005094905.2929-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... sub-operation of XEN_DOMCTL_iommu_ctl.

This patch adds a new sub-operation to the domctl. The code in iommu_ctl()
is extended to call a new arch-specific iommu_set_allocation() function,
passing the IOMMU page-table overhead (in 4k pages) supplied when libxl
issues the new domctl via the xc_iommu_set_allocation() helper function.

The helper function is only called in the x86 implementation of
libxl__arch_domain_create() when the calculated 'iommu_memkb' value is
non-zero. Hence the ARM implementation of iommu_set_allocation() simply
returns -EOPNOTSUPP.

NOTE: The implementation of the IOMMU page-table memory pool will be added in
      a subsequent patch and so the x86 implementation of
      iommu_set_allocation() currently does nothing other than return 0 (to
      indicate success) thereby ensuring that the new call in
      libxl__arch_domain_create() always succeeds.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
---
 tools/libs/ctrl/include/xenctrl.h   |  5 +++++
 tools/libs/ctrl/xc_domain.c         | 16 ++++++++++++++++
 tools/libs/light/libxl_x86.c        | 10 ++++++++++
 xen/drivers/passthrough/iommu.c     |  8 ++++++++
 xen/drivers/passthrough/x86/iommu.c |  5 +++++
 xen/include/asm-arm/iommu.h         |  6 ++++++
 xen/include/asm-x86/iommu.h         |  2 ++
 xen/include/public/domctl.h         |  8 ++++++++
 8 files changed, 60 insertions(+)

diff --git a/tools/libs/ctrl/include/xenctrl.h b/tools/libs/ctrl/include/xenctrl.h
index 3796425e1e..4d6c9d44bc 100644
--- a/tools/libs/ctrl/include/xenctrl.h
+++ b/tools/libs/ctrl/include/xenctrl.h
@@ -2650,6 +2650,11 @@ int xc_livepatch_replace(xc_interface *xch, char *name, uint32_t timeout, uint32
 int xc_domain_cacheflush(xc_interface *xch, uint32_t domid,
                          xen_pfn_t start_pfn, xen_pfn_t nr_pfns);
 
+/* IOMMU control operations */
+
+int xc_iommu_set_allocation(xc_interface *xch, uint32_t domid,
+                            unsigned int nr_pages);
+
 /* Compat shims */
 #include "xenctrl_compat.h"
 
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index e7cea4a17d..0b20a8f2ee 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -2185,6 +2185,22 @@ int xc_domain_soft_reset(xc_interface *xch,
     domctl.domain = domid;
     return do_domctl(xch, &domctl);
 }
+
+int xc_iommu_set_allocation(xc_interface *xch, uint32_t domid,
+                            unsigned int nr_pages)
+{
+    DECLARE_DOMCTL;
+
+    memset(&domctl, 0, sizeof(domctl));
+
+    domctl.cmd = XEN_DOMCTL_iommu_ctl;
+    domctl.domain = domid;
+    domctl.u.iommu_ctl.op = XEN_DOMCTL_IOMMU_SET_ALLOCATION;
+    domctl.u.iommu_ctl.u.set_allocation.nr_pages = nr_pages;
+
+    return do_domctl(xch, &domctl);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 6ec6c27c83..9631974dd6 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -520,6 +520,16 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
                           NULL, 0, &shadow, 0, NULL);
     }
 
+    if (d_config->b_info.iommu_memkb) {
+        unsigned int nr_pages = DIV_ROUNDUP(d_config->b_info.iommu_memkb, 4);
+
+        ret = xc_iommu_set_allocation(ctx->xch, domid, nr_pages);
+        if (ret) {
+            LOGED(ERROR, domid, "Failed to set IOMMU allocation");
+            goto out;
+        }
+    }
+
     if (d_config->c_info.type == LIBXL_DOMAIN_TYPE_PV &&
             libxl_defbool_val(d_config->b_info.u.pv.e820_host)) {
         ret = libxl__e820_alloc(gc, domid, d_config);
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index bef0405984..642d5c8331 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -515,6 +515,14 @@ static int iommu_ctl(
 
     switch ( ctl->op )
     {
+    case XEN_DOMCTL_IOMMU_SET_ALLOCATION:
+    {
+        struct xen_domctl_iommu_set_allocation *set_allocation =
+            &ctl->u.set_allocation;
+
+        rc = iommu_set_allocation(d, set_allocation->nr_pages);
+        break;
+    }
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f17b1820f4..b168073f10 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -134,6 +134,11 @@ void __hwdom_init arch_iommu_check_autotranslated_hwdom(struct domain *d)
         panic("PVH hardware domain iommu must be set in 'strict' mode\n");
 }
 
+int iommu_set_allocation(struct domain *d, unsigned nr_pages)
+{
+    return 0;
+}
+
 int arch_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
diff --git a/xen/include/asm-arm/iommu.h b/xen/include/asm-arm/iommu.h
index 937edc8373..2e4735bace 100644
--- a/xen/include/asm-arm/iommu.h
+++ b/xen/include/asm-arm/iommu.h
@@ -33,6 +33,12 @@ int __must_check arm_iommu_map_page(struct domain *d, dfn_t dfn, mfn_t mfn,
 int __must_check arm_iommu_unmap_page(struct domain *d, dfn_t dfn,
                                       unsigned int *flush_flags);
 
+static inline int iommu_set_allocation(struct domain *d,
+                                       unsigned int nr_pages)
+{
+    return -EOPNOTSUPP;
+}
+
 #endif /* __ARCH_ARM_IOMMU_H__ */
 
 /*
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index 970eb06ffa..d086f564af 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -138,6 +138,8 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
 int __must_check iommu_free_pgtables(struct domain *d);
 struct page_info *__must_check iommu_alloc_pgtable(struct domain *d);
 
+int __must_check iommu_set_allocation(struct domain *d, unsigned int nr_pages);
+
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
  * Local variables:
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 75e855625a..6402678838 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1138,8 +1138,16 @@ struct xen_domctl_vuart_op {
 
 #define XEN_DOMCTL_IOMMU_INVALID 0
 
+#define XEN_DOMCTL_IOMMU_SET_ALLOCATION 1
+struct xen_domctl_iommu_set_allocation {
+    uint32_t nr_pages;
+};
+
 struct xen_domctl_iommu_ctl {
     uint32_t op; /* XEN_DOMCTL_IOMMU_* */
+    union {
+        struct xen_domctl_iommu_set_allocation set_allocation;
+    } u;
 };
 
 struct xen_domctl {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 09:49:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 09:49:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2950.8472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7Z-0000rq-T4; Mon, 05 Oct 2020 09:49:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2950.8472; Mon, 05 Oct 2020 09:49:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPN7Z-0000rh-PE; Mon, 05 Oct 2020 09:49:29 +0000
Received: by outflank-mailman (input) for mailman id 2950;
 Mon, 05 Oct 2020 09:49:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kPN7Y-0000a7-9c
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 09:49:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ccbdfe1a-3095-42d9-b8c2-8552cdbb2387;
 Mon, 05 Oct 2020 09:49:15 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7K-00018v-Ha; Mon, 05 Oct 2020 09:49:14 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPN7K-0007gW-92; Mon, 05 Oct 2020 09:49:14 +0000
X-Inumbo-ID: ccbdfe1a-3095-42d9-b8c2-8552cdbb2387
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=786CYGhqhj0G7twgI0F8WxticlI/nilf3doZC+/fM1I=; b=zeEgrm+xJUjmFaVxTi0jW2I0O7
	rxetauyJtbYfDQWFYKaA+/TGZmY4aSvvCVKV91LiD42COv9zL3Ithov3ykjT/LZitIF8OSPYbAgt0
	RvkupNDYE7bYXUT34TPwrxMf2Xooec3tcP+wqS0cALKda2CDCeDOX3H6aYIAUV4ieMSE=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 5/5] x86 / iommu: create a dedicated pool of page-table pages
Date: Mon,  5 Oct 2020 10:49:05 +0100
Message-Id: <20201005094905.2929-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201005094905.2929-1-paul@xen.org>
References: <20201005094905.2929-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch, in a manner analogous to HAP page allocation, creates a pool of
pages for use as IOMMU page-tables. The size of the pool is configured by a
call to iommu_set_allocation(), which occurs once the tool-stack has
calculated the required size and issued the XEN_DOMCTL_IOMMU_SET_ALLOCATION
sub-operation of the XEN_DOMCTL_iommu_ctl domctl. However, some IOMMU
mappings must be set up during domain_create(), before the tool-stack has
had a chance to make its calculation and issue the domctl. Hence an initial
hard-coded pool size of 256 pages is set, during the call to
arch_iommu_domain_init(), for domains not sharing EPT.

NOTE: No pool is configured for the hardware or quarantine domains. They
      continue to allocate page-table pages on demand.
      The prototype of iommu_free_pgtables() is changed to a void return as
      it no longer needs to be restartable.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>
---
 xen/arch/x86/domain.c               |   4 +-
 xen/drivers/passthrough/x86/iommu.c | 129 ++++++++++++++++++++++++----
 xen/include/asm-x86/iommu.h         |   5 +-
 3 files changed, 116 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 7e16d49bfd..101ef4aba5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2304,7 +2304,9 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(iommu_pagetables):
 
-        ret = iommu_free_pgtables(d);
+        iommu_free_pgtables(d);
+
+        ret = iommu_set_allocation(d, 0);
         if ( ret )
             return ret;
 
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index b168073f10..de0cc52489 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -134,21 +134,106 @@ void __hwdom_init arch_iommu_check_autotranslated_hwdom(struct domain *d)
         panic("PVH hardware domain iommu must be set in 'strict' mode\n");
 }
 
-int iommu_set_allocation(struct domain *d, unsigned nr_pages)
+static int destroy_pgtable(struct domain *d)
 {
+    struct domain_iommu *hd = dom_iommu(d);
+    struct page_info *pg;
+
+    if ( !hd->arch.pgtables.nr )
+    {
+        ASSERT_UNREACHABLE();
+        return -ENOENT;
+    }
+
+    pg = page_list_remove_head(&hd->arch.pgtables.free_list);
+    if ( !pg )
+        return -EBUSY;
+
+    hd->arch.pgtables.nr--;
+    free_domheap_page(pg);
+
     return 0;
 }
 
+static int create_pgtable(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int memflags = 0;
+    struct page_info *pg;
+
+#ifdef CONFIG_NUMA
+    if ( hd->node != NUMA_NO_NODE )
+        memflags = MEMF_node(hd->node);
+#endif
+
+    pg = alloc_domheap_page(NULL, memflags);
+    if ( !pg )
+        return -ENOMEM;
+
+    page_list_add(pg, &hd->arch.pgtables.free_list);
+    hd->arch.pgtables.nr++;
+
+    return 0;
+}
+
+static int set_allocation(struct domain *d, unsigned int nr_pages,
+                          bool allow_preempt)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int done = 0;
+    int rc = 0;
+
+    spin_lock(&hd->arch.pgtables.lock);
+
+    while ( !rc )
+    {
+        if ( hd->arch.pgtables.nr < nr_pages )
+            rc = create_pgtable(d);
+        else if ( hd->arch.pgtables.nr > nr_pages )
+            rc = destroy_pgtable(d);
+        else
+            break;
+
+        if ( allow_preempt && !rc && !(++done & 0xff) &&
+             general_preempt_check() )
+            rc = -ERESTART;
+    }
+
+    spin_unlock(&hd->arch.pgtables.lock);
+
+    return rc;
+}
+
+int iommu_set_allocation(struct domain *d, unsigned int nr_pages)
+{
+    return set_allocation(d, nr_pages, true);
+}
+
+/*
+ * Some IOMMU mappings are set up during domain_create() before the tool-
+ * stack has a chance to calculate and set the appropriate page-table
+ * allocation. A hard-coded initial allocation covers this gap.
+ */
+#define INITIAL_ALLOCATION 256
+
 int arch_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
     spin_lock_init(&hd->arch.mapping_lock);
 
+    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.free_list);
     INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
     spin_lock_init(&hd->arch.pgtables.lock);
 
-    return 0;
+    /*
+     * The hardware and quarantine domains are not subject to a quota
+     * and domains sharing EPT do not require any allocation.
+     */
+    if ( is_hardware_domain(d) || d == dom_io || iommu_use_hap_pt(d) )
+        return 0;
+
+    return set_allocation(d, INITIAL_ALLOCATION, false);
 }
 
 void arch_iommu_domain_destroy(struct domain *d)
@@ -265,38 +350,45 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
         return;
 }
 
-int iommu_free_pgtables(struct domain *d)
+void iommu_free_pgtables(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    struct page_info *pg;
-    unsigned int done = 0;
 
-    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
-    {
-        free_domheap_page(pg);
+    spin_lock(&hd->arch.pgtables.lock);
 
-        if ( !(++done & 0xff) && general_preempt_check() )
-            return -ERESTART;
-    }
+    page_list_splice(&hd->arch.pgtables.list, &hd->arch.pgtables.free_list);
+    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
 
-    return 0;
+    spin_unlock(&hd->arch.pgtables.lock);
 }
 
 struct page_info *iommu_alloc_pgtable(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
-    unsigned int memflags = 0;
     struct page_info *pg;
     void *p;
 
-#ifdef CONFIG_NUMA
-    if ( hd->node != NUMA_NO_NODE )
-        memflags = MEMF_node(hd->node);
-#endif
+    spin_lock(&hd->arch.pgtables.lock);
 
-    pg = alloc_domheap_page(NULL, memflags);
+ again:
+    pg = page_list_remove_head(&hd->arch.pgtables.free_list);
     if ( !pg )
+    {
+        /*
+         * The hardware and quarantine domains are not subject to a quota
+         * so create page-table pages on demand.
+         */
+        if ( is_hardware_domain(d) || d == dom_io )
+        {
+            int rc = create_pgtable(d);
+
+            if ( !rc )
+                goto again;
+        }
+
+        spin_unlock(&hd->arch.pgtables.lock);
         return NULL;
+    }
 
     p = __map_domain_page(pg);
     clear_page(p);
@@ -306,7 +398,6 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
 
     unmap_domain_page(p);
 
-    spin_lock(&hd->arch.pgtables.lock);
     page_list_add(pg, &hd->arch.pgtables.list);
     spin_unlock(&hd->arch.pgtables.lock);
 
diff --git a/xen/include/asm-x86/iommu.h b/xen/include/asm-x86/iommu.h
index d086f564af..3991b5601b 100644
--- a/xen/include/asm-x86/iommu.h
+++ b/xen/include/asm-x86/iommu.h
@@ -47,7 +47,8 @@ struct arch_iommu
 {
     spinlock_t mapping_lock; /* io page table lock */
     struct {
-        struct page_list_head list;
+        struct page_list_head list, free_list;
+        unsigned int nr;
         spinlock_t lock;
     } pgtables;
 
@@ -135,7 +136,7 @@ int pi_update_irte(const struct pi_desc *pi_desc, const struct pirq *pirq,
         iommu_vcall(ops, sync_cache, addr, size);       \
 })
 
-int __must_check iommu_free_pgtables(struct domain *d);
+void iommu_free_pgtables(struct domain *d);
 struct page_info *__must_check iommu_alloc_pgtable(struct domain *d);
 
 int __must_check iommu_set_allocation(struct domain *d, unsigned int nr_pages);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 10:09:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 10:09:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2960.8487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPNRC-00037u-Ls; Mon, 05 Oct 2020 10:09:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2960.8487; Mon, 05 Oct 2020 10:09:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPNRC-00037n-IU; Mon, 05 Oct 2020 10:09:46 +0000
Received: by outflank-mailman (input) for mailman id 2960;
 Mon, 05 Oct 2020 10:09:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sJhL=DM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kPNRB-00037i-SZ
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 10:09:45 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1ddb8a0-49d7-4907-9b45-430806a4c379;
 Mon, 05 Oct 2020 10:09:44 +0000 (UTC)
X-Inumbo-ID: c1ddb8a0-49d7-4907-9b45-430806a4c379
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601892584;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=xn+yCWqOaWmfb7lGyLHM6t1kDXl3/GdpIcNrTkE8tis=;
  b=czBGgHpdFtIXgjjEdc2tuGjYHv8T9Ex4iL9F/2cIma2gQOYpTHHS/eBS
   BBzxNXR2rbqxNjYf6Xa/7CI1SXjDmQlbIzBv7iVFilebdLCBBXccl6KLE
   6U6HZL7iwvtr4bC3ykVjxc5RIKM8HXBtCFcU/Ge3kIMICiMWIdWsbpxSk
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: /TQGNUQqoVGTchSB1wd/QHwKVDwPMX9HWKf3eVLaCMyme3kogTA/8zrHAnrufth5NSoD4YBkqk
 AUnaQTP4iI0E6/fkFA2aAkLe6s7mk/u+Rk2WH7PPtY7hhxZj0tr/K+P6Mer8kXGRpBtGUAlvjD
 aOfAYb2NRPK/PBRrp97RacEKn8LG1b639ayFhYltUvqrixzAPAZbAFMs/2aCdr/GaR4B4/f3kP
 Sp275leuXM+2NIADz4UEb+irowM+1wuXw5cCHXbyw+pCYNFqeRnXUSAVpH/LKZD5oHGtwcSZfQ
 +9c=
X-SBRS: None
X-MesageID: 28250931
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,338,1596513600"; 
   d="scan'208";a="28250931"
Subject: Re: [PATCH v9 5/8] docs / tools: specific migration v4 to include
 DOMAIN_CONTEXT
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-6-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <30a67387-37d4-4408-6747-5fc1b193acc7@citrix.com>
Date: Mon, 5 Oct 2020 11:09:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200924131030.1876-6-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 24/09/2020 14:10, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> A new 'domain context' framework was recently introduced to facilitate
> transfer of state for both PV and HVM guests. Hence this patch mandates the
> presence of a new DOMAIN_CONTEXT record in version 4 of the libxc migration
> stream.
> This record will incorporate the content of the domain's 'shared_info' page
> and the TSC information so the SHARED_INFO and TSC_INFO records are deprecated.
> It is intended that, in future, this record will contain state currently
> present in the HVM_CONTEXT record. However, for compatibility with earlier
> migration streams, the version 4 stream format continues to specify an
> HVM_CONTEXT record and XEN_DOMCTL_sethvmcontext will continue to accept all
> content of that record that may be present in a version 3 stream.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

What's the plan for the remainder of the work?  We don't want to burn
multiple version numbers for each bit of incremental work.

One option might be to specify the full extent of the work, and use an
environment variable to alter the behaviour of the sending side, while
it is still work-in-progress.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 10:14:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 10:14:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2963.8502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPNVJ-0003yJ-8s; Mon, 05 Oct 2020 10:14:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2963.8502; Mon, 05 Oct 2020 10:14:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPNVJ-0003yC-5H; Mon, 05 Oct 2020 10:14:01 +0000
Received: by outflank-mailman (input) for mailman id 2963;
 Mon, 05 Oct 2020 10:13:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cNwf=DM=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kPNVH-0003y0-RN
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 10:13:59 +0000
Received: from mail-wm1-x32d.google.com (unknown [2a00:1450:4864:20::32d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 880e2c0b-b5b4-4c80-9543-084c5e0a263b;
 Mon, 05 Oct 2020 10:13:59 +0000 (UTC)
Received: by mail-wm1-x32d.google.com with SMTP id t17so8163577wmi.4
 for <xen-devel@lists.xenproject.org>; Mon, 05 Oct 2020 03:13:59 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-234.amazon.com. [54.240.197.234])
 by smtp.gmail.com with ESMTPSA id h4sm14193239wrm.54.2020.10.05.03.13.56
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 05 Oct 2020 03:13:57 -0700 (PDT)
X-Inumbo-ID: 880e2c0b-b5b4-4c80-9543-084c5e0a263b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=NRgMAqf3/i0yh6vU2Gatu/FdpJX+8wN+9W1V5uqyPKg=;
        b=dQXvZ4OJQOl2EGHL9XE/e1+YpOG+p1PPcAv+TqxRUAMFIcK4cQ1c5oe03vmvUbASjK
         OGBY1tz78PXW6SqnAwBlPP6VNFZY+yk9E6Z86McKL6ekAh5iOJ5YsYhJPnksIxkqK0xa
         6TKiRbQRb595oq5218cgYLIhO2NvodtP1BfnPtm72tbdTsE9v2RTVMmbCCojP5sKGMjY
         YKMxypg26iJU+y7LSccau1Sr15eq8WrcrEXjWu2USNM4Awx+f8nLVawORay4oxIhOMQ8
         dQ49fCkvC4Qk544IYaNcJRWY+LGRAs2VY6zzu7mrYnVXKc2Mgg2ULdAfy5oAUtdzxP+0
         vTrw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=NRgMAqf3/i0yh6vU2Gatu/FdpJX+8wN+9W1V5uqyPKg=;
        b=btkzHmX11FS85gyN+ecdXTqJm4aoDG+XA13u7q3b6eMfn7gPMFNe1MDxi/vBk1WL2F
         aKfap/3CFjcHablIicgVDJKms87ySPkIZIGgDRf9e/nzho3wajQmEyfwFOtl1KueM75t
         HsvK08jmpdZ9ZQMN+YWUSkqIjHmIZaytMdhRLM5FoAjhaKJfT9Wkk5rLnHsPjhKG6y97
         Ms5hXQcy39sCVrZ1Qn9MsmdTIjUC+2s1QoawUCqnWeVBNkt0yxN04UNy4ZQHRNnChhwu
         ePkQwYF86gcTMBDHQuO4liz8oKE+8Go5H8eN4RuVsLM6Twxg7ERLrgv/aEszCYvnJW3p
         ZsSg==
X-Gm-Message-State: AOAM532hdtL6Z+91+fu07Mseh7LMF8R72//Q5gkLIxmqU3q9J/2VhtMf
	JkKbcTBp37NTaISH20CC4WM=
X-Google-Smtp-Source: ABdhPJxfAc7rkLs+2VLOUwsgHUkym6lK/bhb3wx7dY6XpOAgDovl7dMaIh2HpOjBypJZAWVxNNzBXg==
X-Received: by 2002:a1c:495:: with SMTP id 143mr361861wme.63.1601892838199;
        Mon, 05 Oct 2020 03:13:58 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <ian.jackson@eu.citrix.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	=?utf-8?Q?'Marek_Marczykowski-G=C3=B3recki'?= <marmarek@invisiblethingslab.com>
References: <20200924131030.1876-1-paul@xen.org> <20200924131030.1876-6-paul@xen.org> <30a67387-37d4-4408-6747-5fc1b193acc7@citrix.com>
In-Reply-To: <30a67387-37d4-4408-6747-5fc1b193acc7@citrix.com>
Subject: RE: [PATCH v9 5/8] docs / tools: specify migration v4 to include DOMAIN_CONTEXT
Date: Mon, 5 Oct 2020 11:13:56 +0100
Message-ID: <000301d69b00$3cc99f00$b65cdd00$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQHMaZuCwoR0ssB24MBKMpsJEDSJHwI2tcBBAdGJ5VapfOrRAA==

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 05 October 2020 11:09
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Jan Beulich <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Marek Marczykowski-Górecki
> <marmarek@invisiblethingslab.com>
> Subject: Re: [PATCH v9 5/8] docs / tools: specify migration v4 to include DOMAIN_CONTEXT
>
> On 24/09/2020 14:10, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > A new 'domain context' framework was recently introduced to facilitate
> > transfer of state for both PV and HVM guests. Hence this patch mandates the
> > presence of a new DOMAIN_CONTEXT record in version 4 of the libxc migration
> > stream.
> > This record will incorporate the content of the domain's 'shared_info' page
> > and the TSC information, so the SHARED_INFO and TSC_INFO records are deprecated.
> > It is intended that, in future, this record will contain state currently
> > present in the HVM_CONTEXT record. However, for compatibility with earlier
> > migration streams, the version 4 stream format continues to specify an
> > HVM_CONTEXT record and XEN_DOMCTL_sethvmcontext will continue to accept all
> > content of that record that may be present in a version 3 stream.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
>
> What's the plan for the remainder of the work?  We don't want to burn
> multiple version numbers for each bit of incremental work.
>
> One option might be to specify the full extent of the work, and use an
> environment variable to alter the behaviour of the sending side, while
> it is still work-in-progress.
>

The other missing part for transparent migration is xenstore content,
but I'd expect that to sit at the next level up, so I'm not anticipating
any more churn at this level.

  Paul



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 10:40:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 10:40:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2968.8514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPNuK-0005z1-Jb; Mon, 05 Oct 2020 10:39:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2968.8514; Mon, 05 Oct 2020 10:39:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPNuK-0005yu-Fs; Mon, 05 Oct 2020 10:39:52 +0000
Received: by outflank-mailman (input) for mailman id 2968;
 Mon, 05 Oct 2020 10:39:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sJhL=DM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kPNuI-0005yp-Sq
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 10:39:50 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 144920f9-330b-4a44-955e-52f305f8aa8a;
 Mon, 05 Oct 2020 10:39:48 +0000 (UTC)
X-Inumbo-ID: 144920f9-330b-4a44-955e-52f305f8aa8a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601894388;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=ZJxHvFXJT4hE+LcVa/v6udXiAg2ozRAKSroSYJs3BJ4=;
  b=Ok9FQngMN/ObMgiCw396iOHYK0+i27ahCQOEdQAYhQfwoaiIN1tNnami
   kT1lQI/bl3CC8rHSbECT0/yu+WyrZD6zixp+qQ48yIcpIgDOvLynOKcun
   2O3Rvbz0H9Dxl7OorQTu7o91n5V8s1gjvHWcYFADe/ISg/ENxm1R7FCXm
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: JbJHbmblZRw5sb17Yx6MOn4OKiJbTwJE6YrtiHjWqthMZ433+dgP8NlV2mfmhCp3JJ86sjc9I7
 FRKcxc8hUhi69W/RPVOgjmXE6jQEHi52HDbUULBKmhbVfvptSSOIL0OtYM8EGDIBvf6NnlrdGV
 y5hiwp49MqnKXji5sXhl35i7WHupN0I0UM2qjjo0X+0hXEj5EtlXoEg96ioaffNGcCBPbpNyNz
 U0X/Py8/cgcEq0+0nKCnQbVhbfvA7cP6w0f5KblYtW3unI2jETJA1nV4hxG7L7GNQwyGB+BmfQ
 ptk=
X-SBRS: None
X-MesageID: 28360803
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,338,1596513600"; 
   d="scan'208";a="28360803"
Subject: Re: [PATCH v9 6/8] common/domain: add a domain context record for
 shared_info...
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Ian Jackson
	<ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-7-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a82cfb40-9ce5-d8ed-a2f7-1b02fc6e27e6@citrix.com>
Date: Mon, 5 Oct 2020 11:39:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20200924131030.1876-7-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 24/09/2020 14:10, Paul Durrant wrote:
> diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
> index 243325dfce..6ead7ea89d 100644
> --- a/tools/misc/xen-domctx.c
> +++ b/tools/misc/xen-domctx.c
> @@ -31,6 +31,7 @@
>  #include <errno.h>
>  
>  #include <xenctrl.h>
> +#include <xen-tools/libs.h>
>  #include <xen/xen.h>
>  #include <xen/domctl.h>
>  #include <xen/save.h>
> @@ -61,6 +62,82 @@ static void dump_header(void)
>  
>  }
>  
> +static void print_binary(const char *prefix, const void *val, size_t size,
> +                         const char *suffix)
> +{
> +    printf("%s", prefix);
> +
> +    while ( size-- )
> +    {
> +        uint8_t octet = *(const uint8_t *)val++;
> +        unsigned int i;
> +
> +        for ( i = 0; i < 8; i++ )
> +        {
> +            printf("%u", octet & 1);
> +            octet >>= 1;
> +        }
> +    }
> +
> +    printf("%s", suffix);
> +}
> +
> +static void dump_shared_info(void)
> +{
> +    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
> +    bool has_32bit_shinfo;
> +    shared_info_any_t *info;
> +    unsigned int i, n;
> +
> +    GET_PTR(s);
> +    has_32bit_shinfo = s->flags & DOMAIN_SAVE_32BIT_SHINFO;
> +
> +    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
> +           has_32bit_shinfo ? "true" : "false", s->buffer_size);
> +
> +    info = (shared_info_any_t *)s->buffer;
> +
> +#define GET_FIELD_PTR(_f)            \
> +    (has_32bit_shinfo ?              \
> +     (const void *)&(info->x32._f) : \
> +     (const void *)&(info->x64._f))
> +#define GET_FIELD_SIZE(_f) \
> +    (has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
> +#define GET_FIELD(_f) \
> +    (has_32bit_shinfo ? info->x32._f : info->x64._f)
> +
> +    n = has_32bit_shinfo ?
> +        ARRAY_SIZE(info->x32.evtchn_pending) :
> +        ARRAY_SIZE(info->x64.evtchn_pending);
> +
> +    for ( i = 0; i < n; i++ )
> +    {
> +        const char *prefix = !i ?
> +            "                 evtchn_pending: " :
> +            "                                 ";
> +
> +        print_binary(prefix, GET_FIELD_PTR(evtchn_pending[0]),
> +                 GET_FIELD_SIZE(evtchn_pending[0]), "\n");
> +    }
> +
> +    for ( i = 0; i < n; i++ )
> +    {
> +        const char *prefix = !i ?
> +            "                    evtchn_mask: " :
> +            "                                 ";
> +
> +        print_binary(prefix, GET_FIELD_PTR(evtchn_mask[0]),
> +                 GET_FIELD_SIZE(evtchn_mask[0]), "\n");
> +    }

What about domains using FIFO?  This is meaningless for them.

> +
> +    printf("                 wc: version: %u sec: %u nsec: %u\n",
> +           GET_FIELD(wc_version), GET_FIELD(wc_sec), GET_FIELD(wc_nsec));

wc_sec_hi is also a rather critical field in this calculation.

> +
> +#undef GET_FIELD
> +#undef GET_FIELD_SIZE
> +#undef GET_FIELD_PTR
> +}
> +
>  static void dump_end(void)
>  {
>      DOMAIN_SAVE_TYPE(END) *e;
> @@ -173,6 +250,7 @@ int main(int argc, char **argv)
>              switch (desc->typecode)
>              {
>              case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
> +            case DOMAIN_SAVE_CODE(SHARED_INFO): dump_shared_info(); break;
>              case DOMAIN_SAVE_CODE(END): dump_end(); break;
>              default:
>                  printf("Unknown type %u: skipping\n", desc->typecode);
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 8cfa2e0b6b..6709f9c79e 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -33,6 +33,7 @@
>  #include <xen/xenoprof.h>
>  #include <xen/irq.h>
>  #include <xen/argo.h>
> +#include <xen/save.h>
>  #include <asm/debugger.h>
>  #include <asm/p2m.h>
>  #include <asm/processor.h>
> @@ -1657,6 +1658,110 @@ int continue_hypercall_on_cpu(
>      return 0;
>  }
>  
> +static int save_shared_info(const struct domain *d, struct domain_context *c,
> +                            bool dry_run)
> +{
> +    struct domain_shared_info_context ctxt = {
> +#ifdef CONFIG_COMPAT
> +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
> +        .buffer_size = has_32bit_shinfo(d) ?
> +                       sizeof(struct compat_shared_info) :
> +                       sizeof(struct shared_info),
> +#else
> +        .buffer_size = sizeof(struct shared_info),
> +#endif
> +    };
> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> +    int rc;
> +
> +    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
> +    if ( rc )
> +        return rc;
> +
> +    rc = domain_save_data(c, &ctxt, hdr_size);
> +    if ( rc )
> +        return rc;
> +
> +    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
> +    if ( rc )
> +        return rc;
> +
> +    return domain_save_end(c);
> +}
> +
> +static int load_shared_info(struct domain *d, struct domain_context *c)
> +{
> +    struct domain_shared_info_context ctxt;
> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> +    unsigned int i;
> +    int rc;
> +
> +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> +    if ( rc )
> +        return rc;
> +
> +    if ( i ) /* expect only a single instance */
> +        return -ENXIO;
> +
> +    rc = domain_load_data(c, &ctxt, hdr_size);
> +    if ( rc )
> +        return rc;
> +
> +    if ( ctxt.buffer_size > sizeof(shared_info_t) ||
> +         (ctxt.flags & ~DOMAIN_SAVE_32BIT_SHINFO) )
> +        return -EINVAL;
> +
> +    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
> +    {
> +#ifdef CONFIG_COMPAT
> +        has_32bit_shinfo(d) = true;

d->arch.has_32bit_shinfo

> +#else
> +        return -EINVAL;
> +#endif
> +    }
> +
> +    if ( is_pv_domain(d) )
> +    {
> +        shared_info_t *shinfo = xmalloc(shared_info_t);
> +
> +        if ( !shinfo )
> +            return -ENOMEM;
> +
> +        rc = domain_load_data(c, shinfo, sizeof(*shinfo));
> +        if ( rc )
> +            goto out;

There's no need for a memory allocation, or to double buffer this data. 
You can memcpy() straight out of the context record.

> +
> +        memcpy(&shared_info(d, vcpu_info), &__shared_info(d, shinfo, vcpu_info),
> +               sizeof(shared_info(d, vcpu_info)));
> +        memcpy(&shared_info(d, arch), &__shared_info(d, shinfo, arch),
> +               sizeof(shared_info(d, arch)));
> +
> +        memset(&shared_info(d, evtchn_pending), 0,
> +               sizeof(shared_info(d, evtchn_pending)));
> +        memset(&shared_info(d, evtchn_mask), 0xff,
> +               sizeof(shared_info(d, evtchn_mask)));
> +
> +        shared_info(d, arch.pfn_to_mfn_frame_list_list) = 0;
> +        for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
> +            shared_info(d, vcpu_info[i].evtchn_pending_sel) = 0;

What is the plan for transparent migrate here?  While this is ok for
regular migrate, it's definitely not for transparent.

> +
> +        rc = domain_load_end(c, false);
> +
> +    out:
> +        xfree(shinfo);
> +    }
> +    else
> +        /*
> +         * No modifications to shared_info are required for restoring non-PV
> +         * domains.
> +         */
> +        rc = domain_load_end(c, true);
> +
> +    return rc;
> +}
> +
> +DOMAIN_REGISTER_SAVE_LOAD(SHARED_INFO, save_shared_info, load_shared_info);
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/public/save.h b/xen/include/public/save.h
> index 551dbbddb8..0e855a4b97 100644
> --- a/xen/include/public/save.h
> +++ b/xen/include/public/save.h
> @@ -82,7 +82,18 @@ struct domain_save_header {
>  };
>  DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
>  
> -#define DOMAIN_SAVE_CODE_MAX 1
> +struct domain_shared_info_context {
> +    uint32_t flags;
> +
> +#define DOMAIN_SAVE_32BIT_SHINFO 0x00000001
> +
> +    uint32_t buffer_size;

This struct is already wrapped with a header including a size which
encompasses buffer.

Multiple overlapping size fields are an easy route to memory corruption,
because they cause ambiguity as to which one is right.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 10:55:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 10:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2970.8525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPO9Q-0007i4-UZ; Mon, 05 Oct 2020 10:55:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2970.8525; Mon, 05 Oct 2020 10:55:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPO9Q-0007hx-RV; Mon, 05 Oct 2020 10:55:28 +0000
Received: by outflank-mailman (input) for mailman id 2970;
 Mon, 05 Oct 2020 10:55:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=37d/=DM=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1kPO9P-0007hs-8Z
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 10:55:27 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8b537bd0-898d-4aa4-ab30-dee5db5f350f;
 Mon, 05 Oct 2020 10:55:26 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-350-uIq1M_HTMOC1le8a4VRyEA-1; Mon, 05 Oct 2020 06:55:24 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A549918C89C1;
 Mon,  5 Oct 2020 10:55:22 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-182.ams2.redhat.com
 [10.36.112.182])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 8F2B85D9CC;
 Mon,  5 Oct 2020 10:55:16 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 1A6C911329C1; Mon,  5 Oct 2020 12:55:15 +0200 (CEST)
X-Inumbo-ID: 8b537bd0-898d-4aa4-ab30-dee5db5f350f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601895325;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yALnMSr1yJuue5E4AjGZKDmMg2aydLrPW9CxjD73UMw=;
	b=VU53fZrADP6BVPncsCYYu26Ejlv1XEoP2CTKb0Hbz2U+QYvvci0H9k+4SwloOf8+nKotpz
	i9JrdNzBlVeU/bnf1JLK+hbciWY0x5RTuRh+PYALvrNmnUlWPnNUzqgqRboyoFe23N4nxs
	5pr8VOGLOoKI1x/GkeOKoH/3Ek8EtlE=
X-MC-Unique: uIq1M_HTMOC1le8a4VRyEA-1
From: Markus Armbruster <armbru@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,  Peter
 Maydell
 <peter.maydell@linaro.org>,  Stefano Stabellini <sstabellini@kernel.org>,
  Eduardo Habkost <ehabkost@redhat.com>,  Paul Durrant <paul@xen.org>,
  Juan Quintela <quintela@redhat.com>,  qemu-devel@nongnu.org,  "Dr. David
 Alan Gilbert" <dgilbert@redhat.com>,  "Michael S. Tsirkin"
 <mst@redhat.com>,  Gerd Hoffmann <kraxel@redhat.com>,  Anthony Perard
 <anthony.perard@citrix.com>,  xen-devel@lists.xenproject.org,  Richard
 Henderson <rth@twiddle.net>
Subject: Re: [PATCH 0/5] qapi: Restrict machine (and migration) specific
 commands
References: <20201002133923.1716645-1-philmd@redhat.com>
	<87wo05aypg.fsf@dusky.pond.sub.org>
	<0c54aa06-372c-ab81-0974-34340adb7b55@redhat.com>
Date: Mon, 05 Oct 2020 12:55:14 +0200
In-Reply-To: <0c54aa06-372c-ab81-0974-34340adb7b55@redhat.com> (Paolo
	Bonzini's message of "Mon, 5 Oct 2020 10:46:02 +0200")
Message-ID: <877ds5djsd.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/27.1 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=armbru@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Paolo Bonzini <pbonzini@redhat.com> writes:

> On 05/10/20 10:01, Markus Armbruster wrote:
>> Philippe Mathieu-Daudé <philmd@redhat.com> writes:
>>
>>> Reduce the machine code pulled into qemu-storage-daemon.
>> I'm leaving review to Eduardo and Marcel for PATCH 1-4, and to David and
>> Juan for PATCH 5.  David already ACKed.
>>
>> Can do the pull request.
>>
>
> If it counts, :) for patch 1-4:
>
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
>
> Generally these patches to remove code from user-mode emulators
> fall into the "if it builds it's fine" bucket, since I assume
> we want the "misc" subschema to be as small as possible.

Moving stuff out of qapi/misc.json is good as long as the new home makes
sense.  So, if it builds *and* the maintainers of the new home think it
makes sense to have it there, it's fine.

I don't think we should aim for eliminating every last bit of unused
generated code from every program.  We should aim for a sensible split
into sub-modules.  Unused generated code in a program can be a sign of
a less than sensible split.



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 11:08:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 11:08:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2972.8538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPOLg-0000No-4b; Mon, 05 Oct 2020 11:08:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2972.8538; Mon, 05 Oct 2020 11:08:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPOLg-0000Nh-10; Mon, 05 Oct 2020 11:08:08 +0000
Received: by outflank-mailman (input) for mailman id 2972;
 Mon, 05 Oct 2020 11:08:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sJhL=DM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kPOLf-0000Nc-69
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 11:08:07 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3fb8e2cf-4f88-4555-a609-829ab9b14a00;
 Mon, 05 Oct 2020 11:08:04 +0000 (UTC)
X-Inumbo-ID: 3fb8e2cf-4f88-4555-a609-829ab9b14a00
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601896085;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=2oqGIF+RPbEV9LC+XbTk9RHebwJC93BbN0vGQTsrcug=;
  b=Ij8yW9FGVHNuzj3uI3DlGEZ9Zn7sCh5i9NuEoza9ZenvSRYm7+5Vn01b
   /1sayXeYnIFQXuE4VOTtxvw2jZc5lLp2XOWnA5MNN4YekOV2hDf8DgdQS
   MvQV74EbhxFbekEnnnlGEUYDeqK8SwYSw7I0ap+bgI7rmGtYtb02ZiN5k
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: tM0DIien0snYoU20+2830JM6Bq02tA1Tah1giYqW/Wv9DcdcaYz+GbTMcy1wruIMq1REzzwVh6
 x6vfabkEzy5IPr/O8GUfnQBsIcwAHoEWooq2rta3eG2SX9zaUd6Dr3XvR1RYn0gtWJBu9n19O3
 it7spY0KtAlByKC1SpJDjPeF1udresv5M2069oHWPhCIdDxja7XbTw/rJZqDTkosTbOXpniCAW
 b5LpmAhdAxJzEXVmGVeyHIqcJEd0CBRuHefWSeGbJFd4axPHcOHP7BrbUDyxEnYluv4bxiWA9/
 zHs=
X-SBRS: None
X-MesageID: 28253952
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,338,1596513600"; 
   d="scan'208";a="28253952"
Subject: Re: [PATCH 7/8] x86/hvm: Drop restore boolean from
 hvm_cr4_guest_valid_bits()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
References: <20200930134248.4918-1-andrew.cooper3@citrix.com>
 <20200930134248.4918-8-andrew.cooper3@citrix.com>
 <20201001110003.GE19254@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9c502782-0484-7939-0c97-fe12a94ea2a7@citrix.com>
Date: Mon, 5 Oct 2020 12:07:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201001110003.GE19254@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 01/10/2020 12:00, Roger Pau Monné wrote:
> On Wed, Sep 30, 2020 at 02:42:47PM +0100, Andrew Cooper wrote:
>> Previously, migration was reordered so the CPUID data was available before
>> register state.  nestedhvm_enabled() has recently been made accurate for the
>> entire lifetime of the domain.
>>
>> Therefore, we can drop the bodge in hvm_cr4_guest_valid_bits() which existed
>> previously to tolerate a guests' CR4 being set/restored before
>> HVM_PARAM_NESTEDHVM.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks,

>
> Thanks, just one nit below.
>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Jun Nakajima <jun.nakajima@intel.com>
>> CC: Kevin Tian <kevin.tian@intel.com>
>> ---
>>  xen/arch/x86/hvm/domain.c       | 2 +-
>>  xen/arch/x86/hvm/hvm.c          | 8 ++++----
>>  xen/arch/x86/hvm/svm/svmdebug.c | 6 ++++--
>>  xen/arch/x86/hvm/vmx/vmx.c      | 2 +-
>>  xen/arch/x86/hvm/vmx/vvmx.c     | 2 +-
>>  xen/include/asm-x86/hvm/hvm.h   | 2 +-
>>  6 files changed, 12 insertions(+), 10 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/domain.c b/xen/arch/x86/hvm/domain.c
>> index 8e3375265c..0ce132b308 100644
>> --- a/xen/arch/x86/hvm/domain.c
>> +++ b/xen/arch/x86/hvm/domain.c
>> @@ -275,7 +275,7 @@ int arch_set_info_hvm_guest(struct vcpu *v, const vcpu_hvm_context_t *ctx)
>>      if ( v->arch.hvm.guest_efer & EFER_LME )
>>          v->arch.hvm.guest_efer |= EFER_LMA;
>>  
>> -    if ( v->arch.hvm.guest_cr[4] & ~hvm_cr4_guest_valid_bits(d, false) )
>> +    if ( v->arch.hvm.guest_cr[4] & ~hvm_cr4_guest_valid_bits(d) )
>>      {
>>          gprintk(XENLOG_ERR, "Bad CR4 value: %#016lx\n",
>>                  v->arch.hvm.guest_cr[4]);
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index 101a739952..54e32e4fe8 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -972,14 +972,14 @@ const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
>>          X86_CR0_CD | X86_CR0_PG)))
>>  
>>  /* These bits in CR4 can be set by the guest. */
>> -unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore)
>> +unsigned long hvm_cr4_guest_valid_bits(const struct domain *d)
>>  {
>>      const struct cpuid_policy *p = d->arch.cpuid;
>>      bool mce, vmxe;
>>  
>>      /* Logic broken out simply to aid readability below. */
>>      mce  = p->basic.mce || p->basic.mca;
>> -    vmxe = p->basic.vmx && (restore || nestedhvm_enabled(d));
>> +    vmxe = p->basic.vmx && nestedhvm_enabled(d);
>>  
>>      return ((p->basic.vme     ? X86_CR4_VME | X86_CR4_PVI : 0) |
>>              (p->basic.tsc     ? X86_CR4_TSD               : 0) |
>> @@ -1033,7 +1033,7 @@ static int hvm_load_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>>          return -EINVAL;
>>      }
>>  
>> -    if ( ctxt.cr4 & ~hvm_cr4_guest_valid_bits(d, true) )
>> +    if ( ctxt.cr4 & ~hvm_cr4_guest_valid_bits(d) )
>>      {
>>          printk(XENLOG_G_ERR "HVM%d restore: bad CR4 %#" PRIx64 "\n",
>>                 d->domain_id, ctxt.cr4);
>> @@ -2425,7 +2425,7 @@ int hvm_set_cr4(unsigned long value, bool may_defer)
>>      struct vcpu *v = current;
>>      unsigned long old_cr;
>>  
>> -    if ( value & ~hvm_cr4_guest_valid_bits(v->domain, false) )
>> +    if ( value & ~hvm_cr4_guest_valid_bits(v->domain) )
>>      {
>>          HVM_DBG_LOG(DBG_LEVEL_1,
>>                      "Guest attempts to set reserved bit in CR4: %lx",
>> diff --git a/xen/arch/x86/hvm/svm/svmdebug.c b/xen/arch/x86/hvm/svm/svmdebug.c
>> index ba26b6a80b..f450391df4 100644
>> --- a/xen/arch/x86/hvm/svm/svmdebug.c
>> +++ b/xen/arch/x86/hvm/svm/svmdebug.c
>> @@ -106,6 +106,7 @@ bool svm_vmcb_isvalid(const char *from, const struct vmcb_struct *vmcb,
>>      unsigned long cr0 = vmcb_get_cr0(vmcb);
>>      unsigned long cr3 = vmcb_get_cr3(vmcb);
>>      unsigned long cr4 = vmcb_get_cr4(vmcb);
>> +    unsigned long valid;
> Could you init valid here at definition time? Also cr4_valid might be
> a better name since the scope of the variable is quite wide.

I have some further cleanup in mind, which is why I did it like this.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 12:16:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 12:16:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2990.8557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPPN-0006fZ-20; Mon, 05 Oct 2020 12:16:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2990.8557; Mon, 05 Oct 2020 12:16:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPPM-0006fS-VN; Mon, 05 Oct 2020 12:16:00 +0000
Received: by outflank-mailman (input) for mailman id 2990;
 Mon, 05 Oct 2020 12:16:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kPPPL-0006fN-OP
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:00 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a9f24d00-0c45-465e-827f-79b04c7606f5;
 Mon, 05 Oct 2020 12:15:58 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-587-BY-7UwbeNrSQH1yvi3bonA-1; Mon, 05 Oct 2020 08:15:54 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 110A38030CD;
 Mon,  5 Oct 2020 12:15:51 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
 by smtp.corp.redhat.com (Postfix) with ESMTP id ACE6D277C6;
 Mon,  5 Oct 2020 12:15:36 +0000 (UTC)
X-Inumbo-ID: a9f24d00-0c45-465e-827f-79b04c7606f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601900158;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=KBX1TXpOrcsq29GpOBaph461kpBMaGykfCUU1aPiYHU=;
	b=fvHmXnd/fhhhC3C85g5BFdrmIpJzZWl5CQY8AFLn5GsGl0KDJ3JJSe4R3ZvDt25eNjR9m5
	xo8z6DUemt8FrHNcCRD5m8SUc+Mwt/J4VIvIrWXNXAGx2CNk1585qZjzFvUewNG6lU/qI9
	K/+lR5RrkkZmAC3aPnEaNTgZhZOnKDQ=
X-MC-Unique: BY-7UwbeNrSQH1yvi3bonA-1
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Dave Hansen <dave.hansen@intel.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Michal Hocko <mhocko@kernel.org>,
	Michal Hocko <mhocko@suse.com>,
	Mike Rapoport <rppt@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Scott Cheloha <cheloha@linux.ibm.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Wei Liu <wei.liu@kernel.org>,
	Wei Yang <richard.weiyang@linux.alibaba.com>
Subject: [PATCH v2 0/5] mm: place pages to the freelist tail when onlining and undoing isolation
Date: Mon,  5 Oct 2020 14:15:29 +0200
Message-Id: <20201005121534.15649-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

When adding separate memory blocks via add_memory*() and onlining them
immediately, the metadata (especially the memmap) of the next block will be
placed onto one of the just added+onlined blocks. This creates a chain
of unmovable allocations: if the last memory block cannot get
offlined+removed, neither can any of the blocks it depends on. We directly
end up with unmovable allocations all over the place.

This can be observed quite easily using virtio-mem; however, it can also
be observed when using DIMMs. The freshly onlined pages will usually be
placed at the head of the freelists, meaning they will be allocated next,
usually rendering the just-added memory immediately un-removable. The
fresh pages are also cold, so preferring to allocate other pages (that
might be hot) feels like the natural thing to do.

It also applies to the Hyper-V balloon, the Xen balloon, and ppc64 DLPAR:
when adding separate, successive memory blocks, each memory block will have
unmovable allocations on it - for example, gigantic pages will fail to
allocate.

While ZONE_NORMAL doesn't provide any guarantees that memory can get
offlined+removed again (any kind of fragmentation with unmovable
allocations is possible), there are many scenarios (hotplug a lot of
memory, run a workload, hotunplug some memory/as much as possible) where
we can offline+remove quite a lot with this patch set.

a) To visualize the problem, a very simple example:

Start a VM with 4GB and 8GB of virtio-mem memory:

 [root@localhost ~]# lsmem
 RANGE                                 SIZE  STATE REMOVABLE  BLOCK
 0x0000000000000000-0x00000000bfffffff   3G online       yes   0-23
 0x0000000100000000-0x000000033fffffff   9G online       yes 32-103

 Memory block size:       128M
 Total online memory:      12G
 Total offline memory:      0B

Then try to unplug as much as possible using virtio-mem. Observe which
memory blocks are still around. Without this patch set:

 [root@localhost ~]# lsmem
 RANGE                                  SIZE  STATE REMOVABLE   BLOCK
 0x0000000000000000-0x00000000bfffffff    3G online       yes    0-23
 0x0000000100000000-0x000000013fffffff    1G online       yes   32-39
 0x0000000148000000-0x000000014fffffff  128M online       yes      41
 0x0000000158000000-0x000000015fffffff  128M online       yes      43
 0x0000000168000000-0x000000016fffffff  128M online       yes      45
 0x0000000178000000-0x000000017fffffff  128M online       yes      47
 0x0000000188000000-0x0000000197ffffff  256M online       yes   49-50
 0x00000001a0000000-0x00000001a7ffffff  128M online       yes      52
 0x00000001b0000000-0x00000001b7ffffff  128M online       yes      54
 0x00000001c0000000-0x00000001c7ffffff  128M online       yes      56
 0x00000001d0000000-0x00000001d7ffffff  128M online       yes      58
 0x00000001e0000000-0x00000001e7ffffff  128M online       yes      60
 0x00000001f0000000-0x00000001f7ffffff  128M online       yes      62
 0x0000000200000000-0x0000000207ffffff  128M online       yes      64
 0x0000000210000000-0x0000000217ffffff  128M online       yes      66
 0x0000000220000000-0x0000000227ffffff  128M online       yes      68
 0x0000000230000000-0x0000000237ffffff  128M online       yes      70
 0x0000000240000000-0x0000000247ffffff  128M online       yes      72
 0x0000000250000000-0x0000000257ffffff  128M online       yes      74
 0x0000000260000000-0x0000000267ffffff  128M online       yes      76
 0x0000000270000000-0x0000000277ffffff  128M online       yes      78
 0x0000000280000000-0x0000000287ffffff  128M online       yes      80
 0x0000000290000000-0x0000000297ffffff  128M online       yes      82
 0x00000002a0000000-0x00000002a7ffffff  128M online       yes      84
 0x00000002b0000000-0x00000002b7ffffff  128M online       yes      86
 0x00000002c0000000-0x00000002c7ffffff  128M online       yes      88
 0x00000002d0000000-0x00000002d7ffffff  128M online       yes      90
 0x00000002e0000000-0x00000002e7ffffff  128M online       yes      92
 0x00000002f0000000-0x00000002f7ffffff  128M online       yes      94
 0x0000000300000000-0x0000000307ffffff  128M online       yes      96
 0x0000000310000000-0x0000000317ffffff  128M online       yes      98
 0x0000000320000000-0x0000000327ffffff  128M online       yes     100
 0x0000000330000000-0x000000033fffffff  256M online       yes 102-103

 Memory block size:       128M
 Total online memory:     8.1G
 Total offline memory:      0B

With this patch set:

 [root@localhost ~]# lsmem
 RANGE                                 SIZE  STATE REMOVABLE BLOCK
 0x0000000000000000-0x00000000bfffffff   3G online       yes  0-23
 0x0000000100000000-0x000000013fffffff   1G online       yes 32-39

 Memory block size:       128M
 Total online memory:       4G
 Total offline memory:      0B

All memory can get unplugged and all memory blocks can get removed. Of
course, no workload ran and the system was basically idle, but it
highlights the issue - the fairly deterministic chain of unmovable
allocations. When a
huge page for the 2MB memmap is needed, a just-onlined 4MB page will
be split. The remaining 2MB page will be used for the memmap of the next
memory block. So one memory block will hold the memmap of the two following
memory blocks. Finally the pages of the last-onlined memory block will get
used for the next bigger allocations - if any allocation is unmovable,
all dependent memory blocks cannot get unplugged and removed until that
allocation is gone.

Note that with bigger memory blocks (e.g., 256MB), *all* memory
blocks are dependent and none can get unplugged again!
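The head-vs-tail effect described above can be shown with a toy userspace
sketch (purely illustrative names, not kernel code): model a buddy freelist
as an array of memory-block ids, with allocation always popping the head.
Head-placed fresh blocks are handed out first; tail-placed ones stay free
the longest and thus remain easy to offline again.

```c
#include <assert.h>

/* Toy model of a freelist of memory-block ids (illustrative only). */
enum { FL_MAX = 8 };

struct freelist { int ids[FL_MAX]; int len; };

/* Place a freshly onlined block at the head (old behavior). */
static void fl_add_head(struct freelist *fl, int id)
{
    for (int i = fl->len; i > 0; i--)
        fl->ids[i] = fl->ids[i - 1];
    fl->ids[0] = id;
    fl->len++;
}

/* Place a freshly onlined block at the tail (behavior of this series). */
static void fl_add_tail(struct freelist *fl, int id)
{
    fl->ids[fl->len++] = id;
}

/* Allocation always takes from the head of the list. */
static int fl_alloc(struct freelist *fl)
{
    int id = fl->ids[0];
    for (int i = 1; i < fl->len; i++)
        fl->ids[i - 1] = fl->ids[i];
    fl->len--;
    return id;
}
```

Onlining blocks 1..3 in order and then allocating: with head placement the
just-onlined block 3 is allocated first (and immediately picks up possibly
unmovable data); with tail placement the oldest block 1 is allocated first.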

b) Experiment with memory intensive workload

I performed an experiment with an older version of this patch set
(before we used undo_isolate_page_range() in online_pages()):
Hotplug 56GB to a VM with an initial 4GB, onlining all memory to
ZONE_NORMAL right from the kernel when adding it. I then ran various
memory-intensive workloads that consumed most system memory for a total
of 45 minutes. Once finished, I tried to unplug as much memory as
possible.

With this change, I am able to remove, via virtio-mem (adding individual
128MB memory blocks), 413 out of 448 added memory blocks; via individual
(256MB) DIMMs, 380 out of 448 added memory blocks. (I don't have any
numbers without this patch set, but looking at the above example, it's at
most half of the 448 memory blocks for virtio-mem, and most probably none
for DIMMs.)

Again, there are workloads that might behave very differently due to the
nature of ZONE_NORMAL.

This change also affects (besides memory onlining):
- Other users of undo_isolate_page_range(): Pages are always placed at the
  tail.
-- When memory offlining fails
-- When memory isolation fails after having isolated some pageblocks
-- When alloc_contig_range() either succeeds or fails
- Other users of __putback_isolated_page(): Pages are always placed at the
  tail.
-- Free page reporting
- Other users of __free_pages_core()
-- AFAIK, any memory that is getting exposed to the buddy during boot.
   IIUC we will now usually allocate memory from lower addresses within
   a zone first (especially during boot).
- Other users of generic_online_page()
-- Hyper-V balloon

v1 -> v2:
- Avoid changing indentation/alignment of function parameters
- Minor spelling fixes
- "mm/page_alloc: convert "report" flag of __free_one_page() to a proper
   flag"
-- fop_t -> fpi_t
-- Clarify/extend documentation of FPI_SKIP_REPORT_NOTIFY
- "mm/page_alloc: move pages to tail in move_to_free_list()"
-- Perform change for all move_to_free_list()/move_freepages_block() users
   to simplify.
-- Adjust subject/description accordingly.
- "mm/page_alloc: place pages to tail in __free_pages_core()"
-- s/init_single_page/__init_single_page/

RFC -> v1:
- Tweak some patch descriptions
- "mm/page_alloc: place pages to tail in __putback_isolated_page()"
-- FOP_TO_TAIL now has higher precedence than page shuffling
-- Add a note that nothing should rely on FOP_TO_TAIL for correctness
- "mm/page_alloc: always move pages to the tail of the freelist in
   unset_migratetype_isolate()"
-- Use "bool" parameter for move_freepages_block() as requested
- "mm/page_alloc: place pages to tail in __free_pages_core()"
-- Eliminate set_page_refcounted() + page_ref_dec() and add a comment
- "mm/memory_hotplug: update comment regarding zone shuffling"
-- Added

David Hildenbrand (5):
  mm/page_alloc: convert "report" flag of __free_one_page() to a proper
    flag
  mm/page_alloc: place pages to tail in __putback_isolated_page()
  mm/page_alloc: move pages to tail in move_to_free_list()
  mm/page_alloc: place pages to tail in __free_pages_core()
  mm/memory_hotplug: update comment regarding zone shuffling

 mm/memory_hotplug.c | 11 +++---
 mm/page_alloc.c     | 84 +++++++++++++++++++++++++++++++++++----------
 mm/page_isolation.c |  5 +++
 3 files changed, 75 insertions(+), 25 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 12:16:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 12:16:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2991.8570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPPV-0006hK-CV; Mon, 05 Oct 2020 12:16:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2991.8570; Mon, 05 Oct 2020 12:16:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPPV-0006hD-8j; Mon, 05 Oct 2020 12:16:09 +0000
Received: by outflank-mailman (input) for mailman id 2991;
 Mon, 05 Oct 2020 12:16:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kPPPT-0006gw-SM
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:07 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 39a9e146-f421-44ff-b35f-6cb9296d79f3;
 Mon, 05 Oct 2020 12:16:05 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-389-Y1IAhXxZNuKk3UjduWcrDg-1; Mon, 05 Oct 2020 08:16:01 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B695D107ACF5;
 Mon,  5 Oct 2020 12:15:58 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0F7A01A913;
 Mon,  5 Oct 2020 12:15:51 +0000 (UTC)
X-Inumbo-ID: 39a9e146-f421-44ff-b35f-6cb9296d79f3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601900165;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3jnWAK3O4yILdw4ZP0mQyIXKP0/Hh52uEJ+0mn35WIQ=;
	b=FaCqxhCKW1Gu1JSOMoatETAbCAz+HmGzA5k6KPvnhCKRg3jgAPZSIdQLhzddxm61xQPW9w
	r/u/Z3Ie8zGqhH78rKHcy8VB4PSntlLaGMpMFeRvOe0RbygCJ1X5k7pw+dTP7q+USi2syz
	Z9m328UQPpa+AiKh2PzZAY1rDVKs04Y=
X-MC-Unique: Y1IAhXxZNuKk3UjduWcrDg-1
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Oscar Salvador <osalvador@suse.de>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Michal Hocko <mhocko@suse.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Mike Rapoport <rppt@kernel.org>
Subject: [PATCH v2 1/5] mm/page_alloc: convert "report" flag of __free_one_page() to a proper flag
Date: Mon,  5 Oct 2020 14:15:30 +0200
Message-Id: <20201005121534.15649-2-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

Let's prepare for additional flags and avoid long parameter lists of bools.
Follow-up patches will also make use of the flags in __free_pages_ok().
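As a purely illustrative sketch (names mirror the patch, but this is not
the kernel code), the bool-to-flags conversion looks like:

```c
#include <assert.h>
#include <stdbool.h>

/* A typed flags word replaces a bool parameter, so further options can
 * be added later without growing the parameter list. */
typedef unsigned int fpi_t;

#define FPI_NONE                ((fpi_t)0)
#define FPI_SKIP_REPORT_NOTIFY  ((fpi_t)(1u << 0))

/* Before: __free_one_page(..., bool report);
 * After:  __free_one_page(..., fpi_t fpi_flags); */
static bool should_notify(fpi_t fpi_flags)
{
    return !(fpi_flags & FPI_SKIP_REPORT_NOTIFY);
}
```

Callers that previously passed "true" now pass FPI_NONE, and callers that
passed "false" pass FPI_SKIP_REPORT_NOTIFY, as in the diff below.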

Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7012d67a302d..2bf235b1953f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -78,6 +78,22 @@
 #include "shuffle.h"
 #include "page_reporting.h"
 
+/* Free Page Internal flags: for internal, non-pcp variants of free_pages(). */
+typedef int __bitwise fpi_t;
+
+/* No special request */
+#define FPI_NONE		((__force fpi_t)0)
+
+/*
+ * Skip free page reporting notification for the (possibly merged) page.
+ * This does not hinder free page reporting from grabbing the page,
+ * reporting it and marking it "reported" -  it only skips notifying
+ * the free page reporting infrastructure about a newly freed page. For
+ * example, used when temporarily pulling a page from a freelist and
+ * putting it back unmodified.
+ */
+#define FPI_SKIP_REPORT_NOTIFY	((__force fpi_t)BIT(0))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -952,7 +968,7 @@ buddy_merge_likely(unsigned long pfn, unsigned long buddy_pfn,
 static inline void __free_one_page(struct page *page,
 		unsigned long pfn,
 		struct zone *zone, unsigned int order,
-		int migratetype, bool report)
+		int migratetype, fpi_t fpi_flags)
 {
 	struct capture_control *capc = task_capc(zone);
 	unsigned long buddy_pfn;
@@ -1039,7 +1055,7 @@ static inline void __free_one_page(struct page *page,
 		add_to_free_list(page, zone, order, migratetype);
 
 	/* Notify page reporting subsystem of freed page */
-	if (report)
+	if (!(fpi_flags & FPI_SKIP_REPORT_NOTIFY))
 		page_reporting_notify_free(order);
 }
 
@@ -1380,7 +1396,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 		if (unlikely(isolated_pageblocks))
 			mt = get_pageblock_migratetype(page);
 
-		__free_one_page(page, page_to_pfn(page), zone, 0, mt, true);
+		__free_one_page(page, page_to_pfn(page), zone, 0, mt, FPI_NONE);
 		trace_mm_page_pcpu_drain(page, 0, mt);
 	}
 	spin_unlock(&zone->lock);
@@ -1396,7 +1412,7 @@ static void free_one_page(struct zone *zone,
 		is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, true);
+	__free_one_page(page, pfn, zone, order, migratetype, FPI_NONE);
 	spin_unlock(&zone->lock);
 }
 
@@ -3289,7 +3305,8 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 	lockdep_assert_held(&zone->lock);
 
 	/* Return isolated page to tail of freelist. */
-	__free_one_page(page, page_to_pfn(page), zone, order, mt, false);
+	__free_one_page(page, page_to_pfn(page), zone, order, mt,
+			FPI_SKIP_REPORT_NOTIFY);
 }
 
 /*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 12:16:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 12:16:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2992.8582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPPj-0006mz-Ki; Mon, 05 Oct 2020 12:16:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2992.8582; Mon, 05 Oct 2020 12:16:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPPj-0006ms-HZ; Mon, 05 Oct 2020 12:16:23 +0000
Received: by outflank-mailman (input) for mailman id 2992;
 Mon, 05 Oct 2020 12:16:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kPPPh-0006mJ-Ib
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:21 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 12eb9e65-9c79-4b0b-b19c-23bbb64c80a9;
 Mon, 05 Oct 2020 12:16:20 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-416-a6njIh0COVKiFYMipVIPug-1; Mon, 05 Oct 2020 08:16:16 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 13F738030AA;
 Mon,  5 Oct 2020 12:16:13 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
 by smtp.corp.redhat.com (Postfix) with ESMTP id B73DF27CC6;
 Mon,  5 Oct 2020 12:15:59 +0000 (UTC)
X-Inumbo-ID: 12eb9e65-9c79-4b0b-b19c-23bbb64c80a9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601900180;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tgT/XV/JOTuCj2JcNLuFxyO0AHRuTyQafR/eZPht6eo=;
	b=X00Ze5uW7fEZc9MPbuvq1LLwxa6i91hwBHi/1sxEuQb9sufA5EAIkQlzm10bFpH+RC+E+C
	SB5DADdctN9JNu7kNydQsv7ShdtD6yXWWyx/uy0tX+XqjAXiVjFtwHFoeeITK/KvQtisd2
	BoGX5R/s+hpPKZHHZueVqD0zgbdaucc=
X-MC-Unique: a6njIh0COVKiFYMipVIPug-1
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Oscar Salvador <osalvador@suse.de>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Michal Hocko <mhocko@suse.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Mike Rapoport <rppt@kernel.org>,
	Scott Cheloha <cheloha@linux.ibm.com>,
	Michael Ellerman <mpe@ellerman.id.au>
Subject: [PATCH v2 2/5] mm/page_alloc: place pages to tail in __putback_isolated_page()
Date: Mon,  5 Oct 2020 14:15:31 +0200
Message-Id: <20201005121534.15649-3-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

__putback_isolated_page() already documents that pages will be placed to
the tail of the freelist - this is, however, not the case for
"order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
the case for all existing users.

This change affects two users:
- free page reporting
- page isolation, when undoing the isolation (including memory onlining).

This behavior is desirable for pages that haven't really been touched
lately, so it fits exactly the two users that don't actually read/write
page content, but rather move untouched pages.

The new behavior is especially desirable for memory onlining, where we
allow allocation of newly onlined pages via undo_isolate_page_range()
in online_pages(). Right now, we always place them at the head of the
freelist, resulting in undesirable behavior: assume we add individual
memory chunks via add_memory() and online them right away in the NORMAL
zone. We create a dependency chain of unmovable allocations, e.g., via
the memmap. The memmap of the next chunk will be placed onto previous
chunks - if the last block cannot get offlined+removed, all dependent
ones cannot get offlined+removed either. While this can already be
observed with individual DIMMs, it's more of an issue for virtio-mem
(and I suspect also ppc DLPAR).

Document that this should only be used for optimizations, and that no
code should rely on this behavior for correctness (in case the order of
the freelists ever changes).

We won't care about page shuffling: memory onlining already properly
shuffles after onlining. Free page reporting doesn't care about
physically contiguous ranges, and there are already cases where page
isolation will simply move (physically close) free pages to (currently)
the head of the freelists via move_freepages_block() instead of
shuffling. If this ever becomes relevant, we should shuffle the whole
zone when undoing isolation of larger ranges, and after
free_contig_range().
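The placement precedence described above can be sketched as a standalone
toy function (illustrative only, simplified from the patch's
__free_one_page(); not kernel code): an explicit FPI_TO_TAIL request wins
over shuffling, which in turn wins over the buddy-merge heuristic.

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int fpi_t;

#define FPI_TO_TAIL ((fpi_t)(1u << 1))

static bool place_at_tail(fpi_t fpi_flags, bool is_shuffle_order,
                          bool shuffle_pick_tail, bool buddy_merge_likely)
{
    if (fpi_flags & FPI_TO_TAIL)
        return true;              /* explicit request: always the tail */
    if (is_shuffle_order)
        return shuffle_pick_tail; /* randomized freelist shuffling */
    return buddy_merge_likely;    /* tail if a merge is expected soon */
}
```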

Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2bf235b1953f..df5ff0cd6df1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -94,6 +94,18 @@ typedef int __bitwise fpi_t;
  */
 #define FPI_SKIP_REPORT_NOTIFY	((__force fpi_t)BIT(0))
 
+/*
+ * Place the (possibly merged) page to the tail of the freelist. Will ignore
+ * page shuffling (relevant code - e.g., memory onlining - is expected to
+ * shuffle the whole zone).
+ *
+ * Note: No code should rely on this flag for correctness - it's purely
+ *       to allow for optimizations when handing back either fresh pages
+ *       (memory onlining) or untouched pages (page isolation, free page
+ *       reporting).
+ */
+#define FPI_TO_TAIL		((__force fpi_t)BIT(1))
+
 /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
 static DEFINE_MUTEX(pcp_batch_high_lock);
 #define MIN_PERCPU_PAGELIST_FRACTION	(8)
@@ -1044,7 +1056,9 @@ static inline void __free_one_page(struct page *page,
 done_merging:
 	set_page_order(page, order);
 
-	if (is_shuffle_order(order))
+	if (fpi_flags & FPI_TO_TAIL)
+		to_tail = true;
+	else if (is_shuffle_order(order))
 		to_tail = shuffle_pick_tail();
 	else
 		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
@@ -3306,7 +3320,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
 
 	/* Return isolated page to tail of freelist. */
 	__free_one_page(page, page_to_pfn(page), zone, order, mt,
-			FPI_SKIP_REPORT_NOTIFY);
+			FPI_SKIP_REPORT_NOTIFY | FPI_TO_TAIL);
 }
 
 /*
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 12:16:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 12:16:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2993.8594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPPs-0006sA-UO; Mon, 05 Oct 2020 12:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2993.8594; Mon, 05 Oct 2020 12:16:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPPs-0006s3-RL; Mon, 05 Oct 2020 12:16:32 +0000
Received: by outflank-mailman (input) for mailman id 2993;
 Mon, 05 Oct 2020 12:16:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kPPPr-0006rU-LP
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:31 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 84479ba7-e796-4cc8-b6e3-43f1748c8848;
 Mon, 05 Oct 2020 12:16:29 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-547-TlvT2dKcMy2qnf7NNmOSuA-1; Mon, 05 Oct 2020 08:16:25 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C5087107ACF5;
 Mon,  5 Oct 2020 12:16:22 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8AAF6271A3;
 Mon,  5 Oct 2020 12:16:13 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
	id 1kPPPr-0006rU-LP
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:31 +0000
X-Inumbo-ID: 84479ba7-e796-4cc8-b6e3-43f1748c8848
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 84479ba7-e796-4cc8-b6e3-43f1748c8848;
	Mon, 05 Oct 2020 12:16:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601900189;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I+9FWa50a4fAVSLereFH2OCTk21D3ec5peVlRys8eQY=;
	b=RlKFUjuh8aMeDLhwWX1WZ5S0QGEVmrisUilRPf+BEJSvL7QxUs2dcau5pypzfunDA26Yqp
	nDK9t6cV+6kexLTmxZNqDXdpg4ps0xAfXnQG0umi/smAnLUuklKZYQeI5+LZydRTM2W6XR
	9UZ/PfrkJrI6b+u2LQIvHAE9xgYMstw=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-547-TlvT2dKcMy2qnf7NNmOSuA-1; Mon, 05 Oct 2020 08:16:25 -0400
X-MC-Unique: TlvT2dKcMy2qnf7NNmOSuA-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C5087107ACF5;
	Mon,  5 Oct 2020 12:16:22 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
	by smtp.corp.redhat.com (Postfix) with ESMTP id 8AAF6271A3;
	Mon,  5 Oct 2020 12:16:13 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Oscar Salvador <osalvador@suse.de>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Mike Rapoport <rppt@kernel.org>,
	Scott Cheloha <cheloha@linux.ibm.com>,
	Michael Ellerman <mpe@ellerman.id.au>
Subject: [PATCH v2 3/5] mm/page_alloc: move pages to tail in move_to_free_list()
Date: Mon,  5 Oct 2020 14:15:32 +0200
Message-Id: <20201005121534.15649-4-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

Whenever we move pages between freelists via move_to_free_list()/
move_freepages_block(), we don't actually touch the pages:
1. Page isolation doesn't actually touch the pages, it simply isolates
   pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
   When undoing isolation, we move the pages back to the target list.
2. Page stealing (steal_suitable_fallback()) moves free pages directly
   between lists without touching them.
3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
   free pages directly between freelists without touching them.

We already place pages to the tail of the freelists when undoing isolation
via __putback_isolated_page(), let's do it in any case (e.g., if order <=
pageblock_order) and document the behavior. To simplify, let's move the
pages to the tail for all move_to_free_list()/move_freepages_block() users.

In 2., the target list is empty, so there should be no change. In 3.,
we might observe a change, however, highatomic is more concerned about
allocations succeeding than cache hotness - if we ever realize this
change degrades a workload, we can special-case this instance and add a
proper comment.

This change results in all pages onlined via online_pages() being
placed to the tail of the freelist.
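
As a hedged illustration of the core change (list_move() ->
list_move_tail()), the following standalone userspace sketch
re-implements the relevant circular doubly-linked list primitives; the
names mirror <linux/list.h>, but this is not the kernel code:

```c
#include <assert.h>
#include <stddef.h>

/* Standalone re-implementation of the kernel's circular doubly-linked
 * list, for illustration only. */
struct list_head { struct list_head *prev, *next; };

static void INIT_LIST_HEAD(struct list_head *h) { h->prev = h->next = h; }

static void __list_del_entry(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

/* Insert right after head: first candidate for the next allocation. */
static void list_add(struct list_head *e, struct list_head *h)
{
	e->next = h->next;
	e->prev = h;
	h->next->prev = e;
	h->next = e;
}

/* Insert right before head, i.e. at the tail of the list. */
static void list_add_tail(struct list_head *e, struct list_head *h)
{
	e->prev = h->prev;
	e->next = h;
	h->prev->next = e;
	h->prev = e;
}

/* What move_to_free_list() uses after this patch: unlink and re-add at
 * the tail, so moved pages aren't immediately considered for
 * allocation again. */
static void list_move_tail(struct list_head *e, struct list_head *h)
{
	__list_del_entry(e);
	list_add_tail(e, h);
}
```

Moving an entry to the tail leaves the head of the freelist (the next
allocation candidate) untouched, which is the whole point of the
optimization.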

Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Scott Cheloha <cheloha@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c     | 10 +++++++---
 mm/page_isolation.c |  5 +++++
 2 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index df5ff0cd6df1..b187e46cf640 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
 	area->nr_free++;
 }
 
-/* Used for pages which are on another list */
+/*
+ * Used for pages which are on another list. Move the pages to the tail
+ * of the list - so the moved pages won't immediately be considered for
+ * allocation again (e.g., optimization for memory onlining).
+ */
 static inline void move_to_free_list(struct page *page, struct zone *zone,
 				     unsigned int order, int migratetype)
 {
 	struct free_area *area = &zone->free_area[order];
 
-	list_move(&page->lru, &area->free_list[migratetype]);
+	list_move_tail(&page->lru, &area->free_list[migratetype]);
 }
 
 static inline void del_page_from_free_list(struct page *page, struct zone *zone,
@@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
 #endif
 
 /*
- * Move the free pages in a range to the free lists of the requested type.
+ * Move the free pages in a range to the freelist tail of the requested type.
  * Note that start_page and end_pages are not aligned on a pageblock
  * boundary. If alignment is required, use move_freepages_block()
  */
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index abfe26ad59fd..83692b937784 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
 	 * If we isolate freepage with more than pageblock_order, there
 	 * should be no freepage in the range, so we could avoid costly
 	 * pageblock scanning for freepage moving.
+	 *
+	 * We didn't actually touch any of the isolated pages, so place them
+	 * to the tail of the freelist. This is an optimization for memory
+	 * onlining - just onlined memory won't immediately be considered for
+	 * allocation.
 	 */
 	if (!isolated_page) {
 		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 12:16:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 12:16:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2994.8606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPQ5-00073U-CV; Mon, 05 Oct 2020 12:16:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2994.8606; Mon, 05 Oct 2020 12:16:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPQ5-00073K-8u; Mon, 05 Oct 2020 12:16:45 +0000
Received: by outflank-mailman (input) for mailman id 2994;
 Mon, 05 Oct 2020 12:16:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kPPQ4-00072n-68
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:44 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 39e3cea2-e6a0-4dbe-95a5-5909db546898;
 Mon, 05 Oct 2020 12:16:43 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-536-hdgD0PM4N7SehO3ysGnyiw-1; Mon, 05 Oct 2020 08:16:39 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 547CE18A8220;
 Mon,  5 Oct 2020 12:16:36 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
 by smtp.corp.redhat.com (Postfix) with ESMTP id C518D1A8EC;
 Mon,  5 Oct 2020 12:16:23 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
	id 1kPPQ4-00072n-68
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:44 +0000
X-Inumbo-ID: 39e3cea2-e6a0-4dbe-95a5-5909db546898
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 39e3cea2-e6a0-4dbe-95a5-5909db546898;
	Mon, 05 Oct 2020 12:16:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601900202;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I03+HqTTiEYZ2UJ8Hm8w9UAbNI08TTVr15820ukCo2o=;
	b=YgrQJJ2TPmJuYA5hEQoZlIneITREmdjrt/71x9DMcH45Lw7P+QzzAwe4Lhk9kWMt3MEm/8
	ApfXRmJFTf8WUNGWcj3CGnNtMD9qKbjNUMP0Yrr47JYFNiaW7nPWN5HB6PJWnxe6blfDvD
	eODvi9TKj7y8wTSd+3opgvw3sN3KoeY=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-536-hdgD0PM4N7SehO3ysGnyiw-1; Mon, 05 Oct 2020 08:16:39 -0400
X-MC-Unique: hdgD0PM4N7SehO3ysGnyiw-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 547CE18A8220;
	Mon,  5 Oct 2020 12:16:36 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
	by smtp.corp.redhat.com (Postfix) with ESMTP id C518D1A8EC;
	Mon,  5 Oct 2020 12:16:23 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Oscar Salvador <osalvador@suse.de>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Michal Hocko <mhocko@suse.com>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Mike Rapoport <rppt@kernel.org>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>
Subject: [PATCH v2 4/5] mm/page_alloc: place pages to tail in __free_pages_core()
Date: Mon,  5 Oct 2020 14:15:33 +0200
Message-Id: <20201005121534.15649-5-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

__free_pages_core() is used when exposing fresh memory to the buddy
during system boot and when onlining memory in generic_online_page().

generic_online_page() is used in two cases:

1. Direct memory onlining in online_pages().
2. Deferred memory onlining in memory-ballooning-like mechanisms (HyperV
   balloon and virtio-mem), when parts of a section are kept
   fake-offline to be fake-onlined later on.

In 1, we already place pages to the tail of the freelist. Pages will be
freed to MIGRATE_ISOLATE lists first and moved to the tail of the freelists
via undo_isolate_page_range().

In 2, we currently don't implement a proper rule. In the case of
virtio-mem, where we currently always online MAX_ORDER - 1 pages, the
pages will be placed to the HEAD of the freelist - undesirable. While
the Hyper-V balloon calls generic_online_page() with single pages, it
will usually call it on successive single pages in a larger block.

The pages are fresh, so place them to the tail of the freelist and avoid
the PCP. In __free_pages_core(), remove the now superfluous call to
set_page_refcounted() and add a comment regarding page initialization
and the refcount.

Note: In 2. we currently don't shuffle. If ever relevant (page shuffling
is usually of limited use in virtualized environments), we might want to
shuffle after a sequence of generic_online_page() calls in the
relevant callers.

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Stephen Hemminger <sthemmin@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/page_alloc.c | 33 +++++++++++++++++++++++----------
 1 file changed, 23 insertions(+), 10 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b187e46cf640..3dadcc6d4009 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -275,7 +275,8 @@ bool pm_suspended_storage(void)
 unsigned int pageblock_order __read_mostly;
 #endif
 
-static void __free_pages_ok(struct page *page, unsigned int order);
+static void __free_pages_ok(struct page *page, unsigned int order,
+			    fpi_t fpi_flags);
 
 /*
  * results with 256, 32 in the lowmem_reserve sysctl:
@@ -687,7 +688,7 @@ static void bad_page(struct page *page, const char *reason)
 void free_compound_page(struct page *page)
 {
 	mem_cgroup_uncharge(page);
-	__free_pages_ok(page, compound_order(page));
+	__free_pages_ok(page, compound_order(page), FPI_NONE);
 }
 
 void prep_compound_page(struct page *page, unsigned int order)
@@ -1423,14 +1424,14 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 static void free_one_page(struct zone *zone,
 				struct page *page, unsigned long pfn,
 				unsigned int order,
-				int migratetype)
+				int migratetype, fpi_t fpi_flags)
 {
 	spin_lock(&zone->lock);
 	if (unlikely(has_isolate_pageblock(zone) ||
 		is_migrate_isolate(migratetype))) {
 		migratetype = get_pfnblock_migratetype(page, pfn);
 	}
-	__free_one_page(page, pfn, zone, order, migratetype, FPI_NONE);
+	__free_one_page(page, pfn, zone, order, migratetype, fpi_flags);
 	spin_unlock(&zone->lock);
 }
 
@@ -1508,7 +1509,8 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
 	}
 }
 
-static void __free_pages_ok(struct page *page, unsigned int order)
+static void __free_pages_ok(struct page *page, unsigned int order,
+			    fpi_t fpi_flags)
 {
 	unsigned long flags;
 	int migratetype;
@@ -1520,7 +1522,8 @@ static void __free_pages_ok(struct page *page, unsigned int order)
 	migratetype = get_pfnblock_migratetype(page, pfn);
 	local_irq_save(flags);
 	__count_vm_events(PGFREE, 1 << order);
-	free_one_page(page_zone(page), page, pfn, order, migratetype);
+	free_one_page(page_zone(page), page, pfn, order, migratetype,
+		      fpi_flags);
 	local_irq_restore(flags);
 }
 
@@ -1530,6 +1533,11 @@ void __free_pages_core(struct page *page, unsigned int order)
 	struct page *p = page;
 	unsigned int loop;
 
+	/*
+	 * When initializing the memmap, __init_single_page() sets the refcount
+	 * of all pages to 1 ("allocated"/"not free"). We have to set the
+	 * refcount of all involved pages to 0.
+	 */
 	prefetchw(p);
 	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
 		prefetchw(p + 1);
@@ -1540,8 +1548,12 @@ void __free_pages_core(struct page *page, unsigned int order)
 	set_page_count(p, 0);
 
 	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
-	set_page_refcounted(page);
-	__free_pages(page, order);
+
+	/*
+	 * Bypass PCP and place fresh pages right to the tail, primarily
+	 * relevant for memory onlining.
+	 */
+	__free_pages_ok(page, order, FPI_TO_TAIL);
 }
 
 #ifdef CONFIG_NEED_MULTIPLE_NODES
@@ -3168,7 +3180,8 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	 */
 	if (migratetype >= MIGRATE_PCPTYPES) {
 		if (unlikely(is_migrate_isolate(migratetype))) {
-			free_one_page(zone, page, pfn, 0, migratetype);
+			free_one_page(zone, page, pfn, 0, migratetype,
+				      FPI_NONE);
 			return;
 		}
 		migratetype = MIGRATE_MOVABLE;
@@ -4991,7 +5004,7 @@ static inline void free_the_page(struct page *page, unsigned int order)
 	if (order == 0)		/* Via pcp? */
 		free_unref_page(page);
 	else
-		__free_pages_ok(page, order);
+		__free_pages_ok(page, order, FPI_NONE);
 }
 
 void __free_pages(struct page *page, unsigned int order)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 12:16:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 12:16:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.2995.8618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPQC-00078S-Mn; Mon, 05 Oct 2020 12:16:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 2995.8618; Mon, 05 Oct 2020 12:16:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPQC-00078J-Ii; Mon, 05 Oct 2020 12:16:52 +0000
Received: by outflank-mailman (input) for mailman id 2995;
 Mon, 05 Oct 2020 12:16:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1kPPQA-00077V-Ii
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:50 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 131a8c6c-be8f-4871-85bd-6525d8b902ee;
 Mon, 05 Oct 2020 12:16:49 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-70-7gZGHkSGNu2LzffQTpddTg-1; Mon, 05 Oct 2020 08:16:45 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 887B4801AB1;
 Mon,  5 Oct 2020 12:16:43 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A40231A8EC;
 Mon,  5 Oct 2020 12:16:36 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=c+yv=DM=redhat.com=david@srs-us1.protection.inumbo.net>)
	id 1kPPQA-00077V-Ii
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:16:50 +0000
X-Inumbo-ID: 131a8c6c-be8f-4871-85bd-6525d8b902ee
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 131a8c6c-be8f-4871-85bd-6525d8b902ee;
	Mon, 05 Oct 2020 12:16:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1601900209;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0AsZOi2g9AlkyekUiDSDSjeyhdmKjvuUsI97IJKTuTs=;
	b=ic+D7sC4ZfZuNEKbtQv8oeu8IIy37KaR3P49BBhgbeAb1izamQ96ydf5IN7P+HF2h0Yi1U
	YpsIfSCyeKtKWPPL29HKNoM239QISArroBWsyMHU5JhCItKUPl3Hw3rNckzGdnZF7+01GW
	w5rM/BsDliBg6j4nriLC1X00qZt5xRQ=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-70-7gZGHkSGNu2LzffQTpddTg-1; Mon, 05 Oct 2020 08:16:45 -0400
X-MC-Unique: 7gZGHkSGNu2LzffQTpddTg-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com [10.5.11.23])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 887B4801AB1;
	Mon,  5 Oct 2020 12:16:43 +0000 (UTC)
Received: from t480s.redhat.com (ovpn-114-222.ams2.redhat.com [10.36.114.222])
	by smtp.corp.redhat.com (Postfix) with ESMTP id A40231A8EC;
	Mon,  5 Oct 2020 12:16:36 +0000 (UTC)
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	David Hildenbrand <david@redhat.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Michal Hocko <mhocko@suse.com>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Michal Hocko <mhocko@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Vlastimil Babka <vbabka@suse.cz>,
	Oscar Salvador <osalvador@suse.de>,
	Mike Rapoport <rppt@kernel.org>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Subject: [PATCH v2 5/5] mm/memory_hotplug: update comment regarding zone shuffling
Date: Mon,  5 Oct 2020 14:15:34 +0200
Message-Id: <20201005121534.15649-6-david@redhat.com>
In-Reply-To: <20201005121534.15649-1-david@redhat.com>
References: <20201005121534.15649-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23

As we no longer shuffle via generic_online_page() and when undoing
isolation, we can simplify the comment.

We now effectively shuffle only once (properly) when onlining new
memory.

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/memory_hotplug.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 03a00cb68bf7..b44d4c7ba73b 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -858,13 +858,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
 
 	/*
-	 * When exposing larger, physically contiguous memory areas to the
-	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
-	 * them either to the head or the tail of the freelist) is only helpful
-	 * for maintaining the shuffle, but not for creating the initial
-	 * shuffle. Shuffle the whole zone to make sure the just onlined pages
-	 * are properly distributed across the whole freelist. Make sure to
-	 * shuffle once pageblocks are no longer isolated.
+	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
+	 * the tail of the freelist when undoing isolation). Shuffle the whole
+	 * zone to make sure the just onlined pages are properly distributed
+	 * across the whole freelist - to create an initial shuffle.
 	 */
 	shuffle_zone(zone);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 12:23:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 12:23:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3003.8630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPWy-0008Cl-HT; Mon, 05 Oct 2020 12:23:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3003.8630; Mon, 05 Oct 2020 12:23:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPPWy-0008Ce-CE; Mon, 05 Oct 2020 12:23:52 +0000
Received: by outflank-mailman (input) for mailman id 3003;
 Mon, 05 Oct 2020 12:23:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sJhL=DM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kPPWx-0008CZ-F9
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:23:51 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 163ce370-4378-4584-b0e3-3a2dffed9886;
 Mon, 05 Oct 2020 12:23:50 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=sJhL=DM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kPPWx-0008CZ-F9
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 12:23:51 +0000
X-Inumbo-ID: 163ce370-4378-4584-b0e3-3a2dffed9886
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 163ce370-4378-4584-b0e3-3a2dffed9886;
	Mon, 05 Oct 2020 12:23:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601900630;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=cHQjS2YzfY3hmqPMISaok0cpm0uhyxwRBIJfBGmxJsw=;
  b=BCfGzguTV3l5y/oSklJ12c+Ygs1iJ6L+HpEWABT4PIeXACOeawZwtMi3
   o3ZHaAI/w9L60i3bDFEpH7QruUo1LktQWciIQ/kdeB+fZh72VqKohSp5J
   3SroHXozak8eGhw9OO8qTdT+qBWNXuNUjS5GqaQ5pLcq6GjG9gVfv4tZ5
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: d2th30ZShL4DN5vYYRgtxC0Yv4Aax/X3IKRTgJq/QXY6dHdrsYq/yToTuvk/NA84G/66bBtsC8
 tl/287QdgUalR1aWls71VQGtfxHJ7mwFCXh0Scmtp8HZkPqKpEPA2BlV9tAKxDRK9acNNna6bN
 /xkjuScLwo4bQwwayH4nCo56OrCm4ZxU/yUIBGUkBfE6ncmr8cYsyN5vSSjmif78BAG4g75YGg
 wQ/Z9f5t97PPqffNn5VTglr9nsJC5eFtKjQEi6qZuLSM4BoQA9H6YQt+HPYhvg+gJ80dNWpjdA
 Un4=
X-SBRS: None
X-MesageID: 29301968
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,338,1596513600"; 
   d="scan'208";a="29301968"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/smpboot: Unconditionally call memguard_unguard_stack() in cpu_smpboot_free()
Date: Mon, 5 Oct 2020 13:23:25 +0100
Message-ID: <20201005122325.17395-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

For simplicity between various configurations, Xen always uses shadow stack
mappings (Read-only + Dirty) for the guard page, irrespective of whether
CET-SS is enabled.

memguard_guard_stack() writes shadow stack tokens with plain writes.  This is
necessary to configure the BSP shadow stack correctly, and cannot be
implemented with WRSS.

Therefore, unconditionally call memguard_unguard_stack() to restore the
mappings to fully writable, so that a subsequent call to memguard_guard_stack()
will succeed.

Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

This can more easily be demonstrated with CPU hotplug than S3, and the absence
of bug reports goes to show how rarely hotplug is used.
---
 xen/arch/x86/smpboot.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 5708573c41..c193cc0fb8 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -971,16 +971,16 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
     if ( IS_ENABLED(CONFIG_PV32) )
         FREE_XENHEAP_PAGE(per_cpu(compat_gdt, cpu));
 
+    if ( stack_base[cpu] )
+        memguard_unguard_stack(stack_base[cpu]);
+
     if ( remove )
     {
         FREE_XENHEAP_PAGE(per_cpu(gdt, cpu));
         FREE_XENHEAP_PAGE(idt_tables[cpu]);
 
         if ( stack_base[cpu] )
-        {
-            memguard_unguard_stack(stack_base[cpu]);
             FREE_XENHEAP_PAGES(stack_base[cpu], STACK_ORDER);
-        }
     }
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 13:39:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 13:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3006.8642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPQiB-0006EP-6A; Mon, 05 Oct 2020 13:39:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3006.8642; Mon, 05 Oct 2020 13:39:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPQiB-0006EI-2K; Mon, 05 Oct 2020 13:39:31 +0000
Received: by outflank-mailman (input) for mailman id 3006;
 Mon, 05 Oct 2020 13:39:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TAZG=DM=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
 id 1kPQi9-0006ED-Co
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 13:39:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f75e4ae2-b7b1-417e-b2fe-1f6d676adb8a;
 Mon, 05 Oct 2020 13:39:28 +0000 (UTC)
Received: from localhost.localdomain (NE2965lan1.rev.em-net.ne.jp
 [210.141.244.193])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4CD0420756;
 Mon,  5 Oct 2020 13:39:25 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TAZG=DM=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
	id 1kPQi9-0006ED-Co
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 13:39:29 +0000
X-Inumbo-ID: f75e4ae2-b7b1-417e-b2fe-1f6d676adb8a
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f75e4ae2-b7b1-417e-b2fe-1f6d676adb8a;
	Mon, 05 Oct 2020 13:39:28 +0000 (UTC)
Received: from localhost.localdomain (NE2965lan1.rev.em-net.ne.jp [210.141.244.193])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 4CD0420756;
	Mon,  5 Oct 2020 13:39:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601905167;
	bh=CKM1wiIeFzqvHmo0o+SIVvTlYWd5wGgfK9tUsLJKEWI=;
	h=From:To:Cc:Subject:Date:From;
	b=daqmAbFNxPj/Sj6BaL6WyxNtCjQtE1j7egFu2nYTyIUjjP3v53bMT4MCmakcMkACm
	 x+QEf4FkSlYnEGwp7xXuqSm7TP8Iwu/6WJY5h42zPj57Xp/nUct9Vzb0lXlNVHgNFX
	 i1nfKAtvv4K8Dc1qIDXDRjdvfo8SyCXodd0YUUKw=
From: Masami Hiramatsu <mhiramat@kernel.org>
To: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	takahiro.akashi@linaro.org
Subject: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn correctly
Date: Mon,  5 Oct 2020 22:39:20 +0900
Message-Id: <160190516028.40160.9733543991325671759.stgit@devnote2>
X-Mailer: git-send-email 2.25.1
User-Agent: StGit/0.19
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
address conversion.

In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
assumes the given virtual address is in the contiguous kernel memory
area, it cannot convert the per-cpu memory if it is allocated in the
vmalloc area (which depends on CONFIG_SMP).

Without this fix, the Dom0 kernel will fail to boot with the following
errors:

[    0.466172] Xen: initializing cpu0
[    0.469601] ------------[ cut here ]------------
[    0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180
[    0.484435] Modules linked in:
[    0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4
[    0.493895] Hardware name: Socionext Developer Box (DT)
[    0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.504836] pc : xen_starting_cpu+0x160/0x180
[    0.509263] lr : xen_starting_cpu+0xb0/0x180
[    0.513599] sp : ffff8000116cbb60
[    0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.522366] x27: 0000000000000000 x26: 0000000000000000
[    0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.533129] x23: 0000000000000000 x22: 0000000000000000
[    0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.543892] x19: ffff8000113a9988 x18: 0000000000000010
[    0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.554655] x15: ffffffffffffffff x14: 0720072007200720
[    0.560037] x13: 0720072007200720 x12: 0720072007200720
[    0.565418] x11: 0720072007200720 x10: 0720072007200720
[    0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.586945] x3 : 0000000000000000 x2 : ffff80000abec000
[    0.592327] x1 : 000000000000002f x0 : 0000800000000000
[    0.597716] Call trace:
[    0.600232]  xen_starting_cpu+0x160/0x180
[    0.604309]  cpuhp_invoke_callback+0xac/0x640
[    0.608736]  cpuhp_issue_call+0xf4/0x150
[    0.612728]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.618030]  __cpuhp_setup_state+0x84/0xf8
[    0.622192]  xen_guest_init+0x324/0x364
[    0.626097]  do_one_initcall+0x54/0x250
[    0.630003]  kernel_init_freeable+0x12c/0x2c8
[    0.634428]  kernel_init+0x1c/0x128
[    0.637988]  ret_from_fork+0x10/0x18
[    0.641635] ---[ end trace d95b5309a33f8b27 ]---
[    0.646337] ------------[ cut here ]------------
[    0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158!
[    0.657697] Internal error: Oops - BUG: 0 [#1] SMP
[    0.662548] Modules linked in:
[    0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.9.0-rc4+ #4
[    0.673398] Hardware name: Socionext Developer Box (DT)
[    0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.684338] pc : xen_starting_cpu+0x178/0x180
[    0.688765] lr : xen_starting_cpu+0x144/0x180
[    0.693188] sp : ffff8000116cbb60
[    0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.701955] x27: 0000000000000000 x26: 0000000000000000
[    0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.712718] x23: 0000000000000000 x22: 0000000000000000
[    0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.723481] x19: ffff8000113a9988 x18: 0000000000000010
[    0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.734245] x15: ffffffffffffffff x14: 0720072007200720
[    0.739626] x13: 0720072007200720 x12: 0720072007200720
[    0.745008] x11: 0720072007200720 x10: 0720072007200720
[    0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef
[    0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea
[    0.777304] Call trace:
[    0.779819]  xen_starting_cpu+0x178/0x180
[    0.783898]  cpuhp_invoke_callback+0xac/0x640
[    0.788325]  cpuhp_issue_call+0xf4/0x150
[    0.792317]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.797619]  __cpuhp_setup_state+0x84/0xf8
[    0.801779]  xen_guest_init+0x324/0x364
[    0.805683]  do_one_initcall+0x54/0x250
[    0.809590]  kernel_init_freeable+0x12c/0x2c8
[    0.814016]  kernel_init+0x1c/0x128
[    0.817583]  ret_from_fork+0x10/0x18
[    0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000)
[    0.827415] ---[ end trace d95b5309a33f8b28 ]---
[    0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

Fixes: 250c9af3d831 ("arm/xen: Add support for 64KB page granularity")
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
---
 arch/arm/xen/enlighten.c |    2 +-
 include/xen/arm/page.h   |    3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index e93145d72c26..a6ab3689b2f4 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
 
-	info.mfn = virt_to_gfn(vcpup);
+	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
 
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index 39df751d0dc4..ac1b65470563 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
 	})
 #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
 
+#define percpu_to_gfn(v)	\
+	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
+
 /* Only used in PV code. But ARM guests are always HVM. */
 static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 {



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 13:42:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 13:42:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3009.8656 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPQlT-00072c-Nj; Mon, 05 Oct 2020 13:42:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3009.8656; Mon, 05 Oct 2020 13:42:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPQlT-00072V-Il; Mon, 05 Oct 2020 13:42:55 +0000
Received: by outflank-mailman (input) for mailman id 3009;
 Mon, 05 Oct 2020 13:42:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sJhL=DM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kPQlS-00072Q-ID
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 13:42:54 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1906c677-7ae0-4d89-9a9e-f6c5f2bb03c9;
 Mon, 05 Oct 2020 13:42:52 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=sJhL=DM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kPQlS-00072Q-ID
	for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 13:42:54 +0000
X-Inumbo-ID: 1906c677-7ae0-4d89-9a9e-f6c5f2bb03c9
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1906c677-7ae0-4d89-9a9e-f6c5f2bb03c9;
	Mon, 05 Oct 2020 13:42:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601905372;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=w01SC9C/nCOxtEay1oyxna3Ur1GH+Q+tk1QpjWTLj8Y=;
  b=O8+j8ysOaUm48UfiA/W3J8uJSwd9Qwcacl0ETrFyOdjpZf9JAVW/ubpu
   7PRXC6meF7KFpczyz749gTom7dYHKPKYJ6LvH/gJ1dRgfPwc2bwoQJKzw
   1QOTi9aVyG3tprmvoEOD9IudGAOxKnGbWJ6LESGDmSQ7YBnaOtDDjR3nU
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: lHy3J9jzmyQD+2Rml7BJaQ8PRQyjTSZ8zWj+ZPxzN7PDuStYh6tINEfp79QtkbIB1mD28Hqvf2
 cpukhGcQQ+ZQAG6GM4vScngz4XmjbFWHkAK6HKpJHa96CwMHoH/bQoCmQ9/p0YxrU1TL3VNBzl
 kwZmTJ1lFeE3NNzGjCIhZk8XTvNbCNcdWxGExy9hX06O2zndBfibQIlmH6w4nC8EU45S8ilE3C
 Uj9gOAjybWL5t+BOD/9bPWOlkHM4TMcDiunb43Wtw0ahNt+oloklbDahpl++SqRTsBAHcydX41
 NVs=
X-SBRS: None
X-MesageID: 29310585
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,338,1596513600"; 
   d="scan'208";a="29310585"
Subject: Re: [PATCH RFC] docs: Add minimum version depencency policy document
To: George Dunlap <George.Dunlap@citrix.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<Ian.Jackson@citrix.com>, Wei Liu <wl@xen.org>, Jan Beulich
	<jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>, Rich Persaud <persaur@gmail.com>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>
References: <20200930125736.95203-1-george.dunlap@citrix.com>
 <868b25bd-ab2c-7f33-1dc2-9476c86d8050@citrix.com>
 <F65DC414-FFA4-4990-84FF-A94503B38F3A@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <351673c4-c3a0-a1c4-5738-2339b698417e@citrix.com>
Date: Mon, 5 Oct 2020 14:41:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <F65DC414-FFA4-4990-84FF-A94503B38F3A@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 01/10/2020 15:50, George Dunlap wrote:

>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>> ---
>>>
>>> CC: Ian Jackson <ian.jackson@citrix.com>
>>> CC: Wei Liu <wl@xen.org>
>>> CC: Andrew Cooper <andrew.cooper3@citrix.com>
>>> CC: Jan Beulich <jbeulich@suse.com>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> CC: Julien Grall <julien@xen.org>
>>> CC: Rich Persaud <persaur@gmail.com>
>>> CC: Bertrand Marquis <Bertrand.Marquis@arm.com>
>>> ---
>>> docs/index.rst                        |  2 +
>>> docs/policies/dependency-versions.rst | 76 +++++++++++++++++++++++++++
>>> 2 files changed, 78 insertions(+)
>>> create mode 100644 docs/policies/dependency-versions.rst
>>>
>>> diff --git a/docs/index.rst b/docs/index.rst
>>> index b75487a05d..ac175eacc8 100644
>>> --- a/docs/index.rst
>>> +++ b/docs/index.rst
>>> @@ -57,5 +57,7 @@ Miscellanea
>>> -----------
>>>
>>> .. toctree::
>>> +   :maxdepth: 1
>>>
>>> +   policies/dependency-versions
>> I think it is great that this is going into Sphinx.
>>
>> However, I'd prefer to avoid proliferating random things at the top
>> level, to try and keep everything in a coherent structure.
> I was hoping for your feedback on where to put this. :-)
>
>> For better or worse, I guestimated at "admin guide" (end user and
>> sysadmin guide), "guest docs" (VM ABI, and guest kernel developers), and
>> "hypervisors docs" (hacking Xen).
> Is “hypervisor” in this sense meant to mean the actual hypervisor (xen.git/xen), or the whole hypervisor system (i.e., everything in xen.git)?

"yes".  If it seems like I'm making this up as I go along, then perhaps
it's because I am.

> It seems to me that we need something like the latter; in which case maybe we should change that section to “developer documentation” or something, with “hypervisor” as a section under that?

My gut feeling is that "developing the hypervisor" is different enough
from "developing the toolstack" that there will be little overlap in
content.

>
>> I'm happy to shuffle the dividing lines if a better arrangement becomes
>> obvious.  This particular doc logically lives with "building Xen from
>> source".
> I don’t see a “building Xen from source” section (except for Hypervisor Documentation/Code Coverage/Compiling Xen, which is obviously specific to code coverage).

There isn't one yet.

> If the main target of the page is to tell admins / downstreams what distros we support, then it might go under “Admin Guide” somewhere.  If the main target is to tell developers what versions they have to support / don’t have to support, then putting it under a newly-created “developer documentation” section would probably make the most sense.
>
> I think I’d go with the latter, if you’re OK with it.

I don't think this page is applicable to downstreams.  They've already
got packages and have figured out their own support.

This is purely a statement of what we (upstream) expect/check, which
will inform developers wishing to work on master.

>
>
>>>    glossary
>>> diff --git a/docs/policies/dependency-versions.rst b/docs/policies/dependency-versions.rst
>>> new file mode 100644
>>> index 0000000000..d5eeb848d8
>>> --- /dev/null
>>> +++ b/docs/policies/dependency-versions.rst
>>> @@ -0,0 +1,76 @@
>>> +.. SPDX-License-Identifier: CC-BY-4.0
>>> +
>>> +Build and runtime dependencies
>>> +==============================
>>> +
>>> +Xen depends on other programs and libraries to build and to run.
>>> +Choosing a minimum version of these tools to support requires a careful
>>> +balance: Supporting older versions of these tools or libraries means
>>> +that Xen can compile on a wider variety of systems; but means that Xen
>>> +cannot take advantage of features available in newer versions.
>>> +Conversely, requiring newer versions means that Xen can take advantage
>>> +of newer features, but cannot work on as wide a variety of systems.
>>> +
>>> +Specific dependencies and versions for a given Xen release will be
>>> +listed in the toplevel README, and/or specified by the ``configure``
>>> +system.  This document lays out the principles by which those versions
>>> +should be chosen.
>>> +
>>> +The general principle is this:
>>> +
>>> +    Xen should build on currently-supported versions of major distros
>>> +    when released.
>>> +
>>> +"Currently-supported" means whatever that distro considers "full
>>> +support".  For instance, at the time of writing, CentOS 7 and 8 are
>>> +listed as being given "Full Updates", but CentOS 6 is listed as
>>> +"Maintenance updates"; under this criterion, we would try to ensure
>>> +that Xen could build on CentOS 7 and 8, but not on CentOS 6.
>>> +
>>> +Exceptions for specific distros or tools may be made when appropriate.
>>> +
>>> +One exception to this is compiler versions for the hypervisor.
>>> +Support for new instructions, and in particular support for new safety
>>> +features, may require a newer compiler than many distros support.
>>> +These will be specified in the README.
>> The problem we have is that xen.git contains two very different things.
>> There is the hypervisor itself, which is embedded, and can easily be
>> cross compiled, and there is the content of tools/ which depends on a
>> lot of distro infrastructure.
>>
>> We expect tools/ to work in any supported distro, without having to do
>> weird toolchain gymnastics.
>>
>> For xen/ at the moment we have a very obsolete toolchain requirements,
>> and this is holding us back in some areas.  We're looking to bring that
>> forward, and may consider that being newer than some of the old distros
>> is necessary.
>>
>> At the moment however, we have quite a lot of functionality which is
>> dependent on being able to detect a suitable toolchain.  GCOV and CET-SS
>> are examples.  These features will turn themselves off in older distros,
>> so while you can "build" Xen that far back, you might not get everything.
>>
>> For CET in particular, there is no feasible way to support it on older
>> toolchains.  (unless someone comes up with an extremely convincing way
>> of hand-crafting memory operands using raw .byte's in inline assembler.)
>>
>> I definitely don't think it is unreasonable for us to require the use of
>> (potentially) bleeding edge toolchains if people want to use (potentially)
>> bleeding edge features.  CET-SS isn't bleeding edge any more, but
>> CET-IBT is due to the additional linker work required to make it
>> function.  A future one which we need to do something about is Control
>> Flow Integrity, which is Clang specific, depends on LTO, and caused
>> Linux to up their minimum supported version to 10.0.1 which was when all
>> the bugfixes got merged.
> You seem to be explaining why I wrote this paragraph.  Did you have any specific changes you wanted to make? :-)

I don't think the paragraph as written gets this point across.

Even from the first sentence, Xen (the hypervisor) doesn't depend on
external libraries, whereas Xen (the content of xen.git) does.

>
>>> +
>>> +Distros we consider when deciding minimum versions
>>> +--------------------------------------------------
>>> +
>>> +We currently aim to support Xen building and running on the following distributions:
>>> +Debian_,
>>> +Ubuntu_,
>>> +OpenSUSE_,
>>> +Arch Linux,
>> No link for Arch?
> The link points to the page describing the release lifecycles; Arch doesn’t really have that concept (as noted in the next section).
>
> I could make it so that links to the release lifecycle page is in the table below instead.
>
>>> +SLES_,
>>> +Yocto_,
>>> +CentOS_,
>>> +and RHEL_.
>>> +
>>> +.. _Debian: https://www.debian.org/releases/
>>> +.. _Ubuntu: https://wiki.ubuntu.com/Releases
>>> +.. _OpenSUSE: https://en.opensuse.org/Lifetime
>>> +.. _SLES: https://www.suse.com/lifecycle/
>>> +.. _Yocto: https://wiki.yoctoproject.org/wiki/Releases
>>> +.. _CentOS: https://wiki.centos.org/About/Product
>>> +.. _RHEL: https://access.redhat.com/support/policy/updates/errata
>>> +
>>> +Specific distro versions supported in this release
>>> +--------------------------------------------------
>>> +
>>> +======== ==================
>>> +Distro   Supported releases
>>> +======== ==================
>>> +Debian   10 (Buster)
>>> +Ubuntu   20.10 (Groovy Gorilla), 20.04 (Focal Fossa), 18.04 (Bionic Beaver), 16.04 (Xenial Xerus)
>>> +OpenSUSE Leap 15.2
>>> +SLES     SLES 11, 12, 15
>>> +Yocto    3.1 (Dunfell)
>>> +CentOS   8
>>> +RHEL     8
>>> +======== ==================
>> How about a 3rd column for "supported until" ?  It would stop this page
>> becoming stale simply over time.
> If we did that, it would make the table longer, as we’d have a separate row for each distro release rather than each distro.
>
> The release manager needs to look at this table before the release; for that they’ll have to go to the release lifecycle page of the various distros anyway, to pick up new versions of the distro.  So I don’t think having the date here adds that much.

Irrespective of the content of the table, I'd recommend Sphinx's
list-table construct (see the example
docs/guest-guide/x86/hypercall-abi.rst).  This is deliberately more
amenable to diffing when changes are made.
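
For reference, a minimal list-table sketch in that style (illustrative rows
and dates only, not the real support data) would be:

```rst
.. list-table:: Supported distro releases
   :header-rows: 1

   * - Distro
     - Supported releases
     - Supported until
   * - Debian
     - 10 (Buster)
     - (illustrative date)
   * - CentOS
     - 8
     - (illustrative date)
```

Each row is its own block, so adding or dropping a release produces a
localised diff rather than rewriting the whole grid.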

Also I need to refresh one of my patches to add another extension for
hyperlinks.


The table on its own isn't terribly helpful, and will go stale.  The
point of adding a 3rd column is so people don't have to click through
to every distro page to find out whether the content of this page is
still correct.

I'd also recommend merging the hyperlinks into the first column of the
table as a more obvious place to have the links, rather than in a line
of text.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 14:08:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 14:08:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3014.8670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRA7-0000iq-Sj; Mon, 05 Oct 2020 14:08:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3014.8670; Mon, 05 Oct 2020 14:08:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRA7-0000ij-Pc; Mon, 05 Oct 2020 14:08:23 +0000
Received: by outflank-mailman (input) for mailman id 3014;
 Mon, 05 Oct 2020 14:08:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GPX8=DM=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kPRA6-0000ie-3B
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 14:08:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea56f40b-9f0c-44bd-9d56-752744813c9a;
 Mon, 05 Oct 2020 14:08:20 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPRA3-0006f0-Pn; Mon, 05 Oct 2020 14:08:19 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kPRA3-0000wo-Eg; Mon, 05 Oct 2020 14:08:19 +0000
X-Inumbo-ID: ea56f40b-9f0c-44bd-9d56-752744813c9a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=MOoi/fIdE/N1b0HiPhwIxwEbGENOEIvnmQORtzBPKrc=; b=BCPD4g4cob9Cr49spAg5r6k5S+
	fDlMTtR7YWLvs0I+MrP1Dt+h2r46m6VOLRoR5VrM8QoWKUj2UEe3uXhDgNx6i6yRfEg+Axu0utKaW
	bULpO1X59X9qN72gxS5nFOmofjVWlR2x9JfhcL5tzJgfNb/VfVx0acwUiiOxVmhIxn2A=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Julien Grall <julien@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] ioreq: cope with server disappearing while I/O is pending
Date: Mon,  5 Oct 2020 15:08:17 +0100
Message-Id: <20201005140817.1339-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Currently, in the event of an ioreq server being destroyed while I/O is
pending in the attached emulator, it is possible that hvm_wait_for_io() will
dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
shared page after it has been freed.

This will only occur if the emulator (which is necessarily running in a
service domain with some degree of privilege) does not complete pending I/O
during tear-down and is not directly exploitable by a guest domain.

This patch adds a call to get_pending_vcpu() into the condition of the
wait_on_xen_event_channel() macro to verify the continued existence of the
ioreq server. Should it disappear, the guest domain will be crashed.
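
The re-check-on-wakeup pattern can be sketched in a deliberately simplified,
stand-alone C analogue. All names below are invented for illustration only:
lookup_server() stands in for get_pending_vcpu(), the loop stands in for
wait_on_xen_event_channel(), and the racing events are simulated by iteration
counts rather than real concurrency.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Simplified analogue of the fix: a waiter must not cache a pointer to
 * state owned by a "server" across a blocking wait, because the server
 * may be torn down while the waiter is blocked. Instead, re-look the
 * server up on every wakeup and bail out if it has disappeared.
 */

struct server { int io_pending; };

static struct server *registered;        /* NULL once the server is destroyed */

/* Stand-in for get_pending_vcpu(): re-validate on every wakeup. */
static struct server *lookup_server(void)
{
    return registered;
}

/*
 * Returns true if the I/O completed, false if the server disappeared
 * mid-wait (the point where Xen would instead crash the guest domain).
 * destroy_at / complete_at simulate the racing events by iteration
 * count; a negative value means "never happens".
 */
static bool wait_for_io(int destroy_at, int complete_at)
{
    for ( int i = 0; ; i++ )
    {
        if ( i == destroy_at )
            registered = NULL;           /* tear-down races with the waiter */
        if ( i == complete_at && registered != NULL )
            registered->io_pending = 0;  /* emulator completes the I/O */

        struct server *sv = lookup_server(); /* never reuse a stale pointer */
        if ( sv == NULL )
            return false;                /* server has disappeared: bail */
        if ( sv->io_pending == 0 )
            return true;                 /* I/O completed normally */
    }
}
```

Under either made-up schedule the waiter observes completion or notices the
disappearance, rather than dereferencing freed memory.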

NOTE: take the opportunity to modify the text of one gdprintk() for
      consistency with others.

Reported-by: Julien Grall <julien@xen.org>
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hvm/ioreq.c | 30 ++++++++++++++++++++++--------
 1 file changed, 22 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df87f..e8b97cd30c 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -115,6 +115,7 @@ bool hvm_io_pending(struct vcpu *v)
 
 static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 {
+    struct vcpu *v = sv->vcpu;
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
     uint64_t data = ~0;
@@ -132,7 +133,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
             gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
                      prev_state, state);
             sv->pending = false;
-            domain_crash(sv->vcpu->domain);
+            domain_crash(v->domain);
             return false; /* bail */
         }
 
@@ -145,23 +146,36 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
 
         case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
         case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(sv->ioreq_evtchn,
-                                      ({ state = p->state;
-                                         smp_rmb();
-                                         state != prev_state; }));
+            /*
+             * NOTE: The ioreq server may have been destroyed whilst the
+             *       vcpu was blocked so re-acquire the pointer to
+             *       hvm_ioreq_vcpu to check this condition.
+             */
+            wait_on_xen_event_channel(
+                sv->ioreq_evtchn,
+                ({ sv = get_pending_vcpu(v, NULL);
+                   state = sv ? p->state : STATE_IOREQ_NONE;
+                   smp_rmb();
+                   state != prev_state; }));
+            if ( !sv )
+            {
+                gdprintk(XENLOG_ERR, "HVM ioreq server has disappeared\n");
+                domain_crash(v->domain);
+                return false; /* bail */
+            }
             continue;
 
         default:
-            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
+            gdprintk(XENLOG_ERR, "Weird HVM ioreq state %u\n", state);
             sv->pending = false;
-            domain_crash(sv->vcpu->domain);
+            domain_crash(v->domain);
             return false; /* bail */
         }
 
         break;
     }
 
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    p = &v->arch.hvm.hvm_io.io_req;
     if ( hvm_ioreq_needs_completion(p) )
         p->data = data;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 14:10:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 14:10:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3018.8684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRCY-0001YT-Bh; Mon, 05 Oct 2020 14:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3018.8684; Mon, 05 Oct 2020 14:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRCY-0001YM-8Y; Mon, 05 Oct 2020 14:10:54 +0000
Received: by outflank-mailman (input) for mailman id 3018;
 Mon, 05 Oct 2020 14:10:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPRCW-0001Xe-TG
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 14:10:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ae587e6-420f-4b47-841d-6469b9b6e68f;
 Mon, 05 Oct 2020 14:10:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPRCP-0006hI-91; Mon, 05 Oct 2020 14:10:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPRCP-0005i0-19; Mon, 05 Oct 2020 14:10:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPRCO-0006a3-W3; Mon, 05 Oct 2020 14:10:44 +0000
X-Inumbo-ID: 2ae587e6-420f-4b47-841d-6469b9b6e68f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=67Tm5QH0vz9rAhRIU/sCkrOPzU+cDekzK1TLSO+LcMs=; b=PtblwZfQBm9ljmWPBrso7kuKKX
	Mic7P/9mTUXhbk/oYRnRKUD4garWRr0Ptf3lyV6Hu3+y8IMdsD2nJ+NC0jwrKlYFvAzNNAE9nYNKW
	yGV7ua9EGSzCrShHm9gXEzOxXZlD1fIEURVUT3DM6hPNPgdtV3pSGeKm+sV2zXpcSRQA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155417-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 155417: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=c93b520a41f2787dd76bfb2e454836d1d5787505
X-Osstest-Versions-That:
    xen=28855ebcdbfa437e60bc16c761405476fe16bc39
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 05 Oct 2020 14:10:44 +0000

flight 155417 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155417/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  c93b520a41f2787dd76bfb2e454836d1d5787505
baseline version:
 xen                  28855ebcdbfa437e60bc16c761405476fe16bc39

Last test of basis   154350  2020-09-15 00:36:14 Z   20 days
Failing since        154617  2020-09-22 14:37:47 Z   12 days    7 attempts
Testing same since   155417  2020-10-04 02:29:19 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Don Slutz <don.slutz@gmail.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   28855ebcdb..c93b520a41  c93b520a41f2787dd76bfb2e454836d1d5787505 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 14:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 14:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3038.8767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRVm-0003qA-1M; Mon, 05 Oct 2020 14:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3038.8767; Mon, 05 Oct 2020 14:30:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRVl-0003q3-Uf; Mon, 05 Oct 2020 14:30:45 +0000
Received: by outflank-mailman (input) for mailman id 3038;
 Mon, 05 Oct 2020 14:30:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sJhL=DM=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kPRVk-0003ps-KL
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 14:30:44 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 41d812fb-380f-44ab-b1f5-bb7111cc2032;
 Mon, 05 Oct 2020 14:30:42 +0000 (UTC)
X-Inumbo-ID: 41d812fb-380f-44ab-b1f5-bb7111cc2032
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1601908242;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=1PbzvOMyYiCA8HRp1C2Z5uZAn7tZtAPSW1EUWLpxzK0=;
  b=hVyhn+aYlQpGfCUSVGyCD0ma5ZhNxQdpjcgwwXbmtfg2y2dry4IxEg3M
   T44guurPniKiu7c3k2dBUeDnQ/algODYKRHK1ISgmI7Es+vr9hTaXHB5C
   JTWjrjxa3lq4t/50QUQFYPw9PGOVoOupNm7nGnztMbCERw6xF5r4a0h6Z
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 5NfeUyMmJ7ZCSCc4Fjo7RwASt7DO584VaS/tUzGSR6l0q66wM8YRzbBMBc/L4dr2ZoGzHV9K46
 gpsz0C26MsWRARo4jm13Etv27veFRYpEFRaNwvKMypOjcejxw25b4Xl4yLjTywhvio334XnXlL
 THNQSe581rWt1w/fQ8HNW7j1p1giOYe0VNQ2SkUuG585k7XPCTheLeSzYgor0iezSCp15HH8lg
 9AkI+3GYGCTyAvJt8iyoMCxNSNm2fIjS2oAgOGiL6oIUJgO1xrPEr7AA2dR05Yibx6m53qvvcc
 wV4=
X-SBRS: None
X-MesageID: 28297677
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,338,1596513600"; 
   d="scan'208";a="28297677"
Subject: Re: [PATCH] ioreq: cope with server disappearing while I/O is pending
To: Paul Durrant <paul@xen.org>, <xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Julien Grall <julien@xen.org>, "Jan
 Beulich" <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201005140817.1339-1-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <84dea7c2-cd0e-c954-1bc7-80568e428ff4@citrix.com>
Date: Mon, 5 Oct 2020 15:30:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201005140817.1339-1-paul@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 05/10/2020 15:08, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
>
> Currently, in the event of an ioreq server being destroyed while I/O is
> pending in the attached emulator, it is possible that hvm_wait_for_io() will
> dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
> shared page after it has been freed.

It's not legitimate for the shared page to be freed while Xen is still
using it.

Furthermore, this patch only covers the most obvious place for the bug
to manifest.  It doesn't fix them all, as an ioreq server destroy can
race with an in-progress emulation and still suffer a UAF.


An extra ref needs holding on the shared page between acquire_resource
and domain destruction, to cover Xen's use of the page.

Similarly, I don't think it is legitimate for hvm_ioreq_vcpu to be freed
while potentially in use.  These need to stick around until domain
destruction as well, I think.
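
[Archive annotation: the lifetime rule suggested above can be sketched as a simple reference count — the shared page (or the hvm_ioreq_vcpu structures) is only freed once every user, both the mapping taken at acquire_resource time and Xen's own use, has dropped its reference. This is an illustrative toy model; `struct shared_page`, `get_ref()` and `put_ref()` are invented names, not the real Xen page API.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: one counter shared by the emulator's mapping (taken at
 * acquire_resource time) and Xen's own use of the page. */
struct shared_page {
    unsigned int refcount;
    bool freed;
};

static void get_ref(struct shared_page *pg)
{
    pg->refcount++;
}

static void put_ref(struct shared_page *pg)
{
    assert(pg->refcount > 0);
    if ( --pg->refcount == 0 )
        pg->freed = true;   /* stands in for actually freeing the page */
}
```

With this rule, an ioreq server destroy dropping its reference leaves the page alive until domain destruction drops the last one.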

> This will only occur if the emulator (which is necessarily running in a
> service domain with some degree of privilege) does not complete pending I/O
> during tear-down and is not directly exploitable by a guest domain.
>
> This patch adds a call to get_pending_vcpu() into the condition of the
> wait_on_xen_event_channel() macro to verify the continued existence of the
> ioreq server. Should it disappear, the guest domain will be crashed.
>
> NOTE: take the opportunity to modify the text on one gdprintk() for
>       consistency with others.

I know this is tangential, but all these gdprintk()'s need to change to
gprintk()'s, so release builds provide at least some hint as to why the
domain has been crashed.

(And I also really need to finish off my domain_crash() patch to force
this everywhere.)

~Andrew
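
[Archive annotation: the mechanism the patch description refers to — re-checking the ioreq server's continued existence inside the wait condition rather than trusting a pointer captured before sleeping — can be sketched as below. This is an illustrative model, not the actual Xen code; `registered_sv` and `wait_condition()` are invented names for the example.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for 'struct hvm_ioreq_vcpu'. */
struct hvm_ioreq_vcpu {
    bool pending;
};

/* NULL once the ioreq server has been destroyed. */
static struct hvm_ioreq_vcpu *registered_sv;

/* Mimics get_pending_vcpu(): look the tracking structure up again on
 * every wakeup instead of reusing a pointer captured before sleeping. */
static struct hvm_ioreq_vcpu *get_pending_vcpu(void)
{
    return registered_sv;
}

/* The hardened wait condition: stop waiting if the I/O completed OR
 * the server vanished.  In the latter case the caller would crash the
 * domain instead of dereferencing freed memory. */
static bool wait_condition(bool *server_gone)
{
    struct hvm_ioreq_vcpu *sv = get_pending_vcpu();

    if ( !sv )
    {
        *server_gone = true;
        return true;
    }
    return !sv->pending;
}
```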


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 14:39:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 14:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3042.8780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRe2-0004BC-0Q; Mon, 05 Oct 2020 14:39:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3042.8780; Mon, 05 Oct 2020 14:39:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRe1-0004B5-TX; Mon, 05 Oct 2020 14:39:17 +0000
Received: by outflank-mailman (input) for mailman id 3042;
 Mon, 05 Oct 2020 14:39:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f4wr=DM=amazon.co.uk=prvs=540ed4173=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kPRe0-0004B0-2a
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 14:39:16 +0000
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b77013d-0066-4491-b5c2-84c61d823308;
 Mon, 05 Oct 2020 14:39:15 +0000 (UTC)
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1d-38ae4ad2.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 05 Oct 2020 14:39:08 +0000
Received: from EX13D32EUC004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1d-38ae4ad2.us-east-1.amazon.com (Postfix) with ESMTPS
 id 9B7A6A2461; Mon,  5 Oct 2020 14:39:06 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC004.ant.amazon.com (10.43.164.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 5 Oct 2020 14:39:06 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 5 Oct 2020 14:39:05 +0000
X-Inumbo-ID: 4b77013d-0066-4491-b5c2-84c61d823308
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1601908756; x=1633444756;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=KAQeIMBUHE6Z6owhx1JhCJzGoc3oNwrGfCoRsr/E4BM=;
  b=G5UqsYINf1rgyxdojoYXnSRgiwTA1CXkQir9sQH9CjxaBZpVdabXyWT6
   QuhZaM18LYGzbeGO9dQn+STXZ7jrf7+D4Kn2uru1NoPNRE8UcMhHjGzeR
   24Pf2GTIDcOoA3RiC8/qoUx5GxyFPK6b/TN35aQv1bCCs4nRfXKIEhAhC
   8=;
X-IronPort-AV: E=Sophos;i="5.77,338,1596499200"; 
   d="scan'208";a="81660280"
Subject: RE: [PATCH] ioreq: cope with server disappearing while I/O is pending
Thread-Topic: [PATCH] ioreq: cope with server disappearing while I/O is pending
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Thread-Index: AQHWmyED9r+Q5zm9WU2AX5sFOgJgjKmJEd0AgAABGPA=
Date: Mon, 5 Oct 2020 14:39:05 +0000
Message-ID: <9a906f6740834ee9bca4c8108de79ff8@EX13D32EUC003.ant.amazon.com>
References: <20201005140817.1339-1-paul@xen.org>
 <84dea7c2-cd0e-c954-1bc7-80568e428ff4@citrix.com>
In-Reply-To: <84dea7c2-cd0e-c954-1bc7-80568e428ff4@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.78]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 05 October 2020 15:30
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Julien Grall <julien@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>
> Subject: RE: [EXTERNAL] [PATCH] ioreq: cope with server disappearing while I/O is pending
>
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
>
>
> On 05/10/2020 15:08, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Currently, in the event of an ioreq server being destroyed while I/O is
> > pending in the attached emulator, it is possible that hvm_wait_for_io() will
> > dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
> > shared page after it has been freed.
>
> It's not legitimate for the shared page to be freed while Xen is still
> using it.
>
> Furthermore, this patch only covers the most obvious place for the bug
> to manifest.  It doesn't fix them all, as an ioreq server destroy can
> race with an in-progress emulation and still suffer a UAF.
>
>
> An extra ref needs holding on the shared page between acquire_resource
> and domain destruction, to cover Xen's use of the page.
>

Yes, that's true.

> Similarly, I don't think it is legitimate for hvm_ioreq_vcpu to be freed
> while potentially in use.  These need to stick around until domain
> destruction as well, I think.
>

That would cover the problem with the sv pointer; I guess it would be
possible to mark the struct stale and then free it when 'pending'
transitions to false.

> > This will only occur if the emulator (which is necessarily running in a
> > service domain with some degree of privilege) does not complete pending I/O
> > during tear-down and is not directly exploitable by a guest domain.
> >
> > This patch adds a call to get_pending_vcpu() into the condition of the
> > wait_on_xen_event_channel() macro to verify the continued existence of the
> > ioreq server. Should it disappear, the guest domain will be crashed.
> >
> > NOTE: take the opportunity to modify the text on one gdprintk() for
> >       consistency with others.
>
> I know this is tangential, but all these gdprintk()'s need to change to
> gprintk()'s, so release builds provide at least some hint as to why the
> domain has been crashed.
>

Yes, that's also a good point.

I guess this patch will probably need to become a series.

  Paul

> (And I also really need to finish off my domain_crash() patch to force
> this everywhere.)
>
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 14:42:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 14:42:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3044.8792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRgv-0004yr-Hn; Mon, 05 Oct 2020 14:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3044.8792; Mon, 05 Oct 2020 14:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPRgv-0004yk-Ca; Mon, 05 Oct 2020 14:42:17 +0000
Received: by outflank-mailman (input) for mailman id 3044;
 Mon, 05 Oct 2020 14:42:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9LJO=DM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kPRgt-0004yf-Ch
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 14:42:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 843a6ff9-53e2-4cde-8c72-46d486f9fdc6;
 Mon, 05 Oct 2020 14:42:13 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kPRgo-0007OE-Mh; Mon, 05 Oct 2020 14:42:10 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kPRgn-0003Xa-7S; Mon, 05 Oct 2020 14:42:10 +0000
X-Inumbo-ID: 843a6ff9-53e2-4cde-8c72-46d486f9fdc6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=8N2w0bCF8xdyrdHEbT/a55lf3SbpFUtlG7dLfXuvNio=; b=ca6HqU6B6+71YR0zL7fyuUP8K1
	YSGWWdl5sVDPFg3vAc4iB+0qS8Cr0gxj6/pr3m2aB869lthM4l+hSIdgd5CsswVCy+HfdKsYnb1J/
	F+lFY55GD9ETb7mgjugH4lDlPCqlkrhp09GnbJO6HaRfByyJku7X2HScrMz4iaycHxxU=;
Subject: Re: [PATCH] ioreq: cope with server disappearing while I/O is pending
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201005140817.1339-1-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <ea27f64b-deea-a80e-ed05-e4a6dd8e11f9@xen.org>
Date: Mon, 5 Oct 2020 15:42:07 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201005140817.1339-1-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Paul,

On 05/10/2020 15:08, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently, in the event of an ioreq server being destroyed while I/O is
> pending in the attached emulator, it is possible that hvm_wait_for_io() will
> dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
> shared page after it has been freed.

So the IOREQ code will call domain_pause() before destroying the IOREQ server.

A vCPU can only be descheduled in hvm_wait_for_io() from 
wait_on_xen_event_channel(). AFAIK, we would schedule a new vCPU (or 
idle) when this happens.

On x86, the schedule() function will not return after context switch. 
Therefore...

> This will only occur if the emulator (which is necessarily running in a
> service domain with some degree of privilege) does not complete pending I/O
> during tear-down and is not directly exploitable by a guest domain.

... I am not sure how this can happen on x86. Do you mind providing an 
example?

Note that on Arm, the schedule() function will return after context 
switch. So the problem you describe is there from an arch-agnostic PoV.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 15:12:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 15:12:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3046.8804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPSAX-0007nk-0c; Mon, 05 Oct 2020 15:12:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3046.8804; Mon, 05 Oct 2020 15:12:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPSAW-0007nd-SX; Mon, 05 Oct 2020 15:12:52 +0000
Received: by outflank-mailman (input) for mailman id 3046;
 Mon, 05 Oct 2020 15:12:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f4wr=DM=amazon.co.uk=prvs=540ed4173=pdurrant@srs-us1.protection.inumbo.net>)
 id 1kPSAW-0007nW-1h
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 15:12:52 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3bf775a7-4d0f-4e07-b266-b51f9f745295;
 Mon, 05 Oct 2020 15:12:51 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1a-af6a10df.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 05 Oct 2020 15:12:48 +0000
Received: from EX13D32EUC004.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1a-af6a10df.us-east-1.amazon.com (Postfix) with ESMTPS
 id EFB47A1F2B; Mon,  5 Oct 2020 15:12:45 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC004.ant.amazon.com (10.43.164.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Mon, 5 Oct 2020 15:12:44 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Mon, 5 Oct 2020 15:12:44 +0000
X-Inumbo-ID: 3bf775a7-4d0f-4e07-b266-b51f9f745295
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1601910772; x=1633446772;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=4l27yO1IeWJA/+w+mViJgIDkDXvYifoOIMTJABip2Oc=;
  b=mi3v7yRs3rmjcDW/sBHihx6pCx/85z8OzLNQC0Rb8hxH/+C167CdXnwC
   jvD5zWXLVjyEIcbzoyisjg3ZoBeC7g5D85O5HdzTWo6xj+bmVxwAZJ+pD
   RNeWNCCdkd7OguJlv3rtAjGYBqXrngu6wC4pLkLpWEd0EC7IfH+NmUssg
   o=;
X-IronPort-AV: E=Sophos;i="5.77,338,1596499200"; 
   d="scan'208";a="57848308"
Subject: RE: [PATCH] ioreq: cope with server disappearing while I/O is pending
Thread-Topic: [PATCH] ioreq: cope with server disappearing while I/O is pending
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Julien Grall <julien@xen.org>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Thread-Index: AQHWmyED9r+Q5zm9WU2AX5sFOgJgjKmJFSOAgAAEE8A=
Date: Mon, 5 Oct 2020 15:12:44 +0000
Message-ID: <4435a8a32e4347a29c18b792d7755633@EX13D32EUC003.ant.amazon.com>
References: <20201005140817.1339-1-paul@xen.org>
 <ea27f64b-deea-a80e-ed05-e4a6dd8e11f9@xen.org>
In-Reply-To: <ea27f64b-deea-a80e-ed05-e4a6dd8e11f9@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.78]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 05 October 2020 15:42
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Durrant, Paul <pdurrant@amazon.co.uk>; Jan Beulich <jbeulich@suse.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Roger Pau Monné <roger.pau@citrix.com>; Wei Liu <wl@xen.org>
> Subject: RE: [EXTERNAL] [PATCH] ioreq: cope with server disappearing while I/O is pending
>
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
>
>
>
> Hi Paul,
>
> On 05/10/2020 15:08, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Currently, in the event of an ioreq server being destroyed while I/O is
> > pending in the attached emulator, it is possible that hvm_wait_for_io() will
> > dereference a pointer to a 'struct hvm_ioreq_vcpu' or the ioreq server's
> > shared page after it has been freed.
>
> So the IOREQ code will call domain_pause() before destroying the IOREQ.
>
> A vCPU can only be descheduled in hvm_wait_for_io() from
> wait_on_xen_event_channel(). AFAIK, we would schedule a new vCPU (or
> idle) when this happens.
>
> On x86, the schedule() function will not return after context switch.
> Therefore...
>
> > This will only occur if the emulator (which is necessarily running in a
> > service domain with some degree of privilege) does not complete pending I/O
> > during tear-down and is not directly exploitable by a guest domain.
>
> ... I am not sure how this can happen on x86. Do you mind providing an
> example?

Maybe I'm missing something, but I can't see anything that would prevent
wait_on_xen_event_channel() returning after the ioreq server has been
destroyed? The domain is only paused whilst destruction is in progress,
but both 'sv' and 'p' will be on-stack and thus, as soon as the domain
is unpaused again, could be subject to UAF.

  Paul

>
> Note that on Arm, the schedule() function will return after context
> switch. So the problem you describe is there from an arch-agnostic PoV.
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 16:05:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 16:05:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3052.8822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPSze-0004Ze-75; Mon, 05 Oct 2020 16:05:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3052.8822; Mon, 05 Oct 2020 16:05:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPSze-0004ZX-35; Mon, 05 Oct 2020 16:05:42 +0000
Received: by outflank-mailman (input) for mailman id 3052;
 Mon, 05 Oct 2020 16:05:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzGi=DM=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kPSzc-0004ZN-D5
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 16:05:40 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e39f64f3-d2b4-4e74-a08e-109a9fa05756;
 Mon, 05 Oct 2020 16:05:39 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C443F11D4;
 Mon,  5 Oct 2020 09:05:38 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id F0E753F66B;
 Mon,  5 Oct 2020 09:05:37 -0700 (PDT)
X-Inumbo-ID: e39f64f3-d2b4-4e74-a08e-109a9fa05756
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 0/2] tools: solve gcc 10 compilation issues
Date: Mon,  5 Oct 2020 17:02:47 +0100
Message-Id: <cover.1601913536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Solve various issues in tools when compiling with gcc version 10.0 or
greater.

Bertrand Marquis (2):
  tools: use memcpy instead of strncpy in getBridge
  tool/libx: Fix libxenlight gcc warning

 tools/libs/stat/xenstat_linux.c | 9 +++++++--
 tools/libxl/libxl_mem.c         | 2 +-
 2 files changed, 8 insertions(+), 3 deletions(-)

-- 
2.17.1
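
[Archive annotation: for context, the class of change the first patch makes can be illustrated as follows. gcc 10's `-Wstringop-truncation` warns when `strncpy()` may leave the destination unterminated; copying an explicitly bounded length with `memcpy()` and terminating by hand expresses the intent unambiguously. `copy_name()` and the buffer sizes here are invented for the example, not the actual getBridge() code.]

```c
#include <assert.h>
#include <string.h>

/* Bounded copy that always NUL-terminates: the shape of fix that
 * replaces a strncpy() call gcc 10 would warn about. */
static void copy_name(char *dst, size_t dstlen, const char *src)
{
    size_t n = strlen(src);

    if ( n >= dstlen )
        n = dstlen - 1;      /* truncate, always leaving room for NUL */
    memcpy(dst, src, n);
    dst[n] = '\0';
}
```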



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 16:05:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 16:05:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3053.8832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPSze-0004aA-Kh; Mon, 05 Oct 2020 16:05:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3053.8832; Mon, 05 Oct 2020 16:05:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPSze-0004Zy-BV; Mon, 05 Oct 2020 16:05:42 +0000
Received: by outflank-mailman (input) for mailman id 3053;
 Mon, 05 Oct 2020 16:05:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzGi=DM=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kPSzc-0004ZS-W7
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 16:05:41 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 45f20369-f9e3-40b3-a7fc-4ebc56793578;
 Mon, 05 Oct 2020 16:05:40 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AAA511435;
 Mon,  5 Oct 2020 09:05:39 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 03B2A3F66B;
 Mon,  5 Oct 2020 09:05:38 -0700 (PDT)
X-Inumbo-ID: 45f20369-f9e3-40b3-a7fc-4ebc56793578
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] tools: use memcpy instead of strncpy in getBridge
Date: Mon,  5 Oct 2020 17:02:48 +0100
Message-Id: <3de58159c6fde0cdfa4d0f292fa55fdb931cb3aa.1601913536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1601913536.git.bertrand.marquis@arm.com>
References: <cover.1601913536.git.bertrand.marquis@arm.com>

Use memcpy in getBridge to prevent gcc warnings about truncated
strings. The truncation is deliberate, so the gcc warning does not
apply here.
Also revert the previous change that enlarged the buffers, as the
bigger buffers are not needed.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 tools/libs/stat/xenstat_linux.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
index d2ee6fda64..1db35c604c 100644
--- a/tools/libs/stat/xenstat_linux.c
+++ b/tools/libs/stat/xenstat_linux.c
@@ -78,7 +78,12 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
 				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
 
 				if (access(tmp, F_OK) == 0) {
-					strncpy(result, de->d_name, resultLen);
+					/*
+					 * Do not use strncpy, to prevent a compiler warning
+					 * with gcc >= 10.0.
+					 * If de->d_name is longer than resultLen, truncate it.
+					 */
+					memcpy(result, de->d_name, resultLen - 1);
 					result[resultLen - 1] = 0;
 				}
 		}
@@ -264,7 +269,7 @@ int xenstat_collect_networks(xenstat_node * node)
 {
 	/* Helper variables for parseNetDevLine() function defined above */
 	int i;
-	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[256] = { 0 }, devNoBridge[257] = { 0 };
+	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[16] = { 0 }, devNoBridge[17] = { 0 };
 	unsigned long long rxBytes, rxPackets, rxErrs, rxDrops, txBytes, txPackets, txErrs, txDrops;
 
 	struct priv_data *priv = get_priv_data(node->handle);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 16:05:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 16:05:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3054.8845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPSzi-0004cE-Ov; Mon, 05 Oct 2020 16:05:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3054.8845; Mon, 05 Oct 2020 16:05:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPSzi-0004c7-Lb; Mon, 05 Oct 2020 16:05:46 +0000
Received: by outflank-mailman (input) for mailman id 3054;
 Mon, 05 Oct 2020 16:05:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fzGi=DM=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kPSzh-0004ZS-SW
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 16:05:45 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 73854755-f9e2-4046-ac3c-4ee76d2eb441;
 Mon, 05 Oct 2020 16:05:40 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A6A3C143D;
 Mon,  5 Oct 2020 09:05:40 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DBEF83F66B;
 Mon,  5 Oct 2020 09:05:39 -0700 (PDT)
X-Inumbo-ID: 73854755-f9e2-4046-ac3c-4ee76d2eb441
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 2/2] tools/libxl: Fix libxenlight gcc warning
Date: Mon,  5 Oct 2020 17:02:49 +0100
Message-Id: <1c0b3a351e20e31093b5f59edd7fafeac1ceb75c.1601913536.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1601913536.git.bertrand.marquis@arm.com>
References: <cover.1601913536.git.bertrand.marquis@arm.com>

Fix a gcc 10 compilation warning about a potentially uninitialized
variable by initializing it to 0.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 tools/libxl/libxl_mem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_mem.c b/tools/libxl/libxl_mem.c
index e52a9624ea..c739d00f39 100644
--- a/tools/libxl/libxl_mem.c
+++ b/tools/libxl/libxl_mem.c
@@ -562,7 +562,7 @@ out:
 
 int libxl_get_free_memory_0x040700(libxl_ctx *ctx, uint32_t *memkb)
 {
-    uint64_t my_memkb;
+    uint64_t my_memkb = 0;
     int rc;
 
     rc = libxl_get_free_memory(ctx, &my_memkb);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 05 16:12:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 16:12:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3062.8858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPT6C-0005kD-Jo; Mon, 05 Oct 2020 16:12:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3062.8858; Mon, 05 Oct 2020 16:12:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPT6C-0005k6-Fc; Mon, 05 Oct 2020 16:12:28 +0000
Received: by outflank-mailman (input) for mailman id 3062;
 Mon, 05 Oct 2020 16:12:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L9xt=DM=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPT6B-0005k1-8A
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 16:12:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 763b5ef3-3caf-48dd-adfc-2dc6dd2c73cd;
 Mon, 05 Oct 2020 16:12:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPT67-0001Ns-PY; Mon, 05 Oct 2020 16:12:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPT67-0001DT-Fa; Mon, 05 Oct 2020 16:12:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPT67-00016K-F7; Mon, 05 Oct 2020 16:12:23 +0000
X-Inumbo-ID: 763b5ef3-3caf-48dd-adfc-2dc6dd2c73cd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gnroDMgfbDgNpHrigOCh/nqRchLMmXvhMS9A0+Wm4NI=; b=Dij5HTkvZxsbBA4Kx+sTDswrun
	cRHCZfPbBdJ6Z/iRgfxafvWPuZ/H++5TKtdeuAU121jHSf/qgiyYsxQakwtq0L1t8WDQ5elnmBg2I
	++XGHJbdmJ2//fUpynAWjvkUbqQFxyyFRnLaE09wFuCP0ZpS8o1d3ZNJadaXGBkMzdFc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155434-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155434: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=469e72ab7dbbd7ff4ee601e5ea7c29545d46593b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 05 Oct 2020 16:12:23 +0000

flight 155434 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155434/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 11 guest-start    fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 11 guest-start    fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 11 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                469e72ab7dbbd7ff4ee601e5ea7c29545d46593b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   46 days
Failing since        152659  2020-08-21 14:07:39 Z   45 days   76 attempts
Testing same since   155434  2020-10-04 05:52:07 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 38646 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 16:43:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 16:43:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3069.8874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPTZk-00008q-AL; Mon, 05 Oct 2020 16:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3069.8874; Mon, 05 Oct 2020 16:43:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPTZk-00008j-79; Mon, 05 Oct 2020 16:43:00 +0000
Received: by outflank-mailman (input) for mailman id 3069;
 Mon, 05 Oct 2020 16:42:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xs+r=DM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kPTZi-00008e-OD
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 16:42:58 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1e47bc10-e77f-46eb-a677-62658364958a;
 Mon, 05 Oct 2020 16:42:57 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id d4so162051wmd.5
 for <xen-devel@lists.xenproject.org>; Mon, 05 Oct 2020 09:42:56 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id e19sm729967wme.2.2020.10.05.09.42.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 05 Oct 2020 09:42:55 -0700 (PDT)
X-Inumbo-ID: 1e47bc10-e77f-46eb-a677-62658364958a
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Ljnafd1qplNYnzMsiXQsvGYfXaCWCpt32msVm8+CJD8=;
        b=EHN39ryyFEAwpdtL6eLPwNWU4N1F+4K24xU95AnNtewnumJoN5XbvWjUqDTOm2ALlM
         ckjNAm8il3Lcdw2ISTModTNJR9UjpIF3gErBwhXfGaVnWBrP2xkGUKblweeLTc3znR0a
         TZafNln49x6QP28KVMUyJggOpi4sUr1HOVcbI/thZbox/CIFygqlM72b7CICEVLfwKhA
         oBPfvvPRXxmIgBydwO0j1UgKu7zZ9tfBakDtBsoLVnB4yCQGl7YrlHJ3txXj8GlZughX
         HHzRQ7v6TPRoaX8IdrcAuAWqTBpwWMy2hu2JcEC0Fz0qtr1UuPPfRZ9lHxpvpp8JXBKr
         /59Q==
X-Gm-Message-State: AOAM533k+E0r7SzmTOnVp13fIcUPjZFT5jC0FX5y1Yolx/PPB+ZN2mFj
	BPHAJjMKF9D+ycizxy8wT4o=
X-Google-Smtp-Source: ABdhPJyjVHhlgVMyj8NzNdjpXSV2XD9X4IVUTXs12nMCY0bcaa++AWYyR4Wq55qLW6A46twup01fVw==
X-Received: by 2002:a1c:20ce:: with SMTP id g197mr290182wmg.72.1601916176135;
        Mon, 05 Oct 2020 09:42:56 -0700 (PDT)
Date: Mon, 5 Oct 2020 16:42:53 +0000
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: xen-devel@lists.xenproject.org, jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] tools: use memcpy instead of strncpy in getBridge
Message-ID: <20201005164253.cse24pshvbpoehjw@liuwe-devbox-debian-v2>
References: <cover.1601913536.git.bertrand.marquis@arm.com>
 <3de58159c6fde0cdfa4d0f292fa55fdb931cb3aa.1601913536.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <3de58159c6fde0cdfa4d0f292fa55fdb931cb3aa.1601913536.git.bertrand.marquis@arm.com>
User-Agent: NeoMutt/20180716

On Mon, Oct 05, 2020 at 05:02:48PM +0100, Bertrand Marquis wrote:
> Use memcpy in getBridge to prevent gcc warnings about truncated
> strings. We know that we might truncate it, so the gcc warning
> here is wrong.
> Revert the previous change that enlarged the buffer sizes, as the
> bigger buffers are not needed.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Juergen, are you happy with this change? I have not followed the
discussion on #xendevel closely.

> ---
>  tools/libs/stat/xenstat_linux.c | 9 +++++++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
> index d2ee6fda64..1db35c604c 100644
> --- a/tools/libs/stat/xenstat_linux.c
> +++ b/tools/libs/stat/xenstat_linux.c
> @@ -78,7 +78,12 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>  				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>  
>  				if (access(tmp, F_OK) == 0) {
> -					strncpy(result, de->d_name, resultLen);
> +					/*
> +					 * Do not use strncpy to prevent compiler warning with
> +					 * gcc >= 10.0
> +					 * If de->d_name is longer than resultLen, we truncate it
> +					 */
> +					memcpy(result, de->d_name, resultLen - 1);
>  					result[resultLen - 1] = 0;
>  				}
>  		}
> @@ -264,7 +269,7 @@ int xenstat_collect_networks(xenstat_node * node)
>  {
>  	/* Helper variables for parseNetDevLine() function defined above */
>  	int i;
> -	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[256] = { 0 }, devNoBridge[257] = { 0 };
> +	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[16] = { 0 }, devNoBridge[17] = { 0 };
>  	unsigned long long rxBytes, rxPackets, rxErrs, rxDrops, txBytes, txPackets, txErrs, txDrops;
>  
>  	struct priv_data *priv = get_priv_data(node->handle);
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 16:57:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 16:57:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3072.8888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPTo0-0001J9-O3; Mon, 05 Oct 2020 16:57:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3072.8888; Mon, 05 Oct 2020 16:57:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPTo0-0001J2-J9; Mon, 05 Oct 2020 16:57:44 +0000
Received: by outflank-mailman (input) for mailman id 3072;
 Mon, 05 Oct 2020 16:57:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xs+r=DM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kPTnz-0001Ix-3K
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 16:57:43 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd98541b-8cb4-4a42-b0c4-6cd8362b1a9e;
 Mon, 05 Oct 2020 16:57:42 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id d4so204023wmd.5
 for <xen-devel@lists.xenproject.org>; Mon, 05 Oct 2020 09:57:42 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id x64sm196528wmg.33.2020.10.05.09.57.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 05 Oct 2020 09:57:40 -0700 (PDT)
X-Inumbo-ID: bd98541b-8cb4-4a42-b0c4-6cd8362b1a9e
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=bRHjBIvdCGyfM/Qx4IKnQpmpZIdhFFKc33TES6qzM4s=;
        b=mVCNr4IoUO4909YwWBlnEHJRFP8zxmIWAbhFtNTK+orpn7HCp1tSropu2VSzD9mg64
         pwmOI8SyKmGIkv1bnnoBWSJNnzOZQZHixYRluj74DzcU9Ep1sriGhK/v4WN2wEsseo+P
         0aZtlbfyc82vSOaGEv/tjYr01PjxxtfMYHZRr7u2Pe+t7jMyTBGNyj0iv2onSZaA8JHz
         eEB+m0Q3py4ntvmNBXmn7FeRijxsQTLP4p5KBOW1EaVwDyiduYh1YagrU0u8Pg9qs/s4
         M9v/drzdBQh521Zu6OS/K6KU5lole6D0JGG+Zgls28y8uBexuAXRQAN3p+BWb7Nz9g+7
         doRQ==
X-Gm-Message-State: AOAM533RCDfOioeWLcpVxnvbDi3Js68gfhAQVU+LOWFWJh7xgGV4Yp8V
	6lqm4KKwujXfYSV4wM9KqmI=
X-Google-Smtp-Source: ABdhPJxwysuMFLyks/xE5EJLKMGvQ4J5feEFQTw7W1edJt/1APyjzCQOGdDV5jfs+aOILE+XFSlkZw==
X-Received: by 2002:a1c:3283:: with SMTP id y125mr301580wmy.61.1601917061534;
        Mon, 05 Oct 2020 09:57:41 -0700 (PDT)
Date: Mon, 5 Oct 2020 16:57:39 +0000
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: xen-devel@lists.xenproject.org, jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 2/2] tool/libx: Fix libxenlight gcc warning
Message-ID: <20201005165739.yoogabodz4dxrr6q@liuwe-devbox-debian-v2>
References: <cover.1601913536.git.bertrand.marquis@arm.com>
 <1c0b3a351e20e31093b5f59edd7fafeac1ceb75c.1601913536.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1c0b3a351e20e31093b5f59edd7fafeac1ceb75c.1601913536.git.bertrand.marquis@arm.com>
User-Agent: NeoMutt/20180716

On Mon, Oct 05, 2020 at 05:02:49PM +0100, Bertrand Marquis wrote:
> Fix a gcc 10 compilation warning about an uninitialized variable by
> initializing it to 0.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 18:19:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 18:19:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3077.8905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPV4W-0000IW-Sg; Mon, 05 Oct 2020 18:18:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3077.8905; Mon, 05 Oct 2020 18:18:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPV4W-0000IP-Pb; Mon, 05 Oct 2020 18:18:52 +0000
Received: by outflank-mailman (input) for mailman id 3077;
 Mon, 05 Oct 2020 18:18:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9LJO=DM=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kPV4V-0000IK-Fi
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 18:18:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a159c5e6-d111-4224-a8d0-cbf5adb34919;
 Mon, 05 Oct 2020 18:18:50 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kPV4U-00043H-0E; Mon, 05 Oct 2020 18:18:50 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kPV4T-0004A2-Gg; Mon, 05 Oct 2020 18:18:49 +0000
X-Inumbo-ID: a159c5e6-d111-4224-a8d0-cbf5adb34919
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DocFb8exhfUXSoZBKE3Y45sf3gaeuZsjqUNzJjJ4OXQ=; b=ubzNNhWRXYmNL10IK204xrjoqE
	Pn0xi7QMHXGykASr/hURFXjqqO3F3l7tnHrSOKNQmBRbNJwjbQMP4sPc0Wnf2wziEkkFnEtsRPfvs
	eOxUJMwEzkmxdZuF9SM8DqXkUvo6/yy8fJ72lM2JJMNTBZzSLnpNUquhj1Jfv7uC1JPg=;
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
To: Masami Hiramatsu <mhiramat@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 takahiro.akashi@linaro.org
References: <160190516028.40160.9733543991325671759.stgit@devnote2>
From: Julien Grall <julien@xen.org>
Message-ID: <b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
Date: Mon, 5 Oct 2020 19:18:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <160190516028.40160.9733543991325671759.stgit@devnote2>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Masami,

On 05/10/2020 14:39, Masami Hiramatsu wrote:
> Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> address conversion.
> 
> In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
> to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
> assumes the given virtual address is in the contiguous kernel memory
> area, it cannot convert the per-cpu memory if it is allocated in the
> vmalloc area (which depends on CONFIG_SMP).

Are you sure about this? I have a .config with CONFIG_SMP=y where the
per-cpu region for CPU0 is allocated outside of the vmalloc area.

However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING 
was enabled.

[...]

> Fixes: 250c9af3d831 ("arm/xen: Add support for 64KB page granularity")

FWIW, I think the bug was already present before 250c9af3d831.

> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> ---
>   arch/arm/xen/enlighten.c |    2 +-
>   include/xen/arm/page.h   |    3 +++
>   2 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index e93145d72c26..a6ab3689b2f4 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
>   	pr_info("Xen: initializing cpu%d\n", cpu);
>   	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
>   
> -	info.mfn = virt_to_gfn(vcpup);
> +	info.mfn = percpu_to_gfn(vcpup);
>   	info.offset = xen_offset_in_page(vcpup);
>   
>   	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
> diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
> index 39df751d0dc4..ac1b65470563 100644
> --- a/include/xen/arm/page.h
> +++ b/include/xen/arm/page.h
> @@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
>   	})
>   #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
>   
> +#define percpu_to_gfn(v)	\
> +	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
> +
>   /* Only used in PV code. But ARM guests are always HVM. */
>   static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
>   {
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 21:13:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 21:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3084.8921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPXnR-00085O-Hg; Mon, 05 Oct 2020 21:13:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3084.8921; Mon, 05 Oct 2020 21:13:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPXnR-00085H-E0; Mon, 05 Oct 2020 21:13:25 +0000
Received: by outflank-mailman (input) for mailman id 3084;
 Mon, 05 Oct 2020 21:13:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ntw0=DM=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kPXnQ-00085C-Cu
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 21:13:24 +0000
Received: from wout3-smtp.messagingengine.com (unknown [64.147.123.19])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e7d79ff-d43c-492e-9d85-c2e4b5b6871a;
 Mon, 05 Oct 2020 21:13:23 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id A340DC4E;
 Mon,  5 Oct 2020 17:13:21 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Mon, 05 Oct 2020 17:13:22 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 0E12F306467E;
 Mon,  5 Oct 2020 17:13:19 -0400 (EDT)
X-Inumbo-ID: 5e7d79ff-d43c-492e-9d85-c2e4b5b6871a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=w44a34
	Xwd/fuPPnSQTUcs7xTsQtjSseIfEuXyMyPW+o=; b=gtbMI/3i2gHBHgJ2p3qmUw
	OpEaN0+DWix2rmLzrXpnpz9+BMoNULHCVAwoIjYzcQK/wfYU52SrIRS3xQ2u7nyT
	EBWW0KWntYtH901W13nxD4Qo38PjQyn+pcr2OP5WSB4WVHFrqRNYu71EWH2lNaPT
	qN9FFZ/y/DCys0mCXfI4qkAzPKd0gaSRji3H+KzbWI2oqqteMTh5DerjnsOJXAp2
	OtAAQIrmlwhDPllwn4brwpCc/yvGGYuIV4kJeCVNtKvSMNPbctVb16SPxvyzD8+F
	GDPxkfwTtYaEV1e/vVIDh89rnEoF7ouUglowVfLUU/87pYe1D+Lu6BYhjd+G5R+A
	==
X-ME-Sender: <xms:cIx7Xx-mYogIl_ajypMfdD4Y_E0LlkdA9hoH7n302RC777vuGTHAIg>
    <xme:cIx7X1uxygEGbXFihGlcQZ0Qsc1cO5mQL4gY860LA4bhfKeChcGY-7vrtDa2_ip7w
    4tUko7FXT8M5A>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrgedvgdduieduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeigedrudejtddrkeelnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehm
    rghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsg
    drtghomh
X-ME-Proxy: <xmx:cIx7X_Cd6T-zVK4kPYJvkAWZOUA9PhWhZ_7Drg7dSAb3_f7NZg6Tlw>
    <xmx:cIx7X1dLgRjFOCYbUN_Dmg0MNuUrwO_pYVku9XSqbjPWm3jDb3azaA>
    <xmx:cIx7X2MCNbu_i6MJo6_DMZ1ab_vRVSNr2WGMAJAHbpE3fc5DqZOW9Q>
    <xmx:cYx7X43KX36r3pNap8-fah7oaU7RxVlFFWQ8iVDLO_sApFOQVnbzOw>
Date: Mon, 5 Oct 2020 23:13:17 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/smpboot: Unconditionally call
 memguard_unguard_stack() in cpu_smpboot_free()
Message-ID: <20201005211317.GA29479@mail-itl>
References: <20201005122325.17395-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="fdj2RfSjLxBAspz7"
Content-Disposition: inline
In-Reply-To: <20201005122325.17395-1-andrew.cooper3@citrix.com>


--fdj2RfSjLxBAspz7
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH] x86/smpboot: Unconditionally call
 memguard_unguard_stack() in cpu_smpboot_free()

On Mon, Oct 05, 2020 at 01:23:25PM +0100, Andrew Cooper wrote:
> For simplicity between various configurations, Xen always uses shadow stack
> mappings (Read-only + Dirty) for the guard page, irrespective of whether
> CET-SS is enabled.
> 
> memguard_guard_stack() writes shadow stack tokens with plain writes.  This is
> necessary to configure the BSP shadow stack correctly, and cannot be
> implemented with WRSS.
> 
> Therefore, unconditionally call memguard_unguard_stack() to return the
> mappings to fully writeable, so a subsequent call to memguard_guard_stack()
> will succeed.
> 
> Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> This can more easily be demonstrated with CPU hotplug than S3, and the absence
> of bug reports goes to show how rarely hotplug is used.
> ---
>  xen/arch/x86/smpboot.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
> index 5708573c41..c193cc0fb8 100644
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -971,16 +971,16 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
>      if ( IS_ENABLED(CONFIG_PV32) )
>          FREE_XENHEAP_PAGE(per_cpu(compat_gdt, cpu));
>  
> +    if ( stack_base[cpu] )
> +        memguard_unguard_stack(stack_base[cpu]);
> +
>      if ( remove )
>      {
>          FREE_XENHEAP_PAGE(per_cpu(gdt, cpu));
>          FREE_XENHEAP_PAGE(idt_tables[cpu]);
>  
>          if ( stack_base[cpu] )
> -        {
> -            memguard_unguard_stack(stack_base[cpu]);
>              FREE_XENHEAP_PAGES(stack_base[cpu], STACK_ORDER);
> -        }
>      }
>  }
>  

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--fdj2RfSjLxBAspz7
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl97jGsACgkQ24/THMrX
1ywHXAf/dvZP+pfkZeQTZFiuf+zuR0mbcoZx3l8/kJpZS7e7ojsU432zxp4PvB6R
IYCD62ypvzlq2UA+lwmHSS0rMQl7b9JyHYlV6MdlBQhCUgh4LNtlONkVU02dcRz4
Sn1xR4p0RN6ibdpRQNhIH0F2Upbr4g/x/ancfsrO0jmkAOb7cD7E89FfIoa0i58x
0PIKnZQQUsSYbDEdatQZzPlKN313MFA597dHoFv12D97TCxWDRCCnd5WuY63CX82
csyf4XWNVCI199IUwvQaN2O1NUeWuAbMm/1/0Nqw+1krY279cy/674J5m9m6aIcu
0up4Op1UOgDEI/DJpXyGCY+x0B8y0g==
=Sv2R
-----END PGP SIGNATURE-----

--fdj2RfSjLxBAspz7--


From xen-devel-bounces@lists.xenproject.org Mon Oct 05 21:15:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 05 Oct 2020 21:15:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3085.8933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPXpN-0008E8-U3; Mon, 05 Oct 2020 21:15:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3085.8933; Mon, 05 Oct 2020 21:15:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPXpN-0008E1-Qz; Mon, 05 Oct 2020 21:15:25 +0000
Received: by outflank-mailman (input) for mailman id 3085;
 Mon, 05 Oct 2020 21:15:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ntw0=DM=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kPXpM-0008Du-2S
 for xen-devel@lists.xenproject.org; Mon, 05 Oct 2020 21:15:24 +0000
Received: from wout3-smtp.messagingengine.com (unknown [64.147.123.19])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0adaa178-0210-411b-95f5-ce439d0d8dfb;
 Mon, 05 Oct 2020 21:15:23 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id B5B19B54;
 Mon,  5 Oct 2020 17:15:21 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Mon, 05 Oct 2020 17:15:22 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 072453280065;
 Mon,  5 Oct 2020 17:15:19 -0400 (EDT)
Date: Mon, 5 Oct 2020 23:15:17 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/S3: Restore CR4 earlier during resume
Message-ID: <20201005211517.GB29479@mail-itl>
References: <20201002213650.2197-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="0eh6TmSyL6TZE2Uz"
Content-Disposition: inline
In-Reply-To: <20201002213650.2197-1-andrew.cooper3@citrix.com>


--0eh6TmSyL6TZE2Uz
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH] x86/S3: Restore CR4 earlier during resume

On Fri, Oct 02, 2020 at 10:36:50PM +0100, Andrew Cooper wrote:
> c/s 4304ff420e5 "x86/S3: Drop {save,restore}_rest_processor_state()
> completely" moved CR4 restoration up into C, to account for the fact that MCE
> was explicitly handled later.
> 
> However, time_resume() ends up making an EFI Runtime Service call, and EFI
> explodes without OSFXSR, presumably when trying to spill %xmm registers onto
> the stack.
> 
> Given this codepath, and the potential for other issues of a similar kind (TLB
> flushing vs INVPCID, HVM logic vs VMXE, etc), restore CR4 in asm before
> entering C.
> 
> Ignore the previous MCE special case, because it's not actually necessary.  The
> handler is already suitably configured from before suspend.
> 
> Fixes: 4304ff420e5 ("x86/S3: Drop {save,restore}_rest_processor_state() completely")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

This one doesn't build; wakeup_prot.S misses #include <asm/asm_defns.h>.
With that fixed:

Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> This is one definite bug fix.  It doesn't appear to be the only S3 bug
> however.
> ---
>  xen/arch/x86/acpi/power.c       | 3 ---
>  xen/arch/x86/acpi/wakeup_prot.S | 5 +++++
>  2 files changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/acpi/power.c b/xen/arch/x86/acpi/power.c
> index 4fb1e7a148..7f162a4df9 100644
> --- a/xen/arch/x86/acpi/power.c
> +++ b/xen/arch/x86/acpi/power.c
> @@ -276,9 +276,6 @@ static int enter_state(u32 state)
>  
>      mcheck_init(&boot_cpu_data, false);
>  
> -    /* Restore CR4 from cached value, now MCE is set up. */
> -    write_cr4(read_cr4());
> -
>      printk(XENLOG_INFO "Finishing wakeup from ACPI S%d state.\n", state);
>  
>      if ( (state == ACPI_STATE_S3) && error )
> diff --git a/xen/arch/x86/acpi/wakeup_prot.S b/xen/arch/x86/acpi/wakeup_prot.S
> index c6b3fcc93d..1ee5551fb5 100644
> --- a/xen/arch/x86/acpi/wakeup_prot.S
> +++ b/xen/arch/x86/acpi/wakeup_prot.S
> @@ -110,6 +110,11 @@ ENTRY(s3_resume)
>  
>          call    load_system_tables
>  
> +        /* Restore CR4 from the cpuinfo block. */
> +        GET_STACK_END(bx)
> +        mov     STACK_CPUINFO_FIELD(cr4)(%rbx), %rax
> +        mov     %rax, %cr4
> +
>  .Lsuspend_err:
>          pop     %r15
>          pop     %r14

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--0eh6TmSyL6TZE2Uz
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl97jOQACgkQ24/THMrX
1yxwQAf8Dmth2ZMaDIcy1RZdmJqq+/Zr6s4nHlu0slcjWanu7gIkQt7x4PzLkdGN
LDFqmCjlcUh3/dlmZ+gAT9573Gnv6WZKzUGv9SrXS5B1GVBn+Z3zWhk+S5Tf9F5q
tlIk0AydKFLFzjlASnwrfO89/ypcBi2A13iMCHgKug1BlK4NGpAtx3smn2ZrvIQP
jt70pfwZ0D49lTsKElmjhNEo7GWURtWaWNOEg/n/yhITUcb8Ykk1i43UNQ7y22i5
soWPEAv3cGnSBhPc9a7agODMLV/1smS3i5DluXwlLwWQitwi33fIU8IerjvdASjV
ZV4mXQ0SCKGvJ6jJ2dAipFQnpeYrgw==
=+E2q
-----END PGP SIGNATURE-----

--0eh6TmSyL6TZE2Uz--


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 00:36:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 00:36:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3088.8945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPaxE-0001hK-2y; Tue, 06 Oct 2020 00:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3088.8945; Tue, 06 Oct 2020 00:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPaxD-0001hD-VG; Tue, 06 Oct 2020 00:35:43 +0000
Received: by outflank-mailman (input) for mailman id 3088;
 Tue, 06 Oct 2020 00:35:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=darU=DN=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
 id 1kPaxB-0001gh-VU
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 00:35:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 152282d8-5e7a-40a4-8f95-8fd189623f9f;
 Tue, 06 Oct 2020 00:35:40 +0000 (UTC)
Received: from devnote2 (NE2965lan1.rev.em-net.ne.jp [210.141.244.193])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4F7542074A;
 Tue,  6 Oct 2020 00:35:38 +0000 (UTC)
Date: Tue, 6 Oct 2020 09:35:36 +0900
From: Masami Hiramatsu <mhiramat@kernel.org>
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Alex Bennée <alex.bennee@linaro.org>, takahiro.akashi@linaro.org
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
Message-Id: <20201006093536.5f7ad9e1bc3e2fea2494c229@kernel.org>
In-Reply-To: <b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
References: <160190516028.40160.9733543991325671759.stgit@devnote2>
	<b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
X-Mailer: Sylpheed 3.7.0 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Mon, 5 Oct 2020 19:18:47 +0100
Julien Grall <julien@xen.org> wrote:

> Hi Masami,
> 
> On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > address conversion.
> > 
> > In xen_starting_cpu(), per-cpu xen_vcpu_info address is converted
> > to gfn by virt_to_gfn() macro. However, since the virt_to_gfn(v)
> > assumes the given virtual address is in contiguous kernel memory
> > area, it cannot convert the per-cpu memory if it is allocated on
> > vmalloc area (depends on CONFIG_SMP).
> 
> Are you sure about this? I have a .config with CONFIG_SMP=y where the 
> per-cpu region for CPU0 is allocated outside of vmalloc area.
> 
> However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING 
> was enabled.

OK, I've confirmed that this depends on CONFIG_NUMA_BALANCING instead
of CONFIG_SMP. I'll update the comment.

> 
> [...]
> 
> > Fixes: 250c9af3d831 ("arm/xen: Add support for 64KB page granularity")
> 
> FWIW, I think the bug was already present before 250c9af3d831.

Hm, it seems commit 9a9ab3cc00dc ("xen/arm: SMP support") has introduced
the per-cpu code.

Thank you,

> 
> > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > ---
> >   arch/arm/xen/enlighten.c |    2 +-
> >   include/xen/arm/page.h   |    3 +++
> >   2 files changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index e93145d72c26..a6ab3689b2f4 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
> >   	pr_info("Xen: initializing cpu%d\n", cpu);
> >   	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
> >   
> > -	info.mfn = virt_to_gfn(vcpup);
> > +	info.mfn = percpu_to_gfn(vcpup);
> >   	info.offset = xen_offset_in_page(vcpup);
> >   
> >   	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
> > diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
> > index 39df751d0dc4..ac1b65470563 100644
> > --- a/include/xen/arm/page.h
> > +++ b/include/xen/arm/page.h
> > @@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
> >   	})
> >   #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
> >   
> > +#define percpu_to_gfn(v)	\
> > +	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
> > +
> >   /* Only used in PV code. But ARM guests are always HVM. */
> >   static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
> >   {
> > 
> 
> Cheers,
> 
> -- 
> Julien Grall


-- 
Masami Hiramatsu <mhiramat@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 01:13:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 01:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3092.8956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPbXi-0006BZ-7h; Tue, 06 Oct 2020 01:13:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3092.8956; Tue, 06 Oct 2020 01:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPbXi-0006BS-4Z; Tue, 06 Oct 2020 01:13:26 +0000
Received: by outflank-mailman (input) for mailman id 3092;
 Tue, 06 Oct 2020 01:13:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oh4O=DN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kPbXh-0006BN-7H
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 01:13:25 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37b15ab4-d4fd-4b26-aa5a-94a112e44c6a;
 Tue, 06 Oct 2020 01:13:24 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 5BB812076B;
 Tue,  6 Oct 2020 01:13:23 +0000 (UTC)
Date: Mon, 5 Oct 2020 18:13:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Masami Hiramatsu <mhiramat@kernel.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Alex Bennée <alex.bennee@linaro.org>, 
    takahiro.akashi@linaro.org, jgross@suse.com, boris.ostrovsky@oracle.com
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
In-Reply-To: <b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
Message-ID: <alpine.DEB.2.21.2010051526550.10908@sstabellini-ThinkPad-T480s>
References: <160190516028.40160.9733543991325671759.stgit@devnote2> <b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 5 Oct 2020, Julien Grall wrote:
> Hi Masami,
> 
> On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > address conversion.
> > 
> > In xen_starting_cpu(), per-cpu xen_vcpu_info address is converted
> > to gfn by virt_to_gfn() macro. However, since the virt_to_gfn(v)
> > assumes the given virtual address is in contiguous kernel memory
> > area, it cannot convert the per-cpu memory if it is allocated on
> > vmalloc area (depends on CONFIG_SMP).
> 
> Are you sure about this? I have a .config with CONFIG_SMP=y where the per-cpu
> region for CPU0 is allocated outside of vmalloc area.
> 
> However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING was
> enabled.

I cannot reproduce the issue with defconfig, but I can with Masami's
kconfig.

If I disable just CONFIG_NUMA_BALANCING from Masami's kconfig, the
problem still appears.

If I disable CONFIG_NUMA from Masami's kconfig, it works, which is
strange because CONFIG_NUMA is enabled in defconfig, and defconfig
works.


> [...]
> 
> > Fixes: 250c9af3d831 ("arm/xen: Add support for 64KB page granularity")
> 
> FWIW, I think the bug was already present before 250c9af3d831.

Yeah, I bet 250c9af3d831 is not what introduced the issue. Whatever
caused virt_to_phys to stop working on vmalloc'ed addresses is the cause
of the problem. It is something that went in 5.9 (5.8 works) but I don't
know what for sure.


> > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > ---
> >   arch/arm/xen/enlighten.c |    2 +-
> >   include/xen/arm/page.h   |    3 +++
> >   2 files changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > index e93145d72c26..a6ab3689b2f4 100644
> > --- a/arch/arm/xen/enlighten.c
> > +++ b/arch/arm/xen/enlighten.c
> > @@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
> >   	pr_info("Xen: initializing cpu%d\n", cpu);
> >   	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
> >   
> > -	info.mfn = virt_to_gfn(vcpup);
> > +	info.mfn = percpu_to_gfn(vcpup);
> >   	info.offset = xen_offset_in_page(vcpup);
> >   
> >   	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
> > diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
> > index 39df751d0dc4..ac1b65470563 100644
> > --- a/include/xen/arm/page.h
> > +++ b/include/xen/arm/page.h
> > @@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
> >   	})
> >   #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
> >   
> > +#define percpu_to_gfn(v)	\
> > +	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
> > +
> >   /* Only used in PV code. But ARM guests are always HVM. */
> >   static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
> >   {


The fix is fine for me. I tested it and it works. We need to remove the
"Fixes:" line from the commit message. Ideally, replacing it with a
reference to what is the source of the problem.

Aside from that:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 02:41:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 02:41:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3098.8972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPcuX-0006Hw-Nq; Tue, 06 Oct 2020 02:41:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3098.8972; Tue, 06 Oct 2020 02:41:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPcuX-0006Hp-Ks; Tue, 06 Oct 2020 02:41:05 +0000
Received: by outflank-mailman (input) for mailman id 3098;
 Tue, 06 Oct 2020 02:41:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=darU=DN=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
 id 1kPcuW-0006Hk-Ij
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 02:41:04 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5aa76dc5-7101-4ce7-842d-1bb7da235c8c;
 Tue, 06 Oct 2020 02:41:03 +0000 (UTC)
Received: from devnote2 (NE2965lan1.rev.em-net.ne.jp [210.141.244.193])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id F413720870;
 Tue,  6 Oct 2020 02:41:00 +0000 (UTC)
Date: Tue, 6 Oct 2020 11:40:58 +0900
From: Masami Hiramatsu <mhiramat@kernel.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, Masami Hiramatsu <mhiramat@kernel.org>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Alex Bennée <alex.bennee@linaro.org>,
 takahiro.akashi@linaro.org, jgross@suse.com, boris.ostrovsky@oracle.com
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
Message-Id: <20201006114058.b93839b1b8f35a470874572b@kernel.org>
In-Reply-To: <alpine.DEB.2.21.2010051526550.10908@sstabellini-ThinkPad-T480s>
References: <160190516028.40160.9733543991325671759.stgit@devnote2>
	<b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
	<alpine.DEB.2.21.2010051526550.10908@sstabellini-ThinkPad-T480s>
X-Mailer: Sylpheed 3.7.0 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Mon, 5 Oct 2020 18:13:22 -0700 (PDT)
Stefano Stabellini <sstabellini@kernel.org> wrote:

> On Mon, 5 Oct 2020, Julien Grall wrote:
> > Hi Masami,
> > 
> > On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > > address conversion.
> > > 
> > > In xen_starting_cpu(), per-cpu xen_vcpu_info address is converted
> > > to gfn by virt_to_gfn() macro. However, since the virt_to_gfn(v)
> > > assumes the given virtual address is in contiguous kernel memory
> > > area, it can not convert the per-cpu memory if it is allocated on
> > > vmalloc area (depends on CONFIG_SMP).
> > 
> > Are you sure about this? I have a .config with CONFIG_SMP=y where the per-cpu
> > region for CPU0 is allocated outside of vmalloc area.
> > 
> > However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING was
> > enabled.
> 
> I cannot reproduce the issue with defconfig, but I can with Masami's
> kconfig.
> 
> If I disable just CONFIG_NUMA_BALANCING from Masami's kconfig, the
> problem still appears.
> 
> If I disable CONFIG_NUMA from Masami's kconfig, it works, which is
> strange because CONFIG_NUMA is enabled in defconfig, and defconfig
> works.

Hmm, strange, because when I disabled CONFIG_NUMA_BALANCING, the issue
disappeared.

--- config-5.9.0-rc4+   2020-10-06 11:36:20.620107129 +0900
+++ config-5.9.0-rc4+.buggy     2020-10-05 21:04:40.369936461 +0900
@@ -131,7 +131,8 @@
 CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
 CONFIG_CC_HAS_INT128=y
 CONFIG_ARCH_SUPPORTS_INT128=y
-# CONFIG_NUMA_BALANCING is not set
+CONFIG_NUMA_BALANCING=y
+CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
 CONFIG_CGROUPS=y
 CONFIG_PAGE_COUNTER=y
 CONFIG_MEMCG=y

So the buggy config just enables NUMA_BALANCING (and enables it by default).

> > [...]
> > 
> > > Fixes: 250c9af3d831 ("arm/xen: Add support for 64KB page granularity")
> > 
> > FWIW, I think the bug was already present before 250c9af3d831.
> 
> Yeah, I bet 250c9af3d831 is not what introduced the issue. Whatever
> caused virt_to_phys to stop working on vmalloc'ed addresses is the cause
> of the problem. It is something that went in 5.9 (5.8 works) but I don't
> know what for sure.

OK.

> 
> 
> > > Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> > > ---
> > >   arch/arm/xen/enlighten.c |    2 +-
> > >   include/xen/arm/page.h   |    3 +++
> > >   2 files changed, 4 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> > > index e93145d72c26..a6ab3689b2f4 100644
> > > --- a/arch/arm/xen/enlighten.c
> > > +++ b/arch/arm/xen/enlighten.c
> > > @@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
> > >   	pr_info("Xen: initializing cpu%d\n", cpu);
> > >   	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
> > >   -	info.mfn = virt_to_gfn(vcpup);
> > > +	info.mfn = percpu_to_gfn(vcpup);
> > >   	info.offset = xen_offset_in_page(vcpup);
> > >     	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
> > > diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
> > > index 39df751d0dc4..ac1b65470563 100644
> > > --- a/include/xen/arm/page.h
> > > +++ b/include/xen/arm/page.h
> > > @@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
> > >   	})
> > >   #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
> > >   +#define percpu_to_gfn(v)	\
> > > +	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
> > > +
> > >   /* Only used in PV code. But ARM guests are always HVM. */
> > >   static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
> > >   {
> 
> 
> The fix is fine for me. I tested it and it works. We need to remove the
> "Fixes:" line from the commit message. Ideally, replacing it with a
> reference to what is the source of the problem.

OK, as I said, it seems commit 9a9ab3cc00dc ("xen/arm: SMP support")
introduced the per-cpu code, so I will note that instead of a Fixes tag.

> 
> Aside from that:
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you!

-- 
Masami Hiramatsu <mhiramat@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 04:12:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 04:12:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3100.8985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPeKS-000698-FF; Tue, 06 Oct 2020 04:11:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3100.8985; Tue, 06 Oct 2020 04:11:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPeKS-000691-Bf; Tue, 06 Oct 2020 04:11:56 +0000
Received: by outflank-mailman (input) for mailman id 3100;
 Tue, 06 Oct 2020 04:11:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=darU=DN=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
 id 1kPeKQ-00068w-Eb
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 04:11:54 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 976c198d-ed7e-46a5-a9ef-760bf4959eab;
 Tue, 06 Oct 2020 04:11:53 +0000 (UTC)
Received: from devnote2 (NE2965lan1.rev.em-net.ne.jp [210.141.244.193])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id AE5EF208A9;
 Tue,  6 Oct 2020 04:11:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601957512;
	bh=eX8rve6HxwI2nYREP/V5u420GCVH7dLvjYOErIjpC2k=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=TSZDzh1NqvIKkuuXh23GCAVV4bbk8DDnoE8f7FdveBiMASMNiLefK8vV1MaBkpnFY
	 BFiZ0Ef0IPKDHiuGlEjSEhlScxIjuTX+2NfG9iVWwgWpw9988CfF+AIqQNk5+vOaWU
	 RewyZePi8kttZp0dUeL8zT/Ly1Fce4l7skSyNQ7o=
Date: Tue, 6 Oct 2020 13:11:48 +0900
From: Masami Hiramatsu <mhiramat@kernel.org>
To: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, Alex =?UTF-8?B?QmVubsOpZQ==?=
 <alex.bennee@linaro.org>, takahiro.akashi@linaro.org, jgross@suse.com,
 boris.ostrovsky@oracle.com
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
Message-Id: <20201006131148.1f7b63b688eae7b1e0eb2228@kernel.org>
In-Reply-To: <20201006114058.b93839b1b8f35a470874572b@kernel.org>
References: <160190516028.40160.9733543991325671759.stgit@devnote2>
	<b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
	<alpine.DEB.2.21.2010051526550.10908@sstabellini-ThinkPad-T480s>
	<20201006114058.b93839b1b8f35a470874572b@kernel.org>
X-Mailer: Sylpheed 3.7.0 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 6 Oct 2020 11:40:58 +0900
Masami Hiramatsu <mhiramat@kernel.org> wrote:

> On Mon, 5 Oct 2020 18:13:22 -0700 (PDT)
> Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> > On Mon, 5 Oct 2020, Julien Grall wrote:
> > > Hi Masami,
> > > 
> > > On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > > > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > > > address conversion.
> > > > 
> > > > In xen_starting_cpu(), per-cpu xen_vcpu_info address is converted
> > > > to gfn by virt_to_gfn() macro. However, since the virt_to_gfn(v)
> > > > assumes the given virtual address is in contiguous kernel memory
> > > > area, it can not convert the per-cpu memory if it is allocated on
> > > > vmalloc area (depends on CONFIG_SMP).
> > > 
> > > Are you sure about this? I have a .config with CONFIG_SMP=y where the per-cpu
> > > region for CPU0 is allocated outside of vmalloc area.
> > > 
> > > However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING was
> > > enabled.
> > 
> > I cannot reproduce the issue with defconfig, but I can with Masami's
> > kconfig.
> > 
> > If I disable just CONFIG_NUMA_BALANCING from Masami's kconfig, the
> > problem still appears.
> > 
> > If I disable CONFIG_NUMA from Masami's kconfig, it works, which is
> > strange because CONFIG_NUMA is enabled in defconfig, and defconfig
> > works.
> 
> Hmm, strange, because when I disabled CONFIG_NUMA_BALANCING, the issue
> disappeared.

Ah, OK. It depends on NUMA. On arm64, CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK
is enabled if CONFIG_NUMA=y.

Since the per-cpu first chunk is allocated by memblock when
CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK is enabled (see
pcpu_embed_first_chunk()), xen_vcpu_info ends up in the linear
address space if the kernel allocates it from the first chunk.
However, if we disable CONFIG_NUMA, it will be on a vmalloc page.

And if the first chunk has been filled up before Xen is initialized,
xen_vcpu_info will be allocated from the 2nd chunk, which has been
allocated by the backend allocator (kernel memory or vmalloc,
depending on CONFIG_SMP).

So in any case we have to convert it carefully with a dedicated
function, which is per_cpu_ptr_to_phys().

Thank you,


-- 
Masami Hiramatsu <mhiramat@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 04:34:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 04:34:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3102.8997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPeg8-0008HQ-CK; Tue, 06 Oct 2020 04:34:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3102.8997; Tue, 06 Oct 2020 04:34:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPeg8-0008HJ-8O; Tue, 06 Oct 2020 04:34:20 +0000
Received: by outflank-mailman (input) for mailman id 3102;
 Tue, 06 Oct 2020 04:34:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PqFe=DN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kPeg7-0008HD-MM
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 04:34:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 571e1335-7123-4373-a16f-6b792bee9640;
 Tue, 06 Oct 2020 04:34:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D82ECAC3F;
 Tue,  6 Oct 2020 04:34:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601958857;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2eNRm8oCto5znFMrn2DhDxDyf6b1GS4n/9x3IylaBB0=;
	b=OdoBtAQvkvBySQ7Gtb+Shq/KrzHwrezwjt43ALGxQ6/+cgh5Wz3Vwgsb+XuI0sguTZ+BR2
	a9HirUdbx+iwQZOyRUZ9H0CN3F4QkRaTgN6/24IxeG94srLC6IZ78k9EgRBhGzDfXolGi7
	pxlT4PrUhIN/jmMX54BdA37loLj/TC4=
Subject: Re: [PATCH 1/2] tools: use memcpy instead of strncpy in getBridge
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <cover.1601913536.git.bertrand.marquis@arm.com>
 <3de58159c6fde0cdfa4d0f292fa55fdb931cb3aa.1601913536.git.bertrand.marquis@arm.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <95b2a6f9-e063-8276-db62-ddaac06f4b7b@suse.com>
Date: Tue, 6 Oct 2020 06:34:16 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <3de58159c6fde0cdfa4d0f292fa55fdb931cb3aa.1601913536.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.10.20 18:02, Bertrand Marquis wrote:
> Use memcpy in getBridge to prevent gcc warnings about truncated
> strings. We know that we might truncate the name, so the gcc warning
> here is wrong.
> Revert the previous change that enlarged the buffers, as bigger
> buffers are not needed.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>   tools/libs/stat/xenstat_linux.c | 9 +++++++--
>   1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
> index d2ee6fda64..1db35c604c 100644
> --- a/tools/libs/stat/xenstat_linux.c
> +++ b/tools/libs/stat/xenstat_linux.c
> @@ -78,7 +78,12 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>   				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>   
>   				if (access(tmp, F_OK) == 0) {
> -					strncpy(result, de->d_name, resultLen);
> +					/*
> +					 * Do not use strncpy to prevent compiler warning with
> +					 * gcc >= 10.0
> +					 * If de->d_name is longer than resultLen we truncate it
> +					 */
> +					memcpy(result, de->d_name, resultLen - 1);

I think you want min(NAME_MAX, resultLen - 1) for the length.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 06:49:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 06:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3104.9009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPgn5-0003ko-8N; Tue, 06 Oct 2020 06:49:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3104.9009; Tue, 06 Oct 2020 06:49:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPgn5-0003kh-4j; Tue, 06 Oct 2020 06:49:39 +0000
Received: by outflank-mailman (input) for mailman id 3104;
 Tue, 06 Oct 2020 06:49:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=darU=DN=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
 id 1kPgn3-0003kc-UF
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 06:49:37 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id add351b8-9fdc-4104-9464-c66c8ced278a;
 Tue, 06 Oct 2020 06:49:36 +0000 (UTC)
Received: from localhost.localdomain (NE2965lan1.rev.em-net.ne.jp
 [210.141.244.193])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 67E9D20757;
 Tue,  6 Oct 2020 06:49:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1601966975;
	bh=3RqKcmjRrSN56WT/DTt/3kBXbvbvlAZT1/pBioOGcO8=;
	h=From:To:Cc:Subject:Date:From;
	b=qIvEkyUvG9QRPrqEf6B6OTJEcc5OIUoUD8o0WT8RJaK1aPLVkkZLkPdsOK2sZ3kuH
	 /StRsG1+1g/5IgHjXqJtcG/CLh4Nhc/+uF07TEO1gMhvtBiePinjnnVafGAXkqQumw
	 NFCUExgT4gcZJEmZn0IJuTeEiYzpOFIvWd7Iu/KE=
From: Masami Hiramatsu <mhiramat@kernel.org>
To: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	takahiro.akashi@linaro.org
Subject: [PATCH v2] arm/arm64: xen: Fix to convert percpu address to gfn correctly
Date: Tue,  6 Oct 2020 15:49:31 +0900
Message-Id: <160196697165.60224.17470743378683334995.stgit@devnote2>
X-Mailer: git-send-email 2.25.1
User-Agent: StGit/0.19
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
address conversion.

In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
assumes the given virtual address is in the linearly mapped kernel
memory area, it cannot convert per-cpu memory that is allocated in
the vmalloc area.

This depends on CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK. If it is
enabled, the first chunk of percpu memory is linearly mapped;
otherwise it is allocated from the vmalloc area. Moreover, if the
first chunk has run out before xen_vcpu_info is allocated, it will
be placed in the 2nd chunk, which is backed by kernel memory or
vmalloc memory (depending on CONFIG_NEED_PER_CPU_KM).

Without this fix, on a kernel configured to use the vmalloc area for
percpu memory, the Dom0 kernel fails to boot with the following
errors.

[    0.466172] Xen: initializing cpu0
[    0.469601] ------------[ cut here ]------------
[    0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180
[    0.484435] Modules linked in:
[    0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4
[    0.493895] Hardware name: Socionext Developer Box (DT)
[    0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.504836] pc : xen_starting_cpu+0x160/0x180
[    0.509263] lr : xen_starting_cpu+0xb0/0x180
[    0.513599] sp : ffff8000116cbb60
[    0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.522366] x27: 0000000000000000 x26: 0000000000000000
[    0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.533129] x23: 0000000000000000 x22: 0000000000000000
[    0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.543892] x19: ffff8000113a9988 x18: 0000000000000010
[    0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.554655] x15: ffffffffffffffff x14: 0720072007200720
[    0.560037] x13: 0720072007200720 x12: 0720072007200720
[    0.565418] x11: 0720072007200720 x10: 0720072007200720
[    0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.586945] x3 : 0000000000000000 x2 : ffff80000abec000
[    0.592327] x1 : 000000000000002f x0 : 0000800000000000
[    0.597716] Call trace:
[    0.600232]  xen_starting_cpu+0x160/0x180
[    0.604309]  cpuhp_invoke_callback+0xac/0x640
[    0.608736]  cpuhp_issue_call+0xf4/0x150
[    0.612728]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.618030]  __cpuhp_setup_state+0x84/0xf8
[    0.622192]  xen_guest_init+0x324/0x364
[    0.626097]  do_one_initcall+0x54/0x250
[    0.630003]  kernel_init_freeable+0x12c/0x2c8
[    0.634428]  kernel_init+0x1c/0x128
[    0.637988]  ret_from_fork+0x10/0x18
[    0.641635] ---[ end trace d95b5309a33f8b27 ]---
[    0.646337] ------------[ cut here ]------------
[    0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158!
[    0.657697] Internal error: Oops - BUG: 0 [#1] SMP
[    0.662548] Modules linked in:
[    0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.9.0-rc4+ #4
[    0.673398] Hardware name: Socionext Developer Box (DT)
[    0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.684338] pc : xen_starting_cpu+0x178/0x180
[    0.688765] lr : xen_starting_cpu+0x144/0x180
[    0.693188] sp : ffff8000116cbb60
[    0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.701955] x27: 0000000000000000 x26: 0000000000000000
[    0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.712718] x23: 0000000000000000 x22: 0000000000000000
[    0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.723481] x19: ffff8000113a9988 x18: 0000000000000010
[    0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.734245] x15: ffffffffffffffff x14: 0720072007200720
[    0.739626] x13: 0720072007200720 x12: 0720072007200720
[    0.745008] x11: 0720072007200720 x10: 0720072007200720
[    0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef
[    0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea
[    0.777304] Call trace:
[    0.779819]  xen_starting_cpu+0x178/0x180
[    0.783898]  cpuhp_invoke_callback+0xac/0x640
[    0.788325]  cpuhp_issue_call+0xf4/0x150
[    0.792317]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.797619]  __cpuhp_setup_state+0x84/0xf8
[    0.801779]  xen_guest_init+0x324/0x364
[    0.805683]  do_one_initcall+0x54/0x250
[    0.809590]  kernel_init_freeable+0x12c/0x2c8
[    0.814016]  kernel_init+0x1c/0x128
[    0.817583]  ret_from_fork+0x10/0x18
[    0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000)
[    0.827415] ---[ end trace d95b5309a33f8b28 ]---
[    0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

This issue was introduced by commit 9a9ab3cc00dc ("xen/arm: SMP support").

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 arch/arm/xen/enlighten.c |    2 +-
 include/xen/arm/page.h   |    3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index e93145d72c26..a6ab3689b2f4 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
 
-	info.mfn = virt_to_gfn(vcpup);
+	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
 
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index 39df751d0dc4..ac1b65470563 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
 	})
 #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
 
+#define percpu_to_gfn(v)	\
+	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
+
 /* Only used in PV code. But ARM guests are always HVM. */
 static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 {



From xen-devel-bounces@lists.xenproject.org Tue Oct 06 07:51:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 07:51:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3110.9025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPhkt-0001fw-6O; Tue, 06 Oct 2020 07:51:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3110.9025; Tue, 06 Oct 2020 07:51:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPhkt-0001fp-2q; Tue, 06 Oct 2020 07:51:27 +0000
Received: by outflank-mailman (input) for mailman id 3110;
 Tue, 06 Oct 2020 07:51:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ivvU=DN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kPhks-0001fk-Gq
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 07:51:26 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.88]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfa7642e-dd19-4af0-a719-106a30ccce3a;
 Tue, 06 Oct 2020 07:51:24 +0000 (UTC)
Received: from AM6PR08CA0028.eurprd08.prod.outlook.com (2603:10a6:20b:c0::16)
 by AM8PR08MB5843.eurprd08.prod.outlook.com (2603:10a6:20b:1df::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.36; Tue, 6 Oct
 2020 07:51:22 +0000
Received: from AM5EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:c0:cafe::96) by AM6PR08CA0028.outlook.office365.com
 (2603:10a6:20b:c0::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34 via Frontend
 Transport; Tue, 6 Oct 2020 07:51:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT031.mail.protection.outlook.com (10.152.16.111) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Tue, 6 Oct 2020 07:51:22 +0000
Received: ("Tessian outbound 34b830c8a0ef:v64");
 Tue, 06 Oct 2020 07:51:22 +0000
Received: from eafbe70e3b2e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B35871F4-9849-4D0C-840C-2CF49E55B907.1; 
 Tue, 06 Oct 2020 07:51:16 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eafbe70e3b2e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 06 Oct 2020 07:51:16 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5845.eurprd08.prod.outlook.com (2603:10a6:10:1a5::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Tue, 6 Oct
 2020 07:51:14 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3433.045; Tue, 6 Oct 2020
 07:51:14 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8LRCBCTu8g9BZ2k5WA08aaKL1PgpqMaRSCOJp7SOPIo=;
 b=4pHvi+d1BUOqdsFCkj9IIzshufPmvE6XhCiZilL+0zodn2qjyqoMzesrtih9eGV+OmXI+Jm1W//ux7BSmz4QNYAgRmMDvrZRNBUA4Fii2EdF1wsqOqwoajsFmEab2o3XU/IvsRzvkysljJ4o/W0MhKqIxR8jsODihpd2b2sqvlc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT031.mail.protection.outlook.com (10.152.16.111) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Tue, 6 Oct 2020 07:51:22 +0000
Received: ("Tessian outbound 34b830c8a0ef:v64"); Tue, 06 Oct 2020 07:51:22 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: e47ebc7a49187d12
X-CR-MTA-TID: 64aa7808
Received: from eafbe70e3b2e.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id B35871F4-9849-4D0C-840C-2CF49E55B907.1;
	Tue, 06 Oct 2020 07:51:16 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eafbe70e3b2e.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Tue, 06 Oct 2020 07:51:16 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aOGh8Rg77IySt8DI+37bOFRKcfS3AJj2yA/d8raBGNIn9r4NEiJR9OqsVolr+khaRM2pZfWt7Ufocv5zyEVsW0OwAU9EwAcyoQSX/45nOkEHizuQFA4EI52uUl5UZa/EJ0cmwxQCTpMGSodqjbhQfEv9r+KAmXXFeHCD6YSfxZt4zyH7BsxOQbMIh8Pjqh+dyiazplFtA6naso0/G3Ag5UrsBoUiGoJcnxzDPFbvKH7kM7o4mrFJQlOERxAt1qPtE3lFLrR8GE2QZzZVP6kizEdzof+qf6V1f3l4ORBB0A7a9TJwsu4qyGy1BoAaIsHBC5aQw+30BS1+RBfEGwG5xg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8LRCBCTu8g9BZ2k5WA08aaKL1PgpqMaRSCOJp7SOPIo=;
 b=FKRU4vQ13wTDaKEaQQmEfBsej91RVw8qQRVJ9Lv4yDFF4OukA7gY1lU5DAunKQCVZWRiVlHbNa3OYuTI9jUgr4nGywC7NqLQ3RK0VU4d3pGfEFcnxEwir82PLUL3x2Q+cycV8IVzoF8YmTO4Tmc3xFOZhzPPubzQG0fHfxUpRYYNbq38yMP8XKRMU9B5q3VYjzqYut3pSHAJTv8Hej9d5pOHUsCuOfZPq8slrdUy9fPYHEuVV3J7WzXobXRDC87fBMygNDqatBjjoFvJLvuPkwenm/bmfK5Gm9RIQpWzKThwnAIQ2VgI6EZB3HFWryzsidFbG/L0br/I2E6sC30/0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8LRCBCTu8g9BZ2k5WA08aaKL1PgpqMaRSCOJp7SOPIo=;
 b=4pHvi+d1BUOqdsFCkj9IIzshufPmvE6XhCiZilL+0zodn2qjyqoMzesrtih9eGV+OmXI+Jm1W//ux7BSmz4QNYAgRmMDvrZRNBUA4Fii2EdF1wsqOqwoajsFmEab2o3XU/IvsRzvkysljJ4o/W0MhKqIxR8jsODihpd2b2sqvlc=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5845.eurprd08.prod.outlook.com (2603:10a6:10:1a5::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Tue, 6 Oct
 2020 07:51:14 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3433.045; Tue, 6 Oct 2020
 07:51:14 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] tools: use memcpy instead of strncpy in getBridge
Thread-Topic: [PATCH 1/2] tools: use memcpy instead of strncpy in getBridge
Thread-Index: AQHWmzF2iz7tdPrAmU+VUEf6Gggal6mJ/YIAgAA3CAA=
Date: Tue, 6 Oct 2020 07:51:14 +0000
Message-ID: <F313E5EA-DF1A-4AC2-885B-75FD1B1D8211@arm.com>
References: <cover.1601913536.git.bertrand.marquis@arm.com>
 <3de58159c6fde0cdfa4d0f292fa55fdb931cb3aa.1601913536.git.bertrand.marquis@arm.com>
 <95b2a6f9-e063-8276-db62-ddaac06f4b7b@suse.com>
In-Reply-To: <95b2a6f9-e063-8276-db62-ddaac06f4b7b@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 88f1c5d3-c0f8-4e04-52a8-08d869cc9e7e
x-ms-traffictypediagnostic: DBAPR08MB5845:|AM8PR08MB5843:
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB5843061E3246C95EE92806CE9D0D0@AM8PR08MB5843.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 8OybOaITiH4UAZB1EjeK92AQr91mohtoyb0GVX1ftoxz5zQCiSxh8VjYQcv5X3oUZ29ej2GSL3A4oiz1V5/hFOLaY7A3wlPL1XQ80Q+ee3x2H5MbusCKJ2yPC7kUW75fCNsZ6K7ZgN7nT96i2eg457npoaWcXQT9Fnmo5lUu6br0m9mryxqTs3Pa7Vbu6xR7+d++pyUrGJmVAG83Rs+apkJlQaP/eS++5Dy5jV8h+MRoZSmBW0NcF8fbGh0JZ4YzPcFE6Sn+x8U/wstNmSo9RtNllG3D6MqfLTLBu5NtjjZcogUO6oGWEVpY77/oUbWMofag65p6e+VTge7+7hVnzg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(376002)(136003)(396003)(39860400002)(91956017)(53546011)(6512007)(83380400001)(6916009)(6486002)(66556008)(86362001)(64756008)(66946007)(4326008)(66446008)(76116006)(6506007)(66476007)(5660300002)(2616005)(36756003)(33656002)(71200400001)(54906003)(316002)(8936002)(478600001)(2906002)(66574015)(186003)(26005)(8676002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 3HYYqx4dC7HtQyO+yUkW9iGbuO8NRll1JyMtStzl9nj/0qnw6ZMF+1M8JLBq9dOjrvx8v0g89gGLx9jpO1RGpk9Li8tzRy/6oHClL/1hQY/Ybtij/WFNg3SluSsBW1Dl424JPIL6SYvuOIye4tv3xWiIzUbfcNq2dZSJZoVlAVuTr5lWbJmFcJDJRuYRayKYG867/ruj2fFHKlUMMm79M3wr/BAXQweGTF/ricLQ7UuCR+gOs8JGV3WURIrkUt1onlVTVg8l7bxVO+oaNHllaLJFAW1wFqjEq4Qe1nMCv+9QHEKhjZfIInY3ftVAEyLF7NIjsgEKiJb062hnVn6FrD9T/xEqyRaR0NWgbjIoT9M66g1Bw0xDHvM4oc86fRUjU17IiM5/LtmLDMzF+x+euAAOB9s4oAomY++0BG6I02cWJHld9D/CnjZk+yItiqUoFThu62kujMEzl08A8pt1+GhjRocjGKm3o2k/9zICNkScRXR3fk9DnP9PxThngC1ui00uO+RwpCyRu/LHHn5NDApw3byGrnRvtwiakRi22WeKCOvsPv6kF4k7rK0BPTPELeUBlZBVxQUbuXMoUCnVbmWHuvuuc8INtS8PX7Aup+M6smk5nxvSXjWp4DWEmQxk71m7y3xlDM1NHubcJ/X6yg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <C0BB1335B1971645A9B0DE915B127476@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5845
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2388f856-92f0-4e10-2d13-08d869cc99e9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	AP5Rs56mt5RlgmYwoXPCWizrapldVU68i/5VGbtt40wTpsu4c2ueE4BpKt9VLpfdbdP3yksImFEEvmqk3hCIfsHCN/XJD64f6N3dvuJNrnyku667+pK2aDH14Dbh2JIjcZBcUH8IS7p50jP1mMmUKF8SJjUWPrWgLM8qOOXdvQ9GHRayGhBOjq+Bfqu/7ZY0eq/CanvLeMOZ1tkQ3WgBu9glSPImC43UTCXlhG7Ud29PcETdzmNADbL/52zDSnRc4pcsYXuSUMGcatVyRqBiZO1kfpxv4jTdrqj6GuMngCSV8M8Mv2/vnS4PhsxgUn5sup7AnF7JuGvcHggZeROVMe8oKAZx+F/fvbWskAtW0tAuFqHdci0XCioTzyjWyqiWXzeiMka77Ulye2ofc2O34Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(39860400002)(376002)(346002)(396003)(46966005)(70206006)(36906005)(54906003)(316002)(336012)(4326008)(26005)(82740400003)(6862004)(86362001)(6512007)(53546011)(6506007)(36756003)(2906002)(83380400001)(186003)(70586007)(478600001)(82310400003)(6486002)(8936002)(5660300002)(66574015)(2616005)(33656002)(356005)(81166007)(47076004)(8676002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Oct 2020 07:51:22.5689
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 88f1c5d3-c0f8-4e04-52a8-08d869cc9e7e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5843

> On 6 Oct 2020, at 05:34, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 05.10.20 18:02, Bertrand Marquis wrote:
>> Use memcpy in getBridge to prevent gcc warnings about truncated
>> strings. We know that we might truncate it, so the gcc warning
>> here is wrong.
>> Revert previous change changing buffer sizes as bigger buffers
>> are not needed.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>  tools/libs/stat/xenstat_linux.c | 9 +++++++--
>>  1 file changed, 7 insertions(+), 2 deletions(-)
>> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
>> index d2ee6fda64..1db35c604c 100644
>> --- a/tools/libs/stat/xenstat_linux.c
>> +++ b/tools/libs/stat/xenstat_linux.c
>> @@ -78,7 +78,12 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>>  				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>>    				if (access(tmp, F_OK) == 0) {
>> -					strncpy(result, de->d_name, resultLen);
>> +					/*
>> +					 * Do not use strncpy to prevent compiler warning with
>> +					 * gcc >= 10.0
>> +					 * If de->d_name is longer then resultLen we truncate it
>> +					 */
>> +					memcpy(result, de->d_name, resultLen - 1);
> 
> I think you want min(NAME_MAX, resultLen - 1) for the length.

true, I will fix that and send a v2.

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 08:13:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 08:13:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3129.9037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPi6J-0004Bd-9B; Tue, 06 Oct 2020 08:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3129.9037; Tue, 06 Oct 2020 08:13:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPi6J-0004BW-5m; Tue, 06 Oct 2020 08:13:35 +0000
Received: by outflank-mailman (input) for mailman id 3129;
 Tue, 06 Oct 2020 08:13:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6FcI=DN=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kPi6H-0004BR-W3
 for xen-devel@lists.xen.org; Tue, 06 Oct 2020 08:13:34 +0000
Received: from mail-ej1-x641.google.com (unknown [2a00:1450:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5145476-7603-4130-bd30-e1eb5714aab8;
 Tue, 06 Oct 2020 08:13:33 +0000 (UTC)
Received: by mail-ej1-x641.google.com with SMTP id p15so16309656ejm.7
 for <xen-devel@lists.xen.org>; Tue, 06 Oct 2020 01:13:33 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id o9sm1765320eds.5.2020.10.06.01.13.30
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 06 Oct 2020 01:13:31 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6FcI=DN=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kPi6H-0004BR-W3
	for xen-devel@lists.xen.org; Tue, 06 Oct 2020 08:13:34 +0000
X-Inumbo-ID: f5145476-7603-4130-bd30-e1eb5714aab8
Received: from mail-ej1-x641.google.com (unknown [2a00:1450:4864:20::641])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f5145476-7603-4130-bd30-e1eb5714aab8;
	Tue, 06 Oct 2020 08:13:33 +0000 (UTC)
Received: by mail-ej1-x641.google.com with SMTP id p15so16309656ejm.7
        for <xen-devel@lists.xen.org>; Tue, 06 Oct 2020 01:13:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=HR56wJereoyhBJKXAri+fewH/8JdbbDXN6I88FBg51M=;
        b=BGWdDV2L20bs5zTZev8MACb1x5mxhk/dBnNElUsvJHEPO3oI9dbf0u5SgNc7VVJgqB
         aSoCEms3yykCTtOBS16hfMc8CjNRul2GFIO7ACkO82ZPkXjdEgfZrg1tV3GDE6x4V1Ys
         oyj7iW+a0m1z+CV8Ud17rgOq0kLpvse6gsazZwH0Q1/8XkFL3oZrsEgHh6/BJVmHrfK9
         3VaX7I2i/aLxyuRiKSnzZqahQQEu64n1MzwlnffeFMbP6LTWcFGw4mnhX95ssPgbSfiG
         YzKrtLRysK2Qf3J54NJDrCtPz1+zIe001QxzQNKiK2Ng24RLN4U9CXIM//GrkDuttEC3
         u6sw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=HR56wJereoyhBJKXAri+fewH/8JdbbDXN6I88FBg51M=;
        b=QU847Dd22JueiDkQyFPEQdxhEzcpooMxa10X7IKAkcnESzcgeWM0BIeIvfFOSQdv1U
         RTQx6U/U0iApZdZoQ54/z4bX+DSG/tYapssr1cP7YUFUQc8yzRItwMQcEe3r/hR1CTig
         ZTih78IAqx7fouaQFvDqyA1bJgcpdaxy7+TluGfkHO1XV/X9TVKnspGYtrW5ypOJ2qjV
         ks3hP2JQ96z9SaM6yXTfVHWXgJc0Zhofcw7X2R03gE3gRXcKsH/x4A8GCR+he5WUqPoi
         68HMatX939RAi+TxD7YknW6bHjlE6Mrk+yyLWPfUvrZ6ZXI6QZbuh4LGHZV9Rw4/i5v9
         wwNQ==
X-Gm-Message-State: AOAM533F+ibkYQUq36ArdpHLMiePS2mALt5IvN6sa0x1KVnE760ypnWd
	PbAEK5le5S/WxHvrVl+++uE=
X-Google-Smtp-Source: ABdhPJyA8I1Q/+tUvyuIXDyjYz8pEVzJJon+Pyfj0lBbLmQRT9sSWNHdG7Cq1Vz0bQf+Ih0me0RUyA==
X-Received: by 2002:a17:906:c191:: with SMTP id g17mr4053148ejz.117.1601972012225;
        Tue, 06 Oct 2020 01:13:32 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
        by smtp.gmail.com with ESMTPSA id o9sm1765320eds.5.2020.10.06.01.13.30
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 06 Oct 2020 01:13:31 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	"'Don Slutz'" <don.slutz@gmail.com>
Cc: <xen-devel@lists.xen.org>,
	"'Boris Ostrovsky'" <boris.ostrovsky@oracle.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jun Nakajima'" <jun.nakajima@intel.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Tim Deegan'" <tim@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Konrad Rzeszutek Wilk'" <konrad.wilk@oracle.com>,
	"'George Dunlap'" <George.Dunlap@eu.citrix.com>
References: <cover.1597854907.git.don.slutz@gmail.com> <bfe0b9bb7b283657bc33edb7c4b425930564ca46.1597854908.git.don.slutz@gmail.com> <e7581f3a-71eb-3181-9128-01e22653a47e@suse.com>
In-Reply-To: <e7581f3a-71eb-3181-9128-01e22653a47e@suse.com>
Subject: RE: [XEN PATCH v14 7/8] Add IOREQ_TYPE_VMWARE_PORT
Date: Tue, 6 Oct 2020 09:13:29 +0100
Message-ID: <000901d69bb8$941489b0$bc3d9d10$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQE3EbIOCgkLUtOBXNlfcF4M3GLQwwHXOsu5AXIpxzqqrv4WAA==



> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 01 October 2020 15:42
> To: Don Slutz <don.slutz@gmail.com>
> Cc: xen-devel@lists.xen.org; Boris Ostrovsky <boris.ostrovsky@oracle.com>; Ian Jackson
> <iwj@xenproject.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>;
> Stefano Stabellini <sstabellini@kernel.org>; Tim Deegan <tim@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; George Dunlap
> <George.Dunlap@eu.citrix.com>; Paul Durrant <paul@xen.org>
> Subject: Re: [XEN PATCH v14 7/8] Add IOREQ_TYPE_VMWARE_PORT
> 
> On 19.08.2020 18:52, Don Slutz wrote:
> > This adds synchronization of the 6 vcpu registers (only 32bits of
> > them) that QEMU's vmport.c and vmmouse.c needs between Xen and QEMU.
> > This is how VMware defined the use of these registers.
> >
> > This is to avoid a 2nd and 3rd exchange between QEMU and Xen to
> > fetch and put these 6 vcpu registers used by the code in QEMU's
> > vmport.c and vmmouse.c
> 
> I'm unconvinced this warrants a new ioreq type, and all the overhead
> associated with it. I'd be curious to know what Paul or the qemu
> folks think here.
> 

The current shared ioreq_t does appear to have enough space to accommodate
6 32-bit registers (in the addr, data, count and size fields), so
couldn't the new IOREQ_TYPE_VMWARE_PORT type be dealt with by simply
unioning the regs with these fields? That avoids the need for a whole
new shared page.

  Paul



From xen-devel-bounces@lists.xenproject.org Tue Oct 06 08:19:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 08:19:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3140.9049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPiCF-0004VH-Vv; Tue, 06 Oct 2020 08:19:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3140.9049; Tue, 06 Oct 2020 08:19:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPiCF-0004VA-SA; Tue, 06 Oct 2020 08:19:43 +0000
Received: by outflank-mailman (input) for mailman id 3140;
 Tue, 06 Oct 2020 08:19:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PqFe=DN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kPiCE-0004V5-7K
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 08:19:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c760beb-24d7-47b9-95d4-a2bfc67b1aff;
 Tue, 06 Oct 2020 08:19:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 99231ACC8;
 Tue,  6 Oct 2020 08:19:40 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=PqFe=DN=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kPiCE-0004V5-7K
	for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 08:19:42 +0000
X-Inumbo-ID: 3c760beb-24d7-47b9-95d4-a2bfc67b1aff
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3c760beb-24d7-47b9-95d4-a2bfc67b1aff;
	Tue, 06 Oct 2020 08:19:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601972380;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JbIVLc8eRnlTgQmnFKJDsn72Hr2mQI7x9coSJaceAVc=;
	b=XAq0wptWsCEmmuBIwv9sCsACyOznIWYk01TY0ApjF5kh7ybpmZEo5rArFzQBHpFGiE24VH
	TVA3/iKscokPT8ADmCKFRRczFU+a9TEcwq6xCQf1dEbQvZZI4ryCNQbUvrRkInASJ8P+HZ
	eOX6q9RcEk8y5+HJbHpIzh+hDFIXgs4=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 99231ACC8;
	Tue,  6 Oct 2020 08:19:40 +0000 (UTC)
Subject: Re: [PATCH 1/2] tools: use memcpy instead of strncpy in getBridge
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <cover.1601913536.git.bertrand.marquis@arm.com>
 <3de58159c6fde0cdfa4d0f292fa55fdb931cb3aa.1601913536.git.bertrand.marquis@arm.com>
 <95b2a6f9-e063-8276-db62-ddaac06f4b7b@suse.com>
 <F313E5EA-DF1A-4AC2-885B-75FD1B1D8211@arm.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <58c4b793-e093-ca49-a6a1-1a5073013831@suse.com>
Date: Tue, 6 Oct 2020 10:19:39 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <F313E5EA-DF1A-4AC2-885B-75FD1B1D8211@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06.10.20 09:51, Bertrand Marquis wrote:
> 
> 
>> On 6 Oct 2020, at 05:34, Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 05.10.20 18:02, Bertrand Marquis wrote:
>>> Use memcpy in getBridge to prevent gcc warnings about truncated
>>> strings. We know that we might truncate it, so the gcc warning
>>> here is wrong.
>>> Revert previous change changing buffer sizes as bigger buffers
>>> are not needed.
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>>   tools/libs/stat/xenstat_linux.c | 9 +++++++--
>>>   1 file changed, 7 insertions(+), 2 deletions(-)
>>> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
>>> index d2ee6fda64..1db35c604c 100644
>>> --- a/tools/libs/stat/xenstat_linux.c
>>> +++ b/tools/libs/stat/xenstat_linux.c
>>> @@ -78,7 +78,12 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>>>   				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>>>     				if (access(tmp, F_OK) == 0) {
>>> -					strncpy(result, de->d_name, resultLen);
>>> +					/*
>>> +					 * Do not use strncpy to prevent compiler warning with
>>> +					 * gcc >= 10.0
>>> +					 * If de->d_name is longer then resultLen we truncate it
>>> +					 */
>>> +					memcpy(result, de->d_name, resultLen - 1);
>>
>> I think you want min(NAME_MAX, resultLen - 1) for the length.
> 
> true, I will fix that and send a v2.

Hmm, maybe you should use

min(strnlen(de->d_name, NAME_MAX), resultLen - 1)

for the case that de->d_name is near the end of a page, as otherwise
you could try to copy unallocated space.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 08:27:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 08:27:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3142.9060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPiJK-0005T3-OS; Tue, 06 Oct 2020 08:27:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3142.9060; Tue, 06 Oct 2020 08:27:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPiJK-0005Sw-LU; Tue, 06 Oct 2020 08:27:02 +0000
Received: by outflank-mailman (input) for mailman id 3142;
 Tue, 06 Oct 2020 08:27:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RbFo=DN=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kPiJI-0005Sr-W6
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 08:27:01 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bba97cb6-e026-43f1-b5e4-9a7d47fe1fab;
 Tue, 06 Oct 2020 08:26:59 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id B855868AFE; Tue,  6 Oct 2020 10:26:56 +0200 (CEST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=RbFo=DN=lst.de=hch@srs-us1.protection.inumbo.net>)
	id 1kPiJI-0005Sr-W6
	for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 08:27:01 +0000
X-Inumbo-ID: bba97cb6-e026-43f1-b5e4-9a7d47fe1fab
Received: from verein.lst.de (unknown [213.95.11.211])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bba97cb6-e026-43f1-b5e4-9a7d47fe1fab;
	Tue, 06 Oct 2020 08:26:59 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
	id B855868AFE; Tue,  6 Oct 2020 10:26:56 +0200 (CEST)
Date: Tue, 6 Oct 2020 10:26:56 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <stefano.stabellini@xilinx.com>
Cc: Christoph Hellwig <hch@lst.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org
Subject: Re: xen-swiotlb vs phys_to_dma
Message-ID: <20201006082656.GB10243@lst.de>
References: <20201002123436.GA30329@lst.de> <alpine.DEB.2.21.2010021313010.10908@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010021313010.10908@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Fri, Oct 02, 2020 at 01:21:25PM -0700, Stefano Stabellini wrote:
> On Fri, 2 Oct 2020, Christoph Hellwig wrote:
> > Hi Stefano,
> > 
> > I've looked over xen-swiotlb in linux-next, that is with your recent
> > changes to take dma offsets into account.  One thing that puzzles me
> > is that xen_swiotlb_map_page passes virt_to_phys(xen_io_tlb_start) as
> > the tbl_dma_addr argument to swiotlb_tbl_map_single, despite the fact
> > that the argument is a dma_addr_t and both other callers translate
> > from a physical to the dma address.  Was this an oversight?
> 
> Hi Christoph,
> 
> It was not an oversight, it was done on purpose, although maybe I could
> have been wrong. There was a brief discussion on this topic here: 
> 
> https://marc.info/?l=linux-kernel&m=159011972107683&w=2
> https://marc.info/?l=linux-kernel&m=159018047129198&w=2
> 
> I'll repeat and summarize here for convenience. 
> 
> swiotlb_init_with_tbl is called by xen_swiotlb_init, passing a virtual
> address (xen_io_tlb_start), which gets converted to phys and stored in
> io_tlb_start as a physical address at the beginning of swiotlb_init_with_tbl.

Yes.

> Afterwards, xen_swiotlb_map_page calls swiotlb_tbl_map_single. The
> second parameter, dma_addr_t tbl_dma_addr, is used to calculate the
> right slot in the swiotlb buffer to use, comparing it against
> io_tlb_start.

It is not compared against io_tlb_start.  It is just used to pick
a slot that fits the dma_get_seg_boundary limitation in a somewhat
awkward way.

> Thus, I think it makes sense for xen_swiotlb_map_page to call
> swiotlb_tbl_map_single passing an address meant to be compared with
> io_tlb_start, which is __pa(xen_io_tlb_start), so
> virt_to_phys(xen_io_tlb_start) seems to be what we want.

No, it doesn't.  tlb_addr is used to ensure the picked slots satisfies
the segment boundary, and for that you need a dma_addr_t.

The index variable in swiotlb_tbl_map_single is derived from
io_tlb_index, not io_tlb_start.

> However, you are right that it is strange that tbl_dma_addr is a
> dma_addr_t, and maybe it shouldn't be? Maybe the tbl_dma_addr parameter
> to swiotlb_tbl_map_single should be a phys address instead?
> Or it could be swiotlb_init_with_tbl to be wrong and it should take a
> dma address to initialize the swiotlb buffer.

No, it must be a dma_addr_t so that the dma_get_seg_boundary check works.

I think we need something like this (against linux-next):

---
>From 07b39a62b235ed2d4b2215700d99968998fbf6c0 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <hch@lst.de>
Date: Tue, 6 Oct 2020 10:22:19 +0200
Subject: swiotlb: remove the tlb_addr argument to swiotlb_tbl_map_single

The tlb_addr always must be the dma view of io_tlb_start so that the
segment boundary checks work.  Remove the argument and do the right
thing inside swiotlb_tbl_map_single.  This fixes the swiotlb-xen case
that failed to take DMA offset into account.  The issue probably did
not show up very much in practice as the typical dma offsets are
large enough to not affect the segment boundaries for most devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/iommu/intel/iommu.c |  5 ++---
 drivers/xen/swiotlb-xen.c   |  3 +--
 include/linux/swiotlb.h     | 10 +++-------
 kernel/dma/swiotlb.c        | 16 ++++++----------
 4 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 5ee0b7921b0b37..d473811fcfacd5 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3815,9 +3815,8 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
 	 * page aligned, we don't need to use a bounce page.
 	 */
 	if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
-		tlb_addr = swiotlb_tbl_map_single(dev,
-				phys_to_dma_unencrypted(dev, io_tlb_start),
-				paddr, size, aligned_size, dir, attrs);
+		tlb_addr = swiotlb_tbl_map_single(dev, paddr, size,
+						  aligned_size, dir, attrs);
 		if (tlb_addr == DMA_MAPPING_ERROR) {
 			goto swiotlb_error;
 		} else {
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 030a225624b060..953186f6d7d222 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -395,8 +395,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 */
 	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
 
-	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
-				     phys, size, size, dir, attrs);
+	map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
 	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 513913ff748626..3bb72266a75a1d 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -45,13 +45,9 @@ enum dma_sync_target {
 	SYNC_FOR_DEVICE = 1,
 };
 
-extern phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
-					  dma_addr_t tbl_dma_addr,
-					  phys_addr_t phys,
-					  size_t mapping_size,
-					  size_t alloc_size,
-					  enum dma_data_direction dir,
-					  unsigned long attrs);
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir, unsigned long attrs);
 
 extern void swiotlb_tbl_unmap_single(struct device *hwdev,
 				     phys_addr_t tlb_addr,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 995c1b4cb427ee..8d0b7c3971e81e 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -441,14 +441,11 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
-				   dma_addr_t tbl_dma_addr,
-				   phys_addr_t orig_addr,
-				   size_t mapping_size,
-				   size_t alloc_size,
-				   enum dma_data_direction dir,
-				   unsigned long attrs)
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
@@ -667,9 +664,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size,
 			      swiotlb_force);
 
-	swiotlb_addr = swiotlb_tbl_map_single(dev,
-			phys_to_dma_unencrypted(dev, io_tlb_start),
-			paddr, size, size, dir, attrs);
+	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, dir,
+					      attrs);
 	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
 
-- 
2.28.0
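
To see why the DMA view matters for the patch above, the same boundary
test evaluated on a physical address and on the corresponding DMA
address can disagree once a DMA offset is applied.  A standalone sketch
with made-up numbers (a hypothetical 256 MiB segment boundary and a
hypothetical 16 MiB DMA offset; the helper is illustrative, not kernel
code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative only: whether a `size`-byte range at `addr` crosses a
 * (mask + 1)-aligned segment boundary.  Feeding this a physical
 * address when the device sees a shifted DMA address can give the
 * wrong answer, which is the bug the patch fixes. */
static bool crosses_seg(uint64_t addr, size_t size, uint64_t mask)
{
	return (addr & ~mask) != ((addr + size - 1) & ~mask);
}
```

With mask 0x0fffffff (256 MiB segments), an 8 KiB buffer at physical
0x0ffff000 straddles a boundary, but after a 16 MiB DMA offset it no
longer does, so checking the physical address gives the wrong result.
An offset that is a multiple of the segment size leaves the answer
unchanged, consistent with the commit message's observation that
typical large offsets do not affect the outcome in practice.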



From xen-devel-bounces@lists.xenproject.org Tue Oct 06 10:37:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 10:37:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3177.9174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPkLE-00013d-6c; Tue, 06 Oct 2020 10:37:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3177.9174; Tue, 06 Oct 2020 10:37:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPkLE-00013W-1p; Tue, 06 Oct 2020 10:37:08 +0000
Received: by outflank-mailman (input) for mailman id 3177;
 Tue, 06 Oct 2020 10:37:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=35wL=DN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPkLC-000133-3A
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 10:37:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19046129-e125-401b-b33d-0ceb5c6ea206;
 Tue, 06 Oct 2020 10:37:03 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPkL9-0001OW-Fv; Tue, 06 Oct 2020 10:37:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPkL9-0000hO-5n; Tue, 06 Oct 2020 10:37:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPkL9-0007ZP-5J; Tue, 06 Oct 2020 10:37:03 +0000
X-Inumbo-ID: 19046129-e125-401b-b33d-0ceb5c6ea206
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XgIy8AktR46V73alhobdVfye0ldEZ5uuk4e3w8nazAs=; b=6xzSTTGN6bRiPyJNUIpXWXWPos
	LGn/Cn/uaohANKU0YmbYNVH/ILgVQqhBTR+aVkLIByv/adzmgO7VtIiph9yh4+L2CbpuTEQY0DO/K
	3fEPGvVm98bFVLqbnffFcQjdw1TeyQqfrbHEz+8C9nItGfRD3o/smmyt6IDdZNP+GK9k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155451-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155451: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=8ef6345ef557cc2c47298217635a3088eaa59893
X-Osstest-Versions-That:
    xen=d4ed1d4132f5825a795d5a78505811ecd2717b5e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 06 Oct 2020 10:37:03 +0000

flight 155451 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155451/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 18 guest-start/debianhvm.repeat fail like 154592
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 154611
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 154611
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 154611
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 154611
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 154611
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 154611
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 154611
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 154611
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 154611
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  8ef6345ef557cc2c47298217635a3088eaa59893
baseline version:
 xen                  d4ed1d4132f5825a795d5a78505811ecd2717b5e

Last test of basis   154611  2020-09-22 11:26:05 Z   13 days
Failing since        154634  2020-09-23 05:59:56 Z   13 days    6 attempts
Testing same since   155451  2020-10-04 10:59:30 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Laurentiu Tudor <laurentiu.tudor@nxp.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d4ed1d4132..8ef6345ef5  8ef6345ef557cc2c47298217635a3088eaa59893 -> master


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 12:09:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 12:09:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3242.9422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPlmE-0002Yn-2q; Tue, 06 Oct 2020 12:09:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3242.9422; Tue, 06 Oct 2020 12:09:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPlmD-0002Yg-W2; Tue, 06 Oct 2020 12:09:05 +0000
Received: by outflank-mailman (input) for mailman id 3242;
 Tue, 06 Oct 2020 12:09:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=35wL=DN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPlmC-0002XT-NL
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 12:09:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57a8367f-11bb-4085-8700-66b6947a84c1;
 Tue, 06 Oct 2020 12:08:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPlm5-0003NL-5x; Tue, 06 Oct 2020 12:08:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPlm4-0004fG-RE; Tue, 06 Oct 2020 12:08:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPlm4-0001RE-QZ; Tue, 06 Oct 2020 12:08:56 +0000
X-Inumbo-ID: 57a8367f-11bb-4085-8700-66b6947a84c1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=08XQ+AoBBrZAo5EFNMFTR4Bqx5KXvYqSPuAJaAQ1HiY=; b=w2vMHLe7l4nOjqfVerc/1K9fEj
	G03T+Ky/BPsNAj4nSW1qZGhd8A8PvXm2Nll9WwVPdhmr4WlRnkuVle38NS4kF6SnW1vU9DMskFTDL
	VdCGfJ4248IH2LiPxcjEKueNJwAr5Ngj7cnoJWM5lhXXG/Jd7Tt3jvKzadihgKL1Oiv8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155454-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 155454: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:<job status>:broken:regression
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:broken:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=1719f79a0efd36d15837c51982173dd1c287dced
X-Osstest-Versions-That:
    xen=93be943e7d759015bd5db41a48f6dce58e580d5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 06 Oct 2020 12:08:56 +0000

flight 155454 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155454/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx    <job status>                 broken

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               broken never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151728
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 151728
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  1719f79a0efd36d15837c51982173dd1c287dced
baseline version:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a

Last test of basis   151728  2020-07-08 01:17:09 Z   90 days
Failing since        154621  2020-09-22 16:07:00 Z   13 days   20 attempts
Testing same since   155362  2020-10-03 03:17:48 Z    3 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 broken  
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-thunderx broken
broken-step test-arm64-arm64-xl-thunderx hosts-allocate

Not pushing.

(No revision log; it would be 368 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 12:12:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 12:12:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3244.9435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPlpX-0003Mb-IS; Tue, 06 Oct 2020 12:12:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3244.9435; Tue, 06 Oct 2020 12:12:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPlpX-0003MU-FR; Tue, 06 Oct 2020 12:12:31 +0000
Received: by outflank-mailman (input) for mailman id 3244;
 Tue, 06 Oct 2020 12:12:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2wnd=DN=suse.com=mhocko@srs-us1.protection.inumbo.net>)
 id 1kPlpW-0003MO-5Z
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 12:12:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e505fe30-1833-4b10-a6ff-f4ebc5802110;
 Tue, 06 Oct 2020 12:12:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B8F2FAB95;
 Tue,  6 Oct 2020 12:12:26 +0000 (UTC)
X-Inumbo-ID: e505fe30-1833-4b10-a6ff-f4ebc5802110
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1601986346;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mAU57Ek0qHw3pvzhn1nGqMRGEvP+BHT9YA55QmaOE0c=;
	b=PfznraKzbDoQ+43ul+fCEC6Ow3fb+GPiDyy0YheRdIEd5y093LGzF7+HryEbkAFwWyjGkf
	3zKDdTUHNT8nvpwqTgFG24rVLdFSOubG2Rg7qJL6FAxG7SZPG47/y6+D3GDFw0LRMABnKQ
	ReFcS00OGqBMBevTFcg7tcZbt8VllnQ=
Date: Tue, 6 Oct 2020 14:12:24 +0200
From: Michal Hocko <mhocko@suse.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-acpi@vger.kernel.org,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	Oscar Salvador <osalvador@suse.de>,
	Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
	Wei Yang <richard.weiyang@linux.alibaba.com>,
	Alexander Duyck <alexander.h.duyck@linux.intel.com>,
	Mel Gorman <mgorman@techsingularity.net>,
	Dave Hansen <dave.hansen@intel.com>,
	Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
	Scott Cheloha <cheloha@linux.ibm.com>,
	Michael Ellerman <mpe@ellerman.id.au>
Subject: Re: [PATCH v2 3/5] mm/page_alloc: move pages to tail in
 move_to_free_list()
Message-ID: <20201006121224.GE29020@dhcp22.suse.cz>
References: <20201005121534.15649-1-david@redhat.com>
 <20201005121534.15649-4-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201005121534.15649-4-david@redhat.com>

On Mon 05-10-20 14:15:32, David Hildenbrand wrote:
> Whenever we move pages between freelists via move_to_free_list()/
> move_freepages_block(), we don't actually touch the pages:
> 1. Page isolation doesn't actually touch the pages, it simply isolates
>    pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>    When undoing isolation, we move the pages back to the target list.
> 2. Page stealing (steal_suitable_fallback()) moves free pages directly
>    between lists without touching them.
> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
>    free pages directly between freelists without touching them.
> 
> We already place pages to the tail of the freelists when undoing isolation
> via __putback_isolated_page(), let's do it in any case (e.g., if order <=
> pageblock_order) and document the behavior. To simplify, let's move the
> pages to the tail for all move_to_free_list()/move_freepages_block() users.
> 
> In 2., the target list is empty, so there should be no change. In 3.,
> we might observe a change, however, highatomic is more concerned about
> allocations succeeding than cache hotness - if we ever realize this
> change degrades a workload, we can special-case this instance and add a
> proper comment.
> 
> This change results in all pages getting onlined via online_pages() to
> be placed to the tail of the freelist.
> 
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Scott Cheloha <cheloha@linux.ibm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Signed-off-by: David Hildenbrand <david@redhat.com>

Much simpler!
Acked-by: Michal Hocko <mhocko@suse.com>

Thanks!

> ---
>  mm/page_alloc.c     | 10 +++++++---
>  mm/page_isolation.c |  5 +++++
>  2 files changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index df5ff0cd6df1..b187e46cf640 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -901,13 +901,17 @@ static inline void add_to_free_list_tail(struct page *page, struct zone *zone,
>  	area->nr_free++;
>  }
>  
> -/* Used for pages which are on another list */
> +/*
> + * Used for pages which are on another list. Move the pages to the tail
> + * of the list - so the moved pages won't immediately be considered for
> + * allocation again (e.g., optimization for memory onlining).
> + */
>  static inline void move_to_free_list(struct page *page, struct zone *zone,
>  				     unsigned int order, int migratetype)
>  {
>  	struct free_area *area = &zone->free_area[order];
>  
> -	list_move(&page->lru, &area->free_list[migratetype]);
> +	list_move_tail(&page->lru, &area->free_list[migratetype]);
>  }
>  
>  static inline void del_page_from_free_list(struct page *page, struct zone *zone,
> @@ -2340,7 +2344,7 @@ static inline struct page *__rmqueue_cma_fallback(struct zone *zone,
>  #endif
>  
>  /*
> - * Move the free pages in a range to the free lists of the requested type.
> + * Move the free pages in a range to the freelist tail of the requested type.
>   * Note that start_page and end_pages are not aligned on a pageblock
>   * boundary. If alignment is required, use move_freepages_block()
>   */
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index abfe26ad59fd..83692b937784 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -106,6 +106,11 @@ static void unset_migratetype_isolate(struct page *page, unsigned migratetype)
>  	 * If we isolate freepage with more than pageblock_order, there
>  	 * should be no freepage in the range, so we could avoid costly
>  	 * pageblock scanning for freepage moving.
> +	 *
> +	 * We didn't actually touch any of the isolated pages, so place them
> +	 * to the tail of the freelist. This is an optimization for memory
> +	 * onlining - just onlined memory won't immediately be considered for
> +	 * allocation.
>  	 */
>  	if (!isolated_page) {
>  		nr_pages = move_freepages_block(zone, page, migratetype, NULL);
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 13:19:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 13:19:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3247.9448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPmsD-00011F-M8; Tue, 06 Oct 2020 13:19:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3247.9448; Tue, 06 Oct 2020 13:19:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPmsD-000118-J3; Tue, 06 Oct 2020 13:19:21 +0000
Received: by outflank-mailman (input) for mailman id 3247;
 Tue, 06 Oct 2020 13:19:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=35wL=DN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPmsB-00010a-TP
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 13:19:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca43404f-fa93-4140-a58f-e397edd122be;
 Tue, 06 Oct 2020 13:19:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPms3-0004qk-V7; Tue, 06 Oct 2020 13:19:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPms3-0002Hz-M0; Tue, 06 Oct 2020 13:19:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPms3-0002oB-LC; Tue, 06 Oct 2020 13:19:11 +0000
X-Inumbo-ID: ca43404f-fa93-4140-a58f-e397edd122be
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hcdddM1LVHOioz1b2sgAKqxd5DhNDTxA6PivfCG9iR4=; b=22+NSZEEGiCJqJnGSFiHiBHO4x
	TgEyAfQa9oFT1EpBKvaGtc2omQ/x6plk8YrSPDSiGyFmBBDdJdjmWywFc4KSVql+t139Yzk8U2jpE
	eS3kXx+vJghtdX2guPQ+QCStCeFA/MLdSrtkAbprWD8E/iRrf9pRedimfEr9rwESmZAI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155457-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155457: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=22fbc037cd32e4e6771d2271b565806cfb8c134c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 06 Oct 2020 13:19:11 +0000

flight 155457 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155457/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair          8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl            6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair          9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  6 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  6 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  9 xen-install/dst_host     fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                22fbc037cd32e4e6771d2271b565806cfb8c134c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   66 days
Failing since        152366  2020-08-01 20:49:34 Z   65 days  111 attempts
Testing same since   155457  2020-10-04 16:15:05 Z    1 days    1 attempts

------------------------------------------------------------
2471 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 333669 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 14:34:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 14:34:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3252.9463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPo2F-00009h-DO; Tue, 06 Oct 2020 14:33:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3252.9463; Tue, 06 Oct 2020 14:33:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPo2F-00009a-9y; Tue, 06 Oct 2020 14:33:47 +0000
Received: by outflank-mailman (input) for mailman id 3252;
 Tue, 06 Oct 2020 14:33:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=35wL=DN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPo2D-000096-IW
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 14:33:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ee7a53f-ff10-4023-ad1e-88546a27698d;
 Tue, 06 Oct 2020 14:33:37 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPo25-0006Pr-1U; Tue, 06 Oct 2020 14:33:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPo24-0005Xp-NR; Tue, 06 Oct 2020 14:33:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPo24-0008Oc-Mx; Tue, 06 Oct 2020 14:33:36 +0000
X-Inumbo-ID: 5ee7a53f-ff10-4023-ad1e-88546a27698d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uWCgcJGLDEPy8pfSdiwD+0QC/IomAqm7ihwKZnlAJX4=; b=qrkJZZQcGTUeStGeUpk3z0czwE
	VmmGFy+/schHBpSvFSkbTJYtMxWoV24ui8ZjtHzfrOmMa69fFpe9gyruXMncarlcYq9QQ72HQmvug
	j3NHS/PxAiFCwDLQMgJ17nwR9em3K0fR+kJWvcBmwevEbw24LW8tiJMAzd1Oyqhjf2+Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155495-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155495: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=93508595d588afe9dca087f95200effb7cedc81f
X-Osstest-Versions-That:
    xen=8ef6345ef557cc2c47298217635a3088eaa59893
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 06 Oct 2020 14:33:36 +0000

flight 155495 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155495/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  93508595d588afe9dca087f95200effb7cedc81f
baseline version:
 xen                  8ef6345ef557cc2c47298217635a3088eaa59893

Last test of basis   155349  2020-10-02 18:00:30 Z    3 days
Testing same since   155495  2020-10-06 12:00:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8ef6345ef5..93508595d5  93508595d588afe9dca087f95200effb7cedc81f -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 15:42:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 15:42:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3255.9476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPp62-0006v5-Hv; Tue, 06 Oct 2020 15:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3255.9476; Tue, 06 Oct 2020 15:41:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPp62-0006uy-Er; Tue, 06 Oct 2020 15:41:46 +0000
Received: by outflank-mailman (input) for mailman id 3255;
 Tue, 06 Oct 2020 15:41:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=35wL=DN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPp61-0006us-Dw
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 15:41:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5625e1a2-3b70-4c3d-9cd2-5f5f4e81cd5e;
 Tue, 06 Oct 2020 15:41:41 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPp5w-0007nr-Vk; Tue, 06 Oct 2020 15:41:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPp5w-0001lI-Lv; Tue, 06 Oct 2020 15:41:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPp5w-0006zP-LO; Tue, 06 Oct 2020 15:41:40 +0000
X-Inumbo-ID: 5625e1a2-3b70-4c3d-9cd2-5f5f4e81cd5e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lPUyzu7ySFbmCioJqHDPGfbRVYox5gZ0Wh5xJH2ZHkY=; b=ieu0pR14wUdnby4EsHA2qDBCLk
	q7DjTHL0WbJMxMnu4Ak704haevBsRpi/7FH6VGCxc3MCKS7bZg/UCqmLLsiPgTrgA/Mk5Wyw+deGj
	uWWSqu/E4HZgqfQvItQ3O8RlxW8m93pecLk08l7a4okSFZXv71CmvfiZeZVxHIrAenMw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155475-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155475: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=0bb796bda31103ebf54eefc11c471586c87b95fd
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 06 Oct 2020 15:41:40 +0000

flight 155475 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155475/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              0bb796bda31103ebf54eefc11c471586c87b95fd
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   88 days
Failing since        151818  2020-07-11 04:18:52 Z   87 days   81 attempts
Testing same since   155397  2020-10-03 20:12:11 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18188 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 15:47:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 15:47:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3259.9492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPpBE-0007GQ-87; Tue, 06 Oct 2020 15:47:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3259.9492; Tue, 06 Oct 2020 15:47:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPpBE-0007GJ-4w; Tue, 06 Oct 2020 15:47:08 +0000
Received: by outflank-mailman (input) for mailman id 3259;
 Tue, 06 Oct 2020 15:47:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ivvU=DN=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kPpBC-0007Fa-Ka
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 15:47:06 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.42]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca1153e5-e93b-4e64-af8c-7523f09e957b;
 Tue, 06 Oct 2020 15:47:04 +0000 (UTC)
Received: from AM7PR04CA0026.eurprd04.prod.outlook.com (2603:10a6:20b:110::36)
 by AM6PR08MB5077.eurprd08.prod.outlook.com (2603:10a6:20b:e6::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38; Tue, 6 Oct
 2020 15:47:03 +0000
Received: from AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:110:cafe::28) by AM7PR04CA0026.outlook.office365.com
 (2603:10a6:20b:110::36) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.37 via Frontend
 Transport; Tue, 6 Oct 2020 15:47:03 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT025.mail.protection.outlook.com (10.152.16.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Tue, 6 Oct 2020 15:47:02 +0000
Received: ("Tessian outbound a0bffebca527:v64");
 Tue, 06 Oct 2020 15:47:02 +0000
Received: from a58f2633d65b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 71A84ACF-D9B0-4BD7-B76F-1C1B67705918.1; 
 Tue, 06 Oct 2020 15:46:50 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a58f2633d65b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 06 Oct 2020 15:46:50 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5752.eurprd08.prod.outlook.com (2603:10a6:10:1ac::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Tue, 6 Oct
 2020 15:46:49 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3433.045; Tue, 6 Oct 2020
 15:46:49 +0000
X-Inumbo-ID: ca1153e5-e93b-4e64-af8c-7523f09e957b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H1ePMpoFGvhxROigwQPGp/4mYvJqM2Wup7VEwVmLkik=;
 b=Ad4c9nQzUUmpRWhdCUuwIk3OtPCpxRWmCqB58CebCoZQLt+Oi0/tD1HR20fR4MHhZra0t0aco2KDe5F2gGbcfM8VPxow5JnCK+kJJ1UV8KQn+zSfWJ7Wh4yCQOPsq0KDeIYdPld7+7RuBS0ARXGXSU1rBg+W8qws5J4BxlTLBJ8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e502637991c059a4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ca0AxA83QuCRR0eEAWVwNm+lPbMKENGQoJPT8JT37GZnTxaW8wU9f6ehuTfNjUzooep3sQwrEXjP11pu24DayXNX8Fi+ALi7tJ56bDiLK48JCLP5y/fRK16/CBTiZoWn8MLM8y+7o3FWcvuogvYFZVCZch7kDRI78AeV04b365wiyij2Vk6CsQYdX5IxPjQszIHtX2I3woULBAj2M/ALBfsvBG6muNarImwfhXMU9NEz+eqkBHmN9jMA7fY9X8K8ZefP2fYzfY/RqbvCtixK1rAuT9aO3rBrOK4cVeaalYm4XS4g/7Vy7LxYKKRephp6CoIOiMule8Bw/XuFbEPoWA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H1ePMpoFGvhxROigwQPGp/4mYvJqM2Wup7VEwVmLkik=;
 b=Re+LWVBknpTHsUgHZh49cCQISpYnUtmZEP/B8G9MDA2kho8W1+rZpTbPcBtx+RhE3WOEohBzNKo9XtTVYWoEjn/OR2uMb/Cj2cdQXtrcabveWJi8Q1S+UMreZiCBJ64zcz5y1XDBmWvOaXL/NGydvhYNnEs+xcvrdrOL2HdgZgRZdQbKCYhMfoqKYK0is5344TQO4B0xgCBt7XPaXtV3fmCEn8otYVkn2HJtYRVXEqPk5S3q+EJ9cU0peiw7yUwB/xCxrDAxWLObgcybkSuMnToD7MXjgMqMqlmHcEjjOh2wye66EUwvePwYdk2Oq6U5dkQL3rN0j3yjkmWJu6FWyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/2] tools: use memcpy instead of strncpy in getBridge
Thread-Topic: [PATCH 1/2] tools: use memcpy instead of strncpy in getBridge
Thread-Index: AQHWmzF2iz7tdPrAmU+VUEf6Gggal6mJ/YIAgAA3CACAAAfxgIAAfPCA
Date: Tue, 6 Oct 2020 15:46:49 +0000
Message-ID: <D645DDC5-1EA3-4780-AD3C-081A0971BCA2@arm.com>
References: <cover.1601913536.git.bertrand.marquis@arm.com>
 <3de58159c6fde0cdfa4d0f292fa55fdb931cb3aa.1601913536.git.bertrand.marquis@arm.com>
 <95b2a6f9-e063-8276-db62-ddaac06f4b7b@suse.com>
 <F313E5EA-DF1A-4AC2-885B-75FD1B1D8211@arm.com>
 <58c4b793-e093-ca49-a6a1-1a5073013831@suse.com>
In-Reply-To: <58c4b793-e093-ca49-a6a1-1a5073013831@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 70fe1248-6f14-4489-ea6a-08d86a0f11eb
x-ms-traffictypediagnostic: DBAPR08MB5752:|AM6PR08MB5077:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB5077920EEF35524841DAF9CE9D0D0@AM6PR08MB5077.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 /V0OyVc9xf4ZkrWSbI+5SWRKgDJ0pTGTvv3ifLFk98+nbg0tdn+P8K39mA2UgK86Fe6ws3aLwbxecejzlzOL0wkLgXVSmmBI3AJzSiL8JQPHdttSdYgiskQgRImJoeNHimNQLZL6xEhHNSdSz/aDt+6SnS+9PoP8c7uWWykpDebrpjba1vfS9sjoJZ5iARueFLJMBptN8S2sRE13urqV5q4ZZ8jhufiREp3S+Hr8jK5TU8rE/OzXbS1g6QXDfMKnFGga3aZMPdB3wQwcNmHDMLxzgI0J+NCrXHEwpFBQc6zaNhcjXoJGAuL7y8WQv/WEszx6dNhH6Vhbivw7b+Cojg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(39860400002)(136003)(346002)(396003)(66574015)(83380400001)(66476007)(66946007)(66446008)(91956017)(5660300002)(54906003)(316002)(26005)(66556008)(186003)(53546011)(76116006)(6506007)(2616005)(86362001)(6916009)(64756008)(4326008)(6486002)(71200400001)(8676002)(33656002)(478600001)(36756003)(6512007)(2906002)(8936002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 PYqgru1/1n9i9BxfrJpFxAC/2XT6xM9aq32Ncb1qkWitEfyi+XpoTnMsmp08C4510j3IWCuAX6CliFNl+xN1+uH453fSNw6dQOHkVBIUl1G5TiarL4UuhVjxKQyLQ2OEGymcRi15usFTZE63s8i1O3uxgjnse1uw1MVJ2Wm8xe5O80iIopZHrIhPKGx8pzyFYkfz9L6Y9Qyf3PFcK2hAm7etYVYwkEGaGYfsK5WpboXXv31PSesxoqL4yzjbXpNckEqF66xQxnjSn2z57omkfpeNqLO14C0stozUWKrd30/Wf7E4B4Zvu30h7mCUsbZpNL7obQjFQ+iCCp8ufJ/SRhKAvJuLrhP4vapP+qY8b9Jk5Wcvzf4DEQwKNHSpmt1bsf+h1FBycCP9umOr8nnUVjvUcmbwQQSKT/z9/NokOcKvD9WPQGEkNTd5YZnC4xxyhFSO1kFbomaHaE2yUcEbSAhxb/9GvmK7Rgs4qHJta7jYdKs2f/kf+92UF3ZoiuRRV5MjF3QiKXArQtwqDMhmaVZvhN3Fpz2muL44scwqPPdjLvtDykpdd+WQuQmWplySvWD/5wYAz1p82/8EcrrqCdwnIlLLDSIqd5YIN/tKZgnf2Q5Oov/x7bL/qCvA991vAI2I32FVyLEg6ZzLVAAXNA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <A8E494F3E4CCF94CAAD6286708F31050@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5752
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	51af93bd-32ad-432e-ea4f-08d86a0f09f2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	VOh1BOr3fTsAG6zxzc4i8VWmz5fb64nIymCRn0GGm+dg9i4phQOTJ9jt7ApB2dDBCj2Ur4UMrvm60y+LBe9JPxIridTQnXvk507Uy1502U+fI28hXDm21uVshzGcuqPbMRPZBrTfAP5+Swduc5YvTOPERqp+DKulsQmEfKMvnIKAZPPZFyaINHhS7w1O0obm40yiggF3305S0w7yLeq++rpHzdTLL5qXrmZhj0ACS2PwYpV0m8hkdz8psbRMwEnK2gtjGYTAjwrfV+RmXttIm6NpnxiNfaB+y52Dv9jK5th9n7U6QRAAbxJr8xip4G2xY5C0ODCxMkSJyVWsV3tpabN1LgIk66yu2EasO80oX70oIGzJLKljnlZPMtejaip7Xp3/EXoFlrS1pYVAftVmXQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(396003)(39860400002)(46966005)(70586007)(2906002)(82310400003)(6512007)(6862004)(4326008)(356005)(86362001)(6486002)(83380400001)(66574015)(8936002)(70206006)(33656002)(5660300002)(81166007)(47076004)(26005)(53546011)(2616005)(8676002)(82740400003)(36756003)(54906003)(6506007)(316002)(478600001)(36906005)(186003)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Oct 2020 15:47:02.9872
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 70fe1248-6f14-4489-ea6a-08d86a0f11eb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5077

Hi Jurgen,

> On 6 Oct 2020, at 09:19, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 06.10.20 09:51, Bertrand Marquis wrote:
>>> On 6 Oct 2020, at 05:34, Jürgen Groß <jgross@suse.com> wrote:
>>> 
>>> On 05.10.20 18:02, Bertrand Marquis wrote:
>>>> Use memcpy in getBridge to prevent gcc warnings about truncated
>>>> strings. We know that we might truncate it, so the gcc warning
>>>> here is wrong.
>>>> Revert previous change changing buffer sizes as bigger buffers
>>>> are not needed.
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>> ---
>>>>  tools/libs/stat/xenstat_linux.c | 9 +++++++--
>>>>  1 file changed, 7 insertions(+), 2 deletions(-)
>>>> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
>>>> index d2ee6fda64..1db35c604c 100644
>>>> --- a/tools/libs/stat/xenstat_linux.c
>>>> +++ b/tools/libs/stat/xenstat_linux.c
>>>> @@ -78,7 +78,12 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>>>>  				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>>>>    				if (access(tmp, F_OK) == 0) {
>>>> -					strncpy(result, de->d_name, resultLen);
>>>> +					/*
>>>> +					 * Do not use strncpy to prevent compiler warning with
>>>> +					 * gcc >= 10.0
>>>> +					 * If de->d_name is longer then resultLen we truncate it
>>>> +					 */
>>>> +					memcpy(result, de->d_name, resultLen - 1);
>>> 
>>> I think you want min(NAME_MAX, resultLen - 1) for the length.
>> true, I will fix that and send a v2.
> 
> Hmm, maybe you should use
> 
> min(strnlen(de->d_name, NAME_MAX), resultLen - 1)
> 
> for the case that de->d_name is near the end of a page, as otherwise
> you could try to copy unallocated space.
> 

Agree, I will do that.

Cheers
Bertrand
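The reply above settles on copying min(strnlen(de->d_name, NAME_MAX), resultLen - 1) bytes, so the copy is bounded by both the source string and the destination buffer. A minimal standalone sketch of that agreed fix (copy_bridge_name is a hypothetical helper name for illustration, not the actual xenstat code):

```c
#define _POSIX_C_SOURCE 200809L
#include <string.h>
#include <limits.h>  /* NAME_MAX */
#include <stddef.h>

/*
 * Hypothetical helper illustrating the copy discussed in the thread:
 * truncate to the destination size, but use strnlen() so we never read
 * past the end of d_name, even when the source string is short and sits
 * near the end of a mapped page.
 */
static void copy_bridge_name(char *result, size_t resultLen,
                             const char *d_name)
{
    size_t len;

    if (resultLen == 0)
        return;

    /* strnlen() stops at the NUL terminator or after NAME_MAX bytes. */
    len = strnlen(d_name, NAME_MAX);
    if (len > resultLen - 1)
        len = resultLen - 1;

    memcpy(result, d_name, len);
    result[len] = '\0';  /* memcpy does not NUL-terminate */
}
```

Bounding the read with strnlen() addresses Jürgen's page-boundary concern, and the explicit NUL write restores the terminator that a truncating memcpy (unlike strncpy) does not add.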


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 16:26:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 16:26:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3283.9506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPpnO-0003CW-9r; Tue, 06 Oct 2020 16:26:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3283.9506; Tue, 06 Oct 2020 16:26:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPpnO-0003CP-6p; Tue, 06 Oct 2020 16:26:34 +0000
Received: by outflank-mailman (input) for mailman id 3283;
 Tue, 06 Oct 2020 16:26:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZSAu=DN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kPpnM-0003CK-49
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 16:26:32 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e313a6df-be03-4653-a797-cec8643e2d98;
 Tue, 06 Oct 2020 16:26:28 +0000 (UTC)
X-Inumbo-ID: e313a6df-be03-4653-a797-cec8643e2d98
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602001588;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=+MnaDUf14ywwIvsahBfXD3XH69/cHlmD0ijO1Ty9M3I=;
  b=VKxBn0HyvwYxAfFAelhslHDezLz7IZFO8PBBKfY2v4Dmd/VxBduyeEVc
   KL0M2Kp00HrmR4j2Mv+CbiI5HgxB69aWCJnUpi5G7ZM2MgZ7Qxs+27Chm
   GL8MNpLjtJQreMhii3J9A+CnufrGKSFhrf4ArRxcgbaTzE9Ue0uezgmje
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: JYjHCyOE/FjAz3rFkiEHHlkloO1nD65ta8NIAFQyPxoHeebi8JpJr/V5yq10MwT42oobtljxAa
 QIm1HiH4ad+BFydxzkgZj1ZcyAc/WER/4MTKtUdZe7g0/o+gBWFO36uNwFTKgRecoNQhqg5Ocf
 LUY2Q7JH8KrNkt1d1DwHFUIBKjfAw+Ts02UQttyZ+BRGfkxC1WnzjM+lORRxnZtnE8SaFwWnt+
 7QEyYpxYTZf42f9Cp4mgAXNMWE39KOweYAAT8Kl9cohRuo3gDxI5NghgeI4UDLbkHSCGaZQu3F
 SKY=
X-SBRS: None
X-MesageID: 28409371
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,343,1596513600"; 
   d="scan'208";a="28409371"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "George
 Dunlap" <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, "Julien
 Grall" <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
Date: Tue, 6 Oct 2020 18:23:27 +0200
Message-ID: <20201006162327.93055-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Currently a PV hardware domain can also be given control over the CPU
frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
However, since commit 322ec7c89f6 the default behavior has been to
reject accesses to MSRs that are not explicitly handled, preventing PV
guests that manage CPU frequency from reading
MSR_IA32_PERF_{STATUS/CTL}.

Additionally, some HVM guests (Windows at least) will attempt to read
MSR_IA32_PERF_CTL and will panic if the access yields a #GP fault:

vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0

Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
handling shared between HVM and PV guests, and add an explicit case
for reads to MSR_IA32_PERF_{STATUS/CTL}.

Restore the previous behavior and allow PV guests with the required
permissions to read the contents of the mentioned MSRs. Non-privileged
guests will read 0 from those registers, since writes to
MSR_IA32_PERF_CTL by such guests are already silently dropped.

Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/msr.c             | 20 ++++++++++++++++++++
 xen/arch/x86/pv/emul-priv-op.c | 14 --------------
 xen/include/xen/sched.h        |  6 ++++++
 3 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index 81b34fb212..e4c4fa6127 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -242,6 +242,17 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
             goto gp_fault;
         break;
 
+    case MSR_IA32_PERF_STATUS:
+    case MSR_IA32_PERF_CTL:
+        if ( cp->x86_vendor != X86_VENDOR_INTEL )
+            goto gp_fault;
+        *val = 0;
+        if ( likely(!is_cpufreq_controller(d)) ||
+             boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
+             rdmsr_safe(msr, *val) == 0 )
+            break;
+        goto gp_fault;
+
     case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( !is_hvm_domain(d) || v != curr )
             goto gp_fault;
@@ -442,6 +453,15 @@ int guest_wrmsr(struct vcpu *v, uint32_t msr, uint64_t val)
             goto gp_fault;
         break;
 
+    case MSR_IA32_PERF_CTL:
+        if ( cp->x86_vendor != X86_VENDOR_INTEL )
+            goto gp_fault;
+        if ( likely(!is_cpufreq_controller(d)) ||
+             boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
+             wrmsr_safe(msr, val) == 0 )
+            break;
+        goto gp_fault;
+
     case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( !is_hvm_domain(d) || v != curr )
             goto gp_fault;
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 7cc16d6eda..dbceed8a05 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -849,12 +849,6 @@ static inline uint64_t guest_misc_enable(uint64_t val)
     return val;
 }
 
-static inline bool is_cpufreq_controller(const struct domain *d)
-{
-    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
-            is_hardware_domain(d));
-}
-
 static uint64_t guest_efer(const struct domain *d)
 {
     uint64_t val;
@@ -1121,14 +1115,6 @@ static int write_msr(unsigned int reg, uint64_t val,
             return X86EMUL_OKAY;
         break;
 
-    case MSR_IA32_PERF_CTL:
-        if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
-            break;
-        if ( likely(!is_cpufreq_controller(currd)) ||
-             wrmsr_safe(reg, val) == 0 )
-            return X86EMUL_OKAY;
-        break;
-
     case MSR_IA32_THERM_CONTROL:
     case MSR_IA32_ENERGY_PERF_BIAS:
         if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..41baa3b7a1 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1069,6 +1069,12 @@ extern enum cpufreq_controller {
     FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
 } cpufreq_controller;
 
+static inline bool is_cpufreq_controller(const struct domain *d)
+{
+    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
+            is_hardware_domain(d));
+}
+
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 06 17:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 17:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3289.9525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPrCq-0003dY-2y; Tue, 06 Oct 2020 17:56:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3289.9525; Tue, 06 Oct 2020 17:56:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPrCp-0003dR-W8; Tue, 06 Oct 2020 17:56:55 +0000
Received: by outflank-mailman (input) for mailman id 3289;
 Tue, 06 Oct 2020 17:56:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oh4O=DN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kPrCp-0003dM-8R
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 17:56:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74b18617-d2bc-4839-a928-5fd30d024d84;
 Tue, 06 Oct 2020 17:56:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id ECA64206D4;
 Tue,  6 Oct 2020 17:56:52 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oh4O=DN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kPrCp-0003dM-8R
	for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 17:56:55 +0000
X-Inumbo-ID: 74b18617-d2bc-4839-a928-5fd30d024d84
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 74b18617-d2bc-4839-a928-5fd30d024d84;
	Tue, 06 Oct 2020 17:56:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id ECA64206D4;
	Tue,  6 Oct 2020 17:56:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602007013;
	bh=6xwkV5dowZG+AGFK3o6Fg/zFGiBS1Zab6Mdlj9bmdf8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=p/o8rpkPaWKAOJErXCTTnFk86VLXwqH9uU7bgiHZrpS2zPwxMKi9yqd0krYwsViDK
	 mZ0wW7xCx8VizqKhE0K8vAJ+jisxAt+oOURnRiiLIw1gUSJsFiL9w30iWr88TAOows
	 jAoRglv7lDrdcz7rt2IuDEdDREj9qLyTU1PA5sZo=
Date: Tue, 6 Oct 2020 10:56:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Masami Hiramatsu <mhiramat@kernel.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, 
    takahiro.akashi@linaro.org, jgross@suse.com, boris.ostrovsky@oracle.com
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
In-Reply-To: <20201006114058.b93839b1b8f35a470874572b@kernel.org>
Message-ID: <alpine.DEB.2.21.2010061040350.10908@sstabellini-ThinkPad-T480s>
References: <160190516028.40160.9733543991325671759.stgit@devnote2> <b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org> <alpine.DEB.2.21.2010051526550.10908@sstabellini-ThinkPad-T480s> <20201006114058.b93839b1b8f35a470874572b@kernel.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 6 Oct 2020, Masami Hiramatsu wrote:
> On Mon, 5 Oct 2020 18:13:22 -0700 (PDT)
> Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> > On Mon, 5 Oct 2020, Julien Grall wrote:
> > > Hi Masami,
> > > 
> > > On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > > > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > > > address conversion.
> > > > 
> > > > In xen_starting_cpu(), per-cpu xen_vcpu_info address is converted
> > > > to gfn by virt_to_gfn() macro. However, since the virt_to_gfn(v)
> > > > assumes the given virtual address is in contiguous kernel memory
> > > > area, it can not convert the per-cpu memory if it is allocated on
> > > > vmalloc area (depends on CONFIG_SMP).
> > > 
> > > Are you sure about this? I have a .config with CONFIG_SMP=y where the per-cpu
> > > region for CPU0 is allocated outside of vmalloc area.
> > > 
> > > However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING was
> > > enabled.
> > 
> > I cannot reproduce the issue with defconfig, but I can with Masami's
> > kconfig.
> > 
> > If I disable just CONFIG_NUMA_BALANCING from Masami's kconfig, the
> > problem still appears.
> > 
> > If I disable CONFIG_NUMA from Masami's kconfig, it works, which is
> > strange because CONFIG_NUMA is enabled in defconfig, and defconfig
> > works.
> 
> Hmm, strange, because when I disabled CONFIG_NUMA_BALANCING, the issue
> disappeared.
> 
> --- config-5.9.0-rc4+   2020-10-06 11:36:20.620107129 +0900
> +++ config-5.9.0-rc4+.buggy     2020-10-05 21:04:40.369936461 +0900
> @@ -131,7 +131,8 @@
>  CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
>  CONFIG_CC_HAS_INT128=y
>  CONFIG_ARCH_SUPPORTS_INT128=y
> -# CONFIG_NUMA_BALANCING is not set
> +CONFIG_NUMA_BALANCING=y
> +CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
>  CONFIG_CGROUPS=y
>  CONFIG_PAGE_COUNTER=y
>  CONFIG_MEMCG=y
> 
> So buggy config just enabled NUMA_BALANCING (and default enabled)

Yeah but both NUMA and NUMA_BALANCING are enabled in defconfig which
works fine...

[...]

> > The fix is fine for me. I tested it and it works. We need to remove the
> > "Fixes:" line from the commit message. Ideally, replacing it with a
> > reference to what is the source of the problem.
> 
> OK, as I said, it seems commit 9a9ab3cc00dc ("xen/arm: SMP support") has
> introduced the per-cpu code. So note it instead of Fixes tag.

...and commit 9a9ab3cc00dc was already present in 5.8 which also works
fine with your kconfig. Something else changed in 5.9 causing this
breakage as a side effect. Commit 9a9ab3cc00dc is there since 2013, I
think it is OK -- this patch is fixing something else.


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 19:47:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 19:47:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3298.9555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPsvb-000650-I3; Tue, 06 Oct 2020 19:47:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3298.9555; Tue, 06 Oct 2020 19:47:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPsvb-00064t-Dz; Tue, 06 Oct 2020 19:47:15 +0000
Received: by outflank-mailman (input) for mailman id 3298;
 Tue, 06 Oct 2020 19:47:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=35wL=DN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPsvZ-000647-Mj
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 19:47:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0e0cbae-2761-4125-b33f-6d0bd5e291b2;
 Tue, 06 Oct 2020 19:47:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPsvV-00056K-Mh; Tue, 06 Oct 2020 19:47:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPsvV-0002vX-CS; Tue, 06 Oct 2020 19:47:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPsvV-0004ZX-Bw; Tue, 06 Oct 2020 19:47:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=35wL=DN=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kPsvZ-000647-Mj
	for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 19:47:13 +0000
X-Inumbo-ID: b0e0cbae-2761-4125-b33f-6d0bd5e291b2
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b0e0cbae-2761-4125-b33f-6d0bd5e291b2;
	Tue, 06 Oct 2020 19:47:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cszU28WJ+YHFVxPEwbkEtiWdVCvhnJFey1uhTGr89Mg=; b=fPjTBD1GMD+m+m9ZcLkirKfdqu
	/hLniVmtGM7M67+Yae9WZzhDik0DcS9RZO4WdzeOWCD/7uvGYoT4DFgUqYH9p9JWb6os0DTPDf9cK
	gJVmUJWOwkGGP5XnfH3uREUqryYrTaheSwY4ayRagVxxnP469LXPRghWPIOmyKF6ErHs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPsvV-00056K-Mh; Tue, 06 Oct 2020 19:47:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPsvV-0002vX-CS; Tue, 06 Oct 2020 19:47:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPsvV-0004ZX-Bw; Tue, 06 Oct 2020 19:47:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155483-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155483: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=36d9c2883e55c863b622b99f0ebb5143f0001401
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 06 Oct 2020 19:47:09 +0000

flight 155483 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155483/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    10 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 11 guest-start    fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 11 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-start/freebsd.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                36d9c2883e55c863b622b99f0ebb5143f0001401
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   47 days
Failing since        152659  2020-08-21 14:07:39 Z   46 days   77 attempts
Testing same since   155483  2020-10-05 16:14:14 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 39162 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 06 20:46:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 06 Oct 2020 20:46:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3303.9568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPtqr-0003WP-Bs; Tue, 06 Oct 2020 20:46:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3303.9568; Tue, 06 Oct 2020 20:46:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPtqr-0003WI-8s; Tue, 06 Oct 2020 20:46:25 +0000
Received: by outflank-mailman (input) for mailman id 3303;
 Tue, 06 Oct 2020 20:46:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tUxE=DN=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1kPtqp-0003WA-UR
 for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 20:46:24 +0000
Received: from NAM02-BL2-obe.outbound.protection.outlook.com (unknown
 [40.107.75.55]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id beaf8746-f249-432f-b8ca-9e195ae49b96;
 Tue, 06 Oct 2020 20:46:19 +0000 (UTC)
Received: from SN4PR0201CA0068.namprd02.prod.outlook.com
 (2603:10b6:803:20::30) by BL0PR02MB3762.namprd02.prod.outlook.com
 (2603:10b6:207:41::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Tue, 6 Oct
 2020 20:46:17 +0000
Received: from SN1NAM02FT051.eop-nam02.prod.protection.outlook.com
 (2603:10b6:803:20:cafe::dd) by SN4PR0201CA0068.outlook.office365.com
 (2603:10b6:803:20::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38 via Frontend
 Transport; Tue, 6 Oct 2020 20:46:17 +0000
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 SN1NAM02FT051.mail.protection.outlook.com (10.152.73.103) with Microsoft SMTP
 Server id 15.20.3433.39 via Frontend Transport; Tue, 6 Oct 2020 20:46:17
 +0000
Received: from [149.199.38.66] (port=51248 helo=smtp.xilinx.com)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kPtq9-000439-Re; Tue, 06 Oct 2020 13:45:41 -0700
Received: from [127.0.0.1] (helo=localhost)
 by smtp.xilinx.com with smtp (Exim 4.63)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kPtqi-0001z8-SK; Tue, 06 Oct 2020 13:46:16 -0700
Received: from [10.23.120.52] (helo=localhost)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <stefanos@xilinx.com>)
 id 1kPtqe-0001yS-ME; Tue, 06 Oct 2020 13:46:12 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=tUxE=DN=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
	id 1kPtqp-0003WA-UR
	for xen-devel@lists.xenproject.org; Tue, 06 Oct 2020 20:46:24 +0000
X-Inumbo-ID: beaf8746-f249-432f-b8ca-9e195ae49b96
Received: from NAM02-BL2-obe.outbound.protection.outlook.com (unknown [40.107.75.55])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id beaf8746-f249-432f-b8ca-9e195ae49b96;
	Tue, 06 Oct 2020 20:46:19 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kHWNjGV/BtTptwOTY2Luh8Zb2beLzkmft3OsemzXzWgXbL5iRgjrrbfViO6bko/2MCM9wmUxLtx28qIfDBauQvAyZBnE7O1z4FgZEURIfpJBZRDWxBchSPqImfufQnQwtSlYZqoHQ1344M20hrmg7g2K9nsR33uf/vMgZNZiof43V0jkw5Ody15OcZBNaScN4K/6HXQTPcZaP17EFUlcUOqCBtHrUWk74F/1ebp1d7Q29sw/nxsbHQy6AAN6Zei2Mig2ZNNyiu9j+SiaIHXaQRn4u9av5K9fomTQHstaQaZUWH1flgMmKVYzRal5Rb3G54IPbhsh6Qxe/6gFYQsKuw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zmXqAuQ4/OV/KkzpSFzwJ+qvuxQ4uQ2Fjmb++dicujk=;
 b=XqLllMQexvCtyGPvgMj616vZD0U6iNfQmSKNQxYmv1xjm+15cO86qhktgDxtxWY4kCzcs5+VTyO9glOtyWPZs/2bUPJFLOw7QjMOnkLfolXRsz5XAThOBs6o+p/LSmMnnB1VM8lIujJ9Z2BxajSDWj5R2XYtCahdtZ6S0ZU/ckkVY27M94EmcRO9Bmzb4WgtMsKaRW0A1Ns6nUyjPEOmQDVa6uywPTehE/91LQIR1I3AuWVv2c5q5SwLy1RcOqTauqo12bCdLg/eBeKTfcu3qUClnC8oJUn3o+1mtbpZ0oleaPulRJ0OA6hH0vPqYBgJYHoLGNPMJn8x/Er2vSVVng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=lst.de smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zmXqAuQ4/OV/KkzpSFzwJ+qvuxQ4uQ2Fjmb++dicujk=;
 b=pK2ea/DD/BV519WphjMZ2rlOWM7Sm7T/HC0p1YtMqxEdU9HAoFnSdxQeIah7aUtLfRUj0dy4BXRSNbFYCNe8W1iXQSPUGd4F2ktGUMx+3z8rzaFg6Km8qQ9bwJcz/zKVZOg3sZNYNjIleW314Vk2c28P0NbcrrhXLLqE9RoO1Nc=
Received: from SN4PR0201CA0068.namprd02.prod.outlook.com
 (2603:10b6:803:20::30) by BL0PR02MB3762.namprd02.prod.outlook.com
 (2603:10b6:207:41::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Tue, 6 Oct
 2020 20:46:17 +0000
Received: from SN1NAM02FT051.eop-nam02.prod.protection.outlook.com
 (2603:10b6:803:20:cafe::dd) by SN4PR0201CA0068.outlook.office365.com
 (2603:10b6:803:20::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38 via Frontend
 Transport; Tue, 6 Oct 2020 20:46:17 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; lst.de; dkim=none (message not signed)
 header.d=none;lst.de; dmarc=bestguesspass action=none header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 SN1NAM02FT051.mail.protection.outlook.com (10.152.73.103) with Microsoft SMTP
 Server id 15.20.3433.39 via Frontend Transport; Tue, 6 Oct 2020 20:46:17
 +0000
Received: from [149.199.38.66] (port=51248 helo=smtp.xilinx.com)
	by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
	(envelope-from <stefano.stabellini@xilinx.com>)
	id 1kPtq9-000439-Re; Tue, 06 Oct 2020 13:45:41 -0700
Received: from [127.0.0.1] (helo=localhost)
	by smtp.xilinx.com with smtp (Exim 4.63)
	(envelope-from <stefano.stabellini@xilinx.com>)
	id 1kPtqi-0001z8-SK; Tue, 06 Oct 2020 13:46:16 -0700
Received: from [10.23.120.52] (helo=localhost)
	by xsj-pvapsmtp01 with esmtp (Exim 4.63)
	(envelope-from <stefanos@xilinx.com>)
	id 1kPtqe-0001yS-ME; Tue, 06 Oct 2020 13:46:12 -0700
Date: Tue, 6 Oct 2020 13:46:12 -0700 (PDT)
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@lst.de>
cc: Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
    xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org
Subject: Re: xen-swiotlb vs phys_to_dma
In-Reply-To: <20201006082656.GB10243@lst.de>
Message-ID: <alpine.DEB.2.21.2010061325230.10908@sstabellini-ThinkPad-T480s>
References: <20201002123436.GA30329@lst.de> <alpine.DEB.2.21.2010021313010.10908@sstabellini-ThinkPad-T480s> <20201006082656.GB10243@lst.de>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6b7a1933-3135-4d5a-aa1f-08d86a38df61
X-MS-TrafficTypeDiagnostic: BL0PR02MB3762:
X-Microsoft-Antispam-PRVS:
	<BL0PR02MB37620C11D97A71AEDEE37FFDA00D0@BL0PR02MB3762.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kAUNYMKZDY1KI8q7XCOOwCL+JOe5ueQCshNmIRay4u7EmS6PSMjPzV9WcAxcYjNuHAH/lqNCkrx9L9xxcxaHRM4qk33OZEcyVo2LjkAEk6IgG4q9qNsmcx4roXq58pvYea7pidA6c9/fLvkDbxzF0GQjmz/EW50e4Sy5XvDThw4WSUxi+RSQe+bmxMHy+tmEWcmz4+XQTM6/9Ugt/goDv43L25Y2OJgO+8lcaRTGIti2SBmNhz4jfyjdNTUzVnQJvjF0gPUj//UKe8eVZ6rorBy+auAW/Bs8koekqlj1/JUzM94qBJtWR8LdM4qIVtp3fKw5Ag+rKlqAvCK5BGjXkyO6dEM8UAoc5SG+Qg2yvavJkQDrN/0vhtTVauTepHTp7pFOt2a0yP4XmHsRXayYpD4RIWNUZKgLc9qy8Nx1n+K57zkALVDZ+ZFTOtSYxdBrtCJbXjyK340zJF/PuWzsQKhuCoajYF733YGdiAmlP1AiOt2DOgfNI9Raxy5+a6KG
X-Forefront-Antispam-Report:
	CIP:149.199.60.83;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapsmtpgw01;PTR:unknown-60-83.xilinx.com;CAT:NONE;SFS:(7916004)(346002)(396003)(376002)(39860400002)(136003)(46966005)(2906002)(81166007)(83080400001)(83380400001)(9786002)(70586007)(70206006)(82310400003)(356005)(47076004)(82740400003)(9686003)(5660300002)(966005)(6916009)(33716001)(53546011)(4326008)(336012)(8676002)(426003)(8936002)(478600001)(26005)(186003)(316002)(54906003)(44832011)(6606295002);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Oct 2020 20:46:17.1297
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b7a1933-3135-4d5a-aa1f-08d86a38df61
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.60.83];Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-AuthSource:
	SN1NAM02FT051.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR02MB3762

On Tue, 6 Oct 2020, Christoph Hellwig wrote:
> On Fri, Oct 02, 2020 at 01:21:25PM -0700, Stefano Stabellini wrote:
> > On Fri, 2 Oct 2020, Christoph Hellwig wrote:
> > > Hi Stefano,
> > > 
> > > I've looked over xen-swiotlb in linux-next, that is with your recent
> > > changes to take dma offsets into account.  One thing that puzzles me
> > > is that xen_swiotlb_map_page passes virt_to_phys(xen_io_tlb_start) as
> > > the tbl_dma_addr argument to swiotlb_tbl_map_single, despite the fact
> > > that the argument is a dma_addr_t and both other callers translate
> > > from a physical to the dma address.  Was this an oversight?
> > 
> > Hi Christoph,
> > 
> > It was not an oversight; it was done on purpose, although maybe I could
> > have been wrong. There was a brief discussion on this topic here: 
> > 
> > https://marc.info/?l=linux-kernel&m=159011972107683&w=2
> > https://marc.info/?l=linux-kernel&m=159018047129198&w=2
> > 
> > I'll repeat and summarize here for convenience. 
> > 
> > swiotlb_init_with_tbl is called by xen_swiotlb_init, passing a virtual
> > address (xen_io_tlb_start), which gets converted to phys and stored in
> > io_tlb_start as a physical address at the beginning of swiotlb_init_with_tbl.
> 
> Yes.
> 
> > Afterwards, xen_swiotlb_map_page calls swiotlb_tbl_map_single. The
> > second parameter, dma_addr_t tbl_dma_addr, is used to calculate the
> > right slot in the swiotlb buffer to use, comparing it against
> > io_tlb_start.
> 
> It is not compared against io_tlb_start.  It is just used to pick
> a slot that fits the dma_get_seg_boundary limitation in a somewhat
> awkward way.
> 
> > Thus, I think it makes sense for xen_swiotlb_map_page to call
> > swiotlb_tbl_map_single passing an address meant to be compared with
> > io_tlb_start, which is __pa(xen_io_tlb_start), so
> > virt_to_phys(xen_io_tlb_start) seems to be what we want.
> 
> No, it doesn't.  tbl_dma_addr is used to ensure the picked slot satisfies
> the segment boundary, and for that you need a dma_addr_t.
> 
> The index variable in swiotlb_tbl_map_single is derived from
> io_tlb_index, not io_tlb_start.
> 
> > However, you are right that it is strange that tbl_dma_addr is a
> > dma_addr_t, and maybe it shouldn't be? Maybe the tbl_dma_addr parameter
> > to swiotlb_tbl_map_single should be a phys address instead?
> > Or it could be that swiotlb_init_with_tbl is wrong and should take a
> > dma address to initialize the swiotlb buffer.
> 
> No, it must be a dma_addr_t so that the dma_get_seg_boundary check works.
>
> I think we need something like this (against linux-next):
> 
> ---
> >From 07b39a62b235ed2d4b2215700d99968998fbf6c0 Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <hch@lst.de>
> Date: Tue, 6 Oct 2020 10:22:19 +0200
> Subject: swiotlb: remove the tlb_addr argument to swiotlb_tbl_map_single
> 
> The tbl_dma_addr argument must always be the dma view of io_tlb_start so
> that the segment boundary checks work.  Remove the argument and do the right
> thing inside swiotlb_tbl_map_single.  This fixes the swiotlb-xen case
> that failed to take DMA offset into account.  The issue probably did
> not show up very much in practice as the typical dma offsets are
> large enough to not affect the segment boundaries for most devices.

OK, this makes a lot of sense, and I like the patch because it makes the
swiotlb interface clearer.

Just one comment below.


> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/iommu/intel/iommu.c |  5 ++---
>  drivers/xen/swiotlb-xen.c   |  3 +--
>  include/linux/swiotlb.h     | 10 +++-------
>  kernel/dma/swiotlb.c        | 16 ++++++----------
>  4 files changed, 12 insertions(+), 22 deletions(-)
> 
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index 5ee0b7921b0b37..d473811fcfacd5 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -3815,9 +3815,8 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
>  	 * page aligned, we don't need to use a bounce page.
>  	 */
>  	if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
> -		tlb_addr = swiotlb_tbl_map_single(dev,
> -				phys_to_dma_unencrypted(dev, io_tlb_start),
> -				paddr, size, aligned_size, dir, attrs);
> +		tlb_addr = swiotlb_tbl_map_single(dev, paddr, size,
> +						  aligned_size, dir, attrs);
>  		if (tlb_addr == DMA_MAPPING_ERROR) {
>  			goto swiotlb_error;
>  		} else {
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 030a225624b060..953186f6d7d222 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -395,8 +395,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>  	 */
>  	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
>  
> -	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
> -				     phys, size, size, dir, attrs);
> +	map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
>  	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
>  		return DMA_MAPPING_ERROR;
>  
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 513913ff748626..3bb72266a75a1d 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -45,13 +45,9 @@ enum dma_sync_target {
>  	SYNC_FOR_DEVICE = 1,
>  };
>  
> -extern phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
> -					  dma_addr_t tbl_dma_addr,
> -					  phys_addr_t phys,
> -					  size_t mapping_size,
> -					  size_t alloc_size,
> -					  enum dma_data_direction dir,
> -					  unsigned long attrs);
> +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> +		size_t mapping_size, size_t alloc_size,
> +		enum dma_data_direction dir, unsigned long attrs);
>  
>  extern void swiotlb_tbl_unmap_single(struct device *hwdev,
>  				     phys_addr_t tlb_addr,
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 995c1b4cb427ee..8d0b7c3971e81e 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -441,14 +441,11 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
>  	}
>  }
>  
> -phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
> -				   dma_addr_t tbl_dma_addr,
> -				   phys_addr_t orig_addr,
> -				   size_t mapping_size,
> -				   size_t alloc_size,
> -				   enum dma_data_direction dir,
> -				   unsigned long attrs)
> +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
> +		size_t mapping_size, size_t alloc_size,
> +		enum dma_data_direction dir, unsigned long attrs)
>  {
> +	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(dev, io_tlb_start);

This is supposed to be hwdev, not dev.


>  	unsigned long flags;
>  	phys_addr_t tlb_addr;
>  	unsigned int nslots, stride, index, wrap;
> @@ -667,9 +664,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
>  	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size,
>  			      swiotlb_force);
>  
> -	swiotlb_addr = swiotlb_tbl_map_single(dev,
> -			phys_to_dma_unencrypted(dev, io_tlb_start),
> -			paddr, size, size, dir, attrs);
> +	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, dir,
> +					      attrs);
>  	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
>  		return DMA_MAPPING_ERROR;
>  
> -- 
> 2.28.0
> 


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 02:45:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 02:45:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3310.9581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPzSB-0006MI-K5; Wed, 07 Oct 2020 02:45:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3310.9581; Wed, 07 Oct 2020 02:45:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kPzSB-0006M9-DT; Wed, 07 Oct 2020 02:45:19 +0000
Received: by outflank-mailman (input) for mailman id 3310;
 Wed, 07 Oct 2020 02:45:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kPzSA-0006M4-UW
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 02:45:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 765fe43d-155e-42d3-ba76-5ec4ad4d1bac;
 Wed, 07 Oct 2020 02:45:16 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPzS8-00075X-8J; Wed, 07 Oct 2020 02:45:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kPzS7-000308-Uh; Wed, 07 Oct 2020 02:45:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kPzS7-0003qf-Sx; Wed, 07 Oct 2020 02:45:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kPzSA-0006M4-UW
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 02:45:18 +0000
X-Inumbo-ID: 765fe43d-155e-42d3-ba76-5ec4ad4d1bac
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 765fe43d-155e-42d3-ba76-5ec4ad4d1bac;
	Wed, 07 Oct 2020 02:45:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NISN6PC5srhODdzlthfnnL+uM/l+1zv54Z6ZVcUWauk=; b=lx7bdJg8ntcQwldGtZnsJI9Bwi
	4AtFVkIpWJvQJ49ulvTJzM5NFwhIi3/9lQpFiAjFLcdWYUibeFqxdUsGABpKiaQ2dBv2YuacrG2zb
	xg0LLwfym4+jj1Gbl/fXi5KQujA9IOs8sRqwKCV+mkMgJX3s+4J2jgZ4CDWfC8rGglGA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPzS8-00075X-8J; Wed, 07 Oct 2020 02:45:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPzS7-000308-Uh; Wed, 07 Oct 2020 02:45:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kPzS7-0003qf-Sx; Wed, 07 Oct 2020 02:45:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155500-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155500: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3fdb431718ff2202d7fea7c64073b707db473ece
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 02:45:15 +0000

flight 155500 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155500/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3fdb431718ff2202d7fea7c64073b707db473ece
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   88 days
Failing since        151818  2020-07-11 04:18:52 Z   87 days   82 attempts
Testing same since   155500  2020-10-06 15:44:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19341 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 03:58:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 03:58:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3313.9595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ0aK-0004ud-32; Wed, 07 Oct 2020 03:57:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3313.9595; Wed, 07 Oct 2020 03:57:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ0aJ-0004uW-WD; Wed, 07 Oct 2020 03:57:48 +0000
Received: by outflank-mailman (input) for mailman id 3313;
 Wed, 07 Oct 2020 03:57:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQ0aI-0004uR-18
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 03:57:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d301ce6-6838-40d6-bba0-88c6bd2e1bba;
 Wed, 07 Oct 2020 03:57:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ0aF-00009M-1m; Wed, 07 Oct 2020 03:57:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ0aE-0007Qj-Mi; Wed, 07 Oct 2020 03:57:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ0aE-0000Co-MD; Wed, 07 Oct 2020 03:57:42 +0000
X-Inumbo-ID: 0d301ce6-6838-40d6-bba0-88c6bd2e1bba
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HDU1sa276Emzb7ifEjVIDOvK9GrJD7h/rJr06DttZis=; b=aQGW9xFOWrGjdfahQHpokUdy0Z
	J5B0yPwpJE+7f4GUEXPO13sQh8EXKfH5NTxNtKXM8zatxWwRyeyZuYmYp78Uj3hrUQ3krzHE2DfS2
	78rLLdS0sxPaAcHKbYbG8Lwedzt2P3BT3sC5W5tFVjcyH4XLbIGC1XLVJhsWrAXkEdGM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155493-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155493: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=8ef6345ef557cc2c47298217635a3088eaa59893
X-Osstest-Versions-That:
    xen=8ef6345ef557cc2c47298217635a3088eaa59893
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 03:57:42 +0000

flight 155493 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155493/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 18 guest-start/debianhvm.repeat fail in 155451 pass in 155493
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 155451

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 155451
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 155451
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 155451
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 155451
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 155451
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 155451
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 155451
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 155451
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 155451
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  8ef6345ef557cc2c47298217635a3088eaa59893
baseline version:
 xen                  8ef6345ef557cc2c47298217635a3088eaa59893

Last test of basis   155493  2020-10-06 10:40:06 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 05:51:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 05:51:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3319.9609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ2Lx-0008RA-6m; Wed, 07 Oct 2020 05:51:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3319.9609; Wed, 07 Oct 2020 05:51:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ2Lx-0008R3-3j; Wed, 07 Oct 2020 05:51:05 +0000
Received: by outflank-mailman (input) for mailman id 3319;
 Wed, 07 Oct 2020 05:51:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PU4l=DO=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
 id 1kQ2Lv-0008Qy-IF
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 05:51:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e723d33-54f5-4c98-9b97-cf7c2dbbe271;
 Wed, 07 Oct 2020 05:51:02 +0000 (UTC)
Received: from devnote2 (NE2965lan1.rev.em-net.ne.jp [210.141.244.193])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id CF07A208C7;
 Wed,  7 Oct 2020 05:50:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602049861;
	bh=0Pn6APRQsa4HpVjf3rGUOYdtE8clqW/GJUxe6iTqV3I=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=fmR5aCHNJXJvqiU8VG6MlZZTo7JvqEdQuWGy4VItalPDsBNjtgW2h9lnhbDk6tP+B
	 SIQu5gBARtSIY/GJ40obH23kY1FsWAalHTEORlXimqAySlcvBZLXxNO637CNRT1Ndk
	 VT5Fi/JkY+D3xYD6w/TB8ImTGaRVnfHqaGZRWwU8=
Date: Wed, 7 Oct 2020 14:50:57 +0900
From: Masami Hiramatsu <mhiramat@kernel.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, Alex =?UTF-8?B?QmVubsOpZQ==?=
 <alex.bennee@linaro.org>, takahiro.akashi@linaro.org, jgross@suse.com,
 boris.ostrovsky@oracle.com
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
Message-Id: <20201007145057.81024d41dfd239628296d090@kernel.org>
In-Reply-To: <alpine.DEB.2.21.2010061040350.10908@sstabellini-ThinkPad-T480s>
References: <160190516028.40160.9733543991325671759.stgit@devnote2>
	<b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
	<alpine.DEB.2.21.2010051526550.10908@sstabellini-ThinkPad-T480s>
	<20201006114058.b93839b1b8f35a470874572b@kernel.org>
	<alpine.DEB.2.21.2010061040350.10908@sstabellini-ThinkPad-T480s>
X-Mailer: Sylpheed 3.7.0 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 6 Oct 2020 10:56:52 -0700 (PDT)
Stefano Stabellini <sstabellini@kernel.org> wrote:

> On Tue, 6 Oct 2020, Masami Hiramatsu wrote:
> > On Mon, 5 Oct 2020 18:13:22 -0700 (PDT)
> > Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > > On Mon, 5 Oct 2020, Julien Grall wrote:
> > > > Hi Masami,
> > > > 
> > > > On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > > > > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > > > > address conversion.
> > > > > 
> > > > > In xen_starting_cpu(), the per-cpu xen_vcpu_info address is
> > > > > converted to a gfn by the virt_to_gfn() macro. However, since
> > > > > virt_to_gfn(v) assumes the given virtual address is in the
> > > > > contiguous kernel memory area, it cannot convert the per-cpu
> > > > > memory if it is allocated in the vmalloc area (which depends on
> > > > > CONFIG_SMP).
> > > > 
> > > > Are you sure about this? I have a .config with CONFIG_SMP=y where the per-cpu
> > > > region for CPU0 is allocated outside of vmalloc area.
> > > > 
> > > > However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING was
> > > > enabled.
> > > 
> > > I cannot reproduce the issue with defconfig, but I can with Masami's
> > > kconfig.
> > > 
> > > If I disable just CONFIG_NUMA_BALANCING from Masami's kconfig, the
> > > problem still appears.
> > > 
> > > If I disable CONFIG_NUMA from Masami's kconfig, it works, which is
> > > strange because CONFIG_NUMA is enabled in defconfig, and defconfig
> > > works.
> > 
> > Hmm, strange, because when I disabled CONFIG_NUMA_BALANCING, the issue
> > disappeared.
> > 
> > --- config-5.9.0-rc4+   2020-10-06 11:36:20.620107129 +0900
> > +++ config-5.9.0-rc4+.buggy     2020-10-05 21:04:40.369936461 +0900
> > @@ -131,7 +131,8 @@
> >  CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
> >  CONFIG_CC_HAS_INT128=y
> >  CONFIG_ARCH_SUPPORTS_INT128=y
> > -# CONFIG_NUMA_BALANCING is not set
> > +CONFIG_NUMA_BALANCING=y
> > +CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
> >  CONFIG_CGROUPS=y
> >  CONFIG_PAGE_COUNTER=y
> >  CONFIG_MEMCG=y
> > 
> > So the buggy config just enabled NUMA_BALANCING (and enabled it by
> > default).
> 
> Yeah but both NUMA and NUMA_BALANCING are enabled in defconfig which
> works fine...
> 
> [...]
> 
> > > The fix is fine for me. I tested it and it works. We need to remove the
> > > "Fixes:" line from the commit message. Ideally, replacing it with a
> > > reference to what is the source of the problem.
> > 
> > OK, as I said, it seems commit 9a9ab3cc00dc ("xen/arm: SMP support") has
> > introduced the per-cpu code. So I will mention it instead of using a
> > Fixes tag.
> 
> ...and commit 9a9ab3cc00dc was already present in 5.8, which also works
> fine with your kconfig. Something else changed in 5.9, causing this
> breakage as a side effect. Commit 9a9ab3cc00dc has been there since 2013;
> I think it is OK -- this patch is fixing something else.

Hmm, then it might be that something runs out of the first percpu chunk and
xen uses the 2nd chunk, which is in the vmalloc area. That is possible if
other initcall functions use alloc_percpu().

Maybe we can trace the percpu chunk allocation function in both cases.

Thank you,

-- 
Masami Hiramatsu <mhiramat@kernel.org>


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 06:00:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 06:00:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3321.9621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ2Un-000194-4B; Wed, 07 Oct 2020 06:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3321.9621; Wed, 07 Oct 2020 06:00:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ2Un-00018x-0t; Wed, 07 Oct 2020 06:00:13 +0000
Received: by outflank-mailman (input) for mailman id 3321;
 Wed, 07 Oct 2020 06:00:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HSOu=DO=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kQ2Ul-00018r-SD
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 06:00:11 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51a96415-3d73-4d31-a9a9-fa9e3514929f;
 Wed, 07 Oct 2020 06:00:10 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 1BC7D68B02; Wed,  7 Oct 2020 08:00:09 +0200 (CEST)
Date: Wed, 7 Oct 2020 08:00:08 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <stefano.stabellini@xilinx.com>
Cc: Christoph Hellwig <hch@lst.de>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org
Subject: Re: xen-swiotlb vs phys_to_dma
Message-ID: <20201007060008.GA10125@lst.de>
References: <20201002123436.GA30329@lst.de> <alpine.DEB.2.21.2010021313010.10908@sstabellini-ThinkPad-T480s> <20201006082656.GB10243@lst.de> <alpine.DEB.2.21.2010061325230.10908@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010061325230.10908@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Oct 06, 2020 at 01:46:12PM -0700, Stefano Stabellini wrote:
> OK, this makes a lot of sense, and I like the patch because it makes the
> swiotlb interface clearer.
> 
> Just one comment below.
> 

> > +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
> > +		size_t mapping_size, size_t alloc_size,
> > +		enum dma_data_direction dir, unsigned long attrs)
> >  {
> > +	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(dev, io_tlb_start);
> 
> This is supposed to be hwdev, not dev

Yeah, the compiler would be rather unhappy otherwise.

I'll resend it after the dma-mapping and Xen trees are merged by Linus
to avoid a merge conflict.


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 06:38:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 06:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3323.9633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ35U-0004O7-4r; Wed, 07 Oct 2020 06:38:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3323.9633; Wed, 07 Oct 2020 06:38:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ35U-0004O0-0n; Wed, 07 Oct 2020 06:38:08 +0000
Received: by outflank-mailman (input) for mailman id 3323;
 Wed, 07 Oct 2020 06:38:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQ35T-0004Nv-9H
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 06:38:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a43d072-2c2f-4b4b-9788-8784ce87e081;
 Wed, 07 Oct 2020 06:38:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 16D46AC0C;
 Wed,  7 Oct 2020 06:38:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602052685;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=3kOyoKtsiwS+tjIZ6dzqH1KCmgJdwgCsiyneWCC8R6M=;
	b=avIV58ZPj1GRAdoykPhUo3Pb+VuebfM0SYxiddm2n+p0Cvd77cb6VdKI7RCVOsx232QccN
	jcxMzjWOhbuYQMQdUmP6lUEzjthAElIa36MGYYx42V3FgYU67XNi6sminKmixQ4fV01Wwu
	SbYRFgneC0vPBvNzoU/ByaqpJ96KozU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 16D46AC0C;
	Wed,  7 Oct 2020 06:38:05 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.9-rc9
Date: Wed,  7 Oct 2020 08:38:04 +0200
Message-Id: <20201007063804.21597-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.9b-rc9-tag

xen: branch for v5.9-rc9

It contains one fix for a regression when booting as a Xen guest on
ARM64, probably introduced during the 5.9 cycle. It is very low risk, as
it modifies Xen-specific code only. The exact commit introducing the bug
hasn't been identified yet, but everything was fine in 5.8 and only in
5.9 did some configurations start to fail.


Thanks.

Juergen

 arch/arm/xen/enlighten.c | 2 +-
 include/xen/arm/page.h   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

Masami Hiramatsu (1):
      arm/arm64: xen: Fix to convert percpu address to gfn correctly


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 06:55:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 06:55:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3326.9645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ3Lc-0006HA-KQ; Wed, 07 Oct 2020 06:54:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3326.9645; Wed, 07 Oct 2020 06:54:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ3Lc-0006H3-HM; Wed, 07 Oct 2020 06:54:48 +0000
Received: by outflank-mailman (input) for mailman id 3326;
 Wed, 07 Oct 2020 06:54:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQ3La-0006Gw-OU
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 06:54:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7c3a9921-f63a-4b66-be1f-20780c9aaf42;
 Wed, 07 Oct 2020 06:54:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 85C66B281;
 Wed,  7 Oct 2020 06:54:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602053684;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A1ztSWII4k26R643KPnL55c+vrF7Cs76PMlXbicvk1U=;
	b=pQSqJrAg5TPNlAZX+DOzWNF6C9VY0c6kPNXSlxpNLxbiB6mJTTxPsE1fMNhtUv7UbX2Kio
	LoTQtAIFw5OG2tafHBqA5p77xRNihOpsz5pPJFzhKoQfNA3T7L4GmrH+oqsm2Z27VoPvDH
	cPfRpsl25PpjPzp7cPgnLi9MIxDU1K8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 85C66B281;
	Wed,  7 Oct 2020 06:54:44 +0000 (UTC)
Subject: Re: [PATCH 2/3] tools/init-xenstore-domain: support xenstore pvh
 stubdom
To: Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
References: <20200923064541.19546-1-jgross@suse.com>
 <20200923064541.19546-3-jgross@suse.com>
 <20200930154611.xqzdumwec7nlnidl@liuwe-devbox-debian-v2>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <baa915a8-bd96-4669-dfa1-1e4e09024493@suse.com>
Date: Wed, 7 Oct 2020 08:54:43 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200930154611.xqzdumwec7nlnidl@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.20 17:46, Wei Liu wrote:
> On Wed, Sep 23, 2020 at 08:45:40AM +0200, Juergen Gross wrote:
>> Instead of creating the xenstore-stubdom domain first and parsing the
>> kernel later, do it the other way round. This makes it possible to probe
>> for the domain type supported by the xenstore-stubdom and to support
>> both pv and pvh type stubdoms.
>>
>> Try to parse the stubdom image first for PV support; if this fails, use
>> HVM. Then create the domain with the appropriate type selected.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> [...]
>> +    dom->container_type = XC_DOM_HVM_CONTAINER;
>> +    rv = xc_dom_parse_image(dom);
>> +    if ( rv )
>> +    {
>> +        dom->container_type = XC_DOM_PV_CONTAINER;
>> +        rv = xc_dom_parse_image(dom);
>> +        if ( rv )
>> +        {
>> +            fprintf(stderr, "xc_dom_parse_image failed\n");
>> +            goto err;
>> +        }
>> +    }
>> +    else
>> +    {
>> +        config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
>> +        config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
>> +        dom->target_pages = mem_size >> XC_PAGE_SHIFT;
>> +        dom->mmio_size = GB(4) - LAPIC_BASE_ADDRESS;
>> +        dom->lowmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
>> +                          LAPIC_BASE_ADDRESS : mem_size;
>> +        dom->highmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
>> +                           GB(4) + mem_size - LAPIC_BASE_ADDRESS : 0;
>> +        dom->mmio_start = LAPIC_BASE_ADDRESS;
>> +        dom->max_vcpus = 1;
>> +        e820[0].addr = 0;
>> +        e820[0].size = dom->lowmem_end;
>> +        e820[0].type = E820_RAM;
>> +        e820[1].addr = LAPIC_BASE_ADDRESS;
>> +        e820[1].size = dom->mmio_size;
>> +        e820[1].type = E820_RESERVED;
>> +        e820[2].addr = GB(4);
>> +        e820[2].size = dom->highmem_end - GB(4);
> 
> Do you not want to check if highmem_end is larger than GB(4) before
> putting in this region?

Oh, I just realized: further down I'm setting the guest's map with either
2 or 3 entries depending on the dom->highmem_end value.

So I think this is fine.


Juergen



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 07:38:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 07:38:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3328.9657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ41U-00023l-Sq; Wed, 07 Oct 2020 07:38:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3328.9657; Wed, 07 Oct 2020 07:38:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ41U-00023e-PV; Wed, 07 Oct 2020 07:38:04 +0000
Received: by outflank-mailman (input) for mailman id 3328;
 Wed, 07 Oct 2020 07:38:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQ41U-00023Z-01
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 07:38:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39f5abbf-5202-4ae7-92ac-64074fb74407;
 Wed, 07 Oct 2020 07:38:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ41R-0005Mo-1v; Wed, 07 Oct 2020 07:38:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ41Q-0001eD-Pi; Wed, 07 Oct 2020 07:38:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ41Q-0002kl-P7; Wed, 07 Oct 2020 07:38:00 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R7JOPyklahef1WFI3UZTOtevjEFsssjEX4FoP5oBsvE=; b=D0vdDMoj70ClRQy1zoJOQgKqwp
	550wFRCSvZFQxfNp7XwxYdLNM0kJISQ56zQERSwSeQy2nB29bsGdosUVqfPMgrICK5WtzI8tYWOiv
	rSinLYHt1Pj2/j7+ypAk7ajdYeTqpiW6P63TK+P5XdKIwvWuInRycvFMgWG90jy9IuvI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155496-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 155496: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:<job status>:broken:regression
    xen-4.10-testing:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    xen-4.10-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-saverestore.2:fail:heisenbug
    xen-4.10-testing:test-arm64-arm64-xl:guest-start/debian.repeat:fail:heisenbug
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:broken:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=1719f79a0efd36d15837c51982173dd1c287dced
X-Osstest-Versions-That:
    xen=93be943e7d759015bd5db41a48f6dce58e580d5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 07:38:00 +0000

flight 155496 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155496/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx    <job status>                 broken

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   7 xen-boot                   fail pass in 155454
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 15 guest-saverestore.2 fail pass in 155454
 test-arm64-arm64-xl          16 guest-start/debian.repeat  fail pass in 155454

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               broken never pass
 test-arm64-arm64-xl-credit1 13 migrate-support-check fail in 155454 never pass
 test-arm64-arm64-xl-credit1 14 saverestore-support-check fail in 155454 never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151728
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 151728
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  1719f79a0efd36d15837c51982173dd1c287dced
baseline version:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a

Last test of basis   151728  2020-07-08 01:17:09 Z   91 days
Failing since        154621  2020-09-22 16:07:00 Z   14 days   21 attempts
Testing same since   155362  2020-10-03 03:17:48 Z    4 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        fail    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 broken  
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-thunderx broken
broken-step test-arm64-arm64-xl-thunderx hosts-allocate
broken-job test-arm64-arm64-xl-thunderx broken

Not pushing.

(No revision log; it would be 368 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 08:29:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 08:29:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3340.9671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ4p5-0007cO-83; Wed, 07 Oct 2020 08:29:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3340.9671; Wed, 07 Oct 2020 08:29:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ4p5-0007cH-3I; Wed, 07 Oct 2020 08:29:19 +0000
Received: by outflank-mailman (input) for mailman id 3340;
 Wed, 07 Oct 2020 08:29:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ4p3-0007cC-MW
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 08:29:17 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 6dd84716-5c7f-453b-830d-82488f9c2c1d;
 Wed, 07 Oct 2020 08:29:16 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 74CBE113E;
 Wed,  7 Oct 2020 01:29:16 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C43063F66B;
 Wed,  7 Oct 2020 01:29:15 -0700 (PDT)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] tools: use memcpy instead of strncpy in getBridge
Date: Wed,  7 Oct 2020 09:28:53 +0100
Message-Id: <bc191370356c300f84a16d10345d4a0d646f5bae.1601977978.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Use memcpy in getBridge to prevent gcc warnings about truncated
strings. We know that we might truncate the string, so the gcc
warning here is wrong.
Also revert the previous change that enlarged the buffer sizes, as
the bigger buffers are not needed.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v2:
 Use MIN between the string length of de->d_name and resultLen to copy
 only the minimum size required and prevent reading from unallocated
 space.
---
 tools/libs/stat/xenstat_linux.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
index d2ee6fda64..0ace03af1b 100644
--- a/tools/libs/stat/xenstat_linux.c
+++ b/tools/libs/stat/xenstat_linux.c
@@ -29,6 +29,7 @@
 #include <string.h>
 #include <unistd.h>
 #include <regex.h>
+#include <xen-tools/libs.h>
 
 #include "xenstat_priv.h"
 
@@ -78,7 +79,13 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
 				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
 
 				if (access(tmp, F_OK) == 0) {
-					strncpy(result, de->d_name, resultLen);
+					/*
+					 * Do not use strncpy to prevent compiler warning with
+					 * gcc >= 10.0
+					 * If de->d_name is longer then resultLen we truncate it
+					 */
+					memcpy(result, de->d_name, MIN(strnlen(de->d_name,
+									sizeof(de->d_name)),resultLen - 1));
 					result[resultLen - 1] = 0;
 				}
 		}
@@ -264,7 +271,7 @@ int xenstat_collect_networks(xenstat_node * node)
 {
 	/* Helper variables for parseNetDevLine() function defined above */
 	int i;
-	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[256] = { 0 }, devNoBridge[257] = { 0 };
+	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[16] = { 0 }, devNoBridge[17] = { 0 };
 	unsigned long long rxBytes, rxPackets, rxErrs, rxDrops, txBytes, txPackets, txErrs, txDrops;
 
 	struct priv_data *priv = get_priv_data(node->handle);
-- 
2.17.1
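The technique the patch relies on can be exercised in isolation. The sketch below is a hypothetical, minimal illustration (the helper name `copy_truncated`, the local `MIN` macro, and the buffer sizes are assumptions, not part of the actual xenstat code): memcpy with an explicit MIN bound performs the same intentional truncation as strncpy, but gives the compiler no strncpy call to warn about with gcc 10's -Wstringop-truncation.

```c
#include <string.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Copy src into result (capacity resultLen), truncating if necessary
 * and always NUL-terminating.  Using memcpy with an explicit length
 * bound avoids the truncation warning that strncpy would trigger. */
static void copy_truncated(char *result, size_t resultLen, const char *src)
{
	size_t n = MIN(strlen(src), resultLen - 1);

	memcpy(result, src, n);
	result[n] = '\0';
}
```

With an 8-byte destination, a long source such as "xenbr0-long-name" is cut to "xenbr0-", while a short source like "eth0" is copied whole; either way the result is NUL-terminated.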



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 08:39:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 08:39:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3342.9683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ4yv-0000HO-4p; Wed, 07 Oct 2020 08:39:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3342.9683; Wed, 07 Oct 2020 08:39:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ4yv-0000HH-1r; Wed, 07 Oct 2020 08:39:29 +0000
Received: by outflank-mailman (input) for mailman id 3342;
 Wed, 07 Oct 2020 08:39:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQ4yt-0000HC-1W
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 08:39:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4d2a68b-c309-42a2-82aa-022277996cb2;
 Wed, 07 Oct 2020 08:39:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A534AFBF;
 Wed,  7 Oct 2020 08:39:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602059965;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wowmXOOwTLNaj/cvxmu6SpMhNLrBnqOIkTCROdZfRIU=;
	b=bxlksfaiGPlg8I+9lvlOBUQNMl7J4eP+AUjgC8gt/5Uht+wyRweMhw0pv/Qegqr8gIZHOs
	BfOHjJSz4H/r1OlMvmZknMZ+jhEEehfdvM1e6VreSKDnrTyrZjGhLzdijhTGrZ3NRJ7b5d
	RrNIXlxD/i56m6Jblz72i4DgOTEv4UU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2A534AFBF;
	Wed,  7 Oct 2020 08:39:25 +0000 (UTC)
Subject: Re: [PATCH v2 1/2] tools: use memcpy instead of strncpy in getBridge
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <bc191370356c300f84a16d10345d4a0d646f5bae.1601977978.git.bertrand.marquis@arm.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <30a4ddc0-9443-ab02-341c-ae08af7fddea@suse.com>
Date: Wed, 7 Oct 2020 10:39:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <bc191370356c300f84a16d10345d4a0d646f5bae.1601977978.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.10.20 10:28, Bertrand Marquis wrote:
> Use memcpy in getBridge to prevent gcc warnings about truncated
> strings. We know that we might truncate the string, so the gcc
> warning here is wrong.
> Also revert the previous change that enlarged the buffer sizes, as
> the bigger buffers are not needed.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in v2:
>   Use MIN between the string length of de->d_name and resultLen to copy
>   only the minimum size required and prevent reading from unallocated
>   space.
> ---
>   tools/libs/stat/xenstat_linux.c | 11 +++++++++--
>   1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
> index d2ee6fda64..0ace03af1b 100644
> --- a/tools/libs/stat/xenstat_linux.c
> +++ b/tools/libs/stat/xenstat_linux.c
> @@ -29,6 +29,7 @@
>   #include <string.h>
>   #include <unistd.h>
>   #include <regex.h>
> +#include <xen-tools/libs.h>
>   
>   #include "xenstat_priv.h"
>   
> @@ -78,7 +79,13 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>   				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>   
>   				if (access(tmp, F_OK) == 0) {
> -					strncpy(result, de->d_name, resultLen);
> +					/*
> +					 * Do not use strncpy to prevent compiler warning with
> +					 * gcc >= 10.0
> +					 * If de->d_name is longer then resultLen we truncate it

s/then/than/

> +					 */
> +					memcpy(result, de->d_name, MIN(strnlen(de->d_name,
> +									sizeof(de->d_name)),resultLen - 1));

You can't use sizeof(de->d_name) here, as AFAIK there is no guarantee
that de->d_name isn't e.g. defined like "char d_name[]".

My suggestion to use NAME_MAX as upper boundary for the length was
really meant to be used that way.

And additionally you might want to add 1 to the strnlen() result in
order to copy the trailing 0-byte, too (or you should zero out the
result buffer before and omit writing the final zero byte).

Thinking more about it, zeroing the result buffer is better, as it even
covers the theoretical case of NAME_MAX being shorter than resultLen.


Juergen
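
The review's suggestion can be sketched as follows. This is a hypothetical illustration, not the committed fix: the helper name `copy_name` and the NAME_MAX fallback are assumptions. Zeroing the destination first makes the separate trailing-NUL store unnecessary, and the result stays terminated even in the theoretical case where NAME_MAX is smaller than resultLen - 1.

```c
#include <limits.h>
#include <string.h>

#ifndef NAME_MAX          /* NAME_MAX is POSIX; provide a fallback */
#define NAME_MAX 255
#endif

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Zero the whole destination, then copy at most resultLen - 1 bytes,
 * bounding the source scan by NAME_MAX rather than sizeof(d_name),
 * which need not be meaningful for a flexible array member.  The
 * memset guarantees NUL termination without a separate store. */
static void copy_name(char *result, size_t resultLen, const char *src)
{
	memset(result, 0, resultLen);
	memcpy(result, src, MIN(strnlen(src, NAME_MAX), resultLen - 1));
}
```

Bounding the scan with strnlen(src, NAME_MAX) also keeps the read within the maximum size a directory entry name can have, addressing the unallocated-memory concern raised against sizeof(de->d_name).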


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 08:56:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 08:56:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3344.9695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5Fg-0002GF-Mv; Wed, 07 Oct 2020 08:56:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3344.9695; Wed, 07 Oct 2020 08:56:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5Fg-0002G8-Jm; Wed, 07 Oct 2020 08:56:48 +0000
Received: by outflank-mailman (input) for mailman id 3344;
 Wed, 07 Oct 2020 08:56:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ5Ff-0002G3-Su
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 08:56:47 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.88]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 93aea2a4-3a21-4b9e-8083-551a3076f43b;
 Wed, 07 Oct 2020 08:56:46 +0000 (UTC)
Received: from AM5PR0502CA0017.eurprd05.prod.outlook.com
 (2603:10a6:203:91::27) by DB8PR08MB5467.eurprd08.prod.outlook.com
 (2603:10a6:10:11b::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Wed, 7 Oct
 2020 08:56:44 +0000
Received: from AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:91:cafe::e5) by AM5PR0502CA0017.outlook.office365.com
 (2603:10a6:203:91::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Wed, 7 Oct 2020 08:56:44 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT060.mail.protection.outlook.com (10.152.16.160) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Wed, 7 Oct 2020 08:56:44 +0000
Received: ("Tessian outbound 195a290eb161:v64");
 Wed, 07 Oct 2020 08:56:44 +0000
Received: from b616885e88c4.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9D54FE8E-8B8E-4137-A182-3BE27843BA29.1; 
 Wed, 07 Oct 2020 08:56:38 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b616885e88c4.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 07 Oct 2020 08:56:38 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4821.eurprd08.prod.outlook.com (2603:10a6:10:d5::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38; Wed, 7 Oct
 2020 08:56:37 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Wed, 7 Oct 2020
 08:56:37 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wVAFq+QOJ1tkIEAjC9X3y8rUyU+ebC1a1YakVtFN3FI=;
 b=Ih9+rDLyRLP9k9RVrr+Q3lvu86RgH4jPPH05NItSiW8ZOCB6q3+T3ykswUBaFnx7jLriJPXMjaz9NFKVahoDfdpG4jpDG7r3UAE2e1YiHTh/TIVz7tJ7nMxwME82cWZ0PQc+nK2bVlWqGu+lK/4O3fcDpPym7w/fP9bB1VeO3FM=
Received: from AM5PR0502CA0017.eurprd05.prod.outlook.com
 (2603:10a6:203:91::27) by DB8PR08MB5467.eurprd08.prod.outlook.com
 (2603:10a6:10:11b::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.35; Wed, 7 Oct
 2020 08:56:44 +0000
Received: from AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:91:cafe::e5) by AM5PR0502CA0017.outlook.office365.com
 (2603:10a6:203:91::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Wed, 7 Oct 2020 08:56:44 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT060.mail.protection.outlook.com (10.152.16.160) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34 via Frontend Transport; Wed, 7 Oct 2020 08:56:44 +0000
Received: ("Tessian outbound 195a290eb161:v64"); Wed, 07 Oct 2020 08:56:44 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: a7692d0ec485f16c
X-CR-MTA-TID: 64aa7808
Received: from b616885e88c4.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 9D54FE8E-8B8E-4137-A182-3BE27843BA29.1;
	Wed, 07 Oct 2020 08:56:38 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b616885e88c4.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 07 Oct 2020 08:56:38 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KGPoKwhlNg+t38j564yB5vgfDSOSVcoRPkQkwLDAqZUAt1UUxh68b6g4G23XRgfpzEehR3VUQ0sIQAiWC4cfyPzqM6o6wTgZPDAOHhbqe7mhIFKsPyuOyn1rHGAjlbLz5HvRCbqus6uO+A5KkOl0YI/7I+wpcz8UtIByqGdA7ikH4JzAkPd9hQcVjUBYFh83huf2X/+Qx8BXSh5Gq5/veR2o5FpVO96uiO4PQFcQFN/gaXAtvu6WsjCfZ+NoPXejUKe2s3DPu9qhCQgjzwgtUqWv54kywFmFVyWolcolUiN4RGNuywhYj92s9G3sP5Q8maMQmJoI4glsPxVWVQ9mpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wVAFq+QOJ1tkIEAjC9X3y8rUyU+ebC1a1YakVtFN3FI=;
 b=Nx90mLlpZ7frrZm9nDncK0ksoQ53SikFb8yf3itkSMkAIkJKIWhbSC71L85xbj4QylTGwEJUvXcMGr0jpwhYTc7NnJYWurE3k0XqvPbj2E+5vmXMwKGSZwhuocNZ26+aFAY56wnLwixR7M60O6T3l6vKy/NRW7eqDpYOQm2Hobel3Aur8KaSakRzaUvk9DUpfEeSUmR2kyKUobrRxzYZukaJmqSZWTBUcK93gOdfOfiPdR8bbqTlAsGGl8i/QCi93hWs3RMTaTirFOoRxG//H7JcC9BrEvvXD60+wFdEFr/4XUZihFIUWcTAgdFUh7k6+PPhq6TX5enSObEcQAaXOQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4821.eurprd08.prod.outlook.com (2603:10a6:10:d5::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.38; Wed, 7 Oct
 2020 08:56:37 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/2] tools: use memcpy instead of strncpy in getBridge
Thread-Topic: [PATCH v2 1/2] tools: use memcpy instead of strncpy in getBridge
Thread-Index: AQHWnIQKr7DQ0C+zLEGkNYhGKt2Q8KmL0a8AgAAEzgA=
Date: Wed, 7 Oct 2020 08:56:36 +0000
Message-ID: <B3E4C5D5-5999-4D92-8F56-FFA7019CD9BA@arm.com>
References:
 <bc191370356c300f84a16d10345d4a0d646f5bae.1601977978.git.bertrand.marquis@arm.com>
 <30a4ddc0-9443-ab02-341c-ae08af7fddea@suse.com>
In-Reply-To: <30a4ddc0-9443-ab02-341c-ae08af7fddea@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 75e22858-57ae-4e8b-8962-08d86a9eea9d
x-ms-traffictypediagnostic: DBBPR08MB4821:|DB8PR08MB5467:
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB5467D109776853A2D1BAE7329D0A0@DB8PR08MB5467.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:3631;OLM:3631;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 BSJsjAuInXfMY+6kVNbzeCbMPt3I3DG0E6kBZP8XlVX1a+0gAW0UK6hNyU3lJ6Mzv5AQ0YJpOLAakKjf4rPqKh9OmZ5yvoH54XP3yFAfmRptA52hyMZDP6jwHIlf/PJE3FazkouNWkVZBET3AKxDz3SjvEPjp0+DTHvTWyCyLcFQazNPhIdXeNuzF78wX8oQX9VmzazH4a880QUzeqmnGLOuA35fT5W350ZXWcZgKac6oKFoW9lskcPliguMstJ8Mn1VIGK4uHf7FOo9WviEo9IgL8/80LI5CkaaA43bXcKkd54PTpuyth/m2xI6jQL/GeD6sCD2fl2lD5kw4Mtw8A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(376002)(366004)(136003)(39850400004)(8676002)(4326008)(6486002)(36756003)(2616005)(186003)(6506007)(71200400001)(86362001)(26005)(53546011)(478600001)(33656002)(64756008)(66574015)(66446008)(6512007)(8936002)(6916009)(5660300002)(54906003)(66556008)(2906002)(91956017)(66476007)(66946007)(316002)(76116006)(83380400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 d80+1FdvQFkBc5NFBrZMKayyjNy4QvyJ4zM8A2bewiOLLQHWCBxFy+79X7UhYi5M+/9XNrfQr0uGdkb+jKiDoVZBovr3YmRLiHMJSI3f1f3+shpe5bJ94ch8nvZkC8GIoxUTgdSyfwgmiq4KYj/0IwtIJjrLNz33TRFVTJpFM95mahUqC2B0SItkx1Qh4HvKfpEzGGW1787uF85Are1kAlNzFOAvlFbqT1nDo4aY2a7nBVSeLkxsidmyWDHgXONKThrUCrejaZlQZOfMZUYpHpK3rbecq+2JbXdbIhG4Y2K13SnNxRPQKKHie51OMIrJ5djssYAos2675y5TcH/yXT8NVvcfXpHMlnbudD2sRIAnZe3LFPETDsAVk7l2BYgRyJzhBZmvm5q9ssMrflev77xc7y2KuNR4ffe9x56xDJTO4Wh5QkmlMV9j7EhMvwWz3ty3zLTNLdFfqXnMnHcFBNv3ESy04wmGKur66hI+OZGuMjhga6GkuS40oRLuqLgSDz3qStAflAafWNQKNXfKGpIcXBcpBQ6Q4sWdEzNodUd23ycPtGBRdlZXUHpBOgGEcVxzkg1iEb34p7tZZTrMFUHZuecV8r5WEf6IebUbvP4glqT8TVjWInHDtRYouEvWZYKcBCC0owHC6u1i3ZAWmA==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <404D573332400E41A0F44255F3FD5EA1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4821
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	663a989f-522e-49f1-d2cc-08d86a9ee618
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QkqMdzIVefGEt+0zpXJBnu/BOY2TG8oOFZqJjozKqQyj3yav0tUI/O+Fd/HmO6s5Acgh8PjeZj3+NdwfVdLv96kqvXwJqPsiNkpsjiwwtcbfseeqy+aAvsEQlX6syjr6u6GMGzxyb7tPmfj79mpsuPVlCOi0AokAY3tfnyqiKuKij2MyPszBCrmZL9c0xtBnvyK9MVk3owgGvhhejx3rl6tv9rJSEgAAHox6pn/TMhwFS8frnuHgnYd2J1Z7Oci6ZvBAOUtT6jSbLT+8SBCRJGgZr6NUtx27KDp4SJE+nN3Ep+DsGAEditXvaCFy1eClzaTLKmJjxD/NFL3oPOK0B7qLMIVf6gtwZLLxwPZRuvK7YdXQyVgjvYL/Wfhth+vspNlwr4WJyKvKWN0lhB/hCA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(39850400004)(396003)(376002)(136003)(46966005)(36906005)(81166007)(6512007)(6862004)(6506007)(47076004)(70206006)(6486002)(356005)(8676002)(53546011)(4326008)(8936002)(5660300002)(26005)(2616005)(86362001)(33656002)(2906002)(83380400001)(36756003)(336012)(82310400003)(82740400003)(70586007)(478600001)(66574015)(186003)(54906003)(316002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Oct 2020 08:56:44.5881
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 75e22858-57ae-4e8b-8962-08d86a9eea9d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT060.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5467

SGkgSnVyZ2VuLA0KDQo+IE9uIDcgT2N0IDIwMjAsIGF0IDA5OjM5LCBKw7xyZ2VuIEdyb8OfIDxq
Z3Jvc3NAc3VzZS5jb20+IHdyb3RlOg0KPiANCj4gT24gMDcuMTAuMjAgMTA6MjgsIEJlcnRyYW5k
IE1hcnF1aXMgd3JvdGU6DQo+PiBVc2UgbWVtY3B5IGluIGdldEJyaWRnZSB0byBwcmV2ZW50IGdj
YyB3YXJuaW5ncyBhYm91dCB0cnVuY2F0ZWQNCj4+IHN0cmluZ3MuIFdlIGtub3cgdGhhdCB3ZSBt
aWdodCB0cnVuY2F0ZSBpdCwgc28gdGhlIGdjYyB3YXJuaW5nDQo+PiBoZXJlIGlzIHdyb25nLg0K
Pj4gUmV2ZXJ0IHByZXZpb3VzIGNoYW5nZSBjaGFuZ2luZyBidWZmZXIgc2l6ZXMgYXMgYmlnZ2Vy
IGJ1ZmZlcnMNCj4+IGFyZSBub3QgbmVlZGVkLg0KPj4gU2lnbmVkLW9mZi1ieTogQmVydHJhbmQg
TWFycXVpcyA8YmVydHJhbmQubWFycXVpc0Bhcm0uY29tPg0KPj4gLS0tDQo+PiBDaGFuZ2VzIGlu
IHYyOg0KPj4gIFVzZSBNSU4gYmV0d2VlbiBzdHJpbmcgbGVuZ3RoIG9mIGRlLT5kX25hbWUgYW5k
IHJlc3VsdExlbiB0byBjb3B5IG9ubHkNCj4+ICB0aGUgbWluaW11bSBzaXplIHJlcXVpcmVkIGFu
ZCBwcmV2ZW50IGNyb3NzaW5nIHRvIGZyb20gYW4gdW5hbGxvY2F0ZWQNCj4+ICBzcGFjZS4NCj4+
IC0tLQ0KPj4gIHRvb2xzL2xpYnMvc3RhdC94ZW5zdGF0X2xpbnV4LmMgfCAxMSArKysrKysrKyst
LQ0KPj4gIDEgZmlsZSBjaGFuZ2VkLCA5IGluc2VydGlvbnMoKyksIDIgZGVsZXRpb25zKC0pDQo+
PiBkaWZmIC0tZ2l0IGEvdG9vbHMvbGlicy9zdGF0L3hlbnN0YXRfbGludXguYyBiL3Rvb2xzL2xp
YnMvc3RhdC94ZW5zdGF0X2xpbnV4LmMNCj4+IGluZGV4IGQyZWU2ZmRhNjQuLjBhY2UwM2FmMWIg
MTAwNjQ0DQo+PiAtLS0gYS90b29scy9saWJzL3N0YXQveGVuc3RhdF9saW51eC5jDQo+PiArKysg
Yi90b29scy9saWJzL3N0YXQveGVuc3RhdF9saW51eC5jDQo+PiBAQCAtMjksNiArMjksNyBAQA0K
Pj4gICNpbmNsdWRlIDxzdHJpbmcuaD4NCj4+ICAjaW5jbHVkZSA8dW5pc3RkLmg+DQo+PiAgI2lu
Y2x1ZGUgPHJlZ2V4Lmg+DQo+PiArI2luY2x1ZGUgPHhlbi10b29scy9saWJzLmg+DQo+PiAgICAj
aW5jbHVkZSAieGVuc3RhdF9wcml2LmgiDQo+PiAgQEAgLTc4LDcgKzc5LDEzIEBAIHN0YXRpYyB2
b2lkIGdldEJyaWRnZShjaGFyICpleGNsdWRlTmFtZSwgY2hhciAqcmVzdWx0LCBzaXplX3QgcmVz
dWx0TGVuKQ0KPj4gIAkJCQlzcHJpbnRmKHRtcCwgIi9zeXMvY2xhc3MvbmV0LyVzL2JyaWRnZSIs
IGRlLT5kX25hbWUpOw0KPj4gICAgCQkJCWlmIChhY2Nlc3ModG1wLCBGX09LKSA9PSAwKSB7DQo+
PiAtCQkJCQlzdHJuY3B5KHJlc3VsdCwgZGUtPmRfbmFtZSwgcmVzdWx0TGVuKTsNCj4+ICsJCQkJ
CS8qDQo+PiArCQkJCQkgKiBEbyBub3QgdXNlIHN0cm5jcHkgdG8gcHJldmVudCBjb21waWxlciB3
YXJuaW5nIHdpdGgNCj4+ICsJCQkJCSAqIGdjYyA+PSAxMC4wDQo+PiArCQkJCQkgKiBJZiBkZS0+
ZF9uYW1lIGlzIGxvbmdlciB0aGVuIHJlc3VsdExlbiB3ZSB0cnVuY2F0ZSBpdA0KPiANCj4gcy90
aGVuL3RoYW4vDQoNCldpbGwgZml4DQoNCj4gDQo+PiArCQkJCQkgKi8NCj4+ICsJCQkJCW1lbWNw
eShyZXN1bHQsIGRlLT5kX25hbWUsIE1JTihzdHJubGVuKGRlLT5kX25hbWUsDQo+PiArCQkJCQkJ
CQkJc2l6ZW9mKGRlLT5kX25hbWUpKSxyZXN1bHRMZW4gLSAxKSk7DQo+IA0KPiBZb3UgY2FuJ3Qg
dXNlIHNpemVvZihkZS0+ZF9uYW1lKSBoZXJlLCBhcyBBRkFJSyB0aGVyZSBpcyBubyBndWFyYW50
ZWUNCj4gdGhhdCBkZS0+ZF9uYW1lIGlzbid0IGUuZy4gZGVmaW5lZCBsaWtlICJjaGFyIGRfbmFt
ZVtdIi4NCj4gDQo+IE15IHN1Z2dlc3Rpb24gdG8gdXNlIE5BTUVfTUFYIGFzIHVwcGVyIGJvdW5k
YXJ5IGZvciB0aGUgbGVuZ3RoIHdhcw0KPiByZWFsbHkgbWVhbnQgdG8gYmUgdXNlZCB0aGF0IHdh
eS4NCj4gDQo+IEFuZCBhZGRpdGlvbmFsbHkgeW91IG1pZ2h0IHdhbnQgdG8gYWRkIDEgdG8gdGhl
IHN0cm5sZW4oKSByZXN1bHQgaW4NCj4gb3JkZXIgdG8gY29weSB0aGUgdHJhaWxpbmcgMC1ieXRl
LCB0b28gKG9yIHlvdSBzaG91bGQgemVybyBvdXQgdGhlDQo+IHJlc3VsdCBidWZmZXIgYmVmb3Jl
IGFuZCBvbWl0IHdyaXRpbmcgdGhlIGZpbmFsIHplcm8gYnl0ZSkuDQo+IA0KPiBUaGlua2luZyBt
b3JlIGFib3V0IGl0IHplcm9pbmcgdGhlIHJlc3VsdCBidWZmZXIgaXMgYmV0dGVyIGFzIGl0IGV2
ZW4NCj4gY292ZXJzIHRoZSB0aGVvcmV0aWNhbCBjYXNlIG9mIE5BTUVfTUFYIGJlaW5nIHNob3J0
ZXIgdGhhbiByZXN1bHRMZW4uDQoNClNldHRpbmcgdGhlIHJlc3VsdCBidWZmZXIgY29tcGxldGVs
eSB0byAwIGFuZCBkb2luZyBhZnRlciBhIGNvcHkgc291bmRzIGxpa2UNCmEgYmlnIGNvbXBsZXhp
dHkuDQoNCkhvdyBhYm91dDoNCmNvcHlzaXplID0gTUlOKHN0cm5sZW4oZGUtPmRfbmFtZSxOQU1F
X01BWCksIHJlc3VsdExlbiAtIDEpOw0KbWVtY3B5KHJlc3VsdCwgZGUtPmRfbmFtZSwgY29weXNp
emUpOw0KcmVzdWx0W2NvcHlzaXplICsgMV0gPSAwDQoNClRoaXMgd291bGQgY292ZXIgdGhlIGNh
c2Ugd2hlcmUgTkFNRV9NQVggaXMgc2hvcnRlciB0aGVuIHJlc3VsdExlbi4NCg0KQ2hlZXJzDQpC
ZXJ0cmFuZA0KDQo+IA0KPiANCj4gSnVlcmdlbg0KDQo=


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 09:14:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 09:14:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3356.9707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5WR-00049z-7c; Wed, 07 Oct 2020 09:14:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3356.9707; Wed, 07 Oct 2020 09:14:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5WR-00049s-3z; Wed, 07 Oct 2020 09:14:07 +0000
Received: by outflank-mailman (input) for mailman id 3356;
 Wed, 07 Oct 2020 09:14:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQ5WQ-00049n-6z
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 09:14:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 56ed6a3d-4ffb-40c3-8786-251ba8426add;
 Wed, 07 Oct 2020 09:14:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 67BBCAC12;
 Wed,  7 Oct 2020 09:14:04 +0000 (UTC)
X-Inumbo-ID: 56ed6a3d-4ffb-40c3-8786-251ba8426add
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602062044;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7KpFFuvrOT/srWnH9r6/AQUJ2IcEqImp+4OUxPM2kgo=;
	b=vPVAzRGdnU78YBWAGo88v0Zf3RB4Zuvlz0ag4FmuFUSEhU8K9bW238+HsNk75+TfIrCVyE
	YRmoisuW5fLnD1gqr3IL66O74Tawganez070/DAVx47ZUCpCjkgmMicjutNDTcXFdDki0y
	AT9Q7YO8x+nhdwhouDvvm55UBFCQxAA=
Subject: Re: [PATCH v2 1/2] tools: use memcpy instead of strncpy in getBridge
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <bc191370356c300f84a16d10345d4a0d646f5bae.1601977978.git.bertrand.marquis@arm.com>
 <30a4ddc0-9443-ab02-341c-ae08af7fddea@suse.com>
 <B3E4C5D5-5999-4D92-8F56-FFA7019CD9BA@arm.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <209289fb-6de5-07d7-5597-2bf8d0a58f6e@suse.com>
Date: Wed, 7 Oct 2020 11:14:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <B3E4C5D5-5999-4D92-8F56-FFA7019CD9BA@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.10.20 10:56, Bertrand Marquis wrote:
> Hi Jurgen,
> 
>> On 7 Oct 2020, at 09:39, Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 07.10.20 10:28, Bertrand Marquis wrote:
>>> Use memcpy in getBridge to prevent gcc warnings about truncated
>>> strings. We know that we might truncate it, so the gcc warning
>>> here is wrong.
>>> Revert previous change changing buffer sizes as bigger buffers
>>> are not needed.
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> Changes in v2:
>>>   Use MIN between string length of de->d_name and resultLen to copy only
>>>   the minimum size required and prevent crossing to from an unallocated
>>>   space.
>>> ---
>>>   tools/libs/stat/xenstat_linux.c | 11 +++++++++--
>>>   1 file changed, 9 insertions(+), 2 deletions(-)
>>> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
>>> index d2ee6fda64..0ace03af1b 100644
>>> --- a/tools/libs/stat/xenstat_linux.c
>>> +++ b/tools/libs/stat/xenstat_linux.c
>>> @@ -29,6 +29,7 @@
>>>   #include <string.h>
>>>   #include <unistd.h>
>>>   #include <regex.h>
>>> +#include <xen-tools/libs.h>
>>>     #include "xenstat_priv.h"
>>>   @@ -78,7 +79,13 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>>>   				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>>>     				if (access(tmp, F_OK) == 0) {
>>> -					strncpy(result, de->d_name, resultLen);
>>> +					/*
>>> +					 * Do not use strncpy to prevent compiler warning with
>>> +					 * gcc >= 10.0
>>> +					 * If de->d_name is longer then resultLen we truncate it
>>
>> s/then/than/
> 
> Will fix
> 
>>
>>> +					 */
>>> +					memcpy(result, de->d_name, MIN(strnlen(de->d_name,
>>> +									sizeof(de->d_name)),resultLen - 1));
>>
>> You can't use sizeof(de->d_name) here, as AFAIK there is no guarantee
>> that de->d_name isn't e.g. defined like "char d_name[]".
>>
>> My suggestion to use NAME_MAX as upper boundary for the length was
>> really meant to be used that way.
>>
>> And additionally you might want to add 1 to the strnlen() result in
>> order to copy the trailing 0-byte, too (or you should zero out the
>> result buffer before and omit writing the final zero byte).
>>
>> Thinking more about it zeroing the result buffer is better as it even
>> covers the theoretical case of NAME_MAX being shorter than resultLen.
> 
> Setting the result buffer completely to 0 and doing after a copy sounds like
> a big complexity.
> 
> How about:
> copysize = MIN(strnlen(de->d_name,NAME_MAX), resultLen - 1);
> memcpy(result, de->d_name, copysize);
> result[copysize + 1] = 0

result[copysize] = 0;

> 
> This would cover the case where NAME_MAX is shorter than resultLen.

Why is:

memset(result, 0, resultLen);
memcpy(result, de->d_name, MIN(strnlen(de->d_name, NAME_MAX), resultLen - 1));

A big complexity?

In the end both variants are fine.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 09:16:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 09:16:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3357.9719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5Yf-0004IO-L0; Wed, 07 Oct 2020 09:16:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3357.9719; Wed, 07 Oct 2020 09:16:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5Yf-0004IH-Ho; Wed, 07 Oct 2020 09:16:25 +0000
Received: by outflank-mailman (input) for mailman id 3357;
 Wed, 07 Oct 2020 09:16:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQ5Yd-0004I7-VS
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 09:16:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id da4c4b2b-1b41-42dc-b6b1-eba3b14c0ee8;
 Wed, 07 Oct 2020 09:16:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ5Yc-0007vF-OU; Wed, 07 Oct 2020 09:16:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ5Yc-00009A-Gt; Wed, 07 Oct 2020 09:16:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ5Yc-0007GU-GP; Wed, 07 Oct 2020 09:16:22 +0000
X-Inumbo-ID: da4c4b2b-1b41-42dc-b6b1-eba3b14c0ee8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rQ3A8gBSdua2efgKtPwe+NWdmxGmT1r7TFBdylZ9ZCw=; b=AbPiOMv6TPBZdqX8VmaKiug5/b
	copDUU5Fm1sMX6MHdjmwdikCIn5bv9Meecx4+sLX81vO4Ry8pbm/ZC6nQZCF3CLVvvVkzOolRpxI6
	N+I7SimodhLA+q2TO8cFUnxFECuOC3wqqrCLfIhFSKxRX0VI2bJxdRF4NUj8wAHmguFY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155511-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155511: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3fdb431718ff2202d7fea7c64073b707db473ece
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 09:16:22 +0000

flight 155511 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155511/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3fdb431718ff2202d7fea7c64073b707db473ece
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   89 days
Failing since        151818  2020-07-11 04:18:52 Z   88 days   83 attempts
Testing same since   155500  2020-10-06 15:44:40 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19341 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 09:21:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 09:21:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3371.9732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5dc-0005KN-Dx; Wed, 07 Oct 2020 09:21:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3371.9732; Wed, 07 Oct 2020 09:21:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5dc-0005KG-Ap; Wed, 07 Oct 2020 09:21:32 +0000
Received: by outflank-mailman (input) for mailman id 3371;
 Wed, 07 Oct 2020 09:21:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ5db-0005K8-2s
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 09:21:31 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.66]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f32b8c8b-7695-4b0d-b875-4b6cde1aac11;
 Wed, 07 Oct 2020 09:21:29 +0000 (UTC)
Received: from AM5PR1001CA0027.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:206:2::40)
 by DB8PR08MB5419.eurprd08.prod.outlook.com (2603:10a6:10:118::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Wed, 7 Oct
 2020 09:21:27 +0000
Received: from AM5EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:2:cafe::f0) by AM5PR1001CA0027.outlook.office365.com
 (2603:10a6:206:2::40) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Wed, 7 Oct 2020 09:21:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT031.mail.protection.outlook.com (10.152.16.111) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Wed, 7 Oct 2020 09:21:27 +0000
Received: ("Tessian outbound 34b830c8a0ef:v64");
 Wed, 07 Oct 2020 09:21:27 +0000
Received: from 23d9524963af.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 09F4ECDE-2030-4623-B40F-436B17CBA708.1; 
 Wed, 07 Oct 2020 09:20:49 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 23d9524963af.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 07 Oct 2020 09:20:49 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4363.eurprd08.prod.outlook.com (2603:10a6:10:ce::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Wed, 7 Oct
 2020 09:20:48 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Wed, 7 Oct 2020
 09:20:48 +0000
X-Inumbo-ID: f32b8c8b-7695-4b0d-b875-4b6cde1aac11
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=71dI6c17l9Vcvg63H2T+BIJ5G7zSdn5YP3pLav/4aTs=;
 b=gFEAHn58isla8Dt77LaAEX/s/ni1KWRNH5geqevuWRfFiEhoAYc6uN7i0dGWbVJrA8V2TwqSd+qBKLglRB9FUxh8rgBDwVQjI6cagh+u3I9r6SveGqJIhqarEdnSVBcnbpCB5inInuJh1Uuh8i0u04ccVCni95BsBgd+S8H8oQc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 73a5ccb73a81ad19
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/2] tools: use memcpy instead of strncpy in getBridge
Thread-Topic: [PATCH v2 1/2] tools: use memcpy instead of strncpy in getBridge
Thread-Index: AQHWnIQKr7DQ0C+zLEGkNYhGKt2Q8KmL0a8AgAAEzgCAAAThgIAAAeGA
Date: Wed, 7 Oct 2020 09:20:48 +0000
Message-ID: <B4AB3195-CA48-4845-85E3-F21F3773945E@arm.com>
References:
 <bc191370356c300f84a16d10345d4a0d646f5bae.1601977978.git.bertrand.marquis@arm.com>
 <30a4ddc0-9443-ab02-341c-ae08af7fddea@suse.com>
 <B3E4C5D5-5999-4D92-8F56-FFA7019CD9BA@arm.com>
 <209289fb-6de5-07d7-5597-2bf8d0a58f6e@suse.com>
In-Reply-To: <209289fb-6de5-07d7-5597-2bf8d0a58f6e@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 24a2fd58-1631-4f52-f189-08d86aa25e4b
x-ms-traffictypediagnostic: DBBPR08MB4363:|DB8PR08MB5419:
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB5419058428FC1BC8210949649D0A0@DB8PR08MB5419.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4125;OLM:4125;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <968305C6C69D374E9608DD7BAEB74582@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4363
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e8718368-bd11-43bf-3393-08d86aa24717
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Oct 2020 09:21:27.1592
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 24a2fd58-1631-4f52-f189-08d86aa25e4b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5419

> On 7 Oct 2020, at 10:14, Jürgen Groß <jgross@suse.com> wrote:
> 
> On 07.10.20 10:56, Bertrand Marquis wrote:
>> Hi Jurgen,
>>> On 7 Oct 2020, at 09:39, Jürgen Groß <jgross@suse.com> wrote:
>>> 
>>> On 07.10.20 10:28, Bertrand Marquis wrote:
>>>> Use memcpy in getBridge to prevent gcc warnings about truncated
>>>> strings. We know that we might truncate it, so the gcc warning
>>>> here is wrong.
>>>> Revert previous change changing buffer sizes as bigger buffers
>>>> are not needed.
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>> ---
>>>> Changes in v2:
>>>>  Use MIN between string length of de->d_name and resultLen to copy only
>>>>  the minimum size required and prevent crossing to from an unallocated
>>>>  space.
>>>> ---
>>>>  tools/libs/stat/xenstat_linux.c | 11 +++++++++--
>>>>  1 file changed, 9 insertions(+), 2 deletions(-)
>>>> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
>>>> index d2ee6fda64..0ace03af1b 100644
>>>> --- a/tools/libs/stat/xenstat_linux.c
>>>> +++ b/tools/libs/stat/xenstat_linux.c
>>>> @@ -29,6 +29,7 @@
>>>>  #include <string.h>
>>>>  #include <unistd.h>
>>>>  #include <regex.h>
>>>> +#include <xen-tools/libs.h>
>>>>    #include "xenstat_priv.h"
>>>>  @@ -78,7 +79,13 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
>>>>  				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
>>>>    				if (access(tmp, F_OK) == 0) {
>>>> -					strncpy(result, de->d_name, resultLen);
>>>> +					/*
>>>> +					 * Do not use strncpy to prevent compiler warning with
>>>> +					 * gcc >= 10.0
>>>> +					 * If de->d_name is longer then resultLen we truncate it
>>> 
>>> s/then/than/
>> Will fix
>>> 
>>>> +					 */
>>>> +					memcpy(result, de->d_name, MIN(strnlen(de->d_name,
>>>> +									sizeof(de->d_name)),resultLen - 1));
>>> 
>>> You can't use sizeof(de->d_name) here, as AFAIK there is no guarantee
>>> that de->d_name isn't e.g. defined like "char d_name[]".
>>> 
>>> My suggestion to use NAME_MAX as upper boundary for the length was
>>> really meant to be used that way.
>>> 
>>> And additionally you might want to add 1 to the strnlen() result in
>>> order to copy the trailing 0-byte, too (or you should zero out the
>>> result buffer before and omit writing the final zero byte).
>>> 
>>> Thinking more about it zeroing the result buffer is better as it even
>>> covers the theoretical case of NAME_MAX being shorter than resultLen.
>> Setting the result buffer completely to 0 and doing after a copy sounds like
>> a big complexity.
>> How about:
>> copysize = MIN(strnlen(de->d_name,NAME_MAX), resultLen - 1);
>> memcpy(result, de->d_name, copysize);
>> result[copysize + 1] = 0
> 
> result[copysize] = 0;
> 
>> This would cover the case where NAME_MAX is shorter then resultLen.
> 
> Why is:
> 
> memset(result, 0, resultLen);
> memcpy(result, de->d_name, MIN(strnlen(de->d_name,NAME_MAX), resultLen - 1));
> 
> A big complexity?

It is potentially more computation (complexity was maybe not the right word).

> 
> In the end both variants are fine.

In the end I am fine with either; there is no performance issue here.
I will use the memset solution and push a v3.

Cheers
Bertrand

> 
> 
> Juergen


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 09:27:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 09:27:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3384.9745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5ji-0005hB-4d; Wed, 07 Oct 2020 09:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3384.9745; Wed, 07 Oct 2020 09:27:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ5ji-0005h4-1U; Wed, 07 Oct 2020 09:27:50 +0000
Received: by outflank-mailman (input) for mailman id 3384;
 Wed, 07 Oct 2020 09:27:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQ5jg-0005gz-Qv
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 09:27:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bb06fb4-8070-4aa8-9d46-2edb008e9c79;
 Wed, 07 Oct 2020 09:27:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ5je-0008AD-4v; Wed, 07 Oct 2020 09:27:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ5jd-0000ki-U3; Wed, 07 Oct 2020 09:27:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ5jd-0007Kj-Ta; Wed, 07 Oct 2020 09:27:45 +0000
X-Inumbo-ID: 3bb06fb4-8070-4aa8-9d46-2edb008e9c79
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=S+hDr00IXx+g5zdFdmVn2YOGj2aoSFLr2J9ujE5RcCc=; b=Snzwu/9p+To/tbC60mjGm165D3
	1tF3wjT6PhzVSVoLXsHg5E+hUOKGit6XBY27H7VfriPHUPCVH+irP+j8lqFr7FdRuAB72u7xLS22e
	jmDfQGipIwuQOvyEYCoIiQmrlbuEp3Yvy5lccQXGZb/p+Hhbqr0wtm1UOFMJHfhqxytk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155497-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155497: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=7575fdda569b2a2e8be32c1a64ecb05d6f96a500
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 09:27:45 +0000

flight 155497 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155497/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair          8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair          9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  6 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  6 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  7 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      7 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                7575fdda569b2a2e8be32c1a64ecb05d6f96a500
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   67 days
Failing since        152366  2020-08-01 20:49:34 Z   66 days  112 attempts
Testing same since   155497  2020-10-06 13:22:37 Z    0 days    1 attempts

------------------------------------------------------------
2499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 337249 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 09:49:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 09:49:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3396.9759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ63v-0007nE-Uw; Wed, 07 Oct 2020 09:48:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3396.9759; Wed, 07 Oct 2020 09:48:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ63v-0007n7-R8; Wed, 07 Oct 2020 09:48:43 +0000
Received: by outflank-mailman (input) for mailman id 3396;
 Wed, 07 Oct 2020 09:48:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQ63u-0007n1-AU
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 09:48:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c101a6f-d980-496a-8b27-d0e7ad41c65d;
 Wed, 07 Oct 2020 09:48:40 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ63s-00008s-BY; Wed, 07 Oct 2020 09:48:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ63s-0001cL-3w; Wed, 07 Oct 2020 09:48:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ63s-0001sn-3U; Wed, 07 Oct 2020 09:48:40 +0000
X-Inumbo-ID: 3c101a6f-d980-496a-8b27-d0e7ad41c65d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Rn+rif4NJkq47s7xHchR8O8Dk0OF7NczpqkbDmPtNz4=; b=fElZhhFDlz8pamhxozm9cDFaeS
	pwVzfYZhoOKdktZM2Ij58dWbFrZ+ar+F/7Orc2QBuPbIqXo9hwIhfCSmQnwm9xd35i0Cv9k0DAnVG
	X/j3KaGZ4fxDnbL6Zzvygfs6Pl1fDi7f5bm2AsFke1pFSebwp1JFlXi2ZeOn6RyOkgvc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155515-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 155515: all pass - PUSHED
X-Osstest-Versions-This:
    xen=93508595d588afe9dca087f95200effb7cedc81f
X-Osstest-Versions-That:
    xen=8ef6345ef557cc2c47298217635a3088eaa59893
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 09:48:40 +0000

flight 155515 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155515/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  93508595d588afe9dca087f95200effb7cedc81f
baseline version:
 xen                  8ef6345ef557cc2c47298217635a3088eaa59893

Last test of basis   155448  2020-10-04 09:18:26 Z    3 days
Testing same since   155515  2020-10-07 09:19:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8ef6345ef5..93508595d5  93508595d588afe9dca087f95200effb7cedc81f -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 10:06:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 10:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3401.9773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6Ku-0001OT-GS; Wed, 07 Oct 2020 10:06:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3401.9773; Wed, 07 Oct 2020 10:06:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6Ku-0001OM-CR; Wed, 07 Oct 2020 10:06:16 +0000
Received: by outflank-mailman (input) for mailman id 3401;
 Wed, 07 Oct 2020 10:06:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FceR=DO=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kQ6Kt-0001OH-5l
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:06:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89650c40-9ad3-4d02-920d-624753781edc;
 Wed, 07 Oct 2020 10:06:13 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQ6Kr-0000aG-92; Wed, 07 Oct 2020 10:06:13 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQ6Kq-0001aF-Ud; Wed, 07 Oct 2020 10:06:13 +0000
X-Inumbo-ID: 89650c40-9ad3-4d02-920d-624753781edc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=QBcgr8CJNWcbv0HsNfEr0YLJ89jPfDPw8bQfvpTmvDw=; b=m+ncSQEnBoKVSmhknsISpgmTIU
	BW1WQ4z2Z5AHshw56qyDmY8CI6/k/qLfufj3ls+YAX4UDyNdMAebBPsgn8eQaXhT17v1zND4PGPZC
	FeP1Nj+HeaEqT8vqI/RJQ68YBz3NzM1yjH4qB1Q+y7a/SXSlIypN7qjvN+eLEDbve+Us=;
Subject: Re: [PATCH v2] xen/rpi4: implement watchdog-based reset
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, roman@zededa.com
References: <20201002204717.14735-1-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <14712444-992f-c435-883a-388d37177beb@xen.org>
Date: Wed, 7 Oct 2020 11:06:10 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201002204717.14735-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 02/10/2020 21:47, Stefano Stabellini wrote:
> The preferred methord to reboot RPi4 is PSCI. If it is not available,

s/methord/method/

> +
> +#define PM_PASSWORD                 0x5a000000
> +#define PM_RSTC                     0x1c
> +#define PM_WDOG                     0x24
> +#define PM_RSTC_WRCFG_FULL_RESET    0x00000020
> +#define PM_RSTC_WRCFG_CLR           0xffffffcf
> +
> +static void __iomem *rpi4_map_watchdog(void)
> +{
> +    void __iomem *base;
> +    struct dt_device_node *node;
> +    paddr_t start, len;
> +    int ret;
> +
> +    node = dt_find_compatible_node(NULL, NULL, "brcm,bcm2835-pm");
> +    if ( !node )
> +        return NULL;
> +
> +    ret = dt_device_get_address(node, 0, &start, &len);
> +    if ( ret )
> +    {
> +        printk("Cannot read watchdog register address\n");
> +        return NULL;
> +    }
> +
> +    base = ioremap_nocache(start & PAGE_MASK, PAGE_SIZE);
> +    if ( !base )
> +    {
> +        dprintk(XENLOG_ERR, "Unable to map watchdog register!\n");

NIT: I would suggest using printk() rather than dprintk(). It would be 
useful for a normal user to know that we didn't manage to reset the 
platform and why.

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 10:19:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 10:19:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3406.9785 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6Xr-0002i0-Mn; Wed, 07 Oct 2020 10:19:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3406.9785; Wed, 07 Oct 2020 10:19:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6Xr-0002ht-Ja; Wed, 07 Oct 2020 10:19:39 +0000
Received: by outflank-mailman (input) for mailman id 3406;
 Wed, 07 Oct 2020 10:19:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gnuf=DO=epam.com=prvs=8549453b04=anastasiia_lukianenko@srs-us1.protection.inumbo.net>)
 id 1kQ6Xp-0002hm-Lt
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:19:38 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d634b51a-06e7-4bc7-bfc0-5f78cd237088;
 Wed, 07 Oct 2020 10:19:35 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 097AFbpJ013258; Wed, 7 Oct 2020 10:19:33 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2057.outbound.protection.outlook.com [104.47.14.57])
 by mx0b-0039f301.pphosted.com with ESMTP id 3417ue8tjq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 07 Oct 2020 10:19:33 +0000
Received: from DBAPR03MB6534.eurprd03.prod.outlook.com (2603:10a6:10:19b::13)
 by DB7PR03MB3739.eurprd03.prod.outlook.com (2603:10a6:5:1::12) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23; Wed, 7 Oct 2020 10:19:27 +0000
Received: from DBAPR03MB6534.eurprd03.prod.outlook.com
 ([fe80::fdb7:8815:74cf:ab61]) by DBAPR03MB6534.eurprd03.prod.outlook.com
 ([fe80::fdb7:8815:74cf:ab61%7]) with mapi id 15.20.3455.022; Wed, 7 Oct 2020
 10:19:27 +0000
X-Inumbo-ID: d634b51a-06e7-4bc7-bfc0-5f78cd237088
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Z1k+dLSVORM29cbVFmlAIfzncYnQSAL8be+ogAQgXpKEYy3Bo4m+Chq6FJK+3YaoeFCHsaVwPKxRbiq5HoKWzqaPV2149OzMLyWHaeuHkZzOLn3rTMyUT5jCtkcBUIqsfW6+9k0mXjVEJmC12H0vwZCdt3cA6hcG8lTtjgyO+fEg9+8dAwt6bAQgNVrZ7u/d+IXFGwKRbhzH0/C+fo0StnIqLXjNojDtr+1tIoAWbSwgvqI90DhtDtQ9vyUfpgEfurnj1bh2j727+qCNlscuSRXiXm3wNe8tiX5YYU9igfMDdR8P/mdz3Aj3Kd1XgegwkEtl74lYx9KAN7IFu0lCBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dghLAKmeiot64G2UkCyE6qRfMS9Ocom2Zlln7n4lg1Y=;
 b=e5IoToIOtRvK1Kj6R2/K2tfmvObnQJ6h7MdcuouTsxMZ9x7sNxZGzzoLgQLuxmJKz/iPI7yijDSpMY24Ru2X1oJLhDlyMXaxkcH7wyPF398UhTTFqdFJN5Ow93c/LKXm9kvgwKv0uivISOHzEj368EejTcLSeDGvXfCUt7aSmzxL8/tvbs9OfJ/IbT5cykHeI+a6ZclyLrYC65vr2dZs6c4GPuZl1lD9wL7k+hZ8+k6V8h9juuQ44znRxKlk7zhiau+PfirDNmfrgHWAIESmGroAcxlTuHKRPEijHxeEiRphro0BFrvwz64d01bu3kVgwY10WJv1YE1wtvgqkKPxhw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dghLAKmeiot64G2UkCyE6qRfMS9Ocom2Zlln7n4lg1Y=;
 b=BEmF40xama3UopZyKKn+qq5zWjyNXbLtu/zJteFpQBjRNhRk7Vo3ZVgl5Kp074xjkJ39K5X54M7sG2/tpJ4fuD33ECqv/Xl2b8QvUvT2dywS4MddY55mF2u6IFfAfku9QZxofHRiwIKJ/4fe+wK3FefLaSa+BK5x/YfYT5sQaRyhk8hxiFUAt+Zz+YNzvlr89XzhOCTYoUYpfV6W81NnkafhjX0RYz4KjeBqAnQXmOJutXT/o+oTp4ivmdjrSfgBbBHBKV0ltu3soWQlNQJAXyTVvfRcXVWjvwRlYh9HwLYFRxtNLzh2xhSRLs92TfF2Lv4fjd6H0TI0pvzR5JEV4g==
Received: from DBAPR03MB6534.eurprd03.prod.outlook.com (2603:10a6:10:19b::13)
 by DB7PR03MB3739.eurprd03.prod.outlook.com (2603:10a6:5:1::12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23; Wed, 7 Oct 2020 10:19:27 +0000
Received: from DBAPR03MB6534.eurprd03.prod.outlook.com
 ([fe80::fdb7:8815:74cf:ab61]) by DBAPR03MB6534.eurprd03.prod.outlook.com
 ([fe80::fdb7:8815:74cf:ab61%7]) with mapi id 15.20.3455.022; Wed, 7 Oct 2020
 10:19:27 +0000
From: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
To: "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>
CC: "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
        "vicooodin@gmail.com" <vicooodin@gmail.com>,
        "julien@xen.org"
	<julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Artem
 Mygaiev <Artem_Mygaiev@epam.com>,
        "committers@xenproject.org"
	<committers@xenproject.org>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Xen Coding style and clang-format
Thread-Topic: Xen Coding style and clang-format
Thread-Index: AQHWlwq4nKYEhMN38U+xmvwRsutq+amA8joAgAAHUgCAAXyKAIAAENkAgAlxpgA=
Date: Wed, 7 Oct 2020 10:19:27 +0000
Message-ID: <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
	 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
	 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
	 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
	 <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
In-Reply-To: <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.213.80]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f21fb7ab-ee62-4a85-78fb-08d86aaa78b7
x-ms-traffictypediagnostic: DB7PR03MB3739:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <DB7PR03MB37392F37266B20B97936F4CFF20A0@DB7PR03MB3739.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 R96to3Mlk0wNN5ucj7duMmbiRqJdTXqrXvoPOOi+3QWdwiSHP5FfLtrmppojzEnVv1eQd8I4+0PVhlvGFap1M/1YbyYHblIOiAbHBeTvbuPwP4IKnOLeIQIcOLJxah5PhZix5YOHayvK0I1HTcEExo1a2tIZIAVC6WFMEQ+gzR6MIE0kXsWXVRS2LK6Xx6XPFd39REXkjKtr4tgjjP5LtLxlOpLdoZFd6Nl1yM6p6i6OlfSJCDIgWDMsOQyY1udsGEfWON7za/9BoL0lkuum8YDfAfJtgM9TpO03iwrbUvVw2OfsHOjvKWH7+DcZfbjW
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DBAPR03MB6534.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(376002)(366004)(346002)(136003)(396003)(6916009)(4326008)(71200400001)(6486002)(66446008)(66476007)(64756008)(76116006)(66946007)(91956017)(66556008)(6512007)(5660300002)(8936002)(8676002)(2616005)(26005)(2906002)(316002)(53546011)(6506007)(86362001)(55236004)(186003)(36756003)(478600001)(83380400001)(54906003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 PkBb/BUAQFKmDf91lUJJl+vHvw7Hj65VnQsy3V/gsxQlC9HST1/IrA+IYLCLnHlF8ZSDeyYzk6TvmoPxjSes8cbOIfWDbvzOhscGMhro2FTS1oTNG9bGsF6+Rtak9d/tDMda9cg6ymmoMdKaXt0IF7f5D/4xp4xXpBrFfcsUbO3X1QlDeeZxsCtwHjOXjAVh7wSojNfnmvPZV0ZDeGeKkPY+yV6aBU/s+4aZZGcCY+KSvsOb+2gxnKt2Tb2K/5HFpLl+nRZ94S20Dq6cQy3BsK3APXKBgFRpBmmHFpiyslOVB0kNjxhZhrrw4a2a1jcJ0z9ntztNVwUrmLaxSV3wz+XNnWmOUecS8iiii076Uz/pO1XzV/Q0fpncMD1RUWhVSRJ7166cqMFcTSTUHqTjEFnHYxruhh8iu2RTpJgl5u4410btktu4aBalEMm25dOa6yZJoyNCCMplLM2hxPL5g1U5Jals9x2kD4WhX8Q9UG9vrrKCKNGzoJpGjfCBuL2GIXxrwyvQinhapj6k2mMxnstT6uPeWPYBJtL15hBgzQV7C5lVWbwN9H5TIn+YF8vcduXYwh/1dsaSC3Hv8F9dVydjgQYgWNJT4i4S82XDkhQMIxSpCOWueOm092KxM+MFVG0cejAqn/q7mx2OTvlUWA==
Content-Type: text/plain; charset="utf-8"
Content-ID: <461C695B008DB2478AF6C32C5E32780F@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DBAPR03MB6534.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f21fb7ab-ee62-4a85-78fb-08d86aaa78b7
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 Oct 2020 10:19:27.3118
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: txRrEYFb/vNRowylqGUCfUIRloxt7lwnK6S+XHeUvOgQd0KGHIkU74BeIHvrit2Kg4XQRjk5MxWvwGFQUaeoTtrHY/KhonVNlF9TDldO5UQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR03MB3739
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687
 definitions=2020-10-07_05:2020-10-06,2020-10-07 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0
 impostorscore=0 lowpriorityscore=0 mlxlogscore=999 priorityscore=1501
 bulkscore=0 spamscore=0 suspectscore=0 adultscore=0 mlxscore=0
 clxscore=1015 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2006250000 definitions=main-2010070068

Hi all,

On Thu, 2020-10-01 at 10:06 +0000, George Dunlap wrote:
> > On Oct 1, 2020, at 10:06 AM, Anastasiia Lukianenko <
> > Anastasiia_Lukianenko@epam.com> wrote:
> > 
> > Hi,
> > 
> > On Wed, 2020-09-30 at 10:24 +0000, George Dunlap wrote:
> > > > On Sep 30, 2020, at 10:57 AM, Jan Beulich <jbeulich@suse.com>
> > > > wrote:
> > > > 
> > > > On 30.09.2020 11:18, Anastasiia Lukianenko wrote:
> > > > > I would like to know your opinion on the following coding
> > > > > style
> > > > > cases.
> > > > > Which option do you think is correct?
> > > > > 1) Function prototype when the string length is longer than
> > > > > the
> > > > > allowed
> > > > > one
> > > > > -static int __init
> > > > > -acpi_parse_gic_cpu_interface(struct acpi_subtable_header
> > > > > *header,
> > > > > -                             const unsigned long
> > > > > end)
> > > > > +static int __init acpi_parse_gic_cpu_interface(
> > > > > +    struct acpi_subtable_header *header, const unsigned long
> > > > > end)
> > > > 
> > > > Both variants are deemed valid style, I think (same also goes
> > > > for
> > > > function calls with this same problem). In fact you mix two
> > > > different style aspects together (placement of parameter
> > > > declarations and placement of return type etc) - for each
> > > > individually both forms are deemed acceptable, I think.
> > > 
> > > If we're going to have a tool go through and report (correct?)
> > > all
> > > these coding style things, it's an opportunity to think if we
> > > want to
> > > add new coding style requirements (or change existing
> > > requirements).
> > > 
> > 
> > I am ready to discuss new requirements and implement them in rules
> > of
> > the Xen Coding style checker.
> 
> Thank you. :-)  But what I meant was: Right now we don't require one
> approach or the other for this specific instance.  Do we want to
> choose one?
> 
> I think in this case it makes sense to do the easiest thing.  If it's
> easy to make the current tool accept both styles, let's just do that
> for now.  If the tool currently forces you to choose one of the two
> styles, let's choose one.
> 
>  -George

During the detailed study of the Xen checker and the Clang-Format Style
Options, it was found that this tool, unfortunately, is not so flexible
to allow the author to independently choose the formatting style in
situations that I described in the last letter. For example define code
style:
-#define ALLREGS \
-    C(r0, r0_usr);   C(r1, r1_usr);   C(r2, r2_usr);   C(r3,
r3_usr);   \
-    C(cpsr, cpsr)
+#define ALLREGS            \
+    C(r0, r0_usr);         \
+    C(r1, r1_usr);         \
+    C(r2, r2_usr);         \
There are also some inconsistencies in the formatting of the tool and
what is written in the Xen coding style rules. For example, the
comment format:
-    /* PC should be always a multiple of 4, as Xen is using ARM
instruction set */
+    /* PC should be always a multiple of 4, as Xen is using ARM
instruction set
+     */
I would like to draw your attention to the fact that the comment
behaves in this way, since the line length exceeds the allowable one.
The ReflowComments option is responsible for this format. It can be
turned off, but then the result will be:
ReflowComments=false:
/* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with
plenty of information */

ReflowComments=true:
/* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with
plenty of
 * information */

So I want to know if the community is ready to add new formatting
options and edit old ones. Below I will give examples of what
corrections the checker is currently making (the first variant in each
case is existing code and the second variant is formatted by checker).
If they fit the standards, then I can document them in the coding
style. If not, then I try to configure the checker. But the idea is
that we need to choose one option that will be considered correct.
1) Function prototype when the string length is longer than the allowed
-static int __init
-acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
-                             const unsigned long end)
+static int __init acpi_parse_gic_cpu_interface(
+    struct acpi_subtable_header *header, const unsigned long end)
2) Wrapping an operation to a new line when the string length is longer
than the allowed
-    status = acpi_get_table(ACPI_SIG_SPCR, 0,
-                            (struct acpi_table_header **)&spcr);
+    status =
+        acpi_get_table(ACPI_SIG_SPCR, 0, (struct acpi_table_header
**)&spcr);
3) Space after brackets
-    return ((char *) base + offset);
+    return ((char *)base + offset);
4) Spaces in brackets in switch condition
-    switch ( domctl->cmd )
+    switch (domctl->cmd)
5) Spaces in brackets in operation
-    imm = ( insn >> BRANCH_INSN_IMM_SHIFT ) & BRANCH_INSN_IMM_MASK;
+    imm = (insn >> BRANCH_INSN_IMM_SHIFT) & BRANCH_INSN_IMM_MASK;
6) Spaces in brackets in return
-        return ( !sym->name[2] || sym->name[2] == '.' );
+        return (!sym->name[2] || sym->name[2] == '.');
7) Space after sizeof
-    clean_and_invalidate_dcache_va_range(new_ptr, sizeof (*new_ptr) *
len);
+    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr) *
len);
8) Spaces before comment if it's on the same line
-    case R_ARM_MOVT_ABS: /* S + A */
+    case R_ARM_MOVT_ABS:    /* S + A */

-    if ( tmp == 0UL )       /* Are any bits set? */
-        return result + size;   /* Nope. */
+    if ( tmp == 0UL )         /* Are any bits set? */
+        return result + size; /* Nope. */

9) Space after for_each_vcpu
-        for_each_vcpu(d, v)
+        for_each_vcpu (d, v)
10) Spaces in declaration
-    union hsr hsr = { .bits = regs->hsr };
+    union hsr hsr = {.bits = regs->hsr};

Regards,
Anastasiia


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 10:20:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 10:20:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3407.9797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6Yz-0003TS-1C; Wed, 07 Oct 2020 10:20:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3407.9797; Wed, 07 Oct 2020 10:20:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6Yy-0003TL-Uc; Wed, 07 Oct 2020 10:20:48 +0000
Received: by outflank-mailman (input) for mailman id 3407;
 Wed, 07 Oct 2020 10:20:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aqtN=DO=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kQ6Yx-0003TE-PC
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:20:47 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 50f16277-5f50-4ec3-963b-8b1b70349d73;
 Wed, 07 Oct 2020 10:20:46 +0000 (UTC)
X-Inumbo-ID: 50f16277-5f50-4ec3-963b-8b1b70349d73
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602066045;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Mn6pxC92/K1q3H34qSpWU52CtwsJhqF4O23oEktGShA=;
  b=UpQL+I67DX4RHxCUT6PwHrt8vyD51stUS6vWl8WlLfsn6N6ESDYbK7sl
   agXfWR58y628Dbgq3zpu4VE7kfZmjsJqYP2+a4Xv9iYwD5sS7nVXFW/Yf
   fc2aIpnUIUwu4MgQpKbHmkllDkadPPyb3P6QVAHcB0IHk7IU5TqCJchfG
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: OT05eKaA7bJf3fmLEQH4RaUgm893gUAss1DD+UtZfWZNZ5tdOif/RcdFTxqeYPuPLOU9Ru4Aqd
 XJGFhxu1jeLnxv44vluqOB8c5/tuzcEBaSweTb2dyIrY5FvMtkgqDyXSvuW8+N2AHh1oxXQLSv
 modd+NSOA32qRS2q6QDbelAG6pOmhins9EcXTd8J2uinaVb8xZWMqQhXG721TRTKydhVhbsVvT
 TN4eQNMakaLECyfYCvibbTisw/HFbo8jOTRCYvEldnQZmdVdCgU0EAXD0rsJpK/kxvqGHYO8ix
 6Ww=
X-SBRS: None
X-MesageID: 29495330
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,346,1596513600"; 
   d="scan'208";a="29495330"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/msr: handle IA32_THERM_STATUS
Date: Wed, 7 Oct 2020 12:20:32 +0200
Message-ID: <20201007102032.98565-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Windows 8 will attempt to read MSR_IA32_THERM_STATUS and panic if a
#GP fault is injected as a result:

vmx.c:3035:d8v0 RDMSR 0x0000019c unimplemented
d8v0 VIRIDIAN CRASH: 3b c0000096 fffff8061de31651 fffff4088a613720 0

So handle the MSR and return 0 instead.

Note that this is done in the generic MSR handler, so PV guests will
also get 0 back when trying to read the MSR. There doesn't seem to be
much value in handling the MSR for HVM guests only.

Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/msr.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index e4c4fa6127..190d6ac6c5 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -253,6 +253,12 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
             break;
         goto gp_fault;
 
+    case MSR_IA32_THERM_STATUS:
+        if ( cp->x86_vendor != X86_VENDOR_INTEL )
+            goto gp_fault;
+        *val = 0;
+        break;
+
     case MSR_X2APIC_FIRST ... MSR_X2APIC_LAST:
         if ( !is_hvm_domain(d) || v != curr )
             goto gp_fault;
-- 
2.28.0
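The guest-visible semantics of the hunk above can be modelled outside Xen. This is only a sketch: the enum names and the fault encoding below are illustrative stand-ins, not Xen's actual definitions.

```c
#include <assert.h>
#include <stdint.h>

#define MSR_IA32_THERM_STATUS 0x0000019c

enum vendor { VENDOR_INTEL, VENDOR_AMD };
enum rdmsr_result { RDMSR_OK, RDMSR_GP_FAULT };

/*
 * Model of the new case in guest_rdmsr(): Intel guests read 0 from
 * IA32_THERM_STATUS, while any other vendor still gets a #GP fault.
 */
static enum rdmsr_result model_rdmsr(enum vendor v, uint32_t msr,
                                     uint64_t *val)
{
    switch ( msr )
    {
    case MSR_IA32_THERM_STATUS:
        if ( v != VENDOR_INTEL )
            return RDMSR_GP_FAULT;
        *val = 0;
        return RDMSR_OK;

    default:
        return RDMSR_GP_FAULT;
    }
}
```

With this in place, a Windows 8 style read of MSR 0x19c on an Intel-flavoured guest succeeds with value 0 instead of injecting the fault that triggered the VIRIDIAN crash.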



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 10:35:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 10:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3411.9809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6md-0004et-Au; Wed, 07 Oct 2020 10:34:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3411.9809; Wed, 07 Oct 2020 10:34:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6md-0004em-7d; Wed, 07 Oct 2020 10:34:55 +0000
Received: by outflank-mailman (input) for mailman id 3411;
 Wed, 07 Oct 2020 10:34:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mlZt=DO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQ6mb-0004eh-Uz
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:34:53 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bda542e0-ec04-4a18-9872-e2c8b0647df9;
 Wed, 07 Oct 2020 10:34:52 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id k18so1759909wmj.5
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 03:34:52 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id w11sm1610663wrn.27.2020.10.07.03.34.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 07 Oct 2020 03:34:50 -0700 (PDT)
X-Inumbo-ID: bda542e0-ec04-4a18-9872-e2c8b0647df9
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=GsmpmQ8iRPAZ/finHj9qx+zPldlNPPFEeNURkdcky4c=;
        b=m6urFejpmxCyGBNIuRyWnjut4d0jBMSLRu5YonVv/29badXI7vZI5owP2r/svQnIas
         TnrMRXwf0ev5Cm8x4Umb/K2M/MZ1Fe6B+jt+zPbPWEmDOfw9mlyrBIFVAM7JRU++jAMu
         cxz3oLUDv7NvceBYHUkVvykIvEw5RI3UYfYJ+5i7A5SwijiZGoRvUk60OMftiIKxMM6k
         H0sbJYHDMkiXvDp9RWeHmYTl8jHJQA+ztvMW1K0Uf8pqK7pbtL79yO5WaI3IPjZzrFPk
         erMC+Ea6PCvYSByBYPmJ2rlKVJU8lOyJNnV7jRP5m754v1PUlMUsLD8P1b4RsRkblv0P
         t7wA==
X-Gm-Message-State: AOAM530e54sP1chTPKAenc/qPODuNOU2/oMwkObVrEV68yQHO5XOAPLO
	Qcl6x76bZi0UrqNjVSr+ZIU=
X-Google-Smtp-Source: ABdhPJzTib7Sv0zS7dNDDGn2TOGoGwXi1b4RWxRAN2KCorYTMApBFqsLCdq4I8R9P3nx9OLuzza25w==
X-Received: by 2002:a7b:cb44:: with SMTP id v4mr2537102wmj.101.1602066892043;
        Wed, 07 Oct 2020 03:34:52 -0700 (PDT)
Date: Wed, 7 Oct 2020 10:34:49 +0000
From: Wei Liu <wl@xen.org>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH 2/3] tools/init-xenstore-domain: support xenstore pvh
 stubdom
Message-ID: <20201007103449.h6sfj3yhuxbvvqaa@liuwe-devbox-debian-v2>
References: <20200923064541.19546-1-jgross@suse.com>
 <20200923064541.19546-3-jgross@suse.com>
 <20200930154611.xqzdumwec7nlnidl@liuwe-devbox-debian-v2>
 <baa915a8-bd96-4669-dfa1-1e4e09024493@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <baa915a8-bd96-4669-dfa1-1e4e09024493@suse.com>
User-Agent: NeoMutt/20180716

On Wed, Oct 07, 2020 at 08:54:43AM +0200, Jürgen Groß wrote:
> On 30.09.20 17:46, Wei Liu wrote:
> > On Wed, Sep 23, 2020 at 08:45:40AM +0200, Juergen Gross wrote:
> > > Instead of creating the xenstore-stubdom domain first and parsing the
> > > kernel later do it the other way round. This enables to probe for the
> > > domain type supported by the xenstore-stubdom and to support both, pv
> > > and pvh type stubdoms.
> > > 
> > > Try to parse the stubdom image first for PV support, if this fails use
> > > HVM. Then create the domain with the appropriate type selected.
> > > 
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > [...]
> > > +    dom->container_type = XC_DOM_HVM_CONTAINER;
> > > +    rv = xc_dom_parse_image(dom);
> > > +    if ( rv )
> > > +    {
> > > +        dom->container_type = XC_DOM_PV_CONTAINER;
> > > +        rv = xc_dom_parse_image(dom);
> > > +        if ( rv )
> > > +        {
> > > +            fprintf(stderr, "xc_dom_parse_image failed\n");
> > > +            goto err;
> > > +        }
> > > +    }
> > > +    else
> > > +    {
> > > +        config.flags |= XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap;
> > > +        config.arch.emulation_flags = XEN_X86_EMU_LAPIC;
> > > +        dom->target_pages = mem_size >> XC_PAGE_SHIFT;
> > > +        dom->mmio_size = GB(4) - LAPIC_BASE_ADDRESS;
> > > +        dom->lowmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
> > > +                          LAPIC_BASE_ADDRESS : mem_size;
> > > +        dom->highmem_end = (mem_size > LAPIC_BASE_ADDRESS) ?
> > > +                           GB(4) + mem_size - LAPIC_BASE_ADDRESS : 0;
> > > +        dom->mmio_start = LAPIC_BASE_ADDRESS;
> > > +        dom->max_vcpus = 1;
> > > +        e820[0].addr = 0;
> > > +        e820[0].size = dom->lowmem_end;
> > > +        e820[0].type = E820_RAM;
> > > +        e820[1].addr = LAPIC_BASE_ADDRESS;
> > > +        e820[1].size = dom->mmio_size;
> > > +        e820[1].type = E820_RESERVED;
> > > +        e820[2].addr = GB(4);
> > > +        e820[2].size = dom->highmem_end - GB(4);
> > 
> > Do you not want to check if highmem_end is larger than GB(4) before
> > putting in this region?
> 
> Oh, just realized: further down I'm setting the guest's map with either
> 2 or 3 entries depending on dom->highmem_end value.
> 
> So I think this is fine.
> 

Fair enough.

Acked-by: Wei Liu <wl@xen.org>

Wei.

> 
> Juergen
> 
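The 2-vs-3-entry behaviour Juergen points to can be sanity-checked in isolation. Below is a minimal sketch of the address arithmetic from the quoted hunk; `struct region` and the constant values are illustrative stand-ins rather than the actual types and headers used by init-xenstore-domain.

```c
#include <assert.h>
#include <stdint.h>

#define GB(x)              ((uint64_t)(x) << 30)
#define LAPIC_BASE_ADDRESS 0xfee00000ULL /* illustrative x86 default */

struct region { uint64_t addr, size; };

/*
 * Returns the number of e820 entries (2 or 3) for a PVH xenstore
 * stubdom with mem_size bytes of RAM, mirroring the quoted logic:
 * RAM below the LAPIC, a reserved MMIO hole up to 4GiB, and a
 * high-RAM entry only when memory spills past the hole.
 */
static int build_layout(uint64_t mem_size, struct region e820[3])
{
    uint64_t lowmem_end = mem_size > LAPIC_BASE_ADDRESS ? LAPIC_BASE_ADDRESS
                                                        : mem_size;
    uint64_t highmem_end = mem_size > LAPIC_BASE_ADDRESS
                           ? GB(4) + mem_size - LAPIC_BASE_ADDRESS : 0;

    e820[0] = (struct region){ 0, lowmem_end };
    e820[1] = (struct region){ LAPIC_BASE_ADDRESS,
                               GB(4) - LAPIC_BASE_ADDRESS };
    if ( !highmem_end )
        return 2;
    e820[2] = (struct region){ GB(4), highmem_end - GB(4) };
    return 3;
}
```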


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 10:39:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 10:39:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3416.9821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6qg-00050L-1y; Wed, 07 Oct 2020 10:39:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3416.9821; Wed, 07 Oct 2020 10:39:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ6qf-00050E-Uu; Wed, 07 Oct 2020 10:39:05 +0000
Received: by outflank-mailman (input) for mailman id 3416;
 Wed, 07 Oct 2020 10:39:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FceR=DO=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kQ6qe-000509-1W
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:39:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3aa607b9-3041-4c7d-9b8a-2ba0f52f0ebe;
 Wed, 07 Oct 2020 10:39:03 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQ6qZ-0001GA-TP; Wed, 07 Oct 2020 10:38:59 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQ6qZ-0003th-Lr; Wed, 07 Oct 2020 10:38:59 +0000
X-Inumbo-ID: 3aa607b9-3041-4c7d-9b8a-2ba0f52f0ebe
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lfyZLJuMXdhWOK+eBZOGP+LCw9yxvshAac7336OYOIw=; b=AM2Yi6iiGGqo7Y05tN6xp9e66V
	bF4l3sqaoVOdbDw0x8KGQrg377hzPHo8bDDc/gXoaepo/dbEovkkDmMpA7qUW7ewUV0IknO6Rgece
	Jb/9MeWFcxsQJZWXoQd387wJrakPUrGl+KcQ7ABDBAXQ3/sNQyE+a75fBglz4PDSWCKs=;
Subject: Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache
 handling common
To: Oleksandr <olekstysh@gmail.com>, Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Julien Grall <julien.grall@arm.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
 <1599769330-17656-14-git-send-email-olekstysh@gmail.com>
 <83dfb207-c191-8dad-1474-ce57b6d51102@suse.com>
 <2cab3ca5-0f2b-a813-099f-95bbf54bb9c8@gmail.com>
 <17f1c7d2-7a84-a6a5-4afb-f82e67bc9fd0@suse.com>
 <0fa6a31c-8da6-2a0a-b110-a697f4955702@gmail.com>
 <3abe3988-f1c0-9bbf-1ff9-ce3ae380c825@suse.com>
 <47ecdde7-6575-bee8-7981-7b1a31715a0b@gmail.com>
 <0aa9a225-1231-fa98-f2a1-caf898a3ed86@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fa610665-78c2-3bd0-66f4-2aa716bafe64@xen.org>
Date: Wed, 7 Oct 2020 11:38:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <0aa9a225-1231-fa98-f2a1-caf898a3ed86@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Oleksandr,

On 02/10/2020 10:55, Oleksandr wrote:
> If I got it correctly, there won't be a suitable common place to set 
> the qemu_mapcache_invalidate flag anymore,
> as XENMEM_decrease_reservation is not the only place where we need to 
> decide whether to set it.
> By analogy, on Arm we probably want to do so in 
> guest_physmap_remove_page() (or maybe better in p2m_remove_mapping()).
> Julien, what do you think?

At the moment, the Arm code doesn't explicitly remove the existing 
mapping before inserting the new one. Instead, this is done 
implicitly by p2m_set_entry().

So I think we want to invalidate the QEMU mapcache in p2m_set_entry() if 
the old entry is a RAM page *and* the new MFN is different.
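
Roughly something along these lines (a standalone sketch with simplified 
stand-ins for the real p2m types and the real p2m_set_entry() logic, not 
actual Xen code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the real Xen p2m types. */
typedef uint64_t mfn_t;
typedef enum { p2m_invalid, p2m_ram_rw } p2m_type_t;

struct p2m_entry {
    p2m_type_t type;
    mfn_t mfn;
};

static bool qemu_mapcache_invalidate;

/*
 * Sketch of the proposed check: flag a mapcache invalidation when an
 * existing RAM entry is replaced by a different MFN, before the entry
 * is updated.
 */
static void p2m_set_entry_sketch(struct p2m_entry *e,
                                 p2m_type_t new_type, mfn_t new_mfn)
{
    if ( e->type == p2m_ram_rw && e->mfn != new_mfn )
        qemu_mapcache_invalidate = true;

    e->type = new_type;
    e->mfn = new_mfn;
}
```

That way both the explicit-removal and the implicit-replacement paths 
would go through the same check.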

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 10:51:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 10:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3418.9833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ726-0006lx-6p; Wed, 07 Oct 2020 10:50:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3418.9833; Wed, 07 Oct 2020 10:50:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ726-0006lq-36; Wed, 07 Oct 2020 10:50:54 +0000
Received: by outflank-mailman (input) for mailman id 3418;
 Wed, 07 Oct 2020 10:50:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mlZt=DO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQ725-0006ll-GV
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:50:53 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d40c828-6c35-4167-ac62-6500d1c042f1;
 Wed, 07 Oct 2020 10:50:52 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id n18so1607685wrs.5
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 03:50:52 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id l18sm2349693wrp.84.2020.10.07.03.50.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 07 Oct 2020 03:50:50 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mlZt=DO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kQ725-0006ll-GV
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:50:53 +0000
X-Inumbo-ID: 2d40c828-6c35-4167-ac62-6500d1c042f1
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2d40c828-6c35-4167-ac62-6500d1c042f1;
	Wed, 07 Oct 2020 10:50:52 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id n18so1607685wrs.5
        for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 03:50:52 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=czGh4ZowBRDnHUvhZhQVe6BWzab9g6zWKZOGRKWO50c=;
        b=ULsQmCmyyCoHnjtKeurjgBoXygiMf1BLVk5RM9KcS/4FNrykjmLI3MAf9fr5ZYsd2z
         z3CEEzv6kkpV/1nL67m8ETDOkU5naAksnvZ9Q18+LQXzbXJ/AJYU/UhHCJdPtfTBjhq5
         T2ZhjKC02zHVzsawYZ5u5hujYwTqVK31yj89KWNiqH0B+AwX/8e5eQwQQonL/sIgFacQ
         tiCUkiC0pVheSPeRrPbDGdlM6R87GnVcO/WWTfleZ6ZPvCj6Up5PHHpzTavrpt4mZE/F
         f01JJXFs6t3mzhbLmBlngXh5wsNv84NUazUW50C+BR4QbEK9sEsl6bZjMljxm/hdJhPm
         oa4w==
X-Gm-Message-State: AOAM533xY6SUZeh9tMlk+QM/5tBiS4csDztU5B1RqY99b92tT8Q8rQ8i
	xGQv9B5bNJXIHEoK/PjdR/I=
X-Google-Smtp-Source: ABdhPJwKyQC2r4r3wyuItlOKMNZ6CvLXHaDBLFamy19yJ7RJzroqGqQpvuc/FO6BCd2PTm2uQNjC+Q==
X-Received: by 2002:a5d:46c5:: with SMTP id g5mr2952141wrs.416.1602067851476;
        Wed, 07 Oct 2020 03:50:51 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id l18sm2349693wrp.84.2020.10.07.03.50.50
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 07 Oct 2020 03:50:50 -0700 (PDT)
Date: Wed, 7 Oct 2020 10:50:49 +0000
From: Wei Liu <wl@xen.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] libxl: only query VNC when enabled
Message-ID: <20201007105049.vfpunr4g62fqvijr@liuwe-devbox-debian-v2>
References: <20201001235337.83948-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201001235337.83948-1-jandryuk@gmail.com>
User-Agent: NeoMutt/20180716

On Thu, Oct 01, 2020 at 07:53:37PM -0400, Jason Andryuk wrote:
> QEMU without VNC support (configure --disable-vnc) will return an error
> when VNC is queried over QMP since it does not recognize the QMP
> command.  This will cause libxl to fail starting the domain even if VNC
> is not enabled.  Therefore only query QEMU for VNC support when using
> VNC, so a VNC-less QEMU will function in this configuration.
> 
> 'goto out' jumps to the call to device_model_postconfig_done(), the
> final callback after the chain of vnc queries.  This bypasses all the
> QMP VNC queries.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  tools/libs/light/libxl_dm.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
> index a944181781..d1ff35dda3 100644
> --- a/tools/libs/light/libxl_dm.c
> +++ b/tools/libs/light/libxl_dm.c
> @@ -3140,6 +3140,7 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
>  {
>      EGC_GC;
>      libxl__dm_spawn_state *dmss = CONTAINER_OF(qmp, *dmss, qmp);
> +    const libxl_vnc_info *vnc = libxl__dm_vnc(dmss->guest_config);
>      const libxl__json_object *item = NULL;
>      const libxl__json_object *o = NULL;
>      int i = 0;
> @@ -3197,6 +3198,9 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
>          if (rc) goto out;
>      }
>  
> +    if (!vnc)
> +        goto out;
> +

I would rather this check be done in device_model_postconfig_vnc.

Does the following work for you?

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index a944181781bb..c5db755a65d7 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -3222,6 +3222,8 @@ static void device_model_postconfig_vnc(libxl__egc *egc,

     if (rc) goto out;

+    if (!vnc) goto out;
+
     /*
      * query-vnc response:
      * { 'enabled': 'bool', '*host': 'str', '*service': 'str' }
@@ -3255,7 +3257,8 @@ static void device_model_postconfig_vnc(libxl__egc *egc,
         if (rc) goto out;
     }

-    if (vnc && vnc->passwd && vnc->passwd[0]) {
+    assert(vnc);
+    if (vnc->passwd && vnc->passwd[0]) {
         qmp->callback = device_model_postconfig_vnc_passwd;
         libxl__qmp_param_add_string(gc, &args, "password", vnc->passwd);
         rc = libxl__ev_qmp_send(egc, qmp, "change-vnc-password", args);



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 10:54:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 10:54:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3421.9845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ75x-0006xD-Of; Wed, 07 Oct 2020 10:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3421.9845; Wed, 07 Oct 2020 10:54:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ75x-0006x6-KZ; Wed, 07 Oct 2020 10:54:53 +0000
Received: by outflank-mailman (input) for mailman id 3421;
 Wed, 07 Oct 2020 10:54:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mlZt=DO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQ75w-0006x1-74
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:54:52 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8685ff7d-9f4e-456b-ba81-7e36c2cd568e;
 Wed, 07 Oct 2020 10:54:51 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id n15so1629529wrq.2
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 03:54:51 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id n6sm2517339wrx.58.2020.10.07.03.54.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 07 Oct 2020 03:54:50 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mlZt=DO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kQ75w-0006x1-74
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:54:52 +0000
X-Inumbo-ID: 8685ff7d-9f4e-456b-ba81-7e36c2cd568e
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8685ff7d-9f4e-456b-ba81-7e36c2cd568e;
	Wed, 07 Oct 2020 10:54:51 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id n15so1629529wrq.2
        for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 03:54:51 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Yxgbt0GZlXvAgMopkmHRROYZi4BI1bCKPDz2tdOmbM4=;
        b=SFORKy691fvAoAzFi1oLyBWrrxp4uqjwkIczpto4ldke7xUOdwwoCNyTXJwghZPO6i
         4xryDssy/8lhOEbHwat0uzJLP2UpZTlZn1KmlsdLJ3kLKir/xmhs1VEAzKENEzbDTtzr
         QkfbZ4TRjOFPSl0w4SKBqjFEbx9gx8rKhKOD0DmxGJEudh17cK4NDbdoVqbdt5owG8fw
         b0ZVbR89tgjzl8wwCDwfq9m1ggQ6amJkH0PiX8HioLgLiK6skefvMG8CjYXdq0dIyKJv
         xq1BtSMxKGH0LoaHYemz/PKdri2uccBdAFcD8ybsZDIRpW7axdjMPVzibos4VKxl1rYC
         30sA==
X-Gm-Message-State: AOAM532V11wK9MHg6oFJsw8wKG8Ced0PIGNcKdzHjuxCFiH2GOdfP12d
	EbYJxb71xSQBrbpr5yP6pMQXNtOPC6Y=
X-Google-Smtp-Source: ABdhPJx+uIiOgjYwAn3ysFpkdp6C0tFluv9h1DnNu8GGr9QNsw+hcjTHi1aAmVfG75IpN2se0JRQ0Q==
X-Received: by 2002:a5d:6642:: with SMTP id f2mr1719301wrw.374.1602068090530;
        Wed, 07 Oct 2020 03:54:50 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id n6sm2517339wrx.58.2020.10.07.03.54.49
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 07 Oct 2020 03:54:50 -0700 (PDT)
Date: Wed, 7 Oct 2020 10:54:48 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/5] tools/libs/store: drop read-only functionality
Message-ID: <20201007105448.c7scd5hoellddfwd@liuwe-devbox-debian-v2>
References: <20201002154141.11677-1-jgross@suse.com>
 <20201002154141.11677-4-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201002154141.11677-4-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Fri, Oct 02, 2020 at 05:41:39PM +0200, Juergen Gross wrote:
> Today it is possible to open the connection in read-only mode via
> xs_daemon_open_readonly(). This works only with Xenstore running
> as a daemon in the same domain as the user. Additionally, it doesn't
> add any security, as accessing the socket used for that functionality
> requires the same privileges as the socket used for full Xenstore
> access.
> 
> So just drop the read-only semantics in all cases, leaving the
> interface in place only for compatibility reasons. This in turn
> requires just ignoring the XS_OPEN_READONLY flag.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  tools/libs/store/include/xenstore.h | 8 --------
>  tools/libs/store/xs.c               | 7 ++-----
>  2 files changed, 2 insertions(+), 13 deletions(-)
> 
> diff --git a/tools/libs/store/include/xenstore.h b/tools/libs/store/include/xenstore.h
> index cbc7206a0f..158e69ef83 100644
> --- a/tools/libs/store/include/xenstore.h
> +++ b/tools/libs/store/include/xenstore.h
> @@ -60,15 +60,12 @@ typedef uint32_t xs_transaction_t;
>  /* Open a connection to the xs daemon.
>   * Attempts to make a connection over the socket interface,
>   * and if it fails, then over the  xenbus interface.
> - * Mode 0 specifies read-write access, XS_OPEN_READONLY for
> - * read-only access.
>   *
>   * * Connections made with xs_open(0) (which might be shared page or
>   *   socket based) are only guaranteed to work in the parent after
>   *   fork.
>   * * xs_daemon_open*() and xs_domain_open() are deprecated synonyms
>   *   for xs_open(0).
> - * * XS_OPEN_READONLY has no bearing on any of this.
>   *
>   * Returns a handle or NULL.
>   */
> @@ -83,11 +80,6 @@ void xs_close(struct xs_handle *xsh /* NULL ok */);
>   */
>  struct xs_handle *xs_daemon_open(void);
>  struct xs_handle *xs_domain_open(void);
> -
> -/* Connect to the xs daemon (readonly for non-root clients).
> - * Returns a handle or NULL.
> - * Deprecated, please use xs_open(XS_OPEN_READONLY) instead
> - */
>  struct xs_handle *xs_daemon_open_readonly(void);
>  
>  /* Close the connection to the xs daemon.
> diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
> index 320734416f..4ac73ec317 100644
> --- a/tools/libs/store/xs.c
> +++ b/tools/libs/store/xs.c
> @@ -302,7 +302,7 @@ struct xs_handle *xs_daemon_open(void)
>  
>  struct xs_handle *xs_daemon_open_readonly(void)
>  {
> -	return xs_open(XS_OPEN_READONLY);
> +	return xs_open(0);
>  }

This changes the semantics of this function, doesn't it? The user
expects a read-only connection, but in fact a read-write connection is
returned.

Maybe we should return an error here?
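
E.g. something like this (just a sketch of the suggestion, not a
patch; ENOTSUP is an arbitrary choice of error code):

```c
#include <errno.h>
#include <stddef.h>

struct xs_handle;  /* opaque, as in xenstore.h */

/*
 * Sketch only: fail the call outright instead of silently handing
 * back a read-write connection to a caller that asked for read-only.
 */
struct xs_handle *xs_daemon_open_readonly(void)
{
    errno = ENOTSUP;
    return NULL;
}
```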

Wei.


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 10:57:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 10:57:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3423.9857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ78m-0007GQ-7R; Wed, 07 Oct 2020 10:57:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3423.9857; Wed, 07 Oct 2020 10:57:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ78m-0007GJ-3G; Wed, 07 Oct 2020 10:57:48 +0000
Received: by outflank-mailman (input) for mailman id 3423;
 Wed, 07 Oct 2020 10:57:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQ78k-0007GE-HL
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:57:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fadb1021-89d1-470a-b92e-8b2bbdca142e;
 Wed, 07 Oct 2020 10:57:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6509AAF43;
 Wed,  7 Oct 2020 10:57:44 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kQ78k-0007GE-HL
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 10:57:46 +0000
X-Inumbo-ID: fadb1021-89d1-470a-b92e-8b2bbdca142e
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id fadb1021-89d1-470a-b92e-8b2bbdca142e;
	Wed, 07 Oct 2020 10:57:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602068264;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kdP7UgV8LVrFbrmmFdEyVk7kZdpChguBzkpbfIxcJbM=;
	b=TofkZPwPikZ0nwQW5FP597y2+ga6Ea8Osi0KqjloMUuSD570I9GhwSXbIbrwmMvKuL4kc/
	B+V3hQ8CdQZ48MVUBw1GGWBcacfbO7t/R4UVsUn6dHMsiGsRddx4qhXgWXpPAiyTo3TCE+
	f5PVqJHqWrV5BwAGJ2yvSt+sKTptxIs=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 6509AAF43;
	Wed,  7 Oct 2020 10:57:44 +0000 (UTC)
Subject: Re: [PATCH 3/5] tools/libs/store: drop read-only functionality
To: Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
References: <20201002154141.11677-1-jgross@suse.com>
 <20201002154141.11677-4-jgross@suse.com>
 <20201007105448.c7scd5hoellddfwd@liuwe-devbox-debian-v2>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d03ef7db-8752-ac00-99f1-6c40f62e1162@suse.com>
Date: Wed, 7 Oct 2020 12:57:43 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201007105448.c7scd5hoellddfwd@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.10.20 12:54, Wei Liu wrote:
> On Fri, Oct 02, 2020 at 05:41:39PM +0200, Juergen Gross wrote:
>> Today it is possible to open the connection in read-only mode via
>> xs_daemon_open_readonly(). This works only with Xenstore running
>> as a daemon in the same domain as the user. Additionally, it doesn't
>> add any security, as accessing the socket used for that functionality
>> requires the same privileges as the socket used for full Xenstore
>> access.
>>
>> So just drop the read-only semantics in all cases, leaving the
>> interface in place only for compatibility reasons. This in turn
>> requires just ignoring the XS_OPEN_READONLY flag.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   tools/libs/store/include/xenstore.h | 8 --------
>>   tools/libs/store/xs.c               | 7 ++-----
>>   2 files changed, 2 insertions(+), 13 deletions(-)
>>
>> diff --git a/tools/libs/store/include/xenstore.h b/tools/libs/store/include/xenstore.h
>> index cbc7206a0f..158e69ef83 100644
>> --- a/tools/libs/store/include/xenstore.h
>> +++ b/tools/libs/store/include/xenstore.h
>> @@ -60,15 +60,12 @@ typedef uint32_t xs_transaction_t;
>>   /* Open a connection to the xs daemon.
>>    * Attempts to make a connection over the socket interface,
>>    * and if it fails, then over the  xenbus interface.
>> - * Mode 0 specifies read-write access, XS_OPEN_READONLY for
>> - * read-only access.
>>    *
>>    * * Connections made with xs_open(0) (which might be shared page or
>>    *   socket based) are only guaranteed to work in the parent after
>>    *   fork.
>>    * * xs_daemon_open*() and xs_domain_open() are deprecated synonyms
>>    *   for xs_open(0).
>> - * * XS_OPEN_READONLY has no bearing on any of this.
>>    *
>>    * Returns a handle or NULL.
>>    */
>> @@ -83,11 +80,6 @@ void xs_close(struct xs_handle *xsh /* NULL ok */);
>>    */
>>   struct xs_handle *xs_daemon_open(void);
>>   struct xs_handle *xs_domain_open(void);
>> -
>> -/* Connect to the xs daemon (readonly for non-root clients).
>> - * Returns a handle or NULL.
>> - * Deprecated, please use xs_open(XS_OPEN_READONLY) instead
>> - */
>>   struct xs_handle *xs_daemon_open_readonly(void);
>>   
>>   /* Close the connection to the xs daemon.
>> diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
>> index 320734416f..4ac73ec317 100644
>> --- a/tools/libs/store/xs.c
>> +++ b/tools/libs/store/xs.c
>> @@ -302,7 +302,7 @@ struct xs_handle *xs_daemon_open(void)
>>   
>>   struct xs_handle *xs_daemon_open_readonly(void)
>>   {
>> -	return xs_open(XS_OPEN_READONLY);
>> +	return xs_open(0);
>>   }
> 
> This changes the semantics of this function, doesn't it? The user
> expects a read-only connection, but in fact a read-write connection is
> returned.
> 
> Maybe we should return an error here?

No, as the behavior is this way already if any of the following is true:

- a guest is opening the connection
- Xenstore is running in a stubdom
- oxenstored is being used


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 11:04:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 11:04:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3425.9869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7F4-0008BC-Uj; Wed, 07 Oct 2020 11:04:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3425.9869; Wed, 07 Oct 2020 11:04:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7F4-0008B5-RO; Wed, 07 Oct 2020 11:04:18 +0000
Received: by outflank-mailman (input) for mailman id 3425;
 Wed, 07 Oct 2020 11:04:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ7F2-0008B0-OX
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 11:04:17 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.71]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68c9cada-2760-402c-9ff2-4dec5642f0a3;
 Wed, 07 Oct 2020 11:04:14 +0000 (UTC)
Received: from AM6P193CA0077.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:88::18)
 by HE1PR0802MB2252.eurprd08.prod.outlook.com (2603:10a6:3:c6::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Wed, 7 Oct
 2020 11:04:12 +0000
Received: from AM5EUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:88:cafe::d7) by AM6P193CA0077.outlook.office365.com
 (2603:10a6:209:88::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Wed, 7 Oct 2020 11:04:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT022.mail.protection.outlook.com (10.152.16.79) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Wed, 7 Oct 2020 11:04:12 +0000
Received: ("Tessian outbound 7161e0c2a082:v64");
 Wed, 07 Oct 2020 11:04:12 +0000
Received: from cf65fd090020.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DE2B7E3B-39FF-4523-818B-0351DD80B87D.1; 
 Wed, 07 Oct 2020 11:03:38 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cf65fd090020.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 07 Oct 2020 11:03:38 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5324.eurprd08.prod.outlook.com (2603:10a6:10:11e::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.23; Wed, 7 Oct
 2020 11:03:37 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Wed, 7 Oct 2020
 11:03:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kQ7F2-0008B0-OX
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 11:04:17 +0000
X-Inumbo-ID: 68c9cada-2760-402c-9ff2-4dec5642f0a3
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown [40.107.20.71])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 68c9cada-2760-402c-9ff2-4dec5642f0a3;
	Wed, 07 Oct 2020 11:04:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sT54h2fksZ7J5I+P+9j4aKZF2pB581qOXG4XdEteP84=;
 b=MpYEyeylTt5S6+EwXPvrEzRzumihtR/d2TPdnA5k8Pscuq9NX97ml9V60is6Ey3+IhnyavL4tUkxCEntfeXCMb7Mg0dvzw4wGtO/k0rTCIUf32eBkfpnKrTzNrMtwmoL/VSztfRC5PYMwNczqOyQe4wAR7aqQ6LdbL5fq0bjfAM=
Received: from AM6P193CA0077.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:88::18)
 by HE1PR0802MB2252.eurprd08.prod.outlook.com (2603:10a6:3:c6::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.32; Wed, 7 Oct
 2020 11:04:12 +0000
Received: from AM5EUR03FT022.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:88:cafe::d7) by AM6P193CA0077.outlook.office365.com
 (2603:10a6:209:88::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Wed, 7 Oct 2020 11:04:12 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT022.mail.protection.outlook.com (10.152.16.79) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Wed, 7 Oct 2020 11:04:12 +0000
Received: ("Tessian outbound 7161e0c2a082:v64"); Wed, 07 Oct 2020 11:04:12 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: dc8eff64858f2eb4
X-CR-MTA-TID: 64aa7808
Received: from cf65fd090020.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id DE2B7E3B-39FF-4523-818B-0351DD80B87D.1;
	Wed, 07 Oct 2020 11:03:38 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cf65fd090020.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 07 Oct 2020 11:03:38 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e5gw9KtdAtZg2dq/UfvjGswJQKyWiSm40/Rr86QlbFMoi7/eEvy49X2QH8fMMQKLXilYCGreY5mrBv6ACa9nodBsK6hVte6FlgqWda+Ou7CxCEL2omS0FzLQSc9Xl62Bi33QcSZwD4Pg856rSpkwEAMeqoxYuEZf470Jl2auO+SyhfDX0uTr2pA8L6KomdcwsJC+W9ta2G/LKiUOZX1iFGhfz1Y9Cf2n8BLAq7KU/rWLnfqoE03V55z3/V2OtgKN7Zxw83b93u1FXQS62hkgiROzfgYShnOnfTAhCS4cB4UGfLL1YWTjp/V07qcIf0e2UxFGzmC5uO60OhlIV2F+wQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sT54h2fksZ7J5I+P+9j4aKZF2pB581qOXG4XdEteP84=;
 b=I9rDNbITyqE4FWBLVEBGw/NoCv/CsaQJ5DS/CmkuUqycKR3z0SfUFXIUkIyYQrl9wb/YWbUWqNa3M2nbsZIGVBwWggJVC9k6p4qoRMB6SsjckWY6WYbK9OFUg0U4x2jSWd/r6tvH/eqsTHY93KvwdaUVAR//TklnHxw05mxNuytXlAcZErD0kJsfoBz/xNXtzZjjcjcX39cufZ67+YP2Srkoh6U+Pz13/y66A8cihwYZcG3RaF961cnoom+26wZ+hdEVVKIQL7oRNT5Z9K9XoqHDNnZSjo0uspmyGQMumAZmRHFw1oktDIXnqNbjbi3W8Gf5ShPXAUEcMfEZHVLItA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sT54h2fksZ7J5I+P+9j4aKZF2pB581qOXG4XdEteP84=;
 b=MpYEyeylTt5S6+EwXPvrEzRzumihtR/d2TPdnA5k8Pscuq9NX97ml9V60is6Ey3+IhnyavL4tUkxCEntfeXCMb7Mg0dvzw4wGtO/k0rTCIUf32eBkfpnKrTzNrMtwmoL/VSztfRC5PYMwNczqOyQe4wAR7aqQ6LdbL5fq0bjfAM=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5324.eurprd08.prod.outlook.com (2603:10a6:10:11e::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.23; Wed, 7 Oct
 2020 11:03:37 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Wed, 7 Oct 2020
 11:03:37 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Doug Goldstein
	<cardoe@cardoe.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Topic: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Index: AQHWmKjTj7VmlPYcLkSjqrrhNFG5S6mEOTwAgAfIdAA=
Date: Wed, 7 Oct 2020 11:03:37 +0000
Message-ID: <78C4CCD0-403B-4670-A2C2-C1BB2AD498FD@arm.com>
References:
 <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
 <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
In-Reply-To: <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 27060cbd-5b46-4ae1-3d5c-08d86ab0b91e
x-ms-traffictypediagnostic: DB8PR08MB5324:|HE1PR0802MB2252:
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB2252F6550799F65CE8C502869D0A0@HE1PR0802MB2252.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 K4BebxKZ3lCEx7K8vtmIOgeuaRNWx/FLtp10vLotJq1jzid2IXWXqFaGV3CU3X+kfA2nqkLtZ3WjgB64zlkvQcKtOKuXsHKT+/YpZDCyzC+GJ2g2ul4CNNzGtYuQ4eZoUQ1a0/StTEoc/+pfJHpNu/28AyDs/gDpKbkEEGZ74OacrMjJkuzKwqAxjUQ3zSHjeqOHkEY275CWEhsHgtHTQJ/TpaLBwmyLO2Aay0MOM6/5QW6sxe49aO/Itc8AudrQLvgm7eH7oAVPIE5rRU+Rm3Evs7jddMhq8yqRepY2WO+JcOp+dC3vzs/Fwq6vMtRIrXUOWHM/fCeRajUwH1e8kA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39850400004)(376002)(366004)(136003)(346002)(396003)(86362001)(2616005)(33656002)(478600001)(71200400001)(4744005)(66556008)(66476007)(66946007)(64756008)(76116006)(91956017)(66446008)(5660300002)(2906002)(8676002)(8936002)(7416002)(36756003)(6512007)(4326008)(6486002)(316002)(54906003)(6916009)(6506007)(53546011)(186003)(26005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 E+FH79Mf7HaPQBUg3BOPSZkkmiMK/2mdtFnOaSNq32vfz7nVNSUU7bVzLHge2UlY2CwKmCB24afZ94wapATb25ONtpzQQAY/6fnDiYqEoQ2DHfvT4FbGMs90y7kvwdATjAx3hfF/LFB8KGSRV9/vtMDotuNJrPE6mLoYy/b+lgulIoHv16YUoqMbjbQ6rLay50MGzMO0SYXU8Fj/qvCse2QjgayNylpRWj383rVC9C0if26sXnkycvnZoGLbfZL9lNipT2HIiDOnRKvIml9MJHZoIoIXKy7GvIKkKucW1ovDdFnEj7DGqJIAMVU3E760VOtO6GUzq2JhnStgLw1+M1arbqwosQfHiV8+vfAF/6ybzkobk74rzFjkEQ+AjkACsumbNAibw6gPamNICQYySNZrgBgIk6VxCxMKBppq/uMa+qYLBGe19DtwiCNUiuTTFIHRKWerVnH8KfMDejvQ7XXpMem5Q49atvPm8EQQUVdNCAX/habfpE580SZ+QKL63Webx8VG49buJYER5qGLE8X9dUJyioKRdxMbNrsUzqV+fdqH+uQMt1CTaukWURW1Dh3FxSVQZ9t/5uqylizeC0D5wJdsyXmV28K77wGQbAlVQ+mnyZdnQ1iT15GotkfgLLsXWdzP8HflRUP8JWI0ig==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <F63C67BAA1B2C14180990FE52197FDBA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5324
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6d7d6ee7-13dd-4b5f-5d5d-08d86ab0a401
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	lAQRDqBCke+FGF7KigxwfeBDMzq+hIcaeBicXLz/R6ryg/HuSqJrmCPJOcNt28Q2cJiibh9raTQVqsRQuo97hCWb5knEPk/v81QanN9XFmPffAKi+0kuXj5ngzUygSAfBrVRIeU0g5d11UnCehyomkJg0Z0BtZXgfiGxhCWqO3uO0W3d4y0pYCBHeiLZJCjSghdNh9enpoDe0aYq7XqFnnNQGZx3s9xpUPic4qCz1UBD4HJbIRF/m5ihyN1FrX3SRr2PwVXa8gpifMkEw0uA9Hxyqz+3WxmAZO+EdMNlyUSxEBNMSKkeJ7EW4ANfoNzQFRbgEZBoKeIyvXnmhfTlYiikqdjWIZKBMcEjDahlJvvqjDjEfizaX8nq8HB0n6vpOTAmJcBRBTW6ya++v+K3ag==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(39850400004)(136003)(46966005)(70206006)(82310400003)(6512007)(81166007)(2616005)(47076004)(5660300002)(356005)(4744005)(70586007)(82740400003)(26005)(6506007)(478600001)(336012)(4326008)(53546011)(86362001)(54906003)(2906002)(316002)(36756003)(6486002)(8676002)(36906005)(8936002)(33656002)(6862004)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Oct 2020 11:04:12.4911
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 27060cbd-5b46-4ae1-3d5c-08d86ab0b91e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT022.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2252



> On 2 Oct 2020, at 13:12, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 02.10.2020 12:42, Bertrand Marquis wrote:
>> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
>>
>> This removes the dependency on the xen subdirectory, preventing use of a
>> wrong configuration file when the xen subdirectory is duplicated for
>> compilation tests.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Acked-by: Jan Beulich <jbeulich@suse.com>
>
> (but more for the slight tidying than the purpose you name)

Ping: Could this be pushed?

Thanks
Bertrand
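The change being acked can be sketched with a hypothetical Makefile fragment (the path and variable uses below are illustrative, not an actual Xen Makefile): inside the hypervisor build, BASEDIR names the xen/ directory actually being built, whereas $(XEN_ROOT)/xen always resolves relative to the original tree root, so a duplicated xen/ copy used for test builds would pick up the wrong configuration.

```make
# Hypothetical fragment, not taken from the Xen tree.
#
# Before: resolved through the tree root, escaping a duplicated xen/ copy:
#   include $(XEN_ROOT)/xen/Config.mk
#
# After: BASEDIR is the xen/ directory being built, so a copied tree
# stays self-contained and reads its own configuration:
include $(BASEDIR)/Config.mk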



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 11:06:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 11:06:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3436.9881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7Gt-0008Kn-Gi; Wed, 07 Oct 2020 11:06:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3436.9881; Wed, 07 Oct 2020 11:06:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7Gt-0008Kg-Bv; Wed, 07 Oct 2020 11:06:11 +0000
Received: by outflank-mailman (input) for mailman id 3436;
 Wed, 07 Oct 2020 11:06:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ7Gs-0008Ka-MY
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 11:06:10 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 48d41b3b-4842-4fa6-9cb6-5ac6d109f186;
 Wed, 07 Oct 2020 11:06:09 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8EE64147A;
 Wed,  7 Oct 2020 04:06:09 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DE5453F71F;
 Wed,  7 Oct 2020 04:06:08 -0700 (PDT)
X-Inumbo-ID: 48d41b3b-4842-4fa6-9cb6-5ac6d109f186
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: print update firmware only once
Date: Wed,  7 Oct 2020 12:05:44 +0100
Message-Id: <09d04b34e6b3b77ac206a42657b1b4116e7e11f3.1602068661.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Fix enable_smccc_arch_workaround_1 to print the warning asking the user
to update the firmware only once.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/cpuerrata.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 6c09017515..0c63dfa779 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -187,7 +187,7 @@ warn:
         ASSERT(system_state < SYS_STATE_active);
         warning_add("No support for ARM_SMCCC_ARCH_WORKAROUND_1.\n"
                     "Please update your firmware.\n");
-        warned = false;
+        warned = true;
     }
 
     return 0;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 11:10:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 11:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3440.9893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7LL-0000uY-2U; Wed, 07 Oct 2020 11:10:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3440.9893; Wed, 07 Oct 2020 11:10:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7LK-0000uR-Vd; Wed, 07 Oct 2020 11:10:46 +0000
Received: by outflank-mailman (input) for mailman id 3440;
 Wed, 07 Oct 2020 11:10:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mlZt=DO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQ7LK-0000uM-2e
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 11:10:46 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9eb58c09-d92d-41a7-b15c-b11c3ff66c28;
 Wed, 07 Oct 2020 11:10:45 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id g12so1663691wrp.10
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 04:10:45 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o14sm2289271wmc.36.2020.10.07.04.10.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 07 Oct 2020 04:10:43 -0700 (PDT)
X-Inumbo-ID: 9eb58c09-d92d-41a7-b15c-b11c3ff66c28
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=ynFyq2KGNcjPWVqbajZNB1n/z5Dl+CaD2IPMkheUkO0=;
        b=C9sboVPjmqDHb6OrcYiDPf6mlvpjLv49aYLRZy92ZLWNatURVxJhRXCRttWvmlzPrI
         aclsRwss/7LTzHS6ZgdmnnFyKrewuhAujs+NqTt6ITEzTgvQqHvZvqlrGK6huituhwpK
         MOufD03xSq9g8LhYkDKXl7n/ioNNNICyhyLXYbOFkvX9AG38FTCl40dT6I8PBd0p/kMW
         Tlm6yUU8/ldIePM8xzbx3pqjiQY0kA/XZf3iwek533Ph2Oini+E3aST/AUCx0v/fwS5C
         vTAV8iY7tWbiCl2rPiV8K3PGehHme4mTsQrpsIdQIUEGFWKwBgMWz8KYpbSB/cEAHNpw
         C7TA==
X-Gm-Message-State: AOAM533tE7Fq15FEjrySFUWaI1TGGMJe/r744iyYr/KTQuoXh3yf4wcn
	Fc9aHVOoaqYJOifj9uJ+XJ0=
X-Google-Smtp-Source: ABdhPJzIgQ89z8kXlTbjXGCncMSCpv5pPfyBz9b7VwUNzw+LPWdtAg0Nt0u699JmditVbQiHjmuHQQ==
X-Received: by 2002:a5d:568a:: with SMTP id f10mr2924637wrv.30.1602069044317;
        Wed, 07 Oct 2020 04:10:44 -0700 (PDT)
Date: Wed, 7 Oct 2020 11:10:42 +0000
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	"open list:X86" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
Message-ID: <20201007111042.2vbyppns6yef55nf@liuwe-devbox-debian-v2>
References: <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
 <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
 <78C4CCD0-403B-4670-A2C2-C1BB2AD498FD@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <78C4CCD0-403B-4670-A2C2-C1BB2AD498FD@arm.com>
User-Agent: NeoMutt/20180716

On Wed, Oct 07, 2020 at 11:03:37AM +0000, Bertrand Marquis wrote:
> 
> 
> > On 2 Oct 2020, at 13:12, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 02.10.2020 12:42, Bertrand Marquis wrote:
> >> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
> >> 
> >> This removes the dependency on the xen subdirectory, preventing use of a
> >> wrong configuration file when the xen subdirectory is duplicated for
> >> compilation tests.
> >> 
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> > 
> > Acked-by: Jan Beulich <jbeulich@suse.com>
> > 
> > (but more for the slight tidying than the purpose you name)
> 
> Ping: Could this be pushed?
> 

Done.


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 11:16:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 11:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3444.9909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7R6-0001I7-QM; Wed, 07 Oct 2020 11:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3444.9909; Wed, 07 Oct 2020 11:16:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7R6-0001I0-Mh; Wed, 07 Oct 2020 11:16:44 +0000
Received: by outflank-mailman (input) for mailman id 3444;
 Wed, 07 Oct 2020 11:16:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ7R4-0001Hr-Ra
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 11:16:42 +0000
Received: from FRA01-MR2-obe.outbound.protection.outlook.com (unknown
 [40.107.9.81]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7dc95637-f59f-40c9-b920-ab7859968748;
 Wed, 07 Oct 2020 11:16:41 +0000 (UTC)
Received: from MR2P264CA0051.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:31::15)
 by PR2PR08MB4842.eurprd08.prod.outlook.com (2603:10a6:101:22::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.39; Wed, 7 Oct
 2020 11:16:38 +0000
Received: from VE1EUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:31:cafe::ba) by MR2P264CA0051.outlook.office365.com
 (2603:10a6:500:31::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Wed, 7 Oct 2020 11:16:38 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT026.mail.protection.outlook.com (10.152.18.148) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Wed, 7 Oct 2020 11:16:38 +0000
Received: ("Tessian outbound 195a290eb161:v64");
 Wed, 07 Oct 2020 11:16:37 +0000
Received: from 796f53b6a22b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FFB60F4C-7344-4548-A168-453E7C391813.1; 
 Wed, 07 Oct 2020 11:16:00 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 796f53b6a22b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 07 Oct 2020 11:16:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6315.eurprd08.prod.outlook.com (2603:10a6:10:209::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.22; Wed, 7 Oct
 2020 11:15:59 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Wed, 7 Oct 2020
 11:15:59 +0000
X-Inumbo-ID: 7dc95637-f59f-40c9-b920-ab7859968748
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G/QhCZxTrkW3hYNCBauAC1+dF4KNigcGhgl9RNRhYXo=;
 b=YRlJGOiqKHx16JzLMVkmPLwm7npmJacC4x9p1BjWJ8thQVyoFwo2N7qAPruwsv5zC/fFsPBJWYNWLejMUc4zPAa0rizlisi8U9AHiO8MzzsIMj5pEoriJ4oBSuAPO792I8mL8gWlE/4lXpfW3vShamv9y6nyv4wIVb8qJ5L/5uQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a6b6457fa893111d
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=feOJJI+hdM+E+HtJ2duzACofUcML44k0MMsCtERwsVdBy4Cc1+Sj1ew+izJPYVa4u5iX4tSjwRNe9VnuyWoIBHU+JUuM1ZGFfq3VImVqnfnt3jYxfxQ0crIC4SvzoOxtqqQ7+ufHvdftB2LVhq86xB98m7DbYFNjQiJgS6UQOapv7q3NsQSSBJ3TjdR4HLzsKnffBW7XR5DBhsR2I38PcPE+VdC8O4koBz2svFDcjHolh9Pe126+fqmveS5h3pdN13QTOKHXyGj2olOM2n2VCVlJ/C9DXujtBzXmWDiG/2XDp2HRoivSgR+59j+GTWCL1c0VA7PmVSAzZRu8SOM7Zw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=G/QhCZxTrkW3hYNCBauAC1+dF4KNigcGhgl9RNRhYXo=;
 b=H+HnQZXxTF9M8Imge2x4c7fQWRcNSze7MjAp7tPqy91Lsf5/VZ+ospTC8+SgNw+rYyZR49KrhTxwwNdLwDZ9mkV5C08Jgfv97sfQ2SB3+EDG0QQ01SWnHSIrMyt9uwtbMbnjCjG5c3QgBj6y+hU2CIYeO7MEMg3WJc0W8p3mphPb0XITFSlWyI90WvlyYxEgpwxq2BLca5HMX/u8LvTulUV2UAcsKI7545jcoT2piagTXrC1jtXwVGZdsiuK1AZaWmcmySVA8Zi8lPYCnTSmyL613tD94LMWeIBbHWC/PXDp6UenjCuuThlH9rndRZtmGNdOsgEDUwLlUxBZ+/Z86w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Wei Liu <wl@xen.org>
CC: Jan Beulich <jbeulich@suse.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Doug Goldstein
	<cardoe@cardoe.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Topic: [PATCH] build: always use BASEDIR for xen sub-directory
Thread-Index: AQHWmKjTj7VmlPYcLkSjqrrhNFG5S6mEOTwAgAfIdACAAAH7AIAAAXkA
Date: Wed, 7 Oct 2020 11:15:59 +0000
Message-ID: <0379442A-7104-425C-9897-CBA11E7CCE0F@arm.com>
References:
 <556f6327acea2d0343c93da28f1fc17591afd402.1601564274.git.bertrand.marquis@arm.com>
 <706afc44-a414-33ff-da94-b92f7a96f1fc@suse.com>
 <78C4CCD0-403B-4670-A2C2-C1BB2AD498FD@arm.com>
 <20201007111042.2vbyppns6yef55nf@liuwe-devbox-debian-v2>
In-Reply-To: <20201007111042.2vbyppns6yef55nf@liuwe-devbox-debian-v2>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 58c63a26-51fe-4091-f6e6-08d86ab27590
x-ms-traffictypediagnostic: DBBPR08MB6315:|PR2PR08MB4842:
X-Microsoft-Antispam-PRVS:
	<PR2PR08MB48422C0BEAB5DB5E4F19DE279D0A0@PR2PR08MB4842.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4502;OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 t9yWq+K579JWueIZsnbiH6pzLvHsvLM43WcfvFUnR1AjTgoj0AN5L+oc8rKsj+JndnyEh+Qwl6OpLxZt0b8KOsIZ52iguV6DYZtv2OLQb8kjg/NEr5c1s+mR2zfRoxmB/0rDGyd3SRLlS4EaT18ykpbxU4qb2yyAdk0crKvcVjwowkUeUeR7QRkpw9ARWSKKeBdn7oMbmNSQeE4J88y+0jmNHvdk+bAd+pVQS+HHzj3AdmM9j72beEQ3qGRkFLo0sBJYlztaaJyTz4nCUHPOuLAoGDaXU9imKGKc8nPpFfwICjZ4FQ/w87yFRlcYVdX13xhA6ms/mdZ22A4Gt7LJEA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(39860400002)(366004)(346002)(136003)(396003)(54906003)(26005)(6506007)(316002)(186003)(53546011)(2906002)(71200400001)(2616005)(36756003)(8676002)(6512007)(8936002)(7416002)(86362001)(66446008)(64756008)(6486002)(478600001)(4744005)(5660300002)(4326008)(66476007)(6916009)(33656002)(91956017)(66556008)(66946007)(76116006);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 JMUQ8sBfenpXtDbmBLTw0AA49MauoDZXwaQ5c+Ub+PnohCLbJeI8lg/QjXpS7f1XFnQFoflcd/UAAFp0S1LiPjVRqhw+IYO8x8D442K/NCPFgWnwshG4wCna5Nh6wy480QDv+Xo8iLCk+fDYFr9R9WPFJnjuspw7iGzFTz4sqTzM00co+0U0A6yRqjbDI28JC/jSYJUkVOdnmeq5ylLN4GgodxhPJvwJ2Bb3CNWCkkAaJxdyfuPPfTP3ExzyfO6PIS1kxt/pgZz8LGS+t27bwjfzEL4G5Iab26UNLwACM7hf3i5JAIRTE3JwXrjKLeP2ZysptYgtOQpUX1h9qt+5OQTUHckcnKlG1hY/6Y+69yR2V8SRkCkbx+EAr1m0uApm5NLxfHcdgWSzKI9ka9NLQW37lRMyUlDbU1XEIDu3EdRqTzxYP6JBIq9F350Foo/c6A63BquBSlE0kZBwu1QjMo5Q/VphlAqgj5/CThRUqQkuGFdEozdtrzhi0rSLvb+eGAkHNy/BMsxpsXElGgp5KvNhWKUrdbCHdXzxH7LDc483YxHighBRn/815tHTzJ76fsjyTePQHRHFXiVcFIv0fMi227BU3eQKil+uIdOerM5Vs0GVdCgS1CKcyXxtqxoROUoN82nWg9fYGYUgGL4eXQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <5BC9E4FC0AE1FF43ABAE4933B1A1C87F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6315
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4ad1d6ad-92c1-48f1-87cb-08d86ab25e66
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	ndBQZFoEa12+16esnhxPEmFN70aZ9XCpITg+/0JlFZIecfdtTkJqdAz4XJxi60vzcXfh5PuFHbw8rIdO0dQmbPvv73hc8Cstpkgc7pE3t8zAUTj6bon7uK4HucLo4mwmaTGr40PeN6Dmj7v9MtGBliiRrCSP+Jv8n9Ap0JcFGVAtOwAAKE2HrzzZ82AAqAK0ha0jfySc2Ay79WQEHQ64EkIgZ62PqqfQipfU60FKldDs3j5OxiTnBis3ucu9Bhxn7rHujTzMaJwsEH1zg3AgSbVst8lKTr8g3Q0THPCqf5Dyj9EN1jxq1I5+x3OBpgm0JiEHMMsF2I5XtKn4K00VCfO2a6phKPsMbCgRftYm4Pvtw9r1j+CrzXZCaAbry/te4h49bxsfHw0q37gwIgQd3Q==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(39850400004)(396003)(46966005)(53546011)(6486002)(186003)(70206006)(336012)(36906005)(70586007)(316002)(2906002)(6512007)(6862004)(47076004)(33656002)(356005)(4326008)(82740400003)(8676002)(86362001)(8936002)(82310400003)(6506007)(81166007)(36756003)(54906003)(26005)(2616005)(478600001)(5660300002)(4744005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Oct 2020 11:16:38.0973
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 58c63a26-51fe-4091-f6e6-08d86ab27590
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR2PR08MB4842



> On 7 Oct 2020, at 12:10, Wei Liu <wl@xen.org> wrote:
>
> On Wed, Oct 07, 2020 at 11:03:37AM +0000, Bertrand Marquis wrote:
>>
>>
>>> On 2 Oct 2020, at 13:12, Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 02.10.2020 12:42, Bertrand Marquis wrote:
>>>> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
>>>>
>>>> This removes the dependency on the xen subdirectory, preventing use of a
>>>> wrong configuration file when the xen subdirectory is duplicated for
>>>> compilation tests.
>>>>
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> (but more for the slight tidying than the purpose you name)
>>
>> Ping: Could this be pushed?
>>
>
> Done.

Thanks a lot

Bertrand




From xen-devel-bounces@lists.xenproject.org Wed Oct 07 11:50:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 11:50:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3474.9921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7y4-00056e-GH; Wed, 07 Oct 2020 11:50:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3474.9921; Wed, 07 Oct 2020 11:50:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ7y4-00056X-Cq; Wed, 07 Oct 2020 11:50:48 +0000
Received: by outflank-mailman (input) for mailman id 3474;
 Wed, 07 Oct 2020 11:50:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01DM=DO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kQ7y2-00056S-Mo
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 11:50:46 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4c9a103-80d8-4c25-bf56-9188210e164a;
 Wed, 07 Oct 2020 11:50:44 +0000 (UTC)
X-Inumbo-ID: e4c9a103-80d8-4c25-bf56-9188210e164a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602071445;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=18DJCBD+ATtqoB+qJkoUQ1HbicG3/05YHWIKTsKOYp8=;
  b=KKOh6N+bEtYd87oXUd6zmatZSiVrvapxBknq8J+i9JIGMD2EgS/cYnsQ
   pKuSoDZk0UXZE52mb1ZJs8gjWFMs9gDHY/0sCzJ+myQky1G/3kyZoF7rJ
   ahelKnPQ7AwnrxfSF7QDYTy/ipAr4x1PqPByP8DWtKA/UxoMDoxSK/Cr4
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: gOdAhJx1i83mMobEWzQhwdrkoS6B0Jpc837YhJDDyUNv9KGhyYtWS6dhjv+LfhcMo8inbar1ny
 3YpC4FO/vYi/eU0+tCA37sawA7dQYZRju3U8d3yG2VDMCdKOCTFprcNi9RAciN3nSyfCLP2+4b
 7T1wzhFbo9xjAhNNLV9QraaH4c47CSNBSpgMzi6g35MkOgaZkSU/IIzCKkxma5eSgTfUt3SL1N
 uzwfShoE6uu4fxcQmOdIcraoGGg+u6mVA2cG3+vy/s3FCZ9Y0EABfOGEphNmNLc5C/83hOEMYL
 760=
X-SBRS: None
X-MesageID: 28450333
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,346,1596513600"; 
   d="scan'208";a="28450333"
Subject: Re: [PATCH 3/5] tools/libs/store: drop read-only functionality
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Wei Liu <wl@xen.org>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>
References: <20201002154141.11677-1-jgross@suse.com>
 <20201002154141.11677-4-jgross@suse.com>
 <20201007105448.c7scd5hoellddfwd@liuwe-devbox-debian-v2>
 <d03ef7db-8752-ac00-99f1-6c40f62e1162@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f4b6a87c-ac65-cb3e-d4b2-4504340b81e3@citrix.com>
Date: Wed, 7 Oct 2020 12:50:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d03ef7db-8752-ac00-99f1-6c40f62e1162@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 07/10/2020 11:57, Jürgen Groß wrote:
> On 07.10.20 12:54, Wei Liu wrote:
>> On Fri, Oct 02, 2020 at 05:41:39PM +0200, Juergen Gross wrote:
>>> Today it is possible to open the connection in read-only mode via
>>> xs_daemon_open_readonly(). This works only with Xenstore running
>>> as a daemon in the same domain as the user. Additionally, it doesn't
>>> add any security, as accessing the socket used for that functionality
>>> requires the same privileges as the socket used for full Xenstore
>>> access.
>>>
>>> So just drop the read-only semantics in all cases, keeping the
>>> interface only for compatibility reasons. This in turn means the
>>> XS_OPEN_READONLY flag is simply ignored.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   tools/libs/store/include/xenstore.h | 8 --------
>>>   tools/libs/store/xs.c               | 7 ++-----
>>>   2 files changed, 2 insertions(+), 13 deletions(-)
>>>
>>> diff --git a/tools/libs/store/include/xenstore.h
>>> b/tools/libs/store/include/xenstore.h
>>> index cbc7206a0f..158e69ef83 100644
>>> --- a/tools/libs/store/include/xenstore.h
>>> +++ b/tools/libs/store/include/xenstore.h
>>> @@ -60,15 +60,12 @@ typedef uint32_t xs_transaction_t;
>>>   /* Open a connection to the xs daemon.
>>>    * Attempts to make a connection over the socket interface,
>>>    * and if it fails, then over the  xenbus interface.
>>> - * Mode 0 specifies read-write access, XS_OPEN_READONLY for
>>> - * read-only access.
>>>    *
>>>    * * Connections made with xs_open(0) (which might be shared page or
>>>    *   socket based) are only guaranteed to work in the parent after
>>>    *   fork.
>>>    * * xs_daemon_open*() and xs_domain_open() are deprecated synonyms
>>>    *   for xs_open(0).
>>> - * * XS_OPEN_READONLY has no bearing on any of this.
>>>    *
>>>    * Returns a handle or NULL.
>>>    */
>>> @@ -83,11 +80,6 @@ void xs_close(struct xs_handle *xsh /* NULL ok */);
>>>    */
>>>   struct xs_handle *xs_daemon_open(void);
>>>   struct xs_handle *xs_domain_open(void);
>>> -
>>> -/* Connect to the xs daemon (readonly for non-root clients).
>>> - * Returns a handle or NULL.
>>> - * Deprecated, please use xs_open(XS_OPEN_READONLY) instead
>>> - */
>>>   struct xs_handle *xs_daemon_open_readonly(void);
>>>     /* Close the connection to the xs daemon.
>>> diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
>>> index 320734416f..4ac73ec317 100644
>>> --- a/tools/libs/store/xs.c
>>> +++ b/tools/libs/store/xs.c
>>> @@ -302,7 +302,7 @@ struct xs_handle *xs_daemon_open(void)
>>>     struct xs_handle *xs_daemon_open_readonly(void)
>>>   {
>>> -    return xs_open(XS_OPEN_READONLY);
>>> +    return xs_open(0);
>>>   }
>>
>> This changes the semantics of this function, doesn't it? The user
>> expects a read-only connection but in fact a read-write connection is
>> returned.
>>
>> Maybe we should return an error here?
>
> No, as the behavior is this way already if any of the following is true:
>
> - a guest is opening the connection
> - Xenstore is running in a stubdom
> - oxenstored is being used

Right, and these are just some of the reasons why dropping the R/O
socket is a sensible thing to do.

However, I do think xenstore.h needs large disclaimers next to the
relevant constants and functions, stating that both XS_OPEN_* flags are
now obsolete and ignored (to be merged into the appropriate patches in
the series).

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 12:02:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 12:02:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3478.9932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ88r-0006H6-Ll; Wed, 07 Oct 2020 12:01:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3478.9932; Wed, 07 Oct 2020 12:01:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ88r-0006Gz-Ia; Wed, 07 Oct 2020 12:01:57 +0000
Received: by outflank-mailman (input) for mailman id 3478;
 Wed, 07 Oct 2020 12:01:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5kpZ=DO=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kQ88q-0006Gu-NQ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 12:01:56 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64ddcfd9-fa74-4a09-985d-3f699b0cd72c;
 Wed, 07 Oct 2020 12:01:55 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id 184so1962616lfd.6
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 05:01:55 -0700 (PDT)
Received: from [192.168.1.6] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id x17sm325927lja.10.2020.10.07.05.01.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 07 Oct 2020 05:01:53 -0700 (PDT)
X-Inumbo-ID: 64ddcfd9-fa74-4a09-985d-3f699b0cd72c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=qB9vXmKs4AD/b/cLEPCCdBmQGD4zsABIUtwF1mE8G/w=;
        b=DlZELsjGyAbmcrdzep0CtK1W6VDd5rpCbH/rytqb+bOgxoXKWYSXFUqkxo5jE7mB2A
         uNK+Wsh4vA41veKxROYQgXyZSXrnT4eT5DCz0vzgWwR8C7F3bF0GxPESAx17LtSLlTu9
         5j6O18Drvhfh33ddKhzFOf7YD1yXGrkr/BmWmsWjDr542zByjEUlMd8V/VCiHx+q7s3u
         Zqka7GWdLwb1Tijsetl1kFKap7N3jBYQyK5nkkPMa6FUEIymQKJpS+hVe4EVB6XINPX6
         bYEKw8bEmiRacyXPxiwK2+oCS143YiUq7s8pV07c1xDB85uhmk2LaQSmW77U12AZV4wJ
         1dXQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=qB9vXmKs4AD/b/cLEPCCdBmQGD4zsABIUtwF1mE8G/w=;
        b=NWOtNlvkO6xdPeLM9oS1B7GgTDJnp2Dj2vpy37iVvvr6adOOdSYytfQYi6aQTXxNNK
         cgMldlyHJbtiQhO2AF9HZpR2xouUgwUXOEXFByFOQbj/WjYsRX88xLKoo4LXJSu4fo37
         +epgW3CqHBmuUnpAaaEH4cTAh5icRo1sqy2SCoyOyI/BQz3udwRFxxUGxQamBsEp0Ns/
         8K+rzMxUZDe14PC7zDL8ayRmtz5Y4G5KOAxTg2I2TydyzKfeRebXtS5N36U5ekzOsaOQ
         eRoZs4g2sI38OgY07rOa9zStxCzLTL/CkKdWZt9hpixxKePXHKfcb1vKFeMCHA9289ws
         6wlg==
X-Gm-Message-State: AOAM533T0aAJTJ5Rv5r/Bto/hAKShxK36DTQEWarQAI2TYxXR+vShNGC
	tbt+AYPsxegJ05WZAH0kvHM=
X-Google-Smtp-Source: ABdhPJy9LfF4/eI13d4o44aFKpQIlrL6Iu2RuTf4MLfTVS+/5CZeRb8t6ipF/EH7fsUMSm+st+SsCQ==
X-Received: by 2002:a19:505a:: with SMTP id z26mr797907lfj.442.1602072114548;
        Wed, 07 Oct 2020 05:01:54 -0700 (PDT)
Subject: Re: [PATCH V1 13/16] xen/ioreq: Make x86's invalidate qemu mapcache
 handling common
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Julien Grall <julien.grall@arm.com>
References: <1599769330-17656-1-git-send-email-olekstysh@gmail.com>
 <1599769330-17656-14-git-send-email-olekstysh@gmail.com>
 <83dfb207-c191-8dad-1474-ce57b6d51102@suse.com>
 <2cab3ca5-0f2b-a813-099f-95bbf54bb9c8@gmail.com>
 <17f1c7d2-7a84-a6a5-4afb-f82e67bc9fd0@suse.com>
 <0fa6a31c-8da6-2a0a-b110-a697f4955702@gmail.com>
 <3abe3988-f1c0-9bbf-1ff9-ce3ae380c825@suse.com>
 <47ecdde7-6575-bee8-7981-7b1a31715a0b@gmail.com>
 <0aa9a225-1231-fa98-f2a1-caf898a3ed86@gmail.com>
 <fa610665-78c2-3bd0-66f4-2aa716bafe64@xen.org>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <0e58975f-c0a9-76c6-9856-2a78a67e7f3e@gmail.com>
Date: Wed, 7 Oct 2020 15:01:47 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fa610665-78c2-3bd0-66f4-2aa716bafe64@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 07.10.20 13:38, Julien Grall wrote:
> Hi Oleksandr,

Hi Julien.



>
> On 02/10/2020 10:55, Oleksandr wrote:
>> If I got it correctly there won't be a suitable common place where to 
>> set qemu_mapcache_invalidate flag anymore
>> as XENMEM_decrease_reservation is not a single place we need to make 
>> a decision whether to set it
>> By principle of analogy, on Arm we probably want to do so in 
>> guest_physmap_remove_page (or maybe better in p2m_remove_mapping).
>> Julien, what do you think?
>
> At the moment, the Arm code doesn't explicitly remove the existing
> mapping before inserting the new mapping. Instead, this is done
> implicitly by p2m_set_entry().

Got it.


>
>
> So I think we want to invalidate the QEMU mapcache in p2m_set_entry() 
> if the old entry is a RAM page *and* the new MFN is different.

Thank you. I hope the following is close to what was suggested (I didn't
test it yet):


diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ae8594f..512eea9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1073,7 +1073,14 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
       */
      if ( p2m_is_valid(orig_pte) &&
           !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
+    {
+#ifdef CONFIG_IOREQ_SERVER
+        if ( domain_has_ioreq_server(p2m->domain) &&
+             (p2m->domain == current->domain) && p2m_is_ram(orig_pte.p2m.type) )
+            p2m->domain->qemu_mapcache_invalidate = true;
+#endif
          p2m_free_entry(p2m, orig_pte, level);
+    }

  out:
      unmap_domain_page(table);


But if I got the review comments correctly [1], shouldn't the
qemu_mapcache_invalidate variable be per-vcpu instead of per-domain?

[1] https://patchwork.kernel.org/patch/11803383/



-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 12:03:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 12:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3479.9945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ8Ad-0006OG-2a; Wed, 07 Oct 2020 12:03:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3479.9945; Wed, 07 Oct 2020 12:03:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ8Ac-0006O9-Vj; Wed, 07 Oct 2020 12:03:46 +0000
Received: by outflank-mailman (input) for mailman id 3479;
 Wed, 07 Oct 2020 12:03:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0nCK=DO=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kQ8Ac-0006O1-31
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 12:03:46 +0000
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30bf7a6b-128b-4d93-b776-21b3bf3df1de;
 Wed, 07 Oct 2020 12:03:44 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id qp15so2579545ejb.3
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 05:03:44 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id m7sm1491305edv.88.2020.10.07.05.03.42
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 07 Oct 2020 05:03:43 -0700 (PDT)
X-Inumbo-ID: 30bf7a6b-128b-4d93-b776-21b3bf3df1de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=XmaxyqqD4t/7Z1ArZshtrKB398YVfdVs4G/Q4xHvD9Q=;
        b=CHmkX/VAtY6HCFzdJLDgQvfp09xYrvGBIX9aeLXvgNCiKTAx7VGNWvtW6PBnLnFj43
         MpMYsDe/DNPKOuxCWUNvX03ikj7J18aBXbcqdTXzgoCjK0lSKcrjDcpbojAZec2Dd9XX
         B7WfBbtnGYE4XrQaw3uS7JjPFnuWKBzd+s77pPgw2MTqCu4mVtVeWKrbHszwcpF3C48D
         F3MduQW6q+nzrOovx3V2BwMM0YsZ2lKacU040b9Uxh6cev1spCU3kgnu3V0RZ8hCgqt+
         Rv+kEAZQGBCe+rVawPr9kZfjyLvTMqD+J98MMWbJ/eioQT93P4lSPO9K53Ru2aYKB+BU
         7hOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=XmaxyqqD4t/7Z1ArZshtrKB398YVfdVs4G/Q4xHvD9Q=;
        b=seUD0NeiwX+vuOwCqUWV5Q42ckKlWDrBIy8U2kEkfCk1iR9akTUZ5XtfGwfgNYHtZn
         PP6kB5RizttKXxDxQllcaxWanu6EaN++AzskdMbgClXIMOfqHJAMk0zpCPaTefq+D8ik
         oDmHWv24PpaIJVCZcDRHRgwhtq2rd9FS34PTEMPfsUEgwrMeWo6g1NY19Te/W0lSlpym
         v8wy+kJXIZ1ZlAjDXy3DI1UEZa1ojW8ofVWPHbrF6TY3iu/FvVjpFlDiB0knBejTuxQp
         OnloX3FKGuvR68htgmY+yDvLgvRHM3bwzn9fyuJIqHyY3dbziqtQ/eq0136l4FP/6WRf
         DTbQ==
X-Gm-Message-State: AOAM532x8NwRbmvll5+48dkXjDNPVPyZIaovF8iaRWcCBEx+FwYjl+Uo
	x3i2VmBLHT3J16cLxNbrjEo=
X-Google-Smtp-Source: ABdhPJwe+CJeG6AkRc797UIxhzjpL+p33xFOsJNorrDDzJwlfoOfx+Ol+I7w4Voq0I1xxxNEn09Zqw==
X-Received: by 2002:a17:906:4151:: with SMTP id l17mr3169801ejk.83.1602072223933;
        Wed, 07 Oct 2020 05:03:43 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Ian Jackson'" <ian.jackson@eu.citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>
References: <20200924131030.1876-1-paul@xen.org> <20200924131030.1876-7-paul@xen.org> <a82cfb40-9ce5-d8ed-a2f7-1b02fc6e27e6@citrix.com>
In-Reply-To: <a82cfb40-9ce5-d8ed-a2f7-1b02fc6e27e6@citrix.com>
Subject: RE: [PATCH v9 6/8] common/domain: add a domain context record for shared_info...
Date: Wed, 7 Oct 2020 13:03:42 +0100
Message-ID: <000d01d69ca1$e6f14e90$b4d3ebb0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQHMaZuCwoR0ssB24MBKMpsJEDSJHwIhBQEuAQ4F24WphvTQAA==

> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@citrix.com>
> Sent: 05 October 2020 11:40
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>;
> George Dunlap <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>; Julien Grall
> <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>
> Subject: Re: [PATCH v9 6/8] common/domain: add a domain context record for shared_info...
>
> On 24/09/2020 14:10, Paul Durrant wrote:
> > diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
> > index 243325dfce..6ead7ea89d 100644
> > --- a/tools/misc/xen-domctx.c
> > +++ b/tools/misc/xen-domctx.c
> > @@ -31,6 +31,7 @@
> >  #include <errno.h>
> >
> >  #include <xenctrl.h>
> > +#include <xen-tools/libs.h>
> >  #include <xen/xen.h>
> >  #include <xen/domctl.h>
> >  #include <xen/save.h>
> > @@ -61,6 +62,82 @@ static void dump_header(void)
> >
> >  }
> >
> > +static void print_binary(const char *prefix, const void *val, size_t size,
> > +                         const char *suffix)
> > +{
> > +    printf("%s", prefix);
> > +
> > +    while ( size-- )
> > +    {
> > +        uint8_t octet = *(const uint8_t *)val++;
> > +        unsigned int i;
> > +
> > +        for ( i = 0; i < 8; i++ )
> > +        {
> > +            printf("%u", octet & 1);
> > +            octet >>= 1;
> > +        }
> > +    }
> > +
> > +    printf("%s", suffix);
> > +}
> > +
> > +static void dump_shared_info(void)
> > +{
> > +    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
> > +    bool has_32bit_shinfo;
> > +    shared_info_any_t *info;
> > +    unsigned int i, n;
> > +
> > +    GET_PTR(s);
> > +    has_32bit_shinfo = s->flags & DOMAIN_SAVE_32BIT_SHINFO;
> > +
> > +    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
> > +           has_32bit_shinfo ? "true" : "false", s->buffer_size);
> > +
> > +    info = (shared_info_any_t *)s->buffer;
> > +
> > +#define GET_FIELD_PTR(_f)            \
> > +    (has_32bit_shinfo ?              \
> > +     (const void *)&(info->x32._f) : \
> > +     (const void *)&(info->x64._f))
> > +#define GET_FIELD_SIZE(_f) \
> > +    (has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
> > +#define GET_FIELD(_f) \
> > +    (has_32bit_shinfo ? info->x32._f : info->x64._f)
> > +
> > +    n = has_32bit_shinfo ?
> > +        ARRAY_SIZE(info->x32.evtchn_pending) :
> > +        ARRAY_SIZE(info->x64.evtchn_pending);
> > +
> > +    for ( i = 0; i < n; i++ )
> > +    {
> > +        const char *prefix = !i ?
> > +            "                 evtchn_pending: " :
> > +            "                                 ";
> > +
> > +        print_binary(prefix, GET_FIELD_PTR(evtchn_pending[0]),
> > +                 GET_FIELD_SIZE(evtchn_pending[0]), "\n");
> > +    }
> > +
> > +    for ( i = 0; i < n; i++ )
> > +    {
> > +        const char *prefix = !i ?
> > +            "                    evtchn_mask: " :
> > +            "                                 ";
> > +
> > +        print_binary(prefix, GET_FIELD_PTR(evtchn_mask[0]),
> > +                 GET_FIELD_SIZE(evtchn_mask[0]), "\n");
> > +    }
>
> What about domains using FIFO?  This is meaningless for them.
>

Indeed, but this is essentially a debug tool so I'd rather it just
dumped everything that might be useful.

> > +
> > +    printf("                 wc: version: %u sec: %u nsec: %u\n",
> > +           GET_FIELD(wc_version), GET_FIELD(wc_sec), GET_FIELD(wc_nsec));
>
> wc_sec_hi is also a rather critical field in this calculation.
>

Ok.

> > +
> > +#undef GET_FIELD
> > +#undef GET_FIELD_SIZE
> > +#undef GET_FIELD_PTR
> > +}
> > +
> >  static void dump_end(void)
> >  {
> >      DOMAIN_SAVE_TYPE(END) *e;
> > @@ -173,6 +250,7 @@ int main(int argc, char **argv)
> >              switch (desc->typecode)
> >              {
> >              case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
> > +            case DOMAIN_SAVE_CODE(SHARED_INFO): dump_shared_info(); break;
> >              case DOMAIN_SAVE_CODE(END): dump_end(); break;
> >              default:
> >                  printf("Unknown type %u: skipping\n", desc->typecode);
> > diff --git a/xen/common/domain.c b/xen/common/domain.c
> > index 8cfa2e0b6b..6709f9c79e 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -33,6 +33,7 @@
> >  #include <xen/xenoprof.h>
> >  #include <xen/irq.h>
> >  #include <xen/argo.h>
> > +#include <xen/save.h>
> >  #include <asm/debugger.h>
> >  #include <asm/p2m.h>
> >  #include <asm/processor.h>
> > @@ -1657,6 +1658,110 @@ int continue_hypercall_on_cpu(
> >      return 0;
> >  }
> >
> > +static int save_shared_info(const struct domain *d, struct =
domain_context *c,
> > +                            bool dry_run)
> > +{
> > +    struct domain_shared_info_context ctxt =3D {
> > +#ifdef CONFIG_COMPAT
> > +        .flags =3D has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : =
0,
> > +        .buffer_size =3D has_32bit_shinfo(d) ?
> > +                       sizeof(struct compat_shared_info) :
> > +                       sizeof(struct shared_info),
> > +#else
> > +        .buffer_size =3D sizeof(struct shared_info),
> > +#endif
> > +    };
> > +    size_t hdr_size =3D offsetof(typeof(ctxt), buffer);
> > +    int rc;
> > +
> > +    rc =3D DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    rc =3D domain_save_data(c, &ctxt, hdr_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    rc =3D domain_save_data(c, d->shared_info, ctxt.buffer_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    return domain_save_end(c);
> > +}
> > +
> > +static int load_shared_info(struct domain *d, struct domain_context =
*c)
> > +{
> > +    struct domain_shared_info_context ctxt;
> > +    size_t hdr_size =3D offsetof(typeof(ctxt), buffer);
> > +    unsigned int i;
> > +    int rc;
> > +
> > +    rc =3D DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( i ) /* expect only a single instance */
> > +        return -ENXIO;
> > +
> > +    rc =3D domain_load_data(c, &ctxt, hdr_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( ctxt.buffer_size > sizeof(shared_info_t) ||
> > +         (ctxt.flags & ~DOMAIN_SAVE_32BIT_SHINFO) )
> > +        return -EINVAL;
> > +
> > +    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
> > +    {
> > +#ifdef CONFIG_COMPAT
> > +        has_32bit_shinfo(d) =3D true;
>=20
> d->arch.has_32bit_shinfo
>=20

If you'd prefer, ok.

> > +#else
> > +        return -EINVAL;
> > +#endif
> > +    }
> > +
> > +    if ( is_pv_domain(d) )
> > +    {
> > +        shared_info_t *shinfo =3D xmalloc(shared_info_t);
> > +
> > +        if ( !shinfo )
> > +            return -ENOMEM;
> > +
> > +        rc =3D domain_load_data(c, shinfo, sizeof(*shinfo));
> > +        if ( rc )
> > +            goto out;
>=20
> There's no need for a memory allocation, or to double buffer this =
data.
> You can memcpy() straight out of the context record.
>=20

That would mean re-working the way that domain_load_data() works. I'd =
really rather not.

> > +
> > +        memcpy(&shared_info(d, vcpu_info), &__shared_info(d, =
shinfo, vcpu_info),
> > +               sizeof(shared_info(d, vcpu_info)));
> > +        memcpy(&shared_info(d, arch), &__shared_info(d, shinfo, =
arch),
> > +               sizeof(shared_info(d, arch)));
> > +
> > +        memset(&shared_info(d, evtchn_pending), 0,
> > +               sizeof(shared_info(d, evtchn_pending)));
> > +        memset(&shared_info(d, evtchn_mask), 0xff,
> > +               sizeof(shared_info(d, evtchn_mask)));
> > +
> > +        shared_info(d, arch.pfn_to_mfn_frame_list_list) =3D 0;
> > +        for ( i =3D 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
> > +            shared_info(d, vcpu_info[i].evtchn_pending_sel) =3D 0;
>=20
> What is the plan for transparent migrate here?  While this is ok for
> regular migrate, its definitely not for transparent.
>=20

Quite true, as evidenced that this is inside 'if ( is_pv_domain(d) )'. =
It is not yet clear how much of the shared info we need for transparent =
migrate. It may be nothing.

> > +
> > +        rc =3D domain_load_end(c, false);
> > +
> > +    out:
> > +        xfree(shinfo);
> > +    }
> > +    else
> > +        /*
> > +         * No modifications to shared_info are required for =
restoring non-PV
> > +         * domains.
> > +         */
> > +        rc =3D domain_load_end(c, true);
> > +
> > +    return rc;
> > +}
> > +
> > +DOMAIN_REGISTER_SAVE_LOAD(SHARED_INFO, save_shared_info, load_shared_info);
> > +
> >  /*
> >   * Local variables:
> >   * mode: C
> > diff --git a/xen/include/public/save.h b/xen/include/public/save.h
> > index 551dbbddb8..0e855a4b97 100644
> > --- a/xen/include/public/save.h
> > +++ b/xen/include/public/save.h
> > @@ -82,7 +82,18 @@ struct domain_save_header {
> >  };
> >  DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
> >
> > -#define DOMAIN_SAVE_CODE_MAX 1
> > +struct domain_shared_info_context {
> > +    uint32_t flags;
> > +
> > +#define DOMAIN_SAVE_32BIT_SHINFO 0x00000001
> > +
> > +    uint32_t buffer_size;
>
> This struct is already wrapped with a header including a size which
> encompasses the buffer.
>
> Multiple overlapping size fields are an easy way to get memory corruption,
> because they create ambiguity as to which one is right.
>

The record size currently includes padding. I'm re-working that in v10 and
so this size can be dropped.

  Paul




From xen-devel-bounces@lists.xenproject.org Wed Oct 07 12:06:32 2020
Subject: Re: [PATCH] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201006162327.93055-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a98d6cb1-0b1d-8fb8-8718-c65e02e448bb@citrix.com>
Date: Wed, 7 Oct 2020 13:06:08 +0100
In-Reply-To: <20201006162327.93055-1-roger.pau@citrix.com>

On 06/10/2020 17:23, Roger Pau Monne wrote:
> Currently a PV hardware domain can also be given control over the CPU
> frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.

This might be how the current logic "works", but it's straight-up broken.

PERF_CTL is thread scope, so unless dom0 is identity pinned and has one
vcpu for every pcpu, it cannot use the interface correctly.

> However, since commit 322ec7c89f6 the default behavior has been changed
> to reject accesses to MSRs that are not explicitly handled, preventing
> PV guests that manage CPU frequency from reading
> MSR_IA32_PERF_{STATUS/CTL}.
>
> Additionally some HVM guests (Windows at least) will attempt to read
> MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
>
> vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
> d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
>
> Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
> handling shared between HVM and PV guests, and add an explicit case
> for reads to MSR_IA32_PERF_{STATUS/CTL}.

OTOH, PERF_CTL does have a seemingly architectural "please disable turbo
for me" bit, which is supposed to be for calibration loops.  I wonder if
anyone uses this, and whether we ought to honour it (probably not).
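The bit Andrew is referring to is bit 32 of IA32_PERF_CTL (MSR 0x199), the IDA/Turbo disengage bit on Intel CPUs with Turbo Boost. A minimal sketch of the bit arithmetic a wrmsr handler would need, should it ever honour the request; this is illustrative only and touches no real MSRs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MSR_IA32_PERF_CTL        0x199
/* IA32_PERF_CTL[32]: IDA/Turbo disengage ("please disable turbo"). */
#define PERF_CTL_TURBO_DISENGAGE (1ULL << 32)

/* Does the guest's requested wrmsr value ask for turbo to be disengaged? */
static bool wrmsr_requests_turbo_disengage(uint32_t msr, uint64_t val)
{
    return msr == MSR_IA32_PERF_CTL && (val & PERF_CTL_TURBO_DISENGAGE);
}
```

Note the bit sits in the upper half of the 64-bit value, i.e. it is invisible to code that only looks at the low 32 bits of the write.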

> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index d8ed83f869..41baa3b7a1 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -1069,6 +1069,12 @@ extern enum cpufreq_controller {
>      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
>  } cpufreq_controller;
>  
> +static inline bool is_cpufreq_controller(const struct domain *d)
> +{
> +    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
> +            is_hardware_domain(d));

This won't compile on !CONFIG_X86, due to CONFIG_HAS_CPUFREQ

Honestly, I don't see any point to this code.  It's opt-in via the
command line only, and doesn't provide adequate checks for enablement.
(It's not as if we're lacking complexity or moving parts when it comes
to power/frequency management.)

~Andrew

> +}
> +
>  int cpupool_move_domain(struct domain *d, struct cpupool *c);
>  int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
>  int cpupool_get_id(const struct domain *d);
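The compile failure Andrew points out has a conventional shape: compile the real check only when the cpufreq code exists, and provide a trivially-false stub otherwise. The sketch below is self-contained with mock types so it builds stand-alone; it is not the actual fix that landed, and the scaffolding (`struct domain`, `is_hardware_domain()`) stands in for Xen's real definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock scaffolding; in Xen these come from xen/sched.h and the cpufreq
 * code, which only exists under CONFIG_HAS_CPUFREQ. */
struct domain { bool is_hw; };
enum cpufreq_controller { FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen };

#define CONFIG_HAS_CPUFREQ 1    /* remove to exercise the stub branch */

#ifdef CONFIG_HAS_CPUFREQ
static enum cpufreq_controller cpufreq_controller = FREQCTL_dom0_kernel;
static bool is_hardware_domain(const struct domain *d) { return d->is_hw; }

static inline bool is_cpufreq_controller(const struct domain *d)
{
    return cpufreq_controller == FREQCTL_dom0_kernel &&
           is_hardware_domain(d);
}
#else
/* Stub for builds without cpufreq support (e.g. Arm): never a controller. */
static inline bool is_cpufreq_controller(const struct domain *d)
{
    return false;
}
#endif
```

Callers then need no #ifdefs of their own, which is the usual reason to prefer a stub over guarding every use site.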



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 12:45:46 2020
Subject: Re: [PATCH 3/5] tools/libs/store: drop read-only functionality
To: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
References: <20201002154141.11677-1-jgross@suse.com>
 <20201002154141.11677-4-jgross@suse.com>
 <20201007105448.c7scd5hoellddfwd@liuwe-devbox-debian-v2>
 <d03ef7db-8752-ac00-99f1-6c40f62e1162@suse.com>
 <f4b6a87c-ac65-cb3e-d4b2-4504340b81e3@citrix.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <72542057-f5d8-99d5-55d9-2a21b7cb2d93@suse.com>
Date: Wed, 7 Oct 2020 14:45:29 +0200
In-Reply-To: <f4b6a87c-ac65-cb3e-d4b2-4504340b81e3@citrix.com>

On 07.10.20 13:50, Andrew Cooper wrote:
> On 07/10/2020 11:57, Jürgen Groß wrote:
>> On 07.10.20 12:54, Wei Liu wrote:
>>> On Fri, Oct 02, 2020 at 05:41:39PM +0200, Juergen Gross wrote:
>>>> Today it is possible to open the connection in read-only mode via
>>>> xs_daemon_open_readonly(). This is working only with Xenstore running
>>>> as a daemon in the same domain as the user. Additionally it doesn't
>>>> add any security as accessing the socket used for that functionality
>>>> requires the same privileges as the socket used for full Xenstore
>>>> access.
>>>>
>>>> So just drop the read-only semantics in all cases, leaving the
>>>> interface existing only for compatibility reasons. This in turn
>>>> requires just ignoring the XS_OPEN_READONLY flag.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>    tools/libs/store/include/xenstore.h | 8 --------
>>>>    tools/libs/store/xs.c               | 7 ++-----
>>>>    2 files changed, 2 insertions(+), 13 deletions(-)
>>>>
>>>> diff --git a/tools/libs/store/include/xenstore.h
>>>> b/tools/libs/store/include/xenstore.h
>>>> index cbc7206a0f..158e69ef83 100644
>>>> --- a/tools/libs/store/include/xenstore.h
>>>> +++ b/tools/libs/store/include/xenstore.h
>>>> @@ -60,15 +60,12 @@ typedef uint32_t xs_transaction_t;
>>>>    /* Open a connection to the xs daemon.
>>>>     * Attempts to make a connection over the socket interface,
>>>>     * and if it fails, then over the  xenbus interface.
>>>> - * Mode 0 specifies read-write access, XS_OPEN_READONLY for
>>>> - * read-only access.
>>>>     *
>>>>     * * Connections made with xs_open(0) (which might be shared page or
>>>>     *   socket based) are only guaranteed to work in the parent after
>>>>     *   fork.
>>>>     * * xs_daemon_open*() and xs_domain_open() are deprecated synonyms
>>>>     *   for xs_open(0).
>>>> - * * XS_OPEN_READONLY has no bearing on any of this.
>>>>     *
>>>>     * Returns a handle or NULL.
>>>>     */
>>>> @@ -83,11 +80,6 @@ void xs_close(struct xs_handle *xsh /* NULL ok */);
>>>>     */
>>>>    struct xs_handle *xs_daemon_open(void);
>>>>    struct xs_handle *xs_domain_open(void);
>>>> -
>>>> -/* Connect to the xs daemon (readonly for non-root clients).
>>>> - * Returns a handle or NULL.
>>>> - * Deprecated, please use xs_open(XS_OPEN_READONLY) instead
>>>> - */
>>>>    struct xs_handle *xs_daemon_open_readonly(void);
>>>>      /* Close the connection to the xs daemon.
>>>> diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
>>>> index 320734416f..4ac73ec317 100644
>>>> --- a/tools/libs/store/xs.c
>>>> +++ b/tools/libs/store/xs.c
>>>> @@ -302,7 +302,7 @@ struct xs_handle *xs_daemon_open(void)
>>>>      struct xs_handle *xs_daemon_open_readonly(void)
>>>>    {
>>>> -    return xs_open(XS_OPEN_READONLY);
>>>> +    return xs_open(0);
>>>>    }
>>>
>>> This changes the semantics of this function, doesn't it? In that the user
>>> expects a readonly connection but in fact a read-write connection is
>>> returned.
>>>
>>> Maybe we should return an error here?
>>
>> No, as the behavior is this way already if any of the following is true:
>>
>> - a guest is opening the connection
>> - Xenstore is running in a stubdom
>> - oxenstored is being used
> 
> Right, and these are just some of the reasons why dropping the R/O
> socket is a sensible thing to do.
> 
> However, I do think xenstore.h needs large disclaimers next to the
> relevant constants and functions that both XS_OPEN_* flags are now
> obsolete and ignored (merged into appropriate patches in the series).

Fine with me.


Juergen
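The compatibility scheme agreed above can be mocked up stand-alone: keep the XS_OPEN_READONLY constant and the deprecated entry point for ABI reasons, but funnel everything into the same read-write open. The function names mirror libxenstore's, while the handle struct and its contents here are purely a mock, not the real library internals.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define XS_OPEN_READONLY (1UL << 0)   /* obsolete: accepted and ignored */

struct xs_handle { int fd; };         /* mock, not the real handle */

static struct xs_handle *xs_open(unsigned long flags)
{
    (void)flags;                      /* XS_OPEN_READONLY no longer honoured */
    struct xs_handle *h = malloc(sizeof(*h));
    if (h)
        h->fd = 42;                   /* pretend we connected read-write */
    return h;
}

/* Deprecated synonym, kept so existing callers keep compiling and linking;
 * it now returns a full read-write connection. */
static struct xs_handle *xs_daemon_open_readonly(void)
{
    return xs_open(0);
}
```

This is exactly why the header disclaimers matter: the symbol survives, but its read-only promise does not.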



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 12:57:39 2020
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
To: Daniel Vetter <daniel@ffwll.ch>,
 Christian König <christian.koenig@amd.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
 Maxime Ripard <mripard@kernel.org>, Dave Airlie <airlied@linux.ie>,
 Sam Ravnborg <sam@ravnborg.org>, Alex Deucher <alexander.deucher@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Lucas Stach <l.stach@pengutronix.de>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 Christian Gmeiner <christian.gmeiner@gmail.com>,
 Inki Dae <inki.dae@samsung.com>, Joonyoung Shim <jy0922.shim@samsung.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>,
 Kyungmin Park <kyungmin.park@samsung.com>, Kukjin Kim <kgene@kernel.org>,
 Krzysztof Kozlowski <krzk@kernel.org>, Qiang Yu <yuq825@gmail.com>,
 Ben Skeggs <bskeggs@redhat.com>, Rob Herring <robh@kernel.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Steven Price <steven.price@arm.com>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Sandy Huang <hjc@rock-chips.com>, Heiko Stübner
 <heiko@sntech.de>, Hans de Goede <hdegoede@redhat.com>,
 Sean Paul <sean@poorly.run>, "Anholt, Eric" <eric@anholt.net>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Huang Rui <ray.huang@amd.com>, Sumit Semwal <sumit.semwal@linaro.org>,
 Emil Velikov <emil.velikov@collabora.com>,
 Luben Tuikov <luben.tuikov@amd.com>, apaneers@amd.com,
 Linus Walleij <linus.walleij@linaro.org>, Melissa Wen
 <melissa.srw@gmail.com>, "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Qinglang Miao <miaoqinglang@huawei.com>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 lima@lists.freedesktop.org, Nouveau Dev <nouveau@lists.freedesktop.org>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com>
 <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
 <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
 <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com>
 <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local>
From: Thomas Zimmermann <tzimmermann@suse.de>
Message-ID: <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
Date: Wed, 7 Oct 2020 14:57:23 +0200
In-Reply-To: <20201002095830.GH438822@phenom.ffwll.local>

Hi

Am 02.10.20 um 11:58 schrieb Daniel Vetter:
> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>> <christian.koenig@amd.com> wrote:
>>>
>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter:
>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann:
>>>>>> Hi
>>>>>>
>>>>>> Am 30.09.20 um 10:05 schrieb Christian König:
>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann:
>>>>>>>> Hi Christian
>>>>>>>>
>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian König:
>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location
>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>> well.
>>>>>>>>>
>>>>>>>>> Additional to that, which driver is going to use this?
>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>> retrieve the pointer via this function.
>>>>>>>>
>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>> I should have asked which driver you try to fix here :)
>>>>>>>
>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>
>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>> Ah, ok can we have that then only in the VRAM helpers?
>>>>>
>>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>
>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>> Hm, I'm not really seeing how that helps with a gradual conversion of
>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>> conversions seems fairly unavoidable to me.
>>>
>>> Fair enough. I would just have started bottom up and not top down.
>>>
>>> Anyway feel free to go ahead with this approach as long as we can remove
>>> the new function again when we clean that stuff up for good.
>>
>> Yeah I guess bottom up would make more sense as a refactoring. But the
>> main motivation to land this here is to fix the __mmio vs normal
>> memory confusion in the fbdev emulation helpers for sparc (and
>> anything else that needs this). Hence the top down approach for
>> rolling this out.
>
> Ok, I started reviewing this a bit more in-depth, and I think this is a bit
> too much of a detour.
>
> Looking through all the callers of ttm_bo_kmap, almost everyone maps the
> entire object. Only vmwgfx uses it to map less than that. Also, everyone just
> immediately follows up with converting that full object map into a
> pointer.
>
> So I think what we really want here is:
> - new function
>
> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>
>   _vmap name since that's consistent with both dma_buf functions and
>   what's usually used to implement this. Outside of the ttm world kmap
>   usually just means single-page mappings using kmap() or its iomem
>   sibling io_mapping_map*, so rather a confusing name for a function which
>   usually is just used to set up a vmap of the entire buffer.
>
> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>   functions for all ttm drivers. We should be able to make this fully
>   generic because a) we now have dma_buf_map and b) drm_gem_object is
>   embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>   and gem driver.
>
>   This is maybe a good follow-up, since it should allow us to ditch quite
>   a bit of the vram helper code for this more generic stuff. I also might
>   have missed some special-cases here, but from a quick look everything
>   just pins the buffer to the current location and that's it.
>
>   Also this obviously requires Christian's generic ttm_bo_pin rework
>   first.
>
> - roll the above out to drivers.
>
> Christian/Thomas, thoughts on this?

I agree on the goals, but what is the immediate objective here?

Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
more internal state than struct dma_buf_map, so they are not easily
convertible either. What you propose seems to require a reimplementation
of the existing ttm_bo_kmap() code. That is its own patch series.

I'd rather go with some variant of the existing patch and add
ttm_bo_vmap() in a follow-up.
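The core of the conversion being debated is small enough to mock stand-alone: dma_buf_map tags one pointer as system memory or I/O memory, and the helper maps TTM's (vaddr, is_iomem) pair onto it. The types below are simplified stand-ins for the kernel's struct dma_buf_map and ttm_bo_kmap_obj, not the real definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct dma_buf_map {
    void *vaddr;        /* interpretation depends on is_iomem */
    bool is_iomem;      /* true: I/O memory, false: system memory */
};

struct ttm_kmap_mock {  /* stand-in for ttm_bo_kmap_obj's visible state */
    void *virt;
    bool is_iomem;
};

static void to_dma_buf_map(const struct ttm_kmap_mock *kmap,
                           struct dma_buf_map *map)
{
    if (!kmap->virt) {                  /* not mapped: clear the mapping */
        map->vaddr = NULL;
        map->is_iomem = false;
    } else {
        map->vaddr = kmap->virt;
        map->is_iomem = kmap->is_iomem; /* preserve the address-space tag */
    }
}
```

Keeping the tag alongside the pointer is what lets fbdev-style consumers pick memcpy vs. memcpy_toio correctly, which is the sparc bug this series is chasing.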

Best regards
Thomas

>
> I think for the immediate need of rolling this out for vram helpers and
> fbdev code we should be able to do this, but just postpone the driver wide
> roll-out for now.
>
> Cheers, Daniel
>
>> -Daniel
>>
>>>
>>> Christian.
>>>
>>>> -Daniel
>>>>
>>>>> Thanks,
>>>>> Christian.
>>>>>
>>>>>> Best regards
>>>>>> Thomas
>>>>>>
>>>>>>> Regards,
>>>>>>> Christian.
>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Thomas
>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Christian.
>>>>>>>>>
>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>> ---
>>>>>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>     2 files changed, 44 insertions(+)
>>>>>>>>>>
>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>     #include <drm/drm_gem.h>
>>>>>>>>>>     #include <drm/drm_hashtab.h>
>>>>>>>>>>     #include <drm/drm_vma_manager.h>
>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>     #include <linux/kref.h>
>>>>>>>>>>     #include <linux/list.h>
>>>>>>>>>>     #include <linux/wait.h>
>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>         return map->virtual;
>>>>>>>>>>     }
>>>>>>>>>>     +/**
>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>> + *
>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>> + *
>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>> + */
>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>> *kmap,
>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>> +{
>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>> +
>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>> +    else
>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>>     /**
>>>>>>>>>>      * ttm_bo_kmap
>>>>>>>>>>      *
>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>      *
>>>>>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>>>>>>>      *
>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>> + *
>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>> + *
>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>>>>>>>> + *
>>>>>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>      * dma_buf_map_is_null().
>>>>>>>>>>      *
>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>         map->is_iomem = false;
>>>>>>>>>>     }
>>>>>>>>>>     +/**
>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>> an address in I/O memory
>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>> + *
>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>> + */
>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>> +{
>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>> +}
>>>>>>>>>> +
>>>>>>>>>>     /**
>>>>>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>> for equality
>>>>>>>>>>      * @lhs:    The dma-buf mapping structure
>>>>>>>>> _______________________________________________
>>>>>>>>> dri-devel mailing list
>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>> _______________________________________________
>>>>>>>> amd-gfx mailing list
>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>
>>
>>
>> -- 
>> Daniel Vetter
>> Software Engineer, Intel Corporation
>> http://blog.ffwll.ch
>

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer




From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:00:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:00:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3492.9992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ93P-0004Hw-Sq; Wed, 07 Oct 2020 13:00:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3492.9992; Wed, 07 Oct 2020 13:00:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ93P-0004Hp-Pn; Wed, 07 Oct 2020 13:00:23 +0000
Received: by outflank-mailman (input) for mailman id 3492;
 Wed, 07 Oct 2020 13:00:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FceR=DO=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kQ93O-0004Hk-DZ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:00:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d20856c-4643-4a7b-b700-783a73c1d132;
 Wed, 07 Oct 2020 13:00:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQ93M-0004Ei-3K; Wed, 07 Oct 2020 13:00:20 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQ93L-0007PC-SY; Wed, 07 Oct 2020 13:00:20 +0000
X-Inumbo-ID: 9d20856c-4643-4a7b-b700-783a73c1d132
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=I504+elFMlT7w13ployzHQG4XVlP+UBRzPiuWe5Bo5M=; b=HWIALI5Nw1BtR47KCyfkVhX4PH
	MTH3jg9Ag40e0JDPznqjrQDJC9bjQ3hYiwzF8b220fkWXkXkvsIUYQOZBICFkzMna4URqNdvyELw6
	KrYEgxXvvi3DV8qxM5tlRZOAK9GDxqP1hWHCpLPDbxDBN5cVkBzBuRhm4GwU/FGdd7iU=;
Subject: Re: [PATCH] xen/arm: print update firmware only once
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <09d04b34e6b3b77ac206a42657b1b4116e7e11f3.1602068661.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4b3135ff-4795-e189-0430-da5627419e4e@xen.org>
Date: Wed, 7 Oct 2020 14:00:18 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <09d04b34e6b3b77ac206a42657b1b4116e7e11f3.1602068661.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 07/10/2020 12:05, Bertrand Marquis wrote:
> Fix enable_smccc_arch_workaround_1 to only print the warning asking to
> update the firmware once.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>   xen/arch/arm/cpuerrata.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 6c09017515..0c63dfa779 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -187,7 +187,7 @@ warn:
>           ASSERT(system_state < SYS_STATE_active);
>           warning_add("No support for ARM_SMCCC_ARCH_WORKAROUND_1.\n"
>                       "Please update your firmware.\n");
> -        warned = false;
> +        warned = true;

Thanks for spotting it. It looks like I introduced this bug in commit 
976319fa3de7f98b558c87b350699fffc278effc "xen/arm64: Kill 
PSCI_GET_VERSION as a variant-2 workaround".

I would suggest adding a Fixes tag (this can be done on commit).

Reviewed-by: Julien Grall <jgrall@amazon.com>

>       }
>   
>       return 0;
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:09:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:09:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3495.10005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9Bu-0004i2-Qa; Wed, 07 Oct 2020 13:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3495.10005; Wed, 07 Oct 2020 13:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9Bu-0004hv-Ml; Wed, 07 Oct 2020 13:09:10 +0000
Received: by outflank-mailman (input) for mailman id 3495;
 Wed, 07 Oct 2020 13:09:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQ9Bt-0004hq-9Z
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:09:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f2d44ace-6358-4d57-acf5-9a66483a23af;
 Wed, 07 Oct 2020 13:09:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ9Bq-0004Pf-KZ; Wed, 07 Oct 2020 13:09:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ9Bq-0004Sr-A9; Wed, 07 Oct 2020 13:09:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQ9Bq-0000AX-8c; Wed, 07 Oct 2020 13:09:06 +0000
X-Inumbo-ID: f2d44ace-6358-4d57-acf5-9a66483a23af
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QV/CSlXTe64lW8i9cOpTii5ixorIAyyKZlBacALWR5U=; b=Jab6Rr2q5IvIMqz1EfPkgqPgEt
	LPzjA8wpnfnMERfjSLQpzWZ/UukUvP1Vnd1OWQ45CyVJWY6QlFPet7iazvhplzwjIt2p6KGMV++Gi
	Q4vJ0aKijYRDkoGyMBD7Lmc1xJZ18RmxRFoRZZPLISljzkJFoenetmXw3ZyR69LRKQ3k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155509-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155509: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d7c5b788295426c1ef48a9ffc3432c51220f69ba
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 13:09:06 +0000

flight 155509 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155509/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152631
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 qemuu                d7c5b788295426c1ef48a9ffc3432c51220f69ba
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   48 days
Failing since        152659  2020-08-21 14:07:39 Z   46 days   78 attempts
Testing same since   155509  2020-10-06 19:50:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 39545 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:10:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3499.10018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9Df-0005Vg-AZ; Wed, 07 Oct 2020 13:10:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3499.10018; Wed, 07 Oct 2020 13:10:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9Df-0005VZ-7T; Wed, 07 Oct 2020 13:10:59 +0000
Received: by outflank-mailman (input) for mailman id 3499;
 Wed, 07 Oct 2020 13:10:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Cnt=DO=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
 id 1kQ9Dc-0005VN-NL
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:10:57 +0000
Received: from mail-ot1-x343.google.com (unknown [2607:f8b0:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9797e3d4-6e55-4236-8c7e-4caf7e26510e;
 Wed, 07 Oct 2020 13:10:54 +0000 (UTC)
Received: by mail-ot1-x343.google.com with SMTP id 60so2138764otw.3
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 06:10:54 -0700 (PDT)
X-Inumbo-ID: 9797e3d4-6e55-4236-8c7e-4caf7e26510e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=VsnMtwduK/eaPh8sh7OsPT0FyDBA9ykxTycTmfoK7FE=;
        b=XQw1QLYfR9qfW8B6CzTH9HNbjUoNQ7c7owK49oNl3ZGAyaYgsZRW8NsymAZNFvWa6v
         Pu2B9/m31WiqNmGwbeYXwkQZ64u4W7gA0XbGls83scZYTIzLPHrVLDuVDkcKBiYS2P+J
         IUNoJl08qFhoGCgQeLpdMq8KOGzN/FBJiPLz4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=VsnMtwduK/eaPh8sh7OsPT0FyDBA9ykxTycTmfoK7FE=;
        b=knRXJ+LBIQTZQLk91IiBivsb6m08mn9q5rpldGwi8JQtB1zZqyLusv/Ek0XcW59yHr
         0QEYhehPc99NaEZ4iSLjEp96MGwDg3ryq+ejGphTmO0rmMXjdPod2qXTF0Hio6chWQuY
         fxznIVuxCNuS/DEWUwO0AwYWB+yhF6pWjGSohZM7TR1CWfwEgj/XnwG3C1M+ltjPctrV
         5Mv3Jgo/5l6Sh7fCMQIL7eYs1rPOKa/Rv0/0/+Mo07uzmbs2h0LBgaZG3vfG1cm8+rvh
         qtB82gFdEt32IEc5mBAVAEcsyBxHLOx9XEZzydvATp4fyHcKD4gu0PzpiorEURjLTU0q
         cu0Q==
X-Gm-Message-State: AOAM532zOPk+ynELVg4Mobhmgt2vXvb9zrfPY0I3V/zGwb6+FKtafD+J
	deKCz4yEvYoWF+xyzGa72EJ8hwb3ls834HZq2VuqTg==
X-Google-Smtp-Source: ABdhPJz2fU3vfrmM8BVWXG8zDyJNA8JBNVY8njvMFhcKMWXmRriQUSCDo6MUx7uQxHr2lmvyhiUCEX/nBDzmPgvF+Z4=
X-Received: by 2002:a05:6830:1e56:: with SMTP id e22mr1739852otj.303.1602076253502;
 Wed, 07 Oct 2020 06:10:53 -0700 (PDT)
MIME-Version: 1.0
References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com> <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local> <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
In-Reply-To: <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
From: Daniel Vetter <daniel@ffwll.ch>
Date: Wed, 7 Oct 2020 15:10:41 +0200
Message-ID: <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>, 
	Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, Maxime Ripard <mripard@kernel.org>, 
	Dave Airlie <airlied@linux.ie>, Sam Ravnborg <sam@ravnborg.org>, 
	Alex Deucher <alexander.deucher@amd.com>, Gerd Hoffmann <kraxel@redhat.com>, 
	Lucas Stach <l.stach@pengutronix.de>, Russell King <linux+etnaviv@armlinux.org.uk>, 
	Christian Gmeiner <christian.gmeiner@gmail.com>, Inki Dae <inki.dae@samsung.com>, 
	Joonyoung Shim <jy0922.shim@samsung.com>, Seung-Woo Kim <sw0312.kim@samsung.com>, 
	Kyungmin Park <kyungmin.park@samsung.com>, Kukjin Kim <kgene@kernel.org>, 
	Krzysztof Kozlowski <krzk@kernel.org>, Qiang Yu <yuq825@gmail.com>, Ben Skeggs <bskeggs@redhat.com>, 
	Rob Herring <robh@kernel.org>, Tomeu Vizoso <tomeu.vizoso@collabora.com>, 
	Steven Price <steven.price@arm.com>, Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>, 
	Sandy Huang <hjc@rock-chips.com>, =?UTF-8?Q?Heiko_St=C3=BCbner?= <heiko@sntech.de>, 
	Hans de Goede <hdegoede@redhat.com>, Sean Paul <sean@poorly.run>, "Anholt, Eric" <eric@anholt.net>, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Huang Rui <ray.huang@amd.com>, 
	Sumit Semwal <sumit.semwal@linaro.org>, Emil Velikov <emil.velikov@collabora.com>, 
	Luben Tuikov <luben.tuikov@amd.com>, apaneers@amd.com, 
	Linus Walleij <linus.walleij@linaro.org>, Melissa Wen <melissa.srw@gmail.com>, 
	"Wilson, Chris" <chris@chris-wilson.co.uk>, Qinglang Miao <miaoqinglang@huawei.com>, 
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>, lima@lists.freedesktop.org, 
	Nouveau Dev <nouveau@lists.freedesktop.org>, 
	The etnaviv authors <etnaviv@lists.freedesktop.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	"open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>, 
	"moderated list:DMA BUFFER SHARING FRAMEWORK" <linaro-mm-sig@lists.linaro.org>, 
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	"open list:DRM DRIVER FOR QXL VIRTUAL GPU" <spice-devel@lists.freedesktop.org>, 
	"moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	"open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Hi
>
> On 02.10.20 at 11:58, Daniel Vetter wrote:
> > On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
> >> On Wed, Sep 30, 2020 at 2:34 PM Christian König
> >> <christian.koenig@amd.com> wrote:
> >>>
> >>> On 30.09.20 at 11:47, Daniel Vetter wrote:
> >>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> >>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
> >>>>>> Hi
> >>>>>>
> >>>>>> On 30.09.20 at 10:05, Christian König wrote:
> >>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
> >>>>>>>> Hi Christian
> >>>>>>>>
> >>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
> >>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
> >>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
> >>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> >>>>>>>>>> with these values. Helpful for TTM-based drivers.
> >>>>>>>>> We could completely drop that if we use the same structure inside TTM as
> >>>>>>>>> well.
> >>>>>>>>>
> >>>>>>>>> In addition to that, which driver is going to use this?
> >>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> >>>>>>>> retrieve the pointer via this function.
> >>>>>>>>
> >>>>>>>> I do want to see all that being more tightly integrated into TTM, but
> >>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
> >>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> >>>>>>> I should have asked which driver you try to fix here :)
> >>>>>>>
> >>>>>>> In this case just keep the function inside bochs and only fix it there.
> >>>>>>>
> >>>>>>> All other drivers can be fixed when we generally pump this through TTM.
> >>>>>> Did you take a look at patch 3? This function will be used by VRAM
> >>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> >>>>>> have to duplicate the functionality in each of these drivers. Bochs
> >>>>>> itself uses VRAM helpers and doesn't touch the function directly.
> >>>>> Ah, ok can we have that then only in the VRAM helpers?
> >>>>>
> >>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> >>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> >>>>>
> >>>>> What I want to avoid is to have another conversion function in TTM because
> >>>>> what happens here is that we already convert from ttm_bus_placement to
> >>>>> ttm_bo_kmap_obj and then to dma_buf_map.
> >>>> Hm I'm not really seeing how that helps with a gradual conversion of
> >>>> everything over to dma_buf_map and assorted helpers for access? There's
> >>>> too many places in ttm drivers where is_iomem and related stuff is used to
> >>>> be able to convert it all in one go. An intermediate state with a bunch of
> >>>> conversions seems fairly unavoidable to me.
> >>>
> >>> Fair enough. I would just have started bottom up and not top down.
> >>>
> >>> Anyway feel free to go ahead with this approach as long as we can remove
> >>> the new function again when we clean that stuff up for good.
> >>
> >> Yeah I guess bottom up would make more sense as a refactoring. But the
> >> main motivation to land this here is to fix the __mmio vs normal
> >> memory confusion in the fbdev emulation helpers for sparc (and
> >> anything else that needs this). Hence the top down approach for
> >> rolling this out.
> >
> > Ok I started reviewing this a bit more in-depth, and I think this is a bit
> > too much of a detour.
> >
> > Looking through all the callers of ttm_bo_kmap almost everyone maps the
> > entire object. Only vmwgfx uses to map less than that. Also, everyone j=
ust
> > immediately follows up with converting that full object map into a
> > pointer.
> >
> > So I think what we really want here is:
> > - new function
> >
> > int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> >
> >   _vmap name since that's consistent with both dma_buf functions and
> >   what's usually used to implement this. Outside of the ttm world kmap
> >   usually just means single-page mappings using kmap() or its iomem
> >   sibling io_mapping_map*, so it's a rather confusing name for a function which
> >   usually is just used to set up a vmap of the entire buffer.
> >
> > - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
> >   functions for all ttm drivers. We should be able to make this fully
> >   generic because a) we now have dma_buf_map and b) drm_gem_object is
> >   embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
> >   and gem driver.
> >
> >   This is maybe a good follow-up, since it should allow us to ditch quite
> >   a bit of the vram helper code for this more generic stuff. I also might
> >   have missed some special-cases here, but from a quick look everything
> >   just pins the buffer to the current location and that's it.
> >
> >   Also this obviously requires Christian's generic ttm_bo_pin rework
> >   first.
> >
> > - roll the above out to drivers.
> >
> > Christian/Thomas, thoughts on this?
>
> I agree on the goals, but what is the immediate objective here?
>
> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
> more internal state than struct dma_buf_map, so they are not easily
> convertible either. What you propose seems to require a reimplementation
> of the existing ttm_bo_kmap() code. That is its own patch series.
>
> I'd rather go with some variant of the existing patch and add
> ttm_bo_vmap() in a follow-up.

ttm_bo_vmap would simply wrap what you currently open-code as
ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
be a much later step. Why do you think adding ttm_bo_vmap is not
possible?
-Daniel


> Best regards
> Thomas
>
> >
> > I think for the immediate need of rolling this out for vram helpers and
> > fbdev code we should be able to do this, but just postpone the driver-wide
> > roll-out for now.
> >
> > Cheers, Daniel
> >
> >> -Daniel
> >>
> >>>
> >>> Christian.
> >>>
> >>>> -Daniel
> >>>>
> >>>>> Thanks,
> >>>>> Christian.
> >>>>>
> >>>>>> Best regards
> >>>>>> Thomas
> >>>>>>
> >>>>>>> Regards,
> >>>>>>> Christian.
> >>>>>>>
> >>>>>>>> Best regards
> >>>>>>>> Thomas
> >>>>>>>>
> >>>>>>>>> Regards,
> >>>>>>>>> Christian.
> >>>>>>>>>
> >>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >>>>>>>>>> ---
> >>>>>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> >>>>>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> >>>>>>>>>>     2 files changed, 44 insertions(+)
> >>>>>>>>>>
> >>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
> >>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>> @@ -34,6 +34,7 @@
> >>>>>>>>>>     #include <drm/drm_gem.h>
> >>>>>>>>>>     #include <drm/drm_hashtab.h>
> >>>>>>>>>>     #include <drm/drm_vma_manager.h>
> >>>>>>>>>> +#include <linux/dma-buf-map.h>
> >>>>>>>>>>     #include <linux/kref.h>
> >>>>>>>>>>     #include <linux/list.h>
> >>>>>>>>>>     #include <linux/wait.h>
> >>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct ttm_bo_kmap_obj *map,
> >>>>>>>>>>         return map->virtual;
> >>>>>>>>>>     }
> >>>>>>>>>>     +/**
> >>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> >>>>>>>>>> + *
> >>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> >>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> >>>>>>>>>> + *
> >>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> >>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> >>>>>>>>>> + */
> >>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj *kmap,
> >>>>>>>>>> +                           struct dma_buf_map *map)
> >>>>>>>>>> +{
> >>>>>>>>>> +    bool is_iomem;
> >>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> >>>>>>>>>> +
> >>>>>>>>>> +    if (!vaddr)
> >>>>>>>>>> +        dma_buf_map_clear(map);
> >>>>>>>>>> +    else if (is_iomem)
> >>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> >>>>>>>>>> +    else
> >>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>>     /**
> >>>>>>>>>>      * ttm_bo_kmap
> >>>>>>>>>>      *
> >>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> >>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> >>>>>>>>>> --- a/include/linux/dma-buf-map.h
> >>>>>>>>>> +++ b/include/linux/dma-buf-map.h
> >>>>>>>>>> @@ -45,6 +45,12 @@
> >>>>>>>>>>      *
> >>>>>>>>>>      *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> >>>>>>>>>>      *
> >>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> >>>>>>>>>> + *
> >>>>>>>>>> + * .. code-block:: c
> >>>>>>>>>> + *
> >>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> >>>>>>>>>> + *
> >>>>>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
> >>>>>>>>>>      * dma_buf_map_is_null().
> >>>>>>>>>>      *
> >>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> >>>>>>>>>>         map->is_iomem = false;
> >>>>>>>>>>     }
> >>>>>>>>>>     +/**
> >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> >>>>>>>>>> + * @map:        The dma-buf mapping structure
> >>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
> >>>>>>>>>> + *
> >>>>>>>>>> + * Sets the address and the I/O-memory flag.
> >>>>>>>>>> + */
> >>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> >>>>>>>>>> +                           void __iomem *vaddr_iomem)
> >>>>>>>>>> +{
> >>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> >>>>>>>>>> +    map->is_iomem = true;
> >>>>>>>>>> +}
> >>>>>>>>>> +
> >>>>>>>>>>     /**
> >>>>>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> >>>>>>>>>>      * @lhs:    The dma-buf mapping structure
> >>>>>>>>> _______________________________________________
> >>>>>>>>> dri-devel mailing list
> >>>>>>>>> dri-devel@lists.freedesktop.org
> >>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>>>>>>> _______________________________________________
> >>>>>>>> amd-gfx mailing list
> >>>>>>>> amd-gfx@lists.freedesktop.org
> >>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>>>>>>
> >>>
> >>
> >>
> >> --
> >> Daniel Vetter
> >> Software Engineer, Intel Corporation
> >> http://blog.ffwll.ch
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:20:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:20:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3502.10031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9My-0006cZ-AK; Wed, 07 Oct 2020 13:20:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3502.10031; Wed, 07 Oct 2020 13:20:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9My-0006cS-5t; Wed, 07 Oct 2020 13:20:36 +0000
Received: by outflank-mailman (input) for mailman id 3502;
 Wed, 07 Oct 2020 13:20:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tFcq=DO=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kQ9Mw-0006cN-Ro
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:20:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf6a7ae8-90a6-433a-afb9-ae96bec3ceed;
 Wed, 07 Oct 2020 13:20:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 12A15AC6D;
 Wed,  7 Oct 2020 13:20:32 +0000 (UTC)
X-Inumbo-ID: bf6a7ae8-90a6-433a-afb9-ae96bec3ceed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
To: Daniel Vetter <daniel@ffwll.ch>
Cc: Luben Tuikov <luben.tuikov@amd.com>, Dave Airlie <airlied@linux.ie>,
 Nouveau Dev <nouveau@lists.freedesktop.org>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Melissa Wen <melissa.srw@gmail.com>, Huang Rui <ray.huang@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Sam Ravnborg <sam@ravnborg.org>,
 Emil Velikov <emil.velikov@collabora.com>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Krzysztof Kozlowski <krzk@kernel.org>, Steven Price <steven.price@arm.com>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 Kukjin Kim <kgene@kernel.org>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 Hans de Goede <hdegoede@redhat.com>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 Sean Paul <sean@poorly.run>, apaneers@amd.com,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>,
 Kyungmin Park <kyungmin.park@samsung.com>,
 Qinglang Miao <miaoqinglang@huawei.com>, Qiang Yu <yuq825@gmail.com>,
 Alex Deucher <alexander.deucher@amd.com>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>,
 =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com>
 <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
 <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
 <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com>
 <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local>
 <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
 <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>
From: Thomas Zimmermann <tzimmermann@suse.de>
Message-ID: <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de>
Date: Wed, 7 Oct 2020 15:20:27 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="0t1cYEjuOo3ZVuYcbyfHXuvT4PSflGI9R"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--0t1cYEjuOo3ZVuYcbyfHXuvT4PSflGI9R
Content-Type: multipart/mixed; boundary="savrqrLWkICrB7uZmjWpjSJFgaQuM1qd4";
 protected-headers="v1"
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: Luben Tuikov <luben.tuikov@amd.com>, Dave Airlie <airlied@linux.ie>,
 Nouveau Dev <nouveau@lists.freedesktop.org>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Melissa Wen <melissa.srw@gmail.com>, Huang Rui <ray.huang@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Sam Ravnborg <sam@ravnborg.org>,
 Emil Velikov <emil.velikov@collabora.com>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Krzysztof Kozlowski <krzk@kernel.org>, Steven Price <steven.price@arm.com>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 Kukjin Kim <kgene@kernel.org>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 Hans de Goede <hdegoede@redhat.com>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 Sean Paul <sean@poorly.run>, apaneers@amd.com,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>,
 Kyungmin Park <kyungmin.park@samsung.com>,
 Qinglang Miao <miaoqinglang@huawei.com>, Qiang Yu <yuq825@gmail.com>,
 Alex Deucher <alexander.deucher@amd.com>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>,
 =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de>
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com>
 <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
 <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
 <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com>
 <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local>
 <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
 <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>
In-Reply-To: <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>

--savrqrLWkICrB7uZmjWpjSJFgaQuM1qd4
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

Hi

On 07.10.20 at 15:10, Daniel Vetter wrote:
> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>>
>> Hi
>>
>> On 02.10.20 at 11:58, Daniel Vetter wrote:
>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>>>> <christian.koenig@amd.com> wrote:
>>>>>
>>>>> On 30.09.20 at 11:47, Daniel Vetter wrote:
>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
>>>>>>>> Hi
>>>>>>>>
>>>>>>>> On 30.09.20 at 10:05, Christian König wrote:
>>>>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
>>>>>>>>>> Hi Christian
>>>>>>>>>>
>>>>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
>>>>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location
>>>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>>>> well.
>>>>>>>>>>>
>>>>>>>>>>> In addition to that, which driver is going to use this?
>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>>>> retrieve the pointer via this function.
>>>>>>>>>>
>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>>>> I should have asked which driver you try to fix here :)
>>>>>>>>>
>>>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>>>
>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>>>> Ah, ok can we have that then only in the VRAM helpers?
>>>>>>>
>>>>>>> Alternatively, you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>>>
>>>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of
>>>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>>>> conversions seems fairly unavoidable to me.
>>>>>
>>>>> Fair enough. I would just have started bottom up and not top down.
>>>>>
>>>>> Anyway feel free to go ahead with this approach as long as we can remove
>>>>> the new function again when we clean that stuff up for good.
>>>>
>>>> Yeah I guess bottom up would make more sense as a refactoring. But the
>>>> main motivation to land this here is to fix the __mmio vs normal
>>>> memory confusion in the fbdev emulation helpers for sparc (and
>>>> anything else that needs this). Hence the top down approach for
>>>> rolling this out.
>>>
>>> Ok, I started reviewing this a bit more in-depth, and I think this is a bit
>>> too much of a detour.
>>>
>>> Looking through all the callers of ttm_bo_kmap, almost everyone maps the
>>> entire object. Only vmwgfx maps less than that. Also, everyone just
>>> immediately follows up with converting that full object map into a
>>> pointer.
>>>
>>> So I think what we really want here is:
>>> - new function
>>>
>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>
>>>   _vmap name since that's consistent with both dma_buf functions and
>>>   what's usually used to implement this. Outside of the ttm world, kmap
>>>   usually just means single-page mappings using kmap() or its iomem
>>>   sibling io_mapping_map*, so it's a rather confusing name for a function
>>>   which usually is just used to set up a vmap of the entire buffer.
>>>
>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>>>   functions for all ttm drivers. We should be able to make this fully
>>>   generic because a) we now have dma_buf_map and b) drm_gem_object is
>>>   embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>>>   and gem driver.
>>>
>>>   This is maybe a good follow-up, since it should allow us to ditch quite
>>>   a bit of the vram helper code for this more generic stuff. I also might
>>>   have missed some special cases here, but from a quick look everything
>>>   just pins the buffer to the current location and that's it.
>>>
>>>   Also this obviously requires Christian's generic ttm_bo_pin rework
>>>   first.
>>>
>>> - roll the above out to drivers.
>>>
>>> Christian/Thomas, thoughts on this?
>>
>> I agree on the goals, but what is the immediate objective here?
>>
>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
>> more internal state than struct dma_buf_map, so they are not easily
>> convertible either. What you propose seems to require a reimplementation
>> of the existing ttm_bo_kmap() code. That is its own patch series.
>>
>> I'd rather go with some variant of the existing patch and add
>> ttm_bo_vmap() in a follow-up.
>
> ttm_bo_vmap would simply wrap what you currently open-code as
> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
> be a much later step. Why do you think adding ttm_bo_vmap is not
> possible?

The calls to ttm_bo_kmap/_kunmap() require an instance of struct
ttm_bo_kmap_obj that is stored in each driver's private bo structure
(e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made
patch 3, I flirted with the idea of unifying the driver's _vmap code in
a shared helper, but I couldn't find a simple way of doing it. That's
why it's open-coded in the first place.

Best regards
Thomas

> -Daniel
>
>
>> Best regards
>> Thomas
>>
>>>
>>> I think for the immediate need of rolling this out for vram helpers and
>>> fbdev code we should be able to do this, but just postpone the driver-wide
>>> roll-out for now.
>>>
>>> Cheers, Daniel
>>>
>>>> -Daniel
>>>>
>>>>>
>>>>> Christian.
>>>>>
>>>>>> -Daniel
>>>>>>
>>>>>>> Thanks,
>>>>>>> Christian.
>>>>>>>
>>>>>>>> Best regards
>>>>>>>> Thomas
>>>>>>>>
>>>>>>>>> Regards,
>>>>>>>>> Christian.
>>>>>>>>>
>>>>>>>>>> Best regards
>>>>>>>>>> Thomas
>>>>>>>>>>
>>>>>>>>>>> Regards,
>>>>>>>>>>> Christian.
>>>>>>>>>>>
>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>>>> ---
>>>>>>>>>>>>     include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>>>     include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>>>     2 files changed, 44 insertions(+)
>>>>>>>>>>>>
>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>>>     #include <drm/drm_gem.h>
>>>>>>>>>>>>     #include <drm/drm_hashtab.h>
>>>>>>>>>>>>     #include <drm/drm_vma_manager.h>
>>>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>>>     #include <linux/kref.h>
>>>>>>>>>>>>     #include <linux/list.h>
>>>>>>>>>>>>     #include <linux/wait.h>
>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>>>         return map->virtual;
>>>>>>>>>>>>     }
>>>>>>>>>>>>     +/**
>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>>>> + */
>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>>>> *kmap,
>>>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>>>> +
>>>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>>>> +    else
>>>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>>     /**
>>>>>>>>>>>>      * ttm_bo_kmap
>>>>>>>>>>>>      *
>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>>>      *
>>>>>>>>>>>>      *    dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
>>>>>>>>>>>>      *
>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>>>> + *
>>>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
>>>>>>>>>>>> + *
>>>>>>>>>>>>      * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>>>      * dma_buf_map_is_null().
>>>>>>>>>>>>      *
>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>>>         map->is_iomem = false;
>>>>>>>>>>>>     }
>>>>>>>>>>>>     +/**
>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>>>> an address in I/O memory
>>>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>>>> + *
>>>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>>>> + */
>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>>>> +{
>>>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>>>> +}
>>>>>>>>>>>> +
>>>>>>>>>>>>     /**
>>>>>>>>>>>>      * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>>>> for equality
>>>>>>>>>>>>      * @lhs:    The dma-buf mapping structure
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> dri-devel mailing list
>>>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>
>>>>
>>>>
>>>> --
>>>> Daniel Vetter
>>>> Software Engineer, Intel Corporation
>>>> http://blog.ffwll.ch
>>>
>>
>> --
>> Thomas Zimmermann
>> Graphics Driver Developer
>> SUSE Software Solutions Germany GmbH
>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>> (HRB 36809, AG Nürnberg)
>> Geschäftsführer: Felix Imendörffer
>>
>
>

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


--savrqrLWkICrB7uZmjWpjSJFgaQuM1qd4--

--0t1cYEjuOo3ZVuYcbyfHXuvT4PSflGI9R
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQFIBAEBCAAyFiEEchf7rIzpz2NEoWjlaA3BHVMLeiMFAl99wJsUHHR6aW1tZXJt
YW5uQHN1c2UuZGUACgkQaA3BHVMLeiP16gf+MAiAf+Uq4sRUXeFXHiI+JaHLKESL
G19zoVsU0zvLfJzfn2OBUKogUiHNPcV/nSwAC2PVWC0F1TcMt5Do32z3eR+weU+A
qWqK+roLoPOUK3HQaEvCqJ+6wy/We6m6ZEEhSMpEkmWd88by218Api5shkZJdfDW
+59khIgd+QAkgWmyb2HBFnlppmF9jOmGFhPLxzC6UPXdpnB+sytnBFfjIGujR1oK
BarNzdsdqJQQf4AWdVraC4GzVIpOS+2mbKyWiFJ9qE9OHtzgbmbNJbE2F3ymff+u
WTZx5dwK7/Ed06WYgCyI5m0lVlAy3HVLkKrXJKvG/snJ8KK/b3IgQlqtKg==
=UOQ/
-----END PGP SIGNATURE-----

--0t1cYEjuOo3ZVuYcbyfHXuvT4PSflGI9R--


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:25:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:25:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3506.10042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9RH-0006pP-1O; Wed, 07 Oct 2020 13:25:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3506.10042; Wed, 07 Oct 2020 13:25:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9RG-0006pI-Ue; Wed, 07 Oct 2020 13:25:02 +0000
Received: by outflank-mailman (input) for mailman id 3506;
 Wed, 07 Oct 2020 13:25:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ax2y=DO=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kQ9RF-0006pD-J9
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:25:01 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7eae::60f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a64df62-0cc0-41ad-917f-f648e616dd50;
 Wed, 07 Oct 2020 13:24:59 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by MN2PR12MB4110.namprd12.prod.outlook.com (2603:10b6:208:1dd::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Wed, 7 Oct
 2020 13:24:53 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3455.022; Wed, 7 Oct 2020
 13:24:53 +0000
Received: from [IPv6:2a02:908:1252:fb60:be8a:bd56:1f94:86e7]
 (2a02:908:1252:fb60:be8a:bd56:1f94:86e7) by
 AM3PR07CA0108.eurprd07.prod.outlook.com (2603:10a6:207:7::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.11 via Frontend Transport; Wed, 7 Oct 2020 13:24:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ax2y=DO=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
	id 1kQ9RF-0006pD-J9
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:25:01 +0000
X-Inumbo-ID: 7a64df62-0cc0-41ad-917f-f648e616dd50
Received: from NAM11-BN8-obe.outbound.protection.outlook.com (unknown [2a01:111:f400:7eae::60f])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7a64df62-0cc0-41ad-917f-f648e616dd50;
	Wed, 07 Oct 2020 13:24:59 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T2WoN8X5+RD1vbM77hPkD0vMdasG6qeayZFEZPlUfPxOHHD7lFSM7Mg0OvDQjTtLQ1oTe8eoxCg8QlJmjhnn5iA4fSE9VQ7l3PEoT1Q8gJ73u6G1d5MdyHX1NdHuAtepebLx6kcYz/kUz8DCcTRZlXuFaVz7acr+Jt8woRwdKSCwj7eoMnKChCoCwn+qbMBrqzo/pzw0Dl4wPhi2vD39eS7gJ3I25JEQVxK0JR08pj5gynFxWwdFveUPSOrPbe7pWCzB3/RFBPBDyUF/go4OA8ZTDr2/6tWKXmmBwtLj9TcyZ+Km4lhkIYhAGGONOFwOKVkJt3B7OL1pSJHllRunpQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wWI3zJwa2ezYvNP8JGJlZCNI2a3YOk5cUf0/UOiaX54=;
 b=Oew9gstSZQFMRh91zzeBC171yKaDBdvb9vnkbD10W7WYF5XIsiRXrOrp6Qdc5i0Kndks/iZYi9b6vHIWq8hBXroM2bhD96+wAtxs913fzXGHWHqd//noenXt5LJMzi4/pl3l8wGUL8Q3bepUedMbfM0U7WFmvmo5qiJ1UYU8Cy2EfHDvvGpImNB/DMUHA8WCRUZW4dmMZ3Yj4ttID61QXpSoRgFSRv7vYrGF4dxyS7RHSsESLcS3kifTwsINWp0BjydVPaPONxfDYMlf6Z5xL3vVgDvGlyJYqFk0D4CrWgbaur+vOO43snJVoIP9fJ2VXDbuJOfKrwep7TqbaJMq5g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amdcloud.onmicrosoft.com; s=selector2-amdcloud-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wWI3zJwa2ezYvNP8JGJlZCNI2a3YOk5cUf0/UOiaX54=;
 b=qJshi9DWsqyFEwVtrtd1DFVBURAq2676pJBh8hwucv3AKKa+L+zIbHZKT1Ek+F57ZjrPnKFb2APy4QgqoHy26NGUjajfZSrRpzoA3pdRUBmneXtbVUm0hNtjMda2o6qgB7sbvi+uqxpRFkmoGbZe3R3fzF+2HVr2YDbCPtYuFXc=
Authentication-Results: vger.kernel.org; dkim=none (message not signed)
 header.d=none;vger.kernel.org; dmarc=none action=none header.from=amd.com;
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by MN2PR12MB4110.namprd12.prod.outlook.com (2603:10b6:208:1dd::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Wed, 7 Oct
 2020 13:24:53 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3455.022; Wed, 7 Oct 2020
 13:24:53 +0000
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
To: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter <daniel@ffwll.ch>
Cc: Luben Tuikov <luben.tuikov@amd.com>, Dave Airlie <airlied@linux.ie>,
 Nouveau Dev <nouveau@lists.freedesktop.org>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Melissa Wen <melissa.srw@gmail.com>, Huang Rui <ray.huang@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Sam Ravnborg <sam@ravnborg.org>,
 Emil Velikov <emil.velikov@collabora.com>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Krzysztof Kozlowski <krzk@kernel.org>, Steven Price <steven.price@arm.com>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 Kukjin Kim <kgene@kernel.org>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 Hans de Goede <hdegoede@redhat.com>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 Sean Paul <sean@poorly.run>, apaneers@amd.com,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>,
 Kyungmin Park <kyungmin.park@samsung.com>,
 Qinglang Miao <miaoqinglang@huawei.com>, Qiang Yu <yuq825@gmail.com>,
 Alex Deucher <alexander.deucher@amd.com>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com>
 <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
 <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
 <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com>
 <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local>
 <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
 <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>
 <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com>
Date: Wed, 7 Oct 2020 15:24:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [2a02:908:1252:fb60:be8a:bd56:1f94:86e7]
X-ClientProxiedBy: AM3PR07CA0108.eurprd07.prod.outlook.com
 (2603:10a6:207:7::18) To MN2PR12MB3775.namprd12.prod.outlook.com
 (2603:10b6:208:159::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [IPv6:2a02:908:1252:fb60:be8a:bd56:1f94:86e7] (2a02:908:1252:fb60:be8a:bd56:1f94:86e7) by AM3PR07CA0108.eurprd07.prod.outlook.com (2603:10a6:207:7::18) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.11 via Frontend Transport; Wed, 7 Oct 2020 13:24:48 +0000
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f7c1e972-30c6-46d3-ad81-08d86ac45ff5
X-MS-TrafficTypeDiagnostic: MN2PR12MB4110:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<MN2PR12MB41108A13416BDF765190124E830A0@MN2PR12MB4110.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PNXbOPcM1rz71/hjPT5YJfdFGjniOIqMDn1zzWpNlVpCEgin/iQ4YoDO9Sw1uAeB8UPOJZePYABJrLH/NRYZnOHZn0NwmdjgEmzooStQgdk9XwUg2sp0rPP3Sa1mUNpSx+CS6YVqsaIdFDbFDseDd27toKEI1nVBimPygXzg87BOnuSaP9bzHgLb3WZmknsX3YLQjgJr/fhfFvZQSIXdCsaXjhdO+6yaNflRcFXNM88a9w514NTu8upiq0BW7ZPB942IciRWsH0g45AG5M0eFqf0hRJ8Z58mSwMVrXrX005d96Q5+Tcx3ahW1iNJhkAFHDYI8+cz71Lb8PbK5LIJwxO7d8VWQhFQkGtNpl6bCW0EdIuF2lA6xeMTBPhGFiAdEQ4TR8M3LJcxXkUN5oKJ5kq/FySYKTxAz8u7PtzcvnuFcl92PhhjwoiFaB8vx/ZE8bOeld9uo3lQ27OdbOyTlw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MN2PR12MB3775.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(346002)(39850400004)(376002)(396003)(31696002)(66556008)(6486002)(66574015)(66476007)(66946007)(83380400001)(110136005)(54906003)(36756003)(31686004)(316002)(53546011)(966005)(7406005)(7416002)(4326008)(83080400001)(86362001)(52116002)(8936002)(2906002)(8676002)(478600001)(30864003)(45080400002)(5660300002)(6666004)(2616005)(16526019)(186003)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	FY0lIHvIykfmcEYSA048hzn7cbrKJVZ+798v2+X2FHrAch//tOBywkS6Nh88SrXrzxAfAd7SLnozk1rNNMn5F0vxxDJLUiMzZ16si3TrOFpYHvEBFvALgkLmNoL8U/jRttHaOCnzt2qa8QUVeFLcjewMaktRs0eXHQlyrHawrl5nl5pDRjHvQhKLJm2zAl5Ue0RnhCSH7aSqLy7myAKkNtmsrMh4ad7qqYb7Cz1/hSugABrvKobzMgG6m2TRUZLyabBn/e7rxwvX10TnY10NYPd90VF2DZ/D0m5lQOYq7ikD1xIt0L2Pf17g3JGep8SSRdROu79VwZLNTV11DdLOeu/a6ffP5pRQbbbfMgOkFZWre3f8xhnD2MEBVp8vngGGDOl8eVM5f2KtYtFeiZog+1vnOFV6E4YOy9PbmK9u28kfnZ1DAvb7EhBCEmdKvbH7ZZIFFQxBbZeJ74bYxu8AP+EvqcwocxhB0R6N+SLQ4keH68YOV1QrbLp9OuVExYOnCvQdqz788advBKYLBI/bcorXPIgm3x4lA3tS0i4R3A0fhAcPRPT559bI+SfFjyHK3zEOv64AM8zAlEG49b+zaMetDzvHqPmzXRZS4JRhcLSQG5hKapKGA4HV60G6zYWfYzvykowlOtWtS6VnVbXesYj20l4TodyqKqfITCCNsnwPFPOjq6iwylxeBFRWxUR7LhYGMYMHjyg5Fsd8eFy57g==
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7c1e972-30c6-46d3-ad81-08d86ac45ff5
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Oct 2020 13:24:53.3383
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NGdflGHJrwI1vLmoyVOY2z80IsatzH8y4pl1qpRi/s4Xsa6YOaoPZHfEM78mqTN/
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4110

On 07.10.20 at 15:20, Thomas Zimmermann wrote:
> Hi
>
> On 07.10.20 at 15:10, Daniel Vetter wrote:
>> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>>> Hi
>>>
>>> On 02.10.20 at 11:58, Daniel Vetter wrote:
>>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>>>>> <christian.koenig@amd.com> wrote:
>>>>>> On 30.09.20 at 11:47, Daniel Vetter wrote:
>>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> On 30.09.20 at 10:05, Christian König wrote:
>>>>>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
>>>>>>>>>>> Hi Christian
>>>>>>>>>>>
>>>>>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
>>>>>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
>>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf_map() extracts address and location
>>>>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>>>>> well.
>>>>>>>>>>>>
>>>>>>>>>>>> In addition to that, which driver is going to use this?
>>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>>>>> retrieve the pointer via this function.
>>>>>>>>>>>
>>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>>>>> I should have asked which driver you are trying to fix here :)
>>>>>>>>>>
>>>>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>>>>
>>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>>>>> Ah, OK, can we then have that only in the VRAM helpers?
>>>>>>>>
>>>>>>>> Alternatively, you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>>>>
>>>>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>>>>> Hm, I'm not really seeing how that helps with a gradual conversion of
>>>>>>> everything over to dma_buf_map and assorted access helpers. There are
>>>>>>> too many places in ttm drivers where is_iomem and related state is used
>>>>>>> for it all to be converted in one go. An intermediate state with a bunch
>>>>>>> of conversions seems fairly unavoidable to me.
>>>>>> Fair enough. I would just have started bottom up and not top down.
>>>>>>
>>>>>> Anyway feel free to go ahead with this approach as long as we can remove
>>>>>> the new function again when we clean that stuff up for good.
>>>>> Yeah I guess bottom up would make more sense as a refactoring. But the
>>>>> main motivation to land this here is to fix the __mmio vs normal
>>>>> memory confusion in the fbdev emulation helpers for sparc (and
>>>>> anything else that needs this). Hence the top down approach for
>>>>> rolling this out.
>>>> Ok, I started reviewing this a bit more in-depth, and I think this is a bit
>>>> too much of a detour.
>>>>
>>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the
>>>> entire object. Only vmwgfx maps less than that. Also, everyone just
>>>> immediately follows up with converting that full object map into a
>>>> pointer.
>>>>
>>>> So I think what we really want here is:
>>>> - new function
>>>>
>>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>>
>>>>    _vmap name since that's consistent with both dma_buf functions and
>>>>    what's usually used to implement this. Outside of the ttm world, kmap
>>>>    usually just means single-page mappings using kmap() or its iomem
>>>>    sibling io_mapping_map*, so it's a rather confusing name for a function
>>>>    which usually is just used to set up a vmap of the entire buffer.
>>>>
>>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>>>>    functions for all ttm drivers. We should be able to make this fully
>>>>    generic because a) we now have dma_buf_map and b) drm_gem_object is
>>>>    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>>>>    and gem driver.
>>>>
>>>>    This is maybe a good follow-up, since it should allow us to ditch quite
>>>>    a bit of the vram helper code for this more generic stuff. I also might
>>>>    have missed some special-cases here, but from a quick look everything
>>>>    just pins the buffer to the current location and that's it.
>>>>
>>>>    Also this obviously requires Christian's generic ttm_bo_pin rework
>>>>    first.
>>>>
>>>> - roll the above out to drivers.
>>>>
>>>> Christian/Thomas, thoughts on this?
>>> I agree on the goals, but what is the immediate objective here?
>>>
>>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
>>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
>>> more internal state than struct dma_buf_map, so they are not easily
>>> convertible either. What you propose seems to require a reimplementation
>>> of the existing ttm_bo_kmap() code. That is its own patch series.
>>>
>>> I'd rather go with some variant of the existing patch and add
>>> ttm_bo_vmap() in a follow-up.
>> ttm_bo_vmap would simply wrap what you currently open-code as
>> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
>> be a much later step. Why do you think adding ttm_bo_vmap is not
>> possible?
> The calls to ttm_bo_kmap/_kunmap() require an instance of struct
> ttm_bo_kmap_obj that is stored in each driver's private bo structure
> (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made
> patch 3, I flirted with the idea of unifying the driver's _vmap code in
> a shared helper, but I couldn't find a simple way of doing it. That's
> why it's open-coded in the first place.

Well, that kind of makes sense. Keep in mind that ttm_bo_kmap is
currently way too complicated.

Christian.

>
> Best regards
> Thomas
>
>> -Daniel
>>
>>
>>> Best regards
>>> Thomas
>>>
>>>> I think for the immediate need of rolling this out for vram helpers and
>>>> fbdev code we should be able to do this, but just postpone the driver-wide
>>>> roll-out for now.
>>>>
>>>> Cheers, Daniel
>>>>
>>>>> -Daniel
>>>>>
>>>>>> Christian.
>>>>>>
>>>>>>> -Daniel
>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Christian.
>>>>>>>>
>>>>>>>>> Best regards
>>>>>>>>> Thomas
>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>> Christian.
>>>>>>>>>>
>>>>>>>>>>> Best regards
>>>>>>>>>>> Thomas
>>>>>>>>>>>
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Christian.
>>>>>>>>>>>>
>>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>>>>      2 files changed, 44 insertions(+)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>>>>      #include <drm/drm_gem.h>
>>>>>>>>>>>>>      #include <drm/drm_hashtab.h>
>>>>>>>>>>>>>      #include <drm/drm_vma_manager.h>
>>>>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>>>>      #include <linux/kref.h>
>>>>>>>>>>>>>      #include <linux/list.h>
>>>>>>>>>>>>>      #include <linux/wait.h>
>>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>>>>          return map->virtual;
>>>>>>>>>>>>>      }
>>>>>>>>>>>>>      +/**
>>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>>>>> + */
>>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>>>>> *kmap,
>>>>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>>>>> +    else
>>>>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>>>>> +}
>>>>>>>>>>>>> +
>>>>>>>>>>>>>      /**
>>>>>>>>>>>>>       * ttm_bo_kmap
>>>>>>>>>>>>>       *
>>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>>>>       *
>>>>>>>>>>>>>       *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>>>>>>>>>>       *
>>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>>>>>>>>>>> + *
>>>>>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>>>>       * dma_buf_map_is_null().
>>>>>>>>>>>>>       *
>>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>>>>          map->is_iomem = false;
>>>>>>>>>>>>>      }
>>>>>>>>>>>>>      +/**
>>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>>>>> an address in I/O memory
>>>>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>>>>> + *
>>>>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>>>>> + */
>>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>>>>> +}
>>>>>>>>>>>>> +
>>>>>>>>>>>>>      /**
>>>>>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>>>>> for equality
>>>>>>>>>>>>>       * @lhs:    The dma-buf mapping structure
>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>> dri-devel mailing list
>>>>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&amp;data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&amp;sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&amp;reserved=0
>>>>>>>>>>> _______________________________________________
>>>>>>>>>>> amd-gfx mailing list
>>>>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&amp;data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&amp;sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&amp;reserved=0
>>>>>
>>>>> --
>>>>> Daniel Vetter
>>>>> Software Engineer, Intel Corporation
>>>>> http://blog.ffwll.ch
>>> --
>>> Thomas Zimmermann
>>> Graphics Driver Developer
>>> SUSE Software Solutions Germany GmbH
>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>> (HRB 36809, AG Nürnberg)
>>> Geschäftsführer: Felix Imendörffer
>>>
>>



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:30:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3509.10067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9WQ-0007s5-Us; Wed, 07 Oct 2020 13:30:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3509.10067; Wed, 07 Oct 2020 13:30:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9WQ-0007rx-QJ; Wed, 07 Oct 2020 13:30:22 +0000
Received: by outflank-mailman (input) for mailman id 3509;
 Wed, 07 Oct 2020 13:30:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQ9WO-0007q4-Sg
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:30:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e65c696-701d-43f3-aeca-ad359a51a758;
 Wed, 07 Oct 2020 13:30:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 24A25AC6D;
 Wed,  7 Oct 2020 13:30:14 +0000 (UTC)
X-Inumbo-ID: 5e65c696-701d-43f3-aeca-ad359a51a758
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602077414;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=B547wPgCpWoZcZKQSWOEnN4FxgmeBKDw7JxS9keaVX0=;
	b=hm0W1oNIiSpPYNZ+t0FJ4+RG+ngqa0FrDLozYjkzzoErJ9Ujz/mio2FBFS61/jHXt66AmI
	+8PqVxeilEl4s4N1KOFekgfFZxQmMY0dhoYe7FYxR0T2b6+dSntNTfGne+JDO1OTwpYYul
	Ns+J0mQYiQ9fqhXNWIrhNKsBqxzT3HY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 0/2] xen/x86: implement NMI continuation as softirq
Date: Wed,  7 Oct 2020 15:30:09 +0200
Message-Id: <20201007133011.18871-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the sending of oprofile's virq event to the local vcpu from NMI
context to softirq context.

This has been tested with a small test patch using the continuation
framework of patch 1 for all NMIs and doing a print to console in
the continuation handler.

Version 1 of this small series was sent to the security list before.

Juergen Gross (2):
  xen/x86: add nmi continuation framework
  xen/oprofile: use set_nmi_continuation() for sending virq to guest

 xen/arch/x86/oprofile/nmi_int.c |  9 +++++++-
 xen/arch/x86/traps.c            | 37 +++++++++++++++++++++++++++++++++
 xen/include/asm-x86/nmi.h       |  8 ++++++-
 xen/include/xen/softirq.h       |  5 ++++-
 4 files changed, 56 insertions(+), 3 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:30:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:30:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3508.10054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9WL-0007qG-Kn; Wed, 07 Oct 2020 13:30:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3508.10054; Wed, 07 Oct 2020 13:30:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9WL-0007q9-Hm; Wed, 07 Oct 2020 13:30:17 +0000
Received: by outflank-mailman (input) for mailman id 3508;
 Wed, 07 Oct 2020 13:30:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQ9WJ-0007q4-VW
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:30:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5771a15b-b98b-4cad-8cce-ad1e87e22a25;
 Wed, 07 Oct 2020 13:30:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 757FCB234;
 Wed,  7 Oct 2020 13:30:14 +0000 (UTC)
X-Inumbo-ID: 5771a15b-b98b-4cad-8cce-ad1e87e22a25
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602077414;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A/ul8OKsqutALOib25CsZA6bXQqNHW0GneGeZqHG7q8=;
	b=TEdU2k071jf8kt1UTSAibopkjVoC1AFCBimdUlaguWnZ/OSyCXE/vMqE3ye6u+nlZBzIr3
	EJvc4PEbA7d4Sqakuk2YxQXNpdFyYtT3LrlEiIufiegDeKc0eap+JumxzHIcNhnCajiZt9
	eDofBqBFBltEK+umlJCJWiuwPpjk3v4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] xen/oprofile: use set_nmi_continuation() for sending virq to guest
Date: Wed,  7 Oct 2020 15:30:11 +0200
Message-Id: <20201007133011.18871-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201007133011.18871-1-jgross@suse.com>
References: <20201007133011.18871-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of calling send_guest_vcpu_virq() from NMI context, use the
NMI continuation framework for that purpose.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/oprofile/nmi_int.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/oprofile/nmi_int.c b/xen/arch/x86/oprofile/nmi_int.c
index 0f103d80a6..659e31fe19 100644
--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -83,6 +83,13 @@ void passive_domain_destroy(struct vcpu *v)
 		model->free_msr(v);
 }
 
+static void nmi_oprofile_send_virq(void *par)
+{
+	struct vcpu *v = par;
+
+	send_guest_vcpu_virq(v, VIRQ_XENOPROF);
+}
+
 static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
 {
 	int xen_mode, ovf;
@@ -90,7 +97,7 @@ static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
 	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
 	xen_mode = ring_0(regs);
 	if ( ovf && is_active(current->domain) && !xen_mode )
-		send_guest_vcpu_virq(current, VIRQ_XENOPROF);
+		set_nmi_continuation(nmi_oprofile_send_virq, current);
 
 	if ( ovf == 2 )
 		current->arch.nmi_pending = true;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:30:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3510.10079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9WV-0007vY-Aq; Wed, 07 Oct 2020 13:30:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3510.10079; Wed, 07 Oct 2020 13:30:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9WV-0007vR-6s; Wed, 07 Oct 2020 13:30:27 +0000
Received: by outflank-mailman (input) for mailman id 3510;
 Wed, 07 Oct 2020 13:30:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Gg45=DO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQ9WT-0007q4-Ss
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:30:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 803b2796-2b01-4962-970d-9d3c4ba9ea5f;
 Wed, 07 Oct 2020 13:30:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 541FEAF27;
 Wed,  7 Oct 2020 13:30:14 +0000 (UTC)
X-Inumbo-ID: 803b2796-2b01-4962-970d-9d3c4ba9ea5f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602077414;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3mWN7Jm1cNuFETK+fgXLvgGY+7f39A/ML24lxKYie30=;
	b=sgXW1lw2xTYycx2+CiKHRUPRqfY7jshh3bK5pJhK8uBciUXE/wbknecRq6JpCtiXtDHJmX
	dh8zNvgrMFerWngwxWYFBITJzGkJhL8/k0ReHRoH6wL1RYXr9jwg8PHqQRYRYjaLM4UXCE
	7Q+BVFm/77kfL4Dx1pRlk/mzM2T9YSw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 1/2] xen/x86: add nmi continuation framework
Date: Wed,  7 Oct 2020 15:30:10 +0200
Message-Id: <20201007133011.18871-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201007133011.18871-1-jgross@suse.com>
References: <20201007133011.18871-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Actions in NMI context are rather limited, as e.g. locking is fragile
there.

Add a generic framework to continue processing in softirq context after
leaving NMI processing. This works for NMIs happening in guest context,
as NMI exit handling will issue an IPI to itself in case a softirq is
pending, so the continuation runs before the guest regains control.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/traps.c      | 37 +++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/nmi.h |  8 +++++++-
 xen/include/xen/softirq.h |  5 ++++-
 3 files changed, 48 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index bc5b8f8ea3..f433fe5acb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1799,6 +1799,42 @@ void unset_nmi_callback(void)
     nmi_callback = dummy_nmi_callback;
 }
 
+static DEFINE_PER_CPU(void (*)(void *), nmi_cont_func);
+static DEFINE_PER_CPU(void *, nmi_cont_par);
+
+static void nmi_cont_softirq(void)
+{
+    unsigned int cpu = smp_processor_id();
+    void (*func)(void *par) = per_cpu(nmi_cont_func, cpu);
+    void *par = per_cpu(nmi_cont_par, cpu);
+
+    /* Reads must be done before following write (local cpu ordering only). */
+    barrier();
+
+    per_cpu(nmi_cont_func, cpu) = NULL;
+
+    if ( func )
+        func(par);
+}
+
+int set_nmi_continuation(void (*func)(void *par), void *par)
+{
+    unsigned int cpu = smp_processor_id();
+
+    if ( per_cpu(nmi_cont_func, cpu) )
+    {
+        printk("Trying to set NMI continuation while still one active!\n");
+        return -EBUSY;
+    }
+
+    per_cpu(nmi_cont_func, cpu) = func;
+    per_cpu(nmi_cont_par, cpu) = par;
+
+    raise_softirq(NMI_CONT_SOFTIRQ);
+
+    return 0;
+}
+
 void do_device_not_available(struct cpu_user_regs *regs)
 {
 #ifdef CONFIG_PV
@@ -2132,6 +2168,7 @@ void __init trap_init(void)
 
     cpu_init();
 
+    open_softirq(NMI_CONT_SOFTIRQ, nmi_cont_softirq);
     open_softirq(PCI_SERR_SOFTIRQ, pci_serr_softirq);
 }
 
diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
index a288f02a50..da40fb6599 100644
--- a/xen/include/asm-x86/nmi.h
+++ b/xen/include/asm-x86/nmi.h
@@ -33,5 +33,11 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
 void unset_nmi_callback(void);
 
 DECLARE_PER_CPU(unsigned int, nmi_count);
- 
+
+/**
+ * set_nmi_continuation
+ *
+ * Schedule a function to be started in softirq context after NMI handling.
+ */
+int set_nmi_continuation(void (*func)(void *par), void *par);
 #endif /* ASM_NMI_H */
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index 1f6c4783da..14c744bbf7 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -3,7 +3,10 @@
 
 /* Low-latency softirqs come first in the following list. */
 enum {
-    TIMER_SOFTIRQ = 0,
+#ifdef CONFIG_X86
+    NMI_CONT_SOFTIRQ,
+#endif
+    TIMER_SOFTIRQ,
     RCU_SOFTIRQ,
     SCHED_SLAVE_SOFTIRQ,
     SCHEDULE_SOFTIRQ,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:39:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:39:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3516.10090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9ex-0008Vv-9A; Wed, 07 Oct 2020 13:39:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3516.10090; Wed, 07 Oct 2020 13:39:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9ex-0008Vo-5c; Wed, 07 Oct 2020 13:39:11 +0000
Received: by outflank-mailman (input) for mailman id 3516;
 Wed, 07 Oct 2020 13:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ9ev-0008Vj-RA
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:39:10 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe1f::623])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c5804a8-3a8c-4509-bd6b-467308179d7a;
 Wed, 07 Oct 2020 13:39:07 +0000 (UTC)
Received: from DB6PR0802CA0030.eurprd08.prod.outlook.com (2603:10a6:4:a3::16)
 by DB6PR0801MB1718.eurprd08.prod.outlook.com (2603:10a6:4:2f::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Wed, 7 Oct
 2020 13:39:05 +0000
Received: from DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a3:cafe::22) by DB6PR0802CA0030.outlook.office365.com
 (2603:10a6:4:a3::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.23 via Frontend
 Transport; Wed, 7 Oct 2020 13:39:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT030.mail.protection.outlook.com (10.152.20.144) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Wed, 7 Oct 2020 13:39:05 +0000
Received: ("Tessian outbound 34b830c8a0ef:v64");
 Wed, 07 Oct 2020 13:39:04 +0000
Received: from 14f441f6c012.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C9942248-C7EB-455B-BFC3-EEACB07D8A39.1; 
 Wed, 07 Oct 2020 13:38:37 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 14f441f6c012.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 07 Oct 2020 13:38:37 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1799.eurprd08.prod.outlook.com (2603:10a6:4:3a::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Wed, 7 Oct
 2020 13:38:35 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Wed, 7 Oct 2020
 13:38:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kQ9ev-0008Vj-RA
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:39:10 +0000
X-Inumbo-ID: 0c5804a8-3a8c-4509-bd6b-467308179d7a
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown [2a01:111:f400:fe1f::623])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0c5804a8-3a8c-4509-bd6b-467308179d7a;
	Wed, 07 Oct 2020 13:39:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PK77OV8/GB8TglcADKjZCm5195Qv3S+HIUiX4hwY+sg=;
 b=zPa97zxbWDOCkhZRb7fQJD8oBRB3O49XIDsnFdZlnZJUNHcFnSResCCga6xJrUhOECaSEATRd8XrFwgK7S6vt0ItZTfvwMdNj+TJjsTm9d9UvzMlbdANIDkiub0YvcyWHjqLSmbYY2akAnPA51lHfb8DH1skvGNyjQSkHJxCmqc=
Received: from DB6PR0802CA0030.eurprd08.prod.outlook.com (2603:10a6:4:a3::16)
 by DB6PR0801MB1718.eurprd08.prod.outlook.com (2603:10a6:4:2f::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.34; Wed, 7 Oct
 2020 13:39:05 +0000
Received: from DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a3:cafe::22) by DB6PR0802CA0030.outlook.office365.com
 (2603:10a6:4:a3::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.23 via Frontend
 Transport; Wed, 7 Oct 2020 13:39:05 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT030.mail.protection.outlook.com (10.152.20.144) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Wed, 7 Oct 2020 13:39:05 +0000
Received: ("Tessian outbound 34b830c8a0ef:v64"); Wed, 07 Oct 2020 13:39:04 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 065025cdfa2faf93
X-CR-MTA-TID: 64aa7808
Received: from 14f441f6c012.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id C9942248-C7EB-455B-BFC3-EEACB07D8A39.1;
	Wed, 07 Oct 2020 13:38:37 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 14f441f6c012.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 07 Oct 2020 13:38:37 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=It/dmHnwcPXuvLjP2DNIFaY9dmuoOK1OK+GHV+pO4NpfXBcR7kMUyU7gckZTFNr4GXuHX5mj0EnbYO1pVAo3jAiiyhJFzchoI8J+T5WQ/wEQ/XYRMls+5T9s9214zisi+N00KoMaGL8N+FV1mUGtoWn8Oe1jJ4ZQUzpHOpP2Bhp7KoqxkavfexZOoOqPDqxKbyHvv9qyGxebdVKkNy+bMXhLoN1Mtnm3pHjKzi4SEq56HNAf0tdKuDIxPU49FjE3vHKCSJ1OdXRbMv6FahuzDrb0QfhX3dMipBVuKHhe914fhABbTpbqD6+ueE92IBfXsx8HnqMptb/vZ743FBkdog==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PK77OV8/GB8TglcADKjZCm5195Qv3S+HIUiX4hwY+sg=;
 b=V5kyYx+X7/pP3KuR8nUibta5rjPJuCUyy5fnqQbmLcq8ZELocOiDqUrtuMFfiFjq+gKM8cTdmGXaJSBWth/ohLXwwMf1Uh7xqeLPXKh6QBPk+Cl7/bVSoPWpEzW6wsBmPqbIKxiRJ7da+30YSEKew4KAUctzNjQoycsBxIe+8uKjZ7KebMNigSFVS1YDffcx1589UyaEwrbhqLKeiQt4qMkoXXNySglDcAOZnCFQBEDALxpq4YCvTx8bA35pPAizHNsU3544R7LgKHmrV5YUfFGrTC04yM37K+JdE/SYcd1zNWgyf0z0oH6f+Vaq/MLjxpsgr9uc/sKYMDQTjzWuvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=PK77OV8/GB8TglcADKjZCm5195Qv3S+HIUiX4hwY+sg=;
 b=zPa97zxbWDOCkhZRb7fQJD8oBRB3O49XIDsnFdZlnZJUNHcFnSResCCga6xJrUhOECaSEATRd8XrFwgK7S6vt0ItZTfvwMdNj+TJjsTm9d9UvzMlbdANIDkiub0YvcyWHjqLSmbYY2akAnPA51lHfb8DH1skvGNyjQSkHJxCmqc=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1799.eurprd08.prod.outlook.com (2603:10a6:4:3a::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Wed, 7 Oct
 2020 13:38:35 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Wed, 7 Oct 2020
 13:38:35 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: print update firmware only once
Thread-Topic: [PATCH] xen/arm: print update firmware only once
Thread-Index: AQHWnJnyoImxa+IrX0SG4dJuPztv2amMGmgAgAAKsQA=
Date: Wed, 7 Oct 2020 13:38:34 +0000
Message-ID: <ED56185C-10BA-4A33-9273-8DBD68900AAB@arm.com>
References:
 <09d04b34e6b3b77ac206a42657b1b4116e7e11f3.1602068661.git.bertrand.marquis@arm.com>
 <4b3135ff-4795-e189-0430-da5627419e4e@xen.org>
In-Reply-To: <4b3135ff-4795-e189-0430-da5627419e4e@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 833f6051-dbc3-4ea6-7927-08d86ac65bdb
x-ms-traffictypediagnostic: DB6PR0801MB1799:|DB6PR0801MB1718:
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB1718A85139C60A83858D73AB9D0A0@DB6PR0801MB1718.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 AUE5o99U5ZoDixyfuo5NCkTywmKlTf0AJ4wRxtPtMUjde6OagGlzj+2eoMBRGBlkcSITZYGyuzt82Ry6CZzmImhm/8ZBzAFJjJjjQoZ3dJD7w01pl/RqLaBvaTpeHqbiZi5ZneJWrgr7Sc/+e9+K1fDUscgByCLyzP4dMWff+cdu3zXmYljZzLEMweUi3UlyyffygJDFuKJQLfxfw++Nw6Eg3ih7rasN7Fzxhl0Hing+0p0gT0ug9wT0P1xY5PzEv67gOJhPWjFMAiSqgbsEMbcZVCX+ZXcARu5GblyxWb7NlJB2z2+IbxlvGsEYzHe1
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(346002)(376002)(366004)(136003)(396003)(26005)(186003)(71200400001)(316002)(2616005)(91956017)(54906003)(5660300002)(2906002)(76116006)(83380400001)(6506007)(66446008)(64756008)(66476007)(53546011)(66946007)(66556008)(8936002)(478600001)(86362001)(36756003)(8676002)(6916009)(6512007)(15650500001)(33656002)(6486002)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 Ah619DZ42B8asoykGnlfiI6lHoUQ6ckBAHyDmwLehDVONWlrTFwQ8DV1oNiB7lRD/q48BgIx68O68UNQCxisiDqXLf+eFUcLe/XmRXhQQ6SdsjQ1NakAxZXsbxdwz2oN12iFKUint1AgkErBanJkN6w3xnzx0kpRjaflFzVMqv7+UMGgokqd4OjaGvkirWmialg0f3NMTwE+b37FRYFBvluf/3O0SzEw47ea4o79fMdIdoVK2oxEIsxTMGyav1jisylH4JTZfA3e+oXgcDThCsrUd5chsUB/4dJE1nwLqd7dfyZcdnUr15BBlqzuIgYGrsJx6vkeMmOsxvaMVPgltWX2ajCLKHZU2sMlvTvJprL4+pO+q7nHMc490d1gjH8dalSCf67Yh3DZuSUG1enwAgRa5PwzvIidoJlcIzoHzYPLPZwXkvJVVjbxMVrJv/HZ4NoLbwMF+O6O5uQOgSBoVyej8Mn2DBDJNllRvEcZrxcP2Pj0ZNINEcCb1sxjB0nAQkyhsyUIhVAPlh489TdchpHL8MTzrGBf9zdF5M5HpLD7kVHPw1AnTzOEwuMIIQL5/hD4Yzw2EsrofMeqWNOGep7ZDXDu4vSdcgGHSnKxULeMoNWe4X9U841q9OwnAZ86hTBZbE8HHhuLqj9WYqbjuQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C0C7928BC2C2D94EA5FC4BA61F50E475@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1799
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0f27e7ae-38a5-46a2-d464-08d86ac64a11
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uW2YUZ3OFP0i7Mtpp2J5QdYzaskz/WKbejhIoPX9IXB1PriGg0Lj3G8QvMTFrcVLcyetymMhybBdv+7bXLu6z0Gu7SemFOGbTeDCOgYTXxgtK2f3L3XtU0M8uMNHpRFutg7BgSO1JD9zPxJfiSX67u+pWJWCFkNeRr0e6m3xg17tM+hjv67SFnncP1LCD/5l3vkOx6nms7RUqhJI1BUC8MNWygZxDz6vMci2QBgb7cgmCaV2zW1WBsFTFTgOmI+lWntTqetS/ROVs7NdBfytNO8/+p5y+c+zW5zbvV/nkXVrQ0+kITQqRoGoEJeFkKT/Vwr3/jzVQa3ExlJGnaGz+/nzgJthtC5hOKeLoz20Om+WqXkFmJMJJ0dBH0Q/UB3nT9ep0Ug+rbU2oZC/CQzvCA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(376002)(39860400002)(136003)(46966005)(6486002)(2616005)(186003)(107886003)(8676002)(6512007)(82310400003)(336012)(316002)(4326008)(6862004)(8936002)(54906003)(53546011)(26005)(6506007)(33656002)(83380400001)(47076004)(478600001)(70206006)(70586007)(81166007)(86362001)(5660300002)(2906002)(356005)(36756003)(15650500001)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Oct 2020 13:39:05.0034
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 833f6051-dbc3-4ea6-7927-08d86ac65bdb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1718



> On 7 Oct 2020, at 14:00, Julien Grall <julien@xen.org> wrote:
> 
> Hi Bertrand,
> 
> On 07/10/2020 12:05, Bertrand Marquis wrote:
>> Fix enable_smccc_arch_workaround_1 to only print the warning asking to
>> update the firmware once.
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>  xen/arch/arm/cpuerrata.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 6c09017515..0c63dfa779 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -187,7 +187,7 @@ warn:
>>          ASSERT(system_state < SYS_STATE_active);
>>          warning_add("No support for ARM_SMCCC_ARCH_WORKAROUND_1.\n"
>>                      "Please update your firmware.\n");
>> -        warned = false;
>> +        warned = true;
> 
> Thanks for spotting it. It looks like I introduced this bug in commit 976319fa3de7f98b558c87b350699fffc278effc "xen/arm64: Kill PSCI_GET_VERSION as a variant-2 workaround".
> 
> I would suggest adding a Fixes tag (can be done on commit).

If you can do it during the commit, that works for me.

> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks

Cheers
Bertrand
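[Editor's note: a minimal standalone sketch of the warn-once pattern this patch restores, not the actual Xen code; warn_firmware_once() and its int return convention are invented for illustration. The point of the fix is that the static flag must be set to true after printing, otherwise the warning fires on every CPU that takes this path.]

```c
#include <stdbool.h>
#include <stdio.h>

static bool warned;

/* Returns 1 if the warning was printed on this call, 0 if suppressed. */
static int warn_firmware_once(void)
{
    int printed = 0;

    if ( !warned )
    {
        printf("No support for ARM_SMCCC_ARCH_WORKAROUND_1.\n"
               "Please update your firmware.\n");
        warned = true;   /* the bug was assigning false here */
        printed = 1;
    }

    return printed;
}
```

With `warned = false` (the bug), the condition is true on every invocation and the message repeats; with `warned = true`, only the first call prints.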



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:54:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:54:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3527.10103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9tU-0001va-DU; Wed, 07 Oct 2020 13:54:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3527.10103; Wed, 07 Oct 2020 13:54:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9tU-0001vT-AY; Wed, 07 Oct 2020 13:54:12 +0000
Received: by outflank-mailman (input) for mailman id 3527;
 Wed, 07 Oct 2020 13:54:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mlZt=DO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQ9tT-0001vO-Fd
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:54:11 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca9907ca-3338-4074-b033-8dbbbaa1233d;
 Wed, 07 Oct 2020 13:54:10 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id e2so2481975wme.1
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 06:54:10 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id j101sm3474366wrj.9.2020.10.07.06.54.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 07 Oct 2020 06:54:08 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mlZt=DO=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kQ9tT-0001vO-Fd
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:54:11 +0000
X-Inumbo-ID: ca9907ca-3338-4074-b033-8dbbbaa1233d
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ca9907ca-3338-4074-b033-8dbbbaa1233d;
	Wed, 07 Oct 2020 13:54:10 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id e2so2481975wme.1
        for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 06:54:10 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=Zo/HSRFcS5MbMS7Xkqevw1ZyhIhAdgyZoDCbw04hyjo=;
        b=SynTEO5UEzvRdKZ3oi/M4Of76GN5N7/BlID5em+PTK4vT6xJaODW7fCkemJLEMWAA+
         BXazAW+MMsaUzffBgnAnsv9svBSLb+NwSrn41wgAupRbj86ahCduBYK1IeB3abNE3hmr
         HD28WeXgmZpw06TEG7WBNYW5pkZeVGI0WwREFdt74TdYC0V+lvm5coZ8h+r6dkObczfX
         UhetfffJyc0nI1eKOLD7BpKT5WKhA6onA4x2HKJlCiYVG3uBGpn4QTXNoBdBpPaA6E9o
         us6yemRSLi8NRZQu+ZtVf5dPmNUqopWXfCcP0qgdd7MPfzTR/1cqQrzAD9A5lP6kDXPD
         r0kg==
X-Gm-Message-State: AOAM530mEVR+X2v1k3jBwaSPwZWOANc1q378cNzoOx7+AjGY7Zblvbuh
	JNwvZhPHPbmQMR5tDKMjryg=
X-Google-Smtp-Source: ABdhPJzetaI07kCaLps5qwNTlUiopORDMLlfA9xYY5u3Ze9fSbl8fqsQwUuchVAcvZOm+sCltLpzBw==
X-Received: by 2002:a1c:1dd0:: with SMTP id d199mr3295463wmd.7.1602078849350;
        Wed, 07 Oct 2020 06:54:09 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id j101sm3474366wrj.9.2020.10.07.06.54.08
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 07 Oct 2020 06:54:08 -0700 (PDT)
Date: Wed, 7 Oct 2020 13:54:07 +0000
From: Wei Liu <wl@xen.org>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH 3/5] tools/libs/store: drop read-only functionality
Message-ID: <20201007135407.2hmlwfaeauwrbh5m@liuwe-devbox-debian-v2>
References: <20201002154141.11677-1-jgross@suse.com>
 <20201002154141.11677-4-jgross@suse.com>
 <20201007105448.c7scd5hoellddfwd@liuwe-devbox-debian-v2>
 <d03ef7db-8752-ac00-99f1-6c40f62e1162@suse.com>
 <f4b6a87c-ac65-cb3e-d4b2-4504340b81e3@citrix.com>
 <72542057-f5d8-99d5-55d9-2a21b7cb2d93@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <72542057-f5d8-99d5-55d9-2a21b7cb2d93@suse.com>
User-Agent: NeoMutt/20180716

On Wed, Oct 07, 2020 at 02:45:29PM +0200, Jürgen Groß wrote:
> On 07.10.20 13:50, Andrew Cooper wrote:
> > On 07/10/2020 11:57, Jürgen Groß wrote:
> > > On 07.10.20 12:54, Wei Liu wrote:
> > > > On Fri, Oct 02, 2020 at 05:41:39PM +0200, Juergen Gross wrote:
> > > > > Today it is possible to open the connection in read-only mode via
> > > > > xs_daemon_open_readonly(). This is working only with Xenstore running
> > > > > as a daemon in the same domain as the user. Additionally it doesn't
> > > > > add any security as accessing the socket used for that functionality
> > > > > requires the same privileges as the socket used for full Xenstore
> > > > > access.
> > > > > 
> > > > > So just drop the read-only semantics in all cases, leaving the
> > > > > interface existing only for compatibility reasons. This in turn
> > > > > requires to just ignore the XS_OPEN_READONLY flag.
> > > > > 
> > > > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > > > ---
> > > > >   tools/libs/store/include/xenstore.h | 8 --------
> > > > >   tools/libs/store/xs.c | 7 ++-----
> > > > >   2 files changed, 2 insertions(+), 13 deletions(-)
> > > > > 
> > > > > diff --git a/tools/libs/store/include/xenstore.h
> > > > > b/tools/libs/store/include/xenstore.h
> > > > > index cbc7206a0f..158e69ef83 100644
> > > > > --- a/tools/libs/store/include/xenstore.h
> > > > > +++ b/tools/libs/store/include/xenstore.h
> > > > > @@ -60,15 +60,12 @@ typedef uint32_t xs_transaction_t;
> > > > >   /* Open a connection to the xs daemon.
> > > > >   * Attempts to make a connection over the socket interface,
> > > > >   * and if it fails, then over the xenbus interface.
> > > > > - * Mode 0 specifies read-write access, XS_OPEN_READONLY for
> > > > > - * read-only access.
> > > > >   *
> > > > >   * * Connections made with xs_open(0) (which might be shared page or
> > > > >   * socket based) are only guaranteed to work in the parent after
> > > > >   * fork.
> > > > >   * * xs_daemon_open*() and xs_domain_open() are deprecated synonyms
> > > > >   * for xs_open(0).
> > > > > - * * XS_OPEN_READONLY has no bearing on any of this.
> > > > >   *
> > > > >   * Returns a handle or NULL.
> > > > >   */
> > > > > @@ -83,11 +80,6 @@ void xs_close(struct xs_handle *xsh /* NULL ok */);
> > > > >   */
> > > > >   struct xs_handle *xs_daemon_open(void);
> > > > >   struct xs_handle *xs_domain_open(void);
> > > > > -
> > > > > -/* Connect to the xs daemon (readonly for non-root clients).
> > > > > - * Returns a handle or NULL.
> > > > > - * Deprecated, please use xs_open(XS_OPEN_READONLY) instead
> > > > > - */
> > > > >   struct xs_handle *xs_daemon_open_readonly(void);
> > > > >    /* Close the connection to the xs daemon.
> > > > > diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
> > > > > index 320734416f..4ac73ec317 100644
> > > > > --- a/tools/libs/store/xs.c
> > > > > +++ b/tools/libs/store/xs.c
> > > > > @@ -302,7 +302,7 @@ struct xs_handle *xs_daemon_open(void)
> > > > >    struct xs_handle *xs_daemon_open_readonly(void)
> > > > >   {
> > > > > - return xs_open(XS_OPEN_READONLY);
> > > > > + return xs_open(0);
> > > > >   }
> > > > 
> > > > This changes the semantics of this function, doesn't it? The user
> > > > expects a readonly connection but in fact a read-write connection is
> > > > returned.
> > > > 
> > > > Maybe we should return an error here?
> > > 
> > > No, as the behavior is this way already if any of the following is true:
> > > 
> > > - a guest is opening the connection
> > > - Xenstore is running in a stubdom
> > > - oxenstored is being used
> > 
> > Right, and these are just some of the reasons why dropping the R/O
> > socket is a sensible thing to do.
> > 
> > However, I do think xenstore.h needs large disclaimers next to the
> > relevant constants and functions that both XS_OPEN_* flags are now
> > obsolete and ignored (merged into appropriate patches in the series).
> 
> Fine with me.

+1 on this. Let me check other patches first. If there is no need for
another round of posting of this series, the addition of the disclaimers
can come in a separate patch.

Wei.

> 
> 
> Juergen
> 


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:57:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:57:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3530.10114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9wU-0002F3-TU; Wed, 07 Oct 2020 13:57:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3530.10114; Wed, 07 Oct 2020 13:57:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9wU-0002Ew-Pw; Wed, 07 Oct 2020 13:57:18 +0000
Received: by outflank-mailman (input) for mailman id 3530;
 Wed, 07 Oct 2020 13:57:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ9wT-0002Ej-DI
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:57:17 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 023cacf1-305f-497b-b5ec-5c8e5d83e1df;
 Wed, 07 Oct 2020 13:57:16 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F21721063;
 Wed,  7 Oct 2020 06:57:15 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4D2833F66B;
 Wed,  7 Oct 2020 06:57:15 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kQ9wT-0002Ej-DI
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:57:17 +0000
X-Inumbo-ID: 023cacf1-305f-497b-b5ec-5c8e5d83e1df
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 023cacf1-305f-497b-b5ec-5c8e5d83e1df;
	Wed, 07 Oct 2020 13:57:16 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id F21721063;
	Wed,  7 Oct 2020 06:57:15 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.198.23])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4D2833F66B;
	Wed,  7 Oct 2020 06:57:15 -0700 (PDT)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/2] tools/libs/stat: use memcpy instead of strncpy in getBridge
Date: Wed,  7 Oct 2020 14:57:01 +0100
Message-Id: <4ecb03b40b0da6d480e95af1da8289501a3ede0a.1602078276.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Use memcpy in getBridge to prevent gcc warnings about truncated
strings. The truncation is intentional, so the gcc warning here is a
false positive.
Revert the previous change that enlarged the buffers, as the bigger
buffers are not needed.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v3:
 Do a memset 0 on destination buffer and use MIN between string length
 and resultLen - 1.
Changes in v2:
 Use MIN between the string length of de->d_name and resultLen to copy
 only the minimum size required and prevent reading from unallocated
 space.
---
 tools/libs/stat/xenstat_linux.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
index d2ee6fda64..e0d242e1bc 100644
--- a/tools/libs/stat/xenstat_linux.c
+++ b/tools/libs/stat/xenstat_linux.c
@@ -29,6 +29,7 @@
 #include <string.h>
 #include <unistd.h>
 #include <regex.h>
+#include <xen-tools/libs.h>
 
 #include "xenstat_priv.h"
 
@@ -78,8 +79,14 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
 				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
 
 				if (access(tmp, F_OK) == 0) {
-					strncpy(result, de->d_name, resultLen);
-					result[resultLen - 1] = 0;
+					/*
+					 * Do not use strncpy to prevent compiler warning with
+					 * gcc >= 10.0
+					 * If de->d_name is longer than resultLen we truncate it
+					 */
+					memset(result, 0, resultLen);
+					memcpy(result, de->d_name, MIN(strnlen(de->d_name,
+									NAME_MAX),resultLen - 1));
 				}
 		}
 	}
@@ -264,7 +271,7 @@ int xenstat_collect_networks(xenstat_node * node)
 {
 	/* Helper variables for parseNetDevLine() function defined above */
 	int i;
-	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[256] = { 0 }, devNoBridge[257] = { 0 };
+	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[16] = { 0 }, devNoBridge[17] = { 0 };
 	unsigned long long rxBytes, rxPackets, rxErrs, rxDrops, txBytes, txPackets, txErrs, txDrops;
 
 	struct priv_data *priv = get_priv_data(node->handle);
-- 
2.17.1
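[Editor's note: a self-contained sketch of the memset + memcpy + MIN idiom the patch above introduces, outside of xenstat; copy_truncated() and NAME_MAX_SKETCH are invented stand-ins for getBridge()'s inline code and NAME_MAX. Zeroing the destination first and copying at most resultLen - 1 bytes guarantees NUL termination even when the source is longer than the buffer, without triggering gcc 10's strncpy truncation warning.]

```c
#include <string.h>

#ifndef MIN
#define MIN(a, b) ((a) < (b) ? (a) : (b))
#endif

#define NAME_MAX_SKETCH 255  /* stand-in for NAME_MAX */

/* Copy name into result, truncating if needed; result is always
 * NUL-terminated because the buffer is zeroed before the copy. */
static void copy_truncated(char *result, size_t resultLen, const char *name)
{
    memset(result, 0, resultLen);
    memcpy(result, name,
           MIN(strnlen(name, NAME_MAX_SKETCH), resultLen - 1));
}
```

For a 4-byte destination, a 7-character source is cut to its first 3 characters plus the terminating NUL; a shorter source is copied whole.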



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 13:57:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 13:57:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3531.10121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9wV-0002FU-6x; Wed, 07 Oct 2020 13:57:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3531.10121; Wed, 07 Oct 2020 13:57:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQ9wV-0002FK-1h; Wed, 07 Oct 2020 13:57:19 +0000
Received: by outflank-mailman (input) for mailman id 3531;
 Wed, 07 Oct 2020 13:57:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQ9wT-0002Eo-SV
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 13:57:17 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 02a3b305-dfda-4ec3-988d-e418a29ca9ed;
 Wed, 07 Oct 2020 13:57:17 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D3796106F;
 Wed,  7 Oct 2020 06:57:16 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3005B3F66B;
 Wed,  7 Oct 2020 06:57:16 -0700 (PDT)
X-Inumbo-ID: 02a3b305-dfda-4ec3-988d-e418a29ca9ed
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/2] tools/libs/light: Fix libxenlight gcc warning
Date: Wed,  7 Oct 2020 14:57:02 +0100
Message-Id: <579cfa6e71a5a1392a5ae40cef358c4e8e3a0901.1602078276.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <4ecb03b40b0da6d480e95af1da8289501a3ede0a.1602078276.git.bertrand.marquis@arm.com>
References: <4ecb03b40b0da6d480e95af1da8289501a3ede0a.1602078276.git.bertrand.marquis@arm.com>

Fix a gcc 10 compilation warning about a potentially uninitialized
variable by initializing it to 0.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v3: Rebase
---
 tools/libs/light/libxl_mem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_mem.c b/tools/libs/light/libxl_mem.c
index e52a9624ea..c739d00f39 100644
--- a/tools/libs/light/libxl_mem.c
+++ b/tools/libs/light/libxl_mem.c
@@ -562,7 +562,7 @@ out:
 
 int libxl_get_free_memory_0x040700(libxl_ctx *ctx, uint32_t *memkb)
 {
-    uint64_t my_memkb;
+    uint64_t my_memkb = 0;
     int rc;
 
     rc = libxl_get_free_memory(ctx, &my_memkb);
-- 
2.17.1
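For context, a minimal stand-alone sketch of the pattern gcc 10's -Wmaybe-uninitialized flags here: an out-parameter that is only written on the success path. The names (`get_free_memkb`, `query_free`) are hypothetical stand-ins, not the libxl API; only the zero-initialization mirrors the actual one-line fix.

```c
#include <stdint.h>

/* Hypothetical stand-in for libxl_get_free_memory(): on the error path
 * the out-parameter is never written, which is what gcc 10 warns about. */
static int get_free_memkb(uint64_t *out, int fail)
{
    if (fail)
        return -1;      /* error path: *out stays untouched */
    *out = 4096;
    return 0;
}

/* Mirrors the patch: initializing the local to 0 guarantees every path
 * reads defined data, silencing -Wmaybe-uninitialized. */
static uint64_t query_free(int fail)
{
    uint64_t my_memkb = 0;  /* the one-line fix */

    (void)get_free_memkb(&my_memkb, fail);
    return my_memkb;
}
```

The warning fires because the compiler cannot prove the callee writes the variable on every path; initializing at declaration is the usual low-risk workaround.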



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 14:31:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 14:31:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3546.10139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQATI-0006Gs-1Q; Wed, 07 Oct 2020 14:31:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3546.10139; Wed, 07 Oct 2020 14:31:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQATH-0006Gl-UN; Wed, 07 Oct 2020 14:31:11 +0000
Received: by outflank-mailman (input) for mailman id 3546;
 Wed, 07 Oct 2020 14:31:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Cnt=DO=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
 id 1kQATG-0006Gg-KE
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 14:31:10 +0000
Received: from mail-oo1-xc42.google.com (unknown [2607:f8b0:4864:20::c42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5f36896-3f16-4d09-a9db-7daa9ffd952f;
 Wed, 07 Oct 2020 14:31:08 +0000 (UTC)
Received: by mail-oo1-xc42.google.com with SMTP id h8so653562ooc.12
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 07:31:08 -0700 (PDT)
X-Inumbo-ID: f5f36896-3f16-4d09-a9db-7daa9ffd952f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=NbLfbOo6rBgplKLLXUM/j+UjD/0nZV6znJNM3tEznfE=;
        b=NTYqZ7f5B5Gq+gYWGOBHXox72i8QP8xJnVDXz/XMzy4PD5+aheXfQ9douo1x8qBJaA
         t3MCm6+mQ2l2Te5LxUMufA52jrfxXGUuS3Ub9jWGPX7BUN7lOtXCgszdLolU9WucZHoq
         TGkYA7/fPKQfpc9yG3h/xdIDNv02YJCEcWu2k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=NbLfbOo6rBgplKLLXUM/j+UjD/0nZV6znJNM3tEznfE=;
        b=ByWEguI8gRBKN0r45cpt7cTFQ6SQ6yykHSqNtdCM7s5zv3kJB6foSSdqyrtY8WIgKK
         26vjhO+HBFCK9OUlbVeCtqrgxPLVJr6VXhY3rNRRSWAAszMtOK/uV14FAFlQuoaQ9+VJ
         DEtBv82yZNm7Wax+FAFlCcJuClxgYD5xfxDcCU/nix93Np6955GM7WYpGJrs+4+VpMcI
         aS0+vJkCrV56MEGNRUwnFdpyLX+erAGX1A/n/EZrTjTEBo9cBHr4QOkNdvr6AuRg7Ewp
         OvjFHh7h6ONzhrlmYj7ceK2ogfsUNo20bY8XYDW2DLyuCGbzACH9nMA208ZsFULac+/g
         8/WA==
X-Gm-Message-State: AOAM532N9VVLvPAT4KWNsJJNy3Gtw7BJT1U/h9vyP+A30e2m46GmH/Aa
	uwm35Z0pgwrdZ05MBjuHBwdWUThqXoxAEwaJN+z8ww==
X-Google-Smtp-Source: ABdhPJzZpewR2804aqDvAhxBrbs0NiPUuiLxDsRE6uB5xKlkNmt5x5H5vW7ze9MuHc5XJfLaMcC98vFOmMc7laRKCrA=
X-Received: by 2002:a4a:e592:: with SMTP id o18mr2238606oov.28.1602081067392;
 Wed, 07 Oct 2020 07:31:07 -0700 (PDT)
MIME-Version: 1.0
References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com> <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local> <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
 <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>
 <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de> <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com>
In-Reply-To: <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com>
From: Daniel Vetter <daniel@ffwll.ch>
Date: Wed, 7 Oct 2020 16:30:56 +0200
Message-ID: <CAKMK7uEPn=q1J50koveE+b49r=SE0eh5nTrxWOVRN2grdyNPTA@mail.gmail.com>
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
To: =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>, Luben Tuikov <luben.tuikov@amd.com>, 
	Dave Airlie <airlied@linux.ie>, Nouveau Dev <nouveau@lists.freedesktop.org>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	"Wilson, Chris" <chris@chris-wilson.co.uk>, Melissa Wen <melissa.srw@gmail.com>, 
	Huang Rui <ray.huang@amd.com>, Gerd Hoffmann <kraxel@redhat.com>, Sam Ravnborg <sam@ravnborg.org>, 
	Emil Velikov <emil.velikov@collabora.com>, 
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>, 
	Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Krzysztof Kozlowski <krzk@kernel.org>, 
	Steven Price <steven.price@arm.com>, 
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>, Kukjin Kim <kgene@kernel.org>, 
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>, 
	Russell King <linux+etnaviv@armlinux.org.uk>, 
	"open list:DRM DRIVER FOR QXL VIRTUAL GPU" <spice-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>, 
	The etnaviv authors <etnaviv@lists.freedesktop.org>, Hans de Goede <hdegoede@redhat.com>, 
	"moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>, 
	"open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>, Sean Paul <sean@poorly.run>, 
	apaneers@amd.com, Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	"moderated list:DMA BUFFER SHARING FRAMEWORK" <linaro-mm-sig@lists.linaro.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, Tomeu Vizoso <tomeu.vizoso@collabora.com>, 
	Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>, 
	Kyungmin Park <kyungmin.park@samsung.com>, Qinglang Miao <miaoqinglang@huawei.com>, 
	Qiang Yu <yuq825@gmail.com>, Alex Deucher <alexander.deucher@amd.com>, 
	"open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Oct 7, 2020 at 3:25 PM Christian König <christian.koenig@amd.com> wrote:
>
> Am 07.10.20 um 15:20 schrieb Thomas Zimmermann:
> > Hi
> >
> > Am 07.10.20 um 15:10 schrieb Daniel Vetter:
> >> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
> >>> Hi
> >>>
> >>> Am 02.10.20 um 11:58 schrieb Daniel Vetter:
> >>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
> >>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
> >>>>> <christian.koenig@amd.com> wrote:
> >>>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter:
> >>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
> >>>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann:
> >>>>>>>>> Hi
> >>>>>>>>>
> >>>>>>>>>> Am 30.09.20 um 10:05 schrieb Christian König:
> >>>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann:
> >>>>>>>>>>> Hi Christian
> >>>>>>>>>>>
> >>>>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian König:
> >>>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann:
> >>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location
> >>>>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
> >>>>>>>>>>>>> with these values. Helpful for TTM-based drivers.
> >>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
> >>>>>>>>>>>> well.
> >>>>>>>>>>>>
> >>>>>>>>>>>> Additional to that which driver is going to use this?
> >>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
> >>>>>>>>>>> retrieve the pointer via this function.
> >>>>>>>>>>>
> >>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
> >>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
> >>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
> >>>>>>>>>> I should have asked which driver you try to fix here :)
> >>>>>>>>>>
> >>>>>>>>>> In this case just keep the function inside bochs and only fix it there.
> >>>>>>>>>>
> >>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
> >>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM
> >>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
> >>>>>>>>> have to duplicate the functionality in each of these drivers. Bochs
> >>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
> >>>>>>>> Ah, ok can we have that then only in the VRAM helpers?
> >>>>>>>>
> >>>>>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
> >>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
> >>>>>>>>
> >>>>>>>> What I want to avoid is to have another conversion function in TTM because
> >>>>>>>> what happens here is that we already convert from ttm_bus_placement to
> >>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
> >>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of
> >>>>>>> everything over to dma_buf_map and assorted helpers for access? There's
> >>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to
> >>>>>>> be able to convert it all in one go. An intermediate state with a bunch of
> >>>>>>> conversions seems fairly unavoidable to me.
> >>>>>> Fair enough. I would just have started bottom up and not top down.
> >>>>>>
> >>>>>> Anyway feel free to go ahead with this approach as long as we can remove
> >>>>>> the new function again when we clean that stuff up for good.
> >>>>> Yeah I guess bottom up would make more sense as a refactoring. But the
> >>>>> main motivation to land this here is to fix the __mmio vs normal
> >>>>> memory confusion in the fbdev emulation helpers for sparc (and
> >>>>> anything else that needs this). Hence the top down approach for
> >>>>> rolling this out.
> >>>> Ok I started reviewing this a bit more in-depth, and I think this is a bit
> >>>> too much of a de-tour.
> >>>>
> >>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the
> >>>> entire object. Only vmwgfx uses to map less than that. Also, everyone just
> >>>> immediately follows up with converting that full object map into a
> >>>> pointer.
> >>>>
> >>>> So I think what we really want here is:
> >>>> - new function
> >>>>
> >>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> >>>>
> >>>>    _vmap name since that's consistent with both dma_buf functions and
> >>>>    what's usually used to implement this. Outside of the ttm world kmap
> >>>>    usually just means single-page mappings using kmap() or its iomem
> >>>>    sibling io_mapping_map* so rather confusing name for a function which
> >>>>    usually is just used to set up a vmap of the entire buffer.
> >>>>
> >>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
> >>>>    functions for all ttm drivers. We should be able to make this fully
> >>>>    generic because a) we now have dma_buf_map and b) drm_gem_object is
> >>>>    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
> >>>>    and gem driver.
> >>>>
> >>>>    This is maybe a good follow-up, since it should allow us to ditch quite
> >>>>    a bit of the vram helper code for this more generic stuff. I also might
> >>>>    have missed some special-cases here, but from a quick look everything
> >>>>    just pins the buffer to the current location and that's it.
> >>>>
> >>>>    Also this obviously requires Christian's generic ttm_bo_pin rework
> >>>>    first.
> >>>>
> >>>> - roll the above out to drivers.
> >>>>
> >>>> Christian/Thomas, thoughts on this?
> >>> I agree on the goals, but what is the immediate objective here?
> >>>
> >>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
> >>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
> >>> more internal state than struct dma_buf_map, so they are not easily
> >>> convertible either. What you propose seems to require a reimplementation
> >>> of the existing ttm_bo_kmap() code. That is its own patch series.
> >>>
> >>> I'd rather go with some variant of the existing patch and add
> >>> ttm_bo_vmap() in a follow-up.
> >> ttm_bo_vmap would simply wrap what you currently open-code as
> >> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
> >> be a much later step. Why do you think adding ttm_bo_vmap is not
> >> possible?
> > The calls to ttm_bo_kmap/_kunmap() require an instance of struct
> > ttm_bo_kmap_obj that is stored in each driver's private bo structure
> > (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made
> > patch 3, I flirted with the idea of unifying the driver's _vmap code in
> > a shared helper, but I couldn't find a simple way of doing it. That's
> > why it's open-coded in the first place.

Yeah we'd need a ttm_bo_vunmap I guess to make this work. Which
shouldn't be more than a few lines, but maybe too much to do in this
series.

> Well that makes kind of sense. Keep in mind that ttm_bo_kmap is
> currently way too complicated.

Yeah, simplifying this into a ttm_bo_vmap on one side, and a simple
1-page kmap helper on the other should help a lot.
-Daniel

>
> Christian.
>
> >
> > Best regards
> > Thomas
> >
> >> -Daniel
> >>
> >>
> >>> Best regards
> >>> Thomas
> >>>
> >>>> I think for the immediate need of rolling this out for vram helpers and
> >>>> fbdev code we should be able to do this, but just postpone the driver wide
> >>>> roll-out for now.
> >>>>
> >>>> Cheers, Daniel
> >>>>
> >>>>> -Daniel
> >>>>>
> >>>>>> Christian.
> >>>>>>
> >>>>>>> -Daniel
> >>>>>>>
> >>>>>>>> Thanks,
> >>>>>>>> Christian.
> >>>>>>>>
> >>>>>>>>> Best regards
> >>>>>>>>> Thomas
> >>>>>>>>>
> >>>>>>>>>> Regards,
> >>>>>>>>>> Christian.
> >>>>>>>>>>
> >>>>>>>>>>> Best regards
> >>>>>>>>>>> Thomas
> >>>>>>>>>>>
> >>>>>>>>>>>> Regards,
> >>>>>>>>>>>> Christian.
> >>>>>>>>>>>>
> >>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >>>>>>>>>>>>> ---
> >>>>>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
> >>>>>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
> >>>>>>>>>>>>>      2 files changed, 44 insertions(+)
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
> >>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
> >>>>>>>>>>>>> @@ -34,6 +34,7 @@
> >>>>>>>>>>>>>      #include <drm/drm_gem.h>
> >>>>>>>>>>>>>      #include <drm/drm_hashtab.h>
> >>>>>>>>>>>>>      #include <drm/drm_vma_manager.h>
> >>>>>>>>>>>>> +#include <linux/dma-buf-map.h>
> >>>>>>>>>>>>>      #include <linux/kref.h>
> >>>>>>>>>>>>>      #include <linux/list.h>
> >>>>>>>>>>>>>      #include <linux/wait.h>
> >>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
> >>>>>>>>>>>>> ttm_bo_kmap_obj *map,
> >>>>>>>>>>>>>          return map->virtual;
> >>>>>>>>>>>>>      }
> >>>>>>>>>>>>>      +/**
> >>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
> >>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
> >>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
> >>>>>>>>>>>>> + */
> >>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
> >>>>>>>>>>>>> *kmap,
> >>>>>>>>>>>>> +                           struct dma_buf_map *map)
> >>>>>>>>>>>>> +{
> >>>>>>>>>>>>> +    bool is_iomem;
> >>>>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>> +    if (!vaddr)
> >>>>>>>>>>>>> +        dma_buf_map_clear(map);
> >>>>>>>>>>>>> +    else if (is_iomem)
> >>>>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
> >>>>>>>>>>>>> +    else
> >>>>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
> >>>>>>>>>>>>> +}
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>>      /**
> >>>>>>>>>>>>>       * ttm_bo_kmap
> >>>>>>>>>>>>>       *
> >>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> >>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
> >>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h
> >>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
> >>>>>>>>>>>>> @@ -45,6 +45,12 @@
> >>>>>>>>>>>>>       *
> >>>>>>>>>>>>>       *    dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> >>>>>>>>>>>>>       *
> >>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + * .. code-block:: c
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
> >>>>>>>>>>>>>       * dma_buf_map_is_null().
> >>>>>>>>>>>>>       *
> >>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> >>>>>>>>>>>>> dma_buf_map *map, void *vaddr)
> >>>>>>>>>>>>>          map->is_iomem = false;
> >>>>>>>>>>>>>      }
> >>>>>>>>>>>>>      +/**
> >>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> >>>>>>>>>>>>> an address in I/O memory
> >>>>>>>>>>>>> + * @map:        The dma-buf mapping structure
> >>>>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
> >>>>>>>>>>>>> + *
> >>>>>>>>>>>>> + * Sets the address and the I/O-memory flag.
> >>>>>>>>>>>>> + */
> >>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> >>>>>>>>>>>>> +                           void __iomem *vaddr_iomem)
> >>>>>>>>>>>>> +{
> >>>>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
> >>>>>>>>>>>>> +    map->is_iomem = true;
> >>>>>>>>>>>>> +}
> >>>>>>>>>>>>> +
> >>>>>>>>>>>>>      /**
> >>>>>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> >>>>>>>>>>>>> for equality
> >>>>>>>>>>>>>       * @lhs:    The dma-buf mapping structure
> >>>>>>>>>>>> _______________________________________________
> >>>>>>>>>>>> dri-devel mailing list
> >>>>>>>>>>>> dri-devel@lists.freedesktop.org
> >>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >>>>>>>>>>> _______________________________________________
> >>>>>>>>>>> amd-gfx mailing list
> >>>>>>>>>>> amd-gfx@lists.freedesktop.org
> >>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
> >>>>>>>>>>
> >>>>>
> >>>>> --
> >>>>> Daniel Vetter
> >>>>> Software Engineer, Intel Corporation
> >>>>> http://blog.ffwll.ch
> >>> --
> >>> Thomas Zimmermann
> >>> Graphics Driver Developer
> >>> SUSE Software Solutions Germany GmbH
> >>> Maxfeldstr. 5, 90409 Nürnberg, Germany
> >>> (HRB 36809, AG Nürnberg)
> >>> Geschäftsführer: Felix Imendörffer
> >>>
> >>
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
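As a reference for the conversion discussed above, here is a minimal user-space model of the tagged mapping and the kmap-to-dma_buf_map conversion from the quoted patch. It is a sketch, not the kernel code: the real struct dma_buf_map keeps vaddr/vaddr_iomem in a union with __iomem annotations, and `to_dma_buf_map()` here stands in for ttm_kmap_obj_to_dma_buf_map(), taking the ttm_kmap_obj_virtual() results directly.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified model of struct dma_buf_map: one pointer plus a flag saying
 * whether it refers to I/O memory. */
struct dma_buf_map {
    void *vaddr;      /* stand-in for the vaddr/vaddr_iomem union */
    bool is_iomem;
};

static void dma_buf_map_clear(struct dma_buf_map *map)
{
    map->vaddr = NULL;
    map->is_iomem = false;
}

static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
    map->vaddr = vaddr;
    map->is_iomem = false;
}

static void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, void *vaddr)
{
    map->vaddr = vaddr;
    map->is_iomem = true;
}

/* Mirrors the quoted ttm_kmap_obj_to_dma_buf_map(): pick clear/system/iomem
 * based on what the kmap reported. */
static void to_dma_buf_map(void *vaddr, bool is_iomem, struct dma_buf_map *map)
{
    if (!vaddr)
        dma_buf_map_clear(map);
    else if (is_iomem)
        dma_buf_map_set_vaddr_iomem(map, vaddr);
    else
        dma_buf_map_set_vaddr(map, vaddr);
}
```

The point of the is_iomem flag is that callers can choose between plain and memcpy_toio-style accessors without carrying TTM's separate kmap bookkeeping around.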


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 14:58:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 14:58:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3549.10151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQAtQ-0000EO-71; Wed, 07 Oct 2020 14:58:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3549.10151; Wed, 07 Oct 2020 14:58:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQAtQ-0000EH-2W; Wed, 07 Oct 2020 14:58:12 +0000
Received: by outflank-mailman (input) for mailman id 3549;
 Wed, 07 Oct 2020 14:58:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MWE0=DO=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQAtO-0000EC-F3
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 14:58:10 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 37f9e5d1-c3bc-4f9c-a7b2-3b36f8c8695d;
 Wed, 07 Oct 2020 14:58:08 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8044F106F;
 Wed,  7 Oct 2020 07:58:08 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 14DAF3F66B;
 Wed,  7 Oct 2020 07:58:06 -0700 (PDT)
X-Inumbo-ID: 37f9e5d1-c3bc-4f9c-a7b2-3b36f8c8695d
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH v2] build: always use BASEDIR for xen sub-directory
Date: Wed,  7 Oct 2020 15:57:51 +0100
Message-Id: <df2fc83d3a84dd3fc2e58101ded22847fdbaa862.1602082503.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.

This removes the dependency on the xen subdirectory path, preventing a
wrong configuration file from being used when the xen subdirectory is
duplicated for compilation tests.

BASEDIR is set in xen/include/xen/lib/x86/Makefile because that Makefile
is called directly from the tools build and install process, where
BASEDIR is not otherwise set.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
Changes in v2:
 Fix tools installation by setting BASEDIR in lib/x86 Makefile.
---
 xen/common/Makefile                | 6 +++---
 xen/include/xen/lib/x86/Makefile   | 6 ++++--
 xen/tools/kconfig/Makefile.kconfig | 2 +-
 xen/xsm/flask/Makefile             | 4 ++--
 4 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/xen/common/Makefile b/xen/common/Makefile
index b3b60a1ba2..083f62acb6 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -78,14 +78,14 @@ obj-$(CONFIG_UBSAN) += ubsan/
 obj-$(CONFIG_NEEDS_LIBELF) += libelf/
 obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
 
-CONF_FILE := $(if $(patsubst /%,,$(KCONFIG_CONFIG)),$(XEN_ROOT)/xen/)$(KCONFIG_CONFIG)
+CONF_FILE := $(if $(patsubst /%,,$(KCONFIG_CONFIG)),$(BASEDIR)/)$(KCONFIG_CONFIG)
 config.gz: $(CONF_FILE)
 	gzip -c $< >$@
 
 config_data.o: config.gz
 
-config_data.S: $(XEN_ROOT)/xen/tools/binfile
-	$(SHELL) $(XEN_ROOT)/xen/tools/binfile $@ config.gz xen_config_data
+config_data.S: $(BASEDIR)/tools/binfile
+	$(SHELL) $(BASEDIR)/tools/binfile $@ config.gz xen_config_data
 
 clean::
 	rm -f config_data.S config.gz 2>/dev/null
diff --git a/xen/include/xen/lib/x86/Makefile b/xen/include/xen/lib/x86/Makefile
index 408d69c99e..c3b9ebe961 100644
--- a/xen/include/xen/lib/x86/Makefile
+++ b/xen/include/xen/lib/x86/Makefile
@@ -1,8 +1,10 @@
 include $(XEN_ROOT)/Config.mk
 
+BASEDIR = $(XEN_ROOT)/xen
+
 .PHONY: all
 all: cpuid-autogen.h
 
-cpuid-autogen.h: $(XEN_ROOT)/xen/include/public/arch-x86/cpufeatureset.h $(XEN_ROOT)/xen/tools/gen-cpuid.py
-	$(PYTHON) $(XEN_ROOT)/xen/tools/gen-cpuid.py -i $< -o $@.new
+cpuid-autogen.h: $(BASEDIR)/include/public/arch-x86/cpufeatureset.h $(BASEDIR)/tools/gen-cpuid.py
+	$(PYTHON) $(BASEDIR)/tools/gen-cpuid.py -i $< -o $@.new
 	$(call move-if-changed,$@.new,$@)
diff --git a/xen/tools/kconfig/Makefile.kconfig b/xen/tools/kconfig/Makefile.kconfig
index 065f4b8471..799321ec4d 100644
--- a/xen/tools/kconfig/Makefile.kconfig
+++ b/xen/tools/kconfig/Makefile.kconfig
@@ -9,7 +9,7 @@ Q :=
 kecho := :
 
 # eventually you'll want to do out of tree builds
-srctree := $(XEN_ROOT)/xen
+srctree := $(BASEDIR)
 objtree := $(srctree)
 src := tools/kconfig
 obj := $(src)
diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
index 50bec20a1e..637159ad82 100644
--- a/xen/xsm/flask/Makefile
+++ b/xen/xsm/flask/Makefile
@@ -35,8 +35,8 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
 obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
 flask-policy.o: policy.bin
 
-flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
-	$(SHELL) $(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
+flask-policy.S: $(BASEDIR)/tools/binfile
+	$(SHELL) $(BASEDIR)/tools/binfile -i $@ policy.bin xsm_flask_init_policy
 
 FLASK_BUILD_DIR := $(CURDIR)
 POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 15:27:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 15:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3552.10167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQBLa-0003B4-HD; Wed, 07 Oct 2020 15:27:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3552.10167; Wed, 07 Oct 2020 15:27:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQBLa-0003Ax-EK; Wed, 07 Oct 2020 15:27:18 +0000
Received: by outflank-mailman (input) for mailman id 3552;
 Wed, 07 Oct 2020 15:27:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQBLY-0003Ap-DE
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 15:27:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2efdd02-3d7d-4bfe-a141-a4c6fe8b1839;
 Wed, 07 Oct 2020 15:27:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQBLV-0007N0-Sz; Wed, 07 Oct 2020 15:27:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQBLV-0003vk-Hc; Wed, 07 Oct 2020 15:27:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQBLV-0000C5-Gx; Wed, 07 Oct 2020 15:27:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQBLY-0003Ap-DE
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 15:27:16 +0000
X-Inumbo-ID: d2efdd02-3d7d-4bfe-a141-a4c6fe8b1839
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d2efdd02-3d7d-4bfe-a141-a4c6fe8b1839;
	Wed, 07 Oct 2020 15:27:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CpeEjilxo9u+hSrMDe3ruWDV9WxRmFMA73y+wtNuKT8=; b=m6xBndTw6WSkJCKB5KZ0xib8eQ
	whb4LaWqI+Vgut0uVV+ivhyi/JTgmsunjLAxfTrTJJEJA1JkFTanPgMQem11CBopUmPvLsspP4IEa
	jyqxonkRUAuFavfGIaTMkPq9bfeum3V9p0fCMRRBrUxJwoM31auZyVVmMMT78gkPkl6A=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQBLV-0007N0-Sz; Wed, 07 Oct 2020 15:27:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQBLV-0003vk-Hc; Wed, 07 Oct 2020 15:27:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQBLV-0000C5-Gx; Wed, 07 Oct 2020 15:27:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155517-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155517: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e4e64408f5c755da3bf7bfd78e70ad9f6c448376
X-Osstest-Versions-That:
    xen=93508595d588afe9dca087f95200effb7cedc81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 15:27:13 +0000

flight 155517 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155517/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 155495

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e4e64408f5c755da3bf7bfd78e70ad9f6c448376
baseline version:
 xen                  93508595d588afe9dca087f95200effb7cedc81f

Last test of basis   155495  2020-10-06 12:00:29 Z    1 days
Testing same since   155517  2020-10-07 12:01:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit e4e64408f5c755da3bf7bfd78e70ad9f6c448376
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Fri Oct 2 11:42:09 2020 +0100

    build: always use BASEDIR for xen sub-directory
    
    Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
    
    This removes the dependency on the xen subdirectory, preventing a
    wrong configuration file from being used when the xen subdirectory is
    duplicated for compilation tests.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 16:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 16:41:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3564.10195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQCVO-0003JZ-Gi; Wed, 07 Oct 2020 16:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3564.10195; Wed, 07 Oct 2020 16:41:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQCVO-0003JS-Ci; Wed, 07 Oct 2020 16:41:30 +0000
Received: by outflank-mailman (input) for mailman id 3564;
 Wed, 07 Oct 2020 16:41:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aqtN=DO=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kQCVN-0003JN-5Z
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 16:41:29 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c2fa8df-57b6-4a1c-86ec-49d4922ebccf;
 Wed, 07 Oct 2020 16:41:27 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aqtN=DO=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kQCVN-0003JN-5Z
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 16:41:29 +0000
X-Inumbo-ID: 9c2fa8df-57b6-4a1c-86ec-49d4922ebccf
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9c2fa8df-57b6-4a1c-86ec-49d4922ebccf;
	Wed, 07 Oct 2020 16:41:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602088887;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=SKwPXZtdk1ObFlhzqhu0Cw1vgsS+IcJp46R21xnariM=;
  b=b+QLcn9X32j19DVc2XtTlXf6uJM7+nFAMxujgnxumGZrU+5Odfysc/Bs
   ayp08fvI3eY9sz7L1Ma1d3O62JbkbQa1eeR9ZV/UKKaaLiajPUlsZzIo1
   /sufTHkHpubeNs+WB8dZrGPMdI4HIVm3a7wixL1jO8sXfhhda5aVAenZX
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: uo32Kg+112jpcabMTHrE6EdCxeX8qLnGWbJqKBHj6pM7d5CPsclwZS9R3bB8VcxuzHlJY7i1lj
 YuNgZpvbYuRHvKuv/Rl7tQSzsazFnMvfZ0i9l7iqPfoCCs+69ucpVbgxUCqnyXxFbRhZu8XcMO
 fI6ChVdJMyVXOb5kNMdRNHtd6rO3oI+V/qTlhLiY+OQ3hk/f7nQBQXF3qtOpIrZggyiy2t7Fv5
 jyR06tiAh352BPah4BgrgUUCbhGBMS9kqIo+/+j+g/lkHq+xPCK58whYdrNfVPuwmIGGffSkJN
 4k0=
X-SBRS: None
X-MesageID: 29532617
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,347,1596513600"; 
   d="scan'208";a="29532617"
Date: Wed, 7 Oct 2020 18:41:17 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
Message-ID: <20201007164117.GH19254@Air-de-Roger>
References: <20201006162327.93055-1-roger.pau@citrix.com>
 <a98d6cb1-0b1d-8fb8-8718-c65e02e448bb@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a98d6cb1-0b1d-8fb8-8718-c65e02e448bb@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 07, 2020 at 01:06:08PM +0100, Andrew Cooper wrote:
> On 06/10/2020 17:23, Roger Pau Monne wrote:
> > Currently a PV hardware domain can also be given control over the CPU
> > frequency, and such guest is allowed to write to MSR_IA32_PERF_CTL.
> 
> This might be how the current logic "works", but it's straight-up broken.
> 
> PERF_CTL is thread scope, so unless dom0 is identity pinned and has one
> vcpu for every pcpu, it cannot use the interface correctly.

Selecting cpufreq=dom0-kernel will force vCPU pinning. However, I'm
not able to see anywhere that would force dom0 vCPUs == pCPUs.

> > However since commit 322ec7c89f6 the default behavior has been changed
> > to reject accesses to not explicitly handled MSRs, preventing PV
> > guests that manage CPU frequency from reading
> > MSR_IA32_PERF_{STATUS/CTL}.
> >
> > Additionally some HVM guests (Windows at least) will attempt to read
> > MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
> >
> > vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
> > d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
> >
> > Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
> > handling shared between HVM and PV guests, and add an explicit case
> > for reads to MSR_IA32_PERF_{STATUS/CTL}.
> 
> OTOH, PERF_CTL does have a seemingly architectural "please disable turbo
> for me" bit, which is supposed to be for calibration loops.  I wonder if
> anyone uses this, and whether we ought to honour it (probably not).

If we let guests play with this we would have to save/restore the
guest value on context switch. Unless there's a strong case for this,
I would say no.

> > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> > index d8ed83f869..41baa3b7a1 100644
> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -1069,6 +1069,12 @@ extern enum cpufreq_controller {
> >      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
> >  } cpufreq_controller;
> >  
> > +static inline bool is_cpufreq_controller(const struct domain *d)
> > +{
> > +    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
> > +            is_hardware_domain(d));
> 
> This won't compile on !CONFIG_X86, due to CONFIG_HAS_CPUFREQ

It does seem to build on Arm, because this is only used in x86 code:

https://gitlab.com/xen-project/people/royger/xen/-/jobs/778207412

The extern declaration of cpufreq_controller is just above, so if you
tried to use is_cpufreq_controller on Arm you would get a link-time
error; otherwise it builds fine. The compiler discards the function on
Arm, since it is marked inline and is never used.

Alternatively I could look into moving cpufreq_controller (and
is_cpufreq_controller) out of sched.h into somewhere else; I haven't
looked at why it needs to live there.

> Honestly - I don't see any point to this code.  It's opt-in via the
> command line only, and doesn't provide adequate checks for enablement.
> (It's not as if we're lacking complexity or moving parts when it comes
> to power/frequency management).

Right, I could do a pre-patch to remove this, but I also don't think
we should block this fix on removing FREQCTL_dom0_kernel, so I would
rather fix the regression and then remove the feature if we agree it
can be removed.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 16:44:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 16:44:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3567.10208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQCY0-0003U0-Ut; Wed, 07 Oct 2020 16:44:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3567.10208; Wed, 07 Oct 2020 16:44:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQCY0-0003Tt-Ry; Wed, 07 Oct 2020 16:44:12 +0000
Received: by outflank-mailman (input) for mailman id 3567;
 Wed, 07 Oct 2020 16:44:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQCXz-0003TO-Aw
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 16:44:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5098ad54-d21a-4839-aeea-ef3ca01b5dd6;
 Wed, 07 Oct 2020 16:44:03 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQCXr-00014E-Hv; Wed, 07 Oct 2020 16:44:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQCXr-0007kc-9m; Wed, 07 Oct 2020 16:44:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQCXr-00054J-9I; Wed, 07 Oct 2020 16:44:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQCXz-0003TO-Aw
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 16:44:11 +0000
X-Inumbo-ID: 5098ad54-d21a-4839-aeea-ef3ca01b5dd6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5098ad54-d21a-4839-aeea-ef3ca01b5dd6;
	Wed, 07 Oct 2020 16:44:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4BU4gwyrQqT0d2uQTxxJokTpbp6ASD2wbgKM1SDYYZk=; b=EM8KRf5RKJfhzA3bBg86xVqr9I
	z1dbmlsNBxa86b5wldMVuA6QWu6f/1FST6OryYZoRnly5t6zG3ga4t6ivePGfNW3LQaQt+EfEgxSq
	Jy7vaUo6ppTO/t3rFuCBAQl0r8ZULf2wMw0tNwoUYYT5pi72SQvTVu8xjPpPxmN45Lk0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQCXr-00014E-Hv; Wed, 07 Oct 2020 16:44:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQCXr-0007kc-9m; Wed, 07 Oct 2020 16:44:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQCXr-00054J-9I; Wed, 07 Oct 2020 16:44:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155512-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155512: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c640186ec8aae6164123ee38de6409aed69eab12
X-Osstest-Versions-That:
    ovmf=2d8ca4f90eaeb61bd7e9903b56bf412f0d187137
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 16:44:03 +0000

flight 155512 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155512/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c640186ec8aae6164123ee38de6409aed69eab12
baseline version:
 ovmf                 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137

Last test of basis   155223  2020-10-01 11:40:36 Z    6 days
Testing same since   155512  2020-10-07 04:39:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jeff Brasen <jbrasen@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   2d8ca4f90e..c640186ec8  c640186ec8aae6164123ee38de6409aed69eab12 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 17:16:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 17:16:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3573.10229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQD2t-0006ft-Hs; Wed, 07 Oct 2020 17:16:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3573.10229; Wed, 07 Oct 2020 17:16:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQD2t-0006fm-E2; Wed, 07 Oct 2020 17:16:07 +0000
Received: by outflank-mailman (input) for mailman id 3573;
 Wed, 07 Oct 2020 17:16:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ScZ7=DO=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1kQD2r-0006fh-Up
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 17:16:06 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com (unknown
 [40.107.243.66]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54ac2e7c-22cb-44d4-a8c8-39bacd4a42f9;
 Wed, 07 Oct 2020 17:16:01 +0000 (UTC)
Received: from SN4PR0501CA0087.namprd05.prod.outlook.com
 (2603:10b6:803:22::25) by MWHPR0201MB3484.namprd02.prod.outlook.com
 (2603:10b6:301:77::36) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.39; Wed, 7 Oct
 2020 17:15:58 +0000
Received: from SN1NAM02FT057.eop-nam02.prod.protection.outlook.com
 (2603:10b6:803:22:cafe::ab) by SN4PR0501CA0087.outlook.office365.com
 (2603:10b6:803:22::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.15 via Frontend
 Transport; Wed, 7 Oct 2020 17:15:58 +0000
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 SN1NAM02FT057.mail.protection.outlook.com (10.152.73.105) with Microsoft SMTP
 Server id 15.20.3433.39 via Frontend Transport; Wed, 7 Oct 2020 17:15:57
 +0000
Received: from [149.199.38.66] (port=59910 helo=smtp.xilinx.com)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kQD29-0001sT-2M; Wed, 07 Oct 2020 10:15:21 -0700
Received: from [127.0.0.1] (helo=localhost)
 by smtp.xilinx.com with smtp (Exim 4.63)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kQD2j-00072d-BA; Wed, 07 Oct 2020 10:15:57 -0700
Received: from xsj-pvapsmtp01 (mail.xilinx.com [149.199.38.66])
 by xsj-smtp-dlp1.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 097HFkS7009243; 
 Wed, 7 Oct 2020 10:15:47 -0700
Received: from [10.23.120.52] (helo=localhost)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <stefanos@xilinx.com>)
 id 1kQD2Y-0006zR-Nc; Wed, 07 Oct 2020 10:15:46 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ScZ7=DO=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
	id 1kQD2r-0006fh-Up
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 17:16:06 +0000
X-Inumbo-ID: 54ac2e7c-22cb-44d4-a8c8-39bacd4a42f9
Received: from NAM12-DM6-obe.outbound.protection.outlook.com (unknown [40.107.243.66])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 54ac2e7c-22cb-44d4-a8c8-39bacd4a42f9;
	Wed, 07 Oct 2020 17:16:01 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lrN2lXX+T7Udw7Z0jfKYJESY0W1XtHtoOOOgCTYYnkj90d34JIfQ8yyGHrkKROA6cjJADUbXKKu54NQNjtT1HhCUBK8y6fHSkOymiQRDHDrbU8U/t7o3Gv3CYdINF0eTlwkYSKCFaGD1OtOW25I0d/Pnu1XfaNMKP7/bLjA9DQWlC5Hk2/kJ3gTd4q4gKbu04covdHFT78kAEeLotvvM3eNQP9yoIULPqZFPstFK4lg1qK7g8KMnbF9xhdREqlzw0ITn4KkEzPThePRbn4JuSQZxBOvg8Tjk6wgLjte5Obg3wOqGafe7aS5h2zc0qrHeDk7L/rvPymCiCeQ3g+Yp9Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jfuc7bfP8imoEchfCuXEL9P1/SHk38u2m1zaUF29b10=;
 b=eSsyUMv6MdKEImZXw2X92gM/ebZ72/T1lqakTy1mVNuxF6JDfseRgvqKb4RD9v29EFnXuktTFeLUhfMSSBKSTBiYRn/foy/olPZseQq8mSGNFknYeChoRtRt83u0kXBMt9xhp5HG/D5qC4P5mzl15t2ysDkmemWNGUwKpa0Lx2DCPxekW4vkqEd7+X1gEzQ68VXYTUZqJjAYpYifDLWSb8w35otSAlxEnESLvwAgnirQM0KMsphLHPKy2Stbq56Nn7xZ+kaEvM1gfDc+YWYNc4JJGUlUXxeASwI7zX31yeEkkhURowCcN+SHa3fFIg5hNdC/3gEFUdzxGS8Z8qkdNA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=lists.linux-foundation.org
 smtp.mailfrom=xilinx.com; dmarc=bestguesspass action=none
 header.from=xilinx.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jfuc7bfP8imoEchfCuXEL9P1/SHk38u2m1zaUF29b10=;
 b=j8WDry7wxrykDfJOwuVRW0rhcHWutBcibIrL2FUMbX/ySoVO6DDsnk6sXjyX/JNxC0e1Ry5hILBKSLObGkC8MRtrS+HEo6EP2NlFIZ4jL6FVm47IfsLZMakQjIBlwxNpRpuduNEd11RqWBR5iVr5JMGOAsfkCT6OGeGYOw5oD0o=
Received: from SN4PR0501CA0087.namprd05.prod.outlook.com
 (2603:10b6:803:22::25) by MWHPR0201MB3484.namprd02.prod.outlook.com
 (2603:10b6:301:77::36) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3433.39; Wed, 7 Oct
 2020 17:15:58 +0000
Received: from SN1NAM02FT057.eop-nam02.prod.protection.outlook.com
 (2603:10b6:803:22:cafe::ab) by SN4PR0501CA0087.outlook.office365.com
 (2603:10b6:803:22::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.15 via Frontend
 Transport; Wed, 7 Oct 2020 17:15:58 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; lists.linux-foundation.org; dkim=none (message not
 signed) header.d=none;lists.linux-foundation.org; dmarc=bestguesspass
 action=none header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 SN1NAM02FT057.mail.protection.outlook.com (10.152.73.105) with Microsoft SMTP
 Server id 15.20.3433.39 via Frontend Transport; Wed, 7 Oct 2020 17:15:57
 +0000
Received: from [149.199.38.66] (port=59910 helo=smtp.xilinx.com)
	by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
	(envelope-from <stefano.stabellini@xilinx.com>)
	id 1kQD29-0001sT-2M; Wed, 07 Oct 2020 10:15:21 -0700
Received: from [127.0.0.1] (helo=localhost)
	by smtp.xilinx.com with smtp (Exim 4.63)
	(envelope-from <stefano.stabellini@xilinx.com>)
	id 1kQD2j-00072d-BA; Wed, 07 Oct 2020 10:15:57 -0700
Received: from xsj-pvapsmtp01 (mail.xilinx.com [149.199.38.66])
	by xsj-smtp-dlp1.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 097HFkS7009243;
	Wed, 7 Oct 2020 10:15:47 -0700
Received: from [10.23.120.52] (helo=localhost)
	by xsj-pvapsmtp01 with esmtp (Exim 4.63)
	(envelope-from <stefanos@xilinx.com>)
	id 1kQD2Y-0006zR-Nc; Wed, 07 Oct 2020 10:15:46 -0700
Date: Wed, 7 Oct 2020 10:15:46 -0700 (PDT)
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@lst.de>
cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org
Subject: Re: xen-swiotlb vs phys_to_dma
In-Reply-To: <20201007060008.GA10125@lst.de>
Message-ID: <alpine.DEB.2.21.2010071015220.23978@sstabellini-ThinkPad-T480s>
References: <20201002123436.GA30329@lst.de> <alpine.DEB.2.21.2010021313010.10908@sstabellini-ThinkPad-T480s> <20201006082656.GB10243@lst.de> <alpine.DEB.2.21.2010061325230.10908@sstabellini-ThinkPad-T480s> <20201007060008.GA10125@lst.de>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a6ca9106-6562-430c-564d-08d86ae4a7f4
X-MS-TrafficTypeDiagnostic: MWHPR0201MB3484:
X-Microsoft-Antispam-PRVS:
	<MWHPR0201MB34842EC02DEAA7D400D61E77A00A0@MWHPR0201MB3484.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	yzYTur9nZ4FvMFeVhNmdtaTBjdXA3LM3bYkQWfibGQYBM9RZGRALHQSljY4k+FTY3Ym9raREQyJrUlr8gHYDk5+Ts2cDnyRRMGZvs5dhY6Z4CRHq14j+TT5cJ2mO5NyGx2MBoVAkwECQqIt51X/CIOHZL/ckzLHOoiiM7dV9nw6iE6jdPOjBFNbPOJgyBatuWE7zqs0zwhugbQGB0PS4Vp3D4edzK5LsQ95F1j1/scKlFIwTnWgK/2XS68JSJgLB0E22oAuTIEdVF1ZWPd8LFpn+3v0gnjq1TxoFH7291cjkJXEpQurhcXLgpqbJbpp3iBt4JYfrygzMfu77cGITM3CjzjE0ROOpvBS0JehbpPhoNqnbB4HkXoZiutz7z0p0T6aaAxDkLI/8gs8a9+Ohkg==
X-Forefront-Antispam-Report:
	CIP:149.199.60.83;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapsmtpgw01;PTR:unknown-60-83.xilinx.com;CAT:NONE;SFS:(7916004)(39850400004)(396003)(136003)(346002)(376002)(46966005)(33716001)(2906002)(4326008)(82310400003)(478600001)(426003)(81166007)(5660300002)(336012)(70206006)(8676002)(316002)(47076004)(9786002)(44832011)(356005)(186003)(54906003)(4744005)(8936002)(9686003)(82740400003)(26005)(70586007)(6916009);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Oct 2020 17:15:57.5760
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a6ca9106-6562-430c-564d-08d86ae4a7f4
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.60.83];Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-AuthSource:
	SN1NAM02FT057.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR0201MB3484

On Wed, 7 Oct 2020, Christoph Hellwig wrote:
> On Tue, Oct 06, 2020 at 01:46:12PM -0700, Stefano Stabellini wrote:
> > OK, this makes a lot of sense, and I like the patch because it makes the
> > swiotlb interface clearer.
> > 
> > Just one comment below.
> > 
> 
> > > +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
> > > +		size_t mapping_size, size_t alloc_size,
> > > +		enum dma_data_direction dir, unsigned long attrs)
> > >  {
> > > +	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(dev, io_tlb_start);
> > 
> > This is supposed to be hwdev, not dev
> 
> Yeah, the compiler would be rather unhappy otherwise.
> 
> I'll resend it after the dma-mapping and Xen trees are merged by Linus
> to avoid a merge conflict.

Sounds good, thanks. Please add

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
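[Editorial note: a stand-alone sketch of the parameter-name issue discussed above. The new swiotlb_tbl_map_single() prototype names its device parameter "hwdev", so the body must reference "hwdev", not "dev". All types and helpers below are simplified stand-ins, not the real kernel API.]

```c
/* Simplified stand-ins for the kernel types involved. */
typedef unsigned long phys_addr_t;
typedef unsigned long dma_addr_t;

struct device { dma_addr_t dma_offset; };

/* Stand-in for the global SWIOTLB bounce-buffer base address. */
static phys_addr_t io_tlb_start = 0x1000;

/* Stand-in for phys_to_dma_unencrypted(): applies a per-device offset. */
static dma_addr_t phys_to_dma_unencrypted(struct device *dev, phys_addr_t paddr)
{
    return (dma_addr_t)paddr + dev->dma_offset;
}

static dma_addr_t map_tlb_base(struct device *hwdev)
{
    /* Corrected: use the parameter actually in scope ("hwdev").
     * Referring to "dev" here, as in the posted patch, would not
     * compile -- which is the point of the review comment above. */
    dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);

    return tbl_dma_addr;
}
```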


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 17:22:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 17:22:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3582.10244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQD95-0007jN-8I; Wed, 07 Oct 2020 17:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3582.10244; Wed, 07 Oct 2020 17:22:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQD95-0007jG-5J; Wed, 07 Oct 2020 17:22:31 +0000
Received: by outflank-mailman (input) for mailman id 3582;
 Wed, 07 Oct 2020 17:22:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jbsw=DO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kQD93-0007jB-JD
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 17:22:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45a28bc3-6f61-4dd3-af6e-bdae6c7cb5cc;
 Wed, 07 Oct 2020 17:22:29 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id CB0C4206F7;
 Wed,  7 Oct 2020 17:22:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602091348;
	bh=BC/9LdfiDiOOJJTSIiWe1hIAogFd2Q7zUrLorSYZVpg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=iZtN0hc4SK3yr3PEuIdnlmmPAGzP5fGyY8ByUoaT2LhPoYdFMeI8HdY53sQSRQZtu
	 7/0HP/UfU/mZ67EHScBQhE6B8VsDA/615LeVB/P0LUVQwcmrdyipg8A+/E3KGAI7cd
	 vWuwbLyiomMnq4yrXhkS9TaMHiclgZ+/Noijz1DI=
Date: Wed, 7 Oct 2020 10:22:27 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <bertrand.marquis@arm.com>, 
    xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: print update firmware only once
In-Reply-To: <4b3135ff-4795-e189-0430-da5627419e4e@xen.org>
Message-ID: <alpine.DEB.2.21.2010071022170.23978@sstabellini-ThinkPad-T480s>
References: <09d04b34e6b3b77ac206a42657b1b4116e7e11f3.1602068661.git.bertrand.marquis@arm.com> <4b3135ff-4795-e189-0430-da5627419e4e@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 7 Oct 2020, Julien Grall wrote:
> Hi Bertrand,
> 
> On 07/10/2020 12:05, Bertrand Marquis wrote:
> > Fix enable_smccc_arch_workaround_1 to only print the warning asking to
> > update the firmware once.
> > 
> > Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> > ---
> >   xen/arch/arm/cpuerrata.c | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> > index 6c09017515..0c63dfa779 100644
> > --- a/xen/arch/arm/cpuerrata.c
> > +++ b/xen/arch/arm/cpuerrata.c
> > @@ -187,7 +187,7 @@ warn:
> >           ASSERT(system_state < SYS_STATE_active);
> >           warning_add("No support for ARM_SMCCC_ARCH_WORKAROUND_1.\n"
> >                       "Please update your firmware.\n");
> > -        warned = false;
> > +        warned = true;
> 
> Thanks for spotting it. It looks like I introduced this bug in commit
> 976319fa3de7f98b558c87b350699fffc278effc "xen/arm64: Kill PSCI_GET_VERSION as
> a variant-2 workaround".
> 
> I would suggest adding a Fixes tag (can be done on commit).
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
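[Editorial note: a stand-alone sketch of the warn-once pattern this one-character patch restores. With the guard left false, the warning fires on every CPU that takes this path; setting it true after the first emission silences the repeats. warning_add() is modelled with a counter here; the names are illustrative, not the real Xen internals.]

```c
#include <stdbool.h>
#include <stdio.h>

/* Counts how many times the warning was actually emitted. */
static int warnings_emitted;

/* Stand-in for Xen's warning_add(). */
static void warning_add(const char *msg)
{
    warnings_emitted++;
    fputs(msg, stderr);
}

/* Modelled on the warn: path in enable_smccc_arch_workaround_1(),
 * which may run once per CPU. */
static void enable_workaround_on_cpu(void)
{
    static bool warned;

    if ( !warned )
    {
        warning_add("No support for ARM_SMCCC_ARCH_WORKAROUND_1.\n"
                    "Please update your firmware.\n");
        warned = true;   /* the fix: the buggy code wrote "warned = false" */
    }
}
```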



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 17:28:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 17:28:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3585.10259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDFC-00087t-0r; Wed, 07 Oct 2020 17:28:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3585.10259; Wed, 07 Oct 2020 17:28:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDFB-00087m-TW; Wed, 07 Oct 2020 17:28:49 +0000
Received: by outflank-mailman (input) for mailman id 3585;
 Wed, 07 Oct 2020 17:28:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQDFB-00086n-0x
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 17:28:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c8520df-7e7e-4e24-9965-500ee17a1f4d;
 Wed, 07 Oct 2020 17:28:40 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQDF2-000221-24; Wed, 07 Oct 2020 17:28:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQDF1-0000xz-NZ; Wed, 07 Oct 2020 17:28:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQDF1-0000fs-N6; Wed, 07 Oct 2020 17:28:39 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3sc4L0y6zgUszb2Rm5RsvqjSiTMrfqIYxTUWPwJyRyM=; b=lY/d1BcC5whF5STRW9AAfDmASm
	JIqhc0hVgrK1bIgOSsFRlIXRaCsBCCX7PMkPBpZwLDufnRgvYQe57fMgkg6zjvptd+0q8nVQV+ihE
	wEMr6FY5oaCa2DJ8qf3fJMTd4+lvEkba9BXKghUoNYsp1n79EpuJNhXbWZx/wsxEl7y4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155510-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155510: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=93508595d588afe9dca087f95200effb7cedc81f
X-Osstest-Versions-That:
    xen=8ef6345ef557cc2c47298217635a3088eaa59893
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 17:28:39 +0000

flight 155510 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155510/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 155493
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 155493
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 155493
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 155493
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 155493
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 155493
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 155493
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 155493
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 155493
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  93508595d588afe9dca087f95200effb7cedc81f
baseline version:
 xen                  8ef6345ef557cc2c47298217635a3088eaa59893

Last test of basis   155493  2020-10-06 10:40:06 Z    1 days
Testing same since   155510  2020-10-07 04:00:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8ef6345ef5..93508595d5  93508595d588afe9dca087f95200effb7cedc81f -> master


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:00:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3608.10343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDk3-0003uF-Qn; Wed, 07 Oct 2020 18:00:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3608.10343; Wed, 07 Oct 2020 18:00:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDk3-0003u8-Nk; Wed, 07 Oct 2020 18:00:43 +0000
Received: by outflank-mailman (input) for mailman id 3608;
 Wed, 07 Oct 2020 18:00:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDk2-0003r9-Qd
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:42 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edf9d6c3-9d9b-4793-83b4-387a7cb6a76d;
 Wed, 07 Oct 2020 18:00:32 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjr-0007CF-Pj; Wed, 07 Oct 2020 19:00:31 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDk2-0003r9-Qd
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:42 +0000
X-Inumbo-ID: edf9d6c3-9d9b-4793-83b4-387a7cb6a76d
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id edf9d6c3-9d9b-4793-83b4-387a7cb6a76d;
	Wed, 07 Oct 2020 18:00:32 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjr-0007CF-Pj; Wed, 07 Oct 2020 19:00:31 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 02/82] Executive.pm planner: fix typo
Date: Wed,  7 Oct 2020 18:59:04 +0100
Message-Id: <20201007180024.7932-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index a0d9f81e..f17e7b70 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -702,7 +702,7 @@ sub plan_search ($$$$) {
 
 	    next PERIOD if $endevt->{Time} <= $try_time;
             # this period is entirely before the proposed slot;
-            # it doesn't overlap, but most check subsequent periods
+            # it doesn't overlap, but must check subsequent periods
 
 	  CHECK:
 	    {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:00:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3606.10319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDju-0003rM-AA; Wed, 07 Oct 2020 18:00:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3606.10319; Wed, 07 Oct 2020 18:00:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDju-0003rE-5b; Wed, 07 Oct 2020 18:00:34 +0000
Received: by outflank-mailman (input) for mailman id 3606;
 Wed, 07 Oct 2020 18:00:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDjs-0003r9-UP
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:32 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 448c5e57-dc0a-4919-8ba5-1571f167b3f5;
 Wed, 07 Oct 2020 18:00:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjq-0007CF-Ff; Wed, 07 Oct 2020 19:00:30 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDjs-0003r9-UP
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:32 +0000
X-Inumbo-ID: 448c5e57-dc0a-4919-8ba5-1571f167b3f5
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 448c5e57-dc0a-4919-8ba5-1571f167b3f5;
	Wed, 07 Oct 2020 18:00:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjq-0007CF-Ff; Wed, 07 Oct 2020 19:00:30 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 00/82] Reuse test hosts
Date: Wed,  7 Oct 2020 18:59:02 +0100
Message-Id: <20201007180024.7932-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This series arranges to save on host setup by reusing a test host, if
the previous test passed.  Care is taken to make sure that a host is
only reused in this way if the new test would have set it up
identically.

I have had this branch in preparation since November 2017...

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:00:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:00:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3609.10355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDk9-0003yN-5Y; Wed, 07 Oct 2020 18:00:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3609.10355; Wed, 07 Oct 2020 18:00:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDk9-0003yC-2J; Wed, 07 Oct 2020 18:00:49 +0000
Received: by outflank-mailman (input) for mailman id 3609;
 Wed, 07 Oct 2020 18:00:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDk7-0003r9-Qh
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:47 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cb581d4-7b5f-471f-8c85-c3e369bc8064;
 Wed, 07 Oct 2020 18:00:33 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjs-0007CF-2P; Wed, 07 Oct 2020 19:00:32 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDk7-0003r9-Qh
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:47 +0000
X-Inumbo-ID: 6cb581d4-7b5f-471f-8c85-c3e369bc8064
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6cb581d4-7b5f-471f-8c85-c3e369bc8064;
	Wed, 07 Oct 2020 18:00:33 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjs-0007CF-2P; Wed, 07 Oct 2020 19:00:32 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 03/82] README.planner: Document magic job hostflags
Date: Wed,  7 Oct 2020 18:59:05 +0100
Message-Id: <20201007180024.7932-4-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 README.planner | 60 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 60 insertions(+)

diff --git a/README.planner b/README.planner
index c33aae11..f134d716 100644
--- a/README.planner
+++ b/README.planner
@@ -203,6 +203,66 @@ that shared systems get regrooved occasionally even if nothing decides
 to unshare them.
 
 
+ts-hosts-allocate-Executive and hostflags
+----------------------------------------
+
+Within a job, the allocations are actually done by
+ts-hosts-allocate-Executive.  It is told what to do by its command
+line arguments, which are (usually) simply IDENTs.
+
+The IDENTs provide the key for runvars which control the host
+allocation algorithm.  Principally, these are the runvars which define
+the job's hostflags
+    all_hostflags
+    IDENT_hostflags
+    runtime_IDENT_hostflags
+(all of these are comma-separated lists).
+
+Each such hostflag must, in general, be set for a particular host, for
+that host to be eligible.  But there are some special forms of job
+hostflag:
+
+  share-SHARING
+
+    The host may be shared with other jobs.  Typically used for
+    builds.  SHARING is a string which denotes the "scope" of the
+    sharing - jobs with the same SHARING should set the host up
+    identically.  The osstest test harness revision is automatically
+    appended and therefore does not need to be included.
+
+  equiv-FORMALTOKEN
+
+    For each equiv-FORMALTOKEN job flag set on one or more IDENTs, a
+    corresponding equiv-ACTUALTOKEN host flag must be set on the
+    corresponding hosts.  So, for example, if the IDENTs src_host and
+    dst_host both have equiv-1 specified, then the two hosts chosen
+    for src_host and dst_host will have an actual hostflag in common
+    which matches the pattern equiv-*.
+
+  diverse-FORMALTOKEN
+
+    For each diverse-FORMALTOKEN flag, the selected host will *not* be
+    the same as any other allocation with the same diverse-FORMALTOKEN
+    flag in the same *flight*.
+
+  CONDNAME:CONDARGS...
+
+    Looks up CONDNAME as Osstest::ResourceCondition::CONDNAME.
+    The selected host must match the appropriate condition.
+    CONDNAMEs are:
+
+  PropEq:HOSTPROP:VAL
+
+    Require the host property HOSTPROP to be equal to VAL,
+    according to string comparison.  (Unset properties
+    match an empty VAL.)
+
+  PropMinVer:HOSTPROP:VAL
+
+    Require the host property HOSTPROP to be at least VAL, according
+    to version number comparison (as implemented by dpkg and
+    coreutils).
+
 Flights
 -------
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:00:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3607.10331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDjz-0003sN-H7; Wed, 07 Oct 2020 18:00:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3607.10331; Wed, 07 Oct 2020 18:00:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDjz-0003sG-E5; Wed, 07 Oct 2020 18:00:39 +0000
Received: by outflank-mailman (input) for mailman id 3607;
 Wed, 07 Oct 2020 18:00:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDjx-0003r9-QR
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:37 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a773c96a-06a5-4cda-beea-6bc09cdd1efb;
 Wed, 07 Oct 2020 18:00:32 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjr-0007CF-ID; Wed, 07 Oct 2020 19:00:31 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDjx-0003r9-QR
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:37 +0000
X-Inumbo-ID: a773c96a-06a5-4cda-beea-6bc09cdd1efb
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a773c96a-06a5-4cda-beea-6bc09cdd1efb;
	Wed, 07 Oct 2020 18:00:32 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjr-0007CF-ID; Wed, 07 Oct 2020 19:00:31 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 01/82] ms-queuedaemon: Update for newer Tcl's socket channel ids
Date: Wed,  7 Oct 2020 18:59:03 +0100
Message-Id: <20201007180024.7932-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Now we have things like "sock55599edaf050" where previously we had
something like "sock142".  So the output is misaligned.

Bump the sizes.  And with these longer names, when showing the front
of the queue only print the full first entry and the start of the next
one.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ms-queuedaemon | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/ms-queuedaemon b/ms-queuedaemon
index a3a009ca..dc858863 100755
--- a/ms-queuedaemon
+++ b/ms-queuedaemon
@@ -91,7 +91,7 @@ proc log-event {m} {
 proc log-state {m} {
     global need_queue_run queue
 
-    set lhs [format "N=%d Q=%d (%-15.15s) " \
+    set lhs [format "N=%d Q=%d (%-20.20s) " \
                  $need_queue_run [llength $queue] $queue]
 
     foreach-walker w {
@@ -99,13 +99,13 @@ proc log-state {m} {
 	if {[info exists queue_running]} {
 	    append lhs [format "R=%-3d " [llength $queue_running]]
 	    if {[info exists thinking]} {
-		append lhs [format "T=%-7s " $thinking]
+		append lhs [format "T=%-16s " $thinking]
 	    } else {
-		append lhs [format "          "]
+		append lhs [format "                   "]
 	    }
-	    append lhs [format "(%-15.15s) " $queue_running]
+	    append lhs [format "(%-20.20s) " $queue_running]
 	} else {
-	    append lhs "                                  "
+	    append lhs "                                                "
 	}
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:00:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3610.10367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkD-00042t-GR; Wed, 07 Oct 2020 18:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3610.10367; Wed, 07 Oct 2020 18:00:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkD-00042m-CT; Wed, 07 Oct 2020 18:00:53 +0000
Received: by outflank-mailman (input) for mailman id 3610;
 Wed, 07 Oct 2020 18:00:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDkC-0003r9-Qp
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:52 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8520070e-0199-4435-9d21-e70d61f00bc3;
 Wed, 07 Oct 2020 18:00:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjs-0007CF-98; Wed, 07 Oct 2020 19:00:32 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDkC-0003r9-Qp
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:52 +0000
X-Inumbo-ID: 8520070e-0199-4435-9d21-e70d61f00bc3
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8520070e-0199-4435-9d21-e70d61f00bc3;
	Wed, 07 Oct 2020 18:00:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjs-0007CF-98; Wed, 07 Oct 2020 19:00:32 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 04/82] sg-run-job: Minor whitespace (formatting) changes
Date: Wed,  7 Oct 2020 18:59:06 +0100
Message-Id: <20201007180024.7932-5-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-run-job | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index df3d08d0..3db05b34 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -30,12 +30,12 @@ proc per-host-prep {} {
     per-host-ts .       xen-boot/@        ts-host-reboot
 
     per-host-ts .       host-ping-check-xen/@ ts-host-ping-check
-    per-host-ts .       =(*)             {ts-leak-check basis}
+    per-host-ts .       =(*)            { ts-leak-check basis }
 }
 
 proc per-host-finish {} {
     if {[nested-hosts-p]} { set broken fail } { set broken broken }
-    per-host-ts .       =                {ts-leak-check check}
+    per-host-ts .       =               { ts-leak-check check }
     per-host-ts !$broken capture-logs/@(*) ts-logs-capture
 }
 
@@ -96,7 +96,7 @@ proc run-job {job} {
 	
 	if {![nested-hosts-p]} break
 
-	per-host-ts . final-poweroff {ts-host-powercycle --power=0}
+	per-host-ts . final-poweroff { ts-host-powercycle --power=0 }
 
         set need_xen_hosts [lunappend nested_layers_hosts]
     }
@@ -549,7 +549,7 @@ proc setup-test-pair {} {
     run-ts . =              ts-debian-install      dst_host
     run-ts . =              ts-debian-fixup        dst_host          + debian
     run-ts . =              ts-guests-nbd-mirror + dst_host src_host + debian
-    per-host-ts . =(*)     {ts-leak-check basis}
+    per-host-ts . =(*)    { ts-leak-check basis }
     run-ts . =              ts-guest-start       + src_host          + debian
 }
 proc need-hosts/test-pair {} { return {src_host dst_host} }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:00:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3611.10379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkI-00048B-PV; Wed, 07 Oct 2020 18:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3611.10379; Wed, 07 Oct 2020 18:00:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkI-000482-M8; Wed, 07 Oct 2020 18:00:58 +0000
Received: by outflank-mailman (input) for mailman id 3611;
 Wed, 07 Oct 2020 18:00:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDkH-0003r9-RC
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:57 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 249f8180-3406-4e17-ad6d-668881da1b44;
 Wed, 07 Oct 2020 18:00:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjt-0007CF-5n; Wed, 07 Oct 2020 19:00:33 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDkH-0003r9-RC
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:00:57 +0000
X-Inumbo-ID: 249f8180-3406-4e17-ad6d-668881da1b44
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 249f8180-3406-4e17-ad6d-668881da1b44;
	Wed, 07 Oct 2020 18:00:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjt-0007CF-5n; Wed, 07 Oct 2020 19:00:33 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 05/82] sg-run-job: Improve some internal API docs
Date: Wed,  7 Oct 2020 18:59:07 +0100
Message-Id: <20201007180024.7932-6-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-run-job | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index 3db05b34..702ed558 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -206,12 +206,12 @@ proc recipe-flag {flagname {def 0}} {
 #                    general steps from running)
 #               The job status is set to IFFAIL.
 #
-#   per-host-ts IFFAIL TESTID SCRIPT-ARGS...
+#   per-host-ts IFFAIL TESTID SCRIPT-ARGS-LIST MORE-ARGS...
 #
 #       Runs the script (as a separate step) for each test host ident.
-#       The host ident is appended to SCRIPT-ARGS.  (SCRIPT-ARGS
-#       should contain an even number of + items for proper testid
-#       generation.)
+#       The host ident is appended to SCRIPT-ARGS-LIST.
+#       (SCRIPT-ARGS-LIST should contain an even number of + items for
+#       proper testid generation.)
 #
 #       The scripts are run in parallel for all host idents.
 #
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:01:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:01:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3612.10391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkO-0004DU-6e; Wed, 07 Oct 2020 18:01:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3612.10391; Wed, 07 Oct 2020 18:01:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkO-0004DJ-2B; Wed, 07 Oct 2020 18:01:04 +0000
Received: by outflank-mailman (input) for mailman id 3612;
 Wed, 07 Oct 2020 18:01:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDkM-0003r9-RQ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:02 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ad78458-fc8c-4a5b-8731-ca75cb13c2bb;
 Wed, 07 Oct 2020 18:00:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjt-0007CF-Jn; Wed, 07 Oct 2020 19:00:33 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDkM-0003r9-RQ
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:02 +0000
X-Inumbo-ID: 9ad78458-fc8c-4a5b-8731-ca75cb13c2bb
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9ad78458-fc8c-4a5b-8731-ca75cb13c2bb;
	Wed, 07 Oct 2020 18:00:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjt-0007CF-Jn; Wed, 07 Oct 2020 19:00:33 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 06/82] show_abs_time: Represent undef $timet as <undef>
Date: Wed,  7 Oct 2020 18:59:08 +0100
Message-Id: <20201007180024.7932-7-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This can happen, for example, if a badly broken flight has steps which
are STARTING and have NULL in the start time column, and is then
reported using sg-report-flight.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Osstest.pm b/Osstest.pm
index b2b6b741..734c0ef6 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -457,6 +457,7 @@ sub visible_undef ($) {
 
 sub show_abs_time ($) {
     my ($timet) = @_;
+    return '<undef>' unless defined $timet;
     return strftime "%Y-%m-%d %H:%M:%S Z", gmtime $timet;
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:01:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:01:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3613.10403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkS-0004Iz-Im; Wed, 07 Oct 2020 18:01:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3613.10403; Wed, 07 Oct 2020 18:01:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkS-0004In-Eg; Wed, 07 Oct 2020 18:01:08 +0000
Received: by outflank-mailman (input) for mailman id 3613;
 Wed, 07 Oct 2020 18:01:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDkR-0003r9-Rn
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:07 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d866887-24c2-4b19-bf92-b182b500f471;
 Wed, 07 Oct 2020 18:00:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjt-0007CF-SJ; Wed, 07 Oct 2020 19:00:33 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDkR-0003r9-Rn
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:07 +0000
X-Inumbo-ID: 0d866887-24c2-4b19-bf92-b182b500f471
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0d866887-24c2-4b19-bf92-b182b500f471;
	Wed, 07 Oct 2020 18:00:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjt-0007CF-SJ; Wed, 07 Oct 2020 19:00:33 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 07/82] ts-hosts-allocate-Executive: Add a comment about a warning
Date: Wed,  7 Oct 2020 18:59:09 +0100
Message-Id: <20201007180024.7932-8-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-hosts-allocate-Executive | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 58d2a389..78b94c6d 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -1039,6 +1039,10 @@ sub actual_allocation ($) {
 	    if ($shared->{ntasks}) {
 		warn "resource $shrestype $shared->{resname} allegedly".
                     " available but wrong state $shared->{state} and tasks";
+		# This can happen if following a failed prep by
+		# another job, the other shares are still owned by the
+		# now-dead task.  If so that share will become allocatable
+		# (and therefore not be counted in `ntasks') in due course.
 		return undef;
 
                 # someone was preparing it but they aren't any more
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:01:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:01:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3614.10415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkX-0004P7-Vd; Wed, 07 Oct 2020 18:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3614.10415; Wed, 07 Oct 2020 18:01:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkX-0004Ox-RK; Wed, 07 Oct 2020 18:01:13 +0000
Received: by outflank-mailman (input) for mailman id 3614;
 Wed, 07 Oct 2020 18:01:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDkW-0003r9-Rc
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:12 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 583345bd-7fd4-4cfd-9b83-fc8afdf57668;
 Wed, 07 Oct 2020 18:00:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDju-0007CF-84; Wed, 07 Oct 2020 19:00:34 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDkW-0003r9-Rc
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:12 +0000
X-Inumbo-ID: 583345bd-7fd4-4cfd-9b83-fc8afdf57668
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 583345bd-7fd4-4cfd-9b83-fc8afdf57668;
	Wed, 07 Oct 2020 18:00:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDju-0007CF-84; Wed, 07 Oct 2020 19:00:34 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 08/82] host reuse: ms-planner: Bring some variables forward
Date: Wed,  7 Oct 2020 18:59:10 +0100
Message-Id: <20201007180024.7932-9-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Move the scope of $share earlier in cmd_show_html, and also introduce
$shared in the colour computation.  This makes the next changes easier.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ms-planner | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/ms-planner b/ms-planner
index 11423404..d9f3db91 100755
--- a/ms-planner
+++ b/ms-planner
@@ -677,6 +677,7 @@ sub cmd_show_html () {
 	my $shares;
 	foreach my $evt (@{ $plan->{Events}{$reso} }) {
 	    my $type= $evt->{Type};
+	    my $share= $evt->{Share};
 	    my $show=
 		$type eq End ? ($evt->{Share} ? "End $evt->{Info}" : "") :
                 $type eq Unshare ? "$type [".showsharetype($evt->{Info})."]" :
@@ -685,7 +686,6 @@ sub cmd_show_html () {
 		(!$evt->{Allocated} ? Booking :
                  ($evt->{Allocated}{Live} ? Allocation : Completed).
                  " $evt->{Allocated}{Task}")." ".$evt->{Info};
-	    my $share= $evt->{Share};
 	    if ($share) {
 		$show .= sprintf(" [%s %d/%d %d]",
 				 showsharetype($share->{Type}),
@@ -789,15 +789,16 @@ sub cmd_show_html () {
                     $content->{Avail} ||
                     $content->{Type} eq After ||
                     ($content->{Allocated} && !$content->{Allocated}{Live});
+                my $shared = $content->{Share};
                 my $bgcolour=
                     $avail ?
-                    ($content->{Share} ? "#ccccff" : "#ffffff") :
+                    ($shared ? "#ccccff" : "#ffffff") :
                     $content->{Type} eq Overrun ?
-                    ($content->{Share} ? "#443344" : "#442222") :
+                    ($shared ? "#443344" : "#442222") :
                     $content->{Allocated} ?
-                    ($content->{Share} ? "#882288" : "#882222")
+                    ($shared ? "#882288" : "#882222")
                     :
-                    ($content->{Share} ? "#005555" : "#448844");
+                    ($shared ? "#005555" : "#448844");
                 my $fgcolour=
                     $avail ? "#000000" : "#ffffff";
 		printf "<td valign=top rowspan=%d".
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:01:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:01:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3615.10427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkd-0004VQ-9j; Wed, 07 Oct 2020 18:01:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3615.10427; Wed, 07 Oct 2020 18:01:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDkd-0004VG-5a; Wed, 07 Oct 2020 18:01:19 +0000
Received: by outflank-mailman (input) for mailman id 3615;
 Wed, 07 Oct 2020 18:01:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQDkb-0003r9-Rh
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:17 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96361b6a-c30f-4d7b-a433-2f271bf7a56d;
 Wed, 07 Oct 2020 18:00:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDju-0007CF-F2; Wed, 07 Oct 2020 19:00:34 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQDkb-0003r9-Rh
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:17 +0000
X-Inumbo-ID: 96361b6a-c30f-4d7b-a433-2f271bf7a56d
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 96361b6a-c30f-4d7b-a433-2f271bf7a56d;
	Wed, 07 Oct 2020 18:00:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDju-0007CF-F2; Wed, 07 Oct 2020 19:00:34 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 09/82] host reuse: ms-planner: Do not show reuse as shared in the plan
Date: Wed,  7 Oct 2020 18:59:11 +0100
Message-Id: <20201007180024.7932-10-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

If the number of shares is 1, do not show it as shared, and also
ignore the Unshare events.

This clarifies the display, especially when used with forthcoming test
host reuse work.
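The display rule this patch implements can be summarised as: an event is rendered as shared only when its share object exists and its share count allows more than one sharer. A hedged Python sketch (field name taken from the diff; the dict stands in for the Perl `$evt->{Share}` hash):

```python
def is_displayed_as_shared(share):
    """True when an event should be rendered as shared in the plan.

    A single-share entry (Shares == 1) is host reuse rather than real
    sharing, so it is shown as unshared and its Unshare event is
    suppressed.
    """
    return share is not None and share["Shares"] != 1
```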

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ms-planner | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/ms-planner b/ms-planner
index d9f3db91..4e38e4e3 100755
--- a/ms-planner
+++ b/ms-planner
@@ -249,6 +249,7 @@ sub launder_check_plan () {
 			Avail => 1,
 			Info => $cshare->{Type},
 			PreviousShare => $cshare,
+                        Shares => $cshare->{Shares},
 		    };
 		    $cshare= undef;
 		}
@@ -678,6 +679,7 @@ sub cmd_show_html () {
 	foreach my $evt (@{ $plan->{Events}{$reso} }) {
 	    my $type= $evt->{Type};
 	    my $share= $evt->{Share};
+	    next if $type eq 'Unshare' && $evt->{Shares} == 1;
 	    my $show=
 		$type eq End ? ($evt->{Share} ? "End $evt->{Info}" : "") :
                 $type eq Unshare ? "$type [".showsharetype($evt->{Info})."]" :
@@ -686,7 +688,7 @@ sub cmd_show_html () {
 		(!$evt->{Allocated} ? Booking :
                  ($evt->{Allocated}{Live} ? Allocation : Completed).
                  " $evt->{Allocated}{Task}")." ".$evt->{Info};
-	    if ($share) {
+	    if ($share && $share->{Shares} != 1) {
 		$show .= sprintf(" [%s %d/%d %d]",
 				 showsharetype($share->{Type}),
 				 $share->{Shares} - $evt->{Avail},
@@ -789,7 +791,8 @@ sub cmd_show_html () {
                     $content->{Avail} ||
                     $content->{Type} eq After ||
                     ($content->{Allocated} && !$content->{Allocated}{Live});
-                my $shared = $content->{Share};
+                my $shared = $content->{Share}
+		          && $content->{Share}{Shares} != 1;
                 my $bgcolour=
                     $avail ?
                     ($shared ? "#ccccff" : "#ffffff") :
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:01:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:01:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3616.10439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDl9-0004pK-SG; Wed, 07 Oct 2020 18:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3616.10439; Wed, 07 Oct 2020 18:01:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQDl9-0004pC-PD; Wed, 07 Oct 2020 18:01:51 +0000
Received: by outflank-mailman (input) for mailman id 3616;
 Wed, 07 Oct 2020 18:01:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01DM=DO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kQDl9-0004ov-3x
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:51 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d99c4ae-0f97-4f2c-8f85-87f864b84b6e;
 Wed, 07 Oct 2020 18:01:49 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=01DM=DO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kQDl9-0004ov-3x
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:01:51 +0000
X-Inumbo-ID: 1d99c4ae-0f97-4f2c-8f85-87f864b84b6e
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1d99c4ae-0f97-4f2c-8f85-87f864b84b6e;
	Wed, 07 Oct 2020 18:01:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602093709;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=sKrFXatEEq5DqK+cx9YnltRr6D+EMF7PGcHSzrSySec=;
  b=PciKDSt472tfF64m7UjHPgyj2hM13CQvFzJmPP/3hhODNgEGnaNk7dHS
   +H4bU7ZPFIZUpN2XB39rQsbfO10KXwLK973s2dLZqBQYczIOVJgGyYHO6
   6XaDVdswe1KlaXonPj0DG9SFkLcSl6QtTYGx5Omsru+/qDo1HJ4jM2AHW
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: V1uCHVTavA80cRQ7zMOl16lNe7ZtZdcmLaxOW/s3OB6o+2VQ0R6hveNDy/1UOPuAggBjH2NFWs
 H47HMYBM57uoDVZ6bVaBK0B+jteu5RZGjSP+ZKf5ZrM5KoX7xt5SzOi/msEchQHcfcjoZagikX
 AI9DC/bLVfHEa+cxQH4xG3k+KmJENfQoFrwmeWOk3YmXFEahshIc23GVp2m77DPQu+3v2eSzFP
 Y1pSlzrS7xat0W8PH+Oa8o9kHb5Df+hQGdA8qqbpJABICHDR6Kkq3nYuS21reZzM4t2whdAWeq
 o4w=
X-SBRS: None
X-MesageID: 28774375
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,347,1596513600"; 
   d="scan'208";a="28774375"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/ucode: Trivial further cleanup
Date: Wed, 7 Oct 2020 19:01:20 +0100
Message-ID: <20201007180120.27203-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

 * Drop unused include in private.h.
 * Use explicit-width integers for Intel header fields.
 * Adjust comment to better describe the extended header.
 * Drop unnecessary __packed attribute for AMD header.
 * Switch mc_patch_data_id to being uint16_t, which is how it is more commonly
   referred to.
 * Fix types and style.
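With every field given an explicit width, the Intel header layout is easy to pin down: 48 bytes, little-endian, no padding. The following Python sketch parses that layout as described by the diff; it is illustrative only, not Xen code, and uses the in-tree comment's observation that Pentium Pro/II images had a fixed datasize of 2000 and totalsize of 2048.

```python
import struct

# hdrver, rev, year, day, month, sig, cksum, ldrver,
# pf, datasize, totalsize, reserved[3] -- 48 bytes total,
# matching the explicit-width fields in the diff.
HDR_FMT = "<IIHBB6I3I"
HDR_LEN = struct.calcsize(HDR_FMT)  # 48

def parse_intel_ucode_header(blob):
    (hdrver, rev, year, day, month, sig, cksum, ldrver,
     pf, datasize, totalsize, *_reserved) = struct.unpack_from(HDR_FMT, blob)
    return {
        "hdrver": hdrver, "rev": rev,
        "date": (year, month, day),
        "sig": sig, "pf": pf,
        "datasize": datasize, "totalsize": totalsize,
        # An extended signature table follows the payload only when
        # totalsize exceeds header + data (the in-tree comment
        # abbreviates this as "totalsize > datasize").
        "has_ext_table": totalsize > datasize + HDR_LEN,
    }
```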

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu/microcode/amd.c     | 10 +++++-----
 xen/arch/x86/cpu/microcode/intel.c   | 34 +++++++++++++++++-----------------
 xen/arch/x86/cpu/microcode/private.h |  2 --
 3 files changed, 22 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index cd532321e8..e913232067 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -24,7 +24,7 @@
 
 #define pr_debug(x...) ((void)0)
 
-struct __packed equiv_cpu_entry {
+struct equiv_cpu_entry {
     uint32_t installed_cpu;
     uint32_t fixed_errata_mask;
     uint32_t fixed_errata_compare;
@@ -35,7 +35,7 @@ struct __packed equiv_cpu_entry {
 struct microcode_patch {
     uint32_t data_code;
     uint32_t patch_id;
-    uint8_t  mc_patch_data_id[2];
+    uint16_t mc_patch_data_id;
     uint8_t  mc_patch_data_len;
     uint8_t  init_flag;
     uint32_t mc_patch_data_checksum;
@@ -102,7 +102,7 @@ static void collect_cpu_info(void)
              smp_processor_id(), csig->rev);
 }
 
-static bool_t verify_patch_size(uint32_t patch_size)
+static bool verify_patch_size(uint32_t patch_size)
 {
     uint32_t max_size;
 
@@ -113,7 +113,7 @@ static bool_t verify_patch_size(uint32_t patch_size)
 #define F17H_MPB_MAX_SIZE 3200
 #define F19H_MPB_MAX_SIZE 4800
 
-    switch (boot_cpu_data.x86)
+    switch ( boot_cpu_data.x86 )
     {
     case 0x14:
         max_size = F14H_MPB_MAX_SIZE;
@@ -135,7 +135,7 @@ static bool_t verify_patch_size(uint32_t patch_size)
         break;
     }
 
-    return (patch_size <= max_size);
+    return patch_size <= max_size;
 }
 
 static bool check_final_patch_levels(const struct cpu_signature *sig)
diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index d031196d4c..d9bb1bc10e 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -32,38 +32,38 @@
 #define pr_debug(x...) ((void)0)
 
 struct microcode_patch {
-    unsigned int hdrver;
-    unsigned int rev;
+    uint32_t hdrver;
+    uint32_t rev;
     uint16_t year;
     uint8_t  day;
     uint8_t  month;
-    unsigned int sig;
-    unsigned int cksum;
-    unsigned int ldrver;
+    uint32_t sig;
+    uint32_t cksum;
+    uint32_t ldrver;
 
     /*
      * Microcode for the Pentium Pro and II had all further fields in the
      * header reserved, had a fixed datasize of 2000 and totalsize of 2048,
      * and didn't use platform flags despite the availability of the MSR.
      */
-    unsigned int pf;
-    unsigned int datasize;
-    unsigned int totalsize;
-    unsigned int reserved[3];
+    uint32_t pf;
+    uint32_t datasize;
+    uint32_t totalsize;
+    uint32_t reserved[3];
 
     /* Microcode payload.  Format is propriety and encrypted. */
     uint8_t data[];
-};
 
-/* microcode format is extended from prescott processors */
+    /* Extended header (iff totalsize > datasize, P4 Prescott and later) */
+};
 struct extended_sigtable {
-    unsigned int count;
-    unsigned int cksum;
-    unsigned int reserved[3];
+    uint32_t count;
+    uint32_t cksum;
+    uint32_t rsvd[3];
     struct {
-        unsigned int sig;
-        unsigned int pf;
-        unsigned int cksum;
+        uint32_t sig;
+        uint32_t pf;
+        uint32_t cksum;
     } sigs[];
 };
 
diff --git a/xen/arch/x86/cpu/microcode/private.h b/xen/arch/x86/cpu/microcode/private.h
index c00ba590d1..9a15cdc879 100644
--- a/xen/arch/x86/cpu/microcode/private.h
+++ b/xen/arch/x86/cpu/microcode/private.h
@@ -1,8 +1,6 @@
 #ifndef ASM_X86_MICROCODE_PRIVATE_H
 #define ASM_X86_MICROCODE_PRIVATE_H
 
-#include <xen/types.h>
-
 #include <asm/microcode.h>
 
 enum microcode_match_result {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:18:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:18:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3636.10459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE0o-0006UM-DX; Wed, 07 Oct 2020 18:18:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3636.10459; Wed, 07 Oct 2020 18:18:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE0o-0006UF-9N; Wed, 07 Oct 2020 18:18:02 +0000
Received: by outflank-mailman (input) for mailman id 3636;
 Wed, 07 Oct 2020 18:18:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE0n-0006UA-1W
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:01 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8577cf09-0530-4663-b220-aa83a686d198;
 Wed, 07 Oct 2020 18:17:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk9-0007CF-Ot; Wed, 07 Oct 2020 19:00:49 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE0n-0006UA-1W
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:01 +0000
X-Inumbo-ID: 8577cf09-0530-4663-b220-aa83a686d198
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8577cf09-0530-4663-b220-aa83a686d198;
	Wed, 07 Oct 2020 18:17:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk9-0007CF-Ot; Wed, 07 Oct 2020 19:00:49 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 66/82] host lifecycle: Machinery, db, for tracking relevant events
Date: Wed,  7 Oct 2020 19:00:08 +0100
Message-Id: <20201007180024.7932-67-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

When we reuse test hosts, we want to be able to give a list of the
other jobs which might be responsible for any problem.

In principle it would be possible to do this by digging into the
db's history tables like sg-report-host-history does, but this is
quite slow and also I don't have enough confidence in that approach to
use it for this application.

So instead, track the host lifecycle explicitly.

The approach taken is a hybrid one.  I first considered two and a half
approaches:

 1. Permanently record all host/share allocations and share state
    changes in a host history table.  But it is nontrivial to update
    all the allocation machinery to keep this table up to date.  It is
    also nontrivial to extract the necessary information from such a
    table: the allocation information would have to be correlated,
    using timestamps, with the steps table.  That's slow and complex.
    We had such a table but it was never used for these reasons;
    I dropped that empty table recently.

 1b. Like 1 but explicitly put a lifecycle sequence number in the
    allocations table.  This would make it easy to find relevant
    events but would involve even more complicated logic during
    allocation.

 2. Record the host's lifecycle information in a file on the host.
    This means it gets wiped whenever the host does, and it makes
    finding the relevant jobs easy: read the file during logs
    capture, and we'll find everything of relevance.  The information
    then has to be permanently stored somewhere it can be used for
    logging and archaeology: a per-job runvar giving the relevant
    host history, up to the point where that job finished, does that
    job nicely.  However, this has a serious problem: if the host
    crashes hard, we may not be able to recover the complete
    information about why!  We really want the information recorded
    outside the host in question.

So I've taken a hybrid approach: effectively replicate the per-host
file from (2), but put the information in the database.  This
necessitates a call to clear the host lifecycle history, which we make
at the *end* of the host install.  As a bonus this might let us more
easily identify if there are particular jobs that leave hosts in
states that are hard to recover from, and it will make total host
failure quite obvious because the host install log report will have a
list of the failed attempts (longer in each successive job).

For build jobs we only record the setup job, and concurrent jobs, in
the runvar.  This does not seem to have been a problem so far, and
this avoids having to do work on other allocations (eg, mg-allocate).
It also avoids having very long lists of previous builds listed in
every build job.

Test jobs are only shared within a flight and with much more limited
scope so the same considerations don't arise.  But by the same token,
we also do not need to adjust mg-allocate etc., since the user ought
not to allocate shares of test hosts unless they know what they are
doing.
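The runvar bookkeeping performed by the new jobdb_host_update_lifecycle_info method (see the mode/sigil table in its comment, further down in this patch) amounts to: each call mode has a distinguishing sigil, and a repeat call in the same mode is a no-op if the runvar already ends with that sigil. A rough Python sketch of that check; the names and encoding here are illustrative assumptions, not osstest's actual runvar format:

```python
# Sigils per mode, as described in the method's comment:
# selectprep -> '@', select -> '+', final -> none (always does work).
SIGILS = {"selectprep": "@", "select": "+", "final": ""}

def needs_lifecycle_update(runvar_value, mode):
    """False when the runvar's trailing sigil shows this mode's work
    was already done; an empty sigil (the 'final' mode) always
    triggers the update path."""
    sigil = SIGILS[mode]
    if sigil and runvar_value.endswith(sigil):
        return False
    return True
```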

In this commit we introduce:
 * The database table
 * The runvar syntax
 * The function for recording the lifecycle events

We have what amounts to an ad-hoc compression scheme for the
information in the lifecycle runvars.  Otherwise this data might get
quite voluminous, which can make various other db queries slow.

There isn't a very good way to represent out-of-job tasks in the
lifecycle runvar.  We could maybe put in something from the tasks
table, but the entry in the tasks table might be gone by now and that
would involve quoting (and it might be quite large).

But this will only matter when a shared/reused host has been manually
messed with, and recording the task is sufficient to
 (1) note the fact of such interference
 (2) if the task is static, or still going when the job reports,
      it can actually be put in the report.
 (3) failing that provide something which could be grepped for in logs

We do not call the recording function yet, so the db update is merely
preparatory.

There is a bug in this patch: the calculation of $olive is wrong.
This will be fixed in a moment.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/JobDB/Executive.pm  | 211 ++++++++++++++++++++++++++++++++++++
 Osstest/JobDB/Standalone.pm |   2 +
 Osstest/TestSupport.pm      |  12 ++
 schema/host-lifecycle.sql   |  43 ++++++++
 4 files changed, 268 insertions(+)
 create mode 100644 schema/host-lifecycle.sql

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 8fde2934..cf82b4cf 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -24,6 +24,7 @@ use Osstest;
 use Osstest::TestSupport;
 use Osstest::Executive;
 use Data::Dumper;
+use Carp;
 
 BEGIN {
     use Exporter ();
@@ -461,4 +462,214 @@ sub jobdb_db_glob ($$) { #method
 
 sub can_anoint ($) { return 1; }
 
+sub jobdb_host_update_lifecycle_info ($$$) { #method
+    my ($mo, $ho, $mode) = @_;
+    # $mode is       sigil
+    #  selectprep  1  @     called several times, from selecthost, $isprep
+    #  wiped       2  n/a   called (usually) once, when install succeeds
+    #  select      1  +     called several times, from selecthost, !$isprep
+    #  final       1  none  called (hopefully) once, from capture-logs
+    #                        (once means within this job)
+    # Notes:
+    #   1   causes a new row to be added to the host lifecycle
+    #       subject to sigil check
+    #   2   removes other rows from the host lifecycle
+    #
+    # Where "sigil" above is nonempty: we look at the runvar.  If the
+    # runvar's final sigil is the same, we conclude that the needed
+    # work has already been done: ie, we do not need to add a row to
+    # the table nor do we necessarily need to update the runvar.
+    #
+    # If the sigil is not right, we replace the runvar with a history
+    # string derived from the lifecycle table, and (if appropriate)
+    # add a row.
+    #
+    # In principle it might be useful to update the runvar every time
+    # because the lifecycle table might have gained rows from other
+    # jobs, but we would like to avoid burdening select with more
+    # pratting about.  So we leave that for capture-logs and
+    # reuse/final, ie when newsigil is none.
+    #
+    # The runvar is
+    #     <ident>_lifecycle
+    # and contains space-separated entries
+    #     [+@][<flight>.][<job>]:<stepno>[,<ident>]
+    #     [+@]<stepno>              same flight, job, ident; new stepno
+    #     "["<omitted-count>"]"     ie literal [ ]
+    #     [+]?<taskid>              task not within a flight/job
+    # where omitted [,<ident>] means "host";
+    # omitted <flight> and <job> mean this runvar's flight and/or job;
+    # and then at the end at most one of the sigils
+    #     @                      last call was selectprep
+    #     +                      last call was select
+    #     <none>                 last call was final
+    # items with no such sigil don't appear in build jobs
+    # and instead appear as [<omitted-count>] eg "[4]"
+
+    return if $ho->{Host}; # This host *has* a host - ie, nested
+
+    my $ttaskid = findtask();
+    my $hostname = $ho->{Name};
+    my $tident = $ho->{Ident};
+    my $tstepno = $mo->current_stepno();
+
+    if ($mode eq 'wiped') {
+	db_retry($flight, [qw(running)], $dbh_tests,[], sub {
+            $dbh_tests->do(<<END, {}, $hostname);
+                DELETE FROM host_lifecycle h
+                      WHERE hostname=?
+                        AND NOT EXISTS(
+                SELECT 1
+		  FROM tasks t
+		 WHERE t.live
+		   AND t.taskid = h.taskid
+                );
+END
+        });
+	logm("host lifecycle: $hostname: wiped, cleared out old info");
+	return;
+    }
+
+    my $newsigil =
+      $mode eq 'selectprep' ? '@' :
+      $mode eq 'select'     ? '+' :
+      $mode eq 'final'      ? ''  :
+      confess "$mode ?";
+
+    my $scanq = $dbh_tests->prepare(<<END);
+	   SELECT h.flight, h.job, h.isprep, h.ident, h.stepno,
+                  t.live, t.taskid,
+                  h2.lcseq later_notprep
+	     FROM host_lifecycle h
+        LEFT JOIN host_lifecycle h2
+               ON h2.hostname = h.hostname 
+              AND h2.flight   = h.flight
+              AND h2.job      = h.job
+              AND h2.ident    = h.ident
+              AND h2.taskid   = h.taskid
+              AND h2.lcseq    > h.lcseq
+              AND h.isprep AND NOT h2.isprep
+        LEFT JOIN tasks t
+               ON h.taskid = t.taskid
+            WHERE h.hostname = ?
+         ORDER BY h.lcseq;
+END
+    my $insertq = $dbh_tests->prepare(<<END);
+        INSERT INTO host_lifecycle
+                 (hostname, taskid, flight, job, isprep, ident, stepno)
+          VALUES (?,        ?,      ?,      ?,   ?,      ?,     ?     )
+END
+
+    my $ojvn = "$ho->{Ident}_lifecycle";
+    my $firstrun;
+
+    if (length $r{$ojvn}) {
+	my ($oldsigil,) = reverse split / /, $r{$ojvn};
+	$oldsigil = '' unless $oldsigil =~ m/^\W$/;
+	return if $newsigil ne '' && $oldsigil eq $newsigil;
+    } else {
+	$firstrun = 1;
+    }
+
+    my @lifecycle;
+    db_retry($dbh_tests,[], sub {
+	my $elided;
+	@lifecycle = ();
+        my %tj_seen;
+	# keys in %tj_seen are [@][<flight>.][<job>] or ?<taskid>
+        $scanq->execute($hostname);
+
+	while (my $o = $scanq->fetchrow_hashref()) {
+	    my $olive =
+	      # Any job which appeared since we started thinking
+	      # about this must have been concurrent with us,
+	      # even if it is dead now.
+	      (!$firstrun || $o->{live}) &&
+	      # If this task is still live, we need to have something
+	      # with a live mark, generally all the prep will have
+	      # occurred already, so we don't mark the prep as live
+	      # if there's a later nonprep step.
+	      !$o->{later_notprep};
+
+	    my $olivemark = !!$olive && '+';
+	    if (defined($flight) && defined($o->{flight}) &&
+		$o->{flight} eq $flight &&
+		$o->{job} eq $job) {
+		# Don't put the + mark on our own entries.
+		$olivemark = '';
+	    }
+
+	    my $oisprepmark = !!$o->{isprep} && '@';
+	    my $omarks = $olivemark.$oisprepmark;
+
+	    my $otj = '';
+	    if (!defined $o->{flight}) {
+		$otj .= "?$o->{taskid}";
+	    } else {
+		$otj .= "$o->{flight}." if $o->{flight} ne $flight;
+		$otj .= $o->{job} if $o->{job} ne $job;
+	    }
+	    next if $tj_seen{$oisprepmark.$otj}++;
+
+	    if (!$omarks && !$olive && defined($o->{flight}) &&
+		$ho->{Shared} &&
+		$ho->{Shared}{Type} =~ m/^build-/ &&
+		!$tj_seen{"\@$otj"} # do not elide use if we showed prep
+	       ) {
+		# elide previous, non-concurrent, build jobs
+		if (!$elided) { $elided = [ scalar(@lifecycle), 0]; }
+		$lifecycle[$elided->[0]] = "[".(++$elided->[1])."]";
+		next;
+	    }
+
+	    my $osuffix = !!(defined($o->{ident}) && $o->{ident} ne 'host')
+	      && ",$o->{ident}";
+
+	    my ($lastuncompr,) = grep { !m{^\W*\d+$} } reverse @lifecycle;
+	    if (defined($lastuncompr) &&
+		$lastuncompr =~ m{^\W*\Q$otj\E:\d+\Q$osuffix\E$}) {
+		push @lifecycle, "$omarks$o->{stepno}";
+	    } else {
+		push @lifecycle, "$omarks$otj:$o->{stepno}$osuffix";
+	    }
+	}
+	if (defined $flight) {
+	    $insertq->execute($hostname, $ttaskid,
+			      $flight, $job,
+			      ($mode eq 'selectprep')+0,
+                # ^ DBD::Pg doesn't accept perl canonical false for bool!
+                #   https://rt.cpan.org/Public/Bug/Display.html?id=133229
+			      $tident, $tstepno);
+	} else {
+	    $insertq->execute($hostname, $ttaskid,
+			      undef,undef,
+			      undef,
+			      undef,undef);
+	}
+    });
+
+    if (defined $flight) {
+	push @lifecycle, $newsigil if length $newsigil;
+	store_runvar($ojvn, "@lifecycle");
+    }
+}
+
+sub current_stepno ($) { #method
+    my ($jd) = @_;
+    my $testid = $ENV{OSSTEST_TESTID} // return undef;
+    my $checkq = $dbh_tests->prepare(<<END);
+        SELECT stepno
+          FROM steps
+         WHERE flight=?
+           AND job=?
+           AND testid=?
+END
+    my $stepno;
+    db_retry($flight,[qw(running)], $dbh_tests,[],sub {
+	$checkq->execute($flight,$job,$testid);
+	($stepno) = $checkq->fetchrow_array();
+    });
+    return $stepno;
+}
+
 1;
diff --git a/Osstest/JobDB/Standalone.pm b/Osstest/JobDB/Standalone.pm
index 1db4dc78..6e1ae158 100644
--- a/Osstest/JobDB/Standalone.pm
+++ b/Osstest/JobDB/Standalone.pm
@@ -137,4 +137,6 @@ sub jobdb_db_glob ($) { #method
 
 sub can_anoint ($) { return 0; }
 
+sub jobdb_host_update_lifecycle_info { } #method
+
 1;
diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 28381f05..22141981 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -88,6 +88,7 @@ BEGIN {
                       serial_fetch_logs set_host_property modify_host_flag
                       propname_massage propname_check
                       hostprop_putative_record hostflag_putative_record
+                      host_update_lifecycle_info
          
                       get_stashed open_unique_stashfile compress_stashed
                       dir_identify_vcs
@@ -174,6 +175,7 @@ our @accessible_runvar_pats =
       host_console                   *_host_console
       host_hostflagadjust            *_host_hostflagadjust
       host_hostflags                 *_host_hostflags
+      host_lifecycle                 *_host_lifecycle
       host_linux_boot_append         *_host_linux_boot_append
       host_ip                        *_host_ip
       host_power_install             *_host_power_install
@@ -3166,6 +3168,16 @@ sub sha256file ($;$) {
     return $truncate ? substr($digest, 0, $truncate) : $digest;
 }
 
+sub host_update_lifecycle_info ($$) {
+    my ($ho, $mode) = @_;
+    # $mode is
+    #  selectprep       called several times, from selecthost, $isprep
+    #  wiped            called once, when install succeeds
+    #  select           called several times, from selecthost, !$isprep
+    #  final            called hopefully once, from capture-logs
+    $mjobdb->jobdb_host_update_lifecycle_info($ho, $mode)
+}
+
 sub host_shared_mark_ready($$;$$) {
     my ($ho,$sharetype, $oldstate, $newstate) = @_;
 
diff --git a/schema/host-lifecycle.sql b/schema/host-lifecycle.sql
new file mode 100644
index 00000000..7f1f5bb0
--- /dev/null
+++ b/schema/host-lifecycle.sql
@@ -0,0 +1,43 @@
+-- ##OSSTEST## 012 Preparatory
+--
+-- Records the jobs which have touched a host, for host sharing/reuse
+-- The information here is ephemeral - it is cleared when a host is
+-- reinitialised.  The information is persisted by being copied
+-- into a runvar for each job.
+
+CREATE SEQUENCE host_lifecycle_lcseq_seq
+    NO CYCLE;
+
+CREATE TABLE host_lifecycle (
+    hostname     TEXT NOT NULL,
+    lcseq        INTEGER NOT NULL DEFAULT nextval('host_lifecycle_lcseq_seq'),
+    taskid       INTEGER NOT NULL, -- no constraint, tasks can get deleted
+    flight       INTEGER,
+    job          TEXT,
+    stepno       INTEGER,
+    ident        TEXT,
+    isprep       BOOLEAN,
+    PRIMARY KEY  (hostname, lcseq),
+--  restype      TEXT GENERATED ALWAYS AS ('host'),
+--  FOREIGN KEY  (restype,hostname)    REFERENCES resources(restype, resname),
+--     ^ those two omitted because not supported until pgsql 13
+--  FOREIGN KEY (flight, job)         REFERENCES jobs(flight, job),
+--      ^ omitted because the next constraint implies it
+    FOREIGN KEY  (flight, job, stepno) REFERENCES steps(flight, job, stepno),
+    CHECK ((
+	flight       IS NOT NULL AND
+	job          IS NOT NULL AND
+	stepno       IS NOT NULL AND
+	ident        IS NOT NULL AND
+	isprep       IS NOT NULL
+    ) OR (
+	flight       IS     NULL AND
+	job          IS     NULL AND
+	stepno       IS     NULL AND
+	ident        IS     NULL AND
+	isprep       IS     NULL
+    ))
+);
+
+ALTER SEQUENCE host_lifecycle_lcseq_seq
+    OWNED BY host_lifecycle.lcseq;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:18:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:18:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3637.10471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE0t-0006Vt-Kq; Wed, 07 Oct 2020 18:18:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3637.10471; Wed, 07 Oct 2020 18:18:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE0t-0006Vk-Hn; Wed, 07 Oct 2020 18:18:07 +0000
Received: by outflank-mailman (input) for mailman id 3637;
 Wed, 07 Oct 2020 18:18:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE0r-0006UA-RO
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:05 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb70c138-20c1-4e8d-987d-9f2ded624cf7;
 Wed, 07 Oct 2020 18:18:03 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk7-0007CF-Ge; Wed, 07 Oct 2020 19:00:47 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 57/82] TestSupport: Provide runvar_is_synth
Date: Wed,  7 Oct 2020 18:59:59 +0100
Message-Id: <20201007180024.7932-58-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Internally we use a hash %r_notsynth.  This allows us to avoid
adding code to store_runvar etc.
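
The pattern is small enough to sketch (illustrative Python, not the
actual Perl; names mirror the patch but the helpers are invented):
runvars loaded from the database get an entry in a side map, so
anything stored later at runtime is absent from it and hence reported
as synthetic, without the store path changing at all.

```python
r = {}           # runvar name -> value (the analogue of %r)
r_notsynth = {}  # name -> True iff loaded from the db as non-synthetic

def load_from_db(rows):
    # rows as (name, val, synth), like the widened SELECT in the patch
    for name, val, synth in rows:
        r[name] = val
        r_notsynth[name] = not synth

def store_runvar(name, val):
    r[name] = val  # deliberately does not touch r_notsynth

def runvar_is_synth(name):
    # anything not recorded as non-synthetic counts as synthetic
    return not r_notsynth.get(name)
```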

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 634d6d2e..ce13d3a6 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -52,7 +52,7 @@ BEGIN {
                       store_runvar get_runvar get_runvar_maybe
                       get_runvar_default need_runvars
                       unique_incrementing_runvar next_unique_name
-                      stashfilecontents
+                      stashfilecontents runvar_is_synth
 
                       target_cmd_root_status target_cmd_output_root_status
                       target_cmd_root target_cmd target_cmd_build
@@ -147,7 +147,7 @@ BEGIN {
     @EXPORT_OK   = qw();
 }
 
-our (%r,$flight,$job,$stash);
+our (%r,$flight,$job,$stash,%r_,%r_notsynth);
 
 our %timeout= qw(RebootDown   100
                  RebootUp     400
@@ -178,12 +178,13 @@ sub tsreadconfig () {
         logm("starting $flight.$job");
 
         my $q= $dbh_tests->prepare(<<END);
-            SELECT name, val FROM runvars WHERE flight=? AND job=?
+            SELECT name, val, synth FROM runvars WHERE flight=? AND job=?
 END
         $q->execute($flight, $job);
         my $row;
         while ($row= $q->fetchrow_hashref()) {
             $r{ $row->{name} }= $row->{val};
+	    $r_notsynth{ $row->{name} }= !$row->{synth};
             logm("setting $row->{name}=$row->{val}");
         }
         $q->finish();
@@ -434,6 +435,11 @@ END
     return $value;
 }
 
+sub runvar_is_synth ($) {
+    my ($key) = @_;
+    return !$r_notsynth{$key};
+}
+
 sub target_adjust_timeout ($$) {
     my ($ho,$timeoutref) = @_; # $ho might be a $gho
     my $nestinglvl = $ho->{NestingLevel} // $ho->{Host}{NestingLevel};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:18:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:18:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3638.10483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE0x-0006Yn-UF; Wed, 07 Oct 2020 18:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3638.10483; Wed, 07 Oct 2020 18:18:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE0x-0006Yf-Qg; Wed, 07 Oct 2020 18:18:11 +0000
Received: by outflank-mailman (input) for mailman id 3638;
 Wed, 07 Oct 2020 18:18:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE0w-0006UA-Ra
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:10 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1901b5e8-0928-4160-9047-aaf276f7001d;
 Wed, 07 Oct 2020 18:18:08 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk0-0007CF-Aq; Wed, 07 Oct 2020 19:00:40 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 28/82] host allocation: Support new reuse-* magic hostflag
Date: Wed,  7 Oct 2020 18:59:30 +0100
Message-Id: <20201007180024.7932-29-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This is like share-* except it has different MaxTasks and MaxWear
parameters.
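
Roughly, the new flag differs from share-* only in the limits it sets
(sketch in illustrative Python; the dict shape and function are
invented, but the MaxTasks=1 / MaxWear=10 values match the patch):

```python
def sharing_params(flag, harness_rev, defaults):
    """Map a share-/reuse- hostflag to sharing parameters."""
    if flag.startswith("share-"):
        return {"Shared": f"{flag[len('share-'):]} {harness_rev}",
                "MaxTasks": defaults["SharedMaxTasks"],
                "MaxWear": defaults["SharedMaxWear"]}
    if flag.startswith("reuse-"):
        return {"Shared": f"{flag[len('reuse-'):]} {harness_rev}",
                "MaxTasks": 1,   # only one job at a time
                "MaxWear": 10}   # but many successive jobs
    return None  # not a sharing flag
```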

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 README.planner              |  7 +++++++
 ts-hosts-allocate-Executive | 15 +++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/README.planner b/README.planner
index f134d716..a9275f12 100644
--- a/README.planner
+++ b/README.planner
@@ -230,6 +230,13 @@ hostflag:
     identically.  The osstest test harness revision is automatically
     appended and therefore does not need to be included.
 
+  reuse-SHARING
+
+    The host may be reused, one job after another.  Like share- but
+    only permits one job at a time, and has a much higher limit for
+    the number of successive jobs.  ts-host-test-share should be used
+    to arrange for the host's state to be recorded appropriately.
+
   equiv-FORMALTOKEN
 
     For each equiv-FORMALTOKEN job flag set on one or more IDENTs, a
diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 2c18a739..6fcfd2e3 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -279,6 +279,16 @@ sub compute_hids () {
                     " $hid->{DefaultSharedMaxTasks}".
                     " $hid->{DefaultSharedMaxWear}\n";
                 next;
+            } elsif ($flag =~ m/^reuse-/) {
+                die if exists $hid->{Shared};
+                my $shr= $'; #'
+                $hid->{Shared}= $shr." ".get_harness_rev();
+                $hid->{SharedMaxTasks}= 1;
+		$hid->{SharedMaxWear}= 10;
+                print DEBUG "HID $ident FLAG $flag SHARE-REUSE $shr".
+                    " $hid->{SharedMaxTasks}".
+                    " $hid->{SharedMaxWear}\n";
+                next;
             } elsif ($flag =~ m/^equiv-/) {
                 my $formalclass= $'; #'
                 die if exists $hid->{Equiv};
@@ -484,6 +494,11 @@ END
         foreach my $kcomb (qw(Shared-Max-Wear Shared-Max-Tasks)) {
             my $kdb= $kcomb;  $kdb =~ y/-A-Z/ a-z/;
             my $khash= $kcomb;  $khash =~ y/-//d;
+	    if ($hid->{$khash}) {
+		$candrow->{$khash} = $hid->{$khash};
+                print DEBUG "$dbg $khash FROM-HID\n";
+		next;
+	    }
             $resprop_q->execute($candrow->{restype},$candrow->{resname},$kdb);
             my $proprow= $resprop_q->fetchrow_hashref();
             my $val= $proprow->{val};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:18:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:18:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3639.10495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE13-0006dD-Bh; Wed, 07 Oct 2020 18:18:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3639.10495; Wed, 07 Oct 2020 18:18:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE13-0006d5-7Q; Wed, 07 Oct 2020 18:18:17 +0000
Received: by outflank-mailman (input) for mailman id 3639;
 Wed, 07 Oct 2020 18:18:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE11-0006UA-Rv
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:15 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05ce08a0-9e4a-472a-ab00-cff487857eab;
 Wed, 07 Oct 2020 18:18:12 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjv-0007CF-MO; Wed, 07 Oct 2020 19:00:35 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 14/82] target setup refactoring: Merge target_kernkind_*
Date: Wed,  7 Oct 2020 18:59:16 +0100
Message-Id: <20201007180024.7932-15-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Combine these two functions, renaming the result to a name which
doesn't mention "kernkind".

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm      |  3 +--
 Osstest/TestSupport.pm | 11 ++++-------
 ts-debian-fixup        |  3 +--
 3 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index b140ede2..85fd16da 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -68,8 +68,7 @@ sub debian_boot_setup ($$$$$;$) {
     # $xenhopt==undef => is actually a guest, do not set up a hypervisor
     my ($ho, $want_kernver, $want_xsm, $xenhopt, $distpath, $hooks) = @_;
 
-    target_kernkind_check($ho);
-    target_kernkind_console_inittab($ho,$ho,"/");
+    target_setup_rootdev_console_inittab($ho,$ho,"/");
 
     my $kopt;
     my $console= target_var($ho,'console');
diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index faac106f..fd7b238b 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -105,7 +105,7 @@ BEGIN {
                       host_get_free_memory
 
                       target_ping_check_down target_ping_check_up
-                      target_kernkind_check target_kernkind_console_inittab
+                      target_setup_rootdev_console_inittab
                       target_var target_var_prefix
                       selectguest prepareguest more_prepareguest_hvm
 		      guest_template
@@ -2562,8 +2562,9 @@ sub target_var ($$) {
     return undef;
 }
 
-sub target_kernkind_check ($) {
-    my ($gho) = @_;
+sub target_setup_rootdev_console_inittab ($$$) {
+    my ($ho, $gho, $root) = @_;
+
     my $pfx= target_var_prefix($gho);
     my $kernkind= $r{$pfx."kernkind"} // 'pvops';
     my $isguest= exists $gho->{Guest};
@@ -2573,10 +2574,6 @@ sub target_kernkind_check ($) {
     } elsif ($kernkind !~ m/2618/) {
         store_runvar($pfx."console", 'xvc0') if $isguest;
     }
-}
-
-sub target_kernkind_console_inittab ($$$) {
-    my ($ho, $gho, $root) = @_;
 
     my $inittabpath= "$root/etc/inittab";
     my $console= target_var($gho,'console');
diff --git a/ts-debian-fixup b/ts-debian-fixup
index 2184212b..a878fe50 100755
--- a/ts-debian-fixup
+++ b/ts-debian-fixup
@@ -209,8 +209,7 @@ sub writecfg () {
 savecfg();
 ether();
 access();
-target_kernkind_check($gho);
-$console = target_kernkind_console_inittab($ho,$gho,"$mountpoint");
+$console = target_setup_rootdev_console_inittab($ho,$gho,"$mountpoint");
 
 debian_overlays($ho, \&overlay);
 target_cmd_root($ho, <<END.debian_overlays_fixup_cmd($ho, $mountpoint));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:18:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:18:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3640.10506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE18-0006ik-LG; Wed, 07 Oct 2020 18:18:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3640.10506; Wed, 07 Oct 2020 18:18:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE18-0006ia-HY; Wed, 07 Oct 2020 18:18:22 +0000
Received: by outflank-mailman (input) for mailman id 3640;
 Wed, 07 Oct 2020 18:18:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE16-0006UA-SC
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:20 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e994920-52a7-45e0-a646-0d04426bbfe9;
 Wed, 07 Oct 2020 18:18:17 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk5-0007CF-0w; Wed, 07 Oct 2020 19:00:45 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 46/82] shared/reuse: Use @ for ts-xen-build-prep
Date: Wed,  7 Oct 2020 18:59:48 +0100
Message-Id: <20201007180024.7932-47-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Pass @ from sg-run-job.  This is the only call site for
ts-xen-build-prep, so it can lose the open-coded test for SharedReady.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-run-job        | 2 +-
 ts-xen-build-prep | 1 -
 2 files changed, 1 insertion(+), 2 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index 067b28db..d46a3a62 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -753,7 +753,7 @@ proc allocate-build-host {ostype} {
 proc prepare-build-host-linux {} {
     global jobinfo
     run-ts broken host-install(*) ts-host-install-twice + --build
-    run-ts . host-build-prep ts-xen-build-prep
+    run-ts . host-build-prep ts-xen-build-prep + @host
 }
 
 proc prepare-build-host-freebsd {} {
diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index dabb9921..092bbffe 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -28,7 +28,6 @@ tsreadconfig();
 our ($whhost) = @ARGV;
 $whhost ||= 'host';
 our $ho= selecthost($whhost);
-exit 0 if $ho->{SharedReady};
 
 our ($vg,$lv);
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3641.10519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1E-0006oy-Vz; Wed, 07 Oct 2020 18:18:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3641.10519; Wed, 07 Oct 2020 18:18:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1E-0006op-Rh; Wed, 07 Oct 2020 18:18:28 +0000
Received: by outflank-mailman (input) for mailman id 3641;
 Wed, 07 Oct 2020 18:18:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE1C-0006nN-UK
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:26 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 102b99e3-6c34-4092-a542-156eab1bd2f5;
 Wed, 07 Oct 2020 18:18:19 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk9-0007CF-H3; Wed, 07 Oct 2020 19:00:49 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE1C-0006nN-UK
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:26 +0000
X-Inumbo-ID: 102b99e3-6c34-4092-a542-156eab1bd2f5
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 102b99e3-6c34-4092-a542-156eab1bd2f5;
	Wed, 07 Oct 2020 18:18:19 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk9-0007CF-H3; Wed, 07 Oct 2020 19:00:49 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 65/82] host reuse: Make share type hash more easily greppable
Date: Wed,  7 Oct 2020 19:00:07 +0100
Message-Id: <20201007180024.7932-66-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Use - and _ to make up the base64 alphabet instead of + and /, so the
share type hash is easier to grep for.
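
The transliteration the patch adds (`y{/+}{-_}`) converts the digest to
the base64url alphabet. A minimal Python analogue of the same idea
(function name hypothetical; the real code is Perl using
`Digest::SHA::sha224_base64`, which emits no `=` padding):

```python
import base64
import hashlib

def sharetype_digest(runvartext: str) -> str:
    # SHA-224 of the runvar text, base64-encoded, then '+' and '/'
    # swapped for '-' and '_' (the base64url alphabet) so the result
    # contains no regexp/glob metacharacters and greps cleanly.
    d = base64.b64encode(hashlib.sha224(runvartext.encode()).digest())
    return d.decode().rstrip("=").translate(str.maketrans("+/", "-_"))
```

The same result could be had from `base64.urlsafe_b64encode` directly;
the two-character transliteration mirrors the one-line Perl change.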

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-reuse | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-host-reuse b/ts-host-reuse
index 701070b2..0ecbb0bd 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -75,6 +75,7 @@ sub compute_test_sharetype () {
 	push @runvartexts, "$key=$r{$key}";
     }
     my $digest = sha224_base64("@runvartexts");
+    $digest =~ y{/+}{-_};
     $sharetype = "test-$flight-$digest";
     logm "share type $sharetype; hash is of: @runvartexts";
     return $sharetype;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:18:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:18:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3643.10531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1e-00072r-9D; Wed, 07 Oct 2020 18:18:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3643.10531; Wed, 07 Oct 2020 18:18:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1e-00072j-5x; Wed, 07 Oct 2020 18:18:54 +0000
Received: by outflank-mailman (input) for mailman id 3643;
 Wed, 07 Oct 2020 18:18:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE1d-00072Q-H9
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:53 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5e6afeb-4aa4-4a7f-aec1-f0ad7aa00adc;
 Wed, 07 Oct 2020 18:18:52 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjv-0007CF-6U; Wed, 07 Oct 2020 19:00:35 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE1d-00072Q-H9
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:53 +0000
X-Inumbo-ID: e5e6afeb-4aa4-4a7f-aec1-f0ad7aa00adc
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e5e6afeb-4aa4-4a7f-aec1-f0ad7aa00adc;
	Wed, 07 Oct 2020 18:18:52 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjv-0007CF-6U; Wed, 07 Oct 2020 19:00:35 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 12/82] target setup refactoring: Move target_kernkind_check
Date: Wed,  7 Oct 2020 18:59:14 +0100
Message-Id: <20201007180024.7932-13-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This is OK because nothing in access() looks at the rootdev or console
runvars, which are what target_kernkind_check sets.

No functional change other than perhaps to log output.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-debian-fixup | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-debian-fixup b/ts-debian-fixup
index 528fb03b..34051137 100755
--- a/ts-debian-fixup
+++ b/ts-debian-fixup
@@ -209,8 +209,8 @@ sub writecfg () {
 
 savecfg();
 ether();
-target_kernkind_check($gho);
 access();
+target_kernkind_check($gho);
 
 debian_overlays($ho, \&overlay);
 target_cmd_root($ho, <<END.debian_overlays_fixup_cmd($ho, $mountpoint));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:18:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:18:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3644.10543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1j-00077c-L5; Wed, 07 Oct 2020 18:18:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3644.10543; Wed, 07 Oct 2020 18:18:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1j-00077U-FR; Wed, 07 Oct 2020 18:18:59 +0000
Received: by outflank-mailman (input) for mailman id 3644;
 Wed, 07 Oct 2020 18:18:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE1i-00072Q-FZ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:58 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4107dd9a-78d9-4391-844b-8506c6bdd8d7;
 Wed, 07 Oct 2020 18:18:54 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjz-0007CF-Mp; Wed, 07 Oct 2020 19:00:39 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE1i-00072Q-FZ
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:18:58 +0000
X-Inumbo-ID: 4107dd9a-78d9-4391-844b-8506c6bdd8d7
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4107dd9a-78d9-4391-844b-8506c6bdd8d7;
	Wed, 07 Oct 2020 18:18:54 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjz-0007CF-Mp; Wed, 07 Oct 2020 19:00:39 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 26/82] host allocation: selecthost: allow sort-of-selection of prospective hosts
Date: Wed,  7 Oct 2020 18:59:28 +0100
Message-Id: <20201007180024.7932-27-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

If one passes a trueish value for $prospective, selecthost does not
worry about whether any host has actually been selected.  It does a
limited amount of prep work.

This will be useful if we want to know some of the non-host-specific
information selecthost computes - in particular, $ho->{Suite} etc.

No functional change with existing callers.
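
A rough sketch of the control flow this gives selecthost, in Python for
illustration (the real routine is Perl in Osstest/TestSupport.pm; names
and the returned fields here are simplified assumptions):

```python
def selecthost(ident, runvars, none_ok=False, prospective=False):
    # Look up which physical host this role maps to, if any.
    name = runvars.get(ident)
    if name is None and not prospective:
        if none_ok:
            return None
        raise RuntimeError(f"no specified {ident}")
    # Non-host-specific information is computed regardless.
    ho = {"Ident": ident, "Name": name, "OS": "debian"}
    if prospective:
        # Prospective callers get the partially-populated record and
        # skip all per-host setup (power, console, nesting, ...).
        return ho
    # ... full per-host setup would continue here ...
    return ho
```

With `prospective` true the caller can read fields like the suite or OS
before any host has actually been allocated.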

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 7292a329..3d5f0be3 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -1170,9 +1170,9 @@ sub power_state ($$;$) {
 
 #---------- host selection and properties ----------
 
-sub selecthost ($;$);
-sub selecthost ($;$) {
-    my ($ident, $none_ok) = @_;
+sub selecthost ($;$$);
+sub selecthost ($;$$) {
+    my ($ident, $none_ok, $prospective) = @_;
     # must be run outside transaction
 
     # $ident is <identspec>
@@ -1199,7 +1199,7 @@ sub selecthost ($;$) {
         $r{$ident}= $name;
     } else {
         $name= $r{$ident};
-	if (!defined $name) {
+	if (!defined $name and !$prospective) {
 	    return undef if $none_ok;
 	    die "no specified $ident";
 	}
@@ -1220,6 +1220,8 @@ sub selecthost ($;$) {
         $ho->{OS} = target_var($ho, "os") // "debian";
     }
 
+    return $ho if $prospective;
+
     #----- handle hosts which are themselves guests (nested) -----
 
     if ($name =~ s/^(.*)://) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:19:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:19:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3645.10554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1o-0007CV-Ss; Wed, 07 Oct 2020 18:19:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3645.10554; Wed, 07 Oct 2020 18:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1o-0007CO-Pg; Wed, 07 Oct 2020 18:19:04 +0000
Received: by outflank-mailman (input) for mailman id 3645;
 Wed, 07 Oct 2020 18:19:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE1n-00072Q-Fm
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:03 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 348605a2-3329-47c3-ae29-9223e6c5d1b0;
 Wed, 07 Oct 2020 18:18:58 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkB-0007CF-1O; Wed, 07 Oct 2020 19:00:51 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE1n-00072Q-Fm
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:03 +0000
X-Inumbo-ID: 348605a2-3329-47c3-ae29-9223e6c5d1b0
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 348605a2-3329-47c3-ae29-9223e6c5d1b0;
	Wed, 07 Oct 2020 18:18:58 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkB-0007CF-1O; Wed, 07 Oct 2020 19:00:51 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 72/82] resource reporting: Report host reuse/sharing in job report
Date: Wed,  7 Oct 2020 19:00:14 +0100
Message-Id: <20201007180024.7932-73-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Compatibility: in principle this might generate erroneous reports
which omit sharing/reuse information for allocations made by jobs
using older versions of osstest.

However, we do not share or reuse hosts across different osstest
versions, so this cannot occur.
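
The patch accumulates report questions into WITH ... VALUES common table
expressions so the whole lifecycle lookup is one round trip. A minimal
Python sketch of that accumulation (hypothetical helper; the real code
is the Perl valuestable_* machinery below, which also handles the
zero-row case via a NULL row plus LIMIT 0, omitted here):

```python
def values_cte(name, cols, rows):
    # cols may carry "::type" suffixes, as in "taskid::integer";
    # the SELECT keeps the cast, the column list drops it.
    placeholders = ",\n          ".join(
        "(" + ",".join("?" for _ in row) + ")" for row in rows)
    colnames = ", ".join(c.split("::")[0] for c in cols)
    typed = ", ".join(cols)
    sql = (f"  {name} ({colnames}) AS (SELECT {typed} FROM (VALUES\n"
           f"          {placeholders}) {name} ({colnames})),\n")
    # Parameters are flattened in row order for the driver to bind.
    params = [v for row in rows for v in row]
    return sql, params
```

Each accumulated question becomes one parenthesised placeholder group,
and the caller concatenates several such CTEs under a single WITH.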

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 331 ++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 330 insertions(+), 1 deletion(-)

diff --git a/sg-report-flight b/sg-report-flight
index a1f424c5..0413a730 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -29,9 +29,10 @@ use POSIX;
 use IPC::Open2;
 use Data::Dumper;
 use File::Path;
+use Carp;
 
 use Osstest;
-use Osstest::Executive;
+use Osstest::Executive qw(:DEFAULT :colours);
 
 our $specflight;
 our %specver;
@@ -1122,6 +1123,68 @@ END
     return @failures;
 }
 
+# Machinery for generating WITH ... VALUES common table expressions.
+# Use it like this:
+#
+#    1. $some_accum = {}
+#
+#    2. valuestable_add_row($some_accum, $val, $val, $val)
+#          # ^ zero or more times
+#
+#    3. $qtxt = "WITH\n";
+#       @qparams = ();
+#       valuestable_with(\$qtxt, \@qparams, 'cte_name',
+#              qw(txtcol1 txtcol2 intcol::integer boolcol::bool ...));
+#
+# The resulting CTE table will have the name, and column names,
+# you specified.  For non-TEXT columns you must specify the type
+# because [Postgre]SQL's type inference doesn't work properly here.
+#
+# valuestable_with will always leave $qtxt ending with ",\n"
+# so you can call it multiple times.
+
+sub valuestable_add_row ($@) {
+    my ($accum, @row) = @_;
+    # $accum->{Ncols}
+    # $accum->{Params}[]
+    # $accum->{Qtxt}
+    $accum->{Ncols} //= scalar @row;
+    confess unless $accum->{Ncols} == @row;
+    push @{ $accum->{Params} }, @row;
+    $accum->{Qtxt} //= '';
+    $accum->{Qtxt} =~ s/.$/$&,/;
+    $accum->{Qtxt} .= "          (".join(',', ('?',) x @row).")\n";
+}
+sub valuestable_with ($$$@) {
+    my ($qtxtr, $paramsr, $ctename, $accum, @cols) = @_;
+    my $limit = '';
+    $accum->{Qtxt} //= do {
+	# Oh my god
+	# select * from (values );
+	# => ERROR:  syntax error at or near ")"
+	$limit = 'LIMIT 0';
+	"          (".join(',',
+			   map { m/::/ ? "NULL::$'" : "NULL" }
+			   @cols).")\n";
+    };
+    $accum->{Ncols} //= scalar @cols;
+    confess "$accum->{Ncols} != ".(scalar @cols)
+      unless $accum->{Ncols} == @cols;
+    my $cols = join(', ', @cols);
+    my $colsnotypes = join(', ', map { m/::/ ? $` : $_ } @cols);
+    $$qtxtr .= <<END;
+      $ctename ($colsnotypes) AS (SELECT
+            $cols FROM (VALUES
+$accum->{Qtxt}        $limit) $ctename ($colsnotypes)),
+
+END
+    push @$paramsr, @{ $accum->{Params} // [ ] };
+}
+
+sub nullcols {
+    join ", ", map { m/::/ ? "NULL::$' as $`" : "NULL as $_" } @_;
+}
+
 sub htmloutjob ($$) {
     my ($fi,$job) = @_;
     return unless defined $htmldir;
@@ -1213,6 +1276,272 @@ END
 <tr><td>Status:</td><td>$ji->{status}</td></tr>
 </table>
 <p>
+END
+
+    # ---------- resource reuse/sharing report ----------
+
+    # We translate the lifecycle runvars into a set of questions
+    # for the db.  But rather than doing one db query for each
+    # such question, we aggregate the questions into VALUES
+    # expressions and ask the db to produce a collated list of
+    # relevant information.  This has fewer round trips.
+
+    my $shareq_elided_accum = {};
+    my $shareq_tasks_accum = {};
+    my $shareq_main_accum = {};
+    foreach my $lc_var_row (@$runvar_table) {
+	next unless $lc_var_row->{name} =~ m{^(.*_?host)_lifecycle$};
+	my $tident = $1;
+	my $hostname = ($runvar_map{$tident} // next)->{val};
+	my $last_uncompr;
+	my $sort_index;
+	print DEBUG "SHARE LC $job $tident $lc_var_row->{val}\n";
+	foreach (split / /, $lc_var_row->{val}) {
+	    $sort_index++;
+	    if (m/^[\@\+]$/) {
+		valuestable_add_row $shareq_elided_accum,
+		  $tident, $hostname, undef, $&, $sort_index;
+		next;
+	    }
+	    if (m/^\[(\d+)\]$/) { # elided
+		valuestable_add_row $shareq_elided_accum,
+		  $tident, $hostname, $1, undef, $sort_index;
+		next;
+	    }
+	    my $olive = s/^\+//;
+	    if (m/^\?(\d+)$/) { # tasks
+		valuestable_add_row $shareq_tasks_accum,
+		  $tident, $hostname, $olive+0, $1, $sort_index;
+		next;
+	    }
+	    my $oisprep = s/^\@//;
+	    s{^\d+$}{ join ":$&", @$last_uncompr }e if $last_uncompr;
+	    if (my ($tprefix, $oflight, $ojob,
+		    $ostepno, $tsuffix, $oident) =
+		m{^((?:(\d+)\.)?([^:]+)?)\:(\d+)((?:,([^:]+))?)$}) {
+		# main
+		$last_uncompr = [ $tprefix, $tsuffix ];
+		$oflight ||= $specflight;
+		$ojob ||= $job;
+		$oident ||= 'host';
+		valuestable_add_row $shareq_main_accum,
+		  $tident, $hostname, $oflight, $ojob, $ostepno,
+		  $oisprep+0, $oident, $olive+0;
+		next;
+	    }
+	    confess "$tident $hostname $_ ?";
+	}
+    }
+    my @shareq_params;
+    my $shareq_txt = <<END;
+      WITH
+
+END
+
+    valuestable_with \$shareq_txt, \@shareq_params,
+      'q_elided', $shareq_elided_accum,
+      qw(tident hostname count::integer sigil sort_index::integer);
+
+    valuestable_with \$shareq_txt, \@shareq_params,
+      'q_tasks', $shareq_tasks_accum,
+      qw(tident hostname olive::bool taskid::integer sort_index::integer);
+
+    valuestable_with \$shareq_txt, \@shareq_params,
+      'q', $shareq_main_accum,
+      qw(tident hostname flight::integer job
+         stepno::integer oisprep::bool oident olive::bool);
+
+    # Helpers to reduce typing in the mapping from individual r_*
+    # table rows to the overall union (sum type) rows.
+    my $nullcols_main = nullcols(qw(
+        flight::integer job status oidents
+        started::integer rest_started::integer finished::integer
+    ));
+    my $nullcols_tasks = nullcols(qw(
+        taskid::integer type refkey username comment
+    ));
+    my $nullcols_elided = nullcols(qw(
+        elided::integer elided_sigil
+    ));
+
+    $shareq_txt .= <<END;
+      q2 AS
+      (SELECT q.*,
+	      (SELECT started
+		FROM steps s
+	       WHERE s.flight = q.flight
+		 AND s.job    = q.job 
+		 AND s.stepno = q.stepno
+		 AND oisprep)                    AS prep_started,
+	      (SELECT started
+		 FROM steps s
+		WHERE s.flight = q.flight
+		  AND s.job    = q.job
+		  AND s.stepno = q.stepno
+		  AND NOT oisprep)               AS rest_started,
+	      (SELECT max(finished)
+		 FROM steps s
+		WHERE s.flight = q.flight
+		  AND s.job    = q.job)          AS finished
+	FROM Q
+        ORDER BY q.tident),
+
+      r_main AS
+      (SELECT tident, hostname,
+              bool_or(olive)                     AS olive,
+              1                                  AS kind_sort,
+              flight, job,
+	      (SELECT status
+		 FROM jobs
+		WHERE jobs.flight = q2.flight
+		  AND jobs.job    = q2.job)      AS status,
+	      string_agg(DISTINCT oident,',')    AS oidents,
+	      min(prep_started)                  AS prep_started,
+	      min(rest_started)                  AS rest_started,
+	      max(finished)                      AS finished,
+	      $nullcols_tasks,
+	      $nullcols_elided,
+              NULL::integer                      AS sort_index
+	 FROM q2
+     GROUP BY tident, hostname, flight, job),
+
+      r_tasks AS
+      (SELECT tident, hostname, olive,
+              0                                  AS kind_sort,
+              $nullcols_main,
+              taskid, type, refkey, username, comment,
+              $nullcols_elided,
+              sort_index
+         FROM q_tasks NATURAL LEFT JOIN tasks),
+
+      r_elided AS
+      (SELECT tident, hostname, FALSE as olive,
+              2                                  AS kind_sort,
+              $nullcols_main,
+              $nullcols_tasks,
+              count                              AS elided,
+              sigil                              AS elided_sigil,
+              sort_index
+         FROM q_elided)
+
+-- The result row is effectively a sum type.  SQL doesn't have those.
+-- We just pile all the columns of the disjoint types together;
+-- some of them will be null for some variants.  The perl code can
+-- easily figure out which of the unioned CTEs a row came from.
+
+       SELECT * FROM r_main    UNION
+       SELECT * FROM r_tasks   UNION
+       SELECT * FROM r_elided
+     ORDER BY tident, hostname,
+	      kind_sort,
+	      finished, prep_started, rest_started, flight, job, oidents,
+	      sort_index
+END
+
+    print DEBUG "PREPARING SHAREQ\n";
+    my $shareq = db_prepare($shareq_txt);
+    print DEBUG Dumper(\@shareq_params);
+    $shareq->execute(@shareq_params);
+
+    my $share_any;
+    my $altcolour=1;
+    while (my $srow = $shareq->fetchrow_hashref()) {
+	print DEBUG "SHARE SROW ".Dumper($srow);
+	print H <<END if !$share_any++;
+<h2>Task(s) which might have affected this job's host(s)</h2>
+<p>
+<table rules="all"><tr>
+<th>role<br>(here)</td>
+<th>hostname</td>
+<th>rel.</td><!-- share reuse unknown -->
+<th>flight</td>
+<th>job</td>
+<th>role(s)<br>(there)</td>
+<th>install / prep.<br>started</td>
+<th>use</br>started</td>
+<th>last step<br>ended</td>
+<th>job<br>status</td>
+</tr>
+END
+	my $bgcolour = report_altcolour($altcolour ^= 1);
+	printf H <<END, $bgcolour, map { encode_entities $_ }
+<tr %s>
+<td align="center">%s</td>
+<td align="center"><a href="%s">%s</a></td>
+END
+	  $srow->{tident},
+	  "$c{ResultsHtmlPubBaseUrl}/host/$srow->{hostname}.html",
+	  $srow->{hostname};
+	my $rel = $srow->{olive} ?
+	  "<td align=\"center\" bgcolor=\"$red\">share</td>"
+	  : $srow->{prep_started} ?
+	  "<td align=\"center\" bgcolor=\"$purple\">prep.</td>"
+	  :
+	  "<td align=\"center\">reuse</td>";
+        if (defined $srow->{flight}) {
+	    my $furl = "$c{ReportHtmlPubBaseUrl}/$srow->{flight}/";
+	    my $jurl = "$furl/$srow->{job}/info.html";
+	    if ($srow->{flight} != $specflight) {
+		printf H <<END, $rel, map { encode_entities $_ }
+%s
+<td align="right"><a href="%s">%s</a></td>
+<td><a href="%s">%s</a></td>
+END
+		  $furl, $srow->{flight},
+		  $jurl, $srow->{job};
+	    } elsif ($srow->{job} ne $job) {
+		printf H <<END, $rel, map { encode_entities $_ }
+%s
+<td align="center">this</td>
+<td><a href="%s">%s</a></td>
+END
+		  $jurl, $srow->{job};
+	    } else {
+		printf H <<END;
+<td></td>
+<td align="center">this</td>
+<td align="center">this</td>
+END
+	    }
+	    printf H <<END,
+<td align="center">%s</td>
+<td>%s</td><td>%s</td><td>%s</td>
+END
+	      encode_entities($srow->{oidents}),
+	      map { $_ ? show_abs_time($_) : '' }
+	      $srow->{prep_started},
+	      $srow->{rest_started},
+	      !$srow->{olive} && $srow->{finished};
+	    my $info = report_run_getinfo($srow);
+	    print H <<END, 
+<td $info->{ColourAttr}>$info->{Content}</td>
+END
+	} elsif (defined $srow->{elided}) {
+	    printf H <<END, $srow->{elided};
+<td colspan="8" align="center">%d earlier job(s) elided</td>
+END
+	} elsif (defined $srow->{elided_sigil}) {
+	    printf H <<END;
+<td bgcolor="$yellow" colspan="8" align="center">
+this job incomplete, unknown number of other jobs elided
+</td>
+END
+	} elsif (defined $srow->{taskid}) {
+	    printf H <<END, $rel, map { encode_entities $_ }
+%s
+<td bgcolor="$yellow" colspan="7" align="center">?%s: %s</td>
+END
+	      $srow->{taskid},
+	      report_rogue_task_description($srow);
+	} else {
+	    confess Dumper($srow)." ?";
+	}
+	print H <<END;
+</tr>
+END
+    }
+    print H <<END if $share_any;
+</table>
 END
 
     print H <<END;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:19:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3646.10567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1u-0007HP-82; Wed, 07 Oct 2020 18:19:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3646.10567; Wed, 07 Oct 2020 18:19:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1u-0007HF-47; Wed, 07 Oct 2020 18:19:10 +0000
Received: by outflank-mailman (input) for mailman id 3646;
 Wed, 07 Oct 2020 18:19:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE1s-00072Q-Fv
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:08 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b62c7e83-dbae-4cf4-a3c7-fb9d424626b2;
 Wed, 07 Oct 2020 18:19:01 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk5-0007CF-9J; Wed, 07 Oct 2020 19:00:45 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE1s-00072Q-Fv
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:08 +0000
X-Inumbo-ID: b62c7e83-dbae-4cf4-a3c7-fb9d424626b2
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b62c7e83-dbae-4cf4-a3c7-fb9d424626b2;
	Wed, 07 Oct 2020 18:19:01 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk5-0007CF-9J; Wed, 07 Oct 2020 19:00:45 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 47/82] shared/reuse: Use @ for ts-host-install
Date: Wed,  7 Oct 2020 18:59:49 +0100
Message-Id: <20201007180024.7932-48-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Pass @ from sg-run-job.  These are all the call sites for
ts-host-install-*, so we can lose the open-coded test for SharedReady.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 sg-run-job      | 6 +++---
 ts-host-install | 1 -
 2 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index d46a3a62..c454d4ea 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -85,7 +85,7 @@ proc run-job {job} {
         }
     }
 
-    per-host-ts broken  host-install/@(*) ts-host-install-twice
+    per-host-ts @broken host-install/@(*) ts-host-install-twice
 
     per-host-prep
 
@@ -675,7 +675,7 @@ proc examine-host-prep {} {
     run-ts broken  =            ts-hosts-allocate     + host
 }
 proc examine-host-install-debian {} {
-    run-ts broken  host-install ts-host-install-twice + host
+    run-ts broken host-install  ts-host-install-twice + @host
 }
 proc examine-host-install-xen {} {
     examine-host-install-debian
@@ -752,7 +752,7 @@ proc allocate-build-host {ostype} {
 }
 proc prepare-build-host-linux {} {
     global jobinfo
-    run-ts broken host-install(*) ts-host-install-twice + --build
+    run-ts broken host-install(*) ts-host-install-twice + --build @host
     run-ts . host-build-prep ts-xen-build-prep + @host
 }
 
diff --git a/ts-host-install b/ts-host-install
index 924c1e06..b0fd2028 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -53,7 +53,6 @@ our ($whhost) = @ARGV;
 $whhost ||= 'host';
 our $ho= selecthost($whhost);
 exit 0 if $ho->{Flags}{'no-reinstall'};
-exit 0 if $ho->{SharedReady};
 
 our %timeout= qw(ReadPreseed  350
                  Sshd        2400);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:19:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:19:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3647.10579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1y-0007Mn-PS; Wed, 07 Oct 2020 18:19:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3647.10579; Wed, 07 Oct 2020 18:19:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE1y-0007Md-L0; Wed, 07 Oct 2020 18:19:14 +0000
Received: by outflank-mailman (input) for mailman id 3647;
 Wed, 07 Oct 2020 18:19:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE1x-00072Q-GG
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:13 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6c7b4c3-36aa-4ac6-a6a4-36a2a11e9a31;
 Wed, 07 Oct 2020 18:19:03 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk7-0007CF-Nc; Wed, 07 Oct 2020 19:00:47 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE1x-00072Q-GG
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:13 +0000
X-Inumbo-ID: a6c7b4c3-36aa-4ac6-a6a4-36a2a11e9a31
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a6c7b4c3-36aa-4ac6-a6a4-36a2a11e9a31;
	Wed, 07 Oct 2020 18:19:03 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk7-0007CF-Nc; Wed, 07 Oct 2020 19:00:47 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 58/82] runvar access: Introduce access control machinery
Date: Wed,  7 Oct 2020 19:00:00 +0100
Message-Id: <20201007180024.7932-59-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This will allow us to trap accesses, during test host setup, to
runvars which weren't included in the calculation of the sharing
scope.
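As a rough illustration of the mechanism (a hypothetical Python analogue of the Perl tie-based RunvarMonitor in this patch, not the actual implementation): reads of keys that match none of the allowed glob patterns are rejected, while writes stay unrestricted.

```python
import fnmatch

class RestrictedRunvars(dict):
    """Read-restricted mapping: lookups must match an allowed glob pattern."""

    def __init__(self, allowed_pats, initial=()):
        super().__init__(initial)
        self.allowed_pats = list(allowed_pats)

    def __getitem__(self, key):
        # Mirrors runvar_access_check: key must match one of the patterns.
        if not any(fnmatch.fnmatchcase(key, p) for p in self.allowed_pats):
            raise KeyError(f"reuse-uncontrolled runvar read: {key!r}")
        return super().__getitem__(key)

# "host_ident"/"arch" and their values are made up for illustration.
r = RestrictedRunvars(["host_*"], {"host_ident": "example-host", "arch": "amd64"})
print(r["host_ident"])   # matches host_* -> allowed
r["arch"] = "arm64"      # writes are not restricted
try:
    r["arch"]            # no pattern matches -> rejected
except KeyError as e:
    print("blocked:", e)
```

The real code additionally forbids whole-hash iteration (FIRSTKEY confesses), since scanning would bypass per-key checks.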

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 53 ++++++++++++++++++++++++++++++++++++++++++
 README                 |  2 +-
 2 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index ce13d3a6..b1eca0a9 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -33,6 +33,7 @@ use File::Basename;
 use IO::Handle;
 use Carp;
 use Digest::SHA;
+use File::FnMatch qw(:fnmatch);
 
 BEGIN {
     use Exporter ();
@@ -141,6 +142,8 @@ BEGIN {
                       target_core_dump_setup
                       sha256file host_shared_mark_ready
                       gitcache_setup
+
+		      @accessible_runvar_pats
                       );
     %EXPORT_TAGS = ( );
 
@@ -156,6 +159,10 @@ our %timeout= qw(RebootDown   100
 our $logm_handle= new IO::File ">& STDERR" or die $!;
 our $logm_prefix= '';
 
+# When runvar_access_restrict is called, it will limit reading
+# of non-synth runvars to ones which match these glob patterns.
+our @accessible_runvar_pats = qw(test-host-setup-runvars-will-appear-here);
+
 #---------- test script startup ----------
 
 sub tsreadconfig () {
@@ -3164,4 +3171,50 @@ END
                                  'home-osstest-gitconfig');
 }
 
+sub runvar_access_restrict () {
+    # restricts runvars to those in @accessible_runvar_pats
+    return if "@accessible_runvar_pats" eq "*";
+    return if tied %r;
+    tie %r, 'RunvarMonitor', %r;
+}
+
+sub runvar_access_check ($$) {
+    my ($key, $what) = @_;
+    return if grep { fnmatch $_, $key } @accessible_runvar_pats;
+    my $m = "reuse-uncontrolled runvar $what '$key'\n".
+            " (controlled runvars are @accessible_runvar_pats)";
+    confess $m unless $ENV{OSSTEST_UNCONTROLLED_SHARE_RUNVAR_WARNONLY};
+    Carp::cluck $m;
+}
+
+package RunvarMonitor;
+use Carp;
+use Osstest;
+use Osstest::TestSupport;
+
+sub TIEHASH {
+    my $self = shift;
+    logm("reuse: restricting runvars to @accessible_runvar_pats");
+    return bless { @_ }, $self;
+}
+
+sub _ok {
+    my $self = shift;
+    my $key = shift;
+    Osstest::TestSupport::runvar_access_check($key, 'access');
+}
+
+sub FIRSTKEY {
+    confess
+      "reuse-uncontrolled runvar scanning - change to use runvar_glob!";
+}
+sub FETCH { my ($self, $key) = @_; $self->_ok($key); $self->{$key} }
+sub EXISTS { my ($self, $key) = @_; $self->_ok($key); exists $self->{$key} }
+sub STORE { my ($self, $key, $val) = @_; $self->{$key} = $val; }
+sub DELETE { my ($self, $key) = @_; delete $self->{$key}; }
+
+sub CLEAR { confess }
+sub SCALAR { confess }
+sub UNTIE { confess }
+
 1;
diff --git a/README b/README
index ba4bea1d..a929010c 100644
--- a/README
+++ b/README
@@ -297,7 +297,7 @@ To run osstest in standalone mode:
      curl
      netcat
      chiark-utils-bin
-     libxml-libxml-perl
+     libxml-libxml-perl libfile-fnmatch-perl
      dctrl-tools
      libnet-snmp-perl (if you are going to use Masterswitch PDUs)
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:19:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:19:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3648.10591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE24-0007TL-58; Wed, 07 Oct 2020 18:19:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3648.10591; Wed, 07 Oct 2020 18:19:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE24-0007TC-0L; Wed, 07 Oct 2020 18:19:20 +0000
Received: by outflank-mailman (input) for mailman id 3648;
 Wed, 07 Oct 2020 18:19:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE22-00072Q-GH
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:18 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5472e11f-a4db-4b0f-8b11-e7ba43b978da;
 Wed, 07 Oct 2020 18:19:06 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk7-0007CF-2d; Wed, 07 Oct 2020 19:00:47 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE22-00072Q-GH
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:18 +0000
X-Inumbo-ID: 5472e11f-a4db-4b0f-8b11-e7ba43b978da
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5472e11f-a4db-4b0f-8b11-e7ba43b978da;
	Wed, 07 Oct 2020 18:19:06 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk7-0007CF-2d; Wed, 07 Oct 2020 19:00:47 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 55/82] host reuse: Bump host share reuse bonus
Date: Wed,  7 Oct 2020 18:59:57 +0100
Message-Id: <20201007180024.7932-56-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

In test jobs this is now contending with the variation bonus.

If we fail to vary properly this time, we get another go in the next
flight, so this is not so critical.

This increases the amount of test host reuse.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-hosts-allocate-Executive | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index c1002fc9..b216186a 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -673,7 +673,7 @@ sub hid_recurse ($$) {
     $prevfail_bonus //= 7.0*86400;
     my $prevfail_equiv_bonus = $prevfail_bonus * (6.5 / 7.0);
 
-    my $share_reuse_bonus = $r{hostalloc_bonus_sharereuse} // 10000;
+    my $share_reuse_bonus = $r{hostalloc_bonus_sharereuse} // 20000;
 
     my $cost= $start_time
 	+ $duration_for_cost
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:19:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:19:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3649.10603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE28-0007Z4-HF; Wed, 07 Oct 2020 18:19:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3649.10603; Wed, 07 Oct 2020 18:19:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE28-0007Yv-Bc; Wed, 07 Oct 2020 18:19:24 +0000
Received: by outflank-mailman (input) for mailman id 3649;
 Wed, 07 Oct 2020 18:19:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE27-00072Q-GZ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:23 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3a47a31-db37-454f-a28e-22d0be966e38;
 Wed, 07 Oct 2020 18:19:09 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk4-0007CF-3A; Wed, 07 Oct 2020 19:00:44 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE27-00072Q-GZ
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:23 +0000
X-Inumbo-ID: b3a47a31-db37-454f-a28e-22d0be966e38
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b3a47a31-db37-454f-a28e-22d0be966e38;
	Wed, 07 Oct 2020 18:19:09 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk4-0007CF-3A; Wed, 07 Oct 2020 19:00:44 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 42/82] host allocation: selecthost(): Support @IDENT for reuse
Date: Wed,  7 Oct 2020 18:59:44 +0100
Message-Id: <20201007180024.7932-43-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This is the first part of a central way to control host reuse, rather
than having to write code in each ts-* script to check Shared etc.

No functional change with existing callers.
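A minimal sketch of the intended semantics (hypothetical Python, mirroring the `$ident =~ s/^\@//` parse and the sharing-state check in the diff below):

```python
def parse_identspec(identspec):
    # A leading '@' marks the step as prep-only (osstest's $isprep).
    is_prep = identspec.startswith("@")
    ident = identspec[1:] if is_prep else identspec
    return is_prep, ident

def prep_step_skipped(is_prep, shared_state):
    # Prep-only steps are skipped when the shared host is already 'ready';
    # in state 'prep' the work is still needed; other states are an error.
    if not is_prep or shared_state is None:
        return False
    if shared_state == "ready":
        return True
    if shared_state == "prep":
        return False
    raise ValueError(f"host sharestate is {shared_state} but isprep")

print(parse_identspec("@host"))   # (True, 'host')
```

In the Perl version, "skip" means the ts-* script calls exit 0 from selecthost itself, which is what lets later patches drop per-script SharedReady checks.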

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 3d5f0be3..be6b7119 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -1175,7 +1175,7 @@ sub selecthost ($;$$) {
     my ($ident, $none_ok, $prospective) = @_;
     # must be run outside transaction
 
-    # $ident is <identspec>
+    # $ident is [@]<identspec>
     #
     # <identspec> can be <ident>, typically "host" or "xxx_host"
     # which means look up the runvar $r{$ident} which
@@ -1191,8 +1191,16 @@ sub selecthost ($;$$) {
     # <hostspec> can be <parent_identspec>:<domname> meaning use the
     # Xen domain name <domname> on the host specified by
     # <parent_identspec>, which is an <identspec> as above.
+    #
+    # The leading @, if present, specifies that this script should be
+    # skipped if the host is being reused or shared.  Specifically:
+    # if the host is shared and in sharing state `ready'.  And, it is
+    # wrong to specify @ when multiple hosts are being selected.
 
     my $name;
+
+    my $isprep = $ident =~ s/^\@//;
+
     if ($ident =~ m/=/) {
         $ident= $`;
         $name= $'; #'
@@ -1352,13 +1360,27 @@ sub selecthost ($;$$) {
 
     $mjobdb->host_check_allocated($ho);
 
-    logm("host $ho->{Ident}: selected $ho->{Name} ".
+    logm("host $ho->{Ident}".
+	 ($isprep ? " (prep)" : "").
+	 ": selected $ho->{Name} ".
 	 (defined $ho->{Ether} ? $ho->{Ether} : '<unknown-ether>').
 	 " $ho->{Ip}".
          (!$ho->{Shared} ? '' :
           sprintf(" - shared %s %s %d", $ho->{Shared}{Type},
                   $ho->{Shared}{State}, $ho->{Shared}{Others}+1)));
 
+    if ($isprep && $ho->{Shared}) {
+	my $st = $ho->{Shared}{State};
+	if ($st eq 'ready') {
+	    logm("host is ready, skipping step");
+	    exit 0;
+	} elsif ($st eq 'prep') {
+	    # ok, we need to do it
+	} else {
+	    die "host sharestate is $st but isprep (@ in ident)";
+	}
+    }
+
     return $ho;
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:19:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:19:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3650.10615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE2D-0007fg-Sf; Wed, 07 Oct 2020 18:19:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3650.10615; Wed, 07 Oct 2020 18:19:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE2D-0007fV-PI; Wed, 07 Oct 2020 18:19:29 +0000
Received: by outflank-mailman (input) for mailman id 3650;
 Wed, 07 Oct 2020 18:19:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2C-00072Q-Gs
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:28 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 989061d0-f358-42fb-b8df-8c1884493470;
 Wed, 07 Oct 2020 18:19:11 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk6-0007CF-Qb; Wed, 07 Oct 2020 19:00:46 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE2C-00072Q-Gs
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:28 +0000
X-Inumbo-ID: 989061d0-f358-42fb-b8df-8c1884493470
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 989061d0-f358-42fb-b8df-8c1884493470;
	Wed, 07 Oct 2020 18:19:11 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk6-0007CF-Qb; Wed, 07 Oct 2020 19:00:46 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 54/82] host reuse: Use literal for the hosts_infraprioritygroup runvar
Date: Wed,  7 Oct 2020 18:59:56 +0100
Message-Id: <20201007180024.7932-55-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

At some point this might make the database smarter about indexing.
It's certainly clearer.
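The trade-off here is interpolating a trusted, code-controlled constant into the SQL text instead of binding it as a placeholder, so the planner sees a concrete value at prepare time. A small hypothetical sketch (Python/sqlite3 stand-in for the Perl DBI code, with a made-up row):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE runvars (flight INT, job TEXT, name TEXT, val TEXT)")
con.execute("INSERT INTO runvars VALUES (?,?,?,?)",
            (1, "job-a", "hosts_infraprioritygroup", "g1"))

# The runvar name is a fixed constant (never user input), so baking it in
# as a literal is safe; the flight number stays a bound placeholder.
vn = "hosts_infraprioritygroup"
rows = con.execute(
    f"SELECT job, val FROM runvars WHERE flight=? AND name='{vn}'",
    (1,)).fetchall()
print(rows)   # [('job-a', 'g1')]
```

Interpolation is only acceptable because `$vn` comes from the code itself; anything caller-supplied must remain a bound parameter.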

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/JobDB/Executive.pm | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 8c235d45..8fde2934 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -403,19 +403,19 @@ sub jobdb_set_hosts_infraprioritygroup ($$$$;$) { # method
                (job = ?) AS thisjob
           FROM runvars
          WHERE flight=?
-           AND name=?
+           AND name='$vn'
       ORDER BY thisjob DESC
 END
     my $insertq = $dbh_tests->prepare(<<END);
-        INSERT INTO runvars (flight,job, name,val, synth)
-                     VALUES (?,     ?,   ?,   ?,   't')
+        INSERT INTO runvars (flight,job, name,  val, synth)
+                     VALUES (?,     ?,   '$vn', ?,   't')
 END
 
     my $resulting;
     db_retry($dbh_tests,[],sub {
 	my $use = 1;
 	$resulting = undef;
-        $queryq->execute($job, $flight, $vn);
+        $queryq->execute($job, $flight);
 	while (my ($tjob, $tval, $thisjob) = $queryq->fetchrow_array()) {
 	    if ($thisjob) {
 		logm("$vn: job is already in group $tval");
@@ -431,7 +431,7 @@ END
 	}
 	$resulting = "$use:$group_key";
 	logm("$vn: inserting job into group $resulting");
-	$insertq->execute($flight,$job,$vn, $resulting);
+	$insertq->execute($flight,$job, $resulting);
     });
     $rref->{$vn} = $resulting if $rref && defined $resulting;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:19:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:19:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3652.10627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE2J-0007mE-8x; Wed, 07 Oct 2020 18:19:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3652.10627; Wed, 07 Oct 2020 18:19:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE2J-0007m5-4E; Wed, 07 Oct 2020 18:19:35 +0000
Received: by outflank-mailman (input) for mailman id 3652;
 Wed, 07 Oct 2020 18:19:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2H-00072Q-H9
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:33 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f893354c-fbd2-4f1f-81a5-8981f15d37d1;
 Wed, 07 Oct 2020 18:19:13 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkA-0007CF-DW; Wed, 07 Oct 2020 19:00:50 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE2H-00072Q-H9
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:33 +0000
X-Inumbo-ID: f893354c-fbd2-4f1f-81a5-8981f15d37d1
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f893354c-fbd2-4f1f-81a5-8981f15d37d1;
	Wed, 07 Oct 2020 18:19:13 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkA-0007CF-DW; Wed, 07 Oct 2020 19:00:50 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 69/82] host lifecycle: Record lifecycle in db and runvar
Date: Wed,  7 Oct 2020 19:00:11 +0100
Message-Id: <20201007180024.7932-70-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This adds just the calls to host_update_lifecycle_info.
The db table is now Needed (rather than Preparatory).

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm    | 2 ++
 schema/host-lifecycle.sql | 2 +-
 ts-freebsd-host-install   | 1 +
 ts-host-install           | 1 +
 ts-host-reuse             | 1 +
 ts-logs-capture           | 1 +
 6 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 22141981..163862f8 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -1418,6 +1418,8 @@ sub selecthost ($;$$) {
 	}
     }
 
+    host_update_lifecycle_info($ho, $isprep ? 'selectprep' : 'select');
+
     return $ho;
 }
 
diff --git a/schema/host-lifecycle.sql b/schema/host-lifecycle.sql
index 7f1f5bb0..7e4fc2aa 100644
--- a/schema/host-lifecycle.sql
+++ b/schema/host-lifecycle.sql
@@ -1,4 +1,4 @@
--- ##OSSTEST## 012 Preparatory
+-- ##OSSTEST## 012 Needed
 --
 -- Records the jobs which have touched a host, for host sharing/reuse
 -- The information here is ephemeral - it is cleared when a host is
diff --git a/ts-freebsd-host-install b/ts-freebsd-host-install
index 9feb98cd..31e14d57 100755
--- a/ts-freebsd-host-install
+++ b/ts-freebsd-host-install
@@ -295,3 +295,4 @@ setup_netboot_local($ho);
 
 # Proceed with the install
 install();
+host_update_lifecycle_info($ho, 'wiped');
diff --git a/ts-host-install b/ts-host-install
index 5badc706..276c6af8 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -297,3 +297,4 @@ END
 }
 
 install();
+host_update_lifecycle_info($ho, 'wiped');
diff --git a/ts-host-reuse b/ts-host-reuse
index 0ecbb0bd..85beb51e 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -158,6 +158,7 @@ sub act_post_test () {
     return unless $ho->{Shared};
     die unless $ho->{Shared}{State} eq 'mid-test';
     post_test_cleanup();
+    host_update_lifecycle_info($ho, 'final');
     host_shared_mark_ready($ho, $sharetype, 'mid-test', 'ready');
 }
 
diff --git a/ts-logs-capture b/ts-logs-capture
index ec494fe1..0b0b6af6 100755
--- a/ts-logs-capture
+++ b/ts-logs-capture
@@ -309,3 +309,4 @@ if (fetch_logs_host()) {
     }
 }
 logm("logs captured to $stash");
+host_update_lifecycle_info($ho, 'final');
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:19:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:19:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3654.10639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE2N-0007rg-Li; Wed, 07 Oct 2020 18:19:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3654.10639; Wed, 07 Oct 2020 18:19:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE2N-0007rT-Gn; Wed, 07 Oct 2020 18:19:39 +0000
Received: by outflank-mailman (input) for mailman id 3654;
 Wed, 07 Oct 2020 18:19:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2M-00072Q-HJ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:38 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ab6bdfe-1e35-4786-853d-05b06a8fc6dc;
 Wed, 07 Oct 2020 18:19:15 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjv-0007CF-E1; Wed, 07 Oct 2020 19:00:35 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE2M-00072Q-HJ
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:38 +0000
X-Inumbo-ID: 8ab6bdfe-1e35-4786-853d-05b06a8fc6dc
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8ab6bdfe-1e35-4786-853d-05b06a8fc6dc;
	Wed, 07 Oct 2020 18:19:15 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjv-0007CF-E1; Wed, 07 Oct 2020 19:00:35 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 13/82] target setup refactoring: Move target_kernkind_console_inittab
Date: Wed,  7 Oct 2020 18:59:15 +0100
Message-Id: <20201007180024.7932-14-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We move this earlier.  This is OK because it depends only on the
console runvar (inside the sub; this is set by target_kernkind_check),
$ho and $gho (which are set by this point); and $mountpoint (which is
set by access()).

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-debian-fixup | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/ts-debian-fixup b/ts-debian-fixup
index 34051137..2184212b 100755
--- a/ts-debian-fixup
+++ b/ts-debian-fixup
@@ -79,10 +79,9 @@ END
 }
 
 our $extra;
+our $console;
 
 sub console () {
-    my $console=
-        target_kernkind_console_inittab($ho,$gho,"$mountpoint");
     return unless length $console;
     
     my $xextra= "console=$console earlyprintk=xen";
@@ -211,6 +210,7 @@ savecfg();
 ether();
 access();
 target_kernkind_check($gho);
+$console = target_kernkind_console_inittab($ho,$gho,"$mountpoint");
 
 debian_overlays($ho, \&overlay);
 target_cmd_root($ho, <<END.debian_overlays_fixup_cmd($ho, $mountpoint));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:22:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:22:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3671.10650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE4r-0000cW-4m; Wed, 07 Oct 2020 18:22:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3671.10650; Wed, 07 Oct 2020 18:22:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE4r-0000cP-1m; Wed, 07 Oct 2020 18:22:13 +0000
Received: by outflank-mailman (input) for mailman id 3671;
 Wed, 07 Oct 2020 18:22:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ECWb=DO=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kQE4p-0000cJ-QQ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 199ec08b-e3d9-4eff-ac34-2b47fbd4bdc4;
 Wed, 07 Oct 2020 18:22:10 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kQE4o-0003L3-Lz
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:10 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kQE4o-0008Vi-JA
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>) id 1kQE4m-0003SW-Po
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 19:22:08 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ECWb=DO=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
	id 1kQE4p-0000cJ-QQ
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:11 +0000
X-Inumbo-ID: 199ec08b-e3d9-4eff-ac34-2b47fbd4bdc4
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 199ec08b-e3d9-4eff-ac34-2b47fbd4bdc4;
	Wed, 07 Oct 2020 18:22:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:To:Date:
	Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=q6Lhchrp5WV/uDmtgyM8+fXp02ox19ZzxjNkpYIlJB0=; b=bS7lIOhubmgmZbfwrAnRPc15QF
	JCX3OzhMrOwqII1MLP01fd3qlmcLmXC384wIKp9qZ2jSTqAXpZWSeS54RMyheb0CjQxrx7M6fPz4o
	ZFwJnhBe2iWvXVZVpDNUaH+I+wf118iCw92aN0C7ujIpygKUhJ4L3c80gJ3nokWIRQh0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <iwj@xenproject.org>)
	id 1kQE4o-0003L3-Lz
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:10 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <iwj@xenproject.org>)
	id 1kQE4o-0008Vi-JA
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:10 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
	(envelope-from <iwj@xenproject.org>)
	id 1kQE4m-0003SW-Po
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 19:22:08 +0100
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24446.1872.406679.996085@mariner.uk.xensource.com>
Date: Wed, 7 Oct 2020 19:22:08 +0100
To: xen-devel@lists.xenproject.org
Subject: Re: [OSSTEST PATCH 00/82] Reuse test hosts
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Ian Jackson writes ("[OSSTEST PATCH 00/82] Reuse test hosts"):
> This series arranges to save on host setup by reusing a test host, if
> the previous test passed.  Care is taken to make sure that a host is
> only reused in this way if the new test would have set it up
> identically.
> 
> I have had this branch in preparation since November 2017...

Many of the earlier commits in this series had my Citrix address as
the author.  My setup was not configured to deliver these mails
correctly (i.e., via the Citrix mail servers), so those messages will
generally have been blocked as spam.  Additionally, even the mails
which were From: iwj@xenproject were delivered via my colo mail
server, which is not really right.  So some of those might also get
blocked.

I don't propose to resend the mailbomb.  You can find the complete
series here:
  https://xenbits.xen.org/gitweb/?p=people/iwj/osstest.git;a=shortlog;h=refs/heads/test-host-reuse-v1
  https://xenbits.xen.org/git-http/people/iwj/osstest.git#test-host-reuse-v1

It's in osstest pretest now.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3674.10663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9C-0000oz-U6; Wed, 07 Oct 2020 18:26:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3674.10663; Wed, 07 Oct 2020 18:26:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9C-0000os-Pz; Wed, 07 Oct 2020 18:26:42 +0000
Received: by outflank-mailman (input) for mailman id 3674;
 Wed, 07 Oct 2020 18:26:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2W-00072Q-HT
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:48 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9dc5c5b6-8d1c-43ed-a943-f114e0982010;
 Wed, 07 Oct 2020 18:19:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk7-0007CF-UQ; Wed, 07 Oct 2020 19:00:48 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE2W-00072Q-HT
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:48 +0000
X-Inumbo-ID: 9dc5c5b6-8d1c-43ed-a943-f114e0982010
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9dc5c5b6-8d1c-43ed-a943-f114e0982010;
	Wed, 07 Oct 2020 18:19:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk7-0007CF-UQ; Wed, 07 Oct 2020 19:00:48 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 59/82] runvar access: Provide runvar_glob
Date: Wed,  7 Oct 2020 19:00:01 +0100
Message-Id: <20201007180024.7932-60-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We will need this because when runvar access is restricted, accessing
via %r directly won't work.  We want to see what patterns the code is
interested in (so that interest in a nonexistent runvar is properly
tracked).

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index b1eca0a9..6403e52b 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -53,7 +53,7 @@ BEGIN {
                       store_runvar get_runvar get_runvar_maybe
                       get_runvar_default need_runvars
                       unique_incrementing_runvar next_unique_name
-                      stashfilecontents runvar_is_synth
+                      stashfilecontents runvar_is_synth runvar_glob
 
                       target_cmd_root_status target_cmd_output_root_status
                       target_cmd_root target_cmd target_cmd_build
@@ -3187,6 +3187,20 @@ sub runvar_access_check ($$) {
     Carp::cluck $m;
 }
 
+sub runvar_glob {
+    my $monitor = tied %r;
+    my $realr = $monitor || \%r;
+    my @out;
+    foreach my $pat (@_) {
+	if ($monitor) { runvar_access_check($pat, 'scan') }
+	foreach my $key (sort keys %$realr) {
+	    next unless fnmatch $pat, $key;
+	    push @out, $key;
+	}
+    }
+    @out;
+}
+
 package RunvarMonitor;
 use Carp;
 use Osstest;
-- 
2.20.1
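For readers less familiar with the Perl idiom above, a rough Python
paraphrase of runvar_glob may help; the runvar names and the inline
monitor comment here are hypothetical illustrations, not part of
osstest:

```python
import fnmatch

# Hypothetical stand-in for osstest's runvar hash %r.
runvars = {
    "host_hostname": "chardonnay0",
    "host_ident": "host",
    "guest_memsize": "512",
}

def runvar_glob(*patterns):
    """Return the sorted runvar names matching any of the glob patterns.

    Mirrors the shape of the Perl runvar_glob above: each pattern is
    checked against every key, so interest in a pattern is expressed
    even when no runvar matches (which is what lets an access monitor
    track interest in nonexistent runvars).
    """
    out = []
    for pat in patterns:
        # A runvar access monitor would hook in here, recording
        # (pat, 'scan') before the keys are examined.
        for key in sorted(runvars):
            if fnmatch.fnmatch(key, pat):
                out.append(key)
    return out

print(runvar_glob("host_*"))  # → ['host_hostname', 'host_ident']
```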



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3675.10675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9J-0000sM-AJ; Wed, 07 Oct 2020 18:26:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3675.10675; Wed, 07 Oct 2020 18:26:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9J-0000sE-63; Wed, 07 Oct 2020 18:26:49 +0000
Received: by outflank-mailman (input) for mailman id 3675;
 Wed, 07 Oct 2020 18:26:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE69-00072Q-QI
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:33 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 305c7497-7824-4566-bea9-54b5df9ad956;
 Wed, 07 Oct 2020 18:21:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDju-0007CF-Sy; Wed, 07 Oct 2020 19:00:35 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE69-00072Q-QI
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:33 +0000
X-Inumbo-ID: 305c7497-7824-4566-bea9-54b5df9ad956
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 305c7497-7824-4566-bea9-54b5df9ad956;
	Wed, 07 Oct 2020 18:21:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDju-0007CF-Sy; Wed, 07 Oct 2020 19:00:35 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 11/82] cr-publish-flight-logs: Fix abs_time calls
Date: Wed,  7 Oct 2020 18:59:13 +0100
Message-Id: <20201007180024.7932-12-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

There was a missing space in these messages, since they were
introduced in 31b7cae19fe1
  timing traces: cr-publish-flight-logs: Report more progress

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 cr-publish-flight-logs | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cr-publish-flight-logs b/cr-publish-flight-logs
index 63c47d9f..c570dfaf 100755
--- a/cr-publish-flight-logs
+++ b/cr-publish-flight-logs
@@ -43,7 +43,7 @@ die unless $flight =~ m/^\d*$/;
 
 die "usage: ./cr-publish-flight-logs [flight]" unless @ARGV==0;
 
-sub progress { print +(show_abs_time time), @_, "\n" }
+sub progress { print +(show_abs_time time), " ", @_, "\n" }
 
 progress("acquiring publish-lock...");
 open LOCK, "> $c{GlobalLockDir}/publish-lock" or die $!;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3676.10687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9K-0000w3-KL; Wed, 07 Oct 2020 18:26:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3676.10687; Wed, 07 Oct 2020 18:26:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9K-0000vs-FS; Wed, 07 Oct 2020 18:26:50 +0000
Received: by outflank-mailman (input) for mailman id 3676;
 Wed, 07 Oct 2020 18:26:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2v-00072Q-I7
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:13 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd8824be-dd5e-4d5c-867b-b0c62c0c400f;
 Wed, 07 Oct 2020 18:19:33 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjy-0007CF-Ob; Wed, 07 Oct 2020 19:00:38 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE2v-00072Q-I7
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:13 +0000
X-Inumbo-ID: bd8824be-dd5e-4d5c-867b-b0c62c0c400f
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bd8824be-dd5e-4d5c-867b-b0c62c0c400f;
	Wed, 07 Oct 2020 18:19:33 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjy-0007CF-Ob; Wed, 07 Oct 2020 19:00:38 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 22/82] sg-run-job: Use +! in per-host-ts implementation
Date: Wed,  7 Oct 2020 18:59:24 +0100
Message-Id: <20201007180024.7932-23-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This makes the logic slightly clearer, and will do so even more in a
moment.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-run-job | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index 3ca725e7..3f44cae7 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -458,11 +458,10 @@ proc per-host-ts {iffail ident script args} {
     set awaitl {}
     foreach host $need_xen_hosts {
         set hostargs {}
-        if {![string compare $host host]} {
-            lappend hostargs + $host
-        } else {
-            lappend hostargs $host +
-        }
+	if {[string compare $host host]} {
+	    lappend hostargs +! $host
+	}
+	lappend hostargs + $host
         lappend awaitl [eval spawn-ts $iffail $ident $script $hostargs $args]
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3677.10695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9L-0000wu-47; Wed, 07 Oct 2020 18:26:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3677.10695; Wed, 07 Oct 2020 18:26:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9K-0000wd-Ps; Wed, 07 Oct 2020 18:26:50 +0000
Received: by outflank-mailman (input) for mailman id 3677;
 Wed, 07 Oct 2020 18:26:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5B-00072Q-Nf
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:33 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5aab6441-1978-40d2-b11a-5aa8d7b68ec3;
 Wed, 07 Oct 2020 18:20:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkC-0007CF-Nl; Wed, 07 Oct 2020 19:00:52 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE5B-00072Q-Nf
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:33 +0000
X-Inumbo-ID: 5aab6441-1978-40d2-b11a-5aa8d7b68ec3
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5aab6441-1978-40d2-b11a-5aa8d7b68ec3;
	Wed, 07 Oct 2020 18:20:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkC-0007CF-Nl; Wed, 07 Oct 2020 19:00:52 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 79/82] flight other job reporting: Further improvements to ordering
Date: Wed,  7 Oct 2020 19:00:21 +0100
Message-Id: <20201007180024.7932-80-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We want to definitely put these NULLs last.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index d8829932..8f99bb69 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1389,7 +1389,9 @@ END
       r_main AS
       (SELECT tident, hostname,
               bool_or(olive)                     AS olive,
-              1                                  AS kind_sort,
+              CASE WHEN min(prep_started)
+                   IS NOT NULL
+                   THEN 1 ELSE 3 END             AS kind_sort,
               flight, job,
 	      (SELECT status
 		 FROM jobs
@@ -1416,7 +1418,9 @@ END
 
       r_elided AS
       (SELECT tident, hostname, FALSE as olive,
-              2                                  AS kind_sort,
+              CASE WHEN count
+                   IS NOT NULL
+                   THEN 2 ELSE 4 END             AS kind_sort,
               $nullcols_main,
               $nullcols_tasks,
               count                              AS elided,
-- 
2.20.1
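The CASE expressions above implement a banded, NULLs-last ordering.
The same idea in a Python sketch (column names shortened and values
invented for illustration):

```python
# Band ordering analogous to the SQL CASE above: rows whose key column
# is present get a low band (1 or 2), rows where it is NULL/None get a
# high band (3 or 4), so the empty rows definitely sort last.
def kind_sort_main(prep_started):
    return 1 if prep_started is not None else 3   # r_main rows

def kind_sort_elided(count):
    return 2 if count is not None else 4          # r_elided rows

rows = [("job-b", None), ("job-a", 1601921955)]
rows.sort(key=lambda row: (kind_sort_main(row[1]), row[0]))
print(rows)  # → [('job-a', 1601921955), ('job-b', None)]
```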



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3678.10710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9M-000108-Cx; Wed, 07 Oct 2020 18:26:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3678.10710; Wed, 07 Oct 2020 18:26:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9M-0000zu-7e; Wed, 07 Oct 2020 18:26:52 +0000
Received: by outflank-mailman (input) for mailman id 3678;
 Wed, 07 Oct 2020 18:26:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4c-00072Q-Lr
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:58 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2537cdee-d122-4b24-b466-4f1c53d8830e;
 Wed, 07 Oct 2020 18:20:24 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk6-0007CF-IF; Wed, 07 Oct 2020 19:00:46 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE4c-00072Q-Lr
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:58 +0000
X-Inumbo-ID: 2537cdee-d122-4b24-b466-4f1c53d8830e
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2537cdee-d122-4b24-b466-4f1c53d8830e;
	Wed, 07 Oct 2020 18:20:24 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk6-0007CF-IF; Wed, 07 Oct 2020 19:00:46 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 53/82] host reuse: Jiggle the infra-priority a bit, within a flight
Date: Wed,  7 Oct 2020 18:59:55 +0100
Message-Id: <20201007180024.7932-54-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-hosts-allocate-Executive | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
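
As an illustration, the priority calculation added below can be
sketched in Python.  This is a hypothetical sketch, not osstest code:
only the "NN:" group-prefix format and the *100 + pid%100 arithmetic
come from the patch; the function name and everything else is made up.

```python
import os

def infra_priority(group_runvar):
    """Jiggled infra-priority from a "NN:name" group runvar.

    The group's base priority is scaled by 100 and the low two digits
    come from the process id, so jobs in the same group get slightly
    different priorities and roughly take turns leading the
    scheduling.
    """
    parts = (group_runvar or '').split(':', 1)
    if len(parts) == 2 and parts[0].isdigit():
        return int(parts[0]) * 100 + os.getpid() % 100
    return None  # no "NN:" prefix: no infra-priority
```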

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index a50f8bf3..c1002fc9 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -733,8 +733,13 @@ sub alloc_hosts () {
         ? -10000
         : -10 * @hids;
 
-    my $infrapriority =
-	($r{hosts_infraprioritygroup} // '') =~ m/^(\d+):/ ? $1 : undef;
+    my $infrapriority;
+    if (($r{hosts_infraprioritygroup} // '') =~ m/^(\d+):/) {
+	$infrapriority = ($1 * 100) + ($$ % 100);
+	# $$ provides a pseudorandom element, so that jobs in a group
+	# roughly take turns at the scheduling lead, which will
+	# hopefully help them converge.
+    }
 
     my $ok = alloc_resources(WaitStart =>
                     ($ENV{OSSTEST_RESOURCE_WAITSTART} || $fi->{started}),
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3679.10718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9M-00011E-W3; Wed, 07 Oct 2020 18:26:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3679.10718; Wed, 07 Oct 2020 18:26:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9M-000110-J9; Wed, 07 Oct 2020 18:26:52 +0000
Received: by outflank-mailman (input) for mailman id 3679;
 Wed, 07 Oct 2020 18:26:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4r-00072Q-Ma
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:13 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63f7e1ad-c494-4b89-b87a-6c792056477b;
 Wed, 07 Oct 2020 18:20:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkC-0007CF-V0; Wed, 07 Oct 2020 19:00:53 +0100
X-Inumbo-ID: 63f7e1ad-c494-4b89-b87a-6c792056477b
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 80/82] tsreadconfig: Change misleading "setting" message
Date: Wed,  7 Oct 2020 19:00:22 +0100
Message-Id: <20201007180024.7932-81-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

These are the *existing* runvars and it is confusing that we print
"setting" for them.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/TestSupport.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 163862f8..f2d8a0e1 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -210,7 +210,7 @@ END
         while ($row= $q->fetchrow_hashref()) {
             $r{ $row->{name} }= $row->{val};
 	    $r_notsynth{ $row->{name} }= !$row->{synth};
-            logm("setting $row->{name}=$row->{val}");
+            logm("runvar $row->{name}=$row->{val}");
         }
         $q->finish();
     });
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3680.10727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9N-00013R-Ma; Wed, 07 Oct 2020 18:26:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3680.10727; Wed, 07 Oct 2020 18:26:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9N-00012c-6T; Wed, 07 Oct 2020 18:26:53 +0000
Received: by outflank-mailman (input) for mailman id 3680;
 Wed, 07 Oct 2020 18:26:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3F-00072Q-JI
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:33 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a795a7b0-ab03-49c3-846d-640a676d8665;
 Wed, 07 Oct 2020 18:19:42 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjx-0007CF-CF; Wed, 07 Oct 2020 19:00:37 +0100
X-Inumbo-ID: a795a7b0-ab03-49c3-846d-640a676d8665
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 17/82] Debian: osstest-erase-other-disks: Slightly guard against races
Date: Wed,  7 Oct 2020 18:59:19 +0100
Message-Id: <20201007180024.7932-18-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Apparently it can happen that something decides to rescan a partition
table, removing a partition block device, while it is being zeroed:

 osstest-erase-other-disks-6081: hd devices present after: /dev/hd*
 osstest-erase-other-disks-6081: Erasing /dev/sda
 osstest-erase-other-disks-6081: Erasing /dev/sda1
 osstest-erase-other-disks-6081: /dev/sda1 is no longer a block device!

To tolerate this race, do not treat it as an error if the device we
just zeroed no longer exists afterwards.

We still bomb out if it exists but is not a block device - that would
probably mean we had written it out as a file.

This is all quite unfortunate.
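
The intended post-zeroing check can be restated in Python.  This is a
hypothetical sketch of the shell logic, not code from this patch; the
function name and any paths are made up.

```python
import os
import stat

def check_after_zeroing(dev):
    """Re-check a device node after zeroing it.

    Tolerates the race where a partition-table rescan removed the
    node; only complains if the path still exists but is no longer a
    block device, which would suggest we zeroed a regular file.
    """
    try:
        st = os.stat(dev)
    except FileNotFoundError:
        return  # vanished after zeroing: the tolerated race
    if not stat.S_ISBLK(st.st_mode):
        raise RuntimeError(
            f"{dev} still exists but is no longer a block device!")
```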

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 85fd16da..3fa26e45 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -1211,8 +1211,8 @@ zero () {
     if test -b \$dev; then
         log "Erasing \$dev"
         dd if=/dev/zero of=\$dev count=64 ||:
-        if ! test -b \$dev; then
-            log "\$dev is no longer a block device!"
+        if test -e \$dev && ! test -b \$dev; then
+            log "\$dev still exists but is no longer a block device!"
             exit 1
         fi
     else
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3681.10747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9Q-0001AU-1U; Wed, 07 Oct 2020 18:26:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3681.10747; Wed, 07 Oct 2020 18:26:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9P-0001AE-Si; Wed, 07 Oct 2020 18:26:55 +0000
Received: by outflank-mailman (input) for mailman id 3681;
 Wed, 07 Oct 2020 18:26:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5k-00072Q-Op
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:08 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7be757a-7ef1-4450-aafd-609299bbaff6;
 Wed, 07 Oct 2020 18:21:19 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk7-0007CF-AN; Wed, 07 Oct 2020 19:00:47 +0100
X-Inumbo-ID: b7be757a-7ef1-4450-aafd-609299bbaff6
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 56/82] subst_netboot_template: Do not use all of %r
Date: Wed,  7 Oct 2020 18:59:58 +0100
Message-Id: <20201007180024.7932-57-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Instead of copying all of %r into %v, have the template substitutor
fall back from %v to %r.

This is going to be important when we have host-reuse-related access
control to %r.
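
The fallback behaviour can be sketched in Python.  A hypothetical
illustration, not osstest code: the %v/%r roles follow the diff
below, but here a token found in neither hash is left alone, whereas
the Perl "next" abandons the whole pattern.

```python
import re

def subst(template, v, r):
    """Expand %NAME% tokens from v, falling back to r.

    %% expands to a literal %; a name found in neither dict leaves
    the token unchanged in this sketch.
    """
    def repl(m):
        name = m.group(1)
        if name == '':
            return '%'          # %% -> literal %
        if name in v:
            return str(v[name])  # primary source
        if name in r:
            return str(r[name])  # fallback source
        return m.group(0)        # unknown name: leave token alone
    return re.sub(r'%(\w*)%', repl, template)
```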

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index be6b7119..634d6d2e 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -2876,6 +2876,7 @@ sub subst_netboot_template ($$$) {
 	$pat =~ s{\%(\w*)\%}{
 		    $1 eq '' ? '%' :
 		    defined($v->{$1}) ? $v->{$1} :
+		    defined($r{$1}) ? $r{$1} :
 		    next;
 		 }ge;
 	# and return the first pattern we managed to completely substitute
@@ -2890,7 +2891,7 @@ sub host_netboot_file ($;$) {
     # returns the full netboot filename path
     # in array context, returns (dir, pathtail)
     #  where dir does not depend on $templatekeytail
-    my %v = %r;
+    my %v;
     my $firmware = get_host_property($ho, "firmware");
     my $templatekeybase = $firmware eq 'uefi' ? 'NetGrub' : 'Pxe';
     $templatekeytail //= 'Templates';
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:26:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:26:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3682.10759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9S-0001Ga-G2; Wed, 07 Oct 2020 18:26:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3682.10759; Wed, 07 Oct 2020 18:26:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9S-0001GC-Av; Wed, 07 Oct 2020 18:26:58 +0000
Received: by outflank-mailman (input) for mailman id 3682;
 Wed, 07 Oct 2020 18:26:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4h-00072Q-M9
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:03 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e10c92d9-2c86-48a5-a857-634025a4b23d;
 Wed, 07 Oct 2020 18:20:26 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkA-0007CF-6l; Wed, 07 Oct 2020 19:00:50 +0100
X-Inumbo-ID: e10c92d9-2c86-48a5-a857-634025a4b23d
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 68/82] host lifecycle: Prevent referential integrity violation
Date: Wed,  7 Oct 2020 19:00:10 +0100
Message-Id: <20201007180024.7932-69-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We can't use normal foreign key constraints for either of these
checks, sadly.

Instead, we combine both checks into a single query which returns a
row only if everything is OK.
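
The idea can be sketched with an in-memory SQLite database in Python.
The shape of the query follows the patch, but the schema fragment,
host name and task id here are made up for illustration.

```python
import sqlite3

# Toy stand-in for the real osstest schema: two tables whose rows the
# insert into host_lifecycle must reference.
db = sqlite3.connect(':memory:')
db.executescript("""
    CREATE TABLE resources (restype TEXT, resname TEXT);
    CREATE TABLE tasks (taskid INTEGER, live INTEGER);
    INSERT INTO resources VALUES ('host', 'gall-mite');
    INSERT INTO tasks VALUES (42, 1);
""")

def constraints_ok(hostname, taskid):
    # The NATURAL JOIN on the shared "ok" column yields a row only
    # when *both* subqueries are non-empty, i.e. both simulated
    # foreign keys resolve.
    row = db.execute("""
        SELECT * FROM
          (SELECT 1 AS ok FROM resources
            WHERE restype='host' AND resname=?) hostname_ok
          NATURAL JOIN
          (SELECT 1 AS ok FROM tasks
            WHERE taskid=? AND live) taskid_ok
    """, (hostname, taskid)).fetchone()
    return row is not None
```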

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/JobDB/Executive.pm | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 3a8308e9..f69ce277 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -553,6 +553,28 @@ END
                ON h.taskid = t.taskid
             WHERE h.hostname = ?
          ORDER BY h.lcseq;
+END
+    # We simulate two foreign key constraints which can't be in the
+    # db schema, by checking the values we are going to insert.
+    #
+    # For "resources" we would need a foreign key constraint
+    # with a literal value as part of the foreign key, which is
+    # not supported until PostgreSQL 13.
+    #
+    # For "tasks" we only want to apply the constraint on inserts into
+    # "host_lifecycle" - in particular, we want to allow deletions
+    # from "tasks" to render the taskid foreign key unresolvable.
+    # This could be done with a trigger, but since this is the only
+    # place we insert into host_lifecycle, this seems easier.
+    my $constraintsq = $dbh_tests->prepare(<<END);
+           SELECT * FROM
+	     (SELECT 1 AS ok
+	        FROM resources where restype='host' and resname=?) 
+              hostname_ok
+             NATURAL JOIN
+             (SELECT 1 AS ok
+                FROM tasks where taskid=? AND live)
+              taskid_ok;
 END
     my $insertq = $dbh_tests->prepare(<<END);
         INSERT INTO host_lifecycle
@@ -632,6 +654,9 @@ END
 		push @lifecycle, "$omarks$otj:$o->{stepno}$osuffix";
 	    }
 	}
+	$constraintsq->execute($hostname, $ttaskid);
+	$constraintsq->fetchrow_array() or confess "$hostname ?";
+
 	if (defined $flight) {
 	    $insertq->execute($hostname, $ttaskid,
 			      $flight, $job,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3683.10771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9V-0001OK-5I; Wed, 07 Oct 2020 18:27:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3683.10771; Wed, 07 Oct 2020 18:27:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9U-0001Nt-SZ; Wed, 07 Oct 2020 18:27:00 +0000
Received: by outflank-mailman (input) for mailman id 3683;
 Wed, 07 Oct 2020 18:26:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3e-00072Q-JS
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:58 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20f74aeb-50d8-4880-8daf-80bf69bbf6f1;
 Wed, 07 Oct 2020 18:19:52 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk1-0007CF-5n; Wed, 07 Oct 2020 19:00:41 +0100
X-Inumbo-ID: 20f74aeb-50d8-4880-8daf-80bf69bbf6f1
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 31/82] plan_search: Improve debugging of $share_compat_ok->()
Date: Wed,  7 Oct 2020 18:59:33 +0100
Message-Id: <20201007180024.7932-32-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

No change other than to debugging output.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index e4bb0868..dfa3710a 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -702,6 +702,12 @@ sub plan_search ($$$$) {
 
 	    my $share_compat_ok = sub {
 		my ($eshare) = @_;
+		$dbgprint->("PLAN LOOP   SHARE-COMPAT-OK ".
+		    "type $eshare->{Type} vs. ".
+			($req->{Shared} // '<undef>')." ".
+		    "wear $eshare->{Wear} ".
+		    "shares $eshare->{Shares} vs. ".
+			($req->{SharedMaxTasks}//'<undef>'));
 		return 0 unless defined $req->{Shared};
 		return 0 unless $req->{Shared} eq $eshare->{Type};
 		if (defined $share_wear) {
@@ -711,6 +717,7 @@ sub plan_search ($$$$) {
 		}
 		return 0 if $share_wear > $req->{SharedMaxWear};
 		return 0 if $eshare->{Shares} != $req->{SharedMaxTasks};
+		$dbgprint->("PLAN LOOP   SHARE-COMPAT-OK Y");
 		return 1;
 	    };
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3684.10777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9W-0001Qx-6k; Wed, 07 Oct 2020 18:27:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3684.10777; Wed, 07 Oct 2020 18:27:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9V-0001Q9-Mq; Wed, 07 Oct 2020 18:27:01 +0000
Received: by outflank-mailman (input) for mailman id 3684;
 Wed, 07 Oct 2020 18:26:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3j-00072Q-Jm
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:03 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22b02326-6e00-4a43-a9af-37635fe05050;
 Wed, 07 Oct 2020 18:19:55 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk0-0007CF-Is; Wed, 07 Oct 2020 19:00:40 +0100
X-Inumbo-ID: 22b02326-6e00-4a43-a9af-37635fe05050
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 29/82] host allocation: *_shared_mark_ready: Only prod when $newstate is ready
Date: Wed,  7 Oct 2020 18:59:31 +0100
Message-Id: <20201007180024.7932-30-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index f2d43464..4cd4aa50 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1092,13 +1092,15 @@ END
 END
         }
     });
-    if (!eval {
-       my $qserv = tcpconnect_queuedaemon();
-       print $qserv "prod\n" or die $!;
-       $_ = <$qserv>;  defined && m/^OK prod\b/ or die "$_ ?";
-       1;
-    }) {
-       logm("post-mark-ready queue daemon prod failed: $@");
+    if ($newstate eq 'ready') {
+	if (!eval {
+	    my $qserv = tcpconnect_queuedaemon();
+	    print $qserv "prod\n" or die $!;
+	    $_ = <$qserv>;  defined && m/^OK prod\b/ or die "$_ ?";
+	    1;
+	}) {
+	    logm("post-mark-ready queue daemon prod failed: $@");
+	}
     }
     if ($oldshr) {
 	logm("$restype $resname shared $sharetype marked $newstate");
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3685.10786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9X-0001TB-53; Wed, 07 Oct 2020 18:27:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3685.10786; Wed, 07 Oct 2020 18:27:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9W-0001Se-HT; Wed, 07 Oct 2020 18:27:02 +0000
Received: by outflank-mailman (input) for mailman id 3685;
 Wed, 07 Oct 2020 18:26:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6Y-00072Q-R6
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:58 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5464f8f2-40d0-4839-aaf7-929c5b0cc9f0;
 Wed, 07 Oct 2020 18:21:45 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjy-0007CF-Gd; Wed, 07 Oct 2020 19:00:38 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 21/82] sg-run-job: support +! for *only* adding things to TESTID
Date: Wed,  7 Oct 2020 18:59:23 +0100
Message-Id: <20201007180024.7932-22-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-run-job | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/sg-run-job b/sg-run-job
index 702ed558..3ca725e7 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -277,6 +277,10 @@ proc recipe-flag {flagname {def 0}} {
 #       subsequent items in SCRIPT-ARGS are added to the expansion of
 #       /@ in TESTID.  (The "+" itself is not added to the arguments
 #       or the testid.)
+#
+#       An argument which is precisely "+!" specifies that the
+#       subsequent items in SCRIPT-ARGS are to be added _only_ to
+#       the expansion of /@ in TESTID, until the next "+".
 
 proc run-ts {iffail args} {
     set wantstatus pass
@@ -343,7 +347,11 @@ proc spawn-ts {iffail testid args} {
             set adding [expr {!$adding}]
             continue
         }
-        lappend real_args $arg
+	if {![string compare +! $arg]} {
+	    set adding -1
+	    continue
+	}
+        if {$adding>=0} { lappend real_args $arg }
         if {$adding} { lappend testid_args $arg }
     }
 
-- 
2.20.1
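[Editor's note] The three-way `adding` state in the Tcl hunk above can be sketched in Python. This is a hypothetical stand-alone helper for illustration, not osstest code; it mirrors the Tcl semantics: `+` toggles whether items also go to the testid, `+!` sends them to the testid only, until the next `+`.

```python
def partition_args(args):
    """Split SCRIPT-ARGS into real arguments and TESTID components.

    adding == 0:  items go to real_args only
    adding == 1:  items go to both real_args and testid_args ("+")
    adding == -1: items go to testid_args only ("+!")
    Note Tcl's [expr {!$adding}] maps both 1 and -1 back to 0.
    """
    adding = 0
    real_args, testid_args = [], []
    for arg in args:
        if arg == "+":
            adding = 0 if adding else 1
            continue
        if arg == "+!":
            adding = -1
            continue
        if adding >= 0:
            real_args.append(arg)
        if adding:
            testid_args.append(arg)
    return real_args, testid_args
```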



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3686.10799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9Y-0001Vb-EK; Wed, 07 Oct 2020 18:27:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3686.10799; Wed, 07 Oct 2020 18:27:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9X-0001Uf-Bv; Wed, 07 Oct 2020 18:27:03 +0000
Received: by outflank-mailman (input) for mailman id 3686;
 Wed, 07 Oct 2020 18:27:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2R-00072Q-HW
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:43 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55d1223a-6df9-4919-9b32-964afaeeb692;
 Wed, 07 Oct 2020 18:19:18 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk0-0007CF-Tp; Wed, 07 Oct 2020 19:00:41 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 30/82] plan_search: Break out $share_compat_ok
Date: Wed,  7 Oct 2020 18:59:32 +0100
Message-Id: <20201007180024.7932-31-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 4cd4aa50..e4bb0868 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -700,6 +700,20 @@ sub plan_search ($$$$) {
             # this period is entirely after the proposed slot;
             # so no need to check this or any later periods
 
+	    my $share_compat_ok = sub {
+		my ($eshare) = @_;
+		return 0 unless defined $req->{Shared};
+		return 0 unless $req->{Shared} eq $eshare->{Type};
+		if (defined $share_wear) {
+		    $share_wear++ if $startevt->{Type} eq 'Start';
+		} else {
+		    $share_wear= $eshare->{Wear}+1;
+		}
+		return 0 if $share_wear > $req->{SharedMaxWear};
+		return 0 if $eshare->{Shares} != $req->{SharedMaxTasks};
+		return 1;
+	    };
+
 	    next PERIOD if $endevt->{Time} <= $try_time;
             # this period is entirely before the proposed slot;
             # it doesn't overlap, but must check subsequent periods
@@ -711,15 +725,7 @@ sub plan_search ($$$$) {
 		my $eshare= $startevt->{Share};
 		if ($eshare) {
 		    $dbgprint->("PLAN LOOP   OVERLAP ESHARE");
-		    last CHECK unless defined $req->{Shared};
-		    last CHECK unless $req->{Shared} eq $eshare->{Type};
-		    if (defined $share_wear) {
-			$share_wear++ if $startevt->{Type} eq 'Start';
-		    } else {
-			$share_wear= $eshare->{Wear}+1;
-		    }
-		    last CHECK if $share_wear > $req->{SharedMaxWear};
-		    last CHECK if $eshare->{Shares} != $req->{SharedMaxTasks};
+		    last CHECK unless $share_compat_ok->($eshare);
 		}
 		# We have suitable availability for this period
 		$dbgprint->("PLAN LOOP   OVERLAP AVAIL OK");
-- 
2.20.1
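[Editor's note] The checks in the broken-out `$share_compat_ok` closure can be sketched in Python. The mutable `state` dict stands in for the enclosing `$share_wear` variable that the Perl closure captures; the dict keys follow the hash fields visible in the hunk, but this is an illustrative sketch, not the osstest implementation.

```python
def share_compat_ok(req, eshare, state, event_is_start):
    """Decide whether an existing share is compatible with the request.

    state["wear"] starts as None and accumulates wear across calls,
    mirroring the closed-over $share_wear in the Perl version.
    """
    if req.get("Shared") is None:
        return False
    if req["Shared"] != eshare["Type"]:
        return False
    if state["wear"] is not None:
        if event_is_start:
            state["wear"] += 1
    else:
        state["wear"] = eshare["Wear"] + 1
    if state["wear"] > req["SharedMaxWear"]:
        return False
    if eshare["Shares"] != req["SharedMaxTasks"]:
        return False
    return True
```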



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3687.10805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9Z-0001Zk-FM; Wed, 07 Oct 2020 18:27:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3687.10805; Wed, 07 Oct 2020 18:27:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9Y-0001YZ-LU; Wed, 07 Oct 2020 18:27:04 +0000
Received: by outflank-mailman (input) for mailman id 3687;
 Wed, 07 Oct 2020 18:27:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6T-00072Q-R3
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:53 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e3ddf54-e154-4d5e-9e4d-61ee34f75f64;
 Wed, 07 Oct 2020 18:21:41 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk3-0007CF-RB; Wed, 07 Oct 2020 19:00:43 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 41/82] ts-host-reuse: Add some missing runvars to the host sharing control
Date: Wed,  7 Oct 2020 18:59:43 +0100
Message-Id: <20201007180024.7932-42-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Add some missing runvars to the host sharing control.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-reuse | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/ts-host-reuse b/ts-host-reuse
index 67de480f..5bdb07d1 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -71,7 +71,8 @@ sub compute_test_sharetype () {
     sharetype_add('di', $ho->{DiVersion});
 
     foreach my $runvar (qw(freebsd_distpath freebsdbuildjob
-			   xenable_xsm toolstack kernkind)) {
+			   xenable_xsm toolstack kernkind
+			   xen_boot_append)) {
 	my $val = $r{$runvar};
 	die "$runvar $val ?" if defined $val && $val =~ m{[,/\%\\]};
 	sharetype_add($runvar, $val);
-- 
2.20.1
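[Editor's note] The `die` in the hunk above rejects runvar values containing any of `, / % \`, which suggests the share type is a flat string of name=value fields joined by reserved separator characters. The exact encoding used by osstest's `sharetype_add` is an assumption here; this Python sketch only illustrates why those characters must be forbidden in the values.

```python
import re


def build_sharetype(pairs):
    """Build a host share-type identifier from runvar (name, value) pairs.

    Undefined (None) values are skipped; values containing the assumed
    separator/escape characters , / % \\ are rejected, as in the hunk.
    """
    fields = []
    for name, val in pairs:
        if val is None:
            continue
        if re.search(r'[,/%\\]', val):
            raise ValueError(f"{name} {val} ?")
        fields.append(f"{name}={val}")
    return ",".join(fields)
```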



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3688.10825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9c-0001k7-Pw; Wed, 07 Oct 2020 18:27:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3688.10825; Wed, 07 Oct 2020 18:27:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9c-0001iw-5A; Wed, 07 Oct 2020 18:27:08 +0000
Received: by outflank-mailman (input) for mailman id 3688;
 Wed, 07 Oct 2020 18:27:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE43-00072Q-KX
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:23 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0fb4a81-ea99-4022-a703-ec12d3b33d0a;
 Wed, 07 Oct 2020 18:20:05 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk6-0007CF-4E; Wed, 07 Oct 2020 19:00:46 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 51/82] host reuse: Reuse test hosts within a flight
Date: Wed,  7 Oct 2020 18:59:53 +0100
Message-Id: <20201007180024.7932-52-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Mark the host shareable, and unshareable, as appropriate.

There is still a lot more cleanup and improvement to do.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 sg-run-job | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/sg-run-job b/sg-run-job
index af43008d..1e2fcfee 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -29,6 +29,8 @@ proc per-host-prep {} {
     per-host-ts @.      xen-install/@     ts-xen-install
     per-host-ts @.      xen-boot/@        ts-host-reboot
 
+    per-host-ts broken  =               { ts-host-reuse start-test }
+
     per-host-ts .       host-ping-check-xen/@ ts-host-ping-check
     per-host-ts .       =(*)            { ts-leak-check basis }
 }
@@ -64,6 +66,8 @@ proc run-job {job} {
     if {!$ok} return
 
     if {[llength $need_xen_hosts]} {
+        per-host-ts broken  =           { ts-host-reuse prealloc }
+	if {!$ok} return
         eval run-ts broken  =             ts-hosts-allocate + $need_xen_hosts
     }
 
@@ -120,6 +124,7 @@ proc run-job {job} {
        set ok 0
     }
 
+    if {$ok} { per-host-ts .  =            { ts-host-reuse post-test }    }
     if {$ok} { setstatus pass                                             }
 
     if {[llength $need_build_host] && $ok} { jobdb::preserve-task 90 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3689.10834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9e-0001oJ-Hb; Wed, 07 Oct 2020 18:27:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3689.10834; Wed, 07 Oct 2020 18:27:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9d-0001n3-Kz; Wed, 07 Oct 2020 18:27:09 +0000
Received: by outflank-mailman (input) for mailman id 3689;
 Wed, 07 Oct 2020 18:27:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6E-00072Q-QR
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:38 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2b79de9-1655-40fe-9954-e6f3aa7c5974;
 Wed, 07 Oct 2020 18:21:33 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjx-0007CF-P2; Wed, 07 Oct 2020 19:00:37 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 18/82] host allocation: Remove some unnecessary definedness tests
Date: Wed,  7 Oct 2020 18:59:20 +0100
Message-Id: <20201007180024.7932-19-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

$set_info->() already checks for undef, and returns immediately in
that case.  So there is no point checking at the call site.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 11 ++---------
 1 file changed, 2 insertions(+), 9 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index f17e7b70..5d71d08c 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -849,15 +849,8 @@ sub alloc_resources {
                 $set_info->('sub-priority',$ENV{OSSTEST_RESOURCE_SUBPRIORITY});
                 $set_info->('preinfo',     $ENV{OSSTEST_RESOURCE_PREINFO});
 		$set_info->('feature-noalloc', 1);
-
-                if (defined $waitstart) {
-                    $set_info->('wait-start',$waitstart);
-                }
-
-                my $adjust= $xparams{WaitStartAdjust};
-                if (defined $adjust) {
-                    $set_info->('wait-start-adjust',$adjust);
-                }
+		$set_info->('wait-start',$waitstart);
+		$set_info->('wait-start-adjust',$xparams{WaitStartAdjust});
 
                 my $jobinfo= $xparams{JobInfo};
                 if (!defined $jobinfo and defined $flight and defined $job) {
-- 
2.20.1
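[Editor's note] The simplification relies on the setter ignoring undefined values. In Python terms (an illustrative sketch of that behaviour, not the osstest implementation):

```python
def make_set_info(info):
    """Return a setter that silently ignores None values.

    Because the setter itself guards against None (Perl's undef),
    call sites need no definedness tests of their own -- exactly the
    property the commit above exploits.
    """
    def set_info(key, value):
        if value is None:
            return
        info[key] = value
    return set_info
```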



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3690.10846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9g-0001sB-3E; Wed, 07 Oct 2020 18:27:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3690.10846; Wed, 07 Oct 2020 18:27:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9e-0001ql-VR; Wed, 07 Oct 2020 18:27:10 +0000
Received: by outflank-mailman (input) for mailman id 3690;
 Wed, 07 Oct 2020 18:27:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6s-00072Q-Ro
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:18 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a26448e-ebaa-49d8-a387-8b29faaad382;
 Wed, 07 Oct 2020 18:21:56 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk2-0007CF-Gl; Wed, 07 Oct 2020 19:00:42 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 36/82] resource reporting, nfc: Break out report_rogue_task_description
Date: Wed,  7 Oct 2020 18:59:38 +0100
Message-Id: <20201007180024.7932-37-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 14 ++++++++++++++
 ms-planner           |  6 +-----
 2 files changed, 15 insertions(+), 5 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 5f59c44e..e3ab1dc3 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -51,6 +51,7 @@ BEGIN {
                       report_run_getinfo report_altcolour
                       report_altchangecolour opendb_tests
                       report_blessingscond report_find_push_age_info
+		      report_rogue_task_description
 		      log_stderr_timestamped
                       tcpconnect_queuedaemon plan_search
                       manual_allocation_base_jobinfo
@@ -418,6 +419,19 @@ sub report_blessingscond ($) {
     return $blessingscond;
 }
 
+sub report_rogue_task_description ($) {
+    my ($arow) = @_;
+    # should have the fields from tasks, except taskid which
+    #  caller must print if they want it.
+    # may also have fields from resources (eg subtask)
+    my $info= "rogue task ";
+    $info .= " $arow->{type} $arow->{refkey}";
+    $info .= " ($arow->{comment})" if defined $arow->{comment};
+    $info .= " $arow->{subtask}";
+    $info .= " (user $arow->{username})";
+    return $info;
+}
+
 sub report__find_test ($$$$$$$) {
     my ($blessings, $branches, $tree,
 	$revision, $selection, $extracond, $sortlimit) = @_;
diff --git a/ms-planner b/ms-planner
index 41f27fa0..4b970730 100755
--- a/ms-planner
+++ b/ms-planner
@@ -321,11 +321,7 @@ END
 	    $info= "(preparing)";
 	} else {
 	    print DEBUG "rogue $reso $shareix: $arow->{owntaskid}\n";
-	    $info= "rogue task";
-	    $info .= " $arow->{type} $arow->{refkey}";
-	    $info .= " ($arow->{comment})" if defined $arow->{comment};
-	    $info .= " $arow->{subtask}";
-	    $info .= " (user $arow->{username})";
+	    $info= report_rogue_task_description($arow);
 	}
 	$plan->{Allocations}{$reskey}= {
             Task => $arow->{owntaskid},
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3691.10867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9k-00026I-F1; Wed, 07 Oct 2020 18:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3691.10867; Wed, 07 Oct 2020 18:27:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9k-00025O-0b; Wed, 07 Oct 2020 18:27:16 +0000
Received: by outflank-mailman (input) for mailman id 3691;
 Wed, 07 Oct 2020 18:27:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2q-00072Q-Iy
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:08 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b6a0a65-23e5-4aea-a941-1dd37c7d206f;
 Wed, 07 Oct 2020 18:19:32 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkE-0007CF-6h; Wed, 07 Oct 2020 19:00:54 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 82/82] sg-report-flight: Word-wrapping improvements to job and step names
Date: Wed,  7 Oct 2020 19:00:24 +0100
Message-Id: <20201007180024.7932-83-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use <wbr>.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 33f953ca..a07e03cb 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1185,6 +1185,15 @@ sub nullcols {
     join ", ", map { m/::/ ? "NULL::$' as $`" : "NULL as $_" } @_;
 }
 
+sub encode_with_wbrs ($) {
+    my ($s) = @_;
+    my $re = qr{[-/]};
+    join '', map {
+	my $b = s{^$re}{} ? ('<wbr>'. $& . '&#8288;') : '';
+	$b.encode_entities($_);
+    } split m{(?=$re)}, $s;
+}
+
 sub htmloutjob ($$) {
     my ($fi,$job) = @_;
     return unless defined $htmldir;
@@ -1666,11 +1675,9 @@ END
     print H "</th>\n";
 
     foreach my $col (@cols) {
-        my $th= $col;
-        $th =~ s/\-/ $&/g;
         print H "<th>";
         print H "<a href=\"".encode_entities($col)."/$htmlleaf\">";
-        print H encode_entities($th);
+        print H encode_with_wbrs($col);
         print H "</a>";
         print H "</th>";
     }
@@ -1726,7 +1733,8 @@ END
 	    next if $this[1] == $worst[1] && $ei->{Step}{status} ne 'pass';
 	    @worst=@this;
 	    push @worst,
-	        encode_entities("$ei->{Step}{stepno}. $ei->{Step}{testid}");
+	      encode_entities("$ei->{Step}{stepno}. ").
+	      encode_with_wbrs($ei->{Step}{testid});
 	}
 	push @worstrow1, "<td ",$worst[2],">",$worst[3],"</td>";
 	push @worstrow2, "<td ",$worst[2],">",$worst[0],"</td>";
-- 
2.20.1
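The encode_with_wbrs helper in the patch above splits a name at each '-' or '/', emits a `<wbr>` break opportunity before the separator, and glues the separator to the following chunk with U+2060 WORD JOINER (&#8288;) so the browser never wraps immediately after it. A rough Python rendering of the same idea (not osstest code; the Perl uses HTML::Entities' encode_entities, approximated here with the standard library's html.escape):

```python
import html
import re

def encode_with_wbrs(s: str) -> str:
    """Entity-encode s, adding <wbr> break opportunities.

    Before each '-' or '/' a <wbr> is emitted; the separator itself is
    followed by &#8288; (WORD JOINER), so a line break is allowed before
    the separator but never directly after it.
    """
    out = []
    # Lookahead split keeps each separator attached to the chunk after it.
    for chunk in re.split(r"(?=[-/])", s):
        if chunk and chunk[0] in "-/":
            out.append("<wbr>" + chunk[0] + "&#8288;")
            chunk = chunk[1:]
        out.append(html.escape(chunk))
    return "".join(out)
```

For a typical osstest column name this yields e.g. `build<wbr>-&#8288;amd64` for `build-amd64`, which renders identically but wraps cleanly in narrow report tables.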



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3692.10879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9o-0002HC-D9; Wed, 07 Oct 2020 18:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3692.10879; Wed, 07 Oct 2020 18:27:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9n-0002Ge-Va; Wed, 07 Oct 2020 18:27:19 +0000
Received: by outflank-mailman (input) for mailman id 3692;
 Wed, 07 Oct 2020 18:27:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6n-00072Q-RX
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:13 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03699dbd-5e37-4648-863d-40c47704ce6f;
 Wed, 07 Oct 2020 18:21:54 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk5-0007CF-FJ; Wed, 07 Oct 2020 19:00:45 +0100
X-Inumbo-ID: 03699dbd-5e37-4648-863d-40c47704ce6f
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 48/82] shared/reuse: Use @ for freebsd host prep
Date: Wed,  7 Oct 2020 18:59:50 +0100
Message-Id: <20201007180024.7932-49-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

These are all the relevant call sites for ts-freebsd-host-install and
ts-build-prep-freebsd.  (There's a ts-freebsd-host-install in
ts-memdisk-try-append, but that's for host examination and does not
use or want sharing or reuse.)

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-run-job              | 12 +++++++++---
 ts-build-prep-freebsd   |  1 -
 ts-freebsd-host-install |  1 -
 3 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index c454d4ea..0b2e20e7 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -758,9 +758,15 @@ proc prepare-build-host-linux {} {
 
 proc prepare-build-host-freebsd {} {
     global jobinfo
-    if {[recipe-flag testinstall]} { set broken fail } { set broken broken }
-    run-ts $broken host-install(*) ts-freebsd-host-install
-    run-ts . host-build-prep ts-build-prep-freebsd
+    if {[recipe-flag testinstall]} {
+	set broken fail
+	set isprep {}
+    } {
+	set broken broken
+	set isprep @
+    }
+    run-ts $broken host-install(*) ts-freebsd-host-install + ${isprep}host
+    run-ts . host-build-prep ts-build-prep-freebsd + ${isprep}host
 }
 
 proc need-hosts/coverity {} { return BUILD_LINUX }
diff --git a/ts-build-prep-freebsd b/ts-build-prep-freebsd
index ef880503..9606c0f7 100755
--- a/ts-build-prep-freebsd
+++ b/ts-build-prep-freebsd
@@ -28,7 +28,6 @@ tsreadconfig();
 our ($whhost) = @ARGV;
 $whhost ||= 'host';
 our $ho= selecthost($whhost);
-exit 0 if $ho->{SharedReady};
 
 sub install_deps () {
     my @packages = qw(git glib pkgconf yajl gmake pixman markdown gettext
diff --git a/ts-freebsd-host-install b/ts-freebsd-host-install
index 3c3e9c34..9feb98cd 100755
--- a/ts-freebsd-host-install
+++ b/ts-freebsd-host-install
@@ -65,7 +65,6 @@ our ($whhost) = @ARGV;
 $whhost ||= 'host';
 our $ho= selecthost($whhost);
 exit 0 if $ho->{Flags}{'no-reinstall'};
-exit 0 if $ho->{SharedReady};
 
 our $timeout = 1000;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3693.10890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9q-0002Mg-C0; Wed, 07 Oct 2020 18:27:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3693.10890; Wed, 07 Oct 2020 18:27:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9p-0002Lu-R6; Wed, 07 Oct 2020 18:27:21 +0000
Received: by outflank-mailman (input) for mailman id 3693;
 Wed, 07 Oct 2020 18:27:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE30-00072Q-IS
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:18 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51e97c34-5e24-471f-a252-757d6711b763;
 Wed, 07 Oct 2020 18:19:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk3-0007CF-A3; Wed, 07 Oct 2020 19:00:43 +0100
X-Inumbo-ID: 51e97c34-5e24-471f-a252-757d6711b763
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 39/82] ts-host-reuse: tolerate unremovable LV
Date: Wed,  7 Oct 2020 18:59:41 +0100
Message-Id: <20201007180024.7932-40-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

It might be a symlink in the pair tests.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-host-reuse | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-host-reuse b/ts-host-reuse
index e14a9149..74ac94aa 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -112,7 +112,7 @@ ENDI
                 printf 'LV %s...\n' "$dev"
                 if ! test -e $dev; then continue; fi
                 dd if=/dev/urandom bs=1024 count=4096 of=$dev ||:
-                lvremove -f $dev
+                lvremove -f $dev ||:
             done
         done
 ENDQ
-- 
2.20.1
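The one-character change above appends `||:` to lvremove, the shell idiom for "run this, but tolerate failure": `:` is the no-op builtin, which always exits 0, so the overall status is success and an enclosing `set -e` (as in the here-document scripts osstest sends to hosts) does not abort. A minimal standalone illustration:

```shell
#!/bin/sh
set -e  # any failing command normally aborts the script

# "false" always fails; "||:" makes the failure non-fatal, because the
# no-op builtin ":" always succeeds, so the compound status is 0.
false ||:

echo "still running"   # reached despite the failure above
```

Without the `||:`, the script would exit at the `false` line and never print anything.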



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3694.10896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9r-0002PT-FK; Wed, 07 Oct 2020 18:27:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3694.10896; Wed, 07 Oct 2020 18:27:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9q-0002Od-V2; Wed, 07 Oct 2020 18:27:22 +0000
Received: by outflank-mailman (input) for mailman id 3694;
 Wed, 07 Oct 2020 18:27:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3t-00072Q-Jv
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:13 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ace94ed7-c910-40bb-bbf1-9495690a4b88;
 Wed, 07 Oct 2020 18:20:00 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjy-0007CF-8H; Wed, 07 Oct 2020 19:00:38 +0100
X-Inumbo-ID: ace94ed7-c910-40bb-bbf1-9495690a4b88
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 20/82] ts-hosts-allocate-Executive: Fix handling of failed preps for same sharing
Date: Wed,  7 Oct 2020 18:59:22 +0100
Message-Id: <20201007180024.7932-21-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This code was previously unreachable.  It ought to be executed when
all the shares are either allocatable or in the prep state: in that
case, we can unshare and re-share the host.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-hosts-allocate-Executive | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 78b94c6d..2c18a739 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -1044,12 +1044,11 @@ sub actual_allocation ($) {
 		# now-dead task.  If so that share will become allocatable
 		# (and therefore not be counted in `ntasks') in due course.
 		return undef;
-
-                # someone was preparing it but they aren't any more
-                push @allocwarnings,
-                    "previous preparation apparently abandoned";
-                $allocatable= 1;
-            }
+	    }
+	    # someone was preparing it but they aren't any more
+	    push @allocwarnings,
+		"previous preparation apparently abandoned";
+	    $allocatable= 1;
         }
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3695.10915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9u-0002Zw-Sj; Wed, 07 Oct 2020 18:27:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3695.10915; Wed, 07 Oct 2020 18:27:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9u-0002ZD-DM; Wed, 07 Oct 2020 18:27:26 +0000
Received: by outflank-mailman (input) for mailman id 3695;
 Wed, 07 Oct 2020 18:27:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5V-00072Q-OH
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:53 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f414edb5-514b-4fae-969b-d7da8e64f6e4;
 Wed, 07 Oct 2020 18:21:12 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk5-0007CF-Sz; Wed, 07 Oct 2020 19:00:45 +0100
X-Inumbo-ID: f414edb5-514b-4fae-969b-d7da8e64f6e4
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 50/82] shared/reuse: Rely on @ for ts-host-ping-check
Date: Wed,  7 Oct 2020 18:59:52 +0100
Message-Id: <20201007180024.7932-51-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Remove the check for SharedReady.

The existence of this check is perplexing.  It was introduced in
  ts-host-ping-check: Do not run if host is being reused
in 8f1dc3f7c401 (from 2015).

At that time we only shared build hosts, and build hosts never ran
this script.  So I don't understand what that check was hoping to
achieve.  Maybe it made some difference in a now-lost pre-rebase
situation.

Anyway, in our current tree I think we want to rerun the
ts-host-ping-check when we reuse a test host.  My change to add @ to
parts of per-host-prep in sg-run-job deliberately omitted the step
with testid host-ping-check-xen/@.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-ping-check | 2 --
 1 file changed, 2 deletions(-)

diff --git a/ts-host-ping-check b/ts-host-ping-check
index a670680c..512aaec3 100755
--- a/ts-host-ping-check
+++ b/ts-host-ping-check
@@ -27,8 +27,6 @@ our ($whhost) = @ARGV;
 $whhost ||= 'host';
 our $ho= selecthost($whhost);
 
-exit 0 if $ho->{SharedReady};
-
 $_ = `ping -D -i 0.2 -c 100 $ho->{Ip} | tee -a /dev/stderr`;
 
 m/\b([0-9.]+)% packet loss\b/ or die "$_ ?";
-- 
2.20.1
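After dropping the SharedReady check, the script always runs ping and parses the loss percentage out of the summary line with `m/\b([0-9.]+)% packet loss\b/`, dying if it cannot find one. A Python sketch of the same parse (the sample output string below is illustrative ping(8) output, not captured osstest logs):

```python
import re

def packet_loss_percent(ping_output: str) -> float:
    """Extract the packet-loss percentage from ping(8) summary output.

    Mirrors the Perl regex in ts-host-ping-check; raises ValueError
    when no loss figure is present (the Perl script dies there).
    """
    m = re.search(r"\b([0-9.]+)% packet loss\b", ping_output)
    if m is None:
        raise ValueError(f"no packet-loss figure in: {ping_output!r}")
    return float(m.group(1))
```

The caller can then apply whatever threshold policy it likes, e.g. fail the step when the returned value exceeds an acceptable loss rate.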



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3696.10921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9w-0002dG-DA; Wed, 07 Oct 2020 18:27:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3696.10921; Wed, 07 Oct 2020 18:27:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9v-0002c7-Gj; Wed, 07 Oct 2020 18:27:27 +0000
Received: by outflank-mailman (input) for mailman id 3696;
 Wed, 07 Oct 2020 18:27:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE77-00072Q-S8
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:33 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cff2d2c9-adb1-41e0-ab49-5bf585595830;
 Wed, 07 Oct 2020 18:22:05 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk4-0007CF-AH; Wed, 07 Oct 2020 19:00:44 +0100
X-Inumbo-ID: cff2d2c9-adb1-41e0-ab49-5bf585595830
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 43/82] ts-hosts-allocate-Executive print sharing info in debug output
Date: Wed,  7 Oct 2020 18:59:45 +0100
Message-Id: <20201007180024.7932-44-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-hosts-allocate-Executive | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 58e9f891..fc107c08 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -691,7 +691,9 @@ sub hid_recurse ($$) {
     print DEBUG "$dbg FINAL start=$start_time va=$variation_age".
 	" variation_bonus=$variation_bonus".
         " previously_failed=$previously_failed".
-	" previously_failed_equiv=$previously_failed_equiv cost=$cost\n";
+	" previously_failed_equiv=$previously_failed_equiv".
+	" eff.share_bonus=$share_reuse*$share_reuse_bonus".
+	" cost=$cost\n";
 
     if (!defined $best || $cost < $best->{Cost}) {
         print DEBUG "$dbg FINAL BEST: ".
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3697.10928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9x-0002gA-EN; Wed, 07 Oct 2020 18:27:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3697.10928; Wed, 07 Oct 2020 18:27:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9w-0002fj-OQ; Wed, 07 Oct 2020 18:27:28 +0000
Received: by outflank-mailman (input) for mailman id 3697;
 Wed, 07 Oct 2020 18:27:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE72-00072Q-S6
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:28 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1300e0d7-471c-47ba-a7b7-9128edadea9e;
 Wed, 07 Oct 2020 18:22:01 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkB-0007CF-Od; Wed, 07 Oct 2020 19:00:51 +0100
X-Inumbo-ID: 1300e0d7-471c-47ba-a7b7-9128edadea9e
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 75/82] host reuse: New protocol between sg-run-job and ts-host-reuse
Date: Wed,  7 Oct 2020 19:00:17 +0100
Message-Id: <20201007180024.7932-76-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Abolish post-test-ok (which runs only if successful) and replace it
with final (which sets the runvar to indicate finality, and runs
regardless).

This allows a subsequent job which reuses the host to see that this
job had finished using the host.  This is relevant for builds, where a
host can be reused even after a failed job.

"Lies", where we would claim that use of the host was finished when it
was not, are avoided (barring unlikely races) because selecthost
de-finalises the runvar.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-run-job    |  8 +++++++-
 ts-host-reuse | 25 ++++++++++++++++---------
 2 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index 2feb67d9..dd76d4f2 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -124,7 +124,13 @@ proc run-job {job} {
        set ok 0
     }
 
-    if {$ok} { per-host-ts .  =            { ts-host-reuse post-test-ok } }
+    if {[llength $need_build_host]} {
+	run-ts !broken =                  ts-host-reuse final host
+    }
+    set reuse {}
+    if {$ok} { lappend reuse --post-test-ok }
+    eval [list per-host-ts !broken  = { ts-host-reuse final }] $reuse
+
     if {$ok} { setstatus pass                                             }
 
     if {[llength $need_build_host] && $ok} { jobdb::preserve-task 90 }
diff --git a/ts-host-reuse b/ts-host-reuse
index 21d77500..ae967304 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -154,15 +154,22 @@ sub act_start_test () {
     host_shared_mark_ready($ho, $sharetype, \%oldstate, 'mid-test');
 }
 
-sub act_post_test_ok () {
-    die if @ARGV;
-    compute_test_sharetype();
-    $ho = selecthost($whhost);
-    return unless $ho->{Shared};
-    die unless $ho->{Shared}{State} eq 'mid-test';
-    post_test_cleanup();
-    host_update_lifecycle_info($ho, 'final');
-    host_shared_mark_ready($ho, $sharetype, 'mid-test', 'ready');
+sub act_final () {
+    if (!@ARGV) {
+	$ho = selecthost($whhost);
+	return unless $ho;
+	host_update_lifecycle_info($ho, 'final');
+    } elsif ("@ARGV" eq "--post-test-ok") {
+	compute_test_sharetype();
+	$ho = selecthost($whhost);
+	return unless $ho->{Shared};
+	die unless $ho->{Shared}{State} eq 'mid-test';
+	post_test_cleanup();
+	host_update_lifecycle_info($ho, 'final');
+	host_shared_mark_ready($ho, $sharetype, 'mid-test', 'ready');
+    } else {
+	die;
+    }
 }
 
 $action =~ y/-/_/;
-- 
2.20.1
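The new act_final entry point above accepts two call shapes: bare `final`, which only records finality in the lifecycle runvar, and `final --post-test-ok`, which additionally performs post-test cleanup and moves a shared host from mid-test back to ready. A schematic Python model of that dispatch (the dict-based host state and all names here are illustrative stand-ins, not real osstest APIs):

```python
def act_final(argv, host):
    """Record that this job is finished with `host`.

    No arguments: just mark the lifecycle 'final' (runs regardless of
    job outcome).  With --post-test-ok (the job passed): also clean up
    and return a shared host from 'mid-test' to 'ready'.
    """
    if not argv:
        host["lifecycle"] = "final"
    elif argv == ["--post-test-ok"]:
        if host.get("shared_state") != "mid-test":
            raise RuntimeError("shared host not in mid-test state")
        # post-test cleanup would happen here in the real script
        host["lifecycle"] = "final"
        host["shared_state"] = "ready"
    else:
        raise ValueError(f"unexpected arguments: {argv}")
    return host
```

The point of the split is that the cheap bare-`final` path can run even after a failed job, so a later job reusing the host can still see that this one was done with it.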



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3698.10943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9z-0002lX-9T; Wed, 07 Oct 2020 18:27:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3698.10943; Wed, 07 Oct 2020 18:27:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9y-0002jx-Dw; Wed, 07 Oct 2020 18:27:30 +0000
Received: by outflank-mailman (input) for mailman id 3698;
 Wed, 07 Oct 2020 18:27:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4X-00072Q-Lk
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:53 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77d213f6-ed2e-46b6-b768-ae539b012360;
 Wed, 07 Oct 2020 18:20:22 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDju-0007CF-L3; Wed, 07 Oct 2020 19:00:34 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE4X-00072Q-Lk
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:53 +0000
X-Inumbo-ID: 77d213f6-ed2e-46b6-b768-ae539b012360
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 77d213f6-ed2e-46b6-b768-ae539b012360;
	Wed, 07 Oct 2020 18:20:22 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDju-0007CF-L3; Wed, 07 Oct 2020 19:00:34 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 10/82] host reuse: ms-planner: Abbreviate reporting of test shares
Date: Wed,  7 Oct 2020 18:59:12 +0100
Message-Id: <20201007180024.7932-11-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ms-planner | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/ms-planner b/ms-planner
index 4e38e4e3..78d27b2f 100755
--- a/ms-planner
+++ b/ms-planner
@@ -414,11 +414,13 @@ END
 	next if $plan->{Events}{$reskey};
 	print DEBUG "document existing quiescent sharing $reskey".
 	    " share.Type $cshare->{Type}\n";
+	my $type = $cshare->{Type};
+	$type = "$&..." if $type =~ m{^test-\d+(?=\-)};
 	my $evt = {
             Time => $plan->{Start},
             Type => Unshare,
 	    Avail => 1,
-            Info => "recently shared $cshare->{Type}",
+            Info => "recently $type",
             PreviousShare => { %$cshare },
 	};
 	delete $evt->{PreviousShare}{OnlyPreparing};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3699.10955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEA1-0002pX-2v; Wed, 07 Oct 2020 18:27:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3699.10955; Wed, 07 Oct 2020 18:27:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQE9z-0002oK-PO; Wed, 07 Oct 2020 18:27:31 +0000
Received: by outflank-mailman (input) for mailman id 3699;
 Wed, 07 Oct 2020 18:27:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2b-00072Q-Hn
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:53 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ecb5c37-fa98-4803-bc1c-312631f56aa1;
 Wed, 07 Oct 2020 18:19:24 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkB-0007CF-8n; Wed, 07 Oct 2020 19:00:51 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE2b-00072Q-Hn
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:53 +0000
X-Inumbo-ID: 1ecb5c37-fa98-4803-bc1c-312631f56aa1
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1ecb5c37-fa98-4803-bc1c-312631f56aa1;
	Wed, 07 Oct 2020 18:19:24 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkB-0007CF-8n; Wed, 07 Oct 2020 19:00:51 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 73/82] host reuse: sg-run-job: Rename post-test-ok parameter
Date: Wed,  7 Oct 2020 19:00:15 +0100
Message-Id: <20201007180024.7932-74-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This is more accurate.

No overall functional change.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-run-job    | 2 +-
 ts-host-reuse | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index 1e2fcfee..2feb67d9 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -124,7 +124,7 @@ proc run-job {job} {
        set ok 0
     }
 
-    if {$ok} { per-host-ts .  =            { ts-host-reuse post-test }    }
+    if {$ok} { per-host-ts .  =            { ts-host-reuse post-test-ok } }
     if {$ok} { setstatus pass                                             }
 
     if {[llength $need_build_host] && $ok} { jobdb::preserve-task 90 }
diff --git a/ts-host-reuse b/ts-host-reuse
index 85beb51e..aad45bdd 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -152,7 +152,7 @@ sub act_start_test () {
     host_shared_mark_ready($ho, $sharetype, \%oldstate, 'mid-test');
 }
 
-sub act_post_test () {
+sub act_post_test_ok () {
     compute_test_sharetype();
     $ho = selecthost($whhost);
     return unless $ho->{Shared};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3700.10962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEA2-0002u7-GN; Wed, 07 Oct 2020 18:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3700.10962; Wed, 07 Oct 2020 18:27:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEA1-0002t9-Ft; Wed, 07 Oct 2020 18:27:33 +0000
Received: by outflank-mailman (input) for mailman id 3700;
 Wed, 07 Oct 2020 18:27:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3o-00072Q-Jr
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:08 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32357582-01ed-4322-ad57-48dd5f33319b;
 Wed, 07 Oct 2020 18:19:57 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjz-0007CF-9Z; Wed, 07 Oct 2020 19:00:39 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE3o-00072Q-Jr
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:08 +0000
X-Inumbo-ID: 32357582-01ed-4322-ad57-48dd5f33319b
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 32357582-01ed-4322-ad57-48dd5f33319b;
	Wed, 07 Oct 2020 18:19:57 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjz-0007CF-9Z; Wed, 07 Oct 2020 19:00:39 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 24/82] host allocation: *_shared_mark_ready: Allow other states
Date: Wed,  7 Oct 2020 18:59:26 +0100
Message-Id: <20201007180024.7932-25-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Generalise these functions so they can set the state to something
other than `ready', and so that they can expect a state other than
`prep'.

No functional change with existing callers.
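The generalisation amounts to two new optional arguments that default to the historical values, so existing callers behave as before. A Python sketch of the same pattern (state names follow the patch; everything else is illustrative):

```python
def mark_ready(state, oldstate=None, newstate=None):
    """Check the current share state and move it on.  Defaulting to
    prep -> ready preserves the old behaviour, like Perl's //= operator."""
    oldstate = oldstate if oldstate is not None else 'prep'
    newstate = newstate if newstate is not None else 'ready'
    if state != oldstate:
        # corresponds to the die "... shared state $state not $oldstate"
        raise RuntimeError(f"shared state {state} not {oldstate}")
    return newstate

print(mark_ready('prep'))                           # ready
print(mark_ready('mid-test', 'mid-test', 'ready'))  # ready
```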

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm       | 19 +++++++++++--------
 Osstest/JobDB/Executive.pm |  5 +++--
 Osstest/TestSupport.pm     |  6 +++---
 3 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 703f3d85..f0038f6b 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1050,11 +1050,14 @@ END
     return $shared;
 }
 
-sub executive_resource_shared_mark_ready ($$$) {
-    my ($restype, $resname, $sharetype) = @_;
+sub executive_resource_shared_mark_ready ($$$;$$) {
+    my ($restype, $resname, $sharetype, $oldstate, $newstate) = @_;
     # must run outside transaction
 
     my $oldshr;
+    $oldstate //= 'prep';
+    $newstate //= 'ready';
+
     my $what= "resource $restype $resname";
     $sharetype .= ' '.get_harness_rev();
 
@@ -1063,11 +1066,11 @@ sub executive_resource_shared_mark_ready ($$$) {
         if (defined $oldshr) {
             die "$what shared $oldshr->{Type} not $sharetype"
                 unless $oldshr->{Type} eq $sharetype;
-            die "$what shared state $oldshr->{State} not prep"
-                unless $oldshr->{State} eq 'prep';
-            my $nrows= $dbh_tests->do(<<END,{}, $restype,$resname,$sharetype);
+            die "$what shared state $oldshr->{State} not $oldstate"
+                unless $oldshr->{State} eq $oldstate;
+            my $nrows= $dbh_tests->do(<<END,{}, $newstate, $restype,$resname,$sharetype);
                 UPDATE resource_sharing
-                   SET state='ready'
+                   SET state=?
                  WHERE restype=? AND resname=? AND sharetype=?
 END
             die "unexpected not updated state $what $sharetype $nrows"
@@ -1092,9 +1095,9 @@ END
        logm("post-mark-ready queue daemon prod failed: $@");
     }
     if ($oldshr) {
-	logm("$restype $resname shared $sharetype marked ready");
+	logm("$restype $resname shared $sharetype marked $newstate");
     } else {
-	logm("$restype $resname (not shared, $sharetype) is ready");
+	logm("$restype $resname (not shared, $sharetype) is $newstate");
     }
 }
 
diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 39deb8a2..30629572 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -352,8 +352,9 @@ sub gen_ether_offset ($$) { #method
 }
 
 sub jobdb_resource_shared_mark_ready { #method
-    my ($mo, $restype, $resname, $sharetype) = @_;
-    executive_resource_shared_mark_ready($restype, $resname, $sharetype);
+    my ($mo, $restype, $resname, $sharetype, $oldstate, $newstate) = @_;
+    executive_resource_shared_mark_ready
+	($restype, $resname, $sharetype, $oldstate,$newstate);
 }
 
 sub jobdb_check_other_job { #method
diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 99c7654d..7292a329 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -3106,11 +3106,11 @@ sub sha256file ($;$) {
     return $truncate ? substr($digest, 0, $truncate) : $digest;
 }
 
-sub host_shared_mark_ready($$) {
-    my ($ho,$sharetype) = @_;
+sub host_shared_mark_ready($$;$$) {
+    my ($ho,$sharetype, $oldstate, $newstate) = @_;
 
     $mjobdb->jobdb_resource_shared_mark_ready('host', $ho->{Name},
-                                              $sharetype);
+        $sharetype, $oldstate, $newstate);
 }
 
 sub gitcache_setup ($) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3701.10978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEA4-00032C-VX; Wed, 07 Oct 2020 18:27:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3701.10978; Wed, 07 Oct 2020 18:27:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEA4-00030S-01; Wed, 07 Oct 2020 18:27:36 +0000
Received: by outflank-mailman (input) for mailman id 3701;
 Wed, 07 Oct 2020 18:27:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4w-00072Q-Mt
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:18 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c4a2380-59ee-4efa-b349-fac56b8f829b;
 Wed, 07 Oct 2020 18:20:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkC-0007CF-Eb; Wed, 07 Oct 2020 19:00:52 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE4w-00072Q-Mt
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:18 +0000
X-Inumbo-ID: 6c4a2380-59ee-4efa-b349-fac56b8f829b
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6c4a2380-59ee-4efa-b349-fac56b8f829b;
	Wed, 07 Oct 2020 18:20:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkC-0007CF-Eb; Wed, 07 Oct 2020 19:00:52 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 78/82] flight other job reporting: Put nulls last in the report
Date: Wed,  7 Oct 2020 19:00:20 +0100
Message-Id: <20201007180024.7932-79-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Cosmetic change only, but this makes the results easier to understand.
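The SQL `ORDER BY ... NULLS LAST` clause sorts rows whose timestamp is NULL after all rows with real timestamps, so unfinished entries sink to the bottom of each group. The same effect in application code (a sketch, not part of the patch):

```python
rows = [{'finished': 300}, {'finished': None}, {'finished': 100}]
# (value is None, value) sorts None after every real timestamp,
# like SQL's ORDER BY finished NULLS LAST
rows.sort(key=lambda r: (r['finished'] is None, r['finished']))
print([r['finished'] for r in rows])  # [100, 300, None]
```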

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/sg-report-flight b/sg-report-flight
index 2a79db13..d8829932 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1434,7 +1434,10 @@ END
        SELECT * FROM r_elided
      ORDER BY tident, hostname,
 	      kind_sort,
-	      finished, prep_started, rest_started, flight, job, oidents,
+              prep_started NULLS LAST,
+              rest_started NULLS LAST,
+	      finished NULLS LAST,
+              flight, job, oidents,
 	      sort_index
 END
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3702.10990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEA7-00037A-7S; Wed, 07 Oct 2020 18:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3702.10990; Wed, 07 Oct 2020 18:27:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEA5-00034c-JB; Wed, 07 Oct 2020 18:27:37 +0000
Received: by outflank-mailman (input) for mailman id 3702;
 Wed, 07 Oct 2020 18:27:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6O-00072Q-Qr
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:48 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4d8655f-307c-496d-a72a-2974b582accc;
 Wed, 07 Oct 2020 18:21:39 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk3-0007CF-2H; Wed, 07 Oct 2020 19:00:43 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE6O-00072Q-Qr
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:48 +0000
X-Inumbo-ID: b4d8655f-307c-496d-a72a-2974b582accc
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b4d8655f-307c-496d-a72a-2974b582accc;
	Wed, 07 Oct 2020 18:21:39 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk3-0007CF-2H; Wed, 07 Oct 2020 19:00:43 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 38/82] ts-host-reuse: New script, to do reuse state changes
Date: Wed,  7 Oct 2020 18:59:40 +0100
Message-Id: <20201007180024.7932-39-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This will be made part of the test job recipes.

We calculate the sharing scope (sharetype) by reference to a lot of
runvars, etc.

This version of the script is rather far from the finished working
one, but it seems better to preserve the actual history for how it got
the way it is.
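The share type is a single string built from the flight and the build-related runvars, with optional components appended as `/key=value` and undefined runvars contributing nothing. A rough Python equivalent of compute_test_sharetype (the runvar names and values here are illustrative; the real script also folds in xenbuildjob, kernbuildjob, suite, and so on):

```python
def compute_sharetype(flight, runvars, optional):
    parts = f"test-{flight}/{runvars['arch']}/{runvars['buildjob']}"
    for k, v in optional.items():
        if v is None:
            continue  # undefined runvars are simply omitted
        if any(c in v for c in ',/%\\'):
            # these characters are used as separators, so they must not
            # appear in a value (the Perl script dies in this case)
            raise ValueError(f"{k} {v} ?")
        parts += f"/{k}={v}"
    return parts

print(compute_sharetype(152903,
                        {'arch': 'amd64', 'buildjob': 'build-amd64'},
                        {'suite': 'stretch', 'toolstack': None}))
# test-152903/amd64/build-amd64/suite=stretch
```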

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-reuse | 163 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 163 insertions(+)
 create mode 100755 ts-host-reuse

diff --git a/ts-host-reuse b/ts-host-reuse
new file mode 100755
index 00000000..e14a9149
--- /dev/null
+++ b/ts-host-reuse
@@ -0,0 +1,163 @@
+#!/usr/bin/perl -w
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2009-2013 Citrix Inc.
+# 
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+# 
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+# 
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# usage:
+#   ./ts-host-reuse prealloc|start-test|post-test IDENT [ARGS...]
+#
+# Computes the sharing scope of the host for IDENT, and then
+#
+#   prealloc      Sets the share-* runtime hostflag appropriately
+#
+#   start-test    Expects the host to have previously been `prep'
+#                 (being prepared) or `ready'.
+#                 Marks it as `mid-test'.
+#
+#                 All the scripts before `ready' or `start-test', (at
+#                 least, ones which affect the host) should have been
+#                 marked with @, so that they are skipped when the
+#                 host is shared or reused.
+#
+#   post-test     Expects the host to have previously been `mid-test'
+#                 Does a small amount of cleanup, deleting some things
+#                 which take a lot of space.
+#                 Then marks the host as `ready' for reuse.
+#                 Must not be done if test arrangements had unexpected
+#                 failures which might leave host in an odd state.
+
+use strict qw(vars);
+use DBI;
+BEGIN { unshift @INC, qw(.); }
+use Osstest;
+use POSIX;
+use Osstest::TestSupport;
+
+tsreadconfig();
+
+die unless @ARGV==2;
+
+our ($action, $whhost) = @ARGV;
+
+our $ho;
+
+#---------- compute $sharetype ----------
+
+our $sharetype;
+
+sub sharetype_add ($$) {
+    my ($k, $v) = @_;
+    return unless defined $v;
+    $sharetype .= "/$k=$v";
+}
+
+sub compute_test_sharetype () {
+    $sharetype =
+	"test-$flight/$r{arch}/$r{xenbuildjob}/$r{kernbuildjob}/$r{buildjob}";
+
+    sharetype_add('suite', $ho->{Suite});
+    sharetype_add('di', $ho->{DiVersion});
+
+    foreach my $runvar (qw(freebsd_distpath freebsdbuildjob
+			   bios xenable_xsm toolstack kernkind)) {
+	my $val = $r{$runvar};
+	die "$runvar $val ?" if defined $val && $val =~ m{[,/\%\\]};
+	sharetype_add($runvar, $val);
+    }
+
+    return $sharetype;
+}
+
+#---------- functions ----------
+
+sub post_test_cleanup () {
+    my $script = <<'ENDQ';
+        set -e
+        cd /root
+        du -skx * | while read size thing; do
+            printf '%10d %s' "$size" "$thing"
+            if [ $size -gt 11000 ]; then
+                printf ' removing'
+                rm -rf -- "$thing"
+            fi
+            printf '\n'
+        done
+ENDQ
+    my $r_vals = sub {
+	my ($re) = @_;
+	map { $r{$_} }
+            sort
+	    grep /$re/,
+	    keys %r;
+    };
+    my @vgs = $r_vals->(qr{_vg$});
+    my @lvs = $r_vals->(qr{_lv$});
+    $script .= <<ENDI.<<'ENDQ';
+        for vg in @vgs; do
+            for lv in @lvs; do
+ENDI
+                dev=/dev/$vg/$lv
+                printf 'LV %s...\n' "$dev"
+                if ! test -e $dev; then continue; fi
+                dd if=/dev/urandom bs=1024 count=4096 of=$dev ||:
+                lvremove -f $dev
+            done
+        done
+ENDQ
+    target_cmd_root($ho, $script, 300);
+}
+
+#---------- functionality shared between actions ----------
+
+sub noop_if_playing () {
+    my $wantreuse = $ENV{'OSSTEST_REUSE_TEST_HOSTS'};
+    my $intended = intended_blessing();
+    if (!defined $wantreuse) {
+	$wantreuse = $intended !~ /play/;
+    }
+    if (!$wantreuse) {
+	logm("not reusing test hosts (in $intended flight)");
+	exit 0;
+    }
+}
+
+#---------- actions ----------
+
+sub act_prealloc () {
+    noop_if_playing();
+    compute_test_sharetype();
+    $ho = selecthost($whhost, undef, 1);
+    set_runtime_hostflag($ho->{Ident}, "reuse-$sharetype");
+}
+
+sub act_start_test () {
+    compute_test_sharetype();
+    $ho = selecthost($whhost);
+    return unless $ho->{Shared};
+    my %oldstate = map { $_ => 1 } qw(prep ready);
+    host_shared_mark_ready($ho, $sharetype, \%oldstate, 'mid-test');
+}
+
+sub act_post_test () {
+    compute_test_sharetype();
+    $ho = selecthost($whhost);
+    return unless $ho->{Shared};
+    die unless $ho->{Shared}{State} eq 'mid-test';
+    post_test_cleanup();
+    host_shared_mark_ready($ho, $sharetype, 'mid-test', 'ready');
+}
+
+$action =~ y/-/_/;
+&{"act_$action"}();
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3703.11007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAA-0003H0-8w; Wed, 07 Oct 2020 18:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3703.11007; Wed, 07 Oct 2020 18:27:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEA8-0003Ew-Sr; Wed, 07 Oct 2020 18:27:40 +0000
Received: by outflank-mailman (input) for mailman id 3703;
 Wed, 07 Oct 2020 18:27:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4m-00072Q-MQ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:08 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28d148d1-a6a9-4f44-b524-7bf9d2054855;
 Wed, 07 Oct 2020 18:20:28 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk9-0007CF-3e; Wed, 07 Oct 2020 19:00:49 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE4m-00072Q-MQ
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:08 +0000
X-Inumbo-ID: 28d148d1-a6a9-4f44-b524-7bf9d2054855
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 28d148d1-a6a9-4f44-b524-7bf9d2054855;
	Wed, 07 Oct 2020 18:20:28 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk9-0007CF-3e; Wed, 07 Oct 2020 19:00:49 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 63/82] test host reuse: Switch to principled sharing scope runvar scheme
Date: Wed,  7 Oct 2020 19:00:05 +0100
Message-Id: <20201007180024.7932-64-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

* When selecthost is passed an @host ident, indicating prep work,
  engage restricted runvar access.  If no call to sharing_for_build
  was made, this means it can access only the runvars in
  the default value of @accessible_runvar_pats.

* Make the sharetype for host reuse be based on the values of
  precisely those same runvars, rather than using an adhoc scheme.

The set of covered runvars is bigger now as a result of testing...
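Under restricted runvar access, a read succeeds only if the runvar's name matches one of the glob patterns. A Python sketch of that check using fnmatch (the pattern list is abbreviated from the patch's @accessible_runvar_pats):

```python
from fnmatch import fnmatchcase

ACCESSIBLE_PATS = ['arch', 'suite', 'toolstack', '*buildjob',
                   'host', '*_host', 'host_hostflags', '*_host_hostflags']

def runvar_accessible(name):
    # any() over the glob patterns mirrors matching a runvar name
    # against @accessible_runvar_pats
    return any(fnmatchcase(name, pat) for pat in ACCESSIBLE_PATS)

print(runvar_accessible('xenbuildjob'))      # True  (matches *buildjob)
print(runvar_accessible('debianhvm_image'))  # False
```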

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 23 ++++++++++++++++++++++-
 ts-host-reuse          | 20 ++++++++------------
 2 files changed, 30 insertions(+), 13 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 752c36c5..28381f05 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -162,7 +162,22 @@ our $logm_prefix= '';
 
 # When runvar_access_restrict is called, it will limit reading
 # of non-synth runvars to ones which match these glob patterns.
-our @accessible_runvar_pats = qw(test-host-setup-runvars-will-appear-here);
+# The initial list is the runvars which affect how a test host is
+# set up, and for test jobs it isn't modified.  synth runvars
+# which are read-modify-write by host setup must be listed too.
+our @accessible_runvar_pats =
+  qw(
+      *_dmrestrict *buildjob
+      arch console di_version dom0_mem enable_xsm freebsd_distpath
+      linux_boot_append os suite toolstack xen_boot_append xenable_xsm
+      host                           *_host 
+      host_console                   *_host_console
+      host_hostflagadjust            *_host_hostflagadjust
+      host_hostflags                 *_host_hostflags
+      host_linux_boot_append         *_host_linux_boot_append
+      host_ip                        *_host_ip
+      host_power_install             *_host_power_install
+   );
 
 #---------- test script startup ----------
 
@@ -1274,6 +1289,12 @@ sub selecthost ($;$$) {
 	return $child;
     }
 
+    #----- if we're sharing an actual host, make sure we do it right -----
+
+    if ($isprep) {
+	runvar_access_restrict();
+    }
+
     #----- calculation of the host's properties -----
 
     # Firstly, hardcoded defaults
diff --git a/ts-host-reuse b/ts-host-reuse
index 29abe987..c852a858 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -64,18 +64,14 @@ sub sharetype_add ($$) {
 }
 
 sub compute_test_sharetype () {
-    $sharetype =
-	"test-$flight/$r{arch}/$r{xenbuildjob}/$r{kernbuildjob}/$r{buildjob}";
-
-    sharetype_add('suite', $ho->{Suite});
-    sharetype_add('di', $ho->{DiVersion});
-
-    foreach my $runvar (qw(freebsd_distpath freebsdbuildjob
-			   xenable_xsm toolstack kernkind
-			   xen_boot_append toolstack)) {
-	my $val = $r{$runvar};
-	die "$runvar $val ?" if defined $val && $val =~ m{[,/\%\\]};
-	sharetype_add($runvar, $val);
+    $sharetype = "test-$flight";
+    my %done;
+    foreach my $key (runvar_glob(@accessible_runvar_pats)) {
+	next if runvar_is_synth($key);
+	my $val = $r{$key};
+	next if $done{$key}++;
+	$val =~ s{[^\"-\~]|\%}{ sprintf "%%%02x", ord $& }ge;
+	$sharetype .= "!$key=$val";
     }
 
     return $sharetype;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3704.11015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAC-0003Mi-1q; Wed, 07 Oct 2020 18:27:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3704.11015; Wed, 07 Oct 2020 18:27:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAB-0003Kr-0j; Wed, 07 Oct 2020 18:27:43 +0000
Received: by outflank-mailman (input) for mailman id 3704;
 Wed, 07 Oct 2020 18:27:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3Z-00072Q-JR
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:53 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e804b8aa-d6c1-408b-9e73-b72abd65494b;
 Wed, 07 Oct 2020 18:19:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk8-0007CF-Lq; Wed, 07 Oct 2020 19:00:48 +0100
X-Inumbo-ID: e804b8aa-d6c1-408b-9e73-b72abd65494b
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 61/82] runvar access: Introduce sharing_for_build
Date: Wed,  7 Oct 2020 19:00:03 +0100
Message-Id: <20201007180024.7932-62-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Builds don't have as much contingent setup.  We don't track the
runvars; we just rely on the share-* hostflag set in the job.

But selecthost() is going to automatically enable runvar access
control for shared/reused hosts.  So, provide a way to disable that.
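
A minimal Python sketch of the intended interaction (illustrative
names, not osstest's actual code): the access check is gated by a
glob pattern list, and sharing_for_build collapses that list to a
single '*', making any later restriction a no-op.

```python
from fnmatch import fnmatchcase

# Illustrative stand-in for @accessible_runvar_pats.
accessible_runvar_pats = ['arch', 'suite', 'host', '*_host']

def sharing_for_build():
    # Builds may read any runvar; collapse the list to the wildcard.
    accessible_runvar_pats[:] = ['*']

def runvar_readable(name):
    # A lone '*' pattern matches everything, i.e. unrestricted access.
    return any(fnmatchcase(name, p) for p in accessible_runvar_pats)
```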

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 4 +++-
 ts-host-install        | 2 ++
 ts-xen-build-prep      | 1 +
 3 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 6403e52b..c6bd6714 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -143,7 +143,7 @@ BEGIN {
                       sha256file host_shared_mark_ready
                       gitcache_setup
 
-		      @accessible_runvar_pats
+		      @accessible_runvar_pats sharing_for_build
                       );
     %EXPORT_TAGS = ( );
 
@@ -3171,6 +3171,8 @@ END
                                  'home-osstest-gitconfig');
 }
 
+sub sharing_for_build () { @accessible_runvar_pats = qw(*); };
+
 sub runvar_access_restrict () {
     # restricts runvars to those in @accessible_runvar_pats
     return if "@accessible_runvar_pats" eq "*";
diff --git a/ts-host-install b/ts-host-install
index b0fd2028..5badc706 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -49,6 +49,8 @@ while (@ARGV and $ARGV[0] =~ m/^-/) {
     }
 }
 
+if ($build) { sharing_for_build(); }
+
 our ($whhost) = @ARGV;
 $whhost ||= 'host';
 our $ho= selecthost($whhost);
diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index 092bbffe..fcabf75a 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -24,6 +24,7 @@ use Osstest::TestSupport;
 use Osstest::Debian;
 
 tsreadconfig();
+sharing_for_build();
 
 our ($whhost) = @ARGV;
 $whhost ||= 'host';
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3705.11027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAD-0003RO-S6; Wed, 07 Oct 2020 18:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3705.11027; Wed, 07 Oct 2020 18:27:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAC-0003Pd-Fp; Wed, 07 Oct 2020 18:27:44 +0000
Received: by outflank-mailman (input) for mailman id 3705;
 Wed, 07 Oct 2020 18:27:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2g-00072Q-Hx
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:19:58 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1118d174-7391-494e-9ba2-91552627002b;
 Wed, 07 Oct 2020 18:19:26 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkB-0007CF-Gj; Wed, 07 Oct 2020 19:00:51 +0100
X-Inumbo-ID: 1118d174-7391-494e-9ba2-91552627002b
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 74/82] host reuse: ts-host-reuse: Prepare for argument handling
Date: Wed,  7 Oct 2020 19:00:16 +0100
Message-Id: <20201007180024.7932-75-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

No functional change.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-host-reuse | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/ts-host-reuse b/ts-host-reuse
index aad45bdd..21d77500 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -48,9 +48,9 @@ use Osstest::TestSupport;
 
 tsreadconfig();
 
-die unless @ARGV==2;
-
-our ($action, $whhost) = @ARGV;
+die unless @ARGV>=2;
+our $action = shift @ARGV;
+our $whhost = shift @ARGV;
 
 our $ho;
 
@@ -137,6 +137,7 @@ sub noop_if_playing () {
 #---------- actions ----------
 
 sub act_prealloc () {
+    die if @ARGV;
     noop_if_playing();
     compute_test_sharetype();
     $ho = selecthost($whhost, undef, 1);
@@ -145,6 +146,7 @@ sub act_prealloc () {
 }
 
 sub act_start_test () {
+    die if @ARGV;
     compute_test_sharetype();
     $ho = selecthost($whhost);
     return unless $ho->{Shared};
@@ -153,6 +155,7 @@ sub act_start_test () {
 }
 
 sub act_post_test_ok () {
+    die if @ARGV;
     compute_test_sharetype();
     $ho = selecthost($whhost);
     return unless $ho->{Shared};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3706.11047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAI-0003hH-BP; Wed, 07 Oct 2020 18:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3706.11047; Wed, 07 Oct 2020 18:27:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAI-0003gt-2V; Wed, 07 Oct 2020 18:27:50 +0000
Received: by outflank-mailman (input) for mailman id 3706;
 Wed, 07 Oct 2020 18:27:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE35-00072Q-Ii
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:23 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 754cadb0-7421-45fd-92c0-5f4f9ff11a16;
 Wed, 07 Oct 2020 18:19:39 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjz-0007CF-Ty; Wed, 07 Oct 2020 19:00:40 +0100
X-Inumbo-ID: 754cadb0-7421-45fd-92c0-5f4f9ff11a16
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 27/82] host allocation: *_shared_mark_ready: allow alternative $oldtypes
Date: Wed,  7 Oct 2020 18:59:29 +0100
Message-Id: <20201007180024.7932-28-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

$oldtype may now be a hashref, where keys mapping to truthy values are
permitted for the sharetype precondition.

No functional change for existing callers.
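
In Python terms, the normalisation being added is roughly this (a
sketch, not osstest code): a bare string becomes a one-element set,
while a mapping keeps only the keys with truthy values.

```python
def normalize_oldstate(oldstate=None):
    # Mirror the Perl idiom: default to 'prep'; a bare string becomes a
    # one-element set; a mapping keeps only keys mapped to truthy values.
    if oldstate is None:
        oldstate = 'prep'
    if isinstance(oldstate, str):
        return {oldstate}
    return {k for k, v in oldstate.items() if v}
```

Existing callers pass a string (or nothing) and so see no change.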

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index fd975590..f2d43464 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1056,6 +1056,7 @@ sub executive_resource_shared_mark_ready ($$$;$$) {
 
     my $oldshr;
     $oldstate //= 'prep';
+    $oldstate = { $oldstate => 1 } unless ref $oldstate;
     $newstate //= 'ready';
 
     my $what= "resource $restype $resname";
@@ -1070,8 +1071,9 @@ sub executive_resource_shared_mark_ready ($$$;$$) {
         if (defined $oldshr) {
             die "$what shared $oldshr->{Type} not $sharetype"
                 unless !defined $sharetype or $oldshr->{Type} eq $sharetype;
-            die "$what shared state $oldshr->{State} not $oldstate"
-                unless $oldshr->{State} eq $oldstate;
+            die "$what shared state $oldshr->{State} not".
+		" one of ".(join ' ', sort keys %$oldstate)
+                unless $oldstate->{ $oldshr->{State} };
             my $nrows= $dbh_tests->do(<<END,{}, $newstate, $restype,$resname);
                 UPDATE resource_sharing
                    SET state=?
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3708.11053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAJ-0003kF-Ug; Wed, 07 Oct 2020 18:27:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3708.11053; Wed, 07 Oct 2020 18:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAI-0003j1-PV; Wed, 07 Oct 2020 18:27:50 +0000
Received: by outflank-mailman (input) for mailman id 3708;
 Wed, 07 Oct 2020 18:27:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6i-00072Q-RM
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:08 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e085007-4ceb-414a-9a23-2ce256872ea1;
 Wed, 07 Oct 2020 18:21:52 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk9-0007CF-W7; Wed, 07 Oct 2020 19:00:50 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE6i-00072Q-RM
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:08 +0000
X-Inumbo-ID: 9e085007-4ceb-414a-9a23-2ce256872ea1
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9e085007-4ceb-414a-9a23-2ce256872ea1;
	Wed, 07 Oct 2020 18:21:52 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk9-0007CF-W7; Wed, 07 Oct 2020 19:00:50 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 67/82] host lifecycle: Fix detection of concurrent jobs
Date: Wed,  7 Oct 2020 19:00:09 +0100
Message-Id: <20201007180024.7932-68-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The previous algorithm was wrong here.

This commit was originally written considerably later than the
previous one.  I'm avoiding squashing it, to make future archaeology
easier.
The effect of the bug is to report other tasks as live too often, so
hosts show up as shared rather than reused.
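
The corrected rule can be sketched in Python (hypothetical data
layout; the real code scans database rows): an entry counts as
concurrent if it is live now, or if it appears after our own entry in
the table, since it must then have overlapped us even if it is dead by
the time we scan.

```python
def concurrent_jobs(rows, our_job):
    # rows are lifecycle entries in table order.
    seen_us = False
    concurrent = []
    for row in rows:
        # Live now, or entered after us: either way it overlapped us.
        live = seen_us or row['live']
        if row['job'] == our_job:
            seen_us = True
        elif live:
            concurrent.append(row['job'])
    return concurrent
```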

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/JobDB/Executive.pm | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index cf82b4cf..3a8308e9 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -561,17 +561,15 @@ END
 END
 
     my $ojvn = "$ho->{Ident}_lifecycle";
-    my $firstrun;
 
     if (length $r{$ojvn}) {
 	my ($oldsigil,) = reverse split / /, $r{$ojvn};
 	$oldsigil = '' unless $oldsigil =~ m/^\W$/;
 	return if $newsigil ne '' && $oldsigil eq $newsigil;
-    } else {
-	$firstrun = 1;
     }
 
     my @lifecycle;
+    my $seen_us;
     db_retry($dbh_tests,[], sub {
 	my $elided;
 	@lifecycle = ();
@@ -581,10 +579,10 @@ END
 
 	while (my $o = $scanq->fetchrow_hashref()) {
 	    my $olive =
-	      # Any job which appeared since we started thinking
-	      # about this must have been concurrent with us,
-	      # even if it is dead now.
-	      (!$firstrun || $o->{live}) &&
+	      # Any job with any entry after we put ourselves in the
+	      # table must have been concurrent with us, even if it is
+	      # dead now.
+	      ($seen_us || $o->{live}) &&
 	      # If this task is still live, we need to have something
 	      # with a live mark, generally all the prep will have
 	      # occurred already, so we don't mark the prep as live
@@ -597,6 +595,7 @@ END
 		$o->{job} eq $job) {
 		# Don't put the + mark on our own entries.
 		$olivemark = '';
+		$seen_us = 1;
 	    }
 
 	    my $oisprepmark = !!$o->{isprep} && '@';
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3709.11071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAO-0003zw-Hl; Wed, 07 Oct 2020 18:27:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3709.11071; Wed, 07 Oct 2020 18:27:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAO-0003zT-3y; Wed, 07 Oct 2020 18:27:56 +0000
Received: by outflank-mailman (input) for mailman id 3709;
 Wed, 07 Oct 2020 18:27:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4S-00072Q-Lc
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:48 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ddcc62eb-6ded-4054-be0f-ede0730b6969;
 Wed, 07 Oct 2020 18:20:19 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk2-0007CF-1d; Wed, 07 Oct 2020 19:00:42 +0100
X-Inumbo-ID: ddcc62eb-6ded-4054-be0f-ede0730b6969
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 34/82] plan_search: Track last sharing state to determine $share_reuse
Date: Wed,  7 Oct 2020 18:59:36 +0100
Message-Id: <20201007180024.7932-35-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

What matters for the purpose of $share_reuse is not whether the host
is actually being _shared_ (ie, there are other concurrent allocations
and therefore a concurrent Event with Share information).  What we
really want to know is whether the *last* use of this host was a
suitable sharing setup - ie, whether we will be able to skip our
setup.

So track that explicitly.  (The slightly odd structure, where there
are two loops in one, means that we reset $last_eshare when we go onto
the next $req ie the next host to check.)
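
A Python sketch of the tracking (illustrative event layout, not the
real plan data structure): walking a host's periods in time order, we
remember the sharing state left by the most recent one; an Unshare
event records what the share *was* in a PreviousShare field.

```python
def last_share_for_host(events):
    # Remember the sharing state left by the most recent period.
    last_eshare = None
    for evt in events:
        if evt['Type'] == 'Unshare':
            last_eshare = evt.get('PreviousShare')
        else:
            last_eshare = evt.get('Share')
    return last_eshare

def share_reuse_possible(events, share_compat_ok):
    # Reuse is possible only if the last use left a compatible share.
    last = last_share_for_host(events)
    return last is not None and share_compat_ok(last)
```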

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 4083ae6b..5f59c44e 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -676,6 +676,7 @@ sub plan_search ($$$$) {
     my $try_time= 0;
     my $confirmedok= 0;
     my $share_reuse= 0;
+    my $last_eshare;
 
     for (;;) {
 	my $req= $requestlist->[$reqix];
@@ -715,6 +716,10 @@ sub plan_search ($$$$) {
             # this period is entirely after the proposed slot;
             # so no need to check this or any later periods
 
+	    $last_eshare = $startevt->{
+                $startevt->{Type} eq 'Unshare' ? 'PreviousShare' : 'Share'
+				      };
+
 	    next PERIOD if $endevt->{Time} <= $try_time;
             # this period is entirely before the proposed slot;
             # it doesn't overlap, but must check subsequent periods
@@ -743,9 +748,10 @@ sub plan_search ($$$$) {
 	    " try=$try_time confirmed=$confirmedok reuse=$share_reuse");
 
 	$confirmedok++;
-	$share_reuse++ if defined $share_wear;
+	$share_reuse++ if $last_eshare and $share_compat_ok->($last_eshare);
 	$reqix++;
 	$reqix %= @$requestlist;
+	undef $last_eshare;
 	last if $confirmedok==@$requestlist;
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:27:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:27:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3710.11077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAP-00042s-Of; Wed, 07 Oct 2020 18:27:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3710.11077; Wed, 07 Oct 2020 18:27:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAO-00041g-Ns; Wed, 07 Oct 2020 18:27:56 +0000
Received: by outflank-mailman (input) for mailman id 3710;
 Wed, 07 Oct 2020 18:27:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5f-00072Q-Ol
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:03 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d13658e-fba9-4421-9b68-f8e723a3e629;
 Wed, 07 Oct 2020 18:21:16 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjz-0007CF-GO; Wed, 07 Oct 2020 19:00:39 +0100
X-Inumbo-ID: 2d13658e-fba9-4421-9b68-f8e723a3e629
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 25/82] host allocation: *_shared_mark_ready: Make $sharetype check optional
Date: Wed,  7 Oct 2020 18:59:27 +0100
Message-Id: <20201007180024.7932-26-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We are going to want to be able to set shares to other than ready,
without double-checking the sharetype.

The change to the UPDATE statement makes no difference because
resource_check_allocated_core has just got that sharetype out of the
db.  (This does remove one safety check against bugs, sadly.)

No functional change for existing callers.
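
The pattern this patch introduces can be sketched in Python (purely illustrative, not osstest code; the dict-backed `state_db` and all names here are invented for the example): the sharetype comparison becomes optional, but an undefined sharetype must never be combined with marking the share `ready`, and the UPDATE now keys on restype/resname only.

```python
def mark_state(state_db, restype, resname, oldstate, newstate="ready", sharetype=None):
    """Sketch of the *_shared_mark_ready logic with an optional sharetype check."""
    # An absent sharetype is only allowed for states other than 'ready'.
    if sharetype is None and newstate == "ready":
        raise AssertionError("sharetype required when marking ready")
    row = state_db[(restype, resname)]
    # The type check is skipped entirely when no sharetype was supplied.
    if sharetype is not None and row["type"] != sharetype:
        raise RuntimeError(f"shared {row['type']} not {sharetype}")
    if row["state"] != oldstate:
        raise RuntimeError(f"shared state {row['state']} not {oldstate}")
    # Corresponds to: UPDATE resource_sharing SET state=?
    #                 WHERE restype=? AND resname=?   (no sharetype clause)
    row["state"] = newstate
    return row
```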

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index f0038f6b..fd975590 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1059,21 +1059,25 @@ sub executive_resource_shared_mark_ready ($$$;$$) {
     $newstate //= 'ready';
 
     my $what= "resource $restype $resname";
-    $sharetype .= ' '.get_harness_rev();
+    if (defined $sharetype) {
+	$sharetype .= ' '.get_harness_rev();
+    } else {
+	die if $newstate eq 'ready';
+    }
 
     db_retry($dbh_tests, [qw(resources)], sub {
         $oldshr= resource_check_allocated_core($restype, $resname);
         if (defined $oldshr) {
             die "$what shared $oldshr->{Type} not $sharetype"
-                unless $oldshr->{Type} eq $sharetype;
+                unless !defined $sharetype or $oldshr->{Type} eq $sharetype;
             die "$what shared state $oldshr->{State} not $oldstate"
                 unless $oldshr->{State} eq $oldstate;
-            my $nrows= $dbh_tests->do(<<END,{}, $newstate, $restype,$resname,$sharetype);
+            my $nrows= $dbh_tests->do(<<END,{}, $newstate, $restype,$resname);
                 UPDATE resource_sharing
                    SET state=?
-                 WHERE restype=? AND resname=? AND sharetype=?
+                 WHERE restype=? AND resname=?
 END
-            die "unexpected not updated state $what $sharetype $nrows"
+            die "unexpected not updated state $what $nrows"
                 unless $nrows==1;
 
             $dbh_tests->do(<<END,{}, $oldshr->{ResType}, $resname);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3711.11088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAS-00047b-1G; Wed, 07 Oct 2020 18:28:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3711.11088; Wed, 07 Oct 2020 18:27:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAQ-00046c-EO; Wed, 07 Oct 2020 18:27:58 +0000
Received: by outflank-mailman (input) for mailman id 3711;
 Wed, 07 Oct 2020 18:27:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE2l-00072Q-I2
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:03 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7613705-b72b-4847-aaa2-3b430a1b5cf5;
 Wed, 07 Oct 2020 18:19:29 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk3-0007CF-JV; Wed, 07 Oct 2020 19:00:43 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE2l-00072Q-I2
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:03 +0000
X-Inumbo-ID: a7613705-b72b-4847-aaa2-3b430a1b5cf5
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a7613705-b72b-4847-aaa2-3b430a1b5cf5;
	Wed, 07 Oct 2020 18:19:29 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk3-0007CF-JV; Wed, 07 Oct 2020 19:00:43 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 40/82] ts-host-reuse: Do not depend on bios
Date: Wed,  7 Oct 2020 18:59:42 +0100
Message-Id: <20201007180024.7932-41-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Weirdly, this is only used for guests.  Really, it should be a
target_var, not a raw runvar applying to all guests, since it can be
guest-specific.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-host-reuse | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-host-reuse b/ts-host-reuse
index 74ac94aa..67de480f 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -71,7 +71,7 @@ sub compute_test_sharetype () {
     sharetype_add('di', $ho->{DiVersion});
 
     foreach my $runvar (qw(freebsd_distpath freebsdbuildjob
-			   bios xenable_xsm toolstack kernkind)) {
+			   xenable_xsm toolstack kernkind)) {
 	my $val = $r{$runvar};
 	die "$runvar $val ?" if defined $val && $val =~ m{[,/\%\\]};
 	sharetype_add($runvar, $val);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3712.11103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAV-0004Hn-8G; Wed, 07 Oct 2020 18:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3712.11103; Wed, 07 Oct 2020 18:28:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAT-0004FP-T6; Wed, 07 Oct 2020 18:28:01 +0000
Received: by outflank-mailman (input) for mailman id 3712;
 Wed, 07 Oct 2020 18:27:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6x-00072Q-S3
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:23 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8f749b0-926d-4602-b095-e1080888c615;
 Wed, 07 Oct 2020 18:21:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk8-0007CF-6P; Wed, 07 Oct 2020 19:00:48 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE6x-00072Q-S3
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:23 +0000
X-Inumbo-ID: f8f749b0-926d-4602-b095-e1080888c615
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f8f749b0-926d-4602-b095-e1080888c615;
	Wed, 07 Oct 2020 18:21:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk8-0007CF-6P; Wed, 07 Oct 2020 19:00:48 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 60/82] runvar access: Use runvar_glob for dmrestrict runvar search
Date: Wed,  7 Oct 2020 19:00:02 +0100
Message-Id: <20201007180024.7932-61-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 3fa26e45..ae3c1d33 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -1032,7 +1032,7 @@ END
     # security.d.o CDN seems unreliable right now
     # and jessie-updates is no more
 
-    if (grep { m/_dmrestrict$/ && $r{$_} } keys %r and
+    if (grep { $r{$_} } runvar_glob('*_dmrestrict') and
 	$suite =~ m/stretch/) {
 	preseed_backports_packages($ho, $sfx, \%xopts, $suite,
 				   qw(chiark-scripts));
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3713.11112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAX-0004Nc-3h; Wed, 07 Oct 2020 18:28:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3713.11112; Wed, 07 Oct 2020 18:28:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAW-0004Ls-0B; Wed, 07 Oct 2020 18:28:04 +0000
Received: by outflank-mailman (input) for mailman id 3713;
 Wed, 07 Oct 2020 18:27:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3P-00072Q-JN
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:43 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2aa5d4f6-6676-4d7e-804c-ee3b484a641d;
 Wed, 07 Oct 2020 18:19:47 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkC-0007CF-7M; Wed, 07 Oct 2020 19:00:52 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE3P-00072Q-JN
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:43 +0000
X-Inumbo-ID: 2aa5d4f6-6676-4d7e-804c-ee3b484a641d
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2aa5d4f6-6676-4d7e-804c-ee3b484a641d;
	Wed, 07 Oct 2020 18:19:47 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkC-0007CF-7M; Wed, 07 Oct 2020 19:00:52 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 77/82] sg-report-flight: Improvements to other job (share/reuse) reporting
Date: Wed,  7 Oct 2020 19:00:19 +0100
Message-Id: <20201007180024.7932-78-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

* Prefer to show "prep" (purple) rather than "share".
* Show our own relationship, in particular whether it was prep.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 7dc218cf..2a79db13 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1473,12 +1473,12 @@ END
 	  "$c{ResultsHtmlPubBaseUrl}/host/$srow->{hostname}.html",
 	  $srow->{hostname};
 	my $rel =
-	  $srow->{olive} ?
-	  "<td align=\"center\" bgcolor=\"$red\">share</td>"
-	  :
 	  $srow->{prep_started} ?
 	  "<td align=\"center\" bgcolor=\"$purple\">prep.</td>"
 	  :
+	  $srow->{olive} ?
+	  "<td align=\"center\" bgcolor=\"$red\">share</td>"
+	  :
 	  "<td align=\"center\">reuse</td>";
         if (defined $srow->{flight}) {
 	    my $furl = "$c{ReportHtmlPubBaseUrl}/$srow->{flight}/";
@@ -1499,8 +1499,10 @@ END
 END
 		  $jurl, $srow->{job};
 	    } else {
-		printf H <<END;
-<td></td>
+		confess unless $rel =~ m{([0-9a-z. ]+)\</td\>$};
+		$rel = '<td></td>' if $1 eq 'reuse';
+		printf H <<END, $rel;
+%s
 <td align="center">this</td>
 <td align="center">this</td>
 END
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3714.11123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAa-0004VD-1i; Wed, 07 Oct 2020 18:28:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3714.11123; Wed, 07 Oct 2020 18:28:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAY-0004TL-Cq; Wed, 07 Oct 2020 18:28:06 +0000
Received: by outflank-mailman (input) for mailman id 3714;
 Wed, 07 Oct 2020 18:28:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3K-00072Q-JE
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:38 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 445c0d87-e272-473b-9656-7b1803bf3f30;
 Wed, 07 Oct 2020 18:19:45 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkD-0007CF-76; Wed, 07 Oct 2020 19:00:53 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE3K-00072Q-JE
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:38 +0000
X-Inumbo-ID: 445c0d87-e272-473b-9656-7b1803bf3f30
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 445c0d87-e272-473b-9656-7b1803bf3f30;
	Wed, 07 Oct 2020 18:19:45 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkD-0007CF-76; Wed, 07 Oct 2020 19:00:53 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 81/82] sg-report-flight: Sharing reports: more task finished info
Date: Wed,  7 Oct 2020 19:00:23 +0100
Message-Id: <20201007180024.7932-82-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Other steps from jobs affecting this host either started after we
started running, and therefore didn't affect the things we're
reporting, or are already in the db.  Furthermore, any such effects
for steps which have finished must have completed by the max finished
time.  But if there are unfinished steps, we don't know the finish
time.
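
The `every()` aggregate the patch adds is PostgreSQL's boolean AND aggregate, behaving like Python's `all()` over `finished IS NOT NULL`. A minimal sketch (invented step rows, not osstest's real schema) of the `finished` / `all_finished` pair:

```python
# Hypothetical step rows: 'finished' is a timestamp, or None if still running.
steps = [
    {"job": "build", "finished": 100},
    {"job": "build", "finished": 120},
    {"job": "test",  "finished": 90},
    {"job": "test",  "finished": None},  # unfinished step
]

def job_summary(steps, job):
    """Mimic the subqueries: max(finished) and every(finished IS NOT NULL)."""
    fin = [s["finished"] for s in steps if s["job"] == job]
    return {
        "finished": max((f for f in fin if f is not None), default=None),
        "all_finished": all(f is not None for f in fin),
    }
```

So a job with any unfinished step reports `all_finished` false, and the report suppresses the (meaningless) max finished time for it.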

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 8f99bb69..33f953ca 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1355,7 +1355,8 @@ END
     # table rows to the overall union (sum type) rows.
     my $nullcols_main = nullcols(qw(
         flight::integer job status oidents
-        started::integer rest_started::integer finished::integer
+        started::integer rest_started::integer
+        finished::integer all_finished::boolean
     ));
     my $nullcols_tasks = nullcols(qw(
         taskid::integer type refkey username comment
@@ -1382,7 +1383,11 @@ END
 	      (SELECT max(finished)
 		 FROM steps s
 		WHERE s.flight = q.flight
-		  AND s.job    = q.job)          AS finished
+		  AND s.job    = q.job)          AS finished,
+	      (SELECT every(finished IS NOT NULL)
+		 FROM steps s
+		WHERE s.flight = q.flight
+		  AND s.job    = q.job)          AS all_finished
 	FROM Q
         ORDER BY q.tident),
 
@@ -1401,6 +1406,7 @@ END
 	      min(prep_started)                  AS prep_started,
 	      min(rest_started)                  AS rest_started,
 	      max(finished)                      AS finished,
+	      every(all_finished)                AS all_finished,
 	      $nullcols_tasks,
 	      $nullcols_elided,
               NULL::integer                      AS sort_index
@@ -1466,7 +1472,7 @@ END
 <th>role(s)<br>(there)</td>
 <th>install / prep.<br>started</td>
 <th>use</br>started</td>
-<th>last step<br>ended</td>
+<th>last relevant step<br>ended</td>
 <th>job<br>status</td>
 </tr>
 END
@@ -1522,7 +1528,7 @@ END
 	      map { $_ ? show_abs_time($_) : '' }
 	      $srow->{prep_started},
 	      $srow->{rest_started},
-	      !$srow->{olive} && $srow->{finished};
+	      (!$srow->{olive} || $srow->{all_finished}) && $srow->{finished};
 	    my $info = report_run_getinfo($srow);
 	    print H <<END, 
 <td $info->{ColourAttr}>$info->{Content}</td>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3722.11146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAg-0004qV-T8; Wed, 07 Oct 2020 18:28:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3722.11146; Wed, 07 Oct 2020 18:28:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAg-0004pS-AO; Wed, 07 Oct 2020 18:28:14 +0000
Received: by outflank-mailman (input) for mailman id 3722;
 Wed, 07 Oct 2020 18:28:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6d-00072Q-R8
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:03 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c9cc43c-0db9-422e-b8a7-5ebe7389db32;
 Wed, 07 Oct 2020 18:21:49 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk4-0007CF-J8; Wed, 07 Oct 2020 19:00:44 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE6d-00072Q-R8
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:24:03 +0000
X-Inumbo-ID: 7c9cc43c-0db9-422e-b8a7-5ebe7389db32
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7c9cc43c-0db9-422e-b8a7-5ebe7389db32;
	Wed, 07 Oct 2020 18:21:49 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk4-0007CF-J8; Wed, 07 Oct 2020 19:00:44 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 44/82] sg-run-job: New @ iffail tag for prep tasks
Date: Wed,  7 Oct 2020 18:59:46 +0100
Message-Id: <20201007180024.7932-45-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Currently there are no user sites, so no functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 sg-run-job | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index 3f44cae7..a074cd42 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -223,12 +223,15 @@ proc recipe-flag {flagname {def 0}} {
 #
 #  IFFAIL can be
 #
-#           [-][!].
-#           [-][!]STATUS
+#           [@][-][!].
+#           [@][-][!]STATUS
 #
 #       where STATUS is the job/step status to be used if the step
 #       status is not as expected, and the special meanings are:
 #
+#           @      Only for per-host-ts: prefix each ident with @
+#                  to run this script only if the host is `prep'
+#                  and not if it is `ready'.
+#           -      for run-ts; suppresses exception on failure.
 #                  for per-host-ts; suppresses consequences of failure.
 #           !      Run this even if the job is being truncated.
@@ -452,8 +455,11 @@ proc reap-ts {reap {wantstatus pass}} {
 
 proc per-host-ts {iffail ident script args} {
     global ok truncate need_xen_hosts flight jobinfo
+
+    set isprep [lindex {{} @} [regsub {^\@} $iffail {} iffail]]
  
-    if {![iffail-check $iffail {$ok && !$truncate} iffail_status]} return
+    if {![iffail-check $iffail {$ok && !$truncate} iffail_status]} \
+	return
 
     set awaitl {}
     foreach host $need_xen_hosts {
@@ -461,7 +467,7 @@ proc per-host-ts {iffail ident script args} {
 	if {[string compare $host host]} {
 	    lappend hostargs +! $host
 	}
-	lappend hostargs + $host
+	lappend hostargs + $isprep$host
         lappend awaitl [eval spawn-ts $iffail $ident $script $hostargs $args]
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3721.11142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAf-0004n2-Hz; Wed, 07 Oct 2020 18:28:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3721.11142; Wed, 07 Oct 2020 18:28:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAf-0004m6-5x; Wed, 07 Oct 2020 18:28:13 +0000
Received: by outflank-mailman (input) for mailman id 3721;
 Wed, 07 Oct 2020 18:28:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5Q-00072Q-O9
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:48 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27e5ae2d-afc6-47d4-96cc-d1a866bf576b;
 Wed, 07 Oct 2020 18:21:10 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk4-0007CF-PI; Wed, 07 Oct 2020 19:00:44 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE5Q-00072Q-O9
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:48 +0000
X-Inumbo-ID: 27e5ae2d-afc6-47d4-96cc-d1a866bf576b
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 27e5ae2d-afc6-47d4-96cc-d1a866bf576b;
	Wed, 07 Oct 2020 18:21:10 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk4-0007CF-PI; Wed, 07 Oct 2020 19:00:44 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 45/82] sg-run-job: Detect improper use of @ iffail with run-ts
Date: Wed,  7 Oct 2020 18:59:47 +0100
Message-Id: <20201007180024.7932-46-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Only per-host-ts understands this.  This is a bit of a bear trap, so
arrange to bail rather than putting strange step status values with
`@' at the front in the database...
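
The flag handling being guarded can be sketched in Python (illustrative only; the real code is the Tcl `regsub`/`regexp` pair in sg-run-job, and `parse_iffail` is an invented name): strip a leading `@` only in the per-host case, and fail loudly anywhere else so the bogus status never reaches the database.

```python
def parse_iffail(iffail, per_host=False):
    """Split a leading '@' prep flag off an IFFAIL spec.

    Returns (is_prep, remaining_iffail).  Only per-host steps may carry
    the '@' flag; any other caller gets an error instead of a silently
    mangled status like '@fail'.
    """
    is_prep = iffail.startswith("@")
    if is_prep:
        if not per_host:
            raise ValueError("internal error - @ only valid with per-host-ts")
        iffail = iffail[1:]
    return is_prep, iffail
```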

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-run-job | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/sg-run-job b/sg-run-job
index a074cd42..067b28db 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -317,6 +317,9 @@ proc iffail-check {iffail okexpr iffail_status_var} {
     if {![regsub {^!} $iffail {} iffail_status]} {
 	if {![uplevel 1 [list expr $okexpr]]} { return 0 }
     }
+    if {[regexp {^@} $iffail]} {
+	error "internal error - @ only valid for iffail with per-host-ts"
+    }
     return 1
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3723.11161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAk-0004wv-EJ; Wed, 07 Oct 2020 18:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3723.11161; Wed, 07 Oct 2020 18:28:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAi-0004vb-HN; Wed, 07 Oct 2020 18:28:16 +0000
Received: by outflank-mailman (input) for mailman id 3723;
 Wed, 07 Oct 2020 18:28:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE51-00072Q-NB
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:23 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 646be300-5e0c-4374-a999-7afb671732f2;
 Wed, 07 Oct 2020 18:20:41 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjx-0007CF-67; Wed, 07 Oct 2020 19:00:37 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE51-00072Q-NB
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:23 +0000
X-Inumbo-ID: 646be300-5e0c-4374-a999-7afb671732f2
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 646be300-5e0c-4374-a999-7afb671732f2;
	Wed, 07 Oct 2020 18:20:41 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjx-0007CF-67; Wed, 07 Oct 2020 19:00:37 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wei.liu@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Olivier Lambert <olivier.lambert@vates.fr>
Subject: [OSSTEST PATCH 16/82] abolish "kernkind"; desupport non-pvops kernels
Date: Wed,  7 Oct 2020 18:59:18 +0100
Message-Id: <20201007180024.7932-17-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This was for distinguishing the old-style Xenolinux kernels from pvops
kernels.

We have not actually tested any non-pvops kernels for a very long
time.  Delete this now because the runvar is slightly in the way of
test host reuse.

(Sorry for the wide CC but it seems better to make sure anyone who
might object can do so.)

All this machinery exists just to configure the guest console
device (Xenolinux used "xvc" rather than "hvc") and the guest root
block device (Xenolinux stole "hda"/"sda" rather than using "xvda").

Specifically, in this commit:
 * In what is now target_setup_rootdev_console_inittab, do not
   look at any kernkind runvar and simply do what we would if
   it were "pvops" or unset, as it is in all current jobs.
 * Remove the runvar from all jobs creation and example runes.
   (This has no functional change even for jobs running with
   the previous osstest code because we have defaulted to "pvops"
   for a very long time.)

We retain the setting of the shell variable "kernbuild", because that
ends up in build jobs' names.  All our kernel build jobs now end in
-pvops and I intend to retain that name component since abolishing it
is nontrivial.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Juergen Gross <jgross@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wei.liu@kernel.org>
CC: Paul Durrant <paul@xen.org>
CC: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
CC: Andrew Cooper <Andrew.Cooper3@citrix.com>
CC: Olivier Lambert <olivier.lambert@vates.fr>
---
 Osstest/TestSupport.pm | 9 ++-------
 README                 | 2 +-
 make-hosts-flight      | 1 -
 mfi-common             | 7 ++-----
 4 files changed, 5 insertions(+), 14 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index d9bb2585..99c7654d 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -2569,14 +2569,9 @@ sub target_setup_rootdev_console_inittab ($$$) {
     my ($ho, $gho, $root) = @_;
 
     my $pfx= target_var_prefix($gho);
-    my $kernkind= $r{$pfx."kernkind"} // 'pvops';
     my $isguest= exists $gho->{Guest};
-    if ($kernkind eq 'pvops') {
-        store_runvar($pfx."rootdev", 'xvda') if $isguest;
-        store_runvar($pfx."console", 'hvc0');
-    } elsif ($kernkind !~ m/2618/) {
-        store_runvar($pfx."console", 'xvc0') if $isguest;
-    }
+    store_runvar($pfx."rootdev", 'xvda') if $isguest;
+    store_runvar($pfx."console", 'hvc0');
 
     my $inittabpath= "$root/etc/inittab";
     my $console= target_var($gho,'console');
diff --git a/README b/README
index 2804ecf3..ba4bea1d 100644
--- a/README
+++ b/README
@@ -861,7 +861,7 @@ echo $flight
 job=play-amd64-amd64-xen-boot
 ./cs-job-create $flight $job play-xen-boot-x5 \
     all_hostflags=arch-amd64,arch-xen-amd64,suite-wheezy,purpose-test \
-    arch=amd64 toolstack=xl enable_xsm=false kernkind=pvops \
+    arch=amd64 toolstack=xl enable_xsm=false \
     host=$host
 
 # Reuse the binaries from the Xen template job for both the hypervisor
diff --git a/make-hosts-flight b/make-hosts-flight
index e2c3776a..63ac7b71 100755
--- a/make-hosts-flight
+++ b/make-hosts-flight
@@ -73,7 +73,6 @@ hosts_iterate () {
         local freebsd_runvars
         set_freebsd_runvars true
         runvars+=" 
-                   kernkind=pvops
                    all_host_di_version=$di_version
                    all_host_suite=$suite
                    $freebsd_runvars
diff --git a/mfi-common b/mfi-common
index e577449f..34b0c116 100644
--- a/mfi-common
+++ b/mfi-common
@@ -619,9 +619,8 @@ test_matrix_iterate () {
       case $kern in
       '')
                   kernbuild=pvops
-                  kernkind=pvops
                   ;;
-      *)          echo >&2 "kernkind ?  $kern"; exit 1 ;;
+      *)          echo >&2 "kernbuild ?  $kern"; exit 1 ;;
       esac
 
       for dom0arch in i386 amd64 armhf arm64; do
@@ -639,8 +638,7 @@ test_matrix_iterate () {
             arch_runvars=\"\$ARCH_RUNVARS_$dom0arch\"
         "
 
-        debian_runvars="debian_kernkind=$kernkind \
-                        debian_arch=$dom0arch \
+        debian_runvars="debian_arch=$dom0arch \
                         debian_suite=$guestsuite \
                         "
 
@@ -659,7 +657,6 @@ test_matrix_iterate () {
         most_runvars="
                   arch=$dom0arch                                  \
                   kernbuildjob=${bfi}build-$dom0arch-$kernbuild   \
-                  kernkind=$kernkind                              \
                   $arch_runvars $hostos_runvars
                   "
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3725.11170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAn-000544-19; Wed, 07 Oct 2020 18:28:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3725.11170; Wed, 07 Oct 2020 18:28:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAl-00051z-9S; Wed, 07 Oct 2020 18:28:19 +0000
Received: by outflank-mailman (input) for mailman id 3725;
 Wed, 07 Oct 2020 18:28:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5a-00072Q-Oa
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:58 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0cc44eaa-1422-4344-b38e-de78d4195244;
 Wed, 07 Oct 2020 18:21:14 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkA-0007CF-Jv; Wed, 07 Oct 2020 19:00:50 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE5a-00072Q-Oa
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:58 +0000
X-Inumbo-ID: 0cc44eaa-1422-4344-b38e-de78d4195244
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0cc44eaa-1422-4344-b38e-de78d4195244;
	Wed, 07 Oct 2020 18:21:14 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkA-0007CF-Jv; Wed, 07 Oct 2020 19:00:50 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 70/82] sg-report-flight: Refactor runvar access
Date: Wed,  7 Oct 2020 19:00:12 +0100
Message-Id: <20201007180024.7932-71-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Collect the runvars query into local perl variables.  This will allow
us to reuse the information without going back to the db.

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 07834581..281361c0 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1142,6 +1142,16 @@ END
     my $ji = $htmlout_jobq->fetchrow_hashref();
     die unless $ji;
 
+    my $varsq= db_prepare(<<END);
+        SELECT name, val, synth FROM runvars
+                WHERE flight=? AND job=?
+             ORDER BY synth, name
+END
+    $varsq->execute($fi->{Flight}, $job);
+    my $runvar_table = $varsq->fetchall_arrayref({});
+    my %runvar_map;
+    $runvar_map{$_->{name}} = $_ foreach @$runvar_table;
+
     print H <<END;
 <html><head><title>$title</title><head>
 <body>
@@ -1244,14 +1254,8 @@ END
 <h2>Test control variables</h2>
 <table rules=all><tr><th>Name</th><th>Value</th><th>Source</th></tr>
 END
-    my $varsq= db_prepare(<<END);
-        SELECT name, val, synth FROM runvars
-                WHERE flight=? AND job=?
-             ORDER BY synth, name
-END
-    $varsq->execute($fi->{Flight}, $job);
-    while (my $varrow= $varsq->fetchrow_arrayref()) {
-        my ($vn,$vv,$synth) = (@$varrow);
+    foreach my $varrow (@$runvar_table) {
+        my ($vn,$vv,$synth) = (@$varrow{qw(name val synth)});
         print H "<tr><th>".encode_entities($vn)."</th>";
         print H "<td>";
         my $url;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3728.11183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAp-0005BN-E6; Wed, 07 Oct 2020 18:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3728.11183; Wed, 07 Oct 2020 18:28:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAn-00059i-V7; Wed, 07 Oct 2020 18:28:21 +0000
Received: by outflank-mailman (input) for mailman id 3728;
 Wed, 07 Oct 2020 18:28:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3U-00072Q-JQ
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:48 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63ddb3e7-36d5-4845-aacd-4b3ceeeb0b9b;
 Wed, 07 Oct 2020 18:19:48 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkB-0007CF-VH; Wed, 07 Oct 2020 19:00:52 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE3U-00072Q-JQ
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:20:48 +0000
X-Inumbo-ID: 63ddb3e7-36d5-4845-aacd-4b3ceeeb0b9b
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 63ddb3e7-36d5-4845-aacd-4b3ceeeb0b9b;
	Wed, 07 Oct 2020 18:19:48 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDkB-0007CF-VH; Wed, 07 Oct 2020 19:00:52 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 76/82] sg-report-flight: Reformat slightly
Date: Wed,  7 Oct 2020 19:00:18 +0100
Message-Id: <20201007180024.7932-77-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is more regular and will make the next commit easier to
understand.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 0413a730..7dc218cf 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1472,9 +1472,11 @@ END
 	  $srow->{tident},
 	  "$c{ResultsHtmlPubBaseUrl}/host/$srow->{hostname}.html",
 	  $srow->{hostname};
-	my $rel = $srow->{olive} ?
+	my $rel =
+	  $srow->{olive} ?
 	  "<td align=\"center\" bgcolor=\"$red\">share</td>"
-	  : $srow->{prep_started} ?
+	  :
+	  $srow->{prep_started} ?
 	  "<td align=\"center\" bgcolor=\"$purple\">prep.</td>"
 	  :
 	  "<td align=\"center\">reuse</td>";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3731.11194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAr-0005IA-F4; Wed, 07 Oct 2020 18:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3731.11194; Wed, 07 Oct 2020 18:28:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAq-0005Gx-MI; Wed, 07 Oct 2020 18:28:24 +0000
Received: by outflank-mailman (input) for mailman id 3731;
 Wed, 07 Oct 2020 18:28:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5p-00072Q-P4
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:13 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c4e2ceb-4d26-473f-b4de-23e72f53f8e1;
 Wed, 07 Oct 2020 18:21:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk2-0007CF-QW; Wed, 07 Oct 2020 19:00:42 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE5p-00072Q-P4
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:13 +0000
X-Inumbo-ID: 6c4e2ceb-4d26-473f-b4de-23e72f53f8e1
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6c4e2ceb-4d26-473f-b4de-23e72f53f8e1;
	Wed, 07 Oct 2020 18:21:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk2-0007CF-QW; Wed, 07 Oct 2020 19:00:42 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 37/82] ts-hosts-allocate-Executive: Better message for hosts abandoned mid-test
Date: Wed,  7 Oct 2020 18:59:39 +0100
Message-Id: <20201007180024.7932-38-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 ts-hosts-allocate-Executive | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index 6fcfd2e3..58e9f891 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -1062,7 +1062,7 @@ sub actual_allocation ($) {
 	    }
 	    # someone was preparing it but they aren't any more
 	    push @allocwarnings,
-		"previous preparation apparently abandoned";
+		"previous use apparently abandoned ($shared->{state})";
 	    $allocatable= 1;
         }
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3739.11214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAu-0005Rm-Ps; Wed, 07 Oct 2020 18:28:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3739.11214; Wed, 07 Oct 2020 18:28:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAu-0005R2-2x; Wed, 07 Oct 2020 18:28:28 +0000
Received: by outflank-mailman (input) for mailman id 3739;
 Wed, 07 Oct 2020 18:28:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE3y-00072Q-KE
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:18 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 086ee261-506d-4c6b-b4bd-b2f9e49e314b;
 Wed, 07 Oct 2020 18:20:03 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk5-0007CF-LX; Wed, 07 Oct 2020 19:00:45 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE3y-00072Q-KE
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:18 +0000
X-Inumbo-ID: 086ee261-506d-4c6b-b4bd-b2f9e49e314b
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 086ee261-506d-4c6b-b4bd-b2f9e49e314b;
	Wed, 07 Oct 2020 18:20:03 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk5-0007CF-LX; Wed, 07 Oct 2020 19:00:45 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 49/82] host reuse: sg-run-job: per-host prep: Use @ for per-host-ts
Date: Wed,  7 Oct 2020 18:59:51 +0100
Message-Id: <20201007180024.7932-50-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

These are the steps that will be skipped when we reuse a test host.

No functional change yet, since we do not yet allocate the host shared.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-run-job | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index 0b2e20e7..af43008d 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -25,9 +25,9 @@ readconfig
 source-method JobDB
 
 proc per-host-prep {} {
-    per-host-ts .       host-ping-check-native/@ ts-host-ping-check
-    per-host-ts .       xen-install/@     ts-xen-install
-    per-host-ts .       xen-boot/@        ts-host-reboot
+    per-host-ts @.      host-ping-check-native/@ ts-host-ping-check
+    per-host-ts @.      xen-install/@     ts-xen-install
+    per-host-ts @.      xen-boot/@        ts-host-reboot
 
     per-host-ts .       host-ping-check-xen/@ ts-host-ping-check
     per-host-ts .       =(*)            { ts-leak-check basis }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3742.11227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAx-0005Zg-AS; Wed, 07 Oct 2020 18:28:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3742.11227; Wed, 07 Oct 2020 18:28:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAw-0005ZJ-U6; Wed, 07 Oct 2020 18:28:30 +0000
Received: by outflank-mailman (input) for mailman id 3742;
 Wed, 07 Oct 2020 18:28:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4N-00072Q-LN
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:43 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac7250ff-2574-481e-98fa-ef231cd33e98;
 Wed, 07 Oct 2020 18:20:17 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjv-0007CF-SV; Wed, 07 Oct 2020 19:00:35 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE4N-00072Q-LN
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:43 +0000
X-Inumbo-ID: ac7250ff-2574-481e-98fa-ef231cd33e98
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ac7250ff-2574-481e-98fa-ef231cd33e98;
	Wed, 07 Oct 2020 18:20:17 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDjv-0007CF-SV; Wed, 07 Oct 2020 19:00:35 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 15/82] target setup refactoring: Add a doc comment
Date: Wed,  7 Oct 2020 18:59:17 +0100
Message-Id: <20201007180024.7932-16-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index fd7b238b..d9bb2585 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -2563,6 +2563,9 @@ sub target_var ($$) {
 }
 
 sub target_setup_rootdev_console_inittab ($$$) {
+    # Operates on $gho.
+    # $gho's filesystem is accessed via $ho and $mountpoint;
+    # so maybe $gho is $ho and $mountpoint is "/".
     my ($ho, $gho, $root) = @_;
 
     my $pfx= target_var_prefix($gho);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3743.11238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEB0-0005iS-LC; Wed, 07 Oct 2020 18:28:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3743.11238; Wed, 07 Oct 2020 18:28:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEAz-0005hm-Qh; Wed, 07 Oct 2020 18:28:33 +0000
Received: by outflank-mailman (input) for mailman id 3743;
 Wed, 07 Oct 2020 18:28:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4I-00072Q-LE
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:38 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8753e621-b9c0-488d-8e4f-27cb8b4ac0a2;
 Wed, 07 Oct 2020 18:20:14 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDkA-0007CF-QM; Wed, 07 Oct 2020 19:00:50 +0100
X-Inumbo-ID: 8753e621-b9c0-488d-8e4f-27cb8b4ac0a2
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 71/82] resource reporting, nfc: split a here document
Date: Wed,  7 Oct 2020 19:00:13 +0100
Message-Id: <20201007180024.7932-72-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/sg-report-flight b/sg-report-flight
index 281361c0..a1f424c5 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1213,6 +1213,9 @@ END
 <tr><td>Status:</td><td>$ji->{status}</td></tr>
 </table>
 <p>
+END
+
+    print H <<END;
 <h2>Logfiles etc.</h2>
 For main test script logfiles, see entries in steps table.
 <p>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3745.11246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEB3-0005oU-1N; Wed, 07 Oct 2020 18:28:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3745.11246; Wed, 07 Oct 2020 18:28:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEB1-0005nB-Qh; Wed, 07 Oct 2020 18:28:35 +0000
Received: by outflank-mailman (input) for mailman id 3745;
 Wed, 07 Oct 2020 18:28:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE6J-00072Q-Qi
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:43 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6619fc8-0a7b-4779-9558-9abc1873d78f;
 Wed, 07 Oct 2020 18:21:37 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk9-0007CF-AP; Wed, 07 Oct 2020 19:00:49 +0100
X-Inumbo-ID: e6619fc8-0a7b-4779-9558-9abc1873d78f
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 64/82] host reuse: Hash the share type
Date: Wed,  7 Oct 2020 19:00:06 +0100
Message-Id: <20201007180024.7932-65-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We don't really want to duplicate (triplicate, actually) lots of the
runvars: that would make the runvars table needlessly bloated.

So hash the values.
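
The scheme can be sketched as follows; this is an illustrative Python
translation (the patch itself is Perl, using Digest::SHA's
sha224_base64, which omits base64 padding), and the function names here
are invented for the sketch:

```python
# Sketch: percent-encode each runvar value, join the "key=value" texts,
# and take a fixed-length digest so the share type stays short no matter
# how many runvars there are.
import base64
import hashlib
import re

def encode_value(val: str) -> str:
    # Escape '%' and anything outside printable ASCII, mirroring the
    # s{[^\!-\~]|\%}{ sprintf "%%%02x", ord $& }ge substitution.
    return re.sub(r"[^!-~]|%", lambda m: "%%%02x" % ord(m.group()), val)

def compute_sharetype(flight: str, runvars: dict) -> str:
    texts = ["%s=%s" % (k, encode_value(v)) for k, v in sorted(runvars.items())]
    digest = base64.b64encode(
        hashlib.sha224(" ".join(texts).encode()).digest()
    ).rstrip(b"=").decode()  # Perl's sha224_base64 also drops the '=' padding
    return "test-%s-%s" % (flight, digest)
```

The share type length is now bounded by the digest size, however many
(and however long) the hashed runvars are.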

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 README        |  2 +-
 ts-host-reuse | 11 +++++++----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/README b/README
index a929010c..1703e076 100644
--- a/README
+++ b/README
@@ -297,7 +297,7 @@ To run osstest in standalone mode:
      curl
      netcat
      chiark-utils-bin
-     libxml-libxml-perl libfile-fnmatch-perl
+     libxml-libxml-perl libfile-fnmatch-perl libdigest-sha-perl
      dctrl-tools
      libnet-snmp-perl (if you are going to use Masterswitch PDUs)
 
diff --git a/ts-host-reuse b/ts-host-reuse
index c852a858..701070b2 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -43,6 +43,7 @@ use DBI;
 BEGIN { unshift @INC, qw(.); }
 use Osstest;
 use POSIX;
+use Digest::SHA qw(sha224_base64);
 use Osstest::TestSupport;
 
 tsreadconfig();
@@ -64,16 +65,18 @@ sub sharetype_add ($$) {
 }
 
 sub compute_test_sharetype () {
-    $sharetype = "test-$flight";
+    my @runvartexts;
     my %done;
     foreach my $key (runvar_glob(@accessible_runvar_pats)) {
 	next if runvar_is_synth($key);
 	my $val = $r{$key};
 	next if $done{$key}++;
-	$val =~ s{[^\"-\~]|\%}{ sprintf "%%%02x", ord $& }ge;
-	$sharetype .= "!$key=$r{$key}";
+	$val =~ s{[^\!-\~]|\%}{ sprintf "%%%02x", ord $& }ge;
+	push @runvartexts, "$key=$r{$key}";
     }
-
+    my $digest = sha224_base64("@runvartexts");
+    $sharetype = "test-$flight-$digest";
+    logm "share type $sharetype; hash is of: @runvartexts";
     return $sharetype;
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3747.11252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEB4-0005tw-LD; Wed, 07 Oct 2020 18:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3747.11252; Wed, 07 Oct 2020 18:28:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEB3-0005rq-Af; Wed, 07 Oct 2020 18:28:37 +0000
Received: by outflank-mailman (input) for mailman id 3747;
 Wed, 07 Oct 2020 18:28:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5G-00072Q-Nq
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:38 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd487d61-c3bd-4c0c-9178-6cbdf434cb81;
 Wed, 07 Oct 2020 18:21:02 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk1-0007CF-Er; Wed, 07 Oct 2020 19:00:41 +0100
X-Inumbo-ID: fd487d61-c3bd-4c0c-9178-6cbdf434cb81
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 32/82] plan_search: Use plan's Wear information rather than tracking it ourselves
Date: Wed,  7 Oct 2020 18:59:34 +0100
Message-Id: <20201007180024.7932-33-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

There is no reason not to use this information from the plan, and
not computing it ourselves avoids some confusing logic here.
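
The resulting check, with the wear taken directly from the plan's share
record, can be sketched like this; a hypothetical Python rendering of
the Perl closure in the diff, using the same field names:

```python
def share_compat_ok(eshare: dict, req: dict) -> bool:
    # Take Wear straight from the plan's share record instead of
    # tracking a running counter across loop iterations.
    if req.get("Shared") is None:
        return False
    if req["Shared"] != eshare["Type"]:
        return False
    if eshare["Wear"] + 1 > req["SharedMaxWear"]:
        return False
    if eshare["Shares"] != req["SharedMaxTasks"]:
        return False
    return True
```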

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 16 ++++------------
 1 file changed, 4 insertions(+), 12 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index dfa3710a..e17b6503 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -675,7 +675,6 @@ sub plan_search ($$$$) {
     my $reqix= 0;
     my $try_time= 0;
     my $confirmedok= 0;
-    my $share_wear;
     my $share_reuse= 0;
 
     for (;;) {
@@ -689,8 +688,7 @@ sub plan_search ($$$$) {
       PERIOD:
 	foreach (my $ix=0; $ix<@$events; $ix++) {
 	    $dbgprint->("PLAN LOOP reqs[$reqix]=$req->{Ident}".
-		" evtix=$ix try=$try_time confirmed=$confirmedok".
-		(defined($share_wear) ? " wear=$share_wear" : ""));
+		" evtix=$ix try=$try_time confirmed=$confirmedok");
 
 	    # check the period from $events[$ix] to next event
 	    my $startevt= $events->[$ix];
@@ -710,12 +708,8 @@ sub plan_search ($$$$) {
 			($req->{SharedMaxTasks}//'<undef>'));
 		return 0 unless defined $req->{Shared};
 		return 0 unless $req->{Shared} eq $eshare->{Type};
-		if (defined $share_wear) {
-		    $share_wear++ if $startevt->{Type} eq 'Start';
-		} else {
-		    $share_wear= $eshare->{Wear}+1;
-		}
-		return 0 if $share_wear > $req->{SharedMaxWear};
+		my $wear= $eshare->{Wear}+1;
+		return 0 if $wear > $req->{SharedMaxWear};
 		return 0 if $eshare->{Shares} != $req->{SharedMaxTasks};
 		$dbgprint->("PLAN LOOP   SHARE-COMPAT-OK Y");
 		return 1;
@@ -742,13 +736,11 @@ sub plan_search ($$$$) {
 	    # nope
 	    $try_time= $endevt->{Time};
 	    $confirmedok= 0;
-	    undef $share_wear;
 	    $share_reuse= 0;
 	    $dbgprint->("PLAN LOOP   OVERLAP BAD $try_time");
 	}
 	$dbgprint->("PLAN NEXT reqs[$reqix]=$req->{Ident}".
-	    " try=$try_time confirmed=$confirmedok reuse=$share_reuse".
-	    (defined($share_wear) ? " wear=$share_wear" : ""));
+	    " try=$try_time confirmed=$confirmedok reuse=$share_reuse");
 
 	$confirmedok++;
 	$share_reuse++ if defined $share_wear;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3754.11275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEB9-00068e-Js; Wed, 07 Oct 2020 18:28:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3754.11275; Wed, 07 Oct 2020 18:28:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEB8-00066z-Ee; Wed, 07 Oct 2020 18:28:42 +0000
Received: by outflank-mailman (input) for mailman id 3754;
 Wed, 07 Oct 2020 18:28:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE4D-00072Q-Kx
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:33 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bee077b-0b92-4225-aeef-89e68b1bd8f5;
 Wed, 07 Oct 2020 18:20:11 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk1-0007CF-ML; Wed, 07 Oct 2020 19:00:41 +0100
X-Inumbo-ID: 1bee077b-0b92-4225-aeef-89e68b1bd8f5
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 33/82] plan search: Move $share_compat_ok further up the file
Date: Wed,  7 Oct 2020 18:59:35 +0100
Message-Id: <20201007180024.7932-34-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We are going to want to use this outside the loop.

No functional change.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index e17b6503..4083ae6b 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -684,6 +684,23 @@ sub plan_search ($$$$) {
 
         $events ||= [ ];
 
+	my $share_compat_ok = sub {
+	    my ($eshare) = @_;
+	    $dbgprint->("PLAN LOOP   SHARE-COMPAT-OK ".
+		"type $eshare->{Type} vs. ".
+		    ($req->{Shared} // '<undef>')." ".
+		"wear $eshare->{Wear} ".
+		"shares $eshare->{Shares} vs. ".
+		    ($req->{SharedMaxTasks}//'<undef>'));
+	    return 0 unless defined $req->{Shared};
+	    return 0 unless $req->{Shared} eq $eshare->{Type};
+	    my $wear= $eshare->{Wear}+1;
+	    return 0 if $wear > $req->{SharedMaxWear};
+	    return 0 if $eshare->{Shares} != $req->{SharedMaxTasks};
+	    $dbgprint->("PLAN LOOP   SHARE-COMPAT-OK Y");
+	    return 1;
+	};
+
 	# can we do $req at $try_time ?  If not, when later can we ?
       PERIOD:
 	foreach (my $ix=0; $ix<@$events; $ix++) {
@@ -698,23 +715,6 @@ sub plan_search ($$$$) {
             # this period is entirely after the proposed slot;
             # so no need to check this or any later periods
 
-	    my $share_compat_ok = sub {
-		my ($eshare) = @_;
-		$dbgprint->("PLAN LOOP   SHARE-COMPAT-OK ".
-		    "type $eshare->{Type} vs. ".
-			($req->{Shared} // '<undef>')." ".
-		    "wear $eshare->{Wear} ".
-		    "shares $eshare->{Shares} vs. ".
-			($req->{SharedMaxTasks}//'<undef>'));
-		return 0 unless defined $req->{Shared};
-		return 0 unless $req->{Shared} eq $eshare->{Type};
-		my $wear= $eshare->{Wear}+1;
-		return 0 if $wear > $req->{SharedMaxWear};
-		return 0 if $eshare->{Shares} != $req->{SharedMaxTasks};
-		$dbgprint->("PLAN LOOP   SHARE-COMPAT-OK Y");
-		return 1;
-	    };
-
 	    next PERIOD if $endevt->{Time} <= $try_time;
             # this period is entirely before the proposed slot;
             # it doesn't overlap, but must check subsequent periods
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3758.11281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBA-0006DB-Su; Wed, 07 Oct 2020 18:28:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3758.11281; Wed, 07 Oct 2020 18:28:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBA-0006Bu-87; Wed, 07 Oct 2020 18:28:44 +0000
Received: by outflank-mailman (input) for mailman id 3758;
 Wed, 07 Oct 2020 18:28:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5L-00072Q-Ox
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:22:43 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e436e85f-c801-4ac8-b2ae-868aa69cebab;
 Wed, 07 Oct 2020 18:21:08 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjy-0007CF-0G; Wed, 07 Oct 2020 19:00:38 +0100
X-Inumbo-ID: e436e85f-c801-4ac8-b2ae-868aa69cebab
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 19/82] host allocation: Executive: Honour $xparams{InfraPriority}
Date: Wed,  7 Oct 2020 18:59:21 +0100
Message-Id: <20201007180024.7932-20-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

And pass it to ms-queuedaemon.  No functional change with existing
callers, since no one sets this yet.

Forthcoming test host sharing machinery uses this.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index 5d71d08c..703f3d85 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -851,6 +851,7 @@ sub alloc_resources {
 		$set_info->('feature-noalloc', 1);
 		$set_info->('wait-start',$waitstart);
 		$set_info->('wait-start-adjust',$xparams{WaitStartAdjust});
+		$set_info->('infra-priority',$xparams{InfraPriority});
 
                 my $jobinfo= $xparams{JobInfo};
                 if (!defined $jobinfo and defined $flight and defined $job) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3762.11294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBC-0006HR-VN; Wed, 07 Oct 2020 18:28:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3762.11294; Wed, 07 Oct 2020 18:28:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBB-0006GE-N9; Wed, 07 Oct 2020 18:28:45 +0000
Received: by outflank-mailman (input) for mailman id 3762;
 Wed, 07 Oct 2020 18:28:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5u-00072Q-PO
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:18 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56a087aa-63ef-43c8-b5ff-160335ac114f;
 Wed, 07 Oct 2020 18:21:25 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk8-0007CF-T2; Wed, 07 Oct 2020 19:00:48 +0100
X-Inumbo-ID: 56a087aa-63ef-43c8-b5ff-160335ac114f
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 62/82] runvar access: Introduce effects_gone_before_share_reuse
Date: Wed,  7 Oct 2020 19:00:04 +0100
Message-Id: <20201007180024.7932-63-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

The syslog server, and its port, are used for things that happen in
this job, but the syslog server is torn down and a new one is started
when the host is reused.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm      | 10 ++++++----
 Osstest/TestSupport.pm |  6 ++++++
 2 files changed, 12 insertions(+), 4 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index ae3c1d33..01930e1f 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -727,10 +727,12 @@ sub di_installcmdline_core ($$;@) {
     push @cl, "priority=$debconf_priority";
     push @cl, "rescue/enable=true" if $xopts{RescueMode};
 
-    if ($r{syslog_server}) {
-	$r{syslog_server} =~ m/:(\d+)$/ or die "$r{syslog_server} ?";
-	push @cl, "log_host=$`", "log_port=$1";
-    }
+    effects_gone_before_share_reuse(sub {
+        if ($r{syslog_server}) {
+	    $r{syslog_server} =~ m/:(\d+)$/ or die "$r{syslog_server} ?";
+	    push @cl, "log_host=$`", "log_port=$1";
+	}
+    });
 
     return @cl;
 }
diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index c6bd6714..752c36c5 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -144,6 +144,7 @@ BEGIN {
                       gitcache_setup
 
 		      @accessible_runvar_pats sharing_for_build
+		      effects_gone_before_share_reuse
                       );
     %EXPORT_TAGS = ( );
 
@@ -3173,6 +3174,11 @@ END
 
 sub sharing_for_build () { @accessible_runvar_pats = qw(*); };
 
+sub effects_gone_before_share_reuse ($) {
+    local @accessible_runvar_pats = qw(*);
+    $_[0]();
+}
+
 sub runvar_access_restrict () {
     # restricts runvars to those in @accessible_runvar_pats
     return if "@accessible_runvar_pats" eq "*";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3769.11311 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBL-0006eZ-2D; Wed, 07 Oct 2020 18:28:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3769.11311; Wed, 07 Oct 2020 18:28:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBK-0006eK-RM; Wed, 07 Oct 2020 18:28:54 +0000
Received: by outflank-mailman (input) for mailman id 3769;
 Wed, 07 Oct 2020 18:28:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE48-00072Q-Ki
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:21:28 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb24c2a7-e8e3-4be2-80b3-d5f4e8342ecc;
 Wed, 07 Oct 2020 18:20:08 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDjz-0007CF-2x; Wed, 07 Oct 2020 19:00:39 +0100
X-Inumbo-ID: eb24c2a7-e8e3-4be2-80b3-d5f4e8342ecc
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 23/82] db_retry: Make the sleeps random and increasing
Date: Wed,  7 Oct 2020 18:59:25 +0100
Message-Id: <20201007180024.7932-24-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

When there's a thundering herd, this can run out of retries.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest.pm | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/Osstest.pm b/Osstest.pm
index 734c0ef6..809194f0 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -311,7 +311,8 @@ sub db_retry ($$$;$$) {
     my ($pre,$body) =
         (ref $code eq 'ARRAY') ? @$code : (sub { }, $code);
 
-    my $retries= 100;
+    my $max_retries= 100;
+    my $retry_count= 0;
     my $r;
     local $db_retry_stop;
     for (;;) {
@@ -339,10 +340,12 @@ sub db_retry ($$$;$$) {
 	};
 	last if !length $@;
 	die $@ unless $mjobdb->need_retry($dbh, $committing);
-        die "$dbh $body $@ ?" unless $retries-- > 0;
+        die "$dbh $body $@ GIVING UP ?" if ++$retry_count >= $max_retries;
 	eval { $dbh->rollback(); };
-	print STDERR "DB conflict (messages above may refer); retrying...\n";
-        sleep(1);
+	my $delay = rand $retry_count;
+	print STDERR "DB conflict (messages above may refer);".
+	    " retrying after $delay...\n";
+        sleep($delay);
     }
     return $r;
 }
-- 
2.20.1
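For readers outside the osstest codebase, the backoff this patch introduces can be modelled like so (an illustrative Python translation, not the harness's own code): the Nth retry sleeps `rand N` seconds, a uniform draw from [0, N), so the expected delay grows with the retry count and concurrently-retrying clients decorrelate instead of waking in lockstep. Note that Perl's core sleep() truncates to whole seconds, so the earliest retries may not sleep at all.

```python
import random

def retry_delays(max_retries=100, seed=None):
    """Model of the patched db_retry loop's sleeps: the n-th retry
    draws a delay uniformly from [0, n), so expected per-retry delay
    grows linearly (n/2) and total expected backoff grows
    quadratically, unlike the old fixed sleep(1)."""
    rng = random.Random(seed)
    return [rng.uniform(0, n) for n in range(1, max_retries + 1)]

# Each delay stays below its retry number, but on average later
# retries wait longer, thinning out a thundering herd.
delays = retry_delays(max_retries=10, seed=42)
```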



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3770.11318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBL-0006g9-Sc; Wed, 07 Oct 2020 18:28:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3770.11318; Wed, 07 Oct 2020 18:28:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBL-0006fQ-AP; Wed, 07 Oct 2020 18:28:55 +0000
Received: by outflank-mailman (input) for mailman id 3770;
 Wed, 07 Oct 2020 18:28:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE5z-00072Q-Ph
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:23 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ebb28b1-5068-4d39-a376-f39b1a9a02b4;
 Wed, 07 Oct 2020 18:21:27 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk2-0007CF-7y; Wed, 07 Oct 2020 19:00:42 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE5z-00072Q-Ph
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:23 +0000
X-Inumbo-ID: 9ebb28b1-5068-4d39-a376-f39b1a9a02b4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9ebb28b1-5068-4d39-a376-f39b1a9a02b4;
	Wed, 07 Oct 2020 18:21:27 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk2-0007CF-7y; Wed, 07 Oct 2020 19:00:42 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 35/82] resource reporting: Print username when listing "rogue tasks"
Date: Wed,  7 Oct 2020 18:59:37 +0100
Message-Id: <20201007180024.7932-36-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ms-planner | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ms-planner b/ms-planner
index 78d27b2f..41f27fa0 100755
--- a/ms-planner
+++ b/ms-planner
@@ -325,6 +325,7 @@ END
 	    $info .= " $arow->{type} $arow->{refkey}";
 	    $info .= " ($arow->{comment})" if defined $arow->{comment};
 	    $info .= " $arow->{subtask}";
+	    $info .= " (user $arow->{username})";
 	}
 	$plan->{Allocations}{$reskey}= {
             Task => $arow->{owntaskid},
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:28:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3771.11326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBN-0006j9-25; Wed, 07 Oct 2020 18:28:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3771.11326; Wed, 07 Oct 2020 18:28:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEBM-0006i9-Cp; Wed, 07 Oct 2020 18:28:56 +0000
Received: by outflank-mailman (input) for mailman id 3771;
 Wed, 07 Oct 2020 18:28:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQE64-00072Q-Py
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:28 +0000
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 029da054-f876-40dd-a094-2e3f3d07448a;
 Wed, 07 Oct 2020 18:21:29 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1kQDk6-0007CF-BH; Wed, 07 Oct 2020 19:00:46 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=1qty=DO=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQE64-00072Q-Py
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:23:28 +0000
X-Inumbo-ID: 029da054-f876-40dd-a094-2e3f3d07448a
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 029da054-f876-40dd-a094-2e3f3d07448a;
	Wed, 07 Oct 2020 18:21:29 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
	(return-path ijackson@chiark.greenend.org.uk)
	id 1kQDk6-0007CF-BH; Wed, 07 Oct 2020 19:00:46 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 52/82] host allocation: Group jobs by their reuse parameters
Date: Wed,  7 Oct 2020 18:59:54 +0100
Message-Id: <20201007180024.7932-53-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201007180024.7932-1-iwj@xenproject.org>
References: <20201007180024.7932-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This promotes reuse by arranging that jobs that can reuse a host get
to run consecutively.

Signed-off-by: Ian Jackson <Ian.Jackson@eu.citrix.com>
---
 Osstest/JobDB/Executive.pm  | 47 +++++++++++++++++++++++++++++++++++++
 Osstest/JobDB/Standalone.pm |  2 ++
 ts-host-reuse               |  1 +
 ts-hosts-allocate-Executive |  4 ++++
 4 files changed, 54 insertions(+)

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 30629572..8c235d45 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -389,6 +389,53 @@ END
     }
 }
 
+sub jobdb_set_hosts_infraprioritygroup ($$$$;$) { # method
+    my ($mo, $flight, $job, $group_key, $rref) = @_;
+    # Sets the runvar hosts_infraprioritygroup in $flight,$job
+    # The runvar values are NUM:GROUPKEY
+    # such that each GROUPKEY always has the same NUM, within the flight
+    # $rref is \%r (for use within a ts-*) or undef
+
+    my $vn = 'hosts_infraprioritygroup';
+
+    my $queryq = $dbh_tests->prepare(<<END);
+        SELECT job, val,
+               (job = ?) AS thisjob
+          FROM runvars
+         WHERE flight=?
+           AND name=?
+      ORDER BY thisjob DESC
+END
+    my $insertq = $dbh_tests->prepare(<<END);
+        INSERT INTO runvars (flight,job, name,val, synth)
+                     VALUES (?,     ?,   ?,   ?,   't')
+END
+
+    my $resulting;
+    db_retry($dbh_tests,[],sub {
+	my $use = 1;
+	$resulting = undef;
+        $queryq->execute($job, $flight, $vn);
+	while (my ($tjob, $tval, $thisjob) = $queryq->fetchrow_array()) {
+	    if ($thisjob) {
+		logm("$vn: job is already in group $tval");
+		return;
+	    }
+	    $tval =~ m/^(\d+)\:/ or die "$flight $job $tval ?";
+	    if ($' eq $group_key) {
+		$use = $1;
+		last;
+	    } elsif ($1 >= $use) {
+		$use = $1 + 1;
+	    }
+	}
+	$resulting = "$use:$group_key";
+	logm("$vn: inserting job into group $resulting");
+	$insertq->execute($flight,$job,$vn, $resulting);
+    });
+    $rref->{$vn} = $resulting if $rref && defined $resulting;
+}
+
 sub jobdb_flight_started_for_log_capture ($$) { #method
     my ($mo, $flight) = @_;
     my $started= $dbh_tests->selectrow_array(<<END);
diff --git a/Osstest/JobDB/Standalone.pm b/Osstest/JobDB/Standalone.pm
index 4f320ccf..1db4dc78 100644
--- a/Osstest/JobDB/Standalone.pm
+++ b/Osstest/JobDB/Standalone.pm
@@ -118,6 +118,8 @@ sub jobdb_resource_shared_mark_ready { } #method
 
 sub jobdb_check_other_job { } #method
 
+sub jobdb_set_hosts_infraprioritygroup { } # method
+
 sub jobdb_flight_started_for_log_capture ($$) { #method
     my ($mo, $flight) = @_;
     return time - 60*60; # just the most recent serial log then,
diff --git a/ts-host-reuse b/ts-host-reuse
index 5bdb07d1..29abe987 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -141,6 +141,7 @@ sub act_prealloc () {
     compute_test_sharetype();
     $ho = selecthost($whhost, undef, 1);
     set_runtime_hostflag($ho->{Ident}, "reuse-$sharetype");
+    $mjobdb->jobdb_set_hosts_infraprioritygroup($flight, $job, $sharetype, \%r);
 }
 
 sub act_start_test () {
diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index fc107c08..a50f8bf3 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -733,9 +733,13 @@ sub alloc_hosts () {
         ? -10000
         : -10 * @hids;
 
+    my $infrapriority =
+	($r{hosts_infraprioritygroup} // '') =~ m/^(\d+):/ ? $1 : undef;
+
     my $ok = alloc_resources(WaitStart =>
                     ($ENV{OSSTEST_RESOURCE_WAITSTART} || $fi->{started}),
                     WaitStartAdjust => $waitstartadjust,
+                    InfraPriority => $infrapriority,
 		    DebugFh => \*DEBUG,
                     \&attempt_allocation);
 
-- 
2.20.1
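The NUM:GROUPKEY numbering that jobdb_set_hosts_infraprioritygroup's comment describes can be sketched outside the database as follows (illustrative Python; the function name here is mine, not osstest's): a job whose GROUPKEY already appears in the flight reuses that group's NUM, otherwise it takes one past the highest NUM seen, so equal reuse parameters always map to the same group number within a flight.

```python
def assign_group(existing_vals, group_key):
    """Given the flight's existing "NUM:GROUPKEY" runvar values,
    return the value for a new job with group_key: reuse the NUM of a
    matching key, else allocate max(NUM) + 1 (starting from 1)."""
    use = 1
    for val in existing_vals:
        num_s, _, key = val.partition(":")
        num = int(num_s)
        if key == group_key:
            return val          # same reuse parameters: same group
        if num >= use:
            use = num + 1       # keep NUMs unique per group key
    return f"{use}:{group_key}"
```

With this, two jobs sharing reuse parameters land in the same group and so sort together for host allocation, which is what lets them run consecutively on a reused host.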



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 18:41:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 18:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3805.11353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEN6-00014c-Ty; Wed, 07 Oct 2020 18:41:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3805.11353; Wed, 07 Oct 2020 18:41:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQEN6-00014V-Qr; Wed, 07 Oct 2020 18:41:04 +0000
Received: by outflank-mailman (input) for mailman id 3805;
 Wed, 07 Oct 2020 18:41:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQEN6-00013t-39
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:41:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40d4bb9d-5e4f-4c64-91bf-4a88d914e1f9;
 Wed, 07 Oct 2020 18:40:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQEMy-0003iW-Rw; Wed, 07 Oct 2020 18:40:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQEMy-0005BS-K4; Wed, 07 Oct 2020 18:40:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQEMy-0005fo-JX; Wed, 07 Oct 2020 18:40:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQEN6-00013t-39
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 18:41:04 +0000
X-Inumbo-ID: 40d4bb9d-5e4f-4c64-91bf-4a88d914e1f9
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 40d4bb9d-5e4f-4c64-91bf-4a88d914e1f9;
	Wed, 07 Oct 2020 18:40:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=vQ3WMdtua64mU/eCMTrAqRNOOnZGhr6kTxh9C1ZvNec=; b=zxML+KMW3p6QmYo/rorgC/j17S
	MOFJSBH7eyNgZSGf7DHa22W2ViujDxN2tF03M95+0vNvEI+e/uBiTsh0KRk19fgrcy0lFtbVTHsgg
	hvidImfqemn3VEajhQEQ2zQmrFf2TqA2x0PZMFoKyFehTBP6fNFj3Wvzg71NYBJ7bWDE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQEMy-0003iW-Rw; Wed, 07 Oct 2020 18:40:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQEMy-0005BS-K4; Wed, 07 Oct 2020 18:40:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQEMy-0005fo-JX; Wed, 07 Oct 2020 18:40:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-amd64
Message-Id: <E1kQEMy-0005fo-JX@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 18:40:56 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-amd64
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e4e64408f5c755da3bf7bfd78e70ad9f6c448376
  Bug not present: 93508595d588afe9dca087f95200effb7cedc81f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155530/


  commit e4e64408f5c755da3bf7bfd78e70ad9f6c448376
  Author: Bertrand Marquis <bertrand.marquis@arm.com>
  Date:   Fri Oct 2 11:42:09 2020 +0100
  
      build: always use BASEDIR for xen sub-directory
      
      Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
      
      This is removing the dependency to xen subdirectory preventing using a
      wrong configuration file when xen subdirectory is duplicated for
      compilation tests.
      
      Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
      Acked-by: Jan Beulich <jbeulich@suse.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build --summary-out=tmp/155530.bisection-summary --basis-template=155495 --blessings=real,real-bisect xen-unstable-smoke build-amd64 xen-build
Searching for failure / basis pass:
 155517 fail [host=himrod1] / 155495 [host=himrod2] 155349 [host=himrod2] 155327 ok.
Failure / basis pass flights: 155517 / 155327
(tree with no url: minios)
(tree in basispass but not in latest: ovmf)
(tree in basispass but not in latest: qemu)
(tree in basispass but not in latest: seabios)
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest ea6d3cd1ed79d824e605a70c3626bc437c386260 e4e64408f5c755da3bf7bfd78e70ad9f6c448376
Basis pass ea6d3cd1ed79d824e605a70c3626bc437c386260 59b27f360e3d9dc0378c1288e67a91fa41a77158
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#59b27f360e3d9dc0378c1288e67a91fa41a77158-e4e64408f5c755da3bf7bfd78e70ad9f6c448376
Loaded 5001 nodes in revision graph
Searching for test results:
 155310 [host=himrod2]
 155321 [host=albana1]
 155327 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 59b27f360e3d9dc0378c1288e67a91fa41a77158
 155349 [host=himrod2]
 155495 [host=himrod2]
 155517 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 e4e64408f5c755da3bf7bfd78e70ad9f6c448376
 155519 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 59b27f360e3d9dc0378c1288e67a91fa41a77158
 155520 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 e4e64408f5c755da3bf7bfd78e70ad9f6c448376
 155522 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 3600118a52e75e10800806fdd05eba13adc87347
 155523 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 1bc30c076a7f1678166934c080e1bf94b2c189af
 155524 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 7f66c0dc41ae5f770c614e516810eb1f336e2470
 155525 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 93508595d588afe9dca087f95200effb7cedc81f
 155526 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 e4e64408f5c755da3bf7bfd78e70ad9f6c448376
 155527 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 93508595d588afe9dca087f95200effb7cedc81f
 155528 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 e4e64408f5c755da3bf7bfd78e70ad9f6c448376
 155529 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 93508595d588afe9dca087f95200effb7cedc81f
 155530 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 e4e64408f5c755da3bf7bfd78e70ad9f6c448376
Searching for interesting versions
 Result found: flight 155327 (pass), for basis pass
 For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 93508595d588afe9dca087f95200effb7cedc81f, results HASH(0x55f3dbade518) HASH(0x55f3dbae5e48) HASH(0x55f3dbae9b58) For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 7f66c0dc41ae5f770c614e516810eb1f336e2470, results HASH(0x55f3dbad0e18) For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 1bc30c076a7f1678166934c080e1bf94b2c189af, results HASH(0x55f3dbad9760) For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 3600118a52e75e10800806fdd05eba13adc87347, results HASH(0x55f3dbadb2e8) For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 59b27f360e3d9dc0378c1288e67a91fa41a77158, results HASH(0x55f3dbad1118) HASH(0x55f3dbad9460) Result found: flight 155517 (fail), for basis failure (at ancestor ~407)
 Repro found: flight 155519 (pass), for basis pass
 Repro found: flight 155520 (fail), for basis failure
 0 revisions at ea6d3cd1ed79d824e605a70c3626bc437c386260 93508595d588afe9dca087f95200effb7cedc81f
No revisions left to test, checking graph state.
 Result found: flight 155525 (pass), for last pass
 Result found: flight 155526 (fail), for first failure
 Repro found: flight 155527 (pass), for last pass
 Repro found: flight 155528 (fail), for first failure
 Repro found: flight 155529 (pass), for last pass
 Repro found: flight 155530 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  e4e64408f5c755da3bf7bfd78e70ad9f6c448376
  Bug not present: 93508595d588afe9dca087f95200effb7cedc81f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155530/


  commit e4e64408f5c755da3bf7bfd78e70ad9f6c448376
  Author: Bertrand Marquis <bertrand.marquis@arm.com>
  Date:   Fri Oct 2 11:42:09 2020 +0100
  
      build: always use BASEDIR for xen sub-directory
      
      Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
      
      This is removing the dependency to xen subdirectory preventing using a
      wrong configuration file when xen subdirectory is duplicated for
      compilation tests.
      
      Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
      Acked-by: Jan Beulich <jbeulich@suse.com>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
155530: tolerable ALL FAIL

flight 155530 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155530/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
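The narrowing the log above records, converging on a last-pass / first-failure pair and then reproducing each side, is at heart a binary search over the revision range, assuming every revision after the culprit also fails. A minimal sketch (illustrative only; osstest actually walks a revision-tuple graph, not a linear list, and reruns flights to confirm each boundary):

```python
def find_first_failure(revisions, fails):
    """Binary-search a revision range for the culprit, given that
    revisions[0] passes, revisions[-1] fails, and failures are
    monotone. Returns (bug_not_present, bug_introduced)."""
    lo, hi = 0, len(revisions) - 1   # lo passes, hi fails: invariant
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if fails(revisions[mid]):
            hi = mid                 # culprit is at or before mid
        else:
            lo = mid                 # culprit is after mid
    return revisions[lo], revisions[hi]
```

In the report above the pair found this way corresponds to 93508595d588 (bug not present) and e4e64408f5c7 (bug introduced).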



From xen-devel-bounces@lists.xenproject.org Wed Oct 07 19:06:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 19:06:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3811.11366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQElp-0003tf-2D; Wed, 07 Oct 2020 19:06:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3811.11366; Wed, 07 Oct 2020 19:06:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQElo-0003tY-VZ; Wed, 07 Oct 2020 19:06:36 +0000
Received: by outflank-mailman (input) for mailman id 3811;
 Wed, 07 Oct 2020 19:06:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQEln-0003t2-M4
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 19:06:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0ea0da2-34a5-42e7-b9a9-bb5e580fd770;
 Wed, 07 Oct 2020 19:06:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQElc-0004SA-CW; Wed, 07 Oct 2020 19:06:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQElc-0006kC-4l; Wed, 07 Oct 2020 19:06:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQElc-0004Ya-4H; Wed, 07 Oct 2020 19:06:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQEln-0003t2-M4
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 19:06:35 +0000
X-Inumbo-ID: c0ea0da2-34a5-42e7-b9a9-bb5e580fd770
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c0ea0da2-34a5-42e7-b9a9-bb5e580fd770;
	Wed, 07 Oct 2020 19:06:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VSGkDq6t9iLFNfNU+AnREC3AiFJkMEYYzxmortSCXLc=; b=S1whXdmJJUhVaHI56ZDM7B5WQs
	q8rSE5DYV6gldW/eExTcDKoRTNrZaC9IdTZsu/GpayLBeHuUJ3CZa3kD2zJIE9NMqHr4SE4AvVjAY
	i61HlquCOAAzErkI6/lGQEtd/hYLPUpdTot1lbI4ESARz8qJcTO6yERy/bxScvRHOCi4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQElc-0004SA-CW; Wed, 07 Oct 2020 19:06:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQElc-0006kC-4l; Wed, 07 Oct 2020 19:06:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQElc-0004Ya-4H; Wed, 07 Oct 2020 19:06:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155521-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155521: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
X-Osstest-Versions-That:
    xen=93508595d588afe9dca087f95200effb7cedc81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 19:06:24 +0000

flight 155521 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155521/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
baseline version:
 xen                  93508595d588afe9dca087f95200effb7cedc81f

Last test of basis   155495  2020-10-06 12:00:29 Z    1 days
Failing since        155517  2020-10-07 12:01:31 Z    0 days    2 attempts
Testing same since   155521  2020-10-07 16:00:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93508595d5..7a519f8bda  7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 21:26:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 21:26:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3813.11379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQGwW-0002bp-Aw; Wed, 07 Oct 2020 21:25:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3813.11379; Wed, 07 Oct 2020 21:25:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQGwW-0002bi-6t; Wed, 07 Oct 2020 21:25:48 +0000
Received: by outflank-mailman (input) for mailman id 3813;
 Wed, 07 Oct 2020 21:25:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cKu2=DO=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1kQGwV-0002bd-CM
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 21:25:47 +0000
Received: from mail-oo1-xc43.google.com (unknown [2607:f8b0:4864:20::c43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13118245-8dd5-4776-83ea-040311671c39;
 Wed, 07 Oct 2020 21:25:45 +0000 (UTC)
Received: by mail-oo1-xc43.google.com with SMTP id h8so974936ooc.12
 for <xen-devel@lists.xenproject.org>; Wed, 07 Oct 2020 14:25:44 -0700 (PDT)
X-Inumbo-ID: 13118245-8dd5-4776-83ea-040311671c39
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=tfaWlfjOG5Lclvg6ocQl8VXO5607RWxeWJyy9KOesYM=;
        b=UZCOdHk95ZQx91wU3nK7gmYCdNcC6lev8R3ryBUOF1ll11oISKeDBAmLE4vseOxoXa
         kgsMQ5UZ2si62i6PGHAcl7LBDld7iC+WY7igINshWGEiWpHFu9662TtM0MI0lLt4rwCw
         u825cUPgTxmJUF4fcjQd1eku4MnY40QaWMCloupWJtYPyhLHqTIbRxf0qA6otj10mX9P
         1ANUn+wYIOq6E04EjqlEqJvyaHJV6mscxcWZGwyqMtSsrK3lIHhvtOQaSQmQ3J34QVEu
         6IGMNPRr4X558kBBgq3CI232D/xdeYNJq0XbM37il/OlhEUKWc2Fvav0motQ0ULaBGXE
         xrnA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=tfaWlfjOG5Lclvg6ocQl8VXO5607RWxeWJyy9KOesYM=;
        b=KY0sUQM/U9+pNpyGraJCwrw9MA4WK+FDAX5L9SHKLHEiW1p+SWeNQcQNn1e9BDM6N0
         0YFWootpQ3309OX2W7CyFMzTvAcVCbuTULp56ujndT20pe4lKkGa6DurauE040wVLNiz
         Hb9/C9YvhYUXrgSWFQGTeyuTc27V4UvCFz1UyGeBDa3xnvdqsVTzPIAyUdhDDDM6B6X0
         SqghBdkpJcHAreUhUe1Y5jPxRBLbcdflBz7BaB15bTmbPZIrhcH9bFDZ0UxconvTpm70
         a4tA2NwWLDFosrlk2KtloC3tgL6eY3/ltYeWIsGV/l978APHoso4yo7L97v8bXVDfY/Y
         9p7Q==
X-Gm-Message-State: AOAM532wo1C1yTINhCztHQXsIePxd0T9XXlBXJBHNrOcxJ9B2cf/ADKb
	UFXt2yHKDzb2gGGnb6viKHa9Eg2rYgKrL+nucSI=
X-Google-Smtp-Source: ABdhPJy0+cnlxGAXo2GrThVlb2UBFEAsuPoTD0bxwjkTV5H/gt9N5TeBVaIkyqnuXTKJ0rszsfbs/GWKckyMybVMcNs=
X-Received: by 2002:a4a:ce90:: with SMTP id f16mr1863791oos.55.1602105943960;
 Wed, 07 Oct 2020 14:25:43 -0700 (PDT)
MIME-Version: 1.0
References: <CACMJ4GaWcF74zE5qt31MDvcX1mx1HSW7eaOXpfpWJ2KzQZOg=Q@mail.gmail.com>
 <20201001085500.GX19254@Air-de-Roger>
In-Reply-To: <20201001085500.GX19254@Air-de-Roger>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Wed, 7 Oct 2020 14:25:27 -0700
Message-ID: <CACMJ4Ga3ygh=7o+rELAiJy2uZoMJqQUV9jQ4zQxvgZSuXzm5QA@mail.gmail.com>
Subject: Re: VirtIO & Argo: a Linux VirtIO transport driver on Xen
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Rich Persaud <persaur@gmail.com>, 
	Daniel Smith <dpsmith@apertussolutions.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Roger: thanks for your interest and fast response to the first post in
this thread. Responses inline below.

On Thu, Oct 1, 2020 at 1:55 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Wed, Sep 30, 2020 at 09:03:03PM -0700, Christopher Clark wrote:
> > Hello
> >
> > Following up on a topic introduced in last month's community call,
> > here are some notes on combining existing Linux guest virtio drivers
> > with Argo inter-VM communication on Xen.  If feasible, this would
> > combine the compatibility of tested Linux drivers with the mandatory
> > access control properties of Argo, which could help meet functional
> > safety and security requirements for Xen on Arm and x86.  This
> > development work is not resourced, but the initial investigation has
> > been encouraging.  We are sharing here for comment, aiming to work
> > with those in the Xen and wider Linux communities who may have similar
> > requirements.
> >
> > Christopher
> > - 30th September 2020
> >
> > ---
> > This document describes a proposal for development of a new Linux device
> > driver to introduce Hypervisor-Mediated data eXchange (HMX) into the
> > data transport of the popular VirtIO suite of Linux virtual device
> > drivers, by using Argo with Xen. This will provide a way to use VirtIO
> > device drivers within Xen guest VMs with strong isolation properties.
> >
> > This work has been developed by Christopher Clark, Daniel Smith and Rich
> > Persaud, with Eric Chanudet and Nick Krasnoff.
> > Christopher is the primary author of this version of this document.
> >
> > ----
> > Contents:
> >
> > = Context: Introduction to VirtIO
> > == VirtIO Architecture Overview
> > === VirtIO front-end driver classes
> > === VirtIO transport drivers
> > = VirtIO with Argo transport
> > == Using VirtIO with the new Argo transport driver
> > === Host platform software
> > ==== QEMU
> > ==== Linux Argo driver
> > ==== Toolstack
> > === Functionality
> > === Mechanisms
> > === From last discussion
> > = References
> >
> > ----
> > = Context: Introduction to VirtIO
> >
> > VirtIO is a virtual device driver standard developed originally for the
> > Linux kernel, drawing upon the lessons learned during the development of
> > paravirtualized device drivers for Xen, KVM and other hypervisors. It
> > aimed to become a “de-facto standard for virtual I/O devices”, and to
> > some extent has succeeded in doing so. VirtIO is now widely implemented
> > in both software and hardware, it is commonly the first choice for
> > virtual driver implementation in new virtualization technologies, and
> > the specification is now maintained under governance of the OASIS open
> > standards organization.
> >
> > VirtIO’s system architecture abstracts device-specific and
> > device-class-specific interfaces and functionality from the transport
> > mechanisms that move data and issue notifications within the kernel and
> > across virtual machine boundaries. It is attractive to developers
> > seeking to implement new drivers for a virtual device because VirtIO
> > provides documented specified interfaces with a well-designed, efficient
> > and maintained common core implementation that can significantly reduce
> > the amount of work required to develop a new virtual device driver.
> >
> > VirtIO follows the Xen PV driver model of split-device drivers, where a
> > front-end device driver runs within the guest virtual machine to provide
> > the device abstraction to the guest kernel, and a back-end driver runs
> > outside the VM, in platform-provided software - eg. within a QEMU device
> > emulator - to communicate with the front-end driver and provide mediated
> > access to physical device resources.
> >
> > A critical property of the current common VirtIO implementations is that
> > their use of shared memory for data transport prevents enforcement of
> > strong isolation between the front-end and back-end virtual machines,
> > since the back-end VirtIO device driver is required to be able to obtain
> > direct access to the memory owned by the virtual machine running the
> > front-end VirtIO device driver. ie. The VM hosting the back-end driver
> > has significant privilege over any VM running a front-end driver.
> >
> > Xen’s PV drivers use the grant-table mechanism to confine shared memory
> > access to specific memory pages, and permission to access those is
> > specifically granted by the driver in the VM that owns the memory. Argo
> > goes further and achieves stronger isolation than this, since it requires
> > no memory sharing between communicating virtual machines.
>
> Since there's no memory sharing, all data must be copied between
> buffers (by the hypervisor I assume). Will this result in a noticeable
> performance penalty?
>
> OTOH no memory sharing means no need to map foreign memory on the
> backend, which is costly.

This is a fair question. An important part of this work will be to
measure and characterize the performance; however, this approach was
not pursued in blind faith: the formerly-Bromium-now-HP vSentry
product has demonstrated network and block devices working over v4v, a
mechanism closely related to Argo, with acceptable performance.

> > In contrast to Xen’s current driver transport options, the current
> > implementations of VirtIO transports pass memory addresses directly
> > across the VM boundary, under the assumption of shared memory access,
> > and thereby require the back-end to have sufficient privilege to
> > directly access any memory that the front-end driver refers to. This has
> > presented a challenge for the suitability of using VirtIO drivers for
> > Xen deployments where isolation is a requirement. Fortunately, a path
> > exists for integration of the Argo transport into VirtIO which can
> > address this and enable use of the existing body of VirtIO device
> > drivers with isolation maintained and mandatory access control enforced:
> > consequently this system architecture is significantly differentiated
> > from other options for virtual devices.
> >
> > == VirtIO Architecture Overview
> >
> > In addition to the front-end / back-end split device driver model, there
> > are further standard elements of VirtIO system architecture.
> >
> > For reference, VirtIO is described in detail in the “VirtIO 1.1
> > specification” OASIS standards document. [1]
> >
> > The front-end device driver architecture imposes tighter constraints on
> > implementation direction for this project, since it is this that is
> > already implemented in the wide body of existing VirtIO device drivers
> > that we are aiming to enable use of.
> >
> > The back-end software is implemented in the platform-provided software -
> > ie. the hypervisor, toolstack, a platform-provided VM or a device
> > emulator, etc. - where we have more flexibility in implementation
> > options, and the interface is determined by both the host virtualization
> > platform and the new transport driver that we are intending to create.
> >
> > === VirtIO front-end driver classes
> >
> > There are multiple classes of VirtIO device driver within the Linux
> > kernel; these include the general class of front-end VirtIO device
> > drivers, which provide function-specific logic to implement virtual
> > devices - eg. a virtual block device driver for storage - and the
> > _transport_ VirtIO device drivers, which are responsible for device
> > discovery with the platform and provision of data transport across the
> > VM boundary between the front-end drivers and the corresponding remote
> > back-end driver running outside the virtual machine.
> >
> > === VirtIO transport drivers
> >
> > There are several implementations of VirtIO transport device drivers in
> > Linux; each implements a common interface within the kernel, and they are
> > designed to be interchangeable and compatible with the VirtIO front-end
> > drivers: so the same front-end driver can use different transports on
> > different systems. Transports can coexist: different virtual devices can
> > be using different transports within the same virtual machine at the
> > same time.
>
> Does this transport layer also define how the device configuration is
> exposed?

Yes, I think that it does.

>
> >
> > = VirtIO with Argo transport
> >
> > Enabling VirtIO to use the Argo interdomain communication mechanism for
> > data transport across the VM boundary will address critical requirements:
> >
> > * Preserve strong isolation between the two ends of the split device driver
> >
> >     * ie. remove the need for any shared memory between domains or any
> >       privilege to map the memory belonging to the other domain
> >
> > * Enable enforcement of granular mandatory access control policy over
> >   the communicating endpoints
> >
> >     * ie. Use Xen’s existing XSM/Flask control over Argo communication,
> >       and leverage any new Argo MAC capabilities as they are introduced
> >       to govern VirtIO devices
> >
> > The proposal is to implement a new VirtIO transport driver for Linux
> > that utilizes Argo. It will be used within guest virtual machines, and
> > be source-compatible with the existing VirtIO front-end device drivers
> > in Linux.
> >
> > It will be paired with a corresponding new VirtIO-Argo back-end to run
> > within the QEMU device emulator, in the same fashion as the existing
> > VirtIO transport back-ends, and the back-end will use a non-VirtIO Linux
> > driver to access and utilize Argo.
>
> IMO it would be better if we could implement the backends in a more
> lightweight tool, something like kvmtool.

That's an interesting suggestion, thanks - I would be open to this.

>
> > Open Source VirtIO drivers for Windows are available, and enable Windows
> > guest VMs to run with virtual devices provided by the VirtIO backends in
> > QEMU. The Windows VirtIO device drivers have the same transport
> > abstraction and separate driver structure, so an Argo transport driver
> > can also be developed for Windows for source-compatibility with Windows
> > VirtIO device drivers.
> >
> > == Using VirtIO with the new Argo transport driver
> >
> > VirtIO device drivers are included in the mainline Linux kernel and
> > enabled in most modern Linux distributions. Adding the new Linux-Argo
> > guest driver to the upstream Linux kernel will enable seamless
> > deployment of modern Linux guest VMs on VirtIO-Argo hypervisor
> > platforms.
> >
> > === Host platform software
> >
> > ==== QEMU
> >
> > The QEMU device emulator implements the VirtIO transport that the
> > front-end will connect to. Current QEMU 5.0 implements both the
> > virtio-pci and virtio-mmio common transports.
>
> Oh, I think that answers my question from above, and you would indeed
> expose the device configuration using an Argo specific virtio device
> bus.

Yes.

> Would it be possible to expose the device configuration using
> virtio-{pci,mmio} but do the data transfer using Argo?

I think potentially yes, with a bit of investigation needed to
determine exactly how to make that work.

>
> >
> > ==== Linux Argo driver
> >
> > For QEMU to be able to use Argo, it will need an Argo Linux kernel
> > device driver, with similar functionality to the existing Argo Linux
> > driver.
> >
> > ==== Toolstack
> >
> > The toolstack of the hypervisor is responsible for configuring and
> > establishing the back-end devices according to the virtual machine
> > configuration. It will need to be aware of the VirtIO-Argo transport and
> > initialize the back-ends for each VM with a suitable configuration for
> > it.
> >
> > Alternatively, in systems that do not run with a toolstack, the DomB
> > launch domain (when available) can perform any necessary initialization.
> >
> > === Functionality
> >
> > Adding Argo as a transport for VirtIO will retain Argo’s MAC policy
> > checks on all data movement, via Xen's XSM/Flask, while allowing use of
> > the VirtIO virtual device drivers and device implementations.
> >
> > With the VirtIO virtual device drivers using the VirtIO-Argo transport
> > driver, the existing Xen PV drivers, which use the grant tables and
> > event channels, are not required, and their substitution enables removal
> > of shared memory from the data path of the device drivers in use and
> > makes the virtual device driver data path HMX-compliant.
> >
> > In addition, as new virtual device classes in Linux have VirtIO drivers
> > implemented, these should transparently be enabled with Mandatory Access
> > Control, via the existing virtio-argo transport driver, potentially
> > without further effort required - although please note that for some
> > cases (eg. graphics) optimizing performance characteristics may require
> > additional effort.
> >
> > === Mechanisms
> >
> > VirtIO transport drivers are responsible for virtual device enumeration,
> > triggering driver initialization for the devices, and we are proposing
> > to use ACPI tables to surface the data to the guest for the new driver
> > to parse for this purpose. Device tree has been raised as an option for
> > this on Arm and it will be evaluated.
> >
> > The VirtIO device drivers will retain use of the virtqueues, but the
> > descriptors passed between domains by the new transport driver will not
> > include guest physical addresses, but instead reference data that has
> > been exchanged via Argo. Each transmission via XEN_ARGO_OP_sendv is
> > subject to MAC checks by XSM.
> >
> > === From last discussion
> >
> > * Design of how virtual devices are surfaced to the guest VM for
> >   enumeration by the VirtIO-Argo transport driver
> >
> >     * Current plan, from the initial x86 development focus, is to
> >       populate ACPI tables
>
> Do we plan to introduce a new ACPI table for virtio-argo devices? Or
> are there plans to expand an existing table?
>
> I'm asking because all this would need to be discussed with the UEFI
> Forum in order to get whatever is needed into the spec (IIRC a
> separate table is easier because the specification doesn't need to be
> part of the official ACPI spec).

We don't have a decision either way on this at the moment.

> >     * Interest in using Device Tree, for static configurations on Arm,
> >       was raised on last month's Xen Community Call.
> >         * is being considered with development of DomB in progress
> >     * Does not need to be dom0 that populates this for the guest;
> >       just some domain with suitable permissions to do so
>
> Hm, while it's true that ACPI tables don't need to be populated by
> dom0 itself, it must be a domain that has write access to the guest
> memory, so that it can copy the created ACPI tables into the guest
> physmap.

Ack.

> I think this is all fine, but seems like a very non-trivial amount of
> work that depends on other entities (the UEFI Forum for ACPI changes
> and OASIS for the virtio ones).

I think your suggestion that we expose a virtual PCI device (as per
virtio-pci) to supply the device configuration, as an alternative to
an ACPI table, is attractive; even more so if there is a means of
using the existing virtio-pci device driver just with Argo as
transport.

> It also worries me a bit that we are
> jumping from not having virtio support at all on Xen to introducing
> our own transport layer, but I guess that's partly due to the existing
> virtio transports not being suitable for Xen use-cases.
>
> Regarding the specific usage of Argo itself, it's my understanding
> that a normal frontend would look like:
>
> virtio front -> virtio argo transport -> Argo hypervisor specific driver
>
> Would it be possible to implement an Argo interface that's hypervisor
> agnostic? I think that way you would have a much easier time trying
> to get something like this accepted upstream, if the underlying Argo
> interface could be implemented by any hypervisor _without_ having to
> add a new Argo hypervisor specific driver.
>
> On x86 I guess you could implement the Argo interface using MSRs,
> which would then allow any hypervisor to provide the same interface
> and thus there would be no component tied to Xen or any specific
> hypervisor.

I think the Argo hypervisor interface will have to change anyway, due
to its use of virtual addresses in hypercall arguments, which I
understand the maintainers would like to deprecate and replace across
all of Xen's hypercalls, so I'm open to looking at this when designing
the next generation hypervisor interface for Argo.

> Finally, has this been raised or commented with OASIS? I'm at least not
> familiar at all with virtio, and I bet that applies to most of the Xen
> community, so I think their opinion is likely even more important than
> the Xen community one.

I haven't taken this proposal to OASIS but if there are supporters of
this work that would like to engage there, I think it could make sense
as it develops.

thanks,

Christopher

>
> Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 21:58:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 21:58:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3818.11391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQHRe-0006Fs-3f; Wed, 07 Oct 2020 21:57:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3818.11391; Wed, 07 Oct 2020 21:57:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQHRe-0006Fl-0e; Wed, 07 Oct 2020 21:57:58 +0000
Received: by outflank-mailman (input) for mailman id 3818;
 Wed, 07 Oct 2020 21:57:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xB9j=DO=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQHRc-0006Fg-Jq
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 21:57:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 588a64c3-f3cb-4c30-b65a-e8781db56466;
 Wed, 07 Oct 2020 21:57:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQHRZ-00084g-9F; Wed, 07 Oct 2020 21:57:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQHRY-0007T8-Sk; Wed, 07 Oct 2020 21:57:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQHRY-0005XV-Pk; Wed, 07 Oct 2020 21:57:52 +0000
X-Inumbo-ID: 588a64c3-f3cb-4c30-b65a-e8781db56466
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T/OMNqsqH6l0txJSa8yMtdGK3KLfcU8+OnEp8gs3Dkw=; b=2KGVA8inbGiERRyd+YOyUt6x8i
	eQYdk7eelJBO2BUKUX2qIezb+aG7j4H482x4OJ2Q1pjdUTn1t+XlRvxvTr4D9xE04Dn4zehAl/OkZ
	PyjMCfEYc6lrVQ7Z5obEAU64EIJYnTxLIdeQgccZe8P4OtFQygpgNkY6FyNQoszjdqH8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155513-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 155513: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=d22f99d235e13356521b374410a6ee24f50b65e6
X-Osstest-Versions-That:
    linux=a9518c1aec5b6a8e1a04bbd54e6ba9725ef0db4c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 07 Oct 2020 21:57:52 +0000

flight 155513 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155513/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 155222

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                d22f99d235e13356521b374410a6ee24f50b65e6
baseline version:
 linux                a9518c1aec5b6a8e1a04bbd54e6ba9725ef0db4c

Last test of basis   155222  2020-10-01 11:40:31 Z    6 days
Testing same since   155513  2020-10-07 06:19:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Banerjee, Debabrata" <dbanerje@akamai.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Ahmad Fatoum <a.fatoum@pengutronix.de>
  Al Viro <viro@zeniv.linux.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Aloka Dixit <alokad@codeaurora.org>
  Andrew Jeffery <andrew@aj.id.au>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Bryan O'Donoghue <bryan.odonoghue@linaro.org>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Chris Packham <chris.packham@alliedtelesis.co.nz>
  Christoph Hellwig <hch@lst.de>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Corentin Labbe <clabbe@baylibre.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  dillon min <dillon.minfei@gmail.com>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Eric Sandeen <sandeen@redhat.com>
  Eric Sandeen <sandeen@sandeen.net>
  Felix Fietkau <nbd@nbd.name>
  Filipe Manana <fdmanana@suse.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guo Ren <guoren@linux.alibaba.com>
  Hans de Goede <hdegoede@redhat.com>
  Harry Cutts <hcutts@chromium.org>
  Jakub Kicinski <kuba@kernel.org>
  James Smart <james.smart@broadcom.com>
  Jean Delvare <jdelvare@suse.de>
  Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Kerr <jk@codeconstruct.com.au>
  Jiri Kosina <jkosina@suse.cz>
  Jochen Friedrich <jochen@scram.de>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Keith Busch <kbusch@kernel.org>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Laurent Dufour <ldufour@linux.ibm.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lucy Yan <lucyyan@google.com>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Martin Cerveny <m.cerveny@computer.org>
  Maxime Ripard <maxime@cerno.tech>
  Michal Hocko <mhocko@suse.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Nicolas VINCENT <nicolas.vincent@vossloh.com>
  Olympia Giannou <ogiannou@gmail.com>
  Olympia Giannou <olympia.giannou@leica-geosystems.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qian Cai <cai@redhat.com>
  Revanth Rajashekar <revanth.rajashekar@intel.com>
  Rob Herring <robh@kernel.org>
  Sasha Levin <sashal@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sebastien Boeuf <sebastien.boeuf@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Boyd <sboyd@kernel.org>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sylwester Nawrocki <s.nawrocki@samsung.com>
  Taiping Lai <taiping.lai@unisoc.com>
  Tao Ren <rentao.bupt@gmail.com>
  Thibaut Sautereau <thibaut.sautereau@ssi.gouv.fr>
  Thierry Reding <treding@nvidia.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vincent Huang <vincent.huang@tw.synaptics.com>
  Vinod Koul <vkoul@kernel.org>
  Will McVicker <willmcvicker@google.com>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Xianting Tian <tian.xianting@h3c.com>
  Xie He <xie.he.0141@gmail.com>
  Xu Kai <xukai@nationalchip.com>
  Yu Kuai <yukuai3@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1759 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 07 22:38:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 07 Oct 2020 22:38:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3824.11411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQI4g-0002MM-CX; Wed, 07 Oct 2020 22:38:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3824.11411; Wed, 07 Oct 2020 22:38:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQI4g-0002MF-9X; Wed, 07 Oct 2020 22:38:18 +0000
Received: by outflank-mailman (input) for mailman id 3824;
 Wed, 07 Oct 2020 22:38:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jbsw=DO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kQI4f-0002MA-37
 for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 22:38:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a5a90dd-7fc0-494e-8f89-e3077fbec196;
 Wed, 07 Oct 2020 22:38:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9C81A2083B;
 Wed,  7 Oct 2020 22:38:15 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Jbsw=DO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kQI4f-0002MA-37
	for xen-devel@lists.xenproject.org; Wed, 07 Oct 2020 22:38:17 +0000
X-Inumbo-ID: 3a5a90dd-7fc0-494e-8f89-e3077fbec196
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3a5a90dd-7fc0-494e-8f89-e3077fbec196;
	Wed, 07 Oct 2020 22:38:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 9C81A2083B;
	Wed,  7 Oct 2020 22:38:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602110295;
	bh=l4K4T8fo0+ur1aHgV4iAACL0DIXhCf13YMCySooTSfU=;
	h=From:To:Cc:Subject:Date:From;
	b=KPzCRZISJ89qN6/XajpmMZeVn7i2WBUxK1Dc6hmaq6OVONbmAUeqW0NvE3+PtuE3k
	 X1jgKugLQYslqshZPX7EGtjrWiH+lPnMdhQevscbQwdA2wh/dJPCsJFap887rfueg9
	 LBkwFIZAd2y+bseiVqBkTGhLeVl13JkfxOYpULd4=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	sstabellini@kernel.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	roman@zededa.com
Subject: [PATCH v3] xen/rpi4: implement watchdog-based reset
Date: Wed,  7 Oct 2020 15:38:13 -0700
Message-Id: <20201007223813.1638-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

The preferred method to reboot the RPi4 is PSCI. If PSCI is not
available, the watchdog must be programmed to reboot the board.

The implementation is based on
drivers/watchdog/bcm2835_wdt.c:__bcm2835_restart in Linux v5.9-rc7.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Acked-by: Julien Grall <jgrall@amazon.com>
CC: roman@zededa.com
---
Changes in v3:
- fix typo in commit message
- dprintk -> printk
---
 xen/arch/arm/platforms/brcm-raspberry-pi.c | 61 ++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
index f5ae58a7d5..811b40b1a6 100644
--- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
+++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
@@ -17,6 +17,10 @@
  * GNU General Public License for more details.
  */
 
+#include <xen/delay.h>
+#include <xen/mm.h>
+#include <xen/vmap.h>
+#include <asm/io.h>
 #include <asm/platform.h>
 
 static const char *const rpi4_dt_compat[] __initconst =
@@ -37,12 +41,69 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
      * The aux peripheral also shares a page with the aux UART.
      */
     DT_MATCH_COMPATIBLE("brcm,bcm2835-aux"),
+    /* Special device used for rebooting */
+    DT_MATCH_COMPATIBLE("brcm,bcm2835-pm"),
     { /* sentinel */ },
 };
 
+
+#define PM_PASSWORD                 0x5a000000
+#define PM_RSTC                     0x1c
+#define PM_WDOG                     0x24
+#define PM_RSTC_WRCFG_FULL_RESET    0x00000020
+#define PM_RSTC_WRCFG_CLR           0xffffffcf
+
+static void __iomem *rpi4_map_watchdog(void)
+{
+    void __iomem *base;
+    struct dt_device_node *node;
+    paddr_t start, len;
+    int ret;
+
+    node = dt_find_compatible_node(NULL, NULL, "brcm,bcm2835-pm");
+    if ( !node )
+        return NULL;
+
+    ret = dt_device_get_address(node, 0, &start, &len);
+    if ( ret )
+    {
+        printk("Cannot read watchdog register address\n");
+        return NULL;
+    }
+
+    base = ioremap_nocache(start & PAGE_MASK, PAGE_SIZE);
+    if ( !base )
+    {
+        printk("Unable to map watchdog register!\n");
+        return NULL;
+    }
+
+    return base;
+}
+
+static void rpi4_reset(void)
+{
+    uint32_t val;
+    void __iomem *base = rpi4_map_watchdog();
+
+    if ( !base )
+        return;
+
+    /* use a timeout of 10 ticks (~150us) */
+    writel(10 | PM_PASSWORD, base + PM_WDOG);
+    val = readl(base + PM_RSTC);
+    val &= PM_RSTC_WRCFG_CLR;
+    val |= PM_PASSWORD | PM_RSTC_WRCFG_FULL_RESET;
+    writel(val, base + PM_RSTC);
+
+    /* No sleeping, possibly atomic. */
+    mdelay(1);
+}
+
 PLATFORM_START(rpi4, "Raspberry Pi 4")
     .compatible     = rpi4_dt_compat,
     .blacklist_dev  = rpi4_blacklist_dev,
+    .reset = rpi4_reset,
     .dma_bitsize    = 30,
 PLATFORM_END
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 01:08:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 01:08:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3845.11443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQKPN-0003It-Qd; Thu, 08 Oct 2020 01:07:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3845.11443; Thu, 08 Oct 2020 01:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQKPN-0003Im-N7; Thu, 08 Oct 2020 01:07:49 +0000
Received: by outflank-mailman (input) for mailman id 3845;
 Thu, 08 Oct 2020 01:07:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t5hJ=DP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kQKPM-0003Ih-6K
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 01:07:48 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0fcfd36-877f-4d34-b0df-9c792ed31aad;
 Thu, 08 Oct 2020 01:07:46 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0E02D20872;
 Thu,  8 Oct 2020 01:07:45 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=t5hJ=DP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kQKPM-0003Ih-6K
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 01:07:48 +0000
X-Inumbo-ID: e0fcfd36-877f-4d34-b0df-9c792ed31aad
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e0fcfd36-877f-4d34-b0df-9c792ed31aad;
	Thu, 08 Oct 2020 01:07:46 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 0E02D20872;
	Thu,  8 Oct 2020 01:07:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602119265;
	bh=rMLfJWcMmXi0+/xXn3mlI+M9goai9L6dZfHG486iEsQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eO2CVbkpc1n5JZbxM08bS803nUBBKWh1YnKqS6Vq0CjJkU5W9sUUDD9PxO/hdBVOf
	 KyUfIy1lc2Se3htKDhST1c1gEe3zSZQUSoNLNA9VSY267Im04cR4BwlV6irj3WC9Yj
	 qFKQ/pCzsZA49skhIMqdG+9CKvW9ghF84OjNhQ3s=
Date: Wed, 7 Oct 2020 18:07:44 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
cc: "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>, 
    "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>, 
    "vicooodin@gmail.com" <vicooodin@gmail.com>, 
    "julien@xen.org" <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Artem Mygaiev <Artem_Mygaiev@epam.com>, 
    "committers@xenproject.org" <committers@xenproject.org>, 
    "jbeulich@suse.com" <jbeulich@suse.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Xen Coding style and clang-format
In-Reply-To: <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
Message-ID: <alpine.DEB.2.21.2010071750360.23978@sstabellini-ThinkPad-T480s>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>  <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>  <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>  <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>  <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1082716634-1602119265=:23978"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1082716634-1602119265=:23978
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 7 Oct 2020, Anastasiia Lukianenko wrote:
> On Thu, 2020-10-01 at 10:06 +0000, George Dunlap wrote:
> > > On Oct 1, 2020, at 10:06 AM, Anastasiia Lukianenko <
> > > Anastasiia_Lukianenko@epam.com> wrote:
> > > 
> > > Hi,
> > > 
> > > On Wed, 2020-09-30 at 10:24 +0000, George Dunlap wrote:
> > > > > On Sep 30, 2020, at 10:57 AM, Jan Beulich <jbeulich@suse.com>
> > > > > wrote:
> > > > > 
> > > > > On 30.09.2020 11:18, Anastasiia Lukianenko wrote:
> > > > > > I would like to know your opinion on the following coding
> > > > > > style
> > > > > > cases.
> > > > > > Which option do you think is correct?
> > > > > > 1) Function prototype when the string length is longer than
> > > > > > the
> > > > > > allowed
> > > > > > one
> > > > > > -static int __init
> > > > > > -acpi_parse_gic_cpu_interface(struct acpi_subtable_header
> > > > > > *header,
> > > > > > -                             const unsigned long end)
> > > > > > +static int __init acpi_parse_gic_cpu_interface(
> > > > > > +    struct acpi_subtable_header *header, const unsigned long
> > > > > > end)
> > > > > 
> > > > > Both variants are deemed valid style, I think (same also goes
> > > > > for
> > > > > function calls with this same problem). In fact you mix two
> > > > > different style aspects together (placement of parameter
> > > > > declarations and placement of return type etc) - for each
> > > > > individually both forms are deemed acceptable, I think.
> > > > 
> > > > If we’re going to have a tool go through and report (correct?)
> > > > all
> > > > these coding style things, it’s an opportunity to think if we
> > > > want to
> > > > add new coding style requirements (or change existing
> > > > requirements).
> > > > 
> > > 
> > > I am ready to discuss new requirements and implement them in rules
> > > of
> > > the Xen Coding style checker.
> > 
> > Thank you. :-)  But what I meant was: Right now we don’t require one
> > approach or the other for this specific instance.  Do we want to
> > choose one?
> > 
> > I think in this case it makes sense to do the easiest thing.  If it’s
> > easy to make the current tool accept both styles, let’s just do that
> > for now.  If the tool currently forces you to choose one of the two
> > styles, let’s choose one.
> > 
> >  -George
> 
> While studying the Xen checker and the Clang-Format Style Options in
> detail, I found that the tool is unfortunately not flexible enough to
> let the author independently choose the formatting style in the
> situations I described in my last letter. For example, the define code
> style:
> -#define ALLREGS \
> -    C(r0, r0_usr);   C(r1, r1_usr);   C(r2, r2_usr);   C(r3, r3_usr);   \
> -    C(cpsr, cpsr)
> +#define ALLREGS            \
> +    C(r0, r0_usr);         \
> +    C(r1, r1_usr);         \
> +    C(r2, r2_usr);         \
> There are also some inconsistencies between what the tool produces and
> what is written in the Xen coding style rules. For example, the
> comment format:
> -    /* PC should be always a multiple of 4, as Xen is using ARM instruction set */
> +    /* PC should be always a multiple of 4, as Xen is using ARM instruction set
> +     */
> I would like to draw your attention to the fact that the comment is
> rewrapped this way because the line length exceeds the allowed
> maximum. The ReflowComments option is responsible for this format. It
> can be turned off, but then the result will be:
> ReflowComments=false:
> /* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with
> plenty of information */
> 
> ReflowComments=true:
> /* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with
> plenty of
>  * information */

To me, the principal goal of the tool is to identify code style
violations. Suggesting how to fix a violation is an added bonus but not
strictly necessary.

So, I think we definitely want the tool to report the following line as
an error, because the line is too long:

/* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with plenty of information */

The suggestion on how to fix it is less important. Do we need to set
ReflowComments=true if we want the tool to report the line as
erroneous? I take it that the answer is "yes"?
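
For concreteness, a minimal .clang-format fragment along these lines
might look as follows. The option names are taken from the upstream
clang-format documentation; the values are only a sketch, not an agreed
Xen configuration:

```yaml
# Sketch only -- not an agreed Xen configuration.
BasedOnStyle: LLVM
ColumnLimit: 79                      # flag lines longer than the Xen limit
ReflowComments: true                 # needed so over-long comments are reported
IndentWidth: 4
UseTab: Never
SpacesInConditionalStatement: true   # if ( tmp == 0UL ), switch ( ... )
ForEachMacros: ['for_each_vcpu']     # format these like control statements
```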


> So I want to know whether the community is ready to add new formatting
> options and edit old ones. Below I give examples of the corrections
> the checker currently makes (the first variant in each case is the
> existing code and the second is what the checker produces). If they
> fit the standards, I can document them in the coding style. If not, I
> will try to configure the checker. Either way, the idea is that we
> need to choose one option that will be considered correct.
>
> 1) Function prototype when the string length is longer than the allowed
> -static int __init
> -acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
> -                             const unsigned long end)
> +static int __init acpi_parse_gic_cpu_interface(
> +    struct acpi_subtable_header *header, const unsigned long end)
> 2) Wrapping an operation to a new line when the string length is longer
> than the allowed
> -    status = acpi_get_table(ACPI_SIG_SPCR, 0,
> -                            (struct acpi_table_header **)&spcr);
> +    status =
> +        acpi_get_table(ACPI_SIG_SPCR, 0, (struct acpi_table_header
> **)&spcr);
> 3) Space after brackets
> -    return ((char *) base + offset);
> +    return ((char *)base + offset);
> 4) Spaces in brackets in switch condition
> -    switch ( domctl->cmd )
> +    switch (domctl->cmd)
> 5) Spaces in brackets in operation
> -    imm = ( insn >> BRANCH_INSN_IMM_SHIFT ) & BRANCH_INSN_IMM_MASK;
> +    imm = (insn >> BRANCH_INSN_IMM_SHIFT) & BRANCH_INSN_IMM_MASK;
> 6) Spaces in brackets in return
> -        return ( !sym->name[2] || sym->name[2] == '.' );
> +        return (!sym->name[2] || sym->name[2] == '.');
> 7) Space after sizeof
> -    clean_and_invalidate_dcache_va_range(new_ptr, sizeof (*new_ptr) *
> len);
> +    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr) *
> len);
> 8) Spaces before comment if it’s on the same line
> -    case R_ARM_MOVT_ABS: /* S + A */
> +    case R_ARM_MOVT_ABS:    /* S + A */
> 
> -    if ( tmp == 0UL )       /* Are any bits set? */
> -        return result + size;   /* Nope. */
> +    if ( tmp == 0UL )         /* Are any bits set? */
> +        return result + size; /* Nope. */
> 
> 9) Space after for_each_vcpu
> -        for_each_vcpu(d, v)
> +        for_each_vcpu (d, v)
> 10) Spaces in declaration
> -    union hsr hsr = { .bits = regs->hsr };
> +    union hsr hsr = {.bits = regs->hsr};

None of these points are particularly problematic to me. I think that
some of them are good to have anyway, like 3) and 8). Some others are
not great, in particular 1) and 2), and I would prefer to keep the
current coding style for those, but I'd be certainly happy to make those
changes anyway if we get a good code style checker in exchange :-)
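
To make the preferred forms in 3)-8) above concrete, here is a small
compilable C fragment written in that spacing style. The macro names
and values are invented purely so the snippet builds on its own; they
are not Xen's real definitions:

```c
/* Invented values, only so that this fragment compiles standalone. */
#define BRANCH_INSN_IMM_SHIFT 5
#define BRANCH_INSN_IMM_MASK  0xffffUL

static unsigned long decode_imm(unsigned long insn)
{
    unsigned long imm;

    /* 5) no spaces inside plain expression parentheses */
    imm = (insn >> BRANCH_INSN_IMM_SHIFT) & BRANCH_INSN_IMM_MASK;

    if ( imm == 0UL )         /* 4) spaces inside control-statement parens,
                                 8) trailing comment on the same line */
        return sizeof(imm);   /* 7) no space after sizeof */

    /* 6) no spaces just inside the parentheses of a return expression */
    return (imm & BRANCH_INSN_IMM_MASK);
}
```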
--8323329-1082716634-1602119265=:23978--


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 01:34:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 01:34:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.3853.11457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQKpI-0006Ok-9x; Thu, 08 Oct 2020 01:34:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 3853.11457; Thu, 08 Oct 2020 01:34:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQKpI-0006Oc-4i; Thu, 08 Oct 2020 01:34:36 +0000
Received: by outflank-mailman (input) for mailman id 3853;
 Thu, 08 Oct 2020 01:34:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQKpG-0006O4-Gq
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 01:34:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ccdec72-0212-40c5-9e88-590eb886662c;
 Thu, 08 Oct 2020 01:34:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQKp7-0005BN-2s; Thu, 08 Oct 2020 01:34:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQKp6-0003Ft-NL; Thu, 08 Oct 2020 01:34:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQKp6-0000BZ-Mp; Thu, 08 Oct 2020 01:34:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQKpG-0006O4-Gq
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 01:34:34 +0000
X-Inumbo-ID: 7ccdec72-0212-40c5-9e88-590eb886662c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7ccdec72-0212-40c5-9e88-590eb886662c;
	Thu, 08 Oct 2020 01:34:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7uNnG/uCvifbJN8T8hdKnL7CI00pw0Yehi9yCmNxYuU=; b=zdvvs8u3aCGAflb8okMYAfdJGm
	n+w3HwHlPNxDQG4BHHX8fCHigXzTwMwNQGqLIzwnqoVwcSYrBfP39u7Z0Z1YSu0xJdinvuP8vktL4
	UfDS+vzQAGqzf7fq9On24Lx7NXbyDNilLbDzYCXp+Z2a7Kpnr2gKt99tVPg5z8cUJgP4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQKp7-0005BN-2s; Thu, 08 Oct 2020 01:34:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQKp6-0003Ft-NL; Thu, 08 Oct 2020 01:34:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQKp6-0000BZ-Mp; Thu, 08 Oct 2020 01:34:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155514-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 155514: FAIL
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:<job status>:broken:regression
    xen-4.10-testing:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    xen-4.10-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:guest-saverestore.2:fail:heisenbug
    xen-4.10-testing:test-arm64-arm64-xl:guest-start/debian.repeat:fail:heisenbug
    xen-4.10-testing:test-xtf-amd64-amd64-2:xtf/test-hvm64-lbr-tsx-vmentry:fail:heisenbug
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=1719f79a0efd36d15837c51982173dd1c287dced
X-Osstest-Versions-That:
    xen=93be943e7d759015bd5db41a48f6dce58e580d5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 08 Oct 2020 01:34:24 +0000

flight 155514 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155514/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx    <job status>                 broken  in 155496

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   7 xen-boot         fail in 155496 pass in 155514
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 15 guest-saverestore.2 fail in 155496 pass in 155514
 test-arm64-arm64-xl 16 guest-start/debian.repeat fail in 155496 pass in 155514
 test-xtf-amd64-amd64-2   54 xtf/test-hvm64-lbr-tsx-vmentry fail pass in 155496

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151728
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 151728
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               fail   never pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  1719f79a0efd36d15837c51982173dd1c287dced
baseline version:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a

Last test of basis   151728  2020-07-08 01:17:09 Z   92 days
Failing since        154621  2020-09-22 16:07:00 Z   15 days   22 attempts
Testing same since   155362  2020-10-03 03:17:48 Z    4 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-thunderx broken

Not pushing.

(No revision log; it would be 368 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 03:59:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 03:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4255.11471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQN4i-0004Fk-5r; Thu, 08 Oct 2020 03:58:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4255.11471; Thu, 08 Oct 2020 03:58:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQN4i-0004Fd-2Z; Thu, 08 Oct 2020 03:58:40 +0000
Received: by outflank-mailman (input) for mailman id 4255;
 Thu, 08 Oct 2020 03:58:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQN4h-0004FB-8J
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 03:58:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2a73c35-9319-4f92-b092-b6477cb9c1f6;
 Thu, 08 Oct 2020 03:58:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQN4Y-00005A-MB; Thu, 08 Oct 2020 03:58:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQN4Y-0000m5-Ec; Thu, 08 Oct 2020 03:58:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQN4X-0001uz-I1; Thu, 08 Oct 2020 03:58:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQN4h-0004FB-8J
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 03:58:39 +0000
X-Inumbo-ID: b2a73c35-9319-4f92-b092-b6477cb9c1f6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b2a73c35-9319-4f92-b092-b6477cb9c1f6;
	Thu, 08 Oct 2020 03:58:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Y+LHqgzN0cWJUnqfo5Jd/REZAtbwpGPX+NsM4qmt+0s=; b=gP4Ms2GIUGBY2d3n5zvlwbJv3w
	2tAwnJjk4APDQvNNl8pxJQgRo/MacaD3Eml61RJ2fD8tin+JFLWSWhOEsNylgzx/RL3BOHaKZqk7W
	2acsuiBTgMYp/Ktzcrq3OIdfYfEeb7aXNK5LM0j5751NJTjOyx4NTZjZBD4SohliXpZM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155516-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155516: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=c85fb28b6f999db9928b841f63f1beeb3074eeca
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 08 Oct 2020 03:58:29 +0000

flight 155516 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155516/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair          8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair          9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  6 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  6 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  9 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  7 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      7 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                c85fb28b6f999db9928b841f63f1beeb3074eeca
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   68 days
Failing since        152366  2020-08-01 20:49:34 Z   67 days  113 attempts
Testing same since   155516  2020-10-07 09:30:00 Z    0 days    1 attempts

------------------------------------------------------------
2499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 337618 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 05:36:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 05:36:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4260.11485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQOab-00057p-9A; Thu, 08 Oct 2020 05:35:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4260.11485; Thu, 08 Oct 2020 05:35:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQOab-00057i-5p; Thu, 08 Oct 2020 05:35:41 +0000
Received: by outflank-mailman (input) for mailman id 4260;
 Thu, 08 Oct 2020 05:35:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQOaZ-00057A-8o
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 05:35:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cda5a42e-2179-4fca-a41a-3914ee404939;
 Thu, 08 Oct 2020 05:35:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQOaQ-0002cr-HD; Thu, 08 Oct 2020 05:35:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQOaQ-0005aw-7t; Thu, 08 Oct 2020 05:35:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQOaQ-0007Rz-74; Thu, 08 Oct 2020 05:35:30 +0000
X-Inumbo-ID: cda5a42e-2179-4fca-a41a-3914ee404939
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EwrxFBoPnRqiYly2LdN9p4/mxAfFfupIpHRWjNkozsw=; b=ZtiA8mpx2pInhlkO2TP5Qb7nM3
	r+ieTa82iLEIe1bP9feQTh4167fNokS1X08eaBIi+aUAKwOp8VYUN1DKV4mAdv2gBS7ntXA3N7vy7
	9fDBrT6n5zvSG/yfiMPlujhlX5x0XOOfJ2pGQSkHDGnJyJE9Yq572fXKL1sJLwCQ3N/k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155518-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155518: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f2687fdb7571a444b5af3509574b659d35ddd601
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 08 Oct 2020 05:35:30 +0000

flight 155518 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155518/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 11 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 11 guest-start    fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                f2687fdb7571a444b5af3509574b659d35ddd601
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   48 days
Failing since        152659  2020-08-21 14:07:39 Z   47 days   79 attempts
Testing same since   155518  2020-10-07 13:11:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 40281 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 07:49:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 07:49:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4263.11499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQQgA-00007W-Vy; Thu, 08 Oct 2020 07:49:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4263.11499; Thu, 08 Oct 2020 07:49:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQQgA-00007P-SS; Thu, 08 Oct 2020 07:49:34 +0000
Received: by outflank-mailman (input) for mailman id 4263;
 Thu, 08 Oct 2020 07:49:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uUpI=DP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kQQg9-00007K-7g
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 07:49:33 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b65c7d19-1663-43c0-90c1-6041c147e048;
 Thu, 08 Oct 2020 07:49:29 +0000 (UTC)
X-Inumbo-ID: b65c7d19-1663-43c0-90c1-6041c147e048
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602143369;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=8oXHE9yaD0ZDiGDH+pTLdVL5S7SMWQLtXkvi2k8KzIw=;
  b=Dq5kVikpptVB3gBbMfzgHGdqn0Z3iVixVm0h3Xl9LOZQ5ib1L0M/X8ju
   lcoYhQ/aPXgHCuCclUsSFI8MubFDEHH4kL6fIda0nw/lKfF/UnvpeIB0p
   zwEpinqcbxSQjc0DjE1ZXm1RCfrsmbvYFmYtERbChegS5R6KTzs3xy4cH
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ZXhjVTZXIdzLastKb3c60KsYO24CBlJcj8xrt1SJH6B2+4y2Z2g24dVydqT7JeuoxgwuJsDCvc
 hSSAWGuc6f8WFfNYEx4id86xfzFs0pNpBzzsnGylSE+LFKGyxnmzx8eO5Iawew3Zg3MofGhu6/
 hxpvhERqywhpYtrWSMojqViyida2vxj0K/SMSBTsiH+K+qwPU0ACTZ3B7KgmPxWY0puRtVmF5E
 nqkiunFJvR4CKtfp0OWbfg4NkXarJVH5oBNLI0zRP4OMrB2825dZNHpTZCrqIEZsr5wNg0Gufk
 P24=
X-SBRS: None
X-MesageID: 29580987
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,350,1596513600"; 
   d="scan'208";a="29580987"
Date: Thu, 8 Oct 2020 09:49:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/ucode: Trivial further cleanup
Message-ID: <20201008074920.GI19254@Air-de-Roger>
References: <20201007180120.27203-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201007180120.27203-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 07, 2020 at 07:01:20PM +0100, Andrew Cooper wrote:
>  * Drop unused include in private.h.
>  * Use explicit width integers for Intel header fields.
>  * Adjust comment to better describe the extended header.
>  * Drop unnecessary __packed attribute for AMD header.
>  * Switch mc_patch_data_id to being uint16_t, which is how it is more commonly
>    referred to.
>  * Fix types and style.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> ---
>  xen/arch/x86/cpu/microcode/amd.c     | 10 +++++-----
>  xen/arch/x86/cpu/microcode/intel.c   | 34 +++++++++++++++++-----------------
>  xen/arch/x86/cpu/microcode/private.h |  2 --
>  3 files changed, 22 insertions(+), 24 deletions(-)
> 
> diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
> index cd532321e8..e913232067 100644
> --- a/xen/arch/x86/cpu/microcode/amd.c
> +++ b/xen/arch/x86/cpu/microcode/amd.c
> @@ -24,7 +24,7 @@
>  
>  #define pr_debug(x...) ((void)0)
>  
> -struct __packed equiv_cpu_entry {
> +struct equiv_cpu_entry {
>      uint32_t installed_cpu;
>      uint32_t fixed_errata_mask;
>      uint32_t fixed_errata_compare;
> @@ -35,7 +35,7 @@ struct __packed equiv_cpu_entry {
>  struct microcode_patch {
>      uint32_t data_code;
>      uint32_t patch_id;
> -    uint8_t  mc_patch_data_id[2];
> +    uint16_t mc_patch_data_id;
>      uint8_t  mc_patch_data_len;

I think you could also drop the mc_patch_ prefixes from a couple of
fields in this structure, since they serve no purpose AFAICT.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 07:58:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 07:58:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4267.11511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQQog-00016G-Pn; Thu, 08 Oct 2020 07:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4267.11511; Thu, 08 Oct 2020 07:58:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQQog-000169-MO; Thu, 08 Oct 2020 07:58:22 +0000
Received: by outflank-mailman (input) for mailman id 4267;
 Thu, 08 Oct 2020 07:58:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KIyQ=DP=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQQoe-000160-Kh
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 07:58:20 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.86]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f23411a2-beaf-4321-a7a3-c7f523e38c88;
 Thu, 08 Oct 2020 07:58:19 +0000 (UTC)
Received: from AM3PR07CA0060.eurprd07.prod.outlook.com (2603:10a6:207:4::18)
 by VI1PR08MB3855.eurprd08.prod.outlook.com (2603:10a6:803:bb::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.22; Thu, 8 Oct
 2020 07:58:14 +0000
Received: from AM5EUR03FT056.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:207:4:cafe::f5) by AM3PR07CA0060.outlook.office365.com
 (2603:10a6:207:4::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.11 via Frontend
 Transport; Thu, 8 Oct 2020 07:58:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT056.mail.protection.outlook.com (10.152.17.224) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Thu, 8 Oct 2020 07:58:14 +0000
Received: ("Tessian outbound bac899b43a54:v64");
 Thu, 08 Oct 2020 07:58:14 +0000
Received: from 9f2014325a99.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 485C651D-D151-4279-B33D-744A119EFBC2.1; 
 Thu, 08 Oct 2020 07:58:08 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9f2014325a99.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 08 Oct 2020 07:58:08 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5371.eurprd08.prod.outlook.com (2603:10a6:10:114::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Thu, 8 Oct
 2020 07:58:06 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Thu, 8 Oct 2020
 07:58:06 +0000
X-Inumbo-ID: f23411a2-beaf-4321-a7a3-c7f523e38c88
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/7nTMjrQoVtqeugW7ZobCJYhA6M53QYDjDuRHtwZfPI=;
 b=lMdDsIp5f6daYd7Ygg0lFP1eYpLcmy6TG+lxAn2My7xCl0ShrE39Lzn0aXfnRSTgHMPb0J2RJshBNk+BsPHB9hL58Qs+IyfFOQqDai3cEEeI1k/EJOIaWewHNIhmYXWoyawEw4rw5cqRl7ih/Oqp4xY+ROzM0EUdPpCZRxYSsUk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 09d607fd8e780236
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AlAqV7qgXaAY8n1RyA5vN4CLfaH1b0ZpJVxhwh65+wia6l5Zdi530AvIRKbkcM9gp582UOTvYFW6LL5xDZC3UEq+KzN/nfROHVKLVw7M5D9cxSDi3j7FuY6db3C19fVF5LRbrnOj3ecMc7yXPwIOT+xtv9KoiphL8EbBqK8eMb5kPBMI+raW3JW3VIphi7/r85Ys3dLF2QKoR6vLNdqT3InVKpTJxgazTnxK/Qoek/CEn4wIGd8sWXrNfLBzdgRVBEjHE3b2vYOpOf1KZUbNB6fBHPcJmaDG9qSp7/9QJ9Q29fWkoKnURl6F5mVM8p5H8ITpW7IxqZYPeb/jxUeNDw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/7nTMjrQoVtqeugW7ZobCJYhA6M53QYDjDuRHtwZfPI=;
 b=DoVUVyE6GDlEnOuKh8KZ/ZtrmEw9BLv61o/+WKG538FTsMFYhhC7FWzCxI7P24zUpKak6jajUPcXuuE82gGr4pa7Z/7k6aGDuVpNlv/rzp1tgBDBdBZEhaXbAImghiAgwx7GgOgPWuq9CUdNWsUz1X8wIw0UMh6n/OK2cKo0lg3Bv79o6Xws3jeZuVxF8RIi5CnTl8+rMW9fq1hEKJgoetNuMSyEEsXMCHOmDYPFmXn0wdbUs05oyHbyCBi+zem732+n7iULBsmlwNYqIvq4hNB8XuMHpHFWADZEGSAh+XwekbQGjcarDVuP9amZ0DJ4haFYd+IDmaLNp8s5PjD+uA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/7nTMjrQoVtqeugW7ZobCJYhA6M53QYDjDuRHtwZfPI=;
 b=lMdDsIp5f6daYd7Ygg0lFP1eYpLcmy6TG+lxAn2My7xCl0ShrE39Lzn0aXfnRSTgHMPb0J2RJshBNk+BsPHB9hL58Qs+IyfFOQqDai3cEEeI1k/EJOIaWewHNIhmYXWoyawEw4rw5cqRl7ih/Oqp4xY+ROzM0EUdPpCZRxYSsUk=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5371.eurprd08.prod.outlook.com (2603:10a6:10:114::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Thu, 8 Oct
 2020 07:58:06 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.023; Thu, 8 Oct 2020
 07:58:06 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, "julien@xen.org"
	<julien@xen.org>, Stefano Stabellini <stefano.stabellini@xilinx.com>,
	"roman@zededa.com" <roman@zededa.com>
Subject: Re: [PATCH v3] xen/rpi4: implement watchdog-based reset
Thread-Topic: [PATCH v3] xen/rpi4: implement watchdog-based reset
Thread-Index: AQHWnPqc7v70BAMN4EaAQA/1huLhsamNV4yA
Date: Thu, 8 Oct 2020 07:58:06 +0000
Message-ID: <1A694341-33AC-41E1-B216-2D3E1A6C45B4@arm.com>
References: <20201007223813.1638-1-sstabellini@kernel.org>
In-Reply-To: <20201007223813.1638-1-sstabellini@kernel.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 11a0d7da-0bdf-4971-9ae9-08d86b5fe8c2
x-ms-traffictypediagnostic: DB8PR08MB5371:|VI1PR08MB3855:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB38551617B25094C5200CD4F99D0B0@VI1PR08MB3855.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2276;OLM:2276;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 VGPN2COUEvoV9gmIPD0CUPrdzEm7zTtGOi+Y6sis0vYBu0JWiAJOo7nqAuS6GwZDKoo+dM0E9ZsaPqpb5fgX89XG11gHVaYex7P4qDUIoxCNlfwBRDpFSxpTbfCnXlybr/XuB9WgOFXKo7K8zg7p33t7FWXAwY690SI2yhzgBSgrtJBVfFluin/7QWxxFH/cHHTMlvii9HIoqp7P5XtZ8r5CuZfxxLYYoNyOKhAkWtIdtG3Ff9VUB9f9Ro53PAV3mizFR41CZrrImcDMshofF6NH8uuROAvEG1GJyluqIaS/Ru0lP+lM3JGnQkzPABgGvUvCnwFCM/MwpUrF0uxlxQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(366004)(396003)(376002)(39850400004)(316002)(86362001)(6512007)(54906003)(8936002)(83380400001)(6916009)(33656002)(4326008)(5660300002)(66446008)(8676002)(66946007)(66556008)(64756008)(36756003)(66476007)(91956017)(2906002)(6486002)(76116006)(53546011)(71200400001)(2616005)(478600001)(6506007)(186003)(26005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 jteXyPQCgcSh53DigzNKCGTJwKm+gA1t+vIemc2AlI8o0XItDFZ0MZyfeAXuNPRkfHQZm+iU61h1vgXpSGQc93yU7d4rUhftXkdozhP4egnYx1/zfh8CYsV3cRNGD036/sFccK6+/CcCh7vRn9izDjtL4x8da9FZ4Zf1PD7GHqB0RC6HX+J9DkfdaSwbwiHJc1KL99Irmi/DcMeoqaCASaNfKL6RI7CapAs/5EzIgMNcuBNr0p94wAirXmdJjLOIHpJ9ghH+2ZkTjHcASwluBc25DLh47z+NpXxKXeBiH8xnybmIrJCzWF5q0Jo2QKDEVB+C2kW4RuyTKjdHue2fUlkRHGOTdHfhhcwcPdyNJnBMjjqfIvUT60IGF0m5xW+fwRjRN6nQgHJLTanOeHCDEB1yvdFLTUOf5uNdK8JdxYghFdhB5h8p8h3XuDj7GEaQst8ErFypqph5khN2k2vgRqWvChUBHdtXzb7iuxj/6BnQYLO+zzg/tV/U77peqB6tsFudjjgsp+DyL+m6n9lAZLfMdg5ywkF5R075SMP9617FWkjhx5XMqlrEOjEZeoDLMpKeLIuKxFqd/zZF2KqdrWODkU6oMpJroSCjNMYheaBlL1OoHJhyUSKaB2XIdua5dPyLwoJA0Ccx1QyiEN19cw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <842F47F688C5DA4E96EC8B062E928851@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5371
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	47c2ba04-88c9-41d5-d2d1-08d86b5fe3f2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CTk3HFO52C68t1gKOVwLr4gA6t0xPgwJ7jF9pVMwgi4yNPR0N5rHUv6Pz0Ojs8SWMI59toLmcx1ZH/pe0wswLiaLfFK5HJym1aVhhHo7NBDQ+0w4VYFd60IPhQees/UPxsxMJBkjVpKQxtenkfREaDNJx0ohI7c0QiRjgW8U9OnsSQhbuT82ptx36bpp624aiExk3Q70/MduKvUSclpQnwQl9Kwd2dFg5R7mcT1dx4uEVCAE/py1bGQd2+g/NbtetWu1xOlK5EIpuIo0NLUbNMszJuRaUxJ2D077eul5nHF8bQJgNvMhSB7TxEsfUMu5EdlWP5mPuYgaPd1qTtxgfM+UOg4Q18/H3KMPksI69Hm7M4c3KMWKW3tU0M1OCaaQaHZWqRfcfGTZWtucqFZpiQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(39850400004)(396003)(46966005)(86362001)(83380400001)(47076004)(478600001)(82740400003)(6486002)(54906003)(4326008)(107886003)(6862004)(6512007)(82310400003)(33656002)(8936002)(5660300002)(186003)(36906005)(81166007)(316002)(53546011)(2906002)(2616005)(36756003)(6506007)(336012)(70206006)(70586007)(26005)(356005)(8676002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 08 Oct 2020 07:58:14.3458
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 11a0d7da-0bdf-4971-9ae9-08d86b5fe8c2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT056.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3855



> On 7 Oct 2020, at 23:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> The preferred method to reboot RPi4 is PSCI. If it is not available,
> touching the watchdog is required to be able to reboot the board.
> 
> The implementation is based on
> drivers/watchdog/bcm2835_wdt.c:__bcm2835_restart in Linux v5.9-rc7.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Acked-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Maybe add a printk if the reset was not successful?

Cheers
Bertrand

> CC: roman@zededa.com
> ---
> Changes in v3:
> - fix typo in commit message
> - dprintk -> printk
> ---
> xen/arch/arm/platforms/brcm-raspberry-pi.c | 61 ++++++++++++++++++++++
> 1 file changed, 61 insertions(+)
> 
> diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
> index f5ae58a7d5..811b40b1a6 100644
> --- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
> +++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
> @@ -17,6 +17,10 @@
>  * GNU General Public License for more details.
>  */
> 
> +#include <xen/delay.h>
> +#include <xen/mm.h>
> +#include <xen/vmap.h>
> +#include <asm/io.h>
> #include <asm/platform.h>
> 
> static const char *const rpi4_dt_compat[] __initconst =
> @@ -37,12 +41,69 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
>      * The aux peripheral also shares a page with the aux UART.
>      */
>     DT_MATCH_COMPATIBLE("brcm,bcm2835-aux"),
> +    /* Special device used for rebooting */
> +    DT_MATCH_COMPATIBLE("brcm,bcm2835-pm"),
>     { /* sentinel */ },
> };
> 
> +
> +#define PM_PASSWORD                 0x5a000000
> +#define PM_RSTC                     0x1c
> +#define PM_WDOG                     0x24
> +#define PM_RSTC_WRCFG_FULL_RESET    0x00000020
> +#define PM_RSTC_WRCFG_CLR           0xffffffcf
> +
> +static void __iomem *rpi4_map_watchdog(void)
> +{
> +    void __iomem *base;
> +    struct dt_device_node *node;
> +    paddr_t start, len;
> +    int ret;
> +
> +    node = dt_find_compatible_node(NULL, NULL, "brcm,bcm2835-pm");
> +    if ( !node )
> +        return NULL;
> +
> +    ret = dt_device_get_address(node, 0, &start, &len);
> +    if ( ret )
> +    {
> +        printk("Cannot read watchdog register address\n");
> +        return NULL;
> +    }
> +
> +    base = ioremap_nocache(start & PAGE_MASK, PAGE_SIZE);
> +    if ( !base )
> +    {
> +        printk("Unable to map watchdog register!\n");
> +        return NULL;
> +    }
> +
> +    return base;
> +}
> +
> +static void rpi4_reset(void)
> +{
> +    uint32_t val;
> +    void __iomem *base = rpi4_map_watchdog();
> +
> +    if ( !base )
> +        return;
> +
> +    /* use a timeout of 10 ticks (~150us) */
> +    writel(10 | PM_PASSWORD, base + PM_WDOG);
> +    val = readl(base + PM_RSTC);
> +    val &= PM_RSTC_WRCFG_CLR;
> +    val |= PM_PASSWORD | PM_RSTC_WRCFG_FULL_RESET;
> +    writel(val, base + PM_RSTC);
> +
> +    /* No sleeping, possibly atomic. */
> +    mdelay(1);
> +}
> +
> PLATFORM_START(rpi4, "Raspberry Pi 4")
>     .compatible     = rpi4_dt_compat,
>     .blacklist_dev  = rpi4_blacklist_dev,
> +    .reset = rpi4_reset,
>     .dma_bitsize    = 30,
> PLATFORM_END
> 
> -- 
> 2.17.1
> 
> 
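
The password-protected register sequence in rpi4_reset() above can be sketched in plain host C. This is only a model of the bit manipulation: the constants come from the patch, while the pm_regs struct is a stand-in for the real ioremap'd MMIO page, so no actual hardware access happens here.

```c
#include <assert.h>
#include <stdint.h>

#define PM_PASSWORD              0x5a000000u
#define PM_RSTC_WRCFG_FULL_RESET 0x00000020u
#define PM_RSTC_WRCFG_CLR        0xffffffcfu

/* Stand-in for the two watchdog registers behind the ioremap'd page. */
struct pm_regs {
    uint32_t rstc;  /* PM_RSTC: reset control */
    uint32_t wdog;  /* PM_WDOG: watchdog timeout */
};

/* Mirror of the sequence in rpi4_reset(): every write must carry the
 * 0x5a "password" in the top byte, or the hardware ignores it. */
static void model_reset(struct pm_regs *r, uint32_t ticks)
{
    uint32_t val;

    r->wdog = ticks | PM_PASSWORD;        /* arm the watchdog */
    val = r->rstc;
    val &= PM_RSTC_WRCFG_CLR;             /* clear the WRCFG field */
    val |= PM_PASSWORD | PM_RSTC_WRCFG_FULL_RESET;
    r->rstc = val;                        /* request a full reset */
}
```

Once PM_RSTC is written this way, the armed watchdog expires after the given number of ticks and the board performs a full reset.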



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 08:28:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 08:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4295.11522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQRHa-0004Yo-Ic; Thu, 08 Oct 2020 08:28:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4295.11522; Thu, 08 Oct 2020 08:28:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQRHa-0004Yh-Fj; Thu, 08 Oct 2020 08:28:14 +0000
Received: by outflank-mailman (input) for mailman id 4295;
 Thu, 08 Oct 2020 08:28:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDYt=DP=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
 id 1kQRHY-0004Yc-TP
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 08:28:12 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0eb4a397-8aae-4a6b-a070-667b36fc9c2b;
 Thu, 08 Oct 2020 08:28:12 +0000 (UTC)
Received: from devnote2 (NE2965lan1.rev.em-net.ne.jp [210.141.244.193])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id F33D921531;
 Thu,  8 Oct 2020 08:28:08 +0000 (UTC)
X-Inumbo-ID: 0eb4a397-8aae-4a6b-a070-667b36fc9c2b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602145691;
	bh=D6jzGaurWgaF827f14jVCgmDfsqRsfIA9V7sMLM5Qr8=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=CEx8DnTPNLxslo3VTV38YIJsDXmbj7dSuXipNfQ/bVkdcKbIye6h0DjDBF/8L9jcn
	 YenNSYNeOEgGgy0B+HUS0qIX829IiQgbB5Lfoy3MTQk9Txsa8G+IH4Ne0jhchtUpsM
	 u5xIANxOv0MRhjamkpFTXdlbA1kCOvm/QRL8uw8c=
Date: Thu, 8 Oct 2020 17:28:06 +0900
From: Masami Hiramatsu <mhiramat@kernel.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, Alex =?UTF-8?B?QmVubsOpZQ==?=
 <alex.bennee@linaro.org>, takahiro.akashi@linaro.org, jgross@suse.com,
 boris.ostrovsky@oracle.com
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
Message-Id: <20201008172806.1591ebb538946c5ee93d372a@kernel.org>
In-Reply-To: <alpine.DEB.2.21.2010061040350.10908@sstabellini-ThinkPad-T480s>
References: <160190516028.40160.9733543991325671759.stgit@devnote2>
	<b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org>
	<alpine.DEB.2.21.2010051526550.10908@sstabellini-ThinkPad-T480s>
	<20201006114058.b93839b1b8f35a470874572b@kernel.org>
	<alpine.DEB.2.21.2010061040350.10908@sstabellini-ThinkPad-T480s>
X-Mailer: Sylpheed 3.7.0 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 6 Oct 2020 10:56:52 -0700 (PDT)
Stefano Stabellini <sstabellini@kernel.org> wrote:

> On Tue, 6 Oct 2020, Masami Hiramatsu wrote:
> > On Mon, 5 Oct 2020 18:13:22 -0700 (PDT)
> > Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > > On Mon, 5 Oct 2020, Julien Grall wrote:
> > > > Hi Masami,
> > > > 
> > > > On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > > > > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > > > > address conversion.
> > > > > 
> > > > > In xen_starting_cpu(), per-cpu xen_vcpu_info address is converted
> > > > > to gfn by virt_to_gfn() macro. However, since the virt_to_gfn(v)
> > > > > assumes the given virtual address is in contiguous kernel memory
> > > > > area, it can not convert the per-cpu memory if it is allocated on
> > > > > vmalloc area (depends on CONFIG_SMP).
> > > > 
> > > > Are you sure about this? I have a .config with CONFIG_SMP=y where the per-cpu
> > > > region for CPU0 is allocated outside of vmalloc area.
> > > > 
> > > > However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING was
> > > > enabled.
> > > 
> > > I cannot reproduce the issue with defconfig, but I can with Masami's
> > > kconfig.
> > > 
> > > If I disable just CONFIG_NUMA_BALANCING from Masami's kconfig, the
> > > problem still appears.
> > > 
> > > If I disable CONFIG_NUMA from Masami's kconfig, it works, which is
> > > strange because CONFIG_NUMA is enabled in defconfig, and defconfig
> > > works.
> > 
> > Hmm, strange, because when I disabled CONFIG_NUMA_BALANCING, the issue
> > disappeared.
> > 
> > --- config-5.9.0-rc4+   2020-10-06 11:36:20.620107129 +0900
> > +++ config-5.9.0-rc4+.buggy     2020-10-05 21:04:40.369936461 +0900
> > @@ -131,7 +131,8 @@
> >  CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
> >  CONFIG_CC_HAS_INT128=y
> >  CONFIG_ARCH_SUPPORTS_INT128=y
> > -# CONFIG_NUMA_BALANCING is not set
> > +CONFIG_NUMA_BALANCING=y
> > +CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
> >  CONFIG_CGROUPS=y
> >  CONFIG_PAGE_COUNTER=y
> >  CONFIG_MEMCG=y
> > 
> > So buggy config just enabled NUMA_BALANCING (and default enabled)
> 
> Yeah but both NUMA and NUMA_BALANCING are enabled in defconfig which
> works fine...

Hmm, I found that xen_vcpu_info is allocated on km if Dom0 has enough
memory. In my environment, when Xen passed 2GB of RAM to Dom0, it was
allocated at a kernel linear-mapped address, but with 1GB of RAM it
ended up in the vmalloc area.
As far as I can see, the percpu allocator takes memory from the 2nd
chunk when the default memory size is small.

I built a kernel with patch [1] and booted the same kernel with
different dom0_mem options and "trace_event=percpu:*" on the kernel
command line, and got the following logs.

Boot with 4GB:
          <idle>-0     [000] ....     0.543208: percpu_create_chunk: base_addr=000000005d5ad71c
 [...]
         systemd-1     [000] ....     0.568931: percpu_alloc_percpu: reserved=0 is_atomic=0 size=48 align=8 base_addr=00000000fa92a086 off=32672 ptr=000000008da0b73d
         systemd-1     [000] ....     0.568938: xen_guest_init: Xen: alloc xen_vcpu_info ffff800011003fa0 id=000000008da0b73d
         systemd-1     [000] ....     0.586635: xen_starting_cpu: Xen: xen_vcpu_info ffff800011003fa0, vcpup ffff00092f4ebfa0 per_cpu_offset[0] ffff80091e4e8000

(NOTE: base_addr and ptr are shown as hashed ids, not actual addresses,
 because of the "%p" printk format.)

In this log we can see that xen_vcpu_info is NOT allocated on the
new chunk (which is the 2nd chunk). As you can see, vcpup is a
kernel linear address in this case, because it came from the 1st,
kernel-embedded chunk.


Boot with 1GB
          <idle>-0     [000] ....     0.516221: percpu_create_chunk: base_addr=000000008456b989
[...]
         systemd-1     [000] ....     0.541982: percpu_alloc_percpu: reserved=0 is_atomic=0 size=48 align=8 base_addr=000000008456b989 off=17920 ptr=00000000c247612d
         systemd-1     [000] ....     0.541989: xen_guest_init: Xen: alloc xen_vcpu_info 7dff951f0600 id=00000000c247612d
         systemd-1     [000] ....     0.559690: xen_starting_cpu: Xen: xen_vcpu_info 7dff951f0600, vcpup fffffdffbfcdc600 per_cpu_offset[0] ffff80002aaec000

On the other hand, when we boot Dom0 with 1GB of memory, xen_vcpu_info
is allocated on the new chunk (the id of base_addr is the same).
Since the data of the new chunk is allocated in the vmalloc area, vcpup
points to a vmalloc address.

So the bug does not seem to depend on the kconfig, but rather on where
the percpu memory is allocated from.
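
The distinction described above can be illustrated with a small userspace model. This is a sketch only: the address windows and the one-entry vmalloc "page table" are hypothetical stand-ins for the kernel's linear map and vmalloc area, chosen just to show why a __pa()-style subtraction is only valid for the first, linear-mapped chunk.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical address windows standing in for the kernel linear map. */
#define LINEAR_BASE  0x40000000u
#define LINEAR_SIZE  0x10000000u
#define PHYS_BASE    0x80000000u

/* __pa()-style translation: a fixed offset, only valid inside the
 * linear map. Applied to any other address it returns garbage. */
static uint32_t model_pa(uint32_t va)
{
    return va - LINEAR_BASE + PHYS_BASE;
}

static int in_linear_map(uint32_t va)
{
    return va >= LINEAR_BASE && va < LINEAR_BASE + LINEAR_SIZE;
}

/* per_cpu_ptr_to_phys()-style translation: check which backing the
 * address came from. A vmalloc-backed chunk needs a page-table walk,
 * modeled here as a single known mapping (vmalloc_va -> vmalloc_pa). */
static uint32_t model_percpu_to_phys(uint32_t va,
                                     uint32_t vmalloc_va,
                                     uint32_t vmalloc_pa)
{
    if (in_linear_map(va))
        return model_pa(va);                 /* 1st, embedded chunk */
    return vmalloc_pa + (va - vmalloc_va);   /* later vmalloc-backed chunk */
}
```

With dom0_mem large enough, xen_vcpu_info lands in the first window and both translations agree; with a small dom0_mem it lands in the second, where only the chunk-aware translation is correct.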

> [...]
> 
> > > The fix is fine for me. I tested it and it works. We need to remove the
> > > "Fixes:" line from the commit message. Ideally, replacing it with a
> > > reference to what is the source of the problem.
> > 
> > OK, as I said, it seems commit 9a9ab3cc00dc ("xen/arm: SMP support") has
> > introduced the per-cpu code. So note it instead of Fixes tag.
> 
> ...and commit 9a9ab3cc00dc was already present in 5.8 which also works
> fine with your kconfig. Something else changed in 5.9 causing this
> breakage as a side effect. Commit 9a9ab3cc00dc is there since 2013, I
> think it is OK -- this patch is fixing something else.

I think commit 9a9ab3cc00dc is theoretically wrong because it uses
__pa() on a percpu address. That is not guaranteed to work, according
to the comment on per_cpu_ptr_to_phys() below.

----
 * percpu allocator has special setup for the first chunk, which currently
 * supports either embedding in linear address space or vmalloc mapping,
 * and, from the second one, the backing allocator (currently either vm or
 * km) provides translation.
 *
 * The addr can be translated simply without checking if it falls into the
 * first chunk. But the current code reflects better how percpu allocator
 * actually works, and the verification can discover both bugs in percpu
 * allocator itself and per_cpu_ptr_to_phys() callers. So we keep current
 * code.
----

So we must use per_cpu_ptr_to_phys() instead of the __pa() macro for
percpu addresses. That's why I noted that this will fix commit 9a9ab3cc00dc.

Thank you,

[1]
diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index a6ab3689b2f4..1786b8b51a60 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -149,6 +149,7 @@ static int xen_starting_cpu(unsigned int cpu)
 
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
+	trace_printk("Xen: xen_vcpu_info %lx, vcpup %lx per_cpu_offset[%d] %lx\n", (unsigned long)xen_vcpu_info, (unsigned long)vcpup, cpu, per_cpu_offset(cpu));
 
 	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
@@ -356,6 +357,7 @@ static int __init xen_guest_init(void)
 	xen_vcpu_info = alloc_percpu(struct vcpu_info);
 	if (xen_vcpu_info == NULL)
 		return -ENOMEM;
+	trace_printk("Xen: alloc xen_vcpu_info %lx id=%p\n", (unsigned long)xen_vcpu_info, xen_vcpu_info);
 
 	/* Direct vCPU id mapping for ARM guests. */
 	for_each_possible_cpu(cpu)

-- 
Masami Hiramatsu <mhiramat@kernel.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 08:43:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 08:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4361.11553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQRWA-0006YX-Ar; Thu, 08 Oct 2020 08:43:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4361.11553; Thu, 08 Oct 2020 08:43:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQRWA-0006YQ-7Y; Thu, 08 Oct 2020 08:43:18 +0000
Received: by outflank-mailman (input) for mailman id 4361;
 Thu, 08 Oct 2020 08:43:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uUpI=DP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kQRW9-0006YK-E6
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 08:43:17 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa018b75-b839-4962-849f-ef07e73e739a;
 Thu, 08 Oct 2020 08:43:14 +0000 (UTC)
X-Inumbo-ID: aa018b75-b839-4962-849f-ef07e73e739a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602146595;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=hmfaHDqcevzm6TqwFYb1N7ROXvDqhIsFeaUBR+owTgQ=;
  b=Lcgljvm0EehTjS+UgyJWGwCcCnhe3Qk/Am/VjWDXJWkE/Q1UMYJXDHUj
   20atrpLqqeS1BU7k5hiSBA7CGkdKOab0zfrFaAOHTspZJ2sFwWRBWg2WQ
   Sx78JPrFryxDj9FgOMdaeBURgurHcj4h9su949yMF3Hmax+6WfIYkNv+N
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: T00na4NxHkPyH2ebQCtGS8s6trKNpm+AH5Ip44klzrTCr1jYCODX9BhpN80rTcAPeE8rVKFtJd
 LGoOPZzk7VoFShWFTNYzMWGpYjvdnJXIei2/+jHjU36PCI5GiNnWfFbCOA1rkzHwraBlMqXMkh
 2nKCVqmjLV2YGuy479FWHuMNqfS0slh0ydCepFFLXHQI+34ZGJOi5ubUYwej2D6vCFkHjxZFH9
 4LnPPTVUeAK3BEr3Cv3FreH8hjIzwOb1qBLU79cGUg3GC+uJPV5Yhii0875hN0Noc1fMhGqqlZ
 gHw=
X-SBRS: None
X-MesageID: 28888532
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,350,1596513600"; 
   d="scan'208";a="28888532"
Date: Thu, 8 Oct 2020 10:43:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
CC: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 1/2] xen/x86: add nmi continuation framework
Message-ID: <20201008084306.GJ19254@Air-de-Roger>
References: <20201007133011.18871-1-jgross@suse.com>
 <20201007133011.18871-2-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <20201007133011.18871-2-jgross@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 07, 2020 at 03:30:10PM +0200, Juergen Gross wrote:
> Actions in NMI context are rather limited as e.g. locking is rather
> fragile.
> 
> Add a generic framework to continue processing in softirq context after
> leaving NMI processing. This is working for NMIs happening in guest
> context as NMI exit handling will issue an IPI to itself in case a
> softirq is pending, resulting in the continuation running before the
> guest gets control again.

There's already a kind of NMI callback framework using nmi_callback.
I assume that moving this existing callback code to execute in
softirq context is not suitable because that would be past the
execution of an iret instruction?

Might be worth mentioning in the commit message.

> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/arch/x86/traps.c      | 37 +++++++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/nmi.h |  8 +++++++-
>  xen/include/xen/softirq.h |  5 ++++-
>  3 files changed, 48 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index bc5b8f8ea3..f433fe5acb 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1799,6 +1799,42 @@ void unset_nmi_callback(void)
>      nmi_callback = dummy_nmi_callback;
>  }
>  
> +static DEFINE_PER_CPU(void (*)(void *), nmi_cont_func);
> +static DEFINE_PER_CPU(void *, nmi_cont_par);
> +
> +static void nmi_cont_softirq(void)
> +{
> +    unsigned int cpu = smp_processor_id();
> +    void (*func)(void *par) = per_cpu(nmi_cont_func, cpu);
> +    void *par = per_cpu(nmi_cont_par, cpu);

I think you can use this_cpu here and below and avoid having to worry
about the processor id at all? It would also be harder to mess up and
assign an NMI callback to the wrong CPU.

> +
> +    /* Reads must be done before following write (local cpu ordering only). */
> +    barrier();

Is this added because of the usage of RELOC_HIDE by per_cpu?

> +
> +    per_cpu(nmi_cont_func, cpu) = NULL;

Since we are using RELOC_HIDE, doesn't it imply that the compiler
cannot reorder this, since it has no idea what variable we are
accessing?

> +
> +    if ( func )
> +        func(par);
> +}
> +
> +int set_nmi_continuation(void (*func)(void *par), void *par)
> +{
> +    unsigned int cpu = smp_processor_id();
> +
> +    if ( per_cpu(nmi_cont_func, cpu) )
> +    {
> +        printk("Trying to set NMI continuation while still one active!\n");
> +        return -EBUSY;
> +    }
> +
> +    per_cpu(nmi_cont_func, cpu) = func;
> +    per_cpu(nmi_cont_par, cpu) = par;
> +
> +    raise_softirq(NMI_CONT_SOFTIRQ);
> +
> +    return 0;
> +}
> +
>  void do_device_not_available(struct cpu_user_regs *regs)
>  {
>  #ifdef CONFIG_PV
> @@ -2132,6 +2168,7 @@ void __init trap_init(void)
>  
>      cpu_init();
>  
> +    open_softirq(NMI_CONT_SOFTIRQ, nmi_cont_softirq);
>      open_softirq(PCI_SERR_SOFTIRQ, pci_serr_softirq);
>  }
>  
> diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
> index a288f02a50..da40fb6599 100644
> --- a/xen/include/asm-x86/nmi.h
> +++ b/xen/include/asm-x86/nmi.h
> @@ -33,5 +33,11 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
>  void unset_nmi_callback(void);
>  
>  DECLARE_PER_CPU(unsigned int, nmi_count);
> - 
> +
> +/**
> + * set_nmi_continuation
> + *
> + * Schedule a function to be started in softirq context after NMI handling.
> + */
> +int set_nmi_continuation(void (*func)(void *par), void *par);

I would introduce a type for the nmi callback, as I think it's easier
than having to retype the prototype everywhere:

typedef void nmi_continuation_t(void *);
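
A minimal, self-contained sketch of how the suggested typedef would read against the patch's set/run pattern. The names here follow Roger's proposal and the quoted patch, not the committed Xen API, and the per-CPU storage is reduced to plain globals for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Suggested function-type typedef: declare the shape of an NMI
 * continuation once, instead of retyping the prototype everywhere. */
typedef void nmi_continuation_t(void *);

/* Stand-ins for the patch's per-CPU nmi_cont_func/nmi_cont_par. */
static nmi_continuation_t *pending_func;
static void *pending_par;

/* The setter's prototype now reads as "pointer to continuation". */
static int set_continuation(nmi_continuation_t *func, void *par)
{
    if (pending_func)
        return -1;          /* only one continuation at a time, as in the patch */
    pending_func = func;
    pending_par = par;
    return 0;
}

static void run_continuation(void)
{
    nmi_continuation_t *func = pending_func;
    void *par = pending_par;

    pending_func = NULL;    /* clear before calling, as the softirq handler does */
    if (func)
        func(par);
}

static int fired;
static void bump(void *par) { fired += *(int *)par; }
```

The typedef keeps set_nmi_continuation()'s declaration, the per-CPU variable, and any local copies all spelled the same way, so a signature change only needs editing in one place.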

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 08:59:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 08:59:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4419.11565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQRl5-0007ku-Q2; Thu, 08 Oct 2020 08:58:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4419.11565; Thu, 08 Oct 2020 08:58:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQRl5-0007kn-N3; Thu, 08 Oct 2020 08:58:43 +0000
Received: by outflank-mailman (input) for mailman id 4419;
 Thu, 08 Oct 2020 08:58:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bIXB=DP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQRl5-0007ki-3A
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 08:58:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6169c46-1754-4e5f-add2-634e6f4fd085;
 Thu, 08 Oct 2020 08:58:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 41913AC3C;
 Thu,  8 Oct 2020 08:58:41 +0000 (UTC)
X-Inumbo-ID: c6169c46-1754-4e5f-add2-634e6f4fd085
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602147521;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ErcYGOP7VowXW+iOjYtY9CQWr866S44UL9TcmCYuLp4=;
	b=mbTM8OOixJoIfvTRqX/5E9PT92N8QpE+dzhneFw9wsDXP1jEaKujh7cLDDoHbih7Rj6ts7
	O8yTLHd8ieqkH4Z7COkNrZTK/To2EBMvmXgI3v1H6/bp6ZCin5bI+p9itvDyr7BKUWyfu1
	0oo7E2BZudxe2TptB8zMTQD5nIAmq7M=
Subject: Re: [PATCH v2 1/2] xen/x86: add nmi continuation framework
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201007133011.18871-1-jgross@suse.com>
 <20201007133011.18871-2-jgross@suse.com>
 <20201008084306.GJ19254@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e774053e-1f84-fd84-bafc-a2f254d70286@suse.com>
Date: Thu, 8 Oct 2020 10:58:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201008084306.GJ19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.10.20 10:43, Roger Pau Monné wrote:
> On Wed, Oct 07, 2020 at 03:30:10PM +0200, Juergen Gross wrote:
>> Actions in NMI context are rather limited, as e.g. locking is
>> fragile.
>>
>> Add a generic framework to continue processing in softirq context after
>> leaving NMI processing. This works for NMIs happening in guest
>> context, as NMI exit handling will issue an IPI to itself in case a
>> softirq is pending, resulting in the continuation running before the
>> guest gets control again.
> 
> There's already kind of an NMI callback framework using nmi_callback.
> I assume that moving this existing callback code to be executed in
> softirq context is not suitable because that would be past the
> execution of an iret instruction?
> 
> Might be worth mentioning in the commit message.
> 
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   xen/arch/x86/traps.c      | 37 +++++++++++++++++++++++++++++++++++++
>>   xen/include/asm-x86/nmi.h |  8 +++++++-
>>   xen/include/xen/softirq.h |  5 ++++-
>>   3 files changed, 48 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>> index bc5b8f8ea3..f433fe5acb 100644
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -1799,6 +1799,42 @@ void unset_nmi_callback(void)
>>       nmi_callback = dummy_nmi_callback;
>>   }
>>   
>> +static DEFINE_PER_CPU(void (*)(void *), nmi_cont_func);
>> +static DEFINE_PER_CPU(void *, nmi_cont_par);
>> +
>> +static void nmi_cont_softirq(void)
>> +{
>> +    unsigned int cpu = smp_processor_id();
>> +    void (*func)(void *par) = per_cpu(nmi_cont_func, cpu);
>> +    void *par = per_cpu(nmi_cont_par, cpu);
> 
> I think you can use this_cpu here and below, and avoid having to worry
> about the processor id at all? Also less likely to mess up and assign
> a NMI callback to a wrong CPU.

this_cpu() and smp_processor_id() are both based on get_cpu_info(), so
multiple uses of this_cpu() are less efficient than per_cpu() with the
cpu id cached in a local variable (regarding code size and speed, at
least I think so).
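For illustration, the pattern can be sketched in plain user-space C. All names here are hypothetical stand-ins (arrays indexed by CPU id instead of real Xen per-CPU variables, and a stubbed smp_processor_id()), not the hypervisor code; the point is only that the CPU id is derived once and then reused as a cheap index:

```c
/* User-space sketch: per-CPU variables modelled as arrays indexed by a
 * CPU id that is fetched once, instead of re-deriving it per access as
 * repeated this_cpu() uses would. */
#include <assert.h>
#include <stddef.h>

#define NR_CPUS 4

static void (*nmi_cont_func[NR_CPUS])(void *);
static void *nmi_cont_par[NR_CPUS];

/* Stand-in for smp_processor_id(); Xen derives it from get_cpu_info(). */
static unsigned int smp_processor_id(void) { return 2; }

static int handled_val = -1;
static void demo_cont(void *par) { handled_val = *(int *)par; }

static void nmi_cont_softirq(void)
{
    unsigned int cpu = smp_processor_id();      /* fetch the id once ... */
    void (*func)(void *) = nmi_cont_func[cpu];  /* ... then index cheaply */
    void *par = nmi_cont_par[cpu];

    nmi_cont_func[cpu] = NULL;                  /* mark the slot free */

    if (func)
        func(par);
}
```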

> 
>> +
>> +    /* Reads must be done before following write (local cpu ordering only). */
>> +    barrier();
> 
> Is this added because of the usage of RELOC_HIDE by per_cpu?

This is added because reordering of the loads by the compiler must be
avoided, as an NMI using those fields again might happen at any time.
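Sketched below with a user-space stand-in: the barrier is the usual empty asm with a "memory" clobber, which emits no instructions and gives no CPU-level ordering, but forbids the compiler from moving memory accesses across it. The slot variables are hypothetical placeholders for the per-CPU nmi_cont_func/nmi_cont_par fields:

```c
#include <assert.h>

/* Compiler-only barrier, as in Linux/Xen. */
#define barrier() __asm__ __volatile__("" ::: "memory")

static int func_slot = 1;   /* stands in for nmi_cont_func */
static int par_slot = 2;    /* stands in for nmi_cont_par */
static int loaded_func, loaded_par;

static void consume_slots(void)
{
    /* Both loads must be complete before the slot is marked free ... */
    loaded_func = func_slot;
    loaded_par = par_slot;

    barrier();   /* ... so an NMI refilling the slot cannot race them */

    func_slot = 0;   /* slot free: an NMI may install a new continuation */
}
```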

> 
>> +
>> +    per_cpu(nmi_cont_func, cpu) = NULL;
> 
> Since we are using RELOC_HIDE, doesn't it imply that the compiler
> cannot reorder this, since it has no idea what variable we are
> accessing?

I'd rather be safe here than relying on per_cpu() internals.

> 
>> +
>> +    if ( func )
>> +        func(par);
>> +}
>> +
>> +int set_nmi_continuation(void (*func)(void *par), void *par)
>> +{
>> +    unsigned int cpu = smp_processor_id();
>> +
>> +    if ( per_cpu(nmi_cont_func, cpu) )
>> +    {
>> +        printk("Trying to set NMI continuation while still one active!\n");
>> +        return -EBUSY;
>> +    }
>> +
>> +    per_cpu(nmi_cont_func, cpu) = func;
>> +    per_cpu(nmi_cont_par, cpu) = par;
>> +
>> +    raise_softirq(NMI_CONT_SOFTIRQ);
>> +
>> +    return 0;
>> +}
>> +
>>   void do_device_not_available(struct cpu_user_regs *regs)
>>   {
>>   #ifdef CONFIG_PV
>> @@ -2132,6 +2168,7 @@ void __init trap_init(void)
>>   
>>       cpu_init();
>>   
>> +    open_softirq(NMI_CONT_SOFTIRQ, nmi_cont_softirq);
>>       open_softirq(PCI_SERR_SOFTIRQ, pci_serr_softirq);
>>   }
>>   
>> diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
>> index a288f02a50..da40fb6599 100644
>> --- a/xen/include/asm-x86/nmi.h
>> +++ b/xen/include/asm-x86/nmi.h
>> @@ -33,5 +33,11 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
>>   void unset_nmi_callback(void);
>>   
>>   DECLARE_PER_CPU(unsigned int, nmi_count);
>> -
>> +
>> +/**
>> + * set_nmi_continuation
>> + *
>> + * Schedule a function to be started in softirq context after NMI handling.
>> + */
>> +int set_nmi_continuation(void (*func)(void *par), void *par);
> 
> I would introduce a type for the nmi callback, as I think it's easier
> than having to retype the prototype everywhere:
> 
> typedef void nmi_continuation_t(void *);

Fine with me.
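A minimal sketch of what the typedef buys: the callback type is named once and the prototype no longer needs retyping. The single-slot storage below is a hypothetical stand-in for the per-CPU fields, not the hypervisor code:

```c
#include <assert.h>
#include <stddef.h>

/* Roger's suggested typedef for the continuation callback. */
typedef void nmi_continuation_t(void *par);

static nmi_continuation_t *pending_func;  /* hypothetical single slot */
static void *pending_par;

static int demo_hit;
static void demo_cont(void *par) { demo_hit = *(int *)par; }

static int set_nmi_continuation(nmi_continuation_t *func, void *par)
{
    if (pending_func)
        return -1;          /* -EBUSY in the hypervisor */
    pending_func = func;
    pending_par = par;
    return 0;
}

static void run_pending(void)
{
    nmi_continuation_t *func = pending_func;
    void *par = pending_par;

    pending_func = NULL;
    if (func)
        func(par);
}
```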


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 09:00:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 09:00:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4424.11577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQRmo-00009l-86; Thu, 08 Oct 2020 09:00:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4424.11577; Thu, 08 Oct 2020 09:00:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQRmo-00009e-54; Thu, 08 Oct 2020 09:00:30 +0000
Received: by outflank-mailman (input) for mailman id 4424;
 Thu, 08 Oct 2020 09:00:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kncA=DP=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kQRmm-00009Z-Ji
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 09:00:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a862eddb-08ff-419e-a193-bc21425eb368;
 Thu, 08 Oct 2020 09:00:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE3E3AF4D;
 Thu,  8 Oct 2020 09:00:25 +0000 (UTC)
X-Inumbo-ID: a862eddb-08ff-419e-a193-bc21425eb368
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
To: Daniel Vetter <daniel@ffwll.ch>,
 =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Cc: Dave Airlie <airlied@linux.ie>,
 Nouveau Dev <nouveau@lists.freedesktop.org>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Melissa Wen <melissa.srw@gmail.com>, Huang Rui <ray.huang@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Qiang Yu <yuq825@gmail.com>,
 Sam Ravnborg <sam@ravnborg.org>, Emil Velikov <emil.velikov@collabora.com>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Krzysztof Kozlowski <krzk@kernel.org>, Steven Price <steven.price@arm.com>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 Luben Tuikov <luben.tuikov@amd.com>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>, Hans de Goede <hdegoede@redhat.com>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 Sean Paul <sean@poorly.run>, apaneers@amd.com,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>,
 Kyungmin Park <kyungmin.park@samsung.com>,
 Qinglang Miao <miaoqinglang@huawei.com>, Kukjin Kim <kgene@kernel.org>,
 Alex Deucher <alexander.deucher@amd.com>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com>
 <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
 <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
 <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com>
 <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local>
 <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
 <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>
 <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de>
 <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com>
 <CAKMK7uEPn=q1J50koveE+b49r=SE0eh5nTrxWOVRN2grdyNPTA@mail.gmail.com>
From: Thomas Zimmermann <tzimmermann@suse.de>
Message-ID: <5c0dc0bf-b4ca-db84-708e-74a5b033018f@suse.de>
Date: Thu, 8 Oct 2020 11:00:21 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <CAKMK7uEPn=q1J50koveE+b49r=SE0eh5nTrxWOVRN2grdyNPTA@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="1RRRYtJ6APQmd6LeI1KMPbGtdio7fZVjb"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--1RRRYtJ6APQmd6LeI1KMPbGtdio7fZVjb
Content-Type: multipart/mixed; boundary="jDFbYBZf8kdKgUAVGHXxfA7MPXZztPeOO";
 protected-headers="v1"
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Daniel Vetter <daniel@ffwll.ch>,
 =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Cc: Dave Airlie <airlied@linux.ie>,
 Nouveau Dev <nouveau@lists.freedesktop.org>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Melissa Wen <melissa.srw@gmail.com>, Huang Rui <ray.huang@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Qiang Yu <yuq825@gmail.com>,
 Sam Ravnborg <sam@ravnborg.org>, Emil Velikov <emil.velikov@collabora.com>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Krzysztof Kozlowski <krzk@kernel.org>, Steven Price <steven.price@arm.com>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 Luben Tuikov <luben.tuikov@amd.com>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>, Ben Skeggs <bskeggs@redhat.com>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>, Hans de Goede <hdegoede@redhat.com>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 Sean Paul <sean@poorly.run>, apaneers@amd.com,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>,
 Kyungmin Park <kyungmin.park@samsung.com>,
 Qinglang Miao <miaoqinglang@huawei.com>, Kukjin Kim <kgene@kernel.org>,
 Alex Deucher <alexander.deucher@amd.com>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>
Message-ID: <5c0dc0bf-b4ca-db84-708e-74a5b033018f@suse.de>
Subject: Re: [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for
 type conversion
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-3-tzimmermann@suse.de>
 <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com>
 <2614314a-81f7-4722-c400-68d90e48e09a@suse.de>
 <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com>
 <07972ada-9135-3743-a86b-487f610c509f@suse.de>
 <b569b7e3-68f0-edcc-c8f4-170e9042d348@gmail.com>
 <20200930094712.GW438822@phenom.ffwll.local>
 <8479d0aa-3826-4f37-0109-55daca515793@amd.com>
 <CAKMK7uH0U36NG8w98i0x6HVGeogiwnYDRiKquLW-8znLa7-0yg@mail.gmail.com>
 <20201002095830.GH438822@phenom.ffwll.local>
 <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de>
 <CAKMK7uEu0vwiG9Uz0_Ysyus0ZAF-1HNxvPZjcG3xZS=gkKgJLw@mail.gmail.com>
 <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de>
 <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com>
 <CAKMK7uEPn=q1J50koveE+b49r=SE0eh5nTrxWOVRN2grdyNPTA@mail.gmail.com>
In-Reply-To: <CAKMK7uEPn=q1J50koveE+b49r=SE0eh5nTrxWOVRN2grdyNPTA@mail.gmail.com>

--jDFbYBZf8kdKgUAVGHXxfA7MPXZztPeOO
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

Hi

On 07.10.20 at 16:30, Daniel Vetter wrote:
> On Wed, Oct 7, 2020 at 3:25 PM Christian König <christian.koenig@amd.com> wrote:
>>
>> On 07.10.20 at 15:20, Thomas Zimmermann wrote:
>>> Hi
>>>
>>> On 07.10.20 at 15:10, Daniel Vetter wrote:
>>>> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>>>>> Hi
>>>>>
>>>>> On 02.10.20 at 11:58, Daniel Vetter wrote:
>>>>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote:
>>>>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian König
>>>>>>> <christian.koenig@amd.com> wrote:
>>>>>>>> On 30.09.20 at 11:47, Daniel Vetter wrote:
>>>>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian König wrote:
>>>>>>>>>> On 30.09.20 at 10:19, Thomas Zimmermann wrote:
>>>>>>>>>>> Hi
>>>>>>>>>>>
>>>>>>>>>>> On 30.09.20 at 10:05, Christian König wrote:
>>>>>>>>>>>> On 29.09.20 at 19:49, Thomas Zimmermann wrote:
>>>>>>>>>>>>> Hi Christian
>>>>>>>>>>>>>
>>>>>>>>>>>>> On 29.09.20 at 17:35, Christian König wrote:
>>>>>>>>>>>>>> On 29.09.20 at 17:14, Thomas Zimmermann wrote:
>>>>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location
>>>>>>>>>>>>>>> from an instance of TTM's kmap_obj and initializes struct dma_buf_map
>>>>>>>>>>>>>>> with these values. Helpful for TTM-based drivers.
>>>>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as
>>>>>>>>>>>>>> well.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Additional to that, which driver is going to use this?
>>>>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will
>>>>>>>>>>>>> retrieve the pointer via this function.
>>>>>>>>>>>>>
>>>>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but
>>>>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64
>>>>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list.
>>>>>>>>>>>> I should have asked which driver you try to fix here :)
>>>>>>>>>>>>
>>>>>>>>>>>> In this case just keep the function inside bochs and only fix it there.
>>>>>>>>>>>>
>>>>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM.
>>>>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM
>>>>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we
>>>>>>>>>>> have to duplicate the functionality in each of these drivers. Bochs
>>>>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly.
>>>>>>>>>> Ah, ok, can we have that then only in the VRAM helpers?
>>>>>>>>>>
>>>>>>>>>> Alternatively you could go ahead and use dma_buf_map in ttm_bo_kmap_obj
>>>>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK.
>>>>>>>>>>
>>>>>>>>>> What I want to avoid is to have another conversion function in TTM because
>>>>>>>>>> what happens here is that we already convert from ttm_bus_placement to
>>>>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map.
>>>>>>>>> Hm, I'm not really seeing how that helps with a gradual conversion of
>>>>>>>>> everything over to dma_buf_map and assorted helpers for access? There's
>>>>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to
>>>>>>>>> be able to convert it all in one go. An intermediate state with a bunch of
>>>>>>>>> conversions seems fairly unavoidable to me.
>>>>>>>> Fair enough. I would just have started bottom up and not top down.
>>>>>>>>
>>>>>>>> Anyway, feel free to go ahead with this approach as long as we can remove
>>>>>>>> the new function again when we clean that stuff up for good.
>>>>>>> Yeah, I guess bottom up would make more sense as a refactoring. But the
>>>>>>> main motivation to land this here is to fix the __mmio vs normal
>>>>>>> memory confusion in the fbdev emulation helpers for sparc (and
>>>>>>> anything else that needs this). Hence the top-down approach for
>>>>>>> rolling this out.
>>>>>> Ok, I started reviewing this a bit more in-depth, and I think this is a bit
>>>>>> too much of a detour.
>>>>>>
>>>>>> Looking through all the callers of ttm_bo_kmap, almost everyone maps the
>>>>>> entire object. Only vmwgfx maps less than that. Also, everyone just
>>>>>> immediately follows up with converting that full object map into a
>>>>>> pointer.
>>>>>>
>>>>>> So I think what we really want here is:
>>>>>> - new function
>>>>>>
>>>>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>>>>
>>>>>>    _vmap name since that's consistent with both dma_buf functions and
>>>>>>    what's usually used to implement this. Outside of the ttm world kmap
>>>>>>    usually just means single-page mappings using kmap() or its iomem
>>>>>>    sibling io_mapping_map*, so it's a rather confusing name for a function
>>>>>>    which usually is just used to set up a vmap of the entire buffer.
>>>>>>
>>>>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap
>>>>>>    functions for all ttm drivers. We should be able to make this fully
>>>>>>    generic because a) we now have dma_buf_map and b) drm_gem_object is
>>>>>>    embedded in the ttm_bo, so we can upcast for everyone who's both a ttm
>>>>>>    and gem driver.
>>>>>>
>>>>>>    This is maybe a good follow-up, since it should allow us to ditch quite
>>>>>>    a bit of the vram helper code for this more generic stuff. I also might
>>>>>>    have missed some special cases here, but from a quick look everything
>>>>>>    just pins the buffer to the current location and that's it.
>>>>>>
>>>>>>    Also this obviously requires Christian's generic ttm_bo_pin rework
>>>>>>    first.
>>>>>>
>>>>>> - roll the above out to drivers.
>>>>>>
>>>>>> Christian/Thomas, thoughts on this?
>>>>> I agree on the goals, but what is the immediate objective here?
>>>>>
>>>>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj
>>>>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has
>>>>> more internal state than struct dma_buf_map, so they are not easily
>>>>> convertible either. What you propose seems to require a reimplementation
>>>>> of the existing ttm_bo_kmap() code. That is its own patch series.
>>>>>
>>>>> I'd rather go with some variant of the existing patch and add
>>>>> ttm_bo_vmap() in a follow-up.
>>>> ttm_bo_vmap would simply wrap what you currently open-code as
>>>> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would
>>>> be a much later step. Why do you think adding ttm_bo_vmap is not
>>>> possible?
>>> The calls to ttm_bo_kmap/_kunmap() require an instance of struct
>>> ttm_bo_kmap_obj that is stored in each driver's private bo structure
>>> (e.g., struct drm_gem_vram_object, struct radeon_bo, etc.). When I made
>>> patch 3, I flirted with the idea of unifying the drivers' _vmap code in
>>> a shared helper, but I couldn't find a simple way of doing it. That's
>>> why it's open-coded in the first place.
>
> Yeah we'd need a ttm_bo_vunmap I guess to make this work. Which
> shouldn't be more than a few lines, but maybe too much to do in this
> series.
>
>> Well, that makes kind of sense. Keep in mind that ttm_bo_kmap is
>> currently way too complicated.
>
> Yeah, simplifying this into a ttm_bo_vmap on one side, and a simple
> 1-page kmap helper on the other should help a lot.

I'm not too happy about the plan, but I'll send out something like this
in the next iteration.
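For readers following along, the shape of the dma_buf_map type under discussion can be sketched in plain user-space C. The struct layout mirrors the patch above, but the stubbed to_dma_buf_map() helper (standing in for ttm_kmap_obj_to_dma_buf_map() plus ttm_kmap_obj_virtual()) and the missing __iomem annotations are simplifications for illustration only:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* One structure carrying either a system-memory or an I/O-memory
 * pointer plus a flag, so callers never branch on is_iomem themselves. */
struct dma_buf_map {
    union {
        void *vaddr;        /* system memory */
        void *vaddr_iomem;  /* would be void __iomem * in the kernel */
    };
    bool is_iomem;
};

static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
    map->vaddr = vaddr;
    map->is_iomem = false;
}

static void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, void *vaddr)
{
    map->vaddr_iomem = vaddr;
    map->is_iomem = true;
}

static void dma_buf_map_clear(struct dma_buf_map *map)
{
    map->vaddr = NULL;
    map->is_iomem = false;
}

/* Stand-in for the conversion helper, with the kmap object reduced to
 * an (address, is_iomem) pair. */
static void to_dma_buf_map(void *vaddr, bool is_iomem, struct dma_buf_map *map)
{
    if (!vaddr)
        dma_buf_map_clear(map);
    else if (is_iomem)
        dma_buf_map_set_vaddr_iomem(map, vaddr);
    else
        dma_buf_map_set_vaddr(map, vaddr);
}
```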

Best regards
Thomas

> -Daniel
>
>>
>> Christian.
>>
>>>
>>> Best regards
>>> Thomas
>>>
>>>> -Daniel
>>>>
>>>>
>>>>> Best regards
>>>>> Thomas
>>>>>
>>>>>> I think for the immediate need of rolling this out for vram helpers and
>>>>>> fbdev code we should be able to do this, but just postpone the driver-wide
>>>>>> roll-out for now.
>>>>>>
>>>>>> Cheers, Daniel
>>>>>>
>>>>>>> -Daniel
>>>>>>>
>>>>>>>> Christian.
>>>>>>>>
>>>>>>>>> -Daniel
>>>>>>>>>
>>>>>>>>>> Thanks,
>>>>>>>>>> Christian.
>>>>>>>>>>
>>>>>>>>>>> Best regards
>>>>>>>>>>> Thomas
>>>>>>>>>>>
>>>>>>>>>>>> Regards,
>>>>>>>>>>>> Christian.
>>>>>>>>>>>>
>>>>>>>>>>>>> Best regards
>>>>>>>>>>>>> Thomas
>>>>>>>>>>>>>
>>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>> Christian.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>      include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++
>>>>>>>>>>>>>>>      include/linux/dma-buf-map.h  | 20 ++++++++++++++++++++
>>>>>>>>>>>>>>>      2 files changed, 44 insertions(+)
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644
>>>>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>>>>>>>>>>>>> @@ -34,6 +34,7 @@
>>>>>>>>>>>>>>>      #include <drm/drm_gem.h>
>>>>>>>>>>>>>>>      #include <drm/drm_hashtab.h>
>>>>>>>>>>>>>>>      #include <drm/drm_vma_manager.h>
>>>>>>>>>>>>>>> +#include <linux/dma-buf-map.h>
>>>>>>>>>>>>>>>      #include <linux/kref.h>
>>>>>>>>>>>>>>>      #include <linux/list.h>
>>>>>>>>>>>>>>>      #include <linux/wait.h>
>>>>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct
>>>>>>>>>>>>>>> ttm_bo_kmap_obj *map,
>>>>>>>>>>>>>>>          return map->virtual;
>>>>>>>>>>>>>>>      }
>>>>>>>>>>>>>>>      +/**
>>>>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap.
>>>>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory
>>>>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL.
>>>>>>>>>>>>>>> + */
>>>>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj
>>>>>>>>>>>>>>> *kmap,
>>>>>>>>>>>>>>> +                           struct dma_buf_map *map)
>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>> +    bool is_iomem;
>>>>>>>>>>>>>>> +    void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem);
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>> +    if (!vaddr)
>>>>>>>>>>>>>>> +        dma_buf_map_clear(map);
>>>>>>>>>>>>>>> +    else if (is_iomem)
>>>>>>>>>>>>>>> +        dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
>>>>>>>>>>>>>>> +    else
>>>>>>>>>>>>>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>      /**
>>>>>>>>>>>>>>>       * ttm_bo_kmap
>>>>>>>>>>>>>>>       *
>>>>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h
>>>>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h
>>>>>>>>>>>>>>> @@ -45,6 +45,12 @@
>>>>>>>>>>>>>>>       *
>>>>>>>>>>>>>>>       *    dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
>>>>>>>>>>>>>>>       *
>>>>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + * .. code-block:: c
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + *    dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>>       * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>>>>>>>>>>>>       * dma_buf_map_is_null().
>>>>>>>>>>>>>>>       *
>>>>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>>>>>>>>>>>>> dma_buf_map *map, void *vaddr)
>>>>>>>>>>>>>>>          map->is_iomem = false;
>>>>>>>>>>>>>>>      }
>>>>>>>>>>>>>>>      +/**
>>>>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>>>>>>>>>>>>> an address in I/O memory
>>>>>>>>>>>>>>> + * @map:        The dma-buf mapping structure
>>>>>>>>>>>>>>> + * @vaddr_iomem:    An I/O-memory address
>>>>>>>>>>>>>>> + *
>>>>>>>>>>>>>>> + * Sets the address and the I/O-memory flag.
>>>>>>>>>>>>>>> + */
>>>>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>>>>>>>>>>>>> +                           void __iomem *vaddr_iomem)
>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>> +    map->vaddr_iomem = vaddr_iomem;
>>>>>>>>>>>>>>> +    map->is_iomem = true;
>>>>>>>>>>>>>>> +}
>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>      /**
>>>>>>>>>>>>>>>       * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>>>>>>>>>>>>>> for equality
>>>>>>>>>>>>>>>       * @lhs:    The dma-buf mapping structure
>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>> dri-devel mailing list
>>>>>>>>>>>>>> dri-devel@lists.freedesktop.org
>>>>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/dri-devel
>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>> amd-gfx mailing list
>>>>>>>>>>>>> amd-gfx@lists.freedesktop.org
>>>>>>>>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>>>>>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Daniel Vetter
>>>>>>> Software Engineer, Intel Corporation
>>>>>>> http://blog.ffwll.ch
>>>>> --
>>>>> Thomas Zimmermann
>>>>> Graphics Driver Developer
>>>>> SUSE Software Solutions Germany GmbH
>>>>> Maxfeldstr. 5, 90409 Nürnberg, Germany
>>>>> (HRB 36809, AG Nürnberg)
>>>>> Geschäftsführer: Felix Imendörffer
>>>>>
>>>>
>>
>
>

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


--jDFbYBZf8kdKgUAVGHXxfA7MPXZztPeOO--

--1RRRYtJ6APQmd6LeI1KMPbGtdio7fZVjb
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQFIBAEBCAAyFiEEchf7rIzpz2NEoWjlaA3BHVMLeiMFAl9+1SUUHHR6aW1tZXJt
YW5uQHN1c2UuZGUACgkQaA3BHVMLeiNwWAf/Z6wurJbb9OHAKU2vC2UR7c+Pe4lu
/ZDIyVbUQzqpuG6kUt56Ei9bxPOuvbzMeimdkcpxCZHJn+R/yC2/0V4kzm8vHdV3
CPCdGSSWYEw52VeWQQLuibonRRB0BrCrCBm3pjfxh6S1cYhmDIHP7+wpTuNGLgE9
7MFOn5/KrTbDEeZdbrbgOBRbGPvUWdVAIeqLxWKBxveDEVjlvonQ3HxmFa8Gfjiv
bRSIkCJBGC1Idnddu8kE1hJKiftVE7aT/mOQeU4uGe+8laj9QvS9O3kZrWY2sfGX
KyENY1fBryoXsnEQQIcl3vy5ZELHlD0XHZnUf0kYnbjnXSY8mXdI2BSuMg==
=s7Qc
-----END PGP SIGNATURE-----

--1RRRYtJ6APQmd6LeI1KMPbGtdio7fZVjb--


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 09:25:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 09:25:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4443.11589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQSAs-000298-D7; Thu, 08 Oct 2020 09:25:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4443.11589; Thu, 08 Oct 2020 09:25:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQSAs-000291-A4; Thu, 08 Oct 2020 09:25:22 +0000
Received: by outflank-mailman (input) for mailman id 4443;
 Thu, 08 Oct 2020 09:25:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kncA=DP=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kQSAq-00028w-Bk
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 09:25:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 31e8bfc3-9192-4e93-bedb-25764a37ff11;
 Thu, 08 Oct 2020 09:25:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 522EEB237;
 Thu,  8 Oct 2020 09:25:15 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kncA=DP=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kQSAq-00028w-Bk
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 09:25:20 +0000
X-Inumbo-ID: 31e8bfc3-9192-4e93-bedb-25764a37ff11
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 31e8bfc3-9192-4e93-bedb-25764a37ff11;
	Thu, 08 Oct 2020 09:25:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 522EEB237;
	Thu,  8 Oct 2020 09:25:15 +0000 (UTC)
Subject: Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
To: Daniel Vetter <daniel@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
 Maxime Ripard <mripard@kernel.org>, Dave Airlie <airlied@linux.ie>,
 Sam Ravnborg <sam@ravnborg.org>, Alex Deucher <alexander.deucher@amd.com>,
 =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Lucas Stach <l.stach@pengutronix.de>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 Christian Gmeiner <christian.gmeiner@gmail.com>,
 Inki Dae <inki.dae@samsung.com>, Joonyoung Shim <jy0922.shim@samsung.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>,
 Kyungmin Park <kyungmin.park@samsung.com>, Kukjin Kim <kgene@kernel.org>,
 Krzysztof Kozlowski <krzk@kernel.org>, Qiang Yu <yuq825@gmail.com>,
 Ben Skeggs <bskeggs@redhat.com>, Rob Herring <robh@kernel.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Steven Price <steven.price@arm.com>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Sandy Huang <hjc@rock-chips.com>, =?UTF-8?Q?Heiko_St=c3=bcbner?=
 <heiko@sntech.de>, Hans de Goede <hdegoede@redhat.com>,
 Sean Paul <sean@poorly.run>, "Anholt, Eric" <eric@anholt.net>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Huang Rui <ray.huang@amd.com>, Sumit Semwal <sumit.semwal@linaro.org>,
 Emil Velikov <emil.velikov@collabora.com>,
 Luben Tuikov <luben.tuikov@amd.com>, apaneers@amd.com,
 Linus Walleij <linus.walleij@linaro.org>, Melissa Wen
 <melissa.srw@gmail.com>, "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Qinglang Miao <miaoqinglang@huawei.com>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 lima@lists.freedesktop.org, Nouveau Dev <nouveau@lists.freedesktop.org>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-7-tzimmermann@suse.de>
 <20201002180500.GM438822@phenom.ffwll.local>
 <CAKMK7uFVHrqBh1sqQHR56vp2JS77XoCs232B5mkJXXpLhgLW8Q@mail.gmail.com>
From: Thomas Zimmermann <tzimmermann@suse.de>
Message-ID: <ffc4b2de-ff97-210f-0ae4-f2f85a27f59b@suse.de>
Date: Thu, 8 Oct 2020 11:25:13 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <CAKMK7uFVHrqBh1sqQHR56vp2JS77XoCs232B5mkJXXpLhgLW8Q@mail.gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="wSkQnhyu1s6mke5MwMEXxiMF0P61mkAgk"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--wSkQnhyu1s6mke5MwMEXxiMF0P61mkAgk
Content-Type: multipart/mixed; boundary="9SInwHdMlT9HJ8SWW0YhlKXr6GBVVNNex";
 protected-headers="v1"
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>,
 Maxime Ripard <mripard@kernel.org>, Dave Airlie <airlied@linux.ie>,
 Sam Ravnborg <sam@ravnborg.org>, Alex Deucher <alexander.deucher@amd.com>,
 =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Lucas Stach <l.stach@pengutronix.de>,
 Russell King <linux+etnaviv@armlinux.org.uk>,
 Christian Gmeiner <christian.gmeiner@gmail.com>,
 Inki Dae <inki.dae@samsung.com>, Joonyoung Shim <jy0922.shim@samsung.com>,
 Seung-Woo Kim <sw0312.kim@samsung.com>,
 Kyungmin Park <kyungmin.park@samsung.com>, Kukjin Kim <kgene@kernel.org>,
 Krzysztof Kozlowski <krzk@kernel.org>, Qiang Yu <yuq825@gmail.com>,
 Ben Skeggs <bskeggs@redhat.com>, Rob Herring <robh@kernel.org>,
 Tomeu Vizoso <tomeu.vizoso@collabora.com>,
 Steven Price <steven.price@arm.com>,
 Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>,
 Sandy Huang <hjc@rock-chips.com>, =?UTF-8?Q?Heiko_St=c3=bcbner?=
 <heiko@sntech.de>, Hans de Goede <hdegoede@redhat.com>,
 Sean Paul <sean@poorly.run>, "Anholt, Eric" <eric@anholt.net>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Huang Rui <ray.huang@amd.com>, Sumit Semwal <sumit.semwal@linaro.org>,
 Emil Velikov <emil.velikov@collabora.com>,
 Luben Tuikov <luben.tuikov@amd.com>, apaneers@amd.com,
 Linus Walleij <linus.walleij@linaro.org>, Melissa Wen
 <melissa.srw@gmail.com>, "Wilson, Chris" <chris@chris-wilson.co.uk>,
 Qinglang Miao <miaoqinglang@huawei.com>,
 dri-devel <dri-devel@lists.freedesktop.org>,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 "open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>,
 The etnaviv authors <etnaviv@lists.freedesktop.org>,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 linux-samsung-soc <linux-samsung-soc@vger.kernel.org>,
 lima@lists.freedesktop.org, Nouveau Dev <nouveau@lists.freedesktop.org>,
 "open list:DRM DRIVER FOR QXL VIRTUAL GPU"
 <spice-devel@lists.freedesktop.org>,
 "open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>,
 "moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>,
 "open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>,
 "moderated list:DMA BUFFER SHARING FRAMEWORK"
 <linaro-mm-sig@lists.linaro.org>
Message-ID: <ffc4b2de-ff97-210f-0ae4-f2f85a27f59b@suse.de>
Subject: Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
References: <20200929151437.19717-1-tzimmermann@suse.de>
 <20200929151437.19717-7-tzimmermann@suse.de>
 <20201002180500.GM438822@phenom.ffwll.local>
 <CAKMK7uFVHrqBh1sqQHR56vp2JS77XoCs232B5mkJXXpLhgLW8Q@mail.gmail.com>
In-Reply-To: <CAKMK7uFVHrqBh1sqQHR56vp2JS77XoCs232B5mkJXXpLhgLW8Q@mail.gmail.com>

--9SInwHdMlT9HJ8SWW0YhlKXr6GBVVNNex
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable

Hi

On 02.10.20 at 20:44, Daniel Vetter wrote:
> On Fri, Oct 2, 2020 at 8:05 PM Daniel Vetter <daniel@ffwll.ch> wrote:
>>
>> On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote:
>>> At least sparc64 requires I/O-specific access to framebuffers. This
>>> patch updates the fbdev console accordingly.
>>>
>>> For drivers with direct access to the framebuffer memory, the callback
>>> functions in struct fb_ops test for the type of memory and call the
>>> respective fb_sys_ or fb_cfb_ functions.
>>>
>>> For drivers that employ a shadow buffer, fbdev's blit function retrieves
>>> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
>>> interfaces to access the buffer.
>>>
>>> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
>>> I/O memory and avoid a HW exception. With the introduction of struct
>>> dma_buf_map, this is no longer required. The patch removes the
>>> respective code from both bochs and fbdev.
>>>
>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> Argh, I accidentally hit send before finishing this ...
>
>>> ---
>>>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>>>  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
>>>  include/drm/drm_mode_config.h     |  12 --
>>>  include/linux/dma-buf-map.h       |  72 ++++++++--
>>>  4 files changed, 265 insertions(+), 37 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
>>> index 13d0d04c4457..853081d186d5 100644
>>> --- a/drivers/gpu/drm/bochs/bochs_kms.c
>>> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
>>> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>>>       bochs->dev->mode_config.preferred_depth = 24;
>>>       bochs->dev->mode_config.prefer_shadow = 0;
>>>       bochs->dev->mode_config.prefer_shadow_fbdev = 1;
>>> -     bochs->dev->mode_config.fbdev_use_iomem = true;
>>>       bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>>>
>>>       bochs->dev->mode_config.funcs = &bochs_mode_funcs;
>>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
>>> index 343a292f2c7c..f345a314a437 100644
>>> --- a/drivers/gpu/drm/drm_fb_helper.c
>>> +++ b/drivers/gpu/drm/drm_fb_helper.c
>>> @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>>>  }
>>>
>>>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>>> -                                       struct drm_clip_rect *clip)
>>> +                                       struct drm_clip_rect *clip,
>>> +                                       struct dma_buf_map *dst)
>>>  {
>>>       struct drm_framebuffer *fb = fb_helper->fb;
>>>       unsigned int cpp = fb->format->cpp[0];
>>>       size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>>>       void *src = fb_helper->fbdev->screen_buffer + offset;
>>> -     void *dst = fb_helper->buffer->map.vaddr + offset;
>>>       size_t len = (clip->x2 - clip->x1) * cpp;
>>>       unsigned int y;
>>>
>>> -     for (y = clip->y1; y < clip->y2; y++) {
>>> -             if (!fb_helper->dev->mode_config.fbdev_use_iomem)
>>> -                     memcpy(dst, src, len);
>>> -             else
>>> -                     memcpy_toio((void __iomem *)dst, src, len);
>>> +     dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>>>
>>> +     for (y = clip->y1; y < clip->y2; y++) {
>>> +             dma_buf_map_memcpy_to(dst, src, len);
>>> +             dma_buf_map_incr(dst, fb->pitches[0]);
>>>               src += fb->pitches[0];
>>> -             dst += fb->pitches[0];
>>>       }
>>>  }
>>>
>>> @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>>>                       ret = drm_client_buffer_vmap(helper->buffer, &map);
>>>                       if (ret)
>>>                               return;
>>> -                     drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>>> +                     drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>>>               }
>>> +
>>>               if (helper->fb->funcs->dirty)
>>>                       helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>>>                                                &clip_copy, 1);
>>> @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
>>>  }
>>>  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>>>
>>> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
>>> +                                   size_t count, loff_t *ppos)
>>> +{
>>> +     unsigned long p = *ppos;
>>> +     u8 *dst;
>>> +     u8 __iomem *src;
>>> +     int c, err = 0;
>>> +     unsigned long total_size;
>>> +     unsigned long alloc_size;
>>> +     ssize_t ret = 0;
>>> +
>>> +     if (info->state != FBINFO_STATE_RUNNING)
>>> +             return -EPERM;
>>> +
>>> +     total_size = info->screen_size;
>>> +
>>> +     if (total_size == 0)
>>> +             total_size = info->fix.smem_len;
>>> +
>>> +     if (p >= total_size)
>>> +             return 0;
>>> +
>>> +     if (count >= total_size)
>>> +             count = total_size;
>>> +
>>> +     if (count + p > total_size)
>>> +             count = total_size - p;
>>> +
>>> +     src = (u8 __iomem *)(info->screen_base + p);
>>> +
>>> +     alloc_size = min(count, PAGE_SIZE);
>>> +
>>> +     dst = kmalloc(alloc_size, GFP_KERNEL);
>>> +     if (!dst)
>>> +             return -ENOMEM;
>>> +
>>> +     while (count) {
>>> +             c = min(count, alloc_size);
>>> +
>>> +             memcpy_fromio(dst, src, c);
>>> +             if (copy_to_user(buf, dst, c)) {
>>> +                     err = -EFAULT;
>>> +                     break;
>>> +             }
>>> +
>>> +             src += c;
>>> +             *ppos += c;
>>> +             buf += c;
>>> +             ret += c;
>>> +             count -= c;
>>> +     }
>>> +
>>> +     kfree(dst);
>>> +
>>> +     if (err)
>>> +             return err;
>>> +
>>> +     return ret;
>>> +}
>>> +
>>> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
>>> +                                    size_t count, loff_t *ppos)
>>> +{
>>> +     unsigned long p = *ppos;
>>> +     u8 *src;
>>> +     u8 __iomem *dst;
>>> +     int c, err = 0;
>>> +     unsigned long total_size;
>>> +     unsigned long alloc_size;
>>> +     ssize_t ret = 0;
>>> +
>>> +     if (info->state != FBINFO_STATE_RUNNING)
>>> +             return -EPERM;
>>> +
>>> +     total_size = info->screen_size;
>>> +
>>> +     if (total_size == 0)
>>> +             total_size = info->fix.smem_len;
>>> +
>>> +     if (p > total_size)
>>> +             return -EFBIG;
>>> +
>>> +     if (count > total_size) {
>>> +             err = -EFBIG;
>>> +             count = total_size;
>>> +     }
>>> +
>>> +     if (count + p > total_size) {
>>> +             /*
>>> +              * The framebuffer is too small. We do the
>>> +              * copy operation, but return an error code
>>> +              * afterwards. Taken from fbdev.
>>> +              */
>>> +             if (!err)
>>> +                     err = -ENOSPC;
>>> +             count = total_size - p;
>>> +     }
>>> +
>>> +     alloc_size = min(count, PAGE_SIZE);
>>> +
>>> +     src = kmalloc(alloc_size, GFP_KERNEL);
>>> +     if (!src)
>>> +             return -ENOMEM;
>>> +
>>> +     dst = (u8 __iomem *)(info->screen_base + p);
>>> +
>>> +     while (count) {
>>> +             c = min(count, alloc_size);
>>> +
>>> +             if (copy_from_user(src, buf, c)) {
>>> +                     err = -EFAULT;
>>> +                     break;
>>> +             }
>>> +             memcpy_toio(dst, src, c);
>>> +
>>> +             dst += c;
>>> +             *ppos += c;
>>> +             buf += c;
>>> +             ret += c;
>>> +             count -= c;
>>> +     }
>>> +
>>> +     kfree(src);
>>> +
>>> +     if (err)
>>> +             return err;
>>> +
>>> +     return ret;
>>> +}
>
> The duplication is a bit annoying here, but can't really be avoided. I
> do think though we should maybe go a bit further, and have drm
> implementations of this stuff instead of following fbdev concepts as
> closely as possible. So here roughly:
>
> - if we have a shadow fb, construct a dma_buf_map for that, otherwise
> take the one from the driver
> - have a full generic implementation using that one directly (and
> checking size limits against the underlying gem buffer)
> - ideally also with some testcases in the fbdev testcase we have (very
> bare-bones right now) in igt
>
> But I'm not really sure whether that's worth all the trouble. It's
> just that the fbdev-ness here in this copied code sticks out a lot :-)
>
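The split sketched above — record where each buffer's mapping lives and dispatch on that flag in a single generic helper — can be illustrated with a small user-space analogue. All names here are invented for the example, and both branches fall back to plain memcpy() since memcpy_fromio() has no user-space equivalent:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for struct dma_buf_map: records where the
 * mapping lives so one helper can pick the right copy routine. */
struct fake_map {
        void *vaddr;    /* the mapped buffer */
        int is_iomem;   /* in the kernel this would select memcpy_fromio() */
};

/* One generic read helper instead of parallel _sys_/_cfb_ variants. */
static void fake_map_read(void *dst, const struct fake_map *map,
                          size_t offset, size_t len)
{
        const unsigned char *src = (const unsigned char *)map->vaddr + offset;

        if (map->is_iomem)
                memcpy(dst, src, len);  /* kernel: memcpy_fromio() */
        else
                memcpy(dst, src, len);  /* kernel: plain memcpy() */
}
```

Callers then never branch on the memory type themselves; the dispatch lives in one place.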
>>> +
>>>  /**
>>>   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
>>>   * @info: fbdev registered by the helper
>>> @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>>>               return -ENODEV;
>>>  }
>>>
>>> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
>>> +                              size_t count, loff_t *ppos)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             return drm_fb_helper_sys_read(info, buf, count, ppos);
>>> +     else
>>> +             return drm_fb_helper_cfb_read(info, buf, count, ppos);
>>> +}
>>> +
>>> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
>>> +                               size_t count, loff_t *ppos)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             return drm_fb_helper_sys_write(info, buf, count, ppos);
>>> +     else
>>> +             return drm_fb_helper_cfb_write(info, buf, count, ppos);
>>> +}
>>> +
>>> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
>>> +                               const struct fb_fillrect *rect)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             drm_fb_helper_sys_fillrect(info, rect);
>>> +     else
>>> +             drm_fb_helper_cfb_fillrect(info, rect);
>>> +}
>>> +
>>> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
>>> +                               const struct fb_copyarea *area)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             drm_fb_helper_sys_copyarea(info, area);
>>> +     else
>>> +             drm_fb_helper_cfb_copyarea(info, area);
>>> +}
>>> +
>>> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
>>> +                                const struct fb_image *image)
>>> +{
>>> +     struct drm_fb_helper *fb_helper = info->par;
>>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
>>> +
>>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
>>> +             drm_fb_helper_sys_imageblit(info, image);
>>> +     else
>>> +             drm_fb_helper_cfb_imageblit(info, image);
>>> +}
>
> I think a todo.rst entry to make the new generic functions the real ones, and
> drivers not using the sys/cfb ones anymore would be a good addition.
> It's kinda covered by the move to the generic helpers, but maybe we
> can convert a few more drivers over to these here. Would also allow us
> to maybe flatten the code a bit and use more of the dma_buf_map stuff
> directly (instead of reusing crusty fbdev code written 20 years ago or
> so).

I wouldn't mind doing our own thing, but dma_buf_map is not a good fit
here, mostly because the _cfb_ code first reads from I/O into system
memory and then copies to userspace. The _sys_ functions copy directly
to userspace. (Same for write, but in the other direction.)

There's some code at the top and bottom of these functions that could be
shared. If we want to share the copy loops, we'd probably end up with
additional memcpys in the _sys_ case.
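The difference between the two paths can be sketched with a small user-space analogue. The names are invented for illustration, and the kernel primitives have no user-space equivalent here: io_read() stands in for memcpy_fromio(), and plain memcpy() stands in for copy_to_user():

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for memcpy_fromio(): I/O memory must go through this. */
static void io_read(void *dst, const unsigned char *src, size_t len)
{
        memcpy(dst, src, len);  /* real code would use the io accessors */
}

/* _sys_-style path: one copy, straight from the source buffer. */
static size_t sys_read(unsigned char *ubuf, const unsigned char *screen,
                       size_t count)
{
        memcpy(ubuf, screen, count);  /* kernel: copy_to_user() */
        return count;
}

/* _cfb_-style path: bounce through an intermediate buffer, chunk by
 * chunk, because I/O memory cannot be handed to copy_to_user() directly. */
static size_t cfb_read(unsigned char *ubuf, const unsigned char *screen,
                       size_t count)
{
        unsigned char bounce[16];  /* kernel: kmalloc'ed, up to PAGE_SIZE */
        size_t done = 0;

        while (done < count) {
                size_t c = count - done;

                if (c > sizeof(bounce))
                        c = sizeof(bounce);
                io_read(bounce, screen + done, c);  /* I/O -> system */
                memcpy(ubuf + done, bounce, c);     /* system -> user */
                done += c;
        }
        return done;
}
```

A shared loop would have to route the _sys_ case through the bounce buffer too, which is the extra memcpy mentioned above.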

Best regards
Thomas

>
>>> +
>>>  static const struct fb_ops drm_fbdev_fb_ops = {
>>>       .owner          = THIS_MODULE,
>>>       DRM_FB_HELPER_DEFAULT_OPS,
>>> @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>>>       .fb_release     = drm_fbdev_fb_release,
>>>       .fb_destroy     = drm_fbdev_fb_destroy,
>>>       .fb_mmap        = drm_fbdev_fb_mmap,
>>> -     .fb_read        = drm_fb_helper_sys_read,
>>> -     .fb_write       = drm_fb_helper_sys_write,
>>> -     .fb_fillrect    = drm_fb_helper_sys_fillrect,
>>> -     .fb_copyarea    = drm_fb_helper_sys_copyarea,
>>> -     .fb_imageblit   = drm_fb_helper_sys_imageblit,
>>> +     .fb_read        = drm_fbdev_fb_read,
>>> +     .fb_write       = drm_fbdev_fb_write,
>>> +     .fb_fillrect    = drm_fbdev_fb_fillrect,
>>> +     .fb_copyarea    = drm_fbdev_fb_copyarea,
>>> +     .fb_imageblit   = drm_fbdev_fb_imageblit,
>>>  };
>>>
>>>  static struct fb_deferred_io drm_fbdev_defio = {
>>> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
>>> index 5ffbb4ed5b35..ab424ddd7665 100644
>>> --- a/include/drm/drm_mode_config.h
>>> +++ b/include/drm/drm_mode_config.h
>>> @@ -877,18 +877,6 @@ struct drm_mode_config {
>>>        */
>>>       bool prefer_shadow_fbdev;
>>>
>>> -     /**
>>> -      * @fbdev_use_iomem:
>>> -      *
>>> -      * Set to true if framebuffer reside in iomem.
>>> -      * When set to true memcpy_toio() is used when copying the framebuffer in
>>> -      * drm_fb_helper.drm_fb_helper_dirty_blit_real().
>>> -      *
>>> -      * FIXME: This should be replaced with a per-mapping is_iomem
>>> -      * flag (like ttm does), and then used everywhere in fbdev code.
>>> -      */
>>> -     bool fbdev_use_iomem;
>>> -
>>>       /**
>>>        * @quirk_addfb_prefer_xbgr_30bpp:
>>>        *
>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>
> I think the below should be split out as a prep patch.
>
>>> index 2e8bbecb5091..6ca0f304dda2 100644
>>> --- a/include/linux/dma-buf-map.h
>>> +++ b/include/linux/dma-buf-map.h
>>> @@ -32,6 +32,14 @@
>>>   * accessing the buffer. Use the returned instance and the helper functions
>>>   * to access the buffer's memory in the correct way.
>>>   *
>>> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
>>> + * actually independent from the dma-buf infrastructure. When sharing buffers
>>> + * among devices, drivers have to know the location of the memory to access
>>> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
>>> + * solves this problem for dma-buf and its users. If other drivers or
>>> + * sub-systems require similar functionality, the type could be generalized
>>> + * and moved to a more prominent header file.
>>> + *
>>>   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
>>>   * considered bad style. Rather then accessing its fields directly, use one
>>>   * of the provided helper functions, or implement your own. For example,
>>> @@ -51,6 +59,14 @@
>>>   *
>>>   *   dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
>>>   *
>>> + * Instances of struct dma_buf_map do not have to be cleaned up, but
>>> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
>>> + * always refer to system memory.
>>> + *
>>> + * .. code-block:: c
>>> + *
>>> + *   dma_buf_map_clear(&map);
>>> + *
>>>   * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>   * dma_buf_map_is_null().
>>>   *
>>> @@ -73,17 +89,19 @@
>>>   *   if (dma_buf_map_is_equal(&sys_map, &io_map))
>>>   *           // always false
>>>   *
>>> - * Instances of struct dma_buf_map do not have to be cleaned up, but
>>> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
>>> - * always refer to system memory.
>>> + * A set up instance of struct dma_buf_map can be used to access or manipulate
>>> + * the buffer memory. Depending on the location of the memory, the provided
>>> + * helpers will pick the correct operations. Data can be copied into the memory
>>> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
>>> + * dma_buf_map_incr().
>>>   *
>>> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
>>> - * actually independent from the dma-buf infrastructure. When sharing buffers
>>> - * among devices, drivers have to know the location of the memory to access
>>> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
>>> - * solves this problem for dma-buf and its users. If other drivers or
>>> - * sub-systems require similar functionality, the type could be generalized
>>> - * and moved to a more prominent header file.
>>> + * .. code-block:: c
>>> + *
>>> + *   const void *src = ...; // source buffer
>>> + *   size_t len = ...; // length of src
>>> + *
>>> + *   dma_buf_map_memcpy_to(&map, src, len);
>>> + *   dma_buf_map_incr(&map, len); // go to first byte after the memcpy
>>>   */
>>>
>>>  /**
>>> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
>>>       }
>>>  }
>>>
>>> +/**
>>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
>>> + * @dst:     The dma-buf mapping structure
>>> + * @src:     The source buffer
>>> + * @len:     The number of bytes in src
>>> + *
>>> + * Copies data into a dma-buf mapping. The source buffer is in system
>>> + * memory. Depending on the buffer's location, the helper picks the correct
>>> + * method of accessing the memory.
>>> + */
>>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
>>> +{
>>> +     if (dst->is_iomem)
>>> +             memcpy_toio(dst->vaddr_iomem, src, len);
>>> +     else
>>> +             memcpy(dst->vaddr, src, len);
>>> +}
>>> +
>>> +/**
>>> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
>>> + * @map:     The dma-buf mapping structure
>>> + * @incr:    The number of bytes to increment
>>> + *
>>> + * Increments the address stored in a dma-buf mapping. Depending on the
>>> + * buffer's location, the correct value will be updated.
>>> + */
>>> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
>>> +{
>>> +     if (map->is_iomem)
>>> +             map->vaddr_iomem += incr;
>>> +     else
>>> +             map->vaddr += incr;
>>> +}
>>> +
>>>  #endif /* __DMA_BUF_MAP_H__ */
>>> --
>>> 2.28.0
>
> Aside from the details, I think this all looks reasonable.
> -Daniel
>

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


--9SInwHdMlT9HJ8SWW0YhlKXr6GBVVNNex--

--wSkQnhyu1s6mke5MwMEXxiMF0P61mkAgk
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQFIBAEBCAAyFiEEchf7rIzpz2NEoWjlaA3BHVMLeiMFAl9+2vkUHHR6aW1tZXJt
YW5uQHN1c2UuZGUACgkQaA3BHVMLeiMmeAgAqwVo45DghrkjdBwYPmX4OVZnRERj
XUMMDFUfWoPmn1NUUVpBZmoS1/yVaB9U2asvyKAzBbPlcOgc+rk0jDE2LamWDzJH
jai4oxop9bFlBbyU70iWTvtfaJ8mNDCw9TB+jxY9hl4ikbxvs1eqEmHfHl5BkbK4
UM2tr+YOafQimRZqkLEA38LXzzpOlDfuO5LGot4VjyJ0TBL4W1/ph4sq8riK6ion
Rs7v9ywaMnDsNgCQ+yqK1SB+rDNUWc8SdOlal/y21OpyOz2b/zSyFAHE+Hqu5WSk
PtIQE26Mf0QpKSucONsOKip49UzGa5cdbdzNqDOl4inpYHbyEF30+l6c9A==
=so2q
-----END PGP SIGNATURE-----

--wSkQnhyu1s6mke5MwMEXxiMF0P61mkAgk--


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 09:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 09:36:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4449.11601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQSL2-00038p-EM; Thu, 08 Oct 2020 09:35:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4449.11601; Thu, 08 Oct 2020 09:35:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQSL2-00038i-B5; Thu, 08 Oct 2020 09:35:52 +0000
Received: by outflank-mailman (input) for mailman id 4449;
 Thu, 08 Oct 2020 09:35:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AT8d=DP=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
 id 1kQSL0-00038d-FN
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 09:35:51 +0000
Received: from mail-ot1-x344.google.com (unknown [2607:f8b0:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d01dea4-db6e-4b00-99d4-ecd2abadf518;
 Thu, 08 Oct 2020 09:35:48 +0000 (UTC)
Received: by mail-ot1-x344.google.com with SMTP id m11so4863382otk.13
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 02:35:48 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=AT8d=DP=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
	id 1kQSL0-00038d-FN
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 09:35:51 +0000
X-Inumbo-ID: 3d01dea4-db6e-4b00-99d4-ecd2abadf518
Received: from mail-ot1-x344.google.com (unknown [2607:f8b0:4864:20::344])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3d01dea4-db6e-4b00-99d4-ecd2abadf518;
	Thu, 08 Oct 2020 09:35:48 +0000 (UTC)
Received: by mail-ot1-x344.google.com with SMTP id m11so4863382otk.13
        for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 02:35:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=ZrmRuXyaFHFfF1DpIKTXxKW4IsXEuSn4zFLNldJ3bZ0=;
        b=YtQ/ivtJQmp5b+qosIz6lM4sRgwqCFlQ6rowPWNvMYv5dJMKnTWw131r1zzJQn2A3N
         RyLNJisws8mAYt8dR+o50wEmX+2KCVFzf3SoEJbYY384GLvqbIH18Ui2d2SX891ACsLv
         tr5QIuMFDrFzP7wVckMDQbxaxyIcRIztRBEl8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=ZrmRuXyaFHFfF1DpIKTXxKW4IsXEuSn4zFLNldJ3bZ0=;
        b=KGPfL1ABWkc6EU+8G2IScGlVfIIgw1beLZJhOx8BoDeiwDkmK4WR/LMWjBqv3C3+ep
         HdnPLIFA1ZEiQ+0VdKwU9eWklmVSz+1LgYen1CPteifJPY2qQ1VjOXXNXH1mp1IRjys4
         D8L7PorQxjNSiIxkIHTmmgtkH34zvd0HdBa/f6bHSyCNxcIUSuglV5cS8HUF9mSGXSIS
         ty4eAytiWdx6nwdCFh90xUXZcIsTnaHZb2DaiY10vSf2X7igx4L3Wbx2OBwpvwEp9ZUm
         Irf4/p9xwbgcWeceyG0FUzUoM0yyJ/VHUf6AVXqZI/nQHkEg1gXbq5YoBDYSqb0+Zby5
         NrVw==
X-Gm-Message-State: AOAM531RWRTR+p7RxxEs8JTeakYjv9I0kBTPLoSyIJNe9m3xeIlcaHyz
	HGcthl4lbkrveSdFdPBoV5DjlMU63366HlgXr+66iw==
X-Google-Smtp-Source: ABdhPJyuob7SbAyhZgi7aMSyZt7e9GM6NvJxynkCepRUii8S9uJrD52JJKh3jA3fxJMCx//NY8lpbC5+iDjXJbCnvzg=
X-Received: by 2002:a05:6830:1647:: with SMTP id h7mr4749389otr.281.1602149747285;
 Thu, 08 Oct 2020 02:35:47 -0700 (PDT)
MIME-Version: 1.0
References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-7-tzimmermann@suse.de>
 <20201002180500.GM438822@phenom.ffwll.local> <CAKMK7uFVHrqBh1sqQHR56vp2JS77XoCs232B5mkJXXpLhgLW8Q@mail.gmail.com>
 <ffc4b2de-ff97-210f-0ae4-f2f85a27f59b@suse.de>
In-Reply-To: <ffc4b2de-ff97-210f-0ae4-f2f85a27f59b@suse.de>
From: Daniel Vetter <daniel@ffwll.ch>
Date: Thu, 8 Oct 2020 11:35:35 +0200
Message-ID: <CAKMK7uFds1v63V8jd0fvjqve=TiVMeHmGwLJ72ZOyGFQ0OvGxw@mail.gmail.com>
Subject: Re: [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>, Maxime Ripard <mripard@kernel.org>, 
	Dave Airlie <airlied@linux.ie>, Sam Ravnborg <sam@ravnborg.org>, 
	Alex Deucher <alexander.deucher@amd.com>, =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>, 
	Gerd Hoffmann <kraxel@redhat.com>, Lucas Stach <l.stach@pengutronix.de>, 
	Russell King <linux+etnaviv@armlinux.org.uk>, 
	Christian Gmeiner <christian.gmeiner@gmail.com>, Inki Dae <inki.dae@samsung.com>, 
	Joonyoung Shim <jy0922.shim@samsung.com>, Seung-Woo Kim <sw0312.kim@samsung.com>, 
	Kyungmin Park <kyungmin.park@samsung.com>, Kukjin Kim <kgene@kernel.org>, 
	Krzysztof Kozlowski <krzk@kernel.org>, Qiang Yu <yuq825@gmail.com>, Ben Skeggs <bskeggs@redhat.com>, 
	Rob Herring <robh@kernel.org>, Tomeu Vizoso <tomeu.vizoso@collabora.com>, 
	Steven Price <steven.price@arm.com>, Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>, 
	Sandy Huang <hjc@rock-chips.com>, =?UTF-8?Q?Heiko_St=C3=BCbner?= <heiko@sntech.de>, 
	Hans de Goede <hdegoede@redhat.com>, Sean Paul <sean@poorly.run>, "Anholt, Eric" <eric@anholt.net>, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Huang Rui <ray.huang@amd.com>, 
	Sumit Semwal <sumit.semwal@linaro.org>, Emil Velikov <emil.velikov@collabora.com>, 
	Luben Tuikov <luben.tuikov@amd.com>, apaneers@amd.com, 
	Linus Walleij <linus.walleij@linaro.org>, Melissa Wen <melissa.srw@gmail.com>, 
	"Wilson, Chris" <chris@chris-wilson.co.uk>, Qinglang Miao <miaoqinglang@huawei.com>, 
	dri-devel <dri-devel@lists.freedesktop.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	"open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>, 
	The etnaviv authors <etnaviv@lists.freedesktop.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>, lima@lists.freedesktop.org, 
	Nouveau Dev <nouveau@lists.freedesktop.org>, 
	"open list:DRM DRIVER FOR QXL VIRTUAL GPU" <spice-devel@lists.freedesktop.org>, 
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>, 
	"moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>, 
	"open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>, 
	"moderated list:DMA BUFFER SHARING FRAMEWORK" <linaro-mm-sig@lists.linaro.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 8, 2020 at 11:25 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Hi
>
> Am 02.10.20 um 20:44 schrieb Daniel Vetter:
> > On Fri, Oct 2, 2020 at 8:05 PM Daniel Vetter <daniel@ffwll.ch> wrote:
> >>
> >> On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote:
> >>> At least sparc64 requires I/O-specific access to framebuffers. This
> >>> patch updates the fbdev console accordingly.
> >>>
> >>> For drivers with direct access to the framebuffer memory, the callback
> >>> functions in struct fb_ops test for the type of memory and call the
> >>> respective fb_sys_ or fb_cfb_ functions.
> >>>
> >>> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> >>> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> >>> interfaces to access the buffer.
> >>>
> >>> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> >>> I/O memory and avoid a HW exception. With the introduction of struct
> >>> dma_buf_map, this is not required any longer. The patch removes the
> >>> respective code from both bochs and fbdev.
> >>>
> >>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >
> > Argh, I accidentally hit send before finishing this ...
> >
> >>> ---
> >>>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
> >>>  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
> >>>  include/drm/drm_mode_config.h     |  12 --
> >>>  include/linux/dma-buf-map.h       |  72 ++++++++--
> >>>  4 files changed, 265 insertions(+), 37 deletions(-)
> >>>
> >>> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> >>> index 13d0d04c4457..853081d186d5 100644
> >>> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> >>> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> >>> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> >>>       bochs->dev->mode_config.preferred_depth = 24;
> >>>       bochs->dev->mode_config.prefer_shadow = 0;
> >>>       bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> >>> -     bochs->dev->mode_config.fbdev_use_iomem = true;
> >>>       bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
> >>>
> >>>       bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> >>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> >>> index 343a292f2c7c..f345a314a437 100644
> >>> --- a/drivers/gpu/drm/drm_fb_helper.c
> >>> +++ b/drivers/gpu/drm/drm_fb_helper.c
> >>> @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> >>>  }
> >>>
> >>>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> >>> -                                       struct drm_clip_rect *clip)
> >>> +                                       struct drm_clip_rect *clip,
> >>> +                                       struct dma_buf_map *dst)
> >>>  {
> >>>       struct drm_framebuffer *fb = fb_helper->fb;
> >>>       unsigned int cpp = fb->format->cpp[0];
> >>>       size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> >>>       void *src = fb_helper->fbdev->screen_buffer + offset;
> >>> -     void *dst = fb_helper->buffer->map.vaddr + offset;
> >>>       size_t len = (clip->x2 - clip->x1) * cpp;
> >>>       unsigned int y;
> >>>
> >>> -     for (y = clip->y1; y < clip->y2; y++) {
> >>> -             if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> >>> -                     memcpy(dst, src, len);
> >>> -             else
> >>> -                     memcpy_toio((void __iomem *)dst, src, len);
> >>> +     dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> >>>
> >>> +     for (y = clip->y1; y < clip->y2; y++) {
> >>> +             dma_buf_map_memcpy_to(dst, src, len);
> >>> +             dma_buf_map_incr(dst, fb->pitches[0]);
> >>>               src += fb->pitches[0];
> >>> -             dst += fb->pitches[0];
> >>>       }
> >>>  }
> >>>
> >>> @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >>>                       ret = drm_client_buffer_vmap(helper->buffer, &map);
> >>>                       if (ret)
> >>>                               return;
> >>> -                     drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> >>> +                     drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> >>>               }
> >>> +
> >>>               if (helper->fb->funcs->dirty)
> >>>                       helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> >>>                                                &clip_copy, 1);
> >>> @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> >>>  }
> >>>  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> >>>
> >>> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> >>> +                                   size_t count, loff_t *ppos)
> >>> +{
> >>> +     unsigned long p = *ppos;
> >>> +     u8 *dst;
> >>> +     u8 __iomem *src;
> >>> +     int c, err = 0;
> >>> +     unsigned long total_size;
> >>> +     unsigned long alloc_size;
> >>> +     ssize_t ret = 0;
> >>> +
> >>> +     if (info->state != FBINFO_STATE_RUNNING)
> >>> +             return -EPERM;
> >>> +
> >>> +     total_size = info->screen_size;
> >>> +
> >>> +     if (total_size == 0)
> >>> +             total_size = info->fix.smem_len;
> >>> +
> >>> +     if (p >= total_size)
> >>> +             return 0;
> >>> +
> >>> +     if (count >= total_size)
> >>> +             count = total_size;
> >>> +
> >>> +     if (count + p > total_size)
> >>> +             count = total_size - p;
> >>> +
> >>> +     src = (u8 __iomem *)(info->screen_base + p);
> >>> +
> >>> +     alloc_size = min(count, PAGE_SIZE);
> >>> +
> >>> +     dst = kmalloc(alloc_size, GFP_KERNEL);
> >>> +     if (!dst)
> >>> +             return -ENOMEM;
> >>> +
> >>> +     while (count) {
> >>> +             c = min(count, alloc_size);
> >>> +
> >>> +             memcpy_fromio(dst, src, c);
> >>> +             if (copy_to_user(buf, dst, c)) {
> >>> +                     err = -EFAULT;
> >>> +                     break;
> >>> +             }
> >>> +
> >>> +             src += c;
> >>> +             *ppos += c;
> >>> +             buf += c;
> >>> +             ret += c;
> >>> +             count -= c;
> >>> +     }
> >>> +
> >>> +     kfree(dst);
> >>> +
> >>> +     if (err)
> >>> +             return err;
> >>> +
> >>> +     return ret;
> >>> +}
> >>> +
> >>> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> >>> +                                    size_t count, loff_t *ppos)
> >>> +{
> >>> +     unsigned long p = *ppos;
> >>> +     u8 *src;
> >>> +     u8 __iomem *dst;
> >>> +     int c, err = 0;
> >>> +     unsigned long total_size;
> >>> +     unsigned long alloc_size;
> >>> +     ssize_t ret = 0;
> >>> +
> >>> +     if (info->state != FBINFO_STATE_RUNNING)
> >>> +             return -EPERM;
> >>> +
> >>> +     total_size = info->screen_size;
> >>> +
> >>> +     if (total_size == 0)
> >>> +             total_size = info->fix.smem_len;
> >>> +
> >>> +     if (p > total_size)
> >>> +             return -EFBIG;
> >>> +
> >>> +     if (count > total_size) {
> >>> +             err = -EFBIG;
> >>> +             count = total_size;
> >>> +     }
> >>> +
> >>> +     if (count + p > total_size) {
> >>> +             /*
> >>> +              * The framebuffer is too small. We do the
> >>> +              * copy operation, but return an error code
> >>> +              * afterwards. Taken from fbdev.
> >>> +              */
> >>> +             if (!err)
> >>> +                     err = -ENOSPC;
> >>> +             count = total_size - p;
> >>> +     }
> >>> +
> >>> +     alloc_size = min(count, PAGE_SIZE);
> >>> +
> >>> +     src = kmalloc(alloc_size, GFP_KERNEL);
> >>> +     if (!src)
> >>> +             return -ENOMEM;
> >>> +
> >>> +     dst = (u8 __iomem *)(info->screen_base + p);
> >>> +
> >>> +     while (count) {
> >>> +             c = min(count, alloc_size);
> >>> +
> >>> +             if (copy_from_user(src, buf, c)) {
> >>> +                     err = -EFAULT;
> >>> +                     break;
> >>> +             }
> >>> +             memcpy_toio(dst, src, c);
> >>> +
> >>> +             dst += c;
> >>> +             *ppos += c;
> >>> +             buf += c;
> >>> +             ret += c;
> >>> +             count -= c;
> >>> +     }
> >>> +
> >>> +     kfree(src);
> >>> +
> >>> +     if (err)
> >>> +             return err;
> >>> +
> >>> +     return ret;
> >>> +}
> >
> > The duplication is a bit annoying here, but can't really be avoided. I
> > do think though we should maybe go a bit further, and have drm
> > implementations of this stuff instead of following fbdev concepts as
> > closely as possible. So here roughly:
> >
> > - if we have a shadow fb, construct a dma_buf_map for that, otherwise
> > take the one from the driver
> > - have a full generic implementation using that one directly (and
> > checking size limits against the underlying gem buffer)
> > - ideally also with some testcases in the fbdev testcase we have (very
> > bare-bones right now) in igt
> >
> > But I'm not really sure whether that's worth all the trouble. It's
> > just that the fbdev-ness here in this copied code sticks out a lot :-)
> >
> >>> +
> >>>  /**
> >>>   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> >>>   * @info: fbdev registered by the helper
> >>> @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> >>>               return -ENODEV;
> >>>  }
> >>>
> >>> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> >>> +                              size_t count, loff_t *ppos)
> >>> +{
> >>> +     struct drm_fb_helper *fb_helper = info->par;
> >>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
> >>> +
> >>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> >>> +             return drm_fb_helper_sys_read(info, buf, count, ppos);
> >>> +     else
> >>> +             return drm_fb_helper_cfb_read(info, buf, count, ppos);
> >>> +}
> >>> +
> >>> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> >>> +                               size_t count, loff_t *ppos)
> >>> +{
> >>> +     struct drm_fb_helper *fb_helper = info->par;
> >>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
> >>> +
> >>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> >>> +             return drm_fb_helper_sys_write(info, buf, count, ppos);
> >>> +     else
> >>> +             return drm_fb_helper_cfb_write(info, buf, count, ppos);
> >>> +}
> >>> +
> >>> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> >>> +                               const struct fb_fillrect *rect)
> >>> +{
> >>> +     struct drm_fb_helper *fb_helper = info->par;
> >>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
> >>> +
> >>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> >>> +             drm_fb_helper_sys_fillrect(info, rect);
> >>> +     else
> >>> +             drm_fb_helper_cfb_fillrect(info, rect);
> >>> +}
> >>> +
> >>> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> >>> +                               const struct fb_copyarea *area)
> >>> +{
> >>> +     struct drm_fb_helper *fb_helper = info->par;
> >>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
> >>> +
> >>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> >>> +             drm_fb_helper_sys_copyarea(info, area);
> >>> +     else
> >>> +             drm_fb_helper_cfb_copyarea(info, area);
> >>> +}
> >>> +
> >>> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> >>> +                                const struct fb_image *image)
> >>> +{
> >>> +     struct drm_fb_helper *fb_helper = info->par;
> >>> +     struct drm_client_buffer *buffer = fb_helper->buffer;
> >>> +
> >>> +     if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> >>> +             drm_fb_helper_sys_imageblit(info, image);
> >>> +     else
> >>> +             drm_fb_helper_cfb_imageblit(info, image);
> >>> +}
> >
> > I think a todo.rst entry to make the new generic functions the real ones, and
> > drivers not using the sys/cfb ones anymore would be a good addition.
> > It's kinda covered by the move to the generic helpers, but maybe we
> > can convert a few more drivers over to these here. Would also allow us
> > to maybe flatten the code a bit and use more of the dma_buf_map stuff
> > directly (instead of reusing crusty fbdev code written 20 years ago or
> > so).
>
> I wouldn't mind doing our own thing, but dma_buf_map is not a good fit
> here. Mostly because the _cfb_ code first does a reads from I/O to
> system memory, and then copies to userspace. The _sys_ functions copy
> directly to userspace. (Same for write, but in the other direction.)
>
> There's some code at the top and bottom of these functions that could be
> shared. If we want to share the copy loops, we'd probably end up with
> additional memcpys in the _sys_ case.

Yeah I noticed that. I'd just ignore it. If someone is using a) fbdev
and b) read/write on it, they don't care much about performance. We
can do another copy or two, no problem. But the duplication is also ok
I guess, just a bit less pretty.
-Daniel

> Best regards
> Thomas
>
> >
> >>> +
> >>>  static const struct fb_ops drm_fbdev_fb_ops = {
> >>>       .owner          = THIS_MODULE,
> >>>       DRM_FB_HELPER_DEFAULT_OPS,
> >>> @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> >>>       .fb_release     = drm_fbdev_fb_release,
> >>>       .fb_destroy     = drm_fbdev_fb_destroy,
> >>>       .fb_mmap        = drm_fbdev_fb_mmap,
> >>> -     .fb_read        = drm_fb_helper_sys_read,
> >>> -     .fb_write       = drm_fb_helper_sys_write,
> >>> -     .fb_fillrect    = drm_fb_helper_sys_fillrect,
> >>> -     .fb_copyarea    = drm_fb_helper_sys_copyarea,
> >>> -     .fb_imageblit   = drm_fb_helper_sys_imageblit,
> >>> +     .fb_read        = drm_fbdev_fb_read,
> >>> +     .fb_write       = drm_fbdev_fb_write,
> >>> +     .fb_fillrect    = drm_fbdev_fb_fillrect,
> >>> +     .fb_copyarea    = drm_fbdev_fb_copyarea,
> >>> +     .fb_imageblit   = drm_fbdev_fb_imageblit,
> >>>  };
> >>>
> >>>  static struct fb_deferred_io drm_fbdev_defio = {
> >>> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> >>> index 5ffbb4ed5b35..ab424ddd7665 100644
> >>> --- a/include/drm/drm_mode_config.h
> >>> +++ b/include/drm/drm_mode_config.h
> >>> @@ -877,18 +877,6 @@ struct drm_mode_config {
> >>>        */
> >>>       bool prefer_shadow_fbdev;
> >>>
> >>> -     /**
> >>> -      * @fbdev_use_iomem:
> >>> -      *
> >>> -      * Set to true if framebuffer reside in iomem.
> >>> -      * When set to true memcpy_toio() is used when copying the framebuffer in
> >>> -      * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> >>> -      *
> >>> -      * FIXME: This should be replaced with a per-mapping is_iomem
> >>> -      * flag (like ttm does), and then used everywhere in fbdev code.
> >>> -      */
> >>> -     bool fbdev_use_iomem;
> >>> -
> >>>       /**
> >>>        * @quirk_addfb_prefer_xbgr_30bpp:
> >>>        *
> >>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> >
> > I think the below should be split out as a prep patch.
> >
> >>> index 2e8bbecb5091..6ca0f304dda2 100644
> >>> --- a/include/linux/dma-buf-map.h
> >>> +++ b/include/linux/dma-buf-map.h
> >>> @@ -32,6 +32,14 @@
> >>>   * accessing the buffer. Use the returned instance and the helper functions
> >>>   * to access the buffer's memory in the correct way.
> >>>   *
> >>> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> >>> + * actually independent from the dma-buf infrastructure. When sharing buffers
> >>> + * among devices, drivers have to know the location of the memory to access
> >>> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> >>> + * solves this problem for dma-buf and its users. If other drivers or
> >>> + * sub-systems require similar functionality, the type could be generalized
> >>> + * and moved to a more prominent header file.
> >>> + *
> >>>   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> >>>   * considered bad style. Rather than accessing its fields directly, use one
> >>>   * of the provided helper functions, or implement your own. For example,
> >>> @@ -51,6 +59,14 @@
> >>>   *
> >>>   *   dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> >>>   *
> >>> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> >>> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> >>> + * always refer to system memory.
> >>> + *
> >>> + * .. code-block:: c
> >>> + *
> >>> + *   dma_buf_map_clear(&map);
> >>> + *
> >>>   * Test if a mapping is valid with either dma_buf_map_is_set() or
> >>>   * dma_buf_map_is_null().
> >>>   *
> >>> @@ -73,17 +89,19 @@
> >>>   *   if (dma_buf_map_is_equal(&sys_map, &io_map))
> >>>   *           // always false
> >>>   *
> >>> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> >>> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> >>> - * always refer to system memory.
> >>> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> >>> + * the buffer memory. Depending on the location of the memory, the provided
> >>> + * helpers will pick the correct operations. Data can be copied into the memory
> >>> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> >>> + * dma_buf_map_incr().
> >>>   *
> >>> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> >>> - * actually independent from the dma-buf infrastructure. When sharing buffers
> >>> - * among devices, drivers have to know the location of the memory to access
> >>> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> >>> - * solves this problem for dma-buf and its users. If other drivers or
> >>> - * sub-systems require similar functionality, the type could be generalized
> >>> - * and moved to a more prominent header file.
> >>> + * .. code-block:: c
> >>> + *
> >>> + *   const void *src = ...; // source buffer
> >>> + *   size_t len = ...; // length of src
> >>> + *
> >>> + *   dma_buf_map_memcpy_to(&map, src, len);
> >>> + *   dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> >>>   */
> >>>
> >>>  /**
> >>> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> >>>       }
> >>>  }
> >>>
> >>> +/**
> >>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> >>> + * @dst:     The dma-buf mapping structure
> >>> + * @src:     The source buffer
> >>> + * @len:     The number of bytes in src
> >>> + *
> >>> + * Copies data into a dma-buf mapping. The source buffer is in system
> >>> + * memory. Depending on the buffer's location, the helper picks the correct
> >>> + * method of accessing the memory.
> >>> + */
> >>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> >>> +{
> >>> +     if (dst->is_iomem)
> >>> +             memcpy_toio(dst->vaddr_iomem, src, len);
> >>> +     else
> >>> +             memcpy(dst->vaddr, src, len);
> >>> +}
> >>> +
> >>> +/**
> >>> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> >>> + * @map:     The dma-buf mapping structure
> >>> + * @incr:    The number of bytes to increment
> >>> + *
> >>> + * Increments the address stored in a dma-buf mapping. Depending on the
> >>> + * buffer's location, the correct value will be updated.
> >>> + */
> >>> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> >>> +{
> >>> +     if (map->is_iomem)
> >>> +             map->vaddr_iomem += incr;
> >>> +     else
> >>> +             map->vaddr += incr;
> >>> +}
> >>> +
> >>>  #endif /* __DMA_BUF_MAP_H__ */
> >>> --
> >>> 2.28.0
> >
> > Aside from the details I think looks all reasonable.
> > -Daniel
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer
>


-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 12:54:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 12:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4465.11615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVRC-00040L-MM; Thu, 08 Oct 2020 12:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4465.11615; Thu, 08 Oct 2020 12:54:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVRC-00040E-JC; Thu, 08 Oct 2020 12:54:26 +0000
Received: by outflank-mailman (input) for mailman id 4465;
 Thu, 08 Oct 2020 12:54:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQVRB-000409-2b
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 12:54:25 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b158584-b5ed-480e-b4cb-4378216e8061;
 Thu, 08 Oct 2020 12:54:24 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id z22so6712228wmi.0
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 05:54:24 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o194sm6728427wme.24.2020.10.08.05.54.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 08 Oct 2020 05:54:22 -0700 (PDT)
X-Inumbo-ID: 1b158584-b5ed-480e-b4cb-4378216e8061
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=GgFbCKyjOz9HeWAfWmdIujxFruBcbz3/RSlY+kjpREI=;
        b=Md5MOMlt9Ghs3cGQiZ+zQ+r+c+I5oaDJOJhTTx847eom6ZmtqbjXU6ihRab2eSXgu5
         L0kTr3ine1buViA9mlVhOX+nnYxdT7jULR5CRintgiRvkbXa93rhopMmYZ2MiK1NPJ54
         rrVokAZWFvg2V7F6r6kZqW8Q8A4RDYd3Pw63kFHAbmXIgPYB1LInyCF7Y40T3X6CXQ2D
         DxUTvgRgRls2j4fp87xCvK8hTqzsE1drQPRca0zHNpTkfLf6mgPDe7+1Ci0m4P/Sb5gS
         MqGHm378msveS9UWGcGdlDwZsfIjYQA3Xlq183wY6n535i+vLQEkvvytYN4iJaGy4zHG
         0aNA==
X-Gm-Message-State: AOAM531Egqy7hdubM0ObJ/XEOaFzdb19pld/atQhnT34WtZzPEOaOkWW
	SsOayA3fIblBFb4SoVciQTxCu6cYqMQ=
X-Google-Smtp-Source: ABdhPJwSoRzWdjLR6fSPR7LxriR7i32uk6XE0lk2dQAFX+26vpcjetuxFENeHxhyAfEaw7UQubssUA==
X-Received: by 2002:a1c:cc0a:: with SMTP id h10mr8165275wmb.80.1602161663459;
        Thu, 08 Oct 2020 05:54:23 -0700 (PDT)
Date: Thu, 8 Oct 2020 12:54:20 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [PATCH 0/5] tools/xenstore: remove read-only socket
Message-ID: <20201008125420.fvycxbba7mrypvmf@liuwe-devbox-debian-v2>
References: <20201002154141.11677-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201002154141.11677-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Fri, Oct 02, 2020 at 05:41:36PM +0200, Juergen Gross wrote:
> The read-only socket of Xenstore is usable for the daemon case only,
> and even there it is not really worth keeping: not all Xenstore
> operations that change the state of Xenstore are blocked, oxenstored
> ignores the read-only semantics completely, and the privileges for
> using the ro-socket are the same as for the normal rw-socket.
> 
> So remove this feature, switching the related use cases to the
> Xenstore-type agnostic open- and close-functions.
> 
> Juergen Gross (5):
>   tools/xenstore: remove socket-only option from xenstore client
>   tools/libs/store: ignore XS_OPEN_SOCKETONLY flag
>   tools/libs/store: drop read-only functionality
>   tools: drop all deprecated usages of xs_*_open() and friends in tools
>   tools/xenstore: drop creation of read-only socket in xenstored

Acked + applied.

Please submit a follow-up patch for adding some comments to xenstore.h.

Wei.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 12:54:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 12:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4466.11627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVRJ-00041r-V7; Thu, 08 Oct 2020 12:54:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4466.11627; Thu, 08 Oct 2020 12:54:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVRJ-00041j-Rb; Thu, 08 Oct 2020 12:54:33 +0000
Received: by outflank-mailman (input) for mailman id 4466;
 Thu, 08 Oct 2020 12:54:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQVRJ-00041X-2S
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 12:54:33 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 924d22de-8d80-465a-a70c-4abf3ced9477;
 Thu, 08 Oct 2020 12:54:31 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id m6so6499788wrn.0
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 05:54:31 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c68sm6788628wmd.34.2020.10.08.05.54.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 08 Oct 2020 05:54:30 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kQVRJ-00041X-2S
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 12:54:33 +0000
X-Inumbo-ID: 924d22de-8d80-465a-a70c-4abf3ced9477
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 924d22de-8d80-465a-a70c-4abf3ced9477;
	Thu, 08 Oct 2020 12:54:31 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id m6so6499788wrn.0
        for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 05:54:31 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=uKhENcFgNeHcmfi/Xc1khSKvTSV9c736YCkp+l64P7c=;
        b=lwthDI92MyQnqOUxIH4KGALUmWWeUmi+BCtyZCwgxrJPOeOkaWxSUmLPsbYl3SKuMC
         hQwTLX39I/a2RL9+9ge5wcHCSANhomx4vA5mX19rTDr1EuyxQ4YTGrnXw1NrJARlo+J2
         vosGuqxcSZKj1YBj3693WCvLqckwOzJwe9PlYCI3spM+J70rcn5UYkrxGHuGv/7jWPR4
         RGstg7hTJTLd1TZJYRAHkeUS4d4MDzi+H8ocHWb8o72Ob72JnEhA/IyqjRyjjOrsXvnU
         OKSKx+3NOhVA3Z6+I8KcHxAGeoDdK3EGEkuEE0S+aUEP7Awb/x50dQIn4DrwMDr0P6Ql
         mBKA==
X-Gm-Message-State: AOAM533VQK8eZTX83BX1HwL0p4ZYubNnGpLgXwk3RzmHWgB66PKYXcod
	OD322Rm8mvYmxvWMuv9Frx4=
X-Google-Smtp-Source: ABdhPJxpy3JciGSMV8DaezHiIPzUrrlLjUfTuL6qqAoLLpt/4JO/TWv3MYVkvCaIYqykrmoqznRLMQ==
X-Received: by 2002:adf:f207:: with SMTP id p7mr9849408wro.152.1602161671062;
        Thu, 08 Oct 2020 05:54:31 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id c68sm6788628wmd.34.2020.10.08.05.54.30
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 08 Oct 2020 05:54:30 -0700 (PDT)
Date: Thu, 8 Oct 2020 12:54:29 +0000
From: Wei Liu <wl@xen.org>
To: Edwin =?utf-8?B?VMO2csO2aw==?= <edvin.torok@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1 0/1] drop RO socket from oxenstored
Message-ID: <20201008125429.q3z57uerlrdpoqtc@liuwe-devbox-debian-v2>
References: <cover.1601654648.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cover.1601654648.git.edvin.torok@citrix.com>
User-Agent: NeoMutt/20180716

On Fri, Oct 02, 2020 at 05:06:31PM +0100, Edwin Török wrote:
> See https://lore.kernel.org/xen-devel/20201002154141.11677-6-jgross@suse.com/T/#u
> 
> Edwin Török (1):
>   tools/ocaml/xenstored: drop the creation of the RO socket
> 
>  tools/ocaml/xenstored/connections.ml |  2 +-
>  tools/ocaml/xenstored/define.ml      |  1 -
>  tools/ocaml/xenstored/xenstored.ml   | 15 ++++++---------

Applied.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 12:57:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 12:57:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4481.11662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVTm-0004Of-NK; Thu, 08 Oct 2020 12:57:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4481.11662; Thu, 08 Oct 2020 12:57:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVTm-0004OX-K4; Thu, 08 Oct 2020 12:57:06 +0000
Received: by outflank-mailman (input) for mailman id 4481;
 Thu, 08 Oct 2020 12:57:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQVTl-0004OS-8e
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 12:57:05 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 818b9644-43e8-497a-899d-064f456c8cd1;
 Thu, 08 Oct 2020 12:57:03 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id j136so6413615wmj.2
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 05:57:03 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id p21sm7030763wmc.28.2020.10.08.05.57.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 08 Oct 2020 05:57:02 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kQVTl-0004OS-8e
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 12:57:05 +0000
X-Inumbo-ID: 818b9644-43e8-497a-899d-064f456c8cd1
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 818b9644-43e8-497a-899d-064f456c8cd1;
	Thu, 08 Oct 2020 12:57:03 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id j136so6413615wmj.2
        for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 05:57:03 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Bx8RcOnAWmrJ0g8im4xcst7IS35xi9boAGY9ldV69WQ=;
        b=M0gzLqXVdylhfUkJxnjlHL4fIsnq7D/0dS0dJ47QEMCcjovWZwrEvUjzMGv6LzsF/K
         /AGfuSQwy8F2zkqMuk1bE6/Ic/0bCQrAjxojwY9WXjFeEFTfGucnfk+DGvAg8wfasCaj
         85m3LpVcfYxQOQOaQ8hPT0f31hS30j5/3ZdDBeJocnNtukresgaHpbd7mDXhpm0JZ/1Y
         wFtJUPZzBtpNKupNAjRT8vBXF78a/1EqvKs5eiIbZd082O1tw6ItUw5YlFKOd4Uc9HHj
         WJZwNERBS84dr0E2USls60FN4czjtkmHIhsD1T3lOQJjg2kC/Tzfvfqb+n5G53xzsYKd
         MgQA==
X-Gm-Message-State: AOAM533ViiyYiVE8BxJdfFYXE5hS1GbrPrup+zEt3KaXwg3U/6qxlySc
	TewFQq+zoeXzKDrGfJvnSvbsySfX/B0=
X-Google-Smtp-Source: ABdhPJxYJWXOt7cEvb1PS0ExMEo06yFsA2tFEdaZbEiDOE4nKxvH4oH3H8XMxS6UJOTIg3VdLe1Xig==
X-Received: by 2002:a1c:c28a:: with SMTP id s132mr9153727wmf.13.1602161822818;
        Thu, 08 Oct 2020 05:57:02 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id p21sm7030763wmc.28.2020.10.08.05.57.02
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 08 Oct 2020 05:57:02 -0700 (PDT)
Date: Thu, 8 Oct 2020 12:57:01 +0000
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: xen-devel@lists.xenproject.org, jgross@suse.com,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 2/2] tool/libs/light: Fix libxenlight gcc warning
Message-ID: <20201008125700.xo7c4ctwdz4chsye@liuwe-devbox-debian-v2>
References: <4ecb03b40b0da6d480e95af1da8289501a3ede0a.1602078276.git.bertrand.marquis@arm.com>
 <579cfa6e71a5a1392a5ae40cef358c4e8e3a0901.1602078276.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <579cfa6e71a5a1392a5ae40cef358c4e8e3a0901.1602078276.git.bertrand.marquis@arm.com>
User-Agent: NeoMutt/20180716

On Wed, Oct 07, 2020 at 02:57:02PM +0100, Bertrand Marquis wrote:
> Fix a gcc 10 compilation warning about an uninitialized variable by
> initializing it to 0.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Applied.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 13:07:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 13:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4485.11679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVdz-0005U1-P3; Thu, 08 Oct 2020 13:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4485.11679; Thu, 08 Oct 2020 13:07:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVdz-0005Tu-LY; Thu, 08 Oct 2020 13:07:39 +0000
Received: by outflank-mailman (input) for mailman id 4485;
 Thu, 08 Oct 2020 13:07:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQVdx-0005Tp-OR
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 13:07:37 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72637db2-8990-47eb-82d5-53441f9dde86;
 Thu, 08 Oct 2020 13:07:36 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id l11so6396872wmh.2
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 06:07:36 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id b8sm6746188wmb.4.2020.10.08.06.07.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 08 Oct 2020 06:07:34 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kQVdx-0005Tp-OR
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 13:07:37 +0000
X-Inumbo-ID: 72637db2-8990-47eb-82d5-53441f9dde86
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 72637db2-8990-47eb-82d5-53441f9dde86;
	Thu, 08 Oct 2020 13:07:36 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id l11so6396872wmh.2
        for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 06:07:36 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=KjIx+9Xuwm/NU6V/rg66DBuxDp4aIJaVISlPb6YOr+w=;
        b=j4wXf5GIjLAUdQykC3KMz4jIOku/ykUGeCwnDbJm5RTyC0sfQcjEKcWlHbk8hXOU5N
         bFNHvjxL3T4ZuPhDxO0x0SGjwf267imN2CWtlRRGTDEypfFgppnKnJuFY8+QPZnkP1mH
         PFMql0VqYwHdU1FkbdSt9V+Q4ZYJ7YY4qcIle+ZQvQ++Sql6fdgu3YVr5egK5GZYKuFl
         NAryzoqswhKVTvqyoyOmKm9yc3EF3ozNzZ4GFhWIddbbt4RXHDpyYLjcbNi4pbcfHkAb
         W/viHd4UT6fa0XtQuJO6GBX7v9hz4KgS+hGHCWaP6upWELxeV2J61qHVob7Y2/lTEonx
         OnXw==
X-Gm-Message-State: AOAM531PVtYkzxnDeKJacCdE4jj50nMipG+kT2Q74f0I6cxjw3gBrVdP
	DKbHpCbEKQzX3IDoScPtKM4=
X-Google-Smtp-Source: ABdhPJyLQsnsiC0HvCZsnX9ACo/WUINu6LnF7FIlx5nP90c9nbRejnYyjIcQLj593mOUXLJe4Un4Xg==
X-Received: by 2002:a1c:6a0a:: with SMTP id f10mr8881256wmc.86.1602162455621;
        Thu, 08 Oct 2020 06:07:35 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id b8sm6746188wmb.4.2020.10.08.06.07.34
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 08 Oct 2020 06:07:34 -0700 (PDT)
Date: Thu, 8 Oct 2020 13:07:33 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH 1/5] libxl: remove separate calculation of IOMMU memory
 overhead
Message-ID: <20201008130733.7pu73mu4iqjj2svd@liuwe-devbox-debian-v2>
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-2-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201005094905.2929-2-paul@xen.org>
User-Agent: NeoMutt/20180716

On Mon, Oct 05, 2020 at 10:49:01AM +0100, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> When using 'shared_pt' mode the IOMMU is using the EPT PTEs. In 'sync_pt'
> mode these PTEs are instead replicated for the IOMMU to use. Hence, it is
> fairly clear that the memory overhead in this mode is essentially another
> copy of the P2M.
> 
> This patch removes the independent calculation done in
> libxl__get_required_iommu_memory() and instead simply uses 'shadow_memkb'
> as the value of the IOMMU overhead since this is the estimated size of
> the P2M.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 13:08:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 13:08:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4486.11690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVfG-0005ar-4S; Thu, 08 Oct 2020 13:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4486.11690; Thu, 08 Oct 2020 13:08:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQVfG-0005ak-1K; Thu, 08 Oct 2020 13:08:58 +0000
Received: by outflank-mailman (input) for mailman id 4486;
 Thu, 08 Oct 2020 13:08:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQVfF-0005ad-BP
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 13:08:57 +0000
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00b669fc-c15b-4079-a615-b97206f4a653;
 Thu, 08 Oct 2020 13:08:56 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id f21so6379664wml.3
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 06:08:56 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o3sm2772964wru.15.2020.10.08.06.08.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 08 Oct 2020 06:08:54 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kQVfF-0005ad-BP
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 13:08:57 +0000
X-Inumbo-ID: 00b669fc-c15b-4079-a615-b97206f4a653
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 00b669fc-c15b-4079-a615-b97206f4a653;
	Thu, 08 Oct 2020 13:08:56 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id f21so6379664wml.3
        for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 06:08:56 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=3rkwZWPGY+iNfIv5b9/+yjGaZwN2S6ImDZjPfKPPD1Y=;
        b=NzajfVpP5oXCZJ7YeF97BaaPrFF+FnrjFjjcfZoX8FuL056Dc+7gzqvb1D5fuqaDAk
         K41c6f1/q5NaTglkRDzZ0gwsE7HEHi8ngIyquAVwY5rclVuCUtuRRXowJ9T+S1Khpa92
         1g5BBk7xuuB/C1SPdnepLJJW1o3hwZaCfwaqctB2t4iYP+UmuQEnSWKMWlgu1/iCErG7
         ZRLczH64upXkDdUg+RiHor859PkQpsTS7vCrfHy9Kfcf6+AlkP6ZY6CDSJoDMQtKnznu
         HwT/OD6yanKs7asBYhwHJGKOHedBB7QamwnlD2CXA5dFQHr27Yrxb6/1bRi4Jpb2Cd+R
         QN/A==
X-Gm-Message-State: AOAM532E76eZ9x5Bwsc3NIZ/JTYMBG57LSopbcKk7PldBU58w81mWG4E
	uRrlUDLDEodwKxFY2VawxaU=
X-Google-Smtp-Source: ABdhPJyb32ZKFNJY4jHifOiJODc1kdl9Eqf0asg+YnQoCPs0UYzxzNi6VMpQW3j4XS/I1pklMIS7eQ==
X-Received: by 2002:a1c:bd57:: with SMTP id n84mr8966251wmf.126.1602162535392;
        Thu, 08 Oct 2020 06:08:55 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id o3sm2772964wru.15.2020.10.08.06.08.54
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 08 Oct 2020 06:08:54 -0700 (PDT)
Date: Thu, 8 Oct 2020 13:08:53 +0000
From: Wei Liu <wl@xen.org>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH 3/5] libxl / iommu / domctl: introduce
 XEN_DOMCTL_IOMMU_SET_ALLOCATION...
Message-ID: <20201008130853.pwzxmex3uufi6emv@liuwe-devbox-debian-v2>
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-4-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201005094905.2929-4-paul@xen.org>
User-Agent: NeoMutt/20180716

On Mon, Oct 05, 2020 at 10:49:03AM +0100, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
[...]
> diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
> index 6ec6c27c83..9631974dd6 100644
> --- a/tools/libs/light/libxl_x86.c
> +++ b/tools/libs/light/libxl_x86.c
> @@ -520,6 +520,16 @@ int libxl__arch_domain_create(libxl__gc *gc, libxl_domain_config *d_config,
>                            NULL, 0, &shadow, 0, NULL);
>      }
>  
> +    if (d_config->b_info.iommu_memkb) {
> +        unsigned int nr_pages = DIV_ROUNDUP(d_config->b_info.iommu_memkb, 4);
> +

Please use XC_PAGE_SHIFT / XC_PAGE_SIZE for the calculation.

Wei.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 13:31:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 13:31:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4489.11703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQW14-0008BJ-Od; Thu, 08 Oct 2020 13:31:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4489.11703; Thu, 08 Oct 2020 13:31:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQW14-0008BC-LZ; Thu, 08 Oct 2020 13:31:30 +0000
Received: by outflank-mailman (input) for mailman id 4489;
 Thu, 08 Oct 2020 13:31:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PeKH=DP=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kQW13-0008B7-Dr
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 13:31:29 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3fea143d-6cc2-4203-a0d7-760aa7ba69fa;
 Thu, 08 Oct 2020 13:31:28 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id f21so5745153ljh.7
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 06:31:28 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=PeKH=DP=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kQW13-0008B7-Dr
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 13:31:29 +0000
X-Inumbo-ID: 3fea143d-6cc2-4203-a0d7-760aa7ba69fa
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3fea143d-6cc2-4203-a0d7-760aa7ba69fa;
	Thu, 08 Oct 2020 13:31:28 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id f21so5745153ljh.7
        for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 06:31:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=XxY5tVysIH111MlcFgaa6XJyUOWdKj60fn8nTmIS7ZU=;
        b=mIgvKrOmlpbB1xCsEQvYeD/mp2F1awXVZ2UzVEA5Jeo1lnrVxQp2WbQianhfyV/O+F
         3gKpsSX1pQv/7uh4XaCjor5ly0+GCSqflxMTpCOr11N5YODh5+0C9c5sSi+0nrKxABxK
         4aFz/LhFlJDj4uMM3+QJ1lAHmLYUVmke7JNFz6gKhTe08EAUufAEfxNrhVwf6lniC16v
         7sTgv22XD5yjpqmTTlacaygt1bUDN6FR5fSxOpIFhximwv7KmYY1iL36B6rmPozgzMW0
         cHF9GtDJfHezcUBnKcZUG8uKBKRzAma3FO/Uj1yjrVwxpENJZJPNtHvNn3XCCpE0VJ4G
         exkQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=XxY5tVysIH111MlcFgaa6XJyUOWdKj60fn8nTmIS7ZU=;
        b=LX2awGLVI5mrmiA6JwDFg1y9B3NlBdyj4hv8bvP8aYxESYRLnKRjrOXLtOaEE/WGIO
         O/IKjFiklbe1jnkveRRo7ZTt22Otpbm3xtg4nlNOkEZ9XB1sNUZzKwDXMFMuFi1AHmht
         UPrVbVNwRtxidEYQsGV6TZsFskbpMHym4hYknpL3thkTIE6wpiY0314FK9BXe14FtAso
         cmgNYHcoXuqmvUSNF38BabpBuo1Nt620r6sQM6AFRRm9wBpOz1Ck05NtNoJp5h8S9RpD
         ZSGSeb8YTsgWxHe1YP32F2gEf5ar8KNbJPjiXGeYTUpafhNEGExxt9vDh2RVpDktjfd6
         Pnvg==
X-Gm-Message-State: AOAM53068bABCmmbSbJJea20yBNqSKuZTYDiSHxtHh1nuSQeBLJuKIOJ
	91U56aHy5IRqyyek9oHsJb2CqyXMfinOaujIxtMQBLWPj60=
X-Google-Smtp-Source: ABdhPJz/GdflQ9RyPbOMn6d1xEV/b/a0AE0+Ss5B2q0bN9u9JdRB3vyvf9OhoqxN0dZbD/u8J79fYfkI1XnXGVhgwLA=
X-Received: by 2002:a2e:8782:: with SMTP id n2mr3503919lji.262.1602163887265;
 Thu, 08 Oct 2020 06:31:27 -0700 (PDT)
MIME-Version: 1.0
References: <20201001235337.83948-1-jandryuk@gmail.com> <20201007105049.vfpunr4g62fqvijr@liuwe-devbox-debian-v2>
In-Reply-To: <20201007105049.vfpunr4g62fqvijr@liuwe-devbox-debian-v2>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 8 Oct 2020 09:31:15 -0400
Message-ID: <CAKf6xptt_r6_VuRSwRXQRUR4Q39c_619e4iNxi8uVxV7YOHDBw@mail.gmail.com>
Subject: Re: [PATCH] libxl: only query VNC when enabled
To: Wei Liu <wl@xen.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Oct 7, 2020 at 6:50 AM Wei Liu <wl@xen.org> wrote:
>
> On Thu, Oct 01, 2020 at 07:53:37PM -0400, Jason Andryuk wrote:
> > QEMU without VNC support (configure --disable-vnc) will return an error
> > when VNC is queried over QMP since it does not recognize the QMP
> > command.  This will cause libxl to fail starting the domain even if VNC
> > is not enabled.  Therefore only query QEMU for VNC support when using
> > VNC, so a VNC-less QEMU will function in this configuration.
> >
> > 'goto out' jumps to the call to device_model_postconfig_done(), the
> > final callback after the chain of vnc queries.  This bypasses all the
> > QMP VNC queries.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> >  tools/libs/light/libxl_dm.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
> > index a944181781..d1ff35dda3 100644
> > --- a/tools/libs/light/libxl_dm.c
> > +++ b/tools/libs/light/libxl_dm.c
> > @@ -3140,6 +3140,7 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
> >  {
> >      EGC_GC;
> >      libxl__dm_spawn_state *dmss = CONTAINER_OF(qmp, *dmss, qmp);
> > +    const libxl_vnc_info *vnc = libxl__dm_vnc(dmss->guest_config);
> >      const libxl__json_object *item = NULL;
> >      const libxl__json_object *o = NULL;
> >      int i = 0;
> > @@ -3197,6 +3198,9 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
> >          if (rc) goto out;
> >      }
> >
> > +    if (!vnc)
> > +        goto out;
> > +
>
> I would rather this check be done in device_model_postconfig_vnc.
>
> Does the following work for you?

I like your version, but it doesn't work:
libxl: debug: libxl_qmp.c:1883:libxl__ev_qmp_send: Domain 1: ev
0x55aa58417d88, cmd 'query-vnc'
libxl: error: libxl_qmp.c:1836:qmp_ev_parse_error_messages: Domain
1:The command query-vnc has not been found
libxl: error: libxl_dm.c:3321:device_model_postconfig_done: Domain
1:Post DM startup configs failed, rc=-29

When QEMU has vnc disabled, it doesn't recognize query-vnc.  I looked
at modifying qemu to support query-vnc even with --disable-vnc, but it
was messy to untangle the QMP definitions.  Since we are telling libxl
not to use VNC, it makes sense not to query about it.

Regards,
Jason

> diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
> index a944181781bb..c5db755a65d7 100644
> --- a/tools/libs/light/libxl_dm.c
> +++ b/tools/libs/light/libxl_dm.c
> @@ -3222,6 +3222,8 @@ static void device_model_postconfig_vnc(libxl__egc *egc,
>
>      if (rc) goto out;
>
> +    if (!vnc) goto out;
> +
>      /*
>       * query-vnc response:
>       * { 'enabled': 'bool', '*host': 'str', '*service': 'str' }
> @@ -3255,7 +3257,8 @@ static void device_model_postconfig_vnc(libxl__egc *egc,
>          if (rc) goto out;
>      }
>
> -    if (vnc && vnc->passwd && vnc->passwd[0]) {
> +    assert(vnc);
> +    if (vnc->passwd && vnc->passwd[0]) {
>          qmp->callback = device_model_postconfig_vnc_passwd;
>          libxl__qmp_param_add_string(gc, &args, "password", vnc->passwd);
>          rc = libxl__ev_qmp_send(egc, qmp, "change-vnc-password", args);
>


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 14:30:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 14:30:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4492.11717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQWvi-0004fg-9r; Thu, 08 Oct 2020 14:30:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4492.11717; Thu, 08 Oct 2020 14:30:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQWvi-0004f6-67; Thu, 08 Oct 2020 14:30:02 +0000
Received: by outflank-mailman (input) for mailman id 4492;
 Thu, 08 Oct 2020 14:30:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQWvg-0004Uk-Fc
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 14:30:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4bb09f4-2304-4e4a-b993-6cc68ebb1849;
 Thu, 08 Oct 2020 14:29:52 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQWvY-0006GR-Hu; Thu, 08 Oct 2020 14:29:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQWvY-0003Vk-4P; Thu, 08 Oct 2020 14:29:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQWvY-0005rY-2s; Thu, 08 Oct 2020 14:29:52 +0000
X-Inumbo-ID: a4bb09f4-2304-4e4a-b993-6cc68ebb1849
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NPN1UUn3zfnwxEuI4D9rO6KSe5k8USF0nhDcEO94STs=; b=OwUyiL7JDgn/lIVvlVrRPcJ1Hl
	Rc1BmWFkW4aI2ibpWsA66eLMLyEIRsvh7JT2PW+GxuHsC7YHo+oRIU62nBFqEKweQGXixBzESP5iW
	fPzepHZU9SqSAIZy2KeM4X1TcbVfXGy+/I8y1NmkGIezysKtq4fXJ9/jJ+R+W3Fald9g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155532-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155532: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:guest-localmigrate/x10:fail:regression
    xen-unstable:test-amd64-amd64-pygrub:guest-localmigrate/x10:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
X-Osstest-Versions-That:
    xen=93508595d588afe9dca087f95200effb7cedc81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 08 Oct 2020 14:29:52 +0000

flight 155532 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155532/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64 16 guest-localmigrate/x10 fail REGR. vs. 155510
 test-amd64-amd64-pygrub      17 guest-localmigrate/x10   fail REGR. vs. 155510

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 155510
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 155510
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 155510
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 155510
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 155510
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 155510
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 155510
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 155510
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 155510
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
baseline version:
 xen                  93508595d588afe9dca087f95200effb7cedc81f

Last test of basis   155510  2020-10-07 04:00:55 Z    1 days
Testing same since   155532  2020-10-07 19:37:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
Author: Wei Liu <wl@xen.org>
Date:   Wed Oct 7 14:59:51 2020 +0000

    Revert "build: always use BASEDIR for xen sub-directory"
    
    This reverts commit e4e64408f5c755da3bf7bfd78e70ad9f6c448376.

commit e4e64408f5c755da3bf7bfd78e70ad9f6c448376
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Fri Oct 2 11:42:09 2020 +0100

    build: always use BASEDIR for xen sub-directory
    
    Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
    
    This removes the dependency on the xen subdirectory, preventing a
    wrong configuration file from being used when the xen subdirectory
    is duplicated for compilation tests.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)
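
For reference, the kind of Makefile edit the reverted commit made can be
sketched as follows. This is an illustrative fragment only (hypothetical
file and include path), not a hunk from the actual patch:

```make
# Illustrative sketch of the change described in commit e4e64408 above.
# Before: the path was derived from the tree root, so a duplicated xen/
# subtree (e.g. one copied aside for compile testing) could still pick
# up Rules.mk and its .config from the original location:
#
#   include $(XEN_ROOT)/xen/Rules.mk
#
# After: BASEDIR names the xen/ tree actually being built, so each copy
# uses its own configuration:
include $(BASEDIR)/Rules.mk
```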


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 14:52:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 14:52:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4497.11731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXHf-00074z-DL; Thu, 08 Oct 2020 14:52:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4497.11731; Thu, 08 Oct 2020 14:52:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXHf-00074s-AI; Thu, 08 Oct 2020 14:52:43 +0000
Received: by outflank-mailman (input) for mailman id 4497;
 Thu, 08 Oct 2020 14:52:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uUpI=DP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kQXHe-00074n-6G
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 14:52:42 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e937c2ca-adb6-4ab2-852f-fb4d5df2bf74;
 Thu, 08 Oct 2020 14:52:40 +0000 (UTC)
X-Inumbo-ID: e937c2ca-adb6-4ab2-852f-fb4d5df2bf74
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602168760;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=GljNlADrhBwR0lHtxxzzRDo6MX8DOuP2v2zSKoT+PvA=;
  b=En8m0NaZ8U+z/dPpaFJb2LSY2Bz1jrS+RYlYyuh9U3IelqY866CquA5R
   7oeJiHq6Eppu91ghTHiqjterl8NGRKA8y5jWFLUHKW2rWa/TeXJbxXhgH
   7KhUR4sClZDz54Gz1279uMIoueouNIVuZ9bMtyI4005pvXe8h3eAB+KxI
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +W/Mo867VPZVHBh+yRvNqntNMjCYjoK82wbO6EnPy1r9iDbQjHOqZwStoUuut2tSuXsu49UZym
 zpxkgHj3sdyloWJXGYenlv2LKSw4hOAnfJPNdj3mca1dvltEjMX5XatUuRWm6TGUvlElSYZsUX
 sv+d6Znx+jNmQYs+gY4+GWBBsKTDkDOfzgmmTkniZJh0uUyV/5uiMMf8QQY8M85Q4rfw7UnIL9
 20ss9h9I32YQYXbdS+wFwuQQ64k+pIIdXz7cpLEFTPGg6rHfNvFXG3/1WGsV2ZMlt5jLGfS78q
 N3E=
X-SBRS: None
X-MesageID: 28916378
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,351,1596513600"; 
   d="scan'208";a="28916378"
Date: Thu, 8 Oct 2020 16:52:29 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH v2 3/4] x86/shim: don't permit HVM and PV_SHIM_EXCLUSIVE
 at the same time
Message-ID: <20201008145229.GK19254@Air-de-Roger>
References: <c6b9c903-02eb-d473-86e3-ccb67aff6cd7@suse.com>
 <c94e4480-96a0-34b6-a4c6-6176daa57588@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c94e4480-96a0-34b6-a4c6-6176daa57588@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 16, 2020 at 03:08:00PM +0200, Jan Beulich wrote:
> This combination doesn't really make sense (and there likely are more);
> in particular even if the code built with both options set, HVM guests
> wouldn't work (and I think one wouldn't be able to create one in the
> first place). The alternative here would be some presumably intrusive
> #ifdef-ary to get this combination to actually build (but still not
> work) again.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I can see the desire for being able to remove code, and the point
Andrew made about one option not making another disappear in a
completely different menu section.

Yet I don't see how to reconcile the two, unless we completely
change our menu layouts, and even then I'm not sure I see how we could
structure this. Hence:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.
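
The Kconfig-level exclusion being acked here could look roughly like the
sketch below; the symbol names come from the thread above, but the exact
dependency wiring in the real patch may differ:

```kconfig
# Illustrative sketch only -- one way to forbid building HVM support
# into a PV-shim-exclusive hypervisor at configuration time.  The
# actual patch's wiring may differ.
config HVM
	bool "HVM support"
	depends on !PV_SHIM_EXCLUSIVE
	help
	  Build support for running HVM guests.  Not available when the
	  hypervisor is configured as a dedicated PV shim.
```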


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 15:21:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 15:21:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4499.11743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXjX-0001Pf-OG; Thu, 08 Oct 2020 15:21:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4499.11743; Thu, 08 Oct 2020 15:21:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXjX-0001PV-L7; Thu, 08 Oct 2020 15:21:31 +0000
Received: by outflank-mailman (input) for mailman id 4499;
 Thu, 08 Oct 2020 15:21:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uUpI=DP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kQXjX-0001PQ-0r
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 15:21:31 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 388cfe01-f52d-4982-8ff6-1300ac48f90d;
 Thu, 08 Oct 2020 15:21:29 +0000 (UTC)
X-Inumbo-ID: 388cfe01-f52d-4982-8ff6-1300ac48f90d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602170488;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=pE04uvuGBd9qJQ4+u9fj1g+5M0gYiYl5ZZNsiR4/NHw=;
  b=Bpz9mNnvLH4voFS9TyGQC7wmaQKScrPLJjDcX9vt19vTfTDC8HMMv5ix
   MGnCX+YDj/VutQV4s/r88ri9opB98WJ3q/R8wkzSkbtN4lHcZUG/PvjaO
   59pytdMVgWK6xFr7/e+GtTRf/0QZ2U6pMwlCPXgVSpgqEg0RbJT48+zsU
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: H8zRRZJ0JgMpUM1nL1yViZ95Dv7T1ujS7DvVa57D+K7tUaiR4qCa43k4Rq/Edf/un+7ijIbUmj
 LcE3pkWo8pYUUM1LUpia/pHv4Ynk/DaraR1ph40leMyeH1ygiKkO28s5/5LaYJAeVGaqxmkYIr
 N6q2t0lQLu+yH/h5F/dSmcgwEn+2aYS2s8ja/ZSXRDJxoIGiw1BLG/mdYfi2Jwu5XweDTI6XRW
 t2Cy25HCus+zI65T5GSKd6ZOuC3ERPNtYW9Cq7yMkP1FnF5ztzZG+dcO7BG3aB4KSbx3cXrX0b
 RmQ=
X-SBRS: None
X-MesageID: 29617263
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,351,1596513600"; 
   d="scan'208";a="29617263"
Date: Thu, 8 Oct 2020 17:15:56 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH v2 4/4] x86/shadow: refactor shadow_vram_{get,put}_l1e()
Message-ID: <20201008151556.GL19254@Air-de-Roger>
References: <c6b9c903-02eb-d473-86e3-ccb67aff6cd7@suse.com>
 <51515581-19f3-5b7c-a2f9-1a0b11f8283a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <51515581-19f3-5b7c-a2f9-1a0b11f8283a@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Sep 16, 2020 at 03:08:40PM +0200, Jan Beulich wrote:
> By passing the functions an MFN and flags, only a single instance of
                           ^ a
> each is needed; they were pretty large for being inline functions
> anyway.
> 
> While moving the code, also adjust coding style and add const where
> sensible / possible.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.
> 
> --- a/xen/arch/x86/mm/shadow/hvm.c
> +++ b/xen/arch/x86/mm/shadow/hvm.c
> @@ -903,6 +903,104 @@ int shadow_track_dirty_vram(struct domai
>      return rc;
>  }
>  
> +void shadow_vram_get_mfn(mfn_t mfn, unsigned int l1f,
> +                         mfn_t sl1mfn, const void *sl1e,
> +                         const struct domain *d)
> +{
> +    unsigned long gfn;
> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
> +
> +    ASSERT(is_hvm_domain(d));
> +
> +    if ( !dirty_vram /* tracking disabled? */ ||
> +         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
> +         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
> +        return;
> +
> +    gfn = gfn_x(mfn_to_gfn(d, mfn));
> +    /* Page sharing not supported on shadow PTs */
> +    BUG_ON(SHARED_M2P(gfn));
> +
> +    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
> +    {
> +        unsigned long i = gfn - dirty_vram->begin_pfn;
> +        const struct page_info *page = mfn_to_page(mfn);
> +
> +        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
> +            /* Initial guest reference, record it */
> +            dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn) |
> +                                   PAGE_OFFSET(sl1e);
> +    }
> +}
> +
> +void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
> +                         mfn_t sl1mfn, const void *sl1e,
> +                         const struct domain *d)
> +{
> +    unsigned long gfn;
> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
> +
> +    ASSERT(is_hvm_domain(d));
> +
> +    if ( !dirty_vram /* tracking disabled? */ ||
> +         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
> +         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
> +        return;
> +
> +    gfn = gfn_x(mfn_to_gfn(d, mfn));
> +    /* Page sharing not supported on shadow PTs */
> +    BUG_ON(SHARED_M2P(gfn));
> +
> +    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
> +    {
> +        unsigned long i = gfn - dirty_vram->begin_pfn;
> +        const struct page_info *page = mfn_to_page(mfn);
> +        bool dirty = false;
> +        paddr_t sl1ma = mfn_to_maddr(sl1mfn) | PAGE_OFFSET(sl1e);
> +
> +        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
> +        {
> +            /* Last reference */
> +            if ( dirty_vram->sl1ma[i] == INVALID_PADDR )
> +            {
> +                /* We didn't know it was that one, let's say it is dirty */
> +                dirty = true;
> +            }
> +            else
> +            {
> +                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
> +                if ( l1f & _PAGE_DIRTY )
> +                    dirty = true;
> +            }
> +        }
> +        else
> +        {
> +            /* We had more than one reference, just consider the page dirty. */
> +            dirty = true;
> +            /* Check that it's not the one we recorded. */
> +            if ( dirty_vram->sl1ma[i] == sl1ma )
> +            {
> +                /* Too bad, we remembered the wrong one... */
> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
> +            }
> +            else
> +            {
> +                /*
> +                 * Ok, our recorded sl1e is still pointing to this page, let's
> +                 * just hope it will remain.
> +                 */
> +            }
> +        }
> +
> +        if ( dirty )
> +        {
> +            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);

Could you use _set_bit here?
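For reference, the open-coded update above is a byte-granular set-bit. A self-contained sketch of what it computes (plain C with invented helper names; Xen's _set_bit/__set_bit operate on word-sized arrays, so this is only the arithmetic, not the tree's helper):

```c
#include <stdint.h>

/* Equivalent of dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8):
 * set bit i in a bitmap stored as an array of bytes. */
static void set_bit_byte(uint8_t *bitmap, unsigned long i)
{
    bitmap[i / 8] |= 1 << (i % 8);
}

/* Companion test helper: read back bit i. */
static int test_bit_byte(const uint8_t *bitmap, unsigned long i)
{
    return (bitmap[i / 8] >> (i % 8)) & 1;
}
```

Bit 9, for example, lands in byte 1 at bit position 1, so the byte becomes 0x02.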

> +            dirty_vram->last_dirty = NOW();
> +        }
> +    }
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -1047,107 +1047,6 @@ static int shadow_set_l2e(struct domain
>      return flags;
>  }
>  
> -static inline void shadow_vram_get_l1e(shadow_l1e_t new_sl1e,
> -                                       shadow_l1e_t *sl1e,
> -                                       mfn_t sl1mfn,
> -                                       struct domain *d)
> -{
> -#ifdef CONFIG_HVM
> -    mfn_t mfn = shadow_l1e_get_mfn(new_sl1e);
> -    int flags = shadow_l1e_get_flags(new_sl1e);
> -    unsigned long gfn;
> -    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
> -
> -    if ( !is_hvm_domain(d) || !dirty_vram /* tracking disabled? */
> -         || !(flags & _PAGE_RW) /* read-only mapping? */
> -         || !mfn_valid(mfn) )   /* mfn can be invalid in mmio_direct */
> -        return;
> -
> -    gfn = gfn_x(mfn_to_gfn(d, mfn));
> -    /* Page sharing not supported on shadow PTs */
> -    BUG_ON(SHARED_M2P(gfn));
> -
> -    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
> -    {
> -        unsigned long i = gfn - dirty_vram->begin_pfn;
> -        struct page_info *page = mfn_to_page(mfn);
> -
> -        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
> -            /* Initial guest reference, record it */
> -            dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn)
> -                | ((unsigned long)sl1e & ~PAGE_MASK);
> -    }
> -#endif
> -}
> -
> -static inline void shadow_vram_put_l1e(shadow_l1e_t old_sl1e,
> -                                       shadow_l1e_t *sl1e,
> -                                       mfn_t sl1mfn,
> -                                       struct domain *d)
> -{
> -#ifdef CONFIG_HVM
> -    mfn_t mfn = shadow_l1e_get_mfn(old_sl1e);
> -    int flags = shadow_l1e_get_flags(old_sl1e);
> -    unsigned long gfn;
> -    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
> -
> -    if ( !is_hvm_domain(d) || !dirty_vram /* tracking disabled? */
> -         || !(flags & _PAGE_RW) /* read-only mapping? */
> -         || !mfn_valid(mfn) )   /* mfn can be invalid in mmio_direct */
> -        return;
> -
> -    gfn = gfn_x(mfn_to_gfn(d, mfn));
> -    /* Page sharing not supported on shadow PTs */
> -    BUG_ON(SHARED_M2P(gfn));
> -
> -    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
> -    {
> -        unsigned long i = gfn - dirty_vram->begin_pfn;
> -        struct page_info *page = mfn_to_page(mfn);
> -        int dirty = 0;
> -        paddr_t sl1ma = mfn_to_maddr(sl1mfn)
> -            | ((unsigned long)sl1e & ~PAGE_MASK);
> -
> -        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
> -        {
> -            /* Last reference */
> -            if ( dirty_vram->sl1ma[i] == INVALID_PADDR ) {
> -                /* We didn't know it was that one, let's say it is dirty */
> -                dirty = 1;
> -            }
> -            else
> -            {
> -                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
> -                dirty_vram->sl1ma[i] = INVALID_PADDR;
> -                if ( flags & _PAGE_DIRTY )
> -                    dirty = 1;
> -            }
> -        }
> -        else
> -        {
> -            /* We had more than one reference, just consider the page dirty. */
> -            dirty = 1;
> -            /* Check that it's not the one we recorded. */
> -            if ( dirty_vram->sl1ma[i] == sl1ma )
> -            {
> -                /* Too bad, we remembered the wrong one... */
> -                dirty_vram->sl1ma[i] = INVALID_PADDR;
> -            }
> -            else
> -            {
> -                /* Ok, our recorded sl1e is still pointing to this page, let's
> -                 * just hope it will remain. */
> -            }
> -        }
> -        if ( dirty )
> -        {
> -            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
> -            dirty_vram->last_dirty = NOW();
> -        }
> -    }
> -#endif
> -}
> -
>  static int shadow_set_l1e(struct domain *d,
>                            shadow_l1e_t *sl1e,
>                            shadow_l1e_t new_sl1e,
> @@ -1156,6 +1055,7 @@ static int shadow_set_l1e(struct domain
>  {
>      int flags = 0;
>      shadow_l1e_t old_sl1e;
> +    unsigned int old_sl1f;
>  #if SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC
>      mfn_t new_gmfn = shadow_l1e_get_mfn(new_sl1e);
>  #endif
> @@ -1194,7 +1094,9 @@ static int shadow_set_l1e(struct domain
>                  new_sl1e = shadow_l1e_flip_flags(new_sl1e, rc);
>                  /* fall through */
>              case 0:
> -                shadow_vram_get_l1e(new_sl1e, sl1e, sl1mfn, d);
> +                shadow_vram_get_mfn(shadow_l1e_get_mfn(new_sl1e),
> +                                    shadow_l1e_get_flags(new_sl1e),
> +                                    sl1mfn, sl1e, d);

As you have moved this function into a HVM build time file, don't you
need to guard this call, or alternatively provide a dummy handler for
!CONFIG_HVM in private.h?

Same for shadow_vram_put_mfn.
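The dummy-handler alternative follows the usual pattern for functions that move into a conditionally built file: the header declares the real function for the enabled build and supplies an empty static inline for the disabled one, so common callers compile either way. A self-contained sketch of the pattern (all names here are illustrative, not the actual private.h contents):

```c
/* Sketch of the suggested pattern: when feature code moves into a file
 * built only for CONFIG_FEATURE, the header keeps common callers
 * compiling by providing an empty stub for the !CONFIG_FEATURE build.
 * CONFIG_FEATURE would normally come from the build system; it is left
 * undefined here, so the stub path is the one exercised. */

#ifdef CONFIG_FEATURE
void feature_hook(int arg);              /* real one lives in feature.c */
#else
static inline void feature_hook(int arg) /* dummy handler */
{
    (void)arg;
}
#endif

/* Caller in common code: no #ifdef needed at the call site. */
int common_code_path(int x)
{
    feature_hook(x);
    return x + 1;
}
```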

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 15:25:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 15:25:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4501.11755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXnb-0001as-9e; Thu, 08 Oct 2020 15:25:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4501.11755; Thu, 08 Oct 2020 15:25:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXnb-0001al-6U; Thu, 08 Oct 2020 15:25:43 +0000
Received: by outflank-mailman (input) for mailman id 4501;
 Thu, 08 Oct 2020 15:25:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PeKH=DP=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kQXna-0001ab-3U
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 15:25:42 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bbdba12-1b0d-4bc8-8db6-ee0ffbb2756b;
 Thu, 08 Oct 2020 15:25:41 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id b22so6918678lfs.13
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 08:25:41 -0700 (PDT)
X-Inumbo-ID: 1bbdba12-1b0d-4bc8-8db6-ee0ffbb2756b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=o3vUyC3fPQjFS7Epr+zzwzy+oDiIhOKRhXq6Ne0vIDc=;
        b=vYnmZRqlDZvuIvxvsljC0dYALaxDtlevM+jnRRCQ/e+BFBpth7kHKkBixnjcH8w2JF
         wEncK+6UMDOzEfoOGC/amtHQKQDRQkVEjGlXQXm2LkMIvljkJeDp2VdBIzRiukpgameq
         VbZyjV1WPDWCGvKLrK+eNh4KXNM/ODUYFRWemHsQOogSlmmMQ+dzpImtV6cRfi3uwz3T
         Az0eJttRhqezfrMp6xuvnA3kaWmd2YCDbm7SPTWxobZlmQOauzP1iTuG8UiYyaGOvQd/
         MBf+7jYZ471cwo+uQeR/AcPFnYKsg0/V/RMayRbL96v0vpQBBzgZd2IZac7xDDjPGeP5
         G2Iw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=o3vUyC3fPQjFS7Epr+zzwzy+oDiIhOKRhXq6Ne0vIDc=;
        b=LVSXnkGl9ORL9WHB1hXB5hZhWfFWPbAH4baEh8YlT6oz35sO+Y6s+ihKA15xUKaysC
         aJPwGxKtsAPE6Oo2WprNb3Dygll1BTFlWdCRP0eHXNGYrlve0u3hGRn3oi9gg4TlIgtJ
         vyps/Mg9QdnJaQx4q/zm0RmPATM3g0rhs7rCOX2X3ByndqqwG0sUqUiRWYMZ2KfTHOOs
         kPkaTDDQwnwzQ7R8k/2YLMAj0SvpI272jxxV6ahXO/CSxjA5iSXNDQDZ3VQs2TeppWFX
         om+zc3PskwYOgVayeGCeDROoiHbgwUK6gVu0d1Zl9xFIJJOgXZaEBb6p3eBd8HxAQhAw
         ORrw==
X-Gm-Message-State: AOAM5309LAu5lHdfFrW2eB63EhLocJd5oGJvnJ/MZxoCXZ1TdjGuP96S
	8U1qfAfA8/o1P+owhlnaN02mtp1o1mxiHTRrjZDQCTZ+
X-Google-Smtp-Source: ABdhPJxdvEltAz8+Ft7SbyMeNtROB9GBoZanB+1ucQX5+Hc9rqXsB/E4FP7NGqep70w4X8crYifICgIUUxZRomfH0+8=
X-Received: by 2002:ac2:44a4:: with SMTP id c4mr3019120lfm.365.1602170739775;
 Thu, 08 Oct 2020 08:25:39 -0700 (PDT)
MIME-Version: 1.0
References: <20201001235337.83948-1-jandryuk@gmail.com> <20201007105049.vfpunr4g62fqvijr@liuwe-devbox-debian-v2>
 <CAKf6xptt_r6_VuRSwRXQRUR4Q39c_619e4iNxi8uVxV7YOHDBw@mail.gmail.com>
In-Reply-To: <CAKf6xptt_r6_VuRSwRXQRUR4Q39c_619e4iNxi8uVxV7YOHDBw@mail.gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 8 Oct 2020 11:25:27 -0400
Message-ID: <CAKf6xpve+8uAbCYRW9n3cr+2tgNbT5UD9UhSkg5ZmLarQgCSXg@mail.gmail.com>
Subject: Re: [PATCH] libxl: only query VNC when enabled
To: Wei Liu <wl@xen.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 8, 2020 at 9:31 AM Jason Andryuk <jandryuk@gmail.com> wrote:
>
> On Wed, Oct 7, 2020 at 6:50 AM Wei Liu <wl@xen.org> wrote:
> >
> > On Thu, Oct 01, 2020 at 07:53:37PM -0400, Jason Andryuk wrote:
> > > QEMU without VNC support (configure --disable-vnc) will return an error
> > > when VNC is queried over QMP since it does not recognize the QMP
> > > command.  This will cause libxl to fail starting the domain even if VNC
> > > is not enabled.  Therefore only query QEMU for VNC support when using
> > > VNC, so a VNC-less QEMU will function in this configuration.
> > >
> > > 'goto out' jumps to the call to device_model_postconfig_done(), the
> > > final callback after the chain of vnc queries.  This bypasses all the
> > > QMP VNC queries.
> > >
> > > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > > ---
> > >  tools/libs/light/libxl_dm.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
> > > index a944181781..d1ff35dda3 100644
> > > --- a/tools/libs/light/libxl_dm.c
> > > +++ b/tools/libs/light/libxl_dm.c
> > > @@ -3140,6 +3140,7 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
> > >  {
> > >      EGC_GC;
> > >      libxl__dm_spawn_state *dmss = CONTAINER_OF(qmp, *dmss, qmp);
> > > +    const libxl_vnc_info *vnc = libxl__dm_vnc(dmss->guest_config);
> > >      const libxl__json_object *item = NULL;
> > >      const libxl__json_object *o = NULL;
> > >      int i = 0;
> > > @@ -3197,6 +3198,9 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
> > >          if (rc) goto out;
> > >      }
> > >
> > > +    if (!vnc)
> > > +        goto out;
> > > +
> >
> > I would rather this check be done in device_model_postconfig_vnc.
> >
> > Does the following work for you?
>
> I like your version, but it doesn't work:
> libxl: debug: libxl_qmp.c:1883:libxl__ev_qmp_send: Domain 1: ev
> 0x55aa58417d88, cmd 'query-vnc'
> libxl: error: libxl_qmp.c:1836:qmp_ev_parse_error_messages: Domain
> 1:The command query-vnc has not been found
> libxl: error: libxl_dm.c:3321:device_model_postconfig_done: Domain
> 1:Post DM startup configs failed, rc=-29
>
> When QEMU has vnc disabled, it doesn't recognize query-vnc.  I looked
> at modifying qemu to support query-vnc even with --disable-vnc, but it
> was messy to untangle the QMP definitions.  Since we are telling libxl
> not to use VNC, it makes sense not to query about it.

Oh, I should add that QEMU needs a small patch to allow -vnc none in
ui/vnc-stub.c when vnc is disabled.  This is because libxl always
passes -vnc none to ensure a default vnc is not created.

Regards,
Jason

>
> > diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
> > index a944181781bb..c5db755a65d7 100644
> > --- a/tools/libs/light/libxl_dm.c
> > +++ b/tools/libs/light/libxl_dm.c
> > @@ -3222,6 +3222,8 @@ static void device_model_postconfig_vnc(libxl__egc *egc,
> >
> >      if (rc) goto out;
> >
> > +    if (!vnc) goto out;
> > +
> >      /*
> >       * query-vnc response:
> >       * { 'enabled': 'bool', '*host': 'str', '*service': 'str' }
> > @@ -3255,7 +3257,8 @@ static void device_model_postconfig_vnc(libxl__egc *egc,
> >          if (rc) goto out;
> >      }
> >
> > -    if (vnc && vnc->passwd && vnc->passwd[0]) {
> > +    assert(vnc);
> > +    if (vnc->passwd && vnc->passwd[0]) {
> >          qmp->callback = device_model_postconfig_vnc_passwd;
> >          libxl__qmp_param_add_string(gc, &args, "password", vnc->passwd);
> >          rc = libxl__ev_qmp_send(egc, qmp, "change-vnc-password", args);
> >
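The control flow under discussion reduces to a callback chain where the chardev step either continues into the VNC queries or jumps straight to the done callback. A minimal stand-alone reduction of that shape (the function names echo libxl's, but the harness and counter are invented for illustration):

```c
#include <stddef.h>

/* Counts how many QMP VNC queries would have been sent. */
static int qmp_queries_sent;

/* Final callback after the postconfig chain ('out' target). */
static void postconfig_done(void)
{
}

/* Stand-in for the query-vnc step: sends a QMP command, then chains on. */
static void postconfig_vnc(void)
{
    qmp_queries_sent++;
    postconfig_done();
}

/* Stand-in for device_model_postconfig_chardev with the patch applied:
 * when no VNC is configured, skip directly to the done callback so a
 * QEMU built with --disable-vnc never sees query-vnc. */
static void postconfig_chardev(const void *vnc)
{
    /* ... chardev handling elided ... */
    if (!vnc) {
        postconfig_done();
        return;
    }
    postconfig_vnc();
}
```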


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 15:27:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 15:27:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4502.11767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXox-0001lW-MH; Thu, 08 Oct 2020 15:27:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4502.11767; Thu, 08 Oct 2020 15:27:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXox-0001lP-I9; Thu, 08 Oct 2020 15:27:07 +0000
Received: by outflank-mailman (input) for mailman id 4502;
 Thu, 08 Oct 2020 15:27:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQXov-0001lI-Pz
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 15:27:05 +0000
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e27c1249-41f5-407f-91b7-9bb8b7a8a2d5;
 Thu, 08 Oct 2020 15:27:04 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id i1so959533wro.1
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 08:27:04 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h4sm7792367wrv.11.2020.10.08.08.27.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 08 Oct 2020 08:27:03 -0700 (PDT)
X-Inumbo-ID: e27c1249-41f5-407f-91b7-9bb8b7a8a2d5
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=jExkVY6rle/g0eFEizOMcJcaaR8v0g7iI/uZrvF8cGk=;
        b=WzvwveNH865KHM6ALDOyL1WH/4+DoLv7Z0BE6ioR34QSJIY5SK2v5LwZJoSVwqvnIj
         v8Qbl847snh8G++tKFtHJ7ZL00WWuZkM54r05McoFndQMTflVU0vRC4SOBK3ZsZZysCG
         xiQJLyPyZ/bQFX6SJN5A7UaRtGe/6mV/MjF+AY2RepNfSB4j/Kc8IvSYN48dZgrZV1du
         p0+0TQEArsU75uPl+xyHvDB2hEmm3KgyV0/tjRxnD19z+oRgYxnXpq1KdUcLg81mSbhN
         vkcx4T4Xs49TTJHncQHRAcH6Y6VmKVQjI3W57e1h1E6nR9xcGT/T82d+M3naZxnWKiDO
         O0jQ==
X-Gm-Message-State: AOAM530gIuViY6e2JTyYPc2rgql1wbqTQV8Y/+OTAfUb57E/tOHqwkrK
	qDsSD5hvR+7cBNS1+D88XQU=
X-Google-Smtp-Source: ABdhPJwhUkZVTlvmjTaHadG3LcxQKAyegHKpY07zWWiKAIYNW9ovzxurOasOCC8NOBVJa082gH2zqQ==
X-Received: by 2002:adf:f508:: with SMTP id q8mr9449655wro.233.1602170823968;
        Thu, 08 Oct 2020 08:27:03 -0700 (PDT)
Date: Thu, 8 Oct 2020 15:27:02 +0000
From: Wei Liu <wl@xen.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Wei Liu <wl@xen.org>, xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH] libxl: only query VNC when enabled
Message-ID: <20201008152702.bckdhhrvl3p4p3qw@liuwe-devbox-debian-v2>
References: <20201001235337.83948-1-jandryuk@gmail.com>
 <20201007105049.vfpunr4g62fqvijr@liuwe-devbox-debian-v2>
 <CAKf6xptt_r6_VuRSwRXQRUR4Q39c_619e4iNxi8uVxV7YOHDBw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKf6xptt_r6_VuRSwRXQRUR4Q39c_619e4iNxi8uVxV7YOHDBw@mail.gmail.com>
User-Agent: NeoMutt/20180716

On Thu, Oct 08, 2020 at 09:31:15AM -0400, Jason Andryuk wrote:
> On Wed, Oct 7, 2020 at 6:50 AM Wei Liu <wl@xen.org> wrote:
> >
> > On Thu, Oct 01, 2020 at 07:53:37PM -0400, Jason Andryuk wrote:
> > > QEMU without VNC support (configure --disable-vnc) will return an error
> > > when VNC is queried over QMP since it does not recognize the QMP
> > > command.  This will cause libxl to fail starting the domain even if VNC
> > > is not enabled.  Therefore only query QEMU for VNC support when using
> > > VNC, so a VNC-less QEMU will function in this configuration.
> > >
> > > 'goto out' jumps to the call to device_model_postconfig_done(), the
> > > final callback after the chain of vnc queries.  This bypasses all the
> > > QMP VNC queries.
> > >
> > > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > > ---
> > >  tools/libs/light/libxl_dm.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
> > > index a944181781..d1ff35dda3 100644
> > > --- a/tools/libs/light/libxl_dm.c
> > > +++ b/tools/libs/light/libxl_dm.c
> > > @@ -3140,6 +3140,7 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
> > >  {
> > >      EGC_GC;
> > >      libxl__dm_spawn_state *dmss = CONTAINER_OF(qmp, *dmss, qmp);
> > > +    const libxl_vnc_info *vnc = libxl__dm_vnc(dmss->guest_config);
> > >      const libxl__json_object *item = NULL;
> > >      const libxl__json_object *o = NULL;
> > >      int i = 0;
> > > @@ -3197,6 +3198,9 @@ static void device_model_postconfig_chardev(libxl__egc *egc,
> > >          if (rc) goto out;
> > >      }
> > >
> > > +    if (!vnc)
> > > +        goto out;
> > > +
> >
> > I would rather this check be done in device_model_postconfig_vnc.
> >
> > Does the following work for you?
> 
> I like your version, but it doesn't work:
> libxl: debug: libxl_qmp.c:1883:libxl__ev_qmp_send: Domain 1: ev
> 0x55aa58417d88, cmd 'query-vnc'
> libxl: error: libxl_qmp.c:1836:qmp_ev_parse_error_messages: Domain
> 1:The command query-vnc has not been found
> libxl: error: libxl_dm.c:3321:device_model_postconfig_done: Domain
> 1:Post DM startup configs failed, rc=-29
> 
> When QEMU has vnc disabled, it doesn't recognize query-vnc.  I looked
> at modifying qemu to support query-vnc even with --disable-vnc, but it
> was messy to untangle the QMP definitions.  Since we are telling libxl
> not to use VNC, it makes sense not to query about it.

Ah, my bad. I misread the code. By the time libxl enters
device_model_postconfig_vnc the query-vnc command will have already been
issued. So your patch is fine.

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 15:32:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 15:32:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4505.11779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXtr-0002d3-9n; Thu, 08 Oct 2020 15:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4505.11779; Thu, 08 Oct 2020 15:32:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXtr-0002cw-6A; Thu, 08 Oct 2020 15:32:11 +0000
Received: by outflank-mailman (input) for mailman id 4505;
 Thu, 08 Oct 2020 15:32:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+9vM=DP=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kQXtq-0002cr-1m
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 15:32:10 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04fc85e3-2f1c-4eba-a5f6-9e4a1247aad4;
 Thu, 08 Oct 2020 15:32:09 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id w5so7091562wrp.8
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 08:32:08 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a17sm7835694wra.29.2020.10.08.08.32.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 08 Oct 2020 08:32:07 -0700 (PDT)
X-Inumbo-ID: 04fc85e3-2f1c-4eba-a5f6-9e4a1247aad4
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=GFEvg+G8XLVTz8WQ66J6MfKtZsIj4/8kxVUZHxyvJoo=;
        b=GtUqY1LXbKJ/LPmgwO/IwRCbgBAZaBveDK2NMC0hyDCMzHW/AVEuB1/Gggf8Hu0dE2
         INCavHuOlS0QszovnbsGMTSBfgCtnT2tCicJoL7Zh8acXN5AulTyRuuC+pXmM1jHx26J
         ysDhJzRVWloruQIBVIuvxMRGRWYNj1e8HzbC+OiXf45yoCIPvq1mrvMiutiHa59eXF87
         AiDL6iH93i6pJsMMGBq3pT9ullEueDlBdRHrNPRAG5djA8PnJjUiWvbXem4T5cwBvAly
         kORMBXv6Z2tJECBYs2pPtb4yE2EhmY2IaqWByEis2P07I+yRLTstWXm2G+k9+3ZUTgGS
         G1Kw==
X-Gm-Message-State: AOAM530NexjD7hen2yZHuZhNJIeaGOZ86mTEbVhSXeTX/c4lj8bKDR/8
	m1jtapqkEl4pZ7ecMx3RKkc=
X-Google-Smtp-Source: ABdhPJzZKGGn4IZeiu5hTjl+rIUDl3jtUF8aGQx2DLUTwo/0vTk3cJSHf0z0xGu1A9BDzNveYQVFpQ==
X-Received: by 2002:a5d:634d:: with SMTP id b13mr10254283wrw.324.1602171128053;
        Thu, 08 Oct 2020 08:32:08 -0700 (PDT)
Date: Thu, 8 Oct 2020 15:32:05 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>
Subject: Re: [PATCH 1/3] tools/libs: move official headers to common directory
Message-ID: <20201008153205.6quz43n7w7utli22@liuwe-devbox-debian-v2>
References: <20201002142214.3438-1-jgross@suse.com>
 <20201002142214.3438-2-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201002142214.3438-2-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Fri, Oct 02, 2020 at 04:22:12PM +0200, Juergen Gross wrote:
> Instead of each library having its own include directory, move the
> official headers to tools/include. This will drop the need to link
> those headers into tools/include, and there will no longer be any
> need for library-specific include paths when building Xen.
> 
> While at it remove setting of the unused variable
> PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
> 

Not sure about this.

Will this make packaging individual libraries more difficult?
Maintainers will need to comb through a large amount of headers to pick
the ones they want.

What do others think?

Wei.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 15:37:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 15:37:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4509.11791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXzG-0002u8-3E; Thu, 08 Oct 2020 15:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4509.11791; Thu, 08 Oct 2020 15:37:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQXzF-0002u1-VH; Thu, 08 Oct 2020 15:37:45 +0000
Received: by outflank-mailman (input) for mailman id 4509;
 Thu, 08 Oct 2020 15:37:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vbpC=DP=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kQXzF-0002tw-78
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 15:37:45 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d522df77-337c-47bc-8229-6c4152d4aa34;
 Thu, 08 Oct 2020 15:37:43 +0000 (UTC)
X-Inumbo-ID: d522df77-337c-47bc-8229-6c4152d4aa34
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602171463;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=1YYGOIXmzwBA9TpHl7w5i85Ax3lz5yMlIe57VEEsLlU=;
  b=Wa35wCD5Y1WLR2qmjh37hP8dZ4Sl7t4+X1zeTGHzispkqg1d63f/gqeL
   A2rWukLgMAfd3wqE4CE87I1Q6gNozEEHYMbNAJd0y9fLsP/fZUoN5FUh1
   zEWWx+A7MTuWe/9NTEfCMTd7H2783jVeK8v0BePRkGbtx1mPEFduKhH3w
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: JkA4QJ2bG+bh9hgFbhym4A9gWPq0Oz9tiMcsNYUTw8upS3GODH3AD+lbJsjZM3d5sNq9i/jXJ5
 t81JD778VIXog3dCI/qQKgvWipfU11EDOqtcmGq2loCcNaoP34NJQ9Kz0VaO5AWKK5mt9IRjof
 EI6RHdjtH6/7HDcMXn/9qz/Y3IoyuMPYCULgSuPRdcV9WQ1lMg9df9rmI0R1mTtl0pRmk/6r/+
 lblCzGRnjbBnKB1Vk9gp+h3e1Ched8uWDEpDKLmQgoR+gIslWlLFeOaqv1F2sdidxnyqbwnQPL
 69g=
X-SBRS: None
X-MesageID: 29619216
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,351,1596513600"; 
   d="scan'208";a="29619216"
Subject: Re: [PATCH v2 4/4] x86/shadow: refactor shadow_vram_{get,put}_l1e()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Jan Beulich
	<jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Liu
	<wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan
	<tim@xen.org>
References: <c6b9c903-02eb-d473-86e3-ccb67aff6cd7@suse.com>
 <51515581-19f3-5b7c-a2f9-1a0b11f8283a@suse.com>
 <20201008151556.GL19254@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e769e1ae-fd2f-881e-4dcc-3cbf40d6b732@citrix.com>
Date: Thu, 8 Oct 2020 16:36:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201008151556.GL19254@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 08/10/2020 16:15, Roger Pau Monné wrote:
> On Wed, Sep 16, 2020 at 03:08:40PM +0200, Jan Beulich wrote:
>> By passing the functions an MFN and flags, only a single instance of
>                            ^ a

'an' is correct.

an MFN
a Machine Frame Number

because the pronunciation changes.  "an" precedes anything with a vowel
sound, not just vowels themselves.  (Isn't English great...)

>> each is needed; they were pretty large for being inline functions
>> anyway.
>>
>> While moving the code, also adjust coding style and add const where
>> sensible / possible.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: New.
>>
>> --- a/xen/arch/x86/mm/shadow/hvm.c
>> +++ b/xen/arch/x86/mm/shadow/hvm.c
>> @@ -903,6 +903,104 @@ int shadow_track_dirty_vram(struct domai
>>      return rc;
>>  }
>>  
>> +void shadow_vram_get_mfn(mfn_t mfn, unsigned int l1f,
>> +                         mfn_t sl1mfn, const void *sl1e,
>> +                         const struct domain *d)
>> +{
>> +    unsigned long gfn;
>> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
>> +
>> +    ASSERT(is_hvm_domain(d));
>> +
>> +    if ( !dirty_vram /* tracking disabled? */ ||
>> +         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
>> +         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
>> +        return;
>> +
>> +    gfn = gfn_x(mfn_to_gfn(d, mfn));
>> +    /* Page sharing not supported on shadow PTs */
>> +    BUG_ON(SHARED_M2P(gfn));
>> +
>> +    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
>> +    {
>> +        unsigned long i = gfn - dirty_vram->begin_pfn;
>> +        const struct page_info *page = mfn_to_page(mfn);
>> +
>> +        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
>> +            /* Initial guest reference, record it */
>> +            dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn) |
>> +                                   PAGE_OFFSET(sl1e);
>> +    }
>> +}
>> +
>> +void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
>> +                         mfn_t sl1mfn, const void *sl1e,
>> +                         const struct domain *d)
>> +{
>> +    unsigned long gfn;
>> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
>> +
>> +    ASSERT(is_hvm_domain(d));
>> +
>> +    if ( !dirty_vram /* tracking disabled? */ ||
>> +         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
>> +         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
>> +        return;
>> +
>> +    gfn = gfn_x(mfn_to_gfn(d, mfn));
>> +    /* Page sharing not supported on shadow PTs */
>> +    BUG_ON(SHARED_M2P(gfn));
>> +
>> +    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
>> +    {
>> +        unsigned long i = gfn - dirty_vram->begin_pfn;
>> +        const struct page_info *page = mfn_to_page(mfn);
>> +        bool dirty = false;
>> +        paddr_t sl1ma = mfn_to_maddr(sl1mfn) | PAGE_OFFSET(sl1e);
>> +
>> +        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
>> +        {
>> +            /* Last reference */
>> +            if ( dirty_vram->sl1ma[i] == INVALID_PADDR )
>> +            {
>> +                /* We didn't know it was that one, let's say it is dirty */
>> +                dirty = true;
>> +            }
>> +            else
>> +            {
>> +                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
>> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
>> +                if ( l1f & _PAGE_DIRTY )
>> +                    dirty = true;
>> +            }
>> +        }
>> +        else
>> +        {
>> +            /* We had more than one reference, just consider the page dirty. */
>> +            dirty = true;
>> +            /* Check that it's not the one we recorded. */
>> +            if ( dirty_vram->sl1ma[i] == sl1ma )
>> +            {
>> +                /* Too bad, we remembered the wrong one... */
>> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
>> +            }
>> +            else
>> +            {
>> +                /*
>> +                 * Ok, our recorded sl1e is still pointing to this page, let's
>> +                 * just hope it will remain.
>> +                 */
>> +            }
>> +        }
>> +
>> +        if ( dirty )
>> +        {
>> +            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
> Could you use _set_bit here?

__set_bit() uses 4-byte accesses.  This uses 1-byte accesses.

Last I checked, there is a boundary issue at the end of the dirty_bitmap.

Both Julien and I have considered changing our bit infrastructure to use
byte accesses, which would make them more generally useful.
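To illustrate the distinction, here is a minimal sketch in plain C (outside Xen; the function name is made up) of the byte-granular update the patch performs. Bit i lives in byte i / 8, so only that single byte is ever touched, whereas a word-granular helper (as described for __set_bit() above) would access the whole 32-bit word containing the bit, which can reach past the end of a bitmap sized in whole bytes:

```c
#include <stdint.h>

/* Hypothetical sketch: byte-granular dirty-bitmap update.  Only the
 * one byte holding bit i is read and written, so a bitmap whose size
 * is rounded up to a byte boundary is never overrun. */
static void set_dirty_byte(uint8_t *bitmap, unsigned long i)
{
    bitmap[i / 8] |= (uint8_t)(1u << (i % 8));
}
```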

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 15:46:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 15:46:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4511.11803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQY7w-0003tj-W6; Thu, 08 Oct 2020 15:46:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4511.11803; Thu, 08 Oct 2020 15:46:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQY7w-0003tc-Sp; Thu, 08 Oct 2020 15:46:44 +0000
Received: by outflank-mailman (input) for mailman id 4511;
 Thu, 08 Oct 2020 15:46:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDYt=DP=kernel.org=mhiramat@srs-us1.protection.inumbo.net>)
 id 1kQY7w-0003tX-0t
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 15:46:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9182a750-5deb-43f4-a89b-75c0cbb7b4d5;
 Thu, 08 Oct 2020 15:46:42 +0000 (UTC)
Received: from localhost.localdomain (NE2965lan1.rev.em-net.ne.jp
 [210.141.244.193])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0092220739;
 Thu,  8 Oct 2020 15:46:39 +0000 (UTC)
X-Inumbo-ID: 9182a750-5deb-43f4-a89b-75c0cbb7b4d5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602172001;
	bh=URIrqbBZFVv8zRZdEp54wBusdj0zVHMR07U5vdNs8mo=;
	h=From:To:Cc:Subject:Date:From;
	b=TZ9QNy0j3cHF52KEoAsDg3e2O0fg8cBtYwcgVxf6EsTB8WYcC7QGE6i0PceSPVN/z
	 2b5yTSx3VmP0gAz5PALL10zmdiXh1x059sJ8/zFq0PKFRzsaQAoFrL8eDPMvP3OOdt
	 UTkzTKRamDYaHTkQbpzFiTjKWRLx+FAKtndIePUc=
From: Masami Hiramatsu <mhiramat@kernel.org>
To: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	takahiro.akashi@linaro.org
Subject: [PATCH v3] arm/arm64: xen: Fix to convert percpu address to gfn correctly
Date: Fri,  9 Oct 2020 00:46:37 +0900
Message-Id: <160217199753.214054.18187130778764820148.stgit@devnote2>
X-Mailer: git-send-email 2.25.1
User-Agent: StGit/0.19
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Use per_cpu_ptr_to_phys() instead of virt_to_phys() and __pa()
for per-cpu address conversion.

In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
assumes the given virtual address is in the linearly mapped kernel
memory area, it cannot convert per-cpu memory that is allocated in
the vmalloc area.

Whether this happens depends on where the xen_vcpu_info percpu object
is allocated. If CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y, the first
percpu chunk is in the linearly mapped kernel area. If that first
chunk is large enough, xen_guest_init() can allocate xen_vcpu_info
from it. However, if it is not large enough, or if subsystem init
functions that run before xen_guest_init() exhaust the first chunk,
xen_vcpu_info is allocated from the second percpu chunk, which can be
in vmalloc memory (when CONFIG_NEED_PER_CPU_KM=n).
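The failure mode above can be sketched in a self-contained userspace toy (all names and constants here are made up, not the kernel's): a linear-map conversion such as virt_to_phys()/__pa() is just a constant offset, so it silently returns garbage for any address outside the linearly mapped region, while a lookup-based conversion in the style of per_cpu_ptr_to_phys() consults the actual mapping:

```c
#include <stdint.h>

#define LINEAR_BASE 0x1000000UL /* toy linear-map virtual start */
#define LINEAR_PHYS 0x4000000UL /* physical address it maps     */
#define LINEAR_SIZE 0x0100000UL

/* Offset-only conversion: only valid inside the linear map. */
static unsigned long toy_virt_to_phys(unsigned long va)
{
    return va - LINEAR_BASE + LINEAR_PHYS;
}

/* One toy "vmalloc" page, backed by an explicit mapping entry. */
#define VMALLOC_VA   0x8000000UL
#define VMALLOC_PHYS 0x7654000UL

/* Lookup-based conversion: handles both regions correctly. */
static unsigned long toy_addr_to_phys(unsigned long va)
{
    if (va >= LINEAR_BASE && va < LINEAR_BASE + LINEAR_SIZE)
        return toy_virt_to_phys(va);
    if ((va & ~0xfffUL) == VMALLOC_VA) /* consult the mapping */
        return VMALLOC_PHYS | (va & 0xfffUL);
    return (unsigned long)-1;
}
```

Applying the offset-only conversion to the vmalloc-style address produces a wrong physical address without any error, which is exactly why the WARN/BUG below only fires later, at registration time.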

Without this fix, if xen_vcpu_info is unfortunately allocated in the
vmalloc area, the Dom0 kernel will fail to boot with the following
errors.

[    0.466172] Xen: initializing cpu0
[    0.469601] ------------[ cut here ]------------
[    0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180
[    0.484435] Modules linked in:
[    0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4
[    0.493895] Hardware name: Socionext Developer Box (DT)
[    0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.504836] pc : xen_starting_cpu+0x160/0x180
[    0.509263] lr : xen_starting_cpu+0xb0/0x180
[    0.513599] sp : ffff8000116cbb60
[    0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.522366] x27: 0000000000000000 x26: 0000000000000000
[    0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.533129] x23: 0000000000000000 x22: 0000000000000000
[    0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.543892] x19: ffff8000113a9988 x18: 0000000000000010
[    0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.554655] x15: ffffffffffffffff x14: 0720072007200720
[    0.560037] x13: 0720072007200720 x12: 0720072007200720
[    0.565418] x11: 0720072007200720 x10: 0720072007200720
[    0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.586945] x3 : 0000000000000000 x2 : ffff80000abec000
[    0.592327] x1 : 000000000000002f x0 : 0000800000000000
[    0.597716] Call trace:
[    0.600232]  xen_starting_cpu+0x160/0x180
[    0.604309]  cpuhp_invoke_callback+0xac/0x640
[    0.608736]  cpuhp_issue_call+0xf4/0x150
[    0.612728]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.618030]  __cpuhp_setup_state+0x84/0xf8
[    0.622192]  xen_guest_init+0x324/0x364
[    0.626097]  do_one_initcall+0x54/0x250
[    0.630003]  kernel_init_freeable+0x12c/0x2c8
[    0.634428]  kernel_init+0x1c/0x128
[    0.637988]  ret_from_fork+0x10/0x18
[    0.641635] ---[ end trace d95b5309a33f8b27 ]---
[    0.646337] ------------[ cut here ]------------
[    0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158!
[    0.657697] Internal error: Oops - BUG: 0 [#1] SMP
[    0.662548] Modules linked in:
[    0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.9.0-rc4+ #4
[    0.673398] Hardware name: Socionext Developer Box (DT)
[    0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.684338] pc : xen_starting_cpu+0x178/0x180
[    0.688765] lr : xen_starting_cpu+0x144/0x180
[    0.693188] sp : ffff8000116cbb60
[    0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.701955] x27: 0000000000000000 x26: 0000000000000000
[    0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.712718] x23: 0000000000000000 x22: 0000000000000000
[    0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.723481] x19: ffff8000113a9988 x18: 0000000000000010
[    0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.734245] x15: ffffffffffffffff x14: 0720072007200720
[    0.739626] x13: 0720072007200720 x12: 0720072007200720
[    0.745008] x11: 0720072007200720 x10: 0720072007200720
[    0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef
[    0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea
[    0.777304] Call trace:
[    0.779819]  xen_starting_cpu+0x178/0x180
[    0.783898]  cpuhp_invoke_callback+0xac/0x640
[    0.788325]  cpuhp_issue_call+0xf4/0x150
[    0.792317]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.797619]  __cpuhp_setup_state+0x84/0xf8
[    0.801779]  xen_guest_init+0x324/0x364
[    0.805683]  do_one_initcall+0x54/0x250
[    0.809590]  kernel_init_freeable+0x12c/0x2c8
[    0.814016]  kernel_init+0x1c/0x128
[    0.817583]  ret_from_fork+0x10/0x18
[    0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000)
[    0.827415] ---[ end trace d95b5309a33f8b28 ]---
[    0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

Theoretically, this issue was introduced by commit 9a9ab3cc00dc
("xen/arm: SMP support"), because it uses __pa() on a percpu address.

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 Changes in v3:
   Update patch description to explain the mechanism of the problem.
---
 arch/arm/xen/enlighten.c |    2 +-
 include/xen/arm/page.h   |    3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index e93145d72c26..a6ab3689b2f4 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
 
-	info.mfn = virt_to_gfn(vcpup);
+	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
 
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index 39df751d0dc4..ac1b65470563 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
 	})
 #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
 
+#define percpu_to_gfn(v)	\
+	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
+
 /* Only used in PV code. But ARM guests are always HVM. */
 static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 {



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 16:08:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 16:08:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4518.11814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQYSt-0006JW-S7; Thu, 08 Oct 2020 16:08:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4518.11814; Thu, 08 Oct 2020 16:08:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQYSt-0006JP-P5; Thu, 08 Oct 2020 16:08:23 +0000
Received: by outflank-mailman (input) for mailman id 4518;
 Thu, 08 Oct 2020 16:08:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQYSs-0006JK-Fy
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 16:08:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95c9804f-915b-4d91-a85f-dc93422bc1fd;
 Thu, 08 Oct 2020 16:08:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQYSq-0000RA-DY; Thu, 08 Oct 2020 16:08:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQYSq-0001Bj-3f; Thu, 08 Oct 2020 16:08:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQYSq-0001vB-39; Thu, 08 Oct 2020 16:08:20 +0000
X-Inumbo-ID: 95c9804f-915b-4d91-a85f-dc93422bc1fd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eKevr3XPbcAJqjKR7j+3cx6joG4t1vZsoURDsfk+8bY=; b=OezmghDf4oDL/x/li6GUOAY5O0
	0ZEjtW/sJZjwWjk67K2ZsqWwdHl2tDmVcT1y6gGu2aPIfUWQJQbeI/ykUSiO3Y4BwnW9eRDslJ/ld
	8xq26rcvpA8yt1lDP+WPGB1DnmUZ5i4bzGYvJQGvVWCCMY4on/5RrvFlJ7TYUkQD36I8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155547-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155547: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0241809bf838875615797f52af34222e5ab8e98f
X-Osstest-Versions-That:
    xen=7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 08 Oct 2020 16:08:20 +0000

flight 155547 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155547/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0241809bf838875615797f52af34222e5ab8e98f
baseline version:
 xen                  7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a

Last test of basis   155521  2020-10-07 16:00:31 Z    1 days
Testing same since   155547  2020-10-08 13:01:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7a519f8bda..0241809bf8  0241809bf838875615797f52af34222e5ab8e98f -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 16:28:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 16:28:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4526.11831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQYmO-0008Dr-Mo; Thu, 08 Oct 2020 16:28:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4526.11831; Thu, 08 Oct 2020 16:28:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQYmO-0008Dk-Js; Thu, 08 Oct 2020 16:28:32 +0000
Received: by outflank-mailman (input) for mailman id 4526;
 Thu, 08 Oct 2020 16:28:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uUpI=DP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kQYmO-0008Df-8P
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 16:28:32 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7941e7bb-7831-4762-8da7-7db0d757b870;
 Thu, 08 Oct 2020 16:28:30 +0000 (UTC)
X-Inumbo-ID: 7941e7bb-7831-4762-8da7-7db0d757b870
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602174510;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=3N62fHmZztOzbsPScfZDPlmQCibQFLFqXrLGiviPvWw=;
  b=gDom9NiVXKLDBWqZ+qLdDiGrDki9+Xxci32Va9UolJ+GIKq15m4cxFET
   bqL9pLLV4zvKRlpotnjP0tpfLUcyPfR/Lt2UdrtUs1F9/HcS6fd3CUpDQ
   XSzA0DqOPwERqc0Da5QB1N1EeQtdfD8LzVqGP5uLOlTAZ60fbA8XPZb6c
   4=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: k0o/W4GWV460WRM5Azj6t6ibq/hfQ3CzdGWx7dnMHnQ9lwsnT1bEu4GWac5fG5nBioMjSVQkau
 bxAO83kJ0RRk5iNyK24iKI14GBQbFHvb6p1ZYNX0oU31yzXTnDflnAOT1JqSuxzKAzU5R6RnNt
 LWVrGScrZn2wBO4xRGzq1dtN0QBkY6Jf24WivDO9L9a17g1xmPM6MwShunTB6KJt9Xhsf4ttT+
 PMPZ6974VCq91rT7128q/wWHFb4ExMwEm6psEWS02nlAAdOIHrQVmTafLl/pn9GVkTvzXBZXZ/
 Odk=
X-SBRS: None
X-MesageID: 28856212
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,351,1596513600"; 
   d="scan'208";a="28856212"
Date: Thu, 8 Oct 2020 18:28:22 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 5/6] x86: guard against straight-line speculation past
 RET
Message-ID: <20201008162822.GM19254@Air-de-Roger>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <fd18939c-cfc7-6de8-07f2-217f810afde1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <fd18939c-cfc7-6de8-07f2-217f810afde1@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Mon, Sep 28, 2020 at 02:31:49PM +0200, Jan Beulich wrote:
> Under certain conditions CPUs can speculate into the instruction stream
> past a RET instruction. Guard against this just like 3b7dab93f240
> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> achieve this that differ: A set of macros gets introduced to post-
> process RET insns issued by the compiler (or living in assembly files).
> 
> Unfortunately for clang this requires further features their built-in
> assembler doesn't support: We need to be able to override insn mnemonics
> produced by the compiler (which may be impossible, if internally
> assembly mnemonics never get generated), and we want to use \(text)
> escaping / quoting in the auxiliary macro.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Code LGTM.

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

See below for the TBD.

> ---
> TBD: Should this depend on CONFIG_SPECULATIVE_HARDEN_BRANCH?

I don't see the additions done in 3b7dab93f240 being guarded by
CONFIG_SPECULATIVE_HARDEN_BRANCH, so in that regard I would say no.
However, those are already guarded by CONFIG_INDIRECT_THUNK, so it's
slightly odd that the addition of such protections cannot be turned
off in any way.

I would be fine with having the additions done in 3b7dab93f240 guarded
by CONFIG_SPECULATIVE_HARDEN_BRANCH, and then the additions done here
as well.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 16:36:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 16:36:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4529.11844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQYtV-0000gv-G3; Thu, 08 Oct 2020 16:35:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4529.11844; Thu, 08 Oct 2020 16:35:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQYtV-0000go-CZ; Thu, 08 Oct 2020 16:35:53 +0000
Received: by outflank-mailman (input) for mailman id 4529;
 Thu, 08 Oct 2020 16:35:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uUpI=DP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kQYtU-0000gj-5L
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 16:35:52 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f77dbc1e-b12c-464e-95a1-364385fe5387;
 Thu, 08 Oct 2020 16:35:51 +0000 (UTC)
X-Inumbo-ID: f77dbc1e-b12c-464e-95a1-364385fe5387
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602174951;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=1rsD1tf3DdaaL5KXzOKC2r5GJlph1c1FYx7mIzcvdx8=;
  b=GY4DV2cZKuuzIWpaS5rH3B22XPk8d9qZ3hZ2Ncp0heDBzAvSCu9B2Rv+
   gt19sS8wHWKZlj/khOwRPPkxmKjAd31T5I313f5lq5tRMFMAiELyEftw5
   iiySglSIuHiYnHR+PbPMgEVmXamTHoFxf3RxEW4yj7flx++xvF2JdOnco
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: zWNqFa0adWT5dBcnQf/+qONM/SOOdHUGj4VmpNszqiw++T/lsC+sCgxepJt9ox+lJBvGc2l54a
 oSZ/llmsSjp7k50wX34zNq3t/ShfdM13cR/KnB7e4naXY8EI9M2ZGAhWWjSf0KExKYeKuuhhif
 JykAhqGC218ftv8GksTG/c4I1bwZA3efq9u90h4Cy9wKVM9b720Osl/u5csa/jh5mxhvbHwmZF
 QT5yIP0+r1ODJv0vZD5qEitXm3BFUSuiizQogs1KmLjC7zTVoV2JsP2PEqRWY39FQVeb7jKTlg
 39g=
X-SBRS: None
X-MesageID: 29627191
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,351,1596513600"; 
   d="scan'208";a="29627191"
Date: Thu, 8 Oct 2020 18:35:41 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v2 6/6] x86: limit amount of INT3 in IND_THUNK_*
Message-ID: <20201008163541.GN19254@Air-de-Roger>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <4d66eb4d-4044-8b48-d7cc-354a236e6b26@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4d66eb4d-4044-8b48-d7cc-354a236e6b26@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Mon, Sep 28, 2020 at 02:32:24PM +0200, Jan Beulich wrote:
> There's no point in having every replacement variant also specify the
> INT3 - just have it once in the base macro. When patching, NOPs will
> get inserted, which are fine to speculate through (until reaching the
> INT3).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> I also wonder whether the LFENCE in IND_THUNK_RETPOLINE couldn't be
> replaced by an INT3 as well. Of course the effect would be marginal,
> as the size of the thunk will still be 16 bytes including the tail
> padding resulting from alignment.

I think Andrew is the best one to have an opinion on this.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 17:08:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 17:08:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4537.11856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQZPA-0003X5-B3; Thu, 08 Oct 2020 17:08:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4537.11856; Thu, 08 Oct 2020 17:08:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQZPA-0003Wy-7y; Thu, 08 Oct 2020 17:08:36 +0000
Received: by outflank-mailman (input) for mailman id 4537;
 Thu, 08 Oct 2020 17:08:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t5hJ=DP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kQZP8-0003Wt-3D
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 17:08:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbc64f9c-ab33-4471-a2d8-7f0f1259b474;
 Thu, 08 Oct 2020 17:08:33 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 079F9221FC;
 Thu,  8 Oct 2020 17:08:31 +0000 (UTC)
X-Inumbo-ID: cbc64f9c-ab33-4471-a2d8-7f0f1259b474
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602176912;
	bh=BqYzm9Yn2mq4gbbl1eMR5Ck0wZRtKdy6D1Ch0d3tuH0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=R/HPmurTuIbFDHRmZbkTWOYofiZIb8RqOskFuITiU1TEu/enjUe5+NN2WB1U9BtRh
	 GWAG1K4WBKWtdRKAW5eYmt3L7FCbENPjALzON2dcqcgqcqoQmUoSzKX/NZk3KpnTuP
	 moJsb/sq5f6rqud85CFB3R2Fv76cktfAUHEaajlk=
Date: Thu, 8 Oct 2020 10:08:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Masami Hiramatsu <mhiramat@kernel.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, 
    takahiro.akashi@linaro.org
Subject: Re: [PATCH v3] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
In-Reply-To: <160217199753.214054.18187130778764820148.stgit@devnote2>
Message-ID: <alpine.DEB.2.21.2010081006200.23978@sstabellini-ThinkPad-T480s>
References: <160217199753.214054.18187130778764820148.stgit@devnote2>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Thanks Masami, the patch is already in our queue for going upstream.

On Fri, 9 Oct 2020, Masami Hiramatsu wrote:
> Use per_cpu_ptr_to_phys() instead of virt_to_phys() and __pa()
> for per-cpu address conversion.
> 
> In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
> to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
> assumes the given virtual address is in the linearly mapped kernel
> memory area, it cannot convert the per-cpu memory if it is allocated
> in the vmalloc area.
> 
> This depends on where the xen_vcpu_info percpu object is allocated
> from. If CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK=y, the first chunk of
> percpu memory is in the kernel's linearly mapped area. If the first
> chunk is large enough, xen_guest_init() can allocate xen_vcpu_info
> from it. However, if it is not large enough, or if other subsystem
> init functions called before xen_guest_init() exhaust the first
> chunk, xen_vcpu_info is allocated from the 2nd percpu chunk, which
> can be in vmalloc memory (when CONFIG_NEED_PER_CPU_KM=n).
> 
> Without this fix, if xen_vcpu_info happens to be allocated in the
> vmalloc area, the Dom0 kernel fails to boot with the following
> errors.
> 
> [    0.466172] Xen: initializing cpu0
> [    0.469601] ------------[ cut here ]------------
> [    0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180
> [    0.484435] Modules linked in:
> [    0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4
> [    0.493895] Hardware name: Socionext Developer Box (DT)
> [    0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
> [    0.504836] pc : xen_starting_cpu+0x160/0x180
> [    0.509263] lr : xen_starting_cpu+0xb0/0x180
> [    0.513599] sp : ffff8000116cbb60
> [    0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000
> [    0.522366] x27: 0000000000000000 x26: 0000000000000000
> [    0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600
> [    0.533129] x23: 0000000000000000 x22: 0000000000000000
> [    0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68
> [    0.543892] x19: ffff8000113a9988 x18: 0000000000000010
> [    0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef
> [    0.554655] x15: ffffffffffffffff x14: 0720072007200720
> [    0.560037] x13: 0720072007200720 x12: 0720072007200720
> [    0.565418] x11: 0720072007200720 x10: 0720072007200720
> [    0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208
> [    0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00
> [    0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000
> [    0.586945] x3 : 0000000000000000 x2 : ffff80000abec000
> [    0.592327] x1 : 000000000000002f x0 : 0000800000000000
> [    0.597716] Call trace:
> [    0.600232]  xen_starting_cpu+0x160/0x180
> [    0.604309]  cpuhp_invoke_callback+0xac/0x640
> [    0.608736]  cpuhp_issue_call+0xf4/0x150
> [    0.612728]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
> [    0.618030]  __cpuhp_setup_state+0x84/0xf8
> [    0.622192]  xen_guest_init+0x324/0x364
> [    0.626097]  do_one_initcall+0x54/0x250
> [    0.630003]  kernel_init_freeable+0x12c/0x2c8
> [    0.634428]  kernel_init+0x1c/0x128
> [    0.637988]  ret_from_fork+0x10/0x18
> [    0.641635] ---[ end trace d95b5309a33f8b27 ]---
> [    0.646337] ------------[ cut here ]------------
> [    0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158!
> [    0.657697] Internal error: Oops - BUG: 0 [#1] SMP
> [    0.662548] Modules linked in:
> [    0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.9.0-rc4+ #4
> [    0.673398] Hardware name: Socionext Developer Box (DT)
> [    0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
> [    0.684338] pc : xen_starting_cpu+0x178/0x180
> [    0.688765] lr : xen_starting_cpu+0x144/0x180
> [    0.693188] sp : ffff8000116cbb60
> [    0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000
> [    0.701955] x27: 0000000000000000 x26: 0000000000000000
> [    0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600
> [    0.712718] x23: 0000000000000000 x22: 0000000000000000
> [    0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68
> [    0.723481] x19: ffff8000113a9988 x18: 0000000000000010
> [    0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef
> [    0.734245] x15: ffffffffffffffff x14: 0720072007200720
> [    0.739626] x13: 0720072007200720 x12: 0720072007200720
> [    0.745008] x11: 0720072007200720 x10: 0720072007200720
> [    0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208
> [    0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00
> [    0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000
> [    0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef
> [    0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea
> [    0.777304] Call trace:
> [    0.779819]  xen_starting_cpu+0x178/0x180
> [    0.783898]  cpuhp_invoke_callback+0xac/0x640
> [    0.788325]  cpuhp_issue_call+0xf4/0x150
> [    0.792317]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
> [    0.797619]  __cpuhp_setup_state+0x84/0xf8
> [    0.801779]  xen_guest_init+0x324/0x364
> [    0.805683]  do_one_initcall+0x54/0x250
> [    0.809590]  kernel_init_freeable+0x12c/0x2c8
> [    0.814016]  kernel_init+0x1c/0x128
> [    0.817583]  ret_from_fork+0x10/0x18
> [    0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000)
> [    0.827415] ---[ end trace d95b5309a33f8b28 ]---
> [    0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> [    0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---
> 
> Theoretically, this issue was introduced by commit 9a9ab3cc00dc
> ("xen/arm: SMP support"), because it uses __pa() on a percpu address.
> 
> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
>  Changes in v3:
>    Update patch description to explain the mechanism of the problem.
> ---
>  arch/arm/xen/enlighten.c |    2 +-
>  include/xen/arm/page.h   |    3 +++
>  2 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
> index e93145d72c26..a6ab3689b2f4 100644
> --- a/arch/arm/xen/enlighten.c
> +++ b/arch/arm/xen/enlighten.c
> @@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
>  	pr_info("Xen: initializing cpu%d\n", cpu);
>  	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
>  
> -	info.mfn = virt_to_gfn(vcpup);
> +	info.mfn = percpu_to_gfn(vcpup);
>  	info.offset = xen_offset_in_page(vcpup);
>  
>  	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
> diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
> index 39df751d0dc4..ac1b65470563 100644
> --- a/include/xen/arm/page.h
> +++ b/include/xen/arm/page.h
> @@ -83,6 +83,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
>  	})
>  #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
>  
> +#define percpu_to_gfn(v)	\
> +	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
> +
>  /* Only used in PV code. But ARM guests are always HVM. */
>  static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
>  {
> 


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 17:30:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 17:30:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4541.11868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQZkb-00068h-AG; Thu, 08 Oct 2020 17:30:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4541.11868; Thu, 08 Oct 2020 17:30:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQZkb-00068a-5z; Thu, 08 Oct 2020 17:30:45 +0000
Received: by outflank-mailman (input) for mailman id 4541;
 Thu, 08 Oct 2020 17:30:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t5hJ=DP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kQZkZ-00068P-Oe
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 17:30:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f807d22a-0b1e-484e-893b-6a2beed58a89;
 Thu, 08 Oct 2020 17:30:42 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6402222200;
 Thu,  8 Oct 2020 17:30:41 +0000 (UTC)
X-Inumbo-ID: f807d22a-0b1e-484e-893b-6a2beed58a89
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602178241;
	bh=7ueGKOqJhqJU4X35iZZ5iwrpzJ9RJyWVf3WNj3wZ6og=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mG7HHv90QHFOAyD/+NDvDDhgWHFoFKUHwrOoEmgtBWFMY/YSPkpVZxyu3QFBck3AE
	 2ofxuvpnrGA7qfo+MEdGBfcFVC1sOUy5/kJv0UEtH5Lyd+ZudJ2ECBghtii8yTSwxN
	 6VzQ6RZLQQKWBm9Q3wvxQQdxqUMr5nr9oXXLFMlQ=
Date: Thu, 8 Oct 2020 10:30:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Masami Hiramatsu <mhiramat@kernel.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, 
    takahiro.akashi@linaro.org, jgross@suse.com, boris.ostrovsky@oracle.com
Subject: Re: [PATCH] arm/arm64: xen: Fix to convert percpu address to gfn
 correctly
In-Reply-To: <20201008172806.1591ebb538946c5ee93d372a@kernel.org>
Message-ID: <alpine.DEB.2.21.2010081030180.23978@sstabellini-ThinkPad-T480s>
References: <160190516028.40160.9733543991325671759.stgit@devnote2> <b205ec9c-c307-2b67-c43a-cf2a67179484@xen.org> <alpine.DEB.2.21.2010051526550.10908@sstabellini-ThinkPad-T480s> <20201006114058.b93839b1b8f35a470874572b@kernel.org>
 <alpine.DEB.2.21.2010061040350.10908@sstabellini-ThinkPad-T480s> <20201008172806.1591ebb538946c5ee93d372a@kernel.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 8 Oct 2020, Masami Hiramatsu wrote:
> On Tue, 6 Oct 2020 10:56:52 -0700 (PDT)
> Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> > On Tue, 6 Oct 2020, Masami Hiramatsu wrote:
> > > On Mon, 5 Oct 2020 18:13:22 -0700 (PDT)
> > > Stefano Stabellini <sstabellini@kernel.org> wrote:
> > > 
> > > > On Mon, 5 Oct 2020, Julien Grall wrote:
> > > > > Hi Masami,
> > > > > 
> > > > > On 05/10/2020 14:39, Masami Hiramatsu wrote:
> > > > > > Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
> > > > > > address conversion.
> > > > > > 
> > > > > > In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
> > > > > > to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
> > > > > > assumes the given virtual address is in a contiguous kernel memory
> > > > > > area, it cannot convert the per-cpu memory if it is allocated in
> > > > > > the vmalloc area (depending on CONFIG_SMP).
> > > > > 
> > > > > Are you sure about this? I have a .config with CONFIG_SMP=y where the per-cpu
> > > > > region for CPU0 is allocated outside of vmalloc area.
> > > > > 
> > > > > However, I was able to trigger the bug as soon as CONFIG_NUMA_BALANCING was
> > > > > enabled.
> > > > 
> > > > I cannot reproduce the issue with defconfig, but I can with Masami's
> > > > kconfig.
> > > > 
> > > > If I disable just CONFIG_NUMA_BALANCING from Masami's kconfig, the
> > > > problem still appears.
> > > > 
> > > > If I disable CONFIG_NUMA from Masami's kconfig, it works, which is
> > > > strange because CONFIG_NUMA is enabled in defconfig, and defconfig
> > > > works.
> > > 
> > > Hmm, strange, because when I disabled CONFIG_NUMA_BALANCING, the issue
> > > disappeared.
> > > 
> > > --- config-5.9.0-rc4+   2020-10-06 11:36:20.620107129 +0900
> > > +++ config-5.9.0-rc4+.buggy     2020-10-05 21:04:40.369936461 +0900
> > > @@ -131,7 +131,8 @@
> > >  CONFIG_ARCH_SUPPORTS_NUMA_BALANCING=y
> > >  CONFIG_CC_HAS_INT128=y
> > >  CONFIG_ARCH_SUPPORTS_INT128=y
> > > -# CONFIG_NUMA_BALANCING is not set
> > > +CONFIG_NUMA_BALANCING=y
> > > +CONFIG_NUMA_BALANCING_DEFAULT_ENABLED=y
> > >  CONFIG_CGROUPS=y
> > >  CONFIG_PAGE_COUNTER=y
> > >  CONFIG_MEMCG=y
> > > 
> > > So buggy config just enabled NUMA_BALANCING (and default enabled)
> > 
> > Yeah but both NUMA and NUMA_BALANCING are enabled in defconfig which
> > works fine...
> 
> Hmm, I found that xen_vcpu_info was allocated from the kernel linear
> map if Dom0 has enough memory. On my environment, if Xen passed 2GB of
> RAM to Dom0, it was allocated at a kernel linearly mapped address, but
> with 1GB of RAM it was in the vmalloc area.
> As far as I can see, it seems that the percpu allocator takes memory
> from the 2nd chunk if the default memory size is small.
> 
> I've built a kernel with patch [1] and booted the same kernel with
> different dom0_mem options and "trace_event=percpu:*" on the kernel
> cmdline. I then got the following logs.
> 
> Boot with 4GB:
>           <idle>-0     [000] ....     0.543208: percpu_create_chunk: base_addr=000000005d5ad71c
>  [...]
>          systemd-1     [000] ....     0.568931: percpu_alloc_percpu: reserved=0 is_atomic=0 size=48 align=8 base_addr=00000000fa92a086 off=32672 ptr=000000008da0b73d
>          systemd-1     [000] ....     0.568938: xen_guest_init: Xen: alloc xen_vcpu_info ffff800011003fa0 id=000000008da0b73d
>          systemd-1     [000] ....     0.586635: xen_starting_cpu: Xen: xen_vcpu_info ffff800011003fa0, vcpup ffff00092f4ebfa0 per_cpu_offset[0] ffff80091e4e8000
> 
> (NOTE: base_addr and ptr are hashed ids, not actual addresses,
>  because of the "%p" printk format)
> 
> In this log, we can see that xen_vcpu_info is NOT allocated from the
> newly created chunk (which is the 2nd chunk). As you can see, vcpup is
> a kernel linear address in this case, because it came from the 1st,
> kernel-embedded chunk.
> 
> 
> Boot with 1GB:
>           <idle>-0     [000] ....     0.516221: percpu_create_chunk: base_addr=000000008456b989
> [...]
>          systemd-1     [000] ....     0.541982: percpu_alloc_percpu: reserved=0 is_atomic=0 size=48 align=8 base_addr=000000008456b989 off=17920 ptr=00000000c247612d
>          systemd-1     [000] ....     0.541989: xen_guest_init: Xen: alloc xen_vcpu_info 7dff951f0600 id=00000000c247612d
>          systemd-1     [000] ....     0.559690: xen_starting_cpu: Xen: xen_vcpu_info 7dff951f0600, vcpup fffffdffbfcdc600 per_cpu_offset[0] ffff80002aaec000
> 
> On the other hand, when we boot dom0 with 1GB of memory, xen_vcpu_info
> is allocated from the new chunk (the id of base_addr is the same).
> Since the data of the new chunk is allocated in the vmalloc area,
> vcpup points to a vmalloc address.
> 
> So the bug seems not to depend on the kconfig, but on where the percpu
> memory is allocated from.
> 
> > [...]
> > 
> > > > The fix is fine for me. I tested it and it works. We need to remove the
> > > > "Fixes:" line from the commit message. Ideally, replacing it with a
> > > > reference to what is the source of the problem.
> > > 
> > > OK, as I said, it seems commit 9a9ab3cc00dc ("xen/arm: SMP support")
> > > introduced the per-cpu code. So I'll note that instead of a Fixes tag.
> > 
> > ...and commit 9a9ab3cc00dc was already present in 5.8, which also works
> > fine with your kconfig. Something else changed in 5.9, causing this
> > breakage as a side effect. Commit 9a9ab3cc00dc has been there since
> > 2013; I think it is OK -- this patch is fixing something else.
> 
> I think commit 9a9ab3cc00dc is theoretically wrong because it uses
> __pa() on a percpu address. That is not guaranteed to work, according
> to the comment on per_cpu_ptr_to_phys() quoted below.
> 
> ----
>  * percpu allocator has special setup for the first chunk, which currently
>  * supports either embedding in linear address space or vmalloc mapping,
>  * and, from the second one, the backing allocator (currently either vm or
>  * km) provides translation.
>  *
>  * The addr can be translated simply without checking if it falls into the
>  * first chunk. But the current code reflects better how percpu allocator
>  * actually works, and the verification can discover both bugs in percpu
>  * allocator itself and per_cpu_ptr_to_phys() callers. So we keep current
>  * code.
> ----
> 
> So we must use per_cpu_ptr_to_phys() instead of the __pa() macro for
> percpu addresses. That's why I pointed out that this will fix commit
> 9a9ab3cc00dc.

Thank you for the analysis. We are going to try to get the patch
upstream as soon as we can.


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 17:54:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 17:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4544.11882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQa7G-00084Q-9G; Thu, 08 Oct 2020 17:54:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4544.11882; Thu, 08 Oct 2020 17:54:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQa7G-00084J-63; Thu, 08 Oct 2020 17:54:10 +0000
Received: by outflank-mailman (input) for mailman id 4544;
 Thu, 08 Oct 2020 17:54:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQa7E-00083r-Pa
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 17:54:08 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d929c6b-27e7-4c74-a8bb-e379378d3bd8;
 Thu, 08 Oct 2020 17:53:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQa75-0002dc-6U; Thu, 08 Oct 2020 17:53:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQa74-0005vW-UT; Thu, 08 Oct 2020 17:53:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQa74-0000ay-Tw; Thu, 08 Oct 2020 17:53:58 +0000
X-Inumbo-ID: 9d929c6b-27e7-4c74-a8bb-e379378d3bd8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qupPFbgYLbihB227jMyvtYg07NjSE1npxWzS1YmScJw=; b=bjm3MvPvtsIb0YRenDvbINTHoP
	ic3HGdjqoyMEt0H3FyiD7D6SfLdbXnTa882MjKXjXr9UKk5a4YABESSe4N3zFJAaJkLZjDaqcqMML
	GzeJ8A7WyvnL5r8lEQwCcQx6fvYdkg+P3RMpzSzQ2ViOELUUbR4nVIOxE/i36DIfEuiM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155543-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155543: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=1bbd33ecba21577c8e8d0233ad70488ee95d5e96
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 08 Oct 2020 17:53:58 +0000

flight 155543 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155543/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              1bbd33ecba21577c8e8d0233ad70488ee95d5e96
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   90 days
Failing since        151818  2020-07-11 04:18:52 Z   89 days   84 attempts
Testing same since   155543  2020-10-08 04:19:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19501 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:27:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:27:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4549.11894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQadS-0002ZF-V8; Thu, 08 Oct 2020 18:27:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4549.11894; Thu, 08 Oct 2020 18:27:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQadS-0002Z8-S6; Thu, 08 Oct 2020 18:27:26 +0000
Received: by outflank-mailman (input) for mailman id 4549;
 Thu, 08 Oct 2020 18:27:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t5hJ=DP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kQadR-0002Z3-LX
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:27:25 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b6e95f7-51d0-4987-b2ce-6b6f430cd27f;
 Thu, 08 Oct 2020 18:27:25 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C64D8221FC;
 Thu,  8 Oct 2020 18:27:23 +0000 (UTC)
X-Inumbo-ID: 1b6e95f7-51d0-4987-b2ce-6b6f430cd27f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602181644;
	bh=wnDUWvvrVEzbRJ13dZOZxiSsGeyY7Ar2ucxrfdfKROc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=1o2wv4d3pxDrPmK8k49se0nfi7xezAOBKvfTeXGBnCnCdh2/OAgDIZmIJJGSU0fq/
	 fna+j6tAKZPsnLeZ0Q/X9imcuXpein0T+bzVvdXzFKh0w8cMrM2vU/WPohw0fyCcj9
	 SsKlUKLCDGGn+at6Hg/GMX7SMVPkY4viXt/DFtlI=
Date: Thu, 8 Oct 2020 11:27:22 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    "julien@xen.org" <julien@xen.org>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    "roman@zededa.com" <roman@zededa.com>
Subject: Re: [PATCH v3] xen/rpi4: implement watchdog-based reset
In-Reply-To: <1A694341-33AC-41E1-B216-2D3E1A6C45B4@arm.com>
Message-ID: <alpine.DEB.2.21.2010081103110.23978@sstabellini-ThinkPad-T480s>
References: <20201007223813.1638-1-sstabellini@kernel.org> <1A694341-33AC-41E1-B216-2D3E1A6C45B4@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 8 Oct 2020, Bertrand Marquis wrote:
> > On 7 Oct 2020, at 23:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > The preferred method to reboot RPi4 is PSCI. If it is not available,
> > touching the watchdog is required to be able to reboot the board.
> > 
> > The implementation is based on
> > drivers/watchdog/bcm2835_wdt.c:__bcm2835_restart in Linux v5.9-rc7.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Acked-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Maybe a printk if reset was not successful?

That's not quite platform specific, but we could add a printk to
xen/arch/arm/shutdown.c:machine_restart if we are still alive after
100ms.

I'll commit this patch as is and maybe send another one for
machine_restart.


> > CC: roman@zededa.com
> > ---
> > Changes in v3:
> > - fix typo in commit message
> > - dprintk -> printk
> > ---
> > xen/arch/arm/platforms/brcm-raspberry-pi.c | 61 ++++++++++++++++++++++
> > 1 file changed, 61 insertions(+)
> > 
> > diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
> > index f5ae58a7d5..811b40b1a6 100644
> > --- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
> > +++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
> > @@ -17,6 +17,10 @@
> >  * GNU General Public License for more details.
> >  */
> > 
> > +#include <xen/delay.h>
> > +#include <xen/mm.h>
> > +#include <xen/vmap.h>
> > +#include <asm/io.h>
> > #include <asm/platform.h>
> > 
> > static const char *const rpi4_dt_compat[] __initconst =
> > @@ -37,12 +41,69 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
> >      * The aux peripheral also shares a page with the aux UART.
> >      */
> >     DT_MATCH_COMPATIBLE("brcm,bcm2835-aux"),
> > +    /* Special device used for rebooting */
> > +    DT_MATCH_COMPATIBLE("brcm,bcm2835-pm"),
> >     { /* sentinel */ },
> > };
> > 
> > +
> > +#define PM_PASSWORD                 0x5a000000
> > +#define PM_RSTC                     0x1c
> > +#define PM_WDOG                     0x24
> > +#define PM_RSTC_WRCFG_FULL_RESET    0x00000020
> > +#define PM_RSTC_WRCFG_CLR           0xffffffcf
> > +
> > +static void __iomem *rpi4_map_watchdog(void)
> > +{
> > +    void __iomem *base;
> > +    struct dt_device_node *node;
> > +    paddr_t start, len;
> > +    int ret;
> > +
> > +    node = dt_find_compatible_node(NULL, NULL, "brcm,bcm2835-pm");
> > +    if ( !node )
> > +        return NULL;
> > +
> > +    ret = dt_device_get_address(node, 0, &start, &len);
> > +    if ( ret )
> > +    {
> > +        printk("Cannot read watchdog register address\n");
> > +        return NULL;
> > +    }
> > +
> > +    base = ioremap_nocache(start & PAGE_MASK, PAGE_SIZE);
> > +    if ( !base )
> > +    {
> > +        printk("Unable to map watchdog register!\n");
> > +        return NULL;
> > +    }
> > +
> > +    return base;
> > +}
> > +
> > +static void rpi4_reset(void)
> > +{
> > +    uint32_t val;
> > +    void __iomem *base = rpi4_map_watchdog();
> > +
> > +    if ( !base )
> > +        return;
> > +
> > +    /* use a timeout of 10 ticks (~150us) */
> > +    writel(10 | PM_PASSWORD, base + PM_WDOG);
> > +    val = readl(base + PM_RSTC);
> > +    val &= PM_RSTC_WRCFG_CLR;
> > +    val |= PM_PASSWORD | PM_RSTC_WRCFG_FULL_RESET;
> > +    writel(val, base + PM_RSTC);
> > +
> > +    /* No sleeping, possibly atomic. */
> > +    mdelay(1);
> > +}
> > +
> > PLATFORM_START(rpi4, "Raspberry Pi 4")
> >     .compatible     = rpi4_dt_compat,
> >     .blacklist_dev  = rpi4_blacklist_dev,
> > +    .reset = rpi4_reset,
> >     .dma_bitsize    = 30,
> > PLATFORM_END
> > 
> > -- 
> > 2.17.1
> > 
> > 
> 


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:39:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:39:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4552.11906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQap9-0003cU-0e; Thu, 08 Oct 2020 18:39:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4552.11906; Thu, 08 Oct 2020 18:39:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQap8-0003cN-Tx; Thu, 08 Oct 2020 18:39:30 +0000
Received: by outflank-mailman (input) for mailman id 4552;
 Thu, 08 Oct 2020 18:39:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ogxi=DP=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kQap6-0003cI-Un
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:39:28 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7b1e3e3-711c-4c1f-ba9e-a206fff3f1b6;
 Thu, 08 Oct 2020 18:39:27 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 098Id4eR056821
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 8 Oct 2020 14:39:10 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 098Id4tw056820;
 Thu, 8 Oct 2020 11:39:04 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: d7b1e3e3-711c-4c1f-ba9e-a206fff3f1b6
Date: Thu, 8 Oct 2020 11:39:04 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
        Alex Bennée <alex.bennee@linaro.org>, bertrand.marquis@arm.com,
        andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
Message-ID: <20201008183904.GA56716@mattapan.m5p.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Sep 28, 2020 at 03:47:52PM +0900, Masami Hiramatsu wrote:
> This made progress with my Xen boot on DeveloperBox (
> https://www.96boards.org/product/developerbox/ ) with ACPI.

Adding your patch on top of Julien Grall's patch appears to push the Xen
boot of my target device (Raspberry Pi 4B with Tianocore) further.  I have
yet to see any output attributable to the Domain 0 kernel, though.

(XEN) *** LOADING DOMAIN 0 ***
(XEN) Loading d0 kernel from boot module @ 0000000032234000
(XEN) Loading ramdisk from boot module @ 0000000031747000
(XEN) Allocating 1:1 mappings totalling 2048MB for dom0:
(XEN) BANK[0] 0x00000020000000-0x00000030000000 (256MB)
(XEN) BANK[1] 0x00000040000000-0x000000b0000000 (1792MB)
(XEN) Grant table range: 0x000000315f3000-0x00000031633000
(XEN) Allocating PPI 16 for event channel interrupt
(XEN) Loading zImage from 0000000032234000 to 0000000020080000-0000000021359200
(XEN) Loading d0 initrd from 0000000031747000 to 0x0000000028200000-0x0000000028cebfe4
(XEN) Loading d0 DTB to 0x0000000028000000-0x0000000028000273
(XEN) Initial low memory virq threshold set at 0x4000 pages.
(XEN) Scrubbing Free RAM in background
(XEN) Std. Loglevel: All
(XEN) Guest Loglevel: All
(XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
(XEN) Freed 396kB init memory.

The line "Loading d0 DTB to 0x0000000028000000-0x0000000028000273" seems
rather suspicious, as I would expect Domain 0 to see ACPI tables, not a
device tree.

Your (Masami Hiramatsu) patch seems plausible, but things haven't
progressed enough for me to endorse it.  Looks like something closer to
the core of ACPI still needs further work, Julien Grall?


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4561.11966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6w-0005aQ-Rh; Thu, 08 Oct 2020 18:57:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4561.11966; Thu, 08 Oct 2020 18:57:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6w-0005aF-NL; Thu, 08 Oct 2020 18:57:54 +0000
Received: by outflank-mailman (input) for mailman id 4561;
 Thu, 08 Oct 2020 18:57:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb6v-0005RO-Oh
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2735b4bf-4c1d-4fe6-833a-c586c308dc65;
 Thu, 08 Oct 2020 18:57:46 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6l-00041O-9a; Thu, 08 Oct 2020 18:57:43 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6k-0002P9-QB; Thu, 08 Oct 2020 18:57:43 +0000
X-Inumbo-ID: 2735b4bf-4c1d-4fe6-833a-c586c308dc65
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=PIEMrLkEW9cqepMH9vM88oB66k912G0TL+PNGKBij/g=; b=1qd0VIdCHtQLR5HdSVJ3PjcAuE
	Dpa0nudmTP4twX2Csaq/21cUrdp7dpotIMAHIwChbsP6E7yBd5OhIW6O5E+cNWD4TQot8ljNYnUF3
	ifa6gdZqXE6NgL56QFjkx798ZLZfbq/Ai5MEKxMPrZVbkmp9oZbwcVxKPcsOvMsysBoY=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Julien Grall <julien@xen.org>,
	Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Ian Jackson <iwj@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v10 03/11] xen/common/domctl: introduce XEN_DOMCTL_get/set_domain_context
Date: Thu,  8 Oct 2020 19:57:27 +0100
Message-Id: <20201008185735.29875-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

These domctls provide a mechanism to get and set 'domain context' from
the toolstack. The implementation calls the domain_save_ctxt() and
domain_load_ctxt() functions introduced in a previous patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Julien Grall <julien@xen.org>
Cc: Wei Liu <wl@xen.org>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v10:
 - Re-base
 - Add underscores and move to 64-bit image size as requested by Andrew
 - Add a couple of record alignment ASSERTions
 - Dropped R-b and A-b since changes are not entirely cosmetic

v4:
 - Add missing zero pad checks

v3:
 - Addressed comments from Julien and Jan
 - Use vmalloc() rather than xmalloc_bytes()

v2:
 - drop mask parameter
 - const-ify some more buffers
---
 tools/flask/policy/modules/xen.if   |   4 +-
 tools/libs/ctrl/include/xenctrl.h   |   5 +
 tools/libs/ctrl/xc_domain.c         |  56 +++++++++
 xen/common/domctl.c                 | 173 ++++++++++++++++++++++++++++
 xen/include/public/domctl.h         |  39 +++++++
 xen/xsm/flask/hooks.c               |   6 +
 xen/xsm/flask/policy/access_vectors |   4 +
 7 files changed, 285 insertions(+), 2 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 5e2aa472b6..2e2303d684 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -53,7 +53,7 @@ define(`create_domain_common', `
 	allow $1 $2:domain2 { set_cpu_policy settsc setscheduler setclaim
 			set_vnumainfo get_vnumainfo cacheflush
 			psr_cmt_op psr_alloc soft_reset
-			resource_map get_cpu_policy };
+			resource_map get_cpu_policy set_context };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
@@ -97,7 +97,7 @@ define(`migrate_domain_out', `
 	allow $1 $2:hvm { gethvmc getparam };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext pause destroy };
-	allow $1 $2:domain2 gettsc;
+	allow $1 $2:domain2 { gettsc get_context };
 	allow $1 $2:shadow { enable disable logdirty };
 ')
 
diff --git a/tools/libs/ctrl/include/xenctrl.h b/tools/libs/ctrl/include/xenctrl.h
index 3796425e1e..754a00c67b 100644
--- a/tools/libs/ctrl/include/xenctrl.h
+++ b/tools/libs/ctrl/include/xenctrl.h
@@ -867,6 +867,11 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
                              uint8_t *hvm_ctxt,
                              uint32_t size);
 
+int xc_domain_get_context(xc_interface *xch, uint32_t domid,
+                          void *ctxt_buf, size_t *size);
+int xc_domain_set_context(xc_interface *xch, uint32_t domid,
+                          const void *ctxt_buf, size_t size);
+
 /**
  * This function will return guest IO ABI protocol
  *
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index e7cea4a17d..f35c1d2a28 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -536,6 +536,62 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
     return ret;
 }
 
+int xc_domain_get_context(xc_interface *xch, uint32_t domid,
+                          void *ctxt_buf, size_t *size)
+{
+    int ret;
+    DECLARE_DOMCTL = {
+        .cmd = XEN_DOMCTL_get_domain_context,
+        .domain = domid,
+        .u.get_domain_context.size = *size,
+    };
+    DECLARE_HYPERCALL_BOUNCE(ctxt_buf, *size, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.get_domain_context.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    if ( ret )
+        return ret;
+
+    *size = domctl.u.get_domain_context.size;
+    if ( *size != domctl.u.get_domain_context.size )
+    {
+        errno = EOVERFLOW;
+        return -1;
+    }
+
+    return 0;
+}
+
+int xc_domain_set_context(xc_interface *xch, uint32_t domid,
+                          const void *ctxt_buf, size_t size)
+{
+    int ret;
+    DECLARE_DOMCTL = {
+        .cmd = XEN_DOMCTL_set_domain_context,
+        .domain = domid,
+        .u.set_domain_context.size = size,
+    };
+    DECLARE_HYPERCALL_BOUNCE_IN(ctxt_buf, size);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.set_domain_context.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    return ret;
+}
+
 int xc_vcpu_getcontext(xc_interface *xch,
                        uint32_t domid,
                        uint32_t vcpu,
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index af044e2eda..6dbbe7f08a 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -25,6 +25,8 @@
 #include <xen/hypercall.h>
 #include <xen/vm_event.h>
 #include <xen/monitor.h>
+#include <xen/save.h>
+#include <xen/vmap.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -273,6 +275,168 @@ static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo,
     return ERR_PTR(ret);
 }
 
+struct domctl_context
+{
+    void *buffer;
+    struct domain_context_record *rec;
+    size_t len;
+    size_t cur;
+};
+
+static int dry_run_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len + len < c->len )
+        return -EOVERFLOW;
+
+    c->len += len;
+
+    return 0;
+}
+
+static int dry_run_begin(void *priv, const struct domain_context_record *rec)
+{
+    return dry_run_append(priv, NULL, sizeof(*rec));
+}
+
+static int dry_run_end(void *priv, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    ASSERT(IS_ALIGNED(c->len, DOMAIN_CONTEXT_RECORD_ALIGN));
+
+    return 0;
+}
+
+static struct domain_save_ctxt_ops dry_run_ops = {
+    .begin = dry_run_begin,
+    .append = dry_run_append,
+    .end = dry_run_end,
+};
+
+static int save_begin(void *priv, const struct domain_context_record *rec)
+{
+    struct domctl_context *c = priv;
+
+    ASSERT(IS_ALIGNED(c->cur, DOMAIN_CONTEXT_RECORD_ALIGN));
+
+    if ( c->len - c->cur < sizeof(*rec) )
+        return -ENOSPC;
+
+    c->rec = c->buffer + c->cur; /* stash pointer to record */
+    *c->rec = *rec;
+
+    c->cur += sizeof(*rec);
+
+    return 0;
+}
+
+static int save_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENOSPC;
+
+    memcpy(c->buffer + c->cur, data, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static int save_end(void *priv, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    c->rec->length = len;
+
+    return 0;
+}
+
+static struct domain_save_ctxt_ops save_ops = {
+    .begin = save_begin,
+    .append = save_append,
+    .end = save_end,
+};
+
+static int get_domain_context(struct domain *d,
+                              struct xen_domctl_get_domain_context *gdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    if ( guest_handle_is_null(gdc->buffer) ) /* query for buffer size */
+    {
+        if ( gdc->size )
+            return -EINVAL;
+
+        /* dry run to acquire buffer size */
+        rc = domain_save_ctxt(d, &dry_run_ops, &c, true);
+        if ( rc )
+            return rc;
+
+        gdc->size = c.len;
+        return 0;
+    }
+
+    c.len = gdc->size;
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = domain_save_ctxt(d, &save_ops, &c, false);
+
+    gdc->size = c.cur;
+    if ( !rc && copy_to_guest(gdc->buffer, c.buffer, gdc->size) )
+        rc = -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
+static int load_read(void *priv, void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENODATA;
+
+    memcpy(data, c->buffer + c->cur, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static struct domain_load_ctxt_ops load_ops = {
+    .read = load_read,
+};
+
+static int set_domain_context(struct domain *d,
+                              const struct xen_domctl_set_domain_context *sdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR, .len = sdc->size };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = !copy_from_guest(c.buffer, sdc->buffer, c.len) ?
+        domain_load_ctxt(d, &load_ops, &c) : -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
 long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
@@ -867,6 +1031,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
+    case XEN_DOMCTL_get_domain_context:
+        ret = get_domain_context(d, &op->u.get_domain_context);
+        copyback = !ret;
+        break;
+
+    case XEN_DOMCTL_set_domain_context:
+        ret = set_domain_context(d, &op->u.set_domain_context);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 666aeb71bf..a3e10c03f1 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1132,6 +1132,41 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/*
+ * XEN_DOMCTL_get_domain_context
+ * -----------------------------
+ *
+ * buffer (IN):   The buffer into which the context data should be
+ *                copied, or NULL to query the buffer size that should
+ *                be allocated.
+ * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
+ *                zero, and the value passed out will be the size of the
+ *                buffer to allocate.
+ *                If 'buffer' is non-NULL then the value passed in must
+ *                be the size of the buffer into which data may be copied.
+ *                The value passed out will be the size of data written.
+ */
+struct xen_domctl_get_domain_context {
+    uint64_t size;
+    XEN_GUEST_HANDLE_64(void) buffer;
+};
+
+/* XEN_DOMCTL_set_domain_context
+ * -----------------------------
+ *
+ * buffer (IN):   The buffer from which the context data should be
+ *                copied.
+ * size (IN):     The size of the buffer from which data may be copied.
+ *                This data must include DOMAIN_SAVE_CODE_HEADER at the
+ *                start and terminate with a DOMAIN_SAVE_CODE_END record.
+ *                Any data beyond the DOMAIN_SAVE_CODE_END record will be
+ *                ignored.
+ */
+struct xen_domctl_set_domain_context {
+    uint64_t size;
+    XEN_GUEST_HANDLE_64(const_void) buffer;
+};
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1216,6 +1251,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_get_domain_context            84
+#define XEN_DOMCTL_set_domain_context            85
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1276,6 +1313,8 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+        struct xen_domctl_get_domain_context get_domain_context;
+        struct xen_domctl_set_domain_context set_domain_context;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index de050cc9fe..3c6217e4ac 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -754,6 +754,12 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_get_cpu_policy:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_CPU_POLICY);
 
+    case XEN_DOMCTL_set_domain_context:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SET_CONTEXT);
+
+    case XEN_DOMCTL_get_domain_context:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_CONTEXT);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 1aa0bb501c..fea0c9f143 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -245,6 +245,10 @@ class domain2
     resource_map
 # XEN_DOMCTL_get_cpu_policy
     get_cpu_policy
+# XEN_DOMCTL_set_domain_context
+    set_context
+# XEN_DOMCTL_get_domain_context
+    get_context
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4557.11918 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6m-0005Ra-GA; Thu, 08 Oct 2020 18:57:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4557.11918; Thu, 08 Oct 2020 18:57:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6m-0005RT-D3; Thu, 08 Oct 2020 18:57:44 +0000
Received: by outflank-mailman (input) for mailman id 4557;
 Thu, 08 Oct 2020 18:57:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb6l-0005RO-VE
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a700aef9-855c-45d9-8cc0-d4c741c137d9;
 Thu, 08 Oct 2020 18:57:42 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6i-000419-5v; Thu, 08 Oct 2020 18:57:40 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6h-0002P9-TL; Thu, 08 Oct 2020 18:57:40 +0000
X-Inumbo-ID: a700aef9-855c-45d9-8cc0-d4c741c137d9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=UWh3z72cedco8BYeLrNyT1WM5VqVHSPK2ry97ilWHbM=; b=GjA3D5gCTCKzxAjMsiSTBtmec7
	4sW0wpdtMyI6pbtzYn6+nMRW9K21NGSRQIhc52CajAn9qcI7fE4KGu3ApClPqcczOOgVbebO695aW
	M6nf9Q3/OZa2iZPZMnJ/aP3PR+vSrmZZSIKtlmia6uRcfWCE/UwfrenEnaPV7LNtKSME=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v10 01/11] docs / include: introduce a new framework for 'domain context' records
Date: Thu,  8 Oct 2020 19:57:25 +0100
Message-Id: <20201008185735.29875-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

To allow enlightened HVM guests (i.e. those that have PV drivers) to be
migrated without their co-operation it will be necessary to transfer 'PV'
state such as event channel state, grant entry state, etc.

Currently there is a framework (entered via the hvm_save/load() functions)
that allows a domain's 'HVM' (architectural) state to be transferred but
'PV' state is also common to pure PV guests, so that framework is not
really suitable.

This patch adds the specification of a new common 'domain context' image and
the supporting public header. Subsequent patches will introduce other parts of
the framework, and code that will make use of it.

This patch also marks the HVM-only framework as deprecated in favour of the
new framework.
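The image format specified below pads each record body with 0 to 7 zero octets so that records stay 64-bit aligned. A minimal sketch of that rounding, reusing the DOMAIN_CONTEXT_RECORD_ALIGN definition from the new public header:

```c
#include <assert.h>
#include <stdint.h>

/* As defined in the new xen/include/public/save.h. */
#define _DOMAIN_CONTEXT_RECORD_ALIGN 3
#define DOMAIN_CONTEXT_RECORD_ALIGN (1U << _DOMAIN_CONTEXT_RECORD_ALIGN)

/*
 * Round a record body length up to the next 8-octet boundary; the
 * difference from body_len is the number of zero padding octets that
 * follow the body in the image.
 */
static uint64_t record_padded_len(uint64_t body_len)
{
    return (body_len + DOMAIN_CONTEXT_RECORD_ALIGN - 1) &
           ~(uint64_t)(DOMAIN_CONTEXT_RECORD_ALIGN - 1);
}
```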

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v10:
 - New in v10
---
 docs/specs/domain-context.md           | 137 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/save.h              |  66 ++++++++++++
 4 files changed, 213 insertions(+)
 create mode 100644 docs/specs/domain-context.md
 create mode 100644 xen/include/public/save.h

diff --git a/docs/specs/domain-context.md b/docs/specs/domain-context.md
new file mode 100644
index 0000000000..f177cf24b3
--- /dev/null
+++ b/docs/specs/domain-context.md
@@ -0,0 +1,137 @@
+# Domain Context (v1)
+
+## Background
+
+The design for *Non-Cooperative Migration of Guests*[1] explains that extra
+information is required in the migration stream to allow a guest running PV
+drivers to be migrated without its co-operation. This information includes
+hypervisor state such as the set of event channels in operation and the
+content of the grant table.
+
+There already exists in Xen a mechanism to save and restore *HVM context*[2].
+This specification describes a new framework that will replace that
+mechanism and be suitable for transferring additional *PV state* records
+conveying the information mentioned above. There is also ongoing work to
+implement *live update* of Xen where hypervisor state must be transferred in
+memory from one incarnation to the next. The framework described is designed
+to also be suitable for this purpose.
+
+## Image format
+
+The format will be read from or written to the hypervisor as a single
+virtually contiguous buffer, segmented into **context records** as specified
+in the following sections.
+
+Fields within the records will always be aligned to their size. Padding and
+reserved fields will always be set to zero when the context buffer is read
+from the hypervisor and will be verified when written.
+The endianness of numerical values will be the native endianness of the
+hypervisor. In the case of migration, that endianness is specified in the
+*libxenctrl (libxc) Domain Image Format*[3].
+
+### Record format
+
+All records have the following format:
+
+```
+    0       1       2       3       4       5       6       7    octet
++-------+-------+-------+-------+-------+-------+-------+-------+
+| type                          | instance                      |
++-------------------------------+-------------------------------+
+| length                                                        |
++---------------------------------------------------------------+
+| body
+...
+|       | padding (0 to 7 octets)                               |
++-------+-------------------------------------------------------+
+```
+
+\pagebreak
+The fields are defined as follows:
+
+
+| Field      | Description                                      |
+|------------|--------------------------------------------------|
+| `type`     | A code which determines the layout and semantics |
+|            | of `body`                                        |
+|            |                                                  |
+| `instance` | The instance of the record                       |
+|            |                                                  |
+| `length`   | The length (in octets) of `body`                 |
+|            |                                                  |
+| `body`     | Zero or more octets of record data               |
+|            |                                                  |
+| `padding`  | Zero to seven octets of zero-filled padding to   |
+|            | bring the total record length up to the next     |
+|            | 64-bit boundary                                  |
+
+The `instance` field is present to distinguish multiple occurrences of
+a record. E.g. state that is per-vcpu may need to be described in multiple
+records.
+
+The first record in the image is always a **START** record. The version of
+the image format can be inferred from the `type` of this record.
+
+## Image content
+
+The following records are defined for the v1 image format. This set may be
+extended in newer versions of the hypervisor. It is not expected that an image
+saved on a newer version of Xen will need to be restored on an older version.
+Therefore an image containing unrecognized record types should be rejected.
+
+### START
+
+```
+    0       1       2       3       4       5       6       7    octet
++-------+-------+-------+-------+-------+-------+-------+-------+
+| type == 1                     | instance == 0                 |
++-------------------------------+-------------------------------+
+| length == 8                                                   |
++-------------------------------+-------------------------------+
+| xen_major                     | xen_minor                     |
++-------------------------------+-------------------------------+
+```
+
+A type 1 **START** record implies a v1 image. If a new image format version
+is needed in future then this can be indicated by a new type value for this
+(first) record in the image.
+
+\pagebreak
+The record body contains the following fields:
+
+| Field       | Description                                     |
+|-------------|-------------------------------------------------|
+| `xen_major` | The major version of Xen that created this      |
+|             | image                                           |
+|             |                                                 |
+| `xen_minor` | The minor version of Xen that created this      |
+|             | image                                           |
+
+The version of Xen that created the image can be useful to the version
+restoring it, in determining whether certain records are expected to
+be present in the image. For example, a version of Xen prior to X.Y may not
+generate a FOO record but Xen X.Y+ can infer its content. But Xen X.Y+1
+**must** generate the FOO record as, from that version onward, its content
+can no longer be safely inferred.
+
+### END
+
+```
+    0       1       2       3       4       5       6       7    octet
++-------+-------+-------+-------+-------+-------+-------+-------+
+| type == 0                     | instance == 0                 |
++-------------------------------+-------------------------------+
+| length == 0                                                   |
++---------------------------------------------------------------+
+```
+
+A record of this type terminates the image. No further data from the buffer
+should be consumed.
+
+* * *
+
+[1] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/designs/non-cooperative-migration.md
+
+[2] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/hvm/save.h
+
+[3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/specs/libxc-migration-stream.pandoc
diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
index 75b8e65bcb..d5b0c15203 100644
--- a/xen/include/public/arch-arm/hvm/save.h
+++ b/xen/include/public/arch-arm/hvm/save.h
@@ -26,6 +26,11 @@
 #ifndef __XEN_PUBLIC_HVM_SAVE_ARM_H__
 #define __XEN_PUBLIC_HVM_SAVE_ARM_H__
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif
 
 /*
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 773a380bc2..e61e2dbcd7 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -648,6 +648,11 @@ struct hvm_msr {
  */
 #define HVM_SAVE_CODE_MAX 20
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */
 
 /*
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
new file mode 100644
index 0000000000..c4be9f570c
--- /dev/null
+++ b/xen/include/public/save.h
@@ -0,0 +1,66 @@
+/*
+ * save.h
+ *
+ * Structure definitions for common PV/HVM domain state that is held by Xen.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef XEN_PUBLIC_SAVE_H
+#define XEN_PUBLIC_SAVE_H
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+#include "xen.h"
+
+/*
+ * C structures for the Domain Context v1 format.
+ * See docs/specs/domain-context.md
+ */
+
+struct domain_context_record {
+    uint32_t type;
+    uint32_t instance;
+    uint64_t length;
+    uint8_t body[XEN_FLEX_ARRAY_DIM];
+};
+
+#define _DOMAIN_CONTEXT_RECORD_ALIGN 3
+#define DOMAIN_CONTEXT_RECORD_ALIGN (1U << _DOMAIN_CONTEXT_RECORD_ALIGN)
+
+enum {
+    DOMAIN_CONTEXT_END,
+    DOMAIN_CONTEXT_START,
+    /* New types go here */
+    DOMAIN_CONTEXT_NR_TYPES
+};
+
+/* Initial entry */
+struct domain_context_start {
+    uint32_t xen_major, xen_minor;
+};
+
+/* Terminating entry */
+struct domain_context_end {};
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+
+#endif /* XEN_PUBLIC_SAVE_H */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4559.11942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6s-0005Ue-6r; Thu, 08 Oct 2020 18:57:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4559.11942; Thu, 08 Oct 2020 18:57:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6s-0005UX-2u; Thu, 08 Oct 2020 18:57:50 +0000
Received: by outflank-mailman (input) for mailman id 4559;
 Thu, 08 Oct 2020 18:57:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb6q-0005RO-OX
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d15dbb8b-9b40-4fbc-a5ba-a99a04bb68db;
 Thu, 08 Oct 2020 18:57:43 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6g-000417-Lg; Thu, 08 Oct 2020 18:57:38 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6g-0002P9-9v; Thu, 08 Oct 2020 18:57:38 +0000
X-Inumbo-ID: d15dbb8b-9b40-4fbc-a5ba-a99a04bb68db
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=nLbQbDyKtmQ0zyuBxr1DU07QHvJLOR8wLN3+oQCa8vE=; b=CXLvN0UYgu8NTpeor46WvaZ7CK
	2gVN5A7Hwl32wXCnUGBbzo0+mqyJsgrgX/X+xggKKkL3pEvTvv7h0BRxWyr6GxoRxrTGXaiemq8LV
	Qp0FgTBnzcCG1xv4nKEsorfpZ9NjjICL9rlESpR6FmGZHE5geWtoHjcT15rGHAO7weyE=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v10 00/11] domain context infrastructure
Date: Thu,  8 Oct 2020 19:57:24 +0100
Message-Id: <20201008185735.29875-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (11):
  docs / include: introduce a new framework for 'domain context' records
  xen: introduce implementation of save/restore of 'domain context'
  xen/common/domctl: introduce XEN_DOMCTL_get/set_domain_context
  tools/misc: add xen-domctx to present domain context
  common/domain: add a domain context record for shared_info...
  x86/time: add a domain context record for tsc_info...
  docs/specs: add missing definitions to libxc-migration-stream
  docs / tools: specify migration v4 to include DOMAIN_CONTEXT
  tools/python: modify libxc.py to verify v4 stream
  tools/libs/guest: add code to restore a v4 libxc stream
  tools/libs/guest: add code to save a v4 libxc stream

 .gitignore                               |   1 +
 docs/specs/domain-context.md             | 198 +++++++++++++
 docs/specs/libxc-migration-stream.pandoc |  69 ++++-
 tools/flask/policy/modules/xen.if        |   4 +-
 tools/libs/ctrl/include/xenctrl.h        |   5 +
 tools/libs/ctrl/xc_domain.c              |  56 ++++
 tools/libs/guest/xg_sr_common.c          |   1 +
 tools/libs/guest/xg_sr_common.h          |   3 +
 tools/libs/guest/xg_sr_common_x86.c      |  20 --
 tools/libs/guest/xg_sr_common_x86.h      |   6 -
 tools/libs/guest/xg_sr_restore.c         |  24 +-
 tools/libs/guest/xg_sr_restore_x86_hvm.c |   9 +
 tools/libs/guest/xg_sr_restore_x86_pv.c  |   9 +
 tools/libs/guest/xg_sr_save.c            |  52 +++-
 tools/libs/guest/xg_sr_save_x86_hvm.c    |   5 -
 tools/libs/guest/xg_sr_save_x86_pv.c     |  22 --
 tools/libs/guest/xg_sr_stream_format.h   |   1 +
 tools/misc/Makefile                      |   4 +
 tools/misc/xen-domctx.c                  | 264 ++++++++++++++++++
 tools/python/xen/migration/libxc.py      |  22 +-
 xen/arch/x86/time.c                      |  30 ++
 xen/common/Makefile                      |   1 +
 xen/common/domain.c                      | 113 ++++++++
 xen/common/domctl.c                      | 173 ++++++++++++
 xen/common/save.c                        | 339 +++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h   |   5 +
 xen/include/public/arch-x86/hvm/save.h   |   5 +
 xen/include/public/domctl.h              |  39 +++
 xen/include/public/save.h                |  85 ++++++
 xen/include/xen/save.h                   | 160 +++++++++++
 xen/xsm/flask/hooks.c                    |   6 +
 xen/xsm/flask/policy/access_vectors      |   4 +
 32 files changed, 1661 insertions(+), 74 deletions(-)
 create mode 100644 docs/specs/domain-context.md
 create mode 100644 tools/misc/xen-domctx.c
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: "Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4558.11930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6n-0005SX-QW; Thu, 08 Oct 2020 18:57:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4558.11930; Thu, 08 Oct 2020 18:57:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6n-0005SQ-LI; Thu, 08 Oct 2020 18:57:45 +0000
Received: by outflank-mailman (input) for mailman id 4558;
 Thu, 08 Oct 2020 18:57:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb6m-0005Rd-I8
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22f72eed-43f4-4eb6-be00-1881072b7c92;
 Thu, 08 Oct 2020 18:57:43 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6j-00041D-Hq; Thu, 08 Oct 2020 18:57:41 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6j-0002P9-A5; Thu, 08 Oct 2020 18:57:41 +0000
X-Inumbo-ID: 22f72eed-43f4-4eb6-be00-1881072b7c92
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=MH3Jcb2gIq1O8Mh9sTMsNlQA5vWj9EpZg8v7PZTX4es=; b=vLm+VSM1g8XlfzsW/Qq54eKPLa
	iubIj2PDUwi92PaDQQOb8bVRVlC60wlgtv8UUtz5UHjzygnhfWSZDYTZXK82LzAoOURDKxLVfPbF4
	J/KQz7Auv1cEE+alXIVZf8LNxjY9xpPJZaLwjsJI1EHqobBRoBFEttX2HXZa43GcJiLI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v10 02/11] xen: introduce implementation of save/restore of 'domain context'
Date: Thu,  8 Oct 2020 19:57:26 +0100
Message-Id: <20201008185735.29875-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced the specification of the 'domain context' image
and the supporting public header. This patch introduces the code necessary
to generate and consume such an image. The entry points to the code are
domain_save_ctxt() and domain_load_ctxt(). Code to call these functions will
be introduced in a subsequent patch.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>

v10:
 - New in v10
 - Largely derived from patch #1 of the v9 series, but the code has been
   re-worked both functionally and cosmetically
---
 xen/common/Makefile    |   1 +
 xen/common/save.c      | 339 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/save.h | 160 +++++++++++++++++++
 3 files changed, 500 insertions(+)
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/xen/save.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index b3b60a1ba2..3e6f21714a 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -37,6 +37,7 @@ obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
+obj-y += save.o
 obj-y += shutdown.o
 obj-y += softirq.o
 obj-y += sort.o
diff --git a/xen/common/save.c b/xen/common/save.c
new file mode 100644
index 0000000000..9287b20198
--- /dev/null
+++ b/xen/common/save.c
@@ -0,0 +1,339 @@
+/*
+ * save.c: save and load state common to all domain types.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/compile.h>
+#include <xen/save.h>
+
+struct domain_ctxt_state {
+    struct domain *d;
+    struct domain_context_record rec;
+    size_t len; /* for internal accounting */
+    union {
+        const struct domain_save_ctxt_ops *save;
+        const struct domain_load_ctxt_ops *load;
+    } ops;
+    void *priv;
+    bool dry_run;
+};
+
+static struct {
+    const char *name;
+    domain_save_ctxt_type save;
+    domain_load_ctxt_type load;
+} fns[DOMAIN_CONTEXT_NR_TYPES];
+
+void __init domain_register_ctxt_type(unsigned int type, const char *name,
+                                      domain_save_ctxt_type save,
+                                      domain_load_ctxt_type load)
+{
+    BUG_ON(type >= ARRAY_SIZE(fns));
+
+    ASSERT(!fns[type].save);
+    ASSERT(!fns[type].load);
+
+    fns[type].name = name;
+    fns[type].save = save;
+    fns[type].load = load;
+}
+
+int domain_save_ctxt_rec_begin(struct domain_ctxt_state *c,
+                               unsigned int type, unsigned int instance)
+{
+    struct domain_context_record rec = { .type = type, .instance = instance };
+    int rc;
+
+    c->rec = rec;
+    c->len = 0;
+
+    rc = c->ops.save->begin(c->priv, &c->rec);
+    if ( rc )
+        return rc;
+
+    return 0;
+}
+
+int domain_save_ctxt_rec_data(struct domain_ctxt_state *c, const void *src,
+                              size_t len)
+{
+    int rc = c->ops.save->append(c->priv, src, len);
+
+    if ( !rc )
+        c->len += len;
+
+    return rc;
+}
+
+int domain_save_ctxt_rec_end(struct domain_ctxt_state *c)
+{
+    size_t len = c->len;
+    size_t pad = ROUNDUP(len, DOMAIN_CONTEXT_RECORD_ALIGN) - len;
+    int rc;
+
+    if ( pad )
+    {
+        static const uint8_t zeroes[DOMAIN_CONTEXT_RECORD_ALIGN] = {};
+
+        rc = c->ops.save->append(c->priv, zeroes, pad);
+        if ( rc )
+            return rc;
+    }
+
+    if ( !c->dry_run )
+        gdprintk(XENLOG_DEBUG, "%pd save: %s[%u] +%zu (+%zu)\n", c->d,
+                 fns[c->rec.type].name, c->rec.instance,
+                 len, pad);
+
+    rc = c->ops.save->end(c->priv, c->len);
+
+    return rc;
+}
+
+int domain_save_ctxt(struct domain *d, const struct domain_save_ctxt_ops *ops,
+                     void *priv, bool dry_run)
+{
+    struct domain_ctxt_state c = {
+        .d = d,
+        .ops.save = ops,
+        .priv = priv,
+        .dry_run = dry_run,
+    };
+    domain_save_ctxt_type save;
+    unsigned int type;
+    int rc;
+
+    ASSERT(d != current->domain);
+
+    save = fns[DOMAIN_CONTEXT_START].save;
+    BUG_ON(!save);
+
+    rc = save(d, &c, dry_run);
+    if ( rc )
+        return rc;
+
+    domain_pause(d);
+
+    for ( type = DOMAIN_CONTEXT_START + 1; type < ARRAY_SIZE(fns); type++ )
+    {
+        save = fns[type].save;
+        if ( !save )
+            continue;
+
+        rc = save(d, &c, dry_run);
+        if ( rc )
+            break;
+    }
+
+    domain_unpause(d);
+
+    if ( rc )
+        return rc;
+
+    save = fns[DOMAIN_CONTEXT_END].save;
+    BUG_ON(!save);
+
+    return save(d, &c, dry_run);
+}
+
+int domain_load_ctxt_rec_begin(struct domain_ctxt_state *c,
+                               unsigned int type, unsigned int *instance)
+{
+    if ( type != c->rec.type )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+
+    *instance = c->rec.instance;
+    c->len = 0;
+
+    return 0;
+}
+
+int domain_load_ctxt_rec_data(struct domain_ctxt_state *c, void *dst,
+                              size_t len)
+{
+    int rc = 0;
+
+    c->len += len;
+    if ( c->len > c->rec.length )
+        return -ENODATA;
+
+    if ( dst )
+        rc = c->ops.load->read(c->priv, dst, len);
+    else /* sink data */
+    {
+        uint8_t ignore;
+
+        while ( !rc && len-- )
+            rc = c->ops.load->read(c->priv, &ignore, sizeof(ignore));
+    }
+
+    return rc;
+}
+
+int domain_load_ctxt_rec_end(struct domain_ctxt_state *c, bool ignore_data)
+{
+    size_t len = c->len;
+    size_t pad = ROUNDUP(len, DOMAIN_CONTEXT_RECORD_ALIGN) - len;
+
+    gdprintk(XENLOG_DEBUG, "%pd load: %s[%u] +%zu (+%zu)\n", c->d,
+             fns[c->rec.type].name, c->rec.instance,
+             len, pad);
+
+    while ( pad-- )
+    {
+        uint8_t zero;
+        int rc = c->ops.load->read(c->priv, &zero, sizeof(zero));
+
+        if ( rc )
+            return rc;
+
+        if ( zero )
+            return -EINVAL;
+    }
+
+    return 0;
+}
+
+int domain_load_ctxt(struct domain *d, const struct domain_load_ctxt_ops *ops,
+                     void *priv)
+{
+    struct domain_ctxt_state c = { .d = d, .ops.load = ops, .priv = priv, };
+    domain_load_ctxt_type load;
+    int rc;
+
+    ASSERT(d != current->domain);
+
+    rc = c.ops.load->read(c.priv, &c.rec, sizeof(c.rec));
+    if ( rc )
+        return rc;
+
+    load = fns[DOMAIN_CONTEXT_START].load;
+    BUG_ON(!load);
+
+    rc = load(d, &c);
+    if ( rc )
+        return rc;
+
+    domain_pause(d);
+
+    for ( ; ; )
+    {
+        unsigned int type;
+
+        rc = c.ops.load->read(c.priv, &c.rec, sizeof(c.rec));
+        if ( rc )
+            break;
+
+        type = c.rec.type;
+        if ( type == DOMAIN_CONTEXT_END )
+            break;
+
+        rc = -EINVAL;
+        if ( type >= ARRAY_SIZE(fns) )
+            break;
+
+        load = fns[type].load;
+        if ( !load )
+            break;
+
+        rc = load(d, &c);
+        if ( rc )
+            break;
+    }
+
+    domain_unpause(d);
+
+    if ( rc )
+        return rc;
+
+    load = fns[DOMAIN_CONTEXT_END].load;
+    BUG_ON(!load);
+
+    return load(d, &c);
+}
+
+static int save_start(struct domain *d, struct domain_ctxt_state *c,
+                      bool dry_run)
+{
+    static const struct domain_context_start s = {
+        .xen_major = XEN_VERSION,
+        .xen_minor = XEN_SUBVERSION,
+    };
+
+    return domain_save_ctxt_rec(c, DOMAIN_CONTEXT_START, 0, &s, sizeof(s));
+}
+
+static int load_start(struct domain *d, struct domain_ctxt_state *c)
+{
+    struct domain_context_start s;
+    unsigned int i;
+    int rc = domain_load_ctxt_rec(c, DOMAIN_CONTEXT_START, &i, &s, sizeof(s));
+
+    if ( rc )
+        return rc;
+
+    if ( i )
+        return -EINVAL;
+
+    /*
+     * Make sure we are not attempting to load an image generated by a newer
+     * version of Xen.
+     */
+    if ( s.xen_major > XEN_VERSION || (s.xen_major == XEN_VERSION && s.xen_minor > XEN_SUBVERSION) )
+        return -EOPNOTSUPP;
+
+    return 0;
+}
+
+DOMAIN_REGISTER_CTXT_TYPE(START, save_start, load_start);
+
+static int save_end(struct domain *d, struct domain_ctxt_state *c,
+                    bool dry_run)
+{
+    static const struct domain_context_end e = {};
+
+    return domain_save_ctxt_rec(c, DOMAIN_CONTEXT_END, 0, &e, sizeof(e));
+}
+
+static int load_end(struct domain *d, struct domain_ctxt_state *c)
+{
+    unsigned int i;
+    int rc = domain_load_ctxt_rec(c, DOMAIN_CONTEXT_END, &i, NULL,
+                                  sizeof(struct domain_context_end));
+
+    if ( rc )
+        return rc;
+
+    if ( i )
+        return -EINVAL;
+
+    return 0;
+}
+
+DOMAIN_REGISTER_CTXT_TYPE(END, save_end, load_end);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/save.h b/xen/include/xen/save.h
new file mode 100644
index 0000000000..358cf2f700
--- /dev/null
+++ b/xen/include/xen/save.h
@@ -0,0 +1,160 @@
+/*
+ * save.h: support routines for save/restore
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef XEN_SAVE_H
+#define XEN_SAVE_H
+
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+
+#include <public/save.h>
+
+struct domain_ctxt_state;
+
+int domain_save_ctxt_rec_begin(struct domain_ctxt_state *c, unsigned int type,
+                               unsigned int instance);
+int domain_save_ctxt_rec_data(struct domain_ctxt_state *c, const void *data,
+                              size_t len);
+int domain_save_ctxt_rec_end(struct domain_ctxt_state *c);
+
+static inline int domain_save_ctxt_rec(struct domain_ctxt_state *c,
+                                       unsigned int type, unsigned int instance,
+                                       const void *src, size_t len)
+{
+    int rc;
+
+    rc = domain_save_ctxt_rec_begin(c, type, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_ctxt_rec_data(c, src, len);
+    if ( rc )
+        return rc;
+
+    return domain_save_ctxt_rec_end(c);
+}
+
+int domain_load_ctxt_rec_begin(struct domain_ctxt_state *c, unsigned int type,
+                               unsigned int *instance);
+int domain_load_ctxt_rec_data(struct domain_ctxt_state *c, void *data,
+                              size_t len);
+int domain_load_ctxt_rec_end(struct domain_ctxt_state *c, bool ignore_data);
+
+static inline int domain_load_ctxt_rec(struct domain_ctxt_state *c,
+                                       unsigned int type,
+                                       unsigned int *instance, void *dst,
+                                       size_t len)
+{
+    int rc;
+
+    rc = domain_load_ctxt_rec_begin(c, type, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_load_ctxt_rec_data(c, dst, len);
+    if ( rc )
+        return rc;
+
+    return domain_load_ctxt_rec_end(c, false);
+}
+
+/*
+ * The 'dry_run' flag indicates that the caller of domain_save_ctxt() (see below)
+ * is not trying to actually acquire the data, only the size of the data.
+ * The save handler can therefore limit work to only that which is necessary
+ * to call domain_save_ctxt_rec_data() the correct number of times with accurate
+ * values for 'len'.
+ *
+ * NOTE: the domain pointer argument to domain_save_ctxt_type is not const as
+ * some handlers may need to acquire locks.
+ */
+typedef int (*domain_save_ctxt_type)(struct domain *d,
+                                     struct domain_ctxt_state *c,
+                                     bool dry_run);
+typedef int (*domain_load_ctxt_type)(struct domain *d,
+                                     struct domain_ctxt_state *c);
+
+void domain_register_ctxt_type(unsigned int type, const char *name,
+                               domain_save_ctxt_type save,
+                               domain_load_ctxt_type load);
+
+/*
+ * Register save and load handlers for a record type.
+ *
+ * Save handlers will be invoked in an order which copes with any inter-
+ * entry dependencies. For now this means that START will come first and
+ * END will come last, all others being invoked in order of 'typecode'.
+ *
+ * Load handlers will be invoked in the order of entries present in the
+ * buffer.
+ */
+#define DOMAIN_REGISTER_CTXT_TYPE(x, s, l)                    \
+    static int __init __domain_register_##x##_ctxt_type(void) \
+    {                                                         \
+        domain_register_ctxt_type(                            \
+            DOMAIN_CONTEXT_ ## x,                             \
+            #x,                                               \
+            &(s),                                             \
+            &(l));                                            \
+                                                              \
+        return 0;                                             \
+    }                                                         \
+    __initcall(__domain_register_##x##_ctxt_type);
+
+/* Callback functions */
+struct domain_save_ctxt_ops {
+    /*
+     * Begin a new entry with the given record (only type and instance are
+     * valid).
+     */
+    int (*begin)(void *priv, const struct domain_context_record *rec);
+    /* Append data/padding to the buffer */
+    int (*append)(void *priv, const void *data, size_t len);
+    /*
+     * Complete the entry by updating the record with the total length of the
+     * appended data (not including padding).
+     */
+    int (*end)(void *priv, size_t len);
+};
+
+struct domain_load_ctxt_ops {
+    /* Read data/padding from the buffer */
+    int (*read)(void *priv, void *data, size_t len);
+};
+
+/*
+ * Entry points:
+ *
+ * ops:     These are callback functions provided by the caller that will
+ *          be used to write to (in the save case) or read from (in the
+ *          load case) the context buffer. See above for more detail.
+ * priv:    This is a pointer that will be passed to the callbacks to
+ *          allow them to identify the context buffer and the current
+ *          state of the save or load operation.
+ * dry_run: If this is set then the caller of domain_save_ctxt() is only trying
+ *          to acquire the total size of the data, not the data itself.
+ *          In this case the caller may supply different ops to avoid doing
+ *          unnecessary work.
+ */
+int domain_save_ctxt(struct domain *d, const struct domain_save_ctxt_ops *ops,
+                     void *priv, bool dry_run);
+int domain_load_ctxt(struct domain *d, const struct domain_load_ctxt_ops *ops,
+                     void *priv);
+
+#endif /* XEN_SAVE_H */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4560.11954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6t-0005WP-GA; Thu, 08 Oct 2020 18:57:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4560.11954; Thu, 08 Oct 2020 18:57:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6t-0005WH-C3; Thu, 08 Oct 2020 18:57:51 +0000
Received: by outflank-mailman (input) for mailman id 4560;
 Thu, 08 Oct 2020 18:57:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb6r-0005Rd-Gs
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08ecb7e3-5169-43f5-968b-d4e3676090eb;
 Thu, 08 Oct 2020 18:57:45 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6m-00041U-9S; Thu, 08 Oct 2020 18:57:44 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6m-0002P9-1o; Thu, 08 Oct 2020 18:57:44 +0000
X-Inumbo-ID: 08ecb7e3-5169-43f5-968b-d4e3676090eb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=KuTv/AiA3AykRxQcQYe5kZtE2uGaDTe4ZnjePHv8ems=; b=4BKw2EiLnQterbDLN/hAsmx+RQ
	yybqjdsv86n0liKGgrRVdDL5IzIPhMQ+5z5x+mfhCgck5YKQ2ldcRbynxBmSL7K5KJwzddY2gLGaH
	Sbhkze0q4Ugn46LP7bOymaTZKGdy4R8KpkBDizxL84lCzG8T/CBnT1uK4VapD+g/30WA=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v10 04/11] tools/misc: add xen-domctx to present domain context
Date: Thu,  8 Oct 2020 19:57:28 +0100
Message-Id: <20201008185735.29875-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This tool is analogous to 'xen-hvmctx', which presents HVM context.
Subsequent patches will add 'dump' functions when new records are
introduced.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Ian Jackson <iwj@xenproject.org>
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v10:
 - Re-base
 - Use err[x]()
 - Keep existing A-b since modifications are trivial

v3:
 - Re-worked to avoid copying onto stack
 - Added optional typecode and instance arguments

v2:
 - Change name from 'xen-ctx' to 'xen-domctx'
---
 .gitignore              |   1 +
 tools/misc/Makefile     |   4 +
 tools/misc/xen-domctx.c | 172 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 177 insertions(+)
 create mode 100644 tools/misc/xen-domctx.c

diff --git a/.gitignore b/.gitignore
index 188495783e..4d9a61124d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -243,6 +243,7 @@ tools/misc/xen_cpuperf
 tools/misc/xen-cpuid
 tools/misc/xen-detect
 tools/misc/xen-diag
+tools/misc/xen-domctx
 tools/misc/xen-tmem-list-parse
 tools/misc/xen-livepatch
 tools/misc/xenperf
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 7d37f297a9..fb673d0ff6 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -32,6 +32,7 @@ INSTALL_SBIN                   += xenpm
 INSTALL_SBIN                   += xenwatchdogd
 INSTALL_SBIN                   += xen-livepatch
 INSTALL_SBIN                   += xen-diag
+INSTALL_SBIN                   += xen-domctx
 INSTALL_SBIN += $(INSTALL_SBIN-y)
 
 # Everything to be installed in a private bin/
@@ -111,6 +112,9 @@ xen-livepatch: xen-livepatch.o
 xen-diag: xen-diag.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xen-domctx: xen-domctx.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
+
 xen-lowmemd: xen-lowmemd.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
new file mode 100644
index 0000000000..ca135b9a28
--- /dev/null
+++ b/tools/misc/xen-domctx.c
@@ -0,0 +1,172 @@
+/*
+ * xen-domctx.c
+ *
+ * Print out domain save records in a human-readable way.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <err.h>
+
+#include <xenctrl.h>
+#include <xen-tools/libs.h>
+#include <xen/xen.h>
+#include <xen/domctl.h>
+#include <xen/save.h>
+
+static void *buf = NULL;
+static size_t len, off;
+
+#define GET_PTR(_x)                                                        \
+    do {                                                                   \
+        if ( len - off < sizeof(*(_x)) )                                   \
+            errx(1, "error: need another %lu bytes, only %lu available",   \
+                    sizeof(*(_x)), len - off);                             \
+        (_x) = buf + off;                                                  \
+    } while (false);
+
+static void dump_start(void)
+{
+    struct domain_context_start *s;
+
+    GET_PTR(s);
+
+    printf("    START: Xen %u.%u\n", s->xen_major, s->xen_minor);
+}
+
+static void dump_end(void)
+{
+    struct domain_context_end *e;
+
+    GET_PTR(e);
+
+    printf("    END\n");
+}
+
+static void usage(void)
+{
+    errx(1, "usage: <domid> [ <type> [ <instance> ]]");
+}
+
+int main(int argc, char **argv)
+{
+    char *s, *e;
+    long domid;
+    long type = -1;
+    long instance = -1;
+    unsigned int entry;
+    xc_interface *xch;
+    int rc;
+
+    if ( argc < 2 || argc > 4 )
+        usage();
+
+    s = e = argv[1];
+    domid = strtol(s, &e, 0);
+
+    if ( *s == '\0' || *e != '\0' ||
+         domid < 0 || domid >= DOMID_FIRST_RESERVED )
+        errx(1, "invalid domid '%s'", s);
+
+    if ( argc >= 3 )
+    {
+        s = e = argv[2];
+        type = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+            errx(1, "invalid type '%s'", s);
+    }
+
+    if ( argc == 4 )
+    {
+        s = e = argv[3];
+        instance = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+            errx(1, "invalid instance '%s'", s);
+    }
+
+    xch = xc_interface_open(0, 0, 0);
+    if ( !xch )
+        err(1, "can't open libxc handle");
+
+    rc = xc_domain_get_context(xch, domid, NULL, &len);
+    if ( rc < 0 )
+        err(1, "can't get context length for dom %lu", domid);
+
+    buf = malloc(len);
+    if ( !buf )
+        err(1, "can't allocate %lu bytes", len);
+
+    rc = xc_domain_get_context(xch, domid, buf, &len);
+    if ( rc < 0 )
+        err(1, "can't get context for dom %lu", domid);
+
+    off = 0;
+
+    entry = 0;
+    for ( ;; )
+    {
+        struct domain_context_record *rec;
+
+        GET_PTR(rec);
+
+        off += sizeof(*rec);
+
+        if ( (type < 0 || type == rec->type) &&
+             (instance < 0 || instance == rec->instance) )
+        {
+            printf("[%u] type: %u instance: %u length: %"PRIu64"\n", entry++,
+                   rec->type, rec->instance, rec->length);
+
+            switch (rec->type)
+            {
+            case DOMAIN_CONTEXT_START: dump_start(); break;
+            case DOMAIN_CONTEXT_END: dump_end(); break;
+            default:
+                printf("Unknown type %u: skipping\n", rec->type);
+                break;
+            }
+        }
+
+        if ( rec->type == DOMAIN_CONTEXT_END )
+            break;
+
+        off += ROUNDUP(rec->length, _DOMAIN_CONTEXT_RECORD_ALIGN);
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4562.11978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6y-0005d4-B4; Thu, 08 Oct 2020 18:57:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4562.11978; Thu, 08 Oct 2020 18:57:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb6y-0005cp-67; Thu, 08 Oct 2020 18:57:56 +0000
Received: by outflank-mailman (input) for mailman id 4562;
 Thu, 08 Oct 2020 18:57:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb6w-0005Rd-H6
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27f7fc65-d96f-4489-8d37-a62f4a7d1169;
 Thu, 08 Oct 2020 18:57:50 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6q-000424-JJ; Thu, 08 Oct 2020 18:57:48 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6q-0002P9-BJ; Thu, 08 Oct 2020 18:57:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kQb6w-0005Rd-H6
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:54 +0000
X-Inumbo-ID: 27f7fc65-d96f-4489-8d37-a62f4a7d1169
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 27f7fc65-d96f-4489-8d37-a62f4a7d1169;
	Thu, 08 Oct 2020 18:57:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=oKO5AKNeTUrG7AZllhuh/I6ylTG0M0k1GwLjYqlJpq8=; b=0VgqTqfgE6iFtnmo7tyTIVdPMt
	QuH44fAxOfGUDZ2JqYkrCJWtWWT6vj4yDx8LsTfBjGgb8bjzH29jAuYNqvMPy39i/BhA/eiF8aMkp
	SjnpjqBdbXSOjti8kq4F7ckpIQHWOEygmmaDs/MMwHw9JH0C1u3r2rWY+DUVEIGJlM00=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kQb6q-000424-JJ; Thu, 08 Oct 2020 18:57:48 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kQb6q-0002P9-BJ; Thu, 08 Oct 2020 18:57:48 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v10 07/11] docs/specs: add missing definitions to libxc-migration-stream
Date: Thu,  8 Oct 2020 19:57:31 +0100
Message-Id: <20201008185735.29875-8-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

The STATIC_DATA_END, X86_CPUID_POLICY and X86_MSR_POLICY record types have
sections explaining what they are, but their values are not defined. Indeed,
their values currently fall within the range defined as "Reserved for future
mandatory records."

Also, the spec revision is adjusted to match the migration stream version
and an END record is added to the description of a 'typical save record for
an x86 HVM guest'.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Fixes: 6f71b5b1506 ("docs/migration Specify migration v3 and STATIC_DATA_END")
Fixes: ddd273d8863 ("docs/migration: Specify X86_{CPUID,MSR}_POLICY records")
Acked-by: Wei Liu <wl@xen.org>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v7:
 - New in v7
---
 docs/specs/libxc-migration-stream.pandoc | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/docs/specs/libxc-migration-stream.pandoc b/docs/specs/libxc-migration-stream.pandoc
index 6b0c49e97a..8aeab3b11b 100644
--- a/docs/specs/libxc-migration-stream.pandoc
+++ b/docs/specs/libxc-migration-stream.pandoc
@@ -3,7 +3,7 @@
   Andrew Cooper <<andrew.cooper3@citrix.com>>
   Wen Congyang <<wency@cn.fujitsu.com>>
   Yang Hongyang <<hongyang.yang@easystack.cn>>
-% Revision 2
+% Revision 3
 
 Introduction
 ============
@@ -227,7 +227,13 @@ type         0x00000000: END
 
              0x0000000F: CHECKPOINT_DIRTY_PFN_LIST (Secondary -> Primary)
 
-             0x00000010 - 0x7FFFFFFF: Reserved for future _mandatory_
+             0x00000010: STATIC_DATA_END
+
+             0x00000011: X86_CPUID_POLICY
+
+             0x00000012: X86_MSR_POLICY
+
+             0x00000013 - 0x7FFFFFFF: Reserved for future _mandatory_
              records.
 
              0x80000000 - 0xFFFFFFFF: Reserved for future _optional_
@@ -732,6 +738,7 @@ A typical save record for an x86 HVM guest image would look like:
 * X86_TSC_INFO
 * HVM_PARAMS
 * HVM_CONTEXT
+* END record
 
 HVM_PARAMS must precede HVM_CONTEXT, as certain parameters can affect
 the validity of architectural state in the context.
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4563.11990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb72-0005jL-Nh; Thu, 08 Oct 2020 18:58:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4563.11990; Thu, 08 Oct 2020 18:58:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb72-0005jB-IQ; Thu, 08 Oct 2020 18:58:00 +0000
Received: by outflank-mailman (input) for mailman id 4563;
 Thu, 08 Oct 2020 18:57:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb70-0005RO-Or
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b401a948-9a26-4764-a57a-e7a26aace545;
 Thu, 08 Oct 2020 18:57:46 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6n-00041e-MH; Thu, 08 Oct 2020 18:57:45 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6n-0002P9-Ea; Thu, 08 Oct 2020 18:57:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kQb70-0005RO-Or
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:58 +0000
X-Inumbo-ID: b401a948-9a26-4764-a57a-e7a26aace545
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b401a948-9a26-4764-a57a-e7a26aace545;
	Thu, 08 Oct 2020 18:57:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=08OQ9dhpBtbLKfokq1GJMc6/ONnHhWNL0B52lkG8Jxs=; b=0X+9fnPxu+nVAFDzZo+0JLq+3R
	B9jEyuocVfr4jF570pJIh+UXlKXKAv9fMBqH8w7GhbXuVVfWhY/1Aik64e+/Dy09D6lJnRvUInhuf
	UZfZNSgYOvder6/BoDN6GkxHnS61NcuSKXC+kPFoltX2UfAc7FcAUjFD1VzkDQ66i5RI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kQb6n-00041e-MH; Thu, 08 Oct 2020 18:57:45 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kQb6n-0002P9-Ea; Thu, 08 Oct 2020 18:57:45 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v10 05/11] common/domain: add a domain context record for shared_info...
Date: Thu,  8 Oct 2020 19:57:29 +0100
Message-Id: <20201008185735.29875-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and update xen-domctx to dump some information describing the record.

NOTE: Processing of the content during restore is currently limited to
      PV domains, and matches processing of the PV-only SHARED_INFO record
      done by libxc. All content is, however, saved such that restore
      processing can be modified in future without requiring a new record
      format.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v10:
 - Re-base
 - Amend the specification now there is one
 - Dropped Jan's R-b as modifications are not completely trivial

v9:
 - Use macros to make the code less verbose
 - Add missing check for allocation failure

v8:
 - Incorporate zero-ing out of shared info fields that would be done in
   processing of SHARED_INFO from older stream versions

v7:
 - Only restore vcpu_info and arch sub-structures for PV domains, to match
   processing of SHARED_INFO in xc_sr_restore_x86_pv.c
 - Use additional option to domain_load_end() to ignore the record for
   HVM domains

v6:
 - Only save compat_shared_info buffer if has_32bit_shinfo is set
 - Validate flags field in load handler

v5:
 - Addressed comments from Julien

v4:
 - Addressed comments from Jan

v3:
 - Actually dump some of the content of shared_info

v2:
 - Drop the header change to define a 'Xen' page size and instead use a
   variable length struct now that the framework makes this is feasible
 - Guard use of 'has_32bit_shinfo' in common code with CONFIG_COMPAT
---
 docs/specs/domain-context.md |  29 +++++++++
 tools/misc/xen-domctx.c      |  80 +++++++++++++++++++++++++
 xen/common/domain.c          | 113 +++++++++++++++++++++++++++++++++++
 xen/include/public/save.h    |  11 ++++
 4 files changed, 233 insertions(+)

diff --git a/docs/specs/domain-context.md b/docs/specs/domain-context.md
index f177cf24b3..95e9f9d1ab 100644
--- a/docs/specs/domain-context.md
+++ b/docs/specs/domain-context.md
@@ -128,6 +128,33 @@ can no longer be safely inferred.
 A record of this type terminates the image. No further data from the buffer
 should be consumed.
 
+### SHARED_INFO
+
+```
+    0       1       2       3       4       5       6       7    octet
++-------+-------+-------+-------+-------+-------+-------+-------+
+| type == 2                     | instance == 0                 |
++-------------------------------+-------------------------------+
+| length                                                        |
++-------------------------------+-------------------------------+
+| flags                         | buffer
++-------------------------------+
+...
+```
+
+\pagebreak
+The record body contains the following fields:
+
+| Field       | Description                                     |
+|-------------|-------------------------------------------------|
+| `flags`     | A bit-wise OR of the following:                 |
+|             |                                                 |
+|             | 0x00000001: The domain has 32-bit (compat)      |
+|             |             shared info                         |
+|             |                                                 |
+| `buffer`    | The shared info (`length` being architecture    |
+|             | dependent[4])                                   |
+
 * * *
 
 [1] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/designs/non-cooperative-migration.md
@@ -135,3 +162,5 @@ should be consumed.
 [2] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/hvm/save.h
 
 [3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/specs/libxc-migration-stream.pandoc
+
+[4] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=tools/include/xen-foreign/reference.size
diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
index ca135b9a28..5ea6de50d1 100644
--- a/tools/misc/xen-domctx.c
+++ b/tools/misc/xen-domctx.c
@@ -57,6 +57,85 @@ static void dump_start(void)
     printf("    START: Xen %u.%u\n", s->xen_major, s->xen_minor);
 }
 
+static void print_binary(const char *prefix, const void *val, size_t size,
+                         const char *suffix)
+{
+    printf("%s", prefix);
+
+    while ( size-- )
+    {
+        uint8_t octet = *(const uint8_t *)val++;
+        unsigned int i;
+
+        for ( i = 0; i < 8; i++ )
+        {
+            printf("%u", octet & 1);
+            octet >>= 1;
+        }
+    }
+
+    printf("%s", suffix);
+}
+
+static void dump_shared_info(void)
+{
+    struct domain_context_shared_info *s;
+    bool has_32bit_shinfo;
+    shared_info_any_t *info;
+    unsigned int i, n;
+
+    GET_PTR(s);
+    has_32bit_shinfo = s->flags & DOMAIN_CONTEXT_32BIT_SHARED_INFO;
+
+    printf("    SHARED_INFO: has_32bit_shinfo: %s\n",
+           has_32bit_shinfo ? "true" : "false");
+
+    info = (shared_info_any_t *)s->buffer;
+
+#define GET_FIELD_PTR(_f)            \
+    (has_32bit_shinfo ?              \
+     (const void *)&(info->x32._f) : \
+     (const void *)&(info->x64._f))
+#define GET_FIELD_SIZE(_f) \
+    (has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
+#define GET_FIELD(_f) \
+    (has_32bit_shinfo ? info->x32._f : info->x64._f)
+
+    n = has_32bit_shinfo ?
+        ARRAY_SIZE(info->x32.evtchn_pending) :
+        ARRAY_SIZE(info->x64.evtchn_pending);
+
+    for ( i = 0; i < n; i++ )
+    {
+        const char *prefix = !i ?
+            "                 evtchn_pending: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_pending[0]),
+                 GET_FIELD_SIZE(evtchn_pending[0]), "\n");
+    }
+
+    for ( i = 0; i < n; i++ )
+    {
+        const char *prefix = !i ?
+            "                    evtchn_mask: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_mask[0]),
+                 GET_FIELD_SIZE(evtchn_mask[0]), "\n");
+    }
+
+    printf("                 wc: version: %u sec: %u nsec: %u",
+           GET_FIELD(wc_version), GET_FIELD(wc_sec), GET_FIELD(wc_nsec));
+    if ( !has_32bit_shinfo )
+        printf(" sec_hi: %u", info->x64.xen_wc_sec_hi);
+    printf("\n");
+
+#undef GET_FIELD
+#undef GET_FIELD_SIZE
+#undef GET_FIELD_PTR
+}
+
 static void dump_end(void)
 {
     struct domain_context_end *e;
@@ -145,6 +224,7 @@ int main(int argc, char **argv)
             switch (rec->type)
             {
             case DOMAIN_CONTEXT_START: dump_start(); break;
+            case DOMAIN_CONTEXT_SHARED_INFO: dump_shared_info(); break;
             case DOMAIN_CONTEXT_END: dump_end(); break;
             default:
                 printf("Unknown type %u: skipping\n", rec->type);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index f748806a45..6c223dae38 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -33,6 +33,7 @@
 #include <xen/xenoprof.h>
 #include <xen/irq.h>
 #include <xen/argo.h>
+#include <xen/save.h>
 #include <asm/debugger.h>
 #include <asm/p2m.h>
 #include <asm/processor.h>
@@ -1671,6 +1672,118 @@ int continue_hypercall_on_cpu(
     return 0;
 }
 
+static int save_shared_info(struct domain *d, struct domain_ctxt_state *c,
+                            bool dry_run)
+{
+#ifdef CONFIG_COMPAT
+    struct domain_context_shared_info s = {
+        .flags = has_32bit_shinfo(d) ? DOMAIN_CONTEXT_32BIT_SHARED_INFO : 0,
+    };
+    size_t size = has_32bit_shinfo(d) ?
+        sizeof(struct compat_shared_info) :
+        sizeof(struct shared_info);
+#else
+    struct domain_context_shared_info s = {};
+    size_t size = sizeof(struct shared_info);
+#endif
+    int rc;
+
+    rc = domain_save_ctxt_rec_begin(c, DOMAIN_CONTEXT_SHARED_INFO, 0);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_ctxt_rec_data(c, &s, offsetof(typeof(s), buffer));
+    if ( rc )
+        return rc;
+
+    rc = domain_save_ctxt_rec_data(c, d->shared_info, size);
+    if ( rc )
+        return rc;
+
+    return domain_save_ctxt_rec_end(c);
+}
+
+static int load_shared_info(struct domain *d, struct domain_ctxt_state *c)
+{
+    struct domain_context_shared_info s = {};
+    size_t size;
+    unsigned int i;
+    int rc;
+
+    rc = domain_load_ctxt_rec_begin(c, DOMAIN_CONTEXT_SHARED_INFO, &i);
+    if ( rc )
+        return rc;
+
+    if ( i ) /* expect only a single instance */
+        return -ENXIO;
+
+    rc = domain_load_ctxt_rec_data(c, &s, offsetof(typeof(s), buffer));
+    if ( rc )
+        return rc;
+
+    if ( s.flags & ~DOMAIN_CONTEXT_32BIT_SHARED_INFO )
+        return -EINVAL;
+
+    if ( s.flags & DOMAIN_CONTEXT_32BIT_SHARED_INFO )
+    {
+#ifdef CONFIG_COMPAT
+        d->arch.has_32bit_shinfo = true;
+        size = sizeof(struct compat_shared_info);
+#else
+        return -EINVAL;
+#endif
+    }
+    else
+        size = sizeof(struct shared_info);
+
+    if ( is_pv_domain(d) )
+    {
+        shared_info_t *shinfo = xzalloc(shared_info_t);
+
+        if ( !shinfo )
+            return -ENOMEM;
+
+        rc = domain_load_ctxt_rec_data(c, shinfo, size);
+        if ( rc )
+            goto out;
+
+        memcpy(&shared_info(d, vcpu_info), &__shared_info(d, shinfo, vcpu_info),
+               sizeof(shared_info(d, vcpu_info)));
+        memcpy(&shared_info(d, arch), &__shared_info(d, shinfo, arch),
+               sizeof(shared_info(d, arch)));
+
+        memset(&shared_info(d, evtchn_pending), 0,
+               sizeof(shared_info(d, evtchn_pending)));
+        memset(&shared_info(d, evtchn_mask), 0xff,
+               sizeof(shared_info(d, evtchn_mask)));
+
+#ifdef CONFIG_X86
+        shared_info(d, arch.pfn_to_mfn_frame_list_list) = 0;
+#endif
+        for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
+            shared_info(d, vcpu_info[i].evtchn_pending_sel) = 0;
+
+        rc = domain_load_ctxt_rec_end(c, false);
+
+    out:
+        xfree(shinfo);
+    }
+    else
+    {
+        /*
+         * No modifications to shared_info are required for restoring non-PV
+         * domains.
+         */
+        rc = domain_load_ctxt_rec_data(c, NULL, size);
+        if ( !rc )
+            rc = domain_load_ctxt_rec_end(c, true);
+    }
+
+    return rc;
+}
+
+DOMAIN_REGISTER_CTXT_TYPE(SHARED_INFO, save_shared_info, load_shared_info);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
index c4be9f570c..bccbaadd0b 100644
--- a/xen/include/public/save.h
+++ b/xen/include/public/save.h
@@ -49,6 +49,7 @@ struct domain_context_record {
 enum {
     DOMAIN_CONTEXT_END,
     DOMAIN_CONTEXT_START,
+    DOMAIN_CONTEXT_SHARED_INFO,
     /* New types go here */
     DOMAIN_CONTEXT_NR_TYPES
 };
@@ -58,6 +59,16 @@ struct domain_context_start {
     uint32_t xen_major, xen_minor;
 };
 
+struct domain_context_shared_info {
+    uint32_t flags;
+
+#define _DOMAIN_CONTEXT_32BIT_SHARED_INFO 0
+#define DOMAIN_CONTEXT_32BIT_SHARED_INFO \
+    (1U << _DOMAIN_CONTEXT_32BIT_SHARED_INFO)
+
+    uint8_t buffer[XEN_FLEX_ARRAY_DIM];
+};
+
 /* Terminating entry */
 struct domain_context_end {};
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4564.11995 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb73-0005kM-7I; Thu, 08 Oct 2020 18:58:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4564.11995; Thu, 08 Oct 2020 18:58:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb72-0005k6-UT; Thu, 08 Oct 2020 18:58:00 +0000
Received: by outflank-mailman (input) for mailman id 4564;
 Thu, 08 Oct 2020 18:57:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb71-0005Rd-HC
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9873da21-24d2-457c-af46-bf5823508cc0;
 Thu, 08 Oct 2020 18:57:51 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6s-00042L-4O; Thu, 08 Oct 2020 18:57:50 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6r-0002P9-RT; Thu, 08 Oct 2020 18:57:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kQb71-0005Rd-HC
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:57:59 +0000
X-Inumbo-ID: 9873da21-24d2-457c-af46-bf5823508cc0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9873da21-24d2-457c-af46-bf5823508cc0;
	Thu, 08 Oct 2020 18:57:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=RXDGKqvarznbbGXJp5zFlQUMeIoJk0IJ8cYPeqwtqQs=; b=rAfksRyRfbM1+KyVINSizjKsW7
	s5ZYQtUw+GOOJZCSzw5hYA6ljgkke3rzD0gwKzLnGthpgP2tGuCHbmeGbgATInoETkuNNvJmdhG35
	hqVaY2FjQBaANGFUy6bky3cOyVEFeTGprgfM9LU6N+s93WuFwHRk7yURWou6zR2Y6yOQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kQb6s-00042L-4O; Thu, 08 Oct 2020 18:57:50 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com ([109.146.187.185] helo=u2f063a87eabd5f.home)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kQb6r-0002P9-RT; Thu, 08 Oct 2020 18:57:50 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: [PATCH v10 08/11] docs / tools: specify migration v4 to include DOMAIN_CONTEXT
Date: Thu,  8 Oct 2020 19:57:32 +0100
Message-Id: <20201008185735.29875-9-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

A new 'domain context' framework was recently introduced to facilitate
transfer of state for both PV and HVM guests. Hence this patch mandates the
presence of a new DOMAIN_CONTEXT record in version 4 of the libxc migration
stream.
This record will incorporate the content of the domain's 'shared_info' page
and the TSC information, so the SHARED_INFO and TSC_INFO records are deprecated.
It is intended that, in future, this record will contain state currently
present in the HVM_CONTEXT record. However, for compatibility with earlier
migration streams, the version 4 stream format continues to specify an
HVM_CONTEXT record and XEN_DOMCTL_sethvmcontext will continue to accept all
content of that record that may be present in a version 3 stream.
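For illustration only (not part of this patch), the effect of the version
bump on a stream consumer can be sketched in Python. The header layout
assumed below (big-endian 64-bit marker, 32-bit id, 32-bit version, two
trailing 16-bit fields) follows the spec's image header; the helper itself
is hypothetical:

```python
# Sketch: validating a libxc migration image header, accepting v2..v4.
import struct

IHDR_FORMAT = "!QIIHH"   # marker, id, version, options, reserved (assumed widths)
IHDR_IDENT = 0x58454E46  # "XENF" in ASCII

def check_image_header(blob):
    marker, ident, version, options, _ = struct.unpack_from(IHDR_FORMAT, blob)
    if marker != 0xFFFFFFFFFFFFFFFF:
        raise ValueError("bad marker: 0x%x" % marker)
    if ident != IHDR_IDENT:
        raise ValueError("bad image id: 0x%x" % ident)
    if not 2 <= version <= 4:
        raise ValueError("unknown image version: %d" % version)
    return version

# A well-formed v4 header passes the check.
hdr = struct.pack(IHDR_FORMAT, 0xFFFFFFFFFFFFFFFF, IHDR_IDENT, 4, 0, 0)
assert check_image_header(hdr) == 4
```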

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: "Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>

NOTE: Wei requested ack from Andrew

v10:
 - Removed changes to xg_sr_common.c and libxc.py to make this just a
   documentation and header patch
 - Dropped Wei's A-b in light of change

v7:
 - New in v7
---
 docs/specs/libxc-migration-stream.pandoc | 62 ++++++++++++++++++------
 tools/libs/guest/xg_sr_stream_format.h   |  1 +
 2 files changed, 49 insertions(+), 14 deletions(-)

diff --git a/docs/specs/libxc-migration-stream.pandoc b/docs/specs/libxc-migration-stream.pandoc
index 8aeab3b11b..aa6fe284f3 100644
--- a/docs/specs/libxc-migration-stream.pandoc
+++ b/docs/specs/libxc-migration-stream.pandoc
@@ -3,7 +3,7 @@
   Andrew Cooper <<andrew.cooper3@citrix.com>>
   Wen Congyang <<wency@cn.fujitsu.com>>
   Yang Hongyang <<hongyang.yang@easystack.cn>>
-% Revision 3
+% Revision 4
 
 Introduction
 ============
@@ -127,7 +127,7 @@ marker      0xFFFFFFFFFFFFFFFF.
 
 id          0x58454E46 ("XENF" in ASCII).
 
-version     0x00000003.  The version of this specification.
+version     0x00000004.  The version of this specification.
 
 options     bit 0: Endianness.  0 = little-endian, 1 = big-endian.
 
@@ -209,9 +209,9 @@ type         0x00000000: END
 
              0x00000006: X86_PV_VCPU_XSAVE
 
-             0x00000007: SHARED_INFO
+             0x00000007: SHARED_INFO (deprecated)
 
-             0x00000008: X86_TSC_INFO
+             0x00000008: X86_TSC_INFO (deprecated)
 
              0x00000009: HVM_CONTEXT
 
@@ -233,7 +233,9 @@ type         0x00000000: END
 
              0x00000012: X86_MSR_POLICY
 
-             0x00000013 - 0x7FFFFFFF: Reserved for future _mandatory_
+             0x00000013: DOMAIN_CONTEXT
+
+             0x00000014 - 0x7FFFFFFF: Reserved for future _mandatory_
              records.
 
              0x80000000 - 0xFFFFFFFF: Reserved for future _optional_
@@ -442,10 +444,11 @@ X86_PV_VCPU_MSRS             XEN_DOMCTL_{get,set}\_vcpu_msrs
 
 \clearpage
 
-SHARED_INFO
------------
+SHARED_INFO (deprecated)
+------------------------
 
-The content of the Shared Info page.
+The content of the Shared Info page. This is incorporated into the
+DOMAIN_CONTEXT record as of specification version 4.
 
      0     1     2     3     4     5     6     7 octet
     +-------------------------------------------------+
@@ -462,11 +465,12 @@ shared_info      Contents of the shared info page.  This record
 
 \clearpage
 
-X86_TSC_INFO
-------------
+X86_TSC_INFO (deprecated)
+-------------------------
 
 Domain TSC information, as accessed by the
-XEN_DOMCTL_{get,set}tscinfo hypercall sub-ops.
+XEN_DOMCTL_{get,set}tscinfo hypercall sub-ops. This is incorporated into the
+DOMAIN_CONTEXT record as of specification version 4.
 
      0     1     2     3     4     5     6     7 octet
     +------------------------+------------------------+
@@ -680,6 +684,25 @@ MSR_policy       Array of xen_msr_entry_t[]'s
 
 \clearpage
 
+DOMAIN_CONTEXT
+--------------
+
+Domain context, as accessed by the
+XEN_DOMCTL_{get,set}_domain_context hypercall sub-ops.
+
+     0     1     2     3     4     5     6     7 octet
+    +-------------------------------------------------+
+    | dom_ctx                                         |
+    ...
+    +-------------------------------------------------+
+
+--------------------------------------------------------------------
+Field            Description
+-----------      ---------------------------------------------------
+dom_ctx          The Domain Context blob from Xen.
+--------------------------------------------------------------------
+
+\clearpage
 
 Layout
 ======
@@ -706,8 +729,7 @@ A typical save record for an x86 PV guest image would look like:
     * STATIC_DATA_END
 * X86_PV_P2M_FRAMES record
 * Many PAGE_DATA records
-* X86_TSC_INFO
-* SHARED_INFO record
+* DOMAIN_CONTEXT
 * VCPU context records for each online VCPU
     * X86_PV_VCPU_BASIC record
     * X86_PV_VCPU_EXTENDED record
@@ -735,7 +757,7 @@ A typical save record for an x86 HVM guest image would look like:
     * X86_{CPUID,MSR}_POLICY
     * STATIC_DATA_END
 * Many PAGE_DATA records
-* X86_TSC_INFO
+* DOMAIN_CONTEXT
 * HVM_PARAMS
 * HVM_CONTEXT
 * END record
@@ -746,6 +768,18 @@ the validity of architectural state in the context.
 Compatibility with older versions
 =================================
 
+v4 compat with v3
+-----------------
+
+A v4 stream is compatible with a v3 stream, but mandates the presence of a
+DOMAIN_CONTEXT record. This incorporates context such as the content of
+the domain's Shared Info page and the TSC information, hence the SHARED_INFO
+and TSC_INFO records are deprecated.
+It also supersedes HVM_CONTEXT and, over time, data that is currently part of
+the HVM_CONTEXT blob will move to the DOMAIN_CONTEXT blob. Xen, however, will
+continue to accept all defined HVM_CONTEXT records so a v4-compatible
+receiver can still accept an unmodified v3 stream.
+
 v3 compat with v2
 -----------------
 
diff --git a/tools/libs/guest/xg_sr_stream_format.h b/tools/libs/guest/xg_sr_stream_format.h
index 8a0da26f75..bc538bc192 100644
--- a/tools/libs/guest/xg_sr_stream_format.h
+++ b/tools/libs/guest/xg_sr_stream_format.h
@@ -76,6 +76,7 @@ struct xc_sr_rhdr
 #define REC_TYPE_STATIC_DATA_END            0x00000010U
 #define REC_TYPE_X86_CPUID_POLICY           0x00000011U
 #define REC_TYPE_X86_MSR_POLICY             0x00000012U
+#define REC_TYPE_DOMAIN_CONTEXT             0x00000013U
 
 #define REC_TYPE_OPTIONAL             0x80000000U
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4565.12014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb77-0005rT-EA; Thu, 08 Oct 2020 18:58:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4565.12014; Thu, 08 Oct 2020 18:58:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb77-0005rF-8n; Thu, 08 Oct 2020 18:58:05 +0000
Received: by outflank-mailman (input) for mailman id 4565;
 Thu, 08 Oct 2020 18:58:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb75-0005RO-P5
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:58:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f4a4771c-8495-46c6-aa76-25ee6f552b4b;
 Thu, 08 Oct 2020 18:57:48 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6p-00041s-5r; Thu, 08 Oct 2020 18:57:47 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6o-0002P9-Ug; Thu, 08 Oct 2020 18:57:47 +0000
X-Inumbo-ID: f4a4771c-8495-46c6-aa76-25ee6f552b4b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=AC7LRRI1wOmmMw28VRmEhixnQ6YAW+ioZhx4wAS3Mgc=; b=I/SgwdNaU4Vq24kIJOAwqscdRv
	Z+eGMyM/eyRDVC81hbY0ckFcIxoD/5YtKXYklLcHW+/UM6+8noE3kpf+Ypmml9H8WOHSBgBzAg+CJ
	GLXHQyhBAsN0XxH+fbzrHDHvHjY4YmBcLWfzIpmUae7OvJO5tvaNyWbGd8yeDq+SzE+0=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Jan Beulich <jbeulich@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v10 06/11] x86/time: add a domain context record for tsc_info...
Date: Thu,  8 Oct 2020 19:57:30 +0100
Message-Id: <20201008185735.29875-7-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

... and update xen-domctx to dump some information describing the record.

NOTE: Whilst the record is x86 specific, it is visible directly in the common
      header as context record numbers should be unique across all
      architectures.
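Purely as a sketch (not code from this patch), the record layout in the
domain-context.md hunk can be expressed with Python's struct module. The
little-endian byte order and the 16-byte type/instance/length header
encoding are assumptions for illustration:

```python
# Sketch: encoding a TSC_INFO domain context record
# (type 3, instance 0, 20-byte body of mode/khz/nsec/incarnation).
import struct

def pack_tsc_info(mode, khz, nsec, incarnation):
    # Body layout per the spec table: u32 mode, u32 khz, u64 nsec,
    # u32 incarnation -> 20 bytes.
    body = struct.pack("<IIQI", mode, khz, nsec, incarnation)
    assert len(body) == 20
    # Record header: u32 type, u32 instance, u64 length (assumed encoding).
    hdr = struct.pack("<IIQ", 3, 0, len(body))
    return hdr + body
```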

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v10:
 - Re-base
 - Amend the specification now there is one
 - Kept Jan's R-b as the changes are cosmetic

v8:
 - Removed stray blank line

v7:
 - New in v7
---
 docs/specs/domain-context.md | 32 ++++++++++++++++++++++++++++++++
 tools/misc/xen-domctx.c      | 12 ++++++++++++
 xen/arch/x86/time.c          | 30 ++++++++++++++++++++++++++++++
 xen/include/public/save.h    |  8 ++++++++
 4 files changed, 82 insertions(+)

diff --git a/docs/specs/domain-context.md b/docs/specs/domain-context.md
index 95e9f9d1ab..8aa3466d96 100644
--- a/docs/specs/domain-context.md
+++ b/docs/specs/domain-context.md
@@ -155,6 +155,38 @@ The record body contains the following fields:
 | `buffer`    | The shared info (`length` being architecture    |
 |             | dependent[4])                                   |
 
+### TSC_INFO
+
+```
+    0       1       2       3       4       5       6       7    octet
++-------+-------+-------+-------+-------+-------+-------+-------+
+| type == 3                     | instance == 0                 |
++-------------------------------+-------------------------------+
+| length == 20                                                  |
++-------------------------------+-------------------------------+
+| mode                          | khz                           |
++-------------------------------+-------------------------------+
+| nsec                                                          |
++-------------------------------+-------------------------------+
+| incarnation                   |
++-------------------------------+
+```
+
+The record body contains the following fields:
+
+| Field         | Description                                   |
+|---------------|-----------------------------------------------|
+| `mode`        | 0x00000000: Default (emulate if necessary)    |
+|               | 0x00000001: Always emulate                    |
+|               | 0x00000002: Never emulate                     |
+|               |                                               |
+| `khz`         | The TSC frequency in kHz                      |
+|               |                                               |
+| `nsec`        | Elapsed time in nanoseconds                   |
+|               |                                               |
+| `incarnation` | Incarnation (counter indicating how many      |
+|               | times the TSC value has been set)             |
+
 * * *
 
 [1] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/designs/non-cooperative-migration.md
diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
index 5ea6de50d1..1a5dfb9d5a 100644
--- a/tools/misc/xen-domctx.c
+++ b/tools/misc/xen-domctx.c
@@ -136,6 +136,17 @@ static void dump_shared_info(void)
 #undef GET_FIELD_PTR
 }
 
+static void dump_tsc_info(void)
+{
+    struct domain_context_tsc_info *t;
+
+    GET_PTR(t);
+
+    printf("    TSC_INFO: mode: %u incarnation: %u\n"
+           "              khz %u nsec: %"PRIu64"\n",
+           t->mode, t->incarnation, t->khz, t->nsec);
+}
+
 static void dump_end(void)
 {
     struct domain_context_end *e;
@@ -225,6 +236,7 @@ int main(int argc, char **argv)
             {
             case DOMAIN_CONTEXT_START: dump_start(); break;
             case DOMAIN_CONTEXT_SHARED_INFO: dump_shared_info(); break;
+            case DOMAIN_CONTEXT_TSC_INFO: dump_tsc_info(); break;
             case DOMAIN_CONTEXT_END: dump_end(); break;
             default:
                 printf("Unknown type %u: skipping\n", rec->type);
diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
index 8938c0f435..aec4bfb0f3 100644
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -26,6 +26,7 @@
 #include <xen/symbols.h>
 #include <xen/keyhandler.h>
 #include <xen/guest_access.h>
+#include <xen/save.h>
 #include <asm/io.h>
 #include <asm/iocap.h>
 #include <asm/msr.h>
@@ -2451,6 +2452,35 @@ int tsc_set_info(struct domain *d,
     return 0;
 }
 
+static int save_tsc_info(struct domain *d, struct domain_ctxt_state *c,
+                         bool dry_run)
+{
+    struct domain_context_tsc_info t = {};
+
+    if ( !dry_run )
+        tsc_get_info(d, &t.mode, &t.nsec, &t.khz, &t.incarnation);
+
+    return domain_save_ctxt_rec(c, DOMAIN_CONTEXT_TSC_INFO, 0, &t, sizeof(t));
+}
+
+static int load_tsc_info(struct domain *d, struct domain_ctxt_state *c)
+{
+    struct domain_context_tsc_info t;
+    unsigned int i;
+    int rc;
+
+    rc = domain_load_ctxt_rec(c, DOMAIN_CONTEXT_TSC_INFO, &i, &t, sizeof(t));
+    if ( rc )
+        return rc;
+
+    if ( i ) /* expect only a single instance */
+        return -ENXIO;
+
+    return tsc_set_info(d, t.mode, t.nsec, t.khz, t.incarnation);
+}
+
+DOMAIN_REGISTER_CTXT_TYPE(TSC_INFO, save_tsc_info, load_tsc_info);
+
 /* vtsc may incur measurable performance degradation, diagnose with this */
 static void dump_softtsc(unsigned char key)
 {
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
index bccbaadd0b..86881864cf 100644
--- a/xen/include/public/save.h
+++ b/xen/include/public/save.h
@@ -50,6 +50,7 @@ enum {
     DOMAIN_CONTEXT_END,
     DOMAIN_CONTEXT_START,
     DOMAIN_CONTEXT_SHARED_INFO,
+    DOMAIN_CONTEXT_TSC_INFO,
     /* New types go here */
     DOMAIN_CONTEXT_NR_TYPES
 };
@@ -69,6 +70,13 @@ struct domain_context_shared_info {
     uint8_t buffer[XEN_FLEX_ARRAY_DIM];
 };
 
+struct domain_context_tsc_info {
+    uint32_t mode;
+    uint32_t khz;
+    uint64_t nsec;
+    uint32_t incarnation;
+};
+
 /* Terminating entry */
 struct domain_context_end {};
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 18:58:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 18:58:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4566.12026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb7C-0005ze-1g; Thu, 08 Oct 2020 18:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4566.12026; Thu, 08 Oct 2020 18:58:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQb7B-0005zU-UL; Thu, 08 Oct 2020 18:58:09 +0000
Received: by outflank-mailman (input) for mailman id 4566;
 Thu, 08 Oct 2020 18:58:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQb7A-0005RO-PI
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 18:58:08 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ebdf290f-2615-4289-be82-5b3bb9be3c6f;
 Thu, 08 Oct 2020 18:57:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6t-00042S-7Y; Thu, 08 Oct 2020 18:57:51 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6t-0002P9-03; Thu, 08 Oct 2020 18:57:51 +0000
X-Inumbo-ID: ebdf290f-2615-4289-be82-5b3bb9be3c6f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=D58UTgHV8FlNHLsedxEWdVpeTR0XSu26cm7AoQLQNpU=; b=Hi+X0PIXwE3MkDPPVg7LgaUgIy
	1IT8qkoisipVVxs5J/J4D+7PangVK3a9UXTSWQucdbxGtyCTx8gRu9OcvjDB3VPqFHmU+ti3VZyN5
	FaR26E7S5UZdCNP/drdWiJ7MeGKil7Hu1PmEhpBjbxgscOZeTVqU8ufWNGe7kpH3cC0E=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v10 09/11] tools/python: modify libxc.py to verify v4 stream
Date: Thu,  8 Oct 2020 19:57:33 +0100
Message-Id: <20201008185735.29875-10-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch adds code to verify the presence of a REC_TYPE_domain_context
record in a v4 stream, as well as the absence of REC_TYPE_shared_info and
REC_TYPE_tsc_info records.
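The checks being added can be summarised by the following standalone Python
sketch; the function and names are illustrative, not the actual libxc.py
structure:

```python
# Sketch: per-record v4 stream rules. In a v4 stream a domain context
# record is mandatory-capable and non-empty, while the deprecated
# shared info / TSC info records must not appear.
DEPRECATED_IN_V4 = {"shared_info", "tsc_info"}

def check_record(version, rec_type, content):
    if version >= 4 and rec_type in DEPRECATED_IN_V4:
        raise ValueError("%s record found in v4 stream" % rec_type)
    if rec_type == "domain_context":
        if version < 4:
            raise ValueError("Domain context record found in v%d stream" % version)
        if len(content) == 0:
            raise ValueError("Zero length domain context")
```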

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "Marek Marczykowski-Górecki" <marmarek@invisiblethingslab.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>

v10:
 - New in v10
---
 tools/python/xen/migration/libxc.py | 22 ++++++++++++++++++++--
 1 file changed, 20 insertions(+), 2 deletions(-)

diff --git a/tools/python/xen/migration/libxc.py b/tools/python/xen/migration/libxc.py
index 9881f5ced4..24fb50cbda 100644
--- a/tools/python/xen/migration/libxc.py
+++ b/tools/python/xen/migration/libxc.py
@@ -59,6 +59,7 @@ REC_TYPE_checkpoint_dirty_pfn_list  = 0x0000000f
 REC_TYPE_static_data_end            = 0x00000010
 REC_TYPE_x86_cpuid_policy           = 0x00000011
 REC_TYPE_x86_msr_policy             = 0x00000012
+REC_TYPE_domain_context             = 0x00000013
 
 rec_type_to_str = {
     REC_TYPE_end                        : "End",
@@ -80,6 +81,7 @@ rec_type_to_str = {
     REC_TYPE_static_data_end            : "Static data end",
     REC_TYPE_x86_cpuid_policy           : "x86 CPUID policy",
     REC_TYPE_x86_msr_policy             : "x86 MSR policy",
+    REC_TYPE_domain_context             : "Domain context",
 }
 
 # page_data
@@ -156,9 +158,9 @@ class VerifyLibxc(VerifyBase):
             raise StreamError("Bad image id: Expected 0x%x, got 0x%x" %
                               (IHDR_IDENT, ident))
 
-        if not (2 <= version <= 3):
+        if not (2 <= version <= 4):
             raise StreamError(
-                "Unknown image version: Expected 2 <= ver <= 3, got %d" %
+                "Unknown image version: Expected 2 <= ver <= 4, got %d" %
                 (version, ))
 
         self.version = version
@@ -362,6 +364,9 @@ class VerifyLibxc(VerifyBase):
     def verify_record_shared_info(self, content):
         """ shared info record """
 
+        if self.version >= 4:
+            raise RecordError("Shared info record found in v4 stream")
+
         contentsz = len(content)
         if contentsz != 4096:
             raise RecordError("Length expected to be 4906 bytes, not %d" %
@@ -371,6 +376,9 @@ class VerifyLibxc(VerifyBase):
     def verify_record_tsc_info(self, content):
         """ tsc info record """
 
+        if self.version >= 4:
+            raise RecordError("TSC info record found in v4 stream")
+
         sz = calcsize(X86_TSC_INFO_FORMAT)
 
         if len(content) != sz:
@@ -476,6 +484,14 @@ class VerifyLibxc(VerifyBase):
             raise RecordError("Record length %u, expected multiple of %u" %
                               (contentsz, sz))
 
+    def verify_record_domain_context(self, content):
+        """ domain context record """
+
+        if self.version < 4:
+            raise RecordError("Domain context record found in v3 stream")
+
+        if len(content) == 0:
+            raise RecordError("Zero length domain context")
 
 record_verifiers = {
     REC_TYPE_end:
@@ -526,4 +542,6 @@ record_verifiers = {
         VerifyLibxc.verify_record_x86_cpuid_policy,
     REC_TYPE_x86_msr_policy:
         VerifyLibxc.verify_record_x86_msr_policy,
+    REC_TYPE_domain_context:
+        VerifyLibxc.verify_record_domain_context,
     }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4589.12067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNB-0008Q4-Je; Thu, 08 Oct 2020 19:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4589.12067; Thu, 08 Oct 2020 19:14:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNB-0008Pu-Cp; Thu, 08 Oct 2020 19:14:41 +0000
Received: by outflank-mailman (input) for mailman id 4589;
 Thu, 08 Oct 2020 19:14:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbNA-0008Lp-70
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5df7ad22-b9f2-4ff2-9714-6aae44ef43a1;
 Thu, 08 Oct 2020 19:14:34 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN4-0004R6-Fx
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:34 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN4-0003q2-DN
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:34 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN2-0006So-J0; Thu, 08 Oct 2020 20:14:32 +0100
X-Inumbo-ID: 5df7ad22-b9f2-4ff2-9714-6aae44ef43a1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ChVOyZirwRN6GYYGaNTGSTLn1c+zNQzFzRYYMg92a3g=; b=HVb4luVkT4nDBJNxgkuiziSfyB
	H/FFq85YUFlxAz1yUwwmlaRA/axLnjcqwmyq1oFZ9NrYNP55r5b0hKatkOXo3vMQKtmOH4VeJuNTF
	ksINsB/0CGPboMAeDleV2LNmtCk6H4gwzBmT/AEiZHWnngiDwW9bNR/teod6tCCieV0E=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 02/13] Honour OSSTEST_SIMULATE_FAIL in sg-run-job
Date: Thu,  8 Oct 2020 20:14:11 +0100
Message-Id: <20201008191422.5683-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a Tcl list of globs matched against <job>.<step>, allowing
particular test failures to be simulated.
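For illustration, the matching performed by the Tcl hunk below can be
approximated with Python's fnmatch (Tcl's [string match] and fnmatch treat
the * and ? glob characters alike for patterns such as these); the helper
and the example patterns are hypothetical:

```python
# Sketch: decide whether a given <job>.<step> matches any of the
# OSSTEST_SIMULATE_FAIL glob entries.
from fnmatch import fnmatchcase

def simulate_fail(patterns, job, testid):
    target = "%s.%s" % (job, testid)
    return any(fnmatchcase(target, p) for p in patterns)

# A glob entry can pick out one step across a family of jobs.
assert simulate_fail(["test-amd64-*.guest-start"],
                     "test-amd64-amd64-xl", "guest-start")
assert not simulate_fail(["test-armhf-*.*"],
                         "test-amd64-amd64-xl", "guest-start")
```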

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-run-job | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/sg-run-job b/sg-run-job
index dd76d4f2..c64ae026 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -406,7 +406,14 @@ proc spawn-ts {iffail testid args} {
     jobdb::spawn-step-commit $flight $jobinfo(job) $stepno $testid
 
     set xprefix {}
-    if {[var-or-default env(OSSTEST_SIMULATE) 0]} { set xprefix echo }
+    if {[var-or-default env(OSSTEST_SIMULATE) 0]} {
+	set xprefix echo
+	foreach ent [var-or-default env(OSSTEST_SIMULATE_FAIL) {}] {
+	    if {[string match $ent $jobinfo(job).$testid]} {
+		set xprefix OSSTEST_SIMULATE_FAIL
+	    }
+	}
+    }
 
     set log [jobdb::step-log-filename $flight $jobinfo(job) $stepno $ts]
     set redirects {}
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4586.12038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbN5-0008M1-Kf; Thu, 08 Oct 2020 19:14:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4586.12038; Thu, 08 Oct 2020 19:14:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbN5-0008Lu-HF; Thu, 08 Oct 2020 19:14:35 +0000
Received: by outflank-mailman (input) for mailman id 4586;
 Thu, 08 Oct 2020 19:14:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbN4-0008Lk-Dr
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0144c565-3c22-414f-b139-cf38414afbf5;
 Thu, 08 Oct 2020 19:14:33 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN3-0004R0-Hp
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:33 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN3-0003pX-G5
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:33 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN1-0006So-Jc; Thu, 08 Oct 2020 20:14:31 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbN4-0008Lk-Dr
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:34 +0000
X-Inumbo-ID: 0144c565-3c22-414f-b139-cf38414afbf5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0144c565-3c22-414f-b139-cf38414afbf5;
	Thu, 08 Oct 2020 19:14:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=K5qarsK62Og7Ss+hbnvf8Yalf+UlShpHtXrprG47jgU=; b=OkRPZXNg3ZqMCsVsp0ZH8rq+D/
	JJo6Nr/IEHm5KutUTroBv3kBaGGTtYOGy4QVevrM5Ff4YPJtzFZXlWX7ibUhgTv5xAnlbdHd77j+D
	BTLLv3P5OmTqT6XvuOmTwLdT9mVCmWsj/0RCd/QglH8qPT/3jdA38gL0QkvHdNKrWKXY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN3-0004R0-Hp
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:33 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN3-0003pX-G5
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:33 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN1-0006So-Jc; Thu, 08 Oct 2020 20:14:31 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	committers@xenproject.org
Subject: [OSSTEST PATCH 00/13] Immediately retry failing tests
Date: Thu,  8 Oct 2020 20:14:09 +0100
Message-Id: <20201008191422.5683-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We discussed this at the Xen Summit.  What I do here is immediately
retry the jobs with regressions, and then reanalyse the original full
flight.  If the retries showed the failures were heisenbugs, this will
let them through.

This should reduce the negative impact of heisenbugs on development,
but it won't do anything to help keep them out of the tree.

I think this approach was basically already agreed, but I'm CCing the
committers here for form's sake.

 README.dev          |  9 +++----
 cr-daily-branch     | 58 ++++++++++++++++++++++++++++++++++++++++++---
 cr-disk-report      |  2 +-
 cr-try-bisect-adhoc |  2 +-
 cri-args-hostlists  | 28 +++++++++++++++-------
 cs-bisection-step   |  4 ++--
 sg-report-flight    | 35 +++++++++++++++++++++++----
 sg-run-job          |  9 ++++++-
 8 files changed, 121 insertions(+), 26 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4588.12062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNB-0008PT-5G; Thu, 08 Oct 2020 19:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4588.12062; Thu, 08 Oct 2020 19:14:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNB-0008PL-1o; Thu, 08 Oct 2020 19:14:41 +0000
Received: by outflank-mailman (input) for mailman id 4588;
 Thu, 08 Oct 2020 19:14:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbN9-0008Lk-CT
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fe6d655-919e-4e1e-9e8e-ff9316426a99;
 Thu, 08 Oct 2020 19:14:35 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN5-0004RF-H1
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN5-0003qs-GK
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN3-0006So-LB; Thu, 08 Oct 2020 20:14:33 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbN9-0008Lk-CT
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:39 +0000
X-Inumbo-ID: 2fe6d655-919e-4e1e-9e8e-ff9316426a99
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2fe6d655-919e-4e1e-9e8e-ff9316426a99;
	Thu, 08 Oct 2020 19:14:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=9Pahmn/najHz4bedXTajTfnABim3bLuns9qaaY31+FA=; b=afGK+zxF6x9uECYrUOBwrdu1DR
	eFI7a3coUQeF/RMrGetzuBEJPfDhvAltSegyyeNveMqfEpXAyGChj4M9PW4hO6wUyAtNO7YR4qWWv
	r/pdTgJ7Xw1GQlhB28xeBmoXdag60D6Kgh4zXBjHdYUNcCpQQsYTi52dxmyBXOIyta+0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN5-0004RF-H1
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN5-0003qs-GK
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN3-0006So-LB; Thu, 08 Oct 2020 20:14:33 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 04/13] cri-args-hostlists: New debug var $OSSTEST_REPORT_JOB_HISTORY_RUN
Date: Thu,  8 Oct 2020 20:14:13 +0100
Message-Id: <20201008191422.5683-5-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This has no effect if the variable is empty.
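
The idiom relies on shell expansion: an unquoted empty variable in
command position disappears entirely, so the real command runs
unchanged, while setting the variable to e.g. "echo" turns the
invocation into a dry run.  A minimal sketch (the function body and
flight number are illustrative):

```shell
# Prefix a command with a debug variable; empty means "run as usual".
run_report () {
    $OSSTEST_REPORT_JOB_HISTORY_RUN \
    ./sg-report-job-history --flight="$1"
}

OSSTEST_REPORT_JOB_HISTORY_RUN=echo   # debug: print instead of executing
run_report 12345
# prints: ./sg-report-job-history --flight=12345
```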

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cri-args-hostlists | 1 +
 1 file changed, 1 insertion(+)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 6cdff53f..7019c0c7 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -121,6 +121,7 @@ start_email () {
 
 	date >&2
 
+	$OSSTEST_REPORT_JOB_HISTORY_RUN \
 	with-lock-ex -w $globallockdir/report-lock \
 	  ./sg-report-job-history --report-processing-start-time \
 	    --html-dir=$job_html_dir --flight=$flight
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4587.12050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbN6-0008Mt-Sp; Thu, 08 Oct 2020 19:14:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4587.12050; Thu, 08 Oct 2020 19:14:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbN6-0008Mm-PU; Thu, 08 Oct 2020 19:14:36 +0000
Received: by outflank-mailman (input) for mailman id 4587;
 Thu, 08 Oct 2020 19:14:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbN5-0008Lp-DU
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 96ba930c-3383-4068-91c2-db1e192daac7;
 Thu, 08 Oct 2020 19:14:34 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN4-0004R3-7F
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:34 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN4-0003pq-6V
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:34 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN2-0006So-05; Thu, 08 Oct 2020 20:14:32 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbN5-0008Lp-DU
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
X-Inumbo-ID: 96ba930c-3383-4068-91c2-db1e192daac7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 96ba930c-3383-4068-91c2-db1e192daac7;
	Thu, 08 Oct 2020 19:14:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Ib9u1KozgEMCg0jCGLWPWL8C4aZzOGRCjREpAAS7g4Y=; b=i2dGcQ3zWuoKvOwadSL80/ZwQh
	IMh5nqXT/22Hz6X1BsFZguNK/5O3s2zyMOo+IZYWYHHjjKqPlKpMO1y8fUFNsZDCGKmlil5XCUzZn
	SGJVt1vXsLQTn5lJicg0a/At1hcux+hBg9J3X0NqB2L34FIJzlpWe4nOKv+RLqEb6Ho4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN4-0004R3-7F
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:34 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN4-0003pq-6V
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:34 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN2-0006So-05; Thu, 08 Oct 2020 20:14:32 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 01/13] Honour OSSTEST_SIMULATE=2 to actually run dummy flight
Date: Thu,  8 Oct 2020 20:14:10 +0100
Message-Id: <20201008191422.5683-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cri-args-hostlists | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 994e00c0..6cdff53f 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -68,8 +68,8 @@ fi
 
 execute_flight () {
         case "x$OSSTEST_SIMULATE" in
-        x|x0)   ;;
-        *)      echo SIMULATING - NOT EXECUTING $1 $2
+        x|x0|x2)   ;;
+        *)      echo SIMULATING $OSSTEST_SIMULATE - NOT EXECUTING $1 $2
                 return
                 ;;
         esac
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4590.12086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNF-0008WJ-U5; Thu, 08 Oct 2020 19:14:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4590.12086; Thu, 08 Oct 2020 19:14:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNF-0008W8-Q7; Thu, 08 Oct 2020 19:14:45 +0000
Received: by outflank-mailman (input) for mailman id 4590;
 Thu, 08 Oct 2020 19:14:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbNE-0008Lk-Cg
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d131487-a37e-4e2b-827b-1d288fdd9341;
 Thu, 08 Oct 2020 19:14:37 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN6-0004RZ-Qz
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:36 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN6-0003sK-Q8
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:36 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN4-0006So-TQ; Thu, 08 Oct 2020 20:14:35 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbNE-0008Lk-Cg
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:44 +0000
X-Inumbo-ID: 4d131487-a37e-4e2b-827b-1d288fdd9341
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4d131487-a37e-4e2b-827b-1d288fdd9341;
	Thu, 08 Oct 2020 19:14:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=r6GFqM+hgG67t/2ObzXPFUVWLmljZDhT7Qkvv+SlnJ8=; b=IDL7tteu39YOKtNiW+fBnd8SFD
	POom8NkZ+Nt5atbSwEjfOmE42p8JgEMzk6H5d5S1+OjjpgbJVuGTAwF3HfN+RpNENSVHbtDPBsA6q
	+baFdiTcnvrWZQl22Mg6gIqJHbXBnRxA+RZXp6GDHewJEreDQYUnvBhQIA/18ivoG84k=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN6-0004RZ-Qz
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:36 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN6-0003sK-Q8
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:36 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN4-0006So-TQ; Thu, 08 Oct 2020 20:14:35 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 07/13] sg-report-flight: Provide --refer-to-flight option
Date: Thu,  8 Oct 2020 20:14:16 +0100
Message-Id: <20201008191422.5683-8-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This just generates an extra heading and URL at the top of the output.
In particular, it doesn't affect the algorithms which calculate
regressions.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/sg-report-flight b/sg-report-flight
index 4b33facb..d9f0b964 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -42,6 +42,7 @@ our $htmldir;
 our $want_info_headers;
 our ($branch, $branches_cond_q);
 our @allows;
+our (@refer_to_flights);
 our (@includebeginfiles,@includefiles);
 
 open DEBUG, ">/dev/null";
@@ -66,6 +67,8 @@ while (@ARGV && $ARGV[0] =~ m/^-/) {
         push @includebeginfiles, $1;
     } elsif (m/^--include=(.*)$/) {
         push @includefiles, $1;
+    } elsif (m/^--refer-to-flight=(.*)$/) {
+        push @refer_to_flights, $1;
     } elsif (restrictflight_arg($_)) {
         # Handled by Executive
     } elsif (m/^--allow=(.*)$/) {
@@ -504,6 +507,16 @@ END
         die unless defined $specflight;
     }
 }
+sub find_refer_to_flights () {
+    my $ffq = $dbh_tests->prepare("SELECT * FROM flights WHERE flight=?");
+    @refer_to_flights = map {
+	my $flight = $_;
+	$ffq->execute($flight);
+	my $row = $ffq->fetchrow_hashref();
+	die "refer to flight $flight not found\n" unless $row;
+	{ Flight => $flight, FlightInfo => $row };
+    } @refer_to_flights;
+}
 
 sub examineflight ($) {
     my ($flight) = @_;
@@ -804,6 +817,10 @@ END
 
     printout_flightheader($r);
 
+    foreach my $ref_r (@refer_to_flights) {
+	printout_flightheader($ref_r);
+    }
+
     if (defined $r->{Overall}) {
         bodyprint "\n";
         bodyprint $r->{Overall};
@@ -1878,6 +1895,7 @@ db_retry($dbh_tests, [], sub {
     if (defined $mro) {
 	open MRO, "> $mro.new" or die "$mro.new $!";
     }
+    find_refer_to_flights();
     findspecflight();
     my $fi= examineflight($specflight);
     my @fails= justifyfailures($fi);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4591.12097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNH-00007s-85; Thu, 08 Oct 2020 19:14:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4591.12097; Thu, 08 Oct 2020 19:14:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNH-00007j-4I; Thu, 08 Oct 2020 19:14:47 +0000
Received: by outflank-mailman (input) for mailman id 4591;
 Thu, 08 Oct 2020 19:14:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbNF-0008Lp-7J
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18382a06-0336-4c70-b274-adb40acfb562;
 Thu, 08 Oct 2020 19:14:35 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN5-0004RB-4a
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN5-0003qS-2W
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN3-0006So-7Y; Thu, 08 Oct 2020 20:14:33 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbNF-0008Lp-7J
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:45 +0000
X-Inumbo-ID: 18382a06-0336-4c70-b274-adb40acfb562
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 18382a06-0336-4c70-b274-adb40acfb562;
	Thu, 08 Oct 2020 19:14:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=02bnAe8Pk+WQt+MbpE+lK1JlJkiQs50S4+SkvCn1WvQ=; b=GmGifwcvrhqwsTq+Lso9Ep7vut
	gcsmLnbZN9pVUwz4U5vpqdojCFq9smoSjFlyujFJRsuoUBRHwpwXGcYww2Wj6Qc2TdZSlfwZN3j1P
	Ocj744+PrF9raTarv0Q05lN30y5pY/GBCZS19mohTnW4lMXEzfVSFyygSADWdmh8n7C4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN5-0004RB-4a
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN5-0003qS-2W
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN3-0006So-7Y; Thu, 08 Oct 2020 20:14:33 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 03/13] sg-report-job-history: eval $DAILY_BRANCH_PREEXEC_HOOK
Date: Thu,  8 Oct 2020 20:14:12 +0100
Message-Id: <20201008191422.5683-4-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Put the call to this debugging/testing hook inside an eval.  This
allows a wider variety of stunts.  The one in-tree reference is
already compatible with these new semantics.
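
The difference eval makes: a bare `$DAILY_BRANCH_PREEXEC_HOOK` is only
word-split, so shell syntax inside the hook would be passed as literal
arguments; with eval the value is parsed as a full command line, so it
can contain assignments, `;`, redirections and so on.  A sketch with an
illustrative hook value:

```shell
# The hook may now be an arbitrary command line, not just one command.
DAILY_BRANCH_PREEXEC_HOOK='flight=9999; echo "pre-exec for flight $flight"'
eval "$DAILY_BRANCH_PREEXEC_HOOK"
# prints: pre-exec for flight 9999
```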

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cr-daily-branch b/cr-daily-branch
index b8f221ee..23060588 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -472,7 +472,7 @@ sgr_args+=" $EXTRA_SGR_ARGS"
 
 date >&2
 : $flight $branch $OSSTEST_BLESSING $sgr_args
-$DAILY_BRANCH_PREEXEC_HOOK
+eval "$DAILY_BRANCH_PREEXEC_HOOK"
 execute_flight $flight $OSSTEST_BLESSING
 date >&2
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4592.12110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNK-0000EK-MF; Thu, 08 Oct 2020 19:14:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4592.12110; Thu, 08 Oct 2020 19:14:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNK-0000Dv-Gh; Thu, 08 Oct 2020 19:14:50 +0000
Received: by outflank-mailman (input) for mailman id 4592;
 Thu, 08 Oct 2020 19:14:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbNJ-0008Lk-Cp
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 416f730f-5ee2-4de1-9eb9-1ac8d5dc2294;
 Thu, 08 Oct 2020 19:14:37 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN7-0004Rh-94
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:37 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN7-0003sr-8D
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:37 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN5-0006So-CS; Thu, 08 Oct 2020 20:14:35 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbNJ-0008Lk-Cp
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:49 +0000
X-Inumbo-ID: 416f730f-5ee2-4de1-9eb9-1ac8d5dc2294
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 416f730f-5ee2-4de1-9eb9-1ac8d5dc2294;
	Thu, 08 Oct 2020 19:14:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Oq5CHXwRLYwRce8o8pCzzReKPSkGwMOpuff7iZeMKgQ=; b=rr/uDGHWHd6MHYYEVEgQlIJHvW
	4udzSG1lv9NsnZ8Gc5LQsioCLB/VTKMaLSWGdLodAM6K6yaLy0g1KUbPsLZ6X7LZ6yWOJXdb5HAzb
	5EzVD2VfP+iQRym3dPL1+OlWwF0Fdph4c42Fy/YkRESvg9Tix1E0AEFZfaM/mlzkm5Yc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN7-0004Rh-94
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:37 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN7-0003sr-8D
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:37 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN5-0006So-CS; Thu, 08 Oct 2020 20:14:35 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 08/13] sg-report-flight: Nicer output for --refer-to-flight option
Date: Thu,  8 Oct 2020 20:14:17 +0100
Message-Id: <20201008191422.5683-9-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Sort the flight summary lines together, before the URLs.  This makes
the report considerably easier to read.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index d9f0b964..f6ace190 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -796,12 +796,17 @@ sub includes ($) {
     }
 }    
 
-sub printout_flightheader ($) {
-    my ($r) = @_;
-    bodyprint <<END;
+sub printout_flightheaders {
+    foreach my $r (@_) {
+	bodyprint <<END;
 flight $r->{Flight} $branch $r->{FlightInfo}{blessing} [$r->{FlightInfo}{intended}]
+END
+    }
+    foreach my $r (@_) {
+	bodyprint <<END;
 $c{ReportHtmlPubBaseUrl}/$r->{Flight}/
 END
+    }
 }
 
 sub printout {
@@ -814,12 +819,7 @@ sub printout {
 $r->{Flight}: $r->{OutcomeSummary}
 END
     includes(\@includebeginfiles);
-
-    printout_flightheader($r);
-
-    foreach my $ref_r (@refer_to_flights) {
-	printout_flightheader($ref_r);
-    }
+    printout_flightheaders $r, @refer_to_flights;
 
     if (defined $r->{Overall}) {
         bodyprint "\n";
-- 
2.20.1
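The reordering done by printout_flightheaders above can be pictured as: instead of interleaving each flight's summary line with its URL, the loop over the flights runs twice, emitting all summaries first and then all URLs. A shell sketch of the same grouping (the flight numbers and URL base are made up for illustration):

```shell
#!/bin/sh
# Two passes over the same list: summaries first, then URLs,
# mirroring the two foreach loops in printout_flightheaders.
flights="152001 152002"
for f in $flights; do
  echo "flight $f xen-unstable real [real]"
done
for f in $flights; do
  echo "http://logs.test-lab.example/$f/"
done
```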



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4593.12122 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNM-0000IQ-G3; Thu, 08 Oct 2020 19:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4593.12122; Thu, 08 Oct 2020 19:14:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNM-0000IG-8Q; Thu, 08 Oct 2020 19:14:52 +0000
Received: by outflank-mailman (input) for mailman id 4593;
 Thu, 08 Oct 2020 19:14:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbNK-0008Lp-7M
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 286ccc93-c436-4794-8ccc-44b0bbb7a092;
 Thu, 08 Oct 2020 19:14:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN5-0004RM-Vr
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN5-0003rG-UI
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN4-0006So-2J; Thu, 08 Oct 2020 20:14:34 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbNK-0008Lp-7M
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:50 +0000
X-Inumbo-ID: 286ccc93-c436-4794-8ccc-44b0bbb7a092
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 286ccc93-c436-4794-8ccc-44b0bbb7a092;
	Thu, 08 Oct 2020 19:14:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=e3H0s2+WzQtNrHrJPYnYMyg0OSRyEJNxDWc8yLBsrEc=; b=ThhjZGReMUkY9BrNq7xCGh8Umd
	4YA7moqxkSbPkTuiu434WGYDrFAia4Fg4hGT2ouyN4VK0DsFvFZyxMqtjzCopclPwj0s1N8WsGeVT
	om1Yc2zXAWJ2FFbWEfX/F0uN6NkEFXMMq2X+wQevxAzoIFguRaV+cV7y7ch8fat6jgJs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN5-0004RM-Vr
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN5-0003rG-UI
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:35 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN4-0006So-2J; Thu, 08 Oct 2020 20:14:34 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 05/13] cri-args-hostlists: Break out report_flight and publish_logs
Date: Thu,  8 Oct 2020 20:14:14 +0100
Message-Id: <20201008191422.5683-6-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

NFC.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 cri-args-hostlists | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 7019c0c7..52e39f33 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -128,10 +128,7 @@ start_email () {
 
 	date >&2
 
-	./sg-report-flight --report-processing-start-time \
-	        --html-dir=$flight_html_dir/$flight/ \
-		--allow=allow.all --allow=allow.$branch \
-		$sgr_args $flight >tmp/$flight.report
+	report_flight $flight
 	./cr-fold-long-lines tmp/$flight.report
 
 	date >&2
@@ -144,11 +141,23 @@ start_email () {
 	date >&2
 }
 
+report_flight () {
+	local flight=$1
+	./sg-report-flight --html-dir=$flight_html_dir/$flight/ \
+		--allow=allow.all --allow=allow.$branch \
+		$sgr_args $flight >tmp/$flight.report
+}
+
+publish_logs () {
+	local flight=$1
+	./cr-publish-flight-logs ${OSSTEST_PUSH_HARNESS- --push-harness} \
+	    $flight >&2
+}
+
 publish_send_email () {
 	local flight=$1
+	publish_logs $flight
 	exec >&2
-	./cr-publish-flight-logs ${OSSTEST_PUSH_HARNESS- --push-harness} \
-	    $flight
 	send_email tmp/$flight.email
 }
 
-- 
2.20.1
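The `${OSSTEST_PUSH_HARNESS- --push-harness}` expansion in publish_logs above uses the `${var-word}` form (no colon), which substitutes the default only when the variable is *unset*; setting it to the empty string suppresses the flag entirely. A minimal sketch of the distinction (variable name shortened for illustration):

```shell
#!/bin/sh
# ${var-word}: substitute "word" only if var is UNSET.
# An empty-but-set var expands to the empty string, suppressing the flag.
unset V
echo "unset: [${V- --push-harness}]"    # [ --push-harness]
V=''
echo "empty: [${V- --push-harness}]"    # []
V=' --other-flag'
echo "set:   [${V- --push-harness}]"    # [ --other-flag]
```

This is why an operator can export an empty OSSTEST_PUSH_HARNESS to disable harness pushing without having to supply a replacement option.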



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4594.12134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNP-0000PP-To; Thu, 08 Oct 2020 19:14:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4594.12134; Thu, 08 Oct 2020 19:14:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNP-0000P9-OX; Thu, 08 Oct 2020 19:14:55 +0000
Received: by outflank-mailman (input) for mailman id 4594;
 Thu, 08 Oct 2020 19:14:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbNO-0008Lk-D8
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9424736-6a7b-4165-8c3b-5a33faec68f6;
 Thu, 08 Oct 2020 19:14:37 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN7-0004Ru-OC
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:37 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN7-0003tb-NR
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:37 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN5-0006So-RA; Thu, 08 Oct 2020 20:14:35 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbNO-0008Lk-D8
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:54 +0000
X-Inumbo-ID: c9424736-6a7b-4165-8c3b-5a33faec68f6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c9424736-6a7b-4165-8c3b-5a33faec68f6;
	Thu, 08 Oct 2020 19:14:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=MCsVUQFJziPwMSkLU0mG88UTd+s0NbUrjbUZhZJ3BRQ=; b=6n4qxVAdBKMOrxnSZfauFCKsJ6
	PZXKvMwIXWDbnsGnATWDeQd8AArWNyhQEPRI19Ga4gm6dJmdrFJVi6FMTv+jvo4KS6g3phM8pHw7m
	0h/RRfiIu2LL3UaOfku7o+A3RhiEwECNuqiuzTvSxFSAo/hoDvxu/HkKhpz7xwXkQ/hQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN7-0004Ru-OC
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:37 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN7-0003tb-NR
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:37 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN5-0006So-RA; Thu, 08 Oct 2020 20:14:35 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 09/13] Introduce real-retry blessing
Date: Thu,  8 Oct 2020 20:14:18 +0100
Message-Id: <20201008191422.5683-10-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Nothing produces this yet.  (There's play-retry as well, of course, but
we don't really need to document that.)

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 README.dev          | 9 +++++----
 cr-daily-branch     | 3 ++-
 cr-disk-report      | 2 +-
 cr-try-bisect-adhoc | 2 +-
 cs-bisection-step   | 4 ++--
 sg-report-flight    | 2 +-
 6 files changed, 12 insertions(+), 10 deletions(-)

diff --git a/README.dev b/README.dev
index 2cbca109..3d09b3c6 100644
--- a/README.dev
+++ b/README.dev
@@ -381,10 +381,11 @@ These are the principal (intended) blessings:
    commissioning, and that blessing removed and replaced with `real'
    when the hosts are ready.
 
- * `real-bisect' and `adhoc-bisect': These are found only as the
-   blessing of finished flights.  (This is achieved by passing
-   *-bisect to sg-execute-flight.)  This allows the archaeologist
-   tools to distinguish full flights from bisection steps.
+ * `real-bisect', `real-retry', `adhoc-bisect': These are found only
+   as the blessing of finished flights.  (This is achieved by passing
+   *-bisect or *-retry to sg-execute-flight.)  This allows the
+   archaeologist tools to distinguish full flights from bisection
+   steps and retries.
 
    The corresponding intended blessing (as found in the `intended'
    column of the flights table) is `real'.  So the hosts used by the
diff --git a/cr-daily-branch b/cr-daily-branch
index 23060588..285ea361 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -76,7 +76,8 @@ case $branch in
 	treeurl=`./ap-print-url $branch`;;
 esac
 
-blessings_arg=--blessings=${DAILY_BRANCH_TESTED_BLESSING:-real}
+blessings_arg=${DAILY_BRANCH_TESTED_BLESSING:-real}
+blessings_arg=--blessings=${blessings_arg},${blessings_arg}-retest
 sgr_args+=" $blessings_arg"
 
 force_baseline='' # Non-empty = indication why we are forcing baseline.
diff --git a/cr-disk-report b/cr-disk-report
index 543d35bf..d76fd72f 100755
--- a/cr-disk-report
+++ b/cr-disk-report
@@ -38,7 +38,7 @@ our $graphs_px=0;
 our $graphs_py=0;
 open DEBUG, ">/dev/null" or die $!;
 
-our @blessings = qw(real real-bisect);
+our @blessings = qw(real real-retry real-bisect);
 # for these blessings column is       "<blessing> <branch>"
 # for other blessings column is       "<intended> [<blessing>]"
 
diff --git a/cr-try-bisect-adhoc b/cr-try-bisect-adhoc
index caadfd80..c2cfa475 100755
--- a/cr-try-bisect-adhoc
+++ b/cr-try-bisect-adhoc
@@ -49,7 +49,7 @@ export OSSTEST_BLESSING=adhoc
 
 compute_state_callback () {
 	compute_state_core \
-        	--blessings=real,real-bisect,adhoc-bisect \
+        	--blessings=real,real-retry,real-bisect,adhoc-bisect \
                 $bisect "$@" $branch $job $testid
 }
 
diff --git a/cs-bisection-step b/cs-bisection-step
index 762966da..8b391448 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -7,7 +7,7 @@
 # usage:
 #   ./cs-bisection-setup [<options>] <branch> <job> <testid>
 # options, usually:
-#      --blessings=real,real-bisect
+#      --blessings=real,real-retry,real-bisect
 #
 # First entry in --blessings list is the blessing of the basis
 # (non-bisection) flights.  This should not be the same as the
@@ -45,7 +45,7 @@ use HTML::Entities;
 use Osstest::Executive;
 use URI::Escape;
 
-our @blessings= qw(real real-bisect);
+our @blessings= qw(real real-retry real-bisect);
 our @revtuplegenargs= ();
 our $broken;
 
diff --git a/sg-report-flight b/sg-report-flight
index f6ace190..18643df6 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -120,7 +120,7 @@ die if defined $specver{this}{flight};
 die if defined $specver{that}{flight} &&
     grep { $_ ne 'flight' } keys %{ $specver{that} };
 
-push @blessings, 'real', 'real-bisect' unless @blessings;
+push @blessings, 'real', 'real-retry', 'real-bisect' unless @blessings;
 
 csreadconfig();
 
-- 
2.20.1
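The two-step construction of blessings_arg in the cr-daily-branch hunk above first resolves the tested blessing (defaulting to `real`), then reuses it to build an option naming both the blessing and its `-retest` variant. Traced by hand as a sketch (the `adhoc` value is an invented example):

```shell
#!/bin/sh
# Step 1: resolve the tested blessing, defaulting to "real" when the
# environment variable is unset or empty (note the colon form here).
unset DAILY_BRANCH_TESTED_BLESSING
blessings_arg=${DAILY_BRANCH_TESTED_BLESSING:-real}
# Step 2: build the option covering the blessing and its -retest variant.
blessings_arg=--blessings=${blessings_arg},${blessings_arg}-retest
echo "$blessings_arg"    # --blessings=real,real-retest

# With an override, both halves track the overridden value:
DAILY_BRANCH_TESTED_BLESSING=adhoc
blessings_arg=${DAILY_BRANCH_TESTED_BLESSING:-real}
blessings_arg=--blessings=${blessings_arg},${blessings_arg}-retest
echo "$blessings_arg"    # --blessings=adhoc,adhoc-retest
```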



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:14:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:14:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4595.12140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNQ-0000R5-PR; Thu, 08 Oct 2020 19:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4595.12140; Thu, 08 Oct 2020 19:14:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbNQ-0000Qc-AT; Thu, 08 Oct 2020 19:14:56 +0000
Received: by outflank-mailman (input) for mailman id 4595;
 Thu, 08 Oct 2020 19:14:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbNP-0008Lp-7b
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f973d24-873e-46a8-bcde-8c3007f0adab;
 Thu, 08 Oct 2020 19:14:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN6-0004RR-Ba
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:36 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbN6-0003rk-Ag
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:36 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN4-0006So-H8; Thu, 08 Oct 2020 20:14:34 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbNP-0008Lp-7b
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:55 +0000
X-Inumbo-ID: 9f973d24-873e-46a8-bcde-8c3007f0adab
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9f973d24-873e-46a8-bcde-8c3007f0adab;
	Thu, 08 Oct 2020 19:14:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=jdLmqoSD6rDtlzGT0JaoPu0SLWweyf2QtJXodAcLoIs=; b=M1XpsdZqmQsGnsgADvoCw34qln
	jwEZXO3ISx42rvA/ViFuHHH0KyN2ZFwZPtqsxUIFPGJbyABrc+pxclLivEAx621td/F3lKwJjTW0S
	6N6/R5X2K/7bVcpU2ldKHwx9dIcD0ox7dV40iyxxzsiNaMNIMUTCbHJTMgNzIWW6h5TE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN6-0004RR-Ba
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:36 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN6-0003rk-Ag
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:14:36 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN4-0006So-H8; Thu, 08 Oct 2020 20:14:34 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH 06/13] sg-report-flight: Break out printout_flightheader
Date: Thu,  8 Oct 2020 20:14:15 +0100
Message-Id: <20201008191422.5683-7-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index a07e03cb..4b33facb 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -783,6 +783,14 @@ sub includes ($) {
     }
 }    
 
+sub printout_flightheader ($) {
+    my ($r) = @_;
+    bodyprint <<END;
+flight $r->{Flight} $branch $r->{FlightInfo}{blessing} [$r->{FlightInfo}{intended}]
+$c{ReportHtmlPubBaseUrl}/$r->{Flight}/
+END
+}
+
 sub printout {
     my ($r, @failures) = @_;
     $header_text = '';
@@ -793,10 +801,9 @@ sub printout {
 $r->{Flight}: $r->{OutcomeSummary}
 END
     includes(\@includebeginfiles);
-    bodyprint <<END;
-flight $r->{Flight} $branch $r->{FlightInfo}{blessing} [$r->{FlightInfo}{intended}]
-$c{ReportHtmlPubBaseUrl}/$r->{Flight}/
-END
+
+    printout_flightheader($r);
+
     if (defined $r->{Overall}) {
         bodyprint "\n";
         bodyprint $r->{Overall};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:20:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4610.12158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbT5-0001uP-Ke; Thu, 08 Oct 2020 19:20:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4610.12158; Thu, 08 Oct 2020 19:20:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbT5-0001uI-Gn; Thu, 08 Oct 2020 19:20:47 +0000
Received: by outflank-mailman (input) for mailman id 4610;
 Thu, 08 Oct 2020 19:20:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R/F0=DP=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1kQbT4-0001uD-62
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:20:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 685cba4c-56c4-458a-aed8-d539b026cfcb;
 Thu, 08 Oct 2020 19:20:45 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=R/F0=DP=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
	id 1kQbT4-0001uD-62
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:20:46 +0000
X-Inumbo-ID: 685cba4c-56c4-458a-aed8-d539b026cfcb
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 685cba4c-56c4-458a-aed8-d539b026cfcb;
	Thu, 08 Oct 2020 19:20:45 +0000 (UTC)
Subject: Re: [GIT PULL] xen: branch for v5.9-rc9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602184844;
	bh=QKVreTOtOfv+UqyeLrweLlXPIWls8Uj8qpGVUvUyd6M=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=r83oyq+sD2C9B6GiqlVxUhMtlPEL0uJbZBF6cHRxmtQ6KkwackHEu4wJ5tTsTxjte
	 PeNdnKbY93WfqSfWqOSyGfwFt8dS4iLvvRnSE9wnsLaph8VBREichGxONHUkZMH7Z7
	 KqZuIWrdD6V8J5lj2M58M1NHeEvfj4wJ6Jx88Bhc=
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201007063804.21597-1-jgross@suse.com>
References: <20201007063804.21597-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20201007063804.21597-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.9b-rc9-tag
X-PR-Tracked-Commit-Id: 5a0677110b73dd3e1766f89159701bfe8ac06808
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 86f0a5fb1b98e993fd43899d6640c7b9eec5000a
Message-Id: <160218484456.22350.13183729801289360481.pr-tracker-bot@kernel.org>
Date: Thu, 08 Oct 2020 19:20:44 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Wed,  7 Oct 2020 08:38:04 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.9b-rc9-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/86f0a5fb1b98e993fd43899d6640c7b9eec5000a

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:23:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:23:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4612.12169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbVG-000230-0n; Thu, 08 Oct 2020 19:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4612.12169; Thu, 08 Oct 2020 19:23:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbVF-00022t-Ty; Thu, 08 Oct 2020 19:23:01 +0000
Received: by outflank-mailman (input) for mailman id 4612;
 Thu, 08 Oct 2020 19:22:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQbVD-00022n-0J
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:22:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ca40446-ffd0-4a4d-8213-18c2ccb5eaf2;
 Thu, 08 Oct 2020 19:22:58 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQbVA-0004dz-OJ; Thu, 08 Oct 2020 19:22:56 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6t-0002P9-WA; Thu, 08 Oct 2020 18:57:52 +0000
X-Inumbo-ID: 5ca40446-ffd0-4a4d-8213-18c2ccb5eaf2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Ci9ElP7+t6nSyjJlvsrEVOvf1v3Xr2+ABAE93SLLx1k=; b=df3O5q3+SZF71Hv5AwwF8CGSdI
	wpSBcvUqgbAeFlsZ3g1+teZRisgnU6KTJuwHTayscSaZuy/sNf4CoFaHS19JHzR1+9LijZ/RL61d5
	b4azNpI0UfcLk7/yCx98fieKlZW8hjhZwvfylZag2Qun96m+8xX+BW85SeahnMilBHns=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v10 10/11] tools/libs/guest: add code to restore a v4 libxc stream
Date: Thu,  8 Oct 2020 19:57:34 +0100
Message-Id: <20201008185735.29875-11-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch adds the necessary code to accept a v4 stream, and to recognise and
restore a REC_TYPE_DOMAIN_CONTEXT record.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>

v10:
 - New in v10
 - Derived from patch #8 of the v9 series
 - Fix memory leak
---
 tools/libs/guest/xg_sr_common.c          |  1 +
 tools/libs/guest/xg_sr_common.h          |  3 +++
 tools/libs/guest/xg_sr_restore.c         | 24 ++++++++++++++++++++++--
 tools/libs/guest/xg_sr_restore_x86_hvm.c |  9 +++++++++
 tools/libs/guest/xg_sr_restore_x86_pv.c  |  9 +++++++++
 5 files changed, 44 insertions(+), 2 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.c b/tools/libs/guest/xg_sr_common.c
index 17567ab133..f813320202 100644
--- a/tools/libs/guest/xg_sr_common.c
+++ b/tools/libs/guest/xg_sr_common.c
@@ -39,6 +39,7 @@ static const char *const mandatory_rec_types[] =
     [REC_TYPE_STATIC_DATA_END]              = "Static data end",
     [REC_TYPE_X86_CPUID_POLICY]             = "x86 CPUID policy",
     [REC_TYPE_X86_MSR_POLICY]               = "x86 MSR policy",
+    [REC_TYPE_DOMAIN_CONTEXT]               = "Domain context",
 };
 
 const char *rec_type_to_str(uint32_t type)
diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index cc3ad1c394..ba9e5b0a84 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -297,6 +297,9 @@ struct xc_sr_context
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
+
+            /* Domain context blob. */
+            struct xc_sr_blob dom_ctx;
         } restore;
     };
 
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index b57a787519..9d2bbdfaa3 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -35,9 +35,9 @@ static int read_headers(struct xc_sr_context *ctx)
         return -1;
     }
 
-    if ( ihdr.version < 2 || ihdr.version > 3 )
+    if ( ihdr.version < 2 || ihdr.version > 4 )
     {
-        ERROR("Invalid Version: Expected 2 <= ver <= 3, Got %d",
+        ERROR("Invalid Version: Expected 2 <= ver <= 4, Got %d",
               ihdr.version);
         return -1;
     }
@@ -682,6 +682,21 @@ int handle_static_data_end(struct xc_sr_context *ctx)
     return rc;
 }
 
+/*
+ * Process a DOMAIN_CONTEXT record from the stream.
+ */
+static int handle_domain_context(struct xc_sr_context *ctx,
+                                 struct xc_sr_record *rec)
+{
+    xc_interface *xch = ctx->xch;
+    int rc = update_blob(&ctx->restore.dom_ctx, rec->data, rec->length);
+
+    if ( rc )
+        ERROR("Unable to allocate %u bytes for domain context", rec->length);
+
+    return rc;
+}
+
 static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
 {
     xc_interface *xch = ctx->xch;
@@ -709,6 +724,10 @@ static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         rc = handle_static_data_end(ctx);
         break;
 
+    case REC_TYPE_DOMAIN_CONTEXT:
+        rc = handle_domain_context(ctx, rec);
+        break;
+
     default:
         rc = ctx->restore.ops.process_record(ctx, rec);
         break;
@@ -784,6 +803,7 @@ static void cleanup(struct xc_sr_context *ctx)
 
     free(ctx->restore.buffered_records);
     free(ctx->restore.populated_pfns);
+    free(ctx->restore.dom_ctx.ptr);
 
     if ( ctx->restore.ops.cleanup(ctx) )
         PERROR("Failed to clean up");
diff --git a/tools/libs/guest/xg_sr_restore_x86_hvm.c b/tools/libs/guest/xg_sr_restore_x86_hvm.c
index d6ea6f3012..6bb164b9f0 100644
--- a/tools/libs/guest/xg_sr_restore_x86_hvm.c
+++ b/tools/libs/guest/xg_sr_restore_x86_hvm.c
@@ -225,6 +225,15 @@ static int x86_hvm_stream_complete(struct xc_sr_context *ctx)
         return rc;
     }
 
+    rc = xc_domain_set_context(xch, ctx->domid,
+                               ctx->restore.dom_ctx.ptr,
+                               ctx->restore.dom_ctx.size);
+    if ( rc )
+    {
+        PERROR("Unable to restore Domain context");
+        return rc;
+    }
+
     rc = xc_dom_gnttab_seed(xch, ctx->domid, true,
                             ctx->restore.console_gfn,
                             ctx->restore.xenstore_gfn,
diff --git a/tools/libs/guest/xg_sr_restore_x86_pv.c b/tools/libs/guest/xg_sr_restore_x86_pv.c
index dc50b0f5a8..2dafad7b83 100644
--- a/tools/libs/guest/xg_sr_restore_x86_pv.c
+++ b/tools/libs/guest/xg_sr_restore_x86_pv.c
@@ -1134,6 +1134,15 @@ static int x86_pv_stream_complete(struct xc_sr_context *ctx)
     if ( rc )
         return rc;
 
+    rc = xc_domain_set_context(xch, ctx->domid,
+                               ctx->restore.dom_ctx.ptr,
+                               ctx->restore.dom_ctx.size);
+    if ( rc )
+    {
+        PERROR("Unable to restore Domain context");
+        return rc;
+    }
+
     rc = xc_dom_gnttab_seed(xch, ctx->domid, false,
                             ctx->restore.console_gfn,
                             ctx->restore.xenstore_gfn,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:23:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:23:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4613.12182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbVJ-00024s-As; Thu, 08 Oct 2020 19:23:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4613.12182; Thu, 08 Oct 2020 19:23:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbVJ-00024k-5y; Thu, 08 Oct 2020 19:23:05 +0000
Received: by outflank-mailman (input) for mailman id 4613;
 Thu, 08 Oct 2020 19:23:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3RzB=DP=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kQbVH-00022n-V7
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:23:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15f5fd7e-b207-46b1-9f4d-2ad76c0f10ab;
 Thu, 08 Oct 2020 19:22:58 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQbVA-0004dx-MV; Thu, 08 Oct 2020 19:22:56 +0000
Received: from host109-146-187-185.range109-146.btcentralplus.com
 ([109.146.187.185] helo=u2f063a87eabd5f.home)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kQb6u-0002P9-W8; Thu, 08 Oct 2020 18:57:53 +0000
X-Inumbo-ID: 15f5fd7e-b207-46b1-9f4d-2ad76c0f10ab
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
	In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=NK7CpHuHkdziWvYnINTC4RC1wRqkyK5TJ03pOR95Gpw=; b=fgPKe+f5yoRg42ttK4I/VvlcO9
	UL8ifZWH2gcHck4+lsRqprbrM2r3/gvTAUXtNiIAkxciGsycfdWXQTjkpSHKRWfLz6wTUCXuciwlE
	3wHOrcUtitTYdZV5X6p+fKxDMCKykIx4oSE0RhMlA3sVcp5BvXsJ+ExhhwkfkPmJg1c8=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v10 11/11] tools/libs/guest: add code to save a v4 libxc stream
Date: Thu,  8 Oct 2020 19:57:35 +0100
Message-Id: <20201008185735.29875-12-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008185735.29875-1-paul@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Paul Durrant <pdurrant@amazon.com>

This patch adds the necessary code to save a REC_TYPE_DOMAIN_CONTEXT record,
and to stop saving the now-obsolete REC_TYPE_SHARED_INFO and REC_TYPE_TSC_INFO
records for PV guests.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>

v10:
 - New in v10
 - Derived from patch #8 of the v9 series
---
 tools/libs/guest/xg_sr_common_x86.c   | 20 -----------
 tools/libs/guest/xg_sr_common_x86.h   |  6 ----
 tools/libs/guest/xg_sr_save.c         | 52 ++++++++++++++++++++++++++-
 tools/libs/guest/xg_sr_save_x86_hvm.c |  5 ---
 tools/libs/guest/xg_sr_save_x86_pv.c  | 22 ------------
 5 files changed, 51 insertions(+), 54 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common_x86.c b/tools/libs/guest/xg_sr_common_x86.c
index 6f12483907..10a35b998e 100644
--- a/tools/libs/guest/xg_sr_common_x86.c
+++ b/tools/libs/guest/xg_sr_common_x86.c
@@ -1,25 +1,5 @@
 #include "xg_sr_common_x86.h"
 
-int write_x86_tsc_info(struct xc_sr_context *ctx)
-{
-    xc_interface *xch = ctx->xch;
-    struct xc_sr_rec_x86_tsc_info tsc = {};
-    struct xc_sr_record rec = {
-        .type = REC_TYPE_X86_TSC_INFO,
-        .length = sizeof(tsc),
-        .data = &tsc,
-    };
-
-    if ( xc_domain_get_tsc_info(xch, ctx->domid, &tsc.mode,
-                                &tsc.nsec, &tsc.khz, &tsc.incarnation) < 0 )
-    {
-        PERROR("Unable to obtain TSC information");
-        return -1;
-    }
-
-    return write_record(ctx, &rec);
-}
-
 int handle_x86_tsc_info(struct xc_sr_context *ctx, struct xc_sr_record *rec)
 {
     xc_interface *xch = ctx->xch;
diff --git a/tools/libs/guest/xg_sr_common_x86.h b/tools/libs/guest/xg_sr_common_x86.h
index b55758c96d..e504169705 100644
--- a/tools/libs/guest/xg_sr_common_x86.h
+++ b/tools/libs/guest/xg_sr_common_x86.h
@@ -3,12 +3,6 @@
 
 #include "xg_sr_common.h"
 
-/*
- * Obtains a domains TSC information from Xen and writes a X86_TSC_INFO record
- * into the stream.
- */
-int write_x86_tsc_info(struct xc_sr_context *ctx);
-
 /*
  * Parses a X86_TSC_INFO record and applies the result to the domain.
  */
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 2ba7c3200c..3eecc3e987 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -13,7 +13,7 @@ static int write_headers(struct xc_sr_context *ctx, uint16_t guest_type)
     struct xc_sr_ihdr ihdr = {
         .marker  = IHDR_MARKER,
         .id      = htonl(IHDR_ID),
-        .version = htonl(3),
+        .version = htonl(4),
         .options = htons(IHDR_OPT_LITTLE_ENDIAN),
     };
     struct xc_sr_dhdr dhdr = {
@@ -44,6 +44,52 @@ static int write_headers(struct xc_sr_context *ctx, uint16_t guest_type)
     return 0;
 }
 
+/*
+ * Writes a DOMAIN_CONTEXT record into the stream.
+ */
+static int write_domain_context_record(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_record rec = {
+        .type = REC_TYPE_DOMAIN_CONTEXT,
+    };
+    size_t len = 0;
+    int rc;
+
+    rc = xc_domain_get_context(xch, ctx->domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        PERROR("can't get record length for dom %u", ctx->domid);
+        goto out;
+    }
+
+    rec.data = malloc(len);
+
+    rc = -1;
+    if ( !rec.data )
+    {
+        PERROR("can't allocate %zu bytes", len);
+        goto out;
+    }
+
+    rc = xc_domain_get_context(xch, ctx->domid, rec.data, &len);
+    if ( rc < 0 )
+    {
+        PERROR("can't get domain record for dom %u", ctx->domid);
+        goto out;
+    }
+
+    rec.length = len;
+    rc = write_record(ctx, &rec);
+    if ( rc < 0 )
+        PERROR("failed to write DOMAIN_CONTEXT record");
+
+ out:
+    free(rec.data);
+
+    return rc;
+}
+
 /*
  * Writes an END record into the stream.
  */
@@ -905,6 +951,10 @@ static int save(struct xc_sr_context *ctx, uint16_t guest_type)
             goto err;
         }
 
+        rc = write_domain_context_record(ctx);
+        if ( rc )
+            goto err;
+
         rc = ctx->save.ops.end_of_checkpoint(ctx);
         if ( rc )
             goto err;
diff --git a/tools/libs/guest/xg_sr_save_x86_hvm.c b/tools/libs/guest/xg_sr_save_x86_hvm.c
index 1634a7bc43..d44fb3fc4f 100644
--- a/tools/libs/guest/xg_sr_save_x86_hvm.c
+++ b/tools/libs/guest/xg_sr_save_x86_hvm.c
@@ -193,11 +193,6 @@ static int x86_hvm_end_of_checkpoint(struct xc_sr_context *ctx)
 {
     int rc;
 
-    /* Write the TSC record. */
-    rc = write_x86_tsc_info(ctx);
-    if ( rc )
-        return rc;
-
     /* Write the HVM_CONTEXT record. */
     rc = write_hvm_context(ctx);
     if ( rc )
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index 4964f1f7b8..3de7b19f54 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -849,20 +849,6 @@ static int write_x86_pv_p2m_frames(struct xc_sr_context *ctx)
     return rc;
 }
 
-/*
- * Writes an SHARED_INFO record into the stream.
- */
-static int write_shared_info(struct xc_sr_context *ctx)
-{
-    struct xc_sr_record rec = {
-        .type = REC_TYPE_SHARED_INFO,
-        .length = PAGE_SIZE,
-        .data = ctx->x86.pv.shinfo,
-    };
-
-    return write_record(ctx, &rec);
-}
-
 /*
  * Normalise a pagetable for the migration stream.  Performs mfn->pfn
  * conversions on the ptes.
@@ -1093,14 +1079,6 @@ static int x86_pv_end_of_checkpoint(struct xc_sr_context *ctx)
 {
     int rc;
 
-    rc = write_x86_tsc_info(ctx);
-    if ( rc )
-        return rc;
-
-    rc = write_shared_info(ctx);
-    if ( rc )
-        return rc;
-
     rc = write_all_vcpu_information(ctx);
     if ( rc )
         return rc;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:31:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:31:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4619.12196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbcz-0003BP-E2; Thu, 08 Oct 2020 19:31:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4619.12196; Thu, 08 Oct 2020 19:31:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbcz-0003BI-9W; Thu, 08 Oct 2020 19:31:01 +0000
Received: by outflank-mailman (input) for mailman id 4619;
 Thu, 08 Oct 2020 19:31:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQbcy-0003Ag-CW
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:31:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55fa0cdd-4b87-4bb3-a2c7-fcfa831aee50;
 Thu, 08 Oct 2020 19:30:52 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQbcq-0004nM-8r; Thu, 08 Oct 2020 19:30:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQbcp-0002Az-Rj; Thu, 08 Oct 2020 19:30:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQbcp-0002ft-RI; Thu, 08 Oct 2020 19:30:51 +0000
X-Inumbo-ID: 55fa0cdd-4b87-4bb3-a2c7-fcfa831aee50
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7MypSTP/LCNqe2Ms8pSz6J4UaOVSbNXpmnQchxsazrI=; b=xozHkAaQZSW/GIh4uKOOCBUQZ8
	ea5IvL+IlNLi/NDPAKtsCl7o8mgIWs7zCT1VHi2+lYzJKJ/yMnXxA1mJyrDSIW9R01ZI4v1zSLH4s
	LMKxgeIGY3wRzF6IM11r163srrrkCyPQSIpJoF920ZT4/0in8dfO2L1xbirkCAatV9BI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155534-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 155534: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=d22f99d235e13356521b374410a6ee24f50b65e6
X-Osstest-Versions-That:
    linux=a9518c1aec5b6a8e1a04bbd54e6ba9725ef0db4c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 08 Oct 2020 19:30:51 +0000

flight 155534 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155534/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 155513 pass in 155534
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 155513

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                d22f99d235e13356521b374410a6ee24f50b65e6
baseline version:
 linux                a9518c1aec5b6a8e1a04bbd54e6ba9725ef0db4c

Last test of basis   155222  2020-10-01 11:40:31 Z    7 days
Testing same since   155513  2020-10-07 06:19:08 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Banerjee, Debabrata" <dbanerje@akamai.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Ahmad Fatoum <a.fatoum@pengutronix.de>
  Al Viro <viro@zeniv.linux.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Aloka Dixit <alokad@codeaurora.org>
  Andrew Jeffery <andrew@aj.id.au>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Bryan O'Donoghue <bryan.odonoghue@linaro.org>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Chris Packham <chris.packham@alliedtelesis.co.nz>
  Christoph Hellwig <hch@lst.de>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Chunyan Zhang <chunyan.zhang@unisoc.com>
  Corentin Labbe <clabbe@baylibre.com>
  Damien Le Moal <damien.lemoal@wdc.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  dillon min <dillon.minfei@gmail.com>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Eric Sandeen <sandeen@redhat.com>
  Eric Sandeen <sandeen@sandeen.net>
  Felix Fietkau <nbd@nbd.name>
  Filipe Manana <fdmanana@suse.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Guo Ren <guoren@linux.alibaba.com>
  Hans de Goede <hdegoede@redhat.com>
  Harry Cutts <hcutts@chromium.org>
  Jakub Kicinski <kuba@kernel.org>
  James Smart <james.smart@broadcom.com>
  Jean Delvare <jdelvare@suse.de>
  Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Kerr <jk@codeconstruct.com.au>
  Jiri Kosina <jkosina@suse.cz>
  Jochen Friedrich <jochen@scram.de>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Keith Busch <kbusch@kernel.org>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Laurent Dufour <ldufour@linux.ibm.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lucy Yan <lucyyan@google.com>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Martin Cerveny <m.cerveny@computer.org>
  Maxime Ripard <maxime@cerno.tech>
  Michal Hocko <mhocko@suse.com>
  Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
  Nicolas VINCENT <nicolas.vincent@vossloh.com>
  Olympia Giannou <ogiannou@gmail.com>
  Olympia Giannou <olympia.giannou@leica-geosystems.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qian Cai <cai@redhat.com>
  Revanth Rajashekar <revanth.rajashekar@intel.com>
  Rob Herring <robh@kernel.org>
  Sasha Levin <sashal@kernel.org>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sebastien Boeuf <sebastien.boeuf@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Boyd <sboyd@kernel.org>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sylwester Nawrocki <s.nawrocki@samsung.com>
  Taiping Lai <taiping.lai@unisoc.com>
  Tao Ren <rentao.bupt@gmail.com>
  Thibaut Sautereau <thibaut.sautereau@ssi.gouv.fr>
  Thierry Reding <treding@nvidia.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vincent Huang <vincent.huang@tw.synaptics.com>
  Vinod Koul <vkoul@kernel.org>
  Will McVicker <willmcvicker@google.com>
  Willy Tarreau <w@1wt.eu>
  Wolfram Sang <wsa@kernel.org>
  Xianting Tian <tian.xianting@h3c.com>
  Xie He <xie.he.0141@gmail.com>
  Xu Kai <xukai@nationalchip.com>
  Yu Kuai <yukuai3@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   a9518c1aec5b..d22f99d235e1  d22f99d235e13356521b374410a6ee24f50b65e6 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:39:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:39:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4621.12208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbkj-0003Wr-5Z; Thu, 08 Oct 2020 19:39:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4621.12208; Thu, 08 Oct 2020 19:39:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbkj-0003Wk-2O; Thu, 08 Oct 2020 19:39:01 +0000
Received: by outflank-mailman (input) for mailman id 4621;
 Thu, 08 Oct 2020 19:39:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbkh-0003WY-UA
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:38:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5985addb-448f-4222-ac62-91b32dd5759b;
 Thu, 08 Oct 2020 19:38:58 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbkg-0004xY-Nr
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:38:58 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbkg-0005M3-Mv
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:38:58 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN7-0006So-Sq; Thu, 08 Oct 2020 20:14:38 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbkh-0003WY-UA
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:38:59 +0000
X-Inumbo-ID: 5985addb-448f-4222-ac62-91b32dd5759b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5985addb-448f-4222-ac62-91b32dd5759b;
	Thu, 08 Oct 2020 19:38:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=uF93lLE6Hujj9HHD1uOcz9Kb2G0DNm7aSbOO3mDjPxk=; b=xBDd+2KoTZzeVdyDtJK+1BjQUi
	dr8sWTB9JIeAhL6rOHCH6UKktPi3aLXocn16x/g9GbaaZi0WrPBYx8i7JIl6eaI7rFtOUiWN41lzz
	a0DLaxtu52DEzY3wR7IWWgYERzOGPQ5laHoxpJGk9TATlh6y1XHhjX5nxbxZHr5MWRTI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbkg-0004xY-Nr
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:38:58 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbkg-0005M3-Mv
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:38:58 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN7-0006So-Sq; Thu, 08 Oct 2020 20:14:38 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: [OSSTEST PATCH 13/13] cr-daily-branch: Do not do immediate retry of failing xtf flights
Date: Thu,  8 Oct 2020 20:14:22 +0100
Message-Id: <20201008191422.5683-14-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

CC: Andrew Cooper <Andrew.Cooper3@citrix.com>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 1 +
 1 file changed, 1 insertion(+)

diff --git a/cr-daily-branch b/cr-daily-branch
index 3e58d465..9b1961bd 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -484,6 +484,7 @@ default_immediate_retry=$wantpush
 case "$branch" in
 *smoke*)	default_immediate_retry=false ;;
 osstest)	default_immediate_retry=false ;;
+xtf*)		default_immediate_retry=false ;;
 *)		;;
 esac
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:39:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:39:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4622.12220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbkp-0003ZD-FA; Thu, 08 Oct 2020 19:39:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4622.12220; Thu, 08 Oct 2020 19:39:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbkp-0003Z6-As; Thu, 08 Oct 2020 19:39:07 +0000
Received: by outflank-mailman (input) for mailman id 4622;
 Thu, 08 Oct 2020 19:39:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbko-0003Yi-7m
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 858aec6d-ff27-477b-86b2-c876ceba815b;
 Thu, 08 Oct 2020 19:39:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbkn-0004y3-4w
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:05 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbkn-0005NZ-42
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:05 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN6-0006So-9m; Thu, 08 Oct 2020 20:14:36 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbko-0003Yi-7m
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:06 +0000
X-Inumbo-ID: 858aec6d-ff27-477b-86b2-c876ceba815b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 858aec6d-ff27-477b-86b2-c876ceba815b;
	Thu, 08 Oct 2020 19:39:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=T/OCcqRxeQc9x8tpxs5RGS4FPCqsIq9O+7k8X7cgE7E=; b=myVy3ffk5l9fTO6DjVB8ItLAoo
	txCdhoaMx7mmVXuQnHaEy6LTerQ2tXYUJJgX/DJs2GCJEEm5T6NiuT36V6xOQu+qxYmtWgeISXLAV
	fsnrl/3I5hrt79rL1w3A9APvr7fmG9gGVB4gYJEuD/Cu45fKGOeDBsvdL+oAIfxFynjw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbkn-0004y3-4w
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:05 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbkn-0005NZ-42
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:05 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN6-0006So-9m; Thu, 08 Oct 2020 20:14:36 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 10/13] cri-args-hostlists: Move flight_html_dir variable
Date: Thu,  8 Oct 2020 20:14:19 +0100
Message-Id: <20201008191422.5683-11-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is only used in report_flight.  We are going to want to call
report_flight from outside start_email, without having to set that
variable ourselves.

The variable isn't actually used in start_email.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cri-args-hostlists | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 52e39f33..52cac195 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -113,7 +113,6 @@ start_email () {
 	printf '%s\n' "`getconfig EmailStdHeaders`"
 	printf 'Subject: %s' "${subject_prefix:-[$branch test] }"
 
-	local flight_html_dir=$OSSTEST_HTMLPUB_DIR/
 	local job_html_dir=$OSSTEST_HTML_DIR/
 	local host_html_dir=$OSSTEST_HTML_DIR/host/
 
@@ -143,6 +142,7 @@ start_email () {
 
 report_flight () {
 	local flight=$1
+	local flight_html_dir=$OSSTEST_HTMLPUB_DIR/
 	./sg-report-flight --html-dir=$flight_html_dir/$flight/ \
 		--allow=allow.all --allow=allow.$branch \
 		$sgr_args $flight >tmp/$flight.report
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:39:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4623.12232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbkt-0003cs-SN; Thu, 08 Oct 2020 19:39:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4623.12232; Thu, 08 Oct 2020 19:39:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbkt-0003ci-Oe; Thu, 08 Oct 2020 19:39:11 +0000
Received: by outflank-mailman (input) for mailman id 4623;
 Thu, 08 Oct 2020 19:39:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbks-0003c5-Iy
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7c2fff55-8ccc-41dc-bdb1-cebf48079dc6;
 Thu, 08 Oct 2020 19:39:09 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbkr-0004yD-MW
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:09 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbkr-0005OP-Ll
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:09 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN6-0006So-QH; Thu, 08 Oct 2020 20:14:36 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kQbks-0003c5-Iy
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:10 +0000
X-Inumbo-ID: 7c2fff55-8ccc-41dc-bdb1-cebf48079dc6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7c2fff55-8ccc-41dc-bdb1-cebf48079dc6;
	Thu, 08 Oct 2020 19:39:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=rKhUBECvH4UVMiFTJmADZZZYMWQOR+f/Us5E4XBbMw4=; b=pMdPpc0R8BfRIfkbfiBbeYrZmI
	aSNVXDhLHzl18lkxI35PUX+98luv93X7/hvRC6Y/qHkO1aHH0ykaV+idIWrHjy35mZJPYr4xu/maR
	gbZNJ5Cec7W+PbLBsdzJZkXtFkUS1gUulDSLGRAsSCwYUWIgrQYxkWXytrveYg94FpAY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbkr-0004yD-MW
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:09 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbkr-0005OP-Ll
	for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:09 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kQbN6-0006So-QH; Thu, 08 Oct 2020 20:14:36 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 11/13] cr-daily-branch: Immediately retry failing tests
Date: Thu,  8 Oct 2020 20:14:20 +0100
Message-Id: <20201008191422.5683-12-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We exclude the self-tests because we don't want to miss breakage, and
the Xen smoke tests because they will be run again RSN anyway.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 48 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 47 insertions(+), 1 deletion(-)

diff --git a/cr-daily-branch b/cr-daily-branch
index 285ea361..bea8734e 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -472,12 +472,58 @@ esac
 sgr_args+=" $EXTRA_SGR_ARGS"
 
 date >&2
+original_start=`date +%s`
+
 : $flight $branch $OSSTEST_BLESSING $sgr_args
 eval "$DAILY_BRANCH_PREEXEC_HOOK"
 execute_flight $flight $OSSTEST_BLESSING
 date >&2
 
-start_email $flight $branch "$sgr_args" "$subject_prefix"
+default_immediate_retry=$wantpush
+
+case "$branch" in
+*smoke*)	default_immediate_retry=false ;;
+osstest)	default_immediate_retry=false ;;
+*)		;;
+esac
+
+: ${OSSTEST_IMMEDIATE_RETRY:=$default_immediate_retry}
+
+while true; do
+	start_email $flight $branch "$sgr_args" "$subject_prefix"
+	if grep '^tolerable$' $mrof >/dev/null 2>&1; then break; fi
+	if ! $OSSTEST_IMMEDIATE_RETRY; then break; fi
+	OSSTEST_IMMEDIATE_RETRY=false
+	retry_jobs=$(
+		perl <$mrof -wne '
+			next unless m/^regression (\S+) /;
+			my $j = $1;
+			next if $j =~ m/^build/;
+			$r{$j}++;
+			END {
+				print "copy-jobs '$flight' $_ "
+					foreach sort keys %r;
+			}'
+	)
+	if [ "x$retry_jobs" = x ]; then break; fi
+
+	rflight=$(
+		./cs-adjust-flight new:$OSSTEST_BLESSING \
+			branch-set $branch \
+			$retry_jobs
+	)
+
+	./mg-adjust-flight-makexrefs -v $rflight \
+		--branch=$branch --revision-osstest=$harness_rev \
+		'^build-*' --debug --blessings=real
+
+	export OSSTEST_RESOURCE_WAITSTART=$original_start
+	execute_flight $rflight $OSSTEST_BLESSING-retest
+	report_flight $rflight
+	publish_logs $rflight
+
+	sgr_args+=" --refer-to-flight=$rflight"
+done
 
 push=false
 if grep '^tolerable$' $mrof >/dev/null 2>&1; then push=$wantpush; fi
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 19:39:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 19:39:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4624.12244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbl0-0003ip-5T; Thu, 08 Oct 2020 19:39:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4624.12244; Thu, 08 Oct 2020 19:39:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQbl0-0003if-1V; Thu, 08 Oct 2020 19:39:18 +0000
Received: by outflank-mailman (input) for mailman id 4624;
 Thu, 08 Oct 2020 19:39:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ViHE=DP=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kQbky-0003hn-QU
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a5a942f-8fc3-4da1-8101-93938b5f8d44;
 Thu, 08 Oct 2020 19:39:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbky-0004z7-1J
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:16 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kQbkx-0005PW-Vh
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 19:39:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kQbN7-0006So-6d; Thu, 08 Oct 2020 20:14:37 +0100
X-Inumbo-ID: 8a5a942f-8fc3-4da1-8101-93938b5f8d44
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=sjdvYyM1PnXhPbXqN8m96KPPeA795n+928eEa0JNcoo=; b=buqk5P9CeNW/avktkVxJlwMMIz
	cKnL2ED0j3xq0Z1AD+G+3ZUZ7lNZupIYRopQoS/YBv0oaRCagb6drCYwFjWJxUDlFIj5XXr5U9xWb
	S8DPeKW1hU+YBJVacfELEi5MaixS6DetQ4Qw1TOJYSFq+YfpSwOrgboT7jxg+lzyi3xg=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 12/13] Honour OSSTEST_SIMULATE_FAIL_RETRY for immediate retries
Date: Thu,  8 Oct 2020 20:14:21 +0100
Message-Id: <20201008191422.5683-13-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201008191422.5683-1-iwj@xenproject.org>
References: <20201008191422.5683-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is primarily useful for debugging the immediate-retry logic, but
it seems churlish to delete it again.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/cr-daily-branch b/cr-daily-branch
index bea8734e..3e58d465 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -517,6 +517,10 @@ while true; do
 		--branch=$branch --revision-osstest=$narness_rev \
 		'^build-*' --debug --blessings=real
 
+	if [ "${OSSTEST_SIMULATE_FAIL_RETRY+y}" = y ]; then
+		export OSSTEST_SIMULATE_FAIL="${OSSTEST_SIMULATE_FAIL_RETRY}"
+	fi
+
 	export OSSTEST_RESOURCE_WAITSTART=$original_start
 	execute_flight $rflight $OSSTEST_BLESSING-retest
 	report_flight $rflight
-- 
2.20.1
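
[Archive note: illustrative sketch, not part of the posted patch.] The hunk above uses the `"${OSSTEST_SIMULATE_FAIL_RETRY+y}"` expansion rather than a plain `-n` test, so that an empty-but-set variable is still propagated. A minimal standalone sketch of that idiom, using the same variable names as the patch (the `host-install=fail` value is a made-up example, not taken from osstest docs):

```shell
#!/bin/sh
# "${VAR+y}" expands to "y" when VAR is set -- even to the empty
# string -- and to nothing when VAR is unset.  This is why the patch
# can distinguish "unset" from "set to empty", which [ -n "$VAR" ]
# cannot.  maybe_copy mirrors the patch hunk.

maybe_copy() {
	if [ "${OSSTEST_SIMULATE_FAIL_RETRY+y}" = y ]; then
		export OSSTEST_SIMULATE_FAIL="${OSSTEST_SIMULATE_FAIL_RETRY}"
	fi
}

unset OSSTEST_SIMULATE_FAIL OSSTEST_SIMULATE_FAIL_RETRY
maybe_copy
# prints: unset: OSSTEST_SIMULATE_FAIL=<unset>
echo "unset: OSSTEST_SIMULATE_FAIL=${OSSTEST_SIMULATE_FAIL-<unset>}"

OSSTEST_SIMULATE_FAIL_RETRY='host-install=fail'   # hypothetical value
maybe_copy
# prints: set:   OSSTEST_SIMULATE_FAIL=host-install=fail
echo "set:   OSSTEST_SIMULATE_FAIL=${OSSTEST_SIMULATE_FAIL-<unset>}"
```

With `set -u` in effect the `+y` form is also safe, since it never dereferences an unset variable.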



From xen-devel-bounces@lists.xenproject.org Thu Oct 08 21:42:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 08 Oct 2020 21:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4633.12258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQdfr-00075e-Aq; Thu, 08 Oct 2020 21:42:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4633.12258; Thu, 08 Oct 2020 21:42:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQdfr-00075X-7f; Thu, 08 Oct 2020 21:42:07 +0000
Received: by outflank-mailman (input) for mailman id 4633;
 Thu, 08 Oct 2020 21:42:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKTU=DP=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQdfp-00075S-Dg
 for xen-devel@lists.xenproject.org; Thu, 08 Oct 2020 21:42:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51958c88-926b-48df-b225-81df8d078827;
 Thu, 08 Oct 2020 21:42:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQdfl-0007ca-9d; Thu, 08 Oct 2020 21:42:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQdfk-0003BR-TG; Thu, 08 Oct 2020 21:42:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQdfk-0002Ni-Sn; Thu, 08 Oct 2020 21:42:00 +0000
X-Inumbo-ID: 51958c88-926b-48df-b225-81df8d078827
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=64CMOA/XgufenpEdb3O6eaHh6UIH+bb0+zfVJr/R04A=; b=wSxde37FNURZZi6D4SC+jWxSh7
	1pgoSVFG37DMaI1YgoZwCGt23SeeE/XJlT8u27ibz8K1VRHENCFskdC5/2HrDgEDNtjbUGkXL+2Iv
	JFX7Npj9vByUaKOp2eDisXnWeX5nlIJu4dAyHf8NY11xoH/dtS/Gfy+F1J4lucIJa0JM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155541-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 155541: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.10-testing:test-xtf-amd64-amd64-2:xtf/test-hvm64-lbr-tsx-vmentry:fail:heisenbug
    xen-4.10-testing:test-xtf-amd64-amd64-3:xtf/test-hvm64-lbr-tsx-vmentry:fail:heisenbug
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore.2:fail:heisenbug
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    xen=1719f79a0efd36d15837c51982173dd1c287dced
X-Osstest-Versions-That:
    xen=93be943e7d759015bd5db41a48f6dce58e580d5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 08 Oct 2020 21:42:00 +0000

flight 155541 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155541/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-2 54 xtf/test-hvm64-lbr-tsx-vmentry fail in 155514 pass in 155541
 test-xtf-amd64-amd64-3   54 xtf/test-hvm64-lbr-tsx-vmentry fail pass in 155514
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore.2 fail pass in 155514

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail in 155514 never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 151728
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 151728
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               fail   never pass
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  1719f79a0efd36d15837c51982173dd1c287dced
baseline version:
 xen                  93be943e7d759015bd5db41a48f6dce58e580d5a

Last test of basis   151728  2020-07-08 01:17:09 Z   92 days
Failing since        154621  2020-09-22 16:07:00 Z   16 days   23 attempts
Testing same since   155362  2020-10-03 03:17:48 Z    5 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93be943e7d..1719f79a0e  1719f79a0efd36d15837c51982173dd1c287dced -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 00:02:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 00:02:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4666.12332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQfr5-0003qZ-Sx; Fri, 09 Oct 2020 00:01:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4666.12332; Fri, 09 Oct 2020 00:01:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQfr5-0003qS-PL; Fri, 09 Oct 2020 00:01:51 +0000
Received: by outflank-mailman (input) for mailman id 4666;
 Fri, 09 Oct 2020 00:01:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQfr3-0003qN-RY
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 00:01:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66e1ca5e-1963-4465-99ab-c3f0102bb66c;
 Fri, 09 Oct 2020 00:01:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQfr0-0002f4-R6; Fri, 09 Oct 2020 00:01:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQfr0-0008Om-Hq; Fri, 09 Oct 2020 00:01:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQfr0-0007NH-HN; Fri, 09 Oct 2020 00:01:46 +0000
X-Inumbo-ID: 66e1ca5e-1963-4465-99ab-c3f0102bb66c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+CGBaUphD7J+2ydaD2UqdcjHvF7GqMd0PwgXCnxbMi8=; b=Ahwnx3rFWlx4QuyfRGHlE7l3D+
	zdCdxnt2sRI7qsl7ub5ZWhem9vzj4UxmzmBZlJugg9Rx3wOBucbKSEnIlR5ZMoEpu2VHnVaxpI6wp
	BKkurqnCva+iW1vG+ueSaQmiUkbvsT4qhmxoNt7g95hp3xQ8hx5qUEbXJuWLjrUcytyc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155542-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155542: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=c85fb28b6f999db9928b841f63f1beeb3074eeca
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 00:01:46 +0000

flight 155542 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155542/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair          8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair          9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  6 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  6 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  9 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  7 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 155516
 test-armhf-armhf-xl-credit2   7 xen-boot                   fail pass in 155516
 test-armhf-armhf-xl-credit1   7 xen-boot                   fail pass in 155516

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      7 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1 13 migrate-support-check fail in 155516 never pass
 test-armhf-armhf-xl-credit1 14 saverestore-support-check fail in 155516 never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail in 155516 never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail in 155516 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                c85fb28b6f999db9928b841f63f1beeb3074eeca
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   69 days
Failing since        152366  2020-08-01 20:49:34 Z   68 days  114 attempts
Testing same since   155516  2020-10-07 09:30:00 Z    1 days    2 attempts

------------------------------------------------------------
2499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 337618 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 00:12:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 00:12:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4671.12347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQg13-0004sJ-1w; Fri, 09 Oct 2020 00:12:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4671.12347; Fri, 09 Oct 2020 00:12:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQg12-0004sC-UI; Fri, 09 Oct 2020 00:12:08 +0000
Received: by outflank-mailman (input) for mailman id 4671;
 Fri, 09 Oct 2020 00:12:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fkdD=DQ=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1kQg11-0004s7-U7
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 00:12:08 +0000
Received: from mail-qt1-x844.google.com (unknown [2607:f8b0:4864:20::844])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0dbd2706-d179-4524-b125-ee5550b06ab2;
 Fri, 09 Oct 2020 00:12:07 +0000 (UTC)
Received: by mail-qt1-x844.google.com with SMTP id m9so6648656qth.7
 for <xen-devel@lists.xenproject.org>; Thu, 08 Oct 2020 17:12:07 -0700 (PDT)
X-Inumbo-ID: 0dbd2706-d179-4524-b125-ee5550b06ab2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=zededa.com; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=HHRiFQ4uAxyxJTCGNYZBIb0TKAWtUin/2niu/YgVXnQ=;
        b=HUdtsfdvPBlhhc7EQpsjwxX5Hc9V+glCtpfeETSf3Euf8qWuALlXAx1hlvSOB7d33c
         E19ycS1uZr0UDRa2EZK9fJXmiavRsLkUqzvQ/QbEwh9CrcJWNlUO28lbyBTPJlRz3Pj1
         f2uVTgg1j+n4kGun4MPofHtXDzbV2mdQxBFSjwOAXLoTkW59IQkdoMp36a/4pmAQBQt3
         uf0SnY3rbGQR+SBbrMHnyATUY0NQvW2a9L83lazNlCBofJBiNgC0gR5zlb7IM3cNd2Ah
         DuxyTAB8D0O8UYhsac1yxlWUyIWVdFCZWrAecLt22ItOhUfBeGml+ed5ukDGc3RzHr0F
         oDxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=HHRiFQ4uAxyxJTCGNYZBIb0TKAWtUin/2niu/YgVXnQ=;
        b=M4r9sz7S9qqjkTKRdmkZcqQpVrbzZ3lwCk2RUgxTAzdSpzGEYT9bJyEl+nhwqhh5va
         M1HUxmPG3qRsj9ktAR4bbaAj31ZclRWlNxFF1x85tG9Zu/rJ8vH8PgVmcFss3TyXLSHN
         b1RscauyiGPNrC4+6hbp02WYxr3JQSt0Q4yO1ogRlGwT+Rhlfik8Wej5aPDlXNqJvAYq
         pEkQS5uKJuekq3tX3g5MzcFXA1mPCHlOxyq8XaGbOrTimuk7lPMXJraxAwsH4QscqLct
         1fKAdlxzrDeSLX/R3XKkNgBIvzUWGqszDqVJm+yZbTYMYfwXFvWOMjMlVSEGps8scpuS
         qKpw==
X-Gm-Message-State: AOAM5309uptlHZcsus/NUsfTrOv2FXkuRkXVIvbgP7GvgQWxQdRNoZ3e
	P1rkiWeNjb1WK9VR+XwHkfEyJomUb+76viqgOO9fVg==
X-Google-Smtp-Source: ABdhPJw8kncMGVqev5RvUyGx/Jx2ui4Z9cK7wL1d3QzYyToBWwwlvOy9Yq/L8YICpUEv72KeZQowOV5pFzjESf/w5NQ=
X-Received: by 2002:ac8:b83:: with SMTP id h3mr10964807qti.113.1602202326731;
 Thu, 08 Oct 2020 17:12:06 -0700 (PDT)
MIME-Version: 1.0
References: <20201007223813.1638-1-sstabellini@kernel.org> <1A694341-33AC-41E1-B216-2D3E1A6C45B4@arm.com>
In-Reply-To: <1A694341-33AC-41E1-B216-2D3E1A6C45B4@arm.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Thu, 8 Oct 2020 17:11:55 -0700
Message-ID: <CAMmSBy8wxneKDk01HRCZxHR-58R8v6Kp1-5TA28iQgM4OG56Wg@mail.gmail.com>
Subject: Re: [PATCH v3] xen/rpi4: implement watchdog-based reset
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, 
	"open list:X86" <xen-devel@lists.xenproject.org>, "julien@xen.org" <julien@xen.org>, 
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 8, 2020 at 12:58 AM Bertrand Marquis
<Bertrand.Marquis@arm.com> wrote:
>
>
>
> > On 7 Oct 2020, at 23:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >
> > The preferred method to reboot RPi4 is PSCI. If it is not available,
> > touching the watchdog is required to be able to reboot the board.
> >
> > The implementation is based on
> > drivers/watchdog/bcm2835_wdt.c:__bcm2835_restart in Linux v5.9-rc7.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > Acked-by: Julien Grall <jgrall@amazon.com>
>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

FWIW:

Tested-by: Roman Shaposhnik <roman@zededa.com>

Great to see it being merged -- one less custom patch for us to carry in EVE.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 01:17:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 01:17:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4675.12363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQh2T-0002tH-3k; Fri, 09 Oct 2020 01:17:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4675.12363; Fri, 09 Oct 2020 01:17:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQh2S-0002tA-W1; Fri, 09 Oct 2020 01:17:40 +0000
Received: by outflank-mailman (input) for mailman id 4675;
 Fri, 09 Oct 2020 01:17:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2c2X=DQ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kQh2R-0002t5-Oi
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 01:17:39 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id faa43fe7-3aa0-4f90-84fc-f112f14ebdf8;
 Fri, 09 Oct 2020 01:17:38 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3132122240;
 Fri,  9 Oct 2020 01:17:37 +0000 (UTC)
X-Inumbo-ID: faa43fe7-3aa0-4f90-84fc-f112f14ebdf8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602206257;
	bh=N8AHEPFp1lyzBoBDbNRLy/YDZM/OJro+Jwd8IPiFPk4=;
	h=Date:From:To:cc:Subject:From;
	b=lbJX7sNftrCsSL2VKENQzyTQKx502+zAaqZ3+Ajua3uT/Nh3Jp70A6fC6IklQmWEP
	 uQjQp/P4n0wqVIfiob+q3QA8Go6Xu5Wua2g2eNR1NAYkv4lp4VkivKN/MCNdLXnoUV
	 gA8w5Db1ZXGIvg3zKCaRvjbla9l5s5V/xQ3btS2I=
Date: Thu, 8 Oct 2020 18:17:36 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: committers@xenproject.org
cc: sstabellini@kernel.org, George.Dunlap@citrix.com, bertrand.marquis@arm.com, 
    xen-devel@lists.xenproject.org
Subject: plans on generating docs from Xen in-code comments
Message-ID: <alpine.DEB.2.21.2010081044260.23978@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

This email is an update regarding the Xen documentation effort
driven by the FuSa SIG. We plan to go forward with the proposal below;
please let us know if you have any concerns.

We have been discussing how to generate documentation for Xen public
headers from in-code comments. Initially, we thought of using
kernel-doc, like the Linux kernel. kernel-doc uses a format similar to
Doxygen. I sent out a patch series to convert public headers to the
kernel-doc format in August, see [1].

Afterward, we spoke with an Intel engineer from the Zephyr project who
is working on Zephyr's safety certification. The Zephyr project is
solving the same problem using Doxygen. The engineer demoed the use of
Doxygen to generate documentation and safety requirements from in-code
comments and other docs. They were able to produce excellent documents,
suitable for safety certifications. For example, see [2]. We should be
able to reuse their scripts and settings in Xen Project, which would
save us significant effort. It therefore makes sense for us to follow
the same path with Doxygen.

Configuring Doxygen is complex because it is much more flexible, and
we'll need that flexibility. We plan to work on an appropriate Doxygen
configuration (based on Zephyr's) over the next few months. As soon as
we have it, we'll add Doxygen to the Xen Project infrastructure to
automatically generate documents for the master branch (something along
the lines of http://xenbits.xenproject.org/docs/sphinx-unstable/).
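[To give a flavour of what such a configuration involves, a minimal
Doxyfile fragment might look like the following. The option names are
standard Doxygen settings; the INPUT path and project name are
illustrative, not an agreed Xen layout.]

```
PROJECT_NAME           = "Xen Public Interface"
INPUT                  = xen/include/public
RECURSIVE              = YES
OPTIMIZE_OUTPUT_FOR_C  = YES
EXTRACT_ALL            = YES
JAVADOC_AUTOBRIEF      = YES
GENERATE_HTML          = YES
GENERATE_LATEX         = NO
```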

In the meantime, we still intend to go ahead with the patch series [1]
converting the public headers to kernel-doc: kernel-doc and Doxygen use
similar formats, so it is a step in the right direction that gets us
closer to the final objective of Doxygen-based documents. Once we have
Doxygen in place, we'll further refine the in-code comments as
necessary.

Cheers,

Stefano

[1] https://marc.info/?l=xen-devel&m=159675781406690
[2] https://docs.zephyrproject.org/latest/index.html


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 02:14:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 02:14:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4680.12381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQhvM-0000Cy-JM; Fri, 09 Oct 2020 02:14:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4680.12381; Fri, 09 Oct 2020 02:14:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQhvM-0000Cq-Cl; Fri, 09 Oct 2020 02:14:24 +0000
Received: by outflank-mailman (input) for mailman id 4680;
 Fri, 09 Oct 2020 02:14:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQhvK-0000CI-Te
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 02:14:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3707d1fc-132d-4987-a5ba-85b48c3da9b6;
 Fri, 09 Oct 2020 02:14:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQhvC-0006Ev-71; Fri, 09 Oct 2020 02:14:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQhvB-0004FO-Sa; Fri, 09 Oct 2020 02:14:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQhvB-0004qP-Rq; Fri, 09 Oct 2020 02:14:13 +0000
X-Inumbo-ID: 3707d1fc-132d-4987-a5ba-85b48c3da9b6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CFtYfiyWNIAItX1p8WqS9eqzAaghrKt1u74Z3jwm5M0=; b=dEs02t5p5wQas26IHJyZ33kEFe
	Rh0tgUGAST27VFapHKjddiBwxFynnx8NZMd1z0KQIXwR4yuGwzD0nANgk3IVQPDTgtxC7ToSsftg2
	nHLHE+SuqYMUzifEl6JSdUCiJF+1ukMcVn6fThJJhiTas/bKBGFtTrGnE4jOetw68DDY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155544-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155544: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f2687fdb7571a444b5af3509574b659d35ddd601
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 02:14:13 +0000

flight 155544 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155544/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 11 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 11 guest-start    fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                f2687fdb7571a444b5af3509574b659d35ddd601
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   49 days
Failing since        152659  2020-08-21 14:07:39 Z   48 days   80 attempts
Testing same since   155518  2020-10-07 13:11:05 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 40281 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 06:15:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 06:15:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4692.12407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQlgd-0005Zn-7J; Fri, 09 Oct 2020 06:15:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4692.12407; Fri, 09 Oct 2020 06:15:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQlgd-0005Zg-44; Fri, 09 Oct 2020 06:15:27 +0000
Received: by outflank-mailman (input) for mailman id 4692;
 Fri, 09 Oct 2020 06:15:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQlgb-0005ZD-Gj
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 06:15:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2dbfc26-4891-440a-b1d5-a88261b7ac3f;
 Fri, 09 Oct 2020 06:15:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQlgU-0003J3-Sc; Fri, 09 Oct 2020 06:15:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQlgU-0001bo-Ki; Fri, 09 Oct 2020 06:15:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQlgU-0008Or-KI; Fri, 09 Oct 2020 06:15:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQlgb-0005ZD-Gj
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 06:15:25 +0000
X-Inumbo-ID: d2dbfc26-4891-440a-b1d5-a88261b7ac3f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d2dbfc26-4891-440a-b1d5-a88261b7ac3f;
	Fri, 09 Oct 2020 06:15:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0urk0LNanaARlS2RMas+KYpkMNRYxByzjr/AzQ+GSNQ=; b=zP4Hd05ZQTFF8Sg48ki5la2aW+
	PRZQCYIWKz+iQvRAnO5Ce1E4Dw3sluV9O3YyOWW3FNCKBnmtr0HfTj8R6LWv42JxJK5ajjra/7U3k
	FeSjmwgrKQKN6hA0OwxpNI7rSTmqBiDaAH9QjPcO6HT+dtnXFm4R+VvaQZYTauBPaPfI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQlgU-0003J3-Sc; Fri, 09 Oct 2020 06:15:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQlgU-0001bo-Ki; Fri, 09 Oct 2020 06:15:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQlgU-0008Or-KI; Fri, 09 Oct 2020 06:15:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155584-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155584: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
X-Osstest-Versions-That:
    xen=0241809bf838875615797f52af34222e5ab8e98f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 06:15:18 +0000

flight 155584 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155584/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
baseline version:
 xen                  0241809bf838875615797f52af34222e5ab8e98f

Last test of basis   155547  2020-10-08 13:01:25 Z    0 days
Testing same since   155584  2020-10-09 02:01:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Roman Shaposhnik <roman@zededa.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0241809bf8..25849c8b16  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 06:53:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 06:53:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4698.12426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQmHV-0000nx-8a; Fri, 09 Oct 2020 06:53:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4698.12426; Fri, 09 Oct 2020 06:53:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQmHV-0000nq-59; Fri, 09 Oct 2020 06:53:33 +0000
Received: by outflank-mailman (input) for mailman id 4698;
 Fri, 09 Oct 2020 06:53:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQmHT-0000nl-MG
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 06:53:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 805574ae-a9ef-47b2-9a4d-9e4bd3ef26b3;
 Fri, 09 Oct 2020 06:53:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQmHR-00046q-UW; Fri, 09 Oct 2020 06:53:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQmHR-00040C-Nx; Fri, 09 Oct 2020 06:53:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQmHR-0001TO-NW; Fri, 09 Oct 2020 06:53:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQmHT-0000nl-MG
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 06:53:31 +0000
X-Inumbo-ID: 805574ae-a9ef-47b2-9a4d-9e4bd3ef26b3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 805574ae-a9ef-47b2-9a4d-9e4bd3ef26b3;
	Fri, 09 Oct 2020 06:53:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=H6BHX48+Knvhj5mJ+i/RLJtofuxMAP+YxfljBhXCigM=; b=2Vc7bcyP9WkVfrc97xyVzbFoo1
	rT4BUEQIcilFwQmLySEVjFpEwtm7ZoLIRH7afKFFZdj4P7sjPCBnrnsi4q+VAdjPEOkjWP2GzI+k1
	hW9gslvRKkYgl/spchJmOXr6wVWqqYw9f7XYjHf4viKBuVaZaLdJc6nG6x9ahxa9jDTo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQmHR-00046q-UW; Fri, 09 Oct 2020 06:53:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQmHR-00040C-Nx; Fri, 09 Oct 2020 06:53:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQmHR-0001TO-NW; Fri, 09 Oct 2020 06:53:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155548-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155548: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=69e95b9efed520e643b9e5b0573180aa7c5ecaca
X-Osstest-Versions-That:
    ovmf=c640186ec8aae6164123ee38de6409aed69eab12
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 06:53:29 +0000

flight 155548 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155548/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 69e95b9efed520e643b9e5b0573180aa7c5ecaca
baseline version:
 ovmf                 c640186ec8aae6164123ee38de6409aed69eab12

Last test of basis   155512  2020-10-07 04:39:54 Z    2 days
Testing same since   155548  2020-10-08 13:39:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Chang Abner <abner.chang@hpe.com>
  Fu Siyuan <siyuan.fu@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Leif Lindholm <leif@nuviainc.com>
  Wang Fan <fan.wang@intel.com>
  Wu Jiaxin <jiaxin.wu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c640186ec8..69e95b9efe  69e95b9efed520e643b9e5b0573180aa7c5ecaca -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 07:48:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 07:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4703.12444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQn8V-0005bF-Ds; Fri, 09 Oct 2020 07:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4703.12444; Fri, 09 Oct 2020 07:48:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQn8V-0005b8-9x; Fri, 09 Oct 2020 07:48:19 +0000
Received: by outflank-mailman (input) for mailman id 4703;
 Fri, 09 Oct 2020 07:48:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XHyC=DQ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQn8U-0005b3-Az
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 07:48:18 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.56]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cfe4ce1a-9937-4103-b2a9-4623c9adbdce;
 Fri, 09 Oct 2020 07:48:16 +0000 (UTC)
Received: from MR2P264CA0154.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501:1::17) by
 AM6PR08MB4534.eurprd08.prod.outlook.com (2603:10a6:20b:ba::20) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34; Fri, 9 Oct 2020 07:48:14 +0000
Received: from VE1EUR03FT058.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:501:1:cafe::16) by MR2P264CA0154.outlook.office365.com
 (2603:10a6:501:1::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Fri, 9 Oct 2020 07:48:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT058.mail.protection.outlook.com (10.152.19.86) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Fri, 9 Oct 2020 07:48:13 +0000
Received: ("Tessian outbound 7161e0c2a082:v64");
 Fri, 09 Oct 2020 07:48:12 +0000
Received: from 5e6667de145c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D870A6AD-D3B1-4859-AF33-6F97E4A246C2.1; 
 Fri, 09 Oct 2020 07:48:07 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5e6667de145c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 09 Oct 2020 07:48:07 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5324.eurprd08.prod.outlook.com (2603:10a6:10:11e::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.23; Fri, 9 Oct
 2020 07:48:06 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.026; Fri, 9 Oct 2020
 07:48:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=XHyC=DQ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kQn8U-0005b3-Az
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 07:48:18 +0000
X-Inumbo-ID: cfe4ce1a-9937-4103-b2a9-4623c9adbdce
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown [40.107.2.56])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cfe4ce1a-9937-4103-b2a9-4623c9adbdce;
	Fri, 09 Oct 2020 07:48:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VSph6GES7caMfjI+9zAybban9hXyWMX1td9tdi8K7RM=;
 b=XYLFcjiECZUXnvxrXjrZqetbA5KvN5ehSp0QQ/Guyz8Z7CgPyU2rae0goU5T4rtBuUolL5LT2NA905OC+PaEyrWY7rTgHv8r1HSXrGQnvg/m4siiQx/T8Np3HsIKK/leo1bzuMvXxaWQDuigWuBe0jdGFgVAAIcbFXcxAfhZ3Rs=
Received: from MR2P264CA0154.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501:1::17) by
 AM6PR08MB4534.eurprd08.prod.outlook.com (2603:10a6:20b:ba::20) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3433.34; Fri, 9 Oct 2020 07:48:14 +0000
Received: from VE1EUR03FT058.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:501:1:cafe::16) by MR2P264CA0154.outlook.office365.com
 (2603:10a6:501:1::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Fri, 9 Oct 2020 07:48:14 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT058.mail.protection.outlook.com (10.152.19.86) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Fri, 9 Oct 2020 07:48:13 +0000
Received: ("Tessian outbound 7161e0c2a082:v64"); Fri, 09 Oct 2020 07:48:12 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2883794f510aa1c4
X-CR-MTA-TID: 64aa7808
Received: from 5e6667de145c.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id D870A6AD-D3B1-4859-AF33-6F97E4A246C2.1;
	Fri, 09 Oct 2020 07:48:07 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5e6667de145c.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Fri, 09 Oct 2020 07:48:07 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nYZU/DsjQNWu/ZGeGmmrHXc7EGat12RP3zIpEU3yp9yjF/8D5/6EEKvO13EhpL5NDdQVzVtK+1d70nbG8zRU+NabOUGCGVjRysGlvINdTo9eWv0EIuSMhr+pglS0fNB1W4bN1ntK7toGrN5akv3j3OfQI/x1wicYbMkR/9R9U7p5vCIl4d5ok882050TgcxtS5882OLEC7/Ew+acc7xXucZVJlNYvJ9/A6f3x2Io3QkDadl/tE5XANHbsGMR5eJZsPkJleIIIhlNc6gB1hyJxw4IM2ujFlAk283GT1GkQY03tquoIwQb3WXNJbpgkKAkCdiJMDto3boB+FJKbei2RA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VSph6GES7caMfjI+9zAybban9hXyWMX1td9tdi8K7RM=;
 b=Dx7wbKDIlFYT6I/JovbHAZMSU7hnpT0P+pgbpEU8Q/TcaDvjbFTtvj/1Z+L9OS3qLC3U8VVIX9mVqAJIQdC7FNvRoVh14pM8F4RbjKKc31w7VFZN3GUKmk3ufjy886KKOX8xivE7mCY6cBslbVo/npBJNT38Fb5b1VopUrq6Rgd9QiStINzHxH99XuEgJPfaXMqawAnteJs9/TX7TvDRnWOi43vwSNo6ooedPVzHY/Z+4IAbq72vgKeWxI9DNytSByYef7lojxj0X0NHI++gUOBJl7NVUweCdF8qJZfGQBzq788VVvyfWZLXnmEtGMZsyO20oJEI+WvEgDvBD8gn5A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VSph6GES7caMfjI+9zAybban9hXyWMX1td9tdi8K7RM=;
 b=XYLFcjiECZUXnvxrXjrZqetbA5KvN5ehSp0QQ/Guyz8Z7CgPyU2rae0goU5T4rtBuUolL5LT2NA905OC+PaEyrWY7rTgHv8r1HSXrGQnvg/m4siiQx/T8Np3HsIKK/leo1bzuMvXxaWQDuigWuBe0jdGFgVAAIcbFXcxAfhZ3Rs=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5324.eurprd08.prod.outlook.com (2603:10a6:10:11e::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.23; Fri, 9 Oct
 2020 07:48:06 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.026; Fri, 9 Oct 2020
 07:48:06 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, "julien@xen.org"
	<julien@xen.org>, Stefano Stabellini <stefano.stabellini@xilinx.com>,
	"roman@zededa.com" <roman@zededa.com>
Subject: Re: [PATCH v3] xen/rpi4: implement watchdog-based reset
Thread-Topic: [PATCH v3] xen/rpi4: implement watchdog-based reset
Thread-Index: AQHWnPqc7v70BAMN4EaAQA/1huLhsamNV4yAgACv0gCAAN+3gA==
Date: Fri, 9 Oct 2020 07:48:05 +0000
Message-ID: <B196761E-78D7-4891-A28E-E04E0B85A202@arm.com>
References: <20201007223813.1638-1-sstabellini@kernel.org>
 <1A694341-33AC-41E1-B216-2D3E1A6C45B4@arm.com>
 <alpine.DEB.2.21.2010081103110.23978@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010081103110.23978@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: b94155c2-5ca2-4c4e-70d8-08d86c27ad15
x-ms-traffictypediagnostic: DB8PR08MB5324:|AM6PR08MB4534:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB453440BD54507753FB6B9CC49D080@AM6PR08MB4534.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6108;OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 S8W4SQ8Q7e7AEmOIqgteDYt7RDCgrk39tmmTlnVzpN3iz2gPxihEddQgzCfiKYRxpwtfbejpS+Q2VPOqaqZsTqa4jMMZ8qX1OPKLmEKAGZAB1TQw8tEvYp/WDoAUlGOqC0Is5vgDX+FAVJJ04K9UTwpDRdTTHG327PVFwMqiqBaK6JjBKGjegGqdrMDt0k1CZMF1bJwGBcLIiIm+QHr9jkPGCw2v/BGBy2Os/QIAycn8+WL98zTK0fOVblt8he2ugWFbAnXtVTwMOQgxPAJ2raP5aw0E/X21lwycKndR8SBwmNE0d3VQfuyNBgQf6Vq+0lenjXEcgOF6QiszktWYug==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(39860400002)(366004)(376002)(316002)(6512007)(83380400001)(186003)(6506007)(26005)(4326008)(36756003)(53546011)(6916009)(6486002)(54906003)(478600001)(71200400001)(86362001)(2616005)(64756008)(76116006)(91956017)(5660300002)(66476007)(8936002)(8676002)(66946007)(2906002)(33656002)(66446008)(66556008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 TeJjUlC5H3LzA7dgoyCksbvXJdIsJagBf0g5qxiA19Op/VfIdkAX5786Q1k2zcCs987irR3iikVY+a2Cj3Ekt2y0UMI9qO3GuPFKWlXL402appsYX36CMELLhbyiJSYXuJE90XJehoyAkWEZIb5pVXBQV/3u5M9KBliNA0tg1KIOU/2jH3RAHe85Fq6EBwquOxEt31HzoJVVWqIZCpvYB63NHHXGmHrSr+vGOPhQdqpAyT3pUU3es3+QVYqUakXpcFmImYbllxoHw0ngReKSqjSUsWSPQ/3GiBVGnQGeLi4J39ifIOcADU2ek7wHj59Ni89va3mIiR3v8qe4PLWawNvuq8H+3YQTVtV7YCd2E3+c4iRDpSal/u2THjC316T66ao1ak8NTrC/gOIG0OajdOHIW46/moE6r4AprpIMSbIRQ/c+qrZ8WEvBT3PFubp+fGdd4+veL8xnhIq6hFNXmZZqO7uCf9HpV5HJ2MsbYjdbEHoRctPNH2diLjdN8si7gJyRoHtOotUsq5+EUDBqGD8mmUxkDS5DVLmhZfd+J7u/NdkH0ZnsnqoqYS3ubN4tmlyt5FKhczoXvWRCxS4CxibDOnnhnJ2+h3s0ww+d1x/gry2YpxUC5pt20yEzqikqb5hVF6QCnVLlouAtQBhysw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <B7DB6FD66C8704468D0E9AC83F340C83@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5324
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c5f7effc-4538-408c-0b24-08d86c27a88e
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pHVDUsZGJK8DrS8KYDfC5B6BOuKBDgceiLomSGetTFWrQ0tfovk5kBWO7dfbJak1y/8XvGukNzCShojWgDId0vHk2uKQ4V2fHUYcA/gtYWO2vLns7iHZCEw78nLSU756SDlANIUea/+EhSDgQodhQjWDUithPWaoZLxm2wNBtnCcojNDE66bbRnxlr5jdiSKjl13620gzw9wRFYPQOsK8KU2GgEarlBrUprLmR5ZcrwSYZRwjNbNglawPW+SdCPmKaHfAM3GCnxXgvelHiSUgtTEMOIr8wqX1A6U/S0wVSMcIHSCf9/V5wvIRP7pLW4xK/FahyjzX1K+lsUYpOXCiZz9XtkkgG/rK9oZwzcqCWfC7lU48xiKqWbRNY2EddQwIRFPRT2ET4HBbRYswQcZDA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(376002)(396003)(346002)(136003)(46966005)(186003)(4326008)(6506007)(6862004)(81166007)(54906003)(356005)(6486002)(53546011)(6512007)(82740400003)(36756003)(26005)(47076004)(107886003)(33656002)(83380400001)(336012)(82310400003)(2616005)(478600001)(5660300002)(8676002)(316002)(70206006)(86362001)(70586007)(8936002)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Oct 2020 07:48:13.5203
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b94155c2-5ca2-4c4e-70d8-08d86c27ad15
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT058.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4534



> On 8 Oct 2020, at 19:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 8 Oct 2020, Bertrand Marquis wrote:
>>> On 7 Oct 2020, at 23:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> The preferred method to reboot RPi4 is PSCI. If it is not available,
>>> touching the watchdog is required to be able to reboot the board.
>>>
>>> The implementation is based on
>>> drivers/watchdog/bcm2835_wdt.c:__bcm2835_restart in Linux v5.9-rc7.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> Acked-by: Julien Grall <jgrall@amazon.com>
>>
>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>
>> Maybe a printk if reset was not successful?
>
> That's not quite platform-specific, but we could add a printk to
> xen/arch/arm/shutdown.c:machine_restart if we are still alive after
> 100ms.

Even nicer :-)
Definitely useful to see something if reset/restart did not succeed for
whatever reason.

>
> I'll commit this patch as is and maybe send another one for
> machine_restart.

Please tell me if you want me to handle that one (in the end I did request
it, so it's not really fair to ask you to do it :-)).

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 09:25:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 09:25:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4735.12463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQodr-0006hV-3w; Fri, 09 Oct 2020 09:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4735.12463; Fri, 09 Oct 2020 09:24:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQodr-0006hO-0p; Fri, 09 Oct 2020 09:24:47 +0000
Received: by outflank-mailman (input) for mailman id 4735;
 Fri, 09 Oct 2020 09:24:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PrsB=DQ=linux.alibaba.com=richard.weiyang@srs-us1.protection.inumbo.net>)
 id 1kQodo-0006hJ-T6
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 09:24:44 +0000
Received: from out30-42.freemail.mail.aliyun.com (unknown [115.124.30.42])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3412a8f-aef8-44f8-bda6-8965a5c60530;
 Fri, 09 Oct 2020 09:24:41 +0000 (UTC)
Received: from localhost(mailfrom:richard.weiyang@linux.alibaba.com
 fp:SMTPD_---0UBQ5Qni_1602235473) by smtp.aliyun-inc.com(127.0.0.1);
 Fri, 09 Oct 2020 17:24:34 +0800
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=PrsB=DQ=linux.alibaba.com=richard.weiyang@srs-us1.protection.inumbo.net>)
	id 1kQodo-0006hJ-T6
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 09:24:44 +0000
X-Inumbo-ID: f3412a8f-aef8-44f8-bda6-8965a5c60530
Received: from out30-42.freemail.mail.aliyun.com (unknown [115.124.30.42])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f3412a8f-aef8-44f8-bda6-8965a5c60530;
	Fri, 09 Oct 2020 09:24:41 +0000 (UTC)
X-Alimail-AntiSpam:AC=PASS;BC=-1|-1;BR=01201311R941e4;CH=green;DM=||false|;DS=||;FP=0|-1|-1|-1|0|-1|-1|-1;HT=e01e04357;MF=richard.weiyang@linux.alibaba.com;NM=1;PH=DS;RN=20;SR=0;TI=SMTPD_---0UBQ5Qni_1602235473;
Received: from localhost(mailfrom:richard.weiyang@linux.alibaba.com fp:SMTPD_---0UBQ5Qni_1602235473)
          by smtp.aliyun-inc.com(127.0.0.1);
          Fri, 09 Oct 2020 17:24:34 +0800
Date: Fri, 9 Oct 2020 17:24:33 +0800
From: Wei Yang <richard.weiyang@linux.alibaba.com>
To: Nathan Chancellor <natechancellor@gmail.com>
Cc: david@redhat.com, akpm@linux-foundation.org, ardb@kernel.org,
	bhe@redhat.com, dan.j.williams@intel.com, jgg@ziepe.ca,
	keescook@chromium.org, linux-acpi@vger.kernel.org,
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-nvdimm@lists.01.org,
	linux-s390@vger.kernel.org, mhocko@suse.com,
	pankaj.gupta.linux@gmail.com, richardw.yang@linux.intel.com,
	virtualization@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org, clang-built-linux@googlegroups.com
Subject: Re: [PATCH] kernel/resource: Fix use of ternary condition in
 release_mem_region_adjustable
Message-ID: <20201009092433.GA56924@L-31X9LVDL-1304.local>
Reply-To: Wei Yang <richard.weiyang@linux.alibaba.com>
References: <20200911103459.10306-2-david@redhat.com>
 <20200922060748.2452056-1-natechancellor@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200922060748.2452056-1-natechancellor@gmail.com>

On Mon, Sep 21, 2020 at 11:07:48PM -0700, Nathan Chancellor wrote:
>Clang warns:
>
>kernel/resource.c:1281:53: warning: operator '?:' has lower precedence
>than '|'; '|' will be evaluated first
>[-Wbitwise-conditional-parentheses]
>        new_res = alloc_resource(GFP_KERNEL | alloc_nofail ? __GFP_NOFAIL : 0);
>                                 ~~~~~~~~~~~~~~~~~~~~~~~~~ ^
>kernel/resource.c:1281:53: note: place parentheses around the '|'
>expression to silence this warning
>        new_res = alloc_resource(GFP_KERNEL | alloc_nofail ? __GFP_NOFAIL : 0);
>                                 ~~~~~~~~~~~~~~~~~~~~~~~~~ ^
>kernel/resource.c:1281:53: note: place parentheses around the '?:'
>expression to evaluate it first
>        new_res = alloc_resource(GFP_KERNEL | alloc_nofail ? __GFP_NOFAIL : 0);
>                                                           ^
>                                              (                              )
>1 warning generated.
>
>Add the parentheses as it was clearly intended for the ternary condition
>to be evaluated first.
>
>Fixes: 5fd23bd0d739 ("kernel/resource: make release_mem_region_adjustable() never fail")
>Link: https://github.com/ClangBuiltLinux/linux/issues/1159
>Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>

Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

>---
>
>Presumably, this will be squashed, but I included a Fixes tag
>nonetheless. Apologies if this has already been noticed and fixed;
>I did not find anything on LKML.
>
> kernel/resource.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
>diff --git a/kernel/resource.c b/kernel/resource.c
>index ca2a666e4317..3ae2f56cc79d 100644
>--- a/kernel/resource.c
>+++ b/kernel/resource.c
>@@ -1278,7 +1278,7 @@ void release_mem_region_adjustable(resource_size_t start, resource_size_t size)
> 	 * similarly).
> 	 */
> retry:
>-	new_res = alloc_resource(GFP_KERNEL | alloc_nofail ? __GFP_NOFAIL : 0);
>+	new_res = alloc_resource(GFP_KERNEL | (alloc_nofail ? __GFP_NOFAIL : 0));
> 
> 	p = &parent->child;
> 	write_lock(&resource_lock);
>
>base-commit: 40ee82f47bf297e31d0c47547cd8f24ede52415a
>-- 
>2.28.0

-- 
Wei Yang
Help you, Help me


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 09:39:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 09:39:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4738.12477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQosE-0007rt-Gb; Fri, 09 Oct 2020 09:39:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4738.12477; Fri, 09 Oct 2020 09:39:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQosE-0007rm-Bi; Fri, 09 Oct 2020 09:39:38 +0000
Received: by outflank-mailman (input) for mailman id 4738;
 Fri, 09 Oct 2020 09:39:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2W9l=DQ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kQosD-0007rh-MN
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 09:39:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea23fa90-075a-4d81-9f87-ca46aad683d1;
 Fri, 09 Oct 2020 09:39:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQos6-00081X-D2; Fri, 09 Oct 2020 09:39:30 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQos6-0008Nq-4Q; Fri, 09 Oct 2020 09:39:30 +0000
X-Inumbo-ID: ea23fa90-075a-4d81-9f87-ca46aad683d1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=mFRuY1FDncZFrn5lLxJnQ5RUv3Y2kBgHnffYVRUPw0k=; b=25U8C1fjAlhSSq/A+lj+EvWPhU
	pwvEgCMOZ1QF8CypgsFEOqZOal/2hJaQrqIP7yersZatJpTwB8/DxaECRx8fYwOU5NijpO2Ei9G8Q
	dGiN3VkluRUZl1+ZOOdfGU+QlfAuYJzJbW5+bsakoM+HUgQ+hSUHxwYk81TXeAvPG80U=;
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
To: Elliott Mitchell <ehem+xen@m5p.com>,
 Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: xen-devel@lists.xenproject.org, Alex Bennée <alex.bennee@linaro.org>,
 bertrand.marquis@arm.com, andre.przywara@arm.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <20201008183904.GA56716@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f0976c17-ad36-847b-7868-f6bb13948368@xen.org>
Date: Fri, 9 Oct 2020 10:39:26 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201008183904.GA56716@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hello,

On 08/10/2020 19:39, Elliott Mitchell wrote:
> On Mon, Sep 28, 2020 at 03:47:52PM +0900, Masami Hiramatsu wrote:
>> This made progress with my Xen boot on DeveloperBox (
>> https://www.96boards.org/product/developerbox/ ) with ACPI.
> 
> Adding your patch on top of Julien Grall's patch appears to push the Xen
> boot of my target device (Raspberry Pi 4B with Tianocore) further.  I have
> yet to see any output attributable to the Domain 0 kernel, though.
> 
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Loading d0 kernel from boot module @ 0000000032234000
> (XEN) Loading ramdisk from boot module @ 0000000031747000
> (XEN) Allocating 1:1 mappings totalling 2048MB for dom0:
> (XEN) BANK[0] 0x00000020000000-0x00000030000000 (256MB)
> (XEN) BANK[1] 0x00000040000000-0x000000b0000000 (1792MB)
> (XEN) Grant table range: 0x000000315f3000-0x00000031633000
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Loading zImage from 0000000032234000 to 0000000020080000-0000000021359200
> (XEN) Loading d0 initrd from 0000000031747000 to 0x0000000028200000-0x0000000028cebfe4
> (XEN) Loading d0 DTB to 0x0000000028000000-0x0000000028000273
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Scrubbing Free RAM in background
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> (XEN) Freed 396kB init memory.
> 
> The line, "Loading d0 DTB to 0x0000000028000000-0x0000000028000273" seems
> rather suspicious as I would expect Domain 0 to see ACPI tables, not a
> device tree.

This is normal: we create a small device-tree to pass some 
information to dom0 (such as the ACPI tables, command line, and initrd).

> 
> Your (Masami Hiramatsu) patch seems plausible, but things haven't
> progressed enough for me to endorse it.  Looks like something closer to
> the core of ACPI still needs further work, Julien Grall?

I didn't go very far during my testing because QEMU is providing ACPI 
5.1 (Xen only supports 6.0+ so far).

Regarding your log above, Xen has finished booting and dom0 should now 
start booting. The lack of console output may be due to a crash in Linux 
during early boot.

Do you have the early console enabled in Linux? This can be done by adding 
earlycon=xenboot to the Linux command line.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 11:53:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 11:53:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4748.12501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQqxe-0003bj-OZ; Fri, 09 Oct 2020 11:53:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4748.12501; Fri, 09 Oct 2020 11:53:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQqxe-0003bc-L4; Fri, 09 Oct 2020 11:53:22 +0000
Received: by outflank-mailman (input) for mailman id 4748;
 Fri, 09 Oct 2020 11:53:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4y9=DQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kQqxd-0003bX-TI
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 11:53:21 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 90eee37d-9c55-40c3-98cc-237243b2ef52;
 Fri, 09 Oct 2020 11:53:19 +0000 (UTC)
X-Inumbo-ID: 90eee37d-9c55-40c3-98cc-237243b2ef52
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602244399;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=EUFUGG6ye4lWDbsiaxpUUwbssNRy7ER5uCtlxpcnJ2U=;
  b=WJIogR5OUFWIP0AW7i3PfxIFwt7x9xPOx2NKMKAp7ioXHyqjk6W55ZBg
   pFbZ5eQ+iqLpRbXa9rmUKmuEAnZWm8ulb0czQ2v3uHEdHsyseMjrGQFju
   tWtyOGAb2MCNpzTAfdMPfbnTdZ1Lca114nlUsgnLnZ6A+amz8xUImB8Cz
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: X8EsZJ07r3jUjvM5t0W4Xg/s0lWwHFcSHZDUvpOsRT6XsMvDQyP5hKx0JL8mL/YG1ufvaKKAt3
 +/FO7V1aTSGlih4unrD/t/8L2AeTgu22DxB7uu9tIV9/LaZzzbpIXEurzbZ+G/O9QVlXSfNxaA
 NuigFk/CPlWQjLeervIWP1UU8fOdZfQf0CIz0UtLWBKtScIURZWPD652LkZ53fZEi9+qmVlV/U
 i7vlAk9h4zlVhOJfC+Vjbc8cL/eEHV6bghDmNV21436V5uBiD+bfyhN7KyEZwVLTZX2CVB9bA5
 0PA=
X-SBRS: 2.5
X-MesageID: 28918937
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,355,1596513600"; 
   d="scan'208";a="28918937"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Andy Lutomirski
	<luto@kernel.org>, Manuel Bouyer <bouyer@antioche.eu.org>
Subject: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
Date: Fri, 9 Oct 2020 12:53:01 +0100
Message-ID: <20201009115301.19516-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200923101848.29049-4-andrew.cooper3@citrix.com>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Despite appearing to be a deliberate design choice of early PV64, the
resulting behaviour for unregistered SYSCALL callbacks creates an untenable
testability problem for Xen.  Furthermore, the behaviour is undocumented,
bizarre, and inconsistent with related behaviour in Xen, and very liable to
introduce a security vulnerability into a PV guest if the author hasn't
studied Xen's assembly code in detail.

There are two different bugs here.

1) The current logic confuses the registered entrypoints, and may deliver a
   SYSCALL from 32bit userspace to the 64bit entry, when only a 64bit
   entrypoint is registered.

   This has been the case ever since 2007 (c/s cd75d47348b) but up until
   2018 (c/s dba899de14) the wrong selectors would be handed to the guest for
   a 32bit SYSCALL entry, making it appear as if it were a 64bit entry all along.

   Xen would malfunction under these circumstances, if it were a PV guest.
   Linux would as well, but PVOps has always registered both entrypoints and
   discarded the Xen-provided selectors.  NetBSD really does malfunction as a
   consequence (benignly now, but a VM DoS before the 2018 Xen selector fix).

2) In the case that neither SYSCALL callback is registered, the guest will
   be crashed when userspace executes a SYSCALL instruction, which is a
   userspace => kernel DoS.

   This has been the case ever since the introduction of 64bit PV support, but
   behaves unlike all other SYSCALL/SYSENTER callbacks in Xen, which yield
   #GP/#UD in userspace before the callback is registered, and are therefore
   safe by default.

This change does constitute a change in the PV ABI, for corner cases of a PV
guest kernel registering neither callback, or not registering the 32bit
callback when running on AMD/Hygon hardware.

It brings the behaviour in line with PV32 SYSCALL/SYSENTER, and PV64
SYSENTER (safe by default, until explicitly enabled), as well as native
hardware (always delivered to the single applicable callback).

Most importantly however, and the primary reason for the change, is that it
lets us sensibly test the fast system call entrypoints under all states a PV
guest can construct, to prove correct behaviour.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Andy Lutomirski <luto@kernel.org>
CC: Manuel Bouyer <bouyer@antioche.eu.org>

v2:
 * Drop unnecessary instruction suffixes
 * Don't truncate #UD entrypoint to 32 bits

Manuel: This will result in a corner case change for NetBSD.

At the moment on native, 32bit userspace on 64bit NetBSD will get #UD (Intel,
etc), or an explicit -ENOSYS (AMD, etc) when trying to execute a 32bit SYSCALL
instruction.

After this change, a 64bit PV VM will consistently see #UD (like on Intel, etc
hardware) even when running on AMD/Hygon hardware (as Xsyscall32 isn't
registered with Xen), rather than following Xsyscall into the proper system
call path.
---
 xen/arch/x86/x86_64/entry.S | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 000eb9722b..aaf8402f93 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -26,18 +26,30 @@
 /* %rbx: struct vcpu */
 ENTRY(switch_to_kernel)
         leaq  VCPU_trap_bounce(%rbx),%rdx
-        /* TB_eip = (32-bit syscall && syscall32_addr) ?
-         *          syscall32_addr : syscall_addr */
-        xor   %eax,%eax
+
+        /* TB_eip = 32-bit syscall ? syscall32_addr : syscall_addr */
+        mov   VCPU_syscall32_addr(%rbx), %ecx
+        mov   VCPU_syscall_addr(%rbx), %rax
         cmpw  $FLAT_USER_CS32,UREGS_cs(%rsp)
-        cmoveq VCPU_syscall32_addr(%rbx),%rax
-        testq %rax,%rax
-        cmovzq VCPU_syscall_addr(%rbx),%rax
-        movq  %rax,TRAPBOUNCE_eip(%rdx)
+        cmove %rcx, %rax
+
         /* TB_flags = VGCF_syscall_disables_events ? TBF_INTERRUPT : 0 */
         btl   $_VGCF_syscall_disables_events,VCPU_guest_context_flags(%rbx)
         setc  %cl
         leal  (,%rcx,TBF_INTERRUPT),%ecx
+
+        test  %rax, %rax
+UNLIKELY_START(z, syscall_no_callback) /* TB_eip == 0 => #UD */
+        mov   VCPU_trap_ctxt(%rbx), %rdi
+        movl  $X86_EXC_UD, UREGS_entry_vector(%rsp)
+        subl  $2, UREGS_rip(%rsp)
+        mov   X86_EXC_UD * TRAPINFO_sizeof + TRAPINFO_eip(%rdi), %rax
+        testb $4, X86_EXC_UD * TRAPINFO_sizeof + TRAPINFO_flags(%rdi)
+        setnz %cl
+        lea   TBF_EXCEPTION(, %rcx, TBF_INTERRUPT), %ecx
+UNLIKELY_END(syscall_no_callback)
+
+        movq  %rax,TRAPBOUNCE_eip(%rdx)
         movb  %cl,TRAPBOUNCE_flags(%rdx)
         call  create_bounce_frame
         andl  $~X86_EFLAGS_DF,UREGS_eflags(%rsp)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 12:41:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 12:41:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4756.12512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQrhs-0008DN-JE; Fri, 09 Oct 2020 12:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4756.12512; Fri, 09 Oct 2020 12:41:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQrhs-0008DG-G6; Fri, 09 Oct 2020 12:41:08 +0000
Received: by outflank-mailman (input) for mailman id 4756;
 Fri, 09 Oct 2020 12:41:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zdOq=DQ=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1kQrhr-0008DB-FC
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 12:41:07 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edd1f79d-2d0e-4e90-8ea1-1308a8beb57c;
 Fri, 09 Oct 2020 12:41:05 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 099CewU6000464;
 Fri, 9 Oct 2020 14:40:58 +0200 (CEST)
Received: from armandeche.soc.lip6.fr (armandeche [132.227.63.133])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 099Cevf3016301;
 Fri, 9 Oct 2020 14:40:57 +0200 (MEST)
Received: by armandeche.soc.lip6.fr (Postfix, from userid 20331)
 id 691C17216; Fri,  9 Oct 2020 14:40:57 +0200 (MEST)
X-Inumbo-ID: edd1f79d-2d0e-4e90-8ea1-1308a8beb57c
Date: Fri, 9 Oct 2020 14:40:57 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
        Jan Beulich <JBeulich@suse.com>,
        Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
        Wei Liu <wl@xen.org>, Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
Message-ID: <20201009124057.GC20248@mail.soc.lip6.fr>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
 <20201009115301.19516-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201009115301.19516-1-andrew.cooper3@citrix.com>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Fri, 09 Oct 2020 14:40:58 +0200 (CEST)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

On Fri, Oct 09, 2020 at 12:53:01PM +0100, Andrew Cooper wrote:
> Despite appearing to be a deliberate design choice of early PV64, the
> resulting behaviour for unregistered SYSCALL callbacks creates an untenable
> testability problem for Xen.  Furthermore, the behaviour is undocumented,
> bizarre, and inconsistent with related behaviour in Xen, and very liable to
> introduce a security vulnerability into a PV guest if the author hasn't
> studied Xen's assembly code in detail.
> 
> There are two different bugs here.
> 
> 1) The current logic confuses the registered entrypoints, and may deliver a
>    SYSCALL from 32bit userspace to the 64bit entry, when only a 64bit
>    entrypoint is registered.
> 
>    This has been the case ever since 2007 (c/s cd75d47348b) but up until
>    2018 (c/s dba899de14) the wrong selectors would be handed to the guest for
>    a 32bit SYSCALL entry, making it appear as if it were a 64bit entry all along.
> 
>    Xen would malfunction under these circumstances, if it were a PV guest.
>    Linux would as well, but PVOps has always registered both entrypoints and
>    discarded the Xen-provided selectors.  NetBSD really does malfunction as a
>    consequence (benignly now, but a VM DoS before the 2018 Xen selector fix).

What do you mean by «malfunction»? A 64-bit guest can run 32-bit code
just fine; this is part of our daily regression tests.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 12:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 12:50:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4760.12529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQrrG-0000n4-JA; Fri, 09 Oct 2020 12:50:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4760.12529; Fri, 09 Oct 2020 12:50:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQrrG-0000mx-FF; Fri, 09 Oct 2020 12:50:50 +0000
Received: by outflank-mailman (input) for mailman id 4760;
 Fri, 09 Oct 2020 12:50:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4y9=DQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kQrrF-0000mn-19
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 12:50:49 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78e1486d-3f53-48ab-9fc7-5cb52875436e;
 Fri, 09 Oct 2020 12:50:47 +0000 (UTC)
X-Inumbo-ID: 78e1486d-3f53-48ab-9fc7-5cb52875436e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602247847;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=LA4ValVjL8pCRMheQICR7aT2QKgr/PA2M2hwj82BINA=;
  b=IeZPWatKUAh7e7Aot15PtMzZLjZ3kjoz8ABdQ+ocoZEZTsijmrS5Bswf
   lgIH2QOzngPB5muw/xDBImV97lMwRXpqTJBFCwKyCK+fx+9lpO/eBErNS
   dHWmV5WL3ZYlpnUCfQ9UxwEOuk1o3fG9YTpj7tNbPhRb8CK8o3T4tx/Ec
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: smdUOx3w+32oKOmu51KwsCZaG4UKu0bEw/KfBW8svV1/vFYgHEy9I18K47LhVZtBBoLp813W7P
 KVTXBH18FaII4/fb4bZTDz458HlP6GI1xmLcBQfrsb1dKnqNnD1doo+b5+GYestrHk5TemCNR9
 IylMv2briVnewrrl9OP7tybN6Gp7L4r8pC1udySDfcYnACdXx+yJ3VsY9xDfVUPTjaTbalo/Je
 /9cnt28FKbQPVekyl6CcfuH6zhoM1gMrfPP8cKGw7aDFYObggwPJiJeXEhhfh+WO6dDYLg7idB
 8I4=
X-SBRS: 2.5
X-MesageID: 28739715
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,355,1596513600"; 
   d="scan'208";a="28739715"
Subject: Re: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
To: Manuel Bouyer <bouyer@antioche.eu.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>, Andy Lutomirski <luto@kernel.org>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
 <20201009115301.19516-1-andrew.cooper3@citrix.com>
 <20201009124057.GC20248@mail.soc.lip6.fr>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8806fa3a-d614-c5e0-5456-5a286a48f9a5@citrix.com>
Date: Fri, 9 Oct 2020 13:50:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201009124057.GC20248@mail.soc.lip6.fr>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 09/10/2020 13:40, Manuel Bouyer wrote:
> On Fri, Oct 09, 2020 at 12:53:01PM +0100, Andrew Cooper wrote:
>> Despite appearing to be a deliberate design choice of early PV64, the
>> resulting behaviour for unregistered SYSCALL callbacks creates an untenable
>> testability problem for Xen.  Furthermore, the behaviour is undocumented,
>> bizarre, and inconsistent with related behaviour in Xen, and very liable to
>> introduce a security vulnerability into a PV guest if the author hasn't
>> studied Xen's assembly code in detail.
>>
>> There are two different bugs here.
>>
>> 1) The current logic confuses the registered entrypoints, and may deliver a
>>    SYSCALL from 32bit userspace to the 64bit entry, when only a 64bit
>>    entrypoint is registered.
>>
>>    This has been the case ever since 2007 (c/s cd75d47348b) but up until
>>    2018 (c/s dba899de14) the wrong selectors would be handed to the guest for
>>    a 32bit SYSCALL entry, making it appear as if it were a 64bit entry all along.
>>
>>    Xen would malfunction under these circumstances, if it were a PV guest.
>>    Linux would as well, but PVOps has always registered both entrypoints and
>>    discarded the Xen-provided selectors.  NetBSD really does malfunction as a
>>    consequence (benignly now, but a VM DoS before the 2018 Xen selector fix).
> What do you mean by «malfunction»? A 64-bit guest can run 32-bit code
> just fine; this is part of our daily regression tests.

Right, but your 32bit code never executes the SYSCALL instruction,
because it is hardwired as -ENOSYS on native, and doesn't work on Intel
hardware at all.

Under Xen, this enters the regular syscall path (buggy but benign), and
before the selector fix two years ago, would (AFAICT) eventually try to
HYPERCALL_iret with the bogus selectors, and hit the failsafe callback,
which is a straight panic().

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 13:17:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 13:17:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4764.12542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQsGW-0002tj-Ng; Fri, 09 Oct 2020 13:16:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4764.12542; Fri, 09 Oct 2020 13:16:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQsGW-0002tc-Kd; Fri, 09 Oct 2020 13:16:56 +0000
Received: by outflank-mailman (input) for mailman id 4764;
 Fri, 09 Oct 2020 13:16:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQsGV-0002t4-2q
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 13:16:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8fab5bc-167f-4612-aa6e-c04ca8e33890;
 Fri, 09 Oct 2020 13:16:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQsGN-0004Dx-TR; Fri, 09 Oct 2020 13:16:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQsGN-0005Ju-KX; Fri, 09 Oct 2020 13:16:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQsGN-0000sI-K5; Fri, 09 Oct 2020 13:16:47 +0000
X-Inumbo-ID: b8fab5bc-167f-4612-aa6e-c04ca8e33890
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yy9QK4YU7yEXaPdJNESjcepN6lXqis64msrOyYZCU4E=; b=DWNG/oK140GmY0M3YKb/K8ZpUV
	VHsdCIoVrhgmAK7UGkj9jMEGA5UF5xeUO60QzhvmjM3i0Z1bRcrqKGhhPO1CFJ9JksCUuzVX/zg6x
	oZqHVYhToHS050dbsGH1it84d4fJvRXpCL5ly00FdxtXJc5t4q2mR/WTXRXd4sKam1zQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155588-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155588: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=79cb397b39db74899a77ef756d3c987328168acf
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 13:16:47 +0000

flight 155588 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155588/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              79cb397b39db74899a77ef756d3c987328168acf
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   91 days
Failing since        151818  2020-07-11 04:18:52 Z   90 days   85 attempts
Testing same since   155588  2020-10-09 04:21:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 20090 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 13:50:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 13:50:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4772.12562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQsmr-0006TJ-HA; Fri, 09 Oct 2020 13:50:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4772.12562; Fri, 09 Oct 2020 13:50:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQsmr-0006TC-E9; Fri, 09 Oct 2020 13:50:21 +0000
Received: by outflank-mailman (input) for mailman id 4772;
 Fri, 09 Oct 2020 13:50:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQsmq-0006Se-TG
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 13:50:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ffc7306-3d44-415f-909f-a5c0e1f553d2;
 Fri, 09 Oct 2020 13:50:13 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQsmj-0004ts-4z; Fri, 09 Oct 2020 13:50:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQsmi-0006Q5-RQ; Fri, 09 Oct 2020 13:50:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQsmi-0004rc-Qv; Fri, 09 Oct 2020 13:50:12 +0000
X-Inumbo-ID: 0ffc7306-3d44-415f-909f-a5c0e1f553d2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GDT9yfcz7ZyHG/CeZt5yD/NmXo0j5sFJIoNZLKS83TE=; b=pbcyiQvnfC6Vka4+UBHPAiwbSH
	BNv3h2WV5AkevMKFLwuLI2ZOkK+aQgSrB8Zi+ov1COeRseAUTbTz49LmxZjQMp/wUs2duidV1apNh
	e5ykyT5wgEMJpkAytIfqejU6ZE5oairrmK55dQOkvMt6bSExDPEwKTcZdinozFpNfOKw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155549-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155549: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-pygrub:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-armhf-armhf-examine:reboot:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
X-Osstest-Versions-That:
    xen=93508595d588afe9dca087f95200effb7cedc81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 13:50:12 +0000

flight 155549 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155549/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-amd64 16 guest-localmigrate/x10 fail in 155532 pass in 155549
 test-amd64-amd64-pygrub 17 guest-localmigrate/x10 fail in 155532 pass in 155549
 test-armhf-armhf-examine      8 reboot                     fail pass in 155532

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 155510
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 155510
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 155510
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 155510
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 155510
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 155510
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 155510
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 155510
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 155510
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
baseline version:
 xen                  93508595d588afe9dca087f95200effb7cedc81f

Last test of basis   155510  2020-10-07 04:00:55 Z    2 days
Testing same since   155532  2020-10-07 19:37:26 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93508595d5..7a519f8bda  7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a -> master


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 14:22:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 14:22:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4781.12584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQtI5-00010V-7m; Fri, 09 Oct 2020 14:22:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4781.12584; Fri, 09 Oct 2020 14:22:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQtI5-00010O-4L; Fri, 09 Oct 2020 14:22:37 +0000
Received: by outflank-mailman (input) for mailman id 4781;
 Fri, 09 Oct 2020 14:22:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBQf=DQ=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kQtI3-00010J-Nj
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 14:22:35 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9d0aeb7-6b76-4e93-8c59-c3284f96d9cb;
 Fri, 09 Oct 2020 14:22:34 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 099EM9Eo063688
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 9 Oct 2020 10:22:15 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 099EM8Rk063687;
 Fri, 9 Oct 2020 07:22:08 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: c9d0aeb7-6b76-4e93-8c59-c3284f96d9cb
Date: Fri, 9 Oct 2020 07:22:08 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: Elliott Mitchell <ehem+xen@m5p.com>,
        Masami Hiramatsu <masami.hiramatsu@linaro.org>,
        xen-devel@lists.xenproject.org, Alex Bennée <alex.bennee@linaro.org>,
        bertrand.marquis@arm.com, andre.przywara@arm.com,
        Julien Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
Message-ID: <20201009142208.GA63582@mattapan.m5p.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <20201008183904.GA56716@mattapan.m5p.com>
 <f0976c17-ad36-847b-7868-f6bb13948368@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <f0976c17-ad36-847b-7868-f6bb13948368@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 09, 2020 at 10:39:26AM +0100, Julien Grall wrote:
> On 08/10/2020 19:39, Elliott Mitchell wrote:
> > Your (Masami Hiramatsu) patch seems plausible, but things haven't
> > progressed enough for me to endorse it.  Looks like something closer to
> > the core of ACPI still needs further work, Julien Grall?
> 
> I didn't go very far during my testing because QEMU is providing ACPI 
> 5.1 (Xen only supports 6.0+ so far).
> 
> For your log above, Xen finished booting and now dom0 should start 
> booting. The lack of console output may be because of a crash in Linux 
> during early boot.
> 
> Do you have the early console enabled in Linux? This can be done by adding 
> earlycon=xenboot to the Linux command line.
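
For reference, when dom0 is loaded via the EFI path the kernel command line
lives in xen.cfg; an illustrative fragment (the file names and dom0 memory
size here are assumptions, not taken from this thread) would be:

```
# xen.cfg -- illustrative fragment
[global]
default=xen

[xen]
options=console=dtuart dom0_mem=1024M
kernel=vmlinuz console=hvc0 earlycon=xenboot
ramdisk=initrd.gz
```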

Finding all the command-line console settings can be a challenge.  I had
thought it was supposed to be "console=hvc0 earlycon=hvc0".

With that, though, I finally have some output which claims to come from
the Linux kernel (yay! finally hit this point!).  As we were both
guessing, it's a very early kernel panic:

[    0.000000] efi: Getting EFI parameters from FDT:
[    0.000000] efi: Can't find 'System Table' in device tree!
[    0.000000] cma: Failed to reserve 64 MiB
[    0.000000] Kernel panic - not syncing: Failed to allocate page table page

I don't know whether this is a problem with the mini-DT which was passed
in versus the ACPI tables.  I note a complete lack of ACPI table
information.  The kernel is from a 5.6-based tree.  I'm unsure which
portion to try updating next.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Oct 09 14:42:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 14:42:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4783.12596 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQtbN-0002uf-TE; Fri, 09 Oct 2020 14:42:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4783.12596; Fri, 09 Oct 2020 14:42:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQtbN-0002uY-Q8; Fri, 09 Oct 2020 14:42:33 +0000
Received: by outflank-mailman (input) for mailman id 4783;
 Fri, 09 Oct 2020 14:42:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gROa=DQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kQtbM-0002uT-05
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 14:42:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b55b9195-e2b9-4385-9a97-de4bb1c7c545;
 Fri, 09 Oct 2020 14:42:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E8F0CAE2C;
 Fri,  9 Oct 2020 14:42:29 +0000 (UTC)
X-Inumbo-ID: b55b9195-e2b9-4385-9a97-de4bb1c7c545
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602254550;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Mf9G95BFIif8P4JOo8vFEGHaFCuDzB5d2ITiJa6fvhQ=;
	b=hWIW3UxjjlHedkBjscX2/oMZBiqp8cMgbM5bV0NCuQYPfsNOtHucN61nhTHeTWM/sa/i8N
	bFbDMkKpRE+iGrAeCuYdUClZo3vcbJlPsqnB+JDYS54Jzgb0ekl4iudUBLU/kUI503bLol
	WkWWXUua7KDrPPiTrIomxXNLksFUXZo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andy Lutomirski <luto@kernel.org>
Subject: [PATCH] x86/alternative: don't call text_poke() in lazy TLB mode
Date: Fri,  9 Oct 2020 16:42:25 +0200
Message-Id: <20201009144225.12019-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When running in lazy TLB mode the currently active page tables might
be the ones of a previous process, e.g. when running a kernel thread.

This can be problematic when kernel code is being modified via
text_poke() in a kernel thread while, on another processor, exit_mmap()
is active for the process which was running on the first cpu before
the kernel thread.

As text_poke() is using a temporary address space and the former
address space (obtained via cpu_tlbstate.loaded_mm) is restored
afterwards, a race is possible in case the cpu on which exit_mmap()
is running wants to make sure there are no stale references to that
address space on any active cpu (this is e.g. required when running
as a Xen PV guest, where this problem has been observed and
analyzed).

In order to avoid that, leave lazy TLB mode before switching to the
temporary address space.

Fixes: cefa929c034eb5d ("x86/mm: Introduce temporary mm structs")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/kernel/alternative.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index cdaab30880b9..cd6be6f143e8 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -807,6 +807,15 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
 	temp_mm_state_t temp_state;
 
 	lockdep_assert_irqs_disabled();
+
+	/*
+	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
+	 * with a stale address space WITHOUT being in lazy mode after
+	 * restoring the previous mm.
+	 */
+	if (this_cpu_read(cpu_tlbstate.is_lazy))
+		leave_mm(smp_processor_id());
+
 	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
 	switch_mm_irqs_off(NULL, mm, current);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 14:43:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 14:43:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4784.12608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQtcA-00030V-6x; Fri, 09 Oct 2020 14:43:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4784.12608; Fri, 09 Oct 2020 14:43:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQtcA-00030O-3t; Fri, 09 Oct 2020 14:43:22 +0000
Received: by outflank-mailman (input) for mailman id 4784;
 Fri, 09 Oct 2020 14:43:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Pj9d=DQ=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kQtc9-00030J-59
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 14:43:21 +0000
Received: from mail2.protonmail.ch (unknown [185.70.40.22])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ce5dcf4-8923-4e31-8ffb-e5d7fb53f4a3;
 Fri, 09 Oct 2020 14:43:19 +0000 (UTC)
X-Inumbo-ID: 3ce5dcf4-8923-4e31-8ffb-e5d7fb53f4a3
Date: Fri, 09 Oct 2020 14:43:11 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=trmm.net;
	s=protonmail; t=1602254597;
	bh=i/z6qZs3MVcbPYz5v3h03B0WlCRfNj74n7UOutiml5U=;
	h=Date:To:From:Cc:Reply-To:Subject:In-Reply-To:References:From;
	b=f/3KLhGO5sqGnzkmappuJ6HriigUdm3UnQKLBkCSHYmym4dFH/ttIZLrsZZgO22t7
	 HSwu4n45wLe8Q614MZA25yNcy9KNLKGChx2Un5rTmTQTSb2TFRXrC9xq37MShSzoqP
	 Pv11Biuq4FxfZxOJIwsVoUkhVDktbKyD8G0nUiwo=
To: Trammell Hudson <hudson@trmm.net>
From: Trammell Hudson <hudson@trmm.net>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "roger.pau@citrix.com" <roger.pau@citrix.com>, "jbeulich@suse.com" <jbeulich@suse.com>, "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>, "wl@xen.org" <wl@xen.org>
Reply-To: Trammell Hudson <hudson@trmm.net>
Subject: Re: [PATCH v9 0/4] efi: Unified Xen hypervisor/kernel/initrd images
Message-ID: <BbDD1Aa2FXJRlpSpqyFVl4-6u6S-OnBkoMyvoPHadElIyfNDl2h9J34bk12XyvFtEOweGsCRTmqY8eSSbvR98RHJpFzDHpWWa67IaW6Sz7I=@trmm.net>
In-Reply-To: <20201002111822.42142-1-hudson@trmm.net>
References: <20201002111822.42142-1-hudson@trmm.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF shortcircuit=no
	autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

Any further thoughts on this patch series? Three out of four of
them have been reviewed or acked by at least one reviewer, with
only the last one currently unreviewed.

--
Trammell

On Friday, October 2, 2020 1:18 PM, Trammell Hudson <hudson@trmm.net> wrote:
> This patch series adds support for bundling the xen.efi hypervisor,
> the xen.cfg configuration file, the Linux kernel and initrd, as well
> as the XSM, and architectural specific files into a single "unified"
> EFI executable. This allows an administrator to update the components
> independently without rebuilding Xen, as well as to replace
> the components in an existing image.
>
> The resulting EFI executable can be invoked directly from the UEFI Boot
> Manager, removing the need to use a separate loader like grub as well
> as removing dependencies on local filesystem access. And since it is
> a single file, it can be signed and validated by UEFI Secure Boot without
> requiring the shim protocol.
>
> It is inspired by systemd-boot's unified kernel technique and borrows the
> function to locate PE sections from systemd's LGPL'ed code. During EFI
> boot, Xen looks at its own loaded image to locate the PE sections for
> the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
> (`.ramdisk`), and XSM config (`.xsm`), which are included after building
> xen.efi using objcopy to add named sections for each input file.
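
The objcopy step described above can be sketched as follows. This is an
illustration only: the input files are placeholders, an ELF binary stands in
for the real PE xen.efi, and the real build can also embed an .xsm section.

```shell
# Create placeholder inputs (the real files come from the distro build).
printf 'console=hvc0 earlycon=xenboot\n' > xen.cfg
printf 'kernel-placeholder\n' > vmlinuz
printf 'initrd-placeholder\n' > initrd.gz
cp /bin/true xen.efi 2>/dev/null || cp /usr/bin/true xen.efi  # stand-in binary

# Append each component as a named section, as the series describes.
objcopy \
    --add-section .config=xen.cfg \
    --add-section .kernel=vmlinuz \
    --add-section .ramdisk=initrd.gz \
    xen.efi xen-unified.efi

objdump -h xen-unified.efi   # the .config/.kernel/.ramdisk sections appear
```

At boot, Xen then locates these sections in its own loaded image instead of
reading the files from a local filesystem.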
>
> Trammell Hudson (4):
> efi/boot.c: add file.need_to_free
> efi/boot.c: add handle_file_info()
> efi: Enable booting unified hypervisor/kernel/initrd images
> efi: Do not use command line if unified config is included
>
> .gitignore | 1 +
> docs/misc/efi.pandoc | 49 ++++++++++++
> xen/arch/arm/efi/efi-boot.h | 36 ++++++---
> xen/arch/x86/efi/Makefile | 2 +-
> xen/arch/x86/efi/efi-boot.h | 13 ++-
> xen/common/efi/boot.c | 140 ++++++++++++++++++++++++---------
> xen/common/efi/efi.h | 3 +
> xen/common/efi/pe.c | 152 ++++++++++++++++++++++++++++++++++++
> 8 files changed, 347 insertions(+), 49 deletions(-)
> create mode 100644 xen/common/efi/pe.c
>
> ------------------------------------------------------------
>
> 2.25.1




From xen-devel-bounces@lists.xenproject.org Fri Oct 09 15:10:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 15:10:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4794.12627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQu2E-0005uF-I8; Fri, 09 Oct 2020 15:10:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4794.12627; Fri, 09 Oct 2020 15:10:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQu2E-0005u8-Es; Fri, 09 Oct 2020 15:10:18 +0000
Received: by outflank-mailman (input) for mailman id 4794;
 Fri, 09 Oct 2020 15:10:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4y9=DQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kQu2D-0005u3-5t
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 15:10:17 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c9d7ca4-6f8f-4c2d-bb72-efee0b57943b;
 Fri, 09 Oct 2020 15:10:15 +0000 (UTC)
X-Inumbo-ID: 5c9d7ca4-6f8f-4c2d-bb72-efee0b57943b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602256215;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=DYoNpDoMG9Fehrv+VZjGLyy/ZiJAyI6bFkIjoSqxHU8=;
  b=e2ZA8UkIbk3StvyGcbkwfNubHPFKqp7kOs5GHb9Y8nvs5q6h1PyDeLWH
   pMSE7D10QS8joA6pzdOT+AsnuWKkGvXWVpCfbLOZPE5QqTMBygXOIuAtG
   9RGveAF1Kfp2lKRwPltB+KbZ2k+tfLEKtGHTEZDuuhP6B9XY2N+KWZUTu
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: QfPRStXtVq8jrgtGSQL124vYwOqRWy0phUOvS/zrGho0mRiJQBuHrHoU0GHnr8MXKwZdIihKVX
 VT4GwbOXKUdKgcKFjtRHJ8lwKzttw2ejfv2MOuhkz4Yunyv+ai9pkiihHhIe9mAIQ+PFwkFMQj
 224i+2lFYUO5KdnnJfVx+GS0a9A6WMg5nh0Y5//nNTh+MvcBJDaJhH1SkYSIMpX9R6D9RHNpSz
 7L7ChDlQ2BIS3wqM9YkCaFL+y7DZG2jCKyrWNV8mmvrtlcCHh6rGwem+kjb/cbCRtsn2223q0c
 Nvk=
X-SBRS: 2.5
X-MesageID: 28656374
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,355,1596513600"; 
   d="scan'208";a="28656374"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering guest"
Date: Fri, 9 Oct 2020 16:09:48 +0100
Message-ID: <20201009150948.31063-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

At the time of XSA-170, the x86 instruction emulator really was broken, and
would allow arbitrary non-canonical values to be loaded into %rip.  This was
fixed after the embargo by c/s 81d3a0b26c1 "x86emul: limit-check branch
targets".

However, in a demonstration that off-by-one errors really are one of the
hardest programming issues we face, everyone involved with XSA-170, myself
included, mistook the statement in the SDM which says:

  If the processor supports N < 64 linear-address bits, bits 63:N must be identical

to mean "must be canonical".  A real canonical check requires bits 63:N-1
to be identical.

VMEntries really do tolerate a not-quite-canonical %rip, specifically to cater
to the boundary condition at 0x0000800000000000.

Now that the emulator has been fixed, revert the XSA-170 change to fix
architectural behaviour at the boundary case.  The XTF test case for XSA-170
exercises this corner case, and still passes.

Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vmx.c | 34 +---------------------------------
 1 file changed, 1 insertion(+), 33 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 86b8916a5d..28d09c1ca0 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3650,7 +3650,7 @@ static int vmx_handle_apic_write(void)
 void vmx_vmexit_handler(struct cpu_user_regs *regs)
 {
     unsigned long exit_qualification, exit_reason, idtv_info, intr_info = 0;
-    unsigned int vector = 0, mode;
+    unsigned int vector = 0;
     struct vcpu *v = current;
     struct domain *currd = v->domain;
 
@@ -4280,38 +4280,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
 out:
     if ( nestedhvm_vcpu_in_guestmode(v) )
         nvmx_idtv_handling();
-
-    /*
-     * VM entry will fail (causing the guest to get crashed) if rIP (and
-     * rFLAGS, but we don't have an issue there) doesn't meet certain
-     * criteria. As we must not allow less than fully privileged mode to have
-     * such an effect on the domain, we correct rIP in that case (accepting
-     * this not being architecturally correct behavior, as the injected #GP
-     * fault will then not see the correct [invalid] return address).
-     * And since we know the guest will crash, we crash it right away if it
-     * already is in most privileged mode.
-     */
-    mode = vmx_guest_x86_mode(v);
-    if ( mode == 8 ? !is_canonical_address(regs->rip)
-                   : regs->rip != regs->eip )
-    {
-        gprintk(XENLOG_WARNING, "Bad rIP %lx for mode %u\n", regs->rip, mode);
-
-        if ( vmx_get_cpl() )
-        {
-            __vmread(VM_ENTRY_INTR_INFO, &intr_info);
-            if ( !(intr_info & INTR_INFO_VALID_MASK) )
-                hvm_inject_hw_exception(TRAP_gp_fault, 0);
-            /* Need to fix rIP nevertheless. */
-            if ( mode == 8 )
-                regs->rip = (long)(regs->rip << (64 - VADDR_BITS)) >>
-                            (64 - VADDR_BITS);
-            else
-                regs->rip = regs->eip;
-        }
-        else
-            domain_crash(v->domain);
-    }
 }
 
 static void lbr_tsx_fixup(void)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 15:38:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 15:38:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4814.12647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQuSw-00086G-Vz; Fri, 09 Oct 2020 15:37:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4814.12647; Fri, 09 Oct 2020 15:37:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQuSw-000869-Sp; Fri, 09 Oct 2020 15:37:54 +0000
Received: by outflank-mailman (input) for mailman id 4814;
 Fri, 09 Oct 2020 15:37:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4y9=DQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kQuSv-000864-V1
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 15:37:53 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dcbe69b2-efd8-44e8-bc4a-716041cc72e8;
 Fri, 09 Oct 2020 15:37:52 +0000 (UTC)
X-Inumbo-ID: dcbe69b2-efd8-44e8-bc4a-716041cc72e8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602257872;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=MqELPW85u3V6N1YuIxAnqJS9yS1izLKsf8w1e9+5eaY=;
  b=Ewx12Vps+QKuET4TIL7gm974URhEsjUIiKhbyd7djwjKlc61oRuxu93d
   LYw6+Qs/VOCBWk3vxJqhdI7wcCLjCR21r5UAzWBe8aV8NHcon8ZwagSWo
   DfNUenVS5WkPvuZ3rSjLakTykhYiL2RZppR5VfCgrAwJZu0Zb/qXw5GE/
   M=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: hls6j+yGQbNmoNMn3vxSr1eOhlrXR539K1m/G0oGpENTckNXZc1c/6UtUR2YI0P+VzsMaiePjs
 CBdUUqsgOFAteygzV9Khr7vLaXYnzFjXppas0r59BNHfW8CiE/h6qBDCQXUA94kZlnCPrTjGlj
 abIxhOxtZ8rbmRqy5ne3wWgCPKLg6HZ1ZUXhXkHDkkxhdDyWiLBahJ3thjjzlMECIIBNg17sXi
 VAlmq++vM3AbHPiJYJDfFIqV/By1eiGzBdSR+YHg9/HJX5XbzBsBHTfT4sA0xIEP3NwBw8hD+1
 D+k=
X-SBRS: 2.5
X-MesageID: 28938944
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,355,1596513600"; 
   d="scan'208";a="28938944"
Subject: Re: [PATCH] x86/ucode: Trivial further cleanup
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20201007180120.27203-1-andrew.cooper3@citrix.com>
 <20201008074920.GI19254@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <42681884-d622-5eca-6384-c4e91bcb3444@citrix.com>
Date: Fri, 9 Oct 2020 16:37:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201008074920.GI19254@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 08/10/2020 08:49, Roger Pau Monné wrote:
> On Wed, Oct 07, 2020 at 07:01:20PM +0100, Andrew Cooper wrote:
>>  * Drop unused include in private.h.
>>  * Use explicit-width integers for Intel header fields.
>>  * Adjust comment to better describe the extended header.
>>  * Drop unnecessary __packed attribute for AMD header.
>>  * Switch mc_patch_data_id to being uint16_t, which is how it is more commonly
>>    referred to.
>>  * Fix types and style.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks,

>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> ---
>>  xen/arch/x86/cpu/microcode/amd.c     | 10 +++++-----
>>  xen/arch/x86/cpu/microcode/intel.c   | 34 +++++++++++++++++-----------------
>>  xen/arch/x86/cpu/microcode/private.h |  2 --
>>  3 files changed, 22 insertions(+), 24 deletions(-)
>>
>> diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
>> index cd532321e8..e913232067 100644
>> --- a/xen/arch/x86/cpu/microcode/amd.c
>> +++ b/xen/arch/x86/cpu/microcode/amd.c
>> @@ -24,7 +24,7 @@
>>  
>>  #define pr_debug(x...) ((void)0)
>>  
>> -struct __packed equiv_cpu_entry {
>> +struct equiv_cpu_entry {
>>      uint32_t installed_cpu;
>>      uint32_t fixed_errata_mask;
>>      uint32_t fixed_errata_compare;
>> @@ -35,7 +35,7 @@ struct __packed equiv_cpu_entry {
>>  struct microcode_patch {
>>      uint32_t data_code;
>>      uint32_t patch_id;
>> -    uint8_t  mc_patch_data_id[2];
>> +    uint16_t mc_patch_data_id;
>>      uint8_t  mc_patch_data_len;
> I think you could also drop the mc_patch_ prefixes from a couple of
> fields in this structure, since they serve no purpose AFAICT.

Actually, I'll drop this change and leave the field names alone. 
Stripping that prefix will make the field names logically wrong (e.g.
data_len isn't the length of the header, or of the entire patch), and
I've got other work planned to clean this area up.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 15:53:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 15:53:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4836.12682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQuhR-0001gq-Sf; Fri, 09 Oct 2020 15:52:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4836.12682; Fri, 09 Oct 2020 15:52:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQuhR-0001gj-PS; Fri, 09 Oct 2020 15:52:53 +0000
Received: by outflank-mailman (input) for mailman id 4836;
 Fri, 09 Oct 2020 15:52:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQuhR-0001gH-3C
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 15:52:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ffc264bc-cf12-4592-8f09-c3c9348ed774;
 Fri, 09 Oct 2020 15:52:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQuhK-0007YI-MU; Fri, 09 Oct 2020 15:52:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQuhK-00033o-9V; Fri, 09 Oct 2020 15:52:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQuhK-0008NB-93; Fri, 09 Oct 2020 15:52:46 +0000
X-Inumbo-ID: ffc264bc-cf12-4592-8f09-c3c9348ed774
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kx1AydKJC3EVBjGvnciyzhDmruvUJWsLwDHg7J3tpuU=; b=i2mMXenvZqoKZj6dPNF4DNet7N
	FcgxJIH+p+mximWKpdZV2zdETznqxJVsmd87KNQw7H9wcG26GhK8HhrPwy9oAnCaIXcrCk5FpL8Ui
	5AMxt6x+D6JDLuBWEkJc5J6V8c+A+Pd9q6peomzJOG3Y/AVcc8arZxHSB3HI17f+QKRw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155582-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155582: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=3fdd47c3b40ac48e6e6e5904cf24d12e6e073a96
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 15:52:46 +0000

flight 155582 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155582/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  6 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  6 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair          8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair          9 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  6 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  6 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  6 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  6 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 6 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  8 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair  9 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  7 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  7 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      7 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   7 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      7 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                3fdd47c3b40ac48e6e6e5904cf24d12e6e073a96
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   69 days
Failing since        152366  2020-08-01 20:49:34 Z   68 days  115 attempts
Testing same since   155582  2020-10-09 00:10:28 Z    0 days    1 attempts

------------------------------------------------------------
2507 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 338969 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 16:34:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 16:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4843.12697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQvL9-00062R-Bi; Fri, 09 Oct 2020 16:33:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4843.12697; Fri, 09 Oct 2020 16:33:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQvL9-00062K-8l; Fri, 09 Oct 2020 16:33:55 +0000
Received: by outflank-mailman (input) for mailman id 4843;
 Fri, 09 Oct 2020 16:33:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2c2X=DQ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kQvL7-00062F-BR
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 16:33:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f1892fc-18ba-47f6-974b-dd71bea30ff5;
 Fri, 09 Oct 2020 16:33:52 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 57BBA22280;
 Fri,  9 Oct 2020 16:33:51 +0000 (UTC)
X-Inumbo-ID: 2f1892fc-18ba-47f6-974b-dd71bea30ff5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602261231;
	bh=WKdJ1VOpMjQrC+6IyJVrHwUYIe6GzX5YhmHEQTDw0Mw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=0T7qB4CjvKJ1NLKLfuaY11dPutqryz2wEvQoXh+xPgmwkisMP5Lyi/1RLW4GTyHXo
	 EYN88PQn33e3KChc7cUKN+gtyxP1HR86u0w0XpDjcF5yMX1FrG4mwFik29W3L4xaUw
	 lyHdnghvxKLPFd75NrerlTQWhOeUgYNLwQc6oxxo=
Date: Fri, 9 Oct 2020 09:33:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    "julien@xen.org" <julien@xen.org>, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    "roman@zededa.com" <roman@zededa.com>
Subject: Re: [PATCH v3] xen/rpi4: implement watchdog-based reset
In-Reply-To: <B196761E-78D7-4891-A28E-E04E0B85A202@arm.com>
Message-ID: <alpine.DEB.2.21.2010090933240.23978@sstabellini-ThinkPad-T480s>
References: <20201007223813.1638-1-sstabellini@kernel.org> <1A694341-33AC-41E1-B216-2D3E1A6C45B4@arm.com> <alpine.DEB.2.21.2010081103110.23978@sstabellini-ThinkPad-T480s> <B196761E-78D7-4891-A28E-E04E0B85A202@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 9 Oct 2020, Bertrand Marquis wrote:
> > On 8 Oct 2020, at 19:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Thu, 8 Oct 2020, Bertrand Marquis wrote:
> >>> On 7 Oct 2020, at 23:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >>> 
> >>> The preferred method to reboot RPi4 is PSCI. If it is not available,
> >>> touching the watchdog is required to be able to reboot the board.
> >>> 
> >>> The implementation is based on
> >>> drivers/watchdog/bcm2835_wdt.c:__bcm2835_restart in Linux v5.9-rc7.
> >>> 
> >>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> >>> Acked-by: Julien Grall <jgrall@amazon.com>
> >> 
> >> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >> 
> >> Maybe a printk if reset was not successful ?
> > 
> > That's not quite platform-specific, but we could add a printk to
> > xen/arch/arm/shutdown.c:machine_restart if we are still alive after
> > 100ms.
> 
> Even nicer :-)
> Definitely useful to see something if reset/restart did
> not succeed for whatever reason.
> 
> > 
> > I'll commit this patch as is and maybe send another one for
> > machine_restart.
> 
> Please tell me if you want me to handle that one (after all, I did request
> that, so it's not really fair to ask you to do it :-) ).

Since you are volunteering, yes please :-)


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 16:53:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 16:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.4865.12728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQvdl-00085S-EP; Fri, 09 Oct 2020 16:53:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 4865.12728; Fri, 09 Oct 2020 16:53:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQvdl-00085L-B8; Fri, 09 Oct 2020 16:53:09 +0000
Received: by outflank-mailman (input) for mailman id 4865;
 Fri, 09 Oct 2020 16:53:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XHyC=DQ=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kQvdj-00085C-LX
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 16:53:07 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.44]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64b967e6-5484-4d2b-be8a-37f420d5449f;
 Fri, 09 Oct 2020 16:53:06 +0000 (UTC)
Received: from DB6PR0201CA0004.eurprd02.prod.outlook.com (2603:10a6:4:3f::14)
 by VI1PR08MB4510.eurprd08.prod.outlook.com (2603:10a6:803:fc::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.22; Fri, 9 Oct
 2020 16:53:04 +0000
Received: from DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:3f:cafe::a) by DB6PR0201CA0004.outlook.office365.com
 (2603:10a6:4:3f::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21 via Frontend
 Transport; Fri, 9 Oct 2020 16:53:04 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT034.mail.protection.outlook.com (10.152.20.87) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Fri, 9 Oct 2020 16:53:04 +0000
Received: ("Tessian outbound 7161e0c2a082:v64");
 Fri, 09 Oct 2020 16:53:04 +0000
Received: from 491c1cbede1f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5D0EDAE2-31B2-414A-BE4C-23B431B9658E.1; 
 Fri, 09 Oct 2020 16:52:42 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 491c1cbede1f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 09 Oct 2020 16:52:42 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2566.eurprd08.prod.outlook.com (2603:10a6:4:a2::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.22; Fri, 9 Oct
 2020 16:52:40 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.026; Fri, 9 Oct 2020
 16:52:40 +0000
X-Inumbo-ID: 64b967e6-5484-4d2b-be8a-37f420d5449f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iVJrITW8CwGkDrhswFYdOGz5HdCluBOgNTFnJnmzA8g=;
 b=CjD/KsfLjM2+ZU874TgoHzb3KHfTxeP7NZRazpS5DnEZvY1ABzqiDUD6CNCUAV2cfz237k5WCG3GN8PI95XXmWWC4szDgaP7y5xVqUcvXS58Zdf66iuQHQGbdDfVSUnJkCaAlgBJhYtW0rqMOBMLpb2xii1waiaDKNrmrbj7nsc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 58f9a50300b4ec21
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RXcBw/6WiTJRGigSbDt37ZmGZi+WUo4Gcy/I/zWh40M4QpludOnlj8wndKPpxyA8Y1X4BTZF+agLVtXHIQjDDVqmRDmFoowcOzck52aDqyNnMx+1SdAId7P+/IGe2oE68l6p4GjBQS7I4EAhxvxSN/8Z8zZZIhq+KKh+HnwvFVvsSHDzPxW/Hf2Aczif2CIyGgi8Oa/j7wIqO+1J+Yl9LaNInbBuJOxxY/+nNapWdLrLQxpyHhESltX+Z/SNKMvxIftnnRHbUEkm7NYVPrugN/WyDm5RpY/bhlxtjE60iqN1Mirve4Da9JJ9R4TtphhWbMETKRSQGv81w2oMTB/LnQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iVJrITW8CwGkDrhswFYdOGz5HdCluBOgNTFnJnmzA8g=;
 b=AToywdoJzy1AXsCuYXI1gBqugo1+oJ1VevHIf43/MPUf4nBcEWYqsJfvqeWCsw0vi4/1DuGeDhL2ckT7Ujxou1WU1qcqUkvSKohkHvY+oB7iwCRidXA2FTUJJyIpDFyZCUwQetYEvsZwJW9isXNJDThQzqc9Ox2kNlC/0eZXifCwBhHstFOziCQcByG1xuQZzC1j3x0yHmWx/Xm3wSCJeQjPBBCKTNd4P3llNPxILLyqqZt5O7VqqpdGnVdOeEiKcgG/Siq2K9o/ZnevVXjrunwbyaaXHzJkEX58P7gq+r7lU4C3rFvIbIEUKnob0aPDvUbBAZ0HBSPnbltHGjlC2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, "julien@xen.org"
	<julien@xen.org>, Stefano Stabellini <stefano.stabellini@xilinx.com>,
	"roman@zededa.com" <roman@zededa.com>
Subject: Re: [PATCH v3] xen/rpi4: implement watchdog-based reset
Thread-Topic: [PATCH v3] xen/rpi4: implement watchdog-based reset
Thread-Index: AQHWnPqc7v70BAMN4EaAQA/1huLhsamNV4yAgACv0gCAAN+3gIAAkuUAgAAFQwA=
Date: Fri, 9 Oct 2020 16:52:40 +0000
Message-ID: <4DE99283-89F3-4FA1-84D9-88F37ED236A6@arm.com>
References: <20201007223813.1638-1-sstabellini@kernel.org>
 <1A694341-33AC-41E1-B216-2D3E1A6C45B4@arm.com>
 <alpine.DEB.2.21.2010081103110.23978@sstabellini-ThinkPad-T480s>
 <B196761E-78D7-4891-A28E-E04E0B85A202@arm.com>
 <alpine.DEB.2.21.2010090933240.23978@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010090933240.23978@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c1b86fd0-c769-49ea-3470-08d86c73ca28
x-ms-traffictypediagnostic: DB6PR0802MB2566:|VI1PR08MB4510:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB45104A4D276C888A40F168349D080@VI1PR08MB4510.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 UVhblb4U/1kD+zWxQS7zewSoXiEXLsrPq4qiSwS1LivqtNPL0TqYLTQD9nLYMboROH6FXzSOWHJ+ez965MWLlaW3YNgqrQlOpa71yjttuVlLg6Zu2vfu2SGfsY/ZNvdM1H/eGUcKHQ9lzadAC9TBSKga0Yw0dNR9It1NB1Rhk0V+hlVO/2BAvGGue9YxOpGqPrI0G9xoShm1ERR6AtlF9NanU2d7+K9EuT3KS6Xe3PwQy2o2n2ams+yv39iV7uQRB78KEk8iuA2JguK7Oqt/13op1gX5mN0azQPOeGY2mhD771b/M53Wqi4/DPVeQ+ftfpviwFo0kVjfXTK1rxldGA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(346002)(136003)(366004)(39860400002)(83380400001)(5660300002)(66476007)(6486002)(66446008)(33656002)(64756008)(478600001)(6916009)(4326008)(66946007)(316002)(91956017)(66556008)(54906003)(6506007)(186003)(53546011)(36756003)(76116006)(26005)(2906002)(71200400001)(8676002)(6512007)(2616005)(8936002)(86362001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 MAYZ45Bbq6fwXCxJngiFQ1V8W5AZb71NF2MyyY3ClsPk6SBWqBuH1DYPu8u1sJNRPFfuuWUgqNKuxhnOETgk91nXU/JI70v9hxlk/O571fqFcStfP/1wB7yRIkx4HoZQFIMYw6s02XYcDF9bD3bVfctEFN8L5FsazocJVERHPLdExiFcIeJKmLhmG0iFU80E1R8sJPj5NTZvv1pZkGU8AloVutMJc7sQZm7dQ6ejZPyV8owGgFP/BKrPeX0CnlSYPRxzXjyisBB9BvMd6K2WJcQUDmhHcdhOEP32H8PBLK3G/ZxrPLi9x9GQ53yB3qETHex7xhJa8uk0cibHwN37lZA5c45JknQ6cqzT66U4tLLjc3ryxo0ZFn21L/YupqLqCrrDMmZOkSE1I552e0SROH7dz41MwZxUuTtXI8p8thQkFMOnvJfxGjFQHh4pbqC5r4lf/lpgfeMgV9t7UHLehFmMz3FV2ERBgGPL7ohopXnm99N6rcTAe9mRLLn+fEYnjMss5tIRcYUGKBjpa6WrS5uplpN/Hng2kT2VPBwEBTkBzGMu/QOh0bDaccSbYcf1ra1HocTt8ANs/YUF/Xx4LYF8M4HNAd8lVby8yKT2RTphcA9+nS6cFye+XWs+CXBYFuoznjNALze6XVswzPgICw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <F55BA0551CB4C7479135208A99C510C7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2566
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	89e2f452-989a-40b1-d020-08d86c73bc4a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Q/Hc+88erewkKk98fwy3b/iFCP1SIGTBOosAdGX5QQYriQkzXo3eg1QkLKpktz1sAttus6F5Dxrje5qawyuvwaS+uNLC6QRItfa1UH5sX4rSgG1rPdooe13f+v+GYyC32WdJ946658bNhh//29O+VFrYATjd1GsuE0Vq6rrE1owheW8VkQadK3qirCwYM3VPkScU5Cg/AoxHqebtpkMuDlZiGWJjM4cb9s8Bum0Hqj911UrmrVDGoZNiKvVbFlc33NjNJVFUP+9v/wBiciQpcJtmF2ndBD16FRFk0d3gA7xY5Hb2GQO9bRWdSGszTAycFWz3c3givoVpNlVDttaaKcoSJRht8+9ozWKWQ7C1itp0TaIHFGNJZ3qTBW0eShsmwkwq4spGm4CRqLmI3znCKg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(396003)(376002)(346002)(46966005)(107886003)(53546011)(8676002)(47076004)(86362001)(82310400003)(26005)(6486002)(70206006)(5660300002)(478600001)(70586007)(81166007)(2906002)(6862004)(36756003)(2616005)(82740400003)(33656002)(186003)(336012)(54906003)(356005)(316002)(83380400001)(6506007)(6512007)(4326008)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 Oct 2020 16:53:04.1536
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c1b86fd0-c769-49ea-3470-08d86c73ca28
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT034.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4510

Hi Stefano,

> On 9 Oct 2020, at 17:33, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Fri, 9 Oct 2020, Bertrand Marquis wrote:
>>> On 8 Oct 2020, at 19:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> On Thu, 8 Oct 2020, Bertrand Marquis wrote:
>>>>> On 7 Oct 2020, at 23:38, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>
>>>>> The preferred method to reboot RPi4 is PSCI. If it is not available,
>>>>> touching the watchdog is required to be able to reboot the board.
>>>>>
>>>>> The implementation is based on
>>>>> drivers/watchdog/bcm2835_wdt.c:__bcm2835_restart in Linux v5.9-rc7.
>>>>>
>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>> Acked-by: Julien Grall <jgrall@amazon.com>
>>>>
>>>> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>
>>>> Maybe a printk if reset was not successful ?
>>>
>>> That's not quite platform-specific, but we could add a printk to
>>> xen/arch/arm/shutdown.c:machine_restart if we are still alive after
>>> 100ms.
>>
>> Even nicer :-)
>> Definitely useful to see something if reset/restart did
>> not succeed for whatever reason.
>>
>>>
>>> I'll commit this patch as is and maybe send another one for
>>> machine_restart.
>>
>> Please tell me if you want me to handle that one (after all, I did request
>> that, so it's not really fair to ask you to do it :-) ).
>
> Since you are volunteering, yes please :-)
Fair enough

I will add this to my small fixes list :-)

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 17:38:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 17:38:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5003.12804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQwLi-0004ST-6V; Fri, 09 Oct 2020 17:38:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5003.12804; Fri, 09 Oct 2020 17:38:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQwLi-0004SM-30; Fri, 09 Oct 2020 17:38:34 +0000
Received: by outflank-mailman (input) for mailman id 5003;
 Fri, 09 Oct 2020 17:38:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQwLg-0004SH-Jy
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 17:38:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b621e2c7-8fca-4711-810c-ed6bde2704a7;
 Fri, 09 Oct 2020 17:38:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQwLd-0001p9-5n; Fri, 09 Oct 2020 17:38:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQwLc-0006oR-Qp; Fri, 09 Oct 2020 17:38:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQwLc-00060W-QK; Fri, 09 Oct 2020 17:38:28 +0000
X-Inumbo-ID: b621e2c7-8fca-4711-810c-ed6bde2704a7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9H8GUFP/hV8BPF2H2mRHjH9fWHgq4MnsDJ0gQi5Bquo=; b=HGP/7K6SjEz4EenwwFqpVcsPbk
	SW91OirM1kmJOO1kMp6R8DLCJQuxTL3fM7LIVwMjfFXs+e0UetZKksRlKqahW6q6MvMNYfo1RWc5G
	beW7jLXkn5DOl6XITrLlLP0hYhc61HEiC5C4WA47b2R7nzP36J/IOV/fMVrjb2xVRRGg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155585-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155585: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=497d415d76b9f59fcae27f22df1ca2c3fa4df64e
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 17:38:28 +0000

flight 155585 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155585/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 11 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 11 guest-start    fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 12 guest-start              fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 10 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 11 guest-start            fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 11 guest-start           fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 10 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 10 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     12 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 10 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 10 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 10 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-install  fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                497d415d76b9f59fcae27f22df1ca2c3fa4df64e
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   50 days
Failing since        152659  2020-08-21 14:07:39 Z   49 days   81 attempts
Testing same since   155585  2020-10-09 02:17:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 41414 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 17:47:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 17:47:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5027.12842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQwUL-0005mQ-Po; Fri, 09 Oct 2020 17:47:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5027.12842; Fri, 09 Oct 2020 17:47:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQwUL-0005mJ-Mr; Fri, 09 Oct 2020 17:47:29 +0000
Received: by outflank-mailman (input) for mailman id 5027;
 Fri, 09 Oct 2020 17:47:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2W9l=DQ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kQwUK-0005mE-O9
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 17:47:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12cd25d3-4289-4d91-905e-e9cf612e68b7;
 Fri, 09 Oct 2020 17:47:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQwUG-00021k-RK; Fri, 09 Oct 2020 17:47:24 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQwUG-0001gA-JB; Fri, 09 Oct 2020 17:47:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2W9l=DQ=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kQwUK-0005mE-O9
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 17:47:28 +0000
X-Inumbo-ID: 12cd25d3-4289-4d91-905e-e9cf612e68b7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 12cd25d3-4289-4d91-905e-e9cf612e68b7;
	Fri, 09 Oct 2020 17:47:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:Date:
	Message-ID:Subject:From:Cc:To;
	bh=xE9bJ0TQJqRsSB+C4YN7bhtHRoSu1lDzBSOBdMwYV6c=; b=2h69FtNvDyIPZHrlD0wgXH+R6q
	G9EBWlf6O5y7P3Q0QFilh+NLy+vUhUfT8er755mIpG/eNq3z7O3ZDPqhAqIz3srN3pL0dzsTL1lAz
	OiilmiTSezuAKRsiY8Oy+Ed81f2LCqnB9/IqxHcBRN2hEseGRVRY75pVZiiNW3hcI+w4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kQwUG-00021k-RK; Fri, 09 Oct 2020 17:47:24 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kQwUG-0001gA-JB; Fri, 09 Oct 2020 17:47:24 +0000
To: Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>, ba1020@protonmail.ch
From: Julien Grall <julien@xen.org>
Subject: Tools backport request for Xen 4.14
Message-ID: <54fcf6ea-f400-c96a-cde6-4f55f909c2d6@xen.org>
Date: Fri, 9 Oct 2020 18:47:22 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Ian & Jan,

Would it be possible to consider backporting the following tools commit 
to 4.14:

d25cc3ec93eb "libxl: workaround gcc 10.2 maybe-uninitialized warning"

This would help to build the Xen tools on Debian Testing with GCC 10. I 
haven't tried the build myself, so I can't promise this is the only 
issue :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 18:15:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 18:15:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5040.12856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQwvY-0000Bt-VO; Fri, 09 Oct 2020 18:15:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5040.12856; Fri, 09 Oct 2020 18:15:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQwvY-0000Bm-SO; Fri, 09 Oct 2020 18:15:36 +0000
Received: by outflank-mailman (input) for mailman id 5040;
 Fri, 09 Oct 2020 18:15:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2W9l=DQ=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kQwvX-0000Bh-EE
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 18:15:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 878b3be5-b4a3-49e6-94b0-e07c05eccdc6;
 Fri, 09 Oct 2020 18:15:34 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQwvL-0002gX-Ru; Fri, 09 Oct 2020 18:15:23 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kQwvL-00049e-I2; Fri, 09 Oct 2020 18:15:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2W9l=DQ=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kQwvX-0000Bh-EE
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 18:15:35 +0000
X-Inumbo-ID: 878b3be5-b4a3-49e6-94b0-e07c05eccdc6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 878b3be5-b4a3-49e6-94b0-e07c05eccdc6;
	Fri, 09 Oct 2020 18:15:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=5Z4jVQIxIVyDAbSJyOzZjF0+rPalo9kgmd8di19kV+4=; b=inKv2pypZADt89UaSjf/OqzGBf
	n9oNIqx6zUp0G20u053T0VpVC3/JmSFs4jGny7DHj8NOyjSdbCW5u0OBHvqNirQ5ZqXxG2JuLbpCU
	GJotBZlQgaZIgSUpfTZqwBHa/xJLws83vuP2q9ZvEngby9MTVlzbe7BbW61/sLUGueCA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kQwvL-0002gX-Ru; Fri, 09 Oct 2020 18:15:23 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kQwvL-00049e-I2; Fri, 09 Oct 2020 18:15:23 +0000
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>,
 xen-devel@lists.xenproject.org, Alex Bennée <alex.bennee@linaro.org>,
 bertrand.marquis@arm.com, andre.przywara@arm.com,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <20201008183904.GA56716@mattapan.m5p.com>
 <f0976c17-ad36-847b-7868-f6bb13948368@xen.org>
 <20201009142208.GA63582@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <55a4ac3d-dc6f-f2f6-1a98-62d1c555d26e@xen.org>
Date: Fri, 9 Oct 2020 19:15:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201009142208.GA63582@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

On 09/10/2020 15:22, Elliott Mitchell wrote:
> On Fri, Oct 09, 2020 at 10:39:26AM +0100, Julien Grall wrote:
>> On 08/10/2020 19:39, Elliott Mitchell wrote:
>>> Your (Masami Hiramatsu) patch seems plausible, but things haven't
>>> progressed enough for me to endorse it.  Looks like something closer to
>>> the core of ACPI still needs further work, Julien Grall?
>>
>> I didn't go very far during my testing because QEMU is providing ACPI
>> 5.1 (Xen only supports 6.0+ so far).
>>
>> For your log above, Xen has finished booting and dom0 should now
>> start booting. The lack of console output may be caused by a crash in
>> Linux during early boot.
>>
>> Do you have the early console enabled in Linux? This can be done by
>> adding earlycon=xenboot to the Linux command line.
> 
> Finding all the command-line console settings can be a challenge.  I had
> thought it was supposed to be "console=hvc0 earlycon=hvc0".
> 
> With that though I finally have some output which claims to come from the
> Linux kernel (yay! finally hit this point!).  As we were both guessing,
> very early kernel panic:
> 
> [    0.000000] efi: Getting EFI parameters from FDT:
> [    0.000000] efi: Can't find 'System Table' in device tree!

Thank you for sending part of the log. Looking at the Linux 5.6 code, 
the error message is printed by efi_get_fdt_params() (see 
drivers/firmware/efi.c) when one of the properties is missing.

'System Table' suggests that Linux wasn't able to find 
"linux,uefi-system-table" or "xen,uefi-system-table".

Xen will only create it later. Would it be possible to add some code to 
__find_uefi_params() to confirm which property Linux thinks is missing?

> [    0.000000] cma: Failed to reserve 64 MiB
> [    0.000000] Kernel panic - not syncing: Failed to allocate page table page
> 
> I don't know whether this is a problem with the mini-DT which was passed
> in versus ACPI tables.  I note a complete lack of ACPI table information.

I think this is normal because, IIRC, the ACPI root pointer will be 
found using the System Table.

> The kernel is from a 5.6-based kernel tree.  I'm unsure which portion to
> try updating next.
> 
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 18:50:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 18:50:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5042.12868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQxSf-0003DU-Jz; Fri, 09 Oct 2020 18:49:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5042.12868; Fri, 09 Oct 2020 18:49:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQxSf-0003DN-Gy; Fri, 09 Oct 2020 18:49:49 +0000
Received: by outflank-mailman (input) for mailman id 5042;
 Fri, 09 Oct 2020 18:49:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBQf=DQ=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kQxSd-0003DI-HR
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 18:49:47 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d54be095-8c5d-4dcb-aa91-2d191ac625c7;
 Fri, 09 Oct 2020 18:49:46 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 099InWg3065261
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 9 Oct 2020 14:49:38 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 099InUbs065260;
 Fri, 9 Oct 2020 11:49:30 -0700 (PDT) (envelope-from ehem)
Date: Fri, 9 Oct 2020 11:49:30 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Bertrand Marquis <bertrand.marquis@arm.com>, ba1020@protonmail.ch
Subject: Re: Tools backport request for Xen 4.14
Message-ID: <20201009184930.GA65219@mattapan.m5p.com>
References: <54fcf6ea-f400-c96a-cde6-4f55f909c2d6@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <54fcf6ea-f400-c96a-cde6-4f55f909c2d6@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 09, 2020 at 06:47:22PM +0100, Julien Grall wrote:
> Would it be possible to consider backporting to 4.14 the following tools 
> commit:
> 
> d25cc3ec93eb "libxl: workaround gcc 10.2 maybe-uninitialized warning"
> 
> This would help to build Xen tools on Debian Testing with GCC 10. I 
> haven't built it myself, so I can't promise this is the only one :).

From Debian's repository:
https://salsa.debian.org/xen-team/debian-xen.git

The master and knorrie/4.14 branches include that commit.  They will
hopefully soon include all the Debian-specific bits for cross-building
too.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5046.12881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPh-00016p-10; Fri, 09 Oct 2020 19:50:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5046.12881; Fri, 09 Oct 2020 19:50:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPg-00016i-TI; Fri, 09 Oct 2020 19:50:48 +0000
Received: by outflank-mailman (input) for mailman id 5046;
 Fri, 09 Oct 2020 19:50:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyPe-00016d-QA
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:50:46 +0000
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 874e54ff-88b7-4ba4-9c9d-61d1bab9ff10;
 Fri, 09 Oct 2020 19:50:43 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:41 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:41 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 00/58] PMEM: Introduce stray write protection for PMEM
Date: Fri,  9 Oct 2020 12:49:35 -0700
Message-Id: <20201009195033.3208459-1-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

Should a stray write in the kernel occur, persistent memory is affected more
than regular memory.  A write to the wrong area of memory could result in
latent data corruption which will persist after a reboot.  PKS provides a
nice way to restrict access to persistent memory kernel mappings, while
providing fast access when needed.

Since the last RFC[1] this patch set has grown quite a bit.  It now depends on
the core patches submitted separately.

	https://lore.kernel.org/lkml/20201009194258.3207172-1-ira.weiny@intel.com/

And contained in the git tree here:

	https://github.com/weiny2/linux-kernel/tree/pks-rfc-v3

However, functionally there is only 1 major change from the last RFC.
Specifically, kmap() is most often used within a single thread in a 'map/do
something/unmap' pattern.  In fact this is the pattern used in ~90% of the
callers of kmap().  This pattern works very well for the pmem use case and the
testing which was done.  However, there were another ~20-30 kmap users which do
not follow this pattern.  Some of them seem to expect the mapping to be
'global' while others require a detailed audit to be sure.[2][3]

While we don't anticipate global mappings to pmem, there is a danger in
changing the semantics of kmap().  Effectively, this would cause an unresolved
page fault with little to no information about why.

There were a number of options considered.

1) Attempt to change all the thread local kmap() calls to kmap_atomic()
2) Introduce a flags parameter to kmap() to indicate if the mapping should be
   global or not
3) Change ~20-30 call sites to 'kmap_global()' to indicate that they require a
   global mapping of the pages
4) Change ~209 call sites to 'kmap_thread()' to indicate that the mapping is to
   be used within that thread of execution only

Option 1 is simply not feasible: kmap_atomic() does not have the same
semantics as kmap() within a single thread.  Option 2 would require all of the
call sites of kmap() to change.  Option 3 seems like a good minimal change,
but there is a danger that new code may miss the semantic change of kmap() and
not get the behavior intended for future users.  Therefore, option #4 was
chosen.

To handle the global PKRS state in the most efficient manner possible, we
lazily override the thread-specific PKRS key value only when needed, because
we anticipate PKS will not be needed most of the time.  And even when it is
used, 90% of the time it is a thread-local call.


[1] https://lore.kernel.org/lkml/20200717072056.73134-1-ira.weiny@intel.com/

[2] The following callers continue calling kmap() (utilizing the global
PKRS).  It would be nice if more of them could be converted to kmap_thread()

	drivers/firewire/net.c:         ptr = kmap(dev->broadcast_rcv_buffer.pages[u]);
	drivers/gpu/drm/i915/gem/i915_gem_pages.c:              return kmap(sg_page(sgt->sgl));
	drivers/gpu/drm/ttm/ttm_bo_util.c:              map->virtual = kmap(map->page);
	drivers/infiniband/hw/qib/qib_user_sdma.c:      mpage = kmap(page);
	drivers/misc/vmw_vmci/vmci_host.c:      context->notify = kmap(context->notify_page) + (uva & (PAGE_SIZE - 1));
	drivers/misc/xilinx_sdfec.c:            addr = kmap(pages[i]);
	drivers/mmc/host/usdhi6rol0.c:  host->pg.mapped         = kmap(host->pg.page);
	drivers/mmc/host/usdhi6rol0.c:  host->pg.mapped = kmap(host->pg.page);
	drivers/mmc/host/usdhi6rol0.c:  host->pg.mapped = kmap(host->pg.page);
	drivers/nvme/target/tcp.c:              iov->iov_base = kmap(sg_page(sg)) + sg->offset + sg_offset;
	drivers/scsi/libiscsi_tcp.c:            segment->sg_mapped = kmap(sg_page(sg));
	drivers/target/iscsi/iscsi_target.c:            iov[i].iov_base = kmap(sg_page(sg)) + sg->offset + page_off;
	drivers/target/target_core_transport.c:         return kmap(sg_page(sg)) + sg->offset;
	fs/btrfs/check-integrity.c:             block_ctx->datav[i] = kmap(block_ctx->pagev[i]);
	fs/ceph/dir.c:          cache_ctl->dentries = kmap(cache_ctl->page);
	fs/ceph/inode.c:                ctl->dentries = kmap(ctl->page);
	fs/erofs/zpvec.h:               kmap_atomic(ctor->curr) : kmap(ctor->curr);
	lib/scatterlist.c:              miter->addr = kmap(miter->page) + miter->__offset;
	net/ceph/pagelist.c:    pl->mapped_tail = kmap(page);
	net/ceph/pagelist.c:            pl->mapped_tail = kmap(page);
	virt/kvm/kvm_main.c:                    hva = kmap(page);

[3] The following appear to follow the same pattern as ext2 which was converted
after some code audit.  So I _think_ they too could be converted to
k[un]map_thread().

	fs/freevxfs/vxfs_subr.c|75| kmap(pp);
	fs/jfs/jfs_metapage.c|102| kmap(page);
	fs/jfs/jfs_metapage.c|156| kmap(page);
	fs/minix/dir.c|72| kmap(page);
	fs/nilfs2/dir.c|195| kmap(page);
	fs/nilfs2/ifile.h|24| void *kaddr = kmap(ibh->b_page);
	fs/ntfs/aops.h|78| kmap(page);
	fs/ntfs/compress.c|574| kmap(page);
	fs/qnx6/dir.c|32| kmap(page);
	fs/qnx6/dir.c|58| kmap(*p = page);
	fs/qnx6/inode.c|190| kmap(page);
	fs/qnx6/inode.c|557| kmap(page);
	fs/reiserfs/inode.c|2397| kmap(bh_result->b_page);
	fs/reiserfs/xattr.c|444| kmap(page);
	fs/sysv/dir.c|60| kmap(page);
	fs/sysv/dir.c|262| kmap(page);
	fs/ufs/dir.c|194| kmap(page);
	fs/ufs/dir.c|562| kmap(page);


Ira Weiny (58):
  x86/pks: Add a global pkrs option
  x86/pks/test: Add testing for global option
  memremap: Add zone device access protection
  kmap: Add stray access protection for device pages
  kmap: Introduce k[un]map_thread
  kmap: Introduce k[un]map_thread debugging
  drivers/drbd: Utilize new kmap_thread()
  drivers/firmware_loader: Utilize new kmap_thread()
  drivers/gpu: Utilize new kmap_thread()
  drivers/rdma: Utilize new kmap_thread()
  drivers/net: Utilize new kmap_thread()
  fs/afs: Utilize new kmap_thread()
  fs/btrfs: Utilize new kmap_thread()
  fs/cifs: Utilize new kmap_thread()
  fs/ecryptfs: Utilize new kmap_thread()
  fs/gfs2: Utilize new kmap_thread()
  fs/nilfs2: Utilize new kmap_thread()
  fs/hfs: Utilize new kmap_thread()
  fs/hfsplus: Utilize new kmap_thread()
  fs/jffs2: Utilize new kmap_thread()
  fs/nfs: Utilize new kmap_thread()
  fs/f2fs: Utilize new kmap_thread()
  fs/fuse: Utilize new kmap_thread()
  fs/freevxfs: Utilize new kmap_thread()
  fs/reiserfs: Utilize new kmap_thread()
  fs/zonefs: Utilize new kmap_thread()
  fs/ubifs: Utilize new kmap_thread()
  fs/cachefiles: Utilize new kmap_thread()
  fs/ntfs: Utilize new kmap_thread()
  fs/romfs: Utilize new kmap_thread()
  fs/vboxsf: Utilize new kmap_thread()
  fs/hostfs: Utilize new kmap_thread()
  fs/cramfs: Utilize new kmap_thread()
  fs/erofs: Utilize new kmap_thread()
  fs: Utilize new kmap_thread()
  fs/ext2: Use ext2_put_page
  fs/ext2: Utilize new kmap_thread()
  fs/isofs: Utilize new kmap_thread()
  fs/jffs2: Utilize new kmap_thread()
  net: Utilize new kmap_thread()
  drivers/target: Utilize new kmap_thread()
  drivers/scsi: Utilize new kmap_thread()
  drivers/mmc: Utilize new kmap_thread()
  drivers/xen: Utilize new kmap_thread()
  drivers/firmware: Utilize new kmap_thread()
  drives/staging: Utilize new kmap_thread()
  drivers/mtd: Utilize new kmap_thread()
  drivers/md: Utilize new kmap_thread()
  drivers/misc: Utilize new kmap_thread()
  drivers/android: Utilize new kmap_thread()
  kernel: Utilize new kmap_thread()
  mm: Utilize new kmap_thread()
  lib: Utilize new kmap_thread()
  powerpc: Utilize new kmap_thread()
  samples: Utilize new kmap_thread()
  dax: Stray access protection for dax_direct_access()
  nvdimm/pmem: Stray access protection for pmem->virt_addr
  [dax|pmem]: Enable stray access protection

 Documentation/core-api/protection-keys.rst    |  11 +-
 arch/powerpc/mm/mem.c                         |   4 +-
 arch/x86/entry/common.c                       |  28 +++
 arch/x86/include/asm/pkeys.h                  |   6 +-
 arch/x86/include/asm/pkeys_common.h           |   8 +-
 arch/x86/kernel/process.c                     |  74 ++++++-
 arch/x86/mm/fault.c                           | 193 ++++++++++++++----
 arch/x86/mm/pkeys.c                           |  88 ++++++--
 drivers/android/binder_alloc.c                |   4 +-
 drivers/base/firmware_loader/fallback.c       |   4 +-
 drivers/base/firmware_loader/main.c           |   4 +-
 drivers/block/drbd/drbd_main.c                |   4 +-
 drivers/block/drbd/drbd_receiver.c            |  12 +-
 drivers/dax/device.c                          |   2 +
 drivers/dax/super.c                           |   2 +
 drivers/firmware/efi/capsule-loader.c         |   6 +-
 drivers/firmware/efi/capsule.c                |   4 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c       |  12 +-
 drivers/gpu/drm/gma500/gma_display.c          |   4 +-
 drivers/gpu/drm/gma500/mmu.c                  |  10 +-
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c     |   4 +-
 .../drm/i915/gem/selftests/i915_gem_context.c |   4 +-
 .../drm/i915/gem/selftests/i915_gem_mman.c    |   8 +-
 drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c  |   4 +-
 drivers/gpu/drm/i915/gt/intel_gtt.c           |   4 +-
 drivers/gpu/drm/i915/gt/shmem_utils.c         |   4 +-
 drivers/gpu/drm/i915/i915_gem.c               |   8 +-
 drivers/gpu/drm/i915/i915_gpu_error.c         |   4 +-
 drivers/gpu/drm/i915/selftests/i915_perf.c    |   4 +-
 drivers/gpu/drm/radeon/radeon_ttm.c           |   4 +-
 drivers/infiniband/hw/hfi1/sdma.c             |   4 +-
 drivers/infiniband/hw/i40iw/i40iw_cm.c        |  10 +-
 drivers/infiniband/sw/siw/siw_qp_tx.c         |  14 +-
 drivers/md/bcache/request.c                   |   4 +-
 drivers/misc/vmw_vmci/vmci_queue_pair.c       |  12 +-
 drivers/mmc/host/mmc_spi.c                    |   4 +-
 drivers/mmc/host/sdricoh_cs.c                 |   4 +-
 drivers/mtd/mtd_blkdevs.c                     |  12 +-
 drivers/net/ethernet/intel/igb/igb_ethtool.c  |   4 +-
 .../net/ethernet/intel/ixgbe/ixgbe_ethtool.c  |   4 +-
 drivers/nvdimm/pmem.c                         |   6 +
 drivers/scsi/ipr.c                            |   8 +-
 drivers/scsi/pmcraid.c                        |   8 +-
 drivers/staging/rts5208/rtsx_transport.c      |   4 +-
 drivers/target/target_core_iblock.c           |   4 +-
 drivers/target/target_core_rd.c               |   4 +-
 drivers/target/target_core_transport.c        |   4 +-
 drivers/xen/gntalloc.c                        |   4 +-
 fs/afs/dir.c                                  |  16 +-
 fs/afs/dir_edit.c                             |  16 +-
 fs/afs/mntpt.c                                |   4 +-
 fs/afs/write.c                                |   4 +-
 fs/aio.c                                      |   4 +-
 fs/binfmt_elf.c                               |   4 +-
 fs/binfmt_elf_fdpic.c                         |   4 +-
 fs/btrfs/check-integrity.c                    |   4 +-
 fs/btrfs/compression.c                        |   4 +-
 fs/btrfs/inode.c                              |  16 +-
 fs/btrfs/lzo.c                                |  24 +--
 fs/btrfs/raid56.c                             |  34 +--
 fs/btrfs/reflink.c                            |   8 +-
 fs/btrfs/send.c                               |   4 +-
 fs/btrfs/zlib.c                               |  32 +--
 fs/btrfs/zstd.c                               |  20 +-
 fs/cachefiles/rdwr.c                          |   4 +-
 fs/cifs/cifsencrypt.c                         |   6 +-
 fs/cifs/file.c                                |  16 +-
 fs/cifs/smb2ops.c                             |   8 +-
 fs/cramfs/inode.c                             |  10 +-
 fs/ecryptfs/crypto.c                          |   8 +-
 fs/ecryptfs/read_write.c                      |   8 +-
 fs/erofs/super.c                              |   4 +-
 fs/erofs/xattr.c                              |   4 +-
 fs/exec.c                                     |  10 +-
 fs/ext2/dir.c                                 |   8 +-
 fs/ext2/ext2.h                                |   8 +
 fs/ext2/namei.c                               |  15 +-
 fs/f2fs/f2fs.h                                |   8 +-
 fs/freevxfs/vxfs_immed.c                      |   4 +-
 fs/fuse/readdir.c                             |   4 +-
 fs/gfs2/bmap.c                                |   4 +-
 fs/gfs2/ops_fstype.c                          |   4 +-
 fs/hfs/bnode.c                                |  14 +-
 fs/hfs/btree.c                                |  20 +-
 fs/hfsplus/bitmap.c                           |  20 +-
 fs/hfsplus/bnode.c                            | 102 ++++-----
 fs/hfsplus/btree.c                            |  18 +-
 fs/hostfs/hostfs_kern.c                       |  12 +-
 fs/io_uring.c                                 |   4 +-
 fs/isofs/compress.c                           |   4 +-
 fs/jffs2/file.c                               |   8 +-
 fs/jffs2/gc.c                                 |   4 +-
 fs/nfs/dir.c                                  |  20 +-
 fs/nilfs2/alloc.c                             |  34 +--
 fs/nilfs2/cpfile.c                            |   4 +-
 fs/ntfs/aops.c                                |   4 +-
 fs/reiserfs/journal.c                         |   4 +-
 fs/romfs/super.c                              |   4 +-
 fs/splice.c                                   |   4 +-
 fs/ubifs/file.c                               |  16 +-
 fs/vboxsf/file.c                              |  12 +-
 fs/zonefs/super.c                             |   4 +-
 include/linux/entry-common.h                  |   3 +
 include/linux/highmem.h                       |  63 +++++-
 include/linux/memremap.h                      |   1 +
 include/linux/mm.h                            |  43 ++++
 include/linux/pkeys.h                         |   6 +-
 include/linux/sched.h                         |   8 +
 include/trace/events/kmap_thread.h            |  56 +++++
 init/init_task.c                              |   6 +
 kernel/fork.c                                 |  18 ++
 kernel/kexec_core.c                           |   8 +-
 lib/Kconfig.debug                             |   8 +
 lib/iov_iter.c                                |  12 +-
 lib/pks/pks_test.c                            | 138 +++++++++++--
 lib/test_bpf.c                                |   4 +-
 lib/test_hmm.c                                |   8 +-
 mm/Kconfig                                    |  13 ++
 mm/debug.c                                    |  23 +++
 mm/memory.c                                   |   8 +-
 mm/memremap.c                                 |  90 ++++++++
 mm/swapfile.c                                 |   4 +-
 mm/userfaultfd.c                              |   4 +-
 net/ceph/messenger.c                          |   4 +-
 net/core/datagram.c                           |   4 +-
 net/core/sock.c                               |   8 +-
 net/ipv4/ip_output.c                          |   4 +-
 net/sunrpc/cache.c                            |   4 +-
 net/sunrpc/xdr.c                              |   8 +-
 net/tls/tls_device.c                          |   4 +-
 samples/vfio-mdev/mbochs.c                    |   4 +-
 131 files changed, 1284 insertions(+), 565 deletions(-)
 create mode 100644 include/trace/events/kmap_thread.h

-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5049.12917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPu-0001FH-85; Fri, 09 Oct 2020 19:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5049.12917; Fri, 09 Oct 2020 19:51:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPu-0001F8-4b; Fri, 09 Oct 2020 19:51:02 +0000
Received: by outflank-mailman (input) for mailman id 5049;
 Fri, 09 Oct 2020 19:51:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyPs-00017t-Mt
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:00 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ec80b497-e56e-45d0-8d6e-e391c6030fc6;
 Fri, 09 Oct 2020 19:50:57 +0000 (UTC)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:56 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:55 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Mel Gorman <mgorman@suse.de>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 03/58] memremap: Add zone device access protection
Date: Fri,  9 Oct 2020 12:49:38 -0700
Message-Id: <20201009195033.3208459-4-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

Device-managed memory is exposed in the kernel direct map, which allows
stray pointers to access these device memories.

Stray pointers into normal memory may result in a crash or other
undesirable behavior which, while unfortunate, is usually recoverable
with a reboot.  Stray access, specifically stray writes, to areas such
as non-volatile memory is permanent in nature and thus more likely to
result in permanent user data loss than stray access to other memory
areas.

As a secondary benefit, protecting against reads also helps guard
against speculative reads of poisoned areas.

Set up an infrastructure for extra device access protection. Then
implement the new protection using the new Protection Keys Supervisor
(PKS) on architectures which support it.

To enable this extra protection, devices set a flag in their pgmap to
indicate that these areas should use additional protection.

Kernel code which intends to access this memory can do so automatically
through the kmap infrastructure, which calls into
dev_access_[enable|disable]() described here.  The kmap integration is
implemented in a follow-on patch.

In addition, users can directly enable/disable the access through
dev_access_[enable|disable]() if they have a priori knowledge of the
type of pages they are accessing.

All calls to enable/disable protection flow through
dev_access_[enable|disable]() and are nestable through a per-task
reference count.  This reference count does two things.

1) Allows a thread to nest calls to disable protection such that the
   first call to re-enable protection does not 'break' the last access of
   the pmem device memory.

2) Improves performance by avoiding many MSR writes, for example when
   looping over a sequence of pmem pages.
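The two properties above can be sketched in plain C.  This is a minimal
userspace model of the per-task counter, not the kernel implementation;
dev_page_access_ref mirrors the field added to task_struct below, while
pks_mkrdwr()/pks_mknoaccess() are stand-in stubs for the real PKS MSR
helpers:

```c
/* Userspace sketch of the nestable per-task reference count behind
 * dev_access_[enable|disable]().  protection_on models the pkey's
 * access-disable state; in the kernel this is the PKRS MSR. */
static int dev_page_access_ref;		/* 0 == protection in force */
static int protection_on = 1;

static void pks_mkrdwr(void)     { protection_on = 0; }
static void pks_mknoaccess(void) { protection_on = 1; }

void dev_access_enable(void)
{
	/* Only the outermost caller touches the (expensive) MSR. */
	if (!dev_page_access_ref++)
		pks_mkrdwr();
}

void dev_access_disable(void)
{
	/* Protection is restored only when the last user is done. */
	if (!--dev_page_access_ref)
		pks_mknoaccess();
}
```

Nested enable/disable pairs therefore leave the outer access intact, and
only two MSR writes happen no matter how deep the nesting goes.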

In addition, we must ensure the reference count is preserved through an
exception so we add the count to irqentry_state_t and save/restore the
reference count while giving exceptions their own count should they use
a kmap call.

The following shows how this works through an exception:

    ...
            // ref == 0
            dev_access_enable()  // ref += 1 ==> disable protection
                    irq()
                            // enable protection
                            // ref = 0
                            _handler()
                                    dev_access_enable()   // ref += 1 ==> disable protection
                                    dev_access_disable()  // ref -= 1 ==> enable protection
                            // WARN_ON(ref != 0)
                            // disable protection
            do_pmem_thing()  // all good here
            dev_access_disable() // ref -= 1 ==> 0 ==> enable protection
    ...

Nested exceptions operate the same way with each exception storing the
interrupted exception state all the way down.
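The save/restore across an exception can be modeled the same way.  This
is a userspace sketch; the pkrs_ref field matches the one added to
irqentry_state_t in this patch, and the kernel's WARN_ON_ONCE() is
modeled here as a return value:

```c
/* Userspace model of how irq_save_pkrs()/irq_restore_pkrs() handle the
 * reference count (the real code also saves the PKRS MSR values). */
struct irqentry_state {
	unsigned int pkrs_ref;
};

static unsigned int dev_page_access_ref;	/* current->dev_page_access_ref */

void irq_save_ref(struct irqentry_state *state)
{
	/* Give the exception handler its own count, starting at 0. */
	state->pkrs_ref = dev_page_access_ref;
	dev_page_access_ref = 0;
}

int irq_restore_ref(struct irqentry_state *state)
{
	/* Models WARN_ON_ONCE(): a balanced handler ends back at 0. */
	int leaked = (dev_page_access_ref != 0);

	/* Restore the interrupted task's count. */
	dev_page_access_ref = state->pkrs_ref;
	return leaked;
}
```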

The pkey value is never freed; this keeps the implementation either on
or off via a static branch conditional in the fast paths.

Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 arch/x86/entry/common.c      | 21 +++++++++
 include/linux/entry-common.h |  3 ++
 include/linux/memremap.h     |  1 +
 include/linux/mm.h           | 43 +++++++++++++++++
 include/linux/sched.h        |  3 ++
 init/init_task.c             |  3 ++
 kernel/fork.c                |  3 ++
 mm/Kconfig                   | 13 ++++++
 mm/memremap.c                | 90 ++++++++++++++++++++++++++++++++++++
 9 files changed, 180 insertions(+)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 86ad32e0095e..3680724c1a4d 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -264,12 +264,27 @@ noinstr void idtentry_exit_nmi(struct pt_regs *regs, irqentry_state_t *irq_state
  *
  * NOTE That the thread saved PKRS must be preserved separately to ensure
  * global overrides do not 'stick' on a thread.
+ *
+ * Furthermore, Zone Device Access Protection maintains access in a re-entrant
+ * manner through a reference count which also needs to be maintained should
+ * exception handlers use those interfaces for memory access.  Here we start
+ * off the exception handler ref count at 0 and ensure it is 0 when the
+ * exception is done.  Then restore it for the interrupted task.
  */
 noinstr void irq_save_pkrs(irqentry_state_t *state)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_PKS))
 		return;
 
+#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
+	/*
+	 * Save the ref count of the currently running process and set it to 0
+	 * for any irq users to properly track re-entrance
+	 */
+	state->pkrs_ref = current->dev_page_access_ref;
+	current->dev_page_access_ref = 0;
+#endif
+
 	/*
 	 * The thread_pkrs must be maintained separately to prevent global
 	 * overrides from 'sticking' on a thread.
@@ -286,6 +301,12 @@ noinstr void irq_restore_pkrs(irqentry_state_t *state)
 
 	write_pkrs(state->pkrs);
 	current->thread.saved_pkrs = state->thread_pkrs;
+
+#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
+	WARN_ON_ONCE(current->dev_page_access_ref != 0);
+	/* Restore the interrupted process reference */
+	current->dev_page_access_ref = state->pkrs_ref;
+#endif
 }
 #endif /* CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */
 
diff --git a/include/linux/entry-common.h b/include/linux/entry-common.h
index c3b361ffa059..06743cce2dbf 100644
--- a/include/linux/entry-common.h
+++ b/include/linux/entry-common.h
@@ -343,6 +343,9 @@ void irqentry_exit_to_user_mode(struct pt_regs *regs);
 #ifndef irqentry_state
 typedef struct irqentry_state {
 #ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS
+#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
+	unsigned int pkrs_ref;
+#endif
 	u32 pkrs;
 	u32 thread_pkrs;
 #endif
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index e5862746751b..b6713ee7b218 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -89,6 +89,7 @@ struct dev_pagemap_ops {
 };
 
 #define PGMAP_ALTMAP_VALID	(1 << 0)
+#define PGMAP_PROT_ENABLED	(1 << 1)
 
 /**
  * struct dev_pagemap - metadata for ZONE_DEVICE mappings
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 16b799a0522c..9e845515ff15 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1141,6 +1141,49 @@ static inline bool is_pci_p2pdma_page(const struct page *page)
 		page->pgmap->type == MEMORY_DEVICE_PCI_P2PDMA;
 }
 
+#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
+DECLARE_STATIC_KEY_FALSE(dev_protection_static_key);
+
+/*
+ * We make page_is_access_protected() as quick as possible.
+ *    1) If no mappings have been enabled with extra protection we skip this
+ *       entirely
+ *    2) Skip pages which are not ZONE_DEVICE
+ *    3) Only then check if this particular page was mapped with extra
+ *       protections.
+ */
+static inline bool page_is_access_protected(struct page *page)
+{
+	if (!static_branch_unlikely(&dev_protection_static_key))
+		return false;
+	if (!is_zone_device_page(page))
+		return false;
+	if (page->pgmap->flags & PGMAP_PROT_ENABLED)
+		return true;
+	return false;
+}
+
+void __dev_access_enable(bool global);
+void __dev_access_disable(bool global);
+static __always_inline void dev_access_enable(bool global)
+{
+	if (static_branch_unlikely(&dev_protection_static_key))
+		__dev_access_enable(global);
+}
+static __always_inline void dev_access_disable(bool global)
+{
+	if (static_branch_unlikely(&dev_protection_static_key))
+		__dev_access_disable(global);
+}
+#else
+static inline bool page_is_access_protected(struct page *page)
+{
+	return false;
+}
+static inline void dev_access_enable(bool global) { }
+static inline void dev_access_disable(bool global) { }
+#endif /* CONFIG_ZONE_DEVICE_ACCESS_PROTECTION */
+
 /* 127: arbitrary random number, small enough to assemble well */
 #define page_ref_zero_or_close_to_overflow(page) \
 	((unsigned int) page_ref_count(page) + 127u <= 127u)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index afe01e232935..25d97ab6c757 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1315,6 +1315,9 @@ struct task_struct {
 	struct callback_head		mce_kill_me;
 #endif
 
+#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
+	unsigned int			dev_page_access_ref;
+#endif
 	/*
 	 * New fields for task_struct should be added above here, so that
 	 * they are included in the randomized portion of task_struct.
diff --git a/init/init_task.c b/init/init_task.c
index f6889fce64af..9b39f25de59b 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -209,6 +209,9 @@ struct task_struct init_task
 #ifdef CONFIG_SECCOMP
 	.seccomp	= { .filter_count = ATOMIC_INIT(0) },
 #endif
+#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
+	.dev_page_access_ref = 0,
+#endif
 };
 EXPORT_SYMBOL(init_task);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index da8d360fb032..b6a3ee328a89 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -940,6 +940,9 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 
 #ifdef CONFIG_MEMCG
 	tsk->active_memcg = NULL;
+#endif
+#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
+	tsk->dev_page_access_ref = 0;
 #endif
 	return tsk;
 
diff --git a/mm/Kconfig b/mm/Kconfig
index 1b9bc004d9bc..01dd75720ae6 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -794,6 +794,19 @@ config ZONE_DEVICE
 
 	  If FS_DAX is enabled, then say Y.
 
+config ZONE_DEVICE_ACCESS_PROTECTION
+	bool "Device memory access protection"
+	depends on ZONE_DEVICE
+	depends on ARCH_HAS_SUPERVISOR_PKEYS
+
+	help
+	  Enable the option of having access protections on device memory
+	  areas.  This protects against unintended access to device memory,
+	  such as stray writes.  This feature is particularly useful for
+	  protecting against corruption of persistent memory.
+
+	  If in doubt, say 'Y'.
+
 config DEV_PAGEMAP_OPS
 	bool
 
diff --git a/mm/memremap.c b/mm/memremap.c
index fbfc79fd9c24..edad2aa0bd24 100644
--- a/mm/memremap.c
+++ b/mm/memremap.c
@@ -6,12 +6,16 @@
 #include <linux/memory_hotplug.h>
 #include <linux/mm.h>
 #include <linux/pfn_t.h>
+#include <linux/pkeys.h>
 #include <linux/swap.h>
 #include <linux/mmzone.h>
 #include <linux/swapops.h>
 #include <linux/types.h>
 #include <linux/wait_bit.h>
 #include <linux/xarray.h>
+#include <uapi/asm-generic/mman-common.h>
+
+#define PKEY_INVALID (INT_MIN)
 
 static DEFINE_XARRAY(pgmap_array);
 
@@ -67,6 +71,89 @@ static void devmap_managed_enable_put(void)
 }
 #endif /* CONFIG_DEV_PAGEMAP_OPS */
 
+#ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
+/*
+ * Note: all devices which have asked for protections share the same key.  The
+ * key may, or may not, have been provided by the core.  If not, protection
+ * will remain disabled.  The key acquisition is attempted at init time and
+ * never again.  So we don't have to worry about dev_page_pkey changing.
+ */
+static int dev_page_pkey = PKEY_INVALID;
+DEFINE_STATIC_KEY_FALSE(dev_protection_static_key);
+EXPORT_SYMBOL(dev_protection_static_key);
+
+static pgprot_t dev_pgprot_get(struct dev_pagemap *pgmap, pgprot_t prot)
+{
+	if (pgmap->flags & PGMAP_PROT_ENABLED && dev_page_pkey != PKEY_INVALID) {
+		pgprotval_t val = pgprot_val(prot);
+
+		static_branch_inc(&dev_protection_static_key);
+		prot = __pgprot(val | _PAGE_PKEY(dev_page_pkey));
+	}
+	return prot;
+}
+
+static void dev_pgprot_put(struct dev_pagemap *pgmap)
+{
+	if (pgmap->flags & PGMAP_PROT_ENABLED && dev_page_pkey != PKEY_INVALID)
+		static_branch_dec(&dev_protection_static_key);
+}
+
+void __dev_access_disable(bool global)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	if (!--current->dev_page_access_ref)
+		pks_mknoaccess(dev_page_pkey, global);
+	local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(__dev_access_disable);
+
+void __dev_access_enable(bool global)
+{
+	unsigned long flags;
+
+	local_irq_save(flags);
+	/* 0 clears the PKEY_DISABLE_ACCESS bit, allowing access */
+	if (!current->dev_page_access_ref++)
+		pks_mkrdwr(dev_page_pkey, global);
+	local_irq_restore(flags);
+}
+EXPORT_SYMBOL_GPL(__dev_access_enable);
+
+/**
+ * dev_access_protection_init: Configure a PKS key domain for device pages
+ *
+ * The domain defaults to the protected state.  Device page mappings should set
+ * the PGMAP_PROT_ENABLED flag when mapping pages.
+ *
+ * Note the pkey is never freed.  This is run at init time and we either get
+ * the key or we do not.  We need to do this to maintain a constant key (or
+ * not) as device memory is added or removed.
+ */
+static int __init __dev_access_protection_init(void)
+{
+	int pkey = pks_key_alloc("Device Memory");
+
+	if (pkey < 0)
+		return 0;
+
+	dev_page_pkey = pkey;
+
+	return 0;
+}
+subsys_initcall(__dev_access_protection_init);
+#else
+static pgprot_t dev_pgprot_get(struct dev_pagemap *pgmap, pgprot_t prot)
+{
+	return prot;
+}
+static void dev_pgprot_put(struct dev_pagemap *pgmap)
+{
+}
+#endif /* CONFIG_ZONE_DEVICE_ACCESS_PROTECTION */
+
 static void pgmap_array_delete(struct resource *res)
 {
 	xa_store_range(&pgmap_array, PHYS_PFN(res->start), PHYS_PFN(res->end),
@@ -156,6 +243,7 @@ void memunmap_pages(struct dev_pagemap *pgmap)
 	pgmap_array_delete(res);
 	WARN_ONCE(pgmap->altmap.alloc, "failed to free all reserved pages\n");
 	devmap_managed_enable_put();
+	dev_pgprot_put(pgmap);
 }
 EXPORT_SYMBOL_GPL(memunmap_pages);
 
@@ -191,6 +279,8 @@ void *memremap_pages(struct dev_pagemap *pgmap, int nid)
 	int error, is_ram;
 	bool need_devmap_managed = true;
 
+	params.pgprot = dev_pgprot_get(pgmap, params.pgprot);
+
 	switch (pgmap->type) {
 	case MEMORY_DEVICE_PRIVATE:
 		if (!IS_ENABLED(CONFIG_DEVICE_PRIVATE)) {
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5048.12905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPo-0001Av-Uh; Fri, 09 Oct 2020 19:50:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5048.12905; Fri, 09 Oct 2020 19:50:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPo-0001Ao-R8; Fri, 09 Oct 2020 19:50:56 +0000
Received: by outflank-mailman (input) for mailman id 5048;
 Fri, 09 Oct 2020 19:50:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyPn-00017t-Md
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:50:55 +0000
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9967089-f009-46f5-8231-486fb3396c20;
 Fri, 09 Oct 2020 19:50:52 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:51 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:50 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyPn-00017t-Md
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:50:55 +0000
X-Inumbo-ID: e9967089-f009-46f5-8231-486fb3396c20
Received: from mga14.intel.com (unknown [192.55.52.115])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e9967089-f009-46f5-8231-486fb3396c20;
	Fri, 09 Oct 2020 19:50:52 +0000 (UTC)
IronPort-SDR: qZUgeWweAAkT+UGCT1/1mmT1I5zLAbVQ+QVU+857pjwIbo9dYuoyMJ/2Aw1Qpx9Xz22FF3325Y
 Xhkfk3THfhvg==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164743583"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="164743583"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga005.jf.intel.com ([10.7.209.41])
  by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:50:51 -0700
IronPort-SDR: r9rFWs4LxSt/DLhc0vk8qSUxzY/hR2W3QZLi44jmj/rGXS9Mfwq/Fn231GdfrO+iUZYF22umi/
 70MWS29lQZXg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="529052813"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:50:50 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 02/58] x86/pks/test: Add testing for global option
Date: Fri,  9 Oct 2020 12:49:37 -0700
Message-Id: <20201009195033.3208459-3-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

Now that PKS can be enabled globally (for all threads), add a test which
spawns a thread and tests the same PKS functionality.

The test enables/disables PKS in one thread while attempting to access the
page in another thread.  We use the same test array as in the 'local'
PKS testing.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 arch/x86/mm/fault.c |   4 ++
 lib/pks/pks_test.c  | 128 +++++++++++++++++++++++++++++++++++++++++---
 2 files changed, 124 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 4b4ff9efa298..4c74f52fbc23 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1108,6 +1108,10 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte,
 		if (global_pkey_is_enabled(pte, is_write, irq_state))
 			return 1;
 
+		/*
+		 * NOTE: This must be after the global_pkey_is_enabled() call
+		 * to allow the fixup code to be tested.
+		 */
 		if (handle_pks_testing(error_code, irq_state))
 			return 1;
 
diff --git a/lib/pks/pks_test.c b/lib/pks/pks_test.c
index 286c8b8457da..dfddccbe4cb6 100644
--- a/lib/pks/pks_test.c
+++ b/lib/pks/pks_test.c
@@ -154,7 +154,8 @@ static void check_exception(irqentry_state_t *irq_state)
 	}
 
 	/* Check the exception state */
-	if (!check_pkrs(test_armed_key, PKEY_DISABLE_ACCESS)) {
+	if (!check_pkrs(test_armed_key,
+			PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)) {
 		pr_err("     FAIL: PKRS cache and MSR\n");
 		test_exception_ctx->pass = false;
 	}
@@ -308,24 +309,29 @@ static int test_it(struct pks_test_ctx *ctx, struct pks_access_test *test, void
 	return ret;
 }
 
-static int run_access_test(struct pks_test_ctx *ctx,
-			   struct pks_access_test *test,
-			   void *ptr)
+static void set_protection(int pkey, enum pks_access_mode mode, bool global)
 {
-	switch (test->mode) {
+	switch (mode) {
 		case PKS_TEST_NO_ACCESS:
-			pks_mknoaccess(ctx->pkey, false);
+			pks_mknoaccess(pkey, global);
 			break;
 		case PKS_TEST_RDWR:
-			pks_mkrdwr(ctx->pkey, false);
+			pks_mkrdwr(pkey, global);
 			break;
 		case PKS_TEST_RDONLY:
-			pks_mkread(ctx->pkey, false);
+			pks_mkread(pkey, global);
 			break;
 		default:
 			pr_err("BUG in test invalid mode\n");
 			break;
 	}
+}
+
+static int run_access_test(struct pks_test_ctx *ctx,
+			   struct pks_access_test *test,
+			   void *ptr)
+{
+	set_protection(ctx->pkey, test->mode, false);
 
 	return test_it(ctx, test, ptr);
 }
@@ -516,6 +522,110 @@ static void run_exception_test(void)
 		 pass ? "PASS" : "FAIL");
 }
 
+struct shared_data {
+	struct mutex lock;
+	struct pks_test_ctx *ctx;
+	void *kmap_addr;
+	struct pks_access_test *test;
+};
+
+static int thread_main(void *d)
+{
+	struct shared_data *data = d;
+	struct pks_test_ctx *ctx = data->ctx;
+
+	while (!kthread_should_stop()) {
+		mutex_lock(&data->lock);
+		/*
+		 * Wait for the main thread to hand us the page.  We should
+		 * be spinning, so hopefully we have not picked up the global
+		 * value from being scheduled in.
+		 */
+		if (data->kmap_addr) {
+			if (test_it(ctx, data->test, data->kmap_addr))
+				ctx->pass = false;
+			data->kmap_addr = NULL;
+		}
+		mutex_unlock(&data->lock);
+	}
+
+	return 0;
+}
+
+static void run_thread_access_test(struct shared_data *data,
+				   struct pks_test_ctx *ctx,
+				   struct pks_access_test *test,
+				   void *ptr)
+{
+	set_protection(ctx->pkey, test->mode, true);
+
+	pr_info("checking...  mode %s; write %s\n",
+			get_mode_str(test->mode), test->write ? "TRUE" : "FALSE");
+
+	mutex_lock(&data->lock);
+	data->test = test;
+	data->kmap_addr = ptr;
+	mutex_unlock(&data->lock);
+
+	while (data->kmap_addr) {
+		msleep(10);
+	}
+}
+
+static void run_global_test(void)
+{
+	struct task_struct *other_task;
+	struct pks_test_ctx *ctx;
+	struct shared_data data;
+	bool pass = true;
+	void *ptr;
+	int i;
+
+	pr_info("     ***** BEGIN: global pkey checking\n");
+
+	/* Set up context, data page, and thread */
+	ctx = alloc_ctx("global pkey test");
+	if (IS_ERR(ctx)) {
+		pr_err("     FAIL: no context\n");
+		pass = false;
+		goto result;
+	}
+	ptr = alloc_test_page(ctx->pkey);
+	if (!ptr) {
+		pr_err("     FAIL: no vmalloc page\n");
+		pass = false;
+		goto free_context;
+	}
+	memset(&data, 0, sizeof(data));
+	mutex_init(&data.lock);
+	data.ctx = ctx;
+
+	other_task = kthread_run(thread_main, &data, "PKRS global test");
+	if (IS_ERR(other_task)) {
+		pr_err("     FAIL: Failed to start thread\n");
+		pass = false;
+		goto free_page;
+	}
+
+	/* Start testing */
+	ctx->pass = true;
+
+	for (i = 0; i < ARRAY_SIZE(pkey_test_ary); i++) {
+		run_thread_access_test(&data, ctx, &pkey_test_ary[i], ptr);
+	}
+
+	kthread_stop(other_task);
+	pass = ctx->pass;
+
+free_page:
+	vfree(ptr);
+free_context:
+	free_ctx(ctx);
+result:
+	pr_info("     ***** END: global pkey checking : %s\n",
+		 pass ? "PASS" : "FAIL");
+}
+
 static void run_all(void)
 {
 	struct pks_test_ctx *ctx[PKS_NUM_KEYS];
@@ -538,6 +648,8 @@ static void run_all(void)
 	}
 
 	run_exception_test();
+
+	run_global_test();
 }
 
 static void crash_it(void)
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5047.12893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPk-000186-GD; Fri, 09 Oct 2020 19:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5047.12893; Fri, 09 Oct 2020 19:50:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPk-00017z-CC; Fri, 09 Oct 2020 19:50:52 +0000
Received: by outflank-mailman (input) for mailman id 5047;
 Fri, 09 Oct 2020 19:50:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyPi-00017t-TP
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:50:50 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 60c96da0-c715-4d4e-8d52-8466329b7084;
 Fri, 09 Oct 2020 19:50:48 +0000 (UTC)
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:46 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:46 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyPi-00017t-TP
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:50:50 +0000
X-Inumbo-ID: 60c96da0-c715-4d4e-8d52-8466329b7084
Received: from mga09.intel.com (unknown [134.134.136.24])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 60c96da0-c715-4d4e-8d52-8466329b7084;
	Fri, 09 Oct 2020 19:50:48 +0000 (UTC)
IronPort-SDR: aw8sfLkmi4c3ekkTAJj2we0xzco62FXv6FoN/FoZXqSZzZKvEosMmPZVBXEqVdUXVCHW4gkGcC
 eJWxftTpOzPw==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165642809"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165642809"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
  by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:50:46 -0700
IronPort-SDR: qnBWOjzlCrAn0iz4juA1IxPDmIdlUS8zYMFKRiVxpP3UruuZtN+Up+nA/7WPjpw84o9yPhVhmH
 jUXVTF6t7/+g==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="298530884"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:50:46 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 01/58] x86/pks: Add a global pkrs option
Date: Fri,  9 Oct 2020 12:49:36 -0700
Message-Id: <20201009195033.3208459-2-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

Some users, such as kmap(), sometimes require PKS to be global.
However, updating all CPUs, and worse yet all threads, is expensive.

Introduce a global PKRS state which is checked at critical times to
allow the state to enable access when global PKS is required.  To
accomplish this with minimal locking, the code is carefully designed
around the following key concepts.

1) Borrow the idea of lazy TLB invalidations from the fault handler
   code.  When enabling PKS access we anticipate that other threads are
   not yet running.  However, if they are, we catch the fault and clean
   up the MSR value.

2) When disabling PKS access we force all MSR values across all CPUs.
   This is required to block access as soon as possible.[1]  However, it
   is key that we never attempt to update the per-task PKS values
   directly.  See next point.

3) Per-task PKS values never get updated with global PKS values.  This
   is key to avoiding locking requirements and the nearly intractable
   problem of trying to update every task in the system.  Here are a few
   key points.

   3a) The MSR value can be updated with the global PKS value if that
   global value happened to change while the task was running.

   3b) If the task was sleeping while the global PKS was updated then
   the global value is added in when tasks are scheduled.

   3c) If the global PKS value restricts access, the MSR is updated as
   soon as possible[1] and the thread value is not updated, which
   ensures the thread does not retain the elevated privileges after a
   context switch.

4) Follow on patches must be careful to preserve the separation of the
   thread PKRS value and the MSR value.

5) Access Disable on any individual pkey is turned into (Access Disable
   | Write Disable) to facilitate faster integration of the global value
   into the thread-local MSR through a simple '&' operation.  Doing
   otherwise would result in complicated individual bit manipulation for
   each pkey.

[1] There is a race condition which is deliberately left unhandled for
performance reasons.  It potentially allows a thread access until the
end of its time slice.  After the context switch the global value will
be restored.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 Documentation/core-api/protection-keys.rst |  11 +-
 arch/x86/entry/common.c                    |   7 +
 arch/x86/include/asm/pkeys.h               |   6 +-
 arch/x86/include/asm/pkeys_common.h        |   8 +-
 arch/x86/kernel/process.c                  |  74 +++++++-
 arch/x86/mm/fault.c                        | 189 ++++++++++++++++-----
 arch/x86/mm/pkeys.c                        |  88 ++++++++--
 include/linux/pkeys.h                      |   6 +-
 lib/pks/pks_test.c                         |  16 +-
 9 files changed, 329 insertions(+), 76 deletions(-)

diff --git a/Documentation/core-api/protection-keys.rst b/Documentation/core-api/protection-keys.rst
index c60366921d60..9e8a98653e13 100644
--- a/Documentation/core-api/protection-keys.rst
+++ b/Documentation/core-api/protection-keys.rst
@@ -121,9 +121,9 @@ mapping adds that mapping to the protection domain.
         int pks_key_alloc(const char * const pkey_user);
         #define PAGE_KERNEL_PKEY(pkey)
         #define _PAGE_KEY(pkey)
-        void pks_mknoaccess(int pkey);
-        void pks_mkread(int pkey);
-        void pks_mkrdwr(int pkey);
+        void pks_mknoaccess(int pkey, bool global);
+        void pks_mkread(int pkey, bool global);
+        void pks_mkrdwr(int pkey, bool global);
         void pks_key_free(int pkey);
 
 pks_key_alloc() allocates keys dynamically to allow better use of the limited
@@ -141,7 +141,10 @@ _PAGE_KEY().
 The pks_mk*() family of calls allows kernel users the ability to change the
 protections for the domain identified by the pkey specified.  3 states are
 available pks_mknoaccess(), pks_mkread(), and pks_mkrdwr() which set the access
-to none, read, and read/write respectively.
+to none, read, and read/write respectively.  'global' specifies that the
+protection should be set across all threads (logical CPUs), not just the
+currently running thread/CPU.  This increases the overhead of PKS and lessens
+the protection, so it should be used sparingly.
 
 Finally, pks_key_free() allows a user to return the key to the allocator for
 use by others.
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 324a8fd5ac10..86ad32e0095e 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -261,12 +261,19 @@ noinstr void idtentry_exit_nmi(struct pt_regs *regs, irqentry_state_t *irq_state
  * current running value and set the default PKRS value for the duration of the
  * exception.  Thus preventing exception handlers from having the elevated
  * access of the interrupted task.
+ *
+ * NOTE that the thread-saved PKRS must be preserved separately to ensure
+ * global overrides do not 'stick' on a thread.
  */
 noinstr void irq_save_pkrs(irqentry_state_t *state)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_PKS))
 		return;
 
+	/*
+	 * The thread_pkrs must be maintained separately to prevent global
+	 * overrides from 'sticking' on a thread.
+	 */
 	state->thread_pkrs = current->thread.saved_pkrs;
 	state->pkrs = this_cpu_read(pkrs_cache);
 	write_pkrs(INIT_PKRS_VALUE);
diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h
index 79952216474e..cae0153a5480 100644
--- a/arch/x86/include/asm/pkeys.h
+++ b/arch/x86/include/asm/pkeys.h
@@ -143,9 +143,9 @@ u32 update_pkey_val(u32 pk_reg, int pkey, unsigned int flags);
 int pks_key_alloc(const char *const pkey_user);
 void pks_key_free(int pkey);
 
-void pks_mknoaccess(int pkey);
-void pks_mkread(int pkey);
-void pks_mkrdwr(int pkey);
+void pks_mknoaccess(int pkey, bool global);
+void pks_mkread(int pkey, bool global);
+void pks_mkrdwr(int pkey, bool global);
 
 #endif /* CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */
 
diff --git a/arch/x86/include/asm/pkeys_common.h b/arch/x86/include/asm/pkeys_common.h
index 8961e2ddd6ff..e380679ba1bb 100644
--- a/arch/x86/include/asm/pkeys_common.h
+++ b/arch/x86/include/asm/pkeys_common.h
@@ -6,7 +6,12 @@
 #define PKR_WD_BIT 0x2
 #define PKR_BITS_PER_PKEY 2
 
-#define PKR_AD_KEY(pkey)	(PKR_AD_BIT << ((pkey) * PKR_BITS_PER_PKEY))
+/*
+ * We must define 11b as the default to make global overrides efficient.
+ * See arch/x86/kernel/process.c where the global pkrs is factored in during
+ * context switch.
+ */
+#define PKR_AD_KEY(pkey)	((PKR_WD_BIT | PKR_AD_BIT) << ((pkey) * PKR_BITS_PER_PKEY))
 
 /*
  * Define a default PKRS value for each task.
@@ -27,6 +32,7 @@
 #define        PKS_NUM_KEYS            16
 
 #ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS
+extern u32 pkrs_global_cache;
 DECLARE_PER_CPU(u32, pkrs_cache);
 noinstr void write_pkrs(u32 new_pkrs);
 #else
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index eb3a95a69392..58edd162d9cb 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -43,7 +43,7 @@
 #include <asm/io_bitmap.h>
 #include <asm/proto.h>
 #include <asm/frame.h>
-#include <asm/pkeys_common.h>
+#include <linux/pkeys.h>
 
 #include "process.h"
 
@@ -189,15 +189,83 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
 }
 
 #ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS
-DECLARE_PER_CPU(u32, pkrs_cache);
 static inline void pks_init_task(struct task_struct *tsk)
 {
 	/* New tasks get the most restrictive PKRS value */
 	tsk->thread.saved_pkrs = INIT_PKRS_VALUE;
 }
+
+extern u32 pkrs_global_cache;
+
+/**
+ * The global PKRS value can only increase access, because 01b and 11b both
+ * disable access.  The following truth table is our desired result for each of
+ * the pkeys when we add in the global permissions.
+ *
+ * 00 R/W    - Write enabled (all access)
+ * 10 Read   - write disabled (Read only)
+ * 01 NO Acc - access disabled
+ * 11 NO Acc - also access disabled
+ *
+ * local  global  desired   required
+ *                result    operation
+ * 00      00         00      &
+ * 00      10         00      &
+ * 00      01         00      &
+ * 00      11         00      &
+ *
+ * 10      00         00      &
+ * 10      10         10      &
+ * 10      01         10      ^ special case
+ * 10      11         10      &
+ *
+ * 01      00         00      &
+ * 01      10         10      ^ special case
+ * 01      01         01      &
+ * 01      11         01      &
+ *
+ * 11      00         00      &
+ * 11      10         10      &
+ * 11      01         01      &
+ * 11      11         11      &
+ *
+ * In order to eliminate the need to loop through each pkey and deal with the 2
+ * special cases above, we force all 01b values to 11b through the API, thus
+ * resulting in the simplified truth table below.
+ *
+ * 00 R/W    - Write enabled (all access)
+ * 10 Read   - write disabled (Read only)
+ * 01 NO Acc - access disabled
+ *    (Not allowed in the API; always use 11)
+ * 11 NO Acc - access disabled
+ *
+ * local  global  desired   effective
+ *                result    operation
+ * 00      00         00      &
+ * 00      10         00      &
+ * 00      11         00      &
+ * 00      11         00      &
+ *
+ * 10      00         00      &
+ * 10      10         10      &
+ * 10      11         10      &
+ * 10      11         10      &
+ *
+ * 11      00         00      &
+ * 11      10         10      &
+ * 11      11         11      &
+ * 11      11         11      &
+ *
+ * 11      00         00      &
+ * 11      10         10      &
+ * 11      11         11      &
+ * 11      11         11      &
+ *
+ * Thus we can simply 'AND' in the global pkrs value.
+ */
 static inline void pks_sched_in(void)
 {
-	write_pkrs(current->thread.saved_pkrs);
+	write_pkrs(current->thread.saved_pkrs & pkrs_global_cache);
 }
 #else
 static inline void pks_init_task(struct task_struct *tsk) { }
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index dd5af9399131..4b4ff9efa298 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -32,6 +32,8 @@
 #include <asm/pgtable_areas.h>		/* VMALLOC_START, ...		*/
 #include <asm/kvm_para.h>		/* kvm_handle_async_pf		*/
 
+#include <asm-generic/mman-common.h>
+
 #define CREATE_TRACE_POINTS
 #include <asm/trace/exceptions.h>
 
@@ -995,9 +997,124 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code,
 	}
 }
 
-static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
+#ifdef CONFIG_ARCH_HAS_SUPERVISOR_PKEYS
+/*
+ * Check if we have had a 'global' pkey update.  If so, handle this like a
+ * lazy TLB invalidation; fix up the local MSR and return.
+ *
+ * See arch/x86/kernel/process.c for the explanation on how global is handled
+ * with a simple '&' operation.
+ *
+ * Also we don't update the current thread saved_pkrs because we don't want the
+ * global value to 'stick' with the thread.  Rather we want this to be valid
+ * only for the remainder of this time slice.  For subsequent time slices the
+ * global value will be factored in during schedule; see arch/x86/kernel/process.c
+ *
+ * Finally, we have a trade-off between performance and forcing a restriction of
+ * permissions across all CPUs on a global update.
+ *
+ * Given the following window.
+ *
+ *          Global PKRS            CPU #0                CPU #1
+ *             cache                 MSR                   MSR
+ *
+ *                  |                  |                     |
+ *   Global         |----------\       |                     |
+ *   Restriction    |           ------------> read           |             <= T1
+ *   (on CPU #0)    |                  |       |             |
+ *    ------\       |                  |       |             |
+ *           ------>|                  |       |             |
+ *                  |                  |       |             |
+ *    Update CPU #1 |--------\         |       |             |
+ *                  |         --------------\  |             |
+ *                  |                  |     --|------------>|
+ *    Global remote |                  |       |             |
+ *     MSR update   |                  |       |             |
+ *     (CPU 2-n)    |                  |       |             |
+ *                  |-----> CPU's      |       v             |
+ *   local          |       (2-N)      |     local --\       |
+ *   update         |                  |     update   ------>|(Update      <= T2
+ *    ----------------\                |                     | Incorrect)
+ *                  |  -----------\    |                     |
+ *                  |              --->|(Update OK)          |
+ *         Context  |                  |                     |
+ *         Switch   |----------\       |                     |
+ *                  |           ------------> read           |
+ *                  |                  |       |             |
+ *                  |                  |       |             |
+ *                  |                  |       v             |
+ *                  |                  |     local --\       |
+ *                  |                  |     update   ------>|(Update
+ *                  |                  |                     | Correct)
+ *
+ * We allow for a larger window of the global pkey being open because global
+ * updates should be rare and we don't want to burden normal faults with having
+ * to read the global state.
+ */
+static bool global_pkey_is_enabled(pte_t *pte, bool is_write,
+				   irqentry_state_t *irq_state)
+{
+	u8 pkey = pte_flags_pkey(pte->pte);
+	int pkey_shift = pkey * PKR_BITS_PER_PKEY;
+	u32 mask = (((1 << PKR_BITS_PER_PKEY) - 1) << pkey_shift);
+	u32 global = READ_ONCE(pkrs_global_cache);
+	u32 val;
+
+	/* Return early if global access is not valid */
+	val = (global & mask) >> pkey_shift;
+	if ((val & PKR_AD_BIT) || (is_write && (val & PKR_WD_BIT)))
+		return false;
+
+	irq_state->pkrs &= global;
+
+	return true;
+}
+
+#else /* !CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */
+__always_inline bool global_pkey_is_enabled(pte_t *pte, bool is_write,
+					    irqentry_state_t *irq_state)
+{
+	return false;
+}
+#endif /* CONFIG_ARCH_HAS_SUPERVISOR_PKEYS */
+
+#ifdef CONFIG_PKS_TESTING
+bool pks_test_callback(irqentry_state_t *irq_state);
+static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state)
+{
+	/*
+	 * If we get a protection key exception it could be because we
+	 * are running the PKS test.  If so, pks_test_callback() will
+	 * clear the protection mechanism and return true to indicate
+	 * the fault was handled
+	 */
+	return pks_test_callback(irq_state);
+}
+#else /* !CONFIG_PKS_TESTING */
+static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state)
+{
+	return false;
+}
+#endif /* CONFIG_PKS_TESTING */
+
+
+static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte,
+				       irqentry_state_t *irq_state)
 {
-	if ((error_code & X86_PF_WRITE) && !pte_write(*pte))
+	bool is_write = (error_code & X86_PF_WRITE);
+
+	if (IS_ENABLED(CONFIG_ARCH_HAS_SUPERVISOR_PKEYS) &&
+	    error_code & X86_PF_PK) {
+		if (global_pkey_is_enabled(pte, is_write, irq_state))
+			return 1;
+
+		if (handle_pks_testing(error_code, irq_state))
+			return 1;
+
+		return 0;
+	}
+
+	if (is_write && !pte_write(*pte))
 		return 0;
 
 	if ((error_code & X86_PF_INSTR) && !pte_exec(*pte))
@@ -1007,7 +1124,7 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
 }
 
 /*
- * Handle a spurious fault caused by a stale TLB entry.
+ * Handle a spurious fault caused by a stale TLB entry or a lazy PKRS update.
  *
  * This allows us to lazily refresh the TLB when increasing the
  * permissions of a kernel page (RO -> RW or NX -> X).  Doing it
@@ -1022,13 +1139,19 @@ static int spurious_kernel_fault_check(unsigned long error_code, pte_t *pte)
  * There are no security implications to leaving a stale TLB when
  * increasing the permissions on a page.
  *
+ * Similarly, PKRS increases in permissions are done on a thread-local level.
+ * But if the caller indicates the permission should be allowed globally, we can
+ * lazily update only those threads which fault and avoid a global IPI MSR
+ * update.
+ *
  * Returns non-zero if a spurious fault was handled, zero otherwise.
  *
  * See Intel Developer's Manual Vol 3 Section 4.10.4.3, bullet 3
  * (Optional Invalidation).
  */
 static noinline int
-spurious_kernel_fault(unsigned long error_code, unsigned long address)
+spurious_kernel_fault(unsigned long error_code, unsigned long address,
+		      irqentry_state_t *irq_state)
 {
 	pgd_t *pgd;
 	p4d_t *p4d;
@@ -1038,17 +1161,19 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 	int ret;
 
 	/*
-	 * Only writes to RO or instruction fetches from NX may cause
-	 * spurious faults.
+	 * Only PKey faults or writes to RO or instruction fetches from NX may
+	 * cause spurious faults.
 	 *
 	 * These could be from user or supervisor accesses but the TLB
 	 * is only lazily flushed after a kernel mapping protection
 	 * change, so user accesses are not expected to cause spurious
 	 * faults.
 	 */
-	if (error_code != (X86_PF_WRITE | X86_PF_PROT) &&
-	    error_code != (X86_PF_INSTR | X86_PF_PROT))
-		return 0;
+	if (!(error_code & X86_PF_PK)) {
+		if (error_code != (X86_PF_WRITE | X86_PF_PROT) &&
+		    error_code != (X86_PF_INSTR | X86_PF_PROT))
+			return 0;
+	}
 
 	pgd = init_mm.pgd + pgd_index(address);
 	if (!pgd_present(*pgd))
@@ -1059,27 +1184,31 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 		return 0;
 
 	if (p4d_large(*p4d))
-		return spurious_kernel_fault_check(error_code, (pte_t *) p4d);
+		return spurious_kernel_fault_check(error_code, (pte_t *) p4d,
+						   irq_state);
 
 	pud = pud_offset(p4d, address);
 	if (!pud_present(*pud))
 		return 0;
 
 	if (pud_large(*pud))
-		return spurious_kernel_fault_check(error_code, (pte_t *) pud);
+		return spurious_kernel_fault_check(error_code, (pte_t *) pud,
+						   irq_state);
 
 	pmd = pmd_offset(pud, address);
 	if (!pmd_present(*pmd))
 		return 0;
 
 	if (pmd_large(*pmd))
-		return spurious_kernel_fault_check(error_code, (pte_t *) pmd);
+		return spurious_kernel_fault_check(error_code, (pte_t *) pmd,
+						   irq_state);
 
 	pte = pte_offset_kernel(pmd, address);
 	if (!pte_present(*pte))
 		return 0;
 
-	ret = spurious_kernel_fault_check(error_code, pte);
+	ret = spurious_kernel_fault_check(error_code, pte,
+					  irq_state);
 	if (!ret)
 		return 0;
 
@@ -1087,7 +1216,8 @@ spurious_kernel_fault(unsigned long error_code, unsigned long address)
 	 * Make sure we have permissions in PMD.
 	 * If not, then there's a bug in the page tables:
 	 */
-	ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd);
+	ret = spurious_kernel_fault_check(error_code, (pte_t *) pmd,
+					  irq_state);
 	WARN_ONCE(!ret, "PMD has incorrect permission bits\n");
 
 	return ret;
@@ -1150,25 +1280,6 @@ static int fault_in_kernel_space(unsigned long address)
 	return address >= TASK_SIZE_MAX;
 }
 
-#ifdef CONFIG_PKS_TESTING
-bool pks_test_callback(irqentry_state_t *irq_state);
-static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state)
-{
-	/*
-	 * If we get a protection key exception it could be because we
-	 * are running the PKS test.  If so, pks_test_callback() will
-	 * clear the protection mechanism and return true to indicate
-	 * the fault was handled.
-	 */
-	return (hw_error_code & X86_PF_PK) && pks_test_callback(irq_state);
-}
-#else
-static bool handle_pks_testing(unsigned long hw_error_code, irqentry_state_t *irq_state)
-{
-	return false;
-}
-#endif
-
 /*
  * Called for all faults where 'address' is part of the kernel address
  * space.  Might get called for faults that originate from *code* that
@@ -1186,9 +1297,6 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 	    !cpu_feature_enabled(X86_FEATURE_PKS))
 		WARN_ON_ONCE(hw_error_code & X86_PF_PK);
 
-	if (handle_pks_testing(hw_error_code, irq_state))
-		return;
-
 #ifdef CONFIG_X86_32
 	/*
 	 * We can fault-in kernel-space virtual memory on-demand. The
@@ -1220,8 +1328,11 @@ do_kern_addr_fault(struct pt_regs *regs, unsigned long hw_error_code,
 	}
 #endif
 
-	/* Was the fault spurious, caused by lazy TLB invalidation? */
-	if (spurious_kernel_fault(hw_error_code, address))
+	/*
+	 * Was the fault spurious; caused by lazy TLB invalidation or PKRS
+	 * update?
+	 */
+	if (spurious_kernel_fault(hw_error_code, address, irq_state))
 		return;
 
 	/* kprobes don't want to hook the spurious faults: */
@@ -1492,7 +1603,7 @@ DEFINE_IDTENTRY_RAW_ERRORCODE(exc_page_fault)
 	 *
 	 * Fingers crossed.
 	 *
-	 * The async #PF handling code takes care of idtentry handling
+	 * The async #PF handling code takes care of irqentry handling
 	 * itself.
 	 */
 	if (kvm_handle_async_pf(regs, (u32)address))
diff --git a/arch/x86/mm/pkeys.c b/arch/x86/mm/pkeys.c
index 2431c68ef752..a45893069877 100644
--- a/arch/x86/mm/pkeys.c
+++ b/arch/x86/mm/pkeys.c
@@ -263,33 +263,84 @@ noinstr void write_pkrs(u32 new_pkrs)
 }
 EXPORT_SYMBOL_GPL(write_pkrs);
 
+/*
+ * NOTE: The pkrs_global_cache is _never_ stored in the per-thread PKRS cache
+ * values [thread.saved_pkrs], by design.
+ *
+ * This allows us to invalidate access on running threads immediately upon a
+ * global invalidate.  Sleeping threads will not regain access because the
+ * global value is factored in during pks_sched_in().
+ */
+DEFINE_SPINLOCK(pkrs_global_cache_lock);
+u32 pkrs_global_cache = INIT_PKRS_VALUE;
+EXPORT_SYMBOL_GPL(pkrs_global_cache);
+
+static inline void update_global_pkrs(int pkey, unsigned long protection)
+{
+	int pkey_shift = pkey * PKR_BITS_PER_PKEY;
+	u32 mask = (((1 << PKR_BITS_PER_PKEY) - 1) << pkey_shift);
+	u32 old_val;
+
+	spin_lock(&pkrs_global_cache_lock);
+	old_val = (pkrs_global_cache & mask) >> pkey_shift;
+	pkrs_global_cache &= ~mask;
+	if (protection & PKEY_DISABLE_ACCESS)
+		pkrs_global_cache |= PKR_AD_BIT << pkey_shift;
+	if (protection & PKEY_DISABLE_WRITE)
+		pkrs_global_cache |= PKR_WD_BIT << pkey_shift;
+
+	/*
+	 * If we are restricting access relative to the old value, force the
+	 * update on all running CPUs.
+	 */
+	if (((old_val == 0) && protection) ||
+	    ((old_val & PKR_WD_BIT) && (protection & PKEY_DISABLE_ACCESS))) {
+		int cpu;
+
+		for_each_online_cpu(cpu) {
+			u32 *ptr = per_cpu_ptr(&pkrs_cache, cpu);
+
+			*ptr = update_pkey_val(*ptr, pkey, protection);
+			wrmsrl_on_cpu(cpu, MSR_IA32_PKRS, *ptr);
+			put_cpu_ptr(ptr);
+		}
+	}
+	spin_unlock(&pkrs_global_cache_lock);
+}
+
 /**
  * Do not call this directly, see pks_mk*() below.
  *
  * @pkey: Key for the domain to change
  * @protection: protection bits to be used
+ * @global: should this change be made globally or not.
  *
  * Protection utilizes the same protection bits specified for User pkeys
  *     PKEY_DISABLE_ACCESS
  *     PKEY_DISABLE_WRITE
  *
  */
-static inline void pks_update_protection(int pkey, unsigned long protection)
+static inline void pks_update_protection(int pkey, unsigned long protection,
+					 bool global)
 {
-	current->thread.saved_pkrs = update_pkey_val(current->thread.saved_pkrs,
-						     pkey, protection);
 	preempt_disable();
+	if (global)
+		update_global_pkrs(pkey, protection);
+
+	current->thread.saved_pkrs = update_pkey_val(current->thread.saved_pkrs, pkey,
+						     protection);
 	write_pkrs(current->thread.saved_pkrs);
+
 	preempt_enable();
 }
 
 /**
  * PKS access control functions
  *
- * Change the access of the domain specified by the pkey.  These are global
- * updates.  They only affects the current running thread.  It is undefined and
- * a bug for users to call this without having allocated a pkey and using it as
- * pkey here.
+ * Change the access of the domain specified by the pkey.  These may be global
+ * updates depending on the value of global.  It is undefined and a bug for
+ * users to call this without having allocated a pkey and using it as pkey
+ * here.
  *
  * pks_mknoaccess()
  *     Disable all access to the domain
@@ -299,23 +350,30 @@ static inline void pks_update_protection(int pkey, unsigned long protection)
  *     Make the domain Read/Write
  *
  * @pkey the pkey for which the access should change.
- *
+ * @global if true the access is enabled on all threads/logical cpus
  */
-void pks_mknoaccess(int pkey)
+void pks_mknoaccess(int pkey, bool global)
 {
-	pks_update_protection(pkey, PKEY_DISABLE_ACCESS);
+	/*
+	 * We force disable access to be 11b
+	 * (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
+	 * instead of 01b.  See arch/x86/kernel/process.c where the global pkrs
+	 * is factored in during context switch.
+	 */
+	pks_update_protection(pkey, PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE,
+			      global);
 }
 EXPORT_SYMBOL_GPL(pks_mknoaccess);
 
-void pks_mkread(int pkey)
+void pks_mkread(int pkey, bool global)
 {
-	pks_update_protection(pkey, PKEY_DISABLE_WRITE);
+	pks_update_protection(pkey, PKEY_DISABLE_WRITE, global);
 }
 EXPORT_SYMBOL_GPL(pks_mkread);
 
-void pks_mkrdwr(int pkey)
+void pks_mkrdwr(int pkey, bool global)
 {
-	pks_update_protection(pkey, 0);
+	pks_update_protection(pkey, 0, global);
 }
 EXPORT_SYMBOL_GPL(pks_mkrdwr);
 
@@ -377,7 +435,7 @@ void pks_key_free(int pkey)
 		return;
 
 	/* Restore to default of no access */
-	pks_mknoaccess(pkey);
+	pks_mknoaccess(pkey, true);
 	pks_key_users[pkey] = NULL;
 	__clear_bit(pkey, &pks_key_allocation_map);
 }
diff --git a/include/linux/pkeys.h b/include/linux/pkeys.h
index f9552bd9341f..8f3bfec83949 100644
--- a/include/linux/pkeys.h
+++ b/include/linux/pkeys.h
@@ -57,15 +57,15 @@ static inline int pks_key_alloc(const char * const pkey_user)
 static inline void pks_key_free(int pkey)
 {
 }
-static inline void pks_mknoaccess(int pkey)
+static inline void pks_mknoaccess(int pkey, bool global)
 {
 	WARN_ON_ONCE(1);
 }
-static inline void pks_mkread(int pkey)
+static inline void pks_mkread(int pkey, bool global)
 {
 	WARN_ON_ONCE(1);
 }
-static inline void pks_mkrdwr(int pkey)
+static inline void pks_mkrdwr(int pkey, bool global)
 {
 	WARN_ON_ONCE(1);
 }
diff --git a/lib/pks/pks_test.c b/lib/pks/pks_test.c
index d7dbf92527bd..286c8b8457da 100644
--- a/lib/pks/pks_test.c
+++ b/lib/pks/pks_test.c
@@ -163,12 +163,12 @@ static void check_exception(irqentry_state_t *irq_state)
 	 * Check we can update the value during exception without affecting the
 	 * calling thread.  The calling thread is checked after exception...
 	 */
-	pks_mkrdwr(test_armed_key);
+	pks_mkrdwr(test_armed_key, false);
 	if (!check_pkrs(test_armed_key, 0)) {
 		pr_err("     FAIL: exception did not change register to 0\n");
 		test_exception_ctx->pass = false;
 	}
-	pks_mknoaccess(test_armed_key);
+	pks_mknoaccess(test_armed_key, false);
 	if (!check_pkrs(test_armed_key, PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)) {
 		pr_err("     FAIL: exception did not change register to 0x3\n");
 		test_exception_ctx->pass = false;
@@ -314,13 +314,13 @@ static int run_access_test(struct pks_test_ctx *ctx,
 {
 	switch (test->mode) {
 		case PKS_TEST_NO_ACCESS:
-			pks_mknoaccess(ctx->pkey);
+			pks_mknoaccess(ctx->pkey, false);
 			break;
 		case PKS_TEST_RDWR:
-			pks_mkrdwr(ctx->pkey);
+			pks_mkrdwr(ctx->pkey, false);
 			break;
 		case PKS_TEST_RDONLY:
-			pks_mkread(ctx->pkey);
+			pks_mkread(ctx->pkey, false);
 			break;
 		default:
 			pr_err("BUG in test invalid mode\n");
@@ -476,7 +476,7 @@ static void run_exception_test(void)
 		goto free_context;
 	}
 
-	pks_mkread(ctx->pkey);
+	pks_mkread(ctx->pkey, false);
 
 	spin_lock(&test_lock);
 	WRITE_ONCE(test_exception_ctx, ctx);
@@ -556,7 +556,7 @@ static void crash_it(void)
 		return;
 	}
 
-	pks_mknoaccess(ctx->pkey);
+	pks_mknoaccess(ctx->pkey, false);
 
 	spin_lock(&test_lock);
 	WRITE_ONCE(test_armed_key, 0);
@@ -618,7 +618,7 @@ static ssize_t pks_write_file(struct file *file, const char __user *user_buf,
 	/* start of context switch test */
 	if (!strcmp(buf, "1")) {
 		/* Ensure a known state to test context switch */
-		pks_mknoaccess(ctx->pkey);
+		pks_mknoaccess(ctx->pkey, false);
 	}
 
 	/* After context switch msr should be restored */
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5050.12929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPy-0001Jx-KE; Fri, 09 Oct 2020 19:51:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5050.12929; Fri, 09 Oct 2020 19:51:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyPy-0001Jl-FW; Fri, 09 Oct 2020 19:51:06 +0000
Received: by outflank-mailman (input) for mailman id 5050;
 Fri, 09 Oct 2020 19:51:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyPx-00017t-Mt
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:05 +0000
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4282cc09-23b8-4aac-841b-d5b893266d1a;
 Fri, 09 Oct 2020 19:51:00 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:59 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:50:59 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyPx-00017t-Mt
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:05 +0000
X-Inumbo-ID: 4282cc09-23b8-4aac-841b-d5b893266d1a
Received: from mga04.intel.com (unknown [192.55.52.120])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4282cc09-23b8-4aac-841b-d5b893266d1a;
	Fri, 09 Oct 2020 19:51:00 +0000 (UTC)
IronPort-SDR: KUbg6N1aEvdq1lZWjuPMeLWn/4+X7BeeW6v3JeJfCOVi1eCZfenI9JAKJ7rBcdzgEGATKsqmPg
 /kXf02ZPRNEA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162893235"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="162893235"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
  by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:50:59 -0700
IronPort-SDR: /j9oTf64Lc/fwCVe0LT3Z3/pz5zJXnOJNQAbfqyJqoBSoEhIO3zt/eOhOuH8Xl1K1eBwx2q0zq
 2edyiKrU43UQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="519846797"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:50:59 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 04/58] kmap: Add stray access protection for device pages
Date: Fri,  9 Oct 2020 12:49:39 -0700
Message-Id: <20201009195033.3208459-5-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

Device managed pages may have additional protections.  These protections
need to be removed prior to valid use by kernel users.

Check for special treatment of device managed pages in kmap and take
action if needed.  We use kmap as an interface for generic kernel code
because under normal circumstances it would be a bug for general kernel
code not to use kmap prior to accessing kernel memory.  Therefore, this
should allow any valid kernel user to use these pages seamlessly.

Because of the critical nature of kmap, it must be pointed out that the
overhead on regular DRAM is carefully kept as low as possible.
Furthermore, the underlying MSR write required on protected device pages
is cheaper than a typical MSR write.

Specifically, WRMSR(MSR_IA32_PKRS) is not serializing but still
maintains ordering properties similar to WRPKRU.  The current SDM
section on PKRS needs updating, but its semantics should match those of
WRPKRU.  To quote from the WRPKRU text:

	WRPKRU will never execute speculatively. Memory accesses
	affected by PKRU register will not execute (even speculatively)
	until all prior executions of WRPKRU have completed execution
	and updated the PKRU register.

Still, this will make accessing pmem from the kernel more expensive, but
the overhead is minimized, and many pmem users access this memory through
user page mappings, which are not affected at all.

Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 include/linux/highmem.h | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 14e6202ce47f..2a9806e3b8d2 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -8,6 +8,7 @@
 #include <linux/mm.h>
 #include <linux/uaccess.h>
 #include <linux/hardirq.h>
+#include <linux/memremap.h>
 
 #include <asm/cacheflush.h>
 
@@ -31,6 +32,20 @@ static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
 
 #include <asm/kmap_types.h>
 
+static inline void dev_page_enable_access(struct page *page, bool global)
+{
+	if (!page_is_access_protected(page))
+		return;
+	dev_access_enable(global);
+}
+
+static inline void dev_page_disable_access(struct page *page, bool global)
+{
+	if (!page_is_access_protected(page))
+		return;
+	dev_access_disable(global);
+}
+
 #ifdef CONFIG_HIGHMEM
 extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
 extern void kunmap_atomic_high(void *kvaddr);
@@ -55,6 +70,11 @@ static inline void *kmap(struct page *page)
 	else
 		addr = kmap_high(page);
 	kmap_flush_tlb((unsigned long)addr);
+	/*
+	 * Even non-highmem pages may have additional access protections which
+	 * need to be checked and potentially enabled.
+	 */
+	dev_page_enable_access(page, true);
 	return addr;
 }
 
@@ -63,6 +83,11 @@ void kunmap_high(struct page *page);
 static inline void kunmap(struct page *page)
 {
 	might_sleep();
+	/*
+	 * Even non-highmem pages may have additional access protections which
+	 * need to be checked and potentially disabled.
+	 */
+	dev_page_disable_access(page, true);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -85,6 +110,7 @@ static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 {
 	preempt_disable();
 	pagefault_disable();
+	dev_page_enable_access(page, false);
 	if (!PageHighMem(page))
 		return page_address(page);
 	return kmap_atomic_high_prot(page, prot);
@@ -137,6 +163,7 @@ static inline unsigned long totalhigh_pages(void) { return 0UL; }
 static inline void *kmap(struct page *page)
 {
 	might_sleep();
+	dev_page_enable_access(page, true);
 	return page_address(page);
 }
 
@@ -146,6 +173,7 @@ static inline void kunmap_high(struct page *page)
 
 static inline void kunmap(struct page *page)
 {
+	dev_page_disable_access(page, true);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -155,6 +183,7 @@ static inline void *kmap_atomic(struct page *page)
 {
 	preempt_disable();
 	pagefault_disable();
+	dev_page_enable_access(page, false);
 	return page_address(page);
 }
 #define kmap_atomic_prot(page, prot)	kmap_atomic(page)
@@ -216,7 +245,8 @@ static inline void kmap_atomic_idx_pop(void)
 #define kunmap_atomic(addr)                                     \
 do {                                                            \
 	BUILD_BUG_ON(__same_type((addr), struct page *));       \
-	kunmap_atomic_high(addr);                                  \
+	dev_page_disable_access(kmap_to_page(addr), false);     \
+	kunmap_atomic_high(addr);                               \
 	pagefault_enable();                                     \
 	preempt_enable();                                       \
 } while (0)
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5051.12941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQ4-0001RO-4e; Fri, 09 Oct 2020 19:51:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5051.12941; Fri, 09 Oct 2020 19:51:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQ3-0001RD-Vz; Fri, 09 Oct 2020 19:51:11 +0000
Received: by outflank-mailman (input) for mailman id 5051;
 Fri, 09 Oct 2020 19:51:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQ2-00017t-NN
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:10 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0de2af32-c62b-4d94-8cd3-ba138d29b9a1;
 Fri, 09 Oct 2020 19:51:04 +0000 (UTC)
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:03 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:02 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQ2-00017t-NN
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:10 +0000
X-Inumbo-ID: 0de2af32-c62b-4d94-8cd3-ba138d29b9a1
Received: from mga07.intel.com (unknown [134.134.136.100])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0de2af32-c62b-4d94-8cd3-ba138d29b9a1;
	Fri, 09 Oct 2020 19:51:04 +0000 (UTC)
IronPort-SDR: fAg3CZRPj5M5ay7lEJ+vwH24VmPSxfUiSlyA+IlIDTT469ELlUSlK7ihhfTjKJVqM40AMW849Y
 0eWlg1CQ5lIw==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="229714944"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="229714944"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
  by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:03 -0700
IronPort-SDR: Zj7B8Ym6Opm+j7NgzILIxHvFMHLpJ+KeG5qMdRlGsqJzYlMhgHY+wciO+/VIwDrpca7gdal3tS
 LaUxL8Xa1x8w==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="343971995"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:02 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 05/58] kmap: Introduce k[un]map_thread
Date: Fri,  9 Oct 2020 12:49:40 -0700
Message-Id: <20201009195033.3208459-6-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

To correctly support the semantics of kmap() with Kernel protection keys
(PKS), kmap() may be required to set the protections on multiple
processors (globally).  Enabling PKS globally can be very expensive
depending on the requested operation.  Furthermore, enabling a domain
globally reduces the protection afforded by PKS.

Most kmap() callers (approximately 209 of 229) use the map within a single
thread and have no need for the protection domain to be enabled globally.
However, the remaining callers do not follow this pattern and, as best I can
tell, expect the mapping to be 'global' and available to any thread that may
access the mapping.[1]

We don't anticipate global mappings to pmem; however, in general there is a
danger in changing the semantics of kmap().  Effectively, getting the
semantics wrong would cause an unresolved page fault with little to no
information about why the failure occurred.

To resolve this a number of options were considered.

1) Attempt to change all the thread local kmap() calls to kmap_atomic()[2]
2) Introduce a flags parameter to kmap() to indicate if the mapping should be
   global or not
3) Change ~20 call sites to 'kmap_global()' to indicate that they require a
   global enablement of the pages.
4) Change ~209 call sites to 'kmap_thread()' to indicate that the mapping is to
   be used within that thread of execution only

Option 1 is simply not feasible.  Option 2 would require all of the call sites
of kmap() to change.  Option 3 seems like a good minimal change, but there is
a danger that new code may miss the semantic change of kmap() and not get the
behavior the developer intended.  Therefore, option 4 was chosen.

Subsequent patches will convert most (~90%) of the kmap() callers to this new
call, leaving about 10% of the existing kmap() callers to enable PKS globally.

Cc: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 include/linux/highmem.h | 34 ++++++++++++++++++++++++++--------
 1 file changed, 26 insertions(+), 8 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 2a9806e3b8d2..ef7813544719 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -60,7 +60,7 @@ static inline void kmap_flush_tlb(unsigned long addr) { }
 #endif
 
 void *kmap_high(struct page *page);
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	void *addr;
 
@@ -74,20 +74,20 @@ static inline void *kmap(struct page *page)
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially enabled.
 	 */
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return addr;
 }
 
 void kunmap_high(struct page *page);
 
-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
 	might_sleep();
 	/*
 	 * Even non-highmem pages may have additional access protections which
 	 * need to be checked and potentially disabled.
 	 */
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 	if (!PageHighMem(page))
 		return;
 	kunmap_high(page);
@@ -160,10 +160,10 @@ static inline struct page *kmap_to_page(void *addr)
 
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
-static inline void *kmap(struct page *page)
+static inline void *__kmap(struct page *page, bool global)
 {
 	might_sleep();
-	dev_page_enable_access(page, true);
+	dev_page_enable_access(page, global);
 	return page_address(page);
 }
 
@@ -171,9 +171,9 @@ static inline void kunmap_high(struct page *page)
 {
 }
 
-static inline void kunmap(struct page *page)
+static inline void __kunmap(struct page *page, bool global)
 {
-	dev_page_disable_access(page, true);
+	dev_page_disable_access(page, global);
 #ifdef ARCH_HAS_FLUSH_ON_KUNMAP
 	kunmap_flush_on_unmap(page_address(page));
 #endif
@@ -238,6 +238,24 @@ static inline void kmap_atomic_idx_pop(void)
 
 #endif
 
+static inline void *kmap(struct page *page)
+{
+	return __kmap(page, true);
+}
+static inline void kunmap(struct page *page)
+{
+	__kunmap(page, true);
+}
+
+static inline void *kmap_thread(struct page *page)
+{
+	return __kmap(page, false);
+}
+static inline void kunmap_thread(struct page *page)
+{
+	__kunmap(page, false);
+}
+
 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
  * kunmap_atomic() should get the return value of kmap_atomic, not the page.
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5052.12953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQ8-0001Wj-Fg; Fri, 09 Oct 2020 19:51:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5052.12953; Fri, 09 Oct 2020 19:51:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQ8-0001WZ-Am; Fri, 09 Oct 2020 19:51:16 +0000
Received: by outflank-mailman (input) for mailman id 5052;
 Fri, 09 Oct 2020 19:51:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQ7-00017t-NQ
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:15 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d2a52c7-0d23-41cc-9411-3318aa65d277;
 Fri, 09 Oct 2020 19:51:07 +0000 (UTC)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:07 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:06 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQ7-00017t-NQ
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:15 +0000
X-Inumbo-ID: 5d2a52c7-0d23-41cc-9411-3318aa65d277
Received: from mga07.intel.com (unknown [134.134.136.100])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5d2a52c7-0d23-41cc-9411-3318aa65d277;
	Fri, 09 Oct 2020 19:51:07 +0000 (UTC)
IronPort-SDR: 1WCralPpYjjUTmyOzWUbH/4mS4DiGRaW1doz1tehOFB5vwTYiOZATz7sWVT+W0Q2R3V6PSwTXS
 jjEqDNDEQprA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="229714953"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="229714953"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga006.jf.intel.com ([10.7.209.51])
  by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:07 -0700
IronPort-SDR: q86t8daZNt+V6d8RNukEYKSCMEs+oCo1U1vd6Yq1DOlnqrvHVUOvZ3CGMAsBzz1plldsK41yyY
 6uUXC42AACQg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="317146961"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:06 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Steven Rostedt <rostedt@goodmis.org>,
	Ben Segall <bsegall@google.com>,
	Mel Gorman <mgorman@suse.de>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 06/58] kmap: Introduce k[un]map_thread debugging
Date: Fri,  9 Oct 2020 12:49:41 -0700
Message-Id: <20201009195033.3208459-7-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

Most kmap() callers use the map within a single thread and have no need
for the protection domain to be enabled globally.

To differentiate these kmap users, new k[un]map_thread() calls were
introduced which are thread local.

To aid in debugging the new use of kmap_thread(), add a reference count,
a check on that count, and tracing to identify where mapping errors occur.

Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 include/linux/highmem.h            |  5 +++
 include/linux/sched.h              |  5 +++
 include/trace/events/kmap_thread.h | 56 ++++++++++++++++++++++++++++++
 init/init_task.c                   |  3 ++
 kernel/fork.c                      | 15 ++++++++
 lib/Kconfig.debug                  |  8 +++++
 mm/debug.c                         | 23 ++++++++++++
 7 files changed, 115 insertions(+)
 create mode 100644 include/trace/events/kmap_thread.h

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index ef7813544719..22d1c000802e 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -247,6 +247,10 @@ static inline void kunmap(struct page *page)
 	__kunmap(page, true);
 }
 
+#ifdef CONFIG_DEBUG_KMAP_THREAD
+void *kmap_thread(struct page *page);
+void kunmap_thread(struct page *page);
+#else
 static inline void *kmap_thread(struct page *page)
 {
 	return __kmap(page, false);
@@ -255,6 +259,7 @@ static inline void kunmap_thread(struct page *page)
 {
 	__kunmap(page, false);
 }
+#endif
 
 /*
  * Prevent people trying to call kunmap_atomic() as if it were kunmap()
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 25d97ab6c757..4627ea4a49e6 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1318,6 +1318,11 @@ struct task_struct {
 #ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
 	unsigned int			dev_page_access_ref;
 #endif
+
+#ifdef CONFIG_DEBUG_KMAP_THREAD
+	unsigned int			kmap_thread_cnt;
+#endif
+
 	/*
 	 * New fields for task_struct should be added above here, so that
 	 * they are included in the randomized portion of task_struct.
diff --git a/include/trace/events/kmap_thread.h b/include/trace/events/kmap_thread.h
new file mode 100644
index 000000000000..e7143cfe0daf
--- /dev/null
+++ b/include/trace/events/kmap_thread.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB */
+
+/*
+ * Copyright (c) 2020 Intel Corporation.  All rights reserved.
+ *
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM kmap_thread
+
+#if !defined(_TRACE_KMAP_THREAD_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_KMAP_THREAD_H
+
+#include <linux/tracepoint.h>
+
+DECLARE_EVENT_CLASS(kmap_thread_template,
+	TP_PROTO(struct task_struct *tsk, struct page *page,
+		 void *caller_addr, int cnt),
+	TP_ARGS(tsk, page, caller_addr, cnt),
+
+	TP_STRUCT__entry(
+		__field(int, pid)
+		__field(struct page *, page)
+		__field(void *, caller_addr)
+		__field(int, cnt)
+	),
+
+	TP_fast_assign(
+		__entry->pid = tsk->pid;
+		__entry->page = page;
+		__entry->caller_addr = caller_addr;
+		__entry->cnt = cnt;
+	),
+
+	TP_printk("PID %d; (%d) %pS %p",
+		__entry->pid,
+		__entry->cnt,
+		__entry->caller_addr,
+		__entry->page
+	)
+);
+
+DEFINE_EVENT(kmap_thread_template, kmap_thread,
+	TP_PROTO(struct task_struct *tsk, struct page *page,
+		 void *caller_addr, int cnt),
+	TP_ARGS(tsk, page, caller_addr, cnt));
+
+DEFINE_EVENT(kmap_thread_template, kunmap_thread,
+	TP_PROTO(struct task_struct *tsk, struct page *page,
+		 void *caller_addr, int cnt),
+	TP_ARGS(tsk, page, caller_addr, cnt));
+
+
+#endif /* _TRACE_KMAP_THREAD_H */
+
+#include <trace/define_trace.h>
diff --git a/init/init_task.c b/init/init_task.c
index 9b39f25de59b..19f09965eb34 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -212,6 +212,9 @@ struct task_struct init_task
 #ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
 	.dev_page_access_ref = 0,
 #endif
+#ifdef CONFIG_DEBUG_KMAP_THREAD
+	.kmap_thread_cnt = 0,
+#endif
 };
 EXPORT_SYMBOL(init_task);
 
diff --git a/kernel/fork.c b/kernel/fork.c
index b6a3ee328a89..2c66e49b7614 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -722,6 +722,17 @@ static inline void put_signal_struct(struct signal_struct *sig)
 		free_signal_struct(sig);
 }
 
+#ifdef CONFIG_DEBUG_KMAP_THREAD
+static void check_outstanding_kmap_thread(struct task_struct *tsk)
+{
+	if (tsk->kmap_thread_cnt)
+		pr_warn("WARNING: PID %d; Failed to kunmap_thread() [cnt %d]\n",
+			tsk->pid, tsk->kmap_thread_cnt);
+}
+#else
+static void check_outstanding_kmap_thread(struct task_struct *tsk) { }
+#endif
+
 void __put_task_struct(struct task_struct *tsk)
 {
 	WARN_ON(!tsk->exit_state);
@@ -734,6 +745,7 @@ void __put_task_struct(struct task_struct *tsk)
 	exit_creds(tsk);
 	delayacct_tsk_free(tsk);
 	put_signal_struct(tsk->signal);
+	check_outstanding_kmap_thread(tsk);
 
 	if (!profile_handoff_task(tsk))
 		free_task(tsk);
@@ -943,6 +955,9 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 #endif
 #ifdef CONFIG_ZONE_DEVICE_ACCESS_PROTECTION
 	tsk->dev_page_access_ref = 0;
+#endif
+#ifdef CONFIG_DEBUG_KMAP_THREAD
+	tsk->kmap_thread_cnt = 0;
 #endif
 	return tsk;
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index f015c09ba5a1..6507b43d5b0c 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -858,6 +858,14 @@ config DEBUG_HIGHMEM
 	  This option enables additional error checking for high memory
 	  systems.  Disable for production systems.
 
+config DEBUG_KMAP_THREAD
+	bool "Kmap debugging"
+	depends on DEBUG_KERNEL
+	help
+	  This option enables additional error checking for kernel mapping code
+	  specifically the k[un]map_thread() calls.  Disable for production
+	  systems.
+
 config HAVE_DEBUG_STACKOVERFLOW
 	bool
 
diff --git a/mm/debug.c b/mm/debug.c
index ca8d1cacdecc..68d186f3570e 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -320,3 +320,26 @@ void page_init_poison(struct page *page, size_t size)
 }
 EXPORT_SYMBOL_GPL(page_init_poison);
 #endif		/* CONFIG_DEBUG_VM */
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/kmap_thread.h>
+
+#ifdef CONFIG_DEBUG_KMAP_THREAD
+void *kmap_thread(struct page *page)
+{
+	trace_kmap_thread(current, page, __builtin_return_address(0),
+			  current->kmap_thread_cnt);
+	current->kmap_thread_cnt++;
+	return __kmap(page, false);
+}
+EXPORT_SYMBOL_GPL(kmap_thread);
+
+void kunmap_thread(struct page *page)
+{
+	__kunmap(page, false);
+	current->kmap_thread_cnt--;
+	trace_kunmap_thread(current, page, __builtin_return_address(0),
+			    current->kmap_thread_cnt);
+}
+EXPORT_SYMBOL_GPL(kunmap_thread);
+#endif
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5056.12965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQG-0001fP-Qr; Fri, 09 Oct 2020 19:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5056.12965; Fri, 09 Oct 2020 19:51:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQG-0001fG-Mu; Fri, 09 Oct 2020 19:51:24 +0000
Received: by outflank-mailman (input) for mailman id 5056;
 Fri, 09 Oct 2020 19:51:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQF-0001du-65
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:23 +0000
Received: from mga01.intel.com (unknown [192.55.52.88])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23148f64-b11b-4113-9b7b-712fcbfbd8a0;
 Fri, 09 Oct 2020 19:51:20 +0000 (UTC)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:19 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:19 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQF-0001du-65
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:23 +0000
X-Inumbo-ID: 23148f64-b11b-4113-9b7b-712fcbfbd8a0
Received: from mga01.intel.com (unknown [192.55.52.88])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 23148f64-b11b-4113-9b7b-712fcbfbd8a0;
	Fri, 09 Oct 2020 19:51:20 +0000 (UTC)
IronPort-SDR: wm/KpE75h4gxsfD3UkW9WKz4qcZTqNn7FpVMX+rG7OjPuhmnSOFI7Lhz/P6k5yuzyBTreKQ41D
 U0M5R4qPNU8A==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976103"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="182976103"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: 5T2OCVzeHkG6wr08UcIl5CMQca5a2EWHDQvUbsMI5eegPrFwWedcb1Yb3fuBoEZmXZa2FGZF8Z
 Z3c1XDGR3/oA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="354958923"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	David Airlie <airlied@linux.ie>,
	Daniel Vetter <daniel@ffwll.ch>,
	Patrik Jakobsson <patrik.r.jakobsson@gmail.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 09/58] drivers/gpu: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:44 -0700
Message-Id: <20201009195033.3208459-10-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls in the gpu stack are localized to a single thread.
To avoid the overhead of global PKRS updates, use the new kmap_thread()
call.

Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c              | 12 ++++++------
 drivers/gpu/drm/gma500/gma_display.c                 |  4 ++--
 drivers/gpu/drm/gma500/mmu.c                         | 10 +++++-----
 drivers/gpu/drm/i915/gem/i915_gem_shmem.c            |  4 ++--
 .../gpu/drm/i915/gem/selftests/i915_gem_context.c    |  4 ++--
 drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c   |  8 ++++----
 drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c         |  4 ++--
 drivers/gpu/drm/i915/gt/intel_gtt.c                  |  4 ++--
 drivers/gpu/drm/i915/gt/shmem_utils.c                |  4 ++--
 drivers/gpu/drm/i915/i915_gem.c                      |  8 ++++----
 drivers/gpu/drm/i915/i915_gpu_error.c                |  4 ++--
 drivers/gpu/drm/i915/selftests/i915_perf.c           |  4 ++--
 drivers/gpu/drm/radeon/radeon_ttm.c                  |  4 ++--
 13 files changed, 37 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 978bae731398..bd564bccb7a3 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -2437,11 +2437,11 @@ static ssize_t amdgpu_ttm_gtt_read(struct file *f, char __user *buf,
 
 		page = adev->gart.pages[p];
 		if (page) {
-			ptr = kmap(page);
+			ptr = kmap_thread(page);
 			ptr += off;
 
 			r = copy_to_user(buf, ptr, cur_size);
-			kunmap(adev->gart.pages[p]);
+			kunmap_thread(adev->gart.pages[p]);
 		} else
 			r = clear_user(buf, cur_size);
 
@@ -2507,9 +2507,9 @@ static ssize_t amdgpu_iomem_read(struct file *f, char __user *buf,
 		if (p->mapping != adev->mman.bdev.dev_mapping)
 			return -EPERM;
 
-		ptr = kmap(p);
+		ptr = kmap_thread(p);
 		r = copy_to_user(buf, ptr + off, bytes);
-		kunmap(p);
+		kunmap_thread(p);
 		if (r)
 			return -EFAULT;
 
@@ -2558,9 +2558,9 @@ static ssize_t amdgpu_iomem_write(struct file *f, const char __user *buf,
 		if (p->mapping != adev->mman.bdev.dev_mapping)
 			return -EPERM;
 
-		ptr = kmap(p);
+		ptr = kmap_thread(p);
 		r = copy_from_user(ptr + off, buf, bytes);
-		kunmap(p);
+		kunmap_thread(p);
 		if (r)
 			return -EFAULT;
 
diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c
index 3df6d6e850f5..35f4e55c941f 100644
--- a/drivers/gpu/drm/gma500/gma_display.c
+++ b/drivers/gpu/drm/gma500/gma_display.c
@@ -400,9 +400,9 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
 		/* Copy the cursor to cursor mem */
 		tmp_dst = dev_priv->vram_addr + cursor_gt->offset;
 		for (i = 0; i < cursor_pages; i++) {
-			tmp_src = kmap(gt->pages[i]);
+			tmp_src = kmap_thread(gt->pages[i]);
 			memcpy(tmp_dst, tmp_src, PAGE_SIZE);
-			kunmap(gt->pages[i]);
+			kunmap_thread(gt->pages[i]);
 			tmp_dst += PAGE_SIZE;
 		}
 
diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c
index 505044c9a673..fba7a3a461fd 100644
--- a/drivers/gpu/drm/gma500/mmu.c
+++ b/drivers/gpu/drm/gma500/mmu.c
@@ -192,20 +192,20 @@ struct psb_mmu_pd *psb_mmu_alloc_pd(struct psb_mmu_driver *driver,
 		pd->invalid_pte = 0;
 	}
 
-	v = kmap(pd->dummy_pt);
+	v = kmap_thread(pd->dummy_pt);
 	for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i)
 		v[i] = pd->invalid_pte;
 
-	kunmap(pd->dummy_pt);
+	kunmap_thread(pd->dummy_pt);
 
-	v = kmap(pd->p);
+	v = kmap_thread(pd->p);
 	for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i)
 		v[i] = pd->invalid_pde;
 
-	kunmap(pd->p);
+	kunmap_thread(pd->p);
 
-	clear_page(kmap(pd->dummy_page));
-	kunmap(pd->dummy_page);
+	clear_page(kmap_thread(pd->dummy_page));
+	kunmap_thread(pd->dummy_page);
 
 	pd->tables = vmalloc_user(sizeof(struct psb_mmu_pt *) * 1024);
 	if (!pd->tables)
diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
index 38113d3c0138..274424795fb7 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
@@ -566,9 +566,9 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv,
 		if (err < 0)
 			goto fail;
 
-		vaddr = kmap(page);
+		vaddr = kmap_thread(page);
 		memcpy(vaddr, data, len);
-		kunmap(page);
+		kunmap_thread(page);
 
 		err = pagecache_write_end(file, file->f_mapping,
 					  offset, len, len,
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
index 7ffc3c751432..b466c677d007 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
@@ -1754,7 +1754,7 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
 		return -EINVAL;
 	}
 
-	vaddr = kmap(page);
+	vaddr = kmap_thread(page);
 	if (!vaddr) {
 		pr_err("No (mappable) scratch page!\n");
 		return -EINVAL;
@@ -1765,7 +1765,7 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
 		pr_err("Inconsistent initial state of scratch page!\n");
 		err = -EINVAL;
 	}
-	kunmap(page);
+	kunmap_thread(page);
 
 	return err;
 }
diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
index 9c7402ce5bf9..447df22e2e06 100644
--- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
+++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
@@ -143,7 +143,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
 	intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt);
 
 	p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
-	cpu = kmap(p) + offset_in_page(offset);
+	cpu = kmap_thread(p) + offset_in_page(offset);
 	drm_clflush_virt_range(cpu, sizeof(*cpu));
 	if (*cpu != (u32)page) {
 		pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
@@ -161,7 +161,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
 	}
 	*cpu = 0;
 	drm_clflush_virt_range(cpu, sizeof(*cpu));
-	kunmap(p);
+	kunmap_thread(p);
 
 out:
 	__i915_vma_put(vma);
@@ -236,7 +236,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
 		intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt);
 
 		p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
-		cpu = kmap(p) + offset_in_page(offset);
+		cpu = kmap_thread(p) + offset_in_page(offset);
 		drm_clflush_virt_range(cpu, sizeof(*cpu));
 		if (*cpu != (u32)page) {
 			pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
@@ -254,7 +254,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
 		}
 		*cpu = 0;
 		drm_clflush_virt_range(cpu, sizeof(*cpu));
-		kunmap(p);
+		kunmap_thread(p);
 		if (err)
 			return err;
 
diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
index 7fb36b12fe7a..38da348282f1 100644
--- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
+++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
@@ -731,7 +731,7 @@ static void swizzle_page(struct page *page)
 	char *vaddr;
 	int i;
 
-	vaddr = kmap(page);
+	vaddr = kmap_thread(page);
 
 	for (i = 0; i < PAGE_SIZE; i += 128) {
 		memcpy(temp, &vaddr[i], 64);
@@ -739,7 +739,7 @@ static void swizzle_page(struct page *page)
 		memcpy(&vaddr[i + 64], temp, 64);
 	}
 
-	kunmap(page);
+	kunmap_thread(page);
 }
 
 /**
diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
index 2a72cce63fd9..4cfb24e9ed62 100644
--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
+++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
@@ -312,9 +312,9 @@ static void poison_scratch_page(struct page *page, unsigned long size)
 	do {
 		void *vaddr;
 
-		vaddr = kmap(page);
+		vaddr = kmap_thread(page);
 		memset(vaddr, POISON_FREE, PAGE_SIZE);
-		kunmap(page);
+		kunmap_thread(page);
 
 		page = pfn_to_page(page_to_pfn(page) + 1);
 		size -= PAGE_SIZE;
diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
index 43c7acbdc79d..a40d3130cebf 100644
--- a/drivers/gpu/drm/i915/gt/shmem_utils.c
+++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
@@ -142,12 +142,12 @@ static int __shmem_rw(struct file *file, loff_t off,
 		if (IS_ERR(page))
 			return PTR_ERR(page);
 
-		vaddr = kmap(page);
+		vaddr = kmap_thread(page);
 		if (write)
 			memcpy(vaddr + offset_in_page(off), ptr, this);
 		else
 			memcpy(ptr, vaddr + offset_in_page(off), this);
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 
 		len -= this;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 9aa3066cb75d..cae8300fd224 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -312,14 +312,14 @@ shmem_pread(struct page *page, int offset, int len, char __user *user_data,
 	char *vaddr;
 	int ret;
 
-	vaddr = kmap(page);
+	vaddr = kmap_thread(page);
 
 	if (needs_clflush)
 		drm_clflush_virt_range(vaddr + offset, len);
 
 	ret = __copy_to_user(user_data, vaddr + offset, len);
 
-	kunmap(page);
+	kunmap_thread(page);
 
 	return ret ? -EFAULT : 0;
 }
@@ -708,7 +708,7 @@ shmem_pwrite(struct page *page, int offset, int len, char __user *user_data,
 	char *vaddr;
 	int ret;
 
-	vaddr = kmap(page);
+	vaddr = kmap_thread(page);
 
 	if (needs_clflush_before)
 		drm_clflush_virt_range(vaddr + offset, len);
@@ -717,7 +717,7 @@ shmem_pwrite(struct page *page, int offset, int len, char __user *user_data,
 	if (!ret && needs_clflush_after)
 		drm_clflush_virt_range(vaddr + offset, len);
 
-	kunmap(page);
+	kunmap_thread(page);
 
 	return ret ? -EFAULT : 0;
 }
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 3e6cbb0d1150..aecd469b6b6e 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1058,9 +1058,9 @@ i915_vma_coredump_create(const struct intel_gt *gt,
 
 			drm_clflush_pages(&page, 1);
 
-			s = kmap(page);
+			s = kmap_thread(page);
 			ret = compress_page(compress, s, dst, false);
-			kunmap(page);
+			kunmap_thread(page);
 
 			drm_clflush_pages(&page, 1);
 
diff --git a/drivers/gpu/drm/i915/selftests/i915_perf.c b/drivers/gpu/drm/i915/selftests/i915_perf.c
index c2d001d9c0ec..7f7ef2d056f4 100644
--- a/drivers/gpu/drm/i915/selftests/i915_perf.c
+++ b/drivers/gpu/drm/i915/selftests/i915_perf.c
@@ -307,7 +307,7 @@ static int live_noa_gpr(void *arg)
 	}
 
 	/* Poison the ce->vm so we detect writes not to the GGTT gt->scratch */
-	scratch = kmap(ce->vm->scratch[0].base.page);
+	scratch = kmap_thread(ce->vm->scratch[0].base.page);
 	memset(scratch, POISON_FREE, PAGE_SIZE);
 
 	rq = intel_context_create_request(ce);
@@ -405,7 +405,7 @@ static int live_noa_gpr(void *arg)
 out_rq:
 	i915_request_put(rq);
 out_ce:
-	kunmap(ce->vm->scratch[0].base.page);
+	kunmap_thread(ce->vm->scratch[0].base.page);
 	intel_context_put(ce);
 out:
 	stream_destroy(stream);
diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index 004344dce140..0aba0cac51e1 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -1013,11 +1013,11 @@ static ssize_t radeon_ttm_gtt_read(struct file *f, char __user *buf,
 
 		page = rdev->gart.pages[p];
 		if (page) {
-			ptr = kmap(page);
+			ptr = kmap_thread(page);
 			ptr += off;
 
 			r = copy_to_user(buf, ptr, cur_size);
-			kunmap(rdev->gart.pages[p]);
+			kunmap_thread(rdev->gart.pages[p]);
 		} else
 			r = clear_user(buf, cur_size);
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5057.12977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQJ-0001jT-AI; Fri, 09 Oct 2020 19:51:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5057.12977; Fri, 09 Oct 2020 19:51:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQJ-0001jJ-6B; Fri, 09 Oct 2020 19:51:27 +0000
Received: by outflank-mailman (input) for mailman id 5057;
 Fri, 09 Oct 2020 19:51:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQH-00017t-Nq
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:25 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6eb2b790-532f-4c20-9458-3ca15fe10dd4;
 Fri, 09 Oct 2020 19:51:12 +0000 (UTC)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:11 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:09 -0700
X-Inumbo-ID: 6eb2b790-532f-4c20-9458-3ca15fe10dd4
IronPort-SDR: NR9N6xiFW+t/8G8yY3Q9fs/VzfLsE90NMCOhK2w0F/9yvvLNWB/jTDcdKz0LujAZkCrQ4deT8A
 fIjKt9UJL/+A==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363388"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="153363388"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: 7ZsxuYrd4RIh+I9L87qE8QKpDxHdxlIArpw96UL8x0WQ0i4hLag3NZ1m816qhb9i0FMTl4f1za
 dG2KA2sC/NWA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="312652519"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Jens Axboe <axboe@kernel.dk>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 07/58] drivers/drbd: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:42 -0700
Message-Id: <20201009195033.3208459-8-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this driver are localized to a single thread.  To
avoid the overhead of global PKRS updates, use the new kmap_thread()
call.

Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/block/drbd/drbd_main.c     |  4 ++--
 drivers/block/drbd/drbd_receiver.c | 12 ++++++------
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 573dbf6f0c31..f0d0c6b0745e 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -1532,9 +1532,9 @@ static int _drbd_no_send_page(struct drbd_peer_device *peer_device, struct page
 	int err;
 
 	socket = peer_device->connection->data.socket;
-	addr = kmap(page) + offset;
+	addr = kmap_thread(page) + offset;
 	err = drbd_send_all(peer_device->connection, socket, addr, size, msg_flags);
-	kunmap(page);
+	kunmap_thread(page);
 	if (!err)
 		peer_device->device->send_cnt += size >> 9;
 	return err;
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index 422363daa618..4704bc0564e2 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -1951,13 +1951,13 @@ read_in_block(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
 	page = peer_req->pages;
 	page_chain_for_each(page) {
 		unsigned len = min_t(int, ds, PAGE_SIZE);
-		data = kmap(page);
+		data = kmap_thread(page);
 		err = drbd_recv_all_warn(peer_device->connection, data, len);
 		if (drbd_insert_fault(device, DRBD_FAULT_RECEIVE)) {
 			drbd_err(device, "Fault injection: Corrupting data on receive\n");
 			data[0] = data[0] ^ (unsigned long)-1;
 		}
-		kunmap(page);
+		kunmap_thread(page);
 		if (err) {
 			drbd_free_peer_req(device, peer_req);
 			return NULL;
@@ -1992,7 +1992,7 @@ static int drbd_drain_block(struct drbd_peer_device *peer_device, int data_size)
 
 	page = drbd_alloc_pages(peer_device, 1, 1);
 
-	data = kmap(page);
+	data = kmap_thread(page);
 	while (data_size) {
 		unsigned int len = min_t(int, data_size, PAGE_SIZE);
 
@@ -2001,7 +2001,7 @@ static int drbd_drain_block(struct drbd_peer_device *peer_device, int data_size)
 			break;
 		data_size -= len;
 	}
-	kunmap(page);
+	kunmap_thread(page);
 	drbd_free_pages(peer_device->device, page, 0);
 	return err;
 }
@@ -2033,10 +2033,10 @@ static int recv_dless_read(struct drbd_peer_device *peer_device, struct drbd_req
 	D_ASSERT(peer_device->device, sector == bio->bi_iter.bi_sector);
 
 	bio_for_each_segment(bvec, bio, iter) {
-		void *mapped = kmap(bvec.bv_page) + bvec.bv_offset;
+		void *mapped = kmap_thread(bvec.bv_page) + bvec.bv_offset;
 		expect = min_t(int, data_size, bvec.bv_len);
 		err = drbd_recv_all_warn(peer_device->connection, mapped, expect);
-		kunmap(bvec.bv_page);
+		kunmap_thread(bvec.bv_page);
 		if (err)
 			return err;
 		data_size -= expect;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5059.12989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQM-0001oZ-OX; Fri, 09 Oct 2020 19:51:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5059.12989; Fri, 09 Oct 2020 19:51:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQM-0001oQ-Jl; Fri, 09 Oct 2020 19:51:30 +0000
Received: by outflank-mailman (input) for mailman id 5059;
 Fri, 09 Oct 2020 19:51:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQL-0001nZ-SQ
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:29 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea353239-8838-4285-98ec-2159d2193702;
 Fri, 09 Oct 2020 19:51:27 +0000 (UTC)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:27 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:25 -0700
X-Inumbo-ID: ea353239-8838-4285-98ec-2159d2193702
IronPort-SDR: zN+BPdbSz58gNhMlaTO57LfM2A3FvsmcOIfqRBDPJG4d31G/7sexBRpKrLNyZv1BkBWUCIjw0p
 nEaUVK7qAHXA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363415"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="153363415"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: zNYl6viNpTmcuoOUFeW6ebDlF6mcWw7xHrLDRpkX9eSJEFyww70EqY5cZFdXSl9ODsqjoVqJW/
 EpbiFUFW5goQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="329006240"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Jesse Brandeburg <jesse.brandeburg@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 11/58] drivers/net: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:46 -0700
Message-Id: <20201009195033.3208459-12-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in these drivers are localized to a single thread.  To
avoid the overhead of global PKRS updates, use the new kmap_thread()
call.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/net/ethernet/intel/igb/igb_ethtool.c     | 4 ++--
 drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
index 6e8231c1ddf0..ac9189752012 100644
--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
+++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
@@ -1794,14 +1794,14 @@ static int igb_check_lbtest_frame(struct igb_rx_buffer *rx_buffer,
 
 	frame_size >>= 1;
 
-	data = kmap(rx_buffer->page);
+	data = kmap_thread(rx_buffer->page);
 
 	if (data[3] != 0xFF ||
 	    data[frame_size + 10] != 0xBE ||
 	    data[frame_size + 12] != 0xAF)
 		match = false;
 
-	kunmap(rx_buffer->page);
+	kunmap_thread(rx_buffer->page);
 
 	return match;
 }
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index 71ec908266a6..7d469425f8b4 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -1963,14 +1963,14 @@ static bool ixgbe_check_lbtest_frame(struct ixgbe_rx_buffer *rx_buffer,
 
 	frame_size >>= 1;
 
-	data = kmap(rx_buffer->page) + rx_buffer->page_offset;
+	data = kmap_thread(rx_buffer->page) + rx_buffer->page_offset;
 
 	if (data[3] != 0xFF ||
 	    data[frame_size + 10] != 0xBE ||
 	    data[frame_size + 12] != 0xAF)
 		match = false;
 
-	kunmap(rx_buffer->page);
+	kunmap_thread(rx_buffer->page);
 
 	return match;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5060.13001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQO-0001rN-5B; Fri, 09 Oct 2020 19:51:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5060.13001; Fri, 09 Oct 2020 19:51:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQO-0001rE-0I; Fri, 09 Oct 2020 19:51:32 +0000
Received: by outflank-mailman (input) for mailman id 5060;
 Fri, 09 Oct 2020 19:51:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQM-00017t-Nr
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:30 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68e21de4-db08-48c0-a407-2cd0dfe15bf7;
 Fri, 09 Oct 2020 19:51:16 +0000 (UTC)
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:16 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:15 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Luis Chamberlain <mcgrof@kernel.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 08/58] drivers/firmware_loader: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:43 -0700
Message-Id: <20201009195033.3208459-9-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this driver are localized to a single thread.  To
avoid the overhead of global PKRS updates, use the new kmap_thread()
call.
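
For context, kmap_thread() keeps kmap()'s map/copy/unmap calling discipline and differs only in confining the protection-key (PKRS) update to the calling thread. A minimal user-space sketch of the firmware_rw() pattern changed below, with hypothetical stand-ins for struct page and the mapping helpers (the real ones are kernel-internal):

```c
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical user-space stand-ins; in the kernel, kmap_thread()
 * maps a highmem page with a thread-local PKRS update instead of
 * kmap()'s global one.  The pairing rules are identical. */
struct page { unsigned char data[PAGE_SIZE]; };

static void *kmap_thread(struct page *page)   { return page->data; }
static void  kunmap_thread(struct page *page) { (void)page; }

/* Mirrors firmware_rw(): copy `count` bytes at `offset` into or out
 * of a page array, mapping each page only while it is being touched. */
static void fw_rw(struct page **pages, char *buffer,
		  size_t offset, size_t count, int read)
{
	while (count) {
		size_t page_nr  = offset / PAGE_SIZE;
		size_t page_ofs = offset & (PAGE_SIZE - 1);
		size_t page_cnt = PAGE_SIZE - page_ofs;

		if (page_cnt > count)
			page_cnt = count;

		unsigned char *page_data = kmap_thread(pages[page_nr]);

		if (read)
			memcpy(buffer, page_data + page_ofs, page_cnt);
		else
			memcpy(page_data + page_ofs, buffer, page_cnt);

		kunmap_thread(pages[page_nr]);

		buffer += page_cnt;
		offset += page_cnt;
		count  -= page_cnt;
	}
}
```

Because the mapping never outlives the loop body and never crosses a thread boundary, the thread-local variant is a drop-in substitution here.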

Cc: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/base/firmware_loader/fallback.c | 4 ++--
 drivers/base/firmware_loader/main.c     | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c
index 283ca2de76d4..22dea9ba7a37 100644
--- a/drivers/base/firmware_loader/fallback.c
+++ b/drivers/base/firmware_loader/fallback.c
@@ -322,14 +322,14 @@ static void firmware_rw(struct fw_priv *fw_priv, char *buffer,
 		int page_ofs = offset & (PAGE_SIZE-1);
 		int page_cnt = min_t(size_t, PAGE_SIZE - page_ofs, count);
 
-		page_data = kmap(fw_priv->pages[page_nr]);
+		page_data = kmap_thread(fw_priv->pages[page_nr]);
 
 		if (read)
 			memcpy(buffer, page_data + page_ofs, page_cnt);
 		else
 			memcpy(page_data + page_ofs, buffer, page_cnt);
 
-		kunmap(fw_priv->pages[page_nr]);
+		kunmap_thread(fw_priv->pages[page_nr]);
 		buffer += page_cnt;
 		offset += page_cnt;
 		count -= page_cnt;
diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
index 63b9714a0154..cc884c9f8742 100644
--- a/drivers/base/firmware_loader/main.c
+++ b/drivers/base/firmware_loader/main.c
@@ -409,11 +409,11 @@ static int fw_decompress_xz_pages(struct device *dev, struct fw_priv *fw_priv,
 
 		/* decompress onto the new allocated page */
 		page = fw_priv->pages[fw_priv->nr_pages - 1];
-		xz_buf.out = kmap(page);
+		xz_buf.out = kmap_thread(page);
 		xz_buf.out_pos = 0;
 		xz_buf.out_size = PAGE_SIZE;
 		xz_ret = xz_dec_run(xz_dec, &xz_buf);
-		kunmap(page);
+		kunmap_thread(page);
 		fw_priv->size += xz_buf.out_pos;
 		/* partial decompression means either end or error */
 		if (xz_buf.out_pos != PAGE_SIZE)
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5061.13013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQT-0001zj-Jm; Fri, 09 Oct 2020 19:51:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5061.13013; Fri, 09 Oct 2020 19:51:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQT-0001zb-D7; Fri, 09 Oct 2020 19:51:37 +0000
Received: by outflank-mailman (input) for mailman id 5061;
 Fri, 09 Oct 2020 19:51:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQR-00017t-Nz
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:35 +0000
Received: from mga03.intel.com (unknown [134.134.136.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 421f9afe-82a9-49bd-a2c7-af48226c3509;
 Fri, 09 Oct 2020 19:51:27 +0000 (UTC)
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:24 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:22 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Mike Marciniszyn <mike.marciniszyn@intel.com>,
	Dennis Dalessandro <dennis.dalessandro@intel.com>,
	Doug Ledford <dledford@redhat.com>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Faisal Latif <faisal.latif@intel.com>,
	Shiraz Saleem <shiraz.saleem@intel.com>,
	Bernard Metzler <bmt@zurich.ibm.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 10/58] drivers/rdma: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:45 -0700
Message-Id: <20201009195033.3208459-11-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in these drivers are localized to a single thread.  To
avoid the overhead of global PKRS updates, use the new kmap_thread()
call.
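
Of the three drivers below, the siw change is the most involved because siw_try_1seg() may gather a run of bytes that straddles a page boundary, mapping and unmapping two pages in turn. A user-space sketch of that shape (not the exact siw code; struct page and the mapping helpers are hypothetical stand-ins for the kernel ones):

```c
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-ins for the kernel helpers; kmap_thread() is
 * called exactly like kmap(), just with a per-thread PKRS update. */
struct page { unsigned char data[PAGE_SIZE]; };

static void *kmap_thread(struct page *page)   { return page->data; }
static void  kunmap_thread(struct page *page) { (void)page; }

/* Gather `bytes` starting at `off` in the first page, continuing
 * into the second page if the run crosses the boundary.  Each page
 * is mapped only for the duration of its own memcpy(). */
static size_t gather(char *dst, struct page *pages[2],
		     size_t off, size_t bytes)
{
	unsigned char *buffer = kmap_thread(pages[0]);

	if (PAGE_SIZE - off >= bytes) {
		memcpy(dst, buffer + off, bytes);
		kunmap_thread(pages[0]);
	} else {
		size_t part = bytes - (PAGE_SIZE - off);

		memcpy(dst, buffer + off, bytes - part);
		kunmap_thread(pages[0]);

		buffer = kmap_thread(pages[1]);
		memcpy(dst + (bytes - part), buffer, part);
		kunmap_thread(pages[1]);
	}
	return bytes;
}
```

The point of the sketch is the pairing discipline: every kmap_thread() is matched by a kunmap_thread() on the same page before the next page is mapped, which is what makes the thread-local substitution safe.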

Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: Faisal Latif <faisal.latif@intel.com>
Cc: Shiraz Saleem <shiraz.saleem@intel.com>
Cc: Bernard Metzler <bmt@zurich.ibm.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/infiniband/hw/hfi1/sdma.c      |  4 ++--
 drivers/infiniband/hw/i40iw/i40iw_cm.c | 10 +++++-----
 drivers/infiniband/sw/siw/siw_qp_tx.c  | 14 +++++++-------
 3 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index 04575c9afd61..09d206e3229a 100644
--- a/drivers/infiniband/hw/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -3130,7 +3130,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
 		}
 
 		if (type == SDMA_MAP_PAGE) {
-			kvaddr = kmap(page);
+			kvaddr = kmap_thread(page);
 			kvaddr += offset;
 		} else if (WARN_ON(!kvaddr)) {
 			__sdma_txclean(dd, tx);
@@ -3140,7 +3140,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata *dd, struct sdma_txreq *tx,
 		memcpy(tx->coalesce_buf + tx->coalesce_idx, kvaddr, len);
 		tx->coalesce_idx += len;
 		if (type == SDMA_MAP_PAGE)
-			kunmap(page);
+			kunmap_thread(page);
 
 		/* If there is more data, return */
 		if (tx->tlen - tx->coalesce_idx)
diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c
index a3b95805c154..122d7a5642a1 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
@@ -3721,7 +3721,7 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 		ibmr->device = iwpd->ibpd.device;
 		iwqp->lsmm_mr = ibmr;
 		if (iwqp->page)
-			iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
+			iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
 		dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp,
 							iwqp->ietf_mem.va,
 							(accept.size + conn_param->private_data_len),
@@ -3729,12 +3729,12 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
 
 	} else {
 		if (iwqp->page)
-			iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
+			iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
 		dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp, NULL, 0, 0);
 	}
 
 	if (iwqp->page)
-		kunmap(iwqp->page);
+		kunmap_thread(iwqp->page);
 
 	iwqp->cm_id = cm_id;
 	cm_node->cm_id = cm_id;
@@ -4102,10 +4102,10 @@ static void i40iw_cm_event_connected(struct i40iw_cm_event *event)
 	i40iw_cm_init_tsa_conn(iwqp, cm_node);
 	read0 = (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO);
 	if (iwqp->page)
-		iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
+		iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
 	dev->iw_priv_qp_ops->qp_send_rtt(&iwqp->sc_qp, read0);
 	if (iwqp->page)
-		kunmap(iwqp->page);
+		kunmap_thread(iwqp->page);
 
 	memset(&attr, 0, sizeof(attr));
 	attr.qp_state = IB_QPS_RTS;
diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
index d19d8325588b..4ed37c328d02 100644
--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
+++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
@@ -76,7 +76,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr)
 			if (unlikely(!p))
 				return -EFAULT;
 
-			buffer = kmap(p);
+			buffer = kmap_thread(p);
 
 			if (likely(PAGE_SIZE - off >= bytes)) {
 				memcpy(paddr, buffer + off, bytes);
@@ -84,7 +84,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr)
 				unsigned long part = bytes - (PAGE_SIZE - off);
 
 				memcpy(paddr, buffer + off, part);
-				kunmap(p);
+				kunmap_thread(p);
 
 				if (!mem->is_pbl)
 					p = siw_get_upage(mem->umem,
@@ -96,10 +96,10 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx, void *paddr)
 				if (unlikely(!p))
 					return -EFAULT;
 
-				buffer = kmap(p);
+				buffer = kmap_thread(p);
 				memcpy(paddr + part, buffer, bytes - part);
 			}
-			kunmap(p);
+			kunmap_thread(p);
 		}
 	}
 	return (int)bytes;
@@ -505,7 +505,7 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 				page_array[seg] = p;
 
 				if (!c_tx->use_sendpage) {
-					iov[seg].iov_base = kmap(p) + fp_off;
+					iov[seg].iov_base = kmap_thread(p) + fp_off;
 					iov[seg].iov_len = plen;
 
 					/* Remember for later kunmap() */
@@ -518,9 +518,9 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx, struct socket *s)
 							plen);
 				} else if (do_crc) {
 					crypto_shash_update(c_tx->mpa_crc_hd,
-							    kmap(p) + fp_off,
+							    kmap_thread(p) + fp_off,
 							    plen);
-					kunmap(p);
+					kunmap_thread(p);
 				}
 			} else {
 				u64 va = sge->laddr + sge_off;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5062.13025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQU-00022T-UL; Fri, 09 Oct 2020 19:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5062.13025; Fri, 09 Oct 2020 19:51:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQU-00022I-Nx; Fri, 09 Oct 2020 19:51:38 +0000
Received: by outflank-mailman (input) for mailman id 5062;
 Fri, 09 Oct 2020 19:51:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQU-00021L-71
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:38 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bacf52a-8f3a-4f3b-914a-0b5e856a8218;
 Fri, 09 Oct 2020 19:51:34 +0000 (UTC)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:33 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:33 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Chris Mason <clm@fb.com>,
	Josef Bacik <josef@toxicpanda.com>,
	David Sterba <dsterba@suse.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 13/58] fs/btrfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:48 -0700
Message-Id: <20201009195033.3208459-14-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
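
Most of the btrfs call sites below share one simple shape: map a page, zero or fill a byte range inside it, flush, unmap (btrfs_truncate_block() and btrfs_page_mkwrite() are both of this form). A user-space sketch of the pattern, with hypothetical stand-ins for struct page and the mapping helpers and flush_dcache_page() reduced to a no-op:

```c
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-ins; the kernel's kmap_thread()/kunmap_thread()
 * keep kmap()'s contract but confine the PKRS update to the caller. */
struct page { unsigned char data[PAGE_SIZE]; };

static void *kmap_thread(struct page *page)    { return page->data; }
static void  kunmap_thread(struct page *page)  { (void)page; }
static void  flush_dcache_page(struct page *p) { (void)p; }

/* Mirrors the btrfs_page_mkwrite() tail-zeroing: clear everything
 * past `zero_start` so stale bytes never leak beyond EOF. */
static void zero_page_tail(struct page *page, size_t zero_start)
{
	if (zero_start != PAGE_SIZE) {
		unsigned char *kaddr = kmap_thread(page);

		memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start);
		flush_dcache_page(page);
		kunmap_thread(page);
	}
}
```

Since the page is mapped, written, and unmapped without the pointer ever escaping the function, nothing outside the calling thread observes the mapping.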

Cc: Chris Mason <clm@fb.com>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: David Sterba <dsterba@suse.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/btrfs/check-integrity.c |  4 ++--
 fs/btrfs/compression.c     |  4 ++--
 fs/btrfs/inode.c           | 16 ++++++++--------
 fs/btrfs/lzo.c             | 24 ++++++++++++------------
 fs/btrfs/raid56.c          | 34 +++++++++++++++++-----------------
 fs/btrfs/reflink.c         |  8 ++++----
 fs/btrfs/send.c            |  4 ++--
 fs/btrfs/zlib.c            | 32 ++++++++++++++++----------------
 fs/btrfs/zstd.c            | 20 ++++++++++----------
 9 files changed, 73 insertions(+), 73 deletions(-)

diff --git a/fs/btrfs/check-integrity.c b/fs/btrfs/check-integrity.c
index 81a8c87a5afb..9e5a02512ab5 100644
--- a/fs/btrfs/check-integrity.c
+++ b/fs/btrfs/check-integrity.c
@@ -2706,7 +2706,7 @@ static void __btrfsic_submit_bio(struct bio *bio)
 
 		bio_for_each_segment(bvec, bio, iter) {
 			BUG_ON(bvec.bv_len != PAGE_SIZE);
-			mapped_datav[i] = kmap(bvec.bv_page);
+			mapped_datav[i] = kmap_thread(bvec.bv_page);
 			i++;
 
 			if (dev_state->state->print_mask &
@@ -2720,7 +2720,7 @@ static void __btrfsic_submit_bio(struct bio *bio)
 					      bio, &bio_is_patched,
 					      bio->bi_opf);
 		bio_for_each_segment(bvec, bio, iter)
-			kunmap(bvec.bv_page);
+			kunmap_thread(bvec.bv_page);
 		kfree(mapped_datav);
 	} else if (NULL != dev_state && (bio->bi_opf & REQ_PREFLUSH)) {
 		if (dev_state->state->print_mask &
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 1ab56a734e70..5944fb36d68a 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1626,7 +1626,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
 	curr_sample_pos = 0;
 	while (index < index_end) {
 		page = find_get_page(inode->i_mapping, index);
-		in_data = kmap(page);
+		in_data = kmap_thread(page);
 		/* Handle case where the start is not aligned to PAGE_SIZE */
 		i = start % PAGE_SIZE;
 		while (i < PAGE_SIZE - SAMPLING_READ_SIZE) {
@@ -1639,7 +1639,7 @@ static void heuristic_collect_sample(struct inode *inode, u64 start, u64 end,
 			start += SAMPLING_INTERVAL;
 			curr_sample_pos += SAMPLING_READ_SIZE;
 		}
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 
 		index++;
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 9570458aa847..9710a52c6c42 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -4603,7 +4603,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 	if (offset != blocksize) {
 		if (!len)
 			len = blocksize - offset;
-		kaddr = kmap(page);
+		kaddr = kmap_thread(page);
 		if (front)
 			memset(kaddr + (block_start - page_offset(page)),
 				0, offset);
@@ -4611,7 +4611,7 @@ int btrfs_truncate_block(struct inode *inode, loff_t from, loff_t len,
 			memset(kaddr + (block_start - page_offset(page)) +  offset,
 				0, len);
 		flush_dcache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 	}
 	ClearPageChecked(page);
 	set_page_dirty(page);
@@ -6509,9 +6509,9 @@ static noinline int uncompress_inline(struct btrfs_path *path,
 	 */
 
 	if (max_size + pg_offset < PAGE_SIZE) {
-		char *map = kmap(page);
+		char *map = kmap_thread(page);
 		memset(map + pg_offset + max_size, 0, PAGE_SIZE - max_size - pg_offset);
-		kunmap(page);
+		kunmap_thread(page);
 	}
 	kfree(tmp);
 	return ret;
@@ -6704,7 +6704,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
 					goto out;
 				}
 			} else {
-				map = kmap(page);
+				map = kmap_thread(page);
 				read_extent_buffer(leaf, map + pg_offset, ptr,
 						   copy_size);
 				if (pg_offset + copy_size < PAGE_SIZE) {
@@ -6712,7 +6712,7 @@ struct extent_map *btrfs_get_extent(struct btrfs_inode *inode,
 					       PAGE_SIZE - pg_offset -
 					       copy_size);
 				}
-				kunmap(page);
+				kunmap_thread(page);
 			}
 			flush_dcache_page(page);
 		}
@@ -8326,10 +8326,10 @@ vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf)
 		zero_start = PAGE_SIZE;
 
 	if (zero_start != PAGE_SIZE) {
-		kaddr = kmap(page);
+		kaddr = kmap_thread(page);
 		memset(kaddr + zero_start, 0, PAGE_SIZE - zero_start);
 		flush_dcache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 	}
 	ClearPageChecked(page);
 	set_page_dirty(page);
diff --git a/fs/btrfs/lzo.c b/fs/btrfs/lzo.c
index aa9cd11f4b78..f29dcc9ec573 100644
--- a/fs/btrfs/lzo.c
+++ b/fs/btrfs/lzo.c
@@ -140,7 +140,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 	*total_in = 0;
 
 	in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-	data_in = kmap(in_page);
+	data_in = kmap_thread(in_page);
 
 	/*
 	 * store the size of all chunks of compressed data in
@@ -151,7 +151,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 		ret = -ENOMEM;
 		goto out;
 	}
-	cpage_out = kmap(out_page);
+	cpage_out = kmap_thread(out_page);
 	out_offset = LZO_LEN;
 	tot_out = LZO_LEN;
 	pages[0] = out_page;
@@ -209,7 +209,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 				if (out_len == 0 && tot_in >= len)
 					break;
 
-				kunmap(out_page);
+				kunmap_thread(out_page);
 				if (nr_pages == nr_dest_pages) {
 					out_page = NULL;
 					ret = -E2BIG;
@@ -221,7 +221,7 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 					ret = -ENOMEM;
 					goto out;
 				}
-				cpage_out = kmap(out_page);
+				cpage_out = kmap_thread(out_page);
 				pages[nr_pages++] = out_page;
 
 				pg_bytes_left = PAGE_SIZE;
@@ -243,12 +243,12 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 			break;
 
 		bytes_left = len - tot_in;
-		kunmap(in_page);
+		kunmap_thread(in_page);
 		put_page(in_page);
 
 		start += PAGE_SIZE;
 		in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-		data_in = kmap(in_page);
+		data_in = kmap_thread(in_page);
 		in_len = min(bytes_left, PAGE_SIZE);
 	}
 
@@ -258,10 +258,10 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 	}
 
 	/* store the size of all chunks of compressed data */
-	cpage_out = kmap(pages[0]);
+	cpage_out = kmap_thread(pages[0]);
 	write_compress_length(cpage_out, tot_out);
 
-	kunmap(pages[0]);
+	kunmap_thread(pages[0]);
 
 	ret = 0;
 	*total_out = tot_out;
@@ -269,10 +269,10 @@ int lzo_compress_pages(struct list_head *ws, struct address_space *mapping,
 out:
 	*out_pages = nr_pages;
 	if (out_page)
-		kunmap(out_page);
+		kunmap_thread(out_page);
 
 	if (in_page) {
-		kunmap(in_page);
+		kunmap_thread(in_page);
 		put_page(in_page);
 	}
 
@@ -305,7 +305,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 	u64 disk_start = cb->start;
 	struct bio *orig_bio = cb->orig_bio;
 
-	data_in = kmap(pages_in[0]);
+	data_in = kmap_thread(pages_in[0]);
 	tot_len = read_compress_length(data_in);
 	/*
 	 * Compressed data header check.
@@ -387,7 +387,7 @@ int lzo_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 				else
 					kunmap(pages_in[page_in_index]);
 
-				data_in = kmap(pages_in[++page_in_index]);
+				data_in = kmap_thread(pages_in[++page_in_index]);
 
 				in_page_bytes_left = PAGE_SIZE;
 				in_offset = 0;
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 255490f42b5d..34e646e4548c 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -262,13 +262,13 @@ static void cache_rbio_pages(struct btrfs_raid_bio *rbio)
 		if (!rbio->bio_pages[i])
 			continue;
 
-		s = kmap(rbio->bio_pages[i]);
-		d = kmap(rbio->stripe_pages[i]);
+		s = kmap_thread(rbio->bio_pages[i]);
+		d = kmap_thread(rbio->stripe_pages[i]);
 
 		copy_page(d, s);
 
-		kunmap(rbio->bio_pages[i]);
-		kunmap(rbio->stripe_pages[i]);
+		kunmap_thread(rbio->bio_pages[i]);
+		kunmap_thread(rbio->stripe_pages[i]);
 		SetPageUptodate(rbio->stripe_pages[i]);
 	}
 	set_bit(RBIO_CACHE_READY_BIT, &rbio->flags);
@@ -1241,13 +1241,13 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 		/* first collect one page from each data stripe */
 		for (stripe = 0; stripe < nr_data; stripe++) {
 			p = page_in_rbio(rbio, stripe, pagenr, 0);
-			pointers[stripe] = kmap(p);
+			pointers[stripe] = kmap_thread(p);
 		}
 
 		/* then add the parity stripe */
 		p = rbio_pstripe_page(rbio, pagenr);
 		SetPageUptodate(p);
-		pointers[stripe++] = kmap(p);
+		pointers[stripe++] = kmap_thread(p);
 
 		if (has_qstripe) {
 
@@ -1257,7 +1257,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 			 */
 			p = rbio_qstripe_page(rbio, pagenr);
 			SetPageUptodate(p);
-			pointers[stripe++] = kmap(p);
+			pointers[stripe++] = kmap_thread(p);
 
 			raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE,
 						pointers);
@@ -1269,7 +1269,7 @@ static noinline void finish_rmw(struct btrfs_raid_bio *rbio)
 
 
 		for (stripe = 0; stripe < rbio->real_stripes; stripe++)
-			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
+			kunmap_thread(page_in_rbio(rbio, stripe, pagenr, 0));
 	}
 
 	/*
@@ -1835,7 +1835,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 			} else {
 				page = rbio_stripe_page(rbio, stripe, pagenr);
 			}
-			pointers[stripe] = kmap(page);
+			pointers[stripe] = kmap_thread(page);
 		}
 
 		/* all raid6 handling here */
@@ -1940,7 +1940,7 @@ static void __raid_recover_end_io(struct btrfs_raid_bio *rbio)
 			} else {
 				page = rbio_stripe_page(rbio, stripe, pagenr);
 			}
-			kunmap(page);
+			kunmap_thread(page);
 		}
 	}
 
@@ -2379,18 +2379,18 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 		/* first collect one page from each data stripe */
 		for (stripe = 0; stripe < nr_data; stripe++) {
 			p = page_in_rbio(rbio, stripe, pagenr, 0);
-			pointers[stripe] = kmap(p);
+			pointers[stripe] = kmap_thread(p);
 		}
 
 		/* then add the parity stripe */
-		pointers[stripe++] = kmap(p_page);
+		pointers[stripe++] = kmap_thread(p_page);
 
 		if (has_qstripe) {
 			/*
 			 * raid6, add the qstripe and call the
 			 * library function to fill in our p/q
 			 */
-			pointers[stripe++] = kmap(q_page);
+			pointers[stripe++] = kmap_thread(q_page);
 
 			raid6_call.gen_syndrome(rbio->real_stripes, PAGE_SIZE,
 						pointers);
@@ -2402,17 +2402,17 @@ static noinline void finish_parity_scrub(struct btrfs_raid_bio *rbio,
 
 		/* Check scrubbing parity and repair it */
 		p = rbio_stripe_page(rbio, rbio->scrubp, pagenr);
-		parity = kmap(p);
+		parity = kmap_thread(p);
 		if (memcmp(parity, pointers[rbio->scrubp], PAGE_SIZE))
 			copy_page(parity, pointers[rbio->scrubp]);
 		else
 			/* Parity is right, needn't writeback */
 			bitmap_clear(rbio->dbitmap, pagenr, 1);
-		kunmap(p);
+		kunmap_thread(p);
 
 		for (stripe = 0; stripe < nr_data; stripe++)
-			kunmap(page_in_rbio(rbio, stripe, pagenr, 0));
-		kunmap(p_page);
+			kunmap_thread(page_in_rbio(rbio, stripe, pagenr, 0));
+		kunmap_thread(p_page);
 	}
 
 	__free_page(p_page);
diff --git a/fs/btrfs/reflink.c b/fs/btrfs/reflink.c
index 5cd02514cf4d..10e53d7eba8c 100644
--- a/fs/btrfs/reflink.c
+++ b/fs/btrfs/reflink.c
@@ -92,10 +92,10 @@ static int copy_inline_to_page(struct inode *inode,
 	if (comp_type == BTRFS_COMPRESS_NONE) {
 		char *map;
 
-		map = kmap(page);
+		map = kmap_thread(page);
 		memcpy(map, data_start, datal);
 		flush_dcache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 	} else {
 		ret = btrfs_decompress(comp_type, data_start, page, 0,
 				       inline_size, datal);
@@ -119,10 +119,10 @@ static int copy_inline_to_page(struct inode *inode,
 	if (datal < block_size) {
 		char *map;
 
-		map = kmap(page);
+		map = kmap_thread(page);
 		memset(map + datal, 0, block_size - datal);
 		flush_dcache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 	}
 
 	SetPageUptodate(page);
diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
index d9813a5b075a..06c383d3dc43 100644
--- a/fs/btrfs/send.c
+++ b/fs/btrfs/send.c
@@ -4863,9 +4863,9 @@ static ssize_t fill_read_buf(struct send_ctx *sctx, u64 offset, u32 len)
 			}
 		}
 
-		addr = kmap(page);
+		addr = kmap_thread(page);
 		memcpy(sctx->read_buf + ret, addr + pg_offset, cur_len);
-		kunmap(page);
+		kunmap_thread(page);
 		unlock_page(page);
 		put_page(page);
 		index++;
diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
index 05615a1099db..45b7a907bab3 100644
--- a/fs/btrfs/zlib.c
+++ b/fs/btrfs/zlib.c
@@ -126,7 +126,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 		ret = -ENOMEM;
 		goto out;
 	}
-	cpage_out = kmap(out_page);
+	cpage_out = kmap_thread(out_page);
 	pages[0] = out_page;
 	nr_pages = 1;
 
@@ -149,12 +149,12 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 
 				for (i = 0; i < in_buf_pages; i++) {
 					if (in_page) {
-						kunmap(in_page);
+						kunmap_thread(in_page);
 						put_page(in_page);
 					}
 					in_page = find_get_page(mapping,
 								start >> PAGE_SHIFT);
-					data_in = kmap(in_page);
+					data_in = kmap_thread(in_page);
 					memcpy(workspace->buf + i * PAGE_SIZE,
 					       data_in, PAGE_SIZE);
 					start += PAGE_SIZE;
@@ -162,12 +162,12 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 				workspace->strm.next_in = workspace->buf;
 			} else {
 				if (in_page) {
-					kunmap(in_page);
+					kunmap_thread(in_page);
 					put_page(in_page);
 				}
 				in_page = find_get_page(mapping,
 							start >> PAGE_SHIFT);
-				data_in = kmap(in_page);
+				data_in = kmap_thread(in_page);
 				start += PAGE_SIZE;
 				workspace->strm.next_in = data_in;
 			}
@@ -196,7 +196,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 		 * the stream end if required
 		 */
 		if (workspace->strm.avail_out == 0) {
-			kunmap(out_page);
+			kunmap_thread(out_page);
 			if (nr_pages == nr_dest_pages) {
 				out_page = NULL;
 				ret = -E2BIG;
@@ -207,7 +207,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 				ret = -ENOMEM;
 				goto out;
 			}
-			cpage_out = kmap(out_page);
+			cpage_out = kmap_thread(out_page);
 			pages[nr_pages] = out_page;
 			nr_pages++;
 			workspace->strm.avail_out = PAGE_SIZE;
@@ -234,7 +234,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 			goto out;
 		} else if (workspace->strm.avail_out == 0) {
 			/* get another page for the stream end */
-			kunmap(out_page);
+			kunmap_thread(out_page);
 			if (nr_pages == nr_dest_pages) {
 				out_page = NULL;
 				ret = -E2BIG;
@@ -245,7 +245,7 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 				ret = -ENOMEM;
 				goto out;
 			}
-			cpage_out = kmap(out_page);
+			cpage_out = kmap_thread(out_page);
 			pages[nr_pages] = out_page;
 			nr_pages++;
 			workspace->strm.avail_out = PAGE_SIZE;
@@ -265,10 +265,10 @@ int zlib_compress_pages(struct list_head *ws, struct address_space *mapping,
 out:
 	*out_pages = nr_pages;
 	if (out_page)
-		kunmap(out_page);
+		kunmap_thread(out_page);
 
 	if (in_page) {
-		kunmap(in_page);
+		kunmap_thread(in_page);
 		put_page(in_page);
 	}
 	return ret;
@@ -289,7 +289,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 	u64 disk_start = cb->start;
 	struct bio *orig_bio = cb->orig_bio;
 
-	data_in = kmap(pages_in[page_in_index]);
+	data_in = kmap_thread(pages_in[page_in_index]);
 	workspace->strm.next_in = data_in;
 	workspace->strm.avail_in = min_t(size_t, srclen, PAGE_SIZE);
 	workspace->strm.total_in = 0;
@@ -311,7 +311,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 
 	if (Z_OK != zlib_inflateInit2(&workspace->strm, wbits)) {
 		pr_warn("BTRFS: inflateInit failed\n");
-		kunmap(pages_in[page_in_index]);
+		kunmap_thread(pages_in[page_in_index]);
 		return -EIO;
 	}
 	while (workspace->strm.total_in < srclen) {
@@ -339,13 +339,13 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 
 		if (workspace->strm.avail_in == 0) {
 			unsigned long tmp;
-			kunmap(pages_in[page_in_index]);
+			kunmap_thread(pages_in[page_in_index]);
 			page_in_index++;
 			if (page_in_index >= total_pages_in) {
 				data_in = NULL;
 				break;
 			}
-			data_in = kmap(pages_in[page_in_index]);
+			data_in = kmap_thread(pages_in[page_in_index]);
 			workspace->strm.next_in = data_in;
 			tmp = srclen - workspace->strm.total_in;
 			workspace->strm.avail_in = min(tmp,
@@ -359,7 +359,7 @@ int zlib_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 done:
 	zlib_inflateEnd(&workspace->strm);
 	if (data_in)
-		kunmap(pages_in[page_in_index]);
+		kunmap_thread(pages_in[page_in_index]);
 	if (!ret)
 		zero_fill_bio(orig_bio);
 	return ret;
diff --git a/fs/btrfs/zstd.c b/fs/btrfs/zstd.c
index 9a4871636c6c..48e03f6dcef7 100644
--- a/fs/btrfs/zstd.c
+++ b/fs/btrfs/zstd.c
@@ -399,7 +399,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 
 	/* map in the first page of input data */
 	in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-	workspace->in_buf.src = kmap(in_page);
+	workspace->in_buf.src = kmap_thread(in_page);
 	workspace->in_buf.pos = 0;
 	workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE);
 
@@ -411,7 +411,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 		goto out;
 	}
 	pages[nr_pages++] = out_page;
-	workspace->out_buf.dst = kmap(out_page);
+	workspace->out_buf.dst = kmap_thread(out_page);
 	workspace->out_buf.pos = 0;
 	workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE);
 
@@ -446,7 +446,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 		if (workspace->out_buf.pos == workspace->out_buf.size) {
 			tot_out += PAGE_SIZE;
 			max_out -= PAGE_SIZE;
-			kunmap(out_page);
+			kunmap_thread(out_page);
 			if (nr_pages == nr_dest_pages) {
 				out_page = NULL;
 				ret = -E2BIG;
@@ -458,7 +458,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 				goto out;
 			}
 			pages[nr_pages++] = out_page;
-			workspace->out_buf.dst = kmap(out_page);
+			workspace->out_buf.dst = kmap_thread(out_page);
 			workspace->out_buf.pos = 0;
 			workspace->out_buf.size = min_t(size_t, max_out,
 							PAGE_SIZE);
@@ -479,7 +479,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 			start += PAGE_SIZE;
 			len -= PAGE_SIZE;
 			in_page = find_get_page(mapping, start >> PAGE_SHIFT);
-			workspace->in_buf.src = kmap(in_page);
+			workspace->in_buf.src = kmap_thread(in_page);
 			workspace->in_buf.pos = 0;
 			workspace->in_buf.size = min_t(size_t, len, PAGE_SIZE);
 		}
@@ -518,7 +518,7 @@ int zstd_compress_pages(struct list_head *ws, struct address_space *mapping,
 			goto out;
 		}
 		pages[nr_pages++] = out_page;
-		workspace->out_buf.dst = kmap(out_page);
+		workspace->out_buf.dst = kmap_thread(out_page);
 		workspace->out_buf.pos = 0;
 		workspace->out_buf.size = min_t(size_t, max_out, PAGE_SIZE);
 	}
@@ -565,7 +565,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 		goto done;
 	}
 
-	workspace->in_buf.src = kmap(pages_in[page_in_index]);
+	workspace->in_buf.src = kmap_thread(pages_in[page_in_index]);
 	workspace->in_buf.pos = 0;
 	workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE);
 
@@ -601,14 +601,14 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 			break;
 
 		if (workspace->in_buf.pos == workspace->in_buf.size) {
-			kunmap(pages_in[page_in_index++]);
+			kunmap_thread(pages_in[page_in_index++]);
 			if (page_in_index >= total_pages_in) {
 				workspace->in_buf.src = NULL;
 				ret = -EIO;
 				goto done;
 			}
 			srclen -= PAGE_SIZE;
-			workspace->in_buf.src = kmap(pages_in[page_in_index]);
+			workspace->in_buf.src = kmap_thread(pages_in[page_in_index]);
 			workspace->in_buf.pos = 0;
 			workspace->in_buf.size = min_t(size_t, srclen, PAGE_SIZE);
 		}
@@ -617,7 +617,7 @@ int zstd_decompress_bio(struct list_head *ws, struct compressed_bio *cb)
 	zero_fill_bio(orig_bio);
 done:
 	if (workspace->in_buf.src)
-		kunmap(pages_in[page_in_index]);
+		kunmap_thread(pages_in[page_in_index]);
 	return ret;
 }
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5063.13037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQc-0002DB-FO; Fri, 09 Oct 2020 19:51:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5063.13037; Fri, 09 Oct 2020 19:51:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQc-0002Cy-B8; Fri, 09 Oct 2020 19:51:46 +0000
Received: by outflank-mailman (input) for mailman id 5063;
 Fri, 09 Oct 2020 19:51:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQb-0002Bq-9T
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:45 +0000
Received: from mga05.intel.com (unknown [192.55.52.43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38b255ef-94d7-4d9e-ad22-2e94a778b483;
 Fri, 09 Oct 2020 19:51:43 +0000 (UTC)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:42 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:41 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQb-0002Bq-9T
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:45 +0000
X-Inumbo-ID: 38b255ef-94d7-4d9e-ad22-2e94a778b483
Received: from mga05.intel.com (unknown [192.55.52.43])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 38b255ef-94d7-4d9e-ad22-2e94a778b483;
	Fri, 09 Oct 2020 19:51:43 +0000 (UTC)
IronPort-SDR: yccRu5+Zi4ddrJLQZ5u/PzNSuD2a4R8EjkuO9u9NWwWg7uNLyxX6g2DNZFA9NwwtOZXVSyBRXE
 JuTWe/GvxnrQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="250226006"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="250226006"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga001.jf.intel.com ([10.7.209.18])
  by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:42 -0700
IronPort-SDR: /JTFkd5x0991ia3Dqzqc9DQMEj3VkzRKrzcEaYc1Uf7R2BILbYJ/fow/WUSSa3njc7c47Zr56l
 AjDrFCZ0Z24A==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="389236880"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:41 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Herbert Xu <herbert@gondor.apana.org.au>,
	Eric Biggers <ebiggers@google.com>,
	Aditya Pakki <pakki001@umn.edu>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 15/58] fs/ecryptfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:50 -0700
Message-Id: <20201009195033.3208459-16-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Aditya Pakki <pakki001@umn.edu>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/ecryptfs/crypto.c     | 8 ++++----
 fs/ecryptfs/read_write.c | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/ecryptfs/crypto.c b/fs/ecryptfs/crypto.c
index 0681540c48d9..e73e00994bee 100644
--- a/fs/ecryptfs/crypto.c
+++ b/fs/ecryptfs/crypto.c
@@ -469,10 +469,10 @@ int ecryptfs_encrypt_page(struct page *page)
 	}
 
 	lower_offset = lower_offset_for_page(crypt_stat, page);
-	enc_extent_virt = kmap(enc_extent_page);
+	enc_extent_virt = kmap_thread(enc_extent_page);
 	rc = ecryptfs_write_lower(ecryptfs_inode, enc_extent_virt, lower_offset,
 				  PAGE_SIZE);
-	kunmap(enc_extent_page);
+	kunmap_thread(enc_extent_page);
 	if (rc < 0) {
 		ecryptfs_printk(KERN_ERR,
 			"Error attempting to write lower page; rc = [%d]\n",
@@ -518,10 +518,10 @@ int ecryptfs_decrypt_page(struct page *page)
 	BUG_ON(!(crypt_stat->flags & ECRYPTFS_ENCRYPTED));
 
 	lower_offset = lower_offset_for_page(crypt_stat, page);
-	page_virt = kmap(page);
+	page_virt = kmap_thread(page);
 	rc = ecryptfs_read_lower(page_virt, lower_offset, PAGE_SIZE,
 				 ecryptfs_inode);
-	kunmap(page);
+	kunmap_thread(page);
 	if (rc < 0) {
 		ecryptfs_printk(KERN_ERR,
 			"Error attempting to read lower page; rc = [%d]\n",
diff --git a/fs/ecryptfs/read_write.c b/fs/ecryptfs/read_write.c
index 0438997ac9d8..5eca4330c0c0 100644
--- a/fs/ecryptfs/read_write.c
+++ b/fs/ecryptfs/read_write.c
@@ -64,11 +64,11 @@ int ecryptfs_write_lower_page_segment(struct inode *ecryptfs_inode,
 
 	offset = ((((loff_t)page_for_lower->index) << PAGE_SHIFT)
 		  + offset_in_page);
-	virt = kmap(page_for_lower);
+	virt = kmap_thread(page_for_lower);
 	rc = ecryptfs_write_lower(ecryptfs_inode, virt, offset, size);
 	if (rc > 0)
 		rc = 0;
-	kunmap(page_for_lower);
+	kunmap_thread(page_for_lower);
 	return rc;
 }
 
@@ -251,11 +251,11 @@ int ecryptfs_read_lower_page_segment(struct page *page_for_ecryptfs,
 	int rc;
 
 	offset = ((((loff_t)page_index) << PAGE_SHIFT) + offset_in_page);
-	virt = kmap(page_for_ecryptfs);
+	virt = kmap_thread(page_for_ecryptfs);
 	rc = ecryptfs_read_lower(virt, offset, size, ecryptfs_inode);
 	if (rc > 0)
 		rc = 0;
-	kunmap(page_for_ecryptfs);
+	kunmap_thread(page_for_ecryptfs);
 	flush_dcache_page(page_for_ecryptfs);
 	return rc;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5064.13049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQd-0002G0-UB; Fri, 09 Oct 2020 19:51:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5064.13049; Fri, 09 Oct 2020 19:51:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQd-0002Fn-Om; Fri, 09 Oct 2020 19:51:47 +0000
Received: by outflank-mailman (input) for mailman id 5064;
 Fri, 09 Oct 2020 19:51:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQd-0002Bq-C2
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:47 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d758406-926b-431b-95c2-a5674dadec7e;
 Fri, 09 Oct 2020 19:51:46 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:45 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:44 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQd-0002Bq-C2
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:47 +0000
X-Inumbo-ID: 4d758406-926b-431b-95c2-a5674dadec7e
Received: from mga09.intel.com (unknown [134.134.136.24])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4d758406-926b-431b-95c2-a5674dadec7e;
	Fri, 09 Oct 2020 19:51:46 +0000 (UTC)
IronPort-SDR: FSckl3oZnJbJGVIGFWQtPrkxhCD1TiJIJwZd6ZBxOGULnZQ/lEw9wNGzMvWdPMesk5C6Ybblwm
 1voMWuWWnFJQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165642952"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165642952"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
  by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:45 -0700
IronPort-SDR: /XLKsCzQipLZYQ1rW+ETY2grttZNamy0/VnWHXeIGpFKMMZaq8ytLhObxugEuPiPsSx/1cqrId
 IdubLVsw8k0A==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="419536913"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:44 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Bob Peterson <rpeterso@redhat.com>,
	Andreas Gruenbacher <agruenba@redhat.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 16/58] fs/gfs2: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:51 -0700
Message-Id: <20201009195033.3208459-17-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Bob Peterson <rpeterso@redhat.com>
Cc: Andreas Gruenbacher <agruenba@redhat.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/gfs2/bmap.c       | 4 ++--
 fs/gfs2/ops_fstype.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
index 0f69fbd4af66..375af4528411 100644
--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -67,7 +67,7 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh,
 	}
 
 	if (!PageUptodate(page)) {
-		void *kaddr = kmap(page);
+		void *kaddr = kmap_thread(page);
 		u64 dsize = i_size_read(inode);
  
 		if (dsize > gfs2_max_stuffed_size(ip))
@@ -75,7 +75,7 @@ static int gfs2_unstuffer_page(struct gfs2_inode *ip, struct buffer_head *dibh,
 
 		memcpy(kaddr, dibh->b_data + sizeof(struct gfs2_dinode), dsize);
 		memset(kaddr + dsize, 0, PAGE_SIZE - dsize);
-		kunmap(page);
+		kunmap_thread(page);
 
 		SetPageUptodate(page);
 	}
diff --git a/fs/gfs2/ops_fstype.c b/fs/gfs2/ops_fstype.c
index 6d18d2c91add..a5d20d9b504a 100644
--- a/fs/gfs2/ops_fstype.c
+++ b/fs/gfs2/ops_fstype.c
@@ -263,9 +263,9 @@ static int gfs2_read_super(struct gfs2_sbd *sdp, sector_t sector, int silent)
 		__free_page(page);
 		return -EIO;
 	}
-	p = kmap(page);
+	p = kmap_thread(page);
 	gfs2_sb_in(sdp, p);
-	kunmap(page);
+	kunmap_thread(page);
 	__free_page(page);
 	return gfs2_check_sb(sdp, silent);
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5065.13061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQh-0002Lq-Cc; Fri, 09 Oct 2020 19:51:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5065.13061; Fri, 09 Oct 2020 19:51:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQh-0002Lc-6C; Fri, 09 Oct 2020 19:51:51 +0000
Received: by outflank-mailman (input) for mailman id 5065;
 Fri, 09 Oct 2020 19:51:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQg-0002Bq-8P
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:50 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba2ac356-7bb9-4d72-9a7f-6522fc6dc35b;
 Fri, 09 Oct 2020 19:51:48 +0000 (UTC)
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:47 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:47 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQg-0002Bq-8P
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:50 +0000
X-Inumbo-ID: ba2ac356-7bb9-4d72-9a7f-6522fc6dc35b
Received: from mga09.intel.com (unknown [134.134.136.24])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ba2ac356-7bb9-4d72-9a7f-6522fc6dc35b;
	Fri, 09 Oct 2020 19:51:48 +0000 (UTC)
IronPort-SDR: GL0kvQxLfLtsDzjy4D+FvX3Chw3kJv7t6m4P2AbNpnvwFrw1C68Yk0M/7BsBYGABt5ftzzfmc0
 Fu9XjRCurzpQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165642963"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165642963"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: vdHXKYZiEWUJ0fN4OExd2VqdiXE9aYdlaLF6p7mD2X3g24JwvvcNWIkVlfM1uW7/GA3kn+ap6+
 Rex2Wyxl2AaA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="298531066"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Ryusuke Konishi <konishi.ryusuke@gmail.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 17/58] fs/nilfs2: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:52 -0700
Message-Id: <20201009195033.3208459-18-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.
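
Every hunk in this series makes the same mechanical substitution: a
map/access/unmap sequence that stays within one thread swaps
kmap()/kunmap() for kmap_thread()/kunmap_thread().  The following
userspace sketch only illustrates that calling pattern; the
kmap_thread()/kunmap_thread() stubs are stand-ins, not the kernel
implementation (in the kernel, kmap_thread() updates only the calling
thread's PKRS value rather than broadcasting a global protection
update):

```c
#include <string.h>

/*
 * Illustrative stand-ins: here they simply hand back the page buffer
 * so the calling convention can be exercised in userspace.
 */
static void *kmap_thread(void *page)  { return page; }
static void kunmap_thread(void *page) { (void)page; }

/*
 * The map, copy, unmap sequence begins and ends in the same thread,
 * which is exactly the property that makes the thread-local mapping
 * variant applicable.
 */
static void read_from_page(void *page, void *dst, size_t off, size_t len)
{
	char *kaddr = kmap_thread(page);

	memcpy(dst, kaddr + off, len);
	kunmap_thread(page);
}
```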

Cc: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/nilfs2/alloc.c  | 34 +++++++++++++++++-----------------
 fs/nilfs2/cpfile.c |  4 ++--
 2 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/fs/nilfs2/alloc.c b/fs/nilfs2/alloc.c
index adf3bb0a8048..2aa4c34094ef 100644
--- a/fs/nilfs2/alloc.c
+++ b/fs/nilfs2/alloc.c
@@ -524,7 +524,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
 		ret = nilfs_palloc_get_desc_block(inode, group, 1, &desc_bh);
 		if (ret < 0)
 			return ret;
-		desc_kaddr = kmap(desc_bh->b_page);
+		desc_kaddr = kmap_thread(desc_bh->b_page);
 		desc = nilfs_palloc_block_get_group_desc(
 			inode, group, desc_bh, desc_kaddr);
 		n = nilfs_palloc_rest_groups_in_desc_block(inode, group,
@@ -536,7 +536,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
 					inode, group, 1, &bitmap_bh);
 				if (ret < 0)
 					goto out_desc;
-				bitmap_kaddr = kmap(bitmap_bh->b_page);
+				bitmap_kaddr = kmap_thread(bitmap_bh->b_page);
 				bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
 				pos = nilfs_palloc_find_available_slot(
 					bitmap, group_offset,
@@ -547,21 +547,21 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
 						desc, lock, -1);
 					req->pr_entry_nr =
 						entries_per_group * group + pos;
-					kunmap(desc_bh->b_page);
-					kunmap(bitmap_bh->b_page);
+					kunmap_thread(desc_bh->b_page);
+					kunmap_thread(bitmap_bh->b_page);
 
 					req->pr_desc_bh = desc_bh;
 					req->pr_bitmap_bh = bitmap_bh;
 					return 0;
 				}
-				kunmap(bitmap_bh->b_page);
+				kunmap_thread(bitmap_bh->b_page);
 				brelse(bitmap_bh);
 			}
 
 			group_offset = 0;
 		}
 
-		kunmap(desc_bh->b_page);
+		kunmap_thread(desc_bh->b_page);
 		brelse(desc_bh);
 	}
 
@@ -569,7 +569,7 @@ int nilfs_palloc_prepare_alloc_entry(struct inode *inode,
 	return -ENOSPC;
 
  out_desc:
-	kunmap(desc_bh->b_page);
+	kunmap_thread(desc_bh->b_page);
 	brelse(desc_bh);
 	return ret;
 }
@@ -605,10 +605,10 @@ void nilfs_palloc_commit_free_entry(struct inode *inode,
 	spinlock_t *lock;
 
 	group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
-	desc_kaddr = kmap(req->pr_desc_bh->b_page);
+	desc_kaddr = kmap_thread(req->pr_desc_bh->b_page);
 	desc = nilfs_palloc_block_get_group_desc(inode, group,
 						 req->pr_desc_bh, desc_kaddr);
-	bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page);
+	bitmap_kaddr = kmap_thread(req->pr_bitmap_bh->b_page);
 	bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh);
 	lock = nilfs_mdt_bgl_lock(inode, group);
 
@@ -620,8 +620,8 @@ void nilfs_palloc_commit_free_entry(struct inode *inode,
 	else
 		nilfs_palloc_group_desc_add_entries(desc, lock, 1);
 
-	kunmap(req->pr_bitmap_bh->b_page);
-	kunmap(req->pr_desc_bh->b_page);
+	kunmap_thread(req->pr_bitmap_bh->b_page);
+	kunmap_thread(req->pr_desc_bh->b_page);
 
 	mark_buffer_dirty(req->pr_desc_bh);
 	mark_buffer_dirty(req->pr_bitmap_bh);
@@ -646,10 +646,10 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode,
 	spinlock_t *lock;
 
 	group = nilfs_palloc_group(inode, req->pr_entry_nr, &group_offset);
-	desc_kaddr = kmap(req->pr_desc_bh->b_page);
+	desc_kaddr = kmap_thread(req->pr_desc_bh->b_page);
 	desc = nilfs_palloc_block_get_group_desc(inode, group,
 						 req->pr_desc_bh, desc_kaddr);
-	bitmap_kaddr = kmap(req->pr_bitmap_bh->b_page);
+	bitmap_kaddr = kmap_thread(req->pr_bitmap_bh->b_page);
 	bitmap = bitmap_kaddr + bh_offset(req->pr_bitmap_bh);
 	lock = nilfs_mdt_bgl_lock(inode, group);
 
@@ -661,8 +661,8 @@ void nilfs_palloc_abort_alloc_entry(struct inode *inode,
 	else
 		nilfs_palloc_group_desc_add_entries(desc, lock, 1);
 
-	kunmap(req->pr_bitmap_bh->b_page);
-	kunmap(req->pr_desc_bh->b_page);
+	kunmap_thread(req->pr_bitmap_bh->b_page);
+	kunmap_thread(req->pr_desc_bh->b_page);
 
 	brelse(req->pr_bitmap_bh);
 	brelse(req->pr_desc_bh);
@@ -754,7 +754,7 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems)
 		/* Get the first entry number of the group */
 		group_min_nr = (__u64)group * epg;
 
-		bitmap_kaddr = kmap(bitmap_bh->b_page);
+		bitmap_kaddr = kmap_thread(bitmap_bh->b_page);
 		bitmap = bitmap_kaddr + bh_offset(bitmap_bh);
 		lock = nilfs_mdt_bgl_lock(inode, group);
 
@@ -800,7 +800,7 @@ int nilfs_palloc_freev(struct inode *inode, __u64 *entry_nrs, size_t nitems)
 			entry_start = rounddown(group_offset, epb);
 		} while (true);
 
-		kunmap(bitmap_bh->b_page);
+		kunmap_thread(bitmap_bh->b_page);
 		mark_buffer_dirty(bitmap_bh);
 		brelse(bitmap_bh);
 
diff --git a/fs/nilfs2/cpfile.c b/fs/nilfs2/cpfile.c
index 86d4d850d130..402ab8bfce29 100644
--- a/fs/nilfs2/cpfile.c
+++ b/fs/nilfs2/cpfile.c
@@ -235,11 +235,11 @@ int nilfs_cpfile_get_checkpoint(struct inode *cpfile,
 	ret = nilfs_cpfile_get_checkpoint_block(cpfile, cno, create, &cp_bh);
 	if (ret < 0)
 		goto out_header;
-	kaddr = kmap(cp_bh->b_page);
+	kaddr = kmap_thread(cp_bh->b_page);
 	cp = nilfs_cpfile_block_get_checkpoint(cpfile, cno, cp_bh, kaddr);
 	if (nilfs_checkpoint_invalid(cp)) {
 		if (!create) {
-			kunmap(cp_bh->b_page);
+			kunmap_thread(cp_bh->b_page);
 			brelse(cp_bh);
 			ret = -ENOENT;
 			goto out_header;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:51:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:51:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5067.13072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQm-0002UV-Nq; Fri, 09 Oct 2020 19:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5067.13072; Fri, 09 Oct 2020 19:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQm-0002UI-JZ; Fri, 09 Oct 2020 19:51:56 +0000
Received: by outflank-mailman (input) for mailman id 5067;
 Fri, 09 Oct 2020 19:51:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQl-0002Bq-8L
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:55 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04e64965-0a39-4d97-830b-bac4dbf46a56;
 Fri, 09 Oct 2020 19:51:51 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:51 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:50 -0700
X-Inumbo-ID: 04e64965-0a39-4d97-830b-bac4dbf46a56
IronPort-SDR: Vg6H/NGB8J5Ehxvg1pw4k6uX/FOFO7IaaMQWcLb+ZPBjPHmCDv/mS18CNcnuQjMfWklg0jZ54S
 cteYccC27MEA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165642974"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165642974"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: OASmflq+Rx+yTtbdwfMJ0o3fYX/22afbp7t4aASuh19KdC4DJdh+fKQIdVVFr7y3ZwUlZzPFTz
 pRsXSfU26jow==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="529053257"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 18/58] fs/hfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:53 -0700
Message-Id: <20201009195033.3208459-19-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/hfs/bnode.c | 14 +++++++-------
 fs/hfs/btree.c | 20 ++++++++++----------
 2 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
index b63a4df7327b..8b4d02576405 100644
--- a/fs/hfs/bnode.c
+++ b/fs/hfs/bnode.c
@@ -23,8 +23,8 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf,
 	off += node->page_offset;
 	page = node->page[0];
 
-	memcpy(buf, kmap(page) + off, len);
-	kunmap(page);
+	memcpy(buf, kmap_thread(page) + off, len);
+	kunmap_thread(page);
 }
 
 u16 hfs_bnode_read_u16(struct hfs_bnode *node, int off)
@@ -108,9 +108,9 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
 	src_page = src_node->page[0];
 	dst_page = dst_node->page[0];
 
-	memcpy(kmap(dst_page) + dst, kmap(src_page) + src, len);
-	kunmap(src_page);
-	kunmap(dst_page);
+	memcpy(kmap_thread(dst_page) + dst, kmap_thread(src_page) + src, len);
+	kunmap_thread(src_page);
+	kunmap_thread(dst_page);
 	set_page_dirty(dst_page);
 }
 
@@ -125,9 +125,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 	src += node->page_offset;
 	dst += node->page_offset;
 	page = node->page[0];
-	ptr = kmap(page);
+	ptr = kmap_thread(page);
 	memmove(ptr + dst, ptr + src, len);
-	kunmap(page);
+	kunmap_thread(page);
 	set_page_dirty(page);
 }
 
diff --git a/fs/hfs/btree.c b/fs/hfs/btree.c
index 19017d296173..bd4a6d35e361 100644
--- a/fs/hfs/btree.c
+++ b/fs/hfs/btree.c
@@ -80,7 +80,7 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
 		goto free_inode;
 
 	/* Load the header */
-	head = (struct hfs_btree_header_rec *)(kmap(page) + sizeof(struct hfs_bnode_desc));
+	head = (struct hfs_btree_header_rec *)(kmap_thread(page) + sizeof(struct hfs_bnode_desc));
 	tree->root = be32_to_cpu(head->root);
 	tree->leaf_count = be32_to_cpu(head->leaf_count);
 	tree->leaf_head = be32_to_cpu(head->leaf_head);
@@ -119,7 +119,7 @@ struct hfs_btree *hfs_btree_open(struct super_block *sb, u32 id, btree_keycmp ke
 	tree->node_size_shift = ffs(size) - 1;
 	tree->pages_per_bnode = (tree->node_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
 
-	kunmap(page);
+	kunmap_thread(page);
 	put_page(page);
 	return tree;
 
@@ -268,7 +268,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 
 	off += node->page_offset;
 	pagep = node->page + (off >> PAGE_SHIFT);
-	data = kmap(*pagep);
+	data = kmap_thread(*pagep);
 	off &= ~PAGE_MASK;
 	idx = 0;
 
@@ -281,7 +281,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 						idx += i;
 						data[off] |= m;
 						set_page_dirty(*pagep);
-						kunmap(*pagep);
+						kunmap_thread(*pagep);
 						tree->free_nodes--;
 						mark_inode_dirty(tree->inode);
 						hfs_bnode_put(node);
@@ -290,14 +290,14 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 				}
 			}
 			if (++off >= PAGE_SIZE) {
-				kunmap(*pagep);
-				data = kmap(*++pagep);
+				kunmap_thread(*pagep);
+				data = kmap_thread(*++pagep);
 				off = 0;
 			}
 			idx += 8;
 			len--;
 		}
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
 		nidx = node->next;
 		if (!nidx) {
 			printk(KERN_DEBUG "create new bmap node...\n");
@@ -313,7 +313,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 		off = off16;
 		off += node->page_offset;
 		pagep = node->page + (off >> PAGE_SHIFT);
-		data = kmap(*pagep);
+		data = kmap_thread(*pagep);
 		off &= ~PAGE_MASK;
 	}
 }
@@ -360,7 +360,7 @@ void hfs_bmap_free(struct hfs_bnode *node)
 	}
 	off += node->page_offset + nidx / 8;
 	page = node->page[off >> PAGE_SHIFT];
-	data = kmap(page);
+	data = kmap_thread(page);
 	off &= ~PAGE_MASK;
 	m = 1 << (~nidx & 7);
 	byte = data[off];
@@ -373,7 +373,7 @@ void hfs_bmap_free(struct hfs_bnode *node)
 	}
 	data[off] = byte & ~m;
 	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
 	hfs_bnode_put(node);
 	tree->free_nodes++;
 	mark_inode_dirty(tree->inode);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5068.13085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQr-0002cU-DZ; Fri, 09 Oct 2020 19:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5068.13085; Fri, 09 Oct 2020 19:52:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQr-0002cD-6s; Fri, 09 Oct 2020 19:52:01 +0000
Received: by outflank-mailman (input) for mailman id 5068;
 Fri, 09 Oct 2020 19:52:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQq-0002Bq-8U
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:00 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fca483a-f979-4b79-8dd1-10f1925524ff;
 Fri, 09 Oct 2020 19:51:56 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:55 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:54 -0700
X-Inumbo-ID: 8fca483a-f979-4b79-8dd1-10f1925524ff
IronPort-SDR: 4yfB73SxYF7XBF8UGI/c48/8kLz3A3unWQKVAAx1vb3BnR+jVKIia0bIpw3KbebLm9HwpFLIRq
 AyN6kBu/ZHnQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165643001"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165643001"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: vmw0BK7OYY8h0fZsnTgtJRpvmXsWJvpJiZ/O3ONyZujh0oi54UYOLunW4B1xesRkeTs3uxMx1d
 pOHAzA+ZEWzg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="529053305"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 19/58] fs/hfsplus: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:54 -0700
Message-Id: <20201009195033.3208459-20-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/hfsplus/bitmap.c |  20 ++++-----
 fs/hfsplus/bnode.c  | 102 ++++++++++++++++++++++----------------------
 fs/hfsplus/btree.c  |  18 ++++----
 3 files changed, 70 insertions(+), 70 deletions(-)

diff --git a/fs/hfsplus/bitmap.c b/fs/hfsplus/bitmap.c
index cebce0cfe340..9ec7c1559a0c 100644
--- a/fs/hfsplus/bitmap.c
+++ b/fs/hfsplus/bitmap.c
@@ -39,7 +39,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 		start = size;
 		goto out;
 	}
-	pptr = kmap(page);
+	pptr = kmap_thread(page);
 	curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32;
 	i = offset % 32;
 	offset &= ~(PAGE_CACHE_BITS - 1);
@@ -74,7 +74,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			}
 			curr++;
 		}
-		kunmap(page);
+		kunmap_thread(page);
 		offset += PAGE_CACHE_BITS;
 		if (offset >= size)
 			break;
@@ -84,7 +84,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			start = size;
 			goto out;
 		}
-		curr = pptr = kmap(page);
+		curr = pptr = kmap_thread(page);
 		if ((size ^ offset) / PAGE_CACHE_BITS)
 			end = pptr + PAGE_CACHE_BITS / 32;
 		else
@@ -127,7 +127,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			len -= 32;
 		}
 		set_page_dirty(page);
-		kunmap(page);
+		kunmap_thread(page);
 		offset += PAGE_CACHE_BITS;
 		page = read_mapping_page(mapping, offset / PAGE_CACHE_BITS,
 					 NULL);
@@ -135,7 +135,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 			start = size;
 			goto out;
 		}
-		pptr = kmap(page);
+		pptr = kmap_thread(page);
 		curr = pptr;
 		end = pptr + PAGE_CACHE_BITS / 32;
 	}
@@ -151,7 +151,7 @@ int hfsplus_block_allocate(struct super_block *sb, u32 size,
 done:
 	*curr = cpu_to_be32(n);
 	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
 	*max = offset + (curr - pptr) * 32 + i - start;
 	sbi->free_blocks -= *max;
 	hfsplus_mark_mdb_dirty(sb);
@@ -185,7 +185,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
 	page = read_mapping_page(mapping, pnr, NULL);
 	if (IS_ERR(page))
 		goto kaboom;
-	pptr = kmap(page);
+	pptr = kmap_thread(page);
 	curr = pptr + (offset & (PAGE_CACHE_BITS - 1)) / 32;
 	end = pptr + PAGE_CACHE_BITS / 32;
 	len = count;
@@ -215,11 +215,11 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
 		if (!count)
 			break;
 		set_page_dirty(page);
-		kunmap(page);
+		kunmap_thread(page);
 		page = read_mapping_page(mapping, ++pnr, NULL);
 		if (IS_ERR(page))
 			goto kaboom;
-		pptr = kmap(page);
+		pptr = kmap_thread(page);
 		curr = pptr;
 		end = pptr + PAGE_CACHE_BITS / 32;
 	}
@@ -231,7 +231,7 @@ int hfsplus_block_free(struct super_block *sb, u32 offset, u32 count)
 	}
 out:
 	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
 	sbi->free_blocks += len;
 	hfsplus_mark_mdb_dirty(sb);
 	mutex_unlock(&sbi->alloc_mutex);
diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
index 177fae4e6581..62757d92fbbd 100644
--- a/fs/hfsplus/bnode.c
+++ b/fs/hfsplus/bnode.c
@@ -29,14 +29,14 @@ void hfs_bnode_read(struct hfs_bnode *node, void *buf, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memcpy(buf, kmap(*pagep) + off, l);
-	kunmap(*pagep);
+	memcpy(buf, kmap_thread(*pagep) + off, l);
+	kunmap_thread(*pagep);
 
 	while ((len -= l) != 0) {
 		buf += l;
 		l = min_t(int, len, PAGE_SIZE);
-		memcpy(buf, kmap(*++pagep), l);
-		kunmap(*pagep);
+		memcpy(buf, kmap_thread(*++pagep), l);
+		kunmap_thread(*pagep);
 	}
 }
 
@@ -82,16 +82,16 @@ void hfs_bnode_write(struct hfs_bnode *node, void *buf, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memcpy(kmap(*pagep) + off, buf, l);
+	memcpy(kmap_thread(*pagep) + off, buf, l);
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 
 	while ((len -= l) != 0) {
 		buf += l;
 		l = min_t(int, len, PAGE_SIZE);
-		memcpy(kmap(*++pagep), buf, l);
+		memcpy(kmap_thread(*++pagep), buf, l);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
 	}
 }
 
@@ -112,15 +112,15 @@ void hfs_bnode_clear(struct hfs_bnode *node, int off, int len)
 	off &= ~PAGE_MASK;
 
 	l = min_t(int, len, PAGE_SIZE - off);
-	memset(kmap(*pagep) + off, 0, l);
+	memset(kmap_thread(*pagep) + off, 0, l);
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 
 	while ((len -= l) != 0) {
 		l = min_t(int, len, PAGE_SIZE);
-		memset(kmap(*++pagep), 0, l);
+		memset(kmap_thread(*++pagep), 0, l);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
 	}
 }
 
@@ -142,24 +142,24 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
 
 	if (src == dst) {
 		l = min_t(int, len, PAGE_SIZE - src);
-		memcpy(kmap(*dst_page) + src, kmap(*src_page) + src, l);
-		kunmap(*src_page);
+		memcpy(kmap_thread(*dst_page) + src, kmap_thread(*src_page) + src, l);
+		kunmap_thread(*src_page);
 		set_page_dirty(*dst_page);
-		kunmap(*dst_page);
+		kunmap_thread(*dst_page);
 
 		while ((len -= l) != 0) {
 			l = min_t(int, len, PAGE_SIZE);
-			memcpy(kmap(*++dst_page), kmap(*++src_page), l);
-			kunmap(*src_page);
+			memcpy(kmap_thread(*++dst_page), kmap_thread(*++src_page), l);
+			kunmap_thread(*src_page);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 		}
 	} else {
 		void *src_ptr, *dst_ptr;
 
 		do {
-			src_ptr = kmap(*src_page) + src;
-			dst_ptr = kmap(*dst_page) + dst;
+			src_ptr = kmap_thread(*src_page) + src;
+			dst_ptr = kmap_thread(*dst_page) + dst;
 			if (PAGE_SIZE - src < PAGE_SIZE - dst) {
 				l = PAGE_SIZE - src;
 				src = 0;
@@ -171,9 +171,9 @@ void hfs_bnode_copy(struct hfs_bnode *dst_node, int dst,
 			}
 			l = min(len, l);
 			memcpy(dst_ptr, src_ptr, l);
-			kunmap(*src_page);
+			kunmap_thread(*src_page);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 			if (!dst)
 				dst_page++;
 			else
@@ -202,27 +202,27 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 
 		if (src == dst) {
 			while (src < len) {
-				memmove(kmap(*dst_page), kmap(*src_page), src);
-				kunmap(*src_page);
+				memmove(kmap_thread(*dst_page), kmap_thread(*src_page), src);
+				kunmap_thread(*src_page);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
 				len -= src;
 				src = PAGE_SIZE;
 				src_page--;
 				dst_page--;
 			}
 			src -= len;
-			memmove(kmap(*dst_page) + src,
-				kmap(*src_page) + src, len);
-			kunmap(*src_page);
+			memmove(kmap_thread(*dst_page) + src,
+				kmap_thread(*src_page) + src, len);
+			kunmap_thread(*src_page);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 		} else {
 			void *src_ptr, *dst_ptr;
 
 			do {
-				src_ptr = kmap(*src_page) + src;
-				dst_ptr = kmap(*dst_page) + dst;
+				src_ptr = kmap_thread(*src_page) + src;
+				dst_ptr = kmap_thread(*dst_page) + dst;
 				if (src < dst) {
 					l = src;
 					src = PAGE_SIZE;
@@ -234,9 +234,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 				}
 				l = min(len, l);
 				memmove(dst_ptr - l, src_ptr - l, l);
-				kunmap(*src_page);
+				kunmap_thread(*src_page);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
 				if (dst == PAGE_SIZE)
 					dst_page--;
 				else
@@ -251,26 +251,26 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 
 		if (src == dst) {
 			l = min_t(int, len, PAGE_SIZE - src);
-			memmove(kmap(*dst_page) + src,
-				kmap(*src_page) + src, l);
-			kunmap(*src_page);
+			memmove(kmap_thread(*dst_page) + src,
+				kmap_thread(*src_page) + src, l);
+			kunmap_thread(*src_page);
 			set_page_dirty(*dst_page);
-			kunmap(*dst_page);
+			kunmap_thread(*dst_page);
 
 			while ((len -= l) != 0) {
 				l = min_t(int, len, PAGE_SIZE);
-				memmove(kmap(*++dst_page),
-					kmap(*++src_page), l);
-				kunmap(*src_page);
+				memmove(kmap_thread(*++dst_page),
+					kmap_thread(*++src_page), l);
+				kunmap_thread(*src_page);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
 			}
 		} else {
 			void *src_ptr, *dst_ptr;
 
 			do {
-				src_ptr = kmap(*src_page) + src;
-				dst_ptr = kmap(*dst_page) + dst;
+				src_ptr = kmap_thread(*src_page) + src;
+				dst_ptr = kmap_thread(*dst_page) + dst;
 				if (PAGE_SIZE - src <
 						PAGE_SIZE - dst) {
 					l = PAGE_SIZE - src;
@@ -283,9 +283,9 @@ void hfs_bnode_move(struct hfs_bnode *node, int dst, int src, int len)
 				}
 				l = min(len, l);
 				memmove(dst_ptr, src_ptr, l);
-				kunmap(*src_page);
+				kunmap_thread(*src_page);
 				set_page_dirty(*dst_page);
-				kunmap(*dst_page);
+				kunmap_thread(*dst_page);
 				if (!dst)
 					dst_page++;
 				else
@@ -502,14 +502,14 @@ struct hfs_bnode *hfs_bnode_find(struct hfs_btree *tree, u32 num)
 	if (!test_bit(HFS_BNODE_NEW, &node->flags))
 		return node;
 
-	desc = (struct hfs_bnode_desc *)(kmap(node->page[0]) +
+	desc = (struct hfs_bnode_desc *)(kmap_thread(node->page[0]) +
 			node->page_offset);
 	node->prev = be32_to_cpu(desc->prev);
 	node->next = be32_to_cpu(desc->next);
 	node->num_recs = be16_to_cpu(desc->num_recs);
 	node->type = desc->type;
 	node->height = desc->height;
-	kunmap(node->page[0]);
+	kunmap_thread(node->page[0]);
 
 	switch (node->type) {
 	case HFS_NODE_HEADER:
@@ -593,14 +593,14 @@ struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
 	}
 
 	pagep = node->page;
-	memset(kmap(*pagep) + node->page_offset, 0,
+	memset(kmap_thread(*pagep) + node->page_offset, 0,
 	       min_t(int, PAGE_SIZE, tree->node_size));
 	set_page_dirty(*pagep);
-	kunmap(*pagep);
+	kunmap_thread(*pagep);
 	for (i = 1; i < tree->pages_per_bnode; i++) {
-		memset(kmap(*++pagep), 0, PAGE_SIZE);
+		memset(kmap_thread(*++pagep), 0, PAGE_SIZE);
 		set_page_dirty(*pagep);
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
 	}
 	clear_bit(HFS_BNODE_NEW, &node->flags);
 	wake_up(&node->lock_wq);
diff --git a/fs/hfsplus/btree.c b/fs/hfsplus/btree.c
index 66774f4cb4fd..74fcef3a1628 100644
--- a/fs/hfsplus/btree.c
+++ b/fs/hfsplus/btree.c
@@ -394,7 +394,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 
 	off += node->page_offset;
 	pagep = node->page + (off >> PAGE_SHIFT);
-	data = kmap(*pagep);
+	data = kmap_thread(*pagep);
 	off &= ~PAGE_MASK;
 	idx = 0;
 
@@ -407,7 +407,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 						idx += i;
 						data[off] |= m;
 						set_page_dirty(*pagep);
-						kunmap(*pagep);
+						kunmap_thread(*pagep);
 						tree->free_nodes--;
 						mark_inode_dirty(tree->inode);
 						hfs_bnode_put(node);
@@ -417,14 +417,14 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 				}
 			}
 			if (++off >= PAGE_SIZE) {
-				kunmap(*pagep);
-				data = kmap(*++pagep);
+				kunmap_thread(*pagep);
+				data = kmap_thread(*++pagep);
 				off = 0;
 			}
 			idx += 8;
 			len--;
 		}
-		kunmap(*pagep);
+		kunmap_thread(*pagep);
 		nidx = node->next;
 		if (!nidx) {
 			hfs_dbg(BNODE_MOD, "create new bmap node\n");
@@ -440,7 +440,7 @@ struct hfs_bnode *hfs_bmap_alloc(struct hfs_btree *tree)
 		off = off16;
 		off += node->page_offset;
 		pagep = node->page + (off >> PAGE_SHIFT);
-		data = kmap(*pagep);
+		data = kmap_thread(*pagep);
 		off &= ~PAGE_MASK;
 	}
 }
@@ -490,7 +490,7 @@ void hfs_bmap_free(struct hfs_bnode *node)
 	}
 	off += node->page_offset + nidx / 8;
 	page = node->page[off >> PAGE_SHIFT];
-	data = kmap(page);
+	data = kmap_thread(page);
 	off &= ~PAGE_MASK;
 	m = 1 << (~nidx & 7);
 	byte = data[off];
@@ -498,13 +498,13 @@ void hfs_bmap_free(struct hfs_bnode *node)
 		pr_crit("trying to free free bnode "
 				"%u(%d)\n",
 			node->this, node->type);
-		kunmap(page);
+		kunmap_thread(page);
 		hfs_bnode_put(node);
 		return;
 	}
 	data[off] = byte & ~m;
 	set_page_dirty(page);
-	kunmap(page);
+	kunmap_thread(page);
 	hfs_bnode_put(node);
 	tree->free_nodes++;
 	mark_inode_dirty(tree->inode);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5070.13097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQt-0002gw-Pl; Fri, 09 Oct 2020 19:52:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5070.13097; Fri, 09 Oct 2020 19:52:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQt-0002go-KP; Fri, 09 Oct 2020 19:52:03 +0000
Received: by outflank-mailman (input) for mailman id 5070;
 Fri, 09 Oct 2020 19:52:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQr-0002d3-KD
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:01 +0000
Received: from mga05.intel.com (unknown [192.55.52.43])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad65a983-4fff-4a15-add7-cf80a5a90236;
 Fri, 09 Oct 2020 19:52:00 +0000 (UTC)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:59 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:57 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQr-0002d3-KD
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:01 +0000
X-Inumbo-ID: ad65a983-4fff-4a15-add7-cf80a5a90236
Received: from mga05.intel.com (unknown [192.55.52.43])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ad65a983-4fff-4a15-add7-cf80a5a90236;
	Fri, 09 Oct 2020 19:52:00 +0000 (UTC)
IronPort-SDR: tfPYQjWZcvUNnpYePsAkAMaOg6Pbh7V3JU1drmegAKrQ3tb8GFnTuTFhk8DBuRopBRHq/BF4wa
 BVH26UoDbF/A==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="250226060"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="250226060"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga004.jf.intel.com ([10.7.209.38])
  by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:59 -0700
IronPort-SDR: eWSKFRksQmWHvS1j4lFouXjU/VQ78asIZFlodYo3aryNHFOArT/TLAMBqja15iBonRYEaVT+Tm
 vvSyZvytK3gg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="462300719"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:57 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	David Woodhouse <dwmw2@infradead.org>,
	Richard Weinberger <richard@nod.at>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 20/58] fs/jffs2: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:55 -0700
Message-Id: <20201009195033.3208459-21-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Richard Weinberger <richard@nod.at>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/jffs2/file.c | 4 ++--
 fs/jffs2/gc.c   | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
index f8fb89b10227..3e6d54f9b011 100644
--- a/fs/jffs2/file.c
+++ b/fs/jffs2/file.c
@@ -88,7 +88,7 @@ static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg)
 
 	BUG_ON(!PageLocked(pg));
 
-	pg_buf = kmap(pg);
+	pg_buf = kmap_thread(pg);
 	/* FIXME: Can kmap fail? */
 
 	ret = jffs2_read_inode_range(c, f, pg_buf, pg->index << PAGE_SHIFT,
@@ -103,7 +103,7 @@ static int jffs2_do_readpage_nolock (struct inode *inode, struct page *pg)
 	}
 
 	flush_dcache_page(pg);
-	kunmap(pg);
+	kunmap_thread(pg);
 
 	jffs2_dbg(2, "readpage finished\n");
 	return ret;
diff --git a/fs/jffs2/gc.c b/fs/jffs2/gc.c
index 373b3b7c9f44..a7259783ab84 100644
--- a/fs/jffs2/gc.c
+++ b/fs/jffs2/gc.c
@@ -1335,7 +1335,7 @@ static int jffs2_garbage_collect_dnode(struct jffs2_sb_info *c, struct jffs2_era
 		return PTR_ERR(page);
 	}
 
-	pg_ptr = kmap(page);
+	pg_ptr = kmap_thread(page);
 	mutex_lock(&f->sem);
 
 	offset = start;
@@ -1400,7 +1400,7 @@ static int jffs2_garbage_collect_dnode(struct jffs2_sb_info *c, struct jffs2_era
 		}
 	}
 
-	kunmap(page);
+	kunmap_thread(page);
 	put_page(page);
 	return ret;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5073.13109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQy-0002oJ-41; Fri, 09 Oct 2020 19:52:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5073.13109; Fri, 09 Oct 2020 19:52:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyQx-0002o5-V3; Fri, 09 Oct 2020 19:52:07 +0000
Received: by outflank-mailman (input) for mailman id 5073;
 Fri, 09 Oct 2020 19:52:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQw-0002d3-IT
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:06 +0000
Received: from mga05.intel.com (unknown [192.55.52.43])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 96fa9753-0987-4330-bb12-b2e8a3f54d44;
 Fri, 09 Oct 2020 19:52:01 +0000 (UTC)
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:59 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:37 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQw-0002d3-IT
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:06 +0000
X-Inumbo-ID: 96fa9753-0987-4330-bb12-b2e8a3f54d44
Received: from mga05.intel.com (unknown [192.55.52.43])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 96fa9753-0987-4330-bb12-b2e8a3f54d44;
	Fri, 09 Oct 2020 19:52:01 +0000 (UTC)
IronPort-SDR: qLhhMoyOBDZc/+AW2KGKEP3DfAdFgCaVCOKSxpoyj0Rq/JOYiMVt1e1Bz+zJx2rL+dAIaT43P/
 HJ5I4RTHpJEQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="250226062"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="250226062"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga008.jf.intel.com ([10.7.209.65])
  by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:59 -0700
IronPort-SDR: 6nDyG/rNIU8VCp7PRxt8N7VM6Ts7FaR6Cq3BMAOPdzxAbIieQhQY1hWISzmf8TSYIGXE2cP/wS
 1DDas2UbMNkw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="345147432"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:37 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Steve French <sfrench@samba.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 14/58] fs/cifs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:49 -0700
Message-Id: <20201009195033.3208459-15-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Steve French <sfrench@samba.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/cifs/cifsencrypt.c |  6 +++---
 fs/cifs/file.c        | 16 ++++++++--------
 fs/cifs/smb2ops.c     |  8 ++++----
 3 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
index 9daa256f69d4..2f8232d01a56 100644
--- a/fs/cifs/cifsencrypt.c
+++ b/fs/cifs/cifsencrypt.c
@@ -82,17 +82,17 @@ int __cifs_calc_signature(struct smb_rqst *rqst,
 
 		rqst_page_get_length(rqst, i, &len, &offset);
 
-		kaddr = (char *) kmap(rqst->rq_pages[i]) + offset;
+		kaddr = (char *) kmap_thread(rqst->rq_pages[i]) + offset;
 
 		rc = crypto_shash_update(shash, kaddr, len);
 		if (rc) {
 			cifs_dbg(VFS, "%s: Could not update with payload\n",
 				 __func__);
-			kunmap(rqst->rq_pages[i]);
+			kunmap_thread(rqst->rq_pages[i]);
 			return rc;
 		}
 
-		kunmap(rqst->rq_pages[i]);
+		kunmap_thread(rqst->rq_pages[i]);
 	}
 
 	rc = crypto_shash_final(shash, signature);
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index be46fab4c96d..6db2caab8852 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -2145,17 +2145,17 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to)
 	inode = page->mapping->host;
 
 	offset += (loff_t)from;
-	write_data = kmap(page);
+	write_data = kmap_thread(page);
 	write_data += from;
 
 	if ((to > PAGE_SIZE) || (from > to)) {
-		kunmap(page);
+		kunmap_thread(page);
 		return -EIO;
 	}
 
 	/* racing with truncate? */
 	if (offset > mapping->host->i_size) {
-		kunmap(page);
+		kunmap_thread(page);
 		return 0; /* don't care */
 	}
 
@@ -2183,7 +2183,7 @@ static int cifs_partialpagewrite(struct page *page, unsigned from, unsigned to)
 			rc = -EIO;
 	}
 
-	kunmap(page);
+	kunmap_thread(page);
 	return rc;
 }
 
@@ -2559,10 +2559,10 @@ static int cifs_write_end(struct file *file, struct address_space *mapping,
 		   known which we might as well	leverage */
 		/* BB check if anything else missing out of ppw
 		   such as updating last write time */
-		page_data = kmap(page);
+		page_data = kmap_thread(page);
 		rc = cifs_write(cfile, pid, page_data + offset, copied, &pos);
 		/* if (rc < 0) should we set writebehind rc? */
-		kunmap(page);
+		kunmap_thread(page);
 
 		free_xid(xid);
 	} else {
@@ -4511,7 +4511,7 @@ static int cifs_readpage_worker(struct file *file, struct page *page,
 	if (rc == 0)
 		goto read_complete;
 
-	read_data = kmap(page);
+	read_data = kmap_thread(page);
 	/* for reads over a certain size could initiate async read ahead */
 
 	rc = cifs_read(file, read_data, PAGE_SIZE, poffset);
@@ -4540,7 +4540,7 @@ static int cifs_readpage_worker(struct file *file, struct page *page,
 	rc = 0;
 
 io_error:
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 
 read_complete:
diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
index 32f90dc82c84..a3e7ebab38b6 100644
--- a/fs/cifs/smb2ops.c
+++ b/fs/cifs/smb2ops.c
@@ -4068,12 +4068,12 @@ smb3_init_transform_rq(struct TCP_Server_Info *server, int num_rqst,
 
 			rqst_page_get_length(&new_rq[i], j, &len, &offset);
 
-			dst = (char *) kmap(new_rq[i].rq_pages[j]) + offset;
-			src = (char *) kmap(old_rq[i - 1].rq_pages[j]) + offset;
+			dst = (char *) kmap_thread(new_rq[i].rq_pages[j]) + offset;
+			src = (char *) kmap_thread(old_rq[i - 1].rq_pages[j]) + offset;
 
 			memcpy(dst, src, len);
-			kunmap(new_rq[i].rq_pages[j]);
-			kunmap(old_rq[i - 1].rq_pages[j]);
+			kunmap_thread(new_rq[i].rq_pages[j]);
+			kunmap_thread(old_rq[i - 1].rq_pages[j]);
 		}
 	}
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5074.13120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyR2-0002up-Db; Fri, 09 Oct 2020 19:52:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5074.13120; Fri, 09 Oct 2020 19:52:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyR2-0002uf-A1; Fri, 09 Oct 2020 19:52:12 +0000
Received: by outflank-mailman (input) for mailman id 5074;
 Fri, 09 Oct 2020 19:52:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyR1-0002d3-Im
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:11 +0000
Received: from mga01.intel.com (unknown [192.55.52.88])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 30150e07-14e3-48da-a02e-f5f16bf31f29;
 Fri, 09 Oct 2020 19:52:02 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:01 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:01 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyR1-0002d3-Im
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:11 +0000
X-Inumbo-ID: 30150e07-14e3-48da-a02e-f5f16bf31f29
Received: from mga01.intel.com (unknown [192.55.52.88])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 30150e07-14e3-48da-a02e-f5f16bf31f29;
	Fri, 09 Oct 2020 19:52:02 +0000 (UTC)
IronPort-SDR: CwCqh1jTT86tyWyXIuTm8S1/lWi685IvGTH5/sYLRrcdqRctlM76XwR4KnDXsehQEdaeq4f1lR
 nQoAjJpJtE4Q==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976218"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="182976218"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
  by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:01 -0700
IronPort-SDR: tVhE30/49ehqkoc80ZCcDXB4dMTZze5YTnWPIjaBRWP4P6o/AEUDuRNENWQluyt3yOqCx99fob
 q9C+vTCoMURQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="519846944"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:01 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Trond Myklebust <trond.myklebust@hammerspace.com>,
	Anna Schumaker <anna.schumaker@netapp.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 21/58] fs/nfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:56 -0700
Message-Id: <20201009195033.3208459-22-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <anna.schumaker@netapp.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/nfs/dir.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index cb52db9a0cfb..fee321acccb4 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -213,7 +213,7 @@ int nfs_readdir_make_qstr(struct qstr *string, const char *name, unsigned int le
 static
 int nfs_readdir_add_to_array(struct nfs_entry *entry, struct page *page)
 {
-	struct nfs_cache_array *array = kmap(page);
+	struct nfs_cache_array *array = kmap_thread(page);
 	struct nfs_cache_array_entry *cache_entry;
 	int ret;
 
@@ -235,7 +235,7 @@ int nfs_readdir_add_to_array(struct nfs_entry *entry, struct page *page)
 	if (entry->eof != 0)
 		array->eof_index = array->size;
 out:
-	kunmap(page);
+	kunmap_thread(page);
 	return ret;
 }
 
@@ -347,7 +347,7 @@ int nfs_readdir_search_array(nfs_readdir_descriptor_t *desc)
 	struct nfs_cache_array *array;
 	int status;
 
-	array = kmap(desc->page);
+	array = kmap_thread(desc->page);
 
 	if (*desc->dir_cookie == 0)
 		status = nfs_readdir_search_for_pos(array, desc);
@@ -359,7 +359,7 @@ int nfs_readdir_search_array(nfs_readdir_descriptor_t *desc)
 		desc->current_index += array->size;
 		desc->page_index++;
 	}
-	kunmap(desc->page);
+	kunmap_thread(desc->page);
 	return status;
 }
 
@@ -602,10 +602,10 @@ int nfs_readdir_page_filler(nfs_readdir_descriptor_t *desc, struct nfs_entry *en
 
 out_nopages:
 	if (count == 0 || (status == -EBADCOOKIE && entry->eof != 0)) {
-		array = kmap(page);
+		array = kmap_thread(page);
 		array->eof_index = array->size;
 		status = 0;
-		kunmap(page);
+		kunmap_thread(page);
 	}
 
 	put_page(scratch);
@@ -669,7 +669,7 @@ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page,
 		goto out;
 	}
 
-	array = kmap(page);
+	array = kmap_thread(page);
 
 	status = nfs_readdir_alloc_pages(pages, array_size);
 	if (status < 0)
@@ -691,7 +691,7 @@ int nfs_readdir_xdr_to_array(nfs_readdir_descriptor_t *desc, struct page *page,
 
 	nfs_readdir_free_pages(pages, array_size);
 out_release_array:
-	kunmap(page);
+	kunmap_thread(page);
 	nfs4_label_free(entry.label);
 out:
 	nfs_free_fattr(entry.fattr);
@@ -803,7 +803,7 @@ int nfs_do_filldir(nfs_readdir_descriptor_t *desc)
 	struct nfs_cache_array *array = NULL;
 	struct nfs_open_dir_context *ctx = file->private_data;
 
-	array = kmap(desc->page);
+	array = kmap_thread(desc->page);
 	for (i = desc->cache_entry_index; i < array->size; i++) {
 		struct nfs_cache_array_entry *ent;
 
@@ -827,7 +827,7 @@ int nfs_do_filldir(nfs_readdir_descriptor_t *desc)
 	if (array->eof_index >= 0)
 		desc->eof = true;
 
-	kunmap(desc->page);
+	kunmap_thread(desc->page);
 	dfprintk(DIRCACHE, "NFS: nfs_do_filldir() filling ended @ cookie %Lu; returning = %d\n",
 			(unsigned long long)*desc->dir_cookie, res);
 	return res;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5075.13127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyR3-0002w9-3M; Fri, 09 Oct 2020 19:52:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5075.13127; Fri, 09 Oct 2020 19:52:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyR2-0002vq-Px; Fri, 09 Oct 2020 19:52:12 +0000
Received: by outflank-mailman (input) for mailman id 5075;
 Fri, 09 Oct 2020 19:52:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyR1-0002tp-Mb
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:11 +0000
Received: from mga05.intel.com (unknown [192.55.52.43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9ddbe04-042d-40ae-b137-5053db29f448;
 Fri, 09 Oct 2020 19:52:10 +0000 (UTC)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:09 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:08 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyR1-0002tp-Mb
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:11 +0000
X-Inumbo-ID: b9ddbe04-042d-40ae-b137-5053db29f448
Received: from mga05.intel.com (unknown [192.55.52.43])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b9ddbe04-042d-40ae-b137-5053db29f448;
	Fri, 09 Oct 2020 19:52:10 +0000 (UTC)
IronPort-SDR: AVSKZFwlV1ldgQIvgIgp+VNk1rb9F997DQAB5T/JQHKkrwSWqMMgm3DTpbU3qi5SJJPXOhr8iA
 DoV7+m7jZIoQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="250226094"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="250226094"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga006.jf.intel.com ([10.7.209.51])
  by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:09 -0700
IronPort-SDR: zRSzmM1HZFGhzQcGhWxIWUdt7PON477D2ysFpCnQTUxdw9Z8VdaNQXVN+6ixBePqleDwfFxfgw
 K58aOn7YePQQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="317147210"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:08 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Miklos Szeredi <miklos@szeredi.hu>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 23/58] fs/fuse: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:58 -0700
Message-Id: <20201009195033.3208459-24-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Miklos Szeredi <miklos@szeredi.hu>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/fuse/readdir.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/fuse/readdir.c b/fs/fuse/readdir.c
index 90e3f01bd796..953ffe6f56e3 100644
--- a/fs/fuse/readdir.c
+++ b/fs/fuse/readdir.c
@@ -536,9 +536,9 @@ static int fuse_readdir_cached(struct file *file, struct dir_context *ctx)
 	 * Contents of the page are now protected against changing by holding
 	 * the page lock.
 	 */
-	addr = kmap(page);
+	addr = kmap_thread(page);
 	res = fuse_parse_cache(ff, addr, size, ctx);
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	put_page(page);
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5076.13145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyR7-00035l-RV; Fri, 09 Oct 2020 19:52:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5076.13145; Fri, 09 Oct 2020 19:52:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyR7-00035S-Lg; Fri, 09 Oct 2020 19:52:17 +0000
Received: by outflank-mailman (input) for mailman id 5076;
 Fri, 09 Oct 2020 19:52:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyR6-0002d3-Ip
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:16 +0000
Received: from mga01.intel.com (unknown [192.55.52.88])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25d30baa-8627-4539-9a37-76ca83e75cd5;
 Fri, 09 Oct 2020 19:52:05 +0000 (UTC)
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:05 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:04 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyR6-0002d3-Ip
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:16 +0000
X-Inumbo-ID: 25d30baa-8627-4539-9a37-76ca83e75cd5
Received: from mga01.intel.com (unknown [192.55.52.88])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 25d30baa-8627-4539-9a37-76ca83e75cd5;
	Fri, 09 Oct 2020 19:52:05 +0000 (UTC)
IronPort-SDR: JxxB9TTBSbIEnWjpq1IRg59LzNcV9PhmlWv9G263YWDUYkGI1BEGNxnTI82gi7w7iw/d3BJ8F5
 +pbmiyGJD+OQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976229"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="182976229"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
  by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:05 -0700
IronPort-SDR: UirI0CQIu4U61+QlSzJhUSBnMwMbOjm+hcyTouEwvOwx6I0trp+50TKD6KEUa5cRPFVp+hPY69
 eHlTO5RS+u4A==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="343972211"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:04 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Chao Yu <chao@kernel.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:57 -0700
Message-Id: <20201009195033.3208459-23-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/f2fs/f2fs.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index d9e52a7f3702..ff72a45a577e 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -2410,12 +2410,12 @@ static inline struct page *f2fs_pagecache_get_page(
 
 static inline void f2fs_copy_page(struct page *src, struct page *dst)
 {
-	char *src_kaddr = kmap(src);
-	char *dst_kaddr = kmap(dst);
+	char *src_kaddr = kmap_thread(src);
+	char *dst_kaddr = kmap_thread(dst);
 
 	memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);
-	kunmap(dst);
-	kunmap(src);
+	kunmap_thread(dst);
+	kunmap_thread(src);
 }
 
 static inline void f2fs_put_page(struct page *page, int unlock)
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5078.13157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRD-0003Ew-88; Fri, 09 Oct 2020 19:52:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5078.13157; Fri, 09 Oct 2020 19:52:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRD-0003Em-37; Fri, 09 Oct 2020 19:52:23 +0000
Received: by outflank-mailman (input) for mailman id 5078;
 Fri, 09 Oct 2020 19:52:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRB-0002d3-It
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:21 +0000
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2749c6a6-7012-41e4-b3da-3db0212b39f1;
 Fri, 09 Oct 2020 19:52:15 +0000 (UTC)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:14 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:13 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRB-0002d3-It
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:21 +0000
X-Inumbo-ID: 2749c6a6-7012-41e4-b3da-3db0212b39f1
Received: from mga14.intel.com (unknown [192.55.52.115])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2749c6a6-7012-41e4-b3da-3db0212b39f1;
	Fri, 09 Oct 2020 19:52:15 +0000 (UTC)
IronPort-SDR: v5b9Q8Z2GSzBmZpclLi79CDcBclwnaLDng+afkWHJyp8MtfG51HJ/dnMv79JJlXETxnoSQaMLm
 BTw3tczoCwBA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164743881"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="164743881"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga003.jf.intel.com ([10.7.209.27])
  by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:14 -0700
IronPort-SDR: 3JX06O44Su5jMQA8yIL2WSzzn7vN3G6TESZobv60VERxc/1W/c4pPVNWJf815mQW07ftkYqATa
 1m7LiUgK4+Rg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="312652755"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:13 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Christoph Hellwig <hch@infradead.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 24/58] fs/freevxfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:59 -0700
Message-Id: <20201009195033.3208459-25-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/freevxfs/vxfs_immed.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/freevxfs/vxfs_immed.c b/fs/freevxfs/vxfs_immed.c
index bfc780c682fb..9c42fec4cd85 100644
--- a/fs/freevxfs/vxfs_immed.c
+++ b/fs/freevxfs/vxfs_immed.c
@@ -69,9 +69,9 @@ vxfs_immed_readpage(struct file *fp, struct page *pp)
 	u_int64_t	offset = (u_int64_t)pp->index << PAGE_SHIFT;
 	caddr_t		kaddr;
 
-	kaddr = kmap(pp);
+	kaddr = kmap_thread(pp);
 	memcpy(kaddr, vip->vii_immed.vi_immed + offset, PAGE_SIZE);
-	kunmap(pp);
+	kunmap_thread(pp);
 	
 	flush_dcache_page(pp);
 	SetPageUptodate(pp);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5079.13164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRD-0003GE-R4; Fri, 09 Oct 2020 19:52:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5079.13164; Fri, 09 Oct 2020 19:52:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRD-0003Fi-FQ; Fri, 09 Oct 2020 19:52:23 +0000
Received: by outflank-mailman (input) for mailman id 5079;
 Fri, 09 Oct 2020 19:52:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRB-0003DD-TA
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:21 +0000
Received: from mga02.intel.com (unknown [134.134.136.20])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 114dd5ee-a9ba-4bfb-9ece-c3908f972bfd;
 Fri, 09 Oct 2020 19:52:19 +0000 (UTC)
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:18 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:18 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRB-0003DD-TA
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:21 +0000
X-Inumbo-ID: 114dd5ee-a9ba-4bfb-9ece-c3908f972bfd
Received: from mga02.intel.com (unknown [134.134.136.20])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 114dd5ee-a9ba-4bfb-9ece-c3908f972bfd;
	Fri, 09 Oct 2020 19:52:19 +0000 (UTC)
IronPort-SDR: HyfS7VmAzOeN0qMivQvoCHw/UhQd5NKN33p3DiiNbudQeqTB6/LL6bWl26oONTrvqBH8rxjgdt
 Hlf0ywrei4ng==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="152450959"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="152450959"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
  by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:18 -0700
IronPort-SDR: wr8yF8/sTlHJunnzFelUAfkdGIrCwMwbAv+fVYfwv9n4Q9ri2s7tZy/sCWglaZc+50HSVTr6HF
 aNXvhi/ktrcA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="518801285"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:18 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Jan Kara <jack@suse.cz>,
	"Theodore Ts'o" <tytso@mit.edu>,
	Randy Dunlap <rdunlap@infradead.org>,
	Alex Shi <alex.shi@linux.alibaba.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 25/58] fs/reiserfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:00 -0700
Message-Id: <20201009195033.3208459-26-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Jan Kara <jack@suse.cz>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: Alex Shi <alex.shi@linux.alibaba.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/reiserfs/journal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/reiserfs/journal.c b/fs/reiserfs/journal.c
index e98f99338f8f..be8f56261e8c 100644
--- a/fs/reiserfs/journal.c
+++ b/fs/reiserfs/journal.c
@@ -4194,11 +4194,11 @@ static int do_journal_end(struct reiserfs_transaction_handle *th, int flags)
 					    SB_ONDISK_JOURNAL_SIZE(sb)));
 			set_buffer_uptodate(tmp_bh);
 			page = cn->bh->b_page;
-			addr = kmap(page);
+			addr = kmap_thread(page);
 			memcpy(tmp_bh->b_data,
 			       addr + offset_in_page(cn->bh->b_data),
 			       cn->bh->b_size);
-			kunmap(page);
+			kunmap_thread(page);
 			mark_buffer_dirty(tmp_bh);
 			jindex++;
 			set_buffer_journal_dirty(cn->bh);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5081.13180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRN-0003Wh-7O; Fri, 09 Oct 2020 19:52:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5081.13180; Fri, 09 Oct 2020 19:52:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRN-0003WZ-3i; Fri, 09 Oct 2020 19:52:33 +0000
Received: by outflank-mailman (input) for mailman id 5081;
 Fri, 09 Oct 2020 19:52:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRL-0002d3-JS
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:31 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7d70813-3935-4011-8eb3-d3dc4f9da9a3;
 Fri, 09 Oct 2020 19:52:23 +0000 (UTC)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:22 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:21 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRL-0002d3-JS
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:31 +0000
X-Inumbo-ID: c7d70813-3935-4011-8eb3-d3dc4f9da9a3
Received: from mga18.intel.com (unknown [134.134.136.126])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c7d70813-3935-4011-8eb3-d3dc4f9da9a3;
	Fri, 09 Oct 2020 19:52:23 +0000 (UTC)
IronPort-SDR: nMC9K3ubWzfvYuZ7DSyCCO9Sbt6ISBPe0BjddKTp38eJH4KzOv9asRhjkuiaQ6OmoyxfQFymq/
 fyG07zfZVnFQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363585"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="153363585"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
  by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:22 -0700
IronPort-SDR: 66NbHcTTg+QzLX/XjjZVbeOCocpzHQlXc/1uyGjsCwgKA6U0t9qDgsp0fGjjJhdH2o4WKM8nKt
 1m4fFh6MimYQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="354959189"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:21 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Damien Le Moal <damien.lemoal@wdc.com>,
	Naohiro Aota <naohiro.aota@wdc.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 26/58] fs/zonefs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:01 -0700
Message-Id: <20201009195033.3208459-27-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Damien Le Moal <damien.lemoal@wdc.com>
Cc: Naohiro Aota <naohiro.aota@wdc.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/zonefs/super.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
index 8ec7c8f109d7..2fd6c86beee1 100644
--- a/fs/zonefs/super.c
+++ b/fs/zonefs/super.c
@@ -1297,7 +1297,7 @@ static int zonefs_read_super(struct super_block *sb)
 	if (ret)
 		goto free_page;
 
-	super = kmap(page);
+	super = kmap_thread(page);
 
 	ret = -EINVAL;
 	if (le32_to_cpu(super->s_magic) != ZONEFS_MAGIC)
@@ -1349,7 +1349,7 @@ static int zonefs_read_super(struct super_block *sb)
 	ret = 0;
 
 unmap:
-	kunmap(page);
+	kunmap_thread(page);
 free_page:
 	__free_page(page);
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5082.13193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRO-0003Zf-Mp; Fri, 09 Oct 2020 19:52:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5082.13193; Fri, 09 Oct 2020 19:52:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRO-0003ZT-H9; Fri, 09 Oct 2020 19:52:34 +0000
Received: by outflank-mailman (input) for mailman id 5082;
 Fri, 09 Oct 2020 19:52:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRM-0003VG-6w
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:32 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab7fd60a-f541-4ba8-8f03-1ab0e710db19;
 Fri, 09 Oct 2020 19:52:31 +0000 (UTC)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:30 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:29 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRM-0003VG-6w
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:32 +0000
X-Inumbo-ID: ab7fd60a-f541-4ba8-8f03-1ab0e710db19
Received: from mga18.intel.com (unknown [134.134.136.126])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ab7fd60a-f541-4ba8-8f03-1ab0e710db19;
	Fri, 09 Oct 2020 19:52:31 +0000 (UTC)
IronPort-SDR: ANd9sD4ETF8cDNeNNLHF1xEZdjjzmZGxw88XeOjm10HKAa6Lb2UuTqdSALqYxupiNLRRqyS0Tg
 lhpFpcbFa9GQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363625"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="153363625"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga002.jf.intel.com ([10.7.209.21])
  by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:30 -0700
IronPort-SDR: R8dnd4Gi4VR6B2jqh++vX5H71gVtYpBk3ChyBwkq2ll0tg176OYljcvG78jHbXA7T9l0eBxuZS
 YLX/odwmLDNw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="329006590"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:29 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	David Howells <dhowells@redhat.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 28/58] fs/cachefiles: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:03 -0700
Message-Id: <20201009195033.3208459-29-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/cachefiles/rdwr.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
index 3080cda9e824..2468e5c067ba 100644
--- a/fs/cachefiles/rdwr.c
+++ b/fs/cachefiles/rdwr.c
@@ -936,9 +936,9 @@ int cachefiles_write_page(struct fscache_storage *op, struct page *page)
 		}
 	}
 
-	data = kmap(page);
+	data = kmap_thread(page);
 	ret = kernel_write(file, data, len, &pos);
-	kunmap(page);
+	kunmap_thread(page);
 	fput(file);
 	if (ret != len)
 		goto error_eio;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5085.13204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRS-0003ge-9D; Fri, 09 Oct 2020 19:52:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5085.13204; Fri, 09 Oct 2020 19:52:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRS-0003gS-3y; Fri, 09 Oct 2020 19:52:38 +0000
Received: by outflank-mailman (input) for mailman id 5085;
 Fri, 09 Oct 2020 19:52:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRQ-0002d3-Jj
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:36 +0000
Received: from mga06.intel.com (unknown [134.134.136.31])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc8e77bf-2006-4dac-810f-8f63fcceecbd;
 Fri, 09 Oct 2020 19:52:27 +0000 (UTC)
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:26 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:25 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRQ-0002d3-Jj
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:36 +0000
X-Inumbo-ID: cc8e77bf-2006-4dac-810f-8f63fcceecbd
Received: from mga06.intel.com (unknown [134.134.136.31])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id cc8e77bf-2006-4dac-810f-8f63fcceecbd;
	Fri, 09 Oct 2020 19:52:27 +0000 (UTC)
IronPort-SDR: kH9/s/sUlXbDXcGE7Zkt2tcUwxSSR+7F2MO4VqVSmD1FVEMG0jDSuFnDzi2AqTzRqF/4bThxVz
 4InDf5c+JUaQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="227179028"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="227179028"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga007.jf.intel.com ([10.7.209.58])
  by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:26 -0700
IronPort-SDR: enOwdrUvox960YRmC5C0rYPH71vGkXRrRSf++MkJJnBOIwe1z1BLjEriaudfjb7SF8F3v1X02a
 epHq/DZLuLjw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="355863002"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:25 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Richard Weinberger <richard@nod.at>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 27/58] fs/ubifs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:02 -0700
Message-Id: <20201009195033.3208459-28-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Richard Weinberger <richard@nod.at>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/ubifs/file.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
index b77d1637bbbc..a3537447a885 100644
--- a/fs/ubifs/file.c
+++ b/fs/ubifs/file.c
@@ -111,7 +111,7 @@ static int do_readpage(struct page *page)
 	ubifs_assert(c, !PageChecked(page));
 	ubifs_assert(c, !PagePrivate(page));
 
-	addr = kmap(page);
+	addr = kmap_thread(page);
 
 	block = page->index << UBIFS_BLOCKS_PER_PAGE_SHIFT;
 	beyond = (i_size + UBIFS_BLOCK_SIZE - 1) >> UBIFS_BLOCK_SHIFT;
@@ -174,7 +174,7 @@ static int do_readpage(struct page *page)
 	SetPageUptodate(page);
 	ClearPageError(page);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	return 0;
 
 error:
@@ -182,7 +182,7 @@ static int do_readpage(struct page *page)
 	ClearPageUptodate(page);
 	SetPageError(page);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	return err;
 }
 
@@ -616,7 +616,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,
 	dbg_gen("ino %lu, pg %lu, i_size %lld, flags %#lx",
 		inode->i_ino, page->index, i_size, page->flags);
 
-	addr = zaddr = kmap(page);
+	addr = zaddr = kmap_thread(page);
 
 	end_index = (i_size - 1) >> PAGE_SHIFT;
 	if (!i_size || page->index > end_index) {
@@ -692,7 +692,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,
 	SetPageUptodate(page);
 	ClearPageError(page);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	*n = nn;
 	return 0;
 
@@ -700,7 +700,7 @@ static int populate_page(struct ubifs_info *c, struct page *page,
 	ClearPageUptodate(page);
 	SetPageError(page);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	ubifs_err(c, "bad data node (block %u, inode %lu)",
 		  page_block, inode->i_ino);
 	return -EINVAL;
@@ -918,7 +918,7 @@ static int do_writepage(struct page *page, int len)
 	/* Update radix tree tags */
 	set_page_writeback(page);
 
-	addr = kmap(page);
+	addr = kmap_thread(page);
 	block = page->index << UBIFS_BLOCKS_PER_PAGE_SHIFT;
 	i = 0;
 	while (len) {
@@ -950,7 +950,7 @@ static int do_writepage(struct page *page, int len)
 	ClearPagePrivate(page);
 	ClearPageChecked(page);
 
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	end_page_writeback(page);
 	return err;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5086.13212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRS-0003hs-S6; Fri, 09 Oct 2020 19:52:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5086.13212; Fri, 09 Oct 2020 19:52:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRS-0003hW-ID; Fri, 09 Oct 2020 19:52:38 +0000
Received: by outflank-mailman (input) for mailman id 5086;
 Fri, 09 Oct 2020 19:52:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRR-0003VG-4U
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:37 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e58c0191-cb0d-42e4-82e6-add0ebb4508f;
 Fri, 09 Oct 2020 19:52:33 +0000 (UTC)
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:33 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:32 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRR-0003VG-4U
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:37 +0000
X-Inumbo-ID: e58c0191-cb0d-42e4-82e6-add0ebb4508f
Received: from mga18.intel.com (unknown [134.134.136.126])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e58c0191-cb0d-42e4-82e6-add0ebb4508f;
	Fri, 09 Oct 2020 19:52:33 +0000 (UTC)
IronPort-SDR: Lo2XtPowmQYnTWHphnBW2bQKZ2oio1YDqSAzSkZkJbRd28sKJh/91jcB/8RpILqScKg5Y2mPSC
 mGB3S19zQb1Q==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363640"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="153363640"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
  by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:33 -0700
IronPort-SDR: XHP7HCDttdXyLYHE1MyL2Bm/StJaryejpRmS+iQtfLlXLzP04sJOdLalPQz60Qvt6cf6bIW1lg
 L4ex5grUzrzg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="298419726"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:32 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Anton Altaparmakov <anton@tuxera.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 29/58] fs/ntfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:04 -0700
Message-Id: <20201009195033.3208459-30-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Anton Altaparmakov <anton@tuxera.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/ntfs/aops.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/ntfs/aops.c b/fs/ntfs/aops.c
index bb0a43860ad2..11633d732809 100644
--- a/fs/ntfs/aops.c
+++ b/fs/ntfs/aops.c
@@ -1099,7 +1099,7 @@ static int ntfs_write_mst_block(struct page *page,
 	if (!nr_bhs)
 		goto done;
 	/* Map the page so we can access its contents. */
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	/* Clear the page uptodate flag whilst the mst fixups are applied. */
 	BUG_ON(!PageUptodate(page));
 	ClearPageUptodate(page);
@@ -1276,7 +1276,7 @@ static int ntfs_write_mst_block(struct page *page,
 		iput(VFS_I(base_tni));
 	}
 	SetPageUptodate(page);
-	kunmap(page);
+	kunmap_thread(page);
 done:
 	if (unlikely(err && err != -ENOMEM)) {
 		/*
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5087.13229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRX-0003rA-AW; Fri, 09 Oct 2020 19:52:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5087.13229; Fri, 09 Oct 2020 19:52:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRX-0003qu-55; Fri, 09 Oct 2020 19:52:43 +0000
Received: by outflank-mailman (input) for mailman id 5087;
 Fri, 09 Oct 2020 19:52:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRW-0003VG-4U
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:42 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b2ad870-6b3b-4492-9208-b18e08b18cd8;
 Fri, 09 Oct 2020 19:52:37 +0000 (UTC)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:37 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:36 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRW-0003VG-4U
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:42 +0000
X-Inumbo-ID: 1b2ad870-6b3b-4492-9208-b18e08b18cd8
Received: from mga18.intel.com (unknown [134.134.136.126])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1b2ad870-6b3b-4492-9208-b18e08b18cd8;
	Fri, 09 Oct 2020 19:52:37 +0000 (UTC)
IronPort-SDR: CUDWesW2Va/hB/jfTCoGkbhArBK0AbkZL9zvo/5/AypRvI6V34y6IeJs/Wo0Cre+qSM4x94xIj
 tZITQ10Kb0/w==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363661"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="153363661"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
  by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:37 -0700
IronPort-SDR: y48WLHg756NhaQ3a3jufh8VKJ1E37wUwe8fJ/HzDHZ6hDk/W8p1+XzOieTDyZGUDdKxAAVZp75
 YdcYJzI+gesw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="349957490"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:36 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 30/58] fs/romfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:05 -0700
Message-Id: <20201009195033.3208459-31-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/romfs/super.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/romfs/super.c b/fs/romfs/super.c
index e582d001f792..9050074c6755 100644
--- a/fs/romfs/super.c
+++ b/fs/romfs/super.c
@@ -107,7 +107,7 @@ static int romfs_readpage(struct file *file, struct page *page)
 	void *buf;
 	int ret;
 
-	buf = kmap(page);
+	buf = kmap_thread(page);
 	if (!buf)
 		return -ENOMEM;
 
@@ -136,7 +136,7 @@ static int romfs_readpage(struct file *file, struct page *page)
 		SetPageUptodate(page);
 
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	return ret;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5091.13241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRg-00046d-NV; Fri, 09 Oct 2020 19:52:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5091.13241; Fri, 09 Oct 2020 19:52:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRg-00046K-JT; Fri, 09 Oct 2020 19:52:52 +0000
Received: by outflank-mailman (input) for mailman id 5091;
 Fri, 09 Oct 2020 19:52:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRe-00043i-HP
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:50 +0000
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b734728-88a0-41f1-9504-607647bc49b5;
 Fri, 09 Oct 2020 19:52:49 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:48 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:47 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRe-00043i-HP
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:50 +0000
X-Inumbo-ID: 6b734728-88a0-41f1-9504-607647bc49b5
Received: from mga14.intel.com (unknown [192.55.52.115])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 6b734728-88a0-41f1-9504-607647bc49b5;
	Fri, 09 Oct 2020 19:52:49 +0000 (UTC)
IronPort-SDR: N7voPqBiOJWo/YvfTgf0df/GIAxVqFAdmol5LxUitt2JNgrbMo2iP2Jv81aeB8EE9Jq27j+af+
 6lwJy/d34Ipw==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164743960"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="164743960"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
  by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:48 -0700
IronPort-SDR: 0rzHuL4N+KM4WVW+vmL+eEj1y192PC4ThPKt2jdycOAoxaq6uEUOpoOJAuFDFbsfnCMTEcEeiF
 lMErav9d18Iw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="419537108"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:47 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Nicolas Pitre <nico@fluxnic.net>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 33/58] fs/cramfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:08 -0700
Message-Id: <20201009195033.3208459-34-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/cramfs/inode.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
index 912308600d39..003c014a42ed 100644
--- a/fs/cramfs/inode.c
+++ b/fs/cramfs/inode.c
@@ -247,8 +247,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
 		struct page *page = pages[i];
 
 		if (page) {
-			memcpy(data, kmap(page), PAGE_SIZE);
-			kunmap(page);
+			memcpy(data, kmap_thread(page), PAGE_SIZE);
+			kunmap_thread(page);
 			put_page(page);
 		} else
 			memset(data, 0, PAGE_SIZE);
@@ -826,7 +826,7 @@ static int cramfs_readpage(struct file *file, struct page *page)
 
 	maxblock = (inode->i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	bytes_filled = 0;
-	pgdata = kmap(page);
+	pgdata = kmap_thread(page);
 
 	if (page->index < maxblock) {
 		struct super_block *sb = inode->i_sb;
@@ -914,13 +914,13 @@ static int cramfs_readpage(struct file *file, struct page *page)
 
 	memset(pgdata + bytes_filled, 0, PAGE_SIZE - bytes_filled);
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	SetPageUptodate(page);
 	unlock_page(page);
 	return 0;
 
 err:
-	kunmap(page);
+	kunmap_thread(page);
 	ClearPageUptodate(page);
 	SetPageError(page);
 	unlock_page(page);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5092.13248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRh-00047s-DW; Fri, 09 Oct 2020 19:52:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5092.13248; Fri, 09 Oct 2020 19:52:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRh-00047S-1Y; Fri, 09 Oct 2020 19:52:53 +0000
Received: by outflank-mailman (input) for mailman id 5092;
 Fri, 09 Oct 2020 19:52:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRg-0003VG-4o
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:52 +0000
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 621cbdbf-5a40-44ce-83c9-a302c14e923e;
 Fri, 09 Oct 2020 19:52:42 +0000 (UTC)
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:41 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:40 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRg-0003VG-4o
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:52 +0000
X-Inumbo-ID: 621cbdbf-5a40-44ce-83c9-a302c14e923e
Received: from mga17.intel.com (unknown [192.55.52.151])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 621cbdbf-5a40-44ce-83c9-a302c14e923e;
	Fri, 09 Oct 2020 19:52:42 +0000 (UTC)
IronPort-SDR: imeySJ6f3Q9sudI3xVuxsFxeYZjwfBmWOs3m2f93+9txgw1kzGjh0BHC4WQZKliadxJEcB8gow
 6RHcn4LZ75Og==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="145397496"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="145397496"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga008.jf.intel.com ([10.7.209.65])
  by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:41 -0700
IronPort-SDR: L0Vefok2A93xih84zuFNBu3LKLwMRamDsDkR1F4Gust5uwQPsEktlHxFcxfQt1U0vxS4VptD4d
 rIp50GoBaQKg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="345148048"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:40 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Hans de Goede <hdegoede@redhat.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 31/58] fs/vboxsf: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:06 -0700
Message-Id: <20201009195033.3208459-32-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/vboxsf/file.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/vboxsf/file.c b/fs/vboxsf/file.c
index c4ab5996d97a..d9c7e6b7b4cc 100644
--- a/fs/vboxsf/file.c
+++ b/fs/vboxsf/file.c
@@ -216,7 +216,7 @@ static int vboxsf_readpage(struct file *file, struct page *page)
 	u8 *buf;
 	int err;
 
-	buf = kmap(page);
+	buf = kmap_thread(page);
 
 	err = vboxsf_read(sf_handle->root, sf_handle->handle, off, &nread, buf);
 	if (err == 0) {
@@ -227,7 +227,7 @@ static int vboxsf_readpage(struct file *file, struct page *page)
 		SetPageError(page);
 	}
 
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	return err;
 }
@@ -268,10 +268,10 @@ static int vboxsf_writepage(struct page *page, struct writeback_control *wbc)
 	if (!sf_handle)
 		return -EBADF;
 
-	buf = kmap(page);
+	buf = kmap_thread(page);
 	err = vboxsf_write(sf_handle->root, sf_handle->handle,
 			   off, &nwrite, buf);
-	kunmap(page);
+	kunmap_thread(page);
 
 	kref_put(&sf_handle->refcount, vboxsf_handle_release);
 
@@ -302,10 +302,10 @@ static int vboxsf_write_end(struct file *file, struct address_space *mapping,
 	if (!PageUptodate(page) && copied < len)
 		zero_user(page, from + copied, len - copied);
 
-	buf = kmap(page);
+	buf = kmap_thread(page);
 	err = vboxsf_write(sf_handle->root, sf_handle->handle,
 			   pos, &nwritten, buf + from);
-	kunmap(page);
+	kunmap_thread(page);
 
 	if (err) {
 		nwritten = 0;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:52:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:52:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5094.13265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRm-0004Hf-Mm; Fri, 09 Oct 2020 19:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5094.13265; Fri, 09 Oct 2020 19:52:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRm-0004HT-IP; Fri, 09 Oct 2020 19:52:58 +0000
Received: by outflank-mailman (input) for mailman id 5094;
 Fri, 09 Oct 2020 19:52:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRl-0003VG-4q
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:57 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf302556-a958-407d-bd09-6bb7935b5582;
 Fri, 09 Oct 2020 19:52:46 +0000 (UTC)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:45 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:43 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRl-0003VG-4q
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:52:57 +0000
X-Inumbo-ID: bf302556-a958-407d-bd09-6bb7935b5582
Received: from mga11.intel.com (unknown [192.55.52.93])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bf302556-a958-407d-bd09-6bb7935b5582;
	Fri, 09 Oct 2020 19:52:46 +0000 (UTC)
IronPort-SDR: i9q3zJ+uL+2benq9Ca20fKVcvn5UbbjweCphprj2Vg+SWY5xTRvk1nOW4rQlhM7h2o8WQcpn4t
 4Hs+X+9kVweA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068029"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="162068029"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga001.jf.intel.com ([10.7.209.18])
  by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:45 -0700
IronPort-SDR: A7QVfEYpWd+zukvvJSzSDlmpc6UtcI1VTl49mnWle7o9+shZWHPb4ayBrZUheEhKcyM4kHv/QX
 ai2WmfcpOoFA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="389237190"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:43 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Jeff Dike <jdike@addtoit.com>,
	Richard Weinberger <richard@nod.at>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 32/58] fs/hostfs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:07 -0700
Message-Id: <20201009195033.3208459-33-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Jeff Dike <jdike@addtoit.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Anton Ivanov <anton.ivanov@cambridgegreys.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/hostfs/hostfs_kern.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
index c070c0d8e3e9..608efd0f83cb 100644
--- a/fs/hostfs/hostfs_kern.c
+++ b/fs/hostfs/hostfs_kern.c
@@ -409,7 +409,7 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc)
 	if (page->index >= end_index)
 		count = inode->i_size & (PAGE_SIZE-1);
 
-	buffer = kmap(page);
+	buffer = kmap_thread(page);
 
 	err = write_file(HOSTFS_I(inode)->fd, &base, buffer, count);
 	if (err != count) {
@@ -425,7 +425,7 @@ static int hostfs_writepage(struct page *page, struct writeback_control *wbc)
 	err = 0;
 
  out:
-	kunmap(page);
+	kunmap_thread(page);
 
 	unlock_page(page);
 	return err;
@@ -437,7 +437,7 @@ static int hostfs_readpage(struct file *file, struct page *page)
 	loff_t start = page_offset(page);
 	int bytes_read, ret = 0;
 
-	buffer = kmap(page);
+	buffer = kmap_thread(page);
 	bytes_read = read_file(FILE_HOSTFS_I(file)->fd, &start, buffer,
 			PAGE_SIZE);
 	if (bytes_read < 0) {
@@ -454,7 +454,7 @@ static int hostfs_readpage(struct file *file, struct page *page)
 
  out:
 	flush_dcache_page(page);
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	return ret;
 }
@@ -480,9 +480,9 @@ static int hostfs_write_end(struct file *file, struct address_space *mapping,
 	unsigned from = pos & (PAGE_SIZE - 1);
 	int err;
 
-	buffer = kmap(page);
+	buffer = kmap_thread(page);
 	err = write_file(FILE_HOSTFS_I(file)->fd, &pos, buffer + from, copied);
-	kunmap(page);
+	kunmap_thread(page);
 
 	if (!PageUptodate(page) && err == PAGE_SIZE)
 		SetPageUptodate(page);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5097.13277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRr-0004QT-FT; Fri, 09 Oct 2020 19:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5097.13277; Fri, 09 Oct 2020 19:53:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRr-0004QE-9c; Fri, 09 Oct 2020 19:53:03 +0000
Received: by outflank-mailman (input) for mailman id 5097;
 Fri, 09 Oct 2020 19:53:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRq-0003VG-51
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:02 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 994c5aca-c014-4148-96d9-691c33e44643;
 Fri, 09 Oct 2020 19:52:52 +0000 (UTC)
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:51 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:50 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRq-0003VG-51
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:02 +0000
X-Inumbo-ID: 994c5aca-c014-4148-96d9-691c33e44643
Received: from mga07.intel.com (unknown [134.134.136.100])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 994c5aca-c014-4148-96d9-691c33e44643;
	Fri, 09 Oct 2020 19:52:52 +0000 (UTC)
IronPort-SDR: iZxtaX4a/lCACf+YZwwn03jGiMwUdb8+s7en1JzcNOvZAONZ1adgEI+dukjN+iYbwuwNtfJ36u
 1Mp66w6EBhcw==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="229715276"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="229715276"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
  by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:51 -0700
IronPort-SDR: raa31ON9t7yf3LIgfzeAsEKPUCppaesVfvBVJ2ruufMJvWsy9JzU1jTdgl2++U0w8tR7AuyOxm
 Q4+3OjaqYbiw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="298531317"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:52:50 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Gao Xiang <xiang@kernel.org>,
	Chao Yu <chao@kernel.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 34/58] fs/erofs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:09 -0700
Message-Id: <20201009195033.3208459-35-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To avoid
the overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Gao Xiang <xiang@kernel.org>
Cc: Chao Yu <chao@kernel.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/erofs/super.c | 4 ++--
 fs/erofs/xattr.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/erofs/super.c b/fs/erofs/super.c
index ddaa516c008a..41696b60f1b3 100644
--- a/fs/erofs/super.c
+++ b/fs/erofs/super.c
@@ -139,7 +139,7 @@ static int erofs_read_superblock(struct super_block *sb)
 
 	sbi = EROFS_SB(sb);
 
-	data = kmap(page);
+	data = kmap_thread(page);
 	dsb = (struct erofs_super_block *)(data + EROFS_SUPER_OFFSET);
 
 	ret = -EINVAL;
@@ -189,7 +189,7 @@ static int erofs_read_superblock(struct super_block *sb)
 	}
 	ret = 0;
 out:
-	kunmap(page);
+	kunmap_thread(page);
 	put_page(page);
 	return ret;
 }
diff --git a/fs/erofs/xattr.c b/fs/erofs/xattr.c
index c8c381eadcd6..1771baa99d77 100644
--- a/fs/erofs/xattr.c
+++ b/fs/erofs/xattr.c
@@ -20,7 +20,7 @@ static inline void xattr_iter_end(struct xattr_iter *it, bool atomic)
 {
 	/* the only user of kunmap() is 'init_inode_xattrs' */
 	if (!atomic)
-		kunmap(it->page);
+		kunmap_thread(it->page);
 	else
 		kunmap_atomic(it->kaddr);
 
@@ -96,7 +96,7 @@ static int init_inode_xattrs(struct inode *inode)
 	}
 
 	/* read in shared xattr array (non-atomic, see kmalloc below) */
-	it.kaddr = kmap(it.page);
+	it.kaddr = kmap_thread(it.page);
 	atomic_map = false;
 
 	ih = (struct erofs_xattr_ibody_header *)(it.kaddr + it.ofs);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5098.13283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRs-0004Rf-72; Fri, 09 Oct 2020 19:53:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5098.13283; Fri, 09 Oct 2020 19:53:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRr-0004RC-Ol; Fri, 09 Oct 2020 19:53:03 +0000
Received: by outflank-mailman (input) for mailman id 5098;
 Fri, 09 Oct 2020 19:53:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRq-0004ON-92
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:02 +0000
Received: from mga02.intel.com (unknown [134.134.136.20])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 82058a1b-985d-469f-87e3-9a30a06f5c85;
 Fri, 09 Oct 2020 19:53:00 +0000 (UTC)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:59 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:58 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyRq-0004ON-92
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:02 +0000
X-Inumbo-ID: 82058a1b-985d-469f-87e3-9a30a06f5c85
Received: from mga02.intel.com (unknown [134.134.136.20])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 82058a1b-985d-469f-87e3-9a30a06f5c85;
	Fri, 09 Oct 2020 19:53:00 +0000 (UTC)
IronPort-SDR: fhW9lWZ/lvFOVRx331X+jmNLbyY4kI+QFql7gRSWaNC7oM4i6yiJCULWjNqKQu0GdsagGKwzcn
 2MUeQsnus3gg==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="152451061"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="152451061"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: Wf0Bi1Gs1P3ZgILN9Dk8+ODZhbqXc7J3tJZabe+2AUx5fg/nQYQB2E9E/9NHNQgBhyfVAmIu+q
 D7ag8zQ3enrg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="462300964"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Jan Kara <jack@suse.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 36/58] fs/ext2: Use ext2_put_page
Date: Fri,  9 Oct 2020 12:50:11 -0700
Message-Id: <20201009195033.3208459-37-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

There are three places in namei.c where the equivalent of ext2_put_page() is
open-coded.  We want to use k[un]map_thread() instead of k[un]map() in
ext2_[get|put]_page().

Move ext2_put_page() to ext2.h and use it in namei.c in preparation for
converting the k[un]map() code.
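The refactor above can be pictured with a small editorial sketch (not part of
the patch, and not kernel code): struct mock_page and the counters below are
hypothetical stand-ins for the kernel's struct page, kunmap() and put_page().
The point is only that the helper pairs exactly one unmap with one reference
drop, replacing the three open-coded copies in namei.c.

```c
/*
 * Editorial userspace mock; all names here are stand-ins, not kernel APIs.
 */
struct mock_page {
	int map_count;	/* stand-in for the kmap() mapping count */
	int ref_count;	/* stand-in for the page reference count */
};

static void mock_kunmap(struct mock_page *page)   { page->map_count--; }
static void mock_put_page(struct mock_page *page) { page->ref_count--; }

/* Mirrors the shape of the helper this patch moves into ext2.h. */
static void mock_ext2_put_page(struct mock_page *page)
{
	mock_kunmap(page);	/* release the mapping exactly once */
	mock_put_page(page);	/* drop the reference exactly once */
}
```

Callers that previously wrote the kunmap()/put_page() pair by hand now make a
single call, so the unmap and the reference drop cannot get out of step.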

Cc: Jan Kara <jack@suse.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/ext2/dir.c   |  6 ------
 fs/ext2/ext2.h  |  8 ++++++++
 fs/ext2/namei.c | 15 +++++----------
 3 files changed, 13 insertions(+), 16 deletions(-)

diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c
index 70355ab6740e..f3194bf20733 100644
--- a/fs/ext2/dir.c
+++ b/fs/ext2/dir.c
@@ -66,12 +66,6 @@ static inline unsigned ext2_chunk_size(struct inode *inode)
 	return inode->i_sb->s_blocksize;
 }
 
-static inline void ext2_put_page(struct page *page)
-{
-	kunmap(page);
-	put_page(page);
-}
-
 /*
  * Return the offset into page `page_nr' of the last valid
  * byte in that page, plus one.
diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
index 5136b7289e8d..021ec8b42ac3 100644
--- a/fs/ext2/ext2.h
+++ b/fs/ext2/ext2.h
@@ -16,6 +16,8 @@
 #include <linux/blockgroup_lock.h>
 #include <linux/percpu_counter.h>
 #include <linux/rbtree.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
 
 /* XXX Here for now... not interested in restructing headers JUST now */
 
@@ -745,6 +747,12 @@ extern int ext2_delete_entry (struct ext2_dir_entry_2 *, struct page *);
 extern int ext2_empty_dir (struct inode *);
 extern struct ext2_dir_entry_2 * ext2_dotdot (struct inode *, struct page **);
 extern void ext2_set_link(struct inode *, struct ext2_dir_entry_2 *, struct page *, struct inode *, int);
+static inline void ext2_put_page(struct page *page)
+{
+	kunmap(page);
+	put_page(page);
+}
+
 
 /* ialloc.c */
 extern struct inode * ext2_new_inode (struct inode *, umode_t, const struct qstr *);
diff --git a/fs/ext2/namei.c b/fs/ext2/namei.c
index 5bf2c145643b..ea980f1e2e99 100644
--- a/fs/ext2/namei.c
+++ b/fs/ext2/namei.c
@@ -389,23 +389,18 @@ static int ext2_rename (struct inode * old_dir, struct dentry * old_dentry,
 	if (dir_de) {
 		if (old_dir != new_dir)
 			ext2_set_link(old_inode, dir_de, dir_page, new_dir, 0);
-		else {
-			kunmap(dir_page);
-			put_page(dir_page);
-		}
+		else
+			ext2_put_page(dir_page);
 		inode_dec_link_count(old_dir);
 	}
 	return 0;
 
 
 out_dir:
-	if (dir_de) {
-		kunmap(dir_page);
-		put_page(dir_page);
-	}
+	if (dir_de)
+		ext2_put_page(dir_page);
 out_old:
-	kunmap(old_page);
-	put_page(old_page);
+	ext2_put_page(old_page);
 out:
 	return err;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5103.13301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRw-0004c0-KS; Fri, 09 Oct 2020 19:53:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5103.13301; Fri, 09 Oct 2020 19:53:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyRw-0004bp-FX; Fri, 09 Oct 2020 19:53:08 +0000
Received: by outflank-mailman (input) for mailman id 5103;
 Fri, 09 Oct 2020 19:53:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyRv-0004ON-7c
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:07 +0000
Received: from mga01.intel.com (unknown [192.55.52.88])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd56cfaa-a4a4-4ffa-82cb-2d312625fe16;
 Fri, 09 Oct 2020 19:53:03 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:02 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:02 -0700
X-Inumbo-ID: bd56cfaa-a4a4-4ffa-82cb-2d312625fe16
IronPort-SDR: ilAjpYQOlagkaK3s2lWXgwZ4mxD8/Q0nrhm0bep1pVX2urdzS2DtXs212b0Vzf9ZPZ1iRD7cd2
 IiSUoPsX2lSA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976383"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="182976383"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: bH2kRxXxeY8kuFBN5R8RneSpQbEQLW5eawptefbcIpm8xkU067aWcbaReXxx6/6TDXgDaxiIQ5
 8lenxDucas2g==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="519847131"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Jan Kara <jack@suse.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 37/58] fs/ext2: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:12 -0700
Message-Id: <20201009195033.3208459-38-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the overhead
of global PKRS updates, use the new kmap_thread() call instead.

Cc: Jan Kara <jack@suse.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/ext2/dir.c  | 2 +-
 fs/ext2/ext2.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/ext2/dir.c b/fs/ext2/dir.c
index f3194bf20733..abe97ba458c8 100644
--- a/fs/ext2/dir.c
+++ b/fs/ext2/dir.c
@@ -196,7 +196,7 @@ static struct page * ext2_get_page(struct inode *dir, unsigned long n,
 	struct address_space *mapping = dir->i_mapping;
 	struct page *page = read_mapping_page(mapping, n, NULL);
 	if (!IS_ERR(page)) {
-		kmap(page);
+		kmap_thread(page);
 		if (unlikely(!PageChecked(page))) {
 			if (PageError(page) || !ext2_check_page(page, quiet))
 				goto fail;
diff --git a/fs/ext2/ext2.h b/fs/ext2/ext2.h
index 021ec8b42ac3..9bcb6714c255 100644
--- a/fs/ext2/ext2.h
+++ b/fs/ext2/ext2.h
@@ -749,7 +749,7 @@ extern struct ext2_dir_entry_2 * ext2_dotdot (struct inode *, struct page **);
 extern void ext2_set_link(struct inode *, struct ext2_dir_entry_2 *, struct page *, struct inode *, int);
 static inline void ext2_put_page(struct page *page)
 {
-	kunmap(page);
+	kunmap_thread(page);
 	put_page(page);
 }
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5105.13313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyS2-0004le-2H; Fri, 09 Oct 2020 19:53:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5105.13313; Fri, 09 Oct 2020 19:53:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyS1-0004lO-SZ; Fri, 09 Oct 2020 19:53:13 +0000
Received: by outflank-mailman (input) for mailman id 5105;
 Fri, 09 Oct 2020 19:53:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyS0-0003VG-5W
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:12 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6a952ee-b302-4050-8ce5-5482337f9bb5;
 Fri, 09 Oct 2020 19:52:56 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:55 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:52:54 -0700
X-Inumbo-ID: f6a952ee-b302-4050-8ce5-5482337f9bb5
IronPort-SDR: HeQeWn0p/UqbYkNt0JKeVeSxNj8mIesXlUIJCv0vTU0iyGMGJUfZbqb7TupHI2SCJe6szIUfd3
 x6vVlPS8SQQQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068044"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="162068044"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: TN82OMSOJnR+1dzdu/wtYu+HarPMiqU5tCJLq9k+HDWuiKzSJliaVoZHfBedXzzykRk72guGi9
 gJFQG6Pg4N1Q==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="529053748"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Jens Axboe <axboe@kernel.dk>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 35/58] fs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:10 -0700
Message-Id: <20201009195033.3208459-36-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the overhead
of global PKRS updates, use the new kmap_thread() call.
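The distinction the series draws can be illustrated with an editorial
userspace mock (not from the series; real PKRS handling is MSR-based and far
more involved).  kmap() conceptually touches state visible to all threads,
modeled here as a hypothetical global counter, while kmap_thread() touches
only state private to the caller, modeled as a _Thread_local counter, so no
cross-thread update is required.

```c
/*
 * Editorial mock; every identifier below is a stand-in, not a kernel API.
 * kmap() -> globally visible state; kmap_thread() -> per-thread state.
 */
static int global_mappings;			/* shared by all threads */
static _Thread_local int thread_mappings;	/* private to each thread */

static void mock_kmap(void)          { global_mappings++; }
static void mock_kunmap(void)        { global_mappings--; }
static void mock_kmap_thread(void)   { thread_mappings++; }
static void mock_kunmap_thread(void) { thread_mappings--; }

static int mock_global_state(void) { return global_mappings; }
static int mock_thread_state(void) { return thread_mappings; }
```

In this model a mock_kmap_thread() call never changes mock_global_state(),
which is the property motivating the conversions in this patch: mappings used
by one thread should not force a globally visible protection update.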

Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/aio.c              |  4 ++--
 fs/binfmt_elf.c       |  4 ++--
 fs/binfmt_elf_fdpic.c |  4 ++--
 fs/exec.c             | 10 +++++-----
 fs/io_uring.c         |  4 ++--
 fs/splice.c           |  4 ++--
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/fs/aio.c b/fs/aio.c
index d5ec30385566..27f95996d25f 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -1223,10 +1223,10 @@ static long aio_read_events_ring(struct kioctx *ctx,
 		avail = min(avail, nr - ret);
 		avail = min_t(long, avail, AIO_EVENTS_PER_PAGE - pos);
 
-		ev = kmap(page);
+		ev = kmap_thread(page);
 		copy_ret = copy_to_user(event + ret, ev + pos,
 					sizeof(*ev) * avail);
-		kunmap(page);
+		kunmap_thread(page);
 
 		if (unlikely(copy_ret)) {
 			ret = -EFAULT;
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 13d053982dd7..1a332ef1ae03 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -2430,9 +2430,9 @@ static int elf_core_dump(struct coredump_params *cprm)
 
 			page = get_dump_page(addr);
 			if (page) {
-				void *kaddr = kmap(page);
+				void *kaddr = kmap_thread(page);
 				stop = !dump_emit(cprm, kaddr, PAGE_SIZE);
-				kunmap(page);
+				kunmap_thread(page);
 				put_page(page);
 			} else
 				stop = !dump_skip(cprm, PAGE_SIZE);
diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
index 50f845702b92..8fbe188e0fdd 100644
--- a/fs/binfmt_elf_fdpic.c
+++ b/fs/binfmt_elf_fdpic.c
@@ -1542,9 +1542,9 @@ static bool elf_fdpic_dump_segments(struct coredump_params *cprm)
 			bool res;
 			struct page *page = get_dump_page(addr);
 			if (page) {
-				void *kaddr = kmap(page);
+				void *kaddr = kmap_thread(page);
 				res = dump_emit(cprm, kaddr, PAGE_SIZE);
-				kunmap(page);
+				kunmap_thread(page);
 				put_page(page);
 			} else {
 				res = dump_skip(cprm, PAGE_SIZE);
diff --git a/fs/exec.c b/fs/exec.c
index a91003e28eaa..3948b8511e3a 100644
--- a/fs/exec.c
+++ b/fs/exec.c
@@ -575,11 +575,11 @@ static int copy_strings(int argc, struct user_arg_ptr argv,
 
 				if (kmapped_page) {
 					flush_kernel_dcache_page(kmapped_page);
-					kunmap(kmapped_page);
+					kunmap_thread(kmapped_page);
 					put_arg_page(kmapped_page);
 				}
 				kmapped_page = page;
-				kaddr = kmap(kmapped_page);
+				kaddr = kmap_thread(kmapped_page);
 				kpos = pos & PAGE_MASK;
 				flush_arg_page(bprm, kpos, kmapped_page);
 			}
@@ -593,7 +593,7 @@ static int copy_strings(int argc, struct user_arg_ptr argv,
 out:
 	if (kmapped_page) {
 		flush_kernel_dcache_page(kmapped_page);
-		kunmap(kmapped_page);
+		kunmap_thread(kmapped_page);
 		put_arg_page(kmapped_page);
 	}
 	return ret;
@@ -871,11 +871,11 @@ int transfer_args_to_stack(struct linux_binprm *bprm,
 
 	for (index = MAX_ARG_PAGES - 1; index >= stop; index--) {
 		unsigned int offset = index == stop ? bprm->p & ~PAGE_MASK : 0;
-		char *src = kmap(bprm->page[index]) + offset;
+		char *src = kmap_thread(bprm->page[index]) + offset;
 		sp -= PAGE_SIZE - offset;
 		if (copy_to_user((void *) sp, src, PAGE_SIZE - offset) != 0)
 			ret = -EFAULT;
-		kunmap(bprm->page[index]);
+		kunmap_thread(bprm->page[index]);
 		if (ret)
 			goto out;
 	}
diff --git a/fs/io_uring.c b/fs/io_uring.c
index aae0ef2ec34d..f59bb079822d 100644
--- a/fs/io_uring.c
+++ b/fs/io_uring.c
@@ -2903,7 +2903,7 @@ static ssize_t loop_rw_iter(int rw, struct file *file, struct kiocb *kiocb,
 			iovec = iov_iter_iovec(iter);
 		} else {
 			/* fixed buffers import bvec */
-			iovec.iov_base = kmap(iter->bvec->bv_page)
+			iovec.iov_base = kmap_thread(iter->bvec->bv_page)
 						+ iter->iov_offset;
 			iovec.iov_len = min(iter->count,
 					iter->bvec->bv_len - iter->iov_offset);
@@ -2918,7 +2918,7 @@ static ssize_t loop_rw_iter(int rw, struct file *file, struct kiocb *kiocb,
 		}
 
 		if (iov_iter_is_bvec(iter))
-			kunmap(iter->bvec->bv_page);
+			kunmap_thread(iter->bvec->bv_page);
 
 		if (nr < 0) {
 			if (!ret)
diff --git a/fs/splice.c b/fs/splice.c
index ce75aec52274..190c4d218c30 100644
--- a/fs/splice.c
+++ b/fs/splice.c
@@ -815,9 +815,9 @@ static int write_pipe_buf(struct pipe_inode_info *pipe, struct pipe_buffer *buf,
 	void *data;
 	loff_t tmp = sd->pos;
 
-	data = kmap(buf->page);
+	data = kmap_thread(buf->page);
 	ret = __kernel_write(sd->u.file, data + buf->offset, sd->len, &tmp);
-	kunmap(buf->page);
+	kunmap_thread(buf->page);
 
 	return ret;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5106.13319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyS2-0004nO-Qc; Fri, 09 Oct 2020 19:53:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5106.13319; Fri, 09 Oct 2020 19:53:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyS2-0004mf-F1; Fri, 09 Oct 2020 19:53:14 +0000
Received: by outflank-mailman (input) for mailman id 5106;
 Fri, 09 Oct 2020 19:53:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyS0-0004ON-7g
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:12 +0000
Received: from mga12.intel.com (unknown [192.55.52.136])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 19dafb76-799b-4177-8d09-41d49a2e1241;
 Fri, 09 Oct 2020 19:53:07 +0000 (UTC)
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
 by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:06 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:05 -0700
X-Inumbo-ID: 19dafb76-799b-4177-8d09-41d49a2e1241
IronPort-SDR: ypbsF279aj/UfkFu7vSHb62UW4kIuhQf4geco3iFinIuY00FES/7BN88CYJoC27z1lNDTwV/ao
 vzSvvvR0LXWw==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="144850994"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="144850994"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: +9tLMQNrOc9ZmtQe+NKSSXZyhjRooFfxfongQijBj4blQjg4c6+fJ/ZHkl3bUeAx7HZ0qDTuwd
 uW08H1SVCXkg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="343972363"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 38/58] fs/isofs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:13 -0700
Message-Id: <20201009195033.3208459-39-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the overhead
of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/isofs/compress.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/isofs/compress.c b/fs/isofs/compress.c
index bc12ac7e2312..ddd3fd99d2e1 100644
--- a/fs/isofs/compress.c
+++ b/fs/isofs/compress.c
@@ -344,7 +344,7 @@ static int zisofs_readpage(struct file *file, struct page *page)
 			pages[i] = grab_cache_page_nowait(mapping, index);
 		if (pages[i]) {
 			ClearPageError(pages[i]);
-			kmap(pages[i]);
+			kmap_thread(pages[i]);
 		}
 	}
 
@@ -356,7 +356,7 @@ static int zisofs_readpage(struct file *file, struct page *page)
 			flush_dcache_page(pages[i]);
 			if (i == full_page && err)
 				SetPageError(pages[i]);
-			kunmap(pages[i]);
+			kunmap_thread(pages[i]);
 			unlock_page(pages[i]);
 			if (i != full_page)
 				put_page(pages[i]);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5109.13337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyS6-0004we-Ov; Fri, 09 Oct 2020 19:53:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5109.13337; Fri, 09 Oct 2020 19:53:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyS6-0004wN-H8; Fri, 09 Oct 2020 19:53:18 +0000
Received: by outflank-mailman (input) for mailman id 5109;
 Fri, 09 Oct 2020 19:53:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyS5-0003VG-5j
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:17 +0000
Received: from mga05.intel.com (unknown [192.55.52.43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b9f24bd-4098-4169-8bf1-e2d9e98cb2ea;
 Fri, 09 Oct 2020 19:53:10 +0000 (UTC)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:09 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:08 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyS5-0003VG-5j
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:17 +0000
X-Inumbo-ID: 2b9f24bd-4098-4169-8bf1-e2d9e98cb2ea
Received: from mga05.intel.com (unknown [192.55.52.43])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2b9f24bd-4098-4169-8bf1-e2d9e98cb2ea;
	Fri, 09 Oct 2020 19:53:10 +0000 (UTC)
IronPort-SDR: wBpIZZUB0xS4qiwTrFwjmmX+/Zzas4vCfJwtYYWRJRGWc1i7EUIxnDY59J+tnUdZnUf79UjW7l
 x4vaRFHFWNtw==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="250226251"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="250226251"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga006.jf.intel.com ([10.7.209.51])
  by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:09 -0700
IronPort-SDR: FciD93oXa6TN5PQF74JzeuCyRYkWBlT2lC/7ncntBELjkiuqMo1u2JSAhMViBtfJLu9YdFEYCB
 Wo/5PMHidIaw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="317147397"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:08 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 39/58] fs/jffs2: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:14 -0700
Message-Id: <20201009195033.3208459-40-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the overhead
of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
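
The pattern this series applies is: wherever a page mapping is created and
destroyed entirely within one thread, swap the global kmap()/kunmap() pair
for the thread-local kmap_thread()/kunmap_thread() pair so no global PKRS
update is needed.  As a rough userspace illustration only (the `*_model`
names and the update counter are invented here; they stand in for the real
kernel primitives and for PKRS traffic, and are not part of this series):

```c
#include <stddef.h>

/* Counts stand-in "global PKRS updates"; the real cost being avoided. */
static int global_pkrs_updates;

/* Models kmap()/kunmap(): each call implies a global update. */
static void *kmap_model(void *page) { global_pkrs_updates++; return page; }
static void kunmap_model(void *page) { (void)page; global_pkrs_updates++; }

/* Models kmap_thread()/kunmap_thread(): thread-local, no global update. */
static void *kmap_thread_model(void *page) { return page; }
static void kunmap_thread_model(void *page) { (void)page; }

/*
 * Models the shape of the converted call sites (e.g. csum_page() or
 * bio_csum()): map a page, checksum it, unmap it, all in one thread.
 */
static unsigned long csum_page_model(unsigned char *page, size_t len)
{
	unsigned long csum = 0;
	unsigned char *kaddr = kmap_thread_model(page);
	size_t i;

	for (i = 0; i < len; i++)
		csum += kaddr[i];
	kunmap_thread_model(kaddr);
	return csum;
}
```

The structural point is that the mapping never escapes the function, which
is what makes the thread-local variant safe to substitute.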
 fs/jffs2/file.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/jffs2/file.c b/fs/jffs2/file.c
index 3e6d54f9b011..14dd2b18cc16 100644
--- a/fs/jffs2/file.c
+++ b/fs/jffs2/file.c
@@ -287,13 +287,13 @@ static int jffs2_write_end(struct file *filp, struct address_space *mapping,
 
 	/* In 2.4, it was already kmapped by generic_file_write(). Doesn't
 	   hurt to do it again. The alternative is ifdefs, which are ugly. */
-	kmap(pg);
+	kmap_thread(pg);
 
 	ret = jffs2_write_inode_range(c, f, ri, page_address(pg) + aligned_start,
 				      (pg->index << PAGE_SHIFT) + aligned_start,
 				      end - aligned_start, &writtenlen);
 
-	kunmap(pg);
+	kunmap_thread(pg);
 
 	if (ret) {
 		/* There was an error writing. */
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5110.13343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyS7-0004yG-M0; Fri, 09 Oct 2020 19:53:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5110.13343; Fri, 09 Oct 2020 19:53:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyS7-0004xg-0a; Fri, 09 Oct 2020 19:53:19 +0000
Received: by outflank-mailman (input) for mailman id 5110;
 Fri, 09 Oct 2020 19:53:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyS5-0004ON-7z
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:17 +0000
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f03ed1fc-ba08-4b52-9477-109773b6b938;
 Fri, 09 Oct 2020 19:53:14 +0000 (UTC)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:13 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:12 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyS5-0004ON-7z
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:17 +0000
X-Inumbo-ID: f03ed1fc-ba08-4b52-9477-109773b6b938
Received: from mga14.intel.com (unknown [192.55.52.115])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f03ed1fc-ba08-4b52-9477-109773b6b938;
	Fri, 09 Oct 2020 19:53:14 +0000 (UTC)
IronPort-SDR: rLwBuegh++hAKCzvTNH0V2C7ctpRPmWJhLoyXE9txrbFSA+IbwRe8Ph37uZqORvr5UbNRmBny4
 V5oIamHelu8A==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="164744028"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="164744028"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga003.jf.intel.com ([10.7.209.27])
  by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:13 -0700
IronPort-SDR: kHLawvWt7hk0hVI10Fetwn7tOzYF5yIQvhqqrAQq3flxek1V3ypdWT/2gH8vwQndln/BZvc5kH
 kTRjULDVKoGw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="312652948"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:12 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>,
	Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>,
	Trond Myklebust <trond.myklebust@hammerspace.com>,
	Anna Schumaker <anna.schumaker@netapp.com>,
	Boris Pismenny <borisp@nvidia.com>,
	Aviad Yehezkel <aviadye@nvidia.com>,
	John Fastabend <john.fastabend@gmail.com>,
	Daniel Borkmann <daniel@iogearbox.net>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 40/58] net: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:15 -0700
Message-Id: <20201009195033.3208459-41-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in these drivers are localized to a single thread.
To avoid the overhead of global PKRS updates, use the new kmap_thread()
call.

Cc: "David S. Miller" <davem@davemloft.net>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: Alexey Kuznetsov <kuznet@ms2.inr.ac.ru>
Cc: Hideaki YOSHIFUJI <yoshfuji@linux-ipv6.org>
Cc: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: Anna Schumaker <anna.schumaker@netapp.com>
Cc: Boris Pismenny <borisp@nvidia.com>
Cc: Aviad Yehezkel <aviadye@nvidia.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 net/ceph/messenger.c | 4 ++--
 net/core/datagram.c  | 4 ++--
 net/core/sock.c      | 8 ++++----
 net/ipv4/ip_output.c | 4 ++--
 net/sunrpc/cache.c   | 4 ++--
 net/sunrpc/xdr.c     | 8 ++++----
 net/tls/tls_device.c | 4 ++--
 7 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index d4d7a0e52491..0c49b8e333da 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -1535,10 +1535,10 @@ static u32 ceph_crc32c_page(u32 crc, struct page *page,
 {
 	char *kaddr;
 
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	BUG_ON(kaddr == NULL);
 	crc = crc32c(crc, kaddr + page_offset, length);
-	kunmap(page);
+	kunmap_thread(page);
 
 	return crc;
 }
diff --git a/net/core/datagram.c b/net/core/datagram.c
index 639745d4f3b9..cbd0a343074a 100644
--- a/net/core/datagram.c
+++ b/net/core/datagram.c
@@ -441,14 +441,14 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
 		end = start + skb_frag_size(frag);
 		if ((copy = end - offset) > 0) {
 			struct page *page = skb_frag_page(frag);
-			u8 *vaddr = kmap(page);
+			u8 *vaddr = kmap_thread(page);
 
 			if (copy > len)
 				copy = len;
 			n = INDIRECT_CALL_1(cb, simple_copy_to_iter,
 					vaddr + skb_frag_off(frag) + offset - start,
 					copy, data, to);
-			kunmap(page);
+			kunmap_thread(page);
 			offset += n;
 			if (n != copy)
 				goto short_copy;
diff --git a/net/core/sock.c b/net/core/sock.c
index 6c5c6b18eff4..9b46a75cd8c1 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2846,11 +2846,11 @@ ssize_t sock_no_sendpage(struct socket *sock, struct page *page, int offset, siz
 	ssize_t res;
 	struct msghdr msg = {.msg_flags = flags};
 	struct kvec iov;
-	char *kaddr = kmap(page);
+	char *kaddr = kmap_thread(page);
 	iov.iov_base = kaddr + offset;
 	iov.iov_len = size;
 	res = kernel_sendmsg(sock, &msg, &iov, 1, size);
-	kunmap(page);
+	kunmap_thread(page);
 	return res;
 }
 EXPORT_SYMBOL(sock_no_sendpage);
@@ -2861,12 +2861,12 @@ ssize_t sock_no_sendpage_locked(struct sock *sk, struct page *page,
 	ssize_t res;
 	struct msghdr msg = {.msg_flags = flags};
 	struct kvec iov;
-	char *kaddr = kmap(page);
+	char *kaddr = kmap_thread(page);
 
 	iov.iov_base = kaddr + offset;
 	iov.iov_len = size;
 	res = kernel_sendmsg_locked(sk, &msg, &iov, 1, size);
-	kunmap(page);
+	kunmap_thread(page);
 	return res;
 }
 EXPORT_SYMBOL(sock_no_sendpage_locked);
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index e6f2ada9e7d5..05304fb251a4 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -949,9 +949,9 @@ csum_page(struct page *page, int offset, int copy)
 {
 	char *kaddr;
 	__wsum csum;
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	csum = csum_partial(kaddr + offset, copy, 0);
-	kunmap(page);
+	kunmap_thread(page);
 	return csum;
 }
 
diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
index baef5ee43dbb..88193f2a8e6f 100644
--- a/net/sunrpc/cache.c
+++ b/net/sunrpc/cache.c
@@ -935,9 +935,9 @@ static ssize_t cache_downcall(struct address_space *mapping,
 	if (!page)
 		goto out_slow;
 
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	ret = cache_do_downcall(kaddr, buf, count, cd);
-	kunmap(page);
+	kunmap_thread(page);
 	unlock_page(page);
 	put_page(page);
 	return ret;
diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
index be11d672b5b9..00afbb48fb0a 100644
--- a/net/sunrpc/xdr.c
+++ b/net/sunrpc/xdr.c
@@ -1353,7 +1353,7 @@ xdr_xcode_array2(struct xdr_buf *buf, unsigned int base,
 		base &= ~PAGE_MASK;
 		avail_page = min_t(unsigned int, PAGE_SIZE - base,
 					avail_here);
-		c = kmap(*ppages) + base;
+		c = kmap_thread(*ppages) + base;
 
 		while (avail_here) {
 			avail_here -= avail_page;
@@ -1429,9 +1429,9 @@ xdr_xcode_array2(struct xdr_buf *buf, unsigned int base,
 				}
 			}
 			if (avail_here) {
-				kunmap(*ppages);
+				kunmap_thread(*ppages);
 				ppages++;
-				c = kmap(*ppages);
+				c = kmap_thread(*ppages);
 			}
 
 			avail_page = min(avail_here,
@@ -1471,7 +1471,7 @@ xdr_xcode_array2(struct xdr_buf *buf, unsigned int base,
 out:
 	kfree(elem);
 	if (ppages)
-		kunmap(*ppages);
+		kunmap_thread(*ppages);
 	return err;
 }
 
diff --git a/net/tls/tls_device.c b/net/tls/tls_device.c
index b74e2741f74f..ead5b1c485f8 100644
--- a/net/tls/tls_device.c
+++ b/net/tls/tls_device.c
@@ -576,13 +576,13 @@ int tls_device_sendpage(struct sock *sk, struct page *page,
 		goto out;
 	}
 
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	iov.iov_base = kaddr + offset;
 	iov.iov_len = size;
 	iov_iter_kvec(&msg_iter, WRITE, &iov, 1, size);
 	rc = tls_push_data(sk, &msg_iter, size,
 			   flags, TLS_RECORD_TYPE_DATA);
-	kunmap(page);
+	kunmap_thread(page);
 
 out:
 	release_sock(sk);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5124.13361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySV-0005UE-5Z; Fri, 09 Oct 2020 19:53:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5124.13361; Fri, 09 Oct 2020 19:53:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySV-0005U4-1F; Fri, 09 Oct 2020 19:53:43 +0000
Received: by outflank-mailman (input) for mailman id 5124;
 Fri, 09 Oct 2020 19:53:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyST-0005Ps-UX
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:41 +0000
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 805f1667-c4f5-44bc-bab3-1f1b1a4fb193;
 Fri, 09 Oct 2020 19:53:39 +0000 (UTC)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:38 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:37 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyST-0005Ps-UX
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:41 +0000
X-Inumbo-ID: 805f1667-c4f5-44bc-bab3-1f1b1a4fb193
Received: from mga17.intel.com (unknown [192.55.52.151])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 805f1667-c4f5-44bc-bab3-1f1b1a4fb193;
	Fri, 09 Oct 2020 19:53:39 +0000 (UTC)
IronPort-SDR: vf1kLufGuZvoeyE0bw08HZmdaR4ufp6QnkT9XSJmCbKbXRFNpNwX+/QDTcc6JIksyNbs0aO7Y3
 Gm6ahr573jeA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="145397633"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="145397633"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga001.jf.intel.com ([10.7.209.18])
  by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:38 -0700
IronPort-SDR: 77OyjRxsHRqLdlwGyIEmSTWShPVNVQULX7LnGcMeG3NkwcCC4UbesWRkZ9bTTqcjV6t6jmiGf4
 lDgqS7BTpvKw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="389237394"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:37 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Coly Li <colyli@suse.de>,
	Kent Overstreet <kent.overstreet@gmail.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 48/58] drivers/md: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:23 -0700
Message-Id: <20201009195033.3208459-49-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Coly Li <colyli@suse.de> (maintainer:BCACHE (BLOCK LAYER CACHE))
Cc: Kent Overstreet <kent.overstreet@gmail.com> (maintainer:BCACHE (BLOCK LAYER CACHE))
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/md/bcache/request.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
index c7cadaafa947..a4571f6d09dd 100644
--- a/drivers/md/bcache/request.c
+++ b/drivers/md/bcache/request.c
@@ -44,10 +44,10 @@ static void bio_csum(struct bio *bio, struct bkey *k)
 	uint64_t csum = 0;
 
 	bio_for_each_segment(bv, bio, iter) {
-		void *d = kmap(bv.bv_page) + bv.bv_offset;
+		void *d = kmap_thread(bv.bv_page) + bv.bv_offset;
 
 		csum = bch_crc64_update(csum, d, bv.bv_len);
-		kunmap(bv.bv_page);
+		kunmap_thread(bv.bv_page);
 	}
 
 	k->ptr[KEY_PTRS(k)] = csum & (~0ULL >> 1);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5128.13373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySa-0005ab-Ga; Fri, 09 Oct 2020 19:53:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5128.13373; Fri, 09 Oct 2020 19:53:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySa-0005aQ-Bm; Fri, 09 Oct 2020 19:53:48 +0000
Received: by outflank-mailman (input) for mailman id 5128;
 Fri, 09 Oct 2020 19:53:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySY-0005Ps-Ui
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:46 +0000
Received: from mga03.intel.com (unknown [134.134.136.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ae675ba-8138-4d44-9dc2-c2b7720986d2;
 Fri, 09 Oct 2020 19:53:43 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:41 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:40 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySY-0005Ps-Ui
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:46 +0000
X-Inumbo-ID: 1ae675ba-8138-4d44-9dc2-c2b7720986d2
Received: from mga03.intel.com (unknown [134.134.136.65])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1ae675ba-8138-4d44-9dc2-c2b7720986d2;
	Fri, 09 Oct 2020 19:53:43 +0000 (UTC)
IronPort-SDR: NFNuK0DA20GFM7n/UmsvLHztqJht3MxNKx1+P/wchXae5yacImepU1iGMpU18ZqQI3Wjm8b9Jw
 njiXbjFJKd6Q==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165592519"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165592519"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
  by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:41 -0700
IronPort-SDR: t/zSGeR++7wfj+VtFM7nhfyp5h0blfdWqDlNHnP/5jl48Bm08GFuTiLpDu6MwkEQ7bZpMY8SQr
 l7ztZVCdgy3g==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="419537249"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga001-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:40 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 49/58] drivers/misc: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:24 -0700
Message-Id: <20201009195033.3208459-50-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/misc/vmw_vmci/vmci_queue_pair.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/misc/vmw_vmci/vmci_queue_pair.c b/drivers/misc/vmw_vmci/vmci_queue_pair.c
index 8531ae781195..f308abb8ad03 100644
--- a/drivers/misc/vmw_vmci/vmci_queue_pair.c
+++ b/drivers/misc/vmw_vmci/vmci_queue_pair.c
@@ -343,7 +343,7 @@ static int qp_memcpy_to_queue_iter(struct vmci_queue *queue,
 		size_t to_copy;
 
 		if (kernel_if->host)
-			va = kmap(kernel_if->u.h.page[page_index]);
+			va = kmap_thread(kernel_if->u.h.page[page_index]);
 		else
 			va = kernel_if->u.g.vas[page_index + 1];
 			/* Skip header. */
@@ -357,12 +357,12 @@ static int qp_memcpy_to_queue_iter(struct vmci_queue *queue,
 		if (!copy_from_iter_full((u8 *)va + page_offset, to_copy,
 					 from)) {
 			if (kernel_if->host)
-				kunmap(kernel_if->u.h.page[page_index]);
+				kunmap_thread(kernel_if->u.h.page[page_index]);
 			return VMCI_ERROR_INVALID_ARGS;
 		}
 		bytes_copied += to_copy;
 		if (kernel_if->host)
-			kunmap(kernel_if->u.h.page[page_index]);
+			kunmap_thread(kernel_if->u.h.page[page_index]);
 	}
 
 	return VMCI_SUCCESS;
@@ -391,7 +391,7 @@ static int qp_memcpy_from_queue_iter(struct iov_iter *to,
 		int err;
 
 		if (kernel_if->host)
-			va = kmap(kernel_if->u.h.page[page_index]);
+			va = kmap_thread(kernel_if->u.h.page[page_index]);
 		else
 			va = kernel_if->u.g.vas[page_index + 1];
 			/* Skip header. */
@@ -405,12 +405,12 @@ static int qp_memcpy_from_queue_iter(struct iov_iter *to,
 		err = copy_to_iter((u8 *)va + page_offset, to_copy, to);
 		if (err != to_copy) {
 			if (kernel_if->host)
-				kunmap(kernel_if->u.h.page[page_index]);
+				kunmap_thread(kernel_if->u.h.page[page_index]);
 			return VMCI_ERROR_INVALID_ARGS;
 		}
 		bytes_copied += to_copy;
 		if (kernel_if->host)
-			kunmap(kernel_if->u.h.page[page_index]);
+			kunmap_thread(kernel_if->u.h.page[page_index]);
 	}
 
 	return VMCI_SUCCESS;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5132.13385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySe-0005gX-PR; Fri, 09 Oct 2020 19:53:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5132.13385; Fri, 09 Oct 2020 19:53:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySe-0005gP-MD; Fri, 09 Oct 2020 19:53:52 +0000
Received: by outflank-mailman (input) for mailman id 5132;
 Fri, 09 Oct 2020 19:53:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySd-0005Ps-Uk
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:51 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 681c2b85-c394-4b4c-8973-c2248e976f5a;
 Fri, 09 Oct 2020 19:53:44 +0000 (UTC)
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:44 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga008-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:43 -0700
IronPort-SDR: YziJI3PlJBzliL0ylJRMcYbG2UKox1qqh1obzhf8msK3M9/YjZBsW9yMJC4UnilOh3SrEyGsaD
 oNcf5vpnJ9Og==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162068173"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="162068173"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: Uit4axCqwtogby85Iv/hBLI1Moj4COtsBpPB5jXSVWpuENwpihzSQwSQg3nZnBMabzVvybSK2I
 1kQ4taZ9oZgQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="298531484"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 50/58] drivers/android: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:25 -0700
Message-Id: <20201009195033.3208459-51-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/android/binder_alloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
index 69609696a843..5f50856caad7 100644
--- a/drivers/android/binder_alloc.c
+++ b/drivers/android/binder_alloc.c
@@ -1118,9 +1118,9 @@ binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
 		page = binder_alloc_get_page(alloc, buffer,
 					     buffer_offset, &pgoff);
 		size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
-		kptr = kmap(page) + pgoff;
+		kptr = kmap_thread(page) + pgoff;
 		ret = copy_from_user(kptr, from, size);
-		kunmap(page);
+		kunmap_thread(page);
 		if (ret)
 			return bytes - size + ret;
 		bytes -= size;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:53:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:53:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5136.13397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySk-0005od-BV; Fri, 09 Oct 2020 19:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5136.13397; Fri, 09 Oct 2020 19:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySk-0005oQ-79; Fri, 09 Oct 2020 19:53:58 +0000
Received: by outflank-mailman (input) for mailman id 5136;
 Fri, 09 Oct 2020 19:53:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySi-0005Ps-VA
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:56 +0000
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f055cc1e-3bb0-4e29-b6d2-a702b0fea35e;
 Fri, 09 Oct 2020 19:53:52 +0000 (UTC)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:51 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:50 -0700
IronPort-SDR: tTFgvc6RZJ8V+MwcL9uJFqUOw2UuoqGI4QphGn9hIaRslAXm5PLWIYDepQxgE4Xj46Ap/V33jh
 a/ivOUt5XS9w==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="153363849"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="153363849"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: V1KD7wTraPak1zxWaGCn8NDyAgfcS4zsKsBv4Z0vY7HUn25qjJm9u99gQjtiJGVT+R+oXmWyM0
 H2mADCd5/x5Q==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="462301148"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 52/58] mm: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:27 -0700
Message-Id: <20201009195033.3208459-53-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 mm/memory.c      | 8 ++++----
 mm/swapfile.c    | 4 ++--
 mm/userfaultfd.c | 4 ++--
 3 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index fcfc4ca36eba..75a054882d7a 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4945,7 +4945,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 			if (bytes > PAGE_SIZE-offset)
 				bytes = PAGE_SIZE-offset;
 
-			maddr = kmap(page);
+			maddr = kmap_thread(page);
 			if (write) {
 				copy_to_user_page(vma, page, addr,
 						  maddr + offset, buf, bytes);
@@ -4954,7 +4954,7 @@ int __access_remote_vm(struct task_struct *tsk, struct mm_struct *mm,
 				copy_from_user_page(vma, page, addr,
 						    buf, maddr + offset, bytes);
 			}
-			kunmap(page);
+			kunmap_thread(page);
 			put_page(page);
 		}
 		len -= bytes;
@@ -5216,14 +5216,14 @@ long copy_huge_page_from_user(struct page *dst_page,
 
 	for (i = 0; i < pages_per_huge_page; i++) {
 		if (allow_pagefault)
-			page_kaddr = kmap(dst_page + i);
+			page_kaddr = kmap_thread(dst_page + i);
 		else
 			page_kaddr = kmap_atomic(dst_page + i);
 		rc = copy_from_user(page_kaddr,
 				(const void __user *)(src + i * PAGE_SIZE),
 				PAGE_SIZE);
 		if (allow_pagefault)
-			kunmap(dst_page + i);
+			kunmap_thread(dst_page + i);
 		else
 			kunmap_atomic(page_kaddr);
 
diff --git a/mm/swapfile.c b/mm/swapfile.c
index debc94155f74..e3296ff95648 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3219,7 +3219,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		error = PTR_ERR(page);
 		goto bad_swap_unlock_inode;
 	}
-	swap_header = kmap(page);
+	swap_header = kmap_thread(page);
 
 	maxpages = read_swap_header(p, swap_header, inode);
 	if (unlikely(!maxpages)) {
@@ -3395,7 +3395,7 @@ SYSCALL_DEFINE2(swapon, const char __user *, specialfile, int, swap_flags)
 		filp_close(swap_file, NULL);
 out:
 	if (page && !IS_ERR(page)) {
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 	}
 	if (name)
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 9a3d451402d7..4d38c881bb2d 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -586,11 +586,11 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 			mmap_read_unlock(dst_mm);
 			BUG_ON(!page);
 
-			page_kaddr = kmap(page);
+			page_kaddr = kmap_thread(page);
 			err = copy_from_user(page_kaddr,
 					     (const void __user *) src_addr,
 					     PAGE_SIZE);
-			kunmap(page);
+			kunmap_thread(page);
 			if (unlikely(err)) {
 				err = -EFAULT;
 				goto out;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:54:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:54:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5138.13409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySp-0005xh-Og; Fri, 09 Oct 2020 19:54:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5138.13409; Fri, 09 Oct 2020 19:54:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySp-0005xW-Gv; Fri, 09 Oct 2020 19:54:03 +0000
Received: by outflank-mailman (input) for mailman id 5138;
 Fri, 09 Oct 2020 19:54:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySn-0005Ps-V7
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:01 +0000
Received: from mga06.intel.com (unknown [134.134.136.31])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 661d6fbc-7671-45ed-9247-8eec879818b0;
 Fri, 09 Oct 2020 19:53:55 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:54 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:53 -0700
IronPort-SDR: qAwk0eQ+X+LCpDWx3sefcn5lu/8xvuqCwVCgM1HthVzDQ0OnqVQX6aby7qQJevD7rkG95HexiI
 CPFDGhVgYKVg==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="227179201"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="227179201"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: bZn0Zma13YkX7FJFkNsCYMjMNff2FSiB/MgoUwYT50qMOfu0yRYfZtOtUYx1uwuwPtN+tdFX+r
 hwly4wq6Ikyg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="519847271"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	=?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= <jglisse@redhat.com>,
	Martin KaFai Lau <kafai@fb.com>,
	Song Liu <songliubraving@fb.com>,
	Yonghong Song <yhs@fb.com>,
	Andrii Nakryiko <andriin@fb.com>,
	John Fastabend <john.fastabend@gmail.com>,
	KP Singh <kpsingh@chromium.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 53/58] lib: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:28 -0700
Message-Id: <20201009195033.3208459-54-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: Andrii Nakryiko <andriin@fb.com>
Cc: John Fastabend <john.fastabend@gmail.com>
Cc: KP Singh <kpsingh@chromium.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 lib/iov_iter.c | 12 ++++++------
 lib/test_bpf.c |  4 ++--
 lib/test_hmm.c |  8 ++++----
 3 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 5e40786c8f12..1d47f957cf95 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -208,7 +208,7 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b
 	}
 	/* Too bad - revert to non-atomic kmap */
 
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	from = kaddr + offset;
 	left = copyout(buf, from, copy);
 	copy -= left;
@@ -225,7 +225,7 @@ static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t b
 		from += copy;
 		bytes -= copy;
 	}
-	kunmap(page);
+	kunmap_thread(page);
 
 done:
 	if (skip == iov->iov_len) {
@@ -292,7 +292,7 @@ static size_t copy_page_from_iter_iovec(struct page *page, size_t offset, size_t
 	}
 	/* Too bad - revert to non-atomic kmap */
 
-	kaddr = kmap(page);
+	kaddr = kmap_thread(page);
 	to = kaddr + offset;
 	left = copyin(to, buf, copy);
 	copy -= left;
@@ -309,7 +309,7 @@ static size_t copy_page_from_iter_iovec(struct page *page, size_t offset, size_t
 		to += copy;
 		bytes -= copy;
 	}
-	kunmap(page);
+	kunmap_thread(page);
 
 done:
 	if (skip == iov->iov_len) {
@@ -1742,10 +1742,10 @@ int iov_iter_for_each_range(struct iov_iter *i, size_t bytes,
 		return 0;
 
 	iterate_all_kinds(i, bytes, v, -EINVAL, ({
-		w.iov_base = kmap(v.bv_page) + v.bv_offset;
+		w.iov_base = kmap_thread(v.bv_page) + v.bv_offset;
 		w.iov_len = v.bv_len;
 		err = f(&w, context);
-		kunmap(v.bv_page);
+		kunmap_thread(v.bv_page);
 		err;}), ({
 		w = v;
 		err = f(&w, context);})
diff --git a/lib/test_bpf.c b/lib/test_bpf.c
index ca7d635bccd9..441f822f56ba 100644
--- a/lib/test_bpf.c
+++ b/lib/test_bpf.c
@@ -6506,11 +6506,11 @@ static void *generate_test_data(struct bpf_test *test, int sub)
 		if (!page)
 			goto err_kfree_skb;
 
-		ptr = kmap(page);
+		ptr = kmap_thread(page);
 		if (!ptr)
 			goto err_free_page;
 		memcpy(ptr, test->frag_data, MAX_DATA);
-		kunmap(page);
+		kunmap_thread(page);
 		skb_add_rx_frag(skb, 0, page, 0, MAX_DATA, MAX_DATA);
 	}
 
diff --git a/lib/test_hmm.c b/lib/test_hmm.c
index e7dc3de355b7..e40d26f97f45 100644
--- a/lib/test_hmm.c
+++ b/lib/test_hmm.c
@@ -329,9 +329,9 @@ static int dmirror_do_read(struct dmirror *dmirror, unsigned long start,
 		if (!page)
 			return -ENOENT;
 
-		tmp = kmap(page);
+		tmp = kmap_thread(page);
 		memcpy(ptr, tmp, PAGE_SIZE);
-		kunmap(page);
+		kunmap_thread(page);
 
 		ptr += PAGE_SIZE;
 		bounce->cpages++;
@@ -398,9 +398,9 @@ static int dmirror_do_write(struct dmirror *dmirror, unsigned long start,
 		if (!page || xa_pointer_tag(entry) != DPT_XA_TAG_WRITE)
 			return -ENOENT;
 
-		tmp = kmap(page);
+		tmp = kmap_thread(page);
 		memcpy(tmp, ptr, PAGE_SIZE);
-		kunmap(page);
+		kunmap_thread(page);
 
 		ptr += PAGE_SIZE;
 		bounce->cpages++;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:54:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:54:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5139.13414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySq-0005yl-6X; Fri, 09 Oct 2020 19:54:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5139.13414; Fri, 09 Oct 2020 19:54:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySp-0005yK-U0; Fri, 09 Oct 2020 19:54:03 +0000
Received: by outflank-mailman (input) for mailman id 5139;
 Fri, 09 Oct 2020 19:54:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySp-0005x6-8P
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:03 +0000
Received: from mga12.intel.com (unknown [192.55.52.136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6f54446-46d1-4920-a493-c209ac9ecba7;
 Fri, 09 Oct 2020 19:54:02 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:54:01 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:54:00 -0700
IronPort-SDR: s702jsQClR22uyUIa1DJCn3AxvMbIytbcDnPsENmUbh0L/kV/PkPNi2T5tmbu+/56fcl6uR0LT
 +MZiIYckw7Rw==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="144851103"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="144851103"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: vscQ8/PPGEHFzlszF4epup1/yO0Zdiea6PUky2hcTMxo6TNU67DufPr1A76ntRogBKZEHAEf8L
 bsjRzl3g/9bQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="529054196"
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Kirti Wankhede <kwankhede@nvidia.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 55/58] samples: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:30 -0700
Message-Id: <20201009195033.3208459-56-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Kirti Wankhede <kwankhede@nvidia.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 samples/vfio-mdev/mbochs.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/samples/vfio-mdev/mbochs.c b/samples/vfio-mdev/mbochs.c
index 3cc5e5921682..6d95422c0b46 100644
--- a/samples/vfio-mdev/mbochs.c
+++ b/samples/vfio-mdev/mbochs.c
@@ -479,12 +479,12 @@ static ssize_t mdev_access(struct mdev_device *mdev, char *buf, size_t count,
 		pos -= MBOCHS_MMIO_BAR_OFFSET;
 		poff = pos & ~PAGE_MASK;
 		pg = __mbochs_get_page(mdev_state, pos >> PAGE_SHIFT);
-		map = kmap(pg);
+		map = kmap_thread(pg);
 		if (is_write)
 			memcpy(map + poff, buf, count);
 		else
 			memcpy(buf, map + poff, count);
-		kunmap(pg);
+		kunmap_thread(pg);
 		put_page(pg);
 
 	} else {
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:54:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:54:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5143.13433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySu-00067R-Je; Fri, 09 Oct 2020 19:54:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5143.13433; Fri, 09 Oct 2020 19:54:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySu-00067E-Dm; Fri, 09 Oct 2020 19:54:08 +0000
Received: by outflank-mailman (input) for mailman id 5143;
 Fri, 09 Oct 2020 19:54:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySs-0005Ps-VI
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:06 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e8c6ff98-ebe7-4d7f-bc01-8468e7bda959;
 Fri, 09 Oct 2020 19:53:58 +0000 (UTC)
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:57 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:56 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySs-0005Ps-VI
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:06 +0000
X-Inumbo-ID: e8c6ff98-ebe7-4d7f-bc01-8468e7bda959
Received: from mga09.intel.com (unknown [134.134.136.24])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e8c6ff98-ebe7-4d7f-bc01-8468e7bda959;
	Fri, 09 Oct 2020 19:53:58 +0000 (UTC)
IronPort-SDR: N/ixrEFoutZgbXM/AHGDsQNxC1kA56LAqAWK8DPr3GIxlNVz/5LTOSXL4du4Kmk5VTXL5D8VII
 cnjVp56u10vA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165643384"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165643384"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
  by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:57 -0700
IronPort-SDR: I2dap8dqfU4jkUPjfDUEInlXytro00aRpG9kPai7q+149QpAXJ//5ZcvM1PB/3zTdLq6ZFWD0B
 HAucu70KNsIA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="343972515"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:56 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 54/58] powerpc: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:29 -0700
Message-Id: <20201009195033.3208459-55-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 arch/powerpc/mm/mem.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 42e25874f5a8..6ef557b8dda6 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -573,9 +573,9 @@ void flush_icache_user_page(struct vm_area_struct *vma, struct page *page,
 {
 	unsigned long maddr;
 
-	maddr = (unsigned long) kmap(page) + (addr & ~PAGE_MASK);
+	maddr = (unsigned long) kmap_thread(page) + (addr & ~PAGE_MASK);
 	flush_icache_range(maddr, maddr + len);
-	kunmap(page);
+	kunmap_thread(page);
 }
 
 /*
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:54:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5144.13441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySv-00068x-CN; Fri, 09 Oct 2020 19:54:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5144.13441; Fri, 09 Oct 2020 19:54:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySu-00068W-U6; Fri, 09 Oct 2020 19:54:08 +0000
Received: by outflank-mailman (input) for mailman id 5144;
 Fri, 09 Oct 2020 19:54:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySt-0005x6-77
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:07 +0000
Received: from mga01.intel.com (unknown [192.55.52.88])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5bc4318-9a41-4b0b-bc6e-214ad03ae863;
 Fri, 09 Oct 2020 19:54:06 +0000 (UTC)
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:54:04 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:54:03 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySt-0005x6-77
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:07 +0000
X-Inumbo-ID: b5bc4318-9a41-4b0b-bc6e-214ad03ae863
Received: from mga01.intel.com (unknown [192.55.52.88])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b5bc4318-9a41-4b0b-bc6e-214ad03ae863;
	Fri, 09 Oct 2020 19:54:06 +0000 (UTC)
IronPort-SDR: c5c8lWx53OKNfJTf5ukBb57x4xGzHo3/RsCiYSxlMsrD0E9zKISdhlW1OfiP938gwC0kIn9Ony
 mzjXzkcVlXMQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="182976549"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="182976549"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga004.jf.intel.com ([10.7.209.38])
  by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:54:04 -0700
IronPort-SDR: NEiErKYZfPZGqrWyxIZ0z43ef9+IKjrcXd9NJIRckxXQwdDEMd5poh/dknsqhifod6Ym/UoaZd
 Mu4h9c7DJvbA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="462301216"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga004-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:54:03 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 56/58] dax: Stray access protection for dax_direct_access()
Date: Fri,  9 Oct 2020 12:50:31 -0700
Message-Id: <20201009195033.3208459-57-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

dax_direct_access() is a special case of accessing pmem via a page
offset and without a struct page.

Because the dax driver is well aware of the special protections it has
mapped memory with, call dev_access_[en|dis]able() directly instead of
incurring the unnecessary overhead of trying to get a page to kmap.

Similar to kmap, we leverage the existing functions dax_read_[un]lock()
because they are already required to surround use of the memory
returned from dax_direct_access().
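
A minimal userspace sketch of the pairing described above, under the
assumption that the enable/disable calls behave like a per-thread nesting
counter; the *_model() helpers below are hypothetical stand-ins for the
real functions, which manipulate PKS protection keys and SRCU instead:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: dev_access_enable()/dev_access_disable() maintain
 * a per-thread nesting depth, and the dax_read_(un)lock() wrappers pair
 * them exactly as the patch does.  Access to the device memory is legal
 * only while the depth is non-zero. */
static int access_depth;

static void dev_access_enable_model(void)
{
	access_depth++;
}

static void dev_access_disable_model(void)
{
	assert(access_depth > 0);  /* every disable must pair with an enable */
	access_depth--;
}

static bool access_allowed(void)
{
	return access_depth > 0;
}

static int dax_read_lock_model(void)
{
	dev_access_enable_model();  /* open the access window first ... */
	return 0;                   /* ... then take the (modeled) SRCU lock */
}

static void dax_read_unlock_model(int id)
{
	(void)id;                    /* drop the (modeled) SRCU lock first ... */
	dev_access_disable_model();  /* ... then close the access window */
}
```

Because the counter nests, callers that already hold dax_read_lock() can
safely take it again without prematurely closing the access window.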

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/dax/super.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/dax/super.c b/drivers/dax/super.c
index e84070b55463..0ddb3ee73e36 100644
--- a/drivers/dax/super.c
+++ b/drivers/dax/super.c
@@ -30,6 +30,7 @@ static DEFINE_SPINLOCK(dax_host_lock);
 
 int dax_read_lock(void)
 {
+	dev_access_enable(false);
 	return srcu_read_lock(&dax_srcu);
 }
 EXPORT_SYMBOL_GPL(dax_read_lock);
@@ -37,6 +38,7 @@ EXPORT_SYMBOL_GPL(dax_read_lock);
 void dax_read_unlock(int id)
 {
 	srcu_read_unlock(&dax_srcu, id);
+	dev_access_disable(false);
 }
 EXPORT_SYMBOL_GPL(dax_read_unlock);
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:54:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:54:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5147.13457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySz-0006IM-Rx; Fri, 09 Oct 2020 19:54:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5147.13457; Fri, 09 Oct 2020 19:54:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQySz-0006I9-L6; Fri, 09 Oct 2020 19:54:13 +0000
Received: by outflank-mailman (input) for mailman id 5147;
 Fri, 09 Oct 2020 19:54:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySx-0005Ps-VW
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:12 +0000
Received: from mga06.intel.com (unknown [134.134.136.31])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aeeae19a-7896-4128-97d7-8714610ab44e;
 Fri, 09 Oct 2020 19:54:08 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:54:07 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:54:06 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySx-0005Ps-VW
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:12 +0000
X-Inumbo-ID: aeeae19a-7896-4128-97d7-8714610ab44e
Received: from mga06.intel.com (unknown [134.134.136.31])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id aeeae19a-7896-4128-97d7-8714610ab44e;
	Fri, 09 Oct 2020 19:54:08 +0000 (UTC)
IronPort-SDR: H8bQ2F75ooVY2NRJCpX/U8qZHkgTS9+a84f9GqjJMTf/7sPa785KonRZXlZZdluWAAn3WcHYds
 KcVwzJGbiUfg==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="227179229"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="227179229"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
  by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:54:07 -0700
IronPort-SDR: src9VKOvZKZdzmhAmOplZF/CbXKWViy2IZEce2k3Dk75HVdMa01qWYEbu8ywq24hGpakhLlrfI
 iYxmJaXcxtuw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="519847335"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:54:06 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 57/58] nvdimm/pmem: Stray access protection for pmem->virt_addr
Date: Fri,  9 Oct 2020 12:50:32 -0700
Message-Id: <20201009195033.3208459-58-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The pmem driver uses a cached virtual address to access its memory
directly.  Because the nvdimm driver is well aware of the special
protections it has mapped memory with, we call dev_access_[en|dis]able()
around the direct pmem->virt_addr (pmem_addr) usage instead of incurring
the unnecessary overhead of trying to get a page to kmap.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/nvdimm/pmem.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index fab29b514372..e4dc1ae990fc 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -148,7 +148,9 @@ static blk_status_t pmem_do_read(struct pmem_device *pmem,
 	if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
 		return BLK_STS_IOERR;
 
+	dev_access_enable(false);
 	rc = read_pmem(page, page_off, pmem_addr, len);
+	dev_access_disable(false);
 	flush_dcache_page(page);
 	return rc;
 }
@@ -180,11 +182,13 @@ static blk_status_t pmem_do_write(struct pmem_device *pmem,
 	 * after clear poison.
 	 */
 	flush_dcache_page(page);
+	dev_access_enable(false);
 	write_pmem(pmem_addr, page, page_off, len);
 	if (unlikely(bad_pmem)) {
 		rc = pmem_clear_poison(pmem, pmem_off, len);
 		write_pmem(pmem_addr, page, page_off, len);
 	}
+	dev_access_disable(false);
 
 	return rc;
 }
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:54:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:54:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5148.13464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyT0-0006Jx-FD; Fri, 09 Oct 2020 19:54:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5148.13464; Fri, 09 Oct 2020 19:54:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyT0-0006JE-31; Fri, 09 Oct 2020 19:54:14 +0000
Received: by outflank-mailman (input) for mailman id 5148;
 Fri, 09 Oct 2020 19:54:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySy-0005x6-7b
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:12 +0000
Received: from mga03.intel.com (unknown [134.134.136.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8081570c-b6c3-4219-88e1-68789d0e79c4;
 Fri, 09 Oct 2020 19:54:11 +0000 (UTC)
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:54:10 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:54:09 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySy-0005x6-7b
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:54:12 +0000
X-Inumbo-ID: 8081570c-b6c3-4219-88e1-68789d0e79c4
Received: from mga03.intel.com (unknown [134.134.136.65])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8081570c-b6c3-4219-88e1-68789d0e79c4;
	Fri, 09 Oct 2020 19:54:11 +0000 (UTC)
IronPort-SDR: c08hdHvc6ybHobwQ+D0UVX/9ZHOzfQiZb2ianlX2khVfGVs+sgixO1byrqO2vOAn/X1rge7y3h
 hITAoUY46rQg==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165592594"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165592594"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga004.fm.intel.com ([10.253.24.48])
  by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:54:10 -0700
IronPort-SDR: LoljxrxdEbdUiebRGt6guGtjo6SPfpWGHMJuLgTgYB7BeDU8CJtiGau7DzbQyZ+Y5gpHVnCtT5
 urTwCEpflSRg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="343972696"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga004-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:54:09 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 58/58] [dax|pmem]: Enable stray access protection
Date: Fri,  9 Oct 2020 12:50:33 -0700
Message-Id: <20201009195033.3208459-59-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

Protecting against stray writes is particularly important for PMEM
because, unlike writes to anonymous memory, writes to PMEM persist
across a reboot.  Thus data corruption could result in permanent loss of
data.

While stray writes are more serious than reads, protection is also
enabled for reads.  This helps to detect bugs in code which would
incorrectly access device memory, and it prevents more serious machine
checks should those buggy reads hit a poisoned page.

Enable stray access protection by setting the flag in pgmap which
requests it.  There is no option presented to the user.  If Zone Device
Access Protection is not supported, this flag has no effect.
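
The "no effect when unsupported" behaviour can be sketched as follows;
PGMAP_PROT_ENABLED_MODEL, struct pgmap_model, and
stray_protection_active() are hypothetical names used only to illustrate
the semantics the message describes:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the behaviour described above: the driver
 * unconditionally requests protection via a pgmap flag, and the request
 * only takes effect when the platform supports Zone Device Access
 * Protection; otherwise it is a harmless no-op. */
#define PGMAP_PROT_ENABLED_MODEL (1u << 0)

struct pgmap_model {
	uint32_t flags;
};

static bool stray_protection_active(const struct pgmap_model *pgmap,
				    bool platform_supported)
{
	/* The request flag has no effect without platform support. */
	return platform_supported &&
	       (pgmap->flags & PGMAP_PROT_ENABLED_MODEL);
}
```

Setting the flag unconditionally keeps the driver code simple: the
support check lives in one place rather than in every caller.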

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/dax/device.c  | 2 ++
 drivers/nvdimm/pmem.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/drivers/dax/device.c b/drivers/dax/device.c
index 1e89513f3c59..e6fb35b4f0fb 100644
--- a/drivers/dax/device.c
+++ b/drivers/dax/device.c
@@ -430,6 +430,8 @@ int dev_dax_probe(struct device *dev)
 	}
 
 	dev_dax->pgmap.type = MEMORY_DEVICE_GENERIC;
+	dev_dax->pgmap.flags |= PGMAP_PROT_ENABLED;
+
 	addr = devm_memremap_pages(dev, &dev_dax->pgmap);
 	if (IS_ERR(addr))
 		return PTR_ERR(addr);
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index e4dc1ae990fc..9fcd8338e23f 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -426,6 +426,8 @@ static int pmem_attach_disk(struct device *dev,
 		return -EBUSY;
 	}
 
+	pmem->pgmap.flags |= PGMAP_PROT_ENABLED;
+
 	q = blk_alloc_queue(dev_to_node(dev));
 	if (!q)
 		return -ENOMEM;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:56:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:56:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5169.13481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyVR-00075G-5M; Fri, 09 Oct 2020 19:56:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5169.13481; Fri, 09 Oct 2020 19:56:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyVR-000757-2F; Fri, 09 Oct 2020 19:56:45 +0000
Received: by outflank-mailman (input) for mailman id 5169;
 Fri, 09 Oct 2020 19:56:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySP-0003VG-6N
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:37 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb1a64dd-e4c8-4906-97c2-1fdc3985d221;
 Fri, 09 Oct 2020 19:53:27 +0000 (UTC)
Received: from orsmga002.jf.intel.com ([10.7.209.21])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:26 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:25 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySP-0003VG-6N
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:37 +0000
X-Inumbo-ID: bb1a64dd-e4c8-4906-97c2-1fdc3985d221
Received: from mga09.intel.com (unknown [134.134.136.24])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bb1a64dd-e4c8-4906-97c2-1fdc3985d221;
	Fri, 09 Oct 2020 19:53:27 +0000 (UTC)
IronPort-SDR: XekXSL8XDIzS4z8DlhwRK7mwt6lTeWdTniQtHCkXg5m/Za1sv6ctOaMcXkcX0ONfiLAaB2or/0
 E0lSeMRBqvaQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165643304"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165643304"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga002.jf.intel.com ([10.7.209.21])
  by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:26 -0700
IronPort-SDR: 2ROrwlCi2HA6jb/vbADUiWNpPA9fz//CxLO3iPg6ChQCY2CkDUKdp2BuOf7Z0K54YjJfUTECcl
 a5kJtnEVjTKQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="329006788"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga002-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:25 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 44/58] drivers/xen: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:19 -0700
Message-Id: <20201009195033.3208459-45-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/xen/gntalloc.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/gntalloc.c b/drivers/xen/gntalloc.c
index 3fa40c723e8e..3b78e055feff 100644
--- a/drivers/xen/gntalloc.c
+++ b/drivers/xen/gntalloc.c
@@ -184,9 +184,9 @@ static int add_grefs(struct ioctl_gntalloc_alloc_gref *op,
 static void __del_gref(struct gntalloc_gref *gref)
 {
 	if (gref->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
-		uint8_t *tmp = kmap(gref->page);
+		uint8_t *tmp = kmap_thread(gref->page);
 		tmp[gref->notify.pgoff] = 0;
-		kunmap(gref->page);
+		kunmap_thread(gref->page);
 	}
 	if (gref->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
 		notify_remote_via_evtchn(gref->notify.event);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:58:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:58:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5170.13493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyWq-0007Ft-HH; Fri, 09 Oct 2020 19:58:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5170.13493; Fri, 09 Oct 2020 19:58:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyWq-0007Fm-E7; Fri, 09 Oct 2020 19:58:12 +0000
Received: by outflank-mailman (input) for mailman id 5170;
 Fri, 09 Oct 2020 19:58:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQyQW-00017t-O1
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:40 +0000
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6a77be5-dfa3-4325-b5d5-64874a8f25aa;
 Fri, 09 Oct 2020 19:51:30 +0000 (UTC)
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:29 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:51:28 -0700
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQyQW-00017t-O1
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:51:40 +0000
X-Inumbo-ID: b6a77be5-dfa3-4325-b5d5-64874a8f25aa
Received: from mga17.intel.com (unknown [192.55.52.151])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b6a77be5-dfa3-4325-b5d5-64874a8f25aa;
	Fri, 09 Oct 2020 19:51:30 +0000 (UTC)
IronPort-SDR: hQbQJaHKT1qYoXdodrzz5AwO+ossNkFoGtKaMa110sFHVGe5CcmJs7aOsl7Sse7UQvhIOgTdbb
 oh4G8BA5Z9wg==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="145397298"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="145397298"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
  by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:29 -0700
IronPort-SDR: 0+lGONiHFSeyE/0kQGUe/v/Z0vqstKsQGCedA4TvP63fkNzDWGjPYFj9hsR1yECUQTFrwG697Z
 McTBAsxZaLwQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="298419323"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:51:28 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	David Howells <dhowells@redhat.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 12/58] fs/afs: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:49:47 -0700
Message-Id: <20201009195033.3208459-13-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

The kmap() calls in this FS are localized to a single thread.  To
avoid the overhead of global PKRS updates, use the new kmap_thread()
call.

Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 fs/afs/dir.c      | 16 ++++++++--------
 fs/afs/dir_edit.c | 16 ++++++++--------
 fs/afs/mntpt.c    |  4 ++--
 fs/afs/write.c    |  4 ++--
 4 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/fs/afs/dir.c b/fs/afs/dir.c
index 1d2e61e0ab04..5d01cdb590de 100644
--- a/fs/afs/dir.c
+++ b/fs/afs/dir.c
@@ -127,14 +127,14 @@ static bool afs_dir_check_page(struct afs_vnode *dvnode, struct page *page,
 	qty /= sizeof(union afs_xdr_dir_block);
 
 	/* check them */
-	dbuf = kmap(page);
+	dbuf = kmap_thread(page);
 	for (tmp = 0; tmp < qty; tmp++) {
 		if (dbuf->blocks[tmp].hdr.magic != AFS_DIR_MAGIC) {
 			printk("kAFS: %s(%lx): bad magic %d/%d is %04hx\n",
 			       __func__, dvnode->vfs_inode.i_ino, tmp, qty,
 			       ntohs(dbuf->blocks[tmp].hdr.magic));
 			trace_afs_dir_check_failed(dvnode, off, i_size);
-			kunmap(page);
+			kunmap_thread(page);
 			trace_afs_file_error(dvnode, -EIO, afs_file_error_dir_bad_magic);
 			goto error;
 		}
@@ -146,7 +146,7 @@ static bool afs_dir_check_page(struct afs_vnode *dvnode, struct page *page,
 		((u8 *)&dbuf->blocks[tmp])[AFS_DIR_BLOCK_SIZE - 1] = 0;
 	}
 
-	kunmap(page);
+	kunmap_thread(page);
 
 checked:
 	afs_stat_v(dvnode, n_read_dir);
@@ -177,13 +177,13 @@ static bool afs_dir_check_pages(struct afs_vnode *dvnode, struct afs_read *req)
 		req->pos, req->index, req->nr_pages, req->offset);
 
 	for (i = 0; i < req->nr_pages; i++) {
-		dbuf = kmap(req->pages[i]);
+		dbuf = kmap_thread(req->pages[i]);
 		for (j = 0; j < qty; j++) {
 			union afs_xdr_dir_block *block = &dbuf->blocks[j];
 
 			pr_warn("[%02x] %32phN\n", i * qty + j, block);
 		}
-		kunmap(req->pages[i]);
+		kunmap_thread(req->pages[i]);
 	}
 	return false;
 }
@@ -481,7 +481,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx,
 
 		limit = blkoff & ~(PAGE_SIZE - 1);
 
-		dbuf = kmap(page);
+		dbuf = kmap_thread(page);
 
 		/* deal with the individual blocks stashed on this page */
 		do {
@@ -489,7 +489,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx,
 					       sizeof(union afs_xdr_dir_block)];
 			ret = afs_dir_iterate_block(dvnode, ctx, dblock, blkoff);
 			if (ret != 1) {
-				kunmap(page);
+				kunmap_thread(page);
 				goto out;
 			}
 
@@ -497,7 +497,7 @@ static int afs_dir_iterate(struct inode *dir, struct dir_context *ctx,
 
 		} while (ctx->pos < dir->i_size && blkoff < limit);
 
-		kunmap(page);
+		kunmap_thread(page);
 		ret = 0;
 	}
 
diff --git a/fs/afs/dir_edit.c b/fs/afs/dir_edit.c
index b108528bf010..35ed6828e205 100644
--- a/fs/afs/dir_edit.c
+++ b/fs/afs/dir_edit.c
@@ -218,7 +218,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE);
 	need_slots /= AFS_DIR_DIRENT_SIZE;
 
-	meta_page = kmap(page0);
+	meta_page = kmap_thread(page0);
 	meta = &meta_page->blocks[0];
 	if (i_size == 0)
 		goto new_directory;
@@ -247,7 +247,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 				set_page_private(page, 1);
 				SetPagePrivate(page);
 			}
-			dir_page = kmap(page);
+			dir_page = kmap_thread(page);
 		}
 
 		/* Abandon the edit if we got a callback break. */
@@ -284,7 +284,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 
 		if (page != page0) {
 			unlock_page(page);
-			kunmap(page);
+			kunmap_thread(page);
 			put_page(page);
 		}
 	}
@@ -323,7 +323,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	afs_set_contig_bits(block, slot, need_slots);
 	if (page != page0) {
 		unlock_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 	}
 
@@ -337,7 +337,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 
 out_unmap:
 	unlock_page(page0);
-	kunmap(page0);
+	kunmap_thread(page0);
 	put_page(page0);
 	_leave("");
 	return;
@@ -346,7 +346,7 @@ void afs_edit_dir_add(struct afs_vnode *vnode,
 	trace_afs_edit_dir(vnode, why, afs_edit_dir_create_inval, 0, 0, 0, 0, name->name);
 	clear_bit(AFS_VNODE_DIR_VALID, &vnode->flags);
 	if (page != page0) {
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 	}
 	goto out_unmap;
@@ -398,7 +398,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 	need_slots = round_up(12 + name->len + 1 + 4, AFS_DIR_DIRENT_SIZE);
 	need_slots /= AFS_DIR_DIRENT_SIZE;
 
-	meta_page = kmap(page0);
+	meta_page = kmap_thread(page0);
 	meta = &meta_page->blocks[0];
 
 	/* Find a page that has sufficient slots available.  Each VM page
@@ -410,7 +410,7 @@ void afs_edit_dir_remove(struct afs_vnode *vnode,
 			page = find_lock_page(vnode->vfs_inode.i_mapping, index);
 			if (!page)
 				goto error;
-			dir_page = kmap(page);
+			dir_page = kmap_thread(page);
 		} else {
 			page = page0;
 			dir_page = meta_page;
diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c
index 79bc5f1338ed..562454e2fd5c 100644
--- a/fs/afs/mntpt.c
+++ b/fs/afs/mntpt.c
@@ -139,11 +139,11 @@ static int afs_mntpt_set_params(struct fs_context *fc, struct dentry *mntpt)
 			return ret;
 		}
 
-		buf = kmap(page);
+		buf = kmap_thread(page);
 		ret = -EINVAL;
 		if (buf[size - 1] == '.')
 			ret = vfs_parse_fs_string(fc, "source", buf, size - 1);
-		kunmap(page);
+		kunmap_thread(page);
 		put_page(page);
 		if (ret < 0)
 			return ret;
diff --git a/fs/afs/write.c b/fs/afs/write.c
index 4b2265cb1891..c56e5b4db4ae 100644
--- a/fs/afs/write.c
+++ b/fs/afs/write.c
@@ -38,9 +38,9 @@ static int afs_fill_page(struct afs_vnode *vnode, struct key *key,
 	if (pos >= vnode->vfs_inode.i_size) {
 		p = pos & ~PAGE_MASK;
 		ASSERTCMP(p + len, <=, PAGE_SIZE);
-		data = kmap(page);
+		data = kmap_thread(page);
 		memset(data + p, 0, len);
-		kunmap(page);
+		kunmap_thread(page);
 		return 0;
 	}
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:59:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:59:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5172.13505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyXj-0007SU-Se; Fri, 09 Oct 2020 19:59:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5172.13505; Fri, 09 Oct 2020 19:59:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyXj-0007SN-Oj; Fri, 09 Oct 2020 19:59:07 +0000
Received: by outflank-mailman (input) for mailman id 5172;
 Fri, 09 Oct 2020 19:59:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySZ-0003VG-6p
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:47 +0000
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b332249c-4559-4c52-ab00-c74c294dc3ba;
 Fri, 09 Oct 2020 19:53:32 +0000 (UTC)
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:31 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:31 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySZ-0003VG-6p
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:47 +0000
X-Inumbo-ID: b332249c-4559-4c52-ab00-c74c294dc3ba
Received: from mga04.intel.com (unknown [192.55.52.120])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b332249c-4559-4c52-ab00-c74c294dc3ba;
	Fri, 09 Oct 2020 19:53:32 +0000 (UTC)
IronPort-SDR: yzaa39ck/j3h3xkJLKOvGHyLQ5yo3A8DLPn9stlc7AMjYQOmLSa0s32ddG9B1BRWjVv633sgGZ
 Tk1duDTUd9lQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162893622"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="162893622"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
  by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:31 -0700
IronPort-SDR: tmjEkPyqc7ovqgXQGaOUDyTXP+wKCpLd9SWVbWzn6wrJD59J17IHvt5wfRHDxw49UwqlYvaTbK
 hGdPiXGJDlXQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="349957787"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga002-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:31 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 46/58] drivers/staging: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:21 -0700
Message-Id: <20201009195033.3208459-47-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/staging/rts5208/rtsx_transport.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/rts5208/rtsx_transport.c b/drivers/staging/rts5208/rtsx_transport.c
index 0027bcf638ad..f747cc23951b 100644
--- a/drivers/staging/rts5208/rtsx_transport.c
+++ b/drivers/staging/rts5208/rtsx_transport.c
@@ -92,13 +92,13 @@ unsigned int rtsx_stor_access_xfer_buf(unsigned char *buffer,
 			while (sglen > 0) {
 				unsigned int plen = min(sglen, (unsigned int)
 						PAGE_SIZE - poff);
-				unsigned char *ptr = kmap(page);
+				unsigned char *ptr = kmap_thread(page);
 
 				if (dir == TO_XFER_BUF)
 					memcpy(ptr + poff, buffer + cnt, plen);
 				else
 					memcpy(buffer + cnt, ptr + poff, plen);
-				kunmap(page);
+				kunmap_thread(page);
 
 				/* Start at the beginning of the next page */
 				poff = 0;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 19:59:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 19:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5173.13517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyXv-0007Xy-8k; Fri, 09 Oct 2020 19:59:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5173.13517; Fri, 09 Oct 2020 19:59:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyXv-0007Xq-5M; Fri, 09 Oct 2020 19:59:19 +0000
Received: by outflank-mailman (input) for mailman id 5173;
 Fri, 09 Oct 2020 19:59:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySU-0003VG-6W
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:42 +0000
Received: from mga03.intel.com (unknown [134.134.136.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30caa391-3e96-48cc-9fcb-f4823c84ce0d;
 Fri, 09 Oct 2020 19:53:30 +0000 (UTC)
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:29 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:28 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySU-0003VG-6W
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:42 +0000
X-Inumbo-ID: 30caa391-3e96-48cc-9fcb-f4823c84ce0d
Received: from mga03.intel.com (unknown [134.134.136.65])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 30caa391-3e96-48cc-9fcb-f4823c84ce0d;
	Fri, 09 Oct 2020 19:53:30 +0000 (UTC)
IronPort-SDR: kaVSwKYO/oBj40XBBm1bm6G1A1++4hOplAS3cyXyq2u2UFdJSJ887QaOjSj9FhCnDez5pzPD9U
 QFqmpzx346Pg==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="165592493"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="165592493"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
  by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:29 -0700
IronPort-SDR: BOVOU8SHaQbMr+qBBSrCzuqbJHuQd4ndzwss05aaSC7o5I30ANXdekITPCnqmJ5xtHPcWbAp0e
 1H/r6jgeLwtA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="298419945"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga007-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:28 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Ard Biesheuvel <ardb@kernel.org>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 45/58] drivers/firmware: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:20 -0700
Message-Id: <20201009195033.3208459-46-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/firmware/efi/capsule-loader.c | 6 +++---
 drivers/firmware/efi/capsule.c        | 4 ++--
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/firmware/efi/capsule-loader.c b/drivers/firmware/efi/capsule-loader.c
index 4dde8edd53b6..aa2e0b5940fd 100644
--- a/drivers/firmware/efi/capsule-loader.c
+++ b/drivers/firmware/efi/capsule-loader.c
@@ -197,7 +197,7 @@ static ssize_t efi_capsule_write(struct file *file, const char __user *buff,
 		page = cap_info->pages[cap_info->index - 1];
 	}
 
-	kbuff = kmap(page);
+	kbuff = kmap_thread(page);
 	kbuff += PAGE_SIZE - cap_info->page_bytes_remain;
 
 	/* Copy capsule binary data from user space to kernel space buffer */
@@ -217,7 +217,7 @@ static ssize_t efi_capsule_write(struct file *file, const char __user *buff,
 	}
 
 	cap_info->count += write_byte;
-	kunmap(page);
+	kunmap_thread(page);
 
 	/* Submit the full binary to efi_capsule_update() API */
 	if (cap_info->header.headersize > 0 &&
@@ -236,7 +236,7 @@ static ssize_t efi_capsule_write(struct file *file, const char __user *buff,
 	return write_byte;
 
 fail_unmap:
-	kunmap(page);
+	kunmap_thread(page);
 failed:
 	efi_free_all_buff_pages(cap_info);
 	return ret;
diff --git a/drivers/firmware/efi/capsule.c b/drivers/firmware/efi/capsule.c
index 598b7800d14e..edb7797b0e4f 100644
--- a/drivers/firmware/efi/capsule.c
+++ b/drivers/firmware/efi/capsule.c
@@ -244,7 +244,7 @@ int efi_capsule_update(efi_capsule_header_t *capsule, phys_addr_t *pages)
 	for (i = 0; i < sg_count; i++) {
 		efi_capsule_block_desc_t *sglist;
 
-		sglist = kmap(sg_pages[i]);
+		sglist = kmap_thread(sg_pages[i]);
 
 		for (j = 0; j < SGLIST_PER_PAGE && count > 0; j++) {
 			u64 sz = min_t(u64, imagesize,
@@ -265,7 +265,7 @@ int efi_capsule_update(efi_capsule_header_t *capsule, phys_addr_t *pages)
 		else
 			sglist[j].data = page_to_phys(sg_pages[i + 1]);
 
-		kunmap(sg_pages[i]);
+		kunmap_thread(sg_pages[i]);
 	}
 
 	mutex_lock(&capsule_mutex);
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 20:03:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 20:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5180.13528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQybb-00008o-QJ; Fri, 09 Oct 2020 20:03:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5180.13528; Fri, 09 Oct 2020 20:03:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQybb-00008h-Mx; Fri, 09 Oct 2020 20:03:07 +0000
Received: by outflank-mailman (input) for mailman id 5180;
 Fri, 09 Oct 2020 20:03:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySK-0003VG-68
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:32 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3a4d526-b774-45b6-986a-8c76e30921b9;
 Fri, 09 Oct 2020 19:53:23 +0000 (UTC)
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:23 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:21 -0700
IronPort-SDR: GhVMyiL9zhS2aRTAKOr+xl1SDa/QOf9DA/kKt55l4IUXUjm24NizN6Sb+9ifvZUj0phLSipFoi
 wYk7MinVHiCQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="229715329"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="229715329"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga007.jf.intel.com ([10.7.209.58])
  by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:23 -0700
IronPort-SDR: 1WKybDcesSB1sLOn62ClPw5Oz5roBPmtIQsa1XcymP/Rm5On5MJ4hIzRHLHnwze0O/9dY0G7BT
 oI/HBaoknNhQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="355863255"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:21 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Ulf Hansson <ulf.hansson@linaro.org>,
	Sascha Sommer <saschasommer@freenet.de>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 43/58] drivers/mmc: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:18 -0700
Message-Id: <20201009195033.3208459-44-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Sascha Sommer <saschasommer@freenet.de>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/mmc/host/mmc_spi.c    | 4 ++--
 drivers/mmc/host/sdricoh_cs.c | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
index 18a850f37ddc..ab28e7103b8d 100644
--- a/drivers/mmc/host/mmc_spi.c
+++ b/drivers/mmc/host/mmc_spi.c
@@ -918,7 +918,7 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
 		}
 
 		/* allow pio too; we don't allow highmem */
-		kmap_addr = kmap(sg_page(sg));
+		kmap_addr = kmap_thread(sg_page(sg));
 		if (direction == DMA_TO_DEVICE)
 			t->tx_buf = kmap_addr + sg->offset;
 		else
@@ -950,7 +950,7 @@ mmc_spi_data_do(struct mmc_spi_host *host, struct mmc_command *cmd,
 		/* discard mappings */
 		if (direction == DMA_FROM_DEVICE)
 			flush_kernel_dcache_page(sg_page(sg));
-		kunmap(sg_page(sg));
+		kunmap_thread(sg_page(sg));
 		if (dma_dev)
 			dma_unmap_page(dma_dev, dma_addr, PAGE_SIZE, dir);
 
diff --git a/drivers/mmc/host/sdricoh_cs.c b/drivers/mmc/host/sdricoh_cs.c
index 76a8cd3a186f..7806bc69c4f1 100644
--- a/drivers/mmc/host/sdricoh_cs.c
+++ b/drivers/mmc/host/sdricoh_cs.c
@@ -312,11 +312,11 @@ static void sdricoh_request(struct mmc_host *mmc, struct mmc_request *mrq)
 			int result;
 			page = sg_page(data->sg);
 
-			buf = kmap(page) + data->sg->offset + (len * i);
+			buf = kmap_thread(page) + data->sg->offset + (len * i);
 			result =
 				sdricoh_blockio(host,
 					data->flags & MMC_DATA_READ, buf, len);
-			kunmap(page);
+			kunmap_thread(page);
 			flush_dcache_page(page);
 			if (result) {
 				dev_err(dev, "sdricoh_request: cmd %i "
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 20:04:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 20:04:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5181.13540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQycZ-0000IW-4a; Fri, 09 Oct 2020 20:04:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5181.13540; Fri, 09 Oct 2020 20:04:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQycZ-0000IM-1U; Fri, 09 Oct 2020 20:04:07 +0000
Received: by outflank-mailman (input) for mailman id 5181;
 Fri, 09 Oct 2020 20:04:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySe-0003VG-7G
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:52 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf38be33-9ae8-4546-b721-bdb4d3e8e31a;
 Fri, 09 Oct 2020 19:53:35 +0000 (UTC)
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:35 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:34 -0700
IronPort-SDR: sBe4qza1ii/MGc30I0htHBYySTCcdoQh3ck1mtlQblZ2QOG0ARGVBMIJlxmelbFm4XO7kCrmPA
 KKTQ1obXbEpA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="229715352"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="229715352"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga008.jf.intel.com ([10.7.209.65])
  by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:35 -0700
IronPort-SDR: 1BDjhBWRiZHjmax0H5jHPL9Ee2R7VPzFxt4XNyUZ5d5LafdvLgy40B0wtQFVU3AX2TCQgDyRso
 HhmDRI33qIDA==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="345148602"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga008-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:34 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Miquel Raynal <miquel.raynal@bootlin.com>,
	Richard Weinberger <richard@nod.at>,
	Vignesh Raghavendra <vigneshr@ti.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 47/58] drivers/mtd: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:22 -0700
Message-Id: <20201009195033.3208459-48-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Miquel Raynal <miquel.raynal@bootlin.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Vignesh Raghavendra <vigneshr@ti.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/mtd/mtd_blkdevs.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 0c05f77f9b21..4b18998273fa 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -88,14 +88,14 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr,
 			return BLK_STS_IOERR;
 		return BLK_STS_OK;
 	case REQ_OP_READ:
-		buf = kmap(bio_page(req->bio)) + bio_offset(req->bio);
+		buf = kmap_thread(bio_page(req->bio)) + bio_offset(req->bio);
 		for (; nsect > 0; nsect--, block++, buf += tr->blksize) {
 			if (tr->readsect(dev, block, buf)) {
-				kunmap(bio_page(req->bio));
+				kunmap_thread(bio_page(req->bio));
 				return BLK_STS_IOERR;
 			}
 		}
-		kunmap(bio_page(req->bio));
+		kunmap_thread(bio_page(req->bio));
 		rq_flush_dcache_pages(req);
 		return BLK_STS_OK;
 	case REQ_OP_WRITE:
@@ -103,14 +103,14 @@ static blk_status_t do_blktrans_request(struct mtd_blktrans_ops *tr,
 			return BLK_STS_IOERR;
 
 		rq_flush_dcache_pages(req);
-		buf = kmap(bio_page(req->bio)) + bio_offset(req->bio);
+		buf = kmap_thread(bio_page(req->bio)) + bio_offset(req->bio);
 		for (; nsect > 0; nsect--, block++, buf += tr->blksize) {
 			if (tr->writesect(dev, block, buf)) {
-				kunmap(bio_page(req->bio));
+				kunmap_thread(bio_page(req->bio));
 				return BLK_STS_IOERR;
 			}
 		}
-		kunmap(bio_page(req->bio));
+		kunmap_thread(bio_page(req->bio));
 		return BLK_STS_OK;
 	default:
 		return BLK_STS_IOERR;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 20:04:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 20:04:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5182.13552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQycs-0000PO-DO; Fri, 09 Oct 2020 20:04:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5182.13552; Fri, 09 Oct 2020 20:04:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQycs-0000PH-AF; Fri, 09 Oct 2020 20:04:26 +0000
Received: by outflank-mailman (input) for mailman id 5182;
 Fri, 09 Oct 2020 20:04:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySF-0003VG-66
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:27 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1f381d98-fe9e-4d68-8e1c-ef3c87d3dae7;
 Fri, 09 Oct 2020 19:53:20 +0000 (UTC)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:19 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:18 -0700
IronPort-SDR: oFArxdeaaEdNwkSkWc6QZhEbjIJdqfCt0F9ef7cZ36fUICd3MSKqpYgAseFrbQH6krHzX5t4r4
 lUxcH/chTKog==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="229715323"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="229715323"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
  by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:19 -0700
IronPort-SDR: W4xxziDMxkncUnjnfmhtERgTle+fDEBI/syTBEG/tlt++YK2v+9/w8FvvM36z1FzCpPIzrEgyR
 3zBS2pDk0hzg==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="354959339"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:18 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	"James E.J. Bottomley" <jejb@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 42/58] drivers/scsi: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:17 -0700
Message-Id: <20201009195033.3208459-43-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls are localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: "James E.J. Bottomley" <jejb@linux.ibm.com>
Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/scsi/ipr.c     | 8 ++++----
 drivers/scsi/pmcraid.c | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index b0aa58d117cc..a5a0b8feb661 100644
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -3923,9 +3923,9 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
 			buffer += bsize_elem) {
 		struct page *page = sg_page(sg);
 
-		kaddr = kmap(page);
+		kaddr = kmap_thread(page);
 		memcpy(kaddr, buffer, bsize_elem);
-		kunmap(page);
+		kunmap_thread(page);
 
 		sg->length = bsize_elem;
 
@@ -3938,9 +3938,9 @@ static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
 	if (len % bsize_elem) {
 		struct page *page = sg_page(sg);
 
-		kaddr = kmap(page);
+		kaddr = kmap_thread(page);
 		memcpy(kaddr, buffer, len % bsize_elem);
-		kunmap(page);
+		kunmap_thread(page);
 
 		sg->length = len % bsize_elem;
 	}
diff --git a/drivers/scsi/pmcraid.c b/drivers/scsi/pmcraid.c
index aa9ae2ae8579..4b05ba4b8a11 100644
--- a/drivers/scsi/pmcraid.c
+++ b/drivers/scsi/pmcraid.c
@@ -3269,13 +3269,13 @@ static int pmcraid_copy_sglist(
 	for (i = 0; i < (len / bsize_elem); i++, sg = sg_next(sg), buffer += bsize_elem) {
 		struct page *page = sg_page(sg);
 
-		kaddr = kmap(page);
+		kaddr = kmap_thread(page);
 		if (direction == DMA_TO_DEVICE)
 			rc = copy_from_user(kaddr, buffer, bsize_elem);
 		else
 			rc = copy_to_user(buffer, kaddr, bsize_elem);
 
-		kunmap(page);
+		kunmap_thread(page);
 
 		if (rc) {
 			pmcraid_err("failed to copy user data into sg list\n");
@@ -3288,14 +3288,14 @@ static int pmcraid_copy_sglist(
 	if (len % bsize_elem) {
 		struct page *page = sg_page(sg);
 
-		kaddr = kmap(page);
+		kaddr = kmap_thread(page);
 
 		if (direction == DMA_TO_DEVICE)
 			rc = copy_from_user(kaddr, buffer, len % bsize_elem);
 		else
 			rc = copy_to_user(buffer, kaddr, len % bsize_elem);
 
-		kunmap(page);
+		kunmap_thread(page);
 
 		sg->length = len % bsize_elem;
 	}
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 20:05:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 20:05:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5185.13564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyeC-0000cY-PY; Fri, 09 Oct 2020 20:05:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5185.13564; Fri, 09 Oct 2020 20:05:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyeC-0000cR-Ly; Fri, 09 Oct 2020 20:05:48 +0000
Received: by outflank-mailman (input) for mailman id 5185;
 Fri, 09 Oct 2020 20:05:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySj-0003VG-7G
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:57 +0000
Received: from mga02.intel.com (unknown [134.134.136.20])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0f9d367-7d99-41de-b28b-a3d90d450997;
 Fri, 09 Oct 2020 19:53:48 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:47 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:46 -0700
IronPort-SDR: RffSlFjQ0u1RoSX45kPUXvPF3Vw5pU/zKXIN7B2MoxGCg/yXY+mGIAKBEEp01ajJQ8AuP/OGtW
 QDw5dllcg4Bg==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="152451185"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="152451185"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga005.jf.intel.com ([10.7.209.41])
  by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:47 -0700
IronPort-SDR: hSLATPILKZexzU9UbulfkIo/q3wPUuwdHpgGfd67f9RjOUrzIM+GeKhbZtX4gmiSR0zMVEA6Pm
 bSqmX2bz2skQ==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="529054109"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:46 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	Eric Biederman <ebiederm@xmission.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 51/58] kernel: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:26 -0700
Message-Id: <20201009195033.3208459-52-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

This kmap() call is localized to a single thread.  To avoid the
overhead of global PKRS updates, use the new kmap_thread() call.

Cc: Eric Biederman <ebiederm@xmission.com>
Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 kernel/kexec_core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c19c0dad1ebe..272a9920c0d6 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -815,7 +815,7 @@ static int kimage_load_normal_segment(struct kimage *image,
 		if (result < 0)
 			goto out;
 
-		ptr = kmap(page);
+		ptr = kmap_thread(page);
 		/* Start with a clear page */
 		clear_page(ptr);
 		ptr += maddr & ~PAGE_MASK;
@@ -828,7 +828,7 @@ static int kimage_load_normal_segment(struct kimage *image,
 			memcpy(ptr, kbuf, uchunk);
 		else
 			result = copy_from_user(ptr, buf, uchunk);
-		kunmap(page);
+		kunmap_thread(page);
 		if (result) {
 			result = -EFAULT;
 			goto out;
@@ -879,7 +879,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 			goto out;
 		}
 		arch_kexec_post_alloc_pages(page_address(page), 1, 0);
-		ptr = kmap(page);
+		ptr = kmap_thread(page);
 		ptr += maddr & ~PAGE_MASK;
 		mchunk = min_t(size_t, mbytes,
 				PAGE_SIZE - (maddr & ~PAGE_MASK));
@@ -895,7 +895,7 @@ static int kimage_load_crash_segment(struct kimage *image,
 		else
 			result = copy_from_user(ptr, buf, uchunk);
 		kexec_flush_icache_page(page);
-		kunmap(page);
+		kunmap_thread(page);
 		arch_kexec_pre_free_pages(page_address(page), 1);
 		if (result) {
 			result = -EFAULT;
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 20:06:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 20:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5187.13577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyej-0000mu-8D; Fri, 09 Oct 2020 20:06:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5187.13577; Fri, 09 Oct 2020 20:06:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyej-0000mn-59; Fri, 09 Oct 2020 20:06:21 +0000
Received: by outflank-mailman (input) for mailman id 5187;
 Fri, 09 Oct 2020 20:06:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kQySA-0003VG-5y
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:22 +0000
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d452309a-cb21-4eed-9904-b4d3dcf278de;
 Fri, 09 Oct 2020 19:53:17 +0000 (UTC)
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:16 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 09 Oct 2020 12:53:15 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lMaC=DQ=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kQySA-0003VG-5y
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 19:53:22 +0000
X-Inumbo-ID: d452309a-cb21-4eed-9904-b4d3dcf278de
Received: from mga04.intel.com (unknown [192.55.52.120])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d452309a-cb21-4eed-9904-b4d3dcf278de;
	Fri, 09 Oct 2020 19:53:17 +0000 (UTC)
IronPort-SDR: dtyJeJMEyuAFFKXUyW3C+SeRkMQE+np3w56zlQF/zJLVZHiNvhryV9Cjmj/btBVNwwF8CfI8kV
 HztKg2hNMEPA==
X-IronPort-AV: E=McAfee;i="6000,8403,9769"; a="162893592"
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="162893592"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga006.fm.intel.com ([10.253.24.20])
  by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:16 -0700
IronPort-SDR: tpdf2AjeaRc5H+9J2CTgDjuzQWeL2r1NHpqYA6zEYh1sueM+cncDhIB0ZXYbhrsfqtXGL/LVMU
 w/znFNAZKmjw==
X-IronPort-AV: E=Sophos;i="5.77,355,1596524400"; 
   d="scan'208";a="518801571"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by fmsmga006-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 09 Oct 2020 12:53:15 -0700
From: ira.weiny@intel.com
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>
Cc: Ira Weiny <ira.weiny@intel.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org,
	netdev@vger.kernel.org,
	bpf@vger.kernel.org,
	kexec@lists.infradead.org,
	linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org,
	linux-aio@kvack.org,
	io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com,
	ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org,
	linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com,
	samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: [PATCH RFC PKS/PMEM 41/58] drivers/target: Utilize new kmap_thread()
Date: Fri,  9 Oct 2020 12:50:16 -0700
Message-Id: <20201009195033.3208459-42-ira.weiny@intel.com>
X-Mailer: git-send-email 2.28.0.rc0.12.gb6a658bd00c9
In-Reply-To: <20201009195033.3208459-1-ira.weiny@intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ira Weiny <ira.weiny@intel.com>

These kmap() calls in this driver are localized to a single thread.  To
avoid the overhead of global PKRS updates, use the new kmap_thread()
call.

Signed-off-by: Ira Weiny <ira.weiny@intel.com>
---
 drivers/target/target_core_iblock.c    | 4 ++--
 drivers/target/target_core_rd.c        | 4 ++--
 drivers/target/target_core_transport.c | 4 ++--
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
index 1c181d31f4c8..df7b1568edb3 100644
--- a/drivers/target/target_core_iblock.c
+++ b/drivers/target/target_core_iblock.c
@@ -415,7 +415,7 @@ iblock_execute_zero_out(struct block_device *bdev, struct se_cmd *cmd)
 	unsigned char *buf, *not_zero;
 	int ret;
 
-	buf = kmap(sg_page(sg)) + sg->offset;
+	buf = kmap_thread(sg_page(sg)) + sg->offset;
 	if (!buf)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	/*
@@ -423,7 +423,7 @@ iblock_execute_zero_out(struct block_device *bdev, struct se_cmd *cmd)
 	 * incoming WRITE_SAME payload does not contain zeros.
 	 */
 	not_zero = memchr_inv(buf, 0x00, cmd->data_length);
-	kunmap(sg_page(sg));
+	kunmap_thread(sg_page(sg));
 
 	if (not_zero)
 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
diff --git a/drivers/target/target_core_rd.c b/drivers/target/target_core_rd.c
index 408bd975170b..dbbdd39c5bf9 100644
--- a/drivers/target/target_core_rd.c
+++ b/drivers/target/target_core_rd.c
@@ -159,9 +159,9 @@ static int rd_allocate_sgl_table(struct rd_dev *rd_dev, struct rd_dev_sg_table *
 			sg_assign_page(&sg[j], pg);
 			sg[j].length = PAGE_SIZE;
 
-			p = kmap(pg);
+			p = kmap_thread(pg);
 			memset(p, init_payload, PAGE_SIZE);
-			kunmap(pg);
+			kunmap_thread(pg);
 		}
 
 		page_offset += sg_per_table;
diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
index ff26ab0a5f60..8d0bae5a92e5 100644
--- a/drivers/target/target_core_transport.c
+++ b/drivers/target/target_core_transport.c
@@ -1692,11 +1692,11 @@ int target_submit_cmd_map_sgls(struct se_cmd *se_cmd, struct se_session *se_sess
 			unsigned char *buf = NULL;
 
 			if (sgl)
-				buf = kmap(sg_page(sgl)) + sgl->offset;
+				buf = kmap_thread(sg_page(sgl)) + sgl->offset;
 
 			if (buf) {
 				memset(buf, 0, sgl->length);
-				kunmap(sg_page(sgl));
+				kunmap_thread(sg_page(sgl));
 			}
 		}
 
-- 
2.28.0.rc0.12.gb6a658bd00c9



From xen-devel-bounces@lists.xenproject.org Fri Oct 09 20:20:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 20:20:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5198.13590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQysa-0002y4-Iy; Fri, 09 Oct 2020 20:20:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5198.13590; Fri, 09 Oct 2020 20:20:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQysa-0002xx-Fu; Fri, 09 Oct 2020 20:20:40 +0000
Received: by outflank-mailman (input) for mailman id 5198;
 Fri, 09 Oct 2020 20:20:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kQysZ-0002xE-UV
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 20:20:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e41c4d65-b909-49f3-804e-8ac38dc4cabe;
 Fri, 09 Oct 2020 20:20:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQysS-0005Xl-MV; Fri, 09 Oct 2020 20:20:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kQysS-0000jC-Dr; Fri, 09 Oct 2020 20:20:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kQysS-0008Pc-DN; Fri, 09 Oct 2020 20:20:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kQysZ-0002xE-UV
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 20:20:39 +0000
X-Inumbo-ID: e41c4d65-b909-49f3-804e-8ac38dc4cabe
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e41c4d65-b909-49f3-804e-8ac38dc4cabe;
	Fri, 09 Oct 2020 20:20:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2c6lQIxSybPIV7zyJZAnpRkJYOCnP/gh9jzOXrP6RKM=; b=yzbsckugO00FZluTYx0rzrXXsA
	H7EqIXcM6r58Wh9MtfM+ZI95NYT2LZCHk0n898kzMj6iLnrJ0Q2NUFNHOLy/mVtYL+ideRrvr6XFi
	gPTmmk56Pm2EY/mRL4sYkdufMi+rNvxljgBDTeN8r3Bw1LAm+TtzWQmPPx67sWr+vpcU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQysS-0005Xl-MV; Fri, 09 Oct 2020 20:20:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQysS-0000jC-Dr; Fri, 09 Oct 2020 20:20:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kQysS-0008Pc-DN; Fri, 09 Oct 2020 20:20:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155612-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155612: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 20:20:32 +0000

flight 155612 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155612/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    0 days
Testing same since   155612  2020-10-09 18:01:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 20:28:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 20:28:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5200.13603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyzd-0003Uj-Bb; Fri, 09 Oct 2020 20:27:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5200.13603; Fri, 09 Oct 2020 20:27:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kQyzd-0003Uc-8J; Fri, 09 Oct 2020 20:27:57 +0000
Received: by outflank-mailman (input) for mailman id 5200;
 Fri, 09 Oct 2020 20:27:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4y9=DQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kQyzc-0003UX-1m
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 20:27:56 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8aeadf5d-0244-4420-92fd-c55589930fe0;
 Fri, 09 Oct 2020 20:27:54 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=n4y9=DQ=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kQyzc-0003UX-1m
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 20:27:56 +0000
X-Inumbo-ID: 8aeadf5d-0244-4420-92fd-c55589930fe0
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8aeadf5d-0244-4420-92fd-c55589930fe0;
	Fri, 09 Oct 2020 20:27:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602275275;
  h=subject:to:references:cc:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=SLeBnas35PvecqOOwKJ2Z9gEB76wpLcuw/zkY5tzagg=;
  b=Tq+HRWzmgGGAL8ufd0lQ1QI3aiwZQZYlMz9fNzqnEXE4YRsor5sLOZso
   uQGMBmyYw1BIe+o4epX94WgTLgGj0gW3TOKJoKZu1UAe2iY7dWxA1PCAV
   EO6S1uktdgDQofVse69+CkR2D6KmlB86YpVVxSHpckLkaarP62lCEl+8w
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: sQ0GOikCXaagwywo+Q7GFOnFqXN6P1vQ6AxKP05If4QAzmNnD9mo4lsYfXq6bg5rs6xmztFW4S
 hOUNdLBmJj01k7L4BWmASv2Xl1UzPkAmcot2fmLg7eNwlG2WqwAzFilaIIbZ0c0ERlupiWLBU1
 uw41o0j7UEJJMMMo0W9ky79xcckYCJZDWLZNvF8ibizQrHKaCV+MNVkVPcYesUUqcPPsMdnJWJ
 R2zsRHBueCyd458dlnzqf4dwdajoApZPvIhE/+qJYY5TEYL7OM9avvVVJVINu+9Ihdfk95ySti
 Rhk=
X-SBRS: 2.5
X-MesageID: 28680128
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,355,1596513600"; 
   d="scan'208";a="28680128"
Subject: Re: [xen-unstable-smoke test] 155612: regressions - FAIL
To: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	"Stefano Stabellini" <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
References: <osstest-155612-mainreport@xen.org>
CC: Trammell Hudson <hudson@trmm.net>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0d3766f0-a1a4-bc86-9372-79b1b65eae47@citrix.com>
Date: Fri, 9 Oct 2020 21:27:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <osstest-155612-mainreport@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 09/10/2020 21:20, osstest service owner wrote:
> flight 155612 xen-unstable-smoke real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/155612/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

From
http://logs.test-lab.xenproject.org/osstest/logs/155612/test-arm64-arm64-xl-xsm/serial-laxton0.log

Oct  9 18:45:20.790611 Xen 4.15-unstable (c/s Fri Oct 2 12:30:34 2020 +0200 git:8a62dee9ce) EFI loader
Oct  9 18:45:20.934506 Using configuration file 'xen.cfg'
Oct  9 18:45:20.934558 vmlinuz: 0x00000083fb1e2000-0x00000083fc8b6a00
Oct  9 18:45:20.946436 initrd.gz: 0x00000083f94ef000-0x00000083fb1e1c5a
Oct  9 18:45:21.618435 xenpolicy: 0x00000083f94ec000-0x00000083f94eea0a
Oct  9 18:45:21.774473 Oct  9 18:51:08.218564 <client 0x1b7f430 connected - now 1 clients>
Oct  9 18:51:08.976537 <client 0x1b7f430 disconnected - now 0 clients>

Looks like arm64 is crashing fairly early on boot.

This is probably caused by "efi: Enable booting unified
hypervisor/kernel/initrd images".

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 21:32:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 21:32:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5220.13639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR009-00029M-Q7; Fri, 09 Oct 2020 21:32:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5220.13639; Fri, 09 Oct 2020 21:32:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR009-00029F-Mx; Fri, 09 Oct 2020 21:32:33 +0000
Received: by outflank-mailman (input) for mailman id 5220;
 Fri, 09 Oct 2020 21:32:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kR008-00029A-VA
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 21:32:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a3a0136-83fd-4827-9a2e-82c3571ccd32;
 Fri, 09 Oct 2020 21:32:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR006-00074c-EJ; Fri, 09 Oct 2020 21:32:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR006-00047s-4H; Fri, 09 Oct 2020 21:32:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kR006-0001a9-3q; Fri, 09 Oct 2020 21:32:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kR008-00029A-VA
	for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 21:32:32 +0000
X-Inumbo-ID: 5a3a0136-83fd-4827-9a2e-82c3571ccd32
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5a3a0136-83fd-4827-9a2e-82c3571ccd32;
	Fri, 09 Oct 2020 21:32:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NyTuRXjDfm7TCuqhsiTQ17bVnMWKWbbjI9YSBB17sww=; b=Se6O5xZXf+tUufYgZqTJpeIRdI
	yuV8WevKx4hhN6qI6iFQjKlFwaHzbtJKIGH9YTwoJjde60sbV4yyJcLHtJZGfp/9YSOl+ReHnqDgD
	UKeu4X+1cTdxP8DrI5LrKqdKITymq4SONO0mny3sg4V111T4162A5ZqvYm8/dGDajzHk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kR006-00074c-EJ; Fri, 09 Oct 2020 21:32:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kR006-00047s-4H; Fri, 09 Oct 2020 21:32:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kR006-0001a9-3q; Fri, 09 Oct 2020 21:32:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155594-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155594: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=091ab12b340a05c99ce0e31d29293ce58c7014e2
X-Osstest-Versions-That:
    ovmf=69e95b9efed520e643b9e5b0573180aa7c5ecaca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 21:32:30 +0000

flight 155594 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155594/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 091ab12b340a05c99ce0e31d29293ce58c7014e2
baseline version:
 ovmf                 69e95b9efed520e643b9e5b0573180aa7c5ecaca

Last test of basis   155548  2020-10-08 13:39:54 Z    1 days
Testing same since   155594  2020-10-09 09:11:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Maciej Rabeda <maciej.rabeda@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   69e95b9efe..091ab12b34  091ab12b340a05c99ce0e31d29293ce58c7014e2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 21:50:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 21:50:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5226.13653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR0Gz-0003XU-9z; Fri, 09 Oct 2020 21:49:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5226.13653; Fri, 09 Oct 2020 21:49:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR0Gz-0003XN-6m; Fri, 09 Oct 2020 21:49:57 +0000
Received: by outflank-mailman (input) for mailman id 5226;
 Fri, 09 Oct 2020 21:49:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBQf=DQ=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kR0Gy-0003XI-0B
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 21:49:56 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f2c6a3b9-a27c-4970-86f4-80d1f94261da;
 Fri, 09 Oct 2020 21:49:54 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 099LnVFL066278
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 9 Oct 2020 17:49:37 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 099LnUET066277;
 Fri, 9 Oct 2020 14:49:30 -0700 (PDT) (envelope-from ehem)
Date: Fri, 9 Oct 2020 14:49:30 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
        Alex Bennée <alex.bennee@linaro.org>, bertrand.marquis@arm.com,
        andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
Message-ID: <20201009214930.GB65219@mattapan.m5p.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Sep 28, 2020 at 03:47:52PM +0900, Masami Hiramatsu wrote:
> This made progress with my Xen boot on DeveloperBox (
> https://www.96boards.org/product/developerbox/ ) with ACPI.

Now things have progressed a bit more and I can confirm the patch
provides useful progress.  I cannot say whether there is a better way
since I'm not familiar enough with the code.

As such, for both Masami Hiramatsu's "Hide UART address only if STAO..."
and Julien Grall's set of four:

Tested-by: Elliott Mitchell <ehem+xen@m5p.com>


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Oct 09 22:04:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 22:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5228.13665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR0UZ-0005Ns-Jp; Fri, 09 Oct 2020 22:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5228.13665; Fri, 09 Oct 2020 22:03:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR0UZ-0005Nl-Fe; Fri, 09 Oct 2020 22:03:59 +0000
Received: by outflank-mailman (input) for mailman id 5228;
 Fri, 09 Oct 2020 22:03:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yC7Y=DQ=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kR0UX-0005MI-4Q
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 22:03:57 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7e76827-3760-40c2-9c9b-a1389e534246;
 Fri, 09 Oct 2020 22:03:55 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id v12so11213126wmh.3
 for <xen-devel@lists.xenproject.org>; Fri, 09 Oct 2020 15:03:55 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id c18sm14231894wrq.5.2020.10.09.15.03.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 09 Oct 2020 15:03:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:mail-followup-to:references
         :mime-version:content-disposition:in-reply-to;
        bh=6AdF7ImouyBo1v+UVVHbR2jrgxlTgfY49UnpKuJl2vo=;
        b=MAvLr2XnoBdtbc8e8cEo1YppoaD8vfmQyJEZKkS/CF5FMGYOCD9W7O0IZtWYq8ODgA
         hPrQ1FmqoKh8rvOQoL3cQ1KrqxgRTT0OnIwFAo1dmcJzppCqnka5FyN347HmibM8jSEz
         e9ww2l67YncgPsbvE8VuHqhkcOIRrbdxVqCNU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id
         :mail-followup-to:references:mime-version:content-disposition
         :in-reply-to;
        bh=6AdF7ImouyBo1v+UVVHbR2jrgxlTgfY49UnpKuJl2vo=;
        b=NnjA7vs3Eiw8PGllDtnMsy7yFIW59F7p0Aq5NlnHR+a6n51Smiy2TFJed7At8nMplV
         lOXJGIfEc5JhYM+2llo3S+PB6AuLMCs/LJbevefgeMle9v0phglI7N6fboiNoBty/VjT
         OKOiekYaEMZJi2/ABUNdn3ES2sG+uKq0WRBtetcaBTIuveSm3Dxeb5rO+YKkE7w1fw5R
         qEMX3+e5vfo+07PKZT2UPx7eX6cv2efFC5/dATbWvnAqXkVgvS7qP931lK8u4GPC0P4T
         41T0btjaohs7LaUnshl1srI15jfqnyI5gkk+RfVwPFp4rZCdbPDeAL0MhrA7T/Wxgah0
         o31g==
X-Gm-Message-State: AOAM5335K+nGKpTZK0PNxiZWswg2nOBtaLDI4nQcKhOyHbR1MlStUrG2
	pUrcOkHXDgdOtWpKVLgUFyKtmg==
X-Google-Smtp-Source: ABdhPJzLUGH0qXD9MRJ9uaTqaSHXSKqVIytWPhW+Fv8l3CnjdOFUZRdjqAS25NTJ59jhdr8rs7Tm+A==
X-Received: by 2002:a1c:a9d1:: with SMTP id s200mr4476wme.107.1602281034146;
        Fri, 09 Oct 2020 15:03:54 -0700 (PDT)
Date: Sat, 10 Oct 2020 00:03:49 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: ira.weiny@intel.com
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
	Patrik Jakobsson <patrik.r.jakobsson@gmail.com>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 09/58] drivers/gpu: Utilize new kmap_thread()
Message-ID: <20201009220349.GQ438822@phenom.ffwll.local>
Mail-Followup-To: ira.weiny@intel.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	David Airlie <airlied@linux.ie>,
	Patrik Jakobsson <patrik.r.jakobsson@gmail.com>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-10-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201009195033.3208459-10-ira.weiny@intel.com>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Fri, Oct 09, 2020 at 12:49:44PM -0700, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> These kmap() calls in the gpu stack are localized to a single thread.
> To avoid the overhead of global PKRS updates, use the new kmap_thread()
> call.
> 
> Cc: David Airlie <airlied@linux.ie>
> Cc: Daniel Vetter <daniel@ffwll.ch>
> Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>

I'm guessing the entire pile goes in through some other tree. If so:

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>

If you want this to land through maintainer trees, then we need a
per-driver split (since aside from amdgpu and radeon they're all different
subtrees).

btw the two kmap calls in drm you highlight in the cover letter should
also be convertible to kmap_thread. Only vmalloc mappings are held for a
longer time (or it'd be quite a driver bug). So if you want, maybe throw
those two in as two additional patches on top, and we can do some careful
review & testing for them.
-Daniel

> ---
>  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c              | 12 ++++++------
>  drivers/gpu/drm/gma500/gma_display.c                 |  4 ++--
>  drivers/gpu/drm/gma500/mmu.c                         | 10 +++++-----
>  drivers/gpu/drm/i915/gem/i915_gem_shmem.c            |  4 ++--
>  .../gpu/drm/i915/gem/selftests/i915_gem_context.c    |  4 ++--
>  drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c   |  8 ++++----
>  drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c         |  4 ++--
>  drivers/gpu/drm/i915/gt/intel_gtt.c                  |  4 ++--
>  drivers/gpu/drm/i915/gt/shmem_utils.c                |  4 ++--
>  drivers/gpu/drm/i915/i915_gem.c                      |  8 ++++----
>  drivers/gpu/drm/i915/i915_gpu_error.c                |  4 ++--
>  drivers/gpu/drm/i915/selftests/i915_perf.c           |  4 ++--
>  drivers/gpu/drm/radeon/radeon_ttm.c                  |  4 ++--
>  13 files changed, 37 insertions(+), 37 deletions(-)
> 
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> index 978bae731398..bd564bccb7a3 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -2437,11 +2437,11 @@ static ssize_t amdgpu_ttm_gtt_read(struct file *f, char __user *buf,
>  
>  		page = adev->gart.pages[p];
>  		if (page) {
> -			ptr = kmap(page);
> +			ptr = kmap_thread(page);
>  			ptr += off;
>  
>  			r = copy_to_user(buf, ptr, cur_size);
> -			kunmap(adev->gart.pages[p]);
> +			kunmap_thread(adev->gart.pages[p]);
>  		} else
>  			r = clear_user(buf, cur_size);
>  
> @@ -2507,9 +2507,9 @@ static ssize_t amdgpu_iomem_read(struct file *f, char __user *buf,
>  		if (p->mapping != adev->mman.bdev.dev_mapping)
>  			return -EPERM;
>  
> -		ptr = kmap(p);
> +		ptr = kmap_thread(p);
>  		r = copy_to_user(buf, ptr + off, bytes);
> -		kunmap(p);
> +		kunmap_thread(p);
>  		if (r)
>  			return -EFAULT;
>  
> @@ -2558,9 +2558,9 @@ static ssize_t amdgpu_iomem_write(struct file *f, const char __user *buf,
>  		if (p->mapping != adev->mman.bdev.dev_mapping)
>  			return -EPERM;
>  
> -		ptr = kmap(p);
> +		ptr = kmap_thread(p);
>  		r = copy_from_user(ptr + off, buf, bytes);
> -		kunmap(p);
> +		kunmap_thread(p);
>  		if (r)
>  			return -EFAULT;
>  
> diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c
> index 3df6d6e850f5..35f4e55c941f 100644
> --- a/drivers/gpu/drm/gma500/gma_display.c
> +++ b/drivers/gpu/drm/gma500/gma_display.c
> @@ -400,9 +400,9 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
>  		/* Copy the cursor to cursor mem */
>  		tmp_dst = dev_priv->vram_addr + cursor_gt->offset;
>  		for (i = 0; i < cursor_pages; i++) {
> -			tmp_src = kmap(gt->pages[i]);
> +			tmp_src = kmap_thread(gt->pages[i]);
>  			memcpy(tmp_dst, tmp_src, PAGE_SIZE);
> -			kunmap(gt->pages[i]);
> +			kunmap_thread(gt->pages[i]);
>  			tmp_dst += PAGE_SIZE;
>  		}
>  
> diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c
> index 505044c9a673..fba7a3a461fd 100644
> --- a/drivers/gpu/drm/gma500/mmu.c
> +++ b/drivers/gpu/drm/gma500/mmu.c
> @@ -192,20 +192,20 @@ struct psb_mmu_pd *psb_mmu_alloc_pd(struct psb_mmu_driver *driver,
>  		pd->invalid_pte = 0;
>  	}
>  
> -	v = kmap(pd->dummy_pt);
> +	v = kmap_thread(pd->dummy_pt);
>  	for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i)
>  		v[i] = pd->invalid_pte;
>  
> -	kunmap(pd->dummy_pt);
> +	kunmap_thread(pd->dummy_pt);
>  
> -	v = kmap(pd->p);
> +	v = kmap_thread(pd->p);
>  	for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i)
>  		v[i] = pd->invalid_pde;
>  
> -	kunmap(pd->p);
> +	kunmap_thread(pd->p);
>  
>  	clear_page(kmap(pd->dummy_page));
> -	kunmap(pd->dummy_page);
> +	kunmap_thread(pd->dummy_page);
>  
>  	pd->tables = vmalloc_user(sizeof(struct psb_mmu_pt *) * 1024);
>  	if (!pd->tables)
> diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> index 38113d3c0138..274424795fb7 100644
> --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> @@ -566,9 +566,9 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv,
>  		if (err < 0)
>  			goto fail;
>  
> -		vaddr = kmap(page);
> +		vaddr = kmap_thread(page);
>  		memcpy(vaddr, data, len);
> -		kunmap(page);
> +		kunmap_thread(page);
>  
>  		err = pagecache_write_end(file, file->f_mapping,
>  					  offset, len, len,
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> index 7ffc3c751432..b466c677d007 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> @@ -1754,7 +1754,7 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
>  		return -EINVAL;
>  	}
>  
> -	vaddr = kmap(page);
> +	vaddr = kmap_thread(page);
>  	if (!vaddr) {
>  		pr_err("No (mappable) scratch page!\n");
>  		return -EINVAL;
> @@ -1765,7 +1765,7 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
>  		pr_err("Inconsistent initial state of scratch page!\n");
>  		err = -EINVAL;
>  	}
> -	kunmap(page);
> +	kunmap_thread(page);
>  
>  	return err;
>  }
> diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> index 9c7402ce5bf9..447df22e2e06 100644
> --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> @@ -143,7 +143,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
>  	intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt);
>  
>  	p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
> -	cpu = kmap(p) + offset_in_page(offset);
> +	cpu = kmap_thread(p) + offset_in_page(offset);
>  	drm_clflush_virt_range(cpu, sizeof(*cpu));
>  	if (*cpu != (u32)page) {
>  		pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
> @@ -161,7 +161,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
>  	}
>  	*cpu = 0;
>  	drm_clflush_virt_range(cpu, sizeof(*cpu));
> -	kunmap(p);
> +	kunmap_thread(p);
>  
>  out:
>  	__i915_vma_put(vma);
> @@ -236,7 +236,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
>  		intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt);
>  
>  		p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
> -		cpu = kmap(p) + offset_in_page(offset);
> +		cpu = kmap_thread(p) + offset_in_page(offset);
>  		drm_clflush_virt_range(cpu, sizeof(*cpu));
>  		if (*cpu != (u32)page) {
>  			pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
> @@ -254,7 +254,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
>  		}
>  		*cpu = 0;
>  		drm_clflush_virt_range(cpu, sizeof(*cpu));
> -		kunmap(p);
> +		kunmap_thread(p);
>  		if (err)
>  			return err;
>  
> diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
> index 7fb36b12fe7a..38da348282f1 100644
> --- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
> +++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
> @@ -731,7 +731,7 @@ static void swizzle_page(struct page *page)
>  	char *vaddr;
>  	int i;
>  
> -	vaddr = kmap(page);
> +	vaddr = kmap_thread(page);
>  
>  	for (i = 0; i < PAGE_SIZE; i += 128) {
>  		memcpy(temp, &vaddr[i], 64);
> @@ -739,7 +739,7 @@ static void swizzle_page(struct page *page)
>  		memcpy(&vaddr[i + 64], temp, 64);
>  	}
>  
> -	kunmap(page);
> +	kunmap_thread(page);
>  }
>  
>  /**
> diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
> index 2a72cce63fd9..4cfb24e9ed62 100644
> --- a/drivers/gpu/drm/i915/gt/intel_gtt.c
> +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
> @@ -312,9 +312,9 @@ static void poison_scratch_page(struct page *page, unsigned long size)
>  	do {
>  		void *vaddr;
>  
> -		vaddr = kmap(page);
> +		vaddr = kmap_thread(page);
>  		memset(vaddr, POISON_FREE, PAGE_SIZE);
> -		kunmap(page);
> +		kunmap_thread(page);
>  
>  		page = pfn_to_page(page_to_pfn(page) + 1);
>  		size -= PAGE_SIZE;
> diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
> index 43c7acbdc79d..a40d3130cebf 100644
> --- a/drivers/gpu/drm/i915/gt/shmem_utils.c
> +++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
> @@ -142,12 +142,12 @@ static int __shmem_rw(struct file *file, loff_t off,
>  		if (IS_ERR(page))
>  			return PTR_ERR(page);
>  
> -		vaddr = kmap(page);
> +		vaddr = kmap_thread(page);
>  		if (write)
>  			memcpy(vaddr + offset_in_page(off), ptr, this);
>  		else
>  			memcpy(ptr, vaddr + offset_in_page(off), this);
> -		kunmap(page);
> +		kunmap_thread(page);
>  		put_page(page);
>  
>  		len -= this;
> diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> index 9aa3066cb75d..cae8300fd224 100644
> --- a/drivers/gpu/drm/i915/i915_gem.c
> +++ b/drivers/gpu/drm/i915/i915_gem.c
> @@ -312,14 +312,14 @@ shmem_pread(struct page *page, int offset, int len, char __user *user_data,
>  	char *vaddr;
>  	int ret;
>  
> -	vaddr = kmap(page);
> +	vaddr = kmap_thread(page);
>  
>  	if (needs_clflush)
>  		drm_clflush_virt_range(vaddr + offset, len);
>  
>  	ret = __copy_to_user(user_data, vaddr + offset, len);
>  
> -	kunmap(page);
> +	kunmap_thread(page);
>  
>  	return ret ? -EFAULT : 0;
>  }
> @@ -708,7 +708,7 @@ shmem_pwrite(struct page *page, int offset, int len, char __user *user_data,
>  	char *vaddr;
>  	int ret;
>  
> -	vaddr = kmap(page);
> +	vaddr = kmap_thread(page);
>  
>  	if (needs_clflush_before)
>  		drm_clflush_virt_range(vaddr + offset, len);
> @@ -717,7 +717,7 @@ shmem_pwrite(struct page *page, int offset, int len, char __user *user_data,
>  	if (!ret && needs_clflush_after)
>  		drm_clflush_virt_range(vaddr + offset, len);
>  
> -	kunmap(page);
> +	kunmap_thread(page);
>  
>  	return ret ? -EFAULT : 0;
>  }
> diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> index 3e6cbb0d1150..aecd469b6b6e 100644
> --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> @@ -1058,9 +1058,9 @@ i915_vma_coredump_create(const struct intel_gt *gt,
>  
>  			drm_clflush_pages(&page, 1);
>  
> -			s = kmap(page);
> +			s = kmap_thread(page);
>  			ret = compress_page(compress, s, dst, false);
> -			kunmap(page);
> +			kunmap_thread(page);
>  
>  			drm_clflush_pages(&page, 1);
>  
> diff --git a/drivers/gpu/drm/i915/selftests/i915_perf.c b/drivers/gpu/drm/i915/selftests/i915_perf.c
> index c2d001d9c0ec..7f7ef2d056f4 100644
> --- a/drivers/gpu/drm/i915/selftests/i915_perf.c
> +++ b/drivers/gpu/drm/i915/selftests/i915_perf.c
> @@ -307,7 +307,7 @@ static int live_noa_gpr(void *arg)
>  	}
>  
>  	/* Poison the ce->vm so we detect writes not to the GGTT gt->scratch */
> -	scratch = kmap(ce->vm->scratch[0].base.page);
> +	scratch = kmap_thread(ce->vm->scratch[0].base.page);
>  	memset(scratch, POISON_FREE, PAGE_SIZE);
>  
>  	rq = intel_context_create_request(ce);
> @@ -405,7 +405,7 @@ static int live_noa_gpr(void *arg)
>  out_rq:
>  	i915_request_put(rq);
>  out_ce:
> -	kunmap(ce->vm->scratch[0].base.page);
> +	kunmap_thread(ce->vm->scratch[0].base.page);
>  	intel_context_put(ce);
>  out:
>  	stream_destroy(stream);
> diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> index 004344dce140..0aba0cac51e1 100644
> --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> @@ -1013,11 +1013,11 @@ static ssize_t radeon_ttm_gtt_read(struct file *f, char __user *buf,
>  
>  		page = rdev->gart.pages[p];
>  		if (page) {
> -			ptr = kmap(page);
> +			ptr = kmap_thread(page);
>  			ptr += off;
>  
>  			r = copy_to_user(buf, ptr, cur_size);
> -			kunmap(rdev->gart.pages[p]);
> +			kunmap_thread(rdev->gart.pages[p]);
>  		} else
>  			r = clear_user(buf, cur_size);
>  
> -- 
> 2.28.0.rc0.12.gb6a658bd00c9
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Fri Oct 09 22:37:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 22:37:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5236.13684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR10m-0000CK-Ge; Fri, 09 Oct 2020 22:37:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5236.13684; Fri, 09 Oct 2020 22:37:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR10m-0000CD-Dj; Fri, 09 Oct 2020 22:37:16 +0000
Received: by outflank-mailman (input) for mailman id 5236;
 Fri, 09 Oct 2020 22:37:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bBQf=DQ=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kR10l-0000C8-5u
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 22:37:15 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4162e366-9131-446f-825b-eac540daf4d5;
 Fri, 09 Oct 2020 22:37:14 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 099Mar12066450
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 9 Oct 2020 18:36:58 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 099Maqhl066449;
 Fri, 9 Oct 2020 15:36:52 -0700 (PDT) (envelope-from ehem)
Date: Fri, 9 Oct 2020 15:36:52 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>,
        xen-devel@lists.xenproject.org, Alex Bennée <alex.bennee@linaro.org>,
        bertrand.marquis@arm.com, andre.przywara@arm.com,
        Julien Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
Message-ID: <20201009223652.GC65219@mattapan.m5p.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <20201008183904.GA56716@mattapan.m5p.com>
 <f0976c17-ad36-847b-7868-f6bb13948368@xen.org>
 <20201009142208.GA63582@mattapan.m5p.com>
 <55a4ac3d-dc6f-f2f6-1a98-62d1c555d26e@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <55a4ac3d-dc6f-f2f6-1a98-62d1c555d26e@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 09, 2020 at 07:15:20PM +0100, Julien Grall wrote:
> On 09/10/2020 15:22, Elliott Mitchell wrote:
> > Finding all the command-line console settings can be a challenge.  I had
> > thought it was supposed to be "console=hvc0 earlycon=hvc0".
> > 
> > With that though I finally have some output which claims to come from the
> > Linux kernel (yay! finally hit this point!).  As we were both guessing,
> > very early kernel panic:
> > 
> > [    0.000000] efi: Getting EFI parameters from FDT:
> > [    0.000000] efi: Can't find 'System Table' in device tree!
> 
> Thank you for sending part of the log. Looking at the Linux 5.6 code,
> the error message is printed by efi_get_fdt_params() (see
> drivers/firmware/efi.c) when one of the properties is missing.
> 
> 'System Table' suggests that Linux wasn't able to find
> "linux,uefi-system-table" or "xen,uefi-system-table".
> 
> Xen will only create the latter. Would it be possible to add some code in
> __find_uefi_params() to confirm which property Linux thinks is missing?

Trying to rebuild a configuration long after the prior build can be
entertaining.  I finally identified that one patch appears to be for
4.19, but breaks 5.x.  With that and an appropriate adjustment:

[    0.000000] efi: __find_uefi_params(): Missing "linux,uefi-system-table"
[    0.000000] efi: __find_uefi_params(): Missing "linux,uefi-system-table"


The good news is I had been meaning to try a later kernel for a while,
and thus tried building a 5.8 kernel.  Once built, the 5.8 kernel booted
successfully.  The next issue, then, is that the graphics chip's driver
doesn't function under Xen as of the 5.8 kernel and Xen 4.14.

I can now state I've had success with a combination of Julien Grall's
and Masami Hiramatsu's patches.  That certainly qualifies for:

Tested-by: Elliott Mitchell <ehem+xen@m5p.com>

For all 5 patches.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Oct 09 23:12:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 09 Oct 2020 23:12:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5238.13696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR1Yq-000411-9P; Fri, 09 Oct 2020 23:12:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5238.13696; Fri, 09 Oct 2020 23:12:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR1Yq-00040u-6X; Fri, 09 Oct 2020 23:12:28 +0000
Received: by outflank-mailman (input) for mailman id 5238;
 Fri, 09 Oct 2020 23:12:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VNj9=DQ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kR1Yp-00040p-5D
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 23:12:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4512bfe-290b-43b3-8292-33b8dcef48f6;
 Fri, 09 Oct 2020 23:12:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR1Yl-0000kO-P7; Fri, 09 Oct 2020 23:12:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR1Yl-0001SZ-Hf; Fri, 09 Oct 2020 23:12:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kR1Yl-0002Pg-HC; Fri, 09 Oct 2020 23:12:23 +0000
X-Inumbo-ID: c4512bfe-290b-43b3-8292-33b8dcef48f6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6XR9OA8d24Bajj7tZ7N5BQlYU/MHsSHhIxZNWE07xOU=; b=Tv/O10A7+3Sa92eA59h1lS96Yg
	gcbtatAPzryIHqjVbs9aidjFn1lSiiY/fZGkNIj2svL27wuIqOV1HS4DkJNLMxHDzq2cQzxD31FXd
	BMBrdSPpbLmV4c0nvkpzzblgwO9AJSRGlOfxdZjBuEZEBoROR+jU2votXvcFycKdREgE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155615-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155615: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 09 Oct 2020 23:12:23 +0000

flight 155615 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155615/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    0 days
Testing same since   155612  2020-10-09 18:01:22 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 00:41:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 00:41:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5247.13723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR2wJ-00052n-P6; Sat, 10 Oct 2020 00:40:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5247.13723; Sat, 10 Oct 2020 00:40:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR2wJ-00052g-M7; Sat, 10 Oct 2020 00:40:47 +0000
Received: by outflank-mailman (input) for mailman id 5247;
 Sat, 10 Oct 2020 00:40:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xu9P=DR=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kR2wG-00052V-C6
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 00:40:45 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35d94951-08dd-4aa4-8d80-9d434c9fb9a2;
 Sat, 10 Oct 2020 00:40:41 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kR2vS-0004My-FJ; Sat, 10 Oct 2020 00:39:54 +0000
X-Inumbo-ID: 35d94951-08dd-4aa4-8d80-9d434c9fb9a2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=wylOEgAZGPAkxCdlobMzyIfu/ZrVv1Os6TQldUFLziM=; b=F41ki3gR7FBgwnVausn+Ym7HP1
	XlPOrKIoC49hpvUE9/aWBoH68sW9nzN+roolJxog6JtW0hinw+GJGJQweM2tJ1u8x4huOElOun4HX
	OLG5RHoBoAf3CSIzFNHpVgsRU+TgTIZh4srysDUduyMdIIxPlPK5JA/amN/knuZnCqa9Zv6UObB0o
	hZnMFz1K/YCtH1pW7cz6Th6CIA0I8ero69lmDRA42tDkhymN2BfqXsEeqYlbw/xTqpsZhH92AuKGR
	+ciamleDZuZCDFfE/FcJg9XsXU418hEml49KDOM/rl5J3rO0lFNTAWXDAV5Z24k3Ay+VgEnmqYlhx
	rI3nBVLQ==;
Date: Sat, 10 Oct 2020 01:39:54 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Eric Biggers <ebiggers@kernel.org>
Cc: ira.weiny@intel.com, Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201010003954.GW20115@casper.infradead.org>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201009213434.GA839@sol.localdomain>

On Fri, Oct 09, 2020 at 02:34:34PM -0700, Eric Biggers wrote:
> On Fri, Oct 09, 2020 at 12:49:57PM -0700, ira.weiny@intel.com wrote:
> > The kmap() calls in this FS are localized to a single thread.  To avoid
> > the overhead of global PKRS updates, use the new kmap_thread() call.
> >
> > @@ -2410,12 +2410,12 @@ static inline struct page *f2fs_pagecache_get_page(
> >  
> >  static inline void f2fs_copy_page(struct page *src, struct page *dst)
> >  {
> > -	char *src_kaddr = kmap(src);
> > -	char *dst_kaddr = kmap(dst);
> > +	char *src_kaddr = kmap_thread(src);
> > +	char *dst_kaddr = kmap_thread(dst);
> >  
> >  	memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);
> > -	kunmap(dst);
> > -	kunmap(src);
> > +	kunmap_thread(dst);
> > +	kunmap_thread(src);
> >  }
> 
> Wouldn't it make more sense to switch cases like this to kmap_atomic()?
> The pages are only mapped to do a memcpy(), then they're immediately unmapped.

Maybe you missed the earlier thread from Thomas trying to do something
similar for rather different reasons ...

https://lore.kernel.org/lkml/20200919091751.011116649@linutronix.de/


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 02:21:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 02:21:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5257.13747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR4VG-0007gZ-GY; Sat, 10 Oct 2020 02:20:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5257.13747; Sat, 10 Oct 2020 02:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR4VG-0007gR-BA; Sat, 10 Oct 2020 02:20:58 +0000
Received: by outflank-mailman (input) for mailman id 5257;
 Sat, 10 Oct 2020 02:20:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7N0y=DR=suse.de=colyli@srs-us1.protection.inumbo.net>)
 id 1kR4VE-0007gL-K7
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 02:20:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id af0a070b-044d-4ffe-987f-36e7774bb0bb;
 Sat, 10 Oct 2020 02:20:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B0EF9AC65;
 Sat, 10 Oct 2020 02:20:53 +0000 (UTC)
X-Inumbo-ID: af0a070b-044d-4ffe-987f-36e7774bb0bb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH RFC PKS/PMEM 48/58] drivers/md: Utilize new kmap_thread()
To: ira.weiny@intel.com
Cc: Andrew Morton <akpm@linux-foundation.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>,
 Kent Overstreet <kent.overstreet@gmail.com>, x86@kernel.org,
 Dave Hansen <dave.hansen@linux.intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Fenghua Yu <fenghua.yu@intel.com>,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org,
 bpf@vger.kernel.org, kexec@lists.infradead.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 devel@driverdev.osuosl.org, linux-efi@vger.kernel.org,
 linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org,
 target-devel@vger.kernel.org, linux-nfs@vger.kernel.org,
 ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org,
 io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
 linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
 reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
 linux-nilfs@vger.kernel.org, cluster-devel@redhat.com,
 ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org,
 linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org,
 linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 drbd-dev@lists.linbit.com, linux-block@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-cachefs@redhat.com,
 samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-49-ira.weiny@intel.com>
From: Coly Li <colyli@suse.de>
Autocrypt: addr=colyli@suse.de; keydata=
 mQINBFYX6S8BEAC9VSamb2aiMTQREFXK4K/W7nGnAinca7MRuFUD4JqWMJ9FakNRd/E0v30F
 qvZ2YWpidPjaIxHwu3u9tmLKqS+2vnP0k7PRHXBYbtZEMpy3kCzseNfdrNqwJ54A430BHf2S
 GMVRVENiScsnh4SnaYjFVvB8SrlhTsgVEXEBBma5Ktgq9YSoy5miatWmZvHLFTQgFMabCz/P
 j5/xzykrF6yHo0rHZtwzQzF8rriOplAFCECp/t05+OeHHxjSqSI0P/G79Ll+AJYLRRm9til/
 K6yz/1hX5xMToIkYrshDJDrUc8DjEpISQQPhG19PzaUf3vFpmnSVYprcWfJWsa2wZyyjRFkf
 J51S82WfclafNC6N7eRXedpRpG6udUAYOA1YdtlyQRZa84EJvMzW96iSL1Gf+ZGtRuM3k49H
 1wiWOjlANiJYSIWyzJjxAd/7Xtiy/s3PRKL9u9y25ftMLFa1IljiDG+mdY7LyAGfvdtIkanr
 iBpX4gWXd7lNQFLDJMfShfu+CTMCdRzCAQ9hIHPmBeZDJxKq721CyBiGAhRxDN+TYiaG/UWT
 7IB7LL4zJrIe/xQ8HhRO+2NvT89o0LxEFKBGg39yjTMIrjbl2ZxY488+56UV4FclubrG+t16
 r2KrandM7P5RjR+cuHhkKseim50Qsw0B+Eu33Hjry7YCihmGswARAQABtBhDb2x5IExpIDxj
 b2x5bGlAc3VzZS5kZT6JAlYEEwEIAEACGyMHCwkIBwMCAQYVCAIJCgsEFgIDAQIeAQIXgBYh
 BOo+RS/0+Uhgjej60Mc5B5Nrffj8BQJcR84dBQkY++fuAAoJEMc5B5Nrffj8ixcP/3KAKg1X
 EcoW4u/0z+Ton5rCyb/NpAww8MuRjNW82UBUac7yCi1y3OW7NtLjuBLw5SaVG5AArb7IF3U0
 qTOobqfl5XHsT0o5wFHZaKUrnHb6y7V3SplsJWfkP3JmOooJsQB3z3K96ZTkFelsNb0ZaBRu
 gV+LA4MomhQ+D3BCDR1it1OX/tpvm2uaDF6s/8uFtcDEM9eQeqATN/QAJ49nvU/I8zDSY9rc
 0x9mP0x+gH4RccbnoPu/rUG6Fm1ZpLrbb6NpaYBBJ/V1BC4lIOjnd24bsoQrQmnJn9dSr60X
 1MY60XDszIyzRw7vbJcUn6ZzPNFDxFFT9diIb+wBp+DD8ZlD/hnVpl4f921ZbvfOSsXAJrKB
 1hGY17FPwelp1sPcK2mDT+pfHEMV+OQdZzD2OCKtza/5IYismJJm3oVUYMogb5vDNAw9X2aP
 XgwUuG+FDEFPamFMUwIfzYHcePfqf0mMsaeSgtA/xTxzx/0MLjUJHl46Bc0uKDhv7QUyGz0j
 Ywgr2mHTvG+NWQ/mDeHNGkcnsnp3IY7koDHnN2xMFXzY4bn9m8ctqKo2roqjCzoxD/njoAhf
 KBzdybLHATqJG/yiZSbCxDA1n/J4FzPyZ0rNHUAJ/QndmmVspE9syFpFCKigvvyrzm016+k+
 FJ59Q6RG4MSy/+J565Xj+DNY3/dCuQINBFYX6S8BEADZP+2cl4DRFaSaBms08W8/smc5T2CO
 YhAoygZn71rB7Djml2ZdvrLRjR8Qbn0Q/2L2gGUVc63pJnbrjlXSx2LfAFE0SlfYIJ11aFdF
 9w7RvqWByQjDJor3Z0fWvPExplNgMvxpD0U0QrVT5dIGTx9hadejCl/ug09Lr6MPQn+a4+qs
 aRWwgCSHaIuDkH3zI1MJXiqXXFKUzJ/Fyx6R72rqiMPHH2nfwmMu6wOXAXb7+sXjZz5Po9GJ
 g2OcEc+rpUtKUJGyeQsnCDxUcqJXZDBi/GnhPCcraQuqiQ7EGWuJfjk51vaI/rW4bZkA9yEP
 B9rBYngbz7cQymUsfxuTT8OSlhxjP3l4ZIZFKIhDaQeZMj8pumBfEVUyiF6KVSfgfNQ/5PpM
 R4/pmGbRqrAAElhrRPbKQnCkGWDr8zG+AjN1KF6rHaFgAIO7TtZ+F28jq4reLkur0N5tQFww
 wFwxzROdeLHuZjL7eEtcnNnzSkXHczLkV4kQ3+vr/7Gm65mQfnVpg6JpwpVrbDYQeOFlxZ8+
 GERY5Dag4KgKa/4cSZX2x/5+KkQx9wHwackw5gDCvAdZ+Q81nm6tRxEYBBiVDQZYqO73stgT
 ZyrkxykUbQIy8PI+g7XMDCMnPiDncQqgf96KR3cvw4wN8QrgA6xRo8xOc2C3X7jTMQUytCz9
 0MyV1QARAQABiQI8BBgBCAAmAhsMFiEE6j5FL/T5SGCN6PrQxzkHk2t9+PwFAlxHziAFCRj7
 5/EACgkQxzkHk2t9+PxgfA//cH5R1DvpJPwraTAl24SUcG9EWe+NXyqveApe05nk15zEuxxd
 e4zFEjo+xYZilSveLqYHrm/amvQhsQ6JLU+8N60DZHVcXbw1Eb8CEjM5oXdbcJpXh1/1BEwl
 4phsQMkxOTns51bGDhTQkv4lsZKvNByB9NiiMkT43EOx14rjkhHw3rnqoI7ogu8OO7XWfKcL
 CbchjJ8t3c2XK1MUe056yPpNAT2XPNF2EEBPG2Y2F4vLgEbPv1EtpGUS1+JvmK3APxjXUl5z
 6xrxCQDWM5AAtGfM/IswVjbZYSJYyH4BQKrShzMb0rWUjkpXvvjsjt8rEXpZEYJgX9jvCoxt
 oqjCKiVLpwje9WkEe9O9VxljmPvxAhVqJjX62S+TGp93iD+mvpCoHo3+CcvyRcilz+Ko8lfO
 hS9tYT0HDUiDLvpUyH1AR2xW9RGDevGfwGTpF0K6cLouqyZNdhlmNciX48tFUGjakRFsxRmX
 K0Jx4CEZubakJe+894sX6pvNFiI7qUUdB882i5GR3v9ijVPhaMr8oGuJ3kvwBIA8lvRBGVGn
 9xvzkQ8Prpbqh30I4NMp8MjFdkwCN6znBKPHdjNTwE5PRZH0S9J0o67IEIvHfH0eAWAsgpTz
 +jwc7VKH7vkvgscUhq/v1/PEWCAqh9UHy7R/jiUxwzw/288OpgO+i+2l11Y=
Message-ID: <c802fbf4-f67a-b205-536d-9c71b440f9c8@suse.de>
Date: Sat, 10 Oct 2020 10:20:34 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201009195033.3208459-49-ira.weiny@intel.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2020/10/10 03:50, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> These kmap() calls are localized to a single thread.  To avoid the
> overhead of global PKRS updates, use the new kmap_thread() call.
> 

Hi Ira,

There were a number of options considered.

1) Attempt to change all the thread local kmap() calls to kmap_atomic()
2) Introduce a flags parameter to kmap() to indicate if the mapping
should be global or not
3) Change ~20-30 call sites to 'kmap_global()' to indicate that they
require a global mapping of the pages
4) Change ~209 call sites to 'kmap_thread()' to indicate that the
mapping is to be used within that thread of execution only


I copied the above information from patch 00/58 into this message. The
idea behind kmap_thread() is fine with me, but as you said the new API
is very easily missed in new code (even by me). I would be supportive
of option 2), introducing a flag to kmap(): then we won't forget the
new thread-localized kmap method, and people won't ask why a _thread()
function is called but no kthread is created.

Thanks.


Coly Li



> Cc: Coly Li <colyli@suse.de> (maintainer:BCACHE (BLOCK LAYER CACHE))
> Cc: Kent Overstreet <kent.overstreet@gmail.com> (maintainer:BCACHE (BLOCK LAYER CACHE))
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> ---
>  drivers/md/bcache/request.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
> index c7cadaafa947..a4571f6d09dd 100644
> --- a/drivers/md/bcache/request.c
> +++ b/drivers/md/bcache/request.c
> @@ -44,10 +44,10 @@ static void bio_csum(struct bio *bio, struct bkey *k)
>  	uint64_t csum = 0;
>  
>  	bio_for_each_segment(bv, bio, iter) {
> -		void *d = kmap(bv.bv_page) + bv.bv_offset;
> +		void *d = kmap_thread(bv.bv_page) + bv.bv_offset;
>  
>  		csum = bch_crc64_update(csum, d, bv.bv_len);
> -		kunmap(bv.bv_page);
> +		kunmap_thread(bv.bv_page);
>  	}
>  
>  	k->ptr[KEY_PTRS(k)] = csum & (~0ULL >> 1);
> 



From xen-devel-bounces@lists.xenproject.org Sat Oct 10 02:43:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 02:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5263.13767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR4qy-0001OL-EU; Sat, 10 Oct 2020 02:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5263.13767; Sat, 10 Oct 2020 02:43:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR4qy-0001OE-9r; Sat, 10 Oct 2020 02:43:24 +0000
Received: by outflank-mailman (input) for mailman id 5263;
 Sat, 10 Oct 2020 02:43:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dXbD=DR=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kR4qv-0001O9-LU
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 02:43:23 +0000
Received: from bedivere.hansenpartnership.com (unknown [66.63.167.143])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3434b3d-9823-4393-8356-dd35c0c1993b;
 Sat, 10 Oct 2020 02:43:19 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 2492A8EE25D;
 Fri,  9 Oct 2020 19:43:17 -0700 (PDT)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id 3vw63n_vJBCB; Fri,  9 Oct 2020 19:43:16 -0700 (PDT)
Received: from jarvis (c-73-35-198-56.hsd1.wa.comcast.net [73.35.198.56])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 0DE4F8EE120;
 Fri,  9 Oct 2020 19:43:14 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dXbD=DR=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
	id 1kR4qv-0001O9-LU
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 02:43:23 +0000
X-Inumbo-ID: f3434b3d-9823-4393-8356-dd35c0c1993b
Received: from bedivere.hansenpartnership.com (unknown [66.63.167.143])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f3434b3d-9823-4393-8356-dd35c0c1993b;
	Sat, 10 Oct 2020 02:43:19 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
	by bedivere.hansenpartnership.com (Postfix) with ESMTP id 2492A8EE25D;
	Fri,  9 Oct 2020 19:43:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=hansenpartnership.com;
	s=20151216; t=1602297797;
	bh=H8TP93foEqN1ktQ8Zvn50X/i2mF1Mo+G3bAPl3laMS8=;
	h=Subject:From:To:Cc:Date:In-Reply-To:References:From;
	b=IKFTIVFzGPRTOXbvF4guFV0idy0d2c5GY0pNpauMNCD265o/8xML504AQiHNNKINI
	 03VTbsdpTlx1IvRxmapAuFqbOyRLRkynjYBLceWgnAFSazyacAGjs/kfFiiin3dxCG
	 9/K3u5C/d7R0kHrIFRdMX77E7sQ6OOI7DSaYWfn4=
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
	by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id 3vw63n_vJBCB; Fri,  9 Oct 2020 19:43:16 -0700 (PDT)
Received: from jarvis (c-73-35-198-56.hsd1.wa.comcast.net [73.35.198.56])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 0DE4F8EE120;
	Fri,  9 Oct 2020 19:43:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=hansenpartnership.com;
	s=20151216; t=1602297796;
	bh=H8TP93foEqN1ktQ8Zvn50X/i2mF1Mo+G3bAPl3laMS8=;
	h=Subject:From:To:Cc:Date:In-Reply-To:References:From;
	b=BgjiThamK3sRChTIWem3VqlcTVcZOY3Z2GNJyIwpezgtDJLGW3IszUxOTE+yoCAWv
	 V48K29iXWIdEyZfVjEsU0FSlh3cTrvJt0EqAPo1Ll9Jsfhk5OlUorelNGWOoKpuPOj
	 SS1Y94jSiSEug+8lBRnZGZxsf1hZ1ZG8/+ovIeYw=
Message-ID: <95d137992900152a0453f7ba37771cb9025121fa.camel@HansenPartnership.com>
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Eric Biggers <ebiggers@kernel.org>, ira.weiny@intel.com
Cc: Andrew Morton <akpm@linux-foundation.org>, Thomas Gleixner
 <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov
 <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>, Peter Zijlstra
 <peterz@infradead.org>, linux-aio@kvack.org, linux-efi@vger.kernel.org, 
 kvm@vger.kernel.org, linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
 Dave Hansen <dave.hansen@linux.intel.com>, dri-devel@lists.freedesktop.org,
 linux-mm@kvack.org,  target-devel@vger.kernel.org,
 linux-mtd@lists.infradead.org,  linux-kselftest@vger.kernel.org,
 samba-technical@lists.samba.org,  ceph-devel@vger.kernel.org,
 drbd-dev@lists.linbit.com,  devel@driverdev.osuosl.org,
 linux-cifs@vger.kernel.org,  linux-nilfs@vger.kernel.org,
 linux-scsi@vger.kernel.org,  linux-nvdimm@lists.01.org,
 linux-rdma@vger.kernel.org, x86@kernel.org,  amd-gfx@lists.freedesktop.org,
 linux-afs@lists.infradead.org,  cluster-devel@redhat.com,
 linux-cachefs@redhat.com,  intel-wired-lan@lists.osuosl.org,
 xen-devel@lists.xenproject.org,  linux-ext4@vger.kernel.org, Fenghua Yu
 <fenghua.yu@intel.com>,  ecryptfs@vger.kernel.org,
 linux-um@lists.infradead.org,  intel-gfx@lists.freedesktop.org,
 linux-erofs@lists.ozlabs.org,  reiserfs-devel@vger.kernel.org,
 linux-block@vger.kernel.org,  linux-bcache@vger.kernel.org, Jaegeuk Kim
 <jaegeuk@kernel.org>, Dan Williams <dan.j.williams@intel.com>,
 io-uring@vger.kernel.org, linux-nfs@vger.kernel.org, 
 linux-ntfs-dev@lists.sourceforge.net, netdev@vger.kernel.org, 
 kexec@lists.infradead.org, linux-kernel@vger.kernel.org, 
 linux-f2fs-devel@lists.sourceforge.net, linux-fsdevel@vger.kernel.org, 
 bpf@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-btrfs@vger.kernel.org
Date: Fri, 09 Oct 2020 19:43:13 -0700
In-Reply-To: <20201009213434.GA839@sol.localdomain>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
	 <20201009195033.3208459-23-ira.weiny@intel.com>
	 <20201009213434.GA839@sol.localdomain>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Fri, 2020-10-09 at 14:34 -0700, Eric Biggers wrote:
> On Fri, Oct 09, 2020 at 12:49:57PM -0700, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> > 
> > The kmap() calls in this FS are localized to a single thread.  To
> > avoid the overhead of global PKRS updates use the new
> > kmap_thread() call.
> > 
> > Cc: Jaegeuk Kim <jaegeuk@kernel.org>
> > Cc: Chao Yu <chao@kernel.org>
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> > ---
> >  fs/f2fs/f2fs.h | 8 ++++----
> >  1 file changed, 4 insertions(+), 4 deletions(-)
> > 
> > diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> > index d9e52a7f3702..ff72a45a577e 100644
> > --- a/fs/f2fs/f2fs.h
> > +++ b/fs/f2fs/f2fs.h
> > @@ -2410,12 +2410,12 @@ static inline struct page
> > *f2fs_pagecache_get_page(
> >  
> >  static inline void f2fs_copy_page(struct page *src, struct page
> > *dst)
> >  {
> > -	char *src_kaddr = kmap(src);
> > -	char *dst_kaddr = kmap(dst);
> > +	char *src_kaddr = kmap_thread(src);
> > +	char *dst_kaddr = kmap_thread(dst);
> >  
> >  	memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);
> > -	kunmap(dst);
> > -	kunmap(src);
> > +	kunmap_thread(dst);
> > +	kunmap_thread(src);
> >  }
> 
> Wouldn't it make more sense to switch cases like this to
> kmap_atomic()?
> The pages are only mapped to do a memcpy(), then they're immediately
> unmapped.

On a VIPT/VIVT architecture, this is horrendously wasteful.  You're
taking something that was mapped at colour c_src and mapping it to a
new address src_kaddr, which is likely a different colour and
necessitates flushing the original c_src.  Then you copy it to
dst_kaddr, which is also likely a different colour from c_dst, so
dst_kaddr has to be flushed on kunmap and c_dst has to be invalidated
on kmap.  What we should have is an architectural primitive for doing
this, something like kmemcopy_arch(dst, src).  PIPT architectures can
implement it as the above (possibly losing kmap if they don't need it),
but VIPT/VIVT architectures can set up a correctly coloured mapping so
they can simply copy from c_src to c_dst without any need to flush, and
the data arrives cache hot at c_dst.

James




From xen-devel-bounces@lists.xenproject.org Sat Oct 10 02:53:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 02:53:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5268.13779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR50S-0002Tu-Gy; Sat, 10 Oct 2020 02:53:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5268.13779; Sat, 10 Oct 2020 02:53:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR50S-0002Tn-DN; Sat, 10 Oct 2020 02:53:12 +0000
Received: by outflank-mailman (input) for mailman id 5268;
 Sat, 10 Oct 2020 02:53:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GxYu=DR=nvidia.com=jhubbard@srs-us1.protection.inumbo.net>)
 id 1kR50R-0002Ti-6g
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 02:53:11 +0000
Received: from hqnvemgate24.nvidia.com (unknown [216.228.121.143])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d4dfed58-1fdb-41f6-84f8-1bdabd39137b;
 Sat, 10 Oct 2020 02:53:10 +0000 (UTC)
Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by
 hqnvemgate24.nvidia.com (using TLS: TLSv1.2, AES256-SHA)
 id <B5f8121a50000>; Fri, 09 Oct 2020 19:51:17 -0700
Received: from [10.2.51.144] (10.124.1.5) by HQMAIL107.nvidia.com
 (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Sat, 10 Oct
 2020 02:53:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GxYu=DR=nvidia.com=jhubbard@srs-us1.protection.inumbo.net>)
	id 1kR50R-0002Ti-6g
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 02:53:11 +0000
X-Inumbo-ID: d4dfed58-1fdb-41f6-84f8-1bdabd39137b
Received: from hqnvemgate24.nvidia.com (unknown [216.228.121.143])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d4dfed58-1fdb-41f6-84f8-1bdabd39137b;
	Sat, 10 Oct 2020 02:53:10 +0000 (UTC)
Received: from hqmail.nvidia.com (Not Verified[216.228.121.13]) by hqnvemgate24.nvidia.com (using TLS: TLSv1.2, AES256-SHA)
	id <B5f8121a50000>; Fri, 09 Oct 2020 19:51:17 -0700
Received: from [10.2.51.144] (10.124.1.5) by HQMAIL107.nvidia.com
 (172.20.187.13) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Sat, 10 Oct
 2020 02:53:07 +0000
Subject: Re: [PATCH RFC PKS/PMEM 57/58] nvdimm/pmem: Stray access protection
 for pmem->virt_addr
To: <ira.weiny@intel.com>, Andrew Morton <akpm@linux-foundation.org>, "Thomas
 Gleixner" <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, "Borislav
 Petkov" <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>, Peter Zijlstra
	<peterz@infradead.org>
CC: <x86@kernel.org>, Dave Hansen <dave.hansen@linux.intel.com>, Dan Williams
	<dan.j.williams@intel.com>, Fenghua Yu <fenghua.yu@intel.com>,
	<linux-doc@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<linux-nvdimm@lists.01.org>, <linux-fsdevel@vger.kernel.org>,
	<linux-mm@kvack.org>, <linux-kselftest@vger.kernel.org>,
	<linuxppc-dev@lists.ozlabs.org>, <kvm@vger.kernel.org>,
	<netdev@vger.kernel.org>, <bpf@vger.kernel.org>, <kexec@lists.infradead.org>,
	<linux-bcache@vger.kernel.org>, <linux-mtd@lists.infradead.org>,
	<devel@driverdev.osuosl.org>, <linux-efi@vger.kernel.org>,
	<linux-mmc@vger.kernel.org>, <linux-scsi@vger.kernel.org>,
	<target-devel@vger.kernel.org>, <linux-nfs@vger.kernel.org>,
	<ceph-devel@vger.kernel.org>, <linux-ext4@vger.kernel.org>,
	<linux-aio@kvack.org>, <io-uring@vger.kernel.org>,
	<linux-erofs@lists.ozlabs.org>, <linux-um@lists.infradead.org>,
	<linux-ntfs-dev@lists.sourceforge.net>, <reiserfs-devel@vger.kernel.org>,
	<linux-f2fs-devel@lists.sourceforge.net>, <linux-nilfs@vger.kernel.org>,
	<cluster-devel@redhat.com>, <ecryptfs@vger.kernel.org>,
	<linux-cifs@vger.kernel.org>, <linux-btrfs@vger.kernel.org>,
	<linux-afs@lists.infradead.org>, <linux-rdma@vger.kernel.org>,
	<amd-gfx@lists.freedesktop.org>, <dri-devel@lists.freedesktop.org>,
	<intel-gfx@lists.freedesktop.org>, <drbd-dev@lists.linbit.com>,
	<linux-block@vger.kernel.org>, <xen-devel@lists.xenproject.org>,
	<linux-cachefs@redhat.com>, <samba-technical@lists.samba.org>,
	<intel-wired-lan@lists.osuosl.org>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-58-ira.weiny@intel.com>
From: John Hubbard <jhubbard@nvidia.com>
Message-ID: <bd3f5ece-0e7b-4c15-abbc-1b3b943334dc@nvidia.com>
Date: Fri, 9 Oct 2020 19:53:07 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201009195033.3208459-58-ira.weiny@intel.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.124.1.5]
X-ClientProxiedBy: HQMAIL105.nvidia.com (172.20.187.12) To
 HQMAIL107.nvidia.com (172.20.187.13)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nvidia.com; s=n1;
	t=1602298277; bh=hKwlK3WolBLUufkeWDHCi6j+X4NXa8gQFiKyGjbOY3s=;
	h=Subject:To:CC:References:From:Message-ID:Date:User-Agent:
	 MIME-Version:In-Reply-To:Content-Type:Content-Language:
	 Content-Transfer-Encoding:X-Originating-IP:X-ClientProxiedBy;
	b=edx7xet+HWktPTMH7LfazMaeZj84i5/Le7BE3m/9xNKYA9bmh246jZEvn48F/uMcW
	 RRn8BggXdwK6EgYw84fvX6LW3WH/wQjijcVtcfekd8K6KkJdzgyiWWhhWRHsAgsUWu
	 ErO3rgTi0L/NWozRhmjxim3aejVQ7k0j+Xmczu6ahvjgHQEdG1f6IxukspiHHh4eDZ
	 vXW13vRjbU9kKvq3xSMRRweChxuwg1Gt9UWgcBiwICYzh7lbhEKe0zR7+e8y/8iSgu
	 9o2Hr0ioZGbS1OLU7+SDtKlkAPg/fZiUxFefl5DnOwU2H5+VRhK6GCiKvAXlVbpTMd
	 H8SS1utYxNiTQ==

On 10/9/20 12:50 PM, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> The pmem driver uses a cached virtual address to access its memory
> directly.  Because the nvdimm driver is well aware of the special
> protections it has mapped memory with, we call dev_access_[en|dis]able()
> around the direct pmem->virt_addr (pmem_addr) usage instead of the
> unnecessary overhead of trying to get a page to kmap.
> 
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> ---
>   drivers/nvdimm/pmem.c | 4 ++++
>   1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index fab29b514372..e4dc1ae990fc 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -148,7 +148,9 @@ static blk_status_t pmem_do_read(struct pmem_device *pmem,
>   	if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
>   		return BLK_STS_IOERR;
>   
> +	dev_access_enable(false);
>   	rc = read_pmem(page, page_off, pmem_addr, len);
> +	dev_access_disable(false);

Hi Ira!

The APIs should be tweaked to use a symbol (GLOBAL, PER_THREAD) instead
of true/false. Try reading the above and you'll see that it sounds like
it's doing the opposite of what it is ("enable_this(false)" sounds like
a clumsy API design to *disable*, right?). And there is no hint about
the scope.

And it *could* be so much more readable like this:

     dev_access_enable(DEV_ACCESS_THIS_THREAD);



thanks,
-- 
John Hubbard
NVIDIA


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 03:31:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 03:31:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5272.13795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR5aq-0006Vd-GX; Sat, 10 Oct 2020 03:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5272.13795; Sat, 10 Oct 2020 03:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR5aq-0006VW-Bn; Sat, 10 Oct 2020 03:30:48 +0000
Received: by outflank-mailman (input) for mailman id 5272;
 Sat, 10 Oct 2020 03:30:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kR5ap-0006VP-GC
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 03:30:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de2b42cb-1db6-4083-8825-8b361d1f7268;
 Sat, 10 Oct 2020 03:30:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR5am-0007Wf-46; Sat, 10 Oct 2020 03:30:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR5al-000703-Rz; Sat, 10 Oct 2020 03:30:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kR5al-0000h4-RV; Sat, 10 Oct 2020 03:30:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kR5ap-0006VP-GC
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 03:30:47 +0000
X-Inumbo-ID: de2b42cb-1db6-4083-8825-8b361d1f7268
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id de2b42cb-1db6-4083-8825-8b361d1f7268;
	Sat, 10 Oct 2020 03:30:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QBil6nd55hbve0YywmbH/EZ30NukHKWxb7pyLskOXzA=; b=eB7OZ7i6t9xwc3vhshau0EtLQE
	jj/eG5hOx1lOIBHeDSrTHqHfwFymFcu1KMtP5ZixZ21g28CtW54uV/6RxftNCN1PyJTwPLvvy76yP
	mRgzjcvkuqJ89WARR4WNnyVxWRUOFeX/oBrREZPvrwx4yNfplMMa6nOrAGf7ak8lr96E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kR5am-0007Wf-46; Sat, 10 Oct 2020 03:30:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kR5al-000703-Rz; Sat, 10 Oct 2020 03:30:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kR5al-0000h4-RV; Sat, 10 Oct 2020 03:30:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155622-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155622: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 03:30:43 +0000

flight 155622 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155622/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    1 days
Testing same since   155612  2020-10-09 18:01:22 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 03:43:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 03:43:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5276.13809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR5mv-0007dw-KQ; Sat, 10 Oct 2020 03:43:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5276.13809; Sat, 10 Oct 2020 03:43:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR5mv-0007dp-Gw; Sat, 10 Oct 2020 03:43:17 +0000
Received: by outflank-mailman (input) for mailman id 5276;
 Sat, 10 Oct 2020 03:43:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Kyxi=DR=xmission.com=ebiederm@srs-us1.protection.inumbo.net>)
 id 1kR5mu-0007dk-Ay
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 03:43:16 +0000
Received: from out01.mta.xmission.com (unknown [166.70.13.231])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfde5a33-e930-4dc7-83bd-c708f08867f0;
 Sat, 10 Oct 2020 03:43:14 +0000 (UTC)
Received: from in02.mta.xmission.com ([166.70.13.52])
 by out01.mta.xmission.com with esmtps (TLS1.2) tls
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.93)
 (envelope-from <ebiederm@xmission.com>)
 id 1kR5ma-003qSe-1R; Fri, 09 Oct 2020 21:42:56 -0600
Received: from ip68-227-160-95.om.om.cox.net ([68.227.160.95]
 helo=x220.xmission.com) by in02.mta.xmission.com with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.87)
 (envelope-from <ebiederm@xmission.com>)
 id 1kR5mZ-0002tO-03; Fri, 09 Oct 2020 21:42:55 -0600
X-Inumbo-ID: dfde5a33-e930-4dc7-83bd-c708f08867f0
From: ebiederm@xmission.com (Eric W. Biederman)
To: ira.weiny@intel.com
Cc: Andrew Morton <akpm@linux-foundation.org>,  Thomas Gleixner
 <tglx@linutronix.de>,  Ingo Molnar <mingo@redhat.com>,  Borislav Petkov
 <bp@alien8.de>,  Andy Lutomirski <luto@kernel.org>,  Peter Zijlstra
 <peterz@infradead.org>,  x86@kernel.org,  Dave Hansen
 <dave.hansen@linux.intel.com>,  Dan Williams <dan.j.williams@intel.com>,
  Fenghua Yu <fenghua.yu@intel.com>,  linux-doc@vger.kernel.org,
  linux-kernel@vger.kernel.org,  linux-nvdimm@lists.01.org,
  linux-fsdevel@vger.kernel.org,  linux-mm@kvack.org,
  linux-kselftest@vger.kernel.org,  linuxppc-dev@lists.ozlabs.org,
  kvm@vger.kernel.org,  netdev@vger.kernel.org,  bpf@vger.kernel.org,
  kexec@lists.infradead.org,  linux-bcache@vger.kernel.org,
  linux-mtd@lists.infradead.org,  devel@driverdev.osuosl.org,
  linux-efi@vger.kernel.org,  linux-mmc@vger.kernel.org,
  linux-scsi@vger.kernel.org,  target-devel@vger.kernel.org,
  linux-nfs@vger.kernel.org,  ceph-devel@vger.kernel.org,
  linux-ext4@vger.kernel.org,  linux-aio@kvack.org,
  io-uring@vger.kernel.org,  linux-erofs@lists.ozlabs.org,
  linux-um@lists.infradead.org,  linux-ntfs-dev@lists.sourceforge.net,
  reiserfs-devel@vger.kernel.org,  linux-f2fs-devel@lists.sourceforge.net,
  linux-nilfs@vger.kernel.org,  cluster-devel@redhat.com,
  ecryptfs@vger.kernel.org,  linux-cifs@vger.kernel.org,
  linux-btrfs@vger.kernel.org,  linux-afs@lists.infradead.org,
  linux-rdma@vger.kernel.org,  amd-gfx@lists.freedesktop.org,
  dri-devel@lists.freedesktop.org,  intel-gfx@lists.freedesktop.org,
  drbd-dev@lists.linbit.com,  linux-block@vger.kernel.org,
  xen-devel@lists.xenproject.org,  linux-cachefs@redhat.com,
  samba-technical@lists.samba.org,  intel-wired-lan@lists.osuosl.org
References: <20201009195033.3208459-1-ira.weiny@intel.com>
	<20201009195033.3208459-52-ira.weiny@intel.com>
Date: Fri, 09 Oct 2020 22:43:15 -0500
In-Reply-To: <20201009195033.3208459-52-ira.weiny@intel.com> (ira weiny's
	message of "Fri, 9 Oct 2020 12:50:26 -0700")
Message-ID: <87k0vysq3w.fsf@x220.int.ebiederm.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.1 (gnu/linux)
MIME-Version: 1.0
Content-Type: text/plain
X-XM-SPF: eid=1kR5mZ-0002tO-03;;;mid=<87k0vysq3w.fsf@x220.int.ebiederm.org>;;;hst=in02.mta.xmission.com;;;ip=68.227.160.95;;;frm=ebiederm@xmission.com;;;spf=neutral
X-XM-AID: U2FsdGVkX1+M6Nwc1eevosTTnX6IxBw6BnHTGm05YjI=
X-SA-Exim-Connect-IP: 68.227.160.95
X-SA-Exim-Mail-From: ebiederm@xmission.com
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on sa05.xmission.com
X-Spam-Level: *
X-Spam-Status: No, score=1.5 required=8.0 tests=ALL_TRUSTED,BAYES_50,
	DCC_CHECK_NEGATIVE,T_TM2_M_HEADER_IN_MSG,T_TooManySym_01,
	T_TooManySym_02,T_TooManySym_03,T_XMDrugObfuBody_08,XMSubLong
	autolearn=disabled version=3.4.2
X-Spam-Report: 
	* -1.0 ALL_TRUSTED Passed through trusted hosts only via SMTP
	*  0.8 BAYES_50 BODY: Bayes spam probability is 40 to 60%
	*      [score: 0.5000]
	*  0.7 XMSubLong Long Subject
	*  0.0 T_TM2_M_HEADER_IN_MSG BODY: No description available.
	* -0.0 DCC_CHECK_NEGATIVE Not listed in DCC
	*      [sa05 1397; Body=1 Fuz1=1 Fuz2=1]
	*  0.0 T_TooManySym_01 4+ unique symbols in subject
	*  1.0 T_XMDrugObfuBody_08 obfuscated drug references
	*  0.0 T_TooManySym_02 5+ unique symbols in subject
	*  0.0 T_TooManySym_03 6+ unique symbols in subject
X-Spam-DCC: XMission; sa05 1397; Body=1 Fuz1=1 Fuz2=1 
X-Spam-Combo: *;ira.weiny@intel.com
X-Spam-Relay-Country: 
X-Spam-Timing: total 512 ms - load_scoreonly_sql: 0.07 (0.0%),
	signal_user_changed: 13 (2.5%), b_tie_ro: 11 (2.2%), parse: 1.72
	(0.3%), extract_message_metadata: 23 (4.4%), get_uri_detail_list: 2.3
	(0.5%), tests_pri_-1000: 25 (4.8%), tests_pri_-950: 1.65 (0.3%),
	tests_pri_-900: 1.39 (0.3%), tests_pri_-90: 81 (15.8%), check_bayes:
	78 (15.3%), b_tokenize: 16 (3.2%), b_tok_get_all: 9 (1.8%),
	b_comp_prob: 2.3 (0.5%), b_tok_touch_all: 47 (9.1%), b_finish: 1.06
	(0.2%), tests_pri_0: 332 (64.8%), check_dkim_signature: 0.75 (0.1%),
	check_dkim_adsp: 18 (3.5%), poll_dns_idle: 0.34 (0.1%), tests_pri_10:
	4.5 (0.9%), tests_pri_500: 25 (5.0%), rewrite_mail: 0.00 (0.0%)
Subject: Re: [PATCH RFC PKS/PMEM 51/58] kernel: Utilize new kmap_thread()
X-Spam-Flag: No
X-SA-Exim-Version: 4.2.1 (built Thu, 05 May 2016 13:38:54 -0600)
X-SA-Exim-Scanned: Yes (on in02.mta.xmission.com)

ira.weiny@intel.com writes:

> From: Ira Weiny <ira.weiny@intel.com>
>
> This kmap() call is localized to a single thread.  To avoid the
> overhead of global PKRS updates, use the new kmap_thread() call.

Acked-by: "Eric W. Biederman" <ebiederm@xmission.com>

>
> Cc: Eric Biederman <ebiederm@xmission.com>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> ---
>  kernel/kexec_core.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> index c19c0dad1ebe..272a9920c0d6 100644
> --- a/kernel/kexec_core.c
> +++ b/kernel/kexec_core.c
> @@ -815,7 +815,7 @@ static int kimage_load_normal_segment(struct kimage *image,
>  		if (result < 0)
>  			goto out;
>  
> -		ptr = kmap(page);
> +		ptr = kmap_thread(page);
>  		/* Start with a clear page */
>  		clear_page(ptr);
>  		ptr += maddr & ~PAGE_MASK;
> @@ -828,7 +828,7 @@ static int kimage_load_normal_segment(struct kimage *image,
>  			memcpy(ptr, kbuf, uchunk);
>  		else
>  			result = copy_from_user(ptr, buf, uchunk);
> -		kunmap(page);
> +		kunmap_thread(page);
>  		if (result) {
>  			result = -EFAULT;
>  			goto out;
> @@ -879,7 +879,7 @@ static int kimage_load_crash_segment(struct kimage *image,
>  			goto out;
>  		}
>  		arch_kexec_post_alloc_pages(page_address(page), 1, 0);
> -		ptr = kmap(page);
> +		ptr = kmap_thread(page);
>  		ptr += maddr & ~PAGE_MASK;
>  		mchunk = min_t(size_t, mbytes,
>  				PAGE_SIZE - (maddr & ~PAGE_MASK));
> @@ -895,7 +895,7 @@ static int kimage_load_crash_segment(struct kimage *image,
>  		else
>  			result = copy_from_user(ptr, buf, uchunk);
>  		kexec_flush_icache_page(page);
> -		kunmap(page);
> +		kunmap_thread(page);
>  		arch_kexec_pre_free_pages(page_address(page), 1);
>  		if (result) {
>  			result = -EFAULT;
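
For illustration, the conversion above keeps the exact map/clear/copy/unmap
shape of the original code; only the mapping primitive changes. The sketch
below mirrors the kimage_load_normal_segment() inner loop in user space.
Note that kmap_thread()/kunmap_thread() and struct page here are hypothetical
stand-ins for the kernel APIs (the "mapping" is just the page's backing
storage), not the real implementations:

```c
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Hypothetical stand-in for the kernel's struct page: bare storage. */
struct page { unsigned char data[PAGE_SIZE]; };

/* Stand-ins for kmap_thread()/kunmap_thread(); in the kernel the
 * returned mapping is only guaranteed valid in the calling thread. */
static void *kmap_thread(struct page *page)   { return page->data; }
static void  kunmap_thread(struct page *page) { (void)page; }

/* Mirror of the patched loop body: map the page, start with a clear
 * page, then copy one chunk at the sub-page offset of maddr. */
static void load_chunk(struct page *page, unsigned long maddr,
		       const void *buf, size_t uchunk)
{
	unsigned char *ptr = kmap_thread(page);

	memset(ptr, 0, PAGE_SIZE);	/* clear_page(ptr) in the kernel */
	ptr += maddr & ~PAGE_MASK;	/* offset within the page */
	memcpy(ptr, buf, uchunk);	/* kernel-buffer case of the copy */
	kunmap_thread(page);
}
```

The point of the thread-local variant, per the patch description, is that
protection-key (PKRS) state then only needs updating for the mapping
thread rather than globally.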


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 03:52:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 03:52:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5279.13821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR5vX-0000I9-G6; Sat, 10 Oct 2020 03:52:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5279.13821; Sat, 10 Oct 2020 03:52:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR5vX-0000I2-Cz; Sat, 10 Oct 2020 03:52:11 +0000
Received: by outflank-mailman (input) for mailman id 5279;
 Sat, 10 Oct 2020 03:52:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kR5vW-0000Hx-Jv
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 03:52:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8f41f071-caef-460b-94aa-34d05002b98b;
 Sat, 10 Oct 2020 03:52:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR5vU-0007yt-0K; Sat, 10 Oct 2020 03:52:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR5vT-0007iJ-Kq; Sat, 10 Oct 2020 03:52:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kR5vT-0008L8-KK; Sat, 10 Oct 2020 03:52:07 +0000
X-Inumbo-ID: 8f41f071-caef-460b-94aa-34d05002b98b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2a+CwLkUKbrf5g9/x92s0I6S3VfrSblAJOOK4cQ8Dj8=; b=BzEfjWw2jIYW7LPRUrFJhgAPVm
	adv0+I1OJrL14Rz4tNBIITLdpzFXkAkotit50q13c1/i4xAhJTyyp77WDDMdtegNrQhrF1yC661qZ
	k2+ptlbCboeTKf6vS5dfZzwbkZqWLCerJyRLqzm034XOhdOP4IHhLIM4+KWk3DeFzK3g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155600-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155600: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
X-Osstest-Versions-That:
    xen=7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 03:52:07 +0000

flight 155600 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155600/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155549
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155549
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155549
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155549
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155549
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155549
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155549
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155549
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155549
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
baseline version:
 xen                  7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a

Last test of basis   155549  2020-10-08 14:31:41 Z    1 days
Testing same since   155600  2020-10-09 13:53:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roman Shaposhnik <roman@zededa.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7a519f8bda..25849c8b16  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9 -> master


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 04:34:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 04:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5253.13886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR6a3-0004r7-Tb; Sat, 10 Oct 2020 04:34:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5253.13886; Sat, 10 Oct 2020 04:34:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR6a3-0004qx-MD; Sat, 10 Oct 2020 04:34:03 +0000
Received: by outflank-mailman (input) for mailman id 5253;
 Sat, 10 Oct 2020 01:30:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nh66=DR=kernel.org=ebiggers@srs-us1.protection.inumbo.net>)
 id 1kR3ib-0002Rw-2y
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 01:30:41 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a389ba4-6669-4d82-8249-0dc5fd61c70f;
 Sat, 10 Oct 2020 01:30:40 +0000 (UTC)
Received: from sol.localdomain (172-10-235-113.lightspeed.sntcca.sbcglobal.net
 [172.10.235.113])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4E006206D9;
 Sat, 10 Oct 2020 01:30:38 +0000 (UTC)
X-Inumbo-ID: 6a389ba4-6669-4d82-8249-0dc5fd61c70f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602293439;
	bh=sHFyDh52gnsvc+uI8yhUC5uDYbYkQCvra+rDiRMmAvs=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=kwaFsg+FwkcbT1hA6vX2rbzC7SEHg0ckV7q6+TfKUuTYp7lijGzsxhx1D9MfT9qZg
	 puzd9VwNUhNC7pSZTswv9TmxeVvsyOKe33XMZMPqVG5rfdvIuh+YhVfWI5Z4mrkh0S
	 V8+sp7izCEY2Jms4cmTCzh86mPIYz7uaZ6k3dzLQ=
Date: Fri, 9 Oct 2020 18:30:36 -0700
From: Eric Biggers <ebiggers@kernel.org>
To: Matthew Wilcox <willy@infradead.org>
Cc: ira.weiny@intel.com, Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201010013036.GD1122@sol.localdomain>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
 <20201010003954.GW20115@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201010003954.GW20115@casper.infradead.org>

On Sat, Oct 10, 2020 at 01:39:54AM +0100, Matthew Wilcox wrote:
> On Fri, Oct 09, 2020 at 02:34:34PM -0700, Eric Biggers wrote:
> > On Fri, Oct 09, 2020 at 12:49:57PM -0700, ira.weiny@intel.com wrote:
> > > The kmap() calls in this FS are localized to a single thread.  To avoid
> > > the overhead of global PKRS updates, use the new kmap_thread() call.
> > >
> > > @@ -2410,12 +2410,12 @@ static inline struct page *f2fs_pagecache_get_page(
> > >  
> > >  static inline void f2fs_copy_page(struct page *src, struct page *dst)
> > >  {
> > > -	char *src_kaddr = kmap(src);
> > > -	char *dst_kaddr = kmap(dst);
> > > +	char *src_kaddr = kmap_thread(src);
> > > +	char *dst_kaddr = kmap_thread(dst);
> > >  
> > >  	memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);
> > > -	kunmap(dst);
> > > -	kunmap(src);
> > > +	kunmap_thread(dst);
> > > +	kunmap_thread(src);
> > >  }
> > 
> > Wouldn't it make more sense to switch cases like this to kmap_atomic()?
> > The pages are only mapped to do a memcpy(), then they're immediately unmapped.
> 
> Maybe you missed the earlier thread from Thomas trying to do something
> similar for rather different reasons ...
> 
> https://lore.kernel.org/lkml/20200919091751.011116649@linutronix.de/

I did miss it.  I'm not subscribed to any of the mailing lists it was sent to.

Anyway, it shouldn't matter.  Patchsets should be standalone, and not require
reading random prior threads on linux-kernel to understand.

And I still don't really understand.  After this patchset, there is still code
nearly identical to the above (doing a temporary mapping just for a memcpy) that
would still be using kmap_atomic().  Is the idea that later, such code will be
converted to use kmap_thread() instead?  If not, why use one over the other?

- Eric


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 04:34:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 04:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5223.13879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR6a3-0004qg-IA; Sat, 10 Oct 2020 04:34:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5223.13879; Sat, 10 Oct 2020 04:34:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR6a3-0004qZ-E6; Sat, 10 Oct 2020 04:34:03 +0000
Received: by outflank-mailman (input) for mailman id 5223;
 Fri, 09 Oct 2020 21:34:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2fBY=DQ=kernel.org=ebiggers@srs-us1.protection.inumbo.net>)
 id 1kR02A-0002Ir-QK
 for xen-devel@lists.xenproject.org; Fri, 09 Oct 2020 21:34:38 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d3fe240-07e7-4666-9565-bbcad7bef198;
 Fri, 09 Oct 2020 21:34:38 +0000 (UTC)
Received: from sol.localdomain (172-10-235-113.lightspeed.sntcca.sbcglobal.net
 [172.10.235.113])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 44F7C21D6C;
 Fri,  9 Oct 2020 21:34:36 +0000 (UTC)
X-Inumbo-ID: 8d3fe240-07e7-4666-9565-bbcad7bef198
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602279277;
	bh=af4HYKUXjfzQEnpRkoqrLOtb/VuZgD2GaaA85DESAB8=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=mAKgXcy5CifgRYN2i2C2c94+Jg5oLVaoZk49jTIFxseeqdMtEOOqP5cL66hXhKNTI
	 eNSQDBkgqgAc1Wd7XIXje+joGiBVI4yatMIDb1+kpBlSLeGvwRiFFGcw+A+0IeryZM
	 wi38bfOOqpr1rBlahq6SJoGGKG4xvvKCLXOhXA9s=
Date: Fri, 9 Oct 2020 14:34:34 -0700
From: Eric Biggers <ebiggers@kernel.org>
To: ira.weiny@intel.com
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201009213434.GA839@sol.localdomain>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201009195033.3208459-23-ira.weiny@intel.com>

On Fri, Oct 09, 2020 at 12:49:57PM -0700, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> The kmap() calls in this FS are localized to a single thread.  To avoid
> the overhead of global PKRS updates, use the new kmap_thread() call.
> 
> Cc: Jaegeuk Kim <jaegeuk@kernel.org>
> Cc: Chao Yu <chao@kernel.org>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> ---
>  fs/f2fs/f2fs.h | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
> index d9e52a7f3702..ff72a45a577e 100644
> --- a/fs/f2fs/f2fs.h
> +++ b/fs/f2fs/f2fs.h
> @@ -2410,12 +2410,12 @@ static inline struct page *f2fs_pagecache_get_page(
>  
>  static inline void f2fs_copy_page(struct page *src, struct page *dst)
>  {
> -	char *src_kaddr = kmap(src);
> -	char *dst_kaddr = kmap(dst);
> +	char *src_kaddr = kmap_thread(src);
> +	char *dst_kaddr = kmap_thread(dst);
>  
>  	memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);
> -	kunmap(dst);
> -	kunmap(src);
> +	kunmap_thread(dst);
> +	kunmap_thread(src);
>  }

Wouldn't it make more sense to switch cases like this to kmap_atomic()?
The pages are only mapped to do a memcpy(), then they're immediately unmapped.

- Eric
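
[For reference, a sketch of the kmap_atomic() variant suggested above.
This is a hedged illustration only, not code from the patch: it assumes
the usual kernel highmem rules, namely that kmap_atomic() disables
preemption and pagefaults (so the section must not sleep), that mappings
nest LIFO, and that kunmap_atomic() takes the mapped address rather than
the struct page (unlike kunmap(), which takes the page):

	static inline void f2fs_copy_page(struct page *src, struct page *dst)
	{
		/* Short-lived mappings: map, copy one page, unmap. */
		char *src_kaddr = kmap_atomic(src);
		char *dst_kaddr = kmap_atomic(dst);

		memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);

		/* kmap_atomic() mappings are stack-like: unmap in
		 * reverse order, passing the returned addresses. */
		kunmap_atomic(dst_kaddr);
		kunmap_atomic(src_kaddr);
	}

The trade-off under discussion: kmap_atomic() is cheap but forbids
sleeping while mapped, whereas kmap()/kmap_thread() permit it at the
cost of a global (or, for kmap_thread(), per-thread) bookkeeping update.]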


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 06:56:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 06:56:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5314.13910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR8nD-0002OA-2U; Sat, 10 Oct 2020 06:55:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5314.13910; Sat, 10 Oct 2020 06:55:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR8nC-0002O3-V8; Sat, 10 Oct 2020 06:55:46 +0000
Received: by outflank-mailman (input) for mailman id 5314;
 Sat, 10 Oct 2020 06:55:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kR8nC-0002Ny-1j
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 06:55:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc59e7c4-fa91-4308-9b02-cd8b94dfd5dc;
 Sat, 10 Oct 2020 06:55:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR8n8-0003tZ-O7; Sat, 10 Oct 2020 06:55:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR8n8-0006LV-Eo; Sat, 10 Oct 2020 06:55:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kR8n8-0004Ff-EH; Sat, 10 Oct 2020 06:55:42 +0000
X-Inumbo-ID: dc59e7c4-fa91-4308-9b02-cd8b94dfd5dc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8kFHZ/rf2WQVVWNZhyxZ8uhwc05OeZKCXzwGlwhkdqc=; b=C1g75Do90dW2ULI8B6MTQn2Sqd
	HoJ27EoQO8uuq1NSGRYF3iPdbJgPvyAdpwenB3iy7qrS3eB2O/pYhJp74V4X27orD3hxfNBlgFZSU
	MDNOQ/v8lQIypXhoRakYCEZkyxNEIARYdFOc1Knc7qwGtImRFACsHHuhBqyaN4ynYo7Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155602-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155602: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:debian-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=583090b1b8232e6eae243a9009699666153a13a9
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 06:55:42 +0000

flight 155602 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155602/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 12 debian-install          fail REGR. vs. 152332
 test-amd64-amd64-libvirt-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                583090b1b8232e6eae243a9009699666153a13a9
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   70 days
Failing since        152366  2020-08-01 20:49:34 Z   69 days  116 attempts
Testing same since   155602  2020-10-09 15:57:02 Z    0 days    1 attempts

------------------------------------------------------------
2507 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 339058 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 07:44:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 07:44:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5324.13932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR9YL-0007Vx-5K; Sat, 10 Oct 2020 07:44:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5324.13932; Sat, 10 Oct 2020 07:44:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR9YL-0007Vq-2L; Sat, 10 Oct 2020 07:44:29 +0000
Received: by outflank-mailman (input) for mailman id 5324;
 Sat, 10 Oct 2020 07:44:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kR9YK-0007Vl-HJ
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 07:44:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c5365368-3d91-425a-8828-d427cfdbf59f;
 Sat, 10 Oct 2020 07:44:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR9YG-0004uF-Oi; Sat, 10 Oct 2020 07:44:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kR9YG-0000C1-GV; Sat, 10 Oct 2020 07:44:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kR9YG-0003hS-Fz; Sat, 10 Oct 2020 07:44:24 +0000
X-Inumbo-ID: c5365368-3d91-425a-8828-d427cfdbf59f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/6ThPTrBTwZxRlwkyTwum+pkLFzVLHx+tMck987xxxI=; b=ttiFaVxKhoJZ1Bdz5bu6hWGGZv
	TyPpLsWfhk2epjpUTiIZQbaMPmuYorGdLUt12eDwx3s3jZzGj+tWLcZ1D2OJNmP+nyKtcI63aOcUJ
	MFtSKwGBO1LbEHMHd7j8NkV48ZL+IRIPx8m4ewRZO+Fim+2G3Ky4SxvUtQxpPtlodcqs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155632-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155632: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 07:44:24 +0000

flight 155632 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155632/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    1 days
Testing same since   155612  2020-10-09 18:01:22 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
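
    [Editor's sketch] The bundling step the commit message describes (objcopy
    adding named sections to xen.efi) can be demonstrated standalone. This is
    an illustration, not the project's build recipe: a dummy ELF object stands
    in for xen.efi so the command is self-contained, and only the section name
    (`.config`) is taken from the commit message; all file names are placeholders.

    ```shell
    # Compile a dummy object to act as the binary being extended.
    printf 'int x;\n' > dummy.c
    cc -c dummy.c -o dummy.o
    # A placeholder configuration file to embed.
    printf 'placeholder dom0 config\n' > xen.cfg
    # Embed the file as a named section; a real unified image would repeat
    # this for .kernel, .ramdisk, .xsm (and .ucode / .dtb per architecture).
    objcopy --add-section .config=xen.cfg dummy.o unified.o
    # Confirm the section is present in the section table.
    objdump -h unified.o | grep '\.config'
    ```

    The loader side then only has to walk its own image's section table at
    boot, which is what lets the single signed file carry every component.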

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
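
    [Editor's sketch] The need_to_free idea above can be shown in a few lines
    of C: record, per buffer, whether it came from the allocator, and free
    only in that case. The struct layout and names here are invented for
    illustration and are not Xen's actual definitions.

    ```c
    #include <assert.h>
    #include <stdbool.h>
    #include <stdlib.h>

    /* Illustrative stand-in for a loaded-file descriptor. */
    struct file {
        void *ptr;
        bool need_to_free;    /* true only if ptr was pool/heap allocated */
    };

    static void file_free(struct file *f)
    {
        if ( f->need_to_free )
            free(f->ptr);
        f->ptr = NULL;
        f->need_to_free = false;
    }

    int main(void)
    {
        static char section[] = "embedded PE section";   /* not heap memory */
        struct file cfg = { section, false };
        struct file kernel = { malloc(16), true };

        file_free(&cfg);      /* must not call free(): ptr is not heap memory */
        file_free(&kernel);   /* releases the allocation */
        assert(cfg.ptr == NULL && kernel.ptr == NULL);
        return 0;
    }
    ```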

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 07:46:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 07:46:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5326.13947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR9Zx-0007dX-IL; Sat, 10 Oct 2020 07:46:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5326.13947; Sat, 10 Oct 2020 07:46:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kR9Zx-0007dQ-FF; Sat, 10 Oct 2020 07:46:09 +0000
Received: by outflank-mailman (input) for mailman id 5326;
 Sat, 10 Oct 2020 07:46:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7bc=DR=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kR9Zw-0007dL-H5
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 07:46:08 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8016231a-0505-4f81-99e4-e19300d88777;
 Sat, 10 Oct 2020 07:46:01 +0000 (UTC)
X-Inumbo-ID: 8016231a-0505-4f81-99e4-e19300d88777
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602315961;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=9rgp/WPws0Jq25znxcrpUTPnAVAtTUiAVNt+qnsvDDI=;
  b=BWTJ0mi7cCufiBHhYcRyqrRGHq4oNQKrhHrSNQ/yjepGUfzAtmLSzMAE
   1PLWY0WWQf4ZSeWPKSRIa/wvo9li5PSSZrDJVd6Gj3JJdP5XysVBlmETM
   c6V60hGBoGcHGvPee/Ce4ReIsshpPdgVhPWLYiEHsn5NPOfCq5WE7l516
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 4dotsxQFiu5/9b0VnskP7iYC+UOVNAw6EckU/ATIiGp7mkSDuwy5KdxGSvRnmoYdZEOPJZ4pGx
 9c+Fg1viDFdoGxkR5DXdpt7a7wvtFujryffA9gm2XXNBjWoOeMJmGgRt+DpK9NtTbx1JNuMFMs
 41Q9Pas5J9Ca98gFwS0l6seEmfoIK6Le49b8HZt+klcLw5UoBSrQ3INonzsD/q5RzCw/an4OZm
 gIYzeKPu5YmcfH4tyaausSJp3+v+8KbIKOzEEE0jBZ/Xc+kDCm2faWCsP1FjQADKqRFpmn8lW5
 n0Q=
X-SBRS: 2.5
X-MesageID: 29051009
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,358,1596513600"; 
   d="scan'208";a="29051009"
Date: Sat, 10 Oct 2020 09:45:25 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH v2 4/4] x86/shadow: refactor shadow_vram_{get,put}_l1e()
Message-ID: <20201010074525.GO19254@Air-de-Roger>
References: <c6b9c903-02eb-d473-86e3-ccb67aff6cd7@suse.com>
 <51515581-19f3-5b7c-a2f9-1a0b11f8283a@suse.com>
 <20201008151556.GL19254@Air-de-Roger>
 <e769e1ae-fd2f-881e-4dcc-3cbf40d6b732@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e769e1ae-fd2f-881e-4dcc-3cbf40d6b732@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 08, 2020 at 04:36:47PM +0100, Andrew Cooper wrote:
> On 08/10/2020 16:15, Roger Pau Monné wrote:
> > On Wed, Sep 16, 2020 at 03:08:40PM +0200, Jan Beulich wrote:
> >> By passing the functions an MFN and flags, only a single instance of
> >                            ^ a
> 
> 'an' is correct.
> 
> an MFN
> a Machine Frame Number
> 
> because the pronunciation changes.  "an" precedes anything with a vowel
> sound, not just vowels themselves.  (Isn't English great...)

Oh great, I think I've been misspelling this myself for a long time.

> >> each is needed; they were pretty large for being inline functions
> >> anyway.
> >>
> >> While moving the code, also adjust coding style and add const where
> >> sensible / possible.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> v2: New.
> >>
> >> --- a/xen/arch/x86/mm/shadow/hvm.c
> >> +++ b/xen/arch/x86/mm/shadow/hvm.c
> >> @@ -903,6 +903,104 @@ int shadow_track_dirty_vram(struct domai
> >>      return rc;
> >>  }
> >>  
> >> +void shadow_vram_get_mfn(mfn_t mfn, unsigned int l1f,
> >> +                         mfn_t sl1mfn, const void *sl1e,
> >> +                         const struct domain *d)
> >> +{
> >> +    unsigned long gfn;
> >> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
> >> +
> >> +    ASSERT(is_hvm_domain(d));
> >> +
> >> +    if ( !dirty_vram /* tracking disabled? */ ||
> >> +         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
> >> +         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
> >> +        return;
> >> +
> >> +    gfn = gfn_x(mfn_to_gfn(d, mfn));
> >> +    /* Page sharing not supported on shadow PTs */
> >> +    BUG_ON(SHARED_M2P(gfn));
> >> +
> >> +    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
> >> +    {
> >> +        unsigned long i = gfn - dirty_vram->begin_pfn;
> >> +        const struct page_info *page = mfn_to_page(mfn);
> >> +
> >> +        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
> >> +            /* Initial guest reference, record it */
> >> +            dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn) |
> >> +                                   PAGE_OFFSET(sl1e);
> >> +    }
> >> +}
> >> +
> >> +void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
> >> +                         mfn_t sl1mfn, const void *sl1e,
> >> +                         const struct domain *d)
> >> +{
> >> +    unsigned long gfn;
> >> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
> >> +
> >> +    ASSERT(is_hvm_domain(d));
> >> +
> >> +    if ( !dirty_vram /* tracking disabled? */ ||
> >> +         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
> >> +         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
> >> +        return;
> >> +
> >> +    gfn = gfn_x(mfn_to_gfn(d, mfn));
> >> +    /* Page sharing not supported on shadow PTs */
> >> +    BUG_ON(SHARED_M2P(gfn));
> >> +
> >> +    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
> >> +    {
> >> +        unsigned long i = gfn - dirty_vram->begin_pfn;
> >> +        const struct page_info *page = mfn_to_page(mfn);
> >> +        bool dirty = false;
> >> +        paddr_t sl1ma = mfn_to_maddr(sl1mfn) | PAGE_OFFSET(sl1e);
> >> +
> >> +        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
> >> +        {
> >> +            /* Last reference */
> >> +            if ( dirty_vram->sl1ma[i] == INVALID_PADDR )
> >> +            {
> >> +                /* We didn't know it was that one, let's say it is dirty */
> >> +                dirty = true;
> >> +            }
> >> +            else
> >> +            {
> >> +                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
> >> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
> >> +                if ( l1f & _PAGE_DIRTY )
> >> +                    dirty = true;
> >> +            }
> >> +        }
> >> +        else
> >> +        {
> >> +            /* We had more than one reference, just consider the page dirty. */
> >> +            dirty = true;
> >> +            /* Check that it's not the one we recorded. */
> >> +            if ( dirty_vram->sl1ma[i] == sl1ma )
> >> +            {
> >> +                /* Too bad, we remembered the wrong one... */
> >> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
> >> +            }
> >> +            else
> >> +            {
> >> +                /*
> >> +                 * Ok, our recorded sl1e is still pointing to this page, let's
> >> +                 * just hope it will remain.
> >> +                 */
> >> +            }
> >> +        }
> >> +
> >> +        if ( dirty )
> >> +        {
> >> +            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
> > Could you use _set_bit here?
> 
> __set_bit() uses 4-byte accesses.  This uses 1-byte accesses.

Right, this is allocated using alloc directly, not the bitmap helper,
and the size is rounded to byte level, not to unsigned int.

> Last I checked, there is a boundary issue at the end of the dirty_bitmap.
> 
> Both Julien and I have considered changing our bit infrastructure to use
> byte accesses, which would make them more generally useful.

Does indeed seem useful to me, as we could safely expand the usage of
the bitmap ops without risking introducing bugs.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 08:14:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 08:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5363.13981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRA0p-00037U-Jj; Sat, 10 Oct 2020 08:13:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5363.13981; Sat, 10 Oct 2020 08:13:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRA0p-00037N-Gd; Sat, 10 Oct 2020 08:13:55 +0000
Received: by outflank-mailman (input) for mailman id 5363;
 Sat, 10 Oct 2020 08:13:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRA0o-00037I-Vc
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 08:13:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01e94bb3-b0a2-466c-a11d-f62370485121;
 Sat, 10 Oct 2020 08:13:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRA0n-00063G-8h; Sat, 10 Oct 2020 08:13:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRA0n-00025u-0Y; Sat, 10 Oct 2020 08:13:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRA0n-0004PF-03; Sat, 10 Oct 2020 08:13:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRA0o-00037I-Vc
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 08:13:55 +0000
X-Inumbo-ID: 01e94bb3-b0a2-466c-a11d-f62370485121
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 01e94bb3-b0a2-466c-a11d-f62370485121;
	Sat, 10 Oct 2020 08:13:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Dnu0m3NjgwSf15RmNcl/+XC5bURswRJIgGouGPVPlpo=; b=ZbbtmdgRznummjlMJxc6VNOna5
	tBf6q72cUiQU2VxgbDq04pCs42WMPNVHhLfpwBsh9hzDsxDqIVh0s/OtSiXIrWBgneOdsV0nwYca0
	0HKnjCnKqYiprSjZRIDuSkB2MzFKULVT6utPNlT7DaEr1B6umpc9yad9REV6n3lo845c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRA0n-00063G-8h; Sat, 10 Oct 2020 08:13:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRA0n-00025u-0Y; Sat, 10 Oct 2020 08:13:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRA0n-0004PF-03; Sat, 10 Oct 2020 08:13:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155617-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155617: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4
X-Osstest-Versions-That:
    ovmf=091ab12b340a05c99ce0e31d29293ce58c7014e2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 08:13:53 +0000

flight 155617 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155617/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4
baseline version:
 ovmf                 091ab12b340a05c99ce0e31d29293ce58c7014e2

Last test of basis   155594  2020-10-09 09:11:36 Z    0 days
Testing same since   155617  2020-10-09 21:42:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Vladimir Olovyannikov <vladimir.olovyannikov@broadcom.com>
  Vladimir Olovyannikov via groups.io <vladimir.olovyannikov=broadcom.com@groups.io>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   091ab12b34..70c2f10fde  70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 08:38:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 08:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5368.13999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRAO1-0005SB-Ih; Sat, 10 Oct 2020 08:37:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5368.13999; Sat, 10 Oct 2020 08:37:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRAO1-0005S4-Fb; Sat, 10 Oct 2020 08:37:53 +0000
Received: by outflank-mailman (input) for mailman id 5368;
 Sat, 10 Oct 2020 08:37:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRAO0-0005Rz-PQ
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 08:37:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7d9d31f-9b20-4656-9981-19504e8b5195;
 Sat, 10 Oct 2020 08:37:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRANv-0006X6-Ta; Sat, 10 Oct 2020 08:37:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRANv-00035b-IP; Sat, 10 Oct 2020 08:37:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRANv-0006El-Hv; Sat, 10 Oct 2020 08:37:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRAO0-0005Rz-PQ
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 08:37:52 +0000
X-Inumbo-ID: e7d9d31f-9b20-4656-9981-19504e8b5195
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e7d9d31f-9b20-4656-9981-19504e8b5195;
	Sat, 10 Oct 2020 08:37:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gIHYLOIxe1MhCVdTxcCWML8ITqQDWU7qh9PPXNDN8jQ=; b=Cf2lmsTb7B5hgkCy/piSIFGFpU
	zBEjgQXKgED8iTfhAnZlsKQpaaP4LlbW3dBNTk9gUZGI2f7zoXWbk2hNZLbxuBRvhAZgRDGvePSb/
	rslB7Jr0I9ETEfhpBjiYSTdDTFUl/xW5NjbLpXUx08GSgH4/UfOr50SHH7czIT/QM1yg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRANv-0006X6-Ta; Sat, 10 Oct 2020 08:37:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRANv-00035b-IP; Sat, 10 Oct 2020 08:37:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRANv-0006El-Hv; Sat, 10 Oct 2020 08:37:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155613-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155613: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 08:37:47 +0000

flight 155613 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155613/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   50 days
Failing since        152659  2020-08-21 14:07:39 Z   49 days   82 attempts
Testing same since   155613  2020-10-09 18:07:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42209 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 09:49:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 09:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5377.14031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRBV4-0004Ci-Di; Sat, 10 Oct 2020 09:49:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5377.14031; Sat, 10 Oct 2020 09:49:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRBV4-0004Cb-AQ; Sat, 10 Oct 2020 09:49:14 +0000
Received: by outflank-mailman (input) for mailman id 5377;
 Sat, 10 Oct 2020 09:49:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rChQ=DR=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kRBV3-0004CL-2K
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 09:49:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c231ea3-8f3f-4e25-bb7f-89702403e16c;
 Sat, 10 Oct 2020 09:49:11 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRBUu-0007yC-Lz; Sat, 10 Oct 2020 09:49:04 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRBUu-0008VS-Cz; Sat, 10 Oct 2020 09:49:04 +0000
X-Inumbo-ID: 0c231ea3-8f3f-4e25-bb7f-89702403e16c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=zrcSwGlRzNlvVNi/gmxjgy1Cp08YtnuvYagKgCui6MY=; b=CyT9xsUEFVELaoRi0j3X0yA1/7
	Zcchqwo4gcH1MGWfR0VXyjgk+zwECuHHfXlavYuek6y94MjPG1IA/dOUG/LWE0W/oHOxn2DF3kc5Y
	gKZRQzxVgrHbN6lo9BaPtDuB9bqlLybrTdGrZ3WZQxYp8IiX4j9y+On4eUU+TXmISQdM=;
Subject: Re: [PATCH 1/4] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200926205542.9261-1-julien@xen.org>
 <20200926205542.9261-2-julien@xen.org>
 <fe055799-de10-891a-bcee-bbb01a8c0b3d@suse.com>
 <b5624bfa-f24b-4c0a-6735-3473892fbd2f@xen.org>
 <a07b59a0-41a3-ee4e-f28a-38499a2a4055@suse.com>
 <0d7d239e-a9ca-394a-9c7c-19f3aead6790@xen.org>
Message-ID: <ef338482-7516-5587-ad7e-72e3bbd5415e@xen.org>
Date: Sat, 10 Oct 2020 10:49:00 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <0d7d239e-a9ca-394a-9c7c-19f3aead6790@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 28/09/2020 11:39, Julien Grall wrote:
> 
> 
> On 28/09/2020 11:09, Jan Beulich wrote:
>> On 28.09.2020 11:58, Julien Grall wrote:
>>> On 28/09/2020 09:18, Jan Beulich wrote:
>>>> On 26.09.2020 22:55, Julien Grall wrote:
>>>>> --- a/xen/arch/x86/acpi/lib.c
>>>>> +++ b/xen/arch/x86/acpi/lib.c
>>>>> @@ -46,6 +46,10 @@ char *__acpi_map_table(paddr_t phys, unsigned 
>>>>> long size)
>>>>>        if ((phys + size) <= (1 * 1024 * 1024))
>>>>>            return __va(phys);
>>>>> +    /* No arch specific implementation after early boot */
>>>>> +    if (system_state >= SYS_STATE_boot)
>>>>> +        return NULL;
>>>>
>>>> Considering the code in context above, the comment isn't entirely
>>>> correct.
>>>
>>> How about "No arch specific implementation after early boot but for the
>>> first 1MB"?
>>
>> That or simply "No further ...".
> 
> I will do that.
> 
>>>>> +{
>>>>> +    unsigned long vaddr = (unsigned long)ptr;
>>>>> +
>>>>> +    if (vaddr >= DIRECTMAP_VIRT_START &&
>>>>> +        vaddr < DIRECTMAP_VIRT_END) {
>>>>> +        ASSERT(!((__pa(ptr) + size - 1) >> 20));
>>>>> +        return true;
>>>>> +    }
>>>>> +
>>>>> +    return (vaddr >= __fix_to_virt(FIX_ACPI_END)) &&
>>>>> +        (vaddr < (__fix_to_virt(FIX_ACPI_BEGIN) + PAGE_SIZE));
>>>>
>>>> Indentation is slightly wrong here.
>>>
>>> This is Linux coding style and therefore is using hard tab. Care to
>>> explain the problem?
>>
>> The two opening parentheses should align with one another,
>> shouldn't they?
> 
> Hmmm... somehow vim wants to indent this way. I am not entirely sure why...

Looking at the Linux codebase, this is the expected indentation. This is 
because the first ( on the first line is not closed until the last ) 
on the second line.

So I will stick with this code.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 09:49:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 09:49:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5375.14016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRBUn-00047m-3D; Sat, 10 Oct 2020 09:48:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5375.14016; Sat, 10 Oct 2020 09:48:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRBUn-00047f-0G; Sat, 10 Oct 2020 09:48:57 +0000
Received: by outflank-mailman (input) for mailman id 5375;
 Sat, 10 Oct 2020 09:48:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRBUl-00047a-ED
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 09:48:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8cafd91a-ed83-4092-ad97-87abf4510e80;
 Sat, 10 Oct 2020 09:48:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRBUg-0007xv-QH; Sat, 10 Oct 2020 09:48:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRBUg-0006RD-I0; Sat, 10 Oct 2020 09:48:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRBUg-0002co-HS; Sat, 10 Oct 2020 09:48:50 +0000
X-Inumbo-ID: 8cafd91a-ed83-4092-ad97-87abf4510e80
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vISBtqgGML9IkUdibea5l1YSDJ7UZ7s72rEepynkPPU=; b=QhDMiT62neogPChKc+jFgAieKy
	Qg3jabSmB6nIcLxtf1IZwvOehhMpMd5ybOdFFwcSrfhnCmKKxvuRgPYY3bI4souuQ8Atc6SE4a2fY
	byjtTknM0z4efzsHL8iI3GYqxKWy8Sovb2Z/P6CTS1OgLPf6wyZoiYetBmC8rjNhPfM4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155634-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155634: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=7382a7c2bef9d0f74a364a13b8b4ec8c08ffd1e5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 09:48:50 +0000

flight 155634 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155634/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              7382a7c2bef9d0f74a364a13b8b4ec8c08ffd1e5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   92 days
Failing since        151818  2020-07-11 04:18:52 Z   91 days   86 attempts
Testing same since   155634  2020-10-10 04:19:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 20775 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 10:05:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 10:05:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5382.14047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRBkI-0006Ck-Uu; Sat, 10 Oct 2020 10:04:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5382.14047; Sat, 10 Oct 2020 10:04:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRBkI-0006Cd-QW; Sat, 10 Oct 2020 10:04:58 +0000
Received: by outflank-mailman (input) for mailman id 5382;
 Sat, 10 Oct 2020 10:04:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rChQ=DR=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kRBkH-0006CY-Kx
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 10:04:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1a7a2b6-90b4-4dcb-b1b1-f272ef0e8049;
 Sat, 10 Oct 2020 10:04:56 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRBk9-0008NM-DJ; Sat, 10 Oct 2020 10:04:49 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRBk9-0001Po-1P; Sat, 10 Oct 2020 10:04:49 +0000
X-Inumbo-ID: e1a7a2b6-90b4-4dcb-b1b1-f272ef0e8049
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=XddWANs5aqstI8ET5ejYVjqwsSs2NPWWYsW3pGqRj+k=; b=bPJq/M7wsTZDfrgy07P/9zTvzq
	MtR8l3RmhkTq2sLQJ+Eu7PamEFFKN4DWHTFKcErvtotfTfmo7DG+oCUmOVsY7JqGah8DcBNkKHEso
	Ieq2jJET5CJ5pSFkoF4tjNmaQYZvJRGedj2WA0ABDq0xnWMDlroE2xrHILlheWHXcn0A=;
Subject: Re: [PATCH 1/4] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
From: Julien Grall <julien@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200926205542.9261-1-julien@xen.org>
 <20200926205542.9261-2-julien@xen.org>
 <fe055799-de10-891a-bcee-bbb01a8c0b3d@suse.com>
 <b5624bfa-f24b-4c0a-6735-3473892fbd2f@xen.org>
 <a07b59a0-41a3-ee4e-f28a-38499a2a4055@suse.com>
 <0d7d239e-a9ca-394a-9c7c-19f3aead6790@xen.org>
 <ef338482-7516-5587-ad7e-72e3bbd5415e@xen.org>
Message-ID: <12a827f3-a382-990c-1111-aa83d38454d0@xen.org>
Date: Sat, 10 Oct 2020 11:04:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <ef338482-7516-5587-ad7e-72e3bbd5415e@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 10/10/2020 10:49, Julien Grall wrote:
> Hi,
> 
> On 28/09/2020 11:39, Julien Grall wrote:
>>
>>
>> On 28/09/2020 11:09, Jan Beulich wrote:
>>> On 28.09.2020 11:58, Julien Grall wrote:
>>>> On 28/09/2020 09:18, Jan Beulich wrote:
>>>>> On 26.09.2020 22:55, Julien Grall wrote:
>>>>>> --- a/xen/arch/x86/acpi/lib.c
>>>>>> +++ b/xen/arch/x86/acpi/lib.c
>>>>>> @@ -46,6 +46,10 @@ char *__acpi_map_table(paddr_t phys, unsigned 
>>>>>> long size)
>>>>>>        if ((phys + size) <= (1 * 1024 * 1024))
>>>>>>            return __va(phys);
>>>>>> +    /* No arch specific implementation after early boot */
>>>>>> +    if (system_state >= SYS_STATE_boot)
>>>>>> +        return NULL;
>>>>>
>>>>> Considering the code in context above, the comment isn't entirely
>>>>> correct.
>>>>
>>>> How about "No arch specific implementation after early boot but for the
>>>> first 1MB"?
>>>
>>> That or simply "No further ...".
>>
>> I will do that.
>>
>>>>>> +{
>>>>>> +    unsigned long vaddr = (unsigned long)ptr;
>>>>>> +
>>>>>> +    if (vaddr >= DIRECTMAP_VIRT_START &&
>>>>>> +        vaddr < DIRECTMAP_VIRT_END) {
>>>>>> +        ASSERT(!((__pa(ptr) + size - 1) >> 20));
>>>>>> +        return true;
>>>>>> +    }
>>>>>> +
>>>>>> +    return (vaddr >= __fix_to_virt(FIX_ACPI_END)) &&
>>>>>> +        (vaddr < (__fix_to_virt(FIX_ACPI_BEGIN) + PAGE_SIZE));
>>>>>
>>>>> Indentation is slightly wrong here.
>>>>
>>>> This is Linux coding style and therefore is using hard tab. Care to
>>>> explain the problem?
>>>
>>> The two opening parentheses should align with one another,
>>> shouldn't they?
>>
>> Hmmm... somehow vim wants to indent this way. I am not entirely sure 
>> why...
> 
> Looking at the Linux codebase, this is the expected indentation. This is 
> because the first ( on the first line is not closed until the last ) 
> on the second line.

Hrmm... I obviously misread the code I wrote... Sorry for the noise.

> 
> So I will stick with this code.
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 10:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 10:40:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5388.14065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRCId-0001nk-Sw; Sat, 10 Oct 2020 10:40:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5388.14065; Sat, 10 Oct 2020 10:40:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRCId-0001nd-PD; Sat, 10 Oct 2020 10:40:27 +0000
Received: by outflank-mailman (input) for mailman id 5388;
 Sat, 10 Oct 2020 10:40:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRCIc-0001nB-EF
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 10:40:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac5c5d55-92a5-44f1-9c98-8697c39f4407;
 Sat, 10 Oct 2020 10:40:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRCIU-0000er-Bl; Sat, 10 Oct 2020 10:40:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRCIU-0008Jo-44; Sat, 10 Oct 2020 10:40:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRCIU-0005is-3Z; Sat, 10 Oct 2020 10:40:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRCIc-0001nB-EF
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 10:40:26 +0000
X-Inumbo-ID: ac5c5d55-92a5-44f1-9c98-8697c39f4407
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ac5c5d55-92a5-44f1-9c98-8697c39f4407;
	Sat, 10 Oct 2020 10:40:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tgMkMJXFm3VKJfJ5UCJHV9c+98KUpCknRqNh1QhcnRI=; b=brgE+aNw8K/aDxVlrx1jWMr3nm
	mA2M/wtR7VkYVZN/dakUzx6TvBhjH3E26dI1+x8mdmanyWO0p/sE0As4uybkh4yXSuBW8pEet8156
	FSgHrGqmz2QLWFy9zktqY778WOSzMUqAb0wP1n/IamOiA0P2DYIqelrWybUO3emEKSOQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRCIU-0000er-Bl; Sat, 10 Oct 2020 10:40:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRCIU-0008Jo-44; Sat, 10 Oct 2020 10:40:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRCIU-0005is-3Z; Sat, 10 Oct 2020 10:40:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155642-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155642: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 10:40:18 +0000

flight 155642 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155642/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    1 days
Testing same since   155612  2020-10-09 18:01:22 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM
    and architecture-specific files, into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user-provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
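    The pe_name_compare() mentioned above has to cope with the PE section
    header storing its name in a fixed 8-byte field that is NUL padded only
    when the name is shorter than 8 bytes, so plain strcmp() is unsafe. A
    simplified sketch of such a comparison (using plain char rather than the
    const CHAR16 the commit describes; this is illustrative, not Xen's
    actual code):

```c
#include <stdbool.h>
#include <string.h>

#define PE_SECTION_NAME_LEN 8

/*
 * Compare a fixed-width PE section name against a NUL-terminated
 * target. The 8-byte field carries no terminator when fully used,
 * so the comparison must be length-bounded.
 */
static bool pe_name_compare(const char sect_name[PE_SECTION_NAME_LEN],
                            const char *name)
{
    size_t len = strlen(name);

    if ( len > PE_SECTION_NAME_LEN )
        return false;

    return memcmp(sect_name, name, len) == 0 &&
           (len == PE_SECTION_NAME_LEN || sect_name[len] == '\0');
}
```

    The final term handles both cases: a name shorter than 8 bytes must be
    followed by a NUL in the header field, while an 8-byte name fills the
    field exactly and needs no terminator.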

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    were allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 11:02:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 11:02:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5392.14076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRCdc-0003x1-Rt; Sat, 10 Oct 2020 11:02:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5392.14076; Sat, 10 Oct 2020 11:02:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRCdc-0003wu-Oc; Sat, 10 Oct 2020 11:02:08 +0000
Received: by outflank-mailman (input) for mailman id 5392;
 Sat, 10 Oct 2020 11:02:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rChQ=DR=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kRCdb-0003wp-KN
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 11:02:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed552af9-784a-4565-b752-1d9778160bd1;
 Sat, 10 Oct 2020 11:02:06 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRCdX-00018v-Go; Sat, 10 Oct 2020 11:02:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRCdX-0006iz-5S; Sat, 10 Oct 2020 11:02:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rChQ=DR=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kRCdb-0003wp-KN
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 11:02:07 +0000
X-Inumbo-ID: ed552af9-784a-4565-b752-1d9778160bd1
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ed552af9-784a-4565-b752-1d9778160bd1;
	Sat, 10 Oct 2020 11:02:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=kZ3T8E90nhe6fT4YSRfw7kez/6rP9Uw708x2JxQe0cU=; b=BPCvCFQnd+71fOxusP21qBpbec
	4TgmvJWYC55VK5BSiNZ/YLxUi5fhH0oRu45C7rXjwqH1wniWBGXvuNJdt1XjHnmGxqHqJtWqmirHN
	b8zS3Tv/YywdkIIMH9F4OwgZXNveVC82L1G+wt5Qlm5+qyTld3NUjXT5dUwu/Ke2pvzw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kRCdX-00018v-Go; Sat, 10 Oct 2020 11:02:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kRCdX-0006iz-5S; Sat, 10 Oct 2020 11:02:03 +0000
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org>
Date: Sat, 10 Oct 2020 12:02:00 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 28/09/2020 07:47, Masami Hiramatsu wrote:
> Hello,

Hi Masami,

> This made progress with my Xen boot on DeveloperBox (
> https://www.96boards.org/product/developerbox/ ) with ACPI.
> 

I have reviewed the patch attached and I have a couple of remarks about it.

The STAO table was originally created to allow a hypervisor to hide 
devices from a controller domain (such as Dom0). If this table is not 
present, then the OS/hypervisor can use any device listed in the ACPI 
tables.

Additionally, the STAO table should never be present in the host ACPI tables.

Therefore, I think the code should not try to find the STAO. Instead, it 
should check whether the SPCR table is present.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 11:42:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 11:42:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5400.14097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRDGn-00082d-2g; Sat, 10 Oct 2020 11:42:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5400.14097; Sat, 10 Oct 2020 11:42:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRDGm-00082W-W6; Sat, 10 Oct 2020 11:42:36 +0000
Received: by outflank-mailman (input) for mailman id 5400;
 Sat, 10 Oct 2020 11:42:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=p5/m=DR=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kRDGl-00082R-Mk
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 11:42:35 +0000
Received: from mail-40133.protonmail.ch (unknown [185.70.40.133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 675bc7dd-ddc9-4f7d-b986-a4d50f077b4b;
 Sat, 10 Oct 2020 11:42:32 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=p5/m=DR=trmm.net=hudson@srs-us1.protection.inumbo.net>)
	id 1kRDGl-00082R-Mk
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 11:42:35 +0000
X-Inumbo-ID: 675bc7dd-ddc9-4f7d-b986-a4d50f077b4b
Received: from mail-40133.protonmail.ch (unknown [185.70.40.133])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 675bc7dd-ddc9-4f7d-b986-a4d50f077b4b;
	Sat, 10 Oct 2020 11:42:32 +0000 (UTC)
Date: Sat, 10 Oct 2020 11:42:25 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=trmm.net;
	s=protonmail; t=1602330151;
	bh=WWQ4gUUJ6h8klFU4fMdnE0RbkA80QkHki4E1xrHzezg=;
	h=Date:To:From:Cc:Reply-To:Subject:In-Reply-To:References:From;
	b=XkSLE3DKU0dRRp49F4z9cCgqeT5MUdHcvSduicf37nVgLGWfm2H3sUHcKOSF5ys0D
	 q498rU/yOxv4R6K+mxvAt5/lEnKlZj6QRw2FuiuA1QCr/OnWxbRLYoCBKK/+dAkZH3
	 Td4uG9O67o6ZIvGCf19SJa/5tvll+6KsCzoEIhrA=
To: Andrew Cooper <andrew.cooper3@citrix.com>
From: Trammell Hudson <hudson@trmm.net>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Reply-To: Trammell Hudson <hudson@trmm.net>
Subject: Re: [xen-unstable-smoke test] 155612: regressions - FAIL
Message-ID: <l13ej-jSgj1tw6_awkBjUgauf1oh4k3PIQavoWsHdhhiH0qLc1hI4x0lK1Sx4S6DseYE2JQ4w1uFwuEgF325BDawQcpOe5sDX95C3MyqXlQ=@trmm.net>
In-Reply-To: <0d3766f0-a1a4-bc86-9372-79b1b65eae47@citrix.com>
References: <osstest-155612-mainreport@xen.org> <0d3766f0-a1a4-bc86-9372-79b1b65eae47@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF shortcircuit=no
	autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

On Friday, October 9, 2020 10:27 PM, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> [...]
> Looks like arm64 is crashing fairly early on boot.
>
> This is probably caused by "efi: Enable booting unified
> hypervisor/kernel/initrd images".

Darn it.  I'm working out how to build and boot qemu aarch64 so
that I can figure out what is going on.

Also, I'm not sure that it is possible to build a unified arm
image right now; objcopy (and all of the obj* tools) say
"File format not recognized" on the xen.efi file.  The MZ file
is not what they are expecting for ARM executables.

--
Trammell



From xen-devel-bounces@lists.xenproject.org Sat Oct 10 11:47:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 11:47:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5396.14109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRDLr-0008LR-Mt; Sat, 10 Oct 2020 11:47:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5396.14109; Sat, 10 Oct 2020 11:47:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRDLr-0008LK-JY; Sat, 10 Oct 2020 11:47:51 +0000
Received: by outflank-mailman (input) for mailman id 5396;
 Sat, 10 Oct 2020 11:37:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3gaG=DR=zurich.ibm.com=bmt@srs-us1.protection.inumbo.net>)
 id 1kRDBM-0007AW-Nj
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 11:37:00 +0000
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.156.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a266619-2e86-464b-8fa5-e785e7d4caeb;
 Sat, 10 Oct 2020 11:36:57 +0000 (UTC)
Received: from pps.filterd (m0098409.ppops.net [127.0.0.1])
 by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09ABWpaV113189
 for <xen-devel@lists.xenproject.org>; Sat, 10 Oct 2020 07:36:57 -0400
Received: from smtp.notes.na.collabserv.com (smtp.notes.na.collabserv.com
 [192.155.248.91])
 by mx0a-001b2d01.pphosted.com with ESMTP id 343b0wh7jk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT)
 for <xen-devel@lists.xenproject.org>; Sat, 10 Oct 2020 07:36:56 -0400
Received: from localhost
 by smtp.notes.na.collabserv.com with smtp.notes.na.collabserv.com ESMTP
 for <xen-devel@lists.xenproject.org> from <BMT@zurich.ibm.com>;
 Sat, 10 Oct 2020 11:36:55 -0000
Received: from us1a3-smtp05.a3.dal06.isc4sb.com (10.146.71.159)
 by smtp.notes.na.collabserv.com (10.106.227.143) with
 smtp.notes.na.collabserv.com ESMTP; Sat, 10 Oct 2020 11:36:50 -0000
Received: from us1a3-mail162.a3.dal06.isc4sb.com ([10.146.71.4])
 by us1a3-smtp05.a3.dal06.isc4sb.com
 with ESMTP id 2020101011364991-175970 ;
 Sat, 10 Oct 2020 11:36:49 +0000 
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3gaG=DR=zurich.ibm.com=bmt@srs-us1.protection.inumbo.net>)
	id 1kRDBM-0007AW-Nj
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 11:37:00 +0000
X-Inumbo-ID: 2a266619-2e86-464b-8fa5-e785e7d4caeb
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.156.1])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2a266619-2e86-464b-8fa5-e785e7d4caeb;
	Sat, 10 Oct 2020 11:36:57 +0000 (UTC)
Received: from pps.filterd (m0098409.ppops.net [127.0.0.1])
	by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 09ABWpaV113189
	for <xen-devel@lists.xenproject.org>; Sat, 10 Oct 2020 07:36:57 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=in-reply-to : subject :
 from : to : cc : date : mime-version : references :
 content-transfer-encoding : content-type : message-id; s=pp1;
 bh=NiDLo4pOxkTOHNh379y2lMt59tGJeK9BGhibE+AHe30=;
 b=fFl+sWZ60CJNW5vj7Uw0QdIFBZaJLDIK+TmFY7+uXdPQEExbAUVKU05YhKg5dS5u6J9M
 gRTcbWK1ehAiHgiW0QiEErv4vh+5MOV1iJMmz3AgZqVrCWi+Nh1nNGlGLl5qNxYu9TT3
 rsErWvzJTnT/TvD/yGmIkQ5cIJnW2dxyfP5KEerqQF50430xLTdRCdzrzGOBj/IVBjpb
 ZIGntFQpbgC01amsKXwfy1cB65UwCiqnbdgLT6Z6Qd+CSe7yV4n3BXy+qSd8xdiYdQTK
 an7SA2yY6csjDSnstIEkYdPhg6mYvKJGwGGY1NoLkcuzdqKyvDxx/5UGYpv3tzPn6Oq9 ZA== 
Received: from smtp.notes.na.collabserv.com (smtp.notes.na.collabserv.com [192.155.248.91])
	by mx0a-001b2d01.pphosted.com with ESMTP id 343b0wh7jk-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT)
	for <xen-devel@lists.xenproject.org>; Sat, 10 Oct 2020 07:36:56 -0400
Received: from localhost
	by smtp.notes.na.collabserv.com with smtp.notes.na.collabserv.com ESMTP
	for <xen-devel@lists.xenproject.org> from <BMT@zurich.ibm.com>;
	Sat, 10 Oct 2020 11:36:55 -0000
Received: from us1a3-smtp05.a3.dal06.isc4sb.com (10.146.71.159)
	by smtp.notes.na.collabserv.com (10.106.227.143) with smtp.notes.na.collabserv.com ESMTP;
	Sat, 10 Oct 2020 11:36:50 -0000
Received: from us1a3-mail162.a3.dal06.isc4sb.com ([10.146.71.4])
          by us1a3-smtp05.a3.dal06.isc4sb.com
          with ESMTP id 2020101011364991-175970 ;
          Sat, 10 Oct 2020 11:36:49 +0000 
In-Reply-To: <20201009195033.3208459-11-ira.weiny@intel.com>
Subject: Re: [PATCH RFC PKS/PMEM 10/58] drivers/rdma: Utilize new kmap_thread()
From: "Bernard Metzler" <BMT@zurich.ibm.com>
To: ira.weiny@intel.com
Cc: "Andrew Morton" <akpm@linux-foundation.org>,
        "Thomas Gleixner"
 <tglx@linutronix.de>,
        "Ingo Molnar" <mingo@redhat.com>, "Borislav Petkov"
 <bp@alien8.de>,
        "Andy Lutomirski" <luto@kernel.org>,
        "Peter Zijlstra"
 <peterz@infradead.org>,
        "Mike Marciniszyn"
 <mike.marciniszyn@intel.com>,
        "Dennis Dalessandro"
 <dennis.dalessandro@intel.com>,
        "Doug Ledford" <dledford@redhat.com>,
        "Jason
 Gunthorpe" <jgg@ziepe.ca>,
        "Faisal Latif" <faisal.latif@intel.com>,
        "Shiraz
 Saleem" <shiraz.saleem@intel.com>, x86@kernel.org,
        "Dave Hansen"
 <dave.hansen@linux.intel.com>,
        "Dan Williams"
 <dan.j.williams@intel.com>,
        "Fenghua Yu"
 <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
        linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
        linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
        linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
        kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
        kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
        linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
        linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
        linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
        linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
        linux-ext4@vger.kernel.org, linux-aio@kvack.org,
        io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
        linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
        reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
        linux-nilfs@vger.kernel.org, cluster-devel@redhat.com,
        ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org,
        linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org,
        linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org,
        dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        drbd-dev@tron.linbit.com, linux-block@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-cachefs@redhat.com,
        samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org
Date: Sat, 10 Oct 2020 11:36:49 +0000
MIME-Version: 1.0
Sensitivity: 
Importance: Normal
X-Priority: 3 (Normal)
References: <20201009195033.3208459-11-ira.weiny@intel.com>,<20201009195033.3208459-1-ira.weiny@intel.com>
X-Mailer: IBM iNotes ($HaikuForm 1054.1) | IBM Domino Build
 SCN1812108_20180501T0841_FP65 April 15, 2020 at 09:48
X-LLNOutbound: False
X-Disclaimed: 59823
X-TNEFEvaluated: 1
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=UTF-8
x-cbid: 20101011-2475-0000-0000-0000044A0339
X-IBM-SpamModules-Scores: BY=0.233045; FL=0; FP=0; FZ=0; HX=0; KW=0; PH=0;
 SC=0.421684; ST=0; TS=0; UL=0; ISC=; MB=0.000000
X-IBM-SpamModules-Versions: BY=3.00013982; HX=3.00000242; KW=3.00000007;
 PH=3.00000004; SC=3.00000295; SDB=6.01447073; UDB=6.00777937; IPR=6.01229775;
 MB=3.00034472; MTD=3.00000008; XFM=3.00000015; UTC=2020-10-10 11:36:54
X-IBM-AV-DETECTION: SAVI=unsuspicious REMOTE=unsuspicious XFE=unused
X-IBM-AV-VERSION: SAVI=2020-10-10 06:57:40 - 6.00011937
x-cbparentid: 20101011-2476-0000-0000-0000DAA5035B
Message-Id: <OF849D92D8.F4735ECA-ON002585FD.003F5F27-002585FD.003FCBD6@notes.na.collabserv.com>
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687
 definitions=2020-10-10_07:2020-10-09,2020-10-10 signatures=0
X-Proofpoint-Spam-Reason: orgsafe

-----ira.weiny@intel.com wrote: -----

>To: "Andrew Morton" <akpm@linux-foundation.org>, "Thomas Gleixner"
><tglx@linutronix.de>, "Ingo Molnar" <mingo@redhat.com>, "Borislav
>Petkov" <bp@alien8.de>, "Andy Lutomirski" <luto@kernel.org>, "Peter
>Zijlstra" <peterz@infradead.org>
>From: ira.weiny@intel.com
>Date: 10/09/2020 09:52PM
>Cc: "Ira Weiny" <ira.weiny@intel.com>, "Mike Marciniszyn"
><mike.marciniszyn@intel.com>, "Dennis Dalessandro"
><dennis.dalessandro@intel.com>, "Doug Ledford" <dledford@redhat.com>,
>"Jason Gunthorpe" <jgg@ziepe.ca>, "Faisal Latif"
><faisal.latif@intel.com>, "Shiraz Saleem" <shiraz.saleem@intel.com>,
>"Bernard Metzler" <bmt@zurich.ibm.com>, x86@kernel.org, "Dave Hansen"
><dave.hansen@linux.intel.com>, "Dan Williams"
><dan.j.williams@intel.com>, "Fenghua Yu" <fenghua.yu@intel.com>,
>linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
>linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
>linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
>linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org,
>netdev@vger.kernel.org, bpf@vger.kernel.org,
>kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
>linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
>linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
>linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
>linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
>linux-ext4@vger.kernel.org, linux-aio@kvack.org,
>io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
>linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
>reiserfs-devel@vger.kernel.org,
>linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
>cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
>linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
>linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
>amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
>intel-gfx@lists.freedesktop.org, drbd-dev@tron.linbit.com,
>linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
>linux-cachefs@redhat.com, samba-technical@lists.samba.org,
>intel-wired-lan@lists.osuosl.org
>Subject: [EXTERNAL] [PATCH RFC PKS/PMEM 10/58] drivers/rdma: Utilize
>new kmap_thread()
>
>From: Ira Weiny <ira.weiny@intel.com>
>
>The kmap() calls in these drivers are localized to a single thread. To
>avoid the overhead of global PKRS updates, use the new kmap_thread()
>call.
>
>Cc: Mike Marciniszyn <mike.marciniszyn@intel.com>
>Cc: Dennis Dalessandro <dennis.dalessandro@intel.com>
>Cc: Doug Ledford <dledford@redhat.com>
>Cc: Jason Gunthorpe <jgg@ziepe.ca>
>Cc: Faisal Latif <faisal.latif@intel.com>
>Cc: Shiraz Saleem <shiraz.saleem@intel.com>
>Cc: Bernard Metzler <bmt@zurich.ibm.com>
>Signed-off-by: Ira Weiny <ira.weiny@intel.com>
>---
> drivers/infiniband/hw/hfi1/sdma.c      |  4 ++--
> drivers/infiniband/hw/i40iw/i40iw_cm.c | 10 +++++-----
> drivers/infiniband/sw/siw/siw_qp_tx.c  | 14 +++++++-------
> 3 files changed, 14 insertions(+), 14 deletions(-)
>
>diff --git a/drivers/infiniband/hw/hfi1/sdma.c
>b/drivers/infiniband/hw/hfi1/sdma.c
>index 04575c9afd61..09d206e3229a 100644
>--- a/drivers/infiniband/hw/hfi1/sdma.c
>+++ b/drivers/infiniband/hw/hfi1/sdma.c
>@@ -3130,7 +3130,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata
>*dd, struct sdma_txreq *tx,
> 		}
>
> 		if (type == SDMA_MAP_PAGE) {
>-			kvaddr = kmap(page);
>+			kvaddr = kmap_thread(page);
> 			kvaddr += offset;
> 		} else if (WARN_ON(!kvaddr)) {
> 			__sdma_txclean(dd, tx);
>@@ -3140,7 +3140,7 @@ int ext_coal_sdma_tx_descs(struct hfi1_devdata
>*dd, struct sdma_txreq *tx,
> 		memcpy(tx->coalesce_buf + tx->coalesce_idx, kvaddr, len);
> 		tx->coalesce_idx += len;
> 		if (type == SDMA_MAP_PAGE)
>-			kunmap(page);
>+			kunmap_thread(page);
>
> 		/* If there is more data, return */
> 		if (tx->tlen - tx->coalesce_idx)
>diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c
>b/drivers/infiniband/hw/i40iw/i40iw_cm.c
>index a3b95805c154..122d7a5642a1 100644
>--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
>+++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
>@@ -3721,7 +3721,7 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct
>iw_cm_conn_param *conn_param)
> 		ibmr->device = iwpd->ibpd.device;
> 		iwqp->lsmm_mr = ibmr;
> 		if (iwqp->page)
>-			iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
>+			iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
> 		dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp,
> 							iwqp->ietf_mem.va,
> 							(accept.size + conn_param->private_data_len),
>@@ -3729,12 +3729,12 @@ int i40iw_accept(struct iw_cm_id *cm_id,
>struct iw_cm_conn_param *conn_param)
>
> 	} else {
> 		if (iwqp->page)
>-			iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
>+			iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
> 		dev->iw_priv_qp_ops->qp_send_lsmm(&iwqp->sc_qp, NULL, 0, 0);
> 	}
>
> 	if (iwqp->page)
>-		kunmap(iwqp->page);
>+		kunmap_thread(iwqp->page);
>
> 	iwqp->cm_id = cm_id;
> 	cm_node->cm_id = cm_id;
>@@ -4102,10 +4102,10 @@ static void i40iw_cm_event_connected(struct
>i40iw_cm_event *event)
> 	i40iw_cm_init_tsa_conn(iwqp, cm_node);
> 	read0 = (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO);
> 	if (iwqp->page)
>-		iwqp->sc_qp.qp_uk.sq_base = kmap(iwqp->page);
>+		iwqp->sc_qp.qp_uk.sq_base = kmap_thread(iwqp->page);
> 	dev->iw_priv_qp_ops->qp_send_rtt(&iwqp->sc_qp, read0);
> 	if (iwqp->page)
>-		kunmap(iwqp->page);
>+		kunmap_thread(iwqp->page);
>
> 	memset(&attr, 0, sizeof(attr));
> 	attr.qp_state = IB_QPS_RTS;
>diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c
>b/drivers/infiniband/sw/siw/siw_qp_tx.c
>index d19d8325588b..4ed37c328d02 100644
>--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
>+++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
>@@ -76,7 +76,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx,
>void *paddr)
> 			if (unlikely(!p))
> 				return -EFAULT;
>
>-			buffer = kmap(p);
>+			buffer = kmap_thread(p);
>
> 			if (likely(PAGE_SIZE - off >= bytes)) {
> 				memcpy(paddr, buffer + off, bytes);
>@@ -84,7 +84,7 @@ static int siw_try_1seg(struct siw_iwarp_tx *c_tx,
>void *paddr)
> 				unsigned long part = bytes - (PAGE_SIZE - off);
>
> 				memcpy(paddr, buffer + off, part);
>-				kunmap(p);
>+				kunmap_thread(p);
>
> 				if (!mem->is_pbl)
> 					p = siw_get_upage(mem->umem,
>@@ -96,10 +96,10 @@ static int siw_try_1seg(struct siw_iwarp_tx
>*c_tx, void *paddr)
> 				if (unlikely(!p))
> 					return -EFAULT;
>
>-				buffer = kmap(p);
>+				buffer = kmap_thread(p);
> 				memcpy(paddr + part, buffer, bytes - part);
> 			}
>-			kunmap(p);
>+			kunmap_thread(p);
> 		}
> 	}
> 	return (int)bytes;
>@@ -505,7 +505,7 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx,
>struct socket *s)
> 				page_array[seg] = p;
>
> 				if (!c_tx->use_sendpage) {
>-					iov[seg].iov_base = kmap(p) + fp_off;
>+					iov[seg].iov_base = kmap_thread(p) + fp_off;
This misses a corresponding kunmap_thread() in siw_unmap_pages()
(please change line 403 in siw_qp_tx.c as well)

Thanks,
Bernard.

> 					iov[seg].iov_len = plen;
>
> 					/* Remember for later kunmap() */
>@@ -518,9 +518,9 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx,
>struct socket *s)
> 							plen);
> 				} else if (do_crc) {
> 					crypto_shash_update(c_tx->mpa_crc_hd,
>-							    kmap(p) + fp_off,
>+							    kmap_thread(p) + fp_off,
> 							    plen);
>-					kunmap(p);
>+					kunmap_thread(p);
> 				}
> 			} else {
> 				u64 va = sge->laddr + sge_off;
>-- 
>2.28.0.rc0.12.gb6a658bd00c9
>
>



From xen-devel-bounces@lists.xenproject.org Sat Oct 10 12:20:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 12:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5410.14121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRDr6-0003dR-Ny; Sat, 10 Oct 2020 12:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5410.14121; Sat, 10 Oct 2020 12:20:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRDr6-0003dK-Ja; Sat, 10 Oct 2020 12:20:08 +0000
Received: by outflank-mailman (input) for mailman id 5410;
 Sat, 10 Oct 2020 12:20:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=p5/m=DR=trmm.net=hudson@srs-us1.protection.inumbo.net>)
 id 1kRDr5-0003dF-AP
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 12:20:07 +0000
Received: from mail-40133.protonmail.ch (unknown [185.70.40.133])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05f8832b-8dc8-45f8-a304-889b28cc54cd;
 Sat, 10 Oct 2020 12:20:05 +0000 (UTC)
Date: Sat, 10 Oct 2020 12:20:00 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=trmm.net;
	s=protonmail; t=1602332404;
	bh=lsIhnCOWcS2GB63jUq+46D0wd98PJU/sk4wtEKJjRgA=;
	h=Date:To:From:Cc:Reply-To:Subject:In-Reply-To:References:From;
	b=Ey1l4XkhHwSBEVGJvuI2Emz9gNIi2omYXgLpHM1AGiD+nc493513yXQ5cd9agFZS4
	 bYK7KGtgbBuB1y3IyO1oh5ZeAVtoMuOfyRTPygWQTDFzLxe02vRPmRtBvyjr4OBcjz
	 RCPxhzXfoTNzdYQfoNeWxZNYTAhMQ/pD2TgIVyQI=
To: Trammell Hudson <hudson@trmm.net>
From: Trammell Hudson <hudson@trmm.net>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Reply-To: Trammell Hudson <hudson@trmm.net>
Subject: Re: [xen-unstable-smoke test] 155612: regressions - FAIL
Message-ID: <KEN3Xd8rbniZ349uB92E_Xej14BVyItkPHvTZ3HnlhQc1w79RbgIbkFWBcNKotWLWXBqb6VkmCdIhgzpWMd8jn77lSkLCH_gSp-ARmzMBUc=@trmm.net>
In-Reply-To: <l13ej-jSgj1tw6_awkBjUgauf1oh4k3PIQavoWsHdhhiH0qLc1hI4x0lK1Sx4S6DseYE2JQ4w1uFwuEgF325BDawQcpOe5sDX95C3MyqXlQ=@trmm.net>
References: <osstest-155612-mainreport@xen.org> <0d3766f0-a1a4-bc86-9372-79b1b65eae47@citrix.com> <l13ej-jSgj1tw6_awkBjUgauf1oh4k3PIQavoWsHdhhiH0qLc1hI4x0lK1Sx4S6DseYE2JQ4w1uFwuEgF325BDawQcpOe5sDX95C3MyqXlQ=@trmm.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF shortcircuit=no
	autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

On Saturday, October 10, 2020 1:42 PM, Trammell Hudson <hudson@trmm.net> wrote:
> On Friday, October 9, 2020 10:27 PM, Andrew Cooper andrew.cooper3@citrix.com wrote:
> > [...]
> > Looks like arm64 is crashing fairly early on boot.
> > This is probably caused by "efi: Enable booting unified
> > hypervisor/kernel/initrd images".
>
> Darn it. I'm working out how to build and boot qemu aarch64 so
> that I can figure out what is going on.

Unfortunately qemu 2.11.1 (Debian 1:2.11+dfsg-1ubuntu7.32)
doesn't replicate this crash using the command line from
https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/qemu-system-aarch64


qemu-system-aarch64 \
        -cpu cortex-a57 \
        -smp 4 -m 4096 \
        -machine virt,gic_version=3 \
        -machine virtualization=true \
        -machine type=virt \
        -display none \
        -serial mon:stdio \
        -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
        -drive if=none,file=xenial-server-cloudimg-arm64-uefi1.img,id=hd0 \
        -device virtio-blk-device,drive=hd0 \
        -boot order=d
[...]
Xen 4.15-unstable (c/s Fri Oct 2 12:30:34 2020 +0200 git:8a62dee9ce) EFI loader
Using configuration file 'BOOTAA64.cfg'
virt-gicv3.dtb: 0x000000013bebe000-0x000000013bece000
kernel: 0x000000013aecd000-0x000000013bd12200
initrd: 0x0000000138ced000-0x000000013aecc3bb
 __  __            _  _    _ ____                     _        _     _
 \ \/ /___ _ __   | || |  / | ___|    _   _ _ __  ___| |_ __ _| |__ | | ___
  \  // _ \ '_ \  | || |_ | |___ \ __| | | | '_ \/ __| __/ _` | '_ \| |/ _ \
  /  \  __/ | | | |__   _|| |___) |__| |_| | | | \__ \ || (_| | |_) | |  __/
 /_/\_\___|_| |_|    |_|(_)_|____/    \__,_|_| |_|___/\__\__,_|_.__/|_|\___|

(XEN) Xen version 4.15-unstable (hudson@) (aarch64-linux-gnu-gcc (Ubuntu/Linaro 7.5.0-3ubuntu1~18.04) 7.5.0) debug=n  Sat Oct 10 08:42:30 CEST 2020
(XEN) Latest ChangeSet: Fri Oct 2 12:30:34 2020 +0200 git:8a62dee9ce
(XEN) Processor: 411fd070: "ARM Limited", variant: 0x1, part 0xd07, rev 0x0

[...]


It makes it all the way into the Linux kernel and user space
without crashing.  Hopefully someone with better access to ARM
hardware can help debug!

--
Trammell


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 12:38:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 12:38:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5414.14136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRE91-0004vZ-Aw; Sat, 10 Oct 2020 12:38:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5414.14136; Sat, 10 Oct 2020 12:38:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRE91-0004vS-7u; Sat, 10 Oct 2020 12:38:39 +0000
Received: by outflank-mailman (input) for mailman id 5414;
 Sat, 10 Oct 2020 12:38:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rChQ=DR=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kRE8z-0004vN-33
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 12:38:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c870a2c5-7b5d-4cca-b04c-5505e36d1f94;
 Sat, 10 Oct 2020 12:38:35 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRE8t-00035p-W7; Sat, 10 Oct 2020 12:38:31 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRE8t-0005nb-Kq; Sat, 10 Oct 2020 12:38:31 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=NLURfznbn98vTc2vw79KDivcTsMqB4j59h8RmSjK4zM=; b=OqvltKQh8+bWdRvkuiwZr7ltcG
	MyQDAqrpaGLnr/FMhTo4nWYnsfsxlASFe8Hn7Lmgv8KZz9h9FaPHRqnx+g4kwvVf7wsKES/EEA8ms
	a2gDf/iYnpRwBMwXAi2I/Oa+Ee7ISJefwT+xas9gtHhmud99O4hMex/kQU0ucFmmuLj4=;
Subject: Re: [xen-unstable-smoke test] 155612: regressions - FAIL
To: Trammell Hudson <hudson@trmm.net>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Bertrand Marquis
 <Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <osstest-155612-mainreport@xen.org>
 <0d3766f0-a1a4-bc86-9372-79b1b65eae47@citrix.com>
 <l13ej-jSgj1tw6_awkBjUgauf1oh4k3PIQavoWsHdhhiH0qLc1hI4x0lK1Sx4S6DseYE2JQ4w1uFwuEgF325BDawQcpOe5sDX95C3MyqXlQ=@trmm.net>
From: Julien Grall <julien@xen.org>
Message-ID: <01c8b669-d77e-75c4-7317-213e32eb2b73@xen.org>
Date: Sat, 10 Oct 2020 13:38:29 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <l13ej-jSgj1tw6_awkBjUgauf1oh4k3PIQavoWsHdhhiH0qLc1hI4x0lK1Sx4S6DseYE2JQ4w1uFwuEgF325BDawQcpOe5sDX95C3MyqXlQ=@trmm.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 10/10/2020 12:42, Trammell Hudson wrote:
> On Friday, October 9, 2020 10:27 PM, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> [...]
>> Looks like arm64 is crashing fairly early on boot.
>>
>> This is probably caused by "efi: Enable booting unified
>> hypervisor/kernel/initrd images".
> 
> Darn it.  I'm working out how to build and boot qemu aarch64 so
> that I can figure out what is going on.

FWIW, in OSSTest, we are chainloading Xen from GRUB. I have tried
chainloading on QEMU but couldn't get it to work so far (even without your
series).

However, I have no trouble booting the GRUB way (i.e. via multiboot).

> 
> Also, I'm not sure that it is possible to build a unified arm
> image right now; objcopy (and all of the obj* tools) say
> "File format not recognized" on the xen.efi file.  The MZ file
> is not what they are expecting for ARM executables.

IIUC, you are trying to add sections into the EFI binary and not the ELF.
Is that correct?

I don't know what x86 is doing, but for Arm, xen.efi (and the Linux Image)
is custom built, so it may lack the information objdump needs to
recognize it.

My knowledge of objdump is fairly limited. If you are interested in fixing
it, I would suggest asking the binutils community what they expect.

We could then adapt so objdump can recognize it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 12:39:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 12:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5415.14149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRE9X-000526-KD; Sat, 10 Oct 2020 12:39:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5415.14149; Sat, 10 Oct 2020 12:39:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRE9X-00051y-H9; Sat, 10 Oct 2020 12:39:11 +0000
Received: by outflank-mailman (input) for mailman id 5415;
 Sat, 10 Oct 2020 12:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rChQ=DR=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kRE9W-00051o-8l
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 12:39:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35dcfc93-3c5b-4fb4-9ddc-99eb783075d9;
 Sat, 10 Oct 2020 12:39:09 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRE9R-00036K-IF; Sat, 10 Oct 2020 12:39:05 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kRE9R-0005p7-8V; Sat, 10 Oct 2020 12:39:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=hRY+s3qEXwJNSarKsS4Kj8GFgWkuSr8f9z69mMdSWxs=; b=Q5YZNdsjeHfW/nZOenhRvnTrJz
	MoMulswsCCETK+w5tOU1fk5u2gj/2zeqrbJcDJNGF2Fa/MCNnazeVlAX2BUed4Jtww9wcJreE6XMu
	z+oCZoCrUnItklIEg95lc6RY9h4s+oeJwWDgUZ3eEmx1OoVLMTiwJoJ6ljhW5mV2FV5Y=;
Subject: Re: [xen-unstable-smoke test] 155612: regressions - FAIL
To: Trammell Hudson <hudson@trmm.net>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Bertrand Marquis
 <Bertrand.Marquis@arm.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <osstest-155612-mainreport@xen.org>
 <0d3766f0-a1a4-bc86-9372-79b1b65eae47@citrix.com>
 <l13ej-jSgj1tw6_awkBjUgauf1oh4k3PIQavoWsHdhhiH0qLc1hI4x0lK1Sx4S6DseYE2JQ4w1uFwuEgF325BDawQcpOe5sDX95C3MyqXlQ=@trmm.net>
 <KEN3Xd8rbniZ349uB92E_Xej14BVyItkPHvTZ3HnlhQc1w79RbgIbkFWBcNKotWLWXBqb6VkmCdIhgzpWMd8jn77lSkLCH_gSp-ARmzMBUc=@trmm.net>
From: Julien Grall <julien@xen.org>
Message-ID: <efc58afe-aec0-3761-a8e1-4f16dfc1216e@xen.org>
Date: Sat, 10 Oct 2020 13:39:02 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <KEN3Xd8rbniZ349uB92E_Xej14BVyItkPHvTZ3HnlhQc1w79RbgIbkFWBcNKotWLWXBqb6VkmCdIhgzpWMd8jn77lSkLCH_gSp-ARmzMBUc=@trmm.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 10/10/2020 13:20, Trammell Hudson wrote:
> On Saturday, October 10, 2020 1:42 PM, Trammell Hudson <hudson@trmm.net> wrote:
>> On Friday, October 9, 2020 10:27 PM, Andrew Cooper andrew.cooper3@citrix.com wrote:
>>> [...]
>>> Looks like arm64 is crashing fairly early on boot.
>>> This is probably caused by "efi: Enable booting unified
>>> hypervisor/kernel/initrd images".
>>
>> Darn it. I'm working out how to build and boot qemu aarch64 so
>> that I can figure out what is going on.
> 
> Unfortunately qemu 2.11.1(Debian 1:2.11+dfsg-1ubuntu7.32)
> doesn't replicate this crash using the command line from
> https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/qemu-system-aarch64

Are you chainloading Xen?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 13:10:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 13:10:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5420.14164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kREdr-0000FQ-6T; Sat, 10 Oct 2020 13:10:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5420.14164; Sat, 10 Oct 2020 13:10:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kREdr-0000FJ-3U; Sat, 10 Oct 2020 13:10:31 +0000
Received: by outflank-mailman (input) for mailman id 5420;
 Sat, 10 Oct 2020 13:10:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kREdp-0000FE-7M
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 13:10:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14bd9c86-4713-4b14-afb6-4d2ea481628f;
 Sat, 10 Oct 2020 13:10:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kREdl-0003kw-95; Sat, 10 Oct 2020 13:10:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kREdl-0008NX-1c; Sat, 10 Oct 2020 13:10:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kREdl-0000DP-17; Sat, 10 Oct 2020 13:10:25 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=9cKEvzK27ujtKBZ/Ri33penMFcwrR4vjG2oLF3EhfPU=; b=BpIi7weXlqG8ljMo6Z5gwW6bQ0
	ANBpvQ05NTiHkjSts9ACC8jrQVxWk44U7GIOIUemSNpukFx6s/lYpQqBtj0fj4YPjYIO63Vew9EnA
	40Pw2P5Rt9p1f3enjgQyb7Fiib0n0kMqtz4cJLgTtufyOLn8LGxguWH006viPwaGrJSA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete test-arm64-arm64-xl-xsm
Message-Id: <E1kREdl-0000DP-17@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 13:10:25 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job test-arm64-arm64-xl-xsm
testid xen-boot

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  8a71d50ed40bfa78c37722dc11995ac2563662c3
  Bug not present: 4dced5df761e36fa2561f6f0f6563b3580d95e7f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155652/


  commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
  Author: Trammell Hudson <hudson@trmm.net>
  Date:   Fri Oct 2 07:18:21 2020 -0400
  
      efi: Enable booting unified hypervisor/kernel/initrd images
      
      This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
      configuration file, the Linux kernel and initrd, as well as the XSM,
      and architectural specific files into a single "unified" EFI executable.
      This allows an administrator to update the components independently
      without requiring rebuilding xen, as well as to replace the components
      in an existing image.
      
      The resulting EFI executable can be invoked directly from the UEFI Boot
      Manager, removing the need to use a separate loader like grub as well
      as removing dependencies on local filesystem access.  And since it is
      a single file, it can be signed and validated by UEFI Secure Boot without
      requiring the shim protocol.
      
      It is inspired by systemd-boot's unified kernel technique and borrows the
      function to locate PE sections from systemd's LGPL'ed code.  During EFI
      boot, Xen looks at its own loaded image to locate the PE sections for
      the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
      (`.ramdisk`), and XSM config (`.xsm`), which are included after building
      xen.efi using objcopy to add named sections for each input file.
      
      For x86, the CPU ucode can be included in a section named `.ucode`,
      which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
      
      On ARM systems the Device Tree can be included in a section named
      `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
      the boot process.
      
      Note that the system will fall back to loading files from disk if
      the named sections do not exist. This allows distributions to continue
      with the status quo if they want a signed kernel + config, while still
      allowing a user provided initrd (which is how the shim protocol currently
      works as well).
      
      This patch also adds constness to the section parameter of
      efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
      changes pe_find_section() to use a const CHAR16 section name,
      and adds pe_name_compare() to match section names.
      
      Signed-off-by: Trammell Hudson <hudson@trmm.net>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      [Fix ARM build by including pe.init.o]
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
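
The packaging step the commit describes (objcopy adding named sections to
xen.efi after the build) can be sketched as a small shell helper. This is a
minimal illustration, not the project's build tooling: the input file names
(xen.cfg, vmlinuz, initrd.img, xsm.cfg) are placeholders, while the section
names (.config, .kernel, .ramdisk, .xsm) are the ones the commit documents.
The helper prints the objcopy command rather than running it, since a real
xen.efi is needed as input.

```shell
#!/bin/sh
# Sketch: bundle components into xen.efi as named PE sections.
# The section names come from the commit message; the file names are
# hypothetical placeholders.
set -eu

build_unified() {
    in_efi=$1; out_efi=$2; shift 2
    cmd="objcopy"
    # Each remaining argument is "section:file".
    for pair in "$@"; do
        sec=${pair%%:*}
        file=${pair#*:}
        cmd="$cmd --add-section .$sec=$file"
    done
    # Print the command instead of executing it, so the sketch is
    # inspectable without a real xen.efi to hand.
    echo "$cmd $in_efi $out_efi"
}

build_unified xen.efi xen-unified.efi \
    config:xen.cfg kernel:vmlinuz ramdisk:initrd.img xsm:xsm.cfg
```

Running the printed command against a real xen.efi would produce a single
signable image; per the commit, any section left out is simply loaded from
disk as before.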


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/test-arm64-arm64-xl-xsm.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/test-arm64-arm64-xl-xsm.xen-boot --summary-out=tmp/155652.bisection-summary --basis-template=155584 --blessings=real,real-bisect xen-unstable-smoke test-arm64-arm64-xl-xsm xen-boot
Searching for failure / basis pass:
 155642 fail [host=laxton0] / 155584 ok.
Failure / basis pass flights: 155642 / 155584
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Basis pass a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9-a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#25849c8b16f2a5b7fcd0a823e80a5f1b590291f9-8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Loaded 5001 nodes in revision graph
Searching for test results:
 155584 pass a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155612 fail a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
 155614 pass a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155616 fail a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
 155615 [host=rochester1]
 155618 pass a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
 155620 [host=rochester1]
 155623 [host=rochester1]
 155624 [host=rochester1]
 155626 [host=rochester1]
 155628 [host=rochester1]
 155622 fail a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
 155629 [host=rochester1]
 155633 pass a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 4dced5df761e36fa2561f6f0f6563b3580d95e7f
 155635 fail a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 8a71d50ed40bfa78c37722dc11995ac2563662c3
 155638 pass a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 4dced5df761e36fa2561f6f0f6563b3580d95e7f
 155632 [host=rochester1]
 155640 fail a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 8a71d50ed40bfa78c37722dc11995ac2563662c3
 155641 [host=rochester1]
 155646 [host=rochester1]
 155642 fail a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
 155648 [host=rochester1]
 155651 pass a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 4dced5df761e36fa2561f6f0f6563b3580d95e7f
 155652 fail a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 8a71d50ed40bfa78c37722dc11995ac2563662c3
Searching for interesting versions
 Result found: flight 155584 (pass), for basis pass
 For basis failure, parent search stopping at a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 4dced5df761e36fa2561f6f0f6563b3580d95e7f, results HASH(0x5640e46200e8) HASH(0x5640e4617da0) HASH(0x5640e4627028)
 For basis failure, parent search stopping at a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 04be2c3a067899a3860fc2c7bc7a1599502ed1c5, results HASH(0x5640e460f2d8)
 For basis failure, parent search stopping at a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9, results HASH(0x5640e460cbe0) HASH(0x5640e46152f0)
 Result found: flight 155612 (fail), for basis failure (at ancestor ~422)
 Repro found: flight 155614 (pass), for basis pass
 Repro found: flight 155616 (fail), for basis failure
 0 revisions at a6c5dd1dbaffe4cc398d8454546ba9246b9a95c9 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea6d3cd1ed79d824e605a70c3626bc437c386260 4dced5df761e36fa2561f6f0f6563b3580d95e7f
No revisions left to test, checking graph state.
 Result found: flight 155633 (pass), for last pass
 Result found: flight 155635 (fail), for first failure
 Repro found: flight 155638 (pass), for last pass
 Repro found: flight 155640 (fail), for first failure
 Repro found: flight 155651 (pass), for last pass
 Repro found: flight 155652 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  8a71d50ed40bfa78c37722dc11995ac2563662c3
  Bug not present: 4dced5df761e36fa2561f6f0f6563b3580d95e7f
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155652/


  commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
  Author: Trammell Hudson <hudson@trmm.net>
  Date:   Fri Oct 2 07:18:21 2020 -0400
  
      efi: Enable booting unified hypervisor/kernel/initrd images
      
      This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
      configuration file, the Linux kernel and initrd, as well as the XSM,
      and architecture-specific files into a single "unified" EFI executable.
      This allows an administrator to update the components independently
      without requiring rebuilding xen, as well as to replace the components
      in an existing image.
      
      The resulting EFI executable can be invoked directly from the UEFI Boot
      Manager, removing the need to use a separate loader like grub as well
      as removing dependencies on local filesystem access.  And since it is
      a single file, it can be signed and validated by UEFI Secure Boot without
      requiring the shim protocol.
      
      It is inspired by systemd-boot's unified kernel technique and borrows the
      function to locate PE sections from systemd's LGPL'ed code.  During EFI
      boot, Xen looks at its own loaded image to locate the PE sections for
      the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
      (`.ramdisk`), and XSM config (`.xsm`), which are included after building
      xen.efi using objcopy to add named sections for each input file.
      
      For x86, the CPU ucode can be included in a section named `.ucode`,
      which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
      
      On ARM systems the Device Tree can be included in a section named
      `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
      the boot process.
      
      Note that the system will fall back to loading files from disk if
      the named sections do not exist. This allows distributions to continue
      with the status quo if they want a signed kernel + config, while still
      allowing a user provided initrd (which is how the shim protocol currently
      works as well).
      
      This patch also adds constness to the section parameter of
      efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
      changes pe_find_section() to use a const CHAR16 section name,
      and adds pe_name_compare() to match section names.
      
      Signed-off-by: Trammell Hudson <hudson@trmm.net>
      Reviewed-by: Jan Beulich <jbeulich@suse.com>
      [Fix ARM build by including pe.init.o]
      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/test-arm64-arm64-xl-xsm.xen-boot.{dot,ps,png,html,svg}.
----------------------------------------
155652: tolerable ALL FAIL

flight 155652 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155652/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                fail baseline untested


jobs:
 test-arm64-arm64-xl-xsm                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Oct 10 13:34:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 13:34:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5423.14178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRF10-0002Il-8B; Sat, 10 Oct 2020 13:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5423.14178; Sat, 10 Oct 2020 13:34:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRF10-0002Ie-56; Sat, 10 Oct 2020 13:34:26 +0000
Received: by outflank-mailman (input) for mailman id 5423;
 Sat, 10 Oct 2020 13:34:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRF0y-0002IZ-Kt
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 13:34:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22a56855-9e1a-467c-a54e-6f74addc7d45;
 Sat, 10 Oct 2020 13:34:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRF0w-0004G9-2g; Sat, 10 Oct 2020 13:34:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRF0v-00013m-Qb; Sat, 10 Oct 2020 13:34:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRF0v-0008NV-Q6; Sat, 10 Oct 2020 13:34:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRF0y-0002IZ-Kt
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 13:34:24 +0000
X-Inumbo-ID: 22a56855-9e1a-467c-a54e-6f74addc7d45
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 22a56855-9e1a-467c-a54e-6f74addc7d45;
	Sat, 10 Oct 2020 13:34:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=G/Us54dO7fubzSLv/2hAeaOWU/0e2bHLUx33Xjl9ehw=; b=dshM9bVO4BQIuerSqbbSuS7Vqf
	ab6jbxI6KZKxzhlFOrQxMx3yZz6pH4TZOGK5peSwF69X6azwq+QlGIcYgpj07dhTb9bwc2hpeCUKQ
	d7YphjsizjiAG3HMw2fr8cbtoSIbDhjRxFdXcaPH+dyg0iil2cqlW9koCruM4UmYGjQk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRF0w-0004G9-2g; Sat, 10 Oct 2020 13:34:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRF0v-00013m-Qb; Sat, 10 Oct 2020 13:34:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRF0v-0008NV-Q6; Sat, 10 Oct 2020 13:34:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155649-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155649: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 13:34:21 +0000

flight 155649 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155649/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    1 days
Testing same since   155612  2020-10-09 18:01:22 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Used explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 15:26:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 15:26:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5442.14218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRGlB-0004WR-VE; Sat, 10 Oct 2020 15:26:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5442.14218; Sat, 10 Oct 2020 15:26:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRGlB-0004WK-S6; Sat, 10 Oct 2020 15:26:13 +0000
Received: by outflank-mailman (input) for mailman id 5442;
 Sat, 10 Oct 2020 15:26:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRGlA-0004Vg-KB
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 15:26:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbeadf75-a6c4-4dce-8924-04018a84e8e8;
 Sat, 10 Oct 2020 15:26:02 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRGkz-0006aF-Ur; Sat, 10 Oct 2020 15:26:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRGkz-0007BD-KR; Sat, 10 Oct 2020 15:26:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRGkz-0008Td-Jv; Sat, 10 Oct 2020 15:26:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRGlA-0004Vg-KB
	for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 15:26:12 +0000
X-Inumbo-ID: cbeadf75-a6c4-4dce-8924-04018a84e8e8
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cbeadf75-a6c4-4dce-8924-04018a84e8e8;
	Sat, 10 Oct 2020 15:26:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mOc/RI4mQYhhQJlRwoiPoscx6SbLR2cI7M8JS6YmvGU=; b=bGZoWZOxIuclW9ChtmpTpEItRY
	QGw7JcQ5VKhuQuRGOmYgfCz/FcwxaG0mmM3TEQMY4ksLRBxfl3vTnMQysNBHDFh9bYaWO59vuZNHD
	rw/4oAcmiues7/ufxyX6pI0hzpr0DS5eqN8nJ00Pd7XIJ/WoWO6yebvpViUfDKNoKpaU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRGkz-0006aF-Ur; Sat, 10 Oct 2020 15:26:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRGkz-0007BD-KR; Sat, 10 Oct 2020 15:26:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRGkz-0008Td-Jv; Sat, 10 Oct 2020 15:26:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155630-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155630: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt-raw:leak-check/check:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 15:26:01 +0000

flight 155630 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155630/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw 20 leak-check/check           fail pass in 155600

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155600
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155600
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155600
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155600
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155600
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155600
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155600
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155600
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155600
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155630  2020-10-10 03:53:58 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Oct 10 16:03:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 16:03:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5470.14255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRHLS-0000nf-BK; Sat, 10 Oct 2020 16:03:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5470.14255; Sat, 10 Oct 2020 16:03:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRHLS-0000nY-8H; Sat, 10 Oct 2020 16:03:42 +0000
Received: by outflank-mailman (input) for mailman id 5470;
 Sat, 10 Oct 2020 16:03:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRHLR-0000nT-KX
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 16:03:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c756dfa-3900-4875-a37d-3cfd0724c35a;
 Sat, 10 Oct 2020 16:03:40 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRHLQ-0007tN-7M; Sat, 10 Oct 2020 16:03:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRHLP-0008LW-WA; Sat, 10 Oct 2020 16:03:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRHLP-0005bp-Vh; Sat, 10 Oct 2020 16:03:39 +0000
X-Inumbo-ID: 7c756dfa-3900-4875-a37d-3cfd0724c35a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V2QuOCLIfQN8+/qjq/Pt98rY5I9UAEyTuc84t1aug18=; b=jD1Ud9JOHMzOrPHObQYj1K6TZh
	lrWJTDvdnumBvzg7SVvG5TvQUx5KYV5zROBwxtYfCXpu+zbhDqwwT4RsX5XogE1tdl6HoEm4n2OFA
	eXDWn3Q+e0tfK+YnyPtCoQTy9daC5ZsyeGqXtW6jCHYKKflXf6ZH9F8LIkPtsn9jxbrk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155643-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155643: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ae511331e0fb1625ba649f377e81e487de3a5531
X-Osstest-Versions-That:
    ovmf=70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 16:03:39 +0000

flight 155643 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155643/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ae511331e0fb1625ba649f377e81e487de3a5531
baseline version:
 ovmf                 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4

Last test of basis   155617  2020-10-09 21:42:06 Z    0 days
Testing same since   155643  2020-10-10 08:15:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  gaoliming <gaoliming@byosoft.com.cn>
  Garrett Kirkendall <garrett.kirkendall@amd.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Malgorzata Kukiello <jacek.kukiello@intel.com>
  Sanyo Wang <sanyo.wang@intel.com>
  Wang, Sanyo <sanyo.wang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   70c2f10fde..ae511331e0  ae511331e0fb1625ba649f377e81e487de3a5531 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 16:59:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 16:59:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5476.14275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRIDB-0005pR-I6; Sat, 10 Oct 2020 16:59:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5476.14275; Sat, 10 Oct 2020 16:59:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRIDB-0005pK-Ev; Sat, 10 Oct 2020 16:59:13 +0000
Received: by outflank-mailman (input) for mailman id 5476;
 Sat, 10 Oct 2020 16:59:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRID9-0005om-N0
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 16:59:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e2fe768-d461-4e30-9105-918352d25e49;
 Sat, 10 Oct 2020 16:59:04 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRID1-0000Yj-NA; Sat, 10 Oct 2020 16:59:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRID1-0001S0-F1; Sat, 10 Oct 2020 16:59:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRID1-0006o8-EZ; Sat, 10 Oct 2020 16:59:03 +0000
X-Inumbo-ID: 5e2fe768-d461-4e30-9105-918352d25e49
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=V0sitg0AsjzQkitZWLR5j39xCLqpG7iH0rCEQy5+NT0=; b=1LX6X2oif8jPKs7GZnPvW5Vsj6
	y91k2K/bWjIVJ/MDCOj+as380lltVxoDa9vWowe2ldQNmCUnzWPVMWGt64ydx2yZnuPtJqz84S/RZ
	qJKdcqT4Ks+ehLrTaCyLp81dAZL4tykmQZGrVSOsYtIYKYEnWB7hIcet1tykgxjJTNZI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-freebsd10-i386
Message-Id: <E1kRID1-0006o8-EZ@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 16:59:03 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-freebsd10-i386
testid guest-start

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1583a389885346208217e8170395624b3aa90480
  Bug not present: a77dabc33bcc36ec348854f23e89e0de22ca045b
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155658/


  commit 1583a389885346208217e8170395624b3aa90480
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Tue Jul 7 10:21:10 2020 +0200
  
      cpus: extract out qtest-specific code to accel/qtest
      
      register a "CpusAccel" interface for qtest as well.
      
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-i386.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.
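[Editorial note: as an illustration only. osstest's actual cs-bisection-step bisects over revision *tuples* spanning the several trees listed above; the sketch below shows only the underlying binary-search idea over a single linear history, with abbreviated commit names standing in for the full hashes reported above.]

```python
def bisect(commits, is_bad):
    """Return the first bad commit in a linear history.

    Assumes commits[0] is known good and commits[-1] is known bad,
    and that is_bad() is monotone (once bad, always bad thereafter).
    """
    lo, hi = 0, len(commits) - 1  # invariant: commits[lo] good, commits[hi] bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid  # bug is at mid or earlier
        else:
            lo = mid  # bug is after mid
    return commits[hi]


# Illustrative history: "a77dabc" = bug not present, "1583a38" = bug introduced
# (abbreviated forms of the revisions named in the report; the intermediate
# "c1".."c3" commits are hypothetical).
history = ["a77dabc", "c1", "c2", "1583a38", "c3"]
first_bad = bisect(history, lambda c: history.index(c) >= 3)
print(first_bad)  # -> 1583a38
```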

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-i386.guest-start --summary-out=tmp/155658.bisection-summary --basis-template=152631 --blessings=real,real-bisect qemu-mainline test-amd64-i386-freebsd10-i386 guest-start
Searching for failure / basis pass:
 155613 fail [host=huxelrebe0] / 155509 [host=elbling0] 155483 [host=chardonnay1] 155434 [host=rimava1] 155318 [host=fiano1] 155184 [host=albana0] 155098 [host=pinot1] 155018 [host=chardonnay0] 154629 [host=elbling0] 154607 [host=huxelrebe1] 154583 [host=pinot0] 154566 [host=elbling1] 154552 [host=rimava1] 154544 [host=fiano0] 154526 ok.
Failure / basis pass flights: 155613 / 154526
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 69e95b9efed520e643b9e5b0573180aa7c5ecaca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7faece69854cbcc593643182581b5d7f99b7dab6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 053a4177817db307ec854356e95b5b350800a216 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#7faece69854cbcc593643182581b5d7f99b7dab6-69e95b9efed520e643b9e5b0573180aa7c5ecaca git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#053a4177817db307ec854356e95b5b350800a216-4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 git://xenbits.xen.org/osstest/seabios.git#155821a1990b6de78dde5f98fa5ab90e802021e0-849c5e50b6f474df6cc113130575bcdccfafcd9e git://xenbits.xen.org/xen.git#baa4d064e91b6d2bcfe400bdf71f83b961e4c28e-7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
Loaded 51989 nodes in revision graph
Searching for test results:
 154485 [host=fiano1]
 154496 [host=albana0]
 154508 [host=chardonnay1]
 154526 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7faece69854cbcc593643182581b5d7f99b7dab6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 053a4177817db307ec854356e95b5b350800a216 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 154544 [host=fiano0]
 154552 [host=rimava1]
 154566 [host=elbling1]
 154583 [host=pinot0]
 154607 [host=huxelrebe1]
 154629 [host=elbling0]
 155018 [host=chardonnay0]
 155098 [host=pinot1]
 155184 [host=albana0]
 155318 [host=fiano1]
 155434 [host=rimava1]
 155483 [host=chardonnay1]
 155509 [host=elbling0]
 155518 fail irrelevant
 155545 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7faece69854cbcc593643182581b5d7f99b7dab6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 053a4177817db307ec854356e95b5b350800a216 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 155580 fail irrelevant
 155581 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 671ad7c4468f795b66b4cd8f376f1b1ce6701b63 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155544 fail irrelevant
 155583 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0ac0b47c44b4be6cbce26777a1a5968cc8f025a5 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155586 fail irrelevant
 155587 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0d2a4545bf7e763984d3ee3e802617544cb7fc7a 849c5e50b6f474df6cc113130575bcdccfafcd9e 59b27f360e3d9dc0378c1288e67a91fa41a77158
 155589 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b23317eec4715aa62de9a6e5490a01122c8eef0e 849c5e50b6f474df6cc113130575bcdccfafcd9e bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
 155591 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625581c2602b5b43e115b779a9a782478e6f92e7 849c5e50b6f474df6cc113130575bcdccfafcd9e bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
 155592 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 37aeb7a28ddbf52dd25dd53ae1b8391bc2287858 849c5e50b6f474df6cc113130575bcdccfafcd9e de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
 155595 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7cd77fb02b9a2117a56fed172f09a1820fcd6b0b 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155597 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5663ac2aa0eafb40411ac4dff85e6ab529c4d199 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 358d57d411ee759a5a9dbf367179a9ac37faf0b3
 155598 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e344ffe73bd77e7067099155cfd8bf42b07ed631 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 c73952831f0fc63a984e0d07dff1d20f8617b81f
 155599 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1553d543ff4b9f91de4ed7743f0cd6e534528b9e 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 358d57d411ee759a5a9dbf367179a9ac37faf0b3
 155601 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 92d09502676678c8ebb1ad830666b323d3c88f9d 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 4bdbf746ac9152e70f264f87db4472707da805ce
 155603 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8138405528c29af2a850cd672a8f8a0b33b7ab40 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 358d57d411ee759a5a9dbf367179a9ac37faf0b3
 155585 fail irrelevant
 155611 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e12a0edafeb5019aac74114b62a4703f79c5c693 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5bcac985498ed83d89666959175ca9c9ed561ae1
 155619 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7faece69854cbcc593643182581b5d7f99b7dab6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 053a4177817db307ec854356e95b5b350800a216 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 155621 fail irrelevant
 155625 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 c640186ec8aae6164123ee38de6409aed69eab12 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6eeea6725a70e6fcb5abba0764496bdab07ddfb3 849c5e50b6f474df6cc113130575bcdccfafcd9e 93508595d588afe9dca087f95200effb7cedc81f
 155627 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d7c5b788295426c1ef48a9ffc3432c51220f69ba 849c5e50b6f474df6cc113130575bcdccfafcd9e 93508595d588afe9dca087f95200effb7cedc81f
 155631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 fda8458bd3a9cb3108ba2f09921b6e3eee0d1bf3 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155636 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e0715f6abce0c04f68d35c4f6df2976ac57379c9 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155637 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8ef39ecfa69f758680280577077e25f4b5be9844 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155613 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 69e95b9efed520e643b9e5b0573180aa7c5ecaca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
 155644 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a77dabc33bcc36ec348854f23e89e0de22ca045b 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155647 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 69e95b9efed520e643b9e5b0573180aa7c5ecaca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 7a519f8bda6f3505a4c1fbf277f002aa0c12ab9a
 155650 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 57038a92bb06111fbee57f56c0231359573e805d 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155653 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1583a389885346208217e8170395624b3aa90480 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155655 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a77dabc33bcc36ec348854f23e89e0de22ca045b 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155656 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1583a389885346208217e8170395624b3aa90480 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155657 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a77dabc33bcc36ec348854f23e89e0de22ca045b 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155658 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1583a389885346208217e8170395624b3aa90480 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
Searching for interesting versions
 Result found: flight 154526 (pass), for basis pass
 Result found: flight 155613 (fail), for basis failure
 Repro found: flight 155619 (pass), for basis pass
 Repro found: flight 155647 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a77dabc33bcc36ec348854f23e89e0de22ca045b 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
No revisions left to test, checking graph state.
 Result found: flight 155644 (pass), for last pass
 Result found: flight 155653 (fail), for first failure
 Repro found: flight 155655 (pass), for last pass
 Repro found: flight 155656 (fail), for first failure
 Repro found: flight 155657 (pass), for last pass
 Repro found: flight 155658 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1583a389885346208217e8170395624b3aa90480
  Bug not present: a77dabc33bcc36ec348854f23e89e0de22ca045b
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155658/


  commit 1583a389885346208217e8170395624b3aa90480
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Tue Jul 7 10:21:10 2020 +0200
  
      cpus: extract out qtest-specific code to accel/qtest
      
      register a "CpusAccel" interface for qtest as well.
      
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

pnmtopng: 244 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-i386.guest-start.{dot,ps,png,html,svg}.
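
The narrowing above is done by osstest's own bisector, but the underlying pass/fail search is the same one `git bisect` automates. Below is a self-contained sketch of that search on a throwaway repository — all repository names, commit subjects, and the `pass`/`fail` markers are illustrative, not anything osstest actually uses:

```shell
# Toy bisection: build a history in which one commit breaks a check,
# then let `git bisect run` locate it automatically.
rm -rf bisect-demo && git init -q bisect-demo
git -C bisect-demo config user.email demo@example.com
git -C bisect-demo config user.name  demo

echo pass > bisect-demo/status
git -C bisect-demo add status
git -C bisect-demo commit -qm 'known good'
good=$(git -C bisect-demo rev-parse HEAD)
git -C bisect-demo commit -q --allow-empty -m 'still fine'
echo fail > bisect-demo/status
git -C bisect-demo commit -qam 'introduce bug'      # the culprit
git -C bisect-demo commit -q --allow-empty -m 'tip'
bad=$(git -C bisect-demo rev-parse HEAD)

# The run command exits 0 on good revisions and non-zero on bad ones,
# mirroring the pass/fail verdicts in the flight log above.
git -C bisect-demo bisect start "$bad" "$good" >/dev/null
git -C bisect-demo bisect run grep -q pass status >/dev/null
git -C bisect-demo bisect log | grep 'first bad commit'
```

The final line should name the 'introduce bug' commit, the analogue of the "Bug introduced" hash reported above.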
----------------------------------------
155658: tolerable ALL FAIL

flight 155658 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155658/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386 13 guest-start           fail baseline untested


jobs:
 test-amd64-i386-freebsd10-i386                               fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Oct 10 17:15:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 17:15:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5482.14289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRISX-0007fV-3a; Sat, 10 Oct 2020 17:15:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5482.14289; Sat, 10 Oct 2020 17:15:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRISX-0007fO-0A; Sat, 10 Oct 2020 17:15:05 +0000
Received: by outflank-mailman (input) for mailman id 5482;
 Sat, 10 Oct 2020 17:15:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRISV-0007fI-EN
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 17:15:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98f41563-c877-49cf-a33b-bd4256db3f30;
 Sat, 10 Oct 2020 17:15:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRISS-0000uG-6Y; Sat, 10 Oct 2020 17:15:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRISS-0001tg-08; Sat, 10 Oct 2020 17:15:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRISR-0001ta-Vu; Sat, 10 Oct 2020 17:14:59 +0000
X-Inumbo-ID: 98f41563-c877-49cf-a33b-bd4256db3f30
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=avR3t6Yoq72tApymDYOdp94C2EtQOLzG+PPhmsx9RIU=; b=6bm7Iz/Gj2dqqRxCCnqBJ8jqXb
	0yQ2JM5uVLlKrPBuCHIWmJQB+UHE6zxTnPSKFbqfbP91oNzhBeZf+uTz+uppRD+lBh5exoqR9sYfm
	o6vILtBHYbNVfd/RjxgAvJYxFLIcQQ5VzaykUoPzGYDnOmlSBKg15v7RDggqcOEZGImU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155654-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155654: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 17:14:59 +0000

flight 155654 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155654/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    1 days
Testing same since   155612  2020-10-09 18:01:22 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Used explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 18:25:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 18:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5522.14344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRJXx-0006Ri-3B; Sat, 10 Oct 2020 18:24:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5522.14344; Sat, 10 Oct 2020 18:24:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRJXw-0006Rb-WD; Sat, 10 Oct 2020 18:24:44 +0000
Received: by outflank-mailman (input) for mailman id 5522;
 Sat, 10 Oct 2020 18:24:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRJXv-0006Qh-Iz
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 18:24:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 641a595d-0a8c-4985-ab62-7cad230eacae;
 Sat, 10 Oct 2020 18:24:36 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRJXo-0002NI-CS; Sat, 10 Oct 2020 18:24:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRJXo-0003kz-2y; Sat, 10 Oct 2020 18:24:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRJXo-0000nl-2W; Sat, 10 Oct 2020 18:24:36 +0000
X-Inumbo-ID: 641a595d-0a8c-4985-ab62-7cad230eacae
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X2fIyhllBBJYGdW11xIvMqQkZ6T5Ccem8iF20Kw1XNg=; b=rOCrPUe7BgbQ4nEi2I3Cc23aKd
	giI/vPL6Ma2jVIUXzLqTm9/8RX6X0wq5boZeXDGQWgn2PP3B6EIKVXSAI8u08h091ziOCYsBDhCYU
	ka9V6rcgJb032+IgWmvWJxMvOOEaeoxefr+5hI+uNSsusWhC/piFVZggsytPTlyKNNrg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155660-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 155660: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=79d9c62fb0e89dabcda6ba265ed89607be2dedc5
X-Osstest-Versions-That:
    xtf=a1bb00c99b92202ee818d9df6464484f89989d80
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 18:24:36 +0000

flight 155660 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155660/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  79d9c62fb0e89dabcda6ba265ed89607be2dedc5
baseline version:
 xtf                  a1bb00c99b92202ee818d9df6464484f89989d80

Last test of basis   154651  2020-09-23 16:09:54 Z   17 days
Testing same since   155660  2020-10-10 17:41:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   a1bb00c..79d9c62  79d9c62fb0e89dabcda6ba265ed89607be2dedc5 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 18:34:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 18:34:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5524.14357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRJh2-0007To-0H; Sat, 10 Oct 2020 18:34:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5524.14357; Sat, 10 Oct 2020 18:34:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRJh1-0007Th-TN; Sat, 10 Oct 2020 18:34:07 +0000
Received: by outflank-mailman (input) for mailman id 5524;
 Sat, 10 Oct 2020 18:34:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GNvj=DR=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1kRJh0-0007Tc-Gv
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 18:34:06 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d7b2ba1-562c-48eb-95b9-66cd1f6d2e79;
 Sat, 10 Oct 2020 18:34:04 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:42036
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1kRJ4o-0004y4-Ql; Sat, 10 Oct 2020 19:54:39 +0200
X-Inumbo-ID: 8d7b2ba1-562c-48eb-95b9-66cd1f6d2e79
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Type:MIME-Version:Date:Message-ID:
	Subject:From:To:Sender:Reply-To:Cc:Content-Transfer-Encoding:Content-ID:
	Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
	:Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
	List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=UenO3/p4N1SqrFPt+SHitRQbon5Doh2QeWcnlnRNnl8=; b=GLVCEho+NEJVmJwRiCQ2CPx1TY
	1Y+1sZJwFSf1vsFXtBxA7W4DA31IiQ820pdl59D7otoc8G9C0Q5kROsfmI5Xe+Vjkrjy2Zw6NQi2b
	b8hrnkTgUeo2qD/urYV6uL+0VG+NZ6tNyL/IT/D/Ekm9yIZGeTV8LgaXyOJQNuJ9Zp5A=;
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Igor Druzhinin <igor.druzhinin@citrix.com>, Jan Beulich <jbeulich@suse.com>
From: Sander Eikelenboom <linux@eikelenboom.it>
Subject: Xen-unstable: can't boot HVM guests, bisected to commit: "hvmloader:
 indicate ACPI tables with "ACPI data" type in e820"
Message-ID: <9293a9e1-e507-4788-5460-d5ec9abc1af9@eikelenboom.it>
Date: Sat, 10 Oct 2020 19:51:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
Content-Type: multipart/mixed;
 boundary="------------C8441FDC7B1161885B47F6B2"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C8441FDC7B1161885B47F6B2
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Hi Igor/Jan,

I tried to update my AMD machine to current xen-unstable, but
unfortunately HVM guests no longer boot after that. A guest keeps
consuming CPU cycles, but I never get to a command prompt (or any
output at all). PVH guests run fine.

Bisection leads to commit:

8efa46516c5f4cf185c8df179812c185d3c27eb6
hvmloader: indicate ACPI tables with "ACPI data" type in e820

I tried xen-unstable with this commit reverted, and with that
everything works fine again.
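For reference, the bisect-and-revert workflow used above can be sketched with git's built-in bisect driver. The repository and failing "test" below are synthetic stand-ins (in practice this runs in the xen.git tree and the test step is booting an HVM guest):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q
git config user.email "editor@example.invalid"
git config user.name "editor"

# Build a toy history of 5 commits; pretend commit 4 introduced the bug.
for i in 1 2 3 4 5; do
  echo "$i" > boot_ok            # stand-in for "does an HVM guest boot?"
  git add boot_ok
  git commit -qm "commit $i"
done

git bisect start HEAD HEAD~4     # HEAD is known bad, HEAD~4 known good
# git bisect run: exit 0 = good, non-zero (except 125) = bad.
result=$(git bisect run sh -c 'test "$(cat boot_ok)" -lt 4')
git bisect reset >/dev/null

echo "$result" | grep "is the first bad commit"
```

Once the offending commit is identified, `git revert <sha>` (or rebuilding with the commit dropped) confirms the bisection result, which is the check performed here.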

I attached the xl-dmesg output.

--
Sander

--------------C8441FDC7B1161885B47F6B2
Content-Type: text/plain; charset=UTF-8;
 name="xl-dmesg-extended-20201010.txt"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="xl-dmesg-extended-20201010.txt"

IF9fICBfXyAgICAgICAgICAgIF8gIF8gICAgXyBfX19fICAgICAgICAgICAgICAgICAgICAg
XyAgICAgICAgXyAgICAgXwogXCBcLyAvX19fIF8gX18gICB8IHx8IHwgIC8gfCBfX198ICAg
IF8gICBfIF8gX18gIF9fX3wgfF8gX18gX3wgfF9fIHwgfCBfX18KICBcICAvLyBfIFwgJ18g
XCAgfCB8fCB8XyB8IHxfX18gXCBfX3wgfCB8IHwgJ18gXC8gX198IF9fLyBfYCB8ICdfIFx8
IHwvIF8gXAogIC8gIFwgIF9fLyB8IHwgfCB8X18gICBffHwgfF9fXykgfF9ffCB8X3wgfCB8
IHwgXF9fIFwgfHwgKF98IHwgfF8pIHwgfCAgX18vCiAvXy9cX1xfX198X3wgfF98ICAgIHxf
fChfKV98X19fXy8gICAgXF9fLF98X3wgfF98X19fL1xfX1xfXyxffF8uX18vfF98XF9fX3wK
CihYRU4pIFswMDAwMDAxYTkzNWQzM2FlXSBYZW4gdmVyc2lvbiA0LjE1LXVuc3RhYmxlIChy
b290QGR5bmRucy5vcmcpIChnY2MgKERlYmlhbiA4LjMuMC02KSA4LjMuMCkgZGVidWc9eSAg
U2F0IE9jdCAxMCAxNzo0Mjo1NiBDRVNUIDIwMjAKKFhFTikgWzAwMDAwMDFhOWEzNTM0NGFd
IExhdGVzdCBDaGFuZ2VTZXQ6IEZyaSBPY3QgMiAxMjozMDozNCAyMDIwICswMjAwIGdpdDo4
YTYyZGVlOWNlCihYRU4pIFswMDAwMDAxYTllOWVmNmEwXSBidWlsZC1pZDogNTE3OTE2MzQ5
YTQ2ZmRhYjJmYWFkZmVmYTcwNTE5MjhhMTU5NDc5NgooWEVOKSBbMDAwMDAwMWFhMjcwOWQ1
M10gQm9vdGxvYWRlcjogR1JVQiAyLjAyK2Rmc2cxLTIwK2RlYjEwdTIKKFhFTikgWzAwMDAw
MDFhYTVhOWNhY2VdIENvbW1hbmQgbGluZTogZG9tMF9tZW09MjA0OE0sbWF4OjIwNDhNIGxv
Z2x2bD1hbGwgZ3Vlc3RfbG9nbHZsPWFsbCBjb25zb2xlX3RpbWVzdGFtcHM9ZGF0ZW1zIHZn
YT1nZngtMTI4MHgxMDI0eDMyIG5vLWNwdWlkbGUgY29tMT0zODQwMCw4bjEgY29uc29sZT12
Z2EsY29tMSBpdnJzX2lvYXBpY1s2XT0wMDoxNC4wIGlvbW11PW9uLHZlcmJvc2UsZGVidWcg
Y29ucmluZ19zaXplPTEyOGsgdWNvZGU9c2NhbiBzY2hlZD1jcmVkaXQyIGdudHRhYl9tYXhf
ZnJhbWVzPTY0IHJlYm9vdD1hCihYRU4pIFswMDAwMDAxYWI0YTNjOWYzXSBYZW4gaW1hZ2Ug
bG9hZCBiYXNlIGFkZHJlc3M6IDAKKFhFTikgWzAwMDAwMDFhYjc3NzVhZTNdIFZpZGVvIGlu
Zm9ybWF0aW9uOgooWEVOKSBbMDAwMDAwMWFiOWIyYTMwNl0gIFZHQSBpcyBncmFwaGljcyBt
b2RlIDEyODB4MTAyNCwgMzIgYnBwCihYRU4pIFswMDAwMDAxYWJjZjg4OTEyXSAgVkJFL0RE
QyBtZXRob2RzOiBub25lOyBFRElEIHRyYW5zZmVyIHRpbWU6IDAgc2Vjb25kcwooWEVOKSBb
MDAwMDAwMWFjMGYwMzg3ZV0gIEVESUQgaW5mbyBub3QgcmV0cmlldmVkIGJlY2F1c2Ugbm8g
RERDIHJldHJpZXZhbCBtZXRob2QgZGV0ZWN0ZWQKKFhFTikgWzAwMDAwMDFhYzU4MDJlMmVd
IERpc2MgaW5mb3JtYXRpb246CihYRU4pIFswMDAwMDAxYWM3YWViMWMzXSAgRm91bmQgNCBN
QlIgc2lnbmF0dXJlcwooWEVOKSBbMDAwMDAwMWFjYTI5NzY4ZV0gIEZvdW5kIDQgRUREIGlu
Zm9ybWF0aW9uIHN0cnVjdHVyZXMKKFhFTikgWzAwMDAwMDFhY2QzYzlhODNdIENQVSBWZW5k
b3I6IEFNRCwgRmFtaWx5IDE2ICgweDEwKSwgTW9kZWwgMTAgKDB4YSksIFN0ZXBwaW5nIDAg
KHJhdyAwMDEwMGZhMCkKKFhFTikgWzAwMDAwMDFhZDI1ODRlMWJdIFhlbi1lODIwIFJBTSBt
YXA6CihYRU4pIFswMDAwMDAxYWQ0ODZjZjMwXSAgWzAwMDAwMDAwMDAwMDAwMDAsIDAwMDAw
MDAwMDAwOTYzZmZdICh1c2FibGUpCihYRU4pIFswMDAwMDAxYWQ4MjU4ZDViXSAgWzAwMDAw
MDAwMDAwOTY0MDAsIDAwMDAwMDAwMDAwOWZmZmZdIChyZXNlcnZlZCkKKFhFTikgWzAwMDAw
MDFhZGJkZGI1N2JdICBbMDAwMDAwMDAwMDBlNDAwMCwgMDAwMDAwMDAwMDBmZmZmZl0gKHJl
c2VydmVkKQooWEVOKSBbMDAwMDAwMWFkZjk1ZWFiYV0gIFswMDAwMDAwMDAwMTAwMDAwLCAw
MDAwMDAwMGM3ZjhmZmZmXSAodXNhYmxlKQooWEVOKSBbMDAwMDAwMWFlMzM0YThiMl0gIFsw
MDAwMDAwMGM3ZjkwMDAwLCAwMDAwMDAwMGM3ZjlkZmZmXSAoQUNQSSBkYXRhKQooWEVOKSBb
MDAwMDAwMWFlNmY5ODQ0YV0gIFswMDAwMDAwMGM3ZjllMDAwLCAwMDAwMDAwMGM3ZmRmZmZm
XSAoQUNQSSBOVlMpCihYRU4pIFswMDAwMDAxYWVhYjFhNmY2XSAgWzAwMDAwMDAwYzdmZTAw
MDAsIDAwMDAwMDAwYzdmZmZmZmZdIChyZXNlcnZlZCkKKFhFTikgWzAwMDAwMDFhZWU2OWM1
ZWFdICBbMDAwMDAwMDBmZmUwMDAwMCwgMDAwMDAwMDBmZmZmZmZmZl0gKHJlc2VydmVkKQoo
WEVOKSBbMDAwMDAwMWFmMjIxZTYyNl0gIFswMDAwMDAwMTAwMDAwMDAwLCAwMDAwMDAwNmI3
ZmZmZmZmXSAodXNhYmxlKQooWEVOKSBbMDAwMDAwMWFmZDdiNTkzYl0gTmV3IFhlbiBpbWFn
ZSBiYXNlIGFkZHJlc3M6IDB4Yzc4MDAwMDAKKFhFTikgWzAwMDAwMDFiMDBiNDZhNjJdIEFD
UEk6IFJTRFAgMDAwRkIxMDAsIDAwMTQgKHIwIEFDUElBTSkKKFhFTikgWzAwMDAwMDFiMDNl
MTAwM2JdIEFDUEk6IFJTRFQgQzdGOTAwMDAsIDAwNDggKHIxIE1TSSAgICBPRU1TTElDICAy
MDEwMDkxMyBNU0ZUICAgICAgIDk3KQooWEVOKSBbMDAwMDAwMWIwOGEzYzQ2M10gQUNQSTog
RkFDUCBDN0Y5MDIwMCwgMDA4NCAocjEgNzY0ME1TIEE3NjQwMTAwIDIwMTAwOTEzIE1TRlQg
ICAgICAgOTcpCihYRU4pIFswMDAwMDAxYjBkNjY2NmJiXSBBQ1BJOiBEU0RUIEM3RjkwNUUw
LCA5NDI3IChyMSAgQTc2NDAgQTc2NDAxMDAgICAgICAxMDAgSU5UTCAyMDA1MTExNykKKFhF
TikgWzAwMDAwMDFiMTIyOTNmYjZdIEFDUEk6IEZBQ1MgQzdGOUUwMDAsIDAwNDAKKFhFTikg
WzAwMDAwMDFiMTRiZDUxZWFdIEFDUEk6IEFQSUMgQzdGOTAzOTAsIDAwODggKHIxIDc2NDBN
UyBBNzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQooWEVOKSBbMDAwMDAwMWIxOTgw
MjJmYV0gQUNQSTogTUNGRyBDN0Y5MDQyMCwgMDAzQyAocjEgNzY0ME1TIE9FTU1DRkcgIDIw
MTAwOTEzIE1TRlQgICAgICAgOTcpCihYRU4pIFswMDAwMDAxYjFlNDJlMDhhXSBBQ1BJOiBT
TElDIEM3RjkwNDYwLCAwMTc2IChyMSBNU0kgICAgT0VNU0xJQyAgMjAxMDA5MTMgTVNGVCAg
ICAgICA5NykKKFhFTikgWzAwMDAwMDFiMjMwNWEyZDBdIEFDUEk6IE9FTUIgQzdGOUUwNDAs
IDAwNzIgKHIxIDc2NDBNUyBBNzY0MDEwMCAyMDEwMDkxMyBNU0ZUICAgICAgIDk3KQooWEVO
KSBbMDAwMDAwMWIyN2M4NTFlYV0gQUNQSTogU1JBVCBDN0Y5QTVFMCwgMDEwOCAocjMgQU1E
ICAgIEZBTV9GXzEwICAgICAgICAyIEFNRCAgICAgICAgIDEpCihYRU4pIFswMDAwMDAxYjJj
OGIxMWMwXSBBQ1BJOiBIUEVUIEM3RjlBNkYwLCAwMDM4IChyMSA3NjQwTVMgT0VNSFBFVCAg
MjAxMDA5MTMgTVNGVCAgICAgICA5NykKKFhFTikgWzAwMDAwMDFiMzE0ZGUwNjBdIEFDUEk6
IElWUlMgQzdGOUE3MzAsIDAxMDggKHIxICBBTUQgICAgIFJEODkwUyAgIDIwMjAzMSBBTUQg
ICAgICAgICAwKQooWEVOKSBbMDAwMDAwMWIzNjEwOTM5Yl0gQUNQSTogU1NEVCBDN0Y5QTg0
MCwgMERBNCAocjEgQSBNIEkgIFBPV0VSTk9XICAgICAgICAxIEFNRCAgICAgICAgIDEpCihY
RU4pIFswMDAwMDAxYjNhZDM1NjY4XSBTeXN0ZW0gUkFNOiAyNjYyM01CICgyNzI2MjEwNGtC
KQooWEVOKSBbMDAwMDAwMWI0OGJiYzI3OF0gU1JBVDogUFhNIDAgLT4gQVBJQyAwMCAtPiBO
b2RlIDAKKFhFTikgWzAwMDAwMDFiNGJhOGM5MmFdIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMDEg
LT4gTm9kZSAwCihYRU4pIFswMDAwMDAxYjRlOTVkYzNlXSBTUkFUOiBQWE0gMCAtPiBBUElD
IDAyIC0+IE5vZGUgMAooWEVOKSBbMDAwMDAwMWI1MTgyY2EwMl0gU1JBVDogUFhNIDAgLT4g
QVBJQyAwMyAtPiBOb2RlIDAKKFhFTikgWzAwMDAwMDFiNTQ2ZmVlYjNdIFNSQVQ6IFBYTSAw
IC0+IEFQSUMgMDQgLT4gTm9kZSAwCihYRU4pIFswMDAwMDAxYjU3NWNmNGFhXSBTUkFUOiBQ
WE0gMCAtPiBBUElDIDA1IC0+IE5vZGUgMAooWEVOKSBbMDAwMDAwMWI1YTQ5ZjVmNl0gU1JB
VDogTm9kZSAwIFBYTSAwIDAtYTAwMDAKKFhFTikgWzAwMDAwMDFiNWNlYWQyZTJdIFNSQVQ6
IE5vZGUgMCBQWE0gMCAxMDAwMDAtYzgwMDAwMDAKKFhFTikgWzAwMDAwMDFiNWZmMTM5OWVd
IFNSQVQ6IE5vZGUgMCBQWE0gMCAxMDAwMDAwMDAtNmI4MDAwMDAwCihYRU4pIFswMDAwMDAx
YjYzMmE2YmYzXSBOVU1BOiBBbGxvY2F0ZWQgbWVtbm9kZW1hcCBmcm9tIDZhNGQyYTAwMCAt
IDZhNGQzMTAwMAooWEVOKSBbMDAwMDAwMWI2NzIxZmM1Ml0gTlVNQTogVXNpbmcgOCBmb3Ig
dGhlIGhhc2ggc2hpZnQuCihYRU4pIFswMDAwMDAxYmM3N2E3NTY1XSBEb21haW4gaGVhcCBp
bml0aWFsaXNlZAooWEVOKSBbMDAwMDAwMWJjOWY1NDZhM10gQWxsb2NhdGVkIGNvbnNvbGUg
cmluZyBvZiAxMjggS2lCLgooWEVOKSBbMDAwMDAwMWJlMDFhNTU4Nl0gdmVzYWZiOiBmcmFt
ZWJ1ZmZlciBhdCAweDAwMDAwMDAwZmIwMDAwMDAsIG1hcHBlZCB0byAweGZmZmY4MmMwMDAy
MDEwMDAsIHVzaW5nIDYxNDRrLCB0b3RhbCAxNDMzNmsKKFhFTikgWzAwMDAwMDFiZTY0ZDNi
ZDJdIHZlc2FmYjogbW9kZSBpcyAxMjgweDEwMjR4MzIsIGxpbmVsZW5ndGg9NTEyMCwgZm9u
dCA4eDE2CihYRU4pIFswMDAwMDAxYmVhNmFmZGQ4XSB2ZXNhZmI6IFRydWVjb2xvcjogc2l6
ZT04Ojg6ODo4LCBzaGlmdD0yNDoxNjo4OjAKKFhFTikgWzAwMDAwMDFiZjRlMGYyZWFdIGZv
dW5kIFNNUCBNUC10YWJsZSBhdCAwMDBmZjc4MAooWEVOKSBbMDAwMDAwMWJmN2I0Nzk3OF0g
RE1JIHByZXNlbnQuCihYRU4pIFswMDAwMDAxYmY5YTM5NGYyXSBVc2luZyBBUElDIGRyaXZl
ciBkZWZhdWx0CihYRU4pIFswMDAwMDAxYmZjMzdhNDMzXSBBQ1BJOiBQTS1UaW1lciBJTyBQ
b3J0OiAweDgwOCAoMzIgYml0cykKKFhFTikgWzAwMDAwMDFiZmY3ZDk3MTJdIEFDUEk6IFNM
RUVQIElORk86IHBtMXhfY250WzE6ODA0LDE6MF0sIHBtMXhfZXZ0WzE6ODAwLDE6MF0KKFhF
TikgWzAwMDAwMDFjMDNiNGMwMGVdIEFDUEk6ICAgICAgICAgICAgIHdha2V1cF92ZWNbYzdm
OWUwMGNdLCB2ZWNfc2l6ZVsyMF0KKFhFTikgWzAwMDAwMDFjMDc5ZmExODJdIEFDUEk6IExv
Y2FsIEFQSUMgYWRkcmVzcyAweGZlZTAwMDAwCihYRU4pIFswMDAwMDAxYzBhYjJiYmNhXSBB
Q1BJOiBMQVBJQyAoYWNwaV9pZFsweDAxXSBsYXBpY19pZFsweDAwXSBlbmFibGVkKQooWEVO
KSBbMDAwMDAwMWMwZTg0NDE0YV0gQUNQSTogTEFQSUMgKGFjcGlfaWRbMHgwMl0gbGFwaWNf
aWRbMHgwMV0gZW5hYmxlZCkKKFhFTikgWzAwMDAwMDFjMTI1NWNjZTNdIEFDUEk6IExBUElD
IChhY3BpX2lkWzB4MDNdIGxhcGljX2lkWzB4MDJdIGVuYWJsZWQpCihYRU4pIFswMDAwMDAx
YzE2Mjc1NWMzXSBBQ1BJOiBMQVBJQyAoYWNwaV9pZFsweDA0XSBsYXBpY19pZFsweDAzXSBl
bmFibGVkKQooWEVOKSBbMDAwMDAwMWMxOWY4ZTkyYV0gQUNQSTogTEFQSUMgKGFjcGlfaWRb
MHgwNV0gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkKKFhFTikgWzAwMDAwMDFjMWRjYTVlMjNd
IEFDUEk6IExBUElDIChhY3BpX2lkWzB4MDZdIGxhcGljX2lkWzB4MDVdIGVuYWJsZWQpCihY
RU4pIFswMDAwMDAxYzIxOWMwNjY2XSBBQ1BJOiBJT0FQSUMgKGlkWzB4MDZdIGFkZHJlc3Nb
MHhmZWMwMDAwMF0gZ3NpX2Jhc2VbMF0pCihYRU4pIFswMDAwMDAxYzI1YWNmZmVlXSBJT0FQ
SUNbMF06IGFwaWNfaWQgNiwgdmVyc2lvbiAzMywgYWRkcmVzcyAweGZlYzAwMDAwLCBHU0kg
MC0yMwooWEVOKSBbMDAwMDAwMWMyYTE2ZDU1Nl0gQUNQSTogSU9BUElDIChpZFsweDA3XSBh
ZGRyZXNzWzB4ZmVjMjAwMDBdIGdzaV9iYXNlWzI0XSkKKFhFTikgWzAwMDAwMDFjMmUzNDk3
M2VdIElPQVBJQ1sxXTogYXBpY19pZCA3LCB2ZXJzaW9uIDMzLCBhZGRyZXNzIDB4ZmVjMjAw
MDAsIEdTSSAyNC01NQooWEVOKSBbMDAwMDAwMWMzMmFiMmEwNl0gQUNQSTogSU5UX1NSQ19P
VlIgKGJ1cyAwIGJ1c19pcnEgMCBnbG9iYWxfaXJxIDIgZGZsIGRmbCkKKFhFTikgWzAwMDAw
MDFjMzZjOGVkYzJdIEFDUEk6IElOVF9TUkNfT1ZSIChidXMgMCBidXNfaXJxIDkgZ2xvYmFs
X2lycSA5IGxvdyBsZXZlbCkKKFhFTikgWzAwMDAwMDFjM2FmZmY3NjhdIEFDUEk6IElSUTAg
dXNlZCBieSBvdmVycmlkZS4KKFhFTikgWzAwMDAwMDFjM2RiYTI3ZWFdIEFDUEk6IElSUTIg
dXNlZCBieSBvdmVycmlkZS4KKFhFTikgWzAwMDAwMDFjNDA3NDgzZTBdIEFDUEk6IElSUTkg
dXNlZCBieSBvdmVycmlkZS4KKFhFTikgWzAwMDAwMDFjNDMyZWIwYWFdIEVuYWJsaW5nIEFQ
SUMgbW9kZTogIEZsYXQuICBVc2luZyAyIEkvTyBBUElDcwooWEVOKSBbMDAwMDAwMWM0NmMw
YmVhNl0gQUNQSTogSFBFVCBpZDogMHg4MzAwIGJhc2U6IDB4ZmVkMDAwMDAKKFhFTikgWzAw
MDAwMDFjNDlmOWU3MThdIFBDSTogTUNGRyBjb25maWd1cmF0aW9uIDA6IGJhc2UgZTAwMDAw
MDAgc2VnbWVudCAwMDAwIGJ1c2VzIDAwIC0gZmYKKFhFTikgWzAwMDAwMDFjNGVhMzRjYTZd
IFBDSTogTm90IHVzaW5nIE1DRkcgZm9yIHNlZ21lbnQgMDAwMCBidXMgMDAtZmYKKFhFTikg
WzAwMDAwMDFjNTI0MjEyODJdIEFNRC1WaTogRm91bmQgTVNJIGNhcGFiaWxpdHkgYmxvY2sg
YXQgMHg1NAooWEVOKSBbMDAwMDAwMWM1NWFlMDMxM10gVXNpbmcgQUNQSSAoTUFEVCkgZm9y
IFNNUCBjb25maWd1cmF0aW9uIGluZm9ybWF0aW9uCihYRU4pIFswMDAwMDAxYzU5OGM1MTBi
XSBTTVA6IEFsbG93aW5nIDYgQ1BVcyAoMCBob3RwbHVnIENQVXMpCihYRU4pIFswMDAwMDAx
YzVjYjhjZWI4XSBJUlEgbGltaXRzOiA1NiBHU0ksIDExOTIgTVNJL01TSS1YCihYRU4pIFsw
MDAwMDAxYzVmYmY0MmNhXSBtaWNyb2NvZGU6IENQVTAgdXBkYXRlZCBmcm9tIHJldmlzaW9u
IDB4MTAwMDBiZiB0byAweDEwMDAwZGMKKFhFTikgWzAwMDAwMDFjNjQwZmI3NWVdIENQVTA6
IDMyMDAgKDgwMCAuLi4gMzYwMCkgTUh6CihYRU4pIFswMDAwMDAxYzY2ZDZhYjRhXSBDUFUw
OiBBTUQgRmFtMTBoIG1hY2hpbmUgY2hlY2sgcmVwb3J0aW5nIGVuYWJsZWQKKFhFTikgWzAw
MDAwMDFjNmE4ZWJlMzZdIFNwZWN1bGF0aXZlIG1pdGlnYXRpb24gZmFjaWxpdGllczoKKFhF
TikgWzAwMDAwMDFjNmQ5NTNlOWJdICAgSGFyZHdhcmUgZmVhdHVyZXM6CihYRU4pIFswMDAw
MDAxYzZmZTlkY2E2XSAgIENvbXBpbGVkLWluIHN1cHBvcnQ6IElORElSRUNUX1RIVU5LIFNI
QURPV19QQUdJTkcKKFhFTikgWzAwMDAwMDFjNzNjODJjMzNdICAgWGVuIHNldHRpbmdzOiBC
VEktVGh1bmsgTEZFTkNFLCBTUEVDX0NUUkw6IE5vLCBPdGhlcjogQlJBTkNIX0hBUkRFTgoo
WEVOKSBbMDAwMDAwMWM3ODhhZDUwNl0gICBTdXBwb3J0IGZvciBIVk0gVk1zOiBSU0IKKFhF
TikgWzAwMDAwMDFjN2IyYmIwYjZdICAgU3VwcG9ydCBmb3IgUFYgVk1zOiBSU0IKKFhFTikg
WzAwMDAwMDFjN2RiZmU3ZjNdICAgWFBUSSAoNjQtYml0IFBWIG9ubHkpOiBEb20wIGRpc2Fi
bGVkLCBEb21VIGRpc2FibGVkICh3aXRob3V0IFBDSUQpCihYRU4pIFswMDAwMDAxYzgyNzVm
NThkXSAgIFBWIEwxVEYgc2hhZG93aW5nOiBEb20wIGRpc2FibGVkLCBEb21VIGRpc2FibGVk
CihYRU4pIFswMDAwMDAxYzg2M2FiZDI2XSBVc2luZyBzY2hlZHVsZXI6IFNNUCBDcmVkaXQg
U2NoZWR1bGVyIHJldjIgKGNyZWRpdDIpCihYRU4pIFswMDAwMDAxYzhhMjViNjllXSBJbml0
aWFsaXppbmcgQ3JlZGl0MiBzY2hlZHVsZXIKKFhFTikgWzAwMDAwMDFjOGNmOTRlOWFdICBs
b2FkX3ByZWNpc2lvbl9zaGlmdDogMTgKKFhFTikgWzAwMDAwMDFjOGY4ZDg1ZjBdICBsb2Fk
X3dpbmRvd19zaGlmdDogMzAKKFhFTikgWzAwMDAwMDFjOTFmYjhlNGRdICB1bmRlcmxvYWRf
YmFsYW5jZV90b2xlcmFuY2U6IDAKKFhFTikgWzAwMDAwMDFjOTRkYmQ4MzNdICBvdmVybG9h
ZF9iYWxhbmNlX3RvbGVyYW5jZTogLTMKKFhFTikgWzAwMDAwMDFjOTdiYzQ1ZDNdICBydW5x
dWV1ZXMgYXJyYW5nZW1lbnQ6IHNvY2tldAooWEVOKSBbMDAwMDAwMWM5YThmZTNjZF0gIGNh
cCBlbmZvcmNlbWVudCBncmFudWxhcml0eTogMTBtcwooWEVOKSBbMDAwMDAwMWM5ZDk2NGVm
YV0gbG9hZCB0cmFja2luZyB3aW5kb3cgbGVuZ3RoIDEwNzM3NDE4MjQgbnMKKFhFTikgWzAw
MDAwMDFjYWE2ZDFjOThdIFBsYXRmb3JtIHRpbWVyIGlzIDE0LjMxOE1IeiBIUEVUCihYRU4p
IFsgICAgMy40NTc0MjhdIERldGVjdGVkIDMyMDAuMTQxIE1IeiBwcm9jZXNzb3IuCihYRU4p
IFsgICAgMy40NzM1NDldIGFsdCB0YWJsZSBmZmZmODJkMDQwNDhiM2YwIC0+IGZmZmY4MmQw
NDA0OTk0MGMKKFhFTikgWyAgICAzLjQ5OTg4NV0gQU1ELVZpOiBJVlJTIEJsb2NrOiBGb3Vu
ZCB0eXBlIDB4MTAgZmxhZ3MgMHgzZSBsZW4gMHhkOCBpZCAweDIKKFhFTikgWyAgICAzLjUy
MTk4Ml0gQU1ELVZpOiBVc2luZyBJVkhEIHR5cGUgMHgxMAooWEVOKSBbICAgIDMuNTM1MjQ1
XSBBTUQtVmk6IEFDUEkgVGFibGU6CihYRU4pIFsgICAgMy41NDYxNjJdIEFNRC1WaTogIFNp
Z25hdHVyZSBJVlJTCihYRU4pIFsgICAgMy41NTgxMjJdIEFNRC1WaTogIExlbmd0aCAweDEw
OAooWEVOKSBbICAgIDMuNTY5NTYyXSBBTUQtVmk6ICBSZXZpc2lvbiAweDEKKFhFTikgWyAg
ICAzLjU4MTAwMl0gQU1ELVZpOiAgQ2hlY2tTdW0gMHg3OQooWEVOKSBbICAgIDMuNTkyNzAy
XSBBTUQtVmk6ICBPRU1fSWQgQU1EICAKKFhFTikgWyAgICAzLjYwNDE0MF0gQU1ELVZpOiAg
T0VNX1RhYmxlX0lkIFJEODkwUwooWEVOKSBbICAgIDMuNjE3Mzk5XSBBTUQtVmk6ICBPRU1f
UmV2aXNpb24gMHgyMDIwMzEKKFhFTikgWyAgICAzLjYzMTE4MF0gQU1ELVZpOiAgQ3JlYXRv
cl9JZCBBTUQgCihYRU4pIFsgICAgMy42NDgzOTFdIEFNRC1WaTogIENyZWF0b3JfUmV2aXNp
b24gMAooWEVOKSBbICAgIDMuNjY2MzY5XSBBTUQtVmk6IElWUlMgQmxvY2s6IHR5cGUgMHgx
MCBmbGFncyAweDNlIGxlbiAweGQ4IGlkIDB4MgooWEVOKSBbICAgIDMuNjkxOTY1XSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MyBpZCAwIGZsYWdzIDAKKFhFTikgWyAg
ICAzLjcxNTQ4MV0gQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAwIC0+IDB4MgooWEVOKSBbICAg
IDMuNzM0NTYzXSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDEw
IGZsYWdzIDAKKFhFTikgWyAgICAzLjc1ODgyMF0gQU1ELVZpOiBJVkhEIERldmljZSBFbnRy
eTogdHlwZSAweDIgaWQgMHhlMDAgZmxhZ3MgMAooWEVOKSBbICAgIDMuNzgzMzM2XSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDE4IGZsYWdzIDAKKFhFTikg
WyAgICAzLjgwNzYyMV0gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQg
MHhkMDAgZmxhZ3MgMAooWEVOKSBbICAgIDMuODMyMTY0XSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5OiB0eXBlIDB4MiBpZCAweDI4IGZsYWdzIDAKKFhFTikgWyAgICAzLjg1NjQyMF0g
QU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHhjMDAgZmxhZ3MgMAoo
WEVOKSBbICAgIDMuODgwOTI1XSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4
MiBpZCAweDMwIGZsYWdzIDAKKFhFTikgWyAgICAzLjkwNTE2Nl0gQU1ELVZpOiBJVkhEIERl
dmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHhiMDAgZmxhZ3MgMAooWEVOKSBbICAgIDMuOTI5
NjcxXSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDQ4IGZsYWdz
IDAKKFhFTikgWyAgICAzLjk1MzkwM10gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlw
ZSAweDIgaWQgMHhhMDAgZmxhZ3MgMAooWEVOKSBbICAgIDMuOTc4MzY5XSBBTUQtVmk6IElW
SEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDUwIGZsYWdzIDAKKFhFTikgWyAgICA0
LjAwMjU4NF0gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDMgaWQgMHg5MDAg
ZmxhZ3MgMAooWEVOKSBbICAgIDQuMDI3MDQ5XSBBTUQtVmk6ICBEZXZfSWQgUmFuZ2U6IDB4
OTAwIC0+IDB4OTA3CihYRU4pIFsgICAgNC4wNDc1NjNdIEFNRC1WaTogSVZIRCBEZXZpY2Ug
RW50cnk6IHR5cGUgMHgyIGlkIDB4NjAgZmxhZ3MgMAooWEVOKSBbICAgIDQuMDcxNjkxXSBB
TUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDUwMCBmbGFncyAwCihY
RU4pIFsgICAgNC4wOTYxMDRdIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgy
IGlkIDB4NjA4IGZsYWdzIDAKKFhFTikgWyAgICA0LjEyMDU0M10gQU1ELVZpOiBJVkhEIERl
dmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHg4MDAgZmxhZ3MgMAooWEVOKSBbICAgIDQuMTQ1
MDE5XSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweDYxMCBmbGFn
cyAwCihYRU4pIFsgICAgNC4xNjk0NzJdIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5
cGUgMHgyIGlkIDB4NzAwIGZsYWdzIDAKKFhFTikgWyAgICA0LjE5MzkyNF0gQU1ELVZpOiBJ
VkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHg2OCBmbGFncyAwCihYRU4pIFsgICAg
NC4yMTgxMzBdIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUgMHgzIGlkIDB4NDAw
IGZsYWdzIDAKKFhFTikgWyAgICA0LjI0MjU2OV0gQU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAw
eDQwMCAtPiAweDQwNwooWEVOKSBbICAgIDQuMjYzMDk0XSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5OiB0eXBlIDB4MiBpZCAweDg4IGZsYWdzIDAKKFhFTikgWyAgICA0LjI4NzIzNV0g
QU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDMgaWQgMHg5MCBmbGFncyAwCihY
RU4pIFsgICAgNC4zMTEzNzRdIEFNRC1WaTogIERldl9JZCBSYW5nZTogMHg5MCAtPiAweDky
CihYRU4pIFsgICAgNC4zMzEzNTRdIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50cnk6IHR5cGUg
MHgzIGlkIDB4OTggZmxhZ3MgMAooWEVOKSBbICAgIDQuMzU1NTIwXSBBTUQtVmk6ICBEZXZf
SWQgUmFuZ2U6IDB4OTggLT4gMHg5YQooWEVOKSBbICAgIDQuMzc1NDYwXSBBTUQtVmk6IElW
SEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweGEwIGZsYWdzIDB4ZDcKKFhFTikgWyAg
ICA0LjQwMDM2OV0gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHhh
MiBmbGFncyAwCihYRU4pIFsgICAgNC40MjQ1MzRdIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50
cnk6IHR5cGUgMHgyIGlkIDB4YTMgZmxhZ3MgMAooWEVOKSBbICAgIDQuNDQ4Njg4XSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweGE0IGZsYWdzIDAKKFhFTikg
WyAgICA0LjQ3MjgyN10gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAwIGlkIDAg
ZmxhZ3MgMAooWEVOKSBbICAgIDQuNDk1NjI4XSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5
OiB0eXBlIDB4NDMgaWQgMHgzMDAgZmxhZ3MgMAooWEVOKSBbICAgIDQuNTIwMjQ4XSBBTUQt
Vmk6ICBEZXZfSWQgUmFuZ2U6IDB4MzAwIC0+IDB4M2ZmIGFsaWFzIDB4YTQKKFhFTikgWyAg
ICA0LjU0MzYyM10gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQgMHhh
NSBmbGFncyAwCihYRU4pIFsgICAgNC41Njc3MjJdIEFNRC1WaTogSVZIRCBEZXZpY2UgRW50
cnk6IHR5cGUgMHgyIGlkIDB4YTggZmxhZ3MgMAooWEVOKSBbICAgIDQuNTkxODExXSBBTUQt
Vmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4MiBpZCAweGE5IGZsYWdzIDAKKFhFTikg
WyAgICA0LjYxNTg4N10gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDIgaWQg
MHgxMDAgZmxhZ3MgMAooWEVOKSBbICAgIDQuNjQwMjIwXSBBTUQtVmk6IElWSEQgRGV2aWNl
IEVudHJ5OiB0eXBlIDB4MyBpZCAweGIwIGZsYWdzIDAKKFhFTikgWyAgICA0LjY2NDMyMV0g
QU1ELVZpOiAgRGV2X0lkIFJhbmdlOiAweGIwIC0+IDB4YjIKKFhFTikgWyAgICA0LjY4NDI3
NV0gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAwIGlkIDAgZmxhZ3MgMAooWEVO
KSBbICAgIDQuNzA3MTI5XSBBTUQtVmk6IElWSEQgRGV2aWNlIEVudHJ5OiB0eXBlIDB4NDgg
aWQgMCBmbGFncyAweGQ3CihYRU4pIFsgICAgNC43MzE1OTNdIEFNRC1WaTogSVZIRCBTcGVj
aWFsOiAwMDAwOjAwOjE0LjAgdmFyaWV0eSAweDIgaGFuZGxlIDAKKFhFTikgWyAgICA0Ljc1
Njg0MF0gQU1ELVZpOiBJVkhEIERldmljZSBFbnRyeTogdHlwZSAweDQ4IGlkIDAgZmxhZ3Mg
MAooWEVOKSBbICAgIDQuNzgwNTc2XSBBTUQtVmk6IElWSEQgU3BlY2lhbDogMDAwMDowMDow
MC4xIHZhcmlldHkgMHgxIGhhbmRsZSAweDcKKFhFTikgWyAgICA0LjgwNjUzN10gQU1ELVZp
OiBEaXNhYmxlZCBIQVAgbWVtb3J5IG1hcCBzaGFyaW5nIHdpdGggSU9NTVUKKFhFTikgWyAg
ICA0LjgzMDU4NF0gQU1ELVZpOiBJT01NVSAwIEVuYWJsZWQuCihYRU4pIFsgICAgNC44NDc5
MTRdIEkvTyB2aXJ0dWFsaXNhdGlvbiBlbmFibGVkCihYRU4pIFsgICAgNC44NjU2NzFdICAt
IERvbTAgbW9kZTogUmVsYXhlZAooWEVOKSBbICAgIDQuODgyMDYyXSBJbnRlcnJ1cHQgcmVt
YXBwaW5nIGVuYWJsZWQKKFhFTikgWyAgICA0Ljg5OTk4OF0gbnJfc29ja2V0czogMQooWEVO
KSBbICAgIDQuOTE0NTM0XSBFTkFCTElORyBJTy1BUElDIElSUXMKKFhFTikgWyAgICA0Ljkz
MDg0OV0gIC0+IFVzaW5nIG5ldyBBQ0sgbWV0aG9kCihYRU4pIFsgICAgNC45NDgxNjVdIC4u
VElNRVI6IHZlY3Rvcj0weEYwIGFwaWMxPTAgcGluMT0yIGFwaWMyPS0xIHBpbjI9LTEKKFhF
TikgWzIwMjAtMTAtMTAgMTY6MTI6MzYuNjY1XSBIVk06IEFTSURzIGVuYWJsZWQuCihYRU4p
IFsyMDIwLTEwLTEwIDE2OjEyOjM2LjY3MV0gU1ZNOiBTdXBwb3J0ZWQgYWR2YW5jZWQgZmVh
dHVyZXM6CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM2LjY3N10gIC0gTmVzdGVkIFBhZ2Ug
VGFibGVzIChOUFQpCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM2LjY4Ml0gIC0gTGFzdCBC
cmFuY2ggUmVjb3JkIChMQlIpIFZpcnR1YWxpc2F0aW9uCihYRU4pIFsyMDIwLTEwLTEwIDE2
OjEyOjM2LjY4OF0gIC0gTmV4dC1SSVAgU2F2ZWQgb24gI1ZNRVhJVAooWEVOKSBbMjAyMC0x
MC0xMCAxNjoxMjozNi42OTRdICAtIFBhdXNlLUludGVyY2VwdCBGaWx0ZXIKKFhFTikgWzIw
MjAtMTAtMTAgMTY6MTI6MzYuNzAwXSBIVk06IFNWTSBlbmFibGVkCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjM2LjcwNV0gSFZNOiBIYXJkd2FyZSBBc3Npc3RlZCBQYWdpbmcgKEhBUCkg
ZGV0ZWN0ZWQKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzYuNzExXSBIVk06IEhBUCBwYWdl
IHNpemVzOiA0a0IsIDJNQiwgMUdCCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM2LjcxN10g
YWx0IHRhYmxlIGZmZmY4MmQwNDA0OGIzZjAgLT4gZmZmZjgyZDA0MDQ5OTQwYwooWEVOKSBb
MjAyMC0xMC0xMCAxNjoxMjozMC44MThdIG1pY3JvY29kZTogQ1BVMSB1cGRhdGVkIGZyb20g
cmV2aXNpb24gMHgxMDAwMGJmIHRvIDB4MTAwMDBkYwooWEVOKSBbMjAyMC0xMC0xMCAxNjox
MjozMC44MThdIG1pY3JvY29kZTogQ1BVMiB1cGRhdGVkIGZyb20gcmV2aXNpb24gMHgxMDAw
MGJmIHRvIDB4MTAwMDBkYwooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozMC44MThdIG1pY3Jv
Y29kZTogQ1BVMyB1cGRhdGVkIGZyb20gcmV2aXNpb24gMHgxMDAwMGJmIHRvIDB4MTAwMDBk
YwooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozMC44MThdIG1pY3JvY29kZTogQ1BVNCB1cGRh
dGVkIGZyb20gcmV2aXNpb24gMHgxMDAwMGJmIHRvIDB4MTAwMDBkYwooWEVOKSBbMjAyMC0x
MC0xMCAxNjoxMjozMC44MThdIG1pY3JvY29kZTogQ1BVNSB1cGRhdGVkIGZyb20gcmV2aXNp
b24gMHgxMDAwMGJmIHRvIDB4MTAwMDBkYwooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNi44
MDZdIEJyb3VnaHQgdXAgNiBDUFVzCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM2LjgxOF0g
U2NoZWR1bGluZyBncmFudWxhcml0eTogY3B1LCAxIENQVSBwZXIgc2NoZWQtcmVzb3VyY2UK
KFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzYuODI0XSBBZGRpbmcgY3B1IDAgdG8gcnVucXVl
dWUgMAooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNi44MzBdICBGaXJzdCBjcHUgb24gcnVu
cXVldWUsIGFjdGl2YXRpbmcKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzYuODM2XSBBZGRp
bmcgY3B1IDEgdG8gcnVucXVldWUgMAooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNi44NDJd
IEFkZGluZyBjcHUgMiB0byBydW5xdWV1ZSAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM2
Ljg0OF0gQWRkaW5nIGNwdSAzIHRvIHJ1bnF1ZXVlIDAKKFhFTikgWzIwMjAtMTAtMTAgMTY6
MTI6MzYuODU0XSBBZGRpbmcgY3B1IDQgdG8gcnVucXVldWUgMAooWEVOKSBbMjAyMC0xMC0x
MCAxNjoxMjozNi44NjBdIEFkZGluZyBjcHUgNSB0byBydW5xdWV1ZSAwCihYRU4pIFsyMDIw
LTEwLTEwIDE2OjEyOjM2Ljg2Nl0gTUNBOiBVc2UgaHcgdGhyZXNob2xkaW5nIHRvIGFkanVz
dCBwb2xsaW5nIGZyZXF1ZW5jeQooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNi44NzJdIG1j
aGVja19wb2xsOiBNYWNoaW5lIGNoZWNrIHBvbGxpbmcgdGltZXIgc3RhcnRlZC4KKFhFTikg
WzIwMjAtMTAtMTAgMTY6MTI6MzYuODc4XSBYZW5vcHJvZmlsZTogRmFpbGVkIHRvIHNldHVw
IElCUyBMVlQgb2Zmc2V0LCBJQlNDVEwgPSAweGZmZmZmZmZmCihYRU4pIFsyMDIwLTEwLTEw
IDE2OjEyOjM2Ljg4NF0gUnVubmluZyBzdHViIHJlY292ZXJ5IHNlbGZ0ZXN0cy4uLgooWEVO
KSBbMjAyMC0xMC0xMCAxNjoxMjozNi44OTBdIEZpeHVwICNVRFswMDAwXTogZmZmZjgyZDA3
ZmZmZTA0MCBbZmZmZjgyZDA3ZmZmZTA0MF0gLT4gZmZmZjgyZDA0MDM5NmEwMQooWEVOKSBb
MjAyMC0xMC0xMCAxNjoxMjozNi44OTZdIEZpeHVwICNHUFswMDAwXTogZmZmZjgyZDA3ZmZm
ZTA0MSBbZmZmZjgyZDA3ZmZmZTA0MV0gLT4gZmZmZjgyZDA0MDM5NmEwMQooWEVOKSBbMjAy
MC0xMC0xMCAxNjoxMjozNi45MDJdIEZpeHVwICNTU1swMDAwXTogZmZmZjgyZDA3ZmZmZTA0
MCBbZmZmZjgyZDA3ZmZmZTA0MF0gLT4gZmZmZjgyZDA0MDM5NmEwMQooWEVOKSBbMjAyMC0x
MC0xMCAxNjoxMjozNi45MDhdIEZpeHVwICNCUFswMDAwXTogZmZmZjgyZDA3ZmZmZTA0MSBb
ZmZmZjgyZDA3ZmZmZTA0MV0gLT4gZmZmZjgyZDA0MDM5NmEwMQooWEVOKSBbMjAyMC0xMC0x
MCAxNjoxMjozNi45MzRdIE5YIChFeGVjdXRlIERpc2FibGUpIHByb3RlY3Rpb24gYWN0aXZl
CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM2Ljk0MV0gTXVsdGlwbGUgaW5pdHJkIGNhbmRp
ZGF0ZXMsIHBpY2tpbmcgbW9kdWxlICMxCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM2Ljk0
N10gRG9tMCBoYXMgbWF4aW11bSA2ODAgUElSUXMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6
MzYuOTUzXSAqKiogQnVpbGRpbmcgYSBQViBEb20wICoqKgooWEVOKSBbMjAyMC0xMC0xMCAx
NjoxMjozNy4yNTRdIEVMRjogcGhkcjogcGFkZHI9MHgxMDAwMDAwIG1lbXN6PTB4MWMxZTgy
YwooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNy4yNjBdIEVMRjogcGhkcjogcGFkZHI9MHgy
ZTAwMDAwIG1lbXN6PTB4N2QwMDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjI2N10g
RUxGOiBwaGRyOiBwYWRkcj0weDM1ZDAwMDAgbWVtc3o9MHgyYmFkOAooWEVOKSBbMjAyMC0x
MC0xMCAxNjoxMjozNy4yNzNdIEVMRjogcGhkcjogcGFkZHI9MHgzNWZjMDAwIG1lbXN6PTB4
NDJhMDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjI3OV0gRUxGOiBtZW1vcnk6IDB4
MTAwMDAwMCAtPiAweDNhMjYwMDAKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzcuMjg1XSBF
TEY6IG5vdGU6IEdVRVNUX09TID0gImxpbnV4IgooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjoz
Ny4yOTFdIEVMRjogbm90ZTogR1VFU1RfVkVSU0lPTiA9ICIyLjYiCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjM3LjI5N10gRUxGOiBub3RlOiBYRU5fVkVSU0lPTiA9ICJ4ZW4tMy4wIgoo
WEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNy4zMDRdIEVMRjogbm90ZTogVklSVF9CQVNFID0g
MHhmZmZmZmZmZjgwMDAwMDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjMxMF0gRUxG
OiBub3RlOiBJTklUX1AyTSA9IDB4ODAwMDAwMDAwMAooWEVOKSBbMjAyMC0xMC0xMCAxNjox
MjozNy4zMTZdIEVMRjogbm90ZTogRU5UUlkgPSAweGZmZmZmZmZmODM1ZmMxODAKKFhFTikg
WzIwMjAtMTAtMTAgMTY6MTI6MzcuMzIyXSBFTEY6IG5vdGU6IEhZUEVSQ0FMTF9QQUdFID0g
MHhmZmZmZmZmZjgxMDAyMDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjMyOF0gRUxG
OiBub3RlOiBGRUFUVVJFUyA9ICIhd3JpdGFibGVfcGFnZV90YWJsZXN8cGFlX3BnZGlyX2Fi
b3ZlXzRnYiIKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzcuMzM0XSBFTEY6IG5vdGU6IFNV
UFBPUlRFRF9GRUFUVVJFUyA9IDB4ODgwMQooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNy4z
NDFdIEVMRjogbm90ZTogUEFFX01PREUgPSAieWVzIgooWEVOKSBbMjAyMC0xMC0xMCAxNjox
MjozNy4zNDddIEVMRjogbm90ZTogTE9BREVSID0gImdlbmVyaWMiCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjM3LjM1M10gRUxGOiBub3RlOiB1bmtub3duICgweGQpCihYRU4pIFsyMDIw
LTEwLTEwIDE2OjEyOjM3LjM1OV0gRUxGOiBub3RlOiBTVVNQRU5EX0NBTkNFTCA9IDB4MQoo
WEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNy4zNjVdIEVMRjogbm90ZTogTU9EX1NUQVJUX1BG
TiA9IDB4MQooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNy4zNzFdIEVMRjogbm90ZTogSFZf
U1RBUlRfTE9XID0gMHhmZmZmODAwMDAwMDAwMDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEy
OjM3LjM3OF0gRUxGOiBub3RlOiBQQUREUl9PRkZTRVQgPSAwCihYRU4pIFsyMDIwLTEwLTEw
IDE2OjEyOjM3LjM4NF0gRUxGOiBub3RlOiBQSFlTMzJfRU5UUlkgPSAweDEwMDA0MjAKKFhF
TikgWzIwMjAtMTAtMTAgMTY6MTI6MzcuMzkwXSBFTEY6IEZvdW5kIFBWSCBpbWFnZQooWEVO
KSBbMjAyMC0xMC0xMCAxNjoxMjozNy4zOTZdIEVMRjogYWRkcmVzc2VzOgooWEVOKSBbMjAy
MC0xMC0xMCAxNjoxMjozNy40MDJdICAgICB2aXJ0X2Jhc2UgICAgICAgID0gMHhmZmZmZmZm
ZjgwMDAwMDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjQwOV0gICAgIGVsZl9wYWRk
cl9vZmZzZXQgPSAweDAKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzcuNDE1XSAgICAgdmly
dF9vZmZzZXQgICAgICA9IDB4ZmZmZmZmZmY4MDAwMDAwMAooWEVOKSBbMjAyMC0xMC0xMCAx
NjoxMjozNy40MjFdICAgICB2aXJ0X2tzdGFydCAgICAgID0gMHhmZmZmZmZmZjgxMDAwMDAw
CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjQyN10gICAgIHZpcnRfa2VuZCAgICAgICAg
PSAweGZmZmZmZmZmODNhMjYwMDAKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzcuNDM0XSAg
ICAgdmlydF9lbnRyeSAgICAgICA9IDB4ZmZmZmZmZmY4MzVmYzE4MAooWEVOKSBbMjAyMC0x
MC0xMCAxNjoxMjozNy40NDBdICAgICBwMm1fYmFzZSAgICAgICAgID0gMHg4MDAwMDAwMDAw
CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjQ0Nl0gIFhlbiAga2VybmVsOiA2NC1iaXQs
IGxzYiwgY29tcGF0MzIKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzcuNDUzXSAgRG9tMCBr
ZXJuZWw6IDY0LWJpdCwgUEFFLCBsc2IsIHBhZGRyIDB4MTAwMDAwMCAtPiAweDNhMjYwMDAK
KFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzcuNDYwXSBQSFlTSUNBTCBNRU1PUlkgQVJSQU5H
RU1FTlQ6CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjQ2N10gIERvbTAgYWxsb2MuOiAg
IDAwMDAwMDA2OTgwMDAwMDAtPjAwMDAwMDA2OWMwMDAwMDAgKDQ5NzE4MyBwYWdlcyB0byBi
ZSBhbGxvY2F0ZWQpCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjQ3NF0gIEluaXQuIHJh
bWRpc2s6IDAwMDAwMDA2YjU2MWIwMDAtPjAwMDAwMDA2YjdmZmI4NDYKKFhFTikgWzIwMjAt
MTAtMTAgMTY6MTI6MzcuNDgwXSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoKKFhFTikg
WzIwMjAtMTAtMTAgMTY6MTI6MzcuNDg3XSAgTG9hZGVkIGtlcm5lbDogZmZmZmZmZmY4MTAw
MDAwMC0+ZmZmZmZmZmY4M2EyNjAwMAooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNy40OTNd
ICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAwMDAwMDAwMDAwCihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjUwMF0gIFBoeXMtTWFjaCBtYXA6IDAwMDAwMDgw
MDAwMDAwMDAtPjAwMDAwMDgwMDA0MDAwMDAKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6Mzcu
NTA2XSAgU3RhcnQgaW5mbzogICAgZmZmZmZmZmY4M2EyNjAwMC0+ZmZmZmZmZmY4M2EyNjRi
OAooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNy41MTNdICBYZW5zdG9yZSByaW5nOiAwMDAw
MDAwMDAwMDAwMDAwLT4wMDAwMDAwMDAwMDAwMDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEy
OjM3LjUxOV0gIENvbnNvbGUgcmluZzogIDAwMDAwMDAwMDAwMDAwMDAtPjAwMDAwMDAwMDAw
MDAwMDAKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzcuNTI2XSAgUGFnZSB0YWJsZXM6ICAg
ZmZmZmZmZmY4M2EyNzAwMC0+ZmZmZmZmZmY4M2E0ODAwMAooWEVOKSBbMjAyMC0xMC0xMCAx
NjoxMjozNy41MzJdICBCb290IHN0YWNrOiAgICBmZmZmZmZmZjgzYTQ4MDAwLT5mZmZmZmZm
ZjgzYTQ5MDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjUzOF0gIFRPVEFMOiAgICAg
ICAgIGZmZmZmZmZmODAwMDAwMDAtPmZmZmZmZmZmODNjMDAwMDAKKFhFTikgWzIwMjAtMTAt
MTAgMTY6MTI6MzcuNTQ1XSAgRU5UUlkgQUREUkVTUzogZmZmZmZmZmY4MzVmYzE4MAooWEVO
KSBbMjAyMC0xMC0xMCAxNjoxMjozNy41NTJdIERvbTAgaGFzIG1heGltdW0gNiBWQ1BVcwoo
WEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozNy41NTldIEVMRjogcGhkciAwIGF0IDB4ZmZmZmZm
ZmY4MTAwMDAwMCAtPiAweGZmZmZmZmZmODJjMWU4MmMKKFhFTikgWzIwMjAtMTAtMTAgMTY6
MTI6MzcuNTg3XSBFTEY6IHBoZHIgMSBhdCAweGZmZmZmZmZmODJlMDAwMDAgLT4gMHhmZmZm
ZmZmZjgzNWQwMDAwCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM3LjYwMF0gRUxGOiBwaGRy
IDIgYXQgMHhmZmZmZmZmZjgzNWQwMDAwIC0+IDB4ZmZmZmZmZmY4MzVmYmFkOAooWEVOKSBb
MjAyMC0xMC0xMCAxNjoxMjozNy42MDddIEVMRjogcGhkciAzIGF0IDB4ZmZmZmZmZmY4MzVm
YzAwMCAtPiAweGZmZmZmZmZmODM3NGQwMDAKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6Mzku
MjA4XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAwLCB0eXBl
ID0gMHg2LCByb290IHRhYmxlID0gMHg2YjU2MTgwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBt
b2RlID0gMwooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozOS4yMTVdIEFNRC1WaTogU2V0dXAg
SS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MTAsIHR5cGUgPSAweDIsIHJvb3QgdGFi
bGUgPSAweDZiNTYxODAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIFsy
MDIwLTEwLTEwIDE2OjEyOjM5LjIyMl0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTog
ZGV2aWNlIGlkID0gMHgxOCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAw
LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6
MzkuMjI5XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDI4
LCB0eXBlID0gMHgyLCByb290IHRhYmxlID0gMHg2YjU2MTgwMDAsIGRvbWFpbiA9IDAsIHBh
Z2luZyBtb2RlID0gMwooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozOS4yMzZdIEFNRC1WaTog
U2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4MzAsIHR5cGUgPSAweDIsIHJv
b3QgdGFibGUgPSAweDZiNTYxODAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5LjI0M10gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0
YWJsZTogZGV2aWNlIGlkID0gMHg0OCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NmI1
NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAg
MTY6MTI6MzkuMjUwXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweDUwLCB0eXBlID0gMHgyLCByb290IHRhYmxlID0gMHg2YjU2MTgwMDAsIGRvbWFpbiA9
IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozOS4yNThdIEFN
RC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4NjAsIHR5cGUgPSAw
eDIsIHJvb3QgdGFibGUgPSAweDZiNTYxODAwMCwgZG9tYWluID0gMCwgcGFnaW5nIG1vZGUg
PSAzCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5LjI2NV0gQU1ELVZpOiBTZXR1cCBJL08g
cGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg2OCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9
IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAt
MTAtMTAgMTY6MTI6MzkuMjcyXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZp
Y2UgaWQgPSAweDg4LCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHg2YjU2MTgwMDAsIGRv
bWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozOS4y
ODBdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4OTAsIHR5
cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDZiNTYxODAwMCwgZG9tYWluID0gMCwgcGFnaW5n
IG1vZGUgPSAzCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5LjI4OF0gQU1ELVZpOiBTZXR1
cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHg5MiwgdHlwZSA9IDB4Nywgcm9vdCB0
YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikg
WzIwMjAtMTAtMTAgMTY6MTI6MzkuMjk2XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxl
OiBkZXZpY2UgaWQgPSAweDk4LCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHg2YjU2MTgw
MDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBbMjAyMC0xMC0xMCAxNjox
MjozOS4zMDNdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBpZCA9IDB4
OWEsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDZiNTYxODAwMCwgZG9tYWluID0gMCwg
cGFnaW5nIG1vZGUgPSAzCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5LjMxMV0gQU1ELVZp
OiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhMCwgdHlwZSA9IDB4Nywg
cm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMK
KFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuMzIwXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweGEyLCB0eXBlID0gMHg3LCByb290IHRhYmxlID0gMHg2
YjU2MTgwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBbMjAyMC0xMC0x
MCAxNjoxMjozOS4zMjhdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRldmljZSBp
ZCA9IDB4YTMsIHR5cGUgPSAweDcsIHJvb3QgdGFibGUgPSAweDZiNTYxODAwMCwgZG9tYWlu
ID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5LjMzNl0g
QU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhhNCwgdHlwZSA9
IDB4NSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuMzQ1XSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGE1LCB0eXBlID0gMHg3LCByb290IHRhYmxl
ID0gMHg2YjU2MTgwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVOKSBbMjAy
MC0xMC0xMCAxNjoxMjozOS4zNTNdIEFNRC1WaTogU2V0dXAgSS9PIHBhZ2UgdGFibGU6IGRl
dmljZSBpZCA9IDB4YTgsIHR5cGUgPSAweDIsIHJvb3QgdGFibGUgPSAweDZiNTYxODAwMCwg
ZG9tYWluID0gMCwgcGFnaW5nIG1vZGUgPSAzCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5
LjM2Ml0gQU1ELVZpOiBTZXR1cCBJL08gcGFnZSB0YWJsZTogZGV2aWNlIGlkID0gMHhiMCwg
dHlwZSA9IDB4Nywgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuMzcxXSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGIyLCB0eXBlID0gMHg3LCByb290
IHRhYmxlID0gMHg2YjU2MTgwMDAsIGRvbWFpbiA9IDAsIHBhZ2luZyBtb2RlID0gMwooWEVO
KSBbMjAyMC0xMC0xMCAxNjoxMjozOS4zNzldIEFNRC1WaTogU2tpcHBpbmcgaG9zdCBicmlk
Z2UgMDAwMDowMDoxOC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5LjM4OF0gQU1ELVZp
OiBTa2lwcGluZyBob3N0IGJyaWRnZSAwMDAwOjAwOjE4LjEKKFhFTikgWzIwMjAtMTAtMTAg
MTY6MTI6MzkuMzk3XSBBTUQtVmk6IFNraXBwaW5nIGhvc3QgYnJpZGdlIDAwMDA6MDA6MTgu
MgooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozOS40MDZdIEFNRC1WaTogU2tpcHBpbmcgaG9z
dCBicmlkZ2UgMDAwMDowMDoxOC4zCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5LjQxNF0g
QU1ELVZpOiBTa2lwcGluZyBob3N0IGJyaWRnZSAwMDAwOjAwOjE4LjQKKFhFTikgWzIwMjAt
MTAtMTAgMTY6MTI6MzkuNDIzXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZp
Y2UgaWQgPSAweDQwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6Mzku
NDMyXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDQwMSwg
dHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNDQxXSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDQwMiwgdHlwZSA9IDB4MSwgcm9v
dCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhF
TikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNDUxXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweDQwMywgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1
NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAg
MTY6MTI6MzkuNDYwXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweDQwNCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNDcwXSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDQwNSwgdHlwZSA9
IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNDc5XSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDQwNiwgdHlwZSA9IDB4MSwgcm9vdCB0YWJs
ZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIw
MjAtMTAtMTAgMTY6MTI6MzkuNDg5XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweDQwNywgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAw
LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6
MzkuNDk5XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDUw
MCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBw
YWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNTA5XSBBTUQtVmk6
IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDYwOCwgdHlwZSA9IDB4Miwg
cm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMK
KFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNTE5XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdl
IHRhYmxlOiBkZXZpY2UgaWQgPSAweDYxMCwgdHlwZSA9IDB4Miwgcm9vdCB0YWJsZSA9IDB4
NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAt
MTAgMTY6MTI6MzkuNTI5XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2Ug
aWQgPSAweDcwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21h
aW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNTQw
XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDgwMCwgdHlw
ZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcg
bW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNTUwXSBBTUQtVmk6IFNldHVw
IEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwMCwgdHlwZSA9IDB4MSwgcm9vdCB0
YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikg
WzIwMjAtMTAtMTAgMTY6MTI6MzkuNTYxXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxl
OiBkZXZpY2UgaWQgPSAweDkwMSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4
MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6
MTI6MzkuNTcxXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAw
eDkwMiwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAw
LCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNTgyXSBBTUQt
Vmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwMywgdHlwZSA9IDB4
MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9
IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNTkzXSBBTUQtVmk6IFNldHVwIEkvTyBw
YWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwNCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9
IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAt
MTAtMTAgMTY6MTI6MzkuNjA0XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZp
Y2UgaWQgPSAweDkwNSwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBk
b21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6Mzku
NjE1XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwNiwg
dHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdp
bmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNjI2XSBBTUQtVmk6IFNl
dHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweDkwNywgdHlwZSA9IDB4MSwgcm9v
dCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhF
TikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNjM3XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRh
YmxlOiBkZXZpY2UgaWQgPSAweGEwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1
NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAg
MTY6MTI6MzkuNjQ4XSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQg
PSAweGIwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4g
PSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNjYwXSBB
TUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGMwMCwgdHlwZSA9
IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9k
ZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNjcxXSBBTUQtVmk6IFNldHVwIEkv
TyBwYWdlIHRhYmxlOiBkZXZpY2UgaWQgPSAweGQwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJs
ZSA9IDB4NmI1NjE4MDAwLCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIw
MjAtMTAtMTAgMTY6MTI6MzkuNjgzXSBBTUQtVmk6IFNldHVwIEkvTyBwYWdlIHRhYmxlOiBk
ZXZpY2UgaWQgPSAweGUwMCwgdHlwZSA9IDB4MSwgcm9vdCB0YWJsZSA9IDB4NmI1NjE4MDAw
LCBkb21haW4gPSAwLCBwYWdpbmcgbW9kZSA9IDMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6
MzkuNzAwXSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0IDB4NDAw
MCBwYWdlcy4KKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNzEyXSBTY3J1YmJpbmcgRnJl
ZSBSQU0gaW4gYmFja2dyb3VuZAooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozOS43MjNdIFN0
ZC4gTG9nbGV2ZWw6IEFsbAooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozOS43MzVdIEd1ZXN0
IExvZ2xldmVsOiBBbGwKKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6MzkuNzQ2XSBYZW4gaXMg
cmVsaW5xdWlzaGluZyBWR0EgY29uc29sZS4KKFhFTikgWzIwMjAtMTAtMTAgMTY6MTI6Mzku
ODQyXSAqKiogU2VyaWFsIGlucHV0IHRvIERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGlt
ZXMgdG8gc3dpdGNoIGlucHV0KQooWEVOKSBbMjAyMC0xMC0xMCAxNjoxMjozOS44NDJdIEZy
ZWVkIDU1NmtCIGluaXQgbWVtb3J5CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5Ljg0M10g
ZW11bC1wcml2LW9wLmM6OTkzOmQwdjAgUkRNU1IgMHhjMDAxMDExMiB1bmltcGxlbWVudGVk
CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjM5Ljg0M10gZW11bC1wcml2LW9wLmM6OTkzOmQw
djAgUkRNU1IgMHhjMDAxMDAxNSB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2
OjEyOjQwLjIyNF0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjAgUkRNU1IgMHhjMDAxMDA0OCB1
bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjIyNF0gZW11bC1wcml2
LW9wLmM6OTkzOmQwdjAgUkRNU1IgMHhjMDAxMTAyYSB1bmltcGxlbWVudGVkCihYRU4pIFsy
MDIwLTEwLTEwIDE2OjEyOjQwLjIyOF0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjAgUkRNU1Ig
MHhjMDAxMDA1NSB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjIy
OV0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjEgUkRNU1IgMHhjMDAxMDA0OCB1bmltcGxlbWVu
dGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjIyOV0gZW11bC1wcml2LW9wLmM6OTkz
OmQwdjEgUkRNU1IgMHhjMDAxMTAyYSB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEw
IDE2OjEyOjQwLjIzMF0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjIgUkRNU1IgMHhjMDAxMDA0
OCB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjIzMF0gZW11bC1w
cml2LW9wLmM6OTkzOmQwdjIgUkRNU1IgMHhjMDAxMTAyYSB1bmltcGxlbWVudGVkCihYRU4p
IFsyMDIwLTEwLTEwIDE2OjEyOjQwLjIzMF0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjMgUkRN
U1IgMHhjMDAxMDA0OCB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQw
LjIzMF0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjMgUkRNU1IgMHhjMDAxMTAyYSB1bmltcGxl
bWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjIzMV0gZW11bC1wcml2LW9wLmM6
OTkzOmQwdjQgUkRNU1IgMHhjMDAxMDA0OCB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjQwLjIzMV0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjQgUkRNU1IgMHhjMDAx
MTAyYSB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjIzMV0gZW11
bC1wcml2LW9wLmM6OTkzOmQwdjUgUkRNU1IgMHhjMDAxMDA0OCB1bmltcGxlbWVudGVkCihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjIzMV0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjUg
UkRNU1IgMHhjMDAxMTAyYSB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEy
OjQwLjI5OV0gZW11bC1wcml2LW9wLmM6OTkzOmQwdjEgUkRNU1IgMHhjMDAxMDAxYSB1bmlt
cGxlbWVudGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjI5OV0gZW11bC1wcml2LW9w
LmM6OTkzOmQwdjEgUkRNU1IgMHhjMDAxMDAxMCB1bmltcGxlbWVudGVkCihYRU4pIFsyMDIw
LTEwLTEwIDE2OjEyOjQwLjM3MV0gUENJOiBVc2luZyBNQ0ZHIGZvciBzZWdtZW50IDAwMDAg
YnVzIDAwLWZmCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3MV0gUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDowMC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3Ml0gUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDowMC4yCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3Ml0g
UENJIGFkZCBkZXZpY2UgMDAwMDowMDowMi4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQw
LjM3Ml0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDowMy4wCihYRU4pIFsyMDIwLTEwLTEwIDE2
OjEyOjQwLjM3M10gUENJIGFkZCBkZXZpY2UgMDAwMDowMDowNS4wCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjQwLjM3M10gUENJIGFkZCBkZXZpY2UgMDAwMDowMDowNi4wCihYRU4pIFsy
MDIwLTEwLTEwIDE2OjEyOjQwLjM3M10gUENJIGFkZCBkZXZpY2UgMDAwMDowMDowOS4wCihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3NF0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDow
YS4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3NF0gUENJIGFkZCBkZXZpY2UgMDAw
MDowMDowYy4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3NF0gUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDowZC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3NV0gUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDoxMS4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3NV0g
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMi4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQw
LjM3NV0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMi4yCihYRU4pIFsyMDIwLTEwLTEwIDE2
OjEyOjQwLjM3Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4wCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjQwLjM3Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxMy4yCihYRU4pIFsy
MDIwLTEwLTEwIDE2OjEyOjQwLjM3Nl0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4wCihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3N10gUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
NC4yCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3N10gUENJIGFkZCBkZXZpY2UgMDAw
MDowMDoxNC4zCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3N10gUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxNC40CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3N10gUENJIGFk
ZCBkZXZpY2UgMDAwMDowMDoxNC41CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3OF0g
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNS4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQw
LjM3OF0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNi4wCihYRU4pIFsyMDIwLTEwLTEwIDE2
OjEyOjQwLjM3OF0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNi4yCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjQwLjM3OF0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC4wCihYRU4pIFsy
MDIwLTEwLTEwIDE2OjEyOjQwLjM3OV0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxOC4xCihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3OV0gUENJIGFkZCBkZXZpY2UgMDAwMDowMDox
OC4yCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3OV0gUENJIGFkZCBkZXZpY2UgMDAw
MDowMDoxOC4zCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM3OV0gUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxOC40CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM4MF0gUENJIGFk
ZCBkZXZpY2UgMDAwMDowZTowMC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjM5MV0g
UENJIGFkZCBkZXZpY2UgMDAwMDowZDowMC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQw
LjM5MV0gUENJIGFkZCBkZXZpY2UgMDAwMDowYzowMC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2
OjEyOjQwLjQwMV0gUENJIGFkZCBkZXZpY2UgMDAwMDowYjowMC4wCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjQwLjQxMV0gUENJIGFkZCBkZXZpY2UgMDAwMDowYTowMC4wCihYRU4pIFsy
MDIwLTEwLTEwIDE2OjEyOjQwLjQxMV0gUENJIGFkZCBkZXZpY2UgMDAwMDowOTowMC4wCihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxMl0gUENJIGFkZCBkZXZpY2UgMDAwMDowOTow
MC4xCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxMl0gUENJIGFkZCBkZXZpY2UgMDAw
MDowOTowMC4yCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxM10gUENJIGFkZCBkZXZp
Y2UgMDAwMDowOTowMC4zCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxM10gUENJIGFk
ZCBkZXZpY2UgMDAwMDowOTowMC40CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxM10g
UENJIGFkZCBkZXZpY2UgMDAwMDowOTowMC41CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQw
LjQxNF0gUENJIGFkZCBkZXZpY2UgMDAwMDowOTowMC42CihYRU4pIFsyMDIwLTEwLTEwIDE2
OjEyOjQwLjQxNF0gUENJIGFkZCBkZXZpY2UgMDAwMDowOTowMC43CihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjQwLjQxNV0gUENJIGFkZCBkZXZpY2UgMDAwMDowNTowMC4wCihYRU4pIFsy
MDIwLTEwLTEwIDE2OjEyOjQwLjQxNV0gUENJIGFkZCBkZXZpY2UgMDAwMDowNjowMS4wCihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxNl0gUENJIGFkZCBkZXZpY2UgMDAwMDowNjow
Mi4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxNl0gUENJIGFkZCBkZXZpY2UgMDAw
MDowODowMC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxN10gUENJIGFkZCBkZXZp
Y2UgMDAwMDowNzowMC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxOF0gUENJIGFk
ZCBkZXZpY2UgMDAwMDowNDowMC4wCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQxOF0g
UENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC4xCihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQw
LjQxOF0gUENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC4yCihYRU4pIFsyMDIwLTEwLTEwIDE2
OjEyOjQwLjQxOV0gUENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC4zCihYRU4pIFsyMDIwLTEw
LTEwIDE2OjEyOjQwLjQxOV0gUENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC40CihYRU4pIFsy
MDIwLTEwLTEwIDE2OjEyOjQwLjQyMF0gUENJIGFkZCBkZXZpY2UgMDAwMDowNDowMC41CihY
RU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQyMF0gUENJIGFkZCBkZXZpY2UgMDAwMDowNDow
MC42CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQwLjQyMF0gUENJIGFkZCBkZXZpY2UgMDAw
MDowNDowMC43CihYRU4pIFsyMDIwLTEwLTEwIDE2OjEyOjQ1LjMwMV0gZDA6IEZvcmNpbmcg
cmVhZC1vbmx5IGFjY2VzcyB0byBNRk4gZmVkMDAKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6
MDQuODMwXSBIVk0gZDF2MCBzYXZlOiBDUFUKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQu
ODMwXSBIVk0gZDF2MSBzYXZlOiBDUFUKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQuODMw
XSBIVk0gZDEgc2F2ZTogUElDCihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA0LjgzMF0gSFZN
IGQxIHNhdmU6IElPQVBJQwooWEVOKSBbMjAyMC0xMC0xMCAxNjoyMTowNC44MzBdIEhWTSBk
MXYwIHNhdmU6IExBUElDCihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA0LjgzMF0gSFZNIGQx
djEgc2F2ZTogTEFQSUMKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQuODMwXSBIVk0gZDF2
MCBzYXZlOiBMQVBJQ19SRUdTCihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA0LjgzMF0gSFZN
IGQxdjEgc2F2ZTogTEFQSUNfUkVHUwooWEVOKSBbMjAyMC0xMC0xMCAxNjoyMTowNC44MzBd
IEhWTSBkMSBzYXZlOiBQQ0lfSVJRCihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA0LjgzMF0g
SFZNIGQxIHNhdmU6IElTQV9JUlEKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQuODMwXSBI
Vk0gZDEgc2F2ZTogUENJX0xJTksKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQuODMwXSBI
Vk0gZDEgc2F2ZTogUElUCihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA0LjgzMF0gSFZNIGQx
IHNhdmU6IFJUQwooWEVOKSBbMjAyMC0xMC0xMCAxNjoyMTowNC44MzBdIEhWTSBkMSBzYXZl
OiBIUEVUCihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA0LjgzMF0gSFZNIGQxIHNhdmU6IFBN
VElNRVIKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQuODMwXSBIVk0gZDF2MCBzYXZlOiBN
VFJSCihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA0LjgzMF0gSFZNIGQxdjEgc2F2ZTogTVRS
UgooWEVOKSBbMjAyMC0xMC0xMCAxNjoyMTowNC44MzBdIEhWTSBkMSBzYXZlOiBWSVJJRElB
Tl9ET01BSU4KKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQuODMwXSBIVk0gZDF2MCBzYXZl
OiBDUFVfWFNBVkUKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQuODMwXSBIVk0gZDF2MSBz
YXZlOiBDUFVfWFNBVkUKKFhFTikgWzIwMjAtMTAtMTAgMTY6MjE6MDQuODMwXSBIVk0gZDF2
MCBzYXZlOiBWSVJJRElBTl9WQ1BVCihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA0LjgzMF0g
SFZNIGQxdjEgc2F2ZTogVklSSURJQU5fVkNQVQooWEVOKSBbMjAyMC0xMC0xMCAxNjoyMTow
NC44MzBdIEhWTSBkMXYwIHNhdmU6IFZNQ0VfVkNQVQooWEVOKSBbMjAyMC0xMC0xMCAxNjoy
MTowNC44MzBdIEhWTSBkMXYxIHNhdmU6IFZNQ0VfVkNQVQooWEVOKSBbMjAyMC0xMC0xMCAx
NjoyMTowNC44MzBdIEhWTSBkMXYwIHNhdmU6IFRTQ19BREpVU1QKKFhFTikgWzIwMjAtMTAt
MTAgMTY6MjE6MDQuODMwXSBIVk0gZDF2MSBzYXZlOiBUU0NfQURKVVNUCihYRU4pIFsyMDIw
LTEwLTEwIDE2OjIxOjA0LjgzMF0gSFZNIGQxdjAgc2F2ZTogQ1BVX01TUgooWEVOKSBbMjAy
MC0xMC0xMCAxNjoyMTowNC44MzBdIEhWTSBkMXYxIHNhdmU6IENQVV9NU1IKKFhFTikgWzIw
MjAtMTAtMTAgMTY6MjE6MDQuODMwXSBIVk0xIHJlc3RvcmU6IENQVSAwCihkMSkgWzIwMjAt
MTAtMTAgMTY6MjE6MDYuMDUxXSBIVk0gTG9hZGVyCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6
MDYuMDUxXSBEZXRlY3RlZCBYZW4gdjQuMTUtdW5zdGFibGUKKGQxKSBbMjAyMC0xMC0xMCAx
NjoyMTowNi4wNTFdIFhlbmJ1cyByaW5ncyBAMHhmZWZmYzAwMCwgZXZlbnQgY2hhbm5lbCAx
CihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMDUxXSBTeXN0ZW0gcmVxdWVzdGVkIFNlYUJJ
T1MKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4wNTFdIENQVSBzcGVlZCBpcyAzMjAwIE1I
egooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjA1MV0gUmVsb2NhdGluZyBndWVzdCBtZW1v
cnkgZm9yIGxvd21lbSBNTUlPIHNwYWNlIGRpc2FibGVkCihYRU4pIFsyMDIwLTEwLTEwIDE2
OjIxOjA2LjA1MV0gaXJxLmM6Mzc4OiBEb20xIFBDSSBsaW5rIDAgY2hhbmdlZCAwIC0+IDUK
KGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4wNTFdIFBDSS1JU0EgbGluayAwIHJvdXRlZCB0
byBJUlE1CihYRU4pIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjA1MV0gaXJxLmM6Mzc4OiBEb20x
IFBDSSBsaW5rIDEgY2hhbmdlZCAwIC0+IDEwCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYu
MDUyXSBQQ0ktSVNBIGxpbmsgMSByb3V0ZWQgdG8gSVJRMTAKKFhFTikgWzIwMjAtMTAtMTAg
MTY6MjE6MDYuMDUyXSBpcnEuYzozNzg6IERvbTEgUENJIGxpbmsgMiBjaGFuZ2VkIDAgLT4g
MTEKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4wNTJdIFBDSS1JU0EgbGluayAyIHJvdXRl
ZCB0byBJUlExMQooWEVOKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4wNTJdIGlycS5jOjM3ODog
RG9tMSBQQ0kgbGluayAzIGNoYW5nZWQgMCAtPiA1CihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6
MDYuMDUyXSBQQ0ktSVNBIGxpbmsgMyByb3V0ZWQgdG8gSVJRNQooZDEpIFsyMDIwLTEwLTEw
IDE2OjIxOjA2LjA2M10gcGNpIGRldiAwMTozIElOVEEtPklSUTEwCihkMSkgWzIwMjAtMTAt
MTAgMTY6MjE6MDYuMDY1XSBwY2kgZGV2IDAyOjAgSU5UQS0+SVJRMTEKKGQxKSBbMjAyMC0x
MC0xMCAxNjoyMTowNi4wNzFdIHBjaSBkZXYgMDQ6MCBJTlRBLT5JUlE1CihkMSkgWzIwMjAt
MTAtMTAgMTY6MjE6MDYuMDkzXSBObyBSQU0gaW4gaGlnaCBtZW1vcnk7IHNldHRpbmcgaGln
aF9tZW0gcmVzb3VyY2UgYmFzZSB0byAxMDAwMDAwMDAKKGQxKSBbMjAyMC0xMC0xMCAxNjoy
MTowNi4wOTNdIHBjaSBkZXYgMDM6MCBiYXIgMTAgc2l6ZSAwMDIwMDAwMDA6IDBmMDAwMDAw
OAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjA5NF0gcGNpIGRldiAwMjowIGJhciAxNCBz
aXplIDAwMTAwMDAwMDogMGYyMDAwMDA4CihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMDk0
XSBwY2kgZGV2IDA0OjAgYmFyIDMwIHNpemUgMDAwMDQwMDAwOiAwZjMwMDAwMDAKKGQxKSBb
MjAyMC0xMC0xMCAxNjoyMTowNi4wOTRdIHBjaSBkZXYgMDM6MCBiYXIgMzAgc2l6ZSAwMDAw
MTAwMDA6IDBmMzA0MDAwMAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjA5NF0gcGNpIGRl
diAwMzowIGJhciAxNCBzaXplIDAwMDAwMTAwMDogMGYzMDUwMDAwCihkMSkgWzIwMjAtMTAt
MTAgMTY6MjE6MDYuMDk0XSBwY2kgZGV2IDAyOjAgYmFyIDEwIHNpemUgMDAwMDAwMTAwOiAw
MDAwMGMwMDEKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4wOTRdIHBjaSBkZXYgMDQ6MCBi
YXIgMTAgc2l6ZSAwMDAwMDAxMDA6IDAwMDAwYzEwMQooZDEpIFsyMDIwLTEwLTEwIDE2OjIx
OjA2LjA5NV0gcGNpIGRldiAwNDowIGJhciAxNCBzaXplIDAwMDAwMDEwMDogMGYzMDUxMDAw
CihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMDk1XSBwY2kgZGV2IDAxOjEgYmFyIDIwIHNp
emUgMDAwMDAwMDEwOiAwMDAwMGMyMDEKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xMDFd
IE11bHRpcHJvY2Vzc29yIGluaXRpYWxpc2F0aW9uOgooZDEpIFsyMDIwLTEwLTEwIDE2OjIx
OjA2LjEwMV0gIC0gQ1BVMCAuLi4gNDgtYml0IHBoeXMgLi4uIGZpeGVkIE1UUlJzIC4uLiB2
YXIgTVRSUnMgWzEvOF0gLi4uIGRvbmUuCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMTAx
XSAgLSBDUFUxIC4uLiA0OC1iaXQgcGh5cyAuLi4gZml4ZWQgTVRSUnMgLi4uIHZhciBNVFJS
cyBbMS84XSAuLi4gZG9uZS4KKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xMDFdIFRlc3Rp
bmcgSFZNIGVudmlyb25tZW50OgooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjEwMV0gVXNp
bmcgc2NyYXRjaCBtZW1vcnkgYXQgNDAwMDAwCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYu
MTEzXSAgLSBSRVAgSU5TQiBhY3Jvc3MgcGFnZSBib3VuZGFyaWVzIC4uLiBwYXNzZWQKKGQx
KSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xMjVdICAtIFJFUCBJTlNXIGFjcm9zcyBwYWdlIGJv
dW5kYXJpZXMgLi4uIHBhc3NlZAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjEzNl0gIC0g
R1MgYmFzZSBNU1JzIGFuZCBTV0FQR1MgLi4uIHBhc3NlZAooZDEpIFsyMDIwLTEwLTEwIDE2
OjIxOjA2LjEzNl0gUGFzc2VkIDMgb2YgMyB0ZXN0cwooZDEpIFsyMDIwLTEwLTEwIDE2OjIx
OjA2LjEzNl0gV3JpdGluZyBTTUJJT1MgdGFibGVzIC4uLgooZDEpIFsyMDIwLTEwLTEwIDE2
OjIxOjA2LjEzN10gTG9hZGluZyBTZWFCSU9TIC4uLgooZDEpIFsyMDIwLTEwLTEwIDE2OjIx
OjA2LjEzN10gQ3JlYXRpbmcgTVAgdGFibGVzIC4uLgooZDEpIFsyMDIwLTEwLTEwIDE2OjIx
OjA2LjEzN10gTG9hZGluZyBBQ1BJIC4uLgooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjEz
OF0gdm04NiBUU1MgYXQgZmMxMDAyODAKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xMzhd
IEJJT1MgbWFwOgooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjEzOF0gIDEwMDAwLTEwMGUz
OiBTY3JhdGNoIHNwYWNlCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMTM4XSAgYzAwMDAt
ZmZmZmY6IE1haW4gQklPUwooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjEzOF0gRTgyMCB0
YWJsZToKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xMzhdICBbMDBdOiAwMDAwMDAwMDow
MDAwMDAwMCAtIDAwMDAwMDAwOjAwMGEwMDAwOiBSQU0KKGQxKSBbMjAyMC0xMC0xMCAxNjoy
MTowNi4xMzhdICBIT0xFOiAwMDAwMDAwMDowMDBhMDAwMCAtIDAwMDAwMDAwOjAwMGMwMDAw
CihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMTM4XSAgWzAxXTogMDAwMDAwMDA6MDAwYzAw
MDAgLSAwMDAwMDAwMDowMDEwMDAwMDogUkVTRVJWRUQKKGQxKSBbMjAyMC0xMC0xMCAxNjoy
MTowNi4xMzhdICBbMDJdOiAwMDAwMDAwMDowMDEwMDAwMCAtIDAwMDAwMDAwOjBmODAwMDAw
OiBSQU0KKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xMzhdICBIT0xFOiAwMDAwMDAwMDow
ZjgwMDAwMCAtIDAwMDAwMDAwOmZjMDAwMDAwCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYu
MTM5XSAgWzAzXTogMDAwMDAwMDA6ZmMwMDAwMDAgLSAwMDAwMDAwMDpmYzAwYjAwMDogQUNQ
SQooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjEzOV0gIFswNF06IDAwMDAwMDAwOmZjMDBi
MDAwIC0gMDAwMDAwMDE6MDAwMDAwMDA6IFJFU0VSVkVECihkMSkgWzIwMjAtMTAtMTAgMTY6
MjE6MDYuMTM5XSBJbnZva2luZyBTZWFCSU9TIC4uLgooZDEpIFsyMDIwLTEwLTEwIDE2OjIx
OjA2LjEzOV0gU2VhQklPUyAodmVyc2lvbiByZWwtMS4xNC4wLTAtZzE1NTgyMWEtWGVuKQoo
ZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjEzOV0gQlVJTEQ6IGdjYzogKERlYmlhbiA4LjMu
MC02KSA4LjMuMCBiaW51dGlsczogKEdOVSBCaW51dGlscyBmb3IgRGViaWFuKSAyLjMxLjEK
KGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xMzldIAooZDEpIFsyMDIwLTEwLTEwIDE2OjIx
OjA2LjEzOV0gRm91bmQgWGVuIGh5cGVydmlzb3Igc2lnbmF0dXJlIGF0IDQwMDAwMDAwCihk
MSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMTQwXSBSdW5uaW5nIG9uIFFFTVUgKGk0NDBmeCkK
KGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xNDBdIHhlbjogY29weSBlODIwLi4uCihkMSkg
WzIwMjAtMTAtMTAgMTY6MjE6MDYuMTQwXSBSZWxvY2F0aW5nIGluaXQgZnJvbSAweDAwMGQ2
MmUwIHRvIDB4MGY3YWE3NjAgKHNpemUgODgwNjQpCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6
MDYuMTQwXSBGb3VuZCBRRU1VIGZ3X2NmZwooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjE0
Ml0gUmFtU2l6ZU92ZXI0RzogMHgwMDAwMDAwMDAwMDAwMDAwIFtjbW9zXQooZDEpIFsyMDIw
LTEwLTEwIDE2OjIxOjA2LjE0Ml0gYm9vdCBvcmRlcjoKKGQxKSBbMjAyMC0xMC0xMCAxNjoy
MTowNi4xNDJdIDE6IC9yb21AZ2Vucm9tcy9saW51eGJvb3QuYmluCihkMSkgWzIwMjAtMTAt
MTAgMTY6MjE6MDYuMTQ2XSBGb3VuZCA3IFBDSSBkZXZpY2VzIChtYXggUENJIGJ1cyBpcyAw
MCkKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4xNDZdIEFsbG9jYXRlZCBYZW4gaHlwZXJj
YWxsIHBhZ2UgYXQgZjdmZjAwMAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjE0Nl0gRGV0
ZWN0ZWQgWGVuIHY0LjE1LXVuc3RhYmxlCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMTQ2
XSB4ZW46IGNvcHkgQklPUyB0YWJsZXMuLi4KKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4x
NDZdIENvcHlpbmcgU01CSU9TIGVudHJ5IHBvaW50IGZyb20gMHgwMDAxMDAyMCB0byAweDAw
MGY1YjgwCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMTQ2XSBDb3B5aW5nIE1QVEFCTEUg
ZnJvbSAweGZjMTAwMTgwL2ZjMTAwMTkwIHRvIDB4MDAwZjVhODAKKGQxKSBbMjAyMC0xMC0x
MCAxNjoyMTowNi4xNDZdIENvcHlpbmcgUElSIGZyb20gMHgwMDAxMDA0MCB0byAweDAwMGY1
YTAwCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMTQ2XSBDb3B5aW5nIEFDUEkgUlNEUCBm
cm9tIDB4MDAwMTAwYzAgdG8gMHgwMDBmNTlkMAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2
LjE0Nl0gdGFibGUoNTA0MzQxNDYpPTB4ZmMwMGEzNzAgKHZpYSB4c2R0KQooZDEpIFsyMDIw
LTEwLTEwIDE2OjIxOjA2LjE0Nl0gVXNpbmcgcG10aW1lciwgaW9wb3J0IDB4YjAwOAooZDEp
IFsyMDIwLTEwLTEwIDE2OjIxOjA2LjE0Nl0gdGFibGUoNTA0MzQxNDYpPTB4ZmMwMGEzNzAg
KHZpYSB4c2R0KQooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjE0Nl0gQUNQSTogcGFyc2Ug
RFNEVCBhdCAweGZjMDAxMDQwIChsZW4gMzc1MzkpCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6
MDYuMTQ2XSBwYXJzZV90ZXJtbGlzdDogcGFyc2UgZXJyb3IsIHNraXAgZnJvbSAxNi8yNzY0
MQooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjE0Nl0gcGFyc2VfdGVybWxpc3Q6IHBhcnNl
IGVycm9yLCBza2lwIGZyb20gODcvNjA0MQooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjE0
Nl0gU2NhbiBmb3IgVkdBIG9wdGlvbiByb20KKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4x
NThdIFJ1bm5pbmcgb3B0aW9uIHJvbSBhdCBjMDAwOjAwMDMKKFhFTikgWzIwMjAtMTAtMTAg
MTY6MjE6MDYuMTU4XSBzdGR2Z2EuYzoxNzM6ZDF2MCBlbnRlcmluZyBzdGR2Z2EgbW9kZQoo
ZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjE2OF0gcG1tIGNhbGwgYXJnMT0wCihkMSkgWzIw
MjAtMTAtMTAgMTY6MjE6MDYuMTY4XSBUdXJuaW5nIG9uIHZnYSB0ZXh0IG1vZGUgY29uc29s
ZQooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjIzMV0gU2VhQklPUyAodmVyc2lvbiByZWwt
MS4xNC4wLTAtZzE1NTgyMWEtWGVuKQooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjIzOF0g
TWFjaGluZSBVVUlEIDFmMzNlMzViLTI0YmUtNGJlZi05ZTQ2LTc5Y2U4YWJmNWM3ZgooZDEp
IFsyMDIwLTEwLTEwIDE2OjIxOjA2LjIzOV0gQVRBIGNvbnRyb2xsZXIgMSBhdCAxZjAvM2Y0
L2MyMDAgKGlycSAxNCBkZXYgOSkKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4yNDBdIEFU
QSBjb250cm9sbGVyIDIgYXQgMTcwLzM3NC9jMjA4IChpcnEgMTUgZGV2IDkpCihkMSkgWzIw
MjAtMTAtMTAgMTY6MjE6MDYuMjQxXSBTZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogSEFMVAoo
ZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjI0MV0gRm91bmQgMCBscHQgcG9ydHMKKGQxKSBb
MjAyMC0xMC0xMCAxNjoyMTowNi4yNDFdIEZvdW5kIDAgc2VyaWFsIHBvcnRzCihkMSkgWzIw
MjAtMTAtMTAgMTY6MjE6MDYuMjQ0XSBhdGEwLTA6IFFFTVUgSEFSRERJU0sgQVRBLTcgSGFy
ZC1EaXNrICg1MTIwIE1pQnl0ZXMpCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMjQ0XSBT
ZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogL3BjaUBpMGNmOC8qQDEsMS9kcml2ZUAwL2Rpc2tA
MAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjI0NF0gU2VhcmNoaW5nIGJpb3MtZ2VvbWV0
cnkgZm9yOiAvcGNpQGkwY2Y4LypAMSwxL2RyaXZlQDAvZGlza0AwCihkMSkgWzIwMjAtMTAt
MTAgMTY6MjE6MDYuMjQ1XSBQUzIga2V5Ym9hcmQgaW5pdGlhbGl6ZWQKKGQxKSBbMjAyMC0x
MC0xMCAxNjoyMTowNi4yNDVdIEFsbCB0aHJlYWRzIGNvbXBsZXRlLgooZDEpIFsyMDIwLTEw
LTEwIDE2OjIxOjA2LjI0NV0gU2NhbiBmb3Igb3B0aW9uIHJvbXMKKGQxKSBbMjAyMC0xMC0x
MCAxNjoyMTowNi4yNjZdIFJ1bm5pbmcgb3B0aW9uIHJvbSBhdCBjOTgwOjAwMDMKKGQxKSBb
MjAyMC0xMC0xMCAxNjoyMTowNi4yNzFdIHBtbSBjYWxsIGFyZzE9MQooZDEpIFsyMDIwLTEw
LTEwIDE2OjIxOjA2LjI3MV0gcG1tIGNhbGwgYXJnMT0wCihkMSkgWzIwMjAtMTAtMTAgMTY6
MjE6MDYuMjcyXSBwbW0gY2FsbCBhcmcxPTEKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4y
NzJdIHBtbSBjYWxsIGFyZzE9MAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjI4OF0gUnVu
bmluZyBvcHRpb24gcm9tIGF0IGNhODA6MDAwMwooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2
LjI4OV0gU2VhcmNoaW5nIGJvb3RvcmRlciBmb3I6IC9wY2lAaTBjZjgvKkA0CihkMSkgWzIw
MjAtMTAtMTAgMTY6MjE6MDYuMjg5XSBTZWFyY2hpbmcgYm9vdG9yZGVyIGZvcjogL3JvbUBn
ZW5yb21zL2xpbnV4Ym9vdC5iaW4KKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4yODldIFNl
YXJjaGluZyBib290b3JkZXIgZm9yOiBIQUxUCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYu
Mjg5XSBkcml2ZSAweDAwMGY1OTYwOiBQQ0hTPTEwNDAyLzE2LzYzIHRyYW5zbGF0aW9uPWxi
YSBMQ0hTPTY1Mi8yNTUvNjMgcz0xMDQ4NTc2MAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2
LjI4OV0gU3BhY2UgYXZhaWxhYmxlIGZvciBVTUI6IGNiMDAwLWViMDAwLCBmNTNhMC1mNTk2
MAooZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjI4OV0gUmV0dXJuZWQgMjU4MDQ4IGJ5dGVz
IG9mIFpvbmVIaWdoCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMjg5XSBlODIwIG1hcCBo
YXMgNyBpdGVtczoKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4yODldICAgMDogMDAwMDAw
MDAwMDAwMDAwMCAtIDAwMDAwMDAwMDAwOWZjMDAgPSAxIFJBTQooZDEpIFsyMDIwLTEwLTEw
IDE2OjIxOjA2LjI4OV0gICAxOiAwMDAwMDAwMDAwMDlmYzAwIC0gMDAwMDAwMDAwMDBhMDAw
MCA9IDIgUkVTRVJWRUQKKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4yODldICAgMjogMDAw
MDAwMDAwMDBmMDAwMCAtIDAwMDAwMDAwMDAxMDAwMDAgPSAyIFJFU0VSVkVECihkMSkgWzIw
MjAtMTAtMTAgMTY6MjE6MDYuMjg5XSAgIDM6IDAwMDAwMDAwMDAxMDAwMDAgLSAwMDAwMDAw
MDBmN2ZmMDAwID0gMSBSQU0KKGQxKSBbMjAyMC0xMC0xMCAxNjoyMTowNi4yODldICAgNDog
MDAwMDAwMDAwZjdmZjAwMCAtIDAwMDAwMDAwMGY4MDAwMDAgPSAyIFJFU0VSVkVECihkMSkg
WzIwMjAtMTAtMTAgMTY6MjE6MDYuMjg5XSAgIDU6IDAwMDAwMDAwZmMwMDAwMDAgLSAwMDAw
MDAwMGZjMDBiMDAwID0gMyBBQ1BJCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYuMjg5XSAg
IDY6IDAwMDAwMDAwZmMwMGIwMDAgLSAwMDAwMDAwMTAwMDAwMDAwID0gMiBSRVNFUlZFRAoo
ZDEpIFsyMDIwLTEwLTEwIDE2OjIxOjA2LjI5MF0gZW50ZXIgaGFuZGxlXzE5OgooZDEpIFsy
MDIwLTEwLTEwIDE2OjIxOjA2LjI5MF0gICBOVUxMCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6
MDYuMjk0XSBCb290aW5nIGZyb20gUk9NLi4uCihkMSkgWzIwMjAtMTAtMTAgMTY6MjE6MDYu
Mjk0XSBCb290aW5nIGZyb20gY2E4MDowMDNjCihYRU4pIFsyMDIwLTEwLTEwIDE2OjI3OjU2
LjcyMV0gJ2gnIHByZXNzZWQgLT4gc2hvd2luZyBpbnN0YWxsZWQgaGFuZGxlcnMKKFhFTikg
WzIwMjAtMTAtMTAgMTY6Mjc6NTYuNzIxXSAga2V5ICclJyAoYXNjaWkgJzI1JykgPT4gdHJh
cCB0byB4ZW5kYmcKKFhFTikgWzIwMjAtMTAtMTAgMTY6Mjc6NTYuNzIxXSAga2V5ICcqJyAo
YXNjaWkgJzJhJykgPT4gcHJpbnQgYWxsIGRpYWdub3N0aWNzCihYRU4pIFsyMDIwLTEwLTEw
IDE2OjI3OjU2LjcyMV0gIGtleSAnKycgKGFzY2lpICcyYicpID0+IGluY3JlYXNlIGxvZyBs
ZXZlbCB0aHJlc2hvbGQKKFhFTikgWzIwMjAtMTAtMTAgMTY6Mjc6NTYuNzIxXSAga2V5ICct
JyAoYXNjaWkgJzJkJykgPT4gZGVjcmVhc2UgbG9nIGxldmVsIHRocmVzaG9sZAooWEVOKSBb
MjAyMC0xMC0xMCAxNjoyNzo1Ni43MjFdICBrZXkgJzAnIChhc2NpaSAnMzAnKSA9PiBkdW1w
IERvbTAgcmVnaXN0ZXJzCihYRU4pIFsyMDIwLTEwLTEwIDE2OjI3OjU2LjcyMV0gIGtleSAn
QScgKGFzY2lpICc0MScpID0+IHRvZ2dsZSBhbHRlcm5hdGl2ZSBrZXkgaGFuZGxpbmcKKFhF
TikgWzIwMjAtMTAtMTAgMTY6Mjc6NTYuNzIxXSAga2V5ICdHJyAoYXNjaWkgJzQ3JykgPT4g
dG9nZ2xlIGhvc3QvZ3Vlc3QgbG9nIGxldmVsIGFkanVzdG1lbnQKKFhFTikgWzIwMjAtMTAt
MTAgMTY6Mjc6NTYuNzIxXSAga2V5ICdIJyAoYXNjaWkgJzQ4JykgPT4gZHVtcCBoZWFwIGlu
Zm8KKFhFTikgWzIwMjAtMTAtMTAgMTY6Mjc6NTYuNzIxXSAga2V5ICdJJyAoYXNjaWkgJzQ5
JykgPT4gZHVtcCBIVk0gaXJxIGluZm8KKFhFTikgWzIwMjAtMTAtMTAgMTY6Mjc6NTYuNzIx
XSAga2V5ICdNJyAoYXNjaWkgJzRkJykgPT4gZHVtcCBNU0kgc3RhdGUKKFhFTikgWzIwMjAt
MTAtMTAgMTY6Mjc6NTYuNzIxXSAga2V5ICdOJyAoYXNjaWkgJzRlJykgPT4gdHJpZ2dlciBh
biBOTUkKKFhFTikgWzIwMjAtMTAtMTAgMTY6Mjc6NTYuNzIxXSAga2V5ICdPJyAoYXNjaWkg
JzRmJykgPT4gdG9nZ2xlIHNoYWRvdyBhdWRpdHMKKFhFTikgWzIwMjAtMTAtMTAgMTY6Mjc6
NTYuNzIxXSAga2V5ICdRJyAoYXNjaWkgJzUxJykgPT4gZHVtcCBQQ0kgZGV2aWNlcwooWEVO
KSBbMjAyMC0xMC0xMCAxNjoyNzo1Ni43MjFdICBrZXkgJ1InIChhc2NpaSAnNTInKSA9PiBy
ZWJvb3QgbWFjaGluZQooWEVOKSBbMjAyMC0xMC0xMCAxNjoyNzo1Ni43MjFdICBrZXkgJ1Mn
IChhc2NpaSAnNTMnKSA9PiByZXNldCBzaGFkb3cgcGFnZXRhYmxlcwooWEVOKSBbMjAyMC0x
MC0xMCAxNjoyNzo1Ni43MjFdICBrZXkgJ1YnIChhc2NpaSAnNTYnKSA9PiBkdW1wIElPTU1V
IGludHJlbWFwIHRhYmxlcwooWEVOKSBbMjAyMC0xMC0xMCAxNjoyNzo1Ni43MjFdICBrZXkg
J2EnIChhc2NpaSAnNjEnKSA9PiBkdW1wIHRpbWVyIHF1ZXVlcwooWEVOKSBbMjAyMC0xMC0x
MCAxNjoyNzo1Ni43MjFdICBrZXkgJ2MnIChhc2NpaSAnNjMnKSA9PiBkdW1wIEFDUEkgQ3gg
c3RydWN0dXJlcwooWEVOKSBbMjAyMC0xMC0xMCAxNjoyNzo1Ni43MjFdICBrZXkgJ2QnIChh
c2NpaSAnNjQnKSA9PiBkdW1wIHJlZ2lzdGVycwooWEVOKSBbMjAyMC0xMC0xMCAxNjoyNzo1
6.721]  key 'e' (ascii '65') => dump evtchn info
(XEN) [2020-10-10 16:27:56.721]  key 'g' (ascii '67') => print grant table usage
(XEN) [2020-10-10 16:27:56.721]  key 'h' (ascii '68') => show this message
(XEN) [2020-10-10 16:27:56.721]  key 'i' (ascii '69') => dump interrupt bindings
(XEN) [2020-10-10 16:27:56.721]  key 'm' (ascii '6d') => memory info
(XEN) [2020-10-10 16:27:56.721]  key 'n' (ascii '6e') => NMI statistics
(XEN) [2020-10-10 16:27:56.721]  key 'o' (ascii '6f') => dump iommu p2m table
(XEN) [2020-10-10 16:27:56.721]  key 'q' (ascii '71') => dump domain (and guest debug) info
(XEN) [2020-10-10 16:27:56.721]  key 'r' (ascii '72') => dump run queues
(XEN) [2020-10-10 16:27:56.721]  key 's' (ascii '73') => dump softtsc stats
(XEN) [2020-10-10 16:27:56.721]  key 't' (ascii '74') => display multi-cpu clock info
(XEN) [2020-10-10 16:27:56.721]  key 'u' (ascii '75') => dump NUMA info
(XEN) [2020-10-10 16:27:56.721]  key 'v' (ascii '76') => dump AMD-V VMCBs
(XEN) [2020-10-10 16:27:56.721]  key 'w' (ascii '77') => synchronously dump console ring buffer (dmesg)
(XEN) [2020-10-10 16:27:56.721]  key 'x' (ascii '78') => print livepatch info
(XEN) [2020-10-10 16:27:56.721]  key 'z' (ascii '7a') => dump IOAPIC info
(XEN) [2020-10-10 16:28:15.913] 'q' pressed -> dumping domain info (now = 945314247960)
(XEN) [2020-10-10 16:28:15.913] General information for domain 0:
(XEN) [2020-10-10 16:28:15.913]     refcnt=4 dying=0 pause_count=0
(XEN) [2020-10-10 16:28:15.913]     nr_pages=524287 xenheap_pages=2 shared_pages=0 paged_pages=0 dirty_cpus={0-4} max_pages=524288
(XEN) [2020-10-10 16:28:15.913]     handle=00000000-0000-0000-0000-000000000000 vm_assist=0000002d
(XEN) [2020-10-10 16:28:15.913] Rangesets belonging to domain 0:
(XEN) [2020-10-10 16:28:15.913]     Interrupts { 1-55, 57-79 }
(XEN) [2020-10-10 16:28:15.913]     I/O Memory { 0-f5fff, f6004-fedff, fef00-fcfffff, 10000000-fffffffff }
(XEN) [2020-10-10 16:28:15.913]     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-807, 80c-cfb, d00-ffff }
(XEN) [2020-10-10 16:28:15.913]     log-dirty  { }
(XEN) [2020-10-10 16:28:15.913] Memory pages belonging to domain 0:
(XEN) [2020-10-10 16:28:15.913]     DomPage list too long to display
(XEN) [2020-10-10 16:28:15.913]     XenPage 00000000000c7f8c: caf=c000000000000002, taf=e400000000000002
(XEN) [2020-10-10 16:28:15.913]     XenPage 000000000069f97f: caf=c000000000000002, taf=e400000000000002
(XEN) [2020-10-10 16:28:15.913] NODE affinity for domain 0: [0]
(XEN) [2020-10-10 16:28:15.913] VCPU information and callbacks for domain 0:
(XEN) [2020-10-10 16:28:15.913]   UNIT0 affinities: hard={0-5} soft={0-5}
(XEN) [2020-10-10 16:28:15.913]     VCPU0: CPU1 [has=F] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpu=1
(XEN) [2020-10-10 16:28:15.913]     pause_count=0 pause_flags=1
(XEN) [2020-10-10 16:28:15.913] No periodic timer
(XEN) [2020-10-10 16:28:15.913]   UNIT1 affinities: hard={0-5} soft={0-5}
(XEN) [2020-10-10 16:28:15.913]     VCPU1: CPU0 [has=T] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpu=0
(XEN) [2020-10-10 16:28:15.913]     pause_count=0 pause_flags=0
(XEN) [2020-10-10 16:28:15.913] No periodic timer
(XEN) [2020-10-10 16:28:15.913]   UNIT2 affinities: hard={0-5} soft={0-5}
(XEN) [2020-10-10 16:28:15.913]     VCPU2: CPU3 [has=F] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpu=3
(XEN) [2020-10-10 16:28:15.913]     pause_count=0 pause_flags=1
(XEN) [2020-10-10 16:28:15.913] No periodic timer
(XEN) [2020-10-10 16:28:15.913]   UNIT3 affinities: hard={0-5} soft={0-5}
(XEN) [2020-10-10 16:28:15.913]     VCPU3: CPU2 [has=F] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpu=2
(XEN) [2020-10-10 16:28:15.913]     pause_count=0 pause_flags=1
(XEN) [2020-10-10 16:28:15.913] No periodic timer
(XEN) [2020-10-10 16:28:15.913]   UNIT4 affinities: hard={0-5} soft={0-5}
(XEN) [2020-10-10 16:28:15.913]     VCPU4: CPU4 [has=T] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpu=4
(XEN) [2020-10-10 16:28:15.913]     pause_count=0 pause_flags=0
(XEN) [2020-10-10 16:28:15.913] No periodic timer
(XEN) [2020-10-10 16:28:15.913]   UNIT5 affinities: hard={0-5} soft={0-5}
(XEN) [2020-10-10 16:28:15.913]     VCPU5: CPU3 [has=F] poll=0 upcall_pend=00 upcall_mask=00 
(XEN) [2020-10-10 16:28:15.913]     pause_count=0 pause_flags=1
(XEN) [2020-10-10 16:28:15.913] No periodic timer
(XEN) [2020-10-10 16:28:15.913] General information for domain 1:
(XEN) [2020-10-10 16:28:15.913]     refcnt=3 dying=0 pause_count=0
(XEN) [2020-10-10 16:28:15.913]     nr_pages=65612 xenheap_pages=2 shared_pages=0 paged_pages=0 dirty_cpus={5} max_pages=65792
(XEN) [2020-10-10 16:28:15.913]     handle=1f33e35b-24be-4bef-9e46-79ce8abf5c7f vm_assist=00000000
(XEN) [2020-10-10 16:28:15.913]     paging assistance: hap refcounts translate external 
(XEN) [2020-10-10 16:28:15.913] Rangesets belonging to domain 1:
(XEN) [2020-10-10 16:28:15.913]     ioreq_server 0 pci { 0, 8-9, b, 10, 18, 20 }
(XEN) [2020-10-10 16:28:15.913]     ioreq_server 0 memory { a0000-bffff, f0000000-f07fffff, f1000000-f13fffff, f2000000-f2ffffff, f3050000-f30510ff, fec00000-fec00fff, fed00000-fed003ff, fee00000-feefffff }
(XEN) [2020-10-10 16:28:15.913]     ioreq_server 0 port { 0-1f, 60, 64, 70-71, 80-83, 87, 89-8b, 8f, 92, b2-b3, c0-df, f0, 170-177, 1f0-1f7, 376, 3b0-3df, 3f1-3f7, 510-511, cf8-cff, ae00-ae13, af00-af1f, afe0-afe3, b000-b005, b008-b00b, c000-c20f }
(XEN) [2020-10-10 16:28:15.913]     Interrupts { }
(XEN) [2020-10-10 16:28:15.913]     I/O Memory { }
(XEN) [2020-10-10 16:28:15.913]     I/O Ports  { }
(XEN) [2020-10-10 16:28:15.913]     log-dirty  { }
(XEN) [2020-10-10 16:28:15.913] Memory pages belonging to domain 1:
(XEN) [2020-10-10 16:28:15.913]     DomPage list too long to display
(XEN) [2020-10-10 16:28:15.913]     PoD entries=0 cachesize=0
(XEN) [2020-10-10 16:28:15.913]     XenPage 00000000000c7dfd: caf=c000000000000001, taf=e400000000000001
(XEN) [2020-10-10 16:28:15.913]     XenPage 0000000000446cc9: caf=c000000000000001, taf=e400000000000001
(XEN) [2020-10-10 16:28:15.913]     ExtraPage 0000000000442ec2: caf=8040000000000003, taf=e400000000000001
(XEN) [2020-10-10 16:28:15.913]     ExtraPage 0000000000442ec1: caf=8040000000000003, taf=e400000000000001
(XEN) [2020-10-10 16:28:15.913] NODE affinity for domain 1: [0]
(XEN) [2020-10-10 16:28:15.913] VCPU information and callbacks for domain 1:
(XEN) [2020-10-10 16:28:15.913]   UNIT0 affinities: hard={2-5} soft={0-5}
(XEN) [2020-10-10 16:28:15.913]     VCPU0: CPU5 [has=F] poll=0 upcall_pend=00 upcall_mask=00 dirty_cpu=5
(XEN) [2020-10-10 16:28:15.913]     pause_count=0 pause_flags=4
(XEN) [2020-10-10 16:28:15.913]     paging assistance: hap, 1 levels
(XEN) [2020-10-10 16:28:15.913] No periodic timer
(XEN) [2020-10-10 16:28:15.913]   UNIT1 affinities: hard={2-5} soft={0-5}
(XEN) [2020-10-10 16:28:15.913]     VCPU1: CPU5 [has=F] poll=0 upcall_pend=00 upcall_mask=00 
(XEN) [2020-10-10 16:28:15.913]     pause_count=0 pause_flags=2
(XEN) [2020-10-10 16:28:15.913]     paging assistance: hap, 1 levels
(XEN) [2020-10-10 16:28:15.913] No periodic timer
(XEN) [2020-10-10 16:28:15.913] Notifying guest 0:0 (virq 1, port 5)
(XEN) [2020-10-10 16:28:15.913] Notifying guest 0:1 (virq 1, port 11)
(XEN) [2020-10-10 16:28:15.913] Notifying guest 0:2 (virq 1, port 18)
(XEN) [2020-10-10 16:28:15.913] Notifying guest 0:3 (virq 1, port 25)
(XEN) [2020-10-10 16:28:15.913] Notifying guest 0:4 (virq 1, port 32)
(XEN) [2020-10-10 16:28:15.913] Notifying guest 0:5 (virq 1, port 39)
(XEN) [2020-10-10 16:28:15.913] Notifying guest 1:0 (virq 1, port 0)
(XEN) [2020-10-10 16:28:15.913] Notifying guest 1:1 (virq 1, port 0)
(XEN) [2020-10-10 16:28:15.913] Shared frames 0 -- Saved frames 0
(XEN) [2020-10-10 16:28:42.384] sched_smt_power_savings: disabled
(XEN) [2020-10-10 16:28:42.384] NOW=971785332421
(XEN) [2020-10-10 16:28:42.384] Online Cpus: 0-5
(XEN) [2020-10-10 16:28:42.384] Cpupool 0:
(XEN) [2020-10-10 16:28:42.384] Cpus: 0-5
(XEN) [2020-10-10 16:28:42.384] Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) [2020-10-10 16:28:42.384] Scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) [2020-10-10 16:28:42.384] Active queues: 1
(XEN) [2020-10-10 16:28:42.384] 	default-weight     = 256
(XEN) [2020-10-10 16:28:42.384] Runqueue 0:
(XEN) [2020-10-10 16:28:42.384] 	ncpus              = 6
(XEN) [2020-10-10 16:28:42.384] 	cpus               = 0-5
(XEN) [2020-10-10 16:28:42.384] 	max_weight         = 1024
(XEN) [2020-10-10 16:28:42.384] 	pick_bias          = 4
(XEN) [2020-10-10 16:28:42.384] 	instload           = 2
(XEN) [2020-10-10 16:28:42.384] 	aveload            = 394963 (~150%)
(XEN) [2020-10-10 16:28:42.384] 	idlers: 2b
(XEN) [2020-10-10 16:28:42.384] 	tickled: 00
(XEN) [2020-10-10 16:28:42.384] 	fully idle cores: 2b
(XEN) [2020-10-10 16:28:42.384] Domain info:
(XEN) [2020-10-10 16:28:42.384] 	Domain: 0 w 1024 c 0 v 6
(XEN) [2020-10-10 16:28:42.384] 	  1: [0.0] flags=2 cpu=4 credit=10500000 [w=1024] load=443 (~0%)
(XEN) [2020-10-10 16:28:42.384] 	  2: [0.1] flags=0 cpu=0 credit=10340892 [w=1024] load=218 (~0%)
(XEN) [2020-10-10 16:28:42.384] 	  3: [0.2] flags=0 cpu=5 credit=10276435 [w=1024] load=278 (~0%)
(XEN) [2020-10-10 16:28:42.384] 	  4: [0.3] flags=0 cpu=4 credit=10500000 [w=1024] load=176 (~0%)
(XEN) [2020-10-10 16:28:42.384] 	  5: [0.4] flags=2 cpu=2 credit=8445124 [w=1024] load=262144 (~100%)
(XEN) [2020-10-10 16:28:42.384] 	  6: [0.5] flags=0 cpu=1 credit=9209406 [w=1024] load=501 (~0%)
(XEN) [2020-10-10 16:28:42.384] 	Domain: 1 w 256 c 0 v 2
(XEN) [2020-10-10 16:28:42.384] 	  7: [1.0] flags=2 cpu=3 credit=1777916 [w=256] load=131072 (~50%)
(XEN) [2020-10-10 16:28:42.384] 	  8: [1.1] flags=0 cpu=5 credit=10500000 [w=256] load=51 (~0%)
(XEN) [2020-10-10 16:28:42.384] Runqueue 0:
(XEN) [2020-10-10 16:28:42.384] CPU[00] runq=0, sibling={0}, core={0-5}
(XEN) [2020-10-10 16:28:42.384] CPU[01] runq=0, sibling={1}, core={0-5}
(XEN) [2020-10-10 16:28:42.384] CPU[02] runq=0, sibling={2}, core={0-5}
(XEN) [2020-10-10 16:28:42.384] 	run: [0.4] flags=2 cpu=2 credit=8445124 [w=1024] load=262144 (~100%)
(XEN) [2020-10-10 16:28:42.384] CPU[03] runq=0, sibling={3}, core={0-5}
(XEN) [2020-10-10 16:28:42.384] CPU[04] runq=0, sibling={4}, core={0-5}
(XEN) [2020-10-10 16:28:42.384] 	run: [0.0] flags=2 cpu=4 credit=10500000 [w=1024] load=443 (~0%)
(XEN) [2020-10-10 16:28:42.384] CPU[05] runq=0, sibling={5}, core={0-5}
(XEN) [2020-10-10 16:28:42.384] RUNQ:
(XEN) [2020-10-10 16:28:42.384] CPUs info:
(XEN) [2020-10-10 16:28:42.384] CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
(XEN) [2020-10-10 16:28:42.384] CPU[01] current=d[IDLE]v1, curr=d[IDLE]v1, prev=NULL
(XEN) [2020-10-10 16:28:42.384] CPU[02] current=d0v4, curr=d0v4, prev=NULL
(XEN) [2020-10-10 16:28:42.384] CPU[03] current=d1v0, curr=d1v0, prev=NULL
(XEN) [2020-10-10 16:28:42.384] CPU[04] current=d0v0, curr=d0v0, prev=NULL
(XEN) [2020-10-10 16:28:42.384] CPU[05] current=d[IDLE]v5, curr=d[IDLE]v5, prev=NULL
--------------C8441FDC7B1161885B47F6B2--


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 19:55:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 19:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5532.14377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRKxb-0006l2-B6; Sat, 10 Oct 2020 19:55:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5532.14377; Sat, 10 Oct 2020 19:55:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRKxb-0006kv-7a; Sat, 10 Oct 2020 19:55:19 +0000
Received: by outflank-mailman (input) for mailman id 5532;
 Sat, 10 Oct 2020 19:55:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRKxZ-0006kq-Vu
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 19:55:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89cbef9a-5b20-448d-8fb9-96b8cd5b91d5;
 Sat, 10 Oct 2020 19:55:13 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRKxV-0004B4-HW; Sat, 10 Oct 2020 19:55:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRKxV-0005rz-7j; Sat, 10 Oct 2020 19:55:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRKxV-00049R-7G; Sat, 10 Oct 2020 19:55:13 +0000
X-Inumbo-ID: 89cbef9a-5b20-448d-8fb9-96b8cd5b91d5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X4eifrTPok4JqlyNOG08xBn6+4A77XSxHcz6NuqOP6o=; b=SFRWj3lmfZ6e1ljFmgS1+PRw4L
	VcHzXTJblBI+MlJ05UYBGNAEP7X9nBUw0ouBhcp178r2apiRZzU+gcZYAwo3WaQm96Q6+X+adwXsV
	atMXv+q5yxEtPNaFaz18XKuUuCbNHNUVNTwtQx2fAccKO7IT36QZUV7dBbQBbfKv2AM0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155645-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155645: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:debian-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 19:55:13 +0000

flight 155645 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155645/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-multivcpu 12 debian-install            fail pass in 155613

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   51 days
Failing since        152659  2020-08-21 14:07:39 Z   50 days   83 attempts
Testing same since   155613  2020-10-09 18:07:54 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42209 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 20:52:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 20:52:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5538.14395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRLqv-00046o-M5; Sat, 10 Oct 2020 20:52:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5538.14395; Sat, 10 Oct 2020 20:52:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRLqv-00046h-J7; Sat, 10 Oct 2020 20:52:29 +0000
Received: by outflank-mailman (input) for mailman id 5538;
 Sat, 10 Oct 2020 20:52:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRLqu-00046c-8E
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 20:52:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9e410fc3-c737-4d52-b296-91b6e31ce8f9;
 Sat, 10 Oct 2020 20:52:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRLqr-0005Pv-Sz; Sat, 10 Oct 2020 20:52:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRLqr-00079y-J3; Sat, 10 Oct 2020 20:52:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRLqr-0000fJ-IY; Sat, 10 Oct 2020 20:52:25 +0000
X-Inumbo-ID: 9e410fc3-c737-4d52-b296-91b6e31ce8f9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZlsBor2ezfBf3ChJYppYgqWEaZeSRsi5xsHCjbfriEk=; b=sACRs3+XsFd8f0By2S9oTbTz5R
	44BFsS4hbTE2M/4le0wjwY9fgVrATyUnmvJRpQkS/So5nQJtzMhDo0AHTz6snSpF2NfPzAJBA1nlu
	5iloarH4vbbbQhuG+ta7ymEGgY1ScKdzujN/PvFNTO8uurcXs4TxmdIEoEKBBnp+5yRA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155662-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155662: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 20:52:25 +0000

flight 155662 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155662/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    1 days
Testing same since   155612  2020-10-09 18:01:22 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 21:55:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 21:55:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5543.14409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRMpV-0001Vn-NR; Sat, 10 Oct 2020 21:55:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5543.14409; Sat, 10 Oct 2020 21:55:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRMpV-0001Vg-K7; Sat, 10 Oct 2020 21:55:05 +0000
Received: by outflank-mailman (input) for mailman id 5543;
 Sat, 10 Oct 2020 21:55:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRMpU-0001Va-3g
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 21:55:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3d5ef2e8-270c-4bc7-873f-ad79660b765e;
 Sat, 10 Oct 2020 21:55:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRMpQ-0006e9-Jo; Sat, 10 Oct 2020 21:55:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRMpQ-0000dv-8d; Sat, 10 Oct 2020 21:55:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRMpQ-00030u-89; Sat, 10 Oct 2020 21:55:00 +0000
X-Inumbo-ID: 3d5ef2e8-270c-4bc7-873f-ad79660b765e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KhGz3ffm07OyR1hYEZJwCUzGwhWT9/mUQ3SaRhDV00Y=; b=euh+TL/1EfmovGEqVlULu3dIfs
	PGFneQEfDdtV1L2uW2Zz155ZM3biq3kTwEzzfqCCnW97lCwr21AKk9tlBzwDgE5ryZXgmD4Nzt68l
	No0pJEFmZQ6G25xyF/Qjp+Rz3r4FFeITgSArg8l/bO8BRCWOo+XhUZzZFq+8p8vdLF5g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155639-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155639: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=6f2f486d57c4d562cdf4932320b66fbb878ab1c4
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 21:55:00 +0000

flight 155639 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155639/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     17 guest-saverestore        fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                6f2f486d57c4d562cdf4932320b66fbb878ab1c4
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   71 days
Failing since        152366  2020-08-01 20:49:34 Z   70 days  117 attempts
Testing same since   155639  2020-10-10 06:58:43 Z    0 days    1 attempts

------------------------------------------------------------
2510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 339458 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 23:02:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 23:02:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5548.14427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRNsJ-0008AB-Kr; Sat, 10 Oct 2020 23:02:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5548.14427; Sat, 10 Oct 2020 23:02:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRNsJ-0008A4-Hr; Sat, 10 Oct 2020 23:02:03 +0000
Received: by outflank-mailman (input) for mailman id 5548;
 Sat, 10 Oct 2020 23:02:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LTlF=DR=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kRNsI-00089z-OS
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 23:02:02 +0000
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3b02001-4839-4e87-8d2b-07a670331f39;
 Sat, 10 Oct 2020 23:02:00 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 10 Oct 2020 16:01:58 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 10 Oct 2020 16:01:56 -0700
X-Inumbo-ID: c3b02001-4839-4e87-8d2b-07a670331f39
IronPort-SDR: lE3wQofHjIJVvOhUcpp6rmJ+AcUanf7MWUL1/j73hBpJH+mb8vdnffTAVBkMLCfmYXP2HYk+Iv
 8lMoPV62uteQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9770"; a="164823030"
X-IronPort-AV: E=Sophos;i="5.77,360,1596524400"; 
   d="scan'208";a="164823030"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: u8hpYVeZMmLfNxJlrYKazUxwnAD4tdLG9jg9nK8c6q04PE64rSl1u0/Oby62JZ45YUl7qtSnq6
 /O/xXGC90zkg==
X-IronPort-AV: E=Sophos;i="5.77,360,1596524400"; 
   d="scan'208";a="529414067"
Date: Sat, 10 Oct 2020 16:01:55 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	David Airlie <airlied@linux.ie>,
	Patrik Jakobsson <patrik.r.jakobsson@gmail.com>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 09/58] drivers/gpu: Utilize new kmap_thread()
Message-ID: <20201010230155.GX2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-10-ira.weiny@intel.com>
 <20201009220349.GQ438822@phenom.ffwll.local>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201009220349.GQ438822@phenom.ffwll.local>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Sat, Oct 10, 2020 at 12:03:49AM +0200, Daniel Vetter wrote:
> On Fri, Oct 09, 2020 at 12:49:44PM -0700, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> > 
> > These kmap() calls in the gpu stack are localized to a single thread.
> > To avoid the overhead of global PKRS updates, use the new kmap_thread()
> > call.
> > 
> > Cc: David Airlie <airlied@linux.ie>
> > Cc: Daniel Vetter <daniel@ffwll.ch>
> > Cc: Patrik Jakobsson <patrik.r.jakobsson@gmail.com>
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> 
> I'm guessing the entire pile goes in through some other tree.
>

Apologies for not realizing there were multiple maintainers here.

But I was thinking it would all land together through the mm tree once the core
support lands.  I've tried to split these patches out in a way that they can be
easily reviewed/acked by the correct developers.

> If so:
> 
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> 
> If you want this to land through maintainer trees, then we need a
> per-driver split (since aside from amdgpu and radeon they're all different
> subtrees).

It is just an RFC for the moment.  I need to get the core support accepted
first; then this can land.

> 
> btw the two kmap calls in drm you highlight in the cover letter should
> also be convertible to kmap_thread. We only hold vmalloc mappings for a
> longer time (or it'd be quite a driver bug). So if you want maybe throw
> those two as two additional patches on top, and we can do some careful
> review & testing for them.

Cool.  I'll add them in.

Ira

> -Daniel
> 
> > ---
> >  drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c              | 12 ++++++------
> >  drivers/gpu/drm/gma500/gma_display.c                 |  4 ++--
> >  drivers/gpu/drm/gma500/mmu.c                         | 10 +++++-----
> >  drivers/gpu/drm/i915/gem/i915_gem_shmem.c            |  4 ++--
> >  .../gpu/drm/i915/gem/selftests/i915_gem_context.c    |  4 ++--
> >  drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c   |  8 ++++----
> >  drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c         |  4 ++--
> >  drivers/gpu/drm/i915/gt/intel_gtt.c                  |  4 ++--
> >  drivers/gpu/drm/i915/gt/shmem_utils.c                |  4 ++--
> >  drivers/gpu/drm/i915/i915_gem.c                      |  8 ++++----
> >  drivers/gpu/drm/i915/i915_gpu_error.c                |  4 ++--
> >  drivers/gpu/drm/i915/selftests/i915_perf.c           |  4 ++--
> >  drivers/gpu/drm/radeon/radeon_ttm.c                  |  4 ++--
> >  13 files changed, 37 insertions(+), 37 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > index 978bae731398..bd564bccb7a3 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> > @@ -2437,11 +2437,11 @@ static ssize_t amdgpu_ttm_gtt_read(struct file *f, char __user *buf,
> >  
> >  		page = adev->gart.pages[p];
> >  		if (page) {
> > -			ptr = kmap(page);
> > +			ptr = kmap_thread(page);
> >  			ptr += off;
> >  
> >  			r = copy_to_user(buf, ptr, cur_size);
> > -			kunmap(adev->gart.pages[p]);
> > +			kunmap_thread(adev->gart.pages[p]);
> >  		} else
> >  			r = clear_user(buf, cur_size);
> >  
> > @@ -2507,9 +2507,9 @@ static ssize_t amdgpu_iomem_read(struct file *f, char __user *buf,
> >  		if (p->mapping != adev->mman.bdev.dev_mapping)
> >  			return -EPERM;
> >  
> > -		ptr = kmap(p);
> > +		ptr = kmap_thread(p);
> >  		r = copy_to_user(buf, ptr + off, bytes);
> > -		kunmap(p);
> > +		kunmap_thread(p);
> >  		if (r)
> >  			return -EFAULT;
> >  
> > @@ -2558,9 +2558,9 @@ static ssize_t amdgpu_iomem_write(struct file *f, const char __user *buf,
> >  		if (p->mapping != adev->mman.bdev.dev_mapping)
> >  			return -EPERM;
> >  
> > -		ptr = kmap(p);
> > +		ptr = kmap_thread(p);
> >  		r = copy_from_user(ptr + off, buf, bytes);
> > -		kunmap(p);
> > +		kunmap_thread(p);
> >  		if (r)
> >  			return -EFAULT;
> >  
> > diff --git a/drivers/gpu/drm/gma500/gma_display.c b/drivers/gpu/drm/gma500/gma_display.c
> > index 3df6d6e850f5..35f4e55c941f 100644
> > --- a/drivers/gpu/drm/gma500/gma_display.c
> > +++ b/drivers/gpu/drm/gma500/gma_display.c
> > @@ -400,9 +400,9 @@ int gma_crtc_cursor_set(struct drm_crtc *crtc,
> >  		/* Copy the cursor to cursor mem */
> >  		tmp_dst = dev_priv->vram_addr + cursor_gt->offset;
> >  		for (i = 0; i < cursor_pages; i++) {
> > -			tmp_src = kmap(gt->pages[i]);
> > +			tmp_src = kmap_thread(gt->pages[i]);
> >  			memcpy(tmp_dst, tmp_src, PAGE_SIZE);
> > -			kunmap(gt->pages[i]);
> > +			kunmap_thread(gt->pages[i]);
> >  			tmp_dst += PAGE_SIZE;
> >  		}
> >  
> > diff --git a/drivers/gpu/drm/gma500/mmu.c b/drivers/gpu/drm/gma500/mmu.c
> > index 505044c9a673..fba7a3a461fd 100644
> > --- a/drivers/gpu/drm/gma500/mmu.c
> > +++ b/drivers/gpu/drm/gma500/mmu.c
> > @@ -192,20 +192,20 @@ struct psb_mmu_pd *psb_mmu_alloc_pd(struct psb_mmu_driver *driver,
> >  		pd->invalid_pte = 0;
> >  	}
> >  
> > -	v = kmap(pd->dummy_pt);
> > +	v = kmap_thread(pd->dummy_pt);
> >  	for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i)
> >  		v[i] = pd->invalid_pte;
> >  
> > -	kunmap(pd->dummy_pt);
> > +	kunmap_thread(pd->dummy_pt);
> >  
> > -	v = kmap(pd->p);
> > +	v = kmap_thread(pd->p);
> >  	for (i = 0; i < (PAGE_SIZE / sizeof(uint32_t)); ++i)
> >  		v[i] = pd->invalid_pde;
> >  
> > -	kunmap(pd->p);
> > +	kunmap_thread(pd->p);
> >  
> >  	clear_page(kmap(pd->dummy_page));
> > -	kunmap(pd->dummy_page);
> > +	kunmap_thread(pd->dummy_page);
> >  
> >  	pd->tables = vmalloc_user(sizeof(struct psb_mmu_pt *) * 1024);
> >  	if (!pd->tables)
> > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> > index 38113d3c0138..274424795fb7 100644
> > --- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> > +++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
> > @@ -566,9 +566,9 @@ i915_gem_object_create_shmem_from_data(struct drm_i915_private *dev_priv,
> >  		if (err < 0)
> >  			goto fail;
> >  
> > -		vaddr = kmap(page);
> > +		vaddr = kmap_thread(page);
> >  		memcpy(vaddr, data, len);
> > -		kunmap(page);
> > +		kunmap_thread(page);
> >  
> >  		err = pagecache_write_end(file, file->f_mapping,
> >  					  offset, len, len,
> > diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> > index 7ffc3c751432..b466c677d007 100644
> > --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> > +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_context.c
> > @@ -1754,7 +1754,7 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
> >  		return -EINVAL;
> >  	}
> >  
> > -	vaddr = kmap(page);
> > +	vaddr = kmap_thread(page);
> >  	if (!vaddr) {
> >  		pr_err("No (mappable) scratch page!\n");
> >  		return -EINVAL;
> > @@ -1765,7 +1765,7 @@ static int check_scratch_page(struct i915_gem_context *ctx, u32 *out)
> >  		pr_err("Inconsistent initial state of scratch page!\n");
> >  		err = -EINVAL;
> >  	}
> > -	kunmap(page);
> > +	kunmap_thread(page);
> >  
> >  	return err;
> >  }
> > diff --git a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> > index 9c7402ce5bf9..447df22e2e06 100644
> > --- a/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> > +++ b/drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
> > @@ -143,7 +143,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
> >  	intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt);
> >  
> >  	p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
> > -	cpu = kmap(p) + offset_in_page(offset);
> > +	cpu = kmap_thread(p) + offset_in_page(offset);
> >  	drm_clflush_virt_range(cpu, sizeof(*cpu));
> >  	if (*cpu != (u32)page) {
> >  		pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
> > @@ -161,7 +161,7 @@ static int check_partial_mapping(struct drm_i915_gem_object *obj,
> >  	}
> >  	*cpu = 0;
> >  	drm_clflush_virt_range(cpu, sizeof(*cpu));
> > -	kunmap(p);
> > +	kunmap_thread(p);
> >  
> >  out:
> >  	__i915_vma_put(vma);
> > @@ -236,7 +236,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
> >  		intel_gt_flush_ggtt_writes(&to_i915(obj->base.dev)->gt);
> >  
> >  		p = i915_gem_object_get_page(obj, offset >> PAGE_SHIFT);
> > -		cpu = kmap(p) + offset_in_page(offset);
> > +		cpu = kmap_thread(p) + offset_in_page(offset);
> >  		drm_clflush_virt_range(cpu, sizeof(*cpu));
> >  		if (*cpu != (u32)page) {
> >  			pr_err("Partial view for %lu [%u] (offset=%llu, size=%u [%llu, row size %u], fence=%d, tiling=%d, stride=%d) misalignment, expected write to page (%llu + %u [0x%llx]) of 0x%x, found 0x%x\n",
> > @@ -254,7 +254,7 @@ static int check_partial_mappings(struct drm_i915_gem_object *obj,
> >  		}
> >  		*cpu = 0;
> >  		drm_clflush_virt_range(cpu, sizeof(*cpu));
> > -		kunmap(p);
> > +		kunmap_thread(p);
> >  		if (err)
> >  			return err;
> >  
> > diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
> > index 7fb36b12fe7a..38da348282f1 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
> > @@ -731,7 +731,7 @@ static void swizzle_page(struct page *page)
> >  	char *vaddr;
> >  	int i;
> >  
> > -	vaddr = kmap(page);
> > +	vaddr = kmap_thread(page);
> >  
> >  	for (i = 0; i < PAGE_SIZE; i += 128) {
> >  		memcpy(temp, &vaddr[i], 64);
> > @@ -739,7 +739,7 @@ static void swizzle_page(struct page *page)
> >  		memcpy(&vaddr[i + 64], temp, 64);
> >  	}
> >  
> > -	kunmap(page);
> > +	kunmap_thread(page);
> >  }
> >  
> >  /**
> > diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
> > index 2a72cce63fd9..4cfb24e9ed62 100644
> > --- a/drivers/gpu/drm/i915/gt/intel_gtt.c
> > +++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
> > @@ -312,9 +312,9 @@ static void poison_scratch_page(struct page *page, unsigned long size)
> >  	do {
> >  		void *vaddr;
> >  
> > -		vaddr = kmap(page);
> > +		vaddr = kmap_thread(page);
> >  		memset(vaddr, POISON_FREE, PAGE_SIZE);
> > -		kunmap(page);
> > +		kunmap_thread(page);
> >  
> >  		page = pfn_to_page(page_to_pfn(page) + 1);
> >  		size -= PAGE_SIZE;
> > diff --git a/drivers/gpu/drm/i915/gt/shmem_utils.c b/drivers/gpu/drm/i915/gt/shmem_utils.c
> > index 43c7acbdc79d..a40d3130cebf 100644
> > --- a/drivers/gpu/drm/i915/gt/shmem_utils.c
> > +++ b/drivers/gpu/drm/i915/gt/shmem_utils.c
> > @@ -142,12 +142,12 @@ static int __shmem_rw(struct file *file, loff_t off,
> >  		if (IS_ERR(page))
> >  			return PTR_ERR(page);
> >  
> > -		vaddr = kmap(page);
> > +		vaddr = kmap_thread(page);
> >  		if (write)
> >  			memcpy(vaddr + offset_in_page(off), ptr, this);
> >  		else
> >  			memcpy(ptr, vaddr + offset_in_page(off), this);
> > -		kunmap(page);
> > +		kunmap_thread(page);
> >  		put_page(page);
> >  
> >  		len -= this;
> > diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
> > index 9aa3066cb75d..cae8300fd224 100644
> > --- a/drivers/gpu/drm/i915/i915_gem.c
> > +++ b/drivers/gpu/drm/i915/i915_gem.c
> > @@ -312,14 +312,14 @@ shmem_pread(struct page *page, int offset, int len, char __user *user_data,
> >  	char *vaddr;
> >  	int ret;
> >  
> > -	vaddr = kmap(page);
> > +	vaddr = kmap_thread(page);
> >  
> >  	if (needs_clflush)
> >  		drm_clflush_virt_range(vaddr + offset, len);
> >  
> >  	ret = __copy_to_user(user_data, vaddr + offset, len);
> >  
> > -	kunmap(page);
> > +	kunmap_thread(page);
> >  
> >  	return ret ? -EFAULT : 0;
> >  }
> > @@ -708,7 +708,7 @@ shmem_pwrite(struct page *page, int offset, int len, char __user *user_data,
> >  	char *vaddr;
> >  	int ret;
> >  
> > -	vaddr = kmap(page);
> > +	vaddr = kmap_thread(page);
> >  
> >  	if (needs_clflush_before)
> >  		drm_clflush_virt_range(vaddr + offset, len);
> > @@ -717,7 +717,7 @@ shmem_pwrite(struct page *page, int offset, int len, char __user *user_data,
> >  	if (!ret && needs_clflush_after)
> >  		drm_clflush_virt_range(vaddr + offset, len);
> >  
> > -	kunmap(page);
> > +	kunmap_thread(page);
> >  
> >  	return ret ? -EFAULT : 0;
> >  }
> > diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
> > index 3e6cbb0d1150..aecd469b6b6e 100644
> > --- a/drivers/gpu/drm/i915/i915_gpu_error.c
> > +++ b/drivers/gpu/drm/i915/i915_gpu_error.c
> > @@ -1058,9 +1058,9 @@ i915_vma_coredump_create(const struct intel_gt *gt,
> >  
> >  			drm_clflush_pages(&page, 1);
> >  
> > -			s = kmap(page);
> > +			s = kmap_thread(page);
> >  			ret = compress_page(compress, s, dst, false);
> > -			kunmap(page);
> > +			kunmap_thread(page);
> >  
> >  			drm_clflush_pages(&page, 1);
> >  
> > diff --git a/drivers/gpu/drm/i915/selftests/i915_perf.c b/drivers/gpu/drm/i915/selftests/i915_perf.c
> > index c2d001d9c0ec..7f7ef2d056f4 100644
> > --- a/drivers/gpu/drm/i915/selftests/i915_perf.c
> > +++ b/drivers/gpu/drm/i915/selftests/i915_perf.c
> > @@ -307,7 +307,7 @@ static int live_noa_gpr(void *arg)
> >  	}
> >  
> >  	/* Poison the ce->vm so we detect writes not to the GGTT gt->scratch */
> > -	scratch = kmap(ce->vm->scratch[0].base.page);
> > +	scratch = kmap_thread(ce->vm->scratch[0].base.page);
> >  	memset(scratch, POISON_FREE, PAGE_SIZE);
> >  
> >  	rq = intel_context_create_request(ce);
> > @@ -405,7 +405,7 @@ static int live_noa_gpr(void *arg)
> >  out_rq:
> >  	i915_request_put(rq);
> >  out_ce:
> > -	kunmap(ce->vm->scratch[0].base.page);
> > +	kunmap_thread(ce->vm->scratch[0].base.page);
> >  	intel_context_put(ce);
> >  out:
> >  	stream_destroy(stream);
> > diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
> > index 004344dce140..0aba0cac51e1 100644
> > --- a/drivers/gpu/drm/radeon/radeon_ttm.c
> > +++ b/drivers/gpu/drm/radeon/radeon_ttm.c
> > @@ -1013,11 +1013,11 @@ static ssize_t radeon_ttm_gtt_read(struct file *f, char __user *buf,
> >  
> >  		page = rdev->gart.pages[p];
> >  		if (page) {
> > -			ptr = kmap(page);
> > +			ptr = kmap_thread(page);
> >  			ptr += off;
> >  
> >  			r = copy_to_user(buf, ptr, cur_size);
> > -			kunmap(rdev->gart.pages[p]);
> > +			kunmap_thread(rdev->gart.pages[p]);
> >  		} else
> >  			r = clear_user(buf, cur_size);
> >  
> > -- 
> > 2.28.0.rc0.12.gb6a658bd00c9
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Sat Oct 10 23:25:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 10 Oct 2020 23:25:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5553.14441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kROEO-0001nE-Nq; Sat, 10 Oct 2020 23:24:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5553.14441; Sat, 10 Oct 2020 23:24:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kROEO-0001n7-Ke; Sat, 10 Oct 2020 23:24:52 +0000
Received: by outflank-mailman (input) for mailman id 5553;
 Sat, 10 Oct 2020 23:24:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cL7A=DR=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kROEM-0001n2-Vn
 for xen-devel@lists.xenproject.org; Sat, 10 Oct 2020 23:24:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb274123-c732-48e3-8847-4f3d7a0d633d;
 Sat, 10 Oct 2020 23:24:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kROEK-0008Sl-OU; Sat, 10 Oct 2020 23:24:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kROEK-00043S-H3; Sat, 10 Oct 2020 23:24:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kROEK-0006rU-GW; Sat, 10 Oct 2020 23:24:48 +0000
X-Inumbo-ID: bb274123-c732-48e3-8847-4f3d7a0d633d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d4x6dmdw3JYHptzwcVSL79i6GFIqNjKHkb8X9rlIw2k=; b=v+p8dh0Fw3gGwYKQlVLogsYNzs
	DDvbViOJZZhUOp/bad/EMURSknmu4MdRXqoJU9b6rTtl2h9vgqba/L3irTD0KWGPVGKz9QQ+cS5bq
	P3B72e6wnW1mkdFzRDXaVUYGxreACZiT3stzYFQvv8sKWKrxvHHJM4FIOw6NFRlC54YU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155667-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155667: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 10 Oct 2020 23:24:48 +0000

flight 155667 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155667/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    1 days
Testing same since   155612  2020-10-09 18:01:22 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM and
    architecture-specific files, into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need for a separate loader such as grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
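
    As an illustration of the bundling step this commit describes, the
    sketch below uses objcopy to embed placeholder files as named sections
    and then extract one back out. All file names and contents here are
    invented for the demo; only the section names (.config, .kernel,
    .ramdisk) come from the commit message, and a real unified image would
    start from the actual xen.efi PE binary rather than a scratch ELF
    object.

    ```shell
    # Hypothetical sketch of objcopy-based section bundling.
    # Placeholder inputs; a real setup would use the genuine files.
    set -e

    printf 'dom0_mem=4G\n' > xen.cfg      # stand-in for the Xen config file
    printf 'kernel-bytes'  > vmlinuz      # stand-in for the dom0 kernel
    printf 'initrd-bytes'  > initrd.img   # stand-in for the dom0 initrd

    # Fabricate a small ELF object to play the role of xen.efi.
    printf 'base' > base.bin
    objcopy -I binary -O elf64-x86-64 -B i386:x86-64 base.bin xen.efi

    # Embed each input file as a named section.
    objcopy \
        --add-section .config=xen.cfg \
        --add-section .kernel=vmlinuz \
        --add-section .ramdisk=initrd.img \
        xen.efi xen.unified.efi

    # The embedded config can be dumped back out for inspection.
    objcopy --dump-section .config=config.out xen.unified.efi
    cat config.out                        # shows the embedded xen.cfg
    ```

    This mirrors how a distribution could sign the whole xen.unified.efi
    once, while the boot-time code locates each section by name.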

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful, because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 00:06:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 00:06:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5557.14457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kROsP-0006JU-0f; Sun, 11 Oct 2020 00:06:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5557.14457; Sun, 11 Oct 2020 00:06:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kROsO-0006JN-Tv; Sun, 11 Oct 2020 00:06:12 +0000
Received: by outflank-mailman (input) for mailman id 5557;
 Sun, 11 Oct 2020 00:06:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CCit=DS=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kROsO-0006JH-0h
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 00:06:12 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b936d2fe-b81f-4f13-a6e1-196cba0676c6;
 Sun, 11 Oct 2020 00:06:08 +0000 (UTC)
X-Inumbo-ID: b936d2fe-b81f-4f13-a6e1-196cba0676c6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602374768;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=dYEOxio7772ItDxMpmD0lMpQJAns+yid55pYiW2kXek=;
  b=XTKkB6CJweiVl68AH2GNJT3iWtmSgkKB26Tky6FPxHPMywP1tmoNQw8L
   /9m82ELWbh9xf8YsVNz/r0J2/Bye9CBECxWjLIQq48rCKdoHMHZgch4Z4
   Sq7cK+IeSJdrlq4Zs6/fgcH/PNqKw1xcPmSFy5p1V7LXexz0AI5nboeX7
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Nz4t/PtSFcIHSjBW/V4VqUzm5V0GhfM6O7tyxrtRBLuMOk6Pa+DCVrf2yg0wcHIsUKnPepaciK
 jQOf5ljCm+OmuorSxL2rgibYTd3HdYkfXy5kIl7TMHmfjSJsvZRH95/A/7/3cN+BfVDVFY4BwO
 zyn8w9JyUyH8qcWvEFqF0kYclH1tvV/+W9V1GTjd5Vit0ke9fQg9N5nQnDAIQa1rE8dqaiZSJH
 yR3mzJhSGmN9n5MamXGH895Om5WQS8mdv1SPZFe7zsSQEIE+Re7iBZYYE5T/iWsSZGnM6SH93F
 Xcg=
X-SBRS: 2.5
X-MesageID: 28810665
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,360,1596513600"; 
   d="scan'208";a="28810665"
Subject: Re: [SUSPECTED SPAM] Xen-unstable: can't boot HVM guests, bisected to
 commit: "hvmloader: indicate ACPI tables with "ACPI data" type in e820"
To: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>
References: <9293a9e1-e507-4788-5460-d5ec9abc1af9@eikelenboom.it>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <bbc026b0-06f1-a052-030d-d6757dda89b9@citrix.com>
Date: Sun, 11 Oct 2020 01:06:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9293a9e1-e507-4788-5460-d5ec9abc1af9@eikelenboom.it>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/10/2020 18:51, Sander Eikelenboom wrote:
> Hi Igor/Jan,
> 
> I tried to update my AMD machine to current xen-unstable, but
> unfortunately the HVM guests no longer boot after that. The guest keeps
> consuming CPU cycles, but I don't get to a command prompt (or any output at
> all). PVH guests run fine.
> 
> Bisection leads to commit:
> 
> 8efa46516c5f4cf185c8df179812c185d3c27eb6
> hvmloader: indicate ACPI tables with "ACPI data" type in e820
> 
> I tried xen-unstable with this commit reverted and with that everything
> works fine.
> 
> I attached the xl-dmesg output.

Which guests are you using? Could you get serial output from the guest?
Is the problem AMD-specific? If it's a Linux guest, could you get a stack
trace from it using xenctx?

We have tested the change on all modern guests in our Citrix lab and haven't
found any problems for several months.
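
For reference, grabbing such a trace typically looks something like the
sketch below. The domain ID and System.map path are placeholders, the
install prefix of xenctx varies by distribution, and the exact flags may
differ between Xen versions, so treat this as an assumption rather than
a recipe.

```shell
# Hypothetical xenctx invocation for inspecting a stuck HVM guest.
# 42 and the System.map path below are made-up placeholders.
xl list                                  # find the guest's domain ID
xenctx -a 42                             # dump register state for all vCPUs
xenctx -s /boot/System.map-5.9.0 42      # resolve addresses to kernel symbols
```

Pausing the domain first (xl pause) gives a consistent snapshot of the
vCPU state.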

Igor


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 02:20:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 02:20:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5565.14475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRQxi-00039L-0l; Sun, 11 Oct 2020 02:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5565.14475; Sun, 11 Oct 2020 02:19:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRQxh-00039C-PF; Sun, 11 Oct 2020 02:19:49 +0000
Received: by outflank-mailman (input) for mailman id 5565;
 Sun, 11 Oct 2020 02:19:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRQxg-000397-1C
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 02:19:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 477e0e0e-4e48-4978-b09b-d8a6a18b810e;
 Sun, 11 Oct 2020 02:19:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRQxZ-00054S-Jg; Sun, 11 Oct 2020 02:19:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRQxY-0002nQ-VZ; Sun, 11 Oct 2020 02:19:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRQxY-0002D0-V3; Sun, 11 Oct 2020 02:19:40 +0000
X-Inumbo-ID: 477e0e0e-4e48-4978-b09b-d8a6a18b810e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZUm9PsfhujJpYNH2Ke+Ne8hnrnAJgQUMJAbghixeni8=; b=wC0yFwlte39PeQ0GNjlrhYp9Vk
	ZPRBKBktL34pF2DpznubqsQWVPXUoB7WDWAJsTSjRS38VeZMBJV+TYDp4YdgbJo+FJ39YDzvi6udC
	mDl/Y9caE9/HUJcIjZBDgS784cY5m31AeTQXQC9i3qYEt//CKbz0XUoI1+HFLat8I9Mk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155665-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155665: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:debian-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:debian-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-pvshim:debian-fixup:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 02:19:40 +0000

flight 155665 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155665/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-multivcpu 12 debian-install  fail in 155645 pass in 155665
 test-amd64-amd64-xl-pvhv2-amd 12 debian-install            fail pass in 155645
 test-amd64-amd64-xl-pvshim   13 debian-fixup               fail pass in 155645
 test-amd64-i386-pair    27 guest-migrate/dst_host/src_host fail pass in 155645

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   51 days
Failing since        152659  2020-08-21 14:07:39 Z   50 days   84 attempts
Testing same since   155613  2020-10-09 18:07:54 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42209 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 02:49:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 02:49:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5572.14492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRRQO-00066Q-Dr; Sun, 11 Oct 2020 02:49:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5572.14492; Sun, 11 Oct 2020 02:49:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRRQO-00066J-Aw; Sun, 11 Oct 2020 02:49:28 +0000
Received: by outflank-mailman (input) for mailman id 5572;
 Sun, 11 Oct 2020 02:49:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRRQM-00065r-VD
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 02:49:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af356e0d-edaa-43e3-8065-2166806ee28f;
 Sun, 11 Oct 2020 02:49:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRRQF-0005eH-Cg; Sun, 11 Oct 2020 02:49:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRRQF-0004Oy-3q; Sun, 11 Oct 2020 02:49:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRRQF-0000W0-3O; Sun, 11 Oct 2020 02:49:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRRQM-00065r-VD
	for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 02:49:26 +0000
X-Inumbo-ID: af356e0d-edaa-43e3-8065-2166806ee28f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id af356e0d-edaa-43e3-8065-2166806ee28f;
	Sun, 11 Oct 2020 02:49:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bkF7N3/3Tv4SlR1FEew5XotWuubdIZV923LJqBe4X1s=; b=wKzWyF0kvaeWAzI+/Z4eR9dRdD
	f/zh/sfCdT5uKklkPM26XGJQGMm5RuQgOu3He2ZQNRnajSXSAwUt5gzdN84murK0ySH8ZkDF3bxr0
	DZUekZs7yzQf6LhAqhrmR+/bdqrWHiXJiEbn+XaP+T/nPArgXhyO5sXdTYDX6d3Qav5U=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRRQF-0005eH-Cg; Sun, 11 Oct 2020 02:49:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRRQF-0004Oy-3q; Sun, 11 Oct 2020 02:49:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRRQF-0000W0-3O; Sun, 11 Oct 2020 02:49:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155671-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155671: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 02:49:19 +0000

flight 155671 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155671/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    2 days
Testing same since   155612  2020-10-09 18:01:22 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 06:07:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 06:07:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5582.14517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRUVz-0000GM-Or; Sun, 11 Oct 2020 06:07:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5582.14517; Sun, 11 Oct 2020 06:07:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRUVz-0000GF-Lt; Sun, 11 Oct 2020 06:07:27 +0000
Received: by outflank-mailman (input) for mailman id 5582;
 Sun, 11 Oct 2020 06:07:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRUVy-0000GA-Bz
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 06:07:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e43b288-d6cc-4a90-97ee-3d5f3e7eb037;
 Sun, 11 Oct 2020 06:07:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRUVu-0001mM-TU; Sun, 11 Oct 2020 06:07:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRUVu-0004nG-LR; Sun, 11 Oct 2020 06:07:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRUVu-0003EW-Kz; Sun, 11 Oct 2020 06:07:22 +0000
X-Inumbo-ID: 2e43b288-d6cc-4a90-97ee-3d5f3e7eb037
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1Tly/vd8z+de/uqwqtU6asmRe5dSwf8Dt7GQU7xhSSU=; b=0sXEEUYLk11FJnbXYFSQQK/h4u
	dUlEuedTUfptpPMU7NC6f1zzZLAoyKKndYHK2JCwJhskLY16M6kRqY0Z0wk6wzwo5sfPP3i+CiACx
	mGdKLCCv2PWkIHVN8BwK7gUl3gifHBTg9Vm0TUe443dLTzqJRmDJavtP13BlzU4VNxqY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155676-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155676: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 06:07:22 +0000

flight 155676 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155676/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    2 days
Testing same since   155612  2020-10-09 18:01:22 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user-provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
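[Editorial aside, not part of the patch: the section lookup that pe_find_section() performs — walking from the DOS header to the COFF section table of xen.efi's own loaded image — can be sketched in Python. The offsets follow the published PE/COFF header layout; the function name mirrors the C helper, but the minimal error handling is an assumption of this sketch, and the real code validates far more.]

```python
import struct

def pe_find_section(image: bytes, name: bytes):
    """Return (file_offset, size) of a named PE section's raw data,
    or None if the section is absent. Minimal sketch; real code must
    bounds-check every field it reads."""
    if image[:2] != b"MZ":                       # DOS header magic
        return None
    (e_lfanew,) = struct.unpack_from("<I", image, 0x3C)
    if image[e_lfanew:e_lfanew + 4] != b"PE\0\0":
        return None
    coff = e_lfanew + 4
    (n_sections,) = struct.unpack_from("<H", image, coff + 2)
    (opt_size,) = struct.unpack_from("<H", image, coff + 16)
    table = coff + 20 + opt_size                 # section table follows the optional header
    for i in range(n_sections):
        hdr = table + i * 40                     # each section header is 40 bytes
        if image[hdr:hdr + 8].rstrip(b"\0") == name:
            # SizeOfRawData and PointerToRawData sit at offsets 16 and 20
            size, offset = struct.unpack_from("<II", image, hdr + 16)
            return offset, size
    return None
```

[Given an image to which objcopy has added a `.config` section as described in the commit, pe_find_section(image, b".config") yields where the embedded configuration file lives in the binary.]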

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    were allocated by the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 06:39:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 06:39:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5590.14536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRV0e-0003FG-CM; Sun, 11 Oct 2020 06:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5590.14536; Sun, 11 Oct 2020 06:39:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRV0e-0003F9-9J; Sun, 11 Oct 2020 06:39:08 +0000
Received: by outflank-mailman (input) for mailman id 5590;
 Sun, 11 Oct 2020 06:39:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRV0c-0003Eb-K6
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 06:39:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd69707d-4819-4d3b-93ce-76d7d89d1b42;
 Sun, 11 Oct 2020 06:38:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRV0U-0002P2-Kr; Sun, 11 Oct 2020 06:38:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRV0U-0006h5-CD; Sun, 11 Oct 2020 06:38:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRV0U-0003Jm-BS; Sun, 11 Oct 2020 06:38:58 +0000
X-Inumbo-ID: bd69707d-4819-4d3b-93ce-76d7d89d1b42
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=heAw+uzysnICrom8sgRV8vh1ybcMGjvwCqNPbItHUeE=; b=46bvTyJpgOFGcoTWsDijwSop+j
	M1ddb2F9f/epJ/BwdINDnFU56VLNQVwjGlNoDAESmZVkbxUhNalLafEVl5niycuCgE8wkC3IIUfUM
	YoO17IxLyP566DUCcd7bSXvhniCP4Ya6X/kjKP2g5Gk+D8Zz/r5aBlva0wAnKsYVw0KQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155668-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155668: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:heisenbug
    linux-linus:test-amd64-amd64-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=6f2f486d57c4d562cdf4932320b66fbb878ab1c4
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 06:38:58 +0000

flight 155668 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155668/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds    17 guest-saverestore fail in 155639 pass in 155668
 test-amd64-amd64-pair   27 guest-migrate/dst_host/src_host fail pass in 155639

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                6f2f486d57c4d562cdf4932320b66fbb878ab1c4
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   71 days
Failing since        152366  2020-08-01 20:49:34 Z   70 days  118 attempts
Testing same since   155639  2020-10-10 06:58:43 Z    0 days    2 attempts

------------------------------------------------------------
2510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 339458 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 09:32:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 09:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5607.14561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRXho-0003p7-IA; Sun, 11 Oct 2020 09:31:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5607.14561; Sun, 11 Oct 2020 09:31:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRXho-0003p0-Db; Sun, 11 Oct 2020 09:31:52 +0000
Received: by outflank-mailman (input) for mailman id 5607;
 Sun, 11 Oct 2020 09:31:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRXhn-0003ov-MP
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 09:31:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3dfbe72-908e-406f-bb94-581271113e6f;
 Sun, 11 Oct 2020 09:31:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRXhl-0006QS-9z; Sun, 11 Oct 2020 09:31:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRXhl-0008Mn-2s; Sun, 11 Oct 2020 09:31:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRXhl-0005Di-2P; Sun, 11 Oct 2020 09:31:49 +0000
X-Inumbo-ID: a3dfbe72-908e-406f-bb94-581271113e6f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=U4iqs77gVTUh/9vPYg91z7/pJA4ye7Y9BvTbFSQAEgE=; b=mgZwKS2gV89PUTiDYQ4kYtpZcp
	Om0TRYxxlGgC/nBNqGZkYs6oyxoOi+9oES24pHcHqfmK6l7C19FgCwNqKlTf+j6rftVyazMdm/5cD
	mZj+ioSQlfBmTylXsgfnLW1dawBne88jI5dpUyLUEb9AaMPIOaPx6QJHi/z/zWYYBwCU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155683-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155683: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 09:31:49 +0000

flight 155683 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155683/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    2 days
Testing same since   155612  2020-10-09 18:01:22 Z    1 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
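
    [Editorial note: the section-bundling step described above can be
    sketched with objcopy. This is a hypothetical illustration, not
    Xen's actual build rule; every file name is a placeholder, and a
    throwaway ELF stands in for xen.efi.]

    ```shell
    # Sketch of the unified-image technique: attach payload files as
    # named sections so the boot code can find them by name.
    set -e
    printf 'int main(void){return 0;}\n' > stub.c
    cc -o stub.elf stub.c                 # stand-in for the xen.efi binary
    printf 'kernel-bytes' > vmlinuz       # stand-in dom0 kernel
    printf 'initrd-bytes' > initrd.img    # stand-in dom0 initrd
    # Section names match the ones the commit says Xen looks for.
    objcopy \
      --add-section .kernel=vmlinuz \
      --add-section .ramdisk=initrd.img \
      stub.elf unified.elf
    # Verify the payloads are now locatable by section name:
    objdump -h unified.elf | grep -E '\.(kernel|ramdisk)'
    ```

    [A real build would start from the actual xen.efi PE binary and also
    add .config, .xsm, and (per-arch) .ucode or .dtb sections.]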

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 09:43:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 09:43:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5610.14575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRXtQ-0004tf-LA; Sun, 11 Oct 2020 09:43:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5610.14575; Sun, 11 Oct 2020 09:43:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRXtQ-0004tY-HT; Sun, 11 Oct 2020 09:43:52 +0000
Received: by outflank-mailman (input) for mailman id 5610;
 Sun, 11 Oct 2020 09:43:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DJVw=DS=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1kRXtP-0004tS-BH
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 09:43:51 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8a1d4c0-983c-4068-8f73-ac25a1f009e1;
 Sun, 11 Oct 2020 09:43:48 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:54102
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1kRXwS-0006uf-Ue; Sun, 11 Oct 2020 11:47:01 +0200
X-Inumbo-ID: b8a1d4c0-983c-4068-8f73-ac25a1f009e1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:To:Subject:Sender:
	Reply-To:Cc:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=TbcVuUDAMh5k/dQPsShWjaIzQRL9Lld6a1obbcUbXcw=; b=f6qzkOfJuQlVCi+hQqNshOWPBk
	7bvAZvh2M1DG7C784lNEeKEyKmJk/dBOW7Rp5L4gRxI0xGI7DyylKNid00gW0n+e+Y7dhssN2nkri
	jiJZCH1H72spxtS0OQkVIp2jGWwhH5mt9z72GmzuaQGQ7CXYtXTMbTsBtwDxEqta/XPQ=;
Subject: Re: [SUSPECTED SPAM]Xen-unstable :can't boot HVM guests, bisected to
 commit: "hvmloader: indicate ACPI tables with "ACPI data" type in e820"
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>
References: <9293a9e1-e507-4788-5460-d5ec9abc1af9@eikelenboom.it>
 <bbc026b0-06f1-a052-030d-d6757dda89b9@citrix.com>
From: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <24413d2e-5665-bc36-452b-af5c9b1af0b8@eikelenboom.it>
Date: Sun, 11 Oct 2020 11:43:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <bbc026b0-06f1-a052-030d-d6757dda89b9@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/10/2020 02:06, Igor Druzhinin wrote:
> On 10/10/2020 18:51, Sander Eikelenboom wrote:
>> Hi Igor/Jan,
>>
>> I tried to update my AMD machine to current xen-unstable, but
>> unfortunately the HVM guests don't boot after that. The guest keeps
>> using CPU-cycles but I don't get to a command prompt (or any output at
>> all). PVH guests run fine.
>>
>> Bisection leads to commit:
>>
>> 8efa46516c5f4cf185c8df179812c185d3c27eb6
>> hvmloader: indicate ACPI tables with "ACPI data" type in e820
>>
>> I tried xen-unstable with this commit reverted and with that everything
>> works fine.
>>
>> I attached the xl-dmesg output.
> 
> What guests are you using? 
Not sure I understand what you're asking for, but:
dom0 PV
guest HVM (qemu-xen)

> Could you get serial output from the guest?
Not getting any; it seems to be stuck in very early boot.

> Is it AMD specific?
Can't tell, this is the only machine I test xen-unstable on.
It's an AMD Phenom X6.
Both dom0 and guest kernel are 5.9-rc8.

Tested with guest config:
kernel      = '/boot/vmlinuz-xen-guest'
ramdisk     = '/boot/initrd.img-xen-guest'

cmdline     = 'root=UUID=7cc4a90d-d6b0-4958-bb7d-50497aa29f18 ro
nomodeset console=tty1 console=ttyS0 console=hvc0 earlyprintk=xen'

type='hvm'

device_model_version = 'qemu-xen'

cpus        = "2-5"
vcpus = 2

memory      = '512'

disk        = [
                  'phy:/dev/xen_vms_ssd/media,xvda,w'
              ]

name        = 'guest'

vif         = [ 'bridge=xen_bridge,ip=192.168.1.10,mac=00:16:3E:DC:0A:F1' ]

on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'preserve'

vnc=0


>If it's a Linux guest could you get a stacktrace from
> the guest using xenctx?

It is; here are a few subsequent runs:

~# /usr/local/lib/xen/bin/xenctx -s
/boot/System.map-5.9.0-rc8-20201010-doflr-mac80211debug+ -f -a -C 4
vcpu0:
cs:eip: ca80:00000256
flags: 00000016 nz a p
ss:esp: 0000:00006f38
eax: 029e0012	ebx: 0000fb00	ecx: 028484e3	edx: 00000511
esi: 00000000	edi: f97b7363	ebp: 00006f38
 ds:     ca80	 es:     0010	 fs:     0000	 gs:     0000

cr0: 00000011
cr2: 00000000
cr3: 00400000
cr4: 00000000

dr0: 00000000
dr1: 00000000
dr2: 00000000
dr3: 00000000
dr6: ffff0ff0
dr7: 00000400
Code (instr addr 00000256)
ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff <00> f0
53 ff 00 f0 53 ff 00 f0 53



vcpu1 offline

~# /usr/local/lib/xen/bin/xenctx -s
/boot/System.map-5.9.0-rc8-20201010-doflr-mac80211debug+ -f -a -C 4
vcpu0:
cs:eip: ca80:00000256
flags: 00000016 nz a p
ss:esp: 0000:00006f38
eax: 029e0012	ebx: 0000fb00	ecx: 028444b7	edx: 00000511
esi: 00000000	edi: f97bb38f	ebp: 00006f38
 ds:     ca80	 es:     0010	 fs:     0000	 gs:     0000

cr0: 00000011
cr2: 00000000
cr3: 00400000
cr4: 00000000

dr0: 00000000
dr1: 00000000
dr2: 00000000
dr3: 00000000
dr6: ffff0ff0
dr7: 00000400
Code (instr addr 00000256)
ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff <00> f0
53 ff 00 f0 53 ff 00 f0 53



vcpu1 offline

~# /usr/local/lib/xen/bin/xenctx -s
/boot/System.map-5.9.0-rc8-20201010-doflr-mac80211debug+ -f -a -C 4
vcpu0:
cs:eip: ca80:00000256
flags: 00000016 nz a p
ss:esp: 0000:00006f38
eax: 029e0012	ebx: 0000fb00	ecx: 02840901	edx: 00000511
esi: 00000000	edi: f97bef45	ebp: 00006f38
 ds:     ca80	 es:     0010	 fs:     0000	 gs:     0000

cr0: 00000011
cr2: 00000000
cr3: 00400000
cr4: 00000000

dr0: 00000000
dr1: 00000000
dr2: 00000000
dr3: 00000000
dr6: ffff0ff0
dr7: 00000400
Code (instr addr 00000256)
ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff <00> f0
53 ff 00 f0 53 ff 00 f0 53



vcpu1 offline

~# /usr/local/lib/xen/bin/xenctx -s
/boot/System.map-5.9.0-rc8-20201010-doflr-mac80211debug+ -f -a -C 4
vcpu0:
cs:eip: ca80:00000256
flags: 00000016 nz a p
ss:esp: 0000:00006f38
eax: 029e0012	ebx: 0000fb00	ecx: 0283d4bd	edx: 00000511
esi: 00000000	edi: f97c2389	ebp: 00006f38
 ds:     ca80	 es:     0010	 fs:     0000	 gs:     0000

cr0: 00000011
cr2: 00000000
cr3: 00400000
cr4: 00000000

dr0: 00000000
dr1: 00000000
dr2: 00000000
dr3: 00000000
dr6: ffff0ff0
dr7: 00000400
Code (instr addr 00000256)
ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff <00> f0
53 ff 00 f0 53 ff 00 f0 53



vcpu1 offline

~# /usr/local/lib/xen/bin/xenctx -s
/boot/System.map-5.9.0-rc8-20201010-doflr-mac80211debug+ -f -a -C 4
vcpu0:
cs:eip: ca80:00000256
flags: 00000016 nz a p
ss:esp: 0000:00006f38
eax: 029e0012	ebx: 0000fb00	ecx: 02838e90	edx: 00000511
esi: 00000000	edi: f97c69b6	ebp: 00006f38
 ds:     ca80	 es:     0010	 fs:     0000	 gs:     0000

cr0: 00000011
cr2: 00000000
cr3: 00400000
cr4: 00000000

dr0: 00000000
dr1: 00000000
dr2: 00000000
dr3: 00000000
dr6: ffff0ff0
dr7: 00000400
Code (instr addr 00000256)
ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff 00 f0 53 ff <00> f0
53 ff 00 f0 53 ff 00 f0 53



vcpu1 offline
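
[Editorial note: the repeating cs:eip above decodes as a real-mode
address, physical = (segment << 4) + offset; a quick check, not part
of the original report:]

```shell
# Decode the real-mode cs:eip pair from the xenctx dumps.
cs=0xCA80
eip=0x0256
printf 'phys = 0x%X\n' $(( (cs << 4) + eip ))
# prints: phys = 0xCAA56
```

[0xCAA56 lies in the legacy option-ROM/BIOS region below 1 MiB, i.e.
the guest is looping in early firmware-era code, consistent with a
very early boot hang.]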


> We have tested the change on all modern guests in our Citrix lab and haven't
> found any problem for several months. 

> Igor
> 

--
Sander



From xen-devel-bounces@lists.xenproject.org Sun Oct 11 09:48:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 09:48:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5613.14589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRXxw-0005DP-6n; Sun, 11 Oct 2020 09:48:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5613.14589; Sun, 11 Oct 2020 09:48:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRXxw-0005DI-3d; Sun, 11 Oct 2020 09:48:32 +0000
Received: by outflank-mailman (input) for mailman id 5613;
 Sun, 11 Oct 2020 09:48:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRXxu-0005Cm-Qd
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 09:48:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c817d854-e401-4193-9054-c314dd544e82;
 Sun, 11 Oct 2020 09:48:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRXxo-0006mE-As; Sun, 11 Oct 2020 09:48:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRXxo-0000eP-2V; Sun, 11 Oct 2020 09:48:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRXxo-0002o2-22; Sun, 11 Oct 2020 09:48:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRXxu-0005Cm-Qd
	for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 09:48:30 +0000
X-Inumbo-ID: c817d854-e401-4193-9054-c314dd544e82
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c817d854-e401-4193-9054-c314dd544e82;
	Sun, 11 Oct 2020 09:48:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=96X1lfLJoQ2nDZNEucYInMbkLiBWtlGfhJHV6a/4q8M=; b=iCf+5UQZ577JX+PnkiFUrcL5Vn
	jh3YYvgkSdF3VLOkSSX4XRX3SCXTaoTD+7IPSBNzwuP2Hj0VxkityDjj83ZokU38anrY8cqq4yobp
	ar5yhsTLilb6zRnQIlS+jXfUWuGqCKC0mg1MgfwqRzj7krIbaruGo9sx5xvOc/wIEHA4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRXxo-0006mE-As; Sun, 11 Oct 2020 09:48:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRXxo-0000eP-2V; Sun, 11 Oct 2020 09:48:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRXxo-0002o2-22; Sun, 11 Oct 2020 09:48:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155687-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 155687: all pass - PUSHED
X-Osstest-Versions-This:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
X-Osstest-Versions-That:
    xen=93508595d588afe9dca087f95200effb7cedc81f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 09:48:24 +0000

flight 155687 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155687/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
baseline version:
 xen                  93508595d588afe9dca087f95200effb7cedc81f

Last test of basis   155515  2020-10-07 09:19:32 Z    4 days
Testing same since   155687  2020-10-11 09:19:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roman Shaposhnik <roman@zededa.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   93508595d5..25849c8b16  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 10:13:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 10:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5617.14605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRYM0-00083O-BN; Sun, 11 Oct 2020 10:13:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5617.14605; Sun, 11 Oct 2020 10:13:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRYM0-00083H-8B; Sun, 11 Oct 2020 10:13:24 +0000
Received: by outflank-mailman (input) for mailman id 5617;
 Sun, 11 Oct 2020 10:13:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRYLy-00083C-RS
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 10:13:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 769de4c6-68d1-4df1-91e3-2c19c072780f;
 Sun, 11 Oct 2020 10:13:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRYLu-0007Me-VW; Sun, 11 Oct 2020 10:13:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRYLu-0001Qy-O9; Sun, 11 Oct 2020 10:13:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRYLu-0007qK-Ne; Sun, 11 Oct 2020 10:13:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRYLy-00083C-RS
	for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 10:13:22 +0000
X-Inumbo-ID: 769de4c6-68d1-4df1-91e3-2c19c072780f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 769de4c6-68d1-4df1-91e3-2c19c072780f;
	Sun, 11 Oct 2020 10:13:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uARo8gooId8cGz3kBJEYR8XjzPLnaWNV1ApMSTGFZgk=; b=N0Qt5jfrYuWY3kmMLDOCQyLTYG
	MNX5jegG+hGKquXsWfYEAaMvZHUELtOjv6WDBBypl9HiJLL7MxqrzEIqPAeteDTrrp8Vm7j0oG88M
	eog7TE2VG/aJw0BWfZ2osluZiZhbvm1LkdzvalR5wWQ5Z1GoiNusD6Hn5bDvT6jc6w98=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRYLu-0007Me-VW; Sun, 11 Oct 2020 10:13:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRYLu-0001Qy-O9; Sun, 11 Oct 2020 10:13:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRYLu-0007qK-Ne; Sun, 11 Oct 2020 10:13:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155678-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155678: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=7382a7c2bef9d0f74a364a13b8b4ec8c08ffd1e5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 10:13:18 +0000

flight 155678 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155678/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              7382a7c2bef9d0f74a364a13b8b4ec8c08ffd1e5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   93 days
Failing since        151818  2020-07-11 04:18:52 Z   92 days   87 attempts
Testing same since   155634  2020-10-10 04:19:54 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 20775 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 10:40:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 10:40:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5652.14659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRYlv-0002zT-MT; Sun, 11 Oct 2020 10:40:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5652.14659; Sun, 11 Oct 2020 10:40:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRYlv-0002zM-J9; Sun, 11 Oct 2020 10:40:11 +0000
Received: by outflank-mailman (input) for mailman id 5652;
 Sun, 11 Oct 2020 10:40:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CCit=DS=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kRYlu-0002zH-Ci
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 10:40:10 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 423286cd-ae2d-47e7-967d-02826e164a25;
 Sun, 11 Oct 2020 10:40:08 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CCit=DS=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
	id 1kRYlu-0002zH-Ci
	for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 10:40:10 +0000
X-Inumbo-ID: 423286cd-ae2d-47e7-967d-02826e164a25
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 423286cd-ae2d-47e7-967d-02826e164a25;
	Sun, 11 Oct 2020 10:40:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602412808;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=Jh8UfBfcxQz0Av/ZjvCOrikS4XhNNBeDoYLcxC4dop4=;
  b=KHZig0/xSI6ako5h1qsXqqjReeH7uph/+jDHiYANm/Qv4ItzJnDfVQvV
   7vUqEmGOoNUZHoe47T+xqxdTAC1xeVWpiFx+1kHHGYXSQro3Brlg52PMk
   hfFDgC9VvTDgu0kU3wfjonxaHde5Argx1wfpywZsAGMnKJF+vj3kjGUg5
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: /FiDJ1BaYzq8ZrPucYfk86aq9HAhHQGUliFWhIZiP+gkHlh7DQFsCIXnXZO8YnliYvxA/GhVCL
 7Qvt5Mo6dAUqvPQpFsZ+MJyLUWqd18IzOEDTh2H0MDSkOV5sUOqXC8zK3nQpQ+rJQQoxcPQPXd
 ClS7UE71k82ZJwV2vcwZUlBVxlYY+C2fqyilKu4NEPrv8Rks6xxenb9AmLNFDD7JMTe6x9kw+w
 Bi0oROLYC0+LGtSxTtQwVwzP6/ZuRGEKIomCU6lBWfAW/SvRvvHoteVIdMKxSsNnjp5hE954ql
 DV0=
X-SBRS: 2.5
X-MesageID: 29003191
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,362,1596513600"; 
   d="scan'208";a="29003191"
Subject: Re: [SUSPECTED SPAM]Xen-unstable :can't boot HVM guests, bisected to
 commit: "hvmloader: indicate ACPI tables with "ACPI data" type in e820"
To: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>
References: <9293a9e1-e507-4788-5460-d5ec9abc1af9@eikelenboom.it>
 <bbc026b0-06f1-a052-030d-d6757dda89b9@citrix.com>
 <24413d2e-5665-bc36-452b-af5c9b1af0b8@eikelenboom.it>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <a7e46051-999d-fa5a-6707-d4c6e61727bb@citrix.com>
Date: Sun, 11 Oct 2020 11:40:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <24413d2e-5665-bc36-452b-af5c9b1af0b8@eikelenboom.it>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/10/2020 10:43, Sander Eikelenboom wrote:
> On 11/10/2020 02:06, Igor Druzhinin wrote:
>> On 10/10/2020 18:51, Sander Eikelenboom wrote:
>>> Hi Igor/Jan,
>>>
>>> I tried to update my AMD machine to current xen-unstable, but
>>> unfortunately the HVM guests don't boot after that. The guest keeps
>>> using CPU-cycles but I don't get to a command prompt (or any output at
>>> all). PVH guests run fine.
>>>
>>> Bisection leads to commit:
>>>
>>> 8efa46516c5f4cf185c8df179812c185d3c27eb6
>>> hvmloader: indicate ACPI tables with "ACPI data" type in e820
>>>
>>> I tried xen-unstable with this commit reverted and with that everything
>>> works fine.
>>>
>>> I attached the xl-dmesg output.
>>
>> What guests are you using? 
> Not sure I understand what you ask for, but:
> dom0 PV
> guest HVM (qemu-xen)
> 
>> Could you get serial output from the guest?
> Not getting any; it seems to be stuck in very early boot.
> 
>> Is it AMD specific?
> Can't tell, this is the only machine I test xen-unstable on.
> It's an AMD Phenom X6.
> Both dom0 and guest kernel are 5.9-rc8.
> 
> Tested with guest config:
> kernel      = '/boot/vmlinuz-xen-guest'
> ramdisk     = '/boot/initrd.img-xen-guest'
> 
> cmdline     = 'root=UUID=7cc4a90d-d6b0-4958-bb7d-50497aa29f18 ro
> nomodeset console=tty1 console=ttyS0 console=hvc0 earlyprintk=xen'
> 
> type='hvm'
> 
> device_model_version = 'qemu-xen'
> 
> cpus        = "2-5"
> vcpus = 2
> 
> memory      = '512'
> 
> disk        = [
>                   'phy:/dev/xen_vms_ssd/media,xvda,w'
>               ]
> 
> name        = 'guest'
> 
> vif         = [ 'bridge=xen_bridge,ip=192.168.1.10,mac=00:16:3E:DC:0A:F1' ]
> 
> on_poweroff = 'destroy'
> on_reboot   = 'restart'
> on_crash    = 'preserve'
> 
> vnc=0
> 
> 
>> If it's a Linux guest could you get a stacktrace from
>> the guest using xenctx?
> 
> It is, here are few subsequent runs:
> 
> ~# /usr/local/lib/xen/bin/xenctx -s
> /boot/System.map-5.9.0-rc8-20201010-doflr-mac80211debug+ -f -a -C 4
> vcpu0:
> cs:eip: ca80:00000256

Ok, it's stuck in the linuxboot.bin option ROM. That's not something we test at Citrix -
we don't use fw_cfg. It could be something with caching (given it's moving, but slowly) or a
bug uncovered by the memory map changes. I'll try to get a repro on Monday.

It could be AMD specific if it's caching related and that's why osstest didn't pick it up.

Igor


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 11:19:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 11:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5657.14672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRZNm-0006Dw-L7; Sun, 11 Oct 2020 11:19:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5657.14672; Sun, 11 Oct 2020 11:19:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRZNm-0006Dp-IC; Sun, 11 Oct 2020 11:19:18 +0000
Received: by outflank-mailman (input) for mailman id 5657;
 Sun, 11 Oct 2020 11:19:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRZNl-0006DM-1F
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 11:19:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id daaf5d0b-7c76-46ca-a594-16aa7b6ace5a;
 Sun, 11 Oct 2020 11:19:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRZNc-0000FG-TM; Sun, 11 Oct 2020 11:19:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRZNc-0003l3-L3; Sun, 11 Oct 2020 11:19:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRZNc-0000IH-KX; Sun, 11 Oct 2020 11:19:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRZNl-0006DM-1F
	for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 11:19:17 +0000
X-Inumbo-ID: daaf5d0b-7c76-46ca-a594-16aa7b6ace5a
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id daaf5d0b-7c76-46ca-a594-16aa7b6ace5a;
	Sun, 11 Oct 2020 11:19:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GMMbTeo8/G6upun/RcXZaOVlM5Xvn8rTd73gkkVAu1g=; b=HY7eCKyYyM4gXFESHxNtk9tjQp
	WpToXwUjfWy9v8yTKeU4dVQf2IBsubMA6rt0HPKx/wb0A1ptvX1u1x/22834nDL45S6f9AT9Ldj5V
	rmGdOn7iFhEWp+vXDCovy/I1VsyoSFt3P4ucgzASvD4n3T5qNCeYvjxm3hkKz8GVsdhE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRZNc-0000FG-TM; Sun, 11 Oct 2020 11:19:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRZNc-0003l3-L3; Sun, 11 Oct 2020 11:19:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRZNc-0000IH-KX; Sun, 11 Oct 2020 11:19:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155673-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155673: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-shadow:host-install(5):broken:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-raw:leak-check/check:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 11:19:08 +0000

flight 155673 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155673/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-shadow      <job status>                 broken

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-shadow    5 host-install(5)          broken pass in 155630
 test-armhf-armhf-libvirt-raw 20 leak-check/check fail in 155630 pass in 155673
 test-amd64-amd64-libvirt-pair 27 guest-migrate/dst_host/src_host fail pass in 155630
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 155630
 test-armhf-armhf-libvirt      8 xen-boot                   fail pass in 155630

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 155630 like 155600
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 155630 never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155630
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155630
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155630
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155630
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155630
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155630
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155630
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155630
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155673  2020-10-11 01:51:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   broken  
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-shadow broken
broken-step test-amd64-amd64-xl-shadow host-install(5)

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Oct 11 11:20:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 11:20:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5658.14685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRZPF-0006zW-1D; Sun, 11 Oct 2020 11:20:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5658.14685; Sun, 11 Oct 2020 11:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRZPE-0006zP-Tp; Sun, 11 Oct 2020 11:20:48 +0000
Received: by outflank-mailman (input) for mailman id 5658;
 Sun, 11 Oct 2020 11:20:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CCit=DS=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kRZPE-0006zK-1C
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 11:20:48 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 30d1cd9f-3591-4f34-829b-f44e2db5de21;
 Sun, 11 Oct 2020 11:20:45 +0000 (UTC)
X-Inumbo-ID: 30d1cd9f-3591-4f34-829b-f44e2db5de21
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602415246;
  h=subject:from:to:references:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=F96iuLJBrM3+ym/2qb2p66Ok0Td8MfgT4AUYKVDQMnY=;
  b=F+j02LAKc46ex+qrw0JBqcqKUfRPJPudjfaOWhjoDZET3/iV+XCe336z
   24TesPVgaOreavj9ZJOeY0ryUJ7C+I9cLIz/PYg/b7DLJSFZ7y5nHjUWW
   zuJEiHoCl38wsER74RTDPC1tiTiulQ2AonL/juGZmJRTUIly2GOOsfg2c
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: bSo58SLb3wVD/sgPRXTPLVptudqNtD6RB6mA874kVyViib1g39LSL8y95XToWlxwRatKP6XCsO
 07EU6SLshljox5IlFClPvDBe+sm9wN3BYSViJ43sks6aijuDL2vuYz3CiaY1Ejm3+R3Df5B+0s
 s6QdI7btBwFyKbuBvprWZ16FI2/zCNCbMo0R4lIsIDAlodj2zWuprAJEBsz2Z86ITjUsBVNWL9
 kDZgdhuh3cQTzl0k0jIPhG6akr/MfQGIKd/MXSpDouzPaqC21cVuCA4ANc2KShmHM56ZZ1kK92
 jM4=
X-SBRS: 2.5
X-MesageID: 29077927
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,362,1596513600"; 
   d="scan'208";a="29077927"
Subject: Re: [SUSPECTED SPAM]Xen-unstable :can't boot HVM guests, bisected to
 commit: "hvmloader: indicate ACPI tables with "ACPI data" type in e820"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: Sander Eikelenboom <linux@eikelenboom.it>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>
References: <9293a9e1-e507-4788-5460-d5ec9abc1af9@eikelenboom.it>
 <bbc026b0-06f1-a052-030d-d6757dda89b9@citrix.com>
 <24413d2e-5665-bc36-452b-af5c9b1af0b8@eikelenboom.it>
 <a7e46051-999d-fa5a-6707-d4c6e61727bb@citrix.com>
Message-ID: <03219f26-3a2c-edcd-6654-a084102f9020@citrix.com>
Date: Sun, 11 Oct 2020 12:20:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a7e46051-999d-fa5a-6707-d4c6e61727bb@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/10/2020 11:40, Igor Druzhinin wrote:
> On 11/10/2020 10:43, Sander Eikelenboom wrote:
>> On 11/10/2020 02:06, Igor Druzhinin wrote:
>>> On 10/10/2020 18:51, Sander Eikelenboom wrote:
>>>> Hi Igor/Jan,
>>>>
>>>> I tried to update my AMD machine to current xen-unstable, but
>>>> unfortunately the HVM guests don't boot after that. The guest keeps
>>>> using CPU-cycles but I don't get to a command prompt (or any output at
>>>> all). PVH guests run fine.
>>>>
>>>> Bisection leads to commit:
>>>>
>>>> 8efa46516c5f4cf185c8df179812c185d3c27eb6
>>>> hvmloader: indicate ACPI tables with "ACPI data" type in e820
>>>>
>>>> I tried xen-unstable with this commit reverted and with that everything
>>>> works fine.
>>>>
>>>> I attached the xl-dmesg output.
>>>
>>> What guests are you using? 
>> Not sure I understand what you're asking for, but:
>> dom0 PV
>> guest HVM (qemu-xen)
>>
>>> Could you get serial output from the guest?
>> Not getting any, it seems to be stuck in very early boot.
>>
>>> Is it AMD specific?
>> Can't tell, this is the only machine I test xen-unstable on.
>> It's an AMD Phenom X6.
>> Both dom0 and guest kernel are 5.9-rc8.
>>
>> Tested with guest config:
>> kernel      = '/boot/vmlinuz-xen-guest'
>> ramdisk     = '/boot/initrd.img-xen-guest'
>>
>> cmdline     = 'root=UUID=7cc4a90d-d6b0-4958-bb7d-50497aa29f18 ro
>> nomodeset console=tty1 console=ttyS0 console=hvc0 earlyprintk=xen'
>>
>> type='hvm'
>>
>> device_model_version = 'qemu-xen'
>>
>> cpus        = "2-5"
>> vcpus = 2
>>
>> memory      = '512'
>>
>> disk        = [
>>                   'phy:/dev/xen_vms_ssd/media,xvda,w'
>>               ]
>>
>> name        = 'guest'
>>
>> vif         = [ 'bridge=xen_bridge,ip=192.168.1.10,mac=00:16:3E:DC:0A:F1' ]
>>
>> on_poweroff = 'destroy'
>> on_reboot   = 'restart'
>> on_crash    = 'preserve'
>>
>> vnc=0
>>
>>
>>> If it's a Linux guest could you get a stacktrace from
>>> the guest using xenctx?
>>
>> It is, here are few subsequent runs:
>>
>> ~# /usr/local/lib/xen/bin/xenctx -s
>> /boot/System.map-5.9.0-rc8-20201010-doflr-mac80211debug+ -f -a -C 4
>> vcpu0:
>> cs:eip: ca80:00000256
> 
> Ok, it's stuck in the linuxboot.bin option ROM. That's not something we test at Citrix -
> we don't use fw_cfg. It could be something with caching (given it's moving, but slowly) or a
> bug uncovered by the memory map changes. I'll try to get a repro on Monday.

Right, I think I know what will fix your problem: could you try flipping the "ACPI data"
type to "ACPI NVS" in my commit?

Jan, this is what we discussed on the list as an ambiguity in the ACPI spec, but we
couldn't reach a clean resolution after all.
SeaBIOS treats the "ACPI data" type as essentially RAM, which may be reported as a
RAM resource to the guest via E801.
https://wiki.osdev.org/Detecting_Memory_(x86)#BIOS_Function:_INT_0x15.2C_AX_.3D_0xE801
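For reference, the E801 reporting convention described at that link can be sketched as
follows. This is not SeaBIOS source; it is a minimal standalone illustration of how a BIOS
typically derives the two E801 return values from a single "legacy RAM size" figure, and
e801_from_ramsize is a hypothetical helper name:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch (not actual firmware code) of INT 15h AX=E801h reporting:
 *   AX = KiB of RAM between 1 MiB and 16 MiB (capped at 0x3C00 = 15 MiB)
 *   BX = number of 64 KiB blocks of RAM above 16 MiB
 * So any region counted into "ramsize" ends up advertised as usable RAM. */
static void e801_from_ramsize(uint32_t ramsize, uint16_t *ax, uint16_t *bx)
{
    const uint32_t MiB = 1024u * 1024u;

    if (ramsize > 16u * MiB) {
        *ax = 0x3c00;                                  /* full 15 MiB below 16 MiB */
        *bx = (uint16_t)((ramsize - 16u * MiB) / 65536u);
    } else {
        *ax = (uint16_t)((ramsize - 1u * MiB) / 1024u);
        *bx = 0;
    }
}
```

This is why the type chosen for the ACPI tables matters: if the tables' end address feeds
into ramsize, E801 tells the guest that range is ordinary RAM.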

// Calculate the maximum ramsize (less than 4gig) from e820 map.
static void
calcRamSize(void)
{
    u32 rs = 0;
    int i;
    for (i=e820_count-1; i>=0; i--) {
        struct e820entry *en = &e820_list[i];
        u64 end = en->start + en->size;
        u32 type = en->type;
        if (end <= 0xffffffff && (type == E820_ACPI || type == E820_RAM)) {
            rs = end;
            break;
        }
    }
    LegacyRamSize = rs >= 1024*1024 ? rs : 1024*1024;
}

What is wrong here, I think, is that the code clearly doesn't handle holes and has worked
more by luck than by design. So SeaBIOS needs to be fixed, but I think that using "ACPI NVS"
in hvmloader is still the safer choice.

Igor


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 12:01:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 12:01:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5668.14700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRa2J-0002X8-HQ; Sun, 11 Oct 2020 12:01:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5668.14700; Sun, 11 Oct 2020 12:01:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRa2J-0002X1-ES; Sun, 11 Oct 2020 12:01:11 +0000
Received: by outflank-mailman (input) for mailman id 5668;
 Sun, 11 Oct 2020 12:01:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRa2H-0002Wv-VJ
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 12:01:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33801eff-c3fc-4880-a71f-3bcfd46950cc;
 Sun, 11 Oct 2020 12:01:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRa2F-000171-D3; Sun, 11 Oct 2020 12:01:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRa2F-0005g9-4q; Sun, 11 Oct 2020 12:01:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRa2F-0008Ek-4P; Sun, 11 Oct 2020 12:01:07 +0000
X-Inumbo-ID: 33801eff-c3fc-4880-a71f-3bcfd46950cc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BvLWfT7s8yiXm8Yb1PxGmM5DS3GiJHNHPTiAMi9VpvA=; b=nD8MtNrMpOX7slCT/TyGyjBSvt
	skKFvbqJNK6sp+Zu+xubLvbTJekgWNjNN1VJCMJNWOEcr4aAfsRcFqjtqP2cnL1du6+7Mx+QoGVxA
	z8vdcCTlyX+3k60+b0xZwNh1zOLXn+W4xKslt8DAdrnnwlg3T4uvzmhLPKiVfsgkT+8I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155689-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155689: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 12:01:07 +0000

flight 155689 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155689/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    2 days
Testing same since   155612  2020-10-09 18:01:22 Z    1 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    were allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 12:24:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 12:24:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5676.14719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRaOU-0004cG-HK; Sun, 11 Oct 2020 12:24:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5676.14719; Sun, 11 Oct 2020 12:24:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRaOU-0004c9-Dm; Sun, 11 Oct 2020 12:24:06 +0000
Received: by outflank-mailman (input) for mailman id 5676;
 Sun, 11 Oct 2020 12:24:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eyjW=DS=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRaOT-0004c4-SH
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 12:24:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c9317ad-ca78-4b0b-9ba5-02b38c0aedd0;
 Sun, 11 Oct 2020 12:24:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0079ACF5;
 Sun, 11 Oct 2020 12:24:03 +0000 (UTC)
X-Inumbo-ID: 5c9317ad-ca78-4b0b-9ba5-02b38c0aedd0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602419043;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=tVL4jCgyW5n6HvlNwqjNO0dk/wbT+JIJA5ZDI7IXTso=;
	b=hSew4UKB/u36vrdxedRyuhRRVfNvrkTU/nQI+X5o/9Ee6W9r/5FPqgeZeEqbGXK6cvqkZ1
	z+vYgeGZkIcYI+ZkRoN7QtAONhxEN+NI3z9g5w3g6MdLbbr2aA6lhUg+DkZEL2bVJmrX69
	6moCkOLOj9zjmo+tMyHX47N40UbuxpU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/libs/store: add disclaimer to header file regarding ignored options
Date: Sun, 11 Oct 2020 14:24:01 +0200
Message-Id: <20201011122401.28608-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a disclaimer to the libxenstore header file stating that all of the
open flags (socket-only connection, read-only connection) are ignored.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/store/include/xenstore.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/tools/libs/store/include/xenstore.h b/tools/libs/store/include/xenstore.h
index 158e69ef83..2b3f69fb61 100644
--- a/tools/libs/store/include/xenstore.h
+++ b/tools/libs/store/include/xenstore.h
@@ -23,6 +23,7 @@
 
 #define XBT_NULL 0
 
+/* Following open flags are deprecated and ignored! */
 #define XS_OPEN_READONLY	(1UL<<0)
 #define XS_OPEN_SOCKETONLY      (1UL<<1)
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Oct 11 12:46:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 12:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5679.14733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRakH-0006fE-Ap; Sun, 11 Oct 2020 12:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5679.14733; Sun, 11 Oct 2020 12:46:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRakH-0006f7-7H; Sun, 11 Oct 2020 12:46:37 +0000
Received: by outflank-mailman (input) for mailman id 5679;
 Sun, 11 Oct 2020 12:46:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRakG-0006ec-DR
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 12:46:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 491da747-4d1f-4ed5-8865-11b8e732d866;
 Sun, 11 Oct 2020 12:46:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRak7-00022O-Sf; Sun, 11 Oct 2020 12:46:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRak7-0007qG-KI; Sun, 11 Oct 2020 12:46:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRak7-0003Gt-Jl; Sun, 11 Oct 2020 12:46:27 +0000
X-Inumbo-ID: 491da747-4d1f-4ed5-8865-11b8e732d866
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RrsZaPiyokTg15MOEisBE6D+cL7xmdccz3Y954TPVvc=; b=peT4zgoG4X3Zz2r6SHZgSWOMSt
	bsmFc7k9rYhaRd/kbPz30ODShvsqh74UFJGaCEk0EM6usadKR+1WrV2Ali49kLquJKmnq7uOcwcYt
	bTUdJrspsvKvnY/QPyvagHzQOAemoHQoE5b+FcKwf5Xs2y229D8P5sbV/amfDiee7tlw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155675-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155675: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:debian-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-pvshim:debian-fixup:fail:heisenbug
    qemu-mainline:test-amd64-i386-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    qemu-mainline:test-amd64-amd64-pygrub:debian-di-install:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 12:46:27 +0000

flight 155675 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155675/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-amd 12 debian-install  fail in 155665 pass in 155675
 test-amd64-amd64-xl-pvshim   13 debian-fixup     fail in 155665 pass in 155675
 test-amd64-i386-pair 27 guest-migrate/dst_host/src_host fail in 155665 pass in 155675
 test-amd64-amd64-pygrub      12 debian-di-install          fail pass in 155665

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   52 days
Failing since        152659  2020-08-21 14:07:39 Z   50 days   85 attempts
Testing same since   155613  2020-10-09 18:07:54 Z    1 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42209 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 14:36:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 14:36:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5687.14752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRcRp-0000N5-Ln; Sun, 11 Oct 2020 14:35:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5687.14752; Sun, 11 Oct 2020 14:35:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRcRp-0000My-Io; Sun, 11 Oct 2020 14:35:41 +0000
Received: by outflank-mailman (input) for mailman id 5687;
 Sun, 11 Oct 2020 14:35:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DJVw=DS=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1kRcRn-0000Mt-9t
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 14:35:39 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 20cb1483-4fb8-48cf-aefb-a8dcc08c46e0;
 Sun, 11 Oct 2020 14:35:35 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:56432
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1kRcUr-0008Av-Eq; Sun, 11 Oct 2020 16:38:49 +0200
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=DJVw=DS=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
	id 1kRcRn-0000Mt-9t
	for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 14:35:39 +0000
X-Inumbo-ID: 20cb1483-4fb8-48cf-aefb-a8dcc08c46e0
Received: from server.eikelenboom.it (unknown [91.121.65.215])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 20cb1483-4fb8-48cf-aefb-a8dcc08c46e0;
	Sun, 11 Oct 2020 14:35:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:To:Subject:Sender:
	Reply-To:Cc:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=f3ScdoKbWWtzolAVZqhfqcwl0CNfWJlmgxLIUvsk+wE=; b=BNmC9crCveMS9V3BmyatkLQ6zI
	C+exz9is9IapLhk2IWBE8lakxI+EOwHuLz6E/p5UljCf+sOuguD5Mjaj+fwP9ZZ7XygFefovf7IOn
	JJtfT4SQpH02yiO3G6yNB6XzbhivRlr3WULE56yBBVUjLYwkXzJbHtuhO3osB89T/XWE=;
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:56432 helo=[172.16.1.50])
	by server.eikelenboom.it with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <linux@eikelenboom.it>)
	id 1kRcUr-0008Av-Eq; Sun, 11 Oct 2020 16:38:49 +0200
Subject: Re: [SUSPECTED SPAM]Xen-unstable :can't boot HVM guests, bisected to
 commit: "hvmloader: indicate ACPI tables with "ACPI data" type in e820"
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>
References: <9293a9e1-e507-4788-5460-d5ec9abc1af9@eikelenboom.it>
 <bbc026b0-06f1-a052-030d-d6757dda89b9@citrix.com>
 <24413d2e-5665-bc36-452b-af5c9b1af0b8@eikelenboom.it>
 <a7e46051-999d-fa5a-6707-d4c6e61727bb@citrix.com>
 <03219f26-3a2c-edcd-6654-a084102f9020@citrix.com>
From: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <71d09f23-8931-4cfc-a49a-6787c0b6ba5c@eikelenboom.it>
Date: Sun, 11 Oct 2020 16:35:32 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <03219f26-3a2c-edcd-6654-a084102f9020@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11/10/2020 13:20, Igor Druzhinin wrote:
> On 11/10/2020 11:40, Igor Druzhinin wrote:
>> On 11/10/2020 10:43, Sander Eikelenboom wrote:
>>> On 11/10/2020 02:06, Igor Druzhinin wrote:
>>>> On 10/10/2020 18:51, Sander Eikelenboom wrote:
>>>>> Hi Igor/Jan,
>>>>>
>>>>> I tried to update my AMD machine to current xen-unstable, but
>>>>> unfortunately the HVM guests don't boot after that. The guest keeps
>>>>> consuming CPU cycles, but I don't get to a command prompt (or any output at
>>>>> all). PVH guests run fine.
>>>>>
>>>>> Bisection leads to commit:
>>>>>
>>>>> 8efa46516c5f4cf185c8df179812c185d3c27eb6
>>>>> hvmloader: indicate ACPI tables with "ACPI data" type in e820
>>>>>
>>>>> I tried xen-unstable with this commit reverted and with that everything
>>>>> works fine.
>>>>>
>>>>> I attached the xl-dmesg output.
>>>>
>>>> What guests are you using? 
>>> Not sure I understand what you're asking for, but:
>>> dom0 PV
>>> guest HVM (qemu-xen)
>>>
>>>> Could you get serial output from the guest?
>>> Not getting any, it seems to be stuck in very early boot.
>>>
>>>> Is it AMD specific?
>>> Can't tell, this is the only machine I test xen-unstable on.
>>> It's an AMD Phenom X6.
>>> Both dom0 and guest kernel are 5.9-rc8.
>>>
>>> Tested with guest config:
>>> kernel      = '/boot/vmlinuz-xen-guest'
>>> ramdisk     = '/boot/initrd.img-xen-guest'
>>>
>>> cmdline     = 'root=UUID=7cc4a90d-d6b0-4958-bb7d-50497aa29f18 ro
>>> nomodeset console=tty1 console=ttyS0 console=hvc0 earlyprintk=xen'
>>>
>>> type='hvm'
>>>
>>> device_model_version = 'qemu-xen'
>>>
>>> cpus        = "2-5"
>>> vcpus = 2
>>>
>>> memory      = '512'
>>>
>>> disk        = [
>>>                   'phy:/dev/xen_vms_ssd/media,xvda,w'
>>>               ]
>>>
>>> name        = 'guest'
>>>
>>> vif         = [ 'bridge=xen_bridge,ip=192.168.1.10,mac=00:16:3E:DC:0A:F1' ]
>>>
>>> on_poweroff = 'destroy'
>>> on_reboot   = 'restart'
>>> on_crash    = 'preserve'
>>>
>>> vnc=0
>>>
>>>
>>>> If it's a Linux guest could you get a stacktrace from
>>>> the guest using xenctx?
>>>
>>> It is, here are few subsequent runs:
>>>
>>> ~# /usr/local/lib/xen/bin/xenctx -s
>>> /boot/System.map-5.9.0-rc8-20201010-doflr-mac80211debug+ -f -a -C 4
>>> vcpu0:
>>> cs:eip: ca80:00000256
>>
>> OK, it's stuck in the linuxboot.bin option ROM. That's not something we test at Citrix -
>> we don't use fw_cfg. It could be something with caching (given it's moving, but slowly) or a
>> bug uncovered by the memory map changes. I'll try to get a repro on Monday.
> 
> Right, I think I know what will fix your problem: could you flip the "ACPI data"
> type to "ACPI NVS" in my commit?

Just did and the guest now boots fine.

--
Sander

> Jan, this is what we've discussed on the list as an ambiguity in ACPI spec but
> couldn't reach a clean resolution after all.
> SeaBIOS treats the "ACPI data" type as essentially RAM that can be reported
> as a RAM resource to the guest via E801.
> https://wiki.osdev.org/Detecting_Memory_(x86)#BIOS_Function:_INT_0x15.2C_AX_.3D_0xE801
> 
> // Calculate the maximum ramsize (less than 4gig) from e820 map.
> static void
> calcRamSize(void)
> {
>     u32 rs = 0;
>     int i;
>     for (i=e820_count-1; i>=0; i--) {
>         struct e820entry *en = &e820_list[i];
>         u64 end = en->start + en->size;
>         u32 type = en->type;
>         if (end <= 0xffffffff && (type == E820_ACPI || type == E820_RAM)) {
>             rs = end;
>             break;
>         }
>     }
>     LegacyRamSize = rs >= 1024*1024 ? rs : 1024*1024;
> }
> 
> What is wrong here, I think, is that it clearly doesn't handle holes and worked more
> by luck. So SeaBIOS needs to be fixed, but I think that using ACPI NVS in hvmloader
> is still safer.
> 
> Igor
> 



From xen-devel-bounces@lists.xenproject.org Sun Oct 11 16:23:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 16:23:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5692.14769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRe8H-0002uO-Gz; Sun, 11 Oct 2020 16:23:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5692.14769; Sun, 11 Oct 2020 16:23:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRe8H-0002uH-De; Sun, 11 Oct 2020 16:23:37 +0000
Received: by outflank-mailman (input) for mailman id 5692;
 Sun, 11 Oct 2020 16:23:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRe8G-0002uC-V7
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 16:23:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cd0322e-36ed-47eb-b5e0-2ed43c9c19bb;
 Sun, 11 Oct 2020 16:23:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRe8E-000722-8s; Sun, 11 Oct 2020 16:23:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRe8E-0005z0-0N; Sun, 11 Oct 2020 16:23:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRe8D-0004wP-W8; Sun, 11 Oct 2020 16:23:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRe8G-0002uC-V7
	for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 16:23:37 +0000
X-Inumbo-ID: 8cd0322e-36ed-47eb-b5e0-2ed43c9c19bb
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8cd0322e-36ed-47eb-b5e0-2ed43c9c19bb;
	Sun, 11 Oct 2020 16:23:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FVfFfQqIA+aMah8Xuwc6rtnd5K3WgJ63C+LoOWbI1F0=; b=GwOgtOElyrRu0mrpEYLYbAkWyw
	BheObl/osyTuSWn6eAzC/8VndiGM8iK/S2ERFmuISTUOW06mzyeqczUnV76taaxe5kW7npzRShn5m
	9rDwUzrfYCjLPgiSM/7tOarOGvGcEVnwxu3k8mRggJW0TlRgDtfFJnJQuTN3zxur1uLU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRe8E-000722-8s; Sun, 11 Oct 2020 16:23:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRe8E-0005z0-0N; Sun, 11 Oct 2020 16:23:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRe8D-0004wP-W8; Sun, 11 Oct 2020 16:23:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155694-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155694: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 16:23:33 +0000

flight 155694 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155694/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    2 days
Testing same since   155612  2020-10-09 18:01:22 Z    1 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without having to rebuild xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
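The section-bundling mechanism the commit message describes can be sketched with objcopy. Everything here is a stand-in for illustration: the input object is a dummy generated on the spot (not a real xen.efi), the payload files are dummies, and a real unified image would typically also need --change-section-vma options to place each section.

```shell
# Stand-in input: turn a small data file into an object so the
# command sequence is runnable wherever binutils is present.
printf 'stub' > stub.bin
objcopy -I binary -O elf64-x86-64 -B i386:x86-64 stub.bin xen-stub.o

# Dummy payloads for the named sections described above.
printf 'dummy config' > xen.cfg
printf 'dummy kernel' > vmlinuz
printf 'dummy initrd' > initrd.img

# Add each payload as a named section, as the commit describes
# for .config, .kernel and .ramdisk.
objcopy --add-section .config=xen.cfg \
        --add-section .kernel=vmlinuz \
        --add-section .ramdisk=initrd.img \
        xen-stub.o xen-unified.o

# List the section headers to confirm the named sections are present.
objdump -h xen-unified.o
```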

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 16:59:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 16:59:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5697.14787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRegw-000669-Gl; Sun, 11 Oct 2020 16:59:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5697.14787; Sun, 11 Oct 2020 16:59:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRegw-000662-D8; Sun, 11 Oct 2020 16:59:26 +0000
Received: by outflank-mailman (input) for mailman id 5697;
 Sun, 11 Oct 2020 16:59:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRegv-00065x-KF
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 16:59:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 446e1d8b-cb9b-4d94-a6b3-ad232c980dbc;
 Sun, 11 Oct 2020 16:59:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRegs-0007jR-5t; Sun, 11 Oct 2020 16:59:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRegr-0006uM-TQ; Sun, 11 Oct 2020 16:59:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRegr-0008M5-Sv; Sun, 11 Oct 2020 16:59:21 +0000
X-Inumbo-ID: 446e1d8b-cb9b-4d94-a6b3-ad232c980dbc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ouqwSsV6IkTdrFMIuMQ5ocxojaK+nENvigFsSRIk02I=; b=01BshdmvJQh4nJRjw1esY5BKzR
	mv79YP1EMNlRJgl6HVH6e/BeCOJBxTzzjSr58WCVTnorj2jru2kOuV4zVgmA8xMeg2mkJWRXrYRyC
	Xas5UVIAbxBQTtPRcBPn2MWpIwsCr7F9VPDEkfqkqAggp3XsM1p6tp7nBoUzrQDid8FA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155682-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155682: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-migrate/dst_host/src_host:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=da690031a5d6d50a361e3f19f3eeabd086a6f20d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 16:59:21 +0000

flight 155682 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155682/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-pair 27 guest-migrate/dst_host/src_host fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                da690031a5d6d50a361e3f19f3eeabd086a6f20d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   71 days
Failing since        152366  2020-08-01 20:49:34 Z   70 days  119 attempts
Testing same since   155682  2020-10-11 06:41:56 Z    0 days    1 attempts

------------------------------------------------------------
2512 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 339707 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 19:14:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 19:14:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5709.14809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRgmo-0002UC-7F; Sun, 11 Oct 2020 19:13:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5709.14809; Sun, 11 Oct 2020 19:13:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRgmo-0002U5-3K; Sun, 11 Oct 2020 19:13:38 +0000
Received: by outflank-mailman (input) for mailman id 5709;
 Sun, 11 Oct 2020 19:13:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRgmn-0002U0-2k
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 19:13:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7655132-6400-4867-8ed6-7b1a914ce4a1;
 Sun, 11 Oct 2020 19:13:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRgmj-00025V-3p; Sun, 11 Oct 2020 19:13:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRgmi-0002k9-RM; Sun, 11 Oct 2020 19:13:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRgmi-0005T0-Qo; Sun, 11 Oct 2020 19:13:32 +0000
X-Inumbo-ID: b7655132-6400-4867-8ed6-7b1a914ce4a1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QZDKuD5MrI3hPSnFU9duke7HpqjIqL9f+mCvwNs7yVA=; b=ktuzlVOhgJuRnyJI3dC1H2JXpJ
	ECAV3xCFvu+hjmADiyCB1T45qWSMT4sXLRv/ZK7Kf9w5+M8aL3XmH9VpWRkq73LLWwBVmYTaZelwl
	B9qHGUyfKal0Qe+MaNrgwRmTt0i+5E8bipUBRlOBr193k/KHxiSpxBBvVge1Ke7ciweY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155695-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155695: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-pygrub:debian-di-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:debian-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 19:13:32 +0000

flight 155695 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155695/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pygrub     12 debian-di-install fail in 155675 pass in 155695
 test-amd64-amd64-xl-pvhv2-intel 12 debian-install          fail pass in 155675
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 155675

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   52 days
Failing since        152659  2020-08-21 14:07:39 Z   51 days   86 attempts
Testing same since   155613  2020-10-09 18:07:54 Z    2 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42209 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 19:36:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 19:36:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5715.14823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRh8p-0004Xr-4U; Sun, 11 Oct 2020 19:36:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5715.14823; Sun, 11 Oct 2020 19:36:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRh8p-0004Xk-1U; Sun, 11 Oct 2020 19:36:23 +0000
Received: by outflank-mailman (input) for mailman id 5715;
 Sun, 11 Oct 2020 19:36:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRh8n-0004Xf-Sf
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 19:36:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d82008a4-c724-45f6-b0f8-0417dd792857;
 Sun, 11 Oct 2020 19:36:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRh8l-0002WX-7t; Sun, 11 Oct 2020 19:36:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRh8k-0003le-Sq; Sun, 11 Oct 2020 19:36:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRh8k-0000GP-SK; Sun, 11 Oct 2020 19:36:18 +0000
X-Inumbo-ID: d82008a4-c724-45f6-b0f8-0417dd792857
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1jSeLyzkvop81NqaaBYMe7Q88Z2X528A6aGyK0lg8Fc=; b=R9CstvdRwU+Yia4iXRFM0FRi/D
	8la/kdhE/MmdkL3rpslhF1AbCRq8aWytprsViJx7vdR9cWwMfrenJ10qxcRSYY8pvYconoPU+O+GJ
	LL3etf4bpMNRqDXXyvhfY0TSpbDrMMeJQlsI6ACAjINcAfwgTGmG0oAEnEfXx5a6blCk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155699-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155699: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 19:36:18 +0000

flight 155699 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155699/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    2 days
Testing same since   155612  2020-10-09 18:01:22 Z    2 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
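    The section-bundling step described above (objcopy adding named sections
    after building xen.efi) can be sketched roughly as follows. This is an
    illustrative stand-in, not the Xen build system's actual invocation: the
    file names and the ELF stub are hypothetical, standing in for xen.efi and
    the real .config/.kernel/.ramdisk/.xsm payloads, since the objcopy flags
    are the same either way.

    ```shell
    # Illustrative only: real usage targets xen.efi (a PE binary) with
    # .config/.kernel/.ramdisk/.xsm sections; here a local ELF stub and
    # dummy payload files stand in.
    set -e
    printf 'dummy config' > config.bin
    printf 'dummy kernel' > kernel.bin
    cc -x c -o stub - <<'EOF'
    int main(void) { return 0; }
    EOF
    # Bundle the payloads into the image as named sections.
    objcopy \
        --add-section .config=config.bin \
        --add-section .kernel=kernel.bin \
        stub stub-unified
    # A loader (here, just objdump) can then locate each payload by
    # section name, as pe_find_section() does during EFI boot.
    objdump -h stub-unified | grep -E '\.(config|kernel)'
    ```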

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
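
    The ownership rule above can be modeled with a small sketch (hypothetical
    names; the real code is C in efi/boot.c): a buffer is released only when
    it was obtained from the allocator, never when it aliases memory owned by
    the loaded image, such as a PE section.

    ```python
    class BootFile:
        """Hypothetical model of the boot-file bookkeeping described
        above; need_to_free is True only for allocator-owned buffers."""
        def __init__(self, data, need_to_free):
            self.data = data
            self.need_to_free = need_to_free

        def release(self, free_fn):
            # Free via free_fn (standing in for the UEFI FreePages call)
            # only when we own the buffer; always drop the reference so a
            # second release is a harmless no-op.
            if self.need_to_free and self.data is not None:
                free_fn(self.data)
            self.data = None
            self.need_to_free = False
    ```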

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 20:59:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 20:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5728.14841 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRiRN-00043C-AX; Sun, 11 Oct 2020 20:59:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5728.14841; Sun, 11 Oct 2020 20:59:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRiRN-000435-6i; Sun, 11 Oct 2020 20:59:37 +0000
Received: by outflank-mailman (input) for mailman id 5728;
 Sun, 11 Oct 2020 20:59:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v8YJ=DS=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kRiRL-00042y-6h
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 20:59:35 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c6abcae-6657-43b5-b1ea-d797d3b471a2;
 Sun, 11 Oct 2020 20:59:34 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id f21so15416813wml.3
 for <xen-devel@lists.xenproject.org>; Sun, 11 Oct 2020 13:59:34 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id r1sm21772609wro.18.2020.10.11.13.59.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 11 Oct 2020 13:59:32 -0700 (PDT)
X-Inumbo-ID: 3c6abcae-6657-43b5-b1ea-d797d3b471a2
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=nHvxWt8meOHfENweROJRuGyAw8bL1sUNhAZ9nwSynPU=;
        b=bVRNKtxmkN7aHHhdxXC27RVAD7DOLYqbqqFZQbVPjeJ01A0kdR4v8966SlAkY34HAL
         ubUtSEhA/aRURR2hGljv/rZ99QIgqkH3Sl5cp77VCeJ/9kjEaNCbpoz79/VmpaIW2kt2
         GkSPgLVWX3mzyZAaRU3K0UUSsprwPzk8dziyWKlourT20Jx9dZzuMhQKR4VOdprkefC1
         32L01+1dZCR7mXkeSHxxZXayorYSs/d9J9WhkE6TA9P/GpUllso1MFmxFMUgdxIjJvwz
         Yjg88CA+Ol9vbW+OCQGVu/ie9qjspu3ku4nFCAyDgkKkBWXaaJ7JFyeR35jx71jAmpjn
         hKAw==
X-Gm-Message-State: AOAM532/tmd0W7vA6lmfo6yxeOMoWajZ/EfstMUV0XKAU9XhC95bSPC1
	GjYXgRI4CqXJ+CcCaxdEMUPs/Bt2Xic=
X-Google-Smtp-Source: ABdhPJwm7LOGvXVasG+OvFoPTHC9tyTC8NOCaEfoN0KCm5RwtrIlOS/2CfukwTzpKdoHRTUOAFLjCA==
X-Received: by 2002:a1c:9949:: with SMTP id b70mr8107124wme.116.1602449973238;
        Sun, 11 Oct 2020 13:59:33 -0700 (PDT)
Date: Sun, 11 Oct 2020 20:59:31 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/libs/store: add disclaimer to header file
 regarding ignored options
Message-ID: <20201011205931.47z6vd4fau2py5bl@liuwe-devbox-debian-v2>
References: <20201011122401.28608-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201011122401.28608-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Sun, Oct 11, 2020 at 02:24:01PM +0200, Juergen Gross wrote:
> Add a disclaimer to the libxenstore header file that all of the open
> flags (socket only connection, read only connection) are ignored.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked + applied.

> ---
>  tools/libs/store/include/xenstore.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/tools/libs/store/include/xenstore.h b/tools/libs/store/include/xenstore.h
> index 158e69ef83..2b3f69fb61 100644
> --- a/tools/libs/store/include/xenstore.h
> +++ b/tools/libs/store/include/xenstore.h
> @@ -23,6 +23,7 @@
>  
>  #define XBT_NULL 0
>  
> +/* Following open flags are deprecated and ignored! */
>  #define XS_OPEN_READONLY	(1UL<<0)
>  #define XS_OPEN_SOCKETONLY      (1UL<<1)
>  
> -- 
> 2.26.2
> 


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 22:13:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 22:13:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5736.14865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRja5-00038E-Ox; Sun, 11 Oct 2020 22:12:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5736.14865; Sun, 11 Oct 2020 22:12:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRja5-000387-Ln; Sun, 11 Oct 2020 22:12:41 +0000
Received: by outflank-mailman (input) for mailman id 5736;
 Sun, 11 Oct 2020 22:12:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fh4T=DS=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRja4-000382-MM
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 22:12:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40d9be7c-ba64-49dd-8f93-eb7c86f43867;
 Sun, 11 Oct 2020 22:12:38 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRja1-0005kz-UN; Sun, 11 Oct 2020 22:12:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRja1-0002Yo-NC; Sun, 11 Oct 2020 22:12:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRja1-00033f-Mi; Sun, 11 Oct 2020 22:12:37 +0000
X-Inumbo-ID: 40d9be7c-ba64-49dd-8f93-eb7c86f43867
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NRDh7tLe8+qgj32/YhNRx+ABjKnk0GEX6413zWqptv8=; b=46YHIvcp+rYm1MWpU2c9UA9Jdy
	KuieMRTecFUtrFNs0RRLOTv9S2KeHjekpOT+80PHstTrGluURwDcYqRgRDKL7+2Az05csF9cuEj4y
	NFdNipO7lOB5NwogDTreTBPSnAq9rRdDQCSFSXhmy6XKafjFlgTNlfC6S239PpSeErII=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155704-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155704: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 11 Oct 2020 22:12:37 +0000

flight 155704 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155704/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    2 days
Testing same since   155612  2020-10-09 18:01:22 Z    2 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
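
    The section-name matching mentioned above can be illustrated roughly like
    this (a Python sketch of the idea only; Xen's actual pe_name_compare() is
    C and compares against a const CHAR16 string): PE/COFF section names live
    in an 8-byte field that is NUL-padded when the name is shorter than 8
    bytes and has no terminator at all when it is exactly 8 bytes.

    ```python
    def pe_name_compare(raw_name: bytes, wanted: str) -> bool:
        """Return True if an 8-byte PE/COFF section-name field matches
        `wanted` (e.g. ".kernel").  Shorter names are NUL-padded; an
        8-byte name like ".ramdisk" fills the field with no NUL."""
        if len(raw_name) != 8:
            return False
        return raw_name.rstrip(b"\x00") == wanted.encode("ascii")
    ```

    Note that `.ramdisk` is the interesting edge case: at exactly eight
    characters it occupies the whole field, so a naive NUL-terminated string
    compare would fail on it.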

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    were allocated by the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 11 23:31:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 23:31:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5741.14882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRkoP-0002M3-Jh; Sun, 11 Oct 2020 23:31:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5741.14882; Sun, 11 Oct 2020 23:31:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRkoP-0002Lw-Gi; Sun, 11 Oct 2020 23:31:33 +0000
Received: by outflank-mailman (input) for mailman id 5741;
 Sun, 11 Oct 2020 23:31:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0GLi=DS=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1kRkoN-0002Lo-RF
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 23:31:31 +0000
Received: from mail-qk1-x731.google.com (unknown [2607:f8b0:4864:20::731])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4bced579-d309-47f4-84b2-311d75545f4f;
 Sun, 11 Oct 2020 23:31:30 +0000 (UTC)
Received: by mail-qk1-x731.google.com with SMTP id 188so16252314qkk.12
 for <xen-devel@lists.xenproject.org>; Sun, 11 Oct 2020 16:31:30 -0700 (PDT)
Received: from six.home (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id
 j88sm11321663qte.96.2020.10.11.16.31.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 11 Oct 2020 16:31:28 -0700 (PDT)
X-Inumbo-ID: 4bced579-d309-47f4-84b2-311d75545f4f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id;
        bh=S6paNgzd2Tf+FILweikzjEXvaD1XAePsH8bNyZZsa2Y=;
        b=SKP6EWykOTvWJlShn8HrHbgxaG577VSQcIrDuXhwHXrLpUxhWEWC5Nzhm3I87hNsRY
         aHrKCupp5AK9+D8MT3Gh1lxb5/R1iljFUGLkksHNvoQe9pJX1p7OSl/GXwrWTDwj/+/w
         oAC0SL7Y6mfuC7b4xkIpwY5FYmx22SePwbxUTaWvE6rEDZ3+lHE4t1O3zZn20GU+r/Xh
         9tec04fSclONj8uWUqdDVCaJepZp+0tuH4tPWKRWkQk5bMQe/F1Of1sM0zRBhz8F9Sik
         MYbWQxSXUycXVk1s0b28zYXn6kzI5kSfZLSQpt3yQHSmoYjCV50wJdVRdpQyajz794XC
         7DRg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=S6paNgzd2Tf+FILweikzjEXvaD1XAePsH8bNyZZsa2Y=;
        b=MkirZ9iQ+aTbOkV6xAGYMAc2NPgnjjAjuvs14L+kOT/rFfM6y1Oa7NhjLWZIBKmtmA
         XJ4UNqoQUIJ0JbB0NuW3of9zgpI/Lv1Vn9SwrTW2Pm9PdxPGI0PiZt4FfTIUAmvHGo38
         z2DEduQvd6Z5twYJ43L0Z7tcn2wsTcRWpH01lmjaqJeHusY8ZIfGwKWB0yJVrgTDdQOv
         ka52/LUBpObEcyI02TnYQG47/Ar4ss4SNcuFCokHaOn98x+nItmoDxp04XwpgSO7pwjR
         FmC4FcOp6HlkuY2JnL1o/Fd6a6QXFOje8h7FrifV9cw2+tydHPFIuyKd2xBehoDD4LOK
         T8tg==
X-Gm-Message-State: AOAM530E4oLglEDLH2aWIcg7HFaauVdfLSImYVXT7LNbTz9upasdCfWR
	l1a2XDhaSf8CzQgHWFouE7I1nXuthdY=
X-Google-Smtp-Source: ABdhPJxXrSmqR/mmdlfiMXv7kMhwPsWW1gIISvYdFwOWJSsv4iGOhFneN+X4Ng+95JwwiGWae1XoOg==
X-Received: by 2002:a05:620a:159b:: with SMTP id d27mr7482994qkk.28.1602459089711;
        Sun, 11 Oct 2020 16:31:29 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] golang/xenlight: do not hard code libxl dir in gengotypes.py
Date: Sun, 11 Oct 2020 19:31:24 -0400
Message-Id: <8e66cd2d53bb9f14bdfa0a2539773f3a6a3526b6.1602458773.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1

Currently, in order to 'import idl' in gengotypes.py, we derive the path
of the libxl source directory from the XEN_ROOT environment variable, and
append that to sys.path so Python can see idl.py. Since the recent move of
libxl to tools/libs/light, this hard-coding breaks the build.

Instead, check for the environment variable LIBXL_SRC_DIR, but move this
check to a try-except block (with empty except). This simply makes the
real error more visible, and does not strictly require that
LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
rather than XEN_ROOT when calling gengotypes.py.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/Makefile      | 2 +-
 tools/golang/xenlight/gengotypes.py | 9 ++++++++-
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index fd8e4893db..e394ef9b2b 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -16,7 +16,7 @@ all: build
 GOXL_GEN_FILES = types.gen.go helpers.gen.go
 
 %.gen.go: gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(LIBXL_SRC_DIR)/idl.py
-	XEN_ROOT=$(XEN_ROOT) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
+	LIBXL_SRC_DIR=$(LIBXL_SRC_DIR) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
 
 # Go will do its own dependency checking, and not actuall go through
 # with the build if none of the input files have changed.
diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index ebec938224..4ac181ae47 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -3,7 +3,14 @@
 import os
 import sys
 
-sys.path.append('{0}/tools/libxl'.format(os.environ['XEN_ROOT']))
+try:
+    sys.path.append(os.environ['LIBXL_SRC_DIR'])
+except:
+    # If we get here, then we expect the 'import idl'
+    # expression to fail. That error is more informative,
+    # so let it happen.
+    pass
+
 import idl
 
 # Go versions of some builtin types.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun Oct 11 23:31:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 11 Oct 2020 23:31:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5742.14895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRkoT-0002N9-Rk; Sun, 11 Oct 2020 23:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5742.14895; Sun, 11 Oct 2020 23:31:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRkoT-0002N2-Oj; Sun, 11 Oct 2020 23:31:37 +0000
Received: by outflank-mailman (input) for mailman id 5742;
 Sun, 11 Oct 2020 23:31:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0GLi=DS=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1kRkoS-0002Lo-F7
 for xen-devel@lists.xenproject.org; Sun, 11 Oct 2020 23:31:36 +0000
Received: from mail-qt1-x844.google.com (unknown [2607:f8b0:4864:20::844])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2ba78f8-565f-4991-b089-5e2dcd0adf6c;
 Sun, 11 Oct 2020 23:31:32 +0000 (UTC)
Received: by mail-qt1-x844.google.com with SMTP id c23so12468674qtp.0
 for <xen-devel@lists.xenproject.org>; Sun, 11 Oct 2020 16:31:32 -0700 (PDT)
Received: from six.home (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id
 j88sm11321663qte.96.2020.10.11.16.31.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 11 Oct 2020 16:31:30 -0700 (PDT)
X-Inumbo-ID: b2ba78f8-565f-4991-b089-5e2dcd0adf6c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :in-reply-to:references;
        bh=9vyBvmU6s28EMBTFGVMlypGGZgndJL2zkc8914Z6eIg=;
        b=OZp2ylPMg5oq9jIFwDPPskU2kCUhf+CSNX5I7d20KPIlT0rQEJjo1/D7ahdz5g9oTg
         Y/MH+1HoEWiihnm7DnKnPqKFeuApl9qQ/qyRFo3Z1k3mJOcka3NYYaO59hyYXvMm5g9P
         Y2IWFhz0j4/ymzATH+o0C8/7+1XIOGXaWYVZj4pFfv9S/MYxLMz5x0vDP1uL6hcy3fbY
         hIhqtmMrHewxkcHEhGAEbEfdRMDa9CGBYDokT9wLo4SSFIYZN7qPHIKbFtFrSQ3nYvbj
         PZArnunik4Yz3Rgbu8JN4X1ITxb2it5nT40KfXTj3HWwtD9Hn+ECemvdOYcOp/GUyqnx
         314w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:in-reply-to:references;
        bh=9vyBvmU6s28EMBTFGVMlypGGZgndJL2zkc8914Z6eIg=;
        b=mk6BFBsj7N1DL6YiBc1gIjSgZJbLT2BL2u3jGC0i27gTXgopMCkheT3aFwPBw3nRIl
         kLGtPgKMuOHAr5qVMj1olYPfLVSLHgHbG1wt6XwngNNIkPmVGNE1rPRfEn5V1MnmLt6e
         dc4SFioZaqwmGV1hBTki4iwAbOk/QDMATAPYGGeQCCphwVyRi5X1CDB+vpLjMPr+DIO2
         UIMRNTt/Q75HZsLzctTqw78RSj51940wikFWLSH4yKVh/XOnvFTFBIa7Z0peV+0pCDIG
         ekVJTT85bn1+bRyUGX+s6NkIX3FX/oE2WwyBp7COtX2HGlKLcD5HRMovM9twdIE6T2QD
         36aw==
X-Gm-Message-State: AOAM5308Ye8cRWUtboiwC3UELoTSBnYHD5r7sN1kGSSnateX7vIHpbn2
	whU4LhxXKeqlpLOWR4Ks9rlWjz5h+LI=
X-Google-Smtp-Source: ABdhPJzlhws5PGusVnBacW74kDIwu7qmamBSo/l1QQaXh2Cj6Bq6Riy0FDdXDTHmh52A1aNe13s3fg==
X-Received: by 2002:ac8:5185:: with SMTP id c5mr7860279qtn.349.1602459091271;
        Sun, 11 Oct 2020 16:31:31 -0700 (PDT)
Received: from six.home (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
        by smtp.gmail.com with ESMTPSA id j88sm11321663qte.96.2020.10.11.16.31.29
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Sun, 11 Oct 2020 16:31:30 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] golang/xenlight: standardize generated code comment
Date: Sun, 11 Oct 2020 19:31:25 -0400
Message-Id: <d8615f72d205b8a818ea397ccbb86f6ade1cd158.1602458773.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <8e66cd2d53bb9f14bdfa0a2539773f3a6a3526b6.1602458773.git.rosbrookn@ainfosec.com>
References: <8e66cd2d53bb9f14bdfa0a2539773f3a6a3526b6.1602458773.git.rosbrookn@ainfosec.com>

There is a standard format for generated Go code header comments, as set
by [1]. Modify gengotypes.py to follow this standard, and adopt the
additional

  // source: <IDL file basename>

convention used by protoc-gen-go.

This change is motivated by the fact that since 41aea82de2, the comment
would include the absolute path to libxl_types.idl, therefore creating
unintended diffs when generating code across different machines. This
approach fixes that problem.

[1] https://github.com/golang/go/issues/13560
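
As an illustration of the convention this patch adopts (not part of the
patch; the regex is the one settled on in [1], which Go tooling uses to
recognize generated files):

```python
import re

# Pattern from the convention settled in golang/go#13560: a file is
# treated as generated when some line matches this regex exactly.
GENERATED_RE = re.compile(r'^// Code generated .* DO NOT EDIT\.$')

def is_generated(go_source):
    """Return True if any line of the source carries the marker."""
    return any(GENERATED_RE.match(line) for line in go_source.splitlines())

# The header emitted by the patched gengotypes.py matches the pattern ...
new_header = ("// Code generated by gengotypes.py. DO NOT EDIT.\n"
              "// source: libxl_types.idl\n")

# ... while the pre-patch header does not.
old_header = ("// DO NOT EDIT.\n"
              "//\n"
              "// This file is generated by:\n"
              "// gengotypes.py libxl_types.idl\n"
              "//\n")
```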

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/gengotypes.py  | 10 ++++------
 tools/golang/xenlight/helpers.gen.go |  7 ++-----
 tools/golang/xenlight/types.gen.go   |  7 ++-----
 3 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index 4ac181ae47..3e40c3d5dc 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -731,13 +731,11 @@ if __name__ == '__main__':
         name = b.typename
         builtin_type_names[name] = xenlight_golang_fmt_name(name)
 
-    header_comment="""// DO NOT EDIT.
-//
-// This file is generated by:
-// {0}
-//
+    header_comment="""// Code generated by {}. DO NOT EDIT.
+// source: {}
 
-""".format(' '.join(sys.argv))
+""".format(os.path.basename(sys.argv[0]),
+           ' '.join([os.path.basename(a) for a in sys.argv[1:]]))
 
     xenlight_golang_generate_types(types=types,
                                    comment=header_comment)
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 152c7e8e6b..c8605994e7 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1,8 +1,5 @@
-// DO NOT EDIT.
-//
-// This file is generated by:
-// gengotypes.py ../../libxl/libxl_types.idl
-//
+// Code generated by gengotypes.py. DO NOT EDIT.
+// source: libxl_types.idl
 
 package xenlight
 
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index 663c1e86b4..b4c5df0f2c 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -1,8 +1,5 @@
-// DO NOT EDIT.
-//
-// This file is generated by:
-// gengotypes.py ../../libxl/libxl_types.idl
-//
+// Code generated by gengotypes.py. DO NOT EDIT.
+// source: libxl_types.idl
 
 package xenlight
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 01:12:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 01:12:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5752.14914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRmNT-00051F-Qa; Mon, 12 Oct 2020 01:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5752.14914; Mon, 12 Oct 2020 01:11:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRmNT-000518-NZ; Mon, 12 Oct 2020 01:11:51 +0000
Received: by outflank-mailman (input) for mailman id 5752;
 Mon, 12 Oct 2020 01:11:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=w0qL=DT=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kRmNT-000513-3x
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 01:11:51 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24d1fd8b-5620-40ed-9387-23f8e9a20eaa;
 Mon, 12 Oct 2020 01:11:49 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09C1BdHI082874
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sun, 11 Oct 2020 21:11:45 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09C1BdwK082873;
 Sun, 11 Oct 2020 18:11:39 -0700 (PDT) (envelope-from ehem)
Date: Sun, 11 Oct 2020 18:11:39 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
        Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Subject: [SECOND RESEND] [PATCH] tools/python: Pass linker to Python build
 process
Message-ID: <20201012011139.GA82449@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Unexpectedly the environment variable which needs to be passed is
$LDSHARED and not $LD.  Otherwise Python may find the build `ld` instead
of the host `ld`.

Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
it can load at runtime, not executables.

This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
to $LDFLAGS which breaks many linkers.
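
The effect of the Makefile changes can be sketched in isolation (a
minimal illustration with invented values; the real flags come from
Rules.mk, and the helper name here is hypothetical):

```python
import shlex

def python_build_env(cc, py_cflags, shlib_ldflags):
    """Assemble the environment the patched Makefiles pass to setup.py.

    LDSHARED is pointed at the compiler driver rather than a bare linker:
    distutils appends $CFLAGS to $LDFLAGS when linking, and `ld` rejects
    compiler flags while $(CC) accepts and forwards them.  LDFLAGS carries
    shared-object flags because distutils builds loadable modules, not
    executables.
    """
    return {
        "CC": cc,
        "CFLAGS": py_cflags,
        "LDSHARED": cc,            # link via the (cross-)compiler driver
        "LDFLAGS": shlib_ldflags,  # shared-object flags, not executable ones
    }

# A cross build would then run, in effect:
#   env CC=... CFLAGS=... LDSHARED=... LDFLAGS=... python setup.py build
env = python_build_env("aarch64-linux-gnu-gcc", "-O2", "-shared")
prefix = " ".join(f"{k}={shlex.quote(v)}" for k, v in env.items())
```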

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
This is now the *third* time this has been sent to the list.  Mark Pryor
has tested and confirms Python cross-building is working.  There is one
wart left which I'm unsure of the best approach for.

Having looked around a bit, I believe this is a Python 2/3 compatibility
issue.  "distutils" for Python 2 likely lacked a separate $LDSHARED or
$LD variable, whereas Python 3 does have this.  Alas this is pointless
due to the above (unless you can cause distutils.py to do the final link
step separately).
---
 tools/pygrub/Makefile | 9 +++++----
 tools/python/Makefile | 9 +++++----
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
index 3063c4998f..37b2146214 100644
--- a/tools/pygrub/Makefile
+++ b/tools/pygrub/Makefile
@@ -3,20 +3,21 @@ XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
-PY_LDFLAGS = $(LDFLAGS) $(APPEND_LDFLAGS)
+PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
 INSTALL_LOG = build/installed_files.txt
 
 .PHONY: all
 all: build
 .PHONY: build
 build:
-	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
+	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
 
 .PHONY: install
 install: all
 	$(INSTALL_DIR) $(DESTDIR)/$(bindir)
-	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
-		setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
+	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
+		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
+		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
 		 --root="$(DESTDIR)" --install-scripts=$(LIBEXEC_BIN) --force
 	set -e; if [ $(bindir) != $(LIBEXEC_BIN) -a \
 	             "`readlink -f $(DESTDIR)/$(bindir)`" != \
diff --git a/tools/python/Makefile b/tools/python/Makefile
index 8d22c03676..b675f5b4de 100644
--- a/tools/python/Makefile
+++ b/tools/python/Makefile
@@ -5,19 +5,20 @@ include $(XEN_ROOT)/tools/Rules.mk
 all: build
 
 PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
-PY_LDFLAGS = $(LDFLAGS) $(APPEND_LDFLAGS)
+PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
 INSTALL_LOG = build/installed_files.txt
 
 .PHONY: build
 build:
-	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" $(PYTHON) setup.py build
+	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
 
 .PHONY: install
 install:
 	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
 
-	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
-		setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
+	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
+		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
+		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
 		--root="$(DESTDIR)" --force
 
 	$(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)
-- 
2.20.1


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Oct 12 01:43:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 01:43:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5759.14932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRmrb-00080R-J1; Mon, 12 Oct 2020 01:42:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5759.14932; Mon, 12 Oct 2020 01:42:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRmrb-00080K-Ed; Mon, 12 Oct 2020 01:42:59 +0000
Received: by outflank-mailman (input) for mailman id 5759;
 Mon, 12 Oct 2020 01:42:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRmra-0007zs-6g
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 01:42:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d69ed792-3bb8-4afa-8ed3-6ef4a490e036;
 Mon, 12 Oct 2020 01:42:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRmrS-0002dj-Rt; Mon, 12 Oct 2020 01:42:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRmrS-0000PT-HM; Mon, 12 Oct 2020 01:42:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRmrS-0005zT-Gp; Mon, 12 Oct 2020 01:42:50 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZLJxsD+MRBDzl5Et0OBubaayLZ7F5iJp9lvtnLoSXSo=; b=pJgYSU1l0KIFB3qjth0XUGorEd
	3K6wTCEv7IuSd1PYezm7v5Mu3Sc0wQMFZw1ECOIxdcEgqPg5xVrVwv+5eRM7GvQ+h45iz11e8YxtH
	PhMli78BTiD8Io6XJ2zyK8APRqdK2gJrdwzL8PiLOf51M41wpPVZrSAR+duxU+GCXbo0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155708-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155708: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 01:42:50 +0000

flight 155708 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155708/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    2 days
Failing since        155612  2020-10-09 18:01:22 Z    2 days   17 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Used explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 01:48:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 01:48:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5761.14945 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRmxB-0008K8-6p; Mon, 12 Oct 2020 01:48:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5761.14945; Mon, 12 Oct 2020 01:48:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRmxB-0008K1-3q; Mon, 12 Oct 2020 01:48:45 +0000
Received: by outflank-mailman (input) for mailman id 5761;
 Mon, 12 Oct 2020 01:48:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRmx9-0008Jw-30
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 01:48:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4122c3c8-f18d-4396-8f3e-05e746664e65;
 Mon, 12 Oct 2020 01:48:39 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRmx4-0002kt-Ow; Mon, 12 Oct 2020 01:48:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRmx4-0000XG-HW; Mon, 12 Oct 2020 01:48:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRmx4-0006En-H0; Mon, 12 Oct 2020 01:48:38 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RhFMlO3R+HeW1cV1MDjEunF6lSgQvBZcY6U4vVX0qNI=; b=Sx490zdD6Z+aTnkX5zjlP6vMDB
	xnSET/GJGEQvXW7dtj1uRi8HtU7Ai8arrUxORAXYliIKQrhEg1TGcUYBehadW9fNbxUjzWrn6aIdy
	m9wfRd0CK0nUhv+bmaA3QvMDBm4clH0DmED312EerKkK2mlVvs7fIH1lrhJ8d2aG7Pro=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155703-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155703: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=48a340d9b23ffcf7704f2de14d1e505481a84a1c
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 01:48:38 +0000

flight 155703 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155703/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                48a340d9b23ffcf7704f2de14d1e505481a84a1c
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   52 days
Failing since        152659  2020-08-21 14:07:39 Z   51 days   87 attempts
Testing same since   155703  2020-10-11 19:37:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42895 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 02:10:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 02:10:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5770.14964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRnIH-000302-9E; Mon, 12 Oct 2020 02:10:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5770.14964; Mon, 12 Oct 2020 02:10:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRnIH-0002zv-6C; Mon, 12 Oct 2020 02:10:33 +0000
Received: by outflank-mailman (input) for mailman id 5770;
 Mon, 12 Oct 2020 02:10:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRnIG-0002yp-8q
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 02:10:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 280aff59-a57b-492c-ad46-c53ae060bd1b;
 Mon, 12 Oct 2020 02:10:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRnI8-0003dk-M5; Mon, 12 Oct 2020 02:10:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRnI8-00011f-Cm; Mon, 12 Oct 2020 02:10:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRnI8-0001oK-CL; Mon, 12 Oct 2020 02:10:24 +0000
X-Inumbo-ID: 280aff59-a57b-492c-ad46-c53ae060bd1b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BBsmY+qXLbNiA49YvJjQHd44kY6RfD13LB09hWK4tYo=; b=fTqRvmF95eXn+Zd/pfj0U1VJhl
	wXwmctxKzblc+c0AjJjQCxWaYigf95tagRl1hNNhqJATVeoYaIfzuoyNG8SndQ7ckRG5Fuz/1gmLE
	usOak+w3z4gNOP8W4CkbY3Ejm7FVJMtfFo/itR+eLbn9QZRn/Y4f9A5T5jsRMSN0U3sk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155700-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155700: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=da690031a5d6d50a361e3f19f3eeabd086a6f20d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 02:10:24 +0000

flight 155700 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155700/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair 27 guest-migrate/dst_host/src_host fail in 155682 pass in 155700
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 155682

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                da690031a5d6d50a361e3f19f3eeabd086a6f20d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   72 days
Failing since        152366  2020-08-01 20:49:34 Z   71 days  120 attempts
Testing same since   155682  2020-10-11 06:41:56 Z    0 days    2 attempts

------------------------------------------------------------
2512 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 339707 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 04:14:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 04:14:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5773.14985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRpEP-0006nP-KT; Mon, 12 Oct 2020 04:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5773.14985; Mon, 12 Oct 2020 04:14:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRpEP-0006nI-Gi; Mon, 12 Oct 2020 04:14:41 +0000
Received: by outflank-mailman (input) for mailman id 5773;
 Mon, 12 Oct 2020 02:30:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cN/0=DT=wdc.com=prvs=54744081d=damien.lemoal@srs-us1.protection.inumbo.net>)
 id 1kRnbX-0004yn-NV
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 02:30:28 +0000
Received: from esa6.hgst.iphmx.com (unknown [216.71.154.45])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45e97e30-0d64-4b9f-94f5-55b1a22e5994;
 Mon, 12 Oct 2020 02:30:25 +0000 (UTC)
Received: from mail-bn8nam12lp2169.outbound.protection.outlook.com (HELO
 NAM12-BN8-obe.outbound.protection.outlook.com) ([104.47.55.169])
 by ob1.hgst.iphmx.com with ESMTP; 12 Oct 2020 10:30:21 +0800
Received: from BL0PR04MB6514.namprd04.prod.outlook.com (2603:10b6:208:1ca::23)
 by MN2PR04MB6928.namprd04.prod.outlook.com (2603:10b6:208:1e3::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.24; Mon, 12 Oct
 2020 02:30:19 +0000
Received: from BL0PR04MB6514.namprd04.prod.outlook.com
 ([fe80::4c3e:2b29:1dc5:1a85]) by BL0PR04MB6514.namprd04.prod.outlook.com
 ([fe80::4c3e:2b29:1dc5:1a85%6]) with mapi id 15.20.3455.029; Mon, 12 Oct 2020
 02:30:19 +0000
X-Inumbo-ID: 45e97e30-0d64-4b9f-94f5-55b1a22e5994
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1602469826; x=1634005826;
  h=from:to:cc:subject:date:message-id:references:
   content-transfer-encoding:mime-version;
  bh=B2H/dL/4Tk6ifO229+CDbwJhsi61Ae0kMhbusFZSfYA=;
  b=IFoIj5Lmvd7DgG7HVctd7QXEBq5iKRq1R7YLToSqatAuf4DhTyaI7TUk
   83tHlkFY453/6xywbIO21GpX+lMUI0l1MsH+vIm25UlDETgrzqQHIPj9s
   hQXO7drpWY795z+VjQ0X878czpAnYl85qeeMfMMg/PwZM1ztD+5qgmn71
   6AI6PPS+viNEtMoV0hVHTsGTEq9WjlfxeTCqiY3aSLUJSp1B9vdv23I3h
   AJSkepTYuvZgQ6QWCdBtCm2492xxiZmYPrcJzyVaaBsNYD47zAoe23fjQ
   U9eAeQxQoIRK0GNPUhv/2SC052VX3msw2Fwzl+w/+f5fgGw/92Gtr4ix2
   A==;
IronPort-SDR: otNXfCl468GV2kaRfPMurNEzzb+5DTvHbKamj2s+8BDLocQWr4zuQ3M3i+IKAljLE0jNxd9VpM
 cO+rQZXBC3hCB4USGfnGA9ogfGdhVarakItuizW2Gy++p8Z+M6RlhT6/ZGAyh4Bb4soyndVApS
 t1CbgZ/xItk8xcO9aKd2ojIIDU+T0Z2LVw84K7kBp34/4Smmy1km93xm3x16vuxVWJ3+pskOv3
 siKun/r5wc2+puveLQmD3LIou+hiTE8AMRzhJ76wz/XN32b0xaH0DxtBNrPj7bQ2z+DqpFhCU+
 uec=
X-IronPort-AV: E=Sophos;i="5.77,365,1596470400"; 
   d="scan'208";a="150813290"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gx/LTHF9rvHW8Oe8ILITCqr1oekN0sSJJn9RkpcHF8rIin2//JVyW6AgAlQAeO/v1EowEo57pZXHLTZIAy/oh4sgxpcSGZDKN5oz0Qe5qPK5ALqfSLuBJmsV9AjLyCrQw4LfNvVHA5X/BDt9kKApbyWTlvE/mChWEAeK2jwUV+avUBnk6VOQeR5wnt5ri1PntjTXlIuGOC1C+wB/9l2juXZbCJp4Pz4p8f7Fz7xkvRrQ4+eJwlbmv6boTHlg5Kw8lXE/1AjnM4ZgS0AFsifxbxuSIjOVAxEndjwMb+y3JcGsuVp5FkLwRs65TutRE74FFvIf22sxhDPzR40XvIfT3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nSbIdboq5PT4g9q0YvKJz3kaJMX2RtABGp66gEoNwmc=;
 b=Kok+1aaGCiSZfqW1wmfB+kHe79JMm2f0m4TbxYf9iF9O5G/ZXp38pxI6FEhdVvJczC1oxUEx0BiWJ/jH1cJTKJ7qsq1BJbVSGcJTsAorroGNrYmYM+t+8l7Wbh5IWvJ0VGXdwladeVfJOpcoqinFt4xiAnbhaH0uGewPXN4kRMuNmUGT0t/etLda3IPoUpK7rFgAL7ObgaolOSF9YiV1tKcRYfXVoLRA9X+eXZQtAEs4qBuZHPYEZf91PR0dFaLSXlnkDhV3/6JNvTP1YcC6arSVI+ByCj2IBl8Uhj9qBZ+KZZJ487Embj0mtjbFITcETuHh+wrsFcIy0nDYldJobg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nSbIdboq5PT4g9q0YvKJz3kaJMX2RtABGp66gEoNwmc=;
 b=bu3NZ6nEIRGWvcX7BdK5F82a2Su5/ALOZX5mADuG3ixHeWl8vkyQlIEiI35HkhS1z2iLz2dIGt5v/ks6P0iyAim2GKOXOJDRdbUJuqpyR6LOJWHd5LzT+oOasYNyetl+MyoT9N0v8T1iC7HL9CruXmyPts3iLmme8nvBuhJdvlI=
From: Damien Le Moal <Damien.LeMoal@wdc.com>
To: "ira.weiny@intel.com" <ira.weiny@intel.com>, Andrew Morton
	<akpm@linux-foundation.org>, Thomas Gleixner <tglx@linutronix.de>, Ingo
 Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Andy Lutomirski
	<luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>
CC: Naohiro Aota <Naohiro.Aota@wdc.com>, "x86@kernel.org" <x86@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>, Dan Williams
	<dan.j.williams@intel.com>, Fenghua Yu <fenghua.yu@intel.com>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"linux-nvdimm@lists.01.org" <linux-nvdimm@lists.01.org>,
	"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>, "linux-kselftest@vger.kernel.org"
	<linux-kselftest@vger.kernel.org>, "linuxppc-dev@lists.ozlabs.org"
	<linuxppc-dev@lists.ozlabs.org>, "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>, "bpf@vger.kernel.org"
	<bpf@vger.kernel.org>, "kexec@lists.infradead.org"
	<kexec@lists.infradead.org>, "linux-bcache@vger.kernel.org"
	<linux-bcache@vger.kernel.org>, "linux-mtd@lists.infradead.org"
	<linux-mtd@lists.infradead.org>, "devel@driverdev.osuosl.org"
	<devel@driverdev.osuosl.org>, "linux-efi@vger.kernel.org"
	<linux-efi@vger.kernel.org>, "linux-mmc@vger.kernel.org"
	<linux-mmc@vger.kernel.org>, "linux-scsi@vger.kernel.org"
	<linux-scsi@vger.kernel.org>, "target-devel@vger.kernel.org"
	<target-devel@vger.kernel.org>, "linux-nfs@vger.kernel.org"
	<linux-nfs@vger.kernel.org>, "ceph-devel@vger.kernel.org"
	<ceph-devel@vger.kernel.org>, "linux-ext4@vger.kernel.org"
	<linux-ext4@vger.kernel.org>, "linux-aio@kvack.org" <linux-aio@kvack.org>,
	"io-uring@vger.kernel.org" <io-uring@vger.kernel.org>,
	"linux-erofs@lists.ozlabs.org" <linux-erofs@lists.ozlabs.org>,
	"linux-um@lists.infradead.org" <linux-um@lists.infradead.org>,
	"linux-ntfs-dev@lists.sourceforge.net"
	<linux-ntfs-dev@lists.sourceforge.net>, "reiserfs-devel@vger.kernel.org"
	<reiserfs-devel@vger.kernel.org>, "linux-f2fs-devel@lists.sourceforge.net"
	<linux-f2fs-devel@lists.sourceforge.net>, "linux-nilfs@vger.kernel.org"
	<linux-nilfs@vger.kernel.org>, "cluster-devel@redhat.com"
	<cluster-devel@redhat.com>, "ecryptfs@vger.kernel.org"
	<ecryptfs@vger.kernel.org>, "linux-cifs@vger.kernel.org"
	<linux-cifs@vger.kernel.org>, "linux-btrfs@vger.kernel.org"
	<linux-btrfs@vger.kernel.org>, "linux-afs@lists.infradead.org"
	<linux-afs@lists.infradead.org>, "linux-rdma@vger.kernel.org"
	<linux-rdma@vger.kernel.org>, "amd-gfx@lists.freedesktop.org"
	<amd-gfx@lists.freedesktop.org>, "dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>, "intel-gfx@lists.freedesktop.org"
	<intel-gfx@lists.freedesktop.org>, "drbd-dev@lists.linbit.com"
	<drbd-dev@lists.linbit.com>, "linux-block@vger.kernel.org"
	<linux-block@vger.kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "linux-cachefs@redhat.com"
	<linux-cachefs@redhat.com>, "samba-technical@lists.samba.org"
	<samba-technical@lists.samba.org>, "intel-wired-lan@lists.osuosl.org"
	<intel-wired-lan@lists.osuosl.org>
Subject: Re: [PATCH RFC PKS/PMEM 26/58] fs/zonefs: Utilize new kmap_thread()
Thread-Topic: [PATCH RFC PKS/PMEM 26/58] fs/zonefs: Utilize new kmap_thread()
Thread-Index: AQHWnnW5LYLzak05pEGSI1gzotA84w==
Date: Mon, 12 Oct 2020 02:30:18 +0000
Message-ID:
 <BL0PR04MB65146627753E6A8125C30044E7070@BL0PR04MB6514.namprd04.prod.outlook.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-27-ira.weiny@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=wdc.com;
x-originating-ip: [2400:2411:43c0:6000:dbd:ddb3:86a4:b7da]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: 2d8e38c9-c661-47b4-955c-08d86e56c300
x-ms-traffictypediagnostic: MN2PR04MB6928:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs:
 <MN2PR04MB69281462D64954D72F6383A1E7070@MN2PR04MB6928.namprd04.prod.outlook.com>
wdcipoutbound: EOP-TRUE
x-ms-oob-tlc-oobclassifiers: OLM:2089;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 nb7ZZkrOcnI45uzu82g63ZPDCw/m4UEa+T14BlRir04UDAmgqTKjBKFQ/1tq3q95aFreC/myPH7bQVuOZygHUtANTgMCUzT/Pyur4q2rkZgxQjgu2tK2/9wVp2h2SQ2PIehISfjlFkUmEohfWX26Cg+7Fw+ncYVUGCA3GBq/iNQrWxbeF5GIkE4ciKI2Ta+JMFg530FWmVV/3MNOdO5PIr93+nS5dgdzR8tDl4fNZNcda6K0U+RBmBhAF6G4PiHCwu4dLKAqHtEtpN1w+lsxrk7joY78vLDxPGvR8dOWqU93TlxExhLGaLZNWnYKeI3pA3QpzCWmAeR04Ul6/pWdYg==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL0PR04MB6514.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(346002)(136003)(376002)(366004)(8936002)(71200400001)(86362001)(186003)(91956017)(4326008)(66446008)(64756008)(66556008)(76116006)(7366002)(52536014)(8676002)(4744005)(5660300002)(33656002)(6506007)(7406005)(7416002)(478600001)(66946007)(66476007)(9686003)(53546011)(55016002)(7696005)(316002)(110136005)(54906003)(2906002)(83380400001);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata:
 jjLBIa8rzP8PSUzUPRNNjZPO+F9m4B6rNx2BP5Hxn4hovZp8mG8VwNQRMnSFbWOz3vIBGGbrlwmZ2h/rrQ3LAnV478QA8TsJeEWgnROLCLR26e3M2EbbogZuJfrd8ZHepR6teHHrbin/rSSMJcyuivIccqO+Io91CjJtVLKGT+IxxyJeBxmaC3EM+JLu8Rm3Kx8m967TzWpRyeGTzqzgeKyG0yThTif6ugu1X+cS2L5ObX1F/fl2leT5khZ92kjaRHmV42QjfDSDQBRJ5T2sWkjLqL6qZdGMf2A7TG8LiEhWaSnE+PmptdxY4alKupBmftgWegPnh18eZ3klScKVzKqPJWtLa3seT93Q56fQAqIthA5+kX5qt7UatZBhg0gKm0KkBt+EXNCPEi6hfwJXTcKfMhrLXdmO4xo5z+sSfQStIvDc5rcmkS1pZ7oYknbDJlJtS3SQSqbBQQUJPafcSH0MKoCCaLHQYYEZElvKn9S/oXh/zKwEQ9/zN4tqic4ypaC6xgod9dW89WUn6IRtPbzsXwH70beu1D6sABibklUQzTXk4zAezxU4pLpXOEpjFmzQIZ8wo7n5pOvKphLu99Aqpkh/XR9us5YFAynE0SXyaG0lSyNOUkHX4czg41/a9jardublYq7n+2PgYlLtR3YEH/hKmOGAbSLFeveqJxrdOEFzmYAfv7w/UdGZFRYD9O0jto4jXaOrMeEBD/8QJw==
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL0PR04MB6514.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d8e38c9-c661-47b4-955c-08d86e56c300
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Oct 2020 02:30:18.8981
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: FtNVgLbLUqLnCuOwFgwFyh+Wo2k/+4LOuBCyiRJEYABushz8+FxnNx4RRBUxLG0y9/lhxKLevLpPI63mq8apww==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR04MB6928

On 2020/10/10 4:52, ira.weiny@intel.com wrote:
> From: Ira Weiny <ira.weiny@intel.com>
> 
> The kmap() calls in this FS are localized to a single thread.  To avoid
> the over head of global PKRS updates use the new kmap_thread() call.
> 
> Cc: Damien Le Moal <damien.lemoal@wdc.com>
> Cc: Naohiro Aota <naohiro.aota@wdc.com>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> ---
>  fs/zonefs/super.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/zonefs/super.c b/fs/zonefs/super.c
> index 8ec7c8f109d7..2fd6c86beee1 100644
> --- a/fs/zonefs/super.c
> +++ b/fs/zonefs/super.c
> @@ -1297,7 +1297,7 @@ static int zonefs_read_super(struct super_block *sb)
>  	if (ret)
>  		goto free_page;
>  
> -	super = kmap(page);
> +	super = kmap_thread(page);
>  
>  	ret = -EINVAL;
>  	if (le32_to_cpu(super->s_magic) != ZONEFS_MAGIC)
> @@ -1349,7 +1349,7 @@ static int zonefs_read_super(struct super_block *sb)
>  	ret = 0;
>  
>  unmap:
> -	kunmap(page);
> +	kunmap_thread(page);
>  free_page:
>  	__free_page(page);
>  
> 

Acked-by: Damien Le Moal <damien.lemoal@wdc.com>

-- 
Damien Le Moal
Western Digital Research


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 04:20:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 04:20:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5781.14998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRpJP-000789-ET; Mon, 12 Oct 2020 04:19:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5781.14998; Mon, 12 Oct 2020 04:19:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRpJP-000782-An; Mon, 12 Oct 2020 04:19:51 +0000
Received: by outflank-mailman (input) for mailman id 5781;
 Mon, 12 Oct 2020 04:19:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRpJO-00077x-5S
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 04:19:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 31580948-b2d6-4560-9458-4b432e2571d2;
 Mon, 12 Oct 2020 04:19:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 585A7AB5C;
 Mon, 12 Oct 2020 04:19:47 +0000 (UTC)
X-Inumbo-ID: 31580948-b2d6-4560-9458-4b432e2571d2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602476387;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2Vo7oaCnzCTjjefr7nGjqs8mFYgwNhN8TeP87arFBPI=;
	b=VYiHrnNzKZxCtvG+Y0HfHhfpWBtlexg/rw83A2ULTu0nht0VWx0xZbWgMhUsytct2fTB/v
	lHaM8szRprajcmYUgWKxcft6wI7bESOW2SJ3m60Cpgx0c5DSlH/ElcQtZd6b2n8y30JDeI
	4UwJ1uiQSQL+8k+v6tTPnr2rWjHD9kY=
Subject: Re: [PATCH 1/3] tools/libs: move official headers to common directory
To: Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <20201002142214.3438-1-jgross@suse.com>
 <20201002142214.3438-2-jgross@suse.com>
 <20201008153205.6quz43n7w7utli22@liuwe-devbox-debian-v2>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a3900dbd-f57b-9972-b3f6-d12a314ac3f7@suse.com>
Date: Mon, 12 Oct 2020 06:19:46 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201008153205.6quz43n7w7utli22@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.10.20 17:32, Wei Liu wrote:
> On Fri, Oct 02, 2020 at 04:22:12PM +0200, Juergen Gross wrote:
>> Instead of each library having an own include directory move the
>> official headers to tools/include instead. This will drop the need to
>> link those headers to tools/include and there is no need any longer
>> to have library-specific include paths when building Xen.
>>
>> While at it remove setting of the unused variable
>> PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
>>
> 
> Not sure about this.
> 
> Will this make packaging individual libraries more difficult?

Not at all. This was meant for the pkg-config file used internally in
the Xen build, not for the ones installed on users' systems.

As there seems to be no need for that I believe we should drop that
extra hook. It can easily be reintroduced in case we need it in future.

> Maintainers will need to comb through a large amount of headers to pick
> the ones they want.
> 
> What do others think?


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 04:48:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 04:48:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5785.15013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRpki-0001aV-NP; Mon, 12 Oct 2020 04:48:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5785.15013; Mon, 12 Oct 2020 04:48:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRpki-0001aO-Jw; Mon, 12 Oct 2020 04:48:04 +0000
Received: by outflank-mailman (input) for mailman id 5785;
 Mon, 12 Oct 2020 04:48:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uzT3=DT=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kRpkh-0001aJ-I6
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 04:48:03 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed0a6959-2783-4d5a-a842-27e68ee762ed;
 Mon, 12 Oct 2020 04:48:01 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Oct 2020 21:47:58 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Oct 2020 21:47:57 -0700
X-Inumbo-ID: ed0a6959-2783-4d5a-a842-27e68ee762ed
IronPort-SDR: I4ItzH009yCOPBRK0pgaf/qWGb8aPXMqA1P60HkuxHiF2gof0cCbdYD7TZB/MoqSwm58mRCOld
 rSdfvafaTheg==
X-IronPort-AV: E=McAfee;i="6000,8403,9771"; a="162223403"
X-IronPort-AV: E=Sophos;i="5.77,365,1596524400"; 
   d="scan'208";a="162223403"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: OeJlrvjJIthwDW5ZiLkp+F246Fw9BWID0JMdXYyjjISLX5UipnPYleFarWgQdoLnt1ZOhIB0kv
 QrFucSf/AAcw==
X-IronPort-AV: E=Sophos;i="5.77,365,1596524400"; 
   d="scan'208";a="529805779"
Date: Sun, 11 Oct 2020 21:47:56 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Bernard Metzler <BMT@zurich.ibm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Mike Marciniszyn <mike.marciniszyn@intel.com>,
	Dennis Dalessandro <dennis.dalessandro@intel.com>,
	Doug Ledford <dledford@redhat.com>, Jason Gunthorpe <jgg@ziepe.ca>,
	Faisal Latif <faisal.latif@intel.com>,
	Shiraz Saleem <shiraz.saleem@intel.com>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 10/58] drivers/rdma: Utilize new
 kmap_thread()
Message-ID: <20201012044756.GY2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-11-ira.weiny@intel.com>
 <20201009195033.3208459-1-ira.weiny@intel.com>
 <OF849D92D8.F4735ECA-ON002585FD.003F5F27-002585FD.003FCBD6@notes.na.collabserv.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <OF849D92D8.F4735ECA-ON002585FD.003F5F27-002585FD.003FCBD6@notes.na.collabserv.com>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Sat, Oct 10, 2020 at 11:36:49AM +0000, Bernard Metzler wrote:
> -----ira.weiny@intel.com wrote: -----
> 

[snip]

> >@@ -505,7 +505,7 @@ static int siw_tx_hdt(struct siw_iwarp_tx *c_tx,
> >struct socket *s)
> > 				page_array[seg] = p;
> > 
> > 				if (!c_tx->use_sendpage) {
> >-					iov[seg].iov_base = kmap(p) + fp_off;
> >+					iov[seg].iov_base = kmap_thread(p) + fp_off;
> 
> This misses a corresponding kunmap_thread() in siw_unmap_pages()
> (pls change line 403 in siw_qp_tx.c as well)

Thanks, I missed that.

Done.

Ira

> 
> Thanks,
> Bernard.
> 


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 05:28:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 05:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5787.15025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRqNp-0005oP-Ud; Mon, 12 Oct 2020 05:28:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5787.15025; Mon, 12 Oct 2020 05:28:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRqNp-0005oI-Rb; Mon, 12 Oct 2020 05:28:29 +0000
Received: by outflank-mailman (input) for mailman id 5787;
 Mon, 12 Oct 2020 05:28:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uzT3=DT=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kRqNo-0005oD-EI
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 05:28:28 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4e3331b-0794-42cb-9a6e-5c06d1122f7d;
 Mon, 12 Oct 2020 05:28:25 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Oct 2020 22:28:23 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Oct 2020 22:28:22 -0700
X-Inumbo-ID: d4e3331b-0794-42cb-9a6e-5c06d1122f7d
IronPort-SDR: SG42Hhvir3Ps0qAEaYiWNHtylTRusWTN1QTivsXX9mt9Xzt89f6kwHoZn1aFWl5GmihnM+dZdh
 PchXazDAcDIg==
X-IronPort-AV: E=McAfee;i="6000,8403,9771"; a="162225634"
X-IronPort-AV: E=Sophos;i="5.77,365,1596524400"; 
   d="scan'208";a="162225634"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: Awdi9Evv+UAQx0jH4ny/s7+Nxcmli85F+a73BOsSCHQvCG+q13xwJ2JLwUM5pKItlqpgm8qi5b
 o2XGvRLHUUbw==
X-IronPort-AV: E=Sophos;i="5.77,365,1596524400"; 
   d="scan'208";a="529816997"
Date: Sun, 11 Oct 2020 22:28:18 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Coly Li <colyli@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Kent Overstreet <kent.overstreet@gmail.com>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 48/58] drivers/md: Utilize new kmap_thread()
Message-ID: <20201012052817.GZ2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-49-ira.weiny@intel.com>
 <c802fbf4-f67a-b205-536d-9c71b440f9c8@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <c802fbf4-f67a-b205-536d-9c71b440f9c8@suse.de>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Sat, Oct 10, 2020 at 10:20:34AM +0800, Coly Li wrote:
> On 2020/10/10 03:50, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> > 
> > These kmap() calls are localized to a single thread.  To avoid the
> > overhead of global PKRS updates use the new kmap_thread() call.
> > 
> 
> Hi Ira,
> 
> There were a number of options considered.
> 
> 1) Attempt to change all the thread local kmap() calls to kmap_atomic()
> 2) Introduce a flags parameter to kmap() to indicate if the mapping
> should be global or not
> 3) Change ~20-30 call sites to 'kmap_global()' to indicate that they
> require a global mapping of the pages
> 4) Change ~209 call sites to 'kmap_thread()' to indicate that the
> mapping is to be used within that thread of execution only
> 
> 
> I copied the above information from patch 00/58 to this message. The
> idea behind kmap_thread() is fine by me, but as you said the new API is
> easy to miss in new code (even for me). I would be supportive of
> option 2), introducing a flag to kmap(): then we won't forget the new
> thread-localized kmap method, and people won't ask why a _thread()
> function is called when no kthread is created.

Thanks for the feedback.

I'm going to hold off making any changes until others weigh in.  FWIW, I kind
of like option 2 as well.  But there is already kmap_atomic() so it seemed like
kmap_XXXX() was more in line with the current API.

Thanks,
Ira

> 
> Thanks.
> 
> 
> Coly Li
> 
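The difference between option 2) (a flags argument) and option 4) (a separate
kmap_thread() function) can be illustrated with a small userspace model.
Everything below is hypothetical: kmap_with_flags(), KMAP_GLOBAL/KMAP_THREAD,
and the counter standing in for the global PKRS update are sketches of the
interface Coly suggests, not any existing kernel API:

```c
#include <assert.h>

/* Hypothetical model of option 2): kmap() taking a flag that selects a
 * global or thread-local mapping.  The counter merely stands in for the
 * expensive global PKRS update that the thread-local path avoids. */
enum kmap_scope {
	KMAP_GLOBAL,	/* mapping visible to all threads (PKRS broadcast) */
	KMAP_THREAD,	/* mapping used only by the calling thread */
};

struct page { void *vaddr; };

static int global_pkrs_updates;

static void *kmap_with_flags(struct page *p, enum kmap_scope scope)
{
	if (scope == KMAP_GLOBAL)
		global_pkrs_updates++;	/* stand-in for the costly update */
	return p->vaddr;
}

static void kunmap_with_flags(struct page *p, enum kmap_scope scope)
{
	(void)p;
	if (scope == KMAP_GLOBAL)
		global_pkrs_updates++;
}
```

A call site would then read kmap_with_flags(page, KMAP_THREAD), documenting
the scope at the caller without introducing a new function name that might
suggest a kthread is involved.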


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 05:52:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 05:52:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5791.15041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRqkx-00006N-V0; Mon, 12 Oct 2020 05:52:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5791.15041; Mon, 12 Oct 2020 05:52:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRqkx-000062-Qq; Mon, 12 Oct 2020 05:52:23 +0000
Received: by outflank-mailman (input) for mailman id 5791;
 Mon, 12 Oct 2020 05:52:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uzT3=DT=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kRqkx-00005u-A3
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 05:52:23 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4cfe2325-0548-49c5-bc49-3513849e67fe;
 Mon, 12 Oct 2020 05:52:21 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Oct 2020 22:52:20 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Oct 2020 22:52:19 -0700
X-Inumbo-ID: 4cfe2325-0548-49c5-bc49-3513849e67fe
IronPort-SDR: IZoSZ/GPXERgY4pSWfMFevrkmP+3iWYjSO6Y6o1NEk2j35ezRz042p/BH0mZHp3/ACJDi3IEMK
 Y+FNgg9/JeRg==
X-IronPort-AV: E=McAfee;i="6000,8403,9771"; a="162227139"
X-IronPort-AV: E=Sophos;i="5.77,365,1596524400"; 
   d="scan'208";a="162227139"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: Ya9EDAn3SOMd08SCKVKBtueoni+yyq9EF8H8N9tr+YE/IrdFHweYy6SREcwPgxnde0DbbLRvxa
 rOKYCbITG9ew==
X-IronPort-AV: E=Sophos;i="5.77,365,1596524400"; 
   d="scan'208";a="520573207"
Date: Sun, 11 Oct 2020 22:52:19 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 57/58] nvdimm/pmem: Stray access protection
 for pmem->virt_addr
Message-ID: <20201012055218.GA2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-58-ira.weiny@intel.com>
 <bd3f5ece-0e7b-4c15-abbc-1b3b943334dc@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <bd3f5ece-0e7b-4c15-abbc-1b3b943334dc@nvidia.com>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Fri, Oct 09, 2020 at 07:53:07PM -0700, John Hubbard wrote:
> On 10/9/20 12:50 PM, ira.weiny@intel.com wrote:
> > From: Ira Weiny <ira.weiny@intel.com>
> > 
> > The pmem driver uses a cached virtual address to access its memory
> > directly.  Because the nvdimm driver is well aware of the special
> > protections it has mapped memory with, we call dev_access_[en|dis]able()
> > around the direct pmem->virt_addr (pmem_addr) usage instead of the
> > unnecessary overhead of trying to get a page to kmap.
> > 
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> > ---
> >   drivers/nvdimm/pmem.c | 4 ++++
> >   1 file changed, 4 insertions(+)
> > 
> > diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> > index fab29b514372..e4dc1ae990fc 100644
> > --- a/drivers/nvdimm/pmem.c
> > +++ b/drivers/nvdimm/pmem.c
> > @@ -148,7 +148,9 @@ static blk_status_t pmem_do_read(struct pmem_device *pmem,
> >   	if (unlikely(is_bad_pmem(&pmem->bb, sector, len)))
> >   		return BLK_STS_IOERR;
> > +	dev_access_enable(false);
> >   	rc = read_pmem(page, page_off, pmem_addr, len);
> > +	dev_access_disable(false);
> 
> Hi Ira!
> 
> The APIs should be tweaked to use a symbol (GLOBAL, PER_THREAD), instead of
> true/false. Try reading the above and you'll see that it sounds like it's
> doing the opposite of what it is ("enable_this(false)" sounds like a clumsy
> API design to *disable*, right?). And there is no hint about the scope.

Sounds reasonable.

> 
> And it *could* be so much more readable like this:
> 
>     dev_access_enable(DEV_ACCESS_THIS_THREAD);

I'll think about the flag name.  I'm not liking 'this thread'.

Maybe DEV_ACCESS_[GLOBAL|THREAD]

Ira
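John's readability point can be sketched in a few lines of userspace C. The
names here (dev_access_enable_scoped(), DEV_ACCESS_GLOBAL, DEV_ACCESS_THREAD)
are hypothetical, taken from the naming discussion above rather than from any
existing kernel interface; the sketch only shows how a named scope reads
compared with a bare boolean:

```c
#include <assert.h>

/* Hypothetical sketch: replace the boolean argument of
 * dev_access_enable(false) with a named scope, so the call site states
 * both the action and its scope.  The counters below only model the
 * enable/disable nesting for demonstration. */
enum dev_access_scope {
	DEV_ACCESS_GLOBAL,	/* open the access window for all threads */
	DEV_ACCESS_THREAD,	/* open it for the calling thread only */
};

static int access_depth;			/* enable/disable nesting */
static enum dev_access_scope last_scope;	/* scope of the last enable */

static void dev_access_enable_scoped(enum dev_access_scope scope)
{
	last_scope = scope;
	access_depth++;
}

static void dev_access_disable_scoped(enum dev_access_scope scope)
{
	(void)scope;
	access_depth--;
}
```

Compared with dev_access_enable(false), a call such as
dev_access_enable_scoped(DEV_ACCESS_THREAD) no longer reads as if it were
disabling something, and the scope is visible at a glance.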



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 06:07:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 06:07:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5793.15052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRqzS-0001RC-9I; Mon, 12 Oct 2020 06:07:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5793.15052; Mon, 12 Oct 2020 06:07:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRqzS-0001R5-67; Mon, 12 Oct 2020 06:07:22 +0000
Received: by outflank-mailman (input) for mailman id 5793;
 Mon, 12 Oct 2020 06:07:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRqzQ-0001R0-Jx
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 06:07:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1d02fe7-589c-4b36-b887-4e220c8540b1;
 Mon, 12 Oct 2020 06:07:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRqzO-000178-2V; Mon, 12 Oct 2020 06:07:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRqzN-0002gs-Py; Mon, 12 Oct 2020 06:07:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRqzN-0007v6-PU; Mon, 12 Oct 2020 06:07:17 +0000
X-Inumbo-ID: e1d02fe7-589c-4b36-b887-4e220c8540b1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zlyu3Jom5FqnHAKi41CJdn6q1/EwE2v/4JPxN45QD7k=; b=E7CPay65W6T8lCD4421OhxylJo
	Qpap+4lCaZpGWByQSCRK7d7ADsC7m2RGbRghJ3m2OijQmgAlaVTs9f4/WtY9ITQMdRG8OWnlTLSq9
	D8RxJx2j1vW5ksapS+aOkieSqFOGl66xM09QV5fBFx0Q4Ry8p7OTCtfu2+DQ9B8WLrbg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155716-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155716: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 06:07:17 +0000

flight 155716 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155716/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    3 days
Failing since        155612  2020-10-09 18:01:22 Z    2 days   18 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 06:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 06:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5800.15070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRrlE-0006If-63; Mon, 12 Oct 2020 06:56:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5800.15070; Mon, 12 Oct 2020 06:56:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRrlE-0006IY-36; Mon, 12 Oct 2020 06:56:44 +0000
Received: by outflank-mailman (input) for mailman id 5800;
 Mon, 12 Oct 2020 06:56:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uzT3=DT=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kRrlC-0006IT-Jw
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 06:56:42 +0000
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 801ca88d-ed0c-4fd4-a78a-bb43fb6a4bf3;
 Mon, 12 Oct 2020 06:56:41 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Oct 2020 23:56:37 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Oct 2020 23:56:35 -0700
X-Inumbo-ID: 801ca88d-ed0c-4fd4-a78a-bb43fb6a4bf3
IronPort-SDR: leJH/l1opPoVIuQPF8tu1d0ELtmPxDn3WakRMzOtB4zHZxOlPLqLu6sTQfOKjkYfksWCnDtd0i
 Nq+Fm38lLwdQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9771"; a="163059282"
X-IronPort-AV: E=Sophos;i="5.77,366,1596524400"; 
   d="scan'208";a="163059282"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: DSpSIoZKV7KPLq4zxFvBA+ZLKNkpgmuUQCwMmUi8wsqk6OFxxWBKXoVpyUUZRY9EN2DChpPczf
 4EPhWIQlBVGQ==
X-IronPort-AV: E=Sophos;i="5.77,366,1596524400"; 
   d="scan'208";a="529842687"
Date: Sun, 11 Oct 2020 23:56:35 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Eric Biggers <ebiggers@kernel.org>
Cc: Matthew Wilcox <willy@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201012065635.GB2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
 <20201010003954.GW20115@casper.infradead.org>
 <20201010013036.GD1122@sol.localdomain>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201010013036.GD1122@sol.localdomain>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Fri, Oct 09, 2020 at 06:30:36PM -0700, Eric Biggers wrote:
> On Sat, Oct 10, 2020 at 01:39:54AM +0100, Matthew Wilcox wrote:
> > On Fri, Oct 09, 2020 at 02:34:34PM -0700, Eric Biggers wrote:
> > > On Fri, Oct 09, 2020 at 12:49:57PM -0700, ira.weiny@intel.com wrote:
> > > > The kmap() calls in this FS are localized to a single thread.  To avoid
> > > > the overhead of global PKRS updates, use the new kmap_thread() call.
> > > >
> > > > @@ -2410,12 +2410,12 @@ static inline struct page *f2fs_pagecache_get_page(
> > > >  
> > > >  static inline void f2fs_copy_page(struct page *src, struct page *dst)
> > > >  {
> > > > -	char *src_kaddr = kmap(src);
> > > > -	char *dst_kaddr = kmap(dst);
> > > > +	char *src_kaddr = kmap_thread(src);
> > > > +	char *dst_kaddr = kmap_thread(dst);
> > > >  
> > > >  	memcpy(dst_kaddr, src_kaddr, PAGE_SIZE);
> > > > -	kunmap(dst);
> > > > -	kunmap(src);
> > > > +	kunmap_thread(dst);
> > > > +	kunmap_thread(src);
> > > >  }
> > > 
> > > Wouldn't it make more sense to switch cases like this to kmap_atomic()?
> > > The pages are only mapped to do a memcpy(), then they're immediately unmapped.
> > 
> > Maybe you missed the earlier thread from Thomas trying to do something
> > similar for rather different reasons ...
> > 
> > https://lore.kernel.org/lkml/20200919091751.011116649@linutronix.de/
> 
> I did miss it.  I'm not subscribed to any of the mailing lists it was sent to.
> 
> Anyway, it shouldn't matter.  Patchsets should be standalone, and not require
> reading random prior threads on linux-kernel to understand.

Sorry, but I did not think that the discussion above was directly related.  If
I'm not mistaken, Thomas' work was directed at relaxing kmap_atomic() calls
into kmap_thread() calls.  While interesting, that is not the point of this
series; I want to restrict kmap() callers to kmap_thread().

For this series we considered changing the kmap_thread() call sites to
kmap_atomic().  But as I said in the cover letter, kmap_atomic() does not have
the same semantics: it is too strict.  Perhaps I should have expanded on that
explanation.

> 
> And I still don't really understand.  After this patchset, there is still code
> nearly identical to the above (doing a temporary mapping just for a memcpy) that
> would still be using kmap_atomic().

I don't understand.  You mean there would be other call sites calling:

kmap_atomic()
memcpy()
kunmap_atomic()

?

> Is the idea that later, such code will be
> converted to use kmap_thread() instead?  If not, why use one over the other?
 

The reason for the new call is that, with PKS added behind kmap, there are 3
levels of mapping we want:

global kmap (can span threads and sleep)
'thread' kmap (can sleep but cannot span threads)
'atomic' kmap (can neither sleep nor span threads [by definition])

As Matthew said, perhaps the 'global' kmaps may be best changed to vmaps?  I
just don't know the details of every call site.

And since I don't know the call-site details: if there are kmap_thread() calls
which are better off as kmap_atomic() calls, I think it is worth converting
them.  But I made the assumption that kmap() users would already be calling
kmap_atomic() if they could (because it is more efficient).

Ira


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 07:41:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 07:41:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5803.15085 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRsS3-0002Wm-Ba; Mon, 12 Oct 2020 07:40:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5803.15085; Mon, 12 Oct 2020 07:40:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRsS3-0002Wf-8J; Mon, 12 Oct 2020 07:40:59 +0000
Received: by outflank-mailman (input) for mailman id 5803;
 Mon, 12 Oct 2020 07:40:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=a/xH=DT=suse.de=colyli@srs-us1.protection.inumbo.net>)
 id 1kRsS2-0002Wa-4m
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 07:40:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7275a4a-e048-4532-9d22-46a927c27e97;
 Mon, 12 Oct 2020 07:40:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3DEE1AC1D;
 Mon, 12 Oct 2020 07:40:52 +0000 (UTC)
X-Inumbo-ID: e7275a4a-e048-4532-9d22-46a927c27e97
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH RFC PKS/PMEM 48/58] drivers/md: Utilize new kmap_thread()
To: Ira Weiny <ira.weiny@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>,
 Kent Overstreet <kent.overstreet@gmail.com>, x86@kernel.org,
 Dave Hansen <dave.hansen@linux.intel.com>,
 Dan Williams <dan.j.williams@intel.com>, Fenghua Yu <fenghua.yu@intel.com>,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org,
 linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org,
 bpf@vger.kernel.org, kexec@lists.infradead.org,
 linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
 devel@driverdev.osuosl.org, linux-efi@vger.kernel.org,
 linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org,
 target-devel@vger.kernel.org, linux-nfs@vger.kernel.org,
 ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, linux-aio@kvack.org,
 io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
 linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
 reiserfs-devel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
 linux-nilfs@vger.kernel.org, cluster-devel@redhat.com,
 ecryptfs@vger.kernel.org, linux-cifs@vger.kernel.org,
 linux-btrfs@vger.kernel.org, linux-afs@lists.infradead.org,
 linux-rdma@vger.kernel.org, amd-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
 drbd-dev@lists.linbit.com, linux-block@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-cachefs@redhat.com,
 samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-49-ira.weiny@intel.com>
 <c802fbf4-f67a-b205-536d-9c71b440f9c8@suse.de>
 <20201012052817.GZ2046448@iweiny-DESK2.sc.intel.com>
From: Coly Li <colyli@suse.de>
Autocrypt: addr=colyli@suse.de; keydata=
 mQINBFYX6S8BEAC9VSamb2aiMTQREFXK4K/W7nGnAinca7MRuFUD4JqWMJ9FakNRd/E0v30F
 qvZ2YWpidPjaIxHwu3u9tmLKqS+2vnP0k7PRHXBYbtZEMpy3kCzseNfdrNqwJ54A430BHf2S
 GMVRVENiScsnh4SnaYjFVvB8SrlhTsgVEXEBBma5Ktgq9YSoy5miatWmZvHLFTQgFMabCz/P
 j5/xzykrF6yHo0rHZtwzQzF8rriOplAFCECp/t05+OeHHxjSqSI0P/G79Ll+AJYLRRm9til/
 K6yz/1hX5xMToIkYrshDJDrUc8DjEpISQQPhG19PzaUf3vFpmnSVYprcWfJWsa2wZyyjRFkf
 J51S82WfclafNC6N7eRXedpRpG6udUAYOA1YdtlyQRZa84EJvMzW96iSL1Gf+ZGtRuM3k49H
 1wiWOjlANiJYSIWyzJjxAd/7Xtiy/s3PRKL9u9y25ftMLFa1IljiDG+mdY7LyAGfvdtIkanr
 iBpX4gWXd7lNQFLDJMfShfu+CTMCdRzCAQ9hIHPmBeZDJxKq721CyBiGAhRxDN+TYiaG/UWT
 7IB7LL4zJrIe/xQ8HhRO+2NvT89o0LxEFKBGg39yjTMIrjbl2ZxY488+56UV4FclubrG+t16
 r2KrandM7P5RjR+cuHhkKseim50Qsw0B+Eu33Hjry7YCihmGswARAQABtBhDb2x5IExpIDxj
 b2x5bGlAc3VzZS5kZT6JAlYEEwEIAEACGyMHCwkIBwMCAQYVCAIJCgsEFgIDAQIeAQIXgBYh
 BOo+RS/0+Uhgjej60Mc5B5Nrffj8BQJcR84dBQkY++fuAAoJEMc5B5Nrffj8ixcP/3KAKg1X
 EcoW4u/0z+Ton5rCyb/NpAww8MuRjNW82UBUac7yCi1y3OW7NtLjuBLw5SaVG5AArb7IF3U0
 qTOobqfl5XHsT0o5wFHZaKUrnHb6y7V3SplsJWfkP3JmOooJsQB3z3K96ZTkFelsNb0ZaBRu
 gV+LA4MomhQ+D3BCDR1it1OX/tpvm2uaDF6s/8uFtcDEM9eQeqATN/QAJ49nvU/I8zDSY9rc
 0x9mP0x+gH4RccbnoPu/rUG6Fm1ZpLrbb6NpaYBBJ/V1BC4lIOjnd24bsoQrQmnJn9dSr60X
 1MY60XDszIyzRw7vbJcUn6ZzPNFDxFFT9diIb+wBp+DD8ZlD/hnVpl4f921ZbvfOSsXAJrKB
 1hGY17FPwelp1sPcK2mDT+pfHEMV+OQdZzD2OCKtza/5IYismJJm3oVUYMogb5vDNAw9X2aP
 XgwUuG+FDEFPamFMUwIfzYHcePfqf0mMsaeSgtA/xTxzx/0MLjUJHl46Bc0uKDhv7QUyGz0j
 Ywgr2mHTvG+NWQ/mDeHNGkcnsnp3IY7koDHnN2xMFXzY4bn9m8ctqKo2roqjCzoxD/njoAhf
 KBzdybLHATqJG/yiZSbCxDA1n/J4FzPyZ0rNHUAJ/QndmmVspE9syFpFCKigvvyrzm016+k+
 FJ59Q6RG4MSy/+J565Xj+DNY3/dCuQINBFYX6S8BEADZP+2cl4DRFaSaBms08W8/smc5T2CO
 YhAoygZn71rB7Djml2ZdvrLRjR8Qbn0Q/2L2gGUVc63pJnbrjlXSx2LfAFE0SlfYIJ11aFdF
 9w7RvqWByQjDJor3Z0fWvPExplNgMvxpD0U0QrVT5dIGTx9hadejCl/ug09Lr6MPQn+a4+qs
 aRWwgCSHaIuDkH3zI1MJXiqXXFKUzJ/Fyx6R72rqiMPHH2nfwmMu6wOXAXb7+sXjZz5Po9GJ
 g2OcEc+rpUtKUJGyeQsnCDxUcqJXZDBi/GnhPCcraQuqiQ7EGWuJfjk51vaI/rW4bZkA9yEP
 B9rBYngbz7cQymUsfxuTT8OSlhxjP3l4ZIZFKIhDaQeZMj8pumBfEVUyiF6KVSfgfNQ/5PpM
 R4/pmGbRqrAAElhrRPbKQnCkGWDr8zG+AjN1KF6rHaFgAIO7TtZ+F28jq4reLkur0N5tQFww
 wFwxzROdeLHuZjL7eEtcnNnzSkXHczLkV4kQ3+vr/7Gm65mQfnVpg6JpwpVrbDYQeOFlxZ8+
 GERY5Dag4KgKa/4cSZX2x/5+KkQx9wHwackw5gDCvAdZ+Q81nm6tRxEYBBiVDQZYqO73stgT
 ZyrkxykUbQIy8PI+g7XMDCMnPiDncQqgf96KR3cvw4wN8QrgA6xRo8xOc2C3X7jTMQUytCz9
 0MyV1QARAQABiQI8BBgBCAAmAhsMFiEE6j5FL/T5SGCN6PrQxzkHk2t9+PwFAlxHziAFCRj7
 5/EACgkQxzkHk2t9+PxgfA//cH5R1DvpJPwraTAl24SUcG9EWe+NXyqveApe05nk15zEuxxd
 e4zFEjo+xYZilSveLqYHrm/amvQhsQ6JLU+8N60DZHVcXbw1Eb8CEjM5oXdbcJpXh1/1BEwl
 4phsQMkxOTns51bGDhTQkv4lsZKvNByB9NiiMkT43EOx14rjkhHw3rnqoI7ogu8OO7XWfKcL
 CbchjJ8t3c2XK1MUe056yPpNAT2XPNF2EEBPG2Y2F4vLgEbPv1EtpGUS1+JvmK3APxjXUl5z
 6xrxCQDWM5AAtGfM/IswVjbZYSJYyH4BQKrShzMb0rWUjkpXvvjsjt8rEXpZEYJgX9jvCoxt
 oqjCKiVLpwje9WkEe9O9VxljmPvxAhVqJjX62S+TGp93iD+mvpCoHo3+CcvyRcilz+Ko8lfO
 hS9tYT0HDUiDLvpUyH1AR2xW9RGDevGfwGTpF0K6cLouqyZNdhlmNciX48tFUGjakRFsxRmX
 K0Jx4CEZubakJe+894sX6pvNFiI7qUUdB882i5GR3v9ijVPhaMr8oGuJ3kvwBIA8lvRBGVGn
 9xvzkQ8Prpbqh30I4NMp8MjFdkwCN6znBKPHdjNTwE5PRZH0S9J0o67IEIvHfH0eAWAsgpTz
 +jwc7VKH7vkvgscUhq/v1/PEWCAqh9UHy7R/jiUxwzw/288OpgO+i+2l11Y=
Message-ID: <026a7658-6c43-6510-a8b5-32f29de7b281@suse.de>
Date: Mon, 12 Oct 2020 15:40:27 +0800
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201012052817.GZ2046448@iweiny-DESK2.sc.intel.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 2020/10/12 13:28, Ira Weiny wrote:
> On Sat, Oct 10, 2020 at 10:20:34AM +0800, Coly Li wrote:
>> On 2020/10/10 03:50, ira.weiny@intel.com wrote:
>>> From: Ira Weiny <ira.weiny@intel.com>
>>>
>>> These kmap() calls are localized to a single thread.  To avoid the
>>> overhead of global PKRS updates, use the new kmap_thread() call.
>>>
>>
>> Hi Ira,
>>
>> There were a number of options considered.
>>
>> 1) Attempt to change all the thread local kmap() calls to kmap_atomic()
>> 2) Introduce a flags parameter to kmap() to indicate if the mapping
>> should be global or not
>> 3) Change ~20-30 call sites to 'kmap_global()' to indicate that they
>> require a global mapping of the pages
>> 4) Change ~209 call sites to 'kmap_thread()' to indicate that the
>> mapping is to be used within that thread of execution only
>>
>>
>> I copied the above information from patch 00/58 into this message.  The
>> idea behind kmap_thread() is fine with me, but as you said, the new API is
>> very easy to miss in new code (even for me).  I would be supportive of
>> option 2), introducing a flag to kmap(): then we won't forget the new
>> thread-localized kmap method, and people won't ask why a _thread()
>> function is called when no kthread is created.
> 
> Thanks for the feedback.
> 
> I'm going to hold off making any changes until others weigh in.  FWIW, I kind
> of like option 2 as well.  But there is already kmap_atomic(), so it seemed
> like kmap_XXXX() was more in line with the current API.

I understand it now; the idea is fine with me.

Acked-by: Coly Li <colyli@suse.de>

Thanks.

Coly Li


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 08:44:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 08:44:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5815.15101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtRO-0002Kl-M4; Mon, 12 Oct 2020 08:44:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5815.15101; Mon, 12 Oct 2020 08:44:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtRO-0002Kc-Db; Mon, 12 Oct 2020 08:44:22 +0000
Received: by outflank-mailman (input) for mailman id 5815;
 Mon, 12 Oct 2020 08:44:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRtRO-0002KX-01
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 08:44:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b22d979a-3021-44c6-b5c8-fa4f22933fc8;
 Mon, 12 Oct 2020 08:44:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRtRJ-0000gU-Jj; Mon, 12 Oct 2020 08:44:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRtRJ-0002YM-98; Mon, 12 Oct 2020 08:44:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRtRJ-0005lj-8b; Mon, 12 Oct 2020 08:44:17 +0000
X-Inumbo-ID: b22d979a-3021-44c6-b5c8-fa4f22933fc8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YoReMLJMYFyUz9WMPkpOj2voaEiGWiuZnLcjQ3MjeFA=; b=d7vphsBEaT8t3UIcN9LJiShKMY
	TeRnkE39LCUjoSlFwmmkWZL9IkrbvJit2JgAdug6hA0KfidHUi5z/Nb8z6/WBkfh67xlpiKo5awtJ
	OSu63Nn8Zwf8nOH64SICgRSga+Gk8wkevvHeCqKrx8I5uh5mSLG6OtYFHC+7euIfSmsA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155721-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155721: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=7382a7c2bef9d0f74a364a13b8b4ec8c08ffd1e5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 08:44:17 +0000

flight 155721 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155721/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              7382a7c2bef9d0f74a364a13b8b4ec8c08ffd1e5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   94 days
Failing since        151818  2020-07-11 04:18:52 Z   93 days   88 attempts
Testing same since   155634  2020-10-10 04:19:54 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 20775 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:09:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:09:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5819.15116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtpZ-0004SA-Jz; Mon, 12 Oct 2020 09:09:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5819.15116; Mon, 12 Oct 2020 09:09:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtpZ-0004S3-H3; Mon, 12 Oct 2020 09:09:21 +0000
Received: by outflank-mailman (input) for mailman id 5819;
 Mon, 12 Oct 2020 09:09:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRtpY-0004Ry-Lr
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:09:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 441515db-0c87-46c9-a408-cf8b6ff429be;
 Mon, 12 Oct 2020 09:09:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRtpW-0001DJ-2B; Mon, 12 Oct 2020 09:09:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRtpV-0003jV-Pt; Mon, 12 Oct 2020 09:09:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRtpV-0007aU-PM; Mon, 12 Oct 2020 09:09:17 +0000
X-Inumbo-ID: 441515db-0c87-46c9-a408-cf8b6ff429be
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=79HAXwvF6MYeW6n6UDY9CiruU2u8r49kNEPRdUDXKLA=; b=wPjI62rn5ulsPEfSLbuVP7SUJ4
	/GushZyKfsKLcmtjGgl3q7wbX434BMT7IorAGtDRvvjxbHitYSMNUUY8pN6nb6OWEjfNrETeoB9Vs
	6UTkMuM2qrMK71rZW0fLeSP4G3XdhH5bPam0LVEqdcJEwQ5BaMGg7DxxmXBEHW8Q5dXY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155724-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155724: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 09:09:17 +0000

flight 155724 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155724/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    3 days
Failing since        155612  2020-10-09 18:01:22 Z    2 days   19 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This causes libxl to fail to start the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    were allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:11:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5824.15131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtrI-0005Gi-4z; Mon, 12 Oct 2020 09:11:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5824.15131; Mon, 12 Oct 2020 09:11:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtrI-0005Gb-1Z; Mon, 12 Oct 2020 09:11:08 +0000
Received: by outflank-mailman (input) for mailman id 5824;
 Mon, 12 Oct 2020 09:11:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRtrG-0005GU-6V
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:11:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa7f867d-7762-43b6-ae41-1608beac901c;
 Mon, 12 Oct 2020 09:11:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5F9BEB03D;
 Mon, 12 Oct 2020 09:11:01 +0000 (UTC)
X-Inumbo-ID: fa7f867d-7762-43b6-ae41-1608beac901c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602493861;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SYSqKrZWGVU8nxonrOSCcTNQRNTPVy8fH0KXNbdNrqQ=;
	b=UQVVEfJ95n3dY3iu7GzrIgq7zCx6w88hoYhCakpqGLOl2YZ9H6Ss4ezICtUjHacArx1Yh0
	SeptfKSuohJ0y89iYiYJ5qm93EOoYxJODDSX347hFr03NZJ6D2L02a1NNZqkZUCUlmRB+V
	0/aZ1d/CpcQ+Xe0QVFXp1Km/xpEF28g=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Mon, 12 Oct 2020 11:10:57 +0200
Message-Id: <20201012091058.27023-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012091058.27023-1-jgross@suse.com>
References: <20201012091058.27023-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The queue used for a fifo event depends on the vcpu_id and the
priority of the event. When sending an event, the event may need to
change queues, and the old queue needs to be identifiable in order to
keep the links between queue elements intact. For this purpose the
event channel contains last_priority and last_vcpu_id fields for
identifying the old queue.

In order to avoid races, always access last_priority and last_vcpu_id
together with a single atomic operation, avoiding any inconsistency.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index fc189152e1..fffbd409c8 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    u32 raw;
+    struct {
+        u8 last_priority;
+        u16 last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq;
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:11:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:11:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5825.15143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtrM-0005JZ-Cv; Mon, 12 Oct 2020 09:11:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5825.15143; Mon, 12 Oct 2020 09:11:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtrM-0005JS-A1; Mon, 12 Oct 2020 09:11:12 +0000
Received: by outflank-mailman (input) for mailman id 5825;
 Mon, 12 Oct 2020 09:11:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRtrL-0005GU-5F
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:11:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b6fe8c4-5cc4-4d8b-9a3d-cff6ad477420;
 Mon, 12 Oct 2020 09:11:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 29FB5B053;
 Mon, 12 Oct 2020 09:11:01 +0000 (UTC)
X-Inumbo-ID: 9b6fe8c4-5cc4-4d8b-9a3d-cff6ad477420
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602493861;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=p+PDp6ih0oKakngE1u8E/yAM210S6xuZh5w/amfPtDs=;
	b=ctknOv5NFBgZz8sIzYhhPl8HVXLD4MVTdVga8EYP2Pf48h8E3Dk1z/LoigqfJP2wiyjVc0
	+DNnMnYSTKP4pp+bhGRq+nyKOG2VYC0pidw8/2Kcv8RnD+8J1FvaJTZj6D8TEdI5T+Z0IZ
	4X7TZ2SOaH4zGiPcB6i72gnA8OzkR1M=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 0/2] XSA-343 followup patches
Date: Mon, 12 Oct 2020 11:10:56 +0200
Message-Id: <20201012091058.27023-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patches for XSA-343 produced some fallout; especially the event
channel locking has proven to be problematic.

Patch 1 targets fifo event channels, avoiding races in case the fifo
queue of a specific event channel has been changed.

Patch 2 modifies the per event channel locking scheme in order to
avoid deadlocks and other problems resulting from the event channel
lock having been changed to require IRQs off by the XSA-343 patches.

Juergen Gross (2):
  xen/events: access last_priority and last_vcpu_id together
  xen/evtchn: rework per event channel lock

 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 109 +++++++++++++++++--------------------
 xen/common/event_fifo.c    |  25 +++++++--
 xen/include/xen/event.h    |  50 ++++++++++++++---
 xen/include/xen/sched.h    |   5 +-
 6 files changed, 120 insertions(+), 84 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:11:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:11:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5826.15155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtrR-0005NT-L3; Mon, 12 Oct 2020 09:11:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5826.15155; Mon, 12 Oct 2020 09:11:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtrR-0005NL-I7; Mon, 12 Oct 2020 09:11:17 +0000
Received: by outflank-mailman (input) for mailman id 5826;
 Mon, 12 Oct 2020 09:11:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRtrQ-0005GU-5I
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:11:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8582d4de-33c6-41bf-8989-abde2aff5395;
 Mon, 12 Oct 2020 09:11:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9B637B087;
 Mon, 12 Oct 2020 09:11:01 +0000 (UTC)
X-Inumbo-ID: 8582d4de-33c6-41bf-8989-abde2aff5395
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602493861;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=u/vnZwI3ysh3QTMWdvYSgNcqLtswfypo6dZsUh9TVME=;
	b=cqcqU9o8htJLN/3QPHWtghB5EPDWDK5Wf9GIwRubX2FRszOF/4H50wwgb9m38yiW27X7rp
	AAlz52JpxysS7iHjay5YjRQDJ7WVH/cmbFtW/iHfgsJZ3SBQw4ZtXW8FFeDsKudvhHYsb+
	Dh669Q+5t4OodHpZGaAu5/TrB9gh8sk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/2] xen/evtchn: rework per event channel lock
Date: Mon, 12 Oct 2020 11:10:58 +0200
Message-Id: <20201012091058.27023-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012091058.27023-1-jgross@suse.com>
References: <20201012091058.27023-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.

Rework the per event channel lock to be non-blocking in the case of
sending an event, and to remove the need to disable interrupts when
taking the lock.

The lock is needed to avoid races between sending an event or querying
the channel's state and the removal of the event channel.

Use a locking scheme similar to a rwlock, but with some modifications:

- sending an event or querying the event channel's state uses an
  operation similar to read_trylock(); if the lock cannot be obtained,
  the send is omitted or a default state is returned

- closing an event channel is similar to write_lock(), but without
  real fairness regarding multiple writers (this saves some space in
  the event channel structure and multiple writers are impossible as
  closing an event channel requires the domain's event_lock to be
  held).

With this locking scheme it is mandatory that a writer always either
starts with an unbound or free event channel or ends with an unbound
or free event channel, as otherwise a reader failing to get the lock
would react incorrectly.

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 109 +++++++++++++++++--------------------
 xen/include/xen/event.h    |  50 ++++++++++++++---
 xen/include/xen/sched.h    |   2 +-
 5 files changed, 100 insertions(+), 76 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..77290032f5 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
                 evtchn = evtchn_from_port(d, info->evtchn);
-                local_irq_disable();
-                if ( spin_trylock(&evtchn->lock) )
+                if ( evtchn_tryread_lock(evtchn) )
                 {
                     pending = evtchn_is_pending(d, evtchn);
                     masked = evtchn_is_masked(d, evtchn);
-                    spin_unlock(&evtchn->lock);
+                    evtchn_read_unlock(evtchn);
                 }
-                local_irq_enable();
                 printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq, "-P?"[pending],
                        "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..3734250bf7 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_tryread_lock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index e365b5498f..398a1e7aa0 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -131,7 +131,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        atomic_set(&chn[i].lock, 0);
     }
     return chn;
 }
@@ -253,7 +253,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -269,14 +268,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -289,32 +288,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -324,7 +317,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -377,7 +369,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -400,7 +392,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -438,14 +429,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -462,7 +453,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -474,13 +464,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -524,7 +514,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -557,14 +546,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -585,7 +574,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -686,14 +674,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -701,9 +689,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -723,7 +711,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -736,7 +723,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_tryread_lock(lchn) )
+        return 0;
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -771,7 +759,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -841,7 +833,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -856,9 +847,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1060,15 +1053,16 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
-    evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_tryread_lock(evtchn) )
+    {
+        evtchn_port_unmask(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return 0;
 }
@@ -1327,7 +1321,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1340,14 +1333,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1383,7 +1376,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1398,7 +1390,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_tryread_lock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1408,7 +1401,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 509d3ae861..abf26a892c 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,39 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    int val;
+
+    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
+          val != EVENT_WRITE_LOCK_INC;
+          val = atomic_read(&evtchn->lock) )
+        cpu_relax();
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
+}
+
+static inline bool evtchn_tryread_lock(struct evtchn *evtchn)
+{
+    if ( atomic_read(&evtchn->lock) >= EVENT_WRITE_LOCK_INC )
+        return false;
+
+    if ( atomic_inc_return(&evtchn->lock) < EVENT_WRITE_LOCK_INC )
+        return true;
+
+    atomic_dec(&evtchn->lock);
+    return false;
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    atomic_dec(&evtchn->lock);
+}
+
 static inline unsigned int max_evtchns(const struct domain *d)
 {
     return d->evtchn_fifo ? EVTCHN_FIFO_NR_CHANNELS
@@ -249,12 +282,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
 static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
-    bool rc;
-    unsigned long flags;
+    bool rc = true;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
-    rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_tryread_lock(evtchn) )
+    {
+        rc = evtchn_is_masked(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return rc;
 }
@@ -274,12 +306,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
-        if ( evtchn_usable(evtchn) )
-            rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+        if ( evtchn_tryread_lock(evtchn) )
+        {
+            if ( evtchn_usable(evtchn) )
+                rc = evtchn_is_pending(d, evtchn);
+            evtchn_read_unlock(evtchn);
+        }
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..096e0ec6af 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    atomic_t lock;         /* kind of rwlock, use evtchn_*_[un]lock()        */
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:15:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:15:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5831.15167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtvL-0005pM-4l; Mon, 12 Oct 2020 09:15:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5831.15167; Mon, 12 Oct 2020 09:15:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRtvL-0005pF-1W; Mon, 12 Oct 2020 09:15:19 +0000
Received: by outflank-mailman (input) for mailman id 5831;
 Mon, 12 Oct 2020 09:15:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRtvK-0005pA-9J
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:15:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0eeefbd3-d830-4fcc-9034-d937f88127b2;
 Mon, 12 Oct 2020 09:15:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2B50DAF0E;
 Mon, 12 Oct 2020 09:15:15 +0000 (UTC)
X-Inumbo-ID: 0eeefbd3-d830-4fcc-9034-d937f88127b2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602494115;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IDSrsPtVKC/xAOGARSdlQ5HMziWm1Vwrg6lD2YWyBBU=;
	b=JTmMeqOoncfr6hQZH67pZ0ty/qhF7VDrCLg29+JZ78uFwCoWkiVyy0aTIcUsDjSp+tfqpK
	SYkx5ZPzYzGuE8A3/nlEzysKuev/hh49+uLMqT+3C9YLyiETLhLtnqZOISkh1O8Zs+GUFt
	pFrWA3jCOFFFp7/KwKsuCkdX/WyF818=
Subject: Re: [PATCH 2/2] xen/evtchn: rework per event channel lock
To: xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201012091058.27023-1-jgross@suse.com>
 <20201012091058.27023-3-jgross@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3a7c7346-6506-f2e1-ad22-cd18aa1ccfc3@suse.com>
Date: Mon, 12 Oct 2020 11:15:14 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012091058.27023-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.10.20 11:10, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking in the case of
> sending an event, and to remove the need to disable interrupts when
> taking the lock.
> 
> The lock is needed to avoid races between sending an event or
> querying the channel's state and the removal of the event channel.
> 
> Use a locking scheme similar to a rwlock, but with some modifications:
> 
> - sending an event or querying the event channel's state uses an
>    operation similar to read_trylock(); if the lock cannot be
>    obtained, the send is omitted or a default state is returned
> 
> - closing an event channel is similar to write_lock(), but without
>    real fairness regarding multiple writers (this saves some space in
>    the event channel structure; multiple writers are impossible
>    anyway, as closing an event channel requires the domain's
>    event_lock to be held).
> 
> With this locking scheme it is mandatory that a writer always either
> starts or ends with the event channel in the unbound or free state,
> as otherwise a reader failing to get the lock could react incorrectly.
> 
> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Oh, please ignore. I forgot to add the needed barriers to the locking
functions.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:27:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5837.15203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRu7R-0006yH-SX; Mon, 12 Oct 2020 09:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5837.15203; Mon, 12 Oct 2020 09:27:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRu7R-0006y8-Oy; Mon, 12 Oct 2020 09:27:49 +0000
Received: by outflank-mailman (input) for mailman id 5837;
 Mon, 12 Oct 2020 09:27:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRu7R-0006v7-5D
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:27:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce4a0cf1-0c20-4fd9-a0e5-c366bb5c8c81;
 Mon, 12 Oct 2020 09:27:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 144FBB11D;
 Mon, 12 Oct 2020 09:27:43 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kRu7R-0006v7-5D
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:27:49 +0000
X-Inumbo-ID: ce4a0cf1-0c20-4fd9-a0e5-c366bb5c8c81
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ce4a0cf1-0c20-4fd9-a0e5-c366bb5c8c81;
	Mon, 12 Oct 2020 09:27:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602494863;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yPhjpyUu4X5OZjRKFYIDjthAXEM4EfUzY2ak6MXL0XA=;
	b=CHYbaBd+EOnDuJN04BBWMP38vzVbnNn2SsJIYp9QWkMv9h0LWO0/0f3f1DBQsXNRpgHZbW
	MJUtU8NXucDeJUFCN9zfBRmcluWw2yLd4ZkRjKCkIf2/h8hf1GPCxLjUj7J5r1cNbH+Zcm
	RuyKJ+7z8y1RVSAa9vwmVFzTSyfVjZc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 144FBB11D;
	Mon, 12 Oct 2020 09:27:43 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
Date: Mon, 12 Oct 2020 11:27:40 +0200
Message-Id: <20201012092740.1617-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012092740.1617-1-jgross@suse.com>
References: <20201012092740.1617-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.

Rework the per event channel lock to be non-blocking in the case of
sending an event, and to remove the need to disable interrupts when
taking the lock.

The lock is needed to avoid races between sending an event or
querying the channel's state and the removal of the event channel.

Use a locking scheme similar to a rwlock, but with some modifications:

- sending an event or querying the event channel's state uses an
  operation similar to read_trylock(); if the lock cannot be obtained,
  the send is omitted or a default state is returned

- closing an event channel is similar to write_lock(), but without
  real fairness regarding multiple writers (this saves some space in
  the event channel structure; multiple writers are impossible anyway,
  as closing an event channel requires the domain's event_lock to be
  held).

With this locking scheme it is mandatory that a writer always either
starts or ends with the event channel in the unbound or free state,
as otherwise a reader failing to get the lock could react incorrectly.
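
As a standalone illustration (not part of the patch), the counting scheme
behind these helpers can be modeled with C11 atomics. The names below and
the WRITE_LOCK_INC constant are stand-ins for the real evtchn_*_lock()
helpers and EVENT_WRITE_LOCK_INC/MAX_VIRT_CPUS; this is a sketch only,
ignoring the arch-specific barrier details:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for EVENT_WRITE_LOCK_INC (MAX_VIRT_CPUS in the patch):
 * any value larger than the possible number of concurrent readers. */
#define WRITE_LOCK_INC 8192

/* Reader side: only ever try-locks; a failed attempt means a writer
 * (i.e. close/reset) is active and the operation is simply skipped. */
static bool chn_tryread_lock(atomic_int *lock)
{
    if (atomic_load(lock) >= WRITE_LOCK_INC)
        return false;                     /* writer active */
    if (atomic_fetch_add(lock, 1) + 1 < WRITE_LOCK_INC)
        return true;                      /* reader counted in */
    atomic_fetch_sub(lock, 1);            /* raced with a writer: back out */
    return false;
}

static void chn_read_unlock(atomic_int *lock)
{
    atomic_fetch_sub(lock, 1);
}

/* Writer side: add a large bias, then wait for all readers to drain
 * (count back down to exactly the bias). The real code uses
 * cpu_relax() in the wait loop and relies on the atomics acting as
 * full barriers. */
static void chn_write_lock(atomic_int *lock)
{
    atomic_fetch_add(lock, WRITE_LOCK_INC);
    while (atomic_load(lock) != WRITE_LOCK_INC)
        ;                                 /* spin until readers are gone */
}

static void chn_write_unlock(atomic_int *lock)
{
    atomic_fetch_sub(lock, WRITE_LOCK_INC);
}
```

In this model readers never spin: a sender that loses the race with a
close simply drops the event, which is exactly the behaviour the commit
message describes.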

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- added needed barriers
---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 109 +++++++++++++++++--------------------
 xen/include/xen/event.h    |  56 ++++++++++++++++---
 xen/include/xen/sched.h    |   2 +-
 5 files changed, 106 insertions(+), 76 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..77290032f5 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
                 evtchn = evtchn_from_port(d, info->evtchn);
-                local_irq_disable();
-                if ( spin_trylock(&evtchn->lock) )
+                if ( evtchn_tryread_lock(evtchn) )
                 {
                     pending = evtchn_is_pending(d, evtchn);
                     masked = evtchn_is_masked(d, evtchn);
-                    spin_unlock(&evtchn->lock);
+                    evtchn_read_unlock(evtchn);
                 }
-                local_irq_enable();
                 printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq, "-P?"[pending],
                        "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..3734250bf7 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_tryread_lock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index e365b5498f..398a1e7aa0 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -131,7 +131,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        atomic_set(&chn[i].lock, 0);
     }
     return chn;
 }
@@ -253,7 +253,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -269,14 +268,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -289,32 +288,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -324,7 +317,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -377,7 +369,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -400,7 +392,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -438,14 +429,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -462,7 +453,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -474,13 +464,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -524,7 +514,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -557,14 +546,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -585,7 +574,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -686,14 +674,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -701,9 +689,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -723,7 +711,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -736,7 +723,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_tryread_lock(lchn) )
+        return 0;
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -771,7 +759,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -841,7 +833,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -856,9 +847,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_tryread_lock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1060,15 +1053,16 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
-    evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_tryread_lock(evtchn) )
+    {
+        evtchn_port_unmask(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return 0;
 }
@@ -1327,7 +1321,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1340,14 +1333,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1383,7 +1376,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1398,7 +1390,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_tryread_lock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1408,7 +1401,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 509d3ae861..39a93f7556 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    int val;
+
+    /* No barrier needed, atomic_add_return() is a full barrier. */
+    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
+          val != EVENT_WRITE_LOCK_INC;
+          val = atomic_read(&evtchn->lock) )
+        cpu_relax();
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    arch_lock_release_barrier();
+
+    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
+}
+
+static inline bool evtchn_tryread_lock(struct evtchn *evtchn)
+{
+    if ( atomic_read(&evtchn->lock) >= EVENT_WRITE_LOCK_INC )
+        return false;
+
+    /* No barrier needed, atomic_inc_return() is a full barrier. */
+    if ( atomic_inc_return(&evtchn->lock) < EVENT_WRITE_LOCK_INC )
+        return true;
+
+    atomic_dec(&evtchn->lock);
+    return false;
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    arch_lock_release_barrier();
+
+    atomic_dec(&evtchn->lock);
+}
+
 static inline unsigned int max_evtchns(const struct domain *d)
 {
     return d->evtchn_fifo ? EVTCHN_FIFO_NR_CHANNELS
@@ -249,12 +288,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
-    bool rc;
-    unsigned long flags;
+    bool rc = true;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
-    rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_tryread_lock(evtchn) )
+    {
+        rc = evtchn_is_masked(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return rc;
 }
@@ -274,12 +312,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
-        if ( evtchn_usable(evtchn) )
-            rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+        if ( evtchn_tryread_lock(evtchn) )
+        {
+            if ( evtchn_usable(evtchn) )
+                rc = evtchn_is_pending(d, evtchn);
+            evtchn_read_unlock(evtchn);
+        }
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..096e0ec6af 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    atomic_t lock;         /* kind of rwlock, use evtchn_*_[un]lock()        */
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:27:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5835.15179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRu7N-0006vJ-Cf; Mon, 12 Oct 2020 09:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5835.15179; Mon, 12 Oct 2020 09:27:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRu7N-0006vC-8d; Mon, 12 Oct 2020 09:27:45 +0000
Received: by outflank-mailman (input) for mailman id 5835;
 Mon, 12 Oct 2020 09:27:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRu7M-0006v7-8M
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:27:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0450a4dc-39fe-4b5e-a12a-22a68f0d597c;
 Mon, 12 Oct 2020 09:27:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A2806ACE6;
 Mon, 12 Oct 2020 09:27:42 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kRu7M-0006v7-8M
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:27:44 +0000
X-Inumbo-ID: 0450a4dc-39fe-4b5e-a12a-22a68f0d597c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0450a4dc-39fe-4b5e-a12a-22a68f0d597c;
	Mon, 12 Oct 2020 09:27:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602494862;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=asB2BCdRsDccGAc7GcG/kddQH4nCdr8U8AOZiUpLbWE=;
	b=ifIHI4wlnNJnZgWMVBBgNCukI5PzuwlTNIBE+fGyfup1hDoJH1FFtNj4Xa9aBv9CuBJE56
	OCI67kxyPPdNv4UuZyDwc3NAJaOyLLM51Ikq20QsDTvHUl9S2m6wUABHgbcNO+fFvuiV8H
	8un7TXjM+KNMnl3iu7cFnOYGCbnejyA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A2806ACE6;
	Mon, 12 Oct 2020 09:27:42 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 0/2] XSA-343 followup patches
Date: Mon, 12 Oct 2020 11:27:38 +0200
Message-Id: <20201012092740.1617-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patches for XSA-343 produced some fallout; especially the event
channel locking has proven to be problematic.

Patch 1 targets fifo event channels, avoiding any races in the case
that the fifo queue has been changed for a specific event channel.

Patch 2 modifies the per event channel locking scheme in order to
avoid the deadlocks and problems caused by the XSA-343 patches having
changed the event channel lock to require IRQs off.

Juergen Gross (2):
  xen/events: access last_priority and last_vcpu_id together
  xen/evtchn: rework per event channel lock

 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 109 +++++++++++++++++--------------------
 xen/common/event_fifo.c    |  25 +++++++--
 xen/include/xen/event.h    |  56 ++++++++++++++++---
 xen/include/xen/sched.h    |   5 +-
 6 files changed, 126 insertions(+), 84 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:27:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:27:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5836.15191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRu7P-0006wI-K4; Mon, 12 Oct 2020 09:27:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5836.15191; Mon, 12 Oct 2020 09:27:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRu7P-0006wB-GU; Mon, 12 Oct 2020 09:27:47 +0000
Received: by outflank-mailman (input) for mailman id 5836;
 Mon, 12 Oct 2020 09:27:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRu7N-0006vh-Q0
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:27:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6fcc190b-6940-4bbb-a1a8-3c705fea8142;
 Mon, 12 Oct 2020 09:27:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C8C34AD8D;
 Mon, 12 Oct 2020 09:27:42 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kRu7N-0006vh-Q0
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:27:45 +0000
X-Inumbo-ID: 6fcc190b-6940-4bbb-a1a8-3c705fea8142
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 6fcc190b-6940-4bbb-a1a8-3c705fea8142;
	Mon, 12 Oct 2020 09:27:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602494862;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SYSqKrZWGVU8nxonrOSCcTNQRNTPVy8fH0KXNbdNrqQ=;
	b=i6Gp+c1el4IOs2iWyMWypD/jFOenUL+qtGTQn70h5+V7A52hx1MLqudUhth8baPTxxpWPQ
	pErSfOqmgDHrZ41/chwtIlFMT4c5V4rGOiLQ9UDhRulZ1uZj0UG4hb5Qzv+3/AxVPTAy/Y
	d0UnNU0JftEfrvV1lwTgPEndzlgXJzI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C8C34AD8D;
	Mon, 12 Oct 2020 09:27:42 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Mon, 12 Oct 2020 11:27:39 +0200
Message-Id: <20201012092740.1617-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012092740.1617-1-jgross@suse.com>
References: <20201012092740.1617-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The queue for a fifo event depends on the vcpu_id and the priority of
the event. When sending an event it might happen that the event needs
to change queues, and the old queue must remain identifiable in order
to keep the links between queue elements intact. For this purpose the
event channel contains last_priority and last_vcpu_id values
identifying the old queue.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, avoiding any inconsistency
between the two.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index fc189152e1..fffbd409c8 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    u32 raw;
+    struct {
+        u8 last_priority;
+        u16 last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq;
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:48:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:48:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5841.15215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRuRI-0000lB-KI; Mon, 12 Oct 2020 09:48:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5841.15215; Mon, 12 Oct 2020 09:48:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRuRI-0000l4-GX; Mon, 12 Oct 2020 09:48:20 +0000
Received: by outflank-mailman (input) for mailman id 5841;
 Mon, 12 Oct 2020 09:48:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K42p=DT=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kRuRH-0000kz-Kp
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:48:19 +0000
Received: from mail-ej1-x641.google.com (unknown [2a00:1450:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b48031dc-42de-41d2-ad4a-e23f919ac538;
 Mon, 12 Oct 2020 09:48:18 +0000 (UTC)
Received: by mail-ej1-x641.google.com with SMTP id lw21so22321353ejb.6
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 02:48:18 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id a20sm10378289ejg.41.2020.10.12.02.48.16
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 12 Oct 2020 02:48:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=gQnbn6MBOYsJAO8/g7lEu+HeG9m4dHt4A8REBchbeaw=;
        b=j80APBo2nhuqj6esymsB/CgBBZOtDpj8thFjXIVk4DR6hJpdkg/YFirXyb0MAosamm
         Gck+K15gakP7PvRCWvEsd6zGCJq01SorZg7bpgJcDbFayITohBeTH+FQulbdasj9R6zy
         mX9yNtc7XZbqxowSfvUNb6rAb2iUOVGcFur7kyvqJ1wnvEqrbRKYOtYvYufhqSqxA+CO
         P2IHVEry36lKnicfl0XiYAflAQv3MXjdl6dEs19YjmOKQ7mdVjUxkcLk714ILt8gbKtw
         smkYlWATUUJqZ2GVWhQFQFINVCyEIkBFhq3MPb7zkIvGGuy9KCZ2RwuiQBJZBOOox9rj
         VxjA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=gQnbn6MBOYsJAO8/g7lEu+HeG9m4dHt4A8REBchbeaw=;
        b=Dgosqf8N0G4lWYJpUCHigPn0u5syXEdVefL4sthfZiA+ixY7w2totev45grsKvZVqy
         ss/+S3BYw6qw/A1r70Ojwb0V46pEjZIBMikpoSUcvx8e4lGjcS2sISe4PPVo8wseQZod
         qP9o/TC/4U26gZ9sYNMhSQLCxOF/YpNpXuLsfUpRuPGokqXOkMYfxuY6VThMjxRtfVPW
         iQDsth9hqg2EoPtjH0mLuR8AxcpaCiqk4KgOOBhi5+D6iXDzJC314Wqd1l1qz1waLgVw
         d47NRPtfgC/sMW2MGPoLJ+PDkDg9LEAOu8pdeY4b1NvYQ6MpdPwzogJAeqHGvX/aA+LL
         q8Kw==
X-Gm-Message-State: AOAM531WBKFSbV32uUsUsf1uMUtTFA04+0g43+0jFtPeWuedthKGeLo0
	qvkONM0ULF3F8ZB7jl3R0x0=
X-Google-Smtp-Source: ABdhPJxw82uoVsIauKlzAtTWV2T7T9MvYIFW5/4N0n6COT2UwSB22pdQOC66tSmgN/gFXnEqco3Hzw==
X-Received: by 2002:a17:906:490d:: with SMTP id b13mr26734070ejq.122.1602496097762;
        Mon, 12 Oct 2020 02:48:17 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com> <20201012092740.1617-2-jgross@suse.com>
In-Reply-To: <20201012092740.1617-2-jgross@suse.com>
Subject: RE: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Mon, 12 Oct 2020 10:48:16 +0100
Message-ID: <001101d6a07c$cf7c7f80$6e757e80$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJ57sLa/Hg8NzwnPjAH3Ul9dW38wQCnOCG/qEfhDJA=

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> Sent: 12 October 2020 10:28
> To: xen-devel@lists.xenproject.org
> Cc: Juergen Gross <jgross@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Julien
> Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> Subject: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
> 
> The queue for a fifo event is depending on the vcpu_id and the
> priority of the event. When sending an event it might happen the
> event needs to change queues and the old queue needs to be kept for
> keeping the links between queue elements intact. For this purpose
> the event channel contains last_priority and last_vcpu_id values
> elements for being able to identify the old queue.
> 
> In order to avoid races always access last_priority and last_vcpu_id
> with a single atomic operation avoiding any inconsistencies.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  xen/common/event_fifo.c | 25 +++++++++++++++++++------
>  xen/include/xen/sched.h |  3 +--
>  2 files changed, 20 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
> index fc189152e1..fffbd409c8 100644
> --- a/xen/common/event_fifo.c
> +++ b/xen/common/event_fifo.c
> @@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
>      unsigned int num_evtchns;
>  };
> 
> +union evtchn_fifo_lastq {
> +    u32 raw;
> +    struct {
> +        u8 last_priority;
> +        u16 last_vcpu_id;
> +    };
> +};

I guess you want to s/u32/uint32_t, etc. above.

> +
>  static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
>                                                         unsigned int port)
>  {
> @@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
>      struct vcpu *v;
>      struct evtchn_fifo_queue *q, *old_q;
>      unsigned int try;
> +    union evtchn_fifo_lastq lastq;
> 
>      for ( try = 0; try < 3; try++ )
>      {
> -        v = d->vcpu[evtchn->last_vcpu_id];
> -        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
> +        v = d->vcpu[lastq.last_vcpu_id];
> +        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
> 
>          spin_lock_irqsave(&old_q->lock, *flags);
> 
> -        v = d->vcpu[evtchn->last_vcpu_id];
> -        q = &v->evtchn_fifo->queue[evtchn->last_priority];
> +        v = d->vcpu[lastq.last_vcpu_id];
> +        q = &v->evtchn_fifo->queue[lastq.last_priority];
> 
>          if ( old_q == q )
>              return old_q;
> @@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>          /* Moved to a different queue? */
>          if ( old_q != q )
>          {
> -            evtchn->last_vcpu_id = v->vcpu_id;
> -            evtchn->last_priority = q->priority;
> +            union evtchn_fifo_lastq lastq;
> +
> +            lastq.last_vcpu_id = v->vcpu_id;
> +            lastq.last_priority = q->priority;
> +            write_atomic(&evtchn->fifo_lastq, lastq.raw);
> 

You're going to leak some stack here I think. Perhaps add a 'pad' field between 'last_priority' and 'last_vcpu_id' and zero it?

  Paul

>              spin_unlock_irqrestore(&old_q->lock, flags);
>              spin_lock_irqsave(&q->lock, flags);
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index d8ed83f869..a298ff4df8 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -114,8 +114,7 @@ struct evtchn
>          u16 virq;      /* state == ECS_VIRQ */
>      } u;
>      u8 priority;
> -    u8 last_priority;
> -    u16 last_vcpu_id;
> +    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
>  #ifdef CONFIG_XSM
>      union {
>  #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
> --
> 2.26.2
> 




From xen-devel-bounces@lists.xenproject.org Mon Oct 12 09:56:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 09:56:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5845.15228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRuZ8-0001oN-Fp; Mon, 12 Oct 2020 09:56:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5845.15228; Mon, 12 Oct 2020 09:56:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRuZ8-0001oG-D5; Mon, 12 Oct 2020 09:56:26 +0000
Received: by outflank-mailman (input) for mailman id 5845;
 Mon, 12 Oct 2020 09:56:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRuZ7-0001oB-C8
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 09:56:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24cb99c0-d570-438d-ad81-41075fc5fa06;
 Mon, 12 Oct 2020 09:56:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9AEB9B06A;
 Mon, 12 Oct 2020 09:56:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602496583;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=L7B5G+ZWPs6KBKx/MKpWIccnm2mLzAkl0/gWeFDJWAc=;
	b=Jr6HGY3YW+x/HCniORissOSNx53oNXxZ9GliFiCq3+5/GxUFOanOTMfN2mHoACextb+Erb
	MqRYQ8Zvi5d1aHxM+v7EkAQYpXYWJ6LeCkH52n264KUjiG/mwfzS2Qhn17B6XpiWYZiTwN
	yIVaYdqu6UwwDnZmDv6IQcsCD0ZvduY=
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Jan Beulich' <jbeulich@suse.com>,
 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <001101d6a07c$cf7c7f80$6e757e80$@xen.org>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <4fec0346-6048-723c-f5c6-50c3f68f508a@suse.com>
Date: Mon, 12 Oct 2020 11:56:23 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <001101d6a07c$cf7c7f80$6e757e80$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.10.20 11:48, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
>> Sent: 12 October 2020 10:28
>> To: xen-devel@lists.xenproject.org
>> Cc: Juergen Gross <jgross@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
>> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Julien
>> Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
>> Subject: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
>>
>> The queue for a fifo event is depending on the vcpu_id and the
>> priority of the event. When sending an event it might happen the
>> event needs to change queues and the old queue needs to be kept for
>> keeping the links between queue elements intact. For this purpose
>> the event channel contains last_priority and last_vcpu_id values
>> elements for being able to identify the old queue.
>>
>> In order to avoid races always access last_priority and last_vcpu_id
>> with a single atomic operation avoiding any inconsistencies.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   xen/common/event_fifo.c | 25 +++++++++++++++++++------
>>   xen/include/xen/sched.h |  3 +--
>>   2 files changed, 20 insertions(+), 8 deletions(-)
>>
>> diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
>> index fc189152e1..fffbd409c8 100644
>> --- a/xen/common/event_fifo.c
>> +++ b/xen/common/event_fifo.c
>> @@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
>>       unsigned int num_evtchns;
>>   };
>>
>> +union evtchn_fifo_lastq {
>> +    u32 raw;
>> +    struct {
>> +        u8 last_priority;
>> +        u16 last_vcpu_id;
>> +    };
>> +};
> 
> I guess you want to s/u32/uint32_t, etc. above.

Hmm, yes, probably.

> 
>> +
>>   static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
>>                                                          unsigned int port)
>>   {
>> @@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
>>       struct vcpu *v;
>>       struct evtchn_fifo_queue *q, *old_q;
>>       unsigned int try;
>> +    union evtchn_fifo_lastq lastq;
>>
>>       for ( try = 0; try < 3; try++ )
>>       {
>> -        v = d->vcpu[evtchn->last_vcpu_id];
>> -        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
>> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
>> +        v = d->vcpu[lastq.last_vcpu_id];
>> +        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
>>
>>           spin_lock_irqsave(&old_q->lock, *flags);
>>
>> -        v = d->vcpu[evtchn->last_vcpu_id];
>> -        q = &v->evtchn_fifo->queue[evtchn->last_priority];
>> +        v = d->vcpu[lastq.last_vcpu_id];
>> +        q = &v->evtchn_fifo->queue[lastq.last_priority];
>>
>>           if ( old_q == q )
>>               return old_q;
>> @@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
>>           /* Moved to a different queue? */
>>           if ( old_q != q )
>>           {
>> -            evtchn->last_vcpu_id = v->vcpu_id;
>> -            evtchn->last_priority = q->priority;
>> +            union evtchn_fifo_lastq lastq;
>> +
>> +            lastq.last_vcpu_id = v->vcpu_id;
>> +            lastq.last_priority = q->priority;
>> +            write_atomic(&evtchn->fifo_lastq, lastq.raw);
>>
> 
> You're going to leak some stack here I think. Perhaps add a 'pad' field between 'last_priority' and 'last_vcpu_id' and zero it?

I can do that, but why? This is nothing a guest is supposed to see at
any time.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 10:07:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 10:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5880.15261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRujO-0003EO-Sw; Mon, 12 Oct 2020 10:07:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5880.15261; Mon, 12 Oct 2020 10:07:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRujO-0003EH-Pg; Mon, 12 Oct 2020 10:07:02 +0000
Received: by outflank-mailman (input) for mailman id 5880;
 Mon, 12 Oct 2020 10:07:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K42p=DT=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kRujN-0003EC-Jd
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 10:07:01 +0000
Received: from mail-ej1-x641.google.com (unknown [2a00:1450:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5333952d-ab81-4640-96ce-a4d3ff65e0b3;
 Mon, 12 Oct 2020 10:07:00 +0000 (UTC)
Received: by mail-ej1-x641.google.com with SMTP id x7so12250009eje.8
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 03:07:00 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id q14sm10390603ejo.53.2020.10.12.03.06.58
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 12 Oct 2020 03:06:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=iLqiTpZxgc/OLVw6/LsTqY/o+SmgB8y8C7ZiDO+xLT0=;
        b=vHPfGfgg/bEhxH5IY6ovi2jaZ3v0L2k7uWVY64++h97w5A2Tqkej6tHgmS0T/jzFlT
         XDy33GrpbhdDy2SxhxEYxn6mYRi85/v+NE3xeLVa31hg34bbwnUMC1bSYUHBqjgpJ79M
         jYu+OJ9xx20ewTuzaj9qy1NWucVIYUivqPK7LytBiNYD6RCuYr3B8/kuxHg0MOqLjuSY
         O18zTbRGO0y2ARJM4YSvFIQP30G+Spq5aeAFubleBIgQsUlrD3lqLPeO8LxwvsC16Hhu
         Z56KDIn2pGFJQJXYEByEXu+z0JkVJxWK4XE20sk9i6sAuV/q9ltuoTt1cGQydRCNJw3m
         PCWA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=iLqiTpZxgc/OLVw6/LsTqY/o+SmgB8y8C7ZiDO+xLT0=;
        b=cKV1fDx9QfeYs1W0JBjk3ecGe16ICPrmrrC3fLsYxhItMDbny4HtRVFIbr+rESu9pH
         yub/0+4si4Bmi9XCxlnI6nWk5/8SAlaT2KT6hD2zOi8cnFAVuwnKZNJdtSSvEPspffQG
         o3y/U4nV0cqyMHq383Jn1mlNAeleg4rbsrjtgG3plrBtFLcbFgk87QuhCs129bAdaE8K
         5/uv826YHTJviFKFdrxE50cuqHYtHAz7cFZQbrkOgAwmH//FtmMj50GfSPGYxVfYwM8z
         OLW4+ZpUg2sOSmQSjdwLRpWz6fHXbocSZOIXa0vn5tgy2lTk9Kdni/7/1+ShFuB/mxQ7
         DzAA==
X-Gm-Message-State: AOAM530EKdUuZhx4kZMgXwenbeL8QoW2IaaJ5Q03o2jAhQydkFwUIn0b
	d+/LHOsH0Ypu4TOTY6BVQo4=
X-Google-Smtp-Source: ABdhPJzeLZcEUDCJplATepx+CwwQ8Fwe38PL9VJqG557e3dj9YIJ6WjG/+iT/eirgJnx/wkkXujhkg==
X-Received: by 2002:a17:906:f0d8:: with SMTP id dk24mr27043057ejb.492.1602497219695;
        Mon, 12 Oct 2020 03:06:59 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: 'Jürgen Groß' <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com> <20201012092740.1617-2-jgross@suse.com> <001101d6a07c$cf7c7f80$6e757e80$@xen.org> <4fec0346-6048-723c-f5c6-50c3f68f508a@suse.com>
In-Reply-To: <4fec0346-6048-723c-f5c6-50c3f68f508a@suse.com>
Subject: RE: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Mon, 12 Oct 2020 11:06:58 +0100
Message-ID: <001201d6a07f$6c3d0f40$44b72dc0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJ57sLa/Hg8NzwnPjAH3Ul9dW38wQCnOCG/Al1KAfICVpOi4qgiSBTw

> -----Original Message-----
> From: Jürgen Groß <jgross@suse.com>
> Sent: 12 October 2020 10:56
> To: paul@xen.org; xen-devel@lists.xenproject.org
> Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Ian
> Jackson' <iwj@xenproject.org>; 'Jan Beulich' <jbeulich@suse.com>; 'Julien Grall' <julien@xen.org>;
> 'Stefano Stabellini' <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>
> Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
> 
> On 12.10.20 11:48, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> >> Sent: 12 October 2020 10:28
> >> To: xen-devel@lists.xenproject.org
> >> Cc: Juergen Gross <jgross@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> >> <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>;
> Julien
> >> Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>
> >> Subject: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id together
> >>
> >> The queue for a fifo event is depending on the vcpu_id and the
> >> priority of the event. When sending an event it might happen the
> >> event needs to change queues and the old queue needs to be kept for
> >> keeping the links between queue elements intact. For this purpose
> >> the event channel contains last_priority and last_vcpu_id values
> >> elements for being able to identify the old queue.
> >>
> >> In order to avoid races always access last_priority and last_vcpu_id
> >> with a single atomic operation avoiding any inconsistencies.
> >>
> >> Signed-off-by: Juergen Gross <jgross@suse.com>
> >> ---
> >>   xen/common/event_fifo.c | 25 +++++++++++++++++++------
> >>   xen/include/xen/sched.h |  3 +--
> >>   2 files changed, 20 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
> >> index fc189152e1..fffbd409c8 100644
> >> --- a/xen/common/event_fifo.c
> >> +++ b/xen/common/event_fifo.c
> >> @@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
> >>       unsigned int num_evtchns;
> >>   };
> >>
> >> +union evtchn_fifo_lastq {
> >> +    u32 raw;
> >> +    struct {
> >> +        u8 last_priority;
> >> +        u16 last_vcpu_id;
> >> +    };
> >> +};
> >
> > I guess you want to s/u32/uint32_t, etc. above.
> 
> Hmm, yes, probably.
> 
> >
> >> +
> >>   static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
> >>                                                          unsigned int port)
> >>   {
> >> @@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
> >>       struct vcpu *v;
> >>       struct evtchn_fifo_queue *q, *old_q;
> >>       unsigned int try;
> >> +    union evtchn_fifo_lastq lastq;
> >>
> >>       for ( try = 0; try < 3; try++ )
> >>       {
> >> -        v = d->vcpu[evtchn->last_vcpu_id];
> >> -        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
> >> +        lastq.raw = read_atomic(&evtchn->fifo_lastq);
> >> +        v = d->vcpu[lastq.last_vcpu_id];
> >> +        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
> >>
> >>           spin_lock_irqsave(&old_q->lock, *flags);
> >>
> >> -        v = d->vcpu[evtchn->last_vcpu_id];
> >> -        q = &v->evtchn_fifo->queue[evtchn->last_priority];
> >> +        v = d->vcpu[lastq.last_vcpu_id];
> >> +        q = &v->evtchn_fifo->queue[lastq.last_priority];
> >>
> >>           if ( old_q == q )
> >>               return old_q;
> >> @@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
> >>           /* Moved to a different queue? */
> >>           if ( old_q != q )
> >>           {
> >> -            evtchn->last_vcpu_id = v->vcpu_id;
> >> -            evtchn->last_priority = q->priority;
> >> +            union evtchn_fifo_lastq lastq;
> >> +
> >> +            lastq.last_vcpu_id = v->vcpu_id;
> >> +            lastq.last_priority = q->priority;
> >> +            write_atomic(&evtchn->fifo_lastq, lastq.raw);
> >>
> >
> > You're going to leak some stack here I think. Perhaps add a 'pad' field between 'last_priority' and
> 'last_vcpu_id' and zero it?
> 
> I can do that, but why? This is nothing a guest is supposed to see at
> any time.

True, but it would also be nice if the value of 'raw' was at least
predictable. I guess just adding '= {}' to the declaration would
actually be easiest.

  Paul

> 
> 
> Juergen



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 10:13:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 10:13:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5883.15273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRupw-0004EY-JQ; Mon, 12 Oct 2020 10:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5883.15273; Mon, 12 Oct 2020 10:13:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRupw-0004ER-Fm; Mon, 12 Oct 2020 10:13:48 +0000
Received: by outflank-mailman (input) for mailman id 5883;
 Mon, 12 Oct 2020 10:13:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pwhd=DT=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kRupu-0004EM-7H
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 10:13:47 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8580d5f3-3d6a-42fb-ad8d-f5bba176dd19;
 Mon, 12 Oct 2020 10:13:43 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kRupj-0003tb-DR; Mon, 12 Oct 2020 10:13:35 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 485E3304B90;
 Mon, 12 Oct 2020 12:13:30 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 9CFB320AEA645; Mon, 12 Oct 2020 12:13:30 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=Dn/O7OmyBKPjXz4w+vIDufrUCX0e5WR00/QXRFI14Wc=; b=prmzH2eD4I4CqdlEZQDpN4SQCB
	jY2hODRzO7/iAF5kObY8UBFy/jzJZByMbwnxgYL7Awp/F8v/UtIH/41XK0DtskSN7qFUsa4aEWGAa
	z0aBhy7Uc8HbtB8CmIXh9bv0Wud7hj0/d1Gf01HcqAQ5U+owxFXsADpzgR03FPiKwARAjlehGjVDy
	CdMFxOwr1hJusThxWOFoA/TJ44MuKFI2IliDG515XKOVkqbOmm2wCYCzXJZw2U8FSTdRZBmqKZ7iS
	4SuHgk72HnWzgZdkcvtFKUKsAMT99+mCfVZqw2gKQyiWP8Nd/c9Y2IwIGdVoXniV36pcaRbiBb0+6
	GGFsM+uA==;
Date: Mon, 12 Oct 2020 12:13:30 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH] x86/alternative: don't call text_poke() in lazy TLB mode
Message-ID: <20201012101330.GR2628@hirez.programming.kicks-ass.net>
References: <20201009144225.12019-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201009144225.12019-1-jgross@suse.com>

On Fri, Oct 09, 2020 at 04:42:25PM +0200, Juergen Gross wrote:
> When running in lazy TLB mode the currently active page tables might
> be the ones of a previous process, e.g. when running a kernel thread.
> 
> This can be problematic in case kernel code is being modified via
> text_poke() in a kernel thread, and on another processor exit_mmap()
> is active for the process which was running on the first cpu before
> the kernel thread.
> 
> As text_poke() is using a temporary address space and the former
> address space (obtained via cpu_tlbstate.loaded_mm) is restored
> afterwards, there is a race possible in case the cpu on which
> exit_mmap() is running wants to make sure there are no stale
> references to that address space on any cpu active (this e.g. is
> required when running as a Xen PV guest, where this problem has been
> observed and analyzed).
> 
> In order to avoid that, drop off TLB lazy mode before switching to the
> temporary address space.

Oh man, that must've been 'fun' :/

> Fixes: cefa929c034eb5d ("x86/mm: Introduce temporary mm structs")
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  arch/x86/kernel/alternative.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index cdaab30880b9..cd6be6f143e8 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -807,6 +807,15 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
>  	temp_mm_state_t temp_state;
>  
>  	lockdep_assert_irqs_disabled();
> +
> +	/*
> +	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
> +	 * with a stale address space WITHOUT being in lazy mode after
> +	 * restoring the previous mm.
> +	 */
> +	if (this_cpu_read(cpu_tlbstate.is_lazy))
> +		leave_mm(smp_processor_id());
> +
>  	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
>  	switch_mm_irqs_off(NULL, mm, current);

Would it make sense to write it like:

	temp_state.mm = this_cpu_read(cpu_tlbstate.is_lazy) ?
			&init_mm : this_cpu_read(cpu_tlbstate.loaded_mm);

Possibly with that wrapped in a conveniently named helper function.
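Such a helper might look like the sketch below. This is kernel-internal code and cannot run standalone; the function name and comment are invented here, not taken from any posted patch:

```c
/*
 * Hypothetical helper for use_temporary_mm(): pick the mm to record in
 * temp_state. When the CPU is in lazy TLB mode, record &init_mm instead
 * of the (possibly dying) loaded_mm, so that restoring temp_state.mm
 * later does not resurrect a stale address space.
 */
static struct mm_struct *nonlazy_loaded_mm(void)
{
	if (this_cpu_read(cpu_tlbstate.is_lazy))
		return &init_mm;
	return this_cpu_read(cpu_tlbstate.loaded_mm);
}
```

Note this encodes the assumption, questioned later in the thread, that leave_mm() amounts to a switch to init_mm.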



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 10:26:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 10:26:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5885.15285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRv1w-0005En-KG; Mon, 12 Oct 2020 10:26:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5885.15285; Mon, 12 Oct 2020 10:26:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRv1w-0005Eg-FD; Mon, 12 Oct 2020 10:26:12 +0000
Received: by outflank-mailman (input) for mailman id 5885;
 Mon, 12 Oct 2020 10:26:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HuXe=DT=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kRv1v-0005Eb-3A
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 10:26:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2c9b353-40df-4519-971a-698aebf81d7b;
 Mon, 12 Oct 2020 10:26:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 62A3EB1E4;
 Mon, 12 Oct 2020 10:26:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602498368;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ilXt6NkD7kYkq9BqAajphKCLvAtvM4L7108qoj49GGw=;
	b=B645MlSP9esJa/Qq1W/IlcqcC4DNU8k3WeD7ZhM/HFe+lcuUjg6FrL1efaU9tEs8vp7SmV
	y0vWLxlB9hmdQQ3ehDoqBb+Nn4LdH/QvcZxiEhzAfYljTGWSwhdSFPuUnv6zXcTIPBXgeR
	3ivv4T/Cu9tAFcTgdE2u/Dd1GkwFvmw=
Subject: Re: [PATCH] x86/alternative: don't call text_poke() in lazy TLB mode
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, Andy Lutomirski <luto@kernel.org>
References: <20201009144225.12019-1-jgross@suse.com>
 <20201012101330.GR2628@hirez.programming.kicks-ass.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <fc202e90-00ff-a635-f298-c3ca293e9182@suse.com>
Date: Mon, 12 Oct 2020 12:26:06 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012101330.GR2628@hirez.programming.kicks-ass.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.10.20 12:13, Peter Zijlstra wrote:
> On Fri, Oct 09, 2020 at 04:42:25PM +0200, Juergen Gross wrote:
>> When running in lazy TLB mode the currently active page tables might
>> be the ones of a previous process, e.g. when running a kernel thread.
>>
>> This can be problematic in case kernel code is being modified via
>> text_poke() in a kernel thread, and on another processor exit_mmap()
>> is active for the process which was running on the first cpu before
>> the kernel thread.
>>
>> As text_poke() is using a temporary address space and the former
>> address space (obtained via cpu_tlbstate.loaded_mm) is restored
>> afterwards, there is a race possible in case the cpu on which
>> exit_mmap() is running wants to make sure there are no stale
>> references to that address space on any cpu active (this e.g. is
>> required when running as a Xen PV guest, where this problem has been
>> observed and analyzed).
>>
>> In order to avoid that, drop off TLB lazy mode before switching to the
>> temporary address space.
> 
> Oh man, that must've been 'fun' :/

Yeah.

> 
>> Fixes: cefa929c034eb5d ("x86/mm: Introduce temporary mm structs")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   arch/x86/kernel/alternative.c | 9 +++++++++
>>   1 file changed, 9 insertions(+)
>>
>> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
>> index cdaab30880b9..cd6be6f143e8 100644
>> --- a/arch/x86/kernel/alternative.c
>> +++ b/arch/x86/kernel/alternative.c
>> @@ -807,6 +807,15 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
>>   	temp_mm_state_t temp_state;
>>   
>>   	lockdep_assert_irqs_disabled();
>> +
>> +	/*
>> +	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
>> +	 * with a stale address space WITHOUT being in lazy mode after
>> +	 * restoring the previous mm.
>> +	 */
>> +	if (this_cpu_read(cpu_tlbstate.is_lazy))
>> +		leave_mm(smp_processor_id());
>> +
>>   	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
>>   	switch_mm_irqs_off(NULL, mm, current);
> 
> Would it make sense to write it like:
> 
> 	temp_state.mm = this_cpu_read(cpu_tlbstate.is_lazy) ?
> 			&init_mm : this_cpu_read(cpu_tlbstate.loaded_mm);
> 
> Possibly with that wrapped in a conveniently named helper function.

Fine with me, but I don't think it matters that much.

For each batch of text_poke() it will be hit only once, and I'm not sure
it is really a good idea to use the knowledge that leave_mm() is just a
switch to init_mm here.

In case it is still the preferred way to do it I can send an update of
the patch.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 10:27:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 10:27:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5886.15297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRv3R-0005MQ-Uh; Mon, 12 Oct 2020 10:27:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5886.15297; Mon, 12 Oct 2020 10:27:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRv3R-0005MJ-Qr; Mon, 12 Oct 2020 10:27:45 +0000
Received: by outflank-mailman (input) for mailman id 5886;
 Mon, 12 Oct 2020 10:27:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRv3Q-0005MA-2S
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 10:27:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 525ec849-31a9-4f3f-9bca-8e0195bf11e0;
 Mon, 12 Oct 2020 10:27:40 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRv3M-0002vv-74; Mon, 12 Oct 2020 10:27:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRv3L-0008CW-SX; Mon, 12 Oct 2020 10:27:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRv3L-0000xv-S4; Mon, 12 Oct 2020 10:27:39 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LsVyY3ALdl3+/dhYfLM30ddCPGy0WRTcm4l/EDyq2NQ=; b=sizju/gyIjHN4ohjZkJYdNfqxQ
	KumpqfUkuUpMHQAbI+eR+YOnmzSlk4jalm2pMc4htylRHw+V1+lUf83l8ywvMtYf6Wu+o6V1lXe9U
	flL9AxnilyIOG0Cs5roDvGysaM18SyLXY9gazbJ1Cq371zxcL/+IdwulhHpIPPvH29RM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155713-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155713: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=48a340d9b23ffcf7704f2de14d1e505481a84a1c
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 10:27:39 +0000

flight 155713 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155713/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1   8 xen-boot                   fail pass in 155703

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 155703 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 155703 never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                48a340d9b23ffcf7704f2de14d1e505481a84a1c
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   53 days
Failing since        152659  2020-08-21 14:07:39 Z   51 days   88 attempts
Testing same since   155703  2020-10-11 19:37:50 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42895 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 10:45:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 10:45:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5898.15313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRvKB-0007Th-Mz; Mon, 12 Oct 2020 10:45:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5898.15313; Mon, 12 Oct 2020 10:45:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRvKB-0007Ta-JM; Mon, 12 Oct 2020 10:45:03 +0000
Received: by outflank-mailman (input) for mailman id 5898;
 Mon, 12 Oct 2020 10:45:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRvKA-0007T2-F3
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 10:45:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0bc7293e-3e57-498f-b5d7-a94cd8b2eb12;
 Mon, 12 Oct 2020 10:44:55 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRvK3-0003H5-H7; Mon, 12 Oct 2020 10:44:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRvK3-0000L4-92; Mon, 12 Oct 2020 10:44:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRvK3-0006Uh-8Y; Mon, 12 Oct 2020 10:44:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRvKA-0007T2-F3
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 10:45:02 +0000
X-Inumbo-ID: 0bc7293e-3e57-498f-b5d7-a94cd8b2eb12
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0bc7293e-3e57-498f-b5d7-a94cd8b2eb12;
	Mon, 12 Oct 2020 10:44:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uoofCJNT4uV7yAn0ayHGnOnTNhX+oDwDC02VOBopb9I=; b=GggHG95A0vfct5Lm9LE4x+8Ywq
	4KYhWR/iGa8e/LhJdaxTvSwA0A8w04xleBahbzxY4I54VL10tAcGDwVCjar2k1m9ICAR8DW2c66YG
	6sa28VFgdrjXpGJrc1TI06i69XXvmX95nwl6FddPnstDM5+g6p3/1UKCUSFoOsCKvMYw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRvK3-0003H5-H7; Mon, 12 Oct 2020 10:44:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRvK3-0000L4-92; Mon, 12 Oct 2020 10:44:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRvK3-0006Uh-8Y; Mon, 12 Oct 2020 10:44:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155714-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155714: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cc942105ede58a300ba46f3df0edfa86b3abd4dd
X-Osstest-Versions-That:
    ovmf=ae511331e0fb1625ba649f377e81e487de3a5531
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 10:44:55 +0000

flight 155714 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155714/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cc942105ede58a300ba46f3df0edfa86b3abd4dd
baseline version:
 ovmf                 ae511331e0fb1625ba649f377e81e487de3a5531

Last test of basis   155643  2020-10-10 08:15:18 Z    2 days
Testing same since   155714  2020-10-12 01:54:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chasel Chiu <chasel.chiu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ae511331e0..cc942105ed  cc942105ede58a300ba46f3df0edfa86b3abd4dd -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 10:45:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 10:45:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5899.15325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRvKU-0007Yh-06; Mon, 12 Oct 2020 10:45:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5899.15325; Mon, 12 Oct 2020 10:45:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRvKT-0007Ya-SS; Mon, 12 Oct 2020 10:45:21 +0000
Received: by outflank-mailman (input) for mailman id 5899;
 Mon, 12 Oct 2020 10:45:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pwhd=DT=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kRvKR-0007Xt-8L
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 10:45:20 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96f7f16a-5a4e-4c65-adfa-319f3de547e5;
 Mon, 12 Oct 2020 10:45:16 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kRvKI-0002rw-Te; Mon, 12 Oct 2020 10:45:11 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 08611300DB4;
 Mon, 12 Oct 2020 12:45:09 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id E95E920A2950E; Mon, 12 Oct 2020 12:45:08 +0200 (CEST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pwhd=DT=infradead.org=peterz@srs-us1.protection.inumbo.net>)
	id 1kRvKR-0007Xt-8L
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 10:45:20 +0000
X-Inumbo-ID: 96f7f16a-5a4e-4c65-adfa-319f3de547e5
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 96f7f16a-5a4e-4c65-adfa-319f3de547e5;
	Mon, 12 Oct 2020 10:45:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=In-Reply-To:Content-Transfer-Encoding:
	Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=z5pzSV5nBQT8SxLlSdLOzbr7Kyh1DTeoSAzBFZne3gQ=; b=fVqmSQORpjIfktIa6KeeyDumA1
	M2YhWhOV4tEOpYdOX/rmk7+aGrLU3e4ZHMP6wAwDixCnWObMzp8iDVj5CQFc6fwNOnOtdxvX+tPvh
	U9cv4Ay5cZMAbsZUvjm0bXBwOqbnleDCuTudxxQoUjlw1FyZ4dDM8IDtXmAaSGvLL0qY9EeBfqSZJ
	fLsrdUSNIyaXt6zqrq3VAeG502NoG+2DIzhyGGUxplRgJy+d46bkcX86qaDDsipOsu4DNHlutWggV
	XJVFUch8HvPOVceoLhvlHmBBCsDTJ8xBT2Yin5OVz+H8HvPedxHs5hYSQaBRpH1ITTWQgBOcqXfwQ
	xtRyOjwA==;
Received: from j217100.upc-j.chello.nl ([24.132.217.100] helo=noisy.programming.kicks-ass.net)
	by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kRvKI-0002rw-Te; Mon, 12 Oct 2020 10:45:11 +0000
Received: from hirez.programming.kicks-ass.net (hirez.programming.kicks-ass.net [192.168.1.225])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(Client did not present a certificate)
	by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 08611300DB4;
	Mon, 12 Oct 2020 12:45:09 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
	id E95E920A2950E; Mon, 12 Oct 2020 12:45:08 +0200 (CEST)
Date: Mon, 12 Oct 2020 12:45:08 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH] x86/alternative: don't call text_poke() in lazy TLB mode
Message-ID: <20201012104508.GS2628@hirez.programming.kicks-ass.net>
References: <20201009144225.12019-1-jgross@suse.com>
 <20201012101330.GR2628@hirez.programming.kicks-ass.net>
 <fc202e90-00ff-a635-f298-c3ca293e9182@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <fc202e90-00ff-a635-f298-c3ca293e9182@suse.com>

On Mon, Oct 12, 2020 at 12:26:06PM +0200, Jürgen Groß wrote:

> > > @@ -807,6 +807,15 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
> > >   	temp_mm_state_t temp_state;
> > >   	lockdep_assert_irqs_disabled();
> > > +
> > > +	/*
> > > +	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
> > > +	 * with a stale address space WITHOUT being in lazy mode after
> > > +	 * restoring the previous mm.
> > > +	 */
> > > +	if (this_cpu_read(cpu_tlbstate.is_lazy))
> > > +		leave_mm(smp_processor_id());
> > > +
> > >   	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
> > >   	switch_mm_irqs_off(NULL, mm, current);
> > 
> > Would it make sense to write it like:
> > 
> > 	temp_state.mm = this_cpu_read(cpu_tlbstate.is_lazy) ?
> > 			&init_mm : this_cpu_read(cpu_tlbstate.loaded_mm);
> > 
> > Possibly with that wrapped in a conveniently named helper function.
> 
> Fine with me, but I don't think it matters that much.
> 
> For each batch of text_poke() it will be hit only once, and I'm not sure
> it is really a good idea to use the knowledge that leave_mm() is just a
> switch to init_mm here.

Yeah, I'm not sure either. But it's something I came up with when
looking at all this.

Andy, what's your preference?


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 11:58:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 11:58:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5910.15347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRwSi-0005yS-DJ; Mon, 12 Oct 2020 11:57:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5910.15347; Mon, 12 Oct 2020 11:57:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRwSi-0005yL-AH; Mon, 12 Oct 2020 11:57:56 +0000
Received: by outflank-mailman (input) for mailman id 5910;
 Mon, 12 Oct 2020 11:57:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRwSh-0005xk-JP
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 11:57:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ff35011-e9b0-4b6d-bf8b-8edfa13ab6e4;
 Mon, 12 Oct 2020 11:57:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRwSZ-0004k5-QQ; Mon, 12 Oct 2020 11:57:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRwSZ-0003Lv-J8; Mon, 12 Oct 2020 11:57:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRwSZ-0007gh-Id; Mon, 12 Oct 2020 11:57:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRwSh-0005xk-JP
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 11:57:55 +0000
X-Inumbo-ID: 2ff35011-e9b0-4b6d-bf8b-8edfa13ab6e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2ff35011-e9b0-4b6d-bf8b-8edfa13ab6e4;
	Mon, 12 Oct 2020 11:57:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nYzzFLBwrn3ywfFS5hp376EUsLDYJvN509j5Tmaw/7w=; b=H6cVe9PVL5AAfiV4wolRSp3Rfc
	EuXfF97f4tihV5Jbtczy6sE7LGsBqjrxE8q2TjGx4tiYt6V+d9/WBM9UWqbKha5cGnIeydnT2lT4l
	xlg5Y8QNYIn259rmW2yDMTmFvbxgYb5Rp15maRgvlcwtUVUXzGWiH0/dI9yxEbQJ4tWk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRwSZ-0004k5-QQ; Mon, 12 Oct 2020 11:57:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRwSZ-0003Lv-J8; Mon, 12 Oct 2020 11:57:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRwSZ-0007gh-Id; Mon, 12 Oct 2020 11:57:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155712-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155712: FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-shadow:<job status>:broken:regression
    xen-unstable:test-amd64-amd64-xl-shadow:host-install(5):broken:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-saverestore:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 11:57:47 +0000

flight 155712 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155712/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-shadow      <job status>                 broken  in 155673

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-shadow   5 host-install(5) broken in 155673 pass in 155712
 test-amd64-amd64-libvirt-pair 27 guest-migrate/dst_host/src_host fail in 155673 pass in 155712
 test-amd64-amd64-xl-rtds 20 guest-localmigrate/x10 fail in 155673 pass in 155712
 test-armhf-armhf-libvirt      8 xen-boot         fail in 155673 pass in 155712
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore          fail pass in 155673

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155630
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155673
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155673
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155673
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155673
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155673
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155673
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155673
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155673
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155712  2020-10-12 01:52:28 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-shadow broken

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 12:34:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 12:34:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5921.15361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRx1x-0001Xl-J1; Mon, 12 Oct 2020 12:34:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5921.15361; Mon, 12 Oct 2020 12:34:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRx1x-0001Xe-Fm; Mon, 12 Oct 2020 12:34:21 +0000
Received: by outflank-mailman (input) for mailman id 5921;
 Mon, 12 Oct 2020 12:34:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRx1v-0001Wk-RD
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:34:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b64cee47-f5b5-4ad7-afe8-5131527c6d9d;
 Mon, 12 Oct 2020 12:34:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRx1n-0005UI-BQ; Mon, 12 Oct 2020 12:34:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRx1n-0004sI-2h; Mon, 12 Oct 2020 12:34:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRx1n-00073n-2B; Mon, 12 Oct 2020 12:34:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kRx1v-0001Wk-RD
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:34:19 +0000
X-Inumbo-ID: b64cee47-f5b5-4ad7-afe8-5131527c6d9d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b64cee47-f5b5-4ad7-afe8-5131527c6d9d;
	Mon, 12 Oct 2020 12:34:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=aPEIadeenY4MFMQX+oqEamPAp0NPYuwL14MmMy+l404=; b=a9PqFn6z+LzhxZ8RhEZGdOAjXi
	mWQdA3y7qdJ6Uq7o+Wfch78O6DypTBv3aWLw81av73mzNPeqJjpeTFSGGjJ0vgvNq9nMARz4wWb6l
	aknWiKuxm2JUFbJL1UyT6mKXvSEeh86dSTZgC82pfYoytvobp9wIPGZAHrB5M9W45YTc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRx1n-0005UI-BQ; Mon, 12 Oct 2020 12:34:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRx1n-0004sI-2h; Mon, 12 Oct 2020 12:34:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kRx1n-00073n-2B; Mon, 12 Oct 2020 12:34:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-freebsd10-amd64
Message-Id: <E1kRx1n-00073n-2B@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 12:34:11 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-freebsd10-amd64
testid guest-start

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1583a389885346208217e8170395624b3aa90480
  Bug not present: a77dabc33bcc36ec348854f23e89e0de22ca045b
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155731/


  commit 1583a389885346208217e8170395624b3aa90480
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Tue Jul 7 10:21:10 2020 +0200
  
      cpus: extract out qtest-specific code to accel/qtest
      
      register a "CpusAccel" interface for qtest as well.
      
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-amd64.guest-start.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-amd64.guest-start --summary-out=tmp/155731.bisection-summary --basis-template=152631 --blessings=real,real-bisect qemu-mainline test-amd64-i386-freebsd10-amd64 guest-start
Searching for failure / basis pass:
 155713 fail [host=huxelrebe0] / 155509 [host=albana0] 155483 [host=albana1] 155434 [host=huxelrebe1] 155318 [host=elbling0] 155184 [host=fiano0] 155098 [host=fiano1] 155018 [host=pinot0] 154629 [host=rimava1] 154607 [host=chardonnay0] 154583 [host=pinot1] 154566 [host=albana1] 154552 [host=albana0] 154544 [host=chardonnay1] 154526 [host=fiano0] 154508 [host=elbling1] 154496 [host=elbling0] 154485 [host=huxelrebe1] 154466 ok.
Failure / basis pass flights: 155713 / 154466
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 698d3d7726232694018d437279dd4166e462deb7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a6a0c8394c5b0ce4cee2a1597d235d9b2b9af3c2 155821a1990b6de78dde5f98fa5ab90e802021e0 1e2d3be2e516e6f415ca6029f651b76a8563a27c
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#698d3d7726232694018d437279dd4166e462deb7-ae511331e0fb1625ba649f377e81e487de3a5531 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#a6a0c8394c5b0ce4cee2a1597d235d9b2b9af3c2-48a340d9b23ffcf7704f2de14d1e505481a84a1c git://xenbits.xen.org/osstest/seabios.git#155821a1990b6de78dde5f98fa5ab90e802021e0-849c5e50b6f474df6cc113130575bcdccfafcd9e git://xenbits.xen.org/xen.git#1e2d3be2e516e6f415ca6029f651b76a8563a27c-25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
Loaded 64470 nodes in revision graph
Searching for test results:
 154466 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 698d3d7726232694018d437279dd4166e462deb7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a6a0c8394c5b0ce4cee2a1597d235d9b2b9af3c2 155821a1990b6de78dde5f98fa5ab90e802021e0 1e2d3be2e516e6f415ca6029f651b76a8563a27c
 154485 [host=huxelrebe1]
 154496 [host=elbling0]
 154508 [host=elbling1]
 154526 [host=fiano0]
 154544 [host=chardonnay1]
 154552 [host=albana0]
 154566 [host=albana1]
 154583 [host=pinot1]
 154607 [host=chardonnay0]
 154629 [host=rimava1]
 155018 [host=pinot0]
 155098 [host=fiano1]
 155184 [host=fiano0]
 155318 [host=elbling0]
 155434 [host=huxelrebe1]
 155483 [host=albana1]
 155509 [host=albana0]
 155518 fail irrelevant
 155544 fail irrelevant
 155585 fail irrelevant
 155613 fail irrelevant
 155659 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 698d3d7726232694018d437279dd4166e462deb7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a6a0c8394c5b0ce4cee2a1597d235d9b2b9af3c2 155821a1990b6de78dde5f98fa5ab90e802021e0 1e2d3be2e516e6f415ca6029f651b76a8563a27c
 155661 fail irrelevant
 155663 fail irrelevant
 155645 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155664 fail irrelevant
 155666 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155669 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 eb94b81a94bce112e6b206df846c1551aaf6cab6 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155670 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 69e95b9efed520e643b9e5b0573180aa7c5ecaca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a1d22c668a7662289b42624fe2aa92c9a23df1d2 849c5e50b6f474df6cc113130575bcdccfafcd9e 0241809bf838875615797f52af34222e5ab8e98f
 155672 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2687fdb7571a444b5af3509574b659d35ddd601 849c5e50b6f474df6cc113130575bcdccfafcd9e 93508595d588afe9dca087f95200effb7cedc81f
 155665 fail irrelevant
 155674 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 671ad7c4468f795b66b4cd8f376f1b1ce6701b63 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155677 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 36d9c2883e55c863b622b99f0ebb5143f0001401 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155679 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0d2a4545bf7e763984d3ee3e802617544cb7fc7a 849c5e50b6f474df6cc113130575bcdccfafcd9e 59b27f360e3d9dc0378c1288e67a91fa41a77158
 155680 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b23317eec4715aa62de9a6e5490a01122c8eef0e 849c5e50b6f474df6cc113130575bcdccfafcd9e bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
 155681 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 625581c2602b5b43e115b779a9a782478e6f92e7 849c5e50b6f474df6cc113130575bcdccfafcd9e bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
 155684 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 37aeb7a28ddbf52dd25dd53ae1b8391bc2287858 849c5e50b6f474df6cc113130575bcdccfafcd9e de16a8fa0db7f1879442cf9cfe865eb2e9d98e6d
 155685 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7cd77fb02b9a2117a56fed172f09a1820fcd6b0b 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155686 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5663ac2aa0eafb40411ac4dff85e6ab529c4d199 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 358d57d411ee759a5a9dbf367179a9ac37faf0b3
 155688 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e344ffe73bd77e7067099155cfd8bf42b07ed631 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 c73952831f0fc63a984e0d07dff1d20f8617b81f
 155690 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1553d543ff4b9f91de4ed7743f0cd6e534528b9e 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 358d57d411ee759a5a9dbf367179a9ac37faf0b3
 155691 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 92d09502676678c8ebb1ad830666b323d3c88f9d 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 4bdbf746ac9152e70f264f87db4472707da805ce
 155675 fail irrelevant
 155692 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8138405528c29af2a850cd672a8f8a0b33b7ab40 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 358d57d411ee759a5a9dbf367179a9ac37faf0b3
 155693 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e12a0edafeb5019aac74114b62a4703f79c5c693 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5bcac985498ed83d89666959175ca9c9ed561ae1
 155696 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 c122bca9cd7b986be4d473240a4fec6315b7a2c2 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5a37207df52066efefe419c677b089a654d37afc
 155697 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e626fa7364bc2d7db5f6c8e59150dee1689b98a 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5a37207df52066efefe419c677b089a654d37afc
 155698 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 3f3daf89308930e45f82ae67dd2a2d6e030bb091 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 edf6ea6fbe793b017a9765b493d7b1675a16a42f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2785b2a9e04abc148e1c5259f4faee708ea356f4
 155701 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 d4ed1d4132f5825a795d5a78505811ecd2717b5e
 155695 fail irrelevant
 155702 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1bd5556f6686365e76f7ff67fe67260c449e8345 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5a37207df52066efefe419c677b089a654d37afc
 155705 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d16e72f2d4df2c9e631393adf1669a1da7efe8a 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 4bdbf746ac9152e70f264f87db4472707da805ce
 155706 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dd8c1e808f1ca311e1f50bff218c3ee3198b1f02 849c5e50b6f474df6cc113130575bcdccfafcd9e 59b27f360e3d9dc0378c1288e67a91fa41a77158
 155707 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 deb62371fe311cefd8a6f58e2da42b15d7e2a356 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155709 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f51af04cce148e825a9f3c4f446e5cd3cfaff71d 849c5e50b6f474df6cc113130575bcdccfafcd9e 59b27f360e3d9dc0378c1288e67a91fa41a77158
 155710 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5770e8afd629cc8a83dc41e2524258c73c1b301e 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155703 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155711 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e92558e4bf8059ce4f0b310afe218802b72766bc 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155715 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 698d3d7726232694018d437279dd4166e462deb7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a6a0c8394c5b0ce4cee2a1597d235d9b2b9af3c2 155821a1990b6de78dde5f98fa5ab90e802021e0 1e2d3be2e516e6f415ca6029f651b76a8563a27c
 155718 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155719 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0ac0b47c44b4be6cbce26777a1a5968cc8f025a5 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155720 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 430065dab060f04a74f915ea1260dcc79701ca75 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155722 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1583a389885346208217e8170395624b3aa90480 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155723 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a77dabc33bcc36ec348854f23e89e0de22ca045b 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155725 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1583a389885346208217e8170395624b3aa90480 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155726 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a77dabc33bcc36ec348854f23e89e0de22ca045b 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155713 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155727 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1583a389885346208217e8170395624b3aa90480 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155730 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a77dabc33bcc36ec348854f23e89e0de22ca045b 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155731 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1583a389885346208217e8170395624b3aa90480 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
Searching for interesting versions
 Result found: flight 154466 (pass), for basis pass
 Result found: flight 155703 (fail), for basis failure
 Repro found: flight 155715 (pass), for basis pass
 Repro found: flight 155718 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a77dabc33bcc36ec348854f23e89e0de22ca045b 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
No revisions left to test, checking graph state.
 Result found: flight 155723 (pass), for last pass
 Result found: flight 155725 (fail), for first failure
 Repro found: flight 155726 (pass), for last pass
 Repro found: flight 155727 (fail), for first failure
 Repro found: flight 155730 (pass), for last pass
 Repro found: flight 155731 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1583a389885346208217e8170395624b3aa90480
  Bug not present: a77dabc33bcc36ec348854f23e89e0de22ca045b
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155731/


  commit 1583a389885346208217e8170395624b3aa90480
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Tue Jul 7 10:21:10 2020 +0200
  
      cpus: extract out qtest-specific code to accel/qtest
      
      register a "CpusAccel" interface for qtest as well.
      
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.890093 to fit
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-amd64.guest-start.{dot,ps,png,html,svg}.
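[Editor's note: the search above narrows the failure to the first bad qemuu
revision. The core idea is an ordinary bisection over the revision range; a
minimal sketch follows. This is a hypothetical illustration, not osstest's
actual algorithm, which additionally retests ("Repro found") each boundary
result before trusting it and bisects across several trees at once.]

```python
def bisect(revs, is_bad):
    """Return the index of the first bad revision in revs.

    Hypothetical sketch: assumes revs[0] tests good (basis pass) and
    revs[-1] tests bad (basis failure), and that is_bad is monotone,
    i.e. the bug, once introduced, stays present.
    """
    lo, hi = 0, len(revs) - 1      # revs[lo] is good, revs[hi] is bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(revs[mid]):      # failure reproduced at midpoint
            hi = mid
        else:                      # midpoint still passes
            lo = mid
    return hi                      # first revision where the test fails

# Synthetic history where revision 7 introduced the bug:
revs = list(range(10))
assert bisect(revs, lambda r: r >= 7) == 7
```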
----------------------------------------
155731: tolerable ALL FAIL

flight 155731 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155731/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64 13 guest-start          fail baseline untested


jobs:
 test-amd64-i386-freebsd10-amd64                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 12:45:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 12:45:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5924.15374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCZ-0002cM-Lo; Mon, 12 Oct 2020 12:45:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5924.15374; Mon, 12 Oct 2020 12:45:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCZ-0002cF-Iw; Mon, 12 Oct 2020 12:45:19 +0000
Received: by outflank-mailman (input) for mailman id 5924;
 Mon, 12 Oct 2020 12:45:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kRxCX-0002cA-W8
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:18 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id f8c5287d-de03-4165-b360-1c43f7f32b1f;
 Mon, 12 Oct 2020 12:45:16 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-51-puG-AytJPlqu3n4nFyTAxA-1; Mon, 12 Oct 2020 08:45:10 -0400
Received: by mail-wr1-f69.google.com with SMTP id x16so8849260wrg.7
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:10 -0700 (PDT)
Received: from localhost.localdomain
 (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
 by smtp.gmail.com with ESMTPSA id d2sm14916054wrq.34.2020.10.12.05.45.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 12 Oct 2020 05:45:08 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kRxCX-0002cA-W8
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:18 +0000
X-Inumbo-ID: f8c5287d-de03-4165-b360-1c43f7f32b1f
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id f8c5287d-de03-4165-b360-1c43f7f32b1f;
	Mon, 12 Oct 2020 12:45:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602506716;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=0VfM6Y1Z4oOM9AviCLT3C6cOeCH1n4rJ8RPo235glro=;
	b=YChWZiyCrsTVXV90QgJvNNBOvbWmNrcisjkDZE+iere73iZTsm7TJ2pTMqDpRu+8fQJuea
	9r9LqP5y3v5OPpW1gjFd18lJNIgm9drxFwPLh7WE7VfhU7na6x9n48/PpnNnFFy0ONhQAJ
	JydRwPt8pZUAtDKIWgUvPaEh3J4xwpk=
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-51-puG-AytJPlqu3n4nFyTAxA-1; Mon, 12 Oct 2020 08:45:10 -0400
X-MC-Unique: puG-AytJPlqu3n4nFyTAxA-1
Received: by mail-wr1-f69.google.com with SMTP id x16so8849260wrg.7
        for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:10 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=UtSObLJfNIzbJcrgKRU5Cu+zNLd9lEQsFGBvfOC7Lok=;
        b=ihjbmfyF4AADUD1jmre+K6hMUPGDsvdq4zcOhn8pE42kiBRHIh9sDQDEMnn1HJGvyl
         Z7uetkgNGw3OU0Q1jEW8FVHornjJkEs+oDgyQISl2UfH1I/FDxttnnz+rDgukRESiLWW
         ijAxfek61N8A5KNO2GIf6uynh0Go4NwMrCIDLlR2gwQ4dHMWKG2gA87buvFJWeN3xUUD
         6CEofdu9KwUQgS54KSNWPLurjK+f60RZeC3wkqvB6Wr7gxcfmiWYhq/16LbHqpFWYA1m
         q2klXPoEfvcGMhGFn3sOaInFWqEBy5qv13S3xzyT+WJYHPYk6lk7zDuZM1C3moNTrw5/
         wRpw==
X-Gm-Message-State: AOAM530gePNyIgE+eTQ/deE+kk0p7MDmZzK3RtJc8oiY1okK46WM2Mbq
	sYT8klMcVek+jMxjT9DhED8pN+WNP35BHQGPyoq2xVQYns4zCjAw+9xT9cP2/Je+w7oVr5moDpG
	XRT4p0UAMBUE5lDR6EvnCBsRJK5k=
X-Received: by 2002:a5d:46c1:: with SMTP id g1mr9137839wrs.101.1602506709326;
        Mon, 12 Oct 2020 05:45:09 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyz6hFiVJHdaU7cfa/kbxEMNRyQTLhhCcs5f5STC6wr7DnE8rXhJdscPh109XPd+XACdRYSYQ==
X-Received: by 2002:a5d:46c1:: with SMTP id g1mr9137815wrs.101.1602506709130;
        Mon, 12 Oct 2020 05:45:09 -0700 (PDT)
Received: from localhost.localdomain (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
        by smtp.gmail.com with ESMTPSA id d2sm14916054wrq.34.2020.10.12.05.45.07
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 12 Oct 2020 05:45:08 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org,
	qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: [PATCH 0/5] hw: Use PCI macros from 'hw/pci/pci.h'
Date: Mon, 12 Oct 2020 14:45:01 +0200
Message-Id: <20201012124506.3406909-1-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Trivial patches using the generic PCI macros from "hw/pci/pci.h".

Philippe Mathieu-Daudé (5):
  hw/pci-host/bonito: Make PCI_ADDR() macro more readable
  hw/pci-host: Use the PCI_BUILD_BDF() macro from 'hw/pci/pci.h'
  hw/pci-host/uninorth: Use the PCI_FUNC() macro from 'hw/pci/pci.h'
  hw: Use the PCI_SLOT() macro from 'hw/pci/pci.h'
  hw: Use the PCI_DEVFN() macro from 'hw/pci/pci.h'

 hw/arm/virt.c          | 3 ++-
 hw/hppa/dino.c         | 2 +-
 hw/i386/xen/xen-hvm.c  | 2 +-
 hw/isa/piix3.c         | 2 +-
 hw/mips/gt64xxx_pci.c  | 2 +-
 hw/pci-host/bonito.c   | 5 ++---
 hw/pci-host/pnv_phb4.c | 2 +-
 hw/pci-host/ppce500.c  | 2 +-
 hw/pci-host/uninorth.c | 8 +++-----
 hw/ppc/ppc4xx_pci.c    | 2 +-
 hw/sh4/sh_pci.c        | 2 +-
 11 files changed, 15 insertions(+), 17 deletions(-)

-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Mon Oct 12 12:45:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 12:45:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5926.15399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCh-0002gn-Av; Mon, 12 Oct 2020 12:45:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5926.15399; Mon, 12 Oct 2020 12:45:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCh-0002gf-7Z; Mon, 12 Oct 2020 12:45:27 +0000
Received: by outflank-mailman (input) for mailman id 5926;
 Mon, 12 Oct 2020 12:45:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kRxCf-0002fQ-6i
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:25 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 51cfb409-301b-4ac3-abae-5168facf7e27;
 Mon, 12 Oct 2020 12:45:23 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-32-wim6cHprMSa0X9dzGXdizw-1; Mon, 12 Oct 2020 08:45:22 -0400
Received: by mail-wr1-f70.google.com with SMTP id t3so2761638wrq.2
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:21 -0700 (PDT)
Received: from localhost.localdomain
 (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
 by smtp.gmail.com with ESMTPSA id b5sm2564766wrs.97.2020.10.12.05.45.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 12 Oct 2020 05:45:20 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kRxCf-0002fQ-6i
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:25 +0000
X-Inumbo-ID: 51cfb409-301b-4ac3-abae-5168facf7e27
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 51cfb409-301b-4ac3-abae-5168facf7e27;
	Mon, 12 Oct 2020 12:45:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602506723;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uEfJTmQm7IfbaNLnhhKiUk3mZKxHaYrO9aWEgNYf1LY=;
	b=UkT5GRgFttURbsFo2rWy98QsdLC50MaBIDAzU5UWSFUhgBNTQldOvYSB9THd7uq+93GcS5
	czDI0G+LNyv5f/pMHDFSmXdyaE+aJQnkejwqseA5HD/TCcd5HnLx4Boqw0vLyrOXK87naf
	XfRFI8vg6bA4w+5pGzvN/eyqefgNc8I=
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-32-wim6cHprMSa0X9dzGXdizw-1; Mon, 12 Oct 2020 08:45:22 -0400
X-MC-Unique: wim6cHprMSa0X9dzGXdizw-1
Received: by mail-wr1-f70.google.com with SMTP id t3so2761638wrq.2
        for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:21 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=uEfJTmQm7IfbaNLnhhKiUk3mZKxHaYrO9aWEgNYf1LY=;
        b=TMzkmyvMMihEIwhfSYFEXtWS/J4ZWNtK6jqW5V6qStU2kRx4miIw4KlXXeWsk/9j2x
         pFGl/eYtxUaJb/5zkeLwLO3kiux3EQ8Zw3jsSIoHuDAWcQsteqdDltw3/hU8qbbR0Ix3
         K/EZKpbngfgecccPhodAHYWgelyyA1Z9zcsFHNLKqZ9n4hlxIIGItJ1t6rRr+YEh+HOo
         vYVg9nl8GnIcH1Gm1V46JoN8xstaRRAcm7mUAgAboblwwe7vI4LKxvYwITsimW9Gq61X
         1dFzkmzNIpsioiWGZlTDIKXAGkKwJu3fpbolxVr740m5V4/ySZ81UKSHfGROPY2d6xTC
         iaBg==
X-Gm-Message-State: AOAM530yMK/z4j+/tYHqn+6pr5d4aeZO8m0BNk5HKN6qADSdlrNubrje
	Tz44AvrrudjqqDRTBm/N32097TPdgZ3VUMpQF++rFBJQV/ztJ67C16Q/WE6VsXIMApu1XuFIEyL
	DwDLgC/H1r8LdZit5BsUBjb2I8Zc=
X-Received: by 2002:adf:8b85:: with SMTP id o5mr16319167wra.104.1602506720840;
        Mon, 12 Oct 2020 05:45:20 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJzUet7a3+icknkdXIUBYsqHO/fz4MbL6X5ByiZqEc6CjjMuhyc0L0IjJS+YtIXZM/ET90duwQ==
X-Received: by 2002:adf:8b85:: with SMTP id o5mr16319151wra.104.1602506720677;
        Mon, 12 Oct 2020 05:45:20 -0700 (PDT)
Received: from localhost.localdomain (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
        by smtp.gmail.com with ESMTPSA id b5sm2564766wrs.97.2020.10.12.05.45.18
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 12 Oct 2020 05:45:20 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org,
	qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: [PATCH 2/5] hw/pci-host: Use the PCI_BUILD_BDF() macro from 'hw/pci/pci.h'
Date: Mon, 12 Oct 2020 14:45:03 +0200
Message-Id: <20201012124506.3406909-3-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012124506.3406909-1-philmd@redhat.com>
References: <20201012124506.3406909-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

We already have a generic PCI_BUILD_BDF() macro in "hw/pci/pci.h"
to pack these values, use it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/pci-host/bonito.c   | 3 +--
 hw/pci-host/pnv_phb4.c | 2 +-
 2 files changed, 2 insertions(+), 3 deletions(-)

diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
index abb3ee86769..b05295639a6 100644
--- a/hw/pci-host/bonito.c
+++ b/hw/pci-host/bonito.c
@@ -196,8 +196,7 @@ FIELD(BONGENCFG, PCIQUEUE,      12, 1)
 #define PCI_IDSEL_VIA686B          (1 << PCI_IDSEL_VIA686B_BIT)
 
 #define PCI_ADDR(busno , devno , funno , regno)  \
-    ((((busno) << 8) & 0xff00) + (((devno) << 3) & 0xf8) + \
-    (((funno) & 0x7) << 8) + (regno))
+    ((PCI_BUILD_BDF(busno, PCI_DEVFN(devno , funno)) << 8) + (regno))
 
 typedef struct BonitoState BonitoState;
 
diff --git a/hw/pci-host/pnv_phb4.c b/hw/pci-host/pnv_phb4.c
index 03daf40a237..6328e985f81 100644
--- a/hw/pci-host/pnv_phb4.c
+++ b/hw/pci-host/pnv_phb4.c
@@ -889,7 +889,7 @@ static bool pnv_phb4_resolve_pe(PnvPhb4DMASpace *ds)
     /* Read RTE */
     bus_num = pci_bus_num(ds->bus);
     addr = rtt & PHB_RTT_BASE_ADDRESS_MASK;
-    addr += 2 * ((bus_num << 8) | ds->devfn);
+    addr += 2 * PCI_BUILD_BDF(bus_num, ds->devfn);
     if (dma_memory_read(&address_space_memory, addr, &rte, sizeof(rte))) {
         phb_error(ds->phb, "Failed to read RTT entry at 0x%"PRIx64, addr);
         /* Set error bits ? fence ? ... */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 12:45:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 12:45:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5925.15387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCd-0002e1-VB; Mon, 12 Oct 2020 12:45:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5925.15387; Mon, 12 Oct 2020 12:45:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCd-0002dq-R3; Mon, 12 Oct 2020 12:45:23 +0000
Received: by outflank-mailman (input) for mailman id 5925;
 Mon, 12 Oct 2020 12:45:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kRxCc-0002cA-QF
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:22 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 47b22cc0-726e-4604-916d-836074c798f4;
 Mon, 12 Oct 2020 12:45:18 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-146-ZTed8B7mMT2VgJR1qKIMtQ-1; Mon, 12 Oct 2020 08:45:16 -0400
Received: by mail-wr1-f69.google.com with SMTP id r8so1941961wrp.5
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:16 -0700 (PDT)
Received: from localhost.localdomain
 (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
 by smtp.gmail.com with ESMTPSA id f6sm11016383wru.50.2020.10.12.05.45.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 12 Oct 2020 05:45:14 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kRxCc-0002cA-QF
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:22 +0000
X-Inumbo-ID: 47b22cc0-726e-4604-916d-836074c798f4
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 47b22cc0-726e-4604-916d-836074c798f4;
	Mon, 12 Oct 2020 12:45:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602506718;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bwG8iSHLZW0tTYCnl2ic6/t6qmfYl8jGlMpgEnOotCs=;
	b=LoZqtaFZ14PMiVG5YwWZDLriI0xjyU7gGSiWQDnxzOKLHb3cwadDmXh61qzcP0A40ySUE2
	MdBHKQ2YyjyqH2/HvUpEO+g/Lj8MJVewGgFov7OAbbgBhTP9E7hVojhpHOr9/MiNsDaQfr
	5mQmXpW3qfQOevb7Xa+m/mWEKiM/jJg=
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-146-ZTed8B7mMT2VgJR1qKIMtQ-1; Mon, 12 Oct 2020 08:45:16 -0400
X-MC-Unique: ZTed8B7mMT2VgJR1qKIMtQ-1
Received: by mail-wr1-f69.google.com with SMTP id r8so1941961wrp.5
        for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:16 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=bwG8iSHLZW0tTYCnl2ic6/t6qmfYl8jGlMpgEnOotCs=;
        b=PEGS2VG6nLVSQGMPuYGKhrprPynHlFS9z2eO3q7PdJhSCQ2Buud0LMRy/R7lEFasOY
         mboqhm5t2SboNEjnwmcu2eiN8fygnEDXoQB0U0yX0AwdEi/J7ovI4ANov3gcSMsTI3oS
         lhEBla/YNRydoMF6R5QB55oJPuFFFW2V1/eidXxRGSqsRZJFtL+hgb4exIeHkTbLlpE3
         3/dhY6ZFeZzu9CrhaHmomO76lz5yiBG72NLEg0ALr9NXtEyL703uRELeyz6tqcz5lTuN
         A1CyqLDQkPeBWDqpUQ8uWIO0wJImQ5hmQQCHQxsH0+P/V2HWw9fsRITsboQ66j8pIcDm
         0SyA==
X-Gm-Message-State: AOAM532qndwIchDvDaxB/ZBwbTM20TLzhmh79FFxoBluYBG68dXdaAof
	5Dy7aexgAHo7qdEv34JZzVH1OuY9Af0kIe/0Tb7lljW0JDyxyE/42Re4fZ8dJIyOZlWu8GkaAr9
	7iOzXDwoMnkDtVmuI+/hISbf/5N0=
X-Received: by 2002:adf:dd46:: with SMTP id u6mr29753240wrm.295.1602506715238;
        Mon, 12 Oct 2020 05:45:15 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyueR4JjpNH2sLzfTjSqD0R65Ts5ka4vohHBW3dn4ShgyH1BUlk0MTIhNUHnqavD80ySjglhw==
X-Received: by 2002:adf:dd46:: with SMTP id u6mr29753217wrm.295.1602506715091;
        Mon, 12 Oct 2020 05:45:15 -0700 (PDT)
Received: from localhost.localdomain (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
        by smtp.gmail.com with ESMTPSA id f6sm11016383wru.50.2020.10.12.05.45.12
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 12 Oct 2020 05:45:14 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org,
	qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: [PATCH 1/5] hw/pci-host/bonito: Make PCI_ADDR() macro more readable
Date: Mon, 12 Oct 2020 14:45:02 +0200
Message-Id: <20201012124506.3406909-2-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012124506.3406909-1-philmd@redhat.com>
References: <20201012124506.3406909-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

The PCI_ADDR() macro uses generic PCI fields shifted by 8 bits.
Rewrite it, extracting the shift operation out one level.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/pci-host/bonito.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
index a99eced0657..abb3ee86769 100644
--- a/hw/pci-host/bonito.c
+++ b/hw/pci-host/bonito.c
@@ -196,8 +196,8 @@ FIELD(BONGENCFG, PCIQUEUE,      12, 1)
 #define PCI_IDSEL_VIA686B          (1 << PCI_IDSEL_VIA686B_BIT)
 
 #define PCI_ADDR(busno , devno , funno , regno)  \
-    ((((busno) << 16) & 0xff0000) + (((devno) << 11) & 0xf800) + \
-    (((funno) << 8) & 0x700) + (regno))
+    ((((((busno) << 8) & 0xff00) + (((devno) << 3) & 0xf8) + \
+    ((funno) & 0x7)) << 8) + (regno))
 
 typedef struct BonitoState BonitoState;
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 12:45:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 12:45:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5927.15411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCm-0002lW-K9; Mon, 12 Oct 2020 12:45:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5927.15411; Mon, 12 Oct 2020 12:45:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCm-0002lN-Gi; Mon, 12 Oct 2020 12:45:32 +0000
Received: by outflank-mailman (input) for mailman id 5927;
 Mon, 12 Oct 2020 12:45:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kRxCl-0002kr-Us
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:31 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e9d0eca7-a567-4ea2-b67c-45a220f9f8aa;
 Mon, 12 Oct 2020 12:45:31 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-301-8UvaL3I4NoeFMoWYH2yU_A-1; Mon, 12 Oct 2020 08:45:27 -0400
Received: by mail-wr1-f69.google.com with SMTP id q15so428804wrw.8
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:27 -0700 (PDT)
Received: from localhost.localdomain
 (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
 by smtp.gmail.com with ESMTPSA id d2sm14916896wrq.34.2020.10.12.05.45.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 12 Oct 2020 05:45:25 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kRxCl-0002kr-Us
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:31 +0000
X-Inumbo-ID: e9d0eca7-a567-4ea2-b67c-45a220f9f8aa
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id e9d0eca7-a567-4ea2-b67c-45a220f9f8aa;
	Mon, 12 Oct 2020 12:45:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602506731;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=afTxi5eJGUK9TCxtbNIddaNUCDX7pFC3c3DvHTcAqpY=;
	b=J9B7H7LrBwwINvhmbA0l+GyBcrAWdBztzvrutX5qL9WpBHgXY5E3f50Ir4hPrPx7EdgzIn
	kR4x/qEi2is+WKZzD/VYzwSh6LAbviPzQByxNB409OF7AU6CG3ut1GZi4KTvieb3f//Fg0
	EhfteZwtpEqiSFfOuc1YT8KEBIDC8Po=
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-301-8UvaL3I4NoeFMoWYH2yU_A-1; Mon, 12 Oct 2020 08:45:27 -0400
X-MC-Unique: 8UvaL3I4NoeFMoWYH2yU_A-1
Received: by mail-wr1-f69.google.com with SMTP id q15so428804wrw.8
        for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:27 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=afTxi5eJGUK9TCxtbNIddaNUCDX7pFC3c3DvHTcAqpY=;
        b=SGEzAkeueYgh1Hdl9rnkhIP18i9616ASlF36+VdljJtVtzRVAUQ3a/RPKQ5Su7hc5a
         +ujUMfIsvgEfW2HDoie/XHyMufSRNfkuFwWljUXgF3By+kbakcgePCW42VhrrpLIXgUN
         qHRy6Urn2i3HTQf9xPnD62CJlKzg6wMGLO82BXSJBlvsYl1Vj0eezxu2iaCp3Bcu8MN+
         sezXXycjRWhAtXx3SHF+BwG4gago6CTIpXz90pelp7CELxjRXvMOWZdyhHU5q+ULUjFN
         prlh7pyoWv5FQo3aA+hvzCyXL4XLTrfA6l+DiW6sT+yRTo1uSChDP6PxfvEM/7epXnDl
         OL8w==
X-Gm-Message-State: AOAM531xFIwIiXpe8mxKCUapgmDLG7TN69MqAvVW3a3P22eTG3XJD6Ny
	FzfJP92lltqux3nVk2OyyNaHvIsYX5DkJorPLtZ/9hNZT2ci9Egn8nREPnk8eHm7e/l3gOu+EWt
	H9wYRfd8fr1mQcN7xSMLJ3EBVBt8=
X-Received: by 2002:adf:df03:: with SMTP id y3mr10299478wrl.70.1602506726315;
        Mon, 12 Oct 2020 05:45:26 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxpp+5kB0ZYqB04uT+WIG2a4k9Wmj2oRtvHubS3HvdSiiZu7CRJnV3hLF0c9e7ixdPrjPJpoQ==
X-Received: by 2002:adf:df03:: with SMTP id y3mr10299453wrl.70.1602506726102;
        Mon, 12 Oct 2020 05:45:26 -0700 (PDT)
Received: from localhost.localdomain (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
        by smtp.gmail.com with ESMTPSA id d2sm14916896wrq.34.2020.10.12.05.45.24
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 12 Oct 2020 05:45:25 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org,
	qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: [PATCH 3/5] hw/pci-host/uninorth: Use the PCI_FUNC() macro from 'hw/pci/pci.h'
Date: Mon, 12 Oct 2020 14:45:04 +0200
Message-Id: <20201012124506.3406909-4-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012124506.3406909-1-philmd@redhat.com>
References: <20201012124506.3406909-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

We already have a generic PCI_FUNC() macro in "hw/pci/pci.h" to
extract the PCI function identifier; use it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/pci-host/uninorth.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/pci-host/uninorth.c b/hw/pci-host/uninorth.c
index 1ed1072eeb5..c21de0ab805 100644
--- a/hw/pci-host/uninorth.c
+++ b/hw/pci-host/uninorth.c
@@ -65,7 +65,7 @@ static uint32_t unin_get_config_reg(uint32_t reg, uint32_t addr)
         if (slot == 32) {
             slot = -1; /* XXX: should this be 0? */
         }
-        func = (reg >> 8) & 7;
+        func = PCI_FUNC(reg >> 8);
 
         /* ... and then convert them to x86 format */
         /* config pointer */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 12:45:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 12:45:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5928.15423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCq-0002r0-Ts; Mon, 12 Oct 2020 12:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5928.15423; Mon, 12 Oct 2020 12:45:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCq-0002qq-Pm; Mon, 12 Oct 2020 12:45:36 +0000
Received: by outflank-mailman (input) for mailman id 5928;
 Mon, 12 Oct 2020 12:45:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kRxCp-0002pc-LV
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:35 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 28a554d5-2d99-49f4-ae34-aaae167cf750;
 Mon, 12 Oct 2020 12:45:34 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-552-6YnS7GlHPC2C_PRw3qm4hA-1; Mon, 12 Oct 2020 08:45:33 -0400
Received: by mail-wr1-f70.google.com with SMTP id b6so9214803wrn.17
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:32 -0700 (PDT)
Received: from localhost.localdomain
 (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
 by smtp.gmail.com with ESMTPSA id l5sm24326366wrq.14.2020.10.12.05.45.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 12 Oct 2020 05:45:30 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kRxCp-0002pc-LV
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:35 +0000
X-Inumbo-ID: 28a554d5-2d99-49f4-ae34-aaae167cf750
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 28a554d5-2d99-49f4-ae34-aaae167cf750;
	Mon, 12 Oct 2020 12:45:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602506734;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xCNLhoelsRO4TCaFqCkPA59+Ay48720LqdAdNtyWtdc=;
	b=ME0qyA/3xFXlPM+JyqDYjndNjdg9M+Bi4xYmeOb4Xfq5nl7WTbN7ImIK16TkalixutE/IG
	fU0zzkjB89xJIysW2YrWW3sqvSeypdbCUwKyuX+QmXULvqq1JUp8+LMl6nin+xFvoOqREc
	8eMRH+tkhjTdDZB0RzkDW2zUCj1F+Ao=
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-552-6YnS7GlHPC2C_PRw3qm4hA-1; Mon, 12 Oct 2020 08:45:33 -0400
X-MC-Unique: 6YnS7GlHPC2C_PRw3qm4hA-1
Received: by mail-wr1-f70.google.com with SMTP id b6so9214803wrn.17
        for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:32 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=xCNLhoelsRO4TCaFqCkPA59+Ay48720LqdAdNtyWtdc=;
        b=SFpshbcfaP6Ru/vTp5JlU5rrx8iIBmStk6UUmR7I7Hdk2R85ELYg13kRVszXAHQLga
         4R+VzZmQ3xHkbW8EA3+Ot5CyTF1rRb0CdKn3ZMvbJKIGcVn48gRf0cYeTGRuK6VpZN+5
         wdluBdIA4+CHK+qciy2S2TzkRVxAeeudW6/2azZlwj2KcoGXBjFx0b+l/qvQiM6dwpCK
         mgU2qHvYTI6U0Fmr/7a8QhCPo1GXFwI38dVgFB08TDA0lZzHW0P57FwqJjoDSC5pbUsz
         vKr7axonXn0u625wYEzQ31p2WMzijHEjV2ujpzcXjxkScXpFjxH8F3et5xBanLxAO13D
         Cq6Q==
X-Gm-Message-State: AOAM531QuKtrSV3VTIQ7jG0tE4PULsrASPETYAy6dwEaJUdYQSh1uOtz
	FGSF8BsLTJCuSSsMty/WKwaPxybCQB1KVZkxLL7vAlofB8Gr+Ki4L7nG9haZdo+NBp68ZLQMkep
	RK4rEF+vEuhh50wAn/tWUJ27KOsg=
X-Received: by 2002:a5d:4409:: with SMTP id z9mr29211373wrq.236.1602506731773;
        Mon, 12 Oct 2020 05:45:31 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxae6z034MmpPgmdTflq+mg0CAWtLHwAeKxZKoPrU70udcFRiLBlXGhJ+sn9Ujlq9DXSzSzbg==
X-Received: by 2002:a5d:4409:: with SMTP id z9mr29211331wrq.236.1602506731500;
        Mon, 12 Oct 2020 05:45:31 -0700 (PDT)
Received: from localhost.localdomain (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
        by smtp.gmail.com with ESMTPSA id l5sm24326366wrq.14.2020.10.12.05.45.29
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 12 Oct 2020 05:45:30 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org,
	qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: [PATCH 4/5] hw: Use the PCI_SLOT() macro from 'hw/pci/pci.h'
Date: Mon, 12 Oct 2020 14:45:05 +0200
Message-Id: <20201012124506.3406909-5-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012124506.3406909-1-philmd@redhat.com>
References: <20201012124506.3406909-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

We already have a generic PCI_SLOT() macro in "hw/pci/pci.h"
to extract the PCI slot identifier; use it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/hppa/dino.c        | 2 +-
 hw/i386/xen/xen-hvm.c | 2 +-
 hw/isa/piix3.c        | 2 +-
 hw/mips/gt64xxx_pci.c | 2 +-
 hw/pci-host/bonito.c  | 2 +-
 hw/pci-host/ppce500.c | 2 +-
 hw/ppc/ppc4xx_pci.c   | 2 +-
 hw/sh4/sh_pci.c       | 2 +-
 8 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
index 81053b5fb64..5b82c9440d1 100644
--- a/hw/hppa/dino.c
+++ b/hw/hppa/dino.c
@@ -496,7 +496,7 @@ static void dino_set_irq(void *opaque, int irq, int level)
 
 static int dino_pci_map_irq(PCIDevice *d, int irq_num)
 {
-    int slot = d->devfn >> 3;
+    int slot = PCI_SLOT(d->devfn);
 
     assert(irq_num >= 0 && irq_num <= 3);
 
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index f3ababf33b6..276254e6ca9 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -140,7 +140,7 @@ typedef struct XenIOState {
 
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
 {
-    return irq_num + ((pci_dev->devfn >> 3) << 2);
+    return irq_num + (PCI_SLOT(pci_dev->devfn) << 2);
 }
 
 void xen_piix3_set_irq(void *opaque, int irq_num, int level)
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index 587850b8881..f46ccae25cf 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -361,7 +361,7 @@ type_init(piix3_register_types)
 static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
 {
     int slot_addend;
-    slot_addend = (pci_dev->devfn >> 3) - 1;
+    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
     return (pci_intx + slot_addend) & 3;
 }
 
diff --git a/hw/mips/gt64xxx_pci.c b/hw/mips/gt64xxx_pci.c
index e091bc4ed55..588e6f99301 100644
--- a/hw/mips/gt64xxx_pci.c
+++ b/hw/mips/gt64xxx_pci.c
@@ -982,7 +982,7 @@ static int gt64120_pci_map_irq(PCIDevice *pci_dev, int irq_num)
 {
     int slot;
 
-    slot = (pci_dev->devfn >> 3);
+    slot = PCI_SLOT(pci_dev->devfn);
 
     switch (slot) {
     /* PIIX4 USB */
diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
index b05295639a6..ee8b193e15b 100644
--- a/hw/pci-host/bonito.c
+++ b/hw/pci-host/bonito.c
@@ -570,7 +570,7 @@ static int pci_bonito_map_irq(PCIDevice *pci_dev, int irq_num)
 {
     int slot;
 
-    slot = (pci_dev->devfn >> 3);
+    slot = PCI_SLOT(pci_dev->devfn);
 
     switch (slot) {
     case 5:   /* FULOONG2E_VIA_SLOT, SouthBridge, IDE, USB, ACPI, AC97, MC97 */
diff --git a/hw/pci-host/ppce500.c b/hw/pci-host/ppce500.c
index 9517aab913e..5ad1424b31a 100644
--- a/hw/pci-host/ppce500.c
+++ b/hw/pci-host/ppce500.c
@@ -342,7 +342,7 @@ static const MemoryRegionOps e500_pci_reg_ops = {
 
 static int mpc85xx_pci_map_irq(PCIDevice *pci_dev, int pin)
 {
-    int devno = pci_dev->devfn >> 3;
+    int devno = PCI_SLOT(pci_dev->devfn);
     int ret;
 
     ret = ppce500_pci_map_irq_slot(devno, pin);
diff --git a/hw/ppc/ppc4xx_pci.c b/hw/ppc/ppc4xx_pci.c
index 28724c06f88..e8789f64e80 100644
--- a/hw/ppc/ppc4xx_pci.c
+++ b/hw/ppc/ppc4xx_pci.c
@@ -243,7 +243,7 @@ static void ppc4xx_pci_reset(void *opaque)
  * may need further refactoring for other boards. */
 static int ppc4xx_pci_map_irq(PCIDevice *pci_dev, int irq_num)
 {
-    int slot = pci_dev->devfn >> 3;
+    int slot = PCI_SLOT(pci_dev->devfn);
 
     trace_ppc4xx_pci_map_irq(pci_dev->devfn, irq_num, slot);
 
diff --git a/hw/sh4/sh_pci.c b/hw/sh4/sh_pci.c
index 73d2d0bccb0..734892f47c7 100644
--- a/hw/sh4/sh_pci.c
+++ b/hw/sh4/sh_pci.c
@@ -109,7 +109,7 @@ static const MemoryRegionOps sh_pci_reg_ops = {
 
 static int sh_pci_map_irq(PCIDevice *d, int irq_num)
 {
-    return (d->devfn >> 3);
+    return PCI_SLOT(d->devfn);
 }
 
 static void sh_pci_set_irq(void *opaque, int irq_num, int level)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 12:45:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 12:45:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5929.15435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCy-0002yN-6E; Mon, 12 Oct 2020 12:45:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5929.15435; Mon, 12 Oct 2020 12:45:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxCy-0002yF-2P; Mon, 12 Oct 2020 12:45:44 +0000
Received: by outflank-mailman (input) for mailman id 5929;
 Mon, 12 Oct 2020 12:45:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kRxCw-0002x4-Mb
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:42 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a2a73f4a-121b-4786-8d31-62151f37af06;
 Mon, 12 Oct 2020 12:45:42 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-173-8aCFu4Q9NpWinpC_cbR2fQ-1; Mon, 12 Oct 2020 08:45:38 -0400
Received: by mail-wr1-f70.google.com with SMTP id t17so9288654wrm.13
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:38 -0700 (PDT)
Received: from localhost.localdomain
 (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
 by smtp.gmail.com with ESMTPSA id s185sm22765462wmf.3.2020.10.12.05.45.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 12 Oct 2020 05:45:36 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
	id 1kRxCw-0002x4-Mb
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:45:42 +0000
X-Inumbo-ID: a2a73f4a-121b-4786-8d31-62151f37af06
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id a2a73f4a-121b-4786-8d31-62151f37af06;
	Mon, 12 Oct 2020 12:45:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602506741;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9wGxJZcsEOLAGSmgs7p8pt5WGBHmly5siKydoVe/oZE=;
	b=GyWb5H/fZm06bwgGygqA8wvOijWLElMPBbR3NwJ7MLmx63LYrinMLPua+m/FnGVnPL29jF
	37CsQ6wuDBa2UT6F3b4hRP8EfWmenc9qT/SKwiv4+WcEPkHAPt5lNUE4zsJryZAhMFQDj1
	U4E54cSQn7bYPDr/7M59NoPKFMPkbOo=
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-173-8aCFu4Q9NpWinpC_cbR2fQ-1; Mon, 12 Oct 2020 08:45:38 -0400
X-MC-Unique: 8aCFu4Q9NpWinpC_cbR2fQ-1
Received: by mail-wr1-f70.google.com with SMTP id t17so9288654wrm.13
        for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:45:38 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=9wGxJZcsEOLAGSmgs7p8pt5WGBHmly5siKydoVe/oZE=;
        b=mB+0nT7k7KUi7qtdGb1Kr1uSuYSyIn8zdocxrTLwMLprOrMhNZAUrTPdPREHM27/jL
         9Luv3by5FgdcWqdDGj1hhCubsjoZ8cIk2TbOPXBdi7ob8FF0lFnHaEYJkq9W3/rYMHMq
         N64/wGOBMWMrApkfUGbtbUPFj/jjheDyQWOYbfV0P+W2S9bqa6afDkyXj1CHMZgEUQMu
         xyZx55bKG5OSusFonVeNGTgtxEx98DV+li9fJnrrFCnwRcTmngRXApLjopkffoAWCoj+
         aEibFrEIdW2ufFaou1opuch50Y5PR3nYSj2YpKms3Q8HsbXpHjyyYxN7XvlqZMruVDGK
         lcaQ==
X-Gm-Message-State: AOAM531eQdYhKKNi5MgeQYX6y70az+WIi40PzftcFxSFH9KO0mnkWokQ
	qIG6Q2pz+/WO8zUY2ObnYUIcNcZpSHvKy5V11/OLnyaSq2nyrJMc9wftYGFh76IRFINg+HI0L4i
	/EfTPc/bTEIBHutf1DDsqb3ShwVQ=
X-Received: by 2002:adf:dccc:: with SMTP id x12mr30544384wrm.241.1602506737182;
        Mon, 12 Oct 2020 05:45:37 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwGju9pLDMIkjylp5prt4ukBOzX2jmiDpK+NGjR15EPrzwXN0JDbAFCeWH5hcu15IBYdHhmBg==
X-Received: by 2002:adf:dccc:: with SMTP id x12mr30544349wrm.241.1602506736967;
        Mon, 12 Oct 2020 05:45:36 -0700 (PDT)
Received: from localhost.localdomain (106.red-83-59-162.dynamicip.rima-tde.net. [83.59.162.106])
        by smtp.gmail.com with ESMTPSA id s185sm22765462wmf.3.2020.10.12.05.45.35
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 12 Oct 2020 05:45:36 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org,
	qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>,
	Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	=?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
	David Gibson <david@gibson.dropbear.id.au>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: [PATCH 5/5] hw: Use the PCI_DEVFN() macro from 'hw/pci/pci.h'
Date: Mon, 12 Oct 2020 14:45:06 +0200
Message-Id: <20201012124506.3406909-6-philmd@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201012124506.3406909-1-philmd@redhat.com>
References: <20201012124506.3406909-1-philmd@redhat.com>
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Philippe Mathieu-Daudé <f4bug@amsat.org>

We already have a generic PCI_DEVFN() macro in "hw/pci/pci.h"
to pack the PCI slot/function identifiers; use it.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/arm/virt.c          | 3 ++-
 hw/pci-host/uninorth.c | 6 ++----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index e465a988d68..f601ef0798c 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -1144,7 +1144,8 @@ static void create_pcie_irq_map(const VirtMachineState *vms,
                      full_irq_map, sizeof(full_irq_map));
 
     qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupt-map-mask",
-                           0x1800, 0, 0, /* devfn (PCI_SLOT(3)) */
+                           cpu_to_be16(PCI_DEVFN(3, 0)), /* Slot 3 */
+                           0, 0,
                            0x7           /* PCI irq */);
 }
 
diff --git a/hw/pci-host/uninorth.c b/hw/pci-host/uninorth.c
index c21de0ab805..f73d452bdce 100644
--- a/hw/pci-host/uninorth.c
+++ b/hw/pci-host/uninorth.c
@@ -70,10 +70,8 @@ static uint32_t unin_get_config_reg(uint32_t reg, uint32_t addr)
         /* ... and then convert them to x86 format */
         /* config pointer */
         retval = (reg & (0xff - 7)) | (addr & 7);
-        /* slot */
-        retval |= slot << 11;
-        /* fn */
-        retval |= func << 8;
+        /* slot, fn */
+        retval |= PCI_DEVFN(slot, func) << 8;
     }
 
     trace_unin_get_config_reg(reg, addr, retval);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 12:52:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 12:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5938.15446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxJ8-00043Z-T2; Mon, 12 Oct 2020 12:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5938.15446; Mon, 12 Oct 2020 12:52:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxJ8-00043S-Q7; Mon, 12 Oct 2020 12:52:06 +0000
Received: by outflank-mailman (input) for mailman id 5938;
 Mon, 12 Oct 2020 12:52:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K42p=DT=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kRxJ7-00043N-3J
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 12:52:05 +0000
Received: from mail-ej1-x62a.google.com (unknown [2a00:1450:4864:20::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04790500-fd75-4937-87e8-59a2558da542;
 Mon, 12 Oct 2020 12:52:04 +0000 (UTC)
Received: by mail-ej1-x62a.google.com with SMTP id c22so23048963ejx.0
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 05:52:04 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id me12sm10516645ejb.108.2020.10.12.05.52.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 12 Oct 2020 05:52:02 -0700 (PDT)
X-Inumbo-ID: 04790500-fd75-4937-87e8-59a2558da542
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=cGxZviZhW7O5jqtJ0bZv9cnvWltQUpe03KlUpuYC5Pk=;
        b=J6jW6p6hzHWOhTn5sd21PaYyZX8b8qC4clssFgX3c8w+3Lr4ULTcCq+fH6ZnydnPTR
         OoSjfTZrwz3B1f4IDFsUz73av/D7osQPM0ZIkmn96OrJq4fLJSYWE1VSsiXOZcSRqNLm
         c2Vw6gT9JIYUAtGl59LtZabKjNZO15dQrFsZfharwkYrOtpb76ojXJWmaCpBdJPZbX/+
         OVcaZVufm6GyvZAo/7aWEXuYcTxuKYo65DdzKK4erP5VJ4U5yJs7nAktT/qd0EG8d7fl
         EVU+AJicMA75lA/1Plu8+tyuGNmK9kx9EOzlzjakE1oUus+88px2HgdvFRLVc5tcdX+j
         8kzA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=cGxZviZhW7O5jqtJ0bZv9cnvWltQUpe03KlUpuYC5Pk=;
        b=e9/ENTqt37GPLNpzBo33IyimYZVxKcGqiW4Yi2KBKeQ77QM+YLBr40dQ5sFC015vwF
         Yw85UnLQ8DSbbkAEeTUSgD1ujWsJb82B55QHRzlFu3X1Q3FW2lAokm4PxowR4XWMJ0XG
         JT7YjmPuGgJSPeyaP6Zsbm+MGe+ShUlnAhnSvBmHrNestCTAoMsEBtp47rbegfkXkSDL
         mLsI0J4hEk088IL4GKrFOpZJcNdpapTQRTYyDD8MLBVx1ixbGulXBAW7vnwmRADBEMWi
         XeZme1KWSflhvgqPvKv48R07GSSZLPJomhN1jEe/QDpVCB7qgdrwv1n0PTEnbeYwIMeq
         2qtw==
X-Gm-Message-State: AOAM532ZkBIKtJDm19zLYhYTyL7cskZCn49XIIEgtmxKqYX5I3/MJ7QA
	zt4ApspPZ062fXaY+/Dn5ic=
X-Google-Smtp-Source: ABdhPJwHAfJ1KA6vSf28jqCsNyskhp8zKVGrZmW9gROZziTmMM1V5o3vcaVB+aiilEPExBKNmWJZlw==
X-Received: by 2002:a17:906:39ce:: with SMTP id i14mr29164990eje.170.1602507123287;
        Mon, 12 Oct 2020 05:52:03 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: =?utf-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>,
	<qemu-devel@nongnu.org>
Cc: "'Peter Maydell'" <peter.maydell@linaro.org>,
	<qemu-ppc@nongnu.org>,
	<qemu-trivial@nongnu.org>,
	"'Aurelien Jarno'" <aurelien@aurel32.net>,
	<qemu-arm@nongnu.org>,
	=?utf-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <f4bug@amsat.org>,
	"'Michael S. Tsirkin'" <mst@redhat.com>,
	"'Eduardo Habkost'" <ehabkost@redhat.com>,
	"'Jiaxun Yang'" <jiaxun.yang@flygoat.com>,
	"'Yoshinori Sato'" <ysato@users.sourceforge.jp>,
	=?utf-8?Q?'C=C3=A9dric_Le_Goater'?= <clg@kaod.org>,
	"'David Gibson'" <david@gibson.dropbear.id.au>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Helge Deller'" <deller@gmx.de>,
	"'Anthony Perard'" <anthony.perard@citrix.com>,
	"'Richard Henderson'" <rth@twiddle.net>,
	"'Aleksandar Markovic'" <aleksandar.qemu.devel@gmail.com>,
	<xen-devel@lists.xenproject.org>,
	"'Aleksandar Rikalo'" <aleksandar.rikalo@syrmia.com>,
	"'Marcel Apfelbaum'" <marcel.apfelbaum@gmail.com>,
	"'Mark Cave-Ayland'" <mark.cave-ayland@ilande.co.uk>,
	"'Paolo Bonzini'" <pbonzini@redhat.com>,
	"'Huacai Chen'" <chenhc@lemote.com>
References: <20201012124506.3406909-1-philmd@redhat.com> <20201012124506.3406909-5-philmd@redhat.com>
In-Reply-To: <20201012124506.3406909-5-philmd@redhat.com>
Subject: RE: [PATCH 4/5] hw: Use the PCI_SLOT() macro from 'hw/pci/pci.h'
Date: Mon, 12 Oct 2020 13:52:00 +0100
Message-ID: <001301d6a096$7b3a3880$71aea980$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIn0Xs2eWz4EZNtdpBYSFizYKd7hAIdwBpFqOCb0bA=

> -----Original Message-----
> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> Sent: 12 October 2020 13:45
> To: qemu-devel@nongnu.org
> Cc: Peter Maydell <peter.maydell@linaro.org>; qemu-ppc@nongnu.org; qemu-trivial@nongnu.org; Paul Durrant <paul@xen.org>; Aurelien Jarno <aurelien@aurel32.net>; qemu-arm@nongnu.org; Philippe Mathieu-Daudé <f4bug@amsat.org>; Michael S. Tsirkin <mst@redhat.com>; Eduardo Habkost <ehabkost@redhat.com>; Jiaxun Yang <jiaxun.yang@flygoat.com>; Yoshinori Sato <ysato@users.sourceforge.jp>; Cédric Le Goater <clg@kaod.org>; David Gibson <david@gibson.dropbear.id.au>; Stefano Stabellini <sstabellini@kernel.org>; Helge Deller <deller@gmx.de>; Anthony Perard <anthony.perard@citrix.com>; Richard Henderson <rth@twiddle.net>; Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>; xen-devel@lists.xenproject.org; Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>; Marcel Apfelbaum <marcel.apfelbaum@gmail.com>; Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>; Paolo Bonzini <pbonzini@redhat.com>; Huacai Chen <chenhc@lemote.com>
> Subject: [PATCH 4/5] hw: Use the PCI_SLOT() macro from 'hw/pci/pci.h'
>
> From: Philippe Mathieu-Daudé <f4bug@amsat.org>
>
> We already have a generic PCI_SLOT() macro in "hw/pci/pci.h"
> to extract the PCI slot identifier, use it.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

xen-hvm change...

Acked-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 13:11:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 13:11:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5942.15459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxbK-00065x-Ji; Mon, 12 Oct 2020 13:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5942.15459; Mon, 12 Oct 2020 13:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxbK-00065q-G4; Mon, 12 Oct 2020 13:10:54 +0000
Received: by outflank-mailman (input) for mailman id 5942;
 Mon, 12 Oct 2020 13:10:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRxbJ-00065l-4r
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 13:10:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e2620709-6c00-4453-97d0-a790b9a9ffda;
 Mon, 12 Oct 2020 13:10:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRxbG-0006FB-Bv; Mon, 12 Oct 2020 13:10:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRxbG-000645-2v; Mon, 12 Oct 2020 13:10:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRxbG-0008Pd-2P; Mon, 12 Oct 2020 13:10:50 +0000
X-Inumbo-ID: e2620709-6c00-4453-97d0-a790b9a9ffda
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0dL48PRMwfkFswvZhJE96urKQAjF+oMjhaDA3ToiCYc=; b=j9g+Z1IgswC6yIoYLhRBC2Y8ER
	6zBM2GoIMd+sjmeatXc8IMpm46MtDq5tUG8WmLxOLhdQtwF1Rs0WkGG/WywQb+kczHNdNI87ElGsP
	CUK3w63XSktoi7vkvCbJPBjzaDTLCcRZScSKfrtWYT9hgunliTvo8bjHb0+P9iGR1ni0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155728-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155728: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 13:10:50 +0000

flight 155728 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155728/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    3 days
Failing since        155612  2020-10-09 18:01:22 Z    2 days   20 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring a rebuild of xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
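    The objcopy step described above might look like the following sketch.
    The section names (.config, .kernel, .ramdisk, .xsm) are the ones the
    commit defines; the input file names are placeholders, and /bin/true
    is only a stand-in for the real xen.efi PE binary:

```shell
# Stand-in inputs; a real build would use the actual xen.efi PE binary,
# dom0 kernel, initrd and config files.
touch xen.cfg vmlinuz initrd.img xsm.cfg
cp /bin/true xen.efi

# Bundle each component into a named section of the unified image.
objcopy \
  --add-section .config=xen.cfg \
  --add-section .kernel=vmlinuz \
  --add-section .ramdisk=initrd.img \
  --add-section .xsm=xsm.cfg \
  xen.efi xen-unified.efi
```

    Running objdump -h on the result lists the added sections, which is
    how Xen's pe_find_section() locates them at boot.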

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 13:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 13:21:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5945.15473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxlY-0007Aa-KH; Mon, 12 Oct 2020 13:21:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5945.15473; Mon, 12 Oct 2020 13:21:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxlY-0007AT-HK; Mon, 12 Oct 2020 13:21:28 +0000
Received: by outflank-mailman (input) for mailman id 5945;
 Mon, 12 Oct 2020 13:21:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+yFJ=DT=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kRxlW-0007AM-P0
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 13:21:27 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20a2e5b0-70a5-457c-8c38-d9f8e627bc4d;
 Mon, 12 Oct 2020 13:21:24 +0000 (UTC)
X-Inumbo-ID: 20a2e5b0-70a5-457c-8c38-d9f8e627bc4d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602508884;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=dgbELpXm+tKUAI+m6aoZHvCyhz6/QiVIWuxq43gC7Lg=;
  b=dzZ0F6ZSwf91K4SMJ/mBuR/F+z5EGQg//n/yWU6v18P56m34RJ5J16Ec
   ILYEHXuvlkcNxP3fz8z79nfBxFbaesv2/ZUu06654F1yZ2WUectNcHqK0
   EyUEf1L+TuThA2ojd/X1ZqUbdgP5+q4MbZ8Wt0SON2YqUJos1AtzYBp8H
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: v3fu9BnCv54ATKDShxmgTDgVx6oWO2ELsR6Vz4S8eI5gdVVPU+LB1O1gCpEMVSD3AmlqvaU0eZ
 pmA2lu6oa/rm/FWZhjXR+FSwYjJDcFcA6HxqG+w+/gnoUazECYwP55+hxDZmJ61nHLytwyCcaJ
 +2GNlRoniLEyuWe9Z6xrKhhh4fr4c74jga9JB+aoHkZ6Hu5YUmR8q8ha+IB+6mYf0Y2jqJy7Vh
 wVP9alXOomuDvkDG5IfQqhpiZWdGXKDXoEGnGNhmbyJJz1XWyDbX9NcHtd0Iv3y60tadXjP9io
 Z24=
X-SBRS: 2.5
X-MesageID: 29828672
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,366,1596513600"; 
   d="scan'208";a="29828672"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=k7ng/DVulfPQMsbpcSvIefC1io3AT7p06CDp0l09hZieUbGoxhKQlB0cPV8xwZ0/y59tFP+/o6fOlmg49H64g18NDrClHYxUZlsq3T5khsNg/QX2tW9d5dP7xqTkE3KYaUTLiKcXwcW89eb49byNsh7/5HmrLwn57FLBdwpYJ/8DMSN8YKTgDQEGLK/6pE435o3o5nZ8BNg7MDnrU5FjmzkhFFh6Abw6sRSII9d8SDzCIzxnLRLA32JkEb/8r6HiTeegj+EcRdj6lH2Q1uVQ0GEHjUEfkjMty+mBGRdod6xJ1GyWXutHDrEfAZC6HjBJy/md+sY2PWHBpwmxSuQ32g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dgbELpXm+tKUAI+m6aoZHvCyhz6/QiVIWuxq43gC7Lg=;
 b=Gm/XOkpXxhszKyGe+qU/OjlDUUu09LEKZo+BB9UBqMcCfZGgKMEy8CMjtmvlfzvX/MpnaG1cX0ndw5dMbKzHC4FjoOYyguGzrJkXuqRq9jxbb4OHOugi0xA2UMn+LT7KNTQ87YweSdNztf6b5FblOwEIlUBcBG36/g14pe5VU66yQs2GbrxaaCPgMfKOOlKOgAbJiEiq+hMAm8HT5h/EGcsiLnPIJKc2OuGKCx3ik4DP7v9ju0EoGcyWvwGPneuJWbOzO2dC9g3ziCpDxDUDh59qDL1PSqLAmDnLDrqzMgWz9Zd0RFDBvhRKwL0XX645QNwKmFH39/26bbk8vIqRbg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dgbELpXm+tKUAI+m6aoZHvCyhz6/QiVIWuxq43gC7Lg=;
 b=kMlyBchkWMJOxtV1AR4Jktrg+w3g2hkSkDRE1j28iRWwbZ1QnZaGl1rRYRGT85SX6Z0aTzXjlJXit5ZNtodUeF5z1dersoZRGQzMQrY4PyFOxEjQ+TJrx/68p5a+pWHmrFwuV0g7wYp8+sG8gW4ZnrJbDDGItch4kHXrEBY0jvs=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 1/2] golang/xenlight: do not hard code libxl dir in
 gengotypes.py
Thread-Topic: [PATCH 1/2] golang/xenlight: do not hard code libxl dir in
 gengotypes.py
Thread-Index: AQHWoCarymhorBdQCEG/HIvWMEWdvKmT9NcA
Date: Mon, 12 Oct 2020 13:21:21 +0000
Message-ID: <DF12E889-B8BA-490C-8FC5-B13531026DEB@citrix.com>
References: <8e66cd2d53bb9f14bdfa0a2539773f3a6a3526b6.1602458773.git.rosbrookn@ainfosec.com>
In-Reply-To: <8e66cd2d53bb9f14bdfa0a2539773f3a6a3526b6.1602458773.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.1)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d666ae99-fade-4796-936e-08d86eb1b611
x-ms-traffictypediagnostic: SJ0PR03MB5631:
x-microsoft-antispam-prvs: <SJ0PR03MB56311D2F46F447D29FCAA7E699070@SJ0PR03MB5631.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7219;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: gujGHVO2ZXvpfomAXlsIuDK1c5N/jOFmg9wyd/InydiPQxAX7gIBCdcA/y0CIqYBZHBc82+yDG4LfTB85WZzypeSpANhMRf4eIiLJTMagRdrI2wiSFbHsGA757c12pV2cpYg1qSih14N4G9D2HaO6y6vKD6s5D0+l9Sh8xbmuRHfzIVEhJV5/8vk2QRq68FxY5fSDRPa3lsrwqQIL4L/haqdkdLNwJ5AeskcxZ0VtHstHNEBYA5m0sLlP3v+rP6FaWGvCiLsYHhd40OBBEHX39pYjX0FKteue3yvLt+bgeZs3Zf8gNWpKrzSfoG9BKBvO+yuuhsoCz3n8EJTFophRQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(366004)(39860400002)(376002)(33656002)(2616005)(54906003)(5660300002)(76116006)(186003)(4744005)(316002)(66946007)(66446008)(66476007)(66556008)(64756008)(91956017)(71200400001)(478600001)(83380400001)(26005)(36756003)(8676002)(53546011)(2906002)(6506007)(4326008)(55236004)(86362001)(8936002)(6486002)(6512007)(6916009);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: JeO6OmUiP8KqR0I0JgmvCe5Wh1EWdlWSg33+1gaGfyQTzGrzHgQ8FvdHCD9gwhHR60CKfPi83FU0oZt8ZpZw1Cm5RzF6Nca4z71R2UCikRCxkOWYEFKc770lrX9hdKKNlvk9bPRKhxR5lsTpaZAg/Y1qAjpSL1CBCJqsWYYyGMw3ARKWavppsw9+xJHwASfKC/defeZ6e/grbm6hSMywB+EturIUF5pHmzgNqHuUJ0yeOsCh9XAhEMb4r/FiVEZVzs/jDkHWCiLJXnriZACnhMUN+9LtS6J82FPoZr4J4F0RkxqC4+6A6krtiQTMrPa+Zv1UWkcmaNSp35WsDdKROWhGIrJ1sejr69UD5I/Yf3N3l7KS8A7Jh1ycZ87g97FCdK8Vz7Fiz6rMarF/wCE7vZRpfAFQKrETUDswK219fy5QCtZ10+W7ikya0zuY37Q9ijJfI9lfbLkP+HI2gmffdLG3GbW/z99gNugi5ClKVygGDZKR3Qpve0OSQgHsW4wFScRgJ1ZVHva52YBqREgbEOw6NMI2ph4mSpRZtXZprl8M9HuIvp4BdQlT3O6V8yfg04wOjrLEpl1/38P7KpNp3byxkvvnibuPQKc9wZqt5YwZKPjMV9+iaWPFHh9S0ouX8IPdDpcFwn/pdwtL2cePZQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <285A952472198545A8B92AE3DB10EFBD@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d666ae99-fade-4796-936e-08d86eb1b611
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Oct 2020 13:21:21.4188
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Qi4a8YIX2Y+pt9YVchjTceI3kwSWFgUJ7ayp1/DRYLocJgfeGhR9blUIs9OB5CgDQYLTvM33xuc/2aodnCCMVBkscPWeFGK9u9ZLKXWDkVY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5631
X-OriginatorOrg: citrix.com



> On Oct 12, 2020, at 12:31 AM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> Currently, in order to 'import idl' in gengotypes.py, we derive the path
> of the libxl source directory from the XEN_ROOT environment variable, and
> append that to sys.path so python can see idl.py. Since the recent move
> of libxl to tools/libs/light, this hard coding breaks the build.
>
> Instead, check for the environment variable LIBXL_SRC_DIR, but move this
> check to a try-except block (with empty except). This simply makes the
> real error more visible, and does not strictly require that
> LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
> rather than XEN_ROOT when calling gengotypes.py.
>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>
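The environment-variable check described in the patch above can be sketched as follows. This is a minimal sketch with a hypothetical helper name; the actual gengotypes.py change may be shaped differently:

```python
import os
import sys

def extend_sys_path_from_env(var="LIBXL_SRC_DIR", environ=os.environ):
    """Append the libxl source directory to sys.path when the variable
    is set; otherwise fall through silently, so that a subsequent
    'import idl' raises the real ImportError instead of a KeyError
    about the missing environment variable."""
    try:
        sys.path.append(environ[var])
    except KeyError:
        pass
```

With this shape, a build that forgets to set LIBXL_SRC_DIR fails on the import of idl itself, which is the genuine, visible error the commit message describes.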



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 13:21:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 13:21:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5946.15484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxlp-0007En-Ss; Mon, 12 Oct 2020 13:21:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5946.15484; Mon, 12 Oct 2020 13:21:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRxlp-0007Eg-Q3; Mon, 12 Oct 2020 13:21:45 +0000
Received: by outflank-mailman (input) for mailman id 5946;
 Mon, 12 Oct 2020 13:21:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+yFJ=DT=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kRxlo-0007EL-4c
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 13:21:44 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a9db84b-b174-451b-b81b-aaa80353a45c;
 Mon, 12 Oct 2020 13:21:42 +0000 (UTC)
X-Inumbo-ID: 2a9db84b-b174-451b-b81b-aaa80353a45c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602508902;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=+opqHIrxp82GqLjS0Fyev5j2ebhjlS98gYziXnrz7SM=;
  b=HDPK1qUc1VOzWcBHixrBi3AiK1NhRuLyR+1ZNNu4lSDFmJPu6ukG4Ykj
   XELBlryxQA1DtR1VmuQ9wRB661+zhNHBjLfQ9qJO8n2mh2GdDNaE96i3s
   UEGUgPe1E36drh+9b2JFHAeoMpG8dXsmwzz3mXhmLrbC1Ehk8UAKOXt/O
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: SFPNHpivpNQNDqZvBnZ1bEiXEp+6H7ofSCoF2QlCYW0sgcFGnBTXPxTSsh09B8lO9eTOIKzcBm
 zQAlFh1s+JBMxA1et6rRauArwI1W7j50TK9EPkLU5ML/qiSNugGeL01HK2YF28A7ZyoXim0jLP
 BK9rHnyZp2OKeoT3vajNtL6u07KtZFxiSbk8rPmX8WwXjk55RfrU3d6cGdVWMq7Vw8vK1wJqN9
 DmKS3NWNyyh9fQ8YtaLdccsbOTj5LuvW1J6WVFyL6kwGW3p5FyAM4QcWGM0NjTckFnIqF33eFt
 +0g=
X-SBRS: 2.5
X-MesageID: 28795818
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,366,1596513600"; 
   d="scan'208";a="28795818"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Jn3FrhdeR2ydI1MwnVRqtJKqF7UTwSBT4/E82sOxf5HnpfA9C79UxL5txzz268faa69hyFAfWOuev12/Tk8cf63EUnO7q3r++Ihd32rL9iE/ECcR/yHSoy6giaoXkhwHRhxsf3BoIo/P+Si0m9EdwYyDaBdCBdx1XHs91JOlY/ebeol6DeJXnrWpn2HnKNsVJXuCLz/CexctmsH4PLhABr3RWwnbC6Ipx+Whk4orujrliPIWKe+YP7NSq/BD9f1c3ICnPaNHeLOXUPtLPigfaREjOUyqSOAbviJcwf7IPD/8b8eN8+gHUVVCU9ozwCy7DRSlowePYTk5QebDs2qdTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cTgcSX37QmQcykf28E0vQqerMpNJnPSmoi6uIFIQk0M=;
 b=QOPZ5GvcWwfYJaA11SiqLFe1mJlCEhoK5K4GyiEeKSRFZm4/uu0tRpVt5YGIa2PJXcb1rxmBnsl9aeyPkWQxkvtADpn1Ki1y0GKsWKIv7H0e36xujhTqpcQgOalZTrUO6dnfUHb0Lg7rEIGHa+23jIiUrAYTz/mAcSY94AFzrYNLDcJFQilOMcBhDh6+AqZjxf+REV9PjmouTLT7Owduwz6Ll9AM2hOUUsNHBlKORO7XUI6MuaxHk6XtJ58behno56V9mDqohNd1UN7F5ZpzCaMs+iuABRgypprEkMf/usyT8XvmzFCrhyxvz+xI+a8SQBOzwMRG/Z5zMVS28QyULw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cTgcSX37QmQcykf28E0vQqerMpNJnPSmoi6uIFIQk0M=;
 b=B7yZwDa6gHJMZEI01Csou02BOATkOHVYv4a2c7JfMzidlXj1D5eLep9lHXYUU6uhDNR+gPgu0TJ1xfHeI2NGTz5aR/FIpaX3Tq/Qu2IVIAts2Jjm+Rtry66J3x6qiuiwUl7D1pvSyyzSYzkdf/zeVh3KAI3zGOOHUehEHX95/2k=
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Nick
 Rosbrook" <rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH 2/2] golang/xenlight: standardize generated code comment
Thread-Topic: [PATCH 2/2] golang/xenlight: standardize generated code comment
Thread-Index: AQHWoCapLtpfOKLaDkuCp3d16+bqdKmT9OyA
Date: Mon, 12 Oct 2020 13:21:38 +0000
Message-ID: <91E60ECC-5F17-4BCF-8BA7-DAC4E9227607@citrix.com>
References: <8e66cd2d53bb9f14bdfa0a2539773f3a6a3526b6.1602458773.git.rosbrookn@ainfosec.com>
 <d8615f72d205b8a818ea397ccbb86f6ade1cd158.1602458773.git.rosbrookn@ainfosec.com>
In-Reply-To: <d8615f72d205b8a818ea397ccbb86f6ade1cd158.1602458773.git.rosbrookn@ainfosec.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.1)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 34d82313-c310-4e5b-4a89-08d86eb1c045
x-ms-traffictypediagnostic: SJ0PR03MB5631:
x-microsoft-antispam-prvs: <SJ0PR03MB5631FD39EAAEF6AC48D2498999070@SJ0PR03MB5631.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:2512;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: TX7yb3O2GXeccd7B9UJG2FkS83n2KTSp7CQarXqr90JxhwcdRi0fOPmu4QoHaQt3Ds8n1lBnq14a/wE4AbwJXvp68eFOOTC7/Ia1H2dQMrlkU6/HQyTDwHTQHy8iEs2R2VnSm/5M/gZ5sJEBGTvSyznlcE0l5wS0/bXXf7Jtd5/ECeitok6+I7dPEub0Cq2QQU8qu08Jl17MeWYKfrtLxEZnOSJMFDSwAGbUcLmCdSpZ4I8R7Vj06A+WmJgXRK4yX4zoP0K6ayBODYhMwS+sDL0vjHiKizf6EfKwGlbSnnnGFNPWHVIxZZIm95LWBeboCegbPgGosv9NWmY1gc5UUWxnR46Eh/fJ1sQDM6l6KvCSJXQ5/K7WCOTKsRyJnmOqNHExTPXS/stCf7a/n3V8RA==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(366004)(39860400002)(376002)(33656002)(2616005)(54906003)(5660300002)(966005)(76116006)(186003)(4744005)(316002)(66946007)(66446008)(66476007)(66556008)(64756008)(91956017)(71200400001)(478600001)(26005)(36756003)(8676002)(53546011)(2906002)(6506007)(4326008)(55236004)(86362001)(8936002)(6486002)(6512007)(83080400001)(6916009);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 7gOt+Jpz6vXSdrOph/2+XS/3z7Agn2N5TKgX8n2L4ceXGJJC4eoacC5dequ1ntrRaGxgYjORtwel9U5okCXLk+fC/ZmX+hHn+hLH2i5jkCDOTTWGkFFsk8vhD72TVzGpQFC8Yk+7auDfhdYBlZ4sVTRGYzpdXWwiU4xALKTWXdTVF2ZcSy2kWVjN0N2qvPR8A8DWdRKRMdI/7qXbEyh4WIhGFy6o5EuES2ErM3V5eKAVVfHX+Bqgdy9R7O4Ni+SPqBTHJ3+5gavOQBd7NEAw2jUdiI7p2H63M7eUut7YFUSZc5H4j06Vgm4FJq1Gr2AMKVIloFt39l8q9Eatv0InOHWZnVeJ7MdzD5jeXR6R1Rp8f/fj6mKxH+PDdj94+TW4d+/zW3jGP9YGv3lcY+qZegmkJOAvTzhs8/GzG9rgsUZrylhoqjwaFauMCrU1tFwruOuID+yPX2O5E0B8CtYnO2rTYrX2z9oTZ+w/Sm/J8djGZP7etsKwYxGj7FCIgN4yYylLylU+0lwyGQiKX6UTrbjYKAROYoz3DObPu6+yq4vi6iwFujFWFNi0hkzoxRIqWjtlriAhOO58838bwxhRXjCWBgiWRDmDVKAC+qZ3mRx5THF9b0oqMDsSqnOLPj7NEesyGyuG/6540nl3LIiHEg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D30F78B635CDEC49BAC1D21982489BF3@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 34d82313-c310-4e5b-4a89-08d86eb1c045
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Oct 2020 13:21:38.5149
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ugHFN44HWOE6lG6jigcBWKUnx4i4GGyb7+ov/6egQr3FC8XgQi72sW7kCTv00GgMCSyR7nCbZmWGm2ojqy7/yvx0PaYfzstWxSAWsarsxS0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5631
X-OriginatorOrg: citrix.com



> On Oct 12, 2020, at 12:31 AM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> There is a standard format for generated Go code header comments, as set
> by [1]. Modify gengotypes.py to follow this standard, and use the
> additional
>
>  // source: <IDL file basename>
>
> convention used by protoc-gen-go.
>
> This change is motivated by the fact that since 41aea82de2, the comment
> would include the absolute path to libxl_types.idl, therefore creating
> unintended diffs when generating code across different machines. This
> approach fixes that problem.
>
> [1] https://github.com/golang/go/issues/13560
>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>
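The header convention described in the commit message above can be sketched like this. The exact wording emitted by gengotypes.py may differ; the function name is hypothetical:

```python
import os

def generated_file_header(tool, idl_path):
    """Build the standard generated-code comment: the
    'Code generated ... DO NOT EDIT.' line from golang/go#13560,
    plus the protoc-gen-go style 'source:' line.  Using the basename
    keeps the header identical across machines whose checkouts live
    at different absolute paths."""
    return ("// Code generated by %s. DO NOT EDIT.\n"
            "// source: %s\n" % (tool, os.path.basename(idl_path)))
```

Because only the basename of the IDL file appears, regenerating on two different machines produces byte-identical output, which is the diff-noise problem the patch fixes.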



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 13:49:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 13:49:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5949.15497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRyCq-00019b-5s; Mon, 12 Oct 2020 13:49:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5949.15497; Mon, 12 Oct 2020 13:49:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRyCq-00019U-21; Mon, 12 Oct 2020 13:49:40 +0000
Received: by outflank-mailman (input) for mailman id 5949;
 Mon, 12 Oct 2020 13:49:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wBf0=DT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kRyCp-00019P-7Q
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 13:49:39 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bafb4230-266c-4689-bde9-539d991aac2f;
 Mon, 12 Oct 2020 13:49:38 +0000 (UTC)
X-Inumbo-ID: bafb4230-266c-4689-bde9-539d991aac2f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602510578;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=eUVqbV5KovsAwf7HOfovlf9L/YrWZ2fcZGUE1UEun/4=;
  b=Mex5Wt+jVepM34qWO/j2RTQVhl0fxxmPjZyZQ/3xYkexIv/us7bf0ejd
   +2APILytVpvLahs8nHDUh3j8a+0u6HMOjqNJY+2XCTtBRarXVseJT947v
   pk/j9ePDKJjVwodQjjpMp9jo7qW8dn1cJLyZCb1nG32LTYnSBVoqKQZKY
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: fjjJ6Om0vv5+2l6+3zSDFJGlMzZgJniFcUzGJoxd9EjggT9IINx5POjs3WfXJzaqrEXaj5pifj
 XT5K+zW9KelE/TsFk3b+S0QJluGFeQn80O+9fdaPkRZPRvSveN1uZ4uThaLqqCYplKmAZha4XO
 qrzBThE8GtZnKIKhwiVu2lLBZz5ISciFp/40ALOX1ncVU6IgxPk3ofWFL+EYTZtXE2iT5xmQAY
 xkbNJtigtwfxEqykUXtQ63XfG8z35GC4m5B9k85qShCe9CH4GGzC9ivgmLhf3Hlp3lZyWL1jXj
 y0A=
X-SBRS: 2.5
X-MesageID: 29129914
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,366,1596513600"; 
   d="scan'208";a="29129914"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
Subject: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
Date: Mon, 12 Oct 2020 14:49:08 +0100
Message-ID: <20201012134908.27497-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

All interrupts and exceptions pass a struct cpu_user_regs up into C.  This
contains the legacy vm86 fields from 32bit days, which are beyond the
hardware-pushed frame.

Accessing these fields is generally illegal, as they are logically out of
bounds for anything other than an interrupt/exception hitting ring1/3 code.

Unfortunately, the #DF handler uses these fields as part of preparing the
state dump, and being IST, accesses the adjacent stack frame.

This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
layout to support shadow stacks" repositioned the #DF stack to be adjacent to
the guard page, which turns this OoB write into a fatal pagefault:

  (XEN) *** DOUBLE FAULT ***
  (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
  (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
  (XEN) CPU:    4
  (XEN) RIP:    e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
  (XEN) RFLAGS: 0000000000050086   CONTEXT: hypervisor (d1v0)
  ...
  (XEN) Xen call trace:
  (XEN)    [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
  (XEN)    [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
  (XEN)    [<ffff82d04039acd7>] F double_fault+0x107/0x110
  (XEN)
  (XEN) Pagetable walk from ffff830236f6d008:
  (XEN)  L4[0x106] = 80000000bfa9b063 ffffffffffffffff
  (XEN)  L3[0x008] = 0000000236ffd063 ffffffffffffffff
  (XEN)  L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
  (XEN)  L1[0x16d] = 8000000236f6d161 ffffffffffffffff
  (XEN)
  (XEN) ****************************************
  (XEN) Panic on CPU 4:
  (XEN) FATAL PAGE FAULT
  (XEN) [error_code=0003]
  (XEN) Faulting linear address: ffff830236f6d008
  (XEN) ****************************************
  (XEN)

and renders the main #DF analysis broken.

The proper fix is to delete cpu_user_regs.es and later, so no
interrupt/exception path can access OoB, but this needs disentangling from the
PV ABI first.

Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 xen/arch/x86/cpu/common.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index da74172776..a684519a20 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -770,7 +770,13 @@ void load_system_tables(void)
 	tss->ist[IST_MCE - 1] = stack_top + (1 + IST_MCE) * PAGE_SIZE;
 	tss->ist[IST_NMI - 1] = stack_top + (1 + IST_NMI) * PAGE_SIZE;
 	tss->ist[IST_DB  - 1] = stack_top + (1 + IST_DB)  * PAGE_SIZE;
-	tss->ist[IST_DF  - 1] = stack_top + (1 + IST_DF)  * PAGE_SIZE;
+	/*
+	 * Gross bodge.  The #DF handler uses the vm86 fields of cpu_user_regs
+	 * beyond the hardware frame.  Adjust the stack entrypoint so this
+	 * doesn't manifest as an OoB write which hits the guard page.
+	 */
+	tss->ist[IST_DF  - 1] = stack_top + (1 + IST_DF)  * PAGE_SIZE -
+		(sizeof(struct cpu_user_regs) - offsetof(struct cpu_user_regs, es));
 	tss->bitmap = IOBMP_INVALID_OFFSET;
 
 	/* All other stack pointers poisioned. */
-- 
2.11.0
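The reservation in the hunk above is the size of the legacy tail of cpu_user_regs starting at es. The arithmetic can be illustrated with a hypothetical miniature layout (heavily truncated; the real Xen structure has far more fields and a different tail size, so the number below is illustrative only):

```python
import ctypes

class MiniRegs(ctypes.Structure):
    # Hypothetical stand-in for cpu_user_regs: a hardware-pushed part,
    # then the legacy vm86 tail from 'es' onward, which is out of
    # bounds for IST exceptions hitting hypervisor context.
    _fields_ = [
        ("rip",    ctypes.c_uint64),
        ("cs",     ctypes.c_uint64),
        ("rflags", ctypes.c_uint64),
        ("rsp",    ctypes.c_uint64),
        ("ss",     ctypes.c_uint64),   # hardware frame ends here
        ("es",     ctypes.c_uint32),   # legacy fields: beyond the frame
        ("ds",     ctypes.c_uint32),
        ("fs",     ctypes.c_uint32),
        ("gs",     ctypes.c_uint32),
    ]

# Lowering the IST entry by 'tail' bytes keeps writes to the legacy
# fields inside the #DF stack instead of on the adjacent guard page.
tail = ctypes.sizeof(MiniRegs) - MiniRegs.es.offset
```

This is exactly the `sizeof(struct cpu_user_regs) - offsetof(struct cpu_user_regs, es)` term in the patch, just computed over a toy structure.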



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 13:55:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 13:55:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5954.15511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRyIP-0002Au-Uv; Mon, 12 Oct 2020 13:55:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5954.15511; Mon, 12 Oct 2020 13:55:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRyIP-0002An-RI; Mon, 12 Oct 2020 13:55:25 +0000
Received: by outflank-mailman (input) for mailman id 5954;
 Mon, 12 Oct 2020 13:55:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R8H3=DT=eik.bme.hu=balaton@srs-us1.protection.inumbo.net>)
 id 1kRyIN-0002Ai-W6
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 13:55:24 +0000
Received: from zero.eik.bme.hu (unknown [2001:738:2001:2001::2001])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d85a01d-8cc9-40dc-9dc1-d03dbe13488c;
 Mon, 12 Oct 2020 13:55:20 +0000 (UTC)
Received: from zero.eik.bme.hu (blah.eik.bme.hu [152.66.115.182])
 by localhost (Postfix) with SMTP id 0E74074594E;
 Mon, 12 Oct 2020 15:55:19 +0200 (CEST)
Received: by zero.eik.bme.hu (Postfix, from userid 432)
 id E0E53745712; Mon, 12 Oct 2020 15:55:18 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by zero.eik.bme.hu (Postfix) with ESMTP id DF31F745702;
 Mon, 12 Oct 2020 15:55:18 +0200 (CEST)
X-Inumbo-ID: 3d85a01d-8cc9-40dc-9dc1-d03dbe13488c
Date: Mon, 12 Oct 2020 15:55:18 +0200 (CEST)
From: BALATON Zoltan <balaton@eik.bme.hu>
To: =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <philmd@redhat.com>
cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>, 
    Paul Durrant <paul@xen.org>, Jiaxun Yang <jiaxun.yang@flygoat.com>, 
    Huacai Chen <chenhc@lemote.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Yoshinori Sato <ysato@users.sourceforge.jp>, qemu-trivial@nongnu.org, 
    Helge Deller <deller@gmx.de>, "Michael S. Tsirkin" <mst@redhat.com>, 
    Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>, 
    Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
    Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org, 
    David Gibson <david@gibson.dropbear.id.au>, 
    Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>, 
    Eduardo Habkost <ehabkost@redhat.com>, qemu-arm@nongnu.org, 
    =?ISO-8859-15?Q?C=E9dric_Le_Goater?= <clg@kaod.org>, 
    Richard Henderson <rth@twiddle.net>, 
    =?ISO-8859-15?Q?Philippe_Mathieu-Daud=E9?= <f4bug@amsat.org>, 
    qemu-ppc@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>, 
    Aurelien Jarno <aurelien@aurel32.net>
Subject: Re: [PATCH 1/5] hw/pci-host/bonito: Make PCI_ADDR() macro more
 readable
In-Reply-To: <20201012124506.3406909-2-philmd@redhat.com>
Message-ID: <3894edd-a214-3edf-8cbe-3566842e8a4@eik.bme.hu>
References: <20201012124506.3406909-1-philmd@redhat.com> <20201012124506.3406909-2-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="3866299591-694601815-1602510918=:97629"
X-Spam-Checker-Version: Sophos PMX: 6.4.8.2820816, Antispam-Engine: 2.7.2.2107409, Antispam-Data: 2020.10.12.135118, AntiVirus-Engine: 5.77.0, AntiVirus-Data: 2020.10.12.5770001
X-Spam-Flag: NO
X-Spam-Probability: 9%
X-Spam-Level: 
X-Spam-Status: No, score=9% required=50%

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--3866299591-694601815-1602510918=:97629
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8BIT

On Mon, 12 Oct 2020, Philippe Mathieu-Daudé wrote:
> From: Philippe Mathieu-Daudé <f4bug@amsat.org>
>
> The PCI_ADDR() macro uses generic PCI fields shifted by 8 bits.
> Rewrite it, extracting the shift operation out one layer.
>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> hw/pci-host/bonito.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
> index a99eced0657..abb3ee86769 100644
> --- a/hw/pci-host/bonito.c
> +++ b/hw/pci-host/bonito.c
> @@ -196,8 +196,8 @@ FIELD(BONGENCFG, PCIQUEUE,      12, 1)
> #define PCI_IDSEL_VIA686B          (1 << PCI_IDSEL_VIA686B_BIT)
>
> #define PCI_ADDR(busno , devno , funno , regno)  \
> -    ((((busno) << 16) & 0xff0000) + (((devno) << 11) & 0xf800) + \
> -    (((funno) << 8) & 0x700) + (regno))
> +    ((((busno) << 8) & 0xff00) + (((devno) << 3) & 0xf8) + \
> +    (((funno) & 0x7) << 8) + (regno))

Are you missing a << 8 somewhere before + (regno), or are both of these 
equally unreadable and I've missed something? This seems to be completely 
replaced by the next patch, so what's the point of this change?

Regards,
BALATON Zoltan

>
> typedef struct BonitoState BonitoState;
>
>
--3866299591-694601815-1602510918=:97629--
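The equivalence question raised above can be checked numerically. A quick sketch (in Python for brevity) of the two macro bodies quoted in the diff, assuming the use site now applies the extracted << 8 as the commit message suggests; `pci_addr_guess` is purely my speculation about the intended body, not anything from the patch:

```python
def pci_addr_old(busno, devno, funno, regno):
    # Body of the macro being removed: bus[23:16] dev[15:11] fun[10:8] reg[7:0].
    return (((busno << 16) & 0xff0000) + ((devno << 11) & 0xf800)
            + ((funno << 8) & 0x700) + regno)

def pci_addr_new(busno, devno, funno, regno):
    # Body of the replacement macro, exactly as in the patch: note that
    # funno lands in bits 8-10, colliding with busno's bits 8-15.
    return (((busno << 8) & 0xff00) + ((devno << 3) & 0xf8)
            + ((funno & 0x7) << 8) + regno)

def pci_addr_guess(busno, devno, funno, regno):
    # One hedged reading of what may have been intended: shift only the
    # bus/dev/fun part and keep regno outside, reproducing the old layout.
    return (((((busno << 8) & 0xff00) + ((devno << 3) & 0xf8)
              + (funno & 0x7)) << 8) + regno)

old = pci_addr_old(1, 2, 3, 4)       # 0x11304
new = pci_addr_new(1, 2, 3, 4) << 8  # 0x41400: funno and regno misplaced
```

The two differ for any nonzero funno (and for regno, which ends up shifted along with the rest), which supports the question; the `pci_addr_guess` variant matches the old macro for all in-range inputs.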


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 14:27:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 14:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5957.15524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRynA-0005EU-BL; Mon, 12 Oct 2020 14:27:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5957.15524; Mon, 12 Oct 2020 14:27:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRynA-0005EN-8N; Mon, 12 Oct 2020 14:27:12 +0000
Received: by outflank-mailman (input) for mailman id 5957;
 Mon, 12 Oct 2020 14:27:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3Icg=DT=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kRyn8-0005EI-VL
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 14:27:11 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 314f5422-3e7d-49e2-8a4c-02e0335de2b1;
 Mon, 12 Oct 2020 14:27:09 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-176-DM2oIv67Nm-9F6tyvIgkbg-1; Mon, 12 Oct 2020 10:27:07 -0400
Received: by mail-wr1-f69.google.com with SMTP id t17so9412352wrm.13
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 07:27:07 -0700 (PDT)
Received: from [192.168.1.36] (106.red-83-59-162.dynamicip.rima-tde.net.
 [83.59.162.106])
 by smtp.gmail.com with ESMTPSA id f6sm11354830wru.50.2020.10.12.07.27.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 12 Oct 2020 07:27:05 -0700 (PDT)
X-Inumbo-ID: 314f5422-3e7d-49e2-8a4c-02e0335de2b1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602512829;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Vo+2idY1iOLKCcCi+5C1utcyJaMx4cV6n4V4S3DSJYU=;
	b=D9z/fy+WqrDNWlxzzfefNn6zPD0BBBa+EtAUPnP0QIKljaCwIcPQnkYUFlOfB/RM1SHnYg
	9+5OSiDrOqjJgEJKP+KUM/x0/42tFv+cNW+1WmlThjPKziglTJUC7fvU8AhAUX5VnayQsJ
	xRt9rdPWU+rdGxPIxekx45/dFhy8fGc=
X-MC-Unique: DM2oIv67Nm-9F6tyvIgkbg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Vo+2idY1iOLKCcCi+5C1utcyJaMx4cV6n4V4S3DSJYU=;
        b=igSRNVOjnPAdyw6ECpqBgLyaEVZmEivOjDQQOV5Qp2DgPziuY/A7bqz/1lOIQ9wtov
         UhpGU3cuMxe6F3GG8bmrghZRFwP0QO/lnFqLr7TVZh7BTtzKnbK5GMDvyuHqUVsYmJ/K
         tdso6S4AEj4xN+dwBua1of+ANS9ooGZQGMZkBmF1iwFa0zliC95MLKC3gfCzufyJpPTU
         4ISADIhD4aV8hi7/EjNnkGoX6fA35Ouvpt+WHqX5diSvxz70VmhLTPV7YHND9Om8RtDH
         sOo+8UEXWWlXVvwInRvmpQUrGA8RsYHP+ZpesvGxEPIgnHYkaaGyD+Ug4BCvFh4GY0Li
         tZeA==
X-Gm-Message-State: AOAM531pTAOS2yGM9ODY42k4sEpT2UcnEC4UiPhn3zcShB28BHip29Gq
	yaD3yuwIEq1v/bmOhn7/LUAIttZGVzCRcMO41+QN7V3Ml480sBdPCZXOUnPjk9ROw/00klTLkxP
	n+/4tQDlG4hXu3126ZnXbZxilKcs=
X-Received: by 2002:adf:8b92:: with SMTP id o18mr30897969wra.54.1602512826430;
        Mon, 12 Oct 2020 07:27:06 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyN/JUPdkMsHaZBYIgaydnOJmnNwLH0QYzEln08Ptid/yaqlyzjllxaKTcU7OhsAYIHHDu0Gg==
X-Received: by 2002:adf:8b92:: with SMTP id o18mr30897939wra.54.1602512826234;
        Mon, 12 Oct 2020 07:27:06 -0700 (PDT)
Subject: Re: [PATCH 1/5] hw/pci-host/bonito: Make PCI_ADDR() macro more
 readable
To: BALATON Zoltan <balaton@eik.bme.hu>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
 Paul Durrant <paul@xen.org>, Jiaxun Yang <jiaxun.yang@flygoat.com>,
 Huacai Chen <chenhc@lemote.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Yoshinori Sato <ysato@users.sourceforge.jp>,
 qemu-trivial@nongnu.org, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 David Gibson <david@gibson.dropbear.id.au>,
 Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-arm@nongnu.org,
 =?UTF-8?Q?C=c3=a9dric_Le_Goater?= <clg@kaod.org>,
 Richard Henderson <rth@twiddle.net>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <f4bug@amsat.org>, qemu-ppc@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 Aurelien Jarno <aurelien@aurel32.net>
References: <20201012124506.3406909-1-philmd@redhat.com>
 <20201012124506.3406909-2-philmd@redhat.com>
 <3894edd-a214-3edf-8cbe-3566842e8a4@eik.bme.hu>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <aadc1813-9289-85eb-18b9-70c4189fd879@redhat.com>
Date: Mon, 12 Oct 2020 16:27:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <3894edd-a214-3edf-8cbe-3566842e8a4@eik.bme.hu>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10/12/20 3:55 PM, BALATON Zoltan wrote:
> On Mon, 12 Oct 2020, Philippe Mathieu-Daudé wrote:
>> From: Philippe Mathieu-Daudé <f4bug@amsat.org>
>>
>> The PCI_ADDR() macro uses generic PCI fields shifted by 8 bits.
>> Rewrite it, extracting the shift operation out one layer.
>>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>> hw/pci-host/bonito.c | 4 ++--
>> 1 file changed, 2 insertions(+), 2 deletions(-)
>>
>> diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
>> index a99eced0657..abb3ee86769 100644
>> --- a/hw/pci-host/bonito.c
>> +++ b/hw/pci-host/bonito.c
>> @@ -196,8 +196,8 @@ FIELD(BONGENCFG, PCIQUEUE,      12, 1)
>> #define PCI_IDSEL_VIA686B          (1 << PCI_IDSEL_VIA686B_BIT)
>>
>> #define PCI_ADDR(busno , devno , funno , regno)  \
>> -    ((((busno) << 16) & 0xff0000) + (((devno) << 11) & 0xf800) + \
>> -    (((funno) << 8) & 0x700) + (regno))
>> +    ((((busno) << 8) & 0xff00) + (((devno) << 3) & 0xf8) + \
>> +    (((funno) & 0x7) << 8) + (regno))
> 
> Are you missing a << 8 somewhere before + (regno), or are both of these
> equally unreadable and I've missed something? This seems to be
> completely replaced by the next patch, so what's the point of this change?

I might have missed a parenthesis somewhere indeed =)

I'm happy to merge it into the next patch; I thought splitting it out
would be easier to review, but it isn't.

Thanks for reviewing!

> 
> Regards,
> BALATON Zoltan
> 
>>
>> typedef struct BonitoState BonitoState;
>>
>>



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 14:33:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 14:33:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5959.15537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRytX-0006Fg-2a; Mon, 12 Oct 2020 14:33:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5959.15537; Mon, 12 Oct 2020 14:33:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRytW-0006FD-Um; Mon, 12 Oct 2020 14:33:46 +0000
Received: by outflank-mailman (input) for mailman id 5959;
 Mon, 12 Oct 2020 14:33:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wBf0=DT=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kRytW-00068O-2J
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 14:33:46 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cba30a2-aa77-49d0-a7c5-2e98b334ad0f;
 Mon, 12 Oct 2020 14:33:24 +0000 (UTC)
X-Inumbo-ID: 8cba30a2-aa77-49d0-a7c5-2e98b334ad0f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602513204;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=yRny3O4/YPHHOTKW9ErMqTxVvo4tLH5gJNhcPjhT/yc=;
  b=Y2AQ0qCUnidteQmMzg0E1AIeDrwUiDYU0VbrakYxq7d65zIrDMF9f98o
   MoaBkwbPiu8S4ZXc2A2of0vHy7aYgn/5AUEAHKOCmLmM0XJ60+eNx+l53
   xxC8hyhnhvYPtelgx33fPvzJPYXDJooC2LQpqzBLumIk0XJy4AbZKecNz
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: MOtvIfuyBNYAxa0IYnZRnVL3H9oufYuXyT48u5q6voZiUb3r5S4Wu3c9FlRsjKlVBNtmtQM04T
 tXv1YLAAAa3TRzvDYXq2Ml/ovpquW72u0T/XoRN/xrEXO8gvyeeb8+p9PAq2NWu3z7ijeqKT5Q
 sQQBhD0yVGNjwAJ+yb55GucdcC35Sr6HWTEGheY9TN47hYY9CB113XGXuU5WajvuelCH4tEDfp
 Yv+CY2D40Z8K4q4eeLbtWg63BSRMxCcCSv5stEVPPAAfvVr8hLWIk1OJuDvWZ1+WFe4rCsLoOS
 8cc=
X-SBRS: 2.5
X-MesageID: 29838148
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,366,1596513600"; 
   d="scan'208";a="29838148"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/ucode/intel: Improve description for gathering the microcode revision
Date: Mon, 12 Oct 2020 15:25:23 +0100
Message-ID: <20201012142523.17652-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Obtaining the microcode revision on Intel CPUs is complicated for backwards
compatibility reasons.  Update apply_microcode() to use a slightly more
efficient CPUID invocation, now that the documentation has been updated to
confirm that any CPUID instruction is fine, not just CPUID.1.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpu/microcode/intel.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index d9bb1bc10e..72c07fcd1d 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -126,13 +126,16 @@ static void collect_cpu_info(void)
     rdmsrl(MSR_IA32_PLATFORM_ID, msr_content);
     csig->pf = 1 << ((msr_content >> 50) & 7);
 
-    wrmsrl(MSR_IA32_UCODE_REV, 0x0ULL);
-    /* As documented in the SDM: Do a CPUID 1 here */
+    /*
+     * Obtaining the microcode version involves writing 0 to the "read only"
+     * UCODE_REV MSR, executing any CPUID instruction, after which a nonzero
+     * revision should appear.
+     */
+    wrmsrl(MSR_IA32_UCODE_REV, 0);
     csig->sig = cpuid_eax(1);
-
-    /* get the current revision from MSR 0x8B */
     rdmsrl(MSR_IA32_UCODE_REV, msr_content);
-    csig->rev = (uint32_t)(msr_content >> 32);
+    csig->rev = msr_content >> 32;
+
     pr_debug("microcode: collect_cpu_info : sig=%#x, pf=%#x, rev=%#x\n",
              csig->sig, csig->pf, csig->rev);
 }
@@ -270,14 +273,15 @@ static int apply_microcode(const struct microcode_patch *patch)
 
     wbinvd();
 
-    /* write microcode via MSR 0x79 */
     wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)patch->data);
-    wrmsrl(MSR_IA32_UCODE_REV, 0x0ULL);
 
-    /* As documented in the SDM: Do a CPUID 1 here */
-    cpuid_eax(1);
-
-    /* get the current revision from MSR 0x8B */
+    /*
+     * Obtaining the microcode version involves writing 0 to the "read only"
+     * UCODE_REV MSR, executing any CPUID instruction, after which a nonzero
+     * revision should appear.
+     */
+    wrmsrl(MSR_IA32_UCODE_REV, 0);
+    cpuid_eax(0);
     rdmsrl(MSR_IA32_UCODE_REV, msr_content);
     sig->rev = rev = msr_content >> 32;
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 14:38:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 14:38:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5961.15548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRyxk-0006Rs-JQ; Mon, 12 Oct 2020 14:38:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5961.15548; Mon, 12 Oct 2020 14:38:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRyxk-0006Rl-GK; Mon, 12 Oct 2020 14:38:08 +0000
Received: by outflank-mailman (input) for mailman id 5961;
 Mon, 12 Oct 2020 14:38:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IHVd=DT=epam.com=prvs=855414846c=anastasiia_lukianenko@srs-us1.protection.inumbo.net>)
 id 1kRyxj-0006Rg-6s
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 14:38:07 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29412c70-d056-4969-be6b-ad689cb23386;
 Mon, 12 Oct 2020 14:38:05 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09CEavic016322; Mon, 12 Oct 2020 14:38:02 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2055.outbound.protection.outlook.com [104.47.13.55])
 by mx0a-0039f301.pphosted.com with ESMTP id 344ktthqdu-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 12 Oct 2020 14:38:02 +0000
Received: from AM7PR03MB6531.eurprd03.prod.outlook.com (2603:10a6:20b:1c2::6)
 by AM6PR03MB4726.eurprd03.prod.outlook.com (2603:10a6:20b:d::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.23; Mon, 12 Oct
 2020 14:37:59 +0000
Received: from AM7PR03MB6531.eurprd03.prod.outlook.com
 ([fe80::9439:23f1:1063:ad8]) by AM7PR03MB6531.eurprd03.prod.outlook.com
 ([fe80::9439:23f1:1063:ad8%5]) with mapi id 15.20.3455.030; Mon, 12 Oct 2020
 14:37:59 +0000
X-Inumbo-ID: 29412c70-d056-4969-be6b-ad689cb23386
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=itiyESxEjlTiJPNUa0OkRLxBsz3gq31G322uoyWowW8zKpp1ifN1ufAyYvKi8ez3aok87DVsT9TWDY1IdxYGJbtPIEmXStdD9gfXvu2/ru7uPZK4cqLp/yI1C2U0epBygcFlGZlXZNiduSd4avFTDTiDlV/EhrtKrnCsLFmhlwJomt8tL+RZaNsGJ7p8mh90ZEueWPEECw1kSvCqnZPpRKmc8H6NGXNxgnmE5PpuSHiO7F+z0DX4y3FgeloSlsnnhzRftaSTVsd4X3glDDlEQg205RzP1vjj1hKxN1Xep0in4TIHIegJMt+3pnirMSp6xqNYNY9pQ6RuPUYf+H+78Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hX3IZ+BX9MUB9mUwmLhy1WSZc8H8kV7aXxWovIdiUf4=;
 b=OopyCgnRCQ5fggPNMuCIP0D2mkjxay6bePUzcgqQWypQYEuhJ2s7GckVAzsUS4aXZaYhq5NJ3nk9VCVjERmIn31yR+iPIOQtQLgzdNs4bf3CySdOhN9Ive82+z3XsO1rOZo5wVcgmcJn+gDP6a5jqKn8quESuhJVMuGUtTEKnEYPviyWMHsB7PANaCtTPkzz6Vlt9fJpFmdyJRh10EdkyfWX/fAgprsCoYf9yj6k89sdHgn/EjNXPOgbX62d+dYTqdELqf7ECJUnnUPeYjRGYgjgpFonnvW4ZxGcEYZ1xaJeXjLajzqWsPNECNQW+xLr35vx8ReE7VHKcz/hkIe/GQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hX3IZ+BX9MUB9mUwmLhy1WSZc8H8kV7aXxWovIdiUf4=;
 b=YuWhwbhMrB3lqEfdd9Mb2abGQ3cSUeH6T9PMl/YaiHTfOgSdADPbxgJe9ALXhG9nivXj5JkLIC7nEftz/RO89JfC72kIHygUoG42vKd6fCgCZaYbMTwjEJLWoFO3Wz4vbjnvH9sIo93CLY7lELY5m7sPhex/aZxwgxURSVMkX+9++EtQbHaqbh+CgtLOfXQXvfAp6bIhuVnY/FqQJRtpo2+qn1E6YfYxOtVFnaVjsMgF8xyTvoV5e3PIK+aQutR9gOh8eQDXzurz40fIjlpygjbtVAhda/0tptx1OUE05UPWmqZMadUrA5Av3snPdXBGBQGfGOJN7hSy7isMF9I2Ag==
From: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
To: "sstabellini@kernel.org" <sstabellini@kernel.org>
CC: "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
        "vicooodin@gmail.com" <vicooodin@gmail.com>,
        "julien@xen.org"
	<julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Artem
 Mygaiev <Artem_Mygaiev@epam.com>,
        "committers@xenproject.org"
	<committers@xenproject.org>,
        "George.Dunlap@citrix.com"
	<George.Dunlap@citrix.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Xen Coding style and clang-format
Thread-Topic: Xen Coding style and clang-format
Thread-Index: 
 AQHWlwq4nKYEhMN38U+xmvwRsutq+amA8joAgAAHUgCAAXyKAIAAENkAgAlxpgCAAPgwAIAHK7QA
Date: Mon, 12 Oct 2020 14:37:58 +0000
Message-ID: <c391abc7b03b90496dea307ef9a8a08d94a862a9.camel@epam.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
	  <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
	  <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
	  <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
	  <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
	 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
	 <alpine.DEB.2.21.2010071750360.23978@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010071750360.23978@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.213.80]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5638ae55-bb32-4748-af3a-08d86ebc6a61
x-ms-traffictypediagnostic: AM6PR03MB4726:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM6PR03MB47266C6604CE3DF5F8AE88D6F2070@AM6PR03MB4726.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 YJxOKBVoxzQZZq67WYf6Dyd/C6c5S4hPEfFDG9Fi19zLpkbXqH7gXVtTUvr9BY7uQEgfMFimP/a4EhPVQ6G/SIJF3JTTW9p/WydBhpKWICpVTG8TI2+/KhKx8VUYtPfejshd3uAbcFCJF3mZqrKjZHHnHXuHWKhZKs6urRymf8CNjN7cprauL+JzZ6He75sdZuZK9gRa6Q+jsUe88H/8PNWR3TxksPGaNLlC5sKNDDGw6VMRdPnrCTehiaf3q1WGfbk7zV+ua2yJWgB24Uy4AuFg7vge3sV0KdYmw+c0KaRgwrWnDjOS1cZGrWzPrGJF
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM7PR03MB6531.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(376002)(396003)(366004)(346002)(136003)(6512007)(5660300002)(64756008)(478600001)(66446008)(186003)(36756003)(66946007)(2616005)(91956017)(76116006)(66556008)(66476007)(54906003)(316002)(6486002)(55236004)(86362001)(6916009)(4326008)(71200400001)(8676002)(6506007)(53546011)(26005)(8936002)(83380400001)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 7xIH4csoY4iQGoiqTRz7wg79MJyMt+1lTz1QrO67BT/IfnExRjnmaMBpG2cz2ihXkGgCnTGzKypjYaFQ18V8+ALdtiAfCRNiWjdUVinEJ1t2bEqiXbgPFUns36DBLRLIRUy/A6pVCx+p2S/5IAwxgcJdmlHf8VeTmMhh1CPnJky4FenFmh1CYGXOq6LZhHgP/0zhfLfklPFkd9gSL6AIMDacGpPEfS4ogXb2xakPiebyEUh61kfknVCIY9F7aJpYceTC40W+mUXmHR645AQ1NM+eDY3JQuOsPrGKjuUcHnNAAmtf93hjdQdWIbqLWKd5x09MzVHTATqK2R/I28hBsPLExijF8BO0VNjIRpOHgUg4KF3jFTiuAmQt0YB8uZV3Drbccie5W8I3vEE/dlN3iS/3Zx6g+TC2tTRoqVaDnV5Y8CzC8qKIvIaOnO702otv4tPdqcSBdOjbHdqDSfu7rCxgDmVHAUGSsUp10hpVLgmDH0AErTNhi82KcBOj3taWzbZYVQDWQ4P1uZlpO1Y/v2XWRhrMQjCfjC2zHYCxX+MZ8rjCuwxx272Axzo1pNL4DQsEiZmUHA32hLDArvOvkGVIPmV4rFrOrEqF1kWMFthWBPkOsWG2IoPa4YbJrObpU1YgtPZf+Dpl6Xs/PuV3zw==
Content-Type: text/plain; charset="utf-8"
Content-ID: <79899CFBF5C3374BAF6504613618A6BE@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM7PR03MB6531.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5638ae55-bb32-4748-af3a-08d86ebc6a61
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Oct 2020 14:37:58.9095
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: CR5/my7QflEltxp6qGobCY7sVXxhJSbEXa6YVO4ocq3FuxMpzYSX2HVAYiUhaK4yO+f2ql6URLJ/wITEafyaJwEiwU4Qew73LT+Cnrz3NUc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR03MB4726
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687
 definitions=2020-10-12_12:2020-10-12,2020-10-12 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 bulkscore=0
 mlxlogscore=999 priorityscore=1501 malwarescore=0 phishscore=0
 lowpriorityscore=0 mlxscore=0 impostorscore=0 adultscore=0 suspectscore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010120118

Hi all,

On Wed, 2020-10-07 at 18:07 -0700, Stefano Stabellini wrote:
> On Wed, 7 Oct 2020, Anastasiia Lukianenko wrote:
> > On Thu, 2020-10-01 at 10:06 +0000, George Dunlap wrote:
> > > > On Oct 1, 2020, at 10:06 AM, Anastasiia Lukianenko
> > > > <Anastasiia_Lukianenko@epam.com> wrote:
> > > >
> > > > Hi,
> > > >
> > > > On Wed, 2020-09-30 at 10:24 +0000, George Dunlap wrote:
> > > > > > On Sep 30, 2020, at 10:57 AM, Jan Beulich <jbeulich@suse.com>
> > > > > > wrote:
> > > > > >
> > > > > > On 30.09.2020 11:18, Anastasiia Lukianenko wrote:
> > > > > > > I would like to know your opinion on the following coding
> > > > > > > style cases.
> > > > > > > Which option do you think is correct?
> > > > > > > 1) Function prototype when the string length is longer than
> > > > > > > the allowed one
> > > > > > > -static int __init
> > > > > > > -acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
> > > > > > > -                             const unsigned long end)
> > > > > > > +static int __init acpi_parse_gic_cpu_interface(
> > > > > > > +    struct acpi_subtable_header *header, const unsigned long end)
> > > > > >
> > > > > > Both variants are deemed valid style, I think (same also goes
> > > > > > for function calls with this same problem). In fact you mix two
> > > > > > different style aspects together (placement of parameter
> > > > > > declarations and placement of return type etc) - for each
> > > > > > individually both forms are deemed acceptable, I think.
> > > > >
> > > > > If we’re going to have a tool go through and report (correct?)
> > > > > all these coding style things, it’s an opportunity to think if
> > > > > we want to add new coding style requirements (or change existing
> > > > > requirements).
> > > >
> > > > I am ready to discuss new requirements and implement them in rules
> > > > of the Xen Coding style checker.
> > >
> > > Thank you. :-)  But what I meant was: Right now we don’t require one
> > > approach or the other for this specific instance.  Do we want to
> > > choose one?
> > >
> > > I think in this case it makes sense to do the easiest thing.  If
> > > it’s easy to make the current tool accept both styles, let’s just do
> > > that for now.  If the tool currently forces you to choose one of the
> > > two styles, let’s choose one.
> > >
> > >  -George
> >
> > During the detailed study of the Xen checker and the Clang-Format
> > Style Options, it was found that this tool, unfortunately, is not so
> > flexible to allow the author to independently choose the formatting
> > style in situations that I described in the last letter. For example,
> > define code style:
> > -#define ALLREGS \
> > -    C(r0, r0_usr);   C(r1, r1_usr);   C(r2, r2_usr);   C(r3, r3_usr);   \
> > -    C(cpsr, cpsr)
> > +#define ALLREGS           \
> > +    C(r0, r0_usr);         \
> > +    C(r1, r1_usr);         \
> > +    C(r2, r2_usr);         \
> > There are also some inconsistencies in the formatting of the tool and
> > what is written in the Xen coding style rules. For example, the
> > comment format:
> > -    /* PC should be always a multiple of 4, as Xen is using ARM instruction set */
> > +    /* PC should be always a multiple of 4, as Xen is using ARM instruction set
> > +     */
> > I would like to draw your attention to the fact that the comment
> > behaves in this way, since the line length exceeds the allowable one.
> > The ReflowComments option is responsible for this format. It can be
> > turned off, but then the result will be:
> > ReflowComments=false:
> > /* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with plenty of information */
> >
> > ReflowComments=true:
> > /* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with plenty of
> >  * information */
>
> To me, the principal goal of the tool is to identify code style
> violations. Suggesting how to fix a violation is an added bonus but
> not strictly necessary.
>
> So, I think we definitely want the tool to report the following line
> as an error, because the line is too long:
>
> /* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with plenty of information */
>
> The suggestion on how to fix it is less important. Do we need to set
> ReflowComments=true if we want the tool to report the line as
> erroneous? I take that the answer is "yes"?
>
>
> > So I want to know if the community is ready to add new formatting
> > options and edit old ones. Below I will give examples of what
> > corrections the checker is currently making (the first variant in
> > each case is existing code and the second variant is formatted by
> > checker).
> > If they fit the standards, then I can document them in the coding
> > style. If not, then I try to configure the checker. But the idea is
> > that we need to choose one option that will be considered correct.
> >
> > 1) Function prototype when the string length is longer than the allowed
> > -static int __init
> > -acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
> > -                             const unsigned long end)
> > +static int __init acpi_parse_gic_cpu_interface(
> > +    struct acpi_subtable_header *header, const unsigned long end)
> > 2) Wrapping an operation to a new line when the string length is
> > longer than the allowed
> > -    status = acpi_get_table(ACPI_SIG_SPCR, 0,
> > -                            (struct acpi_table_header **)&spcr);
> > +    status =
> > +        acpi_get_table(ACPI_SIG_SPCR, 0, (struct acpi_table_header **)&spcr);
> > 3) Space after brackets
> > -    return ((char *) base + offset);
> > +    return ((char *)base + offset);
> > 4) Spaces in brackets in switch condition
> > -    switch ( domctl->cmd )
> > +    switch (domctl->cmd)
> > 5) Spaces in brackets in operation
> > -    imm = ( insn >> BRANCH_INSN_IMM_SHIFT ) & BRANCH_INSN_IMM_MASK;
> > +    imm = (insn >> BRANCH_INSN_IMM_SHIFT) & BRANCH_INSN_IMM_MASK;
> > 6) Spaces in brackets in return
> > -        return ( !sym->name[2] || sym->name[2] == '.' );
> > +        return (!sym->name[2] || sym->name[2] == '.');
> > 7) Space after sizeof
> > -    clean_and_invalidate_dcache_va_range(new_ptr, sizeof (*new_ptr) * len);
> > +    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr) * len);
> > 8) Spaces before comment if it’s on the same line
> > -    case R_ARM_MOVT_ABS: /* S + A */
> > +    case R_ARM_MOVT_ABS:    /* S + A */
> >
> > -    if ( tmp == 0UL )       /* Are any bits set? */
> > -        return result + size;   /* Nope. */
> > +    if ( tmp == 0UL )         /* Are any bits set? */
> > +        return result + size; /* Nope. */
> >
> > 9) Space after for_each_vcpu
> > -        for_each_vcpu(d, v)
> > +        for_each_vcpu (d, v)
> > 10) Spaces in declaration
> > -    union hsr hsr = { .bits = regs->hsr };
> > +    union hsr hsr = {.bits = regs->hsr};
>
> None of these points are particularly problematic to me. I think that
> some of them are good to have anyway, like 3) and 8). Some others are
> not great, in particular 1) and 2), and I would prefer to keep the
> current coding style for those, but I'd be certainly happy to make
> those changes anyway if we get a good code style checker in exchange :-)

Thank you for comments :)

If no one objects, I will soon prepare a version of the checker with
minor changes and additions to the coding style document.

Regards,
Anastasiia
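[Editor's note: the clang-format knobs this thread keeps circling around can be collected into a .clang-format sketch. The option names below are documented clang-format style options; the values are illustrative guesses only, not an agreed Xen configuration:]

```yaml
# Illustrative sketch only; not an agreed Xen configuration.
BasedOnStyle: LLVM
ColumnLimit: 79             # drives the "line too long" reports discussed
ReflowComments: false       # controls the comment-rewrapping behaviour shown
IndentWidth: 4
BreakBeforeBraces: Allman
SpacesInParentheses: true   # Xen's "if ( cond )" / "switch ( x )" style
SpaceAfterCStyleCast: false # "(char *)base", not "(char *) base"
ForEachMacros: ['for_each_vcpu']  # so "for_each_vcpu (d, v)" is kept
```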


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 15:22:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 15:22:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5968.15567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRzea-0002io-9i; Mon, 12 Oct 2020 15:22:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5968.15567; Mon, 12 Oct 2020 15:22:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kRzea-0002ih-6D; Mon, 12 Oct 2020 15:22:24 +0000
Received: by outflank-mailman (input) for mailman id 5968;
 Mon, 12 Oct 2020 15:22:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kRzeY-0002hh-TA
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 15:22:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b392582-e233-4c69-ae17-8cd4ca614e21;
 Mon, 12 Oct 2020 15:22:15 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRzeR-0000Yr-7E; Mon, 12 Oct 2020 15:22:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kRzeQ-0002Lp-Rs; Mon, 12 Oct 2020 15:22:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kRzeQ-0007bC-RM; Mon, 12 Oct 2020 15:22:14 +0000
X-Inumbo-ID: 7b392582-e233-4c69-ae17-8cd4ca614e21
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=slRJAQ5VemP4yB7jSTBIGQ97dhkD43c2XaG/aDzxgfw=; b=kwMXU4zorHSg/mNliIvHLBh9Pp
	dldUPUpk1aMvZd9WzwN0Lg6BS3UKBBzaHgUiFkv3WO9oQRBBzkqP1Bopd+Kg7CPJhQWYUpQeQIUeu
	dHWmsq2IXfewxJ3VF4h0N1AjqCpKvDauLbcF9RiIuftPcop3UqlwIF0UWKo+iEogiEig=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155717-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155717: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=bbf5c979011a099af5dc76498918ed7df445635b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 15:22:14 +0000

flight 155717 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155717/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                bbf5c979011a099af5dc76498918ed7df445635b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   72 days
Failing since        152366  2020-08-01 20:49:34 Z   71 days  121 attempts
Testing same since   155717  2020-10-12 02:12:48 Z    0 days    1 attempts

------------------------------------------------------------
2513 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 340006 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 16:20:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 16:20:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5973.15583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS0YC-0008PY-LZ; Mon, 12 Oct 2020 16:19:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5973.15583; Mon, 12 Oct 2020 16:19:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS0YC-0008PR-I7; Mon, 12 Oct 2020 16:19:52 +0000
Received: by outflank-mailman (input) for mailman id 5973;
 Mon, 12 Oct 2020 16:19:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xHRm=DT=kernel.org=ebiggers@srs-us1.protection.inumbo.net>)
 id 1kS0YA-0008PM-UH
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 16:19:51 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb530395-e65e-489f-a32e-fbc03bbd904f;
 Mon, 12 Oct 2020 16:19:50 +0000 (UTC)
Received: from sol.localdomain (172-10-235-113.lightspeed.sntcca.sbcglobal.net
 [172.10.235.113])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 22B302080A;
 Mon, 12 Oct 2020 16:19:48 +0000 (UTC)
X-Inumbo-ID: eb530395-e65e-489f-a32e-fbc03bbd904f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602519589;
	bh=FXrMfe87r7In01hy1fZxNUDLVXbtP/5TJ3+XTzWq1o4=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=V7DSrccLr5b4UIwMLihtwG0wHPpEpEeCdL4DEsryZDNbDbBe/031RRLLs/mkasjte
	 GkozOXCGlriE75ewyNE/y/+1/YN0mvEXF3Fx+zkSk5bqABC19TCsH57zxXZt4yoDfc
	 Gd/Q7Kn6Oc4wJMc886CZx9bFLur2svNEjPDcQyvk=
Date: Mon, 12 Oct 2020 09:19:46 -0700
From: Eric Biggers <ebiggers@kernel.org>
To: Ira Weiny <ira.weiny@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201012161946.GA858@sol.localdomain>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
 <20201010003954.GW20115@casper.infradead.org>
 <20201010013036.GD1122@sol.localdomain>
 <20201012065635.GB2046448@iweiny-DESK2.sc.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201012065635.GB2046448@iweiny-DESK2.sc.intel.com>

On Sun, Oct 11, 2020 at 11:56:35PM -0700, Ira Weiny wrote:
> > 
> > And I still don't really understand.  After this patchset, there is still code
> > nearly identical to the above (doing a temporary mapping just for a memcpy) that
> > would still be using kmap_atomic().
> 
> I don't understand.  You mean there would be other call sites calling:
> 
> kmap_atomic()
> memcpy()
> kunmap_atomic()

Yes, there are tons of places that do this.  Try 'git grep -A6 kmap_atomic'
and look for memcpy().
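
The pattern in question, at many of those call sites, looks roughly
like the following (a schematic sketch only, not taken from any
specific file; the helper name is made up for illustration):

    /*
     * Short-lived atomic mapping created only for the duration of a
     * memcpy() -- the shape 'git grep -A6 kmap_atomic' turns up all
     * over the tree.
     */
    static void copy_out_of_page(void *dst, struct page *page,
                                 size_t offset, size_t len)
    {
            void *src = kmap_atomic(page);

            memcpy(dst, src + offset, len);
            kunmap_atomic(src);
    }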

Hence I'm asking what the "recommended" way to do this will be...
kmap_thread() or kmap_atomic()?

> And since I don't know the call site details if there are kmap_thread() calls
> which are better off as kmap_atomic() calls I think it is worth converting
> them.  But I made the assumption that kmap users would already be calling
> kmap_atomic() if they could (because it is more efficient).

Not necessarily.  In cases where either one is correct, people might not have
put much thought into which of kmap() and kmap_atomic() they are using.

- Eric


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 16:28:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 16:28:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5975.15595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS0gi-00013r-IT; Mon, 12 Oct 2020 16:28:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5975.15595; Mon, 12 Oct 2020 16:28:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS0gi-00013k-Ew; Mon, 12 Oct 2020 16:28:40 +0000
Received: by outflank-mailman (input) for mailman id 5975;
 Mon, 12 Oct 2020 16:28:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZN0g=DT=intel.com=dave.hansen@srs-us1.protection.inumbo.net>)
 id 1kS0gh-00013f-Ap
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 16:28:39 +0000
Received: from mga12.intel.com (unknown [192.55.52.136])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 13e2cb20-ed7c-45bb-aec9-454ced67d97e;
 Mon, 12 Oct 2020 16:28:36 +0000 (UTC)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 12 Oct 2020 09:28:31 -0700
Received: from soumyaka-mobl.amr.corp.intel.com (HELO [10.212.101.39])
 ([10.212.101.39])
 by fmsmga003-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 12 Oct 2020 09:28:30 -0700
X-Inumbo-ID: 13e2cb20-ed7c-45bb-aec9-454ced67d97e
IronPort-SDR: vdxXv3+BQUJxWsYqBivlXKlWk0gKRqXvswdKhy7HyaHwlCMl2dtYEd6R1ARXVzUvNntcJf7i/i
 MT7rSctDtYgQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="145086225"
X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; 
   d="scan'208";a="145086225"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: lpVqfUJ9kdOGINFzSz3Rb/+voVruUqKl5QXBieW8JLewmkz3mmd/w7cERQhz4kzL6b3gTh7T5C
 5oHwexl8ugkg==
X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; 
   d="scan'208";a="355847059"
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
To: Eric Biggers <ebiggers@kernel.org>, Ira Weiny <ira.weiny@intel.com>
Cc: Matthew Wilcox <willy@infradead.org>,
 Andrew Morton <akpm@linux-foundation.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
 linux-efi@vger.kernel.org, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-mmc@vger.kernel.org, Dave Hansen <dave.hansen@linux.intel.com>,
 dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
 target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
 ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
 devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
 linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
 linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org, x86@kernel.org,
 amd-gfx@lists.freedesktop.org, linux-afs@lists.infradead.org,
 cluster-devel@redhat.com, linux-cachefs@redhat.com,
 intel-wired-lan@lists.osuosl.org, xen-devel@lists.xenproject.org,
 linux-ext4@vger.kernel.org, Fenghua Yu <fenghua.yu@intel.com>,
 ecryptfs@vger.kernel.org, linux-um@lists.infradead.org,
 intel-gfx@lists.freedesktop.org, linux-erofs@lists.ozlabs.org,
 reiserfs-devel@vger.kernel.org, linux-block@vger.kernel.org,
 linux-bcache@vger.kernel.org, Jaegeuk Kim <jaegeuk@kernel.org>,
 Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
 linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
 netdev@vger.kernel.org, kexec@lists.infradead.org,
 linux-kernel@vger.kernel.org, linux-f2fs-devel@lists.sourceforge.net,
 linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
 <20201010003954.GW20115@casper.infradead.org>
 <20201010013036.GD1122@sol.localdomain>
 <20201012065635.GB2046448@iweiny-DESK2.sc.intel.com>
 <20201012161946.GA858@sol.localdomain>
From: Dave Hansen <dave.hansen@intel.com>
Autocrypt: addr=dave.hansen@intel.com; keydata=
 xsFNBE6HMP0BEADIMA3XYkQfF3dwHlj58Yjsc4E5y5G67cfbt8dvaUq2fx1lR0K9h1bOI6fC
 oAiUXvGAOxPDsB/P6UEOISPpLl5IuYsSwAeZGkdQ5g6m1xq7AlDJQZddhr/1DC/nMVa/2BoY
 2UnKuZuSBu7lgOE193+7Uks3416N2hTkyKUSNkduyoZ9F5twiBhxPJwPtn/wnch6n5RsoXsb
 ygOEDxLEsSk/7eyFycjE+btUtAWZtx+HseyaGfqkZK0Z9bT1lsaHecmB203xShwCPT49Blxz
 VOab8668QpaEOdLGhtvrVYVK7x4skyT3nGWcgDCl5/Vp3TWA4K+IofwvXzX2ON/Mj7aQwf5W
 iC+3nWC7q0uxKwwsddJ0Nu+dpA/UORQWa1NiAftEoSpk5+nUUi0WE+5DRm0H+TXKBWMGNCFn
 c6+EKg5zQaa8KqymHcOrSXNPmzJuXvDQ8uj2J8XuzCZfK4uy1+YdIr0yyEMI7mdh4KX50LO1
 pmowEqDh7dLShTOif/7UtQYrzYq9cPnjU2ZW4qd5Qz2joSGTG9eCXLz5PRe5SqHxv6ljk8mb
 ApNuY7bOXO/A7T2j5RwXIlcmssqIjBcxsRRoIbpCwWWGjkYjzYCjgsNFL6rt4OL11OUF37wL
 QcTl7fbCGv53KfKPdYD5hcbguLKi/aCccJK18ZwNjFhqr4MliQARAQABzShEYXZpZCBDaHJp
 c3RvcGhlciBIYW5zZW4gPGRhdmVAc3I3MS5uZXQ+wsF7BBMBAgAlAhsDBgsJCAcDAgYVCAIJ
 CgsEFgIDAQIeAQIXgAUCTo3k0QIZAQAKCRBoNZUwcMmSsMO2D/421Xg8pimb9mPzM5N7khT0
 2MCnaGssU1T59YPE25kYdx2HntwdO0JA27Wn9xx5zYijOe6B21ufrvsyv42auCO85+oFJWfE
 K2R/IpLle09GDx5tcEmMAHX6KSxpHmGuJmUPibHVbfep2aCh9lKaDqQR07gXXWK5/yU1Dx0r
 VVFRaHTasp9fZ9AmY4K9/BSA3VkQ8v3OrxNty3OdsrmTTzO91YszpdbjjEFZK53zXy6tUD2d
 e1i0kBBS6NLAAsqEtneplz88T/v7MpLmpY30N9gQU3QyRC50jJ7LU9RazMjUQY1WohVsR56d
 ORqFxS8ChhyJs7BI34vQusYHDTp6PnZHUppb9WIzjeWlC7Jc8lSBDlEWodmqQQgp5+6AfhTD
 kDv1a+W5+ncq+Uo63WHRiCPuyt4di4/0zo28RVcjtzlGBZtmz2EIC3vUfmoZbO/Gn6EKbYAn
 rzz3iU/JWV8DwQ+sZSGu0HmvYMt6t5SmqWQo/hyHtA7uF5Wxtu1lCgolSQw4t49ZuOyOnQi5
 f8R3nE7lpVCSF1TT+h8kMvFPv3VG7KunyjHr3sEptYxQs4VRxqeirSuyBv1TyxT+LdTm6j4a
 mulOWf+YtFRAgIYyyN5YOepDEBv4LUM8Tz98lZiNMlFyRMNrsLV6Pv6SxhrMxbT6TNVS5D+6
 UorTLotDZKp5+M7BTQRUY85qARAAsgMW71BIXRgxjYNCYQ3Xs8k3TfAvQRbHccky50h99TUY
 sqdULbsb3KhmY29raw1bgmyM0a4DGS1YKN7qazCDsdQlxIJp9t2YYdBKXVRzPCCsfWe1dK/q
 66UVhRPP8EGZ4CmFYuPTxqGY+dGRInxCeap/xzbKdvmPm01Iw3YFjAE4PQ4hTMr/H76KoDbD
 cq62U50oKC83ca/PRRh2QqEqACvIH4BR7jueAZSPEDnzwxvVgzyeuhwqHY05QRK/wsKuhq7s
 UuYtmN92Fasbxbw2tbVLZfoidklikvZAmotg0dwcFTjSRGEg0Gr3p/xBzJWNavFZZ95Rj7Et
 db0lCt0HDSY5q4GMR+SrFbH+jzUY/ZqfGdZCBqo0cdPPp58krVgtIGR+ja2Mkva6ah94/oQN
 lnCOw3udS+Eb/aRcM6detZr7XOngvxsWolBrhwTQFT9D2NH6ryAuvKd6yyAFt3/e7r+HHtkU
 kOy27D7IpjngqP+b4EumELI/NxPgIqT69PQmo9IZaI/oRaKorYnDaZrMXViqDrFdD37XELwQ
 gmLoSm2VfbOYY7fap/AhPOgOYOSqg3/Nxcapv71yoBzRRxOc4FxmZ65mn+q3rEM27yRztBW9
 AnCKIc66T2i92HqXCw6AgoBJRjBkI3QnEkPgohQkZdAb8o9WGVKpfmZKbYBo4pEAEQEAAcLB
 XwQYAQIACQUCVGPOagIbDAAKCRBoNZUwcMmSsJeCEACCh7P/aaOLKWQxcnw47p4phIVR6pVL
 e4IEdR7Jf7ZL00s3vKSNT+nRqdl1ugJx9Ymsp8kXKMk9GSfmZpuMQB9c6io1qZc6nW/3TtvK
 pNGz7KPPtaDzvKA4S5tfrWPnDr7n15AU5vsIZvgMjU42gkbemkjJwP0B1RkifIK60yQqAAlT
 YZ14P0dIPdIPIlfEPiAWcg5BtLQU4Wg3cNQdpWrCJ1E3m/RIlXy/2Y3YOVVohfSy+4kvvYU3
 lXUdPb04UPw4VWwjcVZPg7cgR7Izion61bGHqVqURgSALt2yvHl7cr68NYoFkzbNsGsye9ft
 M9ozM23JSgMkRylPSXTeh5JIK9pz2+etco3AfLCKtaRVysjvpysukmWMTrx8QnI5Nn5MOlJj
 1Ov4/50JY9pXzgIDVSrgy6LYSMc4vKZ3QfCY7ipLRORyalFDF3j5AGCMRENJjHPD6O7bl3Xo
 4DzMID+8eucbXxKiNEbs21IqBZbbKdY1GkcEGTE7AnkA3Y6YB7I/j9mQ3hCgm5muJuhM/2Fr
 OPsw5tV/LmQ5GXH0JQ/TZXWygyRFyyI2FqNTx4WHqUn3yFj8rwTAU1tluRUYyeLy0ayUlKBH
 ybj0N71vWO936MqP6haFERzuPAIpxj2ezwu0xb1GjTk4ynna6h5GjnKgdfOWoRtoWndMZxbA
 z5cecg==
Message-ID: <5d621db9-23d4-e140-45eb-d7fca2093d2b@intel.com>
Date: Mon, 12 Oct 2020 09:28:29 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201012161946.GA858@sol.localdomain>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10/12/20 9:19 AM, Eric Biggers wrote:
> On Sun, Oct 11, 2020 at 11:56:35PM -0700, Ira Weiny wrote:
>>> And I still don't really understand.  After this patchset, there is still code
>>> nearly identical to the above (doing a temporary mapping just for a memcpy) that
>>> would still be using kmap_atomic().
>> I don't understand.  You mean there would be other call sites calling:
>>
>> kmap_atomic()
>> memcpy()
>> kunmap_atomic()
> Yes, there are tons of places that do this.  Try 'git grep -A6 kmap_atomic'
> and look for memcpy().
> 
> Hence why I'm asking what will be the "recommended" way to do this...
> kunmap_thread() or kmap_atomic()?

kmap_atomic() is always preferred over kmap()/kmap_thread().
kmap_atomic() is _much_ more lightweight since its TLB invalidation is
always CPU-local and never broadcast.

So, basically, unless you *must* sleep while the mapping is in place,
kmap_atomic() is preferred.
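
For reference, the temporary-mapping pattern the thread keeps coming back to looks roughly like this. This is a minimal kernel-context sketch (the function name, `page`, `offset`, `len`, and `buf` are hypothetical), not code from the patchset:

```c
#include <linux/highmem.h>
#include <linux/string.h>

/*
 * Sketch: copy len bytes out of a highmem page via a temporary mapping.
 * kmap_atomic() disables preemption and maps the page into a per-CPU
 * fixmap slot, so the TLB invalidation on kunmap_atomic() is CPU-local
 * and never broadcast -- but the code may not sleep until the unmap.
 */
static void copy_from_page(char *buf, struct page *page,
			   size_t offset, size_t len)
{
	char *vaddr = kmap_atomic(page);	/* no sleeping from here... */

	memcpy(buf, vaddr + offset, len);
	kunmap_atomic(vaddr);			/* ...until here */
}
```

If the code between map and unmap must be able to sleep, this is exactly the case where kmap()/kmap_thread() would be used instead.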



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 16:43:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 16:43:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5981.15609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS0uY-0002sd-3D; Mon, 12 Oct 2020 16:42:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5981.15609; Mon, 12 Oct 2020 16:42:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS0uY-0002sQ-04; Mon, 12 Oct 2020 16:42:58 +0000
Received: by outflank-mailman (input) for mailman id 5981;
 Mon, 12 Oct 2020 16:42:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kS0uX-0002qT-2d
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 16:42:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c8fce09-eddc-4c5e-82b1-43e82fb055a6;
 Mon, 12 Oct 2020 16:42:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS0uM-0002i3-TQ; Mon, 12 Oct 2020 16:42:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS0uM-0005S4-Mv; Mon, 12 Oct 2020 16:42:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kS0uM-0005Oc-MP; Mon, 12 Oct 2020 16:42:46 +0000
X-Inumbo-ID: 6c8fce09-eddc-4c5e-82b1-43e82fb055a6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8qGTKS8c+7gyTPpMTIPIEenAqwjIx62/0RqzdpH+uug=; b=bD/cF4KhT4FPHnWKsoW+8OpFmz
	QcsNhSio5RsnUHp5sa6iCS6xcPC5f5pK86A9sG0MG5K7d02wdwRuIjyRHzQO50yWqDxJ6VlASDunq
	vy70k1ihyTIOwCKYGj8BR38n24iDzd9XDzQf2hWAVxLqFVA5qhj3pKapO0lERDBAe40o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155729-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155729: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=48a340d9b23ffcf7704f2de14d1e505481a84a1c
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 16:42:46 +0000

flight 155729 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155729/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                48a340d9b23ffcf7704f2de14d1e505481a84a1c
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   53 days
Failing since        152659  2020-08-21 14:07:39 Z   52 days   89 attempts
Testing same since   155703  2020-10-11 19:37:50 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 42895 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 16:45:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 16:45:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5983.15620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS0wh-00038l-Fu; Mon, 12 Oct 2020 16:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5983.15620; Mon, 12 Oct 2020 16:45:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS0wh-00038e-Cz; Mon, 12 Oct 2020 16:45:11 +0000
Received: by outflank-mailman (input) for mailman id 5983;
 Mon, 12 Oct 2020 16:45:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0yg=DT=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kS0we-00038Z-TC
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 16:45:09 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0984f5af-c751-45e3-ae05-62bd9db96cb4;
 Mon, 12 Oct 2020 16:45:05 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kS0wA-0004gO-8Q; Mon, 12 Oct 2020 16:44:38 +0000
X-Inumbo-ID: 0984f5af-c751-45e3-ae05-62bd9db96cb4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=l9dol8BjF52rFe8mzz85c9RZmeAYDZ2M1zPXpY3Bxec=; b=cjGmRDHB+K+xo1zaI46uR/JWZM
	mu6b6OfRoAYfgdNA5Kf5Iurex6D1FcBZ+mRQhj12vLi3isoy7f1JrMIQOIfa61TejqqWEwpL43yKb
	2mAeiG7QIg8Vb+ajA0gepoKbc6o17WQEzV+UWJKTyQWQStoFHb/kNJEfYbWmPc27vxrcwV1GpTL/g
	cqcPit9vRB3f1Zs6upmREd44qhzYUWIO5sf13vXmWctx364S7GYQlJM4ZaGSBTPIwUKM8imQUHEl+
	YU7Gj26Vo02zH0C4u7a3/EscApIpoLOe+KQejmiwTcRHTR/bJWdX/slhogDIyA7BDVBafjowKFz4j
	1dyYxPzg==;
Date: Mon, 12 Oct 2020 17:44:38 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Dave Hansen <dave.hansen@intel.com>
Cc: Eric Biggers <ebiggers@kernel.org>, Ira Weiny <ira.weiny@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201012164438.GA20115@casper.infradead.org>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
 <20201010003954.GW20115@casper.infradead.org>
 <20201010013036.GD1122@sol.localdomain>
 <20201012065635.GB2046448@iweiny-DESK2.sc.intel.com>
 <20201012161946.GA858@sol.localdomain>
 <5d621db9-23d4-e140-45eb-d7fca2093d2b@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5d621db9-23d4-e140-45eb-d7fca2093d2b@intel.com>

On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> kmap_atomic() is always preferred over kmap()/kmap_thread().
> kmap_atomic() is _much_ more lightweight since its TLB invalidation is
> always CPU-local and never broadcast.
> 
> So, basically, unless you *must* sleep while the mapping is in place,
> kmap_atomic() is preferred.

But kmap_atomic() disables preemption, so the _ideal_ interface would map
it only locally, then on preemption make it global.  I don't even know
if that _can_ be done.  But this email makes it seem like kmap_atomic()
has no downsides.


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 16:54:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 16:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5987.15634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS15H-0004EN-HY; Mon, 12 Oct 2020 16:54:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5987.15634; Mon, 12 Oct 2020 16:54:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS15H-0004EG-Ed; Mon, 12 Oct 2020 16:54:03 +0000
Received: by outflank-mailman (input) for mailman id 5987;
 Mon, 12 Oct 2020 16:54:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kS15G-0004Dh-6m
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 16:54:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0add593-c459-49ba-92f6-0bac9f370d27;
 Mon, 12 Oct 2020 16:53:55 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS158-0002ve-QQ; Mon, 12 Oct 2020 16:53:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS158-00067R-Hd; Mon, 12 Oct 2020 16:53:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kS158-0003ye-H7; Mon, 12 Oct 2020 16:53:54 +0000
X-Inumbo-ID: e0add593-c459-49ba-92f6-0bac9f370d27
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NUay4wAXNlNh4pk1n/OC1VcWKI59uK1zXu6gdyWSUyU=; b=y8coeQ/Afrx4nN6FsbBclLbuCo
	pBgZtxnlGisUzn/qXSSmwvFNtD+NYklv73PSeNoViKZFsuigOUL5dlRRq5pyb2z9BYU61S3bdcJq8
	4uIj5RKDJlYA6jygFFxqDu7ml/7lxj31X3hcHow5/PPD2deBq/LKoew+aZQRDvXFFz1c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155734-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155734: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 16:53:54 +0000

flight 155734 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155734/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    3 days
Failing since        155612  2020-10-09 18:01:22 Z    2 days   21 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
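
[Editorial illustration: the section layout described in the commit above would be produced with objcopy after the normal build. The input file names below are placeholders; only the section names (.config, .kernel, .ramdisk, .xsm) come from the patch description.]

```shell
# Sketch: bundle the components into xen.efi as named PE sections.
# File names here (xen.cfg, vmlinuz, initrd.img, xsm.cfg) are
# placeholders; the section names are the ones the patch looks up
# in the loaded image at EFI boot.
objcopy --add-section .config=xen.cfg \
        --add-section .kernel=vmlinuz \
        --add-section .ramdisk=initrd.img \
        --add-section .xsm=xsm.cfg \
        xen.efi xen.unified.efi
```

The resulting single file can then be signed once for UEFI Secure Boot, per the commit message.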

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Used explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 18:09:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 18:09:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5994.15651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS2GC-0002py-So; Mon, 12 Oct 2020 18:09:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5994.15651; Mon, 12 Oct 2020 18:09:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS2GC-0002pr-Po; Mon, 12 Oct 2020 18:09:24 +0000
Received: by outflank-mailman (input) for mailman id 5994;
 Mon, 12 Oct 2020 18:09:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+yFJ=DT=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kS2GC-0002pm-0y
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 18:09:24 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed24f8b5-8a10-48c2-9a9b-90418d99e488;
 Mon, 12 Oct 2020 18:09:21 +0000 (UTC)
X-Inumbo-ID: ed24f8b5-8a10-48c2-9a9b-90418d99e488
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602526161;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=mQtuHwexDo0BOb9pxNFmUuM+i03PpgjzgSLPCeutbVI=;
  b=hsstIH/EAvi3O0unzbDuzCPwUK9vlyL9WQ3g0BhIljsn8ccCjkocHHbZ
   jH5sfnvhexvvqCx2IKrpkAduYOzte7Sk6zOYAmJ+P3roeC8rUVHKu4yQe
   uByomZO2zkPMGSKkjPV+QWgPSYMzNMOeyF6R6n14FPE/mlL824ghluOjX
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: AZKTaq3vzAG+tcfIc6utVleQaA9fejLxDWKWR1JR3FskM0RGm0K8PbII58GUqtNMVVmhdDmkm3
 X2fOOfhEbDS6u7Ci5LDuCQIuQUVOe0bt4O/yvZcmZqWePrIYWrr0Awj3LihQ1+SdUO6RZ5QfCI
 ORQUTPWEsiS48AgpqZpZEhI5Mk1+O+IXMwM1SDuYwyX9V2Fg+OPpHnIOFd51ipBcfVBdZkOPpL
 6VTC0LF4ioxS3nGmD1+VnYqYU7gsm9Zl3FM2LuK79mUnneTfNPyC3j7FGlUcnHiG0T0Yqs7pqp
 6r4=
X-SBRS: 2.5
X-MesageID: 29858757
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,367,1596513600"; 
   d="scan'208";a="29858757"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hnugOpaBW8NRAgAWLOmQdxS4SRl+TBYU3Jkc3sIzeicU9+NqQlbdV/dNXX0tEjgHTcVgXi+QMvNAa7WBth7D/duBOxGRRdKx0Wtoa5E96Nzc+zGPTDxmQNqpz3qvvy6XpRGYHtqFf+tgVBtyA8svoQqoQrYQBTu8U6szmbHdyMpnOWq6BOfP0a7ZFii/ZZ2jACcUpi06MNaj4bx+UYHuh3Tl2xB1BgLZjJYUNQ9Ni3LBRUJnfUqq4UVauSPug2MParHhsxDkUONJq8Ej6olabzoV3Q+b4kXMVG0yjx9TnQg3BapN2mwnmp132t45ZHY8G4zlqkY0WJ/33JlSHv2AMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mQtuHwexDo0BOb9pxNFmUuM+i03PpgjzgSLPCeutbVI=;
 b=kkxu7cjAlUAuOD0bjyDf8apBIu8Ik4uMmOLt8Ag9F8vIGVVUHj8AnY8gCvtkOTzmMI+oo30DPJXoomLgr4k//wzJX6Xmcb57qMVVHE9/OVXqPnl2umI12dfae9O5HtBCJ6wO8xygTL6EphpO6VpPA10m0tE4DXWEQkV1Yc8qKhvExWqtOcPChW4zzmEI7Tqkwj3oyYJUKGtAaTibmn2P2lcSxnjjUzK5SosBU5H2pJhaP4gdesiWRSAQC1rXAdyEUNLq4xFjfSxWJoqJXrXPSj4Sgng+wBclcWzKO0oc9k/f4G87fR4xldvPPlVJyKBWEIArUAMooDsiZUsFqbFrBw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mQtuHwexDo0BOb9pxNFmUuM+i03PpgjzgSLPCeutbVI=;
 b=tSEXOWhAnVOAyUZ5gw1C6Mz0AXcd/LytYitf1dRPLQrJXYwq7smttVifnnp5X3M+oi7APXlcFApzkZHjWYZXxJHuFLSr6Ju5ywHQGYQLtyalTTU+Yh83ypRifPJ3yVH7RLLb8EJ5xdtCUD+o6neldNjt3eWnkjqdgm8aSPzf1+A=
From: George Dunlap <George.Dunlap@citrix.com>
To: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
CC: "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
	"vicooodin@gmail.com" <vicooodin@gmail.com>, "julien@xen.org"
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, "Artem
 Mygaiev" <Artem_Mygaiev@epam.com>, "committers@xenproject.org"
	<committers@xenproject.org>, "jbeulich@suse.com" <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Xen Coding style and clang-format
Thread-Topic: Xen Coding style and clang-format
Thread-Index: AQHWlwq4nKYEhMN38U+xmvwRsutq+amA8joAgAAHUgCAAXyKAIAAENkAgAlxpgCACF7sgA==
Date: Mon, 12 Oct 2020 18:09:16 +0000
Message-ID: <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
 <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
In-Reply-To: <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.1)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d550360f-7040-4a24-e1fd-08d86ed9ef09
x-ms-traffictypediagnostic: BYAPR03MB3704:
x-microsoft-antispam-prvs: <BYAPR03MB37048EFF860E58862187077199070@BYAPR03MB3704.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 3QnAzL7b3XKEMDW0DtnDr7kM/MnlWKAbX0hgXz50txPERW5y8WJoE+69Pk8ovEN0uxhu98I8RFt2e20OAvt7DYB8tlNw5OmIs7kZAZ3A4PQi+s4nOilA9pLoUnV213kGjER9V23T0y31Sq8OXtZwfTpnDfnjcofnnFC5d827UhIQSd0eHeWjkA7/mXIyv0IJjEdaB6P1Y+THfgK93QoVvB7SgwaJw2+VjjGKGF0VlG3f7Mm7rppYH5I0tC1kdG2RwUOOIeTw3LAFNRnm5vASfhbU6nLMhvmRt1gElke8bkKgKB33oiOm/SsGKv2vNTcA
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(376002)(136003)(39860400002)(366004)(8936002)(6512007)(2616005)(186003)(36756003)(6916009)(33656002)(6506007)(71200400001)(478600001)(86362001)(53546011)(55236004)(2906002)(4326008)(66946007)(6486002)(5660300002)(76116006)(91956017)(54906003)(316002)(83380400001)(8676002)(26005)(66556008)(66446008)(66476007)(64756008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: FOTcVJiutNhrRoBZW7jJR7ITrvfPIkn0+ZP0R49nhHTiZcG2XXIIps9s3ZDhjzq80Cut2UKdBtdmb45sxyQu1R946E5ZMhN3ZQZFwPiJ/uPMqHViAMANPrnU0kOVIHswFKT+YSb9+8sPc0DQ9f35NxNA7aniPmtZWCyo54tOYqUP0F1Z8lyY3u/brOD4fLD7eqCKkCdMzNHuqH83kArkp9J0udPzUwe6c+xqdgU7RSXIWHaitHV5PVed5llrGFg4Iafry4Dsujm1RTaISknNET9jPvqehmQwfvy2oq9tpsoeRuQMqhQU+Yd1bkmYO8f6anzjj00lRjmeWpeTueNaHaqiHpCxOtCB7LccNTTAeAsZ35PSQ0skzk9WIoy03QEiuUyPRMBICnOAKIW2J+u71irjwk64IGIr88DmY+2bLhM0Wb4oum3+7raXyRxMaCZeCR/QBQKieEe53FmzeLG6VTxbNAo4gVET5u0aCHaguJMyzRFj/qGHXgHaN7zASfGCAZbWOZGGpJVGYemlv57BY+UATngPFLrCD/cbzk6A4IXjYYSTuJjB4mnNTkFCbpxVcDVd4RMBsjbEEGpOODvznfGshWSATX+NfDWmjqMyVGwPInnRR1j0Q5CnoFF9SeUYIl0nfc80EmvPkS4FTyka/Q==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <B488EBF62511D8449F731B44DA784BF5@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d550360f-7040-4a24-e1fd-08d86ed9ef09
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Oct 2020 18:09:16.7947
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: dFi/PAiKYIqlhsLvUOUm8PCmwImLBdTaa4KIJpLdMG1ZKzKe7E39SCY1wpDLKhX22saxZx1qCPyunn4bCDq/BW7/+YO9yzKslWJ3i+wQyrU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3704
X-OriginatorOrg: citrix.com



> On Oct 7, 2020, at 11:19 AM, Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com> wrote:
> 
> Hi all,
> 
> On Thu, 2020-10-01 at 10:06 +0000, George Dunlap wrote:
>>> On Oct 1, 2020, at 10:06 AM, Anastasiia Lukianenko <
>>> Anastasiia_Lukianenko@epam.com> wrote:
>>> 
>>> Hi,
>>> 
>>> On Wed, 2020-09-30 at 10:24 +0000, George Dunlap wrote:
>>>>> On Sep 30, 2020, at 10:57 AM, Jan Beulich <jbeulich@suse.com>
>>>>> wrote:
>>>>> 
>>>>> On 30.09.2020 11:18, Anastasiia Lukianenko wrote:
>>>>>> I would like to know your opinion on the following coding
>>>>>> style
>>>>>> cases.
>>>>>> Which option do you think is correct?
>>>>>> 1) Function prototype when the string length is longer than
>>>>>> the
>>>>>> allowed
>>>>>> one
>>>>>> -static int __init
>>>>>> -acpi_parse_gic_cpu_interface(struct acpi_subtable_header
>>>>>> *header,
>>>>>> -                             const unsigned long end)
>>>>>> +static int __init acpi_parse_gic_cpu_interface(
>>>>>> +    struct acpi_subtable_header *header, const unsigned long
>>>>>> end)
>>>>> 
>>>>> Both variants are deemed valid style, I think (same also goes
>>>>> for
>>>>> function calls with this same problem). In fact you mix two
>>>>> different style aspects together (placement of parameter
>>>>> declarations and placement of return type etc) - for each
>>>>> individually both forms are deemed acceptable, I think.
>>>> 
>>>> If we’re going to have a tool go through and report (correct?)
>>>> all
>>>> these coding style things, it’s an opportunity to think if we
>>>> want to
>>>> add new coding style requirements (or change existing
>>>> requirements).
>>>> 
>>> 
>>> I am ready to discuss new requirements and implement them in rules
>>> of
>>> the Xen Coding style checker.
>> 
>> Thank you. :-)  But what I meant was: Right now we don’t require one
>> approach or the other for this specific instance.  Do we want to
>> choose one?
>> 
>> I think in this case it makes sense to do the easiest thing.  If it’s
>> easy to make the current tool accept both styles, let’s just do that
>> for now.  If the tool currently forces you to choose one of the two
>> styles, let’s choose one.
>> 
>> -George
> 
> During the detailed study of the Xen checker and the Clang-Format Style
> Options, it was found that this tool, unfortunately, is not so flexible
> to allow the author to independently choose the formatting style in
> situations that I described in the last letter. For example define code
> style:
> -#define ALLREGS \
> -    C(r0, r0_usr);   C(r1, r1_usr);   C(r2, r2_usr);   C(r3,
> r3_usr);   \
> -    C(cpsr, cpsr)
> +#define ALLREGS            \
> +    C(r0, r0_usr);         \
> +    C(r1, r1_usr);         \
> +    C(r2, r2_usr);         \
> There are also some inconsistencies in the formatting of the tool and
> what is written in the hyung coding style rules. For example, the
> comment format:
> -    /* PC should be always a multiple of 4, as Xen is using ARM
> instruction set */
> +    /* PC should be always a multiple of 4, as Xen is using ARM
> instruction set
> +     */
> I would like to draw your attention to the fact that the comment
> behaves in this way, since the line length exceeds the allowable one.
> The ReflowComments option is responsible for this format. It can be
> turned off, but then the result will be:
> ReflowComments=false:
> /* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with
> plenty of information */
> 
> ReflowComments=true:
> /* second veryVeryVeryVeryVeryVeryVeryVeryVeryVeryVeryLongComment with
> plenty of
> * information */
> 
> So I want to know if the community is ready to add new formatting
> options and edit old ones. Below I will give examples of what
> corrections the checker is currently making (the first variant in each
> case is existing code and the second variant is formatted by checker).
> If they fit the standards, then I can document them in the coding
> style. If not, then I try to configure the checker. But the idea is
> that we need to choose one option that will be considered correct.
> 1) Function prototype when the string length is longer than the allowed
> -static int __init
> -acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
> -                             const unsigned long end)
> +static int __init acpi_parse_gic_cpu_interface(
> +    struct acpi_subtable_header *header, const unsigned long end)

Jan already commented on this one; is there any way to tell the checker to ignore  this discrepancy?

If not, I think we should just choose one; I’d go with the latter.

> 2) Wrapping an operation to a new line when the string length is longer
> than the allowed
> -    status = acpi_get_table(ACPI_SIG_SPCR, 0,
> -                            (struct acpi_table_header **)&spcr);
> +    status =
> +        acpi_get_table(ACPI_SIG_SPCR, 0, (struct acpi_table_header
> **)&spcr);

Personally I prefer the first version.

> 3) Space after brackets
> -    return ((char *) base + offset);
> +    return ((char *)base + offset);

This seems like a good change to me.

> 4) Spaces in brackets in switch condition
> -    switch ( domctl->cmd )
> +    switch (domctl->cmd)

This is explicitly against the current coding style.

> 5) Spaces in brackets in operation
> -    imm = ( insn >> BRANCH_INSN_IMM_SHIFT ) & BRANCH_INSN_IMM_MASK;
> +    imm = (insn >> BRANCH_INSN_IMM_SHIFT) & BRANCH_INSN_IMM_MASK;

I *think* this is already the official style.

> 6) Spaces in brackets in return
> -        return ( !sym->name[2] || sym->name[2] == '.' );
> +        return (!sym->name[2] || sym->name[2] == '.');

Similarly, I think this is already the official style.

> 7) Space after sizeof
> -    clean_and_invalidate_dcache_va_range(new_ptr, sizeof (*new_ptr) *
> len);
> +    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr) *
> len);

I think this is correct.

> 8) Spaces before comment if it’s on the same line
> -    case R_ARM_MOVT_ABS: /* S + A */
> +    case R_ARM_MOVT_ABS:    /* S + A */
> 
> -    if ( tmp == 0UL )       /* Are any bits set? */
> -        return result + size;   /* Nope. */
> +    if ( tmp == 0UL )         /* Are any bits set? */
> +        return result + size; /* Nope. */

Seem OK to me.

> 
> 9) Space after for_each_vcpu
> -        for_each_vcpu(d, v)
> +        for_each_vcpu (d, v)

Er, not sure about this one.  This is actually a macro; but obviously it looks like for ( ).

I think Jan will probably have an opinion, and I think he’ll be back tomorrow; so maybe wait just a day or two before starting to prep your series.

> 10) Spaces in declaration
> -    union hsr hsr = { .bits = regs->hsr };
> +    union hsr hsr = {.bits = regs->hsr};

I’m fine with this too.

 -George
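For readers unfamiliar with the tool: the knobs under discussion live in a `.clang-format` file. The fragment below is only an illustration (the option names are real clang-format style options as of roughly v10/v11; the values are examples, not an agreed Xen configuration). Note that an option such as SpacesInParentheses applies uniformly, so it cannot express "spaces inside `switch ( )` but not inside expression parentheses" -- which is exactly the inflexibility described in the message above.

```yaml
# Illustrative .clang-format fragment; values are examples only.
BasedOnStyle: LLVM
IndentWidth: 4
ColumnLimit: 80
ReflowComments: false        # avoid the comment re-wrapping shown above
SpacesInParentheses: false   # applies everywhere: switch, if, expressions
SpaceAfterCStyleCast: false  # (char *)base, not (char *) base
SpaceBeforeParens: ControlStatements
```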


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 19:02:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 19:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5998.15667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS35O-0008Us-1Z; Mon, 12 Oct 2020 19:02:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5998.15667; Mon, 12 Oct 2020 19:02:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS35N-0008Ul-Td; Mon, 12 Oct 2020 19:02:17 +0000
Received: by outflank-mailman (input) for mailman id 5998;
 Mon, 12 Oct 2020 19:02:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pqe9=DT=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kS35N-0008Ug-9t
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 19:02:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ecdfc1f8-e35b-4e59-af6a-4f4656fadb24;
 Mon, 12 Oct 2020 19:02:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 8129B2067C;
 Mon, 12 Oct 2020 19:02:15 +0000 (UTC)
X-Inumbo-ID: ecdfc1f8-e35b-4e59-af6a-4f4656fadb24
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602529336;
	bh=NFFfp+KbgsFcGWT60xOjGsfDsuX6lx/Z+evzycXNx1I=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=lleIApgKIApv4YDHOfQ7R0EGLYKXGgqv52j2nmd9LAgqVffk5RRGIJN7bDUDUxXo6
	 nSOg56to36KSyScvm7uSDF22JUFuVy72RRSNt99cWiq1HY+ftq9uGRfESvWVPXMP7l
	 7uCq84bzkKJDeKNwMAm4HBzQshvuQ8a1mNumbJgY=
Date: Mon, 12 Oct 2020 12:02:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    xen-devel@lists.xenproject.org, 
    =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, ehem+xen@m5p.com, 
    bertrand.marquis@arm.com, andre.przywara@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
In-Reply-To: <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org>
Message-ID: <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s>
References: <20200926205542.9261-1-julien@xen.org> <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com> <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 10 Oct 2020, Julien Grall wrote:
> On 28/09/2020 07:47, Masami Hiramatsu wrote:
> > Hello,
> 
> Hi Masami,
> 
> > This made progress with my Xen boot on DeveloperBox (
> > https://www.96boards.org/product/developerbox/ ) with ACPI.
> > 
> 
> I have reviewed the patch attached and I have a couple of remarks about it.
> 
> The STAO table was originally created to allow a hypervisor to hide devices
> from a controller domain (such as Dom0). If this table is not present, then it
> means the OS/hypervisor can use any device listed in the ACPI table.
> 
> Additionally, the STAO table should never be present in the host ACPI table.
> 
> Therefore, I think the code should not try to find the STAO. Instead, it
> should check whether the SPCR table is present.

Yes, that makes sense, but that brings me to the next question.

SPCR seems to be required by SBBR; however, Masami wrote that he could
boot on a system without SPCR, which leaves me very confused for two
reasons:

1) Why is there no SPCR? Isn't it supposed to be mandatory? Is it
because there is no UART on Masami's system?

2) If there is no SPCR, how did Masami manage to boot Xen?
I take it there was no serial output? Just the framebuffer?


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 19:03:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 19:03:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.5999.15679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS36E-0000D9-E5; Mon, 12 Oct 2020 19:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 5999.15679; Mon, 12 Oct 2020 19:03:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS36E-0000D2-Au; Mon, 12 Oct 2020 19:03:10 +0000
Received: by outflank-mailman (input) for mailman id 5999;
 Mon, 12 Oct 2020 19:03:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2065=DT=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1kS36D-0000Cx-4t
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 19:03:09 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8c784db3-55b0-49e8-86ac-de4b27c36edc;
 Mon, 12 Oct 2020 19:03:08 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 36220214DB;
 Mon, 12 Oct 2020 19:03:06 +0000 (UTC)
X-Inumbo-ID: 8c784db3-55b0-49e8-86ac-de4b27c36edc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602529387;
	bh=Mk2zB7GUesBDYX/zhV/qiZ/j3XQWk09LyphfX7ZQ4G0=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=zHzruLaIlY8n78tf4+6ZSR0ph95e7glUVoCh/WJ/hSu2BxXmN/eo7rcFSRsZGW7t5
	 lLTk1V1Jvr4NlBP7ZKyJo2p8bb/rsQWz5xFPW9GRb2H2JUpTtTB7iHAK0/zQ+9CcLQ
	 nwfgeGzGxO5S9NhgUGtQV09jsFHY8EkJBsK9sBbo=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH AUTOSEL 5.8 21/24] arm/arm64: xen: Fix to convert percpu address to gfn correctly
Date: Mon, 12 Oct 2020 15:02:36 -0400
Message-Id: <20201012190239.3279198-21-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201012190239.3279198-1-sashal@kernel.org>
References: <20201012190239.3279198-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Masami Hiramatsu <mhiramat@kernel.org>

[ Upstream commit 5a0677110b73dd3e1766f89159701bfe8ac06808 ]

Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
address conversion.

In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
assumes the given virtual address is in the linearly mapped kernel
memory area, it cannot convert per-cpu memory that is allocated in
the vmalloc area.

Which case applies depends on CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK.
If it is enabled, the first chunk of percpu memory is linearly mapped;
otherwise it is allocated from the vmalloc area. Moreover, if the
first chunk of percpu memory has run out by the time xen_vcpu_info is
allocated, the allocation falls into the second chunk, which is backed
by kernel memory or vmalloc memory depending on CONFIG_NEED_PER_CPU_KM.

Without this fix, and with the kernel configured to use the vmalloc
area for percpu memory, the Dom0 kernel fails to boot with the
following errors.

[    0.466172] Xen: initializing cpu0
[    0.469601] ------------[ cut here ]------------
[    0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180
[    0.484435] Modules linked in:
[    0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4
[    0.493895] Hardware name: Socionext Developer Box (DT)
[    0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.504836] pc : xen_starting_cpu+0x160/0x180
[    0.509263] lr : xen_starting_cpu+0xb0/0x180
[    0.513599] sp : ffff8000116cbb60
[    0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.522366] x27: 0000000000000000 x26: 0000000000000000
[    0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.533129] x23: 0000000000000000 x22: 0000000000000000
[    0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.543892] x19: ffff8000113a9988 x18: 0000000000000010
[    0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.554655] x15: ffffffffffffffff x14: 0720072007200720
[    0.560037] x13: 0720072007200720 x12: 0720072007200720
[    0.565418] x11: 0720072007200720 x10: 0720072007200720
[    0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.586945] x3 : 0000000000000000 x2 : ffff80000abec000
[    0.592327] x1 : 000000000000002f x0 : 0000800000000000
[    0.597716] Call trace:
[    0.600232]  xen_starting_cpu+0x160/0x180
[    0.604309]  cpuhp_invoke_callback+0xac/0x640
[    0.608736]  cpuhp_issue_call+0xf4/0x150
[    0.612728]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.618030]  __cpuhp_setup_state+0x84/0xf8
[    0.622192]  xen_guest_init+0x324/0x364
[    0.626097]  do_one_initcall+0x54/0x250
[    0.630003]  kernel_init_freeable+0x12c/0x2c8
[    0.634428]  kernel_init+0x1c/0x128
[    0.637988]  ret_from_fork+0x10/0x18
[    0.641635] ---[ end trace d95b5309a33f8b27 ]---
[    0.646337] ------------[ cut here ]------------
[    0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158!
[    0.657697] Internal error: Oops - BUG: 0 [#1] SMP
[    0.662548] Modules linked in:
[    0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.9.0-rc4+ #4
[    0.673398] Hardware name: Socionext Developer Box (DT)
[    0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.684338] pc : xen_starting_cpu+0x178/0x180
[    0.688765] lr : xen_starting_cpu+0x144/0x180
[    0.693188] sp : ffff8000116cbb60
[    0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.701955] x27: 0000000000000000 x26: 0000000000000000
[    0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.712718] x23: 0000000000000000 x22: 0000000000000000
[    0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.723481] x19: ffff8000113a9988 x18: 0000000000000010
[    0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.734245] x15: ffffffffffffffff x14: 0720072007200720
[    0.739626] x13: 0720072007200720 x12: 0720072007200720
[    0.745008] x11: 0720072007200720 x10: 0720072007200720
[    0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef
[    0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea
[    0.777304] Call trace:
[    0.779819]  xen_starting_cpu+0x178/0x180
[    0.783898]  cpuhp_invoke_callback+0xac/0x640
[    0.788325]  cpuhp_issue_call+0xf4/0x150
[    0.792317]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.797619]  __cpuhp_setup_state+0x84/0xf8
[    0.801779]  xen_guest_init+0x324/0x364
[    0.805683]  do_one_initcall+0x54/0x250
[    0.809590]  kernel_init_freeable+0x12c/0x2c8
[    0.814016]  kernel_init+0x1c/0x128
[    0.817583]  ret_from_fork+0x10/0x18
[    0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000)
[    0.827415] ---[ end trace d95b5309a33f8b28 ]---
[    0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/160196697165.60224.17470743378683334995.stgit@devnote2
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm/xen/enlighten.c | 2 +-
 include/xen/arm/page.h   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index e93145d72c26e..a6ab3689b2f4a 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
 
-	info.mfn = virt_to_gfn(vcpup);
+	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
 
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index d7f6af50e200b..0bd3967f6df1e 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -79,6 +79,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
 #define virt_to_gfn(v)		(pfn_to_gfn(virt_to_phys(v) >> XEN_PAGE_SHIFT))
 #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
 
+#define percpu_to_gfn(v)	\
+	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
+
 /* Only used in PV code. But ARM guests are always HVM. */
 static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 19:03:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 19:03:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6000.15691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS36d-0000SI-ND; Mon, 12 Oct 2020 19:03:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6000.15691; Mon, 12 Oct 2020 19:03:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS36d-0000SA-K2; Mon, 12 Oct 2020 19:03:35 +0000
Received: by outflank-mailman (input) for mailman id 6000;
 Mon, 12 Oct 2020 19:03:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2065=DT=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1kS36b-0000Ry-Vo
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 19:03:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2501ca2-c80b-4412-9d83-b2a9bf0bd5a8;
 Mon, 12 Oct 2020 19:03:33 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7670D21D6C;
 Mon, 12 Oct 2020 19:03:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602529412;
	bh=nNU22ATOyX0rO71abqxBczKhJEmToH6ypOzPEkNK30I=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=gqnNWglldvu2yfFXX1+YKvL60Ip8Zyp0eBCmHvFggDgVYW2DBPJPzWZgEmGDoEAJ/
	 vPP5RXk3N5LQaTBNZReq5fvyjnRpIrilN3QhqbYIs9/1yYJtv424X2BkZv5LCv+g46
	 IOcVJ6eBZnJxHPQ/7fm+GlChi1zQmVJ/bvYjGEss=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH AUTOSEL 5.4 14/15] arm/arm64: xen: Fix to convert percpu address to gfn correctly
Date: Mon, 12 Oct 2020 15:03:11 -0400
Message-Id: <20201012190313.3279397-14-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201012190313.3279397-1-sashal@kernel.org>
References: <20201012190313.3279397-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Masami Hiramatsu <mhiramat@kernel.org>

[ Upstream commit 5a0677110b73dd3e1766f89159701bfe8ac06808 ]

Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
address conversion.

In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
assumes the given virtual address lies in the linearly mapped kernel
memory area, it cannot correctly convert per-cpu memory that is
allocated in the vmalloc area.

Which case applies depends on CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK.
If it is enabled, the first chunk of percpu memory is linearly
mapped; otherwise it is allocated from the vmalloc area. Moreover,
if the first percpu chunk is already exhausted by the time
xen_vcpu_info is allocated, the allocation falls into the second
chunk, which is backed by either kernel memory or vmalloc memory
(depending on CONFIG_NEED_PER_CPU_KM).

Without this fix, and with the kernel configured to use the vmalloc
area for percpu memory, the Dom0 kernel fails to boot with the
following errors.

[    0.466172] Xen: initializing cpu0
[    0.469601] ------------[ cut here ]------------
[    0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180
[    0.484435] Modules linked in:
[    0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4
[    0.493895] Hardware name: Socionext Developer Box (DT)
[    0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.504836] pc : xen_starting_cpu+0x160/0x180
[    0.509263] lr : xen_starting_cpu+0xb0/0x180
[    0.513599] sp : ffff8000116cbb60
[    0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.522366] x27: 0000000000000000 x26: 0000000000000000
[    0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.533129] x23: 0000000000000000 x22: 0000000000000000
[    0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.543892] x19: ffff8000113a9988 x18: 0000000000000010
[    0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.554655] x15: ffffffffffffffff x14: 0720072007200720
[    0.560037] x13: 0720072007200720 x12: 0720072007200720
[    0.565418] x11: 0720072007200720 x10: 0720072007200720
[    0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.586945] x3 : 0000000000000000 x2 : ffff80000abec000
[    0.592327] x1 : 000000000000002f x0 : 0000800000000000
[    0.597716] Call trace:
[    0.600232]  xen_starting_cpu+0x160/0x180
[    0.604309]  cpuhp_invoke_callback+0xac/0x640
[    0.608736]  cpuhp_issue_call+0xf4/0x150
[    0.612728]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.618030]  __cpuhp_setup_state+0x84/0xf8
[    0.622192]  xen_guest_init+0x324/0x364
[    0.626097]  do_one_initcall+0x54/0x250
[    0.630003]  kernel_init_freeable+0x12c/0x2c8
[    0.634428]  kernel_init+0x1c/0x128
[    0.637988]  ret_from_fork+0x10/0x18
[    0.641635] ---[ end trace d95b5309a33f8b27 ]---
[    0.646337] ------------[ cut here ]------------
[    0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158!
[    0.657697] Internal error: Oops - BUG: 0 [#1] SMP
[    0.662548] Modules linked in:
[    0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.9.0-rc4+ #4
[    0.673398] Hardware name: Socionext Developer Box (DT)
[    0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.684338] pc : xen_starting_cpu+0x178/0x180
[    0.688765] lr : xen_starting_cpu+0x144/0x180
[    0.693188] sp : ffff8000116cbb60
[    0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.701955] x27: 0000000000000000 x26: 0000000000000000
[    0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.712718] x23: 0000000000000000 x22: 0000000000000000
[    0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.723481] x19: ffff8000113a9988 x18: 0000000000000010
[    0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.734245] x15: ffffffffffffffff x14: 0720072007200720
[    0.739626] x13: 0720072007200720 x12: 0720072007200720
[    0.745008] x11: 0720072007200720 x10: 0720072007200720
[    0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef
[    0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea
[    0.777304] Call trace:
[    0.779819]  xen_starting_cpu+0x178/0x180
[    0.783898]  cpuhp_invoke_callback+0xac/0x640
[    0.788325]  cpuhp_issue_call+0xf4/0x150
[    0.792317]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.797619]  __cpuhp_setup_state+0x84/0xf8
[    0.801779]  xen_guest_init+0x324/0x364
[    0.805683]  do_one_initcall+0x54/0x250
[    0.809590]  kernel_init_freeable+0x12c/0x2c8
[    0.814016]  kernel_init+0x1c/0x128
[    0.817583]  ret_from_fork+0x10/0x18
[    0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000)
[    0.827415] ---[ end trace d95b5309a33f8b28 ]---
[    0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/160196697165.60224.17470743378683334995.stgit@devnote2
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm/xen/enlighten.c | 2 +-
 include/xen/arm/page.h   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index dd6804a64f1a0..a019490c4ec3e 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -150,7 +150,7 @@ static int xen_starting_cpu(unsigned int cpu)
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
 
-	info.mfn = virt_to_gfn(vcpup);
+	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
 
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index f77dcbcba5a60..27d08d4a853c2 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -79,6 +79,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
 #define virt_to_gfn(v)		(pfn_to_gfn(virt_to_phys(v) >> XEN_PAGE_SHIFT))
 #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
 
+#define percpu_to_gfn(v)	\
+	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
+
 /* Only used in PV code. But ARM guests are always HVM. */
 static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 19:03:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 19:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6002.15703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS36u-0000Y5-W2; Mon, 12 Oct 2020 19:03:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6002.15703; Mon, 12 Oct 2020 19:03:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS36u-0000Xy-SC; Mon, 12 Oct 2020 19:03:52 +0000
Received: by outflank-mailman (input) for mailman id 6002;
 Mon, 12 Oct 2020 19:03:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2065=DT=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1kS36t-0000Xi-RD
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 19:03:51 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 601922c8-2fbb-464d-b661-9d090d713724;
 Mon, 12 Oct 2020 19:03:51 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7EC6A20BED;
 Mon, 12 Oct 2020 19:03:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602529430;
	bh=Szsd54BcWQq3eUT3m1lvpRPfwDlpauY+yNkZzt0nheY=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=O6Rvd8fXhZl5UQDkwQwpL2fa2Uj78fMQ7mjrii8k5QK7ZG1kmCenD0gz1gVUfx/hR
	 IRbbu0rciefc8SkBk+BCF4i9JX2B/4TS2sVZRUEzY95lhuh70zgFtasnncS4a7M9v6
	 NkmfrZkFfXYStlj6Qb52/bn4eM9i1PikaKQYvlHI=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH AUTOSEL 4.19 11/12] arm/arm64: xen: Fix to convert percpu address to gfn correctly
Date: Mon, 12 Oct 2020 15:03:34 -0400
Message-Id: <20201012190335.3279538-11-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201012190335.3279538-1-sashal@kernel.org>
References: <20201012190335.3279538-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Masami Hiramatsu <mhiramat@kernel.org>

[ Upstream commit 5a0677110b73dd3e1766f89159701bfe8ac06808 ]

Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
address conversion.

In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
assumes the given virtual address lies in the linearly mapped kernel
memory area, it cannot correctly convert per-cpu memory that is
allocated in the vmalloc area.

Which case applies depends on CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK.
If it is enabled, the first chunk of percpu memory is linearly
mapped; otherwise it is allocated from the vmalloc area. Moreover,
if the first percpu chunk is already exhausted by the time
xen_vcpu_info is allocated, the allocation falls into the second
chunk, which is backed by either kernel memory or vmalloc memory
(depending on CONFIG_NEED_PER_CPU_KM).

Without this fix, and with the kernel configured to use the vmalloc
area for percpu memory, the Dom0 kernel fails to boot with the
following errors.

[    0.466172] Xen: initializing cpu0
[    0.469601] ------------[ cut here ]------------
[    0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180
[    0.484435] Modules linked in:
[    0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4
[    0.493895] Hardware name: Socionext Developer Box (DT)
[    0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.504836] pc : xen_starting_cpu+0x160/0x180
[    0.509263] lr : xen_starting_cpu+0xb0/0x180
[    0.513599] sp : ffff8000116cbb60
[    0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.522366] x27: 0000000000000000 x26: 0000000000000000
[    0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.533129] x23: 0000000000000000 x22: 0000000000000000
[    0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.543892] x19: ffff8000113a9988 x18: 0000000000000010
[    0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.554655] x15: ffffffffffffffff x14: 0720072007200720
[    0.560037] x13: 0720072007200720 x12: 0720072007200720
[    0.565418] x11: 0720072007200720 x10: 0720072007200720
[    0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.586945] x3 : 0000000000000000 x2 : ffff80000abec000
[    0.592327] x1 : 000000000000002f x0 : 0000800000000000
[    0.597716] Call trace:
[    0.600232]  xen_starting_cpu+0x160/0x180
[    0.604309]  cpuhp_invoke_callback+0xac/0x640
[    0.608736]  cpuhp_issue_call+0xf4/0x150
[    0.612728]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.618030]  __cpuhp_setup_state+0x84/0xf8
[    0.622192]  xen_guest_init+0x324/0x364
[    0.626097]  do_one_initcall+0x54/0x250
[    0.630003]  kernel_init_freeable+0x12c/0x2c8
[    0.634428]  kernel_init+0x1c/0x128
[    0.637988]  ret_from_fork+0x10/0x18
[    0.641635] ---[ end trace d95b5309a33f8b27 ]---
[    0.646337] ------------[ cut here ]------------
[    0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158!
[    0.657697] Internal error: Oops - BUG: 0 [#1] SMP
[    0.662548] Modules linked in:
[    0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.9.0-rc4+ #4
[    0.673398] Hardware name: Socionext Developer Box (DT)
[    0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.684338] pc : xen_starting_cpu+0x178/0x180
[    0.688765] lr : xen_starting_cpu+0x144/0x180
[    0.693188] sp : ffff8000116cbb60
[    0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.701955] x27: 0000000000000000 x26: 0000000000000000
[    0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.712718] x23: 0000000000000000 x22: 0000000000000000
[    0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.723481] x19: ffff8000113a9988 x18: 0000000000000010
[    0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.734245] x15: ffffffffffffffff x14: 0720072007200720
[    0.739626] x13: 0720072007200720 x12: 0720072007200720
[    0.745008] x11: 0720072007200720 x10: 0720072007200720
[    0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef
[    0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea
[    0.777304] Call trace:
[    0.779819]  xen_starting_cpu+0x178/0x180
[    0.783898]  cpuhp_invoke_callback+0xac/0x640
[    0.788325]  cpuhp_issue_call+0xf4/0x150
[    0.792317]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.797619]  __cpuhp_setup_state+0x84/0xf8
[    0.801779]  xen_guest_init+0x324/0x364
[    0.805683]  do_one_initcall+0x54/0x250
[    0.809590]  kernel_init_freeable+0x12c/0x2c8
[    0.814016]  kernel_init+0x1c/0x128
[    0.817583]  ret_from_fork+0x10/0x18
[    0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000)
[    0.827415] ---[ end trace d95b5309a33f8b28 ]---
[    0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/160196697165.60224.17470743378683334995.stgit@devnote2
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm/xen/enlighten.c | 2 +-
 include/xen/arm/page.h   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index 07060e5b58641..05387c94902a4 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -184,7 +184,7 @@ static int xen_starting_cpu(unsigned int cpu)
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
 
-	info.mfn = virt_to_gfn(vcpup);
+	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
 
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index f77dcbcba5a60..27d08d4a853c2 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -79,6 +79,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
 #define virt_to_gfn(v)		(pfn_to_gfn(virt_to_phys(v) >> XEN_PAGE_SHIFT))
 #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
 
+#define percpu_to_gfn(v)	\
+	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
+
 /* Only used in PV code. But ARM guests are always HVM. */
 static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 19:04:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 19:04:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6004.15715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS37B-0000eC-8Q; Mon, 12 Oct 2020 19:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6004.15715; Mon, 12 Oct 2020 19:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS37B-0000e4-4k; Mon, 12 Oct 2020 19:04:09 +0000
Received: by outflank-mailman (input) for mailman id 6004;
 Mon, 12 Oct 2020 19:04:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2065=DT=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1kS379-0000dl-RE
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 19:04:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id adaaa76e-c22e-4e52-ab52-8948ed0375e2;
 Mon, 12 Oct 2020 19:04:07 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 807052222F;
 Mon, 12 Oct 2020 19:04:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602529446;
	bh=ExFaKH18wvRgVDPOD9/sVbjT/ogMhKoFhXvzCRb5Nd0=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=c2/Fga9oIooFcFLicvS4AREqLXFoDHw+YRtkOMkPQqKyaN7cKHdGnhkQ1BwLFW3Or
	 35SAH7VIVhcNi4x1NhwEeu4+CvFspThxvsRIhkM3vn2p3atWPDs0hXZ7dLNnG6Jzk4
	 ujT6/TDUV8MkRW4Hw81uYLXOL62W0NgP4zkZQSys=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Masami Hiramatsu <mhiramat@kernel.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH AUTOSEL 4.14 10/11] arm/arm64: xen: Fix to convert percpu address to gfn correctly
Date: Mon, 12 Oct 2020 15:03:52 -0400
Message-Id: <20201012190353.3279662-10-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201012190353.3279662-1-sashal@kernel.org>
References: <20201012190353.3279662-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Masami Hiramatsu <mhiramat@kernel.org>

[ Upstream commit 5a0677110b73dd3e1766f89159701bfe8ac06808 ]

Use per_cpu_ptr_to_phys() instead of virt_to_phys() for per-cpu
address conversion.

In xen_starting_cpu(), the per-cpu xen_vcpu_info address is converted
to a gfn by the virt_to_gfn() macro. However, since virt_to_gfn(v)
assumes the given virtual address lies in the linearly mapped kernel
memory area, it cannot correctly convert per-cpu memory that is
allocated in the vmalloc area.

Which case applies depends on CONFIG_NEED_PER_CPU_EMBED_FIRST_CHUNK.
If it is enabled, the first chunk of percpu memory is linearly
mapped; otherwise it is allocated from the vmalloc area. Moreover,
if the first percpu chunk is already exhausted by the time
xen_vcpu_info is allocated, the allocation falls into the second
chunk, which is backed by either kernel memory or vmalloc memory
(depending on CONFIG_NEED_PER_CPU_KM).

Without this fix, and with the kernel configured to use the vmalloc
area for percpu memory, the Dom0 kernel fails to boot with the
following errors.

[    0.466172] Xen: initializing cpu0
[    0.469601] ------------[ cut here ]------------
[    0.474295] WARNING: CPU: 0 PID: 1 at arch/arm64/xen/../../arm/xen/enlighten.c:153 xen_starting_cpu+0x160/0x180
[    0.484435] Modules linked in:
[    0.487565] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc4+ #4
[    0.493895] Hardware name: Socionext Developer Box (DT)
[    0.499194] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.504836] pc : xen_starting_cpu+0x160/0x180
[    0.509263] lr : xen_starting_cpu+0xb0/0x180
[    0.513599] sp : ffff8000116cbb60
[    0.516984] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.522366] x27: 0000000000000000 x26: 0000000000000000
[    0.527754] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.533129] x23: 0000000000000000 x22: 0000000000000000
[    0.538511] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.543892] x19: ffff8000113a9988 x18: 0000000000000010
[    0.549274] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.554655] x15: ffffffffffffffff x14: 0720072007200720
[    0.560037] x13: 0720072007200720 x12: 0720072007200720
[    0.565418] x11: 0720072007200720 x10: 0720072007200720
[    0.570801] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.576182] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.581564] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.586945] x3 : 0000000000000000 x2 : ffff80000abec000
[    0.592327] x1 : 000000000000002f x0 : 0000800000000000
[    0.597716] Call trace:
[    0.600232]  xen_starting_cpu+0x160/0x180
[    0.604309]  cpuhp_invoke_callback+0xac/0x640
[    0.608736]  cpuhp_issue_call+0xf4/0x150
[    0.612728]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.618030]  __cpuhp_setup_state+0x84/0xf8
[    0.622192]  xen_guest_init+0x324/0x364
[    0.626097]  do_one_initcall+0x54/0x250
[    0.630003]  kernel_init_freeable+0x12c/0x2c8
[    0.634428]  kernel_init+0x1c/0x128
[    0.637988]  ret_from_fork+0x10/0x18
[    0.641635] ---[ end trace d95b5309a33f8b27 ]---
[    0.646337] ------------[ cut here ]------------
[    0.651005] kernel BUG at arch/arm64/xen/../../arm/xen/enlighten.c:158!
[    0.657697] Internal error: Oops - BUG: 0 [#1] SMP
[    0.662548] Modules linked in:
[    0.665676] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.9.0-rc4+ #4
[    0.673398] Hardware name: Socionext Developer Box (DT)
[    0.678695] pstate: 00000005 (nzcv daif -PAN -UAO BTYPE=--)
[    0.684338] pc : xen_starting_cpu+0x178/0x180
[    0.688765] lr : xen_starting_cpu+0x144/0x180
[    0.693188] sp : ffff8000116cbb60
[    0.696573] x29: ffff8000116cbb60 x28: ffff80000abec000
[    0.701955] x27: 0000000000000000 x26: 0000000000000000
[    0.707344] x25: ffff80001156c000 x24: fffffdffbfcdb600
[    0.712718] x23: 0000000000000000 x22: 0000000000000000
[    0.718107] x21: ffff8000113a99c8 x20: ffff800010fe4f68
[    0.723481] x19: ffff8000113a9988 x18: 0000000000000010
[    0.728863] x17: 0000000094fe0f81 x16: 00000000deadbeef
[    0.734245] x15: ffffffffffffffff x14: 0720072007200720
[    0.739626] x13: 0720072007200720 x12: 0720072007200720
[    0.745008] x11: 0720072007200720 x10: 0720072007200720
[    0.750390] x9 : ffff8000100fbdc0 x8 : ffff800010715208
[    0.755771] x7 : 0000000000000054 x6 : ffff00001b790f00
[    0.761153] x5 : ffff800010bbf880 x4 : 0000000000000000
[    0.766534] x3 : 0000000000000000 x2 : 00000000deadbeef
[    0.771916] x1 : 00000000deadbeef x0 : ffffffffffffffea
[    0.777304] Call trace:
[    0.779819]  xen_starting_cpu+0x178/0x180
[    0.783898]  cpuhp_invoke_callback+0xac/0x640
[    0.788325]  cpuhp_issue_call+0xf4/0x150
[    0.792317]  __cpuhp_setup_state_cpuslocked+0x128/0x2c8
[    0.797619]  __cpuhp_setup_state+0x84/0xf8
[    0.801779]  xen_guest_init+0x324/0x364
[    0.805683]  do_one_initcall+0x54/0x250
[    0.809590]  kernel_init_freeable+0x12c/0x2c8
[    0.814016]  kernel_init+0x1c/0x128
[    0.817583]  ret_from_fork+0x10/0x18
[    0.821226] Code: d0006980 f9427c00 cb000300 17ffffea (d4210000)
[    0.827415] ---[ end trace d95b5309a33f8b28 ]---
[    0.832076] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[    0.839815] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Link: https://lore.kernel.org/r/160196697165.60224.17470743378683334995.stgit@devnote2
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/arm/xen/enlighten.c | 2 +-
 include/xen/arm/page.h   | 3 +++
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/enlighten.c b/arch/arm/xen/enlighten.c
index ba7f4c8f5c3e4..b473318da5967 100644
--- a/arch/arm/xen/enlighten.c
+++ b/arch/arm/xen/enlighten.c
@@ -170,7 +170,7 @@ static int xen_starting_cpu(unsigned int cpu)
 	pr_info("Xen: initializing cpu%d\n", cpu);
 	vcpup = per_cpu_ptr(xen_vcpu_info, cpu);
 
-	info.mfn = virt_to_gfn(vcpup);
+	info.mfn = percpu_to_gfn(vcpup);
 	info.offset = xen_offset_in_page(vcpup);
 
 	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, xen_vcpu_nr(cpu),
diff --git a/include/xen/arm/page.h b/include/xen/arm/page.h
index f77dcbcba5a60..27d08d4a853c2 100644
--- a/include/xen/arm/page.h
+++ b/include/xen/arm/page.h
@@ -79,6 +79,9 @@ static inline unsigned long bfn_to_pfn(unsigned long bfn)
 #define virt_to_gfn(v)		(pfn_to_gfn(virt_to_phys(v) >> XEN_PAGE_SHIFT))
 #define gfn_to_virt(m)		(__va(gfn_to_pfn(m) << XEN_PAGE_SHIFT))
 
+#define percpu_to_gfn(v)	\
+	(pfn_to_gfn(per_cpu_ptr_to_phys(v) >> XEN_PAGE_SHIFT))
+
 /* Only used in PV code. But ARM guests are always HVM. */
 static inline xmaddr_t arbitrary_virt_to_machine(void *vaddr)
 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 19:32:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 19:32:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6012.15726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS3YB-0003Xc-Hq; Mon, 12 Oct 2020 19:32:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6012.15726; Mon, 12 Oct 2020 19:32:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS3YB-0003XV-En; Mon, 12 Oct 2020 19:32:03 +0000
Received: by outflank-mailman (input) for mailman id 6012;
 Mon, 12 Oct 2020 19:32:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kS3YA-0003XQ-GX
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 19:32:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 087b0a43-9a6a-4a2b-a40b-2d5bcfb33c23;
 Mon, 12 Oct 2020 19:32:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS3Y7-0006E2-Os; Mon, 12 Oct 2020 19:31:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS3Y7-00047a-Fx; Mon, 12 Oct 2020 19:31:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kS3Y7-0006Jr-FP; Mon, 12 Oct 2020 19:31:59 +0000
X-Inumbo-ID: 087b0a43-9a6a-4a2b-a40b-2d5bcfb33c23
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s4kMH3/OiKB+Jxd7GuiZJ+u5OwfYW8sj00kyRfADkbQ=; b=qEI6v4k9vklUtkc/AbtPmH0Egu
	TrQMUEFyOUqrM4a45hTnzD+XHU404ZK+4icu6/O4/F1uHEPvB1zeV78sBsnlDtz+8nkoFspYSffyE
	ZAzFqnbxwj4F9GP6e+yKA6ph47vlni3Y7JsVwCojMd4IWblKNri8vpIFCu/yfTkx44PQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155741-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155741: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 19:31:59 +0000

flight 155741 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155741/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    3 days
Failing since        155612  2020-10-09 18:01:22 Z    3 days   22 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 19:54:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 19:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6017.15743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS3tR-0005kH-7g; Mon, 12 Oct 2020 19:54:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6017.15743; Mon, 12 Oct 2020 19:54:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS3tR-0005kA-4H; Mon, 12 Oct 2020 19:54:01 +0000
Received: by outflank-mailman (input) for mailman id 6017;
 Mon, 12 Oct 2020 19:53:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uzT3=DT=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kS3tP-0005jz-JN
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 19:53:59 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05e7fdfb-8787-40eb-af15-5ee5c7ffaa6b;
 Mon, 12 Oct 2020 19:53:57 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 12 Oct 2020 12:53:56 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 12 Oct 2020 12:53:54 -0700
X-Inumbo-ID: 05e7fdfb-8787-40eb-af15-5ee5c7ffaa6b
IronPort-SDR: n76ml+Lh/mxdJ/ZqnzGph4iG6/stFlHRZRh+D6Fm5ILZbTNtYD9klDu1H+GIcodF01kPL1mPuR
 58DhqwNLc/PQ==
X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="162328382"
X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; 
   d="scan'208";a="162328382"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: fkO3cV1AstS50IlWJFeIR7IMIWTUvxOVu0RxBH9tBAb2aFjzmjvlioAsa4gAQcEu3X0zxNtHVX
 KqidbmrXoAPg==
X-IronPort-AV: E=Sophos;i="5.77,367,1596524400"; 
   d="scan'208";a="530096227"
Date: Mon, 12 Oct 2020 12:53:54 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>, Eric Biggers <ebiggers@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201012195354.GC2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
 <20201010003954.GW20115@casper.infradead.org>
 <20201010013036.GD1122@sol.localdomain>
 <20201012065635.GB2046448@iweiny-DESK2.sc.intel.com>
 <20201012161946.GA858@sol.localdomain>
 <5d621db9-23d4-e140-45eb-d7fca2093d2b@intel.com>
 <20201012164438.GA20115@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201012164438.GA20115@casper.infradead.org>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > kmap_atomic() is always preferred over kmap()/kmap_thread().
> > kmap_atomic() is _much_ more lightweight since its TLB invalidation is
> > always CPU-local and never broadcast.
> > 
> > So, basically, unless you *must* sleep while the mapping is in place,
> > kmap_atomic() is preferred.
> 
> But kmap_atomic() disables preemption, so the _ideal_ interface would map
> it only locally, then on preemption make it global.  I don't even know
> if that _can_ be done.  But this email makes it seem like kmap_atomic()
> has no downsides.

And that is IIUC what Thomas was trying to solve.

Also, Linus brought up that kmap_atomic() has quirks in nesting.[1]

From what I can see all of these discussions support the need to have something
between kmap() and kmap_atomic().

However, the reasons behind converting call sites to kmap_thread() differ
between Thomas' patch set and mine.  Both require more kmap granularity, but
they do so for different reasons and with different underlying implementations,
while arriving at the _same_ resulting semantics: a thread-local mapping which
is preemptible.[2]  Therefore they each focus on changing different call sites.

While this patch set is huge I think it serves a valuable purpose to identify a
large number of call sites which are candidates for this new semantic.

Ira

[1] https://lore.kernel.org/lkml/CAHk-=wgbmwsTOKs23Z=71EBTrULoeaH2U3TNqT2atHEWvkBKdw@mail.gmail.com/
[2] It is important to note these implementations are not incompatible with
each other.  So I don't see yet another 'kmap_something()' being required.


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 20:03:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 20:03:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6020.15757 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS42e-0006tk-6t; Mon, 12 Oct 2020 20:03:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6020.15757; Mon, 12 Oct 2020 20:03:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS42e-0006td-3k; Mon, 12 Oct 2020 20:03:32 +0000
Received: by outflank-mailman (input) for mailman id 6020;
 Mon, 12 Oct 2020 20:03:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0yg=DT=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kS42a-0006t3-7t
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 20:03:30 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a399b9ea-b7cc-4432-9c41-35b29be75191;
 Mon, 12 Oct 2020 20:03:25 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kS422-0000HA-AX; Mon, 12 Oct 2020 20:02:54 +0000
X-Inumbo-ID: a399b9ea-b7cc-4432-9c41-35b29be75191
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=bCWIFHvT8FvIPrDMc3maof4qHUnzu6oS60i9lgfyu+w=; b=osm7lCKhtCxcgsJVmnbUIZHhMp
	5nvioHkxA8AhlfbRA0tXQBSBlQdovP0UNFUmlu/tYL30pdEacRnzwKeBjNLJ8YRj+D/sF4f+x4FRA
	68EE5VyuQPlhkPgzE4vuiuikkji/6lzFKcK2qIYtpBfSW0scT+5xZ65/me+9ijMgWVh8NM56bexcy
	gJZd0qYjXWzdg4a2mFecVVyLa3Rwg0gTTp9KEwoseCi8fLlQXob3uBfLwA1anIPkZvUKPjVJpvK3u
	UT+9cSbb34PGNNH3jL974SwKTCytz2qfL4ONXzbZAaSn1ZoiKQGnXIIuZa+WQOp/Awa/qjp/ly6GK
	H6D/8XHQ==;
Date: Mon, 12 Oct 2020 21:02:54 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Ira Weiny <ira.weiny@intel.com>
Cc: Dave Hansen <dave.hansen@intel.com>, Eric Biggers <ebiggers@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201012200254.GB20115@casper.infradead.org>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
 <20201010003954.GW20115@casper.infradead.org>
 <20201010013036.GD1122@sol.localdomain>
 <20201012065635.GB2046448@iweiny-DESK2.sc.intel.com>
 <20201012161946.GA858@sol.localdomain>
 <5d621db9-23d4-e140-45eb-d7fca2093d2b@intel.com>
 <20201012164438.GA20115@casper.infradead.org>
 <20201012195354.GC2046448@iweiny-DESK2.sc.intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201012195354.GC2046448@iweiny-DESK2.sc.intel.com>

On Mon, Oct 12, 2020 at 12:53:54PM -0700, Ira Weiny wrote:
> On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> > On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > > kmap_atomic() is always preferred over kmap()/kmap_thread().
> > > kmap_atomic() is _much_ more lightweight since its TLB invalidation is
> > > always CPU-local and never broadcast.
> > > 
> > > So, basically, unless you *must* sleep while the mapping is in place,
> > > kmap_atomic() is preferred.
> > 
> > But kmap_atomic() disables preemption, so the _ideal_ interface would map
> > it only locally, then on preemption make it global.  I don't even know
> > if that _can_ be done.  But this email makes it seem like kmap_atomic()
> > has no downsides.
> 
> And that is IIUC what Thomas was trying to solve.
> 
> Also, Linus brought up that kmap_atomic() has quirks in nesting.[1]
> 
> From what I can see all of these discussions support the need to have something
> between kmap() and kmap_atomic().
> 
> However, the reasons behind converting call sites to kmap_thread() differ
> between Thomas' patch set and mine.  Both require more kmap granularity, but
> they do so for different reasons and with different underlying implementations,
> while arriving at the _same_ resulting semantics: a thread-local mapping which
> is preemptible.[2]  Therefore they each focus on changing different call sites.
> 
> While this patch set is huge I think it serves a valuable purpose to identify a
> large number of call sites which are candidates for this new semantic.

Yes, I agree.  My problem with this patch-set is that it ties it to
some Intel feature that almost nobody cares about.  Maybe we should
care about it, but you didn't try very hard to make anyone care about
it in the cover letter.

For a future patch-set, I'd like to see you just introduce the new
API.  Then you can optimise the Intel implementation of it afterwards.
Those patch-sets have entirely different reviewers.


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 20:07:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 20:07:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6022.15768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS46u-00073X-Oc; Mon, 12 Oct 2020 20:07:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6022.15768; Mon, 12 Oct 2020 20:07:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS46u-00073Q-Lk; Mon, 12 Oct 2020 20:07:56 +0000
Received: by outflank-mailman (input) for mailman id 6022;
 Mon, 12 Oct 2020 20:07:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CKVJ=DT=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kS46t-00073L-GT
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 20:07:55 +0000
Received: from mail-qt1-x829.google.com (unknown [2607:f8b0:4864:20::829])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39d3af10-0f9d-466d-99fa-bca774f334a7;
 Mon, 12 Oct 2020 20:07:54 +0000 (UTC)
Received: by mail-qt1-x829.google.com with SMTP id c23so14472348qtp.0
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 13:07:54 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:9802:d83e:b724:7fdf])
 by smtp.gmail.com with ESMTPSA id d129sm13418350qkg.127.2020.10.12.13.07.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 12 Oct 2020 13:07:53 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CKVJ=DT=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kS46t-00073L-GT
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 20:07:55 +0000
X-Inumbo-ID: 39d3af10-0f9d-466d-99fa-bca774f334a7
Received: from mail-qt1-x829.google.com (unknown [2607:f8b0:4864:20::829])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 39d3af10-0f9d-466d-99fa-bca774f334a7;
	Mon, 12 Oct 2020 20:07:54 +0000 (UTC)
Received: by mail-qt1-x829.google.com with SMTP id c23so14472348qtp.0
        for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 13:07:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=v0Cs33Gr8ZIAyIhglkddtMKcyQqAbuNNR2DdCRh4xdo=;
        b=I5NqU2FmUWwd1I3YdYafLj6MuoLjP8vfg48bx4m87EUu2Z9j57n0IL2LEKO9zUeFPN
         yddqJSh6ZdXpGKz4oNhDbdJyLMQO9wAbrYe/k+07LrlqsItDNlE26EPfVhL4KYFkf5Th
         HK3QAXukmMHGZT71wNmI/11SnDwPxidsRANoKS94e9UWIIZSd57K1JhKPdS5RS88vGLl
         62QVH4qOoUOpKfmEDt/iJ3wXCvNQSi7PYo9fNu221qung4MlK+9gPbfxBa67CF3zL4Wm
         HO1D+I5SvUURf8TNTTl5gV0lzdEo6WTsIHOcwZFaz7ZFZCTaKQ0+0SRdBmIGorsFn0nI
         lpfg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=v0Cs33Gr8ZIAyIhglkddtMKcyQqAbuNNR2DdCRh4xdo=;
        b=ReorT8aPLrPFIiMgUx9Y8yLpWxC9hdrKRqew+ZWO7iGLvOtv2Vq3a0CvyYjVYkGZW/
         aAI5SmE8jKhYrRL+emBmp6z56y4pfFpN+YTsmINmk8chr/pKwJg5B79oVYc3eDkRR4t+
         kflenN3x/Ccg2/8HRyXvZCa1nMa8UQpSiwRg0kTZZEVVtRXxckQ9cNJO92xbVrtF6rkE
         u6sY5VFQq6LA/uBOU6R9WLVPrpd9SrHmaindDCN+AVb+ibxRPGGL9bR82sTfYAG6pxbA
         d2JJAjVgVrsnN/6DP5Gie2me3XjtAXAjVH6GUCeS3xilqlRZhAd/OsjnUpIOBSUIbEb3
         0uGA==
X-Gm-Message-State: AOAM530rA8L0hvallFSxDGmusRvyWfGs6N15Jdp0u6BsqmAWULFRHUl+
	Xc2ho2i1GzVMjFzy5WScuwk=
X-Google-Smtp-Source: ABdhPJzLBfY+CXT7WdaA1EBpQhcIOT8q6V+A1sEJrRQx5zR8sS7e385nJtH7BRiNQ/b/X/cAnaaHGA==
X-Received: by 2002:ac8:1a6f:: with SMTP id q44mr11703130qtk.136.1602533274354;
        Mon, 12 Oct 2020 13:07:54 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:9802:d83e:b724:7fdf])
        by smtp.gmail.com with ESMTPSA id d129sm13418350qkg.127.2020.10.12.13.07.52
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 12 Oct 2020 13:07:53 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: qemu-devel@nongnu.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH 0/2] Add Xen CpusAccel
Date: Mon, 12 Oct 2020 16:07:22 -0400
Message-Id: <20201012200725.64137-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen was left behind when CpusAccel became mandatory and fails the assert
in qemu_init_vcpu().  It relied on the same dummy cpu threads as qtest.
Move the qtest cpu functions to a common location and reuse them for
Xen.

Jason Andryuk (2):
  accel: move qtest CpusAccel functions to a common location
  accel: Add xen CpusAccel using dummy-cpu

 .../qtest-cpus.c => dummy/dummy-cpus.c}       | 22 +++++--------------
 .../qtest-cpus.h => dummy/dummy-cpus.h}       | 10 ++++-----
 accel/dummy/meson.build                       |  7 ++++++
 accel/meson.build                             |  1 +
 accel/qtest/meson.build                       |  1 -
 accel/qtest/qtest.c                           |  7 +++++-
 accel/xen/xen-all.c                           | 10 +++++++++
 7 files changed, 34 insertions(+), 24 deletions(-)
 rename accel/{qtest/qtest-cpus.c => dummy/dummy-cpus.c} (76%)
 rename accel/{qtest/qtest-cpus.h => dummy/dummy-cpus.h} (59%)
 create mode 100644 accel/dummy/meson.build

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 20:08:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 20:08:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6023.15781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS470-000779-0N; Mon, 12 Oct 2020 20:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6023.15781; Mon, 12 Oct 2020 20:08:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS46z-000772-TP; Mon, 12 Oct 2020 20:08:01 +0000
Received: by outflank-mailman (input) for mailman id 6023;
 Mon, 12 Oct 2020 20:08:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CKVJ=DT=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kS46y-00073L-CB
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 20:08:00 +0000
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d8cb4da-0b8c-45fe-83b2-8d614af9c96d;
 Mon, 12 Oct 2020 20:07:59 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id b10so6713792qvf.0
 for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 13:07:59 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:9802:d83e:b724:7fdf])
 by smtp.gmail.com with ESMTPSA id d129sm13418350qkg.127.2020.10.12.13.07.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 12 Oct 2020 13:07:58 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CKVJ=DT=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kS46y-00073L-CB
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 20:08:00 +0000
X-Inumbo-ID: 9d8cb4da-0b8c-45fe-83b2-8d614af9c96d
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9d8cb4da-0b8c-45fe-83b2-8d614af9c96d;
	Mon, 12 Oct 2020 20:07:59 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id b10so6713792qvf.0
        for <xen-devel@lists.xenproject.org>; Mon, 12 Oct 2020 13:07:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=QY2243JX786fdUoHzoK1Y6Kh1lJrJoZtoq/K0Z9ICJs=;
        b=rYtSPe2K/v1dU7vZsTf27LfQRM1ImXctJlIOG3nIN7IH978uvQVPQCjWOd6D6XniNU
         mKQg27mvMRIXMCi4AnPhtDNCrMgkF9YnvVLyst07RQa3J7AoDV9twIKB9nD75/WM1SXf
         si7oE4eTR1Vtlc59TMUo6Ns6acrzagC9QRh6ajVF6vQJKUtHM3bRSpHWKiraNSDp0oTx
         PbbHDxkBMZjQDCuFP0KNmVOHAEZYT4PS/s0QVTFGjQsKkfTJRLfrtGTHABYgGkfMvWX9
         PaucQ1LQGfo7nU2F3n7cPK94O3Z3cTejLMuXEXktHME0VdOtuuhg8oNV+MVp3QdHgj+c
         eaiA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=QY2243JX786fdUoHzoK1Y6Kh1lJrJoZtoq/K0Z9ICJs=;
        b=N5vQNLrLjLwDg8/qdiFNllK7Dch672r2qLfcGFy+NfatUkDUVJZqqY7uVPaRdSekqQ
         oPV/WWtZTTWZPRZEZWcZr88j06siplWo/MOrTpfcccIwI5nQ+vSloXnhoxqz3bkGTU8p
         UvF2gMzqjmmXVh2fwglszh2Zqy+Pb53xsMvft2lJq8eJlwXeNR7Aay0VvhKPckfyWsFX
         j7VvpuG8lfiolcSIZFS+kVlP/1zi3Rjo1xuRI2GIwIYIgGaMEK/wk0SgGygiW/yYCCEf
         9JXCx0/BnaGR31zR6atHZzlfZ+cEzfhON94l3fOHanMDTENR+6HWZnfb3KKvYfko9FgV
         yp3A==
X-Gm-Message-State: AOAM531FZCITL/VqLXKLmq3mWocvuwU/yCb9SwQlywdZiG3TPM9ZkJ5W
	XblesAUJ7SoF2wjVHklah+I=
X-Google-Smtp-Source: ABdhPJw3I51kN+J+TGXLaLkJTSdYuDYY97kg7N1G1MUPkpX1p5tdCYS7ebkzO/aJ0SAisDXB4fym3w==
X-Received: by 2002:a0c:b29e:: with SMTP id r30mr27540725qve.38.1602533279090;
        Mon, 12 Oct 2020 13:07:59 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:9802:d83e:b724:7fdf])
        by smtp.gmail.com with ESMTPSA id d129sm13418350qkg.127.2020.10.12.13.07.57
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 12 Oct 2020 13:07:58 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: qemu-devel@nongnu.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [PATCH 2/2] accel: Add xen CpusAccel using dummy-cpu
Date: Mon, 12 Oct 2020 16:07:24 -0400
Message-Id: <20201012200725.64137-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201012200725.64137-1-jandryuk@gmail.com>
References: <20201012200725.64137-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen was broken by commit 1583a3898853 ("cpus: extract out qtest-specific
code to accel/qtest").  Xen relied on qemu_init_vcpu() calling
qemu_dummy_start_vcpu() in the default case, but that was replaced by
g_assert_not_reached().

Add a minimal "CpusAccel" for xen using the dummy-cpu implementation
used by qtest.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 accel/dummy/meson.build |  1 +
 accel/xen/xen-all.c     | 10 ++++++++++
 2 files changed, 11 insertions(+)

diff --git a/accel/dummy/meson.build b/accel/dummy/meson.build
index 5fbe27de90..cdff0ba746 100644
--- a/accel/dummy/meson.build
+++ b/accel/dummy/meson.build
@@ -4,3 +4,4 @@ dummy_ss.add(files(
 ))
 
 specific_ss.add_all(when: ['CONFIG_SOFTMMU', 'CONFIG_POSIX'], if_true: dummy_ss)
+specific_ss.add_all(when: ['CONFIG_XEN'], if_true: dummy_ss)
diff --git a/accel/xen/xen-all.c b/accel/xen/xen-all.c
index 60b971d0a8..2d243c58d4 100644
--- a/accel/xen/xen-all.c
+++ b/accel/xen/xen-all.c
@@ -16,12 +16,15 @@
 #include "hw/xen/xen_pt.h"
 #include "chardev/char.h"
 #include "sysemu/accel.h"
+#include "sysemu/cpus.h"
 #include "sysemu/xen.h"
 #include "sysemu/runstate.h"
 #include "migration/misc.h"
 #include "migration/global_state.h"
 #include "hw/boards.h"
 
+#include "accel/dummy/dummy-cpus.h"
+
 //#define DEBUG_XEN
 
 #ifdef DEBUG_XEN
@@ -153,6 +156,10 @@ static void xen_setup_post(MachineState *ms, AccelState *accel)
     }
 }
 
+const CpusAccel xen_cpus = {
+    .create_vcpu_thread = dummy_start_vcpu_thread,
+};
+
 static int xen_init(MachineState *ms)
 {
     MachineClass *mc = MACHINE_GET_CLASS(ms);
@@ -180,6 +187,9 @@ static int xen_init(MachineState *ms)
      * opt out of system RAM being allocated by generic code
      */
     mc->default_ram_id = NULL;
+
+    cpus_register_accel(&xen_cpus);
+
     return 0;
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 12 21:35:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 21:35:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6034.15797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS5TO-0007U7-0q; Mon, 12 Oct 2020 21:35:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6034.15797; Mon, 12 Oct 2020 21:35:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS5TN-0007U0-U7; Mon, 12 Oct 2020 21:35:13 +0000
Received: by outflank-mailman (input) for mailman id 6034;
 Mon, 12 Oct 2020 21:35:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=w0qL=DT=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kS5TM-0007Tv-KN
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 21:35:12 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44a200bb-fba6-4669-ae4e-e0c31806915c;
 Mon, 12 Oct 2020 21:35:11 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09CLYqhX089433
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 12 Oct 2020 17:34:58 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09CLYpGX089432;
 Mon, 12 Oct 2020 14:34:51 -0700 (PDT) (envelope-from ehem)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=w0qL=DT=m5p.com=ehem@srs-us1.protection.inumbo.net>)
	id 1kS5TM-0007Tv-KN
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 21:35:12 +0000
X-Inumbo-ID: 44a200bb-fba6-4669-ae4e-e0c31806915c
Received: from mailhost.m5p.com (unknown [74.104.188.4])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 44a200bb-fba6-4669-ae4e-e0c31806915c;
	Mon, 12 Oct 2020 21:35:11 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
	by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09CLYqhX089433
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
	Mon, 12 Oct 2020 17:34:58 -0400 (EDT)
	(envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
	by m5p.com (8.15.2/8.15.2/Submit) id 09CLYpGX089432;
	Mon, 12 Oct 2020 14:34:51 -0700 (PDT)
	(envelope-from ehem)
Date: Mon, 12 Oct 2020 14:34:51 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>,
        Masami Hiramatsu <masami.hiramatsu@linaro.org>,
        xen-devel@lists.xenproject.org, Alex Benn??e <alex.bennee@linaro.org>,
        bertrand.marquis@arm.com, andre.przywara@arm.com,
        Julien Grall <jgrall@amazon.com>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
Message-ID: <20201012213451.GA89158@mattapan.m5p.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org>
 <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Oct 12, 2020 at 12:02:14PM -0700, Stefano Stabellini wrote:
> On Sat, 10 Oct 2020, Julien Grall wrote:
> > Therefore, I think the code should not try to find the STAO. Instead, it
> > should check whether the SPCR table is present.
> 
> Yes, that makes sense, but that brings me to the next question.
> 
> SPCR seems to be required by SBBR, however, Masami wrote that he could
> boot on a system without SPCR, which gets me very confused for two
> reasons:
> 
> 1) Why is there no SPCR? Isn't it supposed to be mandatory? Is it
> because there is no UART on Masami's system?

I'm on different hardware, but some folks have set up Tianocore for it.
According to Documentation/arm64/acpi_object_usage.rst,
"Required: DSDT, FADT, GTDT, MADT, MCFG, RSDP, SPCR, XSDT".  Yet when
booting a Linux kernel directly on the hardware it lists APIC, BGRT,
CSRT, DSDT, DBG2, FACP, GTDT, PPTT, RSDP, and XSDT.

I don't know whether Linux's ACPI code omits mention of some required
tables and merely panics if they're absent.  My speculation, though, is
that the list of required tables has shrunk, SPCR is no longer required,
and the documentation is out of date.  Perhaps SPCR was required by
early Linux ACPI implementations, but more recent ones dropped that
requirement?
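
The mismatch between the documented requirements and the observed table
list can be checked mechanically. A small illustrative sketch (the
observed list is taken from this message; note that the FADT and MADT
appear under their on-disk signatures "FACP" and "APIC"):

```python
# Required tables per Documentation/arm64/acpi_object_usage.rst
REQUIRED = {"DSDT", "FADT", "GTDT", "MADT", "MCFG", "RSDP", "SPCR", "XSDT"}

# Tables Linux reported when booted directly on the hardware
observed = {"APIC", "BGRT", "CSRT", "DSDT", "DBG2", "FACP",
            "GTDT", "PPTT", "RSDP", "XSDT"}

# The FADT carries the signature "FACP", and the MADT "APIC"
ALIASES = {"FACP": "FADT", "APIC": "MADT"}
normalized = {ALIASES.get(sig, sig) for sig in observed}

missing = sorted(REQUIRED - normalized)
print(missing)  # ['MCFG', 'SPCR'] -- "required" tables absent on this board
```

So after accounting for signature aliases, SPCR (and MCFG) really are
missing here despite being listed as required.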

> 2) If there is no SPCR, how did Masami manage to boot Xen?
> I take it without any serial output? Just with the framebuffer?

On my board the provided tables are sufficient for Linux to identify
ttyAMA0.  Linux's efifb driver finds the framebuffer and apparently
makes it usable.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Oct 12 22:45:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 22:45:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6040.15815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS6Yo-0005uN-8b; Mon, 12 Oct 2020 22:44:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6040.15815; Mon, 12 Oct 2020 22:44:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS6Yo-0005uG-5X; Mon, 12 Oct 2020 22:44:54 +0000
Received: by outflank-mailman (input) for mailman id 6040;
 Mon, 12 Oct 2020 22:44:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kS6Ym-0005to-Ho
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 22:44:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1df71836-67b8-4756-9e5d-0769475e54c3;
 Mon, 12 Oct 2020 22:44:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS6Yf-0001pm-8y; Mon, 12 Oct 2020 22:44:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS6Yf-0001ZT-2o; Mon, 12 Oct 2020 22:44:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kS6Yf-0007lZ-2M; Mon, 12 Oct 2020 22:44:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=FZzu=DT=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kS6Ym-0005to-Ho
	for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 22:44:52 +0000
X-Inumbo-ID: 1df71836-67b8-4756-9e5d-0769475e54c3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1df71836-67b8-4756-9e5d-0769475e54c3;
	Mon, 12 Oct 2020 22:44:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ephr/RvgcUQLVyuIn/L8KtQgwW/T5V2t+1+ujgOIVRw=; b=rqcdvp4Gd8aiBmKrr1ZtEKcjnZ
	/EFYP+BGcI4IRmU5ZXQL1ohP6dKH0D4BM09S9XZ1UHJ46ygKJgaDUFdZyRvqt1eMxdz2wizo0a6sz
	C/joiHFM9vxEQBWI3oe7rBNQVmvH8FWRaTk7JZvIQqbcXGCfT7sGuiynjBIROxsiSNWE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kS6Yf-0001pm-8y; Mon, 12 Oct 2020 22:44:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kS6Yf-0001ZT-2o; Mon, 12 Oct 2020 22:44:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kS6Yf-0007lZ-2M; Mon, 12 Oct 2020 22:44:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155748-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155748: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 12 Oct 2020 22:44:45 +0000

flight 155748 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155748/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    3 days
Failing since        155612  2020-10-09 18:01:22 Z    3 days   23 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without having to rebuild xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 12 23:31:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 12 Oct 2020 23:31:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6044.15831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS7Hw-0002E4-SM; Mon, 12 Oct 2020 23:31:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6044.15831; Mon, 12 Oct 2020 23:31:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS7Hw-0002Dx-PN; Mon, 12 Oct 2020 23:31:32 +0000
Received: by outflank-mailman (input) for mailman id 6044;
 Mon, 12 Oct 2020 23:31:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uzT3=DT=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kS7Hw-0002Ds-6T
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 23:31:32 +0000
Received: from mga03.intel.com (unknown [134.134.136.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d74d2054-0344-497d-8175-e3f939398f74;
 Mon, 12 Oct 2020 23:31:29 +0000 (UTC)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 12 Oct 2020 16:31:28 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 12 Oct 2020 16:31:27 -0700
IronPort-SDR: AeZxv+XjWchjyxZAu8U9B1gJv6fjD1GzNAwB2Xbv36DEHH5mgoOIuBay/GcuyiUT9Jrxhjt3So
 VhbfXG3AIlmg==
X-IronPort-AV: E=McAfee;i="6000,8403,9772"; a="165879239"
X-IronPort-AV: E=Sophos;i="5.77,368,1596524400"; 
   d="scan'208";a="165879239"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: tYujEEKuwB7pwkDEJNgfSBtOh7hwJ+YZEVhRvoIibttlBusG3o0pRElwiIGjx+C70rOitVdHyl
 N8yvUi3stUVQ==
X-IronPort-AV: E=Sophos;i="5.77,368,1596524400"; 
   d="scan'208";a="313606559"
Date: Mon, 12 Oct 2020 16:31:26 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Dave Hansen <dave.hansen@intel.com>, Eric Biggers <ebiggers@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, linux-aio@kvack.org,
	linux-efi@vger.kernel.org, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-mmc@vger.kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
	target-devel@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-kselftest@vger.kernel.org, samba-technical@lists.samba.org,
	ceph-devel@vger.kernel.org, drbd-dev@lists.linbit.com,
	devel@driverdev.osuosl.org, linux-cifs@vger.kernel.org,
	linux-nilfs@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, linux-rdma@vger.kernel.org,
	x86@kernel.org, amd-gfx@lists.freedesktop.org,
	linux-afs@lists.infradead.org, cluster-devel@redhat.com,
	linux-cachefs@redhat.com, intel-wired-lan@lists.osuosl.org,
	xen-devel@lists.xenproject.org, linux-ext4@vger.kernel.org,
	Fenghua Yu <fenghua.yu@intel.com>, ecryptfs@vger.kernel.org,
	linux-um@lists.infradead.org, intel-gfx@lists.freedesktop.org,
	linux-erofs@lists.ozlabs.org, reiserfs-devel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-bcache@vger.kernel.org,
	Jaegeuk Kim <jaegeuk@kernel.org>,
	Dan Williams <dan.j.williams@intel.com>, io-uring@vger.kernel.org,
	linux-nfs@vger.kernel.org, linux-ntfs-dev@lists.sourceforge.net,
	netdev@vger.kernel.org, kexec@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net,
	linux-fsdevel@vger.kernel.org, bpf@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH RFC PKS/PMEM 22/58] fs/f2fs: Utilize new kmap_thread()
Message-ID: <20201012233126.GD2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-23-ira.weiny@intel.com>
 <20201009213434.GA839@sol.localdomain>
 <20201010003954.GW20115@casper.infradead.org>
 <20201010013036.GD1122@sol.localdomain>
 <20201012065635.GB2046448@iweiny-DESK2.sc.intel.com>
 <20201012161946.GA858@sol.localdomain>
 <5d621db9-23d4-e140-45eb-d7fca2093d2b@intel.com>
 <20201012164438.GA20115@casper.infradead.org>
 <20201012195354.GC2046448@iweiny-DESK2.sc.intel.com>
 <20201012200254.GB20115@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201012200254.GB20115@casper.infradead.org>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Mon, Oct 12, 2020 at 09:02:54PM +0100, Matthew Wilcox wrote:
> On Mon, Oct 12, 2020 at 12:53:54PM -0700, Ira Weiny wrote:
> > On Mon, Oct 12, 2020 at 05:44:38PM +0100, Matthew Wilcox wrote:
> > > On Mon, Oct 12, 2020 at 09:28:29AM -0700, Dave Hansen wrote:
> > > > kmap_atomic() is always preferred over kmap()/kmap_thread().
> > > > kmap_atomic() is _much_ more lightweight since its TLB invalidation is
> > > > always CPU-local and never broadcast.
> > > > 
> > > > So, basically, unless you *must* sleep while the mapping is in place,
> > > > kmap_atomic() is preferred.
> > > 
> > > But kmap_atomic() disables preemption, so the _ideal_ interface would map
> > > it only locally, then on preemption make it global.  I don't even know
> > > if that _can_ be done.  But this email makes it seem like kmap_atomic()
> > > has no downsides.
> > 
> > And that is IIUC what Thomas was trying to solve.
> > 
> > Also, Linus brought up that kmap_atomic() has quirks in nesting.[1]
> > 
> > From what I can see all of these discussions support the need to have something
> > between kmap() and kmap_atomic().
> > 
> > However, the reason behind converting call sites to kmap_thread() are different
> > between Thomas' patch set and mine.  Both require more kmap granularity.
> > However, they do so with different reasons and underlying implementations but
> > with the _same_ resulting semantics; a thread local mapping which is
> > preemptable.[2]  Therefore they each focus on changing different call sites.
> > 
> > While this patch set is huge I think it serves a valuable purpose to identify a
> > large number of call sites which are candidates for this new semantic.
> 
> Yes, I agree.  My problem with this patch-set is that it ties it to
> some Intel feature that almost nobody cares about.

I humbly disagree.  At this level, the only thing this is tied to is the idea
that there are additional memory protections available which can be enabled
quickly on a per-thread basis.  PKS on Intel is but one implementation of that.

Even the kmap code only knows that something special needs to be done for a
devm page.

>
> Maybe we should
> care about it, but you didn't try very hard to make anyone care about
> it in the cover letter.

Ok, my bad.  We have customers who care very much about restricting access to
the PMEM pages to prevent bugs in the kernel from causing permanent damage to
their data/file systems.  I'll reword the cover letter to make that clearer.

> 
> For a future patch-set, I'd like to see you just introduce the new
> API.  Then you can optimise the Intel implementation of it afterwards.
> Those patch-sets have entirely different reviewers.

I considered doing this.  But this seemed more logical because the feature is
being driven by PMEM, which is behind the kmap interface, not by the users of
the API.

I can introduce a patch set with a kmap_thread() call which does nothing, if
that is more palatable, but it seems wrong to me to do so.

Ira


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 00:41:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 00:41:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6046.15843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8Mm-0001BQ-Bf; Tue, 13 Oct 2020 00:40:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6046.15843; Tue, 13 Oct 2020 00:40:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8Mm-0001BJ-6z; Tue, 13 Oct 2020 00:40:36 +0000
Received: by outflank-mailman (input) for mailman id 6046;
 Tue, 13 Oct 2020 00:40:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hVX3=DU=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kS8Mk-0001An-BW
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 00:40:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17f45ace-568a-4904-90b0-9fa9f4f31097;
 Tue, 13 Oct 2020 00:40:33 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6B68521655;
 Tue, 13 Oct 2020 00:40:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602549632;
	bh=DcvpmSPD3amwYj0sKLgyCYgJqiHDDrWFFKw+/xE1sFM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Owf9KYTXxzN3Qtmk0YTw10GHvBeeJ46g24zG1QRaTwPouoLyYgSJcvLo9iv4lTX4B
	 gtiTR8mkFxNzcaEZT9FKoueVdYRVwaDys2b7+kgHArQwjfPf2eoMDbefdH19CyRwED
	 wylYyT9aSZ1SBDrUyND/8IIH7ctfRwQaIhjy59jw=
Date: Mon, 12 Oct 2020 17:40:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Trammell Hudson <hudson@trmm.net>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [xen-unstable-smoke test] 155612: regressions - FAIL
In-Reply-To: <01c8b669-d77e-75c4-7317-213e32eb2b73@xen.org>
Message-ID: <alpine.DEB.2.21.2010121731570.10386@sstabellini-ThinkPad-T480s>
References: <osstest-155612-mainreport@xen.org> <0d3766f0-a1a4-bc86-9372-79b1b65eae47@citrix.com> <l13ej-jSgj1tw6_awkBjUgauf1oh4k3PIQavoWsHdhhiH0qLc1hI4x0lK1Sx4S6DseYE2JQ4w1uFwuEgF325BDawQcpOe5sDX95C3MyqXlQ=@trmm.net>
 <01c8b669-d77e-75c4-7317-213e32eb2b73@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 10 Oct 2020, Julien Grall wrote:
> Hi,
> 
> On 10/10/2020 12:42, Trammell Hudson wrote:
> > On Friday, October 9, 2020 10:27 PM, Andrew Cooper
> > <andrew.cooper3@citrix.com> wrote:
> > > [...]
> > > Looks like arm64 is crashing fairly early on boot.
> > > 
> > > This is probably caused by "efi: Enable booting unified
> > > hypervisor/kernel/initrd images".
> > 
> > Darn it.  I'm working out how to build and boot qemu aarch64 so
> > that I can figure out what is going on.
> 
> FWIW, in OSSTest, we are chainloading Xen from GRUB. I have tried
> chainloading on QEMU but couldn't get it to work so far (even without your
> series).
> 
> That said, I have no trouble booting the GRUB way (i.e. via multiboot).

It took me a while to set it up, but now I have a test environment based
on an RPi4 where I can chainload Xen from GRUB EFI and boot successfully up
to the rootfs (I don't have a rootfs set up correctly yet, so it breaks
with the usual "Cannot open root device"), which means I can get both Xen
and the Dom0 kernel to boot.

I hope it will be useful in the future, but in this case it didn't help
because I get the same behavior with and without Trammell's patches. For
me the chainload boot doesn't break.

Could it be down to the GRUB version in use? I am using U-Boot EFI to
load GRUB, but I doubt that could be a meaningful difference.


> > Also, I'm not sure that it is possible to build a unified arm
> > image right now; objcopy (and all of the obj* tools) say
> > "File format not recognized" on the xen.efi file.  The MZ file
> > is not what they are expecting for ARM executables.
> 
> IIUC, you are trying to add a section into the EFI binary and not the ELF. Is
> that correct?
> 
> I don't know what x86 is doing but for Arm, xen.efi (and Linux Image) is
> custom built.

Specifically, see:

xen/arch/arm/arm64/head.S:efi_head


> So it may lack information to be recognized by objdump.
> 
> My knowledge of objdump is fairly limited. If you are interested in fixing it,
> then I would suggest asking the binutils community what they expect.
> 
> We could then adapt so objdump can recognize it.



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 00:45:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 00:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6049.15856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8RG-0001VV-T3; Tue, 13 Oct 2020 00:45:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6049.15856; Tue, 13 Oct 2020 00:45:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8RG-0001VO-Q4; Tue, 13 Oct 2020 00:45:14 +0000
Received: by outflank-mailman (input) for mailman id 6049;
 Tue, 13 Oct 2020 00:45:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yijH=DU=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1kS8RF-0001V8-OM
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 00:45:14 +0000
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 987dc933-c226-4274-b8c3-5c731dcbdba9;
 Tue, 13 Oct 2020 00:45:10 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4C9GzQ3G7kz9sVD; Tue, 13 Oct 2020 11:45:06 +1100 (AEDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1602549906;
	bh=FxmL8PgRV16Qjfh784pRUyyS19OMN6XPdKaH+2s3XQY=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Q5VlRYK42GLi323qOU1U2uPXwSgfixA5nhvnab92FcUhIbuIY8qjKt+l9+W5lbOn1
	 AHgXCUgkxM3LVkqXj64KOeWyIdTlj92EJnGgC7eQw5WtognVDjiZ3OPD6XO7UVi554
	 hT6k9cyVoyI/u8szxx+DNJXmKPtBUd6/JwkI7Nw0=
Date: Tue, 13 Oct 2020 11:43:30 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org, qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>, Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	Philippe Mathieu-Daudé <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Cédric Le Goater <clg@kaod.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: Re: [PATCH 4/5] hw: Use the PCI_SLOT() macro from 'hw/pci/pci.h'
Message-ID: <20201013004330.GI71119@yekko.fritz.box>
References: <20201012124506.3406909-1-philmd@redhat.com>
 <20201012124506.3406909-5-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="D6z0c4W1rkZNF4Vu"
Content-Disposition: inline
In-Reply-To: <20201012124506.3406909-5-philmd@redhat.com>


--D6z0c4W1rkZNF4Vu
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Mon, Oct 12, 2020 at 02:45:05PM +0200, Philippe Mathieu-Daudé wrote:
> From: Philippe Mathieu-Daudé <f4bug@amsat.org>
> 
> We already have a generic PCI_SLOT() macro in "hw/pci/pci.h"
> to extract the PCI slot identifier, use it.
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

ppc parts
Acked-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  hw/hppa/dino.c        | 2 +-
>  hw/i386/xen/xen-hvm.c | 2 +-
>  hw/isa/piix3.c        | 2 +-
>  hw/mips/gt64xxx_pci.c | 2 +-
>  hw/pci-host/bonito.c  | 2 +-
>  hw/pci-host/ppce500.c | 2 +-
>  hw/ppc/ppc4xx_pci.c   | 2 +-
>  hw/sh4/sh_pci.c       | 2 +-
>  8 files changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
> index 81053b5fb64..5b82c9440d1 100644
> --- a/hw/hppa/dino.c
> +++ b/hw/hppa/dino.c
> @@ -496,7 +496,7 @@ static void dino_set_irq(void *opaque, int irq, int level)
>  
>  static int dino_pci_map_irq(PCIDevice *d, int irq_num)
>  {
> -    int slot = d->devfn >> 3;
> +    int slot = PCI_SLOT(d->devfn);
>  
>      assert(irq_num >= 0 && irq_num <= 3);
>  
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index f3ababf33b6..276254e6ca9 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -140,7 +140,7 @@ typedef struct XenIOState {
>  
>  int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num)
>  {
> -    return irq_num + ((pci_dev->devfn >> 3) << 2);
> +    return irq_num + (PCI_SLOT(pci_dev->devfn) << 2);
>  }
>  
>  void xen_piix3_set_irq(void *opaque, int irq_num, int level)
> diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
> index 587850b8881..f46ccae25cf 100644
> --- a/hw/isa/piix3.c
> +++ b/hw/isa/piix3.c
> @@ -361,7 +361,7 @@ type_init(piix3_register_types)
>  static int pci_slot_get_pirq(PCIDevice *pci_dev, int pci_intx)
>  {
>      int slot_addend;
> -    slot_addend = (pci_dev->devfn >> 3) - 1;
> +    slot_addend = PCI_SLOT(pci_dev->devfn) - 1;
>      return (pci_intx + slot_addend) & 3;
>  }
>  
> diff --git a/hw/mips/gt64xxx_pci.c b/hw/mips/gt64xxx_pci.c
> index e091bc4ed55..588e6f99301 100644
> --- a/hw/mips/gt64xxx_pci.c
> +++ b/hw/mips/gt64xxx_pci.c
> @@ -982,7 +982,7 @@ static int gt64120_pci_map_irq(PCIDevice *pci_dev, int irq_num)
>  {
>      int slot;
>  
> -    slot = (pci_dev->devfn >> 3);
> +    slot = PCI_SLOT(pci_dev->devfn);
>  
>      switch (slot) {
>      /* PIIX4 USB */
> diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
> index b05295639a6..ee8b193e15b 100644
> --- a/hw/pci-host/bonito.c
> +++ b/hw/pci-host/bonito.c
> @@ -570,7 +570,7 @@ static int pci_bonito_map_irq(PCIDevice *pci_dev, int irq_num)
>  {
>      int slot;
>  
> -    slot = (pci_dev->devfn >> 3);
> +    slot = PCI_SLOT(pci_dev->devfn);
>  
>      switch (slot) {
>      case 5:   /* FULOONG2E_VIA_SLOT, SouthBridge, IDE, USB, ACPI, AC97, MC97 */
> diff --git a/hw/pci-host/ppce500.c b/hw/pci-host/ppce500.c
> index 9517aab913e..5ad1424b31a 100644
> --- a/hw/pci-host/ppce500.c
> +++ b/hw/pci-host/ppce500.c
> @@ -342,7 +342,7 @@ static const MemoryRegionOps e500_pci_reg_ops = {
>  
>  static int mpc85xx_pci_map_irq(PCIDevice *pci_dev, int pin)
>  {
> -    int devno = pci_dev->devfn >> 3;
> +    int devno = PCI_SLOT(pci_dev->devfn);
>      int ret;
>  
>      ret = ppce500_pci_map_irq_slot(devno, pin);
> diff --git a/hw/ppc/ppc4xx_pci.c b/hw/ppc/ppc4xx_pci.c
> index 28724c06f88..e8789f64e80 100644
> --- a/hw/ppc/ppc4xx_pci.c
> +++ b/hw/ppc/ppc4xx_pci.c
> @@ -243,7 +243,7 @@ static void ppc4xx_pci_reset(void *opaque)
>   * may need further refactoring for other boards. */
>  static int ppc4xx_pci_map_irq(PCIDevice *pci_dev, int irq_num)
>  {
> -    int slot = pci_dev->devfn >> 3;
> +    int slot = PCI_SLOT(pci_dev->devfn);
>  
>      trace_ppc4xx_pci_map_irq(pci_dev->devfn, irq_num, slot);
>  
> diff --git a/hw/sh4/sh_pci.c b/hw/sh4/sh_pci.c
> index 73d2d0bccb0..734892f47c7 100644
> --- a/hw/sh4/sh_pci.c
> +++ b/hw/sh4/sh_pci.c
> @@ -109,7 +109,7 @@ static const MemoryRegionOps sh_pci_reg_ops = {
>  
>  static int sh_pci_map_irq(PCIDevice *d, int irq_num)
>  {
> -    return (d->devfn >> 3);
> +    return PCI_SLOT(d->devfn);
>  }
>  
>  static void sh_pci_set_irq(void *opaque, int irq_num, int level)

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

--D6z0c4W1rkZNF4Vu
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAl+E+DIACgkQbDjKyiDZ
s5JM4g//S3eKSoztBYSfXaiJO/9rehJ7mK4hYgR5S2pvQTSFkf6LP+HJaCkVa0lE
a4bUGSpR3bqHhGGYWF6ppIeGNowkD++B5YnsbeFaFVTxaRdDLY0WqcB6YnNVTU2Z
LskxdcLYtuDddT+S+DU/4ZkC+nXSJyfqU9CTjFq2QuY+4TI4Zf4zeK9m375Oo6qN
yUSBxj7pOiPy149ZF4tuIVBJy/WOijP5DUmz+I3xS/BqQecGuD2kQzRLGfC/oULr
TaulgUcr35isuBXOoV9SfQpiLjhzK7Thp1twsSV60rbHKN8mK5uL1M4h8NdJqVmm
ncct21mR6XQg5Zs/EB4ClFpa2IFKfCgDIA19wUUW627e1OAV8Xdb7OlLV7AkrB0i
mdt6kGLrs0dOSXUqqDZP04NdDFx49Jp1tK9ZBspjw9nxiluNhpOYC1SuND2+lenf
OiaZxMPNyIYKbCS1pfMGXyKsh8/qcf8M2RNtqdjRzKn2Odn9J+TFy6M7I6Y1VdTa
nJbPVDkHKoqRkkehT4fzSSW72NYznbJVTPi/NuTsoWLy6hqFxTemkZWf8PcjLhLj
X236DGBmExrBcFtPyuSV0iS2Nc8vxxzd+XiMUoAkPGmD9EWwU/++5S7HXm9PNrZ2
QetxthxhIXdnOkIu/hzAFMaj+8ABa4LWmrVBtq1SZmi1hdka2do=
=gg8r
-----END PGP SIGNATURE-----

--D6z0c4W1rkZNF4Vu--


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 00:45:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 00:45:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6050.15869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8RI-0001X1-AE; Tue, 13 Oct 2020 00:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6050.15869; Tue, 13 Oct 2020 00:45:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8RI-0001Ws-68; Tue, 13 Oct 2020 00:45:16 +0000
Received: by outflank-mailman (input) for mailman id 6050;
 Tue, 13 Oct 2020 00:45:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yijH=DU=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1kS8RG-0001VJ-Jh
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 00:45:14 +0000
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7dc5a121-165f-4f58-ad07-b7d098b9e31f;
 Tue, 13 Oct 2020 00:45:11 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4C9GzQ2kfkz9sTt; Tue, 13 Oct 2020 11:45:06 +1100 (AEDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1602549906;
	bh=6z03D1ZV3p2wtWtAQ8HdiMejCmKYM8pplIFaWXlD8hg=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=Zfh5tPd5hMeq64adb90DDfNzMuDGuOVRs8Ifdlc8DR3LjBQWAgTUHf/Po97ZEGu8W
	 ndDXL0gxBLQKs0mJIjAQnOr9ekbU86UYMtLsuFFJ3SbC1eFrK22hbZxekU7OFKMygG
	 VUdBDaOmeqhjL14Ffsxb4Fm/c008JgvhPTizANX8=
Date: Tue, 13 Oct 2020 11:42:29 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org, qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>, Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	Philippe Mathieu-Daudé <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Cédric Le Goater <clg@kaod.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: Re: [PATCH 2/5] hw/pci-host: Use the PCI_BUILD_BDF() macro from
 'hw/pci/pci.h'
Message-ID: <20201013004229.GG71119@yekko.fritz.box>
References: <20201012124506.3406909-1-philmd@redhat.com>
 <20201012124506.3406909-3-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="WkfBGePaEyrk4zXB"
Content-Disposition: inline
In-Reply-To: <20201012124506.3406909-3-philmd@redhat.com>


--WkfBGePaEyrk4zXB
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Mon, Oct 12, 2020 at 02:45:03PM +0200, Philippe Mathieu-Daudé wrote:
> From: Philippe Mathieu-Daudé <f4bug@amsat.org>
> 
> We already have a generic PCI_BUILD_BDF() macro in "hw/pci/pci.h"
> to pack these values, use it.
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

pnv part

Acked-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  hw/pci-host/bonito.c   | 3 +--
>  hw/pci-host/pnv_phb4.c | 2 +-
>  2 files changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
> index abb3ee86769..b05295639a6 100644
> --- a/hw/pci-host/bonito.c
> +++ b/hw/pci-host/bonito.c
> @@ -196,8 +196,7 @@ FIELD(BONGENCFG, PCIQUEUE,      12, 1)
>  #define PCI_IDSEL_VIA686B          (1 << PCI_IDSEL_VIA686B_BIT)
> 
>  #define PCI_ADDR(busno , devno , funno , regno)  \
> -    ((((busno) << 8) & 0xff00) + (((devno) << 3) & 0xf8) + \
> -    (((funno) & 0x7) << 8) + (regno))
> +    ((PCI_BUILD_BDF(busno, PCI_DEVFN(devno , funno)) << 8) + (regno))
> 
>  typedef struct BonitoState BonitoState;
> 
> diff --git a/hw/pci-host/pnv_phb4.c b/hw/pci-host/pnv_phb4.c
> index 03daf40a237..6328e985f81 100644
> --- a/hw/pci-host/pnv_phb4.c
> +++ b/hw/pci-host/pnv_phb4.c
> @@ -889,7 +889,7 @@ static bool pnv_phb4_resolve_pe(PnvPhb4DMASpace *ds)
>      /* Read RTE */
>      bus_num = pci_bus_num(ds->bus);
>      addr = rtt & PHB_RTT_BASE_ADDRESS_MASK;
> -    addr += 2 * ((bus_num << 8) | ds->devfn);
> +    addr += 2 * PCI_BUILD_BDF(bus_num, ds->devfn);
>      if (dma_memory_read(&address_space_memory, addr, &rte, sizeof(rte))) {
>          phb_error(ds->phb, "Failed to read RTT entry at 0x%"PRIx64, addr);
>          /* Set error bits ? fence ? ... */

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

--WkfBGePaEyrk4zXB
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAl+E9/UACgkQbDjKyiDZ
s5Ir4RAAgIeuP6joYj5FhZqzV4VbZ4t/QTHS4dXl1LknvyV37IXxicFsUz2A5aIj
PFrJU+tBknpj/rFbZIw5dWCK1N2kt2FM0fR9pIHzgOlH8Nv4LUawy4Fq4LREQGSV
LLS5OAEQ+SiphKfeMjYeokeznwVQYKIPGGY2lFk/DPQrHjYOx2Ln6sS3FcAvKO/4
5H+5b6uWMw9X4quka9o8p56m3/oBUYL3Yoy5wlZO8X8gniZ0GfV9DfqH3otegrZu
/1mEbsGK0hgILQrKi4mcND7aPFF9ijcGuj9HkS9xz9emXb2y9cXSSmHVr3lFmCsc
iyXWwqnehxrZn/3mAYmaISt9ACZENDaZ5zHTSvmXrEhz8RBsy0eiqD7i852Y86zZ
Sw9OuC135HwBDpvqVR2duwhORJt1+OVLFtJjijRImcWZQXiKIsmORRoWhNxRRCjE
pLsba7IztCdSsn7NMAfJCTZmaaPCq3ckqxj5C8zFOrJrV5P47lPZ+/v13iCZ7P4b
RZ5QQ35Hf9H8udCCu2cfij/0h9i+00BJO2o7az4htlCG2N3t5b0Fh6RTFWcTHGCe
WL9V/O16r+GmQ3P3wWCgMejlQ7/E37B3C/RPCCjkD8eNhz8sMc9m4ZvuIk7/Vovi
72S/ctPCd/MMjoRxjWSRehdhjrvUzRAzEaHlK2+NMCY6mJOLsqk=
=ZKQ5
-----END PGP SIGNATURE-----

--WkfBGePaEyrk4zXB--


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 00:45:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 00:45:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6051.15881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8RM-0001aF-IP; Tue, 13 Oct 2020 00:45:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6051.15881; Tue, 13 Oct 2020 00:45:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8RM-0001a5-FF; Tue, 13 Oct 2020 00:45:20 +0000
Received: by outflank-mailman (input) for mailman id 6051;
 Tue, 13 Oct 2020 00:45:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yijH=DU=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1kS8RK-0001V8-Mu
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 00:45:18 +0000
Received: from ozlabs.org (unknown [203.11.71.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ab263fa-56a9-498b-a966-d4a3cf00f787;
 Tue, 13 Oct 2020 00:45:11 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4C9GzQ3p3Mz9sVM; Tue, 13 Oct 2020 11:45:06 +1100 (AEDT)
X-Inumbo-ID: 7ab263fa-56a9-498b-a966-d4a3cf00f787
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1602549906;
	bh=B3neiLDzH8RrfvD/WtpyEbiHUt7aAF0dawOY7BXLETc=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=PCr2cCivKVjWimiGYnDjtlLa5uZT/Y8VHxNTQRU5QHyFR+AZfsTa77b42k+9vKwx3
	 LO/i6hMZztXNNLP4o9AfcDaARlWHY/m6om2FEBougwEDqtlvWxnx5+HFmGIdaqHtDm
	 RYRZpPHZenUrS02Ae3vWPHJzSCUe7c3rZqDI6cvU=
Date: Tue, 13 Oct 2020 11:42:56 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org, qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>, Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	Philippe Mathieu-Daudé <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Cédric Le Goater <clg@kaod.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: Re: [PATCH 3/5] hw/pci-host/uninorth: Use the PCI_FUNC() macro from
 'hw/pci/pci.h'
Message-ID: <20201013004256.GH71119@yekko.fritz.box>
References: <20201012124506.3406909-1-philmd@redhat.com>
 <20201012124506.3406909-4-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Yia77v5a8fyVHJSl"
Content-Disposition: inline
In-Reply-To: <20201012124506.3406909-4-philmd@redhat.com>


--Yia77v5a8fyVHJSl
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Mon, Oct 12, 2020 at 02:45:04PM +0200, Philippe Mathieu-Daudé wrote:
> From: Philippe Mathieu-Daudé <f4bug@amsat.org>
> 
> We already have a generic PCI_FUNC() macro in "hw/pci/pci.h" to
> extract the PCI function identifier, use it.
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Acked-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  hw/pci-host/uninorth.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/hw/pci-host/uninorth.c b/hw/pci-host/uninorth.c
> index 1ed1072eeb5..c21de0ab805 100644
> --- a/hw/pci-host/uninorth.c
> +++ b/hw/pci-host/uninorth.c
> @@ -65,7 +65,7 @@ static uint32_t unin_get_config_reg(uint32_t reg, uint32_t addr)
>          if (slot == 32) {
>              slot = -1; /* XXX: should this be 0? */
>          }
> -        func = (reg >> 8) & 7;
> +        func = PCI_FUNC(reg >> 8);
> 
>          /* ... and then convert them to x86 format */
>          /* config pointer */

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

--Yia77v5a8fyVHJSl
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAl+E+BAACgkQbDjKyiDZ
s5JqUhAAwaHNI4PyhRz1NaHKujUD5pEGVl+BohEX4xZhYb9nyi7iA1qkel9qGhcg
2daGPZSaQpaPCM1f2hFxcUKS4TfGfk8CbGQ8rIUHhRoJdR//xqN64Q2uo8yagtCt
bxv8Wo0CnC0JUb9idlbr9S03tiQCJSkEPnK+ACJaGPKDJxA2I61eAVJtRYMX/0Bl
BT3J8QFi+wqH05l7FP1rnv1Z3IhlGk7cPJ126TEypdlBIUTKDr9X8LOYNqgrhTEK
0CIHfwsBPQVd7Gj7QN6yDmRiPKIfmxqnF9NRX+itaSe7EfP0OXiphgJLQvRldCnH
UmmcbaYVhMSjXSPcvRdWY1ahFwR+e256Mv8cMLeYFuwCUI19xagsujCNhr3rt6tj
DbQQx7U3c8+S6ggFHxC0BWy/XrrPtgDKr4Fqm5/nBseLK3P5W5RQMSVn1DCao8hx
TCibJgQ6B9p2bRTp2V2mzICw7k68APPSOcg8r+MxTtArCErG0IfO5+ERd/tEeBsa
4wWXXSn7+DDJauSUqVrnOq4JrxZDiDICujFXP+hhvU/FwBBpwPF5qfNCgtDwPrMR
TwmgY+eMFml+klzUuDX/AbhZstEq2DZvPXMrO8imZXGzyGWiYWP7NDZYTG/jD69z
Rc5HRuZfiGm08uaztotylX7G5u+UceWDaaFFswZ+UseMU1q9whY=
=VMSj
-----END PGP SIGNATURE-----

--Yia77v5a8fyVHJSl--


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 00:45:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 00:45:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6052.15885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8RM-0001aq-UQ; Tue, 13 Oct 2020 00:45:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6052.15885; Tue, 13 Oct 2020 00:45:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8RM-0001ad-O5; Tue, 13 Oct 2020 00:45:20 +0000
Received: by outflank-mailman (input) for mailman id 6052;
 Tue, 13 Oct 2020 00:45:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yijH=DU=ozlabs.org=dgibson@srs-us1.protection.inumbo.net>)
 id 1kS8RL-0001VJ-IE
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 00:45:19 +0000
Received: from ozlabs.org (unknown [2401:3900:2:1::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6456cd1e-34b2-4cbd-884b-6cd902c45769;
 Tue, 13 Oct 2020 00:45:11 +0000 (UTC)
Received: by ozlabs.org (Postfix, from userid 1007)
 id 4C9GzQ4GrGz9sVK; Tue, 13 Oct 2020 11:45:06 +1100 (AEDT)
X-Inumbo-ID: 6456cd1e-34b2-4cbd-884b-6cd902c45769
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
	d=gibson.dropbear.id.au; s=201602; t=1602549906;
	bh=+lALO7HgNrojm6+JDT1ennot8m6liZUo5GH8lT9O9Tg=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=onlHrub/sstF9DahVxIp6mc0qSOP4MEbM6deGOjZB9YlBmj2WNzKmC0iPNNmnvsv9
	 3+A/PXljQDpjzjzwlgkz9u0xN3HKj1rcZ4tGm8+tZtJG2/Azwk2ffuCZKXG0rDP9jo
	 d1bf9D8mnrWnV5p4Cc2SRmbWrpeS5JlZ80Q3jcRc=
Date: Tue, 13 Oct 2020 11:45:00 +1100
From: David Gibson <david@gibson.dropbear.id.au>
To: Philippe Mathieu-Daudé <philmd@redhat.com>
Cc: qemu-devel@nongnu.org, Peter Maydell <peter.maydell@linaro.org>,
	qemu-ppc@nongnu.org, qemu-trivial@nongnu.org,
	Paul Durrant <paul@xen.org>, Aurelien Jarno <aurelien@aurel32.net>,
	qemu-arm@nongnu.org,
	Philippe Mathieu-Daudé <f4bug@amsat.org>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Eduardo Habkost <ehabkost@redhat.com>,
	Jiaxun Yang <jiaxun.yang@flygoat.com>,
	Yoshinori Sato <ysato@users.sourceforge.jp>,
	Cédric Le Goater <clg@kaod.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Helge Deller <deller@gmx.de>,
	Anthony Perard <anthony.perard@citrix.com>,
	Richard Henderson <rth@twiddle.net>,
	Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
	xen-devel@lists.xenproject.org,
	Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Huacai Chen <chenhc@lemote.com>
Subject: Re: [PATCH 5/5] hw: Use the PCI_DEVFN() macro from 'hw/pci/pci.h'
Message-ID: <20201013004500.GJ71119@yekko.fritz.box>
References: <20201012124506.3406909-1-philmd@redhat.com>
 <20201012124506.3406909-6-philmd@redhat.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="dwWFXG4JqVa0wfCP"
Content-Disposition: inline
In-Reply-To: <20201012124506.3406909-6-philmd@redhat.com>


--dwWFXG4JqVa0wfCP
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Mon, Oct 12, 2020 at 02:45:06PM +0200, Philippe Mathieu-Daudé wrote:
> From: Philippe Mathieu-Daudé <f4bug@amsat.org>
> 
> We already have a generic PCI_DEVFN() macro in "hw/pci/pci.h"
> to pack the PCI slot/function identifiers, use it.
> 
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

ppc part

Acked-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  hw/arm/virt.c          | 3 ++-
>  hw/pci-host/uninorth.c | 6 ++----
>  2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/hw/arm/virt.c b/hw/arm/virt.c
> index e465a988d68..f601ef0798c 100644
> --- a/hw/arm/virt.c
> +++ b/hw/arm/virt.c
> @@ -1144,7 +1144,8 @@ static void create_pcie_irq_map(const VirtMachineState *vms,
>                       full_irq_map, sizeof(full_irq_map));
> 
>      qemu_fdt_setprop_cells(vms->fdt, nodename, "interrupt-map-mask",
> -                           0x1800, 0, 0, /* devfn (PCI_SLOT(3)) */
> +                           cpu_to_be16(PCI_DEVFN(3, 0)), /* Slot 3 */
> +                           0, 0,
>                             0x7           /* PCI irq */);
>  }
> 
> diff --git a/hw/pci-host/uninorth.c b/hw/pci-host/uninorth.c
> index c21de0ab805..f73d452bdce 100644
> --- a/hw/pci-host/uninorth.c
> +++ b/hw/pci-host/uninorth.c
> @@ -70,10 +70,8 @@ static uint32_t unin_get_config_reg(uint32_t reg, uint32_t addr)
>          /* ... and then convert them to x86 format */
>          /* config pointer */
>          retval = (reg & (0xff - 7)) | (addr & 7);
> -        /* slot */
> -        retval |= slot << 11;
> -        /* fn */
> -        retval |= func << 8;
> +        /* slot, fn */
> +        retval |= PCI_DEVFN(slot, func) << 8;
>      }
> 
>      trace_unin_get_config_reg(reg, addr, retval);

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

--dwWFXG4JqVa0wfCP
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEdfRlhq5hpmzETofcbDjKyiDZs5IFAl+E+IwACgkQbDjKyiDZ
s5J+QQ//T4fQpDPR/JHRFc6LbsMxAjBTFaF0roQjm+CI8Md8/NY7ptdCqg1+GKy6
eoEUInbZ/945BywYqGDOqWxu3/zJcpG+4spb5usgG+IKMT3N5xuUzKd+PBIaJilP
JhB2UkcG5VhUOvf8FhbPGmAO/M6WYElXbtKGPy9/CLFLNXzWXiJ0MZy4TAAscpra
M3Xz8Fs/lwFTNOQKFXD7BpuESdLdWy5jPMhODBx0H0OCrsr6ZnZcIXYlVUUi4K3f
rLpdkhDyN1ZF2VeLYTslVEfMVCPHxSDFJRRONioEjpP6ELFnXAEvNqRmyYzF/a19
FJ/5nB7L4wNXQ9rV5C6E72joI3nkjujeKZT8lfRSKcz17owRlIJrrwa9hbbxomy+
G9dmE57lTw42mRCEd3nZgeq7CwBK0a5IfJOkt5WEriUMfHjRFyhCB5t7sqbaL5Aa
NDCD7XZowgia7WmsZvAb/dMrjQpUCzya4OtR16TQ63W6wKC/pCr6lb/J+BS9RCu0
mSdB2d1L/eItGWyJvGIHVbf7JSIY0pLYnkaM83E0QsXTKRABRzhQ6CBsK8URM6kh
5HziPQxsVW1ZkJ27hfLKjbZwHshw7IgicYRCDCLa+WfcZQv750KiJq/ku/Gnjq59
uy1uR5C2STlSxRSSNraOCu0u7ad0arUjhx/SNUtxXQBpQWAXBhI=
=E6th
-----END PGP SIGNATURE-----

--dwWFXG4JqVa0wfCP--


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 01:09:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 01:09:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6068.15909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8on-0004oA-4Y; Tue, 13 Oct 2020 01:09:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6068.15909; Tue, 13 Oct 2020 01:09:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8om-0004o3-WB; Tue, 13 Oct 2020 01:09:33 +0000
Received: by outflank-mailman (input) for mailman id 6068;
 Tue, 13 Oct 2020 01:09:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kS8om-0004nb-8E
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 01:09:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6480133-62d0-405d-a696-fc38a8001cfb;
 Tue, 13 Oct 2020 01:09:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS8od-0005pd-Fz; Tue, 13 Oct 2020 01:09:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS8od-0004pS-5k; Tue, 13 Oct 2020 01:09:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kS8od-0003Hi-5E; Tue, 13 Oct 2020 01:09:23 +0000
X-Inumbo-ID: f6480133-62d0-405d-a696-fc38a8001cfb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Bbb+gMBdUXgBuWVW6CqwxPstqRRbP17zXOOlY3ZQtjs=; b=v/0XHK+S0thhNrpRW0L8t9HOHP
	ojRcRx2lQDYGEMm5eF3gMyJXG9u9uN4y6arGTeE0Bxa4B2rua6gt62YkYvQ9udVQMAe9ygvPx6JZV
	CQFOi6B1FtAtk7jiWxjConM8J45Ca7F1HvugspO7CYabnvkwUnaMLOE9ew2Oj/DMCk2A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155743-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155743: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-i386-pvgrub:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a0bdf866873467271eff9a92f179ab0f77d735cb
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 01:09:23 +0000

flight 155743 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155743/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-raw       12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-i386-pvgrub 12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a0bdf866873467271eff9a92f179ab0f77d735cb
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   53 days
Failing since        152659  2020-08-21 14:07:39 Z   52 days   90 attempts
Testing same since   155743  2020-10-12 17:14:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43816 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 01:19:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 01:19:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6071.15923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8xs-0005ud-6N; Tue, 13 Oct 2020 01:18:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6071.15923; Tue, 13 Oct 2020 01:18:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS8xs-0005uW-3P; Tue, 13 Oct 2020 01:18:56 +0000
Received: by outflank-mailman (input) for mailman id 6071;
 Tue, 13 Oct 2020 01:18:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kS8xq-0005tx-FO
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 01:18:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 406e500c-2f5d-45fa-b9bd-15dcda043cd3;
 Tue, 13 Oct 2020 01:18:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS8xi-00061E-VM; Tue, 13 Oct 2020 01:18:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS8xi-000511-Kz; Tue, 13 Oct 2020 01:18:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kS8xi-0005ED-KW; Tue, 13 Oct 2020 01:18:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kS8xq-0005tx-FO
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 01:18:54 +0000
X-Inumbo-ID: 406e500c-2f5d-45fa-b9bd-15dcda043cd3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 406e500c-2f5d-45fa-b9bd-15dcda043cd3;
	Tue, 13 Oct 2020 01:18:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=06objCAF3yeX6s9X47XOcdZOZ5ozTIeX/HhgA3ZiyqM=; b=Rr7Vswc3Tz/lZz1gl23oDuNIX/
	krAaS3p+GfJxyTsBgRcy2kH8NfdvRUsUREfl/QSqGaaiLVypK5aXbgBG6uJJ9ufFDHdp+yA9WjlW3
	FYD/YdFAfn9mbBl4qEHhB4AAKKZRJtq/Hl0ojsDo62mKMFI/ojGCorybTe6buaUUBQSA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kS8xi-00061E-VM; Tue, 13 Oct 2020 01:18:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kS8xi-000511-Kz; Tue, 13 Oct 2020 01:18:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kS8xi-0005ED-KW; Tue, 13 Oct 2020 01:18:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155736-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155736: regressions - trouble: fail/pass/starved
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=bbf5c979011a099af5dc76498918ed7df445635b
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 01:18:46 +0000

flight 155736 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155736/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot       fail in 155717 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 12 debian-di-install fail in 155717 pass in 155736
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 155717 pass in 155736
 test-amd64-amd64-pair   27 guest-migrate/dst_host/src_host fail pass in 155717

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit1   3 hosts-allocate               starved  n/a

version targeted for testing:
 linux                bbf5c979011a099af5dc76498918ed7df445635b
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   73 days
Failing since        152366  2020-08-01 20:49:34 Z   72 days  122 attempts
Testing same since   155717  2020-10-12 02:12:48 Z    0 days    2 attempts

------------------------------------------------------------
2513 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 340006 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 01:50:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 01:50:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6077.15941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS9Rd-0000WH-LK; Tue, 13 Oct 2020 01:49:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6077.15941; Tue, 13 Oct 2020 01:49:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kS9Rd-0000WA-Hu; Tue, 13 Oct 2020 01:49:41 +0000
Received: by outflank-mailman (input) for mailman id 6077;
 Tue, 13 Oct 2020 01:49:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kS9Rc-0000V4-Hj
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 01:49:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 891d00c9-6e1a-4b90-a701-566f8daef1f6;
 Tue, 13 Oct 2020 01:49:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS9RV-0006cZ-4j; Tue, 13 Oct 2020 01:49:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kS9RU-0005lP-TZ; Tue, 13 Oct 2020 01:49:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kS9RU-0005Gq-So; Tue, 13 Oct 2020 01:49:32 +0000
X-Inumbo-ID: 891d00c9-6e1a-4b90-a701-566f8daef1f6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XbZdYbgXz2dEdxCkAiFCtbOawFgwgT8XGyVT53RDopI=; b=DUwvPULil7EklbZVzz8VT8rvqB
	Sm0RLb5Ut2TiBvxptdTqW+u2I0qz8JFvdjQ33XMs5R/cX9IizLhflBYUS8SxKR2NcwN7RgNIqfKn/
	NgoXugq0vZHLpPE0+du1vaZNFVlvqjToQjIyRcfGSKKzUuICcS/GrFju1iCHg8gZT+jk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155751-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155751: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 01:49:32 +0000

flight 155751 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155751/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    3 days
Failing since        155612  2020-10-09 18:01:22 Z    3 days   24 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket-only connection, read-only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of VNC queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM and
    architecture-specific files, into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without having to rebuild xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user-provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
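
(Editorial illustration; not part of the patch series.)  The PE section
lookup described above can be sketched as follows.  The 40-byte layout is
the standard COFF section header; the function name and return value here
are hypothetical and do not reflect Xen's actual pe_find_section()
interface.

```python
import struct

SECTION_HEADER_SIZE = 40  # standard PE/COFF section header size in bytes

def find_pe_section(section_table: bytes, wanted: str):
    """Scan a raw PE/COFF section table for a section by name.

    Loosely mirrors the lookup the commit describes: during EFI boot the
    loader walks its own image's section headers for names such as
    ".kernel" or ".ramdisk".  Returns (virtual_size, virtual_address,
    raw_offset) for the first match, or None if the section is absent.
    """
    for off in range(0, len(section_table) - SECTION_HEADER_SIZE + 1,
                     SECTION_HEADER_SIZE):
        hdr = section_table[off:off + SECTION_HEADER_SIZE]
        # Bytes 0-7: NUL-padded section name.
        name = hdr[:8].rstrip(b"\0").decode("ascii", errors="replace")
        # Bytes 8-23: VirtualSize, VirtualAddress, SizeOfRawData,
        # PointerToRawData (all little-endian 32-bit).
        vsize, vaddr, _rsize, roff = struct.unpack_from("<IIII", hdr, 8)
        if name == wanted:
            return vsize, vaddr, roff
    return None
```

A caller that gets None back would fall back to loading the corresponding
file from disk, matching the fallback behaviour the commit describes.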

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    were allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 03:03:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 03:03:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6080.15953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSAan-0008S3-54; Tue, 13 Oct 2020 03:03:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6080.15953; Tue, 13 Oct 2020 03:03:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSAan-0008Rw-1n; Tue, 13 Oct 2020 03:03:13 +0000
Received: by outflank-mailman (input) for mailman id 6080;
 Tue, 13 Oct 2020 03:03:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNIz=DU=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kSAal-0008Rr-LJ
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 03:03:11 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ecf83feb-d911-4198-8e97-33711573105a;
 Tue, 13 Oct 2020 03:03:09 +0000 (UTC)
X-Inumbo-ID: ecf83feb-d911-4198-8e97-33711573105a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602558189;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version;
  bh=JAZDZCYYIh8edTlKNiVcof5V1G+O0XlJEZ1gmZeAUqA=;
  b=DwfSHbpZtnuFkju9ALqEXTCZ/It/iOL/379GERhs9yQQN8SKueeGosyt
   7hWUtHKY/Zfn2fvp3IiizgFaK2Kj47KTH0rSL4u2zuwQ8B9B2SZsLUGD1
   ergz0i2vlQDWHrvrxtOAnW8o9oCe6EE+cJN6Y605mqHEKarQC5NKJB+fa
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: sovlgH73aKH9f49hPMHdrpe59eKDLPKHvGTzNIj2pTRlaBj7Fu+GLZKyKzQKlTpQErkjn+mCeU
 WzdTOvZyR+D1nKOL+oY8adbr992Y9szjg4OGGwQDsLhtPs1vFkh8JU+d3aPFtBGkYRTGc4NqLF
 69MSp1sbhCj3BKPTrRYz9E+dhJWYPCKY7TemigzBsnVDoN7K+qw2haZ5A8SENnC1BReq6YGPqm
 6UluNySXVNaeZj/CBEumv9IZcjn0T3II33CL9I5EWwjnoepoQjCbmZiqJEP7GQo5UAyKkuydJz
 Q+A=
X-SBRS: 2.5
X-MesageID: 28920670
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,369,1596513600"; 
   d="scan'208";a="28920670"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<wl@xen.org>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>, Chen Yu
	<yu.c.chen@intel.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH 2/2] x86/mwait-idle: Customize IceLake server support
Date: Tue, 13 Oct 2020 04:02:49 +0100
Message-ID: <1602558169-23140-2-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602558169-23140-1-git-send-email-igor.druzhinin@citrix.com>
References: <1602558169-23140-1-git-send-email-igor.druzhinin@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain

From: Chen Yu <yu.c.chen@intel.com>

On the ICX platform, C1E auto-promotion is enabled by default.
As a result, the CPU might fall into C1E more often than on previous
platforms. So disable C1E auto-promotion and expose C1E as a separate
idle state.

Besides C1 and C1E, the exit latency of C6 was measured
by a dedicated tool. However, the exit latency (41us) exposed
by _CST is much smaller than the one we measured (128us). This
is probably because _CST uses the exit latency when woken
up from PC0+C6, rather than from PC6+C6, which is what was
measured. Choose the latter, as we need the longest latency in theory.

Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
[Linux commit a472ad2bcea479ba068880125d7273fc95c14b70]
Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
Applying this gives an almost 100% boost in the sysbench cpu test on a Whitley SDP
---
 xen/arch/x86/cpu/mwait-idle.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 8add13d..f0c6ff9 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -554,6 +554,28 @@ static const struct cpuidle_state skx_cstates[] = {
 	{}
 };
 
+static const struct cpuidle_state icx_cstates[] = {
+	{
+		.name = "C1-ICX",
+		.flags = MWAIT2flg(0x00),
+		.exit_latency = 1,
+		.target_residency = 1,
+	},
+	{
+		.name = "C1E-ICX",
+		.flags = MWAIT2flg(0x01),
+		.exit_latency = 4,
+		.target_residency = 4,
+	},
+	{
+		.name = "C6-ICX",
+		.flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+		.exit_latency = 128,
+		.target_residency = 384,
+	},
+	{}
+};
+
 static const struct cpuidle_state atom_cstates[] = {
 	{
 		.name = "C1E-ATM",
@@ -904,6 +926,11 @@ static const struct idle_cpu idle_cpu_skx = {
 	.disable_promotion_to_c1e = 1,
 };
 
+static const struct idle_cpu idle_cpu_icx = {
+	.state_table = icx_cstates,
+	.disable_promotion_to_c1e = 1,
+};
+
 static const struct idle_cpu idle_cpu_avn = {
 	.state_table = avn_cstates,
 	.disable_promotion_to_c1e = 1,
@@ -958,6 +985,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconstrel = {
 	ICPU(0x8e, skl),
 	ICPU(0x9e, skl),
 	ICPU(0x55, skx),
+	ICPU(0x6a, icx),
 	ICPU(0x57, knl),
 	ICPU(0x85, knl),
 	ICPU(0x5c, bxt),
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 03:03:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 03:03:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6081.15965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSAbK-0000F0-FD; Tue, 13 Oct 2020 03:03:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6081.15965; Tue, 13 Oct 2020 03:03:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSAbK-0000Et-BI; Tue, 13 Oct 2020 03:03:46 +0000
Received: by outflank-mailman (input) for mailman id 6081;
 Tue, 13 Oct 2020 03:03:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNIz=DU=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kSAbI-0000Em-Os
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 03:03:44 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 27ce39d8-7ebc-4fb4-9f47-b6e6e309afee;
 Tue, 13 Oct 2020 03:03:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602558223;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=O/mYe4btk6iL4cQRCK515PsmgpCt8gACdxAma4JiMfQ=;
  b=BNojE2AH4vzfOz3qzMG5JDItz8d3gOzr3b+l6WYFCTNJA2embfv5UsIl
   Kcl5C/5ECM1NIswuDVOmxnTd0fGvrT8HhNb6XlfWpzuDhD+qCCsHW5wyd
   tjowjWGE9O/OuIeBB/lWoQH4WAJsgEcBZwq3iy4pVZypaJ3JHJYJMKtgc
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: cgHnYjnvSKBhweVppumowKMMFABWv8H6ulgkTl4OSbohxpERRkFlnWEiWOhuf7dbgstbxrHCmN
 NMgCi26sn50Qefhv8OWuv0T6uesJ35HqXvU01xZOYqmkcPmo7OuQNYAqUAU94ZPeB2Y07Q2+HT
 LbF2QFQSJDxwOwoysZONUvZ9BACzxnul2/AFVABttr2AJVAdc1sU54Frfk2J/7G7vYVay3sNvS
 +hhbyc+CUM1wFbIN7Ezjhx+4pk6lakyQCeow0VfUZzCQ2lEDFtO921XeecN4lnKlzJiVcYigNi
 gOE=
X-SBRS: 2.5
X-MesageID: 28851031
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,369,1596513600"; 
   d="scan'208";a="28851031"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<wl@xen.org>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>, "Igor
 Druzhinin" <igor.druzhinin@citrix.com>
Subject: [PATCH 1/2] x86/intel: insert Ice Lake X (server) model numbers
Date: Tue, 13 Oct 2020 04:02:48 +0100
Message-ID: <1602558169-23140-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

LBR, C-state MSRs, and if_pschange_mc erratum applicability should correspond
to those of Ice Lake desktop, according to the External Design Specification vol. 2.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 xen/arch/x86/acpi/cpu_idle.c | 1 +
 xen/arch/x86/hvm/vmx/vmx.c   | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 27e0b52..7ad726a 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -183,6 +183,7 @@ static void do_get_hw_residencies(void *arg)
     /* Ice Lake */
     case 0x7D:
     case 0x7E:
+    case 0x6A:
     /* Kaby Lake */
     case 0x8E:
     case 0x9E:
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 86b8916..bce8b99 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2429,6 +2429,7 @@ static bool __init has_if_pschange_mc(void)
     case 0x55: /* Skylake-X / Cascade Lake */
     case 0x7d: /* Ice Lake */
     case 0x7e: /* Ice Lake */
+    case 0x6a: /* Ice Lake-X */
     case 0x8e: /* Kaby / Coffee / Whiskey Lake M */
     case 0x9e: /* Kaby / Coffee / Whiskey Lake D */
     case 0xa5: /* Comet Lake H/S */
@@ -2775,7 +2776,7 @@ static const struct lbr_info *last_branch_msr_get(void)
         /* Goldmont Plus */
         case 0x7a:
         /* Ice Lake */
-        case 0x7d: case 0x7e:
+        case 0x7d: case 0x7e: case 0x6a:
         /* Tremont */
         case 0x86:
         /* Kaby Lake */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 04:12:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 04:12:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6030.15987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSBfM-00071O-Jn; Tue, 13 Oct 2020 04:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6030.15987; Tue, 13 Oct 2020 04:12:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSBfM-000718-Dj; Tue, 13 Oct 2020 04:12:00 +0000
Received: by outflank-mailman (input) for mailman id 6030;
 Mon, 12 Oct 2020 20:25:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zk8D=DT=suse.de=cfontana@srs-us1.protection.inumbo.net>)
 id 1kS4NV-0000l8-Ox
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 20:25:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28a17db1-52a4-4e34-a587-3f4a41a797f2;
 Mon, 12 Oct 2020 20:25:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 809DDAFF9;
 Mon, 12 Oct 2020 20:25:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 2/2] accel: Add xen CpusAccel using dummy-cpu
To: Jason Andryuk <jandryuk@gmail.com>, qemu-devel@nongnu.org
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>
References: <20201012200725.64137-1-jandryuk@gmail.com>
 <20201012200725.64137-3-jandryuk@gmail.com>
From: Claudio Fontana <cfontana@suse.de>
Message-ID: <cafc34c0-0bcd-d138-d46d-ac2c0d5ba2fb@suse.de>
Date: Mon, 12 Oct 2020 22:25:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012200725.64137-3-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/12/20 10:07 PM, Jason Andryuk wrote:
> Xen was broken by commit 1583a3898853 ("cpus: extract out qtest-specific
> code to accel/qtest").  Xen relied on qemu_init_vcpu() calling
> qemu_dummy_start_vcpu() in the default case, but that was replaced by
> g_assert_not_reached().
> 
> Add a minimal "CpusAccel" for xen using the dummy-cpu implementation
> used by qtest.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  accel/dummy/meson.build |  1 +
>  accel/xen/xen-all.c     | 10 ++++++++++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/accel/dummy/meson.build b/accel/dummy/meson.build
> index 5fbe27de90..cdff0ba746 100644
> --- a/accel/dummy/meson.build
> +++ b/accel/dummy/meson.build
> @@ -4,3 +4,4 @@ dummy_ss.add(files(
>  ))
>  
>  specific_ss.add_all(when: ['CONFIG_SOFTMMU', 'CONFIG_POSIX'], if_true: dummy_ss)
> +specific_ss.add_all(when: ['CONFIG_XEN'], if_true: dummy_ss)
> diff --git a/accel/xen/xen-all.c b/accel/xen/xen-all.c
> index 60b971d0a8..2d243c58d4 100644
> --- a/accel/xen/xen-all.c
> +++ b/accel/xen/xen-all.c
> @@ -16,12 +16,15 @@
>  #include "hw/xen/xen_pt.h"
>  #include "chardev/char.h"
>  #include "sysemu/accel.h"
> +#include "sysemu/cpus.h"
>  #include "sysemu/xen.h"
>  #include "sysemu/runstate.h"
>  #include "migration/misc.h"
>  #include "migration/global_state.h"
>  #include "hw/boards.h"
>  
> +#include "accel/dummy/dummy-cpus.h"

it seems this should be in include/sysemu/accel.h or somewhere similar.

> +
>  //#define DEBUG_XEN
>  
>  #ifdef DEBUG_XEN
> @@ -153,6 +156,10 @@ static void xen_setup_post(MachineState *ms, AccelState *accel)
>      }
>  }
>  
> +const CpusAccel xen_cpus = {
> +    .create_vcpu_thread = dummy_start_vcpu_thread,
> +};
> +
>  static int xen_init(MachineState *ms)
>  {
>      MachineClass *mc = MACHINE_GET_CLASS(ms);
> @@ -180,6 +187,9 @@ static int xen_init(MachineState *ms)
>       * opt out of system RAM being allocated by generic code
>       */
>      mc->default_ram_id = NULL;
> +
> +    cpus_register_accel(&xen_cpus);
> +
>      return 0;
>  }
>  
> 

Ciao,

Claudio


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 04:12:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 04:12:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6028.15981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSBfM-00070w-Aa; Tue, 13 Oct 2020 04:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6028.15981; Tue, 13 Oct 2020 04:12:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSBfM-00070o-5g; Tue, 13 Oct 2020 04:12:00 +0000
Received: by outflank-mailman (input) for mailman id 6028;
 Mon, 12 Oct 2020 20:17:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zk8D=DT=suse.de=cfontana@srs-us1.protection.inumbo.net>)
 id 1kS4Fh-0008Eg-Pf
 for xen-devel@lists.xenproject.org; Mon, 12 Oct 2020 20:17:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1e5fb89-147b-4516-89b2-995612048671;
 Mon, 12 Oct 2020 20:17:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0CCFFAC12;
 Mon, 12 Oct 2020 20:17:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH 0/2] Add Xen CpusAccel
To: Jason Andryuk <jandryuk@gmail.com>
Cc: qemu-devel@nongnu.org, Laurent Vivier <lvivier@redhat.com>,
 Thomas Huth <thuth@redhat.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>
References: <20201012200725.64137-1-jandryuk@gmail.com>
From: Claudio Fontana <cfontana@suse.de>
Message-ID: <c2b2ed9a-879c-f676-86f0-22b3a77b770f@suse.de>
Date: Mon, 12 Oct 2020 22:16:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012200725.64137-1-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/12/20 10:07 PM, Jason Andryuk wrote:
> Xen was left behind when CpusAccel became mandatory and fails the assert
> in qemu_init_vcpu().  It relied on the same dummy cpu threads as qtest.
> Move the qtest cpu functions to a common location and reuse them for
> Xen.
> 
> Jason Andryuk (2):
>   accel: move qtest CpusAccel functions to a common location
>   accel: Add xen CpusAccel using dummy-cpu
> 
>  .../qtest-cpus.c => dummy/dummy-cpus.c}       | 22 +++++--------------
>  .../qtest-cpus.h => dummy/dummy-cpus.h}       | 10 ++++-----
>  accel/dummy/meson.build                       |  7 ++++++
>  accel/meson.build                             |  1 +
>  accel/qtest/meson.build                       |  1 -
>  accel/qtest/qtest.c                           |  7 +++++-
>  accel/xen/xen-all.c                           | 10 +++++++++
>  7 files changed, 34 insertions(+), 24 deletions(-)
>  rename accel/{qtest/qtest-cpus.c => dummy/dummy-cpus.c} (76%)
>  rename accel/{qtest/qtest-cpus.h => dummy/dummy-cpus.h} (59%)
>  create mode 100644 accel/dummy/meson.build
> 

Yep, forgot completely, sorry.

Acked-by: Claudio Fontana <cfontana@suse.de>




From xen-devel-bounces@lists.xenproject.org Tue Oct 13 04:47:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 04:47:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6089.16007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSCDp-00024A-A4; Tue, 13 Oct 2020 04:47:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6089.16007; Tue, 13 Oct 2020 04:47:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSCDp-000243-5o; Tue, 13 Oct 2020 04:47:37 +0000
Received: by outflank-mailman (input) for mailman id 6089;
 Tue, 13 Oct 2020 04:47:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSCDo-00023b-DE
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 04:47:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c044da1-cf14-49db-8d12-d88bc5993ec0;
 Tue, 13 Oct 2020 04:47:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSCDg-0002Jk-Pp; Tue, 13 Oct 2020 04:47:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSCDg-0003nQ-GY; Tue, 13 Oct 2020 04:47:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSCDg-0006ST-G3; Tue, 13 Oct 2020 04:47:28 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=obqqUPnOUEaVhHGAWcU0wj+bZ5+ZTCZ3b/XG4B9L1mc=; b=qtsKoOlv5BQImmHWyiFUqXmAPY
	Pw+2p8OcUTkrA9mvKwtm+uPFlhsCSPwuqMsM2/lK8hkUxCVEUzg8HWcJxUYNEtRZFsh79ivCnua64
	O2W7FN1FQkdubuSv+OBgSDXjtl2TxvHdk2PrQz5pMVX6tQ6Z94SgEkYAVhQtyIVpUcyk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155760-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155760: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 04:47:28 +0000

flight 155760 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155760/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    4 days
Failing since        155612  2020-10-09 18:01:22 Z    3 days   25 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket-only connection, read-only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful: you only
    get it in response to a hypercall, where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 05:47:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 05:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6093.16023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSD91-0008Is-SU; Tue, 13 Oct 2020 05:46:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6093.16023; Tue, 13 Oct 2020 05:46:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSD91-0008Il-PF; Tue, 13 Oct 2020 05:46:43 +0000
Received: by outflank-mailman (input) for mailman id 6093;
 Tue, 13 Oct 2020 05:46:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PgYC=DU=antarean.org=joost@srs-us1.protection.inumbo.net>)
 id 1kSD90-0008IF-29
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 05:46:42 +0000
Received: from gw1.antarean.org (unknown [194.145.200.214])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6b330e1f-1e76-49b7-b1ec-537b1dde1082;
 Tue, 13 Oct 2020 05:46:40 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by gw1.antarean.org (Postfix) with ESMTP id 4C9P446Fdnzyv7
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:19:32 +0200 (CEST)
Received: from gw1.antarean.org ([127.0.0.1])
 by localhost (gw1.antarean.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id xGWLlVgXkscv for <xen-devel@lists.xenproject.org>;
 Tue, 13 Oct 2020 07:19:32 +0200 (CEST)
Received: from mailstore1.adm.antarean.org (localhost [127.0.0.1])
 by gw1.antarean.org (Postfix) with ESMTP id 4C9P444M5MzyTX
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:19:32 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailstore1.adm.antarean.org (Postfix) with ESMTP id 4C9PDR2ThKz15
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:26:47 +0200 (CEST)
Received: from mailstore1.adm.antarean.org ([127.0.0.1])
 by localhost (mailstore1.adm.antarean.org [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id NwmJN0lDlLq9 for <xen-devel@lists.xenproject.org>;
 Tue, 13 Oct 2020 07:26:47 +0200 (CEST)
Received: from eve.localnet (eve.adm.antarean.org [10.55.16.44])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mailstore1.adm.antarean.org (Postfix) with ESMTPSA id 4C9PDR1HKBz13
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:26:47 +0200 (CEST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=PgYC=DU=antarean.org=joost@srs-us1.protection.inumbo.net>)
	id 1kSD90-0008IF-29
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 05:46:42 +0000
X-Inumbo-ID: 6b330e1f-1e76-49b7-b1ec-537b1dde1082
Received: from gw1.antarean.org (unknown [194.145.200.214])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 6b330e1f-1e76-49b7-b1ec-537b1dde1082;
	Tue, 13 Oct 2020 05:46:40 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
	by gw1.antarean.org (Postfix) with ESMTP id 4C9P446Fdnzyv7
	for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:19:32 +0200 (CEST)
X-Virus-Scanned: amavisd-new at antarean.org
Received: from gw1.antarean.org ([127.0.0.1])
	by localhost (gw1.antarean.org [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id xGWLlVgXkscv for <xen-devel@lists.xenproject.org>;
	Tue, 13 Oct 2020 07:19:32 +0200 (CEST)
Received: from mailstore1.adm.antarean.org (localhost [127.0.0.1])
	by gw1.antarean.org (Postfix) with ESMTP id 4C9P444M5MzyTX
	for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:19:32 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
	by mailstore1.adm.antarean.org (Postfix) with ESMTP id 4C9PDR2ThKz15
	for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:26:47 +0200 (CEST)
X-Virus-Scanned: amavisd-new at antarean.org
Received: from mailstore1.adm.antarean.org ([127.0.0.1])
	by localhost (mailstore1.adm.antarean.org [127.0.0.1]) (amavisd-new, port 10024)
	with ESMTP id NwmJN0lDlLq9 for <xen-devel@lists.xenproject.org>;
	Tue, 13 Oct 2020 07:26:47 +0200 (CEST)
Received: from eve.localnet (eve.adm.antarean.org [10.55.16.44])
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
	 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
	(No client certificate requested)
	by mailstore1.adm.antarean.org (Postfix) with ESMTPSA id 4C9PDR1HKBz13
	for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:26:47 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=antarean.org;
	s=default; t=1602566807;
	bh=zXisIOxsws3XIiTJG1L9WpX9cIVAU88saaQvoxTpqGw=;
	h=From:To:Subject:Date;
	b=TykF2jHlUhColVnVO/BltR8w9LsOq/Ih/M/HmTyPuJ+5nqkiXjRIGzBBvq5LRVMIg
	 S4X/sWO6/QOthW5KTxLvk4aZ79S2b7jz3gWGMxHEWxfJLrtYZOI7KvXqrZMBNFj3T3
	 X3TSTJXGZkqhyqxe7Km2Y+nlmus92mDadLQD2ovE=
From: "J. Roeleveld" <joost@antarean.org>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
Date: Tue, 13 Oct 2020 07:26:47 +0200
Message-ID: <15146361.Z0tdQxPx3m@eve>
Organization: Antarean
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

Hi All,

I am seeing the following message in the "dmesg" output of a driver domain.

[Thu Oct  8 20:57:04 2020] xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
[Thu Oct  8 20:57:11 2020] xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
[Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
[Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous purge is still busy, cannot purge list


Is this something to worry about? Or can I safely ignore this?

--
Joost




From xen-devel-bounces@lists.xenproject.org Tue Oct 13 06:05:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 06:05:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6099.16037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSDQY-0001zr-Iu; Tue, 13 Oct 2020 06:04:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6099.16037; Tue, 13 Oct 2020 06:04:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSDQY-0001zk-FB; Tue, 13 Oct 2020 06:04:50 +0000
Received: by outflank-mailman (input) for mailman id 6099;
 Tue, 13 Oct 2020 06:04:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSDQX-0001zI-FH
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 06:04:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4821bd78-c18d-4be2-b227-5d70ab745814;
 Tue, 13 Oct 2020 06:04:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSDQQ-0004QN-Ue; Tue, 13 Oct 2020 06:04:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSDQQ-0006bo-Nj; Tue, 13 Oct 2020 06:04:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSDQQ-00027t-NF; Tue, 13 Oct 2020 06:04:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSDQX-0001zI-FH
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 06:04:49 +0000
X-Inumbo-ID: 4821bd78-c18d-4be2-b227-5d70ab745814
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4821bd78-c18d-4be2-b227-5d70ab745814;
	Tue, 13 Oct 2020 06:04:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j+kW+F7nVWDdyZOUb2S946x+0f5aUMf+TsEpchV7LRo=; b=bOC9oVg75MAtKuDHtFZsd5oSXQ
	rRpjJPeB3rOmvWf4sLQmVu/YlmrP357PgbhBaeBEPqP2XqlxmIl1C/HCgGZ5xK9jpsmkD+qsbijBH
	fGoD69hVQ98nI4tNbSmDh8gCFsLIXZkcDkjd15edMLlIdvWCj1adfGp8JM5mThsF8xHM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSDQQ-0004QN-Ue; Tue, 13 Oct 2020 06:04:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSDQQ-0006bo-Nj; Tue, 13 Oct 2020 06:04:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSDQQ-00027t-NF; Tue, 13 Oct 2020 06:04:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155757-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155757: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5d1af380d3e4bd840fa324db33ca4f739136e654
X-Osstest-Versions-That:
    ovmf=cc942105ede58a300ba46f3df0edfa86b3abd4dd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 06:04:42 +0000

flight 155757 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155757/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5d1af380d3e4bd840fa324db33ca4f739136e654
baseline version:
 ovmf                 cc942105ede58a300ba46f3df0edfa86b3abd4dd

Last test of basis   155714  2020-10-12 01:54:46 Z    1 days
Testing same since   155757  2020-10-13 01:44:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Shenglei Zhang <shenglei.zhang@intel.com>
  Zhang, Shenglei <shenglei.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cc942105ed..5d1af380d3  5d1af380d3e4bd840fa324db33ca4f739136e654 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 07:30:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 07:30:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6107.16060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSEkw-0001Zt-Pu; Tue, 13 Oct 2020 07:29:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6107.16060; Tue, 13 Oct 2020 07:29:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSEkw-0001Zm-Mq; Tue, 13 Oct 2020 07:29:58 +0000
Received: by outflank-mailman (input) for mailman id 6107;
 Tue, 13 Oct 2020 07:29:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSEkv-0001ZE-1Y
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 07:29:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2a52af3-f3c0-4a82-83e2-d460091de7db;
 Tue, 13 Oct 2020 07:29:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSEko-00067x-9o; Tue, 13 Oct 2020 07:29:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSEko-00026r-0O; Tue, 13 Oct 2020 07:29:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSEkn-0003qS-WE; Tue, 13 Oct 2020 07:29:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSEkv-0001ZE-1Y
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 07:29:57 +0000
X-Inumbo-ID: d2a52af3-f3c0-4a82-83e2-d460091de7db
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d2a52af3-f3c0-4a82-83e2-d460091de7db;
	Tue, 13 Oct 2020 07:29:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7zHqnnDakknn//saElGKTS6kSmL1nuYA2M+Rp2h+0uo=; b=sS9Ob98NXXMYqEq+JBsxRmwXv+
	K2oSucHg3a4IjgkrF7Zw+nVkqiJe4fmHiQ4DDqZFiKzfPw+VmD6BdZMztmx53i8KFrk9GgDsxW29R
	ktpFdFWTNfF02aNaOH8VoV+keNuFfisBV1/gSf9xHOneenag9eMarX+WUQVUyEtMEM4s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSEko-00067x-9o; Tue, 13 Oct 2020 07:29:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSEko-00026r-0O; Tue, 13 Oct 2020 07:29:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSEkn-0003qS-WE; Tue, 13 Oct 2020 07:29:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155763-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155763: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 07:29:49 +0000

flight 155763 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155763/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    4 days
Failing since        155612  2020-10-09 18:01:22 Z    3 days   26 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently,
    without rebuilding xen, and to replace the components in an existing
    image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
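[Editorial note: the objcopy bundling step this commit describes can be sketched roughly as below. This is an illustrative fragment only; the input file names are hypothetical, and only the section names (.config, .kernel, .ramdisk, .xsm, .ucode, .dtb) come from the commit message. An actual build may also need to assign section load addresses; consult the Xen docs for the exact recipe.]

```shell
# Illustrative sketch -- file names are hypothetical; section names are
# the ones listed in the commit message above.
objcopy \
    --add-section .config=xen.cfg \
    --add-section .kernel=vmlinuz \
    --add-section .ramdisk=initrd.img \
    --add-section .xsm=xsm.cfg \
    --add-section .ucode=intel-ucode.bin \
    xen.efi xen.unified.efi
```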

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 07:56:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 07:56:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6111.16075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSFAq-0004Y1-Sw; Tue, 13 Oct 2020 07:56:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6111.16075; Tue, 13 Oct 2020 07:56:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSFAq-0004Xu-Py; Tue, 13 Oct 2020 07:56:44 +0000
Received: by outflank-mailman (input) for mailman id 6111;
 Tue, 13 Oct 2020 07:56:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSFAp-0004Xp-LE
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 07:56:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd1660e5-0be5-4d33-90e2-8c0af8333135;
 Tue, 13 Oct 2020 07:56:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSFAo-0006gu-IZ; Tue, 13 Oct 2020 07:56:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSFAo-00032t-BU; Tue, 13 Oct 2020 07:56:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSFAo-0000xv-Ay; Tue, 13 Oct 2020 07:56:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSFAp-0004Xp-LE
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 07:56:43 +0000
X-Inumbo-ID: dd1660e5-0be5-4d33-90e2-8c0af8333135
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dd1660e5-0be5-4d33-90e2-8c0af8333135;
	Tue, 13 Oct 2020 07:56:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fDtT4zEsGKYKJnQggvBFO2b0Rito4LkS/a4LNsqgvLw=; b=Heq39H504ZsDPYqADwAdFzTaw1
	2DYab8g8I7j5d7UHNkjjAnLTZj2mNIW+V2nRhi7YhEJGnzl7sYIUIqZcPERSTi3PINgpPRNHDAq0c
	/Pc0J5095my20go/8W5++p6IgaSpVQrjFtTJnog038ILFAHot5OWnGQpFmm9G+ieYzLk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSFAo-0006gu-IZ; Tue, 13 Oct 2020 07:56:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSFAo-00032t-BU; Tue, 13 Oct 2020 07:56:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSFAo-0000xv-Ay; Tue, 13 Oct 2020 07:56:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155762-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155762: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=accdc0e7730739f398e392c23bc8380d3574a878
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 07:56:42 +0000

flight 155762 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155762/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              accdc0e7730739f398e392c23bc8380d3574a878
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   95 days
Failing since        151818  2020-07-11 04:18:52 Z   94 days   89 attempts
Testing same since   155762  2020-10-13 04:19:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 20859 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 08:16:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 08:16:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6121.16091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSFU2-00079y-1w; Tue, 13 Oct 2020 08:16:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6121.16091; Tue, 13 Oct 2020 08:16:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSFU1-00079r-V2; Tue, 13 Oct 2020 08:16:33 +0000
Received: by outflank-mailman (input) for mailman id 6121;
 Tue, 13 Oct 2020 08:16:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSFU0-00079F-3o
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 08:16:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 403705fc-90f2-4d55-9b5a-c64756ae8790;
 Tue, 13 Oct 2020 08:16:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSFTr-0007dK-Hy; Tue, 13 Oct 2020 08:16:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSFTr-0003fi-Ao; Tue, 13 Oct 2020 08:16:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSFTr-0000Po-AK; Tue, 13 Oct 2020 08:16:23 +0000
X-Inumbo-ID: 403705fc-90f2-4d55-9b5a-c64756ae8790
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sa2nxxxkwvIdCThaD0twLsPctmP1mjE/+CQ+56Jmu+k=; b=HSJLhbLBo94gSEETBxtl6Zi7PC
	XzlkPKmcWN5SMuDmCu7F2ZFZwnGwnCYs5EyfFh52kMAnKOe5Vk6vn9ija8kikwYMcBYKa3ydDYqTH
	2v7hwuYZOIjQh0WHC0woKueojPmLIWgw/JXg50aLa1y6SMLIkpU4BIqE1oilSkc8F5dQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155754-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155754: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-raw:debian-di-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-i386-pvgrub:debian-di-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a0bdf866873467271eff9a92f179ab0f77d735cb
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 08:16:23 +0000

flight 155754 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155754/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-raw      12 debian-di-install fail in 155743 pass in 155754
 test-amd64-amd64-i386-pvgrub 12 debian-di-install fail in 155743 pass in 155754
 test-amd64-amd64-xl-qcow2    12 debian-di-install          fail pass in 155743

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a0bdf866873467271eff9a92f179ab0f77d735cb
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   53 days
Failing since        152659  2020-08-21 14:07:39 Z   52 days   91 attempts
Testing same since   155743  2020-10-12 17:14:05 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43816 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 08:43:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 08:43:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6124.16104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSFtY-0001Hz-Db; Tue, 13 Oct 2020 08:42:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6124.16104; Tue, 13 Oct 2020 08:42:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSFtY-0001Hs-Ai; Tue, 13 Oct 2020 08:42:56 +0000
Received: by outflank-mailman (input) for mailman id 6124;
 Tue, 13 Oct 2020 08:42:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg1d=DU=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1kSFtW-0001Hn-RZ
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 08:42:54 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id cf8e8f24-b858-40c9-abbd-75f59b91bbf2;
 Tue, 13 Oct 2020 08:42:52 +0000 (UTC)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-249-MHf93RUlP2CyGjzfHWgdpg-1; Tue, 13 Oct 2020 04:42:50 -0400
Received: by mail-wr1-f69.google.com with SMTP id 33so10470196wrf.22
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 01:42:49 -0700 (PDT)
Received: from [192.168.1.36] (106.red-83-59-162.dynamicip.rima-tde.net.
 [83.59.162.106])
 by smtp.gmail.com with ESMTPSA id z5sm27778002wrw.37.2020.10.13.01.42.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 13 Oct 2020 01:42:47 -0700 (PDT)
X-Inumbo-ID: cf8e8f24-b858-40c9-abbd-75f59b91bbf2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602578571;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Cy5Umm45LAZG5+p/FZuyIVT+mHFYq4zbXyNxj1/WCmk=;
	b=fp4g32Ixj2O4GT/qQpNEXCwagHrT+cUG+1XoU9x1HbuIrOnDZOpBB/HDK99G5d10fsGFli
	gTJzleIheaQPk0MfRHglVuoivnfDmHrFDOv6Bh9Jsy3dyyQzqsrDKvrGXDW1rhQd2w9wAX
	IG4q0Hb73JTtnZwtddJnla9fFBpNPg0=
X-MC-Unique: MHf93RUlP2CyGjzfHWgdpg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=Cy5Umm45LAZG5+p/FZuyIVT+mHFYq4zbXyNxj1/WCmk=;
        b=mqyCVIRDbgn79hI12QobYH6QFszCIX93RXkpE4PnVXybVk4moKFp6M872LiLp1GgYh
         2Gl8NkVgGZzLJjNU+VtN9xBvehlDY/cesEVaO0gOZsERmwoFjmpSOCrfTjQVorI2ia0F
         wQ5t6ZHhC+nFI2GOWpU81RqsIOcq06t43mIiIh1wO9Oik+Ssa6XZDyjN3g3xK2/Xgtbl
         tfb57CyqAW2uOb0Bf1pkI6J3LaN5HxrEqNZQXyLqd8xKwfQzbyFY3lg5XOpGr0vEVjZ5
         tKLP2mNq1RK+2Nk/gr+AvWAqfvdVYCgjMwINS2XXM9qpcOPJxrdftYIwYMBcx6iP0UTO
         JiOQ==
X-Gm-Message-State: AOAM532ZNcF5H7O0aBhhm5qPxsF5E2MWhOvr0tYKkgC+eWYwoTR5kpd5
	cGqaePd1ErFM+W+g6MUVYJrMP1bRh+7PZIwRnC3br3ZtIss/BgJOHBGIqpiAdpWHoosdAUNPEnt
	dlGlohLu+RNZc5fw6pZLpi8a7EMo=
X-Received: by 2002:a1c:2e53:: with SMTP id u80mr14675087wmu.58.1602578568526;
        Tue, 13 Oct 2020 01:42:48 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJzQnxR/4TvkPrJhZ7T2WGOGG1UFL0hib7NXqP0OoMMU4JVYdxRNJTLW0wHk657n/DIXrbAuCQ==
X-Received: by 2002:a1c:2e53:: with SMTP id u80mr14675071wmu.58.1602578568373;
        Tue, 13 Oct 2020 01:42:48 -0700 (PDT)
Subject: Re: [PATCH 0/2] Add Xen CpusAccel
To: Claudio Fontana <cfontana@suse.de>, Jason Andryuk <jandryuk@gmail.com>
Cc: Laurent Vivier <lvivier@redhat.com>, Thomas Huth <thuth@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20201012200725.64137-1-jandryuk@gmail.com>
 <c2b2ed9a-879c-f676-86f0-22b3a77b770f@suse.de>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <a88a1570-ccbd-987c-17db-53e8643c1ea8@redhat.com>
Date: Tue, 13 Oct 2020 10:42:46 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <c2b2ed9a-879c-f676-86f0-22b3a77b770f@suse.de>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=philmd@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/12/20 10:16 PM, Claudio Fontana wrote:
> On 10/12/20 10:07 PM, Jason Andryuk wrote:
>> Xen was left behind when CpusAccel became mandatory and fails the assert
>> in qemu_init_vcpu().  It relied on the same dummy cpu threads as qtest.
>> Move the qtest cpu functions to a common location and reuse them for
>> Xen.
>>
>> Jason Andryuk (2):
>>    accel: move qtest CpusAccel functions to a common location
>>    accel: Add xen CpusAccel using dummy-cpu
>>
>>   .../qtest-cpus.c => dummy/dummy-cpus.c}       | 22 +++++--------------
>>   .../qtest-cpus.h => dummy/dummy-cpus.h}       | 10 ++++-----
>>   accel/dummy/meson.build                       |  7 ++++++
>>   accel/meson.build                             |  1 +
>>   accel/qtest/meson.build                       |  1 -
>>   accel/qtest/qtest.c                           |  7 +++++-
>>   accel/xen/xen-all.c                           | 10 +++++++++
>>   7 files changed, 34 insertions(+), 24 deletions(-)
>>   rename accel/{qtest/qtest-cpus.c => dummy/dummy-cpus.c} (76%)
>>   rename accel/{qtest/qtest-cpus.h => dummy/dummy-cpus.h} (59%)
>>   create mode 100644 accel/dummy/meson.build
>>
> 
> Yep, forgot completely, sorry.

Good opportunity to ask the Xen folks to add testing
to our Gitlab CI, so this doesn't happen again :)

> 
> Acked-by: Claudio Fontana <cfontana@suse.de>
> 
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 09:29:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 09:29:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6129.16118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSGcg-0004vM-5d; Tue, 13 Oct 2020 09:29:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6129.16118; Tue, 13 Oct 2020 09:29:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSGcg-0004vF-1H; Tue, 13 Oct 2020 09:29:34 +0000
Received: by outflank-mailman (input) for mailman id 6129;
 Tue, 13 Oct 2020 09:29:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ic0I=DU=gmail.com=wei.liu.linux@srs-us1.protection.inumbo.net>)
 id 1kSGce-0004vA-Q8
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 09:29:32 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b72c0482-99b7-4dd0-9f51-e469e445e253;
 Tue, 13 Oct 2020 09:29:32 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id 13so20253413wmf.0
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 02:29:32 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q10sm28340120wrp.83.2020.10.13.02.29.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 13 Oct 2020 02:29:30 -0700 (PDT)
X-Inumbo-ID: b72c0482-99b7-4dd0-9f51-e469e445e253
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=XztsDfCag2OaDzPERC4MA77k+MHx9zhekJXMdA2e2eU=;
        b=s95VKn07T2bNtZWZu6p0stAWjlDuBbdq7CSIeJ76quK4ef2L2Mawv2rlKDia3vj4a1
         VG2E8Db4/VDlIGd/dTTjsSqQIMS2IgPswBo/UaNt7U8z9FhMV2oB1cbTDpKsg6qNUxoY
         3VeFlDCH2+fUXTAE1UAzVSsic24M0/Xn2eFwsvzCgcOHN3B1L9ydnBEhZVjwF435YGfP
         lc/Iqvd4xddZJraiRwm/1h7ymAYPlSqAnHrtuo0yy9UdfvqBw1z0cl6mB/l1J2s24J8t
         l9Y2FUo3GCjapxtlx/DxdkojkjJ6TJHs3oK38jtPxdT8OxCRvuoOfhXl87IhWzjOzEVI
         Ot+A==
X-Gm-Message-State: AOAM5303K0+qt2nE17xGMZLnjHDrdt6iCgKFSRSfAo4+YiRCiY8eCEtK
	9zvbErYfMJWEWVvhf+7P1ko=
X-Google-Smtp-Source: ABdhPJx0npD3RdFJFlFyHcMgUiTbRjOKNo/TN4PiS1YSPMSJ8oGCSimgWDNQzF8MtoV95myhstqMXw==
X-Received: by 2002:a05:600c:21d3:: with SMTP id x19mr10039455wmj.170.1602581371318;
        Tue, 13 Oct 2020 02:29:31 -0700 (PDT)
Date: Tue, 13 Oct 2020 09:29:29 +0000
From: Wei Liu <wei.liu@kernel.org>
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Wei Liu <wei.liu@kernel.org>, Paul Durrant <paul@xen.org>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Olivier Lambert <olivier.lambert@vates.fr>
Subject: Re: [OSSTEST PATCH 16/82] abolish "kernkind"; desupport non-pvops
 kernels
Message-ID: <20201013092929.4xq6l3asckn7setl@liuwe-devbox-debian-v2>
References: <20201007180024.7932-1-iwj@xenproject.org>
 <20201007180024.7932-17-iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201007180024.7932-17-iwj@xenproject.org>
User-Agent: NeoMutt/20180716

On Wed, Oct 07, 2020 at 06:59:18PM +0100, Ian Jackson wrote:
> From: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> This was for distinguishing the old-style Xenolinux kernels from pvops
> kernels.
> 
> We have not actually tested any non-pvops kernels for a very very long
> time.  Delete this now because the runvar is slightly in the way of
> test host reuse.
> 
> (Sorry for the wide CC but it seems better to make sure anyone who
> might object can do so.)

No objection from me FWIW.

Wei.


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 09:37:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 09:37:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6135.16132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSGkf-0005s1-5C; Tue, 13 Oct 2020 09:37:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6135.16132; Tue, 13 Oct 2020 09:37:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSGkf-0005ru-01; Tue, 13 Oct 2020 09:37:49 +0000
Received: by outflank-mailman (input) for mailman id 6135;
 Tue, 13 Oct 2020 09:37:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSGke-0005rB-KC
 for xen-devel@lists.xen.org; Tue, 13 Oct 2020 09:37:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84596f92-858b-4962-86d2-6a2f0808ccdb;
 Tue, 13 Oct 2020 09:37:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 66112B00A;
 Tue, 13 Oct 2020 09:37:46 +0000 (UTC)
X-Inumbo-ID: 84596f92-858b-4962-86d2-6a2f0808ccdb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602581866;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FWFovjq8UywOZdRtko2zvRMKBxqypmNrhXV85DC7h+U=;
	b=jH8U92HEQqgXJ5pB6jF5FXIZiQtJn486VutZMkuKigLD6MpEO/bgaKZ747vRgMaIw/86hF
	3M5TkcwvTB8VFWszKq6oJJkz4xmuT152iNpqA+cwuDUlxt2IPMdVH27s6xTq3UaxvDs9EP
	F728AJ9R6Z/+t5Kaku+Wj7iimWN3Poo=
Subject: Re: [XEN PATCH v14 7/8] Add IOREQ_TYPE_VMWARE_PORT
To: paul@xen.org
Cc: 'Don Slutz' <don.slutz@gmail.com>, xen-devel@lists.xen.org,
 'Boris Ostrovsky' <boris.ostrovsky@oracle.com>,
 'Ian Jackson' <iwj@xenproject.org>, 'Jun Nakajima' <jun.nakajima@intel.com>,
 'Kevin Tian' <kevin.tian@intel.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Tim Deegan' <tim@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>,
 'George Dunlap' <George.Dunlap@eu.citrix.com>
References: <cover.1597854907.git.don.slutz@gmail.com>
 <bfe0b9bb7b283657bc33edb7c4b425930564ca46.1597854908.git.don.slutz@gmail.com>
 <e7581f3a-71eb-3181-9128-01e22653a47e@suse.com>
 <000901d69bb8$941489b0$bc3d9d10$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8bf54ee4-2379-f3eb-57a4-ee572978d219@suse.com>
Date: Tue, 13 Oct 2020 11:37:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <000901d69bb8$941489b0$bc3d9d10$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.10.2020 10:13, Paul Durrant wrote:
> 
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 01 October 2020 15:42
>> To: Don Slutz <don.slutz@gmail.com>
>> Cc: xen-devel@lists.xen.org; Boris Ostrovsky <boris.ostrovsky@oracle.com>; Ian Jackson
>> <iwj@xenproject.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>;
>> Stefano Stabellini <sstabellini@kernel.org>; Tim Deegan <tim@xen.org>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; George Dunlap
>> <George.Dunlap@eu.citrix.com>; Paul Durrant <paul@xen.org>
>> Subject: Re: [XEN PATCH v14 7/8] Add IOREQ_TYPE_VMWARE_PORT
>>
>> On 19.08.2020 18:52, Don Slutz wrote:
>>> This adds synchronization of the 6 vcpu registers (only 32 bits of
>>> them) that QEMU's vmport.c and vmmouse.c need between Xen and QEMU.
>>> This is how VMware defined the use of these registers.
>>>
>>> This is to avoid a 2nd and 3rd exchange between QEMU and Xen to
>>> fetch and put these 6 vcpu registers used by the code in QEMU's
>>> vmport.c and vmmouse.c.
>>
>> I'm unconvinced this warrants a new ioreq type, and all the overhead
>> associated with it. I'd be curious to know what Paul or the qemu
>> folks think here.
>>
> 
> The current shared ioreq_t does appear to have enough space to accommodate 6 32-bit registers (in the addr, data, count and size fields), so couldn't the new IOREQ_TYPE_VMWARE_PORT type be dealt with by simply unioning the regs with these fields? That avoids the need for a whole new shared page.

Hmm, yes, good point. But this is assuming we're going to be fine with
using 32-bit registers now and going forward. Personally I'd prefer a
mechanism less constrained by the specific needs of the current VMware
interface, i.e. potentially allowing scaling to 64-bit registers as
well as any of the remaining 9 ones (leaving aside %rsp).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 09:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 09:50:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6137.16144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSGx1-0007XX-7O; Tue, 13 Oct 2020 09:50:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6137.16144; Tue, 13 Oct 2020 09:50:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSGx1-0007XQ-3c; Tue, 13 Oct 2020 09:50:35 +0000
Received: by outflank-mailman (input) for mailman id 6137;
 Tue, 13 Oct 2020 09:50:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J7Gs=DU=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kSGwz-0007XL-SO
 for xen-devel@lists.xen.org; Tue, 13 Oct 2020 09:50:33 +0000
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0de1c894-d0f1-4dcb-93ab-d6f7ad329001;
 Tue, 13 Oct 2020 09:50:32 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id b8so10004805wrn.0
 for <xen-devel@lists.xen.org>; Tue, 13 Oct 2020 02:50:32 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-234.amazon.com. [54.240.197.234])
 by smtp.gmail.com with ESMTPSA id t124sm27717735wmg.31.2020.10.13.02.50.30
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 13 Oct 2020 02:50:31 -0700 (PDT)
X-Inumbo-ID: 0de1c894-d0f1-4dcb-93ab-d6f7ad329001
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=rukc4O6BWhhiRJWFSR0WSL5dfYldZYGmLkmWRIlbsRA=;
        b=twLPNP0SSfNVVM+VX7KuHbtc2oPtMdERSPRax91RG8eO5h4yf/Nskn5mIWy/OIWnRh
         5O+0u303zr/dXnSoIul22NkE7LxQATr1LqGmK/D7osO7gxJ/CWMsmwr0rYEqGlbbAHCY
         eX5qJY5HibyJIvt1a3XaKixfgggfhISYY9OTphfYv8SIHC+U2wCqUBWtoO1a2JYW4dni
         LVGqGy5fTEFCrNPSEvVou8dy3PejJ1vrFe2I/VDOHhJuP5l+LmIifiqTindu5aIGMRzr
         Uuphv/ungBrFZeCk1Mbq6Vq509BctPm2Fpa5S3c6AjEt22q+Jw2DSZiiksB+YZkV6uwA
         oHng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=rukc4O6BWhhiRJWFSR0WSL5dfYldZYGmLkmWRIlbsRA=;
        b=BaEKJ/cfEVNMiKXgketCMHdf8QJo2kPa684QsNtmrhE9ckrEpBOgBVbqHpOBdNewUo
         CIYNhpSuR2Nv5DfPErh+h2BaxiAMunE8ieigPjpY99+/UYwJFe6r7wGMSEtAYAfviNfB
         OfEttJABBkki1vNHeN0EV4O186iKYkiRmkBCXLoJQku4+Qxj2wwBpzsetCbT8DCHWo6H
         ssYO/CfB44ouCKtKX969t/4j5A11cTRwsl+gtl6Td+8AHvINiC8N678r9bJo1devlAvt
         rRRC17huN6LCd+z0Gh1aM8gVzxSil6ApznllqACzfPjB1VcN4RNqtVcrUt9WFiLARChl
         gv9w==
X-Gm-Message-State: AOAM5321grnDItWxL43NsWcTDrZK+hTYElseLsc9LdJT36TVaDkA/UPe
	vE1sd4wPUjrv08/RAvHl3Jc=
X-Google-Smtp-Source: ABdhPJx9bzXYCNb8lC+HyMSDkZl8ZGHTaJ4auYOiFjy1chHk6eX/HbSb3UPev9K9ERTM4+pVcjdGaQ==
X-Received: by 2002:a05:6000:110f:: with SMTP id z15mr33251472wrw.87.1602582631708;
        Tue, 13 Oct 2020 02:50:31 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Don Slutz'" <don.slutz@gmail.com>,
	<xen-devel@lists.xen.org>,
	"'Boris Ostrovsky'" <boris.ostrovsky@oracle.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jun Nakajima'" <jun.nakajima@intel.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Tim Deegan'" <tim@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'Konrad Rzeszutek Wilk'" <konrad.wilk@oracle.com>,
	"'George Dunlap'" <George.Dunlap@eu.citrix.com>
References: <cover.1597854907.git.don.slutz@gmail.com> <bfe0b9bb7b283657bc33edb7c4b425930564ca46.1597854908.git.don.slutz@gmail.com> <e7581f3a-71eb-3181-9128-01e22653a47e@suse.com> <000901d69bb8$941489b0$bc3d9d10$@xen.org> <8bf54ee4-2379-f3eb-57a4-ee572978d219@suse.com>
In-Reply-To: <8bf54ee4-2379-f3eb-57a4-ee572978d219@suse.com>
Subject: RE: [XEN PATCH v14 7/8] Add IOREQ_TYPE_VMWARE_PORT
Date: Tue, 13 Oct 2020 10:50:29 +0100
Message-ID: <005b01d6a146$49b79900$dd26cb00$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQE3EbIOCgkLUtOBXNlfcF4M3GLQwwHXOsu5AXIpxzoCBV3QmgKqbpWKqpSetoA=

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 13 October 2020 10:38
> To: paul@xen.org
> Cc: 'Don Slutz' <don.slutz@gmail.com>; xen-devel@lists.xen.org; 'Boris Ostrovsky'
> <boris.ostrovsky@oracle.com>; 'Ian Jackson' <iwj@xenproject.org>; 'Jun Nakajima'
> <jun.nakajima@intel.com>; 'Kevin Tian' <kevin.tian@intel.com>; 'Stefano Stabellini'
> <sstabellini@kernel.org>; 'Tim Deegan' <tim@xen.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>;
> 'Konrad Rzeszutek Wilk' <konrad.wilk@oracle.com>; 'George Dunlap' <George.Dunlap@eu.citrix.com>
> Subject: Re: [XEN PATCH v14 7/8] Add IOREQ_TYPE_VMWARE_PORT
>
> On 06.10.2020 10:13, Paul Durrant wrote:
> >
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 01 October 2020 15:42
> >> To: Don Slutz <don.slutz@gmail.com>
> >> Cc: xen-devel@lists.xen.org; Boris Ostrovsky <boris.ostrovsky@oracle.com>; Ian Jackson
> >> <iwj@xenproject.org>; Jun Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>;
> >> Stefano Stabellini <sstabellini@kernel.org>; Tim Deegan <tim@xen.org>; Andrew Cooper
> >> <andrew.cooper3@citrix.com>; Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>; George Dunlap
> >> <George.Dunlap@eu.citrix.com>; Paul Durrant <paul@xen.org>
> >> Subject: Re: [XEN PATCH v14 7/8] Add IOREQ_TYPE_VMWARE_PORT
> >>
> >> On 19.08.2020 18:52, Don Slutz wrote:
> >>> This adds synchronization of the 6 vcpu registers (only 32 bits of
> >>> them) that QEMU's vmport.c and vmmouse.c need between Xen and QEMU.
> >>> This is how VMware defined the use of these registers.
> >>>
> >>> This is to avoid a 2nd and 3rd exchange between QEMU and Xen to
> >>> fetch and put these 6 vcpu registers used by the code in QEMU's
> >>> vmport.c and vmmouse.c
> >>
> >> I'm unconvinced this warrants a new ioreq type, and all the overhead
> >> associated with it. I'd be curious to know what Paul or the qemu
> >> folks think here.
> >>
> >
> > The current shared ioreq_t does appear to have enough space to
> > accommodate 6 32-bit registers (in the addr, data, count and size
> > fields) so couldn't the new IOREQ_TYPE_VMWARE_PORT type be dealt
> > with by simply unioning the regs with these fields? That avoids the
> > need for a whole new shared page.
>
> Hmm, yes, good point. But this is assuming we're going to be fine with
> using 32-bit registers now and going forward. Personally I'd prefer a
> mechanism less constrained by the specific needs of the current VMware
> interface, i.e. potentially allowing to scale to 64-bit registers as
> well as any of the remaining 9 ones (leaving aside %rsp).
>

I think that should probably be additional work, not needed for this
series. We could look to expand and re-structure the ioreq_t structure
with some headroom. An emulator aware of the new structure could then
resource map a different set of shared pages.

  Paul

> Jan




From xen-devel-bounces@lists.xenproject.org Tue Oct 13 10:29:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 10:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6139.16155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHYe-00021r-9C; Tue, 13 Oct 2020 10:29:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6139.16155; Tue, 13 Oct 2020 10:29:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHYe-00021j-5r; Tue, 13 Oct 2020 10:29:28 +0000
Received: by outflank-mailman (input) for mailman id 6139;
 Tue, 13 Oct 2020 10:29:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSHYc-00021e-Pp
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 10:29:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a757acf7-33d2-430e-8b07-e6cbb768f4c4;
 Tue, 13 Oct 2020 10:29:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D2852ADA8;
 Tue, 13 Oct 2020 10:29:23 +0000 (UTC)
X-Inumbo-ID: a757acf7-33d2-430e-8b07-e6cbb768f4c4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602584963;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KyK3FBPEsgk1cDe7NYdwCSADIfUezZUAE+KS8wpVbEQ=;
	b=obDL5Qh24FeXCwuSarpaVCS2YCKTXhSpnCmC3GVH1B1y++6VzSXfsKRU4FkMoKn/idYLVk
	bDuBxQ45C2KDp60Fz+lrlqdhDS30sM3IkiIZgtA9xDY87VA2/5ZJEXoGYSif775Y4Kp/Zg
	L3/N/xCG1hdpz9jD7P0trq5KkFGgMMI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D2852ADA8;
	Tue, 13 Oct 2020 10:29:23 +0000 (UTC)
Subject: Re: [PATCH v2 4/4] x86/shadow: refactor shadow_vram_{get,put}_l1e()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <c6b9c903-02eb-d473-86e3-ccb67aff6cd7@suse.com>
 <51515581-19f3-5b7c-a2f9-1a0b11f8283a@suse.com>
 <20201008151556.GL19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <070ad2c4-1887-67dc-34eb-7107c9360c01@suse.com>
Date: Tue, 13 Oct 2020 12:29:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201008151556.GL19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.10.2020 17:15, Roger Pau Monné wrote:
> On Wed, Sep 16, 2020 at 03:08:40PM +0200, Jan Beulich wrote:
>> +void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
>> +                         mfn_t sl1mfn, const void *sl1e,
>> +                         const struct domain *d)
>> +{
>> +    unsigned long gfn;
>> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
>> +
>> +    ASSERT(is_hvm_domain(d));
>> +
>> +    if ( !dirty_vram /* tracking disabled? */ ||
>> +         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
>> +         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
>> +        return;
>> +
>> +    gfn = gfn_x(mfn_to_gfn(d, mfn));
>> +    /* Page sharing not supported on shadow PTs */
>> +    BUG_ON(SHARED_M2P(gfn));
>> +
>> +    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
>> +    {
>> +        unsigned long i = gfn - dirty_vram->begin_pfn;
>> +        const struct page_info *page = mfn_to_page(mfn);
>> +        bool dirty = false;
>> +        paddr_t sl1ma = mfn_to_maddr(sl1mfn) | PAGE_OFFSET(sl1e);
>> +
>> +        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
>> +        {
>> +            /* Last reference */
>> +            if ( dirty_vram->sl1ma[i] == INVALID_PADDR )
>> +            {
>> +                /* We didn't know it was that one, let's say it is dirty */
>> +                dirty = true;
>> +            }
>> +            else
>> +            {
>> +                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
>> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
>> +                if ( l1f & _PAGE_DIRTY )
>> +                    dirty = true;
>> +            }
>> +        }
>> +        else
>> +        {
>> +            /* We had more than one reference, just consider the page dirty. */
>> +            dirty = true;
>> +            /* Check that it's not the one we recorded. */
>> +            if ( dirty_vram->sl1ma[i] == sl1ma )
>> +            {
>> +                /* Too bad, we remembered the wrong one... */
>> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
>> +            }
>> +            else
>> +            {
>> +                /*
>> +                 * Ok, our recorded sl1e is still pointing to this page, let's
>> +                 * just hope it will remain.
>> +                 */
>> +            }
>> +        }
>> +
>> +        if ( dirty )
>> +        {
>> +            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
> 
> Could you use _set_bit here?

In addition to what Andrew has said - this would be a non-cosmetic
change, which I wouldn't want to do in a patch merely moving this
code.

>> @@ -1194,7 +1094,9 @@ static int shadow_set_l1e(struct domain
>>                  new_sl1e = shadow_l1e_flip_flags(new_sl1e, rc);
>>                  /* fall through */
>>              case 0:
>> -                shadow_vram_get_l1e(new_sl1e, sl1e, sl1mfn, d);
>> +                shadow_vram_get_mfn(shadow_l1e_get_mfn(new_sl1e),
>> +                                    shadow_l1e_get_flags(new_sl1e),
>> +                                    sl1mfn, sl1e, d);
> 
> As you have moved this function into a HVM build time file, don't you
> need to guard this call, or alternatively provide a dummy handler for
> !CONFIG_HVM in private.h?
> 
> Same for shadow_vram_put_mfn.

All uses are inside conditionals using shadow_mode_refcounts(), i.e.
the compiler's DCE pass will eliminate the calls. All we need are
declarations to be in scope.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 10:30:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 10:30:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6140.16168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHa4-0002nl-Kb; Tue, 13 Oct 2020 10:30:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6140.16168; Tue, 13 Oct 2020 10:30:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHa4-0002ne-GU; Tue, 13 Oct 2020 10:30:56 +0000
Received: by outflank-mailman (input) for mailman id 6140;
 Tue, 13 Oct 2020 10:30:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSHa2-0002nY-OW
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 10:30:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e54a6e2b-8432-4c2f-8c7e-36df2551e152;
 Tue, 13 Oct 2020 10:30:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3100BB07D;
 Tue, 13 Oct 2020 10:30:53 +0000 (UTC)
X-Inumbo-ID: e54a6e2b-8432-4c2f-8c7e-36df2551e152
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602585053;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cjCKpF2vmv3OiCeAo+dpD3redlGSX1a+7nXDX1jjRc0=;
	b=sdtSVN22j7drEU7OT85uHfG3qc+kzM+azQtVSt2fprem0FhHpLSs6eb1J4loh2TBMw9RVI
	WALf57peud+eHhwPTFQymGsDihjSkcO1pBwjBKeVnGv7gC/L+0dePyM1VlgvF/qQOaJFXZ
	o5/jYaAtcPnOzJUSA0HHFyExuRX/Tdc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 3100BB07D;
	Tue, 13 Oct 2020 10:30:53 +0000 (UTC)
Subject: Re: [PATCH v2 3/4] x86/shim: don't permit HVM and PV_SHIM_EXCLUSIVE
 at the same time
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>
References: <c6b9c903-02eb-d473-86e3-ccb67aff6cd7@suse.com>
 <c94e4480-96a0-34b6-a4c6-6176daa57588@suse.com>
 <20201008145229.GK19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1bd5f625-351f-5f70-e3d8-830c4108ac60@suse.com>
Date: Tue, 13 Oct 2020 12:30:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201008145229.GK19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.10.2020 16:52, Roger Pau Monné wrote:
> On Wed, Sep 16, 2020 at 03:08:00PM +0200, Jan Beulich wrote:
>> This combination doesn't really make sense (and there likely are more);
>> in particular even if the code built with both options set, HVM guests
>> wouldn't work (and I think one wouldn't be able to create one in the
>> first place). The alternative here would be some presumably intrusive
>> #ifdef-ary to get this combination to actually build (but still not
>> work) again.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I can see the desire for being able to remove code, and the point
> Andrew made about one option not making another disappear in a
> completely different menu section.
> 
> Yet I don't see how to converge the two together, unless we completely
> change our menu layouts, and even then I'm not sure I see how we could
> structure this. Hence:
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

Andrew - are you okay with this going in then? Or if not, do you have
any thoughts towards an alternative approach?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 10:33:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 10:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6144.16180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHcd-00030M-5t; Tue, 13 Oct 2020 10:33:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6144.16180; Tue, 13 Oct 2020 10:33:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHcd-00030F-2h; Tue, 13 Oct 2020 10:33:35 +0000
Received: by outflank-mailman (input) for mailman id 6144;
 Tue, 13 Oct 2020 10:33:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSHcb-000309-8i
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 10:33:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0de2bf8d-4077-43ad-89d7-7edbee472268;
 Tue, 13 Oct 2020 10:33:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BCBD6ADA8;
 Tue, 13 Oct 2020 10:33:31 +0000 (UTC)
X-Inumbo-ID: 0de2bf8d-4077-43ad-89d7-7edbee472268
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602585211;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uJpfnqEhG3clpJ6iODe36Kg1iSHOlDN5ELdcDXVv83I=;
	b=PiEMnCxUVUZTKhRuun9NuYT1mW1WRM2uLT8FYJ9IbqdmWBiUDqp0gXwTdwvQESh9oPOSRK
	2BaWmyVqooVgMcB/OOh+UhDtTwK8oS0u6RaQiAmlOe7/l68HjszQ5S9n7LI3C3XG3gJDUN
	XqKjo1xrdnQKFw8rlaLrwTNvORz3iwA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id BCBD6ADA8;
	Tue, 13 Oct 2020 10:33:31 +0000 (UTC)
Subject: Re: [PATCH v2 5/6] x86: guard against straight-line speculation past
 RET
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <fd18939c-cfc7-6de8-07f2-217f810afde1@suse.com>
 <20201008162822.GM19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d3a9a4d5-d647-013e-5c10-545af9c49429@suse.com>
Date: Tue, 13 Oct 2020 12:33:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201008162822.GM19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 08.10.2020 18:28, Roger Pau Monné wrote:
> On Mon, Sep 28, 2020 at 02:31:49PM +0200, Jan Beulich wrote:
>> Under certain conditions CPUs can speculate into the instruction stream
>> past a RET instruction. Guard against this just like 3b7dab93f240
>> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
>> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
>> achieve this that differ: A set of macros gets introduced to post-
>> process RET insns issued by the compiler (or living in assembly files).
>>
>> Unfortunately for clang this requires further features their built-in
>> assembler doesn't support: We need to be able to override insn mnemonics
>> produced by the compiler (which may be impossible, if internally
>> assembly mnemonics never get generated), and we want to use \(text)
>> escaping / quoting in the auxiliary macro.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Code LGTM.
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> TBD: Should this depend on CONFIG_SPECULATIVE_HARDEN_BRANCH?
> 
> I don't see the additions done in 3b7dab93f240 being guarded by
> CONFIG_SPECULATIVE_HARDEN_BRANCH, so in that regard I would say no.
> However those are already guarded by CONFIG_INDIRECT_THUNK so it's
> slightly weird that the addition of such protections cannot be turned
> off in any way.
> 
> I would be fine with having the additions done in 3b7dab93f240
> protected by CONFIG_SPECULATIVE_HARDEN_BRANCH, and then the additions
> done here also.

Okay, perhaps I'll make a separate patch then to add the conditional
at all respective places.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 10:47:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 10:47:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6147.16194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHq8-00041Z-Ep; Tue, 13 Oct 2020 10:47:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6147.16194; Tue, 13 Oct 2020 10:47:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHq8-00041S-Ak; Tue, 13 Oct 2020 10:47:32 +0000
Received: by outflank-mailman (input) for mailman id 6147;
 Tue, 13 Oct 2020 10:47:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSHq7-00040x-9W
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 10:47:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id edc71109-715c-4b07-9f87-f442c17524cc;
 Tue, 13 Oct 2020 10:47:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSHpz-0002Pq-LP; Tue, 13 Oct 2020 10:47:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSHpz-0002gc-DL; Tue, 13 Oct 2020 10:47:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSHpz-0005va-Cq; Tue, 13 Oct 2020 10:47:23 +0000
X-Inumbo-ID: edc71109-715c-4b07-9f87-f442c17524cc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1Hu8cKKGpESMhGRBH72vfve5EjaadX/onE05WJwr9uA=; b=iB6E9bN43WZN0llGngE65Rxr2/
	5jMj5uTRbKIUViH7kbhqrWWyn+Anlxpmo8Sm446pBZwiMAq50tc8beqDSq66TRKRPtP1G94KLX0Qp
	3nL5Kg11PGJK/bO1+Zz/f/+/OinjTuOVR2kU0LzkIBWdx0/aATXRBeNPC3iopaHcf1S0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155768-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155768: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 10:47:23 +0000

flight 155768 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155768/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    4 days
Failing since        155612  2020-10-09 18:01:22 Z    3 days   27 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM
    and architecture-specific files, into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
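The objcopy step mentioned above might look roughly like the following. The file names are placeholders, and real builds typically also need --change-section-vma flags with addresses appropriate for the Xen image layout, so treat this as a sketch rather than the canonical recipe:

```shell
# Sketch: bundle config, kernel and initrd into a unified EFI binary.
# File names are placeholders; consult the Xen EFI documentation for
# the exact flags and section addresses required.
objcopy \
    --add-section .config=xen.cfg \
    --add-section .kernel=vmlinuz \
    --add-section .ramdisk=initrd.img \
    xen.efi xen.unified.efi
```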
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    were allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 10:50:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 10:50:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6149.16206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHtE-0004rW-UH; Tue, 13 Oct 2020 10:50:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6149.16206; Tue, 13 Oct 2020 10:50:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSHtE-0004rP-Qh; Tue, 13 Oct 2020 10:50:44 +0000
Received: by outflank-mailman (input) for mailman id 6149;
 Tue, 13 Oct 2020 10:50:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNIz=DU=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kSHtD-0004rJ-Py
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 10:50:43 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3ae139a-c45a-4f3f-86fa-a8fa42e3bb4d;
 Tue, 13 Oct 2020 10:50:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602586241;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=bgPSfY5K8mqechUVxCHXmSrJrO0bpaOpkjX5pNnorcI=;
  b=VBI1Tb6w4bcjWUIv6fwQCLtTJvVKrkNZQt+uYHn8oITs5JO9NJc/u6DS
   vIH+AH1t9KJz4QTJRuLA4i5D+Abbakc4dx8zJudkpJbUE1oFiWgVWhWrO
   NPtL06h0N2zqmLIJWPHyzL88AiBbsNVWwK4Wg9BsEr/avPlDMq1WLfaBq
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: toMqVemWoncfsNyagp7tIjtaccWD2ZM0P2gqiQimPUlzyLy7Is/ri76BAspjK4j9vC/qqNQk1n
 OYrdKId7Dusg/GtjUdYoKIgACrqwTr0+GAP6AnYTVPQtLr6xe0A6wPVayA4kDJxaoZirroMeIi
 sx411r4Fq4Nvz9MyB5VRr6uWQL8OmsEfF455ipbnshQZFRIqBS1+mwLKv/6YRHMZfFJs2pEw0m
 QrbqJpTGw5Bd//VQa2lzIb1o55gzSVfDouo52MXGdDG7cwuxcggdUpxGus7y3Qa/vm2YweMA6l
 uJU=
X-SBRS: 2.5
X-MesageID: 29126438
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,370,1596513600"; 
   d="scan'208";a="29126438"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<wl@xen.org>, <iwj@xenproject.org>, Igor Druzhinin
	<igor.druzhinin@citrix.com>
Subject: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI table region
Date: Tue, 13 Oct 2020 11:50:16 +0100
Message-ID: <1602586216-27371-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

The ACPI specification describes memory marked with the regular
"ACPI data" type as reclaimable by the guest. Although the guest shouldn't
really reclaim it if it wants kexec or similar functionality to work, there
could still be ambiguities in treating these regions as potentially regular
RAM.

One such example is SeaBIOS, which currently reports "ACPI data" regions as
RAM to the guest in its e801 call. The guest then tries to use this region
for initrd placement and gets stuck. While arguably SeaBIOS needs to be fixed
here, that is just one example of the potential problems from using a
reclaimable memory type.

Flip the type to "ACPI NVS", which doesn't have this ambiguity and is
described by the spec as non-reclaimable (so it can never be treated as RAM).

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 tools/firmware/hvmloader/e820.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/firmware/hvmloader/e820.c b/tools/firmware/hvmloader/e820.c
index 38bcf18..8870099 100644
--- a/tools/firmware/hvmloader/e820.c
+++ b/tools/firmware/hvmloader/e820.c
@@ -202,16 +202,17 @@ int build_e820_table(struct e820entry *e820,
     nr++;
 
     /*
-     * Mark populated reserved memory that contains ACPI tables as ACPI data.
+     * Mark populated reserved memory that contains ACPI tables as ACPI NVS.
      * That should help the guest to treat it correctly later: e.g. pass to
-     * the next kernel on kexec or reclaim if necessary.
+     * the next kernel on kexec and prevent space reclaim which is possible
+     with regular ACPI data type according to ACPI spec v6.3.
      */
 
     if ( acpi_enabled )
     {
         e820[nr].addr = RESERVED_MEMBASE;
         e820[nr].size = acpi_mem_end - RESERVED_MEMBASE;
-        e820[nr].type = E820_ACPI;
+        e820[nr].type = E820_NVS;
         nr++;
     }
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 11:05:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 11:05:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6151.16217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSI77-0005x5-5j; Tue, 13 Oct 2020 11:05:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6151.16217; Tue, 13 Oct 2020 11:05:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSI77-0005wy-2h; Tue, 13 Oct 2020 11:05:05 +0000
Received: by outflank-mailman (input) for mailman id 6151;
 Tue, 13 Oct 2020 11:05:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSI75-0005ws-8p
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:05:03 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43c0be4f-390d-445f-b550-b418b462d19a;
 Tue, 13 Oct 2020 11:05:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602587101;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Fil6qXpx5Jx7k0JgEfBa/teOaiR03jyju74BBwrClFY=;
  b=dtwGcF+wIYrOGwMJPbC0axbS8XWCD8yiuTpk6Z0Qm3ny9oA3XDC5pOUT
   xC7K80w+LO4oGdB5RJCX+8LvNMQVEns3TRmTJP0QtgpGptsd7pJaG6MdF
   43TBRFMboxEaKWu/huhvCXToZqahdIZtRVCLgrCwwindCgMDDNPE86q1X
   4=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: PFdo3m9GolughlWOToljGSDgRxnX/QljE/n0lI4kABxWYsGPJgChcaLRSA57jgOLk2PE27o/U+
 6wdIv3jGP41WuHULklUGcRylZhUG96ADLCa1zGDFuAq/UqLpbVUbBTA2gDXqJR4ngGJrwXuo+z
 MX1B6mjZCLuDwUtM1TQx6QGVxtxFUU9Zlp/PPmSS+azHQyEU4SEEp1ZvkbpZeo5tQehkAEfiz2
 +3x6VtKTQV4A9LVOyzDG/4D0KcMECpjXRoVwwMoGSafoptNymihXndFIYAqfSteazY+1MNNWf3
 Sow=
X-SBRS: 2.5
X-MesageID: 29127368
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,370,1596513600"; 
   d="scan'208";a="29127368"
Subject: Re: [PATCH v2 1/6] x86: replace __ASM_{CL,ST}AC
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <931e6d88-803e-36d6-da40-080879ec45a2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <54f12dc6-fb5d-d6b4-3d5f-7267f0e0ef00@citrix.com>
Date: Tue, 13 Oct 2020 12:04:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <931e6d88-803e-36d6-da40-080879ec45a2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 28/09/2020 13:29, Jan Beulich wrote:
> --- a/xen/arch/x86/arch.mk
> +++ b/xen/arch/x86/arch.mk
> @@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
>  $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
>  $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
>  $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
> +$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)

Kconfig

> --- /dev/null
> +++ b/xen/include/asm-x86/asm-defns.h
> @@ -0,0 +1,9 @@
> +#ifndef HAVE_AS_CLAC_STAC
> +.macro clac
> +    .byte 0x0f, 0x01, 0xca
> +.endm
> +
> +.macro stac
> +    .byte 0x0f, 0x01, 0xcb
> +.endm
> +#endif
> --- a/xen/include/asm-x86/asm_defns.h
> +++ b/xen/include/asm-x86/asm_defns.h

We cannot have two files which differ by the vertical positioning of a dash.

How about asm-insn.h for the former, seeing as that is what it contains.

~Andrew
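For context on the arch.mk hunk quoted above: as-option-add probes whether the assembler accepts an instruction and only then adds the define. Conceptually it boils down to something like the following shell sketch; the `cc` driver name and flag spelling are assumptions for illustration, not what the Xen build system literally runs:

```shell
# Conceptual sketch of the as-option-add probe: try to assemble a
# one-line test input, and emit the define only on success.
if echo 'clac' | cc -x assembler -c -o /dev/null - 2>/dev/null; then
    echo '-DHAVE_AS_CLAC_STAC'
fi
```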


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 11:21:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 11:21:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6153.16230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIMg-0007fj-JL; Tue, 13 Oct 2020 11:21:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6153.16230; Tue, 13 Oct 2020 11:21:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIMg-0007fb-Fq; Tue, 13 Oct 2020 11:21:10 +0000
Received: by outflank-mailman (input) for mailman id 6153;
 Tue, 13 Oct 2020 11:21:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSIMe-0007fW-JF
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:21:08 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5044220c-bf42-429b-b4cf-8039616a1f60;
 Tue, 13 Oct 2020 11:21:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602588065;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=22HRaNEeiCSKk6IM3vfGJUohKRLKp++5B2v9SNKLOqg=;
  b=gbxnOnnTR4puxJLk1UsATt1nmLaVArXimOBRM3xWAXxzsHpWGEt2NtDK
   5TPqIuOSJrey4zkrZCjC6HmA/P6g16od5nIaGtT/l6zq79wC2gu38y6M7
   4e9mjsmPs8E4nwMakTel4WociyJNXePS+J/6jjxyf7aSAOBuKwxhwXICe
   Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: L1CABsBfV+AFo9dOWgNZgRGF/uIlF/nykEUR2JpEkAz9+9qIZZwoGNE7alNK1EhEz6BhCdEaxl
 3ftNIETqzeGQ7alrKJOE4ZOREq/2yFIWdq3tUNVfGOyHvm2EnTYT0Pld/huGkpr76kEg4cBRmm
 ATKh//fOEv2JwZvsSyyLIHgNM4urjKDLF2R1L2Q30wktfxJDESZV/PYvTa991lJC1Qcs6iqY6p
 KgpD4Hwy3udz5Oj1P6/J2WDtmWG5fNHNk0Q5v/b9DaGq5KrkBq/Wndh1gIPrkM+hKs7ECozUKs
 IsE=
X-SBRS: 2.5
X-MesageID: 28943955
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,370,1596513600"; 
   d="scan'208";a="28943955"
Subject: Re: [PATCH v2 2/6] x86: reduce CET-SS related #ifdef-ary
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <d8561c46-a6df-3f64-78df-f84c649a99b4@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <88bf4e8d-055f-dfc6-5090-c2628d383632@citrix.com>
Date: Tue, 13 Oct 2020 12:20:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d8561c46-a6df-3f64-78df-f84c649a99b4@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 28/09/2020 13:30, Jan Beulich wrote:
> Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
> to introduce a number of #ifdef-s to make the build work with older tool
> chains. Introduce an assembler macro covering for tool chains not
> knowing of CET-SS, allowing some conditionals where just SETSSBSY is the
> problem to be dropped again.
>
> No change to generated code.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Now that I've done this I'm no longer sure which direction is better to
> follow: On one hand this introduces dead code (even if just NOPs) into
> CET-SS-disabled builds. Otoh this is a step towards breaking the tool
> chain version dependency of the feature.

I've said before.  You cannot break the toolchain dependency without
hardcoding memory operands.  I'm not prepared to let that happen.

There is no problem requiring newer toolchains for newer features
(you're definitely not having CET-IBT, for example), and there is an
(unacceptably, IMO) large cost to this work.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 11:21:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 11:21:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6154.16242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSINM-0007ls-SD; Tue, 13 Oct 2020 11:21:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6154.16242; Tue, 13 Oct 2020 11:21:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSINM-0007ll-PF; Tue, 13 Oct 2020 11:21:52 +0000
Received: by outflank-mailman (input) for mailman id 6154;
 Tue, 13 Oct 2020 11:21:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSINM-0007lf-4i
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:21:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a543055-1195-44ac-878f-41d0a4220454;
 Tue, 13 Oct 2020 11:21:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 83017AD5F;
 Tue, 13 Oct 2020 11:21:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602588110;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=o/zmrIFIfJ4lqDoscOzH0EH9jaWm465zjwJTXzvEocg=;
	b=AHq2ZasCYSQ457HHNnUG7DVrZrrvYHZHU1e3+OLYldpDUyPm5lLtyZMspTi7omSSju7U0a
	YV2gmiudOdTM5LfOYectv339yDEJrjfO0RId/Rh8L57PjgcAIRIcXITlv6a5QFGrH+4JCx
	3UKeglCyc+01iw4R28D6K1zNFmVAfJo=
Subject: Re: [PATCH v2 1/6] x86: replace __ASM_{CL,ST}AC
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <931e6d88-803e-36d6-da40-080879ec45a2@suse.com>
 <54f12dc6-fb5d-d6b4-3d5f-7267f0e0ef00@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c8d3eb59-4bf1-c832-3ba2-e2d7a72126d1@suse.com>
Date: Tue, 13 Oct 2020 13:21:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <54f12dc6-fb5d-d6b4-3d5f-7267f0e0ef00@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.2020 13:04, Andrew Cooper wrote:
> On 28/09/2020 13:29, Jan Beulich wrote:
>> --- a/xen/arch/x86/arch.mk
>> +++ b/xen/arch/x86/arch.mk
>> @@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
>>  $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
>>  $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
>>  $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
>> +$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
> 
> Kconfig

I know that's your view, and you know I disagree. I don't see the
thread I had started to have led to any consensus.

>> --- /dev/null
>> +++ b/xen/include/asm-x86/asm-defns.h
>> @@ -0,0 +1,9 @@
>> +#ifndef HAVE_AS_CLAC_STAC
>> +.macro clac
>> +    .byte 0x0f, 0x01, 0xca
>> +.endm
>> +
>> +.macro stac
>> +    .byte 0x0f, 0x01, 0xcb
>> +.endm
>> +#endif
>> --- a/xen/include/asm-x86/asm_defns.h
>> +++ b/xen/include/asm-x86/asm_defns.h
> 
> We cannot have two files which differ by the vertical positioning of a dash.

Why "cannot"? One is the helper of the other, so them being named almost
identically is quite sensible imo (and no-one is supposed to include the
new one directly). In any event I'd at most see this be "we don't want to".

> How about asm-insn.h for the former, seeing as that is what it contains.

Until "x86: fold indirect_thunk_asm.h into asm-defns.h", where it starts
to be more than just plain insn replacements. And I suspect more non-insn
macros will appear over time. I'd have suggested asm-macros.h in case
consensus really can't be reached on the present name, but we have a
(generated) file of this name already.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 11:26:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 11:26:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6158.16253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIRr-00080U-Hk; Tue, 13 Oct 2020 11:26:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6158.16253; Tue, 13 Oct 2020 11:26:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIRr-00080N-Eo; Tue, 13 Oct 2020 11:26:31 +0000
Received: by outflank-mailman (input) for mailman id 6158;
 Tue, 13 Oct 2020 11:26:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSIRq-00080I-1T
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:26:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa1e477f-fe02-4aab-ba8f-6668161e0707;
 Tue, 13 Oct 2020 11:26:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 98BFFAE42;
 Tue, 13 Oct 2020 11:26:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602588388;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kw07yfL6az0UmJvBGC4MJOmiCkthwdIYWHMwsnc9tMo=;
	b=EjnVX6g87wKpgZ0PaA4s8WTHzYew4vBAYaVHXELpKNtd3Ltz9ZIVv5WhigTdyZN+FJ9+iB
	5f1xBtnysYO2j8AXuoTxmyuE8Nm/LtW92VM/fKp71bDtBS6YSy93ydSVhCeqWTpDijXCip
	2HBWsvbNw0rvGXwxAv5/VilB/xA0rbA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 98BFFAE42;
	Tue, 13 Oct 2020 11:26:28 +0000 (UTC)
Subject: Re: [PATCH v2 2/6] x86: reduce CET-SS related #ifdef-ary
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <d8561c46-a6df-3f64-78df-f84c649a99b4@suse.com>
 <88bf4e8d-055f-dfc6-5090-c2628d383632@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bae57d19-b9dc-ae8c-5979-b269feb05428@suse.com>
Date: Tue, 13 Oct 2020 13:26:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <88bf4e8d-055f-dfc6-5090-c2628d383632@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.10.2020 13:20, Andrew Cooper wrote:
> On 28/09/2020 13:30, Jan Beulich wrote:
>> Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
>> to introduce a number of #ifdef-s to make the build work with older tool
>> chains. Introduce an assembler macro covering for tool chains not
>> knowing of CET-SS, allowing some conditionals where just SETSSBSY is the
>> problem to be dropped again.
>>
>> No change to generated code.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> Now that I've done this I'm no longer sure which direction is better to
>> follow: On one hand this introduces dead code (even if just NOPs) into
>> CET-SS-disabled builds. Otoh this is a step towards breaking the tool
>> chain version dependency of the feature.
> 
> I've said before.  You cannot break the toolchain dependency without
> hardcoding memory operands.  I'm not prepared to let that happen.
> 
> There is no problem requiring newer toolchains for newer features
> (you're definitely not having CET-IBT, for example), and there is a
> (unacceptably, IMO) large cost to this work.

I'm aware of your view. What remains unclear to me is whether your
reply is merely a remark on this post-commit-message comment, or
whether it is an objection to the tidying (as I view it) that the
patch does.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 11:26:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 11:26:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6159.16266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIS4-00084U-RO; Tue, 13 Oct 2020 11:26:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6159.16266; Tue, 13 Oct 2020 11:26:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIS4-00084M-NO; Tue, 13 Oct 2020 11:26:44 +0000
Received: by outflank-mailman (input) for mailman id 6159;
 Tue, 13 Oct 2020 11:26:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4xF=DU=casper.srs.infradead.org=batv+347c3dad313745b9998d+6260+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kSIS2-00083j-EX
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:26:42 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6054b378-931c-45be-bfa8-50d2f3a59233;
 Tue, 13 Oct 2020 11:26:40 +0000 (UTC)
Received: from hch by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat
 Linux)) id 1kSIR6-0001VK-7P; Tue, 13 Oct 2020 11:25:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=h4xF=DU=casper.srs.infradead.org=batv+347c3dad313745b9998d+6260+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kSIS2-00083j-EX
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:26:42 +0000
X-Inumbo-ID: 6054b378-931c-45be-bfa8-50d2f3a59233
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6054b378-931c-45be-bfa8-50d2f3a59233;
	Tue, 13 Oct 2020 11:26:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=ap+vugPRXoOSJhloW5EJc9FGxy9ZwiLwbnG+7+pEMhA=; b=a/Sp1GZZiVtkIrsbDZFKKgQTkQ
	FS+JHTt9pp+5vCrBdk0ac5b7U8ZgZGFScrKiULCJv4PZD4wWqOSzzq06ZoGh/8vFLI33VuvYYBdii
	wZ4VlXJvl5fnmlD+q4pIJJmvrTs/0jX/FIDmEAYUX2+Mt6vIwB3sAbRYRgWIA8hB4i4EWyZuWPFOQ
	rIaV+GwSaVBgBLKvO/SsFSj7I46VHFxg38PLmJQ+Oh1DkRIQcIx5NIWLGcWnDXshQ14JMdlpURiF2
	LxyBxRGv+1Wkfh31jf6dzRr+U4xHgvz3PsshAxeZoaF0OQPY6Fl5VA2XZOxLonra6CTBCfgsfUBp/
	6eTYHzgA==;
Received: from hch by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kSIR6-0001VK-7P; Tue, 13 Oct 2020 11:25:44 +0000
Date: Tue, 13 Oct 2020 12:25:44 +0100
From: Christoph Hellwig <hch@infradead.org>
To: ira.weiny@intel.com
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Christoph Hellwig <hch@infradead.org>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 24/58] fs/freevxfs: Utilize new kmap_thread()
Message-ID: <20201013112544.GA5249@infradead.org>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-25-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201009195033.3208459-25-ira.weiny@intel.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

> -	kaddr = kmap(pp);
> +	kaddr = kmap_thread(pp);
>  	memcpy(kaddr, vip->vii_immed.vi_immed + offset, PAGE_SIZE);
> -	kunmap(pp);
> +	kunmap_thread(pp);

You only Cced me on this particular patch, which means I have absolutely
no idea what kmap_thread and kunmap_thread actually do, and thus can't
provide an informed review.

That being said, I think your life would be a lot easier if you added
helpers for the above code sequence and its counterpart that copies
to a potential highmem page first, as that hides the implementation
details from most users.
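For illustration, the helper Christoph is suggesting might look like the
sketch below. This is a hedged sketch, not code from the series:
`struct page`, `kmap_thread()` and `kunmap_thread()` are stubbed out so
it stands alone, and `memcpy_to_page_thread()` is a hypothetical name.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Stand-ins so the sketch is self-contained; in the kernel these are
 * the real struct page and the kmap_thread()/kunmap_thread() calls
 * introduced by the PKS/PMEM series under review.
 */
struct page { char data[4096]; };
static void *kmap_thread(struct page *p)  { return p->data; }
static void kunmap_thread(struct page *p) { (void)p; }

/*
 * Hypothetical helper hiding the map/copy/unmap sequence, so callers
 * like the freevxfs hunk above need not open-code the mapping.
 */
static void memcpy_to_page_thread(struct page *p, size_t offset,
                                  const void *src, size_t len)
{
    char *kaddr = kmap_thread(p);

    memcpy(kaddr + offset, src, len);
    kunmap_thread(p);
}
```

The counterpart that copies out of a potentially-highmem page would be
symmetric.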


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 11:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 11:40:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6165.16278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIfL-0001SK-3V; Tue, 13 Oct 2020 11:40:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6165.16278; Tue, 13 Oct 2020 11:40:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIfL-0001SD-0Q; Tue, 13 Oct 2020 11:40:27 +0000
Received: by outflank-mailman (input) for mailman id 6165;
 Tue, 13 Oct 2020 11:40:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSIfJ-0001S8-D2
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:40:25 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7705070-58ea-4cd6-9dbd-aef777ae3196;
 Tue, 13 Oct 2020 11:40:24 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSIfJ-0001S8-D2
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:40:25 +0000
X-Inumbo-ID: b7705070-58ea-4cd6-9dbd-aef777ae3196
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b7705070-58ea-4cd6-9dbd-aef777ae3196;
	Tue, 13 Oct 2020 11:40:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602589224;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=DfFjamiCwUGrGtEMDDzqdikRn14ktCx29euNiFykP6c=;
  b=CJzagFh3oslPNOSxmj7VWwbKfT0b8C0CE1A56OsQhxaIgVwNCvg9QCth
   oxcIvjStuABXMr3/+drTjy4DySSZKHTOFmVKYyZTsPIelexQHR91Blx5R
   SvL7NynEt7BDA+0MSJfKDnN+w1EPWl0jN8/2Jq8DK2y8tOG8JbMNDxroA
   s=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 7bWwEJufi0L18VOpqZtH9ELZNwbXg+PlHI7TCv7k/kUu5bdo4fTkrPPl87liPyvNQbpyQKRYk3
 kDbg5iFd99aCcPG0sz88MuuotseBunBQFTUjGtPghvYGm+zIJ8Spye0oUGsxhsGDLfv7C63MYj
 JqOV9TWAZiXEUI7TzjwoiyudUagJKAF1hPn66vpUcrrixSzm6br0jG4BSTJKQnMMdtzszVw61C
 VDFGvUFTVRXHoAU1ENDTNoegQUzGlK55vln6zR9Lc2DTPdJ4/dlS0lTnPWtslkNWUrWhiBLJr+
 fxc=
X-SBRS: 2.5
X-MesageID: 28877776
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,370,1596513600"; 
   d="scan'208";a="28877776"
Subject: Re: [PATCH v2 2/6] x86: reduce CET-SS related #ifdef-ary
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Liu
	<wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <d8561c46-a6df-3f64-78df-f84c649a99b4@suse.com>
 <4120e048-5314-afba-d921-9f4651a61eaa@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3a2cfb6a-68cd-dbc7-c0c8-53b810b4eede@citrix.com>
Date: Tue, 13 Oct 2020 12:40:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4120e048-5314-afba-d921-9f4651a61eaa@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 28/09/2020 13:37, Jan Beulich wrote:
> On 28.09.2020 14:30, Jan Beulich wrote:
>> Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
>> to introduce a number of #ifdef-s to make the build work with older tool
>> chains. Introduce an assembler macro covering for tool chains not
>> knowing of CET-SS, allowing some conditionals where just SETSSBSY is the
>> problem to be dropped again.
>>
>> No change to generated code.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> Now that I've done this I'm no longer sure which direction is better to
>> follow: On one hand this introduces dead code (even if just NOPs) into
>> CET-SS-disabled builds.
> A possible compromise here might be to ...
>
>> --- a/xen/include/asm-x86/asm-defns.h
>> +++ b/xen/include/asm-x86/asm-defns.h
>> @@ -7,3 +7,9 @@
>>      .byte 0x0f, 0x01, 0xcb
>>  .endm
>>  #endif
>> +
>> +#ifndef CONFIG_HAS_AS_CET_SS
>> +.macro setssbsy
>> +    .byte 0xf3, 0x0f, 0x01, 0xe8
>> +.endm
>> +#endif
> ... comment out this macro's body. If we went this route, incssp
> and wrssp could be dealt with in similar ways, to allow dropping
> further #ifdef-s.

No, because now you've got something which reads as a real instruction
but which sometimes disappears into nothing.  (Interestingly, zero-length
alternatives do appear to compile, and this is clearly a bug.)

The thing which matters is the clarity of code in its surrounding
context, and this isn't an improvement.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 11:44:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 11:44:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6167.16290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIje-0001ey-LU; Tue, 13 Oct 2020 11:44:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6167.16290; Tue, 13 Oct 2020 11:44:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIje-0001er-IJ; Tue, 13 Oct 2020 11:44:54 +0000
Received: by outflank-mailman (input) for mailman id 6167;
 Tue, 13 Oct 2020 11:44:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSIjd-0001em-DF
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:44:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb37e374-d1f9-41ad-8843-7b712c331079;
 Tue, 13 Oct 2020 11:44:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8774AAD39;
 Tue, 13 Oct 2020 11:44:51 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSIjd-0001em-DF
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:44:53 +0000
X-Inumbo-ID: eb37e374-d1f9-41ad-8843-7b712c331079
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id eb37e374-d1f9-41ad-8843-7b712c331079;
	Tue, 13 Oct 2020 11:44:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602589491;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=M0ckmOjrWj4qFkWbwQzLX9Dnp9ZMcEHxH0eIXmqXS/c=;
	b=MZB46vjr/lDxEWdA1+72aKChgOvVUWxNe+BJCtAAf6yLOpozRCUZMQ4veGgj1NZu6CuB65
	9Nr+SqjhJLK7EYt3XHoyUABRvec3Zl3TtBzETybilpwvW75qt68ZvMMzoyWSkWGOdIyUQ/
	2XG+BWBQtC81IHfIPuIEV4WAAP7MJKg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 8774AAD39;
	Tue, 13 Oct 2020 11:44:51 +0000 (UTC)
Subject: Re: [PATCH v9 1/8] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: paul@xen.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 'Julien Grall' <julien@xen.org>, 'George Dunlap' <george.dunlap@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-2-paul@xen.org>
 <2e51a5cb-df0c-d564-2a7b-5f2abbb5872c@citrix.com>
 <000201d69aed$fe07a990$fa16fcb0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <769dcdc2-a77d-47fa-e66a-2e2d92ec0e1c@suse.com>
Date: Tue, 13 Oct 2020 13:44:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <000201d69aed$fe07a990$fa16fcb0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.10.2020 10:03, Paul Durrant wrote:
>> From: Andrew Cooper <andrew.cooper3@citrix.com>
>> Sent: 02 October 2020 22:20
>>
>> On 24/09/2020 14:10, Paul Durrant wrote:
>>> +int domain_save_end(struct domain_context *c)
>>> +{
>>> +    struct domain *d = c->domain;
>>> +    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */
>>
>> DOMAIN_SAVE_ALIGN - (c->len & (DOMAIN_SAVE_ALIGN - 1))
>>
>> isn't vulnerable to overflow.
>>
> 
> ...and significantly uglier code. What's actually wrong with what I wrote?

I don't think there's anything "wrong" or "vulnerable" here, but
I can still see Andrew's point. The "vulnerable" aspect applies
only in the (highly hypothetical I think) cases of either
sizeof(size_t) < sizeof(int) or size_t being a signed type, afaict.
But since it's easy (and imo not "significantly uglier") to write
code that is free of any wrapping or overflowing behavior, I
think it is sensible to actually write it that way.
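For what it's worth, a third form sidesteps the question entirely. This
is a sketch assuming DOMAIN_SAVE_ALIGN is a power of two, with pad_to()
as an illustrative name (not from the series): (0 - len) & (align - 1)
equals ROUNDUP(len, align) - len for every len, performs no addition
that could wrap, and, unlike align - (len & (align - 1)), still yields
0 for already-aligned lengths.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Padding needed to reach the next multiple of a power-of-two
 * alignment.  (0 - len) & (align - 1) is the classic overflow-free
 * identity for ROUNDUP(len, align) - len: there is no intermediate
 * sum, so nothing can wrap even for len close to SIZE_MAX.
 */
static size_t pad_to(size_t len, size_t align)
{
    return (0 - len) & (align - 1);
}
```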

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 11:50:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 11:50:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6169.16302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIoY-0001qz-6w; Tue, 13 Oct 2020 11:49:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6169.16302; Tue, 13 Oct 2020 11:49:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIoY-0001qs-3c; Tue, 13 Oct 2020 11:49:58 +0000
Received: by outflank-mailman (input) for mailman id 6169;
 Tue, 13 Oct 2020 11:49:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSIoW-0001qm-QB
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:49:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f4323a0-3ce1-4249-b1bf-464363253b8a;
 Tue, 13 Oct 2020 11:49:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2F48CB23E;
 Tue, 13 Oct 2020 11:49:54 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSIoW-0001qm-QB
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 11:49:56 +0000
X-Inumbo-ID: 7f4323a0-3ce1-4249-b1bf-464363253b8a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7f4323a0-3ce1-4249-b1bf-464363253b8a;
	Tue, 13 Oct 2020 11:49:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602589794;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B6kOkuuNc8zz/wNm+JFNCCi7b3v2KE6SB93RDwm9Ftc=;
	b=XD1ym0bufZc8/uinCcBbRGbvOHv3QgBFfYVDyCCAzsMWMdU7sB6mw2moakBOmfX+nw7MXR
	LHCxb47q8B58LcxCLDfR0ae4ZcbpBm/JeHVxWgCb547UVlEh0VYOpGUbUiCi3vy4C3Pwe+
	9A/OJnklR8nQyW3szvsw73FgPARpbJY=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2F48CB23E;
	Tue, 13 Oct 2020 11:49:54 +0000 (UTC)
Subject: Re: [PATCH v9 6/8] common/domain: add a domain context record for
 shared_info...
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20200924131030.1876-1-paul@xen.org>
 <20200924131030.1876-7-paul@xen.org>
 <a82cfb40-9ce5-d8ed-a2f7-1b02fc6e27e6@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8a71c300-fb05-d9e4-7d4d-17814db1edf8@suse.com>
Date: Tue, 13 Oct 2020 13:49:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <a82cfb40-9ce5-d8ed-a2f7-1b02fc6e27e6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.10.2020 12:39, Andrew Cooper wrote:
> On 24/09/2020 14:10, Paul Durrant wrote:
>> +static int load_shared_info(struct domain *d, struct domain_context *c)
>> +{
>> +    struct domain_shared_info_context ctxt;
>> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
>> +    unsigned int i;
>> +    int rc;
>> +
>> +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    if ( i ) /* expect only a single instance */
>> +        return -ENXIO;
>> +
>> +    rc = domain_load_data(c, &ctxt, hdr_size);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    if ( ctxt.buffer_size > sizeof(shared_info_t) ||
>> +         (ctxt.flags & ~DOMAIN_SAVE_32BIT_SHINFO) )
>> +        return -EINVAL;
>> +
>> +    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
>> +    {
>> +#ifdef CONFIG_COMPAT
>> +        has_32bit_shinfo(d) = true;
> 
> d->arch.has_32bit_shinfo

But this is common code, i.e. using d->arch directly would be a
layering violation. I know your dislike of lvalues disguised by
function-like macros, but what would you do instead?
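To make the layering point concrete, here is a minimal self-contained
illustration (abbreviated structs, not Xen's actual definitions, though
x86 defines the macro along these lines): common code only ever writes
the function-like macro, and the arch decides what it expands to.

```c
#include <assert.h>
#include <stdbool.h>

/* Abbreviated stand-ins for Xen's domain structures. */
struct arch_domain { bool has_32bit_shinfo; };
struct domain { struct arch_domain arch; };

/*
 * Arch-provided accessor: common code says has_32bit_shinfo(d),
 * never d->arch.has_32bit_shinfo, keeping the d->arch layout a
 * per-arch detail.  Because it expands to an lvalue, it can appear
 * on either side of an assignment, which is the "disguised lvalue"
 * Jan refers to.
 */
#define has_32bit_shinfo(d) ((d)->arch.has_32bit_shinfo)
```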

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 12:00:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 12:00:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6172.16313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIyv-0003VI-BK; Tue, 13 Oct 2020 12:00:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6172.16313; Tue, 13 Oct 2020 12:00:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSIyv-0003VB-8M; Tue, 13 Oct 2020 12:00:41 +0000
Received: by outflank-mailman (input) for mailman id 6172;
 Tue, 13 Oct 2020 12:00:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSIyt-0003V6-Tu
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:00:39 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee652bcc-c1e5-4ead-a737-9782639bd9d7;
 Tue, 13 Oct 2020 12:00:38 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSIyt-0003V6-Tu
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:00:39 +0000
X-Inumbo-ID: ee652bcc-c1e5-4ead-a737-9782639bd9d7
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ee652bcc-c1e5-4ead-a737-9782639bd9d7;
	Tue, 13 Oct 2020 12:00:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602590438;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=E7GxwzWxftx+vIFUDU59arHjDwX2Rf4nqMamUhfigvo=;
  b=H4Trpl0boDZTq2mCUu6f3bGP1RofRtItfvTRSXkwctn9N3rwibT0hkLp
   n1qvv8MxQWRRLT0Beah1ZP3OjChThEp7H24CnoZk15b9EJxX8bZuMyDfX
   zeWsenqWkLGxQ5aff1+6ei0o2l6UUy+MKxVQIIq6Uqi6AdMRl/0d5Opvh
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 0KwUKmoaGeWGStoNHrjZPXyYVUuFeZJbUlxYDi1D/EomSFlE7h0UjE0G2aInXYc5jb4PZqZVGC
 GkGJ2P87iarkbqLeBkkOPLJBFIePWeXX6NFr78ik7AQwpu2Sqtyy5WinvVCAVwj7w/onczLPH8
 ScBgtbdzF/gQThVZRnvtt9duVvHh5d8CsZsr0uj9mvNenZmkgkqTv3xV6Gx8vxqjIIYb0886pF
 T1r5mEtawDVRzYmrD7pjKxZwE3iKIwh80u+z29Mxl8j5xVv2ZE9xiF8z5Ip305PhWOIpAu64n6
 PrY=
X-SBRS: 2.5
X-MesageID: 29910251
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,370,1596513600"; 
   d="scan'208";a="29910251"
Subject: Re: [PATCH v2 5/6] x86: guard against straight-line speculation past
 RET
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <fd18939c-cfc7-6de8-07f2-217f810afde1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <447525bc-662d-ff52-6b73-e6e1a61767ec@citrix.com>
Date: Tue, 13 Oct 2020 13:00:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fd18939c-cfc7-6de8-07f2-217f810afde1@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 28/09/2020 13:31, Jan Beulich wrote:
> Under certain conditions CPUs can speculate into the instruction stream
> past a RET instruction. Guard against this just like 3b7dab93f240
> ("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
> did - by inserting an "INT $3" insn. It's merely the mechanics of how to
> achieve this that differ: A set of macros gets introduced to post-
> process RET insns issued by the compiler (or living in assembly files).
>
> Unfortunately for clang this requires further features their built-in
> assembler doesn't support: We need to be able to override insn mnemonics
> produced by the compiler (which may be impossible, if internally
> assembly mnemonics never get generated), and we want to use \(text)
> escaping / quoting in the auxiliary macro.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: Should this depend on CONFIG_SPECULATIVE_HARDEN_BRANCH?
> TBD: Would be nice to avoid the additions in .init.text, but a query to
>      the binutils folks regarding the ability to identify the section
>      stuff is in (by Peter Zijlstra over a year ago:
>      https://sourceware.org/pipermail/binutils/2019-July/107528.html)
>      has been left without helpful replies.
> ---
> v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.
>
> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
>  # https://bugs.llvm.org/show_bug.cgi?id=36110
>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
>  
> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
> +# Check whether \(text) escaping in macro bodies is supported.
> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 8",,-no-integrated-as)
> +
> +# Check whether macros can override insn mnemonics in inline assembly.
> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)
> +
> +acc1 := $(call or,$(t1),$(t2),$(t3),$(t4))
> +
> +CLANG_FLAGS += $(call or,$(acc1),$(t5))

I'm not happy taking this until there is toolchain support in sight.

We *cannot* rule out the use of IAS forever more, because there are
features far more important than ret speculation which depend on it.

>  endif
>  
>  CLANG_FLAGS += -Werror=unknown-warning-option
> --- a/xen/include/asm-x86/asm-defns.h
> +++ b/xen/include/asm-x86/asm-defns.h
> @@ -50,3 +50,22 @@
>  .macro INDIRECT_JMP arg:req
>      INDIRECT_BRANCH jmp \arg
>  .endm
> +
> +/*
> + * To guard against speculation past RET, insert a breakpoint insn
> + * immediately after them.
> + */
> +.macro ret operand:vararg
> +    ret$ \operand
> +.endm
> +.macro retq operand:vararg
> +    ret$ \operand
> +.endm
> +.macro ret$ operand:vararg
> +    .purgem ret
> +    ret \operand

You're substituting retq for ret, which defeats the purpose of unwrapping.

I will repeat my previous feedback.  Do away with this
wrapping/unwrapping and just use a raw .byte.  It's simpler and faster.
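For illustration only, a minimal sketch of the raw-.byte idea (not actual
Xen code; a RET with an immediate operand would need the separate
0xC2 iw encoding, which this sketch deliberately omits):

```asm
/*
 * Sketch: redefine "ret" so every RET is followed by a breakpoint,
 * emitting the RET by raw opcode so the macro never recurses and
 * no .purgem/redefine dance is needed.
 */
.macro ret operand:vararg
    .byte 0xc3    /* near RET opcode (no-operand form only) */
    int3          /* stops straight-line speculation past the RET */
.endm
```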

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 12:06:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 12:06:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6175.16325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJ4u-0003iF-28; Tue, 13 Oct 2020 12:06:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6175.16325; Tue, 13 Oct 2020 12:06:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJ4t-0003i8-VB; Tue, 13 Oct 2020 12:06:51 +0000
Received: by outflank-mailman (input) for mailman id 6175;
 Tue, 13 Oct 2020 12:06:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSJ4s-0003i3-4w
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:06:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f13c70e7-05e9-417d-b04c-2725c82f400a;
 Tue, 13 Oct 2020 12:06:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 83701AD21;
 Tue, 13 Oct 2020 12:06:48 +0000 (UTC)
X-Inumbo-ID: f13c70e7-05e9-417d-b04c-2725c82f400a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602590808;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+WT/NwAvWlp7h/DwYe3tvj1G1QCnUKTIReoTt/Exc88=;
	b=sypVdqTDO0I33eJQarT9jDomFXBzvjdRh6XUguOkbCuWM6hEyqxO0gD3fOh/TEMzKMM1io
	frl7FPV/BkKDycXMVpuKlceRibYthCv1+aTJ/6VeZKX86EonzzL6z6kVuP9SIUGumhMH4I
	6Sfu/pStSJ5Mdz+ylhw4YDeohjF3Ih8=
Subject: Re: [PATCH v9 0/4] efi: Unified Xen hypervisor/kernel/initrd images
To: Trammell Hudson <hudson@trmm.net>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "wl@xen.org" <wl@xen.org>
References: <20201002111822.42142-1-hudson@trmm.net>
 <BbDD1Aa2FXJRlpSpqyFVl4-6u6S-OnBkoMyvoPHadElIyfNDl2h9J34bk12XyvFtEOweGsCRTmqY8eSSbvR98RHJpFzDHpWWa67IaW6Sz7I=@trmm.net>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b691e444-63d1-7f80-dc99-7629b6741b70@suse.com>
Date: Tue, 13 Oct 2020 14:06:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <BbDD1Aa2FXJRlpSpqyFVl4-6u6S-OnBkoMyvoPHadElIyfNDl2h9J34bk12XyvFtEOweGsCRTmqY8eSSbvR98RHJpFzDHpWWa67IaW6Sz7I=@trmm.net>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.10.2020 16:43, Trammell Hudson wrote:
> Any further thoughts on this patch series? Three out of four of
> them have been reviewed or acked by at least one reviewer, with
> only the last one currently unreviewed.

"unreviewed" isn't correct. I did review it, but I'm opposed to
parts of the resulting behavior.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 12:30:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 12:30:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6178.16338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJRQ-0006Fe-W8; Tue, 13 Oct 2020 12:30:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6178.16338; Tue, 13 Oct 2020 12:30:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJRQ-0006FX-T9; Tue, 13 Oct 2020 12:30:08 +0000
Received: by outflank-mailman (input) for mailman id 6178;
 Tue, 13 Oct 2020 12:30:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSJRP-0006FS-Ec
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:30:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6278e6ad-e607-460d-9869-50b1d4c3ba3d;
 Tue, 13 Oct 2020 12:30:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 11A23AE0C;
 Tue, 13 Oct 2020 12:30:05 +0000 (UTC)
X-Inumbo-ID: 6278e6ad-e607-460d-9869-50b1d4c3ba3d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602592205;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=82g8CD3jPLdtQOFQZo35CBJ5sjGGHvc4e3t7Z3OAfNs=;
	b=T1vDxUqvuhktK7qu2JX0IL1232YvY+yYzLUGWs6SA7tX6eyYoLSlAfE3iixsoIu4VZWSaj
	/Ej/w/ObzKdaxKsehLTqcVHmAwoHtJaTWC2OUMCttPqrJoFO2CWTW9tucx4pes95kXNtaL
	FWF5kikmRVyzOK9cDVcBX7bkRaZuOO0=
Subject: Re: Xen Coding style and clang-format
To: George Dunlap <George.Dunlap@citrix.com>,
 Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
Cc: "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
 "vicooodin@gmail.com" <vicooodin@gmail.com>, "julien@xen.org"
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>,
 "committers@xenproject.org" <committers@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
 <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
 <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b4d7e9a7-6c25-1f7f-86ce-867083beb81a@suse.com>
Date: Tue, 13 Oct 2020 14:30:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.10.2020 20:09, George Dunlap wrote:
>> On Oct 7, 2020, at 11:19 AM, Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com> wrote:
>> So I want to know if the community is ready to add new formatting
>> options and edit old ones. Below I will give examples of the
>> corrections the checker is currently making (the first variant in each
>> case is the existing code and the second is formatted by the checker).
>> If they fit the standards, then I can document them in the coding
>> style. If not, then I will try to configure the checker. Either way,
>> we need to choose one option that will be considered correct.
>> 1) Function prototype when the string length is longer than the allowed
>> -static int __init
>> -acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
>> -                             const unsigned long end)
>> +static int __init acpi_parse_gic_cpu_interface(
>> +    struct acpi_subtable_header *header, const unsigned long end)
> 
> Jan already commented on this one; is there any way to tell the checker to ignore this discrepancy?
> 
> If not, I think we should just choose one; I’d go with the latter.
> 
>> 2) Wrapping an operation to a new line when the string length is longer
>> than the allowed
>> -    status = acpi_get_table(ACPI_SIG_SPCR, 0,
>> -                            (struct acpi_table_header **)&spcr);
>> +    status =
>> +        acpi_get_table(ACPI_SIG_SPCR, 0, (struct acpi_table_header
>> **)&spcr);
> 
> Personally I prefer the first version.

Same here.

>> 3) Space after brackets
>> -    return ((char *) base + offset);
>> +    return ((char *)base + offset);
> 
> This seems like a good change to me.
> 
>> 4) Spaces in brackets in switch condition
>> -    switch ( domctl->cmd )
>> +    switch (domctl->cmd)
> 
> This is explicitly against the current coding style.
> 
>> 5) Spaces in brackets in operation
>> -    imm = ( insn >> BRANCH_INSN_IMM_SHIFT ) & BRANCH_INSN_IMM_MASK;
>> +    imm = (insn >> BRANCH_INSN_IMM_SHIFT) & BRANCH_INSN_IMM_MASK;
> 
> I *think* this is already the official style.
> 
>> 6) Spaces in brackets in return
>> -        return ( !sym->name[2] || sym->name[2] == '.' );
>> +        return (!sym->name[2] || sym->name[2] == '.');
> 
> Similarly, I think this is already the official style.
> 
>> 7) Space after sizeof
>> -    clean_and_invalidate_dcache_va_range(new_ptr, sizeof (*new_ptr) *
>> len);
>> +    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr) *
>> len);
> 
> I think this is correct.

I agree with George on all of the above.

>> 8) Spaces before comment if it’s on the same line
>> -    case R_ARM_MOVT_ABS: /* S + A */
>> +    case R_ARM_MOVT_ABS:    /* S + A */
>>
>> -    if ( tmp == 0UL )       /* Are any bits set? */
>> -        return result + size;   /* Nope. */
>> +    if ( tmp == 0UL )         /* Are any bits set? */
>> +        return result + size; /* Nope. */
> 
> Seem OK to me.

I don't think we have any rules about how far apart a comment needs
to be, so I don't think there should be any complaints or
"corrections" here.

>> 9) Space after for_each_vcpu
>> -        for_each_vcpu(d, v)
>> +        for_each_vcpu (d, v)
> 
> Er, not sure about this one.  This is actually a macro; but obviously it looks like for ( ).
> 
> I think Jan will probably have an opinion, and I think he’ll be back tomorrow; so maybe wait just a day or two before starting to prep your series.

This makes it look like Linux style. In Xen it ought to be one
of

        for_each_vcpu(d, v)
        for_each_vcpu ( d, v )

depending on whether the author of a change considers
for_each_vcpu an ordinary identifier or kind of a keyword.

>> 10) Spaces in declaration
>> -    union hsr hsr = { .bits = regs->hsr };
>> +    union hsr hsr = {.bits = regs->hsr};
> 
> I’m fine with this too.

I think we commonly include the blanks that are being suggested
for removal here, so I'm not convinced this is a change we would
want the tool to make or suggest.
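Several of the preferences discussed above map onto clang-format options;
a hypothetical .clang-format fragment (option names taken from clang-format's
documentation, values illustrative, not an agreed Xen configuration):

```yaml
# Hypothetical fragment only. Note the known limitation: SpacesInParentheses
# applies to all parentheses, so Xen's "if ( x )" but "(char *)base" pairing
# cannot be expressed exactly.
BasedOnStyle: LLVM
ColumnLimit: 80
IndentWidth: 4
SpaceAfterCStyleCast: false          # item 3: "(char *)base", no space
SpaceBeforeParens: ControlStatements # space before "if (", not calls
SpacesInParentheses: true            # items 4-6: "if ( x )", "switch ( x )"
ForEachMacros: [for_each_vcpu]       # item 9: treat as a control keyword
```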

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 12:49:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 12:49:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6183.16356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJju-0007MJ-NM; Tue, 13 Oct 2020 12:49:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6183.16356; Tue, 13 Oct 2020 12:49:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJju-0007MC-KE; Tue, 13 Oct 2020 12:49:14 +0000
Received: by outflank-mailman (input) for mailman id 6183;
 Tue, 13 Oct 2020 12:49:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSJjt-0007Lk-4p
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:49:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b3e0663-b175-4694-810e-e91ef5247da9;
 Tue, 13 Oct 2020 12:49:06 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSJjl-0004wG-UT; Tue, 13 Oct 2020 12:49:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSJjl-00020M-MB; Tue, 13 Oct 2020 12:49:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSJjl-0006e8-Lg; Tue, 13 Oct 2020 12:49:05 +0000
X-Inumbo-ID: 0b3e0663-b175-4694-810e-e91ef5247da9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2BChpkEdGzZGWmLEOZsOslfsx+rE2iD78FKOzSAYv9Q=; b=qdOon2XT/2RIbncXk1EGmvB7gI
	cFDj74+ofSj0KAssIz+TI8WW2YTGRw7MN47h/bnOEB2KEEWJTnOfAzu+qvZL5n7lLFx69aCj4sI5D
	kFkm624xR/iQM76ZpipIfaJcDbhiHltzWURJI15KbY8oWYjVZS8zc/kvW/WSz3yMJaJc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155758-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155758: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=53acd350503d56a73aa6c61bced1699e8396c6d0
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 12:49:05 +0000

flight 155758 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155758/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                53acd350503d56a73aa6c61bced1699e8396c6d0
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   73 days
Failing since        152366  2020-08-01 20:49:34 Z   72 days  123 attempts
Testing same since   155758  2020-10-13 01:45:38 Z    0 days    1 attempts

------------------------------------------------------------
2582 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 358207 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 12:52:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 12:52:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6186.16368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJmd-0008B7-Ad; Tue, 13 Oct 2020 12:52:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6186.16368; Tue, 13 Oct 2020 12:52:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJmd-0008B0-7Y; Tue, 13 Oct 2020 12:52:03 +0000
Received: by outflank-mailman (input) for mailman id 6186;
 Tue, 13 Oct 2020 12:52:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSJmc-0008Av-I4
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:52:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7961c51-38a8-466c-a632-84065a9c6205;
 Tue, 13 Oct 2020 12:52:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0488CAE7B;
 Tue, 13 Oct 2020 12:52:00 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSJmc-0008Av-I4
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:52:02 +0000
X-Inumbo-ID: c7961c51-38a8-466c-a632-84065a9c6205
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c7961c51-38a8-466c-a632-84065a9c6205;
	Tue, 13 Oct 2020 12:52:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602593520;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=unoNLWfC8HNoatAz24p8JS6+JTNlhu9nwsqgKZ9Mn68=;
	b=WXStj+u2lHUgRV/2JwAq1JEzPMwtAW6AlqxOwom29w9R68IA9KZdYpumO/q9vzYdar+Ph1
	WBirZgW6qwtnuCB4bD+/jGD0XA4geEnAqnJ4v1mtfr8fVOhVBpTMnS+RwxJvhnkTER+6ds
	2VeHUpwdPpKfcNYspmnPHIF1q1mVL9M=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0488CAE7B;
	Tue, 13 Oct 2020 12:52:00 +0000 (UTC)
Subject: Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI
 table region
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, iwj@xenproject.org
References: <1602586216-27371-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <56bea9a9-2509-cc39-a6fd-fb7db3e54d71@suse.com>
Date: Tue, 13 Oct 2020 14:51:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <1602586216-27371-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.2020 12:50, Igor Druzhinin wrote:
> ACPI specification contains statements describing memory marked with regular
> "ACPI data" type as reclaimable by the guest. Although the guest shouldn't
> really do it if it wants kexec or similar functionality to work, there
> could still be ambiguities in treating these regions as potentially regular
> RAM.
> 
> One such example is SeaBIOS, which currently reports "ACPI data" regions as
> RAM to the guest in its e801 call. The guest then tries to use this region
> for initrd placement and gets stuck.

Any theory on why it would get stuck? Having read the thread rooted
at Sander's report, it hasn't become clear to me where the collision
there is. A consumer of E801 (rather than E820) intends to not use
ACPI data, and hence I consider SeaBIOS right in this regard (the
lack of considering holes is a problem, though).

> --- a/tools/firmware/hvmloader/e820.c
> +++ b/tools/firmware/hvmloader/e820.c
> @@ -202,16 +202,17 @@ int build_e820_table(struct e820entry *e820,
>      nr++;
>  
>      /*
> -     * Mark populated reserved memory that contains ACPI tables as ACPI data.
> +     * Mark populated reserved memory that contains ACPI tables as ACPI NVS.
>       * That should help the guest to treat it correctly later: e.g. pass to
> -     * the next kernel on kexec or reclaim if necessary.
> +     * the next kernel on kexec and prevent space reclaim which is possible
> +     * with regular ACPI data type accoring to ACPI spec v6.3.

Preventing space reclaim is not the business of hvmloader. As per above,
an ACPI unaware OS ought to be permitted to use as ordinary RAM all the
space the tables occupy. Therefore at the very least the comment needs
to reflect that this preventing of space reclaim is a workaround, not
correct behavior.

Also as a nit: "according".
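To make the distinction concrete: the hunk under review amounts to choosing a different E820 type constant for the tables region. A minimal self-contained sketch (type values per the usual E820/ACPI convention; the struct layout and helper name here are illustrative, not the actual hvmloader code):

```c
#include <assert.h>
#include <stdint.h>

/* E820 memory map types, as conventionally defined. */
#define E820_RAM      1
#define E820_RESERVED 2
#define E820_ACPI     3  /* "ACPI data": reclaimable once the tables are parsed */
#define E820_NVS      4  /* "ACPI NVS": must never be reclaimed by the OS */

struct e820entry {
    uint64_t addr;
    uint64_t size;
    uint32_t type;
};

/*
 * Sketch of the change being reviewed: mark the populated region
 * holding the ACPI tables.  The patch flips E820_ACPI to E820_NVS so
 * that no guest ever treats the region as ordinary RAM.
 */
static void mark_acpi_tables(struct e820entry *e, uint64_t start, uint64_t len)
{
    e->addr = start;
    e->size = len;
    e->type = E820_NVS;  /* was E820_ACPI before the patch */
}
```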

As a consequence I think we will also want to adjust Xen itself to
automatically disable ACPI when it ends up consuming E801 data. Or
alternatively we should consider dropping all E801-related code (as
being inapplicable to 64-bit systems).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 12:58:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 12:58:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6188.16380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJsi-0008QE-13; Tue, 13 Oct 2020 12:58:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6188.16380; Tue, 13 Oct 2020 12:58:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJsh-0008Q7-U9; Tue, 13 Oct 2020 12:58:19 +0000
Received: by outflank-mailman (input) for mailman id 6188;
 Tue, 13 Oct 2020 12:58:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSJsg-0008Q2-WE
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:58:19 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d37b0ad3-8135-4ec9-9ee1-e4fcef754c22;
 Tue, 13 Oct 2020 12:58:17 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSJsg-0008Q2-WE
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 12:58:19 +0000
X-Inumbo-ID: d37b0ad3-8135-4ec9-9ee1-e4fcef754c22
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d37b0ad3-8135-4ec9-9ee1-e4fcef754c22;
	Tue, 13 Oct 2020 12:58:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602593897;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=YRUbOlX3wi7QnVjIk6mmeKln8rCi5d84oc+ZVtQlJ0E=;
  b=GebY/jXUFhkr0478e3nfykFomRUdaeRySZleyyN+w/WJHdfzq39hcLU+
   YBA8XyaEvoXpbRm35EIrGVk54Q4RJZ4v09BtGwpIT/V9N62IxbMSNkv15
   TFzyqarLN1Ih/5wbFKMCeIZXFAVuy61oup5RPvBxgBHP3iJRdfiJ3bEwl
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: pTbrwNvhuBfjRIVIZxxhHSWkIGoabKGl/dsk/EUU2R9FhgXzrohcumz39k0/4BNs3iwtAthUyL
 TM8CXHFd73MwSPgLzSSFh5KF+lOv6IEH2CCfRABgefihzNqbaNeq/s3pPY/vvm0a/w9fbyKpna
 W4W4xVZthO5fYY86QwhrkYhrFuLHyJ3iOEuB3jxxn5NhKapRym6jSO+JuwilI0L00u6KMeItOp
 zl44A/1E8clD/s5euiAdUs8hdUZVnDtgq3FMubJ14Byp8VL9Tm8Of4F9zYCCIH6XhFPywmG/tU
 Aho=
X-SBRS: 2.5
X-MesageID: 29135751
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,370,1596513600"; 
   d="scan'208";a="29135751"
Subject: Re: [PATCH v2 6/6] x86: limit amount of INT3 in IND_THUNK_*
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <4d66eb4d-4044-8b48-d7cc-354a236e6b26@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5123f9c4-8a1f-6879-82f2-47afa0d48f64@citrix.com>
Date: Tue, 13 Oct 2020 13:58:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <4d66eb4d-4044-8b48-d7cc-354a236e6b26@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 28/09/2020 13:32, Jan Beulich wrote:
> There's no point in having every replacement variant also specify the
> INT3 - just have it once in the base macro. When patching, NOPs will get
> inserted, which are fine to speculate through (until reaching the INT3).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I also wonder whether the LFENCE in IND_THUNK_RETPOLINE couldn't be
> replaced by INT3 as well. Of course the effect will be marginal, as the
> size of the thunk will still be 16 bytes when including tail padding
> resulting from alignment.

There are surprising performance implications from the choice of
speculation blocker.  RSB filling in particular had a benefit (up to 6%
iirc) from unrolling the loop.

Any differences here are likely to be marginal, whereas for inline
retpoline, the code volume reduction might easily be the winning factor.

> ---
> v2: New.
>
> --- a/xen/arch/x86/indirect-thunk.S
> +++ b/xen/arch/x86/indirect-thunk.S
> @@ -11,6 +11,8 @@
>  
>  #include <asm/asm_defns.h>
>  
> +.purgem ret

This needs a comment.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:00:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:00:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6190.16392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJuO-0000U8-CZ; Tue, 13 Oct 2020 13:00:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6190.16392; Tue, 13 Oct 2020 13:00:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJuO-0000TT-96; Tue, 13 Oct 2020 13:00:04 +0000
Received: by outflank-mailman (input) for mailman id 6190;
 Tue, 13 Oct 2020 13:00:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNIz=DU=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kSJuN-0000KO-6c
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:00:03 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 853db183-e550-4a70-9650-3f3948a48bd8;
 Tue, 13 Oct 2020 13:00:01 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pNIz=DU=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
	id 1kSJuN-0000KO-6c
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:00:03 +0000
X-Inumbo-ID: 853db183-e550-4a70-9650-3f3948a48bd8
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 853db183-e550-4a70-9650-3f3948a48bd8;
	Tue, 13 Oct 2020 13:00:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602594001;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=eEF6d00dX+EFEhC2CkJXfcJ8wfv347Sg+dKbe2ODmsg=;
  b=WMD8MnIHw7O+U/AFtgxP+jC+iaOlTua2Z2zmxuddUB5JcS+O02KNcR6F
   kOZFtfrycHrzkPAK8vvhUBFX/nUTGpCLKX3mMbAmF419Xt/M3Mknf0NiJ
   GGaSa4jXSAmwEYJnyadpGKQKF/f+4VxpWsD0XlQ29lw2/oTnLmQBPeGnd
   Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jHa7RTY4w4WYvuOHfw1dKr/I1brbYdIvUy62gGMaQ+GkHGshu7VhSS0qE5qjZJXtVpp+uMQMaU
 2WkAmLtJz+73iC2/P5Tkc8Tvm8wmPyEhIcOhYMIuQgi6e1xIf8VNabWRsw+fIH9AU7NHk8PObY
 Ky4qnyGzZh7qgSAgiuofcc4Bw9IYTXAZUTcHIXbF6hX8NSIZG0sZprMlqUDj+/fIzdIqyEvuMv
 3so4zjta87qPVS94D6AsyBMEOC3hK70OgCtqxMUpiONZjsma+lrDLXvJXWD6kNq0HJGSk4VUO5
 AtA=
X-SBRS: 2.5
X-MesageID: 29135935
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,370,1596513600"; 
   d="scan'208";a="29135935"
Subject: Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI
 table region
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <andrew.cooper3@citrix.com>,
	<roger.pau@citrix.com>, <wl@xen.org>, <iwj@xenproject.org>
References: <1602586216-27371-1-git-send-email-igor.druzhinin@citrix.com>
 <56bea9a9-2509-cc39-a6fd-fb7db3e54d71@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <83f567a1-35f3-a227-830b-a59b53217f3b@citrix.com>
Date: Tue, 13 Oct 2020 13:59:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <56bea9a9-2509-cc39-a6fd-fb7db3e54d71@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13/10/2020 13:51, Jan Beulich wrote:
> On 13.10.2020 12:50, Igor Druzhinin wrote:
>> ACPI specification contains statements describing memory marked with regular
>> "ACPI data" type as reclaimable by the guest. Although the guest shouldn't
>> really do it if it wants kexec or similar functionality to work, there
>> could still be ambiguities in treating these regions as potentially regular
>> RAM.
>>
>> One such example is SeaBIOS, which currently reports "ACPI data" regions as
>> RAM to the guest in its e801 call. The guest then tries to use this region
>> for initrd placement and gets stuck.
> 
> Any theory on why it would get stuck? Having read the thread rooted
> at Sander's report, it hasn't become clear to me where the collision
> there is. A consumer of E801 (rather than E820) intends to not use
> ACPI data, and hence I consider SeaBIOS right in this regard (the
> lack of considering holes is a problem, though).

QEMU's fw_cfg Linux boot loader (which is used by our direct kernel boot method)
uses E801 to find the top of RAM and places images below that address.
Since that is now 0xfc00000, the images get located right in the PCI hole below,
which causes the loader to hang.
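For reference, the derivation the loader performs can be modelled as follows. This is a simplified sketch (helper name invented; register semantics per the conventional INT 15h AX=E801h description): the interface reports one contiguous extent and cannot express holes, which is the limitation noted above.

```c
#include <assert.h>
#include <stdint.h>

/*
 * INT 15h AX=E801h conventionally reports:
 *   ax = KiB of memory between 1 MiB and 16 MiB (capped at 0x3C00)
 *   bx = 64 KiB blocks of memory above 16 MiB
 * Compute "top of RAM" the way a simple consumer would: if anything
 * is reported above 16 MiB, the top is 16 MiB plus that amount,
 * otherwise it is 1 MiB plus the below-16MiB amount.
 */
static uint64_t e801_top_of_ram(uint16_t ax_kib, uint16_t bx_64k)
{
    if (bx_64k)
        return 0x1000000ull + (uint64_t)bx_64k * 0x10000;  /* above 16 MiB */
    return 0x100000ull + (uint64_t)ax_kib * 1024;          /* below 16 MiB */
}
```

If the reported extent runs up to a region that is really ACPI tables (or abuts the PCI hole), images placed "just below top of RAM" land somewhere unusable.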

>> --- a/tools/firmware/hvmloader/e820.c
>> +++ b/tools/firmware/hvmloader/e820.c
>> @@ -202,16 +202,17 @@ int build_e820_table(struct e820entry *e820,
>>      nr++;
>>  
>>      /*
>> -     * Mark populated reserved memory that contains ACPI tables as ACPI data.
>> +     * Mark populated reserved memory that contains ACPI tables as ACPI NVS.
>>       * That should help the guest to treat it correctly later: e.g. pass to
>> -     * the next kernel on kexec or reclaim if necessary.
>> +     * the next kernel on kexec and prevent space reclaim which is possible
>> +     * with regular ACPI data type accoring to ACPI spec v6.3.
> 
> Preventing space reclaim is not the business of hvmloader. As per above,
> an ACPI unaware OS ought to be permitted to use as ordinary RAM all the
> space the tables occupy. Therefore at the very least the comment needs
> to reflect that this preventing of space reclaim is a workaround, not
> correct behavior.

Agree to modify the comment.

> Also as a nit: "according".
> 
> As a consequence I think we will also want to adjust Xen itself to
> automatically disable ACPI when it ends up consuming E801 data. Or
> alternatively we should consider dropping all E801-related code (as
> being inapplicable to 64-bit systems).

I'm not following here. What does Xen have to do with E801? It's a SeaBIOS-implemented
call that happens to be used by the QEMU option ROM. We cannot drop it from there
as it's part of the BIOS spec.

Igor


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:00:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:00:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6191.16404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJuT-0000qH-K4; Tue, 13 Oct 2020 13:00:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6191.16404; Tue, 13 Oct 2020 13:00:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJuT-0000qA-Gv; Tue, 13 Oct 2020 13:00:09 +0000
Received: by outflank-mailman (input) for mailman id 6191;
 Tue, 13 Oct 2020 13:00:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSJuR-0000KO-Up
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:00:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47549fd2-a6fe-47e4-bcef-258365917fee;
 Tue, 13 Oct 2020 13:00:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C9AEFB083;
 Tue, 13 Oct 2020 13:00:02 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSJuR-0000KO-Up
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:00:07 +0000
X-Inumbo-ID: 47549fd2-a6fe-47e4-bcef-258365917fee
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 47549fd2-a6fe-47e4-bcef-258365917fee;
	Tue, 13 Oct 2020 13:00:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602594002;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=riUAzU8AzOuq9qgpxdP77IOJBHeQh9wIAEwv5T018dQ=;
	b=EkB/meq1ePDjZeQQq8zGRrPoO4HVoUCTfw1LSIh+DA8WcQhB+fy3uD8nzFwG9LrI9bYEXi
	DFyxoKUMHefM+Kp6rk+OlyD8AGanPaQJnI6vwZFu7Et+nylqISOw/vs9y6tYZLj6fShcQQ
	iJEEL1ww/mleQvYdCAp2f/3aMQGsjSE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C9AEFB083;
	Tue, 13 Oct 2020 13:00:02 +0000 (UTC)
Subject: Re: [PATCH v2 4/4] x86/shadow: refactor shadow_vram_{get,put}_l1e()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Tim Deegan <tim@xen.org>
References: <c6b9c903-02eb-d473-86e3-ccb67aff6cd7@suse.com>
 <51515581-19f3-5b7c-a2f9-1a0b11f8283a@suse.com>
 <20201008151556.GL19254@Air-de-Roger>
 <e769e1ae-fd2f-881e-4dcc-3cbf40d6b732@citrix.com>
 <20201010074525.GO19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ae60e9a4-b5c5-d54d-dfe6-626996ec52bc@suse.com>
Date: Tue, 13 Oct 2020 15:00:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201010074525.GO19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 10.10.2020 09:45, Roger Pau Monné wrote:
> On Thu, Oct 08, 2020 at 04:36:47PM +0100, Andrew Cooper wrote:
>> On 08/10/2020 16:15, Roger Pau Monné wrote:
>>> On Wed, Sep 16, 2020 at 03:08:40PM +0200, Jan Beulich wrote:
>>>> +void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
>>>> +                         mfn_t sl1mfn, const void *sl1e,
>>>> +                         const struct domain *d)
>>>> +{
>>>> +    unsigned long gfn;
>>>> +    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
>>>> +
>>>> +    ASSERT(is_hvm_domain(d));
>>>> +
>>>> +    if ( !dirty_vram /* tracking disabled? */ ||
>>>> +         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
>>>> +         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
>>>> +        return;
>>>> +
>>>> +    gfn = gfn_x(mfn_to_gfn(d, mfn));
>>>> +    /* Page sharing not supported on shadow PTs */
>>>> +    BUG_ON(SHARED_M2P(gfn));
>>>> +
>>>> +    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
>>>> +    {
>>>> +        unsigned long i = gfn - dirty_vram->begin_pfn;
>>>> +        const struct page_info *page = mfn_to_page(mfn);
>>>> +        bool dirty = false;
>>>> +        paddr_t sl1ma = mfn_to_maddr(sl1mfn) | PAGE_OFFSET(sl1e);
>>>> +
>>>> +        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
>>>> +        {
>>>> +            /* Last reference */
>>>> +            if ( dirty_vram->sl1ma[i] == INVALID_PADDR )
>>>> +            {
>>>> +                /* We didn't know it was that one, let's say it is dirty */
>>>> +                dirty = true;
>>>> +            }
>>>> +            else
>>>> +            {
>>>> +                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
>>>> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
>>>> +                if ( l1f & _PAGE_DIRTY )
>>>> +                    dirty = true;
>>>> +            }
>>>> +        }
>>>> +        else
>>>> +        {
>>>> +            /* We had more than one reference, just consider the page dirty. */
>>>> +            dirty = true;
>>>> +            /* Check that it's not the one we recorded. */
>>>> +            if ( dirty_vram->sl1ma[i] == sl1ma )
>>>> +            {
>>>> +                /* Too bad, we remembered the wrong one... */
>>>> +                dirty_vram->sl1ma[i] = INVALID_PADDR;
>>>> +            }
>>>> +            else
>>>> +            {
>>>> +                /*
>>>> +                 * Ok, our recorded sl1e is still pointing to this page, let's
>>>> +                 * just hope it will remain.
>>>> +                 */
>>>> +            }
>>>> +        }
>>>> +
>>>> +        if ( dirty )
>>>> +        {
>>>> +            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
>>> Could you use _set_bit here?
>>
>> __set_bit() uses 4-byte accesses.  This uses 1-byte accesses.
> 
> Right, this is allocated using alloc directly, not the bitmap helper,
> and the size is rounded to byte level, not unsigned int.
> 
>> Last I checked, there is a boundary issue at the end of the dirty_bitmap.
>>
>> Both Julien and I have considered changing our bit infrastructure to use
>> byte accesses, which would make them more generally useful.
> 
> Does indeed seem useful to me, as we could safely expand the usage of
> the bitmap ops without risking introducing bugs.

Aren't there architectures that are handicapped when it comes to
sub-word accesses? At the very least, common code had better not make
assumptions about the availability of more fine-grained accesses ...

As to x86, couldn't we make the macros evaluate alignof(*(addr)) and
choose byte-based accesses when it's less than 4?
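A minimal sketch of that idea (names hypothetical, and deliberately
simplified relative to Xen's real bitop macros, which use inline
assembly): dispatch on the declared alignment of the target object and
fall back to byte accesses when it is less than 4.

```c
#include <stdint.h>

/*
 * Hypothetical sketch of an alignment-dispatched set_bit: use dword
 * accesses only when the object's alignment permits, byte accesses
 * otherwise.  Illustrative only -- not Xen's actual macro.
 */
#define set_bit_auto(nr, addr) do {                                     \
    if ( __alignof__(*(addr)) < 4 )                                     \
        ((volatile uint8_t *)(void *)(addr))[(nr) / 8] |=               \
            (uint8_t)(1u << ((nr) % 8));                                \
    else                                                                \
        ((volatile uint32_t *)(void *)(addr))[(nr) / 32] |=             \
            1u << ((nr) % 32);                                          \
} while ( 0 )
```

Since __alignof__ is a compile-time constant, the unused branch is
discarded by the compiler, so callers with byte-sized bitmaps get pure
byte accesses.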

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:04:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:04:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6196.16418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJyr-00017w-6Y; Tue, 13 Oct 2020 13:04:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6196.16418; Tue, 13 Oct 2020 13:04:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSJyr-00017p-3V; Tue, 13 Oct 2020 13:04:41 +0000
Received: by outflank-mailman (input) for mailman id 6196;
 Tue, 13 Oct 2020 13:04:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSJyp-00017H-2D
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:04:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b518b411-2a91-4e36-a52c-b846742cfa75;
 Tue, 13 Oct 2020 13:04:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSJyh-0005He-IH; Tue, 13 Oct 2020 13:04:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSJyh-0002aJ-5U; Tue, 13 Oct 2020 13:04:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSJyh-0002W9-50; Tue, 13 Oct 2020 13:04:31 +0000
X-Inumbo-ID: b518b411-2a91-4e36-a52c-b846742cfa75
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b518b411-2a91-4e36-a52c-b846742cfa75;
	Tue, 13 Oct 2020 13:04:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qADhno+1ozF20eJe7q9SlybvdqWxccUHjDOgayg1ZK4=; b=Rn6UxvmN7WD3i1ayn3awR1lKti
	YlnEO0ahgs/F6RIs2k28s4mHoPfwMQUKTsCmvmMYVxVQ8v5fUTA41/mN5QzR9b88+DsJS6fr9tQAf
	IJ82cjHsCE+KkdDXXuH3+kkMBKnsKJk3r4WzrDor6bChJwezyHUjOlUrF+2ZkSpeUvU0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155774-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155774: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=534b3d09958fdc4df64872c2ab19feb4b1eebc5a
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 13:04:31 +0000

flight 155774 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155774/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  534b3d09958fdc4df64872c2ab19feb4b1eebc5a
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    4 days
Failing since        155612  2020-10-09 18:01:22 Z    3 days   28 attempts
Testing same since   155708  2020-10-11 23:00:25 Z    1 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
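    The build step described above might look roughly as follows; the
    component file names (xen.cfg, vmlinuz, initrd.img) are placeholders,
    and the section names are the ones the commit message lists:

    ```shell
    # Sketch: attach each component to xen.efi as a named PE/COFF
    # section, producing a single unified EFI executable.
    objcopy \
        --add-section .config=xen.cfg \
        --add-section .kernel=vmlinuz \
        --add-section .ramdisk=initrd.img \
        xen.efi xen-unified.efi
    ```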
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:16:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:16:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6198.16430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKAQ-00026c-Fn; Tue, 13 Oct 2020 13:16:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6198.16430; Tue, 13 Oct 2020 13:16:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKAQ-00026V-Bi; Tue, 13 Oct 2020 13:16:38 +0000
Received: by outflank-mailman (input) for mailman id 6198;
 Tue, 13 Oct 2020 13:16:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKAO-00026Q-To
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:16:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7ee63c6-e8f7-4641-a2e5-e83b67212d44;
 Tue, 13 Oct 2020 13:16:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1B950AD19;
 Tue, 13 Oct 2020 13:16:35 +0000 (UTC)
X-Inumbo-ID: a7ee63c6-e8f7-4641-a2e5-e83b67212d44
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602594995;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UozvqUOvBWy6B7uQ9v6fP5TAufYZa+M1hEtWF8NtCtk=;
	b=sSIbHzSOGXzDdXMT1NuJ5hxh5NslRzBHYN+UNqoSTEsada56pJo2+qKGKEaaEyPDJh0PFE
	ttXRWgYTF/8AXKwlgpPKyaVuTypARPP0LcIgFkiBGOp+Ev7ZyPb1/C1X2tAeMJ+hTaUJLP
	nqQ2S2SYJLbCxNBoo0hGoDx93Y45Xfc=
Subject: Re: [PATCH] x86/smpboot: Unconditionally call
 memguard_unguard_stack() in cpu_smpboot_free()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201005122325.17395-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <36d3443d-50dd-5163-ddac-973421f390e0@suse.com>
Date: Tue, 13 Oct 2020 15:16:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201005122325.17395-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.10.2020 14:23, Andrew Cooper wrote:
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -971,16 +971,16 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
>      if ( IS_ENABLED(CONFIG_PV32) )
>          FREE_XENHEAP_PAGE(per_cpu(compat_gdt, cpu));
>  
> +    if ( stack_base[cpu] )
> +        memguard_unguard_stack(stack_base[cpu]);
> +
>      if ( remove )
>      {
>          FREE_XENHEAP_PAGE(per_cpu(gdt, cpu));
>          FREE_XENHEAP_PAGE(idt_tables[cpu]);
>  
>          if ( stack_base[cpu] )
> -        {
> -            memguard_unguard_stack(stack_base[cpu]);
>              FREE_XENHEAP_PAGES(stack_base[cpu], STACK_ORDER);
> -        }
>      }
>  }

In my initial reply to Marek's report I did suggest putting the fix
in the alloc path in order to keep the pages "guarded" while the CPU
is parked, as the CPU during that period may still access at least
some of the stacks (e.g. when sending it an NMI IPI to enter a deeper
C state).

Otherwise, if the fix really was to remain here, the if() could now
also be dropped. And in this case I'd also suggest adding the new
piece of code a few lines earlier, so that all the
FREE_XENHEAP_PAGE() stay close together.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:24:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:24:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6201.16441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKHU-000313-6V; Tue, 13 Oct 2020 13:23:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6201.16441; Tue, 13 Oct 2020 13:23:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKHU-00030w-3Y; Tue, 13 Oct 2020 13:23:56 +0000
Received: by outflank-mailman (input) for mailman id 6201;
 Tue, 13 Oct 2020 13:23:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ben2=DU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSKHS-00030D-QX
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:23:54 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47b15ca5-07f6-43fa-bfb0-56a31d25d1a3;
 Tue, 13 Oct 2020 13:23:53 +0000 (UTC)
X-Inumbo-ID: 47b15ca5-07f6-43fa-bfb0-56a31d25d1a3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602595433;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=s6/6G/AA6r3e0Zt1cJD/3dGEjMPEVC+PIlv5DgBo0+M=;
  b=UzpitQvfbPpvEit/theUJaCcd+LpqMlE+UNTqDawrRFiAOEWRQylw9fl
   6JTkIpL0RPCtxftSn0TQVYRrCLomik2bIZRTpIys9zbmQWkV/kMkDVi7N
   lLvTEUxxNW5LUScGPpBkG78bwjEMcHniHXezP0qzutvuRROtUskm33sNo
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jC/qmyHPone5mVmQQqJ+10YW1rwPkw1gEnihkO3poVVITmE6vBpAT43fcH/Ro7CwbCNNIV20jp
 i/LiBfoJP6uvNr2rVpRcjkoQoN84EJhl/Yf5m7NaTamHFC//2v0ZwCgn7Jy+6+gpQSJfEGA2sb
 nnkUi/Uw2OK5ehyiKaGqvrLM+CY9dY15o5Nu/XpY+VI6GDniq9wu62yNx4t89Pc/BlXPxCQVqI
 5aFctrHhv3R51S+cRSuNGf+mJuBNgsEacbzC27NWJwLSJ9HIRmMDcpE9xagXp73aUn+Jm8yJsT
 GZ8=
X-SBRS: 2.5
X-MesageID: 28888871
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,370,1596513600"; 
   d="scan'208";a="28888871"
Subject: Re: [PATCH] x86/smpboot: Unconditionally call
 memguard_unguard_stack() in cpu_smpboot_free()
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201005122325.17395-1-andrew.cooper3@citrix.com>
 <36d3443d-50dd-5163-ddac-973421f390e0@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d5c19b39-0413-d61d-3e1f-c35dd19b4287@citrix.com>
Date: Tue, 13 Oct 2020 14:23:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <36d3443d-50dd-5163-ddac-973421f390e0@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 13/10/2020 14:16, Jan Beulich wrote:
> On 05.10.2020 14:23, Andrew Cooper wrote:
>> --- a/xen/arch/x86/smpboot.c
>> +++ b/xen/arch/x86/smpboot.c
>> @@ -971,16 +971,16 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
>>      if ( IS_ENABLED(CONFIG_PV32) )
>>          FREE_XENHEAP_PAGE(per_cpu(compat_gdt, cpu));
>>  
>> +    if ( stack_base[cpu] )
>> +        memguard_unguard_stack(stack_base[cpu]);
>> +
>>      if ( remove )
>>      {
>>          FREE_XENHEAP_PAGE(per_cpu(gdt, cpu));
>>          FREE_XENHEAP_PAGE(idt_tables[cpu]);
>>  
>>          if ( stack_base[cpu] )
>> -        {
>> -            memguard_unguard_stack(stack_base[cpu]);
>>              FREE_XENHEAP_PAGES(stack_base[cpu], STACK_ORDER);
>> -        }
>>      }
>>  }
> In my initial reply to Marek's report I did suggest putting the fix
> in the alloc path in order to keep the pages "guarded" while the CPU
> is parked

In which case you should have identified that bug explicitly.

Because I can't read your mind while fixing the other real bugs in your
suggestion.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:24:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:24:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6202.16454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKHe-00033m-FO; Tue, 13 Oct 2020 13:24:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6202.16454; Tue, 13 Oct 2020 13:24:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKHe-00033f-BS; Tue, 13 Oct 2020 13:24:06 +0000
Received: by outflank-mailman (input) for mailman id 6202;
 Tue, 13 Oct 2020 13:24:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKHd-00033N-6O
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:24:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8966304-3780-43c2-b3d1-906064a9add9;
 Tue, 13 Oct 2020 13:24:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 710B9B2F6;
 Tue, 13 Oct 2020 13:24:03 +0000 (UTC)
X-Inumbo-ID: b8966304-3780-43c2-b3d1-906064a9add9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602595443;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tg4ogB+O6v36HtBcabX2efN40YqLQan2oe3KO/rXuQk=;
	b=HGDmB/6lJ38yzY76p7UrBWezFg9alGGczMJmPpw3Bh3b6Xambaq07blIUDP40UKjP1nTlP
	w4JNYQJqHBxNIeCPXumy1BpeL5kvTUrnH1gV5UROSLqv33298bXhkZVGFhnze43q2yki/p
	BQODvKE8PyA9dLZl6w8gk/hXkLNuDYw=
Subject: Re: [PATCH v2 2/6] x86: reduce CET-SS related #ifdef-ary
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <d8561c46-a6df-3f64-78df-f84c649a99b4@suse.com>
 <4120e048-5314-afba-d921-9f4651a61eaa@suse.com>
 <3a2cfb6a-68cd-dbc7-c0c8-53b810b4eede@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f21084b2-7424-f519-18c1-0450f73fcf02@suse.com>
Date: Tue, 13 Oct 2020 15:24:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <3a2cfb6a-68cd-dbc7-c0c8-53b810b4eede@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.2020 13:40, Andrew Cooper wrote:
> (Interestingly, zero length
> alternatives do appear to compile, and this is clearly a bug.)

Why? The replacement code may be intended to be all NOPs.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:26:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:26:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6206.16466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKJg-0003G3-RZ; Tue, 13 Oct 2020 13:26:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6206.16466; Tue, 13 Oct 2020 13:26:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKJg-0003Fw-OX; Tue, 13 Oct 2020 13:26:12 +0000
Received: by outflank-mailman (input) for mailman id 6206;
 Tue, 13 Oct 2020 13:26:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=utaj=DU=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kSKJe-0003Fq-PG
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:26:10 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2244d217-60a2-4ff9-a333-d7491bf52b1d;
 Tue, 13 Oct 2020 13:26:09 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id x7so15436586wrl.3
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 06:26:09 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o4sm10503356wrv.8.2020.10.13.06.26.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 13 Oct 2020 06:26:08 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=QPZ70J0XAqH1Y+3yiN9i/DCST6pKlHsLt1ByqYIpk40=;
        b=NZCqdq6EF3mCDTO9pptVYwmTxOFHfptt5CeS3Pt3kK5frhSMzMhPWQHMuZHdGuzSNq
         XSvYqWonFvo76IAn7L2gDKduYPojSG6rzjDB7gBh465OmFMcUl100pQu6B2oTSSHTzyN
         SZnvFZJvICXhSvafO/veFQ/xkKbWrIBHHsS9CiwF//2hTeyYYzUPSMXtB27Tn82Npe80
         P0e317t1shOS+xcC629obgfns0QauOY7YC7VD6vGyoRVGeloYkZv/XhUa2CZm4+2RKfv
         i7DgFJ1hjzlBC/i6Iwfs5HQobXJp+/wmMu3N9oh4f3+iVJqgLEbSiJuHmD1NyNvB49Wr
         BPTQ==
X-Gm-Message-State: AOAM533Pk4AaScttJQiFFA7yw5SHKEnQyBDndvlS385HsfckSu/tkZyx
	uOiO3PIraqMcXGtfYCLpksEPS3W+X7U=
X-Google-Smtp-Source: ABdhPJxs97Jimm846Z8fGA3+LE3GrWk+XhgYsRu5WWMTJFobMmN0e8VDSqVEAyvU21Pic2nhm/xB/w==
X-Received: by 2002:adf:de89:: with SMTP id w9mr12997973wrl.212.1602595569016;
        Tue, 13 Oct 2020 06:26:09 -0700 (PDT)
Date: Tue, 13 Oct 2020 13:26:06 +0000
From: Wei Liu <wl@xen.org>
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [SECOND RESEND] [PATCH] tools/python: Pass linker to Python
 build process
Message-ID: <20201013132606.7ff35mmpesklbmcx@liuwe-devbox-debian-v2>
References: <20201012011139.GA82449@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201012011139.GA82449@mattapan.m5p.com>
User-Agent: NeoMutt/20180716

On Sun, Oct 11, 2020 at 06:11:39PM -0700, Elliott Mitchell wrote:
> Unexpectedly the environment variable which needs to be passed is
> $LDSHARED and not $LD.  Otherwise Python may find the build `ld` instead
> of the host `ld`.
> 
> Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
> it can load at runtime, not executables.
> 
> This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
> to $LDFLAGS which breaks many linkers.
> 
> Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> ---
> This is now the *third* time this has been sent to the list.  Mark Pryor
> has tested and confirms Python cross-building is working.  There is one
> wart left which I'm unsure of the best approach for.
> 
> Having looked around a bit, I believe this is a Python 2/3 compatibility
> issue.  "distutils" for Python 2 likely lacked a separate $LDSHARED or
> $LD variable, whereas Python 3 does have this.  Alas this is pointless
> due to the above (unless you can cause distutils.py to do the final link
> step separately).

I think this is well-reasoned but I don't have time to figure out and
verify the details.

Marek, do you have any comment on this?
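For background on why the variable is $LDSHARED: CPython's distutils links extension modules with the LDSHARED config variable rather than LD, and its customize_compiler() allows the environment to override that value, which is what exporting LDSHARED="$(CC)" in the Makefiles relies on. A minimal sketch (assuming a Unix-like CPython with a configured C toolchain):

```python
# Sketch: inspect the linker command distutils uses for shared objects.
# LDSHARED (not LD) is the variable consulted when linking extension
# modules; it is typically the C compiler plus shared-library flags.
import sysconfig

cc = sysconfig.get_config_var("CC")
ldshared = sysconfig.get_config_var("LDSHARED")
print("CC      :", cc)
print("LDSHARED:", ldshared)

# distutils.sysconfig.customize_compiler() additionally honours an
# LDSHARED environment variable, so a build system can substitute the
# cross compiler for the link step without patching Python itself.
```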

> ---
>  tools/pygrub/Makefile | 9 +++++----
>  tools/python/Makefile | 9 +++++----
>  2 files changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
> index 3063c4998f..37b2146214 100644
> --- a/tools/pygrub/Makefile
> +++ b/tools/pygrub/Makefile
> @@ -3,20 +3,21 @@ XEN_ROOT = $(CURDIR)/../..
>  include $(XEN_ROOT)/tools/Rules.mk
>  
>  PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
> -PY_LDFLAGS = $(LDFLAGS) $(APPEND_LDFLAGS)
> +PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
>  INSTALL_LOG = build/installed_files.txt
>  
>  .PHONY: all
>  all: build
>  .PHONY: build
>  build:
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
> +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
>  
>  .PHONY: install
>  install: all
>  	$(INSTALL_DIR) $(DESTDIR)/$(bindir)
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
> -		setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
> +		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
> +		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>  		 --root="$(DESTDIR)" --install-scripts=$(LIBEXEC_BIN) --force
>  	set -e; if [ $(bindir) != $(LIBEXEC_BIN) -a \
>  	             "`readlink -f $(DESTDIR)/$(bindir)`" != \
> diff --git a/tools/python/Makefile b/tools/python/Makefile
> index 8d22c03676..b675f5b4de 100644
> --- a/tools/python/Makefile
> +++ b/tools/python/Makefile
> @@ -5,19 +5,20 @@ include $(XEN_ROOT)/tools/Rules.mk
>  all: build
>  
>  PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
> -PY_LDFLAGS = $(LDFLAGS) $(APPEND_LDFLAGS)
> +PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
>  INSTALL_LOG = build/installed_files.txt
>  
>  .PHONY: build
>  build:
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" $(PYTHON) setup.py build
> +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
>  
>  .PHONY: install
>  install:
>  	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
>  
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
> -		setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
> +		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
> +		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>  		--root="$(DESTDIR)" --force
>  
>  	$(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)
> -- 
> 2.20.1
> 
> 
> -- 
> (\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
>  \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
>   \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
> 8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:35:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:35:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6208.16477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKSI-0004Bh-Ne; Tue, 13 Oct 2020 13:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6208.16477; Tue, 13 Oct 2020 13:35:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKSI-0004Ba-Kh; Tue, 13 Oct 2020 13:35:06 +0000
Received: by outflank-mailman (input) for mailman id 6208;
 Tue, 13 Oct 2020 13:35:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKSH-0004BV-C8
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:35:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b006341c-e05b-4fcc-8d15-39edf9cdd484;
 Tue, 13 Oct 2020 13:35:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 481F0B319;
 Tue, 13 Oct 2020 13:35:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602596102;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1F9x9j8/3cvMphQwdgDEndglZUMVLyn3LmK/kAdiQm4=;
	b=nhS117TDZMG6VUfH9IBM8pQ4yiTHgeBDRO9YGi8FaqIWunqxVVXyEcseJb0E+7q6w4/s2w
	I/7LnIXP/xk+TwPJUlSZY5kKIOX4KtU6N75siVC59koE+VKntXA6vD1vx7QKMFLLmqwXmX
	OIrPFuZxifJs7oJSVz4Q52cM/8fdMb0=
Subject: Re: [PATCH v2 5/6] x86: guard against straight-line speculation past
 RET
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <62ffb078-d763-f845-c4b9-eeacb3358d02@suse.com>
 <fd18939c-cfc7-6de8-07f2-217f810afde1@suse.com>
 <447525bc-662d-ff52-6b73-e6e1a61767ec@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f5be4c67-95ad-dd98-cd24-b925da3ef519@suse.com>
Date: Tue, 13 Oct 2020 15:34:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <447525bc-662d-ff52-6b73-e6e1a61767ec@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.10.2020 14:00, Andrew Cooper wrote:
> On 28/09/2020 13:31, Jan Beulich wrote:
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
>>  # https://bugs.llvm.org/show_bug.cgi?id=36110
>>  t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
>>  
>> -CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
>> +# Check whether \(text) escaping in macro bodies is supported.
>> +t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 8",,-no-integrated-as)
>> +
>> +# Check whether macros can override insn mnemonics in inline assembly.
>> +t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)
>> +
>> +acc1 := $(call or,$(t1),$(t2),$(t3),$(t4))
>> +
>> +CLANG_FLAGS += $(call or,$(acc1),$(t5))
> 
> I'm not happy taking this until there is toolchain support visible in
> the future.
> 
> We *cannot* rule out the use of IAS forever more, because there are
> features far more important than ret speculation which depend on it.

So what do you suggest? We can't have both, afaics, so we need to
pick either being able to use the integrated assembler or being
able to guard RET.

>> --- a/xen/include/asm-x86/asm-defns.h
>> +++ b/xen/include/asm-x86/asm-defns.h
>> @@ -50,3 +50,22 @@
>>  .macro INDIRECT_JMP arg:req
>>      INDIRECT_BRANCH jmp \arg
>>  .endm
>> +
>> +/*
>> + * To guard against speculation past RET, insert a breakpoint insn
>> + * immediately after them.
>> + */
>> +.macro ret operand:vararg
>> +    ret$ \operand
>> +.endm
>> +.macro retq operand:vararg
>> +    ret$ \operand
>> +.endm
>> +.macro ret$ operand:vararg
>> +    .purgem ret
>> +    ret \operand
> 
> You're substituting retq for ret, which defeats the purpose of unwrapping.

I'm afraid I don't understand the "defeats" aspect.

> I will repeat my previous feedback.  Do away with this
> wrapping/unwrapping and just use a raw .byte.  Its simpler and faster.

Well, I could now also repeat my prior response to your prior
feedback, but since iirc you didn't reply back then I don't
expect you would now. Instead I'll - once again - give in despite
my belief that this is the cleaner approach, and that in cases
like this one - when there are pros and cons to either approach -
it should be the author's choice rather than the reviewer's.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:44:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:44:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6214.16498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKbW-0005AB-Mo; Tue, 13 Oct 2020 13:44:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6214.16498; Tue, 13 Oct 2020 13:44:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKbW-0005A4-Jl; Tue, 13 Oct 2020 13:44:38 +0000
Received: by outflank-mailman (input) for mailman id 6214;
 Tue, 13 Oct 2020 13:44:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKbV-00059y-Qx
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:44:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5af6508-6873-4015-b46f-fa9738b81bbd;
 Tue, 13 Oct 2020 13:44:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB2E2AC82;
 Tue, 13 Oct 2020 13:44:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602596676;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9sLalaXgyFQWB0q6HknwBauDenwgCZwL0plyLm2JloY=;
	b=rbtAZfomdm/r5efKJas4pya4N+EiuT3RZTargKaKjepFUe1zFLpzS/JjzydWcBxre/l6qa
	gq4164dcrS52gp2pMS8PmhehoutGdIgzFYZtxh+WEAYEEH5nDbG9Ez+u4w+HP4vKloMdrt
	oawbVi3qEXuoQKG5so04AKC5uHlKJ8M=
Subject: Re: [PATCH] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201006162327.93055-1-roger.pau@citrix.com>
 <a98d6cb1-0b1d-8fb8-8718-c65e02e448bb@citrix.com>
 <20201007164117.GH19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ef0631f7-f509-d0ca-773e-0758587a55bb@suse.com>
Date: Tue, 13 Oct 2020 15:44:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201007164117.GH19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.10.2020 18:41, Roger Pau Monné wrote:
> On Wed, Oct 07, 2020 at 01:06:08PM +0100, Andrew Cooper wrote:
>> On 06/10/2020 17:23, Roger Pau Monne wrote:
>>> Currently a PV hardware domain can also be given control over the CPU
>>> frequency, and such guest is allowed to write to MSR_IA32_PERF_CTL.
>>
>> This might be how the current logic "works", but its straight up broken.
>>
>> PERF_CTL is thread scope, so unless dom0 is identity pinned and has one
>> vcpu for every pcpu, it cannot use the interface correctly.
> 
> Selecting cpufreq=dom0-kernel will force vCPU pinning. I'm not able
> however to see anywhere that would force dom0 vCPUs == pCPUs.

Unless there are other overriding command line options, doesn't the
way sched_select_initial_cpu() works guarantee this?

>>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>>> index d8ed83f869..41baa3b7a1 100644
>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -1069,6 +1069,12 @@ extern enum cpufreq_controller {
>>>      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
>>>  } cpufreq_controller;
>>>  
>>> +static inline bool is_cpufreq_controller(const struct domain *d)
>>> +{
>>> +    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
>>> +            is_hardware_domain(d));
>>
>> This won't compile on !CONFIG_X86, due to CONFIG_HAS_CPUFREQ
> 
> It does seem to build on Arm, because this is only used in x86 code:
> 
> https://gitlab.com/xen-project/people/royger/xen/-/jobs/778207412
> 
> The extern declaration of cpufreq_controller is just above, so if you
> tried to use is_cpufreq_controller on Arm you would get a link time
> error, otherwise it builds fine. The compiler removes the function on
> Arm as it has the inline attribute and it's not used.
> 
> Alternatively I could look into moving cpufreq_controller (and
> is_cpufreq_controller) out of sched.h into somewhere else, I haven't
> looked at why it needs to live there.
> 
>> Honestly - I don't see any point to this code.  Its opt-in via the
>> command line only, and doesn't provide adequate checks for enablement. 
>> (It's not as if we're lacking complexity or moving parts when it comes
>> to power/frequency management).
> 
> Right, I could do a pre-patch to remove this, but I also don't think
> we should block this fix on removing FREQCTL_dom0_kernel, so I would
> rather fix the regression and then remove the feature if we agree it
> can be removed.

I agree.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:49:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:49:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6218.16510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKgS-0005Nt-BG; Tue, 13 Oct 2020 13:49:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6218.16510; Tue, 13 Oct 2020 13:49:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKgS-0005Nm-7r; Tue, 13 Oct 2020 13:49:44 +0000
Received: by outflank-mailman (input) for mailman id 6218;
 Tue, 13 Oct 2020 13:49:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKgQ-0005Ne-Ca
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:49:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f3379ea-6274-4702-80ad-45795359687f;
 Tue, 13 Oct 2020 13:49:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B7CC0AC82;
 Tue, 13 Oct 2020 13:49:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602596979;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gh5NjLwGrLknJvHV0EAb87hhejiY3zlRt/0QE1JSreE=;
	b=BnwCodKQqaYBbFDYUrhB61BrkCAYGLLVE0Q9OJqtXh76DraiPpfTIIYUHyMnf3sT9ry1sw
	njyke3+1jbcDlX3owZ4VB4n/dKwGpWHaYFUA0y8Mk5jdI+z3lvKOJqGC0ehJAuW0bzHs0i
	vR7wXExVab5+mBRYSfqLtKNawaaVIww=
Subject: Re: [PATCH] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201006162327.93055-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <edc93f6b-4770-baed-9d00-428bce47e30b@suse.com>
Date: Tue, 13 Oct 2020 15:49:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201006162327.93055-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06.10.2020 18:23, Roger Pau Monne wrote:
> Currently a PV hardware domain can also be given control over the CPU
> frequency, and such guest is allowed to write to MSR_IA32_PERF_CTL.
> However since commit 322ec7c89f6 the default behavior has been changed
> to reject accesses to not explicitly handled MSRs, preventing PV
> guests that manage CPU frequency from reading
> MSR_IA32_PERF_{STATUS/CTL}.
> 
> Additionally some HVM guests (Windows at least) will attempt to read
> MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
> 
> vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
> d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
> 
> Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
> handling shared between HVM and PV guests, and add an explicit case
> for reads to MSR_IA32_PERF_{STATUS/CTL}.
> 
> Restore previous behavior and allow PV guests with the required
> permissions to read the contents of the mentioned MSRs. Non privileged
> guests will get 0 when trying to read those registers, as writes to
> MSR_IA32_PERF_CTL by such guest will already be silently dropped.
> 
> Fixes: 322ec7c89f6 ('x86/pv: disallow access to unknown MSRs')
> Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

I would have given this my R-b, but Andrew's "straight up broken"
comment needs resolving first, one way or another.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 13:58:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 13:58:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6223.16522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKpA-0006LD-9F; Tue, 13 Oct 2020 13:58:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6223.16522; Tue, 13 Oct 2020 13:58:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKpA-0006L6-67; Tue, 13 Oct 2020 13:58:44 +0000
Received: by outflank-mailman (input) for mailman id 6223;
 Tue, 13 Oct 2020 13:58:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKp8-0006L1-A5
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 13:58:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64cf5fd2-3f6b-4b80-b003-3c4bc9bb8775;
 Tue, 13 Oct 2020 13:58:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A79FCACBF;
 Tue, 13 Oct 2020 13:58:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602597520;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mKiytihRbzR69xYtsHkh753vQVOTmQL2tS6ZPS+MFvM=;
	b=swgRKTXdEDMk0o/6UAYn5eGKFDy3dbBEebQVgyaOmBiQ8F67NI6RpfFaWJSGAvwzz7RfEV
	DZt8u4Cp09LyNfynEL2mZO6KKeph2cCMp3dJA5PKN8RTFxtHRsf7nkrhkuXZnHSChtjNOr
	msgzDRlCwIfH5YH7y3cIEvWs6GXHCMU=
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
Date: Tue, 13 Oct 2020 15:58:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012092740.1617-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.10.2020 11:27, Juergen Gross wrote:
> The queue for a fifo event depends on the vcpu_id and the
> priority of the event. When sending an event it might happen that
> the event needs to change queues, and the old queue needs to be
> kept around so the links between queue elements stay intact. For
> this purpose the event channel contains last_priority and
> last_vcpu_id values for identifying the old queue.
> 
> In order to avoid races, always access last_priority and
> last_vcpu_id with a single atomic operation, avoiding any
> inconsistencies.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

I seem to vaguely recall that at the time this seemingly racy
access was done on purpose by David. Did you go look at the
old commits to understand whether there really is a race which
can't be tolerated within the spec?

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -114,8 +114,7 @@ struct evtchn
>          u16 virq;      /* state == ECS_VIRQ */
>      } u;
>      u8 priority;
> -    u8 last_priority;
> -    u16 last_vcpu_id;
> +    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */

This grows struct evtchn's size on at least 32-bit Arm. I'd
like to suggest including "priority" in the union, and calling
the new field simply "fifo" or some such.

Jan
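
For illustration, the packing the patch description implies can be
sketched like this; the field layout, masks, and helper names below
are assumptions made up for the example, not the actual Xen code:

```c
#include <stdint.h>

/* Sketch: last_priority (8 bits) and last_vcpu_id (16 bits) packed
 * into one 32-bit word, so both fields are read or written with a
 * single access instead of two separate stores that could race. */
#define LASTQ_PRIO_MASK   0xffu
#define LASTQ_VCPU_SHIFT  8

static inline uint32_t lastq_pack(uint8_t prio, uint16_t vcpu_id)
{
    /* One 32-bit value carries both fields; storing it is one write. */
    return (uint32_t)prio | ((uint32_t)vcpu_id << LASTQ_VCPU_SHIFT);
}

static inline uint8_t lastq_prio(uint32_t lastq)
{
    return lastq & LASTQ_PRIO_MASK;
}

static inline uint16_t lastq_vcpu(uint32_t lastq)
{
    return lastq >> LASTQ_VCPU_SHIFT;
}
```

A reader of such a packed word sees either the old pair or the new
pair, never a mix, which is the consistency the patch is after.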


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:02:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:02:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6226.16533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKsi-0007EY-Pj; Tue, 13 Oct 2020 14:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6226.16533; Tue, 13 Oct 2020 14:02:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKsi-0007ER-Mo; Tue, 13 Oct 2020 14:02:24 +0000
Received: by outflank-mailman (input) for mailman id 6226;
 Tue, 13 Oct 2020 14:02:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKsh-0007EM-Ll
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:02:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 970a5cd2-25e3-4fbc-865c-fa55bbafd943;
 Tue, 13 Oct 2020 14:02:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2D49AAC82;
 Tue, 13 Oct 2020 14:02:22 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSKsh-0007EM-Ll
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:02:23 +0000
X-Inumbo-ID: 970a5cd2-25e3-4fbc-865c-fa55bbafd943
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 970a5cd2-25e3-4fbc-865c-fa55bbafd943;
	Tue, 13 Oct 2020 14:02:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602597742;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZTMb8P3RFOYYRXGyxDpKdMUHUSlVSwvG1R2W7s/1H4g=;
	b=RMecgYoymqF44kWIXrvOlavRoB4icldnKtXAeGWHvennoW1TDVSM+L6lGEz2YJkj6Qo0Pd
	tuNbvAlJe2I65tH2iv71wFkvWLgExWNUra6C8S4MxiIRETusrFgqALUqLog2sUznFrBAsA
	Y+0Nybry2P7cTocR3j19SgPiR3A63Nc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2D49AAC82;
	Tue, 13 Oct 2020 14:02:22 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3a15ba70-c6b1-dd07-12fe-f8d7a1e6c4d9@suse.com>
Date: Tue, 13 Oct 2020 16:02:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012092740.1617-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.10.2020 11:27, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking for the case
> of sending an event, and to remove the need to disable interrupts
> when taking the lock.
> 
> The lock is needed to avoid races between sending an event or
> querying the channel's state and removal of the event channel.
> 
> Use a locking scheme similar to a rwlock, but with some modifications:
> 
> - sending an event or querying the event channel's state uses an
>   operation similar to read_trylock(); if the lock is not obtained,
>   the send is omitted or a default state is returned

And how come omitting the send or returning default state is valid?

Jan
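
The trylock-style scheme the description outlines could look roughly
like the following; the names, bit encoding, and writer side here are
assumptions for the sketch, not the lock the patch actually adds:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Sketch of an rwlock-like counter: the high bit marks an exclusive
 * writer (e.g. channel teardown); the low bits count readers (event
 * sends / state queries). A send try-locks and simply skips the send
 * if a writer holds the lock, so it never blocks or disables IRQs. */
static atomic_uint evtchn_lock;

#define WRITER_BIT 0x80000000u

static bool evtchn_read_trylock(void)
{
    unsigned int v = atomic_load(&evtchn_lock);

    if (v & WRITER_BIT)
        return false;              /* writer active: caller omits send */
    return atomic_compare_exchange_strong(&evtchn_lock, &v, v + 1);
}

static void evtchn_read_unlock(void)
{
    atomic_fetch_sub(&evtchn_lock, 1);
}

static bool evtchn_write_trylock(void)
{
    unsigned int expect = 0;       /* only lockable with no readers */

    return atomic_compare_exchange_strong(&evtchn_lock, &expect,
                                          WRITER_BIT);
}

static void evtchn_write_unlock(void)
{
    atomic_store(&evtchn_lock, 0);
}
```

Jan's question still applies to such a scheme: the caller must be able
to tolerate a failed read_trylock() (a dropped send or a default state).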


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:04:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:04:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6227.16545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKuI-0007O8-4Q; Tue, 13 Oct 2020 14:04:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6227.16545; Tue, 13 Oct 2020 14:04:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKuI-0007O1-1a; Tue, 13 Oct 2020 14:04:02 +0000
Received: by outflank-mailman (input) for mailman id 6227;
 Tue, 13 Oct 2020 14:04:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKuH-0007Nw-A3
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:04:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65cb9423-5efa-4b64-bc24-df7fb6ed62d0;
 Tue, 13 Oct 2020 14:04:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BB98DAC2F;
 Tue, 13 Oct 2020 14:03:59 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSKuH-0007Nw-A3
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:04:01 +0000
X-Inumbo-ID: 65cb9423-5efa-4b64-bc24-df7fb6ed62d0
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 65cb9423-5efa-4b64-bc24-df7fb6ed62d0;
	Tue, 13 Oct 2020 14:04:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602597839;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xGGF/KIIYFeZDMZNq0z7ds2K46CBuGaxaLzBLu3vtJQ=;
	b=hVkyavPPQQkIjnvenVxmRsDRObNJciGiTq0PqPoLA330zMJFlpYwMs4lIgZIHecAYKREXi
	l9mudO1CYfQk2Sa/YcYe9R4NTTJO7P2SBHcD09IgrPVEmdtVRZzI5AEhp62n5fvQT475Yb
	IaZ8HUdcpC4bJLu7qK5+EFDo5CEWL30=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id BB98DAC2F;
	Tue, 13 Oct 2020 14:03:59 +0000 (UTC)
Subject: Re: [PATCH] x86/msr: handle IA32_THERM_STATUS
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20201007102032.98565-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7d2b0176-f976-5f39-903b-89c5d001fe3d@suse.com>
Date: Tue, 13 Oct 2020 16:03:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201007102032.98565-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.10.2020 12:20, Roger Pau Monne wrote:
> Windows 8 will attempt to read MSR_IA32_THERM_STATUS and panic if a
> #GP fault is injected as a result:
> 
> vmx.c:3035:d8v0 RDMSR 0x0000019c unimplemented
> d8v0 VIRIDIAN CRASH: 3b c0000096 fffff8061de31651 fffff4088a613720 0
> 
> So handle the MSR and return 0 instead.
> 
> Note that this is done in the generic MSR handler, so PV guests
> will also get 0 back when trying to read the MSR. There doesn't
> seem to be much value in handling the MSR for HVM guests only.
> 
> Fixes: 84e848fd7a1 ('x86/hvm: disallow access to unknown MSRs')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
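
The approach of the patch, shown as a minimal sketch (the function
name, return values, and dispatch structure below are illustrative,
not the actual Xen guest_rdmsr() code):

```c
#include <stdint.h>

/* Sketch: recognise MSR_IA32_THERM_STATUS in the generic read path
 * and hand back 0 instead of falling through to #GP injection, which
 * is what made Windows 8 panic. */
#define MSR_IA32_THERM_STATUS 0x19c

static int rdmsr_sketch(unsigned int msr, uint64_t *val)
{
    switch (msr) {
    case MSR_IA32_THERM_STATUS:
        *val = 0;      /* report no thermal status rather than #GP */
        return 0;      /* success (X86EMUL_OKAY in real code) */
    }
    return -1;         /* unhandled: caller injects #GP */
}
```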


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:05:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:05:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6231.16557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKv6-0007UP-Er; Tue, 13 Oct 2020 14:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6231.16557; Tue, 13 Oct 2020 14:04:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKv6-0007UI-Bq; Tue, 13 Oct 2020 14:04:52 +0000
Received: by outflank-mailman (input) for mailman id 6231;
 Tue, 13 Oct 2020 14:04:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKv4-0007U9-KD
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:04:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d9758e83-e599-41b1-9cd2-56d89a03ad4a;
 Tue, 13 Oct 2020 14:04:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0F31AABCC;
 Tue, 13 Oct 2020 14:04:49 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSKv4-0007U9-KD
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:04:50 +0000
X-Inumbo-ID: d9758e83-e599-41b1-9cd2-56d89a03ad4a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d9758e83-e599-41b1-9cd2-56d89a03ad4a;
	Tue, 13 Oct 2020 14:04:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602597889;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fJOGg2+tRr993kw3sK8IPbALK/T6ItKt1gWPLVjTZMk=;
	b=N1LQy7maKi7fdudmNNtuZxdZb+tD1oUGj8xk3fp74G1BvOZRgCrZZdLI1Z60e07BRBipwZ
	TfOPOKYb+tUkkfD2W8m/Futi9FwKlem4ktmrCz+T8wEIUI0BiQuAT2wfIBydEBHZdpi0Re
	mG2mtfojxqiDIFjlLyYfl0yIDGtEY7Y=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0F31AABCC;
	Tue, 13 Oct 2020 14:04:49 +0000 (UTC)
Subject: Re: [PATCH v2] build: always use BASEDIR for xen sub-directory
To: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <df2fc83d3a84dd3fc2e58101ded22847fdbaa862.1602082503.git.bertrand.marquis@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f492da43-4fd2-b798-7bb3-3810be5f4893@suse.com>
Date: Tue, 13 Oct 2020 16:04:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <df2fc83d3a84dd3fc2e58101ded22847fdbaa862.1602082503.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.10.2020 16:57, Bertrand Marquis wrote:
> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
> 
> This removes the dependency on the xen subdirectory, preventing
> the use of a wrong configuration file when the xen subdirectory is
> duplicated for compilation tests.
> 
> BASEDIR is set in xen/lib/x86/Makefile, as this Makefile is
> directly called from the tools build and install process, where
> BASEDIR is not set.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

And once again
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:05:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:05:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6232.16570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKvq-0007b8-QY; Tue, 13 Oct 2020 14:05:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6232.16570; Tue, 13 Oct 2020 14:05:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKvq-0007b1-MS; Tue, 13 Oct 2020 14:05:38 +0000
Received: by outflank-mailman (input) for mailman id 6232;
 Tue, 13 Oct 2020 14:05:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+FY=DU=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSKvp-0007ar-Ao
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:05:37 +0000
Received: from mail-qk1-x72e.google.com (unknown [2607:f8b0:4864:20::72e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e79a1db-bee2-4c59-896f-c71ecb68f6bd;
 Tue, 13 Oct 2020 14:05:36 +0000 (UTC)
Received: by mail-qk1-x72e.google.com with SMTP id s4so20910104qkf.7
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:05:36 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:2df1:3321:942a:fbce])
 by smtp.gmail.com with ESMTPSA id z26sm13793609qki.40.2020.10.13.07.05.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 13 Oct 2020 07:05:35 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=M+FY=DU=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kSKvp-0007ar-Ao
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:05:37 +0000
X-Inumbo-ID: 4e79a1db-bee2-4c59-896f-c71ecb68f6bd
Received: from mail-qk1-x72e.google.com (unknown [2607:f8b0:4864:20::72e])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4e79a1db-bee2-4c59-896f-c71ecb68f6bd;
	Tue, 13 Oct 2020 14:05:36 +0000 (UTC)
Received: by mail-qk1-x72e.google.com with SMTP id s4so20910104qkf.7
        for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:05:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ZUGu1UjR8L476LNL7fXxphmCIcS0ofD66g+kHXKobM4=;
        b=IObwVdNCG2HviBddPIHXaO7fBXgVd4IMV54T1c2dgm1mPpyez8saUJlRLHH0D6x6cc
         YxLX4GLFKpnuALRKE5kWohB56qyTKO78DlSbiO7OkXjlOP2zazIfIk411MKZrXJkC/7G
         9gm5WgxpovwnivU3B7/YrmdIPzcf5pOUBWNBpHiIvFw/kwmbIqriV++tLM20h75LZKW9
         tQ7lw8VMQfgoUvmIulj/27zu2xX4I6yZGAAR8j3PHcc88Tfq86jJ/PFcabnrl5YaJMLk
         K9um05Vg9uaGQUm+PBThNycqv6wH1rGvgeVwYnxXq8iOsqjOWnPh2pPNA7YyWwb4Wt0+
         MtSQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=ZUGu1UjR8L476LNL7fXxphmCIcS0ofD66g+kHXKobM4=;
        b=DqHu8m9i46b98+XpjT+ZpHlv8s7qtib49MwA3D0D6xNhnyu1+Q8E83DEQ42KcVgDcu
         b7I5Owti9X6rMjN0cr9ifJTvVxbMBSZMJpLrPvRSLJsKFf0Qajp8DgfyVNauT1mxDgfu
         gojqFHrTse9LcRIOm4Na9WMutu16sJh84+noF0C7qwMHerdhCrQ5r393Z50aF0wiiXJN
         qPgIxD/wFj5a1akdR844ASHPmvo5q4VY6fbBgfac9JPY3qYfZ6MUVC4aK2YscpW0Kakp
         nSlS8hZiQxqBsWtUROSvEBWzQUPASpyBJ4BR90GFqHm+FwszVEkAV9GIGAtWuxkgpk4q
         iPXQ==
X-Gm-Message-State: AOAM530omk8MghLI3bowDtFymt4c8JZrS+Zkyyp3fYsACbTHCuzOcgCV
	K6l1zSl2N5lJX7O4OL58HdI=
X-Google-Smtp-Source: ABdhPJxOTxmkqh+AoB4DBTpV5Pol3D0nwJYAhdUYd7RDexCSOKqQbKr0DammqDLpjyQZagRGN1nPOg==
X-Received: by 2002:ae9:ef56:: with SMTP id d83mr98199qkg.83.1602597935940;
        Tue, 13 Oct 2020 07:05:35 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:2df1:3321:942a:fbce])
        by smtp.gmail.com with ESMTPSA id z26sm13793609qki.40.2020.10.13.07.05.34
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 13 Oct 2020 07:05:35 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: qemu-devel@nongnu.org
Cc: Claudio Fontana <cfontana@suse.de>,
	Jason Andryuk <jandryuk@gmail.com>,
	Thomas Huth <thuth@redhat.com>,
	Laurent Vivier <lvivier@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 0/3] Add Xen CpusAccel
Date: Tue, 13 Oct 2020 10:05:08 -0400
Message-Id: <20201013140511.5681-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen was left behind when CpusAccel became mandatory, and now fails the
assert in qemu_init_vcpu().  It relied on the same dummy cpu threads as
qtest.  Move the qtest cpu functions to a common location and reuse
them for Xen.

v2:
  New patch "accel: Remove _WIN32 ifdef from qtest-cpus.c"
  Use accel/dummy-cpus.c for filename
  Put prototype in include/sysemu/cpus.h

Jason Andryuk (3):
  accel: Remove _WIN32 ifdef from qtest-cpus.c
  accel: move qtest CpusAccel functions to a common location
  accel: Add xen CpusAccel using dummy-cpus

 accel/{qtest/qtest-cpus.c => dummy-cpus.c} | 27 ++++------------------
 accel/meson.build                          |  8 +++++++
 accel/qtest/meson.build                    |  1 -
 accel/qtest/qtest-cpus.h                   | 17 --------------
 accel/qtest/qtest.c                        |  5 +++-
 accel/xen/xen-all.c                        |  8 +++++++
 include/sysemu/cpus.h                      |  3 +++
 7 files changed, 27 insertions(+), 42 deletions(-)
 rename accel/{qtest/qtest-cpus.c => dummy-cpus.c} (71%)
 delete mode 100644 accel/qtest/qtest-cpus.h

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:05:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:05:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6233.16582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKwA-0007h5-1z; Tue, 13 Oct 2020 14:05:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6233.16582; Tue, 13 Oct 2020 14:05:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKw9-0007gy-Un; Tue, 13 Oct 2020 14:05:57 +0000
Received: by outflank-mailman (input) for mailman id 6233;
 Tue, 13 Oct 2020 14:05:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+FY=DU=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSKw9-0007gn-Gq
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:05:57 +0000
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bbce91b6-eb35-4f3c-aad1-1bb9acab6041;
 Tue, 13 Oct 2020 14:05:56 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id b69so20908874qkg.8
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:05:56 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:2df1:3321:942a:fbce])
 by smtp.gmail.com with ESMTPSA id z26sm13793609qki.40.2020.10.13.07.05.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 13 Oct 2020 07:05:55 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=M+FY=DU=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kSKw9-0007gn-Gq
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:05:57 +0000
X-Inumbo-ID: bbce91b6-eb35-4f3c-aad1-1bb9acab6041
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bbce91b6-eb35-4f3c-aad1-1bb9acab6041;
	Tue, 13 Oct 2020 14:05:56 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id b69so20908874qkg.8
        for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 07:05:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Z5sCn2g22QNClDVC8ApV+Mu5VWc6D2QWQMvZw+Aegos=;
        b=CfvQXnZCbKI+aeJIUwszdFQKsZPrD0OxTTWrzEaSn/d3DDvLCCcXgs0dyeR+jPToj2
         ukNtrMP4s3qkhNcd4ZKW8eZc9hXTU5do+L9/1IXwwlil/SY1O+BoBGnTsV9LbPuRBOoW
         wLvovZjgplbCIvu7UWrbpYFfsq6LUq+KITh8knWtWOPyNIDkmkzgC/c/vjm/h9BKZtWR
         F3MUx8xrMrC6VFmBxyBfXPQUwvR2XyfQNtTxOgCECyVPgaWY048xCzZMVL3/JPU2XvOH
         +G7OLxyuA3UDUcVKDN5GSAS0q0FP/GZ8g0JPMLVvw8X0pYznkVSbSR2BOS0Poh4A+h+J
         J4RA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Z5sCn2g22QNClDVC8ApV+Mu5VWc6D2QWQMvZw+Aegos=;
        b=AG4hg/rEiy6IO9qtZMxObc70OtEiA2cRKN+VUG8+22lQwjZ1N5y6uTviHIbnO4k67o
         d30yXrueHsyZo61OVHtif+Zcf+e3WF4iUwLgtklAfMAXvv+A5IRNPBuCbOx5h/lk0S9L
         Y6NmOVvrF/x8V5vDQNAY5hot9CS+6LduDf1Gt+m8tFHWAm9wAJXiCCKhtggrs20KsP0G
         oux9/wJqOlBv2G9R42gO1MCyzXG3rAtvuklP9MiaTcPGfyF86i9Ai9ZJIhA2F7nQO12R
         MXCBwXIXSapXpO3MFthQqAE3c/pJBdvl9IxNwASn+HBP+y1z6zVLqqgvUzhgwnzmcOqB
         6xaw==
X-Gm-Message-State: AOAM530HOA163ed//RvL6zTQ+ae+s2xpAUs7cpusf9/npPeZRzWvTvAb
	SB8uDpuygBcj5dXQYweziN0=
X-Google-Smtp-Source: ABdhPJwtxBEnRnNPjEHcuVVHof6O92obazL/vN2TmbQ5O2rfIz0/rAfUoxQkml2srmIG5IH2ZnkKwQ==
X-Received: by 2002:ae9:ef56:: with SMTP id d83mr100087qkg.83.1602597956496;
        Tue, 13 Oct 2020 07:05:56 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:2df1:3321:942a:fbce])
        by smtp.gmail.com with ESMTPSA id z26sm13793609qki.40.2020.10.13.07.05.55
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Tue, 13 Oct 2020 07:05:55 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: qemu-devel@nongnu.org
Cc: Claudio Fontana <cfontana@suse.de>,
	Jason Andryuk <jandryuk@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org (open list:X86 Xen CPUs)
Subject: [PATCH v2 3/3] accel: Add xen CpusAccel using dummy-cpus
Date: Tue, 13 Oct 2020 10:05:11 -0400
Message-Id: <20201013140511.5681-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201013140511.5681-1-jandryuk@gmail.com>
References: <20201013140511.5681-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen was broken by commit 1583a3898853 ("cpus: extract out qtest-specific
code to accel/qtest").  Xen relied on qemu_init_vcpu() calling
qemu_dummy_start_vcpu() in the default case, but that was replaced by
g_assert_not_reached().

Add a minimal "CpusAccel" for Xen using the dummy-cpus implementation
used by qtest.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 accel/meson.build   | 1 +
 accel/xen/xen-all.c | 8 ++++++++
 2 files changed, 9 insertions(+)

diff --git a/accel/meson.build b/accel/meson.build
index 9a417396bd..b26cca227a 100644
--- a/accel/meson.build
+++ b/accel/meson.build
@@ -12,3 +12,4 @@ dummy_ss.add(files(
 ))
 
 specific_ss.add_all(when: ['CONFIG_SOFTMMU', 'CONFIG_POSIX'], if_true: dummy_ss)
+specific_ss.add_all(when: ['CONFIG_XEN'], if_true: dummy_ss)
diff --git a/accel/xen/xen-all.c b/accel/xen/xen-all.c
index 60b971d0a8..878a4089d9 100644
--- a/accel/xen/xen-all.c
+++ b/accel/xen/xen-all.c
@@ -16,6 +16,7 @@
 #include "hw/xen/xen_pt.h"
 #include "chardev/char.h"
 #include "sysemu/accel.h"
+#include "sysemu/cpus.h"
 #include "sysemu/xen.h"
 #include "sysemu/runstate.h"
 #include "migration/misc.h"
@@ -153,6 +154,10 @@ static void xen_setup_post(MachineState *ms, AccelState *accel)
     }
 }
 
+const CpusAccel xen_cpus = {
+    .create_vcpu_thread = dummy_start_vcpu_thread,
+};
+
 static int xen_init(MachineState *ms)
 {
     MachineClass *mc = MACHINE_GET_CLASS(ms);
@@ -180,6 +185,9 @@ static int xen_init(MachineState *ms)
      * opt out of system RAM being allocated by generic code
      */
     mc->default_ram_id = NULL;
+
+    cpus_register_accel(&xen_cpus);
+
     return 0;
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:08:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:08:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6239.16593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKyi-0007uu-Ft; Tue, 13 Oct 2020 14:08:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6239.16593; Tue, 13 Oct 2020 14:08:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKyi-0007un-Cy; Tue, 13 Oct 2020 14:08:36 +0000
Received: by outflank-mailman (input) for mailman id 6239;
 Tue, 13 Oct 2020 14:08:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=COa4=DU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kSKyh-0007ui-36
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:08:35 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb0dd6db-a82e-4a17-9ed5-2c0541fbb029;
 Tue, 13 Oct 2020 14:08:33 +0000 (UTC)
X-Inumbo-ID: eb0dd6db-a82e-4a17-9ed5-2c0541fbb029
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602598112;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=iHh7wiC4B2V5XZ/fLiC+j0dCB49+U0P5NCnatMvWJYg=;
  b=V6QjKksHcEy2Br1Er2GjukL1eHYqmnnp3iZnG47YMvfUsgPCu2b73njX
   mersdAUI2aBzTvlQxl/O6AUc1Gpi7xREJqVF5Hr9KbYYmuWXs136XssYg
   ioi38nJV5C9/sMj11qYMJX4XFBDnywd6fRrG3sZXY9rCDxBFK2cwlrGA5
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 9rgqLsVmHgfwkpjldBx0kGqaR1UHeCtVL1Zg8Z8XKV5FVgkcRTGIm5gz4IyAl+77bM55hegEMr
 JuuCb46Rv3YrTgDWsPWq5H0+3N9e9n/M1YzwfQaQY5bHKkSyIIvdgNeeJajPQa8NCeZHtEY5tD
 nNI7DjjEoKk9nqX8avbxtRqdfr9LDkLHxQXQfP7D9P5jh+irT6uOG4vOBrQ8SicUcf1dEq2kkP
 4pTNzKjT3m9vUr+Jb4rvEPEbrzZtgFkdmWfXHiHVq+aakfmvA2+R7+284AzKEyOmiEyQXMQmSA
 Ogo=
X-SBRS: 2.5
X-MesageID: 28894422
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,371,1596513600"; 
   d="scan'208";a="28894422"
Date: Tue, 13 Oct 2020 16:08:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Paul Durrant <pdurrant@amazon.com>
Subject: Re: [PATCH v2 01/11] x86/hvm: drop vcpu parameter from vlapic EOI
 callbacks
Message-ID: <20201013140820.GP19254@Air-de-Roger>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-2-roger.pau@citrix.com>
 <bafcd30e-f75b-79c8-2424-6a63cb0b96d4@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <bafcd30e-f75b-79c8-2424-6a63cb0b96d4@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Fri, Oct 02, 2020 at 10:48:07AM +0200, Jan Beulich wrote:
> On 30.09.2020 12:40, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -459,13 +459,10 @@ void vlapic_EOI_set(struct vlapic *vlapic)
> >  
> >  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> >  {
> > -    struct vcpu *v = vlapic_vcpu(vlapic);
> > -    struct domain *d = v->domain;
> > -
> >      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> > -        vioapic_update_EOI(d, vector);
> > +        vioapic_update_EOI(vector);
> >  
> > -    hvm_dpci_msi_eoi(d, vector);
> > +    hvm_dpci_msi_eoi(vector);
> >  }
> 
> What about viridian_synic_wrmsr() -> vlapic_EOI_set() ->
> vlapic_handle_EOI()? You'd probably have noticed this if you
> had tried to (consistently) drop the respective parameters from
> the intermediate functions as well.
> 
> Question of course is in how far viridian_synic_wrmsr() for
> HV_X64_MSR_EOI makes much sense when v != current. Paul, Wei?

There's already an assert at the top of viridian_synic_wrmsr that v ==
current, which I assume is why I thought this change was fine. I can
purge the passing of v (current) further, but it wasn't really needed
for the rest of the series.

> A secondary question of course is whether passing around the
> pointers isn't really cheaper than the obtaining of 'current'.

Well, while there's indeed a performance aspect here, I think
using current is much clearer than passing a vcpu around. I could
rename the parameter to curr or some such, but not passing a vcpu
parameter at all seems clearest to me.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:08:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:08:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6240.16606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKyn-0007xU-Ph; Tue, 13 Oct 2020 14:08:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6240.16606; Tue, 13 Oct 2020 14:08:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSKyn-0007xN-Lu; Tue, 13 Oct 2020 14:08:41 +0000
Received: by outflank-mailman (input) for mailman id 6240;
 Tue, 13 Oct 2020 14:08:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSKym-0007ui-1y
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:08:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 200bb103-3998-41cb-acfe-aa9ca0001b9a;
 Tue, 13 Oct 2020 14:08:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6479AB91;
 Tue, 13 Oct 2020 14:08:37 +0000 (UTC)
X-Inumbo-ID: 200bb103-3998-41cb-acfe-aa9ca0001b9a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602598118;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=V2ewon66P2AXLDQsu1m/9tHwbMDilURFx1wdY+w44lo=;
	b=JDX1LPwFnA5ENuASYK1ZszjirCDSAzCWLQLCfq+Zwq2Z90FN/TRbgr+c3G+/oDE+8Bd93v
	J0yzTAr9RwNt8gnN5gg+PGV2Xag3FliLFI38NSkpxejPvozAar9lG98WMhpe/yjqbSiumT
	Llxg4noW+3QUSfTRs8uFSfclBPPF1hQ=
Subject: Re: [PATCH] x86/ucode/intel: Improve description for gathering the
 microcode revision
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201012142523.17652-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <88b98c29-e948-739c-230c-165019b1656f@suse.com>
Date: Tue, 13 Oct 2020 16:08:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012142523.17652-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.10.2020 16:25, Andrew Cooper wrote:
> Obtaining the microcode revision on Intel CPUs is complicated for backwards
> compatibility reasons.  Update apply_microcode() to use a slightly more
> efficient CPUID invocation, now that the documentation has been updated to
> confirm that any CPUID instruction is fine, not just CPUID.1
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:12:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:12:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6245.16618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSL2l-0000Rk-G2; Tue, 13 Oct 2020 14:12:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6245.16618; Tue, 13 Oct 2020 14:12:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSL2l-0000Rd-CL; Tue, 13 Oct 2020 14:12:47 +0000
Received: by outflank-mailman (input) for mailman id 6245;
 Tue, 13 Oct 2020 14:12:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lZE5=DU=suse.de=cfontana@srs-us1.protection.inumbo.net>)
 id 1kSL2j-0000RY-RV
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:12:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b302022-2406-4a8d-aa34-70d0e029c67c;
 Tue, 13 Oct 2020 14:12:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2BA7ABCC;
 Tue, 13 Oct 2020 14:12:43 +0000 (UTC)
X-Inumbo-ID: 7b302022-2406-4a8d-aa34-70d0e029c67c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v2 3/3] accel: Add xen CpusAccel using dummy-cpus
To: Jason Andryuk <jandryuk@gmail.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>
References: <20201013140511.5681-1-jandryuk@gmail.com>
 <20201013140511.5681-4-jandryuk@gmail.com>
From: Claudio Fontana <cfontana@suse.de>
Message-ID: <803af254-a005-1a50-ddad-7116ef261a8a@suse.de>
Date: Tue, 13 Oct 2020 16:12:43 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201013140511.5681-4-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/13/20 4:05 PM, Jason Andryuk wrote:
> Xen was broken by commit 1583a3898853 ("cpus: extract out qtest-specific
> code to accel/qtest").  Xen relied on qemu_init_vcpu() calling
> qemu_dummy_start_vcpu() in the default case, but that was replaced by
> g_assert_not_reached().
> 
> Add a minimal "CpusAccel" for Xen using the dummy-cpus implementation
> used by qtest.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  accel/meson.build   | 1 +
>  accel/xen/xen-all.c | 8 ++++++++
>  2 files changed, 9 insertions(+)
> 
> diff --git a/accel/meson.build b/accel/meson.build
> index 9a417396bd..b26cca227a 100644
> --- a/accel/meson.build
> +++ b/accel/meson.build
> @@ -12,3 +12,4 @@ dummy_ss.add(files(
>  ))
>  
>  specific_ss.add_all(when: ['CONFIG_SOFTMMU', 'CONFIG_POSIX'], if_true: dummy_ss)
> +specific_ss.add_all(when: ['CONFIG_XEN'], if_true: dummy_ss)
> diff --git a/accel/xen/xen-all.c b/accel/xen/xen-all.c
> index 60b971d0a8..878a4089d9 100644
> --- a/accel/xen/xen-all.c
> +++ b/accel/xen/xen-all.c
> @@ -16,6 +16,7 @@
>  #include "hw/xen/xen_pt.h"
>  #include "chardev/char.h"
>  #include "sysemu/accel.h"
> +#include "sysemu/cpus.h"
>  #include "sysemu/xen.h"
>  #include "sysemu/runstate.h"
>  #include "migration/misc.h"
> @@ -153,6 +154,10 @@ static void xen_setup_post(MachineState *ms, AccelState *accel)
>      }
>  }
>  
> +const CpusAccel xen_cpus = {
> +    .create_vcpu_thread = dummy_start_vcpu_thread,
> +};
> +
>  static int xen_init(MachineState *ms)
>  {
>      MachineClass *mc = MACHINE_GET_CLASS(ms);
> @@ -180,6 +185,9 @@ static int xen_init(MachineState *ms)
>       * opt out of system RAM being allocated by generic code
>       */
>      mc->default_ram_id = NULL;
> +
> +    cpus_register_accel(&xen_cpus);
> +
>      return 0;
>  }
>  
> 
Reviewed-by: Claudio Fontana <cfontana@suse.de>


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:13:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:13:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6246.16630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSL3F-0000Z4-Oe; Tue, 13 Oct 2020 14:13:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6246.16630; Tue, 13 Oct 2020 14:13:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSL3F-0000Yx-LH; Tue, 13 Oct 2020 14:13:17 +0000
Received: by outflank-mailman (input) for mailman id 6246;
 Tue, 13 Oct 2020 14:13:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSL3F-0000Yr-7M
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:13:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 959adc41-ebac-4197-9f38-5a4bcf5ce679;
 Tue, 13 Oct 2020 14:13:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E7F7BAEA8;
 Tue, 13 Oct 2020 14:13:14 +0000 (UTC)
X-Inumbo-ID: 959adc41-ebac-4197-9f38-5a4bcf5ce679
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602598395;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tblV7D7L2YyjW0I80+ScELDyVWso3QKuyM+f2sR91Cs=;
	b=cdmOA90ZO17SYq42P9HozFgQHHYLYfflq24QI/foKqCNZC4H9HexDhjbkcm8NpRegT+oDI
	ISKmkB6xp292oHMg7oCiNaLwDln3IIPjKxs/hjprjiVLaM/gcczt9bpfCZyYe6rJjAfj28
	9T0B4Cqz/B3AUBDce41dadG1cwWaIH8=
Subject: Re: [PATCH v2 01/11] x86/hvm: drop vcpu parameter from vlapic EOI
 callbacks
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-2-roger.pau@citrix.com>
 <bafcd30e-f75b-79c8-2424-6a63cb0b96d4@suse.com>
 <20201013140820.GP19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bb60119f-912f-3a7f-8c89-637e511dd395@suse.com>
Date: Tue, 13 Oct 2020 16:13:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201013140820.GP19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.10.2020 16:08, Roger Pau Monné wrote:
> On Fri, Oct 02, 2020 at 10:48:07AM +0200, Jan Beulich wrote:
>> On 30.09.2020 12:40, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/hvm/vlapic.c
>>> +++ b/xen/arch/x86/hvm/vlapic.c
>>> @@ -459,13 +459,10 @@ void vlapic_EOI_set(struct vlapic *vlapic)
>>>  
>>>  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
>>>  {
>>> -    struct vcpu *v = vlapic_vcpu(vlapic);
>>> -    struct domain *d = v->domain;
>>> -
>>>      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
>>> -        vioapic_update_EOI(d, vector);
>>> +        vioapic_update_EOI(vector);
>>>  
>>> -    hvm_dpci_msi_eoi(d, vector);
>>> +    hvm_dpci_msi_eoi(vector);
>>>  }
>>
>> What about viridian_synic_wrmsr() -> vlapic_EOI_set() ->
>> vlapic_handle_EOI()? You'd probably have noticed this if you
>> had tried to (consistently) drop the respective parameters from
>> the intermediate functions as well.
>>
>> Question of course is in how far viridian_synic_wrmsr() for
>> HV_X64_MSR_EOI makes much sense when v != current. Paul, Wei?
> 
> There's already an assert at the top of viridian_synic_wrmsr that v ==
> current, which I assume is why I thought this change was fine. I can
> purge the passing of v (current) further, but it wasn't really needed
> for the rest of the series.

To a large degree that's up to you. It's just that, as said, if
you had done so, you'd likely have noticed the issue, and hence
doing so here and elsewhere may provide reassurance that there's
no further similar case lurking anywhere.

>> A secondary question of course is whether passing around the
>> pointers isn't really cheaper than the obtaining of 'current'.
> 
> Well, while there's indeed a performance aspect here, I think
> using current is much clearer than passing a vcpu around. I could
> rename the parameter to curr or some such, but not passing a vcpu
> parameter at all seems clearest to me.

Personally I'd prefer "curr" named function parameters. But if
Andrew or Wei agree with your approach, I'm not going to object.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:13:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:13:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6247.16642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSL3R-0000dH-0Q; Tue, 13 Oct 2020 14:13:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6247.16642; Tue, 13 Oct 2020 14:13:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSL3Q-0000dA-Ta; Tue, 13 Oct 2020 14:13:28 +0000
Received: by outflank-mailman (input) for mailman id 6247;
 Tue, 13 Oct 2020 14:13:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Dmgz=DU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSL3P-0000ch-Fk
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:13:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a3830f8-3d91-41ec-93b7-454e54637145;
 Tue, 13 Oct 2020 14:13:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D3503AB91;
 Tue, 13 Oct 2020 14:13:25 +0000 (UTC)
X-Inumbo-ID: 1a3830f8-3d91-41ec-93b7-454e54637145
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602598406;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mEgVzHXMYRaJZjS7Ul7xP+hcSYdshpyzSRY18kHJAQQ=;
	b=PsTE0iwKU0qkFL7dfyJ2thcubtoiGkghD3lsZjzbCudgOi8bYDPoEDTGNXq96JWZdKiMGG
	YXikZ1f2OEuK2OFJKSugRIupmvSDYApPBeD91ppER9WSwyWYD/T0Ak9oAmlczPJkL8X3K6
	5MHl2uB5UfacQLoitxHcRzR+PuiV95g=
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-3-jgross@suse.com>
 <3a15ba70-c6b1-dd07-12fe-f8d7a1e6c4d9@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <68aea3f2-21ef-8fbf-e1ad-c404e69a8b8e@suse.com>
Date: Tue, 13 Oct 2020 16:13:25 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <3a15ba70-c6b1-dd07-12fe-f8d7a1e6c4d9@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.20 16:02, Jan Beulich wrote:
> On 12.10.2020 11:27, Juergen Gross wrote:
>> Currently the lock for a single event channel needs to be taken with
>> interrupts off, which causes deadlocks in some cases.
>>
>> Rework the per event channel lock to be non-blocking for the case of
>> sending an event, and remove the need to disable interrupts when
>> taking the lock.
>>
>> The lock is needed to avoid races between sending an event or
>> querying the channel's state and removal of the event channel.
>>
>> Use a locking scheme similar to a rwlock, but with some modifications:
>>
>> - sending an event or querying the event channel's state uses an
>>    operation similar to read_trylock(), in case of not obtaining the
>>    lock the sending is omitted or a default state is returned
> 
> And how come omitting the send or returning default state is valid?

This is explained in the part of the commit message you didn't cite:

With this locking scheme it is mandatory that a writer will always
either start with an unbound or free event channel or will end with
an unbound or free event channel, as otherwise the reaction of a reader
not getting the lock would be wrong.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:20:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6251.16654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSL9p-0001T8-Q8; Tue, 13 Oct 2020 14:20:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6251.16654; Tue, 13 Oct 2020 14:20:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSL9p-0001T1-L6; Tue, 13 Oct 2020 14:20:05 +0000
Received: by outflank-mailman (input) for mailman id 6251;
 Tue, 13 Oct 2020 14:20:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Dmgz=DU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSL9o-0001IO-QQ
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:20:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7afde62f-5295-4950-b80f-2a41db215f08;
 Tue, 13 Oct 2020 14:20:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6FAF2ACBA;
 Tue, 13 Oct 2020 14:20:02 +0000 (UTC)
X-Inumbo-ID: 7afde62f-5295-4950-b80f-2a41db215f08
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602598802;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=D4nTWGEmb2KFqdqVBpRUOQww36Is/9stM1L4kjlUj9w=;
	b=T8bfPcdqJPQzK4fQZ85f3XyDqg4VPoSec8Vs2TpICcGN9gnu5A/jKDNb8ErFFIK9aWD0wT
	Ejr+cIb/XSP9oKRu7mSkCb/pteBDNQSOPyz2xwsNUKfxAFJJla4hN8UxDPBWbA/KvcWYOE
	7KiN7+vkwyx8QbjCPGUlaEflLahaP88=
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
Date: Tue, 13 Oct 2020 16:20:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.20 15:58, Jan Beulich wrote:
> On 12.10.2020 11:27, Juergen Gross wrote:
>> The queue for a fifo event depends on the vcpu_id and the
>> priority of the event. When sending an event it may happen that the
>> event needs to change queues, and the old queue needs to be kept
>> in order to keep the links between queue elements intact. For this
>> purpose the event channel contains last_priority and last_vcpu_id
>> fields for identifying the old queue.
>>
>> In order to avoid races, always access last_priority and last_vcpu_id
>> with a single atomic operation, avoiding any inconsistencies.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> I seem to vaguely recall that at the time this seemingly racy
> access was done on purpose by David. Did you go look at the
> old commits to understand whether there really is a race which
> can't be tolerated within the spec?

At least the comments in the code tell us that the race regarding
the writing of priority (not last_priority) is acceptable.

Julien especially was rather worried by the current situation. If
you can convince him that the current handling is fine, we can
easily drop this patch.

> 
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -114,8 +114,7 @@ struct evtchn
>>           u16 virq;      /* state == ECS_VIRQ */
>>       } u;
>>       u8 priority;
>> -    u8 last_priority;
>> -    u16 last_vcpu_id;
>> +    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
> 
> This grows struct evtchn's size on at least 32-bit Arm. I'd
> like to suggest including "priority" in the union, and call the
> new field simply "fifo" or some such.

This will add quite some complexity as suddenly all writes to the
union will need to be made through a cmpxchg() scheme.

Regarding the growth: struct evtchn is aligned to 64 bytes. So there
is no growth at all, as the size will not be larger than those 64
bytes.
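The single-atomic-access scheme under discussion can be sketched roughly as
follows; the field layout, helper names, and use of C11 atomics are
illustrative assumptions, not Xen's actual encoding of fifo_lastq:

```c
#include <stdatomic.h>
#include <stdint.h>

/* Pack last_priority (low 8 bits) and last_vcpu_id (next 16 bits) into
 * one 32-bit word, so both can be read or written with one atomic op. */
#define LASTQ_PRIORITY_MASK  0xffu
#define LASTQ_VCPU_SHIFT     8

static inline uint32_t lastq_pack(uint8_t priority, uint16_t vcpu_id)
{
    return (uint32_t)priority | ((uint32_t)vcpu_id << LASTQ_VCPU_SHIFT);
}

static inline uint8_t lastq_priority(uint32_t lastq)
{
    return lastq & LASTQ_PRIORITY_MASK;
}

static inline uint16_t lastq_vcpu(uint32_t lastq)
{
    return (lastq >> LASTQ_VCPU_SHIFT) & 0xffffu;
}

/* Both fields are updated in one atomic store, so a concurrent reader
 * can never see a priority belonging to one queue paired with the
 * vcpu_id of another. */
static void lastq_update(_Atomic uint32_t *lastq, uint8_t prio,
                         uint16_t vcpu)
{
    atomic_store(lastq, lastq_pack(prio, vcpu));
}
```

Because both values live in one word, no cmpxchg() loop is needed for a
plain update of the pair, which is what makes this cheaper than folding
further fields into the same union.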


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:26:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:26:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6254.16666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLFv-0001na-G2; Tue, 13 Oct 2020 14:26:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6254.16666; Tue, 13 Oct 2020 14:26:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLFv-0001nT-BM; Tue, 13 Oct 2020 14:26:23 +0000
Received: by outflank-mailman (input) for mailman id 6254;
 Tue, 13 Oct 2020 14:26:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSLFt-0001nO-CR
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:26:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 655d70f9-c66a-487a-aa8d-ada1037ddda6;
 Tue, 13 Oct 2020 14:26:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A038AAC7D;
 Tue, 13 Oct 2020 14:26:18 +0000 (UTC)
X-Inumbo-ID: 655d70f9-c66a-487a-aa8d-ada1037ddda6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602599178;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RZcX6dU+VQOJj1Sh5V7A54bk1gbLeruICYDTGZZO0o8=;
	b=k6gYdhVea2Vqy2kNjhtoS1P34tC4dvQiX2FiK4YHvDpcMkVPolvwsp5mnHlbtuPsFOlahZ
	w1c7tmlHvTO00nKFmaylkQIiGQdg1ojb4ZjSpUy7ylxPnOovCU04nT/BLtnSrGSS5FP+Cb
	F9I6vDeKYxjBp5vLtcB4Y1DOqq2x1Ek=
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
Date: Tue, 13 Oct 2020 16:26:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.10.2020 16:20, Jürgen Groß wrote:
> On 13.10.20 15:58, Jan Beulich wrote:
>> On 12.10.2020 11:27, Juergen Gross wrote:
>>> The queue for a fifo event depends on the vcpu_id and the
>>> priority of the event. When sending an event it may happen that the
>>> event needs to change queues, and the old queue needs to be kept
>>> in order to keep the links between queue elements intact. For this
>>> purpose the event channel contains last_priority and last_vcpu_id
>>> fields for identifying the old queue.
>>>
>>> In order to avoid races, always access last_priority and last_vcpu_id
>>> with a single atomic operation, avoiding any inconsistencies.
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> I seem to vaguely recall that at the time this seemingly racy
>> access was done on purpose by David. Did you go look at the
>> old commits to understand whether there really is a race which
>> can't be tolerated within the spec?
> 
> At least the comments in the code tell us that the race regarding
> the writing of priority (not last_priority) is acceptable.

Ah, then it was comments. I knew I read something to this effect
somewhere, recently.

> Julien especially was rather worried by the current situation. If
> you can convince him that the current handling is fine, we can
> easily drop this patch.

Julien, in the light of the above - can you clarify the specific
concerns you (still) have?

>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -114,8 +114,7 @@ struct evtchn
>>>           u16 virq;      /* state == ECS_VIRQ */
>>>       } u;
>>>       u8 priority;
>>> -    u8 last_priority;
>>> -    u16 last_vcpu_id;
>>> +    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
>>
>> This grows struct evtchn's size on at least 32-bit Arm. I'd
>> like to suggest including "priority" in the union, and call the
>> new field simply "fifo" or some such.
> 
> This will add quite some complexity as suddenly all writes to the
> union will need to be made through a cmpxchg() scheme.
> 
> Regarding the growth: struct evtchn is aligned to 64 bytes. So there
> is no growth at all, as the size will not be larger than those 64
> bytes.

Oh, I didn't spot this attribute, which I consider at least
suspicious. Without XSM I'm getting the impression that on 32-bit
Arm the structure's size would be 32 bytes or less without it, so
it looks as if it shouldn't be there unconditionally.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:31:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:31:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6256.16677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLKw-0002eN-2C; Tue, 13 Oct 2020 14:31:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6256.16677; Tue, 13 Oct 2020 14:31:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLKv-0002eG-VM; Tue, 13 Oct 2020 14:31:33 +0000
Received: by outflank-mailman (input) for mailman id 6256;
 Tue, 13 Oct 2020 14:31:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=COa4=DU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kSLKu-0002eB-Ve
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:31:33 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b06388ef-72c6-41c1-8d04-d86572adeab9;
 Tue, 13 Oct 2020 14:31:31 +0000 (UTC)
X-Inumbo-ID: b06388ef-72c6-41c1-8d04-d86572adeab9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602599491;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=iDo8FB4WmMtlZfau0epF9uABoNlYLbVDVOagXTbSFWw=;
  b=ZnoGxfOFCB7IPx1An806voamNE2FSoVHbnGwJ4YIvPajfqMxTvAQRXsZ
   1k53takEcsHkdszKsX0i+/nphHekNh4AKL4rAvAFpgU1OS3a6jUhPxpYh
   jYYp5+lfqKP8ew4FEHH51cZl2c87AxXZuq4gzTJBTEb+ZV5Y7DkqG0EKo
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: rJTEUAUpSIF3w8Y+dF/haT23n5jHUbv36bMw98Rv2kxrfE55nZe7EQmOP7dIwmdEyKNWMLA8n4
 Ssvw/Wr4mL0gLeiXzlXjGj2V2ADTLrZSbk7E7wG9EYuWJfbHrPkyrQkaqCrxvGAXgL+JlFvy2W
 H1pbU7FaazmR/KX/lDwSwmic6Qt3RFPyeEt13gasVSx1X569AXIBJiA88u2BFEznNQN3MvrN3z
 ZIPVxeuNsM6xoEIRyTO0geMBXwyZA5AfGY1Pffw/KKlNWmlJPApHhWRPMwxbmcUAaf7tJOgcI+
 rr8=
X-SBRS: 2.5
X-MesageID: 28897343
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,371,1596513600"; 
   d="scan'208";a="28897343"
Date: Tue, 13 Oct 2020 16:30:28 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 03/11] x86/vlapic: introduce an EOI callback mechanism
Message-ID: <20201013143028.GQ19254@Air-de-Roger>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-4-roger.pau@citrix.com>
 <a6863a90-584a-af21-4a0a-1b104b750978@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <a6863a90-584a-af21-4a0a-1b104b750978@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Fri, Oct 02, 2020 at 11:39:50AM +0200, Jan Beulich wrote:
> On 30.09.2020 12:41, Roger Pau Monne wrote:
> > Add a new vlapic_set_irq_callback helper in order to inject a vector
> > and set a callback to be executed when the guest performs the end of
> > interrupt acknowledgment.
> 
> On v1 I did ask
> 
> "One thing I don't understand at all for now is how these
>  callbacks are going to be re-instated after migration for
>  not-yet-EOIed interrupts."
> 
> Afaics I didn't get an answer on this.

Oh sorry, I remember your comment and I've changed further patches to
address this.

The setter of the callback will be in charge of setting the callback
again on resume. That's why vlapic_set_callback is not a static
function, and is added to the header.

Patch 5/11 [0] contains an example of how the vIO-APIC resets the
callbacks on load.
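The re-registration flow described here can be sketched roughly as follows;
the structure and function names are simplified stand-ins, not the actual
Xen vIO-APIC code:

```c
#include <stdbool.h>
#include <stddef.h>

#define NR_VECTORS 256

typedef void eoi_cb_t(unsigned int vec, void *data);

/* Simplified stand-in for the vlapic callback state. */
struct vlapic_sketch {
    eoi_cb_t *callback[NR_VECTORS];
    void *data[NR_VECTORS];
    bool pending[NR_VECTORS];   /* vectors injected but not yet EOIed */
};

static void set_callback(struct vlapic_sketch *v, unsigned int vec,
                         eoi_cb_t *cb, void *cb_data)
{
    v->callback[vec] = cb;
    v->data[vec] = cb_data;
}

/* Called from the device model's state-load path: callbacks are not
 * part of the migration stream, so the owner re-installs its callback
 * for every vector it knows is still pending. */
static void reregister_on_load(struct vlapic_sketch *v, eoi_cb_t *cb,
                               void *cb_data)
{
    for ( unsigned int vec = 0; vec < NR_VECTORS; vec++ )
        if ( v->pending[vec] )
            set_callback(v, vec, cb, cb_data);
}

static void noop_eoi(unsigned int vec, void *data)
{
    (void)vec;
    (void)data;
}
```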

> 
> > ---
> > RFC: should callbacks also be executed in vlapic_do_init (which is
> > called by vlapic_reset). We would need to make sure ISR and IRR
> > are cleared using some kind of test and clear atomic functionality to
> > make this race free.
> 
> I guess this can't be decided at this point of the series, as it
> may depend on what exactly the callbacks mean to do. It may even
> be that whether a callback wants to do something depends on
> whether it gets called "normally" or from vlapic_do_init().

Well, let's try to make some progress on the other questions and we
will eventually get here, I think.

> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -144,7 +144,32 @@ bool vlapic_test_irq(const struct vlapic *vlapic, uint8_t vec)
> >      return vlapic_test_vector(vec, &vlapic->regs->data[APIC_IRR]);
> >  }
> >  
> > -void vlapic_set_irq(struct vlapic *vlapic, uint8_t vec, uint8_t trig)
> > +void vlapic_set_callback(struct vlapic *vlapic, unsigned int vec,
> > +                         vlapic_eoi_callback_t *callback, void *data)
> > +{
> > +    unsigned long flags;
> > +    unsigned int index = vec - 16;
> > +
> > +    if ( !callback || vec < 16 || vec >= X86_NR_VECTORS )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return;
> > +    }
> > +
> > +    spin_lock_irqsave(&vlapic->callback_lock, flags);
> > +    if ( vlapic->callbacks[index].callback &&
> > +         vlapic->callbacks[index].callback != callback )
> > +        printk(XENLOG_G_WARNING
> > +               "%pv overriding vector %#x callback %ps (%p) with %ps (%p)\n",
> > +               vlapic_vcpu(vlapic), vec, vlapic->callbacks[index].callback,
> > +               vlapic->callbacks[index].callback, callback, callback);
> > +    vlapic->callbacks[index].callback = callback;
> > +    vlapic->callbacks[index].data = data;
> 
> Should "data" perhaps also be compared in the override check above?

Could do, there might indeed be cases where the callback is the same
but the data has changed, and that would be interesting to log.
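The refined override check being discussed, treating a change of only the
private data as an override worth logging, might look like this in a
simplified, non-Xen sketch:

```c
#include <stdbool.h>
#include <stddef.h>

typedef void eoi_cb_t(unsigned int vec, void *data);

struct cb_slot {
    eoi_cb_t *callback;
    void *data;
};

/* Warn on override when a registration exists and either the function
 * pointer or its private data differs from what is being installed. */
static bool override_would_warn(const struct cb_slot *s,
                                eoi_cb_t *cb, void *cb_data)
{
    return s->callback != NULL &&
           (s->callback != cb || s->data != cb_data);
}

static void cb_a(unsigned int vec, void *data) { (void)vec; (void)data; }
static void cb_b(unsigned int vec, void *data) { (void)vec; (void)data; }
```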

Thanks, Roger.

[0] https://lore.kernel.org/xen-devel/20200930104108.35969-6-roger.pau@citrix.com/


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:31:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:31:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6257.16690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLLK-0002jd-C4; Tue, 13 Oct 2020 14:31:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6257.16690; Tue, 13 Oct 2020 14:31:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLLK-0002jW-8d; Tue, 13 Oct 2020 14:31:58 +0000
Received: by outflank-mailman (input) for mailman id 6257;
 Tue, 13 Oct 2020 14:31:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSLLJ-0002jP-Kz
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:31:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 011e8717-d21f-453a-8ec8-7cd02ae35aab;
 Tue, 13 Oct 2020 14:31:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B428FAB91;
 Tue, 13 Oct 2020 14:31:55 +0000 (UTC)
X-Inumbo-ID: 011e8717-d21f-453a-8ec8-7cd02ae35aab
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602599515;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FMVJuBs6YeKW3muduYJm9bMrd8qLYubVgrkpupMRjxY=;
	b=WoTQbBZn7bqbcNBQCIgKAYu6Xp/RFHy+RvxjfU4lYuhgstzxNdSuyoDdoAGIq21gtwCpmr
	pz0zPLvoJV4LgdTMXjiceKonnKtYwz4wVwcccVP9lSMqh+yT4sHuouzPNHjw9JKsInwUz/
	P+D5Dr9AsH4qVjkSfTJJ1Bkrv1EXYak=
Subject: Re: [PATCH] x86/smpboot: Unconditionally call
 memguard_unguard_stack() in cpu_smpboot_free()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201005122325.17395-1-andrew.cooper3@citrix.com>
 <36d3443d-50dd-5163-ddac-973421f390e0@suse.com>
 <d5c19b39-0413-d61d-3e1f-c35dd19b4287@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d3a21823-c972-eeb1-ee12-368684662f7c@suse.com>
Date: Tue, 13 Oct 2020 16:31:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <d5c19b39-0413-d61d-3e1f-c35dd19b4287@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.2020 15:23, Andrew Cooper wrote:
> On 13/10/2020 14:16, Jan Beulich wrote:
>> On 05.10.2020 14:23, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/smpboot.c
>>> +++ b/xen/arch/x86/smpboot.c
>>> @@ -971,16 +971,16 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
>>>      if ( IS_ENABLED(CONFIG_PV32) )
>>>          FREE_XENHEAP_PAGE(per_cpu(compat_gdt, cpu));
>>>  
>>> +    if ( stack_base[cpu] )
>>> +        memguard_unguard_stack(stack_base[cpu]);
>>> +
>>>      if ( remove )
>>>      {
>>>          FREE_XENHEAP_PAGE(per_cpu(gdt, cpu));
>>>          FREE_XENHEAP_PAGE(idt_tables[cpu]);
>>>  
>>>          if ( stack_base[cpu] )
>>> -        {
>>> -            memguard_unguard_stack(stack_base[cpu]);
>>>              FREE_XENHEAP_PAGES(stack_base[cpu], STACK_ORDER);
>>> -        }
>>>      }
>>>  }
>> In my initial reply to Marek's report I did suggest putting the fix
>> in the alloc path in order to keep the pages "guarded" while the CPU
>> is parked
> 
> In which case you should have identified that bug explicitly.
> 
> Because I can't read your mind while fixing the other real bugs in your
> suggestion.

I'm sorry for the brevity at that point - it was a Sunday, and I merely
thought I'd write down my observation after reading the report. And of
course I'm curious as to the other real bugs in my suggestion (when I
anyway said "something like").

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 14:47:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 14:47:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6262.16702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLaV-0003ok-Ry; Tue, 13 Oct 2020 14:47:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6262.16702; Tue, 13 Oct 2020 14:47:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLaV-0003od-Or; Tue, 13 Oct 2020 14:47:39 +0000
Received: by outflank-mailman (input) for mailman id 6262;
 Tue, 13 Oct 2020 14:47:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=COa4=DU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kSLaU-0003oY-BJ
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 14:47:38 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0f0d23f-4518-4c98-bd68-f2d6c5e8025c;
 Tue, 13 Oct 2020 14:47:36 +0000 (UTC)
X-Inumbo-ID: d0f0d23f-4518-4c98-bd68-f2d6c5e8025c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602600456;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=phnTqfecWLx/M2Er41CQxAFLuKwYNWrujpPQF3vL8JU=;
  b=X0/d7OajK1SUYHfdgYFETk92OPz49tVKyy4SfO07U1C3N2CvQlTpYLLa
   BOWFwa5KKMCUlphJdJ0Rc92vtq+sEAAKmf4Gm43QX8pvsUFhd0XzfDKrL
   I29YupDVbOdEjigkYo3SXfvaRO0m5mabtTcAMY4OmtMTfSvSJKWol+rLx
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 10Nz0ied6Tsvv3gXfKyE6uOmuHA9pU8PUa7lLNmsA/lsMeFrFlyveLmtpFKDybfox3wU/fg64x
 em/c3LWdVEaD8DS5MXakY2HAfEudsYHvFz208qyQORpaKc/ASbTPaa+/S+c2iozBMoAdh6gVC2
 hcVNBi6Ui2v7bTo+DjWBgCJJiMHBuqTQysHc0T4B50CMWP3CLCn2923h4+COSPLk2KNLAkbOVH
 ARBmESJUzrxa6gUhBhSXC+TT2WekIAOE3BLXObcxiI7T/vWUM+/I1IAAq9J9qW1df13OBJx52g
 zOs=
X-SBRS: 2.5
X-MesageID: 28875181
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,371,1596513600"; 
   d="scan'208";a="28875181"
Date: Tue, 13 Oct 2020 16:47:24 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Paul Durrant
	<paul@xen.org>
Subject: Re: [PATCH v2 04/11] x86/vmsi: use the newly introduced EOI callbacks
Message-ID: <20201013144724.GR19254@Air-de-Roger>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-5-roger.pau@citrix.com>
 <785f80d6-3a0a-6a58-fd9a-05d8ff87f6fe@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <785f80d6-3a0a-6a58-fd9a-05d8ff87f6fe@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Fri, Oct 02, 2020 at 05:25:34PM +0200, Jan Beulich wrote:
> On 30.09.2020 12:41, Roger Pau Monne wrote:
> > Remove the unconditional call to hvm_dpci_msi_eoi in vlapic_handle_EOI
> > and instead use the newly introduced EOI callback mechanism in order
> > to register a callback for MSI vectors injected from passed through
> > devices.
> 
> What I'm kind of missing here is a word on why this is an improvement:
> After all ...
> 
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -496,8 +496,6 @@ void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> >      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> >          vioapic_update_EOI(vector);
> >  
> > -    hvm_dpci_msi_eoi(vector);
> 
> ... you're exchanging this direct call for a more complex model with
> an indirect one (to the same function).

Sure. But this direct call will be made for each vlapic EOI, while my
added callback will only be executed if the vector was injected by
the vmsi code, and hence the change removes pointless calls to
hvm_dpci_msi_eoi.

It's IMO not feasible to keep adding hardcoded calls in
vlapic_handle_EOI for each possible subsystem or emulated device that
wants to be notified of EOIs, hence we need some kind of generic
framework to achieve this.

> > @@ -119,7 +126,8 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
> >  
> >      ASSERT(pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI);
> >  
> > -    vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
> > +    vmsi_deliver_callback(d, vector, dest, dest_mode, delivery_mode, trig_mode,
> > +                          hvm_dpci_msi_eoi, NULL);
> >  }
> 
> While I agree with your reply to Paul regarding Dom0, I still think
> the entire if() in hvm_dpci_msi_eoi() should be converted into a
> conditional here. There's no point registering the callback if it's
> not going to do anything.
> 
> However, looking further, the "!hvm_domain_irq(d)->dpci &&
> !is_hardware_domain(d)" can be simply dropped altogether, right away.
> It's now fulfilled by the identical check at the top of
> hvm_dirq_assist(), thus guarding the sole call site of this function.
> 
> The !is_iommu_enabled(d) is slightly more involved to prove, but it
> should also be possible to simply drop. What might help here is a
> separate change to suppress opening of HVM_DPCI_SOFTIRQ when there's
> no IOMMU in the system, as then it becomes obvious that this part of
> the condition is guaranteed by hvm_do_IRQ_dpci(), being the only
> site where the softirq can get raised (apart from the softirq
> handler itself).
> 
> To sum up - the call above can probably stay as is, but the callback
> can be simplified as a result of the change.

Yes, I agree. Would you be fine with converting the check in the
callback into an assert, or would you rather have it removed
completely?

> > --- a/xen/drivers/passthrough/io.c
> > +++ b/xen/drivers/passthrough/io.c
> > @@ -874,7 +874,7 @@ static int _hvm_dpci_msi_eoi(struct domain *d,
> >      return 0;
> >  }
> >  
> > -void hvm_dpci_msi_eoi(unsigned int vector)
> > +void hvm_dpci_msi_eoi(unsigned int vector, void *data)
> >  {
> >      struct domain *d = current->domain;
> 
> Instead of passing NULL for data and latching d from current, how
> about you make the registration pass d to more easily use here?

Yes, I think that's fine - we already have the domain pointer in
vmsi_deliver_callback so it could be passed as the callback private
data.
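The suggestion here, passing the domain as the callback's private data
instead of latching current->domain inside the callback, could be sketched
like this; all types and names are illustrative, not Xen's actual ones:

```c
#include <stddef.h>

struct domain_sketch { unsigned int domain_id; };

typedef void eoi_cb_t(unsigned int vec, void *data);

struct msi_cb {
    eoi_cb_t *fn;
    void *data;
};

/* Registration captures the domain pointer, so the EOI path does not
 * need to read current->domain. */
static void register_msi_eoi(struct msi_cb *slot, eoi_cb_t *fn,
                             struct domain_sketch *d)
{
    slot->fn = fn;
    slot->data = d;
}

static unsigned int last_eoi_domid;

static void dpci_msi_eoi_sketch(unsigned int vec, void *data)
{
    /* Domain comes from the registration, not from current. */
    struct domain_sketch *d = data;

    (void)vec;
    last_eoi_domid = d->domain_id;
}

static void fire_eoi(const struct msi_cb *slot, unsigned int vec)
{
    if ( slot->fn )
        slot->fn(vec, slot->data);
}
```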

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:05:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:05:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6264.16714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLrP-0005b6-Dc; Tue, 13 Oct 2020 15:05:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6264.16714; Tue, 13 Oct 2020 15:05:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSLrP-0005az-9Z; Tue, 13 Oct 2020 15:05:07 +0000
Received: by outflank-mailman (input) for mailman id 6264;
 Tue, 13 Oct 2020 15:05:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CdNA=DU=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSLrO-0005au-4k
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:05:06 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.77]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5be091c8-e712-47c5-99b1-fe827e1f074e;
 Tue, 13 Oct 2020 15:05:04 +0000 (UTC)
Received: from AM5PR1001CA0047.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:206:15::24) by VI1PR08MB2638.eurprd08.prod.outlook.com
 (2603:10a6:802:1f::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.26; Tue, 13 Oct
 2020 15:05:01 +0000
Received: from AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:15:cafe::d2) by AM5PR1001CA0047.outlook.office365.com
 (2603:10a6:206:15::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20 via Frontend
 Transport; Tue, 13 Oct 2020 15:05:01 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT041.mail.protection.outlook.com (10.152.17.186) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3455.23 via Frontend Transport; Tue, 13 Oct 2020 15:05:01 +0000
Received: ("Tessian outbound 7161e0c2a082:v64");
 Tue, 13 Oct 2020 15:05:01 +0000
Received: from f6558cdb2a45.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0DCE99D7-C113-4D13-BD88-AC7F6B3D6EF7.1; 
 Tue, 13 Oct 2020 15:04:54 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f6558cdb2a45.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 13 Oct 2020 15:04:54 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2294.eurprd08.prod.outlook.com (2603:10a6:4:85::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.22; Tue, 13 Oct
 2020 15:04:52 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3455.031; Tue, 13 Oct 2020
 15:04:52 +0000
X-Inumbo-ID: 5be091c8-e712-47c5-99b1-fe827e1f074e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BKVBFY/fJAo4yBB6RO4M6FqSawyOCSqPYmhV6hs/NM4=;
 b=1SWJGBoELitopM8HGnlXYG0xB43hPvKligNp7Q2itohoSSYFH2sSVNoCVuOBqKv8BBs53fWW2nZ0+2NJ7hLKL6yzL5T+FQYLX4lr0Invd/BwVip7pnp1hVxxwqXKo9sk/AHuGj9KuEaxXy7qYLkyW03qNZ+NrR7/ckvpEq25XOA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 116cb439551fccfd
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dihJcuMHmjQxAVERod5fKoXuM+zSlnCms6AG2dshFaa99cp4JNYCQlJ4YVigKSx4jHnSHJ4O9LkOpHW6K52XZumlY1qtNsMrFxpOymigin8xrVpmhqaQAGxM5pX3yR6zwBm+EAhnsSls86ynipHIh523DdveuAaPuW2uOPjIY9x8aaXpVKusN0hjVe0EuiPHqKBy+2RoPuQsYNwO4aFEWMiU85BWZen2VjrupVlquUyjB0snkPoQP1z3/G3KTLA+z/L21ZAp8rwRvOlukTOtopkfPCIFXuM5ZjMi//22+Lot2NqXS/bQl+Cco69hpY/naqMyZEfAOEo2thXQANA3Hw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BKVBFY/fJAo4yBB6RO4M6FqSawyOCSqPYmhV6hs/NM4=;
 b=Nd59fCKHo6vVrIUEpnGf7E0LwXgKhQ+W4TFntIgQ8e6zvSmQSnnFAptTyoGRvU8+eMFsr2H7hZrsdcQMHrcRS6soMiJuWgFLN6rO2osPy67eeWjZARvBW7bKn/bQyQAs28FoTm05vfGtvIjYMAWl4GGk1gpIuNgJTWa10dlzifFTmcmjAPtyWFJVUjiPPytPyf+62vB+v4oxDbxGIdibaJ+xUZKHIbj1onOXtMqebl9BNlGT1kw83W2Zc2DlKucbl0oTgDh/PtLSisXi5juguJLnxNZ2W0hP0SEx+ehikhwpicLh3hm0tvQPUbS8FZ/6B5jY10OJOSi+kkFd+aqYeQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Doug Goldstein
	<cardoe@cardoe.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: Re: [PATCH v2] build: always use BASEDIR for xen sub-directory
Thread-Topic: [PATCH v2] build: always use BASEDIR for xen sub-directory
Thread-Index: AQHWnLpvdle2+5+TPkS9pkZxhCxslKmVmiUAgAAQzIA=
Date: Tue, 13 Oct 2020 15:04:52 +0000
Message-ID: <A82D6948-1A32-45EA-B8CC-E3F0FBCEAF1F@arm.com>
References:
 <df2fc83d3a84dd3fc2e58101ded22847fdbaa862.1602082503.git.bertrand.marquis@arm.com>
 <f492da43-4fd2-b798-7bb3-3810be5f4893@suse.com>
In-Reply-To: <f492da43-4fd2-b798-7bb3-3810be5f4893@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: db9b9dfd-ceb3-474d-e7ee-08d86f895bb9
x-ms-traffictypediagnostic: DB6PR0802MB2294:|VI1PR08MB2638:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB263857802364BE9921D01CB99D040@VI1PR08MB2638.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 3+1pfq0jxBSX/6XMvjVr/DeCnjrjhWz3c6EJVp1K5cD1mqt1Bb4IHkaBTLYH5HlMWy2qXfxRFsrJyv5JPnbYTfq1leHecFJpsc9K3bheQwouCLGK+n8Ses3lzFHur7u0Se7CEDrz9AEujEnbnjxxZ8INJygBhrYjOjiGOWU9M3GyVOYMgC3AFFfK30KSjaBiYgKNEyGV/mZ56E3gWXbuVWO221U+w+3k6Q0nHO57GOH4AfeEWDUvz+JiZSB+R9Zu8ItvBWjclANmjlVWkUyngICrS18v1r1FZYY61oe1fWxOzDDJ9tAB9B0s+DR2/MjzcWGkVUT6Rui8P2WzpGupeQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(366004)(346002)(136003)(39860400002)(396003)(8676002)(66946007)(91956017)(86362001)(76116006)(186003)(36756003)(8936002)(53546011)(6506007)(316002)(2616005)(64756008)(66556008)(66476007)(26005)(5660300002)(4744005)(2906002)(4326008)(6916009)(6512007)(71200400001)(54906003)(66446008)(7416002)(33656002)(6486002)(478600001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 6PY950/9JM+tmXuZ2GCfPkFJlFWQ6JJhIsq12hFbAP5ri6bbXyVdjlWB/BqaiQDA5U5acoLLcIA8mqkt3BoYKdDd/q2DPRSOD7bqzvGxVfh7SogHM3Ews1jrqX+RtVDJlqz2WNTwlcR0xLO2k/nrKqbvfc/nhqQe9Ze7rzXb9zvU9009QvElCtx8rQoUIpGpJQNQV0Fr6yjmgUfnxbx62i/V7yN8rpgSGOAdcPtpt0C3GLzNR7TXRupxe/gexA0c8o8qPRA3AB/otuySBJlejKCm2YX+m7Ytd/EBZlgGaX8pUkGNYmQE57deGCtTdODbiWDEXSEXjFKxUQKAHXSIaSpg0Zur0DRgaviTl/e97idgZ8u+tiQsGIeLGyzojDPgDpLQw2xxMWzJxWGipm5OsK7iXmDhKYOW680lrCO/6XoKoSX2teXEvZ6+4VRetuAUJYsRjNBSKV3kan6yDvBlkjn6umFNTnluyXPW+FezoZGXfVfSpa+Vj6xRcOyDRKMAhj6BDBzTzWHXotSAzcR4C6hgHL0f/QCJG7rgyn2xPS03YmdPmWdJoh1fEGjE2z8tpHCYKjovgd/2cHG8lJ2KzgIOVTIWWpGCWgbmXfAtoOJCTdmLIgXHJk4m0G/vmS/zVmSKcyJbIvx04rpD+ih0GQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <AD3A460FC5803F4D802977E4C866F7B8@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2294
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c4b1bb70-e0fb-4e31-6593-08d86f895670
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/WPLmXVkFf/wscSH0FzIDF3+zZwzKKvzM4zGknHuCEfZU/owEv+U7unNu4v4C+a++lPPLCWYKM7SqMRpqyhaWmsuXT6sB/nLFNeTogpCusgRwppleBTJRwO35bjtXt5zXqL+gB3k0t3ZUMpcbKEkqKLNQE+KVpYn64bMZtOpqOOvN46TOGcbTvCOhREC6ZWJztoFc9Y+KI8vG3cQdgQD5jypeGHNvqy2TO2p8uFRgnLL0WREtCSO0AWP2CblmcdUtxx87kfk54JT6e+euMy/CnjnJUY9mHiH67Oas30OFscv3MPlsdO/MxMFT1KUEfGNxW8GTS+KjqyH9cB4WNURClpaDNcEpewgObgDAgLs0617p1kvbtjuHgJS5a7/ZPjq+Ld8Bw3sxExX+wcJBGNiSw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(39860400002)(376002)(46966005)(82310400003)(478600001)(2906002)(70206006)(4326008)(36756003)(336012)(53546011)(70586007)(6506007)(6486002)(2616005)(81166007)(54906003)(356005)(33656002)(4744005)(26005)(47076004)(82740400003)(186003)(8676002)(8936002)(36906005)(316002)(6512007)(5660300002)(86362001)(6862004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 Oct 2020 15:05:01.2418
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: db9b9dfd-ceb3-474d-e7ee-08d86f895bb9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2638



> On 13 Oct 2020, at 15:04, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 07.10.2020 16:57, Bertrand Marquis wrote:
>> Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
>> 
>> This removes the dependency on the xen subdirectory, preventing a
>> wrong configuration file from being used when the xen subdirectory
>> is duplicated for compilation tests.
>> 
>> BASEDIR is set in xen/lib/x86/Makefile as this Makefile is called
>> directly from the tools build and install process, where BASEDIR is
>> not otherwise set.
>> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> And once again
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 

And thanks :-)

Bertrand



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:28:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:28:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6281.16730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMDx-0007Su-C0; Tue, 13 Oct 2020 15:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6281.16730; Tue, 13 Oct 2020 15:28:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMDx-0007Sn-8Y; Tue, 13 Oct 2020 15:28:25 +0000
Received: by outflank-mailman (input) for mailman id 6281;
 Tue, 13 Oct 2020 15:28:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSMDv-0007Si-Gk
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:28:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5f9f0a0-8c18-42a8-bae8-80c0230ba15d;
 Tue, 13 Oct 2020 15:28:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C3564AFC9;
 Tue, 13 Oct 2020 15:28:21 +0000 (UTC)
X-Inumbo-ID: d5f9f0a0-8c18-42a8-bae8-80c0230ba15d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602602901;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ViE2dAzMCKwgSbGe7NIdRVw6pgkRt51tc4SJExPtLds=;
	b=Jhjdxby7AfjzNhcthRP288octn2bxhLqfqz2HmJ3BM5irA59IXRnMuY+YUqx9pV5C2ChZf
	LMjukJ/sQd4D4cjBcf+FG4iKZIyGHr6hTTX4N2ysu7mUuz9Pl8LEIleRdqi+iSQua0zehU
	ojVz0EKzuh5nVfEYdJYNc4WbNty4A2I=
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <75c5328c-c061-7ddf-a34d-0cd8b93043fc@suse.com>
Date: Tue, 13 Oct 2020 17:28:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012092740.1617-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.10.2020 11:27, Juergen Gross wrote:
> @@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
>  
>      d = v->domain;
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, v->vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    if ( evtchn_tryread_lock(chn) )
> +    {
> +        evtchn_port_set_pending(d, v->vcpu_id, chn);
> +        evtchn_read_unlock(chn);
> +    }
>  
>   out:
>      spin_unlock_irqrestore(&v->virq_lock, flags);
> @@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
>          goto out;
>  
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    if ( evtchn_tryread_lock(chn) )
> +    {
> +        evtchn_port_set_pending(d, v->vcpu_id, chn);

Is this simply a copy-and-paste mistake (re-using the code from
send_guest_vcpu_virq()), or is there a reason you changed where
the vCPU to send to is obtained from (in which case this ought
to be mentioned in the description, and in which case you could
use a literal zero)?

> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>  #define bucket_from_port(d, p) \
>      ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>  
> +#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS

Isn't the ceiling on simultaneous readers the number of pCPUs,
and doesn't the value here then need to be NR_CPUS + 1 to
accommodate the maximum number of readers? Furthermore, with the
disabling of interrupts dropped, one pCPU can now acquire a read
lock more than once, when interrupting a locked region.

> +static inline void evtchn_write_lock(struct evtchn *evtchn)
> +{
> +    int val;
> +
> +    /* No barrier needed, atomic_add_return() is full barrier. */
> +    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +          val != EVENT_WRITE_LOCK_INC;
> +          val = atomic_read(&evtchn->lock) )
> +        cpu_relax();
> +}
> +
> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
> +{
> +    arch_lock_release_barrier();
> +
> +    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +}
> +
> +static inline bool evtchn_tryread_lock(struct evtchn *evtchn)

The corresponding "generic" function is read_trylock() - I'd
suggest to use the same base name, with the evtchn_ prefix.

> @@ -274,12 +312,12 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
>      if ( port_is_valid(d, port) )
>      {
>          struct evtchn *evtchn = evtchn_from_port(d, port);
> -        unsigned long flags;
>  
> -        spin_lock_irqsave(&evtchn->lock, flags);
> -        if ( evtchn_usable(evtchn) )
> +        if ( evtchn_tryread_lock(evtchn) && evtchn_usable(evtchn) )
> +        {
>              rc = evtchn_is_pending(d, evtchn);
> -        spin_unlock_irqrestore(&evtchn->lock, flags);
> +            evtchn_read_unlock(evtchn);
> +        }
>      }

This needs to be two nested if()-s, as you need to drop the lock
even when evtchn_usable() returns false.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:30:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:30:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6283.16741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMFx-0008G3-OJ; Tue, 13 Oct 2020 15:30:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6283.16741; Tue, 13 Oct 2020 15:30:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMFx-0008Fw-L9; Tue, 13 Oct 2020 15:30:29 +0000
Received: by outflank-mailman (input) for mailman id 6283;
 Tue, 13 Oct 2020 15:30:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSMFv-0008Fq-Uw
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:30:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9a37e76-fe33-475a-9e56-47d37fb2fc9c;
 Tue, 13 Oct 2020 15:30:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BDBF0AFB4;
 Tue, 13 Oct 2020 15:30:24 +0000 (UTC)
X-Inumbo-ID: a9a37e76-fe33-475a-9e56-47d37fb2fc9c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602603024;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=h8L/ye6Ivq6ZDftmmpKRBn7rh8rkbJuKib1hoHVa0yw=;
	b=ZauRg+cywz/CtoY6p4DnGOIkxdE6O62clWTxwmJwE/xCCPZNvnsfMUi6sXMzeg6LEoWU1Y
	mvl4gywfBVIYVadG/+VXlQRIzV1X7urQGtNgF/zcJ63H+EeQl/yUzdgvY58m+9JObCXbhD
	95AlmH55GzpeZS+uPM1DCLcuZ5f4cZY=
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-3-jgross@suse.com>
 <3a15ba70-c6b1-dd07-12fe-f8d7a1e6c4d9@suse.com>
 <68aea3f2-21ef-8fbf-e1ad-c404e69a8b8e@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8d076369-b21a-6bf6-13f7-36b19469d66b@suse.com>
Date: Tue, 13 Oct 2020 17:30:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <68aea3f2-21ef-8fbf-e1ad-c404e69a8b8e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.10.2020 16:13, Jürgen Groß wrote:
> On 13.10.20 16:02, Jan Beulich wrote:
>> On 12.10.2020 11:27, Juergen Gross wrote:
>>> Currently the lock for a single event channel needs to be taken with
>>> interrupts off, which causes deadlocks in some cases.
>>>
>>> Rework the per event channel lock to be non-blocking for the case of
>>> sending an event and removing the need for disabling interrupts for
>>> taking the lock.
>>>
>>> The lock is needed for avoiding races between sending an event or
>>> querying the channel's state against removal of the event channel.
>>>
>>> Use a locking scheme similar to a rwlock, but with some modifications:
>>>
>>> - sending an event or querying the event channel's state uses an
>>>    operation similar to read_trylock(), in case of not obtaining the
>>>    lock the sending is omitted or a default state is returned
>>
>> And how come omitting the send or returning default state is valid?
> 
> This is explained in the part of the commit message you didn't cite:
> 
> With this locking scheme it is mandatory that a writer will always
> either start with an unbound or free event channel or will end with
> an unbound or free event channel, as otherwise the reaction of a reader
> not getting the lock would be wrong.

Oh, I did read this latter part, but as something extra to be
aware of, not as the correctness guarantee itself. Could you make
the connection clearer?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:35:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6293.16753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMKW-0008T7-B7; Tue, 13 Oct 2020 15:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6293.16753; Tue, 13 Oct 2020 15:35:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMKW-0008T0-83; Tue, 13 Oct 2020 15:35:12 +0000
Received: by outflank-mailman (input) for mailman id 6293;
 Tue, 13 Oct 2020 15:35:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSMKU-0008Sv-Ap
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:35:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24e32344-1552-474f-a4ea-68b28347f462;
 Tue, 13 Oct 2020 15:35:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AE43BAD82;
 Tue, 13 Oct 2020 15:35:08 +0000 (UTC)
X-Inumbo-ID: 24e32344-1552-474f-a4ea-68b28347f462
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602603308;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TI1G1wW1cp5DQ/cJivBeSTwYJfsSgssahb1Q8w2jMGQ=;
	b=m32Dm+kEhgE7MjIZuQ8rZz5qUhW36EJDuBNV75ZPP9x6N6sPwHwo8J2eQLqW/2VCPJRf5k
	yehhGrDHxElTJnK3JMU3KY/s33IH6Z2D73577UIgtFYenkc2mekokNV/++FTIWk49MCWy+
	fAQbDS/2dacAlloUNVPmYjrkkYGBEvo=
Subject: Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI
 table region
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, iwj@xenproject.org
References: <1602586216-27371-1-git-send-email-igor.druzhinin@citrix.com>
 <56bea9a9-2509-cc39-a6fd-fb7db3e54d71@suse.com>
 <83f567a1-35f3-a227-830b-a59b53217f3b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ad54c16b-c3b0-cff2-921f-b84a735d3149@suse.com>
Date: Tue, 13 Oct 2020 17:35:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <83f567a1-35f3-a227-830b-a59b53217f3b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.2020 14:59, Igor Druzhinin wrote:
> On 13/10/2020 13:51, Jan Beulich wrote:
>> As a consequence I think we will also want to adjust Xen itself to
>> automatically disable ACPI when it ends up consuming E801 data. Or
>> alternatively we should consider dropping all E801-related code (as
>> being inapplicable to 64-bit systems).
> 
> I'm not following here. What does Xen have to do with E801? It's a
> SeaBIOS-implemented call that happens to be used by a QEMU option ROM. We
> cannot drop it from there as it's part of the BIOS spec.

Any ACPI-aware OS has to use E820 (and nothing else). Hence our
own use of E801 should either be dropped or lead to the
disabling of ACPI. Otherwise real firmware using logic similar
to SeaBIOS's (but hopefully properly accounting for holes)
could make us use ACPI table space as normal RAM.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:41:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:41:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6297.16766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMQR-0000vM-4n; Tue, 13 Oct 2020 15:41:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6297.16766; Tue, 13 Oct 2020 15:41:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMQR-0000vF-1J; Tue, 13 Oct 2020 15:41:19 +0000
Received: by outflank-mailman (input) for mailman id 6297;
 Tue, 13 Oct 2020 15:41:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSMQQ-0000vA-3c
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:41:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9edea0e4-ddb0-4520-856b-065138215da2;
 Tue, 13 Oct 2020 15:41:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 87C25B1A6;
 Tue, 13 Oct 2020 15:41:16 +0000 (UTC)
X-Inumbo-ID: 9edea0e4-ddb0-4520-856b-065138215da2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602603676;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=U4OemMNmTMTBI16JsiuhCKQ5aOSybutzFsISwsGQk70=;
	b=ZyP1+fYMd/XVeVjPkNU5lreZtF6BG0RcylFd/KwjLtmSYZPI6QXgx/zYWyCQDoaGi7B6p8
	7mTbhrBc3eRXBOAfDrSUZrUSZn/Vzg17oHmnghg2bS07iW/zorATPxVlu0I2soFcdXGmYA
	lEaiflLL/HlPAwrBZF49Rmj533VPqGk=
Subject: Re: [PATCH v2 03/11] x86/vlapic: introduce an EOI callback mechanism
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-4-roger.pau@citrix.com>
 <a6863a90-584a-af21-4a0a-1b104b750978@suse.com>
 <20201013143028.GQ19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a8d878fc-4d05-d059-61f4-6994cb595838@suse.com>
Date: Tue, 13 Oct 2020 17:41:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201013143028.GQ19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.10.2020 16:30, Roger Pau Monné wrote:
> On Fri, Oct 02, 2020 at 11:39:50AM +0200, Jan Beulich wrote:
>> On 30.09.2020 12:41, Roger Pau Monne wrote:
>>> Add a new vlapic_set_irq_callback helper in order to inject a vector
>>> and set a callback to be executed when the guest performs the end of
>>> interrupt acknowledgment.
>>
>> On v1 I did ask
>>
>> "One thing I don't understand at all for now is how these
>>  callbacks are going to be re-instated after migration for
>>  not-yet-EOIed interrupts."
>>
>> Afaics I didn't get an answer on this.
> 
> Oh sorry, I remember your comment and I've changed further patches to
> address this.
> 
> The setter of the callback will be in charge of setting the callback
> again on resume. That's why vlapic_set_callback is not a static
> function, and is added to the header.
> 
> Patch 5/11 [0] contains an example of how the vIO-APIC resets the callbacks
> on load.

Ah, I see - I didn't get there yet. Could you mention this behavior in
the description here, or maybe in a code comment next to the declaration
(or definition) of the function?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:43:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:43:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6299.16777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMS3-00014w-FD; Tue, 13 Oct 2020 15:42:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6299.16777; Tue, 13 Oct 2020 15:42:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMS3-00014p-CE; Tue, 13 Oct 2020 15:42:59 +0000
Received: by outflank-mailman (input) for mailman id 6299;
 Tue, 13 Oct 2020 15:42:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSMS1-000144-UP
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:42:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0364843-20c5-45f0-a972-d547676dc813;
 Tue, 13 Oct 2020 15:42:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 193F9B2BB;
 Tue, 13 Oct 2020 15:42:56 +0000 (UTC)
X-Inumbo-ID: b0364843-20c5-45f0-a972-d547676dc813
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602603776;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cnDjqizlFPZeibM01x0Xf5XBdXKAaJe7w2xKEueQdxs=;
	b=XPNk50KXGJLcYjKTQTkDRfgOPYsX+S6FC0Zkbg4OC2MSl+stqBjRze/p68V8LSi5orL/df
	U9o7cZo1on3Egu04W77qsiJ+E2Fx4pF6rt1wU+9gcfpZL0rneWrWw2e0CxxBp+YgrnNWMj
	ovAGNkf28JmJhlxqMqS4Z5U0+GE585g=
Subject: Re: [PATCH v2 04/11] x86/vmsi: use the newly introduced EOI callbacks
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-5-roger.pau@citrix.com>
 <785f80d6-3a0a-6a58-fd9a-05d8ff87f6fe@suse.com>
 <20201013144724.GR19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9253f4a9-66f0-e796-da35-22456545edde@suse.com>
Date: Tue, 13 Oct 2020 17:42:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201013144724.GR19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.10.2020 16:47, Roger Pau Monné wrote:
> On Fri, Oct 02, 2020 at 05:25:34PM +0200, Jan Beulich wrote:
>> On 30.09.2020 12:41, Roger Pau Monne wrote:
>>> @@ -119,7 +126,8 @@ void vmsi_deliver_pirq(struct domain *d, const struct hvm_pirq_dpci *pirq_dpci)
>>>  
>>>      ASSERT(pirq_dpci->flags & HVM_IRQ_DPCI_GUEST_MSI);
>>>  
>>> -    vmsi_deliver(d, vector, dest, dest_mode, delivery_mode, trig_mode);
>>> +    vmsi_deliver_callback(d, vector, dest, dest_mode, delivery_mode, trig_mode,
>>> +                          hvm_dpci_msi_eoi, NULL);
>>>  }
>>
>> While I agree with your reply to Paul regarding Dom0, I still think
>> the entire if() in hvm_dpci_msi_eoi() should be converted into a
>> conditional here. There's no point registering the callback if it's
>> not going to do anything.
>>
>> However, looking further, the "!hvm_domain_irq(d)->dpci &&
>> !is_hardware_domain(d)" can be simply dropped altogether, right away.
>> It's now fulfilled by the identical check at the top of
>> hvm_dirq_assist(), thus guarding the sole call site of this function.
>>
>> The !is_iommu_enabled(d) is slightly more involved to prove, but it
>> should also be possible to simply drop. What might help here is a
>> separate change to suppress opening of HVM_DPCI_SOFTIRQ when there's
>> no IOMMU in the system, as then it becomes obvious that this part of
>> the condition is guaranteed by hvm_do_IRQ_dpci(), being the only
>> site where the softirq can get raised (apart from the softirq
>> handler itself).
>>
>> To sum up - the call above can probably stay as is, but the callback
>> can be simplified as a result of the change.
> 
> Yes, I agree. Would you be fine with converting the check in the
> callback into an assert, or would you rather have it removed
> completely?

Either way is fine with me, I think.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:47:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6301.16789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMWC-0001FL-0t; Tue, 13 Oct 2020 15:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6301.16789; Tue, 13 Oct 2020 15:47:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMWB-0001FE-U6; Tue, 13 Oct 2020 15:47:15 +0000
Received: by outflank-mailman (input) for mailman id 6301;
 Tue, 13 Oct 2020 15:47:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pNIz=DU=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kSMWA-0001F7-Vp
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:47:15 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df3c3e0b-9d39-4f9a-8199-bafa4d4ad748;
 Tue, 13 Oct 2020 15:47:13 +0000 (UTC)
X-Inumbo-ID: df3c3e0b-9d39-4f9a-8199-bafa4d4ad748
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602604033;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=qPWmkc2ZFeFYf0b2dKh9DRNp0gTWx0DBaI4WjaMTVxw=;
  b=ILzzLaIV0uZY/hOZQ2hC3r+YiNr/hoZjHq3u4AJAi66RK/gKTLwrJtdT
   7xOhjxdDHcqgY2HRS0NHPfObYJzvnXwcJtMQeuP8U51lKFpC5l7pHvipE
   LZWuGlkKkajYkqphbUv4DgOyj1AlR4NQi3gbvujfdCfJnZ01w0nHmvLK9
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 3e/y38pEDJMjCMfqTS0H0DHusp9IeEG24VUZjC0eniMrNLV744okFu+MwTwxhh6oGtINSogFDH
 gNaYfdD5m4KMmkZchp0RddtevJSkd0TLoEK51VBrxOerfB/dNeIvdco8tBIe7jUbcW1cC5bYru
 cmb5oJKSXbKXtMg1+tWDeQXWUhOd1aBQy5YcU+ISp1vykcgfGpwVOd0TnleZkgTjYa9DU6zg75
 4fdTSGfT9ruU0GAGAC/rbkGq/ngg9s5nEyZ7ddUrIyzYUfD/iKvALkdrjvkaGOgpqaVeoAhzud
 b+k=
X-SBRS: 2.5
X-MesageID: 29158354
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,371,1596513600"; 
   d="scan'208";a="29158354"
Subject: Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI
 table region
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <andrew.cooper3@citrix.com>,
	<roger.pau@citrix.com>, <wl@xen.org>, <iwj@xenproject.org>
References: <1602586216-27371-1-git-send-email-igor.druzhinin@citrix.com>
 <56bea9a9-2509-cc39-a6fd-fb7db3e54d71@suse.com>
 <83f567a1-35f3-a227-830b-a59b53217f3b@citrix.com>
 <ad54c16b-c3b0-cff2-921f-b84a735d3149@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <cc0f409e-60c0-41ae-f932-f6c2d7f82baa@citrix.com>
Date: Tue, 13 Oct 2020 16:47:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ad54c16b-c3b0-cff2-921f-b84a735d3149@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13/10/2020 16:35, Jan Beulich wrote:
> On 13.10.2020 14:59, Igor Druzhinin wrote:
>> On 13/10/2020 13:51, Jan Beulich wrote:
>>> As a consequence I think we will also want to adjust Xen itself to
>>> automatically disable ACPI when it ends up consuming E801 data. Or
>>> alternatively we should consider dropping all E801-related code (as
>>> being inapplicable to 64-bit systems).
>>
>> I'm not following here. What does Xen have to do with E801? It's a SeaBIOS-implemented
>> call that happened to be used by a QEMU option ROM. We cannot drop it from there,
>> as it's part of the BIOS spec.
> 
> Any ACPI-aware OS has to use E820 (and nothing else). Hence our
> own use of E801 should either be dropped, or lead to the
> disabling of ACPI. Otherwise real firmware using logic similar
> to SeaBIOS's (but hopefully properly accounting for holes)
> could make us use ACPI table space as normal RAM.

It's not us using it - it's a boot loader from QEMU, in the form of an option ROM,
that runs in a 16-bit pre-OS environment (which is not an OS) and relies on the E801 BIOS call.
I'm sure any ACPI-aware OS does indeed use E820, but the problem here is not an OS.

The option ROM is loaded using fw_cfg from QEMU, so it's not our code. Technically
it's one piece of foreign code (the QEMU boot loader) talking to another (SeaBIOS),
which provides information based on the E820 map that we gave them.

So I'm afraid the decision to dynamically disable ACPI (whatever you mean by this)
cannot be made based solely on the usage of this call by a pre-OS boot loader.

Igor


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:51:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:51:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6304.16802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMaQ-00025U-K3; Tue, 13 Oct 2020 15:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6304.16802; Tue, 13 Oct 2020 15:51:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMaQ-00025N-Gx; Tue, 13 Oct 2020 15:51:38 +0000
Received: by outflank-mailman (input) for mailman id 6304;
 Tue, 13 Oct 2020 15:51:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSMaO-00025I-E2
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:51:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a4d8b98-a51b-41db-8046-95cfce1ea284;
 Tue, 13 Oct 2020 15:51:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C2FB3AAB2;
 Tue, 13 Oct 2020 15:51:34 +0000 (UTC)
X-Inumbo-ID: 3a4d8b98-a51b-41db-8046-95cfce1ea284
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602604294;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vytuRpDje9ZD49Qmz7nTj2lJFZPkcVg367sRD5G5lm4=;
	b=nqKFjCvuksYsuqacn2mahUNSmaqt0QfO3Kve2ioeuME3+0b0o+N2quzDB4ZQIzwLRCTHRs
	ylXsBMvGa3/FGZ6pYN8Yj9OYthxmpNwvsGNS863olmdPgIudo+g532th9PnybshmNP8trI
	gV710Ea+r2awu/eORBYlfe+KjH3NU/8=
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
Date: Tue, 13 Oct 2020 17:51:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201012134908.27497-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.10.2020 15:49, Andrew Cooper wrote:
> All interrupts and exceptions pass a struct cpu_user_regs up into C.  This
> contains the legacy vm86 fields from 32bit days, which are beyond the
> hardware-pushed frame.
> 
> Accessing these fields is generally illegal, as they are logically out of
> bounds for anything other than an interrupt/exception hitting ring1/3 code.
> 
> Unfortunately, the #DF handler uses these fields as part of preparing the
> state dump, and being IST, accesses the adjacent stack frame.
> 
> This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
> layout to support shadow stacks" repositioned the #DF stack to be adjacent to
> the guard page, which turns this OoB write into a fatal pagefault:
> 
>   (XEN) *** DOUBLE FAULT ***
>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>   (XEN) CPU:    4
>   (XEN) RIP:    e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
>   (XEN) RFLAGS: 0000000000050086   CONTEXT: hypervisor (d1v0)
>   ...
>   (XEN) Xen call trace:
>   (XEN)    [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
>   (XEN)    [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
>   (XEN)    [<ffff82d04039acd7>] F double_fault+0x107/0x110
>   (XEN)
>   (XEN) Pagetable walk from ffff830236f6d008:
>   (XEN)  L4[0x106] = 80000000bfa9b063 ffffffffffffffff
>   (XEN)  L3[0x008] = 0000000236ffd063 ffffffffffffffff
>   (XEN)  L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
>   (XEN)  L1[0x16d] = 8000000236f6d161 ffffffffffffffff
>   (XEN)
>   (XEN) ****************************************
>   (XEN) Panic on CPU 4:
>   (XEN) FATAL PAGE FAULT
>   (XEN) [error_code=0003]
>   (XEN) Faulting linear address: ffff830236f6d008
>   (XEN) ****************************************
>   (XEN)
> 
> and renders the main #DF analysis broken.
> 
> The proper fix is to delete cpu_user_regs.es and later, so no
> interrupt/exception path can access OoB, but this needs disentangling from the
> PV ABI first.
> 
> Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Is it perhaps worth also saying explicitly that the other IST
stacks don't suffer the same problem because show_registers()
makes a local copy of the passed-in struct? (Doing so also
for #DF would apparently be an alternative solution.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:54:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:54:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6306.16814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMct-0002GB-1e; Tue, 13 Oct 2020 15:54:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6306.16814; Tue, 13 Oct 2020 15:54:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMcs-0002G4-UZ; Tue, 13 Oct 2020 15:54:10 +0000
Received: by outflank-mailman (input) for mailman id 6306;
 Tue, 13 Oct 2020 15:54:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSMcs-0002Fz-CD
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:54:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0b96e96-0afc-49fb-9ba4-ff18ca0c4ddc;
 Tue, 13 Oct 2020 15:54:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AD8B9AEC4;
 Tue, 13 Oct 2020 15:54:08 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSMcs-0002Fz-CD
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:54:10 +0000
X-Inumbo-ID: d0b96e96-0afc-49fb-9ba4-ff18ca0c4ddc
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d0b96e96-0afc-49fb-9ba4-ff18ca0c4ddc;
	Tue, 13 Oct 2020 15:54:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602604448;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ASAJNSyPvPqEFLxRCkze7om4ldPd7D5tSkcPDTNicv4=;
	b=Vv8WlQ7fADAjay2FsEAF4FUrGgyhEKAm9Xp6W5/Vsh9tFig+TII39O2ppSw7o/AznPkS5N
	yYBMW+rmqaFs2T83hoysYfZq79TgKrMXmBkYWsHvzP2WzjS+blrrlBfNjIa9t2vOeyo4fc
	fgPWas4bSZq4Eq6DwczyDIdIIIZhN08=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id AD8B9AEC4;
	Tue, 13 Oct 2020 15:54:08 +0000 (UTC)
Subject: Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI
 table region
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, iwj@xenproject.org
References: <1602586216-27371-1-git-send-email-igor.druzhinin@citrix.com>
 <56bea9a9-2509-cc39-a6fd-fb7db3e54d71@suse.com>
 <83f567a1-35f3-a227-830b-a59b53217f3b@citrix.com>
 <ad54c16b-c3b0-cff2-921f-b84a735d3149@suse.com>
 <cc0f409e-60c0-41ae-f932-f6c2d7f82baa@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5d7bf2ce-1acb-05ff-a57b-d698e15c4dd1@suse.com>
Date: Tue, 13 Oct 2020 17:54:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <cc0f409e-60c0-41ae-f932-f6c2d7f82baa@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.2020 17:47, Igor Druzhinin wrote:
> On 13/10/2020 16:35, Jan Beulich wrote:
>> On 13.10.2020 14:59, Igor Druzhinin wrote:
>>> On 13/10/2020 13:51, Jan Beulich wrote:
>>>> As a consequence I think we will also want to adjust Xen itself to
>>>> automatically disable ACPI when it ends up consuming E801 data. Or
>>>> alternatively we should consider dropping all E801-related code (as
>>>> being inapplicable to 64-bit systems).
>>>
>>> I'm not following here. What does Xen have to do with E801? It's a SeaBIOS-implemented
>>> call that happened to be used by a QEMU option ROM. We cannot drop it from there,
>>> as it's part of the BIOS spec.
>>
>> Any ACPI-aware OS has to use E820 (and nothing else). Hence our
>> own use of E801 should either be dropped, or lead to the
>> disabling of ACPI. Otherwise real firmware using logic similar
>> to SeaBIOS's (but hopefully properly accounting for holes)
>> could make us use ACPI table space as normal RAM.
> 
> It's not us using it - it's a boot loader from QEMU, in the form of an option ROM,
> that runs in a 16-bit pre-OS environment (which is not an OS) and relies on the E801 BIOS call.
> I'm sure any ACPI-aware OS does indeed use E820, but the problem here is not an OS.
> 
> The option ROM is loaded using fw_cfg from QEMU, so it's not our code. Technically
> it's one piece of foreign code (the QEMU boot loader) talking to another (SeaBIOS),
> which provides information based on the E820 map that we gave them.
> 
> So I'm afraid the decision to dynamically disable ACPI (whatever you mean by this)
> cannot be made based solely on the usage of this call by a pre-OS boot loader.

I guess this is simply a misunderstanding. I'm not talking about
your change or hvmloader or the boot loader at all. I was merely
noticing a consequence of your findings on the behavior of Xen
itself: Use of ACPI and use of E801 are exclusive of one another.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 15:58:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 15:58:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6308.16826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMhU-0002Rd-Km; Tue, 13 Oct 2020 15:58:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6308.16826; Tue, 13 Oct 2020 15:58:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSMhU-0002RW-H2; Tue, 13 Oct 2020 15:58:56 +0000
Received: by outflank-mailman (input) for mailman id 6308;
 Tue, 13 Oct 2020 15:58:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VY8U=DU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSMhT-0002RR-Us
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 15:58:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fcf638e8-54c8-4a3b-9bb6-52ebfe7c4989;
 Tue, 13 Oct 2020 15:58:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1B928AC6D;
 Tue, 13 Oct 2020 15:58:54 +0000 (UTC)
X-Inumbo-ID: fcf638e8-54c8-4a3b-9bb6-52ebfe7c4989
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602604734;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7CtD/97jkM0ctnGcvP5fmJmXD0Un5r/DeGz/RrewPpo=;
	b=f45K6N9WLE7hNv28p/6hezdrGNwbVn/8iYzH2Y5TCINueU1T15foMFDuV6izusFw6/pCCe
	NdQX8uqVphhQJi4jxgVHhP967HCVDu/BSJDvZYwVkZasmNICh9S0TcrFNoKa6orVL1FYv5
	janVu5nkvVn00WzJYaRDkMr32hyE5YQ=
Subject: Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <20201009150948.31063-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
Date: Tue, 13 Oct 2020 17:58:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201009150948.31063-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.10.2020 17:09, Andrew Cooper wrote:
> At the time of XSA-170, the x86 instruction emulator really was broken, and
> would allow arbitrary non-canonical values to be loaded into %rip.  This was
> fixed after the embargo by c/s 81d3a0b26c1 "x86emul: limit-check branch
> targets".
> 
> However, in a demonstration that off-by-one errors really are one of the
> hardest programming issues we face, everyone involved with XSA-170, myself
> included, mistook the statement in the SDM which says:
> 
>   If the processor supports N < 64 linear-address bits, bits 63:N must be identical
> 
> to mean "must be canonical".  A real canonical check is bits 63:N-1.
> 
> VMEntries really do tolerate a not-quite-canonical %rip, specifically to cater
> to the boundary condition at 0x0000800000000000.
> 
> Now that the emulator has been fixed, revert the XSA-170 change to fix
> architectural behaviour at the boundary case.  The XTF test case for XSA-170
> exercises this corner case, and still passes.
> 
> Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

But why revert the change rather than fix ...

> @@ -4280,38 +4280,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>  out:
>      if ( nestedhvm_vcpu_in_guestmode(v) )
>          nvmx_idtv_handling();
> -
> -    /*
> -     * VM entry will fail (causing the guest to get crashed) if rIP (and
> -     * rFLAGS, but we don't have an issue there) doesn't meet certain
> -     * criteria. As we must not allow less than fully privileged mode to have
> -     * such an effect on the domain, we correct rIP in that case (accepting
> -     * this not being architecturally correct behavior, as the injected #GP
> -     * fault will then not see the correct [invalid] return address).
> -     * And since we know the guest will crash, we crash it right away if it
> -     * already is in most privileged mode.
> -     */
> -    mode = vmx_guest_x86_mode(v);
> -    if ( mode == 8 ? !is_canonical_address(regs->rip)

... the wrong use of is_canonical_address() here? By reverting
you open up avenues for XSAs in case we get things wrong elsewhere,
including ...

> -                   : regs->rip != regs->eip )

... for 32-bit guests.
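
For readers following along, the one-bit difference between the two checks can be
sketched like this (illustrative Python only, assuming N = 48 linear-address
bits; the helper names are made up and are not Xen's is_canonical_address()):

```python
def is_canonical(addr, n=48):
    # Real canonical check: bits 63:n-1 must be identical, i.e. the
    # address sign-extends from bit n-1.
    top = (addr & (2**64 - 1)) >> (n - 1)
    return top == 0 or top == 2**(64 - n + 1) - 1

def vmentry_rip_ok(addr, n=48):
    # The SDM VMEntry wording, "bits 63:N must be identical", is one
    # bit laxer, so it tolerates the boundary value below.
    top = (addr & (2**64 - 1)) >> n
    return top == 0 or top == 2**(64 - n) - 1

boundary = 0x0000_8000_0000_0000   # first non-canonical address
# is_canonical(boundary) is False, yet vmentry_rip_ok(boundary) is True:
# exactly the corner case the XTF test for XSA-170 exercises.
```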

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 16:33:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 16:33:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6315.16840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSNEb-0006LS-Em; Tue, 13 Oct 2020 16:33:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6315.16840; Tue, 13 Oct 2020 16:33:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSNEb-0006LL-Ah; Tue, 13 Oct 2020 16:33:09 +0000
Received: by outflank-mailman (input) for mailman id 6315;
 Tue, 13 Oct 2020 16:33:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSNEZ-0006Kt-GG
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 16:33:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fb917a1-f6e4-48c0-8ecb-95e594efbeb8;
 Tue, 13 Oct 2020 16:32:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSNER-0001oL-B3; Tue, 13 Oct 2020 16:32:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSNER-0007Bi-0K; Tue, 13 Oct 2020 16:32:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSNEQ-0000Yq-Vl; Tue, 13 Oct 2020 16:32:58 +0000
X-Inumbo-ID: 0fb917a1-f6e4-48c0-8ecb-95e594efbeb8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fHXH0bTkh/XjzJkAr8ua6R+u99NIoRj4JyB9GvAa4jY=; b=pVhhEUZrISnAa5qPbOEWpxeiJ6
	YWMWRARC4zv/SiaP3vhEyyTfvFTTtGlWfYzlCEyD3Et8PVkouv9raue8CWyY2oaK9+J84KM8shF2T
	tQINpYqhbpS550O6ttOZcJg6EVch2Y/l5iAjMjtmVtvSwJPIHZXel5YnBD/xXWC7wL94=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155778-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155778: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a95f31376ba4ae911536c647e1a583d144ccab73
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 16:32:58 +0000

flight 155778 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155778/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a95f31376ba4ae911536c647e1a583d144ccab73
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    4 days
Failing since        155612  2020-10-09 18:01:22 Z    3 days   29 attempts
Testing same since   155778  2020-10-13 14:01:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a95f31376ba4ae911536c647e1a583d144ccab73
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:25 2020 -0400

    golang/xenlight: standardize generated code comment
    
    There is a standard format for generated Go code header comments, as set
    by [1]. Modify gengotypes.py to follow this standard, and use the
    additional
    
      // source: <IDL file basename>
    
    convention used by protoc-gen-go.
    
    This change is motivated by the fact that since 41aea82de2, the comment
    would include the absolute path to libxl_types.idl, therefore creating
    unintended diffs when generating code across different machines. This
    approach fixes that problem.
    
    [1] https://github.com/golang/go/issues/13560
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
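
For illustration, the resulting header would look roughly like the following
(hypothetical output, modelled on the protoc-gen-go convention the commit
cites; the exact text is produced by gengotypes.py):

```
// Code generated by gengotypes.py. DO NOT EDIT.
// source: libxl_types.idl
```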

commit c60f9e4360ec857bb0164387378e12ae8e66e189
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:24 2020 -0400

    golang/xenlight: do not hard code libxl dir in gengotypes.py
    
    Currently, in order to 'import idl' in gengotypes.py, we derive the path
    of the libxl source directory from the XEN_ROOT environment variable, and
    append that to sys.path so python can see idl.py. Since the recent move of
    libxl to tools/libs/light, this hard coding breaks the build.
    
    Instead, check for the environment variable LIBXL_SRC_DIR, but move this
    check to a try-except block (with empty except). This simply makes the
    real error more visible, and does not strictly require that
    LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
    rather than XEN_ROOT when calling gengotypes.py.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
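
A minimal sketch of the approach described above (illustrative only, not the
actual gengotypes.py code; the helper name is made up):

```python
import os
import sys

def add_libxl_src_dir(environ=os.environ):
    """Append LIBXL_SRC_DIR to sys.path if set; if unset, do nothing so
    that the subsequent 'import idl' fails with the real, visible error."""
    try:
        sys.path.append(environ["LIBXL_SRC_DIR"])
    except KeyError:
        pass  # the empty except mentioned in the commit message

# add_libxl_src_dir()
# import idl  # now raises a clear ImportError if the path is still wrong
```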

commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user-provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
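
By way of illustration, such a unified image might be assembled along these
lines (hypothetical file names; the section names are the ones the patch
searches for):

```
objcopy \
    --add-section .config=xen.cfg \
    --add-section .kernel=vmlinuz \
    --add-section .ramdisk=initrd.img \
    xen.efi xen-unified.efi
```

A real invocation may also need --change-section-vma arguments per section so
the added sections do not overlap in the image.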

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 17:16:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 17:16:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6320.16856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSNuu-0001R7-0m; Tue, 13 Oct 2020 17:16:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6320.16856; Tue, 13 Oct 2020 17:16:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSNut-0001R0-TX; Tue, 13 Oct 2020 17:16:51 +0000
Received: by outflank-mailman (input) for mailman id 6320;
 Tue, 13 Oct 2020 17:16:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iuM3=DU=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kSNus-0001Qv-Fg
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 17:16:50 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6ddb5c31-c594-4271-a716-d7b933bd45b2;
 Tue, 13 Oct 2020 17:16:49 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-240-Dy4qqe9UNdGDO3w_lNVJmA-1; Tue, 13 Oct 2020 13:16:47 -0400
Received: by mail-wm1-f71.google.com with SMTP id p17so22992wmi.7
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 10:16:47 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:61dd:85cb:23fc:fd54?
 ([2001:b07:6468:f312:61dd:85cb:23fc:fd54])
 by smtp.gmail.com with ESMTPSA id h16sm276181wre.87.2020.10.13.10.16.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 13 Oct 2020 10:16:44 -0700 (PDT)
X-Inumbo-ID: 6ddb5c31-c594-4271-a716-d7b933bd45b2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602609409;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mbd51cFBxe+XUw/BaRsDfnw6Rspo0/0Cx5mNXVWCiAw=;
	b=Djfkz1bS24stcDLsrdTt4J/zyb/fZA259j+En9fHd9YjJwFhb79pUW44cDhLiPqdk8jt4O
	I6ed9McG7rwbHUd5HALTTkP0aaiewSMuNKam4rpSiZXTBkjWh/Fl1lrQJQh3rhOXBNvGhx
	2MFr9VM1wrwsJp2Waf57DNYq8F/6TYI=
X-MC-Unique: Dy4qqe9UNdGDO3w_lNVJmA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=mbd51cFBxe+XUw/BaRsDfnw6Rspo0/0Cx5mNXVWCiAw=;
        b=PFSE988dJQG65C4vsSMBy2oIg6HJT38OKiS+DdhBkt8S3y9xQA/JVCWDN16Si8PT+A
         iynGpWfiymSKOOAcCkYjEwi3ZmPPlj+CLiee96m/+ILwQFpyczmQRG2fYHPqJJNozmdw
         4Un9pBpuAwy5v4rJDHjo8f+pH9Ghl0w3VjWIU86G9V5pGy10aKU4JnyXgpznPg5Ns2Be
         CK2cNpZhpKo1CoubRtFtTfsQx1KzRM/diacGyBjkIhItWJW5DmAYpOi80rP0H1Ks9K/3
         zZwFQPGvwTn6c+zU0HBn9v8RLDQ4Suz/QgTcdSXm+ocYjMvNtjq7kPMCYVXmx03DM7bU
         PIEw==
X-Gm-Message-State: AOAM530i7QN3bIG/ls0ONZZeFA7Ap0QiCUc5Rx8K9FaqcqmI84F1hZ6S
	geHs9HTniftMKGev3jJdzIw4L3vDVyt7XnU3s0pGoBIFo2k99P7xbU01bClw50lF9vUnFtpEC0C
	NA+1yTP6zy6mlYmLjWGN9JByn3Lc=
X-Received: by 2002:adf:ab50:: with SMTP id r16mr709722wrc.235.1602609406298;
        Tue, 13 Oct 2020 10:16:46 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJzqvJcVbb6P2cHgVEVaTt5MgwUbDl0AbzF1nLxhSnu5jh3qcI8p8DLescv9YGK+kKeOV6PDXg==
X-Received: by 2002:adf:ab50:: with SMTP id r16mr709694wrc.235.1602609406080;
        Tue, 13 Oct 2020 10:16:46 -0700 (PDT)
Subject: Re: [PATCH v2 0/3] Add Xen CpusAccel
To: Jason Andryuk <jandryuk@gmail.com>, qemu-devel@nongnu.org
Cc: Claudio Fontana <cfontana@suse.de>, Thomas Huth <thuth@redhat.com>,
 Laurent Vivier <lvivier@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201013140511.5681-1-jandryuk@gmail.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <ddb5c9c2-c206-28d6-2d9d-7954e7022c23@redhat.com>
Date: Tue, 13 Oct 2020 19:16:43 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201013140511.5681-1-jandryuk@gmail.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13/10/20 16:05, Jason Andryuk wrote:
> Xen was left behind when CpusAccel became mandatory and fails the assert
> in qemu_init_vcpu().  It relied on the same dummy cpu threads as qtest.
> Move the qtest cpu functions to a common location and reuse them for
> Xen.
> 
> v2:
>   New patch "accel: Remove _WIN32 ifdef from qtest-cpus.c"
>   Use accel/dummy-cpus.c for filename
>   Put prototype in include/sysemu/cpus.h
> 
> Jason Andryuk (3):
>   accel: Remove _WIN32 ifdef from qtest-cpus.c
>   accel: move qtest CpusAccel functions to a common location
>   accel: Add xen CpusAccel using dummy-cpus
> 
>  accel/{qtest/qtest-cpus.c => dummy-cpus.c} | 27 ++++------------------
>  accel/meson.build                          |  8 +++++++
>  accel/qtest/meson.build                    |  1 -
>  accel/qtest/qtest-cpus.h                   | 17 --------------
>  accel/qtest/qtest.c                        |  5 +++-
>  accel/xen/xen-all.c                        |  8 +++++++
>  include/sysemu/cpus.h                      |  3 +++
>  7 files changed, 27 insertions(+), 42 deletions(-)
>  rename accel/{qtest/qtest-cpus.c => dummy-cpus.c} (71%)
>  delete mode 100644 accel/qtest/qtest-cpus.h
> 

Acked-by: Paolo Bonzini <pbonzini@redhat.com>



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 18:12:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 18:12:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6343.16880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSOmU-0006bF-H5; Tue, 13 Oct 2020 18:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6343.16880; Tue, 13 Oct 2020 18:12:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSOmU-0006b8-DQ; Tue, 13 Oct 2020 18:12:14 +0000
Received: by outflank-mailman (input) for mailman id 6343;
 Tue, 13 Oct 2020 18:12:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSOmS-0006b3-OM
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 18:12:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68f1d56d-5118-4ae9-8f6d-77dae52fcc41;
 Tue, 13 Oct 2020 18:12:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSOmP-0003uq-JA; Tue, 13 Oct 2020 18:12:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSOmP-000300-9s; Tue, 13 Oct 2020 18:12:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSOmP-0005ke-9N; Tue, 13 Oct 2020 18:12:09 +0000
X-Inumbo-ID: 68f1d56d-5118-4ae9-8f6d-77dae52fcc41
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0f5QNQJU3jeGPVgNmLzvICYBVttpoJbHnxYXOyE8Mqw=; b=i/MWrT9W7kxoYQ2apUfqXrl70X
	LF+m97B1Za9QH7t+xbsloZ3ILiSa4sKIZdNDV38K93aiClp+je0lHnITL3xmP/+YCvZ3wyGPTEMim
	YnIwZi0/yzB6LKenVCU3srrLpAVnkpQQiKADmB0r0YNer3N95UEjcJHWgYwmjMA9zicI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155759-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155759: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:guest-saverestore:fail:heisenbug
    xen-unstable:test-amd64-amd64-pair:guest-migrate/src_host/dst_host:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 18:12:09 +0000

flight 155759 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155759/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 16 guest-saverestore fail in 155712 pass in 155759
 test-amd64-amd64-pair   26 guest-migrate/src_host/dst_host fail pass in 155712
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install     fail pass in 155712
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 155712

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 155712 like 155673
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155712
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155712
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155712
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155712
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155712
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155712
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155712
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155712
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155759  2020-10-13 01:53:11 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 18:23:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 18:23:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6356.16899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSOxO-0007cy-LR; Tue, 13 Oct 2020 18:23:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6356.16899; Tue, 13 Oct 2020 18:23:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSOxO-0007cr-Ho; Tue, 13 Oct 2020 18:23:30 +0000
Received: by outflank-mailman (input) for mailman id 6356;
 Tue, 13 Oct 2020 18:23:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSOxN-0007cP-Q5
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 18:23:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d823fe4-774e-4117-ad0f-49c6244ac1a3;
 Tue, 13 Oct 2020 18:23:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSOxG-0004AK-U1; Tue, 13 Oct 2020 18:23:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSOxG-0003Py-MC; Tue, 13 Oct 2020 18:23:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSOxG-0002ic-Lk; Tue, 13 Oct 2020 18:23:22 +0000
X-Inumbo-ID: 2d823fe4-774e-4117-ad0f-49c6244ac1a3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=50Zt0/KzVUd9ZS7yG52NlogseuEc0UEv+QWoJi56kI0=; b=pxGNgeTgi1X6uMEzE+2dr4ecpH
	G5KxkI1yI+RJGGfkf0sroGwODm8bv9nLfp5qTDV55DY1ahOA7GWQqXEBdSymG1J/VP9Cyo5P+78FC
	I4OG0inCJVnyqQ55mKZDAXdIaBPaUNvDRE2ZBHOUxSML4Z5vfTQZiG54HyRq3LMVQqjI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155765-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155765: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=9380177354387f03c8ff9eadb7ae94aa453b9469
X-Osstest-Versions-That:
    ovmf=5d1af380d3e4bd840fa324db33ca4f739136e654
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 18:23:22 +0000

flight 155765 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155765/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9380177354387f03c8ff9eadb7ae94aa453b9469
baseline version:
 ovmf                 5d1af380d3e4bd840fa324db33ca4f739136e654

Last test of basis   155757  2020-10-13 01:44:44 Z    0 days
Testing same since   155765  2020-10-13 06:07:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  fengyunhua <fengyunhua@byosoft.com.cn>
  Jan Bobek <jbobek@nvidia.com>
  Liming Gao <gaoliming@byosoft.com.cn>
  Michael D Kinney <michael.d.kinney@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5d1af380d3..9380177354  9380177354387f03c8ff9eadb7ae94aa453b9469 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 18:37:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 18:37:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6358.16912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSPAV-0000By-2M; Tue, 13 Oct 2020 18:37:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6358.16912; Tue, 13 Oct 2020 18:37:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSPAU-0000Bp-Uo; Tue, 13 Oct 2020 18:37:02 +0000
Received: by outflank-mailman (input) for mailman id 6358;
 Tue, 13 Oct 2020 18:37:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zIOr=DU=fluxnic.net=nico@srs-us1.protection.inumbo.net>)
 id 1kSPAT-0000B8-6Y
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 18:37:01 +0000
Received: from pb-sasl-trial2.pobox.com (unknown [64.147.108.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8baad28-1d41-4efd-9377-84e8985dd080;
 Tue, 13 Oct 2020 18:36:59 +0000 (UTC)
Received: from pb-sasl-trial2.pobox.com (localhost.local [127.0.0.1])
 by pb-sasl-trial2.pobox.com (Postfix) with ESMTP id B35092F08C;
 Tue, 13 Oct 2020 14:36:58 -0400 (EDT)
 (envelope-from nico@fluxnic.net)
Received: from pb-smtp1.nyi.icgroup.com (pb-smtp1.pobox.com [10.90.30.53])
 by pb-sasl-trial2.pobox.com (Postfix) with ESMTP id 7910C2F08B;
 Tue, 13 Oct 2020 14:36:58 -0400 (EDT)
 (envelope-from nico@fluxnic.net)
Received: from yoda.home (unknown [24.203.50.76])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp1.pobox.com (Postfix) with ESMTPSA id CD98F955F4;
 Tue, 13 Oct 2020 14:36:57 -0400 (EDT)
 (envelope-from nico@fluxnic.net)
Received: from xanadu.home (xanadu.home [192.168.2.2])
 by yoda.home (Postfix) with ESMTPSA id CF7492DA0BC7;
 Tue, 13 Oct 2020 14:36:56 -0400 (EDT)
X-Inumbo-ID: d8baad28-1d41-4efd-9377-84e8985dd080
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed; d=pobox.com; h=date:from:to
	:cc:subject:in-reply-to:message-id:references:mime-version
	:content-type; s=sasl; bh=1qdRcPgrMg9PaaTRWeHMHkWBgn4=; b=Epy+q5
	ans9ahJwXxlQvxdjICPrBYTo3ECIn9AzWxzmuo835zX7Go5RA+la+QVdJswbYHqY
	OA9uOWP+RHqwo1f/1Hjwskkbh9itwsmr5IKrZUme2Q4YRp5bQABuumhmd/Yh0NKM
	sMhZUgbkZQs79wJJn2wtIPZ7EN0v5uRSG8bTQ=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=fluxnic.net;
 h=date:from:to:cc:subject:in-reply-to:message-id:references:mime-version:content-type; s=2016-12.pbsmtp; bh=/xoWviDLFg5PKRQ9rObRWDXVC++pmZtYhfbDb0DFq7E=; b=v5OoWtflZD131TYsBl2A9g0L/PCRe2nu6sy2IJY2ys8stI3sGPGydjk9hbVpZeTUKIjemrnRhLwKFlAM+dXEIGXz5t0LfwSiRA8m7hrB4WLH79+9F2ww8ICEhYu0fLjFgoDc1lKWqG4ZKNRDYjtbn/p6CJBipu1Te7ZvLuk/HMw=
Received: from yoda.home (unknown [24.203.50.76])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by pb-smtp1.pobox.com (Postfix) with ESMTPSA id CD98F955F4;
	Tue, 13 Oct 2020 14:36:57 -0400 (EDT)
	(envelope-from nico@fluxnic.net)
Received: from xanadu.home (xanadu.home [192.168.2.2])
	by yoda.home (Postfix) with ESMTPSA id CF7492DA0BC7;
	Tue, 13 Oct 2020 14:36:56 -0400 (EDT)
Date: Tue, 13 Oct 2020 14:36:56 -0400 (EDT)
From: Nicolas Pitre <nico@fluxnic.net>
To: Ira Weiny <ira.weiny@intel.com>
cc: Andrew Morton <akpm@linux-foundation.org>, 
    Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, 
    Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>, 
    Peter Zijlstra <peterz@infradead.org>, x86@kernel.org, 
    Dave Hansen <dave.hansen@linux.intel.com>, 
    Dan Williams <dan.j.williams@intel.com>, Fenghua Yu <fenghua.yu@intel.com>, 
    linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, 
    linux-nvdimm@lists.01.org, linux-fsdevel@vger.kernel.org, 
    linux-mm@kvack.org, linux-kselftest@vger.kernel.org, 
    linuxppc-dev@lists.ozlabs.org, kvm@vger.kernel.org, netdev@vger.kernel.org, 
    bpf@vger.kernel.org, kexec@lists.infradead.org, 
    linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org, 
    devel@driverdev.osuosl.org, linux-efi@vger.kernel.org, 
    linux-mmc@vger.kernel.org, linux-scsi@vger.kernel.org, 
    target-devel@vger.kernel.org, linux-nfs@vger.kernel.org, 
    ceph-devel@vger.kernel.org, linux-ext4@vger.kernel.org, 
    linux-aio@kvack.org, io-uring@vger.kernel.org, 
    linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, 
    linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
    linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, 
    cluster-devel@redhat.com, ecryptfs@vger.kernel.org, 
    linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org, 
    linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org, 
    amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
    intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com, 
    linux-block@vger.kernel.org, xen-devel@lists.xenproject.org, 
    linux-cachefs@redhat.com, samba-technical@lists.samba.org, 
    intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 33/58] fs/cramfs: Utilize new
 kmap_thread()
In-Reply-To: <20201009195033.3208459-34-ira.weiny@intel.com>
Message-ID: <nycvar.YSQ.7.78.906.2010131436200.2184@knanqh.ubzr>
References: <20201009195033.3208459-1-ira.weiny@intel.com> <20201009195033.3208459-34-ira.weiny@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-Pobox-Relay-ID:
 13301A02-0D83-11EB-84D0-D152C8D8090B-78420484!pb-smtp1.pobox.com

On Fri, 9 Oct 2020, ira.weiny@intel.com wrote:

> From: Ira Weiny <ira.weiny@intel.com>
> 
> The kmap() calls in this FS are localized to a single thread.  To avoid
> the overhead of global PKRS updates, use the new kmap_thread() call.
> 
> Cc: Nicolas Pitre <nico@fluxnic.net>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>

Acked-by: Nicolas Pitre <nico@fluxnic.net>

>  fs/cramfs/inode.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> index 912308600d39..003c014a42ed 100644
> --- a/fs/cramfs/inode.c
> +++ b/fs/cramfs/inode.c
> @@ -247,8 +247,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
>  		struct page *page = pages[i];
>  
>  		if (page) {
> -			memcpy(data, kmap(page), PAGE_SIZE);
> -			kunmap(page);
> +			memcpy(data, kmap_thread(page), PAGE_SIZE);
> +			kunmap_thread(page);
>  			put_page(page);
>  		} else
>  			memset(data, 0, PAGE_SIZE);
> @@ -826,7 +826,7 @@ static int cramfs_readpage(struct file *file, struct page *page)
>  
>  	maxblock = (inode->i_size + PAGE_SIZE - 1) >> PAGE_SHIFT;
>  	bytes_filled = 0;
> -	pgdata = kmap(page);
> +	pgdata = kmap_thread(page);
>  
>  	if (page->index < maxblock) {
>  		struct super_block *sb = inode->i_sb;
> @@ -914,13 +914,13 @@ static int cramfs_readpage(struct file *file, struct page *page)
>  
>  	memset(pgdata + bytes_filled, 0, PAGE_SIZE - bytes_filled);
>  	flush_dcache_page(page);
> -	kunmap(page);
> +	kunmap_thread(page);
>  	SetPageUptodate(page);
>  	unlock_page(page);
>  	return 0;
>  
>  err:
> -	kunmap(page);
> +	kunmap_thread(page);
>  	ClearPageUptodate(page);
>  	SetPageError(page);
>  	unlock_page(page);
> -- 
> 2.28.0.rc0.12.gb6a658bd00c9
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 18:44:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 18:44:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6362.16923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSPHy-00016t-RY; Tue, 13 Oct 2020 18:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6362.16923; Tue, 13 Oct 2020 18:44:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSPHy-00016m-OV; Tue, 13 Oct 2020 18:44:46 +0000
Received: by outflank-mailman (input) for mailman id 6362;
 Tue, 13 Oct 2020 18:44:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BppY=DU=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1kSPHx-00016h-DJ
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 18:44:45 +0000
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1beafb17-dccc-4b10-841a-34d84209a6d9;
 Tue, 13 Oct 2020 18:44:42 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id i5so541220edr.5
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 11:44:42 -0700 (PDT)
X-Inumbo-ID: 1beafb17-dccc-4b10-841a-34d84209a6d9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=intel-com.20150623.gappssmtp.com; s=20150623;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=ACWQZXFIYAYZvPtA0kqqET9/h2VeNuWUmvFkuiSdGDk=;
        b=DayNIijN9rJ4wynxycYBrXAQR84x3xjByl1z2VFzt7YCa69TNkAlkIdWKG0xwP6Yp3
         3F/H2nz44gihJO9JHyXWzYL4P1dc46CKqiaNR3yRo/w37eMMjkAo/koyRg9J9cet4J5o
         8pAZLke452kN1ce4bdWIlGHc47qEL6KSVsDbeqJDAMDzDUrFyC9OKIrNu9C/VaSoREWg
         9B9an2yfiyuzSelcHqdOabrCOxSG6vx+IObNMb98zyieMNME4f/11W7Es7O7m/+Kevvk
         a0bkM4fWgVplo/2Lunr6iSwR8tSVOWAyX1+gyP4UJdvdg+R8/4qIc/xXlFMWt9dzq4A3
         WVxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=ACWQZXFIYAYZvPtA0kqqET9/h2VeNuWUmvFkuiSdGDk=;
        b=YPKpkNftpxCxIOIwgN058wXBO4xViabu12vwY9HZ070vNi2b3VicPoxpudeOxUoUI+
         BV5ujP8/m16IjwPtCSjk9/vvwG/XIBp/daazGRsWJxuesYsL03FSkYZwl7glQgWlCNKR
         ZgJvtO2DA0yFpII2NYaq1go8xOpRwzyN/sOk0n6U6ezmG+nQin29mKezjwQ6+pIjOa9k
         BuEjr/haULlWrdnDssuuPgJ6Wdu1C2km09GUwx2eGDtF2JeZJB38d1O1eHkOSjoF6YEB
         ObVMXXGd+fnvzMgEe+sAKQ5eKJy7mkExrELegKkaOLpt9Ir8MHdCR0lBQ52QKz8PYPjF
         buJQ==
X-Gm-Message-State: AOAM530/zxOfZNAH0w1XQOfQJvKCq6Tf7vPBlKXnHfKvXR9KywgnkmBO
	CcesTf3T3qAO4gzB4tFUVnk83ENCLrGQU00Q3NP3XQ==
X-Google-Smtp-Source: ABdhPJx/D4CFI8iBKDneanR6scqyHSM4b5ae7bBsGU7JbHG59bZqw8grXz+3fxsJ2hpJPfBR1HipQc3XH+/A2xgb2Ys=
X-Received: by 2002:a50:8e1e:: with SMTP id 30mr1027503edw.354.1602614681174;
 Tue, 13 Oct 2020 11:44:41 -0700 (PDT)
MIME-Version: 1.0
References: <20201009195033.3208459-1-ira.weiny@intel.com> <20201009195033.3208459-34-ira.weiny@intel.com>
In-Reply-To: <20201009195033.3208459-34-ira.weiny@intel.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Tue, 13 Oct 2020 11:44:29 -0700
Message-ID: <CAPcyv4gL3jfw4d+SJGPqAD3Dp4F_K=X3domuN4ndAA1FQDGcPg@mail.gmail.com>
Subject: Re: [PATCH RFC PKS/PMEM 33/58] fs/cramfs: Utilize new kmap_thread()
To: "Weiny, Ira" <ira.weiny@intel.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, Andy Lutomirski <luto@kernel.org>, 
	Peter Zijlstra <peterz@infradead.org>, Nicolas Pitre <nico@fluxnic.net>, X86 ML <x86@kernel.org>, 
	Dave Hansen <dave.hansen@linux.intel.com>, Fenghua Yu <fenghua.yu@intel.com>, 
	Linux Doc Mailing List <linux-doc@vger.kernel.org>, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, linux-nvdimm <linux-nvdimm@lists.01.org>, 
	linux-fsdevel <linux-fsdevel@vger.kernel.org>, Linux MM <linux-mm@kvack.org>, 
	linux-kselftest@vger.kernel.org, linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, 
	KVM list <kvm@vger.kernel.org>, Netdev <netdev@vger.kernel.org>, bpf@vger.kernel.org, 
	Kexec Mailing List <kexec@lists.infradead.org>, linux-bcache@vger.kernel.org, 
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, 
	linux-efi <linux-efi@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	linux-scsi <linux-scsi@vger.kernel.org>, target-devel@vger.kernel.org, 
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, 
	linux-ext4 <linux-ext4@vger.kernel.org>, linux-aio@kvack.org, io-uring@vger.kernel.org, 
	linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, 
	linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, 
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org, 
	linux-cifs@vger.kernel.org, linux-btrfs <linux-btrfs@vger.kernel.org>, 
	linux-afs@lists.infradead.org, linux-rdma <linux-rdma@vger.kernel.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>, intel-gfx@lists.freedesktop.org, 
	drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, 
	xen-devel <xen-devel@lists.xenproject.org>, linux-cachefs@redhat.com, 
	samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org
Content-Type: text/plain; charset="UTF-8"

On Fri, Oct 9, 2020 at 12:52 PM <ira.weiny@intel.com> wrote:
>
> From: Ira Weiny <ira.weiny@intel.com>
>
> The kmap() calls in this FS are localized to a single thread.  To avoid
> the overhead of global PKRS updates, use the new kmap_thread() call.
>
> Cc: Nicolas Pitre <nico@fluxnic.net>
> Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> ---
>  fs/cramfs/inode.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> index 912308600d39..003c014a42ed 100644
> --- a/fs/cramfs/inode.c
> +++ b/fs/cramfs/inode.c
> @@ -247,8 +247,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
>                 struct page *page = pages[i];
>
>                 if (page) {
> -                       memcpy(data, kmap(page), PAGE_SIZE);
> -                       kunmap(page);
> +                       memcpy(data, kmap_thread(page), PAGE_SIZE);
> +                       kunmap_thread(page);

Why does this need a sleepable kmap? This looks like a textbook
kmap_atomic() use case.
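For reference, the kmap_atomic() form being suggested would look roughly like this (an illustrative sketch of the hunk above, not a tested patch; the atomic mapping must not be held across anything that can sleep, which holds here since only memcpy() runs between map and unmap):

```c
		if (page) {
			void *vaddr = kmap_atomic(page);

			/* Nothing between kmap_atomic() and kunmap_atomic()
			 * may sleep; a plain memcpy() satisfies that, so the
			 * atomic (per-CPU) variant suffices here. */
			memcpy(data, vaddr, PAGE_SIZE);
			kunmap_atomic(vaddr);
			put_page(page);
		} else
			memset(data, 0, PAGE_SIZE);
```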


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 19:05:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 19:05:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6366.16937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSPbp-0002vF-Gy; Tue, 13 Oct 2020 19:05:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6366.16937; Tue, 13 Oct 2020 19:05:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSPbp-0002v8-Dl; Tue, 13 Oct 2020 19:05:17 +0000
Received: by outflank-mailman (input) for mailman id 6366;
 Tue, 13 Oct 2020 19:05:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M+FY=DU=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSPbn-0002v3-Hl
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 19:05:15 +0000
Received: from mail-io1-xd42.google.com (unknown [2607:f8b0:4864:20::d42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a94aba4d-f3f0-41af-8bb7-70789d6a06c1;
 Tue, 13 Oct 2020 19:05:14 +0000 (UTC)
Received: by mail-io1-xd42.google.com with SMTP id q25so913729ioh.4
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 12:05:14 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:2d14:a347:ac28:26f2])
 by smtp.gmail.com with ESMTPSA id s23sm653518iol.23.2020.10.13.12.05.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 13 Oct 2020 12:05:12 -0700 (PDT)
X-Inumbo-ID: a94aba4d-f3f0-41af-8bb7-70789d6a06c1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=NlAURjkMShpO6CROxGAWX/5rLBekEJFkEIs/xx5Bfko=;
        b=TT46lMBgHTOXGcVlM3V/MO0l6ssKdR4I26NDwURKfAxnQzoGUwXnGhN5I1Ely1aW8t
         W+jsoej02F9xCAXi4L8lsRrv6ucl9PXNI7o4GJqFR/FBEh9agkzS7wo7PIeOJE6bUpfh
         wKuIJW5659FDPaVI++tDqjVY0T612XRtJBEbY+YWOUhZXmwf+htFB36qhxlqqJhxcq2/
         piyPTKJnmnIlrII20x4rRbBPQM2IHBomwVPlBSoxvfIBgtpxLksPLN8vPRaDGyl23cTG
         3BFDTnINYIDTxvGIcSsh02UGydv5T/X7zmADnYh3Zjux4NBXwdr2lfytcZXjb6RcLKJ1
         vl4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=NlAURjkMShpO6CROxGAWX/5rLBekEJFkEIs/xx5Bfko=;
        b=LslgwLQ9q8bQ5o0NikR/H5+067mqrJ8P/7GEWEEFdgtOnWtthzRU1EZEoqINLhiNsw
         I9CnF1C4bVvka/cbqXa0peoClUK4bSJDu2+UegVcBxJAys22j2jte0n/vPTv7Wg6WPNs
         fELFimCDpLi1WAFHXuIZt4ePUeEo06oGDLeOhi5BtfDdaEwpsPeJdP98IWU+f6knYMeZ
         r1xA3y7VMNxKkbpcusVI3/Ju0GK3ZZTNoihVuSnV8NmTvfYftQh5EijrsM/dCFCeIxfG
         HCHp46SWdD4vWnxt2jWo+HN00okSF53p0UT70QoMk78f0Rpe6G3BJliUlyBNkqXGTK2j
         YNew==
X-Gm-Message-State: AOAM533q8YPGcTr3hCIZBp1o/B+JyeAcVPMT9yMLGWb/5naboPQgxzlC
	cE8qlZVLmoyvM+I16RgGols=
X-Google-Smtp-Source: ABdhPJxb4KRaKlOzqd6+8N5whHfIAxDnwNVcj+ARqU7kD7qL4UzCUhJs8it/CfhVVefdCuTSH6zT5Q==
X-Received: by 2002:a6b:d214:: with SMTP id q20mr237841iob.23.1602615913834;
        Tue, 13 Oct 2020 12:05:13 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: qemu-devel@nongnu.org
Cc: dgilbert@redhat.com,
	xen-devel@lists.xenproject.org,
	paul@xen.org,
	anthony.perard@citrix.com,
	sstabellini@kernel.org,
	Jason Andryuk <jandryuk@gmail.com>,
	=?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	Eduardo Habkost <ehabkost@redhat.com>
Subject: [PATCH] hw/xen: Set suppress-vmdesc for Xen machines
Date: Tue, 13 Oct 2020 15:05:06 -0400
Message-Id: <20201013190506.3325-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

xen-save-devices-state doesn't currently generate a vmdesc, so restore
always triggers "Expected vmdescription section, but got 0".  This is
not a problem when the restore comes from a file.  However, when QEMU
runs in a Linux stubdom and the state arrives over a console, EOF is
not received, which delays the restore (though it does eventually
complete).

Setting suppress-vmdesc skips looking for the vmdesc during restore and
avoids the wait.

The other approach would be to generate a vmdesc in
qemu_save_device_state.  Since COLO shares that function, and the
vmdesc is just discarded on restore, we choose to skip it instead.

Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 hw/i386/pc_piix.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 3c2ae0612b..0cf22a57ad 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -987,7 +987,7 @@ static void xenfv_4_2_machine_options(MachineClass *m)
     pc_i440fx_4_2_machine_options(m);
     m->desc = "Xen Fully-virtualized PC";
     m->max_cpus = HVM_MAX_VCPUS;
-    m->default_machine_opts = "accel=xen";
+    m->default_machine_opts = "accel=xen,suppress-vmdesc=on";
 }
 
 DEFINE_PC_MACHINE(xenfv_4_2, "xenfv-4.2", pc_xen_hvm_init,
@@ -999,7 +999,7 @@ static void xenfv_3_1_machine_options(MachineClass *m)
     m->desc = "Xen Fully-virtualized PC";
     m->alias = "xenfv";
     m->max_cpus = HVM_MAX_VCPUS;
-    m->default_machine_opts = "accel=xen";
+    m->default_machine_opts = "accel=xen,suppress-vmdesc=on";
 }
 
 DEFINE_PC_MACHINE(xenfv, "xenfv-3.1", pc_xen_hvm_init,
-- 
2.25.1
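
(As an illustrative usage note, not part of the patch: on a QEMU without this change, the same behaviour can be requested explicitly via the machine property, e.g.:)

```shell
# Hypothetical invocation: enable suppress-vmdesc by hand on the xenfv machine
qemu-system-i386 -machine xenfv,accel=xen,suppress-vmdesc=on
```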



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 19:11:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 19:11:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6369.16952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSPi8-0003nU-8G; Tue, 13 Oct 2020 19:11:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6369.16952; Tue, 13 Oct 2020 19:11:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSPi8-0003nN-4f; Tue, 13 Oct 2020 19:11:48 +0000
Received: by outflank-mailman (input) for mailman id 6369;
 Tue, 13 Oct 2020 19:11:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSPi7-0003nG-Ff
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 19:11:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6d6433f-197b-48f5-a0db-99d143194832;
 Tue, 13 Oct 2020 19:11:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSPi4-000596-Sm; Tue, 13 Oct 2020 19:11:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSPi4-0005Wi-Ln; Tue, 13 Oct 2020 19:11:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSPi4-0006yq-LI; Tue, 13 Oct 2020 19:11:44 +0000
X-Inumbo-ID: b6d6433f-197b-48f5-a0db-99d143194832
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zzAUcs5Ca2GTBjwgL7+rZNFl8RiBWITXoFOpxBheCGE=; b=oJdizNQFKslHv/MnfqI7nBREJg
	j2PP2eRvm/3JmkI3jJaBDx+OOrG6PcMewMdAA2lXrLaK55v4wc05N5f9VHfDQZwL4GHctBuiym7Nz
	3H2ti0ZAURJ4ygGGEAGLPjxZxXX0UIXXkCSJkpzXOz1gAQjDY2u3BO7AqDhTHY2GB/RU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155779-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155779: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e5a9d0e6886f521453a63a2854ff6d06fa0d028
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 19:11:44 +0000

flight 155779 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155779/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e5a9d0e6886f521453a63a2854ff6d06fa0d028
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    4 days
Failing since        155612  2020-10-09 18:01:22 Z    4 days   30 attempts
Testing same since   155779  2020-10-13 17:01:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9e5a9d0e6886f521453a63a2854ff6d06fa0d028
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 15:57:51 2020 +0100

    build: always use BASEDIR for xen sub-directory
    
    Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
    
    This removes the dependency on the xen subdirectory, preventing a
    wrong configuration file from being used when the xen subdirectory
    is duplicated for compilation tests.
    
    BASEDIR is set in xen/lib/x86/Makefile as this Makefile is directly
    called from the tools build and install process and BASEDIR is not set
    there.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a95f31376ba4ae911536c647e1a583d144ccab73
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:25 2020 -0400

    golang/xenlight: standardize generated code comment
    
    There is a standard format for generated Go code header comments, as set
    by [1]. Modify gengotypes.py to follow this standard, and use the
    additional
    
      // source: <IDL file basename>
    
    convention used by protoc-gen-go.
    
    This change is motivated by the fact that since 41aea82de2, the comment
    would include the absolute path to libxl_types.idl, therefore creating
    unintended diffs when generating code across different machines. This
    approach fixes that problem.
    
    [1] https://github.com/golang/go/issues/13560
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit c60f9e4360ec857bb0164387378e12ae8e66e189
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:24 2020 -0400

    golang/xenlight: do not hard code libxl dir in gengotypes.py
    
    Currently, in order to 'import idl' in gengotypes.py, we derive the path
    of the libxl source directory from the XEN_ROOT environment variable, and
    append that to sys.path so python can see idl.py. Since the recent move of
    libxl to tools/libs/light, this hard coding breaks the build.
    
    Instead, check for the environment variable LIBXL_SRC_DIR, but move this
    check to a try-except block (with empty except). This simply makes the
    real error more visible, and does not strictly require that
    LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
    rather than XEN_ROOT when calling gengotypes.py.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without needing to rebuild xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
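    For example, the bundling step might look something like this (section
    names as described above; the input file names on the right-hand side
    are placeholders for the administrator's own files):
    
    ```shell
    # Illustrative only: attach the unified-boot sections to a built xen.efi.
    objcopy \
        --add-section .config=xen.cfg \
        --add-section .kernel=vmlinuz \
        --add-section .ramdisk=initrd.img \
        --add-section .xsm=xsm.cfg \
        --add-section .ucode=ucode.bin \
        xen.efi xen.unified.efi
    ```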
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 19:38:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 19:38:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6373.16966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSQ7K-0005i4-Be; Tue, 13 Oct 2020 19:37:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6373.16966; Tue, 13 Oct 2020 19:37:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSQ7K-0005hx-8c; Tue, 13 Oct 2020 19:37:50 +0000
Received: by outflank-mailman (input) for mailman id 6373;
 Tue, 13 Oct 2020 19:37:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5yF7=DU=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kSQ7H-0005hs-Dx
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 19:37:48 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c3879a7-088e-4c22-b34f-fde749a72974;
 Tue, 13 Oct 2020 19:37:43 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kSQ6F-000768-Gq; Tue, 13 Oct 2020 19:36:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5yF7=DU=infradead.org=willy@srs-us1.protection.inumbo.net>)
	id 1kSQ7H-0005hs-Dx
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 19:37:48 +0000
X-Inumbo-ID: 0c3879a7-088e-4c22-b34f-fde749a72974
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0c3879a7-088e-4c22-b34f-fde749a72974;
	Tue, 13 Oct 2020 19:37:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=julNFvrjaFWkGS8+6W5JRZKKlBpdy19jf/hCoUdzdEM=; b=iOPFuszmnN+d5S+Ahd5uX1N4pM
	9RF6NzOI8Sd5OvAHWw4A5T4V7HBzV5SYmEitcrT6L9uijzivpv3AAlrlQ+EeeMBwBERzfaRW96tDf
	PKzX+Sl2uCT2ULOLsIaATERkdV1cx8TAcLFUKC9PDDXh3BUb0PThsEH5DExMv1qS5B9SNDc+ifGE8
	i5adkcq3oTTGJk72Oatktad6H8mS4gZ84Snyq0JJxGsD2588n+HRQh2ps2O3qrgewJ3m2qt+GgsvN
	M0CVp3vtxwWhumjXm1Xcpjg1G/4g9JvOqlT5q+mB/7fQkjMGPjGhStjIFbjEA/fecM1aATnK3yCCv
	pha3ulKA==;
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kSQ6F-000768-Gq; Tue, 13 Oct 2020 19:36:43 +0000
Date: Tue, 13 Oct 2020 20:36:43 +0100
From: Matthew Wilcox <willy@infradead.org>
To: Dan Williams <dan.j.williams@intel.com>
Cc: "Weiny, Ira" <ira.weiny@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Nicolas Pitre <nico@fluxnic.net>, X86 ML <x86@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>, linux-kselftest@vger.kernel.org,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	KVM list <kvm@vger.kernel.org>, Netdev <netdev@vger.kernel.org>,
	bpf@vger.kernel.org, Kexec Mailing List <kexec@lists.infradead.org>,
	linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org, linux-efi <linux-efi@vger.kernel.org>,
	linux-mmc@vger.kernel.org, linux-scsi <linux-scsi@vger.kernel.org>,
	target-devel@vger.kernel.org, linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org, linux-ext4 <linux-ext4@vger.kernel.org>,
	linux-aio@kvack.org, io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs <linux-btrfs@vger.kernel.org>,
	linux-afs@lists.infradead.org,
	linux-rdma <linux-rdma@vger.kernel.org>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel <xen-devel@lists.xenproject.org>,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 33/58] fs/cramfs: Utilize new kmap_thread()
Message-ID: <20201013193643.GK20115@casper.infradead.org>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-34-ira.weiny@intel.com>
 <CAPcyv4gL3jfw4d+SJGPqAD3Dp4F_K=X3domuN4ndAA1FQDGcPg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAPcyv4gL3jfw4d+SJGPqAD3Dp4F_K=X3domuN4ndAA1FQDGcPg@mail.gmail.com>

On Tue, Oct 13, 2020 at 11:44:29AM -0700, Dan Williams wrote:
> On Fri, Oct 9, 2020 at 12:52 PM <ira.weiny@intel.com> wrote:
> >
> > From: Ira Weiny <ira.weiny@intel.com>
> >
> > The kmap() calls in this FS are localized to a single thread.  To avoid
> > the overhead of global PKRS updates use the new kmap_thread() call.
> >
> > Cc: Nicolas Pitre <nico@fluxnic.net>
> > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> > ---
> >  fs/cramfs/inode.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> > index 912308600d39..003c014a42ed 100644
> > --- a/fs/cramfs/inode.c
> > +++ b/fs/cramfs/inode.c
> > @@ -247,8 +247,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
> >                 struct page *page = pages[i];
> >
> >                 if (page) {
> > -                       memcpy(data, kmap(page), PAGE_SIZE);
> > -                       kunmap(page);
> > +                       memcpy(data, kmap_thread(page), PAGE_SIZE);
> > +                       kunmap_thread(page);
> 
> Why does this need a sleepable kmap? This looks like a textbook
> kmap_atomic() use case.

There's a lot of code of this form.  Could we perhaps have:

static inline void copy_to_highpage(struct page *to, void *vfrom, unsigned int size)
{
	char *vto = kmap_atomic(to);

	memcpy(vto, vfrom, size);
	kunmap_atomic(vto);
}

in linux/highmem.h ?


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 19:41:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 19:41:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6375.16977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSQBF-0006YB-TC; Tue, 13 Oct 2020 19:41:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6375.16977; Tue, 13 Oct 2020 19:41:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSQBF-0006Y4-Pv; Tue, 13 Oct 2020 19:41:53 +0000
Received: by outflank-mailman (input) for mailman id 6375;
 Tue, 13 Oct 2020 19:41:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BppY=DU=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1kSQBE-0006Xz-7v
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 19:41:52 +0000
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1df82f3-c83b-46d2-ae4d-bb6f3c07291a;
 Tue, 13 Oct 2020 19:41:49 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id p15so1446167ejm.7
 for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 12:41:49 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=BppY=DU=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
	id 1kSQBE-0006Xz-7v
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 19:41:52 +0000
X-Inumbo-ID: c1df82f3-c83b-46d2-ae4d-bb6f3c07291a
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c1df82f3-c83b-46d2-ae4d-bb6f3c07291a;
	Tue, 13 Oct 2020 19:41:49 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id p15so1446167ejm.7
        for <xen-devel@lists.xenproject.org>; Tue, 13 Oct 2020 12:41:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=intel-com.20150623.gappssmtp.com; s=20150623;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=jlOzF1QhqikCoQxsvfJV6M6Hjgbrow9dB9uNs1AoGLE=;
        b=kTjS5KS3nYcf/wrvTsuRt1BvzGw557kJOoPvyzYoBgiqb4b8bKtbI2/mMLiZYyRjFR
         J9ydTuu0x47x+MEvbUBFhFH3/2QgRWgMVRtg9k6OLuYDe131Mp/cuPa30fJMGhSFjepr
         cqBBXWNROTwzHkSN49lY/psarWhxU27El5EYWXEJaiWvhTqo66L6QK98x3Y0Yvi2nF9y
         UTW/CUag9LBP+VfPW+1mn8JfLpPCSXoAzeg5FBjWoJMu0+7UEW9Ka5T8iigFwOd+JDvR
         rrctnvxFOZW164mxjZq4IMmZGZxG4l2jAUbsCf6skOMzcKIRAwJ8OakZ/jL0rfexuXSd
         7ngQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=jlOzF1QhqikCoQxsvfJV6M6Hjgbrow9dB9uNs1AoGLE=;
        b=pbJaZ+AMso4vtQUxqYRNi7j4BGwWlEVxSmWFN2+618lgCPVUalkzP7Nq6Vx6XLsMTk
         IrxHbI0k2fInFLoWd026pj2NSy+QZMqQrwyp07LAIMvj08IKpyAjF55yzCYOv5XMvBU9
         AKe1iCR6Rv9GF6J6HyWIDUXd3NDHBH5y9VlG2xN2T1vVCklESl/WmUCC3ZxrVdBioyhT
         L4F8SCI6/fyDIWDW3i74+636Z8tDRX8ZxjJtJScP8eTbdeUyrj+gmB5uNcke9ca10l+K
         /RDb3wv1PWOq3pRRRi1FSLy/SCDpyXMcOTUPMcR7E+XGApRPLy+U6OM/+KSO8FZI6kK+
         88pw==
X-Gm-Message-State: AOAM532OLBeSNbyqgP1aEerKWGxwmQjJ8IelFsgyYMQzjWwvTilZjors
	XDtGnjSfoZaLOG0bQh5+EmAzx5KQwVQZD222qtfuIQ==
X-Google-Smtp-Source: ABdhPJxGxhMZ7o2BYtIDXOupUr3pSHVXck0tIwC9HMZ0nIbp0pU6AQKRljKqMwVkoxmX0BYp74elTVEnpYvo3LWFH24=
X-Received: by 2002:a17:906:7e47:: with SMTP id z7mr1390518ejr.418.1602618108255;
 Tue, 13 Oct 2020 12:41:48 -0700 (PDT)
MIME-Version: 1.0
References: <20201009195033.3208459-1-ira.weiny@intel.com> <20201009195033.3208459-34-ira.weiny@intel.com>
 <CAPcyv4gL3jfw4d+SJGPqAD3Dp4F_K=X3domuN4ndAA1FQDGcPg@mail.gmail.com> <20201013193643.GK20115@casper.infradead.org>
In-Reply-To: <20201013193643.GK20115@casper.infradead.org>
From: Dan Williams <dan.j.williams@intel.com>
Date: Tue, 13 Oct 2020 12:41:36 -0700
Message-ID: <CAPcyv4gL70FcLe8az7ezmpcZV=bG0Cka7daKWcCdmV4GoenSZw@mail.gmail.com>
Subject: Re: [PATCH RFC PKS/PMEM 33/58] fs/cramfs: Utilize new kmap_thread()
To: Matthew Wilcox <willy@infradead.org>
Cc: "Weiny, Ira" <ira.weiny@intel.com>, Andrew Morton <akpm@linux-foundation.org>, 
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	Andy Lutomirski <luto@kernel.org>, Peter Zijlstra <peterz@infradead.org>, Nicolas Pitre <nico@fluxnic.net>, 
	X86 ML <x86@kernel.org>, Dave Hansen <dave.hansen@linux.intel.com>, 
	Fenghua Yu <fenghua.yu@intel.com>, Linux Doc Mailing List <linux-doc@vger.kernel.org>, 
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, linux-nvdimm <linux-nvdimm@lists.01.org>, 
	linux-fsdevel <linux-fsdevel@vger.kernel.org>, Linux MM <linux-mm@kvack.org>, 
	linux-kselftest@vger.kernel.org, linuxppc-dev <linuxppc-dev@lists.ozlabs.org>, 
	KVM list <kvm@vger.kernel.org>, Netdev <netdev@vger.kernel.org>, bpf@vger.kernel.org, 
	Kexec Mailing List <kexec@lists.infradead.org>, linux-bcache@vger.kernel.org, 
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org, 
	linux-efi <linux-efi@vger.kernel.org>, linux-mmc@vger.kernel.org, 
	linux-scsi <linux-scsi@vger.kernel.org>, target-devel@vger.kernel.org, 
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org, 
	linux-ext4 <linux-ext4@vger.kernel.org>, linux-aio@kvack.org, io-uring@vger.kernel.org, 
	linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org, 
	linux-ntfs-dev@lists.sourceforge.net, reiserfs-devel@vger.kernel.org, 
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org, 
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org, 
	linux-cifs@vger.kernel.org, linux-btrfs <linux-btrfs@vger.kernel.org>, 
	linux-afs@lists.infradead.org, linux-rdma <linux-rdma@vger.kernel.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>, intel-gfx@lists.freedesktop.org, 
	drbd-dev@lists.linbit.com, linux-block@vger.kernel.org, 
	xen-devel <xen-devel@lists.xenproject.org>, linux-cachefs@redhat.com, 
	samba-technical@lists.samba.org, intel-wired-lan@lists.osuosl.org
Content-Type: text/plain; charset="UTF-8"

On Tue, Oct 13, 2020 at 12:37 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Oct 13, 2020 at 11:44:29AM -0700, Dan Williams wrote:
> > On Fri, Oct 9, 2020 at 12:52 PM <ira.weiny@intel.com> wrote:
> > >
> > > From: Ira Weiny <ira.weiny@intel.com>
> > >
> > > The kmap() calls in this FS are localized to a single thread.  To avoid
> > > the overhead of global PKRS updates use the new kmap_thread() call.
> > >
> > > Cc: Nicolas Pitre <nico@fluxnic.net>
> > > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> > > ---
> > >  fs/cramfs/inode.c | 10 +++++-----
> > >  1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> > > index 912308600d39..003c014a42ed 100644
> > > --- a/fs/cramfs/inode.c
> > > +++ b/fs/cramfs/inode.c
> > > @@ -247,8 +247,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
> > >                 struct page *page = pages[i];
> > >
> > >                 if (page) {
> > > -                       memcpy(data, kmap(page), PAGE_SIZE);
> > > -                       kunmap(page);
> > > +                       memcpy(data, kmap_thread(page), PAGE_SIZE);
> > > +                       kunmap_thread(page);
> >
> > Why does this need a sleepable kmap? This looks like a textbook
> > kmap_atomic() use case.
>
> There's a lot of code of this form.  Could we perhaps have:
>
> static inline void copy_to_highpage(struct page *to, void *vfrom, unsigned int size)
> {
>         char *vto = kmap_atomic(to);
>
>         memcpy(vto, vfrom, size);
>         kunmap_atomic(vto);
> }
>
> in linux/highmem.h ?

Nice, yes, that could also replace the local ones in lib/iov_iter.c
(memcpy_{to,from}_page())


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 20:02:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 20:02:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6385.16994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSQVY-0008RB-LW; Tue, 13 Oct 2020 20:02:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6385.16994; Tue, 13 Oct 2020 20:02:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSQVY-0008R4-IX; Tue, 13 Oct 2020 20:02:52 +0000
Received: by outflank-mailman (input) for mailman id 6385;
 Tue, 13 Oct 2020 20:02:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cX9a=DU=ftp.linux.org.uk=viro@srs-us1.protection.inumbo.net>)
 id 1kSQVX-0008Qt-4I
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 20:02:51 +0000
Received: from ZenIV.linux.org.uk (unknown [2002:c35c:fd02::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2a18346-39cc-4698-aa73-2500b050fe38;
 Tue, 13 Oct 2020 20:02:49 +0000 (UTC)
Received: from viro by ZenIV.linux.org.uk with local (Exim 4.92.3 #3 (Red Hat
 Linux)) id 1kSQUX-00H96b-NT; Tue, 13 Oct 2020 20:01:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cX9a=DU=ftp.linux.org.uk=viro@srs-us1.protection.inumbo.net>)
	id 1kSQVX-0008Qt-4I
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 20:02:51 +0000
X-Inumbo-ID: a2a18346-39cc-4698-aa73-2500b050fe38
Received: from ZenIV.linux.org.uk (unknown [2002:c35c:fd02::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a2a18346-39cc-4698-aa73-2500b050fe38;
	Tue, 13 Oct 2020 20:02:49 +0000 (UTC)
Received: from viro by ZenIV.linux.org.uk with local (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kSQUX-00H96b-NT; Tue, 13 Oct 2020 20:01:49 +0000
Date: Tue, 13 Oct 2020 21:01:49 +0100
From: Al Viro <viro@zeniv.linux.org.uk>
To: Matthew Wilcox <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>,
	"Weiny, Ira" <ira.weiny@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Nicolas Pitre <nico@fluxnic.net>, X86 ML <x86@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>, linux-kselftest@vger.kernel.org,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	KVM list <kvm@vger.kernel.org>, Netdev <netdev@vger.kernel.org>,
	bpf@vger.kernel.org, Kexec Mailing List <kexec@lists.infradead.org>,
	linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org, linux-efi <linux-efi@vger.kernel.org>,
	linux-mmc@vger.kernel.org, linux-scsi <linux-scsi@vger.kernel.org>,
	target-devel@vger.kernel.org, linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org, linux-ext4 <linux-ext4@vger.kernel.org>,
	linux-aio@kvack.org, io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs <linux-btrfs@vger.kernel.org>,
	linux-afs@lists.infradead.org,
	linux-rdma <linux-rdma@vger.kernel.org>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel <xen-devel@lists.xenproject.org>,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 33/58] fs/cramfs: Utilize new kmap_thread()
Message-ID: <20201013200149.GI3576660@ZenIV.linux.org.uk>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-34-ira.weiny@intel.com>
 <CAPcyv4gL3jfw4d+SJGPqAD3Dp4F_K=X3domuN4ndAA1FQDGcPg@mail.gmail.com>
 <20201013193643.GK20115@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201013193643.GK20115@casper.infradead.org>
Sender: Al Viro <viro@ftp.linux.org.uk>

On Tue, Oct 13, 2020 at 08:36:43PM +0100, Matthew Wilcox wrote:

> static inline void copy_to_highpage(struct page *to, void *vfrom, unsigned int size)
> {
> 	char *vto = kmap_atomic(to);
> 
> 	memcpy(vto, vfrom, size);
> 	kunmap_atomic(vto);
> }
> 
> in linux/highmem.h ?

You mean, like
static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len)
{
        char *from = kmap_atomic(page);
        memcpy(to, from + offset, len);
        kunmap_atomic(from);
}

static void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len)
{
        char *to = kmap_atomic(page);
        memcpy(to + offset, from, len);
        kunmap_atomic(to);
}

static void memzero_page(struct page *page, size_t offset, size_t len)
{
        char *addr = kmap_atomic(page);
        memset(addr + offset, 0, len);
        kunmap_atomic(addr);
}

in lib/iov_iter.c?  FWIW, I don't like that "highpage" in the name and
highmem.h as location - these make perfect sense regardless of highmem;
they are normal memory operations with page + offset used instead of
a pointer...


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 20:46:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 20:46:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6389.17008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSRB2-0003VF-25; Tue, 13 Oct 2020 20:45:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6389.17008; Tue, 13 Oct 2020 20:45:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSRB1-0003V8-Ui; Tue, 13 Oct 2020 20:45:43 +0000
Received: by outflank-mailman (input) for mailman id 6389;
 Tue, 13 Oct 2020 20:45:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLCS=DU=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kSRB1-0003V3-C8
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 20:45:43 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 718a2f0f-25e0-4289-9c5e-28ef1ae7cf05;
 Tue, 13 Oct 2020 20:45:41 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 13 Oct 2020 13:45:39 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 13 Oct 2020 13:45:37 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=BLCS=DU=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
	id 1kSRB1-0003V3-C8
	for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 20:45:43 +0000
X-Inumbo-ID: 718a2f0f-25e0-4289-9c5e-28ef1ae7cf05
Received: from mga11.intel.com (unknown [192.55.52.93])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 718a2f0f-25e0-4289-9c5e-28ef1ae7cf05;
	Tue, 13 Oct 2020 20:45:41 +0000 (UTC)
IronPort-SDR: wFUUw/A4Px6p+rgwj2h6nhY0lUrBdpzPgrlPqRUfHE8d9ASnLOBl40HHyBhzL2eUaotDpblFwI
 yTox+D8LAftw==
X-IronPort-AV: E=McAfee;i="6000,8403,9773"; a="162519269"
X-IronPort-AV: E=Sophos;i="5.77,371,1596524400"; 
   d="scan'208";a="162519269"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga005.jf.intel.com ([10.7.209.41])
  by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Oct 2020 13:45:39 -0700
IronPort-SDR: 9cNNwyFW01V43IRpBEA5F+eU6gGnvdaUqvIfWWDJoXczsQwSY988pJwnA1f05GwG/v1vkNjadd
 aJS/wzoBibsg==
X-IronPort-AV: E=Sophos;i="5.77,371,1596524400"; 
   d="scan'208";a="530558193"
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
  by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Oct 2020 13:45:37 -0700
Date: Tue, 13 Oct 2020 13:45:37 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Dan Williams <dan.j.williams@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Nicolas Pitre <nico@fluxnic.net>, X86 ML <x86@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>, linux-kselftest@vger.kernel.org,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	KVM list <kvm@vger.kernel.org>, Netdev <netdev@vger.kernel.org>,
	bpf@vger.kernel.org, Kexec Mailing List <kexec@lists.infradead.org>,
	linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org, linux-efi <linux-efi@vger.kernel.org>,
	linux-mmc@vger.kernel.org, linux-scsi <linux-scsi@vger.kernel.org>,
	target-devel@vger.kernel.org, linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org, linux-ext4 <linux-ext4@vger.kernel.org>,
	linux-aio@kvack.org, io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs <linux-btrfs@vger.kernel.org>,
	linux-afs@lists.infradead.org,
	linux-rdma <linux-rdma@vger.kernel.org>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	Mailing list - DRI developers <dri-devel@lists.freedesktop.org>,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel <xen-devel@lists.xenproject.org>,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 33/58] fs/cramfs: Utilize new kmap_thread()
Message-ID: <20201013204537.GH2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-34-ira.weiny@intel.com>
 <CAPcyv4gL3jfw4d+SJGPqAD3Dp4F_K=X3domuN4ndAA1FQDGcPg@mail.gmail.com>
 <20201013193643.GK20115@casper.infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201013193643.GK20115@casper.infradead.org>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Tue, Oct 13, 2020 at 08:36:43PM +0100, Matthew Wilcox wrote:
> On Tue, Oct 13, 2020 at 11:44:29AM -0700, Dan Williams wrote:
> > On Fri, Oct 9, 2020 at 12:52 PM <ira.weiny@intel.com> wrote:
> > >
> > > From: Ira Weiny <ira.weiny@intel.com>
> > >
> > > The kmap() calls in this FS are localized to a single thread.  To avoid
> > > the overhead of global PKRS updates, use the new kmap_thread() call.
> > >
> > > Cc: Nicolas Pitre <nico@fluxnic.net>
> > > Signed-off-by: Ira Weiny <ira.weiny@intel.com>
> > > ---
> > >  fs/cramfs/inode.c | 10 +++++-----
> > >  1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/fs/cramfs/inode.c b/fs/cramfs/inode.c
> > > index 912308600d39..003c014a42ed 100644
> > > --- a/fs/cramfs/inode.c
> > > +++ b/fs/cramfs/inode.c
> > > @@ -247,8 +247,8 @@ static void *cramfs_blkdev_read(struct super_block *sb, unsigned int offset,
> > >                 struct page *page = pages[i];
> > >
> > >                 if (page) {
> > > -                       memcpy(data, kmap(page), PAGE_SIZE);
> > > -                       kunmap(page);
> > > +                       memcpy(data, kmap_thread(page), PAGE_SIZE);
> > > +                       kunmap_thread(page);
> > 
> > Why does this need a sleepable kmap? This looks like a textbook
> > kmap_atomic() use case.
> 
> There's a lot of code of this form.  Could we perhaps have:
> 
> static inline void copy_to_highpage(struct page *to, void *vfrom, unsigned int size)
> {
> 	char *vto = kmap_atomic(to);
> 
> 	memcpy(vto, vfrom, size);
> 	kunmap_atomic(vto);
> }
> 
> in linux/highmem.h ?

Christoph had the same idea.  I'll work on it.

Ira



From xen-devel-bounces@lists.xenproject.org Tue Oct 13 20:50:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 20:50:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6391.17019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSRFU-0004M6-Ko; Tue, 13 Oct 2020 20:50:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6391.17019; Tue, 13 Oct 2020 20:50:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSRFU-0004Lz-Hm; Tue, 13 Oct 2020 20:50:20 +0000
Received: by outflank-mailman (input) for mailman id 6391;
 Tue, 13 Oct 2020 20:50:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLCS=DU=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kSRFT-0004Lu-7m
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 20:50:19 +0000
Received: from mga02.intel.com (unknown [134.134.136.20])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f034b316-1aa7-4f33-a6e4-3aebf0378205;
 Tue, 13 Oct 2020 20:50:15 +0000 (UTC)
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by orsmga101.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 13 Oct 2020 13:50:14 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga006-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 13 Oct 2020 13:50:12 -0700
X-Inumbo-ID: f034b316-1aa7-4f33-a6e4-3aebf0378205
IronPort-SDR: E5xN0YDqp3avkBC6XcbpnSZJzPcNIEJDmKlkDWaeW9IlTyZqEOO28kYaLFLo/iPskFKdK8Ukub
 p1e2TW5foJ4g==
X-IronPort-AV: E=McAfee;i="6000,8403,9773"; a="152920015"
X-IronPort-AV: E=Sophos;i="5.77,371,1596524400"; 
   d="scan'208";a="152920015"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: tBU8Od+WgzKnoE84+DWDWPkln+je7r0O2Nptsn2TVKPsgnriflj5gQkH1GYPWlpzBZ1iqAqtCk
 /srhQcj0pjbQ==
X-IronPort-AV: E=Sophos;i="5.77,371,1596524400"; 
   d="scan'208";a="318439699"
Date: Tue, 13 Oct 2020 13:50:12 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Al Viro <viro@zeniv.linux.org.uk>
Cc: Matthew Wilcox <willy@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Nicolas Pitre <nico@fluxnic.net>, X86 ML <x86@kernel.org>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>,
	Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-nvdimm <linux-nvdimm@lists.01.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>, linux-kselftest@vger.kernel.org,
	linuxppc-dev <linuxppc-dev@lists.ozlabs.org>,
	KVM list <kvm@vger.kernel.org>, Netdev <netdev@vger.kernel.org>,
	bpf@vger.kernel.org, Kexec Mailing List <kexec@lists.infradead.org>,
	linux-bcache@vger.kernel.org, linux-mtd@lists.infradead.org,
	devel@driverdev.osuosl.org, linux-efi <linux-efi@vger.kernel.org>,
	linux-mmc@vger.kernel.org, linux-scsi <linux-scsi@vger.kernel.org>,
	target-devel@vger.kernel.org, linux-nfs@vger.kernel.org,
	ceph-devel@vger.kernel.org, linux-ext4 <linux-ext4@vger.kernel.org>,
	linux-aio@kvack.org, io-uring@vger.kernel.org,
	linux-erofs@lists.ozlabs.org, linux-um@lists.infradead.org,
	linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org,
	linux-btrfs <linux-btrfs@vger.kernel.org>,
	linux-afs@lists.infradead.org,
	linux-rdma <linux-rdma@vger.kernel.org>,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	Mailing list - DRI developers <dri-devel@lists.freedesktop.org>,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org,
	xen-devel <xen-devel@lists.xenproject.org>,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 33/58] fs/cramfs: Utilize new kmap_thread()
Message-ID: <20201013205012.GI2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-34-ira.weiny@intel.com>
 <CAPcyv4gL3jfw4d+SJGPqAD3Dp4F_K=X3domuN4ndAA1FQDGcPg@mail.gmail.com>
 <20201013193643.GK20115@casper.infradead.org>
 <20201013200149.GI3576660@ZenIV.linux.org.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201013200149.GI3576660@ZenIV.linux.org.uk>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Tue, Oct 13, 2020 at 09:01:49PM +0100, Al Viro wrote:
> On Tue, Oct 13, 2020 at 08:36:43PM +0100, Matthew Wilcox wrote:
> 
> > static inline void copy_to_highpage(struct page *to, void *vfrom, unsigned int size)
> > {
> > 	char *vto = kmap_atomic(to);
> > 
> > 	memcpy(vto, vfrom, size);
> > 	kunmap_atomic(vto);
> > }
> > 
> > in linux/highmem.h ?
> 
> You mean, like
> static void memcpy_from_page(char *to, struct page *page, size_t offset, size_t len)
> {
>         char *from = kmap_atomic(page);
>         memcpy(to, from + offset, len);
>         kunmap_atomic(from);
> }
> 
> static void memcpy_to_page(struct page *page, size_t offset, const char *from, size_t len)
> {
>         char *to = kmap_atomic(page);
>         memcpy(to + offset, from, len);
>         kunmap_atomic(to);
> }
> 
> static void memzero_page(struct page *page, size_t offset, size_t len)
> {
>         char *addr = kmap_atomic(page);
>         memset(addr + offset, 0, len);
>         kunmap_atomic(addr);
> }
> 
> in lib/iov_iter.c?  FWIW, I don't like that "highpage" in the name and
> highmem.h as location - these make perfect sense regardless of highmem;
> they are normal memory operations with page + offset used instead of
> a pointer...

I was thinking along those lines as well, especially given the direction
this patch set takes kmap().

Thanks for pointing these out to me.  How about I lift them to a common header?
But if not highmem.h, where?

Ira


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 20:53:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 20:53:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6395.17034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSRIk-0004Y3-9H; Tue, 13 Oct 2020 20:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6395.17034; Tue, 13 Oct 2020 20:53:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSRIk-0004Xw-6F; Tue, 13 Oct 2020 20:53:42 +0000
Received: by outflank-mailman (input) for mailman id 6395;
 Tue, 13 Oct 2020 20:53:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BLCS=DU=intel.com=ira.weiny@srs-us1.protection.inumbo.net>)
 id 1kSRIi-0004Xr-Bs
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 20:53:40 +0000
Received: from mga03.intel.com (unknown [134.134.136.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97fedcfa-fbd2-4f74-b3b0-1b3a7de6fc61;
 Tue, 13 Oct 2020 20:53:37 +0000 (UTC)
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by orsmga103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 13 Oct 2020 13:53:36 -0700
Received: from iweiny-desk2.sc.intel.com (HELO localhost) ([10.3.52.147])
 by orsmga003-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 13 Oct 2020 13:53:35 -0700
X-Inumbo-ID: 97fedcfa-fbd2-4f74-b3b0-1b3a7de6fc61
IronPort-SDR: /MmXQRYQvgBcEGfKoNhk9f2PS3vj/cqNlTfxp9L8WVqmrKA6ZTOXRPZQp7VD83GPrBTMTzbctC
 VysZucckUx8g==
X-IronPort-AV: E=McAfee;i="6000,8403,9773"; a="166045664"
X-IronPort-AV: E=Sophos;i="5.77,371,1596524400"; 
   d="scan'208";a="166045664"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: 6pC5EU5QwoXbsJ9AhSALdpEnefHex0iyEFu7THvX1eaSFNd77CbWePaQQdxa3yUvYDZ6VBgRvg
 Vwn0OOQPMwwA==
X-IronPort-AV: E=Sophos;i="5.77,371,1596524400"; 
   d="scan'208";a="313946459"
Date: Tue, 13 Oct 2020 13:52:49 -0700
From: Ira Weiny <ira.weiny@intel.com>
To: Christoph Hellwig <hch@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Fenghua Yu <fenghua.yu@intel.com>, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-nvdimm@lists.01.org,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	linux-kselftest@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	kvm@vger.kernel.org, netdev@vger.kernel.org, bpf@vger.kernel.org,
	kexec@lists.infradead.org, linux-bcache@vger.kernel.org,
	linux-mtd@lists.infradead.org, devel@driverdev.osuosl.org,
	linux-efi@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-scsi@vger.kernel.org, target-devel@vger.kernel.org,
	linux-nfs@vger.kernel.org, ceph-devel@vger.kernel.org,
	linux-ext4@vger.kernel.org, linux-aio@kvack.org,
	io-uring@vger.kernel.org, linux-erofs@lists.ozlabs.org,
	linux-um@lists.infradead.org, linux-ntfs-dev@lists.sourceforge.net,
	reiserfs-devel@vger.kernel.org,
	linux-f2fs-devel@lists.sourceforge.net, linux-nilfs@vger.kernel.org,
	cluster-devel@redhat.com, ecryptfs@vger.kernel.org,
	linux-cifs@vger.kernel.org, linux-btrfs@vger.kernel.org,
	linux-afs@lists.infradead.org, linux-rdma@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, drbd-dev@lists.linbit.com,
	linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-cachefs@redhat.com, samba-technical@lists.samba.org,
	intel-wired-lan@lists.osuosl.org
Subject: Re: [PATCH RFC PKS/PMEM 24/58] fs/freevxfs: Utilize new kmap_thread()
Message-ID: <20201013205248.GJ2046448@iweiny-DESK2.sc.intel.com>
References: <20201009195033.3208459-1-ira.weiny@intel.com>
 <20201009195033.3208459-25-ira.weiny@intel.com>
 <20201013112544.GA5249@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201013112544.GA5249@infradead.org>
User-Agent: Mutt/1.11.1 (2018-12-01)

On Tue, Oct 13, 2020 at 12:25:44PM +0100, Christoph Hellwig wrote:
> > -	kaddr = kmap(pp);
> > +	kaddr = kmap_thread(pp);
> >  	memcpy(kaddr, vip->vii_immed.vi_immed + offset, PAGE_SIZE);
> > -	kunmap(pp);
> > +	kunmap_thread(pp);
> 
> You only Cced me on this particular patch, which means I have absolutely
> no idea what kmap_thread and kunmap_thread actually do, and thus can't
> provide an informed review.

Sorry, the list was so big that I struggled with whom to CC and on which patches.

> 
> That being said, I think your life would be a lot easier if you added
> helpers for the above code sequence and its counterpart that copies
> to a potential highmem page first, as that hides the implementation
> details from most users.

Matthew Wilcox and Al Viro have suggested similar ideas.

https://lore.kernel.org/lkml/20201013205012.GI2046448@iweiny-DESK2.sc.intel.com/

Ira


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 22:31:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 22:31:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6405.17054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSSob-0004X6-Iv; Tue, 13 Oct 2020 22:30:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6405.17054; Tue, 13 Oct 2020 22:30:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSSob-0004Wz-EG; Tue, 13 Oct 2020 22:30:41 +0000
Received: by outflank-mailman (input) for mailman id 6405;
 Tue, 13 Oct 2020 22:30:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSSoa-0004Wu-1J
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 22:30:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 087b8f87-8164-4553-b7b4-264e7518fa31;
 Tue, 13 Oct 2020 22:30:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSSoV-0000vX-2c; Tue, 13 Oct 2020 22:30:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSSoU-0006GM-Ra; Tue, 13 Oct 2020 22:30:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSSoU-00074V-R6; Tue, 13 Oct 2020 22:30:34 +0000
X-Inumbo-ID: 087b8f87-8164-4553-b7b4-264e7518fa31
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/oJE4cXvbgUsvJxL1gpzLfVyJJT2cW9flaNh8oVeMm0=; b=QR3MMwbnZ+g5IhtC78VqCbv6n4
	8pUPDo9ldc1NsKTwwFBa9nWUR0PWPNATqu0TbnyftZAoEFOmBkVAeoOHpUo8ru1Qiastus9im482b
	l+c0qvirb0FmD/vky9MZEK8v/mv22z4jmzgPrZtVj0Y1HX7nBr/yUNY6ZztmHEJddXiU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155769-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155769: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:debian-di-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=a0bdf866873467271eff9a92f179ab0f77d735cb
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 22:30:34 +0000

flight 155769 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155769/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2   12 debian-di-install fail in 155754 pass in 155769
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 155754

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 155754 like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                a0bdf866873467271eff9a92f179ab0f77d735cb
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   54 days
Failing since        152659  2020-08-21 14:07:39 Z   53 days   92 attempts
Testing same since   155743  2020-10-12 17:14:05 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 43816 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 13 23:10:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 13 Oct 2020 23:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6410.17068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSTQb-0007LF-Ln; Tue, 13 Oct 2020 23:09:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6410.17068; Tue, 13 Oct 2020 23:09:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSTQb-0007L8-Im; Tue, 13 Oct 2020 23:09:57 +0000
Received: by outflank-mailman (input) for mailman id 6410;
 Tue, 13 Oct 2020 23:09:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jd8M=DU=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSTQa-0007L3-Dr
 for xen-devel@lists.xenproject.org; Tue, 13 Oct 2020 23:09:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4196821-1269-45d8-b64b-fb15e40735ac;
 Tue, 13 Oct 2020 23:09:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSTQX-0001jF-Dx; Tue, 13 Oct 2020 23:09:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSTQX-00086S-68; Tue, 13 Oct 2020 23:09:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSTQX-00015x-5h; Tue, 13 Oct 2020 23:09:53 +0000
X-Inumbo-ID: c4196821-1269-45d8-b64b-fb15e40735ac
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rpHvI1ZmKJWz95XFaHIDL/G/LQUV6PYdlwbW1OcX4bc=; b=Fq3+/4PL0LjespfZ+SfjUOmDaj
	g7nzAmPQ0E8RR1YqbuCDGEvKhOv21GJnHyjmn3f4nZPG4W80EeQ6AVIHCBY3s8plFN6CB8ZrY1qsi
	UXl+dghuJyXYGy6uQ6iH2al/CiDeV0hinIo0EklOh3hKc9i2i5DdiIMi7tuAMN7RcMAM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155782-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155782: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e5a9d0e6886f521453a63a2854ff6d06fa0d028
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 13 Oct 2020 23:09:53 +0000

flight 155782 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155782/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e5a9d0e6886f521453a63a2854ff6d06fa0d028
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    4 days
Failing since        155612  2020-10-09 18:01:22 Z    4 days   31 attempts
Testing same since   155779  2020-10-13 17:01:26 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9e5a9d0e6886f521453a63a2854ff6d06fa0d028
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 15:57:51 2020 +0100

    build: always use BASEDIR for xen sub-directory
    
    Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
    
    This removes the dependency on the xen subdirectory, preventing use of
    a wrong configuration file when the xen subdirectory is duplicated for
    compilation tests.
    
    BASEDIR is set in xen/lib/x86/Makefile as this Makefile is directly
    called from the tools build and install process and BASEDIR is not set
    there.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a95f31376ba4ae911536c647e1a583d144ccab73
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:25 2020 -0400

    golang/xenlight: standardize generated code comment
    
    There is a standard format for generated Go code header comments, as set
    by [1]. Modify gengotypes.py to follow this standard, and use the
    additional
    
      // source: <IDL file basename>
    
    convention used by protoc-gen-go.
    
    This change is motivated by the fact that since 41aea82de2, the comment
    would include the absolute path to libxl_types.idl, therefore creating
    unintended diffs when generating code across different machines. This
    approach fixes that problem.
    
    [1] https://github.com/golang/go/issues/13560
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
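
    The basename-only "// source:" convention described above can be
    sketched in Python; the helper name and exact strings here are
    illustrative, not copied from gengotypes.py itself:

```python
# Hypothetical sketch of the header a generator such as gengotypes.py
# would emit: a "Code generated ... DO NOT EDIT." line per the Go
# convention (golang/go#13560), plus protoc-gen-go's "// source:" line
# naming the input IDL by basename only, so the generated output is
# identical regardless of where the source tree is checked out.
import os


def generated_header(idl_path):
    # Using basename() rather than the full path is what avoids the
    # unintended machine-to-machine diffs mentioned in the commit.
    return (
        "// Code generated by gengotypes.py. DO NOT EDIT.\n"
        "// source: %s\n" % os.path.basename(idl_path)
    )


print(generated_header("/home/user/xen/tools/libs/light/libxl_types.idl"))
```

    Two trees with the IDL at different absolute paths then produce
    byte-identical generated files.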

commit c60f9e4360ec857bb0164387378e12ae8e66e189
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:24 2020 -0400

    golang/xenlight: do not hard code libxl dir in gengotypes.py
    
    Currently, in order to 'import idl' in gengotypes.py, we derive the path
    of the libxl source directory from the XEN_ROOT environment variable, and
    append that to sys.path so python can see idl.py. Since the recent move of
    libxl to tools/libs/light, this hard coding breaks the build.
    
    Instead, check for the environment variable LIBXL_SRC_DIR, but move this
    check to a try-except block (with empty except). This simply makes the
    real error more visible, and does not strictly require that
    LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
    rather than XEN_ROOT when calling gengotypes.py.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
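
    A minimal Python sketch of that environment check (the exact
    exception handling and comments are assumptions, not the script's
    actual code):

```python
# Hypothetical sketch of the LIBXL_SRC_DIR look-up described above:
# append the directory named by the variable to sys.path so that
# "import idl" can be resolved.  Swallowing the look-up failure keeps
# an unset variable from masking the real error: a failing
# "import idl" then reports the actual problem to the caller.
import os
import sys

try:
    sys.path.append(os.environ["LIBXL_SRC_DIR"])
except KeyError:
    pass  # fall through; the subsequent "import idl" surfaces the error
```

    The Makefile would then invoke the generator with LIBXL_SRC_DIR
    pointing at tools/libs/light (or wherever the IDL machinery lives),
    instead of deriving the path from XEN_ROOT.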

commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM
    and architecture-specific files, into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
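
    The ownership rule in this commit can be modelled with a small sketch
    (hypothetical Python, not the C code; EfiFile, read_file, and
    use_builtin_section are illustrative names): a buffer is freed only when
    we allocated it ourselves, never when it aliases firmware-owned memory
    such as a PE section inside the loaded image.

    ```python
    class EfiFile:
        def __init__(self):
            self.addr = None
            self.size = 0
            self.need_to_free = False   # set only by our own allocations

    POOL = set()  # stand-in for the UEFI allocator's live allocations

    def alloc_pages(size):
        addr = object()
        POOL.add(addr)
        return addr

    def read_file(file, size):
        """Loading from disk allocates, so the buffer must be freed later."""
        file.addr = alloc_pages(size)
        file.size = size
        file.need_to_free = True

    def use_builtin_section(file, addr, size):
        """A PE-section payload lives inside the loaded image: not ours to free."""
        file.addr = addr
        file.size = size
        file.need_to_free = False

    def free_file(file):
        if file.need_to_free:           # free only what we allocated ourselves
            POOL.remove(file.addr)
            file.need_to_free = False
        file.addr = None
    ```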

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 00:24:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 00:24:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6413.17081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSUaU-0006A4-7N; Wed, 14 Oct 2020 00:24:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6413.17081; Wed, 14 Oct 2020 00:24:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSUaU-00069x-4M; Wed, 14 Oct 2020 00:24:14 +0000
Received: by outflank-mailman (input) for mailman id 6413;
 Wed, 14 Oct 2020 00:24:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSUaR-00069s-Sq
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 00:24:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c194bfc-6dd6-4d81-997b-54bb30b8f606;
 Wed, 14 Oct 2020 00:24:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSUaP-0003qN-RI; Wed, 14 Oct 2020 00:24:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSUaP-0002Qu-JT; Wed, 14 Oct 2020 00:24:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSUaP-0000Mt-Iy; Wed, 14 Oct 2020 00:24:09 +0000
X-Inumbo-ID: 0c194bfc-6dd6-4d81-997b-54bb30b8f606
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N13/gLFm6miJnlzwBzehdjne3cyS1D0a/syh4rkSSr4=; b=jEFpPSV3xY9xfazh+H0F9xNwW/
	no2KafgoLaMgRu6eKIu2oJfu9nS89kOLn0glATUaule0+/aNKVGpgqI86H4bfwx7OIjV92pMumhiv
	fPQXezwJvvGWcEN0qha+5g8ZYzEj1P/XtP9b57jUgpjlx1Vtawb164gVzdzaOY1Tc48o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155770-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 155770: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    seabios=c685fe3ff2d402caefc1487d99bb486c4a510b8b
X-Osstest-Versions-That:
    seabios=849c5e50b6f474df6cc113130575bcdccfafcd9e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 00:24:09 +0000

flight 155770 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155770/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155136
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155136
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155136
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155136
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              c685fe3ff2d402caefc1487d99bb486c4a510b8b
baseline version:
 seabios              849c5e50b6f474df6cc113130575bcdccfafcd9e

Last test of basis   155136  2020-09-30 11:09:37 Z   13 days
Testing same since   155770  2020-10-13 09:10:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   849c5e5..c685fe3  c685fe3ff2d402caefc1487d99bb486c4a510b8b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 00:42:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 00:42:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6419.17096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSUs1-0007sy-Uc; Wed, 14 Oct 2020 00:42:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6419.17096; Wed, 14 Oct 2020 00:42:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSUs1-0007sr-RK; Wed, 14 Oct 2020 00:42:21 +0000
Received: by outflank-mailman (input) for mailman id 6419;
 Wed, 14 Oct 2020 00:42:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nbRJ=DV=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kSUs1-0007sm-4U
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 00:42:21 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5796dabe-4ae8-4bc7-afe0-1f236be85f18;
 Wed, 14 Oct 2020 00:42:19 +0000 (UTC)
X-Inumbo-ID: 5796dabe-4ae8-4bc7-afe0-1f236be85f18
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602636139;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=4YyvTEpfSD9d9EHo8mi7qbI8RZCQtnkGVyDKCF0dWu4=;
  b=UB3Gy77Tf4kMOfBhK5GSV6PhP5avF9m+Unbjt8L7kxgidM4JroR/b9QE
   GFZyfTMcei+37ahg6Q25pYJu6dBjFe5zfwW8kA0083PSS/n8GAuiMalle
   KyyxMzxWtEYNTYjyKYly1TxdrJgEvwyT/wxogOdQu8hCVpCxaQABKwJiG
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Wwh1x4Wxwz+U54MzjqMciC9ZQoaju3/DDoCRzXM59RI4J1ilVPFW79SJvEhi8vq4JoLBGPZhTK
 SnLwNizzVNl3jhi0RTP1ldq7ChVpn7J50tu907QKBIe0mmx+VpvlP6AqLt8MZyrrsDAdlK1omD
 GrTE0YBj1KkY/RczRGmKQ+CpLZcwabAUFuvJtllM8qV63h07PaJZphBKJK4iVvqL+j0wwj168L
 UvpN8BVUPeI1ws2igmxJq22gDJRWJO70xWQf+2PuNXbIH9vUM/RDuK8z4YmEXpxKwmEsTvllsT
 e6k=
X-SBRS: 2.5
X-MesageID: 29978529
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,372,1596513600"; 
   d="scan'208";a="29978529"
Subject: Re: [PATCH] hvmloader: flip "ACPI data" to ACPI NVS type for ACPI
 table region
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <andrew.cooper3@citrix.com>,
	<roger.pau@citrix.com>, <wl@xen.org>, <iwj@xenproject.org>
References: <1602586216-27371-1-git-send-email-igor.druzhinin@citrix.com>
 <56bea9a9-2509-cc39-a6fd-fb7db3e54d71@suse.com>
 <83f567a1-35f3-a227-830b-a59b53217f3b@citrix.com>
 <ad54c16b-c3b0-cff2-921f-b84a735d3149@suse.com>
 <cc0f409e-60c0-41ae-f932-f6c2d7f82baa@citrix.com>
 <5d7bf2ce-1acb-05ff-a57b-d698e15c4dd1@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <74ad734a-634b-d6f0-3829-fb3895e7d9e5@citrix.com>
Date: Wed, 14 Oct 2020 01:42:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <5d7bf2ce-1acb-05ff-a57b-d698e15c4dd1@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13/10/2020 16:54, Jan Beulich wrote:
> On 13.10.2020 17:47, Igor Druzhinin wrote:
>> On 13/10/2020 16:35, Jan Beulich wrote:
>>> On 13.10.2020 14:59, Igor Druzhinin wrote:
>>>> On 13/10/2020 13:51, Jan Beulich wrote:
>>>>> As a consequence I think we will also want to adjust Xen itself to
>>>>> automatically disable ACPI when it ends up consuming E801 data. Or
>>>>> alternatively we should consider dropping all E801-related code (as
>>>>> being inapplicable to 64-bit systems).
>>>>
>>>> I'm not following here. What Xen has to do with E801? It's a SeaBIOS implemented
>>>> call that happened to be used by QEMU option ROM. We cannot drop it from there
>>>> as it's part of BIOS spec.
>>>
>>> Any ACPI aware OS has to use E820 (and nothing else). Hence our
>>> own use of E801 should either be dropped, or lead to the
>>> disabling of ACPI. Otherwise real firmware using logic similar
>>> to SeaBIOS'es (but hopefully properly accounting for holes)
>>> could make us use ACPI table space as normal RAM.
>>
>> It's not us using it - it's a boot loader from QEMU in a form of option ROM
>> that works in 16bit pre-OS environment which is not OS and relies on e801 BIOS call.
>> I'm sure any ACPI aware OS does indeed use E820 but the problem here is not an OS.
>>
>> The option ROM is loaded using fw_cfg from QEMU so it's not our code. Technically
>> it's one foreign code (QEMU boot loader) talking to another foreign code (SeaBIOS)
>> which provides information based on E820 that we gave them.
>>
>> So I'm afraid decision to dynamically disable ACPI (whatever you mean by this)
>> cannot be made by sole usage of this call by a pre-OS boot loader.
> 
> I guess this is simply a misunderstanding. I'm not talking about
> your change or hvmloader or the boot loader at all. I was merely
> noticing a consequence of your findings on the behavior of Xen
> itself: Use of ACPI and use of E801 are exclusive of one another.

Sorry, yes. I forgot e801 is also used by Xen as an alternative to e820.

Igor
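
The expressiveness gap the thread turns on can be sketched (hypothetical
Python, not Xen or SeaBIOS code): an E820 map carries typed ranges, while
E801 reports only two free-memory sizes, so any E820-to-E801 conversion
silently drops reserved ranges such as the ACPI tables.

```python
E820_RAM, E820_ACPI = 1, 3

# A made-up E820 map with an ACPI-tables range at the top of RAM.
e820_map = [
    (0x00000000, 0x0009fc00, E820_RAM),
    (0x00100000, 0x3fe00000, E820_RAM),
    (0x3ff00000, 0x00100000, E820_ACPI),  # reserved, not free RAM
]

def e801_from_e820(e820):
    """Derive the E801 pair: KiB of RAM in [1 MiB, 16 MiB) and 64 KiB
    blocks above 16 MiB.  Non-RAM ranges simply vanish from the result,
    so a consumer treating the totals as contiguous free memory can end
    up scribbling over the ACPI tables."""
    below16_kib = 0
    above16_blocks = 0
    for base, length, typ in e820:
        if typ != E820_RAM:
            continue                      # the lossy step: holes are dropped
        end = base + length
        lo, hi = max(base, 1 << 20), min(end, 16 << 20)
        if hi > lo:
            below16_kib += (hi - lo) >> 10
        lo = max(base, 16 << 20)
        if end > lo:
            above16_blocks += (end - lo) >> 16
    return below16_kib, above16_blocks
```

Under this (assumed) conversion, nothing in the returned pair records the
ACPI range at 0x3ff00000, which is the hazard being described.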


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 01:06:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 01:06:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6422.17110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSVFP-00027U-Vf; Wed, 14 Oct 2020 01:06:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6422.17110; Wed, 14 Oct 2020 01:06:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSVFP-00027K-SG; Wed, 14 Oct 2020 01:06:31 +0000
Received: by outflank-mailman (input) for mailman id 6422;
 Wed, 14 Oct 2020 01:06:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zgx5=DV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kSVFN-00027F-Vd
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 01:06:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb69429e-fdaa-497c-9685-157bef0fd2eb;
 Wed, 14 Oct 2020 01:06:28 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 13736208B3;
 Wed, 14 Oct 2020 01:06:27 +0000 (UTC)
X-Inumbo-ID: eb69429e-fdaa-497c-9685-157bef0fd2eb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602637587;
	bh=oJhlGVtF1Tk5FO3i6heiukaqcfG80pU4yQtavGGC3OU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=hChKhc0Ds+AoHI3nIdnZUO+zSc8AvIY2mkuqkK6RlOdHUOQ3ELufx1d9mvESXB7qu
	 kG8Qqr3GuvKkhnAIVMNWOHTASlABpInTPLvuhK8dFHgWwdbFj/5ukbdnIpjWC5NvnO
	 1Xr2A+BSjS9vqjB0qmzCiWS2eWptnmW8TX141mWA=
Date: Tue, 13 Oct 2020 18:06:26 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    xen-devel@lists.xenproject.org, Alex Benn??e <alex.bennee@linaro.org>, 
    bertrand.marquis@arm.com, andre.przywara@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
In-Reply-To: <20201012213451.GA89158@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2010131759270.10386@sstabellini-ThinkPad-T480s>
References: <20200926205542.9261-1-julien@xen.org> <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com> <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org> <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s>
 <20201012213451.GA89158@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 12 Oct 2020, Elliott Mitchell wrote:
> On Mon, Oct 12, 2020 at 12:02:14PM -0700, Stefano Stabellini wrote:
> > On Sat, 10 Oct 2020, Julien Grall wrote:
> > > Therefore, I think the code should not try to find the STAO. Instead, it
> > > should check whether the SPCR table is present.
> > 
> > Yes, that makes sense, but that brings me to the next question.
> > 
> > SPCR seems to be required by SBBR, however, Masami wrote that he could
> > boot on a system without SPCR, which gets me very confused for two
> > reasons:
> > 
> > 1) Why there is no SPCR? Isn't it supposed to be mandatory? Is it
> > because there no UART on Masami's system?
> 
> I'm on different hardware, but some folks have setup Tianocore for it.
> According to Documentation/arm64/acpi_object_usage.rst,
> "Required: DSDT, FADT, GTDT, MADT, MCFG, RSDP, SPCR, XSDT".  Yet when
> booting a Linux kernel directly on the hardware it lists APIC, BGRT,
> CSRT, DSDT, DBG2, FACP, GTDT, PPTT, RSDP, and XSDT.
> 
> I don't know whether Linux's ACPI code omits mention of some required
> tables and merely panics if they're absent.  Yet I'm speculating the list
> of required tables has shrunk, SPCR is no longer required, and the
> documentation is out of date.  Perhaps SPCR was required in early Linux
> ACPI implementations, but more recent ones removed that requirement?

I have just checked and SPCR is still a mandatory table in the latest
SBBR specification. It is probably one of those cases where the firmware
claims to be SBBR compliant, but it is not, and it happens to work with
Linux.


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 01:37:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 01:37:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6427.17124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSVjM-0004kb-IJ; Wed, 14 Oct 2020 01:37:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6427.17124; Wed, 14 Oct 2020 01:37:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSVjM-0004kU-Db; Wed, 14 Oct 2020 01:37:28 +0000
Received: by outflank-mailman (input) for mailman id 6427;
 Wed, 14 Oct 2020 01:37:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qloZ=DV=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kSVjL-0004kP-TH
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 01:37:27 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6fdb012f-978f-4f81-8b27-e75e994412a8;
 Wed, 14 Oct 2020 01:37:26 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09E1b7TA098708
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 13 Oct 2020 21:37:13 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09E1b6tR098707;
 Tue, 13 Oct 2020 18:37:06 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: 6fdb012f-978f-4f81-8b27-e75e994412a8
Date: Tue, 13 Oct 2020 18:37:06 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>,
        Masami Hiramatsu <masami.hiramatsu@linaro.org>,
        xen-devel@lists.xenproject.org, Alex Benn??e <alex.bennee@linaro.org>,
        bertrand.marquis@arm.com, andre.przywara@arm.com,
        Julien Grall <jgrall@amazon.com>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
Message-ID: <20201014013706.GA98635@mattapan.m5p.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org>
 <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s>
 <20201012213451.GA89158@mattapan.m5p.com>
 <alpine.DEB.2.21.2010131759270.10386@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010131759270.10386@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Tue, Oct 13, 2020 at 06:06:26PM -0700, Stefano Stabellini wrote:
> On Mon, 12 Oct 2020, Elliott Mitchell wrote:
> > I'm on different hardware, but some folks have setup Tianocore for it.
> > According to Documentation/arm64/acpi_object_usage.rst,
> > "Required: DSDT, FADT, GTDT, MADT, MCFG, RSDP, SPCR, XSDT".  Yet when
> > booting a Linux kernel directly on the hardware it lists APIC, BGRT,
> > CSRT, DSDT, DBG2, FACP, GTDT, PPTT, RSDP, and XSDT.
> > 
> > I don't know whether Linux's ACPI code omits mention of some required
> > tables and merely panics if they're absent.  Yet I'm speculating the list
> > of required tables has shrunk, SPCR is no longer required, and the
> > documentation is out of date.  Perhaps SPCR was required in early Linux
> > ACPI implementations, but more recent ones removed that requirement?
> 
> I have just checked and SPCR is still a mandatory table in the latest
> SBBR specification. It is probably one of those cases where the firmware
> claims to be SBBR compliant, but it is not, and it happens to work with
> Linux.

Is meeting the SBBR specification supposed to be a requirement of running
Xen-ARM?

I don't see any mention of such.
`find docs xen/arch/arm -type f -print0 | xargs -0 grep -eSBBR` produces
no output.

Perhaps you've been adding this as a presumptive requirement since
previously the only hardware capable of running Xen due to an
appropriately unlocked bootloader was SBBR compliant?  If so, it seems
time to either add this as an explicit requirement and document it, or
else remove this implicit requirement and start acting as such.

The Raspberry Pi 4B has a UEFI implementation available which is based on
Tianocore.  No statement has been made of it qualifying as SBBR.  Yet it
is clearly mostly able to boot Xen; this is just exposing issues.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Oct 14 02:31:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 02:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6431.17139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSWZW-0001g6-GJ; Wed, 14 Oct 2020 02:31:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6431.17139; Wed, 14 Oct 2020 02:31:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSWZW-0001fz-DQ; Wed, 14 Oct 2020 02:31:22 +0000
Received: by outflank-mailman (input) for mailman id 6431;
 Wed, 14 Oct 2020 02:31:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSWZV-0001fX-1y
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 02:31:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f4b5e381-7a1b-4d68-916f-0f68e968137b;
 Wed, 14 Oct 2020 02:31:13 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSWZN-0007He-7Q; Wed, 14 Oct 2020 02:31:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSWZM-0006cd-So; Wed, 14 Oct 2020 02:31:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSWZM-0002Fi-SN; Wed, 14 Oct 2020 02:31:12 +0000
X-Inumbo-ID: f4b5e381-7a1b-4d68-916f-0f68e968137b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tM77SHuJ4I/S2CfimeiEC18QKX5EMBH5xDP3ugFnOtk=; b=mh7gU1dInhC3GPu6j3m3O6bF0B
	C58838pY6dqB+kYmY7EYcw6OiSHPEStfcLnqmAqTttXVNXC7jmBHoGMZzjd7l3rJKDXhtysW4QjjA
	PyKbRgl3juuC71Dma4pqHe3V7pnUc0sAouzqV1nOOzFrFc9vPZ4KV0g8a7vcOrJsUxjU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155786-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155786: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e5a9d0e6886f521453a63a2854ff6d06fa0d028
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 02:31:12 +0000

flight 155786 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155786/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e5a9d0e6886f521453a63a2854ff6d06fa0d028
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    5 days
Failing since        155612  2020-10-09 18:01:22 Z    4 days   32 attempts
Testing same since   155779  2020-10-13 17:01:26 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9e5a9d0e6886f521453a63a2854ff6d06fa0d028
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 15:57:51 2020 +0100

    build: always use BASEDIR for xen sub-directory
    
    Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
    
    This removes the dependency on the xen subdirectory, preventing use of a
    wrong configuration file when the xen subdirectory is duplicated for
    compilation tests.
    
    BASEDIR is set in xen/lib/x86/Makefile as this Makefile is directly
    called from the tools build and install process and BASEDIR is not set
    there.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a95f31376ba4ae911536c647e1a583d144ccab73
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:25 2020 -0400

    golang/xenlight: standardize generated code comment
    
    There is a standard format for generated Go code header comments, as set
    by [1]. Modify gengotypes.py to follow this standard, and use the
    additional
    
      // source: <IDL file basename>
    
    convention used by protoc-gen-go.
    
    This change is motivated by the fact that since 41aea82de2, the comment
    would include the absolute path to libxl_types.idl, therefore creating
    unintended diffs when generating code across different machines. This
    approach fixes that problem.
    
    [1] https://github.com/golang/go/issues/13560
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit c60f9e4360ec857bb0164387378e12ae8e66e189
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:24 2020 -0400

    golang/xenlight: do not hard code libxl dir in gengotypes.py
    
    Currently, in order to 'import idl' in gengotypes.py, we derive the path
    of the libxl source directory from the XEN_ROOT environment variable, and
    append that to sys.path so Python can see idl.py. Since the recent move of
    libxl to tools/libs/light, this hard-coding breaks the build.
    
    Instead, check for the environment variable LIBXL_SRC_DIR, but move this
    check to a try-except block (with empty except). This simply makes the
    real error more visible, and does not strictly require that
    LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
    rather than XEN_ROOT when calling gengotypes.py.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
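
The lookup described in this commit message might be sketched as follows.
This is a minimal, hypothetical demonstration (the path is made up, and
this standalone snippet is not the actual gengotypes.py code):

```shell
# Hypothetical sketch of the LIBXL_SRC_DIR handling described above;
# the path is illustrative, not the real libxl source location.
LIBXL_SRC_DIR=/path/to/tools/libs/light \
python3 - <<'EOF'
import os, sys
try:
    # Append the libxl source dir so a later 'import idl' can find idl.py.
    sys.path.append(os.environ["LIBXL_SRC_DIR"])
except KeyError:
    pass  # leave sys.path alone; 'import idl' then fails with the real error
print(sys.path[-1])  # prints /path/to/tools/libs/light
EOF
```

If the variable is unset, sys.path is left untouched and the subsequent
`import idl` fails visibly, which is the behaviour the commit aims for.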

commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
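
The objcopy step described above might look roughly like this; the input
file names are hypothetical, while the section names (`.config`,
`.kernel`, `.ramdisk`, `.xsm`) are the ones listed in the commit message:

```shell
# Hypothetical sketch of building a unified image from a stock xen.efi;
# the payload file names are made up for illustration.
objcopy \
    --add-section .config=xen.cfg \
    --add-section .kernel=vmlinuz \
    --add-section .ramdisk=initrd.img \
    --add-section .xsm=xsm.cfg \
    xen.efi xen-unified.efi
```

The resulting single file can then be signed for UEFI Secure Boot and
booted directly from the UEFI Boot Manager, as the message explains.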

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Used explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 03:09:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 03:09:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6434.17154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSXA9-0004VP-JK; Wed, 14 Oct 2020 03:09:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6434.17154; Wed, 14 Oct 2020 03:09:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSXA9-0004VI-FZ; Wed, 14 Oct 2020 03:09:13 +0000
Received: by outflank-mailman (input) for mailman id 6434;
 Wed, 14 Oct 2020 03:09:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSXA7-0004VB-Qt
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 03:09:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0138983f-c448-4a04-839a-2e6f31da086d;
 Wed, 14 Oct 2020 03:09:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSXA4-000869-Jj; Wed, 14 Oct 2020 03:09:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSXA4-0008ON-2c; Wed, 14 Oct 2020 03:09:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSXA4-0002TJ-27; Wed, 14 Oct 2020 03:09:08 +0000
X-Inumbo-ID: 0138983f-c448-4a04-839a-2e6f31da086d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wNrhfIDgHglHeInOp43LEeXNZd/RlQFSfNUJwTxMbRU=; b=BvEQ6sko96uR75Mrb26PmUQ1fa
	WCu48CYt1NcsSI5jp3LOSwlbB/CMO5HS8TwyqLNSaaYEsvdzuSfy/ac1cPErsn3YV8SfM0Ob9eulp
	17Ghs5HWjFpnqiQ4UEaZRtRb/DItMbjoimkmFBTlQOHmpl2JN1HV+KKmJI0XhWSXZ1j8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155777-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155777: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:debian-di-install:fail:regression
    linux-linus:build-arm64:xen-build:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=865c50e1d279671728c2936cb7680eb89355eeea
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 03:09:08 +0000

flight 155777 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155777/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    12 debian-di-install        fail REGR. vs. 152332
 build-arm64                   6 xen-build                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                865c50e1d279671728c2936cb7680eb89355eeea
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   74 days
Failing since        152366  2020-08-01 20:49:34 Z   73 days  124 attempts
Testing same since   155777  2020-10-13 12:51:31 Z    0 days    1 attempts

------------------------------------------------------------
2582 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 358289 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 05:24:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 05:24:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6442.17175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSZGa-0008Fq-VY; Wed, 14 Oct 2020 05:24:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6442.17175; Wed, 14 Oct 2020 05:24:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSZGa-0008Fj-Sk; Wed, 14 Oct 2020 05:24:00 +0000
Received: by outflank-mailman (input) for mailman id 6442;
 Wed, 14 Oct 2020 05:23:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSZGZ-0008Fb-IT
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 05:23:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e2f3651-66bc-4f3a-9e59-3bdb67208c0d;
 Wed, 14 Oct 2020 05:23:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSZGW-0002sc-16; Wed, 14 Oct 2020 05:23:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSZGV-0006YY-QN; Wed, 14 Oct 2020 05:23:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSZGV-0001hw-Pp; Wed, 14 Oct 2020 05:23:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSZGZ-0008Fb-IT
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 05:23:59 +0000
X-Inumbo-ID: 4e2f3651-66bc-4f3a-9e59-3bdb67208c0d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4e2f3651-66bc-4f3a-9e59-3bdb67208c0d;
	Wed, 14 Oct 2020 05:23:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yGSZ0rT7qIRycoo+hqeuPuBFhhgUWxbHf6ZJqMCJnIY=; b=xbOXlbK4apQI3g3uAeoBWFVghn
	PWkCi/zD1BKdhPbydTwzB0yMIEU9hvdrAvdDziQD4ldDMhLVZ+HtCS1TV1SdYCnLmqgHcEuQdAMuf
	zwyrtqikVz7B305XzyP7ahHxr/lma+ueayDxA58E4gOcZLVXq92uK8IdbtEAbnJ0eNYE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSZGW-0002sc-16; Wed, 14 Oct 2020 05:23:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSZGV-0006YY-QN; Wed, 14 Oct 2020 05:23:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSZGV-0001hw-Pp; Wed, 14 Oct 2020 05:23:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155790-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155790: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e5a9d0e6886f521453a63a2854ff6d06fa0d028
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 05:23:55 +0000

flight 155790 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155790/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e5a9d0e6886f521453a63a2854ff6d06fa0d028
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    5 days
Failing since        155612  2020-10-09 18:01:22 Z    4 days   33 attempts
Testing same since   155779  2020-10-13 17:01:26 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9e5a9d0e6886f521453a63a2854ff6d06fa0d028
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 15:57:51 2020 +0100

    build: always use BASEDIR for xen sub-directory
    
    Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
    
    This removes the dependency on the xen subdirectory, preventing use
    of a wrong configuration file when the xen subdirectory is
    duplicated for compilation tests.
    
    BASEDIR is set in xen/lib/x86/Makefile as this Makefile is directly
    called from the tools build and install process and BASEDIR is not set
    there.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a95f31376ba4ae911536c647e1a583d144ccab73
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:25 2020 -0400

    golang/xenlight: standardize generated code comment
    
    There is a standard format for generated Go code header comments, as set
    by [1]. Modify gengotypes.py to follow this standard, and use the
    additional
    
      // source: <IDL file basename>
    
    convention used by protoc-gen-go.
    
    This change is motivated by the fact that since 41aea82de2, the comment
    would include the absolute path to libxl_types.idl, therefore creating
    unintended diffs when generating code across different machines. This
    approach fixes that problem.
    
    [1] https://github.com/golang/go/issues/13560
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
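
[Editorial note: the header convention described above can be sketched as
follows. This is a hypothetical Python helper, not the actual gengotypes.py
code; the function name and marker regex are illustrative.]

```python
import re

# Hypothetical sketch of the header gengotypes.py now emits: the standard
# Go generated-code marker (golang/go#13560) plus the protoc-gen-go style
# "source:" line carrying only the IDL file basename, so the header is
# identical regardless of where the tree is checked out.
def generated_header(idl_path):
    basename = idl_path.rsplit("/", 1)[-1]
    return ("// Code generated by gengotypes.py. DO NOT EDIT.\n"
            "// source: %s\n" % basename)

# The standard marker pattern that Go tooling recognizes.
MARKER = re.compile(r"^// Code generated .* DO NOT EDIT\.$", re.M)
```

Because only the basename appears, regenerating on different machines no
longer produces spurious diffs in the header line.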

commit c60f9e4360ec857bb0164387378e12ae8e66e189
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:24 2020 -0400

    golang/xenlight: do not hard code libxl dir in gengotypes.py
    
    Currently, in order to 'import idl' in gengotypes.py, we derive the path
    of the libxl source directory from the XEN_ROOT environment variable, and
    append that to sys.path so python can see idl.py. Since the recent move of
    libxl to tools/libs/light, this hard coding breaks the build.
    
    Instead, check for the environment variable LIBXL_SRC_DIR, but move this
    check to a try-except block (with empty except). This simply makes the
    real error more visible, and does not strictly require that
    LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
    rather than XEN_ROOT when calling gengotypes.py.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
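
[Editorial note: the lookup described above can be sketched like this. It is
a simplified, hypothetical rendering of the gengotypes.py change, not the
exact code; the function name is illustrative.]

```python
import os
import sys

def extend_idl_path(environ=os.environ):
    """Append LIBXL_SRC_DIR to sys.path if it is set, so a later
    'import idl' can find libxl's idl.py.  If it is unset, fall
    through silently: the ImportError from 'import idl' itself then
    surfaces the real problem to the caller."""
    try:
        sys.path.append(environ["LIBXL_SRC_DIR"])
    except KeyError:
        pass  # empty except: do not mask the later import error
```

The Makefile then sets LIBXL_SRC_DIR (instead of XEN_ROOT) when invoking
gengotypes.py, so no libxl directory layout is hard coded in the script.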

commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>
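
[Editorial note: the control flow described above can be sketched roughly as
follows. This is a hypothetical Python rendering of the C callback chain,
with callbacks passed in for illustration; it is not libxl code.]

```python
def device_model_postconfig(vnc_enabled, qmp_query_vnc, postconfig_done):
    """When VNC is not configured, jump straight to the final callback
    (the 'goto out' in the C code), bypassing every QMP VNC query, so
    a VNC-less QEMU never receives a QMP command it cannot answer."""
    if not vnc_enabled:
        return postconfig_done()   # 'goto out': skip all VNC queries
    qmp_query_vnc()                # safe: VNC was requested, so QEMU
                                   # is expected to support the command
    return postconfig_done()
```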

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
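
[Editorial note: the packaging step described above can be sketched as
follows. This is a hypothetical helper that assembles the objcopy command
line; the output file name and the section-to-file mapping are illustrative,
not taken from the patch.]

```python
def objcopy_args(sections, src="xen.efi", dst="xen.unified.efi"):
    """Build the objcopy invocation that appends each input file as a
    named PE section of the unified EFI image."""
    args = ["objcopy"]
    for name, path in sorted(sections.items()):
        args += ["--add-section", "%s=%s" % (name, path)]
    return args + [src, dst]

# Example mapping following the commit message: config, dom0 kernel,
# dom0 initrd, and XSM config (file names here are placeholders).
UNIFIED_SECTIONS = {
    ".config": "xen.cfg",
    ".kernel": "vmlinuz",
    ".ramdisk": "initrd.img",
    ".xsm": "xsm.cfg",
}
```

At boot, Xen locates these sections in its own loaded image; if a section is
absent it falls back to loading the corresponding file from disk.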

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    were allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit-width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 05:39:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 05:39:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6447.17194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSZVQ-0000sC-Ga; Wed, 14 Oct 2020 05:39:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6447.17194; Wed, 14 Oct 2020 05:39:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSZVQ-0000s5-BV; Wed, 14 Oct 2020 05:39:20 +0000
Received: by outflank-mailman (input) for mailman id 6447;
 Wed, 14 Oct 2020 05:39:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSZVP-0000rZ-JS
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 05:39:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa04ba88-a413-49ee-ae1e-a18fc182d58f;
 Wed, 14 Oct 2020 05:39:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9BFF4AFCC;
 Wed, 14 Oct 2020 05:39:17 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kSZVP-0000rZ-JS
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 05:39:19 +0000
X-Inumbo-ID: aa04ba88-a413-49ee-ae1e-a18fc182d58f
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id aa04ba88-a413-49ee-ae1e-a18fc182d58f;
	Wed, 14 Oct 2020 05:39:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602653957;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=rwKx1/PQRXwtK7KwNu38C2wugXfvHcRdY2flAxFD91w=;
	b=t7YOMgrsuNovYjNiiI0fbC0zzmSIKOSylwsD6FLibphK3Nq6v+FM3K1dkbOe4Suj/S1bBV
	1GdGihQbgiOVU7PIFd8uDj79bEPUVhfVxNIP5V/qliYP/6QXR0I3+MFiOisvMbdBY0is8z
	0qFl/BkRNtbTF4VAOdQeE8Gcjv5FW1s=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 9BFF4AFCC;
	Wed, 14 Oct 2020 05:39:17 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.10-rc1
Date: Wed, 14 Oct 2020 07:39:17 +0200
Message-Id: <20201014053917.19251-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1-tag

xen: branch for v5.10-rc1

It contains:

- 2 small cleanup patches

- A fix for avoiding error messages when initializing MCA banks in a
  Xen dom0

- A small series for converting the Xen gntdev driver to use
  pin_user_pages*() instead of get_user_pages*()

- An intermediate fix for running as a Xen guest on Arm with KPTI
  enabled (the final solution will need a new Xen functionality)


Thanks.

Juergen

 arch/arm/include/asm/xen/page.h   |  5 +++++
 arch/arm/xen/enlighten.c          |  6 ++++--
 arch/arm64/include/asm/xen/page.h |  6 ++++++
 arch/x86/xen/enlighten_pv.c       |  9 +++++++++
 arch/x86/xen/mmu_pv.c             |  2 +-
 drivers/xen/gntdev.c              | 17 +++++++++--------
 drivers/xen/pvcalls-front.c       |  2 +-
 7 files changed, 35 insertions(+), 12 deletions(-)

Hui Su (1):
      x86/xen: Fix typo in xen_pagetable_p2m_free()

Jing Xiangfeng (1):
      xen: remove redundant initialization of variable ret

Juergen Gross (1):
      x86/xen: disable Firmware First mode for correctable memory errors

Souptick Joarder (2):
      xen/gntdev.c: Mark pages as dirty
      xen/gntdev.c: Convert get_user_pages*() to pin_user_pages*()

Stefano Stabellini (1):
      xen/arm: do not setup the runstate info page if kpti is enabled


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 06:00:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 06:00:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6449.17205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSZpy-0003OD-8c; Wed, 14 Oct 2020 06:00:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6449.17205; Wed, 14 Oct 2020 06:00:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSZpy-0003O6-5X; Wed, 14 Oct 2020 06:00:34 +0000
Received: by outflank-mailman (input) for mailman id 6449;
 Wed, 14 Oct 2020 06:00:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSZpw-0003O1-S1
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 06:00:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75ba719a-04e7-4d17-97cd-bf033d28b651;
 Wed, 14 Oct 2020 06:00:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BBA8BAFF7;
 Wed, 14 Oct 2020 06:00:30 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kSZpw-0003O1-S1
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 06:00:32 +0000
X-Inumbo-ID: 75ba719a-04e7-4d17-97cd-bf033d28b651
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 75ba719a-04e7-4d17-97cd-bf033d28b651;
	Wed, 14 Oct 2020 06:00:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602655230;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VvtUh/JjDBZzHEfOflQD1P/EAJrul9ia2HCqSusmwUQ=;
	b=lB4esmQ6ApaVOc0fSsnQL4cI3Gd+3Lqb79F2j6+1737lHV0ydGgmSTC7DgTVWJtdfwHRG0
	AtE1MZTPLDXnPhTi8RKBWV2am1ZC+aJbcRFtktx4PrgoSBFYdP3v/aJHpyXxLMooYEUhUB
	AteWyiiwtGKQQGFFJDmMe7PeW0jUxbY=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id BBA8BAFF7;
	Wed, 14 Oct 2020 06:00:30 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-3-jgross@suse.com>
 <75c5328c-c061-7ddf-a34d-0cd8b93043fc@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <dbaff977-796b-bbd3-64e5-fbe30817077f@suse.com>
Date: Wed, 14 Oct 2020 08:00:30 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <75c5328c-c061-7ddf-a34d-0cd8b93043fc@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.20 17:28, Jan Beulich wrote:
> On 12.10.2020 11:27, Juergen Gross wrote:
>> @@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
>>   
>>       d = v->domain;
>>       chn = evtchn_from_port(d, port);
>> -    spin_lock(&chn->lock);
>> -    evtchn_port_set_pending(d, v->vcpu_id, chn);
>> -    spin_unlock(&chn->lock);
>> +    if ( evtchn_tryread_lock(chn) )
>> +    {
>> +        evtchn_port_set_pending(d, v->vcpu_id, chn);
>> +        evtchn_read_unlock(chn);
>> +    }
>>   
>>    out:
>>       spin_unlock_irqrestore(&v->virq_lock, flags);
>> @@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
>>           goto out;
>>   
>>       chn = evtchn_from_port(d, port);
>> -    spin_lock(&chn->lock);
>> -    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
>> -    spin_unlock(&chn->lock);
>> +    if ( evtchn_tryread_lock(chn) )
>> +    {
>> +        evtchn_port_set_pending(d, v->vcpu_id, chn);
> 
> Is this simply a copy-and-paste mistake (re-using the code from
> send_guest_vcpu_virq()), or is there a reason you switch from
> where to obtain the vCPU to send to (in which case this ought
> to be mentioned in the description, and in which case you could
> use literal zero)?

Thanks for spotting! It's a copy-and-paste mistake.

> 
>> --- a/xen/include/xen/event.h
>> +++ b/xen/include/xen/event.h
>> @@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>>   #define bucket_from_port(d, p) \
>>       ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>>   
>> +#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS
> 
> Isn't the ceiling on simultaneous readers the number of pCPU-s,
> and the value here then needs to be NR_CPUS + 1 to accommodate
> the maximum number of readers? Furthermore, with you dropping
> the disabling of interrupts, one pCPU can acquire a read lock
> now more than once, when interrupting a locked region.

Yes, I think you are right.

So at least 2 * (NR_CPUS + 1), or even 3 * (NR_CPUS + 1) for covering
NMIs, too?

> 
>> +static inline void evtchn_write_lock(struct evtchn *evtchn)
>> +{
>> +    int val;
>> +
>> +    /* No barrier needed, atomic_add_return() is full barrier. */
>> +    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
>> +          val != EVENT_WRITE_LOCK_INC;
>> +          val = atomic_read(&evtchn->lock) )
>> +        cpu_relax();
>> +}
>> +
>> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
>> +{
>> +    arch_lock_release_barrier();
>> +
>> +    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
>> +}
>> +
>> +static inline bool evtchn_tryread_lock(struct evtchn *evtchn)
> 
> The corresponding "generic" function is read_trylock() - I'd
> suggest to use the same base name, with the evtchn_ prefix.

Okay.

> 
>> @@ -274,12 +312,12 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
>>       if ( port_is_valid(d, port) )
>>       {
>>           struct evtchn *evtchn = evtchn_from_port(d, port);
>> -        unsigned long flags;
>>   
>> -        spin_lock_irqsave(&evtchn->lock, flags);
>> -        if ( evtchn_usable(evtchn) )
>> +        if ( evtchn_tryread_lock(evtchn) && evtchn_usable(evtchn) )
>> +        {
>>               rc = evtchn_is_pending(d, evtchn);
>> -        spin_unlock_irqrestore(&evtchn->lock, flags);
>> +            evtchn_read_unlock(evtchn);
>> +        }
>>       }
> 
> This needs to be two nested if()-s, as you need to drop the lock
> even when evtchn_usable() returns false.

Oh, yes.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 06:52:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 06:52:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6451.17218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSadr-0007fd-5z; Wed, 14 Oct 2020 06:52:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6451.17218; Wed, 14 Oct 2020 06:52:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSadr-0007fW-2v; Wed, 14 Oct 2020 06:52:07 +0000
Received: by outflank-mailman (input) for mailman id 6451;
 Wed, 14 Oct 2020 06:52:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSadp-0007fR-V0
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 06:52:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1bb496d-3246-4601-8ce1-37bb73100156;
 Wed, 14 Oct 2020 06:52:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 59957B168;
 Wed, 14 Oct 2020 06:52:03 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSadp-0007fR-V0
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 06:52:05 +0000
X-Inumbo-ID: d1bb496d-3246-4601-8ce1-37bb73100156
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d1bb496d-3246-4601-8ce1-37bb73100156;
	Wed, 14 Oct 2020 06:52:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602658323;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F5O2/nRBvCUtDgjSiRuV+IVmMGNponbvp1B+Lcstrvk=;
	b=dAzR4fvAYW75jR4L5doAnsCvTop3MsDBo39HU0hQ3fgVHnZ0sJGCd2ofhV8MvRu/Ap1udq
	OUFFR4g9nrRLT+9oPlqoBJH5BZU2la5oDQyngqMIWg2L0A71803Z+f7h/OMHuVV9g9i/Sw
	o8hiliKIgW/odtYRIOtpAnoB35HQEHk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 59957B168;
	Wed, 14 Oct 2020 06:52:03 +0000 (UTC)
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-3-jgross@suse.com>
 <75c5328c-c061-7ddf-a34d-0cd8b93043fc@suse.com>
 <dbaff977-796b-bbd3-64e5-fbe30817077f@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ae78449c-fb37-7403-ee75-ef53085df26a@suse.com>
Date: Wed, 14 Oct 2020 08:52:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <dbaff977-796b-bbd3-64e5-fbe30817077f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.10.2020 08:00, Jürgen Groß wrote:
> On 13.10.20 17:28, Jan Beulich wrote:
>> On 12.10.2020 11:27, Juergen Gross wrote:
>>> --- a/xen/include/xen/event.h
>>> +++ b/xen/include/xen/event.h
>>> @@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>>>   #define bucket_from_port(d, p) \
>>>       ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>>>   
>>> +#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS
>>
>> Isn't the ceiling on simultaneous readers the number of pCPU-s,
>> and the value here then needs to be NR_CPUS + 1 to accommodate
>> the maximum number of readers? Furthermore, with you dropping
>> the disabling of interrupts, one pCPU can acquire a read lock
>> now more than once, when interrupting a locked region.
> 
> Yes, I think you are right.
> 
> So at least 2 * (NR_CPUS + 1), or even 3 * (NR_CPUS + 1) for covering
> NMIs, too?

Hard to say: Even interrupts can in principle nest. I'd go further
and use e.g. INT_MAX / 4, albeit no matter what value we choose
there'll remain a theoretical risk. I'm therefore not fully
convinced of the concept, irrespective of it providing an elegant
solution to the problem at hand. I'd be curious what others think.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 07:05:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 07:05:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6453.17229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSaq4-0000Ig-6r; Wed, 14 Oct 2020 07:04:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6453.17229; Wed, 14 Oct 2020 07:04:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSaq4-0000IY-40; Wed, 14 Oct 2020 07:04:44 +0000
Received: by outflank-mailman (input) for mailman id 6453;
 Wed, 14 Oct 2020 07:04:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSaq2-0000IS-2l
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:04:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 143d503c-88ea-4aed-bf79-e8af03dbba46;
 Wed, 14 Oct 2020 07:04:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6C44AB169;
 Wed, 14 Oct 2020 07:04:39 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSaq2-0000IS-2l
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:04:42 +0000
X-Inumbo-ID: 143d503c-88ea-4aed-bf79-e8af03dbba46
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 143d503c-88ea-4aed-bf79-e8af03dbba46;
	Wed, 14 Oct 2020 07:04:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602659079;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=TZDZBNYVYrkv3gZtVC6CWqF2fciZ1h1jgulCFx4UWnY=;
	b=vEAg3gHJgHkZQlMrzgRfLjOfMhku/i+k3joQnjDyq/wx1qT+UON7OzYtqYQMTzKyY+7KMR
	tl0TRjP2JoCZCxeqrRajk1GqH3MavzMidDUGIFXwkRyF5JnNpyE5umZDyaCs03tgXA14BT
	R4mrucoGXxmOE+c0mG1vEV7Hwet5W9k=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 6C44AB169;
	Wed, 14 Oct 2020 07:04:39 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] drop xen/hash.h
Message-ID: <c398362b-6a09-a67f-50a9-b43b73fbd265@suse.com>
Date: Wed, 14 Oct 2020 09:04:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

It has no users and hasn't been touched in 10 years.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/xen/hash.h
+++ /dev/null
@@ -1,58 +0,0 @@
-#ifndef _XEN_HASH_H
-#define _XEN_HASH_H
-/* Fast hashing routine for a long.
-   (C) 2002 William Lee Irwin III, IBM */
-
-/*
- * Knuth recommends primes in approximately golden ratio to the maximum
- * integer representable by a machine word for multiplicative hashing.
- * Chuck Lever verified the effectiveness of this technique:
- * http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf
- *
- * These primes are chosen to be bit-sparse, that is operations on
- * them can use shifts and additions instead of multiplications for
- * machines where multiplications are slow.
- */
-#if BITS_PER_LONG == 32
-/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */
-#define GOLDEN_RATIO_PRIME 0x9e370001UL
-#elif BITS_PER_LONG == 64
-/*  2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */
-#define GOLDEN_RATIO_PRIME 0x9e37fffffffc0001UL
-#else
-#error Define GOLDEN_RATIO_PRIME for your wordsize.
-#endif
-
-static inline unsigned long hash_long(unsigned long val, unsigned int bits)
-{
-    unsigned long hash = val;
-
-#if BITS_PER_LONG == 64
-    /*  Sigh, gcc can't optimise this alone like it does for 32 bits. */
-    unsigned long n = hash;
-    n <<= 18;
-    hash -= n;
-    n <<= 33;
-    hash -= n;
-    n <<= 3;
-    hash += n;
-    n <<= 3;
-    hash -= n;
-    n <<= 4;
-    hash += n;
-    n <<= 2;
-    hash += n;
-#else
-    /* On some cpus multiply is faster, on others gcc will do shifts */
-    hash *= GOLDEN_RATIO_PRIME;
-#endif
-
-    /* High bits are more random, so use them. */
-    return hash >> (BITS_PER_LONG - bits);
-}
- 
-static inline unsigned long hash_ptr(void *ptr, unsigned int bits)
-{
-    return hash_long((unsigned long)ptr, bits);
-}
-#endif /* _XEN_HASH_H */


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 07:05:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 07:05:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6454.17241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSar1-0000Pv-Ko; Wed, 14 Oct 2020 07:05:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6454.17241; Wed, 14 Oct 2020 07:05:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSar1-0000Po-Hi; Wed, 14 Oct 2020 07:05:43 +0000
Received: by outflank-mailman (input) for mailman id 6454;
 Wed, 14 Oct 2020 07:05:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSar0-0000Pi-H8
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:05:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1363ac7b-77ba-4f62-b021-16281843ecfb;
 Wed, 14 Oct 2020 07:05:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EC8B2B169;
 Wed, 14 Oct 2020 07:05:40 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSar0-0000Pi-H8
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:05:42 +0000
X-Inumbo-ID: 1363ac7b-77ba-4f62-b021-16281843ecfb
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1363ac7b-77ba-4f62-b021-16281843ecfb;
	Wed, 14 Oct 2020 07:05:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602659141;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=buQ33p2+rSRUtvOyS1U4qMMS2XBpZyaV3p+/tEWXhHU=;
	b=OJ4POUJQC6kBOwBh7vNmDpuf6vob77ZMGdYHEf+jRoxqS58RwMal0k32mheldvQ4Fm1ml5
	eKTppyF6JyklGs1zWLlSFxnjH52U5yyQVL7jTLuyyPU/jJIIs894w9c2odu4hS4yWAhyAj
	I3AtjDZroKMr5auZUSHSnoUzrrDVSp0=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EC8B2B169;
	Wed, 14 Oct 2020 07:05:40 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] kexec: some #include adjustments
Message-ID: <c786bd7e-960a-6496-ec9a-4e04a771b80a@suse.com>
Date: Wed, 14 Oct 2020 09:05:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In the context of working on x86's elf_core_save_regs() I noticed there
were far more source files getting rebuilt than I would have expected.
While the main offender looks to have been fixmap.h including kexec.h,
also drop use of elfcore.h from kexec.h.

While adjusting machine_kexec.c also replace use of guest_access.h by
domain_page.h.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/machine_kexec.c
+++ b/xen/arch/x86/machine_kexec.c
@@ -16,8 +16,9 @@
  */
 
 #include <xen/types.h>
+#include <xen/domain_page.h>
+#include <xen/elfstructs.h>
 #include <xen/kexec.h>
-#include <xen/guest_access.h>
 #include <asm/fixmap.h>
 #include <asm/hpet.h>
 #include <asm/page.h>
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -10,6 +10,7 @@
 #include <xen/lib.h>
 #include <xen/acpi.h>
 #include <xen/ctype.h>
+#include <xen/elfcore.h>
 #include <xen/errno.h>
 #include <xen/guest_access.h>
 #include <xen/param.h>
--- a/xen/include/asm-x86/fixmap.h
+++ b/xen/include/asm-x86/fixmap.h
@@ -21,7 +21,6 @@
 
 #include <xen/acpi.h>
 #include <xen/pfn.h>
-#include <xen/kexec.h>
 #include <asm/apicdef.h>
 #include <asm/msi.h>
 #include <acpi/apei.h>
--- a/xen/include/xen/elfcore.h
+++ b/xen/include/xen/elfcore.h
@@ -56,7 +56,7 @@ typedef struct
     int pr_fpvalid;              /* True if math co-processor being used.  */
 } ELF_Prstatus;
 
-typedef struct {
+typedef struct crash_xen_info {
     unsigned long xen_major_version;
     unsigned long xen_minor_version;
     unsigned long xen_extra_version;
--- a/xen/include/xen/kexec.h
+++ b/xen/include/xen/kexec.h
@@ -5,7 +5,6 @@
 
 #include <public/kexec.h>
 
-#include <xen/elfcore.h>
 #include <xen/kimage.h>
 
 typedef struct xen_kexec_reserve {
@@ -51,7 +50,7 @@ void machine_reboot_kexec(struct kexec_i
 void machine_kexec(struct kexec_image *image);
 void kexec_crash(void);
 void kexec_crash_save_cpu(void);
-crash_xen_info_t *kexec_crash_save_info(void);
+struct crash_xen_info *kexec_crash_save_info(void);
 void machine_crash_shutdown(void);
 int machine_kexec_get(xen_kexec_range_t *range);
 int machine_kexec_get_xen(xen_kexec_range_t *range);


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 07:24:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 07:24:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6458.17253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSb8k-0002Bj-3s; Wed, 14 Oct 2020 07:24:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6458.17253; Wed, 14 Oct 2020 07:24:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSb8k-0002Bc-0y; Wed, 14 Oct 2020 07:24:02 +0000
Received: by outflank-mailman (input) for mailman id 6458;
 Wed, 14 Oct 2020 07:24:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSb8i-0002BX-TS
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:24:01 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee3b042a-6037-40f7-991b-2b49f9c04d0c;
 Wed, 14 Oct 2020 07:23:59 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSb8i-0002BX-TS
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:24:01 +0000
X-Inumbo-ID: ee3b042a-6037-40f7-991b-2b49f9c04d0c
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ee3b042a-6037-40f7-991b-2b49f9c04d0c;
	Wed, 14 Oct 2020 07:23:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602660239;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=pNqNcRkuwbTFB10l6Qhk3A06eyWK8JbrjaTDj3AQDdg=;
  b=Rvmd0/EABjO5im4S3k0nrsEhU9DRqqd2mFy0OHVRtwyjvT4LEUVCmWAB
   HXCoZlhiPFOwgJKC3t29nvAcHPqWCni2BdkKbZmLcT2547TeJpHkpL2s0
   SjtH74fEIMeSPiPo2xbceiYicbD7vQOxm/fo/QOx+xeefAb9cI6n9dLlH
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CRjuGkHC4wz7MnVSM7ARhAaC3WQ8hKET1hs1dFUZz6GhXbxnmpbznyOPUarAQp4U2UnuNu0yrg
 4ZWTFYcS5QntHpgnDkw0ury+gfXH+1NX6F1YRNcWQt+VaVZJAX6rfB865gMJ2d2XKQRZ4lyHfM
 Yow5FXcv52Hb7wn1FAyRwdmH6UG4X1U5V2YALIPNuWIHdM/8ZlhWVVXUU7CyUlw36a6chiiOV3
 5xD6CclW1yFyMZDXRv3eDZ7WnDmBXSt9reSYCaH2P4VE8YJyin24m9llzUvSbf3Cp9OZ2gaknb
 IYs=
X-SBRS: 2.5
X-MesageID: 29023393
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,374,1596513600"; 
   d="scan'208";a="29023393"
Subject: Re: [PATCH] drop xen/hash.h
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
References: <c398362b-6a09-a67f-50a9-b43b73fbd265@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <62bdf46f-e60f-d0e4-afc0-5955dc7d073e@citrix.com>
Date: Wed, 14 Oct 2020 08:23:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c398362b-6a09-a67f-50a9-b43b73fbd265@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 08:04, Jan Beulich wrote:
> It has no users and hasn't been touched in 10 years.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 07:27:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 07:27:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6462.17269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSbBe-0002LX-Kt; Wed, 14 Oct 2020 07:27:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6462.17269; Wed, 14 Oct 2020 07:27:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSbBe-0002LQ-He; Wed, 14 Oct 2020 07:27:02 +0000
Received: by outflank-mailman (input) for mailman id 6462;
 Wed, 14 Oct 2020 07:27:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSbBd-0002LL-FY
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:27:01 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c922892e-4ce5-44bc-a104-776455766291;
 Wed, 14 Oct 2020 07:27:00 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSbBd-0002LL-FY
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:27:01 +0000
X-Inumbo-ID: c922892e-4ce5-44bc-a104-776455766291
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c922892e-4ce5-44bc-a104-776455766291;
	Wed, 14 Oct 2020 07:27:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602660420;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=IY90oTO/XQ2Xn0kKiPXT1+zqd4KPsoyyBd43FnKJ+Kc=;
  b=D7M/h1F7RgzYZB+cCCyEXriIHDuZbDKap8MykgKAC70etlcju3IkEp/r
   FqI0MCbgKK5X5S9XzhbJe7GylkvPiLxJKbbyMREQ0BQNy8OvQ3swuP6eA
   NQ8pPPqMvtUUtSHkGlc16HTI5bbOVdrN78GQxEKQEgf0KcCzNgUZ2DxBb
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: p/uXqkSUDTUMmxjK3WMwSIhyUmptY2jWQ5yFTnoHv9h2/JDBmP4ZPkj+/qscwjA5eS+Jep+/vQ
 05zdHIL4GxVQyfRvUgMUkN1UkQTa+I7moq/R607wUuBPNz/f5ZhcztgvHe2gOniwYLgs/z2Fb2
 ETR8UmbN3kOpo+r1lm6HQTkTFQxs8KCvEI0yWJgjIcc2fwq/+uaaVv0evymiwIqSBOClEdKKCa
 Ewwl/gzhWD8IRJIU2ZqxBXcv9nVeJHh+qd75Ocma3FIKCiIHFBHR0/L5IXEhZcQtDalPuUWBR1
 lDY=
X-SBRS: 2.5
X-MesageID: 29023519
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,374,1596513600"; 
   d="scan'208";a="29023519"
Subject: Re: [PATCH] kexec: some #include adjustments
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <c786bd7e-960a-6496-ec9a-4e04a771b80a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f00f10d5-96bd-d3f1-0800-a2415233265f@citrix.com>
Date: Wed, 14 Oct 2020 08:26:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c786bd7e-960a-6496-ec9a-4e04a771b80a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 08:05, Jan Beulich wrote:
> In the context of working on x86's elf_core_save_regs() I noticed there
> were far more source files getting rebuilt than I would have expected.
> While the main offender looks to have been fixmap.h including kexec.h,
> also drop use of elfcore.h from kexec.h.
>
> While adjusting machine_kexec.c also replace use of guest_access.h by
> domain_page.h.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Perhaps best not to ask why it was like that...

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 07:27:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 07:27:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6463.17281 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSbCS-0002SF-VG; Wed, 14 Oct 2020 07:27:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6463.17281; Wed, 14 Oct 2020 07:27:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSbCS-0002S8-Rr; Wed, 14 Oct 2020 07:27:52 +0000
Received: by outflank-mailman (input) for mailman id 6463;
 Wed, 14 Oct 2020 07:27:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSbCR-0002Rt-G6
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 07:27:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c398f271-8aa0-41ec-9b99-8201c4793c60;
 Wed, 14 Oct 2020 07:27:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9E50AD63;
 Wed, 14 Oct 2020 07:27:49 +0000 (UTC)
X-Inumbo-ID: c398f271-8aa0-41ec-9b99-8201c4793c60
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602660469;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VJSVu3cb0Lt1DOQU4xWTNDZvR80RC8TipUAb5c2w8ao=;
	b=ZHeLADdnW46w9q0TGqSJYY7sGXel9LgkbehVr4U4WIpRWah7mOuPCt/zOnqNQ54l39W8uo
	lcCkiSwki24P8lRlXdQT0nGI2NxjzGwiLTRWEkn3SKLuq0s2Wfgb218LSBlQjkb1QoITOK
	qakvwIUv4aTjE2i7bVyiUx5J0SeUbyw=
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-3-jgross@suse.com>
 <75c5328c-c061-7ddf-a34d-0cd8b93043fc@suse.com>
 <dbaff977-796b-bbd3-64e5-fbe30817077f@suse.com>
 <ae78449c-fb37-7403-ee75-ef53085df26a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <45508e8d-0853-00f3-9f83-68192c74fb72@suse.com>
Date: Wed, 14 Oct 2020 09:27:48 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <ae78449c-fb37-7403-ee75-ef53085df26a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.10.20 08:52, Jan Beulich wrote:
> On 14.10.2020 08:00, Jürgen Groß wrote:
>> On 13.10.20 17:28, Jan Beulich wrote:
>>> On 12.10.2020 11:27, Juergen Gross wrote:
>>>> --- a/xen/include/xen/event.h
>>>> +++ b/xen/include/xen/event.h
>>>> @@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>>>>    #define bucket_from_port(d, p) \
>>>>        ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>>>>    
>>>> +#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS
>>>
>>> Isn't the ceiling on simultaneous readers the number of pCPU-s,
>>> and the value here then needs to be NR_CPUS + 1 to accommodate
>>> the maximum number of readers? Furthermore, with you dropping
>>> the disabling of interrupts, one pCPU can acquire a read lock
>>> now more than once, when interrupting a locked region.
>>
>> Yes, I think you are right.
>>
>> So at least 2 * (NR_CPUS + 1), or even 3 * (NR_CPUS + 1) for covering
>> NMIs, too?
> 
> Hard to say: Even interrupts can in principle nest. I'd go further
> and use e.g. INT_MAX / 4, albeit no matter what value we choose
> there'll remain a theoretical risk. I'm therefore not fully
> convinced of the concept, irrespective of it providing an elegant
> solution to the problem at hand. I'd be curious what others think.

I just realized I should add a sanity test in evtchn_write_lock() to
exclude the case of multiple writers (this should never happen due to
all writers locking d->event_lock).

This in turn means we can set EVENT_WRITE_LOCK_INC to INT_MIN and use
negative lock values for a write-locked event channel.

Hitting this limit seems to require quite high values of NR_CPUS, even
with interrupts nesting (I'm quite sure we'll run out of stack space
way before this limit can be hit even with 16 million cpus).


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 08:18:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 08:18:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6492.17312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSbzR-0007dg-II; Wed, 14 Oct 2020 08:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6492.17312; Wed, 14 Oct 2020 08:18:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSbzR-0007dZ-F3; Wed, 14 Oct 2020 08:18:29 +0000
Received: by outflank-mailman (input) for mailman id 6492;
 Wed, 14 Oct 2020 08:18:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSbzP-0007dU-IN
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 08:18:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c150998-d186-4b5e-aee6-56e1dbcd1b6a;
 Wed, 14 Oct 2020 08:18:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSbzM-0007Cu-Lr; Wed, 14 Oct 2020 08:18:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSbzM-0006rQ-DQ; Wed, 14 Oct 2020 08:18:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSbzM-00011A-Cx; Wed, 14 Oct 2020 08:18:24 +0000
X-Inumbo-ID: 3c150998-d186-4b5e-aee6-56e1dbcd1b6a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DhBmb/JYAm2j/M8W3X2MtsKR2r0RGNPzUwv3WVaZvhk=; b=IKYflcgEL/541STf85KqNjLad3
	lnoWkTtmw1yTABiEW/fmmg8r5BwUafxio3Z5snsMdIEVIqS3fYkqfa8I5QHouB+gkg2za58Rmb2Se
	Lu9QMbYvtgPCYEXdwClZdePRbVXlbDv6JHHCPMXMmDumOn8Bg2c7AeP27NngI4ECSNd0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155796-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155796: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e5a9d0e6886f521453a63a2854ff6d06fa0d028
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 08:18:24 +0000

flight 155796 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155796/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e5a9d0e6886f521453a63a2854ff6d06fa0d028
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    5 days
Failing since        155612  2020-10-09 18:01:22 Z    4 days   34 attempts
Testing same since   155779  2020-10-13 17:01:26 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9e5a9d0e6886f521453a63a2854ff6d06fa0d028
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 15:57:51 2020 +0100

    build: always use BASEDIR for xen sub-directory
    
    Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
    
    This removes the dependency on the xen subdirectory, preventing use of
    a wrong configuration file when the xen subdirectory is duplicated for
    compilation tests.
    
    BASEDIR is set in xen/lib/x86/Makefile as this Makefile is directly
    called from the tools build and install process and BASEDIR is not set
    there.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a95f31376ba4ae911536c647e1a583d144ccab73
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:25 2020 -0400

    golang/xenlight: standardize generated code comment
    
    There is a standard format for generated Go code header comments, as set
    by [1]. Modify gengotypes.py to follow this standard, and use the
    additional
    
      // source: <IDL file basename>
    
    convention used by protoc-gen-go.
    
    This change is motivated by the fact that since 41aea82de2, the comment
    would include the absolute path to libxl_types.idl, therefore creating
    unintended diffs when generating code across different machines. This
    approach fixes that problem.
    
    [1] https://github.com/golang/go/issues/13560
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit c60f9e4360ec857bb0164387378e12ae8e66e189
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:24 2020 -0400

    golang/xenlight: do not hard code libxl dir in gengotypes.py
    
    Currently, in order to 'import idl' in gengotypes.py, we derive the path
    of the libxl source directory from the XEN_ROOT environment variable, and
    append that to sys.path so python can see idl.py. Since the recent move of
    libxl to tools/libs/light, this hard coding breaks the build.
    
    Instead, check for the environment variable LIBXL_SRC_DIR, but move this
    check to a try-except block (with empty except). This simply makes the
    real error more visible, and does not strictly require that
    LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
    rather than XEN_ROOT when calling gengotypes.py.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>

commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architectural specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without requiring rebuilding xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Used explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 08:20:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 08:20:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6496.17328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSc1B-0008QN-5E; Wed, 14 Oct 2020 08:20:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6496.17328; Wed, 14 Oct 2020 08:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSc1B-0008QG-1E; Wed, 14 Oct 2020 08:20:17 +0000
Received: by outflank-mailman (input) for mailman id 6496;
 Wed, 14 Oct 2020 08:20:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSc1A-0008PY-7F
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 08:20:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0daa6452-db51-4387-822e-75ba7ada142e;
 Wed, 14 Oct 2020 08:20:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSc11-0007En-LV; Wed, 14 Oct 2020 08:20:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSc11-0006wO-E4; Wed, 14 Oct 2020 08:20:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSc11-0001W5-Dc; Wed, 14 Oct 2020 08:20:07 +0000
X-Inumbo-ID: 0daa6452-db51-4387-822e-75ba7ada142e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=WzJQ33+E7/n4t96AwyAi1dQagpjDh/T02/jVVZxmMZM=; b=Zy3VTdFaAe/AoHc+s/WqtYQQFE
	3EezaPiDnbvCAt1idGXPtBM1VMvjNPYzSGX1d4eb8De/1fvBkz1qdeH5haYcr0yocf3aLJRdtHpum
	LQv+0A0co/4dyMTTDR8QbcKPkAGQYGlCMhucK5nfZ1+wUjo16WXupIhDkj3PLWP/pmg8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm
Message-Id: <E1kSc11-0001W5-Dc@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 08:20:07 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm
testid debian-hvm-install

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155797/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install --summary-out=tmp/155797.bisection-summary --basis-template=152631 --blessings=real,real-bisect qemu-mainline test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm debian-hvm-install
Searching for failure / basis pass:
 155769 fail [host=elbling0] / 155509 [host=huxelrebe0] 155483 [host=albana0] 155434 [host=godello1] 155318 [host=godello0] 155184 [host=albana1] 155098 [host=rimava1] 155018 [host=fiano1] 154629 [host=elbling1] 154607 [host=pinot0] 154583 ok.
Failure / basis pass flights: 155769 / 154583
(tree with no url: minios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5d1af380d3e4bd840fa324db33ca4f739136e654 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ea9af51479fe04955443f0d366376a1008f07c94-5d1af380d3e4bd840fa324db33ca4f739136e654 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#5df6c87e8080e0021e975c8387baa20cfe43c932-a0bdf866873467271eff9a92f179ab0f77d735cb git://xenbits.xen.org/osstest/seabios.git#155821a1990b6de78dde5f98fa5ab90e802021e0-849c5e5\
 0b6f474df6cc113130575bcdccfafcd9e git://xenbits.xen.org/xen.git#baa4d064e91b6d2bcfe400bdf71f83b961e4c28e-25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
Loaded 43053 nodes in revision graph
Searching for test results:
 154583 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 154607 [host=pinot0]
 154629 [host=elbling1]
 155018 [host=fiano1]
 155098 [host=rimava1]
 155184 [host=albana1]
 155318 [host=godello0]
 155434 [host=godello1]
 155483 [host=albana0]
 155509 [host=huxelrebe0]
 155518 fail irrelevant
 155544 fail irrelevant
 155585 fail irrelevant
 155613 fail irrelevant
 155645 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155665 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155675 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155695 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155703 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155713 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155732 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 155733 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155735 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 eb94b81a94bce112e6b206df846c1551aaf6cab6 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155729 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155738 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 69e95b9efed520e643b9e5b0573180aa7c5ecaca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a1d22c668a7662289b42624fe2aa92c9a23df1d2 849c5e50b6f474df6cc113130575bcdccfafcd9e 0241809bf838875615797f52af34222e5ab8e98f
 155742 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2687fdb7571a444b5af3509574b659d35ddd601 849c5e50b6f474df6cc113130575bcdccfafcd9e 93508595d588afe9dca087f95200effb7cedc81f
 155746 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 671ad7c4468f795b66b4cd8f376f1b1ce6701b63 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155747 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 36d9c2883e55c863b622b99f0ebb5143f0001401 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155749 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0d2a4545bf7e763984d3ee3e802617544cb7fc7a 849c5e50b6f474df6cc113130575bcdccfafcd9e 59b27f360e3d9dc0378c1288e67a91fa41a77158
 155750 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b23317eec4715aa62de9a6e5490a01122c8eef0e 849c5e50b6f474df6cc113130575bcdccfafcd9e bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
 155752 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d17f305a2649fccdc50956b3381456a8fd318903 849c5e50b6f474df6cc113130575bcdccfafcd9e 11852c7bb070a18c3708b4c001772a23e7d4fc27
 155743 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155753 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7cd77fb02b9a2117a56fed172f09a1820fcd6b0b 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155755 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 155761 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155764 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 213057383c9f73a17cfe635b204d88e11f918df1 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 358d57d411ee759a5a9dbf367179a9ac37faf0b3
 155766 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 92d09502676678c8ebb1ad830666b323d3c88f9d 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 4bdbf746ac9152e70f264f87db4472707da805ce
 155754 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155767 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d8053e73fb2d295279b5cc4de7dc06bd581241ca 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5bcac985498ed83d89666959175ca9c9ed561ae1
 155771 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f7f1d916b22306c35ab9c090aab5233a91b4b7f9 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5a37207df52066efefe419c677b089a654d37afc
 155776 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0fc0142828b5bc965790a1c5c6e241897d3387cb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a6732807d335239fc29bd953538affc458bcc197
 155780 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 910093d54fc758e7d69261b344fdc8da3a7bd81e
 155781 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00b51fcb1ed7d2d5c2ea2f7dc598187d17c6f2e1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2785b2a9e04abc148e1c5259f4faee708ea356f4
 155783 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 e045199c7c9c5433d7f1461a741ed539a75cbfad
 155769 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5d1af380d3e4bd840fa324db33ca4f739136e654 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155784 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 62bcdc4edbf6d8c6e8a25544d48de22ccf75310d
 155787 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8d385b247bca40ece40c9279391054bc98934325
 155789 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 c0ddc8634845aba50774add6e4b73fdaffc82656
 155792 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8d385b247bca40ece40c9279391054bc98934325
 155794 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 c0ddc8634845aba50774add6e4b73fdaffc82656
 155795 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8d385b247bca40ece40c9279391054bc98934325
 155797 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 c0ddc8634845aba50774add6e4b73fdaffc82656
Searching for interesting versions
 Result found: flight 154583 (pass), for basis pass
 Result found: flight 155743 (fail), for basis failure (at ancestor ~2)
 Repro found: flight 155755 (pass), for basis pass
 Repro found: flight 155769 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8d385b247bca40ece40c9279391054bc98934325
No revisions left to test, checking graph state.
 Result found: flight 155787 (pass), for last pass
 Result found: flight 155789 (fail), for first failure
 Repro found: flight 155792 (pass), for last pass
 Repro found: flight 155794 (fail), for first failure
 Repro found: flight 155795 (pass), for last pass
 Repro found: flight 155797 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155797/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.985918 to fit
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
155797: tolerable FAIL

flight 155797 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155797/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail baseline untested


jobs:
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 09:10:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 09:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6508.17346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kScnG-0003dy-Vi; Wed, 14 Oct 2020 09:09:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6508.17346; Wed, 14 Oct 2020 09:09:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kScnG-0003dr-SG; Wed, 14 Oct 2020 09:09:58 +0000
Received: by outflank-mailman (input) for mailman id 6508;
 Wed, 14 Oct 2020 09:09:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kScnG-0003dm-0Q
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 09:09:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3a6f0fa2-a02b-4984-b7ff-be9b86286ead;
 Wed, 14 Oct 2020 09:09:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kScnB-0008Ht-MK; Wed, 14 Oct 2020 09:09:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kScnB-0008TV-Af; Wed, 14 Oct 2020 09:09:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kScnB-0008Sh-6g; Wed, 14 Oct 2020 09:09:53 +0000
X-Inumbo-ID: 3a6f0fa2-a02b-4984-b7ff-be9b86286ead
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R4FQBeohkIX2FqFv2lrJ7jHI9RoTRCmye7oBAfb//6U=; b=V4LNyjhCM0KJp9WeSpnDxc3Tsi
	W4jlJGTyhFOMA9dcY/AxTXSYMz6ua7hPTLzfzR7KK/0FFwdQfigQH28y9ZuwF2OPXWqhgQvEU0jBi
	n3ENANeaoW1pNzrWJNDZ1CUayEhrBTG69C74p5hbZNvnrSkyrT6o/Cu7nZNqROHNGKms=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155785-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155785: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=96292515c07e3a99f5a29540ed2f257b1ff75111
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 09:09:53 +0000

flight 155785 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155785/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                96292515c07e3a99f5a29540ed2f257b1ff75111
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   54 days
Failing since        152659  2020-08-21 14:07:39 Z   53 days   93 attempts
Testing same since   155785  2020-10-13 22:39:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45190 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 09:45:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 09:45:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6513.17359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSdKm-000701-T0; Wed, 14 Oct 2020 09:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6513.17359; Wed, 14 Oct 2020 09:44:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSdKm-0006zu-QA; Wed, 14 Oct 2020 09:44:36 +0000
Received: by outflank-mailman (input) for mailman id 6513;
 Wed, 14 Oct 2020 09:44:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0aJ9=DV=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kSdKl-0006zp-BG
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 09:44:35 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4709c071-a606-4c8e-af65-8719d2203dbd;
 Wed, 14 Oct 2020 09:44:32 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.1 DYNA|AUTH)
 with ESMTPSA id e003b5w9E9iQnCc
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 14 Oct 2020 11:44:26 +0200 (CEST)
X-Inumbo-ID: 4709c071-a606-4c8e-af65-8719d2203dbd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1602668671;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=fe6M6lgrfnf/4gKNSLiJl7iE+2lMWhoUMMKrcJogwPo=;
	b=K5cxlUvJVCdf5BZpsLGmioZmgTmrQco/gZAS9f+ihzSi7AaW0l+/xl2YbdFD9WUEVG
	QOai55sN7jsW6oXw7AnbU96RyMY0cm843LkvxmvO8yWmPNbwIYYo0UsoWNRjEXwHR72k
	Wd3upn5vyvk/MyEfEv4yPiXzoRph8Vmft3vjZ71iBkHj47xSspiNkGgCsYUHRj998fmf
	3H8+jvsl8Tvq+HsdsjixPZq1XjVgpFlhvxpIDML9gcQt1WCz1zGHbuZ7W8CMAS57mMoF
	zXrsN+SA/o1x9tWmcCzd5/PE7cMfNkLE6O7lMQ42I8xY0G/vfkGTJ0use0qTnLxe8+Bl
	ur4Q==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools/libs: remove obsolete xc_map_foreign_bulk from error string
Date: Wed, 14 Oct 2020 11:44:22 +0200
Message-Id: <20201014094422.19347-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_map_foreign_bulk is an obsolete API, which is only used by
qemu-xen-traditional.

Adjust the error string to refer to the current API.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/foreignmemory/freebsd.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libs/foreignmemory/freebsd.c b/tools/libs/foreignmemory/freebsd.c
index 6e6bc4b11f..60bc87f530 100644
--- a/tools/libs/foreignmemory/freebsd.c
+++ b/tools/libs/foreignmemory/freebsd.c
@@ -66,7 +66,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED, fd, 0);
     if ( addr == MAP_FAILED )
     {
-        PERROR("xc_map_foreign_bulk: mmap failed");
+        PERROR("xenforeignmemory_map: mmap failed");
         return NULL;
     }
 
@@ -80,7 +80,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     if ( rc < 0 )
     {
         int saved_errno = errno;
-        PERROR("xc_map_foreign_bulk: ioctl failed");
+        PERROR("xenforeignmemory_map: ioctl failed");
         (void)munmap(addr, num << PAGE_SHIFT);
         errno = saved_errno;
         return NULL;


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:06:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:06:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6519.17374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSdfS-0000SR-K2; Wed, 14 Oct 2020 10:05:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6519.17374; Wed, 14 Oct 2020 10:05:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSdfS-0000SK-Gz; Wed, 14 Oct 2020 10:05:58 +0000
Received: by outflank-mailman (input) for mailman id 6519;
 Wed, 14 Oct 2020 10:05:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PlF3=DV=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1kSdfQ-0000SF-Bh
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:05:56 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 15fefb76-0b92-48dc-a36a-97eb20914a5c;
 Wed, 14 Oct 2020 10:05:54 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A3F7B30E;
 Wed, 14 Oct 2020 03:05:53 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.15.192])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8E7773F73C;
 Wed, 14 Oct 2020 03:05:52 -0700 (PDT)
X-Inumbo-ID: 15fefb76-0b92-48dc-a36a-97eb20914a5c
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH] xen/arm: Document the erratum #853709 related to Cortex A72
Date: Wed, 14 Oct 2020 12:05:41 +0200
Message-Id: <20201014100541.11687-1-michal.orzel@arm.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The workaround for Cortex-A57 erratum #852523 is already
in Xen, but Cortex-A72 erratum #853709 is not yet
documented, although it applies to the same issue.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
 docs/misc/arm/silicon-errata.txt | 1 +
 xen/arch/arm/domain.c            | 6 ++++--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
index e15d0923e9..1f18a9df58 100644
--- a/docs/misc/arm/silicon-errata.txt
+++ b/docs/misc/arm/silicon-errata.txt
@@ -50,6 +50,7 @@ stable hypervisors.
 | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_834220    |
 | ARM            | Cortex-A57      | #1319537        | N/A                     |
 | ARM            | Cortex-A72      | #1319367        | N/A                     |
+| ARM            | Cortex-A72      | #853709         | N/A                     |
 | ARM            | Cortex-A76      | #1165522        | N/A                     |
 | ARM            | Neoverse-N1     | #1165522        | N/A                     |
 | ARM            | MMU-500         | #842869         | N/A                     |
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 3b37f899b9..18cafcdda7 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -216,7 +216,8 @@ static void ctxt_switch_to(struct vcpu *n)
     WRITE_SYSREG64(n->arch.ttbr1, TTBR1_EL1);
 
     /*
-     * Erratum #852523: DACR32_EL2 must be restored before one of the
+     * Erratum #852523 (Cortex-A57) or erratum #853709 (Cortex-A72):
+     * DACR32_EL2 must be restored before one of the
      * following sysregs: SCTLR_EL1, TCR_EL1, TTBR0_EL1, TTBR1_EL1 or
      * CONTEXTIDR_EL1.
      */
@@ -245,7 +246,8 @@ static void ctxt_switch_to(struct vcpu *n)
 
     /*
      * This write to sysreg CONTEXTIDR_EL1 ensures we don't hit erratum
-     * #852523. I.e DACR32_EL2 is not correctly synchronized.
+     * #852523 (Cortex-A57) or #853709 (Cortex-A72).
+     * I.e DACR32_EL2 is not correctly synchronized.
      */
     WRITE_SYSREG(n->arch.contextidr, CONTEXTIDR_EL1);
     WRITE_SYSREG(n->arch.tpidr_el0, TPIDR_EL0);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:40:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:40:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6523.17391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeCt-0003qB-74; Wed, 14 Oct 2020 10:40:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6523.17391; Wed, 14 Oct 2020 10:40:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeCt-0003q4-45; Wed, 14 Oct 2020 10:40:31 +0000
Received: by outflank-mailman (input) for mailman id 6523;
 Wed, 14 Oct 2020 10:40:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSeCs-0003pz-5Q
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:40:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ebfe3c7-d529-4fb6-9efb-da78cc0d9b95;
 Wed, 14 Oct 2020 10:40:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 672FDACDB;
 Wed, 14 Oct 2020 10:40:27 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSeCs-0003pz-5Q
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:40:30 +0000
X-Inumbo-ID: 2ebfe3c7-d529-4fb6-9efb-da78cc0d9b95
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2ebfe3c7-d529-4fb6-9efb-da78cc0d9b95;
	Wed, 14 Oct 2020 10:40:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602672027;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ZeDaxzox18EA90MQXGDuuH+u33d5yeG2bASj7X/xAu8=;
	b=VkTBwq3m+c/b66+cF/FkPgtOPibVNRpAEyJ/4/Gn9iQA2v/dTxZUANLpeYA85cVsKNd2XM
	+b4ovW9FrF3AiSghYyd/9bo32O1xS1dcdOYx42iGGqyF5v6QUx+tG42MuCtny3RdtoAz7U
	qwcupAFKBE7hlpMUqKpviEcW+L4GQVA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 672FDACDB;
	Wed, 14 Oct 2020 10:40:27 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Trammell Hudson <hudson@trmm.net>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] EFI: adjustments after "Unified Xen
 hypervisor/kernel/initrd images"
Message-ID: <dd26ba44-66e4-8870-3359-efe93ab28f64@suse.com>
Date: Wed, 14 Oct 2020 12:40:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The first change, I believe, addresses the regression spotted by
osstest. The second is simply the result of going over the involved
code once more, in effectively a re-review of the original changes.

1: EFI/Arm64: don't clobber DTB pointer
2: EFI: further "need_to_free" adjustments

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:42:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:42:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6525.17403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeEi-0003xB-K3; Wed, 14 Oct 2020 10:42:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6525.17403; Wed, 14 Oct 2020 10:42:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeEi-0003x4-Ge; Wed, 14 Oct 2020 10:42:24 +0000
Received: by outflank-mailman (input) for mailman id 6525;
 Wed, 14 Oct 2020 10:42:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSeEh-0003wz-Jy
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:42:23 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 3613910b-71e1-4446-9ad1-422dd94331a1;
 Wed, 14 Oct 2020 10:42:23 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C46541FB;
 Wed, 14 Oct 2020 03:42:22 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1F7643F73C;
 Wed, 14 Oct 2020 03:42:22 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kSeEh-0003wz-Jy
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:42:23 +0000
X-Inumbo-ID: 3613910b-71e1-4446-9ad1-422dd94331a1
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 3613910b-71e1-4446-9ad1-422dd94331a1;
	Wed, 14 Oct 2020 10:42:23 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C46541FB;
	Wed, 14 Oct 2020 03:42:22 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.198.23])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1F7643F73C;
	Wed, 14 Oct 2020 03:42:22 -0700 (PDT)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: Warn user on cpu errata 832075
Date: Wed, 14 Oct 2020 11:41:51 +0100
Message-Id: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

When a Cortex-A57 processor is affected by CPU erratum 832075, a guest
not implementing the workaround for it could deadlock the system.
Add a warning during boot informing the user that only trusted guests
should be run on the system.
KVM already issues an equivalent warning on cores affected by this
erratum.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 6c09017515..8f9ab6dde1 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
 
 #endif
 
+#ifdef CONFIG_ARM64_ERRATUM_832075
+
+static int warn_device_load_acquire_errata(void *data)
+{
+    static bool warned = false;
+
+    if ( !warned )
+    {
+        warning_add("This CPU is affected by erratum 832075.\n"
+                    "Guests without required CPU erratum workarounds\n"
+                    "can deadlock the system!\n"
+                    "Only trusted guests should be used on this system.\n");
+        warned = true;
+    }
+
+    return 0;
+}
+
+#endif
+
 #ifdef CONFIG_ARM_SSBD
 
 enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME;
@@ -419,6 +439,7 @@ static const struct arm_cpu_capabilities arm_errata[] = {
         .capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
         MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
                    (1 << MIDR_VARIANT_SHIFT) | 2),
+        .enable = warn_device_load_acquire_errata,
     },
 #endif
 #ifdef CONFIG_ARM64_ERRATUM_834220
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:42:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:42:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6526.17415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeF0-00042k-Sf; Wed, 14 Oct 2020 10:42:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6526.17415; Wed, 14 Oct 2020 10:42:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeF0-00042c-PO; Wed, 14 Oct 2020 10:42:42 +0000
Received: by outflank-mailman (input) for mailman id 6526;
 Wed, 14 Oct 2020 10:42:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSeEz-00042L-7A
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:42:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c2bc5c7-2e50-43ec-befb-61b9dfb05aff;
 Wed, 14 Oct 2020 10:42:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 99CA9ACDB;
 Wed, 14 Oct 2020 10:42:39 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSeEz-00042L-7A
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:42:41 +0000
X-Inumbo-ID: 5c2bc5c7-2e50-43ec-befb-61b9dfb05aff
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5c2bc5c7-2e50-43ec-befb-61b9dfb05aff;
	Wed, 14 Oct 2020 10:42:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602672159;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cRFa41oy/MRRj2d9XP66ZGeYSOCcmqsShFbW/d6kw5E=;
	b=T2Zl1OL0dUnVlGcZDrduV1LWkqSqOSDqUwT8bX0CZ306OOTdhESZ9rguLj05wHd2IHEYSw
	livzStP6TC4GvuH/kLpkC3YLEYjv0JspPVENPg+BA/l7X2uFn7crD/Bw1BlmOTSdBkfidb
	rdGqK80jWks1u3y7WoJg9CfuPHxaPOo=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 99CA9ACDB;
	Wed, 14 Oct 2020 10:42:39 +0000 (UTC)
Subject: [PATCH 1/2] EFI/Arm64: don't clobber DTB pointer
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Trammell Hudson <hudson@trmm.net>
References: <dd26ba44-66e4-8870-3359-efe93ab28f64@suse.com>
Message-ID: <825ded00-3971-4e56-7bef-324ee5531f70@suse.com>
Date: Wed, 14 Oct 2020 12:42:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <dd26ba44-66e4-8870-3359-efe93ab28f64@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

read_section() needs to be more careful: efi_arch_use_config_file()
may have found a DTB file (but without modules), and there may be no DTB
specified in the EFI config file. In this case the pointer to the blob
must not be overwritten with NULL when no ".dtb" section is present
either.

Fixes: 8a71d50ed40b ("efi: Enable booting unified hypervisor/kernel/initrd images")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -637,11 +637,14 @@ static bool __init read_section(const EF
                                 const CHAR16 *name, struct file *file,
                                 const char *options)
 {
-    file->ptr = pe_find_section(image->ImageBase, image->ImageSize,
-                                name, &file->size);
-    if ( !file->ptr )
+    const void *ptr = pe_find_section(image->ImageBase, image->ImageSize,
+                                      name, &file->size);
+
+    if ( !ptr )
         return false;
 
+    file->ptr = ptr;
+
     handle_file_info(name, file, options);
 
     return true;
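To illustrate the pattern of the fix outside of the EFI code: the caller-visible descriptor is updated only after the lookup has succeeded, so a pointer established earlier (e.g. a DTB found via the config table) survives a failed section lookup. The sketch below is plain C with invented names (find_section() stands in for pe_find_section()), not the actual Xen code:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for pe_find_section(): return a pointer into the image for
 * the named section, or NULL if it is absent.  Purely illustrative. */
static const void *find_section(const char *image, const char *name,
                                size_t *size)
{
    const char *hit = strstr(image, name);

    if ( !hit )
        return NULL;
    *size = strlen(hit);
    return hit;
}

struct file_desc {
    const void *ptr;
    size_t size;
};

/* The pattern of the fix: read into a local first, and store into the
 * caller-visible descriptor only once the lookup has succeeded, so a
 * previously established pointer is not clobbered with NULL. */
static bool read_section_sketch(const char *image, const char *name,
                                struct file_desc *file)
{
    const void *ptr = find_section(image, name, &file->size);

    if ( !ptr )
        return false;

    file->ptr = ptr;
    return true;
}
```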



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:42:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:42:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6527.17427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeFH-00049v-55; Wed, 14 Oct 2020 10:42:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6527.17427; Wed, 14 Oct 2020 10:42:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeFH-00049n-1Z; Wed, 14 Oct 2020 10:42:59 +0000
Received: by outflank-mailman (input) for mailman id 6527;
 Wed, 14 Oct 2020 10:42:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSeFF-00048h-IK
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:42:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f6a0da1-2f6b-4cdf-87e6-5ab004594f8c;
 Wed, 14 Oct 2020 10:42:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1E6C2ACDB;
 Wed, 14 Oct 2020 10:42:56 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSeFF-00048h-IK
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:42:57 +0000
X-Inumbo-ID: 9f6a0da1-2f6b-4cdf-87e6-5ab004594f8c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9f6a0da1-2f6b-4cdf-87e6-5ab004594f8c;
	Wed, 14 Oct 2020 10:42:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602672176;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0GKitqh2GXUK2H77YhdL4MLcrHgQv70vW5uQkhiCEro=;
	b=mDwuiKBum9y3/EzdWK472TyufrXaCbvtJg4bp73d2uny/EjE9UEFZ0CxxXWu7FhUI8HlBV
	fI+JM8hs8OF1Crb2RSwL/9CEQ1NM3MsSts5oZjZWAo8Cvg+/sBy9s3phdZeB91pR9/3hHw
	mM81Bvdr2n3Zlr42Xx2gmrNTW0sAocQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 1E6C2ACDB;
	Wed, 14 Oct 2020 10:42:56 +0000 (UTC)
Subject: [PATCH 2/2] EFI: further "need_to_free" adjustments
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Trammell Hudson <hudson@trmm.net>
References: <dd26ba44-66e4-8870-3359-efe93ab28f64@suse.com>
Message-ID: <a0e76e78-1f66-9825-b35b-86caed7da961@suse.com>
Date: Wed, 14 Oct 2020 12:42:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <dd26ba44-66e4-8870-3359-efe93ab28f64@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When processing "chain" directives, the previously loaded config file
gets freed. This needs to be recorded accordingly, such that no error
path tries to free the same block of memory a second time.

Furthermore, .addr or .size being zero no longer has any bearing on
whether an allocated chunk needs freeing. Drop the respective
assignment from read_file(), and replace the one in Arm's
efi_arch_use_config_file() (to sensibly retain the comment).

Fixes: 04be2c3a0678 ("efi/boot.c: add file.need_to_free")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/arm/efi/efi-boot.h
+++ b/xen/arch/arm/efi/efi-boot.h
@@ -591,7 +591,7 @@ static bool __init efi_arch_use_config_f
 
     fdt = lookup_fdt_config_table(SystemTable);
     dtbfile.ptr = fdt;
-    dtbfile.size = 0;  /* Config table memory can't be freed, so set size to 0 */
+    dtbfile.need_to_free = false; /* Config table memory can't be freed. */
     if ( !fdt || fdt_node_offset_by_compatible(fdt, 0, "multiboot,module") < 0 )
     {
         /*
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -601,10 +601,7 @@ static bool __init read_file(EFI_FILE_HA
                                     PFN_UP(size), &file->addr);
     }
     if ( EFI_ERROR(ret) )
-    {
-        file->addr = 0;
         what = what ?: L"Allocation";
-    }
     else
     {
         file->need_to_free = true;
@@ -1271,8 +1268,11 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SY
             name.s = get_value(&cfg, "global", "chain");
             if ( !name.s )
                 break;
-            efi_bs->FreePages(cfg.addr, PFN_UP(cfg.size));
-            cfg.addr = 0;
+            if ( cfg.need_to_free )
+            {
+                efi_bs->FreePages(cfg.addr, PFN_UP(cfg.size));
+                cfg.need_to_free = false;
+            }
             if ( !read_file(dir_handle, s2w(&name), &cfg, NULL) )
             {
                 PrintStr(L"Chained configuration file '");
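The core of the change is a common idiom: pair the owning pointer with a need_to_free flag and clear the flag as part of the release, so that no path can free the same block twice and memory that was never allocated (such as the config-table DTB) is never freed at all. A minimal stand-alone sketch with invented names, free() standing in for FreePages():

```c
#include <stdbool.h>
#include <stdlib.h>

/* Minimal model of the boot-time file descriptor; names are invented
 * for illustration and are not the actual Xen structures. */
struct cfg_file {
    void *addr;
    size_t size;
    bool need_to_free;
};

/* Release the backing memory exactly once: the flag, not .addr or
 * .size, decides whether anything has to be freed, and it is cleared
 * as part of the release so a second call is a no-op. */
static void release_cfg(struct cfg_file *cfg)
{
    if ( cfg->need_to_free )
    {
        free(cfg->addr);
        cfg->need_to_free = false;
    }
}
```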



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:47:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6531.17439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeJl-0004Oz-NW; Wed, 14 Oct 2020 10:47:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6531.17439; Wed, 14 Oct 2020 10:47:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeJl-0004Os-K5; Wed, 14 Oct 2020 10:47:37 +0000
Received: by outflank-mailman (input) for mailman id 6531;
 Wed, 14 Oct 2020 10:47:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSeJk-0004Ok-9H
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:47:36 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8a24443c-6692-483a-9be1-8d4dd2dde916;
 Wed, 14 Oct 2020 10:47:35 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6164E1FB;
 Wed, 14 Oct 2020 03:47:35 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C73133F66B;
 Wed, 14 Oct 2020 03:47:34 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kSeJk-0004Ok-9H
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:47:36 +0000
X-Inumbo-ID: 8a24443c-6692-483a-9be1-8d4dd2dde916
Received: from foss.arm.com (unknown [217.140.110.172])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 8a24443c-6692-483a-9be1-8d4dd2dde916;
	Wed, 14 Oct 2020 10:47:35 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
	by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6164E1FB;
	Wed, 14 Oct 2020 03:47:35 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com [10.1.198.23])
	by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C73133F66B;
	Wed, 14 Oct 2020 03:47:34 -0700 (PDT)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/xenpmd: Fix gcc10 snprintf warning
Date: Wed, 14 Oct 2020 11:47:23 +0100
Message-Id: <0ade4264c537819c3dd45179fcea2723df66b045.1602672245.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Add a check of the snprintf() return code and skip the entry if it
reports an error. This should in fact never happen; the check mainly
serves to satisfy gcc and avoid the build failure below.

This is solving the gcc warning:
xenpmd.c:92:37: error: '%s' directive output may be truncated writing
between 4 and 2147483645 bytes into a region of size 271
[-Werror=format-truncation=]

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 tools/xenpmd/xenpmd.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/xenpmd/xenpmd.c b/tools/xenpmd/xenpmd.c
index 35fd1c931a..12b82cf43e 100644
--- a/tools/xenpmd/xenpmd.c
+++ b/tools/xenpmd/xenpmd.c
@@ -102,6 +102,7 @@ FILE *get_next_battery_file(DIR *battery_dir,
     FILE *file = 0;
     struct dirent *dir_entries;
     char file_name[284];
+    int ret;
     
     do 
     {
@@ -111,11 +112,15 @@ FILE *get_next_battery_file(DIR *battery_dir,
         if ( strlen(dir_entries->d_name) < 4 )
             continue;
         if ( battery_info_type == BIF ) 
-            snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
+            ret = snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
                      dir_entries->d_name);
         else 
-            snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
+            ret = snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
                      dir_entries->d_name);
+        /* This should not happen but is needed to pass gcc checks */
+        if (ret < 0)
+            continue;
+        file_name[sizeof(file_name) - 1] = '\0';
         file = fopen(file_name, "r");
     } while ( !file );
 
-- 
2.17.1
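The snprintf() contract the patch relies on can be shown in isolation: a negative return value indicates an output error, while a return value greater than or equal to the buffer size indicates truncation. The sketch below uses invented names (the path template merely mirrors BATTERY_INFO_FILE_PATH) and rejects both cases, which is slightly stricter than the patch itself:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical path template, mirroring BATTERY_INFO_FILE_PATH. */
#define INFO_PATH_FMT "/proc/acpi/battery/%s/info"

/* Build the per-battery file name.  snprintf() returns a negative
 * value on an output error, and a value >= the buffer size when the
 * result was truncated; both cases make the name unusable. */
static int build_battery_path(char *buf, size_t len, const char *name)
{
    int ret = snprintf(buf, len, INFO_PATH_FMT, name);

    if ( ret < 0 || (size_t)ret >= len )
        return -1; /* error or truncation: caller should skip entry */
    return 0;
}
```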



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:56:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:56:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6533.17450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeSL-0005KQ-Ja; Wed, 14 Oct 2020 10:56:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6533.17450; Wed, 14 Oct 2020 10:56:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeSL-0005KJ-GB; Wed, 14 Oct 2020 10:56:29 +0000
Received: by outflank-mailman (input) for mailman id 6533;
 Wed, 14 Oct 2020 10:56:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSeSK-0005KE-9J
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:56:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a18d7f8-da38-4861-aea8-0cc5e2225579;
 Wed, 14 Oct 2020 10:56:27 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSeSH-00027S-Ba; Wed, 14 Oct 2020 10:56:25 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSeSH-0002lO-4f; Wed, 14 Oct 2020 10:56:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kSeSK-0005KE-9J
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:56:28 +0000
X-Inumbo-ID: 1a18d7f8-da38-4861-aea8-0cc5e2225579
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1a18d7f8-da38-4861-aea8-0cc5e2225579;
	Wed, 14 Oct 2020 10:56:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=2Dsgn39aK/yisZSu3Xxi2WVSf/DHiqv93FX4cI6nkgM=; b=LFMN9vcYLvQBa8jVAPYyc8U039
	Giac1BaU1Y3UioKEsVA+Su9IK13UGCtDLt8OoCZgy01XJifGB7Cu9ZxxUseC8CQ/88JL4TpWPrZj6
	Q7PuEpIrDZMe7yZPP4JNmshCv9j/UDB5cThq6BrKS6VKEW+tKZtYr8jM2mieYJn0M4kQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kSeSH-00027S-Ba; Wed, 14 Oct 2020 10:56:25 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kSeSH-0002lO-4f; Wed, 14 Oct 2020 10:56:25 +0000
Subject: Re: [PATCH] xen/arm: Document the erratum #853709 related to Cortex
 A72
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>
References: <20201014100541.11687-1-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ef5fc4c3-5de3-0ec1-fed9-afdb8dd1bfc1@xen.org>
Date: Wed, 14 Oct 2020 11:56:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201014100541.11687-1-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Michal,

On 14/10/2020 11:05, Michal Orzel wrote:
> Workaround for Cortex-A57 erratum #852523 is already
> in Xen but Cortex-A72 erratum #853709 is not although
> it applies to the same issue.

This commit message is a bit confusing because it implies that Xen 
doesn't work around #852523. However, we do work around it (there is no 
runtime check); we just don't document it.

So how about the following commit message?

"The Cortex-A72 erratum #853709 is the same as the Cortex-A57 erratum 
#852523. As the latter is already worked around, we only need to update 
the documentation."

> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Other than the commit message, I have cross-checked with the 
documentation ([1]):

Reviewed-by: Julien Grall <jgrall@amazon.com>

I can update the commit message on commit.

Cheers,

> ---
>   docs/misc/arm/silicon-errata.txt | 1 +
>   xen/arch/arm/domain.c            | 6 ++++--
>   2 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index e15d0923e9..1f18a9df58 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -50,6 +50,7 @@ stable hypervisors.
>   | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_834220    |
>   | ARM            | Cortex-A57      | #1319537        | N/A                     |
>   | ARM            | Cortex-A72      | #1319367        | N/A                     |
> +| ARM            | Cortex-A72      | #853709         | N/A                     |
>   | ARM            | Cortex-A76      | #1165522        | N/A                     |
>   | ARM            | Neoverse-N1     | #1165522        | N/A                     |
>   | ARM            | MMU-500         | #842869         | N/A                     |
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 3b37f899b9..18cafcdda7 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -216,7 +216,8 @@ static void ctxt_switch_to(struct vcpu *n)
>       WRITE_SYSREG64(n->arch.ttbr1, TTBR1_EL1);
>   
>       /*
> -     * Erratum #852523: DACR32_EL2 must be restored before one of the
> +     * Erratum #852523 (Cortex-A57) or erratum #853709 (Cortex-A72):
> +     * DACR32_EL2 must be restored before one of the
>        * following sysregs: SCTLR_EL1, TCR_EL1, TTBR0_EL1, TTBR1_EL1 or
>        * CONTEXTIDR_EL1.
>        */
> @@ -245,7 +246,8 @@ static void ctxt_switch_to(struct vcpu *n)
>   
>       /*
>        * This write to sysreg CONTEXTIDR_EL1 ensures we don't hit erratum
> -     * #852523. I.e DACR32_EL2 is not correctly synchronized.
> +     * #852523 (Cortex-A57) or #853709 (Cortex-A72).
> +     * I.e DACR32_EL2 is not correctly synchronized.
>        */
>       WRITE_SYSREG(n->arch.contextidr, CONTEXTIDR_EL1);
>       WRITE_SYSREG(n->arch.tpidr_el0, TPIDR_EL0);
> 

[1] https://developer.arm.com/documentation/epm012079/11/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:57:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:57:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6551.17481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeTh-0005aO-GK; Wed, 14 Oct 2020 10:57:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6551.17481; Wed, 14 Oct 2020 10:57:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeTh-0005aG-DD; Wed, 14 Oct 2020 10:57:53 +0000
Received: by outflank-mailman (input) for mailman id 6551;
 Wed, 14 Oct 2020 10:57:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSeTf-0005Zt-8B
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:57:51 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7bfd345b-7ab7-45b4-8096-442f3d3185c7;
 Wed, 14 Oct 2020 10:57:50 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSeTf-0005Zt-8B
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:57:51 +0000
X-Inumbo-ID: 7bfd345b-7ab7-45b4-8096-442f3d3185c7
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7bfd345b-7ab7-45b4-8096-442f3d3185c7;
	Wed, 14 Oct 2020 10:57:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602673070;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=VZSfsKM6hXozM7xhDIjoCFNckdpNAUN3ydH++pfiwLs=;
  b=XlH2YGgFnn5enfEamUlbotwLnAthWKhDFW0uGb+MUnIzBdZGzfHhlttV
   nmphuGqYhZMoeQ3P1Qk2GPnLo5/av1E0MHPTpxkD+CramFsBkpHFWYLqh
   TKRTLeWCXEeyV9VH/lyfKYs0fBW9Yf9OJWsj7bkMc+g8g2Aen2za+70Yn
   Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: bWOkWBdVNgsJXwaJqjdmXnViPra0NXB7jR0hl9QwOdIPMjaxBarTc7mfMOGjDUs0uA1WeIBlUg
 O6BkrdUZNtmQWXsdiOQklKikZvfhel3mDwVyKj1J6CM4sI6iISApEhRNMWtQRp+SSayLdVPwtn
 h1ETc1ZxCrm6TD2O12Re3zGjjV0hGqG2rK6HPr4AkBVirXdH8kH7KhGbqOeLIU4Tr2qZRofs+M
 Qe1XSYKycbLrAofTfm3JlaoVK0UWjV7Z6Dt2hOFPE9aeKoB0s+UixjKYrEss8h5hoGjN2LrfQC
 Bfw=
X-SBRS: 2.5
X-MesageID: 29220109
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,374,1596513600"; 
   d="scan'208";a="29220109"
Subject: Re: [PATCH 1/2] EFI/Arm64: don't clobber DTB pointer
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Trammell Hudson
	<hudson@trmm.net>
References: <dd26ba44-66e4-8870-3359-efe93ab28f64@suse.com>
 <825ded00-3971-4e56-7bef-324ee5531f70@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <43eced1c-d4ee-e596-6ec9-7d9ad22ed5e0@citrix.com>
Date: Wed, 14 Oct 2020 11:57:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <825ded00-3971-4e56-7bef-324ee5531f70@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 11:42, Jan Beulich wrote:
> read_section() needs to be more careful: efi_arch_use_config_file()
> may have found a DTB file (but without modules), and there may be no DTB
> specified in the EFI config file. In this case the pointer to the blob
> must not be overwritten with NULL when no ".dtb" section is present
> either.
>
> Fixes: 8a71d50ed40b ("efi: Enable booting unified hypervisor/kernel/initrd images")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
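The idea behind the fix can be illustrated in isolation. The sketch below is not the actual Xen read_section() (its real signature and the PE section structures are not shown in this thread); it is a minimal stand-in, with hypothetical types, showing the pattern the commit message describes: only overwrite the caller's blob pointer when the named section is actually found, so a DTB discovered earlier (e.g. via the EFI config file) is not clobbered with NULL.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a PE image section. */
struct section {
    const char *name;
    void *data;
};

/*
 * Look up a section by name. On a hit, update *blob and return 1.
 * On a miss, return 0 and leave *blob untouched, so a pointer filled
 * in earlier by another discovery path survives.
 */
static int read_section(const struct section *sections, size_t n,
                        const char *name, void **blob)
{
    size_t i;

    for ( i = 0; i < n; i++ )
    {
        if ( strcmp(sections[i].name, name) == 0 )
        {
            *blob = sections[i].data;  /* overwrite only on a hit */
            return 1;
        }
    }

    return 0;  /* miss: do not write NULL over an earlier find */
}
```

The buggy variant would unconditionally do `*blob = result_of_lookup;`, which is exactly the clobbering the patch removes.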


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 10:59:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 10:59:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6562.17493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeV2-0005nb-SL; Wed, 14 Oct 2020 10:59:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6562.17493; Wed, 14 Oct 2020 10:59:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeV2-0005nU-O5; Wed, 14 Oct 2020 10:59:16 +0000
Received: by outflank-mailman (input) for mailman id 6562;
 Wed, 14 Oct 2020 10:59:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSeV2-0005nO-3l
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:59:16 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.80]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc56b1b1-cb3d-4835-a8f4-3c0478bf68d9;
 Wed, 14 Oct 2020 10:59:14 +0000 (UTC)
Received: from DB8PR03CA0006.eurprd03.prod.outlook.com (2603:10a6:10:be::19)
 by DB7PR08MB3308.eurprd08.prod.outlook.com (2603:10a6:5:20::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.28; Wed, 14 Oct
 2020 10:59:13 +0000
Received: from DB5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::59) by DB8PR03CA0006.outlook.office365.com
 (2603:10a6:10:be::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Wed, 14 Oct 2020 10:59:13 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT003.mail.protection.outlook.com (10.152.20.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 10:59:13 +0000
Received: ("Tessian outbound d5e343850048:v64");
 Wed, 14 Oct 2020 10:59:13 +0000
Received: from 9b878a7f834e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AA9487CA-3184-4B3D-A6CB-998640FFCFA1.1; 
 Wed, 14 Oct 2020 10:58:35 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9b878a7f834e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 14 Oct 2020 10:58:35 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4677.eurprd08.prod.outlook.com (2603:10a6:10:f1::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 14 Oct
 2020 10:58:33 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Wed, 14 Oct 2020
 10:58:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kSeV2-0005nO-3l
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 10:59:16 +0000
X-Inumbo-ID: bc56b1b1-cb3d-4835-a8f4-3c0478bf68d9
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown [40.107.6.80])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id bc56b1b1-cb3d-4835-a8f4-3c0478bf68d9;
	Wed, 14 Oct 2020 10:59:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u/z+3GxGiuGnOI9OlFaRm6nlrVxjp/eazqjpMJaIMXI=;
 b=IYKyumXogtu3YRPi0Oxi4qVqB4IngWjA0MFb8u+LQ8XUDfmwHMJoCFXb903siJOV2aUN0XbjZZccFWsKJZY2Nso2BcFcXtRd2n7DpJB+Y74UGXPikv4yQg7SNIwVDICdpPS4u/O6NBsFJVgn/RRW+aKVhEdUDt5QlePBrJu57n8=
Received: from DB8PR03CA0006.eurprd03.prod.outlook.com (2603:10a6:10:be::19)
 by DB7PR08MB3308.eurprd08.prod.outlook.com (2603:10a6:5:20::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.28; Wed, 14 Oct
 2020 10:59:13 +0000
Received: from DB5EUR03FT003.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::59) by DB8PR03CA0006.outlook.office365.com
 (2603:10a6:10:be::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Wed, 14 Oct 2020 10:59:13 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT003.mail.protection.outlook.com (10.152.20.157) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 10:59:13 +0000
Received: ("Tessian outbound d5e343850048:v64"); Wed, 14 Oct 2020 10:59:13 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 65f3f47ce3aa927c
X-CR-MTA-TID: 64aa7808
Received: from 9b878a7f834e.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id AA9487CA-3184-4B3D-A6CB-998640FFCFA1.1;
	Wed, 14 Oct 2020 10:58:35 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9b878a7f834e.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 14 Oct 2020 10:58:35 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BtwDJWwZjFJXr/suui0z/3gPUsn5RZTlB94diPZOi44Q/aIg/D0j/L6lo0aQ4P/QbN47O7kNsC8/iZ8dS2vA5wDKRxdk8VzIY1KjYWWcwBW60Pi3ums24S/4xQ9hafFrXLdYwQUcKzt1gs+XpGscty3pu+qgsZ2rTbKv5Ds3eb2a2+ZRT5Uu8ze5Oi9y3joI408q3wThQ+zXm5fdCTBGBWAT5ORUeXabobzWhdceEctiB3MHsL2AxY/C3BGuy34bNfy3aSbS8+07RpYFP8oBi8e+06zbkzAT4kqxnZ6cl/rJFP3o6Z/2kUM3tAjSa/UkOtgMQPCVoSRMGxbzCMleQQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u/z+3GxGiuGnOI9OlFaRm6nlrVxjp/eazqjpMJaIMXI=;
 b=hZfdTvNv7jxT96uFhqRw9K4TaMwYxxGWVV/8NUzteM0qXe08RI2PxJyOX2zEm/TjCeVGfIFB75o4IxaDKQbS9vEuYBWldcIoJ/WZ8/vIPOKXrLtQBkhADflstLDPXDCV0sU1S/bNK7jy+VRMjhydzXYeiC79+1ONRz0d5K/HAZ2RYI6r1VUdZpGalaVgyNbva4y547rvMOogR45NiEIqqsQmxOEoiOHFrH9Y5mzSfEMu1sU1lw7yfTZyl180c+h4Whg0hO+tBc2YvsAdYdo73fqx2NcuLU+vv6j0EqgC2lybetH3aF/1N7g0GUrsfol6IypBITRp0TO8ng/88MdATg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u/z+3GxGiuGnOI9OlFaRm6nlrVxjp/eazqjpMJaIMXI=;
 b=IYKyumXogtu3YRPi0Oxi4qVqB4IngWjA0MFb8u+LQ8XUDfmwHMJoCFXb903siJOV2aUN0XbjZZccFWsKJZY2Nso2BcFcXtRd2n7DpJB+Y74UGXPikv4yQg7SNIwVDICdpPS4u/O6NBsFJVgn/RRW+aKVhEdUDt5QlePBrJu57n8=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4677.eurprd08.prod.outlook.com (2603:10a6:10:f1::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 14 Oct
 2020 10:58:33 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Wed, 14 Oct 2020
 10:58:33 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: "open list:X86" <xen-devel@lists.xenproject.org>
CC: "jgross@suse.com" <jgross@suse.com>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v3 1/2] tools/libs/stat: use memcpy instead of strncpy in
 getBridge
Thread-Topic: [PATCH v3 1/2] tools/libs/stat: use memcpy instead of strncpy in
 getBridge
Thread-Index: AQHWnLHiDj1hArywQEGBhk10fWLXEamW+IUA
Date: Wed, 14 Oct 2020 10:58:33 +0000
Message-ID: <A6CDE62A-13F4-491B-BE0B-180657136504@arm.com>
References:
 <4ecb03b40b0da6d480e95af1da8289501a3ede0a.1602078276.git.bertrand.marquis@arm.com>
In-Reply-To:
 <4ecb03b40b0da6d480e95af1da8289501a3ede0a.1602078276.git.bertrand.marquis@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ae34eac0-5660-4f27-8d7c-08d870302fa2
x-ms-traffictypediagnostic: DBBPR08MB4677:|DB7PR08MB3308:
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB3308B96D1EF4367DC6BEB2DC9D050@DB7PR08MB3308.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 t+TN0Yh4cSJCDod6g45Y1HREIPCR+69TckTwDNgNAAERE3i1jQthtszRsUgtilqfS34QZ9/Fj2exabPePVXoCwG40LXknpDMCoUYKcqKPrzGD6MNxKkszXbP0i0DIJe6gJYKzdIRRDUE4cqcf0c57YtCNEcHtyPTxv8Mu6cl/CmM6kD2/ATw+UN6U3wlANDjTHJUqmGjssXheQR9Z/kxhHM678t8+hGYbK6XoJwoa6MGvpbNqgHYc9X5/auZfPD3wOzUVzlUmItEG7ZewDt1HyNDda3N6wknEvO3yCVaBsgMNl4QXklG48BKRwjcwibKYyfXV1K8Jh1Pwoqs/9TQww==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(366004)(376002)(136003)(346002)(53546011)(6506007)(478600001)(5660300002)(2616005)(71200400001)(83380400001)(6486002)(86362001)(26005)(33656002)(186003)(4326008)(66446008)(64756008)(66556008)(66476007)(76116006)(91956017)(54906003)(6512007)(6916009)(36756003)(66946007)(2906002)(8676002)(316002)(8936002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 SAbKU+Md948TSYfbzOhAZrHTnzokVYhgBA0DPdFY0HP1Cr8RUPD4/B9AzRzxTnNMnWExkgg8OFtTk2B0RWKZFJVYrFOQCFIsQ74IhpyGzN5A1xdhdhsy/PTuigsudR2E5dRjRHeFVoP33c8i2dRBurmTQLwEARimrNSpYFfDUcXQBp7KlAJUq+A+AXqe6j5w93YIqtf7Jmy07vEacDnJeIIGGsOhE29Gl+U9zcvRtGN1EOCQc6I9IZjTk3onnu6oGAiQGMUgV1dpJkJ4TB2EBojABh+EwpxaJCvq8Ro0UJmvSZrpNpupWrjfF0VYb4jrgE64KTq9mlT8n+CGvhkJjH9tEEEZ5u2JCYaxqSlNNUgpi+vSpr+PPoJ3jPSvPoVd0v+Sd4HN1gffYjCPn/8LthxcFQF3UKvSemiBxUawU1Ul0h66gTNzxRvRFSSRwkj7xKGvvZ4aSmSkNuuGX/yNgGzkeuFcMeAIPCr6ExGAIJZeU5x4HJ8uRpqj7ACsOnND0Y8zvj6mSH/tNHFfgu7Ow2uCvNOp7ZtaFtA2hthSQmLDhVlNouxZyT95cxLVSHaViLBUoHznNg4j4xxCrLLm5v+wsC9i0SnFQfCu3VRsnv92G2a21/C4s7+HzDAaLK4ssq0hGD+fw2hj3bgcc73qyg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <34709061F61871418DCC607D3ECE1BE5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4677
Original-Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	272fcb94-a5c8-486a-e4d6-08d870301806
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	twccvz65O4xMseWTk4C7T22uTsu7DyFdTThlEgDBM4kMLv6sZsZ7DKfrlXMPWx765LRrIO5czSIjMSyxdLj6dy8qhqClTw+cCJKrC2WsMpoW4BV4YOzm5MPYYjUV3wL/jewIOPv3ZnMGhG7wH8+XN3eu3A4iQQ5CBc7cNyyIjK8ojnKaduwHicz4Mnageda/zPXOUo3RcybIUaO7J6658guos2813NIhNyIGqTUMl2rQuRaqLe0pNkLVir5KBKDTj7CZB8mmKkF8Z7BCq+7TMvAPfCnE/gOnPrDH2xptprJ0W+x5LnGflv1h380noo6a1UaeMovCYk77RiYooOZ2WmFRLupvRaF6/OvbaDYs4hDDJL2309A3Xqb0TjkM1CfAls1+wgqxdTNn20hWRDk6PQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(396003)(46966005)(8676002)(70206006)(5660300002)(81166007)(70586007)(54906003)(6506007)(316002)(26005)(2906002)(336012)(53546011)(2616005)(83380400001)(4326008)(33656002)(186003)(6512007)(6486002)(8936002)(478600001)(36756003)(6916009)(86362001)(47076004)(82740400003)(82310400003)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Oct 2020 10:59:13.2778
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ae34eac0-5660-4f27-8d7c-08d870302fa2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT003.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3308

Hi,

Could this be reviewed so that the gcc 10 issues are fixed?

Thanks
Bertrand

> On 7 Oct 2020, at 14:57, Bertrand Marquis <bertrand.marquis@arm.com> wrote:
> 
> Use memcpy in getBridge to prevent gcc warnings about truncated
> strings. We know that we might truncate it, so the gcc warning
> here is wrong.
> Revert previous change changing buffer sizes as bigger buffers
> are not needed.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
> Changes in v3:
> Do a memset 0 on destination buffer and use MIN between string length
> and resultLen - 1.
> Changes in v2:
> Use MIN between string length of de->d_name and resultLen to copy only
> the minimum size required and prevent crossing to from an unallocated
> space.
> ---
> tools/libs/stat/xenstat_linux.c | 13 ++++++++++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/libs/stat/xenstat_linux.c b/tools/libs/stat/xenstat_linux.c
> index d2ee6fda64..e0d242e1bc 100644
> --- a/tools/libs/stat/xenstat_linux.c
> +++ b/tools/libs/stat/xenstat_linux.c
> @@ -29,6 +29,7 @@
> #include <string.h>
> #include <unistd.h>
> #include <regex.h>
> +#include <xen-tools/libs.h>
> 
> #include "xenstat_priv.h"
> 
> @@ -78,8 +79,14 @@ static void getBridge(char *excludeName, char *result, size_t resultLen)
> 				sprintf(tmp, "/sys/class/net/%s/bridge", de->d_name);
> 
> 				if (access(tmp, F_OK) == 0) {
> -					strncpy(result, de->d_name, resultLen);
> -					result[resultLen - 1] = 0;
> +					/*
> +					 * Do not use strncpy to prevent compiler warning with
> +					 * gcc >= 10.0
> +					 * If de->d_name is longer then resultLen we truncate it
> +					 */
> +					memset(result, 0, resultLen);
> +					memcpy(result, de->d_name, MIN(strnlen(de->d_name,
> +									NAME_MAX),resultLen - 1));
> 				}
> 		}
> 	}
> @@ -264,7 +271,7 @@ int xenstat_collect_networks(xenstat_node * node)
> {
> 	/* Helper variables for parseNetDevLine() function defined above */
> 	int i;
> -	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[256] = { 0 }, devNoBridge[257] = { 0 };
> +	char line[512] = { 0 }, iface[16] = { 0 }, devBridge[16] = { 0 }, devNoBridge[17] = { 0 };
> 	unsigned long long rxBytes, rxPackets, rxErrs, rxDrops, txBytes, txPackets, txErrs, txDrops;
> 
> 	struct priv_data *priv = get_priv_data(node->handle);
> -- 
> 2.17.1
> 
> 
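The memset+memcpy replacement for strncpy can be sketched stand-alone. This is not the patched getBridge() itself (no directory scanning; `copy_truncated` is a hypothetical helper name, and plain strlen() stands in for the patch's strnlen(de->d_name, NAME_MAX)); it only demonstrates the truncating-copy pattern that avoids gcc 10's -Wstringop-truncation warning while guaranteeing NUL termination.

```c
#include <string.h>

#ifndef MIN
#define MIN(a, b) ((a) < (b) ? (a) : (b))
#endif

/*
 * Copy name into result, truncating to at most resultLen - 1 bytes.
 * Zeroing the whole buffer first means the result is always
 * NUL-terminated, so no explicit result[resultLen - 1] = 0 is needed,
 * and memcpy (unlike strncpy) does not trip gcc >= 10's
 * -Wstringop-truncation warning.
 */
static void copy_truncated(char *result, size_t resultLen, const char *name)
{
    memset(result, 0, resultLen);
    memcpy(result, name, MIN(strlen(name), resultLen - 1));
}
```

With a 4-byte buffer, a 6-character bridge name is truncated to 3 characters plus the terminating NUL, which is the behaviour the patch deliberately accepts.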



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:01:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:01:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6581.17505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeXQ-0006hu-9m; Wed, 14 Oct 2020 11:01:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6581.17505; Wed, 14 Oct 2020 11:01:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeXQ-0006hn-60; Wed, 14 Oct 2020 11:01:44 +0000
Received: by outflank-mailman (input) for mailman id 6581;
 Wed, 14 Oct 2020 11:01:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSeXP-0006hh-7U
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:01:43 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.49]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64d4c300-d1c0-44d1-8681-6a6ccfdd0c6e;
 Wed, 14 Oct 2020 11:01:42 +0000 (UTC)
Received: from AM5PR0701CA0065.eurprd07.prod.outlook.com (2603:10a6:203:2::27)
 by AM4PR0802MB2370.eurprd08.prod.outlook.com (2603:10a6:200:63::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 14 Oct
 2020 11:01:40 +0000
Received: from VE1EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::97) by AM5PR0701CA0065.outlook.office365.com
 (2603:10a6:203:2::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.12 via Frontend
 Transport; Wed, 14 Oct 2020 11:01:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT004.mail.protection.outlook.com (10.152.18.106) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 11:01:39 +0000
Received: ("Tessian outbound c579d876a324:v64");
 Wed, 14 Oct 2020 11:01:39 +0000
Received: from bf9827b9c574.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F6D03DD1-0FCD-4C73-8CCC-099460D32D02.1; 
 Wed, 14 Oct 2020 11:01:34 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bf9827b9c574.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 14 Oct 2020 11:01:34 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5816.eurprd08.prod.outlook.com (2603:10a6:10:1b3::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Wed, 14 Oct
 2020 11:01:33 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Wed, 14 Oct 2020
 11:01:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kSeXP-0006hh-7U
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:01:43 +0000
X-Inumbo-ID: 64d4c300-d1c0-44d1-8681-6a6ccfdd0c6e
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown [40.107.22.49])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 64d4c300-d1c0-44d1-8681-6a6ccfdd0c6e;
	Wed, 14 Oct 2020 11:01:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v81OGU/oYynWnpyl4yVM4oQHFkexGrHYGKU0kzcxm3Q=;
 b=s/RdB0OOjBgV4DWScI1t5nAHwNyhjIvUbKggRf3a2bWz/lbcMsmcZXo/v7lfAHyGunBfgoApVNSFE8Tc2qlwRwEfe1lpwMUgS/fltbxKSUG5QrVPLcqiwwwwZIT2gyoLK4kMFRjU+Z8tfQSPNZsTyN/nZHxFUxg4W5zmLDW/83o=
Received: from AM5PR0701CA0065.eurprd07.prod.outlook.com (2603:10a6:203:2::27)
 by AM4PR0802MB2370.eurprd08.prod.outlook.com (2603:10a6:200:63::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 14 Oct
 2020 11:01:40 +0000
Received: from VE1EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::97) by AM5PR0701CA0065.outlook.office365.com
 (2603:10a6:203:2::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.12 via Frontend
 Transport; Wed, 14 Oct 2020 11:01:40 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT004.mail.protection.outlook.com (10.152.18.106) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 11:01:39 +0000
Received: ("Tessian outbound c579d876a324:v64"); Wed, 14 Oct 2020 11:01:39 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: ec7c5979650c06fc
X-CR-MTA-TID: 64aa7808
Received: from bf9827b9c574.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id F6D03DD1-0FCD-4C73-8CCC-099460D32D02.1;
	Wed, 14 Oct 2020 11:01:34 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bf9827b9c574.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 14 Oct 2020 11:01:34 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=njqmeoFZnQxHI2ayxPSKtrWUYRQ6iy0AA1MKxSPd961H9RtFKry5k1H5DmqJOOKCNcK0m58pMD3iUGVOuzACTe6PoO2JiEebP0kEMppZA8ov0Jb9cPU2J1+5/9EWbVGrw+FUrWGoIlwpEVXH5WnV2M38ESzmvicGnzjmTMkoM7AAWhEjwJNRLq8HfjnOC/RAAFdcojJbaTCVyw7XxBo+PSdvi4+MeoB4XlKgg3Fa74DIRJEmuWj5BhbWKCHgGtoIyMfJKdAMeA1+LPaoIqm/PCAHUh9amICP663hqvNeapkKYgHIxgTFvSxG9tn0L2Ycd84eTP1oEU1LfFAGXFoyhQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v81OGU/oYynWnpyl4yVM4oQHFkexGrHYGKU0kzcxm3Q=;
 b=ZIK3Zp9faqs3fRO9c7FtEEfAYyqPU1vQTHE1/NCR/8IOSv2Y04REtqj2WzIjoKRcg3gHsP2tDURcOJJoY3c78KSXg8LwgF0Kf02YDwELTx6B7KijLzLzl5UMa2pxqadhaw2imy0riOvb2NiMEAX+DieR+1DsgBmMIIWQiPTj4sPEHtW6J7LiSrojXO9WmgVUkkkl8S955dago7scEIRSp28ruilQ92sjwaK5FyR3h3s0SRccob0N/uj7j5C8nqFZoeRWpqDkOLvh4DxvfS66ba80x1WtufPWj04Wgk6lHvsqh85+gwXo5N9p6yiuMCr78AD0FoUu4PeBbLlqm8Ogtg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v81OGU/oYynWnpyl4yVM4oQHFkexGrHYGKU0kzcxm3Q=;
 b=s/RdB0OOjBgV4DWScI1t5nAHwNyhjIvUbKggRf3a2bWz/lbcMsmcZXo/v7lfAHyGunBfgoApVNSFE8Tc2qlwRwEfe1lpwMUgS/fltbxKSUG5QrVPLcqiwwwwZIT2gyoLK4kMFRjU+Z8tfQSPNZsTyN/nZHxFUxg4W5zmLDW/83o=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5816.eurprd08.prod.outlook.com (2603:10a6:10:1b3::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Wed, 14 Oct
 2020 11:01:33 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Wed, 14 Oct 2020
 11:01:33 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Michal Orzel <Michal.Orzel@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Document the erratum #853709 related to Cortex
 A72
Thread-Topic: [PATCH] xen/arm: Document the erratum #853709 related to Cortex
 A72
Thread-Index: AQHWohGs6LiftGVtsEyTPb4Ao6iKoqmW7pwA
Date: Wed, 14 Oct 2020 11:01:33 +0000
Message-ID: <0A013228-01C5-4EC4-A464-85F7C8CB531F@arm.com>
References: <20201014100541.11687-1-michal.orzel@arm.com>
In-Reply-To: <20201014100541.11687-1-michal.orzel@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8D7C09EE9129794FA9FEB4C9D02AC730@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Hi,

> On 14 Oct 2020, at 11:05, Michal Orzel <Michal.Orzel@arm.com> wrote:
>
> Workaround for Cortex-A57 erratum #852523 is already
> in Xen but Cortex-A72 erratum #853709 is not although
> it applies to the same issue.
>
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

The change to the commit message suggested by Julien is quite right and
should be added.

Bertrand

> ---
> docs/misc/arm/silicon-errata.txt | 1 +
> xen/arch/arm/domain.c            | 6 ++++--
> 2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-errata.txt
> index e15d0923e9..1f18a9df58 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -50,6 +50,7 @@ stable hypervisors.
> | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_834220    |
> | ARM            | Cortex-A57      | #1319537        | N/A                     |
> | ARM            | Cortex-A72      | #1319367        | N/A                     |
> +| ARM            | Cortex-A72      | #853709         | N/A                     |
> | ARM            | Cortex-A76      | #1165522        | N/A                     |
> | ARM            | Neoverse-N1     | #1165522        | N/A                     |
> | ARM            | MMU-500         | #842869         | N/A                     |
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 3b37f899b9..18cafcdda7 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -216,7 +216,8 @@ static void ctxt_switch_to(struct vcpu *n)
>     WRITE_SYSREG64(n->arch.ttbr1, TTBR1_EL1);
>
>     /*
> -     * Erratum #852523: DACR32_EL2 must be restored before one of the
> +     * Erratum #852523 (Cortex-A57) or erratum #853709 (Cortex-A72):
> +     * DACR32_EL2 must be restored before one of the
>      * following sysregs: SCTLR_EL1, TCR_EL1, TTBR0_EL1, TTBR1_EL1 or
>      * CONTEXTIDR_EL1.
>      */
> @@ -245,7 +246,8 @@ static void ctxt_switch_to(struct vcpu *n)
>
>     /*
>      * This write to sysreg CONTEXTIDR_EL1 ensures we don't hit erratum
> -     * #852523. I.e DACR32_EL2 is not correctly synchronized.
> +     * #852523 (Cortex-A57) or #853709 (Cortex-A72).
> +     * I.e DACR32_EL2 is not correctly synchronized.
>      */
>     WRITE_SYSREG(n->arch.contextidr, CONTEXTIDR_EL1);
>     WRITE_SYSREG(n->arch.tpidr_el0, TPIDR_EL0);
> --
> 2.28.0
>
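The ordering constraint the patch documents can be illustrated with a small
host-side sketch. This is hypothetical illustration code, not Xen's actual
context-switch path: the register set is reduced, and write_sysreg() is a
stand-in for Xen's WRITE_SYSREG() macro that merely records the order of
writes so the erratum rule (DACR32_EL2 restored before SCTLR_EL1, TCR_EL1,
TTBR0_EL1, TTBR1_EL1 or CONTEXTIDR_EL1) can be checked.

```c
/* Hypothetical model of the restore order required by erratum #852523
 * (Cortex-A57) / #853709 (Cortex-A72): DACR32_EL2 must be restored
 * before SCTLR_EL1, TCR_EL1, TTBR0_EL1, TTBR1_EL1 or CONTEXTIDR_EL1.
 * Each write is logged with a sequence number so the ordering rule can
 * be verified on the host; no real system registers are touched. */
enum reg { DACR32_EL2, SCTLR_EL1, TTBR0_EL1, CONTEXTIDR_EL1, NREGS };

static int write_seq[NREGS]; /* sequence number of each register write */
static int seq;

static void write_sysreg(enum reg r) /* stand-in for WRITE_SYSREG() */
{
    write_seq[r] = ++seq;
}

static void ctxt_switch_to(void)
{
    write_sysreg(DACR32_EL2);     /* must precede the registers below */
    write_sysreg(SCTLR_EL1);
    write_sysreg(TTBR0_EL1);
    write_sysreg(CONTEXTIDR_EL1); /* final write re-synchronizes DACR32_EL2 */
}

/* Returns 1 when DACR32_EL2 was written before the constrained registers. */
int check_erratum_order(void)
{
    seq = 0;
    ctxt_switch_to();
    return write_seq[DACR32_EL2] < write_seq[SCTLR_EL1] &&
           write_seq[DACR32_EL2] < write_seq[TTBR0_EL1] &&
           write_seq[DACR32_EL2] < write_seq[CONTEXTIDR_EL1];
}
```

The point of the patch is purely documentary: the same write ordering already
satisfies both errata, so only the comments and the errata table change.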



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:04:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6594.17516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeaS-0006um-SX; Wed, 14 Oct 2020 11:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6594.17516; Wed, 14 Oct 2020 11:04:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSeaS-0006uf-P1; Wed, 14 Oct 2020 11:04:52 +0000
Received: by outflank-mailman (input) for mailman id 6594;
 Wed, 14 Oct 2020 11:04:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSeaR-0006ua-Mo
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:04:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5120364f-f58c-4010-943f-74cda842a6f3;
 Wed, 14 Oct 2020 11:04:51 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSeaP-0002Lu-Vo; Wed, 14 Oct 2020 11:04:49 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSeaP-0003CB-Lf; Wed, 14 Oct 2020 11:04:49 +0000
Subject: Re: [PATCH] tools/xenpmd: Fix gcc10 snprintf warning
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <0ade4264c537819c3dd45179fcea2723df66b045.1602672245.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <74625fd9-f2a3-14be-714a-3cfb705434cc@xen.org>
Date: Wed, 14 Oct 2020 12:04:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <0ade4264c537819c3dd45179fcea2723df66b045.1602672245.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 14/10/2020 11:47, Bertrand Marquis wrote:
> Add a check for snprintf return code and ignore the entry if we get an
> error. This should in fact never happen and is more a trick to make gcc
> happy and prevent compilation errors.
> 
> This is solving the gcc warning:
> xenpmd.c:92:37: error: '%s' directive output may be truncated writing
> between 4 and 2147483645 bytes into a region of size 271
> [-Werror=format-truncation=]

IIRC, this only affects GCC when building for Arm32 *and* when the 
optimizer is enabled. If so, it would be good to add more details in the 
commit message.

I would also suggest linking to the bug reported on Debian.

Cheers,

> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>   tools/xenpmd/xenpmd.c | 9 +++++++--
>   1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/xenpmd/xenpmd.c b/tools/xenpmd/xenpmd.c
> index 35fd1c931a..12b82cf43e 100644
> --- a/tools/xenpmd/xenpmd.c
> +++ b/tools/xenpmd/xenpmd.c
> @@ -102,6 +102,7 @@ FILE *get_next_battery_file(DIR *battery_dir,
>       FILE *file = 0;
>       struct dirent *dir_entries;
>       char file_name[284];
> +    int ret;
>       
>       do
>       {
> @@ -111,11 +112,15 @@ FILE *get_next_battery_file(DIR *battery_dir,
>           if ( strlen(dir_entries->d_name) < 4 )
>               continue;
>           if ( battery_info_type == BIF )
> -            snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
> +            ret = snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
>                        dir_entries->d_name);
>           else
> -            snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
> +            ret = snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
>                        dir_entries->d_name);
> +        /* This should not happen but is needed to pass gcc checks */
> +        if (ret < 0)
> +            continue;
> +        file_name[sizeof(file_name) - 1] = '\0';
>           file = fopen(file_name, "r");
>       } while ( !file );
>   
> 
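For context, the check the patch introduces can be sketched in isolation.
This is a standalone illustration, not the actual xenpmd code: the format
string below is a placeholder, not the real BATTERY_INFO_FILE_PATH macro,
and the helper name is invented.

```c
#include <stdio.h>

/* Sketch of the snprintf return-value check the patch adds.  A negative
 * return indicates an output error; a return value >= len indicates the
 * output was truncated.  In either case the resulting path should not
 * be used.  The format string is a placeholder for illustration only. */
static int build_battery_path(char *buf, size_t len, const char *entry)
{
    int ret = snprintf(buf, len, "/proc/acpi/battery/%s/info", entry);

    if (ret < 0)
        return -1;              /* output error: skip this entry */
    if ((size_t)ret >= len)
        return -1;              /* truncated: would name the wrong file */
    return 0;
}
```

Note that C99 snprintf always NUL-terminates the buffer when len > 0, so
the explicit `file_name[sizeof(file_name) - 1] = '\0';` in the patch is
belt-and-braces rather than strictly required; the return-value check is
what actually silences -Wformat-truncation.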

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:07:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:07:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6601.17529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSecY-00073L-9A; Wed, 14 Oct 2020 11:07:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6601.17529; Wed, 14 Oct 2020 11:07:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSecY-00073E-53; Wed, 14 Oct 2020 11:07:02 +0000
Received: by outflank-mailman (input) for mailman id 6601;
 Wed, 14 Oct 2020 11:07:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PlF3=DV=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1kSecW-000738-GU
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:07:00 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.50]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e221a3d3-bd10-452c-9c7f-3849e6bb7908;
 Wed, 14 Oct 2020 11:06:58 +0000 (UTC)
Received: from AM0P190CA0026.EURP190.PROD.OUTLOOK.COM (2603:10a6:208:190::36)
 by AM6PR08MB5142.eurprd08.prod.outlook.com (2603:10a6:20b:d4::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 14 Oct
 2020 11:06:56 +0000
Received: from AM5EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (10.141.38.4) by AM0P190CA0026.outlook.office365.com (10.141.38.36) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20 via Frontend
 Transport; Wed, 14 Oct 2020 11:06:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT027.mail.protection.outlook.com (10.152.16.138) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 11:06:55 +0000
Received: ("Tessian outbound ba2270a55485:v64");
 Wed, 14 Oct 2020 11:06:55 +0000
Received: from 0a603f457922.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3239BF0F-C7D4-4363-9FAB-913311D2C476.1; 
 Wed, 14 Oct 2020 11:06:50 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0a603f457922.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 14 Oct 2020 11:06:50 +0000
Received: from AM6PR08MB4641.eurprd08.prod.outlook.com (2603:10a6:20b:d1::16)
 by AM6PR08MB4834.eurprd08.prod.outlook.com (2603:10a6:20b:c9::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 14 Oct
 2020 11:06:48 +0000
Received: from AM6PR08MB4641.eurprd08.prod.outlook.com
 ([fe80::e0af:a21f:3a7f:aaef]) by AM6PR08MB4641.eurprd08.prod.outlook.com
 ([fe80::e0af:a21f:3a7f:aaef%4]) with mapi id 15.20.3455.032; Wed, 14 Oct 2020
 11:06:48 +0000
From: Michal Orzel <Michal.Orzel@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH] xen/arm: Document the erratum #853709 related to Cortex
 A72
Thread-Topic: [PATCH] xen/arm: Document the erratum #853709 related to Cortex
 A72
Thread-Index: AQHWohGlCcemlpx2NkeqO0LskPrcGKmW7SyAgAACWXs=
Date: Wed, 14 Oct 2020 11:06:48 +0000
Message-ID:
 <AM6PR08MB4641ACDB3B63F0A065FBD48389050@AM6PR08MB4641.eurprd08.prod.outlook.com>
References:
 <20201014100541.11687-1-michal.orzel@arm.com>,<ef5fc4c3-5de3-0ec1-fed9-afdb8dd1bfc1@xen.org>
In-Reply-To: <ef5fc4c3-5de3-0ec1-fed9-afdb8dd1bfc1@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: multipart/alternative;
	boundary="_000_AM6PR08MB4641ACDB3B63F0A065FBD48389050AM6PR08MB4641eurp_"
MIME-Version: 1.0

--_000_AM6PR08MB4641ACDB3B63F0A065FBD48389050AM6PR08MB4641eurp_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

Hi Julien,

I agree. You can update the commit message.
Thanks for review.

Michal

________________________________
From: Julien Grall <julien@xen.org>
Sent: Wednesday, October 14, 2020 12:56 PM
To: Michal Orzel <Michal.Orzel@arm.com>; xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH] xen/arm: Document the erratum #853709 related to Cortex A72

Hi Michal,

On 14/10/2020 11:05, Michal Orzel wrote:
> Workaround for Cortex-A57 erratum #852523 is already
> in Xen but Cortex-A72 erratum #853709 is not although
> it applies to the same issue.

This commit message is a bit confusing because it implies that Xen
doesn't work around #852523. However, we do work around it (there is no
runtime check); we just don't document it.

So how about the following commit message?

"The Cortex-A72 erratum #853709 is the same as the Cortex-A57 erratum
#852523. As the latter is already worked around, we only need to update
the documentation."

> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Other than the commit message, I have cross-checked with the
documentation ([1]):

Reviewed-by: Julien Grall <jgrall@amazon.com>

I can update the commit message on commit.

Cheers,

> ---
>   docs/misc/arm/silicon-errata.txt | 1 +
>   xen/arch/arm/domain.c            | 6 ++++--
>   2 files changed, 5 insertions(+), 2 deletions(-)
>
> diff --git a/docs/misc/arm/silicon-errata.txt b/docs/misc/arm/silicon-err=
ata.txt
> index e15d0923e9..1f18a9df58 100644
> --- a/docs/misc/arm/silicon-errata.txt
> +++ b/docs/misc/arm/silicon-errata.txt
> @@ -50,6 +50,7 @@ stable hypervisors.
>   | ARM            | Cortex-A57      | #834220         | ARM64_ERRATUM_83=
4220    |
>   | ARM            | Cortex-A57      | #1319537        | N/A             =
        |
>   | ARM            | Cortex-A72      | #1319367        | N/A             =
        |
> +| ARM            | Cortex-A72      | #853709         | N/A              =
       |
>   | ARM            | Cortex-A76      | #1165522        | N/A             =
        |
>   | ARM            | Neoverse-N1     | #1165522        | N/A
>   | ARM            | MMU-500         | #842869         | N/A             =
        |
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 3b37f899b9..18cafcdda7 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -216,7 +216,8 @@ static void ctxt_switch_to(struct vcpu *n)
>       WRITE_SYSREG64(n->arch.ttbr1, TTBR1_EL1);
>
>       /*
> -     * Erratum #852523: DACR32_EL2 must be restored before one of the
> +     * Erratum #852523 (Cortex-A57) or erratum #853709 (Cortex-A72):
> +     * DACR32_EL2 must be restored before one of the
>        * following sysregs: SCTLR_EL1, TCR_EL1, TTBR0_EL1, TTBR1_EL1 or
>        * CONTEXTIDR_EL1.
>        */
> @@ -245,7 +246,8 @@ static void ctxt_switch_to(struct vcpu *n)
>
>       /*
>        * This write to sysreg CONTEXTIDR_EL1 ensures we don't hit erratum
> -     * #852523. I.e DACR32_EL2 is not correctly synchronized.
> +     * #852523 (Cortex-A57) or #853709 (Cortex-A72).
> +     * I.e DACR32_EL2 is not correctly synchronized.
>        */
>       WRITE_SYSREG(n->arch.contextidr, CONTEXTIDR_EL1);
>       WRITE_SYSREG(n->arch.tpidr_el0, TPIDR_EL0);
>

[1] https://developer.arm.com/documentation/epm012079/11/

--
Julien Grall



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:11:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:11:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6614.17541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSehH-0007vU-U7; Wed, 14 Oct 2020 11:11:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6614.17541; Wed, 14 Oct 2020 11:11:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSehH-0007vN-Qm; Wed, 14 Oct 2020 11:11:55 +0000
Received: by outflank-mailman (input) for mailman id 6614;
 Wed, 14 Oct 2020 11:11:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSehG-0007vI-FP
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:11:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f0a3eec-76d6-42fd-a1fe-0961ed917f17;
 Wed, 14 Oct 2020 11:11:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSehD-0002Un-E3; Wed, 14 Oct 2020 11:11:51 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSehD-0003pE-5U; Wed, 14 Oct 2020 11:11:51 +0000
X-Inumbo-ID: 4f0a3eec-76d6-42fd-a1fe-0961ed917f17
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=mWrFnB6uOcagh0LYfIRUsr+Y6y0Rci2+nTuToKgVvBk=; b=u4P0I8F0FkrHoLoGK62XJZZBNr
	5TundeL3HGhPZoThyH26EoFblJNjbbv4Kz/S1r74W0vRdGgeWHFgaS1aamidUfO7wWDCNg2ntJpDd
	xBsxDrwQy4oOaSWEQd/op+kQhbhgI1BLXEe+A6Hwt3lqKe522p9x1WElU5H6aXOqIzoU=;
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c22235d1-9124-74f2-5856-58f7f44dc0b7@xen.org>
Date: Wed, 14 Oct 2020 12:11:49 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 14/10/2020 11:41, Bertrand Marquis wrote:
> When a Cortex-A57 processor is affected by CPU erratum 832075, a guest
> not implementing the workaround for it could deadlock the system.
> Add a warning during boot informing the user that only trusted guests
> should be executed on the system.

I think we should update SUPPORT.md to say we will not provide security 
support for those processors. Stefano, what do you think?

> An equivalent warning is already given to the user by KVM on cores
> affected by this erratum.

I can't seem to find the warning in Linux. Do you have a link?

> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>   xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>   1 file changed, 21 insertions(+)
> 
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 6c09017515..8f9ab6dde1 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>   
>   #endif
>   
> +#ifdef CONFIG_ARM64_ERRATUM_832075
> +
> +static int warn_device_load_acquire_errata(void *data)
> +{
> +    static bool warned = false;
> +
> +    if ( !warned )
> +    {
> +        warning_add("This CPU is affected by the errata 832075.\n"
> +                    "Guests without required CPU erratum workarounds\n"
> +                    "can deadlock the system!\n"
> +                    "Only trusted guests should be used on this system.\n");
> +        warned = true;

I was going to suggest using WARN_ON_ONCE(), but it looks like it never 
made it upstream :(.

> +    }
> +
> +    return 0;
> +}
> +
> +#endif
> +
>   #ifdef CONFIG_ARM_SSBD
>   
>   enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME;
> @@ -419,6 +439,7 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>           .capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
>           MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
>                      (1 << MIDR_VARIANT_SHIFT) | 2),
> +        .enable = warn_device_load_acquire_errata,
>       },
>   #endif
>   #ifdef CONFIG_ARM64_ERRATUM_834220
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:24:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:24:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6634.17570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSetl-0000km-KH; Wed, 14 Oct 2020 11:24:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6634.17570; Wed, 14 Oct 2020 11:24:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSetl-0000kf-H1; Wed, 14 Oct 2020 11:24:49 +0000
Received: by outflank-mailman (input) for mailman id 6634;
 Wed, 14 Oct 2020 11:24:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSetk-0000ka-Bp
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:24:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1391d2b-64bd-4a50-b629-61271b6c9aaa;
 Wed, 14 Oct 2020 11:24:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSeth-0002mr-Ug; Wed, 14 Oct 2020 11:24:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSeth-0007dD-NB; Wed, 14 Oct 2020 11:24:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSeth-0000H7-Mg; Wed, 14 Oct 2020 11:24:45 +0000
X-Inumbo-ID: e1391d2b-64bd-4a50-b629-61271b6c9aaa
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mnlfspOPqr3tAXDI7eeXnhIszq27KfbgN+xtQ4M7Ruw=; b=We+sk+AvpSBkUyiTY+fTGsTiMf
	+9VyPA+bdvqS2toZ7MfEd82FMwdvcOhq3SBbWWpAB1zxQtvjHlmaZsvAz25NXSYjBVtBXLhJm206T
	YxXk9ztjJDlFlicWXgT2CuXBaAzJT7fPVuqnDVjK6zEFUARM8zGdzb131MmYbYONgn44=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155793-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155793: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=5d787acbf03297369799ff40d3f27db0fea46e99
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 11:24:45 +0000

flight 155793 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155793/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              5d787acbf03297369799ff40d3f27db0fea46e99
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   96 days
Failing since        151818  2020-07-11 04:18:52 Z   95 days   90 attempts
Testing same since   155793  2020-10-14 04:20:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 21083 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:35:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:35:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6666.17584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSf41-0001jM-Kw; Wed, 14 Oct 2020 11:35:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6666.17584; Wed, 14 Oct 2020 11:35:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSf41-0001jF-Hd; Wed, 14 Oct 2020 11:35:25 +0000
Received: by outflank-mailman (input) for mailman id 6666;
 Wed, 14 Oct 2020 11:35:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSf40-0001jA-BA
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:35:24 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f58c69ee-8373-4657-a52a-65d125a55e4e;
 Wed, 14 Oct 2020 11:35:22 +0000 (UTC)
X-Inumbo-ID: f58c69ee-8373-4657-a52a-65d125a55e4e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602675323;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=I2rD9uri+Q+irdWguf9s0RoPX7S6hAcBXSic7kVKv8w=;
  b=iWWoGogMIehUKt6v1yh/+EMfZv/0C3nkeHs/ncegKpLowGMbSAj7e48h
   anFVEl8yf7w0FrA4XK+1FUtLwJm93J2isx5F3DxLyWnAhKW2HTvpO0nIk
   x8NhUhIB79o82md5/IsS5M7vJGGj8HSZIFmca2/LpoCRN/3JAfb0ZunrM
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: sTMiV0CrnjHyYLETWirfhUTpObTkYBt5Z8zksrqayeCnzf7NCn2EUHPyFV3jKVsTEWKMzKgH9T
 bRwbdfaATlHWh8kRhdaOdin0v+2yP22RqMdyQ/hYkjmZ7tOo83lRbor30d7qN6/7denJNSk5hb
 VwfuhvIvXyZkQZhwcHWBpObuL11b9yeJcsnAaBUHM3vgSbF0qrOOEscyp+Ew9RFR4v7GA7Xmsa
 GFQxALddI3ELBToF96rC1HD9vnB5skpAxX7Y+KiatIKBdog71K3rcOyWgloqpmtAs+nuSZxMeK
 nKg=
X-SBRS: 2.5
X-MesageID: 29303081
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,374,1596513600"; 
   d="scan'208";a="29303081"
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
To: Bertrand Marquis <bertrand.marquis@arm.com>,
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
Date: Wed, 14 Oct 2020 12:35:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 11:41, Bertrand Marquis wrote:
> When a Cortex-A57 processor is affected by CPU erratum 832075, a guest
> not implementing the workaround for it could deadlock the system.
> Add a warning during boot informing the user that only trusted guests
> should be run on the system.
> An equivalent warning is already given to the user by KVM on cores
> affected by this erratum.
>
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>  xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 6c09017515..8f9ab6dde1 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>  
>  #endif
>  
> +#ifdef CONFIG_ARM64_ERRATUM_832075
> +
> +static int warn_device_load_acquire_errata(void *data)
> +{
> +    static bool warned = false;
> +
> +    if ( !warned )
> +    {
> +        warning_add("This CPU is affected by the errata 832075.\n"
> +                    "Guests without required CPU erratum workarounds\n"
> +                    "can deadlock the system!\n"
> +                    "Only trusted guests should be used on this system.\n");
> +        warned = true;

This is an antipattern, which probably wants fixing elsewhere as well.

warning_add() is __init.  It's not legitimate to call it from a non-init
function, and a less useless build system would have a modpost-style check
to object.

The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
but this provides no safety at all.


What warning_add() actually does is queue messages to be printed at some
point near the end of boot.  It's not clear that this is even a clever
thing to do.

I'm very tempted to suggest a blanket change to printk_once().
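Roughly the shape in question — a minimal, illustrative sketch of a
Linux-style printk_once(), with printk() stubbed out so the snippet builds
standalone (the names and layout here are assumptions for illustration, not
Xen's actual code):

```c
#include <stdbool.h>
#include <stdio.h>

/* Count emitted messages so the "once" behaviour is observable. */
static int printk_calls;

/* Stand-in for Xen's printk(); illustration only. */
#define printk(fmt, ...) \
    (printk_calls++, fprintf(stderr, fmt, ##__VA_ARGS__))

/*
 * printk_once(): emit a message at most once, however often the code
 * path runs.  Unlike warning_add(), this is safe outside __init.
 */
#define printk_once(fmt, ...)                   \
    do {                                        \
        static bool once_;                      \
        if ( !once_ )                           \
        {                                       \
            once_ = true;                       \
            printk(fmt, ##__VA_ARGS__);         \
        }                                       \
    } while ( 0 )

static int warn_device_load_acquire_errata(void *data)
{
    (void)data;
    printk_once("CPU erratum 832075: only run trusted guests\n");
    return 0;
}
```

Called from an .enable hook, the message is printed on the first affected
CPU and skipped on every subsequent call, with no reliance on init-section
lifetime.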

~Andrew

> +    }
> +
> +    return 0;
> +}
> +
> +#endif
> +
>  #ifdef CONFIG_ARM_SSBD
>  
>  enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME;
> @@ -419,6 +439,7 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>          .capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
>          MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
>                     (1 << MIDR_VARIANT_SHIFT) | 2),
> +        .enable = warn_device_load_acquire_errata,
>      },
>  #endif
>  #ifdef CONFIG_ARM64_ERRATUM_834220



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:36:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6668.17597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSf4x-0001pN-Ug; Wed, 14 Oct 2020 11:36:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6668.17597; Wed, 14 Oct 2020 11:36:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSf4x-0001pG-Rc; Wed, 14 Oct 2020 11:36:23 +0000
Received: by outflank-mailman (input) for mailman id 6668;
 Wed, 14 Oct 2020 11:36:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSf4w-0001pA-LJ
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:36:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e993ef5a-aa32-4256-b851-5a80b1fabf05;
 Wed, 14 Oct 2020 11:36:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSf4u-00031f-9y; Wed, 14 Oct 2020 11:36:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSf4t-0008Rm-W6; Wed, 14 Oct 2020 11:36:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSf4t-0007iX-Vb; Wed, 14 Oct 2020 11:36:19 +0000
X-Inumbo-ID: e993ef5a-aa32-4256-b851-5a80b1fabf05
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Kl2w5W86F/Z/B881wOHervggE4eO/pdodlnys8gMB+s=; b=R2YYOznMV5/1Hkk950rM8b78KL
	WWyWgAbQWfpwmpqXQtN6KloPBW6LK+ZbSKSSViGg6LYE7jITduOqPKxgF9beeT1tUaEZyG/B9RRtY
	9yJWkzlVoqvNxBgANzvOuJMsbrSBzQaw6qzOLEutm+LGUW3WUiDOPNIw9F9vld+75BUU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155788-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155788: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-pair:guest-migrate/src_host/dst_host:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:debian-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-pygrub:guest-start:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 11:36:19 +0000

flight 155788 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155788/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair 26 guest-migrate/src_host/dst_host fail in 155759 pass in 155788
 test-amd64-i386-xl-qemut-win7-amd64 12 windows-install fail in 155759 pass in 155788
 test-amd64-amd64-examine    4 memdisk-try-append fail in 155759 pass in 155788
 test-amd64-amd64-xl-pvhv2-intel 12 debian-install          fail pass in 155759
 test-armhf-armhf-xl-arndale   8 xen-boot                   fail pass in 155759
 test-amd64-amd64-pygrub      13 guest-start                fail pass in 155759

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 155759 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 155759 never pass
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155712
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155759
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155759
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155759
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155759
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155759
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155759
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155759
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155759
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155788  2020-10-14 01:52:30 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:41:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:41:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6674.17610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSf9W-0002je-LJ; Wed, 14 Oct 2020 11:41:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6674.17610; Wed, 14 Oct 2020 11:41:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSf9W-0002jX-IT; Wed, 14 Oct 2020 11:41:06 +0000
Received: by outflank-mailman (input) for mailman id 6674;
 Wed, 14 Oct 2020 11:41:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSf9V-0002jS-Hc
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:41:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 283f4fec-7258-4786-9088-d91d87cc3cf7;
 Wed, 14 Oct 2020 11:41:01 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSf9P-00037P-Cu; Wed, 14 Oct 2020 11:40:59 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSf9P-0005pj-4D; Wed, 14 Oct 2020 11:40:59 +0000
X-Inumbo-ID: 283f4fec-7258-4786-9088-d91d87cc3cf7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zEPrZuDvaJqkHFFUaNgQBwAmfNdBuHo4a9XYYZXFbW8=; b=nndr3TCQ5oGl54m9yz72KwrcS5
	Jtvm7ppGKTEgqO9vfHWlAQmSoFowGYkW/XZ7zlmrMA1kdJgBUWy74qXA9VkwuBDtutVTpQ3EY5QAb
	KTgFT0QppwwNG1vo5nsxhIU3bINrkhfJtySoaLjP95fmxUUawZx9ZCfk0MrevZTo+ALg=;
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Jan Beulich <jbeulich@suse.com>, Jürgen Groß
 <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
Date: Wed, 14 Oct 2020 12:40:57 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 13/10/2020 15:26, Jan Beulich wrote:
> On 13.10.2020 16:20, Jürgen Groß wrote:
>> On 13.10.20 15:58, Jan Beulich wrote:
>>> On 12.10.2020 11:27, Juergen Gross wrote:
>>>> The queue for a fifo event depends on the vcpu_id and the
>>>> priority of the event. When sending an event it might happen that
>>>> the event needs to change queues, and the old queue needs to be
>>>> kept around so that the links between queue elements stay intact.
>>>> For this purpose the event channel contains last_priority and
>>>> last_vcpu_id values which allow the old queue to be identified.
>>>>
>>>> In order to avoid races, always access last_priority and
>>>> last_vcpu_id with a single atomic operation, avoiding any
>>>> inconsistencies.
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>>> I seem to vaguely recall that at the time this seemingly racy
>>> access was done on purpose by David. Did you go look at the
>>> old commits to understand whether there really is a race which
>>> can't be tolerated within the spec?
>>
>> At least the comments in the code tell us that the race regarding
>> the writing of priority (not last_priority) is acceptable.
> 
> Ah, then it was comments. I knew I read something to this effect
> somewhere, recently.
> 
>> Especially Julien was rather worried by the current situation. In
>> case you can convince him the current handling is fine, we can
>> easily drop this patch.
> 
> Julien, in the light of the above - can you clarify the specific
> concerns you (still) have?

Let me start with the assumption that evtchn->lock is not held when 
evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.

From my understanding, the goal of lock_old_queue() is to return the 
old queue that was used.

last_priority and last_vcpu_id may be updated separately and I could not 
convince myself that it would not be possible to return a queue that is 
neither the current one nor the old one.

The following could happen if evtchn->priority and 
evtchn->notify_vcpu_id keep changing between calls.

pCPU0				| pCPU1
				|
evtchn_fifo_set_pending(v0,...)	|
				| evtchn_fifo_set_pending(v1, ...)
  [...]				|
  /* Queue has changed */	|
  evtchn->last_vcpu_id = v0 	|
				| -> lock_old_queue()
				| v = d->vcpu[evtchn->last_vcpu_id];
   				| old_q = ...
				| spin_lock(old_q->...)
				| v = ...
				| q = ...
				| /* q and old_q would be the same */
				|
  evtchn->last_priority = priority|

If my diagram is correct, then pCPU1 would return a queue that is 
neither the current one nor the old one.

In which case, I think it would at least be possible to corrupt the 
queue. From evtchn_fifo_set_pending():

         /*
          * If this event was a tail, the old queue is now empty and
          * its tail must be invalidated to prevent adding an event to
          * the old queue from corrupting the new queue.
          */
         if ( old_q->tail == port )
             old_q->tail = 0;

Did I miss anything?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 11:58:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 11:58:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6678.17622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSfQJ-0003oT-61; Wed, 14 Oct 2020 11:58:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6678.17622; Wed, 14 Oct 2020 11:58:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSfQJ-0003oM-2g; Wed, 14 Oct 2020 11:58:27 +0000
Received: by outflank-mailman (input) for mailman id 6678;
 Wed, 14 Oct 2020 11:58:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSfQH-0003oH-AB
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 11:58:25 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.49]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81cb6f32-f4d8-41ca-8a95-5de3b6ee41d0;
 Wed, 14 Oct 2020 11:58:24 +0000 (UTC)
Received: from MR2P264CA0015.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:1::27) by
 AM0PR08MB4241.eurprd08.prod.outlook.com (2603:10a6:208:140::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.24; Wed, 14 Oct
 2020 11:58:22 +0000
Received: from VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:1:cafe::d9) by MR2P264CA0015.outlook.office365.com
 (2603:10a6:500:1::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Wed, 14 Oct 2020 11:58:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT054.mail.protection.outlook.com (10.152.19.64) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 11:58:21 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64");
 Wed, 14 Oct 2020 11:58:20 +0000
Received: from 3086894c57e0.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DA500688-6C45-434D-B3E8-3DBB84B5BA40.1; 
 Wed, 14 Oct 2020 11:58:15 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3086894c57e0.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 14 Oct 2020 11:58:15 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1910.eurprd08.prod.outlook.com (2603:10a6:4:75::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 14 Oct
 2020 11:58:14 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Wed, 14 Oct 2020
 11:58:13 +0000
X-Inumbo-ID: 81cb6f32-f4d8-41ca-8a95-5de3b6ee41d0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iXDvGh9JfxV1PiZF6bms5+G/a1l1NrVGtErsneUHUUs=;
 b=vrOWViof/DjkMJGQHxiNSHc/YzXK5x2ftCEO4lFjpIKHI7L3knGfXtBl5O+QIMniylzuBHW/86NNHGKeT+srYlmjbqT0yXXueVlPnpSdBpgy++uMzMtwXB/G743yyA9jxu8izYNzpXXjdFutSLhc37rfrwH0OxrAwwsMA5LMAkM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 81ddd6b24c47d92e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lELGnCdbQfB583zZp3Abo5qj0z7qT7k9wuhKuaxc5Pws5GUpmDoI0F7+GjesBfxSdiB0pANigyQ5/GKz5N+dSdpejnA0x8QKVV4Vyn+958Mq2TBaSGeeKyth2eNyChhKXRdOe4E3jp4m9beZztdkhJ26UWE47PJg6NsdrIgXt0S8plW5ALiHoBKFjbMA7Q81/Vf29y74vDS0Gc3hsiS6EsaeLmQDMszdKUwViikLBWtS5cOywT5prL/bXr6OI5IgTuI15M8GmXy56hVKIS5FDPhix7PZfC7OoRtQd8P9jOE1O54zm4m+uasrbTMziNZ/erRi9eXAJLzuEPymXvclNA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iXDvGh9JfxV1PiZF6bms5+G/a1l1NrVGtErsneUHUUs=;
 b=IQl7/H2BBklCEUQb31ltTNoW7O8bQPM6/AIOSDkFI+dze8nXtmWtbnVEKgSrUK8JAZAsVg66lkvX0P4Ql867hZyaTcoEPWqWdhsjiyFVtscNe89tIluWPHEn59pU2T3P8TNU0NXrHgR/JrneoHVeS1ljoqZcfwvrZ/6a2nvk6A24wD5ZYm9zVJicP5VahdUVu8KBim0ixHh+pets898yAEvfvpfiBXTOeNvrvu9mki66Mhe2xWh4pWSi0ulEK/5poLbfVf9Z0AYZHtfgxCiV1V/d9eFEQcXk2f63q/kH0yWgD4rA3hfg4Nse3mHOhZ+OlJNHankk3KLstvw0gV17/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/xenpmd: Fix gcc10 snprintf warning
Thread-Topic: [PATCH] tools/xenpmd: Fix gcc10 snprintf warning
Thread-Index: AQHWoheQbJERqQUoxES88omBLvhPP6mW73mAgAAO7YA=
Date: Wed, 14 Oct 2020 11:58:13 +0000
Message-ID: <23791043-A851-4DF2-A3B7-23D89EEDCE41@arm.com>
References:
 <0ade4264c537819c3dd45179fcea2723df66b045.1602672245.git.bertrand.marquis@arm.com>
 <74625fd9-f2a3-14be-714a-3cfb705434cc@xen.org>
In-Reply-To: <74625fd9-f2a3-14be-714a-3cfb705434cc@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 83df29aa-bdcb-4a12-b3b0-08d8703872b0
x-ms-traffictypediagnostic: DB6PR0801MB1910:|AM0PR08MB4241:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB424194F3A7A5F3411AF24D7D9D050@AM0PR08MB4241.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 aO4nZAR5JfXQn0BCdEvkCtWfofxfZusZaS7pM3+CEDgJfcORpHpnlaHLSubqAQ3l02vbKbYiFXBBiD9aJHrm3ab4hvRPeuFgpx7eSil/+Z25/9uuqAQpCo1lV/n6L7nWcNqJGq8gjnsMDD8Ud3eOtu7MvALOA8aJBDUBC01Dsq0rFpN/TN1lPLtvowCzpKtohUByOv58WYyUrpNXRboSD3RmYbou0GhCOZbH/s5QeSewtTv4/hcnOBvL8Gf1VsumGCF0JMoHM2m13uaf/6rnPaH/ySUiTDfXBFYORBmRWN7s63/DID9unqfoQnQPlh7zUZqv3mxuiiCI/Ie3HvQU4g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(366004)(396003)(376002)(39860400002)(6486002)(186003)(316002)(54906003)(6916009)(2616005)(64756008)(2906002)(4326008)(66446008)(66476007)(66556008)(6512007)(33656002)(76116006)(91956017)(83380400001)(66946007)(86362001)(71200400001)(478600001)(36756003)(5660300002)(8936002)(26005)(53546011)(6506007)(8676002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 4sqiHFjI6YOCdWwFlZCnYDOcIdsRB8JI2ktG9bHwzAF4MH1LDaR5LmvlJBC2ZN+Qche0bph4fdF5WxribbhckT8NYFQEJrvGMUiKSDL/YjZh/Y5Mr0F0pCpl57rJI5vDzaCOxinCwc/0iS/hKO3L3JYKiGVCEsQrogsl8pwoCBJSs4zPnHaGPC20GIGHYH7vaf7vf9yxE1yip+To8bpqmDDLtCn554NBLatLi4wORQVoVUu6CXAesggOW/8HIlWHWGz2o8zRcP6/nupBgLi51YBl2rekNIoxQvrr5dggv9/DksZMg3EMMOijfyFx8SxmrGEEyJ+z/BGGRzFT0nejf5xvAUsytAAmvC96Ij3FqVuFC9rjTh4TsharkcRbiJ8CZaYF1AcWxy1oi462v5x2Owb63HvzaKic1Lh1IeqwfVoMeIKDNPtrlyffD0b2shTKCbs9TNY00dUHZqcX/ddOjGLn5RFhxJFw4z8HzQkKtS0AOpUL8W2XiU28yZbZJlzVMqLjRdTsLbuZ2HKYP6KSQvdbkiobekWK6YVYsSyiIK8j1KZV9QMPPn9znoYnmHtuFzPthEQdALpw/Hb/A9dZ+fUEBFSlKiE4UmC6FZlR7OJXKOM1jksmCheUU1i+WRHKz88Qe6/DPtsGTCaaoYoa+w==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <45AC6BD786DE984EAD4848443DC7ABBB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1910
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d5918a61-fd7d-4f14-1c08-08d870386e01
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Kbp7e0Vq+uy2bLpvXRfaNMT+gBI9HZ8P6MePgrA1EqCnsGvhNelhC4CU+LFBEyBZB+UV/NM9PD4F/RVZJ5vbxgYDiCwrDzUen3w8BXNSyEMoywbatEgOmHUWojxRuqZuNrB/aahrRzOXJs8jmxd6LCjG/2iP1yjp7pg8Ql8odVZWrmEoBuHlZ8KREkYuaSA4K+hkwr79QvdTF1WFZHhSXxBjBFSv9IA4lOdWP6LrXxeV/BEM1qVmtJhQ1kry/QOoBnLmQILcJBYhG75pOBfi0dTBn3tmqWrvKNkB2ccLxMf4fI+Qc6J2jZ9h2E6FQOILWW0MZloR6qVpXpRCc1VxL2YXNQIDA9sx9MtH7SEkFwdLJJVkWYt5nHkAqFu/6icD3O9jQCMIvocd+gj2Kqd0uA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(376002)(39860400002)(346002)(46966005)(2616005)(4326008)(336012)(2906002)(6512007)(70206006)(478600001)(6862004)(356005)(186003)(6486002)(70586007)(36756003)(33656002)(81166007)(26005)(86362001)(6506007)(53546011)(82310400003)(8936002)(5660300002)(54906003)(47076004)(8676002)(83380400001)(316002)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Oct 2020 11:58:21.6468
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 83df29aa-bdcb-4a12-b3b0-08d8703872b0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4241

Hi Julien,

> On 14 Oct 2020, at 12:04, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 14/10/2020 11:47, Bertrand Marquis wrote:
>> Add a check for snprintf return code and ignore the entry if we get an
>> error. This should in fact never happen and is more a trick to make gcc
>> happy and prevent compilation errors.
>> This is solving the gcc warning:
>> xenpmd.c:92:37: error: '%s' directive output may be truncated writing
>> between 4 and 2147483645 bytes into a region of size 271
>> [-Werror=format-truncation=]
> 
> IIRC, this is only affecting GCC when building for Arm32 *and* when the
> optimizer is enabled. If so, it would be good to add more details in the
> commit message.

I can confirm this is the only build catching it on my side.

I will modify the commit message to say that the problem was encountered on
an arm32 build with the optimizer enabled.

> 
> I would also suggest to link to the bug reported on Debian.

I will add it to the commit message.

Cheers
Bertrand

> 
> Cheers,
> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>  tools/xenpmd/xenpmd.c | 9 +++++++--
>>  1 file changed, 7 insertions(+), 2 deletions(-)
>> diff --git a/tools/xenpmd/xenpmd.c b/tools/xenpmd/xenpmd.c
>> index 35fd1c931a..12b82cf43e 100644
>> --- a/tools/xenpmd/xenpmd.c
>> +++ b/tools/xenpmd/xenpmd.c
>> @@ -102,6 +102,7 @@ FILE *get_next_battery_file(DIR *battery_dir,
>>      FILE *file = 0;
>>      struct dirent *dir_entries;
>>      char file_name[284];
>> +    int ret;
>> 
>>      do
>>      {
>> @@ -111,11 +112,15 @@ FILE *get_next_battery_file(DIR *battery_dir,
>>          if ( strlen(dir_entries->d_name) < 4 )
>>              continue;
>>          if ( battery_info_type == BIF )
>> -            snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
>> +            ret = snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
>>                       dir_entries->d_name);
>>          else
>> -            snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
>> +            ret = snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
>>                       dir_entries->d_name);
>> +        /* This should not happen but is needed to pass gcc checks */
>> +        if (ret < 0)
>> +            continue;
>> +        file_name[sizeof(file_name) - 1] = '\0';
>>          file = fopen(file_name, "r");
>>      } while ( !file );
>> 
> 
> Cheers,
> 
> -- 
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 12:17:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 12:17:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6708.17636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSfiZ-0005fZ-6p; Wed, 14 Oct 2020 12:17:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6708.17636; Wed, 14 Oct 2020 12:17:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSfiZ-0005fS-3z; Wed, 14 Oct 2020 12:17:19 +0000
Received: by outflank-mailman (input) for mailman id 6708;
 Wed, 14 Oct 2020 12:17:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSfiX-0005fN-QQ
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 12:17:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9234147b-39c3-4cf8-b402-42c39acdc25d;
 Wed, 14 Oct 2020 12:17:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 355ACAFB2;
 Wed, 14 Oct 2020 12:17:16 +0000 (UTC)
X-Inumbo-ID: 9234147b-39c3-4cf8-b402-42c39acdc25d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602677836;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+t59B7xQEmr/yMT4LOu0Kw/sh/p5DegGVYVvv2zAHWQ=;
	b=msCB71fojTY9lt9dWvTlaFw/Hb3XHiWAjAMeWoZpF3GWV6Otyh6U8clKvICYjd7WMNOJLs
	I+q0k4PIxVgubpta6weQFZbbgmOoBg78BgqooNTeDcnm2ISjLQXiZ1t8pW35axsPSocXnx
	MqeQ2hinpnYC1TIv2TGdzJ/F8FGqtBo=
Subject: Re: [PATCH] x86/msr: handle IA32_THERM_STATUS
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20201007102032.98565-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1e694350-4665-a1e7-20a4-f68cbee34dd1@suse.com>
Date: Wed, 14 Oct 2020 14:17:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201007102032.98565-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.10.2020 12:20, Roger Pau Monne wrote:
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -253,6 +253,12 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
>              break;
>          goto gp_fault;
>  
> +    case MSR_IA32_THERM_STATUS:
> +        if ( cp->x86_vendor != X86_VENDOR_INTEL )
> +            goto gp_fault;
> +        *val = 0;
> +        break;

I've been puzzled while applying this: The upper patch context doesn't
match what's been in master for about the last month, and hence I
wonder what version of the tree you created this patch against. In any
event please double check that I didn't screw it up.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 12:18:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 12:18:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6711.17650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSfk2-0005pF-JH; Wed, 14 Oct 2020 12:18:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6711.17650; Wed, 14 Oct 2020 12:18:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSfk2-0005p8-GI; Wed, 14 Oct 2020 12:18:50 +0000
Received: by outflank-mailman (input) for mailman id 6711;
 Wed, 14 Oct 2020 12:18:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSfk1-0005oX-AQ
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 12:18:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 447f50b8-122d-4003-922f-ffd46efb394f;
 Wed, 14 Oct 2020 12:18:41 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSfjt-0003ue-Gm; Wed, 14 Oct 2020 12:18:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSfjt-0002Ex-97; Wed, 14 Oct 2020 12:18:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSfjt-0007HV-8d; Wed, 14 Oct 2020 12:18:41 +0000
X-Inumbo-ID: 447f50b8-122d-4003-922f-ffd46efb394f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2KcmFe5t7nf6aFmpQJB24VNbU/XbY3vjbKi/NPEE67M=; b=lsxG5tFthkKYTzPJhh3bInZyiH
	cNLtnrH8Z4vMqe+pHU7N6voRr5+JhjI24FcCOJ4mvt9BG3GNfUL7oLsQWZsqbeRqEIo7iRtv3nyy1
	pHyEYj+MuMMpgpkhhC4oXnarKXIV6Aw55gLNb1xAkocje2T6kvFbb+od8ODLvWquZqEU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155800-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155800: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e5a9d0e6886f521453a63a2854ff6d06fa0d028
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 12:18:41 +0000

flight 155800 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155800/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 155584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e5a9d0e6886f521453a63a2854ff6d06fa0d028
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    5 days
Failing since        155612  2020-10-09 18:01:22 Z    4 days   35 attempts
Testing same since   155779  2020-10-13 17:01:26 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9e5a9d0e6886f521453a63a2854ff6d06fa0d028
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 15:57:51 2020 +0100

    build: always use BASEDIR for xen sub-directory
    
    Modify Makefiles using $(XEN_ROOT)/xen to use $(BASEDIR) instead.
    
    This removes the dependency on the xen subdirectory, preventing a
    wrong configuration file from being used when the xen subdirectory is
    duplicated for compilation tests.
    
    BASEDIR is set in xen/lib/x86/Makefile as this Makefile is directly
    called from the tools build and install process and BASEDIR is not set
    there.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a95f31376ba4ae911536c647e1a583d144ccab73
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:25 2020 -0400

    golang/xenlight: standardize generated code comment
    
    There is a standard format for generated Go code header comments, as set
    by [1]. Modify gengotypes.py to follow this standard, and use the
    additional
    
      // source: <IDL file basename>
    
    convention used by protoc-gen-go.
    
    This change is motivated by the fact that since 41aea82de2, the comment
    would include the absolute path to libxl_types.idl, therefore creating
    unintended diffs when generating code across different machines. This
    approach fixes that problem.
    
    [1] https://github.com/golang/go/issues/13560
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
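
For reference, the standard header format from golang/go#13560, combined with
the protoc-gen-go "source" line this commit adopts, would look roughly like the
following (a sketch only; the exact text gengotypes.py emits is in the patch
itself, and the package name here is assumed):

```go
// Code generated by gengotypes.py. DO NOT EDIT.
// source: libxl_types.idl

package xenlight
```

Go tooling recognizes files whose first comment matches
`^// Code generated .* DO NOT EDIT\.$`, which is why the exact wording matters.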

commit c60f9e4360ec857bb0164387378e12ae8e66e189
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Sun Oct 11 19:31:24 2020 -0400

    golang/xenlight: do not hard code libxl dir in gengotypes.py
    
    Currently, in order to 'import idl' in gengotypes.py, we derive the path
    of the libxl source directory from the XEN_ROOT environment variable, and
    append that to sys.path so python can see idl.py. Since the recent move of
    libxl to tools/libs/light, this hard coding breaks the build.
    
    Instead, check for the environment variable LIBXL_SRC_DIR, but move this
    check to a try-except block (with empty except). This simply makes the
    real error more visible, and does not strictly require that
    LIBXL_SRC_DIR is used. Finally, update the Makefile to set LIBXL_SRC_DIR
    rather than XEN_ROOT when calling gengotypes.py.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
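
The lookup described in the commit can be sketched as follows (a minimal
illustration of the pattern, not the actual gengotypes.py code; the helper
name is made up):

```python
import os
import sys

def add_libxl_src_dir(environ=os.environ):
    """Append LIBXL_SRC_DIR to sys.path so a later "import idl" can find
    idl.py; if the variable is unset, leave sys.path alone so the real
    ImportError surfaces later instead of a KeyError here."""
    try:
        sys.path.append(environ["LIBXL_SRC_DIR"])
    except KeyError:
        pass  # not strictly required; "import idl" may still succeed

add_libxl_src_dir()
```

The Makefile would then set the variable when invoking the generator, e.g.
`LIBXL_SRC_DIR=$(XEN_ROOT)/tools/libs/light python gengotypes.py ...`
(path taken from the commit text).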

commit 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 11 14:24:01 2020 +0200

    tools/libs/store: add disclaimer to header file regarding ignored options
    
    Add a disclaimer to the libxenstore header file that all of the open
    flags (socket only connection, read only connection) are ignored.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 1b810a9d5a39230e76073b1a753cd2c34ded65fc
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu Oct 1 19:53:37 2020 -0400

    libxl: only query VNC when enabled
    
    QEMU without VNC support (configure --disable-vnc) will return an error
    when VNC is queried over QMP since it does not recognize the QMP
    command.  This will cause libxl to fail starting the domain even if VNC
    is not enabled.  Therefore only query QEMU for VNC support when using
    VNC, so a VNC-less QEMU will function in this configuration.
    
    'goto out' jumps to the call to device_model_postconfig_done(), the
    final callback after the chain of vnc queries.  This bypasses all the
    QMP VNC queries.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8a62dee9ceff3056c7e0bd9632bac39bee2a51b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 2 12:30:34 2020 +0200

    x86/vLAPIC: don't leak regs page from vlapic_init() upon error
    
    Fixes: 8a981e0bf25e ("Make map_domain_page_global fail")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 8a71d50ed40bfa78c37722dc11995ac2563662c3
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:21 2020 -0400

    efi: Enable booting unified hypervisor/kernel/initrd images
    
    This patch adds support for bundling the xen.efi hypervisor, the xen.cfg
    configuration file, the Linux kernel and initrd, as well as the XSM,
    and architecture-specific files into a single "unified" EFI executable.
    This allows an administrator to update the components independently
    without needing to rebuild xen, as well as to replace the components
    in an existing image.
    
    The resulting EFI executable can be invoked directly from the UEFI Boot
    Manager, removing the need to use a separate loader like grub as well
    as removing dependencies on local filesystem access.  And since it is
    a single file, it can be signed and validated by UEFI Secure Boot without
    requiring the shim protocol.
    
    It is inspired by systemd-boot's unified kernel technique and borrows the
    function to locate PE sections from systemd's LGPL'ed code.  During EFI
    boot, Xen looks at its own loaded image to locate the PE sections for
    the Xen configuration (`.config`), dom0 kernel (`.kernel`), dom0 initrd
    (`.ramdisk`), and XSM config (`.xsm`), which are included after building
    xen.efi using objcopy to add named sections for each input file.
    
    For x86, the CPU ucode can be included in a section named `.ucode`,
    which is loaded in the efi_arch_cfg_file_late() stage of the boot process.
    
    On ARM systems the Device Tree can be included in a section named
    `.dtb`, which is loaded during the efi_arch_cfg_file_early() stage of
    the boot process.
    
    Note that the system will fall back to loading files from disk if
    the named sections do not exist. This allows distributions to continue
    with the status quo if they want a signed kernel + config, while still
    allowing a user provided initrd (which is how the shim protocol currently
    works as well).
    
    This patch also adds constness to the section parameter of
    efi_arch_cfg_file_early() and efi_arch_cfg_file_late(),
    changes pe_find_section() to use a const CHAR16 section name,
    and adds pe_name_compare() to match section names.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    [Fix ARM build by including pe.init.o]
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
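
The bundling step described above amounts to adding named PE sections to
xen.efi with objcopy. A hypothetical invocation follows (file names invented;
some toolchains additionally need --change-section-vma or alignment flags, so
the documentation added by the series is authoritative):

```shell
# Wrap hypervisor, config, dom0 kernel and initrd into one signable image.
# Section names match those xen.efi searches for at boot.
objcopy \
    --add-section .config=xen.cfg \
    --add-section .kernel=vmlinuz \
    --add-section .ramdisk=initrd.img \
    xen.efi xen-unified.efi
```

If a section is absent, Xen falls back to loading that component from disk,
as noted in the commit message.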

commit 4dced5df761e36fa2561f6f0f6563b3580d95e7f
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:20 2020 -0400

    efi/boot.c: add handle_file_info()
    
    Add a separate function to display the address ranges used by
    the files and call `efi_arch_handle_module()` on the modules.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 04be2c3a067899a3860fc2c7bc7a1599502ed1c5
Author: Trammell Hudson <hudson@trmm.net>
Date:   Fri Oct 2 07:18:19 2020 -0400

    efi/boot.c: add file.need_to_free
    
    The config file, kernel, initrd, etc. should only be freed if they
    are allocated with the UEFI allocator.  On x86 the ucode, and on
    ARM the dtb, are also marked as need_to_free when allocated or
    expanded.
    
    This also fixes a memory leak in ARM fdt_increase_size() if there
    is an error in building the new device tree.
    
    Signed-off-by: Trammell Hudson <hudson@trmm.net>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit afef39241b66df7d5fd66b07dc13350370a4991a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed Apr 1 15:51:08 2020 +0100

    x86/ucode: Trivial further cleanup
    
     * Drop unused include in private.h.
     * Use explicit width integers for Intel header fields.
     * Adjust comment to better describe the extended header.
     * Drop unnecessary __packed attribute for AMD header.
     * Fix types and style.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit 8d255609930bed04c6436974bd895be9a405d0c1
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Oct 2 12:20:44 2020 +0100

    x86/hvm: Correct error message in check_segment()
    
    The error message is wrong (given AMD's older interpretation of what a NUL
    segment should contain, attribute-wise), and actively unhelpful because you
    only get it in response to a hypercall where the one piece of information you
    cannot provide is the segment selector.
    
    Fix the message to talk about segment attributes, rather than the selector.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 12:43:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 12:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6739.17713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSg7i-00009W-4Z; Wed, 14 Oct 2020 12:43:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6739.17713; Wed, 14 Oct 2020 12:43:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSg7i-00009P-1F; Wed, 14 Oct 2020 12:43:18 +0000
Received: by outflank-mailman (input) for mailman id 6739;
 Wed, 14 Oct 2020 12:43:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSg7g-00009K-MP
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 12:43:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0103dee8-2912-4e3f-b905-b41e1aff5e18;
 Wed, 14 Oct 2020 12:43:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F3578AFF4;
 Wed, 14 Oct 2020 12:43:14 +0000 (UTC)
X-Inumbo-ID: 0103dee8-2912-4e3f-b905-b41e1aff5e18
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602679395;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=a64Xv+F2mTfEjImcYxOkhYsx7ECDamiENYrjt0sYy6w=;
	b=m3zI6Lj7UbLLstO9sCdmorCMJpwBxzcpWfTuEXtIrJE83zewtKuH/N65ZS2M7pP8H46bMi
	Wq3dxlfTQShuGWbFYnsgEB1Kwy0aO2IMnj/f8vrM17dcfD6Uu5ZZh3Ja6TkAUfCIlYRNEw
	C66Zppab0Fcx8YGY9nU+9jqJg08K4g8=
Subject: Re: [PATCH 0/2] maintainers: correct some entries
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20200909115944.4181-1-jgross@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a7bf7af0-9847-a9e9-09e8-c22f451199bb@suse.com>
Date: Wed, 14 Oct 2020 14:43:14 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200909115944.4181-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Ping?

On 09.09.20 13:59, Juergen Gross wrote:
> Fix some paths after reorg of library locations, and drop unreachable
> maintainer.
> 
> Juergen Gross (2):
>    maintainers: fix libxl paths
>    maintainers: remove unreachable remus maintainer
> 
>   MAINTAINERS | 10 +++++-----
>   1 file changed, 5 insertions(+), 5 deletions(-)
> 



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 12:44:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 12:44:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6741.17725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSg8W-0000G1-Dv; Wed, 14 Oct 2020 12:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6741.17725; Wed, 14 Oct 2020 12:44:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSg8W-0000Fu-Aw; Wed, 14 Oct 2020 12:44:08 +0000
Received: by outflank-mailman (input) for mailman id 6741;
 Wed, 14 Oct 2020 12:44:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSg8V-0000Fn-Ez
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 12:44:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7263436-b274-4df4-bd27-1872374527ff;
 Wed, 14 Oct 2020 12:44:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93B2FABBD;
 Wed, 14 Oct 2020 12:44:03 +0000 (UTC)
X-Inumbo-ID: d7263436-b274-4df4-bd27-1872374527ff
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602679443;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zt8c11X/frnwitFC4Xq+aSq+45DY9Bs54ns5mbGsSQc=;
	b=dPTaMNGH6SxwDbJaasBvImBnS9urMYE8KxySDBQ7NwcHSimpuoDNe2d/zh6I5CPwJom6xb
	cSDXaQvESI5xj5YM5F98SWPDPgXiCsgP8iwG+3sY611rXHtIscgCMN5XY+PypnIdaRIwC4
	cvaW5hydIpEE0uyDz7hPGTuVDbXrCGk=
Subject: Re: [PATCH 0/3] stubdom: add xenstore pvh stubdom support
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20200923064541.19546-1-jgross@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e1eb2571-16f0-2f2f-3127-ece3ca615ea9@suse.com>
Date: Wed, 14 Oct 2020 14:44:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20200923064541.19546-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.09.20 08:45, Juergen Gross wrote:
> Add support for creating a PVH Xenstore stub-domain.
> 
> This includes building the stubdom and loading it at system boot.
> 
> It should be noted that currently this stubdom is not in a working
> state as there is some support in Mini-OS missing. I'm working on
> adding this support.
> 
> Juergen Gross (3):
>    tools/init-xenstore-domain: add logging
>    tools/init-xenstore-domain: support xenstore pvh stubdom
>    stubdom: add xenstore pvh stubdom
> 
>   .gitignore                           |   1 +
>   stubdom/Makefile                     |  31 ++++-
>   stubdom/configure                    |  47 ++++++++
>   stubdom/configure.ac                 |   1 +
>   stubdom/xenstorepvh-minios.cfg       |  10 ++
>   tools/helpers/init-xenstore-domain.c | 170 ++++++++++++++++++++-------
>   6 files changed, 213 insertions(+), 47 deletions(-)
>   create mode 100644 stubdom/xenstorepvh-minios.cfg
> 

Anything missing for this series to be committed?


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 12:44:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 12:44:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6743.17736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSg96-0000Mu-NE; Wed, 14 Oct 2020 12:44:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6743.17736; Wed, 14 Oct 2020 12:44:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSg96-0000Mm-KH; Wed, 14 Oct 2020 12:44:44 +0000
Received: by outflank-mailman (input) for mailman id 6743;
 Wed, 14 Oct 2020 12:44:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSg94-0000Me-Kh
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 12:44:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a1fff51-42f0-48cd-a768-9bc445c83ded;
 Wed, 14 Oct 2020 12:44:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BDC75ABBD;
 Wed, 14 Oct 2020 12:44:40 +0000 (UTC)
X-Inumbo-ID: 1a1fff51-42f0-48cd-a768-9bc445c83ded
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602679480;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Frn5JJNcjXrcLoxE8yQAHpNjUmsAd7Zt/Oc17SaPxXU=;
	b=Amh5CiTVXhnSBzDFhPIuicFSYtdAmbQzov7I8ZOMdqmDqVGkkgjW5gikOgyUjgEMZkc8vA
	7f1/OaVtbac+SjFBOl79K/uqQ9gAm5rbgVtndlpIGt0H2M7EZzfxNpT32UPYUwDQ372omS
	y23hEz5fZIgHwSY0sXaTcZ/CsN1/mkI=
Subject: Re: [PATCH 0/3] tools: avoid creating symbolic links during make
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <20201002142214.3438-1-jgross@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3ce2be80-f4e8-9c33-f14d-26c021db6fba@suse.com>
Date: Wed, 14 Oct 2020 14:44:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201002142214.3438-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Ping?

On 02.10.20 16:22, Juergen Gross wrote:
> The rework of the Xen library build introduced creating some additional
> symbolic links during the build process.
> 
> This series is undoing that by moving all official Xen library headers
> to tools/include and by using include paths and the vpath directive
> when access to some private headers of another directory is needed.
> 
> Juergen Gross (3):
>    tools/libs: move official headers to common directory
>    tools/libs/guest: don't use symbolic links for xenctrl headers
>    tools/libs/store: don't use symbolic links for external files
> 
>   .gitignore                                    |  5 ++--
>   stubdom/mini-os.mk                            |  2 +-
>   tools/Rules.mk                                |  5 ++--
>   tools/{libs/vchan => }/include/libxenvchan.h  |  0
>   tools/{libs/light => }/include/libxl.h        |  0
>   tools/{libs/light => }/include/libxl_event.h  |  0
>   tools/{libs/light => }/include/libxl_json.h   |  0
>   tools/{libs/light => }/include/libxl_utils.h  |  0
>   tools/{libs/light => }/include/libxl_uuid.h   |  0
>   tools/{libs/util => }/include/libxlutil.h     |  0
>   tools/{libs/call => }/include/xencall.h       |  0
>   tools/{libs/ctrl => }/include/xenctrl.h       |  0
>   .../{libs/ctrl => }/include/xenctrl_compat.h  |  0
>   .../devicemodel => }/include/xendevicemodel.h |  0
>   tools/{libs/evtchn => }/include/xenevtchn.h   |  0
>   .../include/xenforeignmemory.h                |  0
>   tools/{libs/gnttab => }/include/xengnttab.h   |  0
>   tools/{libs/guest => }/include/xenguest.h     |  0
>   tools/{libs/hypfs => }/include/xenhypfs.h     |  0
>   tools/{libs/stat => }/include/xenstat.h       |  0
>   .../compat => include/xenstore-compat}/xs.h   |  0
>   .../xenstore-compat}/xs_lib.h                 |  0
>   tools/{libs/store => }/include/xenstore.h     |  0
>   tools/{xenstore => include}/xenstore_lib.h    |  0
>   .../{libs/toolcore => }/include/xentoolcore.h |  0
>   .../include/xentoolcore_internal.h            |  0
>   tools/{libs/toollog => }/include/xentoollog.h |  0
>   tools/libs/call/Makefile                      |  3 ---
>   tools/libs/ctrl/Makefile                      |  3 ---
>   tools/libs/devicemodel/Makefile               |  3 ---
>   tools/libs/evtchn/Makefile                    |  2 --
>   tools/libs/foreignmemory/Makefile             |  3 ---
>   tools/libs/gnttab/Makefile                    |  3 ---
>   tools/libs/guest/Makefile                     | 12 ++-------
>   tools/libs/hypfs/Makefile                     |  3 ---
>   tools/libs/libs.mk                            | 10 +++----
>   tools/libs/light/Makefile                     | 27 +++++++++----------
>   tools/libs/stat/Makefile                      |  2 --
>   tools/libs/store/Makefile                     | 15 +++--------
>   tools/libs/toolcore/Makefile                  |  9 +++----
>   tools/libs/toollog/Makefile                   |  2 --
>   tools/libs/util/Makefile                      |  3 ---
>   tools/libs/vchan/Makefile                     |  3 ---
>   tools/ocaml/libs/xentoollog/Makefile          |  2 +-
>   tools/ocaml/libs/xentoollog/genlevels.py      |  2 +-
>   45 files changed, 32 insertions(+), 87 deletions(-)
>   rename tools/{libs/vchan => }/include/libxenvchan.h (100%)
>   rename tools/{libs/light => }/include/libxl.h (100%)
>   rename tools/{libs/light => }/include/libxl_event.h (100%)
>   rename tools/{libs/light => }/include/libxl_json.h (100%)
>   rename tools/{libs/light => }/include/libxl_utils.h (100%)
>   rename tools/{libs/light => }/include/libxl_uuid.h (100%)
>   rename tools/{libs/util => }/include/libxlutil.h (100%)
>   rename tools/{libs/call => }/include/xencall.h (100%)
>   rename tools/{libs/ctrl => }/include/xenctrl.h (100%)
>   rename tools/{libs/ctrl => }/include/xenctrl_compat.h (100%)
>   rename tools/{libs/devicemodel => }/include/xendevicemodel.h (100%)
>   rename tools/{libs/evtchn => }/include/xenevtchn.h (100%)
>   rename tools/{libs/foreignmemory => }/include/xenforeignmemory.h (100%)
>   rename tools/{libs/gnttab => }/include/xengnttab.h (100%)
>   rename tools/{libs/guest => }/include/xenguest.h (100%)
>   rename tools/{libs/hypfs => }/include/xenhypfs.h (100%)
>   rename tools/{libs/stat => }/include/xenstat.h (100%)
>   rename tools/{libs/store/include/compat => include/xenstore-compat}/xs.h (100%)
>   rename tools/{libs/store/include/compat => include/xenstore-compat}/xs_lib.h (100%)
>   rename tools/{libs/store => }/include/xenstore.h (100%)
>   rename tools/{xenstore => include}/xenstore_lib.h (100%)
>   rename tools/{libs/toolcore => }/include/xentoolcore.h (100%)
>   rename tools/{libs/toolcore => }/include/xentoolcore_internal.h (100%)
>   rename tools/{libs/toollog => }/include/xentoollog.h (100%)
> 



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 13:45:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 13:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6751.17753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSh5z-0005Yl-Ia; Wed, 14 Oct 2020 13:45:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6751.17753; Wed, 14 Oct 2020 13:45:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSh5z-0005Ye-FP; Wed, 14 Oct 2020 13:45:35 +0000
Received: by outflank-mailman (input) for mailman id 6751;
 Wed, 14 Oct 2020 13:45:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSh5z-0005Xk-07
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 13:45:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01f2e4b0-6a9a-4716-9f25-536b5a6de5f3;
 Wed, 14 Oct 2020 13:45:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSh5r-0005km-83; Wed, 14 Oct 2020 13:45:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSh5q-0007BE-SV; Wed, 14 Oct 2020 13:45:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSh5q-0003ZG-S0; Wed, 14 Oct 2020 13:45:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSh5z-0005Xk-07
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 13:45:35 +0000
X-Inumbo-ID: 01f2e4b0-6a9a-4716-9f25-536b5a6de5f3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 01f2e4b0-6a9a-4716-9f25-536b5a6de5f3;
	Wed, 14 Oct 2020 13:45:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2RM8zBSF/fPiw/RKiyfzvlWxRkLFhkXK3IBOMOjtgtY=; b=TqFnrA1H0dAk+/mpCEYjGijXiG
	04sLklRF6SmdjJrz+UXdkq7DEmW+kt7/y/hj0o7y0G6MQtxkwOijewimosVXNbSLt3kX9jSU6ze9k
	FMfi8e2IIztrBDzUFd+zoaMO/qDOOvJ0WaRHz2ImBSspInBSD4cgbMGBdWXkx47hzlYk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSh5r-0005km-83; Wed, 14 Oct 2020 13:45:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSh5q-0007BE-SV; Wed, 14 Oct 2020 13:45:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSh5q-0003ZG-S0; Wed, 14 Oct 2020 13:45:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155791-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155791: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-pair:guest-migrate/src_host/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=b5fc7a89e58bcc059a3d5e4db79c481fb437de59
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 13:45:26 +0000

flight 155791 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155791/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-pair 26 guest-migrate/src_host/dst_host fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate       fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                b5fc7a89e58bcc059a3d5e4db79c481fb437de59
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   74 days
Failing since        152366  2020-08-01 20:49:34 Z   73 days  125 attempts
Testing same since   155791  2020-10-14 03:12:26 Z    0 days    1 attempts

------------------------------------------------------------
2687 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 395389 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 13:57:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 13:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6755.17767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kShHQ-0006W5-NM; Wed, 14 Oct 2020 13:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6755.17767; Wed, 14 Oct 2020 13:57:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kShHQ-0006Vy-Jv; Wed, 14 Oct 2020 13:57:24 +0000
Received: by outflank-mailman (input) for mailman id 6755;
 Wed, 14 Oct 2020 13:57:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kShHO-0006Vt-Uy
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 13:57:23 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3b40573-b13e-4974-87ae-0555cc73369a;
 Wed, 14 Oct 2020 13:57:21 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kShHO-0006Vt-Uy
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 13:57:23 +0000
X-Inumbo-ID: e3b40573-b13e-4974-87ae-0555cc73369a
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e3b40573-b13e-4974-87ae-0555cc73369a;
	Wed, 14 Oct 2020 13:57:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602683841;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=JJyQ7tUp+FCSQm7uoVm5LWpXqpz9Gw0sQv0caoKYOBI=;
  b=b0z3NeRE6vzNo3FaArBvdzmsjIywBJ1+VBMS4IB0TmMqDqx2D6R38LyF
   kft1hDX2ImevcBUPqTLDdvL+ntNu4oPgkBg1CXsoITDP9fOQMwitn3htr
   SSHDOCQ9iCnC2W+dMQLV0l5haj1sUnSZyDxFJYilSxc3gP+TVxq2orAMO
   I=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: hbvgQmFay1opfKYEpyyup9fMjxU35xHRqW4Qt5PkxPJphQD1TdJtu48Wl6TB3wRbc+yCQRfD7T
 vwAY/m9clL5Nf6Ug0MB4vkzjkiMrEtFHo3IyKgwl9xONa0FsVRiZrYTiLkQX57ZQ+C3FAoaJKr
 Frblsf46cToQ0FJrHuFJSI9C9ls3Nqi03VUKu2IYUQFpfcGvXMJV4sxCC0lsmHY6rP5ZDmZxVg
 kxxWIS3a0es9rYT2hbpEeBuMPgpNyDVrkDunK5MF2e3ZZaaCaRWuxUxynUHTQfV+eSIPnr18lA
 qB8=
X-SBRS: 2.5
X-MesageID: 29236849
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,374,1596513600"; 
   d="scan'208";a="29236849"
Subject: Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
References: <20201009150948.31063-1-andrew.cooper3@citrix.com>
 <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <01bb2f27-4e0b-3637-e456-09eb7b9b233e@citrix.com>
Date: Wed, 14 Oct 2020 14:57:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 13/10/2020 16:58, Jan Beulich wrote:
> On 09.10.2020 17:09, Andrew Cooper wrote:
>> At the time of XSA-170, the x86 instruction emulator really was broken, and
>> would allow arbitrary non-canonical values to be loaded into %rip.  This was
>> fixed after the embargo by c/s 81d3a0b26c1 "x86emul: limit-check branch
>> targets".
>>
>> However, in a demonstration that off-by-one errors really are one of the
>> hardest programming issues we face, everyone involved with XSA-170, myself
>> included, mistook the statement in the SDM which says:
>>
>>   If the processor supports N < 64 linear-address bits, bits 63:N must be identical
>>
>> to mean "must be canonical".  A real canonical check is bits 63:N-1.
>>
>> VMEntries really do tolerate a not-quite-canonical %rip, specifically to cater
>> to the boundary condition at 0x0000800000000000.
>>
>> Now that the emulator has been fixed, revert the XSA-170 change to fix
>> architectural behaviour at the boundary case.  The XTF test case for XSA-170
>> exercises this corner case, and still passes.
>>
>> Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> But why revert the change rather than fix ...
>
>> @@ -4280,38 +4280,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>  out:
>>      if ( nestedhvm_vcpu_in_guestmode(v) )
>>          nvmx_idtv_handling();
>> -
>> -    /*
>> -     * VM entry will fail (causing the guest to get crashed) if rIP (and
>> -     * rFLAGS, but we don't have an issue there) doesn't meet certain
>> -     * criteria. As we must not allow less than fully privileged mode to have
>> -     * such an effect on the domain, we correct rIP in that case (accepting
>> -     * this not being architecturally correct behavior, as the injected #GP
>> -     * fault will then not see the correct [invalid] return address).
>> -     * And since we know the guest will crash, we crash it right away if it
>> -     * already is in most privileged mode.
>> -     */
>> -    mode = vmx_guest_x86_mode(v);
>> -    if ( mode == 8 ? !is_canonical_address(regs->rip)
> ... the wrong use of is_canonical_address() here? By reverting
> you open up avenues for XSAs in case we get things wrong elsewhere,
> including ...
>
>> -                   : regs->rip != regs->eip )
> ... for 32-bit guests.

Because the only appropriate alternative would be ASSERT_UNREACHABLE()
and domain crash.

This logic corrupts guest state.

Running with corrupt state is every bit as much an XSA as hitting a VMEntry
failure, if it can be triggered by userspace, but the latter is safer and
much more obvious.

It was the appropriate security fix at the time (give or take the
functional bug in it), given the complexity of retrofitting zero-length
instruction fetches to the emulator.

However, it is one of a very long list of guest-state-induced VMEntry
failures, with non-trivial logic which we assert will pass, on a
fastpath, where hardware also performs the same checks and we already
have a runtime-safe way of dealing with errors.  (Hence not actually
using ASSERT_UNREACHABLE() here.)

It isn't appropriate for this check to exist on its own (i.e. without
the other guest-state checks), and it isn't appropriate for it to live
on a fastpath.  In principle, some logic like this could live on the
VMEntry failure path to retry, but then you're still creating an XSA
situation which is less obvious, and that isn't a clever move IMO.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 14:16:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 14:16:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6758.17778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kShZw-0008LS-7z; Wed, 14 Oct 2020 14:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6758.17778; Wed, 14 Oct 2020 14:16:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kShZw-0008LL-4P; Wed, 14 Oct 2020 14:16:32 +0000
Received: by outflank-mailman (input) for mailman id 6758;
 Wed, 14 Oct 2020 14:16:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WBOb=DV=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kShZv-0008LB-7l
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 14:16:31 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28e2a1a6-6a1b-44e9-a08f-a95bd06020d8;
 Wed, 14 Oct 2020 14:16:29 +0000 (UTC)
X-Inumbo-ID: 28e2a1a6-6a1b-44e9-a08f-a95bd06020d8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602684989;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=tsRwn2NxeEksb8ZPinNdtJW/87jNTYlBuSUNtZOtQF4=;
  b=bf7F4Emfl/9hToljNgP7XRe3IqQ2CxyaUl0Pwkk2M5d/cBNixMz23fjX
   OVOSh3HB6SskFYOzPOpL7CtenOI5156FZdOj0WpMm0zzpl0UZh1c+Q4ml
   oBTz3feAokjES71Htv4Iprgso9PC9MCdLbWuXTPW6Jxk+ioS7sOGCDXs8
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: GDvo43c4oBiKaf2X8EXXP3GbDkuNl4h42mw6NBG/5af8P5OPW8+yQ+Re4HYJZ5JqEmnqjduULc
 NQPiN4YkiSY4WyS5ZWYBes6Dd1Dsa7LFtaIJSVLFZGydT26sTl0P2JT5IUlMlCyFeZvguBajmL
 ONcHQ9T5aTQ3DNqBqMvRdLNrnPsi0ZB6Uc5NMyLRTsbP55dTW0TNqZGMXRDeJ9Vs6rIuKxjolP
 JN4gbh75WemT2iaIfrwdyGdWSR53gcV/bzTLvqiH2jAugDGBpe8+C7elxPNPJg840aK/Tnpj3l
 Z3s=
X-SBRS: 2.5
X-MesageID: 29053053
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,374,1596513600"; 
   d="scan'208";a="29053053"
Date: Wed, 14 Oct 2020 16:16:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Andy Lutomirski <luto@kernel.org>,
	Manuel Bouyer <bouyer@antioche.eu.org>
Subject: Re: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
Message-ID: <20201014141620.GS19254@Air-de-Roger>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
 <20201009115301.19516-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201009115301.19516-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Fri, Oct 09, 2020 at 12:53:01PM +0100, Andrew Cooper wrote:
> Despite appearing to be a deliberate design choice of early PV64, the
> resulting behaviour for unregistered SYSCALL callbacks creates an untenable
> testability problem for Xen.  Furthermore, the behaviour is undocumented,
> bizarre, and inconsistent with related behaviour in Xen, and very liable to
> introduce a security vulnerability into a PV guest if the author hasn't
> studied Xen's assembly code in detail.
> 
> There are two different bugs here.
> 
> 1) The current logic confuses the registered entrypoints, and may deliver a
>    SYSCALL from 32bit userspace to the 64bit entry, when only a 64bit
>    entrypoint is registered.
> 
>    This has been the case ever since 2007 (c/s cd75d47348b) but up until
>    2018 (c/s dba899de14) the wrong selectors would be handed to the guest for
>    a 32bit SYSCALL entry, making it appear as if it were a 64bit entry all along.
> 
>    Xen would malfunction under these circumstances, if it were a PV guest.
>    Linux would as well, but PVOps has always registered both entrypoints and
>    discarded the Xen-provided selectors.  NetBSD really does malfunction as a
>    consequence (benignly now, but a VM DoS before the 2018 Xen selector fix).
> 
> 2) In the case that neither SYSCALL callback is registered, the guest will
>    be crashed when userspace executes a SYSCALL instruction, which is a
>    userspace => kernel DoS.
> 
>    This has been the case ever since the introduction of 64bit PV support, but
>    behaves unlike all other SYSCALL/SYSENTER callbacks in Xen, which yield
>    #GP/#UD in userspace before the callback is registered, and are therefore
>    safe by default.

This seems fairly reasonable, as it turns a guest crash into an #UD
AFAICT.
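The two quoted failure modes both come down to entrypoint selection: never
deliver a 32bit SYSCALL to the 64bit callback, and inject #UD rather than
crashing when no callback is registered.  A minimal sketch of that
"safe by default" dispatch (hypothetical names and types, not Xen's actual
entry code):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-guest callback registration state. */
struct syscall_callbacks {
    uint64_t syscall64;   /* 0 => unregistered */
    uint64_t syscall32;   /* 0 => unregistered */
};

/*
 * Pick the entrypoint for a SYSCALL from userspace.  Returns true and
 * fills *entry only when the callback matching the userspace mode is
 * registered; returns false otherwise, in which case the caller injects
 * #UD.  An unregistered callback never crashes the guest and never
 * falls through to the other mode's entrypoint.
 */
static bool select_syscall_entry(const struct syscall_callbacks *cb,
                                 bool user_is_64bit, uint64_t *entry)
{
    uint64_t target = user_is_64bit ? cb->syscall64 : cb->syscall32;

    if ( !target )
        return false;          /* inject #UD instead of crashing */

    *entry = target;
    return true;
}
```

Under this scheme a guest registering only the 64bit callback sees #UD for
32bit SYSCALL, matching native Intel behaviour.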

> This change does constitute a change in the PV ABI, for corner cases of a PV
> guest kernel registering neither callback, or not registering the 32bit
> callback when running on AMD/Hygon hardware.

Is there any place suitable to document this behavior?

> It brings the behaviour in line with PV32 SYSCALL/SYSENTER, and PV64
> SYSENTER (safe by default, until explicitly enabled), as well as native
> hardware (always delivered to the single applicable callback).
> 
> Most importantly however, and the primary reason for the change, is that it
> lets us sensibly test the fast system call entrypoints under all states a PV
> guest can construct, to prove correct behaviour.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Andy Lutomirski <luto@kernel.org>
> CC: Manuel Bouyer <bouyer@antioche.eu.org>
> 
> v2:
>  * Drop unnecessary instruction suffixes
>  * Don't truncate #UD entrypoint to 32 bits
> 
> Manuel: This will result in a corner case change for NetBSD.
> 
> At the moment on native, 32bit userspace on 64bit NetBSD will get #UD (Intel,
> etc), or an explicit -ENOSYS (AMD, etc) when trying to execute a 32bit SYSCALL
> instruction.
> 
> After this change, a 64bit PV VM will consistently see #UD (like on Intel, etc
> hardware) even when running on AMD/Hygon hardware (as Xsyscall32 isn't
> registered with Xen), rather than following Xsyscall into the proper system
> call path.

Would this result in a regression for NetBSD then? Is it fine to see
#UD regardless of the platform? It's not clear to me from the text
above whether this change will cause issues with NetBSD.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 14:21:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 14:21:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6761.17790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSheK-0000nE-Ui; Wed, 14 Oct 2020 14:21:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6761.17790; Wed, 14 Oct 2020 14:21:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSheK-0000n7-Rh; Wed, 14 Oct 2020 14:21:04 +0000
Received: by outflank-mailman (input) for mailman id 6761;
 Wed, 14 Oct 2020 14:21:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jWa3=DV=lip6.fr=manuel.bouyer@srs-us1.protection.inumbo.net>)
 id 1kSheJ-0000n2-4d
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 14:21:03 +0000
Received: from isis.lip6.fr (unknown [2001:660:3302:283c::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 574d0ff8-c528-4fcd-b50d-0fad4f5f1a99;
 Wed, 14 Oct 2020 14:21:00 +0000 (UTC)
Received: from asim.lip6.fr (asim.lip6.fr [132.227.86.2])
 by isis.lip6.fr (8.15.2/8.15.2) with ESMTP id 09EEKpgL004693;
 Wed, 14 Oct 2020 16:20:51 +0200 (CEST)
Received: from armandeche.soc.lip6.fr (armandeche [132.227.63.133])
 by asim.lip6.fr (8.15.2/8.14.4) with ESMTP id 09EEKpIN010168;
 Wed, 14 Oct 2020 16:20:51 +0200 (MEST)
Received: by armandeche.soc.lip6.fr (Postfix, from userid 20331)
 id E932570CD; Wed, 14 Oct 2020 16:20:50 +0200 (MEST)
X-Inumbo-ID: 574d0ff8-c528-4fcd-b50d-0fad4f5f1a99
Date: Wed, 14 Oct 2020 16:20:50 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        Xen-devel <xen-devel@lists.xenproject.org>,
        Jan Beulich <JBeulich@suse.com>, Wei Liu <wl@xen.org>,
        Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
Message-ID: <20201014142050.GA11721@mail.soc.lip6.fr>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
 <20201009115301.19516-1-andrew.cooper3@citrix.com>
 <20201014141620.GS19254@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201014141620.GS19254@Air-de-Roger>
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3 (isis.lip6.fr [132.227.60.2]); Wed, 14 Oct 2020 16:20:51 +0200 (CEST)
X-Scanned-By: MIMEDefang 2.78 on 132.227.60.2

On Wed, Oct 14, 2020 at 04:16:20PM +0200, Roger Pau Monné wrote:
> [...]
> Would this result in a regression for NetBSD then? Is it fine to see
> #UD regardless of the platform? It's not clear to me from the text
> above whether this change will cause issues with NetBSD.

AFAIK this should not cause any issue. If I understand it properly,
SYSCALL in a 32bit context would not work in any case on Intel CPUs.
The patch just makes it fail on AMD CPUs the same way it fails on Intel.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 14:34:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 14:34:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6766.17802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kShqs-0001nJ-8o; Wed, 14 Oct 2020 14:34:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6766.17802; Wed, 14 Oct 2020 14:34:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kShqs-0001nC-5n; Wed, 14 Oct 2020 14:34:02 +0000
Received: by outflank-mailman (input) for mailman id 6766;
 Wed, 14 Oct 2020 14:34:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kShqr-0001n7-66
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 14:34:01 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0ae73c2-18ed-4660-8f90-379172a3944f;
 Wed, 14 Oct 2020 14:33:59 +0000 (UTC)
X-Inumbo-ID: c0ae73c2-18ed-4660-8f90-379172a3944f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602686040;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=OBlQNft8lThLjba6mdAAjURP9gw22tLr3iM2AsJIrD8=;
  b=EN+OyJQCb6pxb99hMM/frx8+dbVPugxFgrBNydU4Zs+vxZujmc+o45Ht
   wdN6CfVC3y80zUlMcusmCDZWq17/rmfixDx9+qUrX9RiNJK8yJQNrc9CB
   K6HdA4z4fuUk+uwi5BBznH5hGzh4A+sZBUxNnodHMyllrRzlMmafzKTL4
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 0Ji6ZduMgnMoiynENHP8+q0RW58zkiP4zEmlnRX3nw4pQ63S2du3y8HHwNfRMigfY6yQqZQdDD
 tr4vC21AkwlZugpEWtAYqeEJf84gxJBXnOS1IfrB2ell/GVSjcMHx750F8GTbKKBkq+FLhb5nV
 UpcNQVmFjfUAkHKwBxF7z7UHhUHMAcn03axz6R2baYJSpOFmzSQCO697tNOfkKVS5PVUTaaScS
 GPRxHAylAEgF5lSx/g3ddA1TrjPApJZ9gRoY0og4SzONzHYYbgugp4Yp3w+oaDhu4w69vq4rk0
 l4Q=
X-SBRS: 2.5
X-MesageID: 28968362
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="28968362"
Subject: Re: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
To: Manuel Bouyer <bouyer@antioche.eu.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Andy Lutomirski <luto@kernel.org>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
 <20201009115301.19516-1-andrew.cooper3@citrix.com>
 <20201014141620.GS19254@Air-de-Roger>
 <20201014142050.GA11721@mail.soc.lip6.fr>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2a0dd6d0-703a-49e0-fccf-451c1e737746@citrix.com>
Date: Wed, 14 Oct 2020 15:26:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201014142050.GA11721@mail.soc.lip6.fr>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 15:20, Manuel Bouyer wrote:
> On Wed, Oct 14, 2020 at 04:16:20PM +0200, Roger Pau Monné wrote:
>> [...]
>> Would this result in a regression for NetBSD then? Is it fine to see
>> #UD regardless of the platform? It's not clear to me from the text
>> above whether this change will cause issues with NetBSD.
> AFAIK this should not cause any issue. If I understand it properly,
> SYSCALL in a 32bit context would not work in any case on Intel CPUs.
> The patch just makes it fail on AMD CPUs the same way it fails on Intel.

Correct.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:17:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:17:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6770.17815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSiWh-0005Go-Jq; Wed, 14 Oct 2020 15:17:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6770.17815; Wed, 14 Oct 2020 15:17:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSiWh-0005Gh-GK; Wed, 14 Oct 2020 15:17:15 +0000
Received: by outflank-mailman (input) for mailman id 6770;
 Wed, 14 Oct 2020 15:17:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSiWg-0005Gc-7z
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:17:14 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7c8503d-004f-41df-892f-e53aba22975d;
 Wed, 14 Oct 2020 15:17:13 +0000 (UTC)
X-Inumbo-ID: c7c8503d-004f-41df-892f-e53aba22975d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602688633;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=+zIBdoz6/W3MUf9wXb+tdhuqXgcLxsY9r6vf27xeiyk=;
  b=Zw++YPLalL/f57kSqt1CoLf74mVTlqN9JZPHVwWQ+VpuuX4fv1airYYT
   tTw2fotaxw7bX+3J153Zo18Tx98rLuLms7/ru4xCL2NMS/CSiitnSaRM+
   SYst4du94XoVzvQD6SMyl+EZKB1bFWae3d7+bUAslQcCbdJGJVg+Nghhw
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: AtWA3j4qrb9cVSDLQ8QWzXRJfgLfdk5igd1UbKij4I4kM27skkNSrUAYGYaOyRpmN715efrumw
 GNKEQIrK8xqA/8bPe8DaS5rfffx/1wTy4EqwaHQMGq8IqLj89cQXQFC00bcdKQ+44RNbqS7m9M
 cvjdk77lG/3jqVhhkopN/axrirYGybz2gJqgO4GeWKuVokgTnUh8xlP3EW4BRbxh2R+uMxyRWg
 3fjdIfU5rXVvFg0cwRcVt9nr6/YPZ9sglVHh3ksLIRo1x75HHdhYSrXNYDH6Cl6FIkgOVwxdyD
 ot0=
X-SBRS: 2.5
X-MesageID: 29248410
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="29248410"
Subject: Re: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Andy Lutomirski <luto@kernel.org>,
	Manuel Bouyer <bouyer@antioche.eu.org>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
 <20201009115301.19516-1-andrew.cooper3@citrix.com>
 <20201014141620.GS19254@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <416f9d8d-532a-6b72-1e06-a325f0edaaab@citrix.com>
Date: Wed, 14 Oct 2020 16:17:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201014141620.GS19254@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 15:16, Roger Pau Monné wrote:
>> This change does constitute a change in the PV ABI, for corner cases of a PV
>> guest kernel registering neither callback, or not registering the 32bit
>> callback when running on AMD/Hygon hardware.
> Is there any place suitable to document this behavior?

In the short term, my XTF test which will eventually get into CI.

Longer term, in my theoretical future where I've described some of this
stuff in docs/guest-guide/.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:23:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:23:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6774.17827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSici-0006AD-9Z; Wed, 14 Oct 2020 15:23:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6774.17827; Wed, 14 Oct 2020 15:23:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSici-0006A6-6Y; Wed, 14 Oct 2020 15:23:28 +0000
Received: by outflank-mailman (input) for mailman id 6774;
 Wed, 14 Oct 2020 15:23:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSicg-0006A1-Bq
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:23:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7cfe2a5-c932-4c91-a1d3-da4d79ee6b05;
 Wed, 14 Oct 2020 15:23:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3F2CCB1D1;
 Wed, 14 Oct 2020 15:23:24 +0000 (UTC)
X-Inumbo-ID: a7cfe2a5-c932-4c91-a1d3-da4d79ee6b05
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602689004;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=YIC64efaetAnRxiHi3KIsJ9+2l2bUj2C43B0/zIIpvo=;
	b=LaWa2AFeHws5Gzpw4+0a8X0VqDGUIFkyHlDYSNd4/lgq5EQ+KHTZB157Ivq9P0+gJNgIgN
	oPxasxtP2z0UdhAce6ZDwpOhcm0srYckBkcvJyEu1OwysCufD7gSdicogKg2ELGFK7BNUu
	7ilnsfxJkzNQDbVJevT2NB10XGLvyWw=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] xen-detect: make CPUID fallback CPUID-faulting aware
Message-ID: <6b594869-1c64-93a3-7f19-f7374b62eeee@suse.com>
Date: Wed, 14 Oct 2020 17:23:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Relying on presence / absence of hypervisor leaves in raw / escaped
CPUID output cannot be used to tell apart PV and HVM on CPUID faulting
capable hardware. Utilize a PV-only feature flag to avoid false positive
HVM detection.

While at it, also short-circuit the main detection loop: for PV, only
the base group of leaves can possibly hold hypervisor information.
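The disambiguation step can be sketched as follows (illustrative names; the
real check lives in check_for_xen() below and tests bit 0 of ECX in leaf
base+2, the PV-only MMU_PT_UPDATE_PRESERVE_AD feature):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * On CPUID-faulting capable hardware even un-escaped CPUID from PV
 * userspace returns the hypervisor leaves, so mere presence of the
 * leaves can't distinguish PV from HVM.  Seeing a PV-only feature bit
 * set means we are actually in a PV guest, so an HVM-context probe
 * must not report success.
 */
static bool hvm_probe_succeeds(uint32_t feat1_ecx, bool pv_context)
{
    if ( pv_context )
        return true;              /* escaped-CPUID (PV) probe path */

    /* PV-only feature visible => false positive for HVM. */
    return !(feat1_ecx & (1u << 0));
}
```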

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/misc/xen-detect.c
+++ b/tools/misc/xen-detect.c
@@ -83,11 +83,31 @@ static int check_for_xen(int pv_context)
 
         if ( !strcmp("XenVMMXenVMM", signature) && (regs[0] >= (base + 2)) )
             goto found;
+
+        /* Higher base addresses are possible only with HVM. */
+        if ( pv_context )
+            break;
     }
 
     return 0;
 
  found:
+    /*
+     * On CPUID faulting capable hardware even un-escaped CPUID will return
+     * the hypervisor leaves. Need to further distinguish modes.
+     */
+    if ( !pv_context )
+    {
+        /*
+         * XEN_CPUID_FEAT1_MMU_PT_UPDATE_PRESERVE_AD is a PV-only feature
+         * pre-dating CPUID faulting support in Xen. Hence we can use it to
+         * tell whether we shouldn't report "success" to our caller here.
+         */
+        cpuid(base + 2, regs, 0);
+        if ( regs[2] & (1u << 0) )
+            return 0;
+    }
+
     cpuid(base + 1, regs, pv_context);
     if ( regs[0] )
     {


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:32:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6778.17839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSiky-00075o-4H; Wed, 14 Oct 2020 15:32:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6778.17839; Wed, 14 Oct 2020 15:32:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSiky-00075h-17; Wed, 14 Oct 2020 15:32:00 +0000
Received: by outflank-mailman (input) for mailman id 6778;
 Wed, 14 Oct 2020 15:31:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSikw-00075c-NH
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:31:58 +0000
Received: from mail-il1-x143.google.com (unknown [2607:f8b0:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d3a6679-e39a-4e85-96f0-af1d0d789080;
 Wed, 14 Oct 2020 15:31:57 +0000 (UTC)
Received: by mail-il1-x143.google.com with SMTP id y16so5653887ila.7
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 08:31:57 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
 by smtp.gmail.com with ESMTPSA id
 141sm3542028ile.28.2020.10.14.08.31.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 14 Oct 2020 08:31:55 -0700 (PDT)
X-Inumbo-ID: 5d3a6679-e39a-4e85-96f0-af1d0d789080
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=A231TBZ2Ivah9yjc9zzpIQqITm/85+M+9V1kUfCUUSo=;
        b=BFnSVUGuztDOl1NOlhPYRd20LtlvlhDxHd3bdOQP4eNv/E5yzhhbp9x8MAUBuO2dCh
         4nmXEm1SkeaPwANH0C5hex8WjEOTXZrjgwY0DkwoQCFa6GoMS7qMuyBJz2AhWY3rMEoy
         V4DxZmE8g6K3wJxLNczxJDgB2KklQNASAchny4fVNLPamBZz1/7rCSFM+lAH9/XgMK81
         lZLlELRXAZLy0B6mXHDIFKszFiLHsDE6YS5L4XaLxpGXoJ/Cx2k5qczBD2aACMh2nxSz
         2nZ8eBkYlxLg9eZhYc/Exf4diSL0Y0j38zNQ1W3uq/SK2UzKNtiRda0zzpTCo4ugVhvP
         0FbQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=A231TBZ2Ivah9yjc9zzpIQqITm/85+M+9V1kUfCUUSo=;
        b=j9zVFrHAOmjZ23YrF+5oHak20WF7gtFe461dY6E+t/vLQ0HwcclmOx/etr/dkL1WyJ
         kTXR7Uqhldl9ynW9CtQENIvPvcJiz/dZaQiZvuuxoP2AMEu/CkY8kCoB3IVSh0l0a2DX
         YHZ4a6CjDFhz74yLKx/jfVYZdyBhm+wceZnMfMGnpZIJfJ7TR/SQMFiwcog9jrvAzCog
         ve+ZZfN8OcH2thPkbCC76b1ettAfxZaYXaGGBALYrMMJlrSc/81qUsXmL+8hzOUQHIP1
         Id07KIAGgaFrpL4nfWLsuyYMlcZFF9FJq0axt4dRQnBWrdjFID36svcXMG/vQzT3xUnl
         jHkg==
X-Gm-Message-State: AOAM533hx57jDbHL7v/UYoyZYmHuDvJElrynkQIh8vm1uQMDcuvkE6lo
	D9RIpEvqtc+HlhHipNhtbVJfdcsw5I8=
X-Google-Smtp-Source: ABdhPJzsp9g+E+vLo8j/uENquVupd52HoH9TyanLD0kh6vg7z+YchUEKn1xkMdzWqAgQd2M9qiJf8w==
X-Received: by 2002:a92:c88e:: with SMTP id w14mr4005759ilo.185.1602689516859;
        Wed, 14 Oct 2020 08:31:56 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
        by smtp.gmail.com with ESMTPSA id 141sm3542028ile.28.2020.10.14.08.31.54
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 14 Oct 2020 08:31:55 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
Date: Wed, 14 Oct 2020 11:31:50 -0400
Message-Id: <20201014153150.83875-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
kernel built with CONFIG_PVH=y CONFIG_PV=n lacks the note.  In that
case, virt_entry will be UNSET_ADDR, get overwritten by the ELF header
e_entry, and then fail the check against the virtual address range.

Change the code to only check virt_entry against the virtual address
range if it was set upon entry to the function.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---
Maybe the overwriting of virt_entry could be removed, but I don't know
whether there would be unintended consequences for (old?) kernels that
lack the elfnote but have an in-range e_entry.  The failing kernel I
just looked at has an e_entry of 0x1000000.

Oh, it looks like Mini-OS doesn't set the entry ELFNOTE and relies on
e_entry (of 0) to pass these checks.

---
 xen/common/libelf/libelf-dominfo.c | 17 +++++++++++++----
 1 file changed, 13 insertions(+), 4 deletions(-)

diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index 508f08db42..1ecf35166b 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -416,6 +416,7 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
 static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
                                    struct elf_dom_parms *parms)
 {
+    bool check_virt_entry = true;
     uint64_t virt_offset;
 
     if ( (parms->elf_paddr_offset != UNSET_ADDR) &&
@@ -456,8 +457,10 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
     parms->virt_kstart = elf->pstart + virt_offset;
     parms->virt_kend   = elf->pend   + virt_offset;
 
-    if ( parms->virt_entry == UNSET_ADDR )
+    if ( parms->virt_entry == UNSET_ADDR ) {
         parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
+        check_virt_entry = false;
+    }
 
     if ( parms->bsd_symtab )
     {
@@ -476,11 +479,17 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
     elf_msg(elf, "    p2m_base         = 0x%" PRIx64 "\n", parms->p2m_base);
 
     if ( (parms->virt_kstart > parms->virt_kend) ||
-         (parms->virt_entry < parms->virt_kstart) ||
-         (parms->virt_entry > parms->virt_kend) ||
          (parms->virt_base > parms->virt_kstart) )
     {
-        elf_err(elf, "ERROR: ELF start or entries are out of bounds\n");
+        elf_err(elf, "ERROR: ELF start is out of bounds\n");
+        return -1;
+    }
+
+    if ( check_virt_entry &&
+         ( (parms->virt_entry < parms->virt_kstart) ||
+           (parms->virt_entry > parms->virt_kend) ) )
+    {
+        elf_err(elf, "ERROR: ELF entry is out of bounds\n");
         return -1;
     }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:43:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:43:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6783.17852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSiwK-00088C-6f; Wed, 14 Oct 2020 15:43:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6783.17852; Wed, 14 Oct 2020 15:43:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSiwK-000885-3n; Wed, 14 Oct 2020 15:43:44 +0000
Received: by outflank-mailman (input) for mailman id 6783;
 Wed, 14 Oct 2020 15:43:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSiwJ-00087d-9f
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:43:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e86c916e-7a23-4607-841a-76c7eb02b703;
 Wed, 14 Oct 2020 15:43:37 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSiwC-0008E6-Mc; Wed, 14 Oct 2020 15:43:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSiwC-0003wH-FK; Wed, 14 Oct 2020 15:43:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSiwC-00082C-Er; Wed, 14 Oct 2020 15:43:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSiwJ-00087d-9f
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:43:43 +0000
X-Inumbo-ID: e86c916e-7a23-4607-841a-76c7eb02b703
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e86c916e-7a23-4607-841a-76c7eb02b703;
	Wed, 14 Oct 2020 15:43:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hr/QH1nu7KKoxBYOEKeehSCKsIlYaOB0sXN+5nEM0EY=; b=KyOvCryZOa+lvzoFTr0la2noHu
	quy0/hosrCTjHGcQ9Jx6NOsv/rCuOoPNQBsmAXXt02DJZnC4S1/s6hfUc1D0DY1LdlKYPgVoVyt3G
	dAKjJg197OD2wnVnJ/X+0iz6lqZZFVeGzEyTcFjcjWqEq2z2UZllFL36XYsHhnRI7S30=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSiwC-0008E6-Mc; Wed, 14 Oct 2020 15:43:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSiwC-0003wH-FK; Wed, 14 Oct 2020 15:43:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSiwC-00082C-Er; Wed, 14 Oct 2020 15:43:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155805-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155805: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=884ef07f4f66b9d12fc4811047db95ba649db85c
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 15:43:36 +0000

flight 155805 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155805/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  884ef07f4f66b9d12fc4811047db95ba649db85c
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155584  2020-10-09 02:01:25 Z    5 days
Failing since        155612  2020-10-09 18:01:22 Z    4 days   36 attempts
Testing same since   155805  2020-10-14 13:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   25849c8b16..884ef07f4f  884ef07f4f66b9d12fc4811047db95ba649db85c -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:46:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:46:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6786.17865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSiyd-0008GE-KJ; Wed, 14 Oct 2020 15:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6786.17865; Wed, 14 Oct 2020 15:46:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSiyd-0008G7-HO; Wed, 14 Oct 2020 15:46:07 +0000
Received: by outflank-mailman (input) for mailman id 6786;
 Wed, 14 Oct 2020 15:46:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TKJF=DV=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kSiyc-0008G2-J4
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:46:06 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e8240dc1-69b7-4f76-adff-22e625938daf;
 Wed, 14 Oct 2020 15:46:03 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id k18so17013wmj.5
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 08:46:03 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o4sm5517460wrv.8.2020.10.14.08.46.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 14 Oct 2020 08:46:02 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TKJF=DV=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kSiyc-0008G2-J4
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:46:06 +0000
X-Inumbo-ID: e8240dc1-69b7-4f76-adff-22e625938daf
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e8240dc1-69b7-4f76-adff-22e625938daf;
	Wed, 14 Oct 2020 15:46:03 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id k18so17013wmj.5
        for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 08:46:03 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=fV+rCsJEKGlxea0oGVCwSXDELAtnoYO/dV9yfkNmUpI=;
        b=NhURuvAkviOsaI1RSDbafQgojvXM8dtnfk2HkRKG1QTM/S9g8EBYzM/9Epm5k/Yvb8
         H/le49v2GC16JbMf2PLXz2+WNxD5nWh+EyNqWp1B/gfvtFtaz5R4XixGR04zVcxAN/yQ
         /GGTunfmyTTg9vT6yRDrqL0rfxjdcN5bp0pPREfpoOhe83rgWgnRo8/nyauQY8Igyoh2
         kw9k7/EiZWGCyUCKVWuwIhUvpdRd5dVa7gbTucDBaNnyZesIpu84QFRup0eqSyAoKpTj
         O023XaTKniNcYgFV5jrafH/P73k+l8fqC/Z9WeErwNNHLye91B8tOam+4Bfh/iQ8na3O
         HZ5g==
X-Gm-Message-State: AOAM5303gyuEeaz6BX+bXwsgabidsybo1f7FhmNnxq4w4SrHCPYZ8Vbc
	ocQSa7UgCiVxhrQBNR6RqBw=
X-Google-Smtp-Source: ABdhPJy1SYlfg/nlPOv3UVdOUuCg2RjyLzZrcCZqAighnCDAZBHdIbEBaT5pvh7M7mQbMoueVSIO4g==
X-Received: by 2002:a1c:7514:: with SMTP id o20mr82905wmc.76.1602690362858;
        Wed, 14 Oct 2020 08:46:02 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id o4sm5517460wrv.8.2020.10.14.08.46.02
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 14 Oct 2020 08:46:02 -0700 (PDT)
Date: Wed, 14 Oct 2020 15:46:00 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] tools/libs: remove obsolete xc_map_foreign_bulk from
 error string
Message-ID: <20201014154600.cr52u65mahxh5tv3@liuwe-devbox-debian-v2>
References: <20201014094422.19347-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201014094422.19347-1-olaf@aepfle.de>
User-Agent: NeoMutt/20180716

On Wed, Oct 14, 2020 at 11:44:22AM +0200, Olaf Hering wrote:
> xc_map_foreign_bulk is an obsolete API, which is only used by
> qemu-xen-traditional.
> 
> Adjust the error string to refer to the current API.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6788.17877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSizK-0008Ns-2K; Wed, 14 Oct 2020 15:46:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6788.17877; Wed, 14 Oct 2020 15:46:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSizJ-0008Nl-VY; Wed, 14 Oct 2020 15:46:49 +0000
Received: by outflank-mailman (input) for mailman id 6788;
 Wed, 14 Oct 2020 15:46:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TKJF=DV=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kSizI-0008NZ-2Q
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:46:48 +0000
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99fe849d-6377-43b1-8647-ead3e7699135;
 Wed, 14 Oct 2020 15:46:47 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id b127so33360wmb.3
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 08:46:47 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h7sm5615881wrt.45.2020.10.14.08.46.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 14 Oct 2020 08:46:45 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TKJF=DV=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kSizI-0008NZ-2Q
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:46:48 +0000
X-Inumbo-ID: 99fe849d-6377-43b1-8647-ead3e7699135
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 99fe849d-6377-43b1-8647-ead3e7699135;
	Wed, 14 Oct 2020 15:46:47 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id b127so33360wmb.3
        for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 08:46:47 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Gxmd6tf1v9v8jlG0vzr7wvhJT2iN4CIhQtbCXla3r/4=;
        b=ZFdVg65YEyB8R8ufh3Pdwziy1mEiZqwmCdawDgtCWQjtkzx/jAmOxNZRVsXzE8NDzH
         hTA9GZrFe2xxezhDLW0AAh6fMtw1dEi5Fkox0xwLO3yFDA1l3TPUdyMOzu5E5VVUIXXs
         RBRpt6yna+mPQ6AupqgpFcFIjGlGeaHHUk99wYbkBFq9Uo5qrLvWiLaAuf3Uetp+tDAD
         C3weBCrQ8PAYNnx9JdBSN5aYjX1TKKAGopMT5bsVWzekdccNRK43Cb/XvPf41BUa0DaY
         vNsAfY5W5hC2gsYfuKOmRuAdqZEQwoGVcw+5NY10D/ftuynO+xsGFrdVbaOrmWX4eUZz
         MbGQ==
X-Gm-Message-State: AOAM532+6OPXA9vifaYpMTi2oDA/Zt60Q53YuonWcDGZrgjkYfidbZCC
	ek80k185Q12nkChaZJwqL/k=
X-Google-Smtp-Source: ABdhPJyokHUVvvNJnXbUj7VNqoqJdrsk3QVEYx9ef4IqIZs00axHw3LaJ0i/WmClG1AB5I5UYfg2ag==
X-Received: by 2002:a7b:c259:: with SMTP id b25mr32081wmj.141.1602690406494;
        Wed, 14 Oct 2020 08:46:46 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id h7sm5615881wrt.45.2020.10.14.08.46.45
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 14 Oct 2020 08:46:45 -0700 (PDT)
Date: Wed, 14 Oct 2020 15:46:44 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH] xen-detect: make CPUID fallback CPUID-faulting aware
Message-ID: <20201014154644.ea63gzu7m3lqbrff@liuwe-devbox-debian-v2>
References: <6b594869-1c64-93a3-7f19-f7374b62eeee@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <6b594869-1c64-93a3-7f19-f7374b62eeee@suse.com>
User-Agent: NeoMutt/20180716

On Wed, Oct 14, 2020 at 05:23:23PM +0200, Jan Beulich wrote:
> Relying on presence / absence of hypervisor leaves in raw / escaped
> CPUID output cannot be used to tell apart PV and HVM on CPUID faulting
> capable hardware. Utilize a PV-only feature flag to avoid false positive
> HVM detection.
> 
> While at it also short circuit the main detection loop: For PV, only
> the base group of leaves can possibly hold hypervisor information.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:47:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:47:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6793.17888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSj09-0008VW-Bk; Wed, 14 Oct 2020 15:47:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6793.17888; Wed, 14 Oct 2020 15:47:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSj09-0008VP-8w; Wed, 14 Oct 2020 15:47:41 +0000
Received: by outflank-mailman (input) for mailman id 6793;
 Wed, 14 Oct 2020 15:47:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSj07-0008VG-5d
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:47:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3539e143-e524-4c35-a476-6f05c7665c87;
 Wed, 14 Oct 2020 15:47:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 92CB0AB95;
 Wed, 14 Oct 2020 15:47:37 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSj07-0008VG-5d
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:47:39 +0000
X-Inumbo-ID: 3539e143-e524-4c35-a476-6f05c7665c87
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3539e143-e524-4c35-a476-6f05c7665c87;
	Wed, 14 Oct 2020 15:47:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602690457;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Yc+pGlsZ5yS30nxUbESq/MRMz8cRyLt/HITnSenioFs=;
	b=RWLc8V0j2WsyfZJHPpAysrWjMyLRjBMcKgZf3qHxs+nszsn1FA8gEDtyaC21F2VluXU2oL
	NeeE3FIzXnbrqBYg89+BLQElPBTebVRiNOH3JXq8LhT47ZWybip/Kq1ik2Mr7BKVNY4Dha
	1pBFYBRSHuiiIcs9NXCy9RnzxDF32lE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 92CB0AB95;
	Wed, 14 Oct 2020 15:47:37 +0000 (UTC)
Subject: Re: [PATCH 1/2] x86/intel: insert Ice Lake X (server) model numbers
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, jun.nakajima@intel.com,
 kevin.tian@intel.com
References: <1602558169-23140-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ca9a1cce-1e51-0f55-4527-42f48bc7d6ab@suse.com>
Date: Wed, 14 Oct 2020 17:47:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <1602558169-23140-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.2020 05:02, Igor Druzhinin wrote:
> LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond
> to Ice Lake desktop according to External Design Specification vol.2.

Could you tell me where this is publicly available? Even after spending
quite a bit of time searching for it, I can't seem to find it. And the
SDM doesn't have enough information (yet).

> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -183,6 +183,7 @@ static void do_get_hw_residencies(void *arg)
>      /* Ice Lake */
>      case 0x7D:
>      case 0x7E:
> +    case 0x6A:
>      /* Kaby Lake */
>      case 0x8E:
>      case 0x9E:

Here and below please honor the (partial) sorting that's in effect.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:47:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:47:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6794.17901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSj0I-00007M-Lf; Wed, 14 Oct 2020 15:47:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6794.17901; Wed, 14 Oct 2020 15:47:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSj0I-00007D-Ht; Wed, 14 Oct 2020 15:47:50 +0000
Received: by outflank-mailman (input) for mailman id 6794;
 Wed, 14 Oct 2020 15:47:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TKJF=DV=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kSj0H-00006m-9R
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:47:49 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 801f07c0-5765-4c65-b20f-79ca8f052087;
 Wed, 14 Oct 2020 15:47:47 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id d3so30205wma.4
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 08:47:47 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id w1sm5465242wrp.95.2020.10.14.08.47.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 14 Oct 2020 08:47:45 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TKJF=DV=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kSj0H-00006m-9R
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:47:49 +0000
X-Inumbo-ID: 801f07c0-5765-4c65-b20f-79ca8f052087
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 801f07c0-5765-4c65-b20f-79ca8f052087;
	Wed, 14 Oct 2020 15:47:47 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id d3so30205wma.4
        for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 08:47:47 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=hSs/GN2TnOgkJHJZ69jAe3h9FqrpQAVOKf3fh9GUSVU=;
        b=PUnP/vNlRcmfl1dSlwv6kcTRlOFPcOj75Ykct0y5+lVgSqUzACgtreY659j0PoL4rJ
         8ns9emakFyacrLyiYoPhMr45oPmcXyNUqg/GcZZ7Zimb7EJw20RM8FRdTgfumlbGPn4E
         ne+7dYruKyu1Q/Jp47FYkvJsfUolLWm0omqqOsQ81bpwWKcRmmml8YEtgBZkc2WMMDAa
         8Y3CMLQwnlmQugC+AUWfyRnc5tEUQUVDVzZOCMx3dUEEcbT28GSPPDV0UMXNqzbV9LT0
         ywjn2bug/yiVZzrSpbYxLi3bjfYSqqT+m6DdI4EEw/qlw7v6JtnjMLJmuiu+Uh9PLtmU
         aBKQ==
X-Gm-Message-State: AOAM532bX2T/iAjTAsOmsGxOYBLCr+r2fGeTakWZ6pO78p9Twm6FQso6
	Z0iWWJBI5q2ewNABjx9fdvg=
X-Google-Smtp-Source: ABdhPJzd3kbVnsN4MPM1WM9LsDfrD0jGLOu7MPGp6vKWH8F4VNngJcfrio7NQk8Cqd4ZfMqfZUbK/w==
X-Received: by 2002:a7b:c7d5:: with SMTP id z21mr51872wmk.73.1602690466345;
        Wed, 14 Oct 2020 08:47:46 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id w1sm5465242wrp.95.2020.10.14.08.47.45
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 14 Oct 2020 08:47:45 -0700 (PDT)
Date: Wed, 14 Oct 2020 15:47:44 +0000
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
	"jgross@suse.com" <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 1/2] tools/libs/stat: use memcpy instead of strncpy in
 getBridge
Message-ID: <20201014154744.j56skud26v5iwenx@liuwe-devbox-debian-v2>
References: <4ecb03b40b0da6d480e95af1da8289501a3ede0a.1602078276.git.bertrand.marquis@arm.com>
 <A6CDE62A-13F4-491B-BE0B-180657136504@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <A6CDE62A-13F4-491B-BE0B-180657136504@arm.com>
User-Agent: NeoMutt/20180716

On Wed, Oct 14, 2020 at 10:58:33AM +0000, Bertrand Marquis wrote:
> Hi,
> 
> Could this be reviewed so that gcc10 issues are fixed?

I think Juergen's comments have been addressed.

Acked-by: Wei Liu <wl@xen.org>
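For readers without the patch at hand: the change under review replaces a strncpy() call that gcc 10 flags (-Wstringop-truncation fires when strncpy() may leave the destination unterminated). A minimal illustrative sketch of the pattern follows; copy_name and its parameters are hypothetical, not the actual getBridge() code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative only -- not the actual getBridge() code from
 * tools/libs/stat.  gcc 10's -Wstringop-truncation warns when strncpy()
 * may drop the terminating NUL; copying an explicitly bounded length
 * with memcpy() and terminating by hand expresses the intent without
 * the warning. */
static void copy_name(char *dst, size_t dst_size, const char *src)
{
    size_t len = strlen(src);

    if (len >= dst_size)
        len = dst_size - 1;     /* truncate, leaving room for the NUL */
    memcpy(dst, src, len);
    dst[len] = '\0';
}
```

Unlike strncpy(), this always NUL-terminates, and it never writes padding bytes past the string.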


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:52:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:52:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6802.17917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSj5C-00016F-AC; Wed, 14 Oct 2020 15:52:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6802.17917; Wed, 14 Oct 2020 15:52:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSj5C-000168-6j; Wed, 14 Oct 2020 15:52:54 +0000
Received: by outflank-mailman (input) for mailman id 6802;
 Wed, 14 Oct 2020 15:52:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TKJF=DV=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kSj5A-000163-PW
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:52:52 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2eaafcb1-5579-408d-91e0-4802d2a56182;
 Wed, 14 Oct 2020 15:52:52 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id y12so4485916wrp.6
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 08:52:51 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h16sm5868732wre.87.2020.10.14.08.52.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 14 Oct 2020 08:52:50 -0700 (PDT)
X-Inumbo-ID: 2eaafcb1-5579-408d-91e0-4802d2a56182
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=9bsO4XTS8Ud4pEmBCazMT/i3UmJ173+XZwgvkVsK8QM=;
        b=fcrM+2EpdzKj61GTNYF/tqNtXFmE/KbktHCQielfSOom/qf1UuxrtG1SCruZ6DrFMB
         YuPv8QePXqcuzCPychTspZxE1cQojLejrcMYXmyn7GGfphTS401reSXIDlU/WB+TOD89
         zusfXbZXxmBENpZOzAgYg/qga4xkoDBb/zG406KbU7HuhQWu9Q95P+s+WMA+IKHXGzoO
         xduIjRrX8ipnQ4+fRpBtgsGymu98wT1zo/5sG07PgqUDpr8XnrPrz7BhmJv10e11zhUm
         V/2j0ywiVSl3x79YpzqTkulE6iZA8rNyXbUn4NWL/jOk62IIJJRfXJ29sLdguMV5upLL
         A1ug==
X-Gm-Message-State: AOAM533o/O1rKb6KBFheOHTVts0bH2Q/uglBCI2g9+nmAO/OWbCqlWtx
	sUQ5w18FzZZj/JuUbUC5lS0=
X-Google-Smtp-Source: ABdhPJxetcPUnvNy08/XzDzvnZ4VKNAsuFXTRz0s7wZtnUMj5IU7yl72Xhx8Iu85Eb9VDn6lKv8Lvg==
X-Received: by 2002:adf:e70a:: with SMTP id c10mr6124532wrm.425.1602690771200;
        Wed, 14 Oct 2020 08:52:51 -0700 (PDT)
Date: Wed, 14 Oct 2020 15:52:49 +0000
From: Wei Liu <wl@xen.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
Message-ID: <20201014155249.42zw6vvoteom4iff@liuwe-devbox-debian-v2>
References: <20201014153150.83875-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201014153150.83875-1-jandryuk@gmail.com>
User-Agent: NeoMutt/20180716

On Wed, Oct 14, 2020 at 11:31:50AM -0400, Jason Andryuk wrote:
> Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
> kernel built with CONFIG_PVH=y CONFIG_PV=n lacks the note.  In this case,
> virt_entry will be UNSET_ADDR, overwritten by the ELF header e_entry,
> and fail the check against the virt address range.
> 
> Change the code to only check virt_entry against the virtual address
> range if it was set upon entry to the function.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> 
> ---
> Maybe the overwriting of virt_entry could be removed, but I don't know
> if there would be unintended consequences where (old?) kernels don't
> have an elfnote, but do have an in-range e_entry?  The failing kernel I
> just looked at has an e_entry of 0x1000000.
> 
> Oh, it looks like Mini-OS doesn't set the entry ELFNOTE and relies on
> e_entry (of 0) to pass these checks.
> 

I have not looked into the patch, but please don't use mini-os as a source
of truth. ;-)

It is more likely than not that we should fix mini-os instead of Xen and
Linux.

Wei.
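The logic Jason describes can be condensed into a hypothetical sketch; entry_ok and the value of UNSET_ADDR here are illustrative, not the actual libelf code. The note-supplied entry is range-checked only when the ENTRY elfnote was present; otherwise the ELF header's e_entry is used without the check:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical condensed form of the check described above -- not the
 * actual libelf code.  UNSET_ADDR stands in for libelf's "no value seen
 * yet" sentinel. */
#define UNSET_ADDR UINT64_MAX

static bool entry_ok(uint64_t note_entry, uint64_t virt_base,
                     uint64_t virt_end)
{
    if (note_entry == UNSET_ADDR)
        /* No ENTRY elfnote (e.g. a CONFIG_PVH=y CONFIG_PV=n kernel):
         * the loader falls back to the ELF header's e_entry, which is
         * deliberately not checked against the virtual range here. */
        return true;

    /* A note-supplied entry point must lie within the mapped range. */
    return note_entry >= virt_base && note_entry < virt_end;
}
```

Under this reading, a PVH-only kernel with an out-of-range e_entry (such as the 0x1000000 Jason mentions) passes, while a bogus note-supplied entry is still rejected.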


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 15:55:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 15:55:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6806.17929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSj7U-0001GH-Nt; Wed, 14 Oct 2020 15:55:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6806.17929; Wed, 14 Oct 2020 15:55:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSj7U-0001GA-KZ; Wed, 14 Oct 2020 15:55:16 +0000
Received: by outflank-mailman (input) for mailman id 6806;
 Wed, 14 Oct 2020 15:55:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSj7T-0001G5-IQ
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 15:55:15 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.74]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ffc447e3-215f-4cdc-81bc-beff4f21a70c;
 Wed, 14 Oct 2020 15:55:13 +0000 (UTC)
Received: from DB7PR05CA0018.eurprd05.prod.outlook.com (2603:10a6:10:36::31)
 by DBAPR08MB5591.eurprd08.prod.outlook.com (2603:10a6:10:1ae::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 14 Oct
 2020 15:55:12 +0000
Received: from DB5EUR03FT040.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:36:cafe::e1) by DB7PR05CA0018.outlook.office365.com
 (2603:10a6:10:36::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.23 via Frontend
 Transport; Wed, 14 Oct 2020 15:55:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT040.mail.protection.outlook.com (10.152.20.243) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 15:55:10 +0000
Received: ("Tessian outbound 7c188528bfe0:v64");
 Wed, 14 Oct 2020 15:55:10 +0000
Received: from cd05838f7509.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8C3E78A7-7546-4D70-998F-DC51775CF148.1; 
 Wed, 14 Oct 2020 15:54:37 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cd05838f7509.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 14 Oct 2020 15:54:37 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4538.eurprd08.prod.outlook.com (2603:10a6:10:d2::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.26; Wed, 14 Oct
 2020 15:54:36 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Wed, 14 Oct 2020
 15:54:36 +0000
X-Inumbo-ID: ffc447e3-215f-4cdc-81bc-beff4f21a70c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OPJ4JDGdfSEEPY+sdPSqpfvJrXxoA9gK1629yUz9FtI=;
 b=oB6m6rmKn2KpB/R+N2WjcCaJxHzhJhDHMmcevKBoxUH461le/tfKxpBM0tR+H1So0MUgOChHZm27n53VsHwutJsJfB3q5ZyBDeyMFy8rhY1YMfEdaSK9rWPQRgOXhc4Rd0hBuYZkL8S8QvuuCWVvThP2LdBgAhI0hXVM6xIxfCQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5f753e88ab794ec3
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QvsRFy0QKGsdNdcPjrLIQJznDwVHdnXGX++PRFBZ1dcOxDsNN68BffUrRpoSzcKI3q8ivsbPw8DvsddbjpMIgg5bLMVTaUkeY5Hm5SCBrc+kJf2poS1pHZ9P4Q8sDSnaa20j63KnWwjW2GrFb5HvJpuy03HFiE/gUhC448NaQEsIm5aGs/4now4ykOcCdb/aXWyn7/1yUgB6c+X5T79gGHG51n8Xkdd6uxqVO6OlzS77U6naqVxfzwyKrGyMJLscjOhj4KTwFB3BGEa6yGKGhszx0wH+OtsZVY2pGuYx1x+kkkeNIGnQDaYwX7ln5OU0v1wHQ4N1JaLILGlNecSaSw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OPJ4JDGdfSEEPY+sdPSqpfvJrXxoA9gK1629yUz9FtI=;
 b=aXWR9rgnlIiGFrFIwZmsfg5kxm/MEqVGVEP7SNwcvF8s6irWNRFLeF1b2avLZdgVnJRMDtxLILqDs/O/6psMlV9Z4i62bdsJrB67mArF2/mYaDscg/1L6RDhzvKdhKYgSkwn1AEifDVMVhpxC9vMIQe3Chac4wyMQD/SDVNXLSYvmMaQM5BuNQP85qqzhjPh7tW74Fmr9E9r1ovTItTCAQU6CWZ6Om228mzkveDYEGiWQiUS3DFBN+V5FZZhhdUHdsc5iSRVBXUhBuLQYeOu4GFjg1izyLJ5aJlUGagNEW8AyZaaTWa7VHpNTgfvTEMIc4tjofdtHcggZsW62HQR1Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Wei Liu <wl@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, "jgross@suse.com"
	<jgross@suse.com>, Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v3 1/2] tools/libs/stat: use memcpy instead of strncpy in
 getBridge
Thread-Topic: [PATCH v3 1/2] tools/libs/stat: use memcpy instead of strncpy in
 getBridge
Thread-Index: AQHWnLHiDj1hArywQEGBhk10fWLXEamW+IUAgABQzQCAAAHqgA==
Date: Wed, 14 Oct 2020 15:54:36 +0000
Message-ID: <94190674-32DB-4317-9368-7F5E9906160D@arm.com>
References:
 <4ecb03b40b0da6d480e95af1da8289501a3ede0a.1602078276.git.bertrand.marquis@arm.com>
 <A6CDE62A-13F4-491B-BE0B-180657136504@arm.com>
 <20201014154744.j56skud26v5iwenx@liuwe-devbox-debian-v2>
In-Reply-To: <20201014154744.j56skud26v5iwenx@liuwe-devbox-debian-v2>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 9afdb485-d845-4a4a-5023-08d8705987c8
x-ms-traffictypediagnostic: DBBPR08MB4538:|DBAPR08MB5591:
X-Microsoft-Antispam-PRVS:
	<DBAPR08MB55915A31912E7ABE3736D6CF9D050@DBAPR08MB5591.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:1775;OLM:1775;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 tJJYkYsrouoQmvk0hSPl/mq49Mc2dBn1OJC/MA9O/BvhGLfLlRWLs574y5ZcfbBkBEKSvlx2G3uIpa6iE9X1MnVKBLi7Aa4YTlwpSG7CpKFuWzm/hUl9jAURu/jJWbq73R6cxB9ZG9/Xqz/QGFunIAzh7ZxtUf0DQzMGIgk9/vUGTkZkjqSxJBhGnsBHmlQ1cnbZOwqZTGoBTkYxyc6tyXA3OXZYHC95kDhsoxkAnM7rt4SRKb1o4xo76l1gXynRxCPaFu+ENhlukTf0CSXIvH9QaUMP2eYrOnQh1VskVzr/N2d4WnJs7epZYPSP+8CFIT8KulbEpe2HN4HXL3iqyA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(346002)(136003)(376002)(396003)(36756003)(6512007)(558084003)(33656002)(4326008)(26005)(53546011)(186003)(8676002)(6506007)(8936002)(2906002)(478600001)(66946007)(76116006)(71200400001)(66476007)(66446008)(64756008)(66556008)(5660300002)(2616005)(6916009)(86362001)(54906003)(316002)(6486002)(91956017);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 dd9RDhcxJTRTUAekTQLymt37Mgz06xz+V9rOddndOfsiDHlPbF6oOQ6tnOQ7nGn+aR+/TcXO98XNfplTR64D9/BAp3glrace076SIpWSzWCvjOufo19/2cPcwurn3pvhJOq5Oli0ajL5mw98ze6DzefnCbLNdnqp9LCL4/0ZxajVNHJNBZzjp7H+zpAhzTK6phpHm/e/h1kvx0yZYhkkRFlyBtZxUe0s0QXNyY2FAofhjmpAMZyz5eJcaGpRapj6YH7rMbwMkyGgKWuR0fOpmlL+rBNFHlsM/Dw+KeOA+xLeNaR+AMLRdIt3LOsa2s2eERi+1Ym9ngAwJxhub1hQbrU4ZzuFEShyCzhQ4bwZP7O31uYdT9zvOqD5HyLvc77kL5GHKgS0bRoJE/3wH3SEhBY+UhqwvUOJ1slirzYkSipbmbo00g7lb/CabW3jERkVb4Acg/APsqmEa1Or8Wb0PvxblVCaBSgLah8evUd46w4+cUBmQhdXGur0RSLhbZGgv1rNgUY9gVxrKDaMlVIkGZZNahSDM2Pd0M8KF576bHsYyaGTAIryilY/p4u29QmoM49i1tB+oeowdCKhMUMdR+J6KKbZBxHpAQyLI4DmP9U7CRxmwHzQ9vTo5xNzH/kySybFaje2+qDCiETpdPUtow==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D439CB9BBC0F3D4FAD83DF3DBFCB9566@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4538
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d1192e51-e6bb-43e3-57ff-08d87059734a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Mjl1P3t14itDTgj3WidtQTlpc7j2IbAqBg/8T15e/5jUXL/j9ltJQKaQYWoY6OV02fp5CQTjyilkbHFIXhQzHmhr4umUKfL6siRGr3jus/VZ0z76MklYbOkp3P6zPvej096v0I6h+lKzg32GiDGkONJK7MZORMHxtwHnTPWvibRTYlz1Ax4Y8mbca4x/jYB039F9JYh7sEfGbKDEM4uhshCrHQ00Ei7EoLg7bNsLfloS1N6H0CUUueC0NHEmaByJoBHmG70xHmzCmFJz/ciP0BIPz9guyYMxwwFG3QcGhQDZst1u09gwy47NycVcwHojI+BKnMRa/oxzAS5k1uCiTecdtwe1kdTBdZvi74QsGDh1yV1wXrOqTe40a9rUoYbMTSFrzz6DxUDB8GUsj4gTDA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(39860400002)(136003)(46966005)(5660300002)(8936002)(6862004)(36756003)(54906003)(8676002)(33656002)(316002)(6486002)(478600001)(558084003)(6512007)(4326008)(2906002)(2616005)(336012)(53546011)(26005)(6506007)(186003)(47076004)(82310400003)(70206006)(70586007)(81166007)(356005)(86362001)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Oct 2020 15:55:10.5307
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9afdb485-d845-4a4a-5023-08d8705987c8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5591



> On 14 Oct 2020, at 16:47, Wei Liu <wl@xen.org> wrote:
> 
> On Wed, Oct 14, 2020 at 10:58:33AM +0000, Bertrand Marquis wrote:
>> Hi,
>> 
>> Could this be reviewed so that gcc10 issues are fixed?
> 
> I think Juergen's comments have been addressed.
> 
> Acked-by: Wei Liu <wl@xen.org>

Thanks :-)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:01:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:01:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6818.17941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjDF-0002fe-Ew; Wed, 14 Oct 2020 16:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6818.17941; Wed, 14 Oct 2020 16:01:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjDF-0002fX-A9; Wed, 14 Oct 2020 16:01:13 +0000
Received: by outflank-mailman (input) for mailman id 6818;
 Wed, 14 Oct 2020 16:01:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSjDE-0002fS-7g
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:01:12 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe08::610])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53490081-4d95-4b2c-b35d-e42036191d7f;
 Wed, 14 Oct 2020 16:01:09 +0000 (UTC)
Received: from AM6P195CA0029.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:81::42)
 by VI1PR08MB3245.eurprd08.prod.outlook.com (2603:10a6:803:48::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Wed, 14 Oct
 2020 16:01:07 +0000
Received: from VE1EUR03FT064.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:81:cafe::63) by AM6P195CA0029.outlook.office365.com
 (2603:10a6:209:81::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Wed, 14 Oct 2020 16:01:07 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT064.mail.protection.outlook.com (10.152.19.210) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 16:01:06 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64");
 Wed, 14 Oct 2020 16:01:06 +0000
Received: from 262c79338bce.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5AA6E9BD-A7A5-4496-B86F-CEBA4DC5B97B.1; 
 Wed, 14 Oct 2020 16:01:00 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 262c79338bce.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 14 Oct 2020 16:01:00 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6025.eurprd08.prod.outlook.com (2603:10a6:10:203::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.29; Wed, 14 Oct
 2020 16:00:59 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Wed, 14 Oct 2020
 16:00:59 +0000
X-Inumbo-ID: 53490081-4d95-4b2c-b35d-e42036191d7f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ibaylR7xBjPsK6RSBCivOJaDHm8ur6lAqEqHCqFu66E=;
 b=H/cF/AtL4le7VBBwD7ljXoAiFxeEAuEiU/5RPaZU+gxduj2J4f30Ghs74P5toxmUqT8D0f1RZqvo50atnjljykrluJIpw97XeGOSBmJOJTed+ASZB1ouh4AxfGXuB3jTRXp207/pO4Pdb1FTc2NxIHPCiEczHuGiVKlo5IhBCE4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f6ddcbd06d6467a8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZFwRAeuTl6I9b+HPZciS4XNqT9JKU+IZ9bd4zlsYlOx6QOy9pYEhfuGW3alO6kaDKVk8DQnzul9Y3ffdI9Vypj03g6Uy13cZ9LSGIeWZ7sU/3aM7FGkw2WRk5q0M4LP9UOyokBHlkFS7Bq+EaQqYSJgA67MZhjUADoZrTMwNNTcFvzd1e5M9ibC4zwbmBagG+s4KsCSv7okkiLvQDkLqIKojeJh0A6dLj/Cabxpzlf7DiuS3qPM2UQCXc2PyJyJyZ/mdnjS/7HHDycdajc4rlWvLKHyblynZURc5B1j8qpA4T61V3KRbpJfX7YKA8axxIRK+QSDLjeX1AgTpcNpQig==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ibaylR7xBjPsK6RSBCivOJaDHm8ur6lAqEqHCqFu66E=;
 b=jwRMq9QRiYu/v0wOt6TPc3/DMkAozzNqbB3F64NKWtML1g3Czy+qmzpy/YV8kbuR+Fxjhh0JDdbnbQkZF4BKsxHSfzjES5n7yhcztVl/j5gkVSq+WPUCeU871IigS7oxVNpYx3nemXdygSxVWcz5yOWR2W5w4A0R7lSXf2HW/hPsqcjj1ru9IPVfctWNE0rvVIivVTpt19zMco8gZi8Hj0lD2XU7ELjn5HKjNKUmAIuWZ/c9UNuJEFTIeimBknnQRP3D3UAJIQ50nk0oIzfMkG31i9UmpPSRd2LfOxk7feqxzgOKmJ5L5DZNO+HIo0IB9XjSHaDIybtIuh9E+Nh85g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Index: AQHWohbM+uoR14U9oEmwJFjy+kmnIqmW8XGAgABQygA=
Date: Wed, 14 Oct 2020 16:00:59 +0000
Message-ID: <5A45EDA3-B01B-4AC3-B2CB-77CF90D024AD@arm.com>
References:
 <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <c22235d1-9124-74f2-5856-58f7f44dc0b7@xen.org>
In-Reply-To: <c22235d1-9124-74f2-5856-58f7f44dc0b7@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 27926528-d4c9-41ff-a525-08d8705a5c20
x-ms-traffictypediagnostic: DBBPR08MB6025:|VI1PR08MB3245:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB3245A8B490A0F123623E4BA89D050@VI1PR08MB3245.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:120;OLM:120;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ztfuT7BNSnUVjJSduyh+arVBldEZ/PIkMz1TSVB8cJ0JNqO3d+xEmb2dUi8Zr1TfiFx36QSDUEWd86orvJuoQ5oyhhfaFLI59U+2OytoVfPIjReqNmMszyQhPCERTXt2iPgD4gMgaWta+zEHCpynNOAhIFKdJUQtkvz9utRUDjrqx9McBDx8wZJRl7CSS/G5NkG/rUJAgQxi2LYUe5iQb2UqqcxNrLgEx3Paps0BMIwOTr/LNAfrw3QaTQj1uVLWXKHbS+U99Sq15UNsSP8i/iOLBY3f9Kx9NMfJjSFaC8bwUTgtWx1De2fxr5Iw4dOAojz6P4kNvPErKVcujxnTuywth/8nFl7lTaJkZsAwL+tLw9NMtvAo0w+mIJ0ynivmw97TMFY0xv0JA+2b6QGVdw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(396003)(136003)(376002)(39860400002)(83380400001)(966005)(36756003)(2906002)(4326008)(8936002)(478600001)(316002)(64756008)(33656002)(5660300002)(54906003)(66556008)(66446008)(2616005)(186003)(6512007)(66946007)(66476007)(83080400001)(86362001)(6916009)(71200400001)(76116006)(8676002)(6486002)(53546011)(6506007)(26005)(91956017);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 ziw5gO5PR1fH5Ref6A3Oq7KZrXgdV2B95cSydud0GJfAeJoJeOyT49BxOm3rc2a/TZg4apcFDk26krnzsY9PJYZnoqu26zwMqfyDlYqk7wlLvmMuVB7prZi16kgKpDlrpWBPEzezfRKo0iuQcCxUXTTYEV+eDuY3Twzv7gbBnFNEeIJDv8QW14CfCTu0+5i1FiUhmECryg19710ogq59qa+aabJz5qVojjrr5w5AbnY+bdYu3n+ZY0Nj/37zsMk7HpDIjum9d9et9LBee4H215N+IKaNEF6zCj8Aep6s9eUp2m5VU4sC9gxU65IHakRw4WkU/jf7ri1CUNXoppKqk/0F3Lsv91DHZQjdHHu6mBMQsPg7k+i0tbI1qD/tjknywnF7PuBO3tdJnMcvpEta6jPvDZqRCwZoFJr3lROCmrBs7ovMOHF8XiVQk269AJfINxoa6m2TxmaBz7oibg2ke0xSpjpSjELfB1gotMhQO3fEIaq92OfKffijIR8UzqzkDKjIxzAIaSzBw+vFHyfRZebNsSHY9ZjuDeyEDBfYkZ/PRpgikhXgTRb2ARfTG/LJXplX5BnmrLWOwSJxspno4FdqF32cDEHIKgYi4gDTtIf1vB8wseuaXsxapoi4vGDDd66M9DbkozrJ2OIfxAQjFQ==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <69A21FC582315C43B8FF1BDB528DFF0A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6025
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	271e0cdc-5325-4e9b-0dec-08d8705a57ab
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MFgI2EM3gGRgdxDeMqvSBVPaZEFlxMIgkPS+FGb/ue+pGyTScQt+QGojrah1zrYMVJOs36NzArNJCAX+bNThq+tQ2RzZe0zXGSlzNKvxrHzdAO6aB00WCBhd4CKlznvLk4Tz4A5wwIqbY0S7K2BKWJh3vn1I8ADps+7Y3xvs2TNu7enJzgN2vlOB0jZBrCBEGiEFEQH6A20s6+RNDXFY8aREUjJYxJbFvxU5rM5e+TnV5FIfor3OakDC5SBBAWbK+25xeiwyXvORPx2XM43UmP4T6gI7ZxiQdSSVlrjinqcZ8q8Dqqo07+RDNV2BYeeRDqYqZH2B05leVAQ72ZBBmzdYH0Z/skiOHLkfpW5Vm43FJs8Ib4ikUOw1iX59jFaktibS1xJMEGc+aIfrSDcO0Rh4c3brLD19/XhrRK5ED/T499ghgVx79DR0Hg8h0UiVcIou6FOo1WLFd5iJzutJQpdh4f4PHigNQnjkMCKI1JE=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(346002)(376002)(396003)(39860400002)(46966005)(478600001)(33656002)(36906005)(2616005)(5660300002)(54906003)(6486002)(82310400003)(316002)(6506007)(336012)(186003)(53546011)(86362001)(26005)(107886003)(47076004)(4326008)(8936002)(966005)(8676002)(2906002)(356005)(36756003)(70206006)(83380400001)(81166007)(6862004)(6512007)(83080400001)(70586007)(82740400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Oct 2020 16:01:06.6816
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 27926528-d4c9-41ff-a525-08d8705a5c20
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT064.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3245

> On 14 Oct 2020, at 12:11, Julien Grall <julien@xen.org> wrote:
> 
> Hi Bertrand,
> 
> On 14/10/2020 11:41, Bertrand Marquis wrote:
>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>> not implementing the workaround for it could deadlock the system.
>> Add a warning during boot informing the user that only trusted guests
>> should be executed on the system.
> 
> I think we should update SUPPORT.MD to say we will not security support those processors. Stefano, what do you think?

That could make sense to do that yes.
If Stefano confirms then i can do this in a v2

> 
>> An equivalent warning is already given to the user by KVM on cores
>> affected by this errata.
> 
> I don't seem to find the warning in Linux. Do you have a link?

sure:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=abf532cceaca9c21a148498091f87de1b8ae9b49

> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>  xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>>  1 file changed, 21 insertions(+)
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 6c09017515..8f9ab6dde1 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>>    #endif
>>  +#ifdef CONFIG_ARM64_ERRATUM_832075
>> +
>> +static int warn_device_load_acquire_errata(void *data)
>> +{
>> +    static bool warned = false;
>> +
>> +    if ( !warned )
>> +    {
>> +        warning_add("This CPU is affected by the errata 832075.\n"
>> +                    "Guests without required CPU erratum workarounds\n"
>> +                    "can deadlock the system!\n"
>> +                    "Only trusted guests should be used on this system.\n");
>> +        warned = true;
> 
> I was going to suggest to use WARN_ON_ONCE() but it looks like it never made upstream :(.

I did do this as it was done in the smc warning function (that's why i pushed a patch for it).

Cheers
Bertrand

> 
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +#endif
>> +
>>  #ifdef CONFIG_ARM_SSBD
>>    enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME;
>> @@ -419,6 +439,7 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>>          .capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
>>          MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
>>                     (1 << MIDR_VARIANT_SHIFT) | 2),
>> +        .enable = warn_device_load_acquire_errata,
>>      },
>>  #endif
>>  #ifdef CONFIG_ARM64_ERRATUM_834220
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:02:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:02:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6824.17953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjEG-0002n0-Sh; Wed, 14 Oct 2020 16:02:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6824.17953; Wed, 14 Oct 2020 16:02:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjEG-0002mt-PY; Wed, 14 Oct 2020 16:02:16 +0000
Received: by outflank-mailman (input) for mailman id 6824;
 Wed, 14 Oct 2020 16:02:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSjEF-0002mo-NG
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:02:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c901b96e-ee31-4c5b-8169-8631ed022c1f;
 Wed, 14 Oct 2020 16:02:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EFF69AB95;
 Wed, 14 Oct 2020 16:02:13 +0000 (UTC)
X-Inumbo-ID: c901b96e-ee31-4c5b-8169-8631ed022c1f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602691334;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TM75gjrOKOQOYIVqj36yGiifNPOCac22GvcGkd56HZQ=;
	b=PeU8k3QgW50o68hEKqdxQyLymApfVqrPafynzrrvPWhGETNQnFZBavDowbuQR3+zXt91ap
	7BgnN73O7qhjypDFDrKmPrOqbEGPcsxJoYc+Jwc51AxrGYyswi3qnx7v0aGSEnatw0Jx1k
	+OMNGUNmE/TMSrJO2TyasiPyQ1JCnqo=
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201014153150.83875-1-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
Date: Wed, 14 Oct 2020 18:02:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201014153150.83875-1-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.10.2020 17:31, Jason Andryuk wrote:
> Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
> kernel built with CONFIG_PVH=y CONFIG_PV=n lacks the note.  In this case,
> virt_entry will be UNSET_ADDR, overwritten by the ELF header e_entry,
> and fail the check against the virt address range.
> 
> Change the code to only check virt_entry against the virtual address
> range if it was set upon entry to the function.

Not checking at all seems wrong to me. The ELF spec anyway says
"virtual address", so an out of bounds value is at least suspicious.

> Maybe the overwriting of virt_entry could be removed, but I don't know
> if there would be unintended consequences where (old?) kernels don't
> have an elfnote, but do have an in-range e_entry?  The failing kernel I
> just looked at has an e_entry of 0x1000000.

And if you dropped the overwriting, what entry point would we use
in the absence of an ELF note?

I'd rather put up the option of adjusting the entry (or the check),
if it looks like a valid physical address.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:03:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:03:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6830.17964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjFR-0002xR-6s; Wed, 14 Oct 2020 16:03:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6830.17964; Wed, 14 Oct 2020 16:03:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjFR-0002xK-3g; Wed, 14 Oct 2020 16:03:29 +0000
Received: by outflank-mailman (input) for mailman id 6830;
 Wed, 14 Oct 2020 16:03:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSjFQ-0002xE-ER
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:03:28 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.72]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79cf0579-ec7c-4187-a30d-a33442a64657;
 Wed, 14 Oct 2020 16:03:26 +0000 (UTC)
Received: from AM6P195CA0083.EURP195.PROD.OUTLOOK.COM (2603:10a6:209:86::24)
 by HE1PR0801MB1979.eurprd08.prod.outlook.com (2603:10a6:3:4e::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.24; Wed, 14 Oct
 2020 16:03:23 +0000
Received: from VE1EUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:86:cafe::be) by AM6P195CA0083.outlook.office365.com
 (2603:10a6:209:86::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Wed, 14 Oct 2020 16:03:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT057.mail.protection.outlook.com (10.152.19.123) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Wed, 14 Oct 2020 16:03:22 +0000
Received: ("Tessian outbound d5e343850048:v64");
 Wed, 14 Oct 2020 16:03:21 +0000
Received: from 72698fdd898a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3251E851-1999-4E38-9789-611E9E81CDC1.1; 
 Wed, 14 Oct 2020 16:03:05 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 72698fdd898a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 14 Oct 2020 16:03:05 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6025.eurprd08.prod.outlook.com (2603:10a6:10:203::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.29; Wed, 14 Oct
 2020 16:03:04 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Wed, 14 Oct 2020
 16:03:04 +0000
X-Inumbo-ID: 79cf0579-ec7c-4187-a30d-a33442a64657
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GroZN5Oqf9xFuSrmWp1qhjNI82PUQGs7fZy6KTEnEic=;
 b=fgGAMUv24uR7C6++l5EY3jigeC1XTSTdWZV8De+q3hUROKSINiHm42cuhUn70ZEo1q8LjvtUntS2j8CRsM0JOB6rSjA7YmJNl4cyx+Ciq30nZ1/f0qxokvHPIzPNs0IimlInWRRWihlxFmI88OHbOP6/2N5hPc+9EKNuUQGWKV8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: bb43d7e5e12a3e86
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Lp7ObyWYb73RaX2wEIYA0VozMEjCUbcd9BN+pSgq9LiScvpXioC+SMEqSoZkadIG7j1sMncWqbxRCJtt8WhQrFQv58lt8GZG1SZ+UnbbUv+2X/GZNQaaJG0OVx1LQ9B2XGiFypzdKwqcJlADB7D0mAnMs14F4jnouIvzkBeQGPmhuGfi6jkWI0KBWB+JsJb8u2gvEQvkjAxw/MajCF8LK12xfNUWph0xYl71UhGwonpv1xxn8BH/f7zOXB1bmf0Wb2ibtE8F73yhxWJZXIFkbkTpMcfmeust/JRni3E4pqS/Z3m3UmAk7Y+fzz0vskQGFi5tJ/NcwTpQM0lJ18PZbQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GroZN5Oqf9xFuSrmWp1qhjNI82PUQGs7fZy6KTEnEic=;
 b=CKqbunyaF5epRNEaqaNOuM1M4JPm9LVNTFO8JbvNDCDjzmKwS6Pko1rE412Op8ADpBWH9/y+HKQvk3unnswvyfhIYO3jsE7rFoTkUAGVpCuVGJ/6JTAvZTik7vPMKefiOOAY+QPpHWkVgKvlX7Xihhy9zCghjv1sJdN9Wnr6G7BjW2a/RNeDc1NExrjHeGOLC+KtfR8NzFPwSqHMYrpd4oQLIJESh+2udEjVA37D2nJGN9LmRFCM/uim3YONs6kKtgU8gLoQVKadRDbkdqBumYH/o4GbBgIS+wz3UfN8FiufXyU7c/4eeiWpQLG+Tae4m8X7+VKCWgBr5kXeGhaI9g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Index: AQHWohbM+uoR14U9oEmwJFjy+kmnIqmW9/2AgABK1AA=
Date: Wed, 14 Oct 2020 16:03:04 +0000
Message-ID: <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com>
References:
 <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
In-Reply-To: <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 4e072503-51d8-4d44-967e-08d8705aad1b
x-ms-traffictypediagnostic: DBBPR08MB6025:|HE1PR0801MB1979:
X-Microsoft-Antispam-PRVS:
	<HE1PR0801MB19799B12A2021DD2C47463659D050@HE1PR0801MB1979.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:773;OLM:773;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Jg0bkKLsd7dgSb83f6lScRWVVfZwkhZGGju1ZYWR4mRZ1Jn0tM/HOP+IJBLW9IpXqvOLmJbHXcCm4KiFxLyoCFIQS+HgN4Hu/EJOmMTZSgZ+rsUCN643ffDwvWTEXwH0SgjHfjxiDLd7qjQ5+s18ekQYwAzAzX4shgeK5cm+d67L6mtTFw90dakLtwM7KoMVrtCs+VJD49iNNXLqMOd7neafzAbD1iKYMtnVeJU058nvxEdrlgZrlKW/NsC+tE9FCghHiZH+suWh14Gu/HY3PQjHOmnmcP06SaRSE7FrIKzN+Kj1zdmcDMOEDlallGO+N8sIY59xKzPKA04YkNwTeg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(366004)(396003)(136003)(376002)(39860400002)(83380400001)(36756003)(2906002)(4326008)(8936002)(478600001)(316002)(64756008)(33656002)(5660300002)(54906003)(66556008)(66446008)(2616005)(186003)(6512007)(66946007)(66476007)(86362001)(6916009)(71200400001)(76116006)(8676002)(6486002)(53546011)(6506007)(26005)(91956017);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 V3RjnXN3jMj+ppdq6W7ankN23tVui07YJ4XvE9aiWu3Y5IRvlBN3QMuF11H7No2OGwiLET7B14NGFSPUvmyGjK8zyBFVZuOaE6CoErmu5PtPp249h9ZaBnczS+jNeZDxrfYdMQoM6WJPoNQY6gbAKoT4i+o6xQaWjIAs2gHlC4vlsY34nE2BpL3H96YnNTUwyu2aHVOXqTzsuJ/0aJqC3ml9V0O1kjD2DybxoLUApE02Qe8U9pEqBd2gvjbnx9m7pkDWrlcmqluSAUUU6/qc+Y3ifO65ddOZmL1QbvE1ABSVdwBN9MSe8nWXKf2OA1bSxUARvthLd42NtJ3MTP5eOHYTHhaAdLING3PY7EViM/3hiJyxVBevnMJ+I+r0ZKOS+5G7Bf+nCnhr9cfX113sjdmwvHAYTXwuwkccrPnQB4C3bOlVPsEC6uPihxNbRXp5fW9lyMOhRuEfMmfcEPPim8GU3uYIKMHI658JFbZ97Qbq4NyxbbT+hpV7+iNhwcP4mpkgJJKuyY024Jz3bpSLWGaVx2LCbXShZvh2lVBg13xQDi+7fcXKBVPv68WNqtAxYR6HMhxoA2qH3PI30CUxsrO3jz1P+7uVc4it1vE+bz6RxSMScuG9oqhOEXbs5QzdG0/siIVd9em9wiRmy3Ig2w==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <E6BDF892ABF61347BACF906A5285FDE6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6025
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f67589db-9ca5-4047-20c0-08d8705aa24a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	e1fvdU5afs1HkIeSKx4zoukwpPuYB/ngo7UDWDPb9gc2HNZGMPeddJYYx6JQqbMEqg7b7o32WGy4WwCkUwbH/m8LDJ62KrML4iJGgTac6UgAeFigtS2Fxt4pID6mzJgPrthGvQtBUHrpGOt3hFLC3PDqXT7qwN1oHuAr8e/NCi5fKlkorGkOf+iqCllqCurqDZfBechLhrf7IP/xHtnEhczI8/80ajeSOayOpmpwQkQta1nqWNrQz2FbpBXcJB9vUQcUG2eX5bMQWCGQftvY0WJOvlRyKmkHmJr6EERGBt+TA6GKZcynohv5qBdQwnn5XaYq3eDDwU5V3wGkbGaalLaagYYU56U8OMdEZx3FRgsYkms+kJzCf1iudcX0LZ0jyPlr/bKjUTsyS85AqMSIVQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(39860400002)(376002)(396003)(346002)(46966005)(6486002)(316002)(6862004)(70206006)(36756003)(478600001)(8936002)(5660300002)(8676002)(86362001)(70586007)(4326008)(81166007)(356005)(336012)(33656002)(2906002)(54906003)(2616005)(82740400003)(82310400003)(186003)(26005)(6512007)(6506007)(107886003)(83380400001)(47076004)(53546011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Oct 2020 16:03:22.5431
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e072503-51d8-4d44-967e-08d8705aad1b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB1979



> On 14 Oct 2020, at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> 
> On 14/10/2020 11:41, Bertrand Marquis wrote:
>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>> not implementing the workaround for it could deadlock the system.
>> Add a warning during boot informing the user that only trusted guests
>> should be executed on the system.
>> An equivalent warning is already given to the user by KVM on cores
>> affected by this errata.
>> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>> 1 file changed, 21 insertions(+)
>> 
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 6c09017515..8f9ab6dde1 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>> 
>> #endif
>> 
>> +#ifdef CONFIG_ARM64_ERRATUM_832075
>> +
>> +static int warn_device_load_acquire_errata(void *data)
>> +{
>> +    static bool warned = false;
>> +
>> +    if ( !warned )
>> +    {
>> +        warning_add("This CPU is affected by the errata 832075.\n"
>> +                    "Guests without required CPU erratum workarounds\n"
>> +                    "can deadlock the system!\n"
>> +                    "Only trusted guests should be used on this system.\n");
>> +        warned = true;
> 
> This is an antipattern, which probably wants fixing elsewhere as well.
> 
> warning_add() is __init.  It's not legitimate to call from a non-init
> function, and a less useless build system would have modpost to object.
> 
> The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
> but this provides no safety at all.
> 
> 
> What warning_add() actually does is queue messages for some point near
> the end of boot.  It's not clear that this is even a clever thing to do.
> 
> I'm very tempted to suggest a blanket change to printk_once().

If this is needed then it could be done in another series?
It would be good to keep this patch purely handling the errata.

Regards
Bertrand

> 
> ~Andrew
> 
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +#endif
>> +
>> #ifdef CONFIG_ARM_SSBD
>> 
>> enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME;
>> @@ -419,6 +439,7 @@ static const struct arm_cpu_capabilities arm_errata[] = {
>>         .capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
>>         MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
>>                    (1 << MIDR_VARIANT_SHIFT) | 2),
>> +        .enable = warn_device_load_acquire_errata,
>>     },
>> #endif
>> #ifdef CONFIG_ARM64_ERRATUM_834220



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:10:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6847.17977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjLn-0003SF-V7; Wed, 14 Oct 2020 16:10:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6847.17977; Wed, 14 Oct 2020 16:10:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjLn-0003Ra-RV; Wed, 14 Oct 2020 16:10:03 +0000
Received: by outflank-mailman (input) for mailman id 6847;
 Wed, 14 Oct 2020 16:10:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VoTD=DV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSjLm-0003KW-HE
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:10:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 58938394-22fa-4520-8430-193962dc4735;
 Wed, 14 Oct 2020 16:10:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B83B2AB95;
 Wed, 14 Oct 2020 16:09:59 +0000 (UTC)
X-Inumbo-ID: 58938394-22fa-4520-8430-193962dc4735
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602691799;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=koDW1ZCC/VC0xA34aReXt7n81VfkMgCPkcz9C6Wbfcc=;
	b=NOaMDIt9D/DLqCTA0XTh4+slPUIp1K8yWz4RolDu0/jOKMsULQW88R6TZsbFmn+dYRHRZD
	3qSnVm7LWUwDMWr6rfjXK62sniMBy+k4ZSIrWgiFARjsd+j/ZVurukGaLC9WQAPYXZZZJd
	rF3eRDqcblxFsZMjRHfLkEnyx8y02KE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B83B2AB95;
	Wed, 14 Oct 2020 16:09:59 +0000 (UTC)
Subject: Re: [PATCH 2/2] x86/mwait-idle: Customize IceLake server support
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, jun.nakajima@intel.com,
 kevin.tian@intel.com, Chen Yu <yu.c.chen@intel.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>
References: <1602558169-23140-1-git-send-email-igor.druzhinin@citrix.com>
 <1602558169-23140-2-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <13a10ad7-fc10-d055-3780-6d1a2be13549@suse.com>
Date: Wed, 14 Oct 2020 18:09:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <1602558169-23140-2-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.10.2020 05:02, Igor Druzhinin wrote:
> From: Chen Yu <yu.c.chen@intel.com>
> 
> On the ICX platform, C1E auto-promotion is enabled by default.
> As a result, the CPU might fall into C1E more often than on previous
> platforms. So disable C1E auto-promotion and expose C1E as a separate
> idle state.
> 
> Besides C1 and C1E, the exit latency of C6 was measured
> by a dedicated tool. However, the exit latency (41us) exposed
> by _CST is much smaller than the measured one (128us). This
> is probably because _CST uses the exit latency when woken
> up from PC0+C6, whereas C6 was measured from PC6+C6. Choose
> the latter as we need the longest latency in theory.
> 
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> [Linux commit a472ad2bcea479ba068880125d7273fc95c14b70]
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:12:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:12:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6850.17989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjOE-00041I-Bf; Wed, 14 Oct 2020 16:12:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6850.17989; Wed, 14 Oct 2020 16:12:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjOE-00041B-8I; Wed, 14 Oct 2020 16:12:34 +0000
Received: by outflank-mailman (input) for mailman id 6850;
 Wed, 14 Oct 2020 16:12:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Hv6=DV=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSjOD-000416-60
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:12:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f7eb236-8670-46b5-a0a2-d44f0e04382b;
 Wed, 14 Oct 2020 16:12:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9B843AB95;
 Wed, 14 Oct 2020 16:12:31 +0000 (UTC)
X-Inumbo-ID: 8f7eb236-8670-46b5-a0a2-d44f0e04382b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602691951;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OxMMnS5B+nWAljHNp7JmzV04WRtDVUgeRIpunFkUjx4=;
	b=I93qXBnOT+TO0xXsXDgiTZrpYeuwPlvtgPp+Mh5nh/xN974lqsNtxSk+uUQQuHOyOPHHkL
	c0eicMZhw/6b+l4RHrKHv/lOHz4yR7W9/B0dpyvdCXG5MSSgF7/9ZJVhBNnjzIZKaSfapo
	SySDKMnpvUB37JypmAG2X+cripOp4m8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 9B843AB95;
	Wed, 14 Oct 2020 16:12:31 +0000 (UTC)
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201014153150.83875-1-jandryuk@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <77e8bf3b-6172-2900-dd5e-9d059a410b0e@suse.com>
Date: Wed, 14 Oct 2020 18:12:30 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201014153150.83875-1-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.10.20 17:31, Jason Andryuk wrote:
> Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A

This is wrong. Have a look at arch/x86/platform/pvh/head.S


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:14:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6854.18001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjQP-0004CH-PE; Wed, 14 Oct 2020 16:14:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6854.18001; Wed, 14 Oct 2020 16:14:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjQP-0004CA-LO; Wed, 14 Oct 2020 16:14:49 +0000
Received: by outflank-mailman (input) for mailman id 6854;
 Wed, 14 Oct 2020 16:14:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JG+m=DV=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSjQO-0004C5-MX
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:14:48 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f5ef3c6f-8320-4c70-94c2-87b521c2e66e;
 Wed, 14 Oct 2020 16:14:47 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 83DB8D6E;
 Wed, 14 Oct 2020 09:14:47 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id EC7313F71F;
 Wed, 14 Oct 2020 09:14:46 -0700 (PDT)
X-Inumbo-ID: f5ef3c6f-8320-4c70-94c2-87b521c2e66e
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] tools/xenpmd: Fix gcc10 snprintf warning
Date: Wed, 14 Oct 2020 17:14:29 +0100
Message-Id: <005bd16161fe803e9c2805bddc440db31c46169b.1602692002.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Add a check of the snprintf return code and ignore the entry if it
reports an error. This should in fact never happen; the check mainly
serves to satisfy gcc and prevent a compilation error.

This solves the following gcc warning when compiling for arm32 host
platforms with optimizations enabled:
xenpmd.c:92:37: error: '%s' directive output may be truncated writing
between 4 and 2147483645 bytes into a region of size 271
[-Werror=format-truncation=]

This also solves the following Debian bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 tools/xenpmd/xenpmd.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/xenpmd/xenpmd.c b/tools/xenpmd/xenpmd.c
index 35fd1c931a..12b82cf43e 100644
--- a/tools/xenpmd/xenpmd.c
+++ b/tools/xenpmd/xenpmd.c
@@ -102,6 +102,7 @@ FILE *get_next_battery_file(DIR *battery_dir,
     FILE *file = 0;
     struct dirent *dir_entries;
     char file_name[284];
+    int ret;
     
     do 
     {
@@ -111,11 +112,15 @@ FILE *get_next_battery_file(DIR *battery_dir,
         if ( strlen(dir_entries->d_name) < 4 )
             continue;
         if ( battery_info_type == BIF ) 
-            snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
+            ret = snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
                      dir_entries->d_name);
         else 
-            snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
+            ret = snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
                      dir_entries->d_name);
+        /* This should not happen but is needed to pass gcc checks. */
+        if ( ret < 0 )
+            continue;
+        file_name[sizeof(file_name) - 1] = '\0';
         file = fopen(file_name, "r");
     } while ( !file );
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:28:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6860.18012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjd5-0005Ag-Va; Wed, 14 Oct 2020 16:27:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6860.18012; Wed, 14 Oct 2020 16:27:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjd5-0005AZ-Sd; Wed, 14 Oct 2020 16:27:55 +0000
Received: by outflank-mailman (input) for mailman id 6860;
 Wed, 14 Oct 2020 16:27:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSjd5-0005AU-0n
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:27:55 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6765579f-1f39-483b-92da-b6e6a98d28be;
 Wed, 14 Oct 2020 16:27:54 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id d24so58351ljg.10
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 09:27:54 -0700 (PDT)
X-Inumbo-ID: 6765579f-1f39-483b-92da-b6e6a98d28be
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=7WRe1XAt+nB86rUXoXv+uMmWylO352IQALVwPEQlFTU=;
        b=LVJuC5DxeSO5i+lwK+GtVemtgt1WUYJSXj7bQIngLKRJ75gDUAPTXtTik2tKXKThMj
         Ch8AlUEp3yuUSXxvyBQOIqs/lPwlBFqCSZL0RYXtGzMosh1KsjOYhCVLaCWc2qsqL0Fd
         znqwvaCa4njLy4l+WQ7CppjbmO1fMN5H+JkqKyt66JTvzkOpqf8KyGslFIFpai1swlYx
         tVEFVy+NmuLVTGES+9alg7d/5/QAcEiMcaZ7PfNmpQRzNJt6he6OXlPmR699rUoxBopR
         ix0qsqRw8DSUtGOROiNTtD/VThOLiMJsQ2ErF0C47VEZbRUG6UmS9elpLxHQt9p9m++k
         C+vw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=7WRe1XAt+nB86rUXoXv+uMmWylO352IQALVwPEQlFTU=;
        b=LZriZqKnaNonjXd4M4p4BmnbkxJJ2M/ZdWg+Hpy7gROmGe5clzVCegBIecS+rssOk7
         P0RGGlBF74hEH4VGDkaBS/ydBnKdLQyGdk5jpmcj/lngiYsSJyWurTYx243vU6DF4FUM
         zrCnvAB0+ETRTUgbdgPYZPE1SKjdz/0WFLVIbl4VDEPRLT6I/4eayRtmCwtiygudMCR1
         GBsCGA+pcuPAPyYEaPJpA+ZQg+uEy20Jo2hAGlLeJzOHeDTDBvSd4rliigIiE5lejzZG
         uZRW1MQ4WUTuPR38tht2jckj/XGu3dXDPMoIsFiNeXEAWqXT0APaN7+T+uCCBrgI8H/O
         nLBg==
X-Gm-Message-State: AOAM531jHZ5i9hqstB2M+5U0mo4OynDZFOQJngak2cAYT/M6sA0k7MAK
	f7n5oOHWqCE6R9iBLmFaWhBPMaAlq5cdRCd7/cQ=
X-Google-Smtp-Source: ABdhPJwy7DZ4YGbH1j/s6TMHWgFw9mRRWW8xYK3ks+AR9JtSocHTkaqyomYOLUhlZkBsYDxkr2jjzh38I2Imd2v4k+g=
X-Received: by 2002:a2e:924b:: with SMTP id v11mr19704ljg.262.1602692872999;
 Wed, 14 Oct 2020 09:27:52 -0700 (PDT)
MIME-Version: 1.0
References: <20201014153150.83875-1-jandryuk@gmail.com> <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
In-Reply-To: <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 14 Oct 2020 12:27:40 -0400
Message-ID: <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Oct 14, 2020 at 12:02 PM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 14.10.2020 17:31, Jason Andryuk wrote:
> > Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
> > kernel build CONFIG_PVH=y CONFIG_PV=n lacks the note.  In this case,
> > virt_entry will be UNSET_ADDR, overwritten by the ELF header e_entry,
> > and fail the check against the virt address range.

Oh, these should be CONFIG_XEN_PVH=y and CONFIG_XEN_PV=n

> > Change the code to only check virt_entry against the virtual address
> > range if it was set upon entry to the function.
>
> Not checking at all seems wrong to me. The ELF spec anyway says
> "virtual address", so an out of bounds value is at least suspicious.
>
> > Maybe the overwriting of virt_entry could be removed, but I don't know
> > if there would be unintended consequences where (old?) kernels don't
> > have an elfnote, but do have an in-range e_entry?  The failing kernel I
> > just looked at has an e_entry of 0x1000000.
>
> And if you dropped the overwriting, what entry point would we use
> in the absence of an ELF note?

elf_xen_note_check currently has:

    /* PVH only requires one ELF note to be set */
    if ( parms->phys_entry != UNSET_ADDR32 )
    {
        elf_msg(elf, "ELF: Found PVH image\n");
        return 0;
    }

> I'd rather put up the option of adjusting the entry (or the check),
> if it looks like a valid physical address.

The function doesn't know if the image will be booted PV or PVH, so I
guess we do all the checks, but use 'parms->phys_entry != UNSET_ADDR32
&& parms->virt_entry == UNSET_ADDR' to conditionally skip checking
virt?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:28:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6861.18025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjdB-0005Dx-Dz; Wed, 14 Oct 2020 16:28:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6861.18025; Wed, 14 Oct 2020 16:28:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjdB-0005Dq-9Q; Wed, 14 Oct 2020 16:28:01 +0000
Received: by outflank-mailman (input) for mailman id 6861;
 Wed, 14 Oct 2020 16:28:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSjd9-0005AU-Vt
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:28:00 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e75df86a-1cd3-4001-9df5-80a838c4cadd;
 Wed, 14 Oct 2020 16:27:58 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id l28so96145lfp.10
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 09:27:58 -0700 (PDT)
X-Inumbo-ID: e75df86a-1cd3-4001-9df5-80a838c4cadd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=1r3Mi3nzZHo+KHq/rD+GI+FDOuQPdwRMPGa7RKX0szg=;
        b=EMya3lOZYJJ8Z8JieBre1xEOpk9Xq6EVYzPCMrt5dDZlfFQI4SNNiy0AM46dMhz6WY
         d2trOsQIOrGsRGrIn0QM1ng06w+YKChSQqi7Pf7fGUsRWVl0TVpJ4sAhZ+Dz/ySF5rZ6
         AA5FhZdStDEWaAiSnzTKbliU8wjKY5Giab1goepJKMRtYCqZfCDhPQjuMCMb5xzL3j7S
         +S7vzhc9set2TDaR7KNuN9F7cFa6SDnYKvZzj89Uht41Xxwpxklf5eBDbGnMSRbQdM/Y
         x9X79dMTCbaiYVUcv3fsBoCRHdw6HpDsb4K1jtw77GKgBHoOiDRv+cLYcLcSp29LFcMq
         n3Rg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=1r3Mi3nzZHo+KHq/rD+GI+FDOuQPdwRMPGa7RKX0szg=;
        b=LMtA0cY8EZ7kCdWQF/oivav+nJdIa0ZOPcykYijOf7OxwKDcTixp72A3njf7Vc4Dyw
         ioYsr9sBFMc/uBcuVNAz9JmhosXgIBF6v2q1aiRTUt3ZBoOsG8baJ6g4EyT7Jy1+doem
         HviZEhxDz8rCod3g8v2zYAVOSGao7PqiGjnvaYUWuLt7c+fRAVQywudflS8MJiVwmK5W
         1LlyfMaGex5aW7pFLyX4bU7NYxB2aegl4QThWA5As+LzGWcj6XIcDNVsMJGHOWZ1LnWI
         gLbo/PtBCErQaqYyYs3qxK/gHoHsN87J4iST5CKLOq97RQu+mzdYlUhZMlMc9sVxvTgC
         lhPA==
X-Gm-Message-State: AOAM533009JLlA+mE93FlGELIcQh6Xo6UZh50xfhcD9mgDtsdzCuy48a
	G5T1tU+lImiGEX43Ilw4CKYdpRWyaUPawCOC+/Y=
X-Google-Smtp-Source: ABdhPJz6K2y3twdlgXriMYPczuRxnV6v1ICjDrBpOzy/uppp50Mj+JCsAH+sJMnYN1qCFfUMAn+5uT6ZR5rcuU4r/SY=
X-Received: by 2002:a19:7f4a:: with SMTP id a71mr58788lfd.202.1602692877110;
 Wed, 14 Oct 2020 09:27:57 -0700 (PDT)
MIME-Version: 1.0
References: <20201014153150.83875-1-jandryuk@gmail.com> <77e8bf3b-6172-2900-dd5e-9d059a410b0e@suse.com>
In-Reply-To: <77e8bf3b-6172-2900-dd5e-9d059a410b0e@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 14 Oct 2020 12:27:45 -0400
Message-ID: <CAKf6xptqRKJ87KiJ52MpYR50RNgDEqqA5RsqXphQ1NUeVZgb=Q@mail.gmail.com>
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Oct 14, 2020 at 12:12 PM Jürgen Groß <jgross@suse.com> wrote:
>
> On 14.10.20 17:31, Jason Andryuk wrote:
> > Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
>
> This is wrong. Have a look at arch/x86/platform/pvh/head.S

That is XEN_ELFNOTE_PHYS32_ENTRY, which is different from
XEN_ELFNOTE_ENTRY in arch/x86/xen/xen-head.S:
#ifdef CONFIG_XEN_PV
        ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
#endif

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:28:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:28:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6863.18036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjdf-0005MV-MN; Wed, 14 Oct 2020 16:28:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6863.18036; Wed, 14 Oct 2020 16:28:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjdf-0005MO-J1; Wed, 14 Oct 2020 16:28:31 +0000
Received: by outflank-mailman (input) for mailman id 6863;
 Wed, 14 Oct 2020 16:28:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WBOb=DV=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kSjde-0005M7-1a
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:28:30 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 108d1873-1c74-4836-be91-8357ba789c44;
 Wed, 14 Oct 2020 16:28:28 +0000 (UTC)
X-Inumbo-ID: 108d1873-1c74-4836-be91-8357ba789c44
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602692909;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=GYvjGHiPLwLr37AO1ujQE9TZBPnF7iXTNfM397l76wg=;
  b=BJOi2lxF4Kumq/5v6f7JqFV6559UnkYaxAJC0qLqhHXQyYLkTiIZ/1Y9
   lxYWLzqYciTDoAxqzpOqmEjzs4kp8oyFC338K8C4XdmUgxfHYGKmEf+JN
   +xc+PF87IrphDMYlYyZ4RfPseWhX5Vq76NwFUmf8xRK8pTVapgzZkXrzH
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: P31UaNUW0/aCe7Mb9/SgZaZXvqh3GEVD+qvM2mSni5xbW4NgqULBnXxAKxHEc74XTQixQ5s2WK
 0SUgwvHTKrJhMtZu2hj8dFNwn9SqA8KrezSaw2DjQrkxdII8cEd96AfjHJQmagm+JmLZyqGYjz
 LzbOrxnGikIv6CjNH+Ss8foU7Ev4qZN2aOWkd+SSom4t5PpC7NNxn8+YcAfyZ3TmVeKC/O+pjH
 aWFARQRtDCLjQxkFkO49uMHaiEo74CGxKnM01D7G5yvWp/EOe20ftCj8icA7KJQcGFiSoDnpAz
 E7I=
X-SBRS: 2.5
X-MesageID: 28982875
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="28982875"
Date: Wed, 14 Oct 2020 18:28:14 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Andy Lutomirski <luto@kernel.org>,
	Manuel Bouyer <bouyer@antioche.eu.org>
Subject: Re: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
Message-ID: <20201014162814.GT19254@Air-de-Roger>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
 <20201009115301.19516-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201009115301.19516-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Fri, Oct 09, 2020 at 12:53:01PM +0100, Andrew Cooper wrote:
> Despite appearing to be a deliberate design choice of early PV64, the
> resulting behaviour for unregistered SYSCALL callbacks creates an untenable
> testability problem for Xen.  Furthermore, the behaviour is undocumented,
> bizarre, and inconsistent with related behaviour in Xen, and very liable to
> introduce a security vulnerability into a PV guest if the author hasn't
> studied Xen's assembly code in detail.
> 
> There are two different bugs here.
> 
> 1) The current logic confuses the registered entrypoints, and may deliver a
>    SYSCALL from 32bit userspace to the 64bit entry, when only a 64bit
>    entrypoint is registered.
> 
>    This has been the case ever since 2007 (c/s cd75d47348b) but up until
>    2018 (c/s dba899de14) the wrong selectors would be handed to the guest for
>    a 32bit SYSCALL entry, making it appear as if it were a 64bit entry all along.
> 
>    Xen would malfunction under these circumstances, if it were a PV guest.
>    Linux would as well, but PVOps has always registered both entrypoints and
>    discarded the Xen-provided selectors.  NetBSD really does malfunction as a
>    consequence (benignly now, but a VM DoS before the 2018 Xen selector fix).
> 
> 2) In the case that neither SYSCALL callback is registered, the guest will
>    be crashed when userspace executes a SYSCALL instruction, which is a
>    userspace => kernel DoS.
> 
>    This has been the case ever since the introduction of 64bit PV support, but
>    behaves unlike all other SYSCALL/SYSENTER callbacks in Xen, which yield
>    #GP/#UD in userspace before the callback is registered, and are therefore
>    safe by default.
> 
> This change does constitute a change in the PV ABI, for corner cases of a PV
> guest kernel registering neither callback, or not registering the 32bit
> callback when running on AMD/Hygon hardware.
> 
> It brings the behaviour in line with PV32 SYSCALL/SYSENTER, and PV64
> SYSENTER (safe by default, until explicitly enabled), as well as native
> hardware (always delivered to the single applicable callback).
> 
> Most importantly however, and the primary reason for the change, is that it
> lets us sensibly test the fast system call entrypoints under all states a PV
> guest can construct, to prove correct behaviour.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Andy Lutomirski <luto@kernel.org>
> CC: Manuel Bouyer <bouyer@antioche.eu.org>
> 
> v2:
>  * Drop unnecessary instruction suffixes
>  * Don't truncate #UD entrypoint to 32 bits
> 
> Manuel: This will result in a corner case change for NetBSD.
> 
> At the moment on native, 32bit userspace on 64bit NetBSD will get #UD (Intel,
> etc), or an explicit -ENOSYS (AMD, etc) when trying to execute a 32bit SYSCALL
> instruction.
> 
> After this change, a 64bit PV VM will consistently see #UD (like on Intel, etc
> hardware) even when running on AMD/Hygon hardware (as Xsyscall32 isn't
> registered with Xen), rather than following Xsyscall into the proper system
> call path.
> ---
>  xen/arch/x86/x86_64/entry.S | 26 +++++++++++++++++++-------
>  1 file changed, 19 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
> index 000eb9722b..aaf8402f93 100644
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -26,18 +26,30 @@
>  /* %rbx: struct vcpu */
>  ENTRY(switch_to_kernel)
>          leaq  VCPU_trap_bounce(%rbx),%rdx
> -        /* TB_eip = (32-bit syscall && syscall32_addr) ?
> -         *          syscall32_addr : syscall_addr */
> -        xor   %eax,%eax
> +
> +        /* TB_eip = 32-bit syscall ? syscall32_addr : syscall_addr */
> +        mov   VCPU_syscall32_addr(%rbx), %ecx

This being an unsigned long field, shouldn't you use %rcx here?

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 16:42:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 16:42:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6896.18049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjr2-00076F-OO; Wed, 14 Oct 2020 16:42:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6896.18049; Wed, 14 Oct 2020 16:42:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSjr2-000768-Kz; Wed, 14 Oct 2020 16:42:20 +0000
Received: by outflank-mailman (input) for mailman id 6896;
 Wed, 14 Oct 2020 16:42:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nbRJ=DV=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kSjr0-000763-Pb
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 16:42:18 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35cd91e0-8b3c-4e5c-b9d1-8f569fe15a3f;
 Wed, 14 Oct 2020 16:42:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602693737;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=7bzHnT69tW0R5VXZ4MQBOqCdfcMHvevAeG9gp5iZwQE=;
  b=XYd+d7+w7WNW4XT74HqEZh7nUColr0Knw/QZpIpUJbai8N1xDOjnw++3
   sa2D37aZVWFZBCij5iVTfPF+ixpR54pGfEgBOqbWgZoHauhWAlsb7SoJL
   k7DxxqM2u/hdALL9a72Qc6/6CbVs8cZnL16ooMO7y7elMW7J1t08Y/JGt
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 7oisZ/rXFwmGBpjwwUY/CpJ+X4cOtC4q8+dVsvSG2yXyhJvCIhj8DZ+bwjlfnkP/ThzrkU8b9v
 uYv8swOSWwupdAhhhH+VZjxHaW7Q6BlyzLc90JgKgw73rJCNkiZyWnmZH0Vgm2xwX0TV9BHAN4
 47JKsJ6GjKAtUDjKNRcj5HuZNGCMcnd4tstEkc5vPXevA/MrpZRu8GUHYZNxYZTKU7JoaRD4w8
 5MdtFJx4rADewqgITaaeVcvnLa23U2sfdMQIVbs+0KexfhA82jNk+BFV++uh2kPfZrFVXVpKMa
 bh4=
X-SBRS: 2.5
X-MesageID: 29256984
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="29256984"
Subject: Re: [PATCH 1/2] x86/intel: insert Ice Lake X (server) model numbers
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <andrew.cooper3@citrix.com>,
	<roger.pau@citrix.com>, <wl@xen.org>, <jun.nakajima@intel.com>,
	<kevin.tian@intel.com>
References: <1602558169-23140-1-git-send-email-igor.druzhinin@citrix.com>
 <ca9a1cce-1e51-0f55-4527-42f48bc7d6ab@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <e30f7f98-ee1a-1a24-0496-01911a79c861@citrix.com>
Date: Wed, 14 Oct 2020 17:42:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ca9a1cce-1e51-0f55-4527-42f48bc7d6ab@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14/10/2020 16:47, Jan Beulich wrote:
> On 13.10.2020 05:02, Igor Druzhinin wrote:
>> LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond
>> to Ice Lake desktop according to External Design Specification vol.2.
> 
> Could you tell me where this is publicly available? Even after spending
> quite a bit of time on searching for it, I can't seem to be able to
> find it. And the SDM doesn't have enough information (yet).

It's true that the SDM doesn't have this data. As I mentioned, the data is
taken from the External Design Specification for Ice Lake server, which is
accessed using an Intel account. I'm not completely sure it is right to make
changes in an open source project like Linux or Xen based on information which
is not publicly available yet. But Intel frequently does this with Linux: even
my second patch uses data taken from one of these documents and was committed
to Linux by Intel first.

Do we need the information to be publicly available to commit these changes as
well? If not, we can run with these changes in our patchqueue until it gets out
properly.

Igor


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:06:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:06:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6916.18079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkES-0000mU-4Q; Wed, 14 Oct 2020 17:06:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6916.18079; Wed, 14 Oct 2020 17:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkES-0000mN-1M; Wed, 14 Oct 2020 17:06:32 +0000
Received: by outflank-mailman (input) for mailman id 6916;
 Wed, 14 Oct 2020 17:06:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSkEQ-0000mE-98
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:06:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 70251fd8-73ba-4907-8c36-27d714b03d5b;
 Wed, 14 Oct 2020 17:06:29 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSkEN-00024b-QI; Wed, 14 Oct 2020 17:06:27 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSkEN-00082n-JW; Wed, 14 Oct 2020 17:06:27 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Q4fyAUhdnkBCkpuu4znc9gASkYekK/Z+3FZj3aUhbHo=; b=HxRb6YYIWFHQxObunVomfYbmsc
	Tf8p/97Fb25Dx3HHwTfvl+cwpygh4RjT+x1cavsfW8VdzIWp0vLDEyneCdCsuPMmRtnv320C/u0VD
	JTEXDqZJk3+m1h9Kh00cC2wFSF4yqV25mi/EuZM+1lLB+EiV3uYzt8j/v/J7hhOKW4Fs=;
Subject: Re: [PATCH] xen/arm: Document the erratum #853709 related to Cortex
 A72
To: Michal Orzel <Michal.Orzel@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <20201014100541.11687-1-michal.orzel@arm.com>
 <ef5fc4c3-5de3-0ec1-fed9-afdb8dd1bfc1@xen.org>
 <AM6PR08MB4641ACDB3B63F0A065FBD48389050@AM6PR08MB4641.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c4341231-f41d-961f-c9cd-116369a7bc75@xen.org>
Date: Wed, 14 Oct 2020 18:06:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <AM6PR08MB4641ACDB3B63F0A065FBD48389050@AM6PR08MB4641.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 14/10/2020 12:06, Michal Orzel wrote:
> Hi Julien,
> 
> I agree. You can update the commit message.

Thanks. I have updated the commit message and committed it.

On a different topic, it looks like you are sending the e-mail as HTML. 
Would you mind configuring your client to send plain text?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:13:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:13:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6920.18095 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkKu-0001hq-U4; Wed, 14 Oct 2020 17:13:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6920.18095; Wed, 14 Oct 2020 17:13:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkKu-0001hj-Pi; Wed, 14 Oct 2020 17:13:12 +0000
Received: by outflank-mailman (input) for mailman id 6920;
 Wed, 14 Oct 2020 17:13:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSkKs-0001he-Nf
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:13:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12e80ea1-d492-49f0-9274-195e11de360d;
 Wed, 14 Oct 2020 17:13:09 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSkKq-0002Dj-1M; Wed, 14 Oct 2020 17:13:08 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSkKp-0008UZ-Ob; Wed, 14 Oct 2020 17:13:07 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=asYEsBCK7FHR113gjIQRyzHFSpY0wsQypVEj1ka1RUk=; b=BM7COcwL6CQGYYFLVwr85Mz3O4
	HhXiezYeDaVjHEGKyvQ7S6amyO0+/RWqsYIt9KR0Ho7ABnOnyAf1mYDXSsWOFOWr6AbrJATJvHWp3
	lm8iHsUJynBFGUXdsxfLGfzzuaoooUo4SiMN5SBG4q7ynDpzQ1fM4bNIGM0qwutEIDGo=;
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <53a6355c-b01c-f1a4-72dd-ea84b00bbd49@xen.org>
Date: Wed, 14 Oct 2020 18:13:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 14/10/2020 12:35, Andrew Cooper wrote:
> On 14/10/2020 11:41, Bertrand Marquis wrote:
>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>> not implementing the workaround for it could deadlock the system.
>> Add a warning during boot informing the user that only trusted guests
>> should be executed on the system.
>> An equivalent warning is already given to the user by KVM on cores
>> affected by this errata.
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>>   xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>>   1 file changed, 21 insertions(+)
>>
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 6c09017515..8f9ab6dde1 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>>   
>>   #endif
>>   
>> +#ifdef CONFIG_ARM64_ERRATUM_832075
>> +
>> +static int warn_device_load_acquire_errata(void *data)
>> +{
>> +    static bool warned = false;
>> +
>> +    if ( !warned )
>> +    {
>> +        warning_add("This CPU is affected by the errata 832075.\n"
>> +                    "Guests without required CPU erratum workarounds\n"
>> +                    "can deadlock the system!\n"
>> +                    "Only trusted guests should be used on this system.\n");
>> +        warned = true;
> 
> This is an antipattern, which probably wants fixing elsewhere as well.
> 
> warning_add() is __init.  It's not legitimate to call from a non-init
> function, and a less useless build system would have modpost to object.

You are right. We didn't spot any issue because CPU hotplug is not yet 
supported on Arm.

> 
> The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
> but this provides no safety at all.

Right.

> 
> 
> What warning_add() actually does is queue messages for some point near
> the end of boot.  It's not clear that this is even a clever thing to do.

Well, the goal is to have a single place where you can find out all the 
inconsistencies on the platform. It can be difficult to figure that out 
with...

> 
> I'm very tempted to suggest a blanket change to printk_once().

A simple printk would do. But I guess we could add a wrapper that would add a 
line of **** before and after, to make it easier to spot.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:22:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:22:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6924.18107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkTX-0002bT-Or; Wed, 14 Oct 2020 17:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6924.18107; Wed, 14 Oct 2020 17:22:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkTX-0002bM-Lr; Wed, 14 Oct 2020 17:22:07 +0000
Received: by outflank-mailman (input) for mailman id 6924;
 Wed, 14 Oct 2020 17:22:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSkTV-0002bH-LQ
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:22:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e728d711-da15-491b-a4b6-78a60af0244c;
 Wed, 14 Oct 2020 17:22:04 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSkTT-0002OD-03; Wed, 14 Oct 2020 17:22:03 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSkTS-0000P2-D1; Wed, 14 Oct 2020 17:22:02 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Ls+4YOC0fgeNVpTgUvXE65IuxrYXZDVstu0AYEnVfF0=; b=ZbU1VXKisMiRO8vgo8EC94RDNf
	XR7mEVLyxNlf7otUSxSrYikcbKAGGRweqXJZYloI7Crhuvc0326Csqzzq7ekhvOHxrydNo4rsWN4o
	1vtREzWVOc9JAWcMOa3xmm9EbPBzhl97vlVLKW+A0dj0pk0Y21lVNvM3BWnUfTJ3Di+M=;
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
 <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
Date: Wed, 14 Oct 2020 18:22:00 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 14/10/2020 17:03, Bertrand Marquis wrote:
> 
> 
>> On 14 Oct 2020, at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>
>> On 14/10/2020 11:41, Bertrand Marquis wrote:
>>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>>> not implementing the workaround for it could deadlock the system.
>>> Add a warning during boot informing the user that only trusted guests
>>> should be executed on the system.
>>> An equivalent warning is already given to the user by KVM on cores
>>> affected by this errata.
>>>
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>>> 1 file changed, 21 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>>> index 6c09017515..8f9ab6dde1 100644
>>> --- a/xen/arch/arm/cpuerrata.c
>>> +++ b/xen/arch/arm/cpuerrata.c
>>> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>>>
>>> #endif
>>>
>>> +#ifdef CONFIG_ARM64_ERRATUM_832075
>>> +
>>> +static int warn_device_load_acquire_errata(void *data)
>>> +{
>>> +    static bool warned = false;
>>> +
>>> +    if ( !warned )
>>> +    {
>>> +        warning_add("This CPU is affected by the errata 832075.\n"
>>> +                    "Guests without required CPU erratum workarounds\n"
>>> +                    "can deadlock the system!\n"
>>> +                    "Only trusted guests should be used on this system.\n");
>>> +        warned = true;
>>
>> This is an antipattern, which probably wants fixing elsewhere as well.
>>
>> warning_add() is __init.  It's not legitimate to call from a non-init
>> function, and a less useless build system would have modpost to object.
>>
>> The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
>> but this provides no safety at all.
>>
>>
>> What warning_add() actually does is queue messages for some point near
>> the end of boot.  It's not clear that this is even a clever thing to do.
>>
>> I'm very tempted to suggest a blanket change to printk_once().
> 
> If this is needed then this could be done in another series?

The callback ->enable() will be called when a CPU is onlined/offlined. 
So this is going to require fixing if you plan to support CPU hotplug or 
suspend/resume.

> Would be good to keep this patch as purely handling the errata.

In the case of this patch, how about moving the warning_add() call into 
enable_errata_workarounds()?

By then we should know all the errata present on your platform. All CPUs 
onlined afterwards (i.e. at runtime) should always abide by the set 
discovered during boot.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:37:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:37:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6945.18137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkiC-0003v5-5L; Wed, 14 Oct 2020 17:37:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6945.18137; Wed, 14 Oct 2020 17:37:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkiC-0003uy-2Q; Wed, 14 Oct 2020 17:37:16 +0000
Received: by outflank-mailman (input) for mailman id 6945;
 Wed, 14 Oct 2020 17:37:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jyCh=DV=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1kSkiB-0003ur-34
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:37:15 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de1e48d2-dcee-4b87-8283-43d3452be10d;
 Wed, 14 Oct 2020 17:37:13 +0000 (UTC)
Subject: Re: [GIT PULL] xen: branch for v5.10-rc1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602697032;
	bh=RBvyCIyAboxCfBbfrdgU7EznUEpAYdunpTXFg4FrMQs=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=lLWGVM2ZT+yC10c9FJmlitO3uH2CPiX9rsMfq2mXJ8JPU7MNmOxBtUFWX0NfSI56C
	 YQg9P6PoZjlE3bYvS1li5cFIUVSbHdAvqDm5EzYoOj/fpnV8WfmEJgmuTpqNrqQPGY
	 OjrcL/roJUSgiFALGhrwkF4D4qDl1y032ZOVPSU4=
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201014053917.19251-1-jgross@suse.com>
References: <20201014053917.19251-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20201014053917.19251-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1-tag
X-PR-Tracked-Commit-Id: 32118f97f41d26a2447118fa956715cb4bd1bdac
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: a09b1d78505eb9fe27597a5174c61a7c66253fe8
Message-Id: <160269703278.25844.16425875472592967815.pr-tracker-bot@kernel.org>
Date: Wed, 14 Oct 2020 17:37:12 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Wed, 14 Oct 2020 07:39:17 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/a09b1d78505eb9fe27597a5174c61a7c66253fe8

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:41:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:41:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6948.18149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkmc-0004mV-UT; Wed, 14 Oct 2020 17:41:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6948.18149; Wed, 14 Oct 2020 17:41:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkmc-0004mO-QP; Wed, 14 Oct 2020 17:41:50 +0000
Received: by outflank-mailman (input) for mailman id 6948;
 Wed, 14 Oct 2020 17:41:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSkmb-0004mJ-VJ
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:41:50 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 904468c7-44dd-4100-87ad-f4876b9f707c;
 Wed, 14 Oct 2020 17:41:47 +0000 (UTC)
X-Inumbo-ID: 904468c7-44dd-4100-87ad-f4876b9f707c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602697307;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=S0vv2LR8wcqmu2jTJZ4PxlH8GTUhLCm/gIwKO2lPVVQ=;
  b=O+uSbfaujJkVbuO3h7CWQBjyjWgiMXzu7rDshpDbsR94819yHcsHcsYm
   fOFVKVuRFXqz8eiH426MyW5itzNWEYoLqb3sxSBkBFq36IrSNnz5c0XRS
   NHeHNZKEKKbFlNTJ7AynF1kEIJrg/WIB3KBJwjBG++wmQV8pKg3Xlz2Y6
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: BGW0CMy7pVmGHOZIBvWchbaY4Lopi6ln0HHFYhT6jY7g5CtblgRYFeLxuvFZ9RJEYq5YqVAFTI
 ej2ePBRLcgxkkBNzWLHjI95UWma555qGnUeQG8peBOUXox3IqbLgEOKz2ictkaStT1XXhyuK57
 QzvD4aVKeyFlAbVq3TCSqj0EJrLWhfPGtuv4/fcEApJo6ai33UjJaz2hfaVEQ5C/7kdc/djs9E
 262m/l2wXrCsYpaDWkl200FBoDU2MFST4pQAdv7wtiQiPnG+kt/Uv5sJ67aleDFUh7dbGFoWPn
 Wf0=
X-SBRS: 2.5
X-MesageID: 30051881
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="30051881"
Subject: Re: [PATCH v2] x86/pv: Inject #UD for missing SYSCALL callbacks
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>, Andy Lutomirski <luto@kernel.org>,
	Manuel Bouyer <bouyer@antioche.eu.org>
References: <20200923101848.29049-4-andrew.cooper3@citrix.com>
 <20201009115301.19516-1-andrew.cooper3@citrix.com>
 <20201014162814.GT19254@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <476ff576-e3ea-a461-6486-e117d8360b35@citrix.com>
Date: Wed, 14 Oct 2020 18:41:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201014162814.GT19254@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 17:28, Roger Pau Monné wrote:
> On Fri, Oct 09, 2020 at 12:53:01PM +0100, Andrew Cooper wrote:
>> Despite appearing to be a deliberate design choice of early PV64, the
>> resulting behaviour for unregistered SYSCALL callbacks creates an untenable
>> testability problem for Xen.  Furthermore, the behaviour is undocumented,
>> bizarre, and inconsistent with related behaviour in Xen, and very liable to
>> introduce a security vulnerability into a PV guest if the author hasn't
>> studied Xen's assembly code in detail.
>>
>> There are two different bugs here.
>>
>> 1) The current logic confuses the registered entrypoints, and may deliver a
>>    SYSCALL from 32bit userspace to the 64bit entry, when only a 64bit
>>    entrypoint is registered.
>>
>>    This has been the case ever since 2007 (c/s cd75d47348b) but up until
>>    2018 (c/s dba899de14) the wrong selectors would be handed to the guest for
>>    a 32bit SYSCALL entry, making it appear as if it were a 64bit entry all along.
>>
>>    Xen would malfunction under these circumstances, if it were a PV guest.
>>    Linux would as well, but PVOps has always registered both entrypoints and
>>    discarded the Xen-provided selectors.  NetBSD really does malfunction as a
>>    consequence (benignly now, but a VM DoS before the 2018 Xen selector fix).
>>
>> 2) In the case that neither SYSCALL callbacks are registered, the guest will
>>    be crashed when userspace executes a SYSCALL instruction, which is a
>>    userspace => kernel DoS.
>>
>>    This has been the case ever since the introduction of 64bit PV support, but
>>    behaves unlike all other SYSCALL/SYSENTER callbacks in Xen, which yield
>>    #GP/#UD in userspace before the callback is registered, and are therefore
>>    safe by default.
>>
>> This change does constitute a change in the PV ABI, for corner cases of a PV
>> guest kernel registering neither callback, or not registering the 32bit
>> callback when running on AMD/Hygon hardware.
>>
>> It brings the behaviour in line with PV32 SYSCALL/SYSENTER, and PV64
>> SYSENTER (safe by default, until explicitly enabled), as well as native
>> hardware (always delivered to the single applicable callback).
>>
>> Most importantly however, and the primary reason for the change, is that it
>> lets us sensibly test the fast system call entrypoints under all states a PV
>> guest can construct, to prove correct behaviour.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Andy Lutomirski <luto@kernel.org>
>> CC: Manuel Bouyer <bouyer@antioche.eu.org>
>>
>> v2:
>>  * Drop unnecessary instruction suffixes
>>  * Don't truncate #UD entrypoint to 32 bits
>>
>> Manuel: This will result in a corner case change for NetBSD.
>>
>> At the moment on native, 32bit userspace on 64bit NetBSD will get #UD (Intel,
>> etc), or an explicit -ENOSYS (AMD, etc) when trying to execute a 32bit SYSCALL
>> instruction.
>>
>> After this change, a 64bit PV VM will consistently see #UD (like on Intel, etc
>> hardware) even when running on AMD/Hygon hardware (as Xsyscall32 isn't
>> registered with Xen), rather than following Xsyscall into the proper system
>> call path.
>> ---
>>  xen/arch/x86/x86_64/entry.S | 26 +++++++++++++++++++-------
>>  1 file changed, 19 insertions(+), 7 deletions(-)
>>
>> diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
>> index 000eb9722b..aaf8402f93 100644
>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -26,18 +26,30 @@
>>  /* %rbx: struct vcpu */
>>  ENTRY(switch_to_kernel)
>>          leaq  VCPU_trap_bounce(%rbx),%rdx
>> -        /* TB_eip = (32-bit syscall && syscall32_addr) ?
>> -         *          syscall32_addr : syscall_addr */
>> -        xor   %eax,%eax
>> +
>> +        /* TB_eip = 32-bit syscall ? syscall32_addr : syscall_addr */
>> +        mov   VCPU_syscall32_addr(%rbx), %ecx
> This being an unsigned long field, shouldn't you use %rcx here?

Yes I should.  Sorry - I thought I'd fixed all of these.  I'll add higher
half handlers to the XTF test.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:44:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:44:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6951.18160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkpF-0004wP-B4; Wed, 14 Oct 2020 17:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6951.18160; Wed, 14 Oct 2020 17:44:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkpF-0004wI-8A; Wed, 14 Oct 2020 17:44:33 +0000
Received: by outflank-mailman (input) for mailman id 6951;
 Wed, 14 Oct 2020 17:44:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSkpE-0004wD-12
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:44:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 545f1002-d4f8-4083-bd26-9fe2f6bb09ab;
 Wed, 14 Oct 2020 17:44:31 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSkp9-0002s4-5w; Wed, 14 Oct 2020 17:44:27 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSkp8-0001qI-UK; Wed, 14 Oct 2020 17:44:27 +0000
X-Inumbo-ID: 545f1002-d4f8-4083-bd26-9fe2f6bb09ab
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=9G9V5QVITeXFPoWuo/oZ2eGetGiHUKD9Hl5VI+jZpz4=; b=Cw6L94okPxWAvuaC0L7psMOOAH
	6034AkhoeuFe9ZFkpvwv6ryfYrPuFgeyt3vGf/NZO1bujCgK/RpRsOsUhUee8rkdqYoCmtqjO3XWR
	NAAoCnFs/tAmzsYFe4xfUim2uuxrIQIB+OuyyNJ/I+fHVo9J0Wz+WyuDrsa0N+DSw3qs=;
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Alex_Benn=c3=a9e?=
 <alex.bennee@linaro.org>, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org>
 <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <e3e39834-e137-00eb-d0b4-d55c8afdd0e1@xen.org>
Date: Wed, 14 Oct 2020 18:44:24 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 12/10/2020 20:02, Stefano Stabellini wrote:
> On Sat, 10 Oct 2020, Julien Grall wrote:
>> On 28/09/2020 07:47, Masami Hiramatsu wrote:
>>> Hello,
>>
>> Hi Masami,
>>
>>> This made progress with my Xen boot on DeveloperBox (
>>> https://www.96boards.org/product/developerbox/ ) with ACPI.
>>>
>>
>> I have reviewed the patch attached and I have a couple of remarks about it.
>>
>> The STAO table was originally created to allow a hypervisor to hide devices
>> from a controller domain (such as Dom0). If this table is not present, then it
>> means the OS/hypervisor can use any device listed in the ACPI table.
>>
>> Additionally, the STAO table should never be present in the host ACPI table.
>>
>> Therefore, I think the code should not try to find the STAO. Instead, it
>> should check whether the SPCR table is present.
> 
> Yes, that makes sense, but that brings me to the next question.
> 
> SPCR seems to be required by SBBR, however, Masami wrote that he could
> boot on a system without SPCR, which gets me very confused for two
> reasons:
> 
> 1) Why is there no SPCR? Isn't it supposed to be mandatory? Is it
> because there is no UART on Masami's system?

I can't comment specifically on Masami's system, but I can make two broad 
comments:
    1) Not all systems have to be compliant with the SBBR. But in 
theory, only systems passing the SBBR tests can claim to be 
compliant. This brings us to the second point...
    2) "Mandatory" is what the specs aim to enforce. In my experience, 
some vendors may bend the rules and still claim they are compliant.

Even the Linux documentation says that the SPCR is mandatory. But in 
reality, the implementation will just ignore it.

 From my understanding of the code, Linux will only use it to discover 
the preferred console and enable earlycon. Without it, Linux will 
require the user to specify console=<...> or earlycon=<...>.

The UART will then be discovered via the DSDT.

> 
> 2) If there is no SPCR, how did Masami manage to boot Xen?
> I take it without any serial output? Just with the framebuffer?

After his patch, Xen can boot without the SPCR. You will just see no 
logs. But I believe he enabled earlyprintk for his platform.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:48:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6954.18173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSksY-000585-RA; Wed, 14 Oct 2020 17:47:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6954.18173; Wed, 14 Oct 2020 17:47:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSksY-00057y-Nw; Wed, 14 Oct 2020 17:47:58 +0000
Received: by outflank-mailman (input) for mailman id 6954;
 Wed, 14 Oct 2020 17:47:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MeL6=DV=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kSksX-000577-2R
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:47:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e490caa4-a6f5-42d9-b41d-639974549f07;
 Wed, 14 Oct 2020 17:47:56 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSksT-0002xV-A4; Wed, 14 Oct 2020 17:47:53 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kSksT-00021T-35; Wed, 14 Oct 2020 17:47:53 +0000
X-Inumbo-ID: e490caa4-a6f5-42d9-b41d-639974549f07
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=xcOGFvAWb0jDtMXoQWODmGvVCUSSa0MkKfdRyvwphGk=; b=Oj0MAr8mOTqNQmWqlHhMKk4ftI
	UX1dLj1vhugjAg1lapUT6Q4Yhv2xIbNXZwmmxtZDlYXP3zkZuBeYjPR92tZdkZJ3IkA/+xQLYp9jU
	3+oD9jLayq7klaIKWDeIVSMqp8ewAGSY8Odgykpe/gwACPu+0cPxUXytkOTtsMAlDCP4=;
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
To: Elliott Mitchell <ehem+xen@m5p.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 bertrand.marquis@arm.com, andre.przywara@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org>
 <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s>
 <20201012213451.GA89158@mattapan.m5p.com>
 <alpine.DEB.2.21.2010131759270.10386@sstabellini-ThinkPad-T480s>
 <20201014013706.GA98635@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1b5f19f6-ea70-9e58-bf36-de7f7d54153a@xen.org>
Date: Wed, 14 Oct 2020 18:47:50 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201014013706.GA98635@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

On 14/10/2020 02:37, Elliott Mitchell wrote:
> On Tue, Oct 13, 2020 at 06:06:26PM -0700, Stefano Stabellini wrote:
>> On Mon, 12 Oct 2020, Elliott Mitchell wrote:
>>> I'm on different hardware, but some folks have set up Tianocore for it.
>>> According to Documentation/arm64/acpi_object_usage.rst,
>>> "Required: DSDT, FADT, GTDT, MADT, MCFG, RSDP, SPCR, XSDT".  Yet when
>>> booting a Linux kernel directly on the hardware it lists APIC, BGRT,
>>> CSRT, DSDT, DBG2, FACP, GTDT, PPTT, RSDP, and XSDT.
>>>
>>> I don't know whether Linux's ACPI code omits mention of some required
>>> tables and merely panics if they're absent.  Yet I'm speculating the list
>>> of required tables has shrunk, SPCR is no longer required, and the
>>> documentation is out of date.  Perhaps SPCR was required in early Linux
>>> ACPI implementations, but more recent ones removed that requirement?
>>
>> I have just checked and SPCR is still a mandatory table in the latest
>> SBBR specification. It is probably one of those cases where the firmware
>> claims to be SBBR compliant, but it is not, and it happens to work with
>> Linux.
> 
> Is meeting the SBBR specification supposed to be a requirement of running
> Xen-ARM?

This is not my goal. We should try to get Xen running everywhere as long 
as this doesn't require a lot of extra code. IOW, don't ask me to 
review/accept a port of Xen to RPI3 ;).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:54:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6959.18197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkyN-00062W-Pc; Wed, 14 Oct 2020 17:53:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6959.18197; Wed, 14 Oct 2020 17:53:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkyN-00062O-Lv; Wed, 14 Oct 2020 17:53:59 +0000
Received: by outflank-mailman (input) for mailman id 6959;
 Wed, 14 Oct 2020 17:53:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSkyM-00060V-AW
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:53:58 +0000
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55547cac-0c74-47f0-bfc4-7bec21957944;
 Wed, 14 Oct 2020 17:53:57 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id p16so233055ilq.5
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 10:53:57 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
 by smtp.gmail.com with ESMTPSA id
 v15sm67765ile.37.2020.10.14.10.53.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 14 Oct 2020 10:53:56 -0700 (PDT)
X-Inumbo-ID: 55547cac-0c74-47f0-bfc4-7bec21957944
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=kNA7XdpIZYXKzRQjpHvs7EGpy//348CMdfmkrip0gJQ=;
        b=nKZUjGXMEvm82reNTDrwwnVGhfvkR8nBHBIkIGWc/7najPmnztimJ2Y2MPbe1HVnEZ
         PE4xmTrONc8ehF8itOwCT9lxIwpW7MfMK9kAtK4odWRlyCtm4y/TJzaSFx7ptXtz+jT+
         mnXIHov8FSkxTJh4b/zJvUAZ40Hb/bqcIl8CYPAxQlRrvNVSkaH+JzjQ455IxqxYa+ug
         9SOKt27fCZruecRT05ajlf4UytbgJw0kAalMZManyBLVyKhBgrewbeKHFpsZFsb0cNP7
         2hjUlQAxt79uBrN/WLGHxDnO48hXuKaJFkd1J+peAdp5qzztUtBP5+ybmwKMKdLhHZF0
         pN0A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=kNA7XdpIZYXKzRQjpHvs7EGpy//348CMdfmkrip0gJQ=;
        b=JamkSk09fYrU6t7NxxtYXOhzJ1unotpAkDECkn6jkpN6mT2XupU9K7FU1GUMIGIZqS
         if/7F7Lgic+QootTjoUiYvYv7TTFNTVaHyypcxitQGmMQaUJVrJSfrK6AxD1So9JCrwd
         L2EBpSa7dEF+d9vNPHVNAjXYiYhpsRlBbUAi4xVWTWysT680r4QAQ+b62uHPeM1s3ZY7
         gLiFHT/Ysi4338fJBg4Azz9mrRqH14tDfhCTw5qSbkhoefd964GcvMGgM0rNYsn63xli
         ZPQRbZl3MpDC4l1h0VECJ2DeGPAiVUtQ8/1nbagdyLJyOx9u4v8n9nqR0yzXzoF5vzdl
         VGzA==
X-Gm-Message-State: AOAM531WODhXWL0tDhnhGo9TF6Q+06Vim7hpskIzRFWp9TX0pIkgrLSS
	x52sXP66f7lGZNEG+D4biMo=
X-Google-Smtp-Source: ABdhPJwK36uGgi3FZj3ET63aoMSgUwsTywlp/l6U8zvhBjSeEYqB2o8vtAMqwJulJ8pMwnCSxMQw/g==
X-Received: by 2002:a92:d5c1:: with SMTP id d1mr269086ilq.212.1602698036971;
        Wed, 14 Oct 2020 10:53:56 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] xen: Remove Xen PVH/PVHVM dependency on PCI
Date: Wed, 14 Oct 2020 13:53:40 -0400
Message-Id: <20201014175342.152712-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201014175342.152712-1-jandryuk@gmail.com>
References: <20201014175342.152712-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A Xen PVH domain doesn't have a PCI bus or devices, so it doesn't need
PCI support built in.  Currently, XEN_PVH depends on XEN_PVHVM which
depends on PCI.

Introduce XEN_PVHVM_GUEST as a top-level item and change XEN_PVHVM to a
hidden variable.  This allows XEN_PVH to depend on XEN_PVHVM without PCI
while XEN_PVHVM_GUEST depends on PCI.

In drivers/xen, compile platform-pci depending on XEN_PVHVM_GUEST since
that pulls in the PCI dependency for linking.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
---
 arch/x86/xen/Kconfig | 18 ++++++++++++------
 drivers/xen/Makefile |  2 +-
 2 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index 218acbd5c7a0..b75007eb4ec4 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -39,16 +39,20 @@ config XEN_DOM0
 	  Support running as a Xen PV Dom0 guest.
 
 config XEN_PVHVM
-	bool "Xen PVHVM guest support"
-	default y
-	depends on XEN && PCI && X86_LOCAL_APIC
-	help
-	  Support running as a Xen PVHVM guest.
+	def_bool y
+	depends on XEN && X86_LOCAL_APIC
 
 config XEN_PVHVM_SMP
 	def_bool y
 	depends on XEN_PVHVM && SMP
 
+config XEN_PVHVM_GUEST
+	bool "Xen PVHVM guest support"
+	default y
+	depends on XEN_PVHVM && PCI
+	help
+	  Support running as a Xen PVHVM guest.
+
 config XEN_512GB
 	bool "Limit Xen pv-domain memory to 512GB"
 	depends on XEN_PV
@@ -76,7 +80,9 @@ config XEN_DEBUG_FS
 	  Enabling this option may incur a significant performance overhead.
 
 config XEN_PVH
-	bool "Support for running as a Xen PVH guest"
+	bool "Xen PVH guest support"
 	depends on XEN && XEN_PVHVM && ACPI
 	select PVH
 	def_bool n
+	help
+	  Support for running as a Xen PVH guest.
diff --git a/drivers/xen/Makefile b/drivers/xen/Makefile
index babdca808861..c3621b9f4012 100644
--- a/drivers/xen/Makefile
+++ b/drivers/xen/Makefile
@@ -21,7 +21,7 @@ obj-$(CONFIG_XEN_GNTDEV)		+= xen-gntdev.o
 obj-$(CONFIG_XEN_GRANT_DEV_ALLOC)	+= xen-gntalloc.o
 obj-$(CONFIG_XENFS)			+= xenfs/
 obj-$(CONFIG_XEN_SYS_HYPERVISOR)	+= sys-hypervisor.o
-obj-$(CONFIG_XEN_PVHVM)			+= platform-pci.o
+obj-$(CONFIG_XEN_PVHVM_GUEST)		+= platform-pci.o
 obj-$(CONFIG_SWIOTLB_XEN)		+= swiotlb-xen.o
 obj-$(CONFIG_XEN_MCE_LOG)		+= mcelog.o
 obj-$(CONFIG_XEN_PCIDEV_BACKEND)	+= xen-pciback/
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:54:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6958.18185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkyI-00060h-Gq; Wed, 14 Oct 2020 17:53:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6958.18185; Wed, 14 Oct 2020 17:53:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkyI-00060a-Db; Wed, 14 Oct 2020 17:53:54 +0000
Received: by outflank-mailman (input) for mailman id 6958;
 Wed, 14 Oct 2020 17:53:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSkyH-00060V-ER
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:53:53 +0000
Received: from mail-io1-xd2b.google.com (unknown [2607:f8b0:4864:20::d2b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aaeb0442-ea25-46b0-8d45-bb4f5f644f8f;
 Wed, 14 Oct 2020 17:53:52 +0000 (UTC)
Received: by mail-io1-xd2b.google.com with SMTP id 67so16824iob.8
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 10:53:52 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
 by smtp.gmail.com with ESMTPSA id
 v15sm67765ile.37.2020.10.14.10.53.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 14 Oct 2020 10:53:51 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kSkyH-00060V-ER
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:53:53 +0000
X-Inumbo-ID: aaeb0442-ea25-46b0-8d45-bb4f5f644f8f
Received: from mail-io1-xd2b.google.com (unknown [2607:f8b0:4864:20::d2b])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id aaeb0442-ea25-46b0-8d45-bb4f5f644f8f;
	Wed, 14 Oct 2020 17:53:52 +0000 (UTC)
Received: by mail-io1-xd2b.google.com with SMTP id 67so16824iob.8
        for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 10:53:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=9VhKl5FefWluZOToOnttrb98bLhg6tznaGTvKOR1WyM=;
        b=LG38mpaEoTV3xmy9S+QxjaWHOFTAY40ftlyU0IYGVHB9VtZgunbOT4b7+0t1C5T/EG
         0K/xX3UGeFbtMaIaaUMcDJPwbrJa44CMtFBH7HQwj32Kb5q7UppzuMTRdu8z+SZYXjFA
         PJ4Hw8bx+hbBQN68dgzyNRS6WRMAmLQynrOhBd8bvU5I3WtPesoBslSTB3jYYGO6jWAl
         rCEMC7IdqsHuI0fO8ML8nHdP22EXBUYdmj5R0h6M66pECLDPcPY2EnMSdeQzFb/rXPE5
         XDkuRe2mxM4/p9Tvg2PunJju+QPcb8PtUcEjYhyJ3bsGJ+af/CgeLPyNFnul+AHDJEYb
         GDpw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=9VhKl5FefWluZOToOnttrb98bLhg6tznaGTvKOR1WyM=;
        b=a55Ke1h1aBzwKDpzSYbh5OK3+V299GShIemgydU/darBKZjYxRQfPzWQPgz+yUV4RA
         Pe++jcds4NKbXUncGiyp0yDIpYU3TT1adVAfKoPQ4pzewBcqTbxzajsTBU0kAsfXLJ8p
         UNO0tlpbr7VksF9oQ5qV5CCfIdPsbaI5MKuBWRCpRbotv4eGx0Cr6mEHIIHnacQ1MG4R
         bYorduKIb9VSam0QHQRyYEVUd3XW3N3wxNK1ByWfOd1mSRbaeYeIkt6fDEOBDoA6tEGg
         bHWy9nkVW/eD5vi4iG+9hj1xFtQY7NYK5t009swSzA3DCNeI/bR/m/m0XdKRs22W5PWI
         G1IA==
X-Gm-Message-State: AOAM532Gtkq5U9XN/E5TLw1H9xbWYZW7GWtbjW9WX1UTjBcmvALXQ9zh
	S1B8hSvkFFHuh6Jh2AJz2TQ=
X-Google-Smtp-Source: ABdhPJw39voKKffSwqBpceAiKDbV1Rgd5kVKGzEV/A5CvZiPsXSMIKxOhx3sM5EaQ4yg9t4Akr+CZw==
X-Received: by 2002:a6b:144e:: with SMTP id 75mr422228iou.39.1602698032271;
        Wed, 14 Oct 2020 10:53:52 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
        by smtp.gmail.com with ESMTPSA id v15sm67765ile.37.2020.10.14.10.53.50
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 14 Oct 2020 10:53:51 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	linux-kernel@vger.kernel.org
Subject: [PATCH 0/2] Remove Xen PVH dependency on PCI
Date: Wed, 14 Oct 2020 13:53:39 -0400
Message-Id: <20201014175342.152712-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A Xen PVH domain doesn't have a PCI bus or devices, so it doesn't need
PCI support built in.  Currently, XEN_PVH depends on XEN_PVHVM which
depends on PCI.

The first patch introduces XEN_PVHVM_GUEST as a top-level item and
changes XEN_PVHVM to a hidden symbol.  This allows XEN_PVH to depend
on XEN_PVHVM without PCI, while XEN_PVHVM_GUEST depends on PCI.

The second patch moves XEN_512GB to clean up the option nesting.
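
The split follows a standard Kconfig pattern: a hidden symbol (no prompt)
carries the core dependencies, and a user-visible option sits on top of it.
A minimal sketch of the intended result (illustrative, not the exact patch
text):

```kconfig
# Hidden symbol: no prompt, enabled automatically when its dependencies hold.
config XEN_PVHVM
	def_bool y
	depends on XEN && X86_LOCAL_APIC

# User-visible option: only this one pulls in the PCI requirement.
config XEN_PVHVM_GUEST
	bool "Xen PVHVM guest support"
	default y
	depends on XEN_PVHVM && PCI

# PVH can now depend on XEN_PVHVM without dragging in PCI.
config XEN_PVH
	bool "Xen PVH guest support"
	depends on XEN && XEN_PVHVM && ACPI
```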

Jason Andryuk (2):
  xen: Remove Xen PVH/PVHVM dependency on PCI
  xen: Kconfig: nest Xen guest options

 arch/x86/xen/Kconfig | 38 ++++++++++++++++++++++----------------
 drivers/xen/Makefile |  2 +-
 2 files changed, 23 insertions(+), 17 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 17:54:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 17:54:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6960.18209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkyT-00066R-1M; Wed, 14 Oct 2020 17:54:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6960.18209; Wed, 14 Oct 2020 17:54:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSkyS-00066I-UY; Wed, 14 Oct 2020 17:54:04 +0000
Received: by outflank-mailman (input) for mailman id 6960;
 Wed, 14 Oct 2020 17:54:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSkyR-00060V-Ak
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:54:03 +0000
Received: from mail-il1-x143.google.com (unknown [2607:f8b0:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d3bd845-3b31-4300-bfdf-2ca521d27f5f;
 Wed, 14 Oct 2020 17:54:00 +0000 (UTC)
Received: by mail-il1-x143.google.com with SMTP id j13so241218ilc.4
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 10:54:00 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
 by smtp.gmail.com with ESMTPSA id
 v15sm67765ile.37.2020.10.14.10.53.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 14 Oct 2020 10:53:59 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kSkyR-00060V-Ak
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 17:54:03 +0000
X-Inumbo-ID: 6d3bd845-3b31-4300-bfdf-2ca521d27f5f
Received: from mail-il1-x143.google.com (unknown [2607:f8b0:4864:20::143])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6d3bd845-3b31-4300-bfdf-2ca521d27f5f;
	Wed, 14 Oct 2020 17:54:00 +0000 (UTC)
Received: by mail-il1-x143.google.com with SMTP id j13so241218ilc.4
        for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 10:54:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=72uE8HBho6CRDEfq5zUs9af/VZhtVN+A95L9VpNei3g=;
        b=WPYcwYQQ8P4JgmLO9WS1jZolr0Vv2PVTia9WT/lQ7MX5gq+HsgEva9AfGW2FlgkgbM
         HVJwCL5VPEiTqLxbrJJ8A2q6YMODS7ZMAb+ADTjLgyMXIG9zTJC02fhcJE4vdysPzQ+h
         7wOEuIA4KEZYRd4Miv5wes3HRO3QndjWh+xU/zP7xcvds0rqh59kMY4QvTVA/uvaFEnT
         VhgDXyS/LFydlvr3Q2eIXePyc1P8MCdMbaxQEA4wPi7vlq72DYhfiuBpF8QOYHsd1Vnm
         0IQ8wo3SZrNi2vGPELBiHRkH0cJPFOXShMcxHEkloat2A4GQ1WyvAGOOUMdhQUib/HpS
         8FVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=72uE8HBho6CRDEfq5zUs9af/VZhtVN+A95L9VpNei3g=;
        b=qL0giTAbnKc8vlWuOmNR/2sMRfMXqWsFFY/CPQG+v7AvX4TF2Rvd7BuZAQPeuEt2W3
         y928SpxxagV40ia6FrWU5DiMIhZguUYUGhIW5dtC20oz4Tp71LrBG41cmytSDuD0y9R+
         QcdjXqgi5sAZ/3xiF7tiyA3XKSrsg2dLzrPcB/9NWrtdPXjdr6DUNLuMXc5ligWG+VqV
         Of7Imi+a5Y49Mk/xuQhyGy1Q20D5NWVxX7wtngmLPKVoc0s4G25utniJAJ8nHcJ9Ftnf
         J/2cKdjs0UMFJXEMeQkuEtN8WXcu6NUrt3odohpJNY6B1vkicj00s2cIyX2P3tXrcEHw
         IVQg==
X-Gm-Message-State: AOAM530y+QTzC7RwsjZDyhkAbCVi3HfmCJjTJkikRYMpx0qmoonuPR+U
	jwnbqF8tFy32L/tmqYSIyOI=
X-Google-Smtp-Source: ABdhPJwj+vJExjs0I5nCxR1nMrw0GQPeUqeZDyFFqHum+1s5g1WGgg2wXN6N5rYZrkp6aUT3nWYY8w==
X-Received: by 2002:a92:3650:: with SMTP id d16mr262484ilf.29.1602698040294;
        Wed, 14 Oct 2020 10:54:00 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
        by smtp.gmail.com with ESMTPSA id v15sm67765ile.37.2020.10.14.10.53.58
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Wed, 14 Oct 2020 10:53:59 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] xen: Kconfig: nest Xen guest options
Date: Wed, 14 Oct 2020 13:53:41 -0400
Message-Id: <20201014175342.152712-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201014175342.152712-1-jandryuk@gmail.com>
References: <20201014175342.152712-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Moving XEN_512GB allows it to nest under XEN_PV.  That also allows
XEN_PVH to nest under XEN as a sibling to XEN_PV and XEN_PVHVM, giving:

[*]   Xen guest support
[*]     Xen PV guest support
[*]       Limit Xen pv-domain memory to 512GB
[*]       Xen PV Dom0 support
[*]     Xen PVHVM guest support
[*]     Xen PVH guest support

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 arch/x86/xen/Kconfig | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/arch/x86/xen/Kconfig b/arch/x86/xen/Kconfig
index b75007eb4ec4..2b105888927c 100644
--- a/arch/x86/xen/Kconfig
+++ b/arch/x86/xen/Kconfig
@@ -26,6 +26,19 @@ config XEN_PV
 	help
 	  Support running as a Xen PV guest.
 
+config XEN_512GB
+	bool "Limit Xen pv-domain memory to 512GB"
+	depends on XEN_PV && X86_64
+	default y
+	help
+	  Limit paravirtualized user domains to 512GB of RAM.
+
+	  The Xen tools and crash dump analysis tools might not support
+	  pv-domains with more than 512 GB of RAM. This option controls the
+	  default setting of the kernel to use only up to 512 GB or more.
+	  It is always possible to change the default via specifying the
+	  boot parameter "xen_512gb_limit".
+
 config XEN_PV_SMP
 	def_bool y
 	depends on XEN_PV && SMP
@@ -53,19 +66,6 @@ config XEN_PVHVM_GUEST
 	help
 	  Support running as a Xen PVHVM guest.
 
-config XEN_512GB
-	bool "Limit Xen pv-domain memory to 512GB"
-	depends on XEN_PV
-	default y
-	help
-	  Limit paravirtualized user domains to 512GB of RAM.
-
-	  The Xen tools and crash dump analysis tools might not support
-	  pv-domains with more than 512 GB of RAM. This option controls the
-	  default setting of the kernel to use only up to 512 GB or more.
-	  It is always possible to change the default via specifying the
-	  boot parameter "xen_512gb_limit".
-
 config XEN_SAVE_RESTORE
 	bool
 	depends on XEN
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 18:01:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 18:01:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6969.18222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSl5F-0007E0-0U; Wed, 14 Oct 2020 18:01:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6969.18222; Wed, 14 Oct 2020 18:01:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSl5E-0007Dt-To; Wed, 14 Oct 2020 18:01:04 +0000
Received: by outflank-mailman (input) for mailman id 6969;
 Wed, 14 Oct 2020 18:01:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSl5D-0007Do-Td
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 18:01:03 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3f57e28-5412-44d1-9c84-41d7861661f6;
 Wed, 14 Oct 2020 18:01:01 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSl5D-0007Do-Td
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 18:01:03 +0000
X-Inumbo-ID: f3f57e28-5412-44d1-9c84-41d7861661f6
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f3f57e28-5412-44d1-9c84-41d7861661f6;
	Wed, 14 Oct 2020 18:01:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602698461;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Bsx+Dx6Lh8fE0ZMgl+U7zDJoPgfY04D/vS2PEzoUDxU=;
  b=WhIWVp3SJ3PrDydIeHcEPbpqCjGQXpwJPhLikH5FrGhRpZZtPEzkhhLJ
   Ft9XyzZQpjor/1FylJkUzoZm7sztNB5zK08Mvd0+fHvnSs70KNHe744KH
   uo8vZd0fUmMmZapIK24OHdTnLnjv523uBT1OQOFJ4myOFOCRuJ/vL6y/q
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: aRW7qphm8ic4zqDSfvKalmd4NB/chpgENDhlVRYZr7360V+BSUab/iZ74xf0V4LB8xe+6ZtSjK
 DWNvOaL9Ox/v51Kf+UTICGW20gRmn48e0pKGhZRGUaGVlmb4rJNpJV/CKzl5eSTQ+gzkzlk3KP
 xSQOEa/D+qRpaPNwWUHoiySpwVRm7H1Uk4be36k43Ds+A71S5t24QKG+rE1ZHt6GkuwJ184gxj
 blR1l2ricZ1hvGRzhATSNTfI/tC2nowUZgHNYLuI207vVFXzmnyFnPVd6tSzW0HX36ANjLH6f7
 7yY=
X-SBRS: 2.5
X-MesageID: 29077229
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="29077229"
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
 <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
Date: Wed, 14 Oct 2020 19:00:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 13/10/2020 16:51, Jan Beulich wrote:
> On 12.10.2020 15:49, Andrew Cooper wrote:
>> All interrupts and exceptions pass a struct cpu_user_regs up into C.  This
>> contains the legacy vm86 fields from 32bit days, which are beyond the
>> hardware-pushed frame.
>>
>> Accessing these fields is generally illegal, as they are logically out of
>> bounds for anything other than an interrupt/exception hitting ring1/3 code.
>>
>> Unfortunately, the #DF handler uses these fields as part of preparing the
>> state dump, and being IST, accesses the adjacent stack frame.
>>
>> This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
>> layout to support shadow stacks" repositioned the #DF stack to be adjacent to
>> the guard page, which turns this OoB write into a fatal pagefault:
>>
>>   (XEN) *** DOUBLE FAULT ***
>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>   (XEN) CPU:    4
>>   (XEN) RIP:    e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
>>   (XEN) RFLAGS: 0000000000050086   CONTEXT: hypervisor (d1v0)
>>   ...
>>   (XEN) Xen call trace:
>>   (XEN)    [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
>>   (XEN)    [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
>>   (XEN)    [<ffff82d04039acd7>] F double_fault+0x107/0x110
>>   (XEN)
>>   (XEN) Pagetable walk from ffff830236f6d008:
>>   (XEN)  L4[0x106] = 80000000bfa9b063 ffffffffffffffff
>>   (XEN)  L3[0x008] = 0000000236ffd063 ffffffffffffffff
>>   (XEN)  L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
>>   (XEN)  L1[0x16d] = 8000000236f6d161 ffffffffffffffff
>>   (XEN)
>>   (XEN) ****************************************
>>   (XEN) Panic on CPU 4:
>>   (XEN) FATAL PAGE FAULT
>>   (XEN) [error_code=0003]
>>   (XEN) Faulting linear address: ffff830236f6d008
>>   (XEN) ****************************************
>>   (XEN)
>>
>> and rendering the main #DF analysis broken.
>>
>> The proper fix is to delete cpu_user_regs.es and later, so no
>> interrupt/exception path can access OoB, but this needs disentangling from the
>> PV ABI first.
>>
>> Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> Is it perhaps worth also saying explicitly that the other IST
> stacks don't suffer the same problem because show_registers()
> makes a local copy of the passed-in struct? (Doing so also
> for #DF would apparently be an alternative solution.)

They're not safe.  They merely don't explode.

https://lore.kernel.org/xen-devel/1532546157-5974-1-git-send-email-andrew.cooper3@citrix.com/
was "fixed" by CET-SS turning the guard page from not present to
read-only, but the same CET-SS series swapped #DB for #DF when it comes
to the single OoB write problem case.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 18:04:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 18:04:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6972.18235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSl8v-0007Ph-HP; Wed, 14 Oct 2020 18:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6972.18235; Wed, 14 Oct 2020 18:04:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSl8v-0007Pa-EJ; Wed, 14 Oct 2020 18:04:53 +0000
Received: by outflank-mailman (input) for mailman id 6972;
 Wed, 14 Oct 2020 18:04:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSl8u-0007PV-5j
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 18:04:52 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69f65ee3-45a0-4c28-8e51-5ae7fb18b630;
 Wed, 14 Oct 2020 18:04:51 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSl8u-0007PV-5j
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 18:04:52 +0000
X-Inumbo-ID: 69f65ee3-45a0-4c28-8e51-5ae7fb18b630
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 69f65ee3-45a0-4c28-8e51-5ae7fb18b630;
	Wed, 14 Oct 2020 18:04:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602698692;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=QP9dZ/kf6WxmC+XFLjXj3iwd2bvWj8WJ3e4EPYelksU=;
  b=cbX8m2MVku/e5Qrxnq+oLOzUIsZFGod8c8cMGPzcqOQHRkpWR3/MycbY
   tSThLTTgL1UiSoPcA8epbj8K8U1RJynPo8KfDkdGUOYzp4zIRgA9lAXzK
   j8ZtX6/R9SkHMinRm7Y8A5KLHHQ7RfGDT19M8ohV40d2grCHEts8ESKP9
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: RkRU6LoFb+K0uroA17V3eGSGvjuUUGMUujSWCqKRxmtq2qsv7vOw8BkkHt9zfs7MM0a9cL4/Ft
 2dUiIVTILpDeiiulU9gHbgx2gjg8+21xfs5GSiPybMgsipLEe8gUsLsaJNrwhMSLLMQsemLMrK
 wdiahS+UH2PIC8/GMR79mm8Bs/hMc8XQYiz5aXGXmdH8ykSz0VWu7I/DAiuKR6KfhXRT9DtM9v
 zbRh/CILhefBnCwff+hwhCnTEGRqgmAYgX+34s8k22jyocg6vnUZJwOQKaxQIPzmtyJ671KRly
 GoE=
X-SBRS: 2.5
X-MesageID: 28991967
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="28991967"
Subject: Re: [SUSPECTED SPAM][PATCH 0/2] Remove Xen PVH dependency on PCI
To: Jason Andryuk <jandryuk@gmail.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, <xen-devel@lists.xenproject.org>
CC: "H. Peter Anvin" <hpa@zytor.com>, <linux-kernel@vger.kernel.org>
References: <20201014175342.152712-1-jandryuk@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4973553f-fad2-83b9-fa19-26370ced2c2d@citrix.com>
Date: Wed, 14 Oct 2020 19:04:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201014175342.152712-1-jandryuk@gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 18:53, Jason Andryuk wrote:
> A Xen PVH domain doesn't have a PCI bus or devices,

[*] Yet.

> so it doesn't need PCI support built in.

Untangling the dependencies is a good thing, but eventually we plan to
put an optional PCI bus back in, e.g. for SRIOV usecases.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 18:47:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 18:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6976.18249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSlo8-0002Qg-IR; Wed, 14 Oct 2020 18:47:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6976.18249; Wed, 14 Oct 2020 18:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSlo8-0002QZ-F9; Wed, 14 Oct 2020 18:47:28 +0000
Received: by outflank-mailman (input) for mailman id 6976;
 Wed, 14 Oct 2020 18:47:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSlo7-0002QU-Eo
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 18:47:27 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b4b0977-021b-4c29-8908-6aa544b91eda;
 Wed, 14 Oct 2020 18:47:26 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kSlo7-0002QU-Eo
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 18:47:27 +0000
X-Inumbo-ID: 9b4b0977-021b-4c29-8908-6aa544b91eda
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9b4b0977-021b-4c29-8908-6aa544b91eda;
	Wed, 14 Oct 2020 18:47:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602701246;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=ozywa+G0/TlhuCvUoxhNXtJO/nG4bu9hmnl3Wc/T6ok=;
  b=QLRjF33OD7NK/lkNmAh5YYTO7DS0HKeuJDa45jx6wfyXbBLNrileG8Xp
   yTKbzaZqtLs7n3aC+h7SdjeARKm8vRNkQ+Ipfvw9zqs6Ijo7qTtdH7zsO
   H420Eq0YgzWLdT/fpeIxl/0IZ/TxBanhaJXvBhi7ExfOjxTlGtU2qN/Nj
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: svv+OPWgN8+e9erZVqILrhC1QXfsVBPkj5JDfFlIy7Pg7r8O2Bc01NSDiEtVsSZAs80BTLq+cJ
 twgfw1O0RKmlCQ3i/VgqIciDSFfm/lsSgqXT0dg0yqA5aKY9yp+ZEdG9x4N69NICQwUKSuiXtw
 2PB2do/pPnOzhf9w6PI5b0mT4HTOv8SYs0Y3jI2isyKOZznGiwOaMc+3+eo6MlLRcaf3MlY4YM
 bp9Qy2VcmuQqqX/l95YA/Bm5xa+F130f9k4rQGsasGCr+qdDk6TzBGLIQvt9+Y6+waC9zTRjZ2
 QHk=
X-SBRS: 2.5
X-MesageID: 29081389
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="29081389"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v2] x86/smpboot: Don't unconditionally call memguard_guard_stack() in cpu_smpboot_alloc()
Date: Wed, 14 Oct 2020 19:47:08 +0100
Message-ID: <20201014184708.17758-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

cpu_smpboot_alloc() is designed to be idempotent with respect to partially
initialised state.  This occurs for S3 and CPU parking, where enough state to
handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
when we otherwise want to offline the CPU.

For simplicity between the various configurations, Xen always uses shadow stack
mappings (Read-only + Dirty) for the guard page, irrespective of whether
CET-SS is enabled.

Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
by first writing out the supervisor shadow stack tokens with plain writes,
then changing the mapping to being read-only.

This ordering is strictly necessary to configure the BSP, which cannot have
the supervisor tokens be written with WRSS.

Instead of calling memguard_guard_stack() unconditionally, call it only when
actually allocating a new stack.  Xenheap allocations are guaranteed to be
writeable, and the net result is idempotency WRT configuring stack_base[].

Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

This can more easily be demonstrated with CPU hotplug than S3, and the absence
of bug reports goes to show how rarely hotplug is used.

v2:
 * Don't break S3/CPU parking in combination with CET-SS.  v1 would, for S3,
   turn the BSP shadow stack into regular mappings, and #DF as soon as the TLB
   shootdown completes.  For CPU Parking, it would invalidate the shadow stack
   of the parked CPUs, causing a #DF on the next NMI/#MC to hit the thread.
---
 xen/arch/x86/smpboot.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 5708573c41..67e727cebd 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -997,16 +997,18 @@ static int cpu_smpboot_alloc(unsigned int cpu)
         memflags = MEMF_node(node);
 
     if ( stack_base[cpu] == NULL )
+    {
         stack_base[cpu] = alloc_xenheap_pages(STACK_ORDER, memflags);
-    if ( stack_base[cpu] == NULL )
-        goto out;
+        if ( !stack_base[cpu] )
+            goto out;
+
+        memguard_guard_stack(stack_base[cpu]);
+    }
 
     info = get_cpu_info_from_stack((unsigned long)stack_base[cpu]);
     info->processor_id = cpu;
     info->per_cpu_offset = __per_cpu_offset[cpu];
 
-    memguard_guard_stack(stack_base[cpu]);
-
     gdt = per_cpu(gdt, cpu) ?: alloc_xenheap_pages(0, memflags);
     if ( gdt == NULL )
         goto out;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 14 19:29:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 19:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6981.18264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSmSL-0005zE-NV; Wed, 14 Oct 2020 19:29:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6981.18264; Wed, 14 Oct 2020 19:29:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSmSL-0005z7-KX; Wed, 14 Oct 2020 19:29:01 +0000
Received: by outflank-mailman (input) for mailman id 6981;
 Wed, 14 Oct 2020 19:29:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSmSK-0005yz-4O
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 19:29:00 +0000
Received: from mail-lj1-x232.google.com (unknown [2a00:1450:4864:20::232])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7831302e-a03b-4909-86fa-03ee517314b0;
 Wed, 14 Oct 2020 19:28:59 +0000 (UTC)
Received: by mail-lj1-x232.google.com with SMTP id a5so606147ljj.11
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 12:28:59 -0700 (PDT)
X-Inumbo-ID: 7831302e-a03b-4909-86fa-03ee517314b0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to;
        bh=57NdamUPepK7XrLUB0t1rWCZ+s1OpvEZeQEb0MQTGGU=;
        b=mTawzr7KKz8RbPWokneHtSk5gesUVsppUe7cosu7qp3qd4lhm6xuquzUkGlr5UmZ49
         85Vt4pIP21N4tXLRWslgRnoxdtpO9iVT4uXFvRS7I83UR9FhE58fleHfs32yebGhOexo
         uC+3ipH0Ntaqyqyl2qzWUONhlcPk1w96PDnB5RAgvblmKhf6l/HliXGhNrxzeoV2UJk0
         Yp7xGWQUxMptodGT50j0JyJeuKUpvkzwZ1pv1W7s6C9+budAZ2yBcpZdfQy/k6xZUgFK
         JkylqpYBj3Yeqy9/eEYcg9vFjmoCPC0YvC1kK3VayEqrrEZTY6vdzqQOzkovYVnq/FlM
         KV2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=57NdamUPepK7XrLUB0t1rWCZ+s1OpvEZeQEb0MQTGGU=;
        b=CIXpYEZXHIufacvLNQcAcyinXioQt+JVLfbq1tUvhNNdse8js/Ag23XWLMXudCsqaz
         afipaH2+QjPieJ6Bu8jDc/7mCwRd2F7qiGD8rMfuzDxle5HTUQ1/OHTztNokTWfYdEsJ
         y6Xt39eW8zMiJKWb7gM9rPirUisURZXQj2nEbm+8GgIHrCQVIa4qTKOOk+X/iy9mYYSj
         m+gqhIjcMylsL1ZnmbXS8X5EKNfs8FWV3fozebCTQjrEl676AzQpX/qgSg0E3XZ7KXav
         GTO7XDp7VXzjfVBQG9AEBcic2XVWaeey0BBVa62d7/ZCbZz7nfelXwbRfkDCOH67ZYkT
         Yeuw==
X-Gm-Message-State: AOAM531adcK4Xb+hPwtd3Fvn3cR4PjKUZgD0v7THPPfXpYfWi93VQOhH
	nkWQEBQexeh4QtX6sAXvDDEnwdg3HAVF8+c0m8w=
X-Google-Smtp-Source: ABdhPJyrbDWmZ3YcRPLZrVdntHLlMrtVDXONclYOum3m3DQm8wpP0M5mGc8gL+sLnhNvu5oziLgFxUgMVVL0aOJByvo=
X-Received: by 2002:a2e:924b:: with SMTP id v11mr88844ljg.262.1602703738093;
 Wed, 14 Oct 2020 12:28:58 -0700 (PDT)
MIME-Version: 1.0
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 14 Oct 2020 15:28:46 -0400
Message-ID: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
Subject: i915 dma faults on Xen
To: intel-gfx@lists.freedesktop.org, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

Hi,

Bug opened at https://gitlab.freedesktop.org/drm/intel/-/issues/2576

I'm seeing DMA faults for the i915 graphics hardware on a Dell
Latitude 5500. These were captured when I plugged into a Dell
Thunderbolt dock with two DisplayPort monitors attached.  Xen 4.12.4
staging and Linux 5.4.70 (and some earlier versions).

Oct 14 18:41:49.056490 kernel:[   85.570347] [drm:gen8_de_irq_handler
[i915]] *ERROR* Fault errors on pipe A: 0x00000080
Oct 14 18:41:49.056494 kernel:[   85.570395] [drm:gen8_de_irq_handler
[i915]] *ERROR* Fault errors on pipe A: 0x00000080
Oct 14 18:41:49.056589 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
Request device [0000:00:02.0] fault addr 39b5845000, iommu reg =
ffff82c00021d000
Oct 14 18:41:49.056594 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
PTE Read access is not set
Oct 14 18:41:49.056784 kernel:[   85.570668] [drm:gen8_de_irq_handler
[i915]] *ERROR* Fault errors on pipe A: 0x00000080
Oct 14 18:41:49.056789 kernel:[   85.570687] [drm:gen8_de_irq_handler
[i915]] *ERROR* Fault errors on pipe A: 0x00000080
Oct 14 18:41:49.056885 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
Request device [0000:00:02.0] fault addr 4238d0a000, iommu reg =
ffff82c00021d000
Oct 14 18:41:49.056890 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
PTE Read access is not set

They repeat. In the log attached to
https://gitlab.freedesktop.org/drm/intel/-/issues/2576, they start at
"Oct 14 18:41:49.056589" and continue until I unplug the dock around
"Oct 14 18:41:54.801802".

I've also seen similar messages when attaching the laptop's HDMI port
to a 4k monitor. The eDP display by itself seems okay.

I tried Fedora 31 & 32 live images with intel_iommu=on, so no Xen, and
didn't see any errors.

This is a kernel & xen log with drm.debug=0x1e. It also includes some
application (glass) logging when it changes resolutions, which seems to
set off the DMA faults. 5500-igfx-messages-kern-xen-glass

Running Xen with iommu=no-igfx disables the IOMMU for the i915
graphics and no faults are reported. However, that breaks some other
machines (Dell Latitude 7200 and 5580), giving a black screen with:

Oct 10 13:24:37.022117 kernel:[   14.884759] i915 0000:00:02.0: Failed
to idle engines, declaring wedged!
Oct 10 13:24:37.022118 kernel:[   14.964794] i915 0000:00:02.0: Failed
to initialize GPU, declaring it wedged!

Any suggestions welcome.

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 19:31:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 19:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6984.18277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSmUi-0006ly-4h; Wed, 14 Oct 2020 19:31:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6984.18277; Wed, 14 Oct 2020 19:31:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSmUi-0006lr-1j; Wed, 14 Oct 2020 19:31:28 +0000
Received: by outflank-mailman (input) for mailman id 6984;
 Wed, 14 Oct 2020 19:31:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x6PD=DV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kSmUg-0006lk-EV
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 19:31:26 +0000
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 749f7d32-3890-4b02-a482-08344a67e536;
 Wed, 14 Oct 2020 19:31:25 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id f21so643647ljh.7
 for <xen-devel@lists.xenproject.org>; Wed, 14 Oct 2020 12:31:25 -0700 (PDT)
X-Inumbo-ID: 749f7d32-3890-4b02-a482-08344a67e536
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=/sQlbZjIbCkTXYZuFjyqLr22q7RrXyITM6aG6gZEwQM=;
        b=rNxdRPukI2aQeQhs80XN7ERXfk63ATqoDHdvm4y6fcPdYqooy40XCDG+mjGJ1quewV
         z7av/j5tnGBmHnG6b58/4GDRUNcAgs5ydoPZ9E65HRy+GSHu7NynekJ6n0UfxHO64cPP
         vpSR9bJAwVJwNKyx2CSNOfRFVfGK0NTYhHySkYSscCpeUkzlpa5aCXDOL8kPOSr3pCsv
         nuZdBCNCy3LfrfJsxa8DHTQOUKl5pgQL09i0xx1efDyg+ZdQ2v+CruYimYf1CV0V9A1J
         rCBtMBXat2nRuhcVRshcQQtmEWHiUqRPBZ4uyw/Cr/yv546I2OcqKsI36wi+SQw98Xov
         Y5OA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=/sQlbZjIbCkTXYZuFjyqLr22q7RrXyITM6aG6gZEwQM=;
        b=L5++wcSnv+VYE6GtKWALA+sIkfQgbovshEeJSUMfkqYnGvlf0wPT55haOfOvYxnsdx
         SOO6FSiykdhrNfdELeezFxHZ2zc5KCCdsQv/XQcnFIrsheIXuaIMdN8q15u8kO57s//c
         nUgahJG588YcXrtcsFDeoI6ndzLUxnbUn35AkV31Dd3Ytn2lPpHyU/a6bnoQIM3dvSTo
         7GO/RuBii/QUK+dJ12XJEIbzZsPHToeHk0rRflcVbJtfcbVG5a2XbNjkmX4jhPqVbT7O
         U5fIUEAAw/DH3MMkEmxld3T7Kq85iTe3SJ8awfW3M/bpraA9/+uIjhgmfp1BeYbQsrKB
         8FmA==
X-Gm-Message-State: AOAM532wkzkwYlaEtw9Jro+FChyqP/LSOH6tq5bTCdarWffKpLEwtgr5
	e0+iB6HCm3b38CFwyiVGlJaukHmzKwgCoIEh4ns=
X-Google-Smtp-Source: ABdhPJzxtIT/fRFaxw4POtgjFD2lu3ZKUpuH6kSY1wBzaGoIgufVPV+KqKbHVmfWxJGA9ncNLpIa+gGMVDT1QoDcz2Q=
X-Received: by 2002:a2e:96d2:: with SMTP id d18mr67974ljj.407.1602703884588;
 Wed, 14 Oct 2020 12:31:24 -0700 (PDT)
MIME-Version: 1.0
References: <20201014175342.152712-1-jandryuk@gmail.com> <4973553f-fad2-83b9-fa19-26370ced2c2d@citrix.com>
In-Reply-To: <4973553f-fad2-83b9-fa19-26370ced2c2d@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 14 Oct 2020 15:31:13 -0400
Message-ID: <CAKf6xpvg0sk4V_txu-RYhK8cO4kLNSm8dFjEOCm38phMSYSorg@mail.gmail.com>
Subject: Re: [SUSPECTED SPAM][PATCH 0/2] Remove Xen PVH dependency on PCI
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	"H. Peter Anvin" <hpa@zytor.com>, open list <linux-kernel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, Oct 14, 2020 at 2:04 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 14/10/2020 18:53, Jason Andryuk wrote:
> > A Xen PVH domain doesn't have a PCI bus or devices,
>
> [*] Yet.

:)

> > so it doesn't need PCI support built in.
>
> Untangling the dependences is a good thing, but eventually we plan to
> put an optional PCI bus back in, e.g. for SRIOV usecases.

Yes, and to be clear this change doesn't preclude including the PCI
code.  I was just looking to remove code from my VMs that aren't using
PCI devices.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 19:37:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 19:37:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6987.18289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSmaL-0006zV-R6; Wed, 14 Oct 2020 19:37:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6987.18289; Wed, 14 Oct 2020 19:37:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSmaL-0006zO-MC; Wed, 14 Oct 2020 19:37:17 +0000
Received: by outflank-mailman (input) for mailman id 6987;
 Wed, 14 Oct 2020 19:37:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aNf1=DV=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kSmaK-0006zJ-9M
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 19:37:16 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c630df3c-658a-4627-971f-6a46212fdd1c;
 Wed, 14 Oct 2020 19:37:14 +0000 (UTC)
X-Inumbo-ID: c630df3c-658a-4627-971f-6a46212fdd1c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602704235;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=1NquZVhKtHgIfSXsdXr9x93CbiYIcEFDLKC5Qh1YrSE=;
  b=B1WKLAp+2C0MEZVwyYSHRK9noA5XzEpn+IwwGfbhGxb8Z6uf97FI7No+
   NwJYl7hY0PWVwj6it/SsX78bHskgQ8WMvymwrnykSqp/FRi8/fCnakcyh
   LzE03BfrEgFh/envKPe/EBdzNASrk+tusYXl+I/4II12LwYHZgvu2hvnY
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: bNF8p36IWhINFjAdplWDNjC82HQGJ6T8P9UbrPg5fzoF5Xgs/BfMZs0U8j4V310+KGmyMNy+Pg
 oVvIXPELkru50vfUzuSMbMp8EiD/TP1QpNU1xhQZJRl6fLht9x+e7rdUL6rinuoIv+AxUZ8W99
 ffbKxuBvbCNIkVgPWLcSreojgAsqabHuhXMxm68GF9n9JWcKnFK+UXb8/uKOBhrYUB6JV6qOxP
 mtjGdMbN0h1lazGpmokmxrUhIo+9L49JhyP4AenCXqrY6xAqDTp5pT2tBOO6t8iwwJ+4PZ6tAn
 K88=
X-SBRS: 2.5
X-MesageID: 29351929
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,375,1596513600"; 
   d="scan'208";a="29351929"
Subject: Re: i915 dma faults on Xen
To: Jason Andryuk <jandryuk@gmail.com>, <intel-gfx@lists.freedesktop.org>,
	xen-devel <xen-devel@lists.xenproject.org>
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
Date: Wed, 14 Oct 2020 20:37:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 14/10/2020 20:28, Jason Andryuk wrote:
> Hi,
>
> Bug opened at https://gitlab.freedesktop.org/drm/intel/-/issues/2576
>
> I'm seeing DMA faults for the i915 graphics hardware on a Dell
> Latitude 5500. These were captured when I plugged into a Dell
> Thunderbolt dock with two DisplayPort monitors attached.  Xen 4.12.4
> staging and Linux 5.4.70 (and some earlier versions).
>
> Oct 14 18:41:49.056490 kernel:[   85.570347] [drm:gen8_de_irq_handler
> [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> Oct 14 18:41:49.056494 kernel:[   85.570395] [drm:gen8_de_irq_handler
> [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> Oct 14 18:41:49.056589 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> Request device [0000:00:02.0] fault addr 39b5845000, iommu reg =
> ffff82c00021d000
> Oct 14 18:41:49.056594 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> PTE Read access is not set
> Oct 14 18:41:49.056784 kernel:[   85.570668] [drm:gen8_de_irq_handler
> [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> Oct 14 18:41:49.056789 kernel:[   85.570687] [drm:gen8_de_irq_handler
> [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> Oct 14 18:41:49.056885 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> Request device [0000:00:02.0] fault addr 4238d0a000, iommu reg =
> ffff82c00021d000
> Oct 14 18:41:49.056890 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> PTE Read access is not set
>
> They repeat. In the log attached to
> https://gitlab.freedesktop.org/drm/intel/-/issues/2576, they start at
> "Oct 14 18:41:49.056589" and continue until I unplug the dock around
> "Oct 14 18:41:54.801802".
>
> I've also seen similar messages when attaching the laptop's HDMI port
> to a 4k monitor. The eDP display by itself seems okay.
>
> I tried Fedora 31 & 32 live images with intel_iommu=on, so no Xen, and
> didn't see any errors
>
> This is a kernel & xen log with drm.debug=0x1e. It also includes some
> application (glass) logging when it changes resolutions which seems to
> set off the DMA faults. 5500-igfx-messages-kern-xen-glass
>
> Running xen with iommu=no-igfx disables the iommu for the i915
> graphics and no faults are reported. However, that breaks some other
> devices (Dell Latitude 7200 and 5580) giving a black screen with:
>
> Oct 10 13:24:37.022117 kernel:[   14.884759] i915 0000:00:02.0: Failed
> to idle engines, declaring wedged!
> Oct 10 13:24:37.022118 kernel:[   14.964794] i915 0000:00:02.0: Failed
> to initialize GPU, declaring it wedged!
>
> Any suggestions welcome.

Presumably this is with a PV dom0.  What are 39b5845000 and 4238d0a000
in the machine memory map?

This smells like a missing RMRR in the ACPI tables.
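
Once the RMRR ranges are known from the Xen boot log, cross-checking a fault
address against them is trivial.  A minimal sketch; the example range below is
hypothetical, not taken from this report:

```python
# Check whether a DMAR fault address falls inside any RMRR range.  The
# (base, end) pairs would come from the RMRR regions reported in the Xen
# boot log; the range used here is purely illustrative.
def fault_in_rmrr(fault_addr: int, rmrrs) -> bool:
    """rmrrs: iterable of (base, end) physical addresses, inclusive."""
    return any(base <= fault_addr <= end for base, end in rmrrs)

rmrrs = [(0x3e2e0000, 0x3e2fffff)]          # hypothetical RMRR range
print(fault_in_rmrr(0x39b5845000, rmrrs))   # first fault address quoted above
```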

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 20:13:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 20:13:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6994.18303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSn8t-00021z-OJ; Wed, 14 Oct 2020 20:12:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6994.18303; Wed, 14 Oct 2020 20:12:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSn8t-00021s-L4; Wed, 14 Oct 2020 20:12:59 +0000
Received: by outflank-mailman (input) for mailman id 6994;
 Wed, 14 Oct 2020 20:12:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSn8s-0001zt-KT
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 20:12:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 911a5db6-0ed3-4e8e-b8be-3f5ff026f25b;
 Wed, 14 Oct 2020 20:12:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSn8k-000666-Cb; Wed, 14 Oct 2020 20:12:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSn8k-0003LV-2h; Wed, 14 Oct 2020 20:12:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSn8k-0006mv-2E; Wed, 14 Oct 2020 20:12:50 +0000
X-Inumbo-ID: 911a5db6-0ed3-4e8e-b8be-3f5ff026f25b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8tL5IE2lZ0nLG/h8jwkFbCQbTyK8m1I/+JDiNLzMYaY=; b=bokjxR4c8Q8JyAqBxOqggDjwv0
	HCabB7brNCHj1NgEGaXMnQRoUU3zFwVOCnUqChgBr2RDeua/RdW3YY3pDg+9qWWCC/o0PkkLpSGvC
	7LxM+wC3NRQfY/VBFSK+IHpL/iiMGaehywgW4uxJFnYCbV723AkUplGMJCRhgJbV7pGk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155799-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 155799: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:debian-install:fail:regression
    linux-5.4:test-amd64-amd64-pygrub:guest-start:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=85b0841aab15c12948af951d477183ab3df7de14
X-Osstest-Versions-That:
    linux=d22f99d235e13356521b374410a6ee24f50b65e6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 20:12:50 +0000

flight 155799 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155799/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvhv2-intel 12 debian-install        fail REGR. vs. 155534
 test-amd64-amd64-pygrub      13 guest-start              fail REGR. vs. 155534

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 155534
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop              fail never pass

version targeted for testing:
 linux                85b0841aab15c12948af951d477183ab3df7de14
baseline version:
 linux                d22f99d235e13356521b374410a6ee24f50b65e6

Last test of basis   155534  2020-10-07 22:08:49 Z    6 days
Testing same since   155799  2020-10-14 08:39:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Brown <aaron.f.brown@intel.com>
  Aaron Ma <aaron.ma@canonical.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Anand Jain <anand.jain@oracle.com>
  Anant Thazhemadam <anant.thazhemadam@gmail.com>
  Andrew Bowers <andrewx.bowers@intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antony Antony <antony.antony@secunet.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Aya Levin <ayal@mellanox.com>
  Aya Levin <ayal@nvidia.com>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Christoph Hellwig <hch@lst.de>
  Coly Li <colyli@suse.de>
  Cong Wang <xiyou.wangcong@gmail.com>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Davide Caratti <dcaratti@redhat.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dumitru Ceara <dceara@redhat.com>
  Eran Ben Elisha <eranbe@mellanox.com>
  Eric Dumazet <edumazet@google.com>
  Eric W. Biederman <ebiederm@xmission.com>
  Filipe Manana <fdmanana@suse.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Hans de Goede <hdegoede@redhat.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hugh Dickins <hughd@google.com>
  Ido Schimmel <idosch@nvidia.com>
  Ingo Molnar <mingo@kernel.org>
  Ivan Khoronzhuk <ikhoronz@cisco.com>
  Ivan Khoronzhuk <ivan.khoronzhuk@gmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Wang <jasowang@redhat.com>
  Jean Delvare <jdelvare@suse.de>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jiri Olsa <jolsa@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  John Fastabend <john.fastabend@gmail.com>
  Jon Hunter <jonathanh@nvidia.com>
  Kajol Jain <kjain@linux.ibm.com>
  Karol Herbst <kherbst@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Maor Gottlieb <maorg@nvidia.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Mark Gross <mgross@linux.intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Muchun Song <songmuchun@bytedance.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nicolas Belin <nbelin@baylibre.com>
  Nikolay Borisov <nborisov@suse.com>
  Nobuhiro Iwamatsu (CIP) <nobuhiro1.iwamatsu@toshiba.co.jp>
  Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
  Paolo Abeni <pabeni@redhat.com>
  Peilin Ye <yepeilin.cs@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petko Manolov <petkan@nucleusys.com>
  Philip Yang <Philip.Yang@amd.com>
  Qu Wenruo <wqu@suse.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rohit Maheshwari <rohitm@chelsio.com>
  Sabrina Dubroca <sd@queasysnail.net>
  Saeed Mahameed <saeedm@nvidia.com>
  Sasha Levin <sashal@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Soheil Hassas Yeganeh <soheil@google.com>
  Srikar Dronamraju <srikar@linux.vnet.ibm.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com>
  syzbot+69b804437cfec30deac3@syzkaller.appspotmail.com
  syzbot+abbc768b560c84d92fd3@syzkaller.appspotmail.com
  syzbot+b1bb342d1d097516cbda@syzkaller.appspotmail.com
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Tom Rix <trix@redhat.com>
  Tommi Rantala <tommi.t.rantala@nokia.com>
  Tonghao Zhang <xiangxia.m.yue@gmail.com>
  Tony Ambardar <Tony.Ambardar@gmail.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vaibhav Gupta <vaibhavgupta40@gmail.com>
  Vijay Balakrishna <vijayb@linux.microsoft.com>
  Vladimir Zapolskiy <vladimir@tuxera.com>
  Voon Weifeng <weifeng.voon@intel.com>
  Wilken Gottwalt <wilken.gottwalt@mailbox.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa@kernel.org>
  Xiongfeng Wang <wangxiongfeng2@huawei.com>
  Yinyin Zhu <zhuyinyin@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2856 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 20:15:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 20:15:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.6996.18315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSnB0-00029e-6G; Wed, 14 Oct 2020 20:15:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 6996.18315; Wed, 14 Oct 2020 20:15:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSnB0-00029W-1t; Wed, 14 Oct 2020 20:15:10 +0000
Received: by outflank-mailman (input) for mailman id 6996;
 Wed, 14 Oct 2020 20:15:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zgx5=DV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kSnAy-00029Q-Lq
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 20:15:08 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8b58ffe7-ec15-4feb-abf0-fb514009bf2b;
 Wed, 14 Oct 2020 20:15:07 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0D7922222C;
 Wed, 14 Oct 2020 20:15:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602706506;
	bh=HfQdFDVZQUaPuvfVjtn6hsp/GZ+uDvPgOt8tAPYgp5Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WzpZtKkI3PjF1lNyodxDUJZcY1YC8xqAoI2fGvFg1O2y8sg4UcoazPH2SmPto3aox
	 GtN0M3IZKMEOW3Y40acLrsamTF/vrmg91U0Qv7Hl8U68o91BUllNLNJb4fN0mDibEZ
	 stGy9mcjBDlFVs4P/MOEW1kwbakRTTKPCi0DahqY=
Date: Wed, 14 Oct 2020 13:15:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <George.Dunlap@eu.citrix.com>, 
    Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
    Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Trammell Hudson <hudson@trmm.net>
Subject: Re: [PATCH 2/2] EFI: further "need_to_free" adjustments
In-Reply-To: <a0e76e78-1f66-9825-b35b-86caed7da961@suse.com>
Message-ID: <alpine.DEB.2.21.2010141314560.10386@sstabellini-ThinkPad-T480s>
References: <dd26ba44-66e4-8870-3359-efe93ab28f64@suse.com> <a0e76e78-1f66-9825-b35b-86caed7da961@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 14 Oct 2020, Jan Beulich wrote:
> When processing "chain" directives, the previously loaded config file
> gets freed. This needs to be recorded accordingly such that no error
> path would try to free the same block of memory a 2nd time.
> 
> Furthermore, neither .addr nor .size being zero has any meaning towards
> the need to free an allocated chunk anymore. Drop (from read_file()) and
> replace (in Arm's efi_arch_use_config_file(), to sensibly retain the
> comment) respective assignments.
> 
> Fixes: 04be2c3a0678 ("efi/boot.c: add file.need_to_free")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> --- a/xen/arch/arm/efi/efi-boot.h
> +++ b/xen/arch/arm/efi/efi-boot.h
> @@ -591,7 +591,7 @@ static bool __init efi_arch_use_config_f
>  
>      fdt = lookup_fdt_config_table(SystemTable);
>      dtbfile.ptr = fdt;
> -    dtbfile.size = 0;  /* Config table memory can't be freed, so set size to 0 */
> +    dtbfile.need_to_free = false; /* Config table memory can't be freed. */
>      if ( !fdt || fdt_node_offset_by_compatible(fdt, 0, "multiboot,module") < 0 )
>      {
>          /*
> --- a/xen/common/efi/boot.c
> +++ b/xen/common/efi/boot.c
> @@ -601,10 +601,7 @@ static bool __init read_file(EFI_FILE_HA
>                                      PFN_UP(size), &file->addr);
>      }
>      if ( EFI_ERROR(ret) )
> -    {
> -        file->addr = 0;
>          what = what ?: L"Allocation";
> -    }
>      else
>      {
>          file->need_to_free = true;
> @@ -1271,8 +1268,11 @@ efi_start(EFI_HANDLE ImageHandle, EFI_SY
>              name.s = get_value(&cfg, "global", "chain");
>              if ( !name.s )
>                  break;
> -            efi_bs->FreePages(cfg.addr, PFN_UP(cfg.size));
> -            cfg.addr = 0;
> +            if ( cfg.need_to_free )
> +            {
> +                efi_bs->FreePages(cfg.addr, PFN_UP(cfg.size));
> +                cfg.need_to_free = false;
> +            }
>              if ( !read_file(dir_handle, s2w(&name), &cfg, NULL) )
>              {
>                  PrintStr(L"Chained configuration file '");
> 


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 20:33:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 20:33:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7001.18329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSnSk-0003vQ-Pc; Wed, 14 Oct 2020 20:33:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7001.18329; Wed, 14 Oct 2020 20:33:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSnSk-0003vJ-M8; Wed, 14 Oct 2020 20:33:30 +0000
Received: by outflank-mailman (input) for mailman id 7001;
 Wed, 14 Oct 2020 20:33:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSnSj-0003vD-4N
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 20:33:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66be07ac-6674-47d3-95c7-d8fb22355eb7;
 Wed, 14 Oct 2020 20:33:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSnSg-0006Xr-D9; Wed, 14 Oct 2020 20:33:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSnSg-00046L-3K; Wed, 14 Oct 2020 20:33:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSnSg-0001jr-2s; Wed, 14 Oct 2020 20:33:26 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Thbrt4xHrXTjbk+kZYvulJsRXgX1Ya8psX0g9ex29L8=; b=SjORmD70i9dt+NX/PyyfYFmDpm
	2fpon155RTZIIbeFPlo7h/3rn1crdaAl8SRgeS20EHauUlzw6up2pKJ23qm9gy/Rz0sOT+/89e9pi
	ukAL5g05aQhxjIOl/XXqJtnJV1bx3SudupeYGFHF25APFSYO3gmRO4VmD6XYXup9qvLQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155801-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155801: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5d0a827122cccd1f884faf75b2a065d88a58bce1
X-Osstest-Versions-That:
    ovmf=9380177354387f03c8ff9eadb7ae94aa453b9469
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 20:33:26 +0000

flight 155801 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155801/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5d0a827122cccd1f884faf75b2a065d88a58bce1
baseline version:
 ovmf                 9380177354387f03c8ff9eadb7ae94aa453b9469

Last test of basis   155765  2020-10-13 06:07:35 Z    1 days
Testing same since   155801  2020-10-14 09:11:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gao, Zhichao <zhichao.gao@intel.com>
  Zhichao Gao <zhichao.gao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9380177354..5d0a827122  5d0a827122cccd1f884faf75b2a065d88a58bce1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 20:42:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 20:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7006.18345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSnb2-0004qO-Su; Wed, 14 Oct 2020 20:42:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7006.18345; Wed, 14 Oct 2020 20:42:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSnb2-0004qH-P6; Wed, 14 Oct 2020 20:42:04 +0000
Received: by outflank-mailman (input) for mailman id 7006;
 Wed, 14 Oct 2020 20:42:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSnb2-0004pp-0l
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 20:42:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4ac64cb-00d3-474c-9efe-0569171a6c9e;
 Wed, 14 Oct 2020 20:41:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSnav-0006h8-Cs; Wed, 14 Oct 2020 20:41:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSnav-0004OJ-1Z; Wed, 14 Oct 2020 20:41:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSnav-0005PE-13; Wed, 14 Oct 2020 20:41:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSnb2-0004pp-0l
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 20:42:04 +0000
X-Inumbo-ID: b4ac64cb-00d3-474c-9efe-0569171a6c9e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qouX04pPMfJiGMibt/gR7xQtjAgg4Gohcb+uBSC+pGQ=; b=7ASReugEpcmvVTIHFfyQOab5pX
	kI3XZtMI+pLbjta/v3Ra7nuplM4igdXNvgFwTvijnkI6X20maoRwNoy0rgd+FD7vON+RekA48RVM/
	RY+Hq6EcmUchgVfjjIX2jt1yFNfm3SsTjz8/5WBOhJh108pfD5KdfuhGt860yJYtLVS4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155811-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155811: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f776e5fb3ee699745f6442ec8c47d0fa647e0575
X-Osstest-Versions-That:
    xen=884ef07f4f66b9d12fc4811047db95ba649db85c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 20:41:57 +0000

flight 155811 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155811/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 155805

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f776e5fb3ee699745f6442ec8c47d0fa647e0575
baseline version:
 xen                  884ef07f4f66b9d12fc4811047db95ba649db85c

Last test of basis   155805  2020-10-14 13:00:28 Z    0 days
Testing same since   155811  2020-10-14 18:03:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f776e5fb3ee699745f6442ec8c47d0fa647e0575
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed Oct 14 12:05:41 2020 +0200

    xen/arm: Document the erratum #853709 related to Cortex A72
    
    The Cortex-A72 erratum #853709 is the same as the Cortex-A57
    erratum #852523. As the latter is already worked around, we only
    need to update the documentation.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    [julieng: Reworded the commit message]
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 20:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 20:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7010.18359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSnfa-00053P-Gb; Wed, 14 Oct 2020 20:46:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7010.18359; Wed, 14 Oct 2020 20:46:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSnfa-00053I-Cr; Wed, 14 Oct 2020 20:46:46 +0000
Received: by outflank-mailman (input) for mailman id 7010;
 Wed, 14 Oct 2020 20:46:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zgx5=DV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kSnfY-00053D-K4
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 20:46:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 50ac3311-7e5c-4a73-98af-912f2ee08f40;
 Wed, 14 Oct 2020 20:46:44 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id CA7F022248;
 Wed, 14 Oct 2020 20:46:42 +0000 (UTC)
X-Inumbo-ID: 50ac3311-7e5c-4a73-98af-912f2ee08f40
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602708403;
	bh=XCqiykSQEa9rXNobIGoRiRoA8bmB2pG3qVdERVEa67E=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jwOyaaBvNYUb4WHMvTvyzr2JjO/xaNRdUOMiFVUB9Oc6S6OezD5mRi0WNMLZ7F2yP
	 SrwF3wWnrKzhDMU81jXNjACy4RC1irvsOfqSDjVkAhQrcsuosaqD7JoKbuT33r6caS
	 uBVUgQf36ipYYV/7JJVfkc3Q74Lq1e7dnTWkyAwA=
Date: Wed, 14 Oct 2020 13:46:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Julien Grall <julien@xen.org>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
In-Reply-To: <5A45EDA3-B01B-4AC3-B2CB-77CF90D024AD@arm.com>
Message-ID: <alpine.DEB.2.21.2010141346130.10386@sstabellini-ThinkPad-T480s>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com> <c22235d1-9124-74f2-5856-58f7f44dc0b7@xen.org> <5A45EDA3-B01B-4AC3-B2CB-77CF90D024AD@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-897259223-1602708402=:10386"


--8323329-897259223-1602708402=:10386
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 14 Oct 2020, Bertrand Marquis wrote:
> > On 14 Oct 2020, at 12:11, Julien Grall <julien@xen.org> wrote:
> > 
> > Hi Bertrand,
> > 
> > On 14/10/2020 11:41, Bertrand Marquis wrote:
> >> When a Cortex A57 processor is affected by CPU erratum 832075, a guest
> >> not implementing the workaround for it could deadlock the system.
> >> Add a warning during boot informing the user that only trusted guests
> >> should be executed on the system.
> > 
> > I think we should update SUPPORT.MD to say we will not security support those processors. Stefano, what do you think?
> 
> That could make sense, yes.
> If Stefano confirms, then I can do this in a v2.

Yes, I confirm

 
> >> An equivalent warning is already given to the user by KVM on cores
> >> affected by this erratum.
> > 
> > I don't seem to find the warning in Linux. Do you have a link?
> 
> sure:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=abf532cceaca9c21a148498091f87de1b8ae9b49
> 
> > 
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >> ---
> >>  xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
> >>  1 file changed, 21 insertions(+)
> >> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> >> index 6c09017515..8f9ab6dde1 100644
> >> --- a/xen/arch/arm/cpuerrata.c
> >> +++ b/xen/arch/arm/cpuerrata.c
> >> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
> >>    #endif
> >>  +#ifdef CONFIG_ARM64_ERRATUM_832075
> >> +
> >> +static int warn_device_load_acquire_errata(void *data)
> >> +{
> >> +    static bool warned = false;
> >> +
> >> +    if ( !warned )
> >> +    {
> >> +        warning_add("This CPU is affected by the erratum 832075.\n"
> >> +                    "Guests without required CPU erratum workarounds\n"
> >> +                    "can deadlock the system!\n"
> >> +                    "Only trusted guests should be used on this system.\n");
> >> +        warned = true;
> > 
> > I was going to suggest using WARN_ON_ONCE(), but it looks like it never made it upstream :(.
> 
> I did this as it was done in the SMC warning function (that's why I pushed a patch for it).
> 
> Cheers
> Bertrand
> 
> > 
> >> +    }
> >> +
> >> +    return 0;
> >> +}
> >> +
> >> +#endif
> >> +
> >>  #ifdef CONFIG_ARM_SSBD
> >>    enum ssbd_state ssbd_state = ARM_SSBD_RUNTIME;
> >> @@ -419,6 +439,7 @@ static const struct arm_cpu_capabilities arm_errata[] = {
> >>          .capability = ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE,
> >>          MIDR_RANGE(MIDR_CORTEX_A57, 0x00,
> >>                     (1 << MIDR_VARIANT_SHIFT) | 2),
> >> +        .enable = warn_device_load_acquire_errata,
> >>      },
> >>  #endif
> >>  #ifdef CONFIG_ARM64_ERRATUM_834220
> > 
> > -- 
> > Julien Grall
> 
> 
--8323329-897259223-1602708402=:10386--


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 21:16:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 21:16:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7016.18374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSo7i-0007fn-Uz; Wed, 14 Oct 2020 21:15:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7016.18374; Wed, 14 Oct 2020 21:15:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSo7i-0007fg-RZ; Wed, 14 Oct 2020 21:15:50 +0000
Received: by outflank-mailman (input) for mailman id 7016;
 Wed, 14 Oct 2020 21:15:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zgx5=DV=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kSo7h-0007fb-LP
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 21:15:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 60e7a1a1-c758-4be6-8141-81945d08f9ae;
 Wed, 14 Oct 2020 21:15:48 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A70A82173E;
 Wed, 14 Oct 2020 21:15:47 +0000 (UTC)
X-Inumbo-ID: 60e7a1a1-c758-4be6-8141-81945d08f9ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602710148;
	bh=zJ91IA8L6eEdeguMdTlAJYKZ7TdxuGZd5zRwZaypAe0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GnSvAA0cJE53FVDYImP9/aYZ0KgE/Y6aHxcQapDphxIiUaGb/2COl/utHnPISIU+9
	 ZY3BMFdfI9fdDP8tKYWPgQ7gRpBYrpfmHa4my/lWo3VQOcelf0I8Q2irxvf+oCX+kB
	 wMvQelx8WqX3eG0dKIKsD4t4zmfAw83ZywM1TwEA=
Date: Wed, 14 Oct 2020 14:15:47 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
In-Reply-To: <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
Message-ID: <alpine.DEB.2.21.2010141350170.10386@sstabellini-ThinkPad-T480s>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com> <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com> <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com> <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 14 Oct 2020, Julien Grall wrote:
> On 14/10/2020 17:03, Bertrand Marquis wrote:
> > > On 14 Oct 2020, at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> > > 
> > > On 14/10/2020 11:41, Bertrand Marquis wrote:
> > > > When a Cortex A57 processor is affected by CPU erratum 832075, a guest
> > > > not implementing the workaround for it could deadlock the system.
> > > > Add a warning during boot informing the user that only trusted guests
> > > > should be executed on the system.
> > > > An equivalent warning is already given to the user by KVM on cores
> > > > affected by this erratum.
> > > > 
> > > > Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> > > > ---
> > > > xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
> > > > 1 file changed, 21 insertions(+)
> > > > 
> > > > diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> > > > index 6c09017515..8f9ab6dde1 100644
> > > > --- a/xen/arch/arm/cpuerrata.c
> > > > +++ b/xen/arch/arm/cpuerrata.c
> > > > @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
> > > > 
> > > > #endif
> > > > 
> > > > +#ifdef CONFIG_ARM64_ERRATUM_832075
> > > > +
> > > > +static int warn_device_load_acquire_errata(void *data)
> > > > +{
> > > > +    static bool warned = false;
> > > > +
> > > > +    if ( !warned )
> > > > +    {
> > > > +        warning_add("This CPU is affected by the erratum 832075.\n"
> > > > +                    "Guests without required CPU erratum workarounds\n"
> > > > +                    "can deadlock the system!\n"
> > > > +                    "Only trusted guests should be used on this system.\n");
> > > > +        warned = true;
> > > 
> > > This is an antipattern, which probably wants fixing elsewhere as well.
> > > 
> > > warning_add() is __init.  It's not legitimate to call from a non-init
> > > function, and a less useless build system would have modpost object to it.
> > > 
> > > The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
> > > but this provides no safety at all.
> > > 
> > > 
> > > What warning_add() actually does is queue messages for some point near
> > > the end of boot.  It's not clear that this is even a clever thing to do.
> > > 
> > > I'm very tempted to suggest a blanket change to printk_once().
> > 
> > If this is needed, then this could be done in another series?
>
> The callback ->enable() will be called when a CPU is onlined/offlined. So this
> is going to be required if you plan to support CPU hotplug or suspend/resume.
> 
> > Would be good to keep this patch as purely handling the erratum.

My preference would be to keep this patch small with just the errata,
maybe using a simple printk_once as Andrew and Julien discussed.

There is another instance of warning_add potentially being called
outside __init in xen/arch/arm/cpuerrata.c:
enable_smccc_arch_workaround_1. So if you are up for it, it would be
good to produce a patch to fix that too.


> In the case of this patch, how about moving the warning_add() in
> enable_errata_workarounds()?
> 
> By then we should know all the errata present on your platform. All CPUs
> onlined afterwards (i.e. at runtime) should always abide by the set discovered
> during boot.

If I understand your suggestion correctly, it would work for
warn_device_load_acquire_errata, because it is just a warning, but it
would not work for enable_smccc_arch_workaround_1, because there is
actually a call to be made there.

Maybe it would be simpler to use printk_once in both cases? I don't have
a strong preference either way.


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 22:44:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 22:44:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7022.18389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSpVP-0006ta-F9; Wed, 14 Oct 2020 22:44:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7022.18389; Wed, 14 Oct 2020 22:44:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSpVP-0006tT-C8; Wed, 14 Oct 2020 22:44:23 +0000
Received: by outflank-mailman (input) for mailman id 7022;
 Wed, 14 Oct 2020 22:44:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSpVO-0006su-LE
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 22:44:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 112d1c05-adbe-4123-a64f-f01ca8b0afb9;
 Wed, 14 Oct 2020 22:44:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSpVG-0000ln-4Q; Wed, 14 Oct 2020 22:44:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSpVF-0001Fg-Q8; Wed, 14 Oct 2020 22:44:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSpVF-0002dA-PN; Wed, 14 Oct 2020 22:44:13 +0000
X-Inumbo-ID: 112d1c05-adbe-4123-a64f-f01ca8b0afb9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tx+9jr6Dsc7ycSUic8YvWMGcrMtyxloKyj8tKVyuKZo=; b=iHt+a5dE1QuKSg1oiYBGlD2uoR
	6y4mMlSsV1ynMHVh+DCbdvvKw/0qOy43OKmdaXwkZAOJrJCpUf/G7Mb/uydBAb9R+8CEkT1GtrYL0
	4Y2MKxTdbnLDqaiucp6WkhmBnOWL+EpolnP4aGUp1jXVcHNwmEppGkPGb9tnJ50h0cSo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155802-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155802: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=96292515c07e3a99f5a29540ed2f257b1ff75111
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 22:44:13 +0000

flight 155802 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155802/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 155785

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                96292515c07e3a99f5a29540ed2f257b1ff75111
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   55 days
Failing since        152659  2020-08-21 14:07:39 Z   54 days   94 attempts
Testing same since   155785  2020-10-13 22:39:07 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45190 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 14 23:21:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 14 Oct 2020 23:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7043.18423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSq5J-00026L-ON; Wed, 14 Oct 2020 23:21:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7043.18423; Wed, 14 Oct 2020 23:21:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSq5J-00026E-L4; Wed, 14 Oct 2020 23:21:29 +0000
Received: by outflank-mailman (input) for mailman id 7043;
 Wed, 14 Oct 2020 23:21:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSq5H-000269-P9
 for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 23:21:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 481c11cf-106a-4008-b9c0-4f2d10e61224;
 Wed, 14 Oct 2020 23:21:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSq5F-0001XH-Ji; Wed, 14 Oct 2020 23:21:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSq5F-0002Xt-7J; Wed, 14 Oct 2020 23:21:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSq5F-0006iA-6n; Wed, 14 Oct 2020 23:21:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mg0A=DV=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSq5H-000269-P9
	for xen-devel@lists.xenproject.org; Wed, 14 Oct 2020 23:21:27 +0000
X-Inumbo-ID: 481c11cf-106a-4008-b9c0-4f2d10e61224
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 481c11cf-106a-4008-b9c0-4f2d10e61224;
	Wed, 14 Oct 2020 23:21:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Rf5d2ak+R1dXzNSlSFRLWuSqrBqHA1E/Q3sVeC/1ORM=; b=tpWxYuiBVvPtm88aQ5LpPGjCgq
	Ue7m4dEhBbUIBrAF2QMjeqt3w7ux1gU1dfFOrplHp3ByIzVNkwPJVdiTNzgVraWEl6oNxE4MiTlbq
	zXZuzOl3P801IbNRBALk9IM33svz2QMcKesycVZH2o/NayAghPGjIWRKqIkjCw1NbAJ4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSq5F-0001XH-Ji; Wed, 14 Oct 2020 23:21:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSq5F-0002Xt-7J; Wed, 14 Oct 2020 23:21:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSq5F-0006iA-6n; Wed, 14 Oct 2020 23:21:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155818-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155818: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f776e5fb3ee699745f6442ec8c47d0fa647e0575
X-Osstest-Versions-That:
    xen=884ef07f4f66b9d12fc4811047db95ba649db85c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 14 Oct 2020 23:21:25 +0000

flight 155818 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155818/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 155805

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f776e5fb3ee699745f6442ec8c47d0fa647e0575
baseline version:
 xen                  884ef07f4f66b9d12fc4811047db95ba649db85c

Last test of basis   155805  2020-10-14 13:00:28 Z    0 days
Testing same since   155811  2020-10-14 18:03:04 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f776e5fb3ee699745f6442ec8c47d0fa647e0575
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed Oct 14 12:05:41 2020 +0200

    xen/arm: Document the erratum #853709 related to Cortex A72
    
    The Cortex-A72 erratum #853709 is the same as the Cortex-A57
    erratum #852523. As the latter is already worked around, we only
    need to update the documentation.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    [julieng: Reworded the commit message]
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 00:38:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 00:38:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7049.18440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSrHf-0000KL-6X; Thu, 15 Oct 2020 00:38:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7049.18440; Thu, 15 Oct 2020 00:38:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSrHf-0000KE-3f; Thu, 15 Oct 2020 00:38:19 +0000
Received: by outflank-mailman (input) for mailman id 7049;
 Thu, 15 Oct 2020 00:38:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0U26=DW=protonmail.com=dylangerdaly@srs-us1.protection.inumbo.net>)
 id 1kSrHd-0000K9-5n
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 00:38:17 +0000
Received: from mail-40131.protonmail.ch (unknown [185.70.40.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e283bce3-2633-4cf9-b245-c34d745a5b1b;
 Thu, 15 Oct 2020 00:38:14 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=0U26=DW=protonmail.com=dylangerdaly@srs-us1.protection.inumbo.net>)
	id 1kSrHd-0000K9-5n
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 00:38:17 +0000
X-Inumbo-ID: e283bce3-2633-4cf9-b245-c34d745a5b1b
Received: from mail-40131.protonmail.ch (unknown [185.70.40.131])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e283bce3-2633-4cf9-b245-c34d745a5b1b;
	Thu, 15 Oct 2020 00:38:14 +0000 (UTC)
Date: Thu, 15 Oct 2020 00:38:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=protonmail.com;
	s=protonmail; t=1602722292;
	bh=/fqAIMxPWPvzaXeK8gBlRXsB72ne3oC35pwC7ZnmDbM=;
	h=Date:To:From:Reply-To:Subject:From;
	b=MNyNJ6f163HsWWNZga8uncBJHsFuX2ma55N+Z33YlxQQA+XqnrlgV3Z45hLFSJ1AA
	 Z7T7uG+YN3+O2V4VrMU1rdBETDjAbWs0tndhRlniuL8V2BCC0RcvTcCgxWCPPJihje
	 u/cApWv+/BF59J4qoyI1IT+iyORuWshCFkaUa4W4=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Dylanger Daly <dylangerdaly@protonmail.com>
Reply-To: Dylanger Daly <dylangerdaly@protonmail.com>
Subject: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
Message-ID: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
MIME-Version: 1.0
Content-Type: multipart/alternative;
 boundary="b1_0o8C518sVj5uzonzPIXYDTCbYqkejcC6SFvq1G2c"
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM,HTML_MESSAGE
	shortcircuit=no autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

This is a multi-part message in MIME format.

--b1_0o8C518sVj5uzonzPIXYDTCbYqkejcC6SFvq1G2c
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Hi All,

I'm currently using Xen 4.14 (Qubes 4.1 OS) on a Ryzen 7 4750U PRO. By default I'll experience softlocks where the mouse, for example, will jolt from time to time; in this state it's not usable.

Adding `dom0_max_vcpus=1 dom0_vcpus_pin` to Xen's CMDLINE results in no more jolting; however, performance isn't what it should be on an 8 core CPU, and softlocks are still a problem within domUs, with any sort of UI animation for example.

Reverting [this commit (8e2aa76dc1670e82eaa15683353853bc66bf54fc)](https://github.com/xen-project/xen/commit/8e2aa76dc1670e82eaa15683353853bc66bf54fc) results in even worse performance with or without the above changes to CMDLINE, and it's not usable at all.

Does anyone have any pointers?

Cheers



--b1_0o8C518sVj5uzonzPIXYDTCbYqkejcC6SFvq1G2c--



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 01:00:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 01:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7056.18459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSrdI-00009l-4h; Thu, 15 Oct 2020 01:00:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7056.18459; Thu, 15 Oct 2020 01:00:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSrdI-00008t-17; Thu, 15 Oct 2020 01:00:40 +0000
Received: by outflank-mailman (input) for mailman id 7056;
 Thu, 15 Oct 2020 01:00:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TsEj=DW=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1kSrdG-000898-Uy
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 01:00:38 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e5c3bc9-bca5-4c95-a360-780f87990fdd;
 Thu, 15 Oct 2020 01:00:35 +0000 (UTC)
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 14 Oct 2020 18:00:33 -0700
Received: from dwillia2-desk3.jf.intel.com (HELO
 dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25])
 by orsmga005-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 14 Oct 2020 18:00:33 -0700
X-Inumbo-ID: 6e5c3bc9-bca5-4c95-a360-780f87990fdd
IronPort-SDR: 6qH4PBgr3GRVJiM562Z+3wdm3ksv29kwdFYVXIeFB34y7aGAXHj3S6vxL7dE0QuWxuu6g2fBmo
 OhmMzFBNdjCA==
X-IronPort-AV: E=McAfee;i="6000,8403,9774"; a="166348621"
X-IronPort-AV: E=Sophos;i="5.77,376,1596524400"; 
   d="scan'208";a="166348621"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: KS3Q2MPMmR4hzA6BX3wFA5W4ZTgE4BVgWBtN9d6K0JraXbAetqre2n+FMuEAQ3xYV6F7MwO/RB
 ken2QsQHin7A==
X-IronPort-AV: E=Sophos;i="5.77,376,1596524400"; 
   d="scan'208";a="531053664"
Subject: [PATCH 0/2] device-dax subdivision v5 to v6 fixups
From: Dan Williams <dan.j.williams@intel.com>
To: linux-kernel@vger.kernel.org
Cc: David Hildenbrand <david@redhat.com>, Ira Weiny <ira.weiny@intel.com>,
 Dave Jiang <dave.jiang@intel.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Jonathan Cameron <Jonathan.Cameron@huawei.com>,
 Brice Goglin <Brice.Goglin@inria.fr>, Vishal Verma <vishal.l.verma@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Jia He <justin.he@arm.com>, Andrew Morton <akpm@linux-foundation.org>,
 Dave Hansen <dave.hansen@linux.intel.com>, Juergen Gross <jgross@suse.com>,
 Pavel Tatashin <pasha.tatashin@soleen.com>,
 Joao Martins <joao.m.martins@oracle.com>, akpm@linux-foundation.org,
 linux-nvdimm@lists.01.org, linux-mm@kvack.org
Bcc: dan.j.williams@intel.com
Date: Wed, 14 Oct 2020 17:42:04 -0700
Message-ID: <160272252400.3136502.13635752844548960833.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Hi,

The v5 revision of the device-dax subdivision series landed upstream, but
it missed some of the late-breaking fixups from v6 [1]. The Xen fixup is
cosmetic; the kmem one addresses a functional problem. I will carry the
kmem fix in a device-dax follow-on pull request post-rc1. The Xen fixup
can go through the Xen tree at its own pace.

My thanks to Andrew for wrangling the thrash up to v5, and my apologies
to Andrew et al for not highlighting this gap sooner.

[1]: http://lore.kernel.org/r/160196728453.2166475.12832711415715687418.stgit@dwillia2-desk3.amr.corp.intel.com

---

Dan Williams (2):
      device-dax/kmem: Fix resource release
      xen/unpopulated-alloc: Consolidate pgmap manipulation


 drivers/dax/kmem.c              |   48 ++++++++++++++++++++++++++++-----------
 drivers/xen/unpopulated-alloc.c |   14 ++++++-----
 2 files changed, 41 insertions(+), 21 deletions(-)

base-commit: 4da9af0014b51c8b015ed8c622440ef28912efe6


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 01:00:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 01:00:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7057.18471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSrdT-0004Jv-Ds; Thu, 15 Oct 2020 01:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7057.18471; Thu, 15 Oct 2020 01:00:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSrdT-0004J9-8p; Thu, 15 Oct 2020 01:00:51 +0000
Received: by outflank-mailman (input) for mailman id 7057;
 Thu, 15 Oct 2020 01:00:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TsEj=DW=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1kSrdR-0003ZY-Fu
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 01:00:49 +0000
Received: from mga01.intel.com (unknown [192.55.52.88])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b64d8568-93cd-4626-a615-f38165d792c5;
 Thu, 15 Oct 2020 01:00:45 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 14 Oct 2020 18:00:45 -0700
Received: from dwillia2-desk3.jf.intel.com (HELO
 dwillia2-desk3.amr.corp.intel.com) ([10.54.39.25])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 14 Oct 2020 18:00:44 -0700
X-Inumbo-ID: b64d8568-93cd-4626-a615-f38165d792c5
IronPort-SDR: Rari7y453Xph2I6nvivgDgSjF7Xnmm9pgy25ZHIn9Z5icVjSbcgm//I0wUlwfrk4KQeQaZnFZD
 KjBIkWhnQl4w==
X-IronPort-AV: E=McAfee;i="6000,8403,9774"; a="183763644"
X-IronPort-AV: E=Sophos;i="5.77,376,1596524400"; 
   d="scan'208";a="183763644"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: FfdsxjRqp3q4f4etEQRl1TI5VTynO18vfKev9t2iOTvW0tZxScWrNfuysGf8rx03gz/Y/v7tG4
 l/NGqn36Tp+w==
X-IronPort-AV: E=Sophos;i="5.77,376,1596524400"; 
   d="scan'208";a="521643942"
Subject: [PATCH 2/2] xen/unpopulated-alloc: Consolidate pgmap manipulation
From: Dan Williams <dan.j.williams@intel.com>
To: linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Morton <akpm@linux-foundation.org>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, vishal.l.verma@intel.com,
 dave.hansen@linux.intel.com, akpm@linux-foundation.org,
 linux-nvdimm@lists.01.org, linux-mm@kvack.org
Bcc: dan.j.williams@intel.com
Date: Wed, 14 Oct 2020 17:42:14 -0700
Message-ID: <160272253442.3136502.16683842453317773487.stgit@dwillia2-desk3.amr.corp.intel.com>
In-Reply-To: <160272252400.3136502.13635752844548960833.stgit@dwillia2-desk3.amr.corp.intel.com>
References: <160272252400.3136502.13635752844548960833.stgit@dwillia2-desk3.amr.corp.intel.com>
User-Agent: StGit/0.18-3-g996c
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Clean up fill_list() to keep all of the pgmap manipulations at a single
location in the function, and update the error-exit unwind path to match
the new acquisition order.

Link: http://lore.kernel.org/r/6186fa28-d123-12db-6171-a75cb6e615a5@oracle.com

Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <xen-devel@lists.xenproject.org>
Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
 drivers/xen/unpopulated-alloc.c |   14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index 8c512ea550bb..75ab5de99868 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -27,11 +27,6 @@ static int fill_list(unsigned int nr_pages)
 	if (!res)
 		return -ENOMEM;
 
-	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
-	if (!pgmap)
-		goto err_pgmap;
-
-	pgmap->type = MEMORY_DEVICE_GENERIC;
 	res->name = "Xen scratch";
 	res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
 
@@ -43,6 +38,11 @@ static int fill_list(unsigned int nr_pages)
 		goto err_resource;
 	}
 
+	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
+	if (!pgmap)
+		goto err_pgmap;
+
+	pgmap->type = MEMORY_DEVICE_GENERIC;
 	pgmap->range = (struct range) {
 		.start = res->start,
 		.end = res->end,
@@ -91,10 +91,10 @@ static int fill_list(unsigned int nr_pages)
 	return 0;
 
 err_memremap:
-	release_resource(res);
-err_resource:
 	kfree(pgmap);
 err_pgmap:
+	release_resource(res);
+err_resource:
 	kfree(res);
 	return ret;
 }



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 01:01:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 01:01:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7058.18483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSreW-0004pk-Nu; Thu, 15 Oct 2020 01:01:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7058.18483; Thu, 15 Oct 2020 01:01:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSreW-0004pB-K4; Thu, 15 Oct 2020 01:01:56 +0000
Received: by outflank-mailman (input) for mailman id 7058;
 Thu, 15 Oct 2020 01:01:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G4/f=DW=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kSreV-0004CO-3T
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 01:01:55 +0000
Received: from out1-smtp.messagingengine.com (unknown [66.111.4.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9241f8c4-067c-4d17-b3b5-6a5452a79212;
 Thu, 15 Oct 2020 01:01:54 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 0F0D65C025E;
 Wed, 14 Oct 2020 21:01:54 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Wed, 14 Oct 2020 21:01:54 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id C13313064680;
 Wed, 14 Oct 2020 21:01:52 -0400 (EDT)
X-Inumbo-ID: 9241f8c4-067c-4d17-b3b5-6a5452a79212
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=G0Pr9n
	ntlC0D3sjshW6GfCqIRmEwByZ7HF0jalDeOv0=; b=fxRqBEyE3bghPSDBRdggqh
	Hm7teyboKGQvCcWILupKJPYGqESFh8iw/dABMsiqP3hCWOKX9ynQApUeDLFwPb2l
	EQVxWpUfjULr8JJX2zXPDW2CY9pnftuwPCW+xxZzGiFmOMao1/MwygS5Q12IRERX
	awDl/dd4df4quXcxrTkQhR+5jN0KSUk6ZJ1iiTFveFiDTC4kphLCPmfrjZZUZLc8
	kQh7HANOj8GyX2YwkMI5qcnJxIj/+nLlaN+oCTTIqhsnGwKTXRX2uaeH5aBpaE1I
	HCOu7nJRN7VVke1XJWkctrALhcyzQyaXkzdirrsfYwJ02RIaDv2efs0mqD2jWdjA
	==
X-ME-Sender: <xms:gZ-HX6GjDS1o4FutSUJJq2Ybg02a0oui-MFUCrtRhcfNHn2xLNsawA>
    <xme:gZ-HX7VtMy5zvHgkut_7Q090rjbX0BXL-WLFGxspOZrGW8W6Z2iwRiZ3exMDRkI-e
    Ovmhbu3no9SmQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedriedvgdegudcutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteevffei
    gffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppeelud
    drieegrddujedtrdekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
    ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:gZ-HX0KssvB_IR3yxMyKkcOmP1SSCgQHRCp1DApKjC6H7NfHWLn22A>
    <xmx:gZ-HX0EQklAIyhJCZd0eH7pM3gCwDruciI0G8LYjaNvEyCvTBbt2kQ>
    <xmx:gZ-HXwUW-x7l76Siw3BjU3XxdqMI2u2A0oF7MuenON1R1fklr0DCLw>
    <xmx:gp-HX5dP3AvOmoFCuzgMmRGhHCoBcU2UkXXxTb15J2V1ATu6FFVUQQ>
Date: Thu, 15 Oct 2020 03:01:48 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [SECOND RESEND] [PATCH] tools/python: Pass linker to Python
 build process
Message-ID: <20201015010148.GQ151766@mail-itl>
References: <20201012011139.GA82449@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="THYEXwetZJOK3OLY"
Content-Disposition: inline
In-Reply-To: <20201012011139.GA82449@mattapan.m5p.com>


--THYEXwetZJOK3OLY
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
Subject: Re: [SECOND RESEND] [PATCH] tools/python: Pass linker to Python
 build process

On Sun, Oct 11, 2020 at 06:11:39PM -0700, Elliott Mitchell wrote:
> Unexpectedly the environment variable which needs to be passed is
> $LDSHARED and not $LD.  Otherwise Python may find the build `ld` instead
> of the host `ld`.
> 
> Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
> it can load at runtime, not executables.
> 
> This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
> to $LDFLAGS which breaks many linkers.
> 
> Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>

Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

> ---
> This is now the *third* time this has been sent to the list.  Mark Pryor
> has tested and confirms Python cross-building is working.  There is one
> wart left which I'm unsure of the best approach for.
> 
> Having looked around a bit, I believe this is a Python 2/3 compatibility
> issue.  "distutils" for Python 2 likely lacked a separate $LDSHARED or
> $LD variable, whereas Python 3 does have this.  Alas this is pointless
> due to the above (unless you can cause distutils.py to do the final link
> step separately).
> ---
>  tools/pygrub/Makefile | 9 +++++----
>  tools/python/Makefile | 9 +++++----
>  2 files changed, 10 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
> index 3063c4998f..37b2146214 100644
> --- a/tools/pygrub/Makefile
> +++ b/tools/pygrub/Makefile
> @@ -3,20 +3,21 @@ XEN_ROOT = $(CURDIR)/../..
>  include $(XEN_ROOT)/tools/Rules.mk
>  
>  PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
> -PY_LDFLAGS = $(LDFLAGS) $(APPEND_LDFLAGS)
> +PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
>  INSTALL_LOG = build/installed_files.txt
>  
>  .PHONY: all
>  all: build
>  .PHONY: build
>  build:
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
> +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
>  
>  .PHONY: install
>  install: all
>  	$(INSTALL_DIR) $(DESTDIR)/$(bindir)
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
> -		setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
> +		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
> +		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>  		 --root="$(DESTDIR)" --install-scripts=$(LIBEXEC_BIN) --force
>  	set -e; if [ $(bindir) != $(LIBEXEC_BIN) -a \
>  	             "`readlink -f $(DESTDIR)/$(bindir)`" != \
> diff --git a/tools/python/Makefile b/tools/python/Makefile
> index 8d22c03676..b675f5b4de 100644
> --- a/tools/python/Makefile
> +++ b/tools/python/Makefile
> @@ -5,19 +5,20 @@ include $(XEN_ROOT)/tools/Rules.mk
>  all: build
>  
>  PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
> -PY_LDFLAGS = $(LDFLAGS) $(APPEND_LDFLAGS)
> +PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
>  INSTALL_LOG = build/installed_files.txt
>  
>  .PHONY: build
>  build:
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" $(PYTHON) setup.py build
> +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
>  
>  .PHONY: install
>  install:
>  	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
>  
> -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
> -		setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
> +		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
> +		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
>  		--root="$(DESTDIR)" --force
>  
>  	$(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--THYEXwetZJOK3OLY
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl+Hn30ACgkQ24/THMrX
1yxhaQf/awop0fQWRMmFPdIBwi05jVpCYh875R2Mwx9MLy1KTRyVBp0fRU5wwsTh
MpYvnvGgudQQ5TPk1zyUnmStqojuKSeoviaCNHSu+5bUqd/dh3X9UI/MKtM5wrvA
gyDprYFa3Mg7FSQga8P5YOgGBN01HxUk8X5lWB9D6wvesWh7W4tlndXdWlX+RBzk
Jumrjaak2J/vhp9SbHnitDkHI73DtrxXzl0+LDcWja9xKc7pqddywG6OTImHYSZJ
fvHKe+5784/u8YNXD6+MUqFWRzJzdcYUFhufyev8qKY3kYKQZRx/YdbWJWSWm1jj
Ep5GfE6vn4nhBQ0eHE8jG4a+iJNFgA==
=YZgd
-----END PGP SIGNATURE-----

--THYEXwetZJOK3OLY--
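The patch discussed above hinges on how distutils picks its link command: shared objects are linked with the LDSHARED configuration value (overridable from the environment, as the patched Makefiles do), not with LD. A small sketch using the stdlib sysconfig module shows the build-time defaults; the exact strings vary by platform and Python build, so treat the output shown here as illustrative only:

```python
import os
import sysconfig

# distutils/setuptools build extension modules with the LDSHARED command
# from Python's build-time configuration; environment variables such as
# CC/LDSHARED/LDFLAGS (as set in the xen Makefiles) override these
# defaults at build time via distutils.sysconfig.customize_compiler().
cc = sysconfig.get_config_var("CC")
ldshared = sysconfig.get_config_var("LDSHARED")

print("CC       =", cc)
print("LDSHARED =", ldshared)

# On POSIX CPython builds LDSHARED normally begins with the C compiler
# plus a shared-library flag (e.g. "gcc ... -shared"), which is why the
# patch sets LDSHARED="$(CC)" rather than pointing it at a bare `ld`.
if os.name == "posix":
    assert isinstance(ldshared, str) and ldshared
```

This also illustrates the cross-build pitfall from the commit message: if LDSHARED is left at its configured default, the link step uses whatever toolchain Python was built with, not the one the Xen build selects.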


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 01:03:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 01:03:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7063.18495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSrfg-0003ze-1D; Thu, 15 Oct 2020 01:03:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7063.18495; Thu, 15 Oct 2020 01:03:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSrff-0003zX-UU; Thu, 15 Oct 2020 01:03:07 +0000
Received: by outflank-mailman (input) for mailman id 7063;
 Thu, 15 Oct 2020 01:03:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G4/f=DW=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kSrfd-0003zJ-Mr
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 01:03:05 +0000
Received: from out1-smtp.messagingengine.com (unknown [66.111.4.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d52c267c-d8d0-4306-861a-b8506e1fa7ce;
 Thu, 15 Oct 2020 01:03:04 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 9F25E5C0145;
 Wed, 14 Oct 2020 21:03:04 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Wed, 14 Oct 2020 21:03:04 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 7D40D3280063;
 Wed, 14 Oct 2020 21:03:03 -0400 (EDT)
X-Inumbo-ID: d52c267c-d8d0-4306-861a-b8506e1fa7ce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=vWOY7z
	MD34pZOKjrMRxj5BndB7UtgBWfOJvAUFYGWsE=; b=STz5x17OTTY2CCAhEHhL9q
	Xuv0n4xQS2P06eU60pWsP+T4B60LM+z+SGECkK5Ok4k3T/dNcbHAE2DOHeMW3QV0
	r416Aq/kBURtUJfI/EX9ulmBa/AEbw3wiOUX2WIM6HnbYKdbj2vXglrciGa27WaN
	l/XFtNL+lJs3HIwpRMnmIuHtm16MIZxdxcNf0uIKDYU2GzStfJcDz6nPeBDlESmp
	IagEr9QE1p64yyOx4ea1sQA+giJ8i8lomnOQIDumV3INCJgqKIGSSzVyCdPHju1e
	iRNMgj+6XfTqE4ljU3ECx6fW7bdF7SvQG5HdNRSpe8EMOIFyw14Kr5HnnPzbX4Vw
	==
X-ME-Sender: <xms:yJ-HXyAcOrjJO1mYUttFBlTxOSaJtL-I6XnQhpzhXuw3KKflh_XZSQ>
    <xme:yJ-HX8jVeJuVXej-dz8SuYSpwnYQuT5VUOiGtzhjCPu9vt0E2RczOvqvOsrzjab07
    U0vzchIArrzWQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedriedvgdegudcutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteevffei
    gffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppeelud
    drieegrddujedtrdekleenucevlhhushhtvghrufhiiigvpedunecurfgrrhgrmhepmhgr
    ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:yJ-HX1kORT5rlAd6_mdttCVlJPuNo70i7Q2K5DkVUZ4opOu6sPoy1Q>
    <xmx:yJ-HXwx-Ojvf6CizYovRYKJE7UZLByvpQEHZIJFJzy6giOyQugzcvQ>
    <xmx:yJ-HX3Q3epw2ZabpTAPzKZOdf0zqd9ZTy2IlDJYjEjs4hDNXh_lR5A>
    <xmx:yJ-HX449-myE3Ss8_qRVU7fdXnsk6vejr_wrUAQJ43JzkDrh4jE4fA>
Date: Thu, 15 Oct 2020 03:02:59 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Wei Liu <wl@xen.org>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [SECOND RESEND] [PATCH] tools/python: Pass linker to Python
 build process
Message-ID: <20201015010259.GR151766@mail-itl>
References: <20201012011139.GA82449@mattapan.m5p.com>
 <20201013132606.7ff35mmpesklbmcx@liuwe-devbox-debian-v2>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="OxDG9cJJSSQMUzGF"
Content-Disposition: inline
In-Reply-To: <20201013132606.7ff35mmpesklbmcx@liuwe-devbox-debian-v2>


--OxDG9cJJSSQMUzGF
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
Subject: Re: [SECOND RESEND] [PATCH] tools/python: Pass linker to Python
 build process

On Tue, Oct 13, 2020 at 01:26:06PM +0000, Wei Liu wrote:
> On Sun, Oct 11, 2020 at 06:11:39PM -0700, Elliott Mitchell wrote:
> > Unexpectedly the environment variable which needs to be passed is
> > $LDSHARED and not $LD.  Otherwise Python may find the build `ld` instead
> > of the host `ld`.
> >
> > Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
> > it can load at runtime, not executables.
> >
> > This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
> > to $LDFLAGS which breaks many linkers.
> >
> > Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> > ---
> > This is now the *third* time this has been sent to the list.  Mark Pryor
> > has tested and confirms Python cross-building is working.  There is one
> > wart left which I'm unsure of the best approach for.
> >
> > Having looked around a bit, I believe this is a Python 2/3 compatibility
> > issue.  "distutils" for Python 2 likely lacked a separate $LDSHARED or
> > $LD variable, whereas Python 3 does have this.  Alas this is pointless
> > due to the above (unless you can cause distutils.py to do the final link
> > step separately).
>
> I think this is well-reasoned but I don't have time to figure out and
> verify the details.

Yes, it looks like distutils in Python 2 was even more limited than
the one in Python 3.
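
For reference, which link command a given Python build will use for
extension modules can be inspected with a short sysconfig sketch (a
minimal illustration assuming CPython 3 on a Unix-like system; the
exact values vary per build):

```python
import sysconfig

# LDSHARED is the command distutils/setup.py uses to link extension
# modules; on Linux it is normally "<cc> -shared ...", i.e. the C
# compiler driver rather than a bare ld -- which is why overriding
# $LDSHARED with $(CC) works where overriding $LD does not.
cc = sysconfig.get_config_var("CC")
ldshared = sysconfig.get_config_var("LDSHARED")
print("CC       =", cc)
print("LDSHARED =", ldshared)
```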

> Marek, do you have any comment on this?
>
> > ---
> >  tools/pygrub/Makefile | 9 +++++----
> >  tools/python/Makefile | 9 +++++----
> >  2 files changed, 10 insertions(+), 8 deletions(-)
> >
> > diff --git a/tools/pygrub/Makefile b/tools/pygrub/Makefile
> > index 3063c4998f..37b2146214 100644
> > --- a/tools/pygrub/Makefile
> > +++ b/tools/pygrub/Makefile
> > @@ -3,20 +3,21 @@ XEN_ROOT = $(CURDIR)/../..
> >  include $(XEN_ROOT)/tools/Rules.mk
> > 
> >  PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
> > -PY_LDFLAGS = $(LDFLAGS) $(APPEND_LDFLAGS)
> > +PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
> >  INSTALL_LOG = build/installed_files.txt
> > 
> >  .PHONY: all
> >  all: build
> >  .PHONY: build
> >  build:
> > -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
> > +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
> > 
> >  .PHONY: install
> >  install: all
> >  	$(INSTALL_DIR) $(DESTDIR)/$(bindir)
> > -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
> > -		setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> > +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
> > +		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
> > +		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> >  		 --root="$(DESTDIR)" --install-scripts=$(LIBEXEC_BIN) --force
> >  	set -e; if [ $(bindir) != $(LIBEXEC_BIN) -a \
> >  	             "`readlink -f $(DESTDIR)/$(bindir)`" != \
> > diff --git a/tools/python/Makefile b/tools/python/Makefile
> > index 8d22c03676..b675f5b4de 100644
> > --- a/tools/python/Makefile
> > +++ b/tools/python/Makefile
> > @@ -5,19 +5,20 @@ include $(XEN_ROOT)/tools/Rules.mk
> >  all: build
> > 
> >  PY_CFLAGS = $(CFLAGS) $(PY_NOOPT_CFLAGS)
> > -PY_LDFLAGS = $(LDFLAGS) $(APPEND_LDFLAGS)
> > +PY_LDFLAGS = $(SHLIB_LDFLAGS) $(APPEND_LDFLAGS)
> >  INSTALL_LOG = build/installed_files.txt
> > 
> >  .PHONY: build
> >  build:
> > -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" $(PYTHON) setup.py build
> > +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py build
> > 
> >  .PHONY: install
> >  install:
> >  	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
> > 
> > -	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) \
> > -		setup.py install --record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> > +	CC="$(CC)" CFLAGS="$(PY_CFLAGS)" LDSHARED="$(CC)" \
> > +		LDFLAGS="$(PY_LDFLAGS)" $(PYTHON) setup.py install \
> > +		--record $(INSTALL_LOG) $(PYTHON_PREFIX_ARG) \
> >  		--root="$(DESTDIR)" --force
> > 
> >  	$(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)
> > -- 
> > 2.20.1
> >
> >
> > -- 
> > (\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
> >  \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
> >   \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
> > 8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445
> >
> >

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--OxDG9cJJSSQMUzGF
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl+Hn8QACgkQ24/THMrX
1yxQhgf+M/yibdkpDozQk6YR3nYE4DekOTKvaNWNwQ2if+Ax7AZFuTwUC9+nk8zN
IBCQEDGqG1w3KrEPu8Gyx0FOIth1MOA3NUGrV0RdAx7/cnSbBxFxgbN6aCJhHb+Z
asU7yzWihNsZHdLloFJuSu0PUTF4dlQGUX8PO6K1PA4XhRW9u+WKxRoE1AXoNW77
dL7PvlQsOxo3CVsfPRUe7PV4/PoaDWA+p2T+Tuumg7b4zXsK0EZW7hX8uNZ6QrIa
LpcsyMZQL06MvOzKeifUy1FOZtOOlCcOcqHZ2D4JWDereqNV5t/U4g6dakzQ4Hgb
Y9erOx4N2U1g0wyBnAUKYvHI5O19Yg==
=Lao+
-----END PGP SIGNATURE-----

--OxDG9cJJSSQMUzGF--


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 02:22:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 02:22:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7069.18508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSsuT-0002fn-3J; Thu, 15 Oct 2020 02:22:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7069.18508; Thu, 15 Oct 2020 02:22:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSsuT-0002fg-0K; Thu, 15 Oct 2020 02:22:29 +0000
Received: by outflank-mailman (input) for mailman id 7069;
 Thu, 15 Oct 2020 02:22:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSsuR-0002fb-7g
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 02:22:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4bb6df52-1fbb-46e7-bc3a-d4f2c19fcbf7;
 Thu, 15 Oct 2020 02:22:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSsuP-0006ku-20; Thu, 15 Oct 2020 02:22:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSsuO-00046v-OS; Thu, 15 Oct 2020 02:22:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSsuO-0007Dm-Nu; Thu, 15 Oct 2020 02:22:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSsuR-0002fb-7g
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 02:22:27 +0000
X-Inumbo-ID: 4bb6df52-1fbb-46e7-bc3a-d4f2c19fcbf7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4bb6df52-1fbb-46e7-bc3a-d4f2c19fcbf7;
	Thu, 15 Oct 2020 02:22:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mdcYrTPUjpvk5iJnIBuWFpXPZclXDO1QIGKMhqb0uUw=; b=2fx3AttNzB2651fbrNLwp0RJab
	l/979xvi+xMpzD0lgPeuEVBOgCC9kLAlz77sUOp/Jadc9rDR5KQKh1lyhtd4DWdLBVFdMmiK2Fswz
	dLLxsrLsYO30tjsXL1fSEozyX9mcr3y7ZWEllokuqIOHOraNfWOdWSwD3HrOPcWTPKUM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSsuP-0006ku-20; Thu, 15 Oct 2020 02:22:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSsuO-00046v-OS; Thu, 15 Oct 2020 02:22:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSsuO-0007Dm-Nu; Thu, 15 Oct 2020 02:22:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155822-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155822: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f776e5fb3ee699745f6442ec8c47d0fa647e0575
X-Osstest-Versions-That:
    xen=884ef07f4f66b9d12fc4811047db95ba649db85c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 02:22:24 +0000

flight 155822 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155822/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 155805

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f776e5fb3ee699745f6442ec8c47d0fa647e0575
baseline version:
 xen                  884ef07f4f66b9d12fc4811047db95ba649db85c

Last test of basis   155805  2020-10-14 13:00:28 Z    0 days
Testing same since   155811  2020-10-14 18:03:04 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f776e5fb3ee699745f6442ec8c47d0fa647e0575
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed Oct 14 12:05:41 2020 +0200

    xen/arm: Document the erratum #853709 related to Cortex A72
    
    The Cortex-A72 erratum #853709 is the same as the Cortex-A57
    erratum #852523. As the latter is already worked around, we only
    need to update the documentation.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    [julieng: Reworded the commit message]
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 04:01:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 04:01:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7079.18535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSuRm-0002kC-Aj; Thu, 15 Oct 2020 04:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7079.18535; Thu, 15 Oct 2020 04:00:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSuRm-0002k5-7k; Thu, 15 Oct 2020 04:00:58 +0000
Received: by outflank-mailman (input) for mailman id 7079;
 Thu, 15 Oct 2020 04:00:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSuRl-0002iz-1f
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 04:00:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c26cfd1a-0275-4ff6-a2da-8bd8b27e8179;
 Thu, 15 Oct 2020 04:00:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSuRd-0000Pu-Cx; Thu, 15 Oct 2020 04:00:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSuRd-0000p3-6f; Thu, 15 Oct 2020 04:00:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSuRd-00054J-6B; Thu, 15 Oct 2020 04:00:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSuRl-0002iz-1f
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 04:00:57 +0000
X-Inumbo-ID: c26cfd1a-0275-4ff6-a2da-8bd8b27e8179
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c26cfd1a-0275-4ff6-a2da-8bd8b27e8179;
	Thu, 15 Oct 2020 04:00:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nIr4PjprkyDaERskddD4pmm/FKmeYgzjoje7Ydu3Rl8=; b=Edbvq0KRiSrls5ViSORbJ8HOAj
	qSXAdyHChP2nY2TIJ8l0gG+JMJrb4vtwTcP8yC3Zfgrpf44LMY7jZ2G8L0l6i4GG+qfxpioVJyuZ8
	+cglnxKTAzG20jmB2HGcSfk3TlOrmUk6lGxSKPmXCh8UFm+v6VAjNIoM1TawuI6/qFZg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSuRd-0000Pu-Cx; Thu, 15 Oct 2020 04:00:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSuRd-0000p3-6f; Thu, 15 Oct 2020 04:00:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSuRd-00054J-6B; Thu, 15 Oct 2020 04:00:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155809-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155809: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-pair:guest-migrate/src_host/dst_host:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:freebsd-install:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:debian-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=b5fc7a89e58bcc059a3d5e4db79c481fb437de59
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 04:00:49 +0000

flight 155809 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155809/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 155791 pass in 155809
 test-amd64-amd64-pair 26 guest-migrate/src_host/dst_host fail in 155791 pass in 155809
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 155791 pass in 155809
 test-amd64-amd64-xl-rtds   18 guest-localmigrate fail in 155791 pass in 155809
 test-amd64-amd64-libvirt-vhd 12 debian-di-install fail in 155791 pass in 155809
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 155791 pass in 155809
 test-amd64-amd64-examine    4 memdisk-try-append fail in 155791 pass in 155809
 test-amd64-amd64-qemuu-freebsd11-amd64 12 freebsd-install  fail pass in 155791
 test-arm64-arm64-xl          10 host-ping-check-xen        fail pass in 155791
 test-arm64-arm64-xl-credit1  12 debian-install             fail pass in 155791

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl         15 migrate-support-check fail in 155791 never pass
 test-arm64-arm64-xl     16 saverestore-support-check fail in 155791 never pass
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 155791 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 155791 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                b5fc7a89e58bcc059a3d5e4db79c481fb437de59
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   75 days
Failing since        152366  2020-08-01 20:49:34 Z   74 days  126 attempts
Testing same since   155791  2020-10-14 03:12:26 Z    1 days    2 attempts

------------------------------------------------------------
2687 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 395389 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 04:03:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 04:03:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7082.18547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSuUZ-0002u0-Qr; Thu, 15 Oct 2020 04:03:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7082.18547; Thu, 15 Oct 2020 04:03:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSuUZ-0002tt-ND; Thu, 15 Oct 2020 04:03:51 +0000
Received: by outflank-mailman (input) for mailman id 7082;
 Thu, 15 Oct 2020 04:03:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jCZ5=DW=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kSuUY-0002tn-Ma
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 04:03:50 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a64a43c-88ac-4e1e-9c8b-dbdfa0cb3424;
 Thu, 15 Oct 2020 04:03:49 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09F43dP5007398
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 15 Oct 2020 00:03:45 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09F43d4B007397;
 Wed, 14 Oct 2020 21:03:39 -0700 (PDT) (envelope-from ehem)
Date: Wed, 14 Oct 2020 21:03:39 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
        Ian Jackson <iwj@xenproject.org>
Subject: Re: [SECOND RESEND] [PATCH] tools/python: Pass linker to Python
 build process
Message-ID: <20201015040339.GB5803@mattapan.m5p.com>
References: <20201012011139.GA82449@mattapan.m5p.com>
 <20201013132606.7ff35mmpesklbmcx@liuwe-devbox-debian-v2>
 <20201015010259.GR151766@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201015010259.GR151766@mail-itl>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Thu, Oct 15, 2020 at 03:02:59AM +0200, Marek Marczykowski-Górecki wrote:
> On Tue, Oct 13, 2020 at 01:26:06PM +0000, Wei Liu wrote:
> > On Sun, Oct 11, 2020 at 06:11:39PM -0700, Elliott Mitchell wrote:
> > > Having looked around a bit, I believe this is a Python 2/3 compatibility
> > > issue.  "distutils" for Python 2 likely lacked a separate $LDSHARED or
> > > $LD variable, whereas Python 3 does have this.  Alas this is pointless
> > > due to the above (unless you can cause distutils.py to do the final link
> > > step separately).
> > 
> > I think this is well-reasoned but I don't have time to figure out and
> > verify the details.
> 
> Yes, it looks like distutils in Python 2 was even more limited than
> the one in Python 3.

Actually, it feels like two steps forward, one step back: there is now a
separate $LDSHARED, yet $CFLAGS is appended to $LDFLAGS during invocation.

The architecture name is included in the output Python extension filename,
yet the only way to override it requires writing Python code.  I've got
ideas to work around this, but they're rather gross.
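As a plain illustration of the variables involved (not a fix, and not taken
from the patch under discussion), Python's sysconfig module exposes both the
shared-object link command and the architecture-tagged extension suffix:

```python
import sysconfig

# Illustrative only: inspect the build variables the distutils machinery
# draws on.  LDSHARED is the command line used to link extension modules;
# EXT_SUFFIX is the architecture-tagged filename suffix mentioned above
# (e.g. ".cpython-38-x86_64-linux-gnu.so" on a GNU/Linux x86_64 build).
ldshared = sysconfig.get_config_var("LDSHARED")
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")
print("LDSHARED   =", ldshared)
print("EXT_SUFFIX =", ext_suffix)
```

On Unix builds, distutils' compiler customization honors an LDSHARED
environment override, whereas (as noted above) overriding EXT_SUFFIX appears
to be possible only from Python code.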


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Oct 15 04:17:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 04:17:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7088.18563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSui3-0003v3-85; Thu, 15 Oct 2020 04:17:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7088.18563; Thu, 15 Oct 2020 04:17:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSui3-0003uw-45; Thu, 15 Oct 2020 04:17:47 +0000
Received: by outflank-mailman (input) for mailman id 7088;
 Thu, 15 Oct 2020 04:17:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFLp=DW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSui1-0003ur-IB
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 04:17:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c9e90c91-f300-4327-8099-bffc4a6d124a;
 Thu, 15 Oct 2020 04:17:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 74764AAC6;
 Thu, 15 Oct 2020 04:17:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602735463;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=d6XREWJTbrLTiRCRnFwa8v4t3eXIsRm14t3UfLvWp/M=;
	b=bFQC5sMJl2mNtSu//V/l7bE4DIH+yfzMeMg8rDNbECmZwgNjb4MPNlVlPuWs31Qkz5JZdv
	HdLJ0MQ98nbi88Ca3xJ8wWGAUTmrdfq2fosmWvuTZx/PDbQSIrKZ58w/1ogwxxLka9vLtm
	c/Ba6cn2x7AMn2R8EoxiYZD+HUya3Og=
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201014153150.83875-1-jandryuk@gmail.com>
 <77e8bf3b-6172-2900-dd5e-9d059a410b0e@suse.com>
 <CAKf6xptqRKJ87KiJ52MpYR50RNgDEqqA5RsqXphQ1NUeVZgb=Q@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <03a9d7c8-61fe-a83f-93b3-55d38d070b0b@suse.com>
Date: Thu, 15 Oct 2020 06:17:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xptqRKJ87KiJ52MpYR50RNgDEqqA5RsqXphQ1NUeVZgb=Q@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.10.20 18:27, Jason Andryuk wrote:
> On Wed, Oct 14, 2020 at 12:12 PM Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 14.10.20 17:31, Jason Andryuk wrote:
>>> Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
>>
>> This is wrong. Have a look into arch/x86/platform/pvh/head.S
> 
> That is XEN_ELFNOTE_PHYS32_ENTRY, which is different from
> XEN_ELFNOTE_ENTRY in arch/x86/xen/xen-head.S:
> #ifdef CONFIG_XEN_PV
>          ELFNOTE(Xen, XEN_ELFNOTE_ENTRY,          _ASM_PTR startup_xen)
> #endif

Oh, sorry, I shouldn't have answered when being in a hurry.

I misunderstood the purpose of the patch.


Sorry for the noise,

Juergen



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 04:46:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 04:46:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7091.18575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSv9o-0006fB-Gf; Thu, 15 Oct 2020 04:46:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7091.18575; Thu, 15 Oct 2020 04:46:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSv9o-0006f4-D7; Thu, 15 Oct 2020 04:46:28 +0000
Received: by outflank-mailman (input) for mailman id 7091;
 Thu, 15 Oct 2020 04:46:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSv9m-0006ez-GH
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 04:46:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 968fce51-29e4-47b3-9500-d143bf00ba4e;
 Thu, 15 Oct 2020 04:46:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSv9i-0001Tm-TX; Thu, 15 Oct 2020 04:46:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSv9i-0002nj-LH; Thu, 15 Oct 2020 04:46:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSv9i-0004wO-Kc; Thu, 15 Oct 2020 04:46:22 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9rIa7QmEDhC2SAAWzLbMybPU7xT9F8vAJzSYOMKKsHo=; b=Y5cE3Wb9WoAb89UJ7pKu0sV1o2
	TWtT0I9xp1VbhwlbKID8gxnkZ0zIken5RkJ4iw9l8kaSGBYyjB6woeahAN/jafIH1pYjpY6buEhlc
	d2/uB0qVEK6FADEtm5CLr2WYIPk4JjGq0O3/tgxqfAXNgN+ol7BtLbn4Xx56XDs0Z01M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155810-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155810: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-stop:fail:allowable
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=884ef07f4f66b9d12fc4811047db95ba649db85c
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 04:46:22 +0000

flight 155810 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155810/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 155788
 build-amd64-xsm               6 xen-build                fail REGR. vs. 155788

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-stop               fail REGR. vs. 155788

Tests which did not succeed, but are not blocking:
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155788
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155788
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  884ef07f4f66b9d12fc4811047db95ba649db85c
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155788  2020-10-14 01:52:30 Z    1 days
Testing same since   155810  2020-10-14 16:08:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 05:54:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 05:54:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7099.18595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSwCp-0004SX-Ns; Thu, 15 Oct 2020 05:53:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7099.18595; Thu, 15 Oct 2020 05:53:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSwCp-0004SQ-KQ; Thu, 15 Oct 2020 05:53:39 +0000
Received: by outflank-mailman (input) for mailman id 7099;
 Thu, 15 Oct 2020 05:53:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSwCo-0004Rs-UG
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 05:53:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24f21b5a-4a4e-4368-b446-6540e69d761b;
 Thu, 15 Oct 2020 05:53:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSwCg-0003AI-Cn; Thu, 15 Oct 2020 05:53:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSwCg-0004zd-2j; Thu, 15 Oct 2020 05:53:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSwCg-0003Jh-2H; Thu, 15 Oct 2020 05:53:30 +0000
X-Inumbo-ID: 24f21b5a-4a4e-4368-b446-6540e69d761b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cNsGd8aJuE1QSKh0mHkxzt3meXLdhSNq7Yu/1dg67sw=; b=HgG5kqD/qPYwlXlt0zcO9gVH2x
	6AERbrznOVerppjisyacJZ8pUjKp7ZiY6g8l+Ql6b3PtEUatc0TLbJM0e3vClQ/EjI2w/ttBZQgA2
	s+Og/mnJXXsZ3ubadeSagBuQD3yy1Td5H49vIilNNpqvrxfMI+5egQLJ54JbMYx6e1u8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155815-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 155815: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-pvhv2-intel:debian-install:fail:heisenbug
    linux-5.4:test-amd64-amd64-pygrub:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-raw:guest-start:fail:heisenbug
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    linux=85b0841aab15c12948af951d477183ab3df7de14
X-Osstest-Versions-That:
    linux=d22f99d235e13356521b374410a6ee24f50b65e6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 05:53:30 +0000

flight 155815 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155815/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-intel 12 debian-install fail in 155799 pass in 155815
 test-amd64-amd64-pygrub      13 guest-start      fail in 155799 pass in 155815
 test-amd64-amd64-libvirt-vhd 12 debian-di-install          fail pass in 155799
 test-armhf-armhf-libvirt-raw 13 guest-start                fail pass in 155799

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail in 155799 never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check fail in 155799 never pass
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check fail in 155799 never pass
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 155534
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop              fail never pass

version targeted for testing:
 linux                85b0841aab15c12948af951d477183ab3df7de14
baseline version:
 linux                d22f99d235e13356521b374410a6ee24f50b65e6

Last test of basis   155534  2020-10-07 22:08:49 Z    7 days
Testing same since   155799  2020-10-14 08:39:59 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Brown <aaron.f.brown@intel.com>
  Aaron Ma <aaron.ma@canonical.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Anand Jain <anand.jain@oracle.com>
  Anant Thazhemadam <anant.thazhemadam@gmail.com>
  Andrew Bowers <andrewx.bowers@intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antony Antony <antony.antony@secunet.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Aya Levin <ayal@mellanox.com>
  Aya Levin <ayal@nvidia.com>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Christoph Hellwig <hch@lst.de>
  Coly Li <colyli@suse.de>
  Cong Wang <xiyou.wangcong@gmail.com>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Dave Airlie <airlied@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Davide Caratti <dcaratti@redhat.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dumitru Ceara <dceara@redhat.com>
  Eran Ben Elisha <eranbe@mellanox.com>
  Eric Dumazet <edumazet@google.com>
  Eric W. Biederman <ebiederm@xmission.com>
  Filipe Manana <fdmanana@suse.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Hans de Goede <hdegoede@redhat.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hugh Dickins <hughd@google.com>
  Ido Schimmel <idosch@nvidia.com>
  Ingo Molnar <mingo@kernel.org>
  Ivan Khoronzhuk <ikhoronz@cisco.com>
  Ivan Khoronzhuk <ivan.khoronzhuk@gmail.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Wang <jasowang@redhat.com>
  Jean Delvare <jdelvare@suse.de>
  Jens Axboe <axboe@kernel.dk>
  Jerome Brunet <jbrunet@baylibre.com>
  Jiri Olsa <jolsa@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  John Fastabend <john.fastabend@gmail.com>
  Jon Hunter <jonathanh@nvidia.com>
  Kajol Jain <kjain@linux.ibm.com>
  Karol Herbst <kherbst@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Maor Gottlieb <maorg@nvidia.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Mark Gross <mgross@linux.intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Muchun Song <songmuchun@bytedance.com>
  Neal Cardwell <ncardwell@google.com>
  Necip Fazil Yildiran <fazilyildiran@gmail.com>
  Nicolas Belin <nbelin@baylibre.com>
  Nikolay Borisov <nborisov@suse.com>
  Nobuhiro Iwamatsu (CIP) <nobuhiro1.iwamatsu@toshiba.co.jp>
  Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
  Paolo Abeni <pabeni@redhat.com>
  Peilin Ye <yepeilin.cs@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petko Manolov <petkan@nucleusys.com>
  Philip Yang <Philip.Yang@amd.com>
  Qu Wenruo <wqu@suse.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rohit Maheshwari <rohitm@chelsio.com>
  Sabrina Dubroca <sd@queasysnail.net>
  Saeed Mahameed <saeedm@nvidia.com>
  Sasha Levin <sashal@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Soheil Hassas Yeganeh <soheil@google.com>
  Srikar Dronamraju <srikar@linux.vnet.ibm.com>
  Steffen Klassert <steffen.klassert@secunet.com>
  Sylwester Dziedziuch <sylwesterx.dziedziuch@intel.com>
  syzbot+69b804437cfec30deac3@syzkaller.appspotmail.com
  syzbot+abbc768b560c84d92fd3@syzkaller.appspotmail.com
  syzbot+b1bb342d1d097516cbda@syzkaller.appspotmail.com
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Tom Rix <trix@redhat.com>
  Tommi Rantala <tommi.t.rantala@nokia.com>
  Tonghao Zhang <xiangxia.m.yue@gmail.com>
  Tony Ambardar <Tony.Ambardar@gmail.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vaibhav Gupta <vaibhavgupta40@gmail.com>
  Vijay Balakrishna <vijayb@linux.microsoft.com>
  Vladimir Zapolskiy <vladimir@tuxera.com>
  Voon Weifeng <weifeng.voon@intel.com>
  Wilken Gottwalt <wilken.gottwalt@mailbox.org>
  Willem de Bruijn <willemb@google.com>
  Wolfram Sang <wsa@kernel.org>
  Xiongfeng Wang <wangxiongfeng2@huawei.com>
  Yinyin Zhu <zhuyinyin@bytedance.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   d22f99d235e1..85b0841aab15  85b0841aab15c12948af951d477183ab3df7de14 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 06:50:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 06:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7104.18611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSx5T-00018r-SJ; Thu, 15 Oct 2020 06:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7104.18611; Thu, 15 Oct 2020 06:50:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSx5T-00018k-OW; Thu, 15 Oct 2020 06:50:07 +0000
Received: by outflank-mailman (input) for mailman id 7104;
 Thu, 15 Oct 2020 06:50:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSx5S-00018f-WB
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 06:50:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2c9666c-9ec0-424b-82f2-7b206ba495b4;
 Thu, 15 Oct 2020 06:50:04 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSx5Q-0004NK-5N; Thu, 15 Oct 2020 06:50:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSx5P-0007PQ-UZ; Thu, 15 Oct 2020 06:50:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSx5P-0007zs-U3; Thu, 15 Oct 2020 06:50:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kSx5S-00018f-WB
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 06:50:07 +0000
X-Inumbo-ID: b2c9666c-9ec0-424b-82f2-7b206ba495b4
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b2c9666c-9ec0-424b-82f2-7b206ba495b4;
	Thu, 15 Oct 2020 06:50:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P5NQsLVuaArhw+HQ8iKfZuLQs26Kp6ixwNRImvyxkQw=; b=QwRK+MKV7qi7F0gO5JluhvNWPO
	2jgcog7uTuvOO7qElWcRyepcgYazbKa9oKitlFK0eND/RHW4RCH1877vtjwO626ymUzmwB/Zrx42e
	dvE5afcjnPuc+yoZwibkZxVtcVlpG7n75R1pw37qxaLM3FPnZgLXXNEsBTLDiPY7Sq8c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSx5Q-0004NK-5N; Thu, 15 Oct 2020 06:50:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSx5P-0007PQ-UZ; Thu, 15 Oct 2020 06:50:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kSx5P-0007zs-U3; Thu, 15 Oct 2020 06:50:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155828-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155828: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f776e5fb3ee699745f6442ec8c47d0fa647e0575
X-Osstest-Versions-That:
    xen=884ef07f4f66b9d12fc4811047db95ba649db85c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 06:50:03 +0000

flight 155828 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155828/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f776e5fb3ee699745f6442ec8c47d0fa647e0575
baseline version:
 xen                  884ef07f4f66b9d12fc4811047db95ba649db85c

Last test of basis   155805  2020-10-14 13:00:28 Z    0 days
Testing same since   155811  2020-10-14 18:03:04 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   884ef07f4f..f776e5fb3e  f776e5fb3ee699745f6442ec8c47d0fa647e0575 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 07:00:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 07:00:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7108.18625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSxFF-00027v-1H; Thu, 15 Oct 2020 07:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7108.18625; Thu, 15 Oct 2020 07:00:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSxFE-00027o-Tu; Thu, 15 Oct 2020 07:00:12 +0000
Received: by outflank-mailman (input) for mailman id 7108;
 Thu, 15 Oct 2020 07:00:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSxFD-00027j-L1
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 07:00:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1e7be84-3501-444e-b44b-ac9151109f10;
 Thu, 15 Oct 2020 07:00:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 52FC8AD83;
 Thu, 15 Oct 2020 07:00:09 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kSxFD-00027j-L1
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 07:00:11 +0000
X-Inumbo-ID: f1e7be84-3501-444e-b44b-ac9151109f10
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f1e7be84-3501-444e-b44b-ac9151109f10;
	Thu, 15 Oct 2020 07:00:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602745209;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=stXzUWWNNn0tlYniH0/+yke3yDVk8S39bV2uE3ntIgc=;
	b=GoiSPSALtfp/uFpw8+LcU0TRAdcuBcEoQlikDW7cWDqBLxvZD7zfSMY9QXM+6vumg2WGHE
	7e4Hs4JFbvg18QxBT45Tz/S0JReTZKlCi2t5B3h1h5RLx8hNj1PQp7aX4bcnx1pQHn1LGj
	078BOWyRNhkLuWechv9gpKESz08raic=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 52FC8AD83;
	Thu, 15 Oct 2020 07:00:09 +0000 (UTC)
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20201014153150.83875-1-jandryuk@gmail.com>
 <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
Date: Thu, 15 Oct 2020 09:00:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.10.2020 18:27, Jason Andryuk wrote:
> On Wed, Oct 14, 2020 at 12:02 PM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 14.10.2020 17:31, Jason Andryuk wrote:
>>> Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
>>> kernel built with CONFIG_PVH=y and CONFIG_PV=n lacks the note.  In this case,
>>> virt_entry will be UNSET_ADDR, overwritten by the ELF header e_entry,
>>> and fail the check against the virt address range.
> 
> Oh, these should be CONFIG_XEN_PVH=y and CONFIG_XEN_PV=n
> 
>>> Change the code to only check virt_entry against the virtual address
>>> range if it was set upon entry to the function.
>>
>> Not checking at all seems wrong to me. The ELF spec anyway says
>> "virtual address", so an out of bounds value is at least suspicious.
>>
>>> Maybe the overwriting of virt_entry could be removed, but I don't know
>>> if there would be unintended consequences where (old?) kernels don't
>>> have an elfnote, but do have an in-range e_entry?  The failing kernel I
>>> just looked at has an e_entry of 0x1000000.
>>
>> And if you dropped the overwriting, what entry point would we use
>> in the absence of an ELF note?
> 
> elf_xen_note_check currently has:
> 
>     /* PVH only requires one ELF note to be set */
>     if ( parms->phys_entry != UNSET_ADDR32 )
>     {
>         elf_msg(elf, "ELF: Found PVH image\n");
>         return 0;
>     }
> 
>> I'd rather put up the option of adjusting the entry (or the check),
>> if it looks like a valid physical address.
> 
> The function doesn't know if the image will be booted PV or PVH, so I
> guess we do all the checks, but use 'parms->phys_entry != UNSET_ADDR32
> && parms->virt_entry == UNSET_ADDR' to conditionally skip checking
> virt?

Like Jürgen, I hadn't understood the purpose of the patch from reading
the description. As I understand it now, we're currently refusing to
boot such a kernel for no reason. If that's correct, perhaps you could
say so more directly in the description?

As far as actual code adjustments go - how much of
elf_xen_addr_calc_check() is actually applicable when booting PVH?

And why is there no bounds check of ->phys_entry paralleling the
->virt_entry one?

On the whole, as long as we don't know what mode we're planning to
boot in, we can't skip any checks, as the mere presence of
XEN_ELFNOTE_PHYS32_ENTRY doesn't mean that's also what gets used.
Therefore simply bypassing any of the checks is not an option. In
particular what you suggest would lead to failure to check
e_entry-derived ->virt_entry when the PVH-specific note is
present but we're booting in PV mode. For now I don't see how to
address this without making the function aware of the intended
booting mode.
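For illustration only (the struct, sentinels, and function below are
simplified stand-ins I'm making up for this sketch, not libelf's actual
definitions), a check made aware of the intended boot mode might look
roughly like:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for libelf's sentinels and elf_dom_parms. */
#define UNSET_ADDR   ((uint64_t)-1)
#define UNSET_ADDR32 ((uint32_t)-1)

struct dom_parms {
    uint64_t virt_entry;            /* from XEN_ELFNOTE_ENTRY, or e_entry */
    uint32_t phys_entry;            /* from XEN_ELFNOTE_PHYS32_ENTRY */
    uint64_t virt_kstart, virt_kend;
};

/*
 * Validate only the entry point relevant to the mode we will boot in:
 * PVH requires the PHYS32_ENTRY note (a phys bounds check is elided
 * here); PV bounds-checks virt_entry against the kernel's virtual
 * address range, as elf_xen_addr_calc_check() does today.
 */
static bool entry_check_ok(const struct dom_parms *p, bool pvh_boot)
{
    if (pvh_boot)
        return p->phys_entry != UNSET_ADDR32;
    return p->virt_entry >= p->virt_kstart && p->virt_entry < p->virt_kend;
}
```

With this shape, an image carrying only PHYS32_ENTRY passes when booted
PVH, while the same image booted PV still gets its e_entry-derived
virt_entry bounds-checked.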

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 07:11:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 07:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7112.18637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSxQD-00036V-0L; Thu, 15 Oct 2020 07:11:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7112.18637; Thu, 15 Oct 2020 07:11:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSxQC-00036O-TV; Thu, 15 Oct 2020 07:11:32 +0000
Received: by outflank-mailman (input) for mailman id 7112;
 Thu, 15 Oct 2020 07:11:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSxQB-00036F-Ik
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 07:11:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95f3307b-82c9-4a3c-a69a-291fda1b08bb;
 Thu, 15 Oct 2020 07:11:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 99D3DAC24;
 Thu, 15 Oct 2020 07:11:29 +0000 (UTC)
X-Inumbo-ID: 95f3307b-82c9-4a3c-a69a-291fda1b08bb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602745889;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3D1rJEdDtfrGwBnMAHJ2Ueb32PjJSc4BSKbXBS3PCFk=;
	b=GAA8+mXr1m/g89JLb7UjsDlPKIADzmF6pG/luWgArmvILNwiy/Ac2WZMmH/3nVlH3Y2OTo
	Z4FcSrEXSD6ClylncaE09z553+JUSREgwYo7vh3l138aRvI0czL6Fn/4i3REA109sEHIAD
	DAgKtgsKlxR+Yc9ykb4Fo2fE2YwSeH8=
Subject: Re: [PATCH 1/2] x86/intel: insert Ice Lake X (server) model numbers
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, jun.nakajima@intel.com,
 kevin.tian@intel.com
References: <1602558169-23140-1-git-send-email-igor.druzhinin@citrix.com>
 <ca9a1cce-1e51-0f55-4527-42f48bc7d6ab@suse.com>
 <e30f7f98-ee1a-1a24-0496-01911a79c861@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9c208aa0-ec20-7ee5-38ac-faaf1fef5aee@suse.com>
Date: Thu, 15 Oct 2020 09:11:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <e30f7f98-ee1a-1a24-0496-01911a79c861@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.10.2020 18:42, Igor Druzhinin wrote:
> On 14/10/2020 16:47, Jan Beulich wrote:
>> On 13.10.2020 05:02, Igor Druzhinin wrote:
>>> LBR, C-state MSRs and if_pschange_mc erratum applicability should correspond
>>> to Ice Lake desktop according to External Design Specification vol.2.
>>
>> Could you tell me where this is publicly available? Even after spending
>> quite a bit of time searching for it, I can't seem to find it. And the
>> SDM doesn't have enough information (yet).
> 
> It's true that the SDM doesn't have this data. As I mentioned, the data is
> taken from the External Design Specification for Ice Lake server, which is
> accessed using an Intel account. I'm not completely sure it is right to make
> changes in an open-source project like Linux or Xen based on information
> which is not publicly available yet. But Intel does this frequently with
> Linux: even my second patch uses data taken from one of these documents and
> was committed to Linux by Intel first.
> 
> Do we need the information publicly available to commit these changes as well?

Not necessarily, but it means this patch needs to be acked by someone
having access to the doc, which hence isn't me. Given the last SDM
update was in May, I'm expecting a refresh any day now. Iirc updates
were frequently done on a roughly quarterly basis.

> If not, we can run with these changes in our patchqueue until it gets out properly.

Well, I'm all for having such changes upstream as early as possible.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 07:12:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 07:12:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7114.18649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSxRD-0003Ce-AK; Thu, 15 Oct 2020 07:12:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7114.18649; Thu, 15 Oct 2020 07:12:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSxRD-0003CW-71; Thu, 15 Oct 2020 07:12:35 +0000
Received: by outflank-mailman (input) for mailman id 7114;
 Thu, 15 Oct 2020 07:12:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kSxRB-0003CR-Qg
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 07:12:33 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ecda4914-9bb4-4804-a9e5-9139b394dcb4;
 Thu, 15 Oct 2020 07:12:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSxRA-0004rP-6B; Thu, 15 Oct 2020 07:12:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kSxR9-0008Lr-Tq; Thu, 15 Oct 2020 07:12:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kSxR9-0006Jf-TQ; Thu, 15 Oct 2020 07:12:31 +0000
X-Inumbo-ID: ecda4914-9bb4-4804-a9e5-9139b394dcb4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c3agQx5XBP8e0sESEYnrlwgG7djgwa5dbMyV5I9TuYQ=; b=RkTnK1yF+IHYtLMf8QWSPtuY5v
	n7XS6KdhXZw9e8U/FRJwt6CxtCkZ2DnDg3ZbNcfCknz/vAbOvL8KRAjg0yWRwRLtavZYtZA7cZI1i
	ZUTOE1DO7NiWBTE5HWQKLNaMvFu1JMIOZIp8KQnyn9HNW/CzL1T47oSLyELLlYKO8daQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155825-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155825: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b9b7406c43e9d29bde3e9679c1b039cb91109097
X-Osstest-Versions-That:
    ovmf=5d0a827122cccd1f884faf75b2a065d88a58bce1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 07:12:31 +0000

flight 155825 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155825/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b9b7406c43e9d29bde3e9679c1b039cb91109097
baseline version:
 ovmf                 5d0a827122cccd1f884faf75b2a065d88a58bce1

Last test of basis   155801  2020-10-14 09:11:09 Z    0 days
Testing same since   155825  2020-10-15 01:10:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Fan Wang <fan.wang@intel.com>
  Jiaxin Wu <jiaxin.wu@intel.com>
  Siyuan Fu <siyuan.fu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5d0a827122..b9b7406c43  b9b7406c43e9d29bde3e9679c1b039cb91109097 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 07:27:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 07:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7121.18666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSxfn-0004Gl-Np; Thu, 15 Oct 2020 07:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7121.18666; Thu, 15 Oct 2020 07:27:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSxfn-0004Ge-Kl; Thu, 15 Oct 2020 07:27:39 +0000
Received: by outflank-mailman (input) for mailman id 7121;
 Thu, 15 Oct 2020 07:27:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSxfn-0004GZ-1H
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 07:27:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1bab1e0-b2d7-4fa2-bd26-4794b27468be;
 Thu, 15 Oct 2020 07:27:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4439AB8F;
 Thu, 15 Oct 2020 07:27:36 +0000 (UTC)
X-Inumbo-ID: e1bab1e0-b2d7-4fa2-bd26-4794b27468be
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602746857;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BFpbBU8Es+GhY7TGQw9gT3YXBV8vA4sq+zdC2nOmnPM=;
	b=s27oqcxu1ZAshAUnxmOLCjkYFwI0LUSdtCI+oKJDU/UxewkM3OZKASyAVS74/K3KuOhpIB
	ZELqqXey5bEK0kvmz3Z5cC0aAkjmPp1dVDTAKtRu3Is6NGhTFp24A5g1CVuoxt7S8Hvecp
	cH+j08l5tExDnKmS3yjfyvnrrZwF9Wk=
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
 <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
 <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <948f0753-561b-15e8-bf8c-52ff507133d2@suse.com>
Date: Thu, 15 Oct 2020 09:27:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.10.2020 20:00, Andrew Cooper wrote:
> On 13/10/2020 16:51, Jan Beulich wrote:
>> On 12.10.2020 15:49, Andrew Cooper wrote:
>>> All interrupts and exceptions pass a struct cpu_user_regs up into C.  This
>>> contains the legacy vm86 fields from 32bit days, which are beyond the
>>> hardware-pushed frame.
>>>
>>> Accessing these fields is generally illegal, as they are logically out of
>>> bounds for anything other than an interrupt/exception hitting ring1/3 code.
>>>
>>> Unfortunately, the #DF handler uses these fields as part of preparing the
>>> state dump, and being IST, accesses the adjacent stack frame.
>>>
>>> This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
>>> layout to support shadow stacks" repositioned the #DF stack to be adjacent to
>>> the guard page, which turns this OoB write into a fatal pagefault:
>>>
>>>   (XEN) *** DOUBLE FAULT ***
>>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>>   (XEN) CPU:    4
>>>   (XEN) RIP:    e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
>>>   (XEN) RFLAGS: 0000000000050086   CONTEXT: hypervisor (d1v0)
>>>   ...
>>>   (XEN) Xen call trace:
>>>   (XEN)    [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
>>>   (XEN)    [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
>>>   (XEN)    [<ffff82d04039acd7>] F double_fault+0x107/0x110
>>>   (XEN)
>>>   (XEN) Pagetable walk from ffff830236f6d008:
>>>   (XEN)  L4[0x106] = 80000000bfa9b063 ffffffffffffffff
>>>   (XEN)  L3[0x008] = 0000000236ffd063 ffffffffffffffff
>>>   (XEN)  L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
>>>   (XEN)  L1[0x16d] = 8000000236f6d161 ffffffffffffffff
>>>   (XEN)
>>>   (XEN) ****************************************
>>>   (XEN) Panic on CPU 4:
>>>   (XEN) FATAL PAGE FAULT
>>>   (XEN) [error_code=0003]
>>>   (XEN) Faulting linear address: ffff830236f6d008
>>>   (XEN) ****************************************
>>>   (XEN)
>>>
>>> and rendering the main #DF analysis broken.
>>>
>>> The proper fix is to delete cpu_user_regs.es and later, so no
>>> interrupt/exception path can access OoB, but this needs disentangling from the
>>> PV ABI first.
>>>
>>> Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks")
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> Is it perhaps worth also saying explicitly that the other IST
>> stacks don't suffer the same problem because show_registers()
>> makes a local copy of the passed-in struct? (Doing so also
>> for #DF would apparently be an alternative solution.)
> 
> They're not safe.  They merely don't explode.
> 
> https://lore.kernel.org/xen-devel/1532546157-5974-1-git-send-email-andrew.cooper3@citrix.com/
> was "fixed" by CET-SS turning the guard page from not present to
> read-only, but the same CET-SS series swapped #DB for #DF when it comes
> to the single OoB write problem case.

I see. While indeed I didn't pay attention to the OoB read aspect,
my saying "the other IST stacks don't suffer the same problem" was
still correct, wasn't it? Anyway - not a big deal.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 07:58:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 07:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7124.18679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSy9K-0006tc-6f; Thu, 15 Oct 2020 07:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7124.18679; Thu, 15 Oct 2020 07:58:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSy9K-0006tV-37; Thu, 15 Oct 2020 07:58:10 +0000
Received: by outflank-mailman (input) for mailman id 7124;
 Thu, 15 Oct 2020 07:58:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFLp=DW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSy9I-0006tP-FJ
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 07:58:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9824c79e-ac98-4216-9ceb-b705fe76e8d2;
 Thu, 15 Oct 2020 07:58:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1C1D0AC19;
 Thu, 15 Oct 2020 07:58:06 +0000 (UTC)
X-Inumbo-ID: 9824c79e-ac98-4216-9ceb-b705fe76e8d2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602748686;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=GIavv+C3ClerGiLM1iPKE2R/dfHndET1wtp3fGCaptQ=;
	b=ouqSd2WeDAINZH7A7PXauGYc+KvKaD1FcIph5lHhF1+bZFeZcwmmRPyX9xW7bMpKsEmJZC
	XwtqfmUJs12YrFoHAhIRAQzr6OD3JgBvqnFIFev9NXHwmbSXI3/BJvLJQhjUzfcTi5mFyM
	3pK5DJVTdoBjehpuOZrUjfaUuEbebqM=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Subject: Getting rid of (many) dynamic link creations in the xen build
Message-ID: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
Date: Thu, 15 Oct 2020 09:58:05 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

After a short discussion on IRC yesterday I promised to send a mail
describing how I think we could get rid of creating dynamic links,
especially for header files, in the Xen build process.

This will require some restructuring; how much will depend on the
selected way to proceed:

- avoid links completely, requires more restructuring
- avoid only dynamically created links, i.e. allowing some static
   links which are committed to git

The difference between the two variants affects the public headers in
xen/include/public/: avoiding even static links would require adding
another directory or moving those headers to another place in the tree
(either xen/include/public/xen/, or some other path */xen), which would
mean changing all #include statements in the hypervisor that use
<public/...> today.

The need for the path to contain "xen/" arises because the Xen library
headers (which are installed on users' machines) include the public
hypervisor headers via "#include <xen/...>", and we can't change that
scheme. A static link can avoid this problem via a different path, but
without any link we can't.

Apart from that decision, let's look at which links are created today
for accessing the header files (I'll assume my series moving the library
headers to tools/include will be taken, so the links created for those
in staging today are not mentioned) and at what can be done to avoid them:

- xen/include/asm -> xen/include/asm-<arch>:
   Move all headers from xen/include/asm-<arch> to
   xen/arch/<arch>/include/asm and add that path via "-I" flag to CFLAGS.
   This has the additional nice advantages that most architecture-specific
   files would then be in xen/arch (apart from the public headers) and that
   we could even add generic fallback headers in xen/include/asm in case an
   arch doesn't need a specific header file.

- xen/arch/<arch>/efi/*.[ch] -> xen/common/efi/*.[ch]:
   Use vpath for the *.c files and the "-I" flag for adding common/efi to
   the include path in the xen/arch/<arch>/efi/Makefile.

- tools/include/xen/asm -> xen/include/asm-<arch>:
   Add "-Ixen/arch/<arch>/include" to the CFLAGS. It might be a nice idea
   to move the headers needed by the tools to xen/arch/<arch>/include/tools/asm
   and use "-Ixen/arch/<arch>/include/tools" instead, but this would
   require either the same path added to the hypervisor's CFLAGS or a
   modification of the related #include statements.

- tools/include/xen/foreign -> tools/include/xen-foreign:
   Get rid of tools/include/xen-foreign and generate the headers directly
   in xen/include/public/foreign instead.

- tools/include/xen/sys -> tools/include/xen-sys/<OS>:
   Move the headers from tools/include/xen-sys/<OS> to
   tools/include/<OS>/xen/sys/ and add the appropriate path to CFLAGS.

- tools/include/xen/lib/<arch>/* -> xen/include/xen/lib/<arch>/*:
   Move xen/include/xen/lib/<arch> to xen/include/tools/lib/<arch> and
   add "-Ixen/include/tools" to the CFLAGS of tools.

- tools/include/xen/libelf/* -> xen/include/xen/*:
   Move the affected headers from xen/include/xen to
   xen/include/tools/libelf and reuse the CFLAGS set above.

- tools/include/xen/xsm -> xen/include/public/xsm:
   No longer needed (see next item in the list).

- tools/include/xen/* -> xen/include/public/*:
   See above discussion of the two possible variants. Either add a static
   link in git from tools/include/xen -> xen/include/public (now possible
   with all links below tools/include/xen gone), or add
   "-Ixen/include/public" to the tools' CFLAGS.

- stubdom/include/* -> tools/include/*:
   Set "-Itools/include -Itools/include/MiniOS" for the CFLAGS.

I hope I have covered everything.

Thoughts (especially about the two possible variants)?


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 08:01:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 08:01:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7134.18691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyCb-0008GQ-30; Thu, 15 Oct 2020 08:01:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7134.18691; Thu, 15 Oct 2020 08:01:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyCa-0008GJ-WC; Thu, 15 Oct 2020 08:01:32 +0000
Received: by outflank-mailman (input) for mailman id 7134;
 Thu, 15 Oct 2020 08:01:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSyCZ-0008GA-Iq
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 08:01:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3d064306-5a4e-4caa-908e-984941194d87;
 Thu, 15 Oct 2020 08:01:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B05FDAEEB;
 Thu, 15 Oct 2020 08:01:27 +0000 (UTC)
X-Inumbo-ID: 3d064306-5a4e-4caa-908e-984941194d87
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602748887;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WA7A7TgUHP+AB/O7+hhVzxaGDyCzojagDUVxHGcH1j8=;
	b=EEFptd1/JfFVhJO/VJfOV6TNSiWI76xdTQqcAQMf9ZyxTyfoI7VVn/HuMKzldmis9K0yH/
	6ag+w6wdyBhVC9LKys/eEl5JZ+O3fSG7ai0gadgTG5TWQWX1fcRI+eqC1CBLXwv6z/ABGa
	NyKBuPZ5+FehivcOK2voQ+UyOgp+oE4=
Subject: Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <20201009150948.31063-1-andrew.cooper3@citrix.com>
 <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
 <01bb2f27-4e0b-3637-e456-09eb7b9b233e@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1786f728-15c2-3877-c01a-035b11bd8504@suse.com>
Date: Thu, 15 Oct 2020 10:01:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <01bb2f27-4e0b-3637-e456-09eb7b9b233e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.10.2020 15:57, Andrew Cooper wrote:
> On 13/10/2020 16:58, Jan Beulich wrote:
>> On 09.10.2020 17:09, Andrew Cooper wrote:
>>> At the time of XSA-170, the x86 instruction emulator really was broken, and
>>> would allow arbitrary non-canonical values to be loaded into %rip.  This was
>>> fixed after the embargo by c/s 81d3a0b26c1 "x86emul: limit-check branch
>>> targets".
>>>
>>> However, in a demonstration that off-by-one errors really are one of the
>>> hardest programming issues we face, everyone involved with XSA-170, myself
>>> included, mistook the statement in the SDM which says:
>>>
>>>   If the processor supports N < 64 linear-address bits, bits 63:N must be identical
>>>
>>> to mean "must be canonical".  A real canonical check is bits 63:N-1.
>>>
>>> VMEntries really do tolerate a not-quite-canonical %rip, specifically to cater
>>> to the boundary condition at 0x0000800000000000.
>>>
>>> Now that the emulator has been fixed, revert the XSA-170 change to fix
>>> architectural behaviour at the boundary case.  The XTF test case for XSA-170
>>> exercises this corner case, and still passes.
>>>
>>> Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> But why revert the change rather than fix ...
>>
>>> @@ -4280,38 +4280,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>  out:
>>>      if ( nestedhvm_vcpu_in_guestmode(v) )
>>>          nvmx_idtv_handling();
>>> -
>>> -    /*
>>> -     * VM entry will fail (causing the guest to get crashed) if rIP (and
>>> -     * rFLAGS, but we don't have an issue there) doesn't meet certain
>>> -     * criteria. As we must not allow less than fully privileged mode to have
>>> -     * such an effect on the domain, we correct rIP in that case (accepting
>>> -     * this not being architecturally correct behavior, as the injected #GP
>>> -     * fault will then not see the correct [invalid] return address).
>>> -     * And since we know the guest will crash, we crash it right away if it
>>> -     * already is in most privileged mode.
>>> -     */
>>> -    mode = vmx_guest_x86_mode(v);
>>> -    if ( mode == 8 ? !is_canonical_address(regs->rip)
>> ... the wrong use of is_canonical_address() here? By reverting
>> you open up avenues for XSAs in case we get things wrong elsewhere,
>> including ...
>>
>>> -                   : regs->rip != regs->eip )
>> ... for 32-bit guests.
> 
> Because the only appropriate alternative would be ASSERT_UNREACHABLE()
> and domain crash.
> 
> This logic corrupts guest state.
> 
> Running with corrupt state is every bit as much an XSA as hitting a
> VMEntry failure if it can be triggered by userspace, but the latter is
> safer and much more obvious.

I disagree. For CPL > 0 we don't "corrupt" guest state any more
than reporting a #GP fault when one is going to be reported
anyway (as long as the VM entry doesn't fail, and hence the
guest won't get crashed). IOW this raising of #GP actually is a
precautionary measure to _avoid_ XSAs.

Nor do I agree with the "much more obvious" aspect: A VM entry
failure requires quite a bit of analysis to recognize what has
caused it; whether a non-pseudo-canonical RIP is what catches your
eye right away is simply unknown. The gprintk() that you delete,
otoh, says very clearly what we have found to be wrong.
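
As an aside, the difference between the two checks under discussion can
be sketched in standalone C. This is illustrative only - LA_BITS = 48 is
an assumed linear-address width, and these helpers are not Xen's actual
is_canonical_address():

```c
#include <stdbool.h>
#include <stdint.h>

#define LA_BITS 48  /* assumed number of linear-address bits (N) */

/* A real canonical check: bits 63:(N-1) must all be identical,
 * i.e. the address sign-extends from bit N-1. */
static bool is_canonical(uint64_t addr)
{
    uint64_t top = addr >> (LA_BITS - 1);           /* bits 63:N-1 */
    return top == 0 || top == (UINT64_MAX >> (LA_BITS - 1));
}

/* The SDM's VM-entry RIP condition: only bits 63:N must be identical,
 * which additionally tolerates the boundary value 0x0000800000000000. */
static bool vmentry_rip_ok(uint64_t rip)
{
    uint64_t top = rip >> LA_BITS;                  /* bits 63:N */
    return top == 0 || top == (UINT64_MAX >> LA_BITS);
}
```

With these definitions, 0x0000800000000000 fails the real canonical
check yet passes the VM-entry condition - exactly the boundary case the
commit message describes.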

> It was the appropriate security fix (give or take the functional bug in
> it) at the time, given the complexity of retrofitting zero length
> instruction fetches to the emulator.
> 
> However, it is one of a very long list of guest-state-induced VMEntry
> failures, with non-trivial logic which we assert will pass, on a
> fastpath, where hardware also performs the same checks and we already
> have a runtime safe way of dealing with errors.  (Hence not actually
> using ASSERT_UNREACHABLE() here.)

"Runtime safe" as far as Xen is concerned, I take it. This isn't safe
for the guest at all, as vmx_failed_vmentry() results in an
unconditional domain_crash().

I certainly buy the fast path aspect of your comment, and if you were
moving the guest state adjustment into vmx_failed_vmentry(), I'd be
fine with the deletion here.

> It isn't appropriate for this check to exist on its own (i.e. without
> other guest state checks),

Well, if we run into cases where we get things wrong, more checks
and adjustments may want adding. Sadly each one of those has a fair
chance of needing an XSA.

As an aside, nvmx_n2_vmexit_handler()'s handling of
VMX_EXIT_REASONS_FAILED_VMENTRY looks pretty bogus - this is a flag,
not a separate exit reason. I guess I'll make a patch ...

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 08:10:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 08:10:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7137.18703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyL0-0000m5-37; Thu, 15 Oct 2020 08:10:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7137.18703; Thu, 15 Oct 2020 08:10:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyKz-0000ly-V3; Thu, 15 Oct 2020 08:10:13 +0000
Received: by outflank-mailman (input) for mailman id 7137;
 Thu, 15 Oct 2020 08:10:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSyKy-0000lt-KF
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 08:10:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92978009-c361-437d-978f-285905641319;
 Thu, 15 Oct 2020 08:10:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0F7D7B21C;
 Thu, 15 Oct 2020 08:10:11 +0000 (UTC)
X-Inumbo-ID: 92978009-c361-437d-978f-285905641319
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602749411;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=l4LuDWCMC95zktYzfkF8K0o2qYSbmL+xm8jeVcN0Nfc=;
	b=eZfuQh+jUZagkbHp1ErSwISVCMTt6O7fiJW8oXm/8QO5pK3uEpTyAQgMgsUo3NgMZo6qI/
	hM//F8W8+vR6PKC4ftQeYoSEcr1RIqQ/0nWBbom+XX/52cv2d2NOhUua9qFXSuUKwKYNck
	QQp58XkRJJC1lmPp21hN166hWEb3Nb8=
Subject: Re: [PATCH 1/2] xen: Remove Xen PVH/PVHVM dependency on PCI
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <20201014175342.152712-1-jandryuk@gmail.com>
 <20201014175342.152712-2-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b74a3f83-cd8a-34a3-b436-95141f01cb20@suse.com>
Date: Thu, 15 Oct 2020 10:10:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201014175342.152712-2-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.10.2020 19:53, Jason Andryuk wrote:
> @@ -76,7 +80,9 @@ config XEN_DEBUG_FS
>  	  Enabling this option may incur a significant performance overhead.
>  
>  config XEN_PVH
> -	bool "Support for running as a Xen PVH guest"
> +	bool "Xen PVH guest support"

Tangential question: Is "guest" here still appropriate, i.e.
isn't this option also controlling whether the kernel can be
used in a PVH Dom0?

>  	def_bool n

And is this default still appropriate?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 08:18:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 08:18:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7140.18715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSySu-00012M-SW; Thu, 15 Oct 2020 08:18:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7140.18715; Thu, 15 Oct 2020 08:18:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSySu-00012F-PK; Thu, 15 Oct 2020 08:18:24 +0000
Received: by outflank-mailman (input) for mailman id 7140;
 Thu, 15 Oct 2020 08:18:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSySt-00012A-FF
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 08:18:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4467d654-5ee6-44f5-ba0c-fde5a041c826;
 Thu, 15 Oct 2020 08:18:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A398FAD43;
 Thu, 15 Oct 2020 08:18:20 +0000 (UTC)
X-Inumbo-ID: 4467d654-5ee6-44f5-ba0c-fde5a041c826
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602749900;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pU93GxhBmH+PsnE7XhOnXvQoX0U937gY/Q4LvdZJRHQ=;
	b=rXYlq+MiDnQIgjWTJL4X2DVYa9dnN4ynOITrTWzCGQYMTiA90yszuf9iyqYDG3IZB6AoN0
	cs1eJl5ZvwTNRyQcpj5TBbI0hZTtNAOj7N/XVGuJHdPFqh+oPkscZxOvIE7Jl39GuD6/kD
	Nrj12orInxQMc4S6//QY40bymSYqQL4=
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
To: Dylanger Daly <dylangerdaly@protonmail.com>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <72589937-a918-96c8-4589-6d30efaead9a@suse.com>
Date: Thu, 15 Oct 2020 10:18:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 02:38, Dylanger Daly wrote:
> I'm currently using Xen 4.14 (Qubes OS 4.1) on a Ryzen 7 PRO 4750U; by default I'll experience softlocks where, for example, the mouse will jolt from time to time - in this state it's not usable.

From what you say below I infer this is in Dom0?

> Adding `dom0_max_vcpus=1 dom0_vcpus_pin` to Xen's CMDLINE results in no more jolting; however, performance isn't what it should be on an 8-core CPU, and softlocks are still a problem within domUs - any sort of UI animation, for example.
> 
> Reverting [this commit (8e2aa76dc1670e82eaa15683353853bc66bf54fc)](https://github.com/xen-project/xen/commit/8e2aa76dc1670e82eaa15683353853bc66bf54fc) results in even worse performance with or without the above changes to CMDLINE, and it's not usable at all.

Your mentioning this surely has a reason, but making the connection
explicit would help. I don't consider it surprising that reverting an
improvement makes things worse. That you bothered to track down a
specific code change also makes me suspect you've experimented with
other scheduler-related settings - if so, please share all the data
you've got. (FAOD - with the information provided I have no idea what
to suggest, sorry.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 08:31:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 08:31:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7144.18726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyfb-0002g6-1b; Thu, 15 Oct 2020 08:31:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7144.18726; Thu, 15 Oct 2020 08:31:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyfa-0002fz-Uw; Thu, 15 Oct 2020 08:31:30 +0000
Received: by outflank-mailman (input) for mailman id 7144;
 Thu, 15 Oct 2020 08:31:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSyfZ-0002fu-BL
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 08:31:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9156eed4-90c9-4cba-94f3-18733ad634a3;
 Thu, 15 Oct 2020 08:31:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9CDA9ADA8;
 Thu, 15 Oct 2020 08:31:27 +0000 (UTC)
X-Inumbo-ID: 9156eed4-90c9-4cba-94f3-18733ad634a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602750687;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ns9oVh6yhg+6fHG7ZCeGchxxhOro3pKFxjEtP6+3YIk=;
	b=uHh03yTmb2k+zMkdMLr7Khbeu6NJ5JKWbHa+Xt69WsWIPNh5nIcov/0w8AHqW5h7QXSRcI
	JBAODbCVtIvFKpwiiOsg6do0LkMEtUi5kgXPh2M+tuzqQ1r5qIx+1MJuzlKS1w2KeCn7qo
	RVcDBAwvfn3uVC3QQah5nfiBbXgK81o=
Subject: Re: [xen-unstable-smoke test] 155811: regressions - FAIL
To: xen-devel@lists.xenproject.org
References: <osstest-155811-mainreport@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Cc: osstest service owner <osstest-admin@xenproject.org>
Message-ID: <06d74ec9-948c-dc89-10a3-171dd364e97f@suse.com>
Date: Thu, 15 Oct 2020 10:31:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <osstest-155811-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.10.2020 22:41, osstest service owner wrote:
> flight 155811 xen-unstable-smoke real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/155811/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  build-amd64                   6 xen-build                fail REGR. vs. 155805

Looks like the changes to tools/include/xen/ population have
increased the chances of running into the race (or some similar
one) reported by Olaf in "races in toolstack build":

make[5]: Entering directory '/home/osstest/build.155810.build-amd64/xen/tools/include'
make -C xen-foreign
make[6]: Entering directory '/home/osstest/build.155810.build-amd64/xen/tools/include/xen-foreign'
mkdir -p xen/libelf acpi
ln -s /home/osstest/build.155810.build-amd64/xen/tools/include/../../xen/include/public/COPYING xen/
ln -s /home/osstest/build.155810.build-amd64/xen/tools/include/../../xen/include/public/*.h xen/
xg_main.c:52:10: fatal error: xen/sys/privcmd.h: No such file or directory
 #include <xen/sys/privcmd.h>
          ^~~~~~~~~~~~~~~~~~~
compilation terminated.
make[4]: *** [/home/osstest/build.155810.build-amd64/xen/tools/debugger/gdbsx/xg/../../../../tools/Rules.mk:145: xg_main.o] Error 1
make[4]: *** Waiting for unfinished jobs....
ln -s /home/osstest/build.155810.build-amd64/xen/tools/include/../../xen/include/public/*/ xen/
ln -s ../xen-sys/Linux xen/sys

Obviously recursing into tools/include/ needs to precede any other
recursion underneath tools/. Or wait - this is a bogus 2nd recursion
into tools/include/; there's an appropriate one very early in the
build. I guess it's

make[4]: Leaving directory '/home/osstest/build.155810.build-amd64/xen/tools/debugger/gdbsx/gx'
make -C xg
make[4]: Entering directory '/home/osstest/build.155810.build-amd64/xen/tools/debugger/gdbsx/xg'
make -C ../../../include
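
The shape of the race can be sketched as a Makefile fragment
(hypothetical target names, assuming GNU make): the xg Makefile lists
xen-headers as a sibling prerequisite of the objects, and under "make
-j" the prerequisites of a single rule are built concurrently, so
compilation can start while the header symlinks are still being
(re)created:

```make
# Hypothetical sketch, not the actual Makefile: "headers" and $(OBJS)
# are sibling prerequisites, so under -j they run in parallel and
# objects may compile before the xen/ symlink farm is populated.
build: headers $(OBJS)

headers:
	$(MAKE) -C ../../../include   # re-populates the xen/ symlinks

# One way to close the race would be an explicit order-only dependency:
$(OBJS): | headers
```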

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 08:33:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 08:33:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7147.18739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyhh-0002q6-FV; Thu, 15 Oct 2020 08:33:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7147.18739; Thu, 15 Oct 2020 08:33:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyhh-0002pz-Bp; Thu, 15 Oct 2020 08:33:41 +0000
Received: by outflank-mailman (input) for mailman id 7147;
 Thu, 15 Oct 2020 08:33:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSyhg-0002pu-Mz
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 08:33:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 967dfeca-9826-433e-8d55-d4266bcd98d5;
 Thu, 15 Oct 2020 08:33:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DFCEBADA8;
 Thu, 15 Oct 2020 08:33:38 +0000 (UTC)
X-Inumbo-ID: 967dfeca-9826-433e-8d55-d4266bcd98d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602750819;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=L7e5viUaGnGwbqXebYUy7WBbaKiwpb1Td6aUpI22ufM=;
	b=oJyrDAa3uoWsEsQ0QfYmvOFZ9IOfR1C8MCARkybo4rLhcVL7JF6xnZuA5OOSddc9i9ZhI3
	x1RaKfs14oc6zJwAXbk8DSCmGnn/AfpmDQq4bqw/1wCuLMtbEBluo55THdk2QNqkAiwRsy
	X5xo87yWbBSLntiePsOzYElejRR22LU=
Subject: Re: [PATCH v2] tools/xenmpd: Fix gcc10 snprintf warning
To: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <005bd16161fe803e9c2805bddc440db31c46169b.1602692002.git.bertrand.marquis@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <470f6555-9935-f581-eae6-6b8b3ed4490d@suse.com>
Date: Thu, 15 Oct 2020 10:33:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <005bd16161fe803e9c2805bddc440db31c46169b.1602692002.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.10.2020 18:14, Bertrand Marquis wrote:
> Add a check for the snprintf return code and ignore the entry if we get
> an error. This should in fact never happen and is more a trick to make
> gcc happy and prevent compilation errors.
> 
> This solves the following gcc warning when compiling for arm32 host
> platforms with optimization activated:
> xenpmd.c:92:37: error: '%s' directive output may be truncated writing
> between 4 and 2147483645 bytes into a region of size 271
> [-Werror=format-truncation=]
> 
> This also solves the following Debian bug:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
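
The pattern the commit describes - checking snprintf's return value and
skipping the entry on error or truncation - can be sketched as follows
(make_path is a hypothetical helper for illustration, not the actual
xenpmd code):

```c
#include <stdio.h>

/* Build "<dir>/<file>" into a fixed-size buffer.  gcc 10's
 * -Werror=format-truncation fires when a '%s' of unbounded length may
 * overflow the destination; checking the return value both handles
 * real truncation and satisfies the compiler. */
static int make_path(char *dst, size_t len, const char *dir,
                     const char *file)
{
    int ret = snprintf(dst, len, "%s/%s", dir, file);

    /* snprintf returns a negative value on output error, or the
     * would-be length: >= len means the result was truncated. */
    if ( ret < 0 || (size_t)ret >= len )
        return -1;
    return 0;
}
```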

Just as a nit - could you fix the typo in the prefix of the patch
subject, to correctly name the component?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 08:42:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 08:42:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7152.18755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyqF-0003lp-Ed; Thu, 15 Oct 2020 08:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7152.18755; Thu, 15 Oct 2020 08:42:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyqF-0003li-AG; Thu, 15 Oct 2020 08:42:31 +0000
Received: by outflank-mailman (input) for mailman id 7152;
 Thu, 15 Oct 2020 08:42:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSyqD-0003la-Sa
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 08:42:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c34ec67-a9ae-44f0-828d-68c1cc858919;
 Thu, 15 Oct 2020 08:42:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3E200ADA8;
 Thu, 15 Oct 2020 08:42:28 +0000 (UTC)
X-Inumbo-ID: 2c34ec67-a9ae-44f0-828d-68c1cc858919
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602751348;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=a/afXgJ3BafDVXWwzXn01PQSSXilZ5CGddn4pqdyalw=;
	b=TbmAtZm1CKi9kz+eabFy74ONXhZdEYYlaGMLV+rfWAGjnAmU6yfdFoXfSmbAubl//NYcEX
	47J/NXcBXWUlOxlo3R41jALDRym6F15x1/NlFRHvmOvNwtLreZGjjZVJ0bWuv/nc955ETm
	/1uTQ6a30+JiFO6T1seADxCnojkW9nU=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Elena Ufimtseva <elena.ufimtseva@oracle.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] tools/gdbsx: drop stray recursion into tools/include/
Message-ID: <ece6c5c2-43f8-36d2-370c-37d988baeb87@suse.com>
Date: Thu, 15 Oct 2020 10:42:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Doing so isn't appropriate here - this gets done very early in the build
process. If the directory is meant to be buildable on its own, different
arrangements would be needed.

The issue has been made more pronounced by 47654a0d7320 ("tools/include:
fix (drop) dependencies of when to populate xen/"), but was there before
afaict.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/debugger/gdbsx/xg/Makefile
+++ b/tools/debugger/gdbsx/xg/Makefile
@@ -12,7 +12,7 @@ CFLAGS += $(CFLAGS_xeninclude)
 all: build
 
 .PHONY: build
-build: xen-headers xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
+build: xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
 # build: mk-symlinks xg_all.a $(XG_HDRS) $(XG_OBJS) Makefile
 # build: mk-symlinks xg_all.a
 
@@ -21,9 +21,6 @@ xg_all.a: $(XG_OBJS) Makefile $(XG_HDRS)
 #	$(LD) -b elf32-i386 $(LDFLAGS) -r -o $@ $^
 #	$(CC) -m32 -c -o $@ $^
 
-xen-headers:
-	$(MAKE) -C ../../../include
-
 # xg_main.o: xg_main.c Makefile $(XG_HDRS)
 #$(CC) -c $(CFLAGS) -o $@ $<
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 08:50:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 08:50:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7155.18767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyxl-0004hB-77; Thu, 15 Oct 2020 08:50:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7155.18767; Thu, 15 Oct 2020 08:50:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSyxl-0004h4-3m; Thu, 15 Oct 2020 08:50:17 +0000
Received: by outflank-mailman (input) for mailman id 7155;
 Thu, 15 Oct 2020 08:50:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSyxj-0004gz-M8
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 08:50:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01d4bdac-761c-43fa-9521-c41563827ea7;
 Thu, 15 Oct 2020 08:50:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F3CCAAEC3;
 Thu, 15 Oct 2020 08:50:13 +0000 (UTC)
X-Inumbo-ID: 01d4bdac-761c-43fa-9521-c41563827ea7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602751814;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JudKjNd3McfcqlmDvRYCEhEs7qtdjVptmBZId3MhdSU=;
	b=MXlQQedM4M7GbsDmMmefpR3L3jY1JzLqUpj1UrAFuwu7jvtzQpmBiR2PDgzPuROj7p9wvX
	oo43ZWy6mYyCaoEONcCXb+4Ouu6Sya1FMwtoFWgDwzJXYKfPxrrncKuaN1F/xPNo5rpiPM
	WfPPCch0y5NLi4QZ5NsWDYn8GR//Fvk=
Subject: Re: [PATCH v2] x86/smpboot: Don't unconditionally call
 memguard_guard_stack() in cpu_smpboot_alloc()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201014184708.17758-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0ed412d9-c9a2-194b-c953-c74ee102664f@suse.com>
Date: Thu, 15 Oct 2020 10:50:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201014184708.17758-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.10.2020 20:47, Andrew Cooper wrote:
> cpu_smpboot_alloc() is designed to be idempotent with respect to partially
> initialised state.  This occurs for S3 and CPU parking, where enough state to
> handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
> when we otherwise want to offline the CPU.
> 
> For simplicity between various configurations, Xen always uses shadow stack
> mappings (Read-only + Dirty) for the guard page, irrespective of whether
> CET-SS is enabled.
> 
> Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
> by first writing out the supervisor shadow stack tokens with plain writes,
> then changing the mapping to being read-only.
> 
> This ordering is strictly necessary to configure the BSP, which cannot have
> the supervisor tokens be written with WRSS.
> 
> Instead of calling memguard_guard_stack() unconditionally, call it only when
> actually allocating a new stack.  Xenheap allocations are guaranteed to be
> writeable, and the net result is idempotency WRT configuring stack_base[].
> 
> Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> This can more easily be demonstrated with CPU hotplug than S3, and the absence
> of bug reports goes to show how rarely hotplug is used.
> 
> v2:
>  * Don't break S3/CPU parking in combination with CET-SS.  v1 would, for S3,
>    turn the BSP shadow stack into regular mappings, and #DF as soon as the TLB
>    shootdown completes.

The code change looks correct to me, but since I don't understand
this part I'm afraid I may be overlooking something. I understand
the "turn the BSP shadow stack into regular mappings" relates to
cpu_smpboot_free()'s call to memguard_unguard_stack(), but I
didn't think we come through cpu_smpboot_free() for the BSP upon
entering or leaving S3.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 09:15:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 09:15:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7159.18778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzLT-0006aI-7w; Thu, 15 Oct 2020 09:14:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7159.18778; Thu, 15 Oct 2020 09:14:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzLT-0006aB-50; Thu, 15 Oct 2020 09:14:47 +0000
Received: by outflank-mailman (input) for mailman id 7159;
 Thu, 15 Oct 2020 09:14:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0U26=DW=protonmail.com=dylangerdaly@srs-us1.protection.inumbo.net>)
 id 1kSzLR-0006a6-Be
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 09:14:45 +0000
Received: from mail-40134.protonmail.ch (unknown [185.70.40.134])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b3ea5527-7da8-44fb-b409-c8f3bb62271f;
 Thu, 15 Oct 2020 09:14:42 +0000 (UTC)
X-Inumbo-ID: b3ea5527-7da8-44fb-b409-c8f3bb62271f
Date: Thu, 15 Oct 2020 09:14:38 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=protonmail.com;
	s=protonmail; t=1602753281;
	bh=uApoKZdPH581TK1za6hWeAIesw7Sgfzm/ZWDVQo6M1U=;
	h=Date:To:From:Cc:Reply-To:Subject:In-Reply-To:References:From;
	b=mmB0NtwBI4tkQtcwWdtADNP029F3ACb0ygOjVPb0GtOAAe8yXQmLm/53ynPFvHE10
	 N+3wtBV1+V28LybpwVycFltktdYGHMNrEQOHRUc9x82Cx2U0952y6OjgkJT/gsRX8y
	 woxSgm0Y54izuoVU1nGQHQ8AQYzylUkfcuO6x3Ds=
To: Jan Beulich <jbeulich@suse.com>
From: Dylanger Daly <dylangerdaly@protonmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Reply-To: Dylanger Daly <dylangerdaly@protonmail.com>
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
Message-ID: <U00A4lb9CgpRhV9huYxk5kvyAAam9UcFJ7h2K1a6-M84ef8W58V4Shq7hmU5WKh3rKaVRl6EiTXVmDc-czrBJvyf7h1mjh3Dc3SPvj8qIog=@protonmail.com>
In-Reply-To: <72589937-a918-96c8-4589-6d30efaead9a@suse.com>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com> <72589937-a918-96c8-4589-6d30efaead9a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM shortcircuit=no
	autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

Hi Jan, thank you for responding.

Indeed this is for dom0. I only recently tried limiting a domU to 1 core and
observed absolutely no softlocks; UI animations are smooth as butter with
1 core only.

Indeed I believe this is a CPU scheduling issue. I've tried both the older
credit scheduler and RTDS, but neither boots correctly.
This CPU has 8 cores (16 threads), though Qubes disables SMT by default.
sched_credit2_max_cpus_runqueue is 16 by default; I've tried setting it to
7 or 8, but either the system won't boot or nothing changes.

There are a number of credit2 tunables, so I'm hoping to experiment and drop
`dom0_max_vcpus=1`; I suspect `sched_credit2_max_cpus_runqueue` is the main
one to play with.

I did manage to get it booting with sched_credit2_max_cpus_runqueue=7, but
it ended up locking up shortly after X launched on dom0.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Thursday, October 15th, 2020 at 7:18 PM, Jan Beulich <jbeulich@suse.com> wrote:

> On 15.10.2020 02:38, Dylanger Daly wrote:
>
> > I'm currently using Xen 4.14 (Qubes 4.1 OS) on a Ryzen 7 4750U PRO. By
> > default I'll experience softlocks where the mouse, for example, will jolt
> > from time to time; in this state it's not usable.
>
> From what you say below I imply this is in Dom0?
>
> > Adding `dom0_max_vcpus=1 dom0_vcpus_pin` to Xen's command line results in
> > no more jolting, however performance isn't what it should be on an 8-core
> > CPU; softlocks are still a problem within domUs, with any sort of UI
> > animation for example.
> >
> > Reverting this commit (8e2aa76dc1670e82eaa15683353853bc66bf54fc) results
> > in even worse performance, with or without the above changes to the
> > command line, and it's not usable at all.
>
> You saying this surely has a reason, but making the connection would
> help. I don't consider it surprising that a revert of an improvement
> makes things worse. You having bothered to find a certain code change
> also makes me suspect you've experimented with other scheduler
> related settings - if so, please share all data you've got. (FAOD -
> with the information provided I have no idea what to suggest, sorry.)
>
> Jan
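For anyone reproducing the experiments above: these scheduler knobs are Xen
command-line options passed via the bootloader. On a GRUB-based dom0 that
typically means something like the following; the variable name and paths are
the common defaults (adjust for your distribution), and the option values are
examples to experiment with, not recommendations.

```shell
# /etc/default/grub -- illustrative snippet
GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=1 dom0_vcpus_pin sched_credit2_max_cpus_runqueue=8"

# Then regenerate the grub configuration, e.g. on a Fedora-based dom0:
# grub2-mkconfig -o /boot/grub2/grub.cfg
```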


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 09:16:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 09:16:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7162.18791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzMy-0006hC-Lb; Thu, 15 Oct 2020 09:16:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7162.18791; Thu, 15 Oct 2020 09:16:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzMy-0006h5-Hb; Thu, 15 Oct 2020 09:16:20 +0000
Received: by outflank-mailman (input) for mailman id 7162;
 Thu, 15 Oct 2020 09:16:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ehY9=DW=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSzMx-0006h0-Jw
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 09:16:19 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.49]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9ea3965-869c-4ab3-8184-2ab5ddb7156b;
 Thu, 15 Oct 2020 09:16:15 +0000 (UTC)
Received: from AM6PR02CA0033.eurprd02.prod.outlook.com (2603:10a6:20b:6e::46)
 by AM6PR08MB5095.eurprd08.prod.outlook.com (2603:10a6:20b:e1::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.24; Thu, 15 Oct
 2020 09:16:12 +0000
Received: from AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:6e:cafe::83) by AM6PR02CA0033.outlook.office365.com
 (2603:10a6:20b:6e::46) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.22 via Frontend
 Transport; Thu, 15 Oct 2020 09:16:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT053.mail.protection.outlook.com (10.152.16.210) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Thu, 15 Oct 2020 09:16:12 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64");
 Thu, 15 Oct 2020 09:16:12 +0000
Received: from 31dfee00bc8b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 27C496DE-782C-4F84-9FB0-DABEE7B679CD.1; 
 Thu, 15 Oct 2020 09:16:03 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 31dfee00bc8b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 15 Oct 2020 09:16:03 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBAPR08MB5686.eurprd08.prod.outlook.com (2603:10a6:10:1a1::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.24; Thu, 15 Oct
 2020 09:16:03 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 09:16:03 +0000
X-Inumbo-ID: c9ea3965-869c-4ab3-8184-2ab5ddb7156b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CBezyFDMAZVTmGEBaNCHQVY0TkXGlydWSP2kyYgPJLU=;
 b=sdgj6ZTp512kv/lgGD2K4PxCt4YkBgBsa1vqYN8LrhYHB+dr8KVux4Wj4+6LYn2EfDhL8Bl028XWrXNRzkuf0y5hLo4Lxaari0/KhcEbmg4Yu7T59Jj9MZ6xlDMA+dQLMrG86DIOTdQ9rut4pwaZdYsgGa9qIhxLjZ+GRTOTmWc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e097600fb1cb1920
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a6YKD/miVUBqsaYEPZyGM0gFgk+JVKzYidx63cSm2ZZAiiQhoah2wpIx/GIXawyFbqVmZUhJ12R5n380qeIQeM0dO3HREmMe+6R8ZoGXfWW/SCILV2QXFg2iIGSIXj72Df7AIZkfCpwnDd6w3y6ACZiwp9KBDR8eFQXJM2yIyazX3KUhLPUkb6E70QphzCD8gvLdC7iVEGI6NFxrBtWMw12bmEfOQrm+hjliQZaLvKXaIP0Rfhn7s9LMMpnBpINGhy/chzWcNHtQQRrIZ8V8NqM3Pemwt+a2qf61ucENsi4UruNo1H/EtCyBpqsImEKPey9GgxVJZ/t6OY7aWOc3Yg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CBezyFDMAZVTmGEBaNCHQVY0TkXGlydWSP2kyYgPJLU=;
 b=WJCOftNIHnPjkZAtkiRKOyqywlHEPa+YOTN0zwbiCEEE6txwNndwETsZMoWDBh3GNi1pKCXHOIFCaeomqvmBPHYeVxs4PH7RUjZA4WF9KomfWzwWr4jiIuoYmd1xmXyy8k0Ytfvf2vYGXx1xyZ2czJFBJpwYMzoU0toIBa3ARcJImaKdcmmdIzJDTtUT4Of+nAzdGi3Hx4XleamhYwwUTkVKsOc5QNHsdUQmNHf19l74oIqThqAGGjPf1xxKUJxmEQosluLP/mbrXcxxs3yYwyd0WCHA2Br5Y2uVMWcE4cIvaUqlzK2+bLhe6sirNO2rVQzr+PV8AIJ6UQgKTI1VLw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2] tools/xenmpd: Fix gcc10 snprintf warning
Thread-Topic: [PATCH v2] tools/xenmpd: Fix gcc10 snprintf warning
Thread-Index: AQHWokU2+acqwbijJUavOAENd1LPUqmYVziAgAAL2AA=
Date: Thu, 15 Oct 2020 09:16:03 +0000
Message-ID: <F132913E-955F-44DA-8D57-3A27DF764D89@arm.com>
References:
 <005bd16161fe803e9c2805bddc440db31c46169b.1602692002.git.bertrand.marquis@arm.com>
 <470f6555-9935-f581-eae6-6b8b3ed4490d@suse.com>
In-Reply-To: <470f6555-9935-f581-eae6-6b8b3ed4490d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 1f4af322-d9c9-4d14-5298-08d870eaf607
x-ms-traffictypediagnostic: DBAPR08MB5686:|AM6PR08MB5095:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB509510CB438A21B892EC01129D020@AM6PR08MB5095.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 wlpHTejlwY7k23cHbvnwFMe8j+F+h6h2RoO6ZqBscjyW8dh20ezjeA9vgjcE3zKqyyt3OSFwsyAJIiunFOdArVGDY7zlMkIQKbJ1DJbhvMKDCpRLsHjyLocd+H4KITEQ3TdIj0bBrbGw/Kd3/CYYnrDazO3IyjOKIFQ0MgAoGxPAm3GvNtF26F+7LR81zc6wqjOKLo51UFUCPvlTyxX79Wt1pbLWPgOtofI4e4XZCgxqDpufSbf2QyHFO1jGmfI+kotkmk9jLIzIRPiInM+yBDoCCkSkB2LGF5NQtxG6cRCZkfCXGRh6tlpuW2367s8NKaz3lWhomRZho1StgwwNzU1X0SJgA6wun4EHCmAd/hd3gzD7egWGtx9P8OInGxzSi40LEaowrUjo5Lte1eOW0g==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(366004)(39860400002)(346002)(376002)(6916009)(2616005)(6486002)(54906003)(53546011)(478600001)(4744005)(33656002)(966005)(26005)(186003)(316002)(5660300002)(86362001)(6512007)(8676002)(2906002)(36756003)(83380400001)(64756008)(76116006)(91956017)(66556008)(66446008)(66476007)(66946007)(6506007)(83080400001)(8936002)(71200400001)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 w5WXXFPDL24rdJ8ycFiXenIw+BQZU4xO1QcxaZZO3Rf0X6cZch00vY5+atFvg0VkhGcB9n/UH6IURyqkux9sv3um9h6vy+qay0cKU1+QT+1Arp7TNQaXLBVwvUEi/wN225/oEfobvQGziT/VzrQFLS6k5iYzMmW6DgIMblcaxApHjHlQFzib8bzKpPcnyenE5stsfec30xnTaKqV30gBRYgRZxk3I1MhKK8PPg0o/MSH4SfgmbLozhP3uxR8NH6Dq/P9cts85t9Nuhrshm4BH8tApHIAJr8BqS2oMj8yJeyM141a7PUM1uPIhsUXZvH2MCg/Ylb4fE5kykfPARRt04ShMEaoe5Jfp6AFvQIt+J0TL+zeDuZhrT2WnQyjeC5ihBlLOBa7EKCT/4dB6Of84Lft/bQU7Fp8djAgKSLj4V11hzRhL6hsqaPyqiJaUGnO88rxiPIShJ6e39sj0z+0Ozd2hsw5jnkDa5FJcmJ6MYAD+BgTBpTQZSESPxm2JT0oCf5weHxCrRpIC9lnPKdxwAbKVZD0RfOmz4L730FLjEc1xnTtJm4CAc6XqYD+YArxEOEU8YmPCtKwBEGmCze7GJ2lqOaBS5qyCPStHJwB+OAQ1/z6pq3nIf4ElHUOjD+Jxws1VdxZtthhAlqx7+SjUw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <41118B39905B04469B30B5F11E140C61@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5686
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a182debb-e1ae-4666-293b-08d870eaf07c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WhO8pK9ueAvK9ubqN0k3mc8D0Ado1MzffbtxvGhwZ88JcYvF043wkCSlVRh9Y0ZsxyfUDo0QTovI2iLEhdw19we7dA4y8mczeVHs8373+K3iH8fM8HykMDstndvPAI47H4PLZuNaC7gp0dtlQSAuaXzTM8UahrTg8ykA9HzRdZ192iZYiaIEzV6zF4EcC7HDk45gFUmHbprL99D4Z9RsvsZOpGW6U2PZFzoSmYDNUsIgZ5ZA0CTOSO9qToYF5V6zLdLG5iFRyT22b+pU0dIU7Ov9jeCVXvLRG1hAF2q8Puw7A+Gmaaz3IsqsEKM37OviTR2KAN8psQAohPS/C0ar9zgL0w/fYRZpiKtNntLg0kA9JIMmkeZgVg1sOWoO/LPk0P//KgDbOY2yAloeIiPvUvi1rdOIhC3TCyKvzYg7Q4uoKX3mjUW/M9ubkAkFkZhDM6SPfehYOSc+wPweyLHJNzhV+vD5G0FSuSwcp8NGA1Q=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39860400002)(136003)(346002)(376002)(46966005)(4744005)(356005)(5660300002)(83380400001)(83080400001)(316002)(8936002)(70586007)(70206006)(82310400003)(6486002)(82740400003)(81166007)(33656002)(47076004)(2906002)(86362001)(6512007)(2616005)(4326008)(966005)(478600001)(26005)(186003)(36756003)(8676002)(53546011)(6506007)(336012)(54906003)(6862004)(36906005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Oct 2020 09:16:12.4608
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1f4af322-d9c9-4d14-5298-08d870eaf607
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5095

Hi,

> On 15 Oct 2020, at 09:33, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 14.10.2020 18:14, Bertrand Marquis wrote:
>> Add a check for snprintf return code and ignore the entry if we get an
>> error. This should in fact never happen and is more a trick to make gcc
>> happy and prevent compilation errors.
>>
>> This is solving the following gcc warning when compiling for arm32 host
>> platforms with optimization activated:
>> xenpmd.c:92:37: error: '%s' directive output may be truncated writing
>> between 4 and 2147483645 bytes into a region of size 271
>> [-Werror=format-truncation=]
>>
>> This is also solving the following Debian bug:
>> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Just as a nit - could you fix the typo in the prefix of the patch
> subject, to correctly name the component?

Oh right, pmd not mpd.
v3 on the way.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 09:16:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 09:16:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7163.18803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzN9-0006lL-1p; Thu, 15 Oct 2020 09:16:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7163.18803; Thu, 15 Oct 2020 09:16:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzN8-0006lC-Ui; Thu, 15 Oct 2020 09:16:30 +0000
Received: by outflank-mailman (input) for mailman id 7163;
 Thu, 15 Oct 2020 09:16:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ehY9=DW=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kSzN7-0006kj-Hr
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 09:16:29 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c9d6dfcc-03f4-45c6-8752-cdef913e6fba;
 Thu, 15 Oct 2020 09:16:29 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DADA813D5;
 Thu, 15 Oct 2020 02:16:28 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 504323F66B;
 Thu, 15 Oct 2020 02:16:28 -0700 (PDT)
X-Inumbo-ID: c9d6dfcc-03f4-45c6-8752-cdef913e6fba
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3] tools/xenpmd: Fix gcc10 snprintf warning
Date: Thu, 15 Oct 2020 10:16:09 +0100
Message-Id: <14ac4900dcf4fb9b45ce4f5e3d60de7f7e3602ab.1602753323.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

Add a check for snprintf's return code and ignore the entry if we get an
error. This should in fact never happen and is more a trick to make gcc
happy and prevent compilation errors.

This solves the following gcc warning when compiling for arm32 host
platforms with optimization enabled:
xenpmd.c:92:37: error: '%s' directive output may be truncated writing
between 4 and 2147483645 bytes into a region of size 271
[-Werror=format-truncation=]

This also solves the following Debian bug:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 tools/xenpmd/xenpmd.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/xenpmd/xenpmd.c b/tools/xenpmd/xenpmd.c
index 35fd1c931a..12b82cf43e 100644
--- a/tools/xenpmd/xenpmd.c
+++ b/tools/xenpmd/xenpmd.c
@@ -102,6 +102,7 @@ FILE *get_next_battery_file(DIR *battery_dir,
     FILE *file = 0;
     struct dirent *dir_entries;
     char file_name[284];
+    int ret;
     
     do 
     {
@@ -111,11 +112,15 @@ FILE *get_next_battery_file(DIR *battery_dir,
         if ( strlen(dir_entries->d_name) < 4 )
             continue;
         if ( battery_info_type == BIF ) 
-            snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
+            ret = snprintf(file_name, sizeof(file_name), BATTERY_INFO_FILE_PATH,
                      dir_entries->d_name);
         else 
-            snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
+            ret = snprintf(file_name, sizeof(file_name), BATTERY_STATE_FILE_PATH,
                      dir_entries->d_name);
+        /* This should not happen but is needed to pass gcc checks */
+        if (ret < 0)
+            continue;
+        file_name[sizeof(file_name) - 1] = '\0';
         file = fopen(file_name, "r");
     } while ( !file );
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 09:20:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 09:20:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7180.18815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzQx-0007hi-Jy; Thu, 15 Oct 2020 09:20:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7180.18815; Thu, 15 Oct 2020 09:20:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzQx-0007hb-FR; Thu, 15 Oct 2020 09:20:27 +0000
Received: by outflank-mailman (input) for mailman id 7180;
 Thu, 15 Oct 2020 09:20:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kSzQw-0007hW-G3
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 09:20:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2658bd47-bd55-496a-a07f-dba18147df3e;
 Thu, 15 Oct 2020 09:20:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4D25AF16;
 Thu, 15 Oct 2020 09:20:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602753625;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QMv7UBCOvFBSjIe6lxfS+zRlNYNSopMWBW8nh0iSO2k=;
	b=cf5J2IiMXlLYUlzoCWwW7KWh9DPz1TbBslSxPpFHWEghaDFQvO203IrRgPHGT+gy85WKXx
	wdStFKlLLEsHTpGeheZh6I2fV6Fdw/Fykw7PYgGEkQ3bYxW0iE0s4M/iSXg1bokQuXr5kK
	1m+e8PsTpstUOKLtE9Yc35MBoUkkWtM=
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
To: Dylanger Daly <dylangerdaly@protonmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
 <72589937-a918-96c8-4589-6d30efaead9a@suse.com>
 <U00A4lb9CgpRhV9huYxk5kvyAAam9UcFJ7h2K1a6-M84ef8W58V4Shq7hmU5WKh3rKaVRl6EiTXVmDc-czrBJvyf7h1mjh3Dc3SPvj8qIog=@protonmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5db65e32-31aa-57a5-f82b-ebe497f493f5@suse.com>
Date: Thu, 15 Oct 2020 11:20:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <U00A4lb9CgpRhV9huYxk5kvyAAam9UcFJ7h2K1a6-M84ef8W58V4Shq7hmU5WKh3rKaVRl6EiTXVmDc-czrBJvyf7h1mjh3Dc3SPvj8qIog=@protonmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 11:14, Dylanger Daly wrote:
> Indeed this is for dom0, I only recently tried limiting a domU to 1 core and observed absolutely no softlocks, UI animations are smooth as butter with 1 core only.
> 
> Indeed I believe this is a CPU Scheduling issue, I've tried both the older credit and RTDS however both don't boot correctly.

This wants reporting (with sufficient data, i.e. at least a serial log)
as separate issues.

> The number of cores on this CPU is 8, 16 threads however Qubes by default disables SMT, sched_credit2_max_cpus_runqueue is 16 by default, I've tried testing with setting this to 7 or 8 however it'll either not boot, or nothing will change.

Failure to boot, unless with insane command line options, should always
be reported so it can be fixed.

I'm afraid neither part of the reply gets you/us any closer to an
understanding of your softlockup issues. As a random thought, have you
tried disabling use of (deep) C-states? This is known to have helped
to work around errata on other hardware, so may be worth a try.
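For reference, limiting C-states can be done from the Xen command line; a hedged sketch (max_cstate and cpuidle are documented Xen options, but the GRUB variable name varies per distro and the values shown are only examples to experiment with):

```
# e.g. in /etc/default/grub, then regenerate the GRUB configuration:
GRUB_CMDLINE_XEN_DEFAULT="... max_cstate=1"
# or disable cpuidle altogether:
GRUB_CMDLINE_XEN_DEFAULT="... cpuidle=0"
```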

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 09:42:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 09:42:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7184.18827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzlt-00013z-DJ; Thu, 15 Oct 2020 09:42:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7184.18827; Thu, 15 Oct 2020 09:42:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kSzlt-00013s-9d; Thu, 15 Oct 2020 09:42:05 +0000
Received: by outflank-mailman (input) for mailman id 7184;
 Thu, 15 Oct 2020 09:42:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFLp=DW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kSzlr-00013n-Cw
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 09:42:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5564f6f1-62eb-496e-aae3-8baa81218f5b;
 Thu, 15 Oct 2020 09:42:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BAFECAFFB;
 Thu, 15 Oct 2020 09:42:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602754920;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CGUEZhw5OsOWwTbxcg48LYkVTLy0xSGFFDqEYGHwsdY=;
	b=pCExSGjFkleCxW8TLXo0dw77TlP1sqwZmF77XE/mK0vyOkMZ4ghYySDaBHg+XIouJLZPml
	qVnmvJ8YbQRmhTKyWoxir7/tk01pEtSBAtrrTmsA4v7gJq2lLF+fezqqANGs+lSr7jYKuA
	2nYuV3f51cs55UHSonHP243h9bIZs94=
Subject: Re: [PATCH 2/2] xen: Kconfig: nest Xen guest options
To: Jason Andryuk <jandryuk@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20201014175342.152712-1-jandryuk@gmail.com>
 <20201014175342.152712-3-jandryuk@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d8a8ed95-ed55-4ccf-1b54-8d97db908742@suse.com>
Date: Thu, 15 Oct 2020 11:41:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201014175342.152712-3-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.10.20 19:53, Jason Andryuk wrote:
> Moving XEN_512GB allows it to nest under XEN_PV.  That also allows
> XEN_PVH to nest under XEN as a sibling to XEN_PV and XEN_PVHVM giving:
> 
> [*]   Xen guest support
> [*]     Xen PV guest support
> [*]       Limit Xen pv-domain memory to 512GB
> [*]       Xen PV Dom0 support

This currently has wrong text/semantics:

It should be split into CONFIG_XEN_DOM0 and CONFIG_XEN_PV_DOM0.

Otherwise the backends won't be enabled by default for a PVH-only
config meant to be Dom0-capable.

You don't have to do that in your patches if you don't want to, but
I wanted to mention it since you are touching this area of Kconfig.
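For reference, the nesting in the quoted tree comes purely from Kconfig dependency ordering; a minimal sketch (the symbol names are the real ones, but prompts and extra dependencies are trimmed, so this is not the actual Kconfig content):

```
config XEN
	bool "Xen guest support"

config XEN_PV
	bool "Xen PV guest support"
	depends on XEN

config XEN_512GB
	bool "Limit Xen pv-domain memory to 512GB"
	depends on XEN_PV

config XEN_PVHVM
	bool "Xen PVHVM guest support"
	depends on XEN

config XEN_PVH
	bool "Xen PVH guest support"
	depends on XEN
```

menuconfig indents an entry under the preceding one when it depends on it, which is what produces the tree shown above.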

> [*]     Xen PVHVM guest support
> [*]     Xen PVH guest support
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 10:10:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 10:10:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7201.18838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0Ci-000344-M8; Thu, 15 Oct 2020 10:09:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7201.18838; Thu, 15 Oct 2020 10:09:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0Ci-00033x-J4; Thu, 15 Oct 2020 10:09:48 +0000
Received: by outflank-mailman (input) for mailman id 7201;
 Thu, 15 Oct 2020 10:09:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kT0Ch-00033s-BT
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:09:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b77c1c09-6d7d-46ef-92aa-93881a7f9fdd;
 Thu, 15 Oct 2020 10:09:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 59A3FB22C;
 Thu, 15 Oct 2020 10:09:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602756585;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qwWbQztlm/3z6DQKZjph3Kd9N835MB1UnRBKmLo9dQs=;
	b=uwGwia/R9kSihlxEsJSjC3o5dJL0ZisW8mWaqta6L1NcrwnW1FF5kDEoWCnogldhIOJK8t
	RfT7paiyaX+I81wfFBps+gSMoQvcOjslpI/d5AR1oQV5Cre91zAXTCBhlgWV2CzgAztewn
	mERGWXiPzB0VqNMF57Ve1QYb4W31poo=
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
Date: Thu, 15 Oct 2020 12:09:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.2020 09:58, Jürgen Groß wrote:
> After a short discussion on IRC yesterday I promised to send a mail
> how I think we could get rid of creating dynamic links especially
> for header files in the Xen build process.
> 
> This will require some restructuring, the amount will depend on the
> selected way to proceed:
> 
> - avoid links completely, requires more restructuring
> - avoid only dynamically created links, i.e. allowing some static
>    links which are committed to git

While I like the latter better, I'd like to point out that not all
file systems support symlinks, and hence the repo then couldn't be
stored on (or the tarball expanded onto) such a file system. Note
that this may be just for viewing purposes (I typically do this at
home), i.e. there's no resulting limitation from the build process
needing symlinks. Similarly, once we fully support out-of-tree
builds, there wouldn't be any restriction from this as long as just
the build tree is placed on a capable file system.

As a result I'd like to propose variant 2´: Reduce the number of
dynamically created symlinks to a minimum. This said, I have to
admit that I haven't really understood yet why symlinks are bad.
They exist for exactly such purposes, I would think.

> The difference between both variants is affecting the public headers
> in xen/include/public/: avoiding even static links would require
> adding another directory or moving those headers to another place in the
> tree (either use xen/include/public/xen/, or some other path */xen),
> leading to the need to change all #include statements in the hypervisor
> using <public/...> today.
> 
> The need for the path to have "xen/" is due to the Xen library headers
> (which are installed on users' machines) including the public
> hypervisor headers via "#include <xen/...>" and we can't change that
> scheme. A static link can avoid this problem via a different path, but
> without any link we can't do that.
> 
> Apart from that decision, lets look which links are created today for
> accessing the header files (I'll assume my series putting the library
> headers to tools/include will be taken, so those links being created
> in staging today are not mentioned) and what can be done to avoid them:
> 
> - xen/include/asm -> xen/include/asm-<arch>:
>    Move all headers from xen/include/asm-<arch> to
>    xen/arch/<arch>/include/asm and add that path via "-I" flag to CFLAGS.
>    This has the other nice advantages that most architecture specific
>    files are now in xen/arch (apart from the public headers) and that we
>    can even add generic fallback headers in xen/include/asm in case an
>    arch doesn't need a specific header file.

Iirc Andrew suggested years ago that we follow Linux in this regard
(and XTF already does). My only concern here is the churn this will
cause for backports.

> - xen/arch/<arch>/efi/*.[ch] -> xen/common/efi/*.[ch]:
>    Use vpath for the *.c files and the "-I" flag for adding common/efi to
>    the include path in the xen/arch/<arch>/efi/Makefile.

Fine with me.
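A hedged sketch of what that could look like in xen/arch/<arch>/efi/Makefile (the object list, variable names and paths are illustrative, not the actual build code):

```
# Build the common EFI sources without per-arch symlinks:
obj-y += boot.init.o runtime.o

# Find the *.c files in common/efi instead of via local links.
vpath %.c $(XEN_ROOT)/xen/common/efi

# And let includes of the common EFI headers resolve.
CFLAGS += -I$(XEN_ROOT)/xen/common/efi
```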

> - tools/include/xen/asm -> xen/include/asm-<arch>:
>    Add "-Ixen/arch/<arch>/include" to the CFLAGS. It might be a nice idea
>    to move the headers needed by the tools to xen/arch/include/tools/asm
>    and use "-Ixen/arch/<arch>/include/tools" instead, but this would
>    require either the same path added to the hypervisor's CFLAGS or a
>    modification of the related #include statements.

Separating headers intended for tools consumption is okay with me,
but I dislike the tools/ infix in the path you suggest. Since there
can't possibly be any shared prototypes, how about defs/ or some
such not specifically naming either of the consuming components
(and thus visually excluding the other)?

Of course, the further asm/ underneath is kind of ugly because of
being largely unnecessary. Perhaps we could have just
xen/arch/include/defs/ and use #include <defs/xyz.h>?

> - tools/include/xen/foreign -> tools/include/xen-foreign:
>    Get rid of tools/include/xen-foreign and generate the headers directly
>    in xen/include/public/foreign instead.

Except that conceptually building in tools/ would better not alter
the xen/ subtree in any way.

> - tools/include/xen/sys -> tools/include/xen-sys/<OS>:
>    Move the headers from tools/include/xen-sys/<OS> to
>    tools/include/<OS>/xen/sys/ and add the appropriate path to CFLAGS.

Not very nice imo because of the otherwise pointless intermediate
directories, but if we truly need to minimize symlink usage, then
so be it.

> - tools/include/xen/lib/<arch>/* -> xen/include/xen/lib/<arch>/*:
>    Move xen/include/xen/lib/<arch> to xen/include/tools/lib/<arch> and
>    add "-Ixen/include/tools" to the CFLAGS of tools.

Why not -Ixen/include/xen without any movement? Perhaps because
-Ixen/include/tools wouldn't work either, due to code using

#include <xen/lib/<arch>/xyz.h>

? I.e. you really mean "Move xen/include/xen/lib/<arch> to
xen/include/tools/xen/lib/<arch>"? Not very nice. I have to admit
I can't see why the headers in xen/include/xen/lib/<arch>/ don't
use

#include "xyz.h"

But then this would leave the problem with xen/lib/<arch>/*.c
using similar #include-s. Would dropping xen/ from the paths
perhaps help, moving xen/include/xen/lib/* to xen/include/lib/*?
Istr suggesting this when the lib/ subtrees were introduced ...

> - tools/include/xen/libelf/* -> xen/include/xen/*:
>    Move the affected headers from xen/include/xen to
>    xen/include/tools/libelf and reuse the above set CFLAGS.

Why not xen/include/libelf/ or xen/include/lib/elf/?
libelf-private.h has distinct #include-s for Xen and the tools
anyway. All that's needed is that these headers don't sit in a
directory where headers also live which are not supposed to be
visible.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 10:10:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 10:10:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7203.18851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0DZ-0003ol-1C; Thu, 15 Oct 2020 10:10:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7203.18851; Thu, 15 Oct 2020 10:10:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0DY-0003oe-UB; Thu, 15 Oct 2020 10:10:40 +0000
Received: by outflank-mailman (input) for mailman id 7203;
 Thu, 15 Oct 2020 10:10:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ehY9=DW=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kT0DX-0003oY-AM
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:10:39 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.41]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0818378f-7a76-412f-ae2d-703fd6e6ca35;
 Thu, 15 Oct 2020 10:10:35 +0000 (UTC)
Received: from AM6PR02CA0026.eurprd02.prod.outlook.com (2603:10a6:20b:6e::39)
 by VI1PR08MB5487.eurprd08.prod.outlook.com (2603:10a6:803:13c::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.22; Thu, 15 Oct
 2020 10:10:29 +0000
Received: from VE1EUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:6e:cafe::63) by AM6PR02CA0026.outlook.office365.com
 (2603:10a6:20b:6e::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Thu, 15 Oct 2020 10:10:29 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT028.mail.protection.outlook.com (10.152.18.88) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Thu, 15 Oct 2020 10:10:28 +0000
Received: ("Tessian outbound d5e343850048:v64");
 Thu, 15 Oct 2020 10:10:28 +0000
Received: from 652f2affa5c7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5E79CEDE-EF31-45F5-87D5-5C4807657F24.1; 
 Thu, 15 Oct 2020 10:10:22 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 652f2affa5c7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 15 Oct 2020 10:10:22 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5179.eurprd08.prod.outlook.com (2603:10a6:10:e7::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.28; Thu, 15 Oct
 2020 10:10:20 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 10:10:20 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=87f1sDpD7lWh4y7tN2tMTa1VeMTQwJBRmueQcvu7nlk=;
 b=08GY0XFSUuDyBzI7wJRorm1/VgkJl0rsVzmUaF9ByOmEYZDw6eptuffdYkc5ZUyxqAncgoTQSH/ybL9kDnXD78ERmsAGt9M4DcbX3S0ZYqEnWzpiSrQtMX5ksok6md6TmZTmpLR7riKn6+WZved4GsXPlZHNk4cHEw+4X6nCfuc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 04537f219bf83984
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=b8XrDjm9vYwr2C/6siwhOeoFto4iTmFAywpPMBEJUhUOg6rOtaVB6MW2l1u97q83z6bi5IYI29pSY0DMTnPdu2G0UKIa8tLx0Un5eWGq4FrhQI3SqqKrT8XTehmxkClDmumPbyUlV7E0cazviugb+JDsz+OLMKuOKEqmKBwPk5sD3W+Od3SzvMRxV5OH2urfNDNnpF3gbadFz1HDKuavfD0mZSNQt6DiL7Cn0lCGF/8YXwZP52PJso3iTCkCqla6hhDiMZb7ZRWcy+DFwEBvCWI7SR88RQ9WDnW+dPBgwg/FjsU0jzn9a0FcPPUH7iNFzRc69ytRYNQMarsWKHMjWQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=87f1sDpD7lWh4y7tN2tMTa1VeMTQwJBRmueQcvu7nlk=;
 b=Kzb56zhkc8sZ1rpaeFlLzt04VFNvSezjOqIWn38t/fUBtdXakZYAr7ahzyDeNAY+2KmiZwUOGq/bXU5DkOcwLEqRV/9WTCJ3T2XlNzp7mxcWnFFPNzuSvPXPoVP8rkFKxNKcR1TP/7c/GA9rNhKgq/gcjufL4+vJXZcuYPr5okBDMi3+C1+VthWQgsW8S3/OXUHfx/kW4C7qyyi1/hJ9BclhzFkjhBeAwFI01RgWlAv/iylahHMc2BtLaYOTg062fVsEnFjT/RVa7IyUxoS6IiEM4EF/o6svWKQ02RPK/mPn/K47qT9l6XXwuOtXVl6ttvE9snne89pGdF1AqyTt+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	"open list:X86" <xen-devel@lists.xenproject.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Index: AQHWohbM+uoR14U9oEmwJFjy+kmnIqmW9/2AgABK1ACAABYOAIAAQVGAgADYZ4A=
Date: Thu, 15 Oct 2020 10:10:20 +0000
Message-ID: <C07DA84A-6527-4480-99CC-F6B26553E3FE@arm.com>
References:
 <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
 <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com>
 <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
 <alpine.DEB.2.21.2010141350170.10386@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010141350170.10386@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: be303455-a152-48e0-8cff-08d870f28b1b
x-ms-traffictypediagnostic: DB8PR08MB5179:|VI1PR08MB5487:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB54879CA93289985E0AFFC0379D020@VI1PR08MB5487.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 UiOsnH0fhfcJK8p6cmDdkWX6o1IYpPiqEGHntdDHAgVuDh9wQMrxFU7hEv8m0qJKQugzaZSBSUfA3p76VEhLaaxuCULMosh9ltsLFk221JT1D/s0iW4M5iyODXboW06dN59khi2h6EVOtPr7dsvy3dQGwAGJbqu+sgoqO/VvJXzEjVy0oJwtrWLLHBGugdCosMyYI/hD1/JT/CtOsQJ72Ks/9l1Wy7cHFuWyhtFC7VHm82Mbo5bqH2wZDt6kmayasdl8L1+UomTAae0t8e4i85ZlXRlW2flIfmsvYGSGNgUXBHv/NphteYorYRMVlcfuoPCvtBciUSZ95MD9oHi4ew==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(366004)(39860400002)(396003)(136003)(83380400001)(8676002)(186003)(36756003)(71200400001)(66946007)(86362001)(26005)(76116006)(64756008)(316002)(8936002)(5660300002)(6512007)(66476007)(66446008)(33656002)(66556008)(6506007)(478600001)(2616005)(54906003)(2906002)(4326008)(6486002)(91956017)(6916009)(53546011);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 XmvNaFEPtY3TmeB4cnbWtyK2yOK5UDiuTLfVdnXJ1sVNzSmi5p5ZYVvXh8pzu/NXYQ1RTe0z3/b2hIZXgVGXVRC8FT5Q3A33U8197k8fSqWBefZgphAJY4YCps5vVkEL2WgcPP/Zj5AzXnoZqx822OCfYHHMT+vWon8OxxHXg0Umw7R+2dcJx1f3csF36DeVKWqCocUI44zw6vnKm/jtEr2g+AVT/NeSbSer0WQWvpyLCXQRWE+rDOh6ekrmoyqDfumR6kwC4YXueruqlba4ooqSDB8ah7kiOrz0pcJs/2TPGi9QVeQypyvK/0PYy5f3FR6N5eMPQANQocEkXql9Msvc2V6REnx/JZeit7nAqpA+0DzFwUOawPHNYaLAv0lXBbSCUlpdBU2ro7H3gaeaxqJhrKAKxjusJ09pQXifeZNJOYjTv5gYlDGjsuQI9YBaxg4YJjL/tBN/OpcNv/WAQh3yiB/DijrmZsazvAQHBZvUpNTHzf4ZmZZAFO6blHf/R92zGRlvtwm+qkwjltBBBYRgOipvbm62ALgkWiKR112MVZnKG6zB1RmCn1JeEl7snVxiF3LDRozCKy6z/SWS073vthd7+l+mbbC/tZxu/RDDVV3C0lYdfziuWeSxPDcmkeOJnpC5g1ziZ+xbfcBwZg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <412B582677BE464E8C326BA3A04557FC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5179
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	dc034169-ca9e-4ba4-93f3-08d870f285d8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	bqcp62YEFY7tAY7TJp+UvHcURYZbRi79bcB5+TNS2ILzYH0SsAs+ro6UKrHTStFUvMCBMmSA1X4JkfIB3t8JgFnCmg0CwRr5L/xZDmGPiX9Nw0LYps1o5X1RzcDRBsvyPzRxfO67sWfOhqYRR6Lcc+RAKWeJ2nUhn3nrOBI4Cp8sCs8hKn4S01dqf+YWTiEIzbfcx3ZkjmgidI2T1U48E+chp0Taa3bp9T+6KFhdAs6f+3vTLHySC6bvOCLbHmFpFrEwORY23P+YwFAId06fEmsDcs5ZUwIJiPi3/ZpmUgATT05yV6f1nnqHNuuvO0VJaOhB4bSdkFCrgUZlwUjbmTOsIguInh2abkLn5fC14yj5YyteXCHqGgEAaF3nojc4HJKZQqVJRbi+VfhiVFIYiw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(39860400002)(136003)(396003)(46966005)(6862004)(47076004)(107886003)(36906005)(6486002)(36756003)(336012)(54906003)(2616005)(86362001)(4326008)(6512007)(8676002)(33656002)(8936002)(26005)(316002)(356005)(70206006)(186003)(82310400003)(5660300002)(478600001)(82740400003)(83380400001)(6506007)(53546011)(70586007)(81166007)(2906002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Oct 2020 10:10:28.9680
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: be303455-a152-48e0-8cff-08d870f28b1b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5487

Hi,

> On 14 Oct 2020, at 22:15, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Wed, 14 Oct 2020, Julien Grall wrote:
>> On 14/10/2020 17:03, Bertrand Marquis wrote:
>>>> On 14 Oct 2020, at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>> 
>>>> On 14/10/2020 11:41, Bertrand Marquis wrote:
>>>>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>>>>> not implementing the workaround for it could deadlock the system.
>>>>> Add a warning during boot informing the user that only trusted guests
>>>>> should be executed on the system.
>>>>> An equivalent warning is already given to the user by KVM on cores
>>>>> affected by this errata.
>>>>> 
>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>> ---
>>>>> xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>>>>> 1 file changed, 21 insertions(+)
>>>>> 
>>>>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>>>>> index 6c09017515..8f9ab6dde1 100644
>>>>> --- a/xen/arch/arm/cpuerrata.c
>>>>> +++ b/xen/arch/arm/cpuerrata.c
>>>>> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>>>>> 
>>>>> #endif
>>>>> 
>>>>> +#ifdef CONFIG_ARM64_ERRATUM_832075
>>>>> +
>>>>> +static int warn_device_load_acquire_errata(void *data)
>>>>> +{
>>>>> +    static bool warned = false;
>>>>> +
>>>>> +    if ( !warned )
>>>>> +    {
>>>>> +        warning_add("This CPU is affected by the errata 832075.\n"
>>>>> +                    "Guests without required CPU erratum workarounds\n"
>>>>> +                    "can deadlock the system!\n"
>>>>> +                    "Only trusted guests should be used on this
>>>>> system.\n");
>>>>> +        warned = true;
>>>> 
>>>> This is an antipattern, which probably wants fixing elsewhere as well.
>>>> 
>>>> warning_add() is __init.  It's not legitimate to call from a non-init
>>>> function, and a less useless build system would have modpost to object.
>>>> 
>>>> The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
>>>> but this provides no safety at all.
>>>> 
>>>> 
>>>> What warning_add() actually does is queue messages for some point near
>>>> the end of boot.  It's not clear that this is even a clever thing to do.
>>>> 
>>>> I'm very tempted to suggest a blanket change to printk_once().
>>> 
>>> If this is needed then this could be done in an other serie ?
>> 
>> The callback ->enable() will be called when a CPU is onlined/offlined. So this
>> is going to require if you plan to support CPU hotplugs or suspend resume.
>> 
>>> Would be good to keep this patch as purely handling the errata.
> 
> My preference would be to keep this patch small with just the errata,
> maybe using a simple printk_once as Andrew and Julien discussed.
> 
> There is another instance of warning_add potentially being called
> outside __init in xen/arch/arm/cpuerrata.c:
> enable_smccc_arch_workaround_1. So if you are up for it, it would be
> good to produce a patch to fix that too.
> 
> 
>> In the case of this patch, how about moving the warning_add() in
>> enable_errata_workarounds()?
>> 
>> By then we should now all the errata present on your platform. All CPUs
>> onlined afterwards (i.e. runtime) should always abide to the set discover
>> during boot.
> 
> If I understand your suggestion correctly, it would work for
> warn_device_load_acquire_errata, because it is just a warning, but it
> would not work for enable_smccc_arch_workaround_1, because there is
> actually a call to be made there.
> 
> Maybe it would be simpler to use printk_once in both cases? I don't have
> a strong preference either way.

I could do the following (in a serie of 2 patches):
- modify enable_smccc_arch_workaround_1 to use printk_once with a
  prefix/suffix “****” on each line printed (and maybe adapting print to fit a
  line length of 80)
- modify my patch to do the print in enable_errata_workarounds using also
  the prefix/suffix and printk_once

Please confirm that this strategy would fit everyone.

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 10:35:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 10:35:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7247.18880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0b8-0005v1-Pr; Thu, 15 Oct 2020 10:35:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7247.18880; Thu, 15 Oct 2020 10:35:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0b8-0005uu-MV; Thu, 15 Oct 2020 10:35:02 +0000
Received: by outflank-mailman (input) for mailman id 7247;
 Thu, 15 Oct 2020 10:35:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kT0b7-0005up-9o
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:35:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c4b02c4-3a07-46cf-9437-aa362ae827a5;
 Thu, 15 Oct 2020 10:34:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT0b3-0001Ga-HI; Thu, 15 Oct 2020 10:34:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT0b3-0001S2-6w; Thu, 15 Oct 2020 10:34:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kT0b3-0006yv-6R; Thu, 15 Oct 2020 10:34:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kT0b7-0005up-9o
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:35:01 +0000
X-Inumbo-ID: 3c4b02c4-3a07-46cf-9437-aa362ae827a5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3c4b02c4-3a07-46cf-9437-aa362ae827a5;
	Thu, 15 Oct 2020 10:34:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pBJewjF4OZQw7/a7K5r0XmS0k1rn9cuxad/+2LS9N+8=; b=qlO0a8EOVyG3BESEbOsBM1AZeI
	zueLfTM134qQvKAHQ3XKg4QqRIYGlJkZMCrWCnRq+18YRQeMjkxiBNx4ycLlM1WfXKs3DOll+HvUm
	UJDJKyiaIgCTwTQktrZiFgdYrbrk/ctCndWnv7Ses/4YYc0PmNSKSUnQ72SKmliAexxU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kT0b3-0001Ga-HI; Thu, 15 Oct 2020 10:34:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kT0b3-0001S2-6w; Thu, 15 Oct 2020 10:34:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kT0b3-0006yv-6R; Thu, 15 Oct 2020 10:34:57 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155819-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155819: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:debian-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=57c98ea9acdcef5021f5671efa6475a5794a51c4
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 10:34:57 +0000

flight 155819 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155819/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-pvhv2-intel 12 debian-install        fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                57c98ea9acdcef5021f5671efa6475a5794a51c4
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   56 days
Failing since        152659  2020-08-21 14:07:39 Z   54 days   95 attempts
Testing same since   155819  2020-10-14 23:08:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45458 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 10:41:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 10:41:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7252.18896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0hf-0006on-KQ; Thu, 15 Oct 2020 10:41:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7252.18896; Thu, 15 Oct 2020 10:41:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0hf-0006og-HP; Thu, 15 Oct 2020 10:41:47 +0000
Received: by outflank-mailman (input) for mailman id 7252;
 Thu, 15 Oct 2020 10:41:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFLp=DW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kT0hd-0006ob-R0
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:41:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 105b030c-2baf-4ea9-8814-b281e8e673a7;
 Thu, 15 Oct 2020 10:41:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 95F97B2A1;
 Thu, 15 Oct 2020 10:41:43 +0000 (UTC)
X-Inumbo-ID: 105b030c-2baf-4ea9-8814-b281e8e673a7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602758503;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EnA6MYaKqf1RrEFuB4VDcWIXxw1nsLktMnw9Xj3eDgo=;
	b=VC9qCaSZLQZS9A7dLDOzbLNuxSBHRs+Mptg0eH91oJ4O41JYxk63xlrCFe7Rxi3oDydU8C
	7zIqa4yX2GDj1aur1dFQIouyicT3DFyJXqLZSkTyG9vTodLGtBZwyOlSyNFrBpQyzm2bCn
	NRgKi80EzUy2lQmltScRsknTmy0yA6Y=
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
 <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <abd6d752-9a7f-fcf6-3273-82512c590151@suse.com>
Date: Thu, 15 Oct 2020 12:41:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.20 12:09, Jan Beulich wrote:
> On 15.10.2020 09:58, Jürgen Groß wrote:
>> After a short discussion on IRC yesterday I promised to send a mail
>> describing how I think we could get rid of creating dynamic links,
>> especially for header files, in the Xen build process.
>>
>> This will require some restructuring; how much will depend on the
>> selected way to proceed:
>>
>> - avoid links completely, which requires more restructuring
>> - avoid only dynamically created links, i.e. allow some static
>>     links which are committed to git
> 
> While I like the latter better, I'd like to point out that not all
> file systems support symlinks, and hence the repo then couldn't be
> stored on (or the tarball expanded onto) such a file system. Note
> that this may be just for viewing purposes (I typically do this at
> home), i.e. there's no resulting limitation from the build process
> needing symlinks. Similarly, once we fully support out-of-tree
> builds, there wouldn't be any restriction from this as long as just
> the build tree is placed on a capable file system.
> 
> As a result I'd like to propose variant 2´: reduce the number of
> dynamically created symlinks to a minimum. That said, I have to
> admit that I haven't really understood yet why symlinks are bad.
> They exist for exactly such purposes, I would think.

It isn't the symlinks as such, but the dynamically created ones that
seem to be the problem, as we stumble over those again and again.

> 
>> The difference between both variants affects the public headers in
>> xen/include/public/: avoiding even static links would require adding
>> another directory or moving those headers to another place in the
>> tree (either use xen/include/public/xen/, or some other path */xen),
>> leading to the need to change all #include statements in the
>> hypervisor using <public/...> today.
>>
>> The need for the path to contain "xen/" arises because the Xen
>> library headers (which are installed on users' machines) include the
>> public hypervisor headers via "#include <xen/...>", and we can't
>> change that scheme. A static link can avoid this problem via a
>> different path, but without any link we can't do that.
>>
>> Apart from that decision, let's look at which links are created today
>> for accessing the header files (I'll assume my series moving the
>> library headers to tools/include will be taken, so the links created
>> in staging today are not mentioned) and at what can be done to avoid
>> them:
>>
>> - xen/include/asm -> xen/include/asm-<arch>:
>>     Move all headers from xen/include/asm-<arch> to
>>     xen/arch/<arch>/include/asm and add that path via a "-I" flag to
>>     CFLAGS. This has the additional nice advantages that most
>>     architecture-specific files are then in xen/arch (apart from the
>>     public headers) and that we can even add generic fallback headers
>>     in xen/include/asm in case an arch doesn't need a specific header
>>     file.
> 
> Iirc Andrew suggested years ago that we follow Linux in this regard
> (and XTF already does). My only concern here is the churn this will
> cause for backports.

Changing a directory name in a patch isn't that hard, IMO.
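As a rough sketch, the "-I" variant could boil down to something like
the following (a hypothetical build fragment; the variable names are
illustrative, not the actual Xen build system's):

```make
# Hypothetical fragment of the hypervisor build rules: instead of the
# generated xen/include/asm -> xen/include/asm-<arch> symlink, the
# moved per-arch headers are found via an include search path.
CFLAGS += -I$(BASEDIR)/arch/$(TARGET_ARCH)/include
# Generic fallback headers could still live under xen/include, searched
# after the arch-specific directory:
CFLAGS += -I$(BASEDIR)/include
```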

> 
>> - xen/arch/<arch>/efi/*.[ch] -> xen/common/efi/*.[ch]:
>>     Use vpath for the *.c files and the "-I" flag for adding common/efi to
>>     the include path in the xen/arch/<arch>/efi/Makefile.
> 
> Fine with me.
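A minimal sketch of what such a Makefile could look like (object and
path names are illustrative only):

```make
# Hypothetical xen/arch/<arch>/efi/Makefile: build the shared sources
# from common/efi in place instead of symlinking them per arch.
obj-y += boot.init.o runtime.o
# vpath tells make where to locate the %.c prerequisites ...
vpath %.c $(BASEDIR)/common/efi
# ... and -I lets their #include directives resolve against common/efi.
CFLAGS += -I$(BASEDIR)/common/efi
```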
> 
>> - tools/include/xen/asm -> xen/include/asm-<arch>:
>>     Add "-Ixen/arch/<arch>/include" to the CFLAGS. It might be a nice idea
>>     to move the headers needed by the tools to xen/arch/include/tools/asm
>>     and use "-Ixen/arch/<arch>/include/tools" instead, but this would
>>     require either the same path added to the hypervisor's CFLAGS or a
>>     modification of the related #include statements.
> 
> Separating headers intended for tools consumption is okay with me,
> but I dislike the tools/ infix in the path you suggest. Since there
> can't possibly be any shared prototypes, how about defs/ or some
> such name that isn't specific to either of the consuming components
> (and thus doesn't visually exclude the other)?

I have absolutely no preference regarding the naming. defs is fine IMO.

> 
> Of course, the further asm/ underneath is kind of ugly because of
> being largely unnecessary. Perhaps we could have just
> xen/arch/include/defs/ and use #include <defs/xyz.h>?

Yes, that should work, too.

> 
>> - tools/include/xen/foreign -> tools/include/xen-foreign:
>>     Get rid of tools/include/xen-foreign and generate the headers directly
>>     in xen/include/public/foreign instead.
> 
> Except that conceptually building in tools/ would better not alter
> the xen/ subtree in any way.

I meant to generate the headers from the hypervisor build instead.

> 
>> - tools/include/xen/sys -> tools/include/xen-sys/<OS>:
>>     Move the headers from tools/include/xen-sys/<OS> to
>>     tools/include/<OS>/xen/sys/ and add the appropriate path to CFLAGS.
> 
> Not very nice imo because of the otherwise pointless intermediate
> directories, but if we truly need to minimize symlink usage, then
> so be it.
> 
>> - tools/include/xen/lib/<arch>/* -> xen/include/xen/lib/<arch>/*:
>>     Move xen/include/xen/lib/<arch> to xen/include/tools/lib/<arch> and
>>     add "-Ixen/include/tools" to the CFLAGS of tools.
> 
> Why not -Ixen/include/xen without any movement? Perhaps because

This would make most of the hypervisor's private headers easily
includable by the tools.

> -Ixen/include/tools wouldn't work either, due to code using
> 
> #include <xen/lib/<arch>/xyz.h>
> 
> ? I.e. you really mean "Move xen/include/xen/lib/<arch> to
> xen/include/tools/xen/lib/<arch>"? Not very nice. I have to admit
> I can't see why the headers in xen/include/xen/lib/<arch>/ don't
> use
> 
> #include "xyz.h"
> 
> But then this would leave the problem with xen/lib/<arch>/*.c
> using similar #include-s. Would dropping xen/ from the paths
> perhaps help, moving xen/include/xen/lib/* to xen/include/lib/*?
> I seem to recall suggesting this when the lib/ subtrees were introduced ...

This would at least eliminate one directory level.

> 
>> - tools/include/xen/libelf/* -> xen/include/xen/*:
>>     Move the affected headers from xen/include/xen to
>>     xen/include/tools/libelf and reuse the CFLAGS set above.
> 
> Why not xen/include/libelf/ or xen/include/lib/elf/?
> libelf-private.h has distinct #include-s for Xen and the tools
> anyway. All that's needed is that these headers don't sit in a
> directory alongside headers which are not supposed to be visible.

That is correct.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 10:50:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 10:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7257.18909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0pP-00076W-LN; Thu, 15 Oct 2020 10:49:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7257.18909; Thu, 15 Oct 2020 10:49:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0pP-00076P-IP; Thu, 15 Oct 2020 10:49:47 +0000
Received: by outflank-mailman (input) for mailman id 7257;
 Thu, 15 Oct 2020 10:49:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFLp=DW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kT0pO-00076K-BO
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:49:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8a7de2b-0b1e-4eab-89d6-3c4d9bc942cb;
 Thu, 15 Oct 2020 10:49:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7DE68ABCC;
 Thu, 15 Oct 2020 10:49:44 +0000 (UTC)
X-Inumbo-ID: c8a7de2b-0b1e-4eab-89d6-3c4d9bc942cb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602758984;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AnrNBsQHv2AP1Er+IlheRf4Qao10Mu+zU0YhfFOvt5c=;
	b=JOzQtgRHk7lmSB0bVEWG2YpWuNKFveuKSyVnQ79tTOxr2X+lUhBc7UrAziIHQB/jtpRYLq
	XJ+Bp9eVUFAxg5llpjc3elavJ3iJcEwT/LSoU1+0QGQsm3d/7H/DSCafdEiwKd6keS2lyY
	cFhxXMf5Iaxqhx4+fmahdl+fclg4I6w=
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
 <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0df66f1c-a02d-819c-0f05-8a7b26728e87@suse.com>
Date: Thu, 15 Oct 2020 12:49:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.20 12:09, Jan Beulich wrote:
> On 15.10.2020 09:58, Jürgen Groß wrote:
>> After a short discussion on IRC yesterday I promised to send a mail
>> describing how I think we could get rid of creating dynamic links,
>> especially for header files, in the Xen build process.
>>
>> This will require some restructuring; how much will depend on the
>> selected way to proceed:
>>
>> - avoid links completely, which requires more restructuring
>> - avoid only dynamically created links, i.e. allow some static
>>     links which are committed to git
> 
> While I like the latter better, I'd like to point out that not all
> file systems support symlinks, and hence the repo then couldn't be
> stored on (or the tarball expanded onto) such a file system. Note
> that this may be just for viewing purposes (I typically do this at
> home), i.e. there's no resulting limitation from the build process
> needing symlinks. Similarly, once we fully support out-of-tree
> builds, there wouldn't be any restriction from this as long as just
> the build tree is placed on a capable file system.
> 
> As a result I'd like to propose variant 2´: reduce the number of
> dynamically created symlinks to a minimum. That said, I have to
> admit that I haven't really understood yet why symlinks are bad.
> They exist for exactly such purposes, I would think.

Another option would be to create the needed links from ./configure
instead of committing them to git.
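For illustration, such a configure-time step might look roughly like
this (the directory names are made up, not the real tree layout):

```shell
# Sketch of a configure step (re)creating a build link; illustrative only.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/include/asm-x86"
printf '#define FROM_ARCH 1\n' > "$tmp/include/asm-x86/config.h"
# -sfn force-replaces any stale link, so re-running configure is safe
ln -sfn asm-x86 "$tmp/include/asm"
resolved=$(cat "$tmp/include/asm/config.h")
echo "$resolved"
rm -rf "$tmp"
```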


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 10:50:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 10:50:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7258.18922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0pZ-00079E-Ut; Thu, 15 Oct 2020 10:49:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7258.18922; Thu, 15 Oct 2020 10:49:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0pZ-000797-R9; Thu, 15 Oct 2020 10:49:57 +0000
Received: by outflank-mailman (input) for mailman id 7258;
 Thu, 15 Oct 2020 10:49:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT0pZ-00078q-0C
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:49:57 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 77ecab16-f0e5-4532-8041-1a7527ca655a;
 Thu, 15 Oct 2020 10:49:55 +0000 (UTC)
X-Inumbo-ID: 77ecab16-f0e5-4532-8041-1a7527ca655a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602758995;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=d3VYg3skl8zCQfjo7vhViKGqruNJHXt8sy93k/dq4Sk=;
  b=g8xy9Vi8Z2J0w3/jT9hlT2s9Buj0nhPZCNUhkXH0EXCuBPFzLWyWRWzE
   cN5eGAoQPgpCYQhSuN1A+szhYUB6ok+FECazCMVNmqWpLNiJSqr/RcHgt
   llLf8G+XIW7H74TqXw2gbcTKthSpltNuJDpYQ7X8ScrePnULqpVLYgHHH
   c=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Ekl9EsGDazf9xR5KCY38Ne2WICljMzDBeB74mBsg8olfFKkH9GuNR+t15qKWecZuNQ7XgC1v1+
 KC5ULo/68nMV6zUK6BHQEUkiBHyeBgw6YpfBCfurNoLnxKGaNO9bMqQu4Zy3F5DwcWU8JkjWhg
 p1QdMtfzzI0qviVR98vUfYbzHiRylMX5HzEtKOOQMxLqWsE4ylqpdo9V1+JQjLL0r0g4fDPUvz
 NlYtmgIyLplOlOtZDBdEgSpzqqEvFBB3m/mzrtc5Vs/lzbsfmgVGeii0YqynM5LrtkElMcaNIs
 nP0=
X-SBRS: 2.5
X-MesageID: 29312011
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,378,1596513600"; 
   d="scan'208";a="29312011"
Date: Thu, 15 Oct 2020 12:49:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
CC: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2 0/2] xen/x86: implement NMI continuation as softirq
Message-ID: <20201015104939.GA67506@Air-de-Roger>
References: <20201007133011.18871-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <20201007133011.18871-1-jgross@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 07, 2020 at 03:30:09PM +0200, Juergen Gross wrote:
> Move sending of a virq event for oprofile to the local vcpu from NMI
> to softirq context.
> 
> This has been tested with a small test patch using the continuation
> framework of patch 1 for all NMIs and doing a print to console in
> the continuation handler.
> 
> Version 1 of this small series was sent to the security list before.
> 
> Juergen Gross (2):
>   xen/x86: add nmi continuation framework
>   xen/oprofile: use set_nmi_continuation() for sending virq to guest

Apart from the comments in patch 1, I think this is a fine approach if
it allows us to restore the previous state of the event lock.

I assume we should expect a v3 with the nmi callback prototype?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 10:52:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 10:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7263.18934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0rd-00080f-B0; Thu, 15 Oct 2020 10:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7263.18934; Thu, 15 Oct 2020 10:52:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0rd-00080Y-7h; Thu, 15 Oct 2020 10:52:05 +0000
Received: by outflank-mailman (input) for mailman id 7263;
 Thu, 15 Oct 2020 10:52:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFLp=DW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kT0rc-000801-HL
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:52:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 494b7e29-b24a-4f2c-ba31-fb9de5a7499e;
 Thu, 15 Oct 2020 10:52:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 04AE3AF0F;
 Thu, 15 Oct 2020 10:52:03 +0000 (UTC)
X-Inumbo-ID: 494b7e29-b24a-4f2c-ba31-fb9de5a7499e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602759123;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jnpeTZrGFBvmlxr6aEM4Si89rh3Iavn2Pva5X6Exq/E=;
	b=JPoHsdXkCc+83sDQcDnnW0OJWgura5pP/+BDvWHi55lwNE5StA56+hvA35fbHPjrgpXaRX
	jMZo4b79OT+Wg6rt9RQD9X2kfUekATzTXOkr/6Pi/PLJbg7UvXcPCr1P3/2Msf2BNQn+x3
	6ArsYTV7O5dCyxJ76fZH25Miff2uJ7A=
Subject: Re: [PATCH v2 0/2] xen/x86: implement NMI continuation as softirq
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201007133011.18871-1-jgross@suse.com>
 <20201015104939.GA67506@Air-de-Roger>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a8e03a4e-57ae-b251-5e74-5207b03b4aba@suse.com>
Date: Thu, 15 Oct 2020 12:52:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201015104939.GA67506@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.20 12:49, Roger Pau Monné wrote:
> On Wed, Oct 07, 2020 at 03:30:09PM +0200, Juergen Gross wrote:
>> Move sending of a virq event for oprofile to the local vcpu from NMI
>> to softirq context.
>>
>> This has been tested with a small test patch using the continuation
>> framework of patch 1 for all NMIs and doing a print to console in
>> the continuation handler.
>>
>> Version 1 of this small series was sent to the security list before.
>>
>> Juergen Gross (2):
>>    xen/x86: add nmi continuation framework
>>    xen/oprofile: use set_nmi_continuation() for sending virq to guest
> 
> Apart from the comments in patch 1, I think this is a fine approach if
> it allows us to restore the previous state of the event lock.

This will not be enough to do that, but it clearly removes a
potential deadlock.

> I think we should be expecting a v3 with the nmi callback prototype?

And using an IPI instead of a softirq, yes.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 10:57:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 10:57:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7266.18946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0x8-0008EL-0U; Thu, 15 Oct 2020 10:57:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7266.18946; Thu, 15 Oct 2020 10:57:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT0x7-0008EE-Rn; Thu, 15 Oct 2020 10:57:45 +0000
Received: by outflank-mailman (input) for mailman id 7266;
 Thu, 15 Oct 2020 10:57:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT0x6-0008E9-PO
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 10:57:44 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b44f7df0-ee64-47fc-a650-dc65cf9c00ca;
 Thu, 15 Oct 2020 10:57:43 +0000 (UTC)
X-Inumbo-ID: b44f7df0-ee64-47fc-a650-dc65cf9c00ca
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602759463;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=LTw4HxuM2Hd9gC5Eofd+gOjQV+QnRt0hWS4gjM7xh+8=;
  b=fUV5EKo11EwSfk+SHHGHnoTHT9GU4/cL4BAOpKtrp0Nu/Lywk791HmgG
   oA6P/c5gTnxKW4wygZBT2Pv62uSTgoLb1AMi/49UUYPpwuxahrMt1yoxI
   37VNfSZj6dPBq+R+QpHqjvnnLD/zkKlkOgUnq6/kO5vgH7w6U2gkFKJqk
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: X6Ks6eBeFu7jQlD+vRQ225PMgZ7g1mUtc+3TOLAD5U9SgTGYDsNus9G2Zu6DfOthWL2HgHUH0Z
 pIizAmSJ6ZebloRm4hcpdl3qKE3Tg6BBhoAyhM7qnrHYEROVsnk7xxVsIi8Wck8L5yHfMm0g09
 w5ZABeSZfoZr4jlspLPkCEP5SQp1U/U5aRRoEljxdHVgkczWFxWqlWcPva4pWAAOLXbLdUAbuS
 uZQ+TtusKIW/k6y22MHpQ3lfTCEOvEr9E7zYt8vlrEdj9oGGnGZ4jD3GEbc4n+C9U2ED9Nq4Ps
 VJA=
X-SBRS: 2.5
X-MesageID: 29393780
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,378,1596513600"; 
   d="scan'208";a="29393780"
Date: Thu, 15 Oct 2020 12:57:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "J. Roeleveld" <joost@antarean.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: xen-blkback: Scheduled work from previous purge is still busy,
 cannot purge list
Message-ID: <20201015105735.GB67506@Air-de-Roger>
References: <15146361.Z0tdQxPx3m@eve>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <15146361.Z0tdQxPx3m@eve>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Tue, Oct 13, 2020 at 07:26:47AM +0200, J. Roeleveld wrote:
> Hi All,
> 
> I am seeing the following message in the "dmesg" output of a driver domain.
> 
> [Thu Oct  8 20:57:04 2020] xen-blkback: Scheduled work from previous purge is 
> still busy, cannot purge list
> [Thu Oct  8 20:57:11 2020] xen-blkback: Scheduled work from previous purge is 
> still busy, cannot purge list
> [Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous purge is 
> still busy, cannot purge list
> [Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous purge is 
> still busy, cannot purge list
> 
> 
> Is this something to worry about? Or can I safely ignore this?

What version of the Linux kernel are you running in that driver
domain?

Is the domain very busy? That might explain the delay in purging
grants.

Also, is this a sporadic message, or is it constantly repeating?

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 11:09:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 11:09:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7270.18958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT189-0000pb-1S; Thu, 15 Oct 2020 11:09:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7270.18958; Thu, 15 Oct 2020 11:09:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT188-0000pU-Ui; Thu, 15 Oct 2020 11:09:08 +0000
Received: by outflank-mailman (input) for mailman id 7270;
 Thu, 15 Oct 2020 11:09:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT187-0000pP-D8
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 11:09:07 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b9ab786-98cb-4ec4-bd16-c2df515ec375;
 Thu, 15 Oct 2020 11:09:05 +0000 (UTC)
X-Inumbo-ID: 1b9ab786-98cb-4ec4-bd16-c2df515ec375
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602760145;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=qb/JYNgkIUuqBKtbKYSFf0NW8wtcCFfcuRBYldFJnOM=;
  b=Z665HW78ZN966JuFm8lhv72cgkhFuQCdB2YzkgGOEuy7qweahq5Qyck3
   WO9G/5syCtDSmhZozKJK2RcF3eAGpRMvFFgxgoUHeQcAqB8JiyxZliPHT
   X58sPUPIsE4CzK+ApTPyMDZ+YunS97w8Md0ApTgf2td7WtL/yA9onqa9N
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: SF8DQ8roVFA1JEZRIUkOaBBj7ruw/HnVxSo7hJJpchqbQxrDLtBMVVk2VeuSJCz/bmRFgHwZtD
 J8fyO+S5JviaBlplwc4cAKCEcXK5iXffZve5nYFk2eJN2dIr4mFr4gzwDR8wxBQwlwwUa9jnrC
 WRiM0+RHPoy+m+jTk1rIU57wcjXC5aO9PWVXZtgLe88YK4YGzb7ruIXLa7x78Fd/HhdehmnzJZ
 lFr9ZRZ8TRGut61o+xmxv1h6Mw0uLXprRWnOg2Lr8qwYeGhPjlRiKpv7SBpjzL9Omvxlx93CSI
 a9o=
X-SBRS: 2.5
X-MesageID: 30105623
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,378,1596513600"; 
   d="scan'208";a="30105623"
Date: Thu, 15 Oct 2020 13:08:57 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/msr: handle IA32_THERM_STATUS
Message-ID: <20201015110857.GC67506@Air-de-Roger>
References: <20201007102032.98565-1-roger.pau@citrix.com>
 <1e694350-4665-a1e7-20a4-f68cbee34dd1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <1e694350-4665-a1e7-20a4-f68cbee34dd1@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 14, 2020 at 02:17:15PM +0200, Jan Beulich wrote:
> On 07.10.2020 12:20, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/msr.c
> > +++ b/xen/arch/x86/msr.c
> > @@ -253,6 +253,12 @@ int guest_rdmsr(struct vcpu *v, uint32_t msr, uint64_t *val)
> >              break;
> >          goto gp_fault;
> >  
> > +    case MSR_IA32_THERM_STATUS:
> > +        if ( cp->x86_vendor != X86_VENDOR_INTEL )
> > +            goto gp_fault;
> > +        *val = 0;
> > +        break;
> 
> I've been puzzled while applying this: The upper patch context doesn't
> match what's been in master for about the last month, and hence I
> wonder what version of the tree you created this patch against. In any
> event please double check that I didn't screw it up.

I had this applied on top of:

https://lore.kernel.org/xen-devel/20201006162327.93055-1-roger.pau@citrix.com/

I will reply to it now, because I'm not sure how to proceed there.

Thanks for fixing the context while applying.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 11:31:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 11:31:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7273.18970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT1Tz-0003HR-Tq; Thu, 15 Oct 2020 11:31:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7273.18970; Thu, 15 Oct 2020 11:31:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT1Tz-0003HK-Ql; Thu, 15 Oct 2020 11:31:43 +0000
Received: by outflank-mailman (input) for mailman id 7273;
 Thu, 15 Oct 2020 11:31:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT1Tz-0003HF-Bn
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 11:31:43 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb00959c-74d3-4245-8149-5e6e3c81f928;
 Thu, 15 Oct 2020 11:31:41 +0000 (UTC)
X-Inumbo-ID: bb00959c-74d3-4245-8149-5e6e3c81f928
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602761501;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=JaKKGt3d+i+nh0Sf/oHSA2TVHCS6q9lA/II9WTvsiHI=;
  b=NkMTXy6YUvAk3MKvr2cb5u5/tW5f15ePiJkQ+UOKrEqb86t0BBWBgW48
   pkQILHik67WuZc06GzfhHn2tP6CG8Gzor+W5wTamjD9UO2wyIhiYXV2ux
   avRYXSdVcpdOgGommbZeqGeGKhFkIAnAwjdidYw+W2J8Xl381+FGUmEiz
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: RTxSUs8l+PfyoDrEqhblrAMlIjAPi4B35dQozjF52rakKudSspVBFEp++RApEw57Kq1MBrOhEv
 +y/JW6VwM0kfES4d7GN7Chx049SKFoCm1kP/XsuxrdjXwWr8Jr1J7vre3OX7pUyK3CJDpRrfeM
 XpdLLcgLEcJZ6iKbTGh5hfGgYma7ILqJ7iDP1T9/5l8Q84yk0xnTb6ZY3cliTNTNlemZyHOV+B
 6cf+bd3v+W7xz/eK69xtYZ/na3uJdT5RwfAjVAXhGE4Jp93Ad141PASZsyN+ntBcYvXuhHauRx
 1Ug=
X-SBRS: 2.5
X-MesageID: 29314360
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,378,1596513600"; 
   d="scan'208";a="29314360"
Date: Thu, 15 Oct 2020 13:31:33 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	<intel-gfx@lists.freedesktop.org>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: i915 dma faults on Xen
Message-ID: <20201015113109.GA68032@Air-de-Roger>
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 14, 2020 at 08:37:06PM +0100, Andrew Cooper wrote:
> On 14/10/2020 20:28, Jason Andryuk wrote:
> > Hi,
> >
> > Bug opened at https://gitlab.freedesktop.org/drm/intel/-/issues/2576
> >
> > I'm seeing DMA faults for the i915 graphics hardware on a Dell
> > Latitude 5500. These were captured when I plugged into a Dell
> > Thunderbolt dock with two DisplayPort monitors attached.  Xen 4.12.4
> > staging and Linux 5.4.70 (and some earlier versions).
> >
> > Oct 14 18:41:49.056490 kernel:[   85.570347] [drm:gen8_de_irq_handler
> > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > Oct 14 18:41:49.056494 kernel:[   85.570395] [drm:gen8_de_irq_handler
> > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > Oct 14 18:41:49.056589 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > Request device [0000:00:02.0] fault addr 39b5845000, iommu reg =
> > ffff82c00021d000
> > Oct 14 18:41:49.056594 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > PTE Read access is not set
> > Oct 14 18:41:49.056784 kernel:[   85.570668] [drm:gen8_de_irq_handler
> > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > Oct 14 18:41:49.056789 kernel:[   85.570687] [drm:gen8_de_irq_handler
> > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > Oct 14 18:41:49.056885 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > Request device [0000:00:02.0] fault addr 4238d0a000, iommu reg =
> > ffff82c00021d000
> > Oct 14 18:41:49.056890 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > PTE Read access is not set
> >
> > They repeat. In the log attached to
> > https://gitlab.freedesktop.org/drm/intel/-/issues/2576, they start at
> > "Oct 14 18:41:49.056589" and continue until I unplug the dock around
> > "Oct 14 18:41:54.801802".
> >
> > I've also seen similar messages when attaching the laptop's HDMI port
> > to a 4k monitor. The eDP display by itself seems okay.
> >
> > I tried Fedora 31 & 32 live images with intel_iommu=on, so no Xen, and
> > didn't see any errors.
> >
> > This is a kernel & xen log with drm.debug=0x1e. It also includes some
> > application (glass) logging when it changes resolutions, which seems to
> > set off the DMA faults. 5500-igfx-messages-kern-xen-glass
> >
> > Running xen with iommu=no-igfx disables the iommu for the i915
> > graphics and no faults are reported. However, that breaks some other
> > devices (Dell Latitude 7200 and 5580) giving a black screen with:
> >
> > Oct 10 13:24:37.022117 kernel:[   14.884759] i915 0000:00:02.0: Failed
> > to idle engines, declaring wedged!
> > Oct 10 13:24:37.022118 kernel:[   14.964794] i915 0000:00:02.0: Failed
> > to initialize GPU, declaring it wedged!
> >
> > Any suggestions welcome.
> 
> Presumably this is with a PV dom0.  What are 39b5845000 and 4238d0a000
> in the machine memory map?
> 
> This smells like a missing RMRR in the ACPI tables.

I agree.

Can you paste the memory map as printed by Xen when booting, and the
command line you are using to boot Xen?

Have you tried adding dom0-iommu=map-inclusive to the Xen command
line?

Roger.
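For readers hitting similar DMAR faults: the option Roger mentions goes on the Xen (hypervisor) command line, not the dom0 kernel's. A sketch for a GRUB2 setup follows; the file path and variable name are the common defaults and may differ on your distribution.

```shell
# /etc/default/grub -- illustrative sketch; paths vary by distribution.
# dom0-iommu=map-inclusive tells Xen to also create IOMMU mappings for
# non-RAM regions, which can paper over RMRRs missing from the ACPI tables.
GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT dom0-iommu=map-inclusive"

# Then regenerate the configuration and reboot:
#   grub-mkconfig -o /boot/grub/grub.cfg    # grub2-mkconfig on some distros
```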


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 11:57:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 11:57:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7276.18982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT1t5-00058C-1U; Thu, 15 Oct 2020 11:57:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7276.18982; Thu, 15 Oct 2020 11:57:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT1t4-000585-UV; Thu, 15 Oct 2020 11:57:38 +0000
Received: by outflank-mailman (input) for mailman id 7276;
 Thu, 15 Oct 2020 11:57:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LoCs=DW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kT1t4-000580-41
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 11:57:38 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f82852d-5414-4c48-86ed-039df559f224;
 Thu, 15 Oct 2020 11:57:36 +0000 (UTC)
X-Inumbo-ID: 4f82852d-5414-4c48-86ed-039df559f224
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602763056;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to;
  bh=p5jFLXPkePOi3rvQj8z0hA6OfTK1Oc2PnfKQR1b0BNs=;
  b=U3/+27u43PVDMRwJny+8NMayfwI0S+TXP5sxt09HbCcd0ZNWT4h+nkBR
   ia6Ww+TWScercgFm4DuMDtfEkNBTydxeVZyoj6+eiIbHsyVYhFet1gibV
   odGGgGQcH3fA/Pnlmh1zFMENIR2tHkIga/Q7fEml7BgdR6Rkb37cd/rda
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +COeSubuh7AKluxEmgAOVNOt0ijioLPIFs1FUE9jRg+SygzzHXfi1KTzG8hwBNHhaQwblyYz+S
 lI+nGRPLxqvawSAjDUenXdLxjsrXu77m3JENtL4vZ24c9DYgQADa3k6yrNYsYEmQ9lgWw2B3JJ
 cQr+JTxwD/hB+XadJZzmUjfCMcF/MGcaJU48TGAjc4qV4eivcFNIdn4xVe/2snXNCorkTTGD0d
 a+pE9hC/wQoEeqCRHxAC7z9PiRZfz0gtl71kdN3L8hnqcyqBlJjC8TtjUNysn6ZOhJa2VsesQI
 fPs=
X-SBRS: 2.5
X-MesageID: 30108690
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,378,1596513600"; 
   d="scan'208,217";a="30108690"
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
To: Dylanger Daly <dylangerdaly@protonmail.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2cc5da3e-0ad0-4647-f1ca-190788c2910b@citrix.com>
Date: Thu, 15 Oct 2020 12:57:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
Content-Type: multipart/alternative;
	boundary="------------6A7E07F0E413729C47F15F11"
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

--------------6A7E07F0E413729C47F15F11
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

On 15/10/2020 01:38, Dylanger Daly wrote:
> Hi All,
>
> I'm currently using Xen 4.14 (Qubes 4.1 OS) on a Ryzen 7 4750U PRO, by
> default I'll experience softlocks where the mouse for example will
> jolt from time to time, in this state it's not usable.
>
> Adding `dom0_max_vcpus=1 dom0_vcpus_pin` to Xen's CMDLINE results in
> no more jolting however performance isn't what it should be on an 8
> core CPU, softlocks are still a problem within domU's, any sort of UI
> animation for example.
>
> Reverting this commit (8e2aa76dc1670e82eaa15683353853bc66bf54fc)
> <https://github.com/xen-project/xen/commit/8e2aa76dc1670e82eaa15683353853bc66bf54fc> results
> in even worse performance with or without the above changes to
> CMDLINE, and it's not usable at all.
>
> Does anyone have any pointers?

Does booting with sched=credit alter the symptoms?

~Andrew

--------------6A7E07F0E413729C47F15F11--


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:00:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:00:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7283.18994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT1wI-00063V-Ry; Thu, 15 Oct 2020 12:00:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7283.18994; Thu, 15 Oct 2020 12:00:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT1wI-00063O-OZ; Thu, 15 Oct 2020 12:00:58 +0000
Received: by outflank-mailman (input) for mailman id 7283;
 Thu, 15 Oct 2020 12:00:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT1wG-00063H-Rl
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:00:56 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 587ebec1-06ee-4b66-9b1d-1affd46b6df5;
 Thu, 15 Oct 2020 12:00:56 +0000 (UTC)
X-Inumbo-ID: 587ebec1-06ee-4b66-9b1d-1affd46b6df5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602763255;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=SD26uCWGPAl9VECFUd5wk6XlizSPMWmbo8zTupJX1QE=;
  b=KsSw3N+ABsDSLuoW4GSSj+uVl1ypsw7yw+aXUgGSpPbNPXAZUoYCudp5
   FmGJRpzAhAMFpZ1mbn8BYJiQW2WRMTvdvH6ZSteEmnd7cmRRdsVJ2owgl
   Rtig5JuOGeqaJbErrtSkqzfMPW6YYyf359rwZNPRx48+W1VBXzN/AlRgf
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: zyajt8KycHUOuInisqErIMtjpgYmoikIVRuywWQphTwXdKHfW6tjI39ZGKYzR3njHwTxLr8Rfn
 L9les95bid7p3rvCfiBCf7eW+1T88me0xTZTDACX7okutisfC47UAgk0dYIQAcJThRT2ri9cml
 anfTHJ99sSWbKJD7XFDWpztz6zLTQ1rh+W+Ohxg8cPIywyYjJjDalfe6GEjaAPCmNXn1JZ6MP8
 CP1MT3fwvOWqiZRYKqScEnPT2QezhH0BHTRkwKNOuhuBfzUcfKfUKv0UJrnQY7Ex25WmkLNU3O
 SN8=
X-SBRS: 2.5
X-MesageID: 30108946
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,378,1596513600"; 
   d="scan'208";a="30108946"
Date: Thu, 15 Oct 2020 14:00:46 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "J. Roeleveld" <joost@antarean.org>
CC: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: xen-blkback: Scheduled work from previous purge is still busy,
 cannot purge list
Message-ID: <20201015120046.GE19243@Air-de-Roger>
References: <15146361.Z0tdQxPx3m@eve>
 <20201015105735.GB67506@Air-de-Roger>
 <1855015.FeAb16qnYt@eve>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <1855015.FeAb16qnYt@eve>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

Please don't drop the xen-devel mailing list when replying.

On Thu, Oct 15, 2020 at 01:28:49PM +0200, J. Roeleveld wrote:
> On Thursday, October 15, 2020 12:57:35 PM CEST you wrote:
> > On Tue, Oct 13, 2020 at 07:26:47AM +0200, J. Roeleveld wrote:
> > > Hi All,
> > > 
> > > I am seeing the following message in the "dmesg" output of a driver
> > > domain.
> > > 
> > > [Thu Oct  8 20:57:04 2020] xen-blkback: Scheduled work from previous purge
> > > is still busy, cannot purge list
> > > [Thu Oct  8 20:57:11 2020] xen-blkback: Scheduled work from previous purge
> > > is still busy, cannot purge list
> > > [Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous purge
> > > is still busy, cannot purge list
> > > [Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous purge
> > > is still busy, cannot purge list
> > > 
> > > 
> > > Is this something to worry about? Or can I safely ignore this?
> > 
> > What version of the Linux kernel are you running in that driver
> > domain?
> 
> Host:
> Kernel: 5.4.66
> Xen: 4.12.3
> 
> Driver domain:
> Kernel: 5.4.66
> Xen: 4.12.3
> 
> 
> > Is the domain very busy? That might explain the delay in purging
> > grants.
> 
> No, it's generally asleep. I've been going through the munin records and
> can't find any spikes that correlate with the messages either.
> 
> > Also, is this a sporadic message, or is it constantly repeating?
> 
> It's sporadic, but occasionally, I get it several times in a row.
> 
> My understanding of the code where this message comes from is far from 
> sufficient, which means I have no clue what it is actually trying to do.

There's a recurrent worker thread in blkback that will go and purge
unused cache entries after they have expired. This is done to prevent
the cache from growing unbounded.

AFAICT this just means the purge worker is likely being scheduled
faster than the previous run can complete, and hence you get another
worker run before the old entries have been removed. It should be safe
to ignore, but it makes me wonder whether I should add a parameter to
tune the period of the purge work.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:07:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:07:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7289.19006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT22H-0006J0-Jz; Thu, 15 Oct 2020 12:07:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7289.19006; Thu, 15 Oct 2020 12:07:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT22H-0006It-Gc; Thu, 15 Oct 2020 12:07:09 +0000
Received: by outflank-mailman (input) for mailman id 7289;
 Thu, 15 Oct 2020 12:07:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kT22G-0006Io-KM
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:07:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe2917c4-aa1d-48ac-8716-8309ff8f335a;
 Thu, 15 Oct 2020 12:07:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30612ACB5;
 Thu, 15 Oct 2020 12:07:06 +0000 (UTC)
X-Inumbo-ID: fe2917c4-aa1d-48ac-8716-8309ff8f335a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602763626;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JXafp5VLA0P0xzAG6FhQCxBZCd4G7t26VpzSvuztxtQ=;
	b=CZGbDe0d3TwDZcXFGygdSsa2SAr/svia9/Y/aN7DLRVccq20haXg9uTXEYkKWE1R6By1LB
	oeoLBp2hDtFuTSk82g9Xqrtkb40spXWuQ/2hteWg3ODiAPy3Dhqh2JIultKv6rxFVoDT6u
	jWK/RLdoeVFJ7CF8qdPlSTRpurNX/u8=
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Julien Grall <julien@xen.org>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
 <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
Date: Thu, 15 Oct 2020 14:07:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.10.2020 13:40, Julien Grall wrote:
> Hi Jan,
> 
> On 13/10/2020 15:26, Jan Beulich wrote:
>> On 13.10.2020 16:20, Jürgen Groß wrote:
>>> On 13.10.20 15:58, Jan Beulich wrote:
>>>> On 12.10.2020 11:27, Juergen Gross wrote:
>>>>> The queue for a fifo event depends on the vcpu_id and the
>>>>> priority of the event. When sending an event it might happen that
>>>>> the event needs to change queues, and the old queue needs to be
>>>>> kept so the links between queue elements remain intact. For this
>>>>> purpose the event channel contains last_priority and last_vcpu_id
>>>>> values for identifying the old queue.
>>>>>
>>>>> In order to avoid races, always access last_priority and
>>>>> last_vcpu_id with a single atomic operation, avoiding any
>>>>> inconsistencies.
>>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>
>>>> I seem to vaguely recall that at the time this seemingly racy
>>>> access was done on purpose by David. Did you go look at the
>>>> old commits to understand whether there really is a race which
>>>> can't be tolerated within the spec?
>>>
>>> At least the comments in the code tell us that the race regarding
>>> the writing of priority (not last_priority) is acceptable.
>>
>> Ah, then it was comments. I knew I read something to this effect
>> somewhere, recently.
>>
>>> Especially Julien was rather worried by the current situation. In
>>> case you can convince him the current handling is fine, we can
>>> easily drop this patch.
>>
>> Julien, in the light of the above - can you clarify the specific
>> concerns you (still) have?
> 
> Let me start with the assumption that evtchn->lock is not held when 
> evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.

But this isn't interesting - we know there are paths where it is
held, and ones (interdomain sending) where it's the remote port's
lock instead which is held. What's important here is that a
_consistent_ lock be held (but it doesn't need to be evtchn's).

>  From my understanding, the goal of lock_old_queue() is to return the 
> old queue used.
> 
> last_priority and last_vcpu_id may be updated separately and I could not 
> convince myself that it would not be possible to return a queue that is 
> neither the current one nor the old one.
> 
> The following could happen if evtchn->priority and 
> evtchn->notify_vcpu_id keep changing between calls.
> 
> pCPU0				| pCPU1
> 				|
> evtchn_fifo_set_pending(v0,...)	|
> 				| evtchn_fifo_set_pending(v1, ...)
>   [...]				|
>   /* Queue has changed */	|
>   evtchn->last_vcpu_id = v0 	|
> 				| -> evtchn_old_queue()
> 				| v = d->vcpu[evtchn->last_vcpu_id];
>    				| old_q = ...
> 				| spin_lock(old_q->...)
> 				| v = ...
> 				| q = ...
> 				| /* q and old_q would be the same */
> 				|
>   evtchn->last_priority = priority|
> 
> If my diagram is correct, then pCPU1 would return a queue that is 
> neither the current nor the old one.

I think I agree.

> In which case, I think it would at least be possible to corrupt the 
> queue. From evtchn_fifo_set_pending():
> 
>          /*
>           * If this event was a tail, the old queue is now empty and
>           * its tail must be invalidated to prevent adding an event to
>           * the old queue from corrupting the new queue.
>           */
>          if ( old_q->tail == port )
>              old_q->tail = 0;
> 
> Did I miss anything?

I don't think you did. The important point though is that a consistent
lock is being held whenever we come here, so two racing set_pending()
aren't possible for one and the same evtchn. As a result I don't think
the patch here is actually needed.

If I take this further, then I think I can reason why it wasn't
necessary to add further locking to send_guest_{global,vcpu}_virq():
The virq_lock is the "consistent lock" protecting ECS_VIRQ ports. The
spin_barrier() while closing the port guards that side against the
port changing to a different ECS_* behind the sending functions' backs.
And binding such ports sets ->virq_to_evtchn[] last, with a suitable
barrier (the unlock).

Which leaves send_guest_pirq() before we can drop the IRQ-safe locking
again. I guess we would need to work towards using the underlying
irq_desc's lock as consistent lock here, but this certainly isn't the
case just yet, and I'm not really certain this can be achieved.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7295.19044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WT-0000aH-IY; Thu, 15 Oct 2020 12:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7295.19044; Thu, 15 Oct 2020 12:38:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WT-0000aA-Ex; Thu, 15 Oct 2020 12:38:21 +0000
Received: by outflank-mailman (input) for mailman id 7295;
 Thu, 15 Oct 2020 12:38:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2WR-0000WT-VX
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 424df50f-8960-4cde-b5ba-509bb32a06fe;
 Thu, 15 Oct 2020 12:38:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB403B195;
 Thu, 15 Oct 2020 12:38:12 +0000 (UTC)
X-Inumbo-ID: 424df50f-8960-4cde-b5ba-509bb32a06fe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v4 00/10] Support GEM object mappings from I/O memory
Date: Thu, 15 Oct 2020 14:37:56 +0200
Message-Id: <20201015123806.32416-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

DRM's fbdev console uses regular load and store operations to update
framebuffer memory. The bochs driver on sparc64 requires the use of
I/O-specific load and store operations. We have a workaround, but need
a long-term solution to the problem.

This patchset changes GEM's vmap/vunmap interfaces to forward pointers
of type struct dma_buf_map and updates the generic fbdev emulation to
use them correctly. This enables I/O-memory operations on all framebuffers
that require and support them.

Patches #1 to #4 prepare VRAM helpers and drivers.

Next is the update of the GEM vmap functions. Patch #5 adds vmap and vunmap
helpers that are usable with TTM-based GEM drivers, and patch #6 updates GEM's
vmap/vunmap callback to forward instances of type struct dma_buf_map. While
the patch touches many files throughout the DRM modules, the applied changes
are mostly trivial interface fixes. Several TTM-based GEM drivers now use
the new vmap code. Patch #7 updates GEM's internal vmap/vunmap functions to
forward struct dma_buf_map.

With struct dma_buf_map propagated through the layers, patches #9 and #10
convert DRM clients and generic fbdev emulation to use it. Updating the
fbdev framebuffer will select the correct functions, either for system or
I/O memory.

v4:
	* provide TTM vmap/vunmap plus GEM helpers and convert drivers
	  over (Christian, Daniel)
	* remove several empty functions
	* more TODOs and documentation (Daniel)

v3:
	* recreate the whole patchset on top of struct dma_buf_map

v2:
	* RFC patchset

Thomas Zimmermann (10):
  drm/vram-helper: Remove invariant parameters from internal kmap
    function
  drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
  drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
  drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
  drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
  drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM
    backends
  drm/gem: Update internal GEM vmap/vunmap interfaces to use struct
    dma_buf_map
  drm/gem: Store client buffer mappings as struct dma_buf_map
  dma-buf-map: Add memcpy and pointer-increment interfaces
  drm/fb_helper: Support framebuffers in I/O memory

 Documentation/gpu/todo.rst                  |  37 ++-
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
 drivers/gpu/drm/ast/ast_cursor.c            |  27 ++-
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/bochs/bochs_kms.c           |   1 -
 drivers/gpu/drm/drm_client.c                |  38 ++--
 drivers/gpu/drm/drm_fb_helper.c             | 238 ++++++++++++++++++--
 drivers/gpu/drm/drm_gem.c                   |  29 ++-
 drivers/gpu/drm/drm_gem_cma_helper.c        |  27 +--
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 ++--
 drivers/gpu/drm/drm_gem_ttm_helper.c        |  38 ++++
 drivers/gpu/drm/drm_gem_vram_helper.c       | 117 +++++-----
 drivers/gpu/drm/drm_internal.h              |   5 +-
 drivers/gpu/drm/drm_prime.c                 |  14 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   3 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       |   1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  12 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c     |  12 -
 drivers/gpu/drm/exynos/exynos_drm_gem.h     |   2 -
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
 drivers/gpu/drm/nouveau/Kconfig             |   1 +
 drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
 drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 --
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +-
 drivers/gpu/drm/qxl/qxl_display.c           |  11 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |  14 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  31 ++-
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +-
 drivers/gpu/drm/radeon/radeon.h             |   1 -
 drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  20 --
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/ttm/ttm_bo_util.c           |  72 ++++++
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   7 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_client.h                    |   7 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   3 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_ttm_helper.h            |   6 +
 include/drm/drm_gem_vram_helper.h           |  14 +-
 include/drm/drm_mode_config.h               |  12 -
 include/drm/ttm/ttm_bo_api.h                |  28 +++
 include/linux/dma-buf-map.h                 |  92 +++++++-
 62 files changed, 817 insertions(+), 423 deletions(-)

-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7293.19019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WO-0000Wf-2O; Thu, 15 Oct 2020 12:38:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7293.19019; Thu, 15 Oct 2020 12:38:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WN-0000WY-VY; Thu, 15 Oct 2020 12:38:15 +0000
Received: by outflank-mailman (input) for mailman id 7293;
 Thu, 15 Oct 2020 12:38:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2WN-0000WT-0T
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a05be3e6-a651-47c9-b7be-05b398be827c;
 Thu, 15 Oct 2020 12:38:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB438B199;
 Thu, 15 Oct 2020 12:38:12 +0000 (UTC)
X-Inumbo-ID: a05be3e6-a651-47c9-b7be-05b398be827c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v4 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function
Date: Thu, 15 Oct 2020 14:37:57 +0200
Message-Id: <20201015123806.32416-2-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The parameters map and is_iomem always have the same value. Remove them
to prepare the function for conversion to struct dma_buf_map.

v4:
	* don't check for !kmap->virtual; will always be false

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 3213429f8444..2d5ed30518f1 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -382,32 +382,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
-				      bool map, bool *is_iomem)
+static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
 {
 	int ret;
 	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	bool is_iomem;
 
 	if (gbo->kmap_use_count > 0)
 		goto out;
 
-	if (kmap->virtual || !map)
-		goto out;
-
 	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
 	if (ret)
 		return ERR_PTR(ret);
 
 out:
-	if (!kmap->virtual) {
-		if (is_iomem)
-			*is_iomem = false;
-		return NULL; /* not mapped; don't increment ref */
-	}
 	++gbo->kmap_use_count;
-	if (is_iomem)
-		return ttm_kmap_obj_virtual(kmap, is_iomem);
-	return kmap->virtual;
+	return ttm_kmap_obj_virtual(kmap, &is_iomem);
 }
 
 static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
@@ -452,7 +442,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
+	base = drm_gem_vram_kmap_locked(gbo);
 	if (IS_ERR(base)) {
 		ret = PTR_ERR(base);
 		goto err_drm_gem_vram_unpin_locked;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7294.19031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WP-0000Xc-9s; Thu, 15 Oct 2020 12:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7294.19031; Thu, 15 Oct 2020 12:38:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WP-0000XV-6i; Thu, 15 Oct 2020 12:38:17 +0000
Received: by outflank-mailman (input) for mailman id 7294;
 Thu, 15 Oct 2020 12:38:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2WO-0000Wl-6r
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 596f9b79-0b59-4e4f-aae1-98c0ad215b28;
 Thu, 15 Oct 2020 12:38:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 42699B19B;
 Thu, 15 Oct 2020 12:38:13 +0000 (UTC)
X-Inumbo-ID: 596f9b79-0b59-4e4f-aae1-98c0ad215b28
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
Date: Thu, 15 Oct 2020 14:38:00 +0200
Message-Id: <20201015123806.32416-5-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
them before changing the interface to use struct dma_buf_map. As a side
effect of removing exynos_drm_gem_prime_vmap(), the error code returned
by the vmap operation changes from -ENOMEM to -EOPNOTSUPP.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
 drivers/gpu/drm/exynos/exynos_drm_gem.h |  2 --
 2 files changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index e7a6eb96f692..13a35623ac04 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
 static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
 	.free = exynos_drm_gem_free_object,
 	.get_sg_table = exynos_drm_gem_prime_get_sg_table,
-	.vmap = exynos_drm_gem_prime_vmap,
-	.vunmap	= exynos_drm_gem_prime_vunmap,
 	.vm_ops = &exynos_drm_gem_vm_ops,
 };
 
@@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 	return &exynos_gem->base;
 }
 
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	return NULL;
-}
-
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* Nothing to do */
-}
-
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma)
 {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index 74e926abeff0..a23272fb96fb 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -107,8 +107,6 @@ struct drm_gem_object *
 exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 				     struct dma_buf_attachment *attach,
 				     struct sg_table *sgt);
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma);
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7296.19055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WU-0000bw-9r; Thu, 15 Oct 2020 12:38:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7296.19055; Thu, 15 Oct 2020 12:38:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WU-0000bN-0J; Thu, 15 Oct 2020 12:38:22 +0000
Received: by outflank-mailman (input) for mailman id 7296;
 Thu, 15 Oct 2020 12:38:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2WT-0000Wl-4B
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07d83d7a-f0be-4418-8a57-7164a10ea0b4;
 Thu, 15 Oct 2020 12:38:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EA1D7AFEA;
 Thu, 15 Oct 2020 12:38:13 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kT2WT-0000Wl-4B
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:21 +0000
X-Inumbo-ID: 07d83d7a-f0be-4418-8a57-7164a10ea0b4
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 07d83d7a-f0be-4418-8a57-7164a10ea0b4;
	Thu, 15 Oct 2020 12:38:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EA1D7AFEA;
	Thu, 15 Oct 2020 12:38:13 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
Date: Thu, 15 Oct 2020 14:38:01 +0200
Message-Id: <20201015123806.32416-6-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
address space. The mapping's address is returned as struct dma_buf_map.
Each function is a simplified version of TTM's existing kmap code. Both
functions respect the memory's location and/or writecombine flags.

On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
two helpers that convert a GEM object into the TTM BO and forward the call
to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
object callbacks.

v4:
	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
	  Christian)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
 drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
 include/drm/drm_gem_ttm_helper.h     |  6 +++
 include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
 include/linux/dma-buf-map.h          | 20 ++++++++
 5 files changed, 164 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index 0e4fb9ba43ad..db4c14d78a30 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 }
 EXPORT_SYMBOL(drm_gem_ttm_print_info);
 
+/**
+ * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: [out] returns the dma-buf mapping.
+ *
+ * Maps a GEM object with ttm_bo_vmap(). This function can be used as
+ * &drm_gem_object_funcs.vmap callback.
+ *
+ * Returns:
+ * 0 on success, or a negative errno code otherwise.
+ */
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+		     struct dma_buf_map *map)
+{
+	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+	return ttm_bo_vmap(bo, map);
+
+}
+EXPORT_SYMBOL(drm_gem_ttm_vmap);
+
+/**
+ * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: dma-buf mapping.
+ *
+ * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
+ * &drm_gem_object_funcs.vunmap callback.
+ */
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+			struct dma_buf_map *map)
+{
+	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+	ttm_bo_vunmap(bo, map);
+}
+EXPORT_SYMBOL(drm_gem_ttm_vunmap);
+
 /**
  * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
  * @gem: GEM object.
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index bdee4df1f3f2..80c42c774c7d 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -32,6 +32,7 @@
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
 #include <drm/drm_vma_manager.h>
+#include <linux/dma-buf-map.h>
 #include <linux/io.h>
 #include <linux/highmem.h>
 #include <linux/wait.h>
@@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 }
 EXPORT_SYMBOL(ttm_bo_kunmap);
 
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+	struct ttm_resource *mem = &bo->mem;
+	int ret;
+
+	ret = ttm_mem_io_reserve(bo->bdev, mem);
+	if (ret)
+		return ret;
+
+	if (mem->bus.is_iomem) {
+		void __iomem *vaddr_iomem;
+		unsigned long size = bo->num_pages << PAGE_SHIFT;
+
+		if (mem->bus.addr)
+			vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
+		else if (mem->placement & TTM_PL_FLAG_WC)
+			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
+		else
+			vaddr_iomem = ioremap(mem->bus.offset, size);
+
+		if (!vaddr_iomem)
+			return -ENOMEM;
+
+		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
+
+	} else {
+		struct ttm_operation_ctx ctx = {
+			.interruptible = false,
+			.no_wait_gpu = false
+		};
+		struct ttm_tt *ttm = bo->ttm;
+		pgprot_t prot;
+		void *vaddr;
+
+		BUG_ON(!ttm);
+
+		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
+		if (ret)
+			return ret;
+
+		/*
+		 * We need to use vmap to get the desired page protection
+		 * or to make the buffer object look contiguous.
+		 */
+		prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
+		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
+		if (!vaddr)
+			return -ENOMEM;
+
+		dma_buf_map_set_vaddr(map, vaddr);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(ttm_bo_vmap);
+
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+	if (dma_buf_map_is_null(map))
+		return;
+
+	if (map->is_iomem)
+		iounmap(map->vaddr_iomem);
+	else
+		vunmap(map->vaddr);
+	dma_buf_map_clear(map);
+
+	ttm_mem_io_free(bo->bdev, &bo->mem);
+}
+EXPORT_SYMBOL(ttm_bo_vunmap);
+
 static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
 				 bool dst_use_tt)
 {
diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
index 118cef76f84f..7c6d874910b8 100644
--- a/include/drm/drm_gem_ttm_helper.h
+++ b/include/drm/drm_gem_ttm_helper.h
@@ -10,11 +10,17 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+struct dma_buf_map;
+
 #define drm_gem_ttm_of_gem(gem_obj) \
 	container_of(gem_obj, struct ttm_buffer_object, base)
 
 void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 			    const struct drm_gem_object *gem);
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+		     struct dma_buf_map *map);
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+			struct dma_buf_map *map);
 int drm_gem_ttm_mmap(struct drm_gem_object *gem,
 		     struct vm_area_struct *vma);
 
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 37102e45e496..2c59a785374c 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -48,6 +48,8 @@ struct ttm_bo_global;
 
 struct ttm_bo_device;
 
+struct dma_buf_map;
+
 struct drm_mm_node;
 
 struct ttm_placement;
@@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
  */
 void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
 
+/**
+ * ttm_bo_vmap
+ *
+ * @bo: The buffer object.
+ * @map: pointer to a struct dma_buf_map representing the map.
+ *
+ * Sets up a kernel virtual mapping, using ioremap or vmap to the
+ * data in the buffer object. The parameter @map returns the virtual
+ * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
+ *
+ * Returns
+ * -ENOMEM: Out of memory.
+ * -EINVAL: Invalid range.
+ */
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
+/**
+ * ttm_bo_vunmap
+ *
+ * @bo: The buffer object.
+ * @map: Object describing the map to unmap.
+ *
+ * Unmaps a kernel map set up by ttm_bo_vmap().
+ */
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
 /**
  * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
  *
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index fd1aba545fdf..2e8bbecb5091 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -45,6 +45,12 @@
  *
  *	dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
  *
+ * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
 	map->is_iomem = false;
 }
 
+/**
+ * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
+ * @map:		The dma-buf mapping structure
+ * @vaddr_iomem:	An I/O-memory address
+ *
+ * Sets the address and the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
+					       void __iomem *vaddr_iomem)
+{
+	map->vaddr_iomem = vaddr_iomem;
+	map->is_iomem = true;
+}
+
 /**
  * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
  * @lhs:	The dma-buf mapping structure
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7297.19068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WY-0000hR-GZ; Thu, 15 Oct 2020 12:38:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7297.19068; Thu, 15 Oct 2020 12:38:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WY-0000hG-Cl; Thu, 15 Oct 2020 12:38:26 +0000
Received: by outflank-mailman (input) for mailman id 7297;
 Thu, 15 Oct 2020 12:38:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2WW-0000WT-Vh
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a250ab0b-602f-4700-9700-fc055a211682;
 Thu, 15 Oct 2020 12:38:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A929AB186;
 Thu, 15 Oct 2020 12:38:12 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kT2WW-0000WT-Vh
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:25 +0000
X-Inumbo-ID: a250ab0b-602f-4700-9700-fc055a211682
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a250ab0b-602f-4700-9700-fc055a211682;
	Thu, 15 Oct 2020 12:38:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A929AB186;
	Thu, 15 Oct 2020 12:38:12 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v4 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
Date: Thu, 15 Oct 2020 14:37:59 +0200
Message-Id: <20201015123806.32416-4-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function etnaviv_gem_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       | 1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       | 1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 -----
 3 files changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 914f0867ff71..9682c26d89bb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
 void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index 67d9a2b9ea6a..bbd235473645 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = {
 	.unpin = etnaviv_gem_prime_unpin,
 	.get_sg_table = etnaviv_gem_prime_get_sg_table,
 	.vmap = etnaviv_gem_prime_vmap,
-	.vunmap = etnaviv_gem_prime_vunmap,
 	.vm_ops = &vm_ops,
 };
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 135fbff6fecf..a6d9932a32ae 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
 	return etnaviv_gem_vmap(obj);
 }
 
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* TODO msm_gem_vunmap() */
-}
-
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma)
 {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7298.19080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WZ-0000jp-RH; Thu, 15 Oct 2020 12:38:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7298.19080; Thu, 15 Oct 2020 12:38:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2WZ-0000jh-Nh; Thu, 15 Oct 2020 12:38:27 +0000
Received: by outflank-mailman (input) for mailman id 7298;
 Thu, 15 Oct 2020 12:38:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2WY-0000Wl-4X
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14180311-2c02-4c96-8f88-9e5e32e31de2;
 Thu, 15 Oct 2020 12:38:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7E978B19C;
 Thu, 15 Oct 2020 12:38:15 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kT2WY-0000Wl-4X
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:26 +0000
X-Inumbo-ID: 14180311-2c02-4c96-8f88-9e5e32e31de2
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 14180311-2c02-4c96-8f88-9e5e32e31de2;
	Thu, 15 Oct 2020 12:38:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 7E978B19C;
	Thu, 15 Oct 2020 12:38:15 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v4 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map
Date: Thu, 15 Oct 2020 14:38:03 +0200
Message-Id: <20201015123806.32416-8-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

GEM's vmap and vunmap interfaces now wrap memory pointers in struct
dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/drm_client.c   | 18 +++++++++++-------
 drivers/gpu/drm/drm_gem.c      | 26 +++++++++++++-------------
 drivers/gpu/drm/drm_internal.h |  5 +++--
 drivers/gpu/drm/drm_prime.c    | 14 ++++----------
 4 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 495f47d23d87..ac0082bed966 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -3,6 +3,7 @@
  * Copyright 2018 Noralf Trønnes
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
@@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
  */
 void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (buffer->vaddr)
 		return buffer->vaddr;
@@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	vaddr = drm_gem_vmap(buffer->gem);
-	if (IS_ERR(vaddr))
-		return vaddr;
+	ret = drm_gem_vmap(buffer->gem, &map);
+	if (ret)
+		return ERR_PTR(ret);
 
-	buffer->vaddr = vaddr;
+	buffer->vaddr = map.vaddr;
 
-	return vaddr;
+	return map.vaddr;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+
+	drm_gem_vunmap(buffer->gem, &map);
 	buffer->vaddr = NULL;
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index a89ad4570e3c..4d5fff4bd821 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->funcs->unpin(obj);
 }
 
-void *drm_gem_vmap(struct drm_gem_object *obj)
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map;
 	int ret;
 
 	if (!obj->funcs->vmap)
-		return ERR_PTR(-EOPNOTSUPP);
+		return -EOPNOTSUPP;
 
-	ret = obj->funcs->vmap(obj, &map);
+	ret = obj->funcs->vmap(obj, map);
 	if (ret)
-		return ERR_PTR(ret);
-	else if (dma_buf_map_is_null(&map))
-		return ERR_PTR(-ENOMEM);
+		return ret;
+	else if (dma_buf_map_is_null(map))
+		return -ENOMEM;
 
-	return map.vaddr;
+	return 0;
 }
 
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
-
-	if (!vaddr)
+	if (dma_buf_map_is_null(map))
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, &map);
+		obj->funcs->vunmap(obj, map);
+
+	/* Always set the mapping to NULL. Callers may rely on this. */
+	dma_buf_map_clear(map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index b65865c630b0..58832d75a9bd 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -33,6 +33,7 @@
 
 struct dentry;
 struct dma_buf;
+struct dma_buf_map;
 struct drm_connector;
 struct drm_crtc;
 struct drm_framebuffer;
@@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 89e2a2496734..cb8fbeeb731b 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
  * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
  * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
+ * The kernel virtual address is returned in @map.
  *
- * Returns the kernel virtual address or NULL on failure.
+ * Returns 0 on success or a negative errno code otherwise.
  */
 int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
-	void *vaddr;
 
-	vaddr = drm_gem_vmap(obj);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
-
-	dma_buf_map_set_vaddr(map, vaddr);
-
-	return 0;
+	return drm_gem_vmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
@@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	drm_gem_vunmap(obj, map->vaddr);
+	drm_gem_vunmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7299.19092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Wd-0000p6-9S; Thu, 15 Oct 2020 12:38:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7299.19092; Thu, 15 Oct 2020 12:38:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Wd-0000ot-3W; Thu, 15 Oct 2020 12:38:31 +0000
Received: by outflank-mailman (input) for mailman id 7299;
 Thu, 15 Oct 2020 12:38:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2Wb-0000WT-W4
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c601559-1549-4690-9ff1-3aaad3c669c1;
 Thu, 15 Oct 2020 12:38:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A66FDB18A;
 Thu, 15 Oct 2020 12:38:12 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kT2Wb-0000WT-W4
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:30 +0000
X-Inumbo-ID: 2c601559-1549-4690-9ff1-3aaad3c669c1
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2c601559-1549-4690-9ff1-3aaad3c669c1;
	Thu, 15 Oct 2020 12:38:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A66FDB18A;
	Thu, 15 Oct 2020 12:38:12 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v4 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
Date: Thu, 15 Oct 2020 14:37:58 +0200
Message-Id: <20201015123806.32416-3-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The function drm_gem_cma_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
 drivers/gpu/drm/vc4/vc4_bo.c         |  1 -
 include/drm/drm_gem_cma_helper.h     |  1 -
 3 files changed, 19 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 2165633c9b9e..d527485ea0b7 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
-/**
- * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
- *     address space
- * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
- *
- * This function removes a buffer exported via DRM PRIME from the kernel's
- * virtual address space. This is a no-op because CMA buffers cannot be
- * unmapped from kernel space. Drivers using the CMA helpers should set this
- * as their &drm_gem_object_funcs.vunmap callback.
- */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* Nothing to do */
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
-
 static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
 	.free = drm_gem_cma_free_object,
 	.print_info = drm_gem_cma_print_info,
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index f432278173cd..557f0d1e6437 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
 	.export = vc4_prime_export,
 	.get_sg_table = drm_gem_cma_prime_get_sg_table,
 	.vmap = vc4_prime_vmap,
-	.vunmap = drm_gem_cma_prime_vunmap,
 	.vm_ops = &vc4_vm_ops,
 };
 
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..a064b0d1c480 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7300.19104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2We-0000sr-Vp; Thu, 15 Oct 2020 12:38:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7300.19104; Thu, 15 Oct 2020 12:38:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2We-0000sR-Qe; Thu, 15 Oct 2020 12:38:32 +0000
Received: by outflank-mailman (input) for mailman id 7300;
 Thu, 15 Oct 2020 12:38:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2Wd-0000Wl-4T
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9edf332b-0fea-45eb-8528-2eef54b25b3a;
 Thu, 15 Oct 2020 12:38:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C0804B181;
 Thu, 15 Oct 2020 12:38:14 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kT2Wd-0000Wl-4T
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:31 +0000
X-Inumbo-ID: 9edf332b-0fea-45eb-8528-2eef54b25b3a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9edf332b-0fea-45eb-8528-2eef54b25b3a;
	Thu, 15 Oct 2020 12:38:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C0804B181;
	Thu, 15 Oct 2020 12:38:14 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
Date: Thu, 15 Oct 2020 14:38:02 +0200
Message-Id: <20201015123806.32416-7-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch replaces the raw pointers in the GEM objects' vmap/vunmap
functions with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.

TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.

v4:
	* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
	* fix a trailing { in drm_gem_vmap()
	* remove several empty functions instead of converting them (Daniel)
	* comment uses of raw pointers with a TODO (Daniel)
	* TODO list: convert more helpers to use struct dma_buf_map

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 Documentation/gpu/todo.rst                  |  18 ++++
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 -------
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
 drivers/gpu/drm/ast/ast_cursor.c            |  27 +++--
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/drm_gem.c                   |  23 +++--
 drivers/gpu/drm/drm_gem_cma_helper.c        |  10 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 +++++----
 drivers/gpu/drm/drm_gem_vram_helper.c       | 107 ++++++++++----------
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |   9 +-
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
 drivers/gpu/drm/nouveau/Kconfig             |   1 +
 drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
 drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 ----
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +--
 drivers/gpu/drm/qxl/qxl_display.c           |  11 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |  14 ++-
 drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  31 +++---
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +--
 drivers/gpu/drm/radeon/radeon.h             |   1 -
 drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  20 ----
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 ++--
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   6 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 ++-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   2 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_vram_helper.h           |  14 +--
 47 files changed, 321 insertions(+), 295 deletions(-)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 700637e25ecd..7e6fc3c04add 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
 
 Level: Intermediate
 
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interfaces have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
 
 Core refactorings
 =================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 147d61b9674e..319839b87d37 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -239,6 +239,7 @@ config DRM_RADEON
 	select FW_LOADER
         select DRM_KMS_HELPER
         select DRM_TTM
+	select DRM_TTM_HELPER
 	select POWER_SUPPLY
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
@@ -259,6 +260,7 @@ config DRM_AMDGPU
 	select DRM_KMS_HELPER
 	select DRM_SCHED
 	select DRM_TTM
+	select DRM_TTM_HELPER
 	select POWER_SUPPLY
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
 #include <linux/dma-fence-array.h>
 #include <linux/pci-p2pdma.h>
 
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
-			  &bo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
-	ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
 /**
  * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
  * @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf);
 bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
 				      struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 			  struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index be08a63ef58c..576659827e74 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
 
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
 
 #include "amdgpu.h"
 #include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
 	.open = amdgpu_gem_object_open,
 	.close = amdgpu_gem_object_close,
 	.export = amdgpu_gem_prime_export,
-	.vmap = amdgpu_gem_prime_vmap,
-	.vunmap = amdgpu_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
 	struct amdgpu_bo		*parent;
 	struct amdgpu_bo		*shadow;
 
-	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	struct amdgpu_mn		*mn;
 
 
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
 
 	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
 	struct drm_device *dev = &ast->base;
 	size_t size, i;
 	struct drm_gem_vram_object *gbo;
-	void __iomem *vaddr;
+	struct dma_buf_map map;
 	int ret;
 
 	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
-		vaddr = drm_gem_vram_vmap(gbo);
-		if (IS_ERR(vaddr)) {
-			ret = PTR_ERR(vaddr);
+		ret = drm_gem_vram_vmap(gbo, &map);
+		if (ret) {
 			drm_gem_vram_unpin(gbo);
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
 
 		ast->cursor.gbo[i] = gbo;
-		ast->cursor.vaddr[i] = vaddr;
+		ast->cursor.map[i] = map;
 	}
 
 	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
 	while (i) {
 		--i;
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 {
 	struct drm_device *dev = &ast->base;
 	struct drm_gem_vram_object *gbo;
+	struct dma_buf_map map;
 	int ret;
 	void *src;
 	void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	ret = drm_gem_vram_pin(gbo, 0);
 	if (ret)
 		return ret;
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
-		ret = PTR_ERR(src);
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret)
 		goto err_drm_gem_vram_unpin;
-	}
+	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
 
 	/* do data transfer to cursor BO */
 	update_cursor_image(dst, src, fb->width, fb->height);
 
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 	drm_gem_vram_unpin(gbo);
 
 	return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 	u8 __iomem *sig;
 	u8 jreg;
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
 
 	sig = dst + AST_HWC_SIZE;
 	writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
 #ifndef __AST_DRV_H__
 #define __AST_DRV_H__
 
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
 
 #include <drm/drm_connector.h>
 #include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
 
 	struct {
 		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
-		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
 		unsigned int next_index;
 	} cursor;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..a89ad4570e3c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
 #include <linux/pagemap.h>
 #include <linux/shmem_fs.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
 #include <linux/mem_encrypt.h>
 #include <linux/pagevec.h>
 
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 
 void *drm_gem_vmap(struct drm_gem_object *obj)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
-	else
-		vaddr = ERR_PTR(-EOPNOTSUPP);
+	if (!obj->funcs->vmap)
+		return ERR_PTR(-EOPNOTSUPP);
 
-	if (!vaddr)
-		vaddr = ERR_PTR(-ENOMEM);
+	ret = obj->funcs->vmap(obj, &map);
+	if (ret)
+		return ERR_PTR(ret);
+	else if (dma_buf_map_is_null(&map))
+		return ERR_PTR(-ENOMEM);
 
-	return vaddr;
+	return map.vaddr;
 }
 
 void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
 {
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
 	if (!vaddr)
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, vaddr);
+		obj->funcs->vunmap(obj, &map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ *       store.
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * driver's &drm_gem_object_funcs.vmap callback.
  *
  * Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
-	return cma_obj->vaddr;
+	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map;
 	int ret = 0;
 
-	if (shmem->vmap_use_count++ > 0)
-		return shmem->vaddr;
+	if (shmem->vmap_use_count++ > 0) {
+		dma_buf_map_set_vaddr(map, shmem->vaddr);
+		return 0;
+	}
 
 	if (obj->import_attach) {
-		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
-		if (!ret)
-			shmem->vaddr = map.vaddr;
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+		if (!ret) {
+			if (WARN_ON(map->is_iomem)) {
+				ret = -EIO;
+				goto err_put_pages;
+			}
+			shmem->vaddr = map->vaddr;
+		}
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 				    VM_MAP, prot);
 		if (!shmem->vaddr)
 			ret = -ENOMEM;
+		else
+			dma_buf_map_set_vaddr(map, shmem->vaddr);
 	}
 
 	if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
-	return shmem->vaddr;
+	return 0;
 
 err_put_pages:
 	if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
-	return ERR_PTR(ret);
+	return ret;
 }
 
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
  *
  * This function makes sure that a contiguous kernel virtual address mapping
  * exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	void *vaddr;
 	int ret;
 
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
-		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+		return ret;
+	ret = drm_gem_shmem_vmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 
-	return vaddr;
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+					struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
 
 	if (WARN_ON_ONCE(!shmem->vmap_use_count))
 		return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 		return;
 
 	if (obj->import_attach)
-		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	else
 		vunmap(shmem->vaddr);
 
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
  * This function cleans up a kernel virtual address mapping acquired by
  * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
  * also be called by drivers directly, in which case it will hide the
  * differences between dma-buf imported and natively allocated objects.
  */
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem);
+	drm_gem_shmem_vunmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 2d5ed30518f1..4d8553b28558 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 
 #include <drm/drm_debugfs.h>
@@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
 	 * up; only release the GEM object.
 	 */
 
-	WARN_ON(gbo->kmap_use_count);
-	WARN_ON(gbo->kmap.virtual);
+	WARN_ON(gbo->vmap_use_count);
+	WARN_ON(dma_buf_map_is_set(&gbo->map));
 
 	drm_gem_object_release(&gbo->bo.base);
 }
@@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+				    struct dma_buf_map *map)
 {
 	int ret;
-	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
-	bool is_iomem;
 
-	if (gbo->kmap_use_count > 0)
+	if (gbo->vmap_use_count > 0)
 		goto out;
 
-	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+	ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 out:
-	++gbo->kmap_use_count;
-	return ttm_kmap_obj_virtual(kmap, &is_iomem);
+	++gbo->vmap_use_count;
+	*map = gbo->map;
+
+	return 0;
 }
 
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+				       struct dma_buf_map *map)
 {
-	if (WARN_ON_ONCE(!gbo->kmap_use_count))
+	struct drm_device *dev = gbo->bo.base.dev;
+
+	if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
 		return;
-	if (--gbo->kmap_use_count > 0)
+
+	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+		return; /* BUG: map not mapped from this BO */
+
+	if (--gbo->vmap_use_count > 0)
 		return;
 
 	/*
@@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
 /**
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
- * @gbo:	The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * The vmap function pins a GEM VRAM object to its current location, either
  * system or video memory, and maps its buffer into kernel address space.
@@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
  * unmap and unpin the GEM VRAM object.
  *
  * Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
-	void *base;
 
 	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo);
-	if (IS_ERR(base)) {
-		ret = PTR_ERR(base);
+	ret = drm_gem_vram_kmap_locked(gbo, map);
+	if (ret)
 		goto err_drm_gem_vram_unpin_locked;
-	}
 
 	ttm_bo_unreserve(&gbo->bo);
 
-	return base;
+	return 0;
 
 err_drm_gem_vram_unpin_locked:
 	drm_gem_vram_unpin_locked(gbo);
 err_ttm_bo_unreserve:
 	ttm_bo_unreserve(&gbo->bo);
-	return ERR_PTR(ret);
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
 
 /**
  * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo:	The GEM VRAM object to unmap
- * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  *
  * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
  * the documentation for drm_gem_vram_vmap() for more information.
  */
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
 
@@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
 	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
 		return;
 
-	drm_gem_vram_kunmap_locked(gbo);
+	drm_gem_vram_kunmap_locked(gbo, map);
 	drm_gem_vram_unpin_locked(gbo);
 
 	ttm_bo_unreserve(&gbo->bo);
@@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
 					       bool evict,
 					       struct ttm_resource *new_mem)
 {
-	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	struct ttm_buffer_object *bo = &gbo->bo;
+	struct drm_device *dev = bo->base.dev;
 
-	if (WARN_ON_ONCE(gbo->kmap_use_count))
+	if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
 		return;
 
-	if (!kmap->virtual)
-		return;
-	ttm_bo_kunmap(kmap);
-	kmap->virtual = NULL;
+	ttm_bo_vunmap(bo, &gbo->map);
 }
 
 static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
 }
 
 /**
- * drm_gem_vram_object_vmap() - \
-	Implements &struct drm_gem_object_funcs.vmap
- * @gem:	The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ *	Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
-	void *base;
 
-	base = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(base))
-		return NULL;
-	return base;
+	return drm_gem_vram_vmap(gbo, map);
 }
 
 /**
- * drm_gem_vram_object_vunmap() - \
-	Implements &struct drm_gem_object_funcs.vunmap
- * @gem:	The GEM object to unmap
- * @vaddr:	The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ *	Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  */
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
-				       void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 
-	drm_gem_vram_vunmap(gbo, vaddr);
+	drm_gem_vram_vunmap(gbo, map);
 }
 
 /*
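[Editor's note: the use-count logic in drm_gem_vram_kmap_locked()/drm_gem_vram_kunmap_locked() above follows a standard ref-counted mapping pattern: the first caller creates the mapping, later callers reuse it, and it is torn down only when the last user unmaps. A hedged userspace sketch, with illustrative names (demo_*) rather than kernel API, and malloc()/free() standing in for ttm_bo_vmap()/ttm_bo_vunmap():]

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative ref-counted vmap: first user maps, last user unmaps. */
struct demo_bo {
	unsigned int vmap_use_count;
	void *vaddr;
	size_t size;
};

static int demo_vmap(struct demo_bo *bo, void **out)
{
	if (bo->vmap_use_count == 0) {
		bo->vaddr = malloc(bo->size);	/* stands in for ttm_bo_vmap() */
		if (!bo->vaddr)
			return -1;
	}
	bo->vmap_use_count++;
	*out = bo->vaddr;		/* every user sees the same mapping */
	return 0;
}

static void demo_vunmap(struct demo_bo *bo, void *vaddr)
{
	assert(bo->vmap_use_count > 0);	/* unbalanced unmap is a bug */
	assert(vaddr == bo->vaddr);	/* mapping must come from this BO */

	if (--bo->vmap_use_count > 0)
		return;			/* other users still hold the map */

	free(bo->vaddr);		/* stands in for ttm_bo_vunmap() */
	bo->vaddr = NULL;
}
```

In the kernel code the same checks appear as drm_WARN_ON_ONCE() on the use count and on dma_buf_map_is_equal(), and the whole sequence runs under the BO reservation lock rather than being callable concurrently as written here.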
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
 }
 
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	return etnaviv_gem_vmap(obj);
+	void *vaddr = etnaviv_gem_vmap(obj);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(obj);
 }
 
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 
 	if (bo->heap_size)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
-	return drm_gem_shmem_vmap(obj);
+	return drm_gem_shmem_vmap(obj, map);
 }
 
 static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
 
+#include <linux/dma-buf-map.h>
 #include <linux/kthread.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 	struct lima_dump_chunk_buffer *buffer_chunk;
 	u32 size, task_size, mem_size;
 	int i;
+	struct dma_buf_map map;
+	int ret;
 
 	mutex_lock(&dev->error_task_list_lock);
 
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);
 
-			data = drm_gem_shmem_vmap(&bo->base.base);
-			if (IS_ERR_OR_NULL(data)) {
+			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+			if (ret) {
 				kvfree(et);
 				goto out;
 			}
 
-			memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
 
-			drm_gem_shmem_vunmap(&bo->base.base, data);
+			drm_gem_shmem_vunmap(&bo->base.base, &map);
 		}
 
 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
 		      struct drm_rect *clip)
 {
 	struct drm_device *dev = &mdev->base;
+	struct dma_buf_map map;
 	void *vmap;
+	int ret;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (drm_WARN_ON(dev, !vmap))
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (drm_WARN_ON(dev, ret))
 		return; /* BUG: SHMEM BO should always be vmapped */
+	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 
 	/* Always scanout image at VRAM offset 0 */
 	mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
 	select FW_LOADER
 	select DRM_KMS_HELPER
 	select DRM_TTM
+	select DRM_TTM_HELPER
 	select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
 	select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
 	select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
 	unsigned mode;
 
 	struct nouveau_drm_tile *tile;
-
-	struct ttm_bo_kmap_obj dma_buf_vmap;
 };
 
 static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 9a421c3949de..f942b526b0a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
  *
  */
 
+#include <drm/drm_gem_ttm_helper.h>
+
 #include "nouveau_drv.h"
 #include "nouveau_dma.h"
 #include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
 	.pin = nouveau_gem_prime_pin,
 	.unpin = nouveau_gem_prime_unpin,
 	.get_sg_table = nouveau_gem_prime_get_sg_table,
-	.vmap = nouveau_gem_prime_vmap,
-	.vunmap = nouveau_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
 
 #endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
 }
 
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
-			  &nvbo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
-	ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
 struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
 							 struct dma_buf_attachment *attach,
 							 struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
 #include <drm/drm_gem_shmem_helper.h>
 #include <drm/panfrost_drm.h>
 #include <linux/completion.h>
+#include <linux/dma-buf-map.h>
 #include <linux/iopoll.h>
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map;
 	struct drm_gem_shmem_object *bo;
 	u32 cfg, as;
 	int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}
 
-	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
-	if (IS_ERR(perfcnt->buf)) {
-		ret = PTR_ERR(perfcnt->buf);
+	ret = drm_gem_shmem_vmap(&bo->base, &map);
+	if (ret)
 		goto err_put_mapping;
-	}
+	perfcnt->buf = map.vaddr;
 
 	/*
 	 * Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;
 
 err_vunmap:
-	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&bo->base, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
 
 	if (user != perfcnt->user)
 		return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 		  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
 
 	perfcnt->user = NULL;
-	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..e165fa9b2089 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
 
 #include <linux/crc32.h>
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_drv.h>
 #include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 	struct drm_gem_object *obj;
 	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
 	int ret;
+	struct dma_buf_map user_map;
+	struct dma_buf_map cursor_map;
 	void *user_ptr;
 	int size = 64*64*4;
 
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		user_bo = gem_to_qxl_bo(obj);
 
 		/* pinning is done in the prepare/cleanup framebuffer */
-		ret = qxl_bo_kmap(user_bo, &user_ptr);
+		ret = qxl_bo_kmap(user_bo, &user_map);
 		if (ret)
 			goto out_free_release;
+		user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 		ret = qxl_alloc_bo_reserved(qdev, release,
 					    sizeof(struct qxl_cursor) + size,
@@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		if (ret)
 			goto out_unpin;
 
-		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		ret = qxl_bo_kmap(cursor_bo, &cursor_map);
 		if (ret)
 			goto out_backoff;
 
@@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 {
 	int ret;
 	struct drm_gem_object *gobj;
+	struct dma_buf_map map;
 	int monitors_config_size = sizeof(struct qxl_monitors_config) +
 		qxl_num_crtc * sizeof(struct qxl_head);
 
@@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 	if (ret)
 		return ret;
 
-	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+	qxl_bo_kmap(qdev->monitors_config_bo, &map);
 
 	qdev->monitors_config = qdev->monitors_config_bo->kptr;
 	qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
  * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  */
 
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_fourcc.h>
 
 #include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
 					      unsigned int num_clips,
 					      struct qxl_bo *clips_bo)
 {
+	struct dma_buf_map map;
 	struct qxl_clip_rects *dev_clips;
 	int ret;
 
-	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
-	if (ret) {
+	ret = qxl_bo_kmap(clips_bo, &map);
+	if (ret)
 		return NULL;
-	}
+	dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
 	dev_clips->num_rects = num_clips;
 	dev_clips->chunk.next_chunk = 0;
 	dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	int stride = fb->pitches[0];
 	/* depth is not actually interesting, we don't mask with it */
 	int depth = fb->format->cpp[0] * 8;
+	struct dma_buf_map surface_map;
 	uint8_t *surface_base;
 	struct qxl_release *release;
 	struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_bo_kmap(bo, (void **)&surface_base);
+	ret = qxl_bo_kmap(bo, &surface_map);
 	if (ret)
 		goto out_release_backoff;
+	surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	ret = qxl_image_init(qdev, release, dimage, surface_base,
 			     left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
  * Definitions taken from spice-protocol, plus kernel driver specific bits.
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/dma-fence.h>
 #include <linux/firmware.h>
 #include <linux/platform_device.h>
@@ -50,6 +51,8 @@
 
 #include "qxl_dev.h"
 
+struct dma_buf_map;
+
 #define DRIVER_AUTHOR		"Dave Airlie"
 
 #define DRIVER_NAME		"qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
 	/* Protected by tbo.reserved */
 	struct ttm_place		placements[3];
 	struct ttm_placement		placement;
-	struct ttm_bo_kmap_obj		kmap;
+	struct dma_buf_map		map;
 	void				*kptr;
 	unsigned int                    map_count;
 	int                             type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
 void qxl_gem_object_close(struct drm_gem_object *obj,
 			  struct drm_file *file_priv);
 void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
 
 /* qxl_dumb.c */
 int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 				struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 940e99354f49..755df4d8f95f 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
  *          Alon Levy
  */
 
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
 #include "qxl_drv.h"
 #include "qxl_object.h"
 
-#include <linux/io-mapping.h>
 static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
 {
 	struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
 	return 0;
 }
 
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
 {
-	bool is_iomem;
 	int r;
 
 	if (bo->kptr) {
-		if (ptr)
-			*ptr = bo->kptr;
 		bo->map_count++;
-		return 0;
+		goto out;
 	}
-	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+	r = ttm_bo_vmap(&bo->tbo, &bo->map);
 	if (r)
 		return r;
-	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
-	if (ptr)
-		*ptr = bo->kptr;
 	bo->map_count = 1;
+
+	/* TODO: Remove kptr in favor of map everywhere. */
+	if (bo->map.is_iomem)
+		bo->kptr = (void *)bo->map.vaddr_iomem;
+	else
+		bo->kptr = bo->map.vaddr;
+
+out:
+	*map = bo->map;
 	return 0;
 }
 
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 	void *rptr;
 	int ret;
 	struct io_mapping *map;
+	struct dma_buf_map bo_map;
 
 	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
 		map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 		return rptr;
 	}
 
-	ret = qxl_bo_kmap(bo, &rptr);
+	ret = qxl_bo_kmap(bo, &bo_map);
 	if (ret)
 		return NULL;
+	rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	rptr += page_offset * PAGE_SIZE;
 	return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
 	if (bo->map_count > 0)
 		return;
 	bo->kptr = NULL;
-	ttm_bo_kunmap(&bo->kmap);
+	ttm_bo_vunmap(&bo->tbo, &bo->map);
 }
 
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 			 bool kernel, bool pinned, u32 domain,
 			 struct qxl_surface *surf,
 			 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
 extern void qxl_bo_kunmap(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
-	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr);
+	ret = qxl_bo_kmap(bo, map);
 	if (ret < 0)
-		return ERR_PTR(ret);
+		return ret;
 
-	return ptr;
+	return 0;
 }
 
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
 	/* Constant after initialization */
 	struct radeon_device		*rdev;
 
-	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	pid_t				pid;
 
 #ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
 #include <drm/drm_debugfs.h>
 #include <drm/drm_device.h>
 #include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
 #include <drm/radeon_drm.h>
 
 #include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
 struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 static const struct drm_gem_object_funcs radeon_gem_object_funcs;
 
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
 	.pin = radeon_gem_prime_pin,
 	.unpin = radeon_gem_prime_unpin,
 	.get_sg_table = radeon_gem_prime_get_sg_table,
-	.vmap = radeon_gem_prime_vmap,
-	.vunmap = radeon_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 /*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
 }
 
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct radeon_bo *bo = gem_to_radeon_bo(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
-			  &bo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
-	ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
 struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 							struct dma_buf_attachment *attach,
 							struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
 	return ERR_PTR(ret);
 }
 
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
-	if (rk_obj->pages)
-		return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
-			    pgprot_writecombine(PAGE_KERNEL));
+	if (rk_obj->pages) {
+		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+				  pgprot_writecombine(PAGE_KERNEL));
+		if (!vaddr)
+			return -ENOMEM;
+		dma_buf_map_set_vaddr(map, vaddr);
+		return 0;
+	}
 
 	if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
-		return NULL;
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
 
-	return rk_obj->kvaddr;
+	return 0;
 }
 
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
 	if (rk_obj->pages) {
-		vunmap(vaddr);
+		vunmap(map->vaddr);
 		return;
 	}
 
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
 rockchip_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct dma_buf_attachment *attach,
 				   struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm driver mmap file operations */
 int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
  */
 
 #include <linux/console.h>
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 #include <linux/pci.h>
 
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 			       struct drm_rect *rect)
 {
 	struct cirrus_device *cirrus = to_cirrus(fb->dev);
+	struct dma_buf_map map;
 	void *vmap;
 	int idx, ret;
 
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	if (!drm_dev_enter(&cirrus->dev, &idx))
 		goto out;
 
-	ret = -ENOMEM;
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (!vmap)
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret)
 		goto out_dev_exit;
+	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	if (cirrus->cpp == fb->format->cpp[0])
 		drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	else
 		WARN_ON_ONCE("cpp mismatch");
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 	ret = 0;
 
 out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 {
 	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
 	struct drm_framebuffer *fb;
+	struct dma_buf_map map;
 	void *vaddr;
 	u8 *src;
 
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
-		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
+		GM12U320_ERR("failed to vmap fb: %d\n", ret);
 		goto put_fb;
 	}
+	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	if (fb->obj[0]->import_attach) {
 		ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
 	}
 vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 put_fb:
 	drm_framebuffer_put(fb);
 	gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	struct urb *urb;
 	struct drm_rect clip;
 	int log_bpp;
+	struct dma_buf_map map;
 	void *vaddr;
 
 	ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 			return ret;
 	}
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
 		DRM_ERROR("failed to vmap fb\n");
 		goto out_dma_buf_end_cpu_access;
 	}
+	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	urb = udl_get_urb(dev);
 	if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	ret = 0;
 
 out_drm_gem_shmem_vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 out_dma_buf_end_cpu_access:
 	if (import_attach) {
 		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
  *          Michael Thayer <michael.thayer@oracle.com,
  *          Hans de Goede <hdegoede@redhat.com>
  */
+
+#include <linux/dma-buf-map.h>
 #include <linux/export.h>
 
 #include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	u32 height = plane->state->crtc_h;
 	size_t data_size, mask_size;
 	u32 flags;
+	struct dma_buf_map map;
+	int ret;
 	u8 *src;
 
 	/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	vbox_crtc->cursor_enabled = true;
 
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret) {
 		/*
 		 * BUG: we should have pinned the BO in prepare_fb().
 		 */
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 		DRM_WARN("Could not map cursor bo, skipping update\n");
 		return;
 	}
+	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	/*
 	 * The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	data_size = width * height * 4 + mask_size;
 
 	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_cma_prime_mmap(obj, vma);
 }
 
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
 	if (bo->validated_shader) {
 		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 	}
 
-	return drm_gem_cma_prime_vmap(obj);
+	return drm_gem_cma_prime_vmap(obj, map);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index cc79b1aaa878..904f2c36c963 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int vc4_bo_cache_init(struct drm_device *dev);
 void vc4_bo_cache_destroy(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	return &obj->base;
 }
 
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
 	struct page **pages;
+	void *vaddr;
 
 	pages = vgem_pin_pages(bo);
 	if (IS_ERR(pages))
-		return NULL;
+		return PTR_ERR(pages);
+
+	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
 
-	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	return 0;
 }
 
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 	vgem_unpin_pages(bo);
 }
 
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return gem_mmap_obj(xen_obj, vma);
 }
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	void *vaddr;
 
 	if (!xen_obj->pages)
-		return NULL;
+		return -ENOMEM;
 
 	/* Please see comment in gem_mmap_obj on mapping and attributes. */
-	return vmap(xen_obj->pages, xen_obj->num_pages,
-		    VM_MAP, PAGE_KERNEL);
+	vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+		     VM_MAP, PAGE_KERNEL);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr)
+				    struct dma_buf_map *map)
 {
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 }
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
 #define __XEN_DRM_FRONT_GEM_H
 
 struct dma_buf_attachment;
+struct dma_buf_map;
 struct drm_device;
 struct drm_gem_object;
 struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
 int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				 struct dma_buf_map *map);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr);
+				    struct dma_buf_map *map);
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
 				 struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
 
 #include <drm/drm_vma_manager.h>
 
+struct dma_buf_map;
 struct drm_gem_object;
 
 /**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
 
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+#include <linux/dma-buf-map.h>
 #include <linux/kernel.h> /* for container_of() */
 
 struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
 
 /**
  * struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem:	GEM object
  * @bo:		TTM buffer object
- * @kmap:	Mapping information for @bo
+ * @map:	Mapping information for @bo
  * @placement:	TTM placement information. Supported placements are \
 	%TTM_PL_VRAM and %TTM_PL_SYSTEM
  * @placements:	TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
  */
 struct drm_gem_vram_object {
 	struct ttm_buffer_object bo;
-	struct ttm_bo_kmap_obj kmap;
+	struct dma_buf_map map;
 
 	/**
-	 * @kmap_use_count:
+	 * @vmap_use_count:
 	 *
 	 * Reference count on the virtual address.
 	 * The address are un-mapped when the count reaches zero.
 	 */
-	unsigned int kmap_use_count;
+	unsigned int vmap_use_count;
 
 	/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
 	struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
 
 int drm_gem_vram_fill_create_dumb(struct drm_file *file,
 				  struct drm_device *dev,
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7301.19116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Wi-00010Z-QF; Thu, 15 Oct 2020 12:38:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7301.19116; Thu, 15 Oct 2020 12:38:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Wi-00010F-JB; Thu, 15 Oct 2020 12:38:36 +0000
Received: by outflank-mailman (input) for mailman id 7301;
 Thu, 15 Oct 2020 12:38:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2Wg-0000WT-WB
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72199514-ffd8-427d-a971-e46c3b277dac;
 Thu, 15 Oct 2020 12:38:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 28C49B19A;
 Thu, 15 Oct 2020 12:38:16 +0000 (UTC)
X-Inumbo-ID: 72199514-ffd8-427d-a971-e46c3b277dac
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v4 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map
Date: Thu, 15 Oct 2020 14:38:04 +0200
Message-Id: <20201015123806.32416-9-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Kernel DRM clients now store their framebuffer address in an instance
of struct dma_buf_map. Depending on the buffer's location, the address
refers to system or I/O memory.

Callers of drm_client_buffer_vmap() receive a copy of the mapping in
the call's supplied argument. It can be accessed and modified through
the dma_buf_map interfaces.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
---
 drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
 drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
 include/drm/drm_client.h        |  7 ++++---
 3 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index ac0082bed966..fe573acf1067 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	struct drm_device *dev = buffer->client->dev;
 
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	drm_gem_vunmap(buffer->gem, &buffer->map);
 
 	if (buffer->gem)
 		drm_gem_object_put(buffer->gem);
@@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
 /**
  * drm_client_buffer_vmap - Map DRM client buffer into address space
  * @buffer: DRM client buffer
+ * @map_copy: Returns the mapped memory's address
  *
  * This function maps a client buffer into kernel address space. If the
- * buffer is already mapped, it returns the mapping's address.
+ * buffer is already mapped, it returns the existing mapping's address.
  *
  * Client buffer mappings are not ref'counted. Each call to
  * drm_client_buffer_vmap() should be followed by a call to
  * drm_client_buffer_vunmap(); or the client buffer should be mapped
  * throughout its lifetime.
  *
+ * The returned address is a copy of the internal value. In contrast to
+ * other vmap interfaces, you don't need it for the client's vunmap
+ * function. So you can modify it at will during blit and draw operations.
+ *
  * Returns:
- *	The mapped memory's address
+ *	0 on success, or a negative errno code otherwise.
  */
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
+int
+drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
 {
-	struct dma_buf_map map;
+	struct dma_buf_map *map = &buffer->map;
 	int ret;
 
-	if (buffer->vaddr)
-		return buffer->vaddr;
+	if (dma_buf_map_is_set(map))
+		goto out;
 
 	/*
 	 * FIXME: The dependency on GEM here isn't required, we could
@@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	ret = drm_gem_vmap(buffer->gem, &map);
+	ret = drm_gem_vmap(buffer->gem, map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
-	buffer->vaddr = map.vaddr;
+out:
+	*map_copy = *map;
 
-	return map.vaddr;
+	return 0;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+	struct dma_buf_map *map = &buffer->map;
 
-	drm_gem_vunmap(buffer->gem, &map);
-	buffer->vaddr = NULL;
+	drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
 
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index c2f72bb6afb1..6212cd7cde1d 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->vaddr + offset;
+	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
@@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 	struct drm_clip_rect *clip = &helper->dirty_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	spin_lock_irqsave(&helper->dirty_lock, flags);
 	clip_copy = *clip;
@@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 
 		/* Generic fbdev uses a shadow buffer */
 		if (helper->buffer) {
-			vaddr = drm_client_buffer_vmap(helper->buffer);
-			if (IS_ERR(vaddr))
+			ret = drm_client_buffer_vmap(helper->buffer, &map);
+			if (ret)
 				return;
 			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
 		}
@@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 	struct drm_framebuffer *fb;
 	struct fb_info *fbi;
 	u32 format;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
 		    sizes->surface_width, sizes->surface_height,
@@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		fb_deferred_io_init(fbi);
 	} else {
 		/* buffer is mapped for HW framebuffer */
-		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
+		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+		if (ret)
+			return ret;
+		if (map.is_iomem)
+			fbi->screen_base = map.vaddr_iomem;
+		else
+			fbi->screen_buffer = map.vaddr;
 
-		fbi->screen_buffer = vaddr;
 		/* Shamelessly leak the physical address to user-space */
 #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
 		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 7aaea665bfc2..f07f2fb02e75 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -3,6 +3,7 @@
 #ifndef _DRM_CLIENT_H_
 #define _DRM_CLIENT_H_
 
+#include <linux/dma-buf-map.h>
 #include <linux/lockdep.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
@@ -141,9 +142,9 @@ struct drm_client_buffer {
 	struct drm_gem_object *gem;
 
 	/**
-	 * @vaddr: Virtual address for the buffer
+	 * @map: Virtual address for the buffer
 	 */
-	void *vaddr;
+	struct dma_buf_map map;
 
 	/**
 	 * @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
 
 int drm_client_modeset_create(struct drm_client_dev *client);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7303.19128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Wk-00014I-AZ; Thu, 15 Oct 2020 12:38:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7303.19128; Thu, 15 Oct 2020 12:38:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Wk-00013u-3w; Thu, 15 Oct 2020 12:38:38 +0000
Received: by outflank-mailman (input) for mailman id 7303;
 Thu, 15 Oct 2020 12:38:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2Wi-0000Wl-4k
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1dc6ae4f-5fee-486e-af09-72b283662c7a;
 Thu, 15 Oct 2020 12:38:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8804EB1AD;
 Thu, 15 Oct 2020 12:38:17 +0000 (UTC)
X-Inumbo-ID: 1dc6ae4f-5fee-486e-af09-72b283662c7a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory
Date: Thu, 15 Oct 2020 14:38:06 +0200
Message-Id: <20201015123806.32416-11-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

At least sparc64 requires I/O-specific access to framebuffers. This
patch updates the fbdev console accordingly.

For drivers with direct access to the framebuffer memory, the callback
functions in struct fb_ops test for the type of memory and call the
respective fb_sys_ or fb_cfb_ functions.

For drivers that employ a shadow buffer, fbdev's blit function retrieves
the framebuffer address as struct dma_buf_map, and uses dma_buf_map
interfaces to access the buffer.

The bochs driver on sparc64 uses a workaround to flag the framebuffer as
I/O memory and avoid a HW exception. With the introduction of struct
dma_buf_map, this is not required any longer. The patch removes the
respective code from both bochs and fbdev.

v4:
	* move dma_buf_map changes into separate patch (Daniel)
	* TODO list: comment on fbdev updates (Daniel)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 Documentation/gpu/todo.rst        |  19 ++-
 drivers/gpu/drm/bochs/bochs_kms.c |   1 -
 drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
 include/drm/drm_mode_config.h     |  12 --
 4 files changed, 220 insertions(+), 29 deletions(-)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 7e6fc3c04add..638b7f704339 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
 ------------------------------------------------
 
 Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
-atomic modesetting and GEM vmap support. Current generic fbdev emulation
-expects the framebuffer in system memory (or system-like memory).
+atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
+expected the framebuffer in system memory or system-like memory. By employing
+struct dma_buf_map, drivers with frambuffers in I/O memory can be supported
+as well.
 
 Contact: Maintainer of the driver you plan to convert
 
 Level: Intermediate
 
+Reimplement functions in drm_fbdev_fb_ops without fbdev
+-------------------------------------------------------
+
+A number of callback functions in drm_fbdev_fb_ops could benefit from
+being rewritten without dependencies on the fbdev module. Some of the
+helpers could further benefit from using struct dma_buf_map instead of
+raw pointers.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
+
+Level: Advanced
+
+
 drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
 -----------------------------------------------------------------
 
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
index 13d0d04c4457..853081d186d5 100644
--- a/drivers/gpu/drm/bochs/bochs_kms.c
+++ b/drivers/gpu/drm/bochs/bochs_kms.c
@@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
 	bochs->dev->mode_config.preferred_depth = 24;
 	bochs->dev->mode_config.prefer_shadow = 0;
 	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
-	bochs->dev->mode_config.fbdev_use_iomem = true;
 	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
 
 	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 6212cd7cde1d..462b0c130ebb 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
 }
 
 static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
-					  struct drm_clip_rect *clip)
+					  struct drm_clip_rect *clip,
+					  struct dma_buf_map *dst)
 {
 	struct drm_framebuffer *fb = fb_helper->fb;
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
-	for (y = clip->y1; y < clip->y2; y++) {
-		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
-			memcpy(dst, src, len);
-		else
-			memcpy_toio((void __iomem *)dst, src, len);
+	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
 
+	for (y = clip->y1; y < clip->y2; y++) {
+		dma_buf_map_memcpy_to(dst, src, len);
+		dma_buf_map_incr(dst, fb->pitches[0]);
 		src += fb->pitches[0];
-		dst += fb->pitches[0];
 	}
 }
 
@@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 			ret = drm_client_buffer_vmap(helper->buffer, &map);
 			if (ret)
 				return;
-			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
+			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
 		}
+
 		if (helper->fb->funcs->dirty)
 			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
 						 &clip_copy, 1);
@@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
 }
 EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
 
+static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
+				      size_t count, loff_t *ppos)
+{
+	unsigned long p = *ppos;
+	u8 *dst;
+	u8 __iomem *src;
+	int c, err = 0;
+	unsigned long total_size;
+	unsigned long alloc_size;
+	ssize_t ret = 0;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	total_size = info->screen_size;
+
+	if (total_size == 0)
+		total_size = info->fix.smem_len;
+
+	if (p >= total_size)
+		return 0;
+
+	if (count >= total_size)
+		count = total_size;
+
+	if (count + p > total_size)
+		count = total_size - p;
+
+	src = (u8 __iomem *)(info->screen_base + p);
+
+	alloc_size = min(count, PAGE_SIZE);
+
+	dst = kmalloc(alloc_size, GFP_KERNEL);
+	if (!dst)
+		return -ENOMEM;
+
+	while (count) {
+		c = min(count, alloc_size);
+
+		memcpy_fromio(dst, src, c);
+		if (copy_to_user(buf, dst, c)) {
+			err = -EFAULT;
+			break;
+		}
+
+		src += c;
+		*ppos += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(dst);
+
+	if (err)
+		return err;
+
+	return ret;
+}
+
+static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
+				       size_t count, loff_t *ppos)
+{
+	unsigned long p = *ppos;
+	u8 *src;
+	u8 __iomem *dst;
+	int c, err = 0;
+	unsigned long total_size;
+	unsigned long alloc_size;
+	ssize_t ret = 0;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	total_size = info->screen_size;
+
+	if (total_size == 0)
+		total_size = info->fix.smem_len;
+
+	if (p > total_size)
+		return -EFBIG;
+
+	if (count > total_size) {
+		err = -EFBIG;
+		count = total_size;
+	}
+
+	if (count + p > total_size) {
+		/*
+		 * The framebuffer is too small. We do the
+		 * copy operation, but return an error code
+		 * afterwards. Taken from fbdev.
+		 */
+		if (!err)
+			err = -ENOSPC;
+		count = total_size - p;
+	}
+
+	alloc_size = min(count, PAGE_SIZE);
+
+	src = kmalloc(alloc_size, GFP_KERNEL);
+	if (!src)
+		return -ENOMEM;
+
+	dst = (u8 __iomem *)(info->screen_base + p);
+
+	while (count) {
+		c = min(count, alloc_size);
+
+		if (copy_from_user(src, buf, c)) {
+			err = -EFAULT;
+			break;
+		}
+		memcpy_toio(dst, src, c);
+
+		dst += c;
+		*ppos += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(src);
+
+	if (err)
+		return err;
+
+	return ret;
+}
+
 /**
  * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
  * @info: fbdev registered by the helper
@@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 		return -ENODEV;
 }
 
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+				 size_t count, loff_t *ppos)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		return drm_fb_helper_sys_read(info, buf, count, ppos);
+	else
+		return drm_fb_helper_cfb_read(info, buf, count, ppos);
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+				  size_t count, loff_t *ppos)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		return drm_fb_helper_sys_write(info, buf, count, ppos);
+	else
+		return drm_fb_helper_cfb_write(info, buf, count, ppos);
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+				  const struct fb_fillrect *rect)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		drm_fb_helper_sys_fillrect(info, rect);
+	else
+		drm_fb_helper_cfb_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+				  const struct fb_copyarea *area)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		drm_fb_helper_sys_copyarea(info, area);
+	else
+		drm_fb_helper_cfb_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+				   const struct fb_image *image)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
+		drm_fb_helper_sys_imageblit(info, image);
+	else
+		drm_fb_helper_cfb_imageblit(info, image);
+}
+
 static const struct fb_ops drm_fbdev_fb_ops = {
 	.owner		= THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
@@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
 	.fb_release	= drm_fbdev_fb_release,
 	.fb_destroy	= drm_fbdev_fb_destroy,
 	.fb_mmap	= drm_fbdev_fb_mmap,
-	.fb_read	= drm_fb_helper_sys_read,
-	.fb_write	= drm_fb_helper_sys_write,
-	.fb_fillrect	= drm_fb_helper_sys_fillrect,
-	.fb_copyarea	= drm_fb_helper_sys_copyarea,
-	.fb_imageblit	= drm_fb_helper_sys_imageblit,
+	.fb_read	= drm_fbdev_fb_read,
+	.fb_write	= drm_fbdev_fb_write,
+	.fb_fillrect	= drm_fbdev_fb_fillrect,
+	.fb_copyarea	= drm_fbdev_fb_copyarea,
+	.fb_imageblit	= drm_fbdev_fb_imageblit,
 };
 
 static struct fb_deferred_io drm_fbdev_defio = {
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 5ffbb4ed5b35..ab424ddd7665 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -877,18 +877,6 @@ struct drm_mode_config {
 	 */
 	bool prefer_shadow_fbdev;
 
-	/**
-	 * @fbdev_use_iomem:
-	 *
-	 * Set to true if framebuffer reside in iomem.
-	 * When set to true memcpy_toio() is used when copying the framebuffer in
-	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
-	 *
-	 * FIXME: This should be replaced with a per-mapping is_iomem
-	 * flag (like ttm does), and then used everywhere in fbdev code.
-	 */
-	bool fbdev_use_iomem;
-
 	/**
 	 * @quirk_addfb_prefer_xbgr_30bpp:
 	 *
-- 
2.28.0
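The reworked drm_fb_helper_dirty_blit_real() in the patch above walks the dirty clip rectangle row by row, advancing a struct dma_buf_map alongside the source pointer. That loop can be modelled in plain userspace C; here a struct with an ordinary pointer stands in for dma_buf_map, and memcpy() stands in for the memcpy()/memcpy_toio() dispatch the real helpers perform, so all `_model` names are illustrative only:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for struct dma_buf_map: a plain pointer plus the is_iomem tag
 * that the real helpers use to choose between memcpy() and memcpy_toio(). */
struct dma_buf_map_model {
	void *vaddr;
	bool is_iomem;
};

static void map_memcpy_to_model(struct dma_buf_map_model *dst,
				const void *src, size_t len)
{
	memcpy(dst->vaddr, src, len); /* memcpy_toio() if is_iomem, in the kernel */
}

static void map_incr_model(struct dma_buf_map_model *map, size_t incr)
{
	map->vaddr = (char *)map->vaddr + incr;
}

/* Copy the rows of a clip rectangle, mirroring the patched blit helper:
 * seek both source and destination to the clip offset, then advance each
 * by one pitch per row. */
static void blit_clip_model(struct dma_buf_map_model dst, const char *src,
			    unsigned int x1, unsigned int y1,
			    unsigned int x2, unsigned int y2,
			    unsigned int cpp, unsigned int pitch)
{
	size_t offset = (size_t)y1 * pitch + (size_t)x1 * cpp;
	size_t len = (size_t)(x2 - x1) * cpp;
	unsigned int y;

	src += offset;
	map_incr_model(&dst, offset); /* go to first pixel within clip rect */

	for (y = y1; y < y2; y++) {
		map_memcpy_to_model(&dst, src, len);
		map_incr_model(&dst, pitch);
		src += pitch;
	}
}
```

Because the mapping is passed by value, the caller's copy keeps pointing at the framebuffer origin, which matches the kernel code re-establishing the vmap for each dirty-worker run.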

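The new drm_fb_helper_cfb_read()/_write() helpers in the patch above clamp the request to the framebuffer size and then stage data through a bounce buffer of at most one page, since __iomem memory cannot be handed to copy_to_user() directly. Below is a userspace sketch of the read path's clamping and chunking, with malloc()/memcpy() standing in for kmalloc(), memcpy_fromio() and copy_to_user(); fb_read_model and PAGE_SIZE_MODEL are hypothetical names:

```c
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

#define PAGE_SIZE_MODEL 4096UL

/* Model of the cfb read path: clamp [*ppos, *ppos + count) to the
 * framebuffer, then copy through a page-sized bounce buffer.
 * Returns bytes copied, 0 at EOF, or -1 on allocation failure. */
static ssize_t fb_read_model(const char *screen, unsigned long total_size,
			     char *ubuf, size_t count, unsigned long *ppos)
{
	unsigned long p = *ppos;
	size_t alloc_size, c;
	ssize_t ret = 0;
	char *bounce;

	if (p >= total_size)		/* reading past the end: EOF */
		return 0;
	if (count >= total_size)	/* clamp to the framebuffer size */
		count = total_size;
	if (count + p > total_size)
		count = total_size - p;

	alloc_size = count < PAGE_SIZE_MODEL ? count : PAGE_SIZE_MODEL;
	bounce = malloc(alloc_size);	/* kmalloc(alloc_size, GFP_KERNEL) */
	if (!bounce)
		return -1;

	while (count) {
		c = count < alloc_size ? count : alloc_size;
		memcpy(bounce, screen + *ppos, c);	/* memcpy_fromio() */
		memcpy(ubuf + ret, bounce, c);		/* copy_to_user() */
		*ppos += c;
		ret += c;
		count -= c;
	}

	free(bounce);
	return ret;
}
```

The write path is symmetric: copy_from_user() into the bounce buffer, then memcpy_toio() into the framebuffer, chunk by chunk.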


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7304.19140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Wn-0001Bd-Qs; Thu, 15 Oct 2020 12:38:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7304.19140; Thu, 15 Oct 2020 12:38:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Wn-0001BN-Lp; Thu, 15 Oct 2020 12:38:41 +0000
Received: by outflank-mailman (input) for mailman id 7304;
 Thu, 15 Oct 2020 12:38:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT2Wm-0000WT-04
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f37b26f0-f7bc-427d-a42a-d1eec1b43dae;
 Thu, 15 Oct 2020 12:38:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D7BADB1AA;
 Thu, 15 Oct 2020 12:38:16 +0000 (UTC)
X-Inumbo-ID: f37b26f0-f7bc-427d-a42a-d1eec1b43dae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Thu, 15 Oct 2020 14:38:05 +0200
Message-Id: <20201015123806.32416-10-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de>
References: <20201015123806.32416-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To do framebuffer updates, one needs memcpy from system memory and a
pointer-increment function. Add both interfaces with documentation.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
---
 include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
 1 file changed, 62 insertions(+), 10 deletions(-)

diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 2e8bbecb5091..6ca0f304dda2 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -32,6 +32,14 @@
  * accessing the buffer. Use the returned instance and the helper functions
  * to access the buffer's memory in the correct way.
  *
+ * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
+ * actually independent from the dma-buf infrastructure. When sharing buffers
+ * among devices, drivers have to know the location of the memory to access
+ * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
+ * solves this problem for dma-buf and its users. If other drivers or
+ * sub-systems require similar functionality, the type could be generalized
+ * and moved to a more prominent header file.
+ *
  * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
  * considered bad style. Rather then accessing its fields directly, use one
  * of the provided helper functions, or implement your own. For example,
@@ -51,6 +59,14 @@
  *
  *	dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
  *
+ * Instances of struct dma_buf_map do not have to be cleaned up, but
+ * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+ * always refer to system memory.
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_clear(&map);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -73,17 +89,19 @@
  *	if (dma_buf_map_is_equal(&sys_map, &io_map))
  *		// always false
  *
- * Instances of struct dma_buf_map do not have to be cleaned up, but
- * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
- * always refer to system memory.
+ * A set-up instance of struct dma_buf_map can be used to access or manipulate
+ * the buffer memory. Depending on the location of the memory, the provided
+ * helpers will pick the correct operations. Data can be copied into the memory
+ * with dma_buf_map_memcpy_to(). The address can be manipulated with
+ * dma_buf_map_incr().
  *
- * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
- * actually independent from the dma-buf infrastructure. When sharing buffers
- * among devices, drivers have to know the location of the memory to access
- * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
- * solves this problem for dma-buf and its users. If other drivers or
- * sub-systems require similar functionality, the type could be generalized
- * and moved to a more prominent header file.
+ * .. code-block:: c
+ *
+ *	const void *src = ...; // source buffer
+ *	size_t len = ...; // length of src
+ *
+ *	dma_buf_map_memcpy_to(&map, src, len);
+ *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
  */
 
 /**
@@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
 	}
 }
 
+/**
+ * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
+ * @dst:	The dma-buf mapping structure
+ * @src:	The source buffer
+ * @len:	The number of bytes in src
+ *
+ * Copies data into a dma-buf mapping. The source buffer is in system
+ * memory. Depending on the buffer's location, the helper picks the correct
+ * method of accessing the memory.
+ */
+static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
+{
+	if (dst->is_iomem)
+		memcpy_toio(dst->vaddr_iomem, src, len);
+	else
+		memcpy(dst->vaddr, src, len);
+}
+
+/**
+ * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
+ * @map:	The dma-buf mapping structure
+ * @incr:	The number of bytes to increment
+ *
+ * Increments the address stored in a dma-buf mapping. Depending on the
+ * buffer's location, the correct address field is updated.
+ */
+static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
+{
+	if (map->is_iomem)
+		map->vaddr_iomem += incr;
+	else
+		map->vaddr += incr;
+}
+
 #endif /* __DMA_BUF_MAP_H__ */
-- 
2.28.0
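dma_buf_map_memcpy_to() and dma_buf_map_incr() from the patch above both dispatch on the is_iomem tag of the union. That tagged-union pattern can be sketched in plain C; since __iomem and memcpy_toio() have no userspace equivalent, both branches here use ordinary pointers, so this is a model of the idea rather than the kernel API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Userspace sketch of struct dma_buf_map: one address, tagged by the
 * kind of memory it points to. In the kernel, vaddr_iomem would be
 * void __iomem * and the iomem branches would use memcpy_toio(). */
struct dma_buf_map_sketch {
	union {
		void *vaddr;		/* system memory */
		void *vaddr_iomem;	/* I/O memory (modelled as plain memory) */
	};
	bool is_iomem;
};

static void map_set_vaddr_sketch(struct dma_buf_map_sketch *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* Copy from system memory into the mapping, picking the access method
 * from the tag, as dma_buf_map_memcpy_to() does. */
static void map_memcpy_to_sketch(struct dma_buf_map_sketch *dst,
				 const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy(dst->vaddr_iomem, src, len);	/* memcpy_toio() */
	else
		memcpy(dst->vaddr, src, len);
}

/* Advance whichever pointer is active, as dma_buf_map_incr() does. */
static void map_incr_sketch(struct dma_buf_map_sketch *map, size_t incr)
{
	if (map->is_iomem)
		map->vaddr_iomem = (char *)map->vaddr_iomem + incr;
	else
		map->vaddr = (char *)map->vaddr + incr;
}
```

Callers never touch the union directly; they set the mapping up once and then go through the helpers, which is the open-coding rule the header's documentation states.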



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:38:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:38:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7308.19152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Ws-0001KD-Kj; Thu, 15 Oct 2020 12:38:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7308.19152; Thu, 15 Oct 2020 12:38:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2Ws-0001K4-GJ; Thu, 15 Oct 2020 12:38:46 +0000
Received: by outflank-mailman (input) for mailman id 7308;
 Thu, 15 Oct 2020 12:38:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TbfH=DW=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kT2Wr-0000WT-0H
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:38:45 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50541fb8-a46e-47b7-9882-058dab80a902;
 Thu, 15 Oct 2020 12:38:26 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09FCXr68179009;
 Thu, 15 Oct 2020 12:38:02 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 343vaejrtx-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 15 Oct 2020 12:38:01 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09FCZKWO070415;
 Thu, 15 Oct 2020 12:38:01 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by aserp3020.oracle.com with ESMTP id 343pv1s863-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 15 Oct 2020 12:38:01 +0000
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 09FCbqom028591;
 Thu, 15 Oct 2020 12:37:52 GMT
Received: from [10.39.226.219] (/10.39.226.219)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 15 Oct 2020 05:37:51 -0700
X-Inumbo-ID: 50541fb8-a46e-47b7-9882-058dab80a902
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=49bcEdM7cuP+mMMv2qlqFDAObr/+ij1Sw78R9zLmZVw=;
 b=TlYJFy8XIrD4eomiT9aosxaFWdViJgtVQ2M/+PROA7GYl7iWMtf8ToxHIkW675rx1WoV
 Ei49JFN71Q9rR2D/11YeRR84AmwJjUX6KKmuLpahodh/8fHbL1cdncjiexv2IAvZWl+J
 P7wXBsXUfwP9D93D2anNQLpD5dR8oK4DzuS27U3uHZ5rGwGvexz/c1/0A8fVDND2W7zx
 XWxy6HYlEe9+haW1UJJjnJuA3zvClnSGZsN6XMxiCw78yvXD2vcnPj6E0S6YPZeGxEhM
 GworrstDO7gdM0qTTL3mQ0VuKdAMqDF3MNhkuoHgxC2WFbeCEYyv68zwAUC7gHQTA2b0 MA==
Subject: Re: [PATCH 2/2] xen: Kconfig: nest Xen guest options
To: Jason Andryuk <jandryuk@gmail.com>, Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
        Borislav Petkov <bp@alien8.de>, x86@kernel.org,
        "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
References: <20201014175342.152712-1-jandryuk@gmail.com>
 <20201014175342.152712-3-jandryuk@gmail.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <6cd9363c-ac0c-ea68-c8e7-9fd3cd30a89b@oracle.com>
Date: Thu, 15 Oct 2020 08:37:50 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201014175342.152712-3-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9774 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 mlxscore=0 spamscore=0
 adultscore=0 suspectscore=0 phishscore=0 bulkscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010150090
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9774 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 clxscore=1011
 impostorscore=0 phishscore=0 malwarescore=0 bulkscore=0 priorityscore=1501
 mlxscore=0 suspectscore=0 spamscore=0 adultscore=0 lowpriorityscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010150090


On 10/14/20 1:53 PM, Jason Andryuk wrote:
> +config XEN_512GB
> +	bool "Limit Xen pv-domain memory to 512GB"
> +	depends on XEN_PV && X86_64


Why is X86_64 needed here?


-boris





From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:53:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7335.19165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2lI-0003YD-4z; Thu, 15 Oct 2020 12:53:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7335.19165; Thu, 15 Oct 2020 12:53:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2lI-0003Y6-1I; Thu, 15 Oct 2020 12:53:40 +0000
Received: by outflank-mailman (input) for mailman id 7335;
 Thu, 15 Oct 2020 12:53:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XL1Z=DW=antarean.org=joost@srs-us1.protection.inumbo.net>)
 id 1kT2lG-0003Y1-51
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:53:38 +0000
Received: from gw2.antarean.org (unknown [141.105.125.208])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id fc9733cf-f725-4f1e-81ee-b3ccdbef88c1;
 Thu, 15 Oct 2020 12:53:36 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by gw2.antarean.org (Postfix) with ESMTP id 4CBq1h1nBxz8tkM
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 14:52:24 +0200 (CEST)
Received: from gw2.antarean.org ([127.0.0.1])
 by localhost (gw2.antarean.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id eeUkXdmHvTkw for <xen-devel@lists.xenproject.org>;
 Thu, 15 Oct 2020 14:52:23 +0200 (CEST)
Received: from mailstore1.adm.antarean.org (localhost [127.0.0.1])
 by gw2.antarean.org (Postfix) with ESMTP id 4CBq1g6KLRz8tk5
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 14:52:23 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mailstore1.adm.antarean.org (Postfix) with ESMTP id 4CBq326p4kz15
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 14:53:34 +0200 (CEST)
Received: from mailstore1.adm.antarean.org ([127.0.0.1])
 by localhost (mailstore1.adm.antarean.org [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id NnSy03VyNqjT for <xen-devel@lists.xenproject.org>;
 Thu, 15 Oct 2020 14:53:34 +0200 (CEST)
Received: from eve.localnet (eve.adm.antarean.org [10.55.16.44])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mailstore1.adm.antarean.org (Postfix) with ESMTPSA id 4CBq324zyLz13
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 14:53:34 +0200 (CEST)
X-Inumbo-ID: fc9733cf-f725-4f1e-81ee-b3ccdbef88c1
X-Virus-Scanned: amavisd-new at antarean.org
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=antarean.org;
	s=default; t=1602766414;
	bh=+hh6XDUEdDVtijGf5bFRgqI40uvDb4OzhbSJygUVZgU=;
	h=From:To:Subject:Date:In-Reply-To:References;
	b=Hl6aQBYzwWuJssLJNCuEpoFdPC+lIVXfH5Z2mLSazkWB5vuuR33rrh/johz1loPSM
	 0MB1PaZ+MFgi4oooG1e/udNk/G2erDNSQmalLXi9NfNidRBTitsVZp/bp8lD4Db07h
	 ci6hoJ15LFXqzvBKHV6FZNqrsJ4Z+AwniPdQw/9s=
From: "J. Roeleveld" <joost@antarean.org>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
Date: Thu, 15 Oct 2020 14:53:34 +0200
Message-ID: <11618501.OP9n9qO8XQ@eve>
Organization: Antarean
In-Reply-To: <20201015120046.GE19243@Air-de-Roger>
References: <15146361.Z0tdQxPx3m@eve> <1855015.FeAb16qnYt@eve> <20201015120046.GE19243@Air-de-Roger>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset="iso-8859-1"

On Thursday, October 15, 2020 2:00:46 PM CEST Roger Pau Monné wrote:
> Please don't drop xen-devel mailing list when replying.

My apologies, most mailing lists I am active on have a working "reply" button.
Here I need to use "reply-all".


> On Thu, Oct 15, 2020 at 01:28:49PM +0200, J. Roeleveld wrote:
> > On Thursday, October 15, 2020 12:57:35 PM CEST you wrote:
> > > On Tue, Oct 13, 2020 at 07:26:47AM +0200, J. Roeleveld wrote:
> > > > Hi All,
> > > >
> > > > I am seeing the following message in the "dmesg" output of a driver
> > > > domain.
> > > >
> > > > [Thu Oct  8 20:57:04 2020] xen-blkback: Scheduled work from previous
> > > > purge
> > > > is still busy, cannot purge list
> > > > [Thu Oct  8 20:57:11 2020] xen-blkback: Scheduled work from previous
> > > > purge
> > > > is still busy, cannot purge list
> > > > [Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous
> > > > purge
> > > > is still busy, cannot purge list
> > > > [Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous
> > > > purge
> > > > is still busy, cannot purge list
> > > >
> > > >
> > > > Is this something to worry about? Or can I safely ignore this?
> > >
> > > What version of the Linux kernel are you running in that driver
> > > domain?
> >
> > Host:
> > Kernel: 5.4.66
> > Xen: 4.12.3
> >
> > Driver domain:
> > Kernel: 5.4.66
> > Xen: 4.12.3
> >
> > > Is the domain very busy? That might explain the delay in purging
> > > grants.
> >
> > No, it's generally asleep, been going through the munin-records and can't
> > find any spikes the correlate with the messages either.
> >
> > > Also is this an sporadic message, or it's constantly repeating?
> >
> > It's sporadic, but occasionally, I get it several times in a row.
> >
> > My understanding of the code where this message comes from is far from
> > sufficient. Which means I have no clue what it is actually trying to do.
>
> There's a recurrent worker thread in blkback that will go and purge
> unused cache entries after they have expired. This is done to prevent
> the cache from growing unbounded.
>
> AFAICT this just means the worker is likely running faster than what
> you can proceed, and hence you get another worker run before the old
> entries have been removed. Should be safe to ignore, but makes me
> wonder if I should add a parameter to tune the periodicity of the
> purge work.

In other words, when it "fails" in this manner, the queue will simply be left
and processed the next time?

How often does this currently run?

A parameter to tune the periodicity might be an option, for now I feel
confident I can safely ignore these messages.

Thanks,

Joost




From xen-devel-bounces@lists.xenproject.org Thu Oct 15 12:59:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 12:59:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7344.19180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2qa-0003lt-PV; Thu, 15 Oct 2020 12:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7344.19180; Thu, 15 Oct 2020 12:59:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2qa-0003lm-MX; Thu, 15 Oct 2020 12:59:08 +0000
Received: by outflank-mailman (input) for mailman id 7344;
 Thu, 15 Oct 2020 12:59:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kT2qa-0003kg-3K
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 12:59:08 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fb63c6c-19cc-4379-9e04-91d50cc4e98a;
 Thu, 15 Oct 2020 12:59:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT2qR-0004JF-Lf; Thu, 15 Oct 2020 12:58:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT2qR-00010g-ED; Thu, 15 Oct 2020 12:58:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kT2qR-0006jk-Dh; Thu, 15 Oct 2020 12:58:59 +0000
X-Inumbo-ID: 1fb63c6c-19cc-4379-9e04-91d50cc4e98a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Fo091tH+ZW2/9lfeeU6AMTDAQyRjqXoOWZLW8yRYRAc=; b=AwpSXlZF3LkgyAknXrC8Q4G8Vx
	LnInAEqvDIiczYKpIVMZw8vugewkgc6EGtTFGL+5cIgMT5zwzjC6BHgD4ghKBqpVOJmcPuUiZpzLt
	VfkpbtMZ3qFVZk8xdXAJrKYDepzgtksVWuCIhzkOGlrYc++nLZ1O+/yqlTUEv8rOEHjA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155831-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155831: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=cb6c2fa4ed0af096a7779827803c141306a8105a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 12:58:59 +0000

flight 155831 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155831/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              cb6c2fa4ed0af096a7779827803c141306a8105a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   97 days
Failing since        151818  2020-07-11 04:18:52 Z   96 days   91 attempts
Testing same since   155831  2020-10-15 04:19:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 21162 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 13:05:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 13:05:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7347.19192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2wm-0004gF-Gi; Thu, 15 Oct 2020 13:05:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7347.19192; Thu, 15 Oct 2020 13:05:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT2wm-0004g8-Dd; Thu, 15 Oct 2020 13:05:32 +0000
Received: by outflank-mailman (input) for mailman id 7347;
 Thu, 15 Oct 2020 13:05:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT2wl-0004g3-0Z
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:05:31 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d378b68-77aa-4a92-9132-2d1aa62a97d0;
 Thu, 15 Oct 2020 13:05:28 +0000 (UTC)
X-Inumbo-ID: 7d378b68-77aa-4a92-9132-2d1aa62a97d0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602767128;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=tHYOldh+zzg4BAoHPlXXGno3aTMtmkswwhlSrgFQYqg=;
  b=anHAGe6dMzgtEim11a7DOt9xPdD8iRY5rMj9A6H4Vh//CpPNe/wwtY+q
   t4USCESU6Kq7ZOFLz6/KwmzSDaAvm3IU+6tYYFXYqeG9UUXHaSjCKNRz2
   i0rwDOeltVBURKuGjJwKUbsMlTa2e8P07vhxI9n+9Og72X5DRXCaHZBhj
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: C4m0tNJT9YMeau3EFBnetO+MT6a3OwtKzEXVG3Tyiw8hblwqZmgd6ZmPFsn+czUdALU0uBZuaC
 1/jeXe6kOr4NNx1jfddA/NfbnJv3kP5MnUwIU89z/GdTigcs0SkMafxYJoCP/yYbcYtstBdnS2
 +P0TR41c/Gwu3QXEKUkwNdvD95GajXddUd6fYmTvBe6IMcBUhAHesdReeIm8zm3zwPh7MFiQLx
 CvWqU6XP+hCZmHzYpgRTDOCg8gh4uiCnTXWGEavfeDmtAOR7QGqM4urGVrWcqGFqTloRWFQ9WY
 aKk=
X-SBRS: 2.5
X-MesageID: 30116032
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="30116032"
Date: Thu, 15 Oct 2020 15:05:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "J. Roeleveld" <joost@antarean.org>
CC: xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: xen-blkback: Scheduled work from previous purge is still busy,
 cannot purge list
Message-ID: <20201015130520.GB68032@Air-de-Roger>
References: <15146361.Z0tdQxPx3m@eve>
 <1855015.FeAb16qnYt@eve>
 <20201015120046.GE19243@Air-de-Roger>
 <11618501.OP9n9qO8XQ@eve>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <11618501.OP9n9qO8XQ@eve>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 15, 2020 at 02:53:34PM +0200, J. Roeleveld wrote:
> On Thursday, October 15, 2020 2:00:46 PM CEST Roger Pau Monné wrote:
> > Please don't drop xen-devel mailing list when replying.
> 
> My apologies, most mailing lists I am active on have a working "reply" button. 
> Here I need to use "reply-all".
> 
> 
> > On Thu, Oct 15, 2020 at 01:28:49PM +0200, J. Roeleveld wrote:
> > > On Thursday, October 15, 2020 12:57:35 PM CEST you wrote:
> > > > On Tue, Oct 13, 2020 at 07:26:47AM +0200, J. Roeleveld wrote:
> > > > > Hi All,
> > > > > 
> > > > > I am seeing the following message in the "dmesg" output of a driver
> > > > > domain.
> > > > > 
> > > > > [Thu Oct  8 20:57:04 2020] xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
> > > > > [Thu Oct  8 20:57:11 2020] xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
> > > > > [Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
> > > > > [Thu Oct  8 20:57:44 2020] xen-blkback: Scheduled work from previous purge is still busy, cannot purge list
> > > > > 
> > > > > 
> > > > > Is this something to worry about? Or can I safely ignore this?
> > > > 
> > > > What version of the Linux kernel are you running in that driver
> > > > domain?
> > > 
> > > Host:
> > > Kernel: 5.4.66
> > > Xen: 4.12.3
> > > 
> > > Driver domain:
> > > Kernel: 5.4.66
> > > Xen: 4.12.3
> > > 
> > > > Is the domain very busy? That might explain the delay in purging
> > > > grants.
> > > 
> > > No, it's generally asleep. I've been going through the munin records and
> > > can't find any spikes that correlate with the messages either.
> > > 
> > > > Also, is this a sporadic message, or is it constantly repeating?
> > > 
> > > It's sporadic, but occasionally I get it several times in a row.
> > > 
> > > My understanding of the code where this message comes from is far from
> > > sufficient, which means I have no clue what it is actually trying to do.
> > 
> > There's a recurrent worker thread in blkback that will go and purge
> > unused cache entries after they have expired. This is done to prevent
> > the cache from growing unbounded.
> > 
> > AFAICT this just means the worker is likely running faster than you can
> > process the entries, and hence you get another worker run before the old
> > ones have been removed. Should be safe to ignore, but makes me
> > wonder if I should add a parameter to tune the periodicity of the
> > purge work.
> 
> In other words, when it "fails" in this manner, the queue will simply be left 
> and processed the next time?

Yes, exactly.

> How often does this currently run?

The purge worker will run every 100ms, and the queued work should have
completed before the next run.

> A parameter to tune the periodicity might be an option; for now, I feel
> confident I can safely ignore these messages.

Sure, I'm testing a patch series to that effect now.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 13:10:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 13:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7360.19203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT31x-0005Zx-94; Thu, 15 Oct 2020 13:10:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7360.19203; Thu, 15 Oct 2020 13:10:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT31x-0005Zq-63; Thu, 15 Oct 2020 13:10:53 +0000
Received: by outflank-mailman (input) for mailman id 7360;
 Thu, 15 Oct 2020 13:10:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LoCs=DW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kT31v-0005Zl-FB
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:10:51 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5abb5e33-7b0f-437a-a9aa-8e1ca1174325;
 Thu, 15 Oct 2020 13:10:50 +0000 (UTC)
X-Inumbo-ID: 5abb5e33-7b0f-437a-a9aa-8e1ca1174325
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602767450;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=f6Kamy1C8O875JTu/vLF90Ddc8FWBrVKHooa1uTjaeQ=;
  b=c2vt4C6rqNUF9VdfyCuMKsk1juKOixPSz7EXwnsSpnrpuSppcgHO2Guf
   IyHizkMxUZHf+wKiv0b202dp3KcKGCBdfepdGOhfYhP1SK8pDO2CLd+cP
   gq580Wo6KOPHskrPXZc2XVdCcJZZXPz8HnDFD2SomiekpYUwz19qqTRLw
   Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: TsNzbLxMsk384nJy5Ma5vOQoKEjgU+ipNL1nJSmtDZ6qs87H2v+vbvQIN4MbbD72V2Ey2BJWzK
 DJ2By9UKxVmig34H8L3MwHOYlSQxz2DBF8RnLL026OOdZRjfNPY566lXkCEMWpefwuzoeGsvFt
 SnTPGz44+AtTSC4bBi08uZqWHQKJcn+DFJ+74bUEgEdXNlB7yWOUEr2CPons5/qNhxHYKojNNA
 H5bJ7MetpyQXJ0vK5ZCl6jO/6csbfhHTojPmAvbq5tYkHveVjp4yyTrmmaBCv2pnjkGKD8M7ZL
 4Bg=
X-SBRS: 2.5
X-MesageID: 29078535
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29078535"
Subject: Re: [PATCH 2/2] xen: Kconfig: nest Xen guest options
To: <boris.ostrovsky@oracle.com>, Jason Andryuk <jandryuk@gmail.com>, "Juergen
 Gross" <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>
CC: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, <x86@kernel.org>, "H. Peter Anvin"
	<hpa@zytor.com>, <xen-devel@lists.xenproject.org>,
	<linux-kernel@vger.kernel.org>
References: <20201014175342.152712-1-jandryuk@gmail.com>
 <20201014175342.152712-3-jandryuk@gmail.com>
 <6cd9363c-ac0c-ea68-c8e7-9fd3cd30a89b@oracle.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4e31301b-0e57-ac89-cd71-6ad5e1a66628@citrix.com>
Date: Thu, 15 Oct 2020 14:10:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <6cd9363c-ac0c-ea68-c8e7-9fd3cd30a89b@oracle.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 15/10/2020 13:37, boris.ostrovsky@oracle.com wrote:
> On 10/14/20 1:53 PM, Jason Andryuk wrote:
>> +config XEN_512GB
>> +	bool "Limit Xen pv-domain memory to 512GB"
>> +	depends on XEN_PV && X86_64
>
> Why is X86_64 needed here?

512G support was implemented using a direct-mapped P2M, and is rather
beyond the virtual address capabilities of 32bit.
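
For reference, the dependency being discussed looks roughly like this in
Kconfig terms. This is an illustrative fragment only, not the exact hunks
of the patch; the real symbols live in the kernel's Xen Kconfig files:

```kconfig
config XEN_PV
	bool "Xen PV guest support"
	depends on XEN

config XEN_512GB
	bool "Limit Xen pv-domain memory to 512GB"
	# PV domains above 512GB need the direct-mapped P2M, which only
	# fits in a 64bit virtual address space, hence X86_64 here.
	depends on XEN_PV && X86_64
	default y
```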

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 13:17:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 13:17:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7363.19215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT38H-0005oR-3B; Thu, 15 Oct 2020 13:17:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7363.19215; Thu, 15 Oct 2020 13:17:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT38H-0005oK-0D; Thu, 15 Oct 2020 13:17:25 +0000
Received: by outflank-mailman (input) for mailman id 7363;
 Thu, 15 Oct 2020 13:17:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sl3P=DW=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kT38F-0005oF-4C
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:17:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e806a902-b08e-40e8-b24b-6d9d8d91bf72;
 Thu, 15 Oct 2020 13:17:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kT38D-0004hh-St
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:17:21 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kT38D-0001Q0-Rd
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:17:21 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kT38A-0000P7-K1; Thu, 15 Oct 2020 14:17:18 +0100
X-Inumbo-ID: e806a902-b08e-40e8-b24b-6d9d8d91bf72
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=H1KJpw10QLjHojNNuOFi9pGzZtl2DP3GHXC4cJ7LdlE=; b=sr6xgljpk2fT98oIk6E82bGeXl
	/kAsCZ1uk2oEzWjJ50vQN9pDFl7kL2APdStu73gMK8HfqgHTRywTsVmf6QLFwkevkP+q299JTXnQE
	buZazHyQV9JGx2K7meLp6cJmy++tBZTDuqyMSDxwa7s4ErOcvd1zTELejMvoGSCyS2KU=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24456.19422.318790.279648@mariner.uk.xensource.com>
Date: Thu, 15 Oct 2020 14:17:18 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    Elena Ufimtseva <elena.ufimtseva@oracle.com>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/gdbsx: drop stray recursion into tools/include/
In-Reply-To: <ece6c5c2-43f8-36d2-370c-37d988baeb87@suse.com>
References: <ece6c5c2-43f8-36d2-370c-37d988baeb87@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("[PATCH] tools/gdbsx: drop stray recursion into tools/include/"):
> Doing so isn't appropriate here - this gets done very early in the build
> process. If the directory is meant to be buildable on its own,
> different arrangements would be needed.
> 
> The issue has become more pronounced by 47654a0d7320 ("tools/include:
> fix (drop) dependencies of when to populate xen/"), but was there before
> afaict.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 13:17:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 13:17:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7365.19228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT38b-0005td-CQ; Thu, 15 Oct 2020 13:17:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7365.19228; Thu, 15 Oct 2020 13:17:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT38b-0005tW-8v; Thu, 15 Oct 2020 13:17:45 +0000
Received: by outflank-mailman (input) for mailman id 7365;
 Thu, 15 Oct 2020 13:17:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TbfH=DW=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kT38Z-0005tF-Ot
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:17:43 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8bd4586-03c1-4799-ac11-383f007489a2;
 Thu, 15 Oct 2020 13:17:42 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09FDEf3v097962;
 Thu, 15 Oct 2020 13:17:18 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2130.oracle.com with ESMTP id 346g8ghsak-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 15 Oct 2020 13:17:18 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09FDEdKd025936;
 Thu, 15 Oct 2020 13:17:17 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3030.oracle.com with ESMTP id 343pw0brsa-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 15 Oct 2020 13:17:17 +0000
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 09FDHEhc024698;
 Thu, 15 Oct 2020 13:17:15 GMT
Received: from [10.39.226.219] (/10.39.226.219)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 15 Oct 2020 06:17:14 -0700
X-Inumbo-ID: a8bd4586-03c1-4799-ac11-383f007489a2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=dOOcmqv3ORCOAQisi0SZNKKf+PzDDb3epMdS0uvDdiQ=;
 b=VsOcvhb/uE1AkiuqhIwYpyoaN2Y0JlpwCeSOVkau9AH/4eZFn/KDGFORo2EHy2Ivp+0n
 upg8dsUz8m1RWjevnlwLIFlufFPG82PUcLhzD90Kl3KKa22VgxVcHF/wVJ/WJhTOvSwV
 SDt7rKpbcRth3EkaqnpwJweZcYo3ya1njakx1T8bz5ICbfAlJeFtR4UOJrq9WhKjJh9f
 TN9wOifglj3rLEdZdK1rUeKtTFzErdMlr+8sQDGhvcW189z/Lh+fscQ0NYnwW4Pq/rCx
 dkJfm75foKZUPaY1Wx10v/zryN9SqgraHQqv0Bd9vwDhZPga09HMwhxVtPivsGIzdx73 JQ== 
Subject: Re: [PATCH 2/2] xen: Kconfig: nest Xen guest options
To: Andrew Cooper <andrew.cooper3@citrix.com>,
        Jason Andryuk <jandryuk@gmail.com>, Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
        Borislav Petkov <bp@alien8.de>, x86@kernel.org,
        "H. Peter Anvin" <hpa@zytor.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
References: <20201014175342.152712-1-jandryuk@gmail.com>
 <20201014175342.152712-3-jandryuk@gmail.com>
 <6cd9363c-ac0c-ea68-c8e7-9fd3cd30a89b@oracle.com>
 <4e31301b-0e57-ac89-cd71-6ad5e1a66628@citrix.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <b097aec1-e549-a89a-ce43-e9c0a71179f2@oracle.com>
Date: Thu, 15 Oct 2020 09:17:12 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <4e31301b-0e57-ac89-cd71-6ad5e1a66628@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9774 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 mlxscore=0 adultscore=0
 bulkscore=0 mlxlogscore=999 suspectscore=0 malwarescore=0 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010150094
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9774 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 impostorscore=0 suspectscore=0
 priorityscore=1501 phishscore=0 clxscore=1015 spamscore=0 adultscore=0
 mlxscore=0 malwarescore=0 bulkscore=0 mlxlogscore=999 lowpriorityscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010150094


On 10/15/20 9:10 AM, Andrew Cooper wrote:
> On 15/10/2020 13:37, boris.ostrovsky@oracle.com wrote:
>> On 10/14/20 1:53 PM, Jason Andryuk wrote:
>>> +config XEN_512GB
>>> +	bool "Limit Xen pv-domain memory to 512GB"
>>> +	depends on XEN_PV && X86_64
>> Why is X86_64 needed here?
> 512G support was implemented using a direct-mapped P2M, and is rather
> beyond the virtual address capabilities of 32bit.
>

Yes, my point was that XEN_PV already depends on X86_64.
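
The dependency chain in question can be sketched as a Kconfig fragment; the option names come from the quoted patch, but the XEN_PV entry here is simplified and illustrative, not the real tree's definition:

```kconfig
# Simplified sketch -- not the actual Kconfig entries from the tree.
config XEN_PV
	bool "Xen PV guest support"
	depends on X86_64

config XEN_512GB
	bool "Limit Xen pv-domain memory to 512GB"
	# XEN_PV already pulls in X86_64, so an explicit
	# "&& X86_64" here would be redundant.
	depends on XEN_PV
```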


-boris



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 13:34:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 13:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7375.19241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3Oh-0007fw-SI; Thu, 15 Oct 2020 13:34:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7375.19241; Thu, 15 Oct 2020 13:34:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3Oh-0007fp-PJ; Thu, 15 Oct 2020 13:34:23 +0000
Received: by outflank-mailman (input) for mailman id 7375;
 Thu, 15 Oct 2020 13:34:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT3Og-0007fk-NO
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:34:22 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7aa0491-5d1c-4bd0-8ef4-63e719eba775;
 Thu, 15 Oct 2020 13:34:21 +0000 (UTC)
X-Inumbo-ID: a7aa0491-5d1c-4bd0-8ef4-63e719eba775
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602768861;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=bV4hBDKJJRqjaWnsdEfov1P7Xpzf6TANYNVwNbcq1S8=;
  b=CdnrK1tb3CvcanFPsAeZBZLRTCwbL5Ka/cgZouPVvCnQQDlcZveslawq
   QYOcf2FNt7Er8r0nOdqLe2m2eoID1aWQNbsCDKJi5Mz7CWiV2A8vMq6Yu
   EaFJULzeq9sZAMSB6PA2fHcWEeEVB1db1VxvmGNo0fFJJpmbBDS8XbTVH
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: qvHeRrlmREgvUlot+GowN8hNqsH6Ki4z/O5GqfAA3FgsEH4DMHCqJkHJsLuMloBCQhMNARA42o
 goMyLvR/gVbxjfVlDUUTZXLNj78G4RzChZUfTPMLjncfs7vbOpODJEd43O5v2a6Tt0mNAgy0Rc
 /LIRkB5luERpY1JKi/w9ZnsIQYffH6DBDpGPIloQnAVxTRYSBIjHp42bdWhmQZpkl2vjwkmN8h
 Tbje+A8Oid2OnifEVkJXzHKYfJurdpwhKKXqzfDB6i77rAhqjJJxr5lMQsvo9LIrCNajPfkkUu
 koY=
X-SBRS: 2.5
X-MesageID: 29140587
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29140587"
Date: Thu, 15 Oct 2020 15:34:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH] x86/msr: fix handling of MSR_IA32_PERF_{STATUS/CTL}
Message-ID: <20201015133412.GC68032@Air-de-Roger>
References: <20201006162327.93055-1-roger.pau@citrix.com>
 <a98d6cb1-0b1d-8fb8-8718-c65e02e448bb@citrix.com>
 <20201007164117.GH19254@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201007164117.GH19254@Air-de-Roger>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 07, 2020 at 06:41:17PM +0200, Roger Pau Monné wrote:
> On Wed, Oct 07, 2020 at 01:06:08PM +0100, Andrew Cooper wrote:
> > On 06/10/2020 17:23, Roger Pau Monne wrote:
> > > Currently a PV hardware domain can also be given control over the CPU
> > > frequency, and such a guest is allowed to write to MSR_IA32_PERF_CTL.
> > 
> > This might be how the current logic "works", but it's straight up broken.
> > 
> > PERF_CTL is thread scope, so unless dom0 is identity pinned and has one
> > vcpu for every pcpu, it cannot use the interface correctly.
> 
> Selecting cpufreq=dom0-kernel will force vCPU pinning. However, I
> can't see anything that would force dom0 vCPUs == pCPUs.
> 
> > > However since commit 322ec7c89f6 the default behavior has been changed
> > > to reject accesses to not explicitly handled MSRs, preventing PV
> > > guests that manage CPU frequency from reading
> > > MSR_IA32_PERF_{STATUS/CTL}.
> > >
> > > Additionally some HVM guests (Windows at least) will attempt to read
> > > MSR_IA32_PERF_CTL and will panic if given back a #GP fault:
> > >
> > > vmx.c:3035:d8v0 RDMSR 0x00000199 unimplemented
> > > d8v0 VIRIDIAN CRASH: 3b c0000096 fffff806871c1651 ffffda0253683720 0
> > >
> > > Move the handling of MSR_IA32_PERF_{STATUS/CTL} to the common MSR
> > > handling shared between HVM and PV guests, and add an explicit case
> > > for reads to MSR_IA32_PERF_{STATUS/CTL}.
> > 
> > OTOH, PERF_CTL does have a seemingly architectural "please disable turbo
> > for me" bit, which is supposed to be for calibration loops.  I wonder if
> > anyone uses this, and whether we ought to honour it (probably not).
> 
> If we let guests play with this we would have to save/restore the
> guest value on context switch. Unless there's a strong case for this,
> I would say no.
> 
> > > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> > > index d8ed83f869..41baa3b7a1 100644
> > > --- a/xen/include/xen/sched.h
> > > +++ b/xen/include/xen/sched.h
> > > @@ -1069,6 +1069,12 @@ extern enum cpufreq_controller {
> > >      FREQCTL_none, FREQCTL_dom0_kernel, FREQCTL_xen
> > >  } cpufreq_controller;
> > >  
> > > +static inline bool is_cpufreq_controller(const struct domain *d)
> > > +{
> > > +    return ((cpufreq_controller == FREQCTL_dom0_kernel) &&
> > > +            is_hardware_domain(d));
> > 
> > This won't compile on !CONFIG_X86, due to CONFIG_HAS_CPUFREQ
> 
> It does seem to build on Arm, because this is only used in x86 code:
> 
> https://gitlab.com/xen-project/people/royger/xen/-/jobs/778207412
> 
> The extern declaration of cpufreq_controller is just above, so if you
> tried to use is_cpufreq_controller on Arm you would get a link-time
> error, otherwise it builds fine. The compiler removes the function on
> Arm as it has the inline attribute and it's not used.
> 
> Alternatively I could look into moving cpufreq_controller (and
> is_cpufreq_controller) out of sched.h into somewhere else, I haven't
> looked at why it needs to live there.
> 
> > Honestly - I don't see any point to this code.  Its opt-in via the
> > command line only, and doesn't provide adequate checks for enablement. 
> > (It's not as if we're lacking complexity or moving parts when it comes
> > to power/frequency management).
> 
> Right, I could do a pre-patch to remove this, but I also don't think
> we should block this fix on removing FREQCTL_dom0_kernel, so I would
> rather fix the regression and then remove the feature if we agree it
> can be removed.

Can we get some consensus on what to do next?

I think I've provided replies to all the points above, and I'm not sure
what to do next in order to proceed with this patch.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 13:36:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 13:36:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7378.19254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3Qa-0007ni-8r; Thu, 15 Oct 2020 13:36:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7378.19254; Thu, 15 Oct 2020 13:36:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3Qa-0007nb-5e; Thu, 15 Oct 2020 13:36:20 +0000
Received: by outflank-mailman (input) for mailman id 7378;
 Thu, 15 Oct 2020 13:36:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kT3QY-0007nW-MQ
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:36:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4cf2745e-ef59-4b97-8303-58daa586badd;
 Thu, 15 Oct 2020 13:36:15 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT3QV-00056r-KK; Thu, 15 Oct 2020 13:36:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT3QV-0002si-9M; Thu, 15 Oct 2020 13:36:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kT3QV-0002ZY-8u; Thu, 15 Oct 2020 13:36:15 +0000
X-Inumbo-ID: 4cf2745e-ef59-4b97-8303-58daa586badd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=77SdlHpMdodDk5fPOQUN99/yazcZY9MvnwmejtjT22E=; b=ydKv45Zh3opm8Ipwk9amJ8kS/s
	xmdTXlTp/PBoducB3fPjRobO8nYAlMveGTngDAWtRng6soVwMY3A/0xRGr4jnwZ8EtZvIVRqVOo3/
	Kd9nKnmS/5yJj+/k7KgqUd16KPLKNpKVl5uV1Ngcgzmw4WXCDeyey3wHe48miMhTcm3Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155842-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155842: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a8a85f03c826bea045e345fa405f187049d63584
X-Osstest-Versions-That:
    xen=f776e5fb3ee699745f6442ec8c47d0fa647e0575
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 13:36:15 +0000

flight 155842 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155842/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 155828

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a8a85f03c826bea045e345fa405f187049d63584
baseline version:
 xen                  f776e5fb3ee699745f6442ec8c47d0fa647e0575

Last test of basis   155828  2020-10-15 03:00:27 Z    0 days
Testing same since   155842  2020-10-15 11:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chen Yu <yu.c.chen@intel.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a8a85f03c826bea045e345fa405f187049d63584
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 15 12:30:01 2020 +0200

    EFI: further "need_to_free" adjustments
    
    When processing "chain" directives, the previously loaded config file
    gets freed. This needs to be recorded accordingly such that no error
    path would try to free the same block of memory a 2nd time.
    
    Furthermore, neither .addr nor .size being zero has any meaning towards
    the need to free an allocated chunk anymore. Drop (from read_file()) and
    replace (in Arm's efi_arch_use_config_file(), to sensibly retain the
    comment) respective assignments.
    
    Fixes: 04be2c3a0678 ("efi/boot.c: add file.need_to_free")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 44ac57af81ff8097e228895738b911ca819bda19
Author: Chen Yu <yu.c.chen@intel.com>
Date:   Thu Oct 15 12:29:11 2020 +0200

    x86/mwait-idle: customize IceLake server support
    
    On ICX platform, the C1E auto-promotion is enabled by default.
    As a result, the CPU might fall into C1E more often than on previous
    platforms. So disable C1E auto-promotion and expose C1E as a separate
    idle state.
    
    Besides C1 and C1E, the exit latency of C6 was measured
    by a dedicated tool. However the exit latency (41us) exposed
    by _CST is much smaller than the one we measured (128us). This
    is probably because _CST uses the exit latency when woken
    up from PC0+C6, rather than from PC6+C6, which is what was
    measured. Choose the latter as we need the longest latency in theory.
    
    Signed-off-by: Chen Yu <yu.c.chen@intel.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    [Linux commit a472ad2bcea479ba068880125d7273fc95c14b70]
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 13:57:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 13:57:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7389.19272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3l2-0001Bg-7J; Thu, 15 Oct 2020 13:57:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7389.19272; Thu, 15 Oct 2020 13:57:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3l2-0001BZ-3X; Thu, 15 Oct 2020 13:57:28 +0000
Received: by outflank-mailman (input) for mailman id 7389;
 Thu, 15 Oct 2020 13:57:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H3IV=DW=gmail.com=ckoenig.leichtzumerken@srs-us1.protection.inumbo.net>)
 id 1kT3l1-0001BU-0W
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:57:27 +0000
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 444ae937-1b94-4f9e-99af-0ca49320a25c;
 Thu, 15 Oct 2020 13:57:26 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id lw21so3762688ejb.6
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 06:57:25 -0700 (PDT)
Received: from [192.168.137.56] (tmo-123-114.customers.d1-online.com.
 [80.187.123.114])
 by smtp.gmail.com with ESMTPSA id i8sm1619354ejg.84.2020.10.15.06.57.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 15 Oct 2020 06:57:24 -0700 (PDT)
X-Inumbo-ID: 444ae937-1b94-4f9e-99af-0ca49320a25c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=reply-to:subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=eqy7Jg2fPCw6JKPfho3FMkiXLNsZ2vOMeF3B2A8Mojc=;
        b=rWAhsjFjxNV2tSGmRZgmV07f7lkrurQgCx9bPljwSM5dlr+cFdfNcNP8TqDYr6RpmV
         kJKrteO6A0BWcah5RksBap2thah0f3FCug6g9rOyet2tvKS6wLurIXPy+sSprVRdDpXO
         IOnftXETFpeSKiWgWdWvXjoSMVSdBSpoxhozwHThiDlqjFFzd9Q9u5Oejc6caqTfdyr4
         wAOCXRStBQxEeLEOZzM3R4MwjLF5LE07MQ+q90DEXSJbdui7Ftt93dj8CphKOBSXE/1U
         TA9yRTGiIYdnvyfGyrO3NxXvENUg3bN9B4wUskFws67k2dcY70IkWJVmV0uurT9DwBTX
         hYoQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:reply-to:subject:to:cc:references:from
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-transfer-encoding:content-language;
        bh=eqy7Jg2fPCw6JKPfho3FMkiXLNsZ2vOMeF3B2A8Mojc=;
        b=jgM1GONltGsOosrO+eOeWMTzHs8xGGRjxcuc9y0TmVZEAE+FjxNxNkGFYratgHqeX8
         7iw8hZ2nXHzS+sf6y4dq9OFUOiPWZgJAXUaqIwRgefseMYthSX54w75e+ZXy5ToaMM/m
         FJ9vVvsqIUuaM5wUnZfdjZqOYjOjP2tUV7YWIg4iZpYRRVF6OaOiGyGQTrOSGEtnJMC7
         hT25p6hbZD8vxq1fhpwsSK5ClgEiyerZYiKmiZRh+oErqPNG9WDwhBjXkNkIC6qyEbUb
         xWYcPLZZHou1DjHh7WwkuBQcSSXL9U9jdDFJdHMn9znuThIoczOIFZtftDh6qgwxJwf/
         9FNg==
X-Gm-Message-State: AOAM5307CuF2qI9pU7vlD9khUtkWQtS1YzuNWQ8XS46HczfTmFu2dVhi
	Cq6BYvL8/7JWmwmmNNBPo60=
X-Google-Smtp-Source: ABdhPJzF6r3Ll5i0gnC5JVk814gY1eCy0pl3+e4WV+JpVGbNy1VCb75/aaDOiQbf2OJV/MzOqd48ag==
X-Received: by 2002:a17:906:f4f:: with SMTP id h15mr4501249ejj.17.1602770245105;
        Thu, 15 Oct 2020 06:57:25 -0700 (PDT)
Received: from [192.168.137.56] (tmo-123-114.customers.d1-online.com. [80.187.123.114])
        by smtp.gmail.com with ESMTPSA id i8sm1619354ejg.84.2020.10.15.06.57.19
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Thu, 15 Oct 2020 06:57:24 -0700 (PDT)
Reply-To: christian.koenig@amd.com
Subject: Re: [PATCH v4 01/10] drm/vram-helper: Remove invariant parameters
 from internal kmap function
To: Thomas Zimmermann <tzimmermann@suse.de>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
 linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
 inki.dae@samsung.com, jy0922.shim@samsung.com, sw0312.kim@samsung.com,
 kyungmin.park@samsung.com, kgene@kernel.org, krzk@kernel.org,
 yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
 nouveau@lists.freedesktop.org, Daniel Vetter <daniel.vetter@ffwll.ch>,
 etnaviv@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, linaro-mm-sig@lists.linaro.org,
 linux-rockchip@lists.infradead.org, dri-devel@lists.freedesktop.org,
 xen-devel@lists.xenproject.org, spice-devel@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-media@vger.kernel.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-2-tzimmermann@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <ckoenig.leichtzumerken@gmail.com>
Message-ID: <06cab96a-5224-46dc-dbd2-8eb4950946cc@gmail.com>
Date: Thu, 15 Oct 2020 15:57:14 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201015123806.32416-2-tzimmermann@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US

Am 15.10.20 um 14:37 schrieb Thomas Zimmermann:
> The parameters map and is_iomem always have the same value. Remove them
> to prepare the function for conversion to struct dma_buf_map.
>
> v4:
> 	* don't check for !kmap->virtual; will always be false
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++--------------
>   1 file changed, 4 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 3213429f8444..2d5ed30518f1 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -382,32 +382,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
>   }
>   EXPORT_SYMBOL(drm_gem_vram_unpin);
>   
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> -				      bool map, bool *is_iomem)
> +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
>   {
>   	int ret;
>   	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> +	bool is_iomem;
>   
>   	if (gbo->kmap_use_count > 0)
>   		goto out;
>   
> -	if (kmap->virtual || !map)
> -		goto out;
> -
>   	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
>   	if (ret)
>   		return ERR_PTR(ret);
>   
>   out:
> -	if (!kmap->virtual) {
> -		if (is_iomem)
> -			*is_iomem = false;
> -		return NULL; /* not mapped; don't increment ref */
> -	}
>   	++gbo->kmap_use_count;
> -	if (is_iomem)
> -		return ttm_kmap_obj_virtual(kmap, is_iomem);
> -	return kmap->virtual;
> +	return ttm_kmap_obj_virtual(kmap, &is_iomem);
>   }
>   
>   static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> @@ -452,7 +442,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
>   	ret = drm_gem_vram_pin_locked(gbo, 0);
>   	if (ret)
>   		goto err_ttm_bo_unreserve;
> -	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
> +	base = drm_gem_vram_kmap_locked(gbo);
>   	if (IS_ERR(base)) {
>   		ret = PTR_ERR(base);
>   		goto err_drm_gem_vram_unpin_locked;



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 13:59:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 13:59:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7392.19284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3mf-0001Lm-NI; Thu, 15 Oct 2020 13:59:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7392.19284; Thu, 15 Oct 2020 13:59:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3mf-0001Lf-KL; Thu, 15 Oct 2020 13:59:09 +0000
Received: by outflank-mailman (input) for mailman id 7392;
 Thu, 15 Oct 2020 13:59:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpcQ=DW=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kT3me-0001Kv-PE
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:59:08 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com (unknown
 [40.107.102.47]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45616e26-6d0a-4046-a8fb-f1ad6687348e;
 Thu, 15 Oct 2020 13:59:07 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by BL0PR12MB4690.namprd12.prod.outlook.com (2603:10b6:208:8e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Thu, 15 Oct
 2020 13:59:04 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 13:59:04 +0000
Received: from [192.168.137.56] (80.187.123.114) by
 AM0P190CA0001.EURP190.PROD.OUTLOOK.COM (2603:10a6:208:190::11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Thu, 15 Oct 2020 13:58:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rpcQ=DW=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
	id 1kT3me-0001Kv-PE
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 13:59:08 +0000
X-Inumbo-ID: 45616e26-6d0a-4046-a8fb-f1ad6687348e
Received: from NAM04-DM6-obe.outbound.protection.outlook.com (unknown [40.107.102.47])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 45616e26-6d0a-4046-a8fb-f1ad6687348e;
	Thu, 15 Oct 2020 13:59:07 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oYcC60ctnJOCLR4Kn5YaGE7u7pJBkfsGXlPsN90cs/Oa9MBEMg3YzSN7nQxPaEUCkXmUe8Y5vIPDeUrG4CL6/ykIAHlgPiVhq60Gfi20mZXbGPSDELp/QTL6/F86FKZgMLyTxqv0mOmGpGxbOpn2azQVVglfr5lSAN64IiN+Ruc5xEBoJjp+snIjCoaF69HKq2qb1e/ktzFxiko66MrTUJ6y+h/wAJ8WXjgTPW5K54p0r3je80Mtwj3yWIndt+ak00JoK0O+e68P9Z2XKdT6kIUTcxn4+UhDiwOb7TAy6E/iylRhBIW3zGGdYCCdfAKAQDBN+T7NbSfPnycw5FHCzA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/X6P3sHVYi8SGMfQVN4l2Td6iDIe836nU3EJJf7YeA0=;
 b=f6TstBHqAub+NROJZeHieUccD6TZRt5UdLse/UhFw7mmjbzPdgdFd5mvCRsMJCvdFBkd8LZHE5V1kWi8pZNh28T91y3W/epbdaCId+GCmgZaPjvhuEQPUu8Fxo1ZGOTNHl3HYeYRR855puUtHiFV6KG8tTLn7yqf2UmlRyNrCQMKRFShc8s2Fjpdq1liexpmU/gDoo8IIpOZK3L19mc2jJtobCmEB4inL8N8ENapHYznL6hgxAjNLs6eWjYH3+g0YIeay07xifTi9anPIsFREu39dUKMF0PnZZyyX31xwUqaAPo/CvHm78e9XprdFWCRD7vE2kDkVv02ll3BMsquEQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amdcloud.onmicrosoft.com; s=selector2-amdcloud-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/X6P3sHVYi8SGMfQVN4l2Td6iDIe836nU3EJJf7YeA0=;
 b=NBFuDf98/dVaJgBpw1yN6XHEhfcVFfVpF/b0k+VWHHRoVLn14NCY3Uz8g8ILK1VHLMyDfm5TP4RwXRMYayv0mqSl7Xevbum6QwaBUz4Zzhy9A87ZjclBF0f8t13YCEewUqvYThza4lvu57nbDhTWrWRa3oIFIurAYbKBh5CUHQo=
Authentication-Results: lists.linaro.org; dkim=none (message not signed)
 header.d=none;lists.linaro.org; dmarc=none action=none header.from=amd.com;
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by BL0PR12MB4690.namprd12.prod.outlook.com (2603:10b6:208:8e::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Thu, 15 Oct
 2020 13:59:04 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 13:59:04 +0000
Subject: Re: [PATCH v4 02/10] drm/cma-helper: Remove empty
 drm_gem_cma_prime_vunmap()
To: Thomas Zimmermann <tzimmermann@suse.de>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-3-tzimmermann@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <3ff4a9ce-ff3a-392e-e67e-a7687b0826e9@amd.com>
Date: Thu, 15 Oct 2020 15:58:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20201015123806.32416-3-tzimmermann@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [80.187.123.114]
X-ClientProxiedBy: AM0P190CA0001.EURP190.PROD.OUTLOOK.COM
 (2603:10a6:208:190::11) To MN2PR12MB3775.namprd12.prod.outlook.com
 (2603:10b6:208:159::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.137.56] (80.187.123.114) by AM0P190CA0001.EURP190.PROD.OUTLOOK.COM (2603:10a6:208:190::11) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend Transport; Thu, 15 Oct 2020 13:58:54 +0000
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 36738f6f-d5b1-4021-227d-08d871127994
X-MS-TrafficTypeDiagnostic: BL0PR12MB4690:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<BL0PR12MB46909E3095D926AA8D9F1A5383020@BL0PR12MB4690.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4303;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	oFCzxu+oWIOFBXgbS3E40StWNNOeCeNWPlLvXZA+5gbsrbC2xkNWwcvRemVacRRDuVzuUdyL7WxKSE0WPdFD0PZdgyeezO+LSXbPHf4SEjMnkwrmKl7shDYUbD7OKC3rPyokPgIX8VSDCDCsabLArRfDSByNB2BlwsMF4hCQIrePHQ8MX22w5V5+MEPgfa4P8il1+GxxeOs1athAJEDJOmj2bc3YrtCMVlzbr953BQD1t7CiENrdbJkAORIg3f33nfiKHJUY6O3+NS+uHsal0f9N3WnWm/mFzGef5POsN7RPXpoALYHLqqwCQ4MWOdjTqa3EA+GQJZ6bb5GmW784W34oAkvv2/UxDRhAjz3Shm8dEqaO8HwEsrxRSEu0iqo2/mNAQ40vXmUiryKWwXkENSfgBMnq/NKN7HQW8vGbn6ljjUPef4GjHBNku3ZeVfLs
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MN2PR12MB3775.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(39860400002)(366004)(346002)(376002)(956004)(478600001)(31686004)(8676002)(7416002)(4326008)(8936002)(7406005)(52116002)(186003)(2906002)(16526019)(26005)(6486002)(34490700002)(31696002)(83380400001)(16576012)(2616005)(86362001)(6666004)(316002)(66946007)(66556008)(5660300002)(66476007)(66574015)(36756003)(921003)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	7iZTwAHhsdsOS5e7MZL4vxRacbuYzr8FU2IF5UhejhJjF/7EGTcSEc8HlZG4Pkwwngal1jOgTkSGvE8a5eUM5qd/6P7hv7/H6EgR9bAAbS5LKQthkFYR74FcX48KVEin9gFHDbSsG7dYgSKtani2Gh4TeTW0YyrhF72Zz+GOwZB6wG4rC/4Uc4aAPxAkdFDR/ykHYqQjDuq4rizHcy9ISqwF6MBeKo1tDItQ4hbGgbbEfmf2SIynuEQ/PaAgxmSFrbxO/1dwA2SMacW20lPA+D8B1jz6aVvspg5N3rSa6Yh/2bWPjAmmdTInrwEn7lkYHY5CGdtvDwRRrXi79AyC/7du+MmAL9jI0DUDlSeGIGeuVrpVZIYbOgHM7QSYPTi1MmBnyiNohLFQ+/NdSXqkqHTCbYdspkfKOUhESFRRTT0RsgW4Y5wNHNPkhydA2ugdWTB+SRRKctwDw2vB1RFLAZ/maad0a04CFvBuH/upzc6MKPlNKA80Tu0cvdEjpqRu6PadBmfhsonSjwj12oY/TnlDEQpBLd6piWHXlDohjR+wTIE+U8dnlhEi82n9YkltFBMCrBwmQSljGakUAhZqQY5CjInXWQQeFDM5AfnsIx3t8FUQQZS8BxSe0/Eu3un/mYtf/1ckN57XyDPWu8+4fg==
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 36738f6f-d5b1-4021-227d-08d871127994
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Oct 2020 13:59:04.3772
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: PYqzf6UaB495/1Q4rARdAUUDW8dyETsymTnQ8NutDz2IxispLaDQL4625w+jbCBN
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR12MB4690

Am 15.10.20 um 14:37 schrieb Thomas Zimmermann:
> The function drm_gem_cma_prime_vunmap() is empty. Remove it before
> changing the interface to use struct dma_buf_map.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
>   drivers/gpu/drm/vc4/vc4_bo.c         |  1 -
>   include/drm/drm_gem_cma_helper.h     |  1 -
>   3 files changed, 19 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index 2165633c9b9e..d527485ea0b7 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>   
> -/**
> - * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
> - *     address space
> - * @obj: GEM object
> - * @vaddr: kernel virtual address where the CMA GEM object was mapped
> - *
> - * This function removes a buffer exported via DRM PRIME from the kernel's
> - * virtual address space. This is a no-op because CMA buffers cannot be
> - * unmapped from kernel space. Drivers using the CMA helpers should set this
> - * as their &drm_gem_object_funcs.vunmap callback.
> - */
> -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> -	/* Nothing to do */
> -}
> -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
> -
>   static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
>   	.free = drm_gem_cma_free_object,
>   	.print_info = drm_gem_cma_print_info,
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index f432278173cd..557f0d1e6437 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
>   	.export = vc4_prime_export,
>   	.get_sg_table = drm_gem_cma_prime_get_sg_table,
>   	.vmap = vc4_prime_vmap,
> -	.vunmap = drm_gem_cma_prime_vunmap,
>   	.vm_ops = &vc4_vm_ops,
>   };
>   
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index 2bfa2502607a..a064b0d1c480 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>   int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>   			   struct vm_area_struct *vma);
>   void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>   
>   struct drm_gem_object *
>   drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:00:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:00:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7396.19297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3ng-0002Cz-2Q; Thu, 15 Oct 2020 14:00:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7396.19297; Thu, 15 Oct 2020 14:00:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3nf-0002Cs-Vc; Thu, 15 Oct 2020 14:00:11 +0000
Received: by outflank-mailman (input) for mailman id 7396;
 Thu, 15 Oct 2020 14:00:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpcQ=DW=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kT3ne-0002Cl-QA
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:00:10 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com (unknown
 [40.107.92.77]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b171dfc-b9eb-44f3-89c0-6a9ddbf1ae41;
 Thu, 15 Oct 2020 14:00:09 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by MN2PR12MB4061.namprd12.prod.outlook.com (2603:10b6:208:19a::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Thu, 15 Oct
 2020 14:00:08 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 14:00:08 +0000
Received: from [192.168.137.56] (80.187.123.114) by
 AM0P190CA0026.EURP190.PROD.OUTLOOK.COM (2603:10a6:208:190::36) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.20 via Frontend Transport; Thu, 15 Oct 2020 14:00:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rpcQ=DW=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
	id 1kT3ne-0002Cl-QA
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:00:10 +0000
X-Inumbo-ID: 9b171dfc-b9eb-44f3-89c0-6a9ddbf1ae41
Received: from NAM10-BN7-obe.outbound.protection.outlook.com (unknown [40.107.92.77])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9b171dfc-b9eb-44f3-89c0-6a9ddbf1ae41;
	Thu, 15 Oct 2020 14:00:09 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hrotYsJjeWOMiRBJ5xl+FV1qRNoMzKcJ1OP7AsXnDSWcsjQwegprSQ6NZjyTDAL1L2eMOuAVE/prj3rlDbWPXRmzf+kF/1kW8qsNGzsuGc5n58DCVzay87yn9U6wQMMAUvzMXLTRw+GasDbs3gOuAHXI7zStmO+ir1XgrTvh5o/NcOR/Ewna7/x4LfvWXhMqlZmd2UMA4DsSPAB7BZDZEMp0JHsMs2M3sPRyA8BIlcm2QP02DWo5siZjh2bwCeTDEkV3Um2PKZgtTNHhzqxcbsM1aM12CsEEGXJCn1xxFMwv/kpl5hfID2hhG77DjwSJJSuxsQn6tXPdctZBOXF/XA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MY0ToQsC5k5lgdXyZsEcz+Mg0BHPMRMwcR+tXpbXx4A=;
 b=YyPKHwimVEGy341RD8VxK7/sj4nkelutvMMNgUe4clA3bW3o4ISYQP49jPgMhcgiEbWLziKoIT1G28d1A4OBQgDEklmdsXXEgyse5dNk1FlYNJ7/s8t6BOJV46xv2RuOWIHU9I7HucieJn4W7BqV+YoD4+AyFv3nGBTO23FZ5huPtEeqVU2T10cD/pHXlLqha0AVsT6yt4aoMQxeZjydsJsm70xuHF0DpyBOz/DZHlamKwMAEWK/MgEtQVlM9rfHd8ON3QmGLthHyDNcIbE68GSJ/8DyNNPiqFo8cGvWRICmSE9H/k22JurK5tlJ2eR4p8JV69I4u+ttqpB3FyFAIQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amdcloud.onmicrosoft.com; s=selector2-amdcloud-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MY0ToQsC5k5lgdXyZsEcz+Mg0BHPMRMwcR+tXpbXx4A=;
 b=uKtAvUqVZgTElsNEwYDKeRn16OpQfYsG2NOBID5VLUuXcNtsYHNZxLJtO7RE/dq1TeTwzBktkh9eUgGsDwc7XcJU9LT0LfEB7gUsKTebLCfNe42PDky9M2X31mp6VfR9oZngJeHltUsM4opPaxdxVoFgq/lYtI1iCyyQoYdbSPs=
Authentication-Results: lists.linaro.org; dkim=none (message not signed)
 header.d=none;lists.linaro.org; dmarc=none action=none header.from=amd.com;
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by MN2PR12MB4061.namprd12.prod.outlook.com (2603:10b6:208:19a::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Thu, 15 Oct
 2020 14:00:08 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 14:00:08 +0000
Subject: Re: [PATCH v4 03/10] drm/etnaviv: Remove empty
 etnaviv_gem_prime_vunmap()
To: Thomas Zimmermann <tzimmermann@suse.de>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-4-tzimmermann@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <2a01560b-d7f7-e59e-cf71-50b36e0ee078@amd.com>
Date: Thu, 15 Oct 2020 15:59:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20201015123806.32416-4-tzimmermann@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [80.187.123.114]
X-ClientProxiedBy: AM0P190CA0026.EURP190.PROD.OUTLOOK.COM
 (2603:10a6:208:190::36) To MN2PR12MB3775.namprd12.prod.outlook.com
 (2603:10b6:208:159::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [192.168.137.56] (80.187.123.114) by AM0P190CA0026.EURP190.PROD.OUTLOOK.COM (2603:10a6:208:190::36) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20 via Frontend Transport; Thu, 15 Oct 2020 14:00:00 +0000
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 212df4cd-8a35-494f-accb-08d871129f11
X-MS-TrafficTypeDiagnostic: MN2PR12MB4061:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<MN2PR12MB4061D4F28C6CE95C8EA1CBBB83020@MN2PR12MB4061.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1107;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sx09Pw1cjzdKnru49xflPAssGdo4mvXxQIo1KBjpidSpmoaHPvBxVHLHKuWwB90kY+LNLdeOJjeGxh4pG29Jiev+VyYs5jsjF7sIre6kC3p1p9sX9iIT3Q20LamIo9k6ZYygwkHSqUNjhcYS9W2Wbd8Ocd6/muMX2JEJY1Cz0Cj8L6vKKlmE9k5wR3EZ7t4tvadPzBY8BmwXoYlhZ7oE2RANAFuFhDE5Z/BAUFvZ/qnVqqlt/jk6o/YkZdRqTZjaJIR72hynJ0aflfZdfNNKsfEwn23cpYXLA5vObKHzMxja8PgsWSEcTgWoFLSNFfhBnKu+bQBGZCcYeyJygQYZuVF6XY20d5ZlIk94o9Qzx5MwoQ1XEIQEq99NMoLduCvd6oTHzvJivfoqM/3ba2keqGBnIqGGCyvjWNCNhIBPUnmxnd4BxZdYU0PPmPzCoGX2
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MN2PR12MB3775.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(376002)(39860400002)(136003)(366004)(2906002)(2616005)(66556008)(66476007)(36756003)(83380400001)(52116002)(31686004)(8936002)(66946007)(7406005)(7416002)(8676002)(4326008)(34490700002)(16576012)(956004)(316002)(5660300002)(31696002)(6486002)(6666004)(86362001)(186003)(16526019)(478600001)(26005)(921003)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	lIj9WfLFMVEVu1L6IQjBPgqlEK8GWxN/L5KU1cZ94QY6UM2xZrbvMR3zX03N56XYrCVX9LGJHz1peLkHaQRk1LmBrhqi89YrGcWg77kuATWQd9aARfq+XN8Y8r+Nse0K/ERjc5OKC9BTcFtGLBgvC6C5ItrfAvzcSNzebmQJpTVUUYwGVJCrvVH/829r37FoDjB8ubN62hmjbVS5e9gmPuDaIPE0cdfMP5edJn5VLwmcmdsOqLEX3YzTxDBC0ct3IGN6s/JZxM6/soiNH8PNFiiQohBSQhIk3k7TO3dmycHI5GVwj8LObq9ZwEL6q1n+6w/5Xm3DsiUO0TvZW45uA6ImZ+Sr64Vs3hxBoPIX5Ezs7UfiqGVzbpzJLMtuYyIFDvpM+UIr0wZH0vcv5uie3wEXIMsaOfOsYXbrcM0gqjtEXeNIcnHfe9u2L6EtRDFDH99L9Q7rq+qU/EnP1O/bRORaxzamDnbFYllxUsNj8LNiiaadfQsKNhJm9L3D2MpmRiyU0llmrVds26SJVg4hJmbpLTZPDuKY1XU7aqsUyDry4uPNdoUKByurkYjaVWYim9kQdtkxdNOjsL6eiuhCrc1XHcAteukh2Csfb0GXZW9rLGw/lafIzVmbGyGG8mbOncDDO0gnCm9l2RWmJEjTow==
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 212df4cd-8a35-494f-accb-08d871129f11
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Oct 2020 14:00:07.1477
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: x6pWEsmfeP4c6Y3uqcywYbYfrQlJ4351IJdGanvoxElYheMV7hkc0+Q74le5xvtg
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4061

Am 15.10.20 um 14:37 schrieb Thomas Zimmermann:
> The function etnaviv_gem_prime_vunmap() is empty. Remove it before
> changing the interface to use struct dma_buf_map.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Acked-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/gpu/drm/etnaviv/etnaviv_drv.h       | 1 -
>   drivers/gpu/drm/etnaviv/etnaviv_gem.c       | 1 -
>   drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 -----
>   3 files changed, 7 deletions(-)
>
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 914f0867ff71..9682c26d89bb 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>   int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
>   struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
>   void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>   int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
>   			   struct vm_area_struct *vma);
>   struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> index 67d9a2b9ea6a..bbd235473645 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
> @@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = {
>   	.unpin = etnaviv_gem_prime_unpin,
>   	.get_sg_table = etnaviv_gem_prime_get_sg_table,
>   	.vmap = etnaviv_gem_prime_vmap,
> -	.vunmap = etnaviv_gem_prime_vunmap,
>   	.vm_ops = &vm_ops,
>   };
>   
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index 135fbff6fecf..a6d9932a32ae 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
>   	return etnaviv_gem_vmap(obj);
>   }
>   
> -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> -	/* TODO msm_gem_vunmap() */
> -}
> -
>   int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
>   			   struct vm_area_struct *vma)
>   {



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:01:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:01:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7399.19310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3oU-0002JT-Dc; Thu, 15 Oct 2020 14:01:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7399.19310; Thu, 15 Oct 2020 14:01:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3oU-0002JM-9M; Thu, 15 Oct 2020 14:01:02 +0000
Received: by outflank-mailman (input) for mailman id 7399;
 Thu, 15 Oct 2020 14:01:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpcQ=DW=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kT3oT-0002JF-Ds
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:01:01 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com (unknown
 [40.107.102.72]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8420f961-da96-4dd0-9d9b-156bfa83b9e3;
 Thu, 15 Oct 2020 14:01:00 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by BL0PR12MB4690.namprd12.prod.outlook.com (2603:10b6:208:8e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Thu, 15 Oct
 2020 14:00:57 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 14:00:57 +0000
Received: from [192.168.137.56] (80.187.123.114) by
 AM0PR08CA0005.eurprd08.prod.outlook.com (2603:10a6:208:d2::18) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.20 via Frontend Transport; Thu, 15 Oct 2020 14:00:46 +0000
X-Inumbo-ID: 8420f961-da96-4dd0-9d9b-156bfa83b9e3
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IIEdIhAerksJz7LFA0/2tMVoP1eYy2m2Ya1PNPh5OrBDlo8nj2XEoSxbXWxo4Mz4DolOmzMtA+KEn8NPqB3wSk9m0s6LI/lDt/nJZ9hnUN1NTFw4XWEZa2dFSf9EimZAzUy4pn80DinZrRTZX4USS51MIogLo6gfidx5GriksHEWABTdpSuaHvrZpvThRPVgxiFnvjsGJ3PHFlgNcXfAtpZm/L8qGUaBN41c1wIdCb7/Ae6z4yntsDE9uVAK/FuqGrUhYQmzhO4SyJuZ4YfE747oNTNqVm7I7KSi/SioaWn8VRY/j3oqSSeUcKT+0yGuLT7jwmxpUBlYLCCfKvhhVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KyDMUj3c5Q/kAKqLamJTbfJcmn3zhWOYAX0CC7GZMQ8=;
 b=QTNqZ9bvtZskR6e39cbFMEBGtNy7Gf+S6js8Ub00HbZEp+CGzn2K162OqBmSWT8ES5GJ7/kzJQJPLlLK1j8+u/UGGHOBZTLmSHlyqgg9ab3Il9iXLyehrlLzJUepHURVQIUqsOTAADU8wVy1o7OWjazoxzMRRp+3DxADPI5Gd/9w1vdGMaxEtyMDH9cF0P0e9+DxWgeeDMBABuSGo9BJwCsIn1Gv8EO86RdopZzGKtyLe6df0sx3LnpTVFr7Tfc3hdKOF/S4UnbFmvXSXc5niVn/UlLb2DnpFFaPIy1+EzN6mIfQrZBOBHJynyVzYVZGSyvE5yVcpD0LjkwcAGNy/g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amdcloud.onmicrosoft.com; s=selector2-amdcloud-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KyDMUj3c5Q/kAKqLamJTbfJcmn3zhWOYAX0CC7GZMQ8=;
 b=nOxo+crch41rHvdBum9AtQW7CBZ126KH3aDpN2sdW8FdpdjZbswJpvxloNTgfl1wbbtTMqly2HlxcVx4TvM6PlGXkzFSfFVWW9yGEdv0x9BPzobGw3xK0pMx4gILw82yOirM5xL9KTkjHX/CPtKbLkvdQgS8nhnLq3vxj2lQpBk=
Authentication-Results: lists.linaro.org; dkim=none (message not signed)
 header.d=none;lists.linaro.org; dmarc=none action=none header.from=amd.com;
Subject: Re: [PATCH v4 04/10] drm/exynos: Remove empty
 exynos_drm_gem_prime_{vmap,vunmap}()
To: Thomas Zimmermann <tzimmermann@suse.de>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-5-tzimmermann@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <7a6f6526-1b67-61c8-2239-50f2bfbdc29d@amd.com>
Date: Thu, 15 Oct 2020 16:00:42 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20201015123806.32416-5-tzimmermann@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [80.187.123.114]
X-ClientProxiedBy: AM0PR08CA0005.eurprd08.prod.outlook.com
 (2603:10a6:208:d2::18) To MN2PR12MB3775.namprd12.prod.outlook.com
 (2603:10b6:208:159::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: dc82f225-0f8a-43ae-cb58-08d87112bccd
X-MS-TrafficTypeDiagnostic: BL0PR12MB4690:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<BL0PR12MB4690BB9C565CA20B6B0332C883020@BL0PR12MB4690.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2582;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	+JBOwexIirDjBJSp39MdGx4etjX+D1JSxS0P8TKGjAx1Bmi1T/HpRGzkBUKpTm/XTd+/LrRMDocaj0Ux25QJ6ln93vwns8Cz82KXZp7az7wmFCwWDpKO5vCP3wdZhpKpdPGUQqsKTDxaSn4+A1aVXQOp4XRT0QJbqMPxQi8X7LBscjIL6rbKrxyCaaXEiW3RU34nFc6HjddNFg2AT4/Bh8f0C3aGvk/tUDErgOCm/9tDgXNzjn9lVfTaLEsbwx1LnZADAqpstprcTTr3gIx0vfW6ojRELLMSUc9DWwsuGmk5tjPuih28IqGJm18Oh6fI8Pm17X9yq8xQQYf0gRbte3FCMU0f4zIQiP/BB9JGvacfcxiCYo5ytMK9w9TN1SfHsjr9k38s3MzdcvZ1rClA8wEbsJP6VUXTqHQt7W/Qon9GVQQ+FvBVIvKlzQuqmAMf
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MN2PR12MB3775.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(39860400002)(366004)(346002)(376002)(956004)(478600001)(31686004)(8676002)(7416002)(4326008)(8936002)(7406005)(52116002)(186003)(2906002)(16526019)(26005)(6486002)(34490700002)(31696002)(83380400001)(16576012)(2616005)(86362001)(6666004)(316002)(66946007)(66556008)(5660300002)(66476007)(36756003)(921003)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	L+lw87r0GARxt6cpx/CQ9Knf+dyyeSpa0SnuyLreuQ4Ft54/jY4lbP8rZ5fgVHeZXyxczESkVI0XZU6KBk94qGVEzWZZ5+57Kb77FSr1MnmNvk5plVOe07W0Ju2YnE7dyIT4jMLY5Jaqw1CqI5/d7vfCVjJEriMzHd1ALMy2hXA8wH40hv9U/S/RE7aIjNjSl3FlWorHI/NaKLuCh3VBY5dDrHnxySi4nl6kVkttFn1vl6/Ralb3uGiSNjawA+PQzTCjlxFz5bP2njR4gEsbIBaF4gXPYLh6Tz4LxOhfHFmLj38gWfj7HZdcdMfLfSy/ukOkLME1tQ2gRj3cKz9E9DiMUsGPbP5GaCZzjGzYrCElmJbtLn8ZlrVcx3hrAFHV6g1ZKTCikLz3QTDjKJcV68T4rvyQYR9taf5IJQkqDSbLCTPjFD3+vU6GAAta3DPSJQXGaGUchlpve77w02No8FlfZIYKGJ7n99pRq1GLG0ir7UPHltbBjl3bysAZAQxszNBUZ7+bcXMEj9fa23yQ3FXp4ec8tT6tMVLazB8Djb2GLFu4mBgMuXx83KOgE3CtDf3tvNRHKx4kCrZMtIbBF7NXZigE7/BDE8xK86q78Owkpk4BOQelhulmZKJ1hIPekqr/gm5sTJXpG8JiW5Djxw==
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dc82f225-0f8a-43ae-cb58-08d87112bccd
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Oct 2020 14:00:57.0375
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NidAfpRa2NMLEGdnFWdYvsER8j3pbm+Oaf1NwO8cr9OAacK7WMR3KCrCNNQSGLqa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR12MB4690

Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
> them before changing the interface to use struct dma_buf_map. As a side
> effect of removing drm_gem_prime_vmap(), the error code changes from
> ENOMEM to EOPNOTSUPP.
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Acked-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
>   drivers/gpu/drm/exynos/exynos_drm_gem.h |  2 --
>   2 files changed, 14 deletions(-)
>
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> index e7a6eb96f692..13a35623ac04 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
> @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
>   static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
>   	.free = exynos_drm_gem_free_object,
>   	.get_sg_table = exynos_drm_gem_prime_get_sg_table,
> -	.vmap = exynos_drm_gem_prime_vmap,
> -	.vunmap	= exynos_drm_gem_prime_vunmap,
>   	.vm_ops = &exynos_drm_gem_vm_ops,
>   };
>   
> @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
>   	return &exynos_gem->base;
>   }
>   
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> -	return NULL;
> -}
> -
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> -	/* Nothing to do */
> -}
> -
>   int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
>   			      struct vm_area_struct *vma)
>   {
> diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> index 74e926abeff0..a23272fb96fb 100644
> --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
> +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
> @@ -107,8 +107,6 @@ struct drm_gem_object *
>   exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
>   				     struct dma_buf_attachment *attach,
>   				     struct sg_table *sgt);
> -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
> -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>   int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
>   			      struct vm_area_struct *vma);
>   



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:02:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:02:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7403.19322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3qG-0002UQ-Vc; Thu, 15 Oct 2020 14:02:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7403.19322; Thu, 15 Oct 2020 14:02:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3qG-0002UJ-S9; Thu, 15 Oct 2020 14:02:52 +0000
Received: by outflank-mailman (input) for mailman id 7403;
 Thu, 15 Oct 2020 14:02:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LoCs=DW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kT3qE-0002U3-OT
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:02:50 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f95744fe-b802-4a0a-8d5d-e93722dcb3ee;
 Thu, 15 Oct 2020 14:02:49 +0000 (UTC)
X-Inumbo-ID: f95744fe-b802-4a0a-8d5d-e93722dcb3ee
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602770570;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=vJmFyAUEzqg6ecFBFpUq4RyHD+WUMAaJ+2eNPktHAdo=;
  b=GKJUmsCyZTYXcaefd7ScZW1oB7oJE818VFSmeYN9fLbhoRCxZDr3p9js
   ZfGU9AAN9NAYqij79+xit7Chfj5jQjYe+Hbg3SUBF4QecnQxQ1dipZFoJ
   EwOEPuMLVDBPCZovOJLO1/Ip4ge0VKi6x19fQEQeGwiXEaLvFk53G1kxY
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: kmYhWeC8acTVT2lKvp+ywlNO8ZYGSPIKuI/DKZo4eiAkK1F7pCmiWhvLJx9qELnrEVdFxBngDr
 G5o4oIBM0ghyQjJJh1GYPiMG04gN73CVzZsxTBoYdNQEXepgZuFwqf22b5Ho/nVG5fnmyhjQc9
 op+OCY8PzgsvZ31UVFzq+17yRc/Dxhjco9Drh/yDAMUk/S/0Nwkjt9EMoLVcVfDL5riA2D786/
 6kgGtCRm8XeiWQiLwClixvbrmvXUK9f6dAxCf1tIwnpO8zWLm0lqfeKxEz/IIs6k2P6NH7Ao+J
 7Ww=
X-SBRS: 2.5
X-MesageID: 29410899
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29410899"
Subject: Re: [PATCH v2] x86/smpboot: Don't unconditionally call
 memguard_guard_stack() in cpu_smpboot_alloc()
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201014184708.17758-1-andrew.cooper3@citrix.com>
 <0ed412d9-c9a2-194b-c953-c74ee102664f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0a294279-5de5-3b54-b1f9-847de1159447@citrix.com>
Date: Thu, 15 Oct 2020 15:02:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0ed412d9-c9a2-194b-c953-c74ee102664f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 15/10/2020 09:50, Jan Beulich wrote:
> On 14.10.2020 20:47, Andrew Cooper wrote:
>> cpu_smpboot_alloc() is designed to be idempotent with respect to partially
>> initialised state.  This occurs for S3 and CPU parking, where enough state to
>> handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
>> when we otherwise want to offline the CPU.
>>
>> For simplicity between various configurations, Xen always uses shadow stack
>> mappings (Read-only + Dirty) for the guard page, irrespective of whether
>> CET-SS is enabled.
>>
>> Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
>> by first writing out the supervisor shadow stack tokens with plain writes,
>> then changing the mapping to being read-only.
>>
>> This ordering is strictly necessary to configure the BSP, which cannot have
>> the supervisor tokens be written with WRSS.
>>
>> Instead of calling memguard_guard_stack() unconditionally, call it only when
>> actually allocating a new stack.  Xenheap allocations are guaranteed to be
>> writeable, and the net result is idempotency WRT configuring stack_base[].
>>
>> Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
>> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> This can more easily be demonstrated with CPU hotplug than S3, and the absence
>> of bug reports goes to show how rarely hotplug is used.
>>
>> v2:
>>  * Don't break S3/CPU parking in combination with CET-SS.  v1 would, for S3,
>>    turn the BSP shadow stack into regular mappings, and #DF as soon as the TLB
>>    shootdown completes.
> The code change looks correct to me, but since I don't understand
> this part I'm afraid I may be overlooking something. I understand
> the "turn the BSP shadow stack into regular mappings" relates to
> cpu_smpboot_free()'s call to memguard_unguard_stack(), but I
> didn't think we come through cpu_smpboot_free() for the BSP upon
> entering or leaving S3.

The v1 really did fix Marek's repro of the problem.

The only possible way this can occur is if, somewhere, there is a call
to cpu_smpboot_free() for CPU0 with remove=0 on the S3 path.

I have to admit that I can't actually spot where it is.


Either way - it doesn't impact the fix, which attempts to make "the
stack" into a single object.  I experimented with introducing
smpboot_{alloc,free}_stack(), but the result wasn't clean and I
abandoned that approach.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:08:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:08:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7407.19334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3vo-0002jd-M9; Thu, 15 Oct 2020 14:08:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7407.19334; Thu, 15 Oct 2020 14:08:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT3vo-0002jW-HY; Thu, 15 Oct 2020 14:08:36 +0000
Received: by outflank-mailman (input) for mailman id 7407;
 Thu, 15 Oct 2020 14:08:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpcQ=DW=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kT3vn-0002jR-Qb
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:08:35 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe59::620])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec966429-cf8a-4a2f-9f90-309f47e6c399;
 Thu, 15 Oct 2020 14:08:34 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by MN2PR12MB3951.namprd12.prod.outlook.com (2603:10b6:208:16b::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Thu, 15 Oct
 2020 14:08:29 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 14:08:29 +0000
Received: from [192.168.137.56] (80.187.123.114) by
 AM0PR10CA0062.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:15::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.20 via Frontend Transport; Thu, 15 Oct 2020 14:08:19 +0000
X-Inumbo-ID: ec966429-cf8a-4a2f-9f90-309f47e6c399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LVdBYlHA+ZZ+5Hh3DDrQxM7vxQE+6uYWnUoJx+JLeWkROSQrqHtebXxOONTGNcpSr2VdikKaW/t+6ExBHDQgqHQigrKXx1AneUpo55Paj61VDSK3uyWpOuhyQl7epJab0K8yPA2JAswjsjo692MvK8kZ7xMxsL0IwUSR5eYv9u8X/bTBQmv7p+y1CCJmRp4ujc+VOY6IWQdes609C82t/Nq07uAhrtcLc0vtHVwfaxxJe9UBykPoc7ANdAGaU+/PaQIQfAkj+vCQ5eG53FDuX8u0a9hnRsfCnme8AQ+ZEyBzkdehFbouloOMyDLM7GoFc9YTqSbltrg46Scz0IPdtg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ElSABT3dzYVayvBI+cG0Ylje7BwHhTr2iqRweK77mNg=;
 b=WE/s9LqvzFrHOJjACjHySCn3BVo9HvogD3tfbqQfGN3SHuDVkyQexge6HQnbU4nAZaneflYPS9huYIF4Ej5YCJe0zO7TOSPbnr9Iui3s744Uosc6QzoVUjFVWP0Y3iR8vmhTJtOhTxAMxqAFYLkk/K1f7CGgSpU6EeqUE75tR1nOjpe0lfTdSrO3cowCi0NfKDW17woLaVxEUnRDGYXA4WIy4XpJ1zW0QXmdPMPNR+zs7/DX0kDPaAFGI56CJUTqPmJDfYqnyccY0V5sHKtDs8Mgp/H8vNKusiPk22box3aLg08lyC7qFCaW8H6OnrnMXVvaXEZBsmQd+GZaBUb7rA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amdcloud.onmicrosoft.com; s=selector2-amdcloud-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ElSABT3dzYVayvBI+cG0Ylje7BwHhTr2iqRweK77mNg=;
 b=r6pk+dnZN73mOjvhDP5xuMOwdktd8SA3qRp5VnbFn+/BUZO/qrcsoOhfSw/aAFHVQTPPoRdZ42OAJKw7C0H5RBq0x/0EJY/MuW1hFPwlFSJs9SzDu2/FtI4wjkDZOvjlnd5e4Y6sHrmoAwwB1E/qPTWUQoYZy8W89HBUAJVCQhc=
Authentication-Results: lists.linaro.org; dkim=none (message not signed)
 header.d=none;lists.linaro.org; dmarc=none action=none header.from=amd.com;
Subject: Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
To: Thomas Zimmermann <tzimmermann@suse.de>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-6-tzimmermann@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
Date: Thu, 15 Oct 2020 16:08:13 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20201015123806.32416-6-tzimmermann@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [80.187.123.114]
X-ClientProxiedBy: AM0PR10CA0062.EURPRD10.PROD.OUTLOOK.COM
 (2603:10a6:208:15::15) To MN2PR12MB3775.namprd12.prod.outlook.com
 (2603:10b6:208:159::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 13dc04ab-0081-4173-237e-08d87113c9f4
X-MS-TrafficTypeDiagnostic: MN2PR12MB3951:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<MN2PR12MB39516C6629AAF06CAD2C31B583020@MN2PR12MB3951.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	fDfIXr9/7X5kT1QrRcll0NCUVn6kZ8pGzIQQCJD48eLvwHgwtolumBsnTl8tYr9tkSi/N3CieMnkAAyeJepHo6/z+4WIRopBXoeGwyNCzI4zKnXqdL+yRPQ1PwOb7oVTFNjZ+Mc8g9yZHCZxbQf3MkD9QDXXLnWs3ZP/pl9hdtICYNkqzqvsawUK09jIOeVs0PSYzt9E2Sjx9VKeYI3GUSsq4gnkqmgZf/fPq2duqazk4l4Md4erwreo/2QJZ9qlQLEEy+AbJ8HDEyojAfuXRHASHmdzA3olwGP4QgUMnIv8lWx5JGeEdhkSBkgk5LS9bF81q9b3+z2/ss8JouKYD9MOil4qFhfdmzWqq0TfhfgxPiNf1x3PvUV8kJ/kJwCfSnu2IfWxKUIPFaLZhISOXTVmapL6F3TkjJ516wCoNeJvAHLsz8QSj9awQUzPY6Ar
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MN2PR12MB3775.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(396003)(366004)(39860400002)(83380400001)(86362001)(8936002)(6666004)(956004)(6486002)(34490700002)(7406005)(2906002)(2616005)(16576012)(16526019)(5660300002)(31696002)(316002)(36756003)(66476007)(66556008)(66946007)(186003)(8676002)(52116002)(26005)(4326008)(7416002)(478600001)(31686004)(921003)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	WFQveVdFhYq2o10uQNOnzM2DZpX9ODAzL3xVKiUrWXH9Sewxa5qPVTBHbCrzZeNwRUWPgaHrhvY3+Iu89O5vBnpPlQ4W8nLhzEHWk7tAlj4zRbJC9DWSU6MaKOc2vZZc2KVhsAmzW7s1rIhwy28V9jRrYltMyPr1dJZTSo34VEhdWUPY6sEa/pQHJKP7WmTG6Xb5wSm91jx1KoKdL1SRPUW1g3cAGPIMerhXDV6VBKAHCcPEo/JXMrhcG8H/Fdm4+s5KC3kTOAWuVkF+koZGy/uPSMpsDd4fRa8b4Cvd4jv+/Px/wLqr2FONG/EGeKHUOzvlu9LylkyHI8NAKWQG9vY39OzkaeYRfiZ53UKeWUGIT9m7xrbOK7t1cmeY8b4gW0ho5rKyYIU6qgWjii3eIVXaZWMS2xG7YLh8dgKu1fS+eFyQe1NfphHX5aXTEIsehsFsHuy+tu9IcloFZxbd8s2y6K+LV4CoUdSMjGvS8lTHNT05megqHOF1e0i99t/nup/iLImR001KBbm1muR2j064yvZ7xx+lSvrh3wD44pjZrZ+F9sXxoP4b7WBKbEVK/JBs/1KCUuZqOB0Kg8h+sUQQCDIfZiKVR+RbyEqAH6hultzvVfomRqIIsVEgiVitkdLZLD7yQT/LeTUr9mvm6g==
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 13dc04ab-0081-4173-237e-08d87113c9f4
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Oct 2020 14:08:29.1475
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: foxGeDB2c3N7aCsaE2FooMCOwo6Uufe5wNMOP0BllgmW7Ma8XHm6bEp3UJWrPfjC
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB3951

Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> address space. The mapping's address is returned as struct dma_buf_map.
> Each function is a simplified version of TTM's existing kmap code. Both
> functions respect the memory's location and/or write-combine flags.
>
> On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
> two helpers that convert a GEM object into the TTM BO and forward the call
> to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
> object callbacks.
> callbacks.
>
> v4:
> 	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> 	  Christian)

A bunch of minor comments below, but overall this looks very solid to me.

>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>   drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>   drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
>   include/drm/drm_gem_ttm_helper.h     |  6 +++
>   include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
>   include/linux/dma-buf-map.h          | 20 ++++++++
>   5 files changed, 164 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> index 0e4fb9ba43ad..db4c14d78a30 100644
> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>   }
>   EXPORT_SYMBOL(drm_gem_ttm_print_info);
>   
> +/**
> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: [out] returns the dma-buf mapping.
> + *
> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> + * &drm_gem_object_funcs.vmap callback.
> + *
> + * Returns:
> + * 0 on success, or a negative errno code otherwise.
> + */
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> +		     struct dma_buf_map *map)
> +{
> +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> +	return ttm_bo_vmap(bo, map);
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> +
> +/**
> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: dma-buf mapping.
> + *
> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> + * &drm_gem_object_funcs.vunmap callback.
> + */
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> +			struct dma_buf_map *map)
> +{
> +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> +	ttm_bo_vunmap(bo, map);
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> +
>   /**
>    * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>    * @gem: GEM object.
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index bdee4df1f3f2..80c42c774c7d 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -32,6 +32,7 @@
>   #include <drm/ttm/ttm_bo_driver.h>
>   #include <drm/ttm/ttm_placement.h>
>   #include <drm/drm_vma_manager.h>
> +#include <linux/dma-buf-map.h>
>   #include <linux/io.h>
>   #include <linux/highmem.h>
>   #include <linux/wait.h>
> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>   }
>   EXPORT_SYMBOL(ttm_bo_kunmap);
>   
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> +	struct ttm_resource *mem = &bo->mem;
> +	int ret;
> +
> +	ret = ttm_mem_io_reserve(bo->bdev, mem);
> +	if (ret)
> +		return ret;
> +
> +	if (mem->bus.is_iomem) {
> +		void __iomem *vaddr_iomem;
> +		unsigned long size = bo->num_pages << PAGE_SHIFT;

Please use uint64_t here and make sure to cast bo->num_pages before
shifting.

We have a unit test that allocates an 8GB BO, and that should work on a
32-bit machine as well :)
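The overflow can be demonstrated in plain user-space C; `PAGE_SHIFT`, `bo_size_32()` and `bo_size_64()` below are illustrative stand-ins, not the kernel's symbols:

```c
#include <stdint.h>

#define PAGE_SHIFT 12 /* 4 KiB pages, as on most architectures */

/* Buggy pattern: with a 32-bit unsigned long (simulated here by uint32_t),
 * an 8 GiB BO of 2097152 pages shifts the set bit out of range and the
 * computed size wraps to 0. */
static uint32_t bo_size_32(uint32_t num_pages)
{
	return num_pages << PAGE_SHIFT;
}

/* Fixed pattern: cast before shifting so the result is computed in 64 bits. */
static uint64_t bo_size_64(uint32_t num_pages)
{
	return (uint64_t)num_pages << PAGE_SHIFT;
}
```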

> +
> +		if (mem->bus.addr)
> +			vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> +		else if (mem->placement & TTM_PL_FLAG_WC)

I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
mem->bus.caching enum as a replacement.
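A hedged sketch of what the adapted branch could look like; `sim_ttm_caching` and `sim_pick_ioremap()` are made-up stand-ins, since the exact drm-misc-next names are not shown in this thread:

```c
#include <string.h>

/* Made-up model of the caching mode that replaces TTM_PL_FLAG_WC. */
enum sim_ttm_caching {
	SIM_TTM_UNCACHED,
	SIM_TTM_WRITE_COMBINED,
	SIM_TTM_CACHED,
};

/* Mirrors the vmap branch above: write-combined I/O memory would go
 * through ioremap_wc(), everything else through plain ioremap(). */
static const char *sim_pick_ioremap(enum sim_ttm_caching caching)
{
	return caching == SIM_TTM_WRITE_COMBINED ? "ioremap_wc" : "ioremap";
}
```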

> +			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> +		else
> +			vaddr_iomem = ioremap(mem->bus.offset, size);
> +
> +		if (!vaddr_iomem)
> +			return -ENOMEM;
> +
> +		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> +
> +	} else {
> +		struct ttm_operation_ctx ctx = {
> +			.interruptible = false,
> +			.no_wait_gpu = false
> +		};
> +		struct ttm_tt *ttm = bo->ttm;
> +		pgprot_t prot;
> +		void *vaddr;
> +
> +		BUG_ON(!ttm);

I think we can drop this; ttm_tt_populate() will just crash badly anyway.

> +
> +		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> +		if (ret)
> +			return ret;
> +
> +		/*
> +		 * We need to use vmap to get the desired page protection
> +		 * or to make the buffer object look contiguous.
> +		 */
> +		prot = ttm_io_prot(mem->placement, PAGE_KERNEL);

The calling convention of ttm_io_prot() has changed on drm-misc-next as
well, but this should be trivial to adapt.

Regards,
Christian.

> +		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> +		if (!vaddr)
> +			return -ENOMEM;
> +
> +		dma_buf_map_set_vaddr(map, vaddr);
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(ttm_bo_vmap);
> +
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> +	if (dma_buf_map_is_null(map))
> +		return;
> +
> +	if (map->is_iomem)
> +		iounmap(map->vaddr_iomem);
> +	else
> +		vunmap(map->vaddr);
> +	dma_buf_map_clear(map);
> +
> +	ttm_mem_io_free(bo->bdev, &bo->mem);
> +}
> +EXPORT_SYMBOL(ttm_bo_vunmap);
> +
>   static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>   				 bool dst_use_tt)
>   {
> diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> index 118cef76f84f..7c6d874910b8 100644
> --- a/include/drm/drm_gem_ttm_helper.h
> +++ b/include/drm/drm_gem_ttm_helper.h
> @@ -10,11 +10,17 @@
>   #include <drm/ttm/ttm_bo_api.h>
>   #include <drm/ttm/ttm_bo_driver.h>
>   
> +struct dma_buf_map;
> +
>   #define drm_gem_ttm_of_gem(gem_obj) \
>   	container_of(gem_obj, struct ttm_buffer_object, base)
>   
>   void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>   			    const struct drm_gem_object *gem);
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> +		     struct dma_buf_map *map);
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> +			struct dma_buf_map *map);
>   int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>   		     struct vm_area_struct *vma);
>   
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 37102e45e496..2c59a785374c 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>   
>   struct ttm_bo_device;
>   
> +struct dma_buf_map;
> +
>   struct drm_mm_node;
>   
>   struct ttm_placement;
> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
>    */
>   void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>   
> +/**
> + * ttm_bo_vmap
> + *
> + * @bo: The buffer object.
> + * @map: pointer to a struct dma_buf_map representing the map.
> + *
> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> + * data in the buffer object. The parameter @map returns the virtual
> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> + *
> + * Returns
> + * -ENOMEM: Out of memory.
> + * -EINVAL: Invalid range.
> + */
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
> +/**
> + * ttm_bo_vunmap
> + *
> + * @bo: The buffer object.
> + * @map: Object describing the map to unmap.
> + *
> + * Unmaps a kernel map set up by ttm_bo_vmap().
> + */
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
>   /**
>    * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>    *
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index fd1aba545fdf..2e8bbecb5091 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -45,6 +45,12 @@
>    *
>    *	dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
>    *
> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> + *
> + * .. code-block:: c
> + *
> + *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> + *
>    * Test if a mapping is valid with either dma_buf_map_is_set() or
>    * dma_buf_map_is_null().
>    *
> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
>   	map->is_iomem = false;
>   }
>   
> +/**
> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> + * @map:		The dma-buf mapping structure
> + * @vaddr_iomem:	An I/O-memory address
> + *
> + * Sets the address and the I/O-memory flag.
> + */
> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> +					       void __iomem *vaddr_iomem)
> +{
> +	map->vaddr_iomem = vaddr_iomem;
> +	map->is_iomem = true;
> +}
> +
>   /**
>    * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
>    * @lhs:	The dma-buf mapping structure



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:22:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:22:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7410.19346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT48i-0004Mr-U8; Thu, 15 Oct 2020 14:21:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7410.19346; Thu, 15 Oct 2020 14:21:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT48i-0004Mk-QL; Thu, 15 Oct 2020 14:21:56 +0000
Received: by outflank-mailman (input) for mailman id 7410;
 Thu, 15 Oct 2020 14:21:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpcQ=DW=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kT48h-0004Mf-4l
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:21:55 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e89::614])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6b4b8de-a071-45ce-88bd-f280b91e8dd0;
 Thu, 15 Oct 2020 14:21:49 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by BL0PR12MB4932.namprd12.prod.outlook.com (2603:10b6:208:1c2::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Thu, 15 Oct
 2020 14:21:45 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.021; Thu, 15 Oct 2020
 14:21:45 +0000
Received: from [192.168.137.56] (80.187.123.114) by
 AM0PR03CA0058.eurprd03.prod.outlook.com (2603:10a6:208::35) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.20 via Frontend Transport; Thu, 15 Oct 2020 14:21:37 +0000
Subject: Re: [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops
 and convert GEM backends
To: Thomas Zimmermann <tzimmermann@suse.de>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-7-tzimmermann@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <6bd7d5cf-06c8-3fd8-9bbe-a80ff6bb327e@amd.com>
Date: Thu, 15 Oct 2020 16:21:31 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20201015123806.32416-7-tzimmermann@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [80.187.123.114]
X-ClientProxiedBy: AM0PR03CA0058.eurprd03.prod.outlook.com (2603:10a6:208::35)
 To MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1402a292-410c-4dd7-5df2-08d87115a47e
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 Oct 2020 14:21:44.9217
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hUctmsKmzwzlQVH4lhjqevgxGnxCyHnHl623cVkEli3p3TuwQp67xFQjNmWsqHz4
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR12MB4932

Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> This patch replaces the vmap/vunmap's use of raw pointers in GEM object
> functions with instances of struct dma_buf_map. GEM backends are
> converted as well. For most of them, this simply changes the returned type.
>
> TTM-based drivers now return information about the location of the memory,
> either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
> et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
> implementing their own vmap callbacks.
>
> v4:
> 	* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
> 	* fix a trailing { in drm_gem_vmap()
> 	* remove several empty functions instead of converting them (Daniel)
> 	* comment uses of raw pointers with a TODO (Daniel)
> 	* TODO list: convert more helpers to use struct dma_buf_map
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

The amdgpu changes look good to me, but I can't fully judge the other stuff.

Acked-by: Christian König <christian.koenig@amd.com>
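As a user-space sketch of the structure this series threads through the vmap paths (field names mirror the patch's include/linux/dma-buf-map.h, but this is an illustration, not the kernel code):

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for struct dma_buf_map: one object that carries the
 * mapping address plus a flag saying whether it is system or I/O memory. */
struct sim_dma_buf_map {
	union {
		void *vaddr;        /* valid when !is_iomem */
		void *vaddr_iomem;  /* valid when is_iomem (void __iomem * in-kernel) */
	};
	bool is_iomem;
};

static void sim_map_set_vaddr(struct sim_dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static void sim_map_set_vaddr_iomem(struct sim_dma_buf_map *map,
				    void *vaddr_iomem)
{
	map->vaddr_iomem = vaddr_iomem;
	map->is_iomem = true;
}

static bool sim_map_is_null(const struct sim_dma_buf_map *map)
{
	return map->vaddr == NULL;
}
```

A driver's vmap callback then fills in such a map instead of returning a raw pointer, which is what lets TTM-based backends report I/O memory correctly to callers.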

> ---
>   Documentation/gpu/todo.rst                  |  18 ++++
>   drivers/gpu/drm/Kconfig                     |   2 +
>   drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 -------
>   drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
>   drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
>   drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
>   drivers/gpu/drm/ast/ast_cursor.c            |  27 +++--
>   drivers/gpu/drm/ast/ast_drv.h               |   7 +-
>   drivers/gpu/drm/drm_gem.c                   |  23 +++--
>   drivers/gpu/drm/drm_gem_cma_helper.c        |  10 +-
>   drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 +++++----
>   drivers/gpu/drm/drm_gem_vram_helper.c       | 107 ++++++++++----------
>   drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   2 +-
>   drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |   9 +-
>   drivers/gpu/drm/lima/lima_gem.c             |   6 +-
>   drivers/gpu/drm/lima/lima_sched.c           |  11 +-
>   drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
>   drivers/gpu/drm/nouveau/Kconfig             |   1 +
>   drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
>   drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
>   drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
>   drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 ----
>   drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +--
>   drivers/gpu/drm/qxl/qxl_display.c           |  11 +-
>   drivers/gpu/drm/qxl/qxl_draw.c              |  14 ++-
>   drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
>   drivers/gpu/drm/qxl/qxl_object.c            |  31 +++---
>   drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
>   drivers/gpu/drm/qxl/qxl_prime.c             |  12 +--
>   drivers/gpu/drm/radeon/radeon.h             |   1 -
>   drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
>   drivers/gpu/drm/radeon/radeon_prime.c       |  20 ----
>   drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 ++--
>   drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
>   drivers/gpu/drm/tiny/cirrus.c               |  10 +-
>   drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
>   drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
>   drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
>   drivers/gpu/drm/vc4/vc4_bo.c                |   6 +-
>   drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
>   drivers/gpu/drm/vgem/vgem_drv.c             |  16 ++-
>   drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 ++--
>   drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
>   include/drm/drm_gem.h                       |   5 +-
>   include/drm/drm_gem_cma_helper.h            |   2 +-
>   include/drm/drm_gem_shmem_helper.h          |   4 +-
>   include/drm/drm_gem_vram_helper.h           |  14 +--
>   47 files changed, 321 insertions(+), 295 deletions(-)
>
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 700637e25ecd..7e6fc3c04add 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
>   
>   Level: Intermediate
>   
> +Use struct dma_buf_map throughout codebase
> +------------------------------------------
> +
> +Pointers to shared device memory are stored in struct dma_buf_map. Each
> +instance knows whether it refers to system or I/O memory. Most of the DRM-wide
> +interface have been converted to use struct dma_buf_map, but implementations
> +often still use raw pointers.
> +
> +The task is to use struct dma_buf_map where it makes sense.
> +
> +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
> +* TTM might benefit from using struct dma_buf_map internally.
> +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
> +
> +Level: Intermediate
> +
>   
>   Core refactorings
>   =================
> diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
> index 147d61b9674e..319839b87d37 100644
> --- a/drivers/gpu/drm/Kconfig
> +++ b/drivers/gpu/drm/Kconfig
> @@ -239,6 +239,7 @@ config DRM_RADEON
>   	select FW_LOADER
>           select DRM_KMS_HELPER
>           select DRM_TTM
> +	select DRM_TTM_HELPER
>   	select POWER_SUPPLY
>   	select HWMON
>   	select BACKLIGHT_CLASS_DEVICE
> @@ -259,6 +260,7 @@ config DRM_AMDGPU
>   	select DRM_KMS_HELPER
>   	select DRM_SCHED
>   	select DRM_TTM
> +	select DRM_TTM_HELPER
>   	select POWER_SUPPLY
>   	select HWMON
>   	select BACKLIGHT_CLASS_DEVICE
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> index 5b465ab774d1..e5919efca870 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
> @@ -41,42 +41,6 @@
>   #include <linux/dma-fence-array.h>
>   #include <linux/pci-p2pdma.h>
>   
> -/**
> - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
> - * @obj: GEM BO
> - *
> - * Sets up an in-kernel virtual mapping of the BO's memory.
> - *
> - * Returns:
> - * The virtual address of the mapping or an error pointer.
> - */
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> -	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -	int ret;
> -
> -	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> -			  &bo->dma_buf_vmap);
> -	if (ret)
> -		return ERR_PTR(ret);
> -
> -	return bo->dma_buf_vmap.virtual;
> -}
> -
> -/**
> - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
> - * @obj: GEM BO
> - * @vaddr: Virtual address (unused)
> - *
> - * Tears down the in-kernel virtual mapping of the BO's memory.
> - */
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> -	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
> -
> -	ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
>   /**
>    * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
>    * @obj: GEM BO
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> index 2c5c84a06bb9..39b5b9616fd8 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
> @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
>   					    struct dma_buf *dma_buf);
>   bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
>   				      struct amdgpu_bo *bo);
> -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
> -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>   int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
>   			  struct vm_area_struct *vma);
>   
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> index be08a63ef58c..576659827e74 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
> @@ -33,6 +33,7 @@
>   
>   #include <drm/amdgpu_drm.h>
>   #include <drm/drm_debugfs.h>
> +#include <drm/drm_gem_ttm_helper.h>
>   
>   #include "amdgpu.h"
>   #include "amdgpu_display.h"
> @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
>   	.open = amdgpu_gem_object_open,
>   	.close = amdgpu_gem_object_close,
>   	.export = amdgpu_gem_prime_export,
> -	.vmap = amdgpu_gem_prime_vmap,
> -	.vunmap = amdgpu_gem_prime_vunmap,
> +	.vmap = drm_gem_ttm_vmap,
> +	.vunmap = drm_gem_ttm_vunmap,
>   };
>   
>   /*
> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> index 132e5f955180..01296ef0d673 100644
> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
> @@ -100,7 +100,6 @@ struct amdgpu_bo {
>   	struct amdgpu_bo		*parent;
>   	struct amdgpu_bo		*shadow;
>   
> -	struct ttm_bo_kmap_obj		dma_buf_vmap;
>   	struct amdgpu_mn		*mn;
>   
>   
> diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
> index e0f4613918ad..742d43a7edf4 100644
> --- a/drivers/gpu/drm/ast/ast_cursor.c
> +++ b/drivers/gpu/drm/ast/ast_cursor.c
> @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
>   
>   	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
>   		gbo = ast->cursor.gbo[i];
> -		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> +		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
>   		drm_gem_vram_unpin(gbo);
>   		drm_gem_vram_put(gbo);
>   	}
> @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
>   	struct drm_device *dev = &ast->base;
>   	size_t size, i;
>   	struct drm_gem_vram_object *gbo;
> -	void __iomem *vaddr;
> +	struct dma_buf_map map;
>   	int ret;
>   
>   	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
> @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
>   			drm_gem_vram_put(gbo);
>   			goto err_drm_gem_vram_put;
>   		}
> -		vaddr = drm_gem_vram_vmap(gbo);
> -		if (IS_ERR(vaddr)) {
> -			ret = PTR_ERR(vaddr);
> +		ret = drm_gem_vram_vmap(gbo, &map);
> +		if (ret) {
>   			drm_gem_vram_unpin(gbo);
>   			drm_gem_vram_put(gbo);
>   			goto err_drm_gem_vram_put;
>   		}
>   
>   		ast->cursor.gbo[i] = gbo;
> -		ast->cursor.vaddr[i] = vaddr;
> +		ast->cursor.map[i] = map;
>   	}
>   
>   	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
> @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
>   	while (i) {
>   		--i;
>   		gbo = ast->cursor.gbo[i];
> -		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
> +		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
>   		drm_gem_vram_unpin(gbo);
>   		drm_gem_vram_put(gbo);
>   	}
> @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
>   {
>   	struct drm_device *dev = &ast->base;
>   	struct drm_gem_vram_object *gbo;
> +	struct dma_buf_map map;
>   	int ret;
>   	void *src;
>   	void __iomem *dst;
> @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
>   	ret = drm_gem_vram_pin(gbo, 0);
>   	if (ret)
>   		return ret;
> -	src = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(src)) {
> -		ret = PTR_ERR(src);
> +	ret = drm_gem_vram_vmap(gbo, &map);
> +	if (ret)
>   		goto err_drm_gem_vram_unpin;
> -	}
> +	src = map.vaddr; /* TODO: Use mapping abstraction properly */
>   
> -	dst = ast->cursor.vaddr[ast->cursor.next_index];
> +	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
>   
>   	/* do data transfer to cursor BO */
>   	update_cursor_image(dst, src, fb->width, fb->height);
>   
> -	drm_gem_vram_vunmap(gbo, src);
> +	drm_gem_vram_vunmap(gbo, &map);
>   	drm_gem_vram_unpin(gbo);
>   
>   	return 0;
> @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
>   	u8 __iomem *sig;
>   	u8 jreg;
>   
> -	dst = ast->cursor.vaddr[ast->cursor.next_index];
> +	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
>   
>   	sig = dst + AST_HWC_SIZE;
>   	writel(x, sig + AST_HWC_SIGNATURE_X);
> diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
> index 467049ca8430..f963141dd851 100644
> --- a/drivers/gpu/drm/ast/ast_drv.h
> +++ b/drivers/gpu/drm/ast/ast_drv.h
> @@ -28,10 +28,11 @@
>   #ifndef __AST_DRV_H__
>   #define __AST_DRV_H__
>   
> -#include <linux/types.h>
> -#include <linux/io.h>
> +#include <linux/dma-buf-map.h>
>   #include <linux/i2c.h>
>   #include <linux/i2c-algo-bit.h>
> +#include <linux/io.h>
> +#include <linux/types.h>
>   
>   #include <drm/drm_connector.h>
>   #include <drm/drm_crtc.h>
> @@ -131,7 +132,7 @@ struct ast_private {
>   
>   	struct {
>   		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
> -		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
> +		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
>   		unsigned int next_index;
>   	} cursor;
>   
> diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
> index 1da67d34e55d..a89ad4570e3c 100644
> --- a/drivers/gpu/drm/drm_gem.c
> +++ b/drivers/gpu/drm/drm_gem.c
> @@ -36,6 +36,7 @@
>   #include <linux/pagemap.h>
>   #include <linux/shmem_fs.h>
>   #include <linux/dma-buf.h>
> +#include <linux/dma-buf-map.h>
>   #include <linux/mem_encrypt.h>
>   #include <linux/pagevec.h>
>   
> @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
>   
>   void *drm_gem_vmap(struct drm_gem_object *obj)
>   {
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>   
> -	if (obj->funcs->vmap)
> -		vaddr = obj->funcs->vmap(obj);
> -	else
> -		vaddr = ERR_PTR(-EOPNOTSUPP);
> +	if (!obj->funcs->vmap)
> +		return ERR_PTR(-EOPNOTSUPP);
>   
> -	if (!vaddr)
> -		vaddr = ERR_PTR(-ENOMEM);
> +	ret = obj->funcs->vmap(obj, &map);
> +	if (ret)
> +		return ERR_PTR(ret);
> +	else if (dma_buf_map_is_null(&map))
> +		return ERR_PTR(-ENOMEM);
>   
> -	return vaddr;
> +	return map.vaddr;
>   }
>   
>   void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
>   {
> +	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
> +
>   	if (!vaddr)
>   		return;
>   
>   	if (obj->funcs->vunmap)
> -		obj->funcs->vunmap(obj, vaddr);
> +		obj->funcs->vunmap(obj, &map);
>   }
>   
>   /**
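[The drm_gem.c hunk above keeps the legacy pointer-or-ERR_PTR interface for existing callers while the callback underneath now returns an int and fills a map. A compatibility-shim sketch of that shape, with ERR_PTR/IS_ERR re-implemented here purely for illustration:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Sketch of wrapping a new-style "int vmap(obj, &addr)" callback behind a
 * legacy "void *vmap(obj)" interface, as drm_gem_vmap() does after this
 * patch. ERR_PTR()/IS_ERR()/PTR_ERR() are minimal stand-ins for the
 * kernel macros; error numbers are hard-coded for the example.
 */
#define MAX_ERRNO 4095
static void *ERR_PTR(long error) { return (void *)(intptr_t)error; }
static long PTR_ERR(const void *ptr) { return (long)(intptr_t)ptr; }
static int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

struct sketch_obj {
	/* new-style callback: 0 on success, negative errno on failure */
	int (*vmap)(struct sketch_obj *obj, void **vaddr);
};

/* sample callback standing in for a driver implementation */
static int sample_vmap_cb(struct sketch_obj *obj, void **vaddr)
{
	static char buf[16];
	(void)obj;
	*vaddr = buf;
	return 0;
}

/* legacy-style wrapper: converts int + out-parameter into ERR_PTR */
static void *sketch_legacy_vmap(struct sketch_obj *obj)
{
	void *vaddr = NULL;
	int ret;

	if (!obj->vmap)
		return ERR_PTR(-95);	/* -EOPNOTSUPP */
	ret = obj->vmap(obj, &vaddr);
	if (ret)
		return ERR_PTR(ret);
	if (!vaddr)
		return ERR_PTR(-12);	/* -ENOMEM */
	return vaddr;
}
```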
> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> index d527485ea0b7..b57e3e9222f0 100644
> --- a/drivers/gpu/drm/drm_gem_cma_helper.c
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>    * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
>    *     address space
>    * @obj: GEM object
> + * @map: Returns the kernel virtual address of the CMA GEM object's backing
> + *       store.
>    *
>    * This function maps a buffer exported via DRM PRIME into the kernel's
>    * virtual address space. Since the CMA buffers are already mapped into the
> @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
>    * driver's &drm_gem_object_funcs.vmap callback.
>    *
>    * Returns:
> - * The kernel virtual address of the CMA GEM object's backing store.
> + * 0 on success, or a negative error code otherwise.
>    */
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
>   
> -	return cma_obj->vaddr;
> +	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
> +
> +	return 0;
>   }
>   EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
>   
> diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> index fb11df7aced5..5553f58f68f3 100644
> --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_unpin);
>   
> -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
> +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
> -	struct dma_buf_map map;
>   	int ret = 0;
>   
> -	if (shmem->vmap_use_count++ > 0)
> -		return shmem->vaddr;
> +	if (shmem->vmap_use_count++ > 0) {
> +		dma_buf_map_set_vaddr(map, shmem->vaddr);
> +		return 0;
> +	}
>   
>   	if (obj->import_attach) {
> -		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
> -		if (!ret)
> -			shmem->vaddr = map.vaddr;
> +		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
> +		if (!ret) {
> +			if (WARN_ON(map->is_iomem)) {
> +				ret = -EIO;
> +				goto err_put_pages;
> +			}
> +			shmem->vaddr = map->vaddr;
> +		}
>   	} else {
>   		pgprot_t prot = PAGE_KERNEL;
>   
> @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>   				    VM_MAP, prot);
>   		if (!shmem->vaddr)
>   			ret = -ENOMEM;
> +		else
> +			dma_buf_map_set_vaddr(map, shmem->vaddr);
>   	}
>   
>   	if (ret) {
> @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>   		goto err_put_pages;
>   	}
>   
> -	return shmem->vaddr;
> +	return 0;
>   
>   err_put_pages:
>   	if (!obj->import_attach)
> @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>   err_zero_use:
>   	shmem->vmap_use_count = 0;
>   
> -	return ERR_PTR(ret);
> +	return ret;
>   }
>   
>   /*
>    * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
>    * @shmem: shmem GEM object
> + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
> + *       store.
>    *
>    * This function makes sure that a contiguous kernel virtual address mapping
>    * exists for the buffer backing the shmem GEM object.
> @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
>    * Returns:
>    * 0 on success or a negative error code on failure.
>    */
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
> -	void *vaddr;
>   	int ret;
>   
>   	ret = mutex_lock_interruptible(&shmem->vmap_lock);
>   	if (ret)
> -		return ERR_PTR(ret);
> -	vaddr = drm_gem_shmem_vmap_locked(shmem);
> +		return ret;
> +	ret = drm_gem_shmem_vmap_locked(shmem, map);
>   	mutex_unlock(&shmem->vmap_lock);
>   
> -	return vaddr;
> +	return ret;
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_vmap);
>   
> -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
> +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
> +					struct dma_buf_map *map)
>   {
>   	struct drm_gem_object *obj = &shmem->base;
> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
>   
>   	if (WARN_ON_ONCE(!shmem->vmap_use_count))
>   		return;
> @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>   		return;
>   
>   	if (obj->import_attach)
> -		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
> +		dma_buf_vunmap(obj->import_attach->dmabuf, map);
>   	else
>   		vunmap(shmem->vaddr);
>   
> @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>   /*
>    * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
>    * @shmem: shmem GEM object
> + * @map: Kernel virtual address where the SHMEM GEM object was mapped
>    *
>    * This function cleans up a kernel virtual address mapping acquired by
>    * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
> @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
>    * also be called by drivers directly, in which case it will hide the
>    * differences between dma-buf imported and natively allocated objects.
>    */
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
>   
>   	mutex_lock(&shmem->vmap_lock);
> -	drm_gem_shmem_vunmap_locked(shmem);
> +	drm_gem_shmem_vunmap_locked(shmem, map);
>   	mutex_unlock(&shmem->vmap_lock);
>   }
>   EXPORT_SYMBOL(drm_gem_shmem_vunmap);
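[For context: drm_gem_shmem_vmap_locked() above follows a reference-counted mapping pattern — the first caller creates the mapping, later callers just bump `vmap_use_count` and get the cached address back. A self-contained sketch with hypothetical names (the real helper additionally handles dma-buf imports, pages, and locking):]

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of a use-count-guarded vmap/vunmap pair. Illustration only. */
struct sketch_shmem {
	void *vaddr;
	unsigned int vmap_use_count;
};

/* stand-in for vmap() of the object's backing pages */
static void *fake_backing_vmap(void)
{
	static char backing[64];
	return backing;
}

static int sketch_vmap(struct sketch_shmem *obj, void **out)
{
	if (obj->vmap_use_count++ > 0) {
		*out = obj->vaddr;	/* cached mapping, just refcounted */
		return 0;
	}
	obj->vaddr = fake_backing_vmap();
	if (!obj->vaddr) {
		obj->vmap_use_count = 0;
		return -1;		/* -ENOMEM in the kernel */
	}
	*out = obj->vaddr;
	return 0;
}

static void sketch_vunmap(struct sketch_shmem *obj)
{
	if (obj->vmap_use_count == 0)
		return;			/* WARN_ON_ONCE in the kernel */
	if (--obj->vmap_use_count > 0)
		return;
	obj->vaddr = NULL;		/* real code calls vunmap() here */
}
```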
> diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
> index 2d5ed30518f1..4d8553b28558 100644
> --- a/drivers/gpu/drm/drm_gem_vram_helper.c
> +++ b/drivers/gpu/drm/drm_gem_vram_helper.c
> @@ -1,5 +1,6 @@
>   // SPDX-License-Identifier: GPL-2.0-or-later
>   
> +#include <linux/dma-buf-map.h>
>   #include <linux/module.h>
>   
>   #include <drm/drm_debugfs.h>
> @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
>   	 * up; only release the GEM object.
>   	 */
>   
> -	WARN_ON(gbo->kmap_use_count);
> -	WARN_ON(gbo->kmap.virtual);
> +	WARN_ON(gbo->vmap_use_count);
> +	WARN_ON(dma_buf_map_is_set(&gbo->map));
>   
>   	drm_gem_object_release(&gbo->bo.base);
>   }
> @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
>   }
>   EXPORT_SYMBOL(drm_gem_vram_unpin);
>   
> -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
> +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
> +				    struct dma_buf_map *map)
>   {
>   	int ret;
> -	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> -	bool is_iomem;
>   
> -	if (gbo->kmap_use_count > 0)
> +	if (gbo->vmap_use_count > 0)
>   		goto out;
>   
> -	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
> +	ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
>   	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>   
>   out:
> -	++gbo->kmap_use_count;
> -	return ttm_kmap_obj_virtual(kmap, &is_iomem);
> +	++gbo->vmap_use_count;
> +	*map = gbo->map;
> +
> +	return 0;
>   }
>   
> -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
> +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
> +				       struct dma_buf_map *map)
>   {
> -	if (WARN_ON_ONCE(!gbo->kmap_use_count))
> +	struct drm_device *dev = gbo->bo.base.dev;
> +
> +	if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
>   		return;
> -	if (--gbo->kmap_use_count > 0)
> +
> +	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
> +		return; /* BUG: map not mapped from this BO */
> +
> +	if (--gbo->vmap_use_count > 0)
>   		return;
>   
>   	/*
> @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
>   /**
>    * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
>    *                       space
> - * @gbo:	The GEM VRAM object to map
> + * @gbo: The GEM VRAM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + *       store.
>    *
>    * The vmap function pins a GEM VRAM object to its current location, either
>    * system or video memory, and maps its buffer into kernel address space.
> @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
>    * unmap and unpin the GEM VRAM object.
>    *
>    * Returns:
> - * The buffer's virtual address on success, or
> - * an ERR_PTR()-encoded error code otherwise.
> + * 0 on success, or a negative error code otherwise.
>    */
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
>   {
>   	int ret;
> -	void *base;
>   
>   	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
>   	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>   
>   	ret = drm_gem_vram_pin_locked(gbo, 0);
>   	if (ret)
>   		goto err_ttm_bo_unreserve;
> -	base = drm_gem_vram_kmap_locked(gbo);
> -	if (IS_ERR(base)) {
> -		ret = PTR_ERR(base);
> +	ret = drm_gem_vram_kmap_locked(gbo, map);
> +	if (ret)
>   		goto err_drm_gem_vram_unpin_locked;
> -	}
>   
>   	ttm_bo_unreserve(&gbo->bo);
>   
> -	return base;
> +	return 0;
>   
>   err_drm_gem_vram_unpin_locked:
>   	drm_gem_vram_unpin_locked(gbo);
>   err_ttm_bo_unreserve:
>   	ttm_bo_unreserve(&gbo->bo);
> -	return ERR_PTR(ret);
> +	return ret;
>   }
>   EXPORT_SYMBOL(drm_gem_vram_vmap);
>   
>   /**
>    * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
> - * @gbo:	The GEM VRAM object to unmap
> - * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
> + * @gbo: The GEM VRAM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
>    *
>    * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
>    * the documentation for drm_gem_vram_vmap() for more information.
>    */
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
>   {
>   	int ret;
>   
> @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
>   	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
>   		return;
>   
> -	drm_gem_vram_kunmap_locked(gbo);
> +	drm_gem_vram_kunmap_locked(gbo, map);
>   	drm_gem_vram_unpin_locked(gbo);
>   
>   	ttm_bo_unreserve(&gbo->bo);
> @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
>   					       bool evict,
>   					       struct ttm_resource *new_mem)
>   {
> -	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
> +	struct ttm_buffer_object *bo = &gbo->bo;
> +	struct drm_device *dev = bo->base.dev;
>   
> -	if (WARN_ON_ONCE(gbo->kmap_use_count))
> +	if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
>   		return;
>   
> -	if (!kmap->virtual)
> -		return;
> -	ttm_bo_kunmap(kmap);
> -	kmap->virtual = NULL;
> +	ttm_bo_vunmap(bo, &gbo->map);
>   }
>   
>   static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
> @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
>   }
>   
>   /**
> - * drm_gem_vram_object_vmap() - \
> -	Implements &struct drm_gem_object_funcs.vmap
> - * @gem:	The GEM object to map
> + * drm_gem_vram_object_vmap() -
> + *	Implements &struct drm_gem_object_funcs.vmap
> + * @gem: The GEM object to map
> + * @map: Returns the kernel virtual address of the VRAM GEM object's backing
> + *       store.
>    *
>    * Returns:
> - * The buffers virtual address on success, or
> - * NULL otherwise.
> + * 0 on success, or a negative error code otherwise.
>    */
> -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
> +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
>   {
>   	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
> -	void *base;
>   
> -	base = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(base))
> -		return NULL;
> -	return base;
> +	return drm_gem_vram_vmap(gbo, map);
>   }
>   
>   /**
> - * drm_gem_vram_object_vunmap() - \
> -	Implements &struct drm_gem_object_funcs.vunmap
> - * @gem:	The GEM object to unmap
> - * @vaddr:	The mapping's base address
> + * drm_gem_vram_object_vunmap() -
> + *	Implements &struct drm_gem_object_funcs.vunmap
> + * @gem: The GEM object to unmap
> + * @map: Kernel virtual address where the VRAM GEM object was mapped
>    */
> -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
> -				       void *vaddr)
> +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
>   {
>   	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
>   
> -	drm_gem_vram_vunmap(gbo, vaddr);
> +	drm_gem_vram_vunmap(gbo, map);
>   }
>   
>   /*
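[One detail worth highlighting from the VRAM-helper hunk: drm_gem_vram_kunmap_locked() now checks that the caller's map compares equal to the BO's cached mapping before dropping the use count. A sketch of an equality check modeled on dma_buf_map_is_equal() — illustration only, not the kernel code:]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* tagged map, modeled on struct dma_buf_map (see <linux/dma-buf-map.h>) */
struct sketch_map {
	union {
		void *vaddr;
		uintptr_t vaddr_iomem;
	};
	bool is_iomem;
};

static bool sketch_map_is_equal(const struct sketch_map *lhs,
				const struct sketch_map *rhs)
{
	if (lhs->is_iomem != rhs->is_iomem)
		return false;	/* one is system memory, one is I/O memory */
	if (lhs->is_iomem)
		return lhs->vaddr_iomem == rhs->vaddr_iomem;
	return lhs->vaddr == rhs->vaddr;
}
```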
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> index 9682c26d89bb..f5be627e1de0 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
> @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
>   int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>   int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
>   struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>   int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
>   			   struct vm_area_struct *vma);
>   struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
> diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> index a6d9932a32ae..bc2543dd987d 100644
> --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
> @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
>   	return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
>   }
>   
> -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
> +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
> -	return etnaviv_gem_vmap(obj);
> +	void *vaddr = etnaviv_gem_vmap(obj);
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>   }
>   
>   int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 11223fe348df..832e5280a6ed 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
>   	return drm_gem_shmem_pin(obj);
>   }
>   
> -static void *lima_gem_vmap(struct drm_gem_object *obj)
> +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct lima_bo *bo = to_lima_bo(obj);
>   
>   	if (bo->heap_size)
> -		return ERR_PTR(-EINVAL);
> +		return -EINVAL;
>   
> -	return drm_gem_shmem_vmap(obj);
> +	return drm_gem_shmem_vmap(obj, map);
>   }
>   
>   static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
> diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a40d..a070a85f8f36 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -1,6 +1,7 @@
>   // SPDX-License-Identifier: GPL-2.0 OR MIT
>   /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
>   
> +#include <linux/dma-buf-map.h>
>   #include <linux/kthread.h>
>   #include <linux/slab.h>
>   #include <linux/vmalloc.h>
> @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
>   	struct lima_dump_chunk_buffer *buffer_chunk;
>   	u32 size, task_size, mem_size;
>   	int i;
> +	struct dma_buf_map map;
> +	int ret;
>   
>   	mutex_lock(&dev->error_task_list_lock);
>   
> @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
>   		} else {
>   			buffer_chunk->size = lima_bo_size(bo);
>   
> -			data = drm_gem_shmem_vmap(&bo->base.base);
> -			if (IS_ERR_OR_NULL(data)) {
> +			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
> +			if (ret) {
>   				kvfree(et);
>   				goto out;
>   			}
>   
> -			memcpy(buffer_chunk + 1, data, buffer_chunk->size);
> +			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
>   
> -			drm_gem_shmem_vunmap(&bo->base.base, data);
> +			drm_gem_shmem_vunmap(&bo->base.base, &map);
>   		}
>   
>   		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..8ef76769b97f 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -9,6 +9,7 @@
>    */
>   
>   #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>   
>   #include <drm/drm_atomic_helper.h>
>   #include <drm/drm_atomic_state_helper.h>
> @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
>   		      struct drm_rect *clip)
>   {
>   	struct drm_device *dev = &mdev->base;
> +	struct dma_buf_map map;
>   	void *vmap;
> +	int ret;
>   
> -	vmap = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (drm_WARN_ON(dev, !vmap))
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (drm_WARN_ON(dev, ret))
>   		return; /* BUG: SHMEM BO should always be vmapped */
> +	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>   
>   	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
>   
> -	drm_gem_shmem_vunmap(fb->obj[0], vmap);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>   
>   	/* Always scanout image at VRAM offset 0 */
>   	mgag200_set_startadd(mdev, (u32)0);
> diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
> index 5dec1e5694b7..9436310d0854 100644
> --- a/drivers/gpu/drm/nouveau/Kconfig
> +++ b/drivers/gpu/drm/nouveau/Kconfig
> @@ -6,6 +6,7 @@ config DRM_NOUVEAU
>   	select FW_LOADER
>   	select DRM_KMS_HELPER
>   	select DRM_TTM
> +	select DRM_TTM_HELPER
>   	select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
>   	select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
>   	select X86_PLATFORM_DEVICES if ACPI && X86
> diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
> index 641ef6298a0e..6045b85a762a 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_bo.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
> @@ -39,8 +39,6 @@ struct nouveau_bo {
>   	unsigned mode;
>   
>   	struct nouveau_drm_tile *tile;
> -
> -	struct ttm_bo_kmap_obj dma_buf_vmap;
>   };
>   
>   static inline struct nouveau_bo *
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
> index 9a421c3949de..f942b526b0a5 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
> @@ -24,6 +24,8 @@
>    *
>    */
>   
> +#include <drm/drm_gem_ttm_helper.h>
> +
>   #include "nouveau_drv.h"
>   #include "nouveau_dma.h"
>   #include "nouveau_fence.h"
> @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
>   	.pin = nouveau_gem_prime_pin,
>   	.unpin = nouveau_gem_prime_unpin,
>   	.get_sg_table = nouveau_gem_prime_get_sg_table,
> -	.vmap = nouveau_gem_prime_vmap,
> -	.vunmap = nouveau_gem_prime_vunmap,
> +	.vmap = drm_gem_ttm_vmap,
> +	.vunmap = drm_gem_ttm_vunmap,
>   };
>   
>   int
> diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
> index b35c180322e2..3b919c7c931c 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_gem.h
> +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
> @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
>   extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
>   extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
>   	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
> -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
> -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
>   
>   #endif
> diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
> index a8264aebf3d4..2f16b5249283 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_prime.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
> @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
>   	return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
>   }
>   
> -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> -	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> -	int ret;
> -
> -	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
> -			  &nvbo->dma_buf_vmap);
> -	if (ret)
> -		return ERR_PTR(ret);
> -
> -	return nvbo->dma_buf_vmap.virtual;
> -}
> -
> -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> -	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
> -
> -	ttm_bo_kunmap(&nvbo->dma_buf_vmap);
> -}
> -
>   struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
>   							 struct dma_buf_attachment *attach,
>   							 struct sg_table *sg)
> diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> index fdbc8d949135..5ab03d605f57 100644
> --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
> @@ -5,6 +5,7 @@
>   #include <drm/drm_gem_shmem_helper.h>
>   #include <drm/panfrost_drm.h>
>   #include <linux/completion.h>
> +#include <linux/dma-buf-map.h>
>   #include <linux/iopoll.h>
>   #include <linux/pm_runtime.h>
>   #include <linux/slab.h>
> @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>   {
>   	struct panfrost_file_priv *user = file_priv->driver_priv;
>   	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> +	struct dma_buf_map map;
>   	struct drm_gem_shmem_object *bo;
>   	u32 cfg, as;
>   	int ret;
> @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>   		goto err_close_bo;
>   	}
>   
> -	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
> -	if (IS_ERR(perfcnt->buf)) {
> -		ret = PTR_ERR(perfcnt->buf);
> +	ret = drm_gem_shmem_vmap(&bo->base, &map);
> +	if (ret)
>   		goto err_put_mapping;
> -	}
> +	perfcnt->buf = map.vaddr;
>   
>   	/*
>   	 * Invalidate the cache and clear the counters to start from a fresh
> @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
>   	return 0;
>   
>   err_vunmap:
> -	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
> +	drm_gem_shmem_vunmap(&bo->base, &map);
>   err_put_mapping:
>   	panfrost_gem_mapping_put(perfcnt->mapping);
>   err_close_bo:
> @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
>   {
>   	struct panfrost_file_priv *user = file_priv->driver_priv;
>   	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
> +	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
>   
>   	if (user != perfcnt->user)
>   		return -EINVAL;
> @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
>   		  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
>   
>   	perfcnt->user = NULL;
> -	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
> +	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
>   	perfcnt->buf = NULL;
>   	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
>   	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
> diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
> index 45fd76e04bdc..e165fa9b2089 100644
> --- a/drivers/gpu/drm/qxl/qxl_display.c
> +++ b/drivers/gpu/drm/qxl/qxl_display.c
> @@ -25,6 +25,7 @@
>   
>   #include <linux/crc32.h>
>   #include <linux/delay.h>
> +#include <linux/dma-buf-map.h>
>   
>   #include <drm/drm_drv.h>
>   #include <drm/drm_atomic.h>
> @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>   	struct drm_gem_object *obj;
>   	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
>   	int ret;
> +	struct dma_buf_map user_map;
> +	struct dma_buf_map cursor_map;
>   	void *user_ptr;
>   	int size = 64*64*4;
>   
> @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>   		user_bo = gem_to_qxl_bo(obj);
>   
>   		/* pinning is done in the prepare/cleanup framebuffer */
> -		ret = qxl_bo_kmap(user_bo, &user_ptr);
> +		ret = qxl_bo_kmap(user_bo, &user_map);
>   		if (ret)
>   			goto out_free_release;
> +		user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
>   
>   		ret = qxl_alloc_bo_reserved(qdev, release,
>   					    sizeof(struct qxl_cursor) + size,
> @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
>   		if (ret)
>   			goto out_unpin;
>   
> -		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
> +		ret = qxl_bo_kmap(cursor_bo, &cursor_map);
>   		if (ret)
>   			goto out_backoff;
>   
> @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
>   {
>   	int ret;
>   	struct drm_gem_object *gobj;
> +	struct dma_buf_map map;
>   	int monitors_config_size = sizeof(struct qxl_monitors_config) +
>   		qxl_num_crtc * sizeof(struct qxl_head);
>   
> @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
>   	if (ret)
>   		return ret;
>   
> -	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
> +	qxl_bo_kmap(qdev->monitors_config_bo, &map);
>   
>   	qdev->monitors_config = qdev->monitors_config_bo->kptr;
>   	qdev->ram_header->monitors_config =
> diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
> index 3599db096973..7b7acb910780 100644
> --- a/drivers/gpu/drm/qxl/qxl_draw.c
> +++ b/drivers/gpu/drm/qxl/qxl_draw.c
> @@ -20,6 +20,8 @@
>    * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
>    */
>   
> +#include <linux/dma-buf-map.h>
> +
>   #include <drm/drm_fourcc.h>
>   
>   #include "qxl_drv.h"
> @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
>   					      unsigned int num_clips,
>   					      struct qxl_bo *clips_bo)
>   {
> +	struct dma_buf_map map;
>   	struct qxl_clip_rects *dev_clips;
>   	int ret;
>   
> -	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
> -	if (ret) {
> +	ret = qxl_bo_kmap(clips_bo, &map);
> +	if (ret)
>   		return NULL;
> -	}
> +	dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
> +
>   	dev_clips->num_rects = num_clips;
>   	dev_clips->chunk.next_chunk = 0;
>   	dev_clips->chunk.prev_chunk = 0;
> @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
>   	int stride = fb->pitches[0];
>   	/* depth is not actually interesting, we don't mask with it */
>   	int depth = fb->format->cpp[0] * 8;
> +	struct dma_buf_map surface_map;
>   	uint8_t *surface_base;
>   	struct qxl_release *release;
>   	struct qxl_bo *clips_bo;
> @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
>   	if (ret)
>   		goto out_release_backoff;
>   
> -	ret = qxl_bo_kmap(bo, (void **)&surface_base);
> +	ret = qxl_bo_kmap(bo, &surface_map);
>   	if (ret)
>   		goto out_release_backoff;
> +	surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
>   
>   	ret = qxl_image_init(qdev, release, dimage, surface_base,
>   			     left - dumb_shadow_offset,
> diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
> index 3602e8b34189..eb437fea5d9e 100644
> --- a/drivers/gpu/drm/qxl/qxl_drv.h
> +++ b/drivers/gpu/drm/qxl/qxl_drv.h
> @@ -30,6 +30,7 @@
>    * Definitions taken from spice-protocol, plus kernel driver specific bits.
>    */
>   
> +#include <linux/dma-buf-map.h>
>   #include <linux/dma-fence.h>
>   #include <linux/firmware.h>
>   #include <linux/platform_device.h>
> @@ -50,6 +51,8 @@
>   
>   #include "qxl_dev.h"
>   
> +struct dma_buf_map;
> +
>   #define DRIVER_AUTHOR		"Dave Airlie"
>   
>   #define DRIVER_NAME		"qxl"
> @@ -79,7 +82,7 @@ struct qxl_bo {
>   	/* Protected by tbo.reserved */
>   	struct ttm_place		placements[3];
>   	struct ttm_placement		placement;
> -	struct ttm_bo_kmap_obj		kmap;
> +	struct dma_buf_map		map;
>   	void				*kptr;
>   	unsigned int                    map_count;
>   	int                             type;
> @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
>   void qxl_gem_object_close(struct drm_gem_object *obj,
>   			  struct drm_file *file_priv);
>   void qxl_bo_force_delete(struct qxl_device *qdev);
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
>   
>   /* qxl_dumb.c */
>   int qxl_mode_dumb_create(struct drm_file *file_priv,
> @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
>   struct drm_gem_object *qxl_gem_prime_import_sg_table(
>   	struct drm_device *dev, struct dma_buf_attachment *attach,
>   	struct sg_table *sgt);
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> +			  struct dma_buf_map *map);
>   int qxl_gem_prime_mmap(struct drm_gem_object *obj,
>   				struct vm_area_struct *vma);
>   
> diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
> index 940e99354f49..755df4d8f95f 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.c
> +++ b/drivers/gpu/drm/qxl/qxl_object.c
> @@ -23,10 +23,12 @@
>    *          Alon Levy
>    */
>   
> +#include <linux/dma-buf-map.h>
> +#include <linux/io-mapping.h>
> +
>   #include "qxl_drv.h"
>   #include "qxl_object.h"
>   
> -#include <linux/io-mapping.h>
>   static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
>   {
>   	struct qxl_bo *bo;
> @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
>   	return 0;
>   }
>   
> -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
> +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
>   {
> -	bool is_iomem;
>   	int r;
>   
>   	if (bo->kptr) {
> -		if (ptr)
> -			*ptr = bo->kptr;
>   		bo->map_count++;
> -		return 0;
> +		goto out;
>   	}
> -	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
> +	r = ttm_bo_vmap(&bo->tbo, &bo->map);
>   	if (r)
>   		return r;
> -	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
> -	if (ptr)
> -		*ptr = bo->kptr;
>   	bo->map_count = 1;
> +
> +	/* TODO: Remove kptr in favor of map everywhere. */
> +	if (bo->map.is_iomem)
> +		bo->kptr = (void *)bo->map.vaddr_iomem;
> +	else
> +		bo->kptr = bo->map.vaddr;
> +
> +out:
> +	*map = bo->map;
>   	return 0;
>   }
>   
> @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
>   	void *rptr;
>   	int ret;
>   	struct io_mapping *map;
> +	struct dma_buf_map bo_map;
>   
>   	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
>   		map = qdev->vram_mapping;
> @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
>   		return rptr;
>   	}
>   
> -	ret = qxl_bo_kmap(bo, &rptr);
> +	ret = qxl_bo_kmap(bo, &bo_map);
>   	if (ret)
>   		return NULL;
> +	rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
>   
>   	rptr += page_offset * PAGE_SIZE;
>   	return rptr;
> @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
>   	if (bo->map_count > 0)
>   		return;
>   	bo->kptr = NULL;
> -	ttm_bo_kunmap(&bo->kmap);
> +	ttm_bo_vunmap(&bo->tbo, &bo->map);
>   }
>   
>   void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
> diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
> index 09a5c818324d..ebf24c9d2bf2 100644
> --- a/drivers/gpu/drm/qxl/qxl_object.h
> +++ b/drivers/gpu/drm/qxl/qxl_object.h
> @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
>   			 bool kernel, bool pinned, u32 domain,
>   			 struct qxl_surface *surf,
>   			 struct qxl_bo **bo_ptr);
> -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
> +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
>   extern void qxl_bo_kunmap(struct qxl_bo *bo);
>   void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
>   void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
> diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
> index 7d3816fca5a8..4aa949799446 100644
> --- a/drivers/gpu/drm/qxl/qxl_prime.c
> +++ b/drivers/gpu/drm/qxl/qxl_prime.c
> @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
>   	return ERR_PTR(-ENOSYS);
>   }
>   
> -void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
> +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct qxl_bo *bo = gem_to_qxl_bo(obj);
> -	void *ptr;
>   	int ret;
>   
> -	ret = qxl_bo_kmap(bo, &ptr);
> +	ret = qxl_bo_kmap(bo, map);
>   	if (ret < 0)
> -		return ERR_PTR(ret);
> +		return ret;
>   
> -	return ptr;
> +	return 0;
>   }
>   
> -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
> +			  struct dma_buf_map *map)
>   {
>   	struct qxl_bo *bo = gem_to_qxl_bo(obj);
>   
> diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
> index 5d54bccebd4d..44cb5ee6fc20 100644
> --- a/drivers/gpu/drm/radeon/radeon.h
> +++ b/drivers/gpu/drm/radeon/radeon.h
> @@ -509,7 +509,6 @@ struct radeon_bo {
>   	/* Constant after initialization */
>   	struct radeon_device		*rdev;
>   
> -	struct ttm_bo_kmap_obj		dma_buf_vmap;
>   	pid_t				pid;
>   
>   #ifdef CONFIG_MMU_NOTIFIER
> diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
> index 0ccd7213e41f..d2876ce3bc9e 100644
> --- a/drivers/gpu/drm/radeon/radeon_gem.c
> +++ b/drivers/gpu/drm/radeon/radeon_gem.c
> @@ -31,6 +31,7 @@
>   #include <drm/drm_debugfs.h>
>   #include <drm/drm_device.h>
>   #include <drm/drm_file.h>
> +#include <drm/drm_gem_ttm_helper.h>
>   #include <drm/radeon_drm.h>
>   
>   #include "radeon.h"
> @@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
>   struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
>   int radeon_gem_prime_pin(struct drm_gem_object *obj);
>   void radeon_gem_prime_unpin(struct drm_gem_object *obj);
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
>   
>   static const struct drm_gem_object_funcs radeon_gem_object_funcs;
>   
> @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
>   	.pin = radeon_gem_prime_pin,
>   	.unpin = radeon_gem_prime_unpin,
>   	.get_sg_table = radeon_gem_prime_get_sg_table,
> -	.vmap = radeon_gem_prime_vmap,
> -	.vunmap = radeon_gem_prime_vunmap,
> +	.vmap = drm_gem_ttm_vmap,
> +	.vunmap = drm_gem_ttm_vunmap,
>   };
>   
>   /*
> diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
> index b9de0e51c0be..088d39a51c0d 100644
> --- a/drivers/gpu/drm/radeon/radeon_prime.c
> +++ b/drivers/gpu/drm/radeon/radeon_prime.c
> @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
>   	return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
>   }
>   
> -void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
> -{
> -	struct radeon_bo *bo = gem_to_radeon_bo(obj);
> -	int ret;
> -
> -	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
> -			  &bo->dma_buf_vmap);
> -	if (ret)
> -		return ERR_PTR(ret);
> -
> -	return bo->dma_buf_vmap.virtual;
> -}
> -
> -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> -{
> -	struct radeon_bo *bo = gem_to_radeon_bo(obj);
> -
> -	ttm_bo_kunmap(&bo->dma_buf_vmap);
> -}
> -
>   struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
>   							struct dma_buf_attachment *attach,
>   							struct sg_table *sg)
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> index 7d5ebb10323b..7971f57436dd 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
> @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
>   	return ERR_PTR(ret);
>   }
>   
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>   
> -	if (rk_obj->pages)
> -		return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> -			    pgprot_writecombine(PAGE_KERNEL));
> +	if (rk_obj->pages) {
> +		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
> +				  pgprot_writecombine(PAGE_KERNEL));
> +		if (!vaddr)
> +			return -ENOMEM;
> +		dma_buf_map_set_vaddr(map, vaddr);
> +		return 0;
> +	}
>   
>   	if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
> -		return NULL;
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
>   
> -	return rk_obj->kvaddr;
> +	return 0;
>   }
>   
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
>   
>   	if (rk_obj->pages) {
> -		vunmap(vaddr);
> +		vunmap(map->vaddr);
>   		return;
>   	}
>   
> diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> index 7ffc541bea07..5a70a56cd406 100644
> --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
> @@ -31,8 +31,8 @@ struct drm_gem_object *
>   rockchip_gem_prime_import_sg_table(struct drm_device *dev,
>   				   struct dma_buf_attachment *attach,
>   				   struct sg_table *sg);
> -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
> -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>   
>   /* drm driver mmap file operations */
>   int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
> diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
> index 744a8e337e41..c02e35ed6e76 100644
> --- a/drivers/gpu/drm/tiny/cirrus.c
> +++ b/drivers/gpu/drm/tiny/cirrus.c
> @@ -17,6 +17,7 @@
>    */
>   
>   #include <linux/console.h>
> +#include <linux/dma-buf-map.h>
>   #include <linux/module.h>
>   #include <linux/pci.h>
>   
> @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>   			       struct drm_rect *rect)
>   {
>   	struct cirrus_device *cirrus = to_cirrus(fb->dev);
> +	struct dma_buf_map map;
>   	void *vmap;
>   	int idx, ret;
>   
> @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>   	if (!drm_dev_enter(&cirrus->dev, &idx))
>   		goto out;
>   
> -	ret = -ENOMEM;
> -	vmap = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (!vmap)
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret)
>   		goto out_dev_exit;
> +	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
>   
>   	if (cirrus->cpp == fb->format->cpp[0])
>   		drm_fb_memcpy_dstclip(cirrus->vram,
> @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
>   	else
>   		WARN_ON_ONCE("cpp mismatch");
>   
> -	drm_gem_shmem_vunmap(fb->obj[0], vmap);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>   	ret = 0;
>   
>   out_dev_exit:
> diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
> index cc397671f689..12a890cea6e9 100644
> --- a/drivers/gpu/drm/tiny/gm12u320.c
> +++ b/drivers/gpu/drm/tiny/gm12u320.c
> @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>   {
>   	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
>   	struct drm_framebuffer *fb;
> +	struct dma_buf_map map;
>   	void *vaddr;
>   	u8 *src;
>   
> @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>   	y1 = gm12u320->fb_update.rect.y1;
>   	y2 = gm12u320->fb_update.rect.y2;
>   
> -	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (IS_ERR(vaddr)) {
> -		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret) {
> +		GM12U320_ERR("failed to vmap fb: %d\n", ret);
>   		goto put_fb;
>   	}
> +	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>   
>   	if (fb->obj[0]->import_attach) {
>   		ret = dma_buf_begin_cpu_access(
> @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
>   			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
>   	}
>   vunmap:
> -	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>   put_fb:
>   	drm_framebuffer_put(fb);
>   	gm12u320->fb_update.fb = NULL;
> diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
> index fef43f4e3bac..42eeba1dfdbf 100644
> --- a/drivers/gpu/drm/udl/udl_modeset.c
> +++ b/drivers/gpu/drm/udl/udl_modeset.c
> @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>   	struct urb *urb;
>   	struct drm_rect clip;
>   	int log_bpp;
> +	struct dma_buf_map map;
>   	void *vaddr;
>   
>   	ret = udl_log_cpp(fb->format->cpp[0]);
> @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>   			return ret;
>   	}
>   
> -	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
> -	if (IS_ERR(vaddr)) {
> +	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
> +	if (ret) {
>   		DRM_ERROR("failed to vmap fb\n");
>   		goto out_dma_buf_end_cpu_access;
>   	}
> +	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
>   
>   	urb = udl_get_urb(dev);
>   	if (!urb)
> @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
>   	ret = 0;
>   
>   out_drm_gem_shmem_vunmap:
> -	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
> +	drm_gem_shmem_vunmap(fb->obj[0], &map);
>   out_dma_buf_end_cpu_access:
>   	if (import_attach) {
>   		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
> diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> index 931c55126148..f268fb258c83 100644
> --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
> +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
> @@ -9,6 +9,8 @@
>    *          Michael Thayer <michael.thayer@oracle.com,
>    *          Hans de Goede <hdegoede@redhat.com>
>    */
> +
> +#include <linux/dma-buf-map.h>
>   #include <linux/export.h>
>   
>   #include <drm/drm_atomic.h>
> @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>   	u32 height = plane->state->crtc_h;
>   	size_t data_size, mask_size;
>   	u32 flags;
> +	struct dma_buf_map map;
> +	int ret;
>   	u8 *src;
>   
>   	/*
> @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>   
>   	vbox_crtc->cursor_enabled = true;
>   
> -	src = drm_gem_vram_vmap(gbo);
> -	if (IS_ERR(src)) {
> +	ret = drm_gem_vram_vmap(gbo, &map);
> +	if (ret) {
>   		/*
>   		 * BUG: we should have pinned the BO in prepare_fb().
>   		 */
> @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>   		DRM_WARN("Could not map cursor bo, skipping update\n");
>   		return;
>   	}
> +	src = map.vaddr; /* TODO: Use mapping abstraction properly */
>   
>   	/*
>   	 * The mask must be calculated based on the alpha
> @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
>   	data_size = width * height * 4 + mask_size;
>   
>   	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
> -	drm_gem_vram_vunmap(gbo, src);
> +	drm_gem_vram_vunmap(gbo, &map);
>   
>   	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
>   		VBOX_MOUSE_POINTER_ALPHA;
> diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
> index 557f0d1e6437..f290a9a942dc 100644
> --- a/drivers/gpu/drm/vc4/vc4_bo.c
> +++ b/drivers/gpu/drm/vc4/vc4_bo.c
> @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
>   	return drm_gem_cma_prime_mmap(obj, vma);
>   }
>   
> -void *vc4_prime_vmap(struct drm_gem_object *obj)
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct vc4_bo *bo = to_vc4_bo(obj);
>   
>   	if (bo->validated_shader) {
>   		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
> -		return ERR_PTR(-EINVAL);
> +		return -EINVAL;
>   	}
>   
> -	return drm_gem_cma_prime_vmap(obj);
> +	return drm_gem_cma_prime_vmap(obj, map);
>   }
>   
>   struct drm_gem_object *
> diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
> index cc79b1aaa878..904f2c36c963 100644
> --- a/drivers/gpu/drm/vc4/vc4_drv.h
> +++ b/drivers/gpu/drm/vc4/vc4_drv.h
> @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
>   struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
>   						 struct dma_buf_attachment *attach,
>   						 struct sg_table *sgt);
> -void *vc4_prime_vmap(struct drm_gem_object *obj);
> +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>   int vc4_bo_cache_init(struct drm_device *dev);
>   void vc4_bo_cache_destroy(struct drm_device *dev);
>   int vc4_bo_inc_usecnt(struct vc4_bo *bo);
> diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
> index fa54a6d1403d..b2aa26e1e4a2 100644
> --- a/drivers/gpu/drm/vgem/vgem_drv.c
> +++ b/drivers/gpu/drm/vgem/vgem_drv.c
> @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
>   	return &obj->base;
>   }
>   
> -static void *vgem_prime_vmap(struct drm_gem_object *obj)
> +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>   	long n_pages = obj->size >> PAGE_SHIFT;
>   	struct page **pages;
> +	void *vaddr;
>   
>   	pages = vgem_pin_pages(bo);
>   	if (IS_ERR(pages))
> -		return NULL;
> +		return PTR_ERR(pages);
> +
> +	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
>   
> -	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
> +	return 0;
>   }
>   
> -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
> +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
>   {
>   	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
>   
> -	vunmap(vaddr);
> +	vunmap(map->vaddr);
>   	vgem_unpin_pages(bo);
>   }
>   
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> index 4f34ef34ba60..74db5a840bed 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
> @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
>   	return gem_mmap_obj(xen_obj, vma);
>   }
>   
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
>   {
>   	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
> +	void *vaddr;
>   
>   	if (!xen_obj->pages)
> -		return NULL;
> +		return -ENOMEM;
>   
>   	/* Please see comment in gem_mmap_obj on mapping and attributes. */
> -	return vmap(xen_obj->pages, xen_obj->num_pages,
> -		    VM_MAP, PAGE_KERNEL);
> +	vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
> +		     VM_MAP, PAGE_KERNEL);
> +	if (!vaddr)
> +		return -ENOMEM;
> +	dma_buf_map_set_vaddr(map, vaddr);
> +
> +	return 0;
>   }
>   
>   void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> -				    void *vaddr)
> +				    struct dma_buf_map *map)
>   {
> -	vunmap(vaddr);
> +	vunmap(map->vaddr);
>   }
>   
>   int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
> diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> index a39675fa31b2..a4e67d0a149c 100644
> --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
> +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
> @@ -12,6 +12,7 @@
>   #define __XEN_DRM_FRONT_GEM_H
>   
>   struct dma_buf_attachment;
> +struct dma_buf_map;
>   struct drm_device;
>   struct drm_gem_object;
>   struct file;
> @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
>   
>   int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
>   
> -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
> +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
> +				 struct dma_buf_map *map);
>   
>   void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
> -				    void *vaddr);
> +				    struct dma_buf_map *map);
>   
>   int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
>   				 struct vm_area_struct *vma);
> diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
> index c38dd35da00b..5e6daa1c982f 100644
> --- a/include/drm/drm_gem.h
> +++ b/include/drm/drm_gem.h
> @@ -39,6 +39,7 @@
>   
>   #include <drm/drm_vma_manager.h>
>   
> +struct dma_buf_map;
>   struct drm_gem_object;
>   
>   /**
> @@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
>   	 *
>   	 * This callback is optional.
>   	 */
> -	void *(*vmap)(struct drm_gem_object *obj);
> +	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>   
>   	/**
>   	 * @vunmap:
> @@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
>   	 *
>   	 * This callback is optional.
>   	 */
> -	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
> +	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
>   
>   	/**
>   	 * @mmap:
> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> index a064b0d1c480..caf98b9cf4b4 100644
> --- a/include/drm/drm_gem_cma_helper.h
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
>   				  struct sg_table *sgt);
>   int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
>   			   struct vm_area_struct *vma);
> -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
> +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>   
>   struct drm_gem_object *
>   drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
> diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> index 5381f0c8cf6f..3449a0353fe0 100644
> --- a/include/drm/drm_gem_shmem_helper.h
> +++ b/include/drm/drm_gem_shmem_helper.h
> @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
>   void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
>   int drm_gem_shmem_pin(struct drm_gem_object *obj);
>   void drm_gem_shmem_unpin(struct drm_gem_object *obj);
> -void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
> -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
> +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
> +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
>   
>   int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
>   
> diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
> index 128f88174d32..c0d28ba0f5c9 100644
> --- a/include/drm/drm_gem_vram_helper.h
> +++ b/include/drm/drm_gem_vram_helper.h
> @@ -10,6 +10,7 @@
>   #include <drm/ttm/ttm_bo_api.h>
>   #include <drm/ttm/ttm_bo_driver.h>
>   
> +#include <linux/dma-buf-map.h>
>   #include <linux/kernel.h> /* for container_of() */
>   
>   struct drm_mode_create_dumb;
> @@ -29,9 +30,8 @@ struct vm_area_struct;
>   
>   /**
>    * struct drm_gem_vram_object - GEM object backed by VRAM
> - * @gem:	GEM object
>    * @bo:		TTM buffer object
> - * @kmap:	Mapping information for @bo
> + * @map:	Mapping information for @bo
>    * @placement:	TTM placement information. Supported placements are \
>   	%TTM_PL_VRAM and %TTM_PL_SYSTEM
>    * @placements:	TTM placement information.
> @@ -50,15 +50,15 @@ struct vm_area_struct;
>    */
>   struct drm_gem_vram_object {
>   	struct ttm_buffer_object bo;
> -	struct ttm_bo_kmap_obj kmap;
> +	struct dma_buf_map map;
>   
>   	/**
> -	 * @kmap_use_count:
> +	 * @vmap_use_count:
>   	 *
>   	 * Reference count on the virtual address.
>   	 * The address are un-mapped when the count reaches zero.
>   	 */
> -	unsigned int kmap_use_count;
> +	unsigned int vmap_use_count;
>   
>   	/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
>   	struct ttm_placement placement;
> @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
>   s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
>   int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
>   int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
> -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
> -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
> +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
> +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
>   
>   int drm_gem_vram_fill_create_dumb(struct drm_file *file,
>   				  struct drm_device *dev,



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:25:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:25:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7414.19358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4CK-0004ZI-J4; Thu, 15 Oct 2020 14:25:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7414.19358; Thu, 15 Oct 2020 14:25:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4CK-0004ZB-G2; Thu, 15 Oct 2020 14:25:40 +0000
Received: by outflank-mailman (input) for mailman id 7414;
 Thu, 15 Oct 2020 14:25:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT4CI-0004Z5-DY
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:25:38 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1f53e1f-a805-4668-a92e-685360403d7a;
 Thu, 15 Oct 2020 14:25:36 +0000 (UTC)
X-Inumbo-ID: c1f53e1f-a805-4668-a92e-685360403d7a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602771936;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=ISsVjYKh083/gZzNPG9aHljTD9ZFK2BfAgEw3ykuDXk=;
  b=csrDIlI5rF/bBgy26IpXOvcBJWzvQUc/iyzx45seJe4EQyeUj4JlkvBi
   w6T+tcvr/XgoMMzMqWMtWdHRpZKx4+beJIdfDTry3QM8QeyV5kaOrU/bR
   fAX4RNiFXxyOyzdveCYCPa5NIOQcldttSSaqI9Pd+Suo50vrzJH5duFHa
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: smY+e8aDkg2xJlFhPHHTjL7NFnuIum4Ix6GjOJ6YaxdZqsg8/1UJjJ32TKlqd1YxwjNbde6up8
 yC7T+ITSUXgNSY2a4I3MXHsYxUw76HFH6qA/2yGGUKm/+pj+rQybqlXAf1cXaThs6mkhm2wK3k
 pqMVlh9ZBlPOIKL6m1DFiAf9wxcsIcngd8kR/3HsJM2Q3Ubz6Pyt+VYHiMUuLtKqUOdGokDeuy
 sj416JtmG+wqd2uYZ2gp8ubZH6U3MNyhAgzr4CD2wFciLv47sWjDpXD1cheNq65fDjPfGzQP8J
 XSU=
X-SBRS: 2.5
X-MesageID: 30126341
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="30126341"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Jens Axboe <axboe@kernel.dk>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, SeongJae Park <sjpark@amazon.de>,
	<xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>,
	=?UTF-8?q?J=C3=BCrgen=20Gro=C3=9F?= <jgross@suse.com>, "J . Roeleveld"
	<joost@antarean.org>
Subject: [PATCH 0/2] xen/blkback: add LRU purge parameters
Date: Thu, 15 Oct 2020 16:24:14 +0200
Message-ID: <20201015142416.70294-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Add the LRU cache purge interval and purge percent as run-time tunables.

Roger Pau Monne (2):
  xen/blkback: turn the cache purge LRU interval into a parameter
  xen/blkback: turn the cache purge percent into a parameter

 .../ABI/testing/sysfs-driver-xen-blkback      | 19 +++++++++++++++++++
 drivers/block/xen-blkback/blkback.c           | 16 +++++++++++-----
 2 files changed, 30 insertions(+), 5 deletions(-)

-- 
2.28.0
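Per the diffstat, the new tunables are documented under
Documentation/ABI/testing/sysfs-driver-xen-blkback, i.e. they are module
parameters exposed through sysfs. A hedged sketch of how one would
inspect them once the series lands (the exact parameter file names are
defined by the two patches and are not reproduced here):

```shell
# List whatever parameters xen-blkback exposes; falls back to a note if
# the module is not loaded on this host.
ls /sys/module/xen_blkback/parameters/ 2>/dev/null \
	|| echo "xen-blkback not loaded"
```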



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:25:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:25:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7415.19370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4CO-0004bX-ST; Thu, 15 Oct 2020 14:25:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7415.19370; Thu, 15 Oct 2020 14:25:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4CO-0004bP-P8; Thu, 15 Oct 2020 14:25:44 +0000
Received: by outflank-mailman (input) for mailman id 7415;
 Thu, 15 Oct 2020 14:25:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT4CN-0004ay-GJ
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:25:43 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef1122d1-2b9c-4702-bf1d-9b0e24451e7e;
 Thu, 15 Oct 2020 14:25:42 +0000 (UTC)
X-Inumbo-ID: ef1122d1-2b9c-4702-bf1d-9b0e24451e7e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602771942;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=qACv2ER1JyTphbaRvO3qjGjCO4Ywxs60QaM06nUJtes=;
  b=gnxBu0Okljz4KZnafw9fJasKLyJEsI43noBV4BEOOxucWUU4pN1ER84H
   bjF80Y5hs/1HClri4hmBuq7vFIimhLPWYsvfkPmAYLv9IAnnkQLZXQ4VF
   CBK2MfO58C+MuosMAyDCt5RM44jKzXObbfPaf1nClUSKdhETtcZa+ZNfq
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: pc4931EzyhHOF/1+wy2nZ4XleWl7qBYyCEkp00YAxqGPWcnc737w40tXoyYSmGIVRQQH4Iit93
 07qxT5ybUOhZ2qxJTCRSoPyXLgFI6SB+kOdkwJ+tnstszy0/PMaQb6jMTptLMYlA/iJY4Lf2Oi
 F/cLdsnFwau5CJFWfhOyvPnFRV7Fsi+iyINxA6sbSGk67l/81+RiveiaYesCDf5YPxdThZ1AFI
 ilb1k5+O1Wjm2wOq7NJkhCdFSBqRNJaXSmpoZMT/4v02/Y5YxNBbR02IjhXjykf3zIOuEoFlNN
 uzc=
X-SBRS: 2.5
X-MesageID: 29332243
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29332243"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Jens Axboe <axboe@kernel.dk>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, SeongJae Park <sjpark@amazon.de>,
	<xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>, "J .
 Roeleveld" <joost@antarean.org>, =?UTF-8?q?J=C3=BCrgen=20Gro=C3=9F?=
	<jgross@suse.com>
Subject: [PATCH 2/2] xen/blkback: turn the cache purge percent into a parameter
Date: Thu, 15 Oct 2020 16:24:16 +0200
Message-ID: <20201015142416.70294-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015142416.70294-1-roger.pau@citrix.com>
References: <20201015142416.70294-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Assume that reads and writes to the variable will be atomic. The worst
that could happen is that one of the purges removes a partially
written percentage of grants, but the cache itself will recover.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: xen-devel@lists.xenproject.org
Cc: linux-block@vger.kernel.org
Cc: J. Roeleveld <joost@antarean.org>
Cc: Jürgen Groß <jgross@suse.com>
---
 Documentation/ABI/testing/sysfs-driver-xen-blkback | 9 +++++++++
 drivers/block/xen-blkback/blkback.c                | 7 +++++--
 2 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
index 776f25d335ca..7de791ad61f9 100644
--- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
+++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
@@ -45,3 +45,12 @@ Description:
                to be executed periodically. This parameter controls the
                time interval, in ms, between consecutive executions of
                the purge mechanism.
+
+What:           /sys/module/xen_blkback/parameters/lru_percent_clean
+Date:           October 2020
+KernelVersion:  5.10
+Contact:        Roger Pau Monné <roger.pau@citrix.com>
+Description:
+                When the persistent grants list is full, unused grants are
+                removed from the list. This parameter sets the percentage of
+                grants to be removed at each LRU execution.
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 6ad9b76fdb2b..772852d45a5a 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -127,7 +127,10 @@ MODULE_PARM_DESC(lru_internval,
  * from the list. The percent number of grants to be removed at each LRU
  * execution.
  */
-#define LRU_PERCENT_CLEAN 5
+static unsigned int lru_percent_clean = 5;
+module_param_named(lru_percent_clean, lru_percent_clean, uint, 0644);
+MODULE_PARM_DESC(lru_percent_clean,
+		 "Percentage of persistent grants to remove from the cache when full");
 
 /* Run-time switchable: /sys/module/blkback/parameters/ */
 static unsigned int log_stats;
@@ -404,7 +407,7 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
 	    !ring->blkif->vbd.overflow_max_grants)) {
 		num_clean = 0;
 	} else {
-		num_clean = (max_pgrants / 100) * LRU_PERCENT_CLEAN;
+		num_clean = (max_pgrants / 100) * lru_percent_clean;
 		num_clean = ring->persistent_gnt_c - max_pgrants + num_clean;
 		num_clean = min(ring->persistent_gnt_c, num_clean);
 		pr_debug("Going to purge at least %u persistent grants\n",
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:26:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:26:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7418.19382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4Ck-0004jB-7x; Thu, 15 Oct 2020 14:26:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7418.19382; Thu, 15 Oct 2020 14:26:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4Ck-0004j3-3L; Thu, 15 Oct 2020 14:26:06 +0000
Received: by outflank-mailman (input) for mailman id 7418;
 Thu, 15 Oct 2020 14:26:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT4Ci-0004im-Up
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:26:04 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b31eb84-11a5-4017-bcc3-ed083e0a1e0e;
 Thu, 15 Oct 2020 14:26:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602771963;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=nec4gD5X3MmxS0I+tIlw7vXxkm0qi/q39D7bqb697+c=;
  b=HHdCFMYdp3iaiytTJKwwuZXXhBJuVSTxYO1RZ3dMYNibrbr2a4k1Lf8W
   h36r2U2wmCpY1PXpCQdgw0XxhrRWMt0fkpnKnu9SjBfJJUPvARX99CGGQ
   MvgBIhO5ITQu2SjW3B1t5gju0nnWS/237zPuSacssCCqCyxuQTl1K7C/e
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: iDE2yqknkOfC10XRaSF9d4E7CcP5Q7x0CCeK4aQp5Q1/nZv/prDXfC+SlM5ci1UypwQ6cB1QW+
 Rl8ICdJBrilb1/wW5SOKfXC43Dd3QJpWshFMC/Mu6GwUhqTT1AsidtLjSrF0UP9RjAgeNOlQ79
 8HTIxkcBb2U8ycE40/DnqPu0OXqxHm6i21NwnLMHnvEgjCF+aZN/ES0ZdS0A8AuJi3b9RvwYcM
 ZD4BGKRh2YA84jLWIu2a3ImyVJs3BdlrheZginPypI7gTI5wlbJDTrAusT9k6r+iiKjqi7gyug
 CCw=
X-SBRS: 2.5
X-MesageID: 29146562
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29146562"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <linux-kernel@vger.kernel.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Jens Axboe <axboe@kernel.dk>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, SeongJae Park <sjpark@amazon.de>,
	<xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>, "J .
 Roeleveld" <joost@antarean.org>, =?UTF-8?q?J=C3=BCrgen=20Gro=C3=9F?=
	<jgross@suse.com>
Subject: [PATCH 1/2] xen/blkback: turn the cache purge LRU interval into a parameter
Date: Thu, 15 Oct 2020 16:24:15 +0200
Message-ID: <20201015142416.70294-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201015142416.70294-1-roger.pau@citrix.com>
References: <20201015142416.70294-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Assume that reads and writes to the variable will be atomic. The worst
that could happen is that one of the LRU intervals is not calculated
properly if a partially written value is read, but that would only be
a transient issue.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: SeongJae Park <sjpark@amazon.de>
Cc: xen-devel@lists.xenproject.org
Cc: linux-block@vger.kernel.org
Cc: J. Roeleveld <joost@antarean.org>
Cc: Jürgen Groß <jgross@suse.com>
---
 Documentation/ABI/testing/sysfs-driver-xen-blkback | 10 ++++++++++
 drivers/block/xen-blkback/blkback.c                |  9 ++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
index ecb7942ff146..776f25d335ca 100644
--- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
+++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
@@ -35,3 +35,13 @@ Description:
                 controls the duration in milliseconds that blkback will not
                 cache any page not backed by a grant mapping.
                 The default is 10ms.
+
+What:           /sys/module/xen_blkback/parameters/lru_internval
+Date:           October 2020
+KernelVersion:  5.10
+Contact:        Roger Pau Monné <roger.pau@citrix.com>
+Description:
+                The LRU mechanism to clean the lists of persistent grants needs
+                to be executed periodically. This parameter controls the
+                time interval, in ms, between consecutive executions of
+                the purge mechanism.
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index adfc9352351d..6ad9b76fdb2b 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -117,7 +117,10 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
  * be executed periodically. The time interval between consecutive executions
  * of the purge mechanism is set in ms.
  */
-#define LRU_INTERVAL 100
+static unsigned int lru_interval = 100;
+module_param_named(lru_interval, lru_interval, uint, 0644);
+MODULE_PARM_DESC(lru_internval,
+		 "Time interval between consecutive executions of the cache purge mechanism (in ms)");
 
 /*
  * When the persistent grants list is full we will remove unused grants
@@ -620,7 +623,7 @@ int xen_blkif_schedule(void *arg)
 		if (unlikely(vbd->size != vbd_sz(vbd)))
 			xen_vbd_resize(blkif);
 
-		timeout = msecs_to_jiffies(LRU_INTERVAL);
+		timeout = msecs_to_jiffies(lru_interval);
 
 		timeout = wait_event_interruptible_timeout(
 			ring->wq,
@@ -650,7 +653,7 @@ int xen_blkif_schedule(void *arg)
 		if (blkif->vbd.feature_gnt_persistent &&
 		    time_after(jiffies, ring->next_lru)) {
 			purge_persistent_gnt(ring);
-			ring->next_lru = jiffies + msecs_to_jiffies(LRU_INTERVAL);
+			ring->next_lru = jiffies + msecs_to_jiffies(lru_interval);
 		}
 
 		/* Shrink the free pages pool if it is too large. */
-- 
2.28.0
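[Editorial note: with both patches applied, the two knobs are ordinary module parameters, so they can be inspected and tuned at runtime from dom0. A sketch follows; the sysfs paths are taken from the ABI documentation above and the values are illustrative. This assumes a kernel carrying these patches and root privileges for the writes.]

```shell
# Paths as documented in sysfs-driver-xen-blkback.
params=/sys/module/xen_blkback/parameters

cat "$params/lru_interval"        # default: 100 (ms between purge runs)
cat "$params/lru_percent_clean"   # default: 5 (percent purged per run)

# Purge twice as often, and purge a larger slice of the cache each time.
echo 50 > "$params/lru_interval"
echo 10 > "$params/lru_percent_clean"
```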



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:30:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:30:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7426.19394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4Gi-0005fn-RR; Thu, 15 Oct 2020 14:30:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7426.19394; Thu, 15 Oct 2020 14:30:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4Gi-0005fg-Me; Thu, 15 Oct 2020 14:30:12 +0000
Received: by outflank-mailman (input) for mailman id 7426;
 Thu, 15 Oct 2020 14:30:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFLp=DW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kT4Gh-0005fb-Ed
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:30:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c420136e-482f-47cc-a5fe-308673b7ab11;
 Thu, 15 Oct 2020 14:30:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D379AC24;
 Thu, 15 Oct 2020 14:30:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602772209;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KBFlgAaq0O3il+qQFEc0ZaY+TncdaaVKCCP2EAaai30=;
	b=RzMr/0HWeuj8biN+UO6Kh2qtwi3d4LazBCd9n3G2hkvhC/lGbGLt0tK2AF+2ydDY7ZQXdL
	PttM1r+Gx/Ul5tGU2FpVTaYSX8HKee47b3Ofbf6D2Vj/PfHao9RJzAJWSCH2IwwY0hcSzz
	wdYUIRhmCiI74LYYvGo5zsX9LXk51Mo=
Subject: Re: [PATCH 1/2] xen/blkback: turn the cache purge LRU interval into a
 parameter
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Jens Axboe <axboe@kernel.dk>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 SeongJae Park <sjpark@amazon.de>, xen-devel@lists.xenproject.org,
 linux-block@vger.kernel.org, "J . Roeleveld" <joost@antarean.org>
References: <20201015142416.70294-1-roger.pau@citrix.com>
 <20201015142416.70294-2-roger.pau@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3441104d-7234-a0c3-8b15-7d5a1126182b@suse.com>
Date: Thu, 15 Oct 2020 16:30:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201015142416.70294-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.20 16:24, Roger Pau Monne wrote:
> Assume that reads and writes to the variable will be atomic. The worst
> that could happen is that one of the LRU intervals is not calculated
> properly if a partially written value is read, but that would only be
> a transient issue.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: SeongJae Park <sjpark@amazon.de>
> Cc: xen-devel@lists.xenproject.org
> Cc: linux-block@vger.kernel.org
> Cc: J. Roeleveld <joost@antarean.org>
> Cc: Jürgen Groß <jgross@suse.com>
> ---
>   Documentation/ABI/testing/sysfs-driver-xen-blkback | 10 ++++++++++
>   drivers/block/xen-blkback/blkback.c                |  9 ++++++---
>   2 files changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
> index ecb7942ff146..776f25d335ca 100644
> --- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
> +++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
> @@ -35,3 +35,13 @@ Description:
>                   controls the duration in milliseconds that blkback will not
>                   cache any page not backed by a grant mapping.
>                   The default is 10ms.
> +
> +What:           /sys/module/xen_blkback/parameters/lru_internval

s/lru_internval/lru_interval/

> +Date:           October 2020
> +KernelVersion:  5.10
> +Contact:        Roger Pau Monné <roger.pau@citrix.com>
> +Description:
> +                The LRU mechanism to clean the lists of persistent grants needs
> +                to be executed periodically. This parameter controls the
> +                time interval, in ms, between consecutive executions of
> +                the purge mechanism.
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index adfc9352351d..6ad9b76fdb2b 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -117,7 +117,10 @@ MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the
>    * be executed periodically. The time interval between consecutive executions
>    * of the purge mechanism is set in ms.
>    */
> -#define LRU_INTERVAL 100
> +static unsigned int lru_interval = 100;
> +module_param_named(lru_interval, lru_interval, uint, 0644);
> +MODULE_PARM_DESC(lru_internval,

s/lru_internval/lru_interval/

> +		 "Time interval between consecutive executions of the cache purge mechanism (in ms)");
>   
>   /*
>    * When the persistent grants list is full we will remove unused grants
> @@ -620,7 +623,7 @@ int xen_blkif_schedule(void *arg)
>   		if (unlikely(vbd->size != vbd_sz(vbd)))
>   			xen_vbd_resize(blkif);
>   
> -		timeout = msecs_to_jiffies(LRU_INTERVAL);
> +		timeout = msecs_to_jiffies(lru_interval);
>   
>   		timeout = wait_event_interruptible_timeout(
>   			ring->wq,
> @@ -650,7 +653,7 @@ int xen_blkif_schedule(void *arg)
>   		if (blkif->vbd.feature_gnt_persistent &&
>   		    time_after(jiffies, ring->next_lru)) {
>   			purge_persistent_gnt(ring);
> -			ring->next_lru = jiffies + msecs_to_jiffies(LRU_INTERVAL);
> +			ring->next_lru = jiffies + msecs_to_jiffies(lru_interval);
>   		}
>   
>   		/* Shrink the free pages pool if it is too large. */
> 


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:33:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:33:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7431.19408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4JU-0005qi-9C; Thu, 15 Oct 2020 14:33:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7431.19408; Thu, 15 Oct 2020 14:33:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4JU-0005qb-5o; Thu, 15 Oct 2020 14:33:04 +0000
Received: by outflank-mailman (input) for mailman id 7431;
 Thu, 15 Oct 2020 14:33:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kT4JS-0005pI-B5
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:33:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eea68783-2e07-4701-adb8-2daf48d18d26;
 Thu, 15 Oct 2020 14:32:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT4JL-0006S5-Qs; Thu, 15 Oct 2020 14:32:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT4JJ-0005FT-AB; Thu, 15 Oct 2020 14:32:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kT4JJ-00024q-9k; Thu, 15 Oct 2020 14:32:53 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MhROGmAGfqMP7RpVMQFYiq/GjI4lFXEkmt14cQb0Rz8=; b=Ebo2pB5CFDzWXSvCvuMsME8kKv
	T+EyFvP+DPOx0XbGDn5QokYtM1jlWI0kb4LazP3LwQC5mkHeObzY+vNv2ujdwmRxex1d87yJroJhR
	kM1wDRCdxYA3El5ygGl4srxm/37GeEPQsGIxcYEj5VNnhNpGqr7t8DaEQR6iSM/Lo4TI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155837-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155837: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=19c87b7d446c3273e84b238cb02cd1c0ae69c43e
X-Osstest-Versions-That:
    ovmf=b9b7406c43e9d29bde3e9679c1b039cb91109097
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 14:32:53 +0000

flight 155837 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155837/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 19c87b7d446c3273e84b238cb02cd1c0ae69c43e
baseline version:
 ovmf                 b9b7406c43e9d29bde3e9679c1b039cb91109097

Last test of basis   155825  2020-10-15 01:10:19 Z    0 days
Testing same since   155837  2020-10-15 07:14:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b9b7406c43..19c87b7d44  19c87b7d446c3273e84b238cb02cd1c0ae69c43e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:38:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7435.19420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4OB-00062R-Ts; Thu, 15 Oct 2020 14:37:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7435.19420; Thu, 15 Oct 2020 14:37:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4OB-00062K-QH; Thu, 15 Oct 2020 14:37:55 +0000
Received: by outflank-mailman (input) for mailman id 7435;
 Thu, 15 Oct 2020 14:37:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sFLp=DW=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kT4OB-00062F-BY
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:37:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6bf6c1f-20f0-4240-90a6-afc164980bff;
 Thu, 15 Oct 2020 14:37:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61299ACDB;
 Thu, 15 Oct 2020 14:37:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602772673;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fyAdftOWPrn+xC9V3oziST8/L7BPDtEvR1vWQr/edYo=;
	b=TKpn88v+tlHuc50Ohx8tQdnLNWyltTr3kRUlCiFRMIiRw5JoK3h+kl0jroxGx8ajrdnqig
	hajS55X8gQIZxT1Ok5V/Jmb8v3EWIgbKZpPAXSSC/GhHePh1LGd/RWE/f5gXaszt5DqU+w
	KSo1AK0tMG061dn3oGjEoWnb1+RXGZQ=
Subject: Re: [PATCH 2/2] xen/blkback: turn the cache purge percent into a
 parameter
To: Roger Pau Monne <roger.pau@citrix.com>, linux-kernel@vger.kernel.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Jens Axboe <axboe@kernel.dk>, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 SeongJae Park <sjpark@amazon.de>, xen-devel@lists.xenproject.org,
 linux-block@vger.kernel.org, "J . Roeleveld" <joost@antarean.org>
References: <20201015142416.70294-1-roger.pau@citrix.com>
 <20201015142416.70294-3-roger.pau@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0b7da9e1-6c59-5b8d-52aa-6293568613d1@suse.com>
Date: Thu, 15 Oct 2020 16:37:52 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201015142416.70294-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.20 16:24, Roger Pau Monne wrote:
> Assume that reads and writes to the variable will be atomic. The worst
> that could happen is that one of the purges removes a partially
> written percentage of grants, but the cache itself will recover.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: SeongJae Park <sjpark@amazon.de>
> Cc: xen-devel@lists.xenproject.org
> Cc: linux-block@vger.kernel.org
> Cc: J. Roeleveld <joost@antarean.org>
> Cc: Jürgen Groß <jgross@suse.com>
> ---
>   Documentation/ABI/testing/sysfs-driver-xen-blkback | 9 +++++++++
>   drivers/block/xen-blkback/blkback.c                | 7 +++++--
>   2 files changed, 14 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
> index 776f25d335ca..7de791ad61f9 100644
> --- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
> +++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
> @@ -45,3 +45,12 @@ Description:
>                   to be executed periodically. This parameter controls the time
>                   interval between consecutive executions of the purge mechanism
>                   is set in ms.
> +
> +What:           /sys/module/xen_blkback/parameters/lru_percent_clean
> +Date:           October 2020
> +KernelVersion:  5.10
> +Contact:        Roger Pau Monné <roger.pau@citrix.com>
> +Description:
> +                When the persistent grants list is full we will remove unused
> +                grants from the list. The percent number of grants to be
> +                removed at each LRU execution.
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index 6ad9b76fdb2b..772852d45a5a 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -127,7 +127,10 @@ MODULE_PARM_DESC(lru_internval,
>    * from the list. The percent number of grants to be removed at each LRU
>    * execution.
>    */
> -#define LRU_PERCENT_CLEAN 5
> +static unsigned int lru_percent_clean = 5;
> +module_param_named(lru_percent_clean, lru_percent_clean, uint, 0644);
> +MODULE_PARM_DESC(lru_percent_clean,
> +		 "Percentage of persistent grants to remove from the cache when full");
>   
>   /* Run-time switchable: /sys/module/blkback/parameters/ */
>   static unsigned int log_stats;
> @@ -404,7 +407,7 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
>   	    !ring->blkif->vbd.overflow_max_grants)) {
>   		num_clean = 0;
>   	} else {
> -		num_clean = (max_pgrants / 100) * LRU_PERCENT_CLEAN;
> +		num_clean = (max_pgrants / 100) * lru_percent_clean;

Hmm, wouldn't it be better to use (max_pgrants * lru_percent_clean) / 100
here in order to support max_pgrants values less than 100?


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:44:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7442.19435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4UG-0006xr-P5; Thu, 15 Oct 2020 14:44:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7442.19435; Thu, 15 Oct 2020 14:44:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4UG-0006xk-Ln; Thu, 15 Oct 2020 14:44:12 +0000
Received: by outflank-mailman (input) for mailman id 7442;
 Thu, 15 Oct 2020 14:44:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT4UF-0006wk-At
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:44:11 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6d968ba-8232-4f0e-aba3-33880b615a18;
 Thu, 15 Oct 2020 14:44:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602773050;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=o1CQ3R7emkmHv4RUcLBMXcqMzNDO5c9/K98ceiO5WPY=;
  b=iFnoaDG1lq157fyP3JYqW4OWqNCidcwzCPpTBqeTHJ3df6yBqRDNhWar
   eUYHrijEAb5JyFh4YsimPkItZ1+dyDHkWHU+n2nJhgNnCqmCXyQTSuZAP
   u2oh2EllsQuH6u4cLi1cm0wOL0mHEkMyWAoTuptpKsCdQC2TuF4ygsyhq
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Z4s7vITO+mHQS9oB+C4ouNemkOqjjehCAngW0yVOcOZq04ygvwCVcGgbPLD8RBhDSvA3F+eemm
 c/h6pxL2oC7X+MHZyYFuZpuaDaZNdFcnSrsAGb3po6+35gDs+IxmknEX0cldpOTM4OC8S2TOQm
 lGxsYqm4bpWcqDHsefCWV8DhMsH1IMssrsxK4sbm2H59uFD8UW5X+pY2kK1bstzhLb8rwllnr9
 ZU7ZHIfpgy0GhZXZOhMeXzdYi482XfAPpmiIJspXPtZhF5JviJRGkhXux9S5+oAjnkiIYS0RrX
 7BA=
X-SBRS: 2.5
X-MesageID: 30128371
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="30128371"
Date: Thu, 15 Oct 2020 16:44:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
CC: <linux-kernel@vger.kernel.org>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Jens Axboe <axboe@kernel.dk>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, SeongJae Park <sjpark@amazon.de>,
	<xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>, "J .
 Roeleveld" <joost@antarean.org>
Subject: Re: [PATCH 2/2] xen/blkback: turn the cache purge percent into a
 parameter
Message-ID: <20201015144401.GD68032@Air-de-Roger>
References: <20201015142416.70294-1-roger.pau@citrix.com>
 <20201015142416.70294-3-roger.pau@citrix.com>
 <0b7da9e1-6c59-5b8d-52aa-6293568613d1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0b7da9e1-6c59-5b8d-52aa-6293568613d1@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 15, 2020 at 04:37:52PM +0200, Jürgen Groß wrote:
> On 15.10.20 16:24, Roger Pau Monne wrote:
> > Assume that reads and writes to the variable will be atomic. The worst
> > that could happen is that one of the purges removes a partially
> > written percentage of grants, but the cache itself will recover.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> > Cc: Jens Axboe <axboe@kernel.dk>
> > Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Cc: SeongJae Park <sjpark@amazon.de>
> > Cc: xen-devel@lists.xenproject.org
> > Cc: linux-block@vger.kernel.org
> > Cc: J. Roeleveld <joost@antarean.org>
> > Cc: Jürgen Groß <jgross@suse.com>
> > ---
> >   Documentation/ABI/testing/sysfs-driver-xen-blkback | 9 +++++++++
> >   drivers/block/xen-blkback/blkback.c                | 7 +++++--
> >   2 files changed, 14 insertions(+), 2 deletions(-)
> > 
> > diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
> > index 776f25d335ca..7de791ad61f9 100644
> > --- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
> > +++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
> > @@ -45,3 +45,12 @@ Description:
> >                   to be executed periodically. This parameter controls the time
> >                   interval between consecutive executions of the purge mechanism
> >                   is set in ms.
> > +
> > +What:           /sys/module/xen_blkback/parameters/lru_percent_clean
> > +Date:           October 2020
> > +KernelVersion:  5.10
> > +Contact:        Roger Pau Monné <roger.pau@citrix.com>
> > +Description:
> > +                When the persistent grants list is full we will remove unused
> > +                grants from the list. The percent number of grants to be
> > +                removed at each LRU execution.
> > diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> > index 6ad9b76fdb2b..772852d45a5a 100644
> > --- a/drivers/block/xen-blkback/blkback.c
> > +++ b/drivers/block/xen-blkback/blkback.c
> > @@ -127,7 +127,10 @@ MODULE_PARM_DESC(lru_internval,
> >    * from the list. The percent number of grants to be removed at each LRU
> >    * execution.
> >    */
> > -#define LRU_PERCENT_CLEAN 5
> > +static unsigned int lru_percent_clean = 5;
> > +module_param_named(lru_percent_clean, lru_percent_clean, uint, 0644);
> > +MODULE_PARM_DESC(lru_percent_clean,
> > +		 "Percentage of persistent grants to remove from the cache when full");
> >   /* Run-time switchable: /sys/module/blkback/parameters/ */
> >   static unsigned int log_stats;
> > @@ -404,7 +407,7 @@ static void purge_persistent_gnt(struct xen_blkif_ring *ring)
> >   	    !ring->blkif->vbd.overflow_max_grants)) {
> >   		num_clean = 0;
> >   	} else {
> > -		num_clean = (max_pgrants / 100) * LRU_PERCENT_CLEAN;
> > +		num_clean = (max_pgrants / 100) * lru_percent_clean;
> 
> Hmm, wouldn't it be better to use (max_pgrants * lru_percent_clean) / 100
> here in order to support max_pgrants values less than 100?

Yes, we should have done that when turning max_pgrants into a parameter;
it used to be a fixed value before that.

Will do in next version since I'm already changing the line.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7447.19448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4an-0007qR-H1; Thu, 15 Oct 2020 14:50:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7447.19448; Thu, 15 Oct 2020 14:50:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4an-0007qK-Dg; Thu, 15 Oct 2020 14:50:57 +0000
Received: by outflank-mailman (input) for mailman id 7447;
 Thu, 15 Oct 2020 14:50:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Un/f=DW=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kT4al-0007qD-FY
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:50:55 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1eb4feb-be63-4c44-a600-fdaa095fbcdf;
 Thu, 15 Oct 2020 14:50:54 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id l28so3901469lfp.10
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 07:50:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=4Re/d8K/MlHcKuvoYcGduQCYOKA27KL2gs+AeLiBNn4=;
        b=GF/XEwHVl/wE8f4Kaw0QjDHPll2czIK2djLa+awk1owVI/zVer9dc6MLQZZGd5QR5p
         +bhm73n3RriW/QXvGb7x6Z8MTK/sSdeLsOy8mdSHEzy0RzWn/5sGXr9uMcFJ/14I/mxF
         izJbJgTztDyywvUeoSZ47gDrIwoLJxqZnOAIg7v8YSTMGnUj3MhEzS9vCtF0U//Yed/2
         fQL70bNC9RNS5asYaoPp0zAyZUMYN/zExx2nB+TOWOQ2F3kTG8eXe78hQsnlJPzxrUkQ
         tpC8JWxR3x9wf3KHO5e9MsPS/rcdrh3dlC/kWGP9L5VT3gSpMaEgRfw/WHj66MJFJvXe
         BNVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=4Re/d8K/MlHcKuvoYcGduQCYOKA27KL2gs+AeLiBNn4=;
        b=LitAvODobU97PQkHIC2QeoBpXxkfOt1wp1O8h+Cy4X2bFPF0wCabhGpBWhGd2PMl19
         mwLgv5aKMhiHPMJ92yEtknI4CXqa/6rgjukbgvIuvfROlfGU1iBYJOoncL229YCUwRox
         iyO5EZUxx1rpRe2hluNuiUClGEw9UJ8pa8JbIm9VO4D3aGgV1+rrlws8AZ2zDNzi+YUf
         pP/CKOGyQw+BLs/EoqdP8//9kkGmFsJq4Zli7rz88I+qGim5/oqxIJgY4peQzuF6Cy1k
         Mi+EyLwv8NrBLBzc1ywfTXPfQqv4/blWtiqwnENEa2r3aXCC0YN9PGeTQh+8+8Ag1391
         IeXQ==
X-Gm-Message-State: AOAM530NzLa/yfLoSE32loDxfwBTrpHq3i6hy4ikWqMrciwT7iewb6hJ
	sp0jBp+n6ipdEe0GAIf2rfM1Tpusq0ng1KL2vGw=
X-Google-Smtp-Source: ABdhPJzaRjlqWJEFcYwUJHG+A+w6vwICH8dz3Q6I1CRDsH5bnyLoKBOGMUzUuIGy5SAEwvM1X56/sBjVEjMA/Xq4p1k=
X-Received: by 2002:ac2:4ed0:: with SMTP id p16mr684154lfr.554.1602773453293;
 Thu, 15 Oct 2020 07:50:53 -0700 (PDT)
MIME-Version: 1.0
References: <20201014153150.83875-1-jandryuk@gmail.com> <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com> <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
In-Reply-To: <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 15 Oct 2020 10:50:41 -0400
Message-ID: <CAKf6xpt_VhJ5r4scuAkWU3aGxgwiYNtHaBDpMoFJS+q837aFiA@mail.gmail.com>
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 15, 2020 at 3:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 14.10.2020 18:27, Jason Andryuk wrote:
> > On Wed, Oct 14, 2020 at 12:02 PM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 14.10.2020 17:31, Jason Andryuk wrote:
> >>> Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
> >>> kernel build CONFIG_PVH=y CONFIG_PV=n lacks the note.  In this case,
> >>> virt_entry will be UNSET_ADDR, overwritten by the ELF header e_entry,
> >>> and fail the check against the virt address range.
> >
> > Oh, these should be CONFIG_XEN_PVH=y and CONFIG_XEN_PV=n
> >
> >>> Change the code to only check virt_entry against the virtual address
> >>> range if it was set upon entry to the function.
> >>
> >> Not checking at all seems wrong to me. The ELF spec anyway says
> >> "virtual address", so an out of bounds value is at least suspicious.
> >>
> >>> Maybe the overwriting of virt_entry could be removed, but I don't know
> >>> if there would be unintended consequences where (old?) kernels don't
> >>> have an elfnote, but do have an in-range e_entry?  The failing kernel I
> >>> just looked at has an e_entry of 0x1000000.
> >>
> >> And if you dropped the overwriting, what entry point would we use
> >> in the absence of an ELF note?
> >
> > elf_xen_note_check currently has:
> >
> >     /* PVH only requires one ELF note to be set */
> >     if ( parms->phys_entry != UNSET_ADDR32 )
> >     {
> >         elf_msg(elf, "ELF: Found PVH image\n");
> >         return 0;
> >     }
> >
> >> I'd rather put up the option of adjusting the entry (or the check),
> >> if it looks like a valid physical address.
> >
> > The function doesn't know if the image will be booted PV or PVH, so I
> > guess we do all the checks, but use 'parms->phys_entry != UNSET_ADDR32
> > && parms->virt_entry == UNSET_ADDR' to conditionally skip checking
> > virt?
>
> Like Jürgen, the purpose of the patch hadn't become clear to me
> from reading the description. As I understand it now, we're currently
> refusing to boot such a kernel for no reason. If that's correct,
> perhaps you could say so in the description in a more direct way?

Yes, sorry I didn't state it clearly.  You are correct: libxc fails
with "xc_dom_find_loader: no loader found" for a Linux kernel with
PHYS32_ENTRY but without ENTRY.

> As far as actual code adjustments go - how much of
> elf_xen_addr_calc_check() is actually applicable when booting PVH?

I don't know...

> And why is there no bounds check of ->phys_entry paralleling the
> ->virt_entry one?

What is the purpose of this checking?  It's sanity checking, which is
generally good, but what is the harm from failing the checks?  A
corrupt kernel can crash itself?  Maybe you could start executing
something (the initramfs?) instead of the actual kernel?

> On the whole, as long as we don't know what mode we're planning to
> boot in, we can't skip any checks, as the mere presence of
> XEN_ELFNOTE_PHYS32_ENTRY doesn't mean that's also what gets used.
> Therefore simply bypassing any of the checks is not an option.

elf_xen_note_check() early exits when it finds phys_entry set, so
there is already some bypassing.

> In
> particular what you suggest would lead to failure to check
> e_entry-derived ->virt_entry when the PVH-specific note is
> present but we're booting in PV mode. For now I don't see how to
> address this without making the function aware of the intended
> booting mode.

Yes, the relevant checks depend on the desired booting mode.

The e_entry use seems a little problematic.  You said the ELF
Specification states it should be a virtual address, but Linux seems
to fill it with a physical address.  You could use a heuristic: if
e_entry < 0 (0xffff...), compare it against the virtual address
range, otherwise check it against the physical one?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:51:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:51:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7448.19460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4b0-0007u5-Or; Thu, 15 Oct 2020 14:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7448.19460; Thu, 15 Oct 2020 14:51:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4b0-0007ty-Lh; Thu, 15 Oct 2020 14:51:10 +0000
Received: by outflank-mailman (input) for mailman id 7448;
 Thu, 15 Oct 2020 14:51:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Un/f=DW=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kT4az-0007ta-9U
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:51:09 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ce8ef06-f75a-46f2-95d7-d9fa7b388a48;
 Thu, 15 Oct 2020 14:51:08 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id r127so3884008lff.12
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 07:51:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=bV4f1Qg+4Gv35N2wit/f7NjVpNxZdd3YNwQ6SWe/ma0=;
        b=OYat1tu/tT4xPSTOsQRpW6iStW0f2RAvrVqZM2wKYQ5qUpvTlkK0tW9fFDuKWCuYbx
         Ncv0Ts6rNHB0uqaIpNXIlQ0S7DkES1kA/yhljcF4/ynUIzo2tM/o87WVzk1bRBbjMP4B
         cSDdjZaLugIPplKKgZSVVV3mPnKPKuLVd8WksM8JbCG0OUG/fF4UUP3GMaKhdlwT/+zV
         HsXGKLICg97DrPVSd3rJxLq7PJBzjYNu3PyZJmM3JVfYKMvtP01owdX2hLVjWhsfVufr
         +rj+v3YZWE2VDHu65ilvtRlhbs8cnQqPZdBP9vlMVxEzgioF/fPt6CY7wLSadZ3WeNds
         7rrQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=bV4f1Qg+4Gv35N2wit/f7NjVpNxZdd3YNwQ6SWe/ma0=;
        b=cljIdth65gARwi+dKONofHoInrsAE00/k8u/T+mWaZyc07xWxxOVN1dFGBOiIVrOTE
         YHzav0UfEoxzZ4JifhvXAE9iev8bX+WrmhI/VvpwzZUkeRA3Ti+ZT+FPDWHeKC/hjCMG
         tgDWMnf4CTB4/IVgCfTdtQAM7JwDXOrZ/m0KwRGmLP98aX+9JG+SWw/ni5vDvxvXJMZc
         Vpwxn8O8Ku/PJLWpTjfoP2YzWOAXHk2B1Btgq2RYBaRjI37f43EK3msEm09HZhIbksV9
         rV8rfz7YXNkf6U212VOacIvXLopf3dTqlFkaKXuy41ylPE0fjS0L/o3NITbjW81PDooM
         pO3Q==
X-Gm-Message-State: AOAM5332XBtGOE4TZe+cXUtKICRDYoQXGwRz9W9Te+B+2o+Frnx3sVWG
	pflVsVBfPli3FLqduqdThmBlVYmbEhvYbBU5G7I=
X-Google-Smtp-Source: ABdhPJz4cz6ryROltTYzKnZPApyMbINiaXN8B0RTQpbg1uR4m85nIpOyOyacfPQKPIETr+/AP5JTkJuMVSaTsmdK8zM=
X-Received: by 2002:ac2:52b7:: with SMTP id r23mr1126232lfm.30.1602773467610;
 Thu, 15 Oct 2020 07:51:07 -0700 (PDT)
MIME-Version: 1.0
References: <20201014175342.152712-1-jandryuk@gmail.com> <20201014175342.152712-3-jandryuk@gmail.com>
 <6cd9363c-ac0c-ea68-c8e7-9fd3cd30a89b@oracle.com> <4e31301b-0e57-ac89-cd71-6ad5e1a66628@citrix.com>
 <b097aec1-e549-a89a-ce43-e9c0a71179f2@oracle.com>
In-Reply-To: <b097aec1-e549-a89a-ce43-e9c0a71179f2@oracle.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 15 Oct 2020 10:50:55 -0400
Message-ID: <CAKf6xpuRZKF56yyOR-Q6oBSJUpRSr0P+XVJD7DvaS6GWnNcMTg@mail.gmail.com>
Subject: Re: [PATCH 2/2] xen: Kconfig: nest Xen guest options
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, x86@kernel.org, 
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	open list <linux-kernel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 15, 2020 at 9:17 AM <boris.ostrovsky@oracle.com> wrote:
>
>
> On 10/15/20 9:10 AM, Andrew Cooper wrote:
> > On 15/10/2020 13:37, boris.ostrovsky@oracle.com wrote:
> >> On 10/14/20 1:53 PM, Jason Andryuk wrote:
> >>> +config XEN_512GB
> >>> +   bool "Limit Xen pv-domain memory to 512GB"
> >>> +   depends on XEN_PV && X86_64
> >> Why is X86_64 needed here?
> > 512G support was implemented using a direct-mapped P2M, and is rather
> > beyond the virtual address capabilities of 32bit.
> >
>
> Yes, my point was that XEN_PV already depends on X86_64.

Oh, thanks for catching this.  I re-introduced it by accident when
rebasing the patches.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 14:52:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 14:52:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7454.19474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4cf-000864-8V; Thu, 15 Oct 2020 14:52:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7454.19474; Thu, 15 Oct 2020 14:52:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4cf-00085x-3q; Thu, 15 Oct 2020 14:52:53 +0000
Received: by outflank-mailman (input) for mailman id 7454;
 Thu, 15 Oct 2020 14:52:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Un/f=DW=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kT4cd-00085l-PX
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:52:51 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c3975da-4d0a-447b-a1f3-7f5faef25476;
 Thu, 15 Oct 2020 14:52:51 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id c21so3408915ljn.13
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 07:52:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=bB0aML9iHly6lbfGENfyVAF9mzhMQjNP2v7ZMJRbBXc=;
        b=NYyuUoopM6Ye/9aTGGSOB131XjOnZjL/iS+3AOgzlbjqNBYntDH3hqF9Sa8z6T7qQO
         TFbfBig10huD73ZxXtbp6+MdY78PqWl46lNCDHJM4oMUiyPL/JbUuvpvSBlkaqgZ+tnf
         4zRprFMtakP7gVcQDwkAdzbbj2XVtPuDEL9MU8CfwnT1R/eg7qkNqGvCLhyf384K4Eut
         b5SOmAz6vqebc7qQRJZYfeOlodwWqVbroH95UIzhduCysDyNOWbP7yByoNfEyMNtzxlG
         STrJsFujUqu9tnOPCTY18x7xbvka2+B252zj3bRFlJvrvd7gKBrRxazqpkew1qU6YVZY
         V1hg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=bB0aML9iHly6lbfGENfyVAF9mzhMQjNP2v7ZMJRbBXc=;
        b=sen0ngDJX9YZFTeNjP3E/zsUlcTfKb4+wXhEfEt+i1WBz7mNPyXKQ0e2udReCYN5L9
         uHPEdtzhNvwDHKmXjYqOogK/XusUyLp7sSMBOPrw0do91W6QF73RBSXJXaxCFrI9wq2G
         0zAO4ZTvxUzw44N+az4OJnLmk3qnlS+rCWUG7jokNggwhqe09FAMMhyWskgXOUspqeJs
         L+uKcZxmvt+upOnFBxgPcdO8RejPJUrGp4/jBCr2AYMLBPcrtZHhjLOOjfA83AXhM9vp
         Q6SGxO8UF0sN3ZY9TpzjBgr7oCZp//qxC1ciAPdkFG4UHtnmeeckoiA+Uc87zk+em3ak
         ncgw==
X-Gm-Message-State: AOAM532bIr+H95eEfufe4zkYHWhvZ22xT+peeQEh7nQe0dA40fhue0CJ
	MwEGs/dNiPvJlBce3ZS4ZhHdZIiI7bSX/j2GuNg=
X-Google-Smtp-Source: ABdhPJyx30xOfI/dCOEDwzdW1JN7ZaYBa0FLpSKwq8j154fT2ykNaCexw1AjUCzXXU/XmYTVCAGEXsCBq8BdXAsciwU=
X-Received: by 2002:a2e:9ccd:: with SMTP id g13mr1537362ljj.127.1602773570017;
 Thu, 15 Oct 2020 07:52:50 -0700 (PDT)
MIME-Version: 1.0
References: <20201014175342.152712-1-jandryuk@gmail.com> <20201014175342.152712-3-jandryuk@gmail.com>
 <d8a8ed95-ed55-4ccf-1b54-8d97db908742@suse.com>
In-Reply-To: <d8a8ed95-ed55-4ccf-1b54-8d97db908742@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 15 Oct 2020 10:52:38 -0400
Message-ID: <CAKf6xpv4Kborx8-0UvadyyzPRGg0TLfD1RWxmkM1PnfPKuXOaA@mail.gmail.com>
Subject: Re: [PATCH 2/2] xen: Kconfig: nest Xen guest options
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, x86@kernel.org, 
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	open list <linux-kernel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 15, 2020 at 5:42 AM Jürgen Groß <jgross@suse.com> wrote:
>
> On 14.10.20 19:53, Jason Andryuk wrote:
> > Moving XEN_512GB allows it to nest under XEN_PV.  That also allows
> > XEN_PVH to nest under XEN as a sibling to XEN_PV and XEN_PVHVM giving:
> >
> > [*]   Xen guest support
> > [*]     Xen PV guest support
> > [*]       Limit Xen pv-domain memory to 512GB
> > [*]       Xen PV Dom0 support
>
> This currently has wrong text/semantics:
>
> It should be split to CONFIG_XEN_DOM0 and CONFIG_XEN_PV_DOM0.
>
> Otherwise the backends won't be enabled by default for a PVH-only
> config meant to be Dom0-capable.
>
> You don't have to do that in your patches if you don't want to, but
> I wanted to mention it with you touching this area of Kconfig.

Yes, good point.  I had not considered that.
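For concreteness, the split Jürgen describes might look roughly like the sketch below. This is a hypothetical Kconfig fragment, not the actual patch; the real split would also need the backend options to key their defaults off the mode-independent symbol.

```kconfig
config XEN_DOM0
	bool "Xen Dom0 support"
	# Mode-independent: backends can default on for a PVH-only
	# Dom0-capable config as well as a PV one.
	depends on XEN_PV || XEN_PVH

config XEN_PV_DOM0
	# PV-specific Dom0 pieces stay behind this derived symbol.
	def_bool y
	depends on XEN_PV && XEN_DOM0
```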

> > [*]     Xen PVHVM guest support
> > [*]     Xen PVH guest support
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:00:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:00:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7459.19488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4jT-0008N3-VV; Thu, 15 Oct 2020 14:59:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7459.19488; Thu, 15 Oct 2020 14:59:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4jT-0008Mw-SP; Thu, 15 Oct 2020 14:59:55 +0000
Received: by outflank-mailman (input) for mailman id 7459;
 Thu, 15 Oct 2020 14:59:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Un/f=DW=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kT4jS-0008Mr-Sy
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 14:59:54 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c68e8a5-8ddd-4327-a660-80e7be0e2884;
 Thu, 15 Oct 2020 14:59:54 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id h6so3985272lfj.3
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 07:59:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Be+an15LrnaPiOtwboSIUoBz1mYy3noplAiogE4hPMs=;
        b=FDot9ShXu39lveCF+OW0yRUdygZ3wBX/8lxlKrgEBuG5B7eTyrubDH7f7TJSQfo3RR
         iehEgRownftvmxjDluVRSdsOAElZGd2VrWbbNg8EMGZC09bbj7l3Y/gInN3kObFUKfoP
         ZwTu3g2/Aau1OV/ovEFWnibmFaOFvNmP4kJve/qVXAutw3AFCoUiWXG3yuPpiu5EMbu1
         sIqBGWwQZEgq0C9fkY/ARxBVHT+4gXTF37ZLLZ0M0XyPLW7AsgbaqFQ2GHiLrQLnUEDq
         CinoKqIzX4xb6Ucf/544zW7O4jDBJ/rBMZR/HZtPnrpA3VcMyA64SlCKHzFSRYKXl4vH
         3o+Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Be+an15LrnaPiOtwboSIUoBz1mYy3noplAiogE4hPMs=;
        b=cxfKiF9HSZkqFDjcRTRbDGGoB6ihyEBLZAGFexF/Mv3+Qv/rWrqJC1zxcYPhNibc9U
         HDpoC7G37oTzKTW6Dt5/Y+TAPhdHM1aYq0TQP9lz7IoYMQPrg8Azu0yF3lFbvrStjb0p
         dR9NVD9KImI67tCQVnSVpBXWQaRbTif9Xgm8LWdTFL/aM2NgK/1Iw9dWITmbcDiKbWyc
         aoKZf+2+vQjghQw9QhT5/LCSTb26RDTc99K8/CzwCnlItgp6Ds9s+g+DZ7eyT/gNfWYc
         q/rxO5Ek844wzz+9c5/GQL+HQPeWP4lCkZ2xef54UE6I8oxYMA2qNIpGwTtYDuGPzZlu
         4tyw==
X-Gm-Message-State: AOAM53093qjSN/FrWUI0l7r9GfDnbMXwqHJjtoNxmv3JAeWNA/ZJAlyx
	XF4xzrCb2SSjxyK+XEN8e5iu399YLnTZ5h3QKfE=
X-Google-Smtp-Source: ABdhPJx4L4/CQ4jjGqYEQKoZVBxedUwEFtk0SDdes7DwZrBGrzlWTuA3aoI8E4T/46sDTPw9nu3QcwgAWBcqXgEv7L8=
X-Received: by 2002:ac2:52b7:: with SMTP id r23mr1135747lfm.30.1602773992720;
 Thu, 15 Oct 2020 07:59:52 -0700 (PDT)
MIME-Version: 1.0
References: <20201014175342.152712-1-jandryuk@gmail.com> <20201014175342.152712-2-jandryuk@gmail.com>
 <b74a3f83-cd8a-34a3-b436-95141f01cb20@suse.com>
In-Reply-To: <b74a3f83-cd8a-34a3-b436-95141f01cb20@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 15 Oct 2020 10:59:40 -0400
Message-ID: <CAKf6xps+mAFdfk8uBw=aMsAFNYmt4ETPkB8dwT3sTv-qPbVENw@mail.gmail.com>
Subject: Re: [PATCH 1/2] xen: Remove Xen PVH/PVHVM dependency on PCI
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Thomas Gleixner <tglx@linutronix.de>, 
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, x86@kernel.org, 
	"H. Peter Anvin" <hpa@zytor.com>, xen-devel <xen-devel@lists.xenproject.org>, 
	open list <linux-kernel@vger.kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 15, 2020 at 4:10 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 14.10.2020 19:53, Jason Andryuk wrote:
> > @@ -76,7 +80,9 @@ config XEN_DEBUG_FS
> >         Enabling this option may incur a significant performance overhead.
> >
> >  config XEN_PVH
> > -     bool "Support for running as a Xen PVH guest"
> > +     bool "Xen PVH guest support"
>
> Tangential question: Is "guest" here still appropriate, i.e.
> isn't this option also controlling whether the kernel can be
> used in a PVH Dom0?

Would you prefer something more generic, like "Xen PVH support" and
"Support for running in Xen PVH mode"?

> >       def_bool n
>
> And is this default still appropriate?

We probably want to flip it on, yes.  PVH is the future, isn't it?
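Putting both suggestions together, the option might end up as something like this (a hypothetical sketch, not a tested patch):

```kconfig
config XEN_PVH
	bool "Xen PVH support"
	# Flip the old "def_bool n" to on by default.
	default y
	help
	  Support for running in Xen PVH mode.
```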

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:02:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:02:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7464.19514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4ls-0000pD-Sv; Thu, 15 Oct 2020 15:02:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7464.19514; Thu, 15 Oct 2020 15:02:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4ls-0000p6-Pq; Thu, 15 Oct 2020 15:02:24 +0000
Received: by outflank-mailman (input) for mailman id 7464;
 Thu, 15 Oct 2020 15:02:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kT4lr-0000ou-VG
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:02:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a344ae63-f6b5-4bff-81c2-84fdf5653d82;
 Thu, 15 Oct 2020 15:02:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E8625AF87;
 Thu, 15 Oct 2020 15:02:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602774142;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LOAorilrTXfeOIZQ8lkczi+c0MEjjEKFoJ24vlzPups=;
	b=GUhP8sHjWJlLvc81IjV2qAGyp9DPLy2UZxvgQrm4q9TjosVlEgw5GsrwMe+FKzcnMzexg/
	rrpix3Sv2u38vnE0/MQBIunUXc2JdRjV2qrE0Hlt+4wL/k8g2+7Rju4hfLCoOX29S/V/TX
	bNPmYaPvD8NARY29yRsCwl2nEjfKae0=
Subject: Re: [PATCH 1/2] xen: Remove Xen PVH/PVHVM dependency on PCI
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 open list <linux-kernel@vger.kernel.org>
References: <20201014175342.152712-1-jandryuk@gmail.com>
 <20201014175342.152712-2-jandryuk@gmail.com>
 <b74a3f83-cd8a-34a3-b436-95141f01cb20@suse.com>
 <CAKf6xps+mAFdfk8uBw=aMsAFNYmt4ETPkB8dwT3sTv-qPbVENw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3919ef15-379b-cc1e-994c-c33b23865afd@suse.com>
Date: Thu, 15 Oct 2020 17:02:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <CAKf6xps+mAFdfk8uBw=aMsAFNYmt4ETPkB8dwT3sTv-qPbVENw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 16:59, Jason Andryuk wrote:
> On Thu, Oct 15, 2020 at 4:10 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 14.10.2020 19:53, Jason Andryuk wrote:
>>> @@ -76,7 +80,9 @@ config XEN_DEBUG_FS
>>>         Enabling this option may incur a significant performance overhead.
>>>
>>>  config XEN_PVH
>>> -     bool "Support for running as a Xen PVH guest"
>>> +     bool "Xen PVH guest support"
>>
>> Tangential question: Is "guest" here still appropriate, i.e.
>> isn't this option also controlling whether the kernel can be
>> used in a PVH Dom0?
> 
> Would you like something more generic like "Xen PVH support" and
> "Support for running in Xen PVH mode"?

Yeah, just dropping "guest" would be fine with me. No idea how
to reflect that PVH Dom0 isn't supported, yet.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:02:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:02:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7462.19500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4lh-0000lj-Do; Thu, 15 Oct 2020 15:02:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7462.19500; Thu, 15 Oct 2020 15:02:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4lh-0000lc-Ac; Thu, 15 Oct 2020 15:02:13 +0000
Received: by outflank-mailman (input) for mailman id 7462;
 Thu, 15 Oct 2020 15:02:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kT4lf-0000lX-O8
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:02:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e95a8884-7c50-489d-880d-aeb685c9c06f;
 Thu, 15 Oct 2020 15:02:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT4le-00075M-0d; Thu, 15 Oct 2020 15:02:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT4ld-0006Eh-RS; Thu, 15 Oct 2020 15:02:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kT4ld-00044P-Qx; Thu, 15 Oct 2020 15:02:09 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=r5BMwGfG0lbeS4mprBv5xIF5Y8QSFwrVoZSMDWk0O+o=; b=5TJtqgKYKT4UTWkvTRmSvlOQC6
	gPygZUPeJ1DCiMpP3wPomaDIVaPESj0nD63G1gVqm2GQFOohC756/1bqsJEsjnj9cegn3OhbI9I7b
	e0GOfp+tXoJRA7DaR430jiY5+TvN8Igt2Dbmlqzfq2v+gulceuBqHB5O+TBQhFZiZr2E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155839-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 155839: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    seabios=58a44be024f69d2e4d2b58553529230abdd3935e
X-Osstest-Versions-That:
    seabios=c685fe3ff2d402caefc1487d99bb486c4a510b8b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 15:02:09 +0000

flight 155839 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155839/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155770
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155770
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155770
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155770
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              58a44be024f69d2e4d2b58553529230abdd3935e
baseline version:
 seabios              c685fe3ff2d402caefc1487d99bb486c4a510b8b

Last test of basis   155770  2020-10-13 09:10:37 Z    2 days
Testing same since   155839  2020-10-15 09:39:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   c685fe3..58a44be  58a44be024f69d2e4d2b58553529230abdd3935e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:04:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7468.19526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4nQ-00012X-7z; Thu, 15 Oct 2020 15:04:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7468.19526; Thu, 15 Oct 2020 15:04:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4nQ-00012Q-4x; Thu, 15 Oct 2020 15:04:00 +0000
Received: by outflank-mailman (input) for mailman id 7468;
 Thu, 15 Oct 2020 15:03:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT4nO-00012I-NH
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:03:58 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7ea9203-85da-498a-a4fe-86851bc1865d;
 Thu, 15 Oct 2020 15:03:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602774237;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=AmVOV8edVjCau6Ds8dmXzC4qa5h7gP2n9yy/6GpM2t0=;
  b=K2bU8O/Zei2+PtRwS8I7V+1h0Ep7f0+lh19wriJL8qbQ8uONtKa/TCFp
   5l16qaz+IudWxMSyflf4aTlZ0QjIm+ZmDS5z6m6lgEPbJ0To9hyCzdgyG
   c6Z7A6WqsgCDZTbSkBdwi5+YifabMK9Vvm5USbyw3IbKl+iqDXlVkI2TM
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: MRlUvHr24y02n57VARV8YpZ941IgLFI4mLFLOVFgb3OsZHBBEucJvbb4MbXdyGE5/OqaUPmA9g
 n3C4G2+iV/wNRuIO0Va0WiNwWC8eLvBH+i7yRrktub9UkCBnHiMKBSehgJnQTAT9T329VkYJ99
 WbqzP8Er6NMZ6A2vY28pb7djUqot+h1pyoQk523K4x8yCCB+QwdqhsP7aX2YzTgKBXDA7DgwS7
 dp3jMko5dRTF2ziY698mYNw+BtcOpoVpmP9p3xQ9Bw9YnISHnKmrQf2xZIZL002WHMb4tCr73C
 Fys=
X-SBRS: 2.5
X-MesageID: 29336058
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29336058"
Date: Thu, 15 Oct 2020 17:03:26 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, Jason Andryuk <jandryuk@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
Message-ID: <20201015150326.GE68032@Air-de-Roger>
References: <20201014153150.83875-1-jandryuk@gmail.com>
 <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
 <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 15, 2020 at 09:00:09AM +0200, Jan Beulich wrote:
> On 14.10.2020 18:27, Jason Andryuk wrote:
> > On Wed, Oct 14, 2020 at 12:02 PM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 14.10.2020 17:31, Jason Andryuk wrote:
> >>> Linux kernels only have an ENTRY elfnote when built with CONFIG_PV.  A
> >>> kernel built with CONFIG_PVH=y CONFIG_PV=n lacks the note.  In this case,
> >>> virt_entry will be UNSET_ADDR, overwritten by the ELF header e_entry,
> >>> and fail the check against the virt address range.
> > 
> > Oh, these should be CONFIG_XEN_PVH=y and CONFIG_XEN_PV=n
> > 
> >>> Change the code to only check virt_entry against the virtual address
> >>> range if it was set upon entry to the function.
> >>
> >> Not checking at all seems wrong to me. The ELF spec anyway says
> >> "virtual address", so an out of bounds value is at least suspicious.
> >>
> >>> Maybe the overwriting of virt_entry could be removed, but I don't know
> >>> if there would be unintended consequences where (old?) kernels don't
> >>> have an elfnote, but do have an in-range e_entry?  The failing kernel I
> >>> just looked at has an e_entry of 0x1000000.
> >>
> >> And if you dropped the overwriting, what entry point would we use
> >> in the absence of an ELF note?
> > 
> > elf_xen_note_check currently has:
> > 
> >     /* PVH only requires one ELF note to be set */
> >     if ( parms->phys_entry != UNSET_ADDR32 )
> >     {
> >         elf_msg(elf, "ELF: Found PVH image\n");
> >         return 0;
> >     }
> > 
> >> I'd rather put up the option of adjusting the entry (or the check),
> >> if it looks like a valid physical address.
> > 
> > The function doesn't know if the image will be booted PV or PVH, so I
> > guess we do all the checks, but use 'parms->phys_entry != UNSET_ADDR32
> > && parms->virt_entry == UNSET_ADDR' to conditionally skip checking
> > virt?
> 
> Like Jürgen, the purpose of the patch hadn't become clear to me
> from reading the description. As I understand it now, we're currently
> refusing to boot such a kernel for no reason. If that's correct,
> perhaps you could say so in the description in a more direct way?
> 
> As far as actual code adjustments go - how much of
> elf_xen_addr_calc_check() is actually applicable when booting PVH?

I think the only relevant check for PVH would be the symtab loading
(XEN_ELFNOTE_BSD_SYMTAB).

> And why is there no bounds check of ->phys_entry paralleling the
> ->virt_entry one?
> 
> On the whole, as long as we don't know what mode we're planning to
> boot in, we can't skip any checks, as the mere presence of
> XEN_ELFNOTE_PHYS32_ENTRY doesn't mean that's also what gets used.
> Therefore simply bypassing any of the checks is not an option. In
> particular what you suggest would lead to failure to check
> e_entry-derived ->virt_entry when the PVH-specific note is
> present but we're booting in PV mode. For now I don't see how to
> address this without making the function aware of the intended
> booting mode.

That seems the only viable approach. Maybe an intended mode field could
be added to elf_dom_parms in order to signal this?
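That suggestion could be sketched roughly as below. All names here are hypothetical stand-ins (a trimmed `dom_parms` mirroring `elf_dom_parms`, an invented `elf_boot_mode` enum and `check_entry` helper), not the real libelf interface; the point is only that an intended-mode field lets the check branch correctly.

```c
#include <assert.h>
#include <stdint.h>

#define UNSET_ADDR   UINT64_MAX  /* mirrors libelf's "not set" sentinels */
#define UNSET_ADDR32 UINT32_MAX

/* Hypothetical: which boot mode the caller intends to use. */
enum elf_boot_mode { BOOT_MODE_PV, BOOT_MODE_PVH };

struct dom_parms {               /* trimmed stand-in for elf_dom_parms */
    enum elf_boot_mode mode;
    uint64_t virt_entry;         /* from ELF note, or e_entry fallback */
    uint32_t phys_entry;         /* from XEN_ELFNOTE_PHYS32_ENTRY      */
    uint64_t virt_kstart, virt_kend;
};

/* Return 0 on success, -1 on a failed entry-point check. */
static int check_entry(const struct dom_parms *p)
{
    if (p->mode == BOOT_MODE_PVH)
        /* PVH boots via PHYS32_ENTRY; virt_entry is irrelevant here. */
        return p->phys_entry != UNSET_ADDR32 ? 0 : -1;

    /* PV: the e_entry-derived virt_entry must lie inside the image. */
    return (p->virt_entry >= p->virt_kstart &&
            p->virt_entry < p->virt_kend) ? 0 : -1;
}
```

With this shape, a CONFIG_XEN_PV=n kernel whose virt_entry stayed UNSET_ADDR passes when booted PVH, while the same image attempted in PV mode still fails the virtual-range check.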

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:09:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:09:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7473.19538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4sC-0001GL-Sn; Thu, 15 Oct 2020 15:08:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7473.19538; Thu, 15 Oct 2020 15:08:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4sC-0001GE-Ph; Thu, 15 Oct 2020 15:08:56 +0000
Received: by outflank-mailman (input) for mailman id 7473;
 Thu, 15 Oct 2020 15:08:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT4sB-0001G9-4o
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:08:55 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e29e6eb1-825e-4914-a67d-cb1db6cbdb98;
 Thu, 15 Oct 2020 15:08:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602774534;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=83CI7dEo4abtDvi3SxA31wt2De+c17XHCkaW0cRBP4E=;
  b=aTSxj3vpN0dmN6Z2Ix9kEHMbxKco0DyKovB7nzlVo41GBWAG/w91HU8L
   tXmJNrfB78TduUJ7LsO/tNpDMEnN4Ti8iHfuMCklo9o0JtMPBuusNMCBe
   2BnmiBwnoIYIzxn30OYiYLkNvouRKjTLx4RaYx/LYDz5I+z8pd2rK1f5U
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CjATV4Crrjol+n6HDirYBQsJMKcs7EyyWxNtiTk9h6eDIAlG6bdANvubnE/XRTJVY6tSS7bhhy
 CFUPRJ66GqBhVAau//zD8/EEAezobQHNRhXbyJN7D3J/SOUKqAorBdb+kR6InWVMOUK7zygELd
 wesJe14tWMEj5Qbkb2AaBJo2zq75S6QMpvJEDvZW4VE+cbl7RWTdSuJcNHl96WXz3Ixil7KYkK
 TvtRUO2KWXZQbGt9uSF9kaILv3XOFqkPPUaxQvgNB7K6Bj5n3KJyR/5TBcPXLSiAceyVLz8qsL
 0DY=
X-SBRS: 2.5
X-MesageID: 29067109
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29067109"
Date: Thu, 15 Oct 2020 17:08:44 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Jason Andryuk <jandryuk@gmail.com>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	<x86@kernel.org>, "H. Peter Anvin" <hpa@zytor.com>, xen-devel
	<xen-devel@lists.xenproject.org>, open list <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/2] xen: Remove Xen PVH/PVHVM dependency on PCI
Message-ID: <20201015150844.GF68032@Air-de-Roger>
References: <20201014175342.152712-1-jandryuk@gmail.com>
 <20201014175342.152712-2-jandryuk@gmail.com>
 <b74a3f83-cd8a-34a3-b436-95141f01cb20@suse.com>
 <CAKf6xps+mAFdfk8uBw=aMsAFNYmt4ETPkB8dwT3sTv-qPbVENw@mail.gmail.com>
 <3919ef15-379b-cc1e-994c-c33b23865afd@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <3919ef15-379b-cc1e-994c-c33b23865afd@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 15, 2020 at 05:02:21PM +0200, Jan Beulich wrote:
> On 15.10.2020 16:59, Jason Andryuk wrote:
> > On Thu, Oct 15, 2020 at 4:10 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 14.10.2020 19:53, Jason Andryuk wrote:
> >>> @@ -76,7 +80,9 @@ config XEN_DEBUG_FS
> >>>         Enabling this option may incur a significant performance overhead.
> >>>
> >>>  config XEN_PVH
> >>> -     bool "Support for running as a Xen PVH guest"
> >>> +     bool "Xen PVH guest support"
> >>
> >> Tangential question: Is "guest" here still appropriate, i.e.
> >> isn't this option also controlling whether the kernel can be
> >> used in a PVH Dom0?
> > 
> > Would you like something more generic like "Xen PVH support" and
> > "Support for running in Xen PVH mode"?
> 
> Yeah, just dropping "guest" would be fine with me. No idea how
> to reflect that PVH Dom0 isn't supported, yet.

The fact that it isn't supported by Xen shouldn't be reflected in the
Linux configuration, as the two are independent; i.e. you could run this
Linux kernel on a future version of Xen where PVH dom0 is supported.

Xen already prints a warning when booting a PVH dom0 about it not being
a supported mode.
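For reference, a minimal Kconfig sketch of the wording being converged on in this thread (illustrative only; the prompt and help text are Jason's proposal from the quoted exchange, not a committed patch):

```kconfig
config XEN_PVH
	bool "Xen PVH support"
	def_bool n
	help
	  Support for running in Xen PVH mode.
```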

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:09:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:09:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7475.19549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4sj-0001MA-6E; Thu, 15 Oct 2020 15:09:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7475.19549; Thu, 15 Oct 2020 15:09:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4sj-0001M3-2v; Thu, 15 Oct 2020 15:09:29 +0000
Received: by outflank-mailman (input) for mailman id 7475;
 Thu, 15 Oct 2020 15:09:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BX1S=DW=amazon.com=prvs=5504cf30e=sjpark@srs-us1.protection.inumbo.net>)
 id 1kT4sh-0001Lv-GZ
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:09:27 +0000
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 340d506d-16fa-4408-8491-f049c003ce03;
 Thu, 15 Oct 2020 15:09:27 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1a-807d4a99.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 15 Oct 2020 15:09:21 +0000
Received: from EX13D31EUB001.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1a-807d4a99.us-east-1.amazon.com (Postfix) with ESMTPS
 id 65806A20A0; Thu, 15 Oct 2020 15:09:18 +0000 (UTC)
Received: from u3f2cd687b01c55.ant.amazon.com (10.43.160.125) by
 EX13D31EUB001.ant.amazon.com (10.43.166.210) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 15 Oct 2020 15:09:13 +0000
X-Inumbo-ID: 340d506d-16fa-4408-8491-f049c003ce03
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1602774568; x=1634310568;
  h=from:to:cc:subject:date:message-id:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=xI1wCRbxJCFX6yOdKgXcnKW1y6wTWNYIfl97uYZxpXE=;
  b=Vsjkyc1fSH69yCRzbm7KZgLcZ4D5al/jk3pmbnVEPrp0v40cWisL8N4v
   0uKrSD4N8yImivzTtD4aE07m/xbxepqapYq8lSiATPveM8XzwGKYLsGjQ
   EMraV5CcP6eIqI1fdu2FEyopPBxnHdwXbU571dpJJad4bEr6mHWxeULiI
   c=;
X-IronPort-AV: E=Sophos;i="5.77,379,1596499200"; 
   d="scan'208";a="59532495"
From: SeongJae Park <sjpark@amazon.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: <linux-kernel@vger.kernel.org>, Konrad Rzeszutek Wilk
	<konrad.wilk@oracle.com>, Jens Axboe <axboe@kernel.dk>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, SeongJae Park <sjpark@amazon.de>,
	<xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>, "J .
 Roeleveld" <joost@antarean.org>, =?UTF-8?q?J=C3=BCrgen=20Gro=C3=9F?=
	<jgross@suse.com>
Subject: Re: [PATCH 1/2] xen/blkback: turn the cache purge LRU interval into a parameter
Date: Thu, 15 Oct 2020 17:08:49 +0200
Message-ID: <20201015150849.3844-1-sjpark@amazon.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
In-Reply-To: <20201015142416.70294-2-roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-Originating-IP: [10.43.160.125]
X-ClientProxiedBy: EX13D02UWC003.ant.amazon.com (10.43.162.199) To
 EX13D31EUB001.ant.amazon.com (10.43.166.210)
Precedence: Bulk

On Thu, 15 Oct 2020 16:24:15 +0200 Roger Pau Monne <roger.pau@citrix.com> wrote:

> Assume that reads and writes to the variable will be atomic. The worst
> that could happen is that one of the LRU intervals is not calculated
> properly if a partially written value is read, but that would only be
> a transient issue.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> Cc: Jens Axboe <axboe@kernel.dk>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Cc: SeongJae Park <sjpark@amazon.de>
> Cc: xen-devel@lists.xenproject.org
> Cc: linux-block@vger.kernel.org
> Cc: J. Roeleveld <joost@antarean.org>
> Cc: Jürgen Groß <jgross@suse.com>
> ---
>  Documentation/ABI/testing/sysfs-driver-xen-blkback | 10 ++++++++++
>  drivers/block/xen-blkback/blkback.c                |  9 ++++++---
>  2 files changed, 16 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/ABI/testing/sysfs-driver-xen-blkback b/Documentation/ABI/testing/sysfs-driver-xen-blkback
> index ecb7942ff146..776f25d335ca 100644
> --- a/Documentation/ABI/testing/sysfs-driver-xen-blkback
> +++ b/Documentation/ABI/testing/sysfs-driver-xen-blkback
> @@ -35,3 +35,13 @@ Description:
>                  controls the duration in milliseconds that blkback will not
>                  cache any page not backed by a grant mapping.
>                  The default is 10ms.
> +
> +What:           /sys/module/xen_blkback/parameters/lru_interval
> +Date:           October 2020
> +KernelVersion:  5.10
> +Contact:        Roger Pau Monné <roger.pau@citrix.com>
> +Description:
> +                The LRU mechanism to clean the lists of persistent grants needs
> +                to be executed periodically. This parameter controls the time
> +                interval, in milliseconds, between consecutive executions of
> +                the purge mechanism.

I think noting the default value (100ms) here would be better.


Thanks,
SeongJae Park


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:14:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:14:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7482.19561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4xF-0002Et-Q3; Thu, 15 Oct 2020 15:14:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7482.19561; Thu, 15 Oct 2020 15:14:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4xF-0002Em-Mt; Thu, 15 Oct 2020 15:14:09 +0000
Received: by outflank-mailman (input) for mailman id 7482;
 Thu, 15 Oct 2020 15:14:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kT4xE-0002Eh-LP
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:14:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb5f354e-3026-4aa5-b2be-c582157dd67d;
 Thu, 15 Oct 2020 15:14:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DCC11AF38;
 Thu, 15 Oct 2020 15:14:06 +0000 (UTC)
X-Inumbo-ID: cb5f354e-3026-4aa5-b2be-c582157dd67d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602774847;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JT35egOZX0/8U2WS3c60CxyaHcSgWqzsX0cCoEL980o=;
	b=AMKmceQJs/7D4jxUz5FFqlbRwxsQd8iQkepDpwtsC8LGD+DBuPGdUB83ETUuAQsmhIBvRG
	8ARTsotkGwOo72sVSYiTGpP3zP314Iqg94MgU9TTlqP+4FqKpaKUzTVqG3En/m/85DNnyd
	KWJCVFenSXwMoec46TXFfxiLj8IciOQ=
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20201014153150.83875-1-jandryuk@gmail.com>
 <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
 <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
 <CAKf6xpt_VhJ5r4scuAkWU3aGxgwiYNtHaBDpMoFJS+q837aFiA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d8e93366-0f99-37c7-e5f4-8efaf804d2e2@suse.com>
Date: Thu, 15 Oct 2020 17:14:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <CAKf6xpt_VhJ5r4scuAkWU3aGxgwiYNtHaBDpMoFJS+q837aFiA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 16:50, Jason Andryuk wrote:
> On Thu, Oct 15, 2020 at 3:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>> And why is there no bounds check of ->phys_entry paralleling the
>> ->virt_entry one?
> 
> What is the purpose of this checking?  It's sanity checking which is
> generally good, but what is the harm from failing the checks?  A
> corrupt kernel can crash itself?  Maybe you could start executing
> something (the initramfs?) instead of the actual kernel?

This is at least getting close to a possible security issue.
Booting a hacked-up binary can be a problem afaik.

>> On the whole, as long as we don't know what mode we're planning to
>> boot in, we can't skip any checks, as the mere presence of
>> XEN_ELFNOTE_PHYS32_ENTRY doesn't mean that's also what gets used.
>> Therefore simply bypassing any of the checks is not an option.
> 
> elf_xen_note_check() early exits when it finds phys_entry set, so
> there is already some bypassing.
> 
>> In
>> particular what you suggest would lead to failure to check
>> e_entry-derived ->virt_entry when the PVH-specific note is
>> present but we're booting in PV mode. For now I don't see how to
>> address this without making the function aware of the intended
>> booting mode.
> 
> Yes, the relevant checks depend on the desired booting mode.
> 
> The e_entry use seems a little problematic.  You said the ELF
> Specification states it should be a virtual address, but Linux seems
> to fill it with a physical address.  You could use a heuristic: if
> e_entry < 0 (0xffff...), compare it with the virtual addresses;
> otherwise check against physical?

Don't we have a physical range as well? And don't we adjust the
entry point already in certain cases anyway? Checking and adjustment
can (and should) be brought in sync, and otherwise checking that the
entry point fits at least one of the two ranges may be better than no
checking at all, I think.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:16:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:16:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7486.19573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4zI-0002NP-6B; Thu, 15 Oct 2020 15:16:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7486.19573; Thu, 15 Oct 2020 15:16:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4zI-0002NI-36; Thu, 15 Oct 2020 15:16:16 +0000
Received: by outflank-mailman (input) for mailman id 7486;
 Thu, 15 Oct 2020 15:16:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MKI8=DW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kT4zG-0002ND-Sr
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:16:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f7453203-569d-4d1a-8c1d-b9f8215774a5;
 Thu, 15 Oct 2020 15:16:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB6FFAB0E;
 Thu, 15 Oct 2020 15:16:04 +0000 (UTC)
X-Inumbo-ID: f7453203-569d-4d1a-8c1d-b9f8215774a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602774964;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KI5wVx4FRmhvGSkYP6r+ArTuozQ8lzQpP3Xv0BQ96Kc=;
	b=nEaako62Sup5pZxVlPOqzQ3Lntw9r2NNlM8EpvinE6mMROTPDm97eZ1nZvadqcSSRGm1Dl
	btEF2RHfei1ToFu0mLacIbYjt19igqNAVUeVLleHW2HRKKEB6G9VO2SwFSuD+zvhmSSPyi
	4u4nQMPQXXk8MBI4p1fsVpffB61sWZI=
Subject: Re: [PATCH v2] x86/smpboot: Don't unconditionally call
 memguard_guard_stack() in cpu_smpboot_alloc()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201014184708.17758-1-andrew.cooper3@citrix.com>
 <0ed412d9-c9a2-194b-c953-c74ee102664f@suse.com>
 <0a294279-5de5-3b54-b1f9-847de1159447@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <578a0afd-693a-c704-317e-477e5e27d497@suse.com>
Date: Thu, 15 Oct 2020 17:16:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <0a294279-5de5-3b54-b1f9-847de1159447@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.2020 16:02, Andrew Cooper wrote:
> On 15/10/2020 09:50, Jan Beulich wrote:
>> On 14.10.2020 20:47, Andrew Cooper wrote:
>>> cpu_smpboot_alloc() is designed to be idempotent with respect to partially
>>> initialised state.  This occurs for S3 and CPU parking, where enough state to
>>> handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
>>> when we otherwise want to offline the CPU.
>>>
>>> For simplicity across various configurations, Xen always uses shadow stack
>>> mappings (Read-only + Dirty) for the guard page, irrespective of whether
>>> CET-SS is enabled.
>>>
>>> Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
>>> by first writing out the supervisor shadow stack tokens with plain writes,
>>> then changing the mapping to being read-only.
>>>
>>> This ordering is strictly necessary to configure the BSP, which cannot have
>>> the supervisor tokens be written with WRSS.
>>>
>>> Instead of calling memguard_guard_stack() unconditionally, call it only when
>>> actually allocating a new stack.  Xenheap allocates are guaranteed to be
>>> writeable, and the net result is idempotency WRT configuring stack_base[].
>>>
>>> Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
>>> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>> CC: Wei Liu <wl@xen.org>
>>>
>>> This can more easily be demonstrated with CPU hotplug than S3, and the absence
>>> of bug reports goes to show how rarely hotplug is used.
>>>
>>> v2:
>>>  * Don't break S3/CPU parking in combination with CET-SS.  v1 would, for S3,
>>>    turn the BSP shadow stack into regular mappings, and #DF as soon as the TLB
>>>    shootdown completes.
>> The code change looks correct to me, but since I don't understand
>> this part I'm afraid I may be overlooking something. I understand
>> the "turn the BSP shadow stack into regular mappings" relates to
>> cpu_smpboot_free()'s call to memguard_unguard_stack(), but I
>> didn't think we come through cpu_smpboot_free() for the BSP upon
>> entering or leaving S3.
> 
> The v1 really did fix Marek's repro of the problem.
> 
> The only possible way this can occur is if, somewhere, there is a call
> to cpu_smpboot_free() for CPU0 with remove=0 on the S3 path

I didn't think it was the BSP's stack that got written to, but the
first AP's before letting it run.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:16:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:16:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7487.19586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4zP-0002Qb-JJ; Thu, 15 Oct 2020 15:16:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7487.19586; Thu, 15 Oct 2020 15:16:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT4zP-0002QS-Fp; Thu, 15 Oct 2020 15:16:23 +0000
Received: by outflank-mailman (input) for mailman id 7487;
 Thu, 15 Oct 2020 15:16:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Un/f=DW=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kT4zN-0002Q2-Uf
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:16:22 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3898e9b7-0fff-45de-b312-cf77ddb844c5;
 Thu, 15 Oct 2020 15:16:20 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id m20so3547187ljj.5
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 08:16:20 -0700 (PDT)
X-Inumbo-ID: 3898e9b7-0fff-45de-b312-cf77ddb844c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=EO3Pq/O9VK6xNin3xKMxdHsOCMg9IBLz2CPySgXeILA=;
        b=R882uVN07sZSNnnQOaXiTsFAJzI11bA/kvOZ0stS65DI1vTNl0AUsJ3WkHw4RPMSO9
         B96KIHxgScUpB3FRTUrdCgKGDQLgrHnCPZ9lUWlH1CdMdDhehpXR970515G74w+0MYov
         7+iVbw+MtBoSYX9qUQAQ/C1lC91ot1JVbMOQjk/q2Fo/7VK/V6I3sOnKu/WBFxdmlnHy
         IvB2gZV0tsqB0bw7b+Z7WN6uNyLaxDbpZh6Zb8ALiELYs+rK0xJo3iFhqWwL/Lq99tD7
         LBmvtUdTBn/igSkWt36pmP+min85tz9+kU+ohwqvD8yPhYejRKjuMBJC3yaUmh2N3fPP
         +58w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=EO3Pq/O9VK6xNin3xKMxdHsOCMg9IBLz2CPySgXeILA=;
        b=uZDFVZlUPpnBND6SFQo1BUebtsJ6bHS6w2eCLSUdi6DpK+EIK2J90D4pP+4hLYpnMy
         EptsM6C5a7GxUffW9ZY5M1xWI/iXgzgAHVoKzBEokhSyRxIE7REyF4UuhiSSdt8Ra+ID
         M9dheGpUz2iyCia1lQ55DXK8gj9SSWgjAIVMBdOfMVgExMGzlPsABRR9/ET0VdhcqxQy
         yasi+uiMrMq5C1pqURzC7ZbgHXxzJP0zQ3sNuaqrDVxwyn7f6Qmx8mF7KRoSdBTJgGnS
         rJsbydCavd9Rq4mcbnA8ryi+eRyE8mXFVUVCVwWc8zF+RnQc6N3PwIN5tP5q+JF6n1fR
         sfIg==
X-Gm-Message-State: AOAM533RS8mY6LiatwdbVYMe7srYbrHtEsHrlarRot5bS5pZvlS6Biw8
	jckKKXBppnhnSqB/rOt/Lg+e9TKZ0AvqkUIqDQc=
X-Google-Smtp-Source: ABdhPJxO61tN56k8WsMx+OlUMGaTnPo5KMLMC9qnlkNooaRsP7rjCS0Rw21MllusQl2jFsvwQESW/Kq9/9guT2/BUqc=
X-Received: by 2002:a2e:b0c7:: with SMTP id g7mr1403613ljl.433.1602774979413;
 Thu, 15 Oct 2020 08:16:19 -0700 (PDT)
MIME-Version: 1.0
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com> <20201015113109.GA68032@Air-de-Roger>
In-Reply-To: <20201015113109.GA68032@Air-de-Roger>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 15 Oct 2020 11:16:07 -0400
Message-ID: <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
Subject: Re: i915 dma faults on Xen
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, intel-gfx@lists.freedesktop.org, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 15, 2020 at 7:31 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Wed, Oct 14, 2020 at 08:37:06PM +0100, Andrew Cooper wrote:
> > On 14/10/2020 20:28, Jason Andryuk wrote:
> > > Hi,
> > >
> > > Bug opened at https://gitlab.freedesktop.org/drm/intel/-/issues/2576
> > >
> > > I'm seeing DMA faults for the i915 graphics hardware on a Dell
> > > Latitude 5500. These were captured when I plugged into a Dell
> > > Thunderbolt dock with two DisplayPort monitors attached.  Xen 4.12.4
> > > staging and Linux 5.4.70 (and some earlier versions).
> > >
> > > Oct 14 18:41:49.056490 kernel:[   85.570347] [drm:gen8_de_irq_handler
> > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > Oct 14 18:41:49.056494 kernel:[   85.570395] [drm:gen8_de_irq_handler
> > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > Oct 14 18:41:49.056589 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > > Request device [0000:00:02.0] fault addr 39b5845000, iommu reg =
> > > ffff82c00021d000
> > > Oct 14 18:41:49.056594 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > > PTE Read access is not set
> > > Oct 14 18:41:49.056784 kernel:[   85.570668] [drm:gen8_de_irq_handler
> > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > Oct 14 18:41:49.056789 kernel:[   85.570687] [drm:gen8_de_irq_handler
> > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > Oct 14 18:41:49.056885 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > > Request device [0000:00:02.0] fault addr 4238d0a000, iommu reg =
> > > ffff82c00021d000
> > > Oct 14 18:41:49.056890 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > > PTE Read access is not set
> > >
> > > They repeat. In the log attached to
> > > https://gitlab.freedesktop.org/drm/intel/-/issues/2576, they start at
> > > "Oct 14 18:41:49.056589" and continue until I unplug the dock around
> > > "Oct 14 18:41:54.801802".
> > >
> > > I've also seen similar messages when attaching the laptop's HDMI port
> > > to a 4k monitor. The eDP display by itself seems okay.
> > >
> > > I tried Fedora 31 & 32 live images with intel_iommu=on, so no Xen, and
> > > didn't see any errors
> > >
> > > This is a kernel & xen log with drm.debug=0x1e. It also includes some
> > > application (glass) logging when it changes resolutions which seems to
> > > set off the DMA faults. 5500-igfx-messages-kern-xen-glass
> > >
> > > Running xen with iommu=no-igfx disables the iommu for the i915
> > > graphics and no faults are reported. However, that breaks some other
> > > devices (Dell Latitude 7200 and 5580) giving a black screen with:
> > >
> > > Oct 10 13:24:37.022117 kernel:[   14.884759] i915 0000:00:02.0: Failed
> > > to idle engines, declaring wedged!
> > > Oct 10 13:24:37.022118 kernel:[   14.964794] i915 0000:00:02.0: Failed
> > > to initialize GPU, declaring it wedged!
> > >
> > > Any suggestions welcome.
> >
> > Presumably this is with a PV dom0.  What are 39b5845000 and 4238d0a000
> > in the machine memory map?

They are bogus?
End of RAM is 0x47c800000
That's:
0x047c800000
vs.
0x39b5845000
0x4238d0a000
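In other words, both fault addresses sit well above the end of RAM, so they cannot correspond to any dom0 page. A minimal sketch of that comparison (addresses copied from the log above):

```python
# Sanity check: both VT-d DMAR fault addresses lie above the end of
# RAM reported by Xen, so they cannot be ordinary dom0 memory pages.
end_of_ram = 0x47C800000                     # top of last usable region
fault_addrs = [0x39B5845000, 0x4238D0A000]   # from the VT-d fault log

for addr in fault_addrs:
    # Both addresses are far beyond installed RAM, hence "bogus".
    print(hex(addr), "above end of RAM:", addr > end_of_ram)
```

Both lines print `True`, consistent with the faults coming from stale or missing IOMMU mappings rather than real dom0 pages.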

> > This smells like a missing RMRR in the ACPI tables.
>
> I agree.
>
> Can you paste the memory map as printed by Xen when booting, and what
> command line are you using to boot Xen.

So this is OpenXT, and it's booting EFI -> xen -> tboot -> xen

Here's the memory map:
(XEN) TBOOT RAM map:
(XEN)  0000000000000000 - 0000000000060000 (usable)
(XEN)  0000000000060000 - 0000000000068000 (reserved)
(XEN)  0000000000068000 - 000000000009e000 (usable)
(XEN)  000000000009e000 - 000000000009f000 (reserved)
(XEN)  000000000009f000 - 00000000000a0000 (usable)
(XEN)  00000000000a0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 0000000040000000 (usable)
(XEN)  0000000040000000 - 0000000040400000 (reserved)
(XEN)  0000000040400000 - 000000007024b000 (usable)
(XEN)  000000007024b000 - 000000007024c000 (ACPI NVS)
(XEN)  000000007024c000 - 000000007024d000 (reserved)
(XEN)  000000007024d000 - 0000000077f19000 (usable)
(XEN)  0000000077f19000 - 0000000078987000 (reserved)
(XEN)  0000000078987000 - 0000000078a04000 (ACPI data)
(XEN)  0000000078a04000 - 0000000078ea3000 (ACPI NVS)
(XEN)  0000000078ea3000 - 000000007acff000 (reserved)
(XEN)  000000007acff000 - 000000007ad00000 (usable)
(XEN)  000000007ad00000 - 000000007f800000 (reserved)
(XEN)  00000000f0000000 - 00000000f8000000 (reserved)
(XEN)  00000000fe000000 - 00000000fe011000 (reserved)
(XEN)  00000000fec00000 - 00000000fec01000 (reserved)
(XEN)  00000000fee00000 - 00000000fee01000 (reserved)
(XEN)  00000000ff000000 - 0000000100000000 (reserved)
(XEN)  0000000100000000 - 000000047c800000 (usable)
(XEN) EFI memory map:
(XEN)  0000000000000-000000009dfff type=7 attr=000000000000000f
(XEN)  000000009e000-000000009efff type=0 attr=000000000000000f
(XEN)  000000009f000-000000009ffff type=3 attr=000000000000000f
(XEN)  0000000100000-000003fffffff type=7 attr=000000000000000f
(XEN)  0000040000000-00000403fffff type=0 attr=000000000000000f
(XEN)  0000040400000-000005e359fff type=7 attr=000000000000000f
(XEN)  000005e35a000-000005e399fff type=4 attr=000000000000000f
(XEN)  000005e39a000-000006a47dfff type=7 attr=000000000000000f
(XEN)  000006a47e000-000006c3eefff type=2 attr=000000000000000f
(XEN)  000006c3ef000-000006d5eefff type=1 attr=000000000000000f
(XEN)  000006d5ef000-000006d86cfff type=2 attr=000000000000000f
(XEN)  000006d86d000-000006d978fff type=1 attr=000000000000000f
(XEN)  000006d979000-000006dc7afff type=4 attr=000000000000000f
(XEN)  000006dc7b000-000006dc98fff type=3 attr=000000000000000f
(XEN)  000006dc99000-000006dcc7fff type=4 attr=000000000000000f
(XEN)  000006dcc8000-000006dccdfff type=3 attr=000000000000000f
(XEN)  000006dcce000-00000701a5fff type=4 attr=000000000000000f
(XEN)  00000701a6000-00000701c8fff type=3 attr=000000000000000f
(XEN)  00000701c9000-00000701edfff type=4 attr=000000000000000f
(XEN)  00000701ee000-0000070204fff type=3 attr=000000000000000f
(XEN)  0000070205000-000007022cfff type=4 attr=000000000000000f
(XEN)  000007022d000-000007024afff type=3 attr=000000000000000f
(XEN)  000007024b000-000007024bfff type=10 attr=000000000000000f
(XEN)  000007024c000-000007024cfff type=6 attr=800000000000000f
(XEN)  000007024d000-000007024dfff type=4 attr=000000000000000f
(XEN)  000007024e000-0000070282fff type=3 attr=000000000000000f
(XEN)  0000070283000-00000702c3fff type=4 attr=000000000000000f
(XEN)  00000702c4000-00000702c8fff type=3 attr=000000000000000f
(XEN)  00000702c9000-00000702defff type=4 attr=000000000000000f
(XEN)  00000702df000-0000070307fff type=3 attr=000000000000000f
(XEN)  0000070308000-0000070317fff type=4 attr=000000000000000f
(XEN)  0000070318000-0000070319fff type=3 attr=000000000000000f
(XEN)  000007031a000-0000070331fff type=4 attr=000000000000000f
(XEN)  0000070332000-0000070349fff type=3 attr=000000000000000f
(XEN)  000007034a000-0000070356fff type=2 attr=000000000000000f
(XEN)  0000070357000-0000070357fff type=7 attr=000000000000000f
(XEN)  0000070358000-0000070358fff type=2 attr=000000000000000f
(XEN)  0000070359000-0000076f3efff type=4 attr=000000000000000f
(XEN)  0000076f3f000-00000772affff type=7 attr=000000000000000f
(XEN)  00000772b0000-0000077f18fff type=3 attr=000000000000000f
(XEN)  0000077f19000-0000078986fff type=0 attr=000000000000000f
(XEN)  0000078987000-0000078a03fff type=9 attr=000000000000000f
(XEN)  0000078a04000-0000078ea2fff type=10 attr=000000000000000f
(XEN)  0000078ea3000-000007ab22fff type=6 attr=800000000000000f
(XEN)  000007ab23000-000007acfefff type=5 attr=800000000000000f
(XEN)  000007acff000-000007acfffff type=4 attr=000000000000000f
(XEN)  0000100000000-000047c7fffff type=7 attr=000000000000000f
(XEN)  00000000a0000-00000000fffff type=0 attr=0000000000000000
(XEN)  000007ad00000-000007adfffff type=0 attr=070000000000000f
(XEN)  000007ae00000-000007f7fffff type=0 attr=0000000000000000
(XEN)  00000f0000000-00000f7ffffff type=11 attr=800000000000100d
(XEN)  00000fe000000-00000fe010fff type=11 attr=8000000000000001
(XEN)  00000fec00000-00000fec00fff type=11 attr=8000000000000001
(XEN)  00000fee00000-00000fee00fff type=11 attr=8000000000000001
(XEN)  00000ff000000-00000ffffffff type=11 attr=800000000000100d

Command line:
console=com1 dom0_mem=min:420M,max:420M,420M efi=no-rs,attr=uc
com1=115200,8n1,pci mbi-video vga=current flask=enforcing loglvl=debug
guest_loglvl=debug smt=0 ucode=-1 bootscrub=1
argo=yes,mac-permissive=1 iommu=force,igfx

iommu=force,igfx was to force igfx back on.  I added a DMI quirk to
set no-igfx on this platform as a temporary workaround.

> Have you tried adding dom0-iommu=map-inclusive to the Xen command
> line?

I have not.  I can try that tomorrow when I have access to the system again.

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:24:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:24:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7496.19601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT57O-0003SG-EE; Thu, 15 Oct 2020 15:24:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7496.19601; Thu, 15 Oct 2020 15:24:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT57O-0003S9-Aw; Thu, 15 Oct 2020 15:24:38 +0000
Received: by outflank-mailman (input) for mailman id 7496;
 Thu, 15 Oct 2020 15:24:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kT57M-0003S4-ST
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:24:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a8872f5-8215-4b46-8582-24d6774985c0;
 Thu, 15 Oct 2020 15:24:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT57K-0007aT-F5; Thu, 15 Oct 2020 15:24:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT57K-0006w0-7q; Thu, 15 Oct 2020 15:24:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kT57K-0002iI-7M; Thu, 15 Oct 2020 15:24:34 +0000
X-Inumbo-ID: 4a8872f5-8215-4b46-8582-24d6774985c0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d/0K7IBx7VUS4oomMgcO429qZpkBrMYAfQ5guEnBj2s=; b=g0t3mQxo2EFA0NVSdCsz8bwjUj
	qTp9iZFspuP8qya3lo3L8mpOneeUUG3/QYaFG8O0egl/deZUn1AUuRF4CtGVAAI+JOUHPxgf+ByXj
	YtuZKbxu9lIifxzKXNGCWWQz/IVov68Kf/TMQ/f+0Hr8iWt2b6IuaS18c+nvA7KFJ+zk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155829-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155829: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start.2:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=3e4fb4346c781068610d03c12b16c0cfb0fd24a3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 15:24:34 +0000

flight 155829 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155829/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-amd64-amd64-xl-multivcpu 23 guest-start.2           fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                3e4fb4346c781068610d03c12b16c0cfb0fd24a3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   75 days
Failing since        152366  2020-08-01 20:49:34 Z   74 days  127 attempts
Testing same since   155829  2020-10-15 04:03:41 Z    0 days    1 attempts

------------------------------------------------------------
2788 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 424158 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:31:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:31:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7502.19615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5EN-0004M6-Hj; Thu, 15 Oct 2020 15:31:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7502.19615; Thu, 15 Oct 2020 15:31:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5EN-0004Lz-Eh; Thu, 15 Oct 2020 15:31:51 +0000
Received: by outflank-mailman (input) for mailman id 7502;
 Thu, 15 Oct 2020 15:31:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kT5EM-0004Lu-3p
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:31:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66d5c2fd-0b47-4206-8b8f-059d5408e9de;
 Thu, 15 Oct 2020 15:31:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT5EJ-0007is-FM; Thu, 15 Oct 2020 15:31:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT5EJ-00077T-47; Thu, 15 Oct 2020 15:31:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kT5EJ-0005Ac-3b; Thu, 15 Oct 2020 15:31:47 +0000
X-Inumbo-ID: 66d5c2fd-0b47-4206-8b8f-059d5408e9de
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wgtHiipVC6LEa/RMTk4HIqQXLXyg6AeSuSK9po0KNuc=; b=NtgiexBeyi7NTI1r0e0dQzTONo
	rJ+LvYwC6Wcp0oYY+lkQ1dk09dG+snKuwp1dA3oBgAClDm5rmcWCFfEkDIcI/R8vozrHJaNjDMZ/I
	Sa9UEz+qeuuxwhqzUmsfumic1a2scXbff1jmxGs/wMrpLAhJhzAxa6GzNfnuIw3OYqo8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155832-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155832: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-pvshim:debian-install:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=884ef07f4f66b9d12fc4811047db95ba649db85c
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 15:31:47 +0000

flight 155832 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155832/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvshim   12 debian-install           fail REGR. vs. 155788
 build-amd64-xsm               6 xen-build                fail REGR. vs. 155788
 build-i386                    6 xen-build                fail REGR. vs. 155788

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 155788

Tests which did not succeed, but are not blocking:
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155788
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155788
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155788
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155788
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155788
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155788
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  884ef07f4f66b9d12fc4811047db95ba649db85c
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155788  2020-10-14 01:52:30 Z    1 days
Testing same since   155810  2020-10-14 16:08:32 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 410 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:39:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:39:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7507.19630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5M3-0004cE-Dw; Thu, 15 Oct 2020 15:39:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7507.19630; Thu, 15 Oct 2020 15:39:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5M3-0004c7-AM; Thu, 15 Oct 2020 15:39:47 +0000
Received: by outflank-mailman (input) for mailman id 7507;
 Thu, 15 Oct 2020 15:39:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKTB=DW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kT5M1-0004c2-OW
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:39:45 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb0f99f8-a78a-4c64-8be7-f0808dc7d3ef;
 Thu, 15 Oct 2020 15:39:43 +0000 (UTC)
X-Inumbo-ID: eb0f99f8-a78a-4c64-8be7-f0808dc7d3ef
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602776384;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=3t2AkRnvDXjMkreXCytT8+QZNU+r1mf6kfh8YJb7X30=;
  b=LYjl2wlolBBvhIvD4UvI+0S6fuOD2bobyICQgT6ErLNzoqvwx6y0cJgv
   crhmryhUyVE8avOnY+JGGBgxhKUigpYMQg7MDGrM5jL8psawAK6xk+jw3
   PveJR8Q915sjEQ3mDTTCP/KaIAsbrQHjaB8usrjtbkqXW2pE5pN2EsnjE
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: r4ZRRl8ye/Gofoo68jYCOU4cj6UhnZYJVCFwpuSzcx5TVHXscHypqr0tsrMERfbwM7XTcEAenv
 cceWHYo+0cYrGKAO+9Zq678jLk24DFD6PIzneeHNiGdT7cN2X7oVTngcT7oK+1hJB3mJiJXH+o
 CmF7aLfoXWijJDjcAc+BCGdd7y95yT4ew5MnHM/2Rm7huOVzo76Ls+bA7b0J5+9voEKBgvzJwt
 PFyC/TYhi3AGuIW3EJe96fpIzADAkKmyHgpVaFFl6GMifQWWXDa/G58ifHfdlhLlJ1wANr4k6+
 9Eo=
X-SBRS: 2.5
X-MesageID: 29095576
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29095576"
Date: Thu, 15 Oct 2020 17:39:28 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Ian Jackson <iwj@xenproject.org>, =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?=
	<jgross@suse.com>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: [linux-linus test] 155829: regressions - FAIL
Message-ID: <20201015153928.GG68032@Air-de-Roger>
References: <osstest-155829-mainreport@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <osstest-155829-mainreport@xen.org>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 15, 2020 at 03:24:34PM +0000, osstest service owner wrote:
> flight 155829 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/155829/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
>  test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
>  test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
>  test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
>  test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
>  test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
>  test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
>  test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
>  test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
>  test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
>  test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
>  test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
>  test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
>  test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
>  test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
>  test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
>  test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
>  test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
>  test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
>  test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
>  test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
>  test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
>  test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
>  test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332

All of the above is likely fallout from the removal of i386 PV
support?

I'm not sure how to deal with this: should the jobs be gated on the
kernel version under test?  Would that cause issues for osstest, as
different Linux kernel hashes would generate different test
matrices?

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7515.19654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5WR-0006H6-0V; Thu, 15 Oct 2020 15:50:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7515.19654; Thu, 15 Oct 2020 15:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5WQ-0006Gu-OB; Thu, 15 Oct 2020 15:50:30 +0000
Received: by outflank-mailman (input) for mailman id 7515;
 Thu, 15 Oct 2020 15:50:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5WP-0006GR-Ub
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8c9d18f5-3706-4452-bb2e-3b894d5a3882;
 Thu, 15 Oct 2020 15:50:29 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WO-00087r-QW
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WO-0005Ey-Oo
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WN-0000oB-0F; Thu, 15 Oct 2020 16:50:27 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kT5WP-0006GR-Ub
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
X-Inumbo-ID: 8c9d18f5-3706-4452-bb2e-3b894d5a3882
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 8c9d18f5-3706-4452-bb2e-3b894d5a3882;
	Thu, 15 Oct 2020 15:50:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=GKNUYQeiTTAK8ez/vVJcZ2hoW8Ug+I9meJLpgJqBeIU=; b=Q/8ZYg2rwXY9YNgyMWTqtdK71s
	Hnuv+uKXW4Y5/kNa24G4vbTHCC4kUxl0kAcWsbflvzZG+PKjMMND9O8uijMOaUI0WHyZEgdRDbgkK
	7r9RQqayc3e1qLH40ULry3k2bNV2W+sHp68N9Rp1NsMFipf4vagcjcVua7bKdRxT4WoI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WO-00087r-QW
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WO-0005Ey-Oo
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WN-0000oB-0F; Thu, 15 Oct 2020 16:50:27 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 03/17] sg-report-flight: Consider all blessings for "never pass"
Date: Thu, 15 Oct 2020 16:50:05 +0100
Message-Id: <20201015155019.20705-4-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

$anypassq is used for the "never pass" check; the distinction between
this and simply "fail" is cosmetic (although it can be informative).

On non-"real" flights, it can easily happen that the flight never
passed *on this branch with this blessing* but has passed on real.  So
the steps subquery does not find us an answer within reasonable time.

Work around this by always searching for "real".  This keeps the
performance within acceptable bounds even during ad-hoc testing.

We don't actually use the row from this query, so the only effect is
that when the job passed in a "real" flight, we go on to the full
regresson analysis rather than short-circuiting and reporting "never
pass".

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sg-report-flight b/sg-report-flight
index a07e03cb..15631001 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -935,7 +935,7 @@ sub justifyfailures ($;$) {
         )
         SELECT * FROM flights JOIN s USING (flight)
             WHERE $branches_cond_q
-              AND $blessingscond
+              AND (($blessingscond) OR blessing = 'real')
             LIMIT 1
 END
     $anypassq= db_prepare($anypassq);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7516.19670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5WV-0006Jy-67; Thu, 15 Oct 2020 15:50:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7516.19670; Thu, 15 Oct 2020 15:50:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5WV-0006Jp-1C; Thu, 15 Oct 2020 15:50:35 +0000
Received: by outflank-mailman (input) for mailman id 7516;
 Thu, 15 Oct 2020 15:50:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5WU-0006GM-0o
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 733865e4-7403-49ce-adca-e4676c5d0d6c;
 Thu, 15 Oct 2020 15:50:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WO-00087l-DE
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WO-0005EZ-B1
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WM-0000oB-Fn; Thu, 15 Oct 2020 16:50:26 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kT5WU-0006GM-0o
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:34 +0000
X-Inumbo-ID: 733865e4-7403-49ce-adca-e4676c5d0d6c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 733865e4-7403-49ce-adca-e4676c5d0d6c;
	Thu, 15 Oct 2020 15:50:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Ib9u1KozgEMCg0jCGLWPWL8C4aZzOGRCjREpAAS7g4Y=; b=5uRoQMzX8oU6XFYbtTezweKWbT
	rbMo9yOrAT5Pxnp+T9kmCHM10GgixrtLQ/Lr53QdVadPIXx2+ryUcYpracwnk6PdJ86qH6O7RfuMF
	o3tRVc4O8qdjJ0q/8TPh6sMfIwwR8O2qCLF9zoS9V+ZjIjP+DAQxJ7MVmCtaEE3Pk+5c=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WO-00087l-DE
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WO-0005EZ-B1
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WM-0000oB-Fn; Thu, 15 Oct 2020 16:50:26 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 01/17] Honour OSSTEST_SIMULATE=2 to actually run dummy flight
Date: Thu, 15 Oct 2020 16:50:03 +0100
Message-Id: <20201015155019.20705-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cri-args-hostlists | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 994e00c0..6cdff53f 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -68,8 +68,8 @@ fi
 
 execute_flight () {
         case "x$OSSTEST_SIMULATE" in
-        x|x0)   ;;
-        *)      echo SIMULATING - NOT EXECUTING $1 $2
+        x|x0|x2)   ;;
+        *)      echo SIMULATING $OSSTEST_SIMULATE - NOT EXECUTING $1 $2
                 return
                 ;;
         esac
-- 
2.20.1
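
A hypothetical standalone demo of the dispatch in the hunk above
(flight number and blessing are invented): empty, 0 and, with this
patch, 2 fall through to real execution, while any other value only
prints the simulation notice.

```shell
# Illustrative only; mirrors the case statement from cri-args-hostlists.
execute_flight () {
        case "x$OSSTEST_SIMULATE" in
        x|x0|x2)   ;;
        *)      echo SIMULATING $OSSTEST_SIMULATE - NOT EXECUTING $1 $2
                return
                ;;
        esac
        echo EXECUTING $1 $2
}
OSSTEST_SIMULATE=1; execute_flight 155829 real   # prints the SIMULATING line
OSSTEST_SIMULATE=2; execute_flight 155829 real   # prints "EXECUTING 155829 real"
```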



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7514.19646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5WQ-0006Gd-Jn; Thu, 15 Oct 2020 15:50:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7514.19646; Thu, 15 Oct 2020 15:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5WQ-0006GW-GC; Thu, 15 Oct 2020 15:50:30 +0000
Received: by outflank-mailman (input) for mailman id 7514;
 Thu, 15 Oct 2020 15:50:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5WP-0006GM-50
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c08dda8-552f-4602-b07d-4671cb82684d;
 Thu, 15 Oct 2020 15:50:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WO-00087i-0F
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WN-0005EJ-Vi
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:27 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WM-0000oB-8D; Thu, 15 Oct 2020 16:50:26 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kT5WP-0006GM-50
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
X-Inumbo-ID: 1c08dda8-552f-4602-b07d-4671cb82684d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1c08dda8-552f-4602-b07d-4671cb82684d;
	Thu, 15 Oct 2020 15:50:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=0cu1fQEr/YhhkV7d6ryfyU30np6PR4Bzu+bivwbzW74=; b=VtqZkfNS9vcK3jAnRQM0G5IL/A
	CHimD6YIZx6fT4+B4mhrnvuFWIxdNDeaGv15+aUsp3uogOHjQm/5LrFt4oA1DISvou14V6q8/q+Wi
	RE0EA3ohdR1TQNaho8Fh4zjWU1w/1vHklPylyCYS7MOybVvbuodNrQxqTPy6NKFLMcDc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WO-00087i-0F
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WN-0005EJ-Vi
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:27 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WM-0000oB-8D; Thu, 15 Oct 2020 16:50:26 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 00/17] Immediately retry failing tests
Date: Thu, 15 Oct 2020 16:50:02 +0100
Message-Id: <20201015155019.20705-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We discussed this at the Xen Summit.  What I do here is immediately
retry the jobs with regressions, and then reanalyse the original full
flight.  If the retries show that the failures were heisenbugs, this
will let them through.

This should reduce the negative impact of heisenbugs on development,
but it won't do anything to help keep them out of the tree.

This series has now had proper dev testing (insofar as possible for
something of this nature) and I will be pushing it to pretest shortly.
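
As a rough sketch of the flow described above (job names and variable
names here are illustrative, not osstest's): rerun only the failing
jobs, then re-judge the original failures, excusing any that passed on
the immediate retry.

```shell
# Hypothetical heisenbug-excusing logic; the real code lives in
# cr-daily-branch and sg-report-flight.
failed_jobs='test-xl test-libvirt'
retry_passed='test-libvirt'        # suppose the retry flight passed this one
blockers=''
for j in $failed_jobs; do
    case " $retry_passed " in
        *" $j "*) ;;               # passed on retry: likely a heisenbug
        *) blockers="$blockers $j" ;;
    esac
done
echo "blockers:$blockers"          # prints "blockers: test-xl"
```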

Ian Jackson (17):
  Honour OSSTEST_SIMULATE=2 to actually run dummy flight
  Honour OSSTEST_SIMULATE_FAIL in sg-run-job
  sg-report-flight: Consider all blessings for "never pass"
  mg-execute-flight: Do not include the transcript in reports
  sg-report-job-history: eval $DAILY_BRANCH_PREEXEC_HOOK
  cri-args-hostlists: New debug var $OSSTEST_REPORT_JOB_HISTORY_RUN
  cri-args-hostlists: Break out report_flight and publish_logs
  sg-report-flight: Break out printout_flightheader
  sg-report-flight: Provide --refer-to-flight option
  sg-report-flight: Nicer output for --refer-to-flight option
  Introduce real-retry blessing
  cri-args-hostlists: Move flight_html_dir variable
  cr-daily-branch: Immediately retry failing tests
  Honour OSSTEST_SIMULATE_FAIL_RETRY for immediate retries
  cr-daily-branch: Do not do immediate retry of failing xtf flights
  sg-report-flight: Include count of blockers, and of jobs, in mro
  cr-daily-branch: Heuristics for when to do immediate retest flight

 README.dev          |  9 +++---
 cr-daily-branch     | 73 +++++++++++++++++++++++++++++++++++++++++++--
 cr-disk-report      |  2 +-
 cr-try-bisect       |  4 +--
 cr-try-bisect-adhoc |  2 +-
 cri-args-hostlists  | 28 +++++++++++------
 cs-bisection-step   |  4 +--
 mg-execute-flight   |  3 --
 sg-report-flight    | 42 +++++++++++++++++++++-----
 sg-run-job          |  9 +++++-
 10 files changed, 143 insertions(+), 33 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7517.19682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5WW-0006MD-Dl; Thu, 15 Oct 2020 15:50:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7517.19682; Thu, 15 Oct 2020 15:50:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5WW-0006M4-AO; Thu, 15 Oct 2020 15:50:36 +0000
Received: by outflank-mailman (input) for mailman id 7517;
 Thu, 15 Oct 2020 15:50:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5WU-0006GR-Qx
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id af5f75c7-18ec-4453-a7b3-5bf1a27f9f3d;
 Thu, 15 Oct 2020 15:50:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WO-00087o-Jw
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WO-0005Em-Is
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WM-0000oB-OA; Thu, 15 Oct 2020 16:50:26 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kT5WU-0006GR-Qx
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:34 +0000
X-Inumbo-ID: af5f75c7-18ec-4453-a7b3-5bf1a27f9f3d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id af5f75c7-18ec-4453-a7b3-5bf1a27f9f3d;
	Thu, 15 Oct 2020 15:50:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ChVOyZirwRN6GYYGaNTGSTLn1c+zNQzFzRYYMg92a3g=; b=b2yRrKKCQz2qK9sK7vrxLr4/Sl
	1YqLKwjwOa9DxF3CYK3TkhOctsO1LWUAqzohJvLMoNB/eznfYzezv0szba90PNbwE2uRkeYA0WPdL
	BBzRRRQY22a3bVpRJMKI6m1C1PxsmoyVYtthPKi1qrE6UcZLvsQf9M6Wiqn94Xh6yZ1s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WO-00087o-Jw
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WO-0005Em-Is
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:28 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WM-0000oB-OA; Thu, 15 Oct 2020 16:50:26 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 02/17] Honour OSSTEST_SIMULATE_FAIL in sg-run-job
Date: Thu, 15 Oct 2020 16:50:04 +0100
Message-Id: <20201015155019.20705-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

OSSTEST_SIMULATE_FAIL is a Tcl list of globs matched against
<job>.<step>, and allows particular test failures to be simulated.
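
A hypothetical shell rendering of that matching logic (the patch
itself uses Tcl's [string match]; the globs and job name below are
invented): each glob in OSSTEST_SIMULATE_FAIL is tested against
"<job>.<step>", and a hit switches the step to the failure stub.

```shell
# Illustrative only; shell case globs behave like Tcl's [string match] here.
OSSTEST_SIMULATE_FAIL='test-amd64-i386-xl.*install* *guest-saverestore*'
job_step='test-amd64-i386-xl.xen-install'
xprefix=echo                       # default under OSSTEST_SIMULATE
for ent in $OSSTEST_SIMULATE_FAIL; do
    case "$job_step" in
        $ent) xprefix=OSSTEST_SIMULATE_FAIL ;;
    esac
done
echo "$xprefix"                    # prints OSSTEST_SIMULATE_FAIL
```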

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-run-job | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/sg-run-job b/sg-run-job
index dd76d4f2..c64ae026 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -406,7 +406,14 @@ proc spawn-ts {iffail testid args} {
     jobdb::spawn-step-commit $flight $jobinfo(job) $stepno $testid
 
     set xprefix {}
-    if {[var-or-default env(OSSTEST_SIMULATE) 0]} { set xprefix echo }
+    if {[var-or-default env(OSSTEST_SIMULATE) 0]} {
+	set xprefix echo
+	foreach ent [var-or-default env(OSSTEST_SIMULATE_FAIL) {}] {
+	    if {[string match $ent $jobinfo(job).$testid]} {
+		set xprefix OSSTEST_SIMULATE_FAIL
+	    }
+	}
+    }
 
     set log [jobdb::step-log-filename $flight $jobinfo(job) $stepno $ts]
     set redirects {}
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7518.19694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wa-0006Qk-NY; Thu, 15 Oct 2020 15:50:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7518.19694; Thu, 15 Oct 2020 15:50:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wa-0006Qb-JJ; Thu, 15 Oct 2020 15:50:40 +0000
Received: by outflank-mailman (input) for mailman id 7518;
 Thu, 15 Oct 2020 15:50:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5WZ-0006GM-11
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39964fa8-4653-494f-9ed6-c44f232c5b55;
 Thu, 15 Oct 2020 15:50:29 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WP-00087x-7D
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WP-0005FQ-6O
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WN-0000oB-GH; Thu, 15 Oct 2020 16:50:27 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kT5WZ-0006GM-11
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:39 +0000
X-Inumbo-ID: 39964fa8-4653-494f-9ed6-c44f232c5b55
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 39964fa8-4653-494f-9ed6-c44f232c5b55;
	Thu, 15 Oct 2020 15:50:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=02bnAe8Pk+WQt+MbpE+lK1JlJkiQs50S4+SkvCn1WvQ=; b=sOco/NYNjydGO2EzHcka/1MPPg
	V80yd2kaHgCE8uDjurbLWMIQ5GUy60sKXV6nxpbq6lHuN2OmXx2r/gP5AwQvmz3Tf9+rxHHPfpHqc
	8eaI7mYQ7ol9nV2c12KuyQ+Ej9z3Zft1UdVFW0F8p49Zn4FiX9t/CzZeTDd7/o5YAfgI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WP-00087x-7D
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WP-0005FQ-6O
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WN-0000oB-GH; Thu, 15 Oct 2020 16:50:27 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 05/17] cr-daily-branch: eval $DAILY_BRANCH_PREEXEC_HOOK
Date: Thu, 15 Oct 2020 16:50:07 +0100
Message-Id: <20201015155019.20705-6-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Put the call to this debugging/testing variable inside an eval.  This
allows a wider variety of stunts.  The one in-tree reference is
already compatible with the new semantics.
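
To illustrate why the eval matters (the hook value here is invented,
not an osstest one): a hook containing shell syntax such as an
assignment or a semicolon only works when the string is re-parsed.

```shell
# Illustrative only.  Bare "$DAILY_BRANCH_PREEXEC_HOOK" would treat the
# whole string as a single command name and fail; eval honours the
# assignment, the quoting and the semicolon.
DAILY_BRANCH_PREEXEC_HOOK='FLIGHT_NOTE="ad hoc run"; echo "hook ran: $FLIGHT_NOTE"'
eval "$DAILY_BRANCH_PREEXEC_HOOK"   # prints "hook ran: ad hoc run"
```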

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cr-daily-branch b/cr-daily-branch
index b8f221ee..23060588 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -472,7 +472,7 @@ sgr_args+=" $EXTRA_SGR_ARGS"
 
 date >&2
 : $flight $branch $OSSTEST_BLESSING $sgr_args
-$DAILY_BRANCH_PREEXEC_HOOK
+eval "$DAILY_BRANCH_PREEXEC_HOOK"
 execute_flight $flight $OSSTEST_BLESSING
 date >&2
 
-- 
2.20.1
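The difference the patch above relies on can be sketched outside osstest. This is a hypothetical illustration (the `DEMO_HOOK` variable and both helper functions are made up, not osstest code): plain expansion of a hook variable only word-splits it, while `eval` re-parses it as shell code, so operators like `&&` start working.

```shell
#!/bin/sh
# Hypothetical sketch (not osstest code) of why eval'ing a hook
# variable permits more "stunts" than plain expansion.

hook_plain () {
    # Plain expansion: the value undergoes only word splitting, so
    # shell syntax such as '&&' is passed to the command as a
    # literal argument rather than being interpreted.
    $DEMO_HOOK
}

hook_eval () {
    # eval re-parses the expanded text, so the hook may be any shell
    # fragment: compound commands, redirections, assignments.
    eval "$DEMO_HOOK"
}

DEMO_HOOK='echo first && echo second'
plain_out=$(hook_plain)   # one echo, printing the rest literally
eval_out=$(hook_eval)     # two echo commands actually run
```

With the plain form, `&&` is just another argument to `echo`; with the eval form it separates two commands, which is the "wider variety of stunts" the commit message refers to.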



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7519.19702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wb-0006Rr-8z; Thu, 15 Oct 2020 15:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7519.19702; Thu, 15 Oct 2020 15:50:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wa-0006RO-VD; Thu, 15 Oct 2020 15:50:40 +0000
Received: by outflank-mailman (input) for mailman id 7519;
 Thu, 15 Oct 2020 15:50:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5WZ-0006GR-RD
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 319bc935-827b-4bc2-a804-a0d3a64e1b32;
 Thu, 15 Oct 2020 15:50:29 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WP-00087u-3W
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WP-0005FC-1D
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WN-0000oB-7F; Thu, 15 Oct 2020 16:50:27 +0100
X-Inumbo-ID: 319bc935-827b-4bc2-a804-a0d3a64e1b32
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=jMrPuRfDpr5F+mH23YfeFKCRSJQrvgGxKa9leTchrQs=; b=CXoc1/puYuanYGZwuMzekdpowu
	8Rr6FPbGx+mQf1rceLB8ZdAcoj2yjyXt2pUuCdc0pU5DcrgN85o5JNGpciFEQH6ny7CQBmAO3RdFX
	OyY5lxg5eNxls6XlyA4ZznHZo3n7jE2uIvDjy0x5FUGDpta0EvjRgYIDZvCKrOVl27Ig=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 04/17] mg-execute-flight: Do not include the transcript in reports
Date: Thu, 15 Oct 2020 16:50:06 +0100
Message-Id: <20201015155019.20705-5-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Transcripts are large and not very useful in the report emails.  A copy
remains available in the tree if needed.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 mg-execute-flight | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/mg-execute-flight b/mg-execute-flight
index 391f4810..bef8dab6 100755
--- a/mg-execute-flight
+++ b/mg-execute-flight
@@ -101,9 +101,6 @@ echo
 
 ./cr-fold-long-lines <tmp/$flight.report
 
-echo ============================================================
-./cr-fold-long-lines <tmp/$flight.transcript
-
 exec >&2
 
 if $publish; then
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7520.19718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wf-0006ZF-Ib; Thu, 15 Oct 2020 15:50:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7520.19718; Thu, 15 Oct 2020 15:50:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wf-0006Yy-DG; Thu, 15 Oct 2020 15:50:45 +0000
Received: by outflank-mailman (input) for mailman id 7520;
 Thu, 15 Oct 2020 15:50:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5We-0006GM-1H
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4208f03d-7c5c-4660-a429-de645d2d7303;
 Thu, 15 Oct 2020 15:50:30 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WQ-00088E-7q
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:30 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WQ-0005Gg-6t
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:30 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WO-0000oB-Gu; Thu, 15 Oct 2020 16:50:28 +0100
X-Inumbo-ID: 4208f03d-7c5c-4660-a429-de645d2d7303
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=gfMUQl6jqji00/zUqCTGBQAbnUbgIuHTcubMdohdrgs=; b=xwf0V06ExMj7gdaT7ohYraVscw
	Y6PoRK4crIguLGtGiO4j13aC03h3Pg/hckM43upCinIYFy/GXS6kK34FHpvdOL/85cNzzpt/ODbEl
	9Op/eRjF8ShNMQ1np7pq4VkhMf7vXxDMyPPEwL2XgFEOfo2r5ZFUUL9KpBNqzQ939HMY=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH v2 08/17] sg-report-flight: Break out printout_flightheader
Date: Thu, 15 Oct 2020 16:50:10 +0100
Message-Id: <20201015155019.20705-9-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index 15631001..2ab1637f 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -783,6 +783,14 @@ sub includes ($) {
     }
 }    
 
+sub printout_flightheader ($) {
+    my ($r) = @_;
+    bodyprint <<END;
+flight $r->{Flight} $branch $r->{FlightInfo}{blessing} [$r->{FlightInfo}{intended}]
+$c{ReportHtmlPubBaseUrl}/$r->{Flight}/
+END
+}
+
 sub printout {
     my ($r, @failures) = @_;
     $header_text = '';
@@ -793,10 +801,9 @@ sub printout {
 $r->{Flight}: $r->{OutcomeSummary}
 END
     includes(\@includebeginfiles);
-    bodyprint <<END;
-flight $r->{Flight} $branch $r->{FlightInfo}{blessing} [$r->{FlightInfo}{intended}]
-$c{ReportHtmlPubBaseUrl}/$r->{Flight}/
-END
+
+    printout_flightheader($r);
+
     if (defined $r->{Overall}) {
         bodyprint "\n";
         bodyprint $r->{Overall};
-- 
2.20.1
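The refactoring pattern in the Perl patch above (a heredoc emitted inline is factored into a named helper so later call sites can reuse it) can be sketched in shell as well. This is a hypothetical analogue, not osstest code: the function name, arguments, and the logs URL are invented for illustration.

```shell
#!/bin/sh
# Hypothetical sketch of factoring a repeated heredoc into a helper,
# mirroring the printout_flightheader refactoring.  All names and the
# URL below are made up.

print_flight_header () {
    flight=$1 branch=$2 blessing=$3 intended=$4
    cat <<END
flight $flight $branch $blessing [$intended]
https://logs.example.org/$flight/
END
}

# One call site today; the point of the refactoring is that further
# call sites (e.g. extra referenced flights) can now reuse it.
header=$(print_flight_header 152200 xen-unstable real real)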



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7521.19730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wh-0006ch-1V; Thu, 15 Oct 2020 15:50:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7521.19730; Thu, 15 Oct 2020 15:50:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wg-0006cR-Sg; Thu, 15 Oct 2020 15:50:46 +0000
Received: by outflank-mailman (input) for mailman id 7521;
 Thu, 15 Oct 2020 15:50:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5We-0006GR-RR
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a670aa57-38dc-4535-97dc-1bc30af5c51d;
 Thu, 15 Oct 2020 15:50:29 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WP-000882-JI
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WP-0005Fq-IS
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WN-0000oB-PA; Thu, 15 Oct 2020 16:50:27 +0100
X-Inumbo-ID: a670aa57-38dc-4535-97dc-1bc30af5c51d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=9Pahmn/najHz4bedXTajTfnABim3bLuns9qaaY31+FA=; b=unmqpZyDago95DXw00wFDinbP4
	Ewy2V1ZSOw3/0aBYfww8ftOk4YPyIH9V7WfPz0ngDwfM+At55jWSdctFxhM5F6UeuGJyRgbTAvm7+
	Y7X3EL8D+T2GjpByD82SOCIN41pJ/l5RtBHTbdjGHbhrZ4uuRMCng7LQaTaU9enXoCZs=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 06/17] cri-args-hostlists: New debug var $OSSTEST_REPORT_JOB_HISTORY_RUN
Date: Thu, 15 Oct 2020 16:50:08 +0100
Message-Id: <20201015155019.20705-7-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This has no effect if the variable is empty, which is the default.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cri-args-hostlists | 1 +
 1 file changed, 1 insertion(+)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 6cdff53f..7019c0c7 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -121,6 +121,7 @@ start_email () {
 
 	date >&2
 
+	$OSSTEST_REPORT_JOB_HISTORY_RUN \
 	with-lock-ex -w $globallockdir/report-lock \
 	  ./sg-report-job-history --report-processing-start-time \
 	    --html-dir=$job_html_dir --flight=$flight
-- 
2.20.1
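The idiom the patch above introduces, prefixing an invocation with an (unquoted) variable that is normally empty, can be sketched in isolation. This is a hypothetical example, not osstest code: `DEMO_RUN`, `run_report`, and the stub command are all invented. An empty unquoted expansion disappears entirely, so the real command runs; setting the variable to `echo` turns the call into a dry run that merely prints the command line.

```shell
#!/bin/sh
# Hypothetical sketch of the "optional runner prefix" idiom.  The
# names below are made up for illustration.

report_job_history () { echo "really ran: $*"; }

run_report () {
    # When $DEMO_RUN is empty it expands to nothing and the command
    # runs normally; when set to "echo" the command is only printed.
    $DEMO_RUN \
    report_job_history --flight="$1"
}

DEMO_RUN=
normal_out=$(run_report 152200)

DEMO_RUN=echo
debug_out=$(run_report 152200)
```

This is why the commit message can say the change has no effect when the variable is empty: the expansion vanishes before the command is executed.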



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7522.19742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wl-0006lF-NU; Thu, 15 Oct 2020 15:50:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7522.19742; Thu, 15 Oct 2020 15:50:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wl-0006l1-Hp; Thu, 15 Oct 2020 15:50:51 +0000
Received: by outflank-mailman (input) for mailman id 7522;
 Thu, 15 Oct 2020 15:50:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5Wj-0006GR-RP
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6d92408-c0d2-446a-afef-747eb7d61167;
 Thu, 15 Oct 2020 15:50:30 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WP-000888-Vc
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WP-0005GM-Um
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:29 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WO-0000oB-91; Thu, 15 Oct 2020 16:50:28 +0100
X-Inumbo-ID: d6d92408-c0d2-446a-afef-747eb7d61167
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=e3H0s2+WzQtNrHrJPYnYMyg0OSRyEJNxDWc8yLBsrEc=; b=g11WSl3PbOg704AUjUeAfnRHnt
	55RaGFyjVc6ODZWVVFOrP0uH3pNZO5YUsLfMDZhZBfeZPVK+4IhEfJUxO6kPNL//a0o1kpCi+EN6H
	436yHTfkTvRU+Kvapj38GQ2sNfniIWIMlYahKdMNIjnKL34qnQdMe6By+gh+IneR7cpA=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH v2 07/17] cri-args-hostlists: Break out report_flight and publish_logs
Date: Thu, 15 Oct 2020 16:50:09 +0100
Message-Id: <20201015155019.20705-8-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

No functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 cri-args-hostlists | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 7019c0c7..52e39f33 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -128,10 +128,7 @@ start_email () {
 
 	date >&2
 
-	./sg-report-flight --report-processing-start-time \
-	        --html-dir=$flight_html_dir/$flight/ \
-		--allow=allow.all --allow=allow.$branch \
-		$sgr_args $flight >tmp/$flight.report
+	report_flight $flight
 	./cr-fold-long-lines tmp/$flight.report
 
 	date >&2
@@ -144,11 +141,23 @@ start_email () {
 	date >&2
 }
 
+report_flight () {
+	local flight=$1
+	./sg-report-flight --html-dir=$flight_html_dir/$flight/ \
+		--allow=allow.all --allow=allow.$branch \
+		$sgr_args $flight >tmp/$flight.report
+}
+
+publish_logs () {
+	local flight=$1
+	./cr-publish-flight-logs ${OSSTEST_PUSH_HARNESS- --push-harness} \
+	    $flight >&2
+}
+
 publish_send_email () {
 	local flight=$1
+	publish_logs $flight
 	exec >&2
-	./cr-publish-flight-logs ${OSSTEST_PUSH_HARNESS- --push-harness} \
-	    $flight
 	send_email tmp/$flight.email
 }
 
-- 
2.20.1
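The `publish_logs` helper in the patch above uses the `${OSSTEST_PUSH_HARNESS- --push-harness}` expansion, which is worth a note because it uses `-` rather than `:-`. A hypothetical standalone sketch (the `DEMO_PUSH` variable and `flags_for` function are invented): with a single `-`, only an *unset* variable yields the default, so setting the variable to the empty string suppresses the flag entirely.

```shell
#!/bin/sh
# Hypothetical sketch of the ${VAR- default} idiom (note '-', not
# ':-').  Names are made up for illustration.

flags_for () {
    # Echo the arguments the publish command would receive.
    echo ${DEMO_PUSH- --push-harness} "$1"
}

unset DEMO_PUSH
with_default=$(flags_for 152200)   # unset: default flag appears

DEMO_PUSH=
suppressed=$(flags_for 152200)     # set-but-empty: flag suppressed
```

So exporting the variable as empty is an off switch, while leaving it unset keeps the default `--push-harness` behaviour.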



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 15:50:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 15:50:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7523.19754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wq-0006sI-6Q; Thu, 15 Oct 2020 15:50:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7523.19754; Thu, 15 Oct 2020 15:50:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5Wq-0006s9-1V; Thu, 15 Oct 2020 15:50:56 +0000
Received: by outflank-mailman (input) for mailman id 7523;
 Thu, 15 Oct 2020 15:50:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5Wo-0006GR-Re
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a5e58849-0ead-484c-8253-532dfd143437;
 Thu, 15 Oct 2020 15:50:30 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WQ-00088L-GG
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:30 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5WQ-0005H7-FM
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 15:50:30 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WO-0000oB-ON; Thu, 15 Oct 2020 16:50:28 +0100
X-Inumbo-ID: a5e58849-0ead-484c-8253-532dfd143437
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/UKradk/z4OgKrjjcpnOfZ9csQk7I4shSwqTZgX4ibw=; b=1AN05+e+5E0n5jtCCVluZ9s+Mg
	cACAFIPLJyoHMpAVkpkURdXSZFEhoDC323dAUZNgOanvjicq7WHNVsheb9kCFQKyTH1pv4Pv1VsBK
	GBWDZEnUyLX7LmsHDOZ4Q8ExvSW7cbzWIYQQY+/RTnbNEq8o7lxwj1TFvSmFFbodPoDc=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: [OSSTEST PATCH v2 09/17] sg-report-flight: Provide --refer-to-flight option
Date: Thu, 15 Oct 2020 16:50:11 +0100
Message-Id: <20201015155019.20705-10-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

This just generates an extra heading and URL at the top of the output.
In particular, it doesn't affect the algorithms which calculate
regressions.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 sg-report-flight | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/sg-report-flight b/sg-report-flight
index 2ab1637f..bcd896a8 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -42,6 +42,7 @@ our $htmldir;
 our $want_info_headers;
 our ($branch, $branches_cond_q);
 our @allows;
+our (@refer_to_flights);
 our (@includebeginfiles,@includefiles);
 
 open DEBUG, ">/dev/null";
@@ -66,6 +67,8 @@ while (@ARGV && $ARGV[0] =~ m/^-/) {
         push @includebeginfiles, $1;
     } elsif (m/^--include=(.*)$/) {
         push @includefiles, $1;
+    } elsif (m/^--refer-to-flight=(.*)$/) {
+        push @refer_to_flights, $1;
     } elsif (restrictflight_arg($_)) {
         # Handled by Executive
     } elsif (m/^--allow=(.*)$/) {
@@ -504,6 +507,16 @@ END
         die unless defined $specflight;
     }
 }
+sub find_refer_to_flights () {
+    my $ffq = $dbh_tests->prepare("SELECT * FROM flights WHERE flight=?");
+    @refer_to_flights = map {
+	my $flight = $_;
+	$ffq->execute($flight);
+	my $row = $ffq->fetchrow_hashref();
+	die "refer to flight $flight not found\n" unless $row;
+	{ Flight => $flight, FlightInfo => $row };
+    } @refer_to_flights;
+}
 
 sub examineflight ($) {
     my ($flight) = @_;
@@ -804,6 +817,10 @@ END
 
     printout_flightheader($r);
 
+    foreach my $ref_r (@refer_to_flights) {
+	printout_flightheader($ref_r);
+    }
+
     if (defined $r->{Overall}) {
         bodyprint "\n";
         bodyprint $r->{Overall};
@@ -1878,6 +1895,7 @@ db_retry($dbh_tests, [], sub {
     if (defined $mro) {
 	open MRO, "> $mro.new" or die "$mro.new $!";
     }
+    find_refer_to_flights();
     findspecflight();
     my $fi= examineflight($specflight);
     my @fails= justifyfailures($fi);
-- 
2.20.1
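The option handling in the patch above follows a repeatable-option pattern: each `--refer-to-flight=` occurrence pushes onto a list, so a report can reference several extra flights. A hypothetical shell sketch of the same accumulation pattern (the variable and function names are invented, and real argument parsing in osstest is done in Perl):

```shell
#!/bin/sh
# Hypothetical sketch of accumulating a repeatable --foo=VALUE option
# into a list.  Names are made up for illustration.

parse_args () {
    refer_to_flights=
    for arg in "$@"; do
        case $arg in
            --refer-to-flight=*)
                # Append each occurrence's value to the list.
                refer_to_flights="$refer_to_flights ${arg#--refer-to-flight=}"
                ;;
        esac
    done
    # Trim the leading space left by the accumulation.
    refer_to_flights=${refer_to_flights# }
}

parse_args --refer-to-flight=152199 --html-dir=/tmp --refer-to-flight=152200
```

In the Perl patch, `find_refer_to_flights` then maps each collected flight number to a hash of flight metadata looked up from the database, dying if a referenced flight does not exist.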



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:09:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:09:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7544.19796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oa-0000bf-GS; Thu, 15 Oct 2020 16:09:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7544.19796; Thu, 15 Oct 2020 16:09:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oa-0000bX-D4; Thu, 15 Oct 2020 16:09:16 +0000
Received: by outflank-mailman (input) for mailman id 7544;
 Thu, 15 Oct 2020 16:09:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5oY-0000WV-Rm
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd9e91f7-be31-4013-a56f-e8777305bc8c;
 Thu, 15 Oct 2020 16:09:11 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5oV-0000gF-CJ
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:11 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5oV-0006c9-BS
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:11 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WQ-0000oB-6Q; Thu, 15 Oct 2020 16:50:30 +0100
X-Inumbo-ID: fd9e91f7-be31-4013-a56f-e8777305bc8c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=sjdvYyM1PnXhPbXqN8m96KPPeA795n+928eEa0JNcoo=; b=rkJjFYWXXI5rcMm20svd5AyiKo
	wD//qzGLFyxtvnmo/nqrCsX9lTuJB1NwS+KXH2h2mdHDCEuPrThi1A0jwl1xWL+HFm7b8HkmIS5EM
	vck74Xe4WaZ+T5VoJ40m4SK9/P9INOd59mc/6f8u17cqvvNrgejc7a57P5B7VD/VGi1I=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 14/17] Honour OSSTEST_SIMULATE_FAIL_RETRY for immediate retries
Date: Thu, 15 Oct 2020 16:50:16 +0100
Message-Id: <20201015155019.20705-15-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is primarily useful for debugging the immediate-retry logic, but
it seems churlish to delete it again.
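The hunk below uses the `${VAR+y}` expansion, which tests whether the
variable is set at all rather than whether it is non-empty, so an
operator can deliberately export an empty OSSTEST_SIMULATE_FAIL_RETRY
to blank out the simulated failure for the retry flight.  A minimal
sketch of the idiom (illustrative names, not osstest code):

```shell
#!/bin/sh
# ${SIMULATED_FAIL+y} expands to "y" when SIMULATED_FAIL is set --
# even to the empty string -- and to "" when it is unset.
is_set () {
	if [ "${SIMULATED_FAIL+y}" = y ]; then
		echo set
	else
		echo unset
	fi
}

unset SIMULATED_FAIL
is_set                      # prints: unset

SIMULATED_FAIL=''
is_set                      # prints: set (an empty value still counts)

SIMULATED_FAIL='host-install'
is_set                      # prints: set
```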

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/cr-daily-branch b/cr-daily-branch
index bea8734e..3e58d465 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -517,6 +517,10 @@ while true; do
 		--branch=$branch --revision-osstest=$narness_rev \
 		'^build-*' --debug --blessings=real
 
+	if [ "${OSSTEST_SIMULATE_FAIL_RETRY+y}" = y ]; then
+		export OSSTEST_SIMULATE_FAIL="${OSSTEST_SIMULATE_FAIL_RETRY}"
+	fi
+
 	export OSSTEST_RESOURCE_WAITSTART=$original_start
 	execute_flight $rflight $OSSTEST_BLESSING-retest
 	report_flight $rflight
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7545.19808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oe-0000ep-P1; Thu, 15 Oct 2020 16:09:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7545.19808; Thu, 15 Oct 2020 16:09:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oe-0000ei-LM; Thu, 15 Oct 2020 16:09:20 +0000
Received: by outflank-mailman (input) for mailman id 7545;
 Thu, 15 Oct 2020 16:09:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5od-0000WV-S0
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad5d6dd0-5b63-453f-b297-7a399c91773d;
 Thu, 15 Oct 2020 16:09:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5oa-0000gP-23
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:16 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5oa-0006d2-18
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WQ-0000oB-Fn; Thu, 15 Oct 2020 16:50:30 +0100
X-Inumbo-ID: ad5d6dd0-5b63-453f-b297-7a399c91773d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=uF93lLE6Hujj9HHD1uOcz9Kb2G0DNm7aSbOO3mDjPxk=; b=ucPB9WYIV0ZAF7oH+CGrjitn7Y
	jOM9r+VrJwVBV1a9GkeV4ypM3hBlEDHCtcji9kMc+BcJ/m1a8iuBWL7WgCbFmfVkX4sPSs3DMMZfU
	7SVRFn3zPiEKWwyCWOFTkHzmIwiiEa0UhArz6L3LRjp/ZqfpJn+j2RySJ9vrI9Awsg0E=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: [OSSTEST PATCH v2 15/17] cr-daily-branch: Do not do immediate retry of failing xtf flights
Date: Thu, 15 Oct 2020 16:50:17 +0100
Message-Id: <20201015155019.20705-16-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

CC: Andrew Cooper <Andrew.Cooper3@citrix.com>
Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 1 +
 1 file changed, 1 insertion(+)

diff --git a/cr-daily-branch b/cr-daily-branch
index 3e58d465..9b1961bd 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -484,6 +484,7 @@ default_immediate_retry=$wantpush
 case "$branch" in
 *smoke*)	default_immediate_retry=false ;;
 osstest)	default_immediate_retry=false ;;
+xtf*)		default_immediate_retry=false ;;
 *)		;;
 esac
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7543.19784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oV-0000Y6-7Y; Thu, 15 Oct 2020 16:09:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7543.19784; Thu, 15 Oct 2020 16:09:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oV-0000Xz-45; Thu, 15 Oct 2020 16:09:11 +0000
Received: by outflank-mailman (input) for mailman id 7543;
 Thu, 15 Oct 2020 16:09:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5oT-0000WV-Rf
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6149a3bf-9f7e-44f6-b055-eb9cb7ea1ec5;
 Thu, 15 Oct 2020 16:09:03 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5oN-0000g0-Hg
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:03 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5oN-0006ay-ER
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:03 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WP-0000oB-FP; Thu, 15 Oct 2020 16:50:29 +0100
X-Inumbo-ID: 6149a3bf-9f7e-44f6-b055-eb9cb7ea1ec5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=pacL1ullUVs7eBNosppLBDynVv2r14S6C90c7shdrs4=; b=ymWNHGrAQqb+8EtIaJSaYB3mJx
	rTiYue9Ejhobk1qFqt/o2f0h/em6GcGqfMG8YTa8pFNdpf2TX5LT3Ti/RPfIJe4wrrxRBWsalNA1r
	7izg5bd8uAl+CKhOrV/k/sMNd/Fcnq97HODaqFGXitgdOqKNVf0E8msu6JvxP/g/RZ8U=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 11/17] Introduce real-retry blessing
Date: Thu, 15 Oct 2020 16:50:13 +0100
Message-Id: <20201015155019.20705-12-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

Nothing produces this yet.  (There is also `play-retry', of course,
but we don't really need to document that.)

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 README.dev          | 9 +++++----
 cr-daily-branch     | 3 ++-
 cr-disk-report      | 2 +-
 cr-try-bisect       | 4 ++--
 cr-try-bisect-adhoc | 2 +-
 cs-bisection-step   | 4 ++--
 sg-report-flight    | 2 +-
 7 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/README.dev b/README.dev
index 2cbca109..3d09b3c6 100644
--- a/README.dev
+++ b/README.dev
@@ -381,10 +381,11 @@ These are the principal (intended) blessings:
    commissioning, and that blessing removed and replaced with `real'
    when the hosts are ready.
 
- * `real-bisect' and `adhoc-bisect': These are found only as the
-   blessing of finished flights.  (This is achieved by passing
-   *-bisect to sg-execute-flight.)  This allows the archaeologist
-   tools to distinguish full flights from bisection steps.
+ * `real-bisect', `real-retry', `adhoc-bisect': These are found only
+   as the blessing of finished flights.  (This is achieved by passing
+   *-bisect or *-retry to sg-execute-flight.)  This allows the
+   archaeologist tools to distinguish full flights from bisection
+   steps and retries.
 
    The corresponding intended blessing (as found in the `intended'
    column of the flights table) is `real'.  So the hosts used by the
diff --git a/cr-daily-branch b/cr-daily-branch
index 23060588..285ea361 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -76,7 +76,8 @@ case $branch in
 	treeurl=`./ap-print-url $branch`;;
 esac
 
-blessings_arg=--blessings=${DAILY_BRANCH_TESTED_BLESSING:-real}
+blessings_arg=${DAILY_BRANCH_TESTED_BLESSING:-real}
+blessings_arg=--blessings=${blessings_arg},${blessings_arg}-retest
 sgr_args+=" $blessings_arg"
 
 force_baseline='' # Non-empty = indication why we are forcing baseline.
diff --git a/cr-disk-report b/cr-disk-report
index 543d35bf..d76fd72f 100755
--- a/cr-disk-report
+++ b/cr-disk-report
@@ -38,7 +38,7 @@ our $graphs_px=0;
 our $graphs_py=0;
 open DEBUG, ">/dev/null" or die $!;
 
-our @blessings = qw(real real-bisect);
+our @blessings = qw(real real-retry real-bisect);
 # for these blessings column is       "<blessing> <branch>"
 # for other blessings column is       "<intended> [<blessing>]"
 
diff --git a/cr-try-bisect b/cr-try-bisect
index a2b77b9a..6adc2bcc 100755
--- a/cr-try-bisect
+++ b/cr-try-bisect
@@ -59,7 +59,7 @@ compute_state_done_callback () {
 compute_state_callback () {
 	compute_state_core \
 		--basis-template=$basisflight \
-                --blessings=$OSSTEST_BLESSING,$OSSTEST_BLESSING-bisect \
+                --blessings=$OSSTEST_BLESSING,$OSSTEST_BLESSING-bisect,$OSSTEST_BLESSING-retry \
                 "$@" $branch $job $testid
 }
 
@@ -78,7 +78,7 @@ perhaps_bisect_step () {
                 echo "already completed $branch $job $testid"
                 return
         fi
-        perhaps_bisect_step_core $OSSTEST_BLESSING $OSSTEST_BLESSING-bisect
+        perhaps_bisect_step_core $OSSTEST_BLESSING $OSSTEST_BLESSING-bisect $OSSTEST_BLESSING-retry
 }
 
 subject_prefix="[$branch bisection]"
diff --git a/cr-try-bisect-adhoc b/cr-try-bisect-adhoc
index caadfd80..c2cfa475 100755
--- a/cr-try-bisect-adhoc
+++ b/cr-try-bisect-adhoc
@@ -49,7 +49,7 @@ export OSSTEST_BLESSING=adhoc
 
 compute_state_callback () {
 	compute_state_core \
-        	--blessings=real,real-bisect,adhoc-bisect \
+        	--blessings=real,real-retry,real-bisect,adhoc-bisect \
                 $bisect "$@" $branch $job $testid
 }
 
diff --git a/cs-bisection-step b/cs-bisection-step
index 762966da..8b391448 100755
--- a/cs-bisection-step
+++ b/cs-bisection-step
@@ -7,7 +7,7 @@
 # usage:
 #   ./cs-bisection-setup [<options>] <branch> <job> <testid>
 # options, usually:
-#      --blessings=real,real-bisect
+#      --blessings=real,real-retry,real-bisect
 #
 # First entry in --blessings list is the blessing of the basis
 # (non-bisection) flights.  This should not be the same as the
@@ -45,7 +45,7 @@ use HTML::Entities;
 use Osstest::Executive;
 use URI::Escape;
 
-our @blessings= qw(real real-bisect);
+our @blessings= qw(real real-retry real-bisect);
 our @revtuplegenargs= ();
 our $broken;
 
diff --git a/sg-report-flight b/sg-report-flight
index cbd39599..51a409ed 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -120,7 +120,7 @@ die if defined $specver{this}{flight};
 die if defined $specver{that}{flight} &&
     grep { $_ ne 'flight' } keys %{ $specver{that} };
 
-push @blessings, 'real', 'real-bisect' unless @blessings;
+push @blessings, 'real', 'real-retry', 'real-bisect' unless @blessings;
 
 csreadconfig();
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7542.19772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oP-0000Wh-V3; Thu, 15 Oct 2020 16:09:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7542.19772; Thu, 15 Oct 2020 16:09:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oP-0000Wa-Rz; Thu, 15 Oct 2020 16:09:05 +0000
Received: by outflank-mailman (input) for mailman id 7542;
 Thu, 15 Oct 2020 16:09:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5oO-0000WV-TC
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef44b822-9280-4aa4-b6a7-5da67909f9d1;
 Thu, 15 Oct 2020 16:08:59 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5oI-0000fe-RS
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:08:58 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5oI-0006Zr-Ps
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:08:58 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WO-0000oB-W5; Thu, 15 Oct 2020 16:50:29 +0100
X-Inumbo-ID: ef44b822-9280-4aa4-b6a7-5da67909f9d1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=t3Vg9e/VRpm4vV8Mm3UCVRG1em/CaV0bxhpLQJqI3XA=; b=Ryfx2r6NB3s0ZO7bDJi7fVXYpv
	Dv5FiWdwIwi3MbiRkYVpbJp/GTCabTK8pWmHnxKPPWJN2LYJt68ow2zmDFQxszBRyZ8HqjI5xNg6X
	880i8MmDNPaTF54b4r2VuZS0RBl4EbnqaH95twQfHXcGOmlgNxOwCpWkkxfT/u78mS1o=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 10/17] sg-report-flight: Nicer output for --refer-to-flight option
Date: Thu, 15 Oct 2020 16:50:12 +0100
Message-Id: <20201015155019.20705-11-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Sort the flight summary lines together, before the URLs.  This makes
it considerably easier to read.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/sg-report-flight b/sg-report-flight
index bcd896a8..cbd39599 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -796,12 +796,17 @@ sub includes ($) {
     }
 }    
 
-sub printout_flightheader ($) {
-    my ($r) = @_;
-    bodyprint <<END;
+sub printout_flightheaders {
+    foreach my $r (@_) {
+	bodyprint <<END;
 flight $r->{Flight} $branch $r->{FlightInfo}{blessing} [$r->{FlightInfo}{intended}]
+END
+    }
+    foreach my $r (@_) {
+	bodyprint <<END;
 $c{ReportHtmlPubBaseUrl}/$r->{Flight}/
 END
+    }
 }
 
 sub printout {
@@ -814,12 +819,7 @@ sub printout {
 $r->{Flight}: $r->{OutcomeSummary}
 END
     includes(\@includebeginfiles);
-
-    printout_flightheader($r);
-
-    foreach my $ref_r (@refer_to_flights) {
-	printout_flightheader($ref_r);
-    }
+    printout_flightheaders $r, @refer_to_flights;
 
     if (defined $r->{Overall}) {
         bodyprint "\n";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:09:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:09:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7546.19820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5ok-0000k0-3S; Thu, 15 Oct 2020 16:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7546.19820; Thu, 15 Oct 2020 16:09:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5oj-0000jr-W2; Thu, 15 Oct 2020 16:09:25 +0000
Received: by outflank-mailman (input) for mailman id 7546;
 Thu, 15 Oct 2020 16:09:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5oi-0000WV-SA
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 33e90831-08e4-4696-aef3-6363429603ce;
 Thu, 15 Oct 2020 16:09:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5og-0000gb-Fb
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:22 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5og-0006eP-Er
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:22 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WP-0000oB-NK; Thu, 15 Oct 2020 16:50:29 +0100
X-Inumbo-ID: 33e90831-08e4-4696-aef3-6363429603ce
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=T/OCcqRxeQc9x8tpxs5RGS4FPCqsIq9O+7k8X7cgE7E=; b=x4/oegTzScwg1bFjxBllaeFo5f
	DdYOED9tMHlPQ86HkMWdFYsr+UWXnFQhKl79+h/Cux0yMxM5InJfvqUZfOd+v2W/Fd91004LnnJx9
	gWFDwg0CAqPALj2uN5Pna/HvBNfvNP25FbGJ/5NkP/6j3lwW9Hk8Lnl1LPdKTqjoRqYc=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 12/17] cri-args-hostlists: Move flight_html_dir variable
Date: Thu, 15 Oct 2020 16:50:14 +0100
Message-Id: <20201015155019.20705-13-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

flight_html_dir is only used in report_flight; it isn't actually used
in start_email at all.  We are going to want to call report_flight
from outside start_email, without having to set the variable there
ourselves.
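Shell `local' variables are dynamically scoped: a function invoked
from start_email would see its local flight_html_dir, but
report_flight called from anywhere else would not, which is why the
variable has to move.  A minimal sketch of that behaviour (made-up
function names, bash assumed, not osstest code):

```shell
#!/bin/bash
# A callee sees the caller's `local' variables, but only while the
# caller is on the call stack.
start_email_like () {
	local flight_html_dir=/srv/html/
	report_flight_like        # sees the caller's local
}
report_flight_like () {
	echo "dir=${flight_html_dir:-unset}"
}

start_email_like     # prints: dir=/srv/html/
report_flight_like   # prints: dir=unset
```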

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cri-args-hostlists | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 52e39f33..52cac195 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -113,7 +113,6 @@ start_email () {
 	printf '%s\n' "`getconfig EmailStdHeaders`"
 	printf 'Subject: %s' "${subject_prefix:-[$branch test] }"
 
-	local flight_html_dir=$OSSTEST_HTMLPUB_DIR/
 	local job_html_dir=$OSSTEST_HTML_DIR/
 	local host_html_dir=$OSSTEST_HTML_DIR/host/
 
@@ -143,6 +142,7 @@ start_email () {
 
 report_flight () {
 	local flight=$1
+	local flight_html_dir=$OSSTEST_HTMLPUB_DIR/
 	./sg-report-flight --html-dir=$flight_html_dir/$flight/ \
 		--allow=allow.all --allow=allow.$branch \
 		$sgr_args $flight >tmp/$flight.report
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:09:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:09:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7547.19832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5on-0000o8-Jw; Thu, 15 Oct 2020 16:09:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7547.19832; Thu, 15 Oct 2020 16:09:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5on-0000nv-Fc; Thu, 15 Oct 2020 16:09:29 +0000
Received: by outflank-mailman (input) for mailman id 7547;
 Thu, 15 Oct 2020 16:09:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5om-0000mi-3O
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd437341-70cf-41f8-ab23-e7417d87276f;
 Thu, 15 Oct 2020 16:09:27 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5ol-0000gl-0Z
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:27 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5ok-0006fE-W1
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:26 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WR-0000oB-0p; Thu, 15 Oct 2020 16:50:31 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kT5om-0000mi-3O
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:28 +0000
X-Inumbo-ID: dd437341-70cf-41f8-ab23-e7417d87276f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id dd437341-70cf-41f8-ab23-e7417d87276f;
	Thu, 15 Oct 2020 16:09:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=Eb2yZLxvdxwd6Ruht+D5KNOOBqV7wkTSytbhDN+ksww=; b=pcoWROS+3Qx7axDTdPcbd1X2zg
	YDzlPXdGcMkg/HXAmGrVVr1QZWy5Wg8HIj+akL2PuMqCmwHJXN7xugVx+ur3oVxQWafXfeDn3qqjN
	TH8HThuWkxsRhpotYAAJwqkEuUNhqwu9ZJoGJdfmTGlNd3Jw8AMb1SPJthZQHHtL8hK4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5ol-0000gl-0Z
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:27 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5ok-0006fE-W1
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:26 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WR-0000oB-0p; Thu, 15 Oct 2020 16:50:31 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 17/17] cr-daily-branch: Heuristics for when to do immediate retest flight
Date: Thu, 15 Oct 2020 16:50:19 +0100
Message-Id: <20201015155019.20705-18-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do not do a retest if it would involve retesting more than 10% of the
original flight, or if it wouldn't get a push even if the retests
pass.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
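Restated outside the shell quoting, the heuristic the embedded perl below implements amounts to this sketch (Python for clarity; the function and variable names are illustrative, not part of osstest):

```python
def should_retry(regressions, n_blockers, n_jobs):
    """Decide whether an immediate retest flight is worthwhile.

    regressions: names of jobs that regressed (build-* jobs already
                 filtered out, as in the perl).
    n_blockers:  total count of blocking failures in the original flight
                 (the "nblockers" line in the mro file).
    n_jobs:      total number of jobs in the original flight
                 (the "njobs" line in the mro file).
    """
    n_retry = len(set(regressions))
    # There are blockers we would not be retesting, so even a fully
    # passing retest could not produce a push.
    if n_retry < n_blockers:
        return False
    # Retesting more than 10% of the flight is too expensive, but a
    # single regression is always worth one immediate retry.
    if n_retry > 1 and n_retry > n_jobs / 10:
        return False
    return n_retry > 0
```

For example, one regressed test job out of a hundred gets a retry, while a flight with other (non-retestable) blockers, or with many regressions, does not.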

diff --git a/cr-daily-branch b/cr-daily-branch
index 9b1961bd..e54ca227 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -497,11 +497,26 @@ while true; do
 	OSSTEST_IMMEDIATE_RETRY=false
 	retry_jobs=$(
 		perl <$mrof -wne '
+			$n_blockers = $1 if m/^nblockers (\d+)\s*$/;
+			$n_jobs     = $1 if m/^njobs (\d+)\s*$/;
 			next unless m/^regression (\S+) /;
 			my $j = $1;
 			next if $j =~ m/^build/;
 			$r{$j}++;
+			sub nope {
+				print STDERR "no retry: @_\n";
+				exit 0;
+			}
 			END {
+				my $n_retry_jobs = scalar(keys %r);
+				print STDERR <<"END";
+n_retry_jobs=$n_retry_jobs n_blockers=$n_blockers n_jobs=$n_jobs
+END
+				nope("other blockers") if
+					$n_retry_jobs < $n_blockers;
+				nope("too many regressions") if
+					$n_retry_jobs > 1 &&
+					$n_retry_jobs > $n_jobs/10;
 				print "copy-jobs '$flight' $_ "
 					foreach sort keys %r;
 			}'
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:09:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:09:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7548.19844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5or-0000so-Tx; Thu, 15 Oct 2020 16:09:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7548.19844; Thu, 15 Oct 2020 16:09:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5or-0000sf-Pb; Thu, 15 Oct 2020 16:09:33 +0000
Received: by outflank-mailman (input) for mailman id 7548;
 Thu, 15 Oct 2020 16:09:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5oq-0000mi-UO
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b574eae-3e04-4a0f-bdce-bb8145bcf898;
 Thu, 15 Oct 2020 16:09:31 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5op-0000gy-LC
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:31 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5op-0006g4-KA
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:31 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WQ-0000oB-PH; Thu, 15 Oct 2020 16:50:30 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kT5oq-0000mi-UO
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:32 +0000
X-Inumbo-ID: 7b574eae-3e04-4a0f-bdce-bb8145bcf898
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7b574eae-3e04-4a0f-bdce-bb8145bcf898;
	Thu, 15 Oct 2020 16:09:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=/E/dZdst6zLSlIoKGnfjgU1l91GnjdF0tvF6OifHDMo=; b=CDw6OgNva7Jc9kANBV8ub63OU5
	Z4obG4I7LgAvfcTDkI3XGp0QSS5xJSdstWVWEvrlRY3bn87mXJqSRfsoijN74T5zrRaULsaE7b4Xm
	7CQ2Npcvs/F5cgVCfLwliuj06uC9QmJ22n0txD39LKwKESfO3Y4LWtTkdWvYpYFPIzYc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5op-0000gy-LC
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:31 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5op-0006g4-KA
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:31 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WQ-0000oB-PH; Thu, 15 Oct 2020 16:50:30 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 16/17] sg-report-flight: Include count of blockers, and of jobs, in mro
Date: Thu, 15 Oct 2020 16:50:18 +0100
Message-Id: <20201015155019.20705-17-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The mro will now contain exactly one of "nblockers" or "tolerable".

Nothing uses this yet.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-report-flight | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
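A consumer of the mro file could read the new counts like so (a hypothetical Python sketch; osstest itself consumes these lines from embedded perl in cr-daily-branch, and "nblockers" is simply absent when the count is zero):

```python
import re

def parse_mro_counts(text):
    """Extract the njobs/nblockers counts that sg-report-flight now
    writes into the mro file.  nblockers defaults to 0 because the
    line is only emitted when the count is nonzero."""
    counts = {"njobs": None, "nblockers": 0}
    for line in text.splitlines():
        m = re.match(r"^(njobs|nblockers) (\d+)\s*$", line)
        if m:
            counts[m.group(1)] = int(m.group(2))
    return counts
```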

diff --git a/sg-report-flight b/sg-report-flight
index 51a409ed..fd266586 100755
--- a/sg-report-flight
+++ b/sg-report-flight
@@ -1128,13 +1128,16 @@ END
     }
 
     if (!$heisen_why) {
+	my $n_blockers = scalar grep { $_->{Blocker} } @failures;
+	print MRO "njobs ", scalar(@{ $fi->{JobTexts} }), "\n";
+	print MRO "nblockers $n_blockers\n" if $n_blockers;
 	if (!@failures) {
 	    print MRO "tolerable\nperfect\n" or die $!;
 	    $fi->{Overall}.= "Perfect :-)\n";
 	} elsif (grep { $_->{Blocker} eq 'regression' } @failures) {
 	    $fi->{OutcomeSummary}= "regressions - $fi->{OutcomeSummary}";
 	    $fi->{Overall}.= "Regressions :-(\n";
-	} elsif (!grep { $_->{Blocker} } @failures) {
+	} elsif (!$n_blockers) {
 	    $fi->{OutcomeSummary}= "tolerable $fi->{OutcomeSummary}";
 	    print MRO "tolerable\n" or die $!
 		unless defined $heisen_why;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:09:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:09:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7549.19856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5ow-0000yr-8R; Thu, 15 Oct 2020 16:09:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7549.19856; Thu, 15 Oct 2020 16:09:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT5ow-0000yd-4E; Thu, 15 Oct 2020 16:09:38 +0000
Received: by outflank-mailman (input) for mailman id 7549;
 Thu, 15 Oct 2020 16:09:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kT5ov-0000xQ-2W
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e17d3cb8-6410-454c-a99e-ac964f8ccf48;
 Thu, 15 Oct 2020 16:09:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5ou-0000h7-7W
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:36 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kT5ou-0006gy-6g
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:36 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kT5WP-0000oB-Uw; Thu, 15 Oct 2020 16:50:30 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/DR3=DW=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kT5ov-0000xQ-2W
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:37 +0000
X-Inumbo-ID: e17d3cb8-6410-454c-a99e-ac964f8ccf48
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e17d3cb8-6410-454c-a99e-ac964f8ccf48;
	Thu, 15 Oct 2020 16:09:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=rKhUBECvH4UVMiFTJmADZZZYMWQOR+f/Us5E4XBbMw4=; b=6nBC+4WrrE9VyBaqiUBy++5BXv
	PMlM1TTv/orJDF/qEz40NHcR1E9G9v+iv8KKje/8Q/h/hhgArylaZlbZB5cmke+cpuFmzhf7CPJTm
	EWcp1onyN6frRapjHtWJcV7ZP1MK3pA4e1q/fhVyZgMDLpyexhB6PoKTC0hiLmuoWfRE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5ou-0000h7-7W
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:36 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5ou-0006gy-6g
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:09:36 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kT5WP-0000oB-Uw; Thu, 15 Oct 2020 16:50:30 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH v2 13/17] cr-daily-branch: Immediately retry failing tests
Date: Thu, 15 Oct 2020 16:50:15 +0100
Message-Id: <20201015155019.20705-14-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201015155019.20705-1-iwj@xenproject.org>
References: <20201015155019.20705-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Ian Jackson <ian.jackson@eu.citrix.com>

We exclude the self-tests because we don't want to miss breakage, and
the Xen smoke tests because they will be run again RSN anyway.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 cr-daily-branch | 48 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 47 insertions(+), 1 deletion(-)
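The job-selection step in the embedded perl below can be mirrored as follows (a Python sketch with illustrative names; note the real one-liner additionally relies on shell quoting so that $flight is interpolated by the outer shell, not by perl):

```python
import re

def retry_job_args(mrof_text, flight):
    """Collect the non-build jobs that regressed in the original
    flight and emit cs-adjust-flight copy-jobs arguments for them."""
    jobs = set()
    for line in mrof_text.splitlines():
        # Matches the mro "regression JOB ..." lines, as in the perl.
        m = re.match(r"^regression (\S+) ", line)
        if not m:
            continue
        job = m.group(1)
        # Build jobs are excluded: rebuilding cannot un-break a test.
        if job.startswith("build"):
            continue
        jobs.add(job)
    return " ".join(f"copy-jobs {flight} {j}" for j in sorted(jobs))
```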

diff --git a/cr-daily-branch b/cr-daily-branch
index 285ea361..bea8734e 100755
--- a/cr-daily-branch
+++ b/cr-daily-branch
@@ -472,12 +472,58 @@ esac
 sgr_args+=" $EXTRA_SGR_ARGS"
 
 date >&2
+original_start=`date +%s`
+
 : $flight $branch $OSSTEST_BLESSING $sgr_args
 eval "$DAILY_BRANCH_PREEXEC_HOOK"
 execute_flight $flight $OSSTEST_BLESSING
 date >&2
 
-start_email $flight $branch "$sgr_args" "$subject_prefix"
+default_immediate_retry=$wantpush
+
+case "$branch" in
+*smoke*)	default_immediate_retry=false ;;
+osstest)	default_immediate_retry=false ;;
+*)		;;
+esac
+
+: ${OSSTEST_IMMEDIATE_RETRY:=$default_immediate_retry}
+
+while true; do
+	start_email $flight $branch "$sgr_args" "$subject_prefix"
+	if grep '^tolerable$' $mrof >/dev/null 2>&1; then break; fi
+	if ! $OSSTEST_IMMEDIATE_RETRY; then break; fi
+	OSSTEST_IMMEDIATE_RETRY=false
+	retry_jobs=$(
+		perl <$mrof -wne '
+			next unless m/^regression (\S+) /;
+			my $j = $1;
+			next if $j =~ m/^build/;
+			$r{$j}++;
+			END {
+				print "copy-jobs '$flight' $_ "
+					foreach sort keys %r;
+			}'
+	)
+	if [ "x$retry_jobs" = x ]; then break; fi
+
+	rflight=$(
+		./cs-adjust-flight new:$OSSTEST_BLESSING \
+			branch-set $branch \
+			$retry_jobs
+	)
+
+	./mg-adjust-flight-makexrefs -v $rflight \
+		--branch=$branch --revision-osstest=$narness_rev \
+		'^build-*' --debug --blessings=real
+
+	export OSSTEST_RESOURCE_WAITSTART=$original_start
+	execute_flight $rflight $OSSTEST_BLESSING-retest
+	report_flight $rflight
+	publish_logs $rflight
+
+	sgr_args+=" --refer-to-flight=$rflight"
+done
 
 push=false
 if grep '^tolerable$' $mrof >/dev/null 2>&1; then push=$wantpush; fi
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:38:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7564.19877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6GX-0003zm-Or; Thu, 15 Oct 2020 16:38:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7564.19877; Thu, 15 Oct 2020 16:38:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6GX-0003zf-Lf; Thu, 15 Oct 2020 16:38:09 +0000
Received: by outflank-mailman (input) for mailman id 7564;
 Thu, 15 Oct 2020 16:38:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LoCs=DW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kT6GW-0003za-GM
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:38:08 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0573e44a-b5e7-4114-8606-4114c68c410f;
 Thu, 15 Oct 2020 16:38:07 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=LoCs=DW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kT6GW-0003za-GM
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:38:08 +0000
X-Inumbo-ID: 0573e44a-b5e7-4114-8606-4114c68c410f
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0573e44a-b5e7-4114-8606-4114c68c410f;
	Thu, 15 Oct 2020 16:38:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602779888;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Qj6daUl3a4L1zT29nMoh4pa8HEVDUNhuxBVEfS5VrR0=;
  b=hnDmHVSLHLgJVG9K+ZY7BNvKNsE8Vtzug6amD2tWZNFjxtXklTVaQ+WR
   MxH3aU22Vjjh5c5+J02gcYKbY7GiKKB9yMqAYx+Z5t1qz/0+Ghgx4XGux
   o0t2CHsX9eKHoZ9zajqyPpoisGnJ2Zziq44JIPLY3KQjlTZWbTYZAt2iN
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 1dZCXaNiv6f3fJrYWfHabfZpoSQ/ExdZYjaapkQ0S1kuJwO3sfOM9HbFuNpN/MxAjkAJRJGbk8
 JPKIVWcaNLrbRS/B/te1m8+nn6O5b8buupV3l4EqsTaN6WSJvDO009kCF8d1YYP7X4QC56oxsn
 izy8usIuT6AtRVcAJ7v6rkH2REAK18MYeiMiGoceEwOmaV/8ay8fS1rd0lanonjaPlrJ1jb9rv
 p3cp7bpXy1lH2Fm0Q3LcUQwuZBX4bVCTO9uh2WiVXF2Vf/EeNYBKsdZR7lBoyrZPlztspRrZmM
 zGo=
X-SBRS: 2.5
X-MesageID: 29101339
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29101339"
Subject: Re: [PATCH v2] x86/smpboot: Don't unconditionally call
 memguard_guard_stack() in cpu_smpboot_alloc()
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201014184708.17758-1-andrew.cooper3@citrix.com>
 <0ed412d9-c9a2-194b-c953-c74ee102664f@suse.com>
 <0a294279-5de5-3b54-b1f9-847de1159447@citrix.com>
 <578a0afd-693a-c704-317e-477e5e27d497@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5df2626b-8755-8cdb-7cbc-74d51b569a0b@citrix.com>
Date: Thu, 15 Oct 2020 17:38:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <578a0afd-693a-c704-317e-477e5e27d497@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 15/10/2020 16:16, Jan Beulich wrote:
> On 15.10.2020 16:02, Andrew Cooper wrote:
>> On 15/10/2020 09:50, Jan Beulich wrote:
>>> On 14.10.2020 20:47, Andrew Cooper wrote:
>>>> cpu_smpboot_alloc() is designed to be idempotent with respect to partially
>>>> initialised state.  This occurs for S3 and CPU parking, where enough state to
>>>> handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
>>>> when we otherwise want to offline the CPU.
>>>>
>>>> For simplicity between various configuration, Xen always uses shadow stack
>>>> mappings (Read-only + Dirty) for the guard page, irrespective of whether
>>>> CET-SS is enabled.
>>>>
>>>> Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
>>>> by first writing out the supervisor shadow stack tokens with plain writes,
>>>> then changing the mapping to being read-only.
>>>>
>>>> This ordering is strictly necessary to configure the BSP, which cannot have
>>>> the supervisor tokens be written with WRSS.
>>>>
>>>> Instead of calling memguard_guard_stack() unconditionally, call it only when
>>>> actually allocating a new stack.  Xenheap allocates are guaranteed to be
>>>> writeable, and the net result is idempotency WRT configuring stack_base[].
>>>>
>>>> Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
>>>> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> ---
>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>>> CC: Wei Liu <wl@xen.org>
>>>>
>>>> This can more easily be demonstrated with CPU hotplug than S3, and the absence
>>>> of bug reports goes to show how rarely hotplug is used.
>>>>
>>>> v2:
>>>>  * Don't break S3/CPU parking in combination with CET-SS.  v1 would, for S3,
>>>>    turn the BSP shadow stack into regular mappings, and #DF as soon as the TLB
>>>>    shootdown completes.
>>> The code change looks correct to me, but since I don't understand
>>> this part I'm afraid I may be overlooking something. I understand
>>> the "turn the BSP shadow stack into regular mappings" relates to
>>> cpu_smpboot_free()'s call to memguard_unguard_stack(), but I
>>> didn't think we come through cpu_smpboot_free() for the BSP upon
>>> entering or leaving S3.
>> The v1 really did fix Marek's repro of the problem.
>>
>> The only possible way this can occur is if, somewhere, there is a call
>> to cpu_smpboot_free() for CPU0 with remove=0 on the S3 path
> I didn't think it was the BSP's stack that got written to, but the
> first AP's before letting it run.

Oh yes - my analysis was wrong.  The CPU notifier for CPU 1 to come up
runs on CPU 0.

So only the --- text was wrong.  Are you happy with the fix now?

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:39:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:39:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7566.19889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6HY-000464-2g; Thu, 15 Oct 2020 16:39:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7566.19889; Thu, 15 Oct 2020 16:39:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6HX-00045x-Vq; Thu, 15 Oct 2020 16:39:11 +0000
Received: by outflank-mailman (input) for mailman id 7566;
 Thu, 15 Oct 2020 16:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VKkI=DW=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1kT6HW-00045r-R8
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:39:10 +0000
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80e109db-455d-485e-896b-56324b9a02b6;
 Thu, 15 Oct 2020 16:39:10 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id d3so4453774wma.4
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:39:10 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VKkI=DW=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
	id 1kT6HW-00045r-R8
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:39:10 +0000
X-Inumbo-ID: 80e109db-455d-485e-896b-56324b9a02b6
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 80e109db-455d-485e-896b-56324b9a02b6;
	Thu, 15 Oct 2020 16:39:10 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id d3so4453774wma.4
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:39:10 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=4t3gw4yWmAQOQyd6UUTc0220EM21Rw8nlfAG9nvqlYk=;
        b=bvxmMSh9hdYzR5Aai56AVw9UTe98y68mbRJMHnMNLxpagXf7jNetayLprkr9u3aoLw
         mTgInE8Yz/DERCDve3upRRQpxjv0058NrIBd+ILJvR4KAPc1XLH8/8m6Gn02Sq9XDbBg
         HISI3+7vFYpCiyL0Uz8gaU9F7EssFKTlKP/jEo3/lnk2t7WA2+7h70P23hl/Go/g4jKt
         a7YyVV8bjqzLR6b7Y6c/mFYzx/dykfXA9RSFuPnIbnQVQbIiZ7Z0DOzmS7Wis85Eec1d
         ELv2+xSjCv3OflZ9CVajGZm1AqK5e/XjEpcRwMQ0IXyUXEEbty9sFP9S0JDwvuBmNk6x
         Kakg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=4t3gw4yWmAQOQyd6UUTc0220EM21Rw8nlfAG9nvqlYk=;
        b=ayZNxeTpScOlF6wP3f19aGiZxXso0Lr0OP8DmlGRbR18ih9DYKZk2PW72DfH5BFKFR
         4ngPuH84V7ywMhMdzM4EYWoi8YHf5SD3gMxZ5R3Is3Y6zgHNv9zR4MOK72tvm7jFJwLS
         gV5BAWWPAEyiSBlwvU/4JbjhKXKy+mFxuBb+fSd+GdrOKEF2VYwmbZpONfhnDwC9Vk/C
         CM4wjBWrjFDP+hQNPGIYrqtlkk7/VwEe+/PLYcg+qEE4LoX9apoXgaJetLL0dd2VCAWV
         rn6H3xvONcRC6Hj+1OPPkkrswREpYUbkMcUxO9+c2imvV2dvuM037e7xVbl1C+hWmBFv
         Z3YA==
X-Gm-Message-State: AOAM5302T8v9F2KWCmq8RBtTdFhgScc5AWixgxaoSPS0HNCSjDD9G6uA
	QDGnT6gZWNZkWrSUSaIhZ0L1Z4JxXoD5+OzhwrE=
X-Google-Smtp-Source: ABdhPJx5MLKwCgbdSGo9DPo2IbDvS7j6x2uRxd46dPkJThVy3wsb/cRbLucl0ZdGavyJb23nk5v5sYWMFwecz0ziIqc=
X-Received: by 2002:a1c:32c6:: with SMTP id y189mr5145549wmy.51.1602779949175;
 Thu, 15 Oct 2020 09:39:09 -0700 (PDT)
MIME-Version: 1.0
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com> <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
In-Reply-To: <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Thu, 15 Oct 2020 12:38:33 -0400
Message-ID: <CABfawhnwdkB01LKYbcNhyyhFXF2LbLFFmeN5kqh7VaYPevjzuw@mail.gmail.com>
Subject: Re: i915 dma faults on Xen
To: Jason Andryuk <jandryuk@gmail.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

> > Can you paste the memory map as printed by Xen when booting, and what
> > command line are you using to boot Xen.
>
> So this is OpenXT, and it's booting EFI -> xen -> tboot -> xen

Unrelated comment: since tboot now has a PE build
(http://hg.code.sf.net/p/tboot/code/rev/5c68f0963a78) I think it is
time for OpenXT to drop the weird efi->xen->tboot->xen flow and
just do efi->tboot->xen. The only reason we did efi->xen->tboot was
that tboot didn't have a PE build at the time. It's a very hackish
solution that's no longer needed.

Tamas


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7570.19900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6N6-0004z3-Oy; Thu, 15 Oct 2020 16:44:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7570.19900; Thu, 15 Oct 2020 16:44:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6N6-0004yw-LO; Thu, 15 Oct 2020 16:44:56 +0000
Received: by outflank-mailman (input) for mailman id 7570;
 Thu, 15 Oct 2020 16:44:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6N5-0004yr-8I
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:44:55 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c650a53-509c-4fba-863a-1b9e903cd03a;
 Thu, 15 Oct 2020 16:44:53 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id d24so3809761ljg.10
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:53 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.50
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:44:51 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kT6N5-0004yr-8I
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:44:55 +0000
X-Inumbo-ID: 3c650a53-509c-4fba-863a-1b9e903cd03a
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3c650a53-509c-4fba-863a-1b9e903cd03a;
	Thu, 15 Oct 2020 16:44:53 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id d24so3809761ljg.10
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id;
        bh=KPI4Q1F7W5euLLBiaZYUV/bLE6fHSu+9i7ATgosHWs0=;
        b=ZmAVGxzbwEoBMGgi8j7FP2RrG6tIPbX2Huq/rFX1zIZFLXpdmTafy1J+uhh3N6e9TR
         nX897+CrDSd94nh9gRD8jibfIQ31jRYelf8AfrbslVukWGkClOLz8u3OtBW/y4RkWonq
         ZwThx54weShSBO7vRopgGQiIyskkkG4ZfabD7dqCGyj+l/VB0BG6WuTzfBTtWHwpihmW
         WrcSb2e8cuF7c8azMDXZoNgMoERbvywuZBJm5i2InSE0cLXhfmRRezUh5dN+WJNsPjKD
         WV3Sye55PieAdH2zfplyHIPq8gqF8pLIEizr8IdUAMz8tE1UGEwVC+qX7ege8L1G2tw3
         vAyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=KPI4Q1F7W5euLLBiaZYUV/bLE6fHSu+9i7ATgosHWs0=;
        b=AXp6DMOYu9ly+AxxvK432MAvLtn4TAP617b909s137igpxL3F25EG4u5Fub5oFnzxb
         fn6PD/aAq6OaWsAuD715X/eG1Y66taiAoPm2BGDMGMrrEnnFkNKWKwr5fAQ0LHWcJzvk
         2Nq3085zX1hhj9IgWLIpMQ/O3TAt4dih8t59ODHMy0HOfbpT4WlQv3tXCG7t/hhV+pHq
         PRFuQz7w9mw0uiIkbmhdeMBOfg8xhgiGDZhBXV5tY1++Vfu6z+KYzY5c3ef1YfVzQMAL
         9XzC51arei8OYKs9WdwFq4B1mM0+Cb7GkNb7rjbJ+gLgdpOQqgnMleNSWHZvN4Qy5CL3
         garQ==
X-Gm-Message-State: AOAM5316jzu9LxIf+5z+e3YekfAP2j8biTLlehERI5y7DRRsdvIeHce8
	YLu6+hYHpnCmu4sN+6ybnXscWenyBpmbfA==
X-Google-Smtp-Source: ABdhPJyHbARbsSk7+f4wCyQ14OdMzOa7bj11yTGi7KR01ogF5R7x2S+z53imcYHFt2NCZ2gaJ+l48Q==
X-Received: by 2002:a2e:8815:: with SMTP id x21mr1777507ljh.312.1602780291717;
        Thu, 15 Oct 2020 09:44:51 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.50
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:44:51 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien.grall@arm.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Tim Deegan <tim@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
Date: Thu, 15 Oct 2020 19:44:11 +0300
Message-Id: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

The purpose of this patch series is to add IOREQ/DM support to Xen on Arm.
You can find the initial discussion at [1] and the RFC/V1 series at [2]/[3].
Xen on Arm requires a mechanism to forward guest MMIO accesses to a device
model in order to implement a virtio-mmio backend, or even a mediator, outside
of the hypervisor. As Xen on x86 already contains the required support, this
series tries to make it common and introduces the Arm-specific bits plus some
new functionality. The patch series is based on Julien's PoC "xen/arm: Add
support for Guest IO forwarding to a device emulator".
Besides splitting the existing IOREQ/DM support and introducing the Arm side,
the series also includes virtio-mmio related changes (the last two patches,
for the toolstack) so that reviewers can see what the whole picture could
look like.
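
To make the forwarding idea above concrete, here is a minimal,
self-contained C sketch of the mechanism: a trapped guest MMIO access is
packaged into a request and handed to whichever registered emulator claims
that address range. All names and struct fields here are illustrative
simplifications invented for this sketch, not the actual Xen ABI (the real
request structure, ioreq_t, lives in public/hvm/ioreq.h):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

enum { REQ_READ, REQ_WRITE };

/* Simplified stand-in for Xen's ioreq_t. */
struct mmio_req {
    uint64_t addr;
    uint64_t data;
    uint8_t  size;
    uint8_t  dir;            /* REQ_READ or REQ_WRITE */
    bool     pending;        /* waiting on the device model */
};

/* One MMIO range claimed by an ioreq "server" (device emulator). */
struct mmio_range {
    uint64_t start, end;
};

static bool range_claims(const struct mmio_range *r, uint64_t addr)
{
    return addr >= r->start && addr <= r->end;
}

/*
 * Forward a trapped guest MMIO access: if an emulator claims the
 * address, mark the request pending (in Xen this would notify the
 * device model via an event channel and pause the vCPU until the
 * emulator completes the access); otherwise report it unhandled.
 */
static bool forward_mmio(const struct mmio_range *r, struct mmio_req *req)
{
    if ( !range_claims(r, req->addr) )
        return false;        /* no server for this address */

    req->pending = true;     /* vCPU now waits for completion */
    return true;
}
```

The real implementation also has to cope with multiple servers, buffered
requests, and completion handling, which is exactly what the common
IOREQ code in this series factors out of x86.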

According to the initial discussion there are a few open questions/concerns
regarding security and performance of the VirtIO solution:
1. virtio-mmio vs virtio-pci, SPI vs MSI; different use cases require
   different transports.
2. A virtio backend is able to access all guest memory, so some kind of
   protection is needed: 'virtio-iommu in Xen' vs 'pre-shared memory &
   memcpys in the guest'.
3. The interface between the toolstack and an 'out-of-qemu' virtio backend;
   avoid using Xenstore in the virtio backend if possible.
4. A lot of foreign mappings could lead to memory exhaustion; Julien has
   some ideas regarding that.

All of them look valid and worth considering, but the first thing we need
on Arm is a mechanism to forward guest I/O to a device emulator, so let's
focus on that first.

***

There are a lot of changes since the RFC series: almost all TODOs were
resolved on Arm, the Arm code was improved and hardened, and the common
IOREQ/DM code became genuinely arch-agnostic (without HVM-isms). One TODO
still remains, which is "PIO handling" on Arm; it is expected to be left
unaddressed in the current series. This is not a big issue for now, while
Xen doesn't have support for vPCI on Arm. On Arm64, PIO accesses are only
used for PCI I/O BARs, and we would probably want to expose them to the
emulator as PIO accesses to make a DM completely arch-agnostic. So "PIO
handling" should be implemented when we add support for vPCI.

I left the interface untouched in the following patch
"xen/dm: Introduce xendevicemodel_set_irq_level DM op"
since there is still an open discussion about which interface to use and
what information to pass to the hypervisor.

Also, I decided to drop the following patch:
"[RFC PATCH V1 07/12] A collection of tweaks to be able to run emulator in driver domain"
as I was advised to write our own FLASK policy to cover our use case
(with the emulator in a driver domain) rather than tweak Xen.

This series depends on two patches currently under review (each affected
patch in this series carries a corresponding note):
1. https://patchwork.kernel.org/patch/11816689
2. https://patchwork.kernel.org/patch/11803383

Please note that the IOREQ feature is disabled by default within this series.

***

Patch series [4] was rebased on a recent staging branch
(8a62dee x86/vLAPIC: don't leak regs page from vlapic_init() upon error) and
tested on a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with a
virtio-mmio disk backend (which we will share later) running in a driver
domain and an unmodified Linux guest using the existing virtio-blk driver
(frontend). No issues were observed. Guest domain reboot/destroy use cases
work properly. The patch series was only build-tested on x86.

Please note that the build test passed for the following modes:
1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
5. Arm32: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
6. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)

***

Any feedback/help would be highly appreciated.

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg00825.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2020-08/msg00071.html
[3] https://lists.xenproject.org/archives/html/xen-devel/2020-09/msg00732.html
[4] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml3

Julien Grall (5):
  xen/dm: Make x86's DM feature common
  xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
  arm/ioreq: Introduce arch specific bits for IOREQ/DM features
  xen/dm: Introduce xendevicemodel_set_irq_level DM op
  libxl: Introduce basic virtio-mmio support on Arm

Oleksandr Tyshchenko (18):
  x86/ioreq: Prepare IOREQ feature for making it common
  xen/ioreq: Make x86's IOREQ feature common
  xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
  xen/ioreq: Provide alias for the handle_mmio()
  xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
  xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
  xen/ioreq: Move x86's ioreq_gfn(server) to struct domain
  xen/ioreq: Introduce ioreq_params to abstract accesses to
    arch.hvm.params
  xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
  xen/ioreq: Remove "hvm" prefixes from involved function names
  xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
  xen/arm: Stick around in leave_hypervisor_to_guest until I/O has
    completed
  xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
  xen/ioreq: Introduce domain_has_ioreq_server()
  xen/arm: io: Abstract sign-extension
  xen/ioreq: Make x86's send_invalidate_req() common
  xen/arm: Add mapcache invalidation handling
  [RFC] libxl: Add support for virtio-disk configuration

 MAINTAINERS                                     |    8 +-
 tools/libs/devicemodel/core.c                   |   18 +
 tools/libs/devicemodel/include/xendevicemodel.h |    4 +
 tools/libs/devicemodel/libxendevicemodel.map    |    1 +
 tools/libs/light/Makefile                       |    1 +
 tools/libs/light/libxl_arm.c                    |   94 +-
 tools/libs/light/libxl_create.c                 |    1 +
 tools/libs/light/libxl_internal.h               |    1 +
 tools/libs/light/libxl_types.idl                |   16 +
 tools/libs/light/libxl_types_internal.idl       |    1 +
 tools/libs/light/libxl_virtio_disk.c            |  109 ++
 tools/xl/Makefile                               |    2 +-
 tools/xl/xl.h                                   |    3 +
 tools/xl/xl_cmdtable.c                          |   15 +
 tools/xl/xl_parse.c                             |  116 ++
 tools/xl/xl_virtio_disk.c                       |   46 +
 xen/arch/arm/Makefile                           |    2 +
 xen/arch/arm/dm.c                               |   89 ++
 xen/arch/arm/domain.c                           |    9 +
 xen/arch/arm/hvm.c                              |    4 +
 xen/arch/arm/io.c                               |   29 +-
 xen/arch/arm/ioreq.c                            |  126 ++
 xen/arch/arm/p2m.c                              |   29 +
 xen/arch/arm/traps.c                            |   58 +-
 xen/arch/x86/Kconfig                            |    1 +
 xen/arch/x86/hvm/Makefile                       |    1 -
 xen/arch/x86/hvm/dm.c                           |  291 +----
 xen/arch/x86/hvm/emulate.c                      |   60 +-
 xen/arch/x86/hvm/hvm.c                          |   24 +-
 xen/arch/x86/hvm/hypercall.c                    |    9 +-
 xen/arch/x86/hvm/intercept.c                    |    5 +-
 xen/arch/x86/hvm/io.c                           |   26 +-
 xen/arch/x86/hvm/ioreq.c                        | 1533 -----------------------
 xen/arch/x86/hvm/stdvga.c                       |   10 +-
 xen/arch/x86/hvm/svm/nestedsvm.c                |    2 +-
 xen/arch/x86/hvm/vmx/realmode.c                 |    6 +-
 xen/arch/x86/hvm/vmx/vvmx.c                     |    2 +-
 xen/arch/x86/mm.c                               |   46 +-
 xen/arch/x86/mm/p2m.c                           |   13 +-
 xen/arch/x86/mm/shadow/common.c                 |    2 +-
 xen/common/Kconfig                              |    3 +
 xen/common/Makefile                             |    2 +
 xen/common/dm.c                                 |  292 +++++
 xen/common/ioreq.c                              | 1443 +++++++++++++++++++++
 xen/common/memory.c                             |   50 +-
 xen/include/asm-arm/domain.h                    |    5 +
 xen/include/asm-arm/hvm/ioreq.h                 |  109 ++
 xen/include/asm-arm/mm.h                        |    8 -
 xen/include/asm-arm/mmio.h                      |    1 +
 xen/include/asm-arm/p2m.h                       |   19 +-
 xen/include/asm-arm/paging.h                    |    4 +
 xen/include/asm-arm/traps.h                     |   24 +
 xen/include/asm-x86/hvm/domain.h                |   50 +-
 xen/include/asm-x86/hvm/emulate.h               |    2 +-
 xen/include/asm-x86/hvm/io.h                    |   17 -
 xen/include/asm-x86/hvm/ioreq.h                 |  198 ++-
 xen/include/asm-x86/hvm/vcpu.h                  |   18 -
 xen/include/asm-x86/mm.h                        |    4 -
 xen/include/asm-x86/p2m.h                       |   20 +-
 xen/include/public/arch-arm.h                   |    5 +
 xen/include/public/hvm/dm_op.h                  |   16 +
 xen/include/xen/dm.h                            |   44 +
 xen/include/xen/ioreq.h                         |  143 +++
 xen/include/xen/p2m-common.h                    |    4 +
 xen/include/xen/sched.h                         |   37 +
 xen/include/xsm/dummy.h                         |    4 +-
 xen/include/xsm/xsm.h                           |    6 +-
 xen/xsm/dummy.c                                 |    2 +-
 xen/xsm/flask/hooks.c                           |    5 +-
 69 files changed, 3223 insertions(+), 2125 deletions(-)
 create mode 100644 tools/libs/light/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c
 create mode 100644 xen/arch/arm/dm.c
 create mode 100644 xen/arch/arm/ioreq.c
 delete mode 100644 xen/arch/x86/hvm/ioreq.c
 create mode 100644 xen/common/dm.c
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/asm-arm/hvm/ioreq.h
 create mode 100644 xen/include/xen/dm.h
 create mode 100644 xen/include/xen/ioreq.h

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7571.19912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NB-00050z-4i; Thu, 15 Oct 2020 16:45:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7571.19912; Thu, 15 Oct 2020 16:45:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NB-00050r-1T; Thu, 15 Oct 2020 16:45:01 +0000
Received: by outflank-mailman (input) for mailman id 7571;
 Thu, 15 Oct 2020 16:45:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6NA-0004yr-4h
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:00 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b31d87fb-3f7e-45dc-adf7-64a3b9353b5d;
 Thu, 15 Oct 2020 16:44:54 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id h20so3821966lji.9
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:54 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.51
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:44:52 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kT6NA-0004yr-4h
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:00 +0000
X-Inumbo-ID: b31d87fb-3f7e-45dc-adf7-64a3b9353b5d
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b31d87fb-3f7e-45dc-adf7-64a3b9353b5d;
	Thu, 15 Oct 2020 16:44:54 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id h20so3821966lji.9
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=nQibM6OLGUJNh20LEK8SFBDh55fjFRwqVu7e0OezZaA=;
        b=LpY/YPInCcQOPibddOk7rTHh7vATE8tPwbTLLzOZ2joUeEnlFw9Alav9KxN5BU8rN1
         Zz+gjFLuNjP1ngfjQdkIn8ECw/vxm+lpCKdrbOClQax7tIEkKaz4/05Cn5N5z+IZAIPw
         66edSD9rtjPvQihcLu0LNUBzX+tD14l+JYOFiiQ+rXdr5yxHF/tZcrHAI7HfeqszEir3
         4wYPe1td5vZdG+kHYhcYRXg2igyqjWYhKLjVEYrmOai/j8TazrYBnhkk466ntVaOilNo
         NLc0f4OfdpbnWnbyvEew5G6xrpLDuIPg+sst0wSXWzyx8bozHokCUFH1BPvx8iEorr1m
         wpVg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=nQibM6OLGUJNh20LEK8SFBDh55fjFRwqVu7e0OezZaA=;
        b=igt2SQuSuvtjIU3daGWKw+uJAnWjKtMH5Nyyjst98RqlvqdTRGgxbYlx956PFCqj29
         YBpWbsFgdN/zv6+8QxP+10FalJwmsnIrizRdpOQkn1+yoVEshJ4brHdLjWnY/DyiZI3n
         8Ah3GsYlPg2V4UvZ19scl8u2IFwiANvqTCtEINnz4fiS+Km9fNhK5n+XcrFC4qNMbmwS
         gLeQaM3nvd1yRQrwtrCx4hd+wrSEbshU/etHRqLJWRau40v1todQi/M9AWixl9yq1ohM
         teH2s4TtVkOZ12bWj+X1RGrMGLJebcvqO/CImNuXz0EGIUhknFWdHZl1KZRNH/kZSH1T
         NVgw==
X-Gm-Message-State: AOAM532+q+ZrGwQ+RCXMRERHJ+WFM3odbWHF+/Vlk9fbyCMjUiFyNULm
	838in9WNEWqf+Khfb8gdX0KYbQDosMx5tw==
X-Google-Smtp-Source: ABdhPJywIMlWOYIyBcJjX6BDz4MfjG3WD16fS8DtYtKnNg29oCzYIjgpeYoPH1rzE2IlVTaEUaA/Aw==
X-Received: by 2002:a2e:b04f:: with SMTP id d15mr1572191ljl.413.1602780292832;
        Thu, 15 Oct 2020 09:44:52 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.51
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:44:52 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common
Date: Thu, 15 Oct 2020 19:44:12 +0300
Message-Id: <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

As a lot of x86 code can be re-used on Arm later on, this patch
makes some preparations to x86/hvm/ioreq.c before moving it to
common code. This way we will get a verbatim copy for the code
movement in a subsequent patch (arch/x86/hvm/ioreq.c will be
*just* renamed to common/ioreq.c).

This patch does the following:
1. Introduces *inline* arch_hvm_ioreq_init(), arch_hvm_ioreq_destroy(),
   arch_hvm_io_completion(), arch_hvm_destroy_ioreq_server() and
   hvm_ioreq_server_get_type_addr() to abstract arch-specific material.
2. Makes hvm_map_mem_type_to_ioreq_server() *inline*. It is not going
   to be called from the common code.
3. Makes get_ioreq_server() global. It is going to be called from
   a few places.
4. Adds the IOREQ_STATUS_* #define-s and updates candidates for moving.
5. Re-orders #include-s alphabetically.

This support is going to be used on Arm to be able to run a device
emulator outside of the Xen hypervisor.
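
The arch-hook split listed above can be sketched as follows. This is a
simplified, self-contained illustration of the pattern, not the real Xen
code: common code only calls small arch_*() helpers, so x86-only details
(such as the 0xcf8 PCI config port handler) stay out of the shared file.
The hook names mirror the patch; the bodies are invented stand-ins:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for x86's portio handler bookkeeping. */
static int portio_handlers;

/* "arch" side: on x86 this would call register_portio_handler(d, 0xcf8, ...). */
static void arch_hvm_ioreq_init(void)
{
    portio_handlers++;
}

/*
 * "arch" side teardown: on x86 this wraps relocate_portio_handler();
 * returning false tells the common code there is nothing to destroy.
 */
static bool arch_hvm_ioreq_destroy(void)
{
    return portio_handlers-- > 0;
}

/* "common" side: arch-agnostic logic only ever calls the hooks. */
static void hvm_ioreq_init(void)
{
    /* ... common per-domain setup (e.g. lock init) would go here ... */
    arch_hvm_ioreq_init();
}

static bool hvm_destroy_all_ioreq_servers(void)
{
    if ( !arch_hvm_ioreq_destroy() )
        return false;

    /* ... tear down the per-domain ioreq servers ... */
    return true;
}
```

Because the hooks are inline functions in the arch header, the common
.c file can later be renamed verbatim without dragging x86-isms along.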

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
   - fold the check of p->type into hvm_get_ioreq_server_range_type()
     and make it return success/failure
   - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
     in arch/x86/hvm/ioreq.c
   - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make arch functions inline and put them into the arch header
     to achieve a true rename in the subsequent patch
   - return void in arch_hvm_destroy_ioreq_server()
   - return bool in arch_hvm_ioreq_destroy()
   - bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
   - rename IOREQ_IO* to IOREQ_STATUS*
   - remove *handle* from arch_handle_hvm_io_completion()
   - re-order #include-s alphabetically
   - rename hvm_get_ioreq_server_range_type() to hvm_ioreq_server_get_type_addr()
     and add "const" to several arguments
---
 xen/arch/x86/hvm/ioreq.c        | 153 +++++--------------------------------
 xen/include/asm-x86/hvm/ioreq.h | 165 +++++++++++++++++++++++++++++++++++++++-
 2 files changed, 184 insertions(+), 134 deletions(-)

diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 1cc27df..d3433d7 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -1,5 +1,5 @@
 /*
- * hvm/io.c: hardware virtual machine I/O emulation
+ * ioreq.c: hardware virtual machine I/O emulation
  *
  * Copyright (c) 2016 Citrix Systems Inc.
  *
@@ -17,21 +17,18 @@
  */
 
 #include <xen/ctype.h>
+#include <xen/domain.h>
+#include <xen/event.h>
 #include <xen/init.h>
+#include <xen/irq.h>
 #include <xen/lib.h>
-#include <xen/trace.h>
+#include <xen/paging.h>
 #include <xen/sched.h>
-#include <xen/irq.h>
 #include <xen/softirq.h>
-#include <xen/domain.h>
-#include <xen/event.h>
-#include <xen/paging.h>
+#include <xen/trace.h>
 #include <xen/vpci.h>
 
-#include <asm/hvm/emulate.h>
-#include <asm/hvm/hvm.h>
 #include <asm/hvm/ioreq.h>
-#include <asm/hvm/vmx/vmx.h>
 
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
@@ -48,8 +45,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]
 
-static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                                 unsigned int id)
+struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
+                                          unsigned int id)
 {
     if ( id >= MAX_NR_IOREQ_SERVERS )
         return NULL;
@@ -209,19 +206,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
         return handle_pio(vio->io_req.addr, vio->io_req.size,
                           vio->io_req.dir);
 
-    case HVMIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
     default:
-        ASSERT_UNREACHABLE();
-        break;
+        return arch_hvm_io_completion(io_completion);
     }
 
     return true;
@@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     domain_pause(d);
 
-    p2m_set_ioreq_server(d, 0, s);
+    arch_hvm_destroy_ioreq_server(s);
 
     hvm_ioreq_server_disable(s);
 
@@ -1080,54 +1066,6 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-/*
- * Map or unmap an ioreq server to specific memory type. For now, only
- * HVMMEM_ioreq_server is supported, and in the future new types can be
- * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
- * currently, only write operations are to be forwarded to an ioreq server.
- * Support for the emulation of read operations can be added when an ioreq
- * server has such requirement in the future.
- */
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    if ( type != HVMMEM_ioreq_server )
-        return -EINVAL;
-
-    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
-        return -EINVAL;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    rc = p2m_set_ioreq_server(d, flags, s);
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    if ( rc == 0 && flags == 0 )
-    {
-        struct p2m_domain *p2m = p2m_get_hostp2m(d);
-
-        if ( read_atomic(&p2m->ioreq.entry_count) )
-            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
-    }
-
-    return rc;
-}
-
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool enabled)
 {
@@ -1215,7 +1153,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     struct hvm_ioreq_server *s;
     unsigned int id;
 
-    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
+    if ( !arch_hvm_ioreq_destroy(d) )
         return;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1243,50 +1181,13 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
                                                  ioreq_t *p)
 {
     struct hvm_ioreq_server *s;
-    uint32_t cf8;
     uint8_t type;
     uint64_t addr;
     unsigned int id;
 
-    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+    if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) )
         return NULL;
 
-    cf8 = d->arch.hvm.pci_cf8;
-
-    if ( p->type == IOREQ_TYPE_PIO &&
-         (p->addr & ~3) == 0xcfc &&
-         CF8_ENABLED(cf8) )
-    {
-        uint32_t x86_fam;
-        pci_sbdf_t sbdf;
-        unsigned int reg;
-
-        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
-
-        /* PCI config data cycle */
-        type = XEN_DMOP_IO_RANGE_PCI;
-        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
-        /* AMD extended configuration space access? */
-        if ( CF8_ADDR_HI(cf8) &&
-             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
-             (x86_fam = get_cpu_family(
-                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
-             x86_fam < 0x17 )
-        {
-            uint64_t msr_val;
-
-            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
-                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
-                addr |= CF8_ADDR_HI(cf8);
-        }
-    }
-    else
-    {
-        type = (p->type == IOREQ_TYPE_PIO) ?
-                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
-        addr = p->addr;
-    }
-
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
         struct rangeset *r;
@@ -1351,7 +1252,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     pg = iorp->va;
 
     if ( !pg )
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
 
     /*
      * Return 0 for the cases we can't deal with:
@@ -1381,7 +1282,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
         break;
     default:
         gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }
 
     spin_lock(&s->bufioreq_lock);
@@ -1391,7 +1292,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     {
         /* The queue is full: send the iopacket through the normal path. */
         spin_unlock(&s->bufioreq_lock);
-        return X86EMUL_UNHANDLEABLE;
+        return IOREQ_STATUS_UNHANDLED;
     }
 
     pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
@@ -1422,7 +1323,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
     spin_unlock(&s->bufioreq_lock);
 
-    return X86EMUL_OKAY;
+    return IOREQ_STATUS_HANDLED;
 }
 
 int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
@@ -1438,7 +1339,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
         return hvm_send_buffered_ioreq(s, proto_p);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return X86EMUL_RETRY;
+        return IOREQ_STATUS_RETRY;
 
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
@@ -1478,11 +1379,11 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
             notify_via_xen_event_channel(d, port);
 
             sv->pending = true;
-            return X86EMUL_RETRY;
+            return IOREQ_STATUS_RETRY;
         }
     }
 
-    return X86EMUL_UNHANDLEABLE;
+    return IOREQ_STATUS_UNHANDLED;
 }
 
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
@@ -1496,30 +1397,18 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
+        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
             failed++;
     }
 
     return failed;
 }
 
-static int hvm_access_cf8(
-    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
-{
-    struct domain *d = current->domain;
-
-    if ( dir == IOREQ_WRITE && bytes == 4 )
-        d->arch.hvm.pci_cf8 = *val;
-
-    /* We always need to fall through to the catch all emulator */
-    return X86EMUL_UNHANDLEABLE;
-}
-
 void hvm_ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->arch.hvm.ioreq_server.lock);
 
-    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+    arch_hvm_ioreq_init(d);
 }
 
 /*
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index e2588e9..376e2ef 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,6 +19,165 @@
 #ifndef __ASM_X86_HVM_IOREQ_H__
 #define __ASM_X86_HVM_IOREQ_H__
 
+#include <asm/hvm/emulate.h>
+#include <asm/hvm/vmx/vmx.h>
+
+#include <public/hvm/params.h>
+
+struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
+                                          unsigned int id);
+
+static inline bool arch_hvm_io_completion(enum hvm_io_completion io_completion)
+{
+    switch ( io_completion )
+    {
+    case HVMIO_realmode_completion:
+    {
+        struct hvm_emulate_ctxt ctxt;
+
+        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+        vmx_realmode_emulate_one(&ctxt);
+        hvm_emulate_writeback(&ctxt);
+
+        break;
+    }
+
+    default:
+        ASSERT_UNREACHABLE();
+        break;
+    }
+
+    return true;
+}
+
+/* Called when target domain is paused */
+static inline void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
+{
+    p2m_set_ioreq_server(s->target, 0, s);
+}
+
+/*
+ * Map or unmap an ioreq server to specific memory type. For now, only
+ * HVMMEM_ioreq_server is supported, and in the future new types can be
+ * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
+ * currently, only write operations are to be forwarded to an ioreq server.
+ * Support for the emulation of read operations can be added when an ioreq
+ * server has such requirement in the future.
+ */
+static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
+                                                   ioservid_t id,
+                                                   uint32_t type,
+                                                   uint32_t flags)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    if ( type != HVMMEM_ioreq_server )
+        return -EINVAL;
+
+    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    rc = p2m_set_ioreq_server(d, flags, s);
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    if ( rc == 0 && flags == 0 )
+    {
+        struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+        if ( read_atomic(&p2m->ioreq.entry_count) )
+            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
+    }
+
+    return rc;
+}
+
+static inline int hvm_ioreq_server_get_type_addr(const struct domain *d,
+                                                 const ioreq_t *p,
+                                                 uint8_t *type,
+                                                 uint64_t *addr)
+{
+    uint32_t cf8 = d->arch.hvm.pci_cf8;
+
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return -EINVAL;
+
+    if ( p->type == IOREQ_TYPE_PIO &&
+         (p->addr & ~3) == 0xcfc &&
+         CF8_ENABLED(cf8) )
+    {
+        uint32_t x86_fam;
+        pci_sbdf_t sbdf;
+        unsigned int reg;
+
+        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
+
+        /* PCI config data cycle */
+        *type = XEN_DMOP_IO_RANGE_PCI;
+        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
+        /* AMD extended configuration space access? */
+        if ( CF8_ADDR_HI(cf8) &&
+             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
+             (x86_fam = get_cpu_family(
+                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
+             x86_fam < 0x17 )
+        {
+            uint64_t msr_val;
+
+            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
+                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
+                *addr |= CF8_ADDR_HI(cf8);
+        }
+    }
+    else
+    {
+        *type = (p->type == IOREQ_TYPE_PIO) ?
+                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+        *addr = p->addr;
+    }
+
+    return 0;
+}
+
+static inline int hvm_access_cf8(
+    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
+{
+    struct domain *d = current->domain;
+
+    if ( dir == IOREQ_WRITE && bytes == 4 )
+        d->arch.hvm.pci_cf8 = *val;
+
+    /* We always need to fall through to the catch all emulator */
+    return X86EMUL_UNHANDLEABLE;
+}
+
+static inline void arch_hvm_ioreq_init(struct domain *d)
+{
+    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
+}
+
+static inline bool arch_hvm_ioreq_destroy(struct domain *d)
+{
+    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
+        return false;
+
+    return true;
+}
+
 bool hvm_io_pending(struct vcpu *v);
 bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
@@ -38,8 +197,6 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
                                          uint64_t end);
-int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint32_t flags);
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool enabled);
 
@@ -55,6 +212,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
 void hvm_ioreq_init(struct domain *d);
 
+#define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
+#define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
+#define IOREQ_STATUS_RETRY       X86EMUL_RETRY
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7572.19925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NG-00054s-EU; Thu, 15 Oct 2020 16:45:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7572.19925; Thu, 15 Oct 2020 16:45:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NG-00054h-AJ; Thu, 15 Oct 2020 16:45:06 +0000
Received: by outflank-mailman (input) for mailman id 7572;
 Thu, 15 Oct 2020 16:45:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6NF-0004yr-54
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:05 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57abc5f2-97a3-41c2-8c74-148dd32caf9d;
 Thu, 15 Oct 2020 16:44:56 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id c141so4370022lfg.5
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:56 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.54
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:44:54 -0700 (PDT)
X-Inumbo-ID: 57abc5f2-97a3-41c2-8c74-148dd32caf9d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=mmqWLOWImzeNp56Irm3zs3dKXoDrkJx4SfyF9oEDLUs=;
        b=ZsOiFQnpPnZdCYSQBVWu3vFDVo/kvZhqIi2OHTmpT5tDFt99YhqKH/TTcZ2CmHnvT/
         j+ie9IoJGAMqfAnD7BO4U9HVeu5a1tz8O+qkaP6ETO1Dq0KY6FL0rMWiJhWNFq30gRRP
         f6NnvRtAot/LJ68p4cMTYOKKI01Ki1/v5/qLYmIMzQTwjs+Qe4CVbS3ZI6blpFSuq3La
         knvLm2FlC7vCBLYhWPhJtGuBFSKeb76qg5H7YSOpPQzZnIBOCPxHvgXzEopSM1iYGnVl
         W10ihqrjlvYE4AiawmqyIUKaVJDUo6CdwyEF3/1ddzxDIBk6yOyem5tnwsWhuMPE+clD
         3+zw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=mmqWLOWImzeNp56Irm3zs3dKXoDrkJx4SfyF9oEDLUs=;
        b=koR69UWYrvIDJAwXrafEG34W9LbwtPc732jE4KyayubzVJCoftIif2oQ/5IPoj1w0z
         7dbUBYwp7kqXjQ22owbN0Z2cumgeO3GJwjl/S71luwMXOy/JNgnD1MGj0y+tvMHe/yCv
         I2AygLM3duQzcvlSq9+zWqEm5ufm/MHY0HTSvlQ9Uqc1QGx72soOp8WLLc/gOWeOIB56
         Sx23zMddAdpkICeUrT0yoOBNRnfAsE2o1GZss+HR+hyuJxBXuIciAdLZaAr09Fb+/U5L
         CNV9n8HMpSuvxFTcNOYg58tfdyEIGXjNxYd9qGnsqDQTZl4TNdLakB3L4M9n+jsVQVSg
         f2JQ==
X-Gm-Message-State: AOAM533waJkx7bqfe+y66XDkQTgwP0nyaXfvjcc7/dPm/7k3WpbL2OYu
	FbBqj6CF84jvPAzXsgmfRVRqCpEorEshlA==
X-Google-Smtp-Source: ABdhPJy6EuL1uoUXZSQ5V/dK7lmolV1NcGlO5kNY6cD6uQs/KAZrdgDN68W8RZBxYvyVnopQ63KKng==
X-Received: by 2002:ac2:5699:: with SMTP id 25mr1486368lfr.396.1602780295173;
        Thu, 15 Oct 2020 09:44:55 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 03/23] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
Date: Thu, 15 Oct 2020 19:44:14 +0300
Message-Id: <1602780274-29141-4-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is now a common feature and this helper will be used
on Arm as-is. Move it to xen/ioreq.h and drop the "hvm" prefix.

Although PIO handling on Arm is not introduced with the current series
(it will be implemented when support for vPCI is added), PIOs do
technically exist on Arm (although they are accessed the same way as
MMIO), so it is better not to diverge now.
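As a standalone sketch of the predicate being moved (the struct and
constants below are local stand-ins, not the authoritative
public/hvm/ioreq.h definitions), a request needs completion only when it
is ready, carries inline data, and is not a PIO write:

```c
#include <stdbool.h>
#include <stdint.h>

/* Local stand-ins for the public ioreq interface; the values are
 * illustrative, not the authoritative public/hvm/ioreq.h definitions. */
#define STATE_IOREQ_READY  1
#define IOREQ_TYPE_PIO     0
#define IOREQ_TYPE_COPY    1
#define IOREQ_READ         1
#define IOREQ_WRITE        0

struct mini_ioreq {
    uint8_t state;
    uint8_t type;
    uint8_t dir;          /* IOREQ_READ or IOREQ_WRITE */
    bool data_is_ptr;     /* data field holds a pointer, not a value */
};

/*
 * Mirror of the moved predicate: only a ready request whose data is
 * inline (not a pointer) and which is not a PIO write still needs the
 * emulator's response copied back into guest state.
 */
static bool ioreq_needs_completion(const struct mini_ioreq *ioreq)
{
    return ioreq->state == STATE_IOREQ_READY &&
           !ioreq->data_is_ptr &&
           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
}
```

A PIO write needs no completion because the guest is not waiting for any
data back; a PIO read (or any copy) with inline data does.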

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"

Changes V1 -> V2:
   - remove "hvm" prefix
---
 xen/arch/x86/hvm/emulate.c     | 4 ++--
 xen/arch/x86/hvm/io.c          | 2 +-
 xen/common/ioreq.c             | 4 ++--
 xen/include/asm-x86/hvm/vcpu.h | 7 -------
 xen/include/xen/ioreq.h        | 7 +++++++
 5 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 24cf85f..5700274 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -336,7 +336,7 @@ static int hvmemul_do_io(
             rc = hvm_send_ioreq(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                 vio->io_req.state = STATE_IOREQ_NONE;
-            else if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+            else if ( !ioreq_needs_completion(&vio->io_req) )
                 rc = X86EMUL_OKAY;
         }
         break;
@@ -2649,7 +2649,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     if ( rc == X86EMUL_OKAY && vio->mmio_retry )
         rc = X86EMUL_RETRY;
 
-    if ( !hvm_ioreq_needs_completion(&vio->io_req) )
+    if ( !ioreq_needs_completion(&vio->io_req) )
         completion = HVMIO_no_completion;
     else if ( completion == HVMIO_no_completion )
         completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 3e09d9b..b220d6b 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -135,7 +135,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
 
     rc = hvmemul_do_pio_buffer(port, size, dir, &data);
 
-    if ( hvm_ioreq_needs_completion(&vio->io_req) )
+    if ( ioreq_needs_completion(&vio->io_req) )
         vio->io_completion = HVMIO_pio_completion;
 
     switch ( rc )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index d3433d7..c89df7a 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -159,7 +159,7 @@ static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
     }
 
     p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
+    if ( ioreq_needs_completion(p) )
         p->data = data;
 
     sv->pending = false;
@@ -185,7 +185,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
         return false;
 
-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+    vio->io_req.state = ioreq_needs_completion(&vio->io_req) ?
         STATE_IORESP_READY : STATE_IOREQ_NONE;
 
     msix_write_completion(v);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 5ccd075..6c1feda 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,13 +91,6 @@ struct hvm_vcpu_io {
     const struct g2m_ioport *g2m_ioport;
 };
 
-static inline bool hvm_ioreq_needs_completion(const ioreq_t *ioreq)
-{
-    return ioreq->state == STATE_IOREQ_READY &&
-           !ioreq->data_is_ptr &&
-           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
-}
-
 struct nestedvcpu {
     bool_t nv_guestmode; /* vcpu in guestmode? */
     void *nv_vvmcx; /* l1 guest virtual VMCB/VMCS */
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 6db4392..8e1603c 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -24,6 +24,13 @@
 struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
                                           unsigned int id);
 
+static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
+{
+    return ioreq->state == STATE_IOREQ_READY &&
+           !ioreq->data_is_ptr &&
+           (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
+}
+
 bool hvm_io_pending(struct vcpu *v);
 bool handle_hvm_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7573.19936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NK-00058e-NI; Thu, 15 Oct 2020 16:45:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7573.19936; Thu, 15 Oct 2020 16:45:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NK-00058T-Jj; Thu, 15 Oct 2020 16:45:10 +0000
Received: by outflank-mailman (input) for mailman id 7573;
 Thu, 15 Oct 2020 16:45:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6NK-0004yr-5E
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:10 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ab12fba-4c45-497b-966b-2f9e79aa024b;
 Thu, 15 Oct 2020 16:44:57 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id l2so4403908lfk.0
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:57 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.55
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:44:55 -0700 (PDT)
X-Inumbo-ID: 6ab12fba-4c45-497b-966b-2f9e79aa024b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=ttyepk04DroNuuyrE7kUju7fxJWgSwghlPrbVPtWZwk=;
        b=V4XLO+24Y+705cePezQYGyFCz4z/qSTZsDUC1r3kjVWfPe15UvtNa51bdKnC5PWUEd
         HuWtcD7Hps6VST0HuSisyxAjgQjlvzRPCctuFl93LA8p6I/81eHSCyWXKUEdAHbZolUs
         +ojIFwxgYDR3sBsjU3H1hJbX/u9rWVuEzYb2lZwKHxjwJzY3Q1xwdQV+FkdkuSbk+FSo
         nyFAydRwuly3soU+DooQ5MflsgYO9bk6TX1huLRcMR1fxj9iIbd9I8bBnphK9KRGMSfE
         HlzC6xNxadcAs1+QR9E3UVcbKcP+f9VqklUmddNeKPQMjaZqnedTzuiuRKDfmP6rbM+A
         P36w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=ttyepk04DroNuuyrE7kUju7fxJWgSwghlPrbVPtWZwk=;
        b=Jpl4B48GkjVE8iMaD/H9eTnAnGYv4Qf8znanKam6CE1tmUU7z+nV1nviQJkiqBSldR
         J2nOkAdu9gsFdIBDIO8fce4HD1DToEd3FW8ApzmO5SWss8zNUk1NhKCKr/EcUltr0+qj
         5SSGf0vNjUMUGQI/Xbxh36zpGSSqWbh2667ZoB5bNooIAMjlYTBlITVOoxb2Urxm69Dp
         ely3lOh3dAvwGHI+I0M1jLbbe7ea1jUEX06uEI3q9uNSOTOGuTnoSZcuC69MF8+4OELy
         mqU3G3SFMeleSltaGmACBA3bKxeZTsK6QSnfK4kN7hKevk9iyeaZfH7n9q7M3dz0XgJ/
         EAug==
X-Gm-Message-State: AOAM531NmNYkUloCXs+WXtlPrjJKR0LP+ESzw2UA1DsYSE0WIn6Cn+Qr
	qUHnpL6FvSHPPU4h6sNygRlDY/ncexyzHw==
X-Google-Smtp-Source: ABdhPJz/slHPdEYDiEnJBk3d7otX848eBJ8VApE6o12yUQc60uWrA2Q3qdoeaPfm/qaplDbFmlu88w==
X-Received: by 2002:a19:c857:: with SMTP id y84mr1338613lff.432.1602780296167;
        Thu, 15 Oct 2020 09:44:56 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
Date: Thu, 15 Oct 2020 19:44:15 +0300
Message-Id: <1602780274-29141-5-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is now a common feature and Arm will have its own
implementation.

However, the name of the function is rather generic and could be
confusing on Arm (which already has a try_handle_mmio()).

Rather than renaming the function globally (it is used for a varying
set of purposes on x86), provide an alias ioreq_complete_mmio() to be
used in common and Arm code.
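A minimal sketch of the aliasing approach (handle_mmio() below is a stub
standing in for x86's real function; only the #define line matches the
shape of what the patch adds): common code spells only the generic name,
so each arch header can map it onto its own implementation.

```c
/* Stub standing in for x86's existing handle_mmio(). */
static int handle_mmio(void)
{
    return 1; /* pretend the pending MMIO access was completed */
}

/* Per-arch alias: the same shape the patch adds to asm-x86/hvm/ioreq.h. */
#define ioreq_complete_mmio   handle_mmio

/* "Common" code calls only the alias, so Arm can later map the same
 * name onto its own implementation without touching this call site. */
static int complete_pending_mmio(void)
{
    return ioreq_complete_mmio();
}
```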

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>
Acked-by: Jan Beulich <jbeulich@suse.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "handle"
   - add Jan's A-b
---
 xen/common/ioreq.c              | 2 +-
 xen/include/asm-x86/hvm/ioreq.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index c89df7a..29ad48e 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -200,7 +200,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
         break;
 
     case HVMIO_mmio_completion:
-        return handle_mmio();
+        return ioreq_complete_mmio();
 
     case HVMIO_pio_completion:
         return handle_pio(vio->io_req.addr, vio->io_req.size,
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index a3d8faa..a147856 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
 #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
 #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
 
+#define ioreq_complete_mmio   handle_mmio
+
 #endif /* __ASM_X86_HVM_IOREQ_H__ */
 
 /*
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7574.19949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NQ-0005Fc-8w; Thu, 15 Oct 2020 16:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7574.19949; Thu, 15 Oct 2020 16:45:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NQ-0005FT-5H; Thu, 15 Oct 2020 16:45:16 +0000
Received: by outflank-mailman (input) for mailman id 7574;
 Thu, 15 Oct 2020 16:45:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6NP-0004yr-5R
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:15 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec0d2ac8-c18c-49dc-a0e7-e58aa7a9280c;
 Thu, 15 Oct 2020 16:44:58 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id c21so3880022ljj.0
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:58 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.56
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:44:56 -0700 (PDT)
X-Inumbo-ID: ec0d2ac8-c18c-49dc-a0e7-e58aa7a9280c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=KgNrI0jjP6nQ+Uo9tT9QhBTyECmVbDeACqoYsGAUBPM=;
        b=ARUsd/BRZI5s/0J26oyNn8wIKcKZFAYkEa6I4wJXTRpkFPlrQQwHcbEUI3Os0zKtVu
         ypzmn9T/WO7Br8v/RvWuYLid8C8xy+Z8sCQGfL6hf8lXdQmEBCTsOG1ebukCwPPDjfqI
         Gc5PEaYtZIXMLGKohs2D0J6/9WEEtgIT5TIK/8zQMlIMJucQxE0HDVzyUj7i78Hx5BaJ
         h4xXR3joTKyku6PHCeWFOtuF06OBTsMwRdOlJZa31schR18SRcGe+y44bxe0xTdeQnlT
         lvZN6qRGSo6i3wDz95REeelfvOVpBnNhAZGqI1o/tk7Wtdwx/EFyNR0laxKxkkC+pOKM
         +v9Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=KgNrI0jjP6nQ+Uo9tT9QhBTyECmVbDeACqoYsGAUBPM=;
        b=BBMVDfEoCQpBODiWq2aOs9PxZ+KwYODtYtPzBQ7ROcJlmXf2+RYozj/FulwIS2La95
         uajuuCYBpuhx/1OBgLhyfYoItEyeRhxPyL49VGi755ooykhQnNtn0zcszMg7UIVouVUJ
         NIN683PPQY09ONjun4X5PBGpH7+1S2ImgMEvp33uEInCzGsNj2VJlBoFPEcTGvgI/v8+
         L0uUaKCkOlRopxQkL4eeQ0kgYHmopG9pOwVRZD0Gnjcks9nidFggyxunXQaNOH7+53Fl
         8s0y3dnv9LFtip9/zg8ydUwiDqyVWHBO8nO5nGj8XTH/O5nm5pyfhHNgQx+wSJLNmvdz
         3BrA==
X-Gm-Message-State: AOAM530NuNbZVJ1Fk/PVc1oWkZxQT5Lt/VdeUm3uX1Jl2Jj1nabZf+Jp
	denRCId9rr2DDEFTB8ficFUkIcdT782ljQ==
X-Google-Smtp-Source: ABdhPJwknzxpiS2ZDmlrF2YihbjX8Gao6YgxxSmqJ8ySUZVR2HPG3uujUIxGTa3tZl6W/rBa9AT94g==
X-Received: by 2002:a2e:9a9a:: with SMTP id p26mr1523137lji.4.1602780297235;
        Thu, 15 Oct 2020 09:44:57 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 05/23] xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
Date: Thu, 15 Oct 2020 19:44:16 +0300
Message-Id: <1602780274-29141-6-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is now a common feature and these helpers will be used
on Arm as-is. Move them to xen/ioreq.h and replace the "hvm" prefix
with "ioreq".
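A rough model of the two helpers (field names mirror ioreq_t, but the
struct and types below are local stand-ins): for an ascending access the
range starts at addr, while with the x86 direction flag set the repeated
access walks downward, so the first byte touched lies below addr.

```c
#include <stdbool.h>
#include <stdint.h>

/* Local model of a repeated MMIO access; illustrative stand-in for
 * ioreq_t, not the authoritative definition. */
struct mini_ioreq {
    uint64_t addr;   /* address of the current element */
    uint32_t size;   /* bytes per element */
    uint32_t count;  /* number of elements */
    bool df;         /* x86 direction flag: true = descending */
};

/* With df clear the access ascends from addr; with df set the string
 * operation walks downward from addr, one element at a time. */
static uint64_t ioreq_mmio_first_byte(const struct mini_ioreq *p)
{
    return p->df ? p->addr - (p->count - 1ull) * p->size
                 : p->addr;
}

static uint64_t ioreq_mmio_last_byte(const struct mini_ioreq *p)
{
    return p->df ? p->addr + p->size - 1
                 : p->addr + (uint64_t)p->count * p->size - 1;
}
```

So a 3-element, 4-byte ascending access at 0x1000 spans [0x1000, 0x100b],
while the same access with df set spans [0xff8, 0x1003] -- exactly the
range an intercept handler must be willing to accept.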

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - replace "hvm" prefix by "ioreq"
---
 xen/arch/x86/hvm/intercept.c |  5 +++--
 xen/arch/x86/hvm/stdvga.c    |  4 ++--
 xen/common/ioreq.c           |  4 ++--
 xen/include/asm-x86/hvm/io.h | 16 ----------------
 xen/include/xen/ioreq.h      | 16 ++++++++++++++++
 5 files changed, 23 insertions(+), 22 deletions(-)

diff --git a/xen/arch/x86/hvm/intercept.c b/xen/arch/x86/hvm/intercept.c
index cd4c4c1..02ca3b0 100644
--- a/xen/arch/x86/hvm/intercept.c
+++ b/xen/arch/x86/hvm/intercept.c
@@ -17,6 +17,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/ioreq.h>
 #include <xen/types.h>
 #include <xen/sched.h>
 #include <asm/regs.h>
@@ -34,7 +35,7 @@
 static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
                               const ioreq_t *p)
 {
-    paddr_t first = hvm_mmio_first_byte(p), last;
+    paddr_t first = ioreq_mmio_first_byte(p), last;
 
     BUG_ON(handler->type != IOREQ_TYPE_COPY);
 
@@ -42,7 +43,7 @@ static bool_t hvm_mmio_accept(const struct hvm_io_handler *handler,
         return 0;
 
     /* Make sure the handler will accept the whole access. */
-    last = hvm_mmio_last_byte(p);
+    last = ioreq_mmio_last_byte(p);
     if ( last != first &&
          !handler->mmio.ops->check(current, last) )
         domain_crash(current->domain);
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index e267513..e184664 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -524,8 +524,8 @@ static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
      * deadlock when hvm_mmio_internal() is called from
      * hvm_copy_to/from_guest_phys() in hvm_process_io_intercept().
      */
-    if ( (hvm_mmio_first_byte(p) < VGA_MEM_BASE) ||
-         (hvm_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
+    if ( (ioreq_mmio_first_byte(p) < VGA_MEM_BASE) ||
+         (ioreq_mmio_last_byte(p) >= (VGA_MEM_BASE + VGA_MEM_SIZE)) )
         return 0;
 
     spin_lock(&s->lock);
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 29ad48e..5fa10b6 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -1210,8 +1210,8 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
             break;
 
         case XEN_DMOP_IO_RANGE_MEMORY:
-            start = hvm_mmio_first_byte(p);
-            end = hvm_mmio_last_byte(p);
+            start = ioreq_mmio_first_byte(p);
+            end = ioreq_mmio_last_byte(p);
 
             if ( rangeset_contains_range(r, start, end) )
                 return s;
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index 558426b..fb64294 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -40,22 +40,6 @@ struct hvm_mmio_ops {
     hvm_mmio_write_t write;
 };
 
-static inline paddr_t hvm_mmio_first_byte(const ioreq_t *p)
-{
-    return unlikely(p->df) ?
-           p->addr - (p->count - 1ul) * p->size :
-           p->addr;
-}
-
-static inline paddr_t hvm_mmio_last_byte(const ioreq_t *p)
-{
-    unsigned long size = p->size;
-
-    return unlikely(p->df) ?
-           p->addr + size - 1:
-           p->addr + (p->count * size) - 1;
-}
-
 typedef int (*portio_action_t)(
     int dir, unsigned int port, unsigned int bytes, uint32_t *val);
 
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 8e1603c..768ac94 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -24,6 +24,22 @@
 struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
                                           unsigned int id);
 
+static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
+{
+    return unlikely(p->df) ?
+           p->addr - (p->count - 1ul) * p->size :
+           p->addr;
+}
+
+static inline paddr_t ioreq_mmio_last_byte(const ioreq_t *p)
+{
+    unsigned long size = p->size;
+
+    return unlikely(p->df) ?
+           p->addr + size - 1:
+           p->addr + (p->count * size) - 1;
+}
+
 static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
 {
     return ioreq->state == STATE_IOREQ_READY &&
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7576.19961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NV-0005LZ-KR; Thu, 15 Oct 2020 16:45:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7576.19961; Thu, 15 Oct 2020 16:45:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6NV-0005LN-GZ; Thu, 15 Oct 2020 16:45:21 +0000
Received: by outflank-mailman (input) for mailman id 7576;
 Thu, 15 Oct 2020 16:45:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6NU-0004yr-5n
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:20 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6851cef5-f3f4-4e3f-bdf8-f145c9d70b28;
 Thu, 15 Oct 2020 16:44:56 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id a4so3812332lji.12
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:56 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.52
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:44:53 -0700 (PDT)
X-Inumbo-ID: 6851cef5-f3f4-4e3f-bdf8-f145c9d70b28
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=oIjfo1CGKNSvcBAR0h3UWupQUOFTypLD55b2m9vW5pk=;
        b=Vykmed178toXYuQ3h7KwpQed3bgBglikHchHrPwhzfYTiF6h8N9+KWO/ZfXnP+4suW
         KGNBEElkagvR42jsh/fU3Luwm87ljYpHbWXnDaMqIYRA3BnA3UddEdGU0dGVUoHUMsKx
         jHdE7MUxHPcWvDa0/68BSFBPNHTdPLfbrZmX7TMAdpewUO2SL0eH2w9deemo22TCozfP
         v6JOGDJjR/+V9pMefRfvh+/+6WpUCftCxy8BFvXdbsPYoWNGE4NV/NVV47/3PBce2wd1
         AaKqKFBzHr27IIx1+rMHomhPshge+O2cMi+GmkR/4SWC6nETiR5TDuq7XTrob2f/SFD1
         D3MQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=oIjfo1CGKNSvcBAR0h3UWupQUOFTypLD55b2m9vW5pk=;
        b=kgQ26vuWaMsW+PWdrulQ0L4ayUm8EEJ27dOd3JJUZM3aUaITnj0CFIr7YGGdu9Ff3+
         d9ACIGu5TVD19HXCzkINwII5/f6uuLrbLXheAfW+U3Z4FnPqFEjyKGaVt9dF0y0/kMXy
         X/t0XaMneFnAsamn6PkHFQifD8KzUzYdOUO/rc7fCeeXxVAo/n4RlPnM9Xma5buRRzlj
         qzKp/sWDXZ82ux7ykD2ganDQOasv8/nQg4odkC9sWcmOZgbwFge0n1l2luBkRvJQbbOM
         cco9ix/nx8RG/84nK7gGMelQeuYHxDJ7mqsaoyKcIlWcBTLgTf/bJTHFqrDS3NOoQUlt
         8zKw==
X-Gm-Message-State: AOAM530hgh7byeisoo49YCaOixXN+lBVbxQ3VNjAKYLUUdOJaZ0os+jv
	woazuHqZAmsCVoXHNsQMurysEvNhQpppkw==
X-Google-Smtp-Source: ABdhPJw3RbUcqCpieE0iDXhJvgMzU77JvsZC3zTj2LiLH1W9Cdl8E5OfZL7nF63BjYW8w3SNla+aWA==
X-Received: by 2002:a2e:6813:: with SMTP id c19mr1712578lja.152.1602780294179;
        Thu, 15 Oct 2020 09:44:54 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
Date: Thu, 15 Oct 2020 19:44:13 +0300
Message-Id: <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

As a lot of x86 code can be re-used on Arm later on, this patch
moves the previously prepared x86/hvm/ioreq.c to the common code.

The common IOREQ feature is built when the IOREQ_SERVER option is
enabled; for now that option is selected by x86's HVM config.

In order to avoid having a gigantic patch here, the subsequent
patches will update remaining bits in the common code step by step:
- Make IOREQ related structs/materials common
- Drop the "hvm" prefixes and infixes
- Remove layering violation by moving corresponding fields
  out of *arch.hvm* or abstracting away accesses to them

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is
on review:
https://patchwork.kernel.org/patch/11816689/
***

Changes RFC -> V1:
   - was split into three patches:
     - x86/ioreq: Prepare IOREQ feature for making it common
     - xen/ioreq: Make x86's IOREQ feature common
     - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
   - update MAINTAINERS file
   - do not use a separate subdir for the IOREQ stuff, move it to:
     - xen/common/ioreq.c
     - xen/include/xen/ioreq.h
   - update x86's files to include xen/ioreq.h
   - remove unneeded headers in arch/x86/hvm/ioreq.c
   - re-order the headers alphabetically in common/ioreq.c
   - update common/ioreq.c according to the newly introduced arch functions:
     arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()

Changes V1 -> V2:
   - update patch description
   - make everything needed in the previous patch to achieve
     a true rename here
   - don't include unnecessary headers from asm-x86/hvm/ioreq.h
     and xen/ioreq.h
   - use __XEN_IOREQ_H__ instead of __IOREQ_H__
   - move get_ioreq_server() to common/ioreq.c
---
 MAINTAINERS                     |    8 +-
 xen/arch/x86/Kconfig            |    1 +
 xen/arch/x86/hvm/Makefile       |    1 -
 xen/arch/x86/hvm/ioreq.c        | 1422 ---------------------------------------
 xen/arch/x86/mm.c               |    2 +-
 xen/arch/x86/mm/shadow/common.c |    2 +-
 xen/common/Kconfig              |    3 +
 xen/common/Makefile             |    1 +
 xen/common/ioreq.c              | 1422 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/hvm/ioreq.h |   39 +-
 xen/include/xen/ioreq.h         |   71 ++
 11 files changed, 1509 insertions(+), 1463 deletions(-)
 delete mode 100644 xen/arch/x86/hvm/ioreq.c
 create mode 100644 xen/common/ioreq.c
 create mode 100644 xen/include/xen/ioreq.h

diff --git a/MAINTAINERS b/MAINTAINERS
index 26c5382..cbb00d6 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -333,6 +333,13 @@ X:	xen/drivers/passthrough/vtd/
 X:	xen/drivers/passthrough/device_tree.c
 F:	xen/include/xen/iommu.h
 
+I/O EMULATION (IOREQ)
+M:	Paul Durrant <paul@xen.org>
+S:	Supported
+F:	xen/common/ioreq.c
+F:	xen/include/xen/ioreq.h
+F:	xen/include/public/hvm/ioreq.h
+
 KCONFIG
 M:	Doug Goldstein <cardoe@cardoe.com>
 S:	Supported
@@ -549,7 +556,6 @@ F:	xen/arch/x86/hvm/ioreq.c
 F:	xen/include/asm-x86/hvm/emulate.h
 F:	xen/include/asm-x86/hvm/io.h
 F:	xen/include/asm-x86/hvm/ioreq.h
-F:	xen/include/public/hvm/ioreq.h
 
 X86 MEMORY MANAGEMENT
 M:	Jan Beulich <jbeulich@suse.com>
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 24868aa..abe0fce 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config PV_LINEAR_PT
 
 config HVM
 	def_bool !PV_SHIM_EXCLUSIVE
+	select IOREQ_SERVER
 	prompt "HVM support"
 	---help---
 	  Interfaces to support HVM domains.  HVM domains require hardware
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 3464191..0c1eff2 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -13,7 +13,6 @@ obj-y += hvm.o
 obj-y += hypercall.o
 obj-y += intercept.o
 obj-y += io.o
-obj-y += ioreq.o
 obj-y += irq.o
 obj-y += monitor.o
 obj-y += mtrr.o
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
deleted file mode 100644
index d3433d7..0000000
--- a/xen/arch/x86/hvm/ioreq.c
+++ /dev/null
@@ -1,1422 +0,0 @@
-/*
- * ioreq.c: hardware virtual machine I/O emulation
- *
- * Copyright (c) 2016 Citrix Systems Inc.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
- * more details.
- *
- * You should have received a copy of the GNU General Public License along with
- * this program; If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <xen/ctype.h>
-#include <xen/domain.h>
-#include <xen/event.h>
-#include <xen/init.h>
-#include <xen/irq.h>
-#include <xen/lib.h>
-#include <xen/paging.h>
-#include <xen/sched.h>
-#include <xen/softirq.h>
-#include <xen/trace.h>
-#include <xen/vpci.h>
-
-#include <asm/hvm/ioreq.h>
-
-#include <public/hvm/ioreq.h>
-#include <public/hvm/params.h>
-
-static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
-{
-    ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
-
-    d->arch.hvm.ioreq_server.server[id] = s;
-}
-
-#define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
-
-struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                          unsigned int id)
-{
-    if ( id >= MAX_NR_IOREQ_SERVERS )
-        return NULL;
-
-    return GET_IOREQ_SERVER(d, id);
-}
-
-/*
- * Iterate over all possible ioreq servers.
- *
- * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
- *       This is a semantic that previously existed when ioreq servers
- *       were held in a linked list.
- */
-#define FOR_EACH_IOREQ_SERVER(d, id, s) \
-    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
-        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
-            continue; \
-        else
-
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
-{
-    shared_iopage_t *p = s->ioreq.va;
-
-    ASSERT((v == current) || !vcpu_runnable(v));
-    ASSERT(p != NULL);
-
-    return &p->vcpu_ioreq[v->vcpu_id];
-}
-
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
-{
-    struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct hvm_ioreq_vcpu *sv;
-
-        list_for_each_entry ( sv,
-                              &s->ioreq_vcpu_list,
-                              list_entry )
-        {
-            if ( sv->vcpu == v && sv->pending )
-            {
-                if ( srvp )
-                    *srvp = s;
-                return sv;
-            }
-        }
-    }
-
-    return NULL;
-}
-
-bool hvm_io_pending(struct vcpu *v)
-{
-    return get_pending_vcpu(v, NULL);
-}
-
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
-{
-    unsigned int prev_state = STATE_IOREQ_NONE;
-    unsigned int state = p->state;
-    uint64_t data = ~0;
-
-    smp_rmb();
-
-    /*
-     * The only reason we should see this condition be false is when an
-     * emulator dying races with I/O being requested.
-     */
-    while ( likely(state != STATE_IOREQ_NONE) )
-    {
-        if ( unlikely(state < prev_state) )
-        {
-            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
-                     prev_state, state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        switch ( prev_state = state )
-        {
-        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
-            p->state = STATE_IOREQ_NONE;
-            data = p->data;
-            break;
-
-        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
-        case STATE_IOREQ_INPROCESS:
-            wait_on_xen_event_channel(sv->ioreq_evtchn,
-                                      ({ state = p->state;
-                                         smp_rmb();
-                                         state != prev_state; }));
-            continue;
-
-        default:
-            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
-            sv->pending = false;
-            domain_crash(sv->vcpu->domain);
-            return false; /* bail */
-        }
-
-        break;
-    }
-
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
-    if ( hvm_ioreq_needs_completion(p) )
-        p->data = data;
-
-    sv->pending = false;
-
-    return true;
-}
-
-bool handle_hvm_io_completion(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
-    enum hvm_io_completion io_completion;
-
-    if ( has_vpci(d) && vpci_process_pending(v) )
-    {
-        raise_softirq(SCHEDULE_SOFTIRQ);
-        return false;
-    }
-
-    sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
-        return false;
-
-    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
-        STATE_IORESP_READY : STATE_IOREQ_NONE;
-
-    msix_write_completion(v);
-    vcpu_end_shutdown_deferral(v);
-
-    io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
-
-    switch ( io_completion )
-    {
-    case HVMIO_no_completion:
-        break;
-
-    case HVMIO_mmio_completion:
-        return handle_mmio();
-
-    case HVMIO_pio_completion:
-        return handle_pio(vio->io_req.addr, vio->io_req.size,
-                          vio->io_req.dir);
-
-    default:
-        return arch_hvm_io_completion(io_completion);
-    }
-
-    return true;
-}
-
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
-
-    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
-    {
-        if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
-            return _gfn(d->arch.hvm.params[i]);
-    }
-
-    return INVALID_GFN;
-}
-
-static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    for ( i = 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ )
-    {
-        if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) )
-            return _gfn(d->arch.hvm.ioreq_gfn.base + i);
-    }
-
-    /*
-     * If we are out of 'normal' GFNs then we may still have a 'legacy'
-     * GFN available.
-     */
-    return hvm_alloc_legacy_ioreq_gfn(s);
-}
-
-static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
-                                      gfn_t gfn)
-{
-    struct domain *d = s->target;
-    unsigned int i;
-
-    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
-    {
-        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
-             break;
-    }
-    if ( i > HVM_PARAM_BUFIOREQ_PFN )
-        return false;
-
-    set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask);
-    return true;
-}
-
-static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
-{
-    struct domain *d = s->target;
-    unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
-
-    ASSERT(!gfn_eq(gfn, INVALID_GFN));
-
-    if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
-    {
-        ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8);
-        set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
-    }
-}
-
-static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
-
-    destroy_ring_for_helper(&iorp->va, iorp->page);
-    iorp->page = NULL;
-
-    hvm_free_ioreq_gfn(s, iorp->gfn);
-    iorp->gfn = INVALID_GFN;
-}
-
-static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    int rc;
-
-    if ( iorp->page )
-    {
-        /*
-         * If a page has already been allocated (which will happen on
-         * demand if hvm_get_ioreq_server_frame() is called), then
-         * mapping a guest frame is not permitted.
-         */
-        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
-
-    if ( d->is_dying )
-        return -EINVAL;
-
-    iorp->gfn = hvm_alloc_ioreq_gfn(s);
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return -ENOMEM;
-
-    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
-                                 &iorp->va);
-
-    if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
-
-    return rc;
-}
-
-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page;
-
-    if ( iorp->page )
-    {
-        /*
-         * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
-         * allocating a page is not permitted.
-         */
-        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
-            return -EPERM;
-
-        return 0;
-    }
-
-    page = alloc_domheap_page(s->target, MEMF_no_refcount);
-
-    if ( !page )
-        return -ENOMEM;
-
-    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
-    {
-        /*
-         * The domain can't possibly know about this page yet, so failure
-         * here is a clear indication of something fishy going on.
-         */
-        domain_crash(s->emulator);
-        return -ENODATA;
-    }
-
-    iorp->va = __map_domain_page_global(page);
-    if ( !iorp->va )
-        goto fail;
-
-    iorp->page = page;
-    clear_page(iorp->va);
-    return 0;
-
- fail:
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-
-    return -ENOMEM;
-}
-
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    struct page_info *page = iorp->page;
-
-    if ( !page )
-        return;
-
-    iorp->page = NULL;
-
-    unmap_domain_page_global(iorp->va);
-    iorp->va = NULL;
-
-    put_page_alloc_ref(page);
-    put_page_and_type(page);
-}
-
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
-{
-    const struct hvm_ioreq_server *s;
-    unsigned int id;
-    bool found = false;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
-        {
-            found = true;
-            break;
-        }
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return found;
-}
-
-static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return;
-
-    if ( guest_physmap_remove_page(d, iorp->gfn,
-                                   page_to_mfn(iorp->page), 0) )
-        domain_crash(d);
-    clear_page(iorp->va);
-}
-
-static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
-{
-    struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
-    int rc;
-
-    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
-        return 0;
-
-    clear_page(iorp->va);
-
-    rc = guest_physmap_add_page(d, iorp->gfn,
-                                page_to_mfn(iorp->page), 0);
-    if ( rc == 0 )
-        paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
-
-    return rc;
-}
-
-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
-{
-    ASSERT(spin_is_locked(&s->lock));
-
-    if ( s->ioreq.va != NULL )
-    {
-        ioreq_t *p = get_ioreq(s, sv->vcpu);
-
-        p->vp_eport = sv->ioreq_evtchn;
-    }
-}
-
-#define HANDLE_BUFIOREQ(s) \
-    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
-
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
-                                     struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-    int rc;
-
-    sv = xzalloc(struct hvm_ioreq_vcpu);
-
-    rc = -ENOMEM;
-    if ( !sv )
-        goto fail1;
-
-    spin_lock(&s->lock);
-
-    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
-                                         s->emulator->domain_id, NULL);
-    if ( rc < 0 )
-        goto fail2;
-
-    sv->ioreq_evtchn = rc;
-
-    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-    {
-        rc = alloc_unbound_xen_event_channel(v->domain, 0,
-                                             s->emulator->domain_id, NULL);
-        if ( rc < 0 )
-            goto fail3;
-
-        s->bufioreq_evtchn = rc;
-    }
-
-    sv->vcpu = v;
-
-    list_add(&sv->list_entry, &s->ioreq_vcpu_list);
-
-    if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
-
-    spin_unlock(&s->lock);
-    return 0;
-
- fail3:
-    free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
- fail2:
-    spin_unlock(&s->lock);
-    xfree(sv);
-
- fail1:
-    return rc;
-}
-
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
-                                         struct vcpu *v)
-{
-    struct hvm_ioreq_vcpu *sv;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-    {
-        if ( sv->vcpu != v )
-            continue;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-        break;
-    }
-
-    spin_unlock(&s->lock);
-}
-
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
-{
-    struct hvm_ioreq_vcpu *sv, *next;
-
-    spin_lock(&s->lock);
-
-    list_for_each_entry_safe ( sv,
-                               next,
-                               &s->ioreq_vcpu_list,
-                               list_entry )
-    {
-        struct vcpu *v = sv->vcpu;
-
-        list_del(&sv->list_entry);
-
-        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
-            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
-
-        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
-
-        xfree(sv);
-    }
-
-    spin_unlock(&s->lock);
-}
-
-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
-{
-    int rc;
-
-    rc = hvm_map_ioreq_gfn(s, false);
-
-    if ( !rc && HANDLE_BUFIOREQ(s) )
-        rc = hvm_map_ioreq_gfn(s, true);
-
-    if ( rc )
-        hvm_unmap_ioreq_gfn(s, false);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
-{
-    hvm_unmap_ioreq_gfn(s, true);
-    hvm_unmap_ioreq_gfn(s, false);
-}
-
-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
-{
-    int rc;
-
-    rc = hvm_alloc_ioreq_mfn(s, false);
-
-    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
-
-    if ( rc )
-        hvm_free_ioreq_mfn(s, false);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
-{
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
-}
-
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
-{
-    unsigned int i;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-        rangeset_destroy(s->range[i]);
-}
-
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
-                                            ioservid_t id)
-{
-    unsigned int i;
-    int rc;
-
-    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
-    {
-        char *name;
-
-        rc = asprintf(&name, "ioreq_server %d %s", id,
-                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
-                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
-                      (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" :
-                      "");
-        if ( rc )
-            goto fail;
-
-        s->range[i] = rangeset_new(s->target, name,
-                                   RANGESETF_prettyprint_hex);
-
-        xfree(name);
-
-        rc = -ENOMEM;
-        if ( !s->range[i] )
-            goto fail;
-
-        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
-    }
-
-    return 0;
-
- fail:
-    hvm_ioreq_server_free_rangesets(s);
-
-    return rc;
-}
-
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
-{
-    struct hvm_ioreq_vcpu *sv;
-
-    spin_lock(&s->lock);
-
-    if ( s->enabled )
-        goto done;
-
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
-
-    s->enabled = true;
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
-
-  done:
-    spin_unlock(&s->lock);
-}
-
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
-{
-    spin_lock(&s->lock);
-
-    if ( !s->enabled )
-        goto done;
-
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
-
-    s->enabled = false;
-
- done:
-    spin_unlock(&s->lock);
-}
-
-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
-                                 struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
-{
-    struct domain *currd = current->domain;
-    struct vcpu *v;
-    int rc;
-
-    s->target = d;
-
-    get_knownalive_domain(currd);
-    s->emulator = currd;
-
-    spin_lock_init(&s->lock);
-    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
-    spin_lock_init(&s->bufioreq_lock);
-
-    s->ioreq.gfn = INVALID_GFN;
-    s->bufioreq.gfn = INVALID_GFN;
-
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
-    if ( rc )
-        return rc;
-
-    s->bufioreq_handling = bufioreq_handling;
-
-    for_each_vcpu ( d, v )
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail_add;
-    }
-
-    return 0;
-
- fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-    return rc;
-}
-
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
-{
-    ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
-
-    /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
-     *       This is because the former will do nothing if the pages
-     *       are not mapped, leaving the page to be freed by the latter.
-     *       However if the pages are mapped then the former will set
-     *       the page_info pointer to NULL, meaning the latter will do
-     *       nothing.
-     */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
-
-    hvm_ioreq_server_free_rangesets(s);
-
-    put_domain(s->emulator);
-}
-
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int i;
-    int rc;
-
-    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
-        return -EINVAL;
-
-    s = xzalloc(struct hvm_ioreq_server);
-    if ( !s )
-        return -ENOMEM;
-
-    domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
-    {
-        if ( !GET_IOREQ_SERVER(d, i) )
-            break;
-    }
-
-    rc = -ENOSPC;
-    if ( i >= MAX_NR_IOREQ_SERVERS )
-        goto fail;
-
-    /*
-     * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
-     */
-    set_ioreq_server(d, i, s);
-
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
-    if ( rc )
-    {
-        set_ioreq_server(d, i, NULL);
-        goto fail;
-    }
-
-    if ( id )
-        *id = i;
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    return 0;
-
- fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    domain_unpause(d);
-
-    xfree(s);
-    return rc;
-}
-
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    domain_pause(d);
-
-    arch_hvm_destroy_ioreq_server(s);
-
-    hvm_ioreq_server_disable(s);
-
-    /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
-     * set_ioreq_server() since the target domain is paused.
-     */
-    hvm_ioreq_server_deinit(s);
-    set_ioreq_server(d, id, NULL);
-
-    domain_unpause(d);
-
-    xfree(s);
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    if ( ioreq_gfn || bufioreq_gfn )
-    {
-        rc = hvm_ioreq_server_map_pages(s);
-        if ( rc )
-            goto out;
-    }
-
-    if ( ioreq_gfn )
-        *ioreq_gfn = gfn_x(s->ioreq.gfn);
-
-    if ( HANDLE_BUFIOREQ(s) )
-    {
-        if ( bufioreq_gfn )
-            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
-
-        if ( bufioreq_port )
-            *bufioreq_port = s->bufioreq_evtchn;
-    }
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    ASSERT(is_hvm_domain(d));
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    rc = hvm_ioreq_server_alloc_pages(s);
-    if ( rc )
-        goto out;
-
-    switch ( idx )
-    {
-    case XENMEM_resource_ioreq_server_frame_bufioreq:
-        rc = -ENOENT;
-        if ( !HANDLE_BUFIOREQ(s) )
-            goto out;
-
-        *mfn = page_to_mfn(s->bufioreq.page);
-        rc = 0;
-        break;
-
-    case XENMEM_resource_ioreq_server_frame_ioreq(0):
-        *mfn = page_to_mfn(s->ioreq.page);
-        rc = 0;
-        break;
-
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end)
-{
-    struct hvm_ioreq_server *s;
-    struct rangeset *r;
-    int rc;
-
-    if ( start > end )
-        return -EINVAL;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    switch ( type )
-    {
-    case XEN_DMOP_IO_RANGE_PORT:
-    case XEN_DMOP_IO_RANGE_MEMORY:
-    case XEN_DMOP_IO_RANGE_PCI:
-        r = s->range[type];
-        break;
-
-    default:
-        r = NULL;
-        break;
-    }
-
-    rc = -EINVAL;
-    if ( !r )
-        goto out;
-
-    rc = -EEXIST;
-    if ( rangeset_overlaps_range(r, start, end) )
-        goto out;
-
-    rc = rangeset_add_range(r, start, end);
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end)
-{
-    struct hvm_ioreq_server *s;
-    struct rangeset *r;
-    int rc;
-
-    if ( start > end )
-        return -EINVAL;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    switch ( type )
-    {
-    case XEN_DMOP_IO_RANGE_PORT:
-    case XEN_DMOP_IO_RANGE_MEMORY:
-    case XEN_DMOP_IO_RANGE_PCI:
-        r = s->range[type];
-        break;
-
-    default:
-        r = NULL;
-        break;
-    }
-
-    rc = -EINVAL;
-    if ( !r )
-        goto out;
-
-    rc = -ENOENT;
-    if ( !rangeset_contains_range(r, start, end) )
-        goto out;
-
-    rc = rangeset_remove_range(r, start, end);
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
-{
-    struct hvm_ioreq_server *s;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    s = get_ioreq_server(d, id);
-
-    rc = -ENOENT;
-    if ( !s )
-        goto out;
-
-    rc = -EPERM;
-    if ( s->emulator != current->domain )
-        goto out;
-
-    domain_pause(d);
-
-    if ( enabled )
-        hvm_ioreq_server_enable(s);
-    else
-        hvm_ioreq_server_disable(s);
-
-    domain_unpause(d);
-
-    rc = 0;
-
- out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-    return rc;
-}
-
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-    int rc;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
-        if ( rc )
-            goto fail;
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return 0;
-
- fail:
-    while ( ++id != MAX_NR_IOREQ_SERVERS )
-    {
-        s = GET_IOREQ_SERVER(d, id);
-
-        if ( !s )
-            continue;
-
-        hvm_ioreq_server_remove_vcpu(s, v);
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    return rc;
-}
-
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-        hvm_ioreq_server_remove_vcpu(s, v);
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-}
-
-void hvm_destroy_all_ioreq_servers(struct domain *d)
-{
-    struct hvm_ioreq_server *s;
-    unsigned int id;
-
-    if ( !arch_hvm_ioreq_destroy(d) )
-        return;
-
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
-
-    /* No need to domain_pause() as the domain is being torn down */
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        hvm_ioreq_server_disable(s);
-
-        /*
-         * It is safe to call hvm_ioreq_server_deinit() prior to
-         * set_ioreq_server() since the target domain is being destroyed.
-         */
-        hvm_ioreq_server_deinit(s);
-        set_ioreq_server(d, id, NULL);
-
-        xfree(s);
-    }
-
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
-}
-
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
-{
-    struct hvm_ioreq_server *s;
-    uint8_t type;
-    uint64_t addr;
-    unsigned int id;
-
-    if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) )
-        return NULL;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        struct rangeset *r;
-
-        if ( !s->enabled )
-            continue;
-
-        r = s->range[type];
-
-        switch ( type )
-        {
-            unsigned long start, end;
-
-        case XEN_DMOP_IO_RANGE_PORT:
-            start = addr;
-            end = start + p->size - 1;
-            if ( rangeset_contains_range(r, start, end) )
-                return s;
-
-            break;
-
-        case XEN_DMOP_IO_RANGE_MEMORY:
-            start = hvm_mmio_first_byte(p);
-            end = hvm_mmio_last_byte(p);
-
-            if ( rangeset_contains_range(r, start, end) )
-                return s;
-
-            break;
-
-        case XEN_DMOP_IO_RANGE_PCI:
-            if ( rangeset_contains_singleton(r, addr >> 32) )
-            {
-                p->type = IOREQ_TYPE_PCI_CONFIG;
-                p->addr = addr;
-                return s;
-            }
-
-            break;
-        }
-    }
-
-    return NULL;
-}
-
-static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
-{
-    struct domain *d = current->domain;
-    struct hvm_ioreq_page *iorp;
-    buffered_iopage_t *pg;
-    buf_ioreq_t bp = { .data = p->data,
-                       .addr = p->addr,
-                       .type = p->type,
-                       .dir = p->dir };
-    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
-    int qw = 0;
-
-    /* Ensure buffered_iopage fits in a page */
-    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
-
-    iorp = &s->bufioreq;
-    pg = iorp->va;
-
-    if ( !pg )
-        return IOREQ_STATUS_UNHANDLED;
-
-    /*
-     * Return 0 for the cases we can't deal with:
-     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
-     *  - we cannot buffer accesses to guest memory buffers, as the guest
-     *    may expect the memory buffer to be synchronously accessed
-     *  - the count field is usually used with data_is_ptr and since we don't
-     *    support data_is_ptr we do not waste space for the count field either
-     */
-    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
-        return 0;
-
-    switch ( p->size )
-    {
-    case 1:
-        bp.size = 0;
-        break;
-    case 2:
-        bp.size = 1;
-        break;
-    case 4:
-        bp.size = 2;
-        break;
-    case 8:
-        bp.size = 3;
-        qw = 1;
-        break;
-    default:
-        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
-        return IOREQ_STATUS_UNHANDLED;
-    }
-
-    spin_lock(&s->bufioreq_lock);
-
-    if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=
-         (IOREQ_BUFFER_SLOT_NUM - qw) )
-    {
-        /* The queue is full: send the iopacket through the normal path. */
-        spin_unlock(&s->bufioreq_lock);
-        return IOREQ_STATUS_UNHANDLED;
-    }
-
-    pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
-
-    if ( qw )
-    {
-        bp.data = p->data >> 32;
-        pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
-    }
-
-    /* Make the ioreq_t visible /before/ write_pointer. */
-    smp_wmb();
-    pg->ptrs.write_pointer += qw ? 2 : 1;
-
-    /* Canonicalize read/write pointers to prevent their overflow. */
-    while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) &&
-            qw++ < IOREQ_BUFFER_SLOT_NUM &&
-            pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
-    {
-        union bufioreq_pointers old = pg->ptrs, new;
-        unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;
-
-        new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        cmpxchg(&pg->ptrs.full, old.full, new.full);
-    }
-
-    notify_via_xen_event_channel(d, s->bufioreq_evtchn);
-    spin_unlock(&s->bufioreq_lock);
-
-    return IOREQ_STATUS_HANDLED;
-}
-
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered)
-{
-    struct vcpu *curr = current;
-    struct domain *d = curr->domain;
-    struct hvm_ioreq_vcpu *sv;
-
-    ASSERT(s);
-
-    if ( buffered )
-        return hvm_send_buffered_ioreq(s, proto_p);
-
-    if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
-        return IOREQ_STATUS_RETRY;
-
-    list_for_each_entry ( sv,
-                          &s->ioreq_vcpu_list,
-                          list_entry )
-    {
-        if ( sv->vcpu == curr )
-        {
-            evtchn_port_t port = sv->ioreq_evtchn;
-            ioreq_t *p = get_ioreq(s, curr);
-
-            if ( unlikely(p->state != STATE_IOREQ_NONE) )
-            {
-                gprintk(XENLOG_ERR, "device model set bad IO state %d\n",
-                        p->state);
-                break;
-            }
-
-            if ( unlikely(p->vp_eport != port) )
-            {
-                gprintk(XENLOG_ERR, "device model set bad event channel %d\n",
-                        p->vp_eport);
-                break;
-            }
-
-            proto_p->state = STATE_IOREQ_NONE;
-            proto_p->vp_eport = port;
-            *p = *proto_p;
-
-            prepare_wait_on_xen_event_channel(port);
-
-            /*
-             * Following happens /after/ blocking and setting up ioreq
-             * contents. prepare_wait_on_xen_event_channel() is an implicit
-             * barrier.
-             */
-            p->state = STATE_IOREQ_READY;
-            notify_via_xen_event_channel(d, port);
-
-            sv->pending = true;
-            return IOREQ_STATUS_RETRY;
-        }
-    }
-
-    return IOREQ_STATUS_UNHANDLED;
-}
-
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
-{
-    struct domain *d = current->domain;
-    struct hvm_ioreq_server *s;
-    unsigned int id, failed = 0;
-
-    FOR_EACH_IOREQ_SERVER(d, id, s)
-    {
-        if ( !s->enabled )
-            continue;
-
-        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
-            failed++;
-    }
-
-    return failed;
-}
-
-void hvm_ioreq_init(struct domain *d)
-{
-    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
-
-    arch_hvm_ioreq_init(d);
-}
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * tab-width: 4
- * indent-tabs-mode: nil
- * End:
- */
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8c8f054..b5865ae 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -100,6 +100,7 @@
  */
 
 #include <xen/init.h>
+#include <xen/ioreq.h>
 #include <xen/kernel.h>
 #include <xen/lib.h>
 #include <xen/mm.h>
@@ -141,7 +142,6 @@
 #include <asm/io_apic.h>
 #include <asm/pci.h>
 #include <asm/guest.h>
-#include <asm/hvm/ioreq.h>
 
 #include <asm/hvm/grant_table.h>
 #include <asm/pv/domain.h>
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 6182313..3e6c14d 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -20,6 +20,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/ioreq.h>
 #include <xen/types.h>
 #include <xen/mm.h>
 #include <xen/trace.h>
@@ -34,7 +35,6 @@
 #include <asm/current.h>
 #include <asm/flushtlb.h>
 #include <asm/shadow.h>
-#include <asm/hvm/ioreq.h>
 #include <xen/numa.h>
 #include "private.h"
 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 3e2cf25..c971ded 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -139,6 +139,9 @@ config HYPFS_CONFIG
 	  Disable this option in case you want to spare some memory or you
 	  want to hide the .config contents from dom0.
 
+config IOREQ_SERVER
+	bool
+
 config KEXEC
 	bool "kexec support"
 	default y
diff --git a/xen/common/Makefile b/xen/common/Makefile
index b3b60a1..cdb99fb 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -15,6 +15,7 @@ obj-$(CONFIG_GRANT_TABLE) += grant_table.o
 obj-y += guestcopy.o
 obj-bin-y += gunzip.init.o
 obj-$(CONFIG_HYPFS) += hypfs.o
+obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += keyhandler.o
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
new file mode 100644
index 0000000..d3433d7
--- /dev/null
+++ b/xen/common/ioreq.c
@@ -0,0 +1,1422 @@
+/*
+ * ioreq.c: hardware virtual machine I/O emulation
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/ctype.h>
+#include <xen/domain.h>
+#include <xen/event.h>
+#include <xen/init.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/paging.h>
+#include <xen/sched.h>
+#include <xen/softirq.h>
+#include <xen/trace.h>
+#include <xen/vpci.h>
+
+#include <asm/hvm/ioreq.h>
+
+#include <public/hvm/ioreq.h>
+#include <public/hvm/params.h>
+
+static void set_ioreq_server(struct domain *d, unsigned int id,
+                             struct hvm_ioreq_server *s)
+{
+    ASSERT(id < MAX_NR_IOREQ_SERVERS);
+    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+
+    d->arch.hvm.ioreq_server.server[id] = s;
+}
+
+#define GET_IOREQ_SERVER(d, id) \
+    (d)->arch.hvm.ioreq_server.server[id]
+
+struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
+                                          unsigned int id)
+{
+    if ( id >= MAX_NR_IOREQ_SERVERS )
+        return NULL;
+
+    return GET_IOREQ_SERVER(d, id);
+}
+
+/*
+ * Iterate over all possible ioreq servers.
+ *
+ * NOTE: The iteration is backwards such that more recently created
+ *       ioreq servers are favoured in hvm_select_ioreq_server().
+ *       This preserves the semantics from when ioreq servers were
+ *       held in a linked list.
+ */
+#define FOR_EACH_IOREQ_SERVER(d, id, s) \
+    for ( (id) = MAX_NR_IOREQ_SERVERS; (id) != 0; ) \
+        if ( !(s = GET_IOREQ_SERVER(d, --(id))) ) \
+            continue; \
+        else
+
+static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
+{
+    shared_iopage_t *p = s->ioreq.va;
+
+    ASSERT((v == current) || !vcpu_runnable(v));
+    ASSERT(p != NULL);
+
+    return &p->vcpu_ioreq[v->vcpu_id];
+}
+
+static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
+                                               struct hvm_ioreq_server **srvp)
+{
+    struct domain *d = v->domain;
+    struct hvm_ioreq_server *s;
+    unsigned int id;
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        struct hvm_ioreq_vcpu *sv;
+
+        list_for_each_entry ( sv,
+                              &s->ioreq_vcpu_list,
+                              list_entry )
+        {
+            if ( sv->vcpu == v && sv->pending )
+            {
+                if ( srvp )
+                    *srvp = s;
+                return sv;
+            }
+        }
+    }
+
+    return NULL;
+}
+
+bool hvm_io_pending(struct vcpu *v)
+{
+    return get_pending_vcpu(v, NULL);
+}
+
+static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+{
+    unsigned int prev_state = STATE_IOREQ_NONE;
+    unsigned int state = p->state;
+    uint64_t data = ~0;
+
+    smp_rmb();
+
+    /*
+     * The only way this condition can be false is if an emulator dying
+     * races with I/O being requested.
+     */
+    while ( likely(state != STATE_IOREQ_NONE) )
+    {
+        if ( unlikely(state < prev_state) )
+        {
+            gdprintk(XENLOG_ERR, "Weird HVM ioreq state transition %u -> %u\n",
+                     prev_state, state);
+            sv->pending = false;
+            domain_crash(sv->vcpu->domain);
+            return false; /* bail */
+        }
+
+        switch ( prev_state = state )
+        {
+        case STATE_IORESP_READY: /* IORESP_READY -> NONE */
+            p->state = STATE_IOREQ_NONE;
+            data = p->data;
+            break;
+
+        case STATE_IOREQ_READY:  /* IOREQ_{READY,INPROCESS} -> IORESP_READY */
+        case STATE_IOREQ_INPROCESS:
+            wait_on_xen_event_channel(sv->ioreq_evtchn,
+                                      ({ state = p->state;
+                                         smp_rmb();
+                                         state != prev_state; }));
+            continue;
+
+        default:
+            gdprintk(XENLOG_ERR, "Weird HVM iorequest state %u\n", state);
+            sv->pending = false;
+            domain_crash(sv->vcpu->domain);
+            return false; /* bail */
+        }
+
+        break;
+    }
+
+    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    if ( hvm_ioreq_needs_completion(p) )
+        p->data = data;
+
+    sv->pending = false;
+
+    return true;
+}
+
+bool handle_hvm_io_completion(struct vcpu *v)
+{
+    struct domain *d = v->domain;
+    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    struct hvm_ioreq_server *s;
+    struct hvm_ioreq_vcpu *sv;
+    enum hvm_io_completion io_completion;
+
+    if ( has_vpci(d) && vpci_process_pending(v) )
+    {
+        raise_softirq(SCHEDULE_SOFTIRQ);
+        return false;
+    }
+
+    sv = get_pending_vcpu(v, &s);
+    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
+        return false;
+
+    vio->io_req.state = hvm_ioreq_needs_completion(&vio->io_req) ?
+        STATE_IORESP_READY : STATE_IOREQ_NONE;
+
+    msix_write_completion(v);
+    vcpu_end_shutdown_deferral(v);
+
+    io_completion = vio->io_completion;
+    vio->io_completion = HVMIO_no_completion;
+
+    switch ( io_completion )
+    {
+    case HVMIO_no_completion:
+        break;
+
+    case HVMIO_mmio_completion:
+        return handle_mmio();
+
+    case HVMIO_pio_completion:
+        return handle_pio(vio->io_req.addr, vio->io_req.size,
+                          vio->io_req.dir);
+
+    default:
+        return arch_hvm_io_completion(io_completion);
+    }
+
+    return true;
+}
+
+static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
+{
+    struct domain *d = s->target;
+    unsigned int i;
+
+    BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN != HVM_PARAM_IOREQ_PFN + 1);
+
+    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
+    {
+        if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
+            return _gfn(d->arch.hvm.params[i]);
+    }
+
+    return INVALID_GFN;
+}
+
+static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
+{
+    struct domain *d = s->target;
+    unsigned int i;
+
+    for ( i = 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ )
+    {
+        if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) )
+            return _gfn(d->arch.hvm.ioreq_gfn.base + i);
+    }
+
+    /*
+     * If we are out of 'normal' GFNs, then we may still have a 'legacy'
+     * GFN available.
+     */
+    return hvm_alloc_legacy_ioreq_gfn(s);
+}
+
+static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
+                                      gfn_t gfn)
+{
+    struct domain *d = s->target;
+    unsigned int i;
+
+    for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
+    {
+        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+             break;
+    }
+    if ( i > HVM_PARAM_BUFIOREQ_PFN )
+        return false;
+
+    set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask);
+    return true;
+}
+
+static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
+{
+    struct domain *d = s->target;
+    unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
+
+    ASSERT(!gfn_eq(gfn, INVALID_GFN));
+
+    if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
+    {
+        ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8);
+        set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
+    }
+}
+
+static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+        return;
+
+    destroy_ring_for_helper(&iorp->va, iorp->page);
+    iorp->page = NULL;
+
+    hvm_free_ioreq_gfn(s, iorp->gfn);
+    iorp->gfn = INVALID_GFN;
+}
+
+static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *d = s->target;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    int rc;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a page has already been allocated (which will happen on
+         * demand if hvm_get_ioreq_server_frame() is called), then
+         * mapping a guest frame is not permitted.
+         */
+        if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    if ( d->is_dying )
+        return -EINVAL;
+
+    iorp->gfn = hvm_alloc_ioreq_gfn(s);
+
+    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+        return -ENOMEM;
+
+    rc = prepare_ring_for_helper(d, gfn_x(iorp->gfn), &iorp->page,
+                                 &iorp->va);
+
+    if ( rc )
+        hvm_unmap_ioreq_gfn(s, buf);
+
+    return rc;
+}
+
+static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page;
+
+    if ( iorp->page )
+    {
+        /*
+         * If a guest frame has already been mapped (which may happen
+         * on demand if hvm_get_ioreq_server_info() is called), then
+         * allocating a page is not permitted.
+         */
+        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
+            return -EPERM;
+
+        return 0;
+    }
+
+    page = alloc_domheap_page(s->target, MEMF_no_refcount);
+
+    if ( !page )
+        return -ENOMEM;
+
+    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
+    {
+        /*
+         * The domain can't possibly know about this page yet, so failure
+         * here is a clear indication of something fishy going on.
+         */
+        domain_crash(s->emulator);
+        return -ENODATA;
+    }
+
+    iorp->va = __map_domain_page_global(page);
+    if ( !iorp->va )
+        goto fail;
+
+    iorp->page = page;
+    clear_page(iorp->va);
+    return 0;
+
+ fail:
+    put_page_alloc_ref(page);
+    put_page_and_type(page);
+
+    return -ENOMEM;
+}
+
+static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct page_info *page = iorp->page;
+
+    if ( !page )
+        return;
+
+    iorp->page = NULL;
+
+    unmap_domain_page_global(iorp->va);
+    iorp->va = NULL;
+
+    put_page_alloc_ref(page);
+    put_page_and_type(page);
+}
+
+bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
+{
+    const struct hvm_ioreq_server *s;
+    unsigned int id;
+    bool found = false;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        if ( (s->ioreq.page == page) || (s->bufioreq.page == page) )
+        {
+            found = true;
+            break;
+        }
+    }
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return found;
+}
+
+static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+
+{
+    struct domain *d = s->target;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+
+    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+        return;
+
+    if ( guest_physmap_remove_page(d, iorp->gfn,
+                                   page_to_mfn(iorp->page), 0) )
+        domain_crash(d);
+    clear_page(iorp->va);
+}
+
+static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+{
+    struct domain *d = s->target;
+    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    int rc;
+
+    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
+        return 0;
+
+    clear_page(iorp->va);
+
+    rc = guest_physmap_add_page(d, iorp->gfn,
+                                page_to_mfn(iorp->page), 0);
+    if ( rc == 0 )
+        paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
+
+    return rc;
+}
+
+static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
+                                    struct hvm_ioreq_vcpu *sv)
+{
+    ASSERT(spin_is_locked(&s->lock));
+
+    if ( s->ioreq.va != NULL )
+    {
+        ioreq_t *p = get_ioreq(s, sv->vcpu);
+
+        p->vp_eport = sv->ioreq_evtchn;
+    }
+}
+
+#define HANDLE_BUFIOREQ(s) \
+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+
+static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
+                                     struct vcpu *v)
+{
+    struct hvm_ioreq_vcpu *sv;
+    int rc;
+
+    sv = xzalloc(struct hvm_ioreq_vcpu);
+
+    rc = -ENOMEM;
+    if ( !sv )
+        goto fail1;
+
+    spin_lock(&s->lock);
+
+    rc = alloc_unbound_xen_event_channel(v->domain, v->vcpu_id,
+                                         s->emulator->domain_id, NULL);
+    if ( rc < 0 )
+        goto fail2;
+
+    sv->ioreq_evtchn = rc;
+
+    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
+    {
+        rc = alloc_unbound_xen_event_channel(v->domain, 0,
+                                             s->emulator->domain_id, NULL);
+        if ( rc < 0 )
+            goto fail3;
+
+        s->bufioreq_evtchn = rc;
+    }
+
+    sv->vcpu = v;
+
+    list_add(&sv->list_entry, &s->ioreq_vcpu_list);
+
+    if ( s->enabled )
+        hvm_update_ioreq_evtchn(s, sv);
+
+    spin_unlock(&s->lock);
+    return 0;
+
+ fail3:
+    free_xen_event_channel(v->domain, sv->ioreq_evtchn);
+
+ fail2:
+    spin_unlock(&s->lock);
+    xfree(sv);
+
+ fail1:
+    return rc;
+}
+
+static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
+                                         struct vcpu *v)
+{
+    struct hvm_ioreq_vcpu *sv;
+
+    spin_lock(&s->lock);
+
+    list_for_each_entry ( sv,
+                          &s->ioreq_vcpu_list,
+                          list_entry )
+    {
+        if ( sv->vcpu != v )
+            continue;
+
+        list_del(&sv->list_entry);
+
+        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
+            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
+
+        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
+
+        xfree(sv);
+        break;
+    }
+
+    spin_unlock(&s->lock);
+}
+
+static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
+{
+    struct hvm_ioreq_vcpu *sv, *next;
+
+    spin_lock(&s->lock);
+
+    list_for_each_entry_safe ( sv,
+                               next,
+                               &s->ioreq_vcpu_list,
+                               list_entry )
+    {
+        struct vcpu *v = sv->vcpu;
+
+        list_del(&sv->list_entry);
+
+        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
+            free_xen_event_channel(v->domain, s->bufioreq_evtchn);
+
+        free_xen_event_channel(v->domain, sv->ioreq_evtchn);
+
+        xfree(sv);
+    }
+
+    spin_unlock(&s->lock);
+}
+
+static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+{
+    int rc;
+
+    rc = hvm_map_ioreq_gfn(s, false);
+
+    if ( !rc && HANDLE_BUFIOREQ(s) )
+        rc = hvm_map_ioreq_gfn(s, true);
+
+    if ( rc )
+        hvm_unmap_ioreq_gfn(s, false);
+
+    return rc;
+}
+
+static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+{
+    hvm_unmap_ioreq_gfn(s, true);
+    hvm_unmap_ioreq_gfn(s, false);
+}
+
+static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+{
+    int rc;
+
+    rc = hvm_alloc_ioreq_mfn(s, false);
+
+    if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
+        rc = hvm_alloc_ioreq_mfn(s, true);
+
+    if ( rc )
+        hvm_free_ioreq_mfn(s, false);
+
+    return rc;
+}
+
+static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+{
+    hvm_free_ioreq_mfn(s, true);
+    hvm_free_ioreq_mfn(s, false);
+}
+
+static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
+{
+    unsigned int i;
+
+    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
+        rangeset_destroy(s->range[i]);
+}
+
+static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
+                                            ioservid_t id)
+{
+    unsigned int i;
+    int rc;
+
+    for ( i = 0; i < NR_IO_RANGE_TYPES; i++ )
+    {
+        char *name;
+
+        rc = asprintf(&name, "ioreq_server %d %s", id,
+                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
+                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
+                      (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" :
+                      "");
+        if ( rc )
+            goto fail;
+
+        s->range[i] = rangeset_new(s->target, name,
+                                   RANGESETF_prettyprint_hex);
+
+        xfree(name);
+
+        rc = -ENOMEM;
+        if ( !s->range[i] )
+            goto fail;
+
+        rangeset_limit(s->range[i], MAX_NR_IO_RANGES);
+    }
+
+    return 0;
+
+ fail:
+    hvm_ioreq_server_free_rangesets(s);
+
+    return rc;
+}
+
+static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+{
+    struct hvm_ioreq_vcpu *sv;
+
+    spin_lock(&s->lock);
+
+    if ( s->enabled )
+        goto done;
+
+    hvm_remove_ioreq_gfn(s, false);
+    hvm_remove_ioreq_gfn(s, true);
+
+    s->enabled = true;
+
+    list_for_each_entry ( sv,
+                          &s->ioreq_vcpu_list,
+                          list_entry )
+        hvm_update_ioreq_evtchn(s, sv);
+
+  done:
+    spin_unlock(&s->lock);
+}
+
+static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+{
+    spin_lock(&s->lock);
+
+    if ( !s->enabled )
+        goto done;
+
+    hvm_add_ioreq_gfn(s, true);
+    hvm_add_ioreq_gfn(s, false);
+
+    s->enabled = false;
+
+ done:
+    spin_unlock(&s->lock);
+}
+
+static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
+                                 struct domain *d, int bufioreq_handling,
+                                 ioservid_t id)
+{
+    struct domain *currd = current->domain;
+    struct vcpu *v;
+    int rc;
+
+    s->target = d;
+
+    get_knownalive_domain(currd);
+    s->emulator = currd;
+
+    spin_lock_init(&s->lock);
+    INIT_LIST_HEAD(&s->ioreq_vcpu_list);
+    spin_lock_init(&s->bufioreq_lock);
+
+    s->ioreq.gfn = INVALID_GFN;
+    s->bufioreq.gfn = INVALID_GFN;
+
+    rc = hvm_ioreq_server_alloc_rangesets(s, id);
+    if ( rc )
+        return rc;
+
+    s->bufioreq_handling = bufioreq_handling;
+
+    for_each_vcpu ( d, v )
+    {
+        rc = hvm_ioreq_server_add_vcpu(s, v);
+        if ( rc )
+            goto fail_add;
+    }
+
+    return 0;
+
+ fail_add:
+    hvm_ioreq_server_remove_all_vcpus(s);
+    hvm_ioreq_server_unmap_pages(s);
+
+    hvm_ioreq_server_free_rangesets(s);
+
+    put_domain(s->emulator);
+    return rc;
+}
+
+static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+{
+    ASSERT(!s->enabled);
+    hvm_ioreq_server_remove_all_vcpus(s);
+
+    /*
+     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
+     *       hvm_ioreq_server_free_pages() in that order.
+     *       This is because the former will do nothing if the pages
+     *       are not mapped, leaving the page to be freed by the latter.
+     *       However if the pages are mapped then the former will set
+     *       the page_info pointer to NULL, meaning the latter will do
+     *       nothing.
+     */
+    hvm_ioreq_server_unmap_pages(s);
+    hvm_ioreq_server_free_pages(s);
+
+    hvm_ioreq_server_free_rangesets(s);
+
+    put_domain(s->emulator);
+}
+
+int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
+                            ioservid_t *id)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int i;
+    int rc;
+
+    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
+        return -EINVAL;
+
+    s = xzalloc(struct hvm_ioreq_server);
+    if ( !s )
+        return -ENOMEM;
+
+    domain_pause(d);
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
+    {
+        if ( !GET_IOREQ_SERVER(d, i) )
+            break;
+    }
+
+    rc = -ENOSPC;
+    if ( i >= MAX_NR_IOREQ_SERVERS )
+        goto fail;
+
+    /*
+     * It is safe to call set_ioreq_server() prior to
+     * hvm_ioreq_server_init() since the target domain is paused.
+     */
+    set_ioreq_server(d, i, s);
+
+    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    if ( rc )
+    {
+        set_ioreq_server(d, i, NULL);
+        goto fail;
+    }
+
+    if ( id )
+        *id = i;
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    domain_unpause(d);
+
+    return 0;
+
+ fail:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    domain_unpause(d);
+
+    xfree(s);
+    return rc;
+}
+
+int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    domain_pause(d);
+
+    arch_hvm_destroy_ioreq_server(s);
+
+    hvm_ioreq_server_disable(s);
+
+    /*
+     * It is safe to call hvm_ioreq_server_deinit() prior to
+     * set_ioreq_server() since the target domain is paused.
+     */
+    hvm_ioreq_server_deinit(s);
+    set_ioreq_server(d, id, NULL);
+
+    domain_unpause(d);
+
+    xfree(s);
+
+    rc = 0;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
+                              unsigned long *ioreq_gfn,
+                              unsigned long *bufioreq_gfn,
+                              evtchn_port_t *bufioreq_port)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    if ( ioreq_gfn || bufioreq_gfn )
+    {
+        rc = hvm_ioreq_server_map_pages(s);
+        if ( rc )
+            goto out;
+    }
+
+    if ( ioreq_gfn )
+        *ioreq_gfn = gfn_x(s->ioreq.gfn);
+
+    if ( HANDLE_BUFIOREQ(s) )
+    {
+        if ( bufioreq_gfn )
+            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
+
+        if ( bufioreq_port )
+            *bufioreq_port = s->bufioreq_evtchn;
+    }
+
+    rc = 0;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    ASSERT(is_hvm_domain(d));
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    rc = hvm_ioreq_server_alloc_pages(s);
+    if ( rc )
+        goto out;
+
+    switch ( idx )
+    {
+    case XENMEM_resource_ioreq_server_frame_bufioreq:
+        rc = -ENOENT;
+        if ( !HANDLE_BUFIOREQ(s) )
+            goto out;
+
+        *mfn = page_to_mfn(s->bufioreq.page);
+        rc = 0;
+        break;
+
+    case XENMEM_resource_ioreq_server_frame_ioreq(0):
+        *mfn = page_to_mfn(s->ioreq.page);
+        rc = 0;
+        break;
+
+    default:
+        rc = -EINVAL;
+        break;
+    }
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint64_t start,
+                                     uint64_t end)
+{
+    struct hvm_ioreq_server *s;
+    struct rangeset *r;
+    int rc;
+
+    if ( start > end )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    switch ( type )
+    {
+    case XEN_DMOP_IO_RANGE_PORT:
+    case XEN_DMOP_IO_RANGE_MEMORY:
+    case XEN_DMOP_IO_RANGE_PCI:
+        r = s->range[type];
+        break;
+
+    default:
+        r = NULL;
+        break;
+    }
+
+    rc = -EINVAL;
+    if ( !r )
+        goto out;
+
+    rc = -EEXIST;
+    if ( rangeset_overlaps_range(r, start, end) )
+        goto out;
+
+    rc = rangeset_add_range(r, start, end);
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                         uint32_t type, uint64_t start,
+                                         uint64_t end)
+{
+    struct hvm_ioreq_server *s;
+    struct rangeset *r;
+    int rc;
+
+    if ( start > end )
+        return -EINVAL;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    switch ( type )
+    {
+    case XEN_DMOP_IO_RANGE_PORT:
+    case XEN_DMOP_IO_RANGE_MEMORY:
+    case XEN_DMOP_IO_RANGE_PCI:
+        r = s->range[type];
+        break;
+
+    default:
+        r = NULL;
+        break;
+    }
+
+    rc = -EINVAL;
+    if ( !r )
+        goto out;
+
+    rc = -ENOENT;
+    if ( !rangeset_contains_range(r, start, end) )
+        goto out;
+
+    rc = rangeset_remove_range(r, start, end);
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
+                               bool enabled)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    s = get_ioreq_server(d, id);
+
+    rc = -ENOENT;
+    if ( !s )
+        goto out;
+
+    rc = -EPERM;
+    if ( s->emulator != current->domain )
+        goto out;
+
+    domain_pause(d);
+
+    if ( enabled )
+        hvm_ioreq_server_enable(s);
+    else
+        hvm_ioreq_server_disable(s);
+
+    domain_unpause(d);
+
+    rc = 0;
+
+ out:
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    return rc;
+}
+
+int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int id;
+    int rc;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        rc = hvm_ioreq_server_add_vcpu(s, v);
+        if ( rc )
+            goto fail;
+    }
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return 0;
+
+ fail:
+    while ( ++id != MAX_NR_IOREQ_SERVERS )
+    {
+        s = GET_IOREQ_SERVER(d, id);
+
+        if ( !s )
+            continue;
+
+        hvm_ioreq_server_remove_vcpu(s, v);
+    }
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    return rc;
+}
+
+void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int id;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+        hvm_ioreq_server_remove_vcpu(s, v);
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+}
+
+void hvm_destroy_all_ioreq_servers(struct domain *d)
+{
+    struct hvm_ioreq_server *s;
+    unsigned int id;
+
+    if ( !arch_hvm_ioreq_destroy(d) )
+        return;
+
+    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+
+    /* No need to domain_pause() as the domain is being torn down */
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        hvm_ioreq_server_disable(s);
+
+        /*
+         * It is safe to call hvm_ioreq_server_deinit() prior to
+         * set_ioreq_server() since the target domain is being destroyed.
+         */
+        hvm_ioreq_server_deinit(s);
+        set_ioreq_server(d, id, NULL);
+
+        xfree(s);
+    }
+
+    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+}
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p)
+{
+    struct hvm_ioreq_server *s;
+    uint8_t type;
+    uint64_t addr;
+    unsigned int id;
+
+    if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) )
+        return NULL;
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        struct rangeset *r;
+
+        if ( !s->enabled )
+            continue;
+
+        r = s->range[type];
+
+        switch ( type )
+        {
+            unsigned long start, end;
+
+        case XEN_DMOP_IO_RANGE_PORT:
+            start = addr;
+            end = start + p->size - 1;
+            if ( rangeset_contains_range(r, start, end) )
+                return s;
+
+            break;
+
+        case XEN_DMOP_IO_RANGE_MEMORY:
+            start = hvm_mmio_first_byte(p);
+            end = hvm_mmio_last_byte(p);
+
+            if ( rangeset_contains_range(r, start, end) )
+                return s;
+
+            break;
+
+        case XEN_DMOP_IO_RANGE_PCI:
+            if ( rangeset_contains_singleton(r, addr >> 32) )
+            {
+                p->type = IOREQ_TYPE_PCI_CONFIG;
+                p->addr = addr;
+                return s;
+            }
+
+            break;
+        }
+    }
+
+    return NULL;
+}
+
+static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
+{
+    struct domain *d = current->domain;
+    struct hvm_ioreq_page *iorp;
+    buffered_iopage_t *pg;
+    buf_ioreq_t bp = { .data = p->data,
+                       .addr = p->addr,
+                       .type = p->type,
+                       .dir = p->dir };
+    /* Timeoffset sends 64b data, but no address. Use two consecutive slots. */
+    int qw = 0;
+
+    /* Ensure buffered_iopage fits in a page */
+    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
+
+    iorp = &s->bufioreq;
+    pg = iorp->va;
+
+    if ( !pg )
+        return IOREQ_STATUS_UNHANDLED;
+
+    /*
+     * Reject the cases we can't deal with:
+     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
+     *  - we cannot buffer accesses to guest memory buffers, as the guest
+     *    may expect the memory buffer to be synchronously accessed
+     *  - the count field is usually used with data_is_ptr and since we don't
+     *    support data_is_ptr we do not waste space for the count field either
+     */
+    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
+        return IOREQ_STATUS_UNHANDLED;
+
+    switch ( p->size )
+    {
+    case 1:
+        bp.size = 0;
+        break;
+    case 2:
+        bp.size = 1;
+        break;
+    case 4:
+        bp.size = 2;
+        break;
+    case 8:
+        bp.size = 3;
+        qw = 1;
+        break;
+    default:
+        gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
+        return IOREQ_STATUS_UNHANDLED;
+    }
+
+    spin_lock(&s->bufioreq_lock);
+
+    if ( (pg->ptrs.write_pointer - pg->ptrs.read_pointer) >=
+         (IOREQ_BUFFER_SLOT_NUM - qw) )
+    {
+        /* The queue is full: send the iopacket through the normal path. */
+        spin_unlock(&s->bufioreq_lock);
+        return IOREQ_STATUS_UNHANDLED;
+    }
+
+    pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
+
+    if ( qw )
+    {
+        bp.data = p->data >> 32;
+        pg->buf_ioreq[(pg->ptrs.write_pointer+1) % IOREQ_BUFFER_SLOT_NUM] = bp;
+    }
+
+    /* Make the buf_ioreq_t visible /before/ write_pointer. */
+    smp_wmb();
+    pg->ptrs.write_pointer += qw ? 2 : 1;
+
+    /* Canonicalize read/write pointers to prevent their overflow. */
+    while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) &&
+            qw++ < IOREQ_BUFFER_SLOT_NUM &&
+            pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
+    {
+        union bufioreq_pointers old = pg->ptrs, new;
+        unsigned int n = old.read_pointer / IOREQ_BUFFER_SLOT_NUM;
+
+        new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
+        new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
+        cmpxchg(&pg->ptrs.full, old.full, new.full);
+    }
+
+    notify_via_xen_event_channel(d, s->bufioreq_evtchn);
+    spin_unlock(&s->bufioreq_lock);
+
+    return IOREQ_STATUS_HANDLED;
+}
+
+int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+                   bool buffered)
+{
+    struct vcpu *curr = current;
+    struct domain *d = curr->domain;
+    struct hvm_ioreq_vcpu *sv;
+
+    ASSERT(s);
+
+    if ( buffered )
+        return hvm_send_buffered_ioreq(s, proto_p);
+
+    if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
+        return IOREQ_STATUS_RETRY;
+
+    list_for_each_entry ( sv,
+                          &s->ioreq_vcpu_list,
+                          list_entry )
+    {
+        if ( sv->vcpu == curr )
+        {
+            evtchn_port_t port = sv->ioreq_evtchn;
+            ioreq_t *p = get_ioreq(s, curr);
+
+            if ( unlikely(p->state != STATE_IOREQ_NONE) )
+            {
+                gprintk(XENLOG_ERR, "device model set bad IO state %d\n",
+                        p->state);
+                break;
+            }
+
+            if ( unlikely(p->vp_eport != port) )
+            {
+                gprintk(XENLOG_ERR, "device model set bad event channel %d\n",
+                        p->vp_eport);
+                break;
+            }
+
+            proto_p->state = STATE_IOREQ_NONE;
+            proto_p->vp_eport = port;
+            *p = *proto_p;
+
+            prepare_wait_on_xen_event_channel(port);
+
+            /*
+             * The following happens /after/ blocking and setting up the
+             * ioreq contents. prepare_wait_on_xen_event_channel() is an
+             * implicit barrier.
+             */
+            p->state = STATE_IOREQ_READY;
+            notify_via_xen_event_channel(d, port);
+
+            sv->pending = true;
+            return IOREQ_STATUS_RETRY;
+        }
+    }
+
+    return IOREQ_STATUS_UNHANDLED;
+}
+
+unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
+{
+    struct domain *d = current->domain;
+    struct hvm_ioreq_server *s;
+    unsigned int id, failed = 0;
+
+    FOR_EACH_IOREQ_SERVER(d, id, s)
+    {
+        if ( !s->enabled )
+            continue;
+
+        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
+            failed++;
+    }
+
+    return failed;
+}
+
+void hvm_ioreq_init(struct domain *d)
+{
+    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+
+    arch_hvm_ioreq_init(d);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 376e2ef..a3d8faa 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,14 +19,13 @@
 #ifndef __ASM_X86_HVM_IOREQ_H__
 #define __ASM_X86_HVM_IOREQ_H__
 
+#include <xen/ioreq.h>
+
 #include <asm/hvm/emulate.h>
 #include <asm/hvm/vmx/vmx.h>
 
 #include <public/hvm/params.h>
 
-struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                          unsigned int id);
-
 static inline bool arch_hvm_io_completion(enum hvm_io_completion io_completion)
 {
     switch ( io_completion )
@@ -178,40 +177,6 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
     return true;
 }
 
-bool hvm_io_pending(struct vcpu *v);
-bool handle_hvm_io_completion(struct vcpu *v);
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
-
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port);
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn);
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end);
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end);
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
-
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
-void hvm_destroy_all_ioreq_servers(struct domain *d);
-
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered);
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
-
-void hvm_ioreq_init(struct domain *d);
-
 #define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
 #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
 #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
new file mode 100644
index 0000000..6db4392
--- /dev/null
+++ b/xen/include/xen/ioreq.h
@@ -0,0 +1,71 @@
+/*
+ * ioreq.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __XEN_IOREQ_H__
+#define __XEN_IOREQ_H__
+
+#include <xen/sched.h>
+
+struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
+                                          unsigned int id);
+
+bool hvm_io_pending(struct vcpu *v);
+bool handle_hvm_io_completion(struct vcpu *v);
+bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
+
+int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
+                            ioservid_t *id);
+int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
+int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
+                              unsigned long *ioreq_gfn,
+                              unsigned long *bufioreq_gfn,
+                              evtchn_port_t *bufioreq_port);
+int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                               unsigned long idx, mfn_t *mfn);
+int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint64_t start,
+                                     uint64_t end);
+int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                         uint32_t type, uint64_t start,
+                                         uint64_t end);
+int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
+                               bool enabled);
+
+int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
+void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
+void hvm_destroy_all_ioreq_servers(struct domain *d);
+
+struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                                 ioreq_t *p);
+int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+                   bool buffered);
+unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
+
+void hvm_ioreq_init(struct domain *d);
+
+#endif /* __XEN_IOREQ_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7577.19972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6Na-0005RA-8N; Thu, 15 Oct 2020 16:45:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7577.19972; Thu, 15 Oct 2020 16:45:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6Na-0005R1-4P; Thu, 15 Oct 2020 16:45:26 +0000
Received: by outflank-mailman (input) for mailman id 7577;
 Thu, 15 Oct 2020 16:45:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6NZ-0004yr-62
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:25 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ed28b36-dc40-41de-a257-d042cfe62eca;
 Thu, 15 Oct 2020 16:44:59 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id l2so4404061lfk.0
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:44:59 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.57
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:44:57 -0700 (PDT)
X-Inumbo-ID: 8ed28b36-dc40-41de-a257-d042cfe62eca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=iOP5c373gWXxnyEQtQ6/39WWsVy1IddOyO5zW1UbPhE=;
        b=kFavmZh0ZbnHd5xGoKY+YOWu68vMxGRVt2WafKGrCJngOb6uDjR/OMq8tjaaYgefme
         okFPYz4qFaKdQXGP1H1ZMrwxVfS6tDCbNnxs4abGaoRRzyo7JJSGlQX9hTZwzp5c8Sqe
         hqi+5eF8BMvKFDAM7UQ/ccFiwUWkzbOD6RRKWwTJdeZqiki5sOUEcjbLn1yCck9e4Ng9
         McW0wAD7DFOMr3ZN1/1cbgdMlH1lY9Vn4TeCZ33ub0RzYWKmVwpzI8gzGFdZ1Ddyut7V
         Pu0OTUXIemWydghO53ERVwhEJbdQooYP6gWwlfdffuRIEO/HamwQ4Wh1j7iTnx0CqUyI
         t/eA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=iOP5c373gWXxnyEQtQ6/39WWsVy1IddOyO5zW1UbPhE=;
        b=oglCZ/u/TmxiZbg8cjRj4MTvLeMp7ChN5A+vQOZGZsclcGPs4igeeFAlWSFPiYkXl/
         155XTF5ov8hHkOt6vaur7VYD47WyWjE5kKRUh7g732po41xx7I7Qj04NhkaFnhqSR6Ka
         S9eZ45YXlENuKTONL1BMBHKFBOxAbdc5JsxlGQvzyOs+VO7n60XVCEL3KKIfwVVBDTl7
         9QZBIceP/Huai7CV8adVsaxJMguWObpt99AwRFXP3NERto5oCXNLCaLjKrR6IEWKcgjS
         WvuXu3N82RY7Xy3fR+mO0UwxoiO7EBCcFEDDvBUfeO0dSULezYMo/8wveVJodGPl5sNt
         fmNQ==
X-Gm-Message-State: AOAM531R6cRg+UK8OdoW8tYRx0jF9STivDPxWC0htabVz3QW00WwrALx
	zha9qjqpDOJ8UnPkKICAkvKYf5GhXaBUbQ==
X-Google-Smtp-Source: ABdhPJwz+YDHJwfme7KDVmcq0ojADYmPGfgSySABhBOm8lRi65o2wRiQCuvgaOPCZMbIOk8CfIv52A==
X-Received: by 2002:a19:ee12:: with SMTP id g18mr1526973lfb.515.1602780298323;
        Thu, 15 Oct 2020 09:44:58 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 06/23] xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
Date: Thu, 15 Oct 2020 19:44:17 +0300
Message-Id: <1602780274-29141-7-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is now a common feature and these structs will be used
on Arm as-is. Move them to xen/ioreq.h and drop the "hvm" prefixes.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - remove "hvm" prefix
---
 xen/arch/x86/hvm/emulate.c       |   2 +-
 xen/arch/x86/hvm/stdvga.c        |   2 +-
 xen/arch/x86/mm/p2m.c            |   8 +--
 xen/common/ioreq.c               | 134 +++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h |  36 +----------
 xen/include/asm-x86/hvm/ioreq.h  |   4 +-
 xen/include/asm-x86/p2m.h        |   8 +--
 xen/include/xen/ioreq.h          |  44 +++++++++++--
 8 files changed, 119 insertions(+), 119 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 5700274..4746d5a 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -287,7 +287,7 @@ static int hvmemul_do_io(
          * However, there's no cheap approach to avoid above situations in xen,
          * so the device model side needs to check the incoming ioreq event.
          */
-        struct hvm_ioreq_server *s = NULL;
+        struct ioreq_server *s = NULL;
         p2m_type_t p2mt = p2m_invalid;
 
         if ( is_mmio )
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index e184664..bafb3f6 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -466,7 +466,7 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
         .dir = IOREQ_WRITE,
         .data = data,
     };
-    struct hvm_ioreq_server *srv;
+    struct ioreq_server *srv;
 
     if ( !stdvga_cache_is_enabled(s) || !s->stdvga )
         goto done;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 928344b..6102771 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -366,7 +366,7 @@ void p2m_memory_type_changed(struct domain *d)
 
 int p2m_set_ioreq_server(struct domain *d,
                          unsigned int flags,
-                         struct hvm_ioreq_server *s)
+                         struct ioreq_server *s)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int rc;
@@ -414,11 +414,11 @@ int p2m_set_ioreq_server(struct domain *d,
     return rc;
 }
 
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags)
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
 
     spin_lock(&p2m->ioreq.lock);
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 5fa10b6..1d62d13 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -34,7 +34,7 @@
 #include <public/hvm/params.h>
 
 static void set_ioreq_server(struct domain *d, unsigned int id,
-                             struct hvm_ioreq_server *s)
+                             struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
     ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
@@ -45,8 +45,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
 #define GET_IOREQ_SERVER(d, id) \
     (d)->arch.hvm.ioreq_server.server[id]
 
-struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                          unsigned int id)
+struct ioreq_server *get_ioreq_server(const struct domain *d,
+                                      unsigned int id)
 {
     if ( id >= MAX_NR_IOREQ_SERVERS )
         return NULL;
@@ -68,7 +68,7 @@ struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
             continue; \
         else
 
-static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
+static ioreq_t *get_ioreq(struct ioreq_server *s, struct vcpu *v)
 {
     shared_iopage_t *p = s->ioreq.va;
 
@@ -78,16 +78,16 @@ static ioreq_t *get_ioreq(struct hvm_ioreq_server *s, struct vcpu *v)
     return &p->vcpu_ioreq[v->vcpu_id];
 }
 
-static struct hvm_ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
-                                               struct hvm_ioreq_server **srvp)
+static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
+                                           struct ioreq_server **srvp)
 {
     struct domain *d = v->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        struct hvm_ioreq_vcpu *sv;
+        struct ioreq_vcpu *sv;
 
         list_for_each_entry ( sv,
                               &s->ioreq_vcpu_list,
@@ -110,7 +110,7 @@ bool hvm_io_pending(struct vcpu *v)
     return get_pending_vcpu(v, NULL);
 }
 
-static bool hvm_wait_for_io(struct hvm_ioreq_vcpu *sv, ioreq_t *p)
+static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -171,8 +171,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
-    struct hvm_ioreq_server *s;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_server *s;
+    struct ioreq_vcpu *sv;
     enum hvm_io_completion io_completion;
 
     if ( has_vpci(d) && vpci_process_pending(v) )
@@ -213,7 +213,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     return true;
 }
 
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -229,7 +229,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct hvm_ioreq_server *s)
     return INVALID_GFN;
 }
 
-static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
+static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -247,7 +247,7 @@ static gfn_t hvm_alloc_ioreq_gfn(struct hvm_ioreq_server *s)
     return hvm_alloc_legacy_ioreq_gfn(s);
 }
 
-static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
+static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
                                       gfn_t gfn)
 {
     struct domain *d = s->target;
@@ -265,7 +265,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct hvm_ioreq_server *s,
     return true;
 }
 
-static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
+static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
@@ -279,9 +279,9 @@ static void hvm_free_ioreq_gfn(struct hvm_ioreq_server *s, gfn_t gfn)
     }
 }
 
-static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -293,10 +293,10 @@ static void hvm_unmap_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     iorp->gfn = INVALID_GFN;
 }
 
-static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
     if ( iorp->page )
@@ -329,9 +329,9 @@ static int hvm_map_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
 
     if ( iorp->page )
@@ -377,9 +377,9 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
     return -ENOMEM;
 }
 
-static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page = iorp->page;
 
     if ( !page )
@@ -396,7 +396,7 @@ static void hvm_free_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
 
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
 {
-    const struct hvm_ioreq_server *s;
+    const struct ioreq_server *s;
     unsigned int id;
     bool found = false;
 
@@ -416,11 +416,11 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
 
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return;
@@ -431,10 +431,10 @@ static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
+static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
-    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
+    struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     int rc;
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -450,8 +450,8 @@ static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
     return rc;
 }
 
-static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
-                                    struct hvm_ioreq_vcpu *sv)
+static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
+                                    struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));
 
@@ -466,13 +466,13 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
 
-static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
                                      struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
     int rc;
 
-    sv = xzalloc(struct hvm_ioreq_vcpu);
+    sv = xzalloc(struct ioreq_vcpu);
 
     rc = -ENOMEM;
     if ( !sv )
@@ -518,10 +518,10 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
+static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
                                          struct vcpu *v)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     spin_lock(&s->lock);
 
@@ -546,9 +546,9 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv, *next;
+    struct ioreq_vcpu *sv, *next;
 
     spin_lock(&s->lock);
 
@@ -572,7 +572,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
+static int hvm_ioreq_server_map_pages(struct ioreq_server *s)
 {
     int rc;
 
@@ -587,13 +587,13 @@ static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_unmap_pages(struct ioreq_server *s)
 {
     hvm_unmap_ioreq_gfn(s, true);
     hvm_unmap_ioreq_gfn(s, false);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
+static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;
 
@@ -608,13 +608,13 @@ static int hvm_ioreq_server_alloc_pages(struct hvm_ioreq_server *s)
     return rc;
 }
 
-static void hvm_ioreq_server_free_pages(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
 {
     hvm_free_ioreq_mfn(s, true);
     hvm_free_ioreq_mfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;
 
@@ -622,7 +622,7 @@ static void hvm_ioreq_server_free_rangesets(struct hvm_ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
                                             ioservid_t id)
 {
     unsigned int i;
@@ -660,9 +660,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_enable(struct ioreq_server *s)
 {
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     spin_lock(&s->lock);
 
@@ -683,7 +683,7 @@ static void hvm_ioreq_server_enable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);
 
@@ -699,7 +699,7 @@ static void hvm_ioreq_server_disable(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
+static int hvm_ioreq_server_init(struct ioreq_server *s,
                                  struct domain *d, int bufioreq_handling,
                                  ioservid_t id)
 {
@@ -744,7 +744,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
+static void hvm_ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
     hvm_ioreq_server_remove_all_vcpus(s);
@@ -769,14 +769,14 @@ static void hvm_ioreq_server_deinit(struct hvm_ioreq_server *s)
 int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
                             ioservid_t *id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int i;
     int rc;
 
     if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
         return -EINVAL;
 
-    s = xzalloc(struct hvm_ioreq_server);
+    s = xzalloc(struct ioreq_server);
     if ( !s )
         return -ENOMEM;
 
@@ -824,7 +824,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
 
 int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -869,7 +869,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
                               unsigned long *bufioreq_gfn,
                               evtchn_port_t *bufioreq_port)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -914,7 +914,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
                                unsigned long idx, mfn_t *mfn)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     ASSERT(is_hvm_domain(d));
@@ -966,7 +966,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;
 
@@ -1018,7 +1018,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
                                          uint64_t end)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     struct rangeset *r;
     int rc;
 
@@ -1069,7 +1069,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool enabled)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1102,7 +1102,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
 
 int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
     int rc;
 
@@ -1137,7 +1137,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
@@ -1150,7 +1150,7 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id;
 
     if ( !arch_hvm_ioreq_destroy(d) )
@@ -1177,10 +1177,10 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
 }
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p)
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     uint8_t type;
     uint64_t addr;
     unsigned int id;
@@ -1233,10 +1233,10 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     return NULL;
 }
 
-static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
+static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_page *iorp;
+    struct ioreq_page *iorp;
     buffered_iopage_t *pg;
     buf_ioreq_t bp = { .data = p->data,
                        .addr = p->addr,
@@ -1326,12 +1326,12 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     return IOREQ_STATUS_HANDLED;
 }
 
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
-    struct hvm_ioreq_vcpu *sv;
+    struct ioreq_vcpu *sv;
 
     ASSERT(s);
 
@@ -1389,7 +1389,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     unsigned int id, failed = 0;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 9d247ba..3b36c2f 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -30,40 +30,6 @@
 
 #include <public/hvm/dm_op.h>
 
-struct hvm_ioreq_page {
-    gfn_t gfn;
-    struct page_info *page;
-    void *va;
-};
-
-struct hvm_ioreq_vcpu {
-    struct list_head list_entry;
-    struct vcpu      *vcpu;
-    evtchn_port_t    ioreq_evtchn;
-    bool             pending;
-};
-
-#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES  256
-
-struct hvm_ioreq_server {
-    struct domain          *target, *emulator;
-
-    /* Lock to serialize toolstack modifications */
-    spinlock_t             lock;
-
-    struct hvm_ioreq_page  ioreq;
-    struct list_head       ioreq_vcpu_list;
-    struct hvm_ioreq_page  bufioreq;
-
-    /* Lock to serialize access to buffered ioreq ring */
-    spinlock_t             bufioreq_lock;
-    evtchn_port_t          bufioreq_evtchn;
-    struct rangeset        *range[NR_IO_RANGE_TYPES];
-    bool                   enabled;
-    uint8_t                bufioreq_handling;
-};
-
 #ifdef CONFIG_MEM_SHARING
 struct mem_sharing_domain
 {
@@ -110,7 +76,7 @@ struct hvm_domain {
     /* Lock protects all other values in the sub-struct and the default */
     struct {
         spinlock_t              lock;
-        struct hvm_ioreq_server *server[MAX_NR_IOREQ_SERVERS];
+        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
     } ioreq_server;
 
     /* Cached CF8 for guest PCI config cycles */
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index a147856..d2d64a8 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -50,7 +50,7 @@ static inline bool arch_hvm_io_completion(enum hvm_io_completion io_completion)
 }
 
 /* Called when target domain is paused */
-static inline void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
+static inline void arch_hvm_destroy_ioreq_server(struct ioreq_server *s)
 {
     p2m_set_ioreq_server(s->target, 0, s);
 }
@@ -68,7 +68,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
                                                    uint32_t type,
                                                    uint32_t flags)
 {
-    struct hvm_ioreq_server *s;
+    struct ioreq_server *s;
     int rc;
 
     if ( type != HVMMEM_ioreq_server )
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 8abae34..5f7ba31 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -350,7 +350,7 @@ struct p2m_domain {
           * ioreq server who's responsible for the emulation of
           * gfns with specific p2m type(for now, p2m_ioreq_server).
           */
-         struct hvm_ioreq_server *server;
+         struct ioreq_server *server;
          /*
           * flags specifies whether read, write or both operations
           * are to be emulated by an ioreq server.
@@ -933,9 +933,9 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt, mfn_t mfn)
 }
 
 int p2m_set_ioreq_server(struct domain *d, unsigned int flags,
-                         struct hvm_ioreq_server *s);
-struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
-                                              unsigned int *flags);
+                         struct ioreq_server *s);
+struct ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                          unsigned int *flags);
 
 static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
                                    p2m_type_t ot, mfn_t nfn, mfn_t ofn,
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 768ac94..8451866 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -21,8 +21,42 @@
 
 #include <xen/sched.h>
 
-struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
-                                          unsigned int id);
+struct ioreq_page {
+    gfn_t gfn;
+    struct page_info *page;
+    void *va;
+};
+
+struct ioreq_vcpu {
+    struct list_head list_entry;
+    struct vcpu      *vcpu;
+    evtchn_port_t    ioreq_evtchn;
+    bool             pending;
+};
+
+#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
+#define MAX_NR_IO_RANGES  256
+
+struct ioreq_server {
+    struct domain          *target, *emulator;
+
+    /* Lock to serialize toolstack modifications */
+    spinlock_t             lock;
+
+    struct ioreq_page      ioreq;
+    struct list_head       ioreq_vcpu_list;
+    struct ioreq_page      bufioreq;
+
+    /* Lock to serialize access to buffered ioreq ring */
+    spinlock_t             bufioreq_lock;
+    evtchn_port_t          bufioreq_evtchn;
+    struct rangeset        *range[NR_IO_RANGE_TYPES];
+    bool                   enabled;
+    uint8_t                bufioreq_handling;
+};
+
+struct ioreq_server *get_ioreq_server(const struct domain *d,
+                                      unsigned int id);
 
 static inline paddr_t ioreq_mmio_first_byte(const ioreq_t *p)
 {
@@ -73,9 +107,9 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
 void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
 void hvm_destroy_all_ioreq_servers(struct domain *d);
 
-struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                                 ioreq_t *p);
-int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
+struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
+                                             ioreq_t *p);
+int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                    bool buffered);
 unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7579.19985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6Nf-0005XS-K6; Thu, 15 Oct 2020 16:45:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7579.19985; Thu, 15 Oct 2020 16:45:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6Nf-0005XK-Fh; Thu, 15 Oct 2020 16:45:31 +0000
Received: by outflank-mailman (input) for mailman id 7579;
 Thu, 15 Oct 2020 16:45:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Ne-0004yr-64
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:30 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c3f3660-c6ec-4475-97e7-0649efa3f0a1;
 Thu, 15 Oct 2020 16:45:00 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id f29so3870725ljo.3
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:00 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.58
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:44:58 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kT6Ne-0004yr-64
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:30 +0000
X-Inumbo-ID: 7c3f3660-c6ec-4475-97e7-0649efa3f0a1
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7c3f3660-c6ec-4475-97e7-0649efa3f0a1;
	Thu, 15 Oct 2020 16:45:00 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id f29so3870725ljo.3
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=/4N5432elpoCFCTQOFacUfSDjBxB67QedZM20URUDaY=;
        b=eXOMmNq1BoDmsLtsgbPKkshMqC5uNq7eP/4RsyTKNtjGB2u1Onj9Rvy9osZjcYZ6lG
         X8V6fniQ0m9KsZbJ/ohQFnFz55Ot8p7JUpBkpFlC9P+lFK+ndFFktGZorKuy1wW5kcvD
         Z836JZ3kv2CRw2dWI7IMc7hHmXu35RAJUMsUsejuARTeCgJbhrXwbfJDIocfGj5oBuDk
         rcvMeLf65qxUwC7pGZxFfyu9C1bOpTBRvjDng4YQPPViDIOB6i3Jnoixdu11en9Lv1SQ
         1TpqrJde0BzbzuzvtPVbIIvL2iDGFI+NpWWOaMZxcNVefkqqIZPkpk7ZkMEBNinQx3Iq
         SrIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=/4N5432elpoCFCTQOFacUfSDjBxB67QedZM20URUDaY=;
        b=OY3VyiqYSYdPIGs7UVg8fb+USTLowCstxJrNRtjf6aduqyw3ue6b28dDZ5Ys4xkUOd
         SfUrcyO8fXAylVFsI9pLqi8umlRO5RH4YrqSxzV081MyJHZMJ2oZEG4k231lVg6cmrB2
         LdAKa4gieQ6v3xagCeyfWNQPEarCsb7H1/P2sIv6c4uHp0U37lbEVHOXrqZcb3QB/acW
         mwkDLFmPMNb2O+V533u3C+wetDAJfps7lq68XDtZcDycqHQcgdYRf+3vNTCvpM19THbA
         cHAgRoi60qt/fYDP+u5LzugNyS/L8yfQ7qoYx83Ddw7L9c2m0IJIGZkgzk2IGjt26kaQ
         rWDg==
X-Gm-Message-State: AOAM530qlLalFOnfPDdH5FX8jp7mxJowCLG/mO3pHswCEIWYIE74UQk2
	2a24sd1n25kkQaPxCFt3kC73QYIcy6+AAQ==
X-Google-Smtp-Source: ABdhPJyGbG38G+DinLf0m/SSKtu4fphOzSZb8YAqDDXWMGNemxvpmbSJY6E4O5YzMID/CuL1sbHY2A==
X-Received: by 2002:a2e:b557:: with SMTP id a23mr1746654ljn.5.1602780299477;
        Thu, 15 Oct 2020 09:44:59 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.58
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:44:58 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 07/23] xen/ioreq: Move x86's ioreq_gfn(server) to struct domain
Date: Thu, 15 Oct 2020 19:44:18 +0300
Message-Id: <1602780274-29141-8-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is a common feature now and these structs will be used
on Arm as-is. Move them to the common struct domain. This also
significantly reduces the layering violation in the common code
(*arch.hvm* usage).
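For illustration, the effect of the move can be sketched with a simplified
standalone mock (the member names mirror the patch, but these are not the
real Xen definitions; the lock and most fields are omitted): common code
now reaches d->ioreq_server directly, with no arch.hvm detour.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_NR_IOREQ_SERVERS 8

struct ioreq_server { int id; };

/* Mock of struct domain after the move: the ioreq fields live in the
 * common struct, so common code no longer dereferences arch.hvm. */
struct domain {
    struct {
        unsigned long base;
        unsigned long mask;        /* indexed by GFN minus base */
        unsigned long legacy_mask; /* indexed by HVM param number */
    } ioreq_gfn;

    struct {
        /* spinlock omitted in this mock */
        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
    } ioreq_server;
};

/* Mirrors the common-code accessor pattern from xen/common/ioreq.c. */
#define GET_IOREQ_SERVER(d, id) ((d)->ioreq_server.server[id])

static struct ioreq_server *get_ioreq_server(const struct domain *d,
                                             unsigned int id)
{
    return id < MAX_NR_IOREQ_SERVERS ? GET_IOREQ_SERVER(d, id) : NULL;
}
```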

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch
---
 xen/arch/x86/hvm/hvm.c           | 12 +++----
 xen/common/ioreq.c               | 72 ++++++++++++++++++++--------------------
 xen/include/asm-x86/hvm/domain.h | 15 ---------
 xen/include/asm-x86/hvm/ioreq.h  |  4 +--
 xen/include/xen/sched.h          | 17 ++++++++++
 5 files changed, 61 insertions(+), 59 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 54e32e4..20376ce 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4218,20 +4218,20 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
             rc = -EINVAL;
         break;
     case HVM_PARAM_IOREQ_SERVER_PFN:
-        d->arch.hvm.ioreq_gfn.base = value;
+        d->ioreq_gfn.base = value;
         break;
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     {
         unsigned int i;
 
         if ( value == 0 ||
-             value > sizeof(d->arch.hvm.ioreq_gfn.mask) * 8 )
+             value > sizeof(d->ioreq_gfn.mask) * 8 )
         {
             rc = -EINVAL;
             break;
         }
         for ( i = 0; i < value; i++ )
-            set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
+            set_bit(i, &d->ioreq_gfn.mask);
 
         break;
     }
@@ -4239,11 +4239,11 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
     case HVM_PARAM_IOREQ_PFN:
     case HVM_PARAM_BUFIOREQ_PFN:
         BUILD_BUG_ON(HVM_PARAM_IOREQ_PFN >
-                     sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
+                     sizeof(d->ioreq_gfn.legacy_mask) * 8);
         BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN >
-                     sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
+                     sizeof(d->ioreq_gfn.legacy_mask) * 8);
         if ( value )
-            set_bit(index, &d->arch.hvm.ioreq_gfn.legacy_mask);
+            set_bit(index, &d->ioreq_gfn.legacy_mask);
         break;
 
     case HVM_PARAM_X87_FIP_WIDTH:
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 1d62d13..7f91bc2 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -37,13 +37,13 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->arch.hvm.ioreq_server.server[id]);
+    ASSERT(!s || !d->ioreq_server.server[id]);
 
-    d->arch.hvm.ioreq_server.server[id] = s;
+    d->ioreq_server.server[id] = s;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
-    (d)->arch.hvm.ioreq_server.server[id]
+    (d)->ioreq_server.server[id]
 
 struct ioreq_server *get_ioreq_server(const struct domain *d,
                                       unsigned int id)
@@ -222,7 +222,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 
     for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
     {
-        if ( !test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask) )
+        if ( !test_and_clear_bit(i, &d->ioreq_gfn.legacy_mask) )
             return _gfn(d->arch.hvm.params[i]);
     }
 
@@ -234,10 +234,10 @@ static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
     struct domain *d = s->target;
     unsigned int i;
 
-    for ( i = 0; i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8; i++ )
+    for ( i = 0; i < sizeof(d->ioreq_gfn.mask) * 8; i++ )
     {
-        if ( test_and_clear_bit(i, &d->arch.hvm.ioreq_gfn.mask) )
-            return _gfn(d->arch.hvm.ioreq_gfn.base + i);
+        if ( test_and_clear_bit(i, &d->ioreq_gfn.mask) )
+            return _gfn(d->ioreq_gfn.base + i);
     }
 
     /*
@@ -261,21 +261,21 @@ static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
     if ( i > HVM_PARAM_BUFIOREQ_PFN )
         return false;
 
-    set_bit(i, &d->arch.hvm.ioreq_gfn.legacy_mask);
+    set_bit(i, &d->ioreq_gfn.legacy_mask);
     return true;
 }
 
 static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
-    unsigned int i = gfn_x(gfn) - d->arch.hvm.ioreq_gfn.base;
+    unsigned int i = gfn_x(gfn) - d->ioreq_gfn.base;
 
     ASSERT(!gfn_eq(gfn, INVALID_GFN));
 
     if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
     {
-        ASSERT(i < sizeof(d->arch.hvm.ioreq_gfn.mask) * 8);
-        set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
+        ASSERT(i < sizeof(d->ioreq_gfn.mask) * 8);
+        set_bit(i, &d->ioreq_gfn.mask);
     }
 }
 
@@ -400,7 +400,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     unsigned int id;
     bool found = false;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -411,7 +411,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
         }
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return found;
 }
@@ -781,7 +781,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
         return -ENOMEM;
 
     domain_pause(d);
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     for ( i = 0; i < MAX_NR_IOREQ_SERVERS; i++ )
     {
@@ -809,13 +809,13 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     if ( id )
         *id = i;
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     return 0;
 
  fail:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     domain_unpause(d);
 
     xfree(s);
@@ -827,7 +827,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -859,7 +859,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -872,7 +872,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -906,7 +906,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -919,7 +919,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
 
     ASSERT(is_hvm_domain(d));
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -957,7 +957,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     }
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -973,7 +973,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1009,7 +1009,7 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_add_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -1025,7 +1025,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     if ( start > end )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1061,7 +1061,7 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     rc = rangeset_remove_range(r, start, end);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -1072,7 +1072,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     struct ioreq_server *s;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -1096,7 +1096,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     rc = 0;
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
     return rc;
 }
 
@@ -1106,7 +1106,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     unsigned int id;
     int rc;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
@@ -1115,7 +1115,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
             goto fail;
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return 0;
 
@@ -1130,7 +1130,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         hvm_ioreq_server_remove_vcpu(s, v);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     return rc;
 }
@@ -1140,12 +1140,12 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     struct ioreq_server *s;
     unsigned int id;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
         hvm_ioreq_server_remove_vcpu(s, v);
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 void hvm_destroy_all_ioreq_servers(struct domain *d)
@@ -1156,7 +1156,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     if ( !arch_hvm_ioreq_destroy(d) )
         return;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     /* No need to domain_pause() as the domain is being torn down */
 
@@ -1174,7 +1174,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
         xfree(s);
     }
 
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
 struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
@@ -1406,7 +1406,7 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
 
 void hvm_ioreq_init(struct domain *d)
 {
-    spin_lock_init(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_init(&d->ioreq_server.lock);
 
     arch_hvm_ioreq_init(d);
 }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 3b36c2f..5d60737 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -63,22 +63,7 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };
 
-#define MAX_NR_IOREQ_SERVERS 8
-
 struct hvm_domain {
-    /* Guest page range used for non-default ioreq servers */
-    struct {
-        unsigned long base;
-        unsigned long mask; /* indexed by GFN minus base */
-        unsigned long legacy_mask; /* indexed by HVM param number */
-    } ioreq_gfn;
-
-    /* Lock protects all other values in the sub-struct and the default */
-    struct {
-        spinlock_t              lock;
-        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
-    } ioreq_server;
-
     /* Cached CF8 for guest PCI config cycles */
     uint32_t                pci_cf8;
 
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index d2d64a8..0fccac5 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -77,7 +77,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
     if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
         return -EINVAL;
 
-    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_lock_recursive(&d->ioreq_server.lock);
 
     s = get_ioreq_server(d, id);
 
@@ -92,7 +92,7 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
     rc = p2m_set_ioreq_server(d, flags, s);
 
  out:
-    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
+    spin_unlock_recursive(&d->ioreq_server.lock);
 
     if ( rc == 0 && flags == 0 )
     {
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f..78761cd 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -314,6 +314,8 @@ struct sched_unit {
 
 struct evtchn_port_ops;
 
+#define MAX_NR_IOREQ_SERVERS 8
+
 struct domain
 {
     domid_t          domain_id;
@@ -521,6 +523,21 @@ struct domain
     /* Argo interdomain communication support */
     struct argo_domain *argo;
 #endif
+
+#ifdef CONFIG_IOREQ_SERVER
+    /* Guest page range used for non-default ioreq servers */
+    struct {
+        unsigned long base;
+        unsigned long mask;
+        unsigned long legacy_mask; /* indexed by HVM param number */
+    } ioreq_gfn;
+
+    /* Lock protects all other values in the sub-struct and the default */
+    struct {
+        spinlock_t              lock;
+        struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
+    } ioreq_server;
+#endif
 };
 
 static inline struct page_list_head *page_to_list(
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7581.19997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6Nk-0005dE-4h; Thu, 15 Oct 2020 16:45:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7581.19997; Thu, 15 Oct 2020 16:45:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6Nk-0005d3-0U; Thu, 15 Oct 2020 16:45:36 +0000
Received: by outflank-mailman (input) for mailman id 7581;
 Thu, 15 Oct 2020 16:45:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Nj-0004yr-6K
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:35 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0db6749-9135-4c9b-88f8-d16a1f677e15;
 Thu, 15 Oct 2020 16:45:01 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id l28so4333489lfp.10
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:01 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:00 -0700 (PDT)
X-Inumbo-ID: f0db6749-9135-4c9b-88f8-d16a1f677e15
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=GUyEC+XSJJjDXTlfW49njL99iLeUHfZbzIxxr16Dn/Y=;
        b=mGoG7bqbIqXobTIQXqflKsraIQMcw+xybhJBtBKRipIqp3JjfVkDdIdquNTLmsFEVZ
         OX7u4ex9p0jTYLC4iZ1dTN7NgvLksWZR3DoBBM2MS3A6CptFuxxUYA9RhCRBjYxMXwfo
         v8pO7kvRRtFMBXlAZMbP11BSkHUqwSRX6US2QxneON1YAErbn3cnh1RYsd1tIPZ0+0C0
         WTO8Czv4/4UJWW5GnBoUY7lqQpE1+6kcXjzzkpI+Uyy6epzfihSiEdceNv/1cjcvkCJx
         eM865/w6oHdfye86xH7U9BQGcwQbWex5/uI4qLPcmaWY02CoWWyBnPykIV3sbHcGLOcm
         KAeQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=GUyEC+XSJJjDXTlfW49njL99iLeUHfZbzIxxr16Dn/Y=;
        b=foaohpTpSBzBnXrPF28ohgmSe93DKY/K76ijTC9HERpA2av1yQ2QrXOSlkWSXO6fVc
         /qGr1L0lHKcuOQ/WaFHjeSFscXsg+eR84Rhlq+JLamqcheUFh0rLq1t/bBmqcMgiX3lR
         6EtkZ+R0ywV7jsD+4TPM7WZcyhsiuL54NgyA2xNszxIi2g7JVgK63CaQAVDfhqRCteb9
         aaC+YBd2hXEFwuq7OTe9m1+8JaoIBc6WxdiPSBsVSQWI1OCHQ5pz81HY/ZGGWkPyJK2v
         HiPyONfwIsf2Dm1h/WPJsieYwAdlaC6kex7AppecwKuv9nKunxY2z9HOfhNO1wb97PaK
         wfOQ==
X-Gm-Message-State: AOAM530cRK511+X21Im7w53uZ+LbANA87MD2KjiATL0HRqbJxm+2JtUP
	u631liKjmQm63ULCN79k9ufndZP/VIg/4Q==
X-Google-Smtp-Source: ABdhPJyX/2+Gjm52GKB7Dk0uJInSKiMEMavXGp73CqcZgSiu2tAFiQgTMx5j4mJ6uUtSSLFeXPC4Bw==
X-Received: by 2002:a05:6512:dc:: with SMTP id c28mr1254916lfp.369.1602780300572;
        Thu, 15 Oct 2020 09:45:00 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.44.59
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:00 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 08/23] xen/ioreq: Introduce ioreq_params to abstract accesses to arch.hvm.params
Date: Thu, 15 Oct 2020 19:44:19 +0300
Message-Id: <1602780274-29141-9-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

We don't want to move the HVM params field out of *arch.hvm* in this
particular case: although it stores a few IOREQ params, it is not
(completely) IOREQ-specific and remains tied to the architecture.
Instead, abstract accesses via the proposed macro.

This is a follow-up action to reduce the layering violation in the common code.
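The idea behind the macro can be shown with a minimal standalone mock
(simplified structures, not the real Xen definitions): each architecture
defines ioreq_params() for its own storage, and common code only ever
goes through the macro, never spelling out arch.hvm.params.

```c
#include <assert.h>

#define HVM_NR_PARAMS 2

/* Mock arch-specific state: x86 keeps the HVM params under arch.hvm. */
struct arch_domain {
    struct { unsigned long params[HVM_NR_PARAMS]; } hvm;
};

struct domain {
    struct arch_domain arch;
};

/* Per-arch definition, as the patch introduces for x86 in
 * asm-x86/hvm/domain.h. */
#define ioreq_params(d, i) ((d)->arch.hvm.params[i])

/* A common-code helper can now stay arch-agnostic: it compiles against
 * whatever ioreq_params() expands to on the current architecture. */
static unsigned long get_ioreq_gfn_param(const struct domain *d,
                                         unsigned int i)
{
    return ioreq_params(d, i);
}
```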

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch
---
 xen/common/ioreq.c               | 4 ++--
 xen/include/asm-x86/hvm/domain.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 7f91bc2..a07f1d7 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -223,7 +223,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
     for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
     {
         if ( !test_and_clear_bit(i, &d->ioreq_gfn.legacy_mask) )
-            return _gfn(d->arch.hvm.params[i]);
+            return _gfn(ioreq_params(d, i));
     }
 
     return INVALID_GFN;
@@ -255,7 +255,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
 
     for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
     {
-        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
+        if ( gfn_eq(gfn, _gfn(ioreq_params(d, i))) )
              break;
     }
     if ( i > HVM_PARAM_BUFIOREQ_PFN )
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 5d60737..c3af339 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -63,6 +63,8 @@ struct hvm_pi_ops {
     void (*vcpu_block)(struct vcpu *);
 };
 
+#define ioreq_params(d, i) ((d)->arch.hvm.params[i])
+
 struct hvm_domain {
     /* Cached CF8 for guest PCI config cycles */
     uint32_t                pci_cf8;
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:45:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:45:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7584.20009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6Np-0005jA-FV; Thu, 15 Oct 2020 16:45:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7584.20009; Thu, 15 Oct 2020 16:45:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6Np-0005iv-C9; Thu, 15 Oct 2020 16:45:41 +0000
Received: by outflank-mailman (input) for mailman id 7584;
 Thu, 15 Oct 2020 16:45:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6No-0004yr-6f
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:40 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ac59395-8402-4d7a-9ad9-0bb685af26e4;
 Thu, 15 Oct 2020 16:45:04 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id a5so3799292ljj.11
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:04 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:02 -0700 (PDT)
X-Inumbo-ID: 6ac59395-8402-4d7a-9ad9-0bb685af26e4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=6/1vRBOVhlXpqAFPJrg7onrJHeVZj9qNd2N7AdSm6+I=;
        b=NxUSlgwYMHInywlP9Vh/osxO+AX5/V1EMzEvRhEhDylOsNg3f8tznfM4IQ5gWoDWVf
         jDmIUKloygpCypaDkOIuFJm4xP5IDSECifyOO1/hMPSL3xWcvYUk5Lnp7qDdGuCuW2cT
         KorqU6bNJv3f89ex2T7QTmS0Q1hcAmoFFGZEFlCKovQQpefZd8qogldHxp8KBMK0qjvY
         PpNL2vNDyzn8VGManvXoUV1jtjzWjsvd/8+2tnT/3zA8xapUUIc6mZyM5kEjy7VcIRwz
         9MsxvpzMkhSKMqwi9Iku60llBgDVmFVipfxHQMCeU9THU5JLZmtIC0l3pSNw/TwFAxFt
         orMQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=6/1vRBOVhlXpqAFPJrg7onrJHeVZj9qNd2N7AdSm6+I=;
        b=PAa6WKBV1UQJvqAq+OiPKmyOTVB3tXkaB1q3GUEwFOY5kvx6of1wQf9HwxkJ2eGV2X
         N2Z1PL3wexRGHc0D/zz4AitaasXYmRnqU7gqrGSXOJnfMnfgZ2B3CNGIuYP7Xvm8quXr
         uD9HRtJBgdTDki+8iIqVfULK7GX6CQ1OsNGOPH/JeNjfkUIQfrEX2ompOoFdMzSGURu3
         PDb7cGGtGSmG9q4gk7RT1IPshdIfyD27GUvqbfMGtz7rVFkaZptFAiURNJ/KdR2lwulr
         dxm6FmyDgFKZppFIGSp/FgFw8ZImkyBf5qPUXTYilFUTzyqmvQ43fi1NmWJZc9omcGZ6
         1CyA==
X-Gm-Message-State: AOAM530FmC5idStLL4Rahs7VwqKvuN2z1e0f3pd1bcpfbVrL3r+ZmY0C
	wMa/DB/Ng7CG9w8/S23hBBjAt7K1jYpumA==
X-Google-Smtp-Source: ABdhPJwtiagApOtfBMwcSILR3Uh9/7kqM7ZaVQuLTRR1ZqApXjgAWKgMdmZDz7fH0K3DqG2SmRW69A==
X-Received: by 2002:a2e:8645:: with SMTP id i5mr1798706ljj.458.1602780302936;
        Thu, 15 Oct 2020 09:45:02 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.01
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:02 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V2 10/23] xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
Date: Thu, 15 Oct 2020 19:44:21 +0300
Message-Id: <1602780274-29141-11-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

As the x86 implementation of XENMEM_resource_ioreq_server can be
re-used on Arm later on, this patch makes it common and removes
arch_acquire_resource as unneeded.

This support is going to be used on Arm to allow running a device
emulator outside of the Xen hypervisor.
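The resulting dispatch can be sketched with a simplified standalone mock
(the helper bodies and arguments are stubbed, not the real Xen code):
the resource-type switch lives entirely in common code, and unknown
types now get -EOPNOTSUPP directly instead of falling through to an
arch_acquire_resource() hook.

```c
#include <assert.h>

#define EOPNOTSUPP 95 /* stand-in value; real code uses errno.h */

#define XENMEM_resource_grant_table  0
#define XENMEM_resource_ioreq_server 1

/* Stubbed per-resource helpers; in the patch these take the domain,
 * frame range and mfn_list, and acquire_ioreq_server() wraps
 * hvm_get_ioreq_server_frame(). */
static int acquire_grant_table(unsigned int id)  { (void)id; return 0; }
static int acquire_ioreq_server(unsigned int id) { (void)id; return 0; }

static int acquire_resource(unsigned int type, unsigned int id)
{
    int rc;

    switch ( type )
    {
    case XENMEM_resource_grant_table:
        rc = acquire_grant_table(id);
        break;

    case XENMEM_resource_ioreq_server:
        rc = acquire_ioreq_server(id);
        break;

    default:
        rc = -EOPNOTSUPP; /* no arch_acquire_resource() fallback anymore */
        break;
    }

    return rc;
}
```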

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - update the author of a patch
---
 xen/arch/x86/mm.c        | 44 --------------------------------------------
 xen/common/memory.c      | 45 +++++++++++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/mm.h |  8 --------
 xen/include/asm-x86/mm.h |  4 ----
 4 files changed, 43 insertions(+), 58 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index b5865ae..df7619d 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4591,50 +4591,6 @@ int xenmem_add_to_physmap_one(
     return rc;
 }
 
-int arch_acquire_resource(struct domain *d, unsigned int type,
-                          unsigned int id, unsigned long frame,
-                          unsigned int nr_frames, xen_pfn_t mfn_list[])
-{
-    int rc;
-
-    switch ( type )
-    {
-#ifdef CONFIG_HVM
-    case XENMEM_resource_ioreq_server:
-    {
-        ioservid_t ioservid = id;
-        unsigned int i;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            break;
-
-        if ( id != (unsigned int)ioservid )
-            break;
-
-        rc = 0;
-        for ( i = 0; i < nr_frames; i++ )
-        {
-            mfn_t mfn;
-
-            rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
-            if ( rc )
-                break;
-
-            mfn_list[i] = mfn_x(mfn);
-        }
-        break;
-    }
-#endif
-
-    default:
-        rc = -EOPNOTSUPP;
-        break;
-    }
-
-    return rc;
-}
-
 long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     int rc;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 1bab0e8..83d800f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -30,6 +30,10 @@
 #include <public/memory.h>
 #include <xsm/xsm.h>
 
+#ifdef CONFIG_IOREQ_SERVER
+#include <xen/ioreq.h>
+#endif
+
 #ifdef CONFIG_X86
 #include <asm/guest.h>
 #endif
@@ -1045,6 +1049,38 @@ static int acquire_grant_table(struct domain *d, unsigned int id,
     return 0;
 }
 
+#ifdef CONFIG_IOREQ_SERVER
+static int acquire_ioreq_server(struct domain *d,
+                                unsigned int id,
+                                unsigned long frame,
+                                unsigned int nr_frames,
+                                xen_pfn_t mfn_list[])
+{
+    ioservid_t ioservid = id;
+    unsigned int i;
+    int rc;
+
+    if ( !is_hvm_domain(d) )
+        return -EINVAL;
+
+    if ( id != (unsigned int)ioservid )
+        return -EINVAL;
+
+    for ( i = 0; i < nr_frames; i++ )
+    {
+        mfn_t mfn;
+
+        rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+        if ( rc )
+            return rc;
+
+        mfn_list[i] = mfn_x(mfn);
+    }
+
+    return 0;
+}
+#endif
+
 static int acquire_resource(
     XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
 {
@@ -1103,9 +1139,14 @@ static int acquire_resource(
                                  mfn_list);
         break;
 
+#ifdef CONFIG_IOREQ_SERVER
+    case XENMEM_resource_ioreq_server:
+        rc = acquire_ioreq_server(d, xmar.id, xmar.frame, xmar.nr_frames,
+                                  mfn_list);
+        break;
+#endif
     default:
-        rc = arch_acquire_resource(d, xmar.type, xmar.id, xmar.frame,
-                                   xmar.nr_frames, mfn_list);
+        rc = -EOPNOTSUPP;
         break;
     }
 
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index f8ba49b..0b7de31 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -358,14 +358,6 @@ static inline void put_page_and_type(struct page_info *page)
 
 void clear_and_clean_page(struct page_info *page);
 
-static inline
-int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
-                          unsigned long frame, unsigned int nr_frames,
-                          xen_pfn_t mfn_list[])
-{
-    return -EOPNOTSUPP;
-}
-
 unsigned int arch_get_dma_bitsize(void);
 
 #endif /*  __ARCH_ARM_MM__ */
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index deeba75..859214e 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -639,8 +639,4 @@ static inline bool arch_mfn_in_directmap(unsigned long mfn)
     return mfn <= (virt_to_mfn(eva - 1) + 1);
 }
 
-int arch_acquire_resource(struct domain *d, unsigned int type,
-                          unsigned int id, unsigned long frame,
-                          unsigned int nr_frames, xen_pfn_t mfn_list[]);
-
 #endif /* __ASM_X86_MM_H__ */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:49:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:49:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7602.20021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6RL-0006HU-2g; Thu, 15 Oct 2020 16:49:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7602.20021; Thu, 15 Oct 2020 16:49:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6RK-0006HN-VO; Thu, 15 Oct 2020 16:49:18 +0000
Received: by outflank-mailman (input) for mailman id 7602;
 Thu, 15 Oct 2020 16:49:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j6bZ=DW=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kT6RI-0006HH-KI
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:49:17 +0000
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7561fe5-194a-421b-bc8a-4757c96138b2;
 Thu, 15 Oct 2020 16:49:14 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id b8so4402359wrn.0
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:49:14 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id j7sm4950464wmc.7.2020.10.15.09.49.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 15 Oct 2020 09:49:12 -0700 (PDT)
X-Inumbo-ID: e7561fe5-194a-421b-bc8a-4757c96138b2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=qcu7vbDkPHQ/dX3NgTIESQdy57qDX1kkbPJO+QtJy7U=;
        b=YX+pr8t3FbpyHf00VK8ZlbMj1z0EZQLLdj8cqVbimoBHqglJJQY6vri+ZumBLz/Pg0
         X4oNhhYpTjQWsCIyEZEhZplvY4V9uawMTFvhkOXqaad7727cHwmGjbJI/EP13XP/fzzn
         5PC8utpu1BcKtOPqAGQ33UWPCcQmJrkS7kwLc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=qcu7vbDkPHQ/dX3NgTIESQdy57qDX1kkbPJO+QtJy7U=;
        b=QKj1uLy+MJ58iQk7xAV0EciZwFNY/RWhzUFDj85IKCVmtaXVE4zMCm3cSLaEUnBxtg
         fzatj7aemZb1u3cNpTY2r37WfWNlC4sMHh8/1/RWb0G59J1i1rD51Srjm++NJrfsWDB5
         rjUbXcWy+XBtWkals8Ev82dKZDpf6bwgEvbBIMyL9yNSeNYABEZwickBGhoM36QgGe6A
         hZSMhI9L40hYWmAp+/a1Kad2T3sdsJ+Q5TGd56LsRkBBT/BNmYHMHlnfEGB4nrm4jb16
         sDorEMby1r1MYtm0mfgrkWprvULZcQetyh9klj/LzsBgjAOwi7Vn0K85C36TinzPoKtT
         UMDQ==
X-Gm-Message-State: AOAM533JS9dMYCs1baworYTrf7CexirVrWIegQ6XKBQwMVxvS5gSLigq
	EeSEhoshoqOnXHL5ZREunJYCig==
X-Google-Smtp-Source: ABdhPJyv9K9powSi+g3c0tJ2hulpH+XMJ0c3fzWMHzpnDe/dmPLbxFxjyntSdoFtKycWWSiBvrXq0Q==
X-Received: by 2002:adf:9ec2:: with SMTP id b2mr5396302wrf.107.1602780553658;
        Thu, 15 Oct 2020 09:49:13 -0700 (PDT)
Date: Thu, 15 Oct 2020 18:49:09 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Christian =?iso-8859-1?Q?K=F6nig?= <christian.koenig@amd.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	airlied@linux.ie, daniel@ffwll.ch, sam@ravnborg.org,
	alexander.deucher@amd.com, kraxel@redhat.com,
	l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com, inki.dae@samsung.com,
	jy0922.shim@samsung.com, sw0312.kim@samsung.com,
	kyungmin.park@samsung.com, kgene@kernel.org, krzk@kernel.org,
	yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
	tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
Message-ID: <20201015164909.GC401619@phenom.ffwll.local>
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-6-tzimmermann@suse.de>
 <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location and/or writecombine flags.
> > 
> > On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
> > object callbacks.
> > 
> > v4:
> > 	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > 	  Christian)
> 
> Bunch of minor comments below, but over all look very solid to me.

Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
cleanest. And then we can maybe push the combinatorial monster into
vmwgfx, which I think is the only user after this series. Or perhaps a
dedicated set of helpers to map an individual page (again using the
dma_buf_map stuff).

I'll leave the details to Christian, but at a high level this is
definitely

Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Thanks a lot for doing all this.
-Daniel

> 
> > 
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> >   drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> >   drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
> >   include/drm/drm_gem_ttm_helper.h     |  6 +++
> >   include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
> >   include/linux/dma-buf-map.h          | 20 ++++++++
> >   5 files changed, 164 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > index 0e4fb9ba43ad..db4c14d78a30 100644
> > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> >   }
> >   EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > +		     struct dma_buf_map *map)
> > +{
> > +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > +	return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > +			struct dma_buf_map *map)
> > +{
> > +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > +
> > +	ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> >   /**
> >    * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> >    * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > index bdee4df1f3f2..80c42c774c7d 100644
> > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> >   #include <drm/ttm/ttm_bo_driver.h>
> >   #include <drm/ttm/ttm_placement.h>
> >   #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> >   #include <linux/io.h>
> >   #include <linux/highmem.h>
> >   #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> >   }
> >   EXPORT_SYMBOL(ttm_bo_kunmap);
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > +	struct ttm_resource *mem = &bo->mem;
> > +	int ret;
> > +
> > +	ret = ttm_mem_io_reserve(bo->bdev, mem);
> > +	if (ret)
> > +		return ret;
> > +
> > +	if (mem->bus.is_iomem) {
> > +		void __iomem *vaddr_iomem;
> > +		unsigned long size = bo->num_pages << PAGE_SHIFT;
> 
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
> 
> We have a unit test that allocates an 8GB BO, and that should work on a
> 32-bit machine as well :)
> 
> > +
> > +		if (mem->bus.addr)
> > +			vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > +		else if (mem->placement & TTM_PL_FLAG_WC)
> 
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
> 
> > +			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > +		else
> > +			vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > +		if (!vaddr_iomem)
> > +			return -ENOMEM;
> > +
> > +		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > +	} else {
> > +		struct ttm_operation_ctx ctx = {
> > +			.interruptible = false,
> > +			.no_wait_gpu = false
> > +		};
> > +		struct ttm_tt *ttm = bo->ttm;
> > +		pgprot_t prot;
> > +		void *vaddr;
> > +
> > +		BUG_ON(!ttm);
> 
> I think we can drop this, populate will just crash badly anyway.
> 
> > +
> > +		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > +		if (ret)
> > +			return ret;
> > +
> > +		/*
> > +		 * We need to use vmap to get the desired page protection
> > +		 * or to make the buffer object look contiguous.
> > +		 */
> > +		prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> 
> The calling convention has changed on drm-misc-next as well, but should be
> trivial to adapt.
> 
> Regards,
> Christian.
> 
> > +		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > +		if (!vaddr)
> > +			return -ENOMEM;
> > +
> > +		dma_buf_map_set_vaddr(map, vaddr);
> > +	}
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > +	if (dma_buf_map_is_null(map))
> > +		return;
> > +
> > +	if (map->is_iomem)
> > +		iounmap(map->vaddr_iomem);
> > +	else
> > +		vunmap(map->vaddr);
> > +	dma_buf_map_clear(map);
> > +
> > +	ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> >   static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> >   				 bool dst_use_tt)
> >   {
> > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> > index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> >   #include <drm/ttm/ttm_bo_api.h>
> >   #include <drm/ttm/ttm_bo_driver.h>
> > +struct dma_buf_map;
> > +
> >   #define drm_gem_ttm_of_gem(gem_obj) \
> >   	container_of(gem_obj, struct ttm_buffer_object, base)
> >   void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> >   			    const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > +		     struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > +			struct dma_buf_map *map);
> >   int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> >   		     struct vm_area_struct *vma);
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> >   struct ttm_bo_device;
> > +struct dma_buf_map;
> > +
> >   struct drm_mm_node;
> >   struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
> >    */
> >   void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> >   /**
> >    * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> >    *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> >    *
> >    *	dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
> >    *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + *	dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> > + *
> >    * Test if a mapping is valid with either dma_buf_map_is_set() or
> >    * dma_buf_map_is_null().
> >    *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
> >   	map->is_iomem = false;
> >   }
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> > + * @map:		The dma-buf mapping structure
> > + * @vaddr_iomem:	An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > +					       void __iomem *vaddr_iomem)
> > +{
> > +	map->vaddr_iomem = vaddr_iomem;
> > +	map->is_iomem = true;
> > +}
> > +
> >   /**
> >    * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
> >    * @lhs:	The dma-buf mapping structure
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7614.20086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VD-0007FA-Gk; Thu, 15 Oct 2020 16:53:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7614.20086; Thu, 15 Oct 2020 16:53:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VD-0007EW-93; Thu, 15 Oct 2020 16:53:19 +0000
Received: by outflank-mailman (input) for mailman id 7614;
 Thu, 15 Oct 2020 16:53:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Oh-0004yr-7y
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:35 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3aec3c3f-05a3-4c14-a864-c5f2f4906ad1;
 Thu, 15 Oct 2020 16:45:15 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id l28so4334194lfp.10
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:14 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.12
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:12 -0700 (PDT)
X-Inumbo-ID: 3aec3c3f-05a3-4c14-a864-c5f2f4906ad1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=BOyu/SSFPhkGm8O1hW7HvONLcLZpvripmtFylb3J5Vs=;
        b=BhO2bvJcrw8cmXzgh8FAEJGWMR7kX2kyYNercuZdW3SkczOyhcnUcr4vLXos0iC3ka
         nL0Tz86eumq55mJCrTHl8so/ZvOogPPOHyrFOHN0AkmEkwN6rPj0i01N0Dwx9R+U4zDX
         JF4YZoUwKi6O2V9Kjg4It9tknTxvWmERZYCIHTk6yv1twKRY8omy8rMbGqGym9+bDZ9T
         p5RFYIUcQZuPV9IEQXk4F3NgPAA9JC4MfXfSPsXUGNbBBiG7zPVZZwPbpqkqh7Yf92tO
         vnJlIDsEFIlJ+jybi++uhaF3EQwNKNdbm4d6n6c7fYQs3b7w8faFRtJblZJEbSprEwhw
         hwQQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=BOyu/SSFPhkGm8O1hW7HvONLcLZpvripmtFylb3J5Vs=;
        b=VGm98orza2/3sOx/GS9ROYyyHM6ndnseoxZcL/S8MxGaxwmbRQ3RR1YcMJWiBM+UwA
         ji00UXEWKoBEIcO5APjCJj03gXVmWhdtN8CEnXa4eYPD7WHUn45iNIKqwqBLNLJq6YBg
         eXLUIjXCAU0W3T4zAQi0XRsbAwH832S1jlZEniSFKLzf6t4I8/57l15fc3XbGcF409MM
         9S+MBweyU0OMeQjSSKP5RlPDeKVvSaVMVnJtBRAj7IVsvVJmr3wD1E+Cn/FVZ2/WJQam
         WeP41XiDKbhI833Jny6jLJg6IgG/qDc3diM5BFQpACY+p925tlU8nICQCEQGLTjTlKDq
         WqFA==
X-Gm-Message-State: AOAM530A5L3uK3C07skdDOPykOaKtzjBwo6QvVwRx5kOntkiVdjU48GQ
	VvDFPT2qwvCDmYqDxPo0i/2y3Tv+dQqcLA==
X-Google-Smtp-Source: ABdhPJyV7f83GC/ZdOMR1ad9pWY1O8LMTxCKcMAKIhVnQUqNl0rqnxcFvevOv7/DxFCafpHRoGleAQ==
X-Received: by 2002:ac2:55a5:: with SMTP id y5mr1550077lfg.473.1602780313306;
        Thu, 15 Oct 2020 09:45:13 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 20/23] xen/ioreq: Make x86's send_invalidate_req() common
Date: Thu, 15 Oct 2020 19:44:31 +0300
Message-Id: <1602780274-29141-21-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

As IOREQ is a common feature now, and we also need to invalidate the
qemu/demu mapcache on Arm when the required condition occurs, this
patch moves the function to the common code (and renames it to
send_invalidate_ioreq). It also moves the per-domain
qemu_mapcache_invalidate variable out of the arch sub-struct.

The subsequent patch will add mapcache invalidation handling on Arm.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, this patch depends on the following which is
on review:
https://patchwork.kernel.org/patch/11803383/
***

Changes RFC -> V1:
   - move send_invalidate_req() to the common code
   - update patch subject/description
   - move qemu_mapcache_invalidate out of the arch sub-struct,
     update checks
   - remove #if defined(CONFIG_ARM64) from the common code

Changes V1 -> V2:
   - was split into:
     - xen/ioreq: Make x86's send_invalidate_req() common
     - xen/arm: Add mapcache invalidation handling
   - update patch description/subject
   - move Arm bits to a separate patch
   - don't alter the common code, the flag is set by arch code
   - rename send_invalidate_req() to send_invalidate_ioreq()
   - guard qemu_mapcache_invalidate with CONFIG_IOREQ_SERVER
   - use bool instead of bool_t
   - remove blank line between head comment and #include-s
---
 xen/arch/x86/hvm/hypercall.c     |  9 +++++----
 xen/arch/x86/hvm/io.c            | 14 --------------
 xen/common/ioreq.c               | 14 ++++++++++++++
 xen/include/asm-x86/hvm/domain.h |  1 -
 xen/include/asm-x86/hvm/io.h     |  1 -
 xen/include/xen/ioreq.h          |  1 +
 xen/include/xen/sched.h          |  2 ++
 7 files changed, 22 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index b6ccaf4..324ff97 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -20,6 +20,7 @@
  */
 #include <xen/lib.h>
 #include <xen/hypercall.h>
+#include <xen/ioreq.h>
 #include <xen/nospec.h>
 
 #include <asm/hvm/emulate.h>
@@ -47,7 +48,7 @@ static long hvm_memory_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = compat_memory_op(cmd, arg);
 
     if ( (cmd & MEMOP_CMD_MASK) == XENMEM_decrease_reservation )
-        curr->domain->arch.hvm.qemu_mapcache_invalidate = true;
+        curr->domain->qemu_mapcache_invalidate = true;
 
     return rc;
 }
@@ -329,9 +330,9 @@ int hvm_hypercall(struct cpu_user_regs *regs)
     if ( curr->hcall_preempted )
         return HVM_HCALL_preempted;
 
-    if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
-         test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
-        send_invalidate_req();
+    if ( unlikely(currd->qemu_mapcache_invalidate) &&
+         test_and_clear_bool(currd->qemu_mapcache_invalidate) )
+        send_invalidate_ioreq();
 
     return HVM_HCALL_completed;
 }
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 2d03ffe..e51304c 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -64,20 +64,6 @@ void send_timeoffset_req(unsigned long timeoff)
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
-/* Ask ioemu mapcache to invalidate mappings. */
-void send_invalidate_req(void)
-{
-    ioreq_t p = {
-        .type = IOREQ_TYPE_INVALIDATE,
-        .size = 4,
-        .dir = IOREQ_WRITE,
-        .data = ~0UL, /* flush all */
-    };
-
-    if ( broadcast_ioreq(&p, false) != 0 )
-        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
-}
-
 bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 {
     struct hvm_emulate_ctxt ctxt;
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index a72bc0e..2203cf0 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -35,6 +35,20 @@
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
+/* Ask ioemu mapcache to invalidate mappings. */
+void send_invalidate_ioreq(void)
+{
+    ioreq_t p = {
+        .type = IOREQ_TYPE_INVALIDATE,
+        .size = 4,
+        .dir = IOREQ_WRITE,
+        .data = ~0UL, /* flush all */
+    };
+
+    if ( broadcast_ioreq(&p, false) != 0 )
+        gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
+}
+
 static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index c3af339..caab3a9 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -117,7 +117,6 @@ struct hvm_domain {
 
     struct viridian_domain *viridian;
 
-    bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;
 
     /*
diff --git a/xen/include/asm-x86/hvm/io.h b/xen/include/asm-x86/hvm/io.h
index fb64294..3da0136 100644
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -97,7 +97,6 @@ bool relocate_portio_handler(
     unsigned int size);
 
 void send_timeoffset_req(unsigned long timeoff);
-void send_invalidate_req(void);
 bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
                                   struct npfec);
 bool handle_pio(uint16_t port, unsigned int size, int dir);
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 0679fef..aad682f 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -126,6 +126,7 @@ struct ioreq_server *select_ioreq_server(struct domain *d,
 int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
                bool buffered);
 unsigned int broadcast_ioreq(ioreq_t *p, bool buffered);
+void send_invalidate_ioreq(void);
 
 void ioreq_init(struct domain *d);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 290cddb..1b8c6eb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -555,6 +555,8 @@ struct domain
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
         unsigned int            nr_servers;
     } ioreq_server;
+
+    bool qemu_mapcache_invalidate;
 #endif
 };
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7612.20077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VC-0007Ca-PF; Thu, 15 Oct 2020 16:53:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7612.20077; Thu, 15 Oct 2020 16:53:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VC-0007By-AI; Thu, 15 Oct 2020 16:53:18 +0000
Received: by outflank-mailman (input) for mailman id 7612;
 Thu, 15 Oct 2020 16:53:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6O3-0004yr-6v
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:55 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f79a2b4f-f75a-40b3-8756-6dcebdebe072;
 Thu, 15 Oct 2020 16:45:07 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id l2so4404508lfk.0
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:07 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:05 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kT6O3-0004yr-6v
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:55 +0000
X-Inumbo-ID: f79a2b4f-f75a-40b3-8756-6dcebdebe072
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f79a2b4f-f75a-40b3-8756-6dcebdebe072;
	Thu, 15 Oct 2020 16:45:07 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id l2so4404508lfk.0
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=N4sHZjBrvdP2nIgCwj1N8kQLOeVN1MHVkf9kqSUqzcc=;
        b=By1+sZx/D6fqGrtAeMnWCLm09vNkxpnpy1Oh0Ydg6B/fHRp9Eviq44+yZSendm4Mlz
         eClZjLs66cfhBMRPcJRLl3oYkHIXXnFNtscK2P8CI8eyvnAWyra3P/bzaUXqW2CAK8wa
         QL3Xjbg/T0W2xOI7Ijx+p9QHpNJ28y4Q+89b3dpHDO1RvWmpO94UpPBNC6prmpqpp/oS
         OxriwipPOOH7AEEe6BkOwWjlIrbAXg8HHRPx+8kh39tJ8GSpnw7HLGZwMpt/Fw8Z5UaE
         VFbBBQ9Y38smL9Np9XmjvQjiE8qHCySg/II1Rl+LI42vhBJefGorQzGy0B8WJ3TpfiXO
         Olpg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=N4sHZjBrvdP2nIgCwj1N8kQLOeVN1MHVkf9kqSUqzcc=;
        b=IqW703Q9/JUAg1mvP2iG/5Ab0eZ2OEpS+erqwBPKzIa7tudreTRfswHUM+u1AcF+nr
         RsmckKlxW5CSCsitmOMEwpa3UtKJAqg670FKG1OMY1rDru03GfF1e391gjpXEOzRejaN
         mqUo+5SHlKPA/uuekd0YySeFnUu6vhd0BI7gJAqLFBEqRRFg97UBssec/O0lQ7iCdFL2
         VMKEcRAQw8RAwfiz+qIOZKjCz9KDM/ML9OiTeMqmtWoqHVyME1lhFOCJK7Srm5SttXFX
         jdC+kAXuD+RY1aRLNH8jzn/uDhn/uvX0/5Xj5yUZ/5AS01Th8DYU2AnYgJfE3BIuLuGl
         C9Dg==
X-Gm-Message-State: AOAM53196ntoFRXDeQk7WwRt3u7WYmT+uWJlMM0O496JSQxcBP3/6pc4
	tZbyIzaUw3R0x0zA7zSp3//NPXV5iTAA4w==
X-Google-Smtp-Source: ABdhPJwhpoAS/UbyhzkuTItYICvt8rJ4Yol/uOLfyGdXlBHd66uiyGvQNddJO4lJY91mCIZH3jo+ww==
X-Received: by 2002:a19:824f:: with SMTP id e76mr1281242lfd.572.1602780306328;
        Thu, 15 Oct 2020 09:45:06 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.05
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:05 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 13/23] xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
Date: Thu, 15 Oct 2020 19:44:24 +0300
Message-Id: <1602780274-29141-14-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

The cmpxchg() in hvm_send_buffered_ioreq() operates on memory shared
with the emulator domain (and the target domain if the legacy
interface is used).

To be on the safe side, switch to guest_cmpxchg64() to prevent
a domain from DoSing Xen on Arm.

As there is no plan to support the legacy interface on Arm,
the page will only be mapped in a single domain at a time,
so we can safely pass s->emulator to guest_cmpxchg64().

Thankfully the only user of the legacy interface so far is x86,
where there is no concern regarding the atomic operations.

Please note that the legacy interface *must* not be used on Arm
without revisiting the code.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - move earlier to avoid breaking arm32 compilation
   - add an explanation to commit description and hvm_allow_set_param()
   - pass s->emulator
---
 xen/arch/arm/hvm.c | 4 ++++
 xen/common/ioreq.c | 3 ++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8951b34..9694e5a 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -31,6 +31,10 @@
 
 #include <asm/hypercall.h>
 
+/*
+ * The legacy interface (which involves magic IOREQ pages) *must* not be used
+ * without revisiting the code.
+ */
 static int hvm_allow_set_param(const struct domain *d, unsigned int param)
 {
     switch ( param )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 98fffae..8612159 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -28,6 +28,7 @@
 #include <xen/trace.h>
 #include <xen/vpci.h>
 
+#include <asm/guest_atomics.h>
 #include <asm/hvm/ioreq.h>
 
 #include <public/hvm/ioreq.h>
@@ -1317,7 +1318,7 @@ static int send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
 
         new.read_pointer = old.read_pointer - n * IOREQ_BUFFER_SLOT_NUM;
         new.write_pointer = old.write_pointer - n * IOREQ_BUFFER_SLOT_NUM;
-        cmpxchg(&pg->ptrs.full, old.full, new.full);
+        guest_cmpxchg64(s->emulator, &pg->ptrs.full, old.full, new.full);
     }
 
     notify_via_xen_event_channel(d, s->bufioreq_evtchn);
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7609.20047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VB-00079w-Ed; Thu, 15 Oct 2020 16:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7609.20047; Thu, 15 Oct 2020 16:53:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VB-00079d-5Q; Thu, 15 Oct 2020 16:53:17 +0000
Received: by outflank-mailman (input) for mailman id 7609;
 Thu, 15 Oct 2020 16:53:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Oc-0004yr-7t
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:30 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 811b3513-adc9-4cb8-b290-00b531824646;
 Thu, 15 Oct 2020 16:45:15 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id d24so3810777ljg.10
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:13 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.11
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:11 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kT6Oc-0004yr-7t
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:30 +0000
X-Inumbo-ID: 811b3513-adc9-4cb8-b290-00b531824646
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 811b3513-adc9-4cb8-b290-00b531824646;
	Thu, 15 Oct 2020 16:45:15 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id d24so3810777ljg.10
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=47uSMzel/qyKvhdlJs+JuPDeminjR3OYh46RDdFyYQ0=;
        b=fG3fb6woHBFXSTVeiwPo9CpE+xX7xDaX3P/Ipp+QjUKLXEPN7utLwZsQnDYIXMZRhY
         sPxnO3QYtPIwozFSmAwgEqqXOZi2a3qGVy7JMgDgFFWzSAGSRUXkfhwD26mPruIIsgud
         5B1pJBwOjUw1CKnNCvZpxhWVWm4PqNPaegEzPTetVmJNgAsKmJQqqLx4V2qBWXjPmZXQ
         XVdn043F/aLeDZ8cwaeoNLq/fRNu2MBsCkNNlY31vhkb3IwiccabZxC81G+/juWmu+0m
         j3PjHniDNzVNyRul3upS2vMh/XMkbfpcLh/vKfO5t+z+vtinujboWf4TBxO6hBmvrlrU
         /5Lg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=47uSMzel/qyKvhdlJs+JuPDeminjR3OYh46RDdFyYQ0=;
        b=WsXEXKtNeJ3f6R5sdvVlN25wbpeAIAm81v4/0uwKKyq65tZar637QfDu8UodFFb88i
         lEukt4RjigORHCneruwrricmumQWEWktMa2Msa9lOu4H/qH3N4pzV1pWBuPML9U1pgQQ
         n6TgE510h83aaGMcTd+/AByKJb59x3UiD38wp5fao+gQX31r5vDjJ3miE+gn2vuRRQZZ
         XZ6A5zOcqBDnhdzekwtiaezWI4SI3JKVNkDSZDSaDwAwTCpcjfrcWyy7ecZiWXAflcPh
         HIedMZubJ65UPntoVepZ9cTxj1OjrG/7EjNzK0+GYWaF4E5KNpc//eR6f7wiC1jaq+vi
         qjaQ==
X-Gm-Message-State: AOAM531vJpSqLL6+A0J5/8wERo4zad12c/uh3O7Ejeoqwe5JhLJ7rvdY
	jFiEDQtzVLOfQRnECcvhdETzgHwRLEXewQ==
X-Google-Smtp-Source: ABdhPJywyMpzDJLnEtIr2K9tTpva81uus1tSMSmwnOhgcs9eIfdM4k5UwxqS97psn9uT12wdbF171g==
X-Received: by 2002:a2e:8582:: with SMTP id b2mr1570200lji.376.1602780312263;
        Thu, 15 Oct 2020 09:45:12 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.11
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:11 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 19/23] xen/arm: io: Abstract sign-extension
Date: Thu, 15 Oct 2020 19:44:30 +0300
Message-Id: <1602780274-29141-20-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Both handle_read() and handle_ioserv() contain the same code for
the sign-extension. To avoid this duplication, move the code into
a common helper used by both.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch
---
 xen/arch/arm/io.c           | 18 ++----------------
 xen/arch/arm/ioreq.c        | 17 +----------------
 xen/include/asm-arm/traps.h | 24 ++++++++++++++++++++++++
 3 files changed, 27 insertions(+), 32 deletions(-)

diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index f44cfd4..8d6ec6c 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -23,6 +23,7 @@
 #include <asm/cpuerrata.h>
 #include <asm/current.h>
 #include <asm/mmio.h>
+#include <asm/traps.h>
 #include <asm/hvm/ioreq.h>
 
 #include "decode.h"
@@ -39,26 +40,11 @@ static enum io_state handle_read(const struct mmio_handler *handler,
      * setting r).
      */
     register_t r = 0;
-    uint8_t size = (1 << dabt.size) * 8;
 
     if ( !handler->ops->read(v, info, &r, handler->priv) )
         return IO_ABORT;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index da5ceac..ad17b80 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -28,7 +28,6 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     const union hsr hsr = { .bits = regs->hsr };
     const struct hsr_dabt dabt = hsr.dabt;
     /* Code is similar to handle_read */
-    uint8_t size = (1 << dabt.size) * 8;
     register_t r = v->io.io_req.data;
 
     /* We are done with the IO */
@@ -37,21 +36,7 @@ enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
     if ( dabt.write )
         return IO_HANDLED;
 
-    /*
-     * Sign extend if required.
-     * Note that we expect the read handler to have zeroed the bits
-     * outside the requested access size.
-     */
-    if ( dabt.sign && (r & (1UL << (size - 1))) )
-    {
-        /*
-         * We are relying on register_t using the same as
-         * an unsigned long in order to keep the 32-bit assembly
-         * code smaller.
-         */
-        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
-        r |= (~0UL) << size;
-    }
+    r = sign_extend(dabt, r);
 
     set_user_reg(regs, dabt.reg, r);
 
diff --git a/xen/include/asm-arm/traps.h b/xen/include/asm-arm/traps.h
index 997c378..e301c44 100644
--- a/xen/include/asm-arm/traps.h
+++ b/xen/include/asm-arm/traps.h
@@ -83,6 +83,30 @@ static inline bool VABORT_GEN_BY_GUEST(const struct cpu_user_regs *regs)
         (unsigned long)abort_guest_exit_end == regs->pc;
 }
 
+/* Check whether the sign extension is required and perform it */
+static inline register_t sign_extend(const struct hsr_dabt dabt, register_t r)
+{
+    uint8_t size = (1 << dabt.size) * 8;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    return r;
+}
+
 #endif /* __ASM_ARM_TRAPS__ */
 /*
  * Local variables:
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7611.20060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VB-0007Ar-V9; Thu, 15 Oct 2020 16:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7611.20060; Thu, 15 Oct 2020 16:53:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VB-0007Ab-JQ; Thu, 15 Oct 2020 16:53:17 +0000
Received: by outflank-mailman (input) for mailman id 7611;
 Thu, 15 Oct 2020 16:53:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6OD-0004yr-7J
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:05 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e03c66cd-2b77-4141-b88d-5a04b589861c;
 Thu, 15 Oct 2020 16:45:09 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id h6so4385287lfj.3
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:09 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.07
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:07 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kT6OD-0004yr-7J
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:05 +0000
X-Inumbo-ID: e03c66cd-2b77-4141-b88d-5a04b589861c
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e03c66cd-2b77-4141-b88d-5a04b589861c;
	Thu, 15 Oct 2020 16:45:09 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id h6so4385287lfj.3
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=Keo31MrYeOTyvKyfr544wWTp0SuVVjzyXDaBlqBdDas=;
        b=OBlvZjvFoAJltafpJoEkNBdBI7u2knDZkjagqQ73OHny6SZJkL4xG8AFhUM/hl431F
         Mncj45jQ6T3PvIVxRnY693tUgV2xhrrkit06SZAg2jlsFiL1RV29WoiJj7J4cLDj9jN5
         Zkmrt+H7sBOaCa8bAH+OyFI730CV4BNb279HRhNSBunIQhDKfWv2d8sJ5CnsBkx4/ovf
         Y3SjDNPUsx5W0uGVsXV/82YQ3Y7Ykqae6DfkawPlYlSlwaZTI0wvuq8Pn2Ao6dWImK9L
         N5YcVH3gNDY0Jy8EYwxffQIFh8wxFoqME4zrM/VEGQaklUf1ySYkKb0/YBJSIm4jiUwB
         ppPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=Keo31MrYeOTyvKyfr544wWTp0SuVVjzyXDaBlqBdDas=;
        b=TkXP0i81NKR7vPphQLzzgbjFXNThvu5L3aD3KD/7OaOb+bmEWPkxtmRsZ5aCmELvlh
         bgz/oJhzCl2JtPubIc2hH2gJkD75Pbi6zdz23X4nxUidVSKFBhdkBd/gSd5dHd+W8bJL
         JRKwuI7pad7jiy8h+UehfhD+DlPqc8P0cBpTNnz4E5UZCzpS0ngxS2sfpj+XNATfB3Nd
         6XdMkGxM+bXtLJFhi3xM2DRaa0pIlIevCdS2fhVFDPRU00ROXB0gjJMNBhnRL5H2BPGL
         Ii5/WAtzos/delEXbhI7AD+VYjpQqnuax9KSQEoYfmJZ8jTJcS1V+hhqnJhO4NohVHjR
         Bl0A==
X-Gm-Message-State: AOAM531X7byo80d6Gb6QvbAXqqryLq1govcHSQ5l6guJxYdr7CzLqw2N
	hdE+PlNoYEw4atYSI5huLrTRjGicYw0+JA==
X-Google-Smtp-Source: ABdhPJxKP4JYrVOERvQAq7fAe3D43owO+copPgDkP5/Hg9N9VUFmZEhVveSzbzH8T+uUi6c3GZj9ig==
X-Received: by 2002:a19:c883:: with SMTP id y125mr1282972lff.485.1602780308177;
        Thu, 15 Oct 2020 09:45:08 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.07
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:07 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 15/23] xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
Date: Thu, 15 Oct 2020 19:44:26 +0300
Message-Id: <1602780274-29141-16-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch adds proper handling of return value of handle_io_completion()
which involves using a loop in leave_hypervisor_to_guest().

The reason to use an unbounded loop here is that a vCPU shouldn't
continue until its I/O has completed. In Xen's case, if an I/O never
completes then it most likely means that something went horribly
wrong with the device emulator, and it is most likely not safe to
continue. So letting the vCPU spin forever if the I/O never completes
is safer than letting it continue with the guest in an unclear state,
and is the best we can do for now.

This wouldn't be an issue for Xen as do_softirq() would be called on
every loop iteration. In case of failure, the guest will crash and
the vCPU will be unscheduled.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, changes were derived from (+ new explanation):
     arm/ioreq: Introduce arch specific bits for IOREQ/DM features
---
 xen/arch/arm/traps.c | 31 ++++++++++++++++++++++++++-----
 1 file changed, 26 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index b154837..507c095 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2256,18 +2256,23 @@ static void check_for_pcpu_work(void)
  * Process pending work for the vCPU. Any call should be fast or
  * implement preemption.
  */
-static void check_for_vcpu_work(void)
+static bool check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
+    bool handled;
+
     local_irq_enable();
-    handle_io_completion(v);
+    handled = handle_io_completion(v);
     local_irq_disable();
+
+    if ( !handled )
+        return true;
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
-        return;
+        return false;
 
     /*
      * Give a chance for the pCPU to process work before handling the vCPU
@@ -2278,6 +2283,8 @@ static void check_for_vcpu_work(void)
     local_irq_enable();
     p2m_flush_vm(v);
     local_irq_disable();
+
+    return false;
 }
 
 /*
@@ -2290,8 +2297,22 @@ void leave_hypervisor_to_guest(void)
 {
     local_irq_disable();
 
-    check_for_vcpu_work();
-    check_for_pcpu_work();
+    /*
+     * The reason to use an unbounded loop here is that a vCPU shouldn't
+     * continue until its I/O has completed. In Xen's case, if an I/O never
+     * completes then it most likely means that something went horribly
+     * wrong with the device emulator, and it is most likely not safe to
+     * continue. So letting the vCPU spin forever if the I/O never completes
+     * is safer than letting it continue with the guest in an unclear state,
+     * and is the best we can do for now.
+     *
+     * This wouldn't be an issue for Xen as do_softirq() would be called on
+     * every loop iteration. In case of failure, the guest will crash and
+     * the vCPU will be unscheduled.
+     */
+    do {
+        check_for_pcpu_work();
+    } while ( check_for_vcpu_work() );
 
     vgic_sync_to_lrs();
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7615.20098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VE-0007Gs-D9; Thu, 15 Oct 2020 16:53:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7615.20098; Thu, 15 Oct 2020 16:53:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VD-0007GR-UG; Thu, 15 Oct 2020 16:53:19 +0000
Received: by outflank-mailman (input) for mailman id 7615;
 Thu, 15 Oct 2020 16:53:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Or-0004yr-8K
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:45 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d4d4140-25ab-4442-9a38-2c66f7862da7;
 Thu, 15 Oct 2020 16:45:22 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id y16so3878109ljk.1
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:16 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.14
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:14 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kT6Or-0004yr-8K
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:45 +0000
X-Inumbo-ID: 8d4d4140-25ab-4442-9a38-2c66f7862da7
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8d4d4140-25ab-4442-9a38-2c66f7862da7;
	Thu, 15 Oct 2020 16:45:22 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id y16so3878109ljk.1
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=Io9UJaZa2vr40+lSLt6wvD6ynVflJI9Al4nh6JXoutI=;
        b=qhNGtO5djVpxpfhukOTWZS1KwYbLm9hZzuRQayLeydkwh1ULMdigrk2cn1jO4BzKSK
         04TDAvE3r1NZYXaZjTKAPkdRKb0XfMm86Hk4sWaKLsW+43vgLKBG6WA5mrrCxbPEl2Ed
         DTtxjLAxvWWc9hH9rhgaGU0CvPW+9RdxR5ifbr0hmCppYe3f39dg3C7+N7bOH72v4M97
         JuiF0VYRZkdKpFXkRt7nazvBhNkipTy4UO0QxRdNM4FU0SmNWUePJIpSGRtZK3vChftW
         QB9TQZTcsDS8PYE2SuHwA00maz5WuW+G0JIHPR24ZrdxtdsKMfS2drS1ypTIgxsueTfM
         kgOA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=Io9UJaZa2vr40+lSLt6wvD6ynVflJI9Al4nh6JXoutI=;
        b=tkZhrXRhb+ADkMxjDT+fONClS+AQkY9sOQxiNMmkNZYKUhXN8kBeBAlttIc43jd4nr
         j9LR2yc8XnJrFQ0DbGscYGRTNmbiEl2zUhRBLpmfzsefklRHwYgcZhYSIQnJ/mXW9S7/
         36YTkyCW5TekPZ7fSOlqQ32rJ3FvkVOHUpGxCMFNG+fwFUe7I0arPqcayMOO0/lPNljo
         LtdcbUv4+Kjru+wVUePQfJxzIeIz5FpCtmr9vg/LKGJ85djqy8fglcs9wC49iFpOFbB4
         se4wL+on5vCMqTizFPkVzWAhDhoxKP5QIosIeqEPIDCqBSLHGRcFWuXlDDvCL98dnvbM
         gfGw==
X-Gm-Message-State: AOAM530YO7GgLEefH+xjcvUm05FFH1jgDC2zsbRadcW1rrFj0P0Yayk/
	f2zZWJgfCePdsFoy/25dYYvqP20f0rx8bw==
X-Google-Smtp-Source: ABdhPJzshDvkHnaoQdkfm6rWN+hXIU4lgONNKaeqyrYRC6gft4yR6POrZRsZUT7Z+40lcLRpxWHafQ==
X-Received: by 2002:a05:651c:96:: with SMTP id 22mr1712441ljq.76.1602780315241;
        Thu, 15 Oct 2020 09:45:15 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.14
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:14 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V2 22/23] libxl: Introduce basic virtio-mmio support on Arm
Date: Thu, 15 Oct 2020 19:44:33 +0300
Message-Id: <1602780274-29141-23-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

This patch creates a specific device node in the guest device tree,
with an allocated MMIO range and SPI interrupt, if the 'virtio'
property is present in the domain config.
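For context, a guest configuration exercising this would look roughly as follows. This is a hypothetical fragment: the field name `virtio` is taken from the libxl_types.idl change in this patch, but the exact xl syntax depends on the xl_parse.c hunk, which is not shown in full here.

```
# Hypothetical xl guest config enabling the virtio-mmio transport
name = "guest-virtio"
memory = 512
vcpus = 2
virtio = 1
```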

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch
---
 tools/libs/light/libxl_arm.c     | 58 ++++++++++++++++++++++++++++++++++++++--
 tools/libs/light/libxl_types.idl |  1 +
 tools/xl/xl_parse.c              |  1 +
 xen/include/public/arch-arm.h    |  5 ++++
 4 files changed, 63 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 66e8a06..588ee5a 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -26,8 +26,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq;
+    bool vuart_enabled = false, virtio_enabled = false;
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +39,17 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
+    /*
+     * XXX: Handle virtio properly
+     * A proper solution would be for the toolstack to allocate the interrupts
+     * used by each virtio backend and let the backend know which one is used
+     */
+    if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
+        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+        virtio_enabled = true;
+    }
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +69,12 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }
 
+        /* The same check as for vpl011 */
+        if (virtio_enabled && irq == virtio_irq) {
+            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;
 
@@ -658,6 +675,39 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    /* Placeholder for virtio@ + a 64-bit number + \0 */
+    char buf[24];
+
+    snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base);
+    res = fdt_begin_node(fdt, buf);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, GUEST_VIRTIO_MMIO_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -961,6 +1011,9 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
+        if (libxl_defbool_val(info->arch_arm.virtio))
+            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
@@ -1178,6 +1231,7 @@ void libxl__arch_domain_build_info_setdefault(libxl__gc *gc,
 {
     /* ACPI is disabled by default */
     libxl_defbool_setdefault(&b_info->acpi, false);
+    libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);
 
     if (b_info->type != LIBXL_DOMAIN_TYPE_PV)
         return;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f..b054bf9 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -639,6 +639,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
 
 
     ("arch_arm", Struct(None, [("gic_version", libxl_gic_version),
+                               ("virtio", libxl_defbool),
                                ("vuart", libxl_vuart_type),
                               ])),
     # Alternate p2m is not bound to any architecture or guest type, as it is
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb6..10acf22 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2581,6 +2581,7 @@ skip_usbdev:
     }
 
     xlu_cfg_get_defbool(config, "dm_restrict", &b_info->dm_restrict, 0);
+    xlu_cfg_get_defbool(config, "virtio", &b_info->arch_arm.virtio, 0);
 
     if (c_info->type == LIBXL_DOMAIN_TYPE_HVM) {
         if (!xlu_cfg_get_string (config, "vga", &buf, 0)) {
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c365b1b..be7595f 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -464,6 +464,11 @@ typedef uint64_t xen_callback_t;
 #define PSCI_cpu_on      2
 #define PSCI_migrate     3
 
+/* VirtIO MMIO definitions */
+#define GUEST_VIRTIO_MMIO_BASE  xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE  xen_mk_ullong(0x200)
+#define GUEST_VIRTIO_MMIO_SPI   33
+
 #endif
 
 #ifndef __ASSEMBLY__
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7607.20033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VA-00078e-KF; Thu, 15 Oct 2020 16:53:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7607.20033; Thu, 15 Oct 2020 16:53:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VA-00078X-GI; Thu, 15 Oct 2020 16:53:16 +0000
Received: by outflank-mailman (input) for mailman id 7607;
 Thu, 15 Oct 2020 16:53:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Ow-0004yr-8X
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:50 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 087e66d4-5203-42f8-bac5-9b88c9992b63;
 Thu, 15 Oct 2020 16:45:22 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id 77so4380178lfl.2
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:17 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:15 -0700 (PDT)
X-Inumbo-ID: 087e66d4-5203-42f8-bac5-9b88c9992b63
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=xCZVPNSd778IcfiDHcmUgV2+bmw+xbtVorY65VaJeGs=;
        b=Avq2Hoh8lwOMvmjF3c6XbeNBaRbzuWIGIuolr8Q6LaYlBKeZ5kNLVc2aUpijb3psTy
         DN6QC9HY7KKywU4UFyxH1vsi17nMeIxUJc7/NIpwpSP7V40iUVqB67xCWkSIIezznbnp
         RooSEoZsgEVMyJN8XD+eM5XdxkykKAJU0W38u2Xtn9HNfF85TllybdvtP6SIETT4l49g
         Kp6SlzkKaxloTMK6NAxA/SvUzkWctQSwSzekVxUjUws+bsV6zNLkLPoa0j9KW/6fauUH
         JsZMrbOKj6W4YfppUi6GqH+j5M74GJikLCslEPFNhG8aPhTmWjjx55HIfPXqRj7chLSx
         w7jA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=xCZVPNSd778IcfiDHcmUgV2+bmw+xbtVorY65VaJeGs=;
        b=ucHXIafUWoHEf7e5HASI8FnPb1tOBqvWwNNvawPCV5Toj4u1JM7qtBqu9CetHHK160
         +hr69Lr/emaEljwY64NT17CuiDwVt0ZHbDFMEt27QR+cqK9khPzqMe0pW+O11TalOYgH
         HHCxp7+qZaksC7PAch8ZJf5I42IE2S7+4Js2mMw4lqXi2ByJOEm4+RZDcOrBzEQpC67d
         s7dk+7+UsVHlrfcbFH78Hu8fdPwpCWz9THGfe1S2rQsHgv5sVJWSd+ZG3iU/oXN4qIOT
         CFNOPEosjCgnBHIxrfJoMwxDMem5og8lh8w5k54ashs29t8Bmp6H/jAV8SO2U2oSbKgC
         suYg==
X-Gm-Message-State: AOAM533D2mw894oN/xz4po9FidTP99avvSd6RDy8VLJBq2b24roIA2Y9
	OFn4tRfE/xp1Z4EVopTAxpzqWckb1wYifw==
X-Google-Smtp-Source: ABdhPJxI6QsqQDzvUEy4e89SghbJpt00SiJg6myZgzN0L2sohpMkncs8FuLSnw9POz2gxpNHr2WNZg==
X-Received: by 2002:ac2:5e6d:: with SMTP id a13mr1495324lfr.514.1602780316173;
        Thu, 15 Oct 2020 09:45:16 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.15
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:15 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH V2 23/23] [RFC] libxl: Add support for virtio-disk configuration
Date: Thu, 15 Oct 2020 19:44:34 +0300
Message-Id: <1602780274-29141-24-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch adds basic support for configuring and assisting a virtio-disk
backend (emulator) which is intended to run outside of QEMU and can run
in any domain.

Xenstore was chosen as the communication interface so that an emulator
running in a non-toolstack domain can obtain its configuration either by
reading Xenstore directly or by receiving command-line parameters (an updated
'xl devd' running in the same domain would read Xenstore beforehand and
invoke the backend executable with the required arguments).

An example of domain configuration (two disks are assigned to the guest,
the latter in read-only mode):

vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

The per-disk Xenstore entries are:
- filename and readonly flag (configured via the "vdisk" property)
- base and irq (allocated dynamically)
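Based on the per-index keys written by libxl__set_xenstore_virtio_disk() in this series (the surrounding frontend path is device specific and omitted here), the entries for the example above might look like:

```
# per-disk keys, relative to the device's read-only frontend directory
0/filename = "/dev/mmcblk0p3"
0/readonly = "0"
0/base     = "33554432"    # 0x02000000, written with %lu (decimal)
0/irq      = "33"
1/filename = "/dev/mmcblk1p3"
1/readonly = "1"
1/base     = "33554944"    # 0x02000200
1/irq      = "34"
```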

Besides handling the 'visible' parameters described in the configuration
file, the patch also allocates virtio-mmio specific ones (irq and base) for
each device and writes them into Xenstore. The virtio-mmio parameters are
unique per guest domain; they are allocated at domain creation time and
passed through to the emulator. Each VirtIO device has at least one pair
of these parameters.

TODO:
1. An extra "virtio" property could be removed.
2. Update documentation.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Please note, there is a real concern about VirtIO interrupt allocation.
Below is what Stefano said in the RFC thread.

So, if we end up allocating let's say 6 virtio interrupts for a domain,
the chance of a clash with a physical interrupt of a passthrough device is real.

I am not entirely sure how to solve it, but these are a few ideas:
- choose virtio interrupts that are less likely to conflict (maybe > 1000)
- make the virtio irq (optionally) configurable so that a user could
  override the default irq and specify one that doesn't conflict
- implement support for virq != pirq (even the xl interface doesn't
  allow specifying the virq number for passthrough devices, see "irqs")

---
 tools/libs/light/Makefile                 |   1 +
 tools/libs/light/libxl_arm.c              |  56 ++++++++++++---
 tools/libs/light/libxl_create.c           |   1 +
 tools/libs/light/libxl_internal.h         |   1 +
 tools/libs/light/libxl_types.idl          |  15 ++++
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_virtio_disk.c      | 109 ++++++++++++++++++++++++++++
 tools/xl/Makefile                         |   2 +-
 tools/xl/xl.h                             |   3 +
 tools/xl/xl_cmdtable.c                    |  15 ++++
 tools/xl/xl_parse.c                       | 115 ++++++++++++++++++++++++++++++
 tools/xl/xl_virtio_disk.c                 |  46 ++++++++++++
 12 files changed, 354 insertions(+), 11 deletions(-)
 create mode 100644 tools/libs/light/libxl_virtio_disk.c
 create mode 100644 tools/xl/xl_virtio_disk.c

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index f58a321..2ee388a 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -115,6 +115,7 @@ SRCS-y += libxl_genid.c
 SRCS-y += _libxl_types.c
 SRCS-y += libxl_flask.c
 SRCS-y += _libxl_types_internal.c
+SRCS-y += libxl_virtio_disk.c
 
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index 588ee5a..9eb3022 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,12 @@
 #include <assert.h>
 #include <xen/device_tree_defs.h>
 
+#ifndef container_of
+#define container_of(ptr, type, member) ({			\
+        typeof( ((type *)0)->member ) *__mptr = (ptr);	\
+        (type *)( (char *)__mptr - offsetof(type,member) );})
+#endif
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -39,14 +45,32 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
-    /*
-     * XXX: Handle virtio properly
-     * A proper solution would be for the toolstack to allocate the interrupts
-     * used by each virtio backend and let the backend know which one is used
-     */
     if (libxl_defbool_val(d_config->b_info.arch_arm.virtio)) {
-        nr_spis += (GUEST_VIRTIO_MMIO_SPI - 32) + 1;
+        uint64_t virtio_base;
+        libxl_device_virtio_disk *virtio_disk;
+
+        virtio_base = GUEST_VIRTIO_MMIO_BASE;
         virtio_irq = GUEST_VIRTIO_MMIO_SPI;
+
+        if (!d_config->num_virtio_disks) {
+            LOG(ERROR, "Virtio is enabled, but no Virtio devices present\n");
+            return ERROR_FAIL;
+        }
+        virtio_disk = &d_config->virtio_disks[0];
+
+        for (i = 0; i < virtio_disk->num_disks; i++) {
+            virtio_disk->disks[i].base = virtio_base;
+            virtio_disk->disks[i].irq = virtio_irq;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params: IRQ %u BASE 0x%"PRIx64,
+                virtio_irq, virtio_base);
+
+            virtio_irq ++;
+            virtio_base += GUEST_VIRTIO_MMIO_SIZE;
+        }
+        virtio_irq --;
+
+        nr_spis += (virtio_irq - 32) + 1;
         virtio_enabled = true;
     }
 
@@ -70,8 +94,9 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         }
 
         /* The same check as for vpl011 */
-        if (virtio_enabled && irq == virtio_irq) {
-            LOG(ERROR, "Physical IRQ %u conflicting with virtio SPI\n", irq);
+        if (virtio_enabled &&
+           (irq >= GUEST_VIRTIO_MMIO_SPI && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio IRQ range\n", irq);
             return ERROR_FAIL;
         }
 
@@ -1011,8 +1036,19 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
-        if (libxl_defbool_val(info->arch_arm.virtio))
-            FDT( make_virtio_mmio_node(gc, fdt, GUEST_VIRTIO_MMIO_BASE, GUEST_VIRTIO_MMIO_SPI) );
+        if (libxl_defbool_val(info->arch_arm.virtio)) {
+            libxl_domain_config *d_config =
+                container_of(info, libxl_domain_config, b_info);
+            libxl_device_virtio_disk *virtio_disk = &d_config->virtio_disks[0];
+            unsigned int i;
+
+            for (i = 0; i < virtio_disk->num_disks; i++) {
+                uint64_t base = virtio_disk->disks[i].base;
+                uint32_t irq = virtio_disk->disks[i].irq;
+
+                FDT( make_virtio_mmio_node(gc, fdt, base, irq) );
+            }
+        }
 
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e..8da328d 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1821,6 +1821,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
+    &libxl__virtio_disk_devtype,
     NULL
 };
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9..ea497bb 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4000,6 +4000,7 @@ extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
 extern const libxl__device_type libxl__vsnd_devtype;
+extern const libxl__device_type libxl__virtio_disk_devtype;
 
 extern const libxl__device_type *device_type_tbl[];
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index b054bf9..5f8a3ff 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -935,6 +935,20 @@ libxl_device_vsnd = Struct("device_vsnd", [
     ("pcms", Array(libxl_vsnd_pcm, "num_vsnd_pcms"))
     ])
 
+libxl_virtio_disk_param = Struct("virtio_disk_param", [
+    ("filename", string),
+    ("readonly", bool),
+    ("irq", uint32),
+    ("base", uint64),
+    ])
+
+libxl_device_virtio_disk = Struct("device_virtio_disk", [
+    ("backend_domid", libxl_domid),
+    ("backend_domname", string),
+    ("devid", libxl_devid),
+    ("disks", Array(libxl_virtio_disk_param, "num_disks")),
+    ])
+
 libxl_domain_config = Struct("domain_config", [
     ("c_info", libxl_domain_create_info),
     ("b_info", libxl_domain_build_info),
@@ -951,6 +965,7 @@ libxl_domain_config = Struct("domain_config", [
     ("pvcallsifs", Array(libxl_device_pvcallsif, "num_pvcallsifs")),
     ("vdispls", Array(libxl_device_vdispl, "num_vdispls")),
     ("vsnds", Array(libxl_device_vsnd, "num_vsnds")),
+    ("virtio_disks", Array(libxl_device_virtio_disk, "num_virtio_disks")),
     # a channel manifests as a console with a name,
     # see docs/misc/channels.txt
     ("channels", Array(libxl_device_channel, "num_channels")),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/tools/libs/light/libxl_virtio_disk.c b/tools/libs/light/libxl_virtio_disk.c
new file mode 100644
index 0000000..25e7f1a
--- /dev/null
+++ b/tools/libs/light/libxl_virtio_disk.c
@@ -0,0 +1,109 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include "libxl_internal.h"
+
+static int libxl__device_virtio_disk_setdefault(libxl__gc *gc, uint32_t domid,
+                                                libxl_device_virtio_disk *virtio_disk,
+                                                bool hotplug)
+{
+    return libxl__resolve_domid(gc, virtio_disk->backend_domname,
+                                &virtio_disk->backend_domid);
+}
+
+static int libxl__virtio_disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
+                                            libxl_devid devid,
+                                            libxl_device_virtio_disk *virtio_disk)
+{
+    const char *be_path;
+    int rc;
+
+    virtio_disk->devid = devid;
+    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
+                                  GCSPRINTF("%s/backend", libxl_path),
+                                  &be_path);
+    if (rc) return rc;
+
+    rc = libxl__backendpath_parse_domid(gc, be_path, &virtio_disk->backend_domid);
+    if (rc) return rc;
+
+    return 0;
+}
+
+static void libxl__update_config_virtio_disk(libxl__gc *gc,
+                                             libxl_device_virtio_disk *dst,
+                                             libxl_device_virtio_disk *src)
+{
+    dst->devid = src->devid;
+}
+
+static int libxl_device_virtio_disk_compare(libxl_device_virtio_disk *d1,
+                                            libxl_device_virtio_disk *d2)
+{
+    return COMPARE_DEVID(d1, d2);
+}
+
+static void libxl__device_virtio_disk_add(libxl__egc *egc, uint32_t domid,
+                                          libxl_device_virtio_disk *virtio_disk,
+                                          libxl__ao_device *aodev)
+{
+    libxl__device_add_async(egc, domid, &libxl__virtio_disk_devtype, virtio_disk, aodev);
+}
+
+static int libxl__set_xenstore_virtio_disk(libxl__gc *gc, uint32_t domid,
+                                           libxl_device_virtio_disk *virtio_disk,
+                                           flexarray_t *back, flexarray_t *front,
+                                           flexarray_t *ro_front)
+{
+    int rc;
+    unsigned int i;
+
+    for (i = 0; i < virtio_disk->num_disks; i++) {
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/filename", i),
+                                   GCSPRINTF("%s", virtio_disk->disks[i].filename));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/readonly", i),
+                                   GCSPRINTF("%d", virtio_disk->disks[i].readonly));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/base", i),
+                                   GCSPRINTF("%lu", virtio_disk->disks[i].base));
+        if (rc) return rc;
+
+        rc = flexarray_append_pair(ro_front, GCSPRINTF("%d/irq", i),
+                                   GCSPRINTF("%u", virtio_disk->disks[i].irq));
+        if (rc) return rc;
+    }
+
+    return 0;
+}
+
+static LIBXL_DEFINE_UPDATE_DEVID(virtio_disk)
+static LIBXL_DEFINE_DEVICE_FROM_TYPE(virtio_disk)
+static LIBXL_DEFINE_DEVICES_ADD(virtio_disk)
+
+DEFINE_DEVICE_TYPE_STRUCT(virtio_disk, VIRTIO_DISK,
+    .update_config = (device_update_config_fn_t) libxl__update_config_virtio_disk,
+    .from_xenstore = (device_from_xenstore_fn_t) libxl__virtio_disk_from_xenstore,
+    .set_xenstore_config = (device_set_xenstore_config_fn_t) libxl__set_xenstore_virtio_disk
+);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index bdf67c8..9d8f2aa 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -23,7 +23,7 @@ XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o
 XL_OBJS += xl_sched.o xl_pci.o xl_vcpu.o xl_cdrom.o xl_mem.o
 XL_OBJS += xl_info.o xl_console.o xl_misc.o
 XL_OBJS += xl_vmcontrol.o xl_saverestore.o xl_migrate.o
-XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o
+XL_OBJS += xl_vdispl.o xl_vsnd.o xl_vkb.o xl_virtio_disk.o
 
 $(XL_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
 $(XL_OBJS): CFLAGS += $(CFLAGS_XL)
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 06569c6..3d26f19 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -178,6 +178,9 @@ int main_vsnddetach(int argc, char **argv);
 int main_vkbattach(int argc, char **argv);
 int main_vkblist(int argc, char **argv);
 int main_vkbdetach(int argc, char **argv);
+int main_virtio_diskattach(int argc, char **argv);
+int main_virtio_disklist(int argc, char **argv);
+int main_virtio_diskdetach(int argc, char **argv);
 int main_usbctrl_attach(int argc, char **argv);
 int main_usbctrl_detach(int argc, char **argv);
 int main_usbdev_attach(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b..745afab 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -435,6 +435,21 @@ struct cmd_spec cmd_table[] = {
       "Destroy a domain's virtual sound device",
       "<Domain> <DevId>",
     },
+    { "virtio-disk-attach",
+      &main_virtio_diskattach, 1, 1,
+      "Create a new virtio block device",
+      " TBD\n"
+    },
+    { "virtio-disk-list",
+      &main_virtio_disklist, 0, 0,
+      "List virtio block devices for a domain",
+      "<Domain(s)>",
+    },
+    { "virtio-disk-detach",
+      &main_virtio_diskdetach, 0, 1,
+      "Destroy a domain's virtio block device",
+      "<Domain> <DevId>",
+    },
     { "uptime",
       &main_uptime, 0, 0,
       "Print uptime for all/some domains",
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 10acf22..6cf3524 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1204,6 +1204,120 @@ out:
     if (rc) exit(EXIT_FAILURE);
 }
 
+#define MAX_VIRTIO_DISKS 4
+
+static int parse_virtio_disk_config(libxl_device_virtio_disk *virtio_disk, char *token)
+{
+    char *oparg;
+    libxl_string_list disks = NULL;
+    int i, rc;
+
+    if (MATCH_OPTION("backend", token, oparg)) {
+        virtio_disk->backend_domname = strdup(oparg);
+    } else if (MATCH_OPTION("disks", token, oparg)) {
+        split_string_into_string_list(oparg, ";", &disks);
+
+        virtio_disk->num_disks = libxl_string_list_length(&disks);
+        if (virtio_disk->num_disks > MAX_VIRTIO_DISKS) {
+            fprintf(stderr, "vdisk: currently only %d disks are supported",
+                    MAX_VIRTIO_DISKS);
+            return 1;
+        }
+        virtio_disk->disks = xcalloc(virtio_disk->num_disks,
+                                     sizeof(*virtio_disk->disks));
+
+        for(i = 0; i < virtio_disk->num_disks; i++) {
+            char *disk_opt;
+
+            rc = split_string_into_pair(disks[i], ":", &disk_opt,
+                                        &virtio_disk->disks[i].filename);
+            if (rc) {
+                fprintf(stderr, "vdisk: failed to split \"%s\" into pair\n",
+                        disks[i]);
+                goto out;
+            }
+
+            if (!strcmp(disk_opt, "ro"))
+                virtio_disk->disks[i].readonly = 1;
+            else if (!strcmp(disk_opt, "rw"))
+                virtio_disk->disks[i].readonly = 0;
+            else {
+                fprintf(stderr, "vdisk: failed to parse \"%s\" disk option\n",
+                        disk_opt);
+                rc = 1;
+            }
+            free(disk_opt);
+
+            if (rc) goto out;
+        }
+    } else {
+        fprintf(stderr, "Unknown string \"%s\" in vdisk spec\n", token);
+        rc = 1; goto out;
+    }
+
+    rc = 0;
+
+out:
+    libxl_string_list_dispose(&disks);
+    return rc;
+}
+
+static void parse_virtio_disk_list(const XLU_Config *config,
+                            libxl_domain_config *d_config)
+{
+    XLU_ConfigList *virtio_disks;
+    const char *item;
+    char *buf = NULL;
+    int rc;
+
+    if (!xlu_cfg_get_list (config, "vdisk", &virtio_disks, 0, 0)) {
+        libxl_domain_build_info *b_info = &d_config->b_info;
+        int entry = 0;
+
+        /* XXX Remove an extra property */
+        libxl_defbool_setdefault(&b_info->arch_arm.virtio, false);
+        if (!libxl_defbool_val(b_info->arch_arm.virtio)) {
+            fprintf(stderr, "Virtio device requires Virtio property to be set\n");
+            exit(EXIT_FAILURE);
+        }
+
+        while ((item = xlu_cfg_get_listitem(virtio_disks, entry)) != NULL) {
+            libxl_device_virtio_disk *virtio_disk;
+            char *p;
+
+            virtio_disk = ARRAY_EXTEND_INIT(d_config->virtio_disks,
+                                            d_config->num_virtio_disks,
+                                            libxl_device_virtio_disk_init);
+
+            buf = strdup(item);
+
+            p = strtok (buf, ",");
+            while (p != NULL)
+            {
+                while (*p == ' ') p++;
+
+                rc = parse_virtio_disk_config(virtio_disk, p);
+                if (rc) goto out;
+
+                p = strtok (NULL, ",");
+            }
+
+            entry++;
+
+            if (virtio_disk->num_disks == 0) {
+                fprintf(stderr, "At least one virtio disk should be specified\n");
+                rc = 1; goto out;
+            }
+        }
+    }
+
+    rc = 0;
+
+out:
+    free(buf);
+    if (rc) exit(EXIT_FAILURE);
+}
+
 void parse_config_data(const char *config_source,
                        const char *config_data,
                        int config_len,
@@ -2734,6 +2848,7 @@ skip_usbdev:
     }
 
     parse_vkb_list(config, d_config);
+    parse_virtio_disk_list(config, d_config);
 
     xlu_cfg_get_defbool(config, "xend_suspend_evtchn_compat",
                         &c_info->xend_suspend_evtchn_compat, 0);
diff --git a/tools/xl/xl_virtio_disk.c b/tools/xl/xl_virtio_disk.c
new file mode 100644
index 0000000..808a7da
--- /dev/null
+++ b/tools/xl/xl_virtio_disk.c
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2020 EPAM Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include <stdlib.h>
+
+#include <libxl.h>
+#include <libxl_utils.h>
+#include <libxlutil.h>
+
+#include "xl.h"
+#include "xl_utils.h"
+#include "xl_parse.h"
+
+int main_virtio_diskattach(int argc, char **argv)
+{
+    return 0;
+}
+
+int main_virtio_disklist(int argc, char **argv)
+{
+   return 0;
+}
+
+int main_virtio_diskdetach(int argc, char **argv)
+{
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7608.20039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VB-00079D-2G; Thu, 15 Oct 2020 16:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7608.20039; Thu, 15 Oct 2020 16:53:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VA-00078y-PL; Thu, 15 Oct 2020 16:53:16 +0000
Received: by outflank-mailman (input) for mailman id 7608;
 Thu, 15 Oct 2020 16:53:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6O8-0004yr-79
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:00 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35a4e4e2-ae98-4920-98eb-57504424b4da;
 Thu, 15 Oct 2020 16:45:07 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id l2so4404490lfk.0
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:07 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.04
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:04 -0700 (PDT)
X-Inumbo-ID: 35a4e4e2-ae98-4920-98eb-57504424b4da
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=ATLX0ix+70BJdDYt/n8NIsAS6V9Lo7VwWj16nLbs74s=;
        b=mZHr8xPWPC6BwR4Rvshv8pcGslgwoHeHR995f+dQp2HBfJc3N2sA66iK3KBHJZYxkN
         T/Q1JrQPnyjBHR2/3yAjBjkIeMIIhJDrHm6IP+qt2S/XUEmQu8rlcooJaRRD5y9wIMmu
         GnbdX09mpqOk1z3qj4Gz2tsgs/gQP1uOzFdLwpLIzmT7DwvmQF79h7+7kLz6iuWimTBa
         HvDIiqenfWrnRu+gHpWE3cIGhbBoWsjoUwsAr+7bPmnf3RIZmNiA55r0XSTOxfEgP5nl
         VD6KDMs895Kks5NfhecJPYNTrIbvzNyOOv8nx3zVd8lawUE/AIhUmZlcqlSpT4YWjmuO
         5IgA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=ATLX0ix+70BJdDYt/n8NIsAS6V9Lo7VwWj16nLbs74s=;
        b=nYwGJmitEhoZxmrhN5XLsnacY3FYcKGXoM8CRHM9LiAye77C/0DSyi4JMAxdNj1Dfn
         HyNDdUX4mndwyWGJkp8a2AtlVgWo/VGEifgLcrNNJrPQDkmF9X9QPmawjl8zne3BUT/D
         1xEZ52CG0PzzsgAhAtGfI9qYGBq4b0f8Y22jsiZm0WNI3ARrDvLw8en52p8csFJrAfsS
         S9UFBWBvj14g4j7x2mxe4NNfmWzQUd5oMdZVCWK6ME5E/iTflkuJbrdaxqJpZuOzbnKK
         MuRHqGiJs0eksU68hEjj2FxtjrWnQzG4vMDKDsWYggEp9aOnjhFtHkoCuIp0RniEToh8
         LRiA==
X-Gm-Message-State: AOAM533elyY6Ld87fcNXerFvGQ0JhNloYdK8C8KfAn4uzcGrGjQxOj2J
	ThbnGq0+PPemW4PVmak1FOrDGj8O+2bZAQ==
X-Google-Smtp-Source: ABdhPJzoVD12a1fmtxYj1gr0tCGo+X9eLMlAepPqLgCiZ1fzx12+msmYl926taDLXf7MlR7mvynsKg==
X-Received: by 2002:a19:cc8f:: with SMTP id c137mr1396381lfg.476.1602780305418;
        Thu, 15 Oct 2020 09:45:05 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 12/23] xen/ioreq: Remove "hvm" prefixes from involved function names
Date: Thu, 15 Oct 2020 19:44:23 +0300
Message-Id: <1602780274-29141-13-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Remove "hvm" prefixes and infixes from IOREQ-related function
names in the common code.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch
---
 xen/arch/x86/hvm/emulate.c      |   6 +-
 xen/arch/x86/hvm/hvm.c          |  10 +-
 xen/arch/x86/hvm/io.c           |   6 +-
 xen/arch/x86/hvm/stdvga.c       |   4 +-
 xen/arch/x86/hvm/vmx/vvmx.c     |   2 +-
 xen/common/dm.c                 |  28 ++---
 xen/common/ioreq.c              | 240 ++++++++++++++++++++--------------------
 xen/common/memory.c             |   2 +-
 xen/include/asm-x86/hvm/ioreq.h |  16 +--
 xen/include/xen/ioreq.h         |  58 +++++-----
 10 files changed, 186 insertions(+), 186 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index f6a4eef..54cd493 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -261,7 +261,7 @@ static int hvmemul_do_io(
          * an ioreq server that can handle it.
          *
          * Rules:
-         * A> PIO or MMIO accesses run through hvm_select_ioreq_server() to
+         * A> PIO or MMIO accesses run through select_ioreq_server() to
          * choose the ioreq server by range. If no server is found, the access
          * is ignored.
          *
@@ -323,7 +323,7 @@ static int hvmemul_do_io(
         }
 
         if ( !s )
-            s = hvm_select_ioreq_server(currd, &p);
+            s = select_ioreq_server(currd, &p);
 
         /* If there is no suitable backing DM, just ignore accesses */
         if ( !s )
@@ -333,7 +333,7 @@ static int hvmemul_do_io(
         }
         else
         {
-            rc = hvm_send_ioreq(s, &p, 0);
+            rc = send_ioreq(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                 vio->io_req.state = STATE_IOREQ_NONE;
             else if ( !ioreq_needs_completion(&vio->io_req) )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 341093b..1e788b5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -546,7 +546,7 @@ void hvm_do_resume(struct vcpu *v)
 
     pt_restore_timer(v);
 
-    if ( !handle_hvm_io_completion(v) )
+    if ( !handle_io_completion(v) )
         return;
 
     if ( unlikely(v->arch.vm_event) )
@@ -677,7 +677,7 @@ int hvm_domain_initialise(struct domain *d)
     register_g2m_portio_handler(d);
     register_vpci_portio_handler(d);
 
-    hvm_ioreq_init(d);
+    ioreq_init(d);
 
     hvm_init_guest_time(d);
 
@@ -739,7 +739,7 @@ void hvm_domain_relinquish_resources(struct domain *d)
 
     viridian_domain_deinit(d);
 
-    hvm_destroy_all_ioreq_servers(d);
+    destroy_all_ioreq_servers(d);
 
     msixtbl_pt_cleanup(d);
 
@@ -1582,7 +1582,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
     if ( rc )
         goto fail5;
 
-    rc = hvm_all_ioreq_servers_add_vcpu(d, v);
+    rc = all_ioreq_servers_add_vcpu(d, v);
     if ( rc != 0 )
         goto fail6;
 
@@ -1618,7 +1618,7 @@ void hvm_vcpu_destroy(struct vcpu *v)
 {
     viridian_vcpu_deinit(v);
 
-    hvm_all_ioreq_servers_remove_vcpu(v->domain, v);
+    all_ioreq_servers_remove_vcpu(v->domain, v);
 
     if ( hvm_altp2m_supported() )
         altp2m_vcpu_destroy(v);
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index 36584de..2d03ffe 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -60,7 +60,7 @@ void send_timeoffset_req(unsigned long timeoff)
     if ( timeoff == 0 )
         return;
 
-    if ( hvm_broadcast_ioreq(&p, true) != 0 )
+    if ( broadcast_ioreq(&p, true) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful timeoffset update\n");
 }
 
@@ -74,7 +74,7 @@ void send_invalidate_req(void)
         .data = ~0UL, /* flush all */
     };
 
-    if ( hvm_broadcast_ioreq(&p, false) != 0 )
+    if ( broadcast_ioreq(&p, false) != 0 )
         gprintk(XENLOG_ERR, "Unsuccessful map-cache invalidate\n");
 }
 
@@ -155,7 +155,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
          * We should not advance RIP/EIP if the domain is shutting down or
          * if X86EMUL_RETRY has been returned by an internal handler.
          */
-        if ( curr->domain->is_shutting_down || !hvm_io_pending(curr) )
+        if ( curr->domain->is_shutting_down || !io_pending(curr) )
             return false;
         break;
 
diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index bafb3f6..cb1cc7f 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -507,11 +507,11 @@ static int stdvga_mem_write(const struct hvm_io_handler *handler,
     }
 
  done:
-    srv = hvm_select_ioreq_server(current->domain, &p);
+    srv = select_ioreq_server(current->domain, &p);
     if ( !srv )
         return X86EMUL_UNHANDLEABLE;
 
-    return hvm_send_ioreq(srv, &p, 1);
+    return send_ioreq(srv, &p, 1);
 }
 
 static bool_t stdvga_mem_accept(const struct hvm_io_handler *handler,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 3a37e9e..d5a17f12 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1516,7 +1516,7 @@ void nvmx_switch_guest(void)
      * don't want to continue as this setup is not implemented nor supported
      * as of right now.
      */
-    if ( hvm_io_pending(v) )
+    if ( io_pending(v) )
         return;
     /*
      * a softirq may interrupt us between a virtual vmentry is
diff --git a/xen/common/dm.c b/xen/common/dm.c
index 36e01a2..f3a8353 100644
--- a/xen/common/dm.c
+++ b/xen/common/dm.c
@@ -100,8 +100,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad[0] || data->pad[1] || data->pad[2] )
             break;
 
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
+        rc = create_ioreq_server(d, data->handle_bufioreq,
+                                 &data->id);
         break;
     }
 
@@ -117,12 +117,12 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->flags & ~valid_flags )
             break;
 
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : (unsigned long *)&data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : (unsigned long *)&data->bufioreq_gfn,
-                                       &data->bufioreq_port);
+        rc = get_ioreq_server_info(d, data->id,
+                                   (data->flags & XEN_DMOP_no_gfns) ?
+                                   NULL : (unsigned long *)&data->ioreq_gfn,
+                                   (data->flags & XEN_DMOP_no_gfns) ?
+                                   NULL : (unsigned long *)&data->bufioreq_gfn,
+                                   &data->bufioreq_port);
         break;
     }
 
@@ -135,8 +135,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
+        rc = map_io_range_to_ioreq_server(d, data->id, data->type,
+                                          data->start, data->end);
         break;
     }
 
@@ -149,8 +149,8 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
+        rc = unmap_io_range_from_ioreq_server(d, data->id, data->type,
+                                              data->start, data->end);
         break;
     }
 
@@ -163,7 +163,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        rc = set_ioreq_server_state(d, data->id, !!data->enabled);
         break;
     }
 
@@ -176,7 +176,7 @@ static int dm_op(const struct dmop_args *op_args)
         if ( data->pad )
             break;
 
-        rc = hvm_destroy_ioreq_server(d, data->id);
+        rc = destroy_ioreq_server(d, data->id);
         break;
     }
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 57ddaaa..98fffae 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -58,7 +58,7 @@ struct ioreq_server *get_ioreq_server(const struct domain *d,
  * Iterate over all possible ioreq servers.
  *
  * NOTE: The iteration is backwards such that more recently created
- *       ioreq servers are favoured in hvm_select_ioreq_server().
+ *       ioreq servers are favoured in select_ioreq_server().
  *       This is a semantic that previously existed when ioreq servers
  *       were held in a linked list.
  */
@@ -105,12 +105,12 @@ static struct ioreq_vcpu *get_pending_vcpu(const struct vcpu *v,
     return NULL;
 }
 
-bool hvm_io_pending(struct vcpu *v)
+bool io_pending(struct vcpu *v)
 {
     return get_pending_vcpu(v, NULL);
 }
 
-static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
+static bool wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 {
     unsigned int prev_state = STATE_IOREQ_NONE;
     unsigned int state = p->state;
@@ -167,7 +167,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
     return true;
 }
 
-bool handle_hvm_io_completion(struct vcpu *v)
+bool handle_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
     struct vcpu_io *vio = &v->io;
@@ -182,7 +182,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
     }
 
     sv = get_pending_vcpu(v, &s);
-    if ( sv && !hvm_wait_for_io(sv, get_ioreq(s, v)) )
+    if ( sv && !wait_for_io(sv, get_ioreq(s, v)) )
         return false;
 
     vio->io_req.state = ioreq_needs_completion(&vio->io_req) ?
@@ -207,13 +207,13 @@ bool handle_hvm_io_completion(struct vcpu *v)
                           vio->io_req.dir);
 
     default:
-        return arch_hvm_io_completion(io_completion);
+        return arch_io_completion(io_completion);
     }
 
     return true;
 }
 
-static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
+static gfn_t alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -229,7 +229,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
     return INVALID_GFN;
 }
 
-static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
+static gfn_t alloc_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -244,11 +244,11 @@ static gfn_t hvm_alloc_ioreq_gfn(struct ioreq_server *s)
      * If we are out of 'normal' GFNs then we may still have a 'legacy'
      * GFN available.
      */
-    return hvm_alloc_legacy_ioreq_gfn(s);
+    return alloc_legacy_ioreq_gfn(s);
 }
 
-static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
-                                      gfn_t gfn)
+static bool free_legacy_ioreq_gfn(struct ioreq_server *s,
+                                  gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i;
@@ -265,21 +265,21 @@ static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
     return true;
 }
 
-static void hvm_free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
+static void free_ioreq_gfn(struct ioreq_server *s, gfn_t gfn)
 {
     struct domain *d = s->target;
     unsigned int i = gfn_x(gfn) - d->ioreq_gfn.base;
 
     ASSERT(!gfn_eq(gfn, INVALID_GFN));
 
-    if ( !hvm_free_legacy_ioreq_gfn(s, gfn) )
+    if ( !free_legacy_ioreq_gfn(s, gfn) )
     {
         ASSERT(i < sizeof(d->ioreq_gfn.mask) * 8);
         set_bit(i, &d->ioreq_gfn.mask);
     }
 }
 
-static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
+static void unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
 
@@ -289,11 +289,11 @@ static void hvm_unmap_ioreq_gfn(struct ioreq_server *s, bool buf)
     destroy_ring_for_helper(&iorp->va, iorp->page);
     iorp->page = NULL;
 
-    hvm_free_ioreq_gfn(s, iorp->gfn);
+    free_ioreq_gfn(s, iorp->gfn);
     iorp->gfn = INVALID_GFN;
 }
 
-static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
+static int map_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -303,7 +303,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     {
         /*
          * If a page has already been allocated (which will happen on
-         * demand if hvm_get_ioreq_server_frame() is called), then
+         * demand if get_ioreq_server_frame() is called), then
          * mapping a guest frame is not permitted.
          */
         if ( gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -315,7 +315,7 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
     if ( d->is_dying )
         return -EINVAL;
 
-    iorp->gfn = hvm_alloc_ioreq_gfn(s);
+    iorp->gfn = alloc_ioreq_gfn(s);
 
     if ( gfn_eq(iorp->gfn, INVALID_GFN) )
         return -ENOMEM;
@@ -324,12 +324,12 @@ static int hvm_map_ioreq_gfn(struct ioreq_server *s, bool buf)
                                  &iorp->va);
 
     if ( rc )
-        hvm_unmap_ioreq_gfn(s, buf);
+        unmap_ioreq_gfn(s, buf);
 
     return rc;
 }
 
-static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
+static int alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page;
@@ -338,7 +338,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
     {
         /*
          * If a guest frame has already been mapped (which may happen
-         * on demand if hvm_get_ioreq_server_info() is called), then
+         * on demand if get_ioreq_server_info() is called), then
          * allocating a page is not permitted.
          */
         if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
@@ -377,7 +377,7 @@ static int hvm_alloc_ioreq_mfn(struct ioreq_server *s, bool buf)
     return -ENOMEM;
 }
 
-static void hvm_free_ioreq_mfn(struct ioreq_server *s, bool buf)
+static void free_ioreq_mfn(struct ioreq_server *s, bool buf)
 {
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
     struct page_info *page = iorp->page;
@@ -416,7 +416,7 @@ bool is_ioreq_server_page(struct domain *d, const struct page_info *page)
     return found;
 }
 
-static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
+static void remove_ioreq_gfn(struct ioreq_server *s, bool buf)
 
 {
     struct domain *d = s->target;
@@ -431,7 +431,7 @@ static void hvm_remove_ioreq_gfn(struct ioreq_server *s, bool buf)
     clear_page(iorp->va);
 }
 
-static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
+static int add_ioreq_gfn(struct ioreq_server *s, bool buf)
 {
     struct domain *d = s->target;
     struct ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
@@ -450,8 +450,8 @@ static int hvm_add_ioreq_gfn(struct ioreq_server *s, bool buf)
     return rc;
 }
 
-static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
-                                    struct ioreq_vcpu *sv)
+static void update_ioreq_evtchn(struct ioreq_server *s,
+                                struct ioreq_vcpu *sv)
 {
     ASSERT(spin_is_locked(&s->lock));
 
@@ -466,8 +466,8 @@ static void hvm_update_ioreq_evtchn(struct ioreq_server *s,
 #define HANDLE_BUFIOREQ(s) \
     ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
 
-static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
-                                     struct vcpu *v)
+static int ioreq_server_add_vcpu(struct ioreq_server *s,
+                                 struct vcpu *v)
 {
     struct ioreq_vcpu *sv;
     int rc;
@@ -502,7 +502,7 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
     list_add(&sv->list_entry, &s->ioreq_vcpu_list);
 
     if ( s->enabled )
-        hvm_update_ioreq_evtchn(s, sv);
+        update_ioreq_evtchn(s, sv);
 
     spin_unlock(&s->lock);
     return 0;
@@ -518,8 +518,8 @@ static int hvm_ioreq_server_add_vcpu(struct ioreq_server *s,
     return rc;
 }
 
-static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
-                                         struct vcpu *v)
+static void ioreq_server_remove_vcpu(struct ioreq_server *s,
+                                     struct vcpu *v)
 {
     struct ioreq_vcpu *sv;
 
@@ -546,7 +546,7 @@ static void hvm_ioreq_server_remove_vcpu(struct ioreq_server *s,
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
+static void ioreq_server_remove_all_vcpus(struct ioreq_server *s)
 {
     struct ioreq_vcpu *sv, *next;
 
@@ -572,49 +572,49 @@ static void hvm_ioreq_server_remove_all_vcpus(struct ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_map_pages(struct ioreq_server *s)
+static int ioreq_server_map_pages(struct ioreq_server *s)
 {
     int rc;
 
-    rc = hvm_map_ioreq_gfn(s, false);
+    rc = map_ioreq_gfn(s, false);
 
     if ( !rc && HANDLE_BUFIOREQ(s) )
-        rc = hvm_map_ioreq_gfn(s, true);
+        rc = map_ioreq_gfn(s, true);
 
     if ( rc )
-        hvm_unmap_ioreq_gfn(s, false);
+        unmap_ioreq_gfn(s, false);
 
     return rc;
 }
 
-static void hvm_ioreq_server_unmap_pages(struct ioreq_server *s)
+static void ioreq_server_unmap_pages(struct ioreq_server *s)
 {
-    hvm_unmap_ioreq_gfn(s, true);
-    hvm_unmap_ioreq_gfn(s, false);
+    unmap_ioreq_gfn(s, true);
+    unmap_ioreq_gfn(s, false);
 }
 
-static int hvm_ioreq_server_alloc_pages(struct ioreq_server *s)
+static int ioreq_server_alloc_pages(struct ioreq_server *s)
 {
     int rc;
 
-    rc = hvm_alloc_ioreq_mfn(s, false);
+    rc = alloc_ioreq_mfn(s, false);
 
     if ( !rc && (s->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF) )
-        rc = hvm_alloc_ioreq_mfn(s, true);
+        rc = alloc_ioreq_mfn(s, true);
 
     if ( rc )
-        hvm_free_ioreq_mfn(s, false);
+        free_ioreq_mfn(s, false);
 
     return rc;
 }
 
-static void hvm_ioreq_server_free_pages(struct ioreq_server *s)
+static void ioreq_server_free_pages(struct ioreq_server *s)
 {
-    hvm_free_ioreq_mfn(s, true);
-    hvm_free_ioreq_mfn(s, false);
+    free_ioreq_mfn(s, true);
+    free_ioreq_mfn(s, false);
 }
 
-static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
+static void ioreq_server_free_rangesets(struct ioreq_server *s)
 {
     unsigned int i;
 
@@ -622,8 +622,8 @@ static void hvm_ioreq_server_free_rangesets(struct ioreq_server *s)
         rangeset_destroy(s->range[i]);
 }
 
-static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
-                                            ioservid_t id)
+static int ioreq_server_alloc_rangesets(struct ioreq_server *s,
+                                        ioservid_t id)
 {
     unsigned int i;
     int rc;
@@ -655,12 +655,12 @@ static int hvm_ioreq_server_alloc_rangesets(struct ioreq_server *s,
     return 0;
 
  fail:
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     return rc;
 }
 
-static void hvm_ioreq_server_enable(struct ioreq_server *s)
+static void ioreq_server_enable(struct ioreq_server *s)
 {
     struct ioreq_vcpu *sv;
 
@@ -669,29 +669,29 @@ static void hvm_ioreq_server_enable(struct ioreq_server *s)
     if ( s->enabled )
         goto done;
 
-    hvm_remove_ioreq_gfn(s, false);
-    hvm_remove_ioreq_gfn(s, true);
+    remove_ioreq_gfn(s, false);
+    remove_ioreq_gfn(s, true);
 
     s->enabled = true;
 
     list_for_each_entry ( sv,
                           &s->ioreq_vcpu_list,
                           list_entry )
-        hvm_update_ioreq_evtchn(s, sv);
+        update_ioreq_evtchn(s, sv);
 
   done:
     spin_unlock(&s->lock);
 }
 
-static void hvm_ioreq_server_disable(struct ioreq_server *s)
+static void ioreq_server_disable(struct ioreq_server *s)
 {
     spin_lock(&s->lock);
 
     if ( !s->enabled )
         goto done;
 
-    hvm_add_ioreq_gfn(s, true);
-    hvm_add_ioreq_gfn(s, false);
+    add_ioreq_gfn(s, true);
+    add_ioreq_gfn(s, false);
 
     s->enabled = false;
 
@@ -699,9 +699,9 @@ static void hvm_ioreq_server_disable(struct ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_init(struct ioreq_server *s,
-                                 struct domain *d, int bufioreq_handling,
-                                 ioservid_t id)
+static int ioreq_server_init(struct ioreq_server *s,
+                             struct domain *d, int bufioreq_handling,
+                             ioservid_t id)
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
@@ -719,7 +719,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
     s->ioreq.gfn = INVALID_GFN;
     s->bufioreq.gfn = INVALID_GFN;
 
-    rc = hvm_ioreq_server_alloc_rangesets(s, id);
+    rc = ioreq_server_alloc_rangesets(s, id);
     if ( rc )
         return rc;
 
@@ -727,7 +727,7 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
 
     for_each_vcpu ( d, v )
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
+        rc = ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail_add;
     }
@@ -735,39 +735,39 @@ static int hvm_ioreq_server_init(struct ioreq_server *s,
     return 0;
 
  fail_add:
-    hvm_ioreq_server_remove_all_vcpus(s);
-    hvm_ioreq_server_unmap_pages(s);
+    ioreq_server_remove_all_vcpus(s);
+    ioreq_server_unmap_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     put_domain(s->emulator);
     return rc;
 }
 
-static void hvm_ioreq_server_deinit(struct ioreq_server *s)
+static void ioreq_server_deinit(struct ioreq_server *s)
 {
     ASSERT(!s->enabled);
-    hvm_ioreq_server_remove_all_vcpus(s);
+    ioreq_server_remove_all_vcpus(s);
 
     /*
-     * NOTE: It is safe to call both hvm_ioreq_server_unmap_pages() and
-     *       hvm_ioreq_server_free_pages() in that order.
+     * NOTE: It is safe to call both ioreq_server_unmap_pages() and
+     *       ioreq_server_free_pages() in that order.
      *       This is because the former will do nothing if the pages
      *       are not mapped, leaving the page to be freed by the latter.
      *       However if the pages are mapped then the former will set
      *       the page_info pointer to NULL, meaning the latter will do
      *       nothing.
      */
-    hvm_ioreq_server_unmap_pages(s);
-    hvm_ioreq_server_free_pages(s);
+    ioreq_server_unmap_pages(s);
+    ioreq_server_free_pages(s);
 
-    hvm_ioreq_server_free_rangesets(s);
+    ioreq_server_free_rangesets(s);
 
     put_domain(s->emulator);
 }
 
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id)
+int create_ioreq_server(struct domain *d, int bufioreq_handling,
+                        ioservid_t *id)
 {
     struct ioreq_server *s;
     unsigned int i;
@@ -795,11 +795,11 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
 
     /*
      * It is safe to call set_ioreq_server() prior to
-     * hvm_ioreq_server_init() since the target domain is paused.
+     * ioreq_server_init() since the target domain is paused.
      */
     set_ioreq_server(d, i, s);
 
-    rc = hvm_ioreq_server_init(s, d, bufioreq_handling, i);
+    rc = ioreq_server_init(s, d, bufioreq_handling, i);
     if ( rc )
     {
         set_ioreq_server(d, i, NULL);
@@ -822,7 +822,7 @@ int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
     return rc;
 }
 
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
+int destroy_ioreq_server(struct domain *d, ioservid_t id)
 {
     struct ioreq_server *s;
     int rc;
@@ -841,15 +841,15 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
     domain_pause(d);
 
-    arch_hvm_destroy_ioreq_server(s);
+    arch_destroy_ioreq_server(s);
 
-    hvm_ioreq_server_disable(s);
+    ioreq_server_disable(s);
 
     /*
-     * It is safe to call hvm_ioreq_server_deinit() prior to
+     * It is safe to call ioreq_server_deinit() prior to
      * set_ioreq_server() since the target domain is paused.
      */
-    hvm_ioreq_server_deinit(s);
+    ioreq_server_deinit(s);
     set_ioreq_server(d, id, NULL);
 
     domain_unpause(d);
@@ -864,10 +864,10 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
     return rc;
 }
 
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port)
+int get_ioreq_server_info(struct domain *d, ioservid_t id,
+                          unsigned long *ioreq_gfn,
+                          unsigned long *bufioreq_gfn,
+                          evtchn_port_t *bufioreq_port)
 {
     struct ioreq_server *s;
     int rc;
@@ -886,7 +886,7 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 
     if ( ioreq_gfn || bufioreq_gfn )
     {
-        rc = hvm_ioreq_server_map_pages(s);
+        rc = ioreq_server_map_pages(s);
         if ( rc )
             goto out;
     }
@@ -911,8 +911,8 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn)
+int get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                           unsigned long idx, mfn_t *mfn)
 {
     struct ioreq_server *s;
     int rc;
@@ -931,7 +931,7 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     if ( s->emulator != current->domain )
         goto out;
 
-    rc = hvm_ioreq_server_alloc_pages(s);
+    rc = ioreq_server_alloc_pages(s);
     if ( rc )
         goto out;
 
@@ -962,9 +962,9 @@ int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
-                                     uint32_t type, uint64_t start,
-                                     uint64_t end)
+int map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                 uint32_t type, uint64_t start,
+                                 uint64_t end)
 {
     struct ioreq_server *s;
     struct rangeset *r;
@@ -1014,9 +1014,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end)
+int unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint32_t type, uint64_t start,
+                                     uint64_t end)
 {
     struct ioreq_server *s;
     struct rangeset *r;
@@ -1066,8 +1066,8 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled)
+int set_ioreq_server_state(struct domain *d, ioservid_t id,
+                           bool enabled)
 {
     struct ioreq_server *s;
     int rc;
@@ -1087,9 +1087,9 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     domain_pause(d);
 
     if ( enabled )
-        hvm_ioreq_server_enable(s);
+        ioreq_server_enable(s);
     else
-        hvm_ioreq_server_disable(s);
+        ioreq_server_disable(s);
 
     domain_unpause(d);
 
@@ -1100,7 +1100,7 @@ int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
     return rc;
 }
 
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
+int all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -1110,7 +1110,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        rc = hvm_ioreq_server_add_vcpu(s, v);
+        rc = ioreq_server_add_vcpu(s, v);
         if ( rc )
             goto fail;
     }
@@ -1127,7 +1127,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
         if ( !s )
             continue;
 
-        hvm_ioreq_server_remove_vcpu(s, v);
+        ioreq_server_remove_vcpu(s, v);
     }
 
     spin_unlock_recursive(&d->ioreq_server.lock);
@@ -1135,7 +1135,7 @@ int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v)
     return rc;
 }
 
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
+void all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
 {
     struct ioreq_server *s;
     unsigned int id;
@@ -1143,17 +1143,17 @@ void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v)
     spin_lock_recursive(&d->ioreq_server.lock);
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
-        hvm_ioreq_server_remove_vcpu(s, v);
+        ioreq_server_remove_vcpu(s, v);
 
     spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
-void hvm_destroy_all_ioreq_servers(struct domain *d)
+void destroy_all_ioreq_servers(struct domain *d)
 {
     struct ioreq_server *s;
     unsigned int id;
 
-    if ( !arch_hvm_ioreq_destroy(d) )
+    if ( !arch_ioreq_destroy(d) )
         return;
 
     spin_lock_recursive(&d->ioreq_server.lock);
@@ -1162,13 +1162,13 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
     {
-        hvm_ioreq_server_disable(s);
+        ioreq_server_disable(s);
 
         /*
-         * It is safe to call hvm_ioreq_server_deinit() prior to
+         * It is safe to call ioreq_server_deinit() prior to
          * set_ioreq_server() since the target domain is being destroyed.
          */
-        hvm_ioreq_server_deinit(s);
+        ioreq_server_deinit(s);
         set_ioreq_server(d, id, NULL);
 
         xfree(s);
@@ -1177,15 +1177,15 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
     spin_unlock_recursive(&d->ioreq_server.lock);
 }
 
-struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                             ioreq_t *p)
+struct ioreq_server *select_ioreq_server(struct domain *d,
+                                         ioreq_t *p)
 {
     struct ioreq_server *s;
     uint8_t type;
     uint64_t addr;
     unsigned int id;
 
-    if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) )
+    if ( ioreq_server_get_type_addr(d, p, &type, &addr) )
         return NULL;
 
     FOR_EACH_IOREQ_SERVER(d, id, s)
@@ -1233,7 +1233,7 @@ struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
     return NULL;
 }
 
-static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
+static int send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
 {
     struct domain *d = current->domain;
     struct ioreq_page *iorp;
@@ -1326,8 +1326,8 @@ static int hvm_send_buffered_ioreq(struct ioreq_server *s, ioreq_t *p)
     return IOREQ_STATUS_HANDLED;
 }
 
-int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered)
+int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
+               bool buffered)
 {
     struct vcpu *curr = current;
     struct domain *d = curr->domain;
@@ -1336,7 +1336,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
     ASSERT(s);
 
     if ( buffered )
-        return hvm_send_buffered_ioreq(s, proto_p);
+        return send_buffered_ioreq(s, proto_p);
 
     if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
         return IOREQ_STATUS_RETRY;
@@ -1386,7 +1386,7 @@ int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
     return IOREQ_STATUS_UNHANDLED;
 }
 
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
+unsigned int broadcast_ioreq(ioreq_t *p, bool buffered)
 {
     struct domain *d = current->domain;
     struct ioreq_server *s;
@@ -1397,18 +1397,18 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
         if ( !s->enabled )
             continue;
 
-        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
+        if ( send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
             failed++;
     }
 
     return failed;
 }
 
-void hvm_ioreq_init(struct domain *d)
+void ioreq_init(struct domain *d)
 {
     spin_lock_init(&d->ioreq_server.lock);
 
-    arch_hvm_ioreq_init(d);
+    arch_ioreq_init(d);
 }
 
 /*
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 83d800f..cf53ca3 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1070,7 +1070,7 @@ static int acquire_ioreq_server(struct domain *d,
     {
         mfn_t mfn;
 
-        rc = hvm_get_ioreq_server_frame(d, id, frame + i, &mfn);
+        rc = get_ioreq_server_frame(d, id, frame + i, &mfn);
         if ( rc )
             return rc;
 
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 5ed977e..1340441 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -26,7 +26,7 @@
 
 #include <public/hvm/params.h>
 
-static inline bool arch_hvm_io_completion(enum io_completion io_completion)
+static inline bool arch_io_completion(enum io_completion io_completion)
 {
     switch ( io_completion )
     {
@@ -50,7 +50,7 @@ static inline bool arch_hvm_io_completion(enum io_completion io_completion)
 }
 
 /* Called when target domain is paused */
-static inline void arch_hvm_destroy_ioreq_server(struct ioreq_server *s)
+static inline void arch_destroy_ioreq_server(struct ioreq_server *s)
 {
     p2m_set_ioreq_server(s->target, 0, s);
 }
@@ -105,10 +105,10 @@ static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
     return rc;
 }
 
-static inline int hvm_ioreq_server_get_type_addr(const struct domain *d,
-                                                 const ioreq_t *p,
-                                                 uint8_t *type,
-                                                 uint64_t *addr)
+static inline int ioreq_server_get_type_addr(const struct domain *d,
+                                             const ioreq_t *p,
+                                             uint8_t *type,
+                                             uint64_t *addr)
 {
     uint32_t cf8 = d->arch.hvm.pci_cf8;
 
@@ -164,12 +164,12 @@ static inline int hvm_access_cf8(
     return X86EMUL_UNHANDLEABLE;
 }
 
-static inline void arch_hvm_ioreq_init(struct domain *d)
+static inline void arch_ioreq_init(struct domain *d)
 {
     register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
 }
 
-static inline bool arch_hvm_ioreq_destroy(struct domain *d)
+static inline bool arch_ioreq_destroy(struct domain *d)
 {
     if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
         return false;
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 8451866..7b03ab5 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -81,39 +81,39 @@ static inline bool ioreq_needs_completion(const ioreq_t *ioreq)
            (ioreq->type != IOREQ_TYPE_PIO || ioreq->dir != IOREQ_WRITE);
 }
 
-bool hvm_io_pending(struct vcpu *v);
-bool handle_hvm_io_completion(struct vcpu *v);
+bool io_pending(struct vcpu *v);
+bool handle_io_completion(struct vcpu *v);
 bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
 
-int hvm_create_ioreq_server(struct domain *d, int bufioreq_handling,
-                            ioservid_t *id);
-int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id);
-int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
-                              unsigned long *ioreq_gfn,
-                              unsigned long *bufioreq_gfn,
-                              evtchn_port_t *bufioreq_port);
-int hvm_get_ioreq_server_frame(struct domain *d, ioservid_t id,
-                               unsigned long idx, mfn_t *mfn);
-int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+int create_ioreq_server(struct domain *d, int bufioreq_handling,
+                        ioservid_t *id);
+int destroy_ioreq_server(struct domain *d, ioservid_t id);
+int get_ioreq_server_info(struct domain *d, ioservid_t id,
+                          unsigned long *ioreq_gfn,
+                          unsigned long *bufioreq_gfn,
+                          evtchn_port_t *bufioreq_port);
+int get_ioreq_server_frame(struct domain *d, ioservid_t id,
+                           unsigned long idx, mfn_t *mfn);
+int map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
+                                 uint32_t type, uint64_t start,
+                                 uint64_t end);
+int unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                      uint32_t type, uint64_t start,
                                      uint64_t end);
-int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
-                                         uint32_t type, uint64_t start,
-                                         uint64_t end);
-int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
-                               bool enabled);
-
-int hvm_all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
-void hvm_all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
-void hvm_destroy_all_ioreq_servers(struct domain *d);
-
-struct ioreq_server *hvm_select_ioreq_server(struct domain *d,
-                                             ioreq_t *p);
-int hvm_send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
-                   bool buffered);
-unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
-
-void hvm_ioreq_init(struct domain *d);
+int set_ioreq_server_state(struct domain *d, ioservid_t id,
+                           bool enabled);
+
+int all_ioreq_servers_add_vcpu(struct domain *d, struct vcpu *v);
+void all_ioreq_servers_remove_vcpu(struct domain *d, struct vcpu *v);
+void destroy_all_ioreq_servers(struct domain *d);
+
+struct ioreq_server *select_ioreq_server(struct domain *d,
+                                         ioreq_t *p);
+int send_ioreq(struct ioreq_server *s, ioreq_t *proto_p,
+               bool buffered);
+unsigned int broadcast_ioreq(ioreq_t *p, bool buffered);
+
+void ioreq_init(struct domain *d);
 
 #endif /* __XEN_IOREQ_H__ */
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7616.20109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VF-0007JP-7U; Thu, 15 Oct 2020 16:53:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7616.20109; Thu, 15 Oct 2020 16:53:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VE-0007IM-OA; Thu, 15 Oct 2020 16:53:20 +0000
Received: by outflank-mailman (input) for mailman id 7616;
 Thu, 15 Oct 2020 16:53:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Nt-0004yr-6W
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:45 +0000
Received: from mail-lj1-x232.google.com (unknown [2a00:1450:4864:20::232])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5450b3d-aa1d-40c6-a3e9-164035a70f80;
 Thu, 15 Oct 2020 16:45:03 +0000 (UTC)
Received: by mail-lj1-x232.google.com with SMTP id a5so3799244ljj.11
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:03 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.00
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:01 -0700 (PDT)
X-Inumbo-ID: b5450b3d-aa1d-40c6-a3e9-164035a70f80
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=sfzmMa9E3DuJZ5oelKOpzY6Iaa3MXyP15AbWJ9Be3Qk=;
        b=jKxOhK8ZDlH0FqA/Gtr7bY6UBLngn0b5RU37H4EZVxQA3etBXTPTtsrXbdLwuJtUOm
         AgEBFq9GKaPyRmc2PdODcnrTd5tymGQ73in7CTCj9+iq9gbte9asfLL6TDMBYgt/dxb5
         xgmFVxZaq+HlSBpDdTWwxGMn+HBtajBgvAWT7rBCej86suzjaUobdm3T6Ggu2ZVWvjl7
         s22Qyq3O4ViGkQkFVBKUoFkTqhTQ4JR4q5eW83y73DYhz796aIT6PohZt4+hlMsy6poi
         wR6bZKqaRP1C9zoyRJ1pBrS5KLUwI5FEO6DnUmXqwUHGpaFxh7++LHhk7JWsckdp0WeW
         9jHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=sfzmMa9E3DuJZ5oelKOpzY6Iaa3MXyP15AbWJ9Be3Qk=;
        b=jDE21zu213w8G9hmC89XwZ04hNXLOTUwJFkBtotw08I8ax60WatfcOk0njQYyhxjbJ
         eQRXaWFw3TNt1RrPBhjj9Vvd4fjQqJOpi/wXKmTbkiq3wdjVK7AlEBMiJDmSY8k55cCG
         0dzHDwK+u+U/WBDhrfBlKLIUWvMU8sb7wLzUZgyEDzv6mwrnfPsUqM+v3nB4M0r9n5Ev
         YenHs5jlRJ6/gmJgZnddXodUmmNt0abxkrjTdaX63Y5LPXZyPDxM+mwc4PMeN8Zf1Jkd
         Sl6+bXGp4WJGvvzApLE8DZN8Zx6K79qVpgiHdrslHQK4nBAYXHUFcWNi+bdpOZRTu4X3
         Iypg==
X-Gm-Message-State: AOAM5329LOoqOmE/o7BKF5KclwuK7WsQvjmQuwKLCnkUtE0w5eqjCLpt
	pWKFWAMrs3GwpDAJe2xbUC/78vML2zMTzg==
X-Google-Smtp-Source: ABdhPJzeC6CPrDSrI53xbPKpuh5RjvZblgP7T6mH6J0HyB851OnD3ZFRYihKV1c8ypODydHgu6U6cg==
X-Received: by 2002:a2e:85cd:: with SMTP id h13mr1789424ljj.345.1602780301733;
        Thu, 15 Oct 2020 09:45:01 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V2 09/23] xen/dm: Make x86's DM feature common
Date: Thu, 15 Oct 2020 19:44:20 +0300
Message-Id: <1602780274-29141-10-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

As a lot of x86 code can be re-used on Arm later on, this patch
splits device model support into common and arch-specific parts.

The common DM feature is meant to be built with the IOREQ_SERVER
option enabled (as is the IOREQ feature), which is selected by
x86's HVM config for now.

Also update the XSM code a bit to let the DM op be used on Arm.

This support is going to be used on Arm to be able to run device
emulators outside of the Xen hypervisor.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - update XSM, related changes were pulled from:
     [RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - introduce xen/dm.h and move definitions here
---
 xen/arch/x86/hvm/dm.c   | 291 ++++--------------------------------------------
 xen/common/Makefile     |   1 +
 xen/common/dm.c         | 291 ++++++++++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/dm.h    |  44 ++++++++
 xen/include/xsm/dummy.h |   4 +-
 xen/include/xsm/xsm.h   |   6 +-
 xen/xsm/dummy.c         |   2 +-
 xen/xsm/flask/hooks.c   |   5 +-
 8 files changed, 364 insertions(+), 280 deletions(-)
 create mode 100644 xen/common/dm.c
 create mode 100644 xen/include/xen/dm.h

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 71f5ca4..35f860a 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -16,6 +16,7 @@
 
 #include <xen/event.h>
 #include <xen/guest_access.h>
+#include <xen/dm.h>
 #include <xen/hypercall.h>
 #include <xen/nospec.h>
 #include <xen/sched.h>
@@ -29,13 +30,6 @@
 
 #include <public/hvm/hvm_op.h>
 
-struct dmop_args {
-    domid_t domid;
-    unsigned int nr_bufs;
-    /* Reserve enough buf elements for all current hypercalls. */
-    struct xen_dm_op_buf buf[2];
-};
-
 static bool _raw_copy_from_guest_buf_offset(void *dst,
                                             const struct dmop_args *args,
                                             unsigned int buf_idx,
@@ -338,148 +332,20 @@ static int inject_event(struct domain *d,
     return 0;
 }
 
-static int dm_op(const struct dmop_args *op_args)
+int arch_dm_op(struct xen_dm_op *op, struct domain *d,
+               const struct dmop_args *op_args, bool *const_op)
 {
-    struct domain *d;
-    struct xen_dm_op op;
-    bool const_op = true;
     long rc;
-    size_t offset;
-
-    static const uint8_t op_size[] = {
-        [XEN_DMOP_create_ioreq_server]              = sizeof(struct xen_dm_op_create_ioreq_server),
-        [XEN_DMOP_get_ioreq_server_info]            = sizeof(struct xen_dm_op_get_ioreq_server_info),
-        [XEN_DMOP_map_io_range_to_ioreq_server]     = sizeof(struct xen_dm_op_ioreq_server_range),
-        [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
-        [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
-        [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
-        [XEN_DMOP_track_dirty_vram]                 = sizeof(struct xen_dm_op_track_dirty_vram),
-        [XEN_DMOP_set_pci_intx_level]               = sizeof(struct xen_dm_op_set_pci_intx_level),
-        [XEN_DMOP_set_isa_irq_level]                = sizeof(struct xen_dm_op_set_isa_irq_level),
-        [XEN_DMOP_set_pci_link_route]               = sizeof(struct xen_dm_op_set_pci_link_route),
-        [XEN_DMOP_modified_memory]                  = sizeof(struct xen_dm_op_modified_memory),
-        [XEN_DMOP_set_mem_type]                     = sizeof(struct xen_dm_op_set_mem_type),
-        [XEN_DMOP_inject_event]                     = sizeof(struct xen_dm_op_inject_event),
-        [XEN_DMOP_inject_msi]                       = sizeof(struct xen_dm_op_inject_msi),
-        [XEN_DMOP_map_mem_type_to_ioreq_server]     = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server),
-        [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
-        [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
-        [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
-    };
-
-    rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
-    if ( rc )
-        return rc;
-
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_dm_op(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    offset = offsetof(struct xen_dm_op, u);
-
-    rc = -EFAULT;
-    if ( op_args->buf[0].size < offset )
-        goto out;
-
-    if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) )
-        goto out;
-
-    if ( op.op >= ARRAY_SIZE(op_size) )
-    {
-        rc = -EOPNOTSUPP;
-        goto out;
-    }
-
-    op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size));
-
-    if ( op_args->buf[0].size < offset + op_size[op.op] )
-        goto out;
-
-    if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset,
-                                op_size[op.op]) )
-        goto out;
-
-    rc = -EINVAL;
-    if ( op.pad )
-        goto out;
-
-    switch ( op.op )
-    {
-    case XEN_DMOP_create_ioreq_server:
-    {
-        struct xen_dm_op_create_ioreq_server *data =
-            &op.u.create_ioreq_server;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->pad[0] || data->pad[1] || data->pad[2] )
-            break;
-
-        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
-                                     &data->id);
-        break;
-    }
 
-    case XEN_DMOP_get_ioreq_server_info:
+    switch ( op->op )
     {
-        struct xen_dm_op_get_ioreq_server_info *data =
-            &op.u.get_ioreq_server_info;
-        const uint16_t valid_flags = XEN_DMOP_no_gfns;
-
-        const_op = false;
-
-        rc = -EINVAL;
-        if ( data->flags & ~valid_flags )
-            break;
-
-        rc = hvm_get_ioreq_server_info(d, data->id,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->ioreq_gfn,
-                                       (data->flags & XEN_DMOP_no_gfns) ?
-                                       NULL : &data->bufioreq_gfn,
-                                       &data->bufioreq_port);
-        break;
-    }
-
-    case XEN_DMOP_map_io_range_to_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.map_io_range_to_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
-                                              data->start, data->end);
-        break;
-    }
-
-    case XEN_DMOP_unmap_io_range_from_ioreq_server:
-    {
-        const struct xen_dm_op_ioreq_server_range *data =
-            &op.u.unmap_io_range_from_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
-                                                  data->start, data->end);
-        break;
-    }
-
     case XEN_DMOP_map_mem_type_to_ioreq_server:
     {
         struct xen_dm_op_map_mem_type_to_ioreq_server *data =
-            &op.u.map_mem_type_to_ioreq_server;
+            &op->u.map_mem_type_to_ioreq_server;
         unsigned long first_gfn = data->opaque;
 
-        const_op = false;
+        *const_op = false;
 
         rc = -EOPNOTSUPP;
         if ( !hap_enabled(d) )
@@ -523,36 +389,10 @@ static int dm_op(const struct dmop_args *op_args)
         break;
     }
 
-    case XEN_DMOP_set_ioreq_server_state:
-    {
-        const struct xen_dm_op_set_ioreq_server_state *data =
-            &op.u.set_ioreq_server_state;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
-        break;
-    }
-
-    case XEN_DMOP_destroy_ioreq_server:
-    {
-        const struct xen_dm_op_destroy_ioreq_server *data =
-            &op.u.destroy_ioreq_server;
-
-        rc = -EINVAL;
-        if ( data->pad )
-            break;
-
-        rc = hvm_destroy_ioreq_server(d, data->id);
-        break;
-    }
-
     case XEN_DMOP_track_dirty_vram:
     {
         const struct xen_dm_op_track_dirty_vram *data =
-            &op.u.track_dirty_vram;
+            &op->u.track_dirty_vram;
 
         rc = -EINVAL;
         if ( data->pad )
@@ -568,7 +408,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_pci_intx_level:
     {
         const struct xen_dm_op_set_pci_intx_level *data =
-            &op.u.set_pci_intx_level;
+            &op->u.set_pci_intx_level;
 
         rc = set_pci_intx_level(d, data->domain, data->bus,
                                 data->device, data->intx,
@@ -579,7 +419,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_isa_irq_level:
     {
         const struct xen_dm_op_set_isa_irq_level *data =
-            &op.u.set_isa_irq_level;
+            &op->u.set_isa_irq_level;
 
         rc = set_isa_irq_level(d, data->isa_irq, data->level);
         break;
@@ -588,7 +428,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_set_pci_link_route:
     {
         const struct xen_dm_op_set_pci_link_route *data =
-            &op.u.set_pci_link_route;
+            &op->u.set_pci_link_route;
 
         rc = hvm_set_pci_link_route(d, data->link, data->isa_irq);
         break;
@@ -597,19 +437,19 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_modified_memory:
     {
         struct xen_dm_op_modified_memory *data =
-            &op.u.modified_memory;
+            &op->u.modified_memory;
 
         rc = modified_memory(d, op_args, data);
-        const_op = !rc;
+        *const_op = !rc;
         break;
     }
 
     case XEN_DMOP_set_mem_type:
     {
         struct xen_dm_op_set_mem_type *data =
-            &op.u.set_mem_type;
+            &op->u.set_mem_type;
 
-        const_op = false;
+        *const_op = false;
 
         rc = -EINVAL;
         if ( data->pad )
@@ -622,7 +462,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_inject_event:
     {
         const struct xen_dm_op_inject_event *data =
-            &op.u.inject_event;
+            &op->u.inject_event;
 
         rc = -EINVAL;
         if ( data->pad0 || data->pad1 )
@@ -635,7 +475,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_inject_msi:
     {
         const struct xen_dm_op_inject_msi *data =
-            &op.u.inject_msi;
+            &op->u.inject_msi;
 
         rc = -EINVAL;
         if ( data->pad )
@@ -648,7 +488,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_remote_shutdown:
     {
         const struct xen_dm_op_remote_shutdown *data =
-            &op.u.remote_shutdown;
+            &op->u.remote_shutdown;
 
         domain_shutdown(d, data->reason);
         rc = 0;
@@ -657,7 +497,7 @@ static int dm_op(const struct dmop_args *op_args)
 
     case XEN_DMOP_relocate_memory:
     {
-        struct xen_dm_op_relocate_memory *data = &op.u.relocate_memory;
+        struct xen_dm_op_relocate_memory *data = &op->u.relocate_memory;
         struct xen_add_to_physmap xatp = {
             .domid = op_args->domid,
             .size = data->size,
@@ -680,7 +520,7 @@ static int dm_op(const struct dmop_args *op_args)
             data->size -= rc;
             data->src_gfn += rc;
             data->dst_gfn += rc;
-            const_op = false;
+            *const_op = false;
             rc = -ERESTART;
         }
         break;
@@ -689,7 +529,7 @@ static int dm_op(const struct dmop_args *op_args)
     case XEN_DMOP_pin_memory_cacheattr:
     {
         const struct xen_dm_op_pin_memory_cacheattr *data =
-            &op.u.pin_memory_cacheattr;
+            &op->u.pin_memory_cacheattr;
 
         if ( data->pad )
         {
@@ -707,97 +547,6 @@ static int dm_op(const struct dmop_args *op_args)
         break;
     }
 
-    if ( (!rc || rc == -ERESTART) &&
-         !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
-                                           (void *)&op.u, op_size[op.op]) )
-        rc = -EFAULT;
-
- out:
-    rcu_unlock_domain(d);
-
-    return rc;
-}
-
-#include <compat/hvm/dm_op.h>
-
-CHECK_dm_op_create_ioreq_server;
-CHECK_dm_op_get_ioreq_server_info;
-CHECK_dm_op_ioreq_server_range;
-CHECK_dm_op_set_ioreq_server_state;
-CHECK_dm_op_destroy_ioreq_server;
-CHECK_dm_op_track_dirty_vram;
-CHECK_dm_op_set_pci_intx_level;
-CHECK_dm_op_set_isa_irq_level;
-CHECK_dm_op_set_pci_link_route;
-CHECK_dm_op_modified_memory;
-CHECK_dm_op_set_mem_type;
-CHECK_dm_op_inject_event;
-CHECK_dm_op_inject_msi;
-CHECK_dm_op_map_mem_type_to_ioreq_server;
-CHECK_dm_op_remote_shutdown;
-CHECK_dm_op_relocate_memory;
-CHECK_dm_op_pin_memory_cacheattr;
-
-int compat_dm_op(domid_t domid,
-                 unsigned int nr_bufs,
-                 XEN_GUEST_HANDLE_PARAM(void) bufs)
-{
-    struct dmop_args args;
-    unsigned int i;
-    int rc;
-
-    if ( nr_bufs > ARRAY_SIZE(args.buf) )
-        return -E2BIG;
-
-    args.domid = domid;
-    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
-
-    for ( i = 0; i < args.nr_bufs; i++ )
-    {
-        struct compat_dm_op_buf cmp;
-
-        if ( copy_from_guest_offset(&cmp, bufs, i, 1) )
-            return -EFAULT;
-
-#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
-        guest_from_compat_handle((_d_)->h, (_s_)->h)
-
-        XLAT_dm_op_buf(&args.buf[i], &cmp);
-
-#undef XLAT_dm_op_buf_HNDL_h
-    }
-
-    rc = dm_op(&args);
-
-    if ( rc == -ERESTART )
-        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
-                                           domid, nr_bufs, bufs);
-
-    return rc;
-}
-
-long do_dm_op(domid_t domid,
-              unsigned int nr_bufs,
-              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
-{
-    struct dmop_args args;
-    int rc;
-
-    if ( nr_bufs > ARRAY_SIZE(args.buf) )
-        return -E2BIG;
-
-    args.domid = domid;
-    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
-
-    if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) )
-        return -EFAULT;
-
-    rc = dm_op(&args);
-
-    if ( rc == -ERESTART )
-        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
-                                           domid, nr_bufs, bufs);
-
     return rc;
 }
 
diff --git a/xen/common/Makefile b/xen/common/Makefile
index cdb99fb..8c872d3 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -6,6 +6,7 @@ obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
+obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += event_2l.o
 obj-y += event_channel.o
diff --git a/xen/common/dm.c b/xen/common/dm.c
new file mode 100644
index 0000000..36e01a2
--- /dev/null
+++ b/xen/common/dm.c
@@ -0,0 +1,291 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/guest_access.h>
+#include <xen/dm.h>
+#include <xen/hypercall.h>
+#include <xen/ioreq.h>
+#include <xen/nospec.h>
+
+static int dm_op(const struct dmop_args *op_args)
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    long rc;
+    bool const_op = true;
+    const size_t offset = offsetof(struct xen_dm_op, u);
+
+    static const uint8_t op_size[] = {
+        [XEN_DMOP_create_ioreq_server]              = sizeof(struct xen_dm_op_create_ioreq_server),
+        [XEN_DMOP_get_ioreq_server_info]            = sizeof(struct xen_dm_op_get_ioreq_server_info),
+        [XEN_DMOP_map_io_range_to_ioreq_server]     = sizeof(struct xen_dm_op_ioreq_server_range),
+        [XEN_DMOP_unmap_io_range_from_ioreq_server] = sizeof(struct xen_dm_op_ioreq_server_range),
+        [XEN_DMOP_set_ioreq_server_state]           = sizeof(struct xen_dm_op_set_ioreq_server_state),
+        [XEN_DMOP_destroy_ioreq_server]             = sizeof(struct xen_dm_op_destroy_ioreq_server),
+        [XEN_DMOP_track_dirty_vram]                 = sizeof(struct xen_dm_op_track_dirty_vram),
+        [XEN_DMOP_set_pci_intx_level]               = sizeof(struct xen_dm_op_set_pci_intx_level),
+        [XEN_DMOP_set_isa_irq_level]                = sizeof(struct xen_dm_op_set_isa_irq_level),
+        [XEN_DMOP_set_pci_link_route]               = sizeof(struct xen_dm_op_set_pci_link_route),
+        [XEN_DMOP_modified_memory]                  = sizeof(struct xen_dm_op_modified_memory),
+        [XEN_DMOP_set_mem_type]                     = sizeof(struct xen_dm_op_set_mem_type),
+        [XEN_DMOP_inject_event]                     = sizeof(struct xen_dm_op_inject_event),
+        [XEN_DMOP_inject_msi]                       = sizeof(struct xen_dm_op_inject_msi),
+        [XEN_DMOP_map_mem_type_to_ioreq_server]     = sizeof(struct xen_dm_op_map_mem_type_to_ioreq_server),
+        [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
+        [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
+        [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+    };
+
+    rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
+    if ( rc )
+        return rc;
+
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    rc = -EFAULT;
+    if ( op_args->buf[0].size < offset )
+        goto out;
+
+    if ( copy_from_guest_offset((void *)&op, op_args->buf[0].h, 0, offset) )
+        goto out;
+
+    if ( op.op >= ARRAY_SIZE(op_size) )
+    {
+        rc = -EOPNOTSUPP;
+        goto out;
+    }
+
+    op.op = array_index_nospec(op.op, ARRAY_SIZE(op_size));
+
+    if ( op_args->buf[0].size < offset + op_size[op.op] )
+        goto out;
+
+    if ( copy_from_guest_offset((void *)&op.u, op_args->buf[0].h, offset,
+                                op_size[op.op]) )
+        goto out;
+
+    rc = -EINVAL;
+    if ( op.pad )
+        goto out;
+
+    switch ( op.op )
+    {
+    case XEN_DMOP_create_ioreq_server:
+    {
+        struct xen_dm_op_create_ioreq_server *data =
+            &op.u.create_ioreq_server;
+
+        const_op = false;
+
+        rc = -EINVAL;
+        if ( data->pad[0] || data->pad[1] || data->pad[2] )
+            break;
+
+        rc = hvm_create_ioreq_server(d, data->handle_bufioreq,
+                                     &data->id);
+        break;
+    }
+
+    case XEN_DMOP_get_ioreq_server_info:
+    {
+        struct xen_dm_op_get_ioreq_server_info *data =
+            &op.u.get_ioreq_server_info;
+        const uint16_t valid_flags = XEN_DMOP_no_gfns;
+
+        const_op = false;
+
+        rc = -EINVAL;
+        if ( data->flags & ~valid_flags )
+            break;
+
+        rc = hvm_get_ioreq_server_info(d, data->id,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : (unsigned long *)&data->ioreq_gfn,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : (unsigned long *)&data->bufioreq_gfn,
+                                       &data->bufioreq_port);
+        break;
+    }
+
+    case XEN_DMOP_map_io_range_to_ioreq_server:
+    {
+        const struct xen_dm_op_ioreq_server_range *data =
+            &op.u.map_io_range_to_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
+                                              data->start, data->end);
+        break;
+    }
+
+    case XEN_DMOP_unmap_io_range_from_ioreq_server:
+    {
+        const struct xen_dm_op_ioreq_server_range *data =
+            &op.u.unmap_io_range_from_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
+                                                  data->start, data->end);
+        break;
+    }
+
+    case XEN_DMOP_set_ioreq_server_state:
+    {
+        const struct xen_dm_op_set_ioreq_server_state *data =
+            &op.u.set_ioreq_server_state;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        break;
+    }
+
+    case XEN_DMOP_destroy_ioreq_server:
+    {
+        const struct xen_dm_op_destroy_ioreq_server *data =
+            &op.u.destroy_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_destroy_ioreq_server(d, data->id);
+        break;
+    }
+
+    default:
+        rc = arch_dm_op(&op, d, op_args, &const_op);
+    }
+
+    if ( (!rc || rc == -ERESTART) &&
+         !const_op && copy_to_guest_offset(op_args->buf[0].h, offset,
+                                           (void *)&op.u, op_size[op.op]) )
+        rc = -EFAULT;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+#ifdef CONFIG_COMPAT
+#include <compat/hvm/dm_op.h>
+
+CHECK_dm_op_create_ioreq_server;
+CHECK_dm_op_get_ioreq_server_info;
+CHECK_dm_op_ioreq_server_range;
+CHECK_dm_op_set_ioreq_server_state;
+CHECK_dm_op_destroy_ioreq_server;
+CHECK_dm_op_track_dirty_vram;
+CHECK_dm_op_set_pci_intx_level;
+CHECK_dm_op_set_isa_irq_level;
+CHECK_dm_op_set_pci_link_route;
+CHECK_dm_op_modified_memory;
+CHECK_dm_op_set_mem_type;
+CHECK_dm_op_inject_event;
+CHECK_dm_op_inject_msi;
+CHECK_dm_op_map_mem_type_to_ioreq_server;
+CHECK_dm_op_remote_shutdown;
+CHECK_dm_op_relocate_memory;
+CHECK_dm_op_pin_memory_cacheattr;
+
+int compat_dm_op(domid_t domid,
+                 unsigned int nr_bufs,
+                 XEN_GUEST_HANDLE_PARAM(void) bufs)
+{
+    struct dmop_args args;
+    unsigned int i;
+    int rc;
+
+    if ( nr_bufs > ARRAY_SIZE(args.buf) )
+        return -E2BIG;
+
+    args.domid = domid;
+    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
+
+    for ( i = 0; i < args.nr_bufs; i++ )
+    {
+        struct compat_dm_op_buf cmp;
+
+        if ( copy_from_guest_offset(&cmp, bufs, i, 1) )
+            return -EFAULT;
+
+#define XLAT_dm_op_buf_HNDL_h(_d_, _s_) \
+        guest_from_compat_handle((_d_)->h, (_s_)->h)
+
+        XLAT_dm_op_buf(&args.buf[i], &cmp);
+
+#undef XLAT_dm_op_buf_HNDL_h
+    }
+
+    rc = dm_op(&args);
+
+    if ( rc == -ERESTART )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
+#endif
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct dmop_args args;
+    int rc;
+
+    if ( nr_bufs > ARRAY_SIZE(args.buf) )
+        return -E2BIG;
+
+    args.domid = domid;
+    args.nr_bufs = array_index_nospec(nr_bufs, ARRAY_SIZE(args.buf) + 1);
+
+    if ( copy_from_guest_offset(&args.buf[0], bufs, 0, args.nr_bufs) )
+        return -EFAULT;
+
+    rc = dm_op(&args);
+
+    if ( rc == -ERESTART )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/dm.h b/xen/include/xen/dm.h
new file mode 100644
index 0000000..ef15edf
--- /dev/null
+++ b/xen/include/xen/dm.h
@@ -0,0 +1,44 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __XEN_DM_H__
+#define __XEN_DM_H__
+
+#include <xen/sched.h>
+
+struct dmop_args {
+    domid_t domid;
+    unsigned int nr_bufs;
+    /* Reserve enough buf elements for all current hypercalls. */
+    struct xen_dm_op_buf buf[2];
+};
+
+int arch_dm_op(struct xen_dm_op *op,
+               struct domain *d,
+               const struct dmop_args *op_args,
+               bool *const_op);
+
+#endif /* __XEN_DM_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 7ae3c40..5c61d8e 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -707,14 +707,14 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
     }
 }
 
+#endif /* CONFIG_X86 */
+
 static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
     return xsm_default_action(action, current->domain, d);
 }
 
-#endif /* CONFIG_X86 */
-
 #ifdef CONFIG_ARGO
 static XSM_INLINE int xsm_argo_enable(const struct domain *d)
 {
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 358ec13..517f78a 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -177,8 +177,8 @@ struct xsm_operations {
     int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*pmu_op) (struct domain *d, unsigned int op);
-    int (*dm_op) (struct domain *d);
 #endif
+    int (*dm_op) (struct domain *d);
     int (*xen_version) (uint32_t cmd);
     int (*domain_resource_map) (struct domain *d);
 #ifdef CONFIG_ARGO
@@ -683,13 +683,13 @@ static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int
     return xsm_ops->pmu_op(d, op);
 }
 
+#endif /* CONFIG_X86 */
+
 static inline int xsm_dm_op(xsm_default_t def, struct domain *d)
 {
     return xsm_ops->dm_op(d);
 }
 
-#endif /* CONFIG_X86 */
-
 static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
 {
     return xsm_ops->xen_version(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 9e09512..8bdffe7 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -147,8 +147,8 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, ioport_permission);
     set_to_dummy_if_null(ops, ioport_mapping);
     set_to_dummy_if_null(ops, pmu_op);
-    set_to_dummy_if_null(ops, dm_op);
 #endif
+    set_to_dummy_if_null(ops, dm_op);
     set_to_dummy_if_null(ops, xen_version);
     set_to_dummy_if_null(ops, domain_resource_map);
 #ifdef CONFIG_ARGO
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index de050cc..8f3f182 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1666,14 +1666,13 @@ static int flask_pmu_op (struct domain *d, unsigned int op)
         return -EPERM;
     }
 }
+#endif /* CONFIG_X86 */
 
 static int flask_dm_op(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_HVM, HVM__DM);
 }
 
-#endif /* CONFIG_X86 */
-
 static int flask_xen_version (uint32_t op)
 {
     u32 dsid = domain_sid(current->domain);
@@ -1875,8 +1874,8 @@ static struct xsm_operations flask_ops = {
     .ioport_permission = flask_ioport_permission,
     .ioport_mapping = flask_ioport_mapping,
     .pmu_op = flask_pmu_op,
-    .dm_op = flask_dm_op,
 #endif
+    .dm_op = flask_dm_op,
     .xen_version = flask_xen_version,
     .domain_resource_map = flask_domain_resource_map,
 #ifdef CONFIG_ARGO
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7618.20120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VG-0007Me-Jr; Thu, 15 Oct 2020 16:53:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7618.20120; Thu, 15 Oct 2020 16:53:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VF-0007Lc-QV; Thu, 15 Oct 2020 16:53:21 +0000
Received: by outflank-mailman (input) for mailman id 7618;
 Thu, 15 Oct 2020 16:53:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6OS-0004yr-7c
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:20 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32af136a-5429-4b00-9e08-60c144cc0ab3;
 Thu, 15 Oct 2020 16:45:12 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id j30so4384033lfp.4
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:11 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:09 -0700 (PDT)
X-Inumbo-ID: 32af136a-5429-4b00-9e08-60c144cc0ab3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=zwTLjXhVFQZPY98bDXqDnVXlV5nN/zRcFtlAgAIIKtA=;
        b=lrST4bt64gs4PNz3UdY5vzi3xN2250k4Zd38/7wosQDzG03cHDGXRkLEDBmTeiyRxt
         FrRApvPNjs9JWltwznEGxCZYSdEiTXQ/I1kijXKVHPEIIQVvOyFPeayMMa6QqwoIgRDO
         1Nj7CwAyQ3npadYy6JQiUHwh4xCynnUK+i13pyr9/c9+ms4oDHWa1Gk1PwgzYZDLNTES
         cKuLeCcb8NHUR5VZ7SCQlMACJ65DlDs+drUw31zdxlZDY92CJkeYcCLoODKKdirD0Xnd
         ktsrDym6ha0aADhF2FdOs5XCLKiRAli200vSwDOX6RIwcnBllyH+nUZMkjQ2hjAj0PjS
         ySWw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=zwTLjXhVFQZPY98bDXqDnVXlV5nN/zRcFtlAgAIIKtA=;
        b=anyWYJGhqJBwqg15O2wDlAT8c/BZcuhV2wziqUkiLc0cboE6iOIJ+zxgUwyDsUo9Qm
         WsLAcvLqDl3DgBNnf9v4PfSRNl2Dvwe6AyX8CmkS602JPSE8gvTGmCXg5iVs1znRuYrD
         54HXGOszZG5xzun7Xtd+bkejvYHjDrGiMH1n0WENw5hxeU7t5Us6SWJKlRSq0zcYfLog
         lqBusVMp+/MDdOxJdatVnX1nnABZggcu7+ZtbO0tSHqsFeKY1KfPrL1YbA2i4FffofM6
         tvESRlDyOzsC3QiutNCy4e1MvTvb0RcSimvgI8Twpzg1ems9HVQLPjhx9ifE6Zc9UzjV
         uJ0g==
X-Gm-Message-State: AOAM533KE4+UARVkHh20gBLeqrjC0AdCApcPFLUQdLuhwltlFXF8gLbT
	2U2lhoDOdJrT9NnzgG+b8NwDjqdKv6sn2w==
X-Google-Smtp-Source: ABdhPJxqqHSRzjcRFUYAOzYiBglt3Ho3KYEWGxvHa5/nJyWPqDD9K40O/lvReItlJHMTfAJak3MEdg==
X-Received: by 2002:ac2:592d:: with SMTP id v13mr1271605lfi.355.1602780310299;
        Thu, 15 Oct 2020 09:45:10 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Thu, 15 Oct 2020 19:44:28 +0300
Message-Id: <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch introduces a helper whose main purpose is to check
whether a domain is using one or more IOREQ servers.

On Arm the immediate benefit is to avoid calling handle_io_completion()
(which would iterate over all possible IOREQ servers anyway)
on every return in leave_hypervisor_to_guest() when there are no active
servers for the particular domain.
This helper will also be used by one of the subsequent Arm patches.

This involves adding an extra per-domain variable to store the count
of servers in use.
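As a rough stand-alone sketch of the bookkeeping described above (names
mirror the hunks below, but the types here are simplified stand-ins, not
the real Xen definitions): the slot setter keeps the new per-domain
counter in sync, and the helper just tests it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_NR_IOREQ_SERVERS 8

struct ioreq_server { int dummy; };   /* stand-in for the real struct */

struct domain {
    struct {
        struct ioreq_server *server[MAX_NR_IOREQ_SERVERS];
        unsigned int nr_servers;      /* new field added by this patch */
    } ioreq_server;
};

/* Mirrors set_ioreq_server(): a slot only ever transitions between NULL
 * and non-NULL, so the counter tracks how many slots are occupied. */
static void set_ioreq_server(struct domain *d, unsigned int id,
                             struct ioreq_server *s)
{
    assert(id < MAX_NR_IOREQ_SERVERS);
    /* Exactly one of (current slot value, new value) must be NULL. */
    assert(d->ioreq_server.server[id] ? !s : !!s);

    d->ioreq_server.server[id] = s;

    if ( s )
        d->ioreq_server.nr_servers++;
    else
        d->ioreq_server.nr_servers--;
}

static bool domain_has_ioreq_server(const struct domain *d)
{
    return d->ioreq_server.nr_servers;
}
```

This is only a model: the real helper additionally ASSERT()s that the
caller is either a vCPU of the domain or holds the domain paused, since
the counter is not read under the ioreq_server lock.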

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch

Changes V1 -> V2:
   - update patch description
   - guard helper with CONFIG_IOREQ_SERVER
   - remove "hvm" prefix
   - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
   - put suitable ASSERT()s
   - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
   - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
---
 xen/arch/arm/traps.c    | 15 +++++++++------
 xen/common/ioreq.c      |  7 ++++++-
 xen/include/xen/ioreq.h | 14 ++++++++++++++
 xen/include/xen/sched.h |  1 +
 4 files changed, 30 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 507c095..a8f5fdf 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
     struct vcpu *v = current;
 
 #ifdef CONFIG_IOREQ_SERVER
-    bool handled;
+    if ( domain_has_ioreq_server(v->domain) )
+    {
+        bool handled;
 
-    local_irq_enable();
-    handled = handle_io_completion(v);
-    local_irq_disable();
+        local_irq_enable();
+        handled = handle_io_completion(v);
+        local_irq_disable();
 
-    if ( !handled )
-        return true;
+        if ( !handled )
+            return true;
+    }
 #endif
 
     if ( likely(!v->arch.need_flush_to_ram) )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index bcd4961..a72bc0e 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
                              struct ioreq_server *s)
 {
     ASSERT(id < MAX_NR_IOREQ_SERVERS);
-    ASSERT(!s || !d->ioreq_server.server[id]);
+    ASSERT(d->ioreq_server.server[id] ? !s : !!s);
 
     d->ioreq_server.server[id] = s;
+
+    if ( s )
+        d->ioreq_server.nr_servers++;
+    else
+        d->ioreq_server.nr_servers--;
 }
 
 #define GET_IOREQ_SERVER(d, id) \
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index 7b03ab5..0679fef 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -55,6 +55,20 @@ struct ioreq_server {
     uint8_t                bufioreq_handling;
 };
 
+#ifdef CONFIG_IOREQ_SERVER
+static inline bool domain_has_ioreq_server(const struct domain *d)
+{
+    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
+
+    return d->ioreq_server.nr_servers;
+}
+#else
+static inline bool domain_has_ioreq_server(const struct domain *d)
+{
+    return false;
+}
+#endif
+
 struct ioreq_server *get_ioreq_server(const struct domain *d,
                                       unsigned int id);
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index f9ce14c..290cddb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -553,6 +553,7 @@ struct domain
     struct {
         spinlock_t              lock;
         struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
+        unsigned int            nr_servers;
     } ioreq_server;
 #endif
 };
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7619.20131 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VH-0007Pk-Hs; Thu, 15 Oct 2020 16:53:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7619.20131; Thu, 15 Oct 2020 16:53:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VG-0007OG-Oj; Thu, 15 Oct 2020 16:53:22 +0000
Received: by outflank-mailman (input) for mailman id 7619;
 Thu, 15 Oct 2020 16:53:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Ny-0004yr-6m
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:45:50 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82abcbad-658a-46ea-bdc7-8244eb16a9ab;
 Thu, 15 Oct 2020 16:45:05 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id r127so4314370lff.12
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:05 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.03
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:03 -0700 (PDT)
X-Inumbo-ID: 82abcbad-658a-46ea-bdc7-8244eb16a9ab
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=GIFa+EcRcUxKPr2Jpa1jfhOfy5HsLRU2xEhliboMdRU=;
        b=UB0MpQUyBFx0qhLWnadeL1UXzp33IaWRUriTv4CK4hiHV8QQsOhShN8Ry5bycZ1ocz
         jI21J4JlDvFJFXsLeNaZmAdZKDkTAvst9UHyKM+KpsM88AgFv72UYLYFZXJVSpOuIpwE
         mO7hMeygJpeQHd0gG0nPK1I2BNoC4s9H55wHOrLmT/iLpfWBvgQLKWDlM7YQWwnRwAuz
         ndxe5JxiZ3uxrlMBciaKrbKQXqf1dE5shLh/Hi3xpMsF9i1WkW5+aUUQz6sYbta+hor7
         LX3BpM/wrnxfFoXQisKl81neK5J8XQzerMW+yquGpT0YWmf98XhB3+7SIfLfDLZQQ220
         fIdQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=GIFa+EcRcUxKPr2Jpa1jfhOfy5HsLRU2xEhliboMdRU=;
        b=Vow/5GMxmh6oOrjDBsxYZVaPxivClyY2yNKxHzkunOxnorg09L7jBm8sEtYnL43t6L
         jolmCte2+gvw0STRH+eR4H+C4Iwz8Jwd9seVZ8N3odPM5k80joBKspbbDFIoqSPveRLS
         UiorDTGDIjGtD1HTUxeFwigR4SL3G/xg7CuR/HWmfe/jtDvCV0bkiyOhpK8NO2fEFk6n
         kHQCC9h7yNBJXYtTlfJ5x00yj8P5acX0UyEuxlNsJjCzQPlyJMWyevFcgG8dGbcsCtDo
         LhQc9nkUy8WHE05IIDIDi7FZzbEWAV0vuaO2qzlprixonWYLj7EB+p96V+pLzsq+g/iv
         c1KA==
X-Gm-Message-State: AOAM533bHk5b217HLIkkxVUIyDjbQAOiHFblf2Mnwz17R81aCkcDVnV2
	qI7EAHWIcWcIK/LbvbIqa9fXd5vQq6pMGQ==
X-Google-Smtp-Source: ABdhPJxuegf36jDJMZnb4uqeaF5mSemQuPEeLn4XCufxv1K20iqFeDNtPrA2vNZhJzFktUL0chy2hw==
X-Received: by 2002:ac2:4dad:: with SMTP id h13mr1457757lfe.351.1602780304203;
        Thu, 15 Oct 2020 09:45:04 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jun Nakajima <jun.nakajima@intel.com>,
	Kevin Tian <kevin.tian@intel.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
Date: Thu, 15 Oct 2020 19:44:22 +0300
Message-Id: <1602780274-29141-12-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

IOREQ is a common feature now and these fields will be used
on Arm as-is. Move them to the common struct vcpu as part of a new
struct vcpu_io. Also move enum hvm_io_completion to xen/sched.h
and drop the "hvm" prefixes.

This patch completely removes the layering violation in the common code.
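A minimal stand-alone sketch of the resulting layout (field names follow
the hunks below; the types here are simplified stand-ins rather than the
real Xen definitions, which are not all visible in this patch):

```c
#include <assert.h>

/* Hypothetical model: after this patch the io_req/io_completion pair
 * lives in a common struct vcpu_io embedded in struct vcpu, instead of
 * in x86's arch-specific struct hvm_vcpu_io. */

enum io_completion {        /* was enum hvm_io_completion, "hvm" dropped */
    IO_no_completion,
    IO_mmio_completion,
    IO_pio_completion,
};

/* Stand-in for ioreq_t. */
struct ioreq {
    int state;              /* e.g. STATE_IOREQ_NONE / STATE_IORESP_READY */
};

struct vcpu_io {
    enum io_completion io_completion;
    struct ioreq       io_req;
};

struct vcpu {
    struct vcpu_io io;      /* common code now accesses v->io.* */
};

/* Mirrors the hvmemul_cancel() hunk: reset the in-flight request. */
static void cancel_io(struct vcpu *v)
{
    v->io.io_req.state = 0;                 /* STATE_IOREQ_NONE */
    v->io.io_completion = IO_no_completion;
}
```

The point of the move is that common code such as xen/common/ioreq.c can
then reach these fields through v->io without peeking into
v->arch.hvm.hvm_io.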

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
I was thinking it might be better to place these two fields
into struct vcpu directly (without the intermediate "io" struct).
That way the code which operates on these fields would become
cleaner. Another possible option would be either to rename the
"io" struct (I failed to think of a better name) or to drop
(replace?) the duplicated "io" prefixes from these fields.
***

Changes V1 -> V2:
   - new patch
---
 xen/arch/x86/hvm/emulate.c        | 50 +++++++++++++++++++--------------------
 xen/arch/x86/hvm/hvm.c            |  2 +-
 xen/arch/x86/hvm/io.c             |  6 ++---
 xen/arch/x86/hvm/svm/nestedsvm.c  |  2 +-
 xen/arch/x86/hvm/vmx/realmode.c   |  6 ++---
 xen/common/ioreq.c                | 14 +++++------
 xen/include/asm-x86/hvm/emulate.h |  2 +-
 xen/include/asm-x86/hvm/ioreq.h   |  4 ++--
 xen/include/asm-x86/hvm/vcpu.h    | 11 ---------
 xen/include/xen/sched.h           | 17 +++++++++++++
 10 files changed, 60 insertions(+), 54 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 4746d5a..f6a4eef 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -142,8 +142,8 @@ void hvmemul_cancel(struct vcpu *v)
 {
     struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
 
-    vio->io_req.state = STATE_IOREQ_NONE;
-    vio->io_completion = HVMIO_no_completion;
+    v->io.io_req.state = STATE_IOREQ_NONE;
+    v->io.io_completion = IO_no_completion;
     vio->mmio_cache_count = 0;
     vio->mmio_insn_bytes = 0;
     vio->mmio_access = (struct npfec){};
@@ -159,7 +159,7 @@ static int hvmemul_do_io(
 {
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &curr->io;
     ioreq_t p = {
         .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
         .addr = addr,
@@ -1854,7 +1854,7 @@ static int hvmemul_rep_movs(
           * cheaper than multiple round trips through the device model. Yet
           * when processing a response we can always re-use the translation.
           */
-         (vio->io_req.state == STATE_IORESP_READY ||
+         (curr->io.io_req.state == STATE_IORESP_READY ||
           ((!df || *reps == 1) &&
            PAGE_SIZE - (saddr & ~PAGE_MASK) >= *reps * bytes_per_rep)) )
         sgpa = pfn_to_paddr(vio->mmio_gpfn) | (saddr & ~PAGE_MASK);
@@ -1870,7 +1870,7 @@ static int hvmemul_rep_movs(
     if ( vio->mmio_access.write_access &&
          (vio->mmio_gla == (daddr & PAGE_MASK)) &&
          /* See comment above. */
-         (vio->io_req.state == STATE_IORESP_READY ||
+         (curr->io.io_req.state == STATE_IORESP_READY ||
           ((!df || *reps == 1) &&
            PAGE_SIZE - (daddr & ~PAGE_MASK) >= *reps * bytes_per_rep)) )
         dgpa = pfn_to_paddr(vio->mmio_gpfn) | (daddr & ~PAGE_MASK);
@@ -2007,7 +2007,7 @@ static int hvmemul_rep_stos(
     if ( vio->mmio_access.write_access &&
          (vio->mmio_gla == (addr & PAGE_MASK)) &&
          /* See respective comment in MOVS processing. */
-         (vio->io_req.state == STATE_IORESP_READY ||
+         (curr->io.io_req.state == STATE_IORESP_READY ||
           ((!df || *reps == 1) &&
            PAGE_SIZE - (addr & ~PAGE_MASK) >= *reps * bytes_per_rep)) )
         gpa = pfn_to_paddr(vio->mmio_gpfn) | (addr & ~PAGE_MASK);
@@ -2613,13 +2613,13 @@ static const struct x86_emulate_ops hvm_emulate_ops_no_write = {
 };
 
 /*
- * Note that passing HVMIO_no_completion into this function serves as kind
+ * Note that passing IO_no_completion into this function serves as kind
  * of (but not fully) an "auto select completion" indicator.  When there's
  * no completion needed, the passed in value will be ignored in any case.
  */
 static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     const struct x86_emulate_ops *ops,
-    enum hvm_io_completion completion)
+    enum io_completion completion)
 {
     const struct cpu_user_regs *regs = hvmemul_ctxt->ctxt.regs;
     struct vcpu *curr = current;
@@ -2634,11 +2634,11 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
      */
     if ( vio->cache->num_ents > vio->cache->max_ents )
     {
-        ASSERT(vio->io_req.state == STATE_IOREQ_NONE);
+        ASSERT(curr->io.io_req.state == STATE_IOREQ_NONE);
         vio->cache->num_ents = 0;
     }
     else
-        ASSERT(vio->io_req.state == STATE_IORESP_READY);
+        ASSERT(curr->io.io_req.state == STATE_IORESP_READY);
 
     hvm_emulate_init_per_insn(hvmemul_ctxt, vio->mmio_insn,
                               vio->mmio_insn_bytes);
@@ -2649,25 +2649,25 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
     if ( rc == X86EMUL_OKAY && vio->mmio_retry )
         rc = X86EMUL_RETRY;
 
-    if ( !ioreq_needs_completion(&vio->io_req) )
-        completion = HVMIO_no_completion;
-    else if ( completion == HVMIO_no_completion )
-        completion = (vio->io_req.type != IOREQ_TYPE_PIO ||
-                      hvmemul_ctxt->is_mem_access) ? HVMIO_mmio_completion
-                                                   : HVMIO_pio_completion;
+    if ( !ioreq_needs_completion(&curr->io.io_req) )
+        completion = IO_no_completion;
+    else if ( completion == IO_no_completion )
+        completion = (curr->io.io_req.type != IOREQ_TYPE_PIO ||
+                      hvmemul_ctxt->is_mem_access) ? IO_mmio_completion
+                                                   : IO_pio_completion;
 
-    switch ( vio->io_completion = completion )
+    switch ( curr->io.io_completion = completion )
     {
-    case HVMIO_no_completion:
-    case HVMIO_pio_completion:
+    case IO_no_completion:
+    case IO_pio_completion:
         vio->mmio_cache_count = 0;
         vio->mmio_insn_bytes = 0;
         vio->mmio_access = (struct npfec){};
         hvmemul_cache_disable(curr);
         break;
 
-    case HVMIO_mmio_completion:
-    case HVMIO_realmode_completion:
+    case IO_mmio_completion:
+    case IO_realmode_completion:
         BUILD_BUG_ON(sizeof(vio->mmio_insn) < sizeof(hvmemul_ctxt->insn_buf));
         vio->mmio_insn_bytes = hvmemul_ctxt->insn_buf_bytes;
         memcpy(vio->mmio_insn, hvmemul_ctxt->insn_buf, vio->mmio_insn_bytes);
@@ -2716,7 +2716,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
 
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
-    enum hvm_io_completion completion)
+    enum io_completion completion)
 {
     return _hvm_emulate_one(hvmemul_ctxt, &hvm_emulate_ops, completion);
 }
@@ -2754,7 +2754,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
                           guest_cpu_user_regs());
     ctxt.ctxt.data = &mmio_ro_ctxt;
 
-    switch ( rc = _hvm_emulate_one(&ctxt, ops, HVMIO_no_completion) )
+    switch ( rc = _hvm_emulate_one(&ctxt, ops, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
     case X86EMUL_UNIMPLEMENTED:
@@ -2782,7 +2782,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     {
     case EMUL_KIND_NOWRITE:
         rc = _hvm_emulate_one(&ctx, &hvm_emulate_ops_no_write,
-                              HVMIO_no_completion);
+                              IO_no_completion);
         break;
     case EMUL_KIND_SET_CONTEXT_INSN: {
         struct vcpu *curr = current;
@@ -2803,7 +2803,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     /* Fall-through */
     default:
         ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
-        rc = hvm_emulate_one(&ctx, HVMIO_no_completion);
+        rc = hvm_emulate_one(&ctx, IO_no_completion);
     }
 
     switch ( rc )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 20376ce..341093b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3800,7 +3800,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
         return;
     }
 
-    switch ( hvm_emulate_one(&ctxt, HVMIO_no_completion) )
+    switch ( hvm_emulate_one(&ctxt, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
     case X86EMUL_UNIMPLEMENTED:
diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index b220d6b..36584de 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -85,7 +85,7 @@ bool hvm_emulate_one_insn(hvm_emulate_validate_t *validate, const char *descr)
 
     hvm_emulate_init_once(&ctxt, validate, guest_cpu_user_regs());
 
-    switch ( rc = hvm_emulate_one(&ctxt, HVMIO_no_completion) )
+    switch ( rc = hvm_emulate_one(&ctxt, IO_no_completion) )
     {
     case X86EMUL_UNHANDLEABLE:
         hvm_dump_emulation_state(XENLOG_G_WARNING, descr, &ctxt, rc);
@@ -122,7 +122,7 @@ bool handle_mmio_with_translation(unsigned long gla, unsigned long gpfn,
 bool handle_pio(uint16_t port, unsigned int size, int dir)
 {
     struct vcpu *curr = current;
-    struct hvm_vcpu_io *vio = &curr->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &curr->io;
     unsigned int data;
     int rc;
 
@@ -136,7 +136,7 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
     rc = hvmemul_do_pio_buffer(port, size, dir, &data);
 
     if ( ioreq_needs_completion(&vio->io_req) )
-        vio->io_completion = HVMIO_pio_completion;
+        vio->io_completion = IO_pio_completion;
 
     switch ( rc )
     {
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index fcfccf7..787d4a0 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1266,7 +1266,7 @@ enum hvm_intblk nsvm_intr_blocked(struct vcpu *v)
          * Delay the injection because this would result in delivering
          * an interrupt *within* the execution of an instruction.
          */
-        if ( v->arch.hvm.hvm_io.io_req.state != STATE_IOREQ_NONE )
+        if ( v->io.io_req.state != STATE_IOREQ_NONE )
             return hvm_intblk_shadow;
 
         if ( !nv->nv_vmexit_pending && n2vmcb->exit_int_info.v )
diff --git a/xen/arch/x86/hvm/vmx/realmode.c b/xen/arch/x86/hvm/vmx/realmode.c
index 768f01e..f5832a0 100644
--- a/xen/arch/x86/hvm/vmx/realmode.c
+++ b/xen/arch/x86/hvm/vmx/realmode.c
@@ -101,7 +101,7 @@ void vmx_realmode_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt)
 
     perfc_incr(realmode_emulations);
 
-    rc = hvm_emulate_one(hvmemul_ctxt, HVMIO_realmode_completion);
+    rc = hvm_emulate_one(hvmemul_ctxt, IO_realmode_completion);
 
     if ( rc == X86EMUL_UNHANDLEABLE )
     {
@@ -188,7 +188,7 @@ void vmx_realmode(struct cpu_user_regs *regs)
 
         vmx_realmode_emulate_one(&hvmemul_ctxt);
 
-        if ( vio->io_req.state != STATE_IOREQ_NONE || vio->mmio_retry )
+        if ( curr->io.io_req.state != STATE_IOREQ_NONE || vio->mmio_retry )
             break;
 
         /* Stop emulating unless our segment state is not safe */
@@ -202,7 +202,7 @@ void vmx_realmode(struct cpu_user_regs *regs)
     }
 
     /* Need to emulate next time if we've started an IO operation */
-    if ( vio->io_req.state != STATE_IOREQ_NONE )
+    if ( curr->io.io_req.state != STATE_IOREQ_NONE )
         curr->arch.hvm.vmx.vmx_emulate = 1;
 
     if ( !curr->arch.hvm.vmx.vmx_emulate && !curr->arch.hvm.vmx.vmx_realmode )
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index a07f1d7..57ddaaa 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -158,7 +158,7 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
         break;
     }
 
-    p = &sv->vcpu->arch.hvm.hvm_io.io_req;
+    p = &sv->vcpu->io.io_req;
     if ( ioreq_needs_completion(p) )
         p->data = data;
 
@@ -170,10 +170,10 @@ static bool hvm_wait_for_io(struct ioreq_vcpu *sv, ioreq_t *p)
 bool handle_hvm_io_completion(struct vcpu *v)
 {
     struct domain *d = v->domain;
-    struct hvm_vcpu_io *vio = &v->arch.hvm.hvm_io;
+    struct vcpu_io *vio = &v->io;
     struct ioreq_server *s;
     struct ioreq_vcpu *sv;
-    enum hvm_io_completion io_completion;
+    enum io_completion io_completion;
 
     if ( has_vpci(d) && vpci_process_pending(v) )
     {
@@ -192,17 +192,17 @@ bool handle_hvm_io_completion(struct vcpu *v)
     vcpu_end_shutdown_deferral(v);
 
     io_completion = vio->io_completion;
-    vio->io_completion = HVMIO_no_completion;
+    vio->io_completion = IO_no_completion;
 
     switch ( io_completion )
     {
-    case HVMIO_no_completion:
+    case IO_no_completion:
         break;
 
-    case HVMIO_mmio_completion:
+    case IO_mmio_completion:
         return ioreq_complete_mmio();
 
-    case HVMIO_pio_completion:
+    case IO_pio_completion:
         return handle_pio(vio->io_req.addr, vio->io_req.size,
                           vio->io_req.dir);
 
diff --git a/xen/include/asm-x86/hvm/emulate.h b/xen/include/asm-x86/hvm/emulate.h
index 1620cc7..131cdf4 100644
--- a/xen/include/asm-x86/hvm/emulate.h
+++ b/xen/include/asm-x86/hvm/emulate.h
@@ -65,7 +65,7 @@ bool __nonnull(1, 2) hvm_emulate_one_insn(
     const char *descr);
 int hvm_emulate_one(
     struct hvm_emulate_ctxt *hvmemul_ctxt,
-    enum hvm_io_completion completion);
+    enum io_completion completion);
 void hvm_emulate_one_vm_event(enum emul_kind kind,
     unsigned int trapnr,
     unsigned int errcode);
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index 0fccac5..5ed977e 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -26,11 +26,11 @@
 
 #include <public/hvm/params.h>
 
-static inline bool arch_hvm_io_completion(enum hvm_io_completion io_completion)
+static inline bool arch_hvm_io_completion(enum io_completion io_completion)
 {
     switch ( io_completion )
     {
-    case HVMIO_realmode_completion:
+    case IO_realmode_completion:
     {
         struct hvm_emulate_ctxt ctxt;
 
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c1feda..8adf455 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -28,13 +28,6 @@
 #include <asm/mtrr.h>
 #include <public/hvm/ioreq.h>
 
-enum hvm_io_completion {
-    HVMIO_no_completion,
-    HVMIO_mmio_completion,
-    HVMIO_pio_completion,
-    HVMIO_realmode_completion
-};
-
 struct hvm_vcpu_asid {
     uint64_t generation;
     uint32_t asid;
@@ -52,10 +45,6 @@ struct hvm_mmio_cache {
 };
 
 struct hvm_vcpu_io {
-    /* I/O request in flight to device model. */
-    enum hvm_io_completion io_completion;
-    ioreq_t                io_req;
-
     /*
      * HVM emulation:
      *  Linear address @mmio_gla maps to MMIO physical frame @mmio_gpfn.
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 78761cd..f9ce14c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -143,6 +143,19 @@ void evtchn_destroy_final(struct domain *d); /* from complete_domain_destroy */
 
 struct waitqueue_vcpu;
 
+enum io_completion {
+    IO_no_completion,
+    IO_mmio_completion,
+    IO_pio_completion,
+    IO_realmode_completion
+};
+
+struct vcpu_io {
+    /* I/O request in flight to device model. */
+    enum io_completion   io_completion;
+    ioreq_t              io_req;
+};
+
 struct vcpu
 {
     int              vcpu_id;
@@ -254,6 +267,10 @@ struct vcpu
     struct vpci_vcpu vpci;
 
     struct arch_vcpu arch;
+
+#ifdef CONFIG_IOREQ_SERVER
+    struct vcpu_io io;
+#endif
 };
 
 struct sched_unit {
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7620.20150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VJ-0007Wf-TW; Thu, 15 Oct 2020 16:53:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7620.20150; Thu, 15 Oct 2020 16:53:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VJ-0007VA-3n; Thu, 15 Oct 2020 16:53:25 +0000
Received: by outflank-mailman (input) for mailman id 7620;
 Thu, 15 Oct 2020 16:53:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6ON-0004yr-89
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:15 +0000
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46fd3a8f-55f0-4ee0-8179-f6bd7276b904;
 Thu, 15 Oct 2020 16:45:10 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id h20so3822805lji.9
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:10 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.08
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:08 -0700 (PDT)
X-Inumbo-ID: 46fd3a8f-55f0-4ee0-8179-f6bd7276b904
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=/fPLWXB55C3rjnJr1JaRD7nWk5g9/E1PMqJP4mwbwZc=;
        b=HitaxQWEsvEv9/Rz9DvKDF6ecJ85ZC9GLKwviZZplNEQWjYaIpccBmQeTM19Dd2R7V
         1kXh8VU1tQevPtf26zRsdREGD2tfhx9Qn0YmijO+CiC1k/LFBEWMmWGSuhn1SgRLP6mf
         BJ8w5GOQ8D7jhxMIdYZn32MP8QoAlExjpexjxUFolKpCck8Dleo5k/g6bNNhzFkdBw9X
         8KqHNDMzkdG5LTGvJQcIU/Cd93rdALS/sUUTISGKgIg0yABiL4kOEV1qSVfhxAQ5kCVb
         Wz8RupR+PmexrOcxslMMqtAGhd2qdvc8FL7x6FTbyxqnYdi/U57R7JsiM74qVDbe7abw
         UAgg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=/fPLWXB55C3rjnJr1JaRD7nWk5g9/E1PMqJP4mwbwZc=;
        b=eQtLWMdwwU43KvqCGiox2k6wrW2v1iOR+GtsQmO4Mz0kDu/0AW8GmLdRNtuKWU2nAw
         ua0FfUsLMQupEAf4ghskU256974DysqHkKGGujzREf6o8cBUF2bnzQ2dcr0oP2CtIxuA
         jl3KdvLL4PNtoU/50IPAL78ZUJ5x7bvJdsIMm+8C26usr37am970DNDRSk9lYXYHcixf
         8UguEjEZuYGMJSl8j0KNy0PkpDGLamoNkcE95/HO9IeI6Y16tLklGsDZHuvirxJlUd2B
         QZy7b3xGUnAwG4FUJTneRLM6Ittl8NQd+60w5eJUxjaOmU5lcFkAbQcgaxYxg+hveb2j
         3ERg==
X-Gm-Message-State: AOAM531DRggBqNyEd8lbJ4+Vd5JRA76zBoVRbf3z9IqepOolcUx4+Qg8
	fxpVpURNZntciKtL6IhY/SlA2l3qZtK5FQ==
X-Google-Smtp-Source: ABdhPJwACpsMFGhkuraX68uLZBKp6BNDpiiOkvIcf85E72rgjoANcmS+h8Z4oafvGu1h220iQ29gKw==
X-Received: by 2002:a2e:9618:: with SMTP id v24mr1687846ljh.191.1602780309232;
        Thu, 15 Oct 2020 09:45:09 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 16/23] xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
Date: Thu, 15 Oct 2020 19:44:27 +0300
Message-Id: <1602780274-29141-17-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch implements reference counting of foreign entries in
set_foreign_p2m_entry() on Arm. This is a mandatory action if we
want to run an emulator (IOREQ server) in a domain other than dom0,
as we can't trust it to do the right thing if it is not running in
dom0. So we need to grab a reference on the page to prevent it from
disappearing.

It is valid to always pass the "p2m_map_foreign_rw" type to
guest_physmap_add_entry() since the current and foreign domains
will always be different. The case when they are equal would be
rejected by rcu_lock_remote_domain_by_id().

This was tested with the IOREQ feature to confirm that all the pages
given to this function belong to a domain, so we can use the same
approach as for the XENMAPSPACE_gmfn_foreign handling in
xenmem_add_to_physmap_one().

This involves adding an extra parameter for the foreign domain to
set_foreign_p2m_entry() and a helper to indicate whether the arch
supports reference counting of foreign entries, in which case the
restriction to the hardware domain in the common code can be skipped.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - new patch, was split from:
     "[RFC PATCH V1 04/12] xen/arm: Introduce arch specific bits for IOREQ/DM features"
   - rewrite a logic to handle properly reference in set_foreign_p2m_entry()
     instead of treating foreign entries as p2m_ram_rw

Changes V1 -> V2:
   - rebase according to the recent changes to acquire_resource()
   - update patch description
   - introduce arch_refcounts_p2m()
   - add an explanation why p2m_map_foreign_rw is valid
   - move set_foreign_p2m_entry() to p2m-common.h
   - add const to new parameter
---
 xen/arch/arm/p2m.c           | 21 +++++++++++++++++++++
 xen/arch/x86/mm/p2m.c        |  5 +++--
 xen/common/memory.c          |  5 +++--
 xen/include/asm-arm/p2m.h    | 19 +++++++++----------
 xen/include/asm-x86/p2m.h    | 12 +++++++++---
 xen/include/xen/p2m-common.h |  4 ++++
 6 files changed, 49 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 4eeb867..370173c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1380,6 +1380,27 @@ int guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
     return p2m_remove_mapping(d, gfn, (1 << page_order), mfn);
 }
 
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    struct page_info *page = mfn_to_page(mfn);
+    int rc;
+
+    if ( !get_page(page, fd) )
+        return -EINVAL;
+
+    /*
+     * It is valid to always use p2m_map_foreign_rw here because, if this
+     * gets called, then d != fd. The case when d == fd would have been
+     * rejected by rcu_lock_remote_domain_by_id() earlier.
+     */
+    rc = guest_physmap_add_entry(d, _gfn(gfn), mfn, 0, p2m_map_foreign_rw);
+    if ( rc )
+        put_page(page);
+
+    return rc;
+}
+
 static struct page_info *p2m_allocate_root(void)
 {
     struct page_info *page;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 6102771..8d03ab4 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1320,7 +1320,8 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
 }
 
 /* Set foreign mfn in the given guest's p2m table. */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
 {
     return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign,
                                p2m_get_hostp2m(d)->default_access);
@@ -2620,7 +2621,7 @@ int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
      * will update the m2p table which will result in  mfn -> gpfn of dom0
      * and not fgfn of domU.
      */
-    rc = set_foreign_p2m_entry(tdom, gpfn, mfn);
+    rc = set_foreign_p2m_entry(tdom, fdom, gpfn, mfn);
     if ( rc )
         gdprintk(XENLOG_WARNING, "set_foreign_p2m_entry failed. "
                  "gpfn:%lx mfn:%lx fgfn:%lx td:%d fd:%d\n",
diff --git a/xen/common/memory.c b/xen/common/memory.c
index cf53ca3..fb9ea96 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -1099,7 +1099,8 @@ static int acquire_resource(
      *        reference counted, it is unsafe to allow mapping of
      *        resource pages unless the caller is the hardware domain.
      */
-    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) )
+    if ( paging_mode_translate(currd) && !is_hardware_domain(currd) &&
+         !arch_refcounts_p2m() )
         return -EACCES;
 
     if ( copy_from_guest(&xmar, arg, 1) )
@@ -1168,7 +1169,7 @@ static int acquire_resource(
 
         for ( i = 0; !rc && i < xmar.nr_frames; i++ )
         {
-            rc = set_foreign_p2m_entry(currd, gfn_list[i],
+            rc = set_foreign_p2m_entry(currd, d, gfn_list[i],
                                        _mfn(mfn_list[i]));
             /* rc should be -EIO for any iteration other than the first */
             if ( rc && i )
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index 28ca9a8..d11be80 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -161,6 +161,15 @@ typedef enum {
 #endif
 #include <xen/p2m-common.h>
 
+static inline bool arch_refcounts_p2m(void)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is supported on Arm.
+     */
+    return true;
+}
+
 static inline
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
 {
@@ -392,16 +401,6 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
     return gfn_add(gfn, 1UL << order);
 }
 
-static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
-                                        mfn_t mfn)
-{
-    /*
-     * NOTE: If this is implemented then proper reference counting of
-     *       foreign entries will need to be implemented.
-     */
-    return -EOPNOTSUPP;
-}
-
 /*
  * A vCPU has cache enabled only when the MMU is enabled and data cache
  * is enabled.
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 5f7ba31..6c42022 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -369,6 +369,15 @@ struct p2m_domain {
 #endif
 #include <xen/p2m-common.h>
 
+static inline bool arch_refcounts_p2m(void)
+{
+    /*
+     * The reference counting of foreign entries in set_foreign_p2m_entry()
+     * is not supported on x86.
+     */
+    return false;
+}
+
 /*
  * Updates vCPU's n2pm to match its np2m_base in VMCx12 and returns that np2m.
  */
@@ -634,9 +643,6 @@ int p2m_finish_type_change(struct domain *d,
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
-/* Set foreign entry in the p2m table (for priv-mapping) */
-int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
-
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
                        unsigned int order);
diff --git a/xen/include/xen/p2m-common.h b/xen/include/xen/p2m-common.h
index 58031a6..b4bc709 100644
--- a/xen/include/xen/p2m-common.h
+++ b/xen/include/xen/p2m-common.h
@@ -3,6 +3,10 @@
 
 #include <xen/mm.h>
 
+/* Set foreign entry in the p2m table */
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn);
+
 /* Remove a page from a domain's p2m table */
 int __must_check
 guest_physmap_remove_page(struct domain *d, gfn_t gfn, mfn_t mfn,
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7621.20162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VM-0007c1-0X; Thu, 15 Oct 2020 16:53:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7621.20162; Thu, 15 Oct 2020 16:53:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VK-0007ah-U9; Thu, 15 Oct 2020 16:53:26 +0000
Received: by outflank-mailman (input) for mailman id 7621;
 Thu, 15 Oct 2020 16:53:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6OX-0004yr-7t
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:25 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 495c746c-867e-4b9f-bfdc-3ebc58d73db7;
 Thu, 15 Oct 2020 16:45:12 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id b1so4320376lfp.11
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:12 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.10
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:10 -0700 (PDT)
X-Inumbo-ID: 495c746c-867e-4b9f-bfdc-3ebc58d73db7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=HaoSNmuxwJzJR/fKLMbZwT91GS4ftwQys6lt/km7J2g=;
        b=CeJiG4Nb1/Pb2sgz3TGnNpjrTYoj+Xr3WHwfBpq5V6VWIWlkhvHc7IQUISflX9dJj7
         if3zUD6eV9aAWdcwjWmJPw//abOUNr0ZKgvyO5Ahb5dHFjnfkUGa/8P03fJX30cM3k9p
         g+vGnWOZF7yWTckBaAA7IvsqTmBw5fkEe4Z0P36cfABlNKjsyTWMWnqKrcYtsJJPeqG4
         TmK6f6Fq3KY1RzealYEn3Suy9d5vzktZxa5nr6CWeX7Xl1sZfLhuU9U7B0j4jl6UHtYf
         jr/QLcxoOytiBaF6DvLENgCoD3tY4r62xmGOzwSeXVZUr1w8IAkZk3/5/KV+OYebiibl
         M2oQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=HaoSNmuxwJzJR/fKLMbZwT91GS4ftwQys6lt/km7J2g=;
        b=cccfG/8y+r747gyUUQ3ncLm3JtEl2qs7ESVhRqVsPq0FTOXg85PedmnTRI3+uCwFAx
         xHl/ywfcLjqRTjcIFyX9dt2tZSGakaxMMJkZLb1b/OLvsy7bxle7SGW4j07Ri3LryMNK
         96xL9PywSQsVwHVrcwUVgGPJJYagnIzOQlETwehp2ywzkg9HyftM3eHjXMM7KtlKiG6g
         7xY9DAzRcp/+JBsUyvG+pkzx90+bLi9iKb7zEy0NmtPpBriG0N+arUuQyXGgIMtRptrd
         QXkozgVZ2Wxb0BroDRFYY+te2bL4uB3wiDbcPeLMLNAWjOsW39ozlQ+nz6OtWn/TWpEs
         ycmQ==
X-Gm-Message-State: AOAM5321bnqy8wPj76WWE0Nnoti+dttpk85ZUwWg/7gmgUfHWfn5JVb5
	X7p3wfKhzUgcawUYQ0L6zJNMKtXXYC9BPQ==
X-Google-Smtp-Source: ABdhPJxxnhYf1jU2Q+ohzP7ySf8wbYADbqSgg/hOvijB/xB3BRucdTDPEe7sx0CDqgM+hg30Trv1Pg==
X-Received: by 2002:ac2:5449:: with SMTP id d9mr1270229lfn.546.1602780311348;
        Thu, 15 Oct 2020 09:45:11 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.10
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:10 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V2 18/23] xen/dm: Introduce xendevicemodel_set_irq_level DM op
Date: Thu, 15 Oct 2020 19:44:29 +0300
Message-Id: <1602780274-29141-19-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

This patch adds the ability for a device emulator to notify the
other end (some entity running in the guest) using an SPI, and
implements the Arm-specific bits for it. The proposed interface
allows the emulator to set the logical level of one of a domain's
IRQ lines.

We can't reuse the existing DM op (xen_dm_op_set_isa_irq_level)
to inject an interrupt, as the "isa_irq" field is only 8 bits wide
and can only cover IRQs 0 - 255, whereas we need a wider range (0 - 1020).

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

***
Please note, I left the interface untouched since there is still
an open discussion about what interface to use / what information
to pass to the hypervisor. The open question is whether we should
abstract away the state of the line or not.
***

Changes RFC -> V1:
   - check incoming parameters in arch_dm_op()
   - add explicit padding to struct xen_dm_op_set_irq_level

Changes V1 -> V2:
   - update the author of a patch
   - update patch description
   - check that padding is always 0
   - mention that the interface is Arm-only and that only SPIs are
     supported for now
   - allow setting the logical level of a line for non-allocated
     interrupts only
   - add xen_dm_op_set_irq_level_t
---
 tools/libs/devicemodel/core.c                   | 18 ++++++++
 tools/libs/devicemodel/include/xendevicemodel.h |  4 ++
 tools/libs/devicemodel/libxendevicemodel.map    |  1 +
 xen/arch/arm/dm.c                               | 57 ++++++++++++++++++++++++-
 xen/common/dm.c                                 |  1 +
 xen/include/public/hvm/dm_op.h                  | 16 +++++++
 6 files changed, 96 insertions(+), 1 deletion(-)

diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 4d40639..30bd79f 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -430,6 +430,24 @@ int xendevicemodel_set_isa_irq_level(
     return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
 }
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, uint32_t irq,
+    unsigned int level)
+{
+    struct xen_dm_op op;
+    struct xen_dm_op_set_irq_level *data;
+
+    memset(&op, 0, sizeof(op));
+
+    op.op = XEN_DMOP_set_irq_level;
+    data = &op.u.set_irq_level;
+
+    data->irq = irq;
+    data->level = level;
+
+    return xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
+}
+
 int xendevicemodel_set_pci_link_route(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t link, uint8_t irq)
 {
diff --git a/tools/libs/devicemodel/include/xendevicemodel.h b/tools/libs/devicemodel/include/xendevicemodel.h
index e877f5c..c06b3c8 100644
--- a/tools/libs/devicemodel/include/xendevicemodel.h
+++ b/tools/libs/devicemodel/include/xendevicemodel.h
@@ -209,6 +209,10 @@ int xendevicemodel_set_isa_irq_level(
     xendevicemodel_handle *dmod, domid_t domid, uint8_t irq,
     unsigned int level);
 
+int xendevicemodel_set_irq_level(
+    xendevicemodel_handle *dmod, domid_t domid, unsigned int irq,
+    unsigned int level);
+
 /**
  * This function maps a PCI INTx line to a an IRQ line.
  *
diff --git a/tools/libs/devicemodel/libxendevicemodel.map b/tools/libs/devicemodel/libxendevicemodel.map
index 561c62d..a0c3012 100644
--- a/tools/libs/devicemodel/libxendevicemodel.map
+++ b/tools/libs/devicemodel/libxendevicemodel.map
@@ -32,6 +32,7 @@ VERS_1.2 {
 	global:
 		xendevicemodel_relocate_memory;
 		xendevicemodel_pin_memory_cacheattr;
+		xendevicemodel_set_irq_level;
 } VERS_1.1;
 
 VERS_1.3 {
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index 5d3da37..e4bb233 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -17,10 +17,65 @@
 #include <xen/dm.h>
 #include <xen/hypercall.h>
 
+#include <asm/vgic.h>
+
 int arch_dm_op(struct xen_dm_op *op, struct domain *d,
                const struct dmop_args *op_args, bool *const_op)
 {
-    return -EOPNOTSUPP;
+    int rc;
+
+    switch ( op->op )
+    {
+    case XEN_DMOP_set_irq_level:
+    {
+        const struct xen_dm_op_set_irq_level *data =
+            &op->u.set_irq_level;
+        unsigned int i;
+
+        /* Only SPIs are supported */
+        if ( (data->irq < NR_LOCAL_IRQS) || (data->irq >= vgic_num_irqs(d)) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        if ( data->level != 0 && data->level != 1 )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        /* Check that padding is always 0 */
+        for ( i = 0; i < sizeof(data->pad); i++ )
+            if ( data->pad[i] )
+                break;
+        if ( i != sizeof(data->pad) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        /*
+         * Allow setting the logical level of a line for non-allocated
+         * interrupts only.
+         */
+        if ( test_bit(data->irq, d->arch.vgic.allocated_irqs) )
+        {
+            rc = -EINVAL;
+            break;
+        }
+
+        vgic_inject_irq(d, NULL, data->irq, data->level);
+        rc = 0;
+        break;
+    }
+
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    return rc;
 }
 
 /*
diff --git a/xen/common/dm.c b/xen/common/dm.c
index f3a8353..5f23420 100644
--- a/xen/common/dm.c
+++ b/xen/common/dm.c
@@ -48,6 +48,7 @@ static int dm_op(const struct dmop_args *op_args)
         [XEN_DMOP_remote_shutdown]                  = sizeof(struct xen_dm_op_remote_shutdown),
         [XEN_DMOP_relocate_memory]                  = sizeof(struct xen_dm_op_relocate_memory),
         [XEN_DMOP_pin_memory_cacheattr]             = sizeof(struct xen_dm_op_pin_memory_cacheattr),
+        [XEN_DMOP_set_irq_level]                    = sizeof(struct xen_dm_op_set_irq_level),
     };
 
     rc = rcu_lock_remote_domain_by_id(op_args->domid, &d);
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 66cae1a..1f70d58 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -434,6 +434,21 @@ struct xen_dm_op_pin_memory_cacheattr {
 };
 typedef struct xen_dm_op_pin_memory_cacheattr xen_dm_op_pin_memory_cacheattr_t;
 
+/*
+ * XEN_DMOP_set_irq_level: Set the logical level of one of a domain's
+ *                         IRQ lines (currently Arm only).
+ * Only SPIs are supported.
+ */
+#define XEN_DMOP_set_irq_level 19
+
+struct xen_dm_op_set_irq_level {
+    uint32_t irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t level;
+    uint8_t pad[3];
+};
+typedef struct xen_dm_op_set_irq_level xen_dm_op_set_irq_level_t;
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -447,6 +462,7 @@ struct xen_dm_op {
         xen_dm_op_track_dirty_vram_t track_dirty_vram;
         xen_dm_op_set_pci_intx_level_t set_pci_intx_level;
         xen_dm_op_set_isa_irq_level_t set_isa_irq_level;
+        xen_dm_op_set_irq_level_t set_irq_level;
         xen_dm_op_set_pci_link_route_t set_pci_link_route;
         xen_dm_op_modified_memory_t modified_memory;
         xen_dm_op_set_mem_type_t set_mem_type;
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7622.20174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VN-0007hV-SU; Thu, 15 Oct 2020 16:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7622.20174; Thu, 15 Oct 2020 16:53:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VM-0007fO-Gm; Thu, 15 Oct 2020 16:53:28 +0000
Received: by outflank-mailman (input) for mailman id 7622;
 Thu, 15 Oct 2020 16:53:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6OI-0004yr-7O
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:10 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8dbf7494-b1e5-4da9-aef3-a0e8cd19873d;
 Thu, 15 Oct 2020 16:45:09 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id p15so3831734ljj.8
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:09 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.06
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:06 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
	id 1kT6OI-0004yr-7O
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:10 +0000
X-Inumbo-ID: 8dbf7494-b1e5-4da9-aef3-a0e8cd19873d
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8dbf7494-b1e5-4da9-aef3-a0e8cd19873d;
	Thu, 15 Oct 2020 16:45:09 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id p15so3831734ljj.8
        for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=sOB0KRQzNqQKgBnd+DVp46wCjPUsJTci+TWGlyq6IHk=;
        b=XsQT+M7/kBteWBIsYxbEXnOvjFm4tf1tIglvblWEBKCE9jmqGtNPZDxBa07y2Byl3c
         ZfUwLl/xvcX6t5yKY4qkB1ZR7JPmaZ9CYgFpyWrQ9AZKYmgRIlT50R6bdHiNPk1EEmv5
         tzEdjIrGeE9rjaO6NsbIg9Ssc+QvnMhesOD/GkOye1AINuo5IYqpBuM/VZH1SqIEIs/8
         iwKPbJdLd78LG/ZknnGhh3JuQP7xBCtHek9neMg7aSauD5ZwU16vluQMW7V+DfypYbMG
         nFcuIhqsX6YfEx6XmxxoKx/s/rDePKzSm5VR6GAsL+4IYlfncwX7Iv+bG100ly22a0Ce
         YVfg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=sOB0KRQzNqQKgBnd+DVp46wCjPUsJTci+TWGlyq6IHk=;
        b=n8TPxujkZPn/VVVsKxLhnbaXjetwxQxcLn6knjxvzQGeF6UUpMMho5+V9i8UY86VCT
         ohWOd8PNHsWvN5cw87v41kx1ZAqH5rYIEh+EACe5lpbSNrT8lGzAtG+GWKc/mZkMSPM8
         xdAr/9XgYDDXVfja+sv+I/bHxvup25lP+UEc5KtAn6lUeJj7rriiudd7EJTtvLBEhCzZ
         XXUIAkxyP1QYjoQ4bzy/XtJVOCuQGU1lSFi3RwO98S0JnLlgcYXwYtocATklu3Vi60n/
         0cILCKonRu4QUuy2vsxYyaKOE/QQs04zWk1pizKtwc5yStmDiR1TF3jIKp0HaikMlwaD
         twaw==
X-Gm-Message-State: AOAM532+NQmMSGUXqckyorzUFz0Aiss28L3BjkzpsOsNFPcyrcAV7YJ2
	gt8THBMJyjTcxa67yLWENDBgEvZrwPAxIw==
X-Google-Smtp-Source: ABdhPJwZZv2XOIEHlMZjS7b2fFtS4U+URg/aCid+fgq+JXBnk2tT6Jv191S0C+c6pfQ/2P+NWZ6a5A==
X-Received: by 2002:a2e:8816:: with SMTP id x22mr1261041ljh.377.1602780307354;
        Thu, 15 Oct 2020 09:45:07 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
        by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.06
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Thu, 15 Oct 2020 09:45:06 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Paul Durrant <paul@xen.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V2 14/23] arm/ioreq: Introduce arch specific bits for IOREQ/DM features
Date: Thu, 15 Oct 2020 19:44:25 +0300
Message-Id: <1602780274-29141-15-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Julien Grall <julien.grall@arm.com>

This patch adds basic IOREQ/DM support on Arm. The subsequent
patches will improve functionality and add the remaining bits.

The IOREQ/DM features are supposed to be built with the IOREQ_SERVER
option enabled, which is disabled by default on Arm for now.

Please note, the "PIO handling" TODO is expected to be left
unaddressed for the current series. It is not a big issue for now,
while Xen doesn't have support for vPCI on Arm. On Arm64, PIOs are
only used for PCI I/O BARs, and we would probably want to expose them
to the emulator as PIO accesses to make a DM completely arch-agnostic.
So "PIO handling" should be implemented when we add support for vPCI.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was split into:
     - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
     - xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
   - update patch description
   - update asm-arm/hvm/ioreq.h according to the newly introduced arch functions:
     - arch_hvm_destroy_ioreq_server()
     - arch_handle_hvm_io_completion()
   - update arch files to include xen/ioreq.h
   - remove HVMOP plumbing
   - rewrite the logic to properly handle the case when hvm_send_ioreq() returns IO_RETRY
   - add logic to properly handle the handle_hvm_io_completion() return value
   - rename handle_mmio() to ioreq_handle_complete_mmio()
   - move paging_mark_pfn_dirty() to asm-arm/paging.h
   - remove forward declaration for hvm_ioreq_server in asm-arm/paging.h
   - move try_fwd_ioserv() to ioreq.c, provide stubs if !CONFIG_IOREQ_SERVER
   - do not remove #ifdef CONFIG_IOREQ_SERVER in memory.c for guarding xen/ioreq.h
   - use gdprintk in try_fwd_ioserv(), remove unneeded prints
   - update list of #include-s
   - move has_vpci() to asm-arm/domain.h
   - add a comment (TODO) to unimplemented yet handle_pio()
   - remove hvm_mmio_first(last)_byte() and hvm_ioreq_(page/vcpu/server) structs
     from the arch files, they were already moved to the common code
   - remove set_foreign_p2m_entry() changes, they will be properly implemented
     in the follow-up patch
   - select IOREQ_SERVER for Arm instead of Arm64 in Kconfig
   - remove x86's realmode and other unneeded stubs from xen/ioreq.h
   - clarify ioreq_t p.df usage in try_fwd_ioserv()
   - set ioreq_t p.count to 1 in try_fwd_ioserv()

Changes V1 -> V2:
   - was split into:
     - arm/ioreq: Introduce arch specific bits for IOREQ/DM features
     - xen/arm: Stick around in leave_hypervisor_to_guest until I/O has completed
   - update the author of a patch
   - update patch description
   - move a loop in leave_hypervisor_to_guest() to a separate patch
   - set IOREQ_SERVER disabled by default
   - remove already clarified /* XXX */
   - replace BUG() by ASSERT_UNREACHABLE() in handle_pio()
   - remove default case for handling the return value of try_handle_mmio()
   - remove struct hvm_domain, enum hvm_io_completion, struct hvm_vcpu_io,
     struct hvm_vcpu from asm-arm/domain.h, these are common materials now
   - update everything according to the recent changes (IOREQ related function
     names don't contain "hvm" prefixes/infixes anymore, IOREQ related fields
     are part of common struct vcpu/domain now, etc)
---
 xen/arch/arm/Makefile           |   2 +
 xen/arch/arm/dm.c               |  34 ++++++++++
 xen/arch/arm/domain.c           |   9 +++
 xen/arch/arm/io.c               |  11 +++-
 xen/arch/arm/ioreq.c            | 141 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c            |  13 ++++
 xen/common/ioreq.c              |   1 +
 xen/include/asm-arm/domain.h    |   5 ++
 xen/include/asm-arm/hvm/ioreq.h | 109 +++++++++++++++++++++++++++++++
 xen/include/asm-arm/mmio.h      |   1 +
 xen/include/asm-arm/paging.h    |   4 ++
 11 files changed, 329 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/arm/dm.c
 create mode 100644 xen/arch/arm/ioreq.c
 create mode 100644 xen/include/asm-arm/hvm/ioreq.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e6..c3ff454 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -13,6 +13,7 @@ obj-y += cpuerrata.o
 obj-y += cpufeature.o
 obj-y += decode.o
 obj-y += device.o
+obj-$(CONFIG_IOREQ_SERVER) += dm.o
 obj-y += domain.o
 obj-y += domain_build.init.o
 obj-y += domctl.o
@@ -27,6 +28,7 @@ obj-y += guest_atomics.o
 obj-y += guest_walk.o
 obj-y += hvm.o
 obj-y += io.o
+obj-$(CONFIG_IOREQ_SERVER) += ioreq.o
 obj-y += irq.o
 obj-y += kernel.init.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o
diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
new file mode 100644
index 0000000..5d3da37
--- /dev/null
+++ b/xen/arch/arm/dm.c
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/dm.h>
+#include <xen/hypercall.h>
+
+int arch_dm_op(struct xen_dm_op *op, struct domain *d,
+               const struct dmop_args *op_args, bool *const_op)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 3b37f89..ba9b1fb 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -15,6 +15,7 @@
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/init.h>
+#include <xen/ioreq.h>
 #include <xen/lib.h>
 #include <xen/livepatch.h>
 #include <xen/sched.h>
@@ -694,6 +695,10 @@ int arch_domain_create(struct domain *d,
 
     ASSERT(config != NULL);
 
+#ifdef CONFIG_IOREQ_SERVER
+    ioreq_init(d);
+#endif
+
     /* p2m_init relies on some value initialized by the IOMMU subsystem */
     if ( (rc = iommu_domain_init(d, config->iommu_opts)) != 0 )
         goto fail;
@@ -1012,6 +1017,10 @@ int domain_relinquish_resources(struct domain *d)
         if (ret )
             return ret;
 
+#ifdef CONFIG_IOREQ_SERVER
+        destroy_all_ioreq_servers(d);
+#endif
+
     PROGRESS(xen):
         ret = relinquish_memory(d, &d->xenpage_list);
         if ( ret )
diff --git a/xen/arch/arm/io.c b/xen/arch/arm/io.c
index ae7ef96..f44cfd4 100644
--- a/xen/arch/arm/io.c
+++ b/xen/arch/arm/io.c
@@ -23,6 +23,7 @@
 #include <asm/cpuerrata.h>
 #include <asm/current.h>
 #include <asm/mmio.h>
+#include <asm/hvm/ioreq.h>
 
 #include "decode.h"
 
@@ -123,7 +124,15 @@ enum io_state try_handle_mmio(struct cpu_user_regs *regs,
 
     handler = find_mmio_handler(v->domain, info.gpa);
     if ( !handler )
-        return IO_UNHANDLED;
+    {
+        int rc;
+
+        rc = try_fwd_ioserv(regs, v, &info);
+        if ( rc == IO_HANDLED )
+            return handle_ioserv(regs, v);
+
+        return rc;
+    }
 
     /* All the instructions used on emulated MMIO region should be valid */
     if ( !dabt.valid )
diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
new file mode 100644
index 0000000..da5ceac
--- /dev/null
+++ b/xen/arch/arm/ioreq.c
@@ -0,0 +1,141 @@
+/*
+ * arm/ioreq.c: hardware virtual machine I/O emulation
+ *
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/domain.h>
+#include <xen/ioreq.h>
+
+#include <asm/traps.h>
+
+#include <public/hvm/ioreq.h>
+
+enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v)
+{
+    const union hsr hsr = { .bits = regs->hsr };
+    const struct hsr_dabt dabt = hsr.dabt;
+    /* Code is similar to handle_read */
+    uint8_t size = (1 << dabt.size) * 8;
+    register_t r = v->io.io_req.data;
+
+    /* We are done with the IO */
+    v->io.io_req.state = STATE_IOREQ_NONE;
+
+    if ( dabt.write )
+        return IO_HANDLED;
+
+    /*
+     * Sign extend if required.
+     * Note that we expect the read handler to have zeroed the bits
+     * outside the requested access size.
+     */
+    if ( dabt.sign && (r & (1UL << (size - 1))) )
+    {
+        /*
+         * We are relying on register_t using the same as
+         * an unsigned long in order to keep the 32-bit assembly
+         * code smaller.
+         */
+        BUILD_BUG_ON(sizeof(register_t) != sizeof(unsigned long));
+        r |= (~0UL) << size;
+    }
+
+    set_user_reg(regs, dabt.reg, r);
+
+    return IO_HANDLED;
+}
+
+enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                             struct vcpu *v, mmio_info_t *info)
+{
+    struct vcpu_io *vio = &v->io;
+    ioreq_t p = {
+        .type = IOREQ_TYPE_COPY,
+        .addr = info->gpa,
+        .size = 1 << info->dabt.size,
+        .count = 1,
+        .dir = !info->dabt.write,
+        /*
+         * On x86, df is used by 'rep' instruction to tell the direction
+         * to iterate (forward or backward).
+         * On Arm, all the accesses to MMIO region will do a single
+         * memory access. So for now, we can safely always set to 0.
+         */
+        .df = 0,
+        .data = get_user_reg(regs, info->dabt.reg),
+        .state = STATE_IOREQ_READY,
+    };
+    struct ioreq_server *s = NULL;
+    enum io_state rc;
+
+    switch ( vio->io_req.state )
+    {
+    case STATE_IOREQ_NONE:
+        break;
+
+    case STATE_IORESP_READY:
+        return IO_HANDLED;
+
+    default:
+        gdprintk(XENLOG_ERR, "wrong state %u\n", vio->io_req.state);
+        return IO_ABORT;
+    }
+
+    s = select_ioreq_server(v->domain, &p);
+    if ( !s )
+        return IO_UNHANDLED;
+
+    if ( !info->dabt.valid )
+        return IO_ABORT;
+
+    vio->io_req = p;
+
+    rc = send_ioreq(s, &p, 0);
+    if ( rc != IO_RETRY || v->domain->is_shutting_down )
+        vio->io_req.state = STATE_IOREQ_NONE;
+    else if ( !ioreq_needs_completion(&vio->io_req) )
+        rc = IO_HANDLED;
+    else
+        vio->io_completion = IO_mmio_completion;
+
+    return rc;
+}
+
+bool ioreq_complete_mmio(void)
+{
+    struct vcpu *v = current;
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const union hsr hsr = { .bits = regs->hsr };
+    paddr_t addr = v->io.io_req.addr;
+
+    if ( try_handle_mmio(regs, hsr, addr) == IO_HANDLED )
+    {
+        advance_pc(regs, hsr);
+        return true;
+    }
+
+    return false;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 8f40d0e..b154837 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -21,6 +21,7 @@
 #include <xen/hypercall.h>
 #include <xen/init.h>
 #include <xen/iocap.h>
+#include <xen/ioreq.h>
 #include <xen/irq.h>
 #include <xen/lib.h>
 #include <xen/mem_access.h>
@@ -1384,6 +1385,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
 #ifdef CONFIG_HYPFS
     HYPERCALL(hypfs_op, 5),
 #endif
+#ifdef CONFIG_IOREQ_SERVER
+    HYPERCALL(dm_op, 3),
+#endif
 };
 
 #ifndef NDEBUG
@@ -1955,6 +1959,9 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
             case IO_HANDLED:
                 advance_pc(regs, hsr);
                 return;
+            case IO_RETRY:
+                /* finish later */
+                return;
             case IO_UNHANDLED:
                 /* IO unhandled, try another way to handle it. */
                 break;
@@ -2253,6 +2260,12 @@ static void check_for_vcpu_work(void)
 {
     struct vcpu *v = current;
 
+#ifdef CONFIG_IOREQ_SERVER
+    local_irq_enable();
+    handle_io_completion(v);
+    local_irq_disable();
+#endif
+
     if ( likely(!v->arch.need_flush_to_ram) )
         return;
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 8612159..bcd4961 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -18,6 +18,7 @@
 
 #include <xen/ctype.h>
 #include <xen/domain.h>
+#include <xen/domain_page.h>
 #include <xen/event.h>
 #include <xen/init.h>
 #include <xen/irq.h>
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 6819a3b..d4c3da5 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -10,6 +10,7 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 #include <asm/vpl011.h>
+#include <public/hvm/dm_op.h>
 #include <public/hvm/params.h>
 
 struct hvm_domain
@@ -17,6 +18,8 @@ struct hvm_domain
     uint64_t              params[HVM_NR_PARAMS];
 };
 
+#define ioreq_params(d, i) ((d)->arch.hvm.params[i])
+
 #ifdef CONFIG_ARM_64
 enum domain_type {
     DOMAIN_32BIT,
@@ -262,6 +265,8 @@ static inline void arch_vcpu_block(struct vcpu *v) {}
 
 #define arch_vm_assist_valid_mask(d) (1UL << VMASST_TYPE_runstate_update_flag)
 
+#define has_vpci(d)    ({ (void)(d); false; })
+
 #endif /* __ASM_DOMAIN_H__ */
 
 /*
diff --git a/xen/include/asm-arm/hvm/ioreq.h b/xen/include/asm-arm/hvm/ioreq.h
new file mode 100644
index 0000000..9f59f23
--- /dev/null
+++ b/xen/include/asm-arm/hvm/ioreq.h
@@ -0,0 +1,109 @@
+/*
+ * hvm.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ * Copyright (c) 2019 Arm ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_ARM_HVM_IOREQ_H__
+#define __ASM_ARM_HVM_IOREQ_H__
+
+#include <xen/ioreq.h>
+
+#include <public/hvm/ioreq.h>
+
+#ifdef CONFIG_IOREQ_SERVER
+enum io_state handle_ioserv(struct cpu_user_regs *regs, struct vcpu *v);
+enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                             struct vcpu *v, mmio_info_t *info);
+#else
+static inline enum io_state handle_ioserv(struct cpu_user_regs *regs,
+                                          struct vcpu *v)
+{
+    return IO_UNHANDLED;
+}
+
+static inline enum io_state try_fwd_ioserv(struct cpu_user_regs *regs,
+                                           struct vcpu *v, mmio_info_t *info)
+{
+    return IO_UNHANDLED;
+}
+#endif
+
+bool ioreq_complete_mmio(void);
+
+static inline bool handle_pio(uint16_t port, unsigned int size, int dir)
+{
+    /*
+     * TODO: For Arm64, the main user will be PCI. So this should be
+     * implemented when we add support for vPCI.
+     */
+    ASSERT_UNREACHABLE();
+    return true;
+}
+
+static inline void arch_destroy_ioreq_server(struct ioreq_server *s)
+{
+}
+
+static inline void msix_write_completion(struct vcpu *v)
+{
+}
+
+static inline bool arch_io_completion(enum io_completion io_completion)
+{
+    ASSERT_UNREACHABLE();
+    return true;
+}
+
+static inline int ioreq_server_get_type_addr(const struct domain *d,
+                                             const ioreq_t *p,
+                                             uint8_t *type,
+                                             uint64_t *addr)
+{
+    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
+        return -EINVAL;
+
+    *type = (p->type == IOREQ_TYPE_PIO) ?
+             XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
+    *addr = p->addr;
+
+    return 0;
+}
+
+static inline void arch_ioreq_init(struct domain *d)
+{
+}
+
+static inline bool arch_ioreq_destroy(struct domain *d)
+{
+    return true;
+}
+
+#define IOREQ_STATUS_HANDLED     IO_HANDLED
+#define IOREQ_STATUS_UNHANDLED   IO_UNHANDLED
+#define IOREQ_STATUS_RETRY       IO_RETRY
+
+#endif /* __ASM_ARM_HVM_IOREQ_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-arm/mmio.h b/xen/include/asm-arm/mmio.h
index 8dbfb27..7ab873c 100644
--- a/xen/include/asm-arm/mmio.h
+++ b/xen/include/asm-arm/mmio.h
@@ -37,6 +37,7 @@ enum io_state
     IO_ABORT,       /* The IO was handled by the helper and led to an abort. */
     IO_HANDLED,     /* The IO was successfully handled by the helper. */
     IO_UNHANDLED,   /* The IO was not handled by the helper. */
+    IO_RETRY,       /* Retry the emulation for some reason */
 };
 
 typedef int (*mmio_read_t)(struct vcpu *v, mmio_info_t *info,
diff --git a/xen/include/asm-arm/paging.h b/xen/include/asm-arm/paging.h
index 6d1a000..0550c55 100644
--- a/xen/include/asm-arm/paging.h
+++ b/xen/include/asm-arm/paging.h
@@ -4,6 +4,10 @@
 #define paging_mode_translate(d)              (1)
 #define paging_mode_external(d)               (1)
 
+static inline void paging_mark_pfn_dirty(struct domain *d, pfn_t pfn)
+{
+}
+
 #endif /* XEN_PAGING_H */
 
 /*
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 16:53:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 16:53:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7623.20181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VP-0007lZ-4D; Thu, 15 Oct 2020 16:53:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7623.20181; Thu, 15 Oct 2020 16:53:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6VO-0007jv-2C; Thu, 15 Oct 2020 16:53:30 +0000
Received: by outflank-mailman (input) for mailman id 7623;
 Thu, 15 Oct 2020 16:53:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YEeM=DW=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kT6Om-0004yr-8H
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 16:46:40 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58805a19-f509-4ef7-b1f1-945e5768e9fe;
 Thu, 15 Oct 2020 16:45:15 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id 184so4357721lfd.6
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 09:45:15 -0700 (PDT)
Received: from otyshchenko.www.tendawifi.com ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id v13sm1482495ljh.66.2020.10.15.09.45.13
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 15 Oct 2020 09:45:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=2XrnkyCm2XtFs8+4UKcvf9+akIJtw9Lb3qRoVqK2C5U=;
        b=Pn9R4BXFpFWBtqgH7N8kD9d9XZroBl6pZJcDRGLAyalJNeKV0GAgAl81CXyg304D3C
         1M9/eAvRA+IriO/UFwvseXxiX8z2lxPhTw+RflHYDjWK2Opf+N/PB4qDR4Tflip+Qz3l
         yEOaGsG4xYNV5vbxEGmZpOt02p46Qo+fekkh92N7rDpIrFhhZmqJZQpUaKp51YKu3Oxm
         Y1UbzJrG4PAyvaVFjIgl2cbnAAjomKXXrk3fBwcM3QJx3GaJRBOoEsbTAK2fzoFqah11
         ltb2dsGo3/fNavYpZYJxZVzGk+ESuNbjRT5OmMFw/LpiVUYSyTOv7ItUdpn7YOZleWPa
         aa2g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=2XrnkyCm2XtFs8+4UKcvf9+akIJtw9Lb3qRoVqK2C5U=;
        b=RYMjrS8BgMTPt3oKsga1U5k9T2E7D54Fnyma9ixEwaMIJpn66rVKDMA1rgkBlHw42o
         qP6vtXaWaYu/y0hzt89gnqihiFnF7GX14J4mfQsUPyFIaRcinCf+rhDj7PjX6dlZeFWG
         c3wrczjDfN6DI8JuC0e5/aINplHV/KksutNjYCSyIUk8vGIl3nKZ+q+CjPUnujSh+agI
         zdSlUtwDN9MjRnaRRTM4feGL+Ui9lmFWQsyxjzYBFtU2a15mu5qrcZaw+G83cqPK5axa
         7u56ELuDJAl1aEs6dUa5UJIOB5SQ4Fnjozm3ObSYYteQCmILq/EQsjMoZZQzBzMuw3lB
         dpfg==
X-Gm-Message-State: AOAM531oTJtQ4o/9vx2aYSYLuOQkrXBXx6XOiWrhab58xsu8TArGvRjB
	CsYBMD4nSQP7fJDuIyDRv2SrnSH4nD1lSA==
X-Google-Smtp-Source: ABdhPJwsQ5M/L3bwdFzhQ3KPuy/TYNQk5lUusJYwKYci7YsHMXQCx16rzuX7lAW/CXtxXZ+WZpjnjw==
X-Received: by 2002:a19:8888:: with SMTP id k130mr1414548lfd.265.1602780314167;
        Thu, 15 Oct 2020 09:45:14 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <julien.grall@arm.com>
Subject: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling
Date: Thu, 15 Oct 2020 19:44:32 +0300
Message-Id: <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

We need to send a mapcache invalidation request to qemu/demu every time
a page gets removed from a guest.

At the moment, the Arm code doesn't explicitly remove the existing
mapping before inserting the new mapping. Instead, this is done
implicitly by __p2m_set_entry().

So the corresponding flag will be set in __p2m_set_entry() if the old
entry is a RAM page *and* the new MFN is different. The invalidation
request will then be sent in do_trap_hypercall() later on.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
CC: Julien Grall <julien.grall@arm.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes V1 -> V2:
   - new patch, some changes were derived from (+ new explanation):
     xen/ioreq: Make x86's invalidate qemu mapcache handling common
   - put setting of the flag into __p2m_set_entry()
   - clarify the conditions when the flag should be set
   - use domain_has_ioreq_server()
   - update do_trap_hypercall() by adding local variable
---
 xen/arch/arm/p2m.c   |  8 ++++++++
 xen/arch/arm/traps.c | 13 ++++++++++---
 2 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 370173c..2693b0c 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1,6 +1,7 @@
 #include <xen/cpu.h>
 #include <xen/domain_page.h>
 #include <xen/iocap.h>
+#include <xen/ioreq.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
 #include <xen/softirq.h>
@@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
      */
     if ( p2m_is_valid(orig_pte) &&
          !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
+    {
+#ifdef CONFIG_IOREQ_SERVER
+        if ( domain_has_ioreq_server(p2m->domain) &&
+             (p2m->domain == current->domain) && p2m_is_ram(orig_pte.p2m.type) )
+            p2m->domain->qemu_mapcache_invalidate = true;
+#endif
         p2m_free_entry(p2m, orig_pte, level);
+    }
 
 out:
     unmap_domain_page(table);
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index a8f5fdf..9eaa342 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1442,6 +1442,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
                               const union hsr hsr)
 {
     arm_hypercall_fn_t call = NULL;
+    struct vcpu *v = current;
 
     BUILD_BUG_ON(NR_hypercalls < ARRAY_SIZE(arm_hypercall_table) );
 
@@ -1458,7 +1459,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
         return;
     }
 
-    current->hcall_preempted = false;
+    v->hcall_preempted = false;
 
     perfc_incra(hypercalls, *nr);
     call = arm_hypercall_table[*nr].fn;
@@ -1471,7 +1472,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
     HYPERCALL_RESULT_REG(regs) = call(HYPERCALL_ARGS(regs));
 
 #ifndef NDEBUG
-    if ( !current->hcall_preempted )
+    if ( !v->hcall_preempted )
     {
         /* Deliberately corrupt parameter regs used by this hypercall. */
         switch ( arm_hypercall_table[*nr].nr_args ) {
@@ -1488,8 +1489,14 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
 #endif
 
     /* Ensure the hypercall trap instruction is re-executed. */
-    if ( current->hcall_preempted )
+    if ( v->hcall_preempted )
         regs->pc -= 4;  /* re-execute 'hvc #XEN_HYPERCALL_TAG' */
+
+#ifdef CONFIG_IOREQ_SERVER
+    if ( unlikely(v->domain->qemu_mapcache_invalidate) &&
+         test_and_clear_bool(v->domain->qemu_mapcache_invalidate) )
+        send_invalidate_ioreq();
+#endif
 }
 
 void arch_hypercall_tasklet_result(struct vcpu *v, long res)
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu Oct 15 17:14:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 17:14:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7659.20205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6pG-00027e-N7; Thu, 15 Oct 2020 17:14:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7659.20205; Thu, 15 Oct 2020 17:14:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT6pG-00027X-Jg; Thu, 15 Oct 2020 17:14:02 +0000
Received: by outflank-mailman (input) for mailman id 7659;
 Thu, 15 Oct 2020 17:14:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Un/f=DW=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kT6pF-00027S-HM
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 17:14:01 +0000
Received: from mail-lj1-x231.google.com (unknown [2a00:1450:4864:20::231])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8da83e91-9543-46f5-bd76-1909379d005c;
 Thu, 15 Oct 2020 17:14:00 +0000 (UTC)
Received: by mail-lj1-x231.google.com with SMTP id p15so3923258ljj.8
 for <xen-devel@lists.xenproject.org>; Thu, 15 Oct 2020 10:14:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=1yUvifcv3fU+qGZfXkc7byEgAOcZMg51wV8y/5oEaQ4=;
        b=Q+YOi/nVndoAd1WZ6kH25jasK5kgj2ioDwaXt6ixjeRMRg6/iOEC+Es/O8aLIyakXp
         egaeIVzzY25B60hZbYP5YOwQn6AWlyU9FHYcg4MXdubT1PrPwoaCgmERPH+HsTcJNhtz
         00ywxOC/IRmvRgNhsgIacHbKdjLeNCKg/T8c5XxOK1Raq5mdg76LL/twvP6d42JITF5Z
         uy8KpLYGrqMqk2Q9zLOlYxLDPCh0pUhI2LitWA60v0cGIKg6S8eT59MpY/u8TTSUX+bc
         kXwmHkPRwqds3Oj3HkMiICBAK80pLlQGS9KSWrFpbTusWzC4BEKCNRqg8Pd0NxM7Pi/i
         k/KA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=1yUvifcv3fU+qGZfXkc7byEgAOcZMg51wV8y/5oEaQ4=;
        b=KrQq1CX2qJ3UKYfxs2nDAGPILmILHGubC0k+hda0TzYkGkQ+rWeR6iR33hqO8L+wwt
         AdHDijxFNVuMHoVXdndl5pRw5MlnkKN+VHIYBD09+ESvpM6niSujRKKQ+Nlv0Bp2EKEj
         iaqNgs+GH/3XfXfxDihiGf0SCjdMphraUltjM0bRS2DHf00rEnsqyt+RLGzhBoUlf5vV
         Vkfd9Z2PZjBXZDrY5a6NlyahBYdt12j2/+o3S1zLrI8GQiGwcz42sZJRZMjJ7xHY2vCb
         Y4pRoLCU7JQ7vhhNqjkjiyGYT5ovaONTMCXtFwTK6nTJJX+VvHKntoo47AWEFm5zgsZ2
         8i+g==
X-Gm-Message-State: AOAM533ZzpHbXf6py1ECUy3C2gL7RBNOR2ExC+mYIdXc2dQxSbBkcOQs
	9BIX2ldQVKryoa51GA7SmU3dP965nP+QHoShbmQ=
X-Google-Smtp-Source: ABdhPJw/inF4ypZZTJUUCINi7YWblL+rCo3QbMKOIKbpJ1ze8axOmb3yYlqUQoverWhcCGGhoJFQixhYXnKi+NwhKY0=
X-Received: by 2002:a2e:c49:: with SMTP id o9mr1731526ljd.296.1602782039521;
 Thu, 15 Oct 2020 10:13:59 -0700 (PDT)
MIME-Version: 1.0
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com> <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com> <CABfawhnwdkB01LKYbcNhyyhFXF2LbLFFmeN5kqh7VaYPevjzuw@mail.gmail.com>
In-Reply-To: <CABfawhnwdkB01LKYbcNhyyhFXF2LbLFFmeN5kqh7VaYPevjzuw@mail.gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 15 Oct 2020 13:13:47 -0400
Message-ID: <CAKf6xpuACuY63f+m6U55EVoSBL+RR04OStGPytb-Aeacou32gg@mail.gmail.com>
Subject: Re: i915 dma faults on Xen
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 15, 2020 at 12:39 PM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> > > Can you paste the memory map as printed by Xen when booting, and what
> > > command line are you using to boot Xen.
> >
> > So this is OpenXT, and it's booting EFI -> xen -> tboot -> xen
>
> Unrelated comment: since tboot now has a PE build
> (http://hg.code.sf.net/p/tboot/code/rev/5c68f0963a78) I think it would
> be time for OpenXT to drop the weird efi->xen->tboot->xen flow and
> just do efi->tboot->xen. Only reason we did efi->xen->tboot was
> because tboot didn't have a PE build at the time. It's a very hackish
> solution that's no longer needed.

Thanks for the pointer, Tamas.  If I recall correctly, there was also
an issue with ExitBootServices.  Do you know if that has been
addressed?

Depending on timing, OpenXT may just move to TrenchBoot for a DRTM solution.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 17:27:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 17:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7682.20239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT72H-0003Mx-9j; Thu, 15 Oct 2020 17:27:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7682.20239; Thu, 15 Oct 2020 17:27:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT72H-0003Mq-6d; Thu, 15 Oct 2020 17:27:29 +0000
Received: by outflank-mailman (input) for mailman id 7682;
 Thu, 15 Oct 2020 17:27:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LoCs=DW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kT72F-0003Ml-Mu
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 17:27:27 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef5a5738-dfe6-444d-bb24-b8dea69ba877;
 Thu, 15 Oct 2020 17:27:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602782846;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=ZS18iKLT8mnFjsNWylUWI133jb2KC41TKGIcuW2aFw0=;
  b=SnvrbZMjjk4aRCLyxmGwsCCiVYtDnvoZ4YGXfE0p9X94XiGZ/PYVnRPG
   HO/Wxo3OyVZ4KqG/eKO3ZguTFKOmlixOvugz/+03LDSs27/geiLpiyd7W
   N1CJ4CVvPHJyzgNHwivgxSYTkfr6QU0cCNS8Ix7e2R0UfrD295KoVC/h+
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: MZD92vvaZdoGFRWDDsfQCD+rR3JriCQkBt78e42NYxr4Cuk8raootnvOnVdIbDoVx1A5jsdX8E
 pdwAenzQeVqSfyeXj+9tQ7vpgPl+EzOrA6k19rjmevqIecNheJ+TqlpZ4sylLxupQVr4xXqQXZ
 2H8pTFspZdC/RycWSiWnR8t84I/gXl8AFcUxmyokJ09HkH2fpbpTxRcnDzS/njBWUlT4PsXrXT
 52rqndaBFUffWSaKEvdfLHR/O49nxma4/t263WZTyEzjv5AxueECAYtVTwkiXACb/N/0vogOUI
 K+0=
X-SBRS: 2.5
X-MesageID: 29106077
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,379,1596513600"; 
   d="scan'208";a="29106077"
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jan Beulich <jbeulich@suse.com>, Jason Andryuk <jandryuk@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
References: <20201014153150.83875-1-jandryuk@gmail.com>
 <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
 <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
 <CAKf6xpt_VhJ5r4scuAkWU3aGxgwiYNtHaBDpMoFJS+q837aFiA@mail.gmail.com>
 <d8e93366-0f99-37c7-e5f4-8efaf804d2e2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d1d45ef5-067d-1edb-fac9-514495277765@citrix.com>
Date: Thu, 15 Oct 2020 18:27:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d8e93366-0f99-37c7-e5f4-8efaf804d2e2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 15/10/2020 16:14, Jan Beulich wrote:
> On 15.10.2020 16:50, Jason Andryuk wrote:
>> On Thu, Oct 15, 2020 at 3:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>>> And why is there no bounds check of ->phys_entry paralleling the
>>> ->virt_entry one?
>> What is the purpose of this checking?  It's sanity checking which is
>> generally good, but what is the harm from failing the checks?  A
>> corrupt kernel can crash itself?  Maybe you could start executing
>> something (the initramfs?) instead of the actual kernel?
> This is at least getting close to a possible security issue.
> Booting a hacked up binary can be a problem afaik.

It's only a security issue if the absence of the check is going to cause
a malfunction outside of the guest context (e.g. in the toolstack's elf
parser).

There are functionally infinite ways for a guest kernel to crash itself
early in boot. Malforming the ELF header such that the guest, once
executing, fails to boot isn't interesting from this point of view.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 17:46:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 17:46:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7698.20261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7Kn-0005BA-0o; Thu, 15 Oct 2020 17:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7698.20261; Thu, 15 Oct 2020 17:46:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7Km-0005B3-TZ; Thu, 15 Oct 2020 17:46:36 +0000
Received: by outflank-mailman (input) for mailman id 7698;
 Thu, 15 Oct 2020 17:46:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kT7Kl-0005AW-7V
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 17:46:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd581c62-18b0-46ef-abcb-c20730b80407;
 Thu, 15 Oct 2020 17:46:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT7Ke-0002s7-Bc; Thu, 15 Oct 2020 17:46:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kT7Ke-0002mZ-5m; Thu, 15 Oct 2020 17:46:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kT7Ke-0004I8-5I; Thu, 15 Oct 2020 17:46:28 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tZwGEPlsOjInNfgMMtz/cGfryYKpPALNQ9ilwJm+sug=; b=UwNYyrR5NTHEwbMXBVb93MTxLY
	IqyCjcgqC7xSiWezWUUnXEosZqkEsZN/SpwYjWSxfS3v2Kq37ZxXO9LgKuWILMAtL5tyrIkyPGdin
	GntjhJYuwAqPUHusNcRsSPL+VInGRZuL7dy7mw2q5HW3gYznYu1En/Ubk13GR8SdNuUY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155850-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155850: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a8a85f03c826bea045e345fa405f187049d63584
X-Osstest-Versions-That:
    xen=f776e5fb3ee699745f6442ec8c47d0fa647e0575
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 17:46:28 +0000

flight 155850 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155850/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a8a85f03c826bea045e345fa405f187049d63584
baseline version:
 xen                  f776e5fb3ee699745f6442ec8c47d0fa647e0575

Last test of basis   155828  2020-10-15 03:00:27 Z    0 days
Testing same since   155842  2020-10-15 11:00:25 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chen Yu <yu.c.chen@intel.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f776e5fb3e..a8a85f03c8  a8a85f03c826bea045e345fa405f187049d63584 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 17:52:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 17:52:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7701.20273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7QB-00064O-LS; Thu, 15 Oct 2020 17:52:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7701.20273; Thu, 15 Oct 2020 17:52:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7QB-00064H-Hz; Thu, 15 Oct 2020 17:52:11 +0000
Received: by outflank-mailman (input) for mailman id 7701;
 Thu, 15 Oct 2020 17:52:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT7Q9-00064C-SI
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 17:52:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a36b26b9-0951-4592-98d4-df94ca8c92e7;
 Thu, 15 Oct 2020 17:52:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 367DEAFAE;
 Thu, 15 Oct 2020 17:52:07 +0000 (UTC)
X-Inumbo-ID: a36b26b9-0951-4592-98d4-df94ca8c92e7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 15 Oct 2020 19:52:04 +0200
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Daniel Vetter <daniel@ffwll.ch>
Cc: Christian =?UTF-8?B?S8O2bmln?= <christian.koenig@amd.com>,
 luben.tuikov@amd.com, airlied@linux.ie, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk,
 melissa.srw@gmail.com, ray.huang@amd.com, kraxel@redhat.com,
 sam@ravnborg.org, emil.velikov@collabora.com,
 linux-samsung-soc@vger.kernel.org, jy0922.shim@samsung.com,
 lima@lists.freedesktop.org, oleksandr_andrushchenko@epam.com,
 krzk@kernel.org, steven.price@arm.com, linux-rockchip@lists.infradead.org,
 kgene@kernel.org, bskeggs@redhat.com, linux+etnaviv@armlinux.org.uk,
 spice-devel@lists.freedesktop.org, alyssa.rosenzweig@collabora.com,
 etnaviv@lists.freedesktop.org, hdegoede@redhat.com,
 xen-devel@lists.xenproject.org, virtualization@lists.linux-foundation.org,
 sean@poorly.run, apaneers@amd.com, linux-arm-kernel@lists.infradead.org,
 linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
 tomeu.vizoso@collabora.com, sw0312.kim@samsung.com, hjc@rock-chips.com,
 kyungmin.park@samsung.com, miaoqinglang@huawei.com, yuq825@gmail.com,
 alexander.deucher@amd.com, linux-media@vger.kernel.org
Subject: Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
Message-ID: <20201015195204.1745fe7f@linux-uq9g>
In-Reply-To: <20201015164909.GC401619@phenom.ffwll.local>
References: <20201015123806.32416-1-tzimmermann@suse.de>
	<20201015123806.32416-6-tzimmermann@suse.de>
	<935d5771-5645-62a6-849c-31e286db1e30@amd.com>
	<20201015164909.GC401619@phenom.ffwll.local>
Organization: SUSE Software Solutions Germany GmbH
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi

On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote:

> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
> > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in
> > > kernel address space. The mapping's address is returned as struct
> > > dma_buf_map. Each function is a simplified version of TTM's existing
> > > kmap code. Both functions respect the memory's location and/or
> > > writecombine flags.
> > >
> > > On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > > two helpers that convert a GEM object into the TTM BO and forward the
> > > call to TTM's vmap/vunmap. These helpers can be dropped into the
> > > respective GEM object callbacks.
> > >
> > > v4:
> > > 	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers
> > > (Daniel, Christian)
> >
> > Bunch of minor comments below, but overall this looks very solid to me.
>
> Yeah, I think just duplicating the TTM BO map code for vmap is indeed the
> cleanest. And then we can maybe push the combinatorial monster into
> vmwgfx, which I think is the only user after this series. Or perhaps a
> dedicated set of helpers to map an individual page (again using the
> dma_buf_map infrastructure).

From a quick look, I'd say it should be possible to have the same interface
for kmap/kunmap as for vmap/vunmap (i.e., the parameters are the bo and the
dma-buf map). All mapping state can be deduced from this. And struct
ttm_bo_kmap_obj can be killed off entirely.

Best regards
Thomas

>
> I'll let Christian handle the details, but at a high level this is
> definitely
>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>
> Thanks a lot for doing all this.
> -Daniel
>
> >
> > >
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > > ---
> > >   drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> > >   drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
> > >   include/drm/drm_gem_ttm_helper.h     |  6 +++
> > >   include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
> > >   include/linux/dma-buf-map.h          | 20 ++++++++
> > >   5 files changed, 164 insertions(+)
> > >
> > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
> > > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
> > > unsigned int indent, }
> > >   EXPORT_SYMBOL(drm_gem_ttm_print_info);
> > > +/**
> > > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > > + * @gem: GEM object.
> > > + * @map: [out] returns the dma-buf mapping.
> > > + *
> > > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > > + * &drm_gem_object_funcs.vmap callback.
> > > + *
> > > + * Returns:
> > > + * 0 on success, or a negative errno code otherwise.
> > > + */
> > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > +		     struct dma_buf_map *map)
> > > +{
> > > +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > > +
> > > +	return ttm_bo_vmap(bo, map);
> > > +
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > > +
> > > +/**
> > > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > > + * @gem: GEM object.
> > > + * @map: dma-buf mapping.
> > > + *
> > > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used
> > > as
> > > + * &drm_gem_object_funcs.vmap callback.
> > > + */
> > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > +			struct dma_buf_map *map)
> > > +{
> > > +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> > > +
> > > +	ttm_bo_vunmap(bo, map);
> > > +}
> > > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > > +
> > >   /**
> > >    * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> > >    * @gem: GEM object.
> > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
> > > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > > @@ -32,6 +32,7 @@
> > >   #include <drm/ttm/ttm_bo_driver.h>
> > >   #include <drm/ttm/ttm_placement.h>
> > >   #include <drm/drm_vma_manager.h>
> > > +#include <linux/dma-buf-map.h>
> > >   #include <linux/io.h>
> > >   #include <linux/highmem.h>
> > >   #include <linux/wait.h>
> > > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> > >   }
> > >   EXPORT_SYMBOL(ttm_bo_kunmap);
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > +{
> > > +	struct ttm_resource *mem = &bo->mem;
> > > +	int ret;
> > > +
> > > +	ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > +	if (ret)
> > > +		return ret;
> > > +
> > > +	if (mem->bus.is_iomem) {
> > > +		void __iomem *vaddr_iomem;
> > > +		unsigned long size = bo->num_pages << PAGE_SHIFT;
> >
> > Please use uint64_t here and make sure to cast bo->num_pages before
> > shifting.
> >
> > We have a unit test that allocates an 8GB BO, and that should work on a
> > 32-bit machine as well :)
> >
> > > +
> > > +		if (mem->bus.addr)
> > > +			vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > > +		else if (mem->placement & TTM_PL_FLAG_WC)
> >
> > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > mem->bus.caching enum as a replacement.
> >
> > > +			vaddr_iomem = ioremap_wc(mem->bus.offset,
> > > size);
> > > +		else
> > > +			vaddr_iomem = ioremap(mem->bus.offset, size);
> > > +
> > > +		if (!vaddr_iomem)
> > > +			return -ENOMEM;
> > > +
> > > +		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > +
> > > +	} else {
> > > +		struct ttm_operation_ctx ctx = {
> > > +			.interruptible = false,
> > > +			.no_wait_gpu = false
> > > +		};
> > > +		struct ttm_tt *ttm = bo->ttm;
> > > +		pgprot_t prot;
> > > +		void *vaddr;
> > > +
> > > +		BUG_ON(!ttm);
> >
> > I think we can drop this; populate will just crash badly anyway.
> >
> > > +
> > > +		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > +		if (ret)
> > > +			return ret;
> > > +
> > > +		/*
> > > +		 * We need to use vmap to get the desired page
> > > protection
> > > +		 * or to make the buffer object look contiguous.
> > > +		 */
> > > +		prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> >
> > The calling convention has changed on drm-misc-next as well, but should be
> > trivial to adapt.
> >
> > Regards,
> > Christian.
> >
> > > +		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > +		if (!vaddr)
> > > +			return -ENOMEM;
> > > +
> > > +		dma_buf_map_set_vaddr(map, vaddr);
> > > +	}
> > > +
> > > +	return 0;
> > > +}
> > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > +
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map) +{
> > > +	if (dma_buf_map_is_null(map))
> > > +		return;
> > > +
> > > +	if (map->is_iomem)
> > > +		iounmap(map->vaddr_iomem);
> > > +	else
> > > +		vunmap(map->vaddr);
> > > +	dma_buf_map_clear(map);
> > > +
> > > +	ttm_mem_io_free(bo->bdev, &bo->mem);
> > > +}
> > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > +
> > > +
> > >   static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > >   				 bool dst_use_tt)
> > >   {
> > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8
> > > 100644 --- a/include/drm/drm_gem_ttm_helper.h
> > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > @@ -10,11 +10,17 @@
> > >   #include <drm/ttm/ttm_bo_api.h>
> > >   #include <drm/ttm/ttm_bo_driver.h>
> > > +struct dma_buf_map;
> > > +
> > >   #define drm_gem_ttm_of_gem(gem_obj) \
> > >   	container_of(gem_obj, struct ttm_buffer_object, base)
> > >   void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > indent, const struct drm_gem_object *gem);
> > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > +		     struct dma_buf_map *map);
> > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > +			struct dma_buf_map *map);
> > >   int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > >   		     struct vm_area_struct *vma);
> > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > > index 37102e45e496..2c59a785374c 100644
> > > --- a/include/drm/ttm/ttm_bo_api.h
> > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > >   struct ttm_bo_device;
> > > +struct dma_buf_map;
> > > +
> > >   struct drm_mm_node;
> > >   struct ttm_placement;
> > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > unsigned long start_page, */
> > >   void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > +/**
> > > + * ttm_bo_vmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > + *
> > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > > + * data in the buffer object. The parameter @map returns the virtual
> > > + * address as struct dma_buf_map. Unmap the buffer with
> > > ttm_bo_vunmap().
> > > + *
> > > + * Returns
> > > + * -ENOMEM: Out of memory.
> > > + * -EINVAL: Invalid range.
> > > + */
> > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > +
> > > +/**
> > > + * ttm_bo_vunmap
> > > + *
> > > + * @bo: The buffer object.
> > > + * @map: Object describing the map to unmap.
> > > + *
> > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > + */
> > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > *map); +
> > >   /**
> > >    * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > >    *
> > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > index fd1aba545fdf..2e8bbecb5091 100644
> > > --- a/include/linux/dma-buf-map.h
> > > +++ b/include/linux/dma-buf-map.h
> > > @@ -45,6 +45,12 @@
> > >    *
> > >    *	dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > >    *
> > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > + *
> > > + * .. code-block:: c
> > > + *
> > > + *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > > + *
> > >    * Test if a mapping is valid with either dma_buf_map_is_set() or
> > >    * dma_buf_map_is_null().
> > >    *
> > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> > >   }
> > > +/**
> > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > an address in I/O memory
> > > + * @map:		The dma-buf mapping structure
> > > + * @vaddr_iomem:	An I/O-memory address
> > > + *
> > > + * Sets the address and the I/O-memory flag.
> > > + */
> > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > +					       void __iomem
> > > *vaddr_iomem) +{
> > > +	map->vaddr_iomem = vaddr_iomem;
> > > +	map->is_iomem = true;
> > > +}
> > > +
> > >   /**
> > >    * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > > equality
> > >    * @lhs:	The dma-buf mapping structure
> >
>



-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 17:56:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 17:56:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7704.20284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7UW-0006HE-Cg; Thu, 15 Oct 2020 17:56:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7704.20284; Thu, 15 Oct 2020 17:56:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7UW-0006H7-9f; Thu, 15 Oct 2020 17:56:40 +0000
Received: by outflank-mailman (input) for mailman id 7704;
 Thu, 15 Oct 2020 17:56:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CmRu=DW=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kT7UV-0006H2-Ev
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 17:56:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67309ffa-ba58-40da-a2de-505aa97ad7ec;
 Thu, 15 Oct 2020 17:56:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AA02CAC3A;
 Thu, 15 Oct 2020 17:56:36 +0000 (UTC)
X-Inumbo-ID: 67309ffa-ba58-40da-a2de-505aa97ad7ec
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Thu, 15 Oct 2020 19:56:34 +0200
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Christian =?UTF-8?B?S8O2bmln?= <christian.koenig@amd.com>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
 linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
 nouveau@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
 amd-gfx@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-rockchip@lists.infradead.org,
 dri-devel@lists.freedesktop.org, xen-devel@lists.xenproject.org,
 spice-devel@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
 linux-media@vger.kernel.org
Subject: Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
Message-ID: <20201015195634.0221c84e@linux-uq9g>
In-Reply-To: <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
References: <20201015123806.32416-1-tzimmermann@suse.de>
	<20201015123806.32416-6-tzimmermann@suse.de>
	<935d5771-5645-62a6-849c-31e286db1e30@amd.com>
Organization: SUSE Software Solutions Germany GmbH
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi

On Thu, 15 Oct 2020 16:08:13 +0200 Christian König <christian.koenig@amd.com>
wrote:

> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
> > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> > address space. The mapping's address is returned as struct dma_buf_map.
> > Each function is a simplified version of TTM's existing kmap code. Both
> > functions respect the memory's location and/or writecombine flags.
> >
> > On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> > two helpers that convert a GEM object into the TTM BO and forward the call
> > to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
> > object callbacks.
> >
> > v4:
> > 	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> > 	  Christian)
>
> Bunch of minor comments below, but overall this looks very solid to me.
>
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> >   drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
> >   drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
> >   include/drm/drm_gem_ttm_helper.h     |  6 +++
> >   include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
> >   include/linux/dma-buf-map.h          | 20 ++++++++
> >   5 files changed, 164 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30
> > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
> > unsigned int indent, }
> >   EXPORT_SYMBOL(drm_gem_ttm_print_info);
> >
> > +/**
> > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: [out] returns the dma-buf mapping.
> > + *
> > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + *
> > + * Returns:
> > + * 0 on success, or a negative errno code otherwise.
> > + */
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > +		     struct dma_buf_map *map)
> > +{
> > +	struct ttm_buffer_object *bo =3D drm_gem_ttm_of_gem(gem);
> > +
> > +	return ttm_bo_vmap(bo, map);
> > +
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> > +
> > +/**
> > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> > + * @gem: GEM object.
> > + * @map: dma-buf mapping.
> > + *
> > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> > + * &drm_gem_object_funcs.vmap callback.
> > + */
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > +			struct dma_buf_map *map)
> > +{
> > +	struct ttm_buffer_object *bo =3D drm_gem_ttm_of_gem(gem);
> > +
> > +	ttm_bo_vunmap(bo, map);
> > +}
> > +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> > +
> >   /**
> >    * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
> >    * @gem: GEM object.
> > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d
> > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> > @@ -32,6 +32,7 @@
> >   #include <drm/ttm/ttm_bo_driver.h>
> >   #include <drm/ttm/ttm_placement.h>
> >   #include <drm/drm_vma_manager.h>
> > +#include <linux/dma-buf-map.h>
> >   #include <linux/io.h>
> >   #include <linux/highmem.h>
> >   #include <linux/wait.h>
> > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
> >   }
> >   EXPORT_SYMBOL(ttm_bo_kunmap);
> >
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > +	struct ttm_resource *mem = &bo->mem;
> > +	int ret;
> > +
> > +	ret = ttm_mem_io_reserve(bo->bdev, mem);
> > +	if (ret)
> > +		return ret;
> > +
> > +	if (mem->bus.is_iomem) {
> > +		void __iomem *vaddr_iomem;
> > +		unsigned long size = bo->num_pages << PAGE_SHIFT;
>
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.
>
> We have a unit test that allocates an 8GB BO, and that should work on a
> 32-bit machine as well :)
>
> > +
> > +		if (mem->bus.addr)
> > +			vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > +		else if (mem->placement & TTM_PL_FLAG_WC)
>
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as a replacement.
>
> > +			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > +		else
> > +			vaddr_iomem = ioremap(mem->bus.offset, size);
> > +
> > +		if (!vaddr_iomem)
> > +			return -ENOMEM;
> > +
> > +		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > +
> > +	} else {
> > +		struct ttm_operation_ctx ctx = {
> > +			.interruptible = false,
> > +			.no_wait_gpu = false
> > +		};
> > +		struct ttm_tt *ttm =3D bo->ttm;
> > +		pgprot_t prot;
> > +		void *vaddr;
> > +
> > +		BUG_ON(!ttm);
>
> I think we can drop this; populate will just crash badly anyway.
>
> > +
> > +		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > +		if (ret)
> > +			return ret;
> > +
> > +		/*
> > +		 * We need to use vmap to get the desired page protection
> > +		 * or to make the buffer object look contiguous.
> > +		 */
> > +		prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.

Thanks for quickly reviewing these patches. My drm-tip seems out of date
(last Sunday). TTM is moving fast these days and I still have to get used to
that. :)

Best regards
Thomas

>
> Regards,
> Christian.
>
> > +		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > +		if (!vaddr)
> > +			return -ENOMEM;
> > +
> > +		dma_buf_map_set_vaddr(map, vaddr);
> > +	}
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vmap);
> > +
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > +{
> > +	if (dma_buf_map_is_null(map))
> > +		return;
> > +
> > +	if (map->is_iomem)
> > +		iounmap(map->vaddr_iomem);
> > +	else
> > +		vunmap(map->vaddr);
> > +	dma_buf_map_clear(map);
> > +
> > +	ttm_mem_io_free(bo->bdev, &bo->mem);
> > +}
> > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > +
> >   static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> >   				 bool dst_use_tt)
> >   {
> > diff --git a/include/drm/drm_gem_ttm_helper.h
> > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 100644
> > --- a/include/drm/drm_gem_ttm_helper.h
> > +++ b/include/drm/drm_gem_ttm_helper.h
> > @@ -10,11 +10,17 @@
> >   #include <drm/ttm/ttm_bo_api.h>
> >   #include <drm/ttm/ttm_bo_driver.h>
> >
> > +struct dma_buf_map;
> > +
> >   #define drm_gem_ttm_of_gem(gem_obj) \
> >   	container_of(gem_obj, struct ttm_buffer_object, base)
> >
> >   void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
> >   			    const struct drm_gem_object *gem);
> >   			    const struct drm_gem_object *gem);
> > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > +		     struct dma_buf_map *map);
> > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > +			struct dma_buf_map *map);
> >   int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> >   		     struct vm_area_struct *vma);
> >
> > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> > index 37102e45e496..2c59a785374c 100644
> > --- a/include/drm/ttm/ttm_bo_api.h
> > +++ b/include/drm/ttm/ttm_bo_api.h
> > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> >
> >   struct ttm_bo_device;
> >
> > +struct dma_buf_map;
> > +
> >   struct drm_mm_node;
> >
> >   struct ttm_placement;
> > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > unsigned long start_page, */
> >   void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> >
> > +/**
> > + * ttm_bo_vmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: pointer to a struct dma_buf_map representing the map.
> > + *
> > + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> > + * data in the buffer object. The parameter @map returns the virtual
> > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > + *
> > + * Returns
> > + * -ENOMEM: Out of memory.
> > + * -EINVAL: Invalid range.
> > + */
> > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > +
> > +/**
> > + * ttm_bo_vunmap
> > + *
> > + * @bo: The buffer object.
> > + * @map: Object describing the map to unmap.
> > + *
> > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > + */
> > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > *map); +
> >   /**
> >    * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> >    *
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index fd1aba545fdf..2e8bbecb5091 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -45,6 +45,12 @@
> >    *
> >    *	dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> >    *
> > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > + *
> > + * .. code-block:: c
> > + *
> > + *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > + *
> >    * Test if a mapping is valid with either dma_buf_map_is_set() or
> >    * dma_buf_map_is_null().
> >    *
> > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > dma_buf_map *map, void *vaddr) map->is_iomem = false;
> >   }
> >
> > +/**
> > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an
> > address in I/O memory
> > + * @map:		The dma-buf mapping structure
> > + * @vaddr_iomem:	An I/O-memory address
> > + *
> > + * Sets the address and the I/O-memory flag.
> > + */
> > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > +					       void __iomem *vaddr_iomem)
> > +{
> > +	map->vaddr_iomem = vaddr_iomem;
> > +	map->is_iomem = true;
> > +}
> > +
> >   /**
> >    * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
> > equality
> >    * @lhs:	The dma-buf mapping structure
>
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 18:00:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 18:00:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7710.20301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7Y2-0007DG-Vm; Thu, 15 Oct 2020 18:00:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7710.20301; Thu, 15 Oct 2020 18:00:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7Y2-0007D9-S5; Thu, 15 Oct 2020 18:00:18 +0000
Received: by outflank-mailman (input) for mailman id 7710;
 Thu, 15 Oct 2020 18:00:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+LXD=DW=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kT7Y1-0007D4-LM
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 18:00:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be0109e6-9f76-472e-a777-c045964be67d;
 Thu, 15 Oct 2020 18:00:17 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 823E122263;
 Thu, 15 Oct 2020 18:00:15 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+LXD=DW=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kT7Y1-0007D4-LM
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 18:00:17 +0000
X-Inumbo-ID: be0109e6-9f76-472e-a777-c045964be67d
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id be0109e6-9f76-472e-a777-c045964be67d;
	Thu, 15 Oct 2020 18:00:17 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 823E122263;
	Thu, 15 Oct 2020 18:00:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602784816;
	bh=EISWL5wvNeHXVXUA8VqngZry+deFyoOswKvzNvnlW5I=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=JylAledAIJeKKsQnjEAQl9hZM736l3atMXsMUUVeRrwzxuOLlAuIWgZHlXrBn+OLg
	 jJgKm9QWiohKjfMKIihf0x6tACO7jgh6ZzfK366K0l6TJroljce+exeHu8bEGIbT1U
	 TtHL2Wzx/n7Q9b/sE3CwHJwcX71Es34HiVbpFNRc=
Date: Thu, 15 Oct 2020 11:00:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Elliott Mitchell <ehem+xen@m5p.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    xen-devel@lists.xenproject.org, Alex Bennée <alex.bennee@linaro.org>, 
    bertrand.marquis@arm.com, andre.przywara@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH 0/4] xen/arm: Unbreak ACPI
In-Reply-To: <1b5f19f6-ea70-9e58-bf36-de7f7d54153a@xen.org>
Message-ID: <alpine.DEB.2.21.2010151058370.10386@sstabellini-ThinkPad-T480s>
References: <20200926205542.9261-1-julien@xen.org> <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com> <1a7b5a14-7d21-b067-a80b-27d963f9798a@xen.org> <alpine.DEB.2.21.2010121157350.10386@sstabellini-ThinkPad-T480s> <20201012213451.GA89158@mattapan.m5p.com>
 <alpine.DEB.2.21.2010131759270.10386@sstabellini-ThinkPad-T480s> <20201014013706.GA98635@mattapan.m5p.com> <1b5f19f6-ea70-9e58-bf36-de7f7d54153a@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 14 Oct 2020, Julien Grall wrote:
> Hi Elliott,
> 
> On 14/10/2020 02:37, Elliott Mitchell wrote:
> > On Tue, Oct 13, 2020 at 06:06:26PM -0700, Stefano Stabellini wrote:
> > > On Mon, 12 Oct 2020, Elliott Mitchell wrote:
> > > > I'm on different hardware, but some folks have set up Tianocore for it.
> > > > According to Documentation/arm64/acpi_object_usage.rst,
> > > > "Required: DSDT, FADT, GTDT, MADT, MCFG, RSDP, SPCR, XSDT".  Yet when
> > > > booting a Linux kernel directly on the hardware it lists APIC, BGRT,
> > > > CSRT, DSDT, DBG2, FACP, GTDT, PPTT, RSDP, and XSDT.
> > > > 
> > > > I don't know whether Linux's ACPI code omits mention of some required
> > > > tables and merely panics if they're absent.  Yet I'm speculating the
> > > > list of required tables has shrunk, SPCR is no longer required, and the
> > > > documentation is out of date.  Perhaps SPCR was required in early Linux
> > > > ACPI implementations, but more recent ones removed that requirement?
> > > 
> > > I have just checked and SPCR is still a mandatory table in the latest
> > > SBBR specification. It is probably one of those cases where the firmware
> > > claims to be SBBR compliant, but it is not, and it happens to work with
> > > Linux.
> > 
> > Is meeting the SBBR specification supposed to be a requirement of running
> > Xen-ARM?
> 
> This is not my goal. We should try to get Xen running everywhere as long as
> this doesn't require a lot of extra code. IOW, don't ask me to review/accept a
> port of Xen to RPI3 ;).

I agree with Julien's statement.

For context, my previous comment regarding SBBR was made because I am
positive that Masami's platform is expected to be SBBR compliant.


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 18:06:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 18:06:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7713.20312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7dY-0007PO-Je; Thu, 15 Oct 2020 18:06:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7713.20312; Thu, 15 Oct 2020 18:06:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kT7dY-0007PH-Gp; Thu, 15 Oct 2020 18:06:00 +0000
Received: by outflank-mailman (input) for mailman id 7713;
 Thu, 15 Oct 2020 18:05:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+LXD=DW=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kT7dX-0007PB-6z
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 18:05:59 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a32abb7-50cb-4704-8e62-86c9f1e33be4;
 Thu, 15 Oct 2020 18:05:58 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E9E6122240;
 Thu, 15 Oct 2020 18:05:56 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+LXD=DW=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kT7dX-0007PB-6z
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 18:05:59 +0000
X-Inumbo-ID: 5a32abb7-50cb-4704-8e62-86c9f1e33be4
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5a32abb7-50cb-4704-8e62-86c9f1e33be4;
	Thu, 15 Oct 2020 18:05:58 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id E9E6122240;
	Thu, 15 Oct 2020 18:05:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602785157;
	bh=3ZXv7TN0z+XQ/IBIv2+pK/DMCeCBVLy6j6t4P2cgPb8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=UAkE7RG2/Lw3W0U77KJ+MiS7QPDkkmS1KZHE8aAd/L2DyhCG6jJQCdlQlwKzVihM6
	 TzfEhVH3emuCjvhfSJjAZeicPIIb+UphtYfAppTmNuryX7MGHW3B0m16n5qZYW+BUg
	 BeQOqstIJMT2HqhfKCKB1uigejew3NIkbsMJjdPU=
Date: Thu, 15 Oct 2020 11:05:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
In-Reply-To: <C07DA84A-6527-4480-99CC-F6B26553E3FE@arm.com>
Message-ID: <alpine.DEB.2.21.2010151104200.10386@sstabellini-ThinkPad-T480s>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com> <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com> <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com> <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
 <alpine.DEB.2.21.2010141350170.10386@sstabellini-ThinkPad-T480s> <C07DA84A-6527-4480-99CC-F6B26553E3FE@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-574996916-1602785115=:10386"
Content-ID: <alpine.DEB.2.21.2010151105400.10386@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-574996916-1602785115=:10386
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2010151105401.10386@sstabellini-ThinkPad-T480s>

On Thu, 15 Oct 2020, Bertrand Marquis wrote:
> > On 14 Oct 2020, at 22:15, Stefano Stabellini <sstabellini@kernel.org> wrote:
> > 
> > On Wed, 14 Oct 2020, Julien Grall wrote:
> >> On 14/10/2020 17:03, Bertrand Marquis wrote:
> >>>> On 14 Oct 2020, at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> >>>> 
> >>>> On 14/10/2020 11:41, Bertrand Marquis wrote:
> >>>>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
> >>>>> not implementing the workaround for it could deadlock the system.
> >>>>> Add a warning during boot informing the user that only trusted guests
> >>>>> should be executed on the system.
> >>>>> An equivalent warning is already given to the user by KVM on cores
> >>>>> affected by this errata.
> >>>>> 
> >>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> >>>>> ---
> >>>>> xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
> >>>>> 1 file changed, 21 insertions(+)
> >>>>> 
> >>>>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> >>>>> index 6c09017515..8f9ab6dde1 100644
> >>>>> --- a/xen/arch/arm/cpuerrata.c
> >>>>> +++ b/xen/arch/arm/cpuerrata.c
> >>>>> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
> >>>>> 
> >>>>> #endif
> >>>>> 
> >>>>> +#ifdef CONFIG_ARM64_ERRATUM_832075
> >>>>> +
> >>>>> +static int warn_device_load_acquire_errata(void *data)
> >>>>> +{
> >>>>> +    static bool warned = false;
> >>>>> +
> >>>>> +    if ( !warned )
> >>>>> +    {
> >>>>> +        warning_add("This CPU is affected by the errata 832075.\n"
> >>>>> +                    "Guests without required CPU erratum workarounds\n"
> >>>>> +                    "can deadlock the system!\n"
> >>>>> +                    "Only trusted guests should be used on this
> >>>>> system.\n");
> >>>>> +        warned = true;
> >>>> 
> >>>> This is an antipattern, which probably wants fixing elsewhere as well.
> >>>> 
> >>>> warning_add() is __init.  It's not legitimate to call from a non-init
> >>>> function, and a less useless build system would have modpost to object.
> >>>> 
> >>>> The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
> >>>> but this provides no safety at all.
> >>>> 
> >>>> 
> >>>> What warning_add() actually does is queue messages for some point near
> >>>> the end of boot.  It's not clear that this is even a clever thing to do.
> >>>> 
> >>>> I'm very tempted to suggest a blanket change to printk_once().
> >>> 
> >>> If this is needed then this could be done in another series?
> >> 
> >> The callback ->enable() will be called when a CPU is onlined/offlined. So this
> >> is going to be required if you plan to support CPU hotplug or suspend/resume.
> >> 
> >>> Would be good to keep this patch as purely handling the errata.
> > 
> > My preference would be to keep this patch small with just the errata,
> > maybe using a simple printk_once as Andrew and Julien discussed.
> > 
> > There is another instance of warning_add potentially being called
> > outside __init in xen/arch/arm/cpuerrata.c:
> > enable_smccc_arch_workaround_1. So if you are up for it, it would be
> > good to produce a patch to fix that too.
> > 
> > 
> >> In the case of this patch, how about moving the warning_add() in
> >> enable_errata_workarounds()?
> >> 
> >> By then we should know all the errata present on your platform. All CPUs
> >> onlined afterwards (i.e. at runtime) should always abide by the set
> >> discovered during boot.
> > 
> > If I understand your suggestion correctly, it would work for
> > warn_device_load_acquire_errata, because it is just a warning, but it
> > would not work for enable_smccc_arch_workaround_1, because there is
> > actually a call to be made there.
> > 
> > Maybe it would be simpler to use printk_once in both cases? I don't have
> > a strong preference either way.
> 
> I could do the following (in a series of 2 patches):
> - modify enable_smccc_arch_workaround_1 to use printk_once with a 
>   prefix/suffix “****” on each line printed (and maybe adapting the print 
>   to fit a line length of 80)
> - modify my patch to do the print in enable_errata_workarounds, also using
>   the prefix/suffix and printk_once
> 
> Please confirm that this strategy works for everyone.

I think it is OK but if you are going to use printk_once in your patch
you might as well leave it in the .enable implementation.

Julien, what do you think?
--8323329-574996916-1602785115=:10386--


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 20:44:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 20:44:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7734.20353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTA6d-0003zQ-9q; Thu, 15 Oct 2020 20:44:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7734.20353; Thu, 15 Oct 2020 20:44:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTA6d-0003zJ-68; Thu, 15 Oct 2020 20:44:11 +0000
Received: by outflank-mailman (input) for mailman id 7734;
 Thu, 15 Oct 2020 20:44:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTA6b-0003yk-MG
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 20:44:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9e9456fa-3d98-46cb-8226-d8e814c44ade;
 Thu, 15 Oct 2020 20:44:02 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTA6T-0006gb-Sh; Thu, 15 Oct 2020 20:44:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTA6T-0003UQ-KR; Thu, 15 Oct 2020 20:44:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTA6T-0006sa-Jw; Thu, 15 Oct 2020 20:44:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kTA6b-0003yk-MG
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 20:44:09 +0000
X-Inumbo-ID: 9e9456fa-3d98-46cb-8226-d8e814c44ade
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9e9456fa-3d98-46cb-8226-d8e814c44ade;
	Thu, 15 Oct 2020 20:44:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PN151mKlleD537yMAoYqEJL9eI2kCPE5pbphfoCh4tM=; b=Na3InzV8tlMkwZMMSSINLM6C/E
	CrSj28xwRUteDiw+uqXt4qomTJ3yHNWopEv5aWWWB+6vf9WjYXfEBj8CQzaDtzaH9EpptmZm25saY
	j+I1f7TapXLuyvg8Rbe+DPo29OshqTbFmJUM8HcSv7q4C5/YImAGVveBHh2lhdfXNQCs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTA6T-0006gb-Sh; Thu, 15 Oct 2020 20:44:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTA6T-0003UQ-KR; Thu, 15 Oct 2020 20:44:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTA6T-0006sa-Jw; Thu, 15 Oct 2020 20:44:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155869-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155869: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-localmigrate:fail:allowable
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ee2e66674f36b6d27a95f4ddf27226905cc63a4
X-Osstest-Versions-That:
    xen=a8a85f03c826bea045e345fa405f187049d63584
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 20:44:01 +0000

flight 155869 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155869/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 16 guest-localmigrate fail REGR. vs. 155850

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6ee2e66674f36b6d27a95f4ddf27226905cc63a4
baseline version:
 xen                  a8a85f03c826bea045e345fa405f187049d63584

Last test of basis   155850  2020-10-15 14:00:24 Z    0 days
Testing same since   155869  2020-10-15 18:00:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a8a85f03c8..6ee2e66674  6ee2e66674f36b6d27a95f4ddf27226905cc63a4 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 20:46:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 20:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7737.20366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTA8l-00047u-So; Thu, 15 Oct 2020 20:46:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7737.20366; Thu, 15 Oct 2020 20:46:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTA8l-00047n-PN; Thu, 15 Oct 2020 20:46:23 +0000
Received: by outflank-mailman (input) for mailman id 7737;
 Thu, 15 Oct 2020 20:46:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTA8j-00046t-Oh
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 20:46:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da4e239a-7a3c-445c-9450-57b928c76dcc;
 Thu, 15 Oct 2020 20:46:13 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTA8b-0006jZ-7s; Thu, 15 Oct 2020 20:46:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTA8b-0003aI-0n; Thu, 15 Oct 2020 20:46:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTA8b-0007WH-0G; Thu, 15 Oct 2020 20:46:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pHSr=DW=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kTA8j-00046t-Oh
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 20:46:21 +0000
X-Inumbo-ID: da4e239a-7a3c-445c-9450-57b928c76dcc
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id da4e239a-7a3c-445c-9450-57b928c76dcc;
	Thu, 15 Oct 2020 20:46:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aIbHS0bRnYqMlsMid43NXEnz/MfbBSxUcBlK8ixmB6A=; b=LEGx3OBF8svBpNO8CBsa+03+U0
	Lixi5cbU43ttRLrGds64QdrHyyxIrzjv2E5vUGmVj6GXNZ897IBzjMzqtv5nK07Ji5Nq4HK6izdyk
	KXyYzk9H6Uyy2KMPYAFIjdB+nZvuHzKUJq3HhVRmJNDI8Zpk8l4lmlsVTiZdFJG9Lc/8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTA8b-0006jZ-7s; Thu, 15 Oct 2020 20:46:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTA8b-0003aI-0n; Thu, 15 Oct 2020 20:46:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTA8b-0007WH-0G; Thu, 15 Oct 2020 20:46:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155841-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155841: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:debian-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-pair:guest-migrate/src_host/dst_host:fail:heisenbug
    qemu-mainline:test-amd64-i386-xl-raw:debian-di-install:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=57c98ea9acdcef5021f5671efa6475a5794a51c4
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 15 Oct 2020 20:46:13 +0000

flight 155841 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155841/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-intel 12 debian-install fail in 155819 pass in 155841
 test-amd64-amd64-pair   26 guest-migrate/src_host/dst_host fail pass in 155819
 test-amd64-i386-xl-raw       12 debian-di-install          fail pass in 155819

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                57c98ea9acdcef5021f5671efa6475a5794a51c4
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   56 days
Failing since        152659  2020-08-21 14:07:39 Z   55 days   96 attempts
Testing same since   155819  2020-10-14 23:08:47 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45458 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 15 20:53:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 15 Oct 2020 20:53:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7741.20379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTAFH-00052h-MC; Thu, 15 Oct 2020 20:53:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7741.20379; Thu, 15 Oct 2020 20:53:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTAFH-00052a-Iq; Thu, 15 Oct 2020 20:53:07 +0000
Received: by outflank-mailman (input) for mailman id 7741;
 Thu, 15 Oct 2020 20:53:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LoCs=DW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kTAFG-00052V-Bg
 for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 20:53:06 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b282f8b-bde0-4e56-a135-fcbb19191e26;
 Thu, 15 Oct 2020 20:53:04 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=LoCs=DW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kTAFG-00052V-Bg
	for xen-devel@lists.xenproject.org; Thu, 15 Oct 2020 20:53:06 +0000
X-Inumbo-ID: 9b282f8b-bde0-4e56-a135-fcbb19191e26
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9b282f8b-bde0-4e56-a135-fcbb19191e26;
	Thu, 15 Oct 2020 20:53:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602795185;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=/RrzrOoo+RYQA/Og0Sa6N8M5S4GugatRwPXCV5pJh4c=;
  b=L3MIgksjNdpxD7twSVqaWUCkhCu4Q5oFagSty5fcbs3k5SaMu/qw4gF3
   RBBB4tuX4Py7tZJo/rfO3dIURxdVcexBXseonGRAIXZH819OcS3DiTbQX
   oi3EKWhuHm+xG0HSv8NL3svKMKtvSUBI7p+VdGswHhr+RZ8xApeCJI4EO
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: LdCh5F02p4Sr7dmSiuxc4BPwRSGzFbNahRc/1+BMwlyqq3SN4CMt1GkzdWj64VwDIyEnbw4/T6
 Xfxace0VTwwTmw9f6mV0iqvG9r+jMmSBEsoYa5tmmBoeZuNySJz/DMbMS97Hb/VQK/aptuA4v4
 wiIXUOFCgagcew8Z6LC2s3VqSoJ5FAercYzcPY12iGkz9NeFAWwn/bD6ulB5wpEv1nJWIbg3yx
 BonjVHlJ13RvEQwKLd979AMTWt7VZ+D/8Kz4JuaX9ZIcKm801WTxfQWAWfgUdaXfZuXk6diGrY
 2qg=
X-SBRS: 2.5
X-MesageID: 29122980
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,380,1596513600"; 
   d="scan'208";a="29122980"
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Jan Beulich
	<jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei Liu
	<wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>, George Dunlap
	<George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
 <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
 <abd6d752-9a7f-fcf6-3273-82512c590151@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ad909278-8ab0-dc7a-2004-5efd08e5acbd@citrix.com>
Date: Thu, 15 Oct 2020 21:52:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <abd6d752-9a7f-fcf6-3273-82512c590151@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 15/10/2020 11:41, Jürgen Groß wrote:
> On 15.10.20 12:09, Jan Beulich wrote:
>> On 15.10.2020 09:58, Jürgen Groß wrote:
>>> After a short discussion on IRC yesterday I promised to send a mail
>>> describing how I think we could get rid of creating dynamic links,
>>> especially for header files, in the Xen build process.
>>>
>>> This will require some restructuring; the amount will depend on the
>>> selected way to proceed:
>>>
>>> - avoid links completely, requires more restructuring
>>> - avoid only dynamically created links, i.e. allowing some static
>>>     links which are committed to git
>>
>> While I like the latter better, I'd like to point out that not all
>> file systems support symlinks, and hence the repo then couldn't be
>> stored on (or the tarball expanded onto) such a file system. Note
>> that this may be just for viewing purposes - I do this typically at
>> home -, i.e. there's no resulting limitation from the build process
>> needing symlinks. Similarly, once we fully support out of tree
>> builds, there wouldn't be any restriction from this as long as just
>> the build tree is placed on a capable file system.
>>
>> As a result I'd like to propose variant 2´: Reduce the number of
>> dynamically created symlinks to a minimum. This said, I have to
>> admit that I haven't really understood yet why symlinks are bad.
>> They exist for exactly such purposes, I would think.
>
> Not the symlinks as such, but the dynamically created ones seem to be
> a problem, as we stumble upon those again and again.

We have multiple build system bugs every release to do with dynamically
generated symlinks.  Given that symlinks aren't a hard requirement, this
is a massive price to pay, and time which could be better spent doing
other things.

Also, they prevent building from a read-only source directory.

The asm symlink in particular prevents any attempt to do concurrent
builds of xen.  In future, I'd love to be able to do concurrent
out-of-tree builds of Xen for different architectures, because the
elapsed time for these is one limiting factor in my pre-push sanity
checks.

Personally, I'd prefer option 1 in the long run, but I've got no
problems with achieving option 2 as an intermediate goal.

>
>>
>>> The difference between the two variants affects the public headers
>>> in xen/include/public/: avoiding even static links would require
>>> adding another directory, or moving those headers to another place
>>> in the tree (either xen/include/public/xen/, or some other path
>>> */xen), leading to the need to change all #include statements in
>>> the hypervisor which use <public/...> today.
>>>
>>> The path needs to contain "xen/" because the Xen library headers
>>> (which are installed on users' machines) include the public
>>> hypervisor headers via "#include <xen/...>", and we can't change
>>> that scheme. A static link can avoid this problem via a different
>>> path, but without any link we can't do that.
>>>
>>> Apart from that decision, let's look at which links are created
>>> today for accessing the header files (I'll assume my series moving
>>> the library headers to tools/include will be taken, so the links
>>> created in staging today are not mentioned) and what can be done to
>>> avoid them:
>>>
>>> - xen/include/asm -> xen/include/asm-<arch>:
>>>     Move all headers from xen/include/asm-<arch> to
>>>     xen/arch/<arch>/include/asm and add that path to CFLAGS via the
>>>     "-I" flag. This has the further nice advantages that most
>>>     architecture-specific files are then in xen/arch (apart from the
>>>     public headers) and that we can even add generic fallback
>>>     headers in xen/include/asm in case an arch doesn't need a
>>>     specific header file.
>>
>> Iirc Andrew suggested years ago that we follow Linux in this regard
>> (and XTF already does). My only concern here is the churn this will
>> cause for backports.
>
> Changing a directory name in a patch isn't that hard, IMO.

Also git (if you throw it the correct runes) can cope with this
automatically.
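
To illustrate (a minimal sketch; the repo layout, branch names, and
file contents below are hypothetical, not the real Xen tree): git's
rename detection lets a cherry-pick of a fix made against the old
header location apply cleanly after the directory move.

```shell
#!/bin/sh
# Sketch: git's rename detection carries a backport across a directory
# move.  All paths, branch names and contents are illustrative.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email editor@example.com
git config user.name editor

# Base: a header at the old location.
mkdir -p xen/include/asm-x86
printf '#define FOO 1\n' > xen/include/asm-x86/foo.h
git add -A && git commit -qm "base"

# A fix written against the old layout.
git checkout -qb fix-branch
printf '#define BAR 2\n' >> xen/include/asm-x86/foo.h
git commit -qam "add BAR"

# Mainline: headers move to xen/arch/<arch>/include/asm.
git checkout -q -
mkdir -p xen/arch/x86/include/asm
git mv xen/include/asm-x86/foo.h xen/arch/x86/include/asm/foo.h
git commit -qm "move headers"

# The cherry-pick's three-way merge detects the rename and applies
# the fix at the new path, with no manual path fix-up.
git cherry-pick fix-branch >/dev/null
grep -q BAR xen/arch/x86/include/asm/foo.h && echo "fix landed at new path"
```

The same mechanism covers `git am -3` when it falls back to a
three-way merge, so existing backport workflows should survive the
rename largely unchanged.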

>>> - xen/arch/<arch>/efi/*.[ch] -> xen/common/efi/*.[ch]:
>>>     Use vpath for the *.c files, and the "-I" flag to add
>>>     common/efi to the include path, in the
>>>     xen/arch/<arch>/efi/Makefile.
>>
>> Fine with me.

Something which has been irritating me for years is that cscope doesn't
tolerate the efi symlinks.  This would be a great solution.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 00:40:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 00:40:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7750.20399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTDmb-0007FJ-FV; Fri, 16 Oct 2020 00:39:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7750.20399; Fri, 16 Oct 2020 00:39:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTDmb-0007FC-At; Fri, 16 Oct 2020 00:39:45 +0000
Received: by outflank-mailman (input) for mailman id 7750;
 Fri, 16 Oct 2020 00:39:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4tE9=DX=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kTDmZ-0007F6-86
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 00:39:43 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39bd4f66-7deb-4e53-b153-db2ac39a2d1e;
 Fri, 16 Oct 2020 00:39:41 +0000 (UTC)
X-Inumbo-ID: 39bd4f66-7deb-4e53-b153-db2ac39a2d1e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602808781;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=A7SS5LkOU52vzc6By7hOafvn87xPxCwRLNNCpbXX8O8=;
  b=CY/Vb/jCcahcaFT7KBAJv+xxZz5jgjL5y0kfdVEOQ4krZhWAtNji5mCC
   1LUEqc8KX7jB0QMdc6PCywd+WZzRSaG2C83pHnQeEksq/0Sw3MAOtNRMz
   wNmrg9zy8rF32ffpyVRdMmDPgTHuMng8I8zjQZ92EJydWkuHCXwe/kPtI
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: sfdou8sKDvvWTiRJit1Kwt5bCz0OD68kt32J5SZ8stCn13rsb/nbREETUQAygbLmL/P0YAjBBn
 Eo3GN+g+yMoATm8ZaxgMV2RmgDOA8KpyX5icnNN+EVYs7/qGrKOWH9W5BwntTbcl25pWqJqTMC
 Uj5ziVpgDwbNPtkiCjh0rFwrCP/8YlnPHuolDoSPwLk2xMPtVn3MiRvMfXf9ia3sU50zXvE121
 21SnF6/Sg95mrPynirtJU84xIz8FGEFy4gO3ob82p048jbOp09uUKaCyuUDnAXe5rhmRo6g2Ur
 n9s=
X-SBRS: 2.5
X-MesageID: 29377436
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,380,1596513600"; 
   d="scan'208";a="29377436"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<wl@xen.org>, <iwj@xenproject.org>, Igor Druzhinin
	<igor.druzhinin@citrix.com>
Subject: [PATCH v2] hvmloader: flip "ACPI data" to "ACPI NVS" type for ACPI table region
Date: Fri, 16 Oct 2020 01:39:23 +0100
Message-ID: <1602808763-22396-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

The ACPI specification describes memory marked with the regular "ACPI data"
type as reclaimable by the guest. Although the guest shouldn't really reclaim
it if it wants kexec or similar functionality to work, there is still room
for ambiguity in treating these regions as potentially regular RAM.

One such example is SeaBIOS, which currently reports "ACPI data" regions as
RAM to the guest in its e801 call - which it is arguably entitled to do, as
any user of e801 is expected to be ACPI unaware. But QEMU's bootloader later
ignores that fact and uses e801 to find a place for the initrd, which causes
the tables to be erased. While arguably the QEMU bootloader or SeaBIOS needs
to be fixed / improved here, this is just one example of the potential
problems that arise from using a reclaimable memory type.

Flip the type to "ACPI NVS", which doesn't carry this ambiguity and is
described by the spec as non-reclaimable (so it can never be treated as RAM).

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
Changes in v2:
- Put the exact reasoning into a comment
- Improved commit message
---
 tools/firmware/hvmloader/e820.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/tools/firmware/hvmloader/e820.c b/tools/firmware/hvmloader/e820.c
index 38bcf18..c490a0b 100644
--- a/tools/firmware/hvmloader/e820.c
+++ b/tools/firmware/hvmloader/e820.c
@@ -202,16 +202,21 @@ int build_e820_table(struct e820entry *e820,
     nr++;
 
     /*
-     * Mark populated reserved memory that contains ACPI tables as ACPI data.
+     * Mark populated reserved memory that contains ACPI tables as ACPI NVS.
      * That should help the guest to treat it correctly later: e.g. pass to
-     * the next kernel on kexec or reclaim if necessary.
+     * the next kernel on kexec.
+     *
+     * Using NVS type instead of a regular one helps to prevent potential
+     * space reuse by an ACPI unaware / buggy bootloader, option ROM, etc.
+     * before an ACPI OS takes control. This is possible due to the fact that
+     * ACPI NVS memory is explicitly described as non-reclaimable in ACPI spec.
      */
 
     if ( acpi_enabled )
     {
         e820[nr].addr = RESERVED_MEMBASE;
         e820[nr].size = acpi_mem_end - RESERVED_MEMBASE;
-        e820[nr].type = E820_ACPI;
+        e820[nr].type = E820_NVS;
         nr++;
     }
 
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 01:01:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 01:01:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7753.20413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTE7w-0001Y9-Ag; Fri, 16 Oct 2020 01:01:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7753.20413; Fri, 16 Oct 2020 01:01:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTE7w-0001Ww-7J; Fri, 16 Oct 2020 01:01:48 +0000
Received: by outflank-mailman (input) for mailman id 7753;
 Fri, 16 Oct 2020 01:01:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTE7u-0007Pj-La
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 01:01:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c5e2a21-091f-4ee6-9a52-8c830e0165de;
 Fri, 16 Oct 2020 01:01:38 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTE7m-0006Tn-6s; Fri, 16 Oct 2020 01:01:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTE7l-0001zs-RK; Fri, 16 Oct 2020 01:01:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTE7l-0005ES-Qp; Fri, 16 Oct 2020 01:01:37 +0000
X-Inumbo-ID: 9c5e2a21-091f-4ee6-9a52-8c830e0165de
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lHIUIxH8O3FR/ny7b8vpt0enNKzgxjtK8ZZhoG3NTAA=; b=USeESViNIPfHrStTQRhj3g4rqU
	Z6O0XczRl8wx4y6goq4mCR3JkLR/+k5Mtv8SQjrLTBQhd1J+p5+FgI/PnP2Zqh9iNSKQHBRufulcn
	pUME4DKhMJQGQPs3BkaUFbY3std9/AZ4HNv09gpYpDKlgqpnrmlIfvYJWLBuKzfV2PT0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155861-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155861: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-i386:xen-build:fail:regression
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=f776e5fb3ee699745f6442ec8c47d0fa647e0575
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 01:01:37 +0000

flight 155861 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155861/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 155788
 build-amd64-xsm               6 xen-build                fail REGR. vs. 155788
 build-i386                    6 xen-build                fail REGR. vs. 155788

Tests which did not succeed, but are not blocking:
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155788
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155788
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  f776e5fb3ee699745f6442ec8c47d0fa647e0575
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155788  2020-10-14 01:52:30 Z    1 days
Failing since        155810  2020-10-14 16:08:32 Z    1 days    3 attempts
Testing same since   155861  2020-10-15 15:37:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Michal Orzel <michal.orzel@arm.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 425 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 05:32:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 05:32:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7762.20437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTILN-0000AN-TS; Fri, 16 Oct 2020 05:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7762.20437; Fri, 16 Oct 2020 05:31:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTILN-0000AF-PV; Fri, 16 Oct 2020 05:31:57 +0000
Received: by outflank-mailman (input) for mailman id 7762;
 Fri, 16 Oct 2020 05:31:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTILM-00009n-Fu
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 05:31:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e4d9edc-c181-4cf8-a54a-7387c7f3300b;
 Fri, 16 Oct 2020 05:31:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTILC-0002db-OP; Fri, 16 Oct 2020 05:31:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTILC-0005or-AO; Fri, 16 Oct 2020 05:31:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTILC-0004iC-9q; Fri, 16 Oct 2020 05:31:46 +0000
X-Inumbo-ID: 4e4d9edc-c181-4cf8-a54a-7387c7f3300b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Mya5wJyNW1+wDyGF1UlXxkecspDvFA20JB+rjsm1kns=; b=noMybcIxPA66qvMsl5mi2sMMx8
	8I4wGvHPhC3z/+EFbZBUl1oumBTLjA6tyAdp/ttmZF/T8WWT1OcoCL3mdT/956bmRE0giPAbjQwGl
	lOW6Ok2kjquGGXjLaBXy6qagn3zu5Ca4SKcBpKXfeqn6hLrblUw91onX6y8GY94hDQdQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155863-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155863: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-multivcpu:guest-start.2:fail:heisenbug
    linux-linus:test-amd64-amd64-pair:guest-migrate/src_host/dst_host:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=3e4fb4346c781068610d03c12b16c0cfb0fd24a3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 05:31:46 +0000

flight 155863 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155863/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install fail in 155829 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 155829 pass in 155863
 test-amd64-amd64-xl-multivcpu 23 guest-start.2   fail in 155829 pass in 155863
 test-amd64-amd64-pair   26 guest-migrate/src_host/dst_host fail pass in 155829
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen        fail pass in 155829
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 155829
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen        fail pass in 155829
 test-amd64-amd64-libvirt-vhd 12 debian-di-install          fail pass in 155829

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 155829 blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 155829 blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11) fail in 155829 blocked in 152332
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check fail in 155829 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                3e4fb4346c781068610d03c12b16c0cfb0fd24a3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   76 days
Failing since        152366  2020-08-01 20:49:34 Z   75 days  128 attempts
Testing same since   155829  2020-10-15 04:03:41 Z    1 days    2 attempts

------------------------------------------------------------
2788 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 424158 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 05:47:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 05:47:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7765.20449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTIZr-0001EB-3C; Fri, 16 Oct 2020 05:46:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7765.20449; Fri, 16 Oct 2020 05:46:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTIZq-0001E4-V4; Fri, 16 Oct 2020 05:46:54 +0000
Received: by outflank-mailman (input) for mailman id 7765;
 Fri, 16 Oct 2020 05:46:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTIZp-0001Dz-Pd
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 05:46:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ec9f4c4-aa57-4a10-80b0-566ee85ce3e7;
 Fri, 16 Oct 2020 05:46:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1B36ACC2;
 Fri, 16 Oct 2020 05:46:49 +0000 (UTC)
X-Inumbo-ID: 6ec9f4c4-aa57-4a10-80b0-566ee85ce3e7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602827210;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=x68ZJyqcU61n/+34vorNes1lfkpEuiCzjckRd48mSD4=;
	b=LHPWSdn0XOQSHkVEpiw6GCSvIx9MBJn0IXjEBV5XdVrzOwF557VZ1XZIighuqbD6HURd8o
	2k0g3UTOwcIoS4yM5emOTXoYTLXZjJDXKdnHs8Z07K7XGipuwRaGRlgmSg4JwM5XsTdUdw
	x7zXXjW7K0UqkVQGPeVnIJvO5mik0mU=
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
 <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
 <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2e6f77c8-7ae6-3831-9221-b45c406e35a0@suse.com>
Date: Fri, 16 Oct 2020 07:46:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.20 14:07, Jan Beulich wrote:
> On 14.10.2020 13:40, Julien Grall wrote:
>> Hi Jan,
>>
>> On 13/10/2020 15:26, Jan Beulich wrote:
>>> On 13.10.2020 16:20, Jürgen Groß wrote:
>>>> On 13.10.20 15:58, Jan Beulich wrote:
>>>>> On 12.10.2020 11:27, Juergen Gross wrote:
>>>>>> The queue used for a fifo event depends on the vcpu_id and the
>>>>>> priority of the event. When sending an event, the event might need
>>>>>> to change queues, and the old queue must be kept around to keep the
>>>>>> links between queue elements intact. For this purpose the event
>>>>>> channel contains last_priority and last_vcpu_id values, which make
>>>>>> it possible to identify the old queue.
>>>>>>
>>>>>> In order to avoid races, always access last_priority and
>>>>>> last_vcpu_id together with a single atomic operation, avoiding any
>>>>>> inconsistencies.
>>>>>>
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>
>>>>> I seem to vaguely recall that at the time this seemingly racy
>>>>> access was done on purpose by David. Did you go look at the
>>>>> old commits to understand whether there really is a race which
>>>>> can't be tolerated within the spec?
>>>>
>>>> At least the comments in the code tell us that the race regarding
>>>> the writing of priority (not last_priority) is acceptable.
>>>
>>> Ah, then it was comments. I knew I read something to this effect
>>> somewhere, recently.
>>>
>>>> Especially Julien was rather worried by the current situation. In
>>>> case you can convince him the current handling is fine, we can
>>>> easily drop this patch.
>>>
>>> Julien, in the light of the above - can you clarify the specific
>>> concerns you (still) have?
>>
>> Let me start with the assumption that evtchn->lock is not held when
>> evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.
> 
> But this isn't interesting - we know there are paths where it is
> held, and ones (interdomain sending) where it's the remote port's
> lock instead which is held. What's important here is that a
> _consistent_ lock be held (but it doesn't need to be evtchn's).
> 
>>   From my understanding, the goal of lock_old_queue() is to return the
>> old queue used.
>>
>> last_priority and last_vcpu_id may be updated separately and I could not
>> convince myself that it would not be possible to return a queue that is
>> neither the current one nor the old one.
>>
>> The following could happen if evtchn->priority and
>> evtchn->notify_vcpu_id keep changing between calls.
>>
>> pCPU0				| pCPU1
>> 				|
>> evtchn_fifo_set_pending(v0,...)	|
>> 				| evtchn_fifo_set_pending(v1, ...)
>>    [...]				|
>>    /* Queue has changed */	|
>>    evtchn->last_vcpu_id = v0 	|
>> 				| -> evtchn_old_queue()
>> 				| v = d->vcpu[evtchn->last_vcpu_id];
>>     				| old_q = ...
>> 				| spin_lock(old_q->...)
>> 				| v = ...
>> 				| q = ...
>> 				| /* q and old_q would be the same */
>> 				|
>>    evtchn->last_priority = priority|
>>
>> If my diagram is correct, then pCPU1 would return a queue that is
>> neither the current one nor the old one.
> 
> I think I agree.
> 
>> In which case, I think it would at least be possible to corrupt the
>> queue. From evtchn_fifo_set_pending():
>>
>>           /*
>>            * If this event was a tail, the old queue is now empty and
>>            * its tail must be invalidated to prevent adding an event to
>>            * the old queue from corrupting the new queue.
>>            */
>>           if ( old_q->tail == port )
>>               old_q->tail = 0;
>>
>> Did I miss anything?
> 
> I don't think you did. The important point though is that a consistent
> lock is being held whenever we come here, so two racing set_pending()
> aren't possible for one and the same evtchn. As a result I don't think
> the patch here is actually needed.

Julien, do you agree?

Can I drop this patch?


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 06:14:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 06:14:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7769.20463 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJ0V-0003rZ-9A; Fri, 16 Oct 2020 06:14:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7769.20463; Fri, 16 Oct 2020 06:14:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJ0V-0003rS-5x; Fri, 16 Oct 2020 06:14:27 +0000
Received: by outflank-mailman (input) for mailman id 7769;
 Fri, 16 Oct 2020 06:14:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTJ0U-0003rN-5G
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:14:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fcb634d-9ba3-4962-b263-356091e0d4a6;
 Fri, 16 Oct 2020 06:14:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTJ0Q-0003c2-4A; Fri, 16 Oct 2020 06:14:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTJ0P-00089s-QT; Fri, 16 Oct 2020 06:14:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTJ0P-0004ST-Pw; Fri, 16 Oct 2020 06:14:21 +0000
X-Inumbo-ID: 8fcb634d-9ba3-4962-b263-356091e0d4a6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CONbHGYHryVnVM33gwjs6s2ntJIHuu1OTincjmsqqhM=; b=6Ii/JlOwlxjjWDypAfiL3iQcms
	ylfg9ZpKk4qHPab5YQI/sEF2mCV99dk2pB3ATdP1Y1WrxeQUbIPNGyv+PJ/FJkA9DXZSk0MQPH9u9
	S/6SRQcLj+SD2BF73OuUmH+ozkNtvKUolJTUO1YungwSug3mFmpcbbSR3DEV/sxaSk9c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155877-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155877: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:freebsd-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e545512b5e26f1e69fcd4c88df3c12853946dcdb
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 06:14:21 +0000

flight 155877 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155877/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 12 freebsd-install        fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e545512b5e26f1e69fcd4c88df3c12853946dcdb
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   56 days
Failing since        152659  2020-08-21 14:07:39 Z   55 days   97 attempts
Testing same since   155877  2020-10-15 21:07:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45639 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 06:30:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 06:30:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7774.20479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJFW-0004ve-PN; Fri, 16 Oct 2020 06:29:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7774.20479; Fri, 16 Oct 2020 06:29:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJFW-0004vX-Lp; Fri, 16 Oct 2020 06:29:58 +0000
Received: by outflank-mailman (input) for mailman id 7774;
 Fri, 16 Oct 2020 06:29:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTJFV-0004vS-D3
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:29:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b1f3fe9-4c4c-41d7-9b25-59aeaf36b3be;
 Fri, 16 Oct 2020 06:29:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 68133AAB2;
 Fri, 16 Oct 2020 06:29:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602829795;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=48TEeKQ3r3RulpCqqFBSkHHiIbkv+XpY/YgA5NU9RG0=;
	b=Q4YGHZ/D48KPyz9EU5JN/ir/Q13NIEjuvsSu/966kOCqbSWtjwZnc2swDI+CVoaH4yagcO
	IajvzYzRww6oMXAVUQgtnfAixUYGJ7CYpawMf6umrKuK9inRMsKhgvlrQFTyKxRn7h9Hav
	qZRY+lMwNhs6zJQM1IN9ZubbkZqIToA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 68133AAB2;
	Fri, 16 Oct 2020 06:29:55 +0000 (UTC)
Subject: Re: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cad29fdb-089a-541b-6c5b-538d96441714@suse.com>
Date: Fri, 16 Oct 2020 08:29:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
> @@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>       */
>      if ( p2m_is_valid(orig_pte) &&
>           !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
> +    {
> +#ifdef CONFIG_IOREQ_SERVER
> +        if ( domain_has_ioreq_server(p2m->domain) &&
> +             (p2m->domain == current->domain) && p2m_is_ram(orig_pte.p2m.type) )
> +            p2m->domain->qemu_mapcache_invalidate = true;
> +#endif
>          p2m_free_entry(p2m, orig_pte, level);
> +    }

For all I have to say here, please bear in mind that I don't know
the internals of Arm memory management.

The first odd thing here is the merely MFN-based condition. It may
well be that it's sufficient, if there's no way to get a "not present"
entry with an MFN matching any valid MFN. (This isn't just with
your addition, but was the case even before.)

Given how p2m_free_entry() works (or is supposed to work in the
long run), is the new code you add guaranteed to only alter leaf
entries? If not, the freeing of page tables needs deferring until
after qemu has dropped its mappings.

And with there being refcounting only for foreign pages, how do
you prevent the freeing of the page just unmapped before qemu has
dropped its possible mapping? On the x86 side this problem is one
of the reasons why PVH Dom0 isn't "supported", yet. At least a
respective code comment would seem advisable, so the issue to be
addressed won't be forgotten.

> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1442,6 +1442,7 @@ static void do_trap_hypercall(struct cpu_user_regs *regs, register_t *nr,
>                                const union hsr hsr)
>  {
>      arm_hypercall_fn_t call = NULL;
> +    struct vcpu *v = current;

This ought to be named "curr".

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 06:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 06:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7778.20491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJJr-0005mB-Bo; Fri, 16 Oct 2020 06:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7778.20491; Fri, 16 Oct 2020 06:34:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJJr-0005m4-6o; Fri, 16 Oct 2020 06:34:27 +0000
Received: by outflank-mailman (input) for mailman id 7778;
 Fri, 16 Oct 2020 06:34:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTJJp-0005ly-2l
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:34:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac51cd41-d605-47cc-94b0-057500150fd2;
 Fri, 16 Oct 2020 06:34:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5BB33AAB2;
 Fri, 16 Oct 2020 06:34:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602830062;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SBYcJbm6E2RUymANpskA8uW7qA08jhfAaNuFPo147lU=;
	b=HbMJv5Ccu9XcMMs1LKWXT7p394MbksH5RsyxwbngLpKmLK8j9nXA56E64KriFmhPe0g3Rq
	aRcIBJzHWTmU+1A7FzdoX4IPo9ZB90q92SZwyfobDKIgMNRENPgYthvg7Gpd6PtRZdyMRf
	WH+Uo+lpftFmWlLkyjPs7uik469HWJM=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 5BB33AAB2;
	Fri, 16 Oct 2020 06:34:22 +0000 (UTC)
Subject: Re: [PATCH v2] hvmloader: flip "ACPI data" to "ACPI NVS" type for
 ACPI table region
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, iwj@xenproject.org
References: <1602808763-22396-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ca9ba430-f5d8-f520-e7db-3e8d41cd7d9b@suse.com>
Date: Fri, 16 Oct 2020 08:34:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <1602808763-22396-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.10.2020 02:39, Igor Druzhinin wrote:
> The ACPI specification contains statements describing memory marked with the
> regular "ACPI data" type as reclaimable by the guest. Although the guest
> shouldn't really reclaim it if it wants kexec or similar functionality to
> work, there could still be ambiguities in treating these regions as
> potentially regular RAM.
> 
> One such example is SeaBIOS, which currently reports "ACPI data" regions as
> RAM to the guest in its e801 call. It might have the right to do so, as any
> user of this call is expected to be ACPI-unaware. But a QEMU bootloader later
> seems to ignore that fact and instead uses e801 to find a place for the
> initrd, which causes the tables to be erased. While arguably the QEMU
> bootloader or SeaBIOS needs to be fixed / improved here, that is just one
> example of the potential problems of using a reclaimable memory type.
> 
> Flip the type to "ACPI NVS", which doesn't have this ambiguity and is
> described by the spec as non-reclaimable (so can never be treated like RAM).
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 06:44:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 06:44:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7781.20502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJTK-0006i4-7v; Fri, 16 Oct 2020 06:44:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7781.20502; Fri, 16 Oct 2020 06:44:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJTK-0006hx-52; Fri, 16 Oct 2020 06:44:14 +0000
Received: by outflank-mailman (input) for mailman id 7781;
 Fri, 16 Oct 2020 06:44:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTJTI-0006hs-Su
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:44:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f628764-2c53-4fdd-bc58-e06c0d956297;
 Fri, 16 Oct 2020 06:44:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 813D3ACD9;
 Fri, 16 Oct 2020 06:44:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602830648;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=HwZIN0/5cggcluncTcnNNvqypXWTvfy5WvjBm4xdXaM=;
	b=sJkyLCgTSNtFUM3cazEmrkG7Zuh0KbzJpwyPmc05eHPcaUBC7sBT7qtIw76r8gyMw8lmLf
	1I71SbGcXh0HNZHpdwgQKPlEkJ68XttkFLXqEGLbxXVy7sJIcAsALTPL/JQkIpJsM3xPCH
	oSXrExojoE0S3qS5C+hNDuXZ7TUWJn4=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 813D3ACD9;
	Fri, 16 Oct 2020 06:44:08 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: XENMAPSPACE_gmfn{,_batch,_range} want to special case
 idx == gpfn
Message-ID: <920fa307-190e-dc11-f338-5b44a2126050@suse.com>
Date: Fri, 16 Oct 2020 08:44:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In this case, up to now, we've been freeing the page (through
guest_remove_page(), with the actual free typically happening at the
put_page() later in the function), but then failing the call on the
subsequent GFN consistency check. However, in my opinion such a request
should complete as an "expensive" no-op (leaving aside the potential
unsharing of the page).

This points out that f33d653f46f5 ("x86: replace bad ASSERT() in
xenmem_add_to_physmap_one()") would really have needed an XSA, despite
its description claiming otherwise, as in release builds we then put in
place a P2M entry referencing the about-to-be-freed page.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I've been considering making such operations "cheap" no-ops rather than
"expensive" ones, by comparing idx and gpfn early in the function in
the XENMAPSPACE_gmfn case block, but I've come to the conclusion that
having the operation otherwise proceed normally is better - this way,
errors that would occur if idx != gpfn will still surface. While I'm
open to arguments for the other alternative, having the added check be
MFN-based makes it crystal clear that we're dealing with the same
underlying physical resource, i.e. it also covers the hypothetical(?)
case of two GFNs referring to the same MFN.

I'm unconvinced that it is correct for prev_mfn's p2mt to not be
inspected at all - I don't think things will go right if p2m_shared()
was true for it. But I'm afraid I'm not up to correcting mem-sharing
related logic.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4555,7 +4555,7 @@ int xenmem_add_to_physmap_one(
         if ( is_special_page(mfn_to_page(prev_mfn)) )
             /* Special pages are simply unhooked from this phys slot. */
             rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
-        else
+        else if ( !mfn_eq(mfn, prev_mfn) )
             /* Normal domain memory is freed, to avoid leaking memory. */
             rc = guest_remove_page(d, gfn_x(gpfn));
     }


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 06:45:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 06:45:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7783.20514 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJUk-0006pY-J3; Fri, 16 Oct 2020 06:45:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7783.20514; Fri, 16 Oct 2020 06:45:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJUk-0006pR-G1; Fri, 16 Oct 2020 06:45:42 +0000
Received: by outflank-mailman (input) for mailman id 7783;
 Fri, 16 Oct 2020 06:45:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTJUj-0006pH-0F
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:45:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4196fd8b-ea82-4d4f-a40d-cbd7bd9890c4;
 Fri, 16 Oct 2020 06:45:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2F6FAACD5;
 Fri, 16 Oct 2020 06:45:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602830739;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=aOS7bPFV+sLf66LB+makeYj9odLxiFHiWVjcl0XyTw4=;
	b=shWazdF6MU5052F3s+xyu4OdttfeXqQIWgiFVxdjwnmhEozagM3DGPkS2LJncz3xqKtqcJ
	aFG/uyfUt7W7uj0Y5Ud7RZkEoKryyTQ/iNtWfIiMAEMTCoXoBVpsBlf86lbH/hNiGT0eWe
	RWmek9mRAJ2sg0qstLP0skeAHHU3bds=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2F6FAACD5;
	Fri, 16 Oct 2020 06:45:39 +0000 (UTC)
Subject: Re: [PATCH v2] x86/smpboot: Don't unconditionally call
 memguard_guard_stack() in cpu_smpboot_alloc()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201014184708.17758-1-andrew.cooper3@citrix.com>
 <0ed412d9-c9a2-194b-c953-c74ee102664f@suse.com>
 <0a294279-5de5-3b54-b1f9-847de1159447@citrix.com>
 <578a0afd-693a-c704-317e-477e5e27d497@suse.com>
 <5df2626b-8755-8cdb-7cbc-74d51b569a0b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6f3accdd-1581-cc70-10bf-017b762ea56d@suse.com>
Date: Fri, 16 Oct 2020 08:45:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <5df2626b-8755-8cdb-7cbc-74d51b569a0b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.2020 18:38, Andrew Cooper wrote:
> On 15/10/2020 16:16, Jan Beulich wrote:
>> On 15.10.2020 16:02, Andrew Cooper wrote:
>>> On 15/10/2020 09:50, Jan Beulich wrote:
>>>> On 14.10.2020 20:47, Andrew Cooper wrote:
>>>>> cpu_smpboot_alloc() is designed to be idempotent with respect to partially
>>>>> initialised state.  This occurs for S3 and CPU parking, where enough state to
>>>>> handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
>>>>> when we otherwise want to offline the CPU.
>>>>>
>>>>> For simplicity across the various configurations, Xen always uses shadow stack
>>>>> mappings (Read-only + Dirty) for the guard page, irrespective of whether
>>>>> CET-SS is enabled.
>>>>>
>>>>> Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
>>>>> by first writing out the supervisor shadow stack tokens with plain writes,
>>>>> then changing the mapping to being read-only.
>>>>>
>>>>> This ordering is strictly necessary to configure the BSP, which cannot have
>>>>> the supervisor tokens be written with WRSS.
>>>>>
>>>>> Instead of calling memguard_guard_stack() unconditionally, call it only when
>>>>> actually allocating a new stack.  Xenheap allocations are guaranteed to be
>>>>> writeable, and the net result is idempotency WRT configuring stack_base[].
>>>>>
>>>>> Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
>>>>> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> ---
>>>>> CC: Jan Beulich <JBeulich@suse.com>
>>>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>>>> CC: Wei Liu <wl@xen.org>
>>>>>
>>>>> This can more easily be demonstrated with CPU hotplug than S3, and the absence
>>>>> of bug reports goes to show how rarely hotplug is used.
>>>>>
>>>>> v2:
>>>>>  * Don't break S3/CPU parking in combination with CET-SS.  v1 would, for S3,
>>>>>    turn the BSP shadow stack into regular mappings, and #DF as soon as the TLB
>>>>>    shootdown completes.
>>>> The code change looks correct to me, but since I don't understand
>>>> this part I'm afraid I may be overlooking something. I understand
>>>> the "turn the BSP shadow stack into regular mappings" relates to
>>>> cpu_smpboot_free()'s call to memguard_unguard_stack(), but I
>>>> didn't think we come through cpu_smpboot_free() for the BSP upon
>>>> entering or leaving S3.
>>> The v1 really did fix Marek's repro of the problem.
>>>
>>> The only possible way this can occur is if, somewhere, there is a call
>>> to cpu_smpboot_free() for CPU0 with remove=0 on the S3 path
>> I didn't think it was the BSP's stack that got written to, but the
>> first AP's before letting it run.
> 
> Oh yes - my analysis was wrong.  The CPU notifier for CPU 1 to come up
> runs on CPU 0.
> 
> So only the --- text was wrong.  Are you happy with the fix now?

Indeed I am:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
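[Archive note: the fix under discussion — calling memguard_guard_stack() only when a fresh stack is actually allocated, so that re-onlining a CPU never rewrites shadow stack tokens through a now read-only mapping — can be illustrated with a toy Python model. All names below are simplified stand-ins for the real Xen code paths (cpu_smpboot_alloc(), memguard_guard_stack(), stack_base[]), not the actual implementation.]

```python
# Toy model of the v2 fix: guard a stack only when it is newly allocated,
# making CPU bringup idempotent WRT stack_base[] configuration.
# All names are hypothetical simplifications of the Xen code paths.

stack_base = {}          # models Xen's stack_base[] array


class Stack:
    def __init__(self):
        self.read_only = False
        self.tokens_written = False

    def write_tokens(self):
        # Plain writes: only legal while the mapping is still writable
        # (the BSP cannot use WRSS for this).
        if self.read_only:
            raise PermissionError("plain write to read-only shadow stack")
        self.tokens_written = True


def memguard_guard_stack(stack):
    stack.write_tokens()     # first write the tokens with plain writes...
    stack.read_only = True   # ...then flip the mapping to read-only


def cpu_smpboot_alloc(cpu):
    """Idempotent bringup: guard only a newly allocated stack."""
    if cpu not in stack_base:
        stack = Stack()                  # xenheap alloc: always writable
        memguard_guard_stack(stack)      # safe: mapping not yet read-only
        stack_base[cpu] = stack
    return stack_base[cpu]
```

Re-running bringup for the same CPU (hotplug, S3 resume) returns the already guarded stack instead of attempting a second token write, which in the unconditional variant would fault.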


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 06:52:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 06:52:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7788.20527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJbZ-0007jP-Bg; Fri, 16 Oct 2020 06:52:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7788.20527; Fri, 16 Oct 2020 06:52:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJbZ-0007jI-7u; Fri, 16 Oct 2020 06:52:45 +0000
Received: by outflank-mailman (input) for mailman id 7788;
 Fri, 16 Oct 2020 06:52:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTJbX-0007jD-H8
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:52:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 70b5cef0-477e-4eab-b283-597742a6446a;
 Fri, 16 Oct 2020 06:52:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0628CAF30;
 Fri, 16 Oct 2020 06:52:41 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kTJbX-0007jD-H8
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:52:43 +0000
X-Inumbo-ID: 70b5cef0-477e-4eab-b283-597742a6446a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 70b5cef0-477e-4eab-b283-597742a6446a;
	Fri, 16 Oct 2020 06:52:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602831161;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FU7VEYcmI1s3fO2D5qXBOl6HFx1yAiqrlE49qMsQuI8=;
	b=pCyLFIf8DqPUs5vNScYGA3RX6wCofxVM/9ofjeSTHhUHmDkmFNe3qqAdchBoSGOD+tleZa
	n1Xh4UJ3WxJakYgHM+yfwA63oKJCH2WpG7UtvJ5NHqROxeFX2nE1gTSkveWf2dRrFSDPc7
	Rl77hdR2Q+a1JGcw2Zvh+7n5YENZbeU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0628CAF30;
	Fri, 16 Oct 2020 06:52:41 +0000 (UTC)
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
 <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
 <abd6d752-9a7f-fcf6-3273-82512c590151@suse.com>
 <ad909278-8ab0-dc7a-2004-5efd08e5acbd@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <efb22794-7573-3fe8-516e-8f7a817341af@suse.com>
Date: Fri, 16 Oct 2020 08:52:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <ad909278-8ab0-dc7a-2004-5efd08e5acbd@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.2020 22:52, Andrew Cooper wrote:
> On 15/10/2020 11:41, Jürgen Groß wrote:
>> On 15.10.20 12:09, Jan Beulich wrote:
>>> On 15.10.2020 09:58, Jürgen Groß wrote:
>>>> After a short discussion on IRC yesterday I promised to send a mail
>>>> how I think we could get rid of creating dynamic links especially
>>>> for header files in the Xen build process.
>>>>
>>>> This will require some restructuring, the amount will depend on the
>>>> selected way to proceed:
>>>>
>>>> - avoid links completely, requires more restructuring
>>>> - avoid only dynamically created links, i.e. allowing some static
>>>>     links which are committed to git
>>>
>>> While I like the latter better, I'd like to point out that not all
>>> file systems support symlinks, and hence the repo then couldn't be
>>> stored on (or the tarball expanded onto) such a file system. Note
>>> that this may be just for viewing purposes - I do this typically at
>>> home -, i.e. there's no resulting limitation from the build process
>>> needing symlinks. Similarly, once we fully support out of tree
>>> builds, there wouldn't be any restriction from this as long as just
>>> the build tree is placed on a capable file system.
>>>
>>> As a result I'd like to propose variant 2´: Reduce the number of
>>> dynamically created symlinks to a minimum. This said, I have to
>>> admit that I haven't really understood yet why symlinks are bad.
>>> They exist for exactly such purposes, I would think.
>>
>> Not the symlinks as such, but the dynamically created ones seem to be
>> a problem, as we stumble upon those again and again.
> 
> We have multiple build system bugs every release to do with dynamically
> generated symlinks.  Given that symlinks aren't a hard requirement, this
> is a massive price to pay, and time which could be better spent doing
> other things.
> 
> Also, they prohibit the ability to build from a read-only source dir.

In which way? In an out-of-tree build (see Linux) this gets created
in the build tree, not the source one. Or else ...

> The asm symlink in particular prevents any attempt to do concurrent
> builds of xen.  In some future, I'd love to be able to do concurrent
> out-of-tree builds of Xen on different architectures, because elapsed
> time to do this is one limiting factor of mine on pre-push sanity checks.

... this wouldn't already be possible there (including different
arches being built from the same source tree).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 06:58:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 06:58:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7791.20539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJhE-0007xw-0V; Fri, 16 Oct 2020 06:58:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7791.20539; Fri, 16 Oct 2020 06:58:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJhD-0007xp-TH; Fri, 16 Oct 2020 06:58:35 +0000
Received: by outflank-mailman (input) for mailman id 7791;
 Fri, 16 Oct 2020 06:58:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTJhC-0007xJ-7R
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:58:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83dd5e2e-d7b7-4bcb-9f4c-15e5c30e5653;
 Fri, 16 Oct 2020 06:58:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1434FAD77;
 Fri, 16 Oct 2020 06:58:32 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kTJhC-0007xJ-7R
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:58:34 +0000
X-Inumbo-ID: 83dd5e2e-d7b7-4bcb-9f4c-15e5c30e5653
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 83dd5e2e-d7b7-4bcb-9f4c-15e5c30e5653;
	Fri, 16 Oct 2020 06:58:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602831512;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=42HArfaYBIcl6YoU2jHa+wxK8XDzY3WpSDQi8yRNQEA=;
	b=FtIrSmEEPd1QkvcCEsb5GZUIiYfj4dXx7JM0ecfXDkxIpOW2htE2R5+NWBtAJjO1xbuyQd
	S/88UCE37USanUZWJ7FD1r2CR38RYTmZWS27kblLj8USksi1kTV7AzkaiE5bOryg/0YcM6
	/OvsUPRojCq6xttqTBpVynO7k3y1sro=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 1434FAD77;
	Fri, 16 Oct 2020 06:58:32 +0000 (UTC)
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
 <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
 <abd6d752-9a7f-fcf6-3273-82512c590151@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <973eca36-d278-4c82-627a-e0d80a6055d5@suse.com>
Date: Fri, 16 Oct 2020 08:58:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <abd6d752-9a7f-fcf6-3273-82512c590151@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.2020 12:41, Jürgen Groß wrote:
> On 15.10.20 12:09, Jan Beulich wrote:
>> On 15.10.2020 09:58, Jürgen Groß wrote:
>>> After a short discussion on IRC yesterday I promised to send a mail
>>> how I think we could get rid of creating dynamic links especially
>>> for header files in the Xen build process.
>>>
>>> This will require some restructuring, the amount will depend on the
>>> selected way to proceed:
>>>
>>> - avoid links completely, requires more restructuring
>>> - avoid only dynamically created links, i.e. allowing some static
>>>     links which are committed to git
>>
>> While I like the latter better, I'd like to point out that not all
>> file systems support symlinks, and hence the repo then couldn't be
>> stored on (or the tarball expanded onto) such a file system. Note
>> that this may be just for viewing purposes - I do this typically at
>> home -, i.e. there's no resulting limitation from the build process
>> needing symlinks. Similarly, once we fully support out of tree
>> builds, there wouldn't be any restriction from this as long as just
>> the build tree is placed on a capable file system.
>>
>> As a result I'd like to propose variant 2´: Reduce the number of
>> dynamically created symlinks to a minimum. This said, I have to
>> admit that I haven't really understood yet why symlinks are bad.
>> They exist for exactly such purposes, I would think.
> 
> Not the symlinks as such, but the dynamically created ones seem to be
> a problem, as we stumble upon those again and again.

Well, the machinery to put them in place needs to be fixed (and
adjustments / additions made more carefully). Taken together with
what Andrew has said, option 2´ would move us in the same direction.

>>> The difference between both variants is affecting the public headers
>>> in xen/include/public/: avoiding even static links would require to
>>> add another directory or to move those headers to another place in the
>>> tree (either use xen/include/public/xen/, or some other path */xen),
>>> leading to the need to change all #include statements in the hypervisor
>>> using <public/...> today.
>>>
>>> The need for the path to have "xen/" is due to the Xen library headers
>>> (which are installed on user's machines) are including the public
>>> hypervisor headers via "#include <xen/...>" and we can't change that
>>> scheme. A static link can avoid this problem via a different path, but
>>> without any link we can't do that.
>>>
>>> Apart from that decision, lets look which links are created today for
>>> accessing the header files (I'll assume my series putting the library
>>> headers to tools/include will be taken, so those links being created
>>> in staging today are not mentioned) and what can be done to avoid them:
>>>
>>> - xen/include/asm -> xen/include/asm-<arch>:
>>>     Move all headers from xen/include/asm-<arch> to
>>>     xen/arch/<arch>/include/asm and add that path via "-I" flag to CFLAGS.
>>>     This has the other nice advantages that most architecture specific
>>>     files are now in xen/arch (apart from the public headers) and that we
>>>     can even add generic fallback headers in xen/include/asm in case an
>>>     arch doesn't need a specific header file.
>>
>> Iirc Andrew suggested years ago that we follow Linux in this regard
>> (and XTF already does). My only concern here is the churn this will
>> cause for backports.
> 
> Changing a directory name in a patch isn't that hard, IMO.

It's not hard at all, no, but it still takes some of the most precious
resource we have: time.

>>> - tools/include/xen/foreign -> tools/include/xen-foreign:
>>>     Get rid of tools/include/xen-foreign and generate the headers directly
>>>     in xen/include/public/foreign instead.
>>
>> Except that conceptually building in tools/ would better not alter
>> the xen/ subtree in any way.
> 
> I meant to generate the headers from the hypervisor build instead.

This would make the tools/ build dependent upon xen/ having been
built first, as I understand it, which I think we want to avoid.

>>> - tools/include/xen/lib/<arch>/* -> xen/include/xen/lib/<arch>/*:
>>>     Move xen/include/xen/lib/<arch> to xen/include/tools/lib/<arch> and
>>>     add "-Ixen/include/tools" to the CFLAGS of tools.
>>
>> Why not -Ixen/include/xen without any movement? Perhaps because
> 
> This would open up most of the hypervisor private headers to be
> easily includable by tools.

Without the xen/ prefix, yes. But if someone wants to violate the
naming scheme to get at them, adding a suitable number of ../ will
also work once symlinks aren't being used, or once whole directories
are symlinked instead of individual files.

Jan
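[Archive note: the ../ traversal point is easy to demonstrate. Once the include layout consists of real directories (or whole-directory symlinks) rather than per-file symlinks, relative path components resolve into the neighbouring private header tree; the concrete paths below are hypothetical.]

```python
import os.path

# Hypothetical layout: tools are given -Ixen/include/tools, and the
# hypervisor's private headers live next door in xen/include/xen.
# A ../ in the include path escapes straight into the private tree.
escaped = os.path.normpath("xen/include/tools/../xen/sched.h")
```

Nothing in the compiler stops such a traversal; only per-file symlinks (which have no real parent directory to ascend through on the target side) make it awkward.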


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 06:59:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 06:59:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7793.20550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJiG-00085M-DC; Fri, 16 Oct 2020 06:59:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7793.20550; Fri, 16 Oct 2020 06:59:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTJiG-00085F-AH; Fri, 16 Oct 2020 06:59:40 +0000
Received: by outflank-mailman (input) for mailman id 7793;
 Fri, 16 Oct 2020 06:59:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTJiF-000857-Ds
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:59:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de6f40f1-b3d7-4445-a692-5960ecd1428e;
 Fri, 16 Oct 2020 06:59:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DAA82B1BF;
 Fri, 16 Oct 2020 06:59:37 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kTJiF-000857-Ds
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 06:59:39 +0000
X-Inumbo-ID: de6f40f1-b3d7-4445-a692-5960ecd1428e
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id de6f40f1-b3d7-4445-a692-5960ecd1428e;
	Fri, 16 Oct 2020 06:59:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602831578;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oeT/manrINgkwYzE13nnGNV3XDrRLBaj7Dz4WwYFlwQ=;
	b=dqhdYA4iroe674ID1PHMoWK6p1App7hC0WmnWfqIqhqcnV5PcBk6Lm5VbQWGhGIDQ69o+f
	/TurQ8ZoWNPkGHDminVFkjYcWChQegF0GrEY0jVFncU853EcOlNd4xTSIEz1cRBxtePoWB
	reF4HPuERV6Ym1ecvAasurjmaV6myVA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id DAA82B1BF;
	Fri, 16 Oct 2020 06:59:37 +0000 (UTC)
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
 <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
 <0df66f1c-a02d-819c-0f05-8a7b26728e87@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a0523587-8209-b4b0-08fb-d50ed365051b@suse.com>
Date: Fri, 16 Oct 2020 08:59:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <0df66f1c-a02d-819c-0f05-8a7b26728e87@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 15.10.2020 12:49, Jürgen Groß wrote:
> On 15.10.20 12:09, Jan Beulich wrote:
>> On 15.10.2020 09:58, Jürgen Groß wrote:
>>> After a short discussion on IRC yesterday I promised to send a mail
>>> how I think we could get rid of creating dynamic links especially
>>> for header files in the Xen build process.
>>>
>>> This will require some restructuring, the amount will depend on the
>>> selected way to proceed:
>>>
>>> - avoid links completely, requires more restructuring
>>> - avoid only dynamically created links, i.e. allowing some static
>>>     links which are committed to git
>>
>> While I like the latter better, I'd like to point out that not all
>> file systems support symlinks, and hence the repo then couldn't be
>> stored on (or the tarball expanded onto) such a file system. Note
>> that this may be just for viewing purposes - I do this typically at
>> home -, i.e. there's no resulting limitation from the build process
>> needing symlinks. Similarly, once we fully support out of tree
>> builds, there wouldn't be any restriction from this as long as just
>> the build tree is placed on a capable file system.
>>
>> As a result I'd like to propose variant 2´: Reduce the number of
>> dynamically created symlinks to a minimum. This said, I have to
>> admit that I haven't really understood yet why symlinks are bad.
>> They exist for exactly such purposes, I would think.
> 
> Another option would be to create the needed links from ./configure
> instead of committing them to git.

Ah yes, this would indeed seem better to me. Not sure though whether
that's conceptually a legitimate thing to do.

Jan
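[Archive note: "create the needed links from ./configure" means the links exist before any build rule runs, so the build itself never mutates the source tree and re-configuring is harmless. A minimal sketch, with hypothetical names — the real work would happen in the configure script, not in Python:]

```python
import os
import tempfile


def configure_links(basedir, links):
    """Toy model of configure-time link creation: idempotent, done once
    up front rather than dynamically during the build."""
    for link, target in links.items():
        path = os.path.join(basedir, link)
        if not os.path.islink(path):     # re-running configure is a no-op
            os.symlink(target, path)


# demo: an "asm" link pointing at the arch-specific header directory
src = tempfile.mkdtemp()
inc = os.path.join(src, "include")
os.makedirs(os.path.join(inc, "asm-x86"))
configure_links(inc, {"asm": "asm-x86"})
```

The same idempotency is what committing static links to git would buy, without requiring the repository's file system to support symlinks.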


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 07:25:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 07:25:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7800.20567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTK72-0002FY-IM; Fri, 16 Oct 2020 07:25:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7800.20567; Fri, 16 Oct 2020 07:25:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTK72-0002FR-Ei; Fri, 16 Oct 2020 07:25:16 +0000
Received: by outflank-mailman (input) for mailman id 7800;
 Fri, 16 Oct 2020 07:25:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTK71-0002FM-5A
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 07:25:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bf5118c-7b2d-4b34-a96d-00d5dba143e8;
 Fri, 16 Oct 2020 07:25:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2D9A6AD04;
 Fri, 16 Oct 2020 07:25:13 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kTK71-0002FM-5A
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 07:25:15 +0000
X-Inumbo-ID: 3bf5118c-7b2d-4b34-a96d-00d5dba143e8
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3bf5118c-7b2d-4b34-a96d-00d5dba143e8;
	Fri, 16 Oct 2020 07:25:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602833113;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JFYvjnKPkIkyDnPbDArRURXDRNLfWak2oUB07OLBF7M=;
	b=olutEQY0Xjz0BuA1/QBGLs1LwH+6sw+kskd/EruVL7aNQnzfNI3q9o8ZixDNnRjVkRXppA
	uE6cx7LtYEU8+xquYKla9ZhxbTOvjNVTDHrA45hbvFSVE6yPzJG+tbC5adTYe12cB2LOiZ
	2ZENxTL3axPPN25cTnYsqiwc65CSUjY=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2D9A6AD04;
	Fri, 16 Oct 2020 07:25:13 +0000 (UTC)
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
 <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
 <abd6d752-9a7f-fcf6-3273-82512c590151@suse.com>
 <973eca36-d278-4c82-627a-e0d80a6055d5@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f879dac7-35a5-f07b-a869-80abd1351c28@suse.com>
Date: Fri, 16 Oct 2020 09:25:12 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <973eca36-d278-4c82-627a-e0d80a6055d5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.10.20 08:58, Jan Beulich wrote:
> On 15.10.2020 12:41, Jürgen Groß wrote:
>> On 15.10.20 12:09, Jan Beulich wrote:
>>> On 15.10.2020 09:58, Jürgen Groß wrote:
>>>> - tools/include/xen/foreign -> tools/include/xen-foreign:
>>>>      Get rid of tools/include/xen-foreign and generate the headers directly
>>>>      in xen/include/public/foreign instead.
>>>
>>> Except that conceptually building in tools/ would better not alter
>>> the xen/ subtree in any way.
>>
>> I meant to generate the headers from the hypervisor build instead.
> 
> This would make the tools/ build dependent upon xen/ having got
> built first aiui, which I think we want to avoid.

Today we have a mechanism to build tools/include (i.e. set up the links)
from the main Makefile. The same rule could be used to create the needed
headers in xen/include/public/foreign.

> 
>>>> - tools/include/xen/lib/<arch>/* -> xen/include/xen/lib/<arch>/*:
>>>>      Move xen/include/xen/lib/<arch> to xen/include/tools/lib/<arch> and
>>>>      add "-Ixen/include/tools" to the CFLAGS of tools.
>>>
>>> Why not -Ixen/include/xen without any movement? Perhaps because
>>
>> This would open up most of the hypervisor private headers to be
>> easily includable by tools.
> 
> Without the xen/ prefix, yes. But if someone wants to violate the
> naming scheme to get at them, adding a suitable number of ../ will
> also work as soon as symlinks aren't being used, or symlinks of
> full directories are used instead of ones referencing individual
> files.

We'd need to be very careful regarding name collisions in this case
(there is e.g. xen/list.h and we have at least one list.h in a local
directory). I'm not sure we want to take that additional risk.


Juergen
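[Archive note: the collision risk comes from how the preprocessor's -I search works — the first directory containing a matching name wins. A toy model, with a hypothetical tool directory, shows why exposing xen/include/xen directly makes a bare `list.h` ambiguous:]

```python
def resolve_include(name, search_dirs, files):
    """Toy model of the C preprocessor's -I search: first match wins."""
    for d in search_dirs:
        candidate = f"{d}/{name}"
        if candidate in files:
            return candidate
    return None


# hypothetical layout: a tool ships its own list.h, and the hypervisor
# has a private xen/include/xen/list.h
files = {
    "tools/libfoo/list.h",        # hypothetical local header
    "xen/include/xen/list.h",     # hypervisor private header
}
```

With -Ixen/include/xen added, a bare `#include <list.h>` would silently resolve against whichever directory comes first on the command line; keeping the xen/ prefix mandatory (via -Ixen/include or a moved tools subtree) sidesteps the ambiguity.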


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 08:11:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 08:11:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7817.20583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTKpo-00071a-IX; Fri, 16 Oct 2020 08:11:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7817.20583; Fri, 16 Oct 2020 08:11:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTKpo-00071T-FY; Fri, 16 Oct 2020 08:11:32 +0000
Received: by outflank-mailman (input) for mailman id 7817;
 Fri, 16 Oct 2020 08:11:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTKpn-00071K-6Y
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:11:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8ef2b531-8f47-443a-a27a-964a5303cee3;
 Fri, 16 Oct 2020 08:11:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 448ACAC82;
 Fri, 16 Oct 2020 08:11:29 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kTKpn-00071K-6Y
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:11:31 +0000
X-Inumbo-ID: 8ef2b531-8f47-443a-a27a-964a5303cee3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 8ef2b531-8f47-443a-a27a-964a5303cee3;
	Fri, 16 Oct 2020 08:11:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602835889;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7LhAESMXcWXHrHyTv9HCT2odTQIqZ5fpcfL3ur13ve4=;
	b=G+hgR+kP5e0bg3o0f/WLp82y3VLdz19pcL93aZfFZpG1Rw0/Ucr4ZMWTvcyc2HLkAnXSeA
	AJNJQj2m5dmZeP6ozaZQPlKAiiD7RI5annWBiKQLgLm+AMn89wOoRBF9njHKvSW12zrp7B
	oF4RBC6MfY+mvMwCQ5KRqcEkyEX8GRQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 448ACAC82;
	Fri, 16 Oct 2020 08:11:29 +0000 (UTC)
Subject: Re: Getting rid of (many) dynamic link creations in the xen build
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <85f1eea2-0c8b-de06-b9d8-69f9a7e34ea8@suse.com>
 <5c9d5d97-10c4-f5de-e4eb-7ae933706240@suse.com>
 <abd6d752-9a7f-fcf6-3273-82512c590151@suse.com>
 <973eca36-d278-4c82-627a-e0d80a6055d5@suse.com>
 <f879dac7-35a5-f07b-a869-80abd1351c28@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <85d0e991-154c-c1d7-1071-ad7fd3acf196@suse.com>
Date: Fri, 16 Oct 2020 10:11:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <f879dac7-35a5-f07b-a869-80abd1351c28@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.10.2020 09:25, Jürgen Groß wrote:
> On 16.10.20 08:58, Jan Beulich wrote:
>> On 15.10.2020 12:41, Jürgen Groß wrote:
>>> On 15.10.20 12:09, Jan Beulich wrote:
>>>> On 15.10.2020 09:58, Jürgen Groß wrote:
>>>>> - tools/include/xen/foreign -> tools/include/xen-foreign:
>>>>>      Get rid of tools/include/xen-foreign and generate the headers directly
>>>>>      in xen/include/public/foreign instead.
>>>>
>>>> Except that conceptually building in tools/ would better not alter
>>>> the xen/ subtree in any way.
>>>
>>> I meant to generate the headers from the hypervisor build instead.
>>
>> This would make the tools/ build dependent upon xen/ having got
>> built first aiui, which I think we want to avoid.
> 
> Today we have a mechanism to build tools/include (i.e. setup the links)
> from the main Makefile. The same rule could be used to create the needed
> headers in xen/include/public/foreign.

Oh, indeed.

>>>>> - tools/include/xen/lib/<arch>/* -> xen/include/xen/lib/<arch>/*:
>>>>>      Move xen/include/xen/lib/<arch> to xen/include/tools/lib/<arch> and
>>>>>      add "-Ixen/include/tools" to the CFLAGS of tools.
>>>>
>>>> Why not -Ixen/include/xen without any movement? Perhaps because
>>>
>>> This would open up most of the hypervisor private headers to be
>>> easily includable by tools.
>>
>> Without the xen/ prefix, yes. But if someone wants to violate the
>> naming scheme to get at them, adding a suitable number of ../ will
>> also work as soon as symlinks aren't being used, or symlinks of
>> full directories are used instead of ones referencing individual
>> files.
> 
> We'd need to be very careful regarding name collisions in this case
> (there is e.g. xen/list.h and we have at least one list.h in a local
> directory). I'm not sure we want to take that additional risk.

Well, headers in local dirs aren't a big problem - they get included
via #include "xyz.h" anyway. But I see your point, and this imo is an
argument to stick to symlinks, as they avoid unnecessary dir levels
and allow us to be as selective as we want/need to be.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 08:41:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 08:41:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7826.20598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLIz-0001Ds-V0; Fri, 16 Oct 2020 08:41:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7826.20598; Fri, 16 Oct 2020 08:41:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLIz-0001Dl-SF; Fri, 16 Oct 2020 08:41:41 +0000
Received: by outflank-mailman (input) for mailman id 7826;
 Fri, 16 Oct 2020 08:41:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VWaZ=DX=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kTLIy-0001Dg-CY
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:41:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aefffed1-0d7c-48ad-9a0f-16b40f0ad5c2;
 Fri, 16 Oct 2020 08:41:39 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTLIu-0007C6-AZ; Fri, 16 Oct 2020 08:41:36 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTLIu-0006iG-1B; Fri, 16 Oct 2020 08:41:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VWaZ=DX=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kTLIy-0001Dg-CY
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:41:40 +0000
X-Inumbo-ID: aefffed1-0d7c-48ad-9a0f-16b40f0ad5c2
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id aefffed1-0d7c-48ad-9a0f-16b40f0ad5c2;
	Fri, 16 Oct 2020 08:41:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JpUiGdpfcAUeQqezoyt1HraKe6cWV7EVmDIsmsuBiDE=; b=UMleSRYaRYRE3e0gk5mEwBlPrO
	SLrdxwq3IAM7cHiFiQ9o19Ou48axAyx0Z4rVgUd0e6fd5ZS1mrugs6wc4reBaH0Un+FU2grc7YJu4
	N8eNqE/EGhd0yz3WtcNW4vE0gkuqJRD8ShBNkb2HG8Ma5pUp85uRbNrN6YXteIo2oir8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kTLIu-0007C6-AZ; Fri, 16 Oct 2020 08:41:36 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kTLIu-0006iG-1B; Fri, 16 Oct 2020 08:41:36 +0000
Subject: Re: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling
To: Jan Beulich <jbeulich@suse.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
 <cad29fdb-089a-541b-6c5b-538d96441714@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b074eb70-a770-1f96-3d68-b06476b963ca@xen.org>
Date: Fri, 16 Oct 2020 09:41:33 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <cad29fdb-089a-541b-6c5b-538d96441714@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 16/10/2020 07:29, Jan Beulich wrote:
> On 15.10.2020 18:44, Oleksandr Tyshchenko wrote:
>> @@ -1067,7 +1068,14 @@ static int __p2m_set_entry(struct p2m_domain *p2m,
>>        */
>>       if ( p2m_is_valid(orig_pte) &&
>>            !mfn_eq(lpae_get_mfn(*entry), lpae_get_mfn(orig_pte)) )
>> +    {
>> +#ifdef CONFIG_IOREQ_SERVER
>> +        if ( domain_has_ioreq_server(p2m->domain) &&
>> +             (p2m->domain == current->domain) && p2m_is_ram(orig_pte.p2m.type) )
>> +            p2m->domain->qemu_mapcache_invalidate = true;
>> +#endif
>>           p2m_free_entry(p2m, orig_pte, level);
>> +    }
> 
> For all I have to say here, please bear in mind that I don't know
> the internals of Arm memory management.
> 
> The first odd thing here the merely MFN-based condition. It may
> well be that's sufficient, if there's no way to get a "not present"
> entry with an MFN matching any valid MFN. (This isn't just with
> your addition, but even before.
Invalid entries are always zeroed. So in theory the problem could arise 
if MFN 0 is used by the guest. It should not be possible on staging, but 
I agree this should be fixed.

> 
> Given how p2m_free_entry() works (or is supposed to work in the
> long run), is the new code you add guaranteed to only alter leaf
> entries?

This path may also be called with tables. I think we want to move the 
check into p2m_free_entry() so we can find the correct leaf type.

> If not, the freeing of page tables needs deferring until
> after qemu has dropped its mappings.

Freeing the page tables doesn't release the pages they map. So may I 
ask why we would need to defer it?

> 
> And with there being refcounting only for foreign pages, how do
> you prevent the freeing of the page just unmapped before qemu has
> dropped its possible mapping?
QEMU mappings can only be done using the foreign mapping interface. This 
means the page reference count will be incremented for each QEMU 
mapping. Therefore the page cannot disappear until QEMU has dropped the 
last reference.

> On the x86 side this problem is one
> of the reasons why PVH Dom0 isn't "supported", yet. At least a
> respective code comment would seem advisable, so the issue to be
> addressed won't be forgotten.

Are you sure? Isn't it because you don't take a reference on foreign 
pages while mapping them?

Anyway, Arm has supported foreign mapping since its inception. So if 
there is a bug, then it should be fixed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 08:54:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 08:54:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7829.20611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLUq-0002D8-2I; Fri, 16 Oct 2020 08:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7829.20611; Fri, 16 Oct 2020 08:53:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLUp-0002D1-VC; Fri, 16 Oct 2020 08:53:55 +0000
Received: by outflank-mailman (input) for mailman id 7829;
 Fri, 16 Oct 2020 08:53:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTLUo-0002Cr-Ah
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:53:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e64d87c-f3c5-472e-95bc-a569bfb942e0;
 Fri, 16 Oct 2020 08:53:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9C6BADFF;
 Fri, 16 Oct 2020 08:53:52 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kTLUo-0002Cr-Ah
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:53:54 +0000
X-Inumbo-ID: 4e64d87c-f3c5-472e-95bc-a569bfb942e0
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4e64d87c-f3c5-472e-95bc-a569bfb942e0;
	Fri, 16 Oct 2020 08:53:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602838432;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rSOB8WNkMaU2hnrfpEjv6May2yW7ubKae50ylsNLlyI=;
	b=JBkDJimAwG7YFhn7Tfy6L5waIaU2GINdyg3NuIpE1IGeZdYSB/vspy5e8BIWspmAs5R9KK
	zSmdkBkjXR5AKSqzZaSbrb4d20Xubf/XAsphoA/kBoflGHf2bdVSmHWlgJZBKJ7u2jhkII
	pMgSMsgPFhNapbRGP7k4epMyk3u/hmg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B9C6BADFF;
	Fri, 16 Oct 2020 08:53:52 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/3] xen/oprofile: use set_nmi_continuation() for sending virq to guest
Date: Fri, 16 Oct 2020 10:53:49 +0200
Message-Id: <20201016085350.10233-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201016085350.10233-1-jgross@suse.com>
References: <20201016085350.10233-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of calling send_guest_vcpu_virq() from NMI context, use the
NMI continuation framework for that purpose. This avoids taking locks
in NMI mode.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/oprofile/nmi_int.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/oprofile/nmi_int.c b/xen/arch/x86/oprofile/nmi_int.c
index 0f103d80a6..825f0aeef0 100644
--- a/xen/arch/x86/oprofile/nmi_int.c
+++ b/xen/arch/x86/oprofile/nmi_int.c
@@ -83,6 +83,13 @@ void passive_domain_destroy(struct vcpu *v)
 		model->free_msr(v);
 }
 
+static void nmi_oprofile_send_virq(void *arg)
+{
+	struct vcpu *v = arg;
+
+	send_guest_vcpu_virq(v, VIRQ_XENOPROF);
+}
+
 static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
 {
 	int xen_mode, ovf;
@@ -90,7 +97,7 @@ static int nmi_callback(const struct cpu_user_regs *regs, int cpu)
 	ovf = model->check_ctrs(cpu, &cpu_msrs[cpu], regs);
 	xen_mode = ring_0(regs);
 	if ( ovf && is_active(current->domain) && !xen_mode )
-		send_guest_vcpu_virq(current, VIRQ_XENOPROF);
+		set_nmi_continuation(nmi_oprofile_send_virq, current);
 
 	if ( ovf == 2 )
 		current->arch.nmi_pending = true;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 08:54:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 08:54:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7831.20635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLUu-0002GS-Kh; Fri, 16 Oct 2020 08:54:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7831.20635; Fri, 16 Oct 2020 08:54:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLUu-0002GL-Gi; Fri, 16 Oct 2020 08:54:00 +0000
Received: by outflank-mailman (input) for mailman id 7831;
 Fri, 16 Oct 2020 08:53:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTLUt-0002Cr-6S
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:53:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0514ee4d-f27f-4eec-91c8-876f51d5a0c1;
 Fri, 16 Oct 2020 08:53:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 92038ADE8;
 Fri, 16 Oct 2020 08:53:52 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kTLUt-0002Cr-6S
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:53:59 +0000
X-Inumbo-ID: 0514ee4d-f27f-4eec-91c8-876f51d5a0c1
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0514ee4d-f27f-4eec-91c8-876f51d5a0c1;
	Fri, 16 Oct 2020 08:53:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602838432;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zyQ6eY76TMGPa8WVIau7vgU5vO2lNHpIxzzWhjQGo/o=;
	b=BZLrBXjW2NcVN4gnBUmmQWXJtGnOha8usIRFFMsQ2VKj+cEPtjvMRMe+WLRCvluBZBmLsv
	Mii/naLC/ClGU3oocMpFgwxQ7VGuXUPGEGb2b5Fvpx+5gFEimK2aYGzlnT9/rxIA/zVZIY
	MIeL9jIWDa6bq+svRUamZlH99qnEcUM=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 92038ADE8;
	Fri, 16 Oct 2020 08:53:52 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/3] xen/x86: add nmi continuation framework
Date: Fri, 16 Oct 2020 10:53:48 +0200
Message-Id: <20201016085350.10233-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201016085350.10233-1-jgross@suse.com>
References: <20201016085350.10233-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Actions in NMI context are rather limited, as e.g. locking is fragile
there.

Add a generic framework to continue processing in normal interrupt
context after leaving NMI processing.

This is done by a high-priority interrupt vector triggered via a
self-IPI from NMI context, which will then call the continuation
function specified during NMI handling.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- add prototype for continuation function (Roger Pau Monné)
- switch from softirq to explicit self-IPI (Jan Beulich)
---
 xen/arch/x86/apic.c       | 13 +++++++---
 xen/arch/x86/traps.c      | 52 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/nmi.h | 13 +++++++++-
 3 files changed, 74 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 60627fd6e6..7497ddb5da 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -40,6 +40,7 @@
 #include <irq_vectors.h>
 #include <xen/kexec.h>
 #include <asm/guest.h>
+#include <asm/nmi.h>
 #include <asm/time.h>
 
 static bool __read_mostly tdt_enabled;
@@ -1376,16 +1377,22 @@ void spurious_interrupt(struct cpu_user_regs *regs)
 {
     /*
      * Check if this is a vectored interrupt (most likely, as this is probably
-     * a request to dump local CPU state). Vectored interrupts are ACKed;
-     * spurious interrupts are not.
+     * a request to dump local CPU state or to continue NMI handling).
+     * Vectored interrupts are ACKed; spurious interrupts are not.
      */
     if (apic_isr_read(SPURIOUS_APIC_VECTOR)) {
+        bool is_spurious;
+
         ack_APIC_irq();
+        is_spurious = !nmi_check_continuation();
         if (this_cpu(state_dump_pending)) {
             this_cpu(state_dump_pending) = false;
             dump_execstate(regs);
-            return;
+            is_spurious = false;
         }
+
+        if ( !is_spurious )
+            return;
     }
 
     /* see sw-dev-man vol 3, chapter 7.4.13.5 */
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index bc5b8f8ea3..6f4db9d549 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -79,6 +79,7 @@
 #include <public/hvm/params.h>
 #include <asm/cpuid.h>
 #include <xsm/xsm.h>
+#include <asm/mach-default/irq_vectors.h>
 #include <asm/pv/traps.h>
 #include <asm/pv/mm.h>
 
@@ -1799,6 +1800,57 @@ void unset_nmi_callback(void)
     nmi_callback = dummy_nmi_callback;
 }
 
+static DEFINE_PER_CPU(nmi_contfunc_t *, nmi_cont_func);
+static DEFINE_PER_CPU(void *, nmi_cont_arg);
+static DEFINE_PER_CPU(bool, nmi_cont_busy);
+
+bool nmi_check_continuation(void)
+{
+    unsigned int cpu = smp_processor_id();
+    nmi_contfunc_t *func = per_cpu(nmi_cont_func, cpu);
+    void *arg = per_cpu(nmi_cont_arg, cpu);
+
+    if ( per_cpu(nmi_cont_busy, cpu) )
+    {
+        per_cpu(nmi_cont_busy, cpu) = false;
+        printk("Trying to set NMI continuation while one is still active!\n");
+    }
+
+    /* Reads must be done before following write (local cpu ordering only). */
+    barrier();
+
+    per_cpu(nmi_cont_func, cpu) = NULL;
+
+    if ( func )
+        func(arg);
+
+    return func;
+}
+
+int set_nmi_continuation(nmi_contfunc_t *func, void *arg)
+{
+    unsigned int cpu = smp_processor_id();
+
+    if ( per_cpu(nmi_cont_func, cpu) )
+    {
+        per_cpu(nmi_cont_busy, cpu) = true;
+        return -EBUSY;
+    }
+
+    per_cpu(nmi_cont_func, cpu) = func;
+    per_cpu(nmi_cont_arg, cpu) = arg;
+
+    /*
+     * Issue a self-IPI. Handling is done in spurious_interrupt().
+     * NMI could have happened in IPI sequence, so wait for ICR being idle
+     * again before leaving NMI handler.
+     */
+    send_IPI_self(SPURIOUS_APIC_VECTOR);
+    apic_wait_icr_idle();
+
+    return 0;
+}
+
 void do_device_not_available(struct cpu_user_regs *regs)
 {
 #ifdef CONFIG_PV
diff --git a/xen/include/asm-x86/nmi.h b/xen/include/asm-x86/nmi.h
index a288f02a50..68db75b1ed 100644
--- a/xen/include/asm-x86/nmi.h
+++ b/xen/include/asm-x86/nmi.h
@@ -33,5 +33,16 @@ nmi_callback_t *set_nmi_callback(nmi_callback_t *callback);
 void unset_nmi_callback(void);
 
 DECLARE_PER_CPU(unsigned int, nmi_count);
- 
+
+typedef void nmi_contfunc_t(void *arg);
+
+/**
+ * set_nmi_continuation
+ *
+ * Schedule a function to be started in interrupt context after NMI handling.
+ */
+int set_nmi_continuation(nmi_contfunc_t *func, void *arg);
+
+/* Check for NMI continuation pending. */
+bool nmi_check_continuation(void);
 #endif /* ASM_NMI_H */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 08:54:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 08:54:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7830.20617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLUq-0002Da-Ci; Fri, 16 Oct 2020 08:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7830.20617; Fri, 16 Oct 2020 08:53:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLUq-0002DP-6w; Fri, 16 Oct 2020 08:53:56 +0000
Received: by outflank-mailman (input) for mailman id 7830;
 Fri, 16 Oct 2020 08:53:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTLUo-0002Cw-Tc
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:53:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 472e706b-4829-411a-9c0b-844cf330e8d5;
 Fri, 16 Oct 2020 08:53:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6DA3EADE4;
 Fri, 16 Oct 2020 08:53:52 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kTLUo-0002Cw-Tc
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:53:54 +0000
X-Inumbo-ID: 472e706b-4829-411a-9c0b-844cf330e8d5
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 472e706b-4829-411a-9c0b-844cf330e8d5;
	Fri, 16 Oct 2020 08:53:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602838432;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=uflXsgCH4arUjU1zZLQaRsAU0C5CIOEyLH+JEYOYK6k=;
	b=Z/J5gRoUPV+PO0kMiJFFsSN9ei68GzUGczE6Ikc5e3ChHnTrra5NkAeHQp1r55ug+TJYZd
	MAmJZq3AcX+QjnHZ11ETXP0nh7i9IPnYraWD/PmYFuG6PYprTELD2AxYmX6ICkjGqBU8rh
	TdCsLxpUNxhcaRqyXLGsyEB/DcjHxRk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 6DA3EADE4;
	Fri, 16 Oct 2020 08:53:52 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 0/3] xen/x86: implement NMI continuation
Date: Fri, 16 Oct 2020 10:53:47 +0200
Message-Id: <20201016085350.10233-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move sending of a virq event for oprofile to the local vcpu from NMI
to normal interrupt context.

This has been tested with a small test patch using the continuation
framework of patch 1 for all NMIs and doing a print to console in
the continuation handler.

Version 1 of this small series was sent to the security list before.

Changes in V3:
- switched to self-IPI instead of softirq
- added patch 3

Juergen Gross (3):
  xen/x86: add nmi continuation framework
  xen/oprofile: use set_nmi_continuation() for sending virq to guest
  xen/x86: issue pci_serr error message via NMI continuation

 xen/arch/x86/apic.c             | 13 +++++--
 xen/arch/x86/oprofile/nmi_int.c |  9 ++++-
 xen/arch/x86/traps.c            | 61 +++++++++++++++++++++++++++++----
 xen/include/asm-x86/nmi.h       | 13 ++++++-
 xen/include/asm-x86/softirq.h   |  5 ++-
 5 files changed, 87 insertions(+), 14 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 08:54:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 08:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7832.20647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLV0-0002LE-2Q; Fri, 16 Oct 2020 08:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7832.20647; Fri, 16 Oct 2020 08:54:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLUz-0002L7-V8; Fri, 16 Oct 2020 08:54:05 +0000
Received: by outflank-mailman (input) for mailman id 7832;
 Fri, 16 Oct 2020 08:54:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTLUy-0002Cr-6n
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:54:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fdcc0cbe-beb2-429a-a324-e4a52de30dda;
 Fri, 16 Oct 2020 08:53:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DF079AE3A;
 Fri, 16 Oct 2020 08:53:52 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kTLUy-0002Cr-6n
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:54:04 +0000
X-Inumbo-ID: fdcc0cbe-beb2-429a-a324-e4a52de30dda
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fdcc0cbe-beb2-429a-a324-e4a52de30dda;
	Fri, 16 Oct 2020 08:53:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602838433;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=t+pus/iAw/KMl2PmZi703thdM1sjdiTbovsr4cajzfc=;
	b=VSxgyH+2PWjo2xPWP/qIbyRFuRCFo3Of0OnO9Qgy5pLDBvtfGPVQsJjkJ5mkNT0qF2Hx/8
	SqhNvCbgfpz1XlZ/XxUiD3+wS+QDiMHJYxkZeAbL3ggnTvcgOmj61u9OzHi6712OMHL8QF
	UpZHlN4tMzBL7E9nNaVGpU9fS8DgcE4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 3/3] xen/x86: issue pci_serr error message via NMI continuation
Date: Fri, 16 Oct 2020 10:53:50 +0200
Message-Id: <20201016085350.10233-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201016085350.10233-1-jgross@suse.com>
References: <20201016085350.10233-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using a softirq, pci_serr_error() can use an NMI continuation
for issuing the error message.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/traps.c          | 9 +++------
 xen/include/asm-x86/softirq.h | 5 ++---
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 6f4db9d549..7a68ac40be 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1659,7 +1659,7 @@ void do_general_protection(struct cpu_user_regs *regs)
     panic("GENERAL PROTECTION FAULT\n[error_code=%04x]\n", regs->error_code);
 }
 
-static void pci_serr_softirq(void)
+static void pci_serr_nmicont(void *arg)
 {
     printk("\n\nNMI - PCI system error (SERR)\n");
     outb(inb(0x61) & 0x0b, 0x61); /* re-enable the PCI SERR error line. */
@@ -1687,9 +1687,8 @@ static void pci_serr_error(const struct cpu_user_regs *regs)
         nmi_hwdom_report(_XEN_NMIREASON_pci_serr);
         /* fallthrough */
     case 'i': /* 'ignore' */
-        /* Would like to print a diagnostic here but can't call printk()
-           from NMI context -- raise a softirq instead. */
-        raise_softirq(PCI_SERR_SOFTIRQ);
+        /* Issue error message in NMI continuation. */
+        set_nmi_continuation(pci_serr_nmicont, NULL);
         break;
     default:  /* 'fatal' */
         console_force_unlock();
@@ -2183,8 +2182,6 @@ void __init trap_init(void)
     percpu_traps_init();
 
     cpu_init();
-
-    open_softirq(PCI_SERR_SOFTIRQ, pci_serr_softirq);
 }
 
 void activate_debugregs(const struct vcpu *curr)
diff --git a/xen/include/asm-x86/softirq.h b/xen/include/asm-x86/softirq.h
index 0b7a77f11f..415ee866c7 100644
--- a/xen/include/asm-x86/softirq.h
+++ b/xen/include/asm-x86/softirq.h
@@ -6,9 +6,8 @@
 #define VCPU_KICK_SOFTIRQ      (NR_COMMON_SOFTIRQS + 2)
 
 #define MACHINE_CHECK_SOFTIRQ  (NR_COMMON_SOFTIRQS + 3)
-#define PCI_SERR_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
-#define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 5)
-#define NR_ARCH_SOFTIRQS       6
+#define HVM_DPCI_SOFTIRQ       (NR_COMMON_SOFTIRQS + 4)
+#define NR_ARCH_SOFTIRQS       5
 
 bool arch_skip_send_event_check(unsigned int cpu);
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 08:56:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 08:56:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7842.20659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLXZ-0002hB-Il; Fri, 16 Oct 2020 08:56:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7842.20659; Fri, 16 Oct 2020 08:56:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTLXZ-0002h4-Ex; Fri, 16 Oct 2020 08:56:45 +0000
Received: by outflank-mailman (input) for mailman id 7842;
 Fri, 16 Oct 2020 08:56:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTLXY-0002gv-6E
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 08:56:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c96010d6-d6ae-44c4-b696-86114a01252d;
 Fri, 16 Oct 2020 08:56:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5006EAC0C;
 Fri, 16 Oct 2020 08:56:42 +0000 (UTC)
X-Inumbo-ID: c96010d6-d6ae-44c4-b696-86114a01252d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602838602;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2dy94yMzztdQsQpIhT40xhT2ki5Jo8sjdCZ511fC+Pk=;
	b=tlCeww70D4k6vNbikgYBYG9T5lcdLE5LDlnr7/ldWGNVFpg0pb2ohCbScR+19FcPzANgOX
	0V/ijDFM8lkuyAcx1E+Nwma1NxCBTesEgCQwz5qhoQMQNNj53ZfPcxH3SXcH/8Ly9zkdao
	W3HkbuJoycX8Um3SZGv86JDhtWSnICI=
Subject: Re: [PATCH V2 21/23] xen/arm: Add mapcache invalidation handling
To: Julien Grall <julien@xen.org>
Cc: Oleksandr Tyshchenko <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-22-git-send-email-olekstysh@gmail.com>
 <cad29fdb-089a-541b-6c5b-538d96441714@suse.com>
 <b074eb70-a770-1f96-3d68-b06476b963ca@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e638b6b7-1939-4542-85fa-70d8e1f5e9d6@suse.com>
Date: Fri, 16 Oct 2020 10:56:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <b074eb70-a770-1f96-3d68-b06476b963ca@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.10.2020 10:41, Julien Grall wrote:
> On 16/10/2020 07:29, Jan Beulich wrote:
>> Given how p2m_free_entry() works (or is supposed to work in the
>> long run), is the new code you add guaranteed to only alter leaf
>> entries?
> 
> This path may also be called with tables. I think we want to move the 
> check in p2m_free_entry() so we can find the correct leaf type.
> 
>> If not, the freeing of page tables needs deferring until
>> after qemu has dropped its mappings.
> 
> Freeing the page tables doesn't release a page. So may I ask why we 
> would need to defer it?

Oh, sorry - qemu of course doesn't use the same p2m, so the
intermediate page tables are private to the subject guest.

>> And with there being refcounting only for foreign pages, how do
>> you prevent the freeing of the page just unmapped before qemu has
>> dropped its possible mapping?
> QEMU mappings can only be done using the foreign mapping interface. This 
> means that page reference count will be incremented for each QEMU 
> mappings. Therefore the page cannot disappear until QEMU dropped the 
> last reference.

Okay, sorry for the noise then.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 09:26:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 09:26:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7850.20677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTM0H-0005Mj-2Y; Fri, 16 Oct 2020 09:26:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7850.20677; Fri, 16 Oct 2020 09:26:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTM0G-0005Mc-Vj; Fri, 16 Oct 2020 09:26:24 +0000
Received: by outflank-mailman (input) for mailman id 7850;
 Fri, 16 Oct 2020 09:26:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTM0F-0005MA-JT
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 09:26:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dff46635-d760-4e78-9c72-d9a24daed830;
 Fri, 16 Oct 2020 09:26:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTM09-00088L-3H; Fri, 16 Oct 2020 09:26:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTM08-0001oD-QM; Fri, 16 Oct 2020 09:26:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTM08-0003k3-Ps; Fri, 16 Oct 2020 09:26:16 +0000
X-Inumbo-ID: dff46635-d760-4e78-9c72-d9a24daed830
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=K7nFD8ulU1BrdRX4c/Di2KlCmOk8+6M3tyW+QNUYMmg=; b=EPEUgefl94pcNzQg/Fa3gNWw1j
	kzW3kuLDHsBuNt8Uaa0HjJG6rY4KbNwu+WoNvpF+2ccvIowZPCruVHNwsUc7qRk4B+1KEn+0AgKi6
	eROLr9g4ULD3dpm3Qx3Q3sAI0Gv+c09ymSaO+07RTrhu2Vg4es+PmC5wCXosYx+FFM+I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155881-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155881: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d25fd8710d6c8fc11582210fb1f8480c0d98416b
X-Osstest-Versions-That:
    ovmf=19c87b7d446c3273e84b238cb02cd1c0ae69c43e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 09:26:16 +0000

flight 155881 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155881/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d25fd8710d6c8fc11582210fb1f8480c0d98416b
baseline version:
 ovmf                 19c87b7d446c3273e84b238cb02cd1c0ae69c43e

Last test of basis   155837  2020-10-15 07:14:20 Z    1 days
Testing same since   155881  2020-10-16 01:40:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Compostella, Jeremy <jeremy.compostella@intel.com>
  Jeremy Compostella <jeremy.compostella@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   19c87b7d44..d25fd8710d  d25fd8710d6c8fc11582210fb1f8480c0d98416b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 09:32:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 09:32:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7853.20689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTM69-0006EY-Nh; Fri, 16 Oct 2020 09:32:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7853.20689; Fri, 16 Oct 2020 09:32:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTM69-0006ER-Kc; Fri, 16 Oct 2020 09:32:29 +0000
Received: by outflank-mailman (input) for mailman id 7853;
 Fri, 16 Oct 2020 09:32:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTM68-0006EM-CT
 for xen-devel@lists.xen.org; Fri, 16 Oct 2020 09:32:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5f72749-a7be-438b-8077-20b35450b082;
 Fri, 16 Oct 2020 09:32:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 32C3BAD82;
 Fri, 16 Oct 2020 09:32:26 +0000 (UTC)
X-Inumbo-ID: f5f72749-a7be-438b-8077-20b35450b082
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602840746;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dYDPGZrXdyi46y7fj9XtittCt4pGBBzOezx1Qmnham4=;
	b=GYq8SQwC0brftuO1X2jjp6UT1ZNhZ8PGhPWINDzUJ4aPqRMgqA+cd6YVsUXAtTw/D/Q3Ya
	fc9uYcmzBvBoAn7yvEj8PXHlnURHUg+mGPUjwdDb5Im8vz55zYPJzom+nJ6N4HAD3QLHN3
	mdQbTRVa7Z9fGaYvdOU3yjivwRYASog=
Subject: Re: [Xen-devel] [XEN PATCH v14 8/8] Add xentrace to vmware_port
To: Don Slutz <don.slutz@gmail.com>
Cc: xen-devel@lists.xen.org,
 Aravind Gopalakrishnan <Aravind.Gopalakrishnan@amd.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Eddie Dong <eddie.dong@intel.com>, Ian Campbell <ian.campbell@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Keir Fraser <keir@xen.org>,
 Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <stefano.stabellini@eu.citrix.com>,
 Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
 Tim Deegan <tim@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Don Slutz <dslutz@verizon.com>
References: <cover.1597854907.git.don.slutz@gmail.com>
 <1bfc92ee47f425235821c3655564a5a4b3d34593.1597854908.git.don.slutz@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5ad14acd-eff3-ea34-93f3-cdc195a3b9bb@suse.com>
Date: Fri, 16 Oct 2020 11:32:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <1bfc92ee47f425235821c3655564a5a4b3d34593.1597854908.git.don.slutz@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.08.2020 18:52, Don Slutz wrote:
> From: Don Slutz <dslutz@verizon.com>
> 
> Also added missing TRAP_DEBUG & VLAPIC.
> 
> Signed-off-by: Don Slutz <dslutz@verizon.com>
> CC: Don Slutz <don.slutz@gmail.com>
> ---
> Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> v14:
>   Reworked to current code.
>   Added VMPORT_SEND because I wanted to see it during testing.
> 
> v13:
>     Please do this by extending the existing infrastructure rather
>     than special-casing 7 on the side.  (i.e. extend ND to take 7
>     parameters, and introduce HVMTRACE_7D)
>     = { d1, d2, d3, d4, d5, d6, d7 } will be far shorter, linewise.

I think this should have been split into two patches right at the
time: one for the extension, and another for the new VMware logic.
But see below.

> @@ -62,6 +63,7 @@ static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
>      if ( port == BDOOR_PORT && regs->eax == BDOOR_MAGIC )
>      {
>          uint32_t new_eax = ~0u;
> +        uint16_t cmd = regs->ecx;
>          uint64_t value;
>          struct vcpu *curr = current;
>          struct domain *currd = curr->domain;
> @@ -72,7 +74,7 @@ static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
>           * leaving the high 32-bits unchanged, unlike what one would
>           * expect to happen.
>           */
> -        switch ( regs->ecx & 0xffff )
> +        switch ( cmd )
>          {
>          case BDOOR_CMD_GETMHZ:
>              new_eax = currd->arch.tsc_khz / 1000;
> @@ -147,14 +149,22 @@ static int vmport_ioport(int dir, uint32_t port, uint32_t bytes, uint32_t *val)
>              break;
>  
>          default:
> +            HVMTRACE_6D(VMPORT_SEND, cmd, regs->ebx, regs->ecx,
> +                        regs->edx, regs->esi, regs->edi);

With cmd derived from regs->ecx, why pass the same value twice here?

>              /* Let backing DM handle */
>              return X86EMUL_UNHANDLEABLE;
>          }
> +        HVMTRACE_7D(VMPORT_HANDLED, cmd, new_eax, regs->ebx, regs->ecx,
> +                    regs->edx, regs->esi, regs->edi);

None of the cases making it here consumes or alters regs->edi. Why
record / report its value? Without this, the entire widening to 7
parameters becomes unnecessary for now, afaics.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 09:36:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 09:36:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7856.20700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTM9z-0006QK-9h; Fri, 16 Oct 2020 09:36:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7856.20700; Fri, 16 Oct 2020 09:36:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTM9z-0006QD-6B; Fri, 16 Oct 2020 09:36:27 +0000
Received: by outflank-mailman (input) for mailman id 7856;
 Fri, 16 Oct 2020 09:36:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VWaZ=DX=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kTM9y-0006Q8-1h
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 09:36:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d477dd66-b8c6-47ac-a53c-579a8a7bd0ac;
 Fri, 16 Oct 2020 09:36:24 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTM9u-0008LW-BY; Fri, 16 Oct 2020 09:36:22 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTM9t-000350-UV; Fri, 16 Oct 2020 09:36:22 +0000
X-Inumbo-ID: d477dd66-b8c6-47ac-a53c-579a8a7bd0ac
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lrxXi9tez9sKle2WYq+9PsfzqQ5P97Rz7neO5OHVW2Q=; b=3BhTvwZRboR+pRwVCbp4vWVnFt
	Qh3Uip15APc5EriNUe9Dlz+Psr6Qv2yMsa2fC513Dl3V4gj7mAMJdKni4vrGH3qcrdudZbIbZ4IQf
	Gw5WmnlNe/MYqxxrcrtMDUe3wMvsNua1DraeOTQHdqhn352FJ+47ljgYZdBVWmpPlBzs=;
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
 <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
 <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4b77ba6d-bf49-7286-8f2a-53f7b2e7d122@xen.org>
Date: Fri, 16 Oct 2020 10:36:18 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 15/10/2020 13:07, Jan Beulich wrote:
> On 14.10.2020 13:40, Julien Grall wrote:
>> Hi Jan,
>>
>> On 13/10/2020 15:26, Jan Beulich wrote:
>>> On 13.10.2020 16:20, Jürgen Groß wrote:
>>>> On 13.10.20 15:58, Jan Beulich wrote:
>>>>> On 12.10.2020 11:27, Juergen Gross wrote:
>>>>>> The queue for a fifo event is depending on the vcpu_id and the
>>>>>> priority of the event. When sending an event it might happen the
>>>>>> event needs to change queues and the old queue needs to be kept for
>>>>>> keeping the links between queue elements intact. For this purpose
>>>>>> the event channel contains last_priority and last_vcpu_id values
>>>>>> elements for being able to identify the old queue.
>>>>>>
>>>>>> In order to avoid races always access last_priority and last_vcpu_id
>>>>>> with a single atomic operation avoiding any inconsistencies.
>>>>>>
>>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>>
>>>>> I seem to vaguely recall that at the time this seemingly racy
>>>>> access was done on purpose by David. Did you go look at the
>>>>> old commits to understand whether there really is a race which
>>>>> can't be tolerated within the spec?
>>>>
>>>> At least the comments in the code tell us that the race regarding
>>>> the writing of priority (not last_priority) is acceptable.
>>>
>>> Ah, then it was comments. I knew I read something to this effect
>>> somewhere, recently.
>>>
>>>> Especially Julien was rather worried by the current situation. In
>>>> case you can convince him the current handling is fine, we can
>>>> easily drop this patch.
>>>
>>> Julien, in the light of the above - can you clarify the specific
>>> concerns you (still) have?
>>
>> Let me start with that the assumption if evtchn->lock is not held when
>> evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.
> 
> But this isn't interesting - we know there are paths where it is
> held, and ones (interdomain sending) where it's the remote port's
> lock instead which is held. What's important here is that a
> _consistent_ lock be held (but it doesn't need to be evtchn's).

Yes, a _consistent_ lock *should* be sufficient. But it is better to use
the same lock everywhere so it is easier to reason about (see more below).

> 
>>   From my understanding, the goal of lock_old_queue() is to return the
>> old queue used.
>>
>> last_priority and last_vcpu_id may be updated separately and I could not
>> convince myself that it would not be possible to return a queue that is
>> neither the current one nor the old one.
>>
>> The following could happen if evtchn->priority and
>> evtchn->notify_vcpu_id keeps changing between calls.
>>
>> pCPU0				| pCPU1
>> 				|
>> evtchn_fifo_set_pending(v0,...)	|
>> 				| evtchn_fifo_set_pending(v1, ...)
>>    [...]				|
>>    /* Queue has changed */	|
>>    evtchn->last_vcpu_id = v0 	|
>> 				| -> evtchn_old_queue()
>> 				| v = d->vcpu[evtchn->last_vcpu_id];
>>     				| old_q = ...
>> 				| spin_lock(old_q->...)
>> 				| v = ...
>> 				| q = ...
>> 				| /* q and old_q would be the same */
>> 				|
>>    evtchn->las_priority = priority|
>>
>> If my diagram is correct, then pCPU1 would return a queue that is
>> neither the current nor old one.
> 
> I think I agree.
> 
>> In which case, I think it would at least be possible to corrupt the
>> queue. From evtchn_fifo_set_pending():
>>
>>           /*
>>            * If this event was a tail, the old queue is now empty and
>>            * its tail must be invalidated to prevent adding an event to
>>            * the old queue from corrupting the new queue.
>>            */
>>           if ( old_q->tail == port )
>>               old_q->tail = 0;
>>
>> Did I miss anything?
> 
> I don't think you did. The important point though is that a consistent
> lock is being held whenever we come here, so two racing set_pending()
> aren't possible for one and the same evtchn. As a result I don't think
> the patch here is actually needed.

I haven't yet read the rest of the patches in full detail, so I can't
say whether this is necessary or not. However, at first glance, I don't
think it is sane to rely on different locks to protect us. And don't
get me started on the lack of documentation...

Furthermore, the implementation of lock_old_queue() suggests that the
code was planned to be lockless. Why would you need the loop otherwise?

Therefore, regardless of the rest of the discussion, I think this patch
would be useful to have for our peace of mind.

> 
> If I take this further, then I think I can reason why it wasn't
> necessary to add further locking to send_guest_{global,vcpu}_virq():
> The virq_lock is the "consistent lock" protecting ECS_VIRQ ports. The
> spin_barrier() while closing the port guards that side against the
> port changing to a different ECS_* behind the sending functions' backs.
> And binding such ports sets ->virq_to_evtchn[] last, with a suitable
> barrier (the unlock).

This makes sense.
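The "sets ->virq_to_evtchn[] last, with a suitable barrier" publication 
pattern can be sketched with C11 atomics (illustrative stand-in types, 
not the actual Xen code; in the real code the unlock supplies the 
release barrier):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct evtchn {
    int state;                  /* stand-in for the real channel fields */
};

static struct evtchn chn;
static _Atomic(struct evtchn *) virq_to_evtchn;

/* Binding: fully initialise the channel first, then publish the
 * pointer last with release semantics. */
static void bind_virq(void)
{
    chn.state = 1;
    atomic_store_explicit(&virq_to_evtchn, &chn, memory_order_release);
}

/* Sending: the acquire load pairs with the release store above, so a
 * non-NULL pointer always refers to a fully set-up channel. */
static int send_virq(void)
{
    struct evtchn *e =
        atomic_load_explicit(&virq_to_evtchn, memory_order_acquire);

    return e ? e->state : -1;
}
```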

> 
> Which leaves send_guest_pirq() before we can drop the IRQ-safe locking
> again. I guess we would need to work towards using the underlying
> irq_desc's lock as consistent lock here, but this certainly isn't the
> case just yet, and I'm not really certain this can be achieved.
I can't comment on the PIRQ code, but I think this is a risky approach 
(see above).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 09:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 09:41:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7859.20713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMF1-0007HD-U8; Fri, 16 Oct 2020 09:41:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7859.20713; Fri, 16 Oct 2020 09:41:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMF1-0007H6-Qh; Fri, 16 Oct 2020 09:41:39 +0000
Received: by outflank-mailman (input) for mailman id 7859;
 Fri, 16 Oct 2020 09:41:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x+s4=DX=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kTMEz-0007H1-Im
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 09:41:37 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com (unknown
 [40.107.243.43]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 48744a6f-1522-4102-9da3-c1145b92e367;
 Fri, 16 Oct 2020 09:41:34 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by MN2PR12MB3837.namprd12.prod.outlook.com (2603:10b6:208:166::30)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Fri, 16 Oct
 2020 09:41:29 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.021; Fri, 16 Oct 2020
 09:41:29 +0000
Received: from [192.168.137.56] (80.187.123.114) by
 AM0PR03CA0033.eurprd03.prod.outlook.com (2603:10a6:208:14::46) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.22 via Frontend Transport; Fri, 16 Oct 2020 09:41:23 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ts6/SrHzaw02eNPj7RW7eNnsHMF5DZyvK/z/G5BpLKWqnNi3T9HO96ji1g0OCXEVUilDj6SzeA+p8oJ8AJtSdkVfwpGFI7u3G8c7Rf6B6Lrjx7kDDEPx7J+BZBvaUxKlTMyGAzt34zr8D2UGgr+LWPvdoVR6rMu5ah/dmS9zOTWmtNpLQxnuPwd0ZRl/X4QkgVqIW72SWG2RoRvrIMia1+L4L9JOnoPloG5173KQFJ5fdZi9SmueSQAGkXe1bqbEMyjdmdBomHCLmnT2bjWWtqFCCh/9vFb5N/8OOo5R/4H8mWwQqRiC2PUCdWEkY+6x5SF/dBOAyjaRfJFtiakQyA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2XMm/GtOB8CkNbU+RYyJp4wxnJVOunCEcz5wnrBr9Rg=;
 b=H3jSVXJB4QFPOOs4cg+HqU/QoWUTZJIlhyU1sza4MCL9yMPnpO5QFPd5Y+3HEgLM/D6kIXLrAcW1p+/7drzH6+zwBEPe4v57y1yRX7l97afPq4EaqLZ5USj1CwnUJmp2ZJtyHkMpWDJJW1RSYKx6E3PqfPkqEIXHYqICBp5NEbGAK4HvRO5KIDN5k+ZBbvuZh2HaTwC9F/o6VV1883KrVZMVtGg9DCDTYDecpYbOXHQ/JJGzQNJRVRzBO65Tf4ojKys/MmSshTP6q2pJLscxllvGvIs67ojDEz1zwKRab+31Bm2VuIKDh7O/Ar9fQLMWm0ueaCReekGtYntIUZ4LVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amdcloud.onmicrosoft.com; s=selector2-amdcloud-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2XMm/GtOB8CkNbU+RYyJp4wxnJVOunCEcz5wnrBr9Rg=;
 b=GM0LmMBc9yArv1T3fvM1NOAE1YmmENohhCciWH/aI3WjBG4lEYHGuO0cWskiEs+8NT9mUSD7oifyS1Ek45SP4VPxkNX2YogTyFtpszGuZHRyBZd4d45aH+HzwmrqW6tr/LKwWYQ5GSOu5L0321qLqVYg9farI5ngPGKfO/YqhW0=
Authentication-Results: vger.kernel.org; dkim=none (message not signed)
 header.d=none;vger.kernel.org; dmarc=none action=none header.from=amd.com;
Subject: Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
To: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter <daniel@ffwll.ch>
Cc: luben.tuikov@amd.com, airlied@linux.ie, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk,
 melissa.srw@gmail.com, ray.huang@amd.com, kraxel@redhat.com,
 sam@ravnborg.org, emil.velikov@collabora.com,
 linux-samsung-soc@vger.kernel.org, jy0922.shim@samsung.com,
 lima@lists.freedesktop.org, oleksandr_andrushchenko@epam.com,
 krzk@kernel.org, steven.price@arm.com, linux-rockchip@lists.infradead.org,
 kgene@kernel.org, bskeggs@redhat.com, linux+etnaviv@armlinux.org.uk,
 spice-devel@lists.freedesktop.org, alyssa.rosenzweig@collabora.com,
 etnaviv@lists.freedesktop.org, hdegoede@redhat.com,
 xen-devel@lists.xenproject.org, virtualization@lists.linux-foundation.org,
 sean@poorly.run, apaneers@amd.com, linux-arm-kernel@lists.infradead.org,
 linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
 tomeu.vizoso@collabora.com, sw0312.kim@samsung.com, hjc@rock-chips.com,
 kyungmin.park@samsung.com, miaoqinglang@huawei.com, yuq825@gmail.com,
 alexander.deucher@amd.com, linux-media@vger.kernel.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-6-tzimmermann@suse.de>
 <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
 <20201015164909.GC401619@phenom.ffwll.local>
 <20201015195204.1745fe7f@linux-uq9g>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <64130e2a-0e45-60da-2929-6378f59bfe97@amd.com>
Date: Fri, 16 Oct 2020 11:41:18 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20201015195204.1745fe7f@linux-uq9g>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [80.187.123.114]
X-ClientProxiedBy: AM0PR03CA0033.eurprd03.prod.outlook.com
 (2603:10a6:208:14::46) To MN2PR12MB3775.namprd12.prod.outlook.com
 (2603:10b6:208:159::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 54e4b907-3e0e-4ef0-1473-08d871b7a82f
X-MS-TrafficTypeDiagnostic: MN2PR12MB3837:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<MN2PR12MB383767E6EC0D7063954B084F83030@MN2PR12MB3837.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	kXrTBd+9u88VHMYd93epeIhDctfIidsN0+eRMwgJeZ4f7vbmChJRtXSPE0tzniBYcfbyYVioTrHbJVxAYhU73cGY9/GSqypKXAYfBo2OHVFYT75GsuwS6VsduvNfgBCpIevzjIpNALsmASBClljY9Ui+M2vejOZ6EAo10dir3OhMcyh6YxoNtOaPQ59jp5/nUtEqAeJ3SIVKHD1kPjRYwA4b+ufFF42k6v8mt1ova7xaZaEpA7Hkix7ndenrqioHD4Wh9/zESOv15DH5d8eGws5/0aiOdMu7pXwTTldFbOzpoDjCDN6Q/yW4Mu3fFAeWOr9TdSyRLnFHM25WcMnl350Zl47ormcdrerbqyga/PCd4Il+/iBqs26dlC/tbUVcCQS9mpVbmz1LD7bipc87WAy9TIyjs0e72bTf4OgkuY4=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MN2PR12MB3775.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(346002)(366004)(136003)(39860400002)(26005)(4326008)(36756003)(34490700002)(66574015)(66476007)(66556008)(8936002)(66946007)(478600001)(186003)(16526019)(6486002)(83380400001)(316002)(2906002)(31686004)(52116002)(8676002)(2616005)(956004)(5660300002)(86362001)(30864003)(110136005)(6666004)(7406005)(7416002)(16576012)(31696002)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	kd8mF2L8WHHygvAkT90vDROU8vQq0A0ughG3wUf/bkLVVaF0EFqxgCtw3OePigZuV0QXlFEdkyklsa0aM6ih4f2E3Kwf9pvY2VdXYFLF1a8b2l/ZA36ocJHpj5xh6RCvqzDBLN3BB/65yobSvT7hOpFzmgYLh4eynADy2jy/diab1fe5PYWcGHXxn+mbn8HYpfukCUJxl8fzdWkZJhXWKKOF0eQ+AGlb236N/CA0WwV2e4ZuFgTkp/ic1JOuVF5Q/gTNdB8f3nAY9Ob5MzUAYTMhugIpihSoyK8uuzbPWH/5W33cas1LjiR/p/joDFedIutYL7oAHF8CHsUxlQ1wN2CXJ0/0AwhwBZw0u214ip6+OotCSGXr+K/hfIMOXrnSlpDSPr0yabhrpAv5XUFQ+9DUzd3bbAJg1rqlRHYZ4H9jdcmtPKNuzq0mJ0WHfvfV5zCu9JMWKq7WRgJsxAe1WxfNsZY6YshZZiovmUWWwdxZsw4jIsXlvzkmQbBxZnOmvzimRR+9q1uikrVzfC5Kb5FLbO837eOUXfDH3D/LGRh3b6iOKYfDp0tpiXoKdoZmfTtzOKoMPWe70DyUY4VcRU7CTn+odv5Z3hlK4iBOrfvYDiE3mH/1LT15nTK7qAkuP2dykTBqcPjrxxtKzsaNrg==
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 54e4b907-3e0e-4ef0-1473-08d871b7a82f
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Oct 2020 09:41:29.3213
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SVU1kujjrTYOiK+xPO0JkP7qNIMXM5FC6bkmq3N1xjwvDaTuFxAbX7NPKjPDi0GL
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB3837

Am 15.10.20 um 19:52 schrieb Thomas Zimmermann:
> Hi
>
> On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter <daniel@ffwll.ch> wrote:
>
>> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian König wrote:
>>> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
>>>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in
>>>> kernel address space. The mapping's address is returned as struct
>>>> dma_buf_map. Each function is a simplified version of TTM's existing
>>>> kmap code. Both functions respect the memory's location and/or
>>>> writecombine flags.
>>>>
>>>> On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
>>>> two helpers that convert a GEM object into the TTM BO and forward the
>>>> call to TTM's vmap/vunmap. These helpers can be dropped into the
>>>> respective GEM object callbacks.
>>>>
>>>> v4:
>>>> 	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, Christian)
>>> Bunch of minor comments below, but over all look very solid to me.
>> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the
>> cleanest. And then we can maybe push the combinatorial monster into
>> vmwgfx, which I think is the only user after this series. Or perhaps a
>> dedicated set of helpers to map an individual page (again using the
>> dma_buf_map stuff).
>  From a quick look, I'd say it should be possible to have the same interface
> for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map).
> All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be
> killed off entirely.

Yes, that would be rather nice to have.

Thanks,
Christian.

>
> Best regards
> Thomas
>
>> I'll let Christian handle the details, but at a high level this is
>> definitely
>>
>> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>>
>> Thanks a lot for doing all this.
>> -Daniel
>>
>>>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>>>> ---
>>>>    drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>>>>    drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
>>>>    include/drm/drm_gem_ttm_helper.h     |  6 +++
>>>>    include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
>>>>    include/linux/dma-buf-map.h          | 20 ++++++++
>>>>    5 files changed, 164 insertions(+)
>>>>
>>>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> index 0e4fb9ba43ad..db4c14d78a30 100644
>>>> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
>>>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>>>>  }
>>>>    EXPORT_SYMBOL(drm_gem_ttm_print_info);
>>>> +/**
>>>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
>>>> + * @gem: GEM object.
>>>> + * @map: [out] returns the dma-buf mapping.
>>>> + *
>>>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
>>>> + * &drm_gem_object_funcs.vmap callback.
>>>> + *
>>>> + * Returns:
>>>> + * 0 on success, or a negative errno code otherwise.
>>>> + */
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> +		     struct dma_buf_map *map)
>>>> +{
>>>> +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>>>> +
>>>> +	return ttm_bo_vmap(bo, map);
>>>> +
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
>>>> +
>>>> +/**
>>>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
>>>> + * @gem: GEM object.
>>>> + * @map: dma-buf mapping.
>>>> + *
>>>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
>>>> + * &drm_gem_object_funcs.vunmap callback.
>>>> + */
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> +			struct dma_buf_map *map)
>>>> +{
>>>> +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>>>> +
>>>> +	ttm_bo_vunmap(bo, map);
>>>> +}
>>>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
>>>> +
>>>>    /**
>>>>     * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>>>>     * @gem: GEM object.
>>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> index bdee4df1f3f2..80c42c774c7d 100644
>>>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>>>> @@ -32,6 +32,7 @@
>>>>    #include <drm/ttm/ttm_bo_driver.h>
>>>>    #include <drm/ttm/ttm_placement.h>
>>>>    #include <drm/drm_vma_manager.h>
>>>> +#include <linux/dma-buf-map.h>
>>>>    #include <linux/io.h>
>>>>    #include <linux/highmem.h>
>>>>    #include <linux/wait.h>
>>>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>>>    }
>>>>    EXPORT_SYMBOL(ttm_bo_kunmap);
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>>> +{
>>>> +	struct ttm_resource *mem = &bo->mem;
>>>> +	int ret;
>>>> +
>>>> +	ret = ttm_mem_io_reserve(bo->bdev, mem);
>>>> +	if (ret)
>>>> +		return ret;
>>>> +
>>>> +	if (mem->bus.is_iomem) {
>>>> +		void __iomem *vaddr_iomem;
>>>> +		unsigned long size = bo->num_pages << PAGE_SHIFT;
>>> Please use uint64_t here and make sure to cast bo->num_pages before
>>> shifting.
>>>
>>> We have an unit tests of allocating a 8GB BO and that should work on a
>>> 32bit machine as well :)
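Christian's point can be demonstrated with a small stand-alone program 
(uint32_t plays the role of a 32-bit "unsigned long"; this is an 
illustration, not the TTM code):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Buggy variant: on a 32-bit machine the shift happens in 32 bits, so
 * the byte size of an 8 GiB BO wraps around to 0. */
static uint32_t bo_size_narrow(uint32_t num_pages)
{
    return num_pages << PAGE_SHIFT;
}

/* Fixed variant: widen the operand before shifting, as suggested in
 * the review. */
static uint64_t bo_size_wide(uint32_t num_pages)
{
    return (uint64_t)num_pages << PAGE_SHIFT;
}
```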
>>>
>>>> +
>>>> +		if (mem->bus.addr)
>>>> +			vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
>>>> +		else if (mem->placement & TTM_PL_FLAG_WC)
>>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
>>> mem->bus.caching enum as replacement.
>>>
>>>> +			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>>>> +		else
>>>> +			vaddr_iomem = ioremap(mem->bus.offset, size);
>>>> +
>>>> +		if (!vaddr_iomem)
>>>> +			return -ENOMEM;
>>>> +
>>>> +		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>>>> +
>>>> +	} else {
>>>> +		struct ttm_operation_ctx ctx = {
>>>> +			.interruptible = false,
>>>> +			.no_wait_gpu = false
>>>> +		};
>>>> +		struct ttm_tt *ttm = bo->ttm;
>>>> +		pgprot_t prot;
>>>> +		void *vaddr;
>>>> +
>>>> +		BUG_ON(!ttm);
>>> I think we can drop this, populate will just crash badly anyway.
>>>
>>>> +
>>>> +		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>>>> +		if (ret)
>>>> +			return ret;
>>>> +
>>>> +		/*
>>>> +		 * We need to use vmap to get the desired page protection
>>>> +		 * or to make the buffer object look contiguous.
>>>> +		 */
>>>> +		prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>>> The calling convention has changed on drm-misc-next as well, but should be
>>> trivial to adapt.
>>>
>>> Regards,
>>> Christian.
>>>
>>>> +		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>>>> +		if (!vaddr)
>>>> +			return -ENOMEM;
>>>> +
>>>> +		dma_buf_map_set_vaddr(map, vaddr);
>>>> +	}
>>>> +
>>>> +	return 0;
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vmap);
>>>> +
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>>> +{
>>>> +	if (dma_buf_map_is_null(map))
>>>> +		return;
>>>> +
>>>> +	if (map->is_iomem)
>>>> +		iounmap(map->vaddr_iomem);
>>>> +	else
>>>> +		vunmap(map->vaddr);
>>>> +	dma_buf_map_clear(map);
>>>> +
>>>> +	ttm_mem_io_free(bo->bdev, &bo->mem);
>>>> +}
>>>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>>>> +
>>>>    static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>>>    				 bool dst_use_tt)
>>>>    {
>>>> diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
>>>> index 118cef76f84f..7c6d874910b8 100644
>>>> --- a/include/drm/drm_gem_ttm_helper.h
>>>> +++ b/include/drm/drm_gem_ttm_helper.h
>>>> @@ -10,11 +10,17 @@
>>>>    #include <drm/ttm/ttm_bo_api.h>
>>>>    #include <drm/ttm/ttm_bo_driver.h>
>>>> +struct dma_buf_map;
>>>> +
>>>>    #define drm_gem_ttm_of_gem(gem_obj) \
>>>>    	container_of(gem_obj, struct ttm_buffer_object, base)
>>>>    void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>>>>    			    const struct drm_gem_object *gem);
>>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>>> +		     struct dma_buf_map *map);
>>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>>> +			struct dma_buf_map *map);
>>>>    int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>>>    		     struct vm_area_struct *vma);
>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
>>>> index 37102e45e496..2c59a785374c 100644
>>>> --- a/include/drm/ttm/ttm_bo_api.h
>>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>>>    struct ttm_bo_device;
>>>> +struct dma_buf_map;
>>>> +
>>>>    struct drm_mm_node;
>>>>    struct ttm_placement;
>>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
>>>>  */
>>>>    void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>>> +/**
>>>> + * ttm_bo_vmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: pointer to a struct dma_buf_map representing the map.
>>>> + *
>>>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>>>> + * data in the buffer object. The parameter @map returns the virtual
>>>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>>>> + *
>>>> + * Returns
>>>> + * -ENOMEM: Out of memory.
>>>> + * -EINVAL: Invalid range.
>>>> + */
>>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>> +/**
>>>> + * ttm_bo_vunmap
>>>> + *
>>>> + * @bo: The buffer object.
>>>> + * @map: Object describing the map to unmap.
>>>> + *
>>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>>> + */
>>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>>> +
>>>>    /**
>>>>     * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>>>     *
>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>>> index fd1aba545fdf..2e8bbecb5091 100644
>>>> --- a/include/linux/dma-buf-map.h
>>>> +++ b/include/linux/dma-buf-map.h
>>>> @@ -45,6 +45,12 @@
>>>>     *
>>>>     *	dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>>     *
>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>>> + *
>>>> + * .. code-block:: c
>>>> + *
>>>> + *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>>> + *
>>>>     * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>>     * dma_buf_map_is_null().
>>>>     *
>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>>> dma_buf_map *map, void *vaddr) map->is_iomem = false;
>>>>    }
>>>> +/**
>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>>> an address in I/O memory
>>>> + * @map:		The dma-buf mapping structure
>>>> + * @vaddr_iomem:	An I/O-memory address
>>>> + *
>>>> + * Sets the address and the I/O-memory flag.
>>>> + */
>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>>> +					       void __iomem *vaddr_iomem)
>>>> +{
>>>> +	map->vaddr_iomem = vaddr_iomem;
>>>> +	map->is_iomem = true;
>>>> +}
>>>> +
>>>>    /**
>>>>     * dma_buf_map_is_equal - Compares two dma-buf mapping structures for
>>>> equality
>>>>     * @lhs:	The dma-buf mapping structure
>
>
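For illustration, the set/clear/is_null semantics of struct dma_buf_map 
discussed in this series can be mimicked in a stand-alone user-space 
program (this is a stand-in, not the kernel definition; the kernel's 
I/O member is a void __iomem * and the real accessors live in 
include/linux/dma-buf-map.h):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* User-space stand-in for the kernel's struct dma_buf_map. */
struct dma_buf_map {
    union {
        void *vaddr;        /* system memory */
        void *vaddr_iomem;  /* void __iomem * in the kernel */
    };
    bool is_iomem;
};

/* Set a system-memory address and clear the I/O-memory flag. */
static void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
    map->vaddr = vaddr;
    map->is_iomem = false;
}

/* Set an I/O-memory address and set the I/O-memory flag. */
static void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
                                        void *vaddr_iomem)
{
    map->vaddr_iomem = vaddr_iomem;
    map->is_iomem = true;
}

/* Reset the mapping to the invalid (NULL) state. */
static void dma_buf_map_clear(struct dma_buf_map *map)
{
    map->vaddr = NULL;
    map->is_iomem = false;
}

/* A mapping is null when no address has been set. */
static bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
    return map->vaddr == NULL;
}
```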



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 09:42:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 09:42:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7861.20724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMG7-0007P6-CW; Fri, 16 Oct 2020 09:42:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7861.20724; Fri, 16 Oct 2020 09:42:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMG7-0007Oz-9Z; Fri, 16 Oct 2020 09:42:47 +0000
Received: by outflank-mailman (input) for mailman id 7861;
 Fri, 16 Oct 2020 09:42:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8ta=DX=epam.com=prvs=8558ee1b56=anastasiia_lukianenko@srs-us1.protection.inumbo.net>)
 id 1kTMG5-0007Os-Vw
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 09:42:46 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b215da0-39d8-4826-9bf5-99d2d8044d90;
 Fri, 16 Oct 2020 09:42:44 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09G9ZCD5028122; Fri, 16 Oct 2020 09:42:43 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2056.outbound.protection.outlook.com [104.47.13.56])
 by mx0a-0039f301.pphosted.com with ESMTP id 3476850jj7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 16 Oct 2020 09:42:43 +0000
Received: from AM7PR03MB6531.eurprd03.prod.outlook.com (2603:10a6:20b:1c2::6)
 by AS8PR03MB6902.eurprd03.prod.outlook.com (2603:10a6:20b:29f::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Fri, 16 Oct
 2020 09:42:39 +0000
Received: from AM7PR03MB6531.eurprd03.prod.outlook.com
 ([fe80::9439:23f1:1063:ad8]) by AM7PR03MB6531.eurprd03.prod.outlook.com
 ([fe80::9439:23f1:1063:ad8%5]) with mapi id 15.20.3477.025; Fri, 16 Oct 2020
 09:42:39 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=acRhFdRKanqCUF+cwpVzwBU+TYb8n5QOngg83+7K9hd4FSIsxsGp9M5itoBZJ9XBo1+ehRqDGwDf1Q/hoOtMsRmLje2dSjpjGJwAc2JJldVyhB8sj5gDIftb+oYHdAr9pn1cebGWkb6syL4vq+tyLgmDi/cwqWLUSPb7lMMcf4E3o+4BKijCs1LcwWS0lZOhVKJxZtyjeYdxMF35ItT7ax3qmtLmFHVB1kCQbqQu4qoY4MMzoDPPBb/aqZ1cQm9XEieMHOmaCSFYXb7V9wOoFb6vxRYpubqTKS/VGPxDyQNVP95aRLLfIJq7kEvb4siMwrxVz0a3DCXUlGziaCi91w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qd9ee9EFA7w7Qmj4sWQdYGIWRmp9LafewTNB9lkRU/A=;
 b=GCRWDYNOeQA7+NSEQb2Z1ch+meKl0lZOoU7GtauTvI8Qcf+Aj5r16VkBbZ/sIvUFM2W2ie+RHbVfEjTrZzRN05l3zHy5ICj5U7IXuouFqV1FYrLrvrrQ4+g32H/aiAaiVlJXkihceRZHER8bPG2VZP5jPeEZAyRweaJNPoJGUuJQ+fzfRk4dCSl53KsK5Znvo2i6amNzL/9ZCpTT+Th27fUfvYgSLJ6t05OSuXYu2itAO2RiouNeoqQ2IEqT1i1Ii3rJ8e1+wTSPKqlcZcY506wrCobNxh/jfdMx5lple3yposCykXDmdggtdOeqG9cWtrLr9b0m2dcJwMsQuhZpnA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qd9ee9EFA7w7Qmj4sWQdYGIWRmp9LafewTNB9lkRU/A=;
 b=AGchO6elV3+t0fJc1Qpy9/7tsNCy7IEXWKA2aCdIxp4u86Zwg23+0Np8GYOzx3D5uSk2c09hEXNIMsc8weqJUS59gRKsoVpEYJCwdu0rgBrhL3b+I1cUdGM13z0DRPAQ0Eio/0qZ5AnvI1lY5jiTutSmGAJ8M7bFd3sgf7alTRSsq/aThOyk5zPpiuq+bSOQJrrW4qZE1NS+fpWKBYTBoyO/fLHfGywMGrHu3+gcvAoAO3i2+TOkg/ytvnWjuG/a7KxWtQbLyksixQHU1jFzrtayTmItEzqCKSsD6EMToImpWLD5nNbxtQy0sjJOORSOIBuhY/r/RnGMvysdCOUYeg==
From: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
To: "jbeulich@suse.com" <jbeulich@suse.com>,
        "George.Dunlap@citrix.com"
	<George.Dunlap@citrix.com>
CC: Artem Mygaiev <Artem_Mygaiev@epam.com>,
        "vicooodin@gmail.com"
	<vicooodin@gmail.com>,
        "julien@xen.org" <julien@xen.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "committers@xenproject.org" <committers@xenproject.org>,
        "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: Xen Coding style and clang-format
Thread-Topic: Xen Coding style and clang-format
Thread-Index: 
 AQHWlwq4nKYEhMN38U+xmvwRsutq+amA8joAgAAHUgCAAXyKAIAAENkAgAlxpgCACF7sgIABM44AgASIOQA=
Date: Fri, 16 Oct 2020 09:42:39 +0000
Message-ID: <4d4f351b152df2c50e18676ccd6ab6b4dc667801.camel@epam.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
	 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
	 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
	 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
	 <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
	 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
	 <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com>
	 <b4d7e9a7-6c25-1f7f-86ce-867083beb81a@suse.com>
In-Reply-To: <b4d7e9a7-6c25-1f7f-86ce-867083beb81a@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.213.80]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 87c76393-290a-4f64-1b6b-08d871b7d24c
x-ms-traffictypediagnostic: AS8PR03MB6902:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AS8PR03MB69024C9ADE675CBB7B79D333F2030@AS8PR03MB6902.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <D29E089B7C1B6D408B64E25A8B078CF9@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM7PR03MB6531.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 87c76393-290a-4f64-1b6b-08d871b7d24c
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Oct 2020 09:42:39.2718
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: LIrh4pEoB+gIHftpzBQgloG1HZOCNS/02/dzqscgWJwvG8NCtMFWHi974uCPCGKTTVCNc5D+t/rDwnEMTUkEzJLs9ACnHod3OcZe1fNz3Nc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR03MB6902
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687
 definitions=2020-10-16_05:2020-10-16,2020-10-16 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0
 mlxlogscore=999 malwarescore=0 phishscore=0 clxscore=1011
 priorityscore=1501 mlxscore=0 bulkscore=0 adultscore=0 suspectscore=0
 impostorscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2010160069

Hi all,

On Tue, 2020-10-13 at 14:30 +0200, Jan Beulich wrote:
> On 12.10.2020 20:09, George Dunlap wrote:
> > > On Oct 7, 2020, at 11:19 AM, Anastasiia Lukianenko <
> > > Anastasiia_Lukianenko@epam.com> wrote:
> > > So I want to know if the community is ready to add new formatting
> > > options and edit old ones. Below I will give examples of what
> > > corrections the checker is currently making (the first variant in
> > > each case is existing code and the second variant is formatted by
> > > the checker).
> > > If they fit the standards, then I can document them in the coding
> > > style. If not, then I try to configure the checker. But the idea
> > > is that we need to choose one option that will be considered
> > > correct.
> > > 1) Function prototype when the string length is longer than the
> > > allowed
> > > -static int __init
> > > -acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
> > > -                             const unsigned long end)
> > > +static int __init acpi_parse_gic_cpu_interface(
> > > +    struct acpi_subtable_header *header, const unsigned long end)
> >
> > Jan already commented on this one; is there any way to tell the
> > checker to ignore this discrepancy?
> >
> > If not, I think we should just choose one; I’d go with the latter.

If it turns out to be possible to make the checker more flexible, then
I will try to add both options as correct.

> >
> > > 2) Wrapping an operation to a new line when the string length is
> > > longer than the allowed
> > > -    status = acpi_get_table(ACPI_SIG_SPCR, 0,
> > > -                           (struct acpi_table_header **)&spcr);
> > > +    status =
> > > +        acpi_get_table(ACPI_SIG_SPCR, 0, (struct acpi_table_header **)&spcr);
> >
> > Personally I prefer the first version.
>
> Same here.

Until I find a way to preserve the first option, I think this case may
remain up to the author.

>
> > > 3) Space after brackets
> > > -    return ((char *) base + offset);
> > > +    return ((char *)base + offset);
> >
> > This seems like a good change to me.
> >
> > > 4) Spaces in brackets in switch condition
> > > -    switch ( domctl->cmd )
> > > +    switch (domctl->cmd)
> >
> > This is explicitly against the current coding style.

Fixed this in the new version of the checker.

> >
> > > 5) Spaces in brackets in operation
> > > -    imm = ( insn >> BRANCH_INSN_IMM_SHIFT ) & BRANCH_INSN_IMM_MASK;
> > > +    imm = (insn >> BRANCH_INSN_IMM_SHIFT) & BRANCH_INSN_IMM_MASK;
> >
> > I *think* this is already the official style.
> >
> > > 6) Spaces in brackets in return
> > > -        return ( !sym->name[2] || sym->name[2] == '.' );
> > > +        return (!sym->name[2] || sym->name[2] == '.');
> >
> > Similarly, I think this is already the official style.
> >
> > > 7) Space after sizeof
> > > -    clean_and_invalidate_dcache_va_range(new_ptr, sizeof (*new_ptr) * len);
> > > +    clean_and_invalidate_dcache_va_range(new_ptr, sizeof(*new_ptr) * len);
> >
> > I think this is correct.
>
> I agree with George on all of the above.
>
> > > 8) Spaces before comment if it’s on the same line
> > > -    case R_ARM_MOVT_ABS: /* S + A */
> > > +    case R_ARM_MOVT_ABS:    /* S + A */
> > >
> > > -    if ( tmp == 0UL )       /* Are any bits set? */
> > > -        return result + size;   /* Nope. */
> > > +    if ( tmp == 0UL )         /* Are any bits set? */
> > > +        return result + size; /* Nope. */
> >
> > Seem OK to me.
>
> I don't think we have any rules how far apart a comment needs
> to be; I don't think there should be any complaints or
> "corrections" here.
>
> > > 9) Space after for_each_vcpu
> > > -        for_each_vcpu(d, v)
> > > +        for_each_vcpu (d, v)
> >
> > Er, not sure about this one.  This is actually a macro; but
> > obviously it looks like for ( ).
> >
> > I think Jan will probably have an opinion, and I think he’ll be
> > back tomorrow; so maybe wait just a day or two before starting to
> > prep your series.
>
> This makes it look like Linux style. In Xen it ought to be one
> of
>
>         for_each_vcpu(d, v)
>         for_each_vcpu ( d, v )
>
> depending on whether the author of a change considers
> for_each_vcpu an ordinary identifier or kind of a keyword.
>
> > > 10) Spaces in declaration
> > > -    union hsr hsr = { .bits = regs->hsr };
> > > +    union hsr hsr = {.bits = regs->hsr};
> >
> > I’m fine with this too.
>
> I think we commonly put the blanks there that are being suggested
> to get dropped, so I'm not convinced this is a change we would
> want the tool making or suggesting.
>
> Jan

Thanks for your advice, which helped me improve the checker. I
understand that there are still some disagreements about the
formatting, but as I said before, the checker cannot be very flexible
and take into account every author's preferences. I suggest using the
checker not as a mandatory check, but as an indication to the author of
possible formatting errors that they can correct or ignore.

I attached the new version of the Xen checker below, with the clang
version updated from 9.0 to 12.0 and with minor fixes
(branch xen-clang-format_12):
https://github.com/xen-troops/llvm-project/tree/xen-clang-format_12

If new inconsistencies are found while using and testing the tool,
I am ready to fix them.

Regards,
Anastasiia


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 09:52:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 09:52:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7866.20737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMP1-0008Me-5n; Fri, 16 Oct 2020 09:51:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7866.20737; Fri, 16 Oct 2020 09:51:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMP1-0008MX-2D; Fri, 16 Oct 2020 09:51:59 +0000
Received: by outflank-mailman (input) for mailman id 7866;
 Fri, 16 Oct 2020 09:51:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VWaZ=DX=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kTMP0-0008MS-0k
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 09:51:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 08496328-16b9-4bd0-9ae4-24bd755f4584;
 Fri, 16 Oct 2020 09:51:56 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTMOv-0000ER-W6; Fri, 16 Oct 2020 09:51:53 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTMOv-0004Av-OG; Fri, 16 Oct 2020 09:51:53 +0000
X-Inumbo-ID: 08496328-16b9-4bd0-9ae4-24bd755f4584
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=bL9WSIRH/SISe4Ipc18xF683oHqDGoGyeZXxSZi2kno=; b=c3fHHpfgZGt81zWS9pyxXRzW55
	lssBM9+3yH5I0OgSsKdWWd8pplTbS/JMGjAqoK/H/ZJ2b27Ig6ii2662zgDB4pbMGv6V6mhUDVjsy
	G1Kd1FJgQZMCoNQ5IgsTH2gcNcjwMo9cQPwmU4hdqU5dZRE1mKhPeY5TTn2tDNb6OB6M=;
Subject: Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c97130a4-3ba0-3fbf-f10d-761c6bb51e1e@xen.org>
Date: Fri, 16 Oct 2020 10:51:51 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201012092740.1617-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 12/10/2020 10:27, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking for the case of
> sending an event, and remove the need to disable interrupts when
> taking the lock.
> 
> The lock is needed for avoiding races between sending an event or
> querying the channel's state against removal of the event channel.
> 
> Use a locking scheme similar to a rwlock, but with some modifications:
> 
> - sending an event or querying the event channel's state uses an
>    operation similar to read_trylock(), in case of not obtaining the
>    lock the sending is omitted or a default state is returned
> 
> - closing an event channel is similar to write_lock(), but without
>    real fairness regarding multiple writers (this saves some space in
>    the event channel structure and multiple writers are impossible as
>    closing an event channel requires the domain's event_lock to be
>    held).
> 
> With this locking scheme it is mandatory that a writer will always
> either start with an unbound or free event channel or will end with
> an unbound or free event channel, as otherwise the reaction of a reader
> not getting the lock would be wrong.
> 
> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
> Signed-off-by: Juergen Gross <jgross@suse.com>

The approach looks ok to me. I have a couple of remarks below.

[...]

> diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
> index 509d3ae861..39a93f7556 100644
> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>   #define bucket_from_port(d, p) \
>       ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>   
> +#define EVENT_WRITE_LOCK_INC    MAX_VIRT_CPUS
> +static inline void evtchn_write_lock(struct evtchn *evtchn)

I think it would be good to describe the locking expectation in-code.

> +{
> +    int val;
> +
> +    /* No barrier needed, atomic_add_return() is full barrier. */
> +    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +          val != EVENT_WRITE_LOCK_INC;
> +          val = atomic_read(&evtchn->lock) )
> +        cpu_relax();
> +}
> +
> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
> +{
> +    arch_lock_release_barrier();
> +
> +    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +}
> +
> +static inline bool evtchn_tryread_lock(struct evtchn *evtchn)
> +{
> +    if ( atomic_read(&evtchn->lock) >= EVENT_WRITE_LOCK_INC )
> +        return false;
> +
> +    /* No barrier needed, atomic_inc_return() is full barrier. */
> +    if ( atomic_inc_return(&evtchn->lock) < EVENT_WRITE_LOCK_INC )
> +        return true;
> +
> +    atomic_dec(&evtchn->lock);

NIT: Can you add a newline here?

> +    return false;
> +}
> +

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:09:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:09:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7870.20748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMfe-00015u-MK; Fri, 16 Oct 2020 10:09:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7870.20748; Fri, 16 Oct 2020 10:09:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMfe-00015n-J9; Fri, 16 Oct 2020 10:09:10 +0000
Received: by outflank-mailman (input) for mailman id 7870;
 Fri, 16 Oct 2020 10:09:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YFnv=DX=ravnborg.org=sam@srs-us1.protection.inumbo.net>)
 id 1kTMfc-00015i-Lq
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:09:08 +0000
Received: from asavdk3.altibox.net (unknown [109.247.116.14])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc6680c5-20dc-4e99-95ac-c747aa491c1d;
 Fri, 16 Oct 2020 10:09:06 +0000 (UTC)
Received: from ravnborg.org (unknown [188.228.123.71])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by asavdk3.altibox.net (Postfix) with ESMTPS id DC35220027;
 Fri, 16 Oct 2020 12:08:55 +0200 (CEST)
X-Inumbo-ID: cc6680c5-20dc-4e99-95ac-c747aa491c1d
Date: Fri, 16 Oct 2020 12:08:54 +0200
From: Sam Ravnborg <sam@ravnborg.org>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment
 interfaces
Message-ID: <20201016100854.GA1042954@ravnborg.org>
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-10-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201015123806.32416-10-tzimmermann@suse.de>
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=S433PrkP c=1 sm=1 tr=0
	a=S6zTFyMACwkrwXSdXUNehg==:117 a=S6zTFyMACwkrwXSdXUNehg==:17
	a=kj9zAlcOel0A:10 a=7gkXJVJtAAAA:8 a=0A2xud3A4b7FAmx5SMIA:9
	a=7a1rlSNJFqSX5uOf:21 a=OU_kl8OV53ZSq-ss:21 a=CjuIK1q_8ugA:10
	a=E9Po1WZjFZOl8hwRPBS3:22

Hi Thomas.

On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

Looks good.
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>

> ---
>  include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
>  1 file changed, 62 insertions(+), 10 deletions(-)
> 
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
>   * accessing the buffer. Use the returned instance and the helper functions
>   * to access the buffer's memory in the correct way.
>   *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
>   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
>   * considered bad style. Rather than accessing its fields directly, use one
>   * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
>   *
>   *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>   *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + *	dma_buf_map_clear(&map);
> + *
>   * Test if a mapping is valid with either dma_buf_map_is_set() or
>   * dma_buf_map_is_null().
>   *
> @@ -73,17 +89,19 @@
>   *	if (dma_buf_map_is_equal(&sys_map, &io_map))
>   *		// always false
>   *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
>   *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + *	const void *src = ...; // source buffer
> + *	size_t len = ...; // length of src
> + *
> + *	dma_buf_map_memcpy_to(&map, src, len);
> + *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
>   */
>  
>  /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
>  	}
>  }
>  
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst:	The dma-buf mapping structure
> + * @src:	The source buffer
> + * @len:	The number of bytes in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> +	if (dst->is_iomem)
> +		memcpy_toio(dst->vaddr_iomem, src, len);
> +	else
> +		memcpy(dst->vaddr, src, len);
> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map:	The dma-buf mapping structure
> + * @incr:	The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> +	if (map->is_iomem)
> +		map->vaddr_iomem += incr;
> +	else
> +		map->vaddr += incr;
> +}
> +
>  #endif /* __DMA_BUF_MAP_H__ */
> -- 
> 2.28.0


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:21:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7873.20761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMrZ-0002jK-2F; Fri, 16 Oct 2020 10:21:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7873.20761; Fri, 16 Oct 2020 10:21:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMrY-0002jD-Uh; Fri, 16 Oct 2020 10:21:28 +0000
Received: by outflank-mailman (input) for mailman id 7873;
 Fri, 16 Oct 2020 10:21:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTMrX-0002j8-Se
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:21:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 25006e1c-c1df-4e4f-ac89-c0ab42d59e66;
 Fri, 16 Oct 2020 10:21:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 928EBAEEF;
 Fri, 16 Oct 2020 10:21:25 +0000 (UTC)
X-Inumbo-ID: 25006e1c-c1df-4e4f-ac89-c0ab42d59e66
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602843685;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qcmxnvRUtgd5pwtZDJYRsRYYJtUit4eLtWDDXvSIsZA=;
	b=o1mH86nuLG+oZiXBJ815ysRWI7MJTtTamFh18yR9aCNFHFbcni3mN3kW36LxDyGCP/0lPW
	YuEdYKoRkZZoaICvm3QkNswkEjnB0sQK0/LiyGr9ow2JI9E1NuoWKoXdC8Dfzn+nDcpSBI
	KnjwDh4sUmyRSYMXuRsIYBdY4mC3RAo=
Subject: Ping: [PATCH] x86/PV: make post-migration page state consistent
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
References: <f7ed53c1-768c-cc71-a432-553b56f7f0a7@suse.com>
Message-ID: <f4804dc9-4a6f-6601-2fc9-9b6d4a3ae41a@suse.com>
Date: Fri, 16 Oct 2020 12:21:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <f7ed53c1-768c-cc71-a432-553b56f7f0a7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.09.2020 12:34, Jan Beulich wrote:
> When a page table page gets de-validated, its type reference count drops
> to zero (and PGT_validated gets cleared), but its type remains intact.
> XEN_DOMCTL_getpageframeinfo3 has therefore, so far, reported the prior
> usage of such pages. An intermediate write to such a page via e.g.
> MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
> PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
> return. In libxc, the decision of which pages to normalize / localize
> depends solely on the type returned from the domctl. As a result,
> without further precautions, the guest won't be able to tell whether
> such a page has had its (apparent) PTE entries transitioned to the new
> MFNs.
> 
> Add a check of PGT_validated, thus consistently avoiding normalization /
> localization in the tool stack.
> 
> Alongside using XEN_DOMCTL_PFINFO_NOTAB instead of plain zero for the
> change at hand, also change the variable's initializer to use this
> constant. Take the opportunity to adjust its type as well.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I think I did address all questions here.

Jan

> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -215,7 +215,8 @@ long arch_do_domctl(
>  
>          for ( i = 0; i < num; ++i )
>          {
> -            unsigned long gfn = 0, type = 0;
> +            unsigned long gfn = 0;
> +            unsigned int type = XEN_DOMCTL_PFINFO_NOTAB;
>              struct page_info *page;
>              p2m_type_t t;
>  
> @@ -255,6 +256,8 @@ long arch_do_domctl(
>  
>                  if ( page->u.inuse.type_info & PGT_pinned )
>                      type |= XEN_DOMCTL_PFINFO_LPINTAB;
> +                else if ( !(page->u.inuse.type_info & PGT_validated) )
> +                    type = XEN_DOMCTL_PFINFO_NOTAB;
>  
>                  if ( page->count_info & PGC_broken )
>                      type = XEN_DOMCTL_PFINFO_BROKEN;
> 



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:23:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7876.20773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMtx-0002uE-Ex; Fri, 16 Oct 2020 10:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7876.20773; Fri, 16 Oct 2020 10:23:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTMtx-0002u7-Bf; Fri, 16 Oct 2020 10:23:57 +0000
Received: by outflank-mailman (input) for mailman id 7876;
 Fri, 16 Oct 2020 10:23:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VWaZ=DX=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kTMtw-0002u1-1d
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:23:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eee113c4-05d2-4aaa-a1c3-e126932aaeb1;
 Fri, 16 Oct 2020 10:23:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTMto-00010x-HV; Fri, 16 Oct 2020 10:23:48 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTMto-0006UE-9a; Fri, 16 Oct 2020 10:23:48 +0000
X-Inumbo-ID: eee113c4-05d2-4aaa-a1c3-e126932aaeb1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=0S/GXTmwpIu6yEEvu7d9OAdDFNc4RY8WZMqCXBptqLA=; b=ZOIqUEDCm9GMujrWg/yoycP/lT
	0QLMgwksUi4vQDDe4lwtmljVnlkTZWBYiTXaMBpH64PsaGSG46xFs66lD4iij3ELt5F5bfC9ybNh9
	0f3Ns7RQ2gEQkVSy0D23SrAbKp1+JD5kNdTDbpkVM+t/a38mcm+8bEEQfkrqUur08lzg=;
Subject: Re: Xen Coding style and clang-format
To: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>
Cc: Artem Mygaiev <Artem_Mygaiev@epam.com>,
 "vicooodin@gmail.com" <vicooodin@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "committers@xenproject.org" <committers@xenproject.org>,
 "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
 <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
 <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com>
 <b4d7e9a7-6c25-1f7f-86ce-867083beb81a@suse.com>
 <4d4f351b152df2c50e18676ccd6ab6b4dc667801.camel@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5bd7cc00-c4c9-0737-897d-e76f22e2fd5b@xen.org>
Date: Fri, 16 Oct 2020 11:23:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <4d4f351b152df2c50e18676ccd6ab6b4dc667801.camel@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 16/10/2020 10:42, Anastasiia Lukianenko wrote:
> Thanks for your advice, which helped me improve the checker. I
> understand that there are still some disagreements about the
> formatting, but as I said before, the checker cannot be very flexible
> and take into account all the author's ideas.

I am not sure what you are referring to by "author's ideas" here. The
checker should follow a coding style (Xen's or a modified version):
    - Anything not following the coding style should be considered
      invalid.
    - Anything the coding style does not cover should be left
      untouched/uncommented by the checker.

> I suggest using the
> checker not as a mandatory check, but as an indication to the author of
> possible formatting errors that they can correct or ignore.

I can understand that, in the short term, we would want to make it
optional so that either the coding style or the checker can be tuned.
But I don't think this is an ideal situation to be in long term.

The goal of the checker is to verify the coding style automatically and
keep it consistent across Xen. If we make it optional or it is
"unreliable", then we lose both benefits and possibly increase
contributor frustration, as the checker would say A while we need B.

Therefore, we need to make sure the checker and the coding style match.
I don't have any opinion on the approach used to achieve that.
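As one concrete way to make "the checker and the coding style match" discussable, the uncontroversial parts of the style could be pinned down in a shared configuration first. A minimal .clang-format sketch follows; every option value here is an assumption about the Xen style for illustration, not an agreed-upon configuration:

```
# Hypothetical starting point; values are assumptions, not an agreed config.
BasedOnStyle: LLVM
IndentWidth: 4
UseTab: Never
ColumnLimit: 80
BreakBeforeBraces: Allman
SpacesInParentheses: true    # Xen's "if ( cond )" style
IndentCaseLabels: false
```

Anything the config cannot express would then be an explicit, documented gap rather than a silent disagreement between checker and style.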

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:35:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7880.20784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTN4S-0003rv-Fz; Fri, 16 Oct 2020 10:34:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7880.20784; Fri, 16 Oct 2020 10:34:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTN4S-0003ro-D3; Fri, 16 Oct 2020 10:34:48 +0000
Received: by outflank-mailman (input) for mailman id 7880;
 Fri, 16 Oct 2020 10:34:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTN4Q-0003rj-Uv
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:34:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33d0f83e-9a2b-48ec-9f81-74e63a65b663;
 Fri, 16 Oct 2020 10:34:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTN4O-0001Eh-O9; Fri, 16 Oct 2020 10:34:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTN4O-0006u8-HO; Fri, 16 Oct 2020 10:34:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTN4O-00034L-Gr; Fri, 16 Oct 2020 10:34:44 +0000
X-Inumbo-ID: 33d0f83e-9a2b-48ec-9f81-74e63a65b663
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AQ2AJzi3s97Kkmkr7b4C0vY7ePHzURX9b6B60bKWscg=; b=WIKYckaF2LkV9axVyNQfYwRgvW
	Sg1iG4gx5QUp1k8WrxM8e626TP1cUXA5XXVZWE74ychW7bDJYGtELQ2sZhchZr7TPO4fHHVwOEXku
	NrQ5yYHZLLmk0fLBuqDRRRBuA+SrZttOz9SLCrN8gBteDSfDhWOWFForWVQV8cdSRs18=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155885-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155885: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=6a0e0dc7ba8a62035fb1693e0c91bb53214ec41f
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 10:34:44 +0000

flight 155885 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155885/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6a0e0dc7ba8a62035fb1693e0c91bb53214ec41f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   98 days
Failing since        151818  2020-07-11 04:18:52 Z   97 days   92 attempts
Testing same since   155885  2020-10-16 04:19:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 21456 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:39:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7884.20798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTN99-00043m-3S; Fri, 16 Oct 2020 10:39:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7884.20798; Fri, 16 Oct 2020 10:39:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTN99-00043f-0S; Fri, 16 Oct 2020 10:39:39 +0000
Received: by outflank-mailman (input) for mailman id 7884;
 Fri, 16 Oct 2020 10:39:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9cG/=DX=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kTN96-00043a-Vt
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:39:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 530d5047-09bd-4b38-853c-646a69572b60;
 Fri, 16 Oct 2020 10:39:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6C254AB5C;
 Fri, 16 Oct 2020 10:39:34 +0000 (UTC)
X-Inumbo-ID: 530d5047-09bd-4b38-853c-646a69572b60
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Fri, 16 Oct 2020 12:39:31 +0200
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Sam Ravnborg <sam@ravnborg.org>
Cc: luben.tuikov@amd.com, airlied@linux.ie, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk,
 melissa.srw@gmail.com, ray.huang@amd.com, kraxel@redhat.com,
 emil.velikov@collabora.com, linux-samsung-soc@vger.kernel.org,
 jy0922.shim@samsung.com, lima@lists.freedesktop.org,
 oleksandr_andrushchenko@epam.com, krzk@kernel.org, steven.price@arm.com,
 linux-rockchip@lists.infradead.org, kgene@kernel.org,
 alyssa.rosenzweig@collabora.com, linux+etnaviv@armlinux.org.uk,
 spice-devel@lists.freedesktop.org, bskeggs@redhat.com,
 etnaviv@lists.freedesktop.org, hdegoede@redhat.com,
 xen-devel@lists.xenproject.org, virtualization@lists.linux-foundation.org,
 sean@poorly.run, apaneers@amd.com, linux-arm-kernel@lists.infradead.org,
 linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
 tomeu.vizoso@collabora.com, sw0312.kim@samsung.com, hjc@rock-chips.com,
 kyungmin.park@samsung.com, miaoqinglang@huawei.com, yuq825@gmail.com,
 alexander.deucher@amd.com, linux-media@vger.kernel.org,
 christian.koenig@amd.com
Subject: Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment
 interfaces
Message-ID: <20201016123931.10dd3930@linux-uq9g>
In-Reply-To: <20201016100854.GA1042954@ravnborg.org>
References: <20201015123806.32416-1-tzimmermann@suse.de>
	<20201015123806.32416-10-tzimmermann@suse.de>
	<20201016100854.GA1042954@ravnborg.org>
Organization: SUSE Software Solutions Germany GmbH
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi Sam

On Fri, 16 Oct 2020 12:08:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:

> Hi Thomas.
>
> On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> > To do framebuffer updates, one needs memcpy from system memory and a
> > pointer-increment function. Add both interfaces with documentation.
> >
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>
> Looks good.
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>

Thanks. If you have the time, may I ask you to test this patchset on the
bochs/sparc64 system that failed with the original code?

Best regards
Thomas

>
> > ---
> >  include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
> >  1 file changed, 62 insertions(+), 10 deletions(-)
> >
> > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > index 2e8bbecb5091..6ca0f304dda2 100644
> > --- a/include/linux/dma-buf-map.h
> > +++ b/include/linux/dma-buf-map.h
> > @@ -32,6 +32,14 @@
> >   * accessing the buffer. Use the returned instance and the helper functions
> >   * to access the buffer's memory in the correct way.
> >   *
> > + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> > + * actually independent from the dma-buf infrastructure. When sharing buffers
> > + * among devices, drivers have to know the location of the memory to access
> > + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > + * solves this problem for dma-buf and its users. If other drivers or
> > + * sub-systems require similar functionality, the type could be generalized
> > + * and moved to a more prominent header file.
> > + *
> >   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
> >   * considered bad style. Rather then accessing its fields directly, use one
> >   * of the provided helper functions, or implement your own. For example,
> > @@ -51,6 +59,14 @@
> >   *
> >   *	dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
> >   *
> > + * Instances of struct dma_buf_map do not have to be cleaned up, but
> > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > + * always refer to system memory.
> > + *
> > + * .. code-block:: c
> > + *
> > + *	dma_buf_map_clear(&map);
> > + *
> >   * Test if a mapping is valid with either dma_buf_map_is_set() or
> >   * dma_buf_map_is_null().
> >   *
> > @@ -73,17 +89,19 @@
> >   *	if (dma_buf_map_is_equal(&sys_map, &io_map))
> >   *		// always false
> >   *
> > - * Instances of struct dma_buf_map do not have to be cleaned up, but
> > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> > - * always refer to system memory.
> > + * A set up instance of struct dma_buf_map can be used to access or manipulate
> > + * the buffer memory. Depending on the location of the memory, the provided
> > + * helpers will pick the correct operations. Data can be copied into the memory
> > + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> > + * dma_buf_map_incr().
> >   *
> > - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> > - * actually independent from the dma-buf infrastructure. When sharing buffers
> > - * among devices, drivers have to know the location of the memory to access
> > - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> > - * solves this problem for dma-buf and its users. If other drivers or
> > - * sub-systems require similar functionality, the type could be generalized
> > - * and moved to a more prominent header file.
> > + * .. code-block:: c
> > + *
> > + *	const void *src = ...; // source buffer
> > + *	size_t len = ...; // length of src
> > + *
> > + *	dma_buf_map_memcpy_to(&map, src, len);
> > + *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
> >   */
> > 
> >  /**
> > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
> >  	}
> >  }
> > 
> > +/**
> > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> > + * @dst:	The dma-buf mapping structure
> > + * @src:	The source buffer
> > + * @len:	The number of byte in src
> > + *
> > + * Copies data into a dma-buf mapping. The source buffer is in system
> > + * memory. Depending on the buffer's location, the helper picks the correct
> > + * method of accessing the memory.
> > + */
> > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> > +{
> > +	if (dst->is_iomem)
> > +		memcpy_toio(dst->vaddr_iomem, src, len);
> > +	else
> > +		memcpy(dst->vaddr, src, len);
> > +}
> > +
> > +/**
> > + * dma_buf_map_incr - Increments the address stored in a dma-buf mappi=
ng
> > + * @map:	The dma-buf mapping structure
> > + * @incr:	The number of bytes to increment
> > + *
> > + * Increments the address stored in a dma-buf mapping. Depending on the
> > + * buffer's location, the correct value will be updated.
> > + */
> > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> > +{
> > +	if (map->is_iomem)
> > +		map->vaddr_iomem += incr;
> > +	else
> > +		map->vaddr += incr;
> > +}
> > +
> >  #endif /* __DMA_BUF_MAP_H__ */
> > -- 
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:58:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:58:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7889.20822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRe-0005sz-3v; Fri, 16 Oct 2020 10:58:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7889.20822; Fri, 16 Oct 2020 10:58:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRe-0005ss-0v; Fri, 16 Oct 2020 10:58:46 +0000
Received: by outflank-mailman (input) for mailman id 7889;
 Fri, 16 Oct 2020 10:58:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTNRc-0005rv-Kl
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:58:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 075dc980-4a36-40c8-ab3d-39aca8adcbe4;
 Fri, 16 Oct 2020 10:58:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BAA7DAD63;
 Fri, 16 Oct 2020 10:58:42 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kTNRc-0005rv-Kl
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:58:44 +0000
X-Inumbo-ID: 075dc980-4a36-40c8-ab3d-39aca8adcbe4
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 075dc980-4a36-40c8-ab3d-39aca8adcbe4;
	Fri, 16 Oct 2020 10:58:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602845922;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3PiaK/obgiTv0XRvqmFG9GqAISyxz4e5B/+qnu8lyy8=;
	b=ifARsuR09pu3ja6qReJpPDQA/5SsLNTgOlFQf4OCyEhKgdRLziNBAOvipfOzCGygq0ReE0
	b1JP2WgcHPtbeBtwpMhNPLKtC+CTcUaVdMiAa3WhzUr8+WpM0nRAVN0ZKHm78Y/cQh5U7L
	ZjcjIRgljUqeTDxVZAvvn8HNPULiEMU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id BAA7DAD63;
	Fri, 16 Oct 2020 10:58:42 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/2] xen/events: access last_priority and last_vcpu_id together
Date: Fri, 16 Oct 2020 12:58:38 +0200
Message-Id: <20201016105839.14796-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201016105839.14796-1-jgross@suse.com>
References: <20201016105839.14796-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The queue for a fifo event depends on the vcpu_id and the priority of
the event. When sending an event it can happen that the event needs to
change queues, and the old queue must remain identifiable in order to
keep the links between queue elements intact. For this purpose the
event channel contains last_priority and last_vcpu_id values which
identify the old queue.

In order to avoid races, always access last_priority and last_vcpu_id
together via a single atomic operation, so the two values can never be
observed in an inconsistent state.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/event_fifo.c | 25 +++++++++++++++++++------
 xen/include/xen/sched.h |  3 +--
 2 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/xen/common/event_fifo.c b/xen/common/event_fifo.c
index fc189152e1..2f4e8c54fc 100644
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -42,6 +42,14 @@ struct evtchn_fifo_domain {
     unsigned int num_evtchns;
 };
 
+union evtchn_fifo_lastq {
+    uint32_t raw;
+    struct {
+        uint8_t last_priority;
+        uint16_t last_vcpu_id;
+    };
+};
+
 static inline event_word_t *evtchn_fifo_word_from_port(const struct domain *d,
                                                        unsigned int port)
 {
@@ -86,16 +94,18 @@ static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;
+    union evtchn_fifo_lastq lastq;
 
     for ( try = 0; try < 3; try++ )
     {
-        v = d->vcpu[evtchn->last_vcpu_id];
-        old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        lastq.raw = read_atomic(&evtchn->fifo_lastq);
+        v = d->vcpu[lastq.last_vcpu_id];
+        old_q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         spin_lock_irqsave(&old_q->lock, *flags);
 
-        v = d->vcpu[evtchn->last_vcpu_id];
-        q = &v->evtchn_fifo->queue[evtchn->last_priority];
+        v = d->vcpu[lastq.last_vcpu_id];
+        q = &v->evtchn_fifo->queue[lastq.last_priority];
 
         if ( old_q == q )
             return old_q;
@@ -246,8 +256,11 @@ static void evtchn_fifo_set_pending(struct vcpu *v, struct evtchn *evtchn)
         /* Moved to a different queue? */
         if ( old_q != q )
         {
-            evtchn->last_vcpu_id = v->vcpu_id;
-            evtchn->last_priority = q->priority;
+            union evtchn_fifo_lastq lastq = { };
+
+            lastq.last_vcpu_id = v->vcpu_id;
+            lastq.last_priority = q->priority;
+            write_atomic(&evtchn->fifo_lastq, lastq.raw);
 
             spin_unlock_irqrestore(&old_q->lock, flags);
             spin_lock_irqsave(&q->lock, flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d8ed83f869..a298ff4df8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -114,8 +114,7 @@ struct evtchn
         u16 virq;      /* state == ECS_VIRQ */
     } u;
     u8 priority;
-    u8 last_priority;
-    u16 last_vcpu_id;
+    u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
 #ifdef CONFIG_XSM
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:58:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:58:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7890.20835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRi-0005va-DM; Fri, 16 Oct 2020 10:58:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7890.20835; Fri, 16 Oct 2020 10:58:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRi-0005vT-9x; Fri, 16 Oct 2020 10:58:50 +0000
Received: by outflank-mailman (input) for mailman id 7890;
 Fri, 16 Oct 2020 10:58:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTNRh-0005rq-5g
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:58:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30c2c371-37aa-4819-8685-6c2344435553;
 Fri, 16 Oct 2020 10:58:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0506FAF2C;
 Fri, 16 Oct 2020 10:58:43 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kTNRh-0005rq-5g
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:58:49 +0000
X-Inumbo-ID: 30c2c371-37aa-4819-8685-6c2344435553
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 30c2c371-37aa-4819-8685-6c2344435553;
	Fri, 16 Oct 2020 10:58:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602845923;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GA8cfExfO91DV4o2HCBW/wxrfYxYGPyymWSFttFyle0=;
	b=oyWCzUuRnsUKEe6Nm2d3xCa49BRPUirIjlYiXC5qd8xVy+2Au4YfEfAs+HqcFZL0yxGNSY
	cqULpZD32UO+ZLwkjAw8bRE+gqzcEVzwKCOSkfKhBY1MWig0Ma2Byurbg2V55p2U2267cG
	UXCMdlpOJeBMFT2d+rTbt59PV34CSOU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0506FAF2C;
	Fri, 16 Oct 2020 10:58:43 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
Date: Fri, 16 Oct 2020 12:58:39 +0200
Message-Id: <20201016105839.14796-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201016105839.14796-1-jgross@suse.com>
References: <20201016105839.14796-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Currently the lock for a single event channel needs to be taken with
interrupts off, which causes deadlocks in some cases.

Rework the per event channel lock to be non-blocking for the case of
sending an event, removing the need to disable interrupts when taking
the lock.

The lock is needed to avoid races between sending an event or querying
the channel's state on the one hand, and removal of the event channel
on the other.

Use a locking scheme similar to a rwlock, but with some modifications:

- sending an event or querying the event channel's state uses an
  operation similar to read_trylock(); if the lock cannot be obtained,
  the send is omitted or a default state is returned

- closing an event channel is similar to write_lock(), but without
  real fairness regarding multiple writers (this saves some space in
  the event channel structure, and multiple writers are impossible
  anyway, as closing an event channel requires the domain's event_lock
  to be held)

Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- corrected a copy-and-paste error (Jan Beulich)
- corrected unlocking in two cases (Jan Beulich)
- renamed evtchn_read_trylock() (Jan Beulich)
- added some comments and an ASSERT() for evtchn_write_lock()
- set EVENT_WRITE_LOCK_INC to INT_MIN

V2:
- added needed barriers

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 109 +++++++++++++++++--------------------
 xen/include/xen/event.h    |  76 ++++++++++++++++++++++----
 xen/include/xen/sched.h    |   2 +-
 5 files changed, 125 insertions(+), 77 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 93c4fb9a79..8d1f9a9fc6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2495,14 +2495,12 @@ static void dump_irqs(unsigned char key)
                 pirq = domain_irq_to_pirq(d, irq);
                 info = pirq_info(d, pirq);
                 evtchn = evtchn_from_port(d, info->evtchn);
-                local_irq_disable();
-                if ( spin_trylock(&evtchn->lock) )
+                if ( evtchn_read_trylock(evtchn) )
                 {
                     pending = evtchn_is_pending(d, evtchn);
                     masked = evtchn_is_masked(d, evtchn);
-                    spin_unlock(&evtchn->lock);
+                    evtchn_read_unlock(evtchn);
                 }
-                local_irq_enable();
                 printk("d%d:%3d(%c%c%c)%c",
                        d->domain_id, pirq, "-P?"[pending],
                        "-M?"[masked], info->masked ? 'M' : '-',
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 9aef7a860a..b4e83e0778 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
     if ( port_is_valid(guest, port) )
     {
         struct evtchn *chn = evtchn_from_port(guest, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&chn->lock, flags);
-        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
-        spin_unlock_irqrestore(&chn->lock, flags);
+        if ( evtchn_read_trylock(chn) )
+        {
+            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
+            evtchn_read_unlock(chn);
+        }
     }
 }
 
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index e365b5498f..3df73dbc71 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -131,7 +131,7 @@ static struct evtchn *alloc_evtchn_bucket(struct domain *d, unsigned int port)
             return NULL;
         }
         chn[i].port = port + i;
-        spin_lock_init(&chn[i].lock);
+        atomic_set(&chn[i].lock, 0);
     }
     return chn;
 }
@@ -253,7 +253,6 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     int            port;
     domid_t        dom = alloc->dom;
     long           rc;
-    unsigned long  flags;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -269,14 +268,14 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     if ( (chn->u.unbound.remote_domid = alloc->remote_dom) == DOMID_SELF )
         chn->u.unbound.remote_domid = current->domain->domain_id;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     alloc->port = port;
 
@@ -289,32 +288,26 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 }
 
 
-static unsigned long double_evtchn_lock(struct evtchn *lchn,
-                                        struct evtchn *rchn)
+static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
-    unsigned long flags;
-
     if ( lchn <= rchn )
     {
-        spin_lock_irqsave(&lchn->lock, flags);
+        evtchn_write_lock(lchn);
         if ( lchn != rchn )
-            spin_lock(&rchn->lock);
+            evtchn_write_lock(rchn);
     }
     else
     {
-        spin_lock_irqsave(&rchn->lock, flags);
-        spin_lock(&lchn->lock);
+        evtchn_write_lock(rchn);
+        evtchn_write_lock(lchn);
     }
-
-    return flags;
 }
 
-static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn,
-                                 unsigned long flags)
+static void double_evtchn_unlock(struct evtchn *lchn, struct evtchn *rchn)
 {
     if ( lchn != rchn )
-        spin_unlock(&lchn->lock);
-    spin_unlock_irqrestore(&rchn->lock, flags);
+        evtchn_write_unlock(lchn);
+    evtchn_write_unlock(rchn);
 }
 
 static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
@@ -324,7 +317,6 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     int            lport, rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
     long           rc;
-    unsigned long  flags;
 
     if ( rdom == DOMID_SELF )
         rdom = current->domain->domain_id;
@@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
     if ( rc )
         goto out;
 
-    flags = double_evtchn_lock(lchn, rchn);
+    double_evtchn_lock(lchn, rchn);
 
     lchn->u.interdomain.remote_dom  = rd;
     lchn->u.interdomain.remote_port = rport;
@@ -377,7 +369,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
      */
     evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);
 
-    double_evtchn_unlock(lchn, rchn, flags);
+    double_evtchn_unlock(lchn, rchn);
 
     bind->local_port = lport;
 
@@ -400,7 +392,6 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
     struct domain *d = current->domain;
     int            virq = bind->virq, vcpu = bind->vcpu;
     int            rc = 0;
-    unsigned long  flags;
 
     if ( (virq < 0) || (virq >= ARRAY_SIZE(v->virq_to_evtchn)) )
         return -EINVAL;
@@ -438,14 +429,14 @@ int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_VIRQ;
     chn->notify_vcpu_id = vcpu;
     chn->u.virq         = virq;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     v->virq_to_evtchn[virq] = bind->port = port;
 
@@ -462,7 +453,6 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
     struct domain *d = current->domain;
     int            port, vcpu = bind->vcpu;
     long           rc = 0;
-    unsigned long  flags;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -474,13 +464,13 @@ static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 
     chn = evtchn_from_port(d, port);
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state          = ECS_IPI;
     chn->notify_vcpu_id = vcpu;
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -524,7 +514,6 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
     struct pirq   *info;
     int            port = 0, pirq = bind->pirq;
     long           rc;
-    unsigned long  flags;
 
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
@@ -557,14 +546,14 @@ static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
         goto out;
     }
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state  = ECS_PIRQ;
     chn->u.pirq.irq = pirq;
     link_pirq_port(port, chn, v);
     evtchn_port_init(d, chn);
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     bind->port = port;
 
@@ -585,7 +574,6 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
     struct evtchn *chn1, *chn2;
     int            port2;
     long           rc = 0;
-    unsigned long  flags;
 
  again:
     spin_lock(&d1->event_lock);
@@ -686,14 +674,14 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
-        flags = double_evtchn_lock(chn1, chn2);
+        double_evtchn_lock(chn1, chn2);
 
         evtchn_free(d1, chn1);
 
         chn2->state = ECS_UNBOUND;
         chn2->u.unbound.remote_domid = d1->domain_id;
 
-        double_evtchn_unlock(chn1, chn2, flags);
+        double_evtchn_unlock(chn1, chn2);
 
         goto out;
 
@@ -701,9 +689,9 @@ int evtchn_close(struct domain *d1, int port1, bool guest)
         BUG();
     }
 
-    spin_lock_irqsave(&chn1->lock, flags);
+    evtchn_write_lock(chn1);
     evtchn_free(d1, chn1);
-    spin_unlock_irqrestore(&chn1->lock, flags);
+    evtchn_write_unlock(chn1);
 
  out:
     if ( d2 != NULL )
@@ -723,7 +711,6 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     struct evtchn *lchn, *rchn;
     struct domain *rd;
     int            rport, ret = 0;
-    unsigned long  flags;
 
     if ( !port_is_valid(ld, lport) )
         return -EINVAL;
@@ -736,7 +723,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return 0;
 
     /* Guest cannot send via a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(lchn)) )
@@ -771,7 +759,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
     }
 
 out:
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 
     return ret;
 }
@@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
 
     d = v->domain;
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, v->vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -841,7 +833,6 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
 {
     int port;
     struct evtchn *chn;
-    unsigned long flags;
 
     /*
      * PV guests: It should not be possible to race with __evtchn_close(). The
@@ -856,9 +847,11 @@ void send_guest_pirq(struct domain *d, const struct pirq *pirq)
     }
 
     chn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&chn->lock, flags);
-    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( evtchn_read_trylock(chn) )
+    {
+        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
+        evtchn_read_unlock(chn);
+    }
 }
 
 static struct domain *global_virq_handlers[NR_VIRQS] __read_mostly;
@@ -1060,15 +1053,16 @@ int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
     struct evtchn *evtchn;
-    unsigned long flags;
 
     if ( unlikely(!port_is_valid(d, port)) )
         return -EINVAL;
 
     evtchn = evtchn_from_port(d, port);
-    spin_lock_irqsave(&evtchn->lock, flags);
-    evtchn_port_unmask(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        evtchn_port_unmask(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return 0;
 }
@@ -1327,7 +1321,6 @@ int alloc_unbound_xen_event_channel(
 {
     struct evtchn *chn;
     int            port, rc;
-    unsigned long  flags;
 
     spin_lock(&ld->event_lock);
 
@@ -1340,14 +1333,14 @@ int alloc_unbound_xen_event_channel(
     if ( rc )
         goto out;
 
-    spin_lock_irqsave(&chn->lock, flags);
+    evtchn_write_lock(chn);
 
     chn->state = ECS_UNBOUND;
     chn->xen_consumer = get_xen_consumer(notification_fn);
     chn->notify_vcpu_id = lvcpu;
     chn->u.unbound.remote_domid = remote_domid;
 
-    spin_unlock_irqrestore(&chn->lock, flags);
+    evtchn_write_unlock(chn);
 
     /*
      * Increment ->xen_evtchns /after/ ->active_evtchns. No explicit
@@ -1383,7 +1376,6 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
     struct evtchn *lchn, *rchn;
     struct domain *rd;
-    unsigned long flags;
 
     if ( !port_is_valid(ld, lport) )
     {
@@ -1398,7 +1390,8 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
 
     lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return;
 
     if ( likely(lchn->state == ECS_INTERDOMAIN) )
     {
@@ -1408,7 +1401,7 @@ void notify_via_xen_event_channel(struct domain *ld, int lport)
         evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
     }
 
-    spin_unlock_irqrestore(&lchn->lock, flags);
+    evtchn_read_unlock(lchn);
 }
 
 void evtchn_check_pollers(struct domain *d, unsigned int port)
diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index 509d3ae861..592e0dc22d 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,60 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
+#define EVENT_WRITE_LOCK_INC    INT_MIN
+
+/*
+ * Lock an event channel exclusively. This is allowed only while holding
+ * d->event_lock, AND only when the channel is free or unbound either when
+ * taking or when releasing the lock, as any concurrent operation on the
+ * event channel using evtchn_read_trylock() will just assume the event
+ * channel is free or unbound at the moment.
+ */
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    int val;
+
+    /*
+     * The lock can't be held by a writer already, as all writers need to
+     * hold d->event_lock.
+     */
+    ASSERT(atomic_read(&evtchn->lock) >= 0);
+
+    /* No barrier needed, atomic_add_return() is full barrier. */
+    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
+          val != EVENT_WRITE_LOCK_INC;
+          val = atomic_read(&evtchn->lock) )
+        cpu_relax();
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    arch_lock_release_barrier();
+
+    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
+}
+
+static inline bool evtchn_read_trylock(struct evtchn *evtchn)
+{
+    if ( atomic_read(&evtchn->lock) < 0 )
+        return false;
+
+    /* No barrier needed, atomic_inc_return() is full barrier. */
+    if ( atomic_inc_return(&evtchn->lock) >= 0 )
+        return true;
+
+    atomic_dec(&evtchn->lock);
+
+    return false;
+}
+
+static inline void evtchn_read_unlock(struct evtchn *evtchn)
+{
+    arch_lock_release_barrier();
+
+    atomic_dec(&evtchn->lock);
+}
+
 static inline unsigned int max_evtchns(const struct domain *d)
 {
     return d->evtchn_fifo ? EVTCHN_FIFO_NR_CHANNELS
@@ -249,12 +303,13 @@ static inline bool evtchn_is_masked(const struct domain *d,
 static inline bool evtchn_port_is_masked(struct domain *d, evtchn_port_t port)
 {
     struct evtchn *evtchn = evtchn_from_port(d, port);
-    bool rc;
-    unsigned long flags;
+    bool rc = true;
 
-    spin_lock_irqsave(&evtchn->lock, flags);
-    rc = evtchn_is_masked(d, evtchn);
-    spin_unlock_irqrestore(&evtchn->lock, flags);
+    if ( evtchn_read_trylock(evtchn) )
+    {
+        rc = evtchn_is_masked(d, evtchn);
+        evtchn_read_unlock(evtchn);
+    }
 
     return rc;
 }
@@ -274,12 +329,13 @@ static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
     if ( port_is_valid(d, port) )
     {
         struct evtchn *evtchn = evtchn_from_port(d, port);
-        unsigned long flags;
 
-        spin_lock_irqsave(&evtchn->lock, flags);
-        if ( evtchn_usable(evtchn) )
-            rc = evtchn_is_pending(d, evtchn);
-        spin_unlock_irqrestore(&evtchn->lock, flags);
+        if ( evtchn_read_trylock(evtchn) )
+        {
+            if ( evtchn_usable(evtchn) )
+                rc = evtchn_is_pending(d, evtchn);
+            evtchn_read_unlock(evtchn);
+        }
     }
 
     return rc;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index a298ff4df8..096e0ec6af 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -85,7 +85,7 @@ extern domid_t hardware_domid;
 
 struct evtchn
 {
-    spinlock_t lock;
+    atomic_t lock;         /* kind of rwlock, use evtchn_*_[un]lock()        */
 #define ECS_FREE         0 /* Channel is available for use.                  */
 #define ECS_RESERVED     1 /* Channel is reserved.                           */
 #define ECS_UNBOUND      2 /* Channel is waiting to bind to a remote domain. */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:58:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:58:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7888.20811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRc-0005s7-SX; Fri, 16 Oct 2020 10:58:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7888.20811; Fri, 16 Oct 2020 10:58:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRc-0005s0-P3; Fri, 16 Oct 2020 10:58:44 +0000
Received: by outflank-mailman (input) for mailman id 7888;
 Fri, 16 Oct 2020 10:58:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JQTg=DX=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kTNRc-0005rq-6n
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:58:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fa88072-4dfe-4f75-bb3a-ce319564d7e4;
 Fri, 16 Oct 2020 10:58:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AE1B5AB5C;
 Fri, 16 Oct 2020 10:58:42 +0000 (UTC)
X-Inumbo-ID: 6fa88072-4dfe-4f75-bb3a-ce319564d7e4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602845922;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=IwanLpIcHzmd1jTJgQsiVGDYfgOQtfWilyCT1REryeg=;
	b=hQtYr5IbceAdXJ9EHBvenVaDqys5gXTxo6VWYp9BbakFbGSGxxZM1NJp0D78LCk8LE+6ac
	ie3gnI+lb0jFd4oYbUWXdWMqAWUK3q8RIwQxvBwgpl95GHzRZM9q87Jm2ivQslWI+Dpg/D
	k6dvPSpzArFgHV2exR8x0vTZedJiaww=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v3 0/2] XSA-343 followup patches
Date: Fri, 16 Oct 2020 12:58:37 +0200
Message-Id: <20201016105839.14796-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The patches for XSA-343 produced some fallout; especially the event
channel locking has proven to be problematic.

Patch 1 targets FIFO event channels, avoiding any races for the case
that the FIFO queue of a specific event channel has been changed.

The second patch reworks the per event channel locking scheme in order
to avoid deadlocks and problems caused by the event channel lock having
been changed to require IRQs off by the XSA-343 patches.

Changes in V3:
- addressed comments

Juergen Gross (2):
  xen/events: access last_priority and last_vcpu_id together
  xen/evtchn: rework per event channel lock

 xen/arch/x86/irq.c         |   6 +-
 xen/arch/x86/pv/shim.c     |   9 +--
 xen/common/event_channel.c | 109 +++++++++++++++++--------------------
 xen/common/event_fifo.c    |  25 +++++++--
 xen/include/xen/event.h    |  76 ++++++++++++++++++++++----
 xen/include/xen/sched.h    |   5 +-
 6 files changed, 145 insertions(+), 85 deletions(-)

-- 
2.26.2
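[Editorial note: the idea behind patch 1 ("access last_priority and last_vcpu_id together") can be sketched as packing both fields into a single 32-bit word so one atomic access covers both, closing the race where a reader observes a mix of old and new values. The field layout, names, and helpers below are invented for illustration and are not the actual Xen code.]

```c
#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical packing of the two per-queue fields into one word. */
union last_info {
    uint32_t raw;
    struct {
        uint16_t last_vcpu_id;
        uint8_t  last_priority;
        uint8_t  pad;
    };
};

static void set_last(_Atomic uint32_t *slot, uint16_t vcpu, uint8_t prio)
{
    union last_info v = { .raw = 0 };

    v.last_vcpu_id = vcpu;
    v.last_priority = prio;
    atomic_store(slot, v.raw);   /* one write covers both fields */
}

static union last_info get_last(_Atomic uint32_t *slot)
{
    union last_info v;

    v.raw = atomic_load(slot);   /* one read covers both fields */
    return v;
}
```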



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:58:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:58:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7891.20847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRm-0005zp-Ou; Fri, 16 Oct 2020 10:58:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7891.20847; Fri, 16 Oct 2020 10:58:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRm-0005zf-Kq; Fri, 16 Oct 2020 10:58:54 +0000
Received: by outflank-mailman (input) for mailman id 7891;
 Fri, 16 Oct 2020 10:58:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iki1=DX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kTNRm-0005rq-5m
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:58:54 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7d3f7fc-74af-4701-ab2e-9f7e155bf591;
 Fri, 16 Oct 2020 10:58:46 +0000 (UTC)
X-Inumbo-ID: f7d3f7fc-74af-4701-ab2e-9f7e155bf591
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602845927;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=+pBWnlvHDSSn8Da8INJJHrIl0uMpvD+yNPP15XdmzxA=;
  b=FFUNqfbiabPa9I+Mo/AyGc1L17JS4ic6HwWzdAkbUC5uj/p5bu3dFSno
   HD3wJGuiBNgk7I24+FW3SsR7qcg3w1eT7ymsIEkMz3BrTqec7CbzgsE/p
   0/gikMUeFZRO7IteZOcsjhF+HCWavGH6XEeS+jJidrf/tseV6P65QbAUn
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Rj08be3xfhrcr2A/tvCTIlSKgbNhWftc1HWPAzgN4TrmbwWrltuiAFTUXeCKdjONFpCyVotEzr
 ptcOuFyIRMs+TTMfnIG5HEejogIdWpH5mSIQv9QoRCYuVrMrfsfOVRYGL9/tjmNyIalrExokch
 wq2rzxcfDCi3R9bibI03cmB5cfZCqzYdzbWide3zucpxtuof9USyVR6udxc7UKOzEt2f1rYXmi
 0rvP37wEK4hftLpr+m8+UEuiWs7OJFaj0LiMjxJY2qlJmBqyGOTRBZYD4hLXuCqudXb7vt2RP2
 Xtk=
X-SBRS: 2.5
X-MesageID: 29159332
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,382,1596513600"; 
   d="scan'208";a="29159332"
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
 <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
 <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
 <948f0753-561b-15e8-bf8c-52ff507133d2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <dbd19cd0-316a-c62f-de7b-627ada4df350@citrix.com>
Date: Fri, 16 Oct 2020 11:58:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <948f0753-561b-15e8-bf8c-52ff507133d2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 15/10/2020 08:27, Jan Beulich wrote:
> On 14.10.2020 20:00, Andrew Cooper wrote:
>> On 13/10/2020 16:51, Jan Beulich wrote:
>>> On 12.10.2020 15:49, Andrew Cooper wrote:
>>>> All interrupts and exceptions pass a struct cpu_user_regs up into C.  This
>>>> contains the legacy vm86 fields from 32bit days, which are beyond the
>>>> hardware-pushed frame.
>>>>
>>>> Accessing these fields is generally illegal, as they are logically out of
>>>> bounds for anything other than an interrupt/exception hitting ring1/3 code.
>>>>
>>>> Unfortunately, the #DF handler uses these fields as part of preparing the
>>>> state dump, and being IST, accesses the adjacent stack frame.
>>>>
>>>> This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
>>>> layout to support shadow stacks" repositioned the #DF stack to be adjacent to
>>>> the guard page, which turns this OoB write into a fatal pagefault:
>>>>
>>>>   (XEN) *** DOUBLE FAULT ***
>>>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>>>   (XEN) CPU:    4
>>>>   (XEN) RIP:    e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
>>>>   (XEN) RFLAGS: 0000000000050086   CONTEXT: hypervisor (d1v0)
>>>>   ...
>>>>   (XEN) Xen call trace:
>>>>   (XEN)    [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
>>>>   (XEN)    [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
>>>>   (XEN)    [<ffff82d04039acd7>] F double_fault+0x107/0x110
>>>>   (XEN)
>>>>   (XEN) Pagetable walk from ffff830236f6d008:
>>>>   (XEN)  L4[0x106] = 80000000bfa9b063 ffffffffffffffff
>>>>   (XEN)  L3[0x008] = 0000000236ffd063 ffffffffffffffff
>>>>   (XEN)  L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
>>>>   (XEN)  L1[0x16d] = 8000000236f6d161 ffffffffffffffff
>>>>   (XEN)
>>>>   (XEN) ****************************************
>>>>   (XEN) Panic on CPU 4:
>>>>   (XEN) FATAL PAGE FAULT
>>>>   (XEN) [error_code=0003]
>>>>   (XEN) Faulting linear address: ffff830236f6d008
>>>>   (XEN) ****************************************
>>>>   (XEN)
>>>>
>>>> and rendering the main #DF analysis broken.
>>>>
>>>> The proper fix is to delete cpu_user_regs.es and later, so no
>>>> interrupt/exception path can access OoB, but this needs disentangling from the
>>>> PV ABI first.
>>>>
>>>> Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks")
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> Is it perhaps worth also saying explicitly that the other IST
>>> stacks don't suffer the same problem because show_registers()
>>> makes a local copy of the passed-in struct? (Doing so also
>>> for #DF would apparently be an alternative solution.)
>> They're not safe.  They merely don't explode.
>>
>> https://lore.kernel.org/xen-devel/1532546157-5974-1-git-send-email-andrew.cooper3@citrix.com/
>> was "fixed" by CET-SS turning the guard page from not present to
>> read-only, but the same CET-SS series swapped #DB for #DF when it comes
>> to the single OoB write problem case.
> I see. While indeed I didn't pay attention to the OoB read aspect,
> me saying "the other IST stacks don't suffer the same problem" was
> still correct, wasn't it? Anyway - not a big deal.

I've tweaked the commit message to make this more clear.

--8<---
Accessing these fields is generally illegal, as they are logically out of
bounds for anything other than an interrupt/exception hitting ring1/3 code.

show_registers() unconditionally reads these fields, but the content is
discarded before use.  This is benign right now, as all parts of the
stack are readable, including the guard pages.

However, read_registers() in the #DF handler writes to these fields as
part of preparing the state dump, and being IST, hits the adjacent
stack frame.
--8<--

~Andrew
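[Editorial note: the layout problem under discussion — the legacy vm86 selector fields living *beyond* the hardware-pushed exception frame — can be illustrated with a simplified, hypothetical struct. This is not the real x86 struct cpu_user_regs; it only demonstrates why an IST handler that touches the selector fields accesses memory past the end of its own stack frame.]

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified illustration only: field set and padding are invented. */
struct fake_user_regs {
    /* ... general-purpose registers, pushed by software ... */
    uint64_t rax, rbx;
    /* Hardware-pushed exception frame; it ends at ss: */
    uint64_t rip, cs, rflags, rsp, ss;
    /*
     * Legacy vm86 selector fields, located beyond the hardware frame.
     * Only valid when the interrupted context was ring-1/3; for an
     * interrupt taken in ring 0 (or on an IST stack) these offsets
     * fall outside the pushed frame.
     */
    uint16_t es, pad1[3];
    uint16_t ds, pad2[3];
    uint16_t fs, pad3[3];
    uint16_t gs, pad4[3];
};
```

Because `es` and later sit at offsets past `ss`, an IST handler whose stack frame ends right at `ss` reads or writes the adjacent stack page when it dereferences them — harmless while that page is readable, fatal once it is a guard page.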


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 10:59:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 10:59:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7892.20859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRy-00069v-63; Fri, 16 Oct 2020 10:59:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7892.20859; Fri, 16 Oct 2020 10:59:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNRy-00069n-2K; Fri, 16 Oct 2020 10:59:06 +0000
Received: by outflank-mailman (input) for mailman id 7892;
 Fri, 16 Oct 2020 10:59:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YFnv=DX=ravnborg.org=sam@srs-us1.protection.inumbo.net>)
 id 1kTNRw-00068y-H4
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 10:59:04 +0000
Received: from asavdk3.altibox.net (unknown [109.247.116.14])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ff914bb-d76d-4203-9cb2-6f1a8bc04d33;
 Fri, 16 Oct 2020 10:59:01 +0000 (UTC)
Received: from ravnborg.org (unknown [188.228.123.71])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by asavdk3.altibox.net (Postfix) with ESMTPS id AF7C420074;
 Fri, 16 Oct 2020 12:58:55 +0200 (CEST)
X-Inumbo-ID: 5ff914bb-d76d-4203-9cb2-6f1a8bc04d33
Date: Fri, 16 Oct 2020 12:58:54 +0200
From: Sam Ravnborg <sam@ravnborg.org>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <20201016105854.GB1042954@ravnborg.org>
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-11-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201015123806.32416-11-tzimmermann@suse.de>
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=S433PrkP c=1 sm=1 tr=0
	a=S6zTFyMACwkrwXSdXUNehg==:117 a=S6zTFyMACwkrwXSdXUNehg==:17
	a=kj9zAlcOel0A:10 a=7gkXJVJtAAAA:8 a=NqsBjqOBP8_30qnptSgA:9
	a=cHP_0by8mU2PnoFw:21 a=_eZ05lBCHhpxV7uM:21 a=CjuIK1q_8ugA:10
	a=qfUslh1TxfEA:10 a=E9Po1WZjFZOl8hwRPBS3:22

Hi Thomas.

On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
> 
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
> 
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
> 
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is no longer required. The patch removes the
> respective code from both bochs and fbdev.
> 
> v4:
> 	* move dma_buf_map changes into separate patch (Daniel)
> 	* TODO list: comment on fbdev updates (Daniel)

I have been offline for a while, so I have not followed all the threads
on this.  My comments below may well be addressed already, but I failed
to see it.

If the point about fb_sync is already addressed/considered then:
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>


> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  Documentation/gpu/todo.rst        |  19 ++-
>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
>  include/drm/drm_mode_config.h     |  12 --
>  4 files changed, 220 insertions(+), 29 deletions(-)
> 
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
>  ------------------------------------------------
>  
>  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>  
>  Contact: Maintainer of the driver you plan to convert
>  
>  Level: Intermediate
>  
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
>  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
>  -----------------------------------------------------------------
>  
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>  	bochs->dev->mode_config.preferred_depth = 24;
>  	bochs->dev->mode_config.prefer_shadow = 0;
>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> -	bochs->dev->mode_config.fbdev_use_iomem = true;
>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>  
>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
Good to see this workaround gone again!

> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>  }
>  
>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> -					  struct drm_clip_rect *clip)
> +					  struct drm_clip_rect *clip,
> +					  struct dma_buf_map *dst)
>  {
>  	struct drm_framebuffer *fb = fb_helper->fb;
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> -	for (y = clip->y1; y < clip->y2; y++) {
> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> -			memcpy(dst, src, len);
> -		else
> -			memcpy_toio((void __iomem *)dst, src, len);
> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>  
> +	for (y = clip->y1; y < clip->y2; y++) {
> +		dma_buf_map_memcpy_to(dst, src, len);
> +		dma_buf_map_incr(dst, fb->pitches[0]);
>  		src += fb->pitches[0];
> -		dst += fb->pitches[0];
>  	}
>  }
>  
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
>  			if (ret)
>  				return;
> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>  		}
> +
>  		if (helper->fb->funcs->dirty)
>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>  						 &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
>  }
>  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>  
So far everything looks good.

> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> +				      size_t count, loff_t *ppos)
> +{
> +	unsigned long p = *ppos;
> +	u8 *dst;
> +	u8 __iomem *src;
> +	int c, err = 0;
> +	unsigned long total_size;
> +	unsigned long alloc_size;
> +	ssize_t ret = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	total_size = info->screen_size;
> +
> +	if (total_size == 0)
> +		total_size = info->fix.smem_len;
> +
> +	if (p >= total_size)
> +		return 0;
> +
> +	if (count >= total_size)
> +		count = total_size;
> +
> +	if (count + p > total_size)
> +		count = total_size - p;
> +
> +	src = (u8 __iomem *)(info->screen_base + p);
screen_base is a char __iomem *, so this cast looks semi-redundant.

> +
> +	alloc_size = min(count, PAGE_SIZE);
> +
> +	dst = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!dst)
> +		return -ENOMEM;
> +
Same comment as below about fb_sync.


> +	while (count) {
> +		c = min(count, alloc_size);
> +
> +		memcpy_fromio(dst, src, c);
> +		if (copy_to_user(buf, dst, c)) {
> +			err = -EFAULT;
> +			break;
> +		}
> +
> +		src += c;
> +		*ppos += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(dst);
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> +				       size_t count, loff_t *ppos)
> +{
> +	unsigned long p = *ppos;
> +	u8 *src;
> +	u8 __iomem *dst;
> +	int c, err = 0;
> +	unsigned long total_size;
> +	unsigned long alloc_size;
> +	ssize_t ret = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	total_size = info->screen_size;
> +
> +	if (total_size == 0)
> +		total_size = info->fix.smem_len;
> +
> +	if (p > total_size)
> +		return -EFBIG;
> +
> +	if (count > total_size) {
> +		err = -EFBIG;
> +		count = total_size;
> +	}
> +
> +	if (count + p > total_size) {
> +		/*
> +		 * The framebuffer is too small. We do the
> +		 * copy operation, but return an error code
> +		 * afterwards. Taken from fbdev.
> +		 */
> +		if (!err)
> +			err = -ENOSPC;
> +		count = total_size - p;
> +	}
> +
> +	alloc_size = min(count, PAGE_SIZE);
> +
> +	src = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!src)
> +		return -ENOMEM;
> +
> +	dst = (u8 __iomem *)(info->screen_base + p);
> +

The fbdev variant calls the fb_sync callback here.
nouveau and gma500 implement the fb_sync callback - but no one else
does.


> +	while (count) {
> +		c = min(count, alloc_size);
> +
> +		if (copy_from_user(src, buf, c)) {
> +			err = -EFAULT;
> +			break;
> +		}
> +		memcpy_toio(dst, src, c);
When we rewrite this part to use dma_buf_map_memcpy_to(), can we then
merge the two variants of helper_{sys,cfb}_read()?
That is part of the todo - so OK.
> +
> +		dst += c;
> +		*ppos += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(src);
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
>  /**
>   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
>   * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  		return -ENODEV;
>  }
>  
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> +				 size_t count, loff_t *ppos)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		return drm_fb_helper_sys_read(info, buf, count, ppos);
> +	else
> +		return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> +				  size_t count, loff_t *ppos)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		return drm_fb_helper_sys_write(info, buf, count, ppos);
> +	else
> +		return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> +				  const struct fb_fillrect *rect)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_fillrect(info, rect);
> +	else
> +		drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> +				  const struct fb_copyarea *area)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_copyarea(info, area);
> +	else
> +		drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> +				   const struct fb_image *image)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_imageblit(info, image);
> +	else
> +		drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
>  static const struct fb_ops drm_fbdev_fb_ops = {
>  	.owner		= THIS_MODULE,
>  	DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>  	.fb_release	= drm_fbdev_fb_release,
>  	.fb_destroy	= drm_fbdev_fb_destroy,
>  	.fb_mmap	= drm_fbdev_fb_mmap,
> -	.fb_read	= drm_fb_helper_sys_read,
> -	.fb_write	= drm_fb_helper_sys_write,
> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> +	.fb_read	= drm_fbdev_fb_read,
> +	.fb_write	= drm_fbdev_fb_write,
> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
>  };
>  
>  static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
>  	 */
>  	bool prefer_shadow_fbdev;
>  
> -	/**
> -	 * @fbdev_use_iomem:
> -	 *
> -	 * Set to true if framebuffer reside in iomem.
> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> -	 *
> -	 * FIXME: This should be replaced with a per-mapping is_iomem
> -	 * flag (like ttm does), and then used everywhere in fbdev code.
> -	 */
> -	bool fbdev_use_iomem;
> -
>  	/**
>  	 * @quirk_addfb_prefer_xbgr_30bpp:
>  	 *
> -- 
> 2.28.0


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 11:03:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 11:03:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7903.20870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNWb-0007Em-LX; Fri, 16 Oct 2020 11:03:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7903.20870; Fri, 16 Oct 2020 11:03:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNWb-0007Ef-Ig; Fri, 16 Oct 2020 11:03:53 +0000
Received: by outflank-mailman (input) for mailman id 7903;
 Fri, 16 Oct 2020 11:03:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTNWa-0007Ea-B7
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 11:03:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66ed3bd4-882d-419e-a494-d653ec6b3ae3;
 Fri, 16 Oct 2020 11:03:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5041EAF57;
 Fri, 16 Oct 2020 11:03:50 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kTNWa-0007Ea-B7
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 11:03:52 +0000
X-Inumbo-ID: 66ed3bd4-882d-419e-a494-d653ec6b3ae3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602846230;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Va+0gUhz7qtRrDzZw2xhJD5kqAtBVfdpqPmhdFPf9j4=;
	b=iXx6mzof2NKNCtme8qB3jiGKUtjzUAWvvTxud64h6UqtzsJPUcLG5tPu+pOXGBTHa9VS1s
	caae5yFCquRQ/HliibJqUVoGwxK93wbAWj/m0U+hIdI45prx4gMA9n69zNl99Yat16bMDi
	Mvd5pZxT0D39dOzoXnzzITbxnm28Dcs=
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
 <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
 <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
 <948f0753-561b-15e8-bf8c-52ff507133d2@suse.com>
 <dbd19cd0-316a-c62f-de7b-627ada4df350@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <00ba5885-5ee6-c772-a72e-15431cd3b1f4@suse.com>
Date: Fri, 16 Oct 2020 13:03:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <dbd19cd0-316a-c62f-de7b-627ada4df350@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.10.2020 12:58, Andrew Cooper wrote:
> On 15/10/2020 08:27, Jan Beulich wrote:
>> On 14.10.2020 20:00, Andrew Cooper wrote:
>>> On 13/10/2020 16:51, Jan Beulich wrote:
>>>> On 12.10.2020 15:49, Andrew Cooper wrote:
>>>>> All interrupts and exceptions pass a struct cpu_user_regs up into C.  This
>>>>> contains the legacy vm86 fields from 32bit days, which are beyond the
>>>>> hardware-pushed frame.
>>>>>
>>>>> Accessing these fields is generally illegal, as they are logically out of
>>>>> bounds for anything other than an interrupt/exception hitting ring1/3 code.
>>>>>
>>>>> Unfortunately, the #DF handler uses these fields as part of preparing the
>>>>> state dump, and being IST, accesses the adjacent stack frame.
>>>>>
>>>>> This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
>>>>> layout to support shadow stacks" repositioned the #DF stack to be adjacent to
>>>>> the guard page, which turns this OoB write into a fatal pagefault:
>>>>>
>>>>>   (XEN) *** DOUBLE FAULT ***
>>>>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>>>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>>>>   (XEN) CPU:    4
>>>>>   (XEN) RIP:    e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
>>>>>   (XEN) RFLAGS: 0000000000050086   CONTEXT: hypervisor (d1v0)
>>>>>   ...
>>>>>   (XEN) Xen call trace:
>>>>>   (XEN)    [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
>>>>>   (XEN)    [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
>>>>>   (XEN)    [<ffff82d04039acd7>] F double_fault+0x107/0x110
>>>>>   (XEN)
>>>>>   (XEN) Pagetable walk from ffff830236f6d008:
>>>>>   (XEN)  L4[0x106] = 80000000bfa9b063 ffffffffffffffff
>>>>>   (XEN)  L3[0x008] = 0000000236ffd063 ffffffffffffffff
>>>>>   (XEN)  L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
>>>>>   (XEN)  L1[0x16d] = 8000000236f6d161 ffffffffffffffff
>>>>>   (XEN)
>>>>>   (XEN) ****************************************
>>>>>   (XEN) Panic on CPU 4:
>>>>>   (XEN) FATAL PAGE FAULT
>>>>>   (XEN) [error_code=0003]
>>>>>   (XEN) Faulting linear address: ffff830236f6d008
>>>>>   (XEN) ****************************************
>>>>>   (XEN)
>>>>>
>>>>> and rendering the main #DF analysis broken.
>>>>>
>>>>> The proper fix is to delete cpu_user_regs.es and later, so no
>>>>> interrupt/exception path can access OoB, but this needs disentangling from the
>>>>> PV ABI first.
>>>>>
>>>>> Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks")
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> Is it perhaps worth also saying explicitly that the other IST
>>>> stacks don't suffer the same problem because show_registers()
>>>> makes a local copy of the passed-in struct? (Doing so also
>>>> for #DF would apparently be an alternative solution.)
>>> They're not safe.  They merely don't explode.
>>>
>>> https://lore.kernel.org/xen-devel/1532546157-5974-1-git-send-email-andrew.cooper3@citrix.com/
>>> was "fixed" by CET-SS turning the guard page from not present to
>>> read-only, but the same CET-SS series swapped #DB for #DF when it comes
>>> to the single OoB write problem case.
>> I see. While indeed I didn't pay attention to the OoB read aspect,
>> me saying "the other IST stacks don't suffer the same problem" was
>> still correct, wasn't it? Anyway - not a big deal.
> 
> I've tweaked the commit message to make this more clear.
> 
> --8<---
> Accessing these fields is generally illegal, as they are logically out of
> bounds for anything other than an interrupt/exception hitting ring1/3 code.
> 
> show_registers() unconditionally reads these fields, but the content is
> discarded before use.  This is benign right now, as all parts of the
> stack are readable, including the guard pages.
> 
> However, read_registers() in the #DF handler writes to these fields as
> part of preparing the state dump, and being IST, hits the adjacent stack
> frame.
> --8<--

Thanks, lgtm.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 11:21:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 11:21:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7973.21142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNny-0001qx-5y; Fri, 16 Oct 2020 11:21:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7973.21142; Fri, 16 Oct 2020 11:21:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNny-0001qq-2x; Fri, 16 Oct 2020 11:21:50 +0000
Received: by outflank-mailman (input) for mailman id 7973;
 Fri, 16 Oct 2020 11:21:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTNnx-0001qd-5V
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 11:21:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cdceb64d-1d54-4038-b37f-4e2fa5408c30;
 Fri, 16 Oct 2020 11:21:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTNnu-0002NF-NI; Fri, 16 Oct 2020 11:21:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTNnu-00018f-Dn; Fri, 16 Oct 2020 11:21:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTNnu-0006c8-DI; Fri, 16 Oct 2020 11:21:46 +0000
X-Inumbo-ID: cdceb64d-1d54-4038-b37f-4e2fa5408c30
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=C8ZT8EIiOZFHXOjK+g+PBEtMWG5pI0rIWp84SU6x1pY=; b=s6991JNDlRKr8rX2BzVvfFA7UO
	PFkpOEO7yOkqFDB0wQP0g9Rq0vBOzTovTMZPLs14e8MFffDZT5N3X9arrEBdcZsO6hwaMcwm9uMhe
	z12G5VSqFCo8FaNElKsiW+0asvV3yAg3CVm6SPxmWfaW10N+UfLAvMsFSThiSA4/B9Ek=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155880-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155880: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ee2e66674f36b6d27a95f4ddf27226905cc63a4
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 11:21:46 +0000

flight 155880 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155880/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     19 guest-start.2            fail REGR. vs. 155788

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155788
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155788
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155788
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155788
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155788
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155788
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155788
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155788
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155788
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  6ee2e66674f36b6d27a95f4ddf27226905cc63a4
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155788  2020-10-14 01:52:30 Z    2 days
Failing since        155810  2020-10-14 16:08:32 Z    1 days    4 attempts
Testing same since   155880  2020-10-16 01:07:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Chen Yu <yu.c.chen@intel.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Michal Orzel <michal.orzel@arm.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   25849c8b16..6ee2e66674  6ee2e66674f36b6d27a95f4ddf27226905cc63a4 -> master


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 11:24:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 11:24:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7994.21225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNqk-0002Oz-Fs; Fri, 16 Oct 2020 11:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7994.21225; Fri, 16 Oct 2020 11:24:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNqk-0002Os-CW; Fri, 16 Oct 2020 11:24:42 +0000
Received: by outflank-mailman (input) for mailman id 7994;
 Fri, 16 Oct 2020 11:24:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iki1=DX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kTNqi-0002Ni-Nl
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 11:24:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd3cc0af-13bd-4967-a909-7cf644a0d3c1;
 Fri, 16 Oct 2020 11:24:36 +0000 (UTC)
X-Inumbo-ID: dd3cc0af-13bd-4967-a909-7cf644a0d3c1
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dd3cc0af-13bd-4967-a909-7cf644a0d3c1;
	Fri, 16 Oct 2020 11:24:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602847476;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Wc7h+L9Y71hUFTBvgSZme39cEt4VKmlA0CE+gNOThGI=;
  b=Cjvb8MHfksRO28esga8CjOyiIZFFgoe/1nNJlzmy93Y+XffU7uFxWeIs
   9i/+JtGkhxVVuvyi69IYXmuEM9iYyY3ia1FZQY6X94gBHNDIRin+kISIh
   odrHuiUlu3xY7zeIbzRvo7H07rQTv/Z9A0QLnxzS+Oj4tF/JrZOJ8oBa5
   4=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 4mokySQfk/oL/3RET0CM0PQf6p0StqAOodiHG6zwUUGcuTdbYcL6lrKhxHnRpiu5yewEIdaUzv
 vMY4jB8Ydswy7aWNiYoeua8xWBONC/+Gh9G4AEKs1+mKHnOf7tiNlaI2viq2q2GfZwJtcFZPMC
 bkoQfMUYnJa87evcAiut+zjb1DAP8UddKhk5/9XxW2CGpp+jQBVzznrDbn/XWMgQoMfzKAYfkd
 9MepNGnbi9LO8FMty6kPbiUwSMBTn7gW2O6h4F90NUYLWBfsndcBoKLBNcLnKPrTBFF+GW1/cG
 1Rc=
X-SBRS: 2.5
X-MesageID: 29489463
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,382,1596513600"; 
   d="scan'208";a="29489463"
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
 <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
 <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
 <948f0753-561b-15e8-bf8c-52ff507133d2@suse.com>
 <dbd19cd0-316a-c62f-de7b-627ada4df350@citrix.com>
 <00ba5885-5ee6-c772-a72e-15431cd3b1f4@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <09049e52-548b-3ffc-5259-b1ffc26413a5@citrix.com>
Date: Fri, 16 Oct 2020 12:24:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <00ba5885-5ee6-c772-a72e-15431cd3b1f4@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 16/10/2020 12:03, Jan Beulich wrote:
> On 16.10.2020 12:58, Andrew Cooper wrote:
>> On 15/10/2020 08:27, Jan Beulich wrote:
>>> On 14.10.2020 20:00, Andrew Cooper wrote:
>>>> On 13/10/2020 16:51, Jan Beulich wrote:
>>>>> On 12.10.2020 15:49, Andrew Cooper wrote:
>>>>>> All interrupts and exceptions pass a struct cpu_user_regs up into C.  This
>>>>>> contains the legacy vm86 fields from 32bit days, which are beyond the
>>>>>> hardware-pushed frame.
>>>>>>
>>>>>> Accessing these fields is generally illegal, as they are logically out of
>>>>>> bounds for anything other than an interrupt/exception hitting ring1/3 code.
>>>>>>
>>>>>> Unfortunately, the #DF handler uses these fields as part of preparing the
>>>>>> state dump, and being IST, accesses the adjacent stack frame.
>>>>>>
>>>>>> This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
>>>>>> layout to support shadow stacks" repositioned the #DF stack to be adjacent to
>>>>>> the guard page, which turns this OoB write into a fatal pagefault:
>>>>>>
>>>>>>   (XEN) *** DOUBLE FAULT ***
>>>>>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>>>>>   (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
>>>>>>   (XEN) CPU:    4
>>>>>>   (XEN) RIP:    e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
>>>>>>   (XEN) RFLAGS: 0000000000050086   CONTEXT: hypervisor (d1v0)
>>>>>>   ...
>>>>>>   (XEN) Xen call trace:
>>>>>>   (XEN)    [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
>>>>>>   (XEN)    [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
>>>>>>   (XEN)    [<ffff82d04039acd7>] F double_fault+0x107/0x110
>>>>>>   (XEN)
>>>>>>   (XEN) Pagetable walk from ffff830236f6d008:
>>>>>>   (XEN)  L4[0x106] = 80000000bfa9b063 ffffffffffffffff
>>>>>>   (XEN)  L3[0x008] = 0000000236ffd063 ffffffffffffffff
>>>>>>   (XEN)  L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
>>>>>>   (XEN)  L1[0x16d] = 8000000236f6d161 ffffffffffffffff
>>>>>>   (XEN)
>>>>>>   (XEN) ****************************************
>>>>>>   (XEN) Panic on CPU 4:
>>>>>>   (XEN) FATAL PAGE FAULT
>>>>>>   (XEN) [error_code=0003]
>>>>>>   (XEN) Faulting linear address: ffff830236f6d008
>>>>>>   (XEN) ****************************************
>>>>>>   (XEN)
>>>>>>
>>>>>> and rendering the main #DF analysis broken.
>>>>>>
>>>>>> The proper fix is to delete cpu_user_regs.es and later, so no
>>>>>> interrupt/exception path can access OoB, but this needs disentangling from the
>>>>>> PV ABI first.
>>>>>>
>>>>>> Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks")
>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> Is it perhaps worth also saying explicitly that the other IST
>>>>> stacks don't suffer the same problem because show_registers()
>>>>> makes a local copy of the passed-in struct? (Doing so also
>>>>> for #DF would apparently be an alternative solution.)
>>>> They're not safe.  They merely don't explode.
>>>>
>>>> https://lore.kernel.org/xen-devel/1532546157-5974-1-git-send-email-andrew.cooper3@citrix.com/
>>>> was "fixed" by CET-SS turning the guard page from not present to
>>>> read-only, but the same CET-SS series swapped #DB for #DF when it comes
>>>> to the single OoB write problem case.
>>> I see. While indeed I didn't pay attention to the OoB read aspect,
>>> me saying "the other IST stacks don't suffer the same problem" was
>>> still correct, wasn't it? Anyway - not a big deal.
>> I've tweaked the commit message to make this more clear.
>>
>> --8<---
>> Accessing these fields is generally illegal, as they are logically out of
>> bounds for anything other than an interrupt/exception hitting ring1/3 code.
>>
>> show_registers() unconditionally reads these fields, but the content is
>> discarded before use.  This is benign right now, as all parts of the
>> stack are readable, including the guard pages.
>>
>> However, read_registers() in the #DF handler writes to these fields as
>> part of preparing the state dump, and being IST, hits the adjacent
>> stack frame.
>> --8<--
> Thanks, lgtm.

On a tangent, what are your views WRT backport beyond 4.14?

Back then, it was #DB which was adjacent to the guard frame (which was
not present), but it doesn't use show_registers() by default, so I think
the problem is mostly hidden.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 11:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 11:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.7997.21236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNxi-0003J9-6D; Fri, 16 Oct 2020 11:31:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 7997.21236; Fri, 16 Oct 2020 11:31:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTNxi-0003J2-3P; Fri, 16 Oct 2020 11:31:54 +0000
Received: by outflank-mailman (input) for mailman id 7997;
 Fri, 16 Oct 2020 11:31:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YFnv=DX=ravnborg.org=sam@srs-us1.protection.inumbo.net>)
 id 1kTNxh-0003Ix-Ej
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 11:31:53 +0000
Received: from asavdk4.altibox.net (unknown [109.247.116.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03ad2bad-6f04-421a-88cd-5c371b00c56b;
 Fri, 16 Oct 2020 11:31:51 +0000 (UTC)
Received: from ravnborg.org (unknown [188.228.123.71])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by asavdk4.altibox.net (Postfix) with ESMTPS id 786F380735;
 Fri, 16 Oct 2020 13:31:42 +0200 (CEST)
Date: Fri, 16 Oct 2020 13:31:41 +0200
From: Sam Ravnborg <sam@ravnborg.org>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment
 interfaces
Message-ID: <20201016113141.GA1125266@ravnborg.org>
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-10-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201015123806.32416-10-tzimmermann@suse.de>
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=fu7ymmwf c=1 sm=1 tr=0
	a=S6zTFyMACwkrwXSdXUNehg==:117 a=S6zTFyMACwkrwXSdXUNehg==:17
	a=kj9zAlcOel0A:10 a=0A2xud3A4b7FAmx5SMIA:9 a=H9YgUdQFGw372Hqm:21
	a=tLNtb35BjFYYr1pq:21 a=CjuIK1q_8ugA:10

Hi Thomas.

On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote:
> To do framebuffer updates, one needs memcpy from system memory and a
> pointer-increment function. Add both interfaces with documentation.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> ---
>  include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------
>  1 file changed, 62 insertions(+), 10 deletions(-)
> 
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index 2e8bbecb5091..6ca0f304dda2 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -32,6 +32,14 @@
>   * accessing the buffer. Use the returned instance and the helper functions
>   * to access the buffer's memory in the correct way.
>   *
> + * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> + * actually independent from the dma-buf infrastructure. When sharing buffers
> + * among devices, drivers have to know the location of the memory to access
> + * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> + * solves this problem for dma-buf and its users. If other drivers or
> + * sub-systems require similar functionality, the type could be generalized
> + * and moved to a more prominent header file.
> + *
>   * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
>   * considered bad style. Rather then accessing its fields directly, use one
>   * of the provided helper functions, or implement your own. For example,
> @@ -51,6 +59,14 @@
>   *
>   *	dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
>   *
> + * Instances of struct dma_buf_map do not have to be cleaned up, but
> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> + * always refer to system memory.
> + *
> + * .. code-block:: c
> + *
> + *	dma_buf_map_clear(&map);
> + *
>   * Test if a mapping is valid with either dma_buf_map_is_set() or
>   * dma_buf_map_is_null().
>   *
> @@ -73,17 +89,19 @@
>   *	if (dma_buf_map_is_equal(&sys_map, &io_map))
>   *		// always false
>   *
> - * Instances of struct dma_buf_map do not have to be cleaned up, but
> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
> - * always refer to system memory.
> + * A set up instance of struct dma_buf_map can be used to access or manipulate
> + * the buffer memory. Depending on the location of the memory, the provided
> + * helpers will pick the correct operations. Data can be copied into the memory
> + * with dma_buf_map_memcpy_to(). The address can be manipulated with
> + * dma_buf_map_incr().
>   *
> - * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
> - * actually independent from the dma-buf infrastructure. When sharing buffers
> - * among devices, drivers have to know the location of the memory to access
> - * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
> - * solves this problem for dma-buf and its users. If other drivers or
> - * sub-systems require similar functionality, the type could be generalized
> - * and moved to a more prominent header file.
> + * .. code-block:: c
> + *
> + *	const void *src = ...; // source buffer
> + *	size_t len = ...; // length of src
> + *
> + *	dma_buf_map_memcpy_to(&map, src, len);
> + *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
>   */
>  
>  /**
> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
>  	}
>  }
>  
> +/**
> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
> + * @dst:	The dma-buf mapping structure
> + * @src:	The source buffer
> + * @len:	The number of byte in src
> + *
> + * Copies data into a dma-buf mapping. The source buffer is in system
> + * memory. Depending on the buffer's location, the helper picks the correct
> + * method of accessing the memory.
> + */
> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
> +{
> +	if (dst->is_iomem)
> +		memcpy_toio(dst->vaddr_iomem, src, len);
> +	else
> +		memcpy(dst->vaddr, src, len);

sparc64 needs "#include <linux/string.h>" to build, as it does not get
this via io.h.

	Sam

> +}
> +
> +/**
> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
> + * @map:	The dma-buf mapping structure
> + * @incr:	The number of bytes to increment
> + *
> + * Increments the address stored in a dma-buf mapping. Depending on the
> + * buffer's location, the correct value will be updated.
> + */
> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
> +{
> +	if (map->is_iomem)
> +		map->vaddr_iomem += incr;
> +	else
> +		map->vaddr += incr;
> +}
> +
>  #endif /* __DMA_BUF_MAP_H__ */
> -- 
> 2.28.0


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 11:34:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 11:34:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8011.21292 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTO0V-0003hq-42; Fri, 16 Oct 2020 11:34:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8011.21292; Fri, 16 Oct 2020 11:34:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTO0V-0003hj-18; Fri, 16 Oct 2020 11:34:47 +0000
Received: by outflank-mailman (input) for mailman id 8011;
 Fri, 16 Oct 2020 11:34:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9cG/=DX=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kTO0T-0003gv-4l
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 11:34:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e7ace9f-9464-4395-977e-fd3266d0afc0;
 Fri, 16 Oct 2020 11:34:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 936D2AD7C;
 Fri, 16 Oct 2020 11:34:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Fri, 16 Oct 2020 13:34:40 +0200
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Sam Ravnborg <sam@ravnborg.org>
Cc: luben.tuikov@amd.com, airlied@linux.ie, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk,
 melissa.srw@gmail.com, ray.huang@amd.com, kraxel@redhat.com,
 emil.velikov@collabora.com, linux-samsung-soc@vger.kernel.org,
 jy0922.shim@samsung.com, lima@lists.freedesktop.org,
 oleksandr_andrushchenko@epam.com, krzk@kernel.org, steven.price@arm.com,
 linux-rockchip@lists.infradead.org, kgene@kernel.org,
 alyssa.rosenzweig@collabora.com, linux+etnaviv@armlinux.org.uk,
 spice-devel@lists.freedesktop.org, bskeggs@redhat.com,
 etnaviv@lists.freedesktop.org, hdegoede@redhat.com,
 xen-devel@lists.xenproject.org, virtualization@lists.linux-foundation.org,
 sean@poorly.run, apaneers@amd.com, linux-arm-kernel@lists.infradead.org,
 linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
 tomeu.vizoso@collabora.com, sw0312.kim@samsung.com, hjc@rock-chips.com,
 kyungmin.park@samsung.com, miaoqinglang@huawei.com, yuq825@gmail.com,
 alexander.deucher@amd.com, linux-media@vger.kernel.org,
 christian.koenig@amd.com
Subject: Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <20201016133440.65cadb6d@linux-uq9g>
In-Reply-To: <20201016105854.GB1042954@ravnborg.org>
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-11-tzimmermann@suse.de>
 <20201016105854.GB1042954@ravnborg.org>
Organization: SUSE Software Solutions Germany GmbH
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi

On Fri, 16 Oct 2020 12:58:54 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:

> Hi Thomas.
>=20
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> >=20
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the rsp
> > fb_sys_ or fb_cfb_ functions.
> >=20
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> >=20
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the rsp
> > code from both, bochs and fbdev.
> >=20
> > v4:
> > 	* move dma_buf_map changes into separate patch (Daniel)
> > 	* TODO list: comment on fbdev updates (Daniel)
>=20
> I have been offline for a while so have not followed all the threads on
> this. So my comments below may well be addressed, but I failed to see
> it.
>=20
> If the point about fb_sync is already addressed/considered then:
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>

It has not been brought up yet. See below.

>=20
>=20
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > ---
> >  Documentation/gpu/todo.rst        |  19 ++-
> >  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
> >  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
> >  include/drm/drm_mode_config.h     |  12 --
> >  4 files changed, 220 insertions(+), 29 deletions(-)
> >=20
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> >  ------------------------------------------------
> > =20
> >  Most drivers can use drm_fbdev_generic_setup(). Driver have to impleme=
nt
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulati=
on
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev
> > emulation +expected the framebuffer in system memory or system-like
> > memory. By employing +struct dma_buf_map, drivers with frambuffers in I=
/O
> > memory can be supported +as well.
> > =20
> >  Contact: Maintainer of the driver you plan to convert
> > =20
> >  Level: Intermediate
> > =20
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> >  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> >  -----------------------------------------------------------------
> > =20
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c
> > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5
> > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> >  	bochs->dev->mode_config.preferred_depth =3D 24;
> >  	bochs->dev->mode_config.prefer_shadow =3D 0;
> >  	bochs->dev->mode_config.prefer_shadow_fbdev =3D 1;
> > -	bochs->dev->mode_config.fbdev_use_iomem =3D true;
> >  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order =3D
> > true;=20
> >  	bochs->dev->mode_config.funcs =3D &bochs_mode_funcs;
> Good to see this workaround gone again!
>=20
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c
> > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 1006=
44
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct
> > work_struct *work) }
> > =20
> >  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper
> > *fb_helper,
> > -					  struct drm_clip_rect *clip)
> > +					  struct drm_clip_rect *clip,
> > +					  struct dma_buf_map *dst)
> >  {
> >  	struct drm_framebuffer *fb =3D fb_helper->fb;
> >  	unsigned int cpp =3D fb->format->cpp[0];
> >  	size_t offset =3D clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> >  	void *src =3D fb_helper->fbdev->screen_buffer + offset;
> > -	void *dst =3D fb_helper->buffer->map.vaddr + offset;
> >  	size_t len =3D (clip->x2 - clip->x1) * cpp;
> >  	unsigned int y;
> > =20
> > -	for (y =3D clip->y1; y < clip->y2; y++) {
> > -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > -			memcpy(dst, src, len);
> > -		else
> > -			memcpy_toio((void __iomem *)dst, src, len);
> > +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip
> > rect */=20
> > +	for (y =3D clip->y1; y < clip->y2; y++) {
> > +		dma_buf_map_memcpy_to(dst, src, len);
> > +		dma_buf_map_incr(dst, fb->pitches[0]);
> >  		src +=3D fb->pitches[0];
> > -		dst +=3D fb->pitches[0];
> >  	}
> >  }
> > =20
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct
> > work_struct *work) ret =3D drm_client_buffer_vmap(helper->buffer, &map);
> >  			if (ret)
> >  				return;
> > -			drm_fb_helper_dirty_blit_real(helper,
> > &clip_copy);
> > +			drm_fb_helper_dirty_blit_real(helper,
> > &clip_copy, &map); }
> > +
> >  		if (helper->fb->funcs->dirty)
> >  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> >  						 &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info
> > *info, }
> >  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> > =20
> So far everything looks good.
>=20
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user
> > *buf,
> > +				      size_t count, loff_t *ppos)
> > +{
> > +	unsigned long p =3D *ppos;
> > +	u8 *dst;
> > +	u8 __iomem *src;
> > +	int c, err =3D 0;
> > +	unsigned long total_size;
> > +	unsigned long alloc_size;
> > +	ssize_t ret =3D 0;
> > +
> > +	if (info->state !=3D FBINFO_STATE_RUNNING)
> > +		return -EPERM;
> > +
> > +	total_size =3D info->screen_size;
> > +
> > +	if (total_size =3D=3D 0)
> > +		total_size =3D info->fix.smem_len;
> > +
> > +	if (p >=3D total_size)
> > +		return 0;
> > +
> > +	if (count >=3D total_size)
> > +		count =3D total_size;
> > +
> > +	if (count + p > total_size)
> > +		count =3D total_size - p;
> > +
> > +	src =3D (u8 __iomem *)(info->screen_base + p);
> screen_base is a char __iomem * - so this cast looks semi redundant.

I took the basic code from fbdev. Maybe there's a reason for the cast;
otherwise I'll remove it.

>=20
> > +
> > +	alloc_size =3D min(count, PAGE_SIZE);
> > +
> > +	dst =3D kmalloc(alloc_size, GFP_KERNEL);
> > +	if (!dst)
> > +		return -ENOMEM;
> > +
> Same comment as below about fb_sync.
>=20
>=20
> > +	while (count) {
> > +		c =3D min(count, alloc_size);
> > +
> > +		memcpy_fromio(dst, src, c);
> > +		if (copy_to_user(buf, dst, c)) {
> > +			err =3D -EFAULT;
> > +			break;
> > +		}
> > +
> > +		src +=3D c;
> > +		*ppos +=3D c;
> > +		buf +=3D c;
> > +		ret +=3D c;
> > +		count -=3D c;
> > +	}
> > +
> > +	kfree(dst);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char
> > __user *buf,
> > +				       size_t count, loff_t *ppos)
> > +{
> > +	unsigned long p =3D *ppos;
> > +	u8 *src;
> > +	u8 __iomem *dst;
> > +	int c, err =3D 0;
> > +	unsigned long total_size;
> > +	unsigned long alloc_size;
> > +	ssize_t ret =3D 0;
> > +
> > +	if (info->state !=3D FBINFO_STATE_RUNNING)
> > +		return -EPERM;
> > +
> > +	total_size =3D info->screen_size;
> > +
> > +	if (total_size =3D=3D 0)
> > +		total_size =3D info->fix.smem_len;
> > +
> > +	if (p > total_size)
> > +		return -EFBIG;
> > +
> > +	if (count > total_size) {
> > +		err =3D -EFBIG;
> > +		count =3D total_size;
> > +	}
> > +
> > +	if (count + p > total_size) {
> > +		/*
> > +		 * The framebuffer is too small. We do the
> > +		 * copy operation, but return an error code
> > +		 * afterwards. Taken from fbdev.
> > +		 */
> > +		if (!err)
> > +			err =3D -ENOSPC;
> > +		count =3D total_size - p;
> > +	}
> > +
> > +	alloc_size =3D min(count, PAGE_SIZE);
> > +
> > +	src =3D kmalloc(alloc_size, GFP_KERNEL);
> > +	if (!src)
> > +		return -ENOMEM;
> > +
> > +	dst =3D (u8 __iomem *)(info->screen_base + p);
> > +
>=20
> The fbdev variant calls the fb_sync callback here.
> nouveau and gma500 implement the fb_sync callback - but no one else.

These drivers implement some form of HW acceleration. If they have a HW
blit/draw/etc op queued up, they have to wait for it to complete. Otherwise,
the copied memory would contain an old state. The fb_sync acts as the fence.

Fbdev only uses software copying, so the fb_sync is not required.

From what I heard, the HW acceleration is not useful on modern machines. I
hope to convert more drivers to generic fbdev after these patches for
I/O-memory support have been merged.

>=20
>=20
> > +	while (count) {
> > +		c =3D min(count, alloc_size);
> > +
> > +		if (copy_from_user(src, buf, c)) {
> > +			err =3D -EFAULT;
> > +			break;
> > +		}
> > +		memcpy_toio(dst, src, c);
> When we rewrite this part to use dma_buf_map_memcpy_to() then we can
> merge the two variants of helper_{sys,cfb}_read()?
> Which is part of the todo - so OK

I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does
an additional copy between system memory and I/O memory. Of course, the top
and bottom of both functions are similar and could probably be shared.

Best regards
Thomas

> > +
> > +		dst +=3D c;
> > +		*ppos +=3D c;
> > +		buf +=3D c;
> > +		ret +=3D c;
> > +		count -=3D c;
> > +	}
> > +
> > +	kfree(src);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	return ret;
> > +}
> > +
> >  /**
> >   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> >   * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *inf=
o,
> > struct vm_area_struct *vma) return -ENODEV;
> >  }
> > =20
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > +				 size_t count, loff_t *ppos)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		return drm_fb_helper_sys_read(info, buf, count, ppos);
> > +	else
> > +		return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > +				  size_t count, loff_t *ppos)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		return drm_fb_helper_sys_write(info, buf, count, ppos);
> > +	else
> > +		return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > +				  const struct fb_fillrect *rect)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		drm_fb_helper_sys_fillrect(info, rect);
> > +	else
> > +		drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > +				  const struct fb_copyarea *area)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		drm_fb_helper_sys_copyarea(info, area);
> > +	else
> > +		drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > +				   const struct fb_image *image)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		drm_fb_helper_sys_imageblit(info, image);
> > +	else
> > +		drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> >  static const struct fb_ops drm_fbdev_fb_ops = {
> >  	.owner		= THIS_MODULE,
> >  	DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> >  	.fb_release	= drm_fbdev_fb_release,
> >  	.fb_destroy	= drm_fbdev_fb_destroy,
> >  	.fb_mmap	= drm_fbdev_fb_mmap,
> > -	.fb_read	= drm_fb_helper_sys_read,
> > -	.fb_write	= drm_fb_helper_sys_write,
> > -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> > -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> > -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> > +	.fb_read	= drm_fbdev_fb_read,
> > +	.fb_write	= drm_fbdev_fb_write,
> > +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> > +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> > +	.fb_imageblit	= drm_fbdev_fb_imageblit,
> >  };
> > 
> >  static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> >  	 */
> >  	bool prefer_shadow_fbdev;
> > 
> > -	/**
> > -	 * @fbdev_use_iomem:
> > -	 *
> > -	 * Set to true if framebuffer reside in iomem.
> > -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> > -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > -	 *
> > -	 * FIXME: This should be replaced with a per-mapping is_iomem
> > -	 * flag (like ttm does), and then used everywhere in fbdev code.
> > -	 */
> > -	bool fbdev_use_iomem;
> > -
> >  	/**
> >  	 * @quirk_addfb_prefer_xbgr_30bpp:
> >  	 *
> > -- 
> > 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 11:37:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 11:37:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8029.21365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTO35-0004BX-BA; Fri, 16 Oct 2020 11:37:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8029.21365; Fri, 16 Oct 2020 11:37:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTO35-0004BP-7X; Fri, 16 Oct 2020 11:37:27 +0000
Received: by outflank-mailman (input) for mailman id 8029;
 Fri, 16 Oct 2020 11:37:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4YNt=DX=epam.com=prvs=8558b98f7f=artem_mygaiev@srs-us1.protection.inumbo.net>)
 id 1kTO33-0004BB-PW
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 11:37:25 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd4ceea3-ba98-4069-85c4-8ddfa7638720;
 Fri, 16 Oct 2020 11:37:24 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09GBZ72k010829; Fri, 16 Oct 2020 11:37:23 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2106.outbound.protection.outlook.com [104.47.18.106])
 by mx0a-0039f301.pphosted.com with ESMTP id 346f2vchbk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 16 Oct 2020 11:37:23 +0000
Received: from AM6PR03MB3687.eurprd03.prod.outlook.com (2603:10a6:209:30::16)
 by AM7PR03MB6644.eurprd03.prod.outlook.com (2603:10a6:20b:1b6::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.24; Fri, 16 Oct
 2020 11:37:20 +0000
Received: from AM6PR03MB3687.eurprd03.prod.outlook.com
 ([fe80::7819:980d:4992:f090]) by AM6PR03MB3687.eurprd03.prod.outlook.com
 ([fe80::7819:980d:4992:f090%6]) with mapi id 15.20.3477.025; Fri, 16 Oct 2020
 11:37:20 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZTtq7kWuIG3n3fZf5b+Ew/f5NRDikfttnFrxaZHhaYZeQ/iEIX8lJkelb3b3HrxeWG5uKrXMSuzELBTSwwplQ5dHQW4Yqb8t1nqtMtGu6EkCGwNMQVJMhyHIiOpioc8rgeYtg9j0faQFmlXLZwuJvwwmVoH8eYB4nvLseBnpeoR95gc/WY8xr43157VWNqcULCcvZHLEZgFzckLHCfRKEwciJeAji1iyPjn8MGvf2GsG2TuqCqBviRsxVf9R/R4schm/iCb4TgnYpf1X/PT+IybR9JHDdaLTs6+RH4KQL268WDDKqzYkW35Vy/ZZft5JM/H0ShIAnpXZ80EL7/JHYA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p7bwsr9BNPpt1G2ulhrUtkoAEtVHKgN9w0+u0WfepFs=;
 b=dkUyRjmxKXlulgokjoKUGLH4T0P2/TKdnkaVH7FgH9xGjUPuKahprGPPoqngA6avjrsYtkFfECv0mDTuSULLvzBE5MvxVtivijiflwX/y0QKgCuZhQ70NkSFbMiiLChyU9yQYCdWvpY5UhAOPcAvS0O7fWq5X8qOrjBvpSobYDikFX0jfT81+LpjOjcqxsPLpzKHEXNIsKCJFFlbhOgva7kzqrhvwLAo9PNYovlFoXA3CfCeHNFJj/YOxHdvrcla9+w8EnXY+ICry/sCiB5ulRyd8UH6jCqxYhSqS7Eo0TlcQ8onq5P5HhnrArTtzwNsaVQwTddr/GtESjIZMOzfMw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p7bwsr9BNPpt1G2ulhrUtkoAEtVHKgN9w0+u0WfepFs=;
 b=YxqVcruOtJ6OT0YbamHYi/5UuRRJ3Z3yXaKQFjuUo7MfqzNU9XBRjenA3jjYFgN4lG40AmcQKce+NobPSxOgE5/rMdQMb3RdYOs8tPBjDRBHzYl/AShqSrDHXz/7i9iwkrozLZC0EMjPZHMZGkL928ElpyW1EAYbP1wIWlSMVB5dmjHdEp1PlxekeN5zUIvy8VYoxoEhxJU7G3pcoY+pUTVhwA3p93uXN5IgGXNqUqLMwL0zNAWew+6mKe2+o4Lle+T6tzpu7jBxhcEQgSaNUHR6Ch3rdTD9J67/AfPH9Rwkgt92qcxXe7FDR8Xyp49gCDZMXldM+afrI3s9U0nZhQ==
Received: from AM6PR03MB3687.eurprd03.prod.outlook.com (2603:10a6:209:30::16)
 by AM7PR03MB6644.eurprd03.prod.outlook.com (2603:10a6:20b:1b6::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.24; Fri, 16 Oct
 2020 11:37:20 +0000
Received: from AM6PR03MB3687.eurprd03.prod.outlook.com
 ([fe80::7819:980d:4992:f090]) by AM6PR03MB3687.eurprd03.prod.outlook.com
 ([fe80::7819:980d:4992:f090%6]) with mapi id 15.20.3477.025; Fri, 16 Oct 2020
 11:37:20 +0000
From: Artem Mygaiev <Artem_Mygaiev@epam.com>
To: Julien Grall <julien@xen.org>,
        Anastasiia Lukianenko
	<Anastasiia_Lukianenko@epam.com>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>
CC: "vicooodin@gmail.com" <vicooodin@gmail.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "committers@xenproject.org" <committers@xenproject.org>,
        "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: RE: Xen Coding style and clang-format
Thread-Topic: Xen Coding style and clang-format
Thread-Index: 
 AQHWlwq4nKYEhMN38U+xmvwRsutq+amA8joAgAAHUgCAAXyKAIAAENkAgAlxpgCACF7sgIABM44AgASIOQCAAAt9gIAAE8Bw
Date: Fri, 16 Oct 2020 11:37:20 +0000
Message-ID: 
 <AM6PR03MB3687A99424FA9FD062F5FE4BF4030@AM6PR03MB3687.eurprd03.prod.outlook.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
 <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
 <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com>
 <b4d7e9a7-6c25-1f7f-86ce-867083beb81a@suse.com>
 <4d4f351b152df2c50e18676ccd6ab6b4dc667801.camel@epam.com>
 <5bd7cc00-c4c9-0737-897d-e76f22e2fd5b@xen.org>
In-Reply-To: <5bd7cc00-c4c9-0737-897d-e76f22e2fd5b@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [194.53.196.52]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 2503e26b-e40f-4956-4daa-08d871c7d7fa
x-ms-traffictypediagnostic: AM7PR03MB6644:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM7PR03MB6644562B10C55A1F1EDB95EAF4030@AM7PR03MB6644.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 VRvea6+zUZ4Nrtg7kZ1dkhCa93cW+g3bd2zgsiRehBe9EIbge09PrmB2hBYzAzTuTXgyE8EkYns3zEJE1E3fFbTqHxioDk8k2097TbjCWQd/mQ77CtFzOKC94DKFVf1gYZEYYS48F0s9eOV5woZHUf4dJNnTJ/nCBIE+AnnrVH1ZV53lGcAELtW4ck6WT4PKskjXrfLe3oBmcX2nwLgONk/CakOYDr/OmGps/FNhvZJfU221HaYaPS9mN4/XSQP4dx5HK2lIA67CaieNN785netw1H1B5bPnv8VIwTrZPCkbMA3lOfAr8cbqAIi+zOxOu1Fi5VzFm9Dwlldz8lu05w==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR03MB3687.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(39860400002)(376002)(366004)(55016002)(478600001)(110136005)(33656002)(186003)(52536014)(26005)(54906003)(316002)(53546011)(5660300002)(107886003)(8676002)(71200400001)(9686003)(86362001)(66556008)(2906002)(7696005)(76116006)(64756008)(66946007)(66446008)(66476007)(83380400001)(6506007)(8936002)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 mgoXCRGZ52brfGygO9plHy1egf2hL5TxIZsqyqTfHbr7zk9PGFXPDs69yqoGlzvP3jbd0Td79g9xzQk/hRKnToX7w+lVmqbCKXPdi7smY+DDOwhxJ6qoEl1aoPEYX57l5nqk+398X4EfVcgvGmp3XLaYQBUu5u8eFwJXdWbP5N/fbkLA3PKsd09OaMUCU11vmEvB2MQwBTrmNB7Y2d/SOKGZh+o6XOTTkw52+qmjtwYHLqmI3JWCzVa0on8g0fnsOt1nF9tbkzo+kYxHw+UKZwofjk8uHpY74Tle6IaJJ0i96wYLXlMIOW4UiRUIQJjVcUJuOtU4hgSACT0PlBsnF6MvW3MqksxwYJ/Mbbl7gbXhcPHEWtUe6Hn7HeJPZo7d1mly5g7MhEYRox9Zi1w2Kqu/NMWuLZ8onx0UzAyveVVzIiaNLsIUxnIm6dxzhOeZznG2HSs3N3Bfdy8jBm/iW64K9idLHZgwTiULoIsnyW1pFK9cxb1GrizcSQlBlabZSHDhn4cec7//kpXf37rwZ9b4ECLPb77uqTN87U+m4iLXGP/Jw0pNbfpGuoz530xWqeqkyQ1jFYs5Wu8DhxXXniVoZ7rM/zd7wwecV8/rvE9ztluaSUT5wcZLaGHm+7vPLObKS7v+7xUoDyag3Hmpjg==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM6PR03MB3687.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2503e26b-e40f-4956-4daa-08d871c7d7fa
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Oct 2020 11:37:20.7781
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: pPeAMFpZGm3XvZsQHmwzXe3rZEosLlzUEqJnSG02L9FHr7Bk1HytmjZMusGQyWcCGbEKRdMxZeG/HabUeVcpJg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR03MB6644
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.687
 definitions=2020-10-16_06:2020-10-16,2020-10-16 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 adultscore=0 suspectscore=0
 clxscore=1011 mlxlogscore=999 priorityscore=1501 bulkscore=0
 lowpriorityscore=0 spamscore=0 phishscore=0 malwarescore=0 impostorscore=0
 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010160084

Hi,

-----Original Message-----
From: Julien Grall <julien@xen.org> 
Sent: Friday, 16 October 2020, 13:24
To: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>; jbeulich@suse.com; George.Dunlap@citrix.com
Cc: Artem Mygaiev <Artem_Mygaiev@epam.com>; vicooodin@gmail.com; xen-devel@lists.xenproject.org; committers@xenproject.org; viktor.mitin.19@gmail.com; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: Xen Coding style and clang-format

> Hi,
>
> On 16/10/2020 10:42, Anastasiia Lukianenko wrote:
> > Thanks for your advice, which helped me improve the checker. I
> > understand that there are still some disagreements about the
> > formatting, but as I said before, the checker cannot be very flexible
> > and take into account all the author's ideas.
>
> I am not sure what you refer to by "author's ideas" here. The checker
> should follow a coding style (Xen or a modified version):
>     - Anything not following the coding style should be considered as
> invalid.
>     - Anything not written in the coding style should be left
> untouched/uncommented by the checker.
>

Agree

> > I suggest using the
> > checker not as a mandatory check, but as an indication to the author of
> > possible formatting errors that he can correct or ignore.
>
> I can understand that short term we would want to make it optional so
> either the coding style or the checker can be tuned. But I don't think
> this is an ideal situation to be in long term.
>
> The goal of the checker is to automatically verify the coding style and
> get it consistent across Xen. If we make it optional or it is
> "unreliable", then we lose the two benefits and possibly increase the
> contributor frustration as the checker would say A but we need B.
>
> Therefore, we need to make sure the checker and the coding style match.
> I don't have any opinions on the approach to achieve that.

Of the list of remaining issues from Anastasiia, it looks like only items 5
and 6 conform to the official Xen coding style. As for the remaining ones,
I would suggest disabling those that are controversial (items 1,
2, 4, 8, 9, 10). Maybe we want to have further discussion on refining the
coding style; we can use these as a starting point. If we are open to
extending the style now, I would suggest adding rules that seem to be
meaningful (items 3, 7) and keeping them in the checker.

-- Artem


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 11:55:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 11:55:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8040.21397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOKH-00065x-2E; Fri, 16 Oct 2020 11:55:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8040.21397; Fri, 16 Oct 2020 11:55:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOKG-00065q-VJ; Fri, 16 Oct 2020 11:55:12 +0000
Received: by outflank-mailman (input) for mailman id 8040;
 Fri, 16 Oct 2020 11:55:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTOKF-00065l-IH
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 11:55:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5f43027-5cc0-47ad-abe5-8d0bfc8aa616;
 Fri, 16 Oct 2020 11:55:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 98E3AAF6C;
 Fri, 16 Oct 2020 11:55:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602849309;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zlJi1Cesr381khVhjfq6NZWQkLO3lRKR7GjIYz+O338=;
	b=qgg5JQawlbYHfwfXdayDKu/IKj3R53gC1kgziS6d8BynttBWqS4jihvWO1vrQrr1/pY0wI
	KXmcT5632CnW9feLePjHqNo8j8eXHzOph8RroOC7NA4qg2S6A4w9tgtehkWsXY8A9iLHDD
	tlbej6oqhQqtCgVxVhBZOKUuOK0AOA4=
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
 <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
 <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
 <948f0753-561b-15e8-bf8c-52ff507133d2@suse.com>
 <dbd19cd0-316a-c62f-de7b-627ada4df350@citrix.com>
 <00ba5885-5ee6-c772-a72e-15431cd3b1f4@suse.com>
 <09049e52-548b-3ffc-5259-b1ffc26413a5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7f3272d5-8ec7-26ec-33ec-2281539920e2@suse.com>
Date: Fri, 16 Oct 2020 13:55:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <09049e52-548b-3ffc-5259-b1ffc26413a5@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.10.2020 13:24, Andrew Cooper wrote:
> On a tangent, what are your views WRT backport beyond 4.14?
> 
> Back then, it was #DB which was adjacent to the guard frame (which was
> not present), but it doesn't use show_registers() by default, so I think
> the problem is mostly hidden.

I wasn't fully decided yet, but as long as it applies reasonably
cleanly I think I'm leaning towards also putting it on 4.13.
4.12 closes anyway once 4.12.4 is out, and I don't think I want
to pick up not-really-urgent changes for putting there beyond
the few ones that I already have (and that I mean to put in
alongside the XSA fixes on Tuesday); I could be talked into it,
though.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 12:04:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 12:04:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8049.21408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOSm-0007BJ-9X; Fri, 16 Oct 2020 12:04:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8049.21408; Fri, 16 Oct 2020 12:04:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOSm-0007BC-6e; Fri, 16 Oct 2020 12:04:00 +0000
Received: by outflank-mailman (input) for mailman id 8049;
 Fri, 16 Oct 2020 12:03:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YFnv=DX=ravnborg.org=sam@srs-us1.protection.inumbo.net>)
 id 1kTOSk-0007B7-Sf
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:03:58 +0000
Received: from asavdk4.altibox.net (unknown [109.247.116.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ad125ad-2499-46c5-80eb-8cb355d1d9b8;
 Fri, 16 Oct 2020 12:03:56 +0000 (UTC)
Received: from ravnborg.org (unknown [188.228.123.71])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by asavdk4.altibox.net (Postfix) with ESMTPS id 5DD27806BD;
 Fri, 16 Oct 2020 14:03:49 +0200 (CEST)
Date: Fri, 16 Oct 2020 14:03:47 +0200
From: Sam Ravnborg <sam@ravnborg.org>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <20201016120347.GB1125266@ravnborg.org>
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-11-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201015123806.32416-11-tzimmermann@suse.de>
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=fu7ymmwf c=1 sm=1 tr=0
	a=S6zTFyMACwkrwXSdXUNehg==:117 a=S6zTFyMACwkrwXSdXUNehg==:17
	a=kj9zAlcOel0A:10 a=7gkXJVJtAAAA:8 a=NqsBjqOBP8_30qnptSgA:9
	a=FLTdSUe5Or9RI9Du:21 a=FeamgQ_8eGC75rZP:21 a=CjuIK1q_8ugA:10
	a=qfUslh1TxfEA:10 a=E9Po1WZjFZOl8hwRPBS3:22

Hi Thomas.

On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
> 
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions.
> 
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
> 
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is not required any longer. The patch removes the
> respective code from both bochs and fbdev.
> 
> v4:
> 	* move dma_buf_map changes into separate patch (Daniel)
> 	* TODO list: comment on fbdev updates (Daniel)
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>

The original workaround fixed it so we could run qemu with the
-nographic option.

So I went ahead and tried to run qemu version
v5.0.0-1970-g0b100c8e72-dirty, with the BOCHS driver built-in.

With the following command line:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic

Behaviour was the same before and after applying this patch.
(panic due to VFS: Unable to mount root fs on unknown-block(0,0))
So I consider it fixed for real now and not just a workaround.

I also tested with:
qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial stdio

and it worked in both cases too.

All the comments above are so that future-me has an easier time finding
out how to reproduce this.

Tested-by: Sam Ravnborg <sam@ravnborg.org>

	Sam

> ---
>  Documentation/gpu/todo.rst        |  19 ++-
>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
>  include/drm/drm_mode_config.h     |  12 --
>  4 files changed, 220 insertions(+), 29 deletions(-)
> 
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
>  ------------------------------------------------
>  
>  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>  
>  Contact: Maintainer of the driver you plan to convert
>  
>  Level: Intermediate
>  
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
>  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
>  -----------------------------------------------------------------
>  
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>  	bochs->dev->mode_config.preferred_depth = 24;
>  	bochs->dev->mode_config.prefer_shadow = 0;
>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> -	bochs->dev->mode_config.fbdev_use_iomem = true;
>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>  
>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..462b0c130ebb 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>  }
>  
>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> -					  struct drm_clip_rect *clip)
> +					  struct drm_clip_rect *clip,
> +					  struct dma_buf_map *dst)
>  {
>  	struct drm_framebuffer *fb = fb_helper->fb;
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> -	for (y = clip->y1; y < clip->y2; y++) {
> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> -			memcpy(dst, src, len);
> -		else
> -			memcpy_toio((void __iomem *)dst, src, len);
> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>  
> +	for (y = clip->y1; y < clip->y2; y++) {
> +		dma_buf_map_memcpy_to(dst, src, len);
> +		dma_buf_map_incr(dst, fb->pitches[0]);
>  		src += fb->pitches[0];
> -		dst += fb->pitches[0];
>  	}
>  }
>  
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
>  			if (ret)
>  				return;
> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>  		}
> +
>  		if (helper->fb->funcs->dirty)
>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>  						 &clip_copy, 1);
> @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
>  }
>  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
>  
> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> +				      size_t count, loff_t *ppos)
> +{
> +	unsigned long p = *ppos;
> +	u8 *dst;
> +	u8 __iomem *src;
> +	int c, err = 0;
> +	unsigned long total_size;
> +	unsigned long alloc_size;
> +	ssize_t ret = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	total_size = info->screen_size;
> +
> +	if (total_size == 0)
> +		total_size = info->fix.smem_len;
> +
> +	if (p >= total_size)
> +		return 0;
> +
> +	if (count >= total_size)
> +		count = total_size;
> +
> +	if (count + p > total_size)
> +		count = total_size - p;
> +
> +	src = (u8 __iomem *)(info->screen_base + p);
> +
> +	alloc_size = min(count, PAGE_SIZE);
> +
> +	dst = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!dst)
> +		return -ENOMEM;
> +
> +	while (count) {
> +		c = min(count, alloc_size);
> +
> +		memcpy_fromio(dst, src, c);
> +		if (copy_to_user(buf, dst, c)) {
> +			err = -EFAULT;
> +			break;
> +		}
> +
> +		src += c;
> +		*ppos += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(dst);
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> +				       size_t count, loff_t *ppos)
> +{
> +	unsigned long p = *ppos;
> +	u8 *src;
> +	u8 __iomem *dst;
> +	int c, err = 0;
> +	unsigned long total_size;
> +	unsigned long alloc_size;
> +	ssize_t ret = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	total_size = info->screen_size;
> +
> +	if (total_size == 0)
> +		total_size = info->fix.smem_len;
> +
> +	if (p > total_size)
> +		return -EFBIG;
> +
> +	if (count > total_size) {
> +		err = -EFBIG;
> +		count = total_size;
> +	}
> +
> +	if (count + p > total_size) {
> +		/*
> +		 * The framebuffer is too small. We do the
> +		 * copy operation, but return an error code
> +		 * afterwards. Taken from fbdev.
> +		 */
> +		if (!err)
> +			err = -ENOSPC;
> +		count = total_size - p;
> +	}
> +
> +	alloc_size = min(count, PAGE_SIZE);
> +
> +	src = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!src)
> +		return -ENOMEM;
> +
> +	dst = (u8 __iomem *)(info->screen_base + p);
> +
> +	while (count) {
> +		c = min(count, alloc_size);
> +
> +		if (copy_from_user(src, buf, c)) {
> +			err = -EFAULT;
> +			break;
> +		}
> +		memcpy_toio(dst, src, c);
> +
> +		dst += c;
> +		*ppos += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(src);
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
>  /**
>   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
>   * @info: fbdev registered by the helper
> @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  		return -ENODEV;
>  }
>  
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> +				 size_t count, loff_t *ppos)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		return drm_fb_helper_sys_read(info, buf, count, ppos);
> +	else
> +		return drm_fb_helper_cfb_read(info, buf, count, ppos);
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> +				  size_t count, loff_t *ppos)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		return drm_fb_helper_sys_write(info, buf, count, ppos);
> +	else
> +		return drm_fb_helper_cfb_write(info, buf, count, ppos);
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> +				  const struct fb_fillrect *rect)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_fillrect(info, rect);
> +	else
> +		drm_fb_helper_cfb_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> +				  const struct fb_copyarea *area)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_copyarea(info, area);
> +	else
> +		drm_fb_helper_cfb_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> +				   const struct fb_image *image)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> +		drm_fb_helper_sys_imageblit(info, image);
> +	else
> +		drm_fb_helper_cfb_imageblit(info, image);
> +}
> +
>  static const struct fb_ops drm_fbdev_fb_ops = {
>  	.owner		= THIS_MODULE,
>  	DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>  	.fb_release	= drm_fbdev_fb_release,
>  	.fb_destroy	= drm_fbdev_fb_destroy,
>  	.fb_mmap	= drm_fbdev_fb_mmap,
> -	.fb_read	= drm_fb_helper_sys_read,
> -	.fb_write	= drm_fb_helper_sys_write,
> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> +	.fb_read	= drm_fbdev_fb_read,
> +	.fb_write	= drm_fbdev_fb_write,
> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
>  };
>  
>  static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
>  	 */
>  	bool prefer_shadow_fbdev;
>  
> -	/**
> -	 * @fbdev_use_iomem:
> -	 *
> -	 * Set to true if framebuffer reside in iomem.
> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> -	 *
> -	 * FIXME: This should be replaced with a per-mapping is_iomem
> -	 * flag (like ttm does), and then used everywhere in fbdev code.
> -	 */
> -	bool fbdev_use_iomem;
> -
>  	/**
>  	 * @quirk_addfb_prefer_xbgr_30bpp:
>  	 *
> -- 
> 2.28.0


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 12:09:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 12:09:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8052.21421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOXv-0007NE-UH; Fri, 16 Oct 2020 12:09:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8052.21421; Fri, 16 Oct 2020 12:09:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOXv-0007N7-RF; Fri, 16 Oct 2020 12:09:19 +0000
Received: by outflank-mailman (input) for mailman id 8052;
 Fri, 16 Oct 2020 12:09:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iki1=DX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kTOXu-0007N2-4z
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:09:18 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f42769e-79e5-4469-88c8-e5ded720fc27;
 Fri, 16 Oct 2020 12:09:16 +0000 (UTC)
X-Inumbo-ID: 6f42769e-79e5-4469-88c8-e5ded720fc27
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602850156;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=wYHtKj7tFXBVmXh/3krYXanUGnDcUzLV5iImjfUSivY=;
  b=CsPQThEV7abQGg9oawM/39KJrt6q8P/7BwSj13C9AA9ZGgV5ThYkfp1v
   +UnFfp7QAmb93sPZWk4zP7+XJxTGAfXdA4oabM7qano/DIFXcFaG8h9iq
   kTgTqcS00UeOAoQFWtnhyEFpp6h5NBvJuUzWhMDoautW3ZWQZUawYxT/V
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jXKgeqHSUaj2Qd6gHAaFfKACcooX+EHsCGXz7oEzrtTMBO+4tdCE0CkENe2rnuz1SX7NlfTkY3
 Pc9tWDLMB5+0Q+hucFXjCus6peGk2+TJOTncPE/gGaGuoJeFQLNnpDRyN1eEz6n1txeQSKQ93z
 a0AGNTx+b6Y4zCdzw68npUQuMTayIc9Sa9NlbuesnVpr17s7POIfhP6iYeSO5g0n+5o80LDW4Z
 SnEJnut99v3CzOPK/sd+07X8LdbHoyL/ykMay7kamcox0F1RX7QElSmiMhI5+1TASc+1SLEFmy
 6oY=
X-SBRS: 2.5
X-MesageID: 29407124
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,382,1596513600"; 
   d="scan'208";a="29407124"
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
 <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
 <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
 <948f0753-561b-15e8-bf8c-52ff507133d2@suse.com>
 <dbd19cd0-316a-c62f-de7b-627ada4df350@citrix.com>
 <00ba5885-5ee6-c772-a72e-15431cd3b1f4@suse.com>
 <09049e52-548b-3ffc-5259-b1ffc26413a5@citrix.com>
 <7f3272d5-8ec7-26ec-33ec-2281539920e2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5807e645-7242-125a-03cf-c7c23f28dfa3@citrix.com>
Date: Fri, 16 Oct 2020 13:07:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7f3272d5-8ec7-26ec-33ec-2281539920e2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 16/10/2020 12:55, Jan Beulich wrote:
> On 16.10.2020 13:24, Andrew Cooper wrote:
>> On a tangent, what are your views WRT backport beyond 4.14?
>>
>> Back then, it was #DB which was adjacent to the guard frame (which was
>> not present), but it doesn't use show_registers() by default, so I think
>> the problem is mostly hidden.
> I wasn't fully decided yet, but as long as it applies reasonably
> cleanly I think I'm leaning towards also putting it on 4.13.
> 4.12 closes anyway once 4.12.4 is out, and I don't think I want
> to pick up not-really-urgent changes for putting there beyond
> the few ones that I already have (and that I mean to put in
> alongside the XSA fixes on Tuesday); I could be talked into it,
> though.

The question I was asking was really "should I try and make an
equivalent fix for 4.13 and older".

While the base premise of the fix would be the same, the logic in
load_system_tables() is different, and the commit message is completely
wrong.

I only encountered this problem with added instrumentation in the #DB
handler, which is why I'm questioning the utility of going to this effort.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 12:09:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 12:09:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8054.21433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOYH-0007S0-6c; Fri, 16 Oct 2020 12:09:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8054.21433; Fri, 16 Oct 2020 12:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOYH-0007Rt-39; Fri, 16 Oct 2020 12:09:41 +0000
Received: by outflank-mailman (input) for mailman id 8054;
 Fri, 16 Oct 2020 12:09:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTOYE-0007Rg-WD
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:09:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 294377e8-71c1-4e0a-a4ea-1c458f112663;
 Fri, 16 Oct 2020 12:09:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6AFF8ADC2;
 Fri, 16 Oct 2020 12:09:36 +0000 (UTC)
X-Inumbo-ID: 294377e8-71c1-4e0a-a4ea-1c458f112663
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602850176;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lviW8b5HNZW/WCv+CuAMlTXf2AzD76bGJTDfpVbQgnI=;
	b=X39xD7EBMHywMNKf8LNG4WQLBuWZ3qn+vuOOYUZ4KVSBKZ7tKKHo3VlkvDkIEl/BFNhAKP
	xccLpDGZPpmBUgzoTFN9twqatAwuzPbKrpzBxHQJUOdztEKQGpzQmkZOs30bfAwsh55mdv
	YlXFmsQfLQr0FY0es5nuveAD7bHBYlA=
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Julien Grall <julien@xen.org>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
 <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
 <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
 <4b77ba6d-bf49-7286-8f2a-53f7b2e7d122@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4eb073bb-67ca-5376-bae1-e555d3c5fb30@suse.com>
Date: Fri, 16 Oct 2020 14:09:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <4b77ba6d-bf49-7286-8f2a-53f7b2e7d122@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.10.2020 11:36, Julien Grall wrote:
> On 15/10/2020 13:07, Jan Beulich wrote:
>> On 14.10.2020 13:40, Julien Grall wrote:
>>> On 13/10/2020 15:26, Jan Beulich wrote:
>>>> On 13.10.2020 16:20, Jürgen Groß wrote:
>>>>> Especially Julien was rather worried by the current situation. In
>>>>> case you can convince him the current handling is fine, we can
>>>>> easily drop this patch.
>>>>
>>>> Julien, in the light of the above - can you clarify the specific
>>>> concerns you (still) have?
>>>
>>> Let me start with the assumption that evtchn->lock is not held when
>>> evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.
>>
>> But this isn't interesting - we know there are paths where it is
>> held, and ones (interdomain sending) where it's the remote port's
>> lock instead which is held. What's important here is that a
>> _consistent_ lock be held (but it doesn't need to be evtchn's).
> 
> Yes, a _consistent_ lock *should* be sufficient. But it is better to use
> the same lock everywhere so it is easier to reason about (see more below).

But that's already not the case, due to the way interdomain channels
have events sent. You did suggest acquiring both locks, but as
indicated at the time, I think this goes too far. As for the doc
aspect - we can improve the situation. IIRC it was you who made me
add the respective comment ahead of struct evtchn_port_ops.

>>>   From my understanding, the goal of lock_old_queue() is to return the
>>> old queue used.
>>>
>>> last_priority and last_vcpu_id may be updated separately and I could not
>>> convince myself that it would not be possible to return a queue that is
>>> neither the current one nor the old one.
>>>
>>> The following could happen if evtchn->priority and
>>> evtchn->notify_vcpu_id keeps changing between calls.
>>>
>>> pCPU0				| pCPU1
>>> 				|
>>> evtchn_fifo_set_pending(v0,...)	|
>>> 				| evtchn_fifo_set_pending(v1, ...)
>>>    [...]				|
>>>    /* Queue has changed */	|
>>>    evtchn->last_vcpu_id = v0 	|
>>> 				| -> lock_old_queue()
>>> 				| v = d->vcpu[evtchn->last_vcpu_id];
>>>     				| old_q = ...
>>> 				| spin_lock(old_q->...)
>>> 				| v = ...
>>> 				| q = ...
>>> 				| /* q and old_q would be the same */
>>> 				|
>>>    evtchn->last_priority = priority|
>>>
>>> If my diagram is correct, then pCPU1 would return a queue that is
>>> neither the current nor old one.
>>
>> I think I agree.
>>
>>> In which case, I think it would at least be possible to corrupt the
>>> queue. From evtchn_fifo_set_pending():
>>>
>>>           /*
>>>            * If this event was a tail, the old queue is now empty and
>>>            * its tail must be invalidated to prevent adding an event to
>>>            * the old queue from corrupting the new queue.
>>>            */
>>>           if ( old_q->tail == port )
>>>               old_q->tail = 0;
>>>
>>> Did I miss anything?
>>
>> I don't think you did. The important point though is that a consistent
>> lock is being held whenever we come here, so two racing set_pending()
>> aren't possible for one and the same evtchn. As a result I don't think
>> the patch here is actually needed.
> 
> I haven't yet read the rest of the patches in full detail to say
> whether this is necessary or not. However, at first glance, I think
> it is not sane to rely on a different lock to protect us. And don't
> get me started on the lack of documentation...
> 
> Furthermore, the implementation of lock_old_queue() suggests that the
> code was planned to be lockless. Why would you need the loop otherwise?

The lock-less aspect of this affects multiple accesses to e.g.
the same queue, I think. I'm unconvinced it was really considered
whether racing sending on the same channel is also safe this way.

> Therefore, regardless the rest of the discussion, I think this patch 
> would be useful to have for our peace of mind.

That's a fair position to take. My counterargument is mainly
that readability (and hence maintainability) suffers with those
changes.

>> If I take this further, then I think I can reason why it wasn't
>> necessary to add further locking to send_guest_{global,vcpu}_virq():
>> The virq_lock is the "consistent lock" protecting ECS_VIRQ ports. The
>> spin_barrier() while closing the port guards that side against the
>> port changing to a different ECS_* behind the sending functions' backs.
>> And binding such ports sets ->virq_to_evtchn[] last, with a suitable
>> barrier (the unlock).
> 
> This makes sense.
> 
>>
>> Which leaves send_guest_pirq() before we can drop the IRQ-safe locking
>> again. I guess we would need to work towards using the underlying
>> irq_desc's lock as consistent lock here, but this certainly isn't the
>> case just yet, and I'm not really certain this can be achieved.
> I can't comment on the PIRQ code but I think this is a risky approach 
> (see more above).

It may be; one would only know how risky it is once it is being tried.
For the moment, with your apparent agreement above, I'll see whether I
can put together a relaxation patch for the vIRQ sending. Main
question is going to be whether in the process I wouldn't find a
reason why this isn't a safe thing to do.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 12:14:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 12:14:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8058.21444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOcV-0008M2-Of; Fri, 16 Oct 2020 12:14:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8058.21444; Fri, 16 Oct 2020 12:14:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOcV-0008Lv-LU; Fri, 16 Oct 2020 12:14:03 +0000
Received: by outflank-mailman (input) for mailman id 8058;
 Fri, 16 Oct 2020 12:14:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTOcU-0008Lq-S0
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:14:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf054e8b-e817-40a3-bbbe-7b256a4dd983;
 Fri, 16 Oct 2020 12:14:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4D0A4ACC5;
 Fri, 16 Oct 2020 12:14:01 +0000 (UTC)
X-Inumbo-ID: cf054e8b-e817-40a3-bbbe-7b256a4dd983
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602850441;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sJIiQ9yrcyyr/TSYLzEY2pC0tit/0unVayTqJXularY=;
	b=nWVBV8JXaIh5UHVbFYkqXaZdzkypEAyNIxQG3JAX4a1plaP+EvsQGD/RyCI1ciP1IuZGi+
	gVI6MdUgPH5Vi2fSpz4bMFtjbO3j8Gqorl+Kr4O4AUgW2hhyjxdhvwWvBszfQw0BYJOHqm
	gmzZojfCo+MJ3sEw2y+ADVSCtfmX5P4=
Subject: Re: [PATCH] x86/traps: 'Fix' safety of read_registers() in #DF path
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>
References: <20201012134908.27497-1-andrew.cooper3@citrix.com>
 <afc5c857-a97b-a268-e6b2-538f31609505@suse.com>
 <307753b0-fef8-658d-f897-8c0eb99ce3e5@citrix.com>
 <948f0753-561b-15e8-bf8c-52ff507133d2@suse.com>
 <dbd19cd0-316a-c62f-de7b-627ada4df350@citrix.com>
 <00ba5885-5ee6-c772-a72e-15431cd3b1f4@suse.com>
 <09049e52-548b-3ffc-5259-b1ffc26413a5@citrix.com>
 <7f3272d5-8ec7-26ec-33ec-2281539920e2@suse.com>
 <5807e645-7242-125a-03cf-c7c23f28dfa3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8f8da1a9-47dd-57d6-a91f-de90de36ea22@suse.com>
Date: Fri, 16 Oct 2020 14:14:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <5807e645-7242-125a-03cf-c7c23f28dfa3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.10.2020 14:07, Andrew Cooper wrote:
> On 16/10/2020 12:55, Jan Beulich wrote:
>> On 16.10.2020 13:24, Andrew Cooper wrote:
>>> On a tangent, what are your views WRT backport beyond 4.14?
>>>
>>> Back then, it was #DB which was adjacent to the guard frame (which was
>>> not present), but it doesn't use show_registers() by default, so I think
>>> the problem is mostly hidden.
>> I wasn't fully decided yet, but as long as it applies reasonably
>> cleanly I think I'm leaning towards also putting it on 4.13.
>> 4.12 closes anyway once 4.12.4 is out, and I don't think I want
>> to pick up not-really-urgent changes for putting there beyond
>> the few ones that I already have (and that I mean to put in
>> alongside the XSA fixes on Tuesday); I could be talked into it,
>> though.
> 
> The question I was asking was really "should I try and make an
> equivalent fix for 4.13 and older".

Oh, I see.

> While the base premise of the fix would be the same, the logic in
> load_system_tables() is different, and the commit message is completely
> wrong.
> 
> I only encountered this problem with added instrumentation in the #DB
> handler, which is why I'm questioning the utility of going to this effort.

Yeah, then probably not worth it.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 12:19:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 12:19:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8063.21458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOi5-00009x-IA; Fri, 16 Oct 2020 12:19:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8063.21458; Fri, 16 Oct 2020 12:19:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOi5-00009q-FB; Fri, 16 Oct 2020 12:19:49 +0000
Received: by outflank-mailman (input) for mailman id 8063;
 Fri, 16 Oct 2020 12:19:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9cG/=DX=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kTOi3-00009l-Vu
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:19:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f97cc22-243c-4c53-bf50-12af54c39052;
 Fri, 16 Oct 2020 12:19:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6C85AAFAB;
 Fri, 16 Oct 2020 12:19:45 +0000 (UTC)
X-Inumbo-ID: 0f97cc22-243c-4c53-bf50-12af54c39052
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Date: Fri, 16 Oct 2020 14:19:42 +0200
From: Thomas Zimmermann <tzimmermann@suse.de>
To: Sam Ravnborg <sam@ravnborg.org>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, alexander.deucher@amd.com, christian.koenig@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
 dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <20201016141942.111e17f3@linux-uq9g>
In-Reply-To: <20201016120347.GB1125266@ravnborg.org>
References: <20201015123806.32416-1-tzimmermann@suse.de>
	<20201015123806.32416-11-tzimmermann@suse.de>
	<20201016120347.GB1125266@ravnborg.org>
Organization: SUSE Software Solutions Germany GmbH
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Hi

On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:

> Hi Thomas.
> 
> On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > At least sparc64 requires I/O-specific access to framebuffers. This
> > patch updates the fbdev console accordingly.
> > 
> > For drivers with direct access to the framebuffer memory, the callback
> > functions in struct fb_ops test for the type of memory and call the
> > respective fb_sys_ or fb_cfb_ functions.
> > 
> > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > interfaces to access the buffer.
> > 
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > I/O memory and avoid a HW exception. With the introduction of struct
> > dma_buf_map, this is not required any longer. The patch removes the
> > respective code from both bochs and fbdev.
> > 
> > v4:
> > 	* move dma_buf_map changes into separate patch (Daniel)
> > 	* TODO list: comment on fbdev updates (Daniel)
> > 
> > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> 
> The original workaround fixed it so we could run qemu with the
> -nographic option.
> 
> So I went ahead and tried to run qemu version v5.0.0-1970-g0b100c8e72-dirty
> with the BOCHS driver built-in.
> 
> With the following command line:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
> 
> Behaviour was the same before and after applying this patch
> (panic due to "VFS: Unable to mount root fs on unknown-block(0,0)"),
> so I consider it fixed for real now and not just worked around.
> 
> I also tested with:
> qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> stdio
> 
> and it worked in both cases too.

FTR, you booted a kernel and got graphics output. The error is simply that
there was no disk to mount?

Best regards
Thomas

> 
> All the comments above are there so that future me has an easier time
> finding out how to reproduce this.
> 
> Tested-by: Sam Ravnborg <sam@ravnborg.org>
> 
> 	Sam
> 
> > ---
> >  Documentation/gpu/todo.rst        |  19 ++-
> >  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
> >  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
> >  include/drm/drm_mode_config.h     |  12 --
> >  4 files changed, 220 insertions(+), 29 deletions(-)
> > 
> > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > index 7e6fc3c04add..638b7f704339 100644
> > --- a/Documentation/gpu/todo.rst
> > +++ b/Documentation/gpu/todo.rst
> > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> >  ------------------------------------------------
> > 
> >  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > -expects the framebuffer in system memory (or system-like memory).
> > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > +expected the framebuffer in system memory or system-like memory. By employing
> > +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported
> > +as well.
> > 
> >  Contact: Maintainer of the driver you plan to convert
> > 
> >  Level: Intermediate
> > 
> > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > +-------------------------------------------------------
> > +
> > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > +being rewritten without dependencies on the fbdev module. Some of the
> > +helpers could further benefit from using struct dma_buf_map instead of
> > +raw pointers.
> > +
> > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > +
> > +Level: Advanced
> > +
> > +
> >  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> >  -----------------------------------------------------------------
> > 
> > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > index 13d0d04c4457..853081d186d5 100644
> > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> >  	bochs->dev->mode_config.preferred_depth = 24;
> >  	bochs->dev->mode_config.prefer_shadow = 0;
> >  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > -	bochs->dev->mode_config.fbdev_use_iomem = true;
> >  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
> > 
> >  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > index 6212cd7cde1d..462b0c130ebb 100644
> > --- a/drivers/gpu/drm/drm_fb_helper.c
> > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> >  }
> > 
> >  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > -					  struct drm_clip_rect *clip)
> > +					  struct drm_clip_rect *clip,
> > +					  struct dma_buf_map *dst)
> >  {
> >  	struct drm_framebuffer *fb = fb_helper->fb;
> >  	unsigned int cpp = fb->format->cpp[0];
> >  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> >  	void *src = fb_helper->fbdev->screen_buffer + offset;
> > -	void *dst = fb_helper->buffer->map.vaddr + offset;
> >  	size_t len = (clip->x2 - clip->x1) * cpp;
> >  	unsigned int y;
> > 
> > -	for (y = clip->y1; y < clip->y2; y++) {
> > -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > -			memcpy(dst, src, len);
> > -		else
> > -			memcpy_toio((void __iomem *)dst, src, len);
> > +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > +	for (y = clip->y1; y < clip->y2; y++) {
> > +		dma_buf_map_memcpy_to(dst, src, len);
> > +		dma_buf_map_incr(dst, fb->pitches[0]);
> >  		src += fb->pitches[0];
> > -		dst += fb->pitches[0];
> >  	}
> >  }
> > 
> > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >  			ret = drm_client_buffer_vmap(helper->buffer, &map);
> >  			if (ret)
> >  				return;
> > -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> >  		}
> > +
> >  		if (helper->fb->funcs->dirty)
> >  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> >  						 &clip_copy, 1);
> > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> >  }
> >  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> > 
> > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > +				      size_t count, loff_t *ppos)
> > +{
> > +	unsigned long p = *ppos;
> > +	u8 *dst;
> > +	u8 __iomem *src;
> > +	int c, err = 0;
> > +	unsigned long total_size;
> > +	unsigned long alloc_size;
> > +	ssize_t ret = 0;
> > +
> > +	if (info->state != FBINFO_STATE_RUNNING)
> > +		return -EPERM;
> > +
> > +	total_size = info->screen_size;
> > +
> > +	if (total_size == 0)
> > +		total_size = info->fix.smem_len;
> > +
> > +	if (p >= total_size)
> > +		return 0;
> > +
> > +	if (count >= total_size)
> > +		count = total_size;
> > +
> > +	if (count + p > total_size)
> > +		count = total_size - p;
> > +
> > +	src = (u8 __iomem *)(info->screen_base + p);
> > +
> > +	alloc_size = min(count, PAGE_SIZE);
> > +
> > +	dst = kmalloc(alloc_size, GFP_KERNEL);
> > +	if (!dst)
> > +		return -ENOMEM;
> > +
> > +	while (count) {
> > +		c = min(count, alloc_size);
> > +
> > +		memcpy_fromio(dst, src, c);
> > +		if (copy_to_user(buf, dst, c)) {
> > +			err = -EFAULT;
> > +			break;
> > +		}
> > +
> > +		src += c;
> > +		*ppos += c;
> > +		buf += c;
> > +		ret += c;
> > +		count -= c;
> > +	}
> > +
> > +	kfree(dst);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	return ret;
> > +}
> > +
> > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > +				       size_t count, loff_t *ppos)
> > +{
> > +	unsigned long p = *ppos;
> > +	u8 *src;
> > +	u8 __iomem *dst;
> > +	int c, err = 0;
> > +	unsigned long total_size;
> > +	unsigned long alloc_size;
> > +	ssize_t ret = 0;
> > +
> > +	if (info->state != FBINFO_STATE_RUNNING)
> > +		return -EPERM;
> > +
> > +	total_size = info->screen_size;
> > +
> > +	if (total_size == 0)
> > +		total_size = info->fix.smem_len;
> > +
> > +	if (p > total_size)
> > +		return -EFBIG;
> > +
> > +	if (count > total_size) {
> > +		err = -EFBIG;
> > +		count = total_size;
> > +	}
> > +
> > +	if (count + p > total_size) {
> > +		/*
> > +		 * The framebuffer is too small. We do the
> > +		 * copy operation, but return an error code
> > +		 * afterwards. Taken from fbdev.
> > +		 */
> > +		if (!err)
> > +			err = -ENOSPC;
> > +		count = total_size - p;
> > +	}
> > +
> > +	alloc_size = min(count, PAGE_SIZE);
> > +
> > +	src = kmalloc(alloc_size, GFP_KERNEL);
> > +	if (!src)
> > +		return -ENOMEM;
> > +
> > +	dst = (u8 __iomem *)(info->screen_base + p);
> > +
> > +	while (count) {
> > +		c = min(count, alloc_size);
> > +
> > +		if (copy_from_user(src, buf, c)) {
> > +			err = -EFAULT;
> > +			break;
> > +		}
> > +		memcpy_toio(dst, src, c);
> > +
> > +		dst += c;
> > +		*ppos += c;
> > +		buf += c;
> > +		ret += c;
> > +		count -= c;
> > +	}
> > +
> > +	kfree(src);
> > +
> > +	if (err)
> > +		return err;
> > +
> > +	return ret;
> > +}
> > +
> >  /**
> >   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> >   * @info: fbdev registered by the helper
> > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> >  	return -ENODEV;
> >  }
> > 
> > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > +				 size_t count, loff_t *ppos)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		return drm_fb_helper_sys_read(info, buf, count, ppos);
> > +	else
> > +		return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > +}
> > +
> > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > +				  size_t count, loff_t *ppos)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		return drm_fb_helper_sys_write(info, buf, count, ppos);
> > +	else
> > +		return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > +}
> > +
> > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > +				  const struct fb_fillrect *rect)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		drm_fb_helper_sys_fillrect(info, rect);
> > +	else
> > +		drm_fb_helper_cfb_fillrect(info, rect);
> > +}
> > +
> > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > +				  const struct fb_copyarea *area)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		drm_fb_helper_sys_copyarea(info, area);
> > +	else
> > +		drm_fb_helper_cfb_copyarea(info, area);
> > +}
> > +
> > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > +				   const struct fb_image *image)
> > +{
> > +	struct drm_fb_helper *fb_helper = info->par;
> > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > +
> > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > +		drm_fb_helper_sys_imageblit(info, image);
> > +	else
> > +		drm_fb_helper_cfb_imageblit(info, image);
> > +}
> > +
> >  static const struct fb_ops drm_fbdev_fb_ops = {
> >  	.owner		= THIS_MODULE,
> >  	DRM_FB_HELPER_DEFAULT_OPS,
> > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> >  	.fb_release	= drm_fbdev_fb_release,
> >  	.fb_destroy	= drm_fbdev_fb_destroy,
> >  	.fb_mmap	= drm_fbdev_fb_mmap,
> > -	.fb_read	= drm_fb_helper_sys_read,
> > -	.fb_write	= drm_fb_helper_sys_write,
> > -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> > -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> > -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> > +	.fb_read	= drm_fbdev_fb_read,
> > +	.fb_write	= drm_fbdev_fb_write,
> > +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> > +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> > +	.fb_imageblit	= drm_fbdev_fb_imageblit,
> >  };
> > 
> >  static struct fb_deferred_io drm_fbdev_defio = {
> > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > index 5ffbb4ed5b35..ab424ddd7665 100644
> > --- a/include/drm/drm_mode_config.h
> > +++ b/include/drm/drm_mode_config.h
> > @@ -877,18 +877,6 @@ struct drm_mode_config {
> >  	 */
> >  	bool prefer_shadow_fbdev;
> > 
> > -	/**
> > -	 * @fbdev_use_iomem:
> > -	 *
> > -	 * Set to true if framebuffer reside in iomem.
> > -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> > -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > -	 *
> > -	 * FIXME: This should be replaced with a per-mapping is_iomem
> > -	 * flag (like ttm does), and then used everywhere in fbdev code.
> > -	 */
> > -	bool fbdev_use_iomem;
> > -
> >  	/**
> >  	 * @quirk_addfb_prefer_xbgr_30bpp:
> >  	 *
> > -- 
> > 2.28.0



-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer
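The drm_fb_helper_cfb_read() helper added by the patch follows the classic bounce-buffer pattern: data in I/O memory is staged chunk-wise into a regular allocation before being copied out to the caller. A reduced userspace model of that pattern follows; bounce_read(), CHUNK and fake_memcpy_fromio() are illustrative stand-ins, not kernel APIs.

```c
#include <stdlib.h>
#include <string.h>

/* Userspace model of the bounce-buffer read: "I/O memory" may not be
 * safely accessible with ordinary string routines, so it is first
 * staged chunk by chunk into an ordinary buffer. Illustrative names. */
#define CHUNK 4096 /* stands in for PAGE_SIZE */

/* stand-in for memcpy_fromio() */
static void fake_memcpy_fromio(void *dst, const unsigned char *src, size_t n)
{
	memcpy(dst, src, n);
}

/* Read up to 'count' bytes at offset *ppos from an fb of 'total' bytes. */
static long bounce_read(unsigned char *out, const unsigned char *fb,
			size_t total, size_t count, size_t *ppos)
{
	size_t p = *ppos;
	long ret = 0;
	unsigned char *bounce;

	if (p >= total)
		return 0;		/* past the end: nothing to read */
	if (count > total - p)
		count = total - p;	/* clamp to the framebuffer size */

	bounce = malloc(count < CHUNK ? count : CHUNK);
	if (!bounce)
		return -1;

	while (count) {
		size_t c = count < CHUNK ? count : CHUNK;

		fake_memcpy_fromio(bounce, fb + *ppos, c);
		memcpy(out + ret, bounce, c); /* models copy_to_user() */
		*ppos += c;
		ret += c;
		count -= c;
	}
	free(bounce);
	return ret;
}
```

The kernel version additionally handles FBINFO_STATE_RUNNING and error propagation, but the clamp-allocate-loop shape is the same.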


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 12:35:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 12:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8070.21478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOxP-0001wH-4y; Fri, 16 Oct 2020 12:35:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8070.21478; Fri, 16 Oct 2020 12:35:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTOxP-0001wA-0k; Fri, 16 Oct 2020 12:35:39 +0000
Received: by outflank-mailman (input) for mailman id 8070;
 Fri, 16 Oct 2020 12:35:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tDey=DX=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kTOxN-0001w4-0o
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:35:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76d9e649-a8df-4b2a-89bd-e06495a32646;
 Fri, 16 Oct 2020 12:35:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTOxM-00041W-0q
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:35:36 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTOxL-0008PS-V8
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:35:35 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kTOxK-0003eA-3Z; Fri, 16 Oct 2020 13:35:34 +0100
X-Inumbo-ID: 76d9e649-a8df-4b2a-89bd-e06495a32646
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=f0hAf7wfFJTtTLqm2PlOsH+pnB4+KPz9jXt4sXqTpK8=; b=6jBrCkxS1/UcaGB0UFxX1gvIcu
	jY+JK8zVdAStnfLexUTZ/jAnR+DCdDuJdhn0VAZ8MEUbnIfkG9+plbgm+QchLWPVoBicEp1hkUaBm
	g3hQtkEm+UYs8MoSXjwY4+XqDowMZipGwmIkdwuUDhxjlErIifKjYuOjPAq9gXdgIpr0=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH] host reuse: Reuse hosts only in same role (for now)
Date: Fri, 16 Oct 2020 13:35:28 +0100
Message-Id: <20201016123528.1894-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is a workaround.  There is a problem with host key setup in a
group of hosts, which means that when a pair test reuses a host set up
by a different test, we can get
   Host key verification failed.
during the src-to-dst migration.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-host-reuse | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-host-reuse b/ts-host-reuse
index ae967304..8d674257 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -67,6 +67,7 @@ sub sharetype_add ($$) {
 sub compute_test_sharetype () {
     my @runvartexts;
     my %done;
+    push @runvartexts, $ho->{Ident};
     foreach my $key (runvar_glob(@accessible_runvar_pats)) {
 	next if runvar_is_synth($key);
 	my $val = $r{$key};
-- 
2.20.1
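The effect of the one-line Perl change above can be modelled in C as well: fold the host's role ident into the computed share type, so that only jobs which use the host in the same role can compute an equal share type and thus match it for reuse. compute_sharetype() below is an illustrative stand-in, not osstest's actual logic (which hashes runvar texts in ts-host-reuse).

```c
#include <stdio.h>
#include <string.h>

/* Model of the ts-host-reuse fix: a host prepared in one role (say
 * "dst_host") is never matched for reuse in another ("src_host"),
 * because the role ident is now part of the share-type key. */
static void compute_sharetype(char *out, size_t outsz,
			      const char *role_ident, const char *runvars)
{
	/* Before the fix the share type depended on the runvars alone;
	 * prefixing the role ident makes cross-role keys unequal. */
	snprintf(out, outsz, "%s\n%s", role_ident, runvars);
}
```

Two pair-test jobs with identical runvars but different role idents now get distinct share types, which sidesteps the mismatched-host-key migration failure described above.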



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 12:49:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 12:49:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8074.21489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPAM-0002yF-As; Fri, 16 Oct 2020 12:49:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8074.21489; Fri, 16 Oct 2020 12:49:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPAM-0002y8-7t; Fri, 16 Oct 2020 12:49:02 +0000
Received: by outflank-mailman (input) for mailman id 8074;
 Fri, 16 Oct 2020 12:49:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YFnv=DX=ravnborg.org=sam@srs-us1.protection.inumbo.net>)
 id 1kTPAL-0002y3-Ix
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 12:49:01 +0000
Received: from asavdk4.altibox.net (unknown [109.247.116.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5af31553-d482-439c-90fc-2b323f01e13f;
 Fri, 16 Oct 2020 12:48:59 +0000 (UTC)
Received: from ravnborg.org (unknown [188.228.123.71])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by asavdk4.altibox.net (Postfix) with ESMTPS id 383A2806F9;
 Fri, 16 Oct 2020 14:48:52 +0200 (CEST)
X-Inumbo-ID: 5af31553-d482-439c-90fc-2b323f01e13f
Date: Fri, 16 Oct 2020 14:48:50 +0200
From: Sam Ravnborg <sam@ravnborg.org>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <20201016124850.GA1174599@ravnborg.org>
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-11-tzimmermann@suse.de>
 <20201016120347.GB1125266@ravnborg.org>
 <20201016141942.111e17f3@linux-uq9g>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201016141942.111e17f3@linux-uq9g>
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=fu7ymmwf c=1 sm=1 tr=0
	a=S6zTFyMACwkrwXSdXUNehg==:117 a=S6zTFyMACwkrwXSdXUNehg==:17
	a=8nJEP1OIZ-IA:10 a=7gkXJVJtAAAA:8 a=6Hpfs63HWkxrz3fWGNMA:9
	a=w4pgIJ6L-nyCBc2a:21 a=gikFlJ38tUkSyios:21 a=wPNLvfGTeEIA:10
	a=qfUslh1TxfEA:10 a=E9Po1WZjFZOl8hwRPBS3:22

On Fri, Oct 16, 2020 at 02:19:42PM +0200, Thomas Zimmermann wrote:
> Hi
> 
> On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg <sam@ravnborg.org> wrote:
> 
> > Hi Thomas.
> > 
> > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote:
> > > At least sparc64 requires I/O-specific access to framebuffers. This
> > > patch updates the fbdev console accordingly.
> > > 
> > > For drivers with direct access to the framebuffer memory, the callback
> > > functions in struct fb_ops test for the type of memory and call the
> > > respective fb_sys_ or fb_cfb_ functions.
> > > 
> > > For drivers that employ a shadow buffer, fbdev's blit function retrieves
> > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> > > interfaces to access the buffer.
> > > 
> > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> > > I/O memory and avoid a HW exception. With the introduction of struct
> > > dma_buf_map, this is not required any longer. The patch removes the
> > > respective code from both bochs and fbdev.
> > > 
> > > v4:
> > > 	* move dma_buf_map changes into separate patch (Daniel)
> > > 	* TODO list: comment on fbdev updates (Daniel)
> > > 
> > > Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> > 
> > The original workaround fixed it so we could run qemu with the
> > -nographic option.
> > 
> > So I went ahead and tried to run qemu version v5.0.0-1970-g0b100c8e72-dirty
> > with the BOCHS driver built-in.
> > 
> > With the following command line:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic
> > 
> > Behaviour was the same before and after applying this patch
> > (panic due to "VFS: Unable to mount root fs on unknown-block(0,0)"),
> > so I consider it fixed for real now and not just worked around.
> > 
> > I also tested with:
> > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial
> > stdio
> > 
> > and it worked in both cases too.
> 
> FTR, you booted a kernel and got graphics output. The error is simply that
> there was no disk to mount?

The short version: "Yes".

The longer version:

With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-serial stdio" I got graphical output - one penguin.

With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0
-nographic" I got no graphical output, as implied by the -nographic
option. But the boot continued - where it would panic before when we
accessed IO memory as system memory.

In both cases I got an error because I had not specified any rootfs, so
qemu failed to mount one. That was expected.

	Sam
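
The dispatch the patch introduces (each fb_ops callback consults the mapping's is_iomem flag and picks the sys_ or cfb_ path) can be modelled in plain userspace C. This is a sketch only: struct buf_map, fake_memcpy_toio(), map_memcpy_to() and map_incr() are illustrative stand-ins for the kernel's struct dma_buf_map interfaces, not the real API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Reduced userspace model of struct dma_buf_map: an address plus a
 * flag recording whether it points into I/O memory. */
struct buf_map {
	void *vaddr;
	bool is_iomem;
};

/* Stand-in for memcpy_toio(); the kernel uses I/O-safe accessors. */
static void fake_memcpy_toio(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

/* The dispatch: the mapping itself selects the copy routine, which is
 * what lets the per-device mode_config.fbdev_use_iomem flag go away. */
static void map_memcpy_to(struct buf_map *dst, const void *src, size_t len)
{
	if (dst->is_iomem)
		fake_memcpy_toio(dst->vaddr, src, len);
	else
		memcpy(dst->vaddr, src, len);
}

/* Advance the mapping, like dma_buf_map_incr() in the blit loop. */
static void map_incr(struct buf_map *map, size_t incr)
{
	map->vaddr = (char *)map->vaddr + incr;
}
```

In the kernel the same selection happens inside dma_buf_map_memcpy_to(), so a caller like drm_fb_helper_dirty_blit_real() is identical for system and I/O framebuffers.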

> 
> Best regards
> Thomas
> 
> > 
> > All the comments above are there so that future me has an easier time
> > finding out how to reproduce this.
> > 
> > Tested-by: Sam Ravnborg <sam@ravnborg.org>
> > 
> > 	Sam
> > 
> > > ---
> > >  Documentation/gpu/todo.rst        |  19 ++-
> > >  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
> > >  drivers/gpu/drm/drm_fb_helper.c   | 217 ++++++++++++++++++++++++++++--
> > >  include/drm/drm_mode_config.h     |  12 --
> > >  4 files changed, 220 insertions(+), 29 deletions(-)
> > > 
> > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> > > index 7e6fc3c04add..638b7f704339 100644
> > > --- a/Documentation/gpu/todo.rst
> > > +++ b/Documentation/gpu/todo.rst
> > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> > >  ------------------------------------------------
> > >  
> > >  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> > > -expects the framebuffer in system memory (or system-like memory).
> > > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> > > +expected the framebuffer in system memory or system-like memory. By employing
> > > +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> > > +as well.
> > >  
> > >  Contact: Maintainer of the driver you plan to convert
> > >  
> > >  Level: Intermediate
> > >  
> > > +Reimplement functions in drm_fbdev_fb_ops without fbdev
> > > +-------------------------------------------------------
> > > +
> > > +A number of callback functions in drm_fbdev_fb_ops could benefit from
> > > +being rewritten without dependencies on the fbdev module. Some of the
> > > +helpers could further benefit from using struct dma_buf_map instead of
> > > +raw pointers.
> > > +
> > > +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> > > +
> > > +Level: Advanced
> > > +
> > > +
> > >  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> > >  -----------------------------------------------------------------
> > >  
> > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> > > index 13d0d04c4457..853081d186d5 100644
> > > --- a/drivers/gpu/drm/bochs/bochs_kms.c
> > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> > >  	bochs->dev->mode_config.preferred_depth = 24;
> > >  	bochs->dev->mode_config.prefer_shadow = 0;
> > >  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> > > -	bochs->dev->mode_config.fbdev_use_iomem = true;
> > >  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
> > >  
> > >  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> > > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> > > index 6212cd7cde1d..462b0c130ebb 100644
> > > --- a/drivers/gpu/drm/drm_fb_helper.c
> > > +++ b/drivers/gpu/drm/drm_fb_helper.c
> > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> > >  }
> > >  
> > >  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> > > -					  struct drm_clip_rect *clip)
> > > +					  struct drm_clip_rect *clip,
> > > +					  struct dma_buf_map *dst)
> > >  {
> > >  	struct drm_framebuffer *fb = fb_helper->fb;
> > >  	unsigned int cpp = fb->format->cpp[0];
> > >  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> > >  	void *src = fb_helper->fbdev->screen_buffer + offset;
> > > -	void *dst = fb_helper->buffer->map.vaddr + offset;
> > >  	size_t len = (clip->x2 - clip->x1) * cpp;
> > >  	unsigned int y;
> > >  
> > > -	for (y = clip->y1; y < clip->y2; y++) {
> > > -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> > > -			memcpy(dst, src, len);
> > > -		else
> > > -			memcpy_toio((void __iomem *)dst, src, len);
> > > +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> > > +
> > > +	for (y = clip->y1; y < clip->y2; y++) {
> > > +		dma_buf_map_memcpy_to(dst, src, len);
> > > +		dma_buf_map_incr(dst, fb->pitches[0]);
> > >  		src += fb->pitches[0];
> > > -		dst += fb->pitches[0];
> > >  	}
> > >  }
> > >  
> > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> > >  			ret = drm_client_buffer_vmap(helper->buffer, &map);
> > >  			if (ret)
> > >  				return;
> > > -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> > > +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> > >  		}
> > > +
> > >  		if (helper->fb->funcs->dirty)
> > >  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> > >  						 &clip_copy, 1);
> > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info,
> > >  }
> > >  EXPORT_SYMBOL(drm_fb_helper_sys_imageblit);
> > >  
> > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf,
> > > +				      size_t count, loff_t *ppos)
> > > +{
> > > +	unsigned long p = *ppos;
> > > +	u8 *dst;
> > > +	u8 __iomem *src;
> > > +	int c, err = 0;
> > > +	unsigned long total_size;
> > > +	unsigned long alloc_size;
> > > +	ssize_t ret = 0;
> > > +
> > > +	if (info->state != FBINFO_STATE_RUNNING)
> > > +		return -EPERM;
> > > +
> > > +	total_size = info->screen_size;
> > > +
> > > +	if (total_size == 0)
> > > +		total_size = info->fix.smem_len;
> > > +
> > > +	if (p >= total_size)
> > > +		return 0;
> > > +
> > > +	if (count >= total_size)
> > > +		count = total_size;
> > > +
> > > +	if (count + p > total_size)
> > > +		count = total_size - p;
> > > +
> > > +	src = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > +	alloc_size = min(count, PAGE_SIZE);
> > > +
> > > +	dst = kmalloc(alloc_size, GFP_KERNEL);
> > > +	if (!dst)
> > > +		return -ENOMEM;
> > > +
> > > +	while (count) {
> > > +		c = min(count, alloc_size);
> > > +
> > > +		memcpy_fromio(dst, src, c);
> > > +		if (copy_to_user(buf, dst, c)) {
> > > +			err = -EFAULT;
> > > +			break;
> > > +		}
> > > +
> > > +		src += c;
> > > +		*ppos += c;
> > > +		buf += c;
> > > +		ret += c;
> > > +		count -= c;
> > > +	}
> > > +
> > > +	kfree(dst);
> > > +
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	return ret;
> > > +}
> > > +
> > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf,
> > > +				       size_t count, loff_t *ppos)
> > > +{
> > > +	unsigned long p = *ppos;
> > > +	u8 *src;
> > > +	u8 __iomem *dst;
> > > +	int c, err = 0;
> > > +	unsigned long total_size;
> > > +	unsigned long alloc_size;
> > > +	ssize_t ret = 0;
> > > +
> > > +	if (info->state != FBINFO_STATE_RUNNING)
> > > +		return -EPERM;
> > > +
> > > +	total_size = info->screen_size;
> > > +
> > > +	if (total_size == 0)
> > > +		total_size = info->fix.smem_len;
> > > +
> > > +	if (p > total_size)
> > > +		return -EFBIG;
> > > +
> > > +	if (count > total_size) {
> > > +		err = -EFBIG;
> > > +		count = total_size;
> > > +	}
> > > +
> > > +	if (count + p > total_size) {
> > > +		/*
> > > +		 * The framebuffer is too small. We do the
> > > +		 * copy operation, but return an error code
> > > +		 * afterwards. Taken from fbdev.
> > > +		 */
> > > +		if (!err)
> > > +			err = -ENOSPC;
> > > +		count = total_size - p;
> > > +	}
> > > +
> > > +	alloc_size = min(count, PAGE_SIZE);
> > > +
> > > +	src = kmalloc(alloc_size, GFP_KERNEL);
> > > +	if (!src)
> > > +		return -ENOMEM;
> > > +
> > > +	dst = (u8 __iomem *)(info->screen_base + p);
> > > +
> > > +	while (count) {
> > > +		c = min(count, alloc_size);
> > > +
> > > +		if (copy_from_user(src, buf, c)) {
> > > +			err = -EFAULT;
> > > +			break;
> > > +		}
> > > +		memcpy_toio(dst, src, c);
> > > +
> > > +		dst += c;
> > > +		*ppos += c;
> > > +		buf += c;
> > > +		ret += c;
> > > +		count -= c;
> > > +	}
> > > +
> > > +	kfree(src);
> > > +
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	return ret;
> > > +}
> > > +
> > >  /**
> > >   * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect
> > >   * @info: fbdev registered by the helper
> > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> > >  	return -ENODEV;
> > >  }
> > >  
> > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> > > +				 size_t count, loff_t *ppos)
> > > +{
> > > +	struct drm_fb_helper *fb_helper = info->par;
> > > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > +		return drm_fb_helper_sys_read(info, buf, count, ppos);
> > > +	else
> > > +		return drm_fb_helper_cfb_read(info, buf, count, ppos);
> > > +}
> > > +
> > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> > > +				  size_t count, loff_t *ppos)
> > > +{
> > > +	struct drm_fb_helper *fb_helper = info->par;
> > > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > +		return drm_fb_helper_sys_write(info, buf, count, ppos);
> > > +	else
> > > +		return drm_fb_helper_cfb_write(info, buf, count, ppos);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> > > +				  const struct fb_fillrect *rect)
> > > +{
> > > +	struct drm_fb_helper *fb_helper = info->par;
> > > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > +		drm_fb_helper_sys_fillrect(info, rect);
> > > +	else
> > > +		drm_fb_helper_cfb_fillrect(info, rect);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> > > +				  const struct fb_copyarea *area)
> > > +{
> > > +	struct drm_fb_helper *fb_helper = info->par;
> > > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > +		drm_fb_helper_sys_copyarea(info, area);
> > > +	else
> > > +		drm_fb_helper_cfb_copyarea(info, area);
> > > +}
> > > +
> > > +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> > > +				   const struct fb_image *image)
> > > +{
> > > +	struct drm_fb_helper *fb_helper = info->par;
> > > +	struct drm_client_buffer *buffer = fb_helper->buffer;
> > > +
> > > +	if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem)
> > > +		drm_fb_helper_sys_imageblit(info, image);
> > > +	else
> > > +		drm_fb_helper_cfb_imageblit(info, image);
> > > +}
> > > +
> > >  static const struct fb_ops drm_fbdev_fb_ops = {
> > >  	.owner		= THIS_MODULE,
> > >  	DRM_FB_HELPER_DEFAULT_OPS,
> > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> > >  	.fb_release	= drm_fbdev_fb_release,
> > >  	.fb_destroy	= drm_fbdev_fb_destroy,
> > >  	.fb_mmap	= drm_fbdev_fb_mmap,
> > > -	.fb_read	= drm_fb_helper_sys_read,
> > > -	.fb_write	= drm_fb_helper_sys_write,
> > > -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> > > -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> > > -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> > > +	.fb_read	= drm_fbdev_fb_read,
> > > +	.fb_write	= drm_fbdev_fb_write,
> > > +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> > > +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> > > +	.fb_imageblit	= drm_fbdev_fb_imageblit,
> > >  };
> > >  
> > >  static struct fb_deferred_io drm_fbdev_defio = {
> > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> > > index 5ffbb4ed5b35..ab424ddd7665 100644
> > > --- a/include/drm/drm_mode_config.h
> > > +++ b/include/drm/drm_mode_config.h
> > > @@ -877,18 +877,6 @@ struct drm_mode_config {
> > >  	 */
> > >  	bool prefer_shadow_fbdev;
> > >  
> > > -	/**
> > > -	 * @fbdev_use_iomem:
> > > -	 *
> > > -	 * Set to true if framebuffer reside in iomem.
> > > -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> > > -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> > > -	 *
> > > -	 * FIXME: This should be replaced with a per-mapping is_iomem
> > > -	 * flag (like ttm does), and then used everywhere in fbdev code.
> > > -	 */
> > > -	bool fbdev_use_iomem;
> > > -
> > >  	/**
> > >  	 * @quirk_addfb_prefer_xbgr_30bpp:
> > >  	 *
> > > -- 
> > > 2.28.0
> 
> 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:25:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8079.21501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPje-0006Rq-Bm; Fri, 16 Oct 2020 13:25:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8079.21501; Fri, 16 Oct 2020 13:25:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPje-0006Rj-8q; Fri, 16 Oct 2020 13:25:30 +0000
Received: by outflank-mailman (input) for mailman id 8079;
 Fri, 16 Oct 2020 13:25:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rWFt=DX=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kTPjc-0006Re-Ss
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:25:28 +0000
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e39ba013-943a-48fc-b9ec-fd31ba739f41;
 Fri, 16 Oct 2020 13:25:27 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id f21so2476948wml.3
 for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 06:25:27 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id 64sm2856197wmd.3.2020.10.16.06.25.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 16 Oct 2020 06:25:25 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rWFt=DX=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kTPjc-0006Re-Ss
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:25:28 +0000
X-Inumbo-ID: e39ba013-943a-48fc-b9ec-fd31ba739f41
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e39ba013-943a-48fc-b9ec-fd31ba739f41;
	Fri, 16 Oct 2020 13:25:27 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id f21so2476948wml.3
        for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 06:25:27 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=HRWYyHhjSt+jchGWaAnnJLVz+GBYjjs43COiHHYA0hM=;
        b=hMA3PifN+TkFADOuTujDdp5jirqI2pIjbRQLAXcDdK3LSFKlZGQvDVhO39kn4wCI1H
         JkFevRBKDbIuLL2JSzkwOofK9Vme7R/WXeL/5u7n0/Ffvs3RFl5OXUx1Btce99xgUL0W
         YuFzrsOxxgEsG+zxBZAU1entG9N7IzT0K+t6C/cC3cJBg7CJQPv/W5BIqJAm/HZ8F2cU
         ZqtoAk56pbPyq4Scg2MPOTA6lgX8dOSBZMr4GOgAd0fKMuYUKGkvQt52AbVgY7DP3vvr
         pOJl3eIs1dqLGnlc/XfRy+ZMuEtSruYE7hTCMdWZ3q2sjqz+DFBsjqGv3//wJdPrgzaV
         lp8Q==
X-Gm-Message-State: AOAM533fqmlMjRLvGTHNUxWYTDhPzuBfYL2rtyKfjv2SXm/+2m6cwM4I
	DU0pkrjBt9fq9Zx5KITry8eb+9O+hms=
X-Google-Smtp-Source: ABdhPJwLvSTIZsKY4SyQhj9apLT9Vhv8yopKQh1i8YJAE4MT93w6G92F34Iql4BfiJQkzZp6Dm8uIQ==
X-Received: by 2002:a05:600c:2217:: with SMTP id z23mr4025406wml.133.1602854726672;
        Fri, 16 Oct 2020 06:25:26 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id 64sm2856197wmd.3.2020.10.16.06.25.25
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 16 Oct 2020 06:25:25 -0700 (PDT)
Date: Fri, 16 Oct 2020 13:25:24 +0000
From: Wei Liu <wl@xen.org>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [SECOND RESEND] [PATCH] tools/python: Pass linker to Python
 build process
Message-ID: <20201016132524.wuli37asps4eshce@liuwe-devbox-debian-v2>
References: <20201012011139.GA82449@mattapan.m5p.com>
 <20201015010148.GQ151766@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20201015010148.GQ151766@mail-itl>
User-Agent: NeoMutt/20180716

On Thu, Oct 15, 2020 at 03:01:48AM +0200, Marek Marczykowski-Górecki wrote:
> On Sun, Oct 11, 2020 at 06:11:39PM -0700, Elliott Mitchell wrote:
> > Unexpectedly the environment variable which needs to be passed is
> > $LDSHARED and not $LD.  Otherwise Python may find the build `ld` instead
> > of the host `ld`.
> > 
> > Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
> > it can load at runtime, not executables.
> > 
> > This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
> > to $LDFLAGS which breaks many linkers.
> > 
> > Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> 
> Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Thanks. Applied.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:26:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:26:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8081.21513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPl1-0006Yy-Mq; Fri, 16 Oct 2020 13:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8081.21513; Fri, 16 Oct 2020 13:26:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPl1-0006Yr-JY; Fri, 16 Oct 2020 13:26:55 +0000
Received: by outflank-mailman (input) for mailman id 8081;
 Fri, 16 Oct 2020 13:26:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rWFt=DX=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kTPkz-0006Yj-MS
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:26:53 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72b34579-4f43-4b23-b5ab-7d7d8df22896;
 Fri, 16 Oct 2020 13:26:52 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id q5so3006463wmq.0
 for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 06:26:52 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id v6sm2904715wmj.6.2020.10.16.06.26.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 16 Oct 2020 06:26:51 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rWFt=DX=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kTPkz-0006Yj-MS
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:26:53 +0000
X-Inumbo-ID: 72b34579-4f43-4b23-b5ab-7d7d8df22896
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 72b34579-4f43-4b23-b5ab-7d7d8df22896;
	Fri, 16 Oct 2020 13:26:52 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id q5so3006463wmq.0
        for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 06:26:52 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=7IbjvTI1eR/IAMivLeDmCbKLKTYNQtvDuViSY0ATWE4=;
        b=fjg8bb34elL+F6znlQ21qekwQDswOKjVbIu6jJ7rySSU3Tg/tayVQbRF/xr5tPd4ue
         xUzKrVFOw6b8EPI3oI7vayQ6lT4ujrsoEAHwfV4tlYEhCxC9SPdhl/GB11nRkhj/08Jp
         1ViUp8vTpzrevyQRoDE2MAk13xa+mD2rhmNr0jKfzSiOJh5Xt8rYo2xEVwYKnl91XkI3
         /ZM4QSrKD5IjRhddfCc0ow4UIevSZJzuVJFTHLV0mhgNtNBYze6lmFF6o2ht5OHUtdNO
         OJgVzbxu+YW1SwO7382Udxu+fNGzK7ocRgWC42IVTaRsWXmOdB8lQHWdDu5AFk+WExFv
         2g8g==
X-Gm-Message-State: AOAM533m44g5XJjHBfx4NNbHWe/UvLWdOfIpqxZNuf0311/u3BYZ/Uei
	aVwp8sVBV9K5m5aYBXUibFZ2UxTIGNI=
X-Google-Smtp-Source: ABdhPJyYU4prEnXJhzuHB63Y/JTgeEvOaLwg6FILHVLLLc0V19dug6gruuuHoLlzJEf87fRxKMpn/w==
X-Received: by 2002:a7b:cf26:: with SMTP id m6mr3753963wmg.71.1602854811762;
        Fri, 16 Oct 2020 06:26:51 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id v6sm2904715wmj.6.2020.10.16.06.26.51
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 16 Oct 2020 06:26:51 -0700 (PDT)
Date: Fri, 16 Oct 2020 13:26:49 +0000
From: Wei Liu <wl@xen.org>
To: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Elena Ufimtseva <elena.ufimtseva@oracle.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/gdbsx: drop stray recursion into tools/include/
Message-ID: <20201016132649.3ib7wiyucbukmvxo@liuwe-devbox-debian-v2>
References: <ece6c5c2-43f8-36d2-370c-37d988baeb87@suse.com>
 <24456.19422.318790.279648@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <24456.19422.318790.279648@mariner.uk.xensource.com>
User-Agent: NeoMutt/20180716

On Thu, Oct 15, 2020 at 02:17:18PM +0100, Ian Jackson wrote:
> Jan Beulich writes ("[PATCH] tools/gdbsx: drop stray recursion into tools/include/"):
> > Doing so isn't appropriate here - this gets done very early in the build
> > process. If the directory is meant to be buildable on its own,
> > different arrangements would be needed.
> > 
> > The issue has become more pronounced by 47654a0d7320 ("tools/include:
> > fix (drop) dependencies of when to populate xen/"), but was there before
> > afaict.
> > 
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>

I tried to apply this one, but git didn't like it.

Jan, feel free to apply it yourself.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:28:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:28:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8085.21526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPmh-0006jD-23; Fri, 16 Oct 2020 13:28:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8085.21526; Fri, 16 Oct 2020 13:28:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPmg-0006j6-V2; Fri, 16 Oct 2020 13:28:38 +0000
Received: by outflank-mailman (input) for mailman id 8085;
 Fri, 16 Oct 2020 13:28:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rWFt=DX=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kTPmf-0006iz-3R
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:28:37 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95324b5f-d30d-4433-ba3f-6a51da44eaf3;
 Fri, 16 Oct 2020 13:28:36 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id i1so2960976wro.1
 for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 06:28:36 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id n66sm2731489wmb.35.2020.10.16.06.28.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 16 Oct 2020 06:28:34 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rWFt=DX=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kTPmf-0006iz-3R
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:28:37 +0000
X-Inumbo-ID: 95324b5f-d30d-4433-ba3f-6a51da44eaf3
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 95324b5f-d30d-4433-ba3f-6a51da44eaf3;
	Fri, 16 Oct 2020 13:28:36 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id i1so2960976wro.1
        for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 06:28:36 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Pj6FiS/1OvmJAiCK2mjsDrJ5Mxj1XafohpUchGS06KI=;
        b=glQoW26fzc+e1v0rCpV2TJgcPU21uCGzMB/ISZUNRr9ZCzevegnAcTWQe9vpLYooNR
         3gmnSD2C2v60guodIEDtKryXbGDa3IzWylPxUd1jw0Yqvxf8thth5i069kodY6SOoczx
         DkpE/eH3X3Hvy3H6+8Ypce/eI1BbVZPuLr66yYIgoXXQcZvHi58BvEtN2qURIkBRSqG+
         y09gIPGt9OmhwCryo7KSBbDM8+XyQwUSrkwpD2VRot02PI7RghWAJCqJBjcCFeWlCcqm
         493PKFKa7msOIR94dhjthwpqyYo3/E5QOg+CbjY6uCiGnlw8fXKmJFBK8cCSNGCllVL6
         3y3g==
X-Gm-Message-State: AOAM531GV71ShRvK2UfeCPbJiF2Sax4dwx2zpPUaSEAJIzNBsxV/NFzN
	43T9uXJfhu9pv/lugf5klM0=
X-Google-Smtp-Source: ABdhPJwZjyNl2jRrQ9wwWJX2gk9ewl8P6HgpBzCOF1u2nk9A80f6P1zCVASahTvR8c3+R6sIPrlaeA==
X-Received: by 2002:a5d:5743:: with SMTP id q3mr3904354wrw.167.1602854915404;
        Fri, 16 Oct 2020 06:28:35 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id n66sm2731489wmb.35.2020.10.16.06.28.34
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Fri, 16 Oct 2020 06:28:34 -0700 (PDT)
Date: Fri, 16 Oct 2020 13:28:33 +0000
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <bertrand.marquis@arm.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3] tools/xenpmd: Fix gcc10 snprintf warning
Message-ID: <20201016132833.fadg2dtj2q6pshrj@liuwe-devbox-debian-v2>
References: <14ac4900dcf4fb9b45ce4f5e3d60de7f7e3602ab.1602753323.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <14ac4900dcf4fb9b45ce4f5e3d60de7f7e3602ab.1602753323.git.bertrand.marquis@arm.com>
User-Agent: NeoMutt/20180716

On Thu, Oct 15, 2020 at 10:16:09AM +0100, Bertrand Marquis wrote:
> Add a check for snprintf return code and ignore the entry if we get an
> error. This should in fact never happen and is more of a trick to make
> gcc happy and prevent compilation errors.
> 
> This is solving the following gcc warning when compiling for arm32 host
> platforms with optimization activated:
> xenpmd.c:92:37: error: '%s' directive output may be truncated writing
> between 4 and 2147483645 bytes into a region of size 271
> [-Werror=format-truncation=]
> 
> This is also solving the following Debian bug:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:31:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:31:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8087.21538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPpT-0007XK-Gj; Fri, 16 Oct 2020 13:31:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8087.21538; Fri, 16 Oct 2020 13:31:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPpT-0007XD-DS; Fri, 16 Oct 2020 13:31:31 +0000
Received: by outflank-mailman (input) for mailman id 8087;
 Fri, 16 Oct 2020 13:31:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rWFt=DX=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kTPpS-0007X8-P5
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:31:30 +0000
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18235c20-78d0-417a-b253-90f932c5f83f;
 Fri, 16 Oct 2020 13:31:29 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id h7so2966233wre.4
 for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 06:31:29 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id m14sm3501750wro.43.2020.10.16.06.31.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 16 Oct 2020 06:31:28 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rWFt=DX=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kTPpS-0007X8-P5
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:31:30 +0000
X-Inumbo-ID: 18235c20-78d0-417a-b253-90f932c5f83f
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=ILev03gENE1fDCXPJ/uLCG5cBWF9ijbz/VU3bAdmti0=;
        b=sE0bPxP/Z0tEev9xpGw0i+lBhjl6zlvHwCttlnxNGDS8Mivs7tmCHw8qt3Mclb61vs
         rI2zbaphZ3eskWpWEtx9IGbrLVQH/WwwN3roWF2+EO1dTiXQ4NdBvwzaxZDhvEvRu+Uf
         Rc2iRMGr/dr232Uu7wqY19fLdMjLGhOU/kUoZ0ZsoacLJepFRSBSrftzKwYa2A28Rmzx
         SjJAs/m34b6sqqgB7Teq+DZZxsMLZ0f65bVtBwr3R+Kl2o40E0Y1abZvf+Vij3Ej+nZY
         cIaoGC9iWDoHiWtkGIPMCRniAOjxnX70lVGrLI+NVZ5yV8NkE4sgLJE1m6zYV/pJ3/VW
         w74w==
X-Gm-Message-State: AOAM533gGIc9L7o/tn23yfwcAk6MIk707HM8JhUI/6MB8fVByEBVMUVE
	xXUseZi3XnYr8g+LU5gSRu4=
X-Google-Smtp-Source: ABdhPJwUD57fyqhy9vPp6E/z3GVHCfQzexQkwf3J3HBaVqdsmVPBe1NA2N9tWyrbtstKvSN6q5M2rQ==
X-Received: by 2002:adf:8b85:: with SMTP id o5mr4149964wra.104.1602855089170;
        Fri, 16 Oct 2020 06:31:29 -0700 (PDT)
Date: Fri, 16 Oct 2020 13:31:27 +0000
From: Wei Liu <wl@xen.org>
To: Ian Jackson <iwj@xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Elena Ufimtseva <elena.ufimtseva@oracle.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/gdbsx: drop stray recursion into tools/include/
Message-ID: <20201016133127.keb6v66fgnuxdxew@liuwe-devbox-debian-v2>
References: <ece6c5c2-43f8-36d2-370c-37d988baeb87@suse.com>
 <24456.19422.318790.279648@mariner.uk.xensource.com>
 <20201016132649.3ib7wiyucbukmvxo@liuwe-devbox-debian-v2>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201016132649.3ib7wiyucbukmvxo@liuwe-devbox-debian-v2>
User-Agent: NeoMutt/20180716

On Fri, Oct 16, 2020 at 01:26:49PM +0000, Wei Liu wrote:
> On Thu, Oct 15, 2020 at 02:17:18PM +0100, Ian Jackson wrote:
> > Jan Beulich writes ("[PATCH] tools/gdbsx: drop stray recursion into tools/include/"):
> > > Doing so isn't appropriate here - this gets done very early in the build
> > > process. If the directory is meant to be buildable on its own,
> > > different arrangements would be needed.
> > > 
> > > The issue has become more pronounced with 47654a0d7320 ("tools/include:
> > > fix (drop) dependencies of when to populate xen/"), but was there before,
> > > afaict.
> > > 
> > > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> I tried to apply this one but git didn't like it.
> 
> Jan, feel free to apply it yourself.

This is already applied. Sorry for the noise.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:34:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:34:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8091.21550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPs8-0007jb-Vx; Fri, 16 Oct 2020 13:34:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8091.21550; Fri, 16 Oct 2020 13:34:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTPs8-0007jU-RR; Fri, 16 Oct 2020 13:34:16 +0000
Received: by outflank-mailman (input) for mailman id 8091;
 Fri, 16 Oct 2020 13:34:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sGIs=DX=eikelenboom.it=linux@srs-us1.protection.inumbo.net>)
 id 1kTPs7-0007jP-Ff
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:34:15 +0000
Received: from server.eikelenboom.it (unknown [91.121.65.215])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2247163-459f-4183-ba54-81c5bd170925;
 Fri, 16 Oct 2020 13:34:13 +0000 (UTC)
Received: from 76-24-144-85.ftth.glasoperator.nl ([85.144.24.76]:54194
 helo=[172.16.1.50]) by server.eikelenboom.it with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <linux@eikelenboom.it>)
 id 1kTPuo-0007a8-4Z; Fri, 16 Oct 2020 15:37:02 +0200
X-Inumbo-ID: b2247163-459f-4183-ba54-81c5bd170925
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=eikelenboom.it; s=20180706; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=+eMXYOxq7wI5J0zUG4bv1bl3oUtIHN3tWi0zuQGQfQE=; b=JaqXdZKVlr8YtMsZjx9HgS3mnT
	yPkYlNhlpIPmWJCKW1fr/LpugwspILSInjs7yerYKMzMsQoeKXIm7hp/pIT3n+DVdaKJu4dnuXcNo
	biQcuhqLdPTcwSDobJeptL9trHYMXqj6Rnh7V4ctJIqtm1XVqeorGSbwSQPNE1RAEKBk=;
Subject: Re: [PATCH v2] hvmloader: flip "ACPI data" to "ACPI NVS" type for
 ACPI table region
To: Jan Beulich <jbeulich@suse.com>,
 Igor Druzhinin <igor.druzhinin@citrix.com>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, iwj@xenproject.org
References: <1602808763-22396-1-git-send-email-igor.druzhinin@citrix.com>
 <ca9ba430-f5d8-f520-e7db-3e8d41cd7d9b@suse.com>
From: Sander Eikelenboom <linux@eikelenboom.it>
Message-ID: <53939fbe-6370-fdf7-9727-398a474b219e@eikelenboom.it>
Date: Fri, 16 Oct 2020 15:34:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <ca9ba430-f5d8-f520-e7db-3e8d41cd7d9b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16/10/2020 08:34, Jan Beulich wrote:
> On 16.10.2020 02:39, Igor Druzhinin wrote:
>> The ACPI specification describes memory marked with the regular
>> "ACPI data" type as reclaimable by the guest. Although the guest shouldn't
>> really reclaim it if it wants kexec or similar functionality to work, there
>> could still be ambiguities in treating these regions as potentially regular
>> RAM.
>>
>> One such example is SeaBIOS, which currently reports "ACPI data" regions as
>> RAM to the guest in its e801 call. It may have the right to do so, as any
>> user of that call is expected to be ACPI-unaware. But the QEMU bootloader
>> later ignores that fact and instead uses e801 to find a place for the
>> initrd, which causes the tables to be erased. While arguably the QEMU
>> bootloader or SeaBIOS need to be fixed / improved here, this is just one
>> example of the potential problems that come from using a reclaimable
>> memory type.
>>
>> Flip the type to "ACPI NVS", which doesn't have this ambiguity and is
>> described by the spec as non-reclaimable (so it can never be treated like
>> RAM).
>>
>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> 

I don't see any stable and/or Fixes: tags, but I assume this will go to
the stable trees (which have (a backport of)
8efa46516c5f4cf185c8df179812c185d3c27eb6 in their staging branches)?

(And as the reporter, it would have been nice to have been CC'ed on the patch.)

--
Sander


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:44:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:44:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8094.21562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQ1b-0000GW-Ta; Fri, 16 Oct 2020 13:44:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8094.21562; Fri, 16 Oct 2020 13:44:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQ1b-0000GP-QQ; Fri, 16 Oct 2020 13:44:03 +0000
Received: by outflank-mailman (input) for mailman id 8094;
 Fri, 16 Oct 2020 13:44:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wl+D=DX=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kTQ1a-0000GK-2A
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:44:02 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.59]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e8f0548-fdc3-458c-9cc4-fa363a163964;
 Fri, 16 Oct 2020 13:44:00 +0000 (UTC)
Received: from AM6PR04CA0047.eurprd04.prod.outlook.com (2603:10a6:20b:f0::24)
 by AM0PR08MB5250.eurprd08.prod.outlook.com (2603:10a6:208:160::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Fri, 16 Oct
 2020 13:43:58 +0000
Received: from VE1EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:f0:cafe::8d) by AM6PR04CA0047.outlook.office365.com
 (2603:10a6:20b:f0::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20 via Frontend
 Transport; Fri, 16 Oct 2020 13:43:58 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT016.mail.protection.outlook.com (10.152.18.115) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Fri, 16 Oct 2020 13:43:56 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64");
 Fri, 16 Oct 2020 13:43:56 +0000
Received: from d2702fef3c45.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7685BBA7-6028-4D58-8EF1-D222E626ACDF.1; 
 Fri, 16 Oct 2020 13:43:50 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d2702fef3c45.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 16 Oct 2020 13:43:50 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB4540.eurprd08.prod.outlook.com (2603:10a6:10:c1::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Fri, 16 Oct
 2020 13:43:48 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.021; Fri, 16 Oct 2020
 13:43:48 +0000
X-Inumbo-ID: 2e8f0548-fdc3-458c-9cc4-fa363a163964
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2YBwo7/KnBKZGR/GafM7NrcqA6CevgypttbbL0iwk1w=;
 b=6nL1UqXzEEp+L6D4Sk90xTLgQ8VmPm4qI+2qwuX6zB2opcBgvAW9Ymh1uSaEZWEORkkIOHeCXm3WbtaTnrc3O9P5buB+GB8BtbScbqfqcGWyWLarq3yNIWDs39KmdKWbBomVMH195y9wurU1EXs1JDPUDIUJJX+z5ppb0DqaJ8s=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 508a43c2fb0d6a21
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H3EALQsx5mFS6mMBkEIjXXFu3bSYTJQRE3tmYLMQL/vTszw6NMbqYuivEdw0fxzL1DxSIXVBCl1+YaznnK+AW+MMXtiHeTQ+1TUlXe6HP1HUDAa0yHm5+5ad8Rb2MeDqRGuUvLtt7KXYaM5eCQYUWCemSRk8rJRdGak5TEqUfC+LaA2L+AIrgkF7BAJzzZ04wMQJBJAdahW+wZVAdRgTywQamI881/qMJzSdozyplhI27rERBgwTzcVZp4xtovtZZO8MOdLhJJMWQ/M78mcetDwM8ZLQwsiCNKNrkoQV5WlMn2pnkB+QByLsBZLUSmq4pTOULO4wCRZj/j34SXTvAg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2YBwo7/KnBKZGR/GafM7NrcqA6CevgypttbbL0iwk1w=;
 b=A2dQQMGy1JXBzHp31NsVWOfl5zzDejPxU1/XzlFodQzAKjTJa0SWld3DS111Eda3J0AxT0e05V492ec6I/3zV3SwbjGHj2kAsA1RZLEIdY8wsIENvjNLYy9ulWas+96BG5pB0HCR+cxTjhsHp1a29IH+3zm51rNYLgbeNEOzhDj53jnHUTXSNkQDpUxInaSge54d21uR/wa7N9ZTCIisJlS9LWXjk1brkoYGAJ2+J8TA+1bkJSRbEzwSAy2a4uSh6O886+KJEjVIO3tcDj81rI2JAXFZKHOWQkMTAGgvRoBY2kRVU/jDsqWac4Zej8QUmBBoGpRlcW/ty6NwJlg8XA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Wei Liu <wl@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Ian
 Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v3] tools/xenpmd: Fix gcc10 snprintf warning
Thread-Topic: [PATCH v3] tools/xenpmd: Fix gcc10 snprintf warning
Thread-Index: AQHWotP5wvQAAnPEZ06mRJYSlYfV46maOtSAgAAEQoA=
Date: Fri, 16 Oct 2020 13:43:48 +0000
Message-ID: <E8ECEA42-842F-4F66-BF86-1FB5F787D906@arm.com>
References:
 <14ac4900dcf4fb9b45ce4f5e3d60de7f7e3602ab.1602753323.git.bertrand.marquis@arm.com>
 <20201016132833.fadg2dtj2q6pshrj@liuwe-devbox-debian-v2>
In-Reply-To: <20201016132833.fadg2dtj2q6pshrj@liuwe-devbox-debian-v2>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 1b00998b-c334-4197-8a43-08d871d98785
x-ms-traffictypediagnostic: DBBPR08MB4540:|AM0PR08MB5250:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB5250430E033C3F934E8E5FE39D030@AM0PR08MB5250.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:5236;OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 y0V7kXjSCOCeYCY7tHHpUKWOF/mCJ/rCIjx8DdQNKvUsssZT+aAHxlDbGJULB4zxOAAbhQ75YcKhwizo5DbWjJQLoZFzODR42Ey66NprHTMneyj5NlHL0gNhgYLXBbVBris2qxFFOmps0A1QsdBmi8eA7SJqoiFGu/8npXZ180y67Q5UXTLLLj6fge3mbufXA/PHTrmnkOYQ6G+KW1xx39e3ZmV9ibH945pTqJlNmnYMQPjntjGpFwpfCZqLX1/99FdmgDAz66wvO9cpRC3T0cqQHgGx4cK5rZZVT2kXSz+b6jcuQVpQbaEjBlznCy/Z+PmIjjdqnIla3PdsAGsTlpICsjFImplW00OUFthRvasl2n4x4pc8TrJ0JmQckSEcUJVJ28leDWPlP+t4iN0p1Q==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(39860400002)(346002)(376002)(396003)(6506007)(54906003)(478600001)(36756003)(966005)(53546011)(4326008)(316002)(71200400001)(8936002)(26005)(2906002)(8676002)(4744005)(5660300002)(66946007)(91956017)(33656002)(76116006)(64756008)(66476007)(186003)(83380400001)(6486002)(86362001)(6512007)(6916009)(66446008)(2616005)(66556008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 BsLhek1evqpJDcl2vVghsfaua4utROSCWDGwSyspO52cgdxSSCUBgPZPvUyDiIm8Ohxbv1eXi2w78VTE+c14OeRJ7RHaSXxDuOOjvrZSSf+u0kpouQmXJfsXZWsuoa4OAZA39ebdFXotn4jtDSbh/EyYn3ND9XBZllsm4Np3kvNIQcJ6wqtsZM7ow14TxX55FU7Zi9L+hnV6QhdIQelmlijFkTodbqFrv3iFiSouDh2tfIg8SRvIsI5Z/4hGD1HDlkku31eQVW5dGGQnw8fbnSEioidqS5ngvqxJwCOtYvUTZkJt8Fh0HpSgqG1GSErrwz2cxMLoPD1u7tUBinFJHLYq91avwuarj7CLSXDjCsiIgtBso0tsVW2ghrmJ5X3BkyyHTdUoyfPN1jEmIq5RQHI1SkOAyJm1BdqwEMn1SXZgdGPE7uj8cRzWRLYGR54iFabON3tQ4V/OxOevu79K1rS+Ej98kA3p98cB3UQkPBHqd6M1ex716dxSlSYBQUwIYdUzRzpaUH28ljK3b1K9OHUEXbYauqrn9Yie+WqyJHp9y5bOdnGGnKHNDyWCsgDjcBS6GUMq00ScfynuAOs1fep4xjdyRJkcRWd1MSRbUXl9nge/i3azFVi52ZoWJyUt+481CFIlKHnyaaTbwRuFcg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <47CBC0C6CC6F3A46BFDBBD44B2C26578@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4540
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	3357a7dd-4440-41a4-9464-08d871d98267
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	iuS8IkjfZCPKsZtAgDlBEdZz1iRYFcu9nZDoOASzpxa3l/KnGpL1leneH58e1qFUqinKZJbkPwnh3idXvnzJF+pntRvBPeE9SCr3UHdKPTvGADfCVEJJJbODV7V8bkwmgXUovqkHOpsgp/fDMBLt9pkF6ecpyQJFaoSvLSg3rii0JZPrIOtq9EM7U+5B0AqHQUhoqgeEuufiu0dLksagmjqfv9eqNfsppgCAxWHb1qR+RjAAzTEF7+D9OFeuvTUu+D2J/RbunIcP6CaP6qN56FnUiywZZQ0Bz7aEIsrgby8kyL0k+NObDCxVhr0nU5SR855BZ0YErXs1DHtVkCSGEShVHnwmmJ2GtXuIzi0l98uzDcU2T9Un1ceFc1aVmcahBv6awQXALc5lvLHXX0iE/T9lQVMpVjcEhELBdX72VtR7O7JlyXBLIAx5N/qUaTIgEUIKEP4vg5PZ62wX3SaSskDq1qHM/8fbGiap87joX5M=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39860400002)(136003)(376002)(346002)(46966005)(54906003)(316002)(47076004)(336012)(2616005)(2906002)(478600001)(86362001)(4326008)(966005)(6862004)(70586007)(70206006)(36756003)(6486002)(53546011)(6506007)(186003)(26005)(33656002)(8936002)(6512007)(8676002)(356005)(82740400003)(81166007)(5660300002)(82310400002)(83080400001)(4744005)(36906005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 16 Oct 2020 13:43:56.6943
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1b00998b-c334-4197-8a43-08d871d98785
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5250



> On 16 Oct 2020, at 14:28, Wei Liu <wl@xen.org> wrote:
>
> On Thu, Oct 15, 2020 at 10:16:09AM +0100, Bertrand Marquis wrote:
>> Add a check for the snprintf return code and ignore the entry if we get an
>> error. This should in fact never happen and is more a trick to make gcc
>> happy and prevent compilation errors.
>>
>> This is solving the following gcc warning when compiling for arm32 host
>> platforms with optimization activated:
>> xenpmd.c:92:37: error: '%s' directive output may be truncated writing
>> between 4 and 2147483645 bytes into a region of size 271
>> [-Werror=format-truncation=]
>>
>> This is also solving the following Debian bug:
>> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
>>
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Acked-by: Wei Liu <wl@xen.org>

Thanks :-)

Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:48:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:48:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8109.21585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQ5l-0000Xw-LY; Fri, 16 Oct 2020 13:48:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8109.21585; Fri, 16 Oct 2020 13:48:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQ5l-0000Xp-Ib; Fri, 16 Oct 2020 13:48:21 +0000
Received: by outflank-mailman (input) for mailman id 8109;
 Fri, 16 Oct 2020 13:48:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4tE9=DX=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kTQ5k-0000Xk-87
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:48:20 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27a63c06-51d1-4f0e-aaa2-7f1d7959f739;
 Fri, 16 Oct 2020 13:48:19 +0000 (UTC)
X-Inumbo-ID: 27a63c06-51d1-4f0e-aaa2-7f1d7959f739
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602856100;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=/uTGAOHVM6OgzZGlniKnsA3XGb4PTOM3COnWEP9stio=;
  b=FXWt+3i4q9IyTy/TtDkLV52dpq1B1tjPpOSHJXIDPCL4Rk6CXNOy81ST
   7k5HTmZxz+dV+NfL9ErJ/0SOFTFkY2ee0JS9BUM89Co5zrTpu5k46kq9m
   1BASVVOsDYKZ8Gz8LVTlJe4qwJ4F8Bodu2DB+tHhPysSFfIXTUS9fctZy
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: hk46Jgw93JoOWCt+ao15E/e/zU5NGbgvGR1UuRZIAakSINGhe5BltflNeD+ZbcWBeABmpn7qlB
 cFTY2vvNumndtYjgJtjp2R7XPtxDJ4EltqVmoDcxynJ5lfC3zC39nri5q8wjcEh6OFs7oD/VBy
 UrFAjxDcB5bjCUo446jXLSMxe4Qk9YqwbPWN1orutKkICP19ED9POs7+Uk6ShytMsvzP1CCvmR
 rwyE3UQhJbw3J/RiDMQxaD/+KiKJplVRCfrB9tmwRFGk8UhrLdT6fZmFzWrgQj6lelGR4uKXkZ
 XZ0=
X-SBRS: 2.5
X-MesageID: 29175189
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,383,1596513600"; 
   d="scan'208";a="29175189"
Subject: Re: [PATCH v2] hvmloader: flip "ACPI data" to "ACPI NVS" type for
 ACPI table region
To: Sander Eikelenboom <linux@eikelenboom.it>, Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, <andrew.cooper3@citrix.com>,
	<roger.pau@citrix.com>, <wl@xen.org>, <iwj@xenproject.org>
References: <1602808763-22396-1-git-send-email-igor.druzhinin@citrix.com>
 <ca9ba430-f5d8-f520-e7db-3e8d41cd7d9b@suse.com>
 <53939fbe-6370-fdf7-9727-398a474b219e@eikelenboom.it>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <fec9f2f9-0980-c28d-b4b4-c7c2fc9928b7@citrix.com>
Date: Fri, 16 Oct 2020 14:48:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <53939fbe-6370-fdf7-9727-398a474b219e@eikelenboom.it>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16/10/2020 14:34, Sander Eikelenboom wrote:
> On 16/10/2020 08:34, Jan Beulich wrote:
>> On 16.10.2020 02:39, Igor Druzhinin wrote:
>>> The ACPI specification describes memory marked with the regular
>>> "ACPI data" type as reclaimable by the guest. Although the guest shouldn't
>>> really reclaim it if it wants kexec or similar functionality to work, there
>>> could still be ambiguities in treating these regions as potentially regular
>>> RAM.
>>>
>>> One such example is SeaBIOS, which currently reports "ACPI data" regions as
>>> RAM to the guest in its e801 call. It may have the right to do so, as any
>>> user of that call is expected to be ACPI-unaware. But the QEMU bootloader
>>> later ignores that fact and instead uses e801 to find a place for the
>>> initrd, which causes the tables to be erased. While arguably the QEMU
>>> bootloader or SeaBIOS need to be fixed / improved here, this is just one
>>> example of the potential problems that come from using a reclaimable
>>> memory type.
>>>
>>> Flip the type to "ACPI NVS", which doesn't have this ambiguity and is
>>> described by the spec as non-reclaimable (so it can never be treated like
>>> RAM).
>>>
>>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>
>>
> 
> I don't see any stable and/or Fixes: tags, but I assume this will go to
> the stable trees (which have (a backport of)
> 8efa46516c5f4cf185c8df179812c185d3c27eb6 in their staging branches)?

Yes, this should go to the stable branches as well. I don't usually see the
Fixes: tag being used on xen-devel (not sure whether that's intentional or
simply not customary). It's also questionable whether this is a fix or a
workaround for a compatibility issue.

Igor


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:54:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:54:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8113.21600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQB9-0001RH-C4; Fri, 16 Oct 2020 13:53:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8113.21600; Fri, 16 Oct 2020 13:53:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQB9-0001RA-7M; Fri, 16 Oct 2020 13:53:55 +0000
Received: by outflank-mailman (input) for mailman id 8113;
 Fri, 16 Oct 2020 13:53:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTQB8-0001Qi-OF
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:53:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4151deba-641f-4f03-8836-a66359002be2;
 Fri, 16 Oct 2020 13:53:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTQAz-0005gB-Sf; Fri, 16 Oct 2020 13:53:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTQAz-0000Y1-JI; Fri, 16 Oct 2020 13:53:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTQAz-0001gT-Io; Fri, 16 Oct 2020 13:53:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kTQB8-0001Qi-OF
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:53:54 +0000
X-Inumbo-ID: 4151deba-641f-4f03-8836-a66359002be2
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4151deba-641f-4f03-8836-a66359002be2;
	Fri, 16 Oct 2020 13:53:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=15YDlMFwxq5N6+8TQ8n5wZ7exQRU7DRqfSrQ1/dO74E=; b=Xxvldl95t61IAvTYCT6O5leV8e
	3h8L+aHHFWvrm3s+1uwVZ6E3I5Yahyb+Xwz21k7myy+gR3UlPZ5+z07htdL9kmpA5JuyG1Uau0BRO
	hf2sY2seyvkQ+Ai4dxUAUTGEbZ8VvtwTohOddDzm/MUnRAGcqtdw4c0485CyJb4HvUQE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTQAz-0005gB-Sf; Fri, 16 Oct 2020 13:53:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTQAz-0000Y1-JI; Fri, 16 Oct 2020 13:53:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTQAz-0001gT-Io; Fri, 16 Oct 2020 13:53:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm
Message-Id: <E1kTQAz-0001gT-Io@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 13:53:45 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm
testid debian-hvm-install

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155896/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install --summary-out=tmp/155896.bisection-summary --basis-template=152631 --blessings=real,real-bisect qemu-mainline test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm debian-hvm-install
Searching for failure / basis pass:
 155877 fail [host=albana1] / 155509 [host=fiano0] 155483 [host=rimava1] 155434 [host=chardonnay1] 155318 [host=huxelrebe0] 155184 [host=albana0] 155098 [host=elbling1] 155018 [host=chardonnay0] 154629 [host=fiano0] 154607 [host=rimava1] 154583 ok.
Failure / basis pass flights: 155877 / 154583
(tree with no url: minios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 19c87b7d446c3273e84b238cb02cd1c0ae69c43e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e545512b5e26f1e69fcd4c88df3c12853946dcdb 58a44be024f69d2e4d2b58553529230abdd3935e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ea9af51479fe04955443f0d366376a1008f07c94-19c87b7d446c3273e84b238cb02cd1c0ae69c43e git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#5df6c87e8080e0021e975c8387baa20cfe43c932-e545512b5e26f1e69fcd4c88df3c12853946dcdb git://xenbits.xen.org/osstest/seabios.git#155821a1990b6de78dde5f98fa5ab90e802021e0-58a44be\
 024f69d2e4d2b58553529230abdd3935e git://xenbits.xen.org/xen.git#baa4d064e91b6d2bcfe400bdf71f83b961e4c28e-25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
>From git://cache:9419/git://xenbits.xen.org/xen
   a7952a320c..0dfddb2116  staging    -> origin/staging
Loaded 43070 nodes in revision graph
Searching for test results:
 154583 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 154607 [host=rimava1]
 154629 [host=fiano0]
 155018 [host=chardonnay0]
 155098 [host=elbling1]
 155184 [host=albana0]
 155318 [host=huxelrebe0]
 155434 [host=chardonnay1]
 155483 [host=rimava1]
 155509 [host=fiano0]
 155518 fail irrelevant
 155544 fail irrelevant
 155585 fail irrelevant
 155613 fail irrelevant
 155645 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155665 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155675 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155695 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4a7c0bd9dcb08798c6f82e55b5a3423f7ee669f1 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155703 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155713 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155729 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ae511331e0fb1625ba649f377e81e487de3a5531 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 48a340d9b23ffcf7704f2de14d1e505481a84a1c 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155743 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155754 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155769 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5d1af380d3e4bd840fa324db33ca4f739136e654 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155785 fail irrelevant
 155798 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 155803 fail irrelevant
 155804 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 70c2f10fde5b67b0d7d62ba7ea3271fc514ebcc4 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 eb94b81a94bce112e6b206df846c1551aaf6cab6 849c5e50b6f474df6cc113130575bcdccfafcd9e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155808 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 69e95b9efed520e643b9e5b0573180aa7c5ecaca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a1d22c668a7662289b42624fe2aa92c9a23df1d2 849c5e50b6f474df6cc113130575bcdccfafcd9e 0241809bf838875615797f52af34222e5ab8e98f
 155812 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2687fdb7571a444b5af3509574b659d35ddd601 849c5e50b6f474df6cc113130575bcdccfafcd9e 93508595d588afe9dca087f95200effb7cedc81f
 155813 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 671ad7c4468f795b66b4cd8f376f1b1ce6701b63 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155802 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9380177354387f03c8ff9eadb7ae94aa453b9469 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 96292515c07e3a99f5a29540ed2f257b1ff75111 c685fe3ff2d402caefc1487d99bb486c4a510b8b 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155817 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 36d9c2883e55c863b622b99f0ebb5143f0001401 849c5e50b6f474df6cc113130575bcdccfafcd9e 8ef6345ef557cc2c47298217635a3088eaa59893
 155820 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 155824 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9380177354387f03c8ff9eadb7ae94aa453b9469 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 96292515c07e3a99f5a29540ed2f257b1ff75111 c685fe3ff2d402caefc1487d99bb486c4a510b8b 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155827 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0d2a4545bf7e763984d3ee3e802617544cb7fc7a 849c5e50b6f474df6cc113130575bcdccfafcd9e 59b27f360e3d9dc0378c1288e67a91fa41a77158
 155830 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2d8ca4f90eaeb61bd7e9903b56bf412f0d187137 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b23317eec4715aa62de9a6e5490a01122c8eef0e 849c5e50b6f474df6cc113130575bcdccfafcd9e bdb380e1dbdc6b76576ab6db0b8e946cc95edc1c
 155835 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d8ab884fe9b4dd148980bf0d8673187f8fb25887 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d17f305a2649fccdc50956b3381456a8fd318903 849c5e50b6f474df6cc113130575bcdccfafcd9e 11852c7bb070a18c3708b4c001772a23e7d4fc27
 155836 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7cd77fb02b9a2117a56fed172f09a1820fcd6b0b 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 5dba8c2f23049aa68b777a9e7e9f76c12dd00012
 155819 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5d0a827122cccd1f884faf75b2a065d88a58bce1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 57c98ea9acdcef5021f5671efa6475a5794a51c4 c685fe3ff2d402caefc1487d99bb486c4a510b8b 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155838 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 52dbaaeace647961bae61634c4be49ea2ca3d5cd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 213057383c9f73a17cfe635b204d88e11f918df1 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 358d57d411ee759a5a9dbf367179a9ac37faf0b3
 155840 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5d0a827122cccd1f884faf75b2a065d88a58bce1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 57c98ea9acdcef5021f5671efa6475a5794a51c4 c685fe3ff2d402caefc1487d99bb486c4a510b8b 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155843 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 92d09502676678c8ebb1ad830666b323d3c88f9d 41289b83ed3847dc45e7af3f1b7cb3cec6b6e7a5 4bdbf746ac9152e70f264f87db4472707da805ce
 155845 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1d058c3e86b079a2e207bb022fd7a97814c9a04f 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d8053e73fb2d295279b5cc4de7dc06bd581241ca 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5bcac985498ed83d89666959175ca9c9ed561ae1
 155883 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 c0ddc8634845aba50774add6e4b73fdaffc82656
 155860 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f7f1d916b22306c35ab9c090aab5233a91b4b7f9 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 5a37207df52066efefe419c677b089a654d37afc
 155864 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 dd5c7e3c5282b084daa5bbf0ec229cec699b2c17 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0fc0142828b5bc965790a1c5c6e241897d3387cb 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 a6732807d335239fc29bd953538affc458bcc197
 155866 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 910093d54fc758e7d69261b344fdc8da3a7bd81e
 155870 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 00b51fcb1ed7d2d5c2ea2f7dc598187d17c6f2e1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 2785b2a9e04abc148e1c5259f4faee708ea356f4
 155841 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b9b7406c43e9d29bde3e9679c1b039cb91109097 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 57c98ea9acdcef5021f5671efa6475a5794a51c4 c685fe3ff2d402caefc1487d99bb486c4a510b8b 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155876 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 e045199c7c9c5433d7f1461a741ed539a75cbfad
 155878 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 62bcdc4edbf6d8c6e8a25544d48de22ccf75310d
 155879 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8d385b247bca40ece40c9279391054bc98934325
 155884 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8d385b247bca40ece40c9279391054bc98934325
 155877 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 19c87b7d446c3273e84b238cb02cd1c0ae69c43e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e545512b5e26f1e69fcd4c88df3c12853946dcdb 58a44be024f69d2e4d2b58553529230abdd3935e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155887 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 c0ddc8634845aba50774add6e4b73fdaffc82656
 155889 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5df6c87e8080e0021e975c8387baa20cfe43c932 155821a1990b6de78dde5f98fa5ab90e802021e0 baa4d064e91b6d2bcfe400bdf71f83b961e4c28e
 155890 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 19c87b7d446c3273e84b238cb02cd1c0ae69c43e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e545512b5e26f1e69fcd4c88df3c12853946dcdb 58a44be024f69d2e4d2b58553529230abdd3935e 25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
 155893 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8d385b247bca40ece40c9279391054bc98934325
 155896 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 c0ddc8634845aba50774add6e4b73fdaffc82656
Searching for interesting versions
 Result found: flight 154583 (pass), for basis pass
 Result found: flight 155877 (fail), for basis failure
 Repro found: flight 155889 (pass), for basis pass
 Repro found: flight 155890 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ea9af51479fe04955443f0d366376a1008f07c94 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4dad0a9aa818698e0735c8352bf7925a1660df6f 4ea6aa9471f79cc81f957d6c0e2bb238d24675e5 8d385b247bca40ece40c9279391054bc98934325
No revisions left to test, checking graph state.
 Result found: flight 155879 (pass), for last pass
 Result found: flight 155883 (fail), for first failure
 Repro found: flight 155884 (pass), for last pass
 Repro found: flight 155887 (fail), for first failure
 Repro found: flight 155893 (pass), for last pass
 Repro found: flight 155896 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  c0ddc8634845aba50774add6e4b73fdaffc82656
  Bug not present: 8d385b247bca40ece40c9279391054bc98934325
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155896/


  commit c0ddc8634845aba50774add6e4b73fdaffc82656
  Author: Jan Beulich <jbeulich@suse.com>
  Date:   Tue Sep 22 15:51:28 2020 +0200
  
      evtchn: convert per-channel lock to be IRQ-safe
      
      ... in order for send_guest_{global,vcpu}_virq() to be able to make use
      of it.
      
      This is part of XSA-343.
      
      Signed-off-by: Jan Beulich <jbeulich@suse.com>
      Acked-by: Julien Grall <jgrall@amazon.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.884447 to fit
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
155896: tolerable FAIL

flight 155896 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155896/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail baseline untested


jobs:
 build-i386-libvirt                                           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 13:59:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 13:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8117.21612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQG4-0001eE-29; Fri, 16 Oct 2020 13:59:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8117.21612; Fri, 16 Oct 2020 13:59:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQG3-0001e7-VA; Fri, 16 Oct 2020 13:58:59 +0000
Received: by outflank-mailman (input) for mailman id 8117;
 Fri, 16 Oct 2020 13:58:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wl+D=DX=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kTQG2-0001e2-Ba
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 13:58:58 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 61c52061-1400-4d69-b071-feed78196bf4;
 Fri, 16 Oct 2020 13:58:57 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 338C530E;
 Fri, 16 Oct 2020 06:58:57 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7C6273F71F;
 Fri, 16 Oct 2020 06:58:56 -0700 (PDT)
X-Inumbo-ID: 61c52061-1400-4d69-b071-feed78196bf4
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: Print message if reset did not work
Date: Fri, 16 Oct 2020 14:58:47 +0100
Message-Id: <74a7359983a9d25ca62a6edd41805ab92918e2a1.1602856636.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

If for some reason the hardware reset is not working, print a message
every 5 seconds to warn the user that the system did not reset properly
and that Xen is still looping.

The message is printed indefinitely so that someone connecting to a serial
console with no history will see it appear within 5 seconds.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/shutdown.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
index b32f07ec0e..600088ec48 100644
--- a/xen/arch/arm/shutdown.c
+++ b/xen/arch/arm/shutdown.c
@@ -36,6 +36,7 @@ void machine_halt(void)
 void machine_restart(unsigned int delay_millisecs)
 {
     int timeout = 10;
+    unsigned long count = 0;
 
     watchdog_disable();
     console_start_sync();
@@ -59,6 +60,9 @@ void machine_restart(unsigned int delay_millisecs)
     {
         platform_reset();
         mdelay(100);
+        if ( (count % 50) == 0 )
+            printk(XENLOG_ERR "Xen: Platform reset did not work properly!!\n");
+        count++;
     }
 }
 
-- 
2.17.1
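For what it's worth, the arithmetic behind the modulo value can be sketched standalone (illustrative names, not Xen code): with a 100 ms delay per loop iteration, testing `count % 50 == 0` fires once per 5 seconds.

```c
#include <stdbool.h>

/* The reset loop delays 100 ms per iteration; warning when
 * count % 50 == 0 therefore emits one message every 5000 ms. */
#define DELAY_MS    100
#define PERIOD_MS   5000
#define WARN_EVERY  (PERIOD_MS / DELAY_MS)   /* == 50 iterations */

static bool should_warn(unsigned long count)
{
    return (count % WARN_EVERY) == 0;
}
```

Note the warning fires on the very first iteration (count == 0) as well, so a user watching the console from the start also sees it immediately.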



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 14:41:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 14:41:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8136.21629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQun-0005wt-CK; Fri, 16 Oct 2020 14:41:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8136.21629; Fri, 16 Oct 2020 14:41:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTQun-0005wm-9B; Fri, 16 Oct 2020 14:41:05 +0000
Received: by outflank-mailman (input) for mailman id 8136;
 Fri, 16 Oct 2020 14:41:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTQul-0005wJ-6S
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 14:41:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 46455daa-7780-46dd-81c1-fb88a768010f;
 Fri, 16 Oct 2020 14:40:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTQue-0006lL-HN; Fri, 16 Oct 2020 14:40:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTQue-0002IA-93; Fri, 16 Oct 2020 14:40:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTQue-0007Aa-8Z; Fri, 16 Oct 2020 14:40:56 +0000
X-Inumbo-ID: 46455daa-7780-46dd-81c1-fb88a768010f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f3ikqCj4mLgMThuWlzzYZEJf7LsO0xAomFTjWFA+R64=; b=kub9jRNJz5GrpVQco7H8P9q6Mc
	+l6NnaIQG2Lsgxvjoa6BX3452+evvXgWvOM5ewJEVcwTQj4JbMMEfosbj4vLfgRlPsTDhZu1FZIi9
	OSmiU2kyEX0dgAMHr8oMqJkOyiwC3IGdUJEVPDwgpZOc4rRSfT867NoGkoiGyX0G/Qqk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155895-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155895: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a7952a320c1e202a218702bfdb14f75132f04894
X-Osstest-Versions-That:
    xen=6ee2e66674f36b6d27a95f4ddf27226905cc63a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 14:40:56 +0000

flight 155895 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155895/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a7952a320c1e202a218702bfdb14f75132f04894
baseline version:
 xen                  6ee2e66674f36b6d27a95f4ddf27226905cc63a4

Last test of basis   155869  2020-10-15 18:00:57 Z    0 days
Testing same since   155895  2020-10-16 12:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6ee2e66674..a7952a320c  a7952a320c1e202a218702bfdb14f75132f04894 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 15:05:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 15:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8151.21665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRIH-00080d-OM; Fri, 16 Oct 2020 15:05:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8151.21665; Fri, 16 Oct 2020 15:05:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRIH-00080W-LR; Fri, 16 Oct 2020 15:05:21 +0000
Received: by outflank-mailman (input) for mailman id 8151;
 Fri, 16 Oct 2020 15:05:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Q1u=DX=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kTRIG-00080R-K3
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 15:05:20 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 592bf120-8a6b-4b6f-a002-3f45eb8d469d;
 Fri, 16 Oct 2020 15:05:19 +0000 (UTC)
X-Inumbo-ID: 592bf120-8a6b-4b6f-a002-3f45eb8d469d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602860718;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=lzQYHByc9rCyTSqeD2dVH4ACwWaigXIrDdagCmA+pJc=;
  b=VvDUQaqrdy5Rt+lplW1nfyapeLFLkmfoHuLYoDY4hS0BpGtMWrMs3Fae
   8rHlEG1oDSl4kwYp0htMm4GKtjXk0zwrs+fUXEc0FAiPkktG8x76DfLO4
   KBkqGdSUE08svWBZuUDLvWYPnT0bRXATEMzbo8EcoIvU0H4YwUtYymV7l
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 8xrsJXClWkgNoBBsL3od8vw5BuwDctmtaWKaLGuyACzzj+7cYljg5NTteI5HRnhC7FxKWPDppG
 pflLdsXdkaofdbUt2IAxnH75q6vBTJ85FLlv3LNCoJUFupdHYhKBseZ0wtaTLEbajzXDSOFbns
 WUysT5hhQDlpfO7t1WvSOlWBxu6ZDxWPKa0DUDKeSWlX51e6mPJnbWsC32WpOPBvhTteZprxM0
 QHgJOMzsHmxmyZdhenrk5Z4cIMllKL8Z73PyJQpHTd7t+KtXL0k+noEOBxuhJmhhSGf9TQYVwH
 YGU=
X-SBRS: 2.5
X-MesageID: 30227108
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,383,1596513600"; 
   d="scan'208";a="30227108"
Date: Fri, 16 Oct 2020 16:05:14 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: <qemu-devel@nongnu.org>, Claudio Fontana <cfontana@suse.de>, Stefano
 Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, "open
 list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v2 3/3] accel: Add xen CpusAccel using dummy-cpus
Message-ID: <20201016150514.GA3105841@perard.uk.xensource.com>
References: <20201013140511.5681-1-jandryuk@gmail.com>
 <20201013140511.5681-4-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20201013140511.5681-4-jandryuk@gmail.com>

On Tue, Oct 13, 2020 at 10:05:11AM -0400, Jason Andryuk wrote:
> Xen was broken by commit 1583a3898853 ("cpus: extract out qtest-specific
> code to accel/qtest").  Xen relied on qemu_init_vcpu() calling
> qemu_dummy_start_vcpu() in the default case, but that was replaced by
> g_assert_not_reached().
> 
> Add a minimal "CpusAccel" for Xen using the dummy-cpus implementation
> used by qtest.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 15:13:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 15:13:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8154.21678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRQ0-0000Vn-Jz; Fri, 16 Oct 2020 15:13:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8154.21678; Fri, 16 Oct 2020 15:13:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRQ0-0000Vg-Fu; Fri, 16 Oct 2020 15:13:20 +0000
Received: by outflank-mailman (input) for mailman id 8154;
 Fri, 16 Oct 2020 15:13:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S5YV=DX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kTRPy-0000VF-Ql
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 15:13:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 456f4deb-aac8-4ad1-8bea-3215f2602c34;
 Fri, 16 Oct 2020 15:13:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 946A6AF9A;
 Fri, 16 Oct 2020 15:13:16 +0000 (UTC)
X-Inumbo-ID: 456f4deb-aac8-4ad1-8bea-3215f2602c34
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1602861196;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HOgNhQtd3aRqAUNIbNLdv1bMju80JaCznmCPD38hmJg=;
	b=H7yC3e2hjheX73srcLgf0TABeesGUpSljmZg7sze4/yUg1oI5TthkGe9W04ocrEpW9BnuU
	1Cxn3FWwJHtGki2PPbb/Vv0+QuOkkLg4YjMlImsXeciIyalbXJnZ7kwmMVSGgLjqUvPlzS
	ear85P+PB5ncbY/RZJB0dS2OxPvXa94=
Subject: Re: [PATCH v2] hvmloader: flip "ACPI data" to "ACPI NVS" type for
 ACPI table region
To: Igor Druzhinin <igor.druzhinin@citrix.com>,
 Sander Eikelenboom <linux@eikelenboom.it>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 roger.pau@citrix.com, wl@xen.org, iwj@xenproject.org
References: <1602808763-22396-1-git-send-email-igor.druzhinin@citrix.com>
 <ca9ba430-f5d8-f520-e7db-3e8d41cd7d9b@suse.com>
 <53939fbe-6370-fdf7-9727-398a474b219e@eikelenboom.it>
 <fec9f2f9-0980-c28d-b4b4-c7c2fc9928b7@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <73de9a13-0f93-6152-f256-37aa3510d6fd@suse.com>
Date: Fri, 16 Oct 2020 17:13:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <fec9f2f9-0980-c28d-b4b4-c7c2fc9928b7@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.10.2020 15:48, Igor Druzhinin wrote:
> On 16/10/2020 14:34, Sander Eikelenboom wrote:
>> On 16/10/2020 08:34, Jan Beulich wrote:
>>> On 16.10.2020 02:39, Igor Druzhinin wrote:
>>>> ACPI specification contains statements describing memory marked with regular
>>>> "ACPI data" type as reclaimable by the guest. Although the guest shouldn't
>>>> really do it if it wants kexec or similar functionality to work, there
>>>> could still be ambiguities in treating these regions as potentially regular
>>>> RAM.
>>>>
>>>> One such example is SeaBIOS which currently reports "ACPI data" regions as
>>>> RAM to the guest in its e801 call. Which it might have the right to do as any
>>>> user of this is expected to be ACPI unaware. But a QEMU bootloader later seems
>>>> to ignore that fact and is instead using e801 to find a place for initrd which
>>>> causes the tables to be erased. While arguably QEMU bootloader or SeaBIOS need
>>>> to be fixed / improved here, that is just one example of the potential problems
>>>> from using a reclaimable memory type.
>>>>
>>>> Flip the type to "ACPI NVS" which doesn't have this ambiguity in it and is
>>>> described by the spec as non-reclaimable (so cannot ever be treated like RAM).
>>>>
>>>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>>
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>>
>>>
>>
>> I don't see any stable and or fixes tags, but I assume this will go to
>> the stable trees (which have (a backport of)
>> 8efa46516c5f4cf185c8df179812c185d3c27eb6 in their staging branches) ?

Yes, I intend to queue this up, as I did the backport of the
earlier one.

> Yes, this should go to the stable branches as well. I don't usually see Fixes:
> tag being used on xen-devel (not sure if it's intentional or simply not
> customary).

Go look again - there's an increasing amount of use of it.

> It's also questionable whether it's a fix or a workaround for an
> issue with compatibility.

Indeed - it is for this reason that I didn't ask for such a
tag to be added here.

Jan
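
[Editorial aside: the two e820 types discussed in this thread map to fixed numeric values; a minimal illustrative sketch of the convention, not hvmloader's actual tables:]

```c
/* e820 memory-map entry types relevant to the patch under discussion
 * (numeric values per the long-standing e820/ACPI convention). */
enum e820_type {
    E820_RAM      = 1,  /* usable RAM */
    E820_RESERVED = 2,  /* reserved, never usable */
    E820_ACPI     = 3,  /* "ACPI data": reclaimable once tables are consumed */
    E820_NVS      = 4,  /* "ACPI NVS": must never be reclaimed as RAM */
};
```

The patch's point is precisely that type 3 permits an ACPI-unaware consumer to treat the region as RAM, while type 4 does not.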


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 15:38:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 15:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8158.21702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRoM-0002RY-Nx; Fri, 16 Oct 2020 15:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8158.21702; Fri, 16 Oct 2020 15:38:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRoM-0002RQ-Kr; Fri, 16 Oct 2020 15:38:30 +0000
Received: by outflank-mailman (input) for mailman id 8158;
 Fri, 16 Oct 2020 15:38:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Iki1=DX=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kTRoK-0002Pk-DK
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 15:38:28 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d69e8daf-b759-4cfa-8517-5039ee192ad9;
 Fri, 16 Oct 2020 15:38:25 +0000 (UTC)
X-Inumbo-ID: d69e8daf-b759-4cfa-8517-5039ee192ad9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602862705;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=lSdmjCdnp5DmmGGCXMfRkuOKH6z6+aFF2sEdGlVhs0Q=;
  b=hMaOo/QVlIpC6STV//FOaJatm2zzo81L1SQzY81XTcsyXdGy1N4NAZwn
   sVszpZavhtA/66Qap0exB+1G7/0LdcGJCauGYnYArx+ODYNjTLWJa77Bb
   r7Z3rJ80F7fx83wyPwQZN2Zh7wzxjaIuO9Fn8tRMJptMvGfPKY44wj1T5
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: JDYWFBkFk317Y2XP/xaIkbXtnoKGnoHX5RPpuHXdnj+9AzQtlTcIsFRaGtDiPOHstWt6sdob6m
 m1u/d0ZwroF3Lk9ROJJipbnv3/PO+eVcUKaH9TkPHlMBYI0gQBg2gP2NxvGrXwpIOgHnEe+7hy
 DrRrXFpdHnswi3n3kiWCPEg7XveeMV8Je2zXQ+U9ONTT+wHOCx/zetofLEKV1+760yIK0Fy1Hk
 JsudpGnsjMr1uQ0yM1oa9ygNYT7wxKK1S/T7wqhpN2gIREf63INsHE1iJsC3Rw/WLpXksE2mQw
 1F4=
X-SBRS: 2.5
X-MesageID: 29429851
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,383,1596513600"; 
   d="scan'208";a="29429851"
Subject: Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
References: <20201009150948.31063-1-andrew.cooper3@citrix.com>
 <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
 <01bb2f27-4e0b-3637-e456-09eb7b9b233e@citrix.com>
 <1786f728-15c2-3877-c01a-035b11bd8504@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <82e64d10-50be-68ab-127b-99d205a0a768@citrix.com>
Date: Fri, 16 Oct 2020 16:38:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1786f728-15c2-3877-c01a-035b11bd8504@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 15/10/2020 09:01, Jan Beulich wrote:
> On 14.10.2020 15:57, Andrew Cooper wrote:
>> On 13/10/2020 16:58, Jan Beulich wrote:
>>> On 09.10.2020 17:09, Andrew Cooper wrote:
>>>> At the time of XSA-170, the x86 instruction emulator really was broken, and
>>>> would allow arbitrary non-canonical values to be loaded into %rip.  This was
>>>> fixed after the embargo by c/s 81d3a0b26c1 "x86emul: limit-check branch
>>>> targets".
>>>>
>>>> However, in a demonstration that off-by-one errors really are one of the
>>>> hardest programming issues we face, everyone involved with XSA-170, myself
>>>> included, mistook the statement in the SDM which says:
>>>>
>>>>   If the processor supports N < 64 linear-address bits, bits 63:N must be identical
>>>>
>>>> to mean "must be canonical".  A real canonical check is bits 63:N-1.
>>>>
>>>> VMEntries really do tolerate a not-quite-canonical %rip, specifically to cater
>>>> to the boundary condition at 0x0000800000000000.
>>>>
>>>> Now that the emulator has been fixed, revert the XSA-170 change to fix
>>>> architectural behaviour at the boundary case.  The XTF test case for XSA-170
>>>> exercises this corner case, and still passes.
>>>>
>>>> Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> But why revert the change rather than fix ...
>>>
>>>> @@ -4280,38 +4280,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>>  out:
>>>>      if ( nestedhvm_vcpu_in_guestmode(v) )
>>>>          nvmx_idtv_handling();
>>>> -
>>>> -    /*
>>>> -     * VM entry will fail (causing the guest to get crashed) if rIP (and
>>>> -     * rFLAGS, but we don't have an issue there) doesn't meet certain
>>>> -     * criteria. As we must not allow less than fully privileged mode to have
>>>> -     * such an effect on the domain, we correct rIP in that case (accepting
>>>> -     * this not being architecturally correct behavior, as the injected #GP
>>>> -     * fault will then not see the correct [invalid] return address).
>>>> -     * And since we know the guest will crash, we crash it right away if it
>>>> -     * already is in most privileged mode.
>>>> -     */
>>>> -    mode = vmx_guest_x86_mode(v);
>>>> -    if ( mode == 8 ? !is_canonical_address(regs->rip)
>>> ... the wrong use of is_canonical_address() here? By reverting
>>> you open up avenues for XSAs in case we get things wrong elsewhere,
>>> including ...
>>>
>>>> -                   : regs->rip != regs->eip )
>>> ... for 32-bit guests.
>> Because the only appropriate alternative would be ASSERT_UNREACHABLE()
>> and domain crash.
>>
>> This logic corrupts guest state.
>>
>> Running with corrupt state is every bit as much an XSA as hitting a VMEntry
>> failure if it can be triggered by userspace, but the latter is safer and
>> much more obvious.
> I disagree. For CPL > 0 we don't "corrupt" guest state any more
> than reporting a #GP fault when one is going to be reported
> anyway (as long as the VM entry doesn't fail, and hence the
> guest won't get crashed). IOW this raising of #GP actually is a
> precautionary measure to _avoid_ XSAs.

It does not remove any XSAs.  It merely hides them.

There are legal states where RIP is 0x0000800000000000 and #GP is the
wrong thing to do.  Any async VMExit (Processor Trace Prefetch in
particular), or with debug traps pending.
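
[Editorial aside: a standalone sketch of the two checks being contrasted here; helper names are illustrative, not Xen's actual implementation. With 48 linear-address bits, 0x0000800000000000 fails the true canonical check but satisfies the looser bits-63:N VMEntry condition.]

```c
#include <stdbool.h>
#include <stdint.h>

/* True canonical check: bits 63:N-1 must all be equal, i.e. the
 * address is the sign-extension of an N-bit linear address. */
static bool is_canonical(uint64_t addr, unsigned int bits)
{
    return (uint64_t)(((int64_t)(addr << (64 - bits))) >> (64 - bits)) == addr;
}

/* What VMEntry tolerates per the SDM wording: only bits 63:N need to
 * be identical -- one bit looser than a real canonical check. */
static bool vmentry_rip_ok(uint64_t rip, unsigned int bits)
{
    uint64_t upper = rip >> bits;                 /* bits 63:N */
    return upper == 0 || upper == (UINT64_C(1) << (64 - bits)) - 1;
}
```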

> Nor do I agree with the "much more obvious" aspect:

A domain crash is far more likely to be reported to xen-devel/security
than something which bodges state in an almost-silent way.

> A VM entry
> failure requires quite a bit of analysis to recognize what has
> caused it; whether a non-pseudo-canonical RIP is what catches your
> eye right away is simply unknown. The gprintk() that you delete,
> otoh, says very clearly what we have found to be wrong.

Non-canonical values are easier to spot than most of the other rules, IMO.

>> It was the appropriate security fix (give or take the functional bug in
>> it) at the time, given the complexity of retrofitting zero length
>> instruction fetches to the emulator.
>>
>> However, it is one of a very long list of guest-state-induced VMEntry
>> failures, with non-trivial logic which we assert will pass, on a
>> fastpath, where hardware also performs the same checks and we already
>> have a runtime safe way of dealing with errors.  (Hence not actually
>> using ASSERT_UNREACHABLE() here.)
> "Runtime safe" as far as Xen is concerned, I take it. This isn't safe
> for the guest at all, as vmx_failed_vmentry() results in an
> unconditional domain_crash().

Any VMEntry failure is a bug in Xen.  If userspace can trigger it, it is
an XSA, *irrespective* of whether we crash the domain then and there, or
whether we let it try and limp on with corrupted state.

The difference is purely in how obviously the bug manifests.

> I certainly buy the fast path aspect of your comment, and if you were
> moving the guest state adjustment into vmx_failed_vmentry(), I'd be
> fine with the deletion here.
>
>> It isn't appropriate for this check to exist on its own (i.e. without
>> other guest state checks),
> Well, if we run into cases where we get things wrong, more checks
> and adjustments may want adding. Sadly each one of those has a fair
> chance of needing an XSA.

We've had plenty of VMEntry failure XSAs, and we will no doubt have
plenty more in the future.  A non-canonical RIP is not special as far as
these go.


We absolutely should not be doing any fixup in debug builds.  I don't
see any security benefit for doing it in release builds, and an
important downside in terms of getting the bug noticed, and therefore fixed.


> As an aside, nvmx_n2_vmexit_handler()'s handling of
> VMX_EXIT_REASONS_FAILED_VMENTRY looks pretty bogus - this is a flag,
> not a separate exit reason. I guess I'll make a patch ...

This is far from the only problem.  I'm not intending to fix any issues
I find, until I can start getting some proper nested virt functionality
tests in place.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 15:38:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 15:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8157.21690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRoH-0002Pw-GC; Fri, 16 Oct 2020 15:38:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8157.21690; Fri, 16 Oct 2020 15:38:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRoH-0002Pp-D7; Fri, 16 Oct 2020 15:38:25 +0000
Received: by outflank-mailman (input) for mailman id 8157;
 Fri, 16 Oct 2020 15:38:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Q1u=DX=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kTRoF-0002Pk-Sr
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 15:38:23 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a2555b3-db8a-48b2-b0e0-87655f23cf88;
 Fri, 16 Oct 2020 15:38:22 +0000 (UTC)
X-Inumbo-ID: 4a2555b3-db8a-48b2-b0e0-87655f23cf88
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602862702;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=GTzravAnKnhxtLl9cPws+qnSmaTjDzSL5lAd3eS6BKo=;
  b=b5zCO3NE3cdRooURtbwwQHAAZ+dP9hs6t0q9aq2wbU0l50KGYN884n5S
   2uIAX86wjKjz5OilWgqkzv7e/OilMMCi0E9XapeftdPQ1rLrQgFOWgz3Y
   HzvxvQ3xbloyZ4eOWvOc8alza7T5yFuy8GmPlJD1k0mswh4LE0NWtq4Dj
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: wJ9MoMoAAlTEMN9xPetJhVcvwbURac+r3BOVHRnAPedrzcUqgyb1jssTnjrUu6GK62x4PdbINj
 5EkJ1MdR+QNdXDlnktt9DCpCCFO1yMyU/d79tkj51YWVd8ImatYQpYeGQZHZxcupfXpnhPi3bN
 1Ju1GtFm/kDuy2OauF4TfmZIFfQcVwdYN6NnYqKpC+FbE+AchvGFJjEtJYhp/myYg2NPZEjem5
 iEwxAP1SVZL9UhVha+35TvNBxJRhHgf0NcsYp3Qxj81adyCT+PuXTtPDcuOBBKwPYZenAP65tt
 6/o=
X-SBRS: 2.5
X-MesageID: 29429829
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,383,1596513600"; 
   d="scan'208";a="29429829"
Date: Fri, 16 Oct 2020 16:37:08 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: <qemu-devel@nongnu.org>, <dgilbert@redhat.com>,
	<xen-devel@lists.xenproject.org>, <paul@xen.org>, <sstabellini@kernel.org>,
	Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
	<marmarek@invisiblethingslab.com>, "Michael S. Tsirkin" <mst@redhat.com>,
	Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, Paolo Bonzini
	<pbonzini@redhat.com>, Richard Henderson <rth@twiddle.net>, Eduardo Habkost
	<ehabkost@redhat.com>
Subject: Re: [PATCH] hw/xen: Set suppress-vmdesc for Xen machines
Message-ID: <20201016153708.GB3105841@perard.uk.xensource.com>
References: <20201013190506.3325-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20201013190506.3325-1-jandryuk@gmail.com>

On Tue, Oct 13, 2020 at 03:05:06PM -0400, Jason Andryuk wrote:
> xen-save-devices-state doesn't currently generate a vmdesc, so restore
> always triggers "Expected vmdescription section, but got 0".  This is
> not a problem when restore comes from a file.  However, when QEMU runs
> in a linux stubdom and comes over a console, EOF is not received.  This
> causes a delay restoring - though it does restore.
> 
> Setting suppress-vmdesc skips looking for the vmdesc during restore and
> avoids the wait.

suppress-vmdesc is only used during restore, right? So starting a guest
without it, saving the guest and restoring the guest with
suppress-vmdesc=on added will work as intended? (I'm checking that migration
across an update of QEMU will work.)

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 15:47:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 15:47:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8163.21713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRww-0003TK-OX; Fri, 16 Oct 2020 15:47:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8163.21713; Fri, 16 Oct 2020 15:47:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTRww-0003TD-Lb; Fri, 16 Oct 2020 15:47:22 +0000
Received: by outflank-mailman (input) for mailman id 8163;
 Fri, 16 Oct 2020 15:47:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VWaZ=DX=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kTRwv-0003T8-T6
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 15:47:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1d561c8-5aed-4b3d-80a2-55b623e00d55;
 Fri, 16 Oct 2020 15:47:20 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTRwn-0008Bd-MT; Fri, 16 Oct 2020 15:47:13 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTRwn-00052Q-Dc; Fri, 16 Oct 2020 15:47:13 +0000
X-Inumbo-ID: b1d561c8-5aed-4b3d-80a2-55b623e00d55
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JonRfA+TgFx9lcPWATiPevoJJ4wS52nuolyRWYVusfE=; b=lK/egPom8Pf4nczzjrD+P9gbzn
	d4xtlVNO74AjsNA2iWDuBCzDfm/s+zy1UWuJp1LgurkIWvuGpinrT87EGx+8ZDn7reJVEmE3YrwSD
	K9qc0dlUanqW/57mn2dSLAkJgkzBt5Z/Wq49kQYKuXrm7Zac8PVZsgwaJuo951qt+vmY=;
Subject: Re: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-3-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <97648df3-dcce-cd19-9074-6ca63d94b518@xen.org>
Date: Fri, 16 Oct 2020 16:47:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201005094905.2929-3-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Paul,

On 05/10/2020 10:49, Paul Durrant wrote:
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 791f0a2592..75e855625a 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1130,6 +1130,18 @@ struct xen_domctl_vuart_op {
>                                    */
>   };
>   
> +/*
> + * XEN_DOMCTL_iommu_ctl
> + *
> + * Control of VM IOMMU settings
> + */
> +
> +#define XEN_DOMCTL_IOMMU_INVALID 0

I can't find any user of XEN_DOMCTL_IOMMU_INVALID. What's the purpose
of it?

[...]

> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> index b21c3783d3..1a96d3502c 100644
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -106,17 +106,19 @@ struct xsm_operations {
>       int (*irq_permission) (struct domain *d, int pirq, uint8_t allow);
>       int (*iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
>       int (*iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
> -    int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);
> +    int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access);

I can't figure out what changed here. Is it an intended change?

>   
> -#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
> +#if defined(CONFIG_HAS_PASSTHROUGH)
> +    int (*iommu_ctl) (struct domain *d, unsigned int op);
> +#if defined(CONFIG_HAS_PCI)
>       int (*get_device_group) (uint32_t machine_bdf);
>       int (*assign_device) (struct domain *d, uint32_t machine_bdf);
>       int (*deassign_device) (struct domain *d, uint32_t machine_bdf);
>   #endif
> -
> -#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
> +#if defined(CONFIG_HAS_DEVICE_TREE)
>       int (*assign_dtdevice) (struct domain *d, const char *dtpath);
>       int (*deassign_dtdevice) (struct domain *d, const char *dtpath);
> +#endif
>   #endif

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 15:54:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 15:54:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8166.21726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTS42-0004Q1-GV; Fri, 16 Oct 2020 15:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8166.21726; Fri, 16 Oct 2020 15:54:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTS42-0004Pu-Cx; Fri, 16 Oct 2020 15:54:42 +0000
Received: by outflank-mailman (input) for mailman id 8166;
 Fri, 16 Oct 2020 15:54:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VWaZ=DX=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kTS40-0004Pp-Uy
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 15:54:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 19cb9d07-f151-47b6-bec3-d962d5356c0a;
 Fri, 16 Oct 2020 15:54:40 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTS3v-0008Lx-9a; Fri, 16 Oct 2020 15:54:35 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTS3u-0005Em-TU; Fri, 16 Oct 2020 15:54:35 +0000
X-Inumbo-ID: 19cb9d07-f151-47b6-bec3-d962d5356c0a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=7P1fPlrFQRURct1iDp9Hf9pmMYk3Sx4P11s9/9aJEI0=; b=B2lco4lhWzAkyWG1alWOZYmbvj
	0NSligQ+U98dxrigxn/z1popTjnwmj9ODQlCh+lVxwvtVrFRERiqbRSS7pN+SY2fwZ9+EiNyyrxvJ
	6nXB6vFBWbx1H8X6FRvrwEhQbhW4Epz0NKk6hRWv3IJ2x9RKM6p8mGG7dCPioe8K51kQ=;
Subject: Re: [PATCH 3/5] libxl / iommu / domctl: introduce
 XEN_DOMCTL_IOMMU_SET_ALLOCATION...
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-4-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <471c9800-2ff0-d180-0840-e29dee4d3b4f@xen.org>
Date: Fri, 16 Oct 2020 16:54:32 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201005094905.2929-4-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Paul,

On 05/10/2020 10:49, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... sub-operation of XEN_DOMCTL_iommu_ctl.
> 
> This patch adds a new sub-operation into the domctl. The code in iommu_ctl()
> is extended to call a new arch-specific iommu_set_allocation() function which
> will be called with the IOMMU page-table overhead (in 4k pages) in response

Why 4KB? Wouldn't it be better to use the hypervisor page size instead?

> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 75e855625a..6402678838 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1138,8 +1138,16 @@ struct xen_domctl_vuart_op {
>   
>   #define XEN_DOMCTL_IOMMU_INVALID 0
>   
> +#define XEN_DOMCTL_IOMMU_SET_ALLOCATION 1
> +struct xen_domctl_iommu_set_allocation {
> +    uint32_t nr_pages;

Shouldn't this be a 64-bit value?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:02:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:02:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8169.21737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSB8-0005pj-7w; Fri, 16 Oct 2020 16:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8169.21737; Fri, 16 Oct 2020 16:02:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSB8-0005pc-56; Fri, 16 Oct 2020 16:02:02 +0000
Received: by outflank-mailman (input) for mailman id 8169;
 Fri, 16 Oct 2020 16:02:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eysm=DX=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kTSB6-0005pX-OC
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:02:00 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aefc59d2-601b-4f5c-88ba-a066e09496af;
 Fri, 16 Oct 2020 16:01:59 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id y16so3064386ljk.1
 for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 09:01:59 -0700 (PDT)
X-Inumbo-ID: aefc59d2-601b-4f5c-88ba-a066e09496af
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=cIjxEXi6/1Ci8HAZ985Gw92uP+12htILEakXToNiOhE=;
        b=E5+mJAlEFFRHn0PU1cWH2W8SIGpD5Eb5gb18yAkdIvqMPEAhLMThWiSU9xHFkUgYlr
         1juAUTeSJmWgjQpz7CmH9HA/cpDbEj4cyAKqFbPBkjJzzqHrp3M15Q0PlKdTPHtAAZ8x
         qPN4Tlhumd8wik+mOGPkpddWbra3MnoF4th1OmpWgBsqfAKhi3yYF/OtVbnC+JveaPBj
         M8oUE1DiOiZpIRjAX+zPeLqu6nJ/nkj5sCWFsTVVD1/2S+cOhh2uD0t/QaxwHGsB3Iu5
         Ge7qNMjZtGMaGtXRbBlfJo7G47jMSGyJ/qTxoo8iv+VB3wF1WUk6tf/QgL2LwSDkpwUK
         wIXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=cIjxEXi6/1Ci8HAZ985Gw92uP+12htILEakXToNiOhE=;
        b=iCVyX4AW9Yzo7bY2YoLlohlQHLmUltPAp9y3MKJLF2W/XtY26mzjQcy4p3DK/0HDkX
         kaIZi6mwnNmET/mW0BB7ih1CEixcsiAW/2A5TeaDMdkephHLb0mIk/hrquNcnjV4eg4Q
         jgrIgkwuogNzs981eyBw2B/8eao3zWjtVpQ6SiwtpmvP4KblDv/ymDXSH1oBKHXBh/iS
         dJpIT1qgz7C6mmZDBKhaHT/oTQGa9gvBRcVXQaVVuw05wTrdyEXTBYAqkprayu9EXgXe
         o1B5NL/nffBXijSVwFTVViTlGo/Yf3uqaXqLZ3p6q+A45/s2Nxb38x5G3wAt9icWYhJu
         oCfQ==
X-Gm-Message-State: AOAM532K0cL1dwSnUdO99sXUZOfH1G66VwZxBrqvoYCeNlwnELQrJVf8
	SEl19eVJSGHEwPKYErvfXnfg0HPSz+unA+zyIl4=
X-Google-Smtp-Source: ABdhPJzIMzwsUUc/O+9gITsdj67uq8SXf6e4S58oij+r74HezAGqEHNkHgyPbrj7SW5anw+QvlgpBGfQnlwH/m0ONoM=
X-Received: by 2002:a2e:96d2:: with SMTP id d18mr1777036ljj.407.1602864118597;
 Fri, 16 Oct 2020 09:01:58 -0700 (PDT)
MIME-Version: 1.0
References: <20201013190506.3325-1-jandryuk@gmail.com> <20201016153708.GB3105841@perard.uk.xensource.com>
In-Reply-To: <20201016153708.GB3105841@perard.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 16 Oct 2020 12:01:47 -0400
Message-ID: <CAKf6xpssB-FGwiEhLqV8OFjBGuP4LKYh+9Pj_Bj7p5U2CJSw=g@mail.gmail.com>
Subject: Re: [PATCH] hw/xen: Set suppress-vmdesc for Xen machines
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: QEMU <qemu-devel@nongnu.org>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Paul Durrant <paul@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <rth@twiddle.net>, 
	Eduardo Habkost <ehabkost@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Oct 16, 2020 at 11:38 AM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> On Tue, Oct 13, 2020 at 03:05:06PM -0400, Jason Andryuk wrote:
> > xen-save-devices-state doesn't currently generate a vmdesc, so restore
> > always triggers "Expected vmdescription section, but got 0".  This is
> > not a problem when restore comes from a file.  However, when QEMU runs
> > in a linux stubdom and comes over a console, EOF is not received.  This
> > causes a delay restoring - though it does restore.
> >
> > Setting suppress-vmdesc skips looking for the vmdesc during restore and
> > avoids the wait.
>
> suppress-vmdesc is only used during restore, right? So starting a guest
> without it, saving the guest and restoring the guest with
> suppress-vmdesc=on added will work as intended? (I'm checking that migration
> across update of QEMU will work.)

vmdesc is a JSON description of the migration stream that comes after
the QEMU migration stream.  For our purposes, <migration
stream><vmdesc JSON blob>.  Normal QEMU savevm will generate it,
unless suppress-vmdesc is set.  QEMU restore will read it because:
"Try to read in the VMDESC section as well, so that dumping tools that
intercept our migration stream have the chance to see it."

Xen save does not go through savevm, but instead
xen-save-devices-state, which is a subset of the QEMU savevm.  It
skips RAM since that is read out through Xen interfaces.  Xen uses
xen-load-devices-state to restore device state.  That goes through the
common qemu_loadvm_state which tries to read the vmdesc stream.

For Xen, yes, suppress-vmdesc only matters for the restore case, and
it suppresses the attempt to read the vmdesc.  I think every Xen
restore currently has "Expected vmdescription section, but got -1" in
the -dm.log since the vmdesc is missing.  I have not tested restoring
across this change, but since it just controls reading and discarding
the vmdesc stream, I don't think it will break migration across
update.
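(Illustrative only: a toy sketch of the trailing-vmdesc layout described
above -- a QEMU_VM_VMDESCRIPTION section-type byte (0x06), a 32-bit
big-endian length, then the JSON blob.  The helper name is made up and this
is not QEMU's code; a Xen-style stream simply ends where the footer would
start.)

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define QEMU_VM_VMDESCRIPTION 0x06  /* section-type byte preceding the vmdesc */

/*
 * Toy reader for the optional vmdesc footer that can follow the device
 * state in a QEMU migration stream.  Returns a malloc'd, NUL-terminated
 * copy of the JSON text, or NULL if the buffer ends (or holds something
 * else) instead -- the latter is what a xen-save-devices-state stream
 * looks like, hence suppress-vmdesc skipping this read entirely.
 */
static char *read_trailing_vmdesc(const uint8_t *buf, size_t len)
{
    if (len < 5 || buf[0] != QEMU_VM_VMDESCRIPTION)
        return NULL;  /* no footer: Xen-style stream */

    uint32_t desc_len = (uint32_t)buf[1] << 24 | (uint32_t)buf[2] << 16 |
                        (uint32_t)buf[3] << 8  | buf[4];
    if (len - 5 < desc_len)
        return NULL;  /* truncated footer */

    char *json = malloc(desc_len + 1);
    if (!json)
        return NULL;
    memcpy(json, buf + 5, desc_len);
    json[desc_len] = '\0';
    return json;
}
```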

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:07:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:07:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8171.21750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSGP-00063T-OS; Fri, 16 Oct 2020 16:07:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8171.21750; Fri, 16 Oct 2020 16:07:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSGP-00063M-Kw; Fri, 16 Oct 2020 16:07:29 +0000
Received: by outflank-mailman (input) for mailman id 8171;
 Fri, 16 Oct 2020 16:07:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VWaZ=DX=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kTSGN-00063H-Ue
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:07:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17021f39-b029-4ad9-9f6b-089d1450a108;
 Fri, 16 Oct 2020 16:07:27 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTSGL-0000j7-TI; Fri, 16 Oct 2020 16:07:25 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kTSGL-00062F-Kt; Fri, 16 Oct 2020 16:07:25 +0000
X-Inumbo-ID: 17021f39-b029-4ad9-9f6b-089d1450a108
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=3FDsXG7XIzLfVnJV29U2nAMTGO4lnOKRwV0u1qfRMr8=; b=WugEQxXZyJpNPN97IZ1nG7ydrl
	7exmvF8Q/S530PYdSpcv0Y3uU9f8b/INKkFgJiWbl2kWZrGp0gQy8exaTM0Cln4OQ9N24iF34+S0n
	4sVDu6WGLmtv6IZ6zAM0DJCwqJXVAFk4kpOpd4D27Qx2jeil251uWCkARhj2cDHE8uuI=;
Subject: Re: [PATCH 4/5] iommu: set 'hap_pt_share' and 'need_sync' flags
 earlier in iommu_domain_init()
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>, Jan Beulich <jbeulich@suse.com>
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-5-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <e5ff6f9b-939f-dc04-561a-d77afbf1863a@xen.org>
Date: Fri, 16 Oct 2020 17:07:24 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201005094905.2929-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Paul,

On 05/10/2020 10:49, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Set these flags prior to the calls to arch_iommu_domain_init() and the
> implementation init() method. There is no reason to hide this information from
> those functions and the value of 'hap_pt_share' will be needed by a
> modification to arch_iommu_domain_init() made in a subsequent patch.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> Cc: Jan Beulich <jbeulich@suse.com>
> ---
>   xen/drivers/passthrough/iommu.c | 16 ++++++----------
>   1 file changed, 6 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
> index 642d5c8331..fd9705b3a9 100644
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -174,15 +174,6 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
>       hd->node = NUMA_NO_NODE;
>   #endif
>   
> -    ret = arch_iommu_domain_init(d);
> -    if ( ret )
> -        return ret;
> -
> -    hd->platform_ops = iommu_get_ops();
> -    ret = hd->platform_ops->init(d);
> -    if ( ret || is_system_domain(d) )
> -        return ret;
> -
>       /*
>        * Use shared page tables for HAP and IOMMU if the global option
>        * is enabled (from which we can infer the h/w is capable) and
> @@ -202,7 +193,12 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
>   
>       ASSERT(!(hd->need_sync && hd->hap_pt_share));
>   
> -    return 0;
> +    ret = arch_iommu_domain_init(d);
> +    if ( ret )
> +        return ret;
> +
> +    hd->platform_ops = iommu_get_ops();
> +    return hd->platform_ops->init(d);
>   }
>   
>   void __hwdom_init iommu_hwdom_init(struct domain *d)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:23:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:23:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8174.21762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSW2-0007pV-4s; Fri, 16 Oct 2020 16:23:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8174.21762; Fri, 16 Oct 2020 16:23:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSW2-0007pO-1B; Fri, 16 Oct 2020 16:23:38 +0000
Received: by outflank-mailman (input) for mailman id 8174;
 Fri, 16 Oct 2020 16:23:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eysm=DX=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kTSW1-0007pJ-9c
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:23:37 +0000
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89ef93da-93bb-446b-ab50-c1488e0c2f65;
 Fri, 16 Oct 2020 16:23:35 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id h6so3684560lfj.3
 for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 09:23:35 -0700 (PDT)
MIME-Version: 1.0
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com> <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
In-Reply-To: <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 16 Oct 2020 12:23:22 -0400
Message-ID: <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
Subject: Re: i915 dma faults on Xen
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, intel-gfx@lists.freedesktop.org, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 15, 2020 at 11:16 AM Jason Andryuk <jandryuk@gmail.com> wrote:
>
> On Thu, Oct 15, 2020 at 7:31 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> >
> > On Wed, Oct 14, 2020 at 08:37:06PM +0100, Andrew Cooper wrote:
> > > On 14/10/2020 20:28, Jason Andryuk wrote:
> > > > Hi,
> > > >
> > > > Bug opened at https://gitlab.freedesktop.org/drm/intel/-/issues/2576
> > > >
> > > > I'm seeing DMA faults for the i915 graphics hardware on a Dell
> > > > Latitude 5500. These were captured when I plugged into a Dell
> > > > Thunderbolt dock with two DisplayPort monitors attached.  Xen 4.12.4
> > > > staging and Linux 5.4.70 (and some earlier versions).
> > > >
> > > > Oct 14 18:41:49.056490 kernel:[   85.570347] [drm:gen8_de_irq_handler
> > > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > > Oct 14 18:41:49.056494 kernel:[   85.570395] [drm:gen8_de_irq_handler
> > > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > > Oct 14 18:41:49.056589 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > > > Request device [0000:00:02.0] fault addr 39b5845000, iommu reg = ffff82c00021d000
> > > > Oct 14 18:41:49.056594 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > > > PTE Read access is not set
> > > > Oct 14 18:41:49.056784 kernel:[   85.570668] [drm:gen8_de_irq_handler
> > > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > > Oct 14 18:41:49.056789 kernel:[   85.570687] [drm:gen8_de_irq_handler
> > > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > > Oct 14 18:41:49.056885 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > > > Request device [0000:00:02.0] fault addr 4238d0a000, iommu reg = ffff82c00021d000
> > > > Oct 14 18:41:49.056890 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > > > PTE Read access is not set
> > > >
> > > > They repeat. In the log attached to
> > > > https://gitlab.freedesktop.org/drm/intel/-/issues/2576, they start at
> > > > "Oct 14 18:41:49.056589" and continue until I unplug the dock around
> > > > "Oct 14 18:41:54.801802".
> > > >
> > > > I've also seen similar messages when attaching the laptop's HDMI port
> > > > to a 4k monitor. The eDP display by itself seems okay.
> > > >
> > > > I tried Fedora 31 & 32 live images with intel_iommu=on, so no Xen, and
> > > > didn't see any errors.
> > > >
> > > > This is a kernel & xen log with drm.debug=0x1e. It also includes some
> > > > application (glass) logging when it changes resolutions which seems to
> > > > set off the DMA faults. 5500-igfx-messages-kern-xen-glass
> > > >
> > > > Running xen with iommu=no-igfx disables the iommu for the i915
> > > > graphics and no faults are reported. However, that breaks some other
> > > > devices (Dell Latitude 7200 and 5580) giving a black screen with:
> > > >
> > > > Oct 10 13:24:37.022117 kernel:[   14.884759] i915 0000:00:02.0: Failed
> > > > to idle engines, declaring wedged!
> > > > Oct 10 13:24:37.022118 kernel:[   14.964794] i915 0000:00:02.0: Failed
> > > > to initialize GPU, declaring it wedged!
> > > >
> > > > Any suggestions welcome.
> > >
> > > Presumably this is with a PV dom0.  What are 39b5845000 and 4238d0a000
> > > in the machine memory map?
>
> They are bogus?
> End of RAM is 0x47c800000
> That's:
> 0x047c800000
> vs.
> 0x39b5845000
> 0x4238d0a000
>
> > > This smells like a missing RMRR in the ACPI tables.

The RMRRs are:
(XEN) [VT-D]Host address width 39
(XEN) [VT-D]found ACPI_DMAR_DRHD:
(XEN) [VT-D]  dmaru->address = fed90000
(XEN) [VT-D]drhd->address = fed90000 iommu->reg = ffff82c00021d000
(XEN) [VT-D]cap = 1c0000c40660462 ecap = 19e2ff0505e
(XEN) [VT-D] endpoint: 0000:00:02.0
(XEN) [VT-D]found ACPI_DMAR_DRHD:
(XEN) [VT-D]  dmaru->address = fed91000
(XEN) [VT-D]drhd->address = fed91000 iommu->reg = ffff82c00021f000
(XEN) [VT-D]cap = d2008c40660462 ecap = f050da
(XEN) [VT-D] IOAPIC: 0000:00:1e.7
(XEN) [VT-D] MSI HPET: 0000:00:1e.6
(XEN) [VT-D]  flags: INCLUDE_ALL
(XEN) [VT-D]found ACPI_DMAR_RMRR:
(XEN) [VT-D] endpoint: 0000:00:14.0
(XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 78863000 end_addr 78882fff
(XEN) [VT-D]found ACPI_DMAR_RMRR:
(XEN) [VT-D] endpoint: 0000:00:02.0
(XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 7d000000 end_addr 7f7fffff
(XEN) [VT-D]found ACPI_DMAR_RMRR:
(XEN) [VT-D] endpoint: 0000:00:16.7
(XEN) [VT-D]dmar.c:581:  Non-existent device (0000:00:16.7) is
reported in RMRR (78907000, 78986fff)'s scope!
(XEN) [VT-D]dmar.c:596:   Ignore the RMRR (78907000, 78986fff) due to
devices under its scope are not PCI discoverable!

> > I agree.
> >
> > Can you paste the memory map as printed by Xen when booting, and what
> > command line are you using to boot Xen.
>
> So this is OpenXT, and it's booting EFI -> xen -> tboot -> xen
>
> Here's the memory map:
> (XEN) TBOOT RAM map:
> (XEN)  0000000000000000 - 0000000000060000 (usable)
> (XEN)  0000000000060000 - 0000000000068000 (reserved)
> (XEN)  0000000000068000 - 000000000009e000 (usable)
> (XEN)  000000000009e000 - 000000000009f000 (reserved)
> (XEN)  000000000009f000 - 00000000000a0000 (usable)
> (XEN)  00000000000a0000 - 0000000000100000 (reserved)
> (XEN)  0000000000100000 - 0000000040000000 (usable)
> (XEN)  0000000040000000 - 0000000040400000 (reserved)
> (XEN)  0000000040400000 - 000000007024b000 (usable)
> (XEN)  000000007024b000 - 000000007024c000 (ACPI NVS)
> (XEN)  000000007024c000 - 000000007024d000 (reserved)
> (XEN)  000000007024d000 - 0000000077f19000 (usable)
> (XEN)  0000000077f19000 - 0000000078987000 (reserved)
> (XEN)  0000000078987000 - 0000000078a04000 (ACPI data)
> (XEN)  0000000078a04000 - 0000000078ea3000 (ACPI NVS)
> (XEN)  0000000078ea3000 - 000000007acff000 (reserved)
> (XEN)  000000007acff000 - 000000007ad00000 (usable)
> (XEN)  000000007ad00000 - 000000007f800000 (reserved)
> (XEN)  00000000f0000000 - 00000000f8000000 (reserved)
> (XEN)  00000000fe000000 - 00000000fe011000 (reserved)
> (XEN)  00000000fec00000 - 00000000fec01000 (reserved)
> (XEN)  00000000fee00000 - 00000000fee01000 (reserved)
> (XEN)  00000000ff000000 - 0000000100000000 (reserved)
> (XEN)  0000000100000000 - 000000047c800000 (usable)
> (XEN) EFI memory map:
> (XEN)  0000000000000-000000009dfff type=7 attr=000000000000000f
> (XEN)  000000009e000-000000009efff type=0 attr=000000000000000f
> (XEN)  000000009f000-000000009ffff type=3 attr=000000000000000f
> (XEN)  0000000100000-000003fffffff type=7 attr=000000000000000f
> (XEN)  0000040000000-00000403fffff type=0 attr=000000000000000f
> (XEN)  0000040400000-000005e359fff type=7 attr=000000000000000f
> (XEN)  000005e35a000-000005e399fff type=4 attr=000000000000000f
> (XEN)  000005e39a000-000006a47dfff type=7 attr=000000000000000f
> (XEN)  000006a47e000-000006c3eefff type=2 attr=000000000000000f
> (XEN)  000006c3ef000-000006d5eefff type=1 attr=000000000000000f
> (XEN)  000006d5ef000-000006d86cfff type=2 attr=000000000000000f
> (XEN)  000006d86d000-000006d978fff type=1 attr=000000000000000f
> (XEN)  000006d979000-000006dc7afff type=4 attr=000000000000000f
> (XEN)  000006dc7b000-000006dc98fff type=3 attr=000000000000000f
> (XEN)  000006dc99000-000006dcc7fff type=4 attr=000000000000000f
> (XEN)  000006dcc8000-000006dccdfff type=3 attr=000000000000000f
> (XEN)  000006dcce000-00000701a5fff type=4 attr=000000000000000f
> (XEN)  00000701a6000-00000701c8fff type=3 attr=000000000000000f
> (XEN)  00000701c9000-00000701edfff type=4 attr=000000000000000f
> (XEN)  00000701ee000-0000070204fff type=3 attr=000000000000000f
> (XEN)  0000070205000-000007022cfff type=4 attr=000000000000000f
> (XEN)  000007022d000-000007024afff type=3 attr=000000000000000f
> (XEN)  000007024b000-000007024bfff type=10 attr=000000000000000f
> (XEN)  000007024c000-000007024cfff type=6 attr=800000000000000f
> (XEN)  000007024d000-000007024dfff type=4 attr=000000000000000f
> (XEN)  000007024e000-0000070282fff type=3 attr=000000000000000f
> (XEN)  0000070283000-00000702c3fff type=4 attr=000000000000000f
> (XEN)  00000702c4000-00000702c8fff type=3 attr=000000000000000f
> (XEN)  00000702c9000-00000702defff type=4 attr=000000000000000f
> (XEN)  00000702df000-0000070307fff type=3 attr=000000000000000f
> (XEN)  0000070308000-0000070317fff type=4 attr=000000000000000f
> (XEN)  0000070318000-0000070319fff type=3 attr=000000000000000f
> (XEN)  000007031a000-0000070331fff type=4 attr=000000000000000f
> (XEN)  0000070332000-0000070349fff type=3 attr=000000000000000f
> (XEN)  000007034a000-0000070356fff type=2 attr=000000000000000f
> (XEN)  0000070357000-0000070357fff type=7 attr=000000000000000f
> (XEN)  0000070358000-0000070358fff type=2 attr=000000000000000f
> (XEN)  0000070359000-0000076f3efff type=4 attr=000000000000000f
> (XEN)  0000076f3f000-00000772affff type=7 attr=000000000000000f
> (XEN)  00000772b0000-0000077f18fff type=3 attr=000000000000000f
> (XEN)  0000077f19000-0000078986fff type=0 attr=000000000000000f
> (XEN)  0000078987000-0000078a03fff type=9 attr=000000000000000f
> (XEN)  0000078a04000-0000078ea2fff type=10 attr=000000000000000f
> (XEN)  0000078ea3000-000007ab22fff type=6 attr=800000000000000f
> (XEN)  000007ab23000-000007acfefff type=5 attr=800000000000000f
> (XEN)  000007acff000-000007acfffff type=4 attr=000000000000000f
> (XEN)  0000100000000-000047c7fffff type=7 attr=000000000000000f
> (XEN)  00000000a0000-00000000fffff type=0 attr=0000000000000000
> (XEN)  000007ad00000-000007adfffff type=0 attr=070000000000000f
> (XEN)  000007ae00000-000007f7fffff type=0 attr=0000000000000000
> (XEN)  00000f0000000-00000f7ffffff type=11 attr=800000000000100d
> (XEN)  00000fe000000-00000fe010fff type=11 attr=8000000000000001
> (XEN)  00000fec00000-00000fec00fff type=11 attr=8000000000000001
> (XEN)  00000fee00000-00000fee00fff type=11 attr=8000000000000001
> (XEN)  00000ff000000-00000ffffffff type=11 attr=800000000000100d
>
> Command line
> console=com1 dom0_mem=min:420M,max:420M,420M efi=no-rs,attr=uc
> com1=115200,8n1,pci mbi-video vga=current flask=enforcing loglvl=debug
> guest_loglvl=debug smt=0 ucode=-1 bootscrub=1
> argo=yes,mac-permissive=1 iommu=force,igfx
>
> iommu=force,igfx was to force igfx back on.  I added a dmi quirk to
> set no-igfx on this platform as a temporary workaround.
>
> > Have you tried adding dom0-iommu=map-inclusive to the Xen command
> > line?

Still seeing faults with dom0-iommu=map-inclusive.  At a different
address this time:
Oct 16 15:58:05.110768 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
Request device [0000:00:02.0] fault addr ea0c4f000, iommu reg = ffff82c00021d000
Oct 16 15:58:05.110774 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
PTE Read access is not set
Oct 16 15:58:05.110777 VM hypervisor: (XEN) print_vtd_entries: iommu
#0 dev 0000:00:02.0 gmfn ea0c4f
Oct 16 15:58:05.110780 VM hypervisor: (XEN)     root_entry[00] = 46e129001
Oct 16 15:58:05.110782 VM hypervisor: (XEN)     context[10] = 2_46e128001
Oct 16 15:58:05.110785 VM hypervisor: (XEN)     l4[000] = 46e11b003
Oct 16 15:58:05.110787 VM hypervisor: (XEN)     l3[03a] = 0
Oct 16 15:58:05.110789 VM hypervisor: (XEN)     l3[03a] not present

In the previous posting, the two faulting addresses repeated in pairs.
Here only this one address is repeating.

I unplugged and replugged, and a different address was repeating along
with a few other random addresses showing 1 or 2 faults.  Here is the
uniq -c output of address and count pulled from the logs:
0x1ce9d6b000 2007
0x31b50d5000 1
0x1ce9d6b000 882
0x707741000 1
0x1ce9d6b000 1114
0x20d2099000 1
0x1ce9d6b000 3489
0xeb98eb000 1
0x1ce9d6b000 2430
0xeb98eb000 1
0x1ce9d6b000 1300
0x22f20bb000 1
0x1ce9d6b000 269
0x22f20bb000 1
0x1ce9d6b000 5091
0x6c99ec9000 1
0x1ce9d6b000 29
0xeb98eb000 1
0x1ce9d6b000 4599
0x6c99ec9000 1
0x1ce9d6b000 1989
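
For reference, the counts above can be reproduced with a pipeline along
these lines (the log filename "xen.log" is assumed, but the "fault addr"
pattern matches the DMAR messages quoted earlier):

```shell
# Extract the faulting address from each VT-d DMAR line, then count
# consecutive repeats of the same address.
grep 'fault addr' xen.log \
  | sed -n 's/.*fault addr \([0-9a-f]*\),.*/0x\1/p' \
  | uniq -c \
  | awk '{ print $2, $1 }'
```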

In the i915 bug report, LAKSHMINARAYANA VUDUM commented "We have a
similar issue on SKL on our CI system
https://gitlab.freedesktop.org/drm/intel/-/issues/2017"

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:29:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:29:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8178.21776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSbM-00084h-Sv; Fri, 16 Oct 2020 16:29:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8178.21776; Fri, 16 Oct 2020 16:29:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSbM-00084a-Pq; Fri, 16 Oct 2020 16:29:08 +0000
Received: by outflank-mailman (input) for mailman id 8178;
 Fri, 16 Oct 2020 16:29:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eysm=DX=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kTSbL-00084V-5l
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:29:07 +0000
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62f3dd61-7f15-4d5f-8041-80ca799917fe;
 Fri, 16 Oct 2020 16:29:06 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id l28so3661846lfp.10
 for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 09:29:06 -0700 (PDT)
MIME-Version: 1.0
References: <20201014153150.83875-1-jandryuk@gmail.com> <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
 <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com> <CAKf6xpt_VhJ5r4scuAkWU3aGxgwiYNtHaBDpMoFJS+q837aFiA@mail.gmail.com>
 <d8e93366-0f99-37c7-e5f4-8efaf804d2e2@suse.com>
In-Reply-To: <d8e93366-0f99-37c7-e5f4-8efaf804d2e2@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 16 Oct 2020 12:28:53 -0400
Message-ID: <CAKf6xpv9qHJydjQ_TyZEKZAK14T4m2GLLqEwyMTraUxqvg+1Xw@mail.gmail.com>
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 15, 2020 at 11:14 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 15.10.2020 16:50, Jason Andryuk wrote:
> > On Thu, Oct 15, 2020 at 3:00 AM Jan Beulich <jbeulich@suse.com> wrote:
> >> And why is there no bounds check of ->phys_entry paralleling the
> >> ->virt_entry one?
> >
> > What is the purpose of this checking?  It's sanity checking, which is
> > generally good, but what is the harm in failing the checks?  A
> > corrupt kernel can crash itself?  Maybe you could start executing
> > something (the initramfs?) instead of the actual kernel?
>
> This is at least getting close to a possible security issue.
> Booting a hacked up binary can be a problem afaik.

If you are already letting the user provide a kernel, they can give a
well-formed kernel that does whatever they want.  Like Andrew wrote,
the concern would be if the binary can subvert the hypervisor/tools.

> >> On the whole, as long as we don't know what mode we're planning to
> >> boot in, we can't skip any checks, as the mere presence of
> >> XEN_ELFNOTE_PHYS32_ENTRY doesn't mean that's also what gets used.
> >> Therefore simply bypassing any of the checks is not an option.
> >
> > elf_xen_note_check() early exits when it finds phys_entry set, so
> > there is already some bypassing.
> >
> >> In
> >> particular what you suggest would lead to failure to check
> >> e_entry-derived ->virt_entry when the PVH-specific note is
> >> present but we're booting in PV mode. For now I don't see how to
> >> address this without making the function aware of the intended
> >> booting mode.
> >
> > Yes, the relevant checks depend on the desired booting mode.
> >
> > The e_entry use seems a little problematic.  You said the ELF
> > Specification states it should be a virtual address, but Linux seems
> > to fill it with a physical address.  You could use a heuristic: if
> > e_entry < 0 (0xffff...), compare with the virtual addresses; otherwise
> > check against physical?
>
> Don't we have a physical range as well? And don't we adjust the
> entry point already in certain cases anyway? Checking and adjustment
> can (and should) be brought in sync, and else checking the entry
> point fits at least one of the two ranges may be better than no
> checking at all, I think.

Looks like we can pass XC_DOM_PV_CONTAINER/XC_DOM_HVM_CONTAINER down
into elf_xen_parse().  Then we would just validate phys_entry for HVM
and virt_entry for PV.  Does that sound reasonable?

(The use in xc_dom_probe_hvm_kernel() is interesting, as it disallows
Xen-enabled kernels.)

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:30:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:30:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8180.21787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTScJ-0000Le-7D; Fri, 16 Oct 2020 16:30:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8180.21787; Fri, 16 Oct 2020 16:30:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTScJ-0000LX-3x; Fri, 16 Oct 2020 16:30:07 +0000
Received: by outflank-mailman (input) for mailman id 8180;
 Fri, 16 Oct 2020 16:30:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=k7OI=DX=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kTScH-0000Dv-KD
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:30:05 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 907992da-8fad-4150-8a01-9714ff5069ad;
 Fri, 16 Oct 2020 16:30:04 +0000 (UTC)
From: George Dunlap <George.Dunlap@citrix.com>
To: "open list:X86" <xen-devel@lists.xenproject.org>
CC: Nick Rosbrook <rosbrookn@gmail.com>, Rich Persaud <persaur@gmail.com>, Ian
 Jackson <iwj@xenproject.org>, Olivier Lambert <olivier.lambert@vates.fr>,
	Edwin Torok <edvin.torok@citrix.com>
Subject: RFC: Early mock-up of a xenopsd based on golang libxl bindings
Thread-Topic: RFC: Early mock-up of a xenopsd based on golang libxl bindings
Thread-Index: AQHWo9mWX/nzbew8RUGADXmvLnD2mQ==
Date: Fri, 16 Oct 2020 16:29:58 +0000
Message-ID: <84FEBEAF-5859-421E-B595-2358D8490D3F@citrix.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <B9A4F276E48BBC4BAAAD41849955E6A2@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d54db249-6256-42a6-0d54-08d871f0b90d
X-MS-Exchange-CrossTenant-originalarrivaltime: 16 Oct 2020 16:29:58.1936
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: WT8Utg08NMQ0HmA2IbBH7S9FYetaTWZvFjpd9zx2VNcWKohlejxno7Y7ApMGs0/8n0nrHquebbeGg/QqyhrHH71wgVelzp8lhCdDKO3B4Wg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3816
X-OriginatorOrg: citrix.com

https://gitlab.com/martyros/go-xen branch `working/xenops` contains a
super-basic mock-up of a unix domain xenopsd based on the golang libxl
bindings.

To use:

* Install Xen >= 4.14 on your target system

* Make sure you have go >= 1.11 installed

* Clone & build the server

$ git clone https://gitlab.com/martyros/go-xen

$ cd go-xen

$ git checkout working/xenops

Note that this is *not* a fast-forwarding branch.

$ cd xenops/xenopsd

$ go build

$ ./xenopsd

Theoretically this will now accept jsonrpc v1 calls on `/tmp/xenops`.
I haven't dug into exactly what the wire protocol looks like, but you
can test golang's version of it by using one of the "client examples".
In another terminal:

$ cd xenops/client-examples

$ go run get-domains-example.go

It should list the currently-running domains and their domain names.

The core of the actual implementation is in
go-xen/xenops/xenops/xenops.go.  Basically, every method you add to the
Xenops type of the correct format (described in the "net/rpc"
documentation) will be exposed as a method available via RPC.

The current code only does a Unix socket, but it could easily be
modified to work over http as well.

Once we have function signatures in the libxl IDL, the xenops methods
could all be autogenerated, just like the types are for the golang
bindings.

It should be noted that at the moment there will be two "layers" of
translation both ways here: the golang package will be converting rpc
into golang structures, then the libxl libraries will be converting
golang structures into C structures; then any return values have to be
converted from C structures into golang structures, and then converted
again from golang structures into json before being sent back over the
wire.  This may or may not be a big overhead.

Two things are currently sub-optimal about the `xenlight` package for
this use case.

First, although we have a xenlight.Error type, a lot of the xenlight
wrappers return a generic "error".  I'm not sure how that will end up
being converted into json, but we might think about making the xenlight
wrappers all return xenlight.Error instead.

Secondly, at the moment the xenlight types are in the same package as
the function wrappers.  This means that in order to build even the
client, you need to be able to link against an installed libxl library
-- even though the final binary won't need to link against libxl at
all, and could theoretically be on a completely separate host.

Unfortunately, the way we've structured xenlight, it's not simple to
move types.gen.go into its own package, because of the toC and fromC
wrappers, which *do* need to link against libxl (for the init and
dispose functions).  Nick, we might think about whether we should make
separate toC and fromC functions for each of the types, rather than
making those methods.

 -George


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:36:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:36:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8185.21802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSiU-0000gB-Vw; Fri, 16 Oct 2020 16:36:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8185.21802; Fri, 16 Oct 2020 16:36:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSiU-0000g4-SC; Fri, 16 Oct 2020 16:36:30 +0000
Received: by outflank-mailman (input) for mailman id 8185;
 Fri, 16 Oct 2020 16:36:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tDey=DX=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kTSiT-0000fz-73
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb7653d7-44f9-43d4-b9e6-1199015185ee;
 Fri, 16 Oct 2020 16:36:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSiS-0001KV-4s
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:28 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSiS-0008DR-3A
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:28 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kTSiQ-0004zo-D2; Fri, 16 Oct 2020 17:36:26 +0100
X-Inumbo-ID: bb7653d7-44f9-43d4-b9e6-1199015185ee
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=fmqswlXn/zhEF5j7IJvtQzH2noI9IGpxxJj9vCVKny8=; b=5U8CotGTgaU75nceGmafXKfTFe
	LtsrbfP/alIbc1Jz8oduQn9OUEMKVPDVj2bS9k2NGETFoK8WA61Fw9fRgd9eOLMRMZ8KQtN4RRDnz
	EiZ0dXeE1OKhd4uveU+F/IBLiG7lXqElGJZz2s3SxIn9FnHEH/1F4XkLVU46FL0pNAik=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 2/3] known hosts handling: Ensure things are good for multi-host jobs
Date: Fri, 16 Oct 2020 17:36:14 +0100
Message-Id: <20201016163615.5086-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201016163615.5086-1-iwj@xenproject.org>
References: <20201016163615.5086-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a multi-host job reuses host(s) from earlier jobs, the set of
hosts set up in the on-host known_hosts files may be insufficient,
since the hosts we are using now may not have been in any of the
flight's runvars when the earlier job set them up.

So we need to update the known_hosts.  We use the flight's current
set, which will include all of our hosts.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-host-reuse | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/ts-host-reuse b/ts-host-reuse
index 8d674257..ea8a471d 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -135,6 +135,14 @@ sub noop_if_playing () {
     }
 }
 
+sub ensure_known_hosts ($) {
+    my ($ho) = @_;
+    # Don't need to bother if job uses only one host
+    return if scalar(grep { m/(_|^)host$/ } keys %r) == 1;
+    target_putfilecontents_root_stash($ho, 30, known_hosts(),
+				      '/root/.ssh/known_hosts');
+}
+
 #---------- actions ----------
 
 sub act_prealloc () {
@@ -153,6 +161,7 @@ sub act_start_test () {
     return unless $ho->{Shared};
     my %oldstate = map { $_ => 1 } qw(prep ready);
     host_shared_mark_ready($ho, $sharetype, \%oldstate, 'mid-test');
+    ensure_known_hosts($ho);
 }
 
 sub act_final () {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:36:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:36:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8186.21813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSiZ-0000hp-7P; Fri, 16 Oct 2020 16:36:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8186.21813; Fri, 16 Oct 2020 16:36:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSiZ-0000hi-4S; Fri, 16 Oct 2020 16:36:35 +0000
Received: by outflank-mailman (input) for mailman id 8186;
 Fri, 16 Oct 2020 16:36:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tDey=DX=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kTSiY-0000fz-4h
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45b47129-a002-42b0-b25a-756216792b4a;
 Fri, 16 Oct 2020 16:36:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSiR-0001KS-V9
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:27 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSiR-0008DH-TH
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:27 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kTSiP-0004zo-Tx; Fri, 16 Oct 2020 17:36:26 +0100
X-Inumbo-ID: 45b47129-a002-42b0-b25a-756216792b4a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=KMKNzsrzb73R7NSUV3lC+qhLhryXWiv87mxwA+2FYvE=; b=dmEGiXJ43dABmSvfydSCn0JAiH
	t3T8SPLO2wGb0uHD7W8dCUd3BRTxwStR/FNeVds1RErlp15lnQqon7Ro50ctmdzXIAdu1NpfTbZuU
	ppTO0xHDVSqX7BlTgYUG3PLUFkuD8kB+2LZeylFs61PvUv5+XO8/AROi3+ruu/i62vfo=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 1/3] known_hosts handling: Fix over-broad SQL query
Date: Fri, 16 Oct 2020 17:36:13 +0100
Message-Id: <20201016163615.5086-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This should match only "*_host" and "host".  We don't want it matching
"*host" without a "_".

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/TestSupport.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index f2d8a0e1..5e6b15d9 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -2796,7 +2796,7 @@ sub known_hosts () {
 
     my $hostsq= $dbh_tests->prepare(<<END);
         SELECT val FROM runvars
-         WHERE flight=? AND name LIKE '%host'
+         WHERE flight=? AND (name = 'host' OR name LIKE '%\\_host')
          GROUP BY val
 END
     $hostsq->execute($flight);
-- 
2.20.1
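The rule the commit message states (accept "host" itself and names ending
in "_host", reject names that merely end in "host", such as "localhost")
is what the escaped LIKE pattern implements; the backslash makes the
underscore a literal character rather than a single-character wildcard.
The same predicate, sketched in Go purely for illustration (osstest itself
does this in SQL and Perl):

```go
package main

import (
	"fmt"
	"strings"
)

// isHostRunvar mirrors the corrected SQL filter:
//   name = 'host' OR name LIKE '%\_host'
// i.e. accept "host" and "*_host", but reject e.g. "localhost".
func isHostRunvar(name string) bool {
	return name == "host" || strings.HasSuffix(name, "_host")
}

func main() {
	for _, name := range []string{"host", "dst_host", "localhost"} {
		fmt.Printf("%-10s => %v\n", name, isHostRunvar(name))
	}
}
```

Running it shows "host" and "dst_host" accepted and "localhost" rejected,
which is exactly the behaviour the old unescaped '%host' pattern got wrong.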



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:36:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:36:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8187.21826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSie-0000lA-GI; Fri, 16 Oct 2020 16:36:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8187.21826; Fri, 16 Oct 2020 16:36:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSie-0000l3-Cq; Fri, 16 Oct 2020 16:36:40 +0000
Received: by outflank-mailman (input) for mailman id 8187;
 Fri, 16 Oct 2020 16:36:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tDey=DX=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kTSid-0000fz-57
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0be6163-2bcd-4d17-8fce-8d7a69e58454;
 Fri, 16 Oct 2020 16:36:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSiS-0001KY-NW
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:28 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSiS-0008Da-Bg
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:36:28 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kTSiQ-0004zo-KS; Fri, 16 Oct 2020 17:36:26 +0100
X-Inumbo-ID: f0be6163-2bcd-4d17-8fce-8d7a69e58454
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=YMqUcSG249nJyp30zh8d53bqbvmo1wljVruJ5ddf6HQ=; b=ynkpWmR9VV3StiRTDWzKyEPPFM
	LXgpeQhSkIDilGrs/K55t6MhOuBJkmCq2MJLydQ3Hb60o3BKDV99qnCpWCyRB97EJLdkUqMZduBJ+
	GT1HksnBVA0BsvuTmKN2ZToiPcaYy8r/r/jcyXcZnYWhDXVZF1GTVk8Rc7Pf1Y8+0fZo=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 3/3] Revert "host reuse: Reuse hosts only in same role (for now)"
Date: Fri, 16 Oct 2020 17:36:15 +0100
Message-Id: <20201016163615.5086-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201016163615.5086-1-iwj@xenproject.org>
References: <20201016163615.5086-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This workaround is no longer needed because I have fixed the problem
properly.

Also, it didn't work anyway, because at that point $ho isn't set, so
all this did was produce some Perl warnings.

This reverts commit f3668acae2c6201c680dc7b4e9085ab184136d7e.
---
 ts-host-reuse | 1 -
 1 file changed, 1 deletion(-)

diff --git a/ts-host-reuse b/ts-host-reuse
index ea8a471d..e2498bb6 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -67,7 +67,6 @@ sub sharetype_add ($$) {
 sub compute_test_sharetype () {
     my @runvartexts;
     my %done;
-    push @runvartexts, $ho->{Ident};
     foreach my $key (runvar_glob(@accessible_runvar_pats)) {
 	next if runvar_is_synth($key);
 	my $val = $r{$key};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:44:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:44:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8196.21840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSqK-0001rk-BI; Fri, 16 Oct 2020 16:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8196.21840; Fri, 16 Oct 2020 16:44:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSqK-0001rd-7y; Fri, 16 Oct 2020 16:44:36 +0000
Received: by outflank-mailman (input) for mailman id 8196;
 Fri, 16 Oct 2020 16:44:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Q1u=DX=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kTSqI-0001rY-T1
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:44:34 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95cc54da-0b1f-4c1d-ac4d-d416ebacf78e;
 Fri, 16 Oct 2020 16:44:33 +0000 (UTC)
X-Inumbo-ID: 95cc54da-0b1f-4c1d-ac4d-d416ebacf78e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1602866674;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=u08QdlcJJBMJaElC7SvikJOEk5yIS2EAg8zpnJLFXd4=;
  b=Pa5LffuZ6Y9WogFvP7swaJFHnqtQY1x3RfGfGwIeiRO6CkVG8sKsIiQQ
   8WDPeks8xBPCcOgMTXORyqXBcwneQNh+EiIU9xhuJDVeeHcGnKxzzbKdO
   RQKlS4/ODkMEDLk0Bx/XvfdtQoqbnAxnmYbydxfNmPai5K/puMeCCxqJr
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: WgHsbEoe6SMytCQFC/Bb7MyLDZKoBfgiml7Myu5P3GO8+SSZCVv9f94vxxL//Uz0ahMvORq8ej
 Xbw4/AibDK5n9lB1lJSvb8c7uXs5624j+XV0ygdCMQEXg4cn84fzeJGIyQGz9FsCKORxxOGAeP
 RgWH6EZk7i7rysmc5JhwdkX3v4V+D5aaWt5XFG8fz836GGlZ8J2lnY8u4BAbElY1Zk1rCRZN8K
 cuoyuL4oaOifOkpc+m274eyZldmn/N1vhlPEb8dcAap055OFe3EjwII5QWRXQTf6ajsDDUxh+f
 saQ=
X-SBRS: 2.5
X-MesageID: 29522942
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,383,1596513600"; 
   d="scan'208";a="29522942"
Date: Fri, 16 Oct 2020 17:44:28 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: QEMU <qemu-devel@nongnu.org>, "Dr. David Alan Gilbert"
	<dgilbert@redhat.com>, xen-devel <xen-devel@lists.xenproject.org>, "Paul
 Durrant" <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Marek
 Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	"Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum
	<marcel.apfelbaum@gmail.com>, Paolo Bonzini <pbonzini@redhat.com>, "Richard
 Henderson" <rth@twiddle.net>, Eduardo Habkost <ehabkost@redhat.com>
Subject: Re: [PATCH] hw/xen: Set suppress-vmdesc for Xen machines
Message-ID: <20201016164428.GC3105841@perard.uk.xensource.com>
References: <20201013190506.3325-1-jandryuk@gmail.com>
 <20201016153708.GB3105841@perard.uk.xensource.com>
 <CAKf6xpssB-FGwiEhLqV8OFjBGuP4LKYh+9Pj_Bj7p5U2CJSw=g@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <CAKf6xpssB-FGwiEhLqV8OFjBGuP4LKYh+9Pj_Bj7p5U2CJSw=g@mail.gmail.com>

On Fri, Oct 16, 2020 at 12:01:47PM -0400, Jason Andryuk wrote:
> On Fri, Oct 16, 2020 at 11:38 AM Anthony PERARD
> <anthony.perard@citrix.com> wrote:
> >
> > On Tue, Oct 13, 2020 at 03:05:06PM -0400, Jason Andryuk wrote:
> > > xen-save-devices-state doesn't currently generate a vmdesc, so restore
> > > always triggers "Expected vmdescription section, but got 0".  This is
> > > not a problem when restore comes from a file.  However, when QEMU runs
> > > in a linux stubdom and comes over a console, EOF is not received.  This
> > > causes a delay restoring - though it does restore.
> > >
> > > Setting suppress-vmdesc skips looking for the vmdesc during restore and
> > > avoids the wait.
> >
> > suppress-vmdesc is only used during restore, right? So starting a guest
> > without it, saving the guest and restoring the guest with
> > suppress-vmdesc=on added will work as intended? (I'm checking that migration
> > across update of QEMU will work.)
> 
> vmdesc is a json description of the migration stream that comes after
> the QEMU migration stream.  For our purposes, <migration
> stream><vmdesc json blob>.  Normal QEMU savevm will generate it,
> unless suppress-vmdesc is set.  QEMU restore will read it because:
> "Try to read in the VMDESC section as well, so that dumping tools that
> intercept our migration stream have the chance to see it."
> 
> Xen save does not go through savevm, but instead
> xen-save-devices-state, which is a subset of the QEMU savevm.  It
> skips RAM since that is read out through Xen interfaces.  Xen uses
> xen-load-devices-state to restore device state.  That goes through the
> common qemu_loadvm_state which tries to read the vmdesc stream.
> 
> For Xen, yes, suppress-vmdesc only matters for the restore case, and
> it suppresses the attempt to read the vmdesc.  I think every Xen
> restore currently has "Expected vmdescription section, but got -1" in
> the -dm.log since the vmdesc is missing.  I have not tested restoring
> across this change, but since it just controls reading and discarding
> the vmdesc stream, I don't think it will break migration across
> update.

Thanks for the explanation.

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Do you think you could send a patch for libxl as well? Since libxl in
some cases may use the "pc" machine instead of "xenfv". I can send the
patch otherwise.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:50:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:50:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8200.21858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSwP-0002lB-FA; Fri, 16 Oct 2020 16:50:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8200.21858; Fri, 16 Oct 2020 16:50:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSwP-0002l2-9y; Fri, 16 Oct 2020 16:50:53 +0000
Received: by outflank-mailman (input) for mailman id 8200;
 Fri, 16 Oct 2020 16:50:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tDey=DX=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kTSwO-0002kW-1P
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ec779fb-f0ff-45f5-b639-0fd43946a544;
 Fri, 16 Oct 2020 16:50:50 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSwM-0001df-EY
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:50 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSwM-0001Zw-CS
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:50 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kTSwK-00052F-IL; Fri, 16 Oct 2020 17:50:48 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=tDey=DX=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kTSwO-0002kW-1P
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:52 +0000
X-Inumbo-ID: 4ec779fb-f0ff-45f5-b639-0fd43946a544
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4ec779fb-f0ff-45f5-b639-0fd43946a544;
	Fri, 16 Oct 2020 16:50:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=oi4GrmO6y5ften/EbtP56tMNPhNNhc9gnYGJmyUO+5s=; b=m5nWLjUVCJTIT0Z5yboCPoRSiv
	xw0oSeBkrjzkn3WG+Yuv/pmGE9HqIRBCI6yVJCU3pbis6tBKz7GDy828Qllh+ZUcrfUpBYdHrdQPc
	R4adMF5I2X7IOCM5lkhqDctqmSwMoBw9R7hyKtJkAiyk+wR0lLT5lsHmgIXpuS0XU+lA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kTSwM-0001df-EY
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:50 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kTSwM-0001Zw-CS
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:50 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kTSwK-00052F-IL; Fri, 16 Oct 2020 17:50:48 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 1/2] sg-run-job: Allow per-job control of test host reuse
Date: Fri, 16 Oct 2020 17:50:40 +0100
Message-Id: <20201016165041.6716-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-run-job | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/sg-run-job b/sg-run-job
index c64ae026..69ee715b 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -42,10 +42,11 @@ proc per-host-finish {} {
 }
 
 proc run-job {job} {
-    global jobinfo builds flight ok truncate need_xen_hosts
+    global jobinfo builds flight ok reuse_ok truncate need_xen_hosts
     global nested_layers_hosts truncate_globs skip_globs anyskipped
 
     set ok 1
+    set reuse_ok 1
     set truncate 0
     set anyskipped 0
     jobdb::prepare $job
@@ -128,7 +129,7 @@ proc run-job {job} {
 	run-ts !broken =                  ts-host-reuse final host
     }
     set reuse {}
-    if {$ok} { lappend reuse --post-test-ok }
+    if {$ok && $reuse_ok} { lappend reuse --post-test-ok }
     eval [list per-host-ts !broken  = { ts-host-reuse final }] $reuse
 
     if {$ok} { setstatus pass                                             }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 16:50:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 16:50:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8199.21852 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSwP-0002ki-5q; Fri, 16 Oct 2020 16:50:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8199.21852; Fri, 16 Oct 2020 16:50:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTSwP-0002kb-1z; Fri, 16 Oct 2020 16:50:53 +0000
Received: by outflank-mailman (input) for mailman id 8199;
 Fri, 16 Oct 2020 16:50:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tDey=DX=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kTSwN-0002kR-ME
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8c8aade-3d15-4f8c-a7ab-2c732511750e;
 Fri, 16 Oct 2020 16:50:50 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSwM-0001dk-MF
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:50 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kTSwM-0001aE-KO
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:50 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kTSwK-00052F-Qr; Fri, 16 Oct 2020 17:50:48 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=tDey=DX=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kTSwN-0002kR-ME
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:51 +0000
X-Inumbo-ID: c8c8aade-3d15-4f8c-a7ab-2c732511750e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c8c8aade-3d15-4f8c-a7ab-2c732511750e;
	Fri, 16 Oct 2020 16:50:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=spOHayiZ5llu8K3oTStjH9Mre3G0q13wN0/Z0Ay6zSg=; b=vBQBb6xXMDulav22cSpnbumgIt
	FnmB0A+hBlEICG3JYAiweF65iX4UEcn3cGFGxxBfZ4yFeLqWLjQCH8dIR3gG4FE2PZpb4o0UqqoU9
	Yd1blEO6QZsQy9sLNyL7TMhpO1QrT13dP9gHG4awoHV4DcGt4quFibtIWMLJ31HV6ClA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kTSwM-0001dk-MF
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:50 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kTSwM-0001aE-KO
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 16:50:50 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kTSwK-00052F-Qr; Fri, 16 Oct 2020 17:50:48 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 2/2] Do not mark hosts used for pair test as reusable
Date: Fri, 16 Oct 2020 17:50:41 +0100
Message-Id: <20201016165041.6716-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201016165041.6716-1-iwj@xenproject.org>
References: <20201016165041.6716-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We do not currently tear down the NBD mirror, which means the next
test cannot remove our LVs.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 sg-run-job | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/sg-run-job b/sg-run-job
index 69ee715b..bc5a0c8f 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -581,6 +581,8 @@ proc run-job/test-debianhvm {} {
 }
 
 proc setup-test-pair {} {
+    global reuse_ok
+    set reuse_ok 0
     run-ts . =              ts-debian-install      dst_host
     run-ts . =              ts-debian-fixup        dst_host          + debian
     run-ts . =              ts-guests-nbd-mirror + dst_host src_host + debian
-- 
2.20.1
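Taken together, the two patches implement a simple per-job veto: run-job
initialises a reuse_ok flag alongside ok, any step (here setup-test-pair)
may clear it, and --post-test-ok is only appended for ts-host-reuse when
both flags survive. A minimal Python sketch of that gating logic (the
function names are illustrative, not osstest code):

```python
def host_reuse_args(ok, reuse_ok):
    """Extra ts-host-reuse arguments, mirroring the Tcl check
    `if {$ok && $reuse_ok} { lappend reuse --post-test-ok }`."""
    return ["--post-test-ok"] if ok and reuse_ok else []

def run_job(steps):
    """Run job steps; each step reports (passed, host_still_reusable)."""
    ok, reuse_ok = True, True  # both start true, as in run-job
    for step in steps:
        passed, reusable = step()
        ok = ok and passed
        # A pair test leaves the NBD mirror set up, so it clears
        # reusability even when the test itself passed.
        reuse_ok = reuse_ok and reusable
    return host_reuse_args(ok, reuse_ok)
```

The point of the separate flag is that a host can be in a known-dirty
state even after a fully successful job, so "passed" and "safe to reuse"
are tracked independently.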



From xen-devel-bounces@lists.xenproject.org Fri Oct 16 17:03:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 17:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8208.21876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTT8D-0003vC-Ka; Fri, 16 Oct 2020 17:03:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8208.21876; Fri, 16 Oct 2020 17:03:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTT8D-0003v5-HF; Fri, 16 Oct 2020 17:03:05 +0000
Received: by outflank-mailman (input) for mailman id 8208;
 Fri, 16 Oct 2020 17:03:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eysm=DX=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kTT8C-0003v0-Jr
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 17:03:04 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00f2df56-b60c-4af2-a803-66abcc4d56d7;
 Fri, 16 Oct 2020 17:03:03 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id c141so3829119lfg.5
 for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 10:03:03 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=eysm=DX=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kTT8C-0003v0-Jr
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 17:03:04 +0000
X-Inumbo-ID: 00f2df56-b60c-4af2-a803-66abcc4d56d7
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 00f2df56-b60c-4af2-a803-66abcc4d56d7;
	Fri, 16 Oct 2020 17:03:03 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id c141so3829119lfg.5
        for <xen-devel@lists.xenproject.org>; Fri, 16 Oct 2020 10:03:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=zv8iWa+xTqVtDgllI7gt6Ed7/TcekXXbJ1P2Kb3yK1g=;
        b=DCgWpy+eENccUPCa6eQgRmQx9F7+4oB0IQ+N0/6Gvhb8lFeGqBg91jEnDlISyaA4Xx
         IN3s+LFSouxDHdQBViSY4/IOhsg6NpUufQfyiKZYtMt/s712yMKlj1ht34wThTzi99WL
         NOTyxO2jQcSJxcnmYNVrjTRb8ZmhFb3M6ypnG7cczfm4HtWqPg0FbnHzpkHod20SKSCv
         oDzGZjcI4LNve/aq5UzLtTGwH1Izza5cx6tgU3221OgG5kb9LG5uZzATT/sjnqJVTtN5
         VFhMpDBwP/UDLg+nccjTpoAKytj7LrYXA7+qbPe/xZDZbBzXBardumWBj3kMPfnOA7xT
         A3GA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=zv8iWa+xTqVtDgllI7gt6Ed7/TcekXXbJ1P2Kb3yK1g=;
        b=cUO35jXCVj/pGfych2W9Hf7Vh8/SNQn31FxRv7a2WpAAGk3BQcsBSotX9uZbkwUkaA
         s9NNJpJjfbxcQt7Kj9qDCuKLrVKSWg7E/+Pzb5lCiDu5LLE5vvrhLWKuUAjyq7XjVFg9
         Tv1mPZBxFIMFr/kGmdze3bFtJLqZcD8M6/695r9ucAMAkxPSQCttQ7YDfOkJqFo9NV/t
         C51ArzkbcRLYqn2sACgDZD6cQOefKVXp5IecrOs66mthB605MjeGUtH35VV7Bnh/NnFF
         R4Lm9ro8eY7vfcySVVdwOk2zhgaVUC1OH1mIn9YjJnsGmpqxqPr6LjMakDRB+3dtkW8b
         nH4g==
X-Gm-Message-State: AOAM530EZov6yLVKRHOe2NZh3deQTzJ1B+XVFL8vVlZoeesDTxG0koS6
	HMrkCBl7H0qzaY9V770SCdiflbIIfCisN4Hfsq0=
X-Google-Smtp-Source: ABdhPJyGL/Yooebd0519+/4KTDkpfCwS6mxLkqyhD7ELt7L7HAP7ohf1M4H2M4NZPZEOuj0GEptsIJL8YN5hFbti6Ew=
X-Received: by 2002:ac2:4ed0:: with SMTP id p16mr1681945lfr.554.1602867782406;
 Fri, 16 Oct 2020 10:03:02 -0700 (PDT)
MIME-Version: 1.0
References: <20201013190506.3325-1-jandryuk@gmail.com> <20201016153708.GB3105841@perard.uk.xensource.com>
 <CAKf6xpssB-FGwiEhLqV8OFjBGuP4LKYh+9Pj_Bj7p5U2CJSw=g@mail.gmail.com> <20201016164428.GC3105841@perard.uk.xensource.com>
In-Reply-To: <20201016164428.GC3105841@perard.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 16 Oct 2020 13:02:50 -0400
Message-ID: <CAKf6xpst6xpMytFf_Pqi9-Y5TqhcfGp5odq=DEA-hBBjdSHMWw@mail.gmail.com>
Subject: Re: [PATCH] hw/xen: Set suppress-vmdesc for Xen machines
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: QEMU <qemu-devel@nongnu.org>, "Dr. David Alan Gilbert" <dgilbert@redhat.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Paul Durrant <paul@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, Richard Henderson <rth@twiddle.net>, 
	Eduardo Habkost <ehabkost@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Oct 16, 2020 at 12:44 PM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> On Fri, Oct 16, 2020 at 12:01:47PM -0400, Jason Andryuk wrote:
> > On Fri, Oct 16, 2020 at 11:38 AM Anthony PERARD
> > <anthony.perard@citrix.com> wrote:
> > >
> > > On Tue, Oct 13, 2020 at 03:05:06PM -0400, Jason Andryuk wrote:
> > > > xen-save-devices-state doesn't currently generate a vmdesc, so restore
> > > > always triggers "Expected vmdescription section, but got 0".  This is
> > > > not a problem when restore comes from a file.  However, when QEMU runs
> > > > in a linux stubdom and comes over a console, EOF is not received.  This
> > > > causes a delay restoring - though it does restore.
> > > >
> > > > Setting suppress-vmdesc skips looking for the vmdesc during restore and
> > > > avoids the wait.
> > >
> > > suppress-vmdesc is only used during restore, right? So starting a guest
> > > without it, saving the guest and restoring the guest with
> > > suppress-vmdesc=on added will work as intended? (I'm checking that migration
> > > across update of QEMU will work.)
> >
> > vmdesc is a json description of the migration stream that comes after
> > the QEMU migration stream.  For our purposes, <migration
> > stream><vmdesc json blob>.  Normal QEMU savevm will generate it,
> > unless suppress-vmdesc is set.  QEMU restore will read it because:
> > "Try to read in the VMDESC section as well, so that dumping tools that
> > intercept our migration stream have the chance to see it."
> >
> > Xen save does not go through savevm, but instead
> > xen-save-devices-state, which is a subset of the QEMU savevm.  It
> > skips RAM since that is read out through Xen interfaces.  Xen uses
> > xen-load-devices-state to restore device state.  That goes through the
> > common qemu_loadvm_state which tries to read the vmdesc stream.
> >
> > For Xen, yes, suppress-vmdesc only matters for the restore case, and
> > it suppresses the attempt to read the vmdesc.  I think every Xen
> > restore currently has "Expected vmdescription section, but got -1" in
> > the -dm.log since the vmdesc is missing.  I have not tested restoring
> > across this change, but since it just controls reading and discarding
> > the vmdesc stream, I don't think it will break migration across
> > update.
>
> Thanks for the explanation.
>
> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
>
> Do you think you could send a patch for libxl as well? In some cases
> libxl may use the "pc" machine instead of "xenfv". I can send the
> patch otherwise.

I should be able to, yes.

Regards,
Jason
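The save/restore framing Jason describes can be illustrated with a toy
stream: device state followed by an optional trailing JSON vmdesc, which
the reader skips entirely when suppression is requested. This is only a
hedged sketch of the <migration stream><vmdesc json blob> layout; the
marker byte and length fields below are invented and do not match QEMU's
actual wire format:

```python
import io
import json
import struct

VMDESC_MARKER = 0x06  # invented marker byte, not QEMU's real section id

def write_state(buf, device_state, suppress_vmdesc=False):
    # Device state first; this stands in for the migration stream proper.
    buf.write(struct.pack(">I", len(device_state)))
    buf.write(device_state)
    if not suppress_vmdesc:
        # savevm-style trailing JSON description of the stream.
        desc = json.dumps({"devices": ["example"]}).encode()
        buf.write(struct.pack(">BI", VMDESC_MARKER, len(desc)))
        buf.write(desc)

def read_state(buf, suppress_vmdesc=False):
    size, = struct.unpack(">I", buf.read(4))
    state = buf.read(size)
    if suppress_vmdesc:
        # suppress-vmdesc: never attempt the trailing read, so a
        # console that delivers no EOF cannot stall the restore.
        return state, None
    header = buf.read(1)
    if len(header) != 1 or header[0] != VMDESC_MARKER:
        # cf. "Expected vmdescription section, but got ..." in -dm.log
        return state, None
    length, = struct.unpack(">I", buf.read(4))
    return state, json.loads(buf.read(length).decode())
```

On a seekable file the missing vmdesc is harmless (the read just fails),
which is why the delay only shows up when the stream is a console that
never signals EOF.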


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 17:58:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 17:58:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8211.21888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTTzP-0008Ry-R6; Fri, 16 Oct 2020 17:58:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8211.21888; Fri, 16 Oct 2020 17:58:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTTzP-0008Rr-Nr; Fri, 16 Oct 2020 17:58:03 +0000
Received: by outflank-mailman (input) for mailman id 8211;
 Fri, 16 Oct 2020 17:58:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTTzO-0008Rm-Vm
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 17:58:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88b99e78-afd4-4626-bbca-2e5685717e4f;
 Fri, 16 Oct 2020 17:58:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTTzL-00031Y-TZ; Fri, 16 Oct 2020 17:57:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTTzL-0001zt-MN; Fri, 16 Oct 2020 17:57:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTTzL-0005OE-LU; Fri, 16 Oct 2020 17:57:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kTTzO-0008Rm-Vm
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 17:58:03 +0000
X-Inumbo-ID: 88b99e78-afd4-4626-bbca-2e5685717e4f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 88b99e78-afd4-4626-bbca-2e5685717e4f;
	Fri, 16 Oct 2020 17:58:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QoG77G68HBYAFzkcLUA8BNrjrZ7/WgLbZob3AU0yUPo=; b=Je3RM1ARdLoZjsZu+ZlGg7ZkTx
	L+QIuXMM9mLOMTjCynuXmGFw+16dcDMbXxznzZrYnaSAOA8JtErz04r5Ge9PwGLu0OdO38N/a+nyG
	7HfJ1BormhBRUvptVLv5K4iFGsf9GOPnUf/6xnMMmpo6mcx2dHt1qXkRraVuO9mGtbY4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTTzL-00031Y-TZ; Fri, 16 Oct 2020 17:57:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTTzL-0001zt-MN; Fri, 16 Oct 2020 17:57:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTTzL-0005OE-LU; Fri, 16 Oct 2020 17:57:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155891-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155891: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=a7d977040bd82b89d1fe5ef32d488bfd10db2dbc
X-Osstest-Versions-That:
    ovmf=d25fd8710d6c8fc11582210fb1f8480c0d98416b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 17:57:59 +0000

flight 155891 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155891/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a7d977040bd82b89d1fe5ef32d488bfd10db2dbc
baseline version:
 ovmf                 d25fd8710d6c8fc11582210fb1f8480c0d98416b

Last test of basis   155881  2020-10-16 01:40:02 Z    0 days
Testing same since   155891  2020-10-16 10:41:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d25fd8710d..a7d977040b  a7d977040bd82b89d1fe5ef32d488bfd10db2dbc -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 18:02:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 18:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8215.21901 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTU3q-0000vE-Es; Fri, 16 Oct 2020 18:02:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8215.21901; Fri, 16 Oct 2020 18:02:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTU3q-0000v7-Bp; Fri, 16 Oct 2020 18:02:38 +0000
Received: by outflank-mailman (input) for mailman id 8215;
 Fri, 16 Oct 2020 18:02:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTU3p-0000uz-45
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 18:02:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9595c499-7def-4a44-adc8-9d87c05bd8a7;
 Fri, 16 Oct 2020 18:02:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTU3n-0003DB-FP; Fri, 16 Oct 2020 18:02:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTU3n-00029G-86; Fri, 16 Oct 2020 18:02:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTU3n-0007EU-7c; Fri, 16 Oct 2020 18:02:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kTU3p-0000uz-45
	for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 18:02:37 +0000
X-Inumbo-ID: 9595c499-7def-4a44-adc8-9d87c05bd8a7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9595c499-7def-4a44-adc8-9d87c05bd8a7;
	Fri, 16 Oct 2020 18:02:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GeUUBp+A+ur2mDpyvAUgs58gSRlc3jKHuu05VfX0M5U=; b=ifb4jca7wf2q5+zZHI5o7pG1l4
	zP87dE/c/f4gB3vGHIDphy2Gxg3EbaG47sBf72SEdbC0DUfyMXAnO6FLfWvCggURidskZPp9Ewldp
	NXstwCdDs/vFg49FTUZT++MoRQN3+18/FT2uXsXk4hC7noz1QiQksxrUuQFU0qMGDyFE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTU3n-0003DB-FP; Fri, 16 Oct 2020 18:02:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTU3n-00029G-86; Fri, 16 Oct 2020 18:02:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kTU3n-0007EU-7c; Fri, 16 Oct 2020 18:02:35 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 155900: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
X-Osstest-Versions-That:
    xen=a7952a320c1e202a218702bfdb14f75132f04894
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 18:02:35 +0000

flight 155900 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155900/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc
baseline version:
 xen                  a7952a320c1e202a218702bfdb14f75132f04894

Last test of basis   155895  2020-10-16 12:00:27 Z    0 days
Testing same since   155900  2020-10-16 15:04:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a7952a320c..0dfddb2116  0dfddb2116e3757f77a691a3fe335173088d69dc -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 19:11:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 19:11:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8222.21921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTV8V-00076O-Gd; Fri, 16 Oct 2020 19:11:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8222.21921; Fri, 16 Oct 2020 19:11:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTV8V-00076H-Dd; Fri, 16 Oct 2020 19:11:31 +0000
Received: by outflank-mailman (input) for mailman id 8222;
 Fri, 16 Oct 2020 19:11:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTV8U-00075p-2q
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 19:11:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43f5b9ef-2da1-4086-a6e7-2d3f6a0a8649;
 Fri, 16 Oct 2020 19:11:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTV8M-0004Zi-7R; Fri, 16 Oct 2020 19:11:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTV8M-0005FY-0I; Fri, 16 Oct 2020 19:11:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTV8L-00074z-W4; Fri, 16 Oct 2020 19:11:21 +0000
X-Inumbo-ID: 43f5b9ef-2da1-4086-a6e7-2d3f6a0a8649
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=56+bCDwalRtyZSdizf1XsNO6UNrb6k0DqDRzwu4017g=; b=yUgdqufXCoyUyk1zBYTZyKEGkp
	UZtx7NSJDAVWiBE7SmjBCF+TVoiiBMHpMAEHIJyOaNb+kignB/cmPETWyz02Flj3bWsZjWTeIbg7Y
	pd+rs+yEWzmjEkcQjvXNEeNAu+O3pcjaPbVeOpyNw8cinoCKpzQe05mqeQxjUW69p78w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155886-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155886: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    linux=9ff9b0d392ea08090cd1780fb196f36dbb586529
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 19:11:21 +0000

flight 155886 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155886/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-xl-pvhv2-amd 12 debian-install          fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl          12 debian-install           fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 linux                9ff9b0d392ea08090cd1780fb196f36dbb586529
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   76 days
Failing since        152366  2020-08-01 20:49:34 Z   75 days  129 attempts
Testing same since   155886  2020-10-16 05:36:37 Z    0 days    1 attempts

------------------------------------------------------------
3152 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 557116 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 20:57:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 20:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8234.21942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTWmm-0007eT-Jn; Fri, 16 Oct 2020 20:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8234.21942; Fri, 16 Oct 2020 20:57:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTWmm-0007eM-G1; Fri, 16 Oct 2020 20:57:12 +0000
Received: by outflank-mailman (input) for mailman id 8234;
 Fri, 16 Oct 2020 20:57:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTWml-0007eH-Rn
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 20:57:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae7a1786-80d2-4269-b176-d51cda7ba497;
 Fri, 16 Oct 2020 20:57:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTWmg-0006lv-QU; Fri, 16 Oct 2020 20:57:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTWmg-0001z4-GO; Fri, 16 Oct 2020 20:57:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTWmg-0008Qs-Fr; Fri, 16 Oct 2020 20:57:06 +0000
X-Inumbo-ID: ae7a1786-80d2-4269-b176-d51cda7ba497
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o1O/AAhkoZFOoB6A7zroyFuJ6rrxG7SKkymndTLC4WU=; b=PZcPCkQ7QXpHIw1Lxepp5J8j72
	PhRhVZi+ojs1YpBLciI/Bgt8Tt8c/zam3nIdWR56+VvndwyKQHWNiG4DZX6rAGsImF2Slknznhwe3
	SqSPkPwCycCOQTBsHQSKI7hySUG+bXCKcQxYXN4EFupvE5rzCa5B/yNmLJmoUdWJCn3g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155888-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155888: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-pvshim:guest-localmigrate/x10:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3e40748834923798aa57e3751db13a069e2c617b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 20:57:06 +0000

flight 155888 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155888/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-pvshim   20 guest-localmigrate/x10   fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3e40748834923798aa57e3751db13a069e2c617b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   57 days
Failing since        152659  2020-08-21 14:07:39 Z   56 days   98 attempts
Testing same since   155888  2020-10-16 06:19:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   fail    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 45701 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 21:38:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 21:38:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8239.21958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTXQM-0002rJ-OI; Fri, 16 Oct 2020 21:38:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8239.21958; Fri, 16 Oct 2020 21:38:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTXQM-0002rC-KY; Fri, 16 Oct 2020 21:38:06 +0000
Received: by outflank-mailman (input) for mailman id 8239;
 Fri, 16 Oct 2020 21:38:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VcLu=DX=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTXQL-0002qh-Cj
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 21:38:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9459d60c-64c8-4aca-988a-59880bd8d535;
 Fri, 16 Oct 2020 21:37:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTXQD-0007cG-3p; Fri, 16 Oct 2020 21:37:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTXQC-0004Pn-PT; Fri, 16 Oct 2020 21:37:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTXQC-0000Eg-Ox; Fri, 16 Oct 2020 21:37:56 +0000
X-Inumbo-ID: 9459d60c-64c8-4aca-988a-59880bd8d535
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5wUro19GRikv1n/Nn4Nhk3IVymu5d+Y7sXdvLo1H+f0=; b=5VRb+0dA+hyw7VxNS/FI7QE0BP
	ySh5xQO2HVJkXHH6aGxYzReQ3J432jXf0+oYTVohOunrXbJSuxqe7djvtuflCYCG/FQUPKk7MLIsF
	kf85wznIiBlTVDW5pMh0nK48gCIHgyUaiVWQB3DtNNDqXLhOWmNpaOTFC8XHr/KjL1hA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155892-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-4.14-testing test] 155892: regressions - FAIL
X-Osstest-Failures:
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e6a4cbe48cfca6adbe4e7acdf7e405c8315facaa
X-Osstest-Versions-That:
    qemuu=ea6d3cd1ed79d824e605a70c3626bc437c386260
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 16 Oct 2020 21:37:56 +0000

flight 155892 qemu-upstream-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155892/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 151900

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop              fail never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop              fail never pass

version targeted for testing:
 qemuu                e6a4cbe48cfca6adbe4e7acdf7e405c8315facaa
baseline version:
 qemuu                ea6d3cd1ed79d824e605a70c3626bc437c386260

Last test of basis   151900  2020-07-14 18:08:51 Z   94 days
Testing same since   155892  2020-10-16 11:08:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Melnychenko <andrew@daynix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bruce Rogers <brogers@suse.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Dan Robertson <dan@dlrobertson.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eric Blake <eblake@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  lichun <lichun@ruijie.com.cn>
  Liu Yi L <yi.l.liu@intel.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Max Reitz <mreitz@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Oleksandr Natalenko <oleksandr@redhat.com>
  Omar Sandoval <osandov@fb.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Pour <raphael.pour@hetzner.com>
  Richard Henderson <richard.henderson@linaro.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Sven Schnelle <svens@stackframe.org>
  Thomas Huth <thuth@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Yuri Benditovich <yuri.benditovich@daynix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2078 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 16 22:34:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 16 Oct 2020 22:34:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8246.21975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTYII-00087R-4J; Fri, 16 Oct 2020 22:33:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8246.21975; Fri, 16 Oct 2020 22:33:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTYII-00087K-1S; Fri, 16 Oct 2020 22:33:50 +0000
Received: by outflank-mailman (input) for mailman id 8246;
 Fri, 16 Oct 2020 22:33:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NCZ+=DX=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kTYIG-00087F-KA
 for xen-devel@lists.xenproject.org; Fri, 16 Oct 2020 22:33:48 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f509f265-2c51-4e95-ba41-330921ffc062;
 Fri, 16 Oct 2020 22:33:47 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09GMXPlK024387
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 16 Oct 2020 18:33:30 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09GMXNFv024386;
 Fri, 16 Oct 2020 15:33:23 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: f509f265-2c51-4e95-ba41-330921ffc062
Date: Fri, 16 Oct 2020 15:33:23 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
        Alex Bennée <alex.bennee@linaro.org>, bertrand.marquis@arm.com,
        andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Xen-ARM EFI/ACPI problems (was: Re: [PATCH 0/4] xen/arm: Unbreak
 ACPI)
Message-ID: <20201016223323.GA23508@mattapan.m5p.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <CAA93ih2EiyCnuL4sw1OLw+XEWa7sN3zJWvsnxHfx9b9Fq+cOxw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAA93ih2EiyCnuL4sw1OLw+XEWa7sN3zJWvsnxHfx9b9Fq+cOxw@mail.gmail.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Sep 28, 2020 at 10:00:35PM +0900, Masami Hiramatsu wrote:
> I've missed the explanation of the attached patch. This prototype
> patch was also needed for booting up the Xen on my box (for the system
> which has no SPCR).

I'm pretty sure of the following on the hardware I'm dealing with, but I
don't know about the hardware you're dealing with.

Does your device have a framebuffer?  Have you ever tried booting your
device with the framebuffer disabled, and did it succeed? (i.e. booted
with an HDMI/DVI cable disconnected, or with the device on the other end
*completely* powered down)

Based upon the back and forth, both on xen-devel and in some messages
which were split off as not being of general interest, an observation I
had made a while ago finally prompted me to try recreating the behaviour.

On the device I'm on (Raspberry Pi 4B with Tianocore -> GRUB -> Xen) I
discovered that an SPCR table shows up if I boot while the device the
output is plugged into is powered down.  I'm guessing Tianocore advises
GRUB/Linux/Xen to boot with a serial console by presenting a Serial Port
Console Redirection (SPCR) table, whereas if the display device is
functioning, the absence of an SPCR table is supposed to indicate the
console is on the framebuffer.

This means the ACPI_FAILURE case in acpi_iomem_deny_access() simply needs
to be filled in, similarly to how it likely is done on x86: allocate a
serial port for Xen's use as its console, present a console to domain 0
as hvc0, and hide the physical port from domain 0.



The next issue for me will be getting the framebuffer operational.
Apparently the Xen-ARM EFI implementation doesn't provide any EFI
variables to domain 0?  Jan Beulich, your name was mentioned as someone
likely to have ideas for getting Linux's efifb code operational on
Xen-ARM.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Oct 17 04:26:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 04:26:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8260.22010 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTdnG-0005Ug-Sp; Sat, 17 Oct 2020 04:26:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8260.22010; Sat, 17 Oct 2020 04:26:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTdnG-0005UZ-O8; Sat, 17 Oct 2020 04:26:10 +0000
Received: by outflank-mailman (input) for mailman id 8260;
 Sat, 17 Oct 2020 04:26:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMVr=DY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTdnF-0005TM-Lu
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 04:26:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc19fdcc-4f47-4a1c-9cba-4daeee3f8a30;
 Sat, 17 Oct 2020 04:26:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTdn6-0000UO-Mx; Sat, 17 Oct 2020 04:26:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTdn6-0001Qu-EJ; Sat, 17 Oct 2020 04:26:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTdn6-0006lg-Dn; Sat, 17 Oct 2020 04:26:00 +0000
X-Inumbo-ID: dc19fdcc-4f47-4a1c-9cba-4daeee3f8a30
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dc19fdcc-4f47-4a1c-9cba-4daeee3f8a30;
	Sat, 17 Oct 2020 04:26:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wElvmdJl1MvloOzq9vQoqx1QUtNT8zsXM4L8n46cAMI=; b=QM6ujTo9C+nEwqwJpy6dfu0zDT
	O1X97oYftE/tIbaozCX8XNJLlur0Ngvs+Dxul2sZsCISUHQJgB/0kn1ukB4EwSG+jpSzzjy4MCeR0
	kifBCQemOYHLFcPXrbYJA0x2dPcOWrOYID5cEXFhJc2FtwELJWXRSUEmIzxO+cSqeiWU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155894-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155894: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-pair:guest-migrate/dst_host/src_host:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ee2e66674f36b6d27a95f4ddf27226905cc63a4
X-Osstest-Versions-That:
    xen=6ee2e66674f36b6d27a95f4ddf27226905cc63a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 17 Oct 2020 04:26:00 +0000

flight 155894 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155894/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-pair   27 guest-migrate/dst_host/src_host fail pass in 155880
 test-armhf-armhf-xl-vhd      17 guest-start/debian.repeat  fail pass in 155880
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 155880

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2 fail in 155880 blocked in 155894
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155880
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155880
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155880
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155880
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155880
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155880
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155880
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155880
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155880
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop              fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 xen                  6ee2e66674f36b6d27a95f4ddf27226905cc63a4
baseline version:
 xen                  6ee2e66674f36b6d27a95f4ddf27226905cc63a4

Last test of basis   155894  2020-10-16 11:23:43 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Oct 17 05:13:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 05:13:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8263.22022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTeWs-0001js-7a; Sat, 17 Oct 2020 05:13:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8263.22022; Sat, 17 Oct 2020 05:13:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTeWs-0001jl-4a; Sat, 17 Oct 2020 05:13:18 +0000
Received: by outflank-mailman (input) for mailman id 8263;
 Sat, 17 Oct 2020 05:13:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lxxR=DY=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kTeWq-0001jg-W4
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 05:13:17 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4921258-78b9-42e5-8b72-4b1865cd4ef8;
 Sat, 17 Oct 2020 05:13:15 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09H5Coji026800
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sat, 17 Oct 2020 01:12:55 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09H5CnCP026799;
 Fri, 16 Oct 2020 22:12:49 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: d4921258-78b9-42e5-8b72-4b1865cd4ef8
Date: Fri, 16 Oct 2020 22:12:49 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>,
        Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
        Alex Bennée <alex.bennee@linaro.org>, bertrand.marquis@arm.com,
        andre.przywara@arm.com, Julien Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: Xen-ARM EFI/ACPI problems (was: Re: [PATCH 0/4] xen/arm: Unbreak
 ACPI)
Message-ID: <20201017051249.GA26457@mattapan.m5p.com>
References: <20200926205542.9261-1-julien@xen.org>
 <CAA93ih3-gTAEzV=yYS-9cHGyN9rfAC28Xeyk8Gsmi7D2BS_OWQ@mail.gmail.com>
 <CAA93ih2EiyCnuL4sw1OLw+XEWa7sN3zJWvsnxHfx9b9Fq+cOxw@mail.gmail.com>
 <20201016223323.GA23508@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201016223323.GA23508@mattapan.m5p.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 16, 2020 at 03:33:23PM -0700, Elliott Mitchell wrote:
> On the device I'm on (Raspberry Pi 4B with Tianocore -> GRUB -> Xen) I
> discovered that an SPCR table shows up if I boot while the device the
> output is plugged into is powered down.  I'm guessing this causes
> Tianocore to advise GRUB/Linux/Xen to boot with a serial console
> (presenting a Serial Port Console Redirect table), whereas if the
> display device is functioning, the absence of an SPCR is supposed to
> indicate console on the framebuffer.
> 
> This means the ACPI_FAILURE case in acpi_iomem_deny_access() simply needs
> to be filled in, similarly to how it likely is on x86.  Allocate the
> serial port for Xen's use as the console, present it to domain 0 as
> hvc0, and hide the physical UART from domain 0.

Looks like things are worse than I thought.

Upon examining some of my `dmesg` copies, it looks like Linux interprets
the ignore_uart field in STAO tables as applying strictly to the UART
referenced in the SPCR table.  As such, when booted with the framebuffer
available, Linux thinks it can freely access the UART found in the
hardware table.  The specification for the STAO table is apparently
garbage, since it only allows a hypervisor to tell a VM to ignore a
single UART.  Instead, it really needs to be possible to mask arbitrary
devices.  :-(

As such, for ARM devices that can include framebuffers, I'm guessing Xen
will need to either pass a modified table to domain 0 or simulate the
device sufficiently to prevent concurrent access.  This could be as
simple as simulating an MMIO page that discards all writes.





> Next issue for me will be getting the framebuffer operational.
> Apparently the Xen-ARM EFI implementation doesn't provide any EFI
> variables to domain 0?  Jan Beulich, your name was mentioned as the
> person likely to have ideas for getting Linux's efifb code operational
> on Xen-ARM.

There may be multiple pieces to this.

For the framebuffer, this might be as simple as parsing the BGRT table
and ensuring its addresses are directly mapped into domain 0.  I just
noticed in dmesg, "efi_bgrt: Ignoring BGRT: invalid image address".  I'm
guessing one thing got remapped, but a second didn't.

The other need for EFI variable access is for modifying EFI's boot
process.  While I suspect it may be feasible for most users to reboot to
a kernel directly on hardware to update GRUB or another bootloader,
adding an extra step increases the potential for a failure at the Wrong
Time.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Oct 17 09:25:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 09:25:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8281.22049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTiSr-0006WA-AC; Sat, 17 Oct 2020 09:25:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8281.22049; Sat, 17 Oct 2020 09:25:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTiSr-0006W3-7D; Sat, 17 Oct 2020 09:25:25 +0000
Received: by outflank-mailman (input) for mailman id 8281;
 Sat, 17 Oct 2020 09:25:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMVr=DY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTiSq-0006Vy-0Y
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 09:25:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1d8ecbff-6514-47f8-94a6-59c115f7c052;
 Sat, 17 Oct 2020 09:25:21 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTiSn-0007Xu-Au; Sat, 17 Oct 2020 09:25:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTiSn-0007m6-22; Sat, 17 Oct 2020 09:25:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTiSn-0005Hb-1c; Sat, 17 Oct 2020 09:25:21 +0000
X-Inumbo-ID: 1d8ecbff-6514-47f8-94a6-59c115f7c052
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w2rsnAOCrgTTNyE7wFRQ6UfX9pk2pPmIo2yVrdvpyy0=; b=eJ2LP2Nmq7UtGevhJNDbA0zoiI
	zc1Okxdj3qxsM/bk4aa/j3msjkrB2LQOBZLUmiT+eBf9XtoU+g1sjzwSBz/XPCubBzk/qE7C0XL2P
	vjlaYCXUrgVxykppqUWVBnvQLrkQWd9PNMhIR+9/wj+M29Vr0CT8cZU1TUXT1kx9UzqM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155908-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155908: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1
X-Osstest-Versions-That:
    ovmf=a7d977040bd82b89d1fe5ef32d488bfd10db2dbc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 17 Oct 2020 09:25:21 +0000

flight 155908 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155908/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1
baseline version:
 ovmf                 a7d977040bd82b89d1fe5ef32d488bfd10db2dbc

Last test of basis   155891  2020-10-16 10:41:44 Z    0 days
Testing same since   155908  2020-10-16 18:15:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <Ard.Biesheuvel@arm.com>
  Hao A Wu <hao.a.wu@intel.com>
  Laszlo Ersek <lersek@redhat.com>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a7d977040b..30f0ec8d80  30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Oct 17 12:25:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 12:25:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8295.22074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTlH9-0004qc-Eh; Sat, 17 Oct 2020 12:25:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8295.22074; Sat, 17 Oct 2020 12:25:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTlH9-0004qV-Bd; Sat, 17 Oct 2020 12:25:31 +0000
Received: by outflank-mailman (input) for mailman id 8295;
 Sat, 17 Oct 2020 12:25:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMVr=DY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTlH7-0004q2-SQ
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 12:25:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d2ef225-a9a4-4406-bf27-f7f61da1b7dd;
 Sat, 17 Oct 2020 12:25:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTlH0-0002qI-50; Sat, 17 Oct 2020 12:25:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTlGz-0008Rf-Sb; Sat, 17 Oct 2020 12:25:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTlGz-0002jb-S4; Sat, 17 Oct 2020 12:25:21 +0000
X-Inumbo-ID: 5d2ef225-a9a4-4406-bf27-f7f61da1b7dd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8ImhKE5y+/4eVOercRH5TQ7ZPCVdAbILLYnfczjWwZM=; b=yR7fxVoRLW9O/gEA2g9q73WUEb
	suR0IcA5hk8xXuOfeETV81kqVuc0AxWYEPQH1JGmFKwL0TcXT7+/W6SPNB+Ozdaqp85Q4Cak/WO19
	hDz7uTml9M9aiq2trPl76Rrf8cSwaAFLIPqmUX0NlY223hB1at23aZITvfYCBlAiyz44=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155910-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155910: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c4cf498dc0241fa2d758dba177634268446afb06
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 17 Oct 2020 12:25:21 +0000

flight 155910 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155910/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                c4cf498dc0241fa2d758dba177634268446afb06
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   77 days
Failing since        152366  2020-08-01 20:49:34 Z   76 days  130 attempts
Testing same since   155910  2020-10-16 19:39:12 Z    0 days    1 attempts

------------------------------------------------------------
3161 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 561712 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 17 12:54:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 12:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8299.22088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTljI-0007SQ-JP; Sat, 17 Oct 2020 12:54:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8299.22088; Sat, 17 Oct 2020 12:54:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTljI-0007SJ-Fp; Sat, 17 Oct 2020 12:54:36 +0000
Received: by outflank-mailman (input) for mailman id 8299;
 Sat, 17 Oct 2020 12:54:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMVr=DY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTljH-0007Rj-8c
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 12:54:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0efae0f0-6045-4780-b304-1494ad9dc417;
 Sat, 17 Oct 2020 12:54:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTlj8-0003QG-87; Sat, 17 Oct 2020 12:54:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTlj7-00025I-Ue; Sat, 17 Oct 2020 12:54:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTlj7-0003sM-UB; Sat, 17 Oct 2020 12:54:25 +0000
X-Inumbo-ID: 0efae0f0-6045-4780-b304-1494ad9dc417
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TiABAgA1ia46BZ/DdvlkZiNYp7038kPId1L3kwWeops=; b=jTDGl2FsKiM9glJn5PpOX1IuHs
	6bvjnlcTIYwSTmsqQr6I3aG437lJVIvrtSsVpqcgBVyklFvK4MH7TeNvErG4lgKJCnL9Id2zTvf43
	2QS+qwHy31rHI4pmRtR1ZVd1unm30B8Z4e0/Wl6SYrQNOCEy0YT73zkm/Q0GHSAzAiUg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155920-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155920: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=6a0e0dc7ba8a62035fb1693e0c91bb53214ec41f
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 17 Oct 2020 12:54:25 +0000

flight 155920 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155920/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6a0e0dc7ba8a62035fb1693e0c91bb53214ec41f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z   99 days
Failing since        151818  2020-07-11 04:18:52 Z   98 days   93 attempts
Testing same since   155885  2020-10-16 04:19:10 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 21456 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 17 13:38:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 13:38:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8305.22104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTmPm-0002Zf-0q; Sat, 17 Oct 2020 13:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8305.22104; Sat, 17 Oct 2020 13:38:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTmPl-0002ZY-Tz; Sat, 17 Oct 2020 13:38:29 +0000
Received: by outflank-mailman (input) for mailman id 8305;
 Sat, 17 Oct 2020 13:38:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMVr=DY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTmPk-0002ZT-My
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 13:38:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f370a20-2f27-4db8-8d3d-9b17cb249d2c;
 Sat, 17 Oct 2020 13:38:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTmPg-0004Jb-RN; Sat, 17 Oct 2020 13:38:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTmPg-0004kq-2r; Sat, 17 Oct 2020 13:38:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTmPg-0006Zs-2K; Sat, 17 Oct 2020 13:38:24 +0000
X-Inumbo-ID: 4f370a20-2f27-4db8-8d3d-9b17cb249d2c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZOj4EsSMCDqU7YzGaYjeLqQtt/VfP35kW96jCi3zsT4=; b=Yg+k4UybeTwXNR7G4RoCKJV1y5
	1CPtF6NYDk96snfPvVgydb8z6GFEPjFI06I5eV7bjzkGSZDtIwJJJSvMd90sl2LUIfwgTW9TljMWG
	beoUaOcCIqJMtj9B7Jl8jU9B0kUXuYGRL4hZzEHsZDsq4TgLDsN1SwEPCY5Wo/labHOs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155911-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155911: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-pair:guest-migrate/dst_host/src_host:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7daf8f8d011cdd5d3e86930ed2bde969425c790c
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 17 Oct 2020 13:38:24 +0000

flight 155911 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155911/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-pair 27 guest-migrate/dst_host/src_host fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                7daf8f8d011cdd5d3e86930ed2bde969425c790c
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   58 days
Failing since        152659  2020-08-21 14:07:39 Z   56 days   99 attempts
Testing same since   155911  2020-10-16 21:08:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        fail    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 46144 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 17 14:19:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 14:19:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8309.22117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTn3Y-00065f-0T; Sat, 17 Oct 2020 14:19:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8309.22117; Sat, 17 Oct 2020 14:19:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTn3X-00065Y-TZ; Sat, 17 Oct 2020 14:19:35 +0000
Received: by outflank-mailman (input) for mailman id 8309;
 Sat, 17 Oct 2020 14:19:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMVr=DY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTn3W-00065T-5p
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 14:19:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d557fb36-7150-4a5d-96e1-bafdcf3bc8c0;
 Sat, 17 Oct 2020 14:19:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTn3S-0005D9-5p; Sat, 17 Oct 2020 14:19:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTn3R-0006HU-TD; Sat, 17 Oct 2020 14:19:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTn3R-0002lG-Se; Sat, 17 Oct 2020 14:19:29 +0000
X-Inumbo-ID: d557fb36-7150-4a5d-96e1-bafdcf3bc8c0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3J2Pz7vYcb9QQQABvFWaZdjLFnfIGBMf0IOF/X/wL3c=; b=h1LYgTLbdPUlkhBxR9y8Nr9azr
	Uwq8/W6AIMoygvoKx96fxdzER0QXYSW5JDjjZq1m+cRUzFpTETVOTA/hjc51L5rCLszJ7vR2MyY7r
	FGIxLyN7XzccyNXUzjntqOBl0P4nVP2yz49U/ZrKrUnStmR7bPTuIKO9gcLTmHo5qWe8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155914-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-4.14-testing test] 155914: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:heisenbug
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    qemu-upstream-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e6a4cbe48cfca6adbe4e7acdf7e405c8315facaa
X-Osstest-Versions-That:
    qemuu=ea6d3cd1ed79d824e605a70c3626bc437c386260
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 17 Oct 2020 14:19:29 +0000

flight 155914 qemu-upstream-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155914/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 12 debian-di-install fail in 155892 pass in 155914
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 155892

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 151900
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 151900
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 151900
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 151900
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 151900
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 151900
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 151900
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                e6a4cbe48cfca6adbe4e7acdf7e405c8315facaa
baseline version:
 qemuu                ea6d3cd1ed79d824e605a70c3626bc437c386260

Last test of basis   151900  2020-07-14 18:08:51 Z   94 days
Testing same since   155892  2020-10-16 11:08:26 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  Alexander Duyck <alexander.h.duyck@linux.intel.com>
  Alistair Francis <alistair.francis@wdc.com>
  Allan Peramaki <aperamak@pp1.inet.fi>
  Andrew Melnychenko <andrew@daynix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Bruce Rogers <brogers@suse.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Claudio Fontana <cfontana@suse.de>
  Cleber Rosa <crosa@redhat.com>
  Cornelia Huck <cohuck@redhat.com>
  Dan Robertson <dan@dlrobertson.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eric Blake <eblake@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Halil Pasic <pasic@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Igor Mammedov <imammedo@redhat.com>
  Jason Wang <jasowang@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  lichun <lichun@ruijie.com.cn>
  Liu Yi L <yi.l.liu@intel.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Max Reitz <mreitz@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michal Privoznik <mprivozn@redhat.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Oleksandr Natalenko <oleksandr@redhat.com>
  Omar Sandoval <osandov@fb.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Raphael Pour <raphael.pour@hetzner.com>
  Richard Henderson <richard.henderson@linaro.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Sven Schnelle <svens@stackframe.org>
  Thomas Huth <thuth@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Yuri Benditovich <yuri.benditovich@daynix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   ea6d3cd1ed..e6a4cbe48c  e6a4cbe48cfca6adbe4e7acdf7e405c8315facaa -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Sat Oct 17 16:25:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 16:25:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8401.22460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTp0u-0002Ra-67; Sat, 17 Oct 2020 16:25:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8401.22460; Sat, 17 Oct 2020 16:25:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTp0u-0002RT-32; Sat, 17 Oct 2020 16:25:00 +0000
Received: by outflank-mailman (input) for mailman id 8401;
 Sat, 17 Oct 2020 16:24:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WdE0=DY=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kTp0t-0002RO-5m
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 16:24:59 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.90])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc975ec2-092d-4e57-a368-891660857f6e;
 Sat, 17 Oct 2020 16:24:56 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay05.hostedemail.com (Postfix) with ESMTP id E0A6918029120;
 Sat, 17 Oct 2020 16:24:55 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.133.149])
 (Authenticated sender: joe@perches.com)
 by omf13.hostedemail.com (Postfix) with ESMTPA;
 Sat, 17 Oct 2020 16:24:48 +0000 (UTC)
X-Inumbo-ID: fc975ec2-092d-4e57-a368-891660857f6e
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:327:355:379:599:857:960:966:967:973:982:988:989:1260:1277:1311:1313:1314:1345:1359:1434:1437:1515:1516:1518:1593:1594:1605:1730:1747:1777:1792:2194:2196:2198:2199:2200:2201:2393:2525:2561:2564:2682:2685:2828:2859:2894:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3622:3865:3866:3867:3870:3871:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4321:4385:5007:6742:6743:7576:7903:8557:8957:9025:10004:10848:11026:11232:11914:12043:12050:12295:12296:12297:12438:12555:12663:12712:12737:12740:12760:12895:13161:13225:13229:13439:13870:14096:14097:14659:21080:21325:21433:21451:21611:21627:21773:21990:30010:30034:30054:30055:30070:30090:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: soap40_350edcc27227
X-Filterd-Recvd-Size: 36889
Message-ID: <f530b7aeecbbf9654b4540cfa20023a4c2a11889.camel@perches.com>
Subject: Re: [RFC] treewide: cleanup unreachable breaks
From: Joe Perches <joe@perches.com>
To: trix@redhat.com, linux-kernel@vger.kernel.org, cocci
 <cocci@systeme.lip6.fr>
Cc: linux-edac@vger.kernel.org, linux-acpi@vger.kernel.org, 
 linux-pm@vger.kernel.org, xen-devel@lists.xenproject.org, 
 linux-block@vger.kernel.org, openipmi-developer@lists.sourceforge.net, 
 linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org, 
 linux-power@fi.rohmeurope.com, linux-gpio@vger.kernel.org, 
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org, 
 nouveau@lists.freedesktop.org, virtualization@lists.linux-foundation.org, 
 spice-devel@lists.freedesktop.org, linux-iio@vger.kernel.org, 
 linux-amlogic@lists.infradead.org,
 industrypack-devel@lists.sourceforge.net,  linux-media@vger.kernel.org,
 MPT-FusionLinux.pdl@broadcom.com,  linux-scsi@vger.kernel.org,
 linux-mtd@lists.infradead.org,  linux-can@vger.kernel.org,
 netdev@vger.kernel.org,  intel-wired-lan@lists.osuosl.org,
 ath10k@lists.infradead.org,  linux-wireless@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com,  linux-nfc@lists.01.org,
 linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, 
 linux-samsung-soc@vger.kernel.org, platform-driver-x86@vger.kernel.org, 
 patches@opensource.cirrus.com, storagedev@microchip.com, 
 devel@driverdev.osuosl.org, linux-serial@vger.kernel.org, 
 linux-usb@vger.kernel.org, usb-storage@lists.one-eyed-alien.net, 
 linux-watchdog@vger.kernel.org, ocfs2-devel@oss.oracle.com,
 bpf@vger.kernel.org,  linux-integrity@vger.kernel.org,
 linux-security-module@vger.kernel.org,  keyrings@vger.kernel.org,
 alsa-devel@alsa-project.org,  clang-built-linux@googlegroups.com
Date: Sat, 17 Oct 2020 09:24:47 -0700
In-Reply-To: <20201017160928.12698-1-trix@redhat.com>
References: <20201017160928.12698-1-trix@redhat.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.36.4-0ubuntu1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sat, 2020-10-17 at 09:09 -0700, trix@redhat.com wrote:
> From: Tom Rix <trix@redhat.com>
> 
> This is an upcoming change to clean up a new warning treewide.
> I am wondering whether the change should be one mega patch (see below),
> a normal patch per file (about 100 patches), or somewhere in between by
> collecting early acks.
> 
> clang has a number of useful new warnings; see
> https://clang.llvm.org/docs/DiagnosticsReference.html
> 
> This change cleans up -Wunreachable-code-break
> https://clang.llvm.org/docs/DiagnosticsReference.html#wunreachable-code-break
> for 266 of 485 warnings in this week's linux-next, allyesconfig on x86_64.

Early acks/individual patches by subsystem would be good.
Better still would be an automated cocci script.

The existing checkpatch test for UNNECESSARY_BREAK
has a few too many false positives.

From a script run on linux-next on July 28th:

arch/arm/mach-s3c24xx/mach-rx1950.c:266: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/arm/nwfpe/fpa11_cprt.c:38: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/arm/nwfpe/fpa11_cprt.c:41: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/include/asm/mach-au1x00/au1000.h:684: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/include/asm/mach-au1x00/au1000.h:687: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/include/asm/mach-au1x00/au1000.h:690: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/include/asm/mach-au1x00/au1000.h:693: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/include/asm/mach-au1x00/au1000.h:697: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/include/asm/mach-au1x00/au1000.h:700: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c:276: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c:279: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c:282: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c:287: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c:290: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/rb532/setup.c:76: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/mips/rb532/setup.c:79: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/powerpc/include/asm/kvm_book3s_64.h:231: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/powerpc/include/asm/kvm_book3s_64.h:234: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/powerpc/include/asm/kvm_book3s_64.h:237: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/powerpc/include/asm/kvm_book3s_64.h:240: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/powerpc/net/bpf_jit_comp.c:455: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/powerpc/platforms/cell/spufs/switch.c:2047: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/powerpc/platforms/cell/spufs/switch.c:2077: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/sh/boards/mach-landisk/gio.c:111: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/x86/kernel/cpu/mce/core.c:1734: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/x86/kernel/cpu/mce/core.c:1738: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
arch/x86/kernel/cpu/microcode/amd.c:218: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/acpi/utils.c:107: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/acpi/utils.c:132: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/acpi/utils.c:147: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/acpi/utils.c:158: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/ata/libata-scsi.c:3973: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/base/power/main.c:366: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/block/xen-blkback/blkback.c:1272: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/char/ipmi/ipmi_devintf.c:493: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/char/lp.c:625: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/char/mwave/mwavedd.c:406: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/cpufreq/e_powersaver.c:226: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/cpufreq/longhaul.c:596: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/crypto/atmel-sha.c:462: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/edac/amd64_edac.c:2464: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/edac/amd64_edac.c:2468: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/edac/amd64_edac.c:2471: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/edac/amd64_edac.c:2481: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/edac/amd64_edac.c:2485: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/edac/amd64_edac.c:2488: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/edac/amd64_edac.c:2491: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/amd/amdgpu/atombios_encoders.c:505: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/amd/amdgpu/atombios_encoders.c:528: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c:2220: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c:2253: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c:2110: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/gma500/cdv_intel_hdmi.c:346: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/mgag200/mgag200_mode.c:723: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/mgag200/mgag200_mode.c:727: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/mgag200/mgag200_mode.c:730: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/mgag200/mgag200_mode.c:734: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/mgag200/mgag200_mode.c:737: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c:126: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c:143: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c:150: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c:153: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c:174: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/qxl/qxl_ioctl.c:163: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/radeon/atombios_encoders.c:749: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/radeon/atombios_encoders.c:780: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/radeon/r300.c:1160: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/gpu/drm/radeon/radeon_i2c.c:462: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/infiniband/hw/cxgb4/qp.c:1972: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/infiniband/hw/cxgb4/qp.c:2021: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/infiniband/hw/cxgb4/qp.c:2026: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/input/mouse/synaptics_usb.c:207: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/ipack/devices/ipoctal.c:547: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/irqchip/irq-crossbar.c:291: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/irqchip/irq-mips-gic.c:643: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/isdn/mISDN/dsp_dtmf.c:174: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/macintosh/via-pmu-led.c:65: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/cx24117.c:1175: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/dib0090.c:407: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/dib0090.c:497: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drxd_hard.c:1625: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drx39xyj/drxj.c:2328: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drx39xyj/drxj.c:2655: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drx39xyj/drxj.c:3597: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drx39xyj/drxj.c:3621: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drx39xyj/drxj.c:3645: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drx39xyj/drxj.c:10947: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drx39xyj/drxj.c:11068: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/drx39xyj/drxj.c:11890: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:171: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:193: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:219: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:242: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:245: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:377: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:558: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:583: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:597: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:613: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:629: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:641: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:667: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:723: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:745: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/nxt200x.c:1117: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/si21xx.c:467: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/si21xx.c:470: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/dvb-frontends/stv0900_core.c:1948: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/tuners/mt2063.c:1852: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/b2c2/flexcop-usb.c:198: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/b2c2/flexcop-usb.c:521: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/gspca/pac_common.h:105: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/pwc/pwc-if.c:860: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/pwc/pwc-if.c:872: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/pwc/pwc-if.c:931: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/pwc/pwc-if.c:957: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/pwc/pwc-if.c:976: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/pwc/pwc-if.c:988: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/pwc/pwc-if.c:1001: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/pwc/pwc-if.c:1019: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/tm6000/tm6000-core.c:674: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/tm6000/tm6000-core.c:696: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/tm6000/tm6000-core.c:761: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/ttusb-dec/ttusb_dec.c:1105: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/ttusb-dec/ttusb_dec.c:1109: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/ttusb-dec/ttusb_dec.c:1160: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/media/usb/ttusb-dec/ttusb_dec.c:1164: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/message/fusion/mptbase.c:476: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/misc/mei/hbm.c:1307: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/mmc/host/atmel-mci.c:1919: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/mtd/devices/ms02-nv.c:289: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/mtd/mtdchar.c:884: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/mtd/mtdchar.c:894: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/net/ethernet/8390/mac8390.c:193: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/net/ethernet/8390/mac8390.c:206: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/net/ethernet/cisco/enic/enic_ethtool.c:437: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c:353: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/net/wireless/ath/ath9k/hw.c:2311: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c:1230: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c:1135: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/nfc/trf7970a.c:1385: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/nvdimm/claim.c:205: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/parport/parport_ip32.c:1862: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pci/controller/pci-v3-semi.c:664: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pci/hotplug/ibmphp_pci.c:297: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pci/hotplug/ibmphp_pci.c:1512: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pinctrl/pinctrl-rockchip.c:2718: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pinctrl/pinctrl-rockchip.c:2788: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pinctrl/samsung/pinctrl-s3c24xx.c:111: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pinctrl/samsung/pinctrl-s3c24xx.c:114: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pinctrl/samsung/pinctrl-s3c24xx.c:117: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pinctrl/samsung/pinctrl-s3c24xx.c:120: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pinctrl/samsung/pinctrl-s3c24xx.c:123: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/platform/x86/acer-wmi.c:795: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/platform/x86/sony-laptop.c:2470: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/platform/x86/sony-laptop.c:2476: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/platform/x86/wmi.c:1263: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/platform/x86/wmi.c:1266: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/platform/x86/wmi.c:1269: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pnp/pnpbios/rsparser.c:192: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pnp/pnpbios/rsparser.c:476: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/pnp/pnpbios/rsparser.c:745: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/power/supply/ipaq_micro_battery.c:98: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/power/supply/ipaq_micro_battery.c:101: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/power/supply/ipaq_micro_battery.c:104: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/power/supply/wm831x_power.c:671: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/s390/char/tape_34xx.c:654: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/s390/char/tape_3590.c:946: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/s390/scsi/zfcp_fc.c:980: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/s390/scsi/zfcp_fc.c:983: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/aic94xx/aic94xx_task.c:272: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/arcmsr/arcmsr_hba.c:2702: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/arcmsr/arcmsr_hba.c:2705: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/be2iscsi/be_mgmt.c:1251: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/be2iscsi/be_mgmt.c:1254: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/bfa/bfa_ioc.h:1001: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/bfa/bfa_ioc.h:1004: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/bfa/bfa_ioc.h:1007: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/bfa/bfa_ioc.h:1019: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/bfa/bfa_ioc.h:1022: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/bfa/bfa_ioc.h:1025: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/bnx2fc/bnx2fc_hwi.c:773: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/fcoe/fcoe.c:1897: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/hptiop.c:761: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/hpsa.c:7443: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/isci/phy.c:756: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/isci/phy.c:961: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/ipr.c:9490: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_debugfs.c:3347: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_debugfs.c:4387: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_debugfs.c:4439: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_debugfs.c:4453: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_debugfs.c:4495: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_debugfs.c:4520: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_debugfs.c:4523: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_scsi.c:4287: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_init.c:7189: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_sli.c:9192: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_sli.c:10075: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/lpfc/lpfc_sli.c:10237: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/mvumi.c:2299: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/nsp32.c:2114: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/nsp32.c:2125: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/qla2xxx/qla_mbx.c:4026: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/st.c:2849: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/scsi/sym53c8xx_2/sym_hipd.c:4599: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:907: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:910: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:913: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:916: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:998: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:1035: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:1322: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:1699: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:1702: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:1705: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c:1708: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/nozomi.c:417: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/nozomi.c:421: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/nozomi.c:463: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/nozomi.c:471: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/serial/imx.c:323: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/serial/imx.c:334: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/serial/imx.c:337: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/serial/imx.c:340: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/tty/serial/imx.c:343: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/gadget/function/f_hid.c:516: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/gadget/function/f_hid.c:524: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/gadget/function/f_hid.c:530: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/gadget/function/f_hid.c:547: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/gadget/function/f_hid.c:565: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/gadget/function/f_hid.c:573: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/gadget/function/f_hid.c:579: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/gadget/function/f_hid.c:587: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/host/xhci-mem.c:1147: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/image/microtek.c:294: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/misc/iowarrior.c:387: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/misc/iowarrior.c:457: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/misc/iowarrior.c:464: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/misc/usblcd.c:190: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/serial/iuu_phoenix.c:853: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/serial/iuu_phoenix.c:867: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/usb/storage/freecom.c:434: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/video/fbdev/amifb.c:1212: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/video/fbdev/core/fbcon.c:1915: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/video/fbdev/core/fbcon.c:2006: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_ca91cx42.c:372: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_ca91cx42.c:681: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_ca91cx42.c:713: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_ca91cx42.c:1126: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_ca91cx42.c:1338: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_tsi148.c:509: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_tsi148.c:998: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_tsi148.c:1506: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_tsi148.c:1606: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_tsi148.c:1704: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_tsi148.c:1741: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/bridges/vme_tsi148.c:1967: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/vme.c:71: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/vme.c:182: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/vme.c:190: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/vme.c:193: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
drivers/vme/vme.c:197: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
fs/efs/inode.c:166: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
fs/ocfs2/cluster/tcp.c:1201: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
kernel/bpf/syscall.c:2786: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
samples/hidraw/hid-example.c:168: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
samples/hidraw/hid-example.c:171: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
samples/hidraw/hid-example.c:174: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
samples/hidraw/hid-example.c:177: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
samples/hidraw/hid-example.c:180: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
security/integrity/ima/ima_appraise.c:173: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
security/keys/trusted-keys/trusted_tpm1.c:904: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/oss/dmasound/dmasound_core.c:1002: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/oss/dmasound/dmasound_core.c:1006: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/oss/dmasound/dmasound_core.c:1023: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/oss/dmasound/dmasound_core.c:1047: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/oss/dmasound/dmasound_core.c:1126: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/oss/dmasound/dmasound_core.c:1261: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/echoaudio/midi.c:100: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/echoaudio/midi.c:104: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme32.c:471: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/rme9652.c:735: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/rme9652.c:739: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/rme9652.c:743: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/rme9652.c:747: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/rme9652.c:751: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/rme9652.c:755: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/rme9652.c:762: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/hdspm.c:2289: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/hdspm.c:2315: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/hdspm.c:2341: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/hdspm.c:2361: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/hdspm.c:3848: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/pci/rme9652/hdspm.c:3859: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/soc/codecs/wl1273.c:314: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/soc/intel/skylake/skl-pcm.c:505: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/soc/sh/hac.c:254: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
sound/soc/ti/davinci-mcasp.c:2388: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
tools/perf/ui/stdio/hist.c:401: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
tools/perf/ui/stdio/hist.c:404: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
tools/perf/util/probe-event.c:1516: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
tools/power/acpi/tools/acpidbg/acpidbg.c:412: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return
tools/power/acpi/tools/acpidbg/acpidbg.c:418: WARNING:UNNECESSARY_BREAK: break is not useful after a goto or return




From xen-devel-bounces@lists.xenproject.org Sat Oct 17 17:20:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 17:20:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8407.22476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTpsL-0007lZ-KR; Sat, 17 Oct 2020 17:20:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8407.22476; Sat, 17 Oct 2020 17:20:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTpsL-0007lS-FV; Sat, 17 Oct 2020 17:20:13 +0000
Received: by outflank-mailman (input) for mailman id 8407;
 Sat, 17 Oct 2020 17:20:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o+87=DY=ska67.de=pub@srs-us1.protection.inumbo.net>)
 id 1kTpsJ-0007lL-8n
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 17:20:11 +0000
Received: from mxout4.routing.net (unknown [2a03:2900:1:a::9])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bed08450-710d-4620-90b9-c97e5b91f90d;
 Sat, 17 Oct 2020 17:20:09 +0000 (UTC)
Received: from mxbox2.masterlogin.de (unknown [192.168.10.89])
 by mxout4.routing.net (Postfix) with ESMTP id 4AF88101494
 for <xen-devel@lists.xenproject.org>; Sat, 17 Oct 2020 17:20:08 +0000 (UTC)
Received: from naboo.starwars.lan
 (HSI-KBW-46-223-214-20.hsi.kabel-badenwuerttemberg.de [46.223.214.20])
 by mxbox2.masterlogin.de (Postfix) with ESMTPSA id 054DB1002F9
 for <xen-devel@lists.xenproject.org>; Sat, 17 Oct 2020 17:20:08 +0000 (UTC)
Received: from triton.localnet (triton.starwars.lan [192.168.152.150])
 by naboo.starwars.lan (Postfix) with ESMTP id 3DEC361953
 for <xen-devel@lists.xenproject.org>; Sat, 17 Oct 2020 19:20:26 +0200 (CEST)
X-Inumbo-ID: bed08450-710d-4620-90b9-c97e5b91f90d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mailerdienst.de;
	s=20200217; t=1602955208; h=from:from:reply-to:reply-to:subject:subject:date:date:
	 message-id:message-id:to:to:cc:mime-version:mime-version:
	 content-type:content-type:  content-transfer-encoding:content-transfer-encoding;
	bh=YUpM31dzrl9+Y0DIUsQLHnDeIM8pl08FeKFIyqIAsPc=;
	b=V4voTvnkmUSlvz7HRG5Mu/2x3u7ZoqiFxQfxhFpNNQLeasXo4vZxIvAiGg0t6z7gvTPX3z
	nTRRa6HWZoJRzWwHAECdPXbvaWiWbJ+4juo7I2tiAZd7WE8t+SmKIGNZVHlybLEIHJECjP
	7QcRhKN+2ANFovq6eaLDARa8h9vJHIo=
From: Stefan <pub@ska67.de>
To: xen-devel <xen-devel@lists.xenproject.org>
Reply-To: xen@ska67.de
Subject: QXL support broken
Date: Sat, 17 Oct 2020 19:20:06 +0200
Message-ID: <5982251.Trf6zQypKq@triton>
MIME-Version: 1.0
Content-Transfer-Encoding: 7Bit
Content-Type: text/plain; charset="us-ascii"

Hello xen project team,

I am using Xen 4.14 from the AUR on a laptop running Arch Linux.

I tried running a Windows 10 domU with QXL graphics.
In my xl configuration file I set:
vga="qxl"

Within the domU itself I installed the latest qxldod driver from
the Fedora project.
Right after installation, the system becomes extremely laggy,
with response times of 30s and more.

I wrote to the xen-users mailing list:
https://lists.xenproject.org/archives/html/xen-users/2020-09/msg00006.html

And got an answer advising me to report the bug
on the xen-devel mailing list:
https://lists.xenproject.org/archives/html/xen-users/2020-10/msg00000.html

-- 
THX
Stefan Kadow




From xen-devel-bounces@lists.xenproject.org Sat Oct 17 18:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 18:21:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8412.22493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTqpG-0004et-6k; Sat, 17 Oct 2020 18:21:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8412.22493; Sat, 17 Oct 2020 18:21:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTqpG-0004em-2W; Sat, 17 Oct 2020 18:21:06 +0000
Received: by outflank-mailman (input) for mailman id 8412;
 Sat, 17 Oct 2020 18:21:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b5n4=DY=inria.fr=julia.lawall@srs-us1.protection.inumbo.net>)
 id 1kTqpF-0004eh-6A
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 18:21:05 +0000
Received: from mail2-relais-roc.national.inria.fr (unknown [192.134.164.83])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1131df34-8442-413f-8936-ecf5f46221ae;
 Sat, 17 Oct 2020 18:21:04 +0000 (UTC)
Received: from abo-173-121-68.mrs.modulonet.fr (HELO hadrien) ([85.68.121.173])
 by mail2-relais-roc.national.inria.fr with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;
 17 Oct 2020 20:21:01 +0200
X-Inumbo-ID: 1131df34-8442-413f-8936-ecf5f46221ae
X-IronPort-AV: E=Sophos;i="5.77,387,1596492000"; 
   d="scan'208";a="473115757"
Date: Sat, 17 Oct 2020 20:21:01 +0200 (CEST)
From: Julia Lawall <julia.lawall@inria.fr>
X-X-Sender: jll@hadrien
To: Joe Perches <joe@perches.com>
cc: trix@redhat.com, linux-kernel@vger.kernel.org, 
    cocci <cocci@systeme.lip6.fr>, alsa-devel@alsa-project.org, 
    clang-built-linux@googlegroups.com, linux-iio@vger.kernel.org, 
    nouveau@lists.freedesktop.org, storagedev@microchip.com, 
    dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org, 
    keyrings@vger.kernel.org, linux-mtd@lists.infradead.org, 
    ath10k@lists.infradead.org, linux-stm32@st-md-mailman.stormreply.com, 
    usb-storage@lists.one-eyed-alien.net, linux-watchdog@vger.kernel.org, 
    devel@driverdev.osuosl.org, linux-samsung-soc@vger.kernel.org, 
    linux-scsi@vger.kernel.org, linux-nvdimm@lists.01.org, 
    amd-gfx@lists.freedesktop.org, linux-acpi@vger.kernel.org, 
    intel-wired-lan@lists.osuosl.org, industrypack-devel@lists.sourceforge.net, 
    linux-pci@vger.kernel.org, spice-devel@lists.freedesktop.org, 
    MPT-FusionLinux.pdl@broadcom.com, linux-media@vger.kernel.org, 
    linux-serial@vger.kernel.org, linux-nfc@lists.01.org, 
    linux-pm@vger.kernel.org, linux-can@vger.kernel.org, 
    linux-block@vger.kernel.org, linux-gpio@vger.kernel.org, 
    xen-devel@lists.xenproject.org, linux-amlogic@lists.infradead.org, 
    openipmi-developer@lists.sourceforge.net, 
    platform-driver-x86@vger.kernel.org, linux-integrity@vger.kernel.org, 
    linux-arm-kernel@lists.infradead.org, linux-edac@vger.kernel.org, 
    netdev@vger.kernel.org, linux-usb@vger.kernel.org, 
    linux-wireless@vger.kernel.org, linux-security-module@vger.kernel.org, 
    linux-crypto@vger.kernel.org, patches@opensource.cirrus.com, 
    bpf@vger.kernel.org, ocfs2-devel@oss.oracle.com, 
    linux-power@fi.rohmeurope.com
Subject: Re: [Cocci] [RFC] treewide: cleanup unreachable breaks
In-Reply-To: <f530b7aeecbbf9654b4540cfa20023a4c2a11889.camel@perches.com>
Message-ID: <alpine.DEB.2.22.394.2010172016370.9440@hadrien>
References: <20201017160928.12698-1-trix@redhat.com> <f530b7aeecbbf9654b4540cfa20023a4c2a11889.camel@perches.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII



On Sat, 17 Oct 2020, Joe Perches wrote:

> On Sat, 2020-10-17 at 09:09 -0700, trix@redhat.com wrote:
> > From: Tom Rix <trix@redhat.com>
> >
> > This is an upcoming change to clean up a new warning treewide.
> > I am wondering whether the change should be one mega patch (see below),
> > a normal patch per file (about 100 patches), or somewhere half way,
> > by collecting early acks.
> >
> > clang has a number of useful new warnings; see
> > https://clang.llvm.org/docs/DiagnosticsReference.html
> >
> > This change cleans up -Wunreachable-code-break
> > https://clang.llvm.org/docs/DiagnosticsReference.html#wunreachable-code-break
> > for 266 of 485 warnings in this week's linux-next, allyesconfig on x86_64.
>
> Early acks/individual patches by subsystem would be good.
> Better still would be an automated cocci script.

Coccinelle is not especially good at this, because it is based on control
flow, and a return or goto diverts the control flow away from the break.
A hack to work around the problem is to put an if around the return or
goto, but that gives the break a meaningless file name and line number.
I collected the following list, but it only has 439 results, so fewer than
clang finds.  But maybe there are some files that clang does not consider
in the x86 allyesconfig configuration.

Probably checkpatch is the best solution here, since it is not
configuration-sensitive and doesn't care about control flow.

julia

drivers/scsi/mvumi.c: function mvumi_cfg_hw_reg line 114
drivers/watchdog/geodewdt.c: function geodewdt_ioctl line 18
drivers/media/usb/b2c2/flexcop-usb.c: function flexcop_usb_init line 21
drivers/media/usb/b2c2/flexcop-usb.c: function flexcop_usb_memory_req line 20
drivers/tty/nozomi.c: function write_mem32 line 17
drivers/tty/nozomi.c: function write_mem32 line 25
drivers/tty/nozomi.c: function read_mem32 line 17
drivers/tty/nozomi.c: function read_mem32 line 21
sound/soc/codecs/wl1273.c: function wl1273_startup line 27
drivers/iio/adc/meson_saradc.c: function meson_sar_adc_iio_info_read_raw line 12
drivers/iio/adc/meson_saradc.c: function meson_sar_adc_iio_info_read_raw line 19
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/media/tuners/mt2063.c: function mt2063_init line 81
drivers/nfc/st21nfca/core.c: function st21nfca_hci_im_transceive line 46
arch/sh/boards/mach-landisk/gio.c: function gio_ioctl line 53
drivers/gpu/drm/mgag200/mgag200_mode.c: function mgag200_crtc_set_plls line 11
drivers/gpu/drm/mgag200/mgag200_mode.c: function mgag200_crtc_set_plls line 15
drivers/gpu/drm/mgag200/mgag200_mode.c: function mgag200_crtc_set_plls line 18
drivers/gpu/drm/mgag200/mgag200_mode.c: function mgag200_crtc_set_plls line 22
drivers/gpu/drm/mgag200/mgag200_mode.c: function mgag200_crtc_set_plls line 25
drivers/media/dvb-frontends/cx24117.c: function cx24117_attach line 16
drivers/block/xen-blkback/blkback.c: function dispatch_rw_block_io line 48
drivers/platform/x86/sony-laptop.c: function __sony_nc_gfx_switch_status_get line 16
drivers/platform/x86/sony-laptop.c: function __sony_nc_gfx_switch_status_get line 22
drivers/platform/x86/sony-laptop.c: function __sony_nc_gfx_switch_status_get line 31
drivers/char/mwave/mwavedd.c: function mwave_ioctl line 288
drivers/scsi/be2iscsi/be_mgmt.c: function beiscsi_adap_family_disp line 15
drivers/scsi/be2iscsi/be_mgmt.c: function beiscsi_adap_family_disp line 19
drivers/scsi/be2iscsi/be_mgmt.c: function beiscsi_adap_family_disp line 22
drivers/scsi/be2iscsi/be_mgmt.c: function beiscsi_adap_family_disp line 27
drivers/iio/imu/bmi160/bmi160_core.c: function bmi160_write_raw line 11
drivers/block/z2ram.c: function z2_open line 138
drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c: function _rtl8723e_set_media_status line 38
samples/hidraw/hid-example.c: function bus_str line 6
samples/hidraw/hid-example.c: function bus_str line 9
samples/hidraw/hid-example.c: function bus_str line 12
samples/hidraw/hid-example.c: function bus_str line 15
samples/hidraw/hid-example.c: function bus_str line 18
drivers/scsi/ipr.c: function ipr_pci_error_detected line 10
drivers/gpio/gpio-bd70528.c: function bd70528_gpio_set_config line 11
drivers/gpio/gpio-bd70528.c: function bd70528_gpio_set_config line 17
drivers/gpio/gpio-bd70528.c: function bd70528_gpio_set_config line 21
drivers/pinctrl/pinctrl-rockchip.c: function rockchip_pinconf_get line 71
drivers/pinctrl/pinctrl-rockchip.c: function rockchip_pinconf_set line 74
drivers/gpu/drm/amd/display/include/signal_types.h: function dc_is_dvi_signal line 6
security/keys/trusted-keys/trusted_tpm1.c: function datablob_parse line 63
arch/x86/math-emu/fpu_trig.c: function fyl2xp1 line 71
drivers/usb/gadget/function/f_hid.c: function hidg_setup line 26
drivers/usb/gadget/function/f_hid.c: function hidg_setup line 34
drivers/usb/gadget/function/f_hid.c: function hidg_setup line 40
drivers/usb/gadget/function/f_hid.c: function hidg_setup line 58
drivers/usb/gadget/function/f_hid.c: function hidg_setup line 76
drivers/usb/gadget/function/f_hid.c: function hidg_setup line 84
drivers/usb/gadget/function/f_hid.c: function hidg_setup line 90
drivers/usb/gadget/function/f_hid.c: function hidg_setup line 98
drivers/staging/rts5208/rtsx_scsi.c: function start_stop_unit line 29
drivers/platform/x86/wmi.c: function acpi_wmi_ec_space_handler line 31
drivers/platform/x86/wmi.c: function acpi_wmi_ec_space_handler line 34
drivers/platform/x86/wmi.c: function acpi_wmi_ec_space_handler line 37
drivers/nvdimm/claim.c: function nd_namespace_store line 72
sound/soc/ti/davinci-mcasp.c: function davinci_mcasp_probe line 265
drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c: function mcp77_clk_read line 66
drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c: function mcp77_clk_read line 73
drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c: function mcp77_clk_read line 76
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 3
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 4
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 5
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 6
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 7
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 8
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 9
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 10
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 11
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 12
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 13
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 14
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 15
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 16
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 17
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 18
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 19
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 20
tools/testing/selftests/powerpc/pmu/ebb/trace.c: function trace_decode_reg line 21
drivers/media/usb/pwc/pwc-if.c: function usb_pwc_probe line 91
drivers/media/usb/pwc/pwc-if.c: function usb_pwc_probe line 103
drivers/media/usb/pwc/pwc-if.c: function usb_pwc_probe line 162
drivers/media/usb/pwc/pwc-if.c: function usb_pwc_probe line 188
drivers/media/usb/pwc/pwc-if.c: function usb_pwc_probe line 207
drivers/media/usb/pwc/pwc-if.c: function usb_pwc_probe line 219
drivers/media/usb/pwc/pwc-if.c: function usb_pwc_probe line 232
drivers/media/usb/pwc/pwc-if.c: function usb_pwc_probe line 250
drivers/media/dvb-frontends/drx39xyj/drxj.c: function ctrl_set_standard line 76
drivers/media/dvb-frontends/drx39xyj/drxj.c: function ctrl_set_uio_cfg line 65
drivers/media/dvb-frontends/drx39xyj/drxj.c: function ctrl_set_uio_cfg line 90
drivers/media/dvb-frontends/drx39xyj/drxj.c: function ctrl_set_uio_cfg line 115
drivers/media/dvb-frontends/drx39xyj/drxj.c: function hi_command line 52
drivers/media/dvb-frontends/drx39xyj/drxj.c: function get_device_capabilities line 182
drivers/media/dvb-frontends/drx39xyj/drxj.c: function ctrl_power_mode line 40
drivers/media/dvb-frontends/drx39xyj/drxj.c: function drx_ctrl_u_code line 159
drivers/macintosh/via-pmu-led.c: function pmu_led_set line 15
drivers/net/wan/lmc/lmc_proto.c: function lmc_proto_type line 5
drivers/net/wan/lmc/lmc_proto.c: function lmc_proto_type line 8
drivers/net/wan/lmc/lmc_proto.c: function lmc_proto_type line 11
drivers/net/wan/lmc/lmc_proto.c: function lmc_proto_type line 15
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c: function rtl8812ae_phy_config_rf_with_headerfile line 20
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c: function rtl8812ae_phy_config_rf_with_headerfile line 26
drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c: function rtl8821ae_phy_config_rf_with_headerfile line 18
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c: function pci_isa_read_reg line 48
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c: function pci_isa_read_reg line 51
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c: function pci_isa_read_reg line 54
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c: function pci_isa_read_reg line 59
arch/mips/loongson2ef/common/cs5536/cs5536_isa.c: function pci_isa_read_reg line 62
arch/mips/include/asm/mach-au1x00/au1000.h: function alchemy_get_cputype line 5
arch/mips/include/asm/mach-au1x00/au1000.h: function alchemy_get_cputype line 8
arch/mips/include/asm/mach-au1x00/au1000.h: function alchemy_get_cputype line 11
arch/mips/include/asm/mach-au1x00/au1000.h: function alchemy_get_cputype line 14
arch/mips/include/asm/mach-au1x00/au1000.h: function alchemy_get_cputype line 18
arch/mips/include/asm/mach-au1x00/au1000.h: function alchemy_get_cputype line 21
arch/x86/kernel/cpu/microcode/amd.c: function __verify_patch_size line 21
drivers/scsi/aic94xx/aic94xx_task.c: function asd_task_tasklet_complete line 78
drivers/parport/parport_ip32.c: function parport_ip32_fifo_supported line 27
drivers/staging/comedi/drivers/ni_mio_common.c: function ni_serial_insn_config line 90
drivers/power/supply/ipaq_micro_battery.c: function get_capacity line 7
drivers/power/supply/ipaq_micro_battery.c: function get_capacity line 10
drivers/power/supply/ipaq_micro_battery.c: function get_capacity line 13
drivers/net/wireless/marvell/mwifiex/cfg80211.c: function mwifiex_cfg80211_change_virtual_intf line 95
drivers/net/wireless/marvell/mwifiex/cfg80211.c: function mwifiex_cfg80211_change_virtual_intf line 143
fs/ocfs2/cluster/tcp.c: function o2net_process_message line 33
drivers/scsi/hpsa.c: function find_PCI_BAR_index line 27
drivers/s390/char/tape_34xx.c: function tape_34xx_unit_check line 351
drivers/scsi/nsp32.c: function nsp32_msgin_occur line 193
drivers/scsi/nsp32.c: function nsp32_msgin_occur line 204
drivers/scsi/nsp32.c: function nsp32_msgin_occur line 215
drivers/scsi/nsp32.c: function nsp32_msgin_occur line 220
drivers/usb/misc/iowarrior.c: function iowarrior_write line 44
drivers/usb/misc/iowarrior.c: function iowarrior_write line 114
drivers/usb/misc/iowarrior.c: function iowarrior_write line 121
drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c: function map_transmitter_id_to_phy_instance line 18
drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c: function map_transmitter_id_to_phy_instance line 21
drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c: function map_transmitter_id_to_phy_instance line 24
drivers/usb/image/microtek.c: function mts_show_command line 72
security/safesetid/lsm.c: function safesetid_security_capable line 41
security/safesetid/lsm.c: function safesetid_security_capable line 57
security/safesetid/lsm.c: function safesetid_security_capable line 61
arch/powerpc/net/bpf_jit_comp.c: function bpf_jit_build_body line 345
arch/mips/rb532/setup.c: function get_system_type line 5
arch/mips/rb532/setup.c: function get_system_type line 8
sound/soc/intel/skylake/skl-pcm.c: function skl_pcm_trigger line 46
drivers/scsi/pcmcia/nsp_cs.c: function nspintr line 151
drivers/usb/storage/freecom.c: function freecom_transport line 220
drivers/gpu/drm/radeon/radeon_i2c.c: function r100_hw_i2c_xfer line 133
drivers/power/supply/wm831x_power.c: function wm831x_power_probe line 141
drivers/platform/x86/acer-wmi.c: function AMW0_set_u32 line 34
drivers/scsi/bnx2fc/bnx2fc_hwi.c: function bnx2fc_process_unsol_compl line 150
drivers/isdn/mISDN/dsp_dtmf.c: function dsp_dtmf_goertzel_decode line 57
drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c: function map_transmitter_id_to_phy_instance line 18
drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c: function map_transmitter_id_to_phy_instance line 21
drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c: function map_transmitter_id_to_phy_instance line 24
drivers/s390/scsi/zfcp_fc.c: function zfcp_fc_job_wka_port line 22
drivers/s390/scsi/zfcp_fc.c: function zfcp_fc_job_wka_port line 25
drivers/media/dvb-frontends/stv0900_core.c: function stv0900_attach line 52
drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c: function ixgbe_calc_eeprom_checksum_X540 line 47
drivers/scsi/isci/phy.c: function sci_phy_event_handler line 72
drivers/scsi/isci/phy.c: function sci_phy_event_handler line 277
drivers/vme/vme.c: function find_bridge line 8
drivers/vme/vme.c: function find_bridge line 13
drivers/vme/vme.c: function find_bridge line 18
drivers/vme/vme.c: function find_bridge line 23
drivers/vme/vme.c: function find_bridge line 27
drivers/vme/vme.c: function vme_get_size line 16
drivers/vme/vme.c: function vme_get_size line 25
drivers/vme/vme.c: function vme_get_size line 28
drivers/vme/vme.c: function vme_get_size line 32
drivers/scsi/lpfc/lpfc_debugfs.c: function lpfc_idiag_pcicfg_read line 59
drivers/scsi/lpfc/lpfc_debugfs.c: function lpfc_idiag_queacc_write line 64
drivers/scsi/lpfc/lpfc_debugfs.c: function lpfc_idiag_queacc_write line 120
drivers/scsi/lpfc/lpfc_debugfs.c: function lpfc_idiag_queacc_write line 135
drivers/scsi/lpfc/lpfc_debugfs.c: function lpfc_idiag_queacc_write line 180
drivers/scsi/lpfc/lpfc_debugfs.c: function lpfc_idiag_queacc_write line 207
drivers/scsi/lpfc/lpfc_debugfs.c: function lpfc_idiag_queacc_write line 210
drivers/gpu/drm/qxl/qxl_ioctl.c: function qxl_process_single_command line 21
drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c: function map_transmitter_id_to_phy_instance line 18
drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c: function map_transmitter_id_to_phy_instance line 21
drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c: function map_transmitter_id_to_phy_instance line 24
drivers/net/wireless/ath/ath10k/htt_rx.c: function ath10k_htt_t2h_msg_handler line 119
drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c: function pll_map line 11
drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c: function nv50_ram_timing_read line 24
sound/pci/rme9652/rme9652.c: function rme9652_spdif_sample_rate line 34
sound/pci/rme9652/rme9652.c: function rme9652_spdif_sample_rate line 38
sound/pci/rme9652/rme9652.c: function rme9652_spdif_sample_rate line 42
sound/pci/rme9652/rme9652.c: function rme9652_spdif_sample_rate line 46
sound/pci/rme9652/rme9652.c: function rme9652_spdif_sample_rate line 50
sound/pci/rme9652/rme9652.c: function rme9652_spdif_sample_rate line 54
sound/pci/rme9652/rme9652.c: function rme9652_spdif_sample_rate line 61
drivers/gpu/drm/amd/amdgpu/dce_v11_0.c: function dce_v11_0_pick_dig_encoder line 32
drivers/media/usb/go7007/go7007-usb.c: function go7007_usb_probe line 58
drivers/pnp/pnpbios/rsparser.c: function pnpbios_encode_allocated_resource_data line 95
drivers/pnp/pnpbios/rsparser.c: function pnpbios_parse_compatible_ids line 48
drivers/pnp/pnpbios/rsparser.c: function pnpbios_parse_allocated_resource_data line 120
drivers/net/wireless/ath/ath6kl/testmode.c: function ath6kl_tm_cmd line 31
drivers/misc/mei/hbm.c: function mei_hbm_dispatch line 277
drivers/scsi/hptiop.c: function hptiop_finish_scsi_req line 45
drivers/pinctrl/samsung/pinctrl-s3c24xx.c: function s3c24xx_eint_get_trigger line 5
drivers/pinctrl/samsung/pinctrl-s3c24xx.c: function s3c24xx_eint_get_trigger line 8
drivers/pinctrl/samsung/pinctrl-s3c24xx.c: function s3c24xx_eint_get_trigger line 11
drivers/pinctrl/samsung/pinctrl-s3c24xx.c: function s3c24xx_eint_get_trigger line 14
drivers/pinctrl/samsung/pinctrl-s3c24xx.c: function s3c24xx_eint_get_trigger line 17
drivers/base/power/main.c: function pm_op line 18
kernel/bpf/syscall.c: function attach_type_to_prog_type line 7
drivers/pci/hotplug/ibmphp_pci.c: function ibmphp_configure_card line 234
drivers/pci/hotplug/ibmphp_pci.c: function unconfigure_boot_card line 96
drivers/scsi/fcoe/fcoe.c: function fcoe_device_notification line 54
drivers/scsi/lpfc/lpfc_sli.c: function lpfc_mbox_api_table_setup line 26
drivers/scsi/lpfc/lpfc_sli.c: function lpfc_sli_api_table_setup line 18
drivers/scsi/lpfc/lpfc_sli.c: function lpfc_sli4_iocb2wqe line 567
drivers/media/dvb-frontends/drxd_hard.c: function CorrectSysClockDeviation line 43
tools/perf/util/probe-event.c: function parse_perf_probe_point line 174
drivers/scsi/bfa/bfa_ioc.h: function bfa_cb_image_get_chunk line 6
drivers/scsi/bfa/bfa_ioc.h: function bfa_cb_image_get_chunk line 9
drivers/scsi/bfa/bfa_ioc.h: function bfa_cb_image_get_chunk line 12
drivers/scsi/bfa/bfa_ioc.h: function bfa_cb_image_get_size line 6
drivers/scsi/bfa/bfa_ioc.h: function bfa_cb_image_get_size line 9
drivers/scsi/bfa/bfa_ioc.h: function bfa_cb_image_get_size line 12
drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c: function map_transmitter_id_to_phy_instance line 18
drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c: function map_transmitter_id_to_phy_instance line 21
drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c: function map_transmitter_id_to_phy_instance line 24
drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c: function _rtl88ee_set_media_status line 35
sound/oss/dmasound/dmasound_core.c: function sq_ioctl line 12
sound/oss/dmasound/dmasound_core.c: function sq_ioctl line 16
sound/oss/dmasound/dmasound_core.c: function sq_ioctl line 33
sound/oss/dmasound/dmasound_core.c: function sq_ioctl line 57
sound/oss/dmasound/dmasound_core.c: function sq_ioctl line 143
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 5
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 8
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 11
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 14
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 17
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 20
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 23
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 26
sound/oss/dmasound/dmasound_core.c: function get_afmt_string line 29
drivers/s390/char/tape_3590.c: function tape_3590_erp_read_opposite line 15
drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c: function map_transmitter_id_to_phy_instance line 18
drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c: function map_transmitter_id_to_phy_instance line 21
drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c: function map_transmitter_id_to_phy_instance line 24
drivers/media/usb/ttusb-dec/ttusb_dec.c: function ttusb_dec_start_feed line 16
drivers/media/usb/ttusb-dec/ttusb_dec.c: function ttusb_dec_start_feed line 20
drivers/media/usb/ttusb-dec/ttusb_dec.c: function ttusb_dec_stop_feed line 7
drivers/media/usb/ttusb-dec/ttusb_dec.c: function ttusb_dec_stop_feed line 11
drivers/net/ethernet/cisco/enic/enic_ethtool.c: function enic_grxclsrule line 19
security/integrity/ima/ima_appraise.c: function ima_get_hash_algo line 18
sound/pci/rme9652/hdspm.c: function hdspm_get_aes_sample_rate line 8
sound/pci/rme9652/hdspm.c: function hdspm_wc_sync_check line 16
sound/pci/rme9652/hdspm.c: function hdspm_wc_sync_check line 29
sound/pci/rme9652/hdspm.c: function hdspm_wc_sync_check line 43
sound/pci/rme9652/hdspm.c: function hdspm_get_tco_sample_rate line 10
sound/pci/rme9652/hdspm.c: function hdspm_get_sync_in_sample_rate line 10
sound/pci/rme9652/hdspm.c: function hdspm_get_wc_sample_rate line 9
drivers/nfc/trf7970a.c: function trf7970a_is_iso15693_write_or_lock line 11
drivers/media/usb/gspca/pac_common.h: function pac_find_sof line 45
drivers/net/ethernet/aquantia/atlantic/aq_nic.c: function aq_nic_set_link_ksettings line 46
drivers/scsi/lpfc/lpfc_scsi.c: function lpfc_scsi_api_table_setup line 25
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_readreg_multibyte line 18
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_readreg_multibyte line 41
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_readreg_multibyte line 44
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_writereg_multibyte line 30
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_writereg_multibyte line 54
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 33
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 58
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 72
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 88
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 104
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 116
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 142
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 198
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_setup_frontend_parameters line 220
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_init line 15
drivers/media/dvb-frontends/nxt200x.c: function nxt200x_writetuner line 57
drivers/tty/serial/imx.c: function imx_uart_readl line 5
drivers/tty/serial/imx.c: function imx_uart_readl line 16
drivers/tty/serial/imx.c: function imx_uart_readl line 19
drivers/tty/serial/imx.c: function imx_uart_readl line 22
drivers/tty/serial/imx.c: function imx_uart_readl line 25
drivers/crypto/atmel-sha.c: function atmel_sha_init line 37
drivers/edac/amd64_edac.c: function map_err_sym_to_channel line 7
drivers/edac/amd64_edac.c: function map_err_sym_to_channel line 11
drivers/edac/amd64_edac.c: function map_err_sym_to_channel line 14
drivers/edac/amd64_edac.c: function map_err_sym_to_channel line 24
drivers/edac/amd64_edac.c: function map_err_sym_to_channel line 28
drivers/edac/amd64_edac.c: function map_err_sym_to_channel line 31
drivers/edac/amd64_edac.c: function map_err_sym_to_channel line 34
drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c: function map_transmitter_id_to_phy_instance line 18
drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c: function map_transmitter_id_to_phy_instance line 21
drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c: function map_transmitter_id_to_phy_instance line 24
drivers/scsi/st.c: function st_int_ioctl line 155
drivers/infiniband/hw/cxgb4/qp.c: function c4iw_modify_qp line 146
drivers/infiniband/hw/cxgb4/qp.c: function c4iw_modify_qp line 195
drivers/infiniband/hw/cxgb4/qp.c: function c4iw_modify_qp line 200
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function mcp251xfd_get_mode_str line 4
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function mcp251xfd_get_mode_str line 6
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function mcp251xfd_get_mode_str line 8
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function mcp251xfd_get_mode_str line 10
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function mcp251xfd_get_mode_str line 12
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function mcp251xfd_get_mode_str line 14
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function mcp251xfd_get_mode_str line 16
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function mcp251xfd_get_mode_str line 18
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function __mcp251xfd_get_model_str line 4
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function __mcp251xfd_get_model_str line 6
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c: function __mcp251xfd_get_model_str line 8
drivers/ata/libata-scsi.c: function ata_get_xlat_func line 35
drivers/char/ipmi/ipmi_devintf.c: function ipmi_ioctl line 201
arch/arm/nwfpe/fpa11_cprt.c: function EmulateCPRT line 15
arch/arm/nwfpe/fpa11_cprt.c: function EmulateCPRT line 18
drivers/vme/bridges/vme_ca91cx42.c: function ca91cx42_dma_list_add line 108
drivers/vme/bridges/vme_ca91cx42.c: function ca91cx42_master_set line 92
drivers/vme/bridges/vme_ca91cx42.c: function ca91cx42_master_set line 124
drivers/vme/bridges/vme_ca91cx42.c: function ca91cx42_slave_set line 39
drivers/vme/bridges/vme_ca91cx42.c: function ca91cx42_lm_set line 45
drivers/scsi/sym53c8xx_2/sym_hipd.c: function sym_int_sir line 230
drivers/gpu/drm/radeon/r300.c: function r300_packet0_check line 534
drivers/gpu/drm/amd/amdgpu/atombios_encoders.c: function amdgpu_atombios_encoder_get_encoder_mode line 70
drivers/gpu/drm/amd/amdgpu/atombios_encoders.c: function amdgpu_atombios_encoder_get_encoder_mode line 96
drivers/gpu/drm/amd/amdgpu/atombios_encoders.c: function amdgpu_atombios_encoder_get_encoder_mode line 103
drivers/media/dvb-frontends/si21xx.c: function si21xx_set_voltage line 15
drivers/media/dvb-frontends/si21xx.c: function si21xx_set_voltage line 18
drivers/staging/vme/devices/vme_user.c: function vme_user_ioctl line 75
drivers/staging/vme/devices/vme_user.c: function vme_user_ioctl line 119
drivers/mmc/host/atmel-mci.c: function atmci_tasklet_func line 197
drivers/irqchip/irq-mips-gic.c: function gic_ipi_domain_match line 9
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_buffer_configuration line 47
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_buffer_configuration line 84
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_system_configure_channel line 14
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_system_configure_channel line 17
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_system_configure_channel line 20
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_system_configure_channel line 23
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function configuration_to_registers line 83
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_system_configure_channel_sensor line 82
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_system_configure_channel_sensor line 85
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_system_configure_channel_sensor line 88
drivers/staging/media/atomisp/pci/hive_isp_css_common/host/input_system.c: function input_system_configure_channel_sensor line 91
drivers/net/ethernet/8390/mac8390.c: function mac8390_ident line 27
drivers/net/ethernet/8390/mac8390.c: function mac8390_ident line 42
drivers/irqchip/irq-crossbar.c: function crossbar_of_init line 100
drivers/mtd/devices/ms02-nv.c: function ms02nv_init line 21
arch/powerpc/platforms/cell/spufs/switch.c: function __do_spu_restore line 21
arch/powerpc/platforms/cell/spufs/switch.c: function __do_spu_save line 23
arch/powerpc/include/asm/kvm_book3s_64.h: function kvmppc_hpte_page_shifts line 11
arch/powerpc/include/asm/kvm_book3s_64.h: function kvmppc_hpte_page_shifts line 14
arch/powerpc/include/asm/kvm_book3s_64.h: function kvmppc_hpte_page_shifts line 17
arch/powerpc/include/asm/kvm_book3s_64.h: function kvmppc_hpte_page_shifts line 20
arch/x86/kernel/cpu/mce/core.c: function __mcheck_cpu_ancient_init line 10
arch/x86/kernel/cpu/mce/core.c: function __mcheck_cpu_ancient_init line 14
sound/pci/echoaudio/midi.c: function mtc_process_data line 10
sound/pci/echoaudio/midi.c: function mtc_process_data line 14
sound/pci/rme32.c: function snd_rme32_capture_getrate line 36
tools/power/acpi/tools/acpidbg/acpidbg.c: function main line 33
tools/power/acpi/tools/acpidbg/acpidbg.c: function main line 39
drivers/scsi/lpfc/lpfc_init.c: function lpfc_init_api_table_setup line 22
drivers/gpu/drm/amd/amdgpu/dce_v10_0.c: function dce_v10_0_pick_dig_encoder line 32
drivers/acpi/utils.c: function acpi_extract_package line 66
drivers/acpi/utils.c: function acpi_extract_package line 91
drivers/acpi/utils.c: function acpi_extract_package line 106
drivers/acpi/utils.c: function acpi_extract_package line 117
drivers/char/lp.c: function lp_do_ioctl line 48
drivers/mtd/mtdchar.c: function mtdchar_ioctl line 265
drivers/mtd/mtdchar.c: function mtdchar_ioctl line 276
arch/x86/kvm/emulate.c: function writeback line 26
drivers/usb/host/xhci-mem.c: function xhci_setup_addressable_virt_dev line 46
drivers/media/usb/tm6000/tm6000-core.c: function tm6000_set_audio_rinput line 26
drivers/media/usb/tm6000/tm6000-core.c: function tm6000_set_audio_rinput line 48
drivers/media/usb/tm6000/tm6000-core.c: function tm6000_tvaudio_set_mute line 26
drivers/media/dvb-frontends/dib0090.c: function dib0090_fw_identify line 79
drivers/media/dvb-frontends/dib0090.c: function dib0090_identify line 81
drivers/net/dsa/b53/b53_common.c: function b53_switch_init line 46
drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c: function iwl_mvm_mac_ctx_send line 9
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c: function map_transmitter_id_to_phy_instance line 18
sound/soc/sh/hac.c: function hac_hw_params line 19
drivers/gpu/drm/radeon/atombios_encoders.c: function atombios_get_encoder_mode line 83
drivers/gpu/drm/radeon/atombios_encoders.c: function atombios_get_encoder_mode line 119
drivers/gpu/drm/radeon/atombios_encoders.c: function atombios_get_encoder_mode line 126
drivers/pci/controller/pci-v3-semi.c: function v3_get_dma_range_config line 64
drivers/video/fbdev/amifb.c: function amifb_probe line 90
drivers/video/fbdev/amifb.c: function ami_decode_var line 84
fs/efs/inode.c: function efs_iget line 120
drivers/vme/bridges/vme_tsi148.c: function tsi148_dma_list_add line 81
drivers/vme/bridges/vme_tsi148.c: function tsi148_dma_list_add line 119
drivers/vme/bridges/vme_tsi148.c: function tsi148_master_set line 191
drivers/vme/bridges/vme_tsi148.c: function tsi148_slave_set line 38
drivers/vme/bridges/vme_tsi148.c: function tsi148_dma_set_vme_dest_attributes line 87
drivers/vme/bridges/vme_tsi148.c: function tsi148_dma_set_vme_src_attributes line 87
drivers/vme/bridges/vme_tsi148.c: function tsi148_lm_set line 41
net/ipv4/fib_frontend.c: function fib_gw_from_via line 35
arch/arm/mach-s3c/mach-rx1950.c: function rx1950_led_blink_set line 17
tools/perf/ui/stdio/hist.c: function hist_entry_callchain__fprintf line 15
tools/perf/ui/stdio/hist.c: function hist_entry_callchain__fprintf line 21
tools/perf/ui/stdio/hist.c: function hist_entry_callchain__fprintf line 24
tools/perf/ui/stdio/hist.c: function hist_entry_callchain__fprintf line 27
drivers/usb/serial/iuu_phoenix.c: function iuu_uart_baud line 74
drivers/usb/serial/iuu_phoenix.c: function iuu_uart_baud line 88
drivers/gpu/drm/amd/amdgpu/dce_v8_0.c: function dce_v8_0_pick_dig_encoder line 32
drivers/net/hyperv/netvsc.c: function netvsc_process_raw_pkt line 19
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c: function map_transmitter_id_to_phy_instance line 6
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c: function map_transmitter_id_to_phy_instance line 9
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c: function map_transmitter_id_to_phy_instance line 12
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c: function map_transmitter_id_to_phy_instance line 15
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c: function map_transmitter_id_to_phy_instance line 18
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c: function map_transmitter_id_to_phy_instance line 21
drivers/cpufreq/e_powersaver.c: function eps_cpu_init line 55


From xen-devel-bounces@lists.xenproject.org Sat Oct 17 19:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 19:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8420.22509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTrR7-000899-CV; Sat, 17 Oct 2020 19:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8420.22509; Sat, 17 Oct 2020 19:00:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTrR7-000892-8g; Sat, 17 Oct 2020 19:00:13 +0000
Received: by outflank-mailman (input) for mailman id 8420;
 Sat, 17 Oct 2020 19:00:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WdE0=DY=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kTrR6-00088x-2O
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 19:00:12 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.164])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba66d11e-664d-4a73-a183-d1892a226a88;
 Sat, 17 Oct 2020 19:00:10 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay08.hostedemail.com (Postfix) with ESMTP id 15895182CED2A;
 Sat, 17 Oct 2020 19:00:10 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.133.149])
 (Authenticated sender: joe@perches.com)
 by omf01.hostedemail.com (Postfix) with ESMTPA;
 Sat, 17 Oct 2020 19:00:02 +0000 (UTC)
X-Inumbo-ID: ba66d11e-664d-4a73-a183-d1892a226a88
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:800:960:967:973:982:988:989:1260:1277:1311:1313:1314:1345:1359:1434:1437:1515:1516:1518:1535:1542:1593:1594:1711:1730:1747:1777:1792:2194:2199:2393:2525:2553:2561:2564:2682:2685:2693:2828:2859:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3355:3622:3865:3866:3867:3868:3870:3871:3872:3873:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4321:4470:4823:5007:6117:6742:6743:7576:7903:8660:8792:8957:9010:9025:9108:10004:10400:10450:10455:11232:11658:11914:12043:12050:12295:12296:12297:12438:12555:12663:12740:12760:12895:12986:13138:13148:13230:13231:13439:14096:14097:14181:14659:14721:19904:19999:21080:21324:21451:21627:21939:21990:30029:30034:30054:30070:30090:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: jewel08_4005cbe27228
X-Filterd-Recvd-Size: 5179
Message-ID: <503af4a57ca6daeb3e42a9be136dcd21e6d6e23d.camel@perches.com>
Subject: Re: [Cocci] [RFC] treewide: cleanup unreachable breaks
From: Joe Perches <joe@perches.com>
To: Julia Lawall <julia.lawall@inria.fr>
Cc: trix@redhat.com, linux-kernel@vger.kernel.org, cocci
 <cocci@systeme.lip6.fr>,  alsa-devel@alsa-project.org,
 clang-built-linux@googlegroups.com,  linux-iio@vger.kernel.org,
 nouveau@lists.freedesktop.org,  storagedev@microchip.com,
 dri-devel@lists.freedesktop.org, 
 virtualization@lists.linux-foundation.org, keyrings@vger.kernel.org, 
 linux-mtd@lists.infradead.org, ath10k@lists.infradead.org, 
 linux-stm32@st-md-mailman.stormreply.com,
 usb-storage@lists.one-eyed-alien.net,  linux-watchdog@vger.kernel.org,
 devel@driverdev.osuosl.org,  linux-samsung-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org,  linux-nvdimm@lists.01.org,
 amd-gfx@lists.freedesktop.org,  linux-acpi@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, 
 industrypack-devel@lists.sourceforge.net, linux-pci@vger.kernel.org, 
 spice-devel@lists.freedesktop.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-media@vger.kernel.org, linux-serial@vger.kernel.org, 
 linux-nfc@lists.01.org, linux-pm@vger.kernel.org,
 linux-can@vger.kernel.org,  linux-block@vger.kernel.org,
 linux-gpio@vger.kernel.org,  xen-devel@lists.xenproject.org,
 linux-amlogic@lists.infradead.org, 
 openipmi-developer@lists.sourceforge.net,
 platform-driver-x86@vger.kernel.org,  linux-integrity@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org,  linux-edac@vger.kernel.org,
 netdev@vger.kernel.org, linux-usb@vger.kernel.org, 
 linux-wireless@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-crypto@vger.kernel.org, patches@opensource.cirrus.com,
 bpf@vger.kernel.org,  ocfs2-devel@oss.oracle.com,
 linux-power@fi.rohmeurope.com
Date: Sat, 17 Oct 2020 12:00:01 -0700
In-Reply-To: <alpine.DEB.2.22.394.2010172016370.9440@hadrien>
References: <20201017160928.12698-1-trix@redhat.com>
	 <f530b7aeecbbf9654b4540cfa20023a4c2a11889.camel@perches.com>
	 <alpine.DEB.2.22.394.2010172016370.9440@hadrien>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.36.4-0ubuntu1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sat, 2020-10-17 at 20:21 +0200, Julia Lawall wrote:
> On Sat, 17 Oct 2020, Joe Perches wrote:
> > On Sat, 2020-10-17 at 09:09 -0700, trix@redhat.com wrote:
> > > From: Tom Rix <trix@redhat.com>
> > > 
> > > This is an upcoming change to clean up a new warning treewide.
> > > I am wondering if the change should be one mega patch (see below),
> > > a normal patch per file (about 100 patches), or somewhere halfway,
> > > collecting early acks along the way.
> > > 
> > > clang has a number of useful new warnings; see
> > > https://clang.llvm.org/docs/DiagnosticsReference.html
> > > 
> > > This change cleans up -Wunreachable-code-break
> > > https://clang.llvm.org/docs/DiagnosticsReference.html#wunreachable-code-break
> > > for 266 of 485 warnings in this week's linux-next, allyesconfig on x86_64.
> > 
> > Early acks/individual patches by subsystem would be good.
> > Better still would be an automated cocci script.
> 
> Coccinelle is not especially good at this, because it is based on control
> flow, and a return or goto diverts the control flow away from the break.
> A hack to solve the problem is to put an if around the return or goto, but
> that gives the break a meaningless file name and line number.  I collected
> the following list, but it only has 439 results, fewer than clang finds.
> But maybe some files are simply not covered by clang in the x86
> allyesconfig configuration.
> 
> Probably checkpatch is the best solution here, since it is not
> configuration-sensitive and doesn't care about control flow.

Likely the clang compiler is the best option here.

It might be useful to add -Wunreachable-code-break to W=1
or just always enable it if it isn't already enabled.

diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
index 95e4cdb94fe9..3819787579d5 100644
--- a/scripts/Makefile.extrawarn
+++ b/scripts/Makefile.extrawarn
@@ -32,6 +32,7 @@ KBUILD_CFLAGS += $(call cc-option, -Wunused-but-set-variable)
 KBUILD_CFLAGS += $(call cc-option, -Wunused-const-variable)
 KBUILD_CFLAGS += $(call cc-option, -Wpacked-not-aligned)
 KBUILD_CFLAGS += $(call cc-option, -Wstringop-truncation)
+KBUILD_CFLAGS += $(call cc-option, -Wunreachable-code-break)
 # The following turn off the warnings enabled by -Wextra
 KBUILD_CFLAGS += -Wno-missing-field-initializers
 KBUILD_CFLAGS += -Wno-sign-compare

(and thank you Tom for pushing this forward)

checkpatch can't find instances like:

	case FOO:
		if (foo)
			return 1;
		else
			return 2;
		break;

as it doesn't track control flow and just relies on the
goto/return and the break; having the same number of leading tabs.

checkpatch will warn on:

	case FOO:
		...
		goto bar;
		break;




From xen-devel-bounces@lists.xenproject.org Sat Oct 17 20:29:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 20:29:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8440.22527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTspP-0006kE-TT; Sat, 17 Oct 2020 20:29:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8440.22527; Sat, 17 Oct 2020 20:29:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTspP-0006k7-QM; Sat, 17 Oct 2020 20:29:23 +0000
Received: by outflank-mailman (input) for mailman id 8440;
 Sat, 17 Oct 2020 20:29:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QMVr=DY=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTspO-0006jY-Js
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 20:29:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a6319132-5f38-4a52-8f39-f57169e62866;
 Sat, 17 Oct 2020 20:29:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTspF-00053T-QL; Sat, 17 Oct 2020 20:29:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTspF-0000yq-Hb; Sat, 17 Oct 2020 20:29:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTspF-0006Qj-H3; Sat, 17 Oct 2020 20:29:13 +0000
X-Inumbo-ID: a6319132-5f38-4a52-8f39-f57169e62866
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e/DI5aPZHgxp8ytRLCCbv5Nsf6RVquWIwFbOwrgvjGk=; b=1e1f1Q28iTSCYTjTcyQxfzKQXE
	JKsd0ZmhrjMT4P7X+0iXM6mB5EsH/JhFgjCN3AUaKfeGhAJY76q8u9kDXZqMF6Ca5VI8W/BpA9JHE
	7jMIhdhQQ+8cODmDCFt55b20PSv+7GXuaFwoHoHz7hKPvrJyVUqjEfy8FMIXXZVg6aYE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155921-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155921: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
X-Osstest-Versions-That:
    xen=6ee2e66674f36b6d27a95f4ddf27226905cc63a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 17 Oct 2020 20:29:13 +0000

flight 155921 xen-unstable real [real]
flight 155936 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/155921/
http://logs.test-lab.xenproject.org/osstest/logs/155936/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 12 debian-di-install        fail REGR. vs. 155894

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155894
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155894
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155894
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155894
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155894
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155894
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155894
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155894
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155894
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155894
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155894
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc
baseline version:
 xen                  6ee2e66674f36b6d27a95f4ddf27226905cc63a4

Last test of basis   155894  2020-10-16 11:23:43 Z    1 days
Testing same since   155921  2020-10-17 04:28:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0dfddb2116e3757f77a691a3fe335173088d69dc
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Thu Oct 15 10:16:09 2020 +0100

    tools/xenpmd: Fix gcc10 snprintf warning
    
    Add a check for the snprintf return code and ignore the entry if we
    get an error.  This should in fact never happen and is mainly a trick
    to keep gcc happy and prevent compilation errors.
    
    This solves the following gcc warning when compiling for arm32 host
    platforms with optimization enabled:
    xenpmd.c:92:37: error: '%s' directive output may be truncated writing
    between 4 and 2147483645 bytes into a region of size 271
    [-Werror=format-truncation=]
    
    This also solves the following Debian bug:
    https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=970802
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Wei Liu <wl@xen.org>
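The pattern described above — checking snprintf's return code and skipping the entry on error or truncation — can be sketched roughly as follows. Function and path names are illustrative only, not the actual xenpmd code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch only: check snprintf's return code and reject the
 * entry on error or truncation.  Handling the truncated case explicitly
 * is also what silences gcc 10's -Wformat-truncation warning. */
static int make_battery_path(char *buf, size_t len, const char *name)
{
    int n = snprintf(buf, len, "/sys/class/power_supply/%s", name);

    if (n < 0 || (size_t)n >= len)
        return -1;      /* error or truncated: caller ignores this entry */

    return 0;
}
```

On success the buffer holds the full path; a truncated result is reported to the caller instead of being silently accepted.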

commit 17d192e0238d6c714e9f04593b59597b7090be38
Author: Elliott Mitchell <ehem+xen@m5p.com>
Date:   Sun Oct 11 18:11:39 2020 -0700

    tools/python: Pass linker to Python build process
    
    Unexpectedly, the environment variable which needs to be passed is
    $LDSHARED, not $LD; otherwise Python may find the build `ld` instead
    of the host `ld`.
    
    Replace $(LDFLAGS) with $(SHLIB_LDFLAGS) as Python needs shared objects
    it can load at runtime, not executables.
    
    This uses $(CC) instead of $(LD) since Python distutils appends $CFLAGS
    to $LDFLAGS which breaks many linkers.
    
    Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
    Acked-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

commit 40fe714ca4245a9716264fcb3ee8df42c3650cf6
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Wed Oct 7 14:57:01 2020 +0100

    tools/libs/stat: use memcpy instead of strncpy in getBridge
    
    Use memcpy in getBridge to prevent gcc warnings about truncated
    strings.  We know that we might truncate, so the gcc warning here is
    a false positive.
    Revert the previous change that enlarged the buffers, as bigger
    buffers are not needed.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Wei Liu <wl@xen.org>
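The memcpy-based copy described above can be sketched as follows. This is a hypothetical stand-in for the real getBridge code: because the truncation is explicit, gcc's strncpy truncation warning has nothing to complain about.

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch only: bounded copy with deliberate, explicit
 * truncation.  memcpy plus manual NUL termination avoids gcc's
 * -Wstringop-truncation warning, which assumes strncpy truncation
 * is accidental. */
static void copy_bridge_name(char *dst, size_t dstlen, const char *src)
{
    size_t n = strlen(src);

    if (n >= dstlen)
        n = dstlen - 1;     /* truncate on purpose */

    memcpy(dst, src, n);
    dst[n] = '\0';          /* always NUL-terminated, unlike strncpy */
}
```

Unlike strncpy, this always terminates the destination, and the truncation point is visible in the code rather than implied by the bound.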

commit a7952a320c1e202a218702bfdb14f75132f04894
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 5 12:46:30 2020 +0100

    x86/smpboot: Don't unconditionally call memguard_guard_stack() in cpu_smpboot_alloc()
    
    cpu_smpboot_alloc() is designed to be idempotent with respect to partially
    initialised state.  This occurs for S3 and CPU parking, where enough state to
    handle NMIs/#MCs needs to remain valid for the entire lifetime of Xen, even
    when we otherwise want to offline the CPU.
    
    For simplicity across the various configurations, Xen always uses shadow stack
    mappings (Read-only + Dirty) for the guard page, irrespective of whether
    CET-SS is enabled.
    
    Unfortunately, the CET-SS changes in memguard_guard_stack() broke idempotency
    by first writing out the supervisor shadow stack tokens with plain writes,
    then changing the mapping to being read-only.
    
    This ordering is strictly necessary to configure the BSP, which cannot have
    the supervisor tokens be written with WRSS.
    
    Instead of calling memguard_guard_stack() unconditionally, call it only when
    actually allocating a new stack.  Xenheap allocations are guaranteed to be
    writeable, and the net result is idempotency WRT configuring stack_base[].
    
    Fixes: 91d26ed304f ("x86/shstk: Create shadow stacks")
    Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 04182d8b795dcdabf4f3873d3f5c78b67cbc04b0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 12 14:58:45 2020 +0100

    x86/ucode/intel: Improve description for gathering the microcode revision
    
    Obtaining the microcode revision on Intel CPUs is complicated for backwards
    compatibility reasons.  Update apply_microcode() to use a slightly more
    efficient CPUID invocation, now that the documentation has been updated to
    confirm that any CPUID instruction is fine, not just CPUID.1.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 6065a05adf152a556fb9f11a5218c89e41b62893
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 12 13:24:31 2020 +0100

    x86/traps: 'Fix' safety of read_registers() in #DF path
    
    All interrupts and exceptions pass a struct cpu_user_regs up into C.  This
    contains the legacy vm86 fields from 32bit days, which are beyond the
    hardware-pushed frame.
    
    Accessing these fields is generally illegal, as they are logically out of
    bounds for anything other than an interrupt/exception hitting ring1/3 code.
    
    show_registers() unconditionally reads these fields, but the content is
    discarded before use.  This is benign right now, as all parts of the stack are
    readable, including the guard pages.
    
    However, read_registers() in the #DF handler writes to these fields as part of
    preparing the state dump, and being IST, hits the adjacent stack frame.
    
    This has been broken forever, but c/s 6001660473 "x86/shstk: Rework the stack
    layout to support shadow stacks" repositioned the #DF stack to be adjacent to
    the guard page, which turns this OoB write into a fatal pagefault:
    
      (XEN) *** DOUBLE FAULT ***
      (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
      (XEN) ----[ Xen-4.15-unstable  x86_64  debug=y   Tainted:  C   ]----
      (XEN) CPU:    4
      (XEN) RIP:    e008:[<ffff82d04031fd4f>] traps.c#read_registers+0x29/0xc1
      (XEN) RFLAGS: 0000000000050086   CONTEXT: hypervisor (d1v0)
      ...
      (XEN) Xen call trace:
      (XEN)    [<ffff82d04031fd4f>] R traps.c#read_registers+0x29/0xc1
      (XEN)    [<ffff82d0403207b3>] F do_double_fault+0x3d/0x7e
      (XEN)    [<ffff82d04039acd7>] F double_fault+0x107/0x110
      (XEN)
      (XEN) Pagetable walk from ffff830236f6d008:
      (XEN)  L4[0x106] = 80000000bfa9b063 ffffffffffffffff
      (XEN)  L3[0x008] = 0000000236ffd063 ffffffffffffffff
      (XEN)  L2[0x1b7] = 0000000236ffc063 ffffffffffffffff
      (XEN)  L1[0x16d] = 8000000236f6d161 ffffffffffffffff
      (XEN)
      (XEN) ****************************************
      (XEN) Panic on CPU 4:
      (XEN) FATAL PAGE FAULT
      (XEN) [error_code=0003]
      (XEN) Faulting linear address: ffff830236f6d008
      (XEN) ****************************************
      (XEN)
    
    and rendering the main #DF analysis broken.
    
    The proper fix is to delete cpu_user_regs.es and later, so no
    interrupt/exception path can access OoB, but this needs disentangling from the
    PV ABI first.
    
    Not-really-fixes: 6001660473 ("x86/shstk: Rework the stack layout to support shadow stacks")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 17 21:01:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 17 Oct 2020 21:01:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8447.22544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTtKb-0001ec-IM; Sat, 17 Oct 2020 21:01:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8447.22544; Sat, 17 Oct 2020 21:01:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTtKb-0001eV-FU; Sat, 17 Oct 2020 21:01:37 +0000
Received: by outflank-mailman (input) for mailman id 8447;
 Sat, 17 Oct 2020 21:01:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rRtA=DY=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1kTtKZ-0001eQ-NI
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 21:01:35 +0000
Received: from mail-ej1-x641.google.com (unknown [2a00:1450:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d02a1d5c-c263-42ef-8b87-7eb3d2b7d314;
 Sat, 17 Oct 2020 21:01:33 +0000 (UTC)
Received: by mail-ej1-x641.google.com with SMTP id u8so8477731ejg.1
 for <xen-devel@lists.xenproject.org>; Sat, 17 Oct 2020 14:01:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=intel-com.20150623.gappssmtp.com; s=20150623;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=QU5tUnKrk3cCgEOM6DUuSpHohPcIoidzngrqxPrVqlg=;
        b=ntIAxdfj5B3UZZJPAO3W3zqpuYJdxZe6wmMN+Cg5+tfh5QjsxLbXR5o2amfdJQ2J48
         +r9G2vZiMs4S/e6t6kNU+/1RisTXP2Yspklt8lksB4PhSQjsUISEYusYJeN2p0XCHoqr
         5tX/QI+fZe3GQZJaviemAIYQlGxoIFC7s6aX8R5MJQLi2SOYJ/keQbrc62HNUeJ8pCOI
         7WYztP0OE4axPIRoGDfVK4aq4k9dPY6fZ69Y4fL3/a70qb0GrKtNhloBSbp9yIhwtpgg
         UECkhdkcAFyI2jUc3qFZfU7NKoc+E1dZJHZxhONiRQFFV//aVnbaYjp+hfJUaNjfh3TU
         ik3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=QU5tUnKrk3cCgEOM6DUuSpHohPcIoidzngrqxPrVqlg=;
        b=rT86GXSUsxfh1pr6NlHLn8v6XgHFMM5qGu+OdI+evgFmAwMhH2s0DMe0XHdAQ+0zX7
         rD/ssDWR1dnytd+h4szF1yCsBZEBHTCUBSCfZBlVkW4mWjHBvdhV62YHfpDdv19KnyBn
         xiFkGrXZF5xJFCbr3kpqMhK9jvn9JUa8NQFkONeAf656KHj2OdSOM1lgu/2Fymtcfuhp
         XG/t9dmz6pgIjQbk9Mh3MZ8hn0BQXXhLS5i/EfQZPkq0x35glTNLny78g0qinPi/oPHZ
         24Bs2BemZJcGbP9KOzpHGnZSLBPh5kIuA/gN2hJiF9qTWoR2b1lotxi7tzKg1XN4b+qB
         8MYw==
X-Gm-Message-State: AOAM530MAD5qV/GwN4G7q9BXbEpjAauZgzSewdFJXOqT3GwRVQPghr9I
	/gE3dzd3E/t2CyhP7DVQpz9IQ7d+VkdGPCX2nF8OSw==
X-Google-Smtp-Source: ABdhPJzde2s6j/g/ekBLPPrgSH2EJqfd3KSDRuhe3Mnzr9w98lVRa6HniOngGCyhNuV4q/eDk1HFvyy/v8IqyEanGTA=
X-Received: by 2002:a17:906:1a19:: with SMTP id i25mr9957370ejf.323.1602968492144;
 Sat, 17 Oct 2020 14:01:32 -0700 (PDT)
MIME-Version: 1.0
References: <20201017160928.12698-1-trix@redhat.com>
In-Reply-To: <20201017160928.12698-1-trix@redhat.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Sat, 17 Oct 2020 14:01:22 -0700
Message-ID: <CAPcyv4jkSFxMXgMABX7sDbwmq8zJO=rLX2ww3Y9Tc0VAANY8xQ@mail.gmail.com>
Subject: Re: [RFC] treewide: cleanup unreachable breaks
To: trix@redhat.com
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>, linux-edac@vger.kernel.org, 
	Linux ACPI <linux-acpi@vger.kernel.org>, 
	Linux-pm mailing list <linux-pm@vger.kernel.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	linux-block@vger.kernel.org, openipmi-developer@lists.sourceforge.net, 
	linux-crypto <linux-crypto@vger.kernel.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-power@fi.rohmeurope.com, 
	linux-gpio@vger.kernel.org, amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	Maling list - DRI developers <dri-devel@lists.freedesktop.org>, nouveau@lists.freedesktop.org, 
	virtualization@lists.linux-foundation.org, spice-devel@lists.freedesktop.org, 
	linux-iio@vger.kernel.org, linux-amlogic@lists.infradead.org, 
	industrypack-devel@lists.sourceforge.net, 
	"Linux-media@vger.kernel.org" <linux-media@vger.kernel.org>, MPT-FusionLinux.pdl@broadcom.com, 
	linux-scsi <linux-scsi@vger.kernel.org>, linux-mtd@lists.infradead.org, 
	linux-can@vger.kernel.org, Netdev <netdev@vger.kernel.org>, 
	intel-wired-lan@lists.osuosl.org, ath10k@lists.infradead.org, 
	Linux Wireless List <linux-wireless@vger.kernel.org>, linux-stm32@st-md-mailman.stormreply.com, 
	linux-nfc@lists.01.org, linux-nvdimm <linux-nvdimm@lists.01.org>, 
	Linux PCI <linux-pci@vger.kernel.org>, 
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>, platform-driver-x86@vger.kernel.org, 
	patches@opensource.cirrus.com, storagedev@microchip.com, 
	devel@driverdev.osuosl.org, linux-serial@vger.kernel.org, 
	USB list <linux-usb@vger.kernel.org>, usb-storage@lists.one-eyed-alien.net, 
	linux-watchdog@vger.kernel.org, ocfs2-devel@oss.oracle.com, 
	bpf@vger.kernel.org, linux-integrity@vger.kernel.org, 
	linux-security-module@vger.kernel.org, 
	"open list:KEYS-TRUSTED" <keyrings@vger.kernel.org>, alsa-devel@alsa-project.org, 
	clang-built-linux <clang-built-linux@googlegroups.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Oct 17, 2020 at 9:10 AM <trix@redhat.com> wrote:
>
> From: Tom Rix <trix@redhat.com>
>
> This is an upcoming change to clean up a new warning treewide.
> I am wondering whether the change should be one mega patch (see below),
> one patch per file (roughly 100 patches), or somewhere in between by
> collecting early acks.
>
> clang has a number of useful new warnings; see
> https://clang.llvm.org/docs/DiagnosticsReference.html
>
> This change cleans up -Wunreachable-code-break
> https://clang.llvm.org/docs/DiagnosticsReference.html#wunreachable-code-break
> for 266 of 485 warnings in this week's linux-next, allyesconfig on x86_64.
>
> The method of fixing was to look for warnings where the preceding
> statement was a simple statement that, by inspection, makes the
> subsequent break unreachable.  In order of frequency these look like:
>
> return and break
>
>         switch (c->x86_vendor) {
>         case X86_VENDOR_INTEL:
>                 intel_p5_mcheck_init(c);
>                 return 1;
> -               break;
>
> goto and break
>
>         default:
>                 operation = 0; /* make gcc happy */
>                 goto fail_response;
> -               break;
>
> break and break
>                 case COLOR_SPACE_SRGB:
>                         /* by pass */
>                         REG_SET(OUTPUT_CSC_CONTROL, 0,
>                                 OUTPUT_CSC_GRPH_MODE, 0);
>                         break;
> -                       break;
>
> The exception to the simple-statement rule is a switch case with a
> block, where the end of the block is a return:
>
>                         struct obj_buffer *buff = r->ptr;
>                         return scnprintf(str, PRIV_STR_SIZE,
>                                         "size=%u\naddr=0x%X\n", buff->size,
>                                         buff->addr);
>                 }
> -               break;
>
> Not considered obvious, and excluded, were breaks after:
>   - multi-level switches
>   - complicated if-else if-else blocks
>   - panic() or similar calls
>
> And there is an odd addition of a 'fallthrough' in drivers/tty/nozomi.c
[..]
> diff --git a/drivers/nvdimm/claim.c b/drivers/nvdimm/claim.c
> index 5a7c80053c62..2f250874b1a4 100644
> --- a/drivers/nvdimm/claim.c
> +++ b/drivers/nvdimm/claim.c
> @@ -200,11 +200,10 @@ ssize_t nd_namespace_store(struct device *dev,
>                 }
>                 break;
>         default:
>                 len = -EBUSY;
>                 goto out_attach;
> -               break;
>         }

Acked-by: Dan Williams <dan.j.williams@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 02:52:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 02:52:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8460.22572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTyni-0007F1-VA; Sun, 18 Oct 2020 02:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8460.22572; Sun, 18 Oct 2020 02:52:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kTyni-0007Eu-S5; Sun, 18 Oct 2020 02:52:02 +0000
Received: by outflank-mailman (input) for mailman id 8460;
 Sun, 18 Oct 2020 02:52:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kTynh-0007EM-Ip
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 02:52:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ecb74d30-ac0a-4f4e-a146-631634c90945;
 Sun, 18 Oct 2020 02:51:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTynZ-0005sv-K3; Sun, 18 Oct 2020 02:51:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kTynZ-0005Cd-9h; Sun, 18 Oct 2020 02:51:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kTynZ-0002nt-9E; Sun, 18 Oct 2020 02:51:53 +0000
X-Inumbo-ID: ecb74d30-ac0a-4f4e-a146-631634c90945
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HZ7uEKnCdtoXRV0M7uze7OFHGzcvfN8UfFDaiGJsCtc=; b=XTFCp2b0aWOfK9dDY5GP8Nu9y4
	OUBwmBnuFRr5PShdmki6DNum5T5EsxyAj1MJxBMmdRgAIQJAIGQwxB0e1OYrG+Na3SYzwHFU/p8o9
	7zTM3eyNbZM7jHmcydDcXprcenBIBrYx6dM2o0JxBdIN1WgZOb7FNLw7YLEv1+NtqgxY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155926-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 155926: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=52f6ded2a377ac4f191c84182488e454b1386239
X-Osstest-Versions-That:
    linux=85b0841aab15c12948af951d477183ab3df7de14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 02:51:53 +0000

flight 155926 linux-5.4 real [real]
flight 155944 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/155926/
http://logs.test-lab.xenproject.org/osstest/logs/155944/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 155815

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 155815

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155799
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155815
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155815
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155815
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155815
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155815
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155815
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155815
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                52f6ded2a377ac4f191c84182488e454b1386239
baseline version:
 linux                85b0841aab15c12948af951d477183ab3df7de14

Last test of basis   155815  2020-10-14 20:39:44 Z    3 days
Testing same since   155926  2020-10-17 08:41:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anand Jain <anand.jain@oracle.com>
  Anant Thazhemadam <anant.thazhemadam@gmail.com>
  Arjan van de Ven <arjan@linux.intel.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  David Sterba <dsterba@suse.com>
  Dmitry Golovin <dima@golovin.in>
  Dominik Przychodni <dominik.przychodni@intel.com>
  Giovanni Cabiddu <giovanni.cabiddu@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Jan Kara <jack@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josef Bacik <josef@toxicpanda.com>
  Juergen Gross <jgross@suse.com>
  Leo Yan <leo.yan@linaro.org>
  Leonid Bloch <lb.workbox@gmail.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Marcel Holtmann <marcel@holtmann.org>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mike Leach <mike.leach@linaro.org>
  Mychaela N. Falconia <falcon@freecalypso.org>
  Nathan Chancellor <natechancellor@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Patrick Steinhardt <ps@pks.im>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sashal@kernel.org>
  Scott Chen <scott@labau.com.tw>
  Stefan Bader <stefan.bader@canonical.com>
  syzbot+009f546aa1370056b1c2@syzkaller.appspotmail.com
  Thomas Backlund <tmb@mageia.org>
  Wilken Gottwalt <wilken.gottwalt@mailbox.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 542 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 04:35:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 04:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8399.22591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU0P4-0007M9-Mo; Sun, 18 Oct 2020 04:34:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8399.22591; Sun, 18 Oct 2020 04:34:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU0P4-0007M2-JS; Sun, 18 Oct 2020 04:34:42 +0000
Received: by outflank-mailman (input) for mailman id 8399;
 Sat, 17 Oct 2020 16:09:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iRV5=DY=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kTomG-0000q3-IR
 for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 16:09:52 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 29a14e42-3223-4f8c-bc88-dd580da64e9b;
 Sat, 17 Oct 2020 16:09:47 +0000 (UTC)
Received: from mail-oo1-f70.google.com (mail-oo1-f70.google.com
 [209.85.161.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-375-FgatwFSKN4m9y6rkVtNXDg-1; Sat, 17 Oct 2020 12:09:40 -0400
Received: by mail-oo1-f70.google.com with SMTP id p17so2402989ooe.22
 for <xen-devel@lists.xenproject.org>; Sat, 17 Oct 2020 09:09:40 -0700 (PDT)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id o2sm1980029oia.42.2020.10.17.09.09.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 17 Oct 2020 09:09:36 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=iRV5=DY=redhat.com=trix@srs-us1.protection.inumbo.net>)
	id 1kTomG-0000q3-IR
	for xen-devel@lists.xenproject.org; Sat, 17 Oct 2020 16:09:52 +0000
X-Inumbo-ID: 29a14e42-3223-4f8c-bc88-dd580da64e9b
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id 29a14e42-3223-4f8c-bc88-dd580da64e9b;
	Sat, 17 Oct 2020 16:09:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1602950987;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:content-type:content-type;
	bh=PcObn7U1uXJZxW99CZjaXSkG6ko9pf2ze/ufhCTWJEY=;
	b=fUpY+c4XOwryWhd/3/mID+0dDWyDKK+1WKJmxAg21IdMzTCBEHwdWkFTGkTaIvUdCnSBQq
	ucEXfKgvuyCyvMyXRjXQQxGxBqF+cP6it3m3d/gsPzDbqcV8VoifLV8+wTqE1+/cVrFa4t
	KACAayxTfeUQ0skuiLKuUnHl87l6QRg=
Received: from mail-oo1-f70.google.com (mail-oo1-f70.google.com
 [209.85.161.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-375-FgatwFSKN4m9y6rkVtNXDg-1; Sat, 17 Oct 2020 12:09:40 -0400
X-MC-Unique: FgatwFSKN4m9y6rkVtNXDg-1
Received: by mail-oo1-f70.google.com with SMTP id p17so2402989ooe.22
        for <xen-devel@lists.xenproject.org>; Sat, 17 Oct 2020 09:09:40 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=PcObn7U1uXJZxW99CZjaXSkG6ko9pf2ze/ufhCTWJEY=;
        b=Qh/7ppaMkXyVqjmbjBG7566gD8jk3Iqsw8jLN/vDe/XAzJp2J+0e+RiCVNkt+o27qQ
         CQuhoOkpCGDj5N8Y7dUdwB4TMjgXXBJbFGURAkCpNr6a/XY+GZl6sf4eYOOffVZWjC86
         Rq7YBg4xAegiElM+QhCvApQA0MmFD3VlzANIa0dPQj/ZZihMeHJtLmheq5C425qpcUgi
         9w6ns0OKrohMXHgWIlwlNmqkH7TYC0CvzZ1zTzshKXTP2RjWX16OkGBDsEgojILxMIEx
         QX4x4C1+ob+vEfEy6CBRRHSkAFGys18Bj0pvAuel5JTtv8XcO545aE0H4no9rzPNDC+2
         a3Kw==
X-Gm-Message-State: AOAM5301cvIfj270TyWyErFFpNnLK8ByZI2HRyqcTm/qXOSc9GBSWbrU
	vsbZ6wrVj9q8gltwu7tDink2pAG1V9qHfORD/F+Z91P3u1ACBiAfLVkCI0T79OloMe8gHBUV10Z
	AYKT6LWLUPfzeh0eRvQig7nUgp5E=
X-Received: by 2002:aca:cc01:: with SMTP id c1mr6215367oig.128.1602950978334;
        Sat, 17 Oct 2020 09:09:38 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwPVzYK6gQsv5rjCfQ+MiyXlzTmsz5jTt6tATUCcXIJGoCXwjQmkLM2KtSlBhQxb9aFkWsozg==
X-Received: by 2002:aca:cc01:: with SMTP id c1mr6215304oig.128.1602950977353;
        Sat, 17 Oct 2020 09:09:37 -0700 (PDT)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com. [75.142.250.213])
        by smtp.gmail.com with ESMTPSA id o2sm1980029oia.42.2020.10.17.09.09.31
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Sat, 17 Oct 2020 09:09:36 -0700 (PDT)
From: trix@redhat.com
To: linux-kernel@vger.kernel.org
Cc: linux-edac@vger.kernel.org,
	linux-acpi@vger.kernel.org,
	linux-pm@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	openipmi-developer@lists.sourceforge.net,
	linux-crypto@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-power@fi.rohmeurope.com,
	linux-gpio@vger.kernel.org,
	amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org,
	linux-iio@vger.kernel.org,
	linux-amlogic@lists.infradead.org,
	industrypack-devel@lists.sourceforge.net,
	linux-media@vger.kernel.org,
	MPT-FusionLinux.pdl@broadcom.com,
	linux-scsi@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	linux-can@vger.kernel.org,
	netdev@vger.kernel.org,
	intel-wired-lan@lists.osuosl.org,
	ath10k@lists.infradead.org,
	linux-wireless@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-nfc@lists.01.org,
	linux-nvdimm@lists.01.org,
	linux-pci@vger.kernel.org,
	linux-samsung-soc@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	patches@opensource.cirrus.com,
	storagedev@microchip.com,
	devel@driverdev.osuosl.org,
	linux-serial@vger.kernel.org,
	linux-usb@vger.kernel.org,
	usb-storage@lists.one-eyed-alien.net,
	linux-watchdog@vger.kernel.org,
	ocfs2-devel@oss.oracle.com,
	bpf@vger.kernel.org,
	linux-integrity@vger.kernel.org,
	linux-security-module@vger.kernel.org,
	keyrings@vger.kernel.org,
	alsa-devel@alsa-project.org,
	clang-built-linux@googlegroups.com,
	Tom Rix <trix@redhat.com>
Subject: [RFC] treewide: cleanup unreachable breaks
Date: Sat, 17 Oct 2020 09:09:28 -0700
Message-Id: <20201017160928.12698-1-trix@redhat.com>
X-Mailer: git-send-email 2.18.1
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="US-ASCII"

From: Tom Rix <trix@redhat.com>

This is an upcoming change to clean up a new warning treewide.
I am wondering whether the change should go in as one mega patch (see below),
as the usual one patch per file (about 100 patches), or somewhere in between
by collecting early acks for groups of files.

clang has a number of useful new warnings; see
https://clang.llvm.org/docs/DiagnosticsReference.html

This change cleans up -Wunreachable-code-break
https://clang.llvm.org/docs/DiagnosticsReference.html#wunreachable-code-break
for 266 of 485 warnings in this week's linux-next, allyesconfig on x86_64.

The method of fixing was to look for warnings where the preceding statement
is a simple statement that, by inspection, makes the subsequent break
unreachable. In order of frequency, these look like:

return and break

 	switch (c->x86_vendor) {
 	case X86_VENDOR_INTEL:
 		intel_p5_mcheck_init(c);
 		return 1;
-		break;

goto and break

 	default:
 		operation = 0; /* make gcc happy */
 		goto fail_response;
-		break;

break and break
 		case COLOR_SPACE_SRGB:
 			/* by pass */
 			REG_SET(OUTPUT_CSC_CONTROL, 0,
 				OUTPUT_CSC_GRPH_MODE, 0);
 			break;
-			break;

The exception to the simple-statement rule is a switch case with a
block, where the end of the block is a return:

 			struct obj_buffer *buff = r->ptr;
 			return scnprintf(str, PRIV_STR_SIZE,
 					"size=%u\naddr=0x%X\n", buff->size,
 					buff->addr);
 		}
-		break;

Not considered obvious, and so excluded, are breaks after:
- multi-level switches
- complicated if/else-if/else blocks
- panic() or similar calls

There is also one odd addition of a 'fallthrough' in drivers/tty/nozomi.c.

Tom

---

diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
index 1c08cb9eb9f6..16ce86aed8e2 100644
--- a/arch/x86/kernel/cpu/mce/core.c
+++ b/arch/x86/kernel/cpu/mce/core.c
@@ -1809,15 +1809,13 @@ static int __mcheck_cpu_ancient_init(struct cpuinfo_x86 *c)
 
 	switch (c->x86_vendor) {
 	case X86_VENDOR_INTEL:
 		intel_p5_mcheck_init(c);
 		return 1;
-		break;
 	case X86_VENDOR_CENTAUR:
 		winchip_mcheck_init(c);
 		return 1;
-		break;
 	default:
 		return 0;
 	}
 
 	return 0;
diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
index 3f6b137ef4e6..3d4a48336084 100644
--- a/arch/x86/kernel/cpu/microcode/amd.c
+++ b/arch/x86/kernel/cpu/microcode/amd.c
@@ -213,11 +213,10 @@ static unsigned int __verify_patch_size(u8 family, u32 sh_psize, size_t buf_size
 		max_size = F14H_MPB_MAX_SIZE;
 		break;
 	default:
 		WARN(1, "%s: WTF family: 0x%x\n", __func__, family);
 		return 0;
-		break;
 	}
 
 	if (sh_psize > min_t(u32, buf_size, max_size))
 		return 0;
 
diff --git a/drivers/acpi/utils.c b/drivers/acpi/utils.c
index 838b719ec7ce..d5411a166685 100644
--- a/drivers/acpi/utils.c
+++ b/drivers/acpi/utils.c
@@ -102,11 +102,10 @@ acpi_extract_package(union acpi_object *package,
 				printk(KERN_WARNING PREFIX "Invalid package element"
 					      " [%d]: got number, expecting"
 					      " [%c]\n",
 					      i, format_string[i]);
 				return AE_BAD_DATA;
-				break;
 			}
 			break;
 
 		case ACPI_TYPE_STRING:
 		case ACPI_TYPE_BUFFER:
@@ -127,11 +126,10 @@ acpi_extract_package(union acpi_object *package,
 				printk(KERN_WARNING PREFIX "Invalid package element"
 					      " [%d] got string/buffer,"
 					      " expecting [%c]\n",
 					      i, format_string[i]);
 				return AE_BAD_DATA;
-				break;
 			}
 			break;
 		case ACPI_TYPE_LOCAL_REFERENCE:
 			switch (format_string[i]) {
 			case 'R':
@@ -142,22 +140,20 @@ acpi_extract_package(union acpi_object *package,
 				printk(KERN_WARNING PREFIX "Invalid package element"
 					      " [%d] got reference,"
 					      " expecting [%c]\n",
 					      i, format_string[i]);
 				return AE_BAD_DATA;
-				break;
 			}
 			break;
 
 		case ACPI_TYPE_PACKAGE:
 		default:
 			ACPI_DEBUG_PRINT((ACPI_DB_INFO,
 					  "Found unsupported element at index=%d\n",
 					  i));
 			/* TBD: handle nested packages... */
 			return AE_SUPPORT;
-			break;
 		}
 	}
 
 	/*
 	 * Validate output buffer.
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 205a06752ca9..c7ac49042cee 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -361,11 +361,10 @@ static pm_callback_t pm_op(const struct dev_pm_ops *ops, pm_message_t state)
 	case PM_EVENT_HIBERNATE:
 		return ops->poweroff;
 	case PM_EVENT_THAW:
 	case PM_EVENT_RECOVER:
 		return ops->thaw;
-		break;
 	case PM_EVENT_RESTORE:
 		return ops->restore;
 #endif /* CONFIG_HIBERNATE_CALLBACKS */
 	}
 
diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index adfc9352351d..f769fbd1b4c4 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -1267,11 +1267,10 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 		operation_flags = REQ_PREFLUSH;
 		break;
 	default:
 		operation = 0; /* make gcc happy */
 		goto fail_response;
-		break;
 	}
 
 	/* Check that the number of segments is sane. */
 	nseg = req->operation == BLKIF_OP_INDIRECT ?
 	       req->u.indirect.nr_segments : req->u.rw.nr_segments;
diff --git a/drivers/char/ipmi/ipmi_devintf.c b/drivers/char/ipmi/ipmi_devintf.c
index f7b1c004a12b..3dd1d5abb298 100644
--- a/drivers/char/ipmi/ipmi_devintf.c
+++ b/drivers/char/ipmi/ipmi_devintf.c
@@ -488,11 +488,10 @@ static long ipmi_ioctl(struct file   *file,
 			rv = -EFAULT;
 			break;
 		}
 
 		return ipmi_set_my_address(priv->user, val.channel, val.value);
-		break;
 	}
 
 	case IPMICTL_GET_MY_CHANNEL_ADDRESS_CMD:
 	{
 		struct ipmi_channel_lun_address_set val;
diff --git a/drivers/char/lp.c b/drivers/char/lp.c
index 0ec73917d8dd..862c2fd933c7 100644
--- a/drivers/char/lp.c
+++ b/drivers/char/lp.c
@@ -620,11 +620,10 @@ static int lp_do_ioctl(unsigned int minor, unsigned int cmd,
 		case LPWAIT:
 			LP_WAIT(minor) = arg;
 			break;
 		case LPSETIRQ:
 			return -EINVAL;
-			break;
 		case LPGETIRQ:
 			if (copy_to_user(argp, &LP_IRQ(minor),
 					sizeof(int)))
 				return -EFAULT;
 			break;
diff --git a/drivers/char/mwave/mwavedd.c b/drivers/char/mwave/mwavedd.c
index e43c876a9223..11272d605ecd 100644
--- a/drivers/char/mwave/mwavedd.c
+++ b/drivers/char/mwave/mwavedd.c
@@ -401,11 +401,10 @@ static long mwave_ioctl(struct file *file, unsigned int iocmd,
 		}
 			break;
 	
 		default:
 			return -ENOTTY;
-			break;
 	} /* switch */
 
 	PRINTK_2(TRACE_MWAVE, "mwavedd::mwave_ioctl, exit retval %x\n", retval);
 
 	return retval;
diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
index 75ccf41a7cb9..0eb6f54e3b66 100644
--- a/drivers/crypto/atmel-sha.c
+++ b/drivers/crypto/atmel-sha.c
@@ -457,11 +457,10 @@ static int atmel_sha_init(struct ahash_request *req)
 		ctx->flags |= SHA_FLAGS_SHA512;
 		ctx->block_size = SHA512_BLOCK_SIZE;
 		break;
 	default:
 		return -EINVAL;
-		break;
 	}
 
 	ctx->bufcnt = 0;
 	ctx->digcnt[0] = 0;
 	ctx->digcnt[1] = 0;
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
index fcc08bbf6945..386a3a4cf279 100644
--- a/drivers/edac/amd64_edac.c
+++ b/drivers/edac/amd64_edac.c
@@ -2459,38 +2459,30 @@ static int map_err_sym_to_channel(int err_sym, int sym_size)
 	if (sym_size == 4)
 		switch (err_sym) {
 		case 0x20:
 		case 0x21:
 			return 0;
-			break;
 		case 0x22:
 		case 0x23:
 			return 1;
-			break;
 		default:
 			return err_sym >> 4;
-			break;
 		}
 	/* x8 symbols */
 	else
 		switch (err_sym) {
 		/* imaginary bits not in a DIMM */
 		case 0x10:
 			WARN(1, KERN_ERR "Invalid error symbol: 0x%x\n",
 					  err_sym);
 			return -1;
-			break;
-
 		case 0x11:
 			return 0;
-			break;
 		case 0x12:
 			return 1;
-			break;
 		default:
 			return err_sym >> 3;
-			break;
 		}
 	return -1;
 }
 
 static int get_channel_from_ecc_syndrome(struct mem_ctl_info *mci, u16 syndrome)
diff --git a/drivers/gpio/gpio-bd70528.c b/drivers/gpio/gpio-bd70528.c
index 45b3da8da336..931e5765fe92 100644
--- a/drivers/gpio/gpio-bd70528.c
+++ b/drivers/gpio/gpio-bd70528.c
@@ -69,21 +69,18 @@ static int bd70528_gpio_set_config(struct gpio_chip *chip, unsigned int offset,
 	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
 		return regmap_update_bits(bdgpio->chip.regmap,
 					  GPIO_OUT_REG(offset),
 					  BD70528_GPIO_DRIVE_MASK,
 					  BD70528_GPIO_OPEN_DRAIN);
-		break;
 	case PIN_CONFIG_DRIVE_PUSH_PULL:
 		return regmap_update_bits(bdgpio->chip.regmap,
 					  GPIO_OUT_REG(offset),
 					  BD70528_GPIO_DRIVE_MASK,
 					  BD70528_GPIO_PUSH_PULL);
-		break;
 	case PIN_CONFIG_INPUT_DEBOUNCE:
 		return bd70528_set_debounce(bdgpio, offset,
 					    pinconf_to_config_argument(config));
-		break;
 	default:
 		break;
 	}
 	return -ENOTSUPP;
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
index 2a32b66959ba..130a0a0c8332 100644
--- a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
+++ b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
@@ -1328,11 +1328,10 @@ static bool configure_graphics_mode(
 		case COLOR_SPACE_SRGB:
 			/* by pass */
 			REG_SET(OUTPUT_CSC_CONTROL, 0,
 				OUTPUT_CSC_GRPH_MODE, 0);
 			break;
-			break;
 		case COLOR_SPACE_SRGB_LIMITED:
 			/* TV RGB */
 			REG_SET(OUTPUT_CSC_CONTROL, 0,
 				OUTPUT_CSC_GRPH_MODE, 1);
 			break;
diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
index d741787f75dc..42c7d157da32 100644
--- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
@@ -416,29 +416,22 @@ static int map_transmitter_id_to_phy_instance(
 	enum transmitter transmitter)
 {
 	switch (transmitter) {
 	case TRANSMITTER_UNIPHY_A:
 		return 0;
-	break;
 	case TRANSMITTER_UNIPHY_B:
 		return 1;
-	break;
 	case TRANSMITTER_UNIPHY_C:
 		return 2;
-	break;
 	case TRANSMITTER_UNIPHY_D:
 		return 3;
-	break;
 	case TRANSMITTER_UNIPHY_E:
 		return 4;
-	break;
 	case TRANSMITTER_UNIPHY_F:
 		return 5;
-	break;
 	case TRANSMITTER_UNIPHY_G:
 		return 6;
-	break;
 	default:
 		ASSERT(0);
 		return 0;
 	}
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
index 2bbfa2e176a9..382581c4a674 100644
--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
@@ -469,29 +469,22 @@ static int map_transmitter_id_to_phy_instance(
 	enum transmitter transmitter)
 {
 	switch (transmitter) {
 	case TRANSMITTER_UNIPHY_A:
 		return 0;
-	break;
 	case TRANSMITTER_UNIPHY_B:
 		return 1;
-	break;
 	case TRANSMITTER_UNIPHY_C:
 		return 2;
-	break;
 	case TRANSMITTER_UNIPHY_D:
 		return 3;
-	break;
 	case TRANSMITTER_UNIPHY_E:
 		return 4;
-	break;
 	case TRANSMITTER_UNIPHY_F:
 		return 5;
-	break;
 	case TRANSMITTER_UNIPHY_G:
 		return 6;
-	break;
 	default:
 		ASSERT(0);
 		return 0;
 	}
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
index b622b4b1dac3..7b4b2304bbff 100644
--- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
@@ -444,29 +444,22 @@ static int map_transmitter_id_to_phy_instance(
 	enum transmitter transmitter)
 {
 	switch (transmitter) {
 	case TRANSMITTER_UNIPHY_A:
 		return 0;
-	break;
 	case TRANSMITTER_UNIPHY_B:
 		return 1;
-	break;
 	case TRANSMITTER_UNIPHY_C:
 		return 2;
-	break;
 	case TRANSMITTER_UNIPHY_D:
 		return 3;
-	break;
 	case TRANSMITTER_UNIPHY_E:
 		return 4;
-	break;
 	case TRANSMITTER_UNIPHY_F:
 		return 5;
-	break;
 	case TRANSMITTER_UNIPHY_G:
 		return 6;
-	break;
 	default:
 		ASSERT(0);
 		return 0;
 	}
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
index 16fe7344702f..3d782b7c86cb 100644
--- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
@@ -381,29 +381,22 @@ static int map_transmitter_id_to_phy_instance(
 	enum transmitter transmitter)
 {
 	switch (transmitter) {
 	case TRANSMITTER_UNIPHY_A:
 		return 0;
-	break;
 	case TRANSMITTER_UNIPHY_B:
 		return 1;
-	break;
 	case TRANSMITTER_UNIPHY_C:
 		return 2;
-	break;
 	case TRANSMITTER_UNIPHY_D:
 		return 3;
-	break;
 	case TRANSMITTER_UNIPHY_E:
 		return 4;
-	break;
 	case TRANSMITTER_UNIPHY_F:
 		return 5;
-	break;
 	case TRANSMITTER_UNIPHY_G:
 		return 6;
-	break;
 	default:
 		ASSERT(0);
 		return 0;
 	}
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c b/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
index 5a5a9cb77acb..e9dd78c484d6 100644
--- a/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
@@ -451,29 +451,22 @@ static int map_transmitter_id_to_phy_instance(
 	enum transmitter transmitter)
 {
 	switch (transmitter) {
 	case TRANSMITTER_UNIPHY_A:
 		return 0;
-	break;
 	case TRANSMITTER_UNIPHY_B:
 		return 1;
-	break;
 	case TRANSMITTER_UNIPHY_C:
 		return 2;
-	break;
 	case TRANSMITTER_UNIPHY_D:
 		return 3;
-	break;
 	case TRANSMITTER_UNIPHY_E:
 		return 4;
-	break;
 	case TRANSMITTER_UNIPHY_F:
 		return 5;
-	break;
 	case TRANSMITTER_UNIPHY_G:
 		return 6;
-	break;
 	default:
 		ASSERT(0);
 		return 0;
 	}
 }
diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
index 0eae8cd35f9a..9dbf658162cd 100644
--- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
+++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
@@ -456,29 +456,22 @@ static int map_transmitter_id_to_phy_instance(
 	enum transmitter transmitter)
 {
 	switch (transmitter) {
 	case TRANSMITTER_UNIPHY_A:
 		return 0;
-	break;
 	case TRANSMITTER_UNIPHY_B:
 		return 1;
-	break;
 	case TRANSMITTER_UNIPHY_C:
 		return 2;
-	break;
 	case TRANSMITTER_UNIPHY_D:
 		return 3;
-	break;
 	case TRANSMITTER_UNIPHY_E:
 		return 4;
-	break;
 	case TRANSMITTER_UNIPHY_F:
 		return 5;
-	break;
 	case TRANSMITTER_UNIPHY_G:
 		return 6;
-	break;
 	default:
 		ASSERT(0);
 		return 0;
 	}
 }
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..bbe4e60dfd08 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -792,25 +792,20 @@ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock)
 	case G200_AGP:
 		return mgag200_g200_set_plls(mdev, clock);
 	case G200_SE_A:
 	case G200_SE_B:
 		return mga_g200se_set_plls(mdev, clock);
-		break;
 	case G200_WB:
 	case G200_EW3:
 		return mga_g200wb_set_plls(mdev, clock);
-		break;
 	case G200_EV:
 		return mga_g200ev_set_plls(mdev, clock);
-		break;
 	case G200_EH:
 	case G200_EH3:
 		return mga_g200eh_set_plls(mdev, clock);
-		break;
 	case G200_ER:
 		return mga_g200er_set_plls(mdev, clock);
-		break;
 	}
 
 	misc = RREG8(MGA_MISC_IN);
 	misc &= ~MGAREG_MISC_CLK_SEL_MASK;
 	misc |= MGAREG_MISC_CLK_SEL_MGA_MSK;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c
index 350f10a3de37..2ec84b8a3b3a 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c
@@ -121,11 +121,10 @@ pll_map(struct nvkm_bios *bios)
 	case NV_10:
 	case NV_11:
 	case NV_20:
 	case NV_30:
 		return nv04_pll_mapping;
-		break;
 	case NV_40:
 		return nv40_pll_mapping;
 	case NV_50:
 		if (device->chipset == 0x50)
 			return nv50_pll_mapping;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c
index efa50274df97..4884eb4a9221 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c
@@ -138,21 +138,18 @@ mcp77_clk_read(struct nvkm_clk *base, enum nv_clk_src src)
 		case 0x00000030: return read_pll(clk, 0x004020) >> P;
 		}
 		break;
 	case nv_clk_src_mem:
 		return 0;
-		break;
 	case nv_clk_src_vdec:
 		P = (read_div(clk) & 0x00000700) >> 8;
 
 		switch (mast & 0x00400000) {
 		case 0x00400000:
 			return nvkm_clk_read(&clk->base, nv_clk_src_core) >> P;
-			break;
 		default:
 			return 500000 >> P;
-			break;
 		}
 		break;
 	default:
 		break;
 	}
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c
index 2ccb4b6be153..7b1eb44ff3da 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c
@@ -169,11 +169,10 @@ nv50_ram_timing_read(struct nv50_ram *ram, u32 *timing)
 	case NVKM_RAM_TYPE_GDDR3:
 		T(CWL) = ((timing[2] & 0xff000000) >> 24) + 1;
 		break;
 	default:
 		return -ENOSYS;
-		break;
 	}
 
 	T(WR) = ((timing[1] >> 24) & 0xff) - 1 - T(CWL);
 
 	return 0;
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c b/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c
index e01746ce9fc4..1156634533f9 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c
@@ -88,11 +88,10 @@ gk104_top_oneinit(struct nvkm_top *top)
 		case 0x0000000e: B_(NVENC ); break;
 		case 0x0000000f: A_(NVENC1); break;
 		case 0x00000010: B_(NVDEC ); break;
 		case 0x00000013: B_(CE    ); break;
 		case 0x00000014: C_(GSP   ); break;
-			break;
 		default:
 			break;
 		}
 
 		nvkm_debug(subdev, "%02x.%d (%8s): addr %06x fault %2d "
diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c
index 5cea6eea72ab..2072ddc9549c 100644
--- a/drivers/gpu/drm/qxl/qxl_ioctl.c
+++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
@@ -158,11 +158,10 @@ static int qxl_process_single_command(struct qxl_device *qdev,
 	case QXL_CMD_SURFACE:
 	case QXL_CMD_CURSOR:
 	default:
 		DRM_DEBUG("Only draw commands in execbuffers\n");
 		return -EINVAL;
-		break;
 	}
 
 	if (cmd->command_size > PAGE_SIZE - sizeof(union qxl_release_info))
 		return -EINVAL;
 
diff --git a/drivers/iio/adc/meson_saradc.c b/drivers/iio/adc/meson_saradc.c
index e03988698755..66dc452d643a 100644
--- a/drivers/iio/adc/meson_saradc.c
+++ b/drivers/iio/adc/meson_saradc.c
@@ -591,17 +591,15 @@ static int meson_sar_adc_iio_info_read_raw(struct iio_dev *indio_dev,
 
 	switch (mask) {
 	case IIO_CHAN_INFO_RAW:
 		return meson_sar_adc_get_sample(indio_dev, chan, NO_AVERAGING,
 						ONE_SAMPLE, val);
-		break;
 
 	case IIO_CHAN_INFO_AVERAGE_RAW:
 		return meson_sar_adc_get_sample(indio_dev, chan,
 						MEAN_AVERAGING, EIGHT_SAMPLES,
 						val);
-		break;
 
 	case IIO_CHAN_INFO_SCALE:
 		if (chan->type == IIO_VOLTAGE) {
 			ret = regulator_get_voltage(priv->vref);
 			if (ret < 0) {
diff --git a/drivers/iio/imu/bmi160/bmi160_core.c b/drivers/iio/imu/bmi160/bmi160_core.c
index 222ebb26f013..431076dc0d2c 100644
--- a/drivers/iio/imu/bmi160/bmi160_core.c
+++ b/drivers/iio/imu/bmi160/bmi160_core.c
@@ -484,11 +484,10 @@ static int bmi160_write_raw(struct iio_dev *indio_dev,
 
 	switch (mask) {
 	case IIO_CHAN_INFO_SCALE:
 		return bmi160_set_scale(data,
 					bmi160_to_sensor(chan->type), val2);
-		break;
 	case IIO_CHAN_INFO_SAMP_FREQ:
 		return bmi160_set_odr(data, bmi160_to_sensor(chan->type),
 				      val, val2);
 	default:
 		return -EINVAL;
diff --git a/drivers/ipack/devices/ipoctal.c b/drivers/ipack/devices/ipoctal.c
index d480a514c983..3940714e4397 100644
--- a/drivers/ipack/devices/ipoctal.c
+++ b/drivers/ipack/devices/ipoctal.c
@@ -542,11 +542,10 @@ static void ipoctal_set_termios(struct tty_struct *tty,
 		mr1 |= MR1_RxRTS_CONTROL_OFF;
 		mr2 |= MR2_TxRTS_CONTROL_ON | MR2_CTS_ENABLE_TX_OFF;
 		break;
 	default:
 		return;
-		break;
 	}
 
 	baud = tty_get_baud_rate(tty);
 	tty_termios_encode_baud_rate(&tty->termios, baud, baud);
 
diff --git a/drivers/media/dvb-frontends/drx39xyj/drxj.c b/drivers/media/dvb-frontends/drx39xyj/drxj.c
index 237b9d04c076..37b32d9b398d 100644
--- a/drivers/media/dvb-frontends/drx39xyj/drxj.c
+++ b/drivers/media/dvb-frontends/drx39xyj/drxj.c
@@ -2323,11 +2323,10 @@ hi_command(struct i2c_device_addr *dev_addr, const struct drxj_hi_cmd *cmd, u16
 		/* No parameters */
 		break;
 
 	default:
 		return -EINVAL;
-		break;
 	}
 
 	/* Write command */
 	rc = drxj_dap_write_reg16(dev_addr, SIO_HI_RA_RAM_CMD__A, cmd->cmd, 0);
 	if (rc != 0) {
@@ -3592,11 +3591,10 @@ static int ctrl_set_uio_cfg(struct drx_demod_instance *demod, struct drxuio_cfg
 				goto rw_error;
 			}
 			break;
 		default:
 			return -EINVAL;
-			break;
 		}		/* switch ( uio_cfg->mode ) */
 		break;
       /*====================================================================*/
 	case DRX_UIO3:
 		/* DRX_UIO3: GPIO UIO-3 */
@@ -3616,11 +3614,10 @@ static int ctrl_set_uio_cfg(struct drx_demod_instance *demod, struct drxuio_cfg
 				goto rw_error;
 			}
 			break;
 		default:
 			return -EINVAL;
-			break;
 		}		/* switch ( uio_cfg->mode ) */
 		break;
       /*====================================================================*/
 	case DRX_UIO4:
 		/* DRX_UIO4: IRQN UIO-4 */
@@ -3640,11 +3637,10 @@ static int ctrl_set_uio_cfg(struct drx_demod_instance *demod, struct drxuio_cfg
 			ext_attr->uio_irqn_mode = uio_cfg->mode;
 			break;
 		case DRX_UIO_MODE_FIRMWARE0:
 		default:
 			return -EINVAL;
-			break;
 		}		/* switch ( uio_cfg->mode ) */
 		break;
       /*====================================================================*/
 	default:
 		return -EINVAL;
@@ -10951,11 +10947,10 @@ ctrl_set_standard(struct drx_demod_instance *demod, enum drx_standard *standard)
 		}
 		break;
 	default:
 		ext_attr->standard = DRX_STANDARD_UNKNOWN;
 		return -EINVAL;
-		break;
 	}
 
 	return 0;
 rw_error:
 	/* Don't know what the standard is now ... try again */
@@ -11072,11 +11067,10 @@ ctrl_power_mode(struct drx_demod_instance *demod, enum drx_power_mode *mode)
 		sio_cc_pwd_mode = SIO_CC_PWD_MODE_LEVEL_OSC;
 		break;
 	default:
 		/* Unknow sleep mode */
 		return -EINVAL;
-		break;
 	}
 
 	/* Check if device needs to be powered up */
 	if ((common_attr->current_power_mode != DRX_POWER_UP)) {
 		rc = power_up_device(demod);
@@ -11894,11 +11888,10 @@ static int drx_ctrl_u_code(struct drx_demod_instance *demod,
 			}
 			break;
 		}
 		default:
 			return -EINVAL;
-			break;
 
 		}
 		mc_data += mc_block_nr_bytes;
 	}
 
diff --git a/drivers/media/dvb-frontends/drxd_hard.c b/drivers/media/dvb-frontends/drxd_hard.c
index 45f982863904..a7eb81df88c2 100644
--- a/drivers/media/dvb-frontends/drxd_hard.c
+++ b/drivers/media/dvb-frontends/drxd_hard.c
@@ -1620,11 +1620,10 @@ static int CorrectSysClockDeviation(struct drxd_state *state)
 		case 6000000:
 			bandwidth = DRXD_BANDWIDTH_6MHZ_IN_HZ;
 			break;
 		default:
 			return -1;
-			break;
 		}
 
 		/* Compute new sysclock value
 		   sysClockFreq = (((incr + 2^23)*bandwidth)/2^21)/1000 */
 		incr += (1 << 23);
diff --git a/drivers/media/dvb-frontends/nxt200x.c b/drivers/media/dvb-frontends/nxt200x.c
index 35b83b1dd82c..200b6dbc75f8 100644
--- a/drivers/media/dvb-frontends/nxt200x.c
+++ b/drivers/media/dvb-frontends/nxt200x.c
@@ -166,11 +166,10 @@ static int nxt200x_writereg_multibyte (struct nxt200x_state* state, u8 reg, u8*
 			len2 = ((attr << 4) | 0x10) | len;
 			buf = 0x80;
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 
 	/* set multi register length */
 	nxt200x_writebytes(state, 0x34, &len2, 1);
 
@@ -188,11 +187,10 @@ static int nxt200x_writereg_multibyte (struct nxt200x_state* state, u8 reg, u8*
 			if (buf == 0)
 				return 0;
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 
 	pr_warn("Error writing multireg register 0x%02X\n", reg);
 
 	return 0;
@@ -214,11 +212,10 @@ static int nxt200x_readreg_multibyte (struct nxt200x_state* state, u8 reg, u8* d
 			nxt200x_writebytes(state, 0x34, &len2, 1);
 
 			/* read the actual data */
 			nxt200x_readbytes(state, reg, data, len);
 			return 0;
-			break;
 		case NXT2004:
 			/* probably not right, but gives correct values */
 			attr = 0x02;
 			if (reg & 0x80) {
 				attr = attr << 1;
@@ -237,14 +234,12 @@ static int nxt200x_readreg_multibyte (struct nxt200x_state* state, u8 reg, u8* d
 			/* read the actual data */
 			for(i = 0; i < len; i++) {
 				nxt200x_readbytes(state, 0x36 + i, &data[i], 1);
 			}
 			return 0;
-			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 }
 
 static void nxt200x_microcontroller_stop (struct nxt200x_state* state)
 {
@@ -372,11 +367,10 @@ static int nxt200x_writetuner (struct nxt200x_state* state, u8* data)
 			}
 			pr_warn("timeout error writing to tuner\n");
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 	return 0;
 }
 
 static void nxt200x_agc_reset(struct nxt200x_state* state)
@@ -553,11 +547,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 			if (state->config->set_ts_params)
 				state->config->set_ts_params(fe, 0);
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 
 	if (fe->ops.tuner_ops.calc_regs) {
 		/* get tuning information */
 		fe->ops.tuner_ops.calc_regs(fe, buf, 5);
@@ -578,11 +571,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 		case VSB_8:
 			buf[0] = 0x70;
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 	nxt200x_writebytes(state, 0x42, buf, 1);
 
 	/* configure sdm */
 	switch (state->demod_chip) {
@@ -592,11 +584,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 		case NXT2004:
 			buf[0] = 0x07;
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 	nxt200x_writebytes(state, 0x57, buf, 1);
 
 	/* write sdm1 input */
 	buf[0] = 0x10;
@@ -608,11 +599,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 		case NXT2004:
 			nxt200x_writebytes(state, 0x58, buf, 2);
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 
 	/* write sdmx input */
 	switch (p->modulation) {
 		case QAM_64:
@@ -624,11 +614,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 		case VSB_8:
 				buf[0] = 0x60;
 				break;
 		default:
 				return -EINVAL;
-				break;
 	}
 	buf[1] = 0x00;
 	switch (state->demod_chip) {
 		case NXT2002:
 			nxt200x_writereg_multibyte(state, 0x5C, buf, 2);
@@ -636,11 +625,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 		case NXT2004:
 			nxt200x_writebytes(state, 0x5C, buf, 2);
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 
 	/* write adc power lpf fc */
 	buf[0] = 0x05;
 	nxt200x_writebytes(state, 0x43, buf, 1);
@@ -662,11 +650,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 		case NXT2004:
 			nxt200x_writebytes(state, 0x4B, buf, 2);
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 
 	/* write kg1 */
 	buf[0] = 0x00;
 	nxt200x_writebytes(state, 0x4D, buf, 1);
@@ -718,11 +705,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 		case VSB_8:
 				buf[0] = 0x00;
 				break;
 		default:
 				return -EINVAL;
-				break;
 	}
 	nxt200x_writebytes(state, 0x30, buf, 1);
 
 	/* write agc control reg */
 	buf[0] = 0x00;
@@ -740,11 +726,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
 			nxt200x_writebytes(state, 0x49, buf, 2);
 			nxt200x_writebytes(state, 0x4B, buf, 2);
 			break;
 		default:
 			return -EINVAL;
-			break;
 	}
 
 	/* write agc control reg */
 	buf[0] = 0x04;
 	nxt200x_writebytes(state, 0x41, buf, 1);
@@ -1112,11 +1097,10 @@ static int nxt200x_init(struct dvb_frontend* fe)
 			case NXT2004:
 				ret = nxt2004_init(fe);
 				break;
 			default:
 				return -EINVAL;
-				break;
 		}
 		state->initialised = 1;
 	}
 	return ret;
 }
diff --git a/drivers/media/dvb-frontends/si21xx.c b/drivers/media/dvb-frontends/si21xx.c
index a116eff417f2..e31eb2c5cc4c 100644
--- a/drivers/media/dvb-frontends/si21xx.c
+++ b/drivers/media/dvb-frontends/si21xx.c
@@ -462,14 +462,12 @@ static int si21xx_set_voltage(struct dvb_frontend *fe, enum fe_sec_voltage volt)
 	val = (0x80 | si21_readreg(state, LNB_CTRL_REG_1));
 
 	switch (volt) {
 	case SEC_VOLTAGE_18:
 		return si21_writereg(state, LNB_CTRL_REG_1, val | 0x40);
-		break;
 	case SEC_VOLTAGE_13:
 		return si21_writereg(state, LNB_CTRL_REG_1, (val & ~0x40));
-		break;
 	default:
 		return -EINVAL;
 	}
 }
 
diff --git a/drivers/media/tuners/mt2063.c b/drivers/media/tuners/mt2063.c
index 2240d214dfac..d105431a2e2d 100644
--- a/drivers/media/tuners/mt2063.c
+++ b/drivers/media/tuners/mt2063.c
@@ -1847,11 +1847,10 @@ static int mt2063_init(struct dvb_frontend *fe)
 		def = MT2063B0_defaults;
 		break;
 
 	default:
 		return -ENODEV;
-		break;
 	}
 
 	while (status >= 0 && *def) {
 		u8 reg = *def++;
 		u8 val = *def++;
diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c
index 9903e9660a38..3078fac34e51 100644
--- a/drivers/message/fusion/mptbase.c
+++ b/drivers/message/fusion/mptbase.c
@@ -471,11 +471,10 @@ mpt_turbo_reply(MPT_ADAPTER *ioc, u32 pa)
 			req_idx = pa & 0x0000FFFF;
 			mf = MPT_INDEX_2_MFPTR(ioc, req_idx);
 			mpt_free_msg_frame(ioc, mf);
 			mb();
 			return;
-			break;
 		}
 		mr = (MPT_FRAME_HDR *) CAST_U32_TO_PTR(pa);
 		break;
 	case MPI_CONTEXT_REPLY_TYPE_SCSI_TARGET:
 		cb_idx = mpt_get_cb_idx(MPTSTM_DRIVER);
diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
index a97eb5d47705..686e8b6a4c55 100644
--- a/drivers/misc/mei/hbm.c
+++ b/drivers/misc/mei/hbm.c
@@ -1375,11 +1375,10 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
 
 		dev->dev_state = MEI_DEV_POWER_DOWN;
 		dev_info(dev->dev, "hbm: stop response: resetting.\n");
 		/* force the reset */
 		return -EPROTO;
-		break;
 
 	case CLIENT_DISCONNECT_REQ_CMD:
 		dev_dbg(dev->dev, "hbm: disconnect request: message received\n");
 
 		disconnect_req = (struct hbm_client_connect_request *)mei_msg;
diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index b40f46a43fc6..323035d4f2d0 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -879,21 +879,19 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
 		loff_t offs;
 
 		if (copy_from_user(&offs, argp, sizeof(loff_t)))
 			return -EFAULT;
 		return mtd_block_isbad(mtd, offs);
-		break;
 	}
 
 	case MEMSETBADBLOCK:
 	{
 		loff_t offs;
 
 		if (copy_from_user(&offs, argp, sizeof(loff_t)))
 			return -EFAULT;
 		return mtd_block_markbad(mtd, offs);
-		break;
 	}
 
 	case OTPSELECT:
 	{
 		int mode;
diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
index c3f49543ff26..9c215f7c5f81 100644
--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
+++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
@@ -73,15 +73,15 @@ static const struct can_bittiming_const mcp251xfd_data_bittiming_const = {
 
 static const char *__mcp251xfd_get_model_str(enum mcp251xfd_model model)
 {
 	switch (model) {
 	case MCP251XFD_MODEL_MCP2517FD:
-		return "MCP2517FD"; break;
+		return "MCP2517FD";
 	case MCP251XFD_MODEL_MCP2518FD:
-		return "MCP2518FD"; break;
+		return "MCP2518FD";
 	case MCP251XFD_MODEL_MCP251XFD:
-		return "MCP251xFD"; break;
+		return "MCP251xFD";
 	}
 
 	return "<unknown>";
 }
 
@@ -93,25 +93,25 @@ mcp251xfd_get_model_str(const struct mcp251xfd_priv *priv)
 
 static const char *mcp251xfd_get_mode_str(const u8 mode)
 {
 	switch (mode) {
 	case MCP251XFD_REG_CON_MODE_MIXED:
-		return "Mixed (CAN FD/CAN 2.0)"; break;
+		return "Mixed (CAN FD/CAN 2.0)";
 	case MCP251XFD_REG_CON_MODE_SLEEP:
-		return "Sleep"; break;
+		return "Sleep";
 	case MCP251XFD_REG_CON_MODE_INT_LOOPBACK:
-		return "Internal Loopback"; break;
+		return "Internal Loopback";
 	case MCP251XFD_REG_CON_MODE_LISTENONLY:
-		return "Listen Only"; break;
+		return "Listen Only";
 	case MCP251XFD_REG_CON_MODE_CONFIG:
-		return "Configuration"; break;
+		return "Configuration";
 	case MCP251XFD_REG_CON_MODE_EXT_LOOPBACK:
-		return "External Loopback"; break;
+		return "External Loopback";
 	case MCP251XFD_REG_CON_MODE_CAN2_0:
-		return "CAN 2.0"; break;
+		return "CAN 2.0";
 	case MCP251XFD_REG_CON_MODE_RESTRICTED:
-		return "Restricted Operation"; break;
+		return "Restricted Operation";
 	}
 
 	return "<unknown>";
 }
 
diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
index 0f865daeb36d..bf5e0e9bd0e2 100644
--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
@@ -1161,11 +1161,10 @@ int aq_nic_set_link_ksettings(struct aq_nic_s *self,
 			break;
 
 		default:
 			err = -1;
 			goto err_exit;
-		break;
 		}
 		if (!(self->aq_nic_cfg.aq_hw_caps->link_speed_msk & rate)) {
 			err = -1;
 			goto err_exit;
 		}
diff --git a/drivers/net/ethernet/cisco/enic/enic_ethtool.c b/drivers/net/ethernet/cisco/enic/enic_ethtool.c
index a4dd52bba2c3..1a9803f2073e 100644
--- a/drivers/net/ethernet/cisco/enic/enic_ethtool.c
+++ b/drivers/net/ethernet/cisco/enic/enic_ethtool.c
@@ -432,11 +432,10 @@ static int enic_grxclsrule(struct enic *enic, struct ethtool_rxnfc *cmd)
 	case IPPROTO_UDP:
 		fsp->flow_type = UDP_V4_FLOW;
 		break;
 	default:
 		return -EINVAL;
-		break;
 	}
 
 	fsp->h_u.tcp_ip4_spec.ip4src = flow_get_u32_src(&n->keys);
 	fsp->m_u.tcp_ip4_spec.ip4src = (__u32)~0;
 
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
index de563cfd294d..4b93ba149ec5 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
@@ -348,11 +348,10 @@ static s32 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw)
 			continue;
 
 		if (ixgbe_read_eerd_generic(hw, pointer, &length)) {
 			hw_dbg(hw, "EEPROM read failed\n");
 			return IXGBE_ERR_EEPROM;
-			break;
 		}
 
 		/* Skip pointer section if length is invalid. */
 		if (length == 0xFFFF || length == 0 ||
 		    (pointer + length) >= hw->eeprom.word_size)
diff --git a/drivers/net/wan/lmc/lmc_proto.c b/drivers/net/wan/lmc/lmc_proto.c
index e8b0b902b424..4e9cc83b615a 100644
--- a/drivers/net/wan/lmc/lmc_proto.c
+++ b/drivers/net/wan/lmc/lmc_proto.c
@@ -87,21 +87,17 @@ void lmc_proto_close(lmc_softc_t *sc)
 __be16 lmc_proto_type(lmc_softc_t *sc, struct sk_buff *skb) /*FOLD00*/
 {
     switch(sc->if_type){
     case LMC_PPP:
 	    return hdlc_type_trans(skb, sc->lmc_device);
-	    break;
     case LMC_NET:
         return htons(ETH_P_802_2);
-        break;
     case LMC_RAW: /* Packet type for skbuff kind of useless */
         return htons(ETH_P_802_2);
-        break;
     default:
         printk(KERN_WARNING "%s: No protocol set for this interface, assuming 802.2 (which is wrong!!)\n", sc->name);
         return htons(ETH_P_802_2);
-        break;
     }
 }
 
 void lmc_proto_netif(lmc_softc_t *sc, struct sk_buff *skb) /*FOLD00*/
 {
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
index 5c1af2021883..9c4e6cf2137a 100644
--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
@@ -3876,11 +3876,10 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
 		atomic_inc(&htt->num_mpdus_ready);
 
 		return ath10k_htt_rx_proc_rx_frag_ind(htt,
 						      &resp->rx_frag_ind,
 						      skb);
-		break;
 	}
 	case HTT_T2H_MSG_TYPE_TEST:
 		break;
 	case HTT_T2H_MSG_TYPE_STATS_CONF:
 		trace_ath10k_htt_stats(ar, skb->data, skb->len);
diff --git a/drivers/net/wireless/ath/ath6kl/testmode.c b/drivers/net/wireless/ath/ath6kl/testmode.c
index f3906dbe5495..89c7c4e25169 100644
--- a/drivers/net/wireless/ath/ath6kl/testmode.c
+++ b/drivers/net/wireless/ath/ath6kl/testmode.c
@@ -92,11 +92,10 @@ int ath6kl_tm_cmd(struct wiphy *wiphy, struct wireless_dev *wdev,
 
 		ath6kl_wmi_test_cmd(ar->wmi, buf, buf_len);
 
 		return 0;
 
-		break;
 	case ATH6KL_TM_CMD_RX_REPORT:
 	default:
 		return -EOPNOTSUPP;
 	}
 }
diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
index 6609ce122e6e..b66eeb577272 100644
--- a/drivers/net/wireless/ath/ath9k/hw.c
+++ b/drivers/net/wireless/ath/ath9k/hw.c
@@ -2306,11 +2306,10 @@ void ath9k_hw_beaconinit(struct ath_hw *ah, u32 next_beacon, u32 beacon_period)
 		break;
 	default:
 		ath_dbg(ath9k_hw_common(ah), BEACON,
 			"%s: unsupported opmode: %d\n", __func__, ah->opmode);
 		return;
-		break;
 	}
 
 	REG_WRITE(ah, AR_BEACON_PERIOD, beacon_period);
 	REG_WRITE(ah, AR_DMA_BEACON_PERIOD, beacon_period);
 	REG_WRITE(ah, AR_SWBA_PERIOD, beacon_period);
diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
index cbdebefb854a..8698ca4d30de 100644
--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
@@ -1200,17 +1200,15 @@ static int iwl_mvm_mac_ctx_send(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
 	switch (vif->type) {
 	case NL80211_IFTYPE_STATION:
 		return iwl_mvm_mac_ctxt_cmd_sta(mvm, vif, action,
 						force_assoc_off,
 						bssid_override);
-		break;
 	case NL80211_IFTYPE_AP:
 		if (!vif->p2p)
 			return iwl_mvm_mac_ctxt_cmd_ap(mvm, vif, action);
 		else
 			return iwl_mvm_mac_ctxt_cmd_go(mvm, vif, action);
-		break;
 	case NL80211_IFTYPE_MONITOR:
 		return iwl_mvm_mac_ctxt_cmd_listener(mvm, vif, action);
 	case NL80211_IFTYPE_P2P_DEVICE:
 		return iwl_mvm_mac_ctxt_cmd_p2p_device(mvm, vif, action);
 	case NL80211_IFTYPE_ADHOC:
diff --git a/drivers/net/wireless/intersil/p54/eeprom.c b/drivers/net/wireless/intersil/p54/eeprom.c
index 5bd35c147e19..3ca9d26df174 100644
--- a/drivers/net/wireless/intersil/p54/eeprom.c
+++ b/drivers/net/wireless/intersil/p54/eeprom.c
@@ -868,11 +868,10 @@ int p54_parse_eeprom(struct ieee80211_hw *dev, void *eeprom, int len)
 				err = -ENOMSG;
 				goto err;
 			} else {
 				goto good_eeprom;
 			}
-			break;
 		default:
 			break;
 		}
 
 		crc16 = crc_ccitt(crc16, (u8 *)entry, (entry_len + 1) * 2);
diff --git a/drivers/net/wireless/intersil/prism54/oid_mgt.c b/drivers/net/wireless/intersil/prism54/oid_mgt.c
index 9fd307ca4b6d..7b251ae90a68 100644
--- a/drivers/net/wireless/intersil/prism54/oid_mgt.c
+++ b/drivers/net/wireless/intersil/prism54/oid_mgt.c
@@ -785,21 +785,19 @@ mgt_response_to_str(enum oid_num_t n, union oid_res_t *r, char *str)
 			struct obj_buffer *buff = r->ptr;
 			return scnprintf(str, PRIV_STR_SIZE,
 					"size=%u\naddr=0x%X\n", buff->size,
 					buff->addr);
 		}
-		break;
 	case OID_TYPE_BSS:{
 			struct obj_bss *bss = r->ptr;
 			return scnprintf(str, PRIV_STR_SIZE,
 					"age=%u\nchannel=%u\n"
 					"capinfo=0x%X\nrates=0x%X\n"
 					"basic_rates=0x%X\n", bss->age,
 					bss->channel, bss->capinfo,
 					bss->rates, bss->basic_rates);
 		}
-		break;
 	case OID_TYPE_BSSLIST:{
 			struct obj_bsslist *list = r->ptr;
 			int i, k;
 			k = scnprintf(str, PRIV_STR_SIZE, "nr=%u\n", list->nr);
 			for (i = 0; i < list->nr; i++)
@@ -812,53 +810,47 @@ mgt_response_to_str(enum oid_num_t n, union oid_res_t *r, char *str)
 					      list->bsslist[i].capinfo,
 					      list->bsslist[i].rates,
 					      list->bsslist[i].basic_rates);
 			return k;
 		}
-		break;
 	case OID_TYPE_FREQUENCIES:{
 			struct obj_frequencies *freq = r->ptr;
 			int i, t;
 			printk("nr : %u\n", freq->nr);
 			t = scnprintf(str, PRIV_STR_SIZE, "nr=%u\n", freq->nr);
 			for (i = 0; i < freq->nr; i++)
 				t += scnprintf(str + t, PRIV_STR_SIZE - t,
 					      "mhz[%u]=%u\n", i, freq->mhz[i]);
 			return t;
 		}
-		break;
 	case OID_TYPE_MLME:{
 			struct obj_mlme *mlme = r->ptr;
 			return scnprintf(str, PRIV_STR_SIZE,
 					"id=0x%X\nstate=0x%X\ncode=0x%X\n",
 					mlme->id, mlme->state, mlme->code);
 		}
-		break;
 	case OID_TYPE_MLMEEX:{
 			struct obj_mlmeex *mlme = r->ptr;
 			return scnprintf(str, PRIV_STR_SIZE,
 					"id=0x%X\nstate=0x%X\n"
 					"code=0x%X\nsize=0x%X\n", mlme->id,
 					mlme->state, mlme->code, mlme->size);
 		}
-		break;
 	case OID_TYPE_ATTACH:{
 			struct obj_attachment *attach = r->ptr;
 			return scnprintf(str, PRIV_STR_SIZE,
 					"id=%d\nsize=%d\n",
 					attach->id,
 					attach->size);
 		}
-		break;
 	case OID_TYPE_SSID:{
 			struct obj_ssid *ssid = r->ptr;
 			return scnprintf(str, PRIV_STR_SIZE,
 					"length=%u\noctets=%.*s\n",
 					ssid->length, ssid->length,
 					ssid->octets);
 		}
-		break;
 	case OID_TYPE_KEY:{
 			struct obj_key *key = r->ptr;
 			int t, i;
 			t = scnprintf(str, PRIV_STR_SIZE,
 				     "type=0x%X\nlength=0x%X\nkey=0x",
@@ -867,11 +859,10 @@ mgt_response_to_str(enum oid_num_t n, union oid_res_t *r, char *str)
 				t += scnprintf(str + t, PRIV_STR_SIZE - t,
 					      "%02X:", key->key[i]);
 			t += scnprintf(str + t, PRIV_STR_SIZE - t, "\n");
 			return t;
 		}
-		break;
 	case OID_TYPE_RAW:
 	case OID_TYPE_ADDR:{
 			unsigned char *buff = r->ptr;
 			int t, i;
 			t = scnprintf(str, PRIV_STR_SIZE, "hex data=");
@@ -879,11 +870,10 @@ mgt_response_to_str(enum oid_num_t n, union oid_res_t *r, char *str)
 				t += scnprintf(str + t, PRIV_STR_SIZE - t,
 					      "%02X:", buff[i]);
 			t += scnprintf(str + t, PRIV_STR_SIZE - t, "\n");
 			return t;
 		}
-		break;
 	default:
 		BUG();
 	}
 	return 0;
 }
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
index 63f9ea21962f..bd9160b166c5 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
@@ -1224,11 +1224,10 @@ static int _rtl88ee_set_media_status(struct ieee80211_hw *hw,
 			"Set Network type to AP!\n");
 		break;
 	default:
 		pr_err("Network type %d not support!\n", type);
 		return 1;
-		break;
 	}
 
 	/* MSR_INFRA == Link in infrastructure network;
 	 * MSR_ADHOC == Link in ad hoc network;
 	 * Therefore, check link state is necessary.
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
index a36dc6e726d2..f8a1de6e9849 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
@@ -1130,11 +1130,10 @@ static int _rtl8723e_set_media_status(struct ieee80211_hw *hw,
 			"Set Network type to AP!\n");
 		break;
 	default:
 		pr_err("Network type %d not support!\n", type);
 		return 1;
-		break;
 	}
 
 	/* MSR_INFRA == Link in infrastructure network;
 	 * MSR_ADHOC == Link in ad hoc network;
 	 * Therefore, check link state is necessary.
diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
index f41a7643b9c4..225b8cd44f23 100644
--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
@@ -2083,16 +2083,14 @@ bool rtl8812ae_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
 	switch (rfpath) {
 	case RF90_PATH_A:
 		return __rtl8821ae_phy_config_with_headerfile(hw,
 				radioa_array_table_a, radioa_arraylen_a,
 				_rtl8821ae_config_rf_radio_a);
-		break;
 	case RF90_PATH_B:
 		return __rtl8821ae_phy_config_with_headerfile(hw,
 				radioa_array_table_b, radioa_arraylen_b,
 				_rtl8821ae_config_rf_radio_b);
-		break;
 	case RF90_PATH_C:
 	case RF90_PATH_D:
 		pr_err("switch case %#x not processed\n", rfpath);
 		break;
 	}
@@ -2114,11 +2112,10 @@ bool rtl8821ae_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
 	switch (rfpath) {
 	case RF90_PATH_A:
 		return __rtl8821ae_phy_config_with_headerfile(hw,
 			radioa_array_table, radioa_arraylen,
 			_rtl8821ae_config_rf_radio_a);
-		break;
 
 	case RF90_PATH_B:
 	case RF90_PATH_C:
 	case RF90_PATH_D:
 		pr_err("switch case %#x not processed\n", rfpath);
diff --git a/drivers/nfc/st21nfca/core.c b/drivers/nfc/st21nfca/core.c
index 2ce17932a073..6ca0d2f56b18 100644
--- a/drivers/nfc/st21nfca/core.c
+++ b/drivers/nfc/st21nfca/core.c
@@ -792,11 +792,10 @@ static int st21nfca_hci_im_transceive(struct nfc_hci_dev *hdev,
 		return nfc_hci_send_cmd_async(hdev, target->hci_reader_gate,
 					      ST21NFCA_WR_XCHG_DATA, skb->data,
 					      skb->len,
 					      st21nfca_hci_data_exchange_cb,
 					      info);
-		break;
 	default:
 		return 1;
 	}
 }
 
diff --git a/drivers/nfc/trf7970a.c b/drivers/nfc/trf7970a.c
index 3bd97c73f983..c70f62fe321e 100644
--- a/drivers/nfc/trf7970a.c
+++ b/drivers/nfc/trf7970a.c
@@ -1380,11 +1380,10 @@ static int trf7970a_is_iso15693_write_or_lock(u8 cmd)
 	case ISO15693_CMD_WRITE_AFI:
 	case ISO15693_CMD_LOCK_AFI:
 	case ISO15693_CMD_WRITE_DSFID:
 	case ISO15693_CMD_LOCK_DSFID:
 		return 1;
-		break;
 	default:
 		return 0;
 	}
 }
 
diff --git a/drivers/nvdimm/claim.c b/drivers/nvdimm/claim.c
index 5a7c80053c62..2f250874b1a4 100644
--- a/drivers/nvdimm/claim.c
+++ b/drivers/nvdimm/claim.c
@@ -200,11 +200,10 @@ ssize_t nd_namespace_store(struct device *dev,
 		}
 		break;
 	default:
 		len = -EBUSY;
 		goto out_attach;
-		break;
 	}
 
 	if (__nvdimm_namespace_capacity(ndns) < SZ_16M) {
 		dev_dbg(dev, "%s too small to host\n", name);
 		len = -ENXIO;
diff --git a/drivers/pci/controller/pci-v3-semi.c b/drivers/pci/controller/pci-v3-semi.c
index 1f54334f09f7..154a5398633c 100644
--- a/drivers/pci/controller/pci-v3-semi.c
+++ b/drivers/pci/controller/pci-v3-semi.c
@@ -656,11 +656,10 @@ static int v3_get_dma_range_config(struct v3_pci *v3,
 		val |= V3_LB_BASE_ADR_SIZE_2GB;
 		break;
 	default:
 		dev_err(v3->dev, "illegal dma memory chunk size\n");
 		return -EINVAL;
-		break;
 	}
 	val |= V3_PCI_MAP_M_REG_EN | V3_PCI_MAP_M_ENABLE;
 	*pci_map = val;
 
 	dev_dbg(dev,
diff --git a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
index 5e24838a582f..2223ead5bd72 100644
--- a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
+++ b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
@@ -106,23 +106,18 @@ struct s3c24xx_eint_domain_data {
 static int s3c24xx_eint_get_trigger(unsigned int type)
 {
 	switch (type) {
 	case IRQ_TYPE_EDGE_RISING:
 		return EINT_EDGE_RISING;
-		break;
 	case IRQ_TYPE_EDGE_FALLING:
 		return EINT_EDGE_FALLING;
-		break;
 	case IRQ_TYPE_EDGE_BOTH:
 		return EINT_EDGE_BOTH;
-		break;
 	case IRQ_TYPE_LEVEL_HIGH:
 		return EINT_LEVEL_HIGH;
-		break;
 	case IRQ_TYPE_LEVEL_LOW:
 		return EINT_LEVEL_LOW;
-		break;
 	default:
 		return -EINVAL;
 	}
 }
 
diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
index 49f4b73be513..1c2084c74a57 100644
--- a/drivers/platform/x86/acer-wmi.c
+++ b/drivers/platform/x86/acer-wmi.c
@@ -790,11 +790,10 @@ static acpi_status AMW0_set_u32(u32 value, u32 cap)
 		if (value > max_brightness)
 			return AE_BAD_PARAMETER;
 		switch (quirks->brightness) {
 		default:
 			return ec_write(0x83, value);
-			break;
 		}
 	default:
 		return AE_ERROR;
 	}
 
diff --git a/drivers/platform/x86/sony-laptop.c b/drivers/platform/x86/sony-laptop.c
index e5a1b5533408..704813374922 100644
--- a/drivers/platform/x86/sony-laptop.c
+++ b/drivers/platform/x86/sony-laptop.c
@@ -2465,26 +2465,23 @@ static int __sony_nc_gfx_switch_status_get(void)
 	case 0x0146:
 		/* 1: discrete GFX (speed)
 		 * 0: integrated GFX (stamina)
 		 */
 		return result & 0x1 ? SPEED : STAMINA;
-		break;
 	case 0x015B:
 		/* 0: discrete GFX (speed)
 		 * 1: integrated GFX (stamina)
 		 */
 		return result & 0x1 ? STAMINA : SPEED;
-		break;
 	case 0x0128:
 		/* it's a more elaborated bitmask, for now:
 		 * 2: integrated GFX (stamina)
 		 * 0: discrete GFX (speed)
 		 */
 		dprintk("GFX Status: 0x%x\n", result);
 		return result & 0x80 ? AUTO :
 			result & 0x02 ? STAMINA : SPEED;
-		break;
 	}
 	return -EINVAL;
 }
 
 static ssize_t sony_nc_gfx_switch_status_show(struct device *dev,
diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
index d88f388a3450..44e802f9f1b4 100644
--- a/drivers/platform/x86/wmi.c
+++ b/drivers/platform/x86/wmi.c
@@ -1258,17 +1258,14 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
 	}
 
 	switch (result) {
 	case -EINVAL:
 		return AE_BAD_PARAMETER;
-		break;
 	case -ENODEV:
 		return AE_NOT_FOUND;
-		break;
 	case -ETIME:
 		return AE_TIME;
-		break;
 	default:
 		return AE_OK;
 	}
 }
 
diff --git a/drivers/power/supply/wm831x_power.c b/drivers/power/supply/wm831x_power.c
index 18b33f14dfee..4cd2dd870039 100644
--- a/drivers/power/supply/wm831x_power.c
+++ b/drivers/power/supply/wm831x_power.c
@@ -666,11 +666,10 @@ static int wm831x_power_probe(struct platform_device *pdev)
 	default:
 		dev_err(&pdev->dev, "Failed to find USB phy: %d\n", ret);
 		fallthrough;
 	case -EPROBE_DEFER:
 		goto err_bat_irq;
-		break;
 	}
 
 	return ret;
 
 err_bat_irq:
diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
index f923ed019d4a..ed034192b3c3 100644
--- a/drivers/scsi/aic94xx/aic94xx_task.c
+++ b/drivers/scsi/aic94xx/aic94xx_task.c
@@ -267,11 +267,10 @@ static void asd_task_tasklet_complete(struct asd_ascb *ascb,
 		ts->stat = SAS_NAK_R_ERR;
 		break;
 	case TA_I_T_NEXUS_LOSS:
 		opcode = dl->status_block[0];
 		goto Again;
-		break;
 	case TF_INV_CONN_HANDLE:
 		ts->resp = SAS_TASK_UNDELIVERED;
 		ts->stat = SAS_DEVICE_UNKNOWN;
 		break;
 	case TF_REQUESTED_N_PENDING:
diff --git a/drivers/scsi/be2iscsi/be_mgmt.c b/drivers/scsi/be2iscsi/be_mgmt.c
index 96d6e384b2b2..0d4928567265 100644
--- a/drivers/scsi/be2iscsi/be_mgmt.c
+++ b/drivers/scsi/be2iscsi/be_mgmt.c
@@ -1242,22 +1242,18 @@ beiscsi_adap_family_disp(struct device *dev, struct device_attribute *attr,
 	case BE_DEVICE_ID1:
 	case OC_DEVICE_ID1:
 	case OC_DEVICE_ID2:
 		return snprintf(buf, PAGE_SIZE,
 				"Obsolete/Unsupported BE2 Adapter Family\n");
-		break;
 	case BE_DEVICE_ID2:
 	case OC_DEVICE_ID3:
 		return snprintf(buf, PAGE_SIZE, "BE3-R Adapter Family\n");
-		break;
 	case OC_SKH_ID1:
 		return snprintf(buf, PAGE_SIZE, "Skyhawk-R Adapter Family\n");
-		break;
 	default:
 		return snprintf(buf, PAGE_SIZE,
 				"Unknown Adapter Family: 0x%x\n", dev_id);
-		break;
 	}
 }
 
 /**
  * beiscsi_phys_port()- Display Physical Port Identifier
diff --git a/drivers/scsi/bnx2fc/bnx2fc_hwi.c b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
index 08992095ce7a..b37b0a9ec12d 100644
--- a/drivers/scsi/bnx2fc/bnx2fc_hwi.c
+++ b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
@@ -768,11 +768,10 @@ static void bnx2fc_process_unsol_compl(struct bnx2fc_rport *tgt, u16 wqe)
 				if (rc)
 					goto skip_rec;
 			} else
 				printk(KERN_ERR PFX "SRR in progress\n");
 			goto ret_err_rqe;
-			break;
 		default:
 			break;
 		}
 
 skip_rec:
diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
index 0f9274960dc6..a4be6f439c47 100644
--- a/drivers/scsi/fcoe/fcoe.c
+++ b/drivers/scsi/fcoe/fcoe.c
@@ -1892,11 +1892,10 @@ static int fcoe_device_notification(struct notifier_block *notifier,
 			fcoe_interface_remove(fcoe);
 		fcoe_interface_cleanup(fcoe);
 		mutex_unlock(&fcoe_config_mutex);
 		fcoe_ctlr_device_delete(fcoe_ctlr_to_ctlr_dev(ctlr));
 		goto out;
-		break;
 	case NETDEV_FEAT_CHANGE:
 		fcoe_netdev_features_change(lport, netdev);
 		break;
 	default:
 		FCOE_NETDEV_DBG(netdev, "Unknown event %ld "
diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index 83ce4f11a589..45136e3a4efc 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -7440,11 +7440,10 @@ static int find_PCI_BAR_index(struct pci_dev *pdev, unsigned long pci_bar_addr)
 				break;
 			default:	/* reserved in PCI 2.2 */
 				dev_warn(&pdev->dev,
 				       "base address is invalid\n");
 				return -1;
-				break;
 			}
 		}
 		if (offset == pci_bar_addr - PCI_BASE_ADDRESS_0)
 			return i + 1;
 	}
diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c
index 6a2561f26e38..db4c7a7ff4dd 100644
--- a/drivers/scsi/hptiop.c
+++ b/drivers/scsi/hptiop.c
@@ -756,11 +756,10 @@ static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
 		scsi_set_resid(scp,
 			scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length));
 		scp->result = SAM_STAT_CHECK_CONDITION;
 		memcpy(scp->sense_buffer, &req->sg_list, SCSI_SENSE_BUFFERSIZE);
 		goto skip_resid;
-		break;
 
 	default:
 		scp->result = DRIVER_INVALID << 24 | DID_ABORT << 16;
 		break;
 	}
diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
index b0aa58d117cc..e451102b9a29 100644
--- a/drivers/scsi/ipr.c
+++ b/drivers/scsi/ipr.c
@@ -9485,11 +9485,10 @@ static pci_ers_result_t ipr_pci_error_detected(struct pci_dev *pdev,
 		ipr_pci_frozen(pdev);
 		return PCI_ERS_RESULT_CAN_RECOVER;
 	case pci_channel_io_perm_failure:
 		ipr_pci_perm_failure(pdev);
 		return PCI_ERS_RESULT_DISCONNECT;
-		break;
 	default:
 		break;
 	}
 	return PCI_ERS_RESULT_NEED_RESET;
 }
diff --git a/drivers/scsi/isci/phy.c b/drivers/scsi/isci/phy.c
index 7041e2e3ab48..1b87d9080ebe 100644
--- a/drivers/scsi/isci/phy.c
+++ b/drivers/scsi/isci/phy.c
@@ -751,11 +751,10 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
 		       sci_change_state(&iphy->sm, SCI_PHY_STARTING);
 		       break;
 		default:
 			phy_event_warn(iphy, state, event_code);
 			return SCI_FAILURE;
-			break;
 		}
 		return SCI_SUCCESS;
 	case SCI_PHY_SUB_AWAIT_IAF_UF:
 		switch (scu_get_event_code(event_code)) {
 		case SCU_EVENT_SAS_PHY_DETECTED:
@@ -956,11 +955,10 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
 			sci_change_state(&iphy->sm, SCI_PHY_STARTING);
 			break;
 		default:
 			phy_event_warn(iphy, state, event_code);
 			return SCI_FAILURE_INVALID_STATE;
-			break;
 		}
 		return SCI_SUCCESS;
 	default:
 		dev_dbg(sciphy_to_dev(iphy), "%s: in wrong state: %s\n",
 			__func__, phy_state_name(state));
diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
index c9a327b13e5c..325081ac6553 100644
--- a/drivers/scsi/lpfc/lpfc_debugfs.c
+++ b/drivers/scsi/lpfc/lpfc_debugfs.c
@@ -3339,11 +3339,10 @@ lpfc_idiag_pcicfg_read(struct file *file, char __user *buf, size_t nbytes,
 		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
 				"%03x: %08x\n", where, u32val);
 		break;
 	case LPFC_PCI_CFG_BROWSE: /* browse all */
 		goto pcicfg_browse;
-		break;
 	default:
 		/* illegal count */
 		len = 0;
 		break;
 	}
@@ -4379,11 +4378,11 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 					goto pass_check;
 				}
 			}
 		}
 		goto error_out;
-		break;
+
 	case LPFC_IDIAG_CQ:
 		/* MBX complete queue */
 		if (phba->sli4_hba.mbx_cq &&
 		    phba->sli4_hba.mbx_cq->queue_id == queid) {
 			/* Sanity check */
@@ -4431,11 +4430,11 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 					goto pass_check;
 				}
 			}
 		}
 		goto error_out;
-		break;
+
 	case LPFC_IDIAG_MQ:
 		/* MBX work queue */
 		if (phba->sli4_hba.mbx_wq &&
 		    phba->sli4_hba.mbx_wq->queue_id == queid) {
 			/* Sanity check */
@@ -4445,11 +4444,11 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 				goto error_out;
 			idiag.ptr_private = phba->sli4_hba.mbx_wq;
 			goto pass_check;
 		}
 		goto error_out;
-		break;
+
 	case LPFC_IDIAG_WQ:
 		/* ELS work queue */
 		if (phba->sli4_hba.els_wq &&
 		    phba->sli4_hba.els_wq->queue_id == queid) {
 			/* Sanity check */
@@ -4485,13 +4484,12 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 					idiag.ptr_private = qp;
 					goto pass_check;
 				}
 			}
 		}
-
 		goto error_out;
-		break;
+
 	case LPFC_IDIAG_RQ:
 		/* HDR queue */
 		if (phba->sli4_hba.hdr_rq &&
 		    phba->sli4_hba.hdr_rq->queue_id == queid) {
 			/* Sanity check */
@@ -4512,14 +4510,12 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
 				goto error_out;
 			idiag.ptr_private = phba->sli4_hba.dat_rq;
 			goto pass_check;
 		}
 		goto error_out;
-		break;
 	default:
 		goto error_out;
-		break;
 	}
 
 pass_check:
 
 	if (idiag.cmd.opcode == LPFC_IDIAG_CMD_QUEACC_RD) {
diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
index ca25e54bb782..b6090357e8a5 100644
--- a/drivers/scsi/lpfc/lpfc_init.c
+++ b/drivers/scsi/lpfc/lpfc_init.c
@@ -7194,11 +7194,10 @@ lpfc_init_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
 	default:
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
 				"1431 Invalid HBA PCI-device group: 0x%x\n",
 				dev_grp);
 		return -ENODEV;
-		break;
 	}
 	return 0;
 }
 
 /**
diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
index 983eeb0e3d07..c3b02dab6e5c 100644
--- a/drivers/scsi/lpfc/lpfc_scsi.c
+++ b/drivers/scsi/lpfc/lpfc_scsi.c
@@ -4282,11 +4282,10 @@ lpfc_scsi_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
 	default:
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
 				"1418 Invalid HBA PCI-device group: 0x%x\n",
 				dev_grp);
 		return -ENODEV;
-		break;
 	}
 	phba->lpfc_rampdown_queue_depth = lpfc_rampdown_queue_depth;
 	phba->lpfc_scsi_cmd_iocb_cmpl = lpfc_scsi_cmd_iocb_cmpl;
 	return 0;
 }
diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index e158cd77d387..0f18f1ba8a28 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -9187,11 +9187,10 @@ lpfc_mbox_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
 	default:
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
 				"1420 Invalid HBA PCI-device group: 0x%x\n",
 				dev_grp);
 		return -ENODEV;
-		break;
 	}
 	return 0;
 }
 
 /**
@@ -10070,11 +10069,10 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
 	default:
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
 				"2014 Invalid command 0x%x\n",
 				iocbq->iocb.ulpCommand);
 		return IOCB_ERROR;
-		break;
 	}
 
 	if (iocbq->iocb_flag & LPFC_IO_DIF_PASS)
 		bf_set(wqe_dif, &wqe->generic.wqe_com, LPFC_WQE_DIF_PASSTHRU);
 	else if (iocbq->iocb_flag & LPFC_IO_DIF_STRIP)
@@ -10232,11 +10230,10 @@ lpfc_sli_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
 	default:
 		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
 				"1419 Invalid HBA PCI-device group: 0x%x\n",
 				dev_grp);
 		return -ENODEV;
-		break;
 	}
 	phba->lpfc_get_iocb_from_iocbq = lpfc_get_iocb_from_iocbq;
 	return 0;
 }
 
diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c
index 0354898d7cac..2f7a52bd653a 100644
--- a/drivers/scsi/mvumi.c
+++ b/drivers/scsi/mvumi.c
@@ -2294,11 +2294,10 @@ static int mvumi_cfg_hw_reg(struct mvumi_hba *mhba)
 		regs->int_drbl_int_mask     = 0x3FFFFFFF;
 		regs->int_mu = regs->int_dl_cpu2pciea | regs->int_comaout;
 		break;
 	default:
 		return -1;
-		break;
 	}
 
 	return 0;
 }
 
diff --git a/drivers/scsi/pcmcia/nsp_cs.c b/drivers/scsi/pcmcia/nsp_cs.c
index bc5a623519e7..bb3b3884f968 100644
--- a/drivers/scsi/pcmcia/nsp_cs.c
+++ b/drivers/scsi/pcmcia/nsp_cs.c
@@ -1100,12 +1100,10 @@ static irqreturn_t nspintr(int irq, void *dev_id)
 		nsp_index_write(base, SCSIBUSCTRL, SCSI_ATN);
 		udelay(1);
 		nsp_index_write(base, SCSIBUSCTRL, SCSI_ATN | AUTODIRECTION | ACKENB);
 		return IRQ_HANDLED;
 
-		break;
-
 	case PH_RESELECT:
 		//nsp_dbg(NSP_DEBUG_INTR, "phase reselect");
 		// *sync_neg = SYNC_NOT_YET;
 		if ((phase & BUSMON_PHASE_MASK) != BUSPHASE_MESSAGE_IN) {
 
diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
index 07afd0d8a8f3..40af7f1524ce 100644
--- a/drivers/scsi/qla2xxx/qla_mbx.c
+++ b/drivers/scsi/qla2xxx/qla_mbx.c
@@ -4028,11 +4028,10 @@ qla24xx_report_id_acquisition(scsi_qla_host_t *vha,
 
 			/* if our portname is higher then initiate N2N login */
 
 			set_bit(N2N_LOGIN_NEEDED, &vha->dpc_flags);
 			return;
-			break;
 		case TOPO_FL:
 			ha->current_topology = ISP_CFG_FL;
 			break;
 		case TOPO_F:
 			ha->current_topology = ISP_CFG_F;
diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
index e2e5356a997d..43f7624508a9 100644
--- a/drivers/scsi/st.c
+++ b/drivers/scsi/st.c
@@ -2844,11 +2844,10 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon
 		fileno = blkno = at_sm = 0;
 		break;
 	case MTNOP:
 		DEBC_printk(STp, "No op on tape.\n");
 		return 0;	/* Should do something ? */
-		break;
 	case MTRETEN:
 		cmd[0] = START_STOP;
 		if (STp->immediate) {
 			cmd[1] = 1;	/* Don't wait for completion */
 			timeout = STp->device->request_queue->rq_timeout;
diff --git a/drivers/scsi/sym53c8xx_2/sym_hipd.c b/drivers/scsi/sym53c8xx_2/sym_hipd.c
index a9fe092a4906..255a2d48d421 100644
--- a/drivers/scsi/sym53c8xx_2/sym_hipd.c
+++ b/drivers/scsi/sym53c8xx_2/sym_hipd.c
@@ -4594,11 +4594,10 @@ static void sym_int_sir(struct sym_hcb *np)
 				sym_print_addr(cp->cmd,
 					"M_REJECT received (%x:%x).\n",
 					scr_to_cpu(np->lastmsg), np->msgout[0]);
 			}
 			goto out_clrack;
-			break;
 		default:
 			goto out_reject;
 		}
 		break;
 	/*
diff --git a/drivers/staging/media/atomisp/pci/sh_css.c b/drivers/staging/media/atomisp/pci/sh_css.c
index ddee04c8248d..eab7a048ca7a 100644
--- a/drivers/staging/media/atomisp/pci/sh_css.c
+++ b/drivers/staging/media/atomisp/pci/sh_css.c
@@ -6284,11 +6284,10 @@ allocate_delay_frames(struct ia_css_pipe *pipe) {
 	case IA_CSS_PIPE_ID_CAPTURE: {
 		struct ia_css_capture_settings *mycs_capture = &pipe->pipe_settings.capture;
 		(void)mycs_capture;
 		return err;
 	}
-	break;
 	case IA_CSS_PIPE_ID_VIDEO: {
 		struct ia_css_video_settings *mycs_video = &pipe->pipe_settings.video;
 
 		ref_info = mycs_video->video_binary.internal_frame_info;
 		/*The ref frame expects
diff --git a/drivers/staging/rts5208/rtsx_scsi.c b/drivers/staging/rts5208/rtsx_scsi.c
index 1deb74112ad4..672248be7bf3 100644
--- a/drivers/staging/rts5208/rtsx_scsi.c
+++ b/drivers/staging/rts5208/rtsx_scsi.c
@@ -569,12 +569,10 @@ static int start_stop_unit(struct scsi_cmnd *srb, struct rtsx_chip *chip)
 	case LOAD_MEDIUM:
 		if (check_card_ready(chip, lun))
 			return TRANSPORT_GOOD;
 		set_sense_type(chip, lun, SENSE_TYPE_MEDIA_NOT_PRESENT);
 		return TRANSPORT_FAILED;
-
-		break;
 	}
 
 	return TRANSPORT_ERROR;
 }
 
diff --git a/drivers/staging/vme/devices/vme_user.c b/drivers/staging/vme/devices/vme_user.c
index fd0ea4dbcb91..7c7d3858e6ca 100644
--- a/drivers/staging/vme/devices/vme_user.c
+++ b/drivers/staging/vme/devices/vme_user.c
@@ -355,12 +355,10 @@ static int vme_user_ioctl(struct inode *inode, struct file *file,
 			 *	to userspace as they are
 			 */
 			return vme_master_set(image[minor].resource,
 				master.enable, master.vme_addr, master.size,
 				master.aspace, master.cycle, master.dwidth);
-
-			break;
 		}
 		break;
 	case SLAVE_MINOR:
 		switch (cmd) {
 		case VME_GET_SLAVE:
@@ -396,12 +394,10 @@ static int vme_user_ioctl(struct inode *inode, struct file *file,
 			 */
 			return vme_slave_set(image[minor].resource,
 				slave.enable, slave.vme_addr, slave.size,
 				image[minor].pci_buf, slave.aspace,
 				slave.cycle);
-
-			break;
 		}
 		break;
 	}
 
 	return -EINVAL;
diff --git a/drivers/tty/nozomi.c b/drivers/tty/nozomi.c
index d42b854cb7df..861e95043191 100644
--- a/drivers/tty/nozomi.c
+++ b/drivers/tty/nozomi.c
@@ -412,15 +412,13 @@ static void read_mem32(u32 *buf, const void __iomem *mem_addr_start,
 	switch (size_bytes) {
 	case 2:	/* 2 bytes */
 		buf16 = (u16 *) buf;
 		*buf16 = __le16_to_cpu(readw(ptr));
 		goto out;
-		break;
 	case 4:	/* 4 bytes */
 		*(buf) = __le32_to_cpu(readl(ptr));
 		goto out;
-		break;
 	}
 
 	while (i < size_bytes) {
 		if (size_bytes - i == 2) {
 			/* Handle 2 bytes in the end */
@@ -458,19 +456,18 @@ static u32 write_mem32(void __iomem *mem_addr_start, const u32 *buf,
 	switch (size_bytes) {
 	case 2:	/* 2 bytes */
 		buf16 = (const u16 *)buf;
 		writew(__cpu_to_le16(*buf16), ptr);
 		return 2;
-		break;
 	case 1: /*
 		 * also needs to write 4 bytes in this case
 		 * so falling through..
 		 */
+		fallthrough;
 	case 4: /* 4 bytes */
 		writel(__cpu_to_le32(*buf), ptr);
 		return 4;
-		break;
 	}
 
 	while (i < size_bytes) {
 		if (size_bytes - i == 2) {
 			/* 2 bytes */
diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
index 1731d9728865..09703079db7b 100644
--- a/drivers/tty/serial/imx.c
+++ b/drivers/tty/serial/imx.c
@@ -318,31 +318,26 @@ static void imx_uart_writel(struct imx_port *sport, u32 val, u32 offset)
 static u32 imx_uart_readl(struct imx_port *sport, u32 offset)
 {
 	switch (offset) {
 	case UCR1:
 		return sport->ucr1;
-		break;
 	case UCR2:
 		/*
 		 * UCR2_SRST is the only bit in the cached registers that might
 		 * differ from the value that was last written. As it only
 		 * automatically becomes one after being cleared, reread
 		 * conditionally.
 		 */
 		if (!(sport->ucr2 & UCR2_SRST))
 			sport->ucr2 = readl(sport->port.membase + offset);
 		return sport->ucr2;
-		break;
 	case UCR3:
 		return sport->ucr3;
-		break;
 	case UCR4:
 		return sport->ucr4;
-		break;
 	case UFCR:
 		return sport->ufcr;
-		break;
 	default:
 		return readl(sport->port.membase + offset);
 	}
 }
 
diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
index 1125f4715830..5204769834d1 100644
--- a/drivers/usb/gadget/function/f_hid.c
+++ b/drivers/usb/gadget/function/f_hid.c
@@ -509,27 +509,23 @@ static int hidg_setup(struct usb_function *f,
 		VDBG(cdev, "get_report\n");
 
 		/* send an empty report */
 		length = min_t(unsigned, length, hidg->report_length);
 		memset(req->buf, 0x0, length);
-
 		goto respond;
-		break;
 
 	case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
 		  | HID_REQ_GET_PROTOCOL):
 		VDBG(cdev, "get_protocol\n");
 		length = min_t(unsigned int, length, 1);
 		((u8 *) req->buf)[0] = hidg->protocol;
 		goto respond;
-		break;
 
 	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
 		  | HID_REQ_SET_REPORT):
 		VDBG(cdev, "set_report | wLength=%d\n", ctrl->wLength);
 		goto stall;
-		break;
 
 	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
 		  | HID_REQ_SET_PROTOCOL):
 		VDBG(cdev, "set_protocol\n");
 		if (value > HID_REPORT_PROTOCOL)
@@ -542,11 +538,10 @@ static int hidg_setup(struct usb_function *f,
 		if (hidg->bInterfaceSubClass == USB_INTERFACE_SUBCLASS_BOOT) {
 			hidg->protocol = value;
 			goto respond;
 		}
 		goto stall;
-		break;
 
 	case ((USB_DIR_IN | USB_TYPE_STANDARD | USB_RECIP_INTERFACE) << 8
 		  | USB_REQ_GET_DESCRIPTOR):
 		switch (value >> 8) {
 		case HID_DT_HID:
@@ -560,33 +555,29 @@ static int hidg_setup(struct usb_function *f,
 
 			length = min_t(unsigned short, length,
 						   hidg_desc_copy.bLength);
 			memcpy(req->buf, &hidg_desc_copy, length);
 			goto respond;
-			break;
 		}
 		case HID_DT_REPORT:
 			VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: REPORT\n");
 			length = min_t(unsigned short, length,
 						   hidg->report_desc_length);
 			memcpy(req->buf, hidg->report_desc, length);
 			goto respond;
-			break;
 
 		default:
 			VDBG(cdev, "Unknown descriptor request 0x%x\n",
 				 value >> 8);
 			goto stall;
-			break;
 		}
 		break;
 
 	default:
 		VDBG(cdev, "Unknown request 0x%x\n",
 			 ctrl->bRequest);
 		goto stall;
-		break;
 	}
 
 stall:
 	return -EOPNOTSUPP;
 
diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
index fe405cd38dbc..b46ef45c4d25 100644
--- a/drivers/usb/host/xhci-mem.c
+++ b/drivers/usb/host/xhci-mem.c
@@ -1142,11 +1142,10 @@ int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *ud
 		max_packets = MAX_PACKET(8);
 		break;
 	case USB_SPEED_WIRELESS:
 		xhci_dbg(xhci, "FIXME xHCI doesn't support wireless speeds\n");
 		return -EINVAL;
-		break;
 	default:
 		/* Speed was set earlier, this shouldn't happen. */
 		return -EINVAL;
 	}
 	/* Find the root hub port this device is under */
diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
index 70ec29681526..efbd317f2f25 100644
--- a/drivers/usb/misc/iowarrior.c
+++ b/drivers/usb/misc/iowarrior.c
@@ -382,11 +382,10 @@ static ssize_t iowarrior_write(struct file *file,
 			goto exit;
 		}
 		retval = usb_set_report(dev->interface, 2, 0, buf, count);
 		kfree(buf);
 		goto exit;
-		break;
 	case USB_DEVICE_ID_CODEMERCS_IOW56:
 	case USB_DEVICE_ID_CODEMERCS_IOW56AM:
 	case USB_DEVICE_ID_CODEMERCS_IOW28:
 	case USB_DEVICE_ID_CODEMERCS_IOW28L:
 	case USB_DEVICE_ID_CODEMERCS_IOW100:
@@ -452,18 +451,16 @@ static ssize_t iowarrior_write(struct file *file,
 		}
 		/* submit was ok */
 		retval = count;
 		usb_free_urb(int_out_urb);
 		goto exit;
-		break;
 	default:
 		/* what do we have here ? An unsupported Product-ID ? */
 		dev_err(&dev->interface->dev, "%s - not supported for product=0x%x\n",
 			__func__, dev->product_id);
 		retval = -EFAULT;
 		goto exit;
-		break;
 	}
 error:
 	usb_free_coherent(dev->udev, dev->report_size, buf,
 			  int_out_urb->transfer_dma);
 error_no_buffer:
diff --git a/drivers/usb/serial/iuu_phoenix.c b/drivers/usb/serial/iuu_phoenix.c
index b4ba79123d9d..f1201d4de297 100644
--- a/drivers/usb/serial/iuu_phoenix.c
+++ b/drivers/usb/serial/iuu_phoenix.c
@@ -848,11 +848,10 @@ static int iuu_uart_baud(struct usb_serial_port *port, u32 baud_base,
 		dataout[DataCount++] = 0x04;
 		break;
 	default:
 		kfree(dataout);
 		return IUU_INVALID_PARAMETER;
-		break;
 	}
 
 	switch (parity & 0xF0) {
 	case IUU_ONE_STOP_BIT:
 		dataout[DataCount - 1] |= IUU_ONE_STOP_BIT;
@@ -862,11 +861,10 @@ static int iuu_uart_baud(struct usb_serial_port *port, u32 baud_base,
 		dataout[DataCount - 1] |= IUU_TWO_STOP_BITS;
 		break;
 	default:
 		kfree(dataout);
 		return IUU_INVALID_PARAMETER;
-		break;
 	}
 
 	status = bulk_immediate(port, dataout, DataCount);
 	if (status != IUU_OPERATION_OK)
 		dev_dbg(&port->dev, "%s - uart_off error\n", __func__);
diff --git a/drivers/usb/storage/freecom.c b/drivers/usb/storage/freecom.c
index 3d5f7d0ff0f1..2b098b55c4cb 100644
--- a/drivers/usb/storage/freecom.c
+++ b/drivers/usb/storage/freecom.c
@@ -429,11 +429,10 @@ static int freecom_transport(struct scsi_cmnd *srb, struct us_data *us)
 		/* should never hit here -- filtered in usb.c */
 		usb_stor_dbg(us, "freecom unimplemented direction: %d\n",
 			     us->srb->sc_data_direction);
 		/* Return fail, SCSI seems to handle this better. */
 		return USB_STOR_TRANSPORT_FAILED;
-		break;
 	}
 
 	return USB_STOR_TRANSPORT_GOOD;
 }
 
diff --git a/drivers/vme/bridges/vme_tsi148.c b/drivers/vme/bridges/vme_tsi148.c
index 50ae26977a02..1227ea937059 100644
--- a/drivers/vme/bridges/vme_tsi148.c
+++ b/drivers/vme/bridges/vme_tsi148.c
@@ -504,11 +504,10 @@ static int tsi148_slave_set(struct vme_slave_resource *image, int enabled,
 		addr |= TSI148_LCSR_ITAT_AS_A64;
 		break;
 	default:
 		dev_err(tsi148_bridge->parent, "Invalid address space\n");
 		return -EINVAL;
-		break;
 	}
 
 	/* Convert 64-bit variables to 2x 32-bit variables */
 	reg_split(vme_base, &vme_base_high, &vme_base_low);
 
@@ -993,11 +992,10 @@ static int tsi148_master_set(struct vme_master_resource *image, int enabled,
 	default:
 		spin_unlock(&image->lock);
 		dev_err(tsi148_bridge->parent, "Invalid address space\n");
 		retval = -EINVAL;
 		goto err_aspace;
-		break;
 	}
 
 	temp_ctl &= ~(3<<4);
 	if (cycle & VME_SUPER)
 		temp_ctl |= TSI148_LCSR_OTAT_SUP;
@@ -1501,11 +1499,10 @@ static int tsi148_dma_set_vme_src_attributes(struct device *dev, __be32 *attr,
 		val |= TSI148_LCSR_DSAT_AMODE_USER4;
 		break;
 	default:
 		dev_err(dev, "Invalid address space\n");
 		return -EINVAL;
-		break;
 	}
 
 	if (cycle & VME_SUPER)
 		val |= TSI148_LCSR_DSAT_SUP;
 	if (cycle & VME_PROG)
@@ -1601,11 +1598,10 @@ static int tsi148_dma_set_vme_dest_attributes(struct device *dev, __be32 *attr,
 		val |= TSI148_LCSR_DDAT_AMODE_USER4;
 		break;
 	default:
 		dev_err(dev, "Invalid address space\n");
 		return -EINVAL;
-		break;
 	}
 
 	if (cycle & VME_SUPER)
 		val |= TSI148_LCSR_DDAT_SUP;
 	if (cycle & VME_PROG)
@@ -1699,11 +1695,10 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
 		break;
 	default:
 		dev_err(tsi148_bridge->parent, "Invalid source type\n");
 		retval = -EINVAL;
 		goto err_source;
-		break;
 	}
 
 	/* Assume last link - this will be over-written by adding another */
 	entry->descriptor.dnlau = cpu_to_be32(0);
 	entry->descriptor.dnlal = cpu_to_be32(TSI148_LCSR_DNLAL_LLA);
@@ -1736,11 +1731,10 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
 		break;
 	default:
 		dev_err(tsi148_bridge->parent, "Invalid destination type\n");
 		retval = -EINVAL;
 		goto err_dest;
-		break;
 	}
 
 	/* Fill out count */
 	entry->descriptor.dcnt = cpu_to_be32((u32)count);
 
@@ -1962,11 +1956,10 @@ static int tsi148_lm_set(struct vme_lm_resource *lm, unsigned long long lm_base,
 		break;
 	default:
 		mutex_unlock(&lm->mtx);
 		dev_err(tsi148_bridge->parent, "Invalid address space\n");
 		return -EINVAL;
-		break;
 	}
 
 	if (cycle & VME_SUPER)
 		lm_ctl |= TSI148_LCSR_LMAT_SUPR ;
 	if (cycle & VME_USER)
diff --git a/drivers/vme/vme.c b/drivers/vme/vme.c
index b398293980b6..e1a940e43327 100644
--- a/drivers/vme/vme.c
+++ b/drivers/vme/vme.c
@@ -50,27 +50,22 @@ static struct vme_bridge *find_bridge(struct vme_resource *resource)
 	/* Get list to search */
 	switch (resource->type) {
 	case VME_MASTER:
 		return list_entry(resource->entry, struct vme_master_resource,
 			list)->parent;
-		break;
 	case VME_SLAVE:
 		return list_entry(resource->entry, struct vme_slave_resource,
 			list)->parent;
-		break;
 	case VME_DMA:
 		return list_entry(resource->entry, struct vme_dma_resource,
 			list)->parent;
-		break;
 	case VME_LM:
 		return list_entry(resource->entry, struct vme_lm_resource,
 			list)->parent;
-		break;
 	default:
 		printk(KERN_ERR "Unknown resource type\n");
 		return NULL;
-		break;
 	}
 }
 
 /**
  * vme_free_consistent - Allocate contiguous memory.
@@ -177,26 +172,22 @@ size_t vme_get_size(struct vme_resource *resource)
 			&aspace, &cycle, &dwidth);
 		if (retval)
 			return 0;
 
 		return size;
-		break;
 	case VME_SLAVE:
 		retval = vme_slave_get(resource, &enabled, &base, &size,
 			&buf_base, &aspace, &cycle);
 		if (retval)
 			return 0;
 
 		return size;
-		break;
 	case VME_DMA:
 		return 0;
-		break;
 	default:
 		printk(KERN_ERR "Unknown resource type\n");
 		return 0;
-		break;
 	}
 }
 EXPORT_SYMBOL(vme_get_size);
 
 int vme_check_window(u32 aspace, unsigned long long vme_base,
diff --git a/drivers/watchdog/geodewdt.c b/drivers/watchdog/geodewdt.c
index 83418924e30a..0b699c783d57 100644
--- a/drivers/watchdog/geodewdt.c
+++ b/drivers/watchdog/geodewdt.c
@@ -148,12 +148,10 @@ static long geodewdt_ioctl(struct file *file, unsigned int cmd,
 
 	switch (cmd) {
 	case WDIOC_GETSUPPORT:
 		return copy_to_user(argp, &ident,
 				    sizeof(ident)) ? -EFAULT : 0;
-		break;
-
 	case WDIOC_GETSTATUS:
 	case WDIOC_GETBOOTSTATUS:
 		return put_user(0, p);
 
 	case WDIOC_SETOPTIONS:
diff --git a/fs/efs/inode.c b/fs/efs/inode.c
index 89e73a6f0d36..64f3a54a0f72 100644
--- a/fs/efs/inode.c
+++ b/fs/efs/inode.c
@@ -161,11 +161,10 @@ struct inode *efs_iget(struct super_block *super, unsigned long ino)
 			init_special_inode(inode, inode->i_mode, device);
 			break;
 		default:
 			pr_warn("unsupported inode mode %o\n", inode->i_mode);
 			goto read_inode_error;
-			break;
 	}
 
 	unlock_new_inode(inode);
 	return inode;
         
diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
index 79a231719460..3bd8119bed5e 100644
--- a/fs/ocfs2/cluster/tcp.c
+++ b/fs/ocfs2/cluster/tcp.c
@@ -1196,11 +1196,10 @@ static int o2net_process_message(struct o2net_sock_container *sc,
 			break;
 		default:
 			msglog(hdr, "bad magic\n");
 			ret = -EINVAL;
 			goto out;
-			break;
 	}
 
 	/* find a handler for it */
 	handler_status = 0;
 	nmh = o2net_handler_get(be16_to_cpu(hdr->msg_type),
diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
index 1110ecd7d1f3..8f50c9c19f1b 100644
--- a/kernel/bpf/syscall.c
+++ b/kernel/bpf/syscall.c
@@ -2911,11 +2911,10 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
 {
 	switch (attach_type) {
 	case BPF_CGROUP_INET_INGRESS:
 	case BPF_CGROUP_INET_EGRESS:
 		return BPF_PROG_TYPE_CGROUP_SKB;
-		break;
 	case BPF_CGROUP_INET_SOCK_CREATE:
 	case BPF_CGROUP_INET_SOCK_RELEASE:
 	case BPF_CGROUP_INET4_POST_BIND:
 	case BPF_CGROUP_INET6_POST_BIND:
 		return BPF_PROG_TYPE_CGROUP_SOCK;
diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
index 3dd8c2e4314e..f400a6122b3c 100644
--- a/security/integrity/ima/ima_appraise.c
+++ b/security/integrity/ima/ima_appraise.c
@@ -179,11 +179,10 @@ enum hash_algo ima_get_hash_algo(struct evm_ima_xattr_data *xattr_value,
 	case EVM_IMA_XATTR_DIGSIG:
 		sig = (typeof(sig))xattr_value;
 		if (sig->version != 2 || xattr_len <= sizeof(*sig))
 			return ima_hash_algo;
 		return sig->hash_algo;
-		break;
 	case IMA_XATTR_DIGEST_NG:
 		/* first byte contains algorithm id */
 		ret = xattr_value->data[0];
 		if (ret < HASH_ALGO__LAST)
 			return ret;
diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
index b9fe02e5f84f..eddc9477d42a 100644
--- a/security/keys/trusted-keys/trusted_tpm1.c
+++ b/security/keys/trusted-keys/trusted_tpm1.c
@@ -899,11 +899,10 @@ static int datablob_parse(char *datablob, struct trusted_key_payload *p,
 			return ret;
 		ret = Opt_update;
 		break;
 	case Opt_err:
 		return -EINVAL;
-		break;
 	}
 	return ret;
 }
 
 static struct trusted_key_options *trusted_options_alloc(void)
diff --git a/security/safesetid/lsm.c b/security/safesetid/lsm.c
index 8a176b6adbe5..1079c6d54784 100644
--- a/security/safesetid/lsm.c
+++ b/security/safesetid/lsm.c
@@ -123,11 +123,10 @@ static int safesetid_security_capable(const struct cred *cred,
 		 * set*uid() (e.g. setting up userns uid mappings).
 		 */
 		pr_warn("Operation requires CAP_SETUID, which is not available to UID %u for operations besides approved set*uid transitions\n",
 			__kuid_val(cred->uid));
 		return -EPERM;
-		break;
 	case CAP_SETGID:
 		/*
 		* If no policy applies to this task, allow the use of CAP_SETGID for
 		* other purposes.
 		*/
@@ -138,15 +137,13 @@ static int safesetid_security_capable(const struct cred *cred,
 		 * set*gid() (e.g. setting up userns gid mappings).
 		 */
 		pr_warn("Operation requires CAP_SETGID, which is not available to GID %u for operations besides approved set*gid transitions\n",
 			__kuid_val(cred->uid));
 		return -EPERM;
-		break;
 	default:
 		/* Error, the only capabilities were checking for is CAP_SETUID/GID */
 		return 0;
-		break;
 	}
 	return 0;
 }
 
 /*
diff --git a/sound/pci/rme32.c b/sound/pci/rme32.c
index 869af8a32c98..4eabece4dcba 100644
--- a/sound/pci/rme32.c
+++ b/sound/pci/rme32.c
@@ -466,11 +466,10 @@ static int snd_rme32_capture_getrate(struct rme32 * rme32, int *is_adat)
 			return 44100;
 		case 7:
 			return 32000;
 		default:
 			return -1;
-			break;
 		} 
 	else
 		switch (n) {	/* supporting the CS8412 */
 		case 0:
 			return -1;
diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c
index 4a1f576dd9cf..3382c069fd3d 100644
--- a/sound/pci/rme9652/hdspm.c
+++ b/sound/pci/rme9652/hdspm.c
@@ -2284,11 +2284,10 @@ static int hdspm_get_wc_sample_rate(struct hdspm *hdspm)
 	switch (hdspm->io_type) {
 	case RayDAT:
 	case AIO:
 		status = hdspm_read(hdspm, HDSPM_RD_STATUS_1);
 		return (status >> 16) & 0xF;
-		break;
 	case AES32:
 		status = hdspm_read(hdspm, HDSPM_statusRegister);
 		return (status >> HDSPM_AES32_wcFreq_bit) & 0xF;
 	default:
 		break;
@@ -2310,11 +2309,10 @@ static int hdspm_get_tco_sample_rate(struct hdspm *hdspm)
 		switch (hdspm->io_type) {
 		case RayDAT:
 		case AIO:
 			status = hdspm_read(hdspm, HDSPM_RD_STATUS_1);
 			return (status >> 20) & 0xF;
-			break;
 		case AES32:
 			status = hdspm_read(hdspm, HDSPM_statusRegister);
 			return (status >> 1) & 0xF;
 		default:
 			break;
@@ -2336,11 +2334,10 @@ static int hdspm_get_sync_in_sample_rate(struct hdspm *hdspm)
 		switch (hdspm->io_type) {
 		case RayDAT:
 		case AIO:
 			status = hdspm_read(hdspm, HDSPM_RD_STATUS_2);
 			return (status >> 12) & 0xF;
-			break;
 		default:
 			break;
 		}
 	}
 
@@ -2356,11 +2353,10 @@ static int hdspm_get_aes_sample_rate(struct hdspm *hdspm, int index)
 
 	switch (hdspm->io_type) {
 	case AES32:
 		timecode = hdspm_read(hdspm, HDSPM_timecodeRegister);
 		return (timecode >> (4*index)) & 0xF;
-		break;
 	default:
 		break;
 	}
 	return 0;
 }
@@ -3843,22 +3839,20 @@ static int hdspm_wc_sync_check(struct hdspm *hdspm)
 				return 2;
 			else
 				return 1;
 		}
 		return 0;
-		break;
 
 	case MADI:
 		status2 = hdspm_read(hdspm, HDSPM_statusRegister2);
 		if (status2 & HDSPM_wcLock) {
 			if (status2 & HDSPM_wcSync)
 				return 2;
 			else
 				return 1;
 		}
 		return 0;
-		break;
 
 	case RayDAT:
 	case AIO:
 		status = hdspm_read(hdspm, HDSPM_statusRegister);
 
@@ -3866,12 +3860,10 @@ static int hdspm_wc_sync_check(struct hdspm *hdspm)
 			return 2;
 		else if (status & 0x1000000)
 			return 1;
 		return 0;
 
-		break;
-
 	case MADIface:
 		break;
 	}
 
 
diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c
index 7ab10028d9fa..012fbec5e6a7 100644
--- a/sound/pci/rme9652/rme9652.c
+++ b/sound/pci/rme9652/rme9652.c
@@ -730,38 +730,31 @@ static inline int rme9652_spdif_sample_rate(struct snd_rme9652 *s)
 	rate_bits = rme9652_read(s, RME9652_status_register) & RME9652_F;
 
 	switch (rme9652_decode_spdif_rate(rate_bits)) {
 	case 0x7:
 		return 32000;
-		break;
 
 	case 0x6:
 		return 44100;
-		break;
 
 	case 0x5:
 		return 48000;
-		break;
 
 	case 0x4:
 		return 88200;
-		break;
 
 	case 0x3:
 		return 96000;
-		break;
 
 	case 0x0:
 		return 64000;
-		break;
 
 	default:
 		dev_err(s->card->dev,
 			"%s: unknown S/PDIF input rate (bits = 0x%x)\n",
 			   s->card_name, rate_bits);
 		return 0;
-		break;
 	}
 }
 
 /*-----------------------------------------------------------------------------
   Control Interface
diff --git a/sound/soc/codecs/wcd-clsh-v2.c b/sound/soc/codecs/wcd-clsh-v2.c
index 1be82113c59a..817d8259758c 100644
--- a/sound/soc/codecs/wcd-clsh-v2.c
+++ b/sound/soc/codecs/wcd-clsh-v2.c
@@ -478,11 +478,10 @@ static int _wcd_clsh_ctrl_set_state(struct wcd_clsh_ctrl *ctrl, int req_state,
 		wcd_clsh_state_hph_l(ctrl, req_state, is_enable, mode);
 		break;
 	case WCD_CLSH_STATE_HPHR:
 		wcd_clsh_state_hph_r(ctrl, req_state, is_enable, mode);
 		break;
-		break;
 	case WCD_CLSH_STATE_LO:
 		wcd_clsh_state_lo(ctrl, req_state, is_enable, mode);
 		break;
 	default:
 		break;
diff --git a/sound/soc/codecs/wl1273.c b/sound/soc/codecs/wl1273.c
index c56b9329240f..d8ced4559bf2 100644
--- a/sound/soc/codecs/wl1273.c
+++ b/sound/soc/codecs/wl1273.c
@@ -309,11 +309,10 @@ static int wl1273_startup(struct snd_pcm_substream *substream,
 			return -EINVAL;
 		}
 		break;
 	default:
 		return -EINVAL;
-		break;
 	}
 
 	return 0;
 }
 
diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
index bbe8d782e0af..b1ca64d2f7ea 100644
--- a/sound/soc/intel/skylake/skl-pcm.c
+++ b/sound/soc/intel/skylake/skl-pcm.c
@@ -500,11 +500,10 @@ static int skl_pcm_trigger(struct snd_pcm_substream *substream, int cmd,
 		 */
 		ret = skl_decoupled_trigger(substream, cmd);
 		if (ret < 0)
 			return ret;
 		return skl_run_pipe(skl, mconfig->pipe);
-		break;
 
 	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
 	case SNDRV_PCM_TRIGGER_SUSPEND:
 	case SNDRV_PCM_TRIGGER_STOP:
 		/*
diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
index a6b72ad53b43..2d85cc4c67fb 100644
--- a/sound/soc/ti/davinci-mcasp.c
+++ b/sound/soc/ti/davinci-mcasp.c
@@ -2383,11 +2383,10 @@ static int davinci_mcasp_probe(struct platform_device *pdev)
 		break;
 	default:
 		dev_err(&pdev->dev, "No DMA controller found (%d)\n", ret);
 	case -EPROBE_DEFER:
 		goto err;
-		break;
 	}
 
 	if (ret) {
 		dev_err(&pdev->dev, "register PCM failed: %d\n", ret);
 		goto err;



From xen-devel-bounces@lists.xenproject.org Sun Oct 18 05:43:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 05:43:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8469.22606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU1Tl-0005Ib-RJ; Sun, 18 Oct 2020 05:43:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8469.22606; Sun, 18 Oct 2020 05:43:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU1Tl-0005IU-OV; Sun, 18 Oct 2020 05:43:37 +0000
Received: by outflank-mailman (input) for mailman id 8469;
 Sun, 18 Oct 2020 05:43:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KX+1=DZ=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1kU1Tl-0005IP-09
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 05:43:37 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f8e388b-a6f7-4a3f-b02f-3bb5d772b61b;
 Sun, 18 Oct 2020 05:43:36 +0000 (UTC)
Received: from localhost (83-86-74-64.cable.dynamic.v4.ziggo.nl [83.86.74.64])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 781412080D;
 Sun, 18 Oct 2020 05:43:34 +0000 (UTC)
X-Inumbo-ID: 5f8e388b-a6f7-4a3f-b02f-3bb5d772b61b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1602999815;
	bh=xQx4tam510oViG0aS6IAIECEgE6lGctfYht63oAFTjQ=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=w5E1+nGVmBfju9sqafA7RNAyWyGXNV3glWIi0a1uVAyl5PpHYhnZwSlrCnlNiRfzs
	 cfV4uO4ZitYRPtno4rRqk/s62SY+Vkk0iEk8aerfp+YXmT6bAocVlLg03Vxy6aXm6p
	 C7/g4QRFCPJ09/zXowenG6SzN/MQpYZKW+T/2i1s=
Date: Sun, 18 Oct 2020 07:43:32 +0200
From: Greg KH <gregkh@linuxfoundation.org>
To: trix@redhat.com
Cc: linux-kernel@vger.kernel.org, linux-edac@vger.kernel.org,
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
	openipmi-developer@lists.sourceforge.net,
	linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-power@fi.rohmeurope.com, linux-gpio@vger.kernel.org,
	amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org, linux-iio@vger.kernel.org,
	linux-amlogic@lists.infradead.org,
	industrypack-devel@lists.sourceforge.net,
	linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com,
	linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-can@vger.kernel.org, netdev@vger.kernel.org,
	intel-wired-lan@lists.osuosl.org, ath10k@lists.infradead.org,
	linux-wireless@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com, linux-nfc@lists.01.org,
	linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org,
	linux-samsung-soc@vger.kernel.org,
	platform-driver-x86@vger.kernel.org, patches@opensource.cirrus.com,
	storagedev@microchip.com, devel@driverdev.osuosl.org,
	linux-serial@vger.kernel.org, linux-usb@vger.kernel.org,
	usb-storage@lists.one-eyed-alien.net,
	linux-watchdog@vger.kernel.org, ocfs2-devel@oss.oracle.com,
	bpf@vger.kernel.org, linux-integrity@vger.kernel.org,
	linux-security-module@vger.kernel.org, keyrings@vger.kernel.org,
	alsa-devel@alsa-project.org, clang-built-linux@googlegroups.com
Subject: Re: [RFC] treewide: cleanup unreachable breaks
Message-ID: <20201018054332.GB593954@kroah.com>
References: <20201017160928.12698-1-trix@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201017160928.12698-1-trix@redhat.com>

On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> From: Tom Rix <trix@redhat.com>
> 
> This is an upcoming change to clean up a new warning treewide.
> I am wondering whether the change should go in as one mega patch (see
> below), as a normal patch per file (about 100 patches), or somewhere
> half way by collecting early acks.

Please break it up into one-patch-per-subsystem, like normal, and get it
merged that way.

Sending us a patch, without even a diffstat to review, isn't going to
get you very far...

thanks,

greg k-h


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 06:32:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 06:32:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8477.22621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU2Ek-0001Cd-PA; Sun, 18 Oct 2020 06:32:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8477.22621; Sun, 18 Oct 2020 06:32:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU2Ek-0001CW-ME; Sun, 18 Oct 2020 06:32:10 +0000
Received: by outflank-mailman (input) for mailman id 8477;
 Sun, 18 Oct 2020 06:32:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kU2Ej-0001By-3l
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 06:32:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e109d3b-ab51-465c-a2ad-85468bb7ee01;
 Sun, 18 Oct 2020 06:32:01 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU2Eb-0002Tt-If; Sun, 18 Oct 2020 06:32:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU2Eb-0006Ti-8g; Sun, 18 Oct 2020 06:32:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kU2Eb-0004oQ-8D; Sun, 18 Oct 2020 06:32:01 +0000
X-Inumbo-ID: 2e109d3b-ab51-465c-a2ad-85468bb7ee01
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CDzgMBfs3XQHSAKG+HkLORycZbB3qSg8wbISeG+LrBo=; b=DRnYlXRhR2BzKwovbnbht2Ugsr
	vDmdMYBYMN/GAffd0Axw/14S7mI8EVAoe+en3kAJprS/2pz7NFT55rL0C0GP+Dtzv1Rt5DAdkyvtq
	+EZ/p7Ck8fCWWFn2p/NTmIcDNyPq337JgqpoE/m5LfMbGq1nU6vjApe1PS1++BtA21Fc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155929-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155929: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=071a0578b0ce0b0e543d1e38ee6926b9cc21c198
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 06:32:01 +0000

flight 155929 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155929/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 12 debian-install           fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                071a0578b0ce0b0e543d1e38ee6926b9cc21c198
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   78 days
Failing since        152366  2020-08-01 20:49:34 Z   77 days  131 attempts
Testing same since   155929  2020-10-17 12:27:59 Z    0 days    1 attempts

------------------------------------------------------------
3179 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 574133 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 07:35:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 07:35:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8484.22646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU3Ds-0006PJ-LB; Sun, 18 Oct 2020 07:35:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8484.22646; Sun, 18 Oct 2020 07:35:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU3Ds-0006PC-Hk; Sun, 18 Oct 2020 07:35:20 +0000
Received: by outflank-mailman (input) for mailman id 8484;
 Sun, 18 Oct 2020 07:35:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kU3Dq-0006P7-IF
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 07:35:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6787828-da4c-4259-884f-2af5ef9a6ac0;
 Sun, 18 Oct 2020 07:35:15 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU3Dm-0003kX-SC; Sun, 18 Oct 2020 07:35:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU3Dm-0001Es-KQ; Sun, 18 Oct 2020 07:35:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kU3Dm-00039h-Js; Sun, 18 Oct 2020 07:35:14 +0000
X-Inumbo-ID: e6787828-da4c-4259-884f-2af5ef9a6ac0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+AKSYjEmTU6DUFDuVXQz+AmPJbpaB9U7wLX/GX69Uik=; b=lN6eGUKw9wZxA17cpVebTJGQBF
	/FDfgYqHWdSS21Vx5cewtuOcP62DMudjkpLMTZNsO4qtDCd2GUDGI01sSXL79KSZLRUMqlhvyHnda
	EhRVH2ILFtGMZmLLhdvmt26zB8o/mFHbLFhUmfzFaWYTAL1HrsB+hnR58A0tqiQ7ULNc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155931-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155931: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e12ce85b2c79d83a340953291912875c30b3af06
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 07:35:14 +0000

flight 155931 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155931/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e12ce85b2c79d83a340953291912875c30b3af06
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   58 days
Failing since        152659  2020-08-21 14:07:39 Z   57 days  100 attempts
Testing same since   155931  2020-10-17 13:40:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 46332 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 09:30:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 09:30:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8504.22679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU50l-000818-Ju; Sun, 18 Oct 2020 09:29:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8504.22679; Sun, 18 Oct 2020 09:29:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU50l-000811-GI; Sun, 18 Oct 2020 09:29:55 +0000
Received: by outflank-mailman (input) for mailman id 8504;
 Sun, 18 Oct 2020 09:29:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mvpY=DZ=redhat.com=hdegoede@srs-us1.protection.inumbo.net>)
 id 1kU50k-00080w-1v
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 09:29:54 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 52f7c925-5071-4a62-987f-bd104c4a7dd8;
 Sun, 18 Oct 2020 09:29:48 +0000 (UTC)
Received: from mail-ej1-f70.google.com (mail-ej1-f70.google.com
 [209.85.218.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-3-FNRymx7aNeG3ye7JMs9Uzw-1; Sun, 18 Oct 2020 05:29:45 -0400
Received: by mail-ej1-f70.google.com with SMTP id x12so3507218eju.22
 for <xen-devel@lists.xenproject.org>; Sun, 18 Oct 2020 02:29:45 -0700 (PDT)
Received: from x1.localdomain
 (2001-1c00-0c0c-fe00-d2ea-f29d-118b-24dc.cable.dynamic.v6.ziggo.nl.
 [2001:1c00:c0c:fe00:d2ea:f29d:118b:24dc])
 by smtp.gmail.com with ESMTPSA id r24sm6841834edm.95.2020.10.18.02.29.35
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 18 Oct 2020 02:29:36 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mvpY=DZ=redhat.com=hdegoede@srs-us1.protection.inumbo.net>)
	id 1kU50k-00080w-1v
	for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 09:29:54 +0000
X-Inumbo-ID: 52f7c925-5071-4a62-987f-bd104c4a7dd8
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 52f7c925-5071-4a62-987f-bd104c4a7dd8;
	Sun, 18 Oct 2020 09:29:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603013388;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kZop0SVIvzMKkpxp9YsI7L0veLsxRR0Ku7ccgrUU7qA=;
	b=Eh5R82VTNyQYXnfiCntLWN9+NuJfPqCqjFf7vJx32GR9RxqN+B4/jV2XFHiMrD5IO3dK++
	9r0nee8Is4DbdDm/Wsf2vxviVZUawi6YOXGtcQicgWFBoErms1VnjQvAgbQun640VHu7UD
	TwDoHzzp0743nCxnfHWuk1YLSTYLRfA=
Received: from mail-ej1-f70.google.com (mail-ej1-f70.google.com
 [209.85.218.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-3-FNRymx7aNeG3ye7JMs9Uzw-1; Sun, 18 Oct 2020 05:29:45 -0400
X-MC-Unique: FNRymx7aNeG3ye7JMs9Uzw-1
Received: by mail-ej1-f70.google.com with SMTP id x12so3507218eju.22
        for <xen-devel@lists.xenproject.org>; Sun, 18 Oct 2020 02:29:45 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=kZop0SVIvzMKkpxp9YsI7L0veLsxRR0Ku7ccgrUU7qA=;
        b=WjEdil0YR6q8x8lE9PLUcKS83v+LUDQ6HTrF9l7jxhHNxSTTFhG+H/nEV+F0HtSagh
         vnzZ+bqzFAmjnPeh87WXBWzyAW/GSPLmgu3kZ+wPlmFTMKu/5POqf1nPwJjzbk1N/Yel
         BIGQIzBQjAHnXCt+USgc9Cv2XgcKnrMG+GYqTUjutTNZEmIV2WyCoxxhxdLGeaFtQqnd
         YbeZsElEFWPw0Ov6zksFwxJi4QuViglzwy/G3s2SNC3j+Edv3yV6X1ulpcDKyWNFmqat
         1LnAphJz/o98qHxIOcU5oMATS7Bk0PunbhmSBl+WTOXGVVkfPoPOwbRHhEoBPDXXK6+n
         5MKQ==
X-Gm-Message-State: AOAM5302K8P4li/ZhDvcfJ7bHZ16/LDKE0nsaQWJROLlHnysYiH+dlGG
	pGfpzWjoISOD7UMpWXMHEwYYnF6zilA/A0JuHCUzlhkosRE/nMQH21plADHKUw0gyg2UFWBwZIo
	CevOmS0/z5LNJoUpv+5aRdIbewgg=
X-Received: by 2002:a05:6402:209:: with SMTP id t9mr13449894edv.208.1603013379925;
        Sun, 18 Oct 2020 02:29:39 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJz10LjHTwJL1+hihLIM5yzpH3slrKKX1/vUJuHuZm92ML741ZxF/3vI42v0Bi6VesflUbqj2g==
X-Received: by 2002:a05:6402:209:: with SMTP id t9mr13449734edv.208.1603013377239;
        Sun, 18 Oct 2020 02:29:37 -0700 (PDT)
Received: from x1.localdomain (2001-1c00-0c0c-fe00-d2ea-f29d-118b-24dc.cable.dynamic.v6.ziggo.nl. [2001:1c00:c0c:fe00:d2ea:f29d:118b:24dc])
        by smtp.gmail.com with ESMTPSA id r24sm6841834edm.95.2020.10.18.02.29.35
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Sun, 18 Oct 2020 02:29:36 -0700 (PDT)
Subject: Re: [RFC] treewide: cleanup unreachable breaks
To: trix@redhat.com, linux-kernel@vger.kernel.org
Cc: linux-edac@vger.kernel.org, linux-acpi@vger.kernel.org,
 linux-pm@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-block@vger.kernel.org, openipmi-developer@lists.sourceforge.net,
 linux-crypto@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-power@fi.rohmeurope.com, linux-gpio@vger.kernel.org,
 amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
 nouveau@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
 spice-devel@lists.freedesktop.org, linux-iio@vger.kernel.org,
 linux-amlogic@lists.infradead.org, industrypack-devel@lists.sourceforge.net,
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-can@vger.kernel.org, netdev@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, ath10k@lists.infradead.org,
 linux-wireless@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
 linux-nfc@lists.01.org, linux-nvdimm@lists.01.org,
 linux-pci@vger.kernel.org, linux-samsung-soc@vger.kernel.org,
 platform-driver-x86@vger.kernel.org, patches@opensource.cirrus.com,
 storagedev@microchip.com, devel@driverdev.osuosl.org,
 linux-serial@vger.kernel.org, linux-usb@vger.kernel.org,
 usb-storage@lists.one-eyed-alien.net, linux-watchdog@vger.kernel.org,
 ocfs2-devel@oss.oracle.com, bpf@vger.kernel.org,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org,
 keyrings@vger.kernel.org, alsa-devel@alsa-project.org,
 clang-built-linux@googlegroups.com
References: <20201017160928.12698-1-trix@redhat.com>
From: Hans de Goede <hdegoede@redhat.com>
Message-ID: <8c676751-a041-fbd2-b1cc-e0ae5d980f75@redhat.com>
Date: Sun, 18 Oct 2020 11:29:35 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201017160928.12698-1-trix@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=hdegoede@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Tom,

Quick self-intro: I have taken over drivers/platform/x86
maintainership from Andy.

On 10/17/20 6:09 PM, trix@redhat.com wrote:
> From: Tom Rix <trix@redhat.com>
> 
> This is an upcoming change to clean up a new warning treewide.
> I am wondering whether the change should be one mega patch (see below),
> a normal patch per file (about 100 patches), or somewhere in between,
> collecting early acks.
> early acks.
> 
> clang has a number of useful, new warnings see
> https://clang.llvm.org/docs/DiagnosticsReference.html
> 
> This change cleans up -Wunreachable-code-break
> https://clang.llvm.org/docs/DiagnosticsReference.html#wunreachable-code-break
> for 266 of 485 warnings in this week's linux-next, allyesconfig on x86_64.
> 
> The method of fixing was to look for warnings where the preceding
> statement was a simple statement that, by inspection, made the
> subsequent break unneeded. In order of frequency, these look like:
> 
> return and break
> 
>  	switch (c->x86_vendor) {
>  	case X86_VENDOR_INTEL:
>  		intel_p5_mcheck_init(c);
>  		return 1;
> -		break;
> 
> goto and break
> 
>  	default:
>  		operation = 0; /* make gcc happy */
>  		goto fail_response;
> -		break;
> 
> break and break
>  		case COLOR_SPACE_SRGB:
>  			/* by pass */
>  			REG_SET(OUTPUT_CSC_CONTROL, 0,
>  				OUTPUT_CSC_GRPH_MODE, 0);
>  			break;
> -			break;
> 
> The exception to the simple statement, is a switch case with a block
> and the end of block is a return
> 
>  			struct obj_buffer *buff = r->ptr;
>  			return scnprintf(str, PRIV_STR_SIZE,
>  					"size=%u\naddr=0x%X\n", buff->size,
>  					buff->addr);
>  		}
> -		break;
> 
> Not considered obvious, and therefore excluded, were breaks after:
> - multi-level switches
> - complicated if/else-if/else chains
> - panic() or similar calls
> 
> And there is an odd addition of a 'fallthrough' in drivers/tty/nozomi.c

As Greg already mentioned, please split this out into per-subsystem
patches.

I would be happy to take all the changes to drivers/platform/x86
code upstream as a single patch for all the files there.

Regards,

Hans


> ---
> 
> diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
> index 1c08cb9eb9f6..16ce86aed8e2 100644
> --- a/arch/x86/kernel/cpu/mce/core.c
> +++ b/arch/x86/kernel/cpu/mce/core.c
> @@ -1809,15 +1809,13 @@ static int __mcheck_cpu_ancient_init(struct cpuinfo_x86 *c)
>  
>  	switch (c->x86_vendor) {
>  	case X86_VENDOR_INTEL:
>  		intel_p5_mcheck_init(c);
>  		return 1;
> -		break;
>  	case X86_VENDOR_CENTAUR:
>  		winchip_mcheck_init(c);
>  		return 1;
> -		break;
>  	default:
>  		return 0;
>  	}
>  
>  	return 0;
> diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
> index 3f6b137ef4e6..3d4a48336084 100644
> --- a/arch/x86/kernel/cpu/microcode/amd.c
> +++ b/arch/x86/kernel/cpu/microcode/amd.c
> @@ -213,11 +213,10 @@ static unsigned int __verify_patch_size(u8 family, u32 sh_psize, size_t buf_size
>  		max_size = F14H_MPB_MAX_SIZE;
>  		break;
>  	default:
>  		WARN(1, "%s: WTF family: 0x%x\n", __func__, family);
>  		return 0;
> -		break;
>  	}
>  
>  	if (sh_psize > min_t(u32, buf_size, max_size))
>  		return 0;
>  
> diff --git a/drivers/acpi/utils.c b/drivers/acpi/utils.c
> index 838b719ec7ce..d5411a166685 100644
> --- a/drivers/acpi/utils.c
> +++ b/drivers/acpi/utils.c
> @@ -102,11 +102,10 @@ acpi_extract_package(union acpi_object *package,
>  				printk(KERN_WARNING PREFIX "Invalid package element"
>  					      " [%d]: got number, expecting"
>  					      " [%c]\n",
>  					      i, format_string[i]);
>  				return AE_BAD_DATA;
> -				break;
>  			}
>  			break;
>  
>  		case ACPI_TYPE_STRING:
>  		case ACPI_TYPE_BUFFER:
> @@ -127,11 +126,10 @@ acpi_extract_package(union acpi_object *package,
>  				printk(KERN_WARNING PREFIX "Invalid package element"
>  					      " [%d] got string/buffer,"
>  					      " expecting [%c]\n",
>  					      i, format_string[i]);
>  				return AE_BAD_DATA;
> -				break;
>  			}
>  			break;
>  		case ACPI_TYPE_LOCAL_REFERENCE:
>  			switch (format_string[i]) {
>  			case 'R':
> @@ -142,22 +140,20 @@ acpi_extract_package(union acpi_object *package,
>  				printk(KERN_WARNING PREFIX "Invalid package element"
>  					      " [%d] got reference,"
>  					      " expecting [%c]\n",
>  					      i, format_string[i]);
>  				return AE_BAD_DATA;
> -				break;
>  			}
>  			break;
>  
>  		case ACPI_TYPE_PACKAGE:
>  		default:
>  			ACPI_DEBUG_PRINT((ACPI_DB_INFO,
>  					  "Found unsupported element at index=%d\n",
>  					  i));
>  			/* TBD: handle nested packages... */
>  			return AE_SUPPORT;
> -			break;
>  		}
>  	}
>  
>  	/*
>  	 * Validate output buffer.
> diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
> index 205a06752ca9..c7ac49042cee 100644
> --- a/drivers/base/power/main.c
> +++ b/drivers/base/power/main.c
> @@ -361,11 +361,10 @@ static pm_callback_t pm_op(const struct dev_pm_ops *ops, pm_message_t state)
>  	case PM_EVENT_HIBERNATE:
>  		return ops->poweroff;
>  	case PM_EVENT_THAW:
>  	case PM_EVENT_RECOVER:
>  		return ops->thaw;
> -		break;
>  	case PM_EVENT_RESTORE:
>  		return ops->restore;
>  #endif /* CONFIG_HIBERNATE_CALLBACKS */
>  	}
>  
> diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
> index adfc9352351d..f769fbd1b4c4 100644
> --- a/drivers/block/xen-blkback/blkback.c
> +++ b/drivers/block/xen-blkback/blkback.c
> @@ -1267,11 +1267,10 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
>  		operation_flags = REQ_PREFLUSH;
>  		break;
>  	default:
>  		operation = 0; /* make gcc happy */
>  		goto fail_response;
> -		break;
>  	}
>  
>  	/* Check that the number of segments is sane. */
>  	nseg = req->operation == BLKIF_OP_INDIRECT ?
>  	       req->u.indirect.nr_segments : req->u.rw.nr_segments;
> diff --git a/drivers/char/ipmi/ipmi_devintf.c b/drivers/char/ipmi/ipmi_devintf.c
> index f7b1c004a12b..3dd1d5abb298 100644
> --- a/drivers/char/ipmi/ipmi_devintf.c
> +++ b/drivers/char/ipmi/ipmi_devintf.c
> @@ -488,11 +488,10 @@ static long ipmi_ioctl(struct file   *file,
>  			rv = -EFAULT;
>  			break;
>  		}
>  
>  		return ipmi_set_my_address(priv->user, val.channel, val.value);
> -		break;
>  	}
>  
>  	case IPMICTL_GET_MY_CHANNEL_ADDRESS_CMD:
>  	{
>  		struct ipmi_channel_lun_address_set val;
> diff --git a/drivers/char/lp.c b/drivers/char/lp.c
> index 0ec73917d8dd..862c2fd933c7 100644
> --- a/drivers/char/lp.c
> +++ b/drivers/char/lp.c
> @@ -620,11 +620,10 @@ static int lp_do_ioctl(unsigned int minor, unsigned int cmd,
>  		case LPWAIT:
>  			LP_WAIT(minor) = arg;
>  			break;
>  		case LPSETIRQ:
>  			return -EINVAL;
> -			break;
>  		case LPGETIRQ:
>  			if (copy_to_user(argp, &LP_IRQ(minor),
>  					sizeof(int)))
>  				return -EFAULT;
>  			break;
> diff --git a/drivers/char/mwave/mwavedd.c b/drivers/char/mwave/mwavedd.c
> index e43c876a9223..11272d605ecd 100644
> --- a/drivers/char/mwave/mwavedd.c
> +++ b/drivers/char/mwave/mwavedd.c
> @@ -401,11 +401,10 @@ static long mwave_ioctl(struct file *file, unsigned int iocmd,
>  		}
>  			break;
>  	
>  		default:
>  			return -ENOTTY;
> -			break;
>  	} /* switch */
>  
>  	PRINTK_2(TRACE_MWAVE, "mwavedd::mwave_ioctl, exit retval %x\n", retval);
>  
>  	return retval;
> diff --git a/drivers/crypto/atmel-sha.c b/drivers/crypto/atmel-sha.c
> index 75ccf41a7cb9..0eb6f54e3b66 100644
> --- a/drivers/crypto/atmel-sha.c
> +++ b/drivers/crypto/atmel-sha.c
> @@ -457,11 +457,10 @@ static int atmel_sha_init(struct ahash_request *req)
>  		ctx->flags |= SHA_FLAGS_SHA512;
>  		ctx->block_size = SHA512_BLOCK_SIZE;
>  		break;
>  	default:
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	ctx->bufcnt = 0;
>  	ctx->digcnt[0] = 0;
>  	ctx->digcnt[1] = 0;
> diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
> index fcc08bbf6945..386a3a4cf279 100644
> --- a/drivers/edac/amd64_edac.c
> +++ b/drivers/edac/amd64_edac.c
> @@ -2459,38 +2459,30 @@ static int map_err_sym_to_channel(int err_sym, int sym_size)
>  	if (sym_size == 4)
>  		switch (err_sym) {
>  		case 0x20:
>  		case 0x21:
>  			return 0;
> -			break;
>  		case 0x22:
>  		case 0x23:
>  			return 1;
> -			break;
>  		default:
>  			return err_sym >> 4;
> -			break;
>  		}
>  	/* x8 symbols */
>  	else
>  		switch (err_sym) {
>  		/* imaginary bits not in a DIMM */
>  		case 0x10:
>  			WARN(1, KERN_ERR "Invalid error symbol: 0x%x\n",
>  					  err_sym);
>  			return -1;
> -			break;
> -
>  		case 0x11:
>  			return 0;
> -			break;
>  		case 0x12:
>  			return 1;
> -			break;
>  		default:
>  			return err_sym >> 3;
> -			break;
>  		}
>  	return -1;
>  }
>  
>  static int get_channel_from_ecc_syndrome(struct mem_ctl_info *mci, u16 syndrome)
> diff --git a/drivers/gpio/gpio-bd70528.c b/drivers/gpio/gpio-bd70528.c
> index 45b3da8da336..931e5765fe92 100644
> --- a/drivers/gpio/gpio-bd70528.c
> +++ b/drivers/gpio/gpio-bd70528.c
> @@ -69,21 +69,18 @@ static int bd70528_gpio_set_config(struct gpio_chip *chip, unsigned int offset,
>  	case PIN_CONFIG_DRIVE_OPEN_DRAIN:
>  		return regmap_update_bits(bdgpio->chip.regmap,
>  					  GPIO_OUT_REG(offset),
>  					  BD70528_GPIO_DRIVE_MASK,
>  					  BD70528_GPIO_OPEN_DRAIN);
> -		break;
>  	case PIN_CONFIG_DRIVE_PUSH_PULL:
>  		return regmap_update_bits(bdgpio->chip.regmap,
>  					  GPIO_OUT_REG(offset),
>  					  BD70528_GPIO_DRIVE_MASK,
>  					  BD70528_GPIO_PUSH_PULL);
> -		break;
>  	case PIN_CONFIG_INPUT_DEBOUNCE:
>  		return bd70528_set_debounce(bdgpio, offset,
>  					    pinconf_to_config_argument(config));
> -		break;
>  	default:
>  		break;
>  	}
>  	return -ENOTSUPP;
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
> index 2a32b66959ba..130a0a0c8332 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce/dce_transform.c
> @@ -1328,11 +1328,10 @@ static bool configure_graphics_mode(
>  		case COLOR_SPACE_SRGB:
>  			/* by pass */
>  			REG_SET(OUTPUT_CSC_CONTROL, 0,
>  				OUTPUT_CSC_GRPH_MODE, 0);
>  			break;
> -			break;
>  		case COLOR_SPACE_SRGB_LIMITED:
>  			/* TV RGB */
>  			REG_SET(OUTPUT_CSC_CONTROL, 0,
>  				OUTPUT_CSC_GRPH_MODE, 1);
>  			break;
> diff --git a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> index d741787f75dc..42c7d157da32 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce100/dce100_resource.c
> @@ -416,29 +416,22 @@ static int map_transmitter_id_to_phy_instance(
>  	enum transmitter transmitter)
>  {
>  	switch (transmitter) {
>  	case TRANSMITTER_UNIPHY_A:
>  		return 0;
> -	break;
>  	case TRANSMITTER_UNIPHY_B:
>  		return 1;
> -	break;
>  	case TRANSMITTER_UNIPHY_C:
>  		return 2;
> -	break;
>  	case TRANSMITTER_UNIPHY_D:
>  		return 3;
> -	break;
>  	case TRANSMITTER_UNIPHY_E:
>  		return 4;
> -	break;
>  	case TRANSMITTER_UNIPHY_F:
>  		return 5;
> -	break;
>  	case TRANSMITTER_UNIPHY_G:
>  		return 6;
> -	break;
>  	default:
>  		ASSERT(0);
>  		return 0;
>  	}
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> index 2bbfa2e176a9..382581c4a674 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_resource.c
> @@ -469,29 +469,22 @@ static int map_transmitter_id_to_phy_instance(
>  	enum transmitter transmitter)
>  {
>  	switch (transmitter) {
>  	case TRANSMITTER_UNIPHY_A:
>  		return 0;
> -	break;
>  	case TRANSMITTER_UNIPHY_B:
>  		return 1;
> -	break;
>  	case TRANSMITTER_UNIPHY_C:
>  		return 2;
> -	break;
>  	case TRANSMITTER_UNIPHY_D:
>  		return 3;
> -	break;
>  	case TRANSMITTER_UNIPHY_E:
>  		return 4;
> -	break;
>  	case TRANSMITTER_UNIPHY_F:
>  		return 5;
> -	break;
>  	case TRANSMITTER_UNIPHY_G:
>  		return 6;
> -	break;
>  	default:
>  		ASSERT(0);
>  		return 0;
>  	}
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> index b622b4b1dac3..7b4b2304bbff 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce112/dce112_resource.c
> @@ -444,29 +444,22 @@ static int map_transmitter_id_to_phy_instance(
>  	enum transmitter transmitter)
>  {
>  	switch (transmitter) {
>  	case TRANSMITTER_UNIPHY_A:
>  		return 0;
> -	break;
>  	case TRANSMITTER_UNIPHY_B:
>  		return 1;
> -	break;
>  	case TRANSMITTER_UNIPHY_C:
>  		return 2;
> -	break;
>  	case TRANSMITTER_UNIPHY_D:
>  		return 3;
> -	break;
>  	case TRANSMITTER_UNIPHY_E:
>  		return 4;
> -	break;
>  	case TRANSMITTER_UNIPHY_F:
>  		return 5;
> -	break;
>  	case TRANSMITTER_UNIPHY_G:
>  		return 6;
> -	break;
>  	default:
>  		ASSERT(0);
>  		return 0;
>  	}
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> index 16fe7344702f..3d782b7c86cb 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce120/dce120_resource.c
> @@ -381,29 +381,22 @@ static int map_transmitter_id_to_phy_instance(
>  	enum transmitter transmitter)
>  {
>  	switch (transmitter) {
>  	case TRANSMITTER_UNIPHY_A:
>  		return 0;
> -	break;
>  	case TRANSMITTER_UNIPHY_B:
>  		return 1;
> -	break;
>  	case TRANSMITTER_UNIPHY_C:
>  		return 2;
> -	break;
>  	case TRANSMITTER_UNIPHY_D:
>  		return 3;
> -	break;
>  	case TRANSMITTER_UNIPHY_E:
>  		return 4;
> -	break;
>  	case TRANSMITTER_UNIPHY_F:
>  		return 5;
> -	break;
>  	case TRANSMITTER_UNIPHY_G:
>  		return 6;
> -	break;
>  	default:
>  		ASSERT(0);
>  		return 0;
>  	}
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c b/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
> index 5a5a9cb77acb..e9dd78c484d6 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce60/dce60_resource.c
> @@ -451,29 +451,22 @@ static int map_transmitter_id_to_phy_instance(
>  	enum transmitter transmitter)
>  {
>  	switch (transmitter) {
>  	case TRANSMITTER_UNIPHY_A:
>  		return 0;
> -	break;
>  	case TRANSMITTER_UNIPHY_B:
>  		return 1;
> -	break;
>  	case TRANSMITTER_UNIPHY_C:
>  		return 2;
> -	break;
>  	case TRANSMITTER_UNIPHY_D:
>  		return 3;
> -	break;
>  	case TRANSMITTER_UNIPHY_E:
>  		return 4;
> -	break;
>  	case TRANSMITTER_UNIPHY_F:
>  		return 5;
> -	break;
>  	case TRANSMITTER_UNIPHY_G:
>  		return 6;
> -	break;
>  	default:
>  		ASSERT(0);
>  		return 0;
>  	}
>  }
> diff --git a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
> index 0eae8cd35f9a..9dbf658162cd 100644
> --- a/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
> +++ b/drivers/gpu/drm/amd/display/dc/dce80/dce80_resource.c
> @@ -456,29 +456,22 @@ static int map_transmitter_id_to_phy_instance(
>  	enum transmitter transmitter)
>  {
>  	switch (transmitter) {
>  	case TRANSMITTER_UNIPHY_A:
>  		return 0;
> -	break;
>  	case TRANSMITTER_UNIPHY_B:
>  		return 1;
> -	break;
>  	case TRANSMITTER_UNIPHY_C:
>  		return 2;
> -	break;
>  	case TRANSMITTER_UNIPHY_D:
>  		return 3;
> -	break;
>  	case TRANSMITTER_UNIPHY_E:
>  		return 4;
> -	break;
>  	case TRANSMITTER_UNIPHY_F:
>  		return 5;
> -	break;
>  	case TRANSMITTER_UNIPHY_G:
>  		return 6;
> -	break;
>  	default:
>  		ASSERT(0);
>  		return 0;
>  	}
>  }
> diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
> index 38672f9e5c4f..bbe4e60dfd08 100644
> --- a/drivers/gpu/drm/mgag200/mgag200_mode.c
> +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
> @@ -792,25 +792,20 @@ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock)
>  	case G200_AGP:
>  		return mgag200_g200_set_plls(mdev, clock);
>  	case G200_SE_A:
>  	case G200_SE_B:
>  		return mga_g200se_set_plls(mdev, clock);
> -		break;
>  	case G200_WB:
>  	case G200_EW3:
>  		return mga_g200wb_set_plls(mdev, clock);
> -		break;
>  	case G200_EV:
>  		return mga_g200ev_set_plls(mdev, clock);
> -		break;
>  	case G200_EH:
>  	case G200_EH3:
>  		return mga_g200eh_set_plls(mdev, clock);
> -		break;
>  	case G200_ER:
>  		return mga_g200er_set_plls(mdev, clock);
> -		break;
>  	}
>  
>  	misc = RREG8(MGA_MISC_IN);
>  	misc &= ~MGAREG_MISC_CLK_SEL_MASK;
>  	misc |= MGAREG_MISC_CLK_SEL_MGA_MSK;
> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c
> index 350f10a3de37..2ec84b8a3b3a 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c
> @@ -121,11 +121,10 @@ pll_map(struct nvkm_bios *bios)
>  	case NV_10:
>  	case NV_11:
>  	case NV_20:
>  	case NV_30:
>  		return nv04_pll_mapping;
> -		break;
>  	case NV_40:
>  		return nv40_pll_mapping;
>  	case NV_50:
>  		if (device->chipset == 0x50)
>  			return nv50_pll_mapping;
> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c
> index efa50274df97..4884eb4a9221 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c
> @@ -138,21 +138,18 @@ mcp77_clk_read(struct nvkm_clk *base, enum nv_clk_src src)
>  		case 0x00000030: return read_pll(clk, 0x004020) >> P;
>  		}
>  		break;
>  	case nv_clk_src_mem:
>  		return 0;
> -		break;
>  	case nv_clk_src_vdec:
>  		P = (read_div(clk) & 0x00000700) >> 8;
>  
>  		switch (mast & 0x00400000) {
>  		case 0x00400000:
>  			return nvkm_clk_read(&clk->base, nv_clk_src_core) >> P;
> -			break;
>  		default:
>  			return 500000 >> P;
> -			break;
>  		}
>  		break;
>  	default:
>  		break;
>  	}
> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c
> index 2ccb4b6be153..7b1eb44ff3da 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c
> @@ -169,11 +169,10 @@ nv50_ram_timing_read(struct nv50_ram *ram, u32 *timing)
>  	case NVKM_RAM_TYPE_GDDR3:
>  		T(CWL) = ((timing[2] & 0xff000000) >> 24) + 1;
>  		break;
>  	default:
>  		return -ENOSYS;
> -		break;
>  	}
>  
>  	T(WR) = ((timing[1] >> 24) & 0xff) - 1 - T(CWL);
>  
>  	return 0;
> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c b/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c
> index e01746ce9fc4..1156634533f9 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c
> @@ -88,11 +88,10 @@ gk104_top_oneinit(struct nvkm_top *top)
>  		case 0x0000000e: B_(NVENC ); break;
>  		case 0x0000000f: A_(NVENC1); break;
>  		case 0x00000010: B_(NVDEC ); break;
>  		case 0x00000013: B_(CE    ); break;
>  		case 0x00000014: C_(GSP   ); break;
> -			break;
>  		default:
>  			break;
>  		}
>  
>  		nvkm_debug(subdev, "%02x.%d (%8s): addr %06x fault %2d "
> diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c
> index 5cea6eea72ab..2072ddc9549c 100644
> --- a/drivers/gpu/drm/qxl/qxl_ioctl.c
> +++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
> @@ -158,11 +158,10 @@ static int qxl_process_single_command(struct qxl_device *qdev,
>  	case QXL_CMD_SURFACE:
>  	case QXL_CMD_CURSOR:
>  	default:
>  		DRM_DEBUG("Only draw commands in execbuffers\n");
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	if (cmd->command_size > PAGE_SIZE - sizeof(union qxl_release_info))
>  		return -EINVAL;
>  
> diff --git a/drivers/iio/adc/meson_saradc.c b/drivers/iio/adc/meson_saradc.c
> index e03988698755..66dc452d643a 100644
> --- a/drivers/iio/adc/meson_saradc.c
> +++ b/drivers/iio/adc/meson_saradc.c
> @@ -591,17 +591,15 @@ static int meson_sar_adc_iio_info_read_raw(struct iio_dev *indio_dev,
>  
>  	switch (mask) {
>  	case IIO_CHAN_INFO_RAW:
>  		return meson_sar_adc_get_sample(indio_dev, chan, NO_AVERAGING,
>  						ONE_SAMPLE, val);
> -		break;
>  
>  	case IIO_CHAN_INFO_AVERAGE_RAW:
>  		return meson_sar_adc_get_sample(indio_dev, chan,
>  						MEAN_AVERAGING, EIGHT_SAMPLES,
>  						val);
> -		break;
>  
>  	case IIO_CHAN_INFO_SCALE:
>  		if (chan->type == IIO_VOLTAGE) {
>  			ret = regulator_get_voltage(priv->vref);
>  			if (ret < 0) {
> diff --git a/drivers/iio/imu/bmi160/bmi160_core.c b/drivers/iio/imu/bmi160/bmi160_core.c
> index 222ebb26f013..431076dc0d2c 100644
> --- a/drivers/iio/imu/bmi160/bmi160_core.c
> +++ b/drivers/iio/imu/bmi160/bmi160_core.c
> @@ -484,11 +484,10 @@ static int bmi160_write_raw(struct iio_dev *indio_dev,
>  
>  	switch (mask) {
>  	case IIO_CHAN_INFO_SCALE:
>  		return bmi160_set_scale(data,
>  					bmi160_to_sensor(chan->type), val2);
> -		break;
>  	case IIO_CHAN_INFO_SAMP_FREQ:
>  		return bmi160_set_odr(data, bmi160_to_sensor(chan->type),
>  				      val, val2);
>  	default:
>  		return -EINVAL;
> diff --git a/drivers/ipack/devices/ipoctal.c b/drivers/ipack/devices/ipoctal.c
> index d480a514c983..3940714e4397 100644
> --- a/drivers/ipack/devices/ipoctal.c
> +++ b/drivers/ipack/devices/ipoctal.c
> @@ -542,11 +542,10 @@ static void ipoctal_set_termios(struct tty_struct *tty,
>  		mr1 |= MR1_RxRTS_CONTROL_OFF;
>  		mr2 |= MR2_TxRTS_CONTROL_ON | MR2_CTS_ENABLE_TX_OFF;
>  		break;
>  	default:
>  		return;
> -		break;
>  	}
>  
>  	baud = tty_get_baud_rate(tty);
>  	tty_termios_encode_baud_rate(&tty->termios, baud, baud);
>  
> diff --git a/drivers/media/dvb-frontends/drx39xyj/drxj.c b/drivers/media/dvb-frontends/drx39xyj/drxj.c
> index 237b9d04c076..37b32d9b398d 100644
> --- a/drivers/media/dvb-frontends/drx39xyj/drxj.c
> +++ b/drivers/media/dvb-frontends/drx39xyj/drxj.c
> @@ -2323,11 +2323,10 @@ hi_command(struct i2c_device_addr *dev_addr, const struct drxj_hi_cmd *cmd, u16
>  		/* No parameters */
>  		break;
>  
>  	default:
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	/* Write command */
>  	rc = drxj_dap_write_reg16(dev_addr, SIO_HI_RA_RAM_CMD__A, cmd->cmd, 0);
>  	if (rc != 0) {
> @@ -3592,11 +3591,10 @@ static int ctrl_set_uio_cfg(struct drx_demod_instance *demod, struct drxuio_cfg
>  				goto rw_error;
>  			}
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  		}		/* switch ( uio_cfg->mode ) */
>  		break;
>        /*====================================================================*/
>  	case DRX_UIO3:
>  		/* DRX_UIO3: GPIO UIO-3 */
> @@ -3616,11 +3614,10 @@ static int ctrl_set_uio_cfg(struct drx_demod_instance *demod, struct drxuio_cfg
>  				goto rw_error;
>  			}
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  		}		/* switch ( uio_cfg->mode ) */
>  		break;
>        /*====================================================================*/
>  	case DRX_UIO4:
>  		/* DRX_UIO4: IRQN UIO-4 */
> @@ -3640,11 +3637,10 @@ static int ctrl_set_uio_cfg(struct drx_demod_instance *demod, struct drxuio_cfg
>  			ext_attr->uio_irqn_mode = uio_cfg->mode;
>  			break;
>  		case DRX_UIO_MODE_FIRMWARE0:
>  		default:
>  			return -EINVAL;
> -			break;
>  		}		/* switch ( uio_cfg->mode ) */
>  		break;
>        /*====================================================================*/
>  	default:
>  		return -EINVAL;
> @@ -10951,11 +10947,10 @@ ctrl_set_standard(struct drx_demod_instance *demod, enum drx_standard *standard)
>  		}
>  		break;
>  	default:
>  		ext_attr->standard = DRX_STANDARD_UNKNOWN;
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	return 0;
>  rw_error:
>  	/* Don't know what the standard is now ... try again */
> @@ -11072,11 +11067,10 @@ ctrl_power_mode(struct drx_demod_instance *demod, enum drx_power_mode *mode)
>  		sio_cc_pwd_mode = SIO_CC_PWD_MODE_LEVEL_OSC;
>  		break;
>  	default:
>  		/* Unknow sleep mode */
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	/* Check if device needs to be powered up */
>  	if ((common_attr->current_power_mode != DRX_POWER_UP)) {
>  		rc = power_up_device(demod);
> @@ -11894,11 +11888,10 @@ static int drx_ctrl_u_code(struct drx_demod_instance *demod,
>  			}
>  			break;
>  		}
>  		default:
>  			return -EINVAL;
> -			break;
>  
>  		}
>  		mc_data += mc_block_nr_bytes;
>  	}
>  
> diff --git a/drivers/media/dvb-frontends/drxd_hard.c b/drivers/media/dvb-frontends/drxd_hard.c
> index 45f982863904..a7eb81df88c2 100644
> --- a/drivers/media/dvb-frontends/drxd_hard.c
> +++ b/drivers/media/dvb-frontends/drxd_hard.c
> @@ -1620,11 +1620,10 @@ static int CorrectSysClockDeviation(struct drxd_state *state)
>  		case 6000000:
>  			bandwidth = DRXD_BANDWIDTH_6MHZ_IN_HZ;
>  			break;
>  		default:
>  			return -1;
> -			break;
>  		}
>  
>  		/* Compute new sysclock value
>  		   sysClockFreq = (((incr + 2^23)*bandwidth)/2^21)/1000 */
>  		incr += (1 << 23);
> diff --git a/drivers/media/dvb-frontends/nxt200x.c b/drivers/media/dvb-frontends/nxt200x.c
> index 35b83b1dd82c..200b6dbc75f8 100644
> --- a/drivers/media/dvb-frontends/nxt200x.c
> +++ b/drivers/media/dvb-frontends/nxt200x.c
> @@ -166,11 +166,10 @@ static int nxt200x_writereg_multibyte (struct nxt200x_state* state, u8 reg, u8*
>  			len2 = ((attr << 4) | 0x10) | len;
>  			buf = 0x80;
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  
>  	/* set multi register length */
>  	nxt200x_writebytes(state, 0x34, &len2, 1);
>  
> @@ -188,11 +187,10 @@ static int nxt200x_writereg_multibyte (struct nxt200x_state* state, u8 reg, u8*
>  			if (buf == 0)
>  				return 0;
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  
>  	pr_warn("Error writing multireg register 0x%02X\n", reg);
>  
>  	return 0;
> @@ -214,11 +212,10 @@ static int nxt200x_readreg_multibyte (struct nxt200x_state* state, u8 reg, u8* d
>  			nxt200x_writebytes(state, 0x34, &len2, 1);
>  
>  			/* read the actual data */
>  			nxt200x_readbytes(state, reg, data, len);
>  			return 0;
> -			break;
>  		case NXT2004:
>  			/* probably not right, but gives correct values */
>  			attr = 0x02;
>  			if (reg & 0x80) {
>  				attr = attr << 1;
> @@ -237,14 +234,12 @@ static int nxt200x_readreg_multibyte (struct nxt200x_state* state, u8 reg, u8* d
>  			/* read the actual data */
>  			for(i = 0; i < len; i++) {
>  				nxt200x_readbytes(state, 0x36 + i, &data[i], 1);
>  			}
>  			return 0;
> -			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  }
>  
>  static void nxt200x_microcontroller_stop (struct nxt200x_state* state)
>  {
> @@ -372,11 +367,10 @@ static int nxt200x_writetuner (struct nxt200x_state* state, u8* data)
>  			}
>  			pr_warn("timeout error writing to tuner\n");
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  	return 0;
>  }
>  
>  static void nxt200x_agc_reset(struct nxt200x_state* state)
> @@ -553,11 +547,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  			if (state->config->set_ts_params)
>  				state->config->set_ts_params(fe, 0);
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  
>  	if (fe->ops.tuner_ops.calc_regs) {
>  		/* get tuning information */
>  		fe->ops.tuner_ops.calc_regs(fe, buf, 5);
> @@ -578,11 +571,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  		case VSB_8:
>  			buf[0] = 0x70;
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  	nxt200x_writebytes(state, 0x42, buf, 1);
>  
>  	/* configure sdm */
>  	switch (state->demod_chip) {
> @@ -592,11 +584,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  		case NXT2004:
>  			buf[0] = 0x07;
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  	nxt200x_writebytes(state, 0x57, buf, 1);
>  
>  	/* write sdm1 input */
>  	buf[0] = 0x10;
> @@ -608,11 +599,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  		case NXT2004:
>  			nxt200x_writebytes(state, 0x58, buf, 2);
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  
>  	/* write sdmx input */
>  	switch (p->modulation) {
>  		case QAM_64:
> @@ -624,11 +614,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  		case VSB_8:
>  				buf[0] = 0x60;
>  				break;
>  		default:
>  				return -EINVAL;
> -				break;
>  	}
>  	buf[1] = 0x00;
>  	switch (state->demod_chip) {
>  		case NXT2002:
>  			nxt200x_writereg_multibyte(state, 0x5C, buf, 2);
> @@ -636,11 +625,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  		case NXT2004:
>  			nxt200x_writebytes(state, 0x5C, buf, 2);
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  
>  	/* write adc power lpf fc */
>  	buf[0] = 0x05;
>  	nxt200x_writebytes(state, 0x43, buf, 1);
> @@ -662,11 +650,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  		case NXT2004:
>  			nxt200x_writebytes(state, 0x4B, buf, 2);
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  
>  	/* write kg1 */
>  	buf[0] = 0x00;
>  	nxt200x_writebytes(state, 0x4D, buf, 1);
> @@ -718,11 +705,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  		case VSB_8:
>  				buf[0] = 0x00;
>  				break;
>  		default:
>  				return -EINVAL;
> -				break;
>  	}
>  	nxt200x_writebytes(state, 0x30, buf, 1);
>  
>  	/* write agc control reg */
>  	buf[0] = 0x00;
> @@ -740,11 +726,10 @@ static int nxt200x_setup_frontend_parameters(struct dvb_frontend *fe)
>  			nxt200x_writebytes(state, 0x49, buf, 2);
>  			nxt200x_writebytes(state, 0x4B, buf, 2);
>  			break;
>  		default:
>  			return -EINVAL;
> -			break;
>  	}
>  
>  	/* write agc control reg */
>  	buf[0] = 0x04;
>  	nxt200x_writebytes(state, 0x41, buf, 1);
> @@ -1112,11 +1097,10 @@ static int nxt200x_init(struct dvb_frontend* fe)
>  			case NXT2004:
>  				ret = nxt2004_init(fe);
>  				break;
>  			default:
>  				return -EINVAL;
> -				break;
>  		}
>  		state->initialised = 1;
>  	}
>  	return ret;
>  }
> diff --git a/drivers/media/dvb-frontends/si21xx.c b/drivers/media/dvb-frontends/si21xx.c
> index a116eff417f2..e31eb2c5cc4c 100644
> --- a/drivers/media/dvb-frontends/si21xx.c
> +++ b/drivers/media/dvb-frontends/si21xx.c
> @@ -462,14 +462,12 @@ static int si21xx_set_voltage(struct dvb_frontend *fe, enum fe_sec_voltage volt)
>  	val = (0x80 | si21_readreg(state, LNB_CTRL_REG_1));
>  
>  	switch (volt) {
>  	case SEC_VOLTAGE_18:
>  		return si21_writereg(state, LNB_CTRL_REG_1, val | 0x40);
> -		break;
>  	case SEC_VOLTAGE_13:
>  		return si21_writereg(state, LNB_CTRL_REG_1, (val & ~0x40));
> -		break;
>  	default:
>  		return -EINVAL;
>  	}
>  }
>  
> diff --git a/drivers/media/tuners/mt2063.c b/drivers/media/tuners/mt2063.c
> index 2240d214dfac..d105431a2e2d 100644
> --- a/drivers/media/tuners/mt2063.c
> +++ b/drivers/media/tuners/mt2063.c
> @@ -1847,11 +1847,10 @@ static int mt2063_init(struct dvb_frontend *fe)
>  		def = MT2063B0_defaults;
>  		break;
>  
>  	default:
>  		return -ENODEV;
> -		break;
>  	}
>  
>  	while (status >= 0 && *def) {
>  		u8 reg = *def++;
>  		u8 val = *def++;
> diff --git a/drivers/message/fusion/mptbase.c b/drivers/message/fusion/mptbase.c
> index 9903e9660a38..3078fac34e51 100644
> --- a/drivers/message/fusion/mptbase.c
> +++ b/drivers/message/fusion/mptbase.c
> @@ -471,11 +471,10 @@ mpt_turbo_reply(MPT_ADAPTER *ioc, u32 pa)
>  			req_idx = pa & 0x0000FFFF;
>  			mf = MPT_INDEX_2_MFPTR(ioc, req_idx);
>  			mpt_free_msg_frame(ioc, mf);
>  			mb();
>  			return;
> -			break;
>  		}
>  		mr = (MPT_FRAME_HDR *) CAST_U32_TO_PTR(pa);
>  		break;
>  	case MPI_CONTEXT_REPLY_TYPE_SCSI_TARGET:
>  		cb_idx = mpt_get_cb_idx(MPTSTM_DRIVER);
> diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
> index a97eb5d47705..686e8b6a4c55 100644
> --- a/drivers/misc/mei/hbm.c
> +++ b/drivers/misc/mei/hbm.c
> @@ -1375,11 +1375,10 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
>  
>  		dev->dev_state = MEI_DEV_POWER_DOWN;
>  		dev_info(dev->dev, "hbm: stop response: resetting.\n");
>  		/* force the reset */
>  		return -EPROTO;
> -		break;
>  
>  	case CLIENT_DISCONNECT_REQ_CMD:
>  		dev_dbg(dev->dev, "hbm: disconnect request: message received\n");
>  
>  		disconnect_req = (struct hbm_client_connect_request *)mei_msg;
> diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
> index b40f46a43fc6..323035d4f2d0 100644
> --- a/drivers/mtd/mtdchar.c
> +++ b/drivers/mtd/mtdchar.c
> @@ -879,21 +879,19 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
>  		loff_t offs;
>  
>  		if (copy_from_user(&offs, argp, sizeof(loff_t)))
>  			return -EFAULT;
>  		return mtd_block_isbad(mtd, offs);
> -		break;
>  	}
>  
>  	case MEMSETBADBLOCK:
>  	{
>  		loff_t offs;
>  
>  		if (copy_from_user(&offs, argp, sizeof(loff_t)))
>  			return -EFAULT;
>  		return mtd_block_markbad(mtd, offs);
> -		break;
>  	}
>  
>  	case OTPSELECT:
>  	{
>  		int mode;
> diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
> index c3f49543ff26..9c215f7c5f81 100644
> --- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
> +++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
> @@ -73,15 +73,15 @@ static const struct can_bittiming_const mcp251xfd_data_bittiming_const = {
>  
>  static const char *__mcp251xfd_get_model_str(enum mcp251xfd_model model)
>  {
>  	switch (model) {
>  	case MCP251XFD_MODEL_MCP2517FD:
> -		return "MCP2517FD"; break;
> +		return "MCP2517FD";
>  	case MCP251XFD_MODEL_MCP2518FD:
> -		return "MCP2518FD"; break;
> +		return "MCP2518FD";
>  	case MCP251XFD_MODEL_MCP251XFD:
> -		return "MCP251xFD"; break;
> +		return "MCP251xFD";
>  	}
>  
>  	return "<unknown>";
>  }
>  
> @@ -93,25 +93,25 @@ mcp251xfd_get_model_str(const struct mcp251xfd_priv *priv)
>  
>  static const char *mcp251xfd_get_mode_str(const u8 mode)
>  {
>  	switch (mode) {
>  	case MCP251XFD_REG_CON_MODE_MIXED:
> -		return "Mixed (CAN FD/CAN 2.0)"; break;
> +		return "Mixed (CAN FD/CAN 2.0)";
>  	case MCP251XFD_REG_CON_MODE_SLEEP:
> -		return "Sleep"; break;
> +		return "Sleep";
>  	case MCP251XFD_REG_CON_MODE_INT_LOOPBACK:
> -		return "Internal Loopback"; break;
> +		return "Internal Loopback";
>  	case MCP251XFD_REG_CON_MODE_LISTENONLY:
> -		return "Listen Only"; break;
> +		return "Listen Only";
>  	case MCP251XFD_REG_CON_MODE_CONFIG:
> -		return "Configuration"; break;
> +		return "Configuration";
>  	case MCP251XFD_REG_CON_MODE_EXT_LOOPBACK:
> -		return "External Loopback"; break;
> +		return "External Loopback";
>  	case MCP251XFD_REG_CON_MODE_CAN2_0:
> -		return "CAN 2.0"; break;
> +		return "CAN 2.0";
>  	case MCP251XFD_REG_CON_MODE_RESTRICTED:
> -		return "Restricted Operation"; break;
> +		return "Restricted Operation";
>  	}
>  
>  	return "<unknown>";
>  }
>  
> diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
> index 0f865daeb36d..bf5e0e9bd0e2 100644
> --- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
> +++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
> @@ -1161,11 +1161,10 @@ int aq_nic_set_link_ksettings(struct aq_nic_s *self,
>  			break;
>  
>  		default:
>  			err = -1;
>  			goto err_exit;
> -		break;
>  		}
>  		if (!(self->aq_nic_cfg.aq_hw_caps->link_speed_msk & rate)) {
>  			err = -1;
>  			goto err_exit;
>  		}
> diff --git a/drivers/net/ethernet/cisco/enic/enic_ethtool.c b/drivers/net/ethernet/cisco/enic/enic_ethtool.c
> index a4dd52bba2c3..1a9803f2073e 100644
> --- a/drivers/net/ethernet/cisco/enic/enic_ethtool.c
> +++ b/drivers/net/ethernet/cisco/enic/enic_ethtool.c
> @@ -432,11 +432,10 @@ static int enic_grxclsrule(struct enic *enic, struct ethtool_rxnfc *cmd)
>  	case IPPROTO_UDP:
>  		fsp->flow_type = UDP_V4_FLOW;
>  		break;
>  	default:
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	fsp->h_u.tcp_ip4_spec.ip4src = flow_get_u32_src(&n->keys);
>  	fsp->m_u.tcp_ip4_spec.ip4src = (__u32)~0;
>  
> diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
> index de563cfd294d..4b93ba149ec5 100644
> --- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
> +++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
> @@ -348,11 +348,10 @@ static s32 ixgbe_calc_eeprom_checksum_X540(struct ixgbe_hw *hw)
>  			continue;
>  
>  		if (ixgbe_read_eerd_generic(hw, pointer, &length)) {
>  			hw_dbg(hw, "EEPROM read failed\n");
>  			return IXGBE_ERR_EEPROM;
> -			break;
>  		}
>  
>  		/* Skip pointer section if length is invalid. */
>  		if (length == 0xFFFF || length == 0 ||
>  		    (pointer + length) >= hw->eeprom.word_size)
> diff --git a/drivers/net/wan/lmc/lmc_proto.c b/drivers/net/wan/lmc/lmc_proto.c
> index e8b0b902b424..4e9cc83b615a 100644
> --- a/drivers/net/wan/lmc/lmc_proto.c
> +++ b/drivers/net/wan/lmc/lmc_proto.c
> @@ -87,21 +87,17 @@ void lmc_proto_close(lmc_softc_t *sc)
>  __be16 lmc_proto_type(lmc_softc_t *sc, struct sk_buff *skb) /*FOLD00*/
>  {
>      switch(sc->if_type){
>      case LMC_PPP:
>  	    return hdlc_type_trans(skb, sc->lmc_device);
> -	    break;
>      case LMC_NET:
>          return htons(ETH_P_802_2);
> -        break;
>      case LMC_RAW: /* Packet type for skbuff kind of useless */
>          return htons(ETH_P_802_2);
> -        break;
>      default:
>          printk(KERN_WARNING "%s: No protocol set for this interface, assuming 802.2 (which is wrong!!)\n", sc->name);
>          return htons(ETH_P_802_2);
> -        break;
>      }
>  }
>  
>  void lmc_proto_netif(lmc_softc_t *sc, struct sk_buff *skb) /*FOLD00*/
>  {
> diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
> index 5c1af2021883..9c4e6cf2137a 100644
> --- a/drivers/net/wireless/ath/ath10k/htt_rx.c
> +++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
> @@ -3876,11 +3876,10 @@ bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
>  		atomic_inc(&htt->num_mpdus_ready);
>  
>  		return ath10k_htt_rx_proc_rx_frag_ind(htt,
>  						      &resp->rx_frag_ind,
>  						      skb);
> -		break;
>  	}
>  	case HTT_T2H_MSG_TYPE_TEST:
>  		break;
>  	case HTT_T2H_MSG_TYPE_STATS_CONF:
>  		trace_ath10k_htt_stats(ar, skb->data, skb->len);
> diff --git a/drivers/net/wireless/ath/ath6kl/testmode.c b/drivers/net/wireless/ath/ath6kl/testmode.c
> index f3906dbe5495..89c7c4e25169 100644
> --- a/drivers/net/wireless/ath/ath6kl/testmode.c
> +++ b/drivers/net/wireless/ath/ath6kl/testmode.c
> @@ -92,11 +92,10 @@ int ath6kl_tm_cmd(struct wiphy *wiphy, struct wireless_dev *wdev,
>  
>  		ath6kl_wmi_test_cmd(ar->wmi, buf, buf_len);
>  
>  		return 0;
>  
> -		break;
>  	case ATH6KL_TM_CMD_RX_REPORT:
>  	default:
>  		return -EOPNOTSUPP;
>  	}
>  }
> diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
> index 6609ce122e6e..b66eeb577272 100644
> --- a/drivers/net/wireless/ath/ath9k/hw.c
> +++ b/drivers/net/wireless/ath/ath9k/hw.c
> @@ -2306,11 +2306,10 @@ void ath9k_hw_beaconinit(struct ath_hw *ah, u32 next_beacon, u32 beacon_period)
>  		break;
>  	default:
>  		ath_dbg(ath9k_hw_common(ah), BEACON,
>  			"%s: unsupported opmode: %d\n", __func__, ah->opmode);
>  		return;
> -		break;
>  	}
>  
>  	REG_WRITE(ah, AR_BEACON_PERIOD, beacon_period);
>  	REG_WRITE(ah, AR_DMA_BEACON_PERIOD, beacon_period);
>  	REG_WRITE(ah, AR_SWBA_PERIOD, beacon_period);
> diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
> index cbdebefb854a..8698ca4d30de 100644
> --- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
> +++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
> @@ -1200,17 +1200,15 @@ static int iwl_mvm_mac_ctx_send(struct iwl_mvm *mvm, struct ieee80211_vif *vif,
>  	switch (vif->type) {
>  	case NL80211_IFTYPE_STATION:
>  		return iwl_mvm_mac_ctxt_cmd_sta(mvm, vif, action,
>  						force_assoc_off,
>  						bssid_override);
> -		break;
>  	case NL80211_IFTYPE_AP:
>  		if (!vif->p2p)
>  			return iwl_mvm_mac_ctxt_cmd_ap(mvm, vif, action);
>  		else
>  			return iwl_mvm_mac_ctxt_cmd_go(mvm, vif, action);
> -		break;
>  	case NL80211_IFTYPE_MONITOR:
>  		return iwl_mvm_mac_ctxt_cmd_listener(mvm, vif, action);
>  	case NL80211_IFTYPE_P2P_DEVICE:
>  		return iwl_mvm_mac_ctxt_cmd_p2p_device(mvm, vif, action);
>  	case NL80211_IFTYPE_ADHOC:
> diff --git a/drivers/net/wireless/intersil/p54/eeprom.c b/drivers/net/wireless/intersil/p54/eeprom.c
> index 5bd35c147e19..3ca9d26df174 100644
> --- a/drivers/net/wireless/intersil/p54/eeprom.c
> +++ b/drivers/net/wireless/intersil/p54/eeprom.c
> @@ -868,11 +868,10 @@ int p54_parse_eeprom(struct ieee80211_hw *dev, void *eeprom, int len)
>  				err = -ENOMSG;
>  				goto err;
>  			} else {
>  				goto good_eeprom;
>  			}
> -			break;
>  		default:
>  			break;
>  		}
>  
>  		crc16 = crc_ccitt(crc16, (u8 *)entry, (entry_len + 1) * 2);
> diff --git a/drivers/net/wireless/intersil/prism54/oid_mgt.c b/drivers/net/wireless/intersil/prism54/oid_mgt.c
> index 9fd307ca4b6d..7b251ae90a68 100644
> --- a/drivers/net/wireless/intersil/prism54/oid_mgt.c
> +++ b/drivers/net/wireless/intersil/prism54/oid_mgt.c
> @@ -785,21 +785,19 @@ mgt_response_to_str(enum oid_num_t n, union oid_res_t *r, char *str)
>  			struct obj_buffer *buff = r->ptr;
>  			return scnprintf(str, PRIV_STR_SIZE,
>  					"size=%u\naddr=0x%X\n", buff->size,
>  					buff->addr);
>  		}
> -		break;
>  	case OID_TYPE_BSS:{
>  			struct obj_bss *bss = r->ptr;
>  			return scnprintf(str, PRIV_STR_SIZE,
>  					"age=%u\nchannel=%u\n"
>  					"capinfo=0x%X\nrates=0x%X\n"
>  					"basic_rates=0x%X\n", bss->age,
>  					bss->channel, bss->capinfo,
>  					bss->rates, bss->basic_rates);
>  		}
> -		break;
>  	case OID_TYPE_BSSLIST:{
>  			struct obj_bsslist *list = r->ptr;
>  			int i, k;
>  			k = scnprintf(str, PRIV_STR_SIZE, "nr=%u\n", list->nr);
>  			for (i = 0; i < list->nr; i++)
> @@ -812,53 +810,47 @@ mgt_response_to_str(enum oid_num_t n, union oid_res_t *r, char *str)
>  					      list->bsslist[i].capinfo,
>  					      list->bsslist[i].rates,
>  					      list->bsslist[i].basic_rates);
>  			return k;
>  		}
> -		break;
>  	case OID_TYPE_FREQUENCIES:{
>  			struct obj_frequencies *freq = r->ptr;
>  			int i, t;
>  			printk("nr : %u\n", freq->nr);
>  			t = scnprintf(str, PRIV_STR_SIZE, "nr=%u\n", freq->nr);
>  			for (i = 0; i < freq->nr; i++)
>  				t += scnprintf(str + t, PRIV_STR_SIZE - t,
>  					      "mhz[%u]=%u\n", i, freq->mhz[i]);
>  			return t;
>  		}
> -		break;
>  	case OID_TYPE_MLME:{
>  			struct obj_mlme *mlme = r->ptr;
>  			return scnprintf(str, PRIV_STR_SIZE,
>  					"id=0x%X\nstate=0x%X\ncode=0x%X\n",
>  					mlme->id, mlme->state, mlme->code);
>  		}
> -		break;
>  	case OID_TYPE_MLMEEX:{
>  			struct obj_mlmeex *mlme = r->ptr;
>  			return scnprintf(str, PRIV_STR_SIZE,
>  					"id=0x%X\nstate=0x%X\n"
>  					"code=0x%X\nsize=0x%X\n", mlme->id,
>  					mlme->state, mlme->code, mlme->size);
>  		}
> -		break;
>  	case OID_TYPE_ATTACH:{
>  			struct obj_attachment *attach = r->ptr;
>  			return scnprintf(str, PRIV_STR_SIZE,
>  					"id=%d\nsize=%d\n",
>  					attach->id,
>  					attach->size);
>  		}
> -		break;
>  	case OID_TYPE_SSID:{
>  			struct obj_ssid *ssid = r->ptr;
>  			return scnprintf(str, PRIV_STR_SIZE,
>  					"length=%u\noctets=%.*s\n",
>  					ssid->length, ssid->length,
>  					ssid->octets);
>  		}
> -		break;
>  	case OID_TYPE_KEY:{
>  			struct obj_key *key = r->ptr;
>  			int t, i;
>  			t = scnprintf(str, PRIV_STR_SIZE,
>  				     "type=0x%X\nlength=0x%X\nkey=0x",
> @@ -867,11 +859,10 @@ mgt_response_to_str(enum oid_num_t n, union oid_res_t *r, char *str)
>  				t += scnprintf(str + t, PRIV_STR_SIZE - t,
>  					      "%02X:", key->key[i]);
>  			t += scnprintf(str + t, PRIV_STR_SIZE - t, "\n");
>  			return t;
>  		}
> -		break;
>  	case OID_TYPE_RAW:
>  	case OID_TYPE_ADDR:{
>  			unsigned char *buff = r->ptr;
>  			int t, i;
>  			t = scnprintf(str, PRIV_STR_SIZE, "hex data=");
> @@ -879,11 +870,10 @@ mgt_response_to_str(enum oid_num_t n, union oid_res_t *r, char *str)
>  				t += scnprintf(str + t, PRIV_STR_SIZE - t,
>  					      "%02X:", buff[i]);
>  			t += scnprintf(str + t, PRIV_STR_SIZE - t, "\n");
>  			return t;
>  		}
> -		break;
>  	default:
>  		BUG();
>  	}
>  	return 0;
>  }
> diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
> index 63f9ea21962f..bd9160b166c5 100644
> --- a/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
> +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8188ee/hw.c
> @@ -1224,11 +1224,10 @@ static int _rtl88ee_set_media_status(struct ieee80211_hw *hw,
>  			"Set Network type to AP!\n");
>  		break;
>  	default:
>  		pr_err("Network type %d not support!\n", type);
>  		return 1;
> -		break;
>  	}
>  
>  	/* MSR_INFRA == Link in infrastructure network;
>  	 * MSR_ADHOC == Link in ad hoc network;
>  	 * Therefore, check link state is necessary.
> diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
> index a36dc6e726d2..f8a1de6e9849 100644
> --- a/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
> +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hw.c
> @@ -1130,11 +1130,10 @@ static int _rtl8723e_set_media_status(struct ieee80211_hw *hw,
>  			"Set Network type to AP!\n");
>  		break;
>  	default:
>  		pr_err("Network type %d not support!\n", type);
>  		return 1;
> -		break;
>  	}
>  
>  	/* MSR_INFRA == Link in infrastructure network;
>  	 * MSR_ADHOC == Link in ad hoc network;
>  	 * Therefore, check link state is necessary.
> diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
> index f41a7643b9c4..225b8cd44f23 100644
> --- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
> +++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
> @@ -2083,16 +2083,14 @@ bool rtl8812ae_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
>  	switch (rfpath) {
>  	case RF90_PATH_A:
>  		return __rtl8821ae_phy_config_with_headerfile(hw,
>  				radioa_array_table_a, radioa_arraylen_a,
>  				_rtl8821ae_config_rf_radio_a);
> -		break;
>  	case RF90_PATH_B:
>  		return __rtl8821ae_phy_config_with_headerfile(hw,
>  				radioa_array_table_b, radioa_arraylen_b,
>  				_rtl8821ae_config_rf_radio_b);
> -		break;
>  	case RF90_PATH_C:
>  	case RF90_PATH_D:
>  		pr_err("switch case %#x not processed\n", rfpath);
>  		break;
>  	}
> @@ -2114,11 +2112,10 @@ bool rtl8821ae_phy_config_rf_with_headerfile(struct ieee80211_hw *hw,
>  	switch (rfpath) {
>  	case RF90_PATH_A:
>  		return __rtl8821ae_phy_config_with_headerfile(hw,
>  			radioa_array_table, radioa_arraylen,
>  			_rtl8821ae_config_rf_radio_a);
> -		break;
>  
>  	case RF90_PATH_B:
>  	case RF90_PATH_C:
>  	case RF90_PATH_D:
>  		pr_err("switch case %#x not processed\n", rfpath);
> diff --git a/drivers/nfc/st21nfca/core.c b/drivers/nfc/st21nfca/core.c
> index 2ce17932a073..6ca0d2f56b18 100644
> --- a/drivers/nfc/st21nfca/core.c
> +++ b/drivers/nfc/st21nfca/core.c
> @@ -792,11 +792,10 @@ static int st21nfca_hci_im_transceive(struct nfc_hci_dev *hdev,
>  		return nfc_hci_send_cmd_async(hdev, target->hci_reader_gate,
>  					      ST21NFCA_WR_XCHG_DATA, skb->data,
>  					      skb->len,
>  					      st21nfca_hci_data_exchange_cb,
>  					      info);
> -		break;
>  	default:
>  		return 1;
>  	}
>  }
>  
> diff --git a/drivers/nfc/trf7970a.c b/drivers/nfc/trf7970a.c
> index 3bd97c73f983..c70f62fe321e 100644
> --- a/drivers/nfc/trf7970a.c
> +++ b/drivers/nfc/trf7970a.c
> @@ -1380,11 +1380,10 @@ static int trf7970a_is_iso15693_write_or_lock(u8 cmd)
>  	case ISO15693_CMD_WRITE_AFI:
>  	case ISO15693_CMD_LOCK_AFI:
>  	case ISO15693_CMD_WRITE_DSFID:
>  	case ISO15693_CMD_LOCK_DSFID:
>  		return 1;
> -		break;
>  	default:
>  		return 0;
>  	}
>  }
>  
> diff --git a/drivers/nvdimm/claim.c b/drivers/nvdimm/claim.c
> index 5a7c80053c62..2f250874b1a4 100644
> --- a/drivers/nvdimm/claim.c
> +++ b/drivers/nvdimm/claim.c
> @@ -200,11 +200,10 @@ ssize_t nd_namespace_store(struct device *dev,
>  		}
>  		break;
>  	default:
>  		len = -EBUSY;
>  		goto out_attach;
> -		break;
>  	}
>  
>  	if (__nvdimm_namespace_capacity(ndns) < SZ_16M) {
>  		dev_dbg(dev, "%s too small to host\n", name);
>  		len = -ENXIO;
> diff --git a/drivers/pci/controller/pci-v3-semi.c b/drivers/pci/controller/pci-v3-semi.c
> index 1f54334f09f7..154a5398633c 100644
> --- a/drivers/pci/controller/pci-v3-semi.c
> +++ b/drivers/pci/controller/pci-v3-semi.c
> @@ -656,11 +656,10 @@ static int v3_get_dma_range_config(struct v3_pci *v3,
>  		val |= V3_LB_BASE_ADR_SIZE_2GB;
>  		break;
>  	default:
>  		dev_err(v3->dev, "illegal dma memory chunk size\n");
>  		return -EINVAL;
> -		break;
>  	}
>  	val |= V3_PCI_MAP_M_REG_EN | V3_PCI_MAP_M_ENABLE;
>  	*pci_map = val;
>  
>  	dev_dbg(dev,
> diff --git a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
> index 5e24838a582f..2223ead5bd72 100644
> --- a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
> +++ b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
> @@ -106,23 +106,18 @@ struct s3c24xx_eint_domain_data {
>  static int s3c24xx_eint_get_trigger(unsigned int type)
>  {
>  	switch (type) {
>  	case IRQ_TYPE_EDGE_RISING:
>  		return EINT_EDGE_RISING;
> -		break;
>  	case IRQ_TYPE_EDGE_FALLING:
>  		return EINT_EDGE_FALLING;
> -		break;
>  	case IRQ_TYPE_EDGE_BOTH:
>  		return EINT_EDGE_BOTH;
> -		break;
>  	case IRQ_TYPE_LEVEL_HIGH:
>  		return EINT_LEVEL_HIGH;
> -		break;
>  	case IRQ_TYPE_LEVEL_LOW:
>  		return EINT_LEVEL_LOW;
> -		break;
>  	default:
>  		return -EINVAL;
>  	}
>  }
>  
> diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
> index 49f4b73be513..1c2084c74a57 100644
> --- a/drivers/platform/x86/acer-wmi.c
> +++ b/drivers/platform/x86/acer-wmi.c
> @@ -790,11 +790,10 @@ static acpi_status AMW0_set_u32(u32 value, u32 cap)
>  		if (value > max_brightness)
>  			return AE_BAD_PARAMETER;
>  		switch (quirks->brightness) {
>  		default:
>  			return ec_write(0x83, value);
> -			break;
>  		}
>  	default:
>  		return AE_ERROR;
>  	}
>  
> diff --git a/drivers/platform/x86/sony-laptop.c b/drivers/platform/x86/sony-laptop.c
> index e5a1b5533408..704813374922 100644
> --- a/drivers/platform/x86/sony-laptop.c
> +++ b/drivers/platform/x86/sony-laptop.c
> @@ -2465,26 +2465,23 @@ static int __sony_nc_gfx_switch_status_get(void)
>  	case 0x0146:
>  		/* 1: discrete GFX (speed)
>  		 * 0: integrated GFX (stamina)
>  		 */
>  		return result & 0x1 ? SPEED : STAMINA;
> -		break;
>  	case 0x015B:
>  		/* 0: discrete GFX (speed)
>  		 * 1: integrated GFX (stamina)
>  		 */
>  		return result & 0x1 ? STAMINA : SPEED;
> -		break;
>  	case 0x0128:
>  		/* it's a more elaborated bitmask, for now:
>  		 * 2: integrated GFX (stamina)
>  		 * 0: discrete GFX (speed)
>  		 */
>  		dprintk("GFX Status: 0x%x\n", result);
>  		return result & 0x80 ? AUTO :
>  			result & 0x02 ? STAMINA : SPEED;
> -		break;
>  	}
>  	return -EINVAL;
>  }
>  
>  static ssize_t sony_nc_gfx_switch_status_show(struct device *dev,
> diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
> index d88f388a3450..44e802f9f1b4 100644
> --- a/drivers/platform/x86/wmi.c
> +++ b/drivers/platform/x86/wmi.c
> @@ -1258,17 +1258,14 @@ acpi_wmi_ec_space_handler(u32 function, acpi_physical_address address,
>  	}
>  
>  	switch (result) {
>  	case -EINVAL:
>  		return AE_BAD_PARAMETER;
> -		break;
>  	case -ENODEV:
>  		return AE_NOT_FOUND;
> -		break;
>  	case -ETIME:
>  		return AE_TIME;
> -		break;
>  	default:
>  		return AE_OK;
>  	}
>  }
>  
> diff --git a/drivers/power/supply/wm831x_power.c b/drivers/power/supply/wm831x_power.c
> index 18b33f14dfee..4cd2dd870039 100644
> --- a/drivers/power/supply/wm831x_power.c
> +++ b/drivers/power/supply/wm831x_power.c
> @@ -666,11 +666,10 @@ static int wm831x_power_probe(struct platform_device *pdev)
>  	default:
>  		dev_err(&pdev->dev, "Failed to find USB phy: %d\n", ret);
>  		fallthrough;
>  	case -EPROBE_DEFER:
>  		goto err_bat_irq;
> -		break;
>  	}
>  
>  	return ret;
>  
>  err_bat_irq:
> diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
> index f923ed019d4a..ed034192b3c3 100644
> --- a/drivers/scsi/aic94xx/aic94xx_task.c
> +++ b/drivers/scsi/aic94xx/aic94xx_task.c
> @@ -267,11 +267,10 @@ static void asd_task_tasklet_complete(struct asd_ascb *ascb,
>  		ts->stat = SAS_NAK_R_ERR;
>  		break;
>  	case TA_I_T_NEXUS_LOSS:
>  		opcode = dl->status_block[0];
>  		goto Again;
> -		break;
>  	case TF_INV_CONN_HANDLE:
>  		ts->resp = SAS_TASK_UNDELIVERED;
>  		ts->stat = SAS_DEVICE_UNKNOWN;
>  		break;
>  	case TF_REQUESTED_N_PENDING:
> diff --git a/drivers/scsi/be2iscsi/be_mgmt.c b/drivers/scsi/be2iscsi/be_mgmt.c
> index 96d6e384b2b2..0d4928567265 100644
> --- a/drivers/scsi/be2iscsi/be_mgmt.c
> +++ b/drivers/scsi/be2iscsi/be_mgmt.c
> @@ -1242,22 +1242,18 @@ beiscsi_adap_family_disp(struct device *dev, struct device_attribute *attr,
>  	case BE_DEVICE_ID1:
>  	case OC_DEVICE_ID1:
>  	case OC_DEVICE_ID2:
>  		return snprintf(buf, PAGE_SIZE,
>  				"Obsolete/Unsupported BE2 Adapter Family\n");
> -		break;
>  	case BE_DEVICE_ID2:
>  	case OC_DEVICE_ID3:
>  		return snprintf(buf, PAGE_SIZE, "BE3-R Adapter Family\n");
> -		break;
>  	case OC_SKH_ID1:
>  		return snprintf(buf, PAGE_SIZE, "Skyhawk-R Adapter Family\n");
> -		break;
>  	default:
>  		return snprintf(buf, PAGE_SIZE,
>  				"Unknown Adapter Family: 0x%x\n", dev_id);
> -		break;
>  	}
>  }
>  
>  /**
>   * beiscsi_phys_port()- Display Physical Port Identifier
> diff --git a/drivers/scsi/bnx2fc/bnx2fc_hwi.c b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
> index 08992095ce7a..b37b0a9ec12d 100644
> --- a/drivers/scsi/bnx2fc/bnx2fc_hwi.c
> +++ b/drivers/scsi/bnx2fc/bnx2fc_hwi.c
> @@ -768,11 +768,10 @@ static void bnx2fc_process_unsol_compl(struct bnx2fc_rport *tgt, u16 wqe)
>  				if (rc)
>  					goto skip_rec;
>  			} else
>  				printk(KERN_ERR PFX "SRR in progress\n");
>  			goto ret_err_rqe;
> -			break;
>  		default:
>  			break;
>  		}
>  
>  skip_rec:
> diff --git a/drivers/scsi/fcoe/fcoe.c b/drivers/scsi/fcoe/fcoe.c
> index 0f9274960dc6..a4be6f439c47 100644
> --- a/drivers/scsi/fcoe/fcoe.c
> +++ b/drivers/scsi/fcoe/fcoe.c
> @@ -1892,11 +1892,10 @@ static int fcoe_device_notification(struct notifier_block *notifier,
>  			fcoe_interface_remove(fcoe);
>  		fcoe_interface_cleanup(fcoe);
>  		mutex_unlock(&fcoe_config_mutex);
>  		fcoe_ctlr_device_delete(fcoe_ctlr_to_ctlr_dev(ctlr));
>  		goto out;
> -		break;
>  	case NETDEV_FEAT_CHANGE:
>  		fcoe_netdev_features_change(lport, netdev);
>  		break;
>  	default:
>  		FCOE_NETDEV_DBG(netdev, "Unknown event %ld "
> diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
> index 83ce4f11a589..45136e3a4efc 100644
> --- a/drivers/scsi/hpsa.c
> +++ b/drivers/scsi/hpsa.c
> @@ -7440,11 +7440,10 @@ static int find_PCI_BAR_index(struct pci_dev *pdev, unsigned long pci_bar_addr)
>  				break;
>  			default:	/* reserved in PCI 2.2 */
>  				dev_warn(&pdev->dev,
>  				       "base address is invalid\n");
>  				return -1;
> -				break;
>  			}
>  		}
>  		if (offset == pci_bar_addr - PCI_BASE_ADDRESS_0)
>  			return i + 1;
>  	}
> diff --git a/drivers/scsi/hptiop.c b/drivers/scsi/hptiop.c
> index 6a2561f26e38..db4c7a7ff4dd 100644
> --- a/drivers/scsi/hptiop.c
> +++ b/drivers/scsi/hptiop.c
> @@ -756,11 +756,10 @@ static void hptiop_finish_scsi_req(struct hptiop_hba *hba, u32 tag,
>  		scsi_set_resid(scp,
>  			scsi_bufflen(scp) - le32_to_cpu(req->dataxfer_length));
>  		scp->result = SAM_STAT_CHECK_CONDITION;
>  		memcpy(scp->sense_buffer, &req->sg_list, SCSI_SENSE_BUFFERSIZE);
>  		goto skip_resid;
> -		break;
>  
>  	default:
>  		scp->result = DRIVER_INVALID << 24 | DID_ABORT << 16;
>  		break;
>  	}
> diff --git a/drivers/scsi/ipr.c b/drivers/scsi/ipr.c
> index b0aa58d117cc..e451102b9a29 100644
> --- a/drivers/scsi/ipr.c
> +++ b/drivers/scsi/ipr.c
> @@ -9485,11 +9485,10 @@ static pci_ers_result_t ipr_pci_error_detected(struct pci_dev *pdev,
>  		ipr_pci_frozen(pdev);
>  		return PCI_ERS_RESULT_CAN_RECOVER;
>  	case pci_channel_io_perm_failure:
>  		ipr_pci_perm_failure(pdev);
>  		return PCI_ERS_RESULT_DISCONNECT;
> -		break;
>  	default:
>  		break;
>  	}
>  	return PCI_ERS_RESULT_NEED_RESET;
>  }
> diff --git a/drivers/scsi/isci/phy.c b/drivers/scsi/isci/phy.c
> index 7041e2e3ab48..1b87d9080ebe 100644
> --- a/drivers/scsi/isci/phy.c
> +++ b/drivers/scsi/isci/phy.c
> @@ -751,11 +751,10 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
>  		       sci_change_state(&iphy->sm, SCI_PHY_STARTING);
>  		       break;
>  		default:
>  			phy_event_warn(iphy, state, event_code);
>  			return SCI_FAILURE;
> -			break;
>  		}
>  		return SCI_SUCCESS;
>  	case SCI_PHY_SUB_AWAIT_IAF_UF:
>  		switch (scu_get_event_code(event_code)) {
>  		case SCU_EVENT_SAS_PHY_DETECTED:
> @@ -956,11 +955,10 @@ enum sci_status sci_phy_event_handler(struct isci_phy *iphy, u32 event_code)
>  			sci_change_state(&iphy->sm, SCI_PHY_STARTING);
>  			break;
>  		default:
>  			phy_event_warn(iphy, state, event_code);
>  			return SCI_FAILURE_INVALID_STATE;
> -			break;
>  		}
>  		return SCI_SUCCESS;
>  	default:
>  		dev_dbg(sciphy_to_dev(iphy), "%s: in wrong state: %s\n",
>  			__func__, phy_state_name(state));
> diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
> index c9a327b13e5c..325081ac6553 100644
> --- a/drivers/scsi/lpfc/lpfc_debugfs.c
> +++ b/drivers/scsi/lpfc/lpfc_debugfs.c
> @@ -3339,11 +3339,10 @@ lpfc_idiag_pcicfg_read(struct file *file, char __user *buf, size_t nbytes,
>  		len += scnprintf(pbuffer+len, LPFC_PCI_CFG_SIZE-len,
>  				"%03x: %08x\n", where, u32val);
>  		break;
>  	case LPFC_PCI_CFG_BROWSE: /* browse all */
>  		goto pcicfg_browse;
> -		break;
>  	default:
>  		/* illegal count */
>  		len = 0;
>  		break;
>  	}
> @@ -4379,11 +4378,11 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
>  					goto pass_check;
>  				}
>  			}
>  		}
>  		goto error_out;
> -		break;
> +
>  	case LPFC_IDIAG_CQ:
>  		/* MBX complete queue */
>  		if (phba->sli4_hba.mbx_cq &&
>  		    phba->sli4_hba.mbx_cq->queue_id == queid) {
>  			/* Sanity check */
> @@ -4431,11 +4430,11 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
>  					goto pass_check;
>  				}
>  			}
>  		}
>  		goto error_out;
> -		break;
> +
>  	case LPFC_IDIAG_MQ:
>  		/* MBX work queue */
>  		if (phba->sli4_hba.mbx_wq &&
>  		    phba->sli4_hba.mbx_wq->queue_id == queid) {
>  			/* Sanity check */
> @@ -4445,11 +4444,11 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
>  				goto error_out;
>  			idiag.ptr_private = phba->sli4_hba.mbx_wq;
>  			goto pass_check;
>  		}
>  		goto error_out;
> -		break;
> +
>  	case LPFC_IDIAG_WQ:
>  		/* ELS work queue */
>  		if (phba->sli4_hba.els_wq &&
>  		    phba->sli4_hba.els_wq->queue_id == queid) {
>  			/* Sanity check */
> @@ -4485,13 +4484,12 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
>  					idiag.ptr_private = qp;
>  					goto pass_check;
>  				}
>  			}
>  		}
> -
>  		goto error_out;
> -		break;
> +
>  	case LPFC_IDIAG_RQ:
>  		/* HDR queue */
>  		if (phba->sli4_hba.hdr_rq &&
>  		    phba->sli4_hba.hdr_rq->queue_id == queid) {
>  			/* Sanity check */
> @@ -4512,14 +4510,12 @@ lpfc_idiag_queacc_write(struct file *file, const char __user *buf,
>  				goto error_out;
>  			idiag.ptr_private = phba->sli4_hba.dat_rq;
>  			goto pass_check;
>  		}
>  		goto error_out;
> -		break;
>  	default:
>  		goto error_out;
> -		break;
>  	}
>  
>  pass_check:
>  
>  	if (idiag.cmd.opcode == LPFC_IDIAG_CMD_QUEACC_RD) {
> diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
> index ca25e54bb782..b6090357e8a5 100644
> --- a/drivers/scsi/lpfc/lpfc_init.c
> +++ b/drivers/scsi/lpfc/lpfc_init.c
> @@ -7194,11 +7194,10 @@ lpfc_init_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
>  	default:
>  		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
>  				"1431 Invalid HBA PCI-device group: 0x%x\n",
>  				dev_grp);
>  		return -ENODEV;
> -		break;
>  	}
>  	return 0;
>  }
>  
>  /**
> diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
> index 983eeb0e3d07..c3b02dab6e5c 100644
> --- a/drivers/scsi/lpfc/lpfc_scsi.c
> +++ b/drivers/scsi/lpfc/lpfc_scsi.c
> @@ -4282,11 +4282,10 @@ lpfc_scsi_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
>  	default:
>  		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
>  				"1418 Invalid HBA PCI-device group: 0x%x\n",
>  				dev_grp);
>  		return -ENODEV;
> -		break;
>  	}
>  	phba->lpfc_rampdown_queue_depth = lpfc_rampdown_queue_depth;
>  	phba->lpfc_scsi_cmd_iocb_cmpl = lpfc_scsi_cmd_iocb_cmpl;
>  	return 0;
>  }
> diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
> index e158cd77d387..0f18f1ba8a28 100644
> --- a/drivers/scsi/lpfc/lpfc_sli.c
> +++ b/drivers/scsi/lpfc/lpfc_sli.c
> @@ -9187,11 +9187,10 @@ lpfc_mbox_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
>  	default:
>  		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
>  				"1420 Invalid HBA PCI-device group: 0x%x\n",
>  				dev_grp);
>  		return -ENODEV;
> -		break;
>  	}
>  	return 0;
>  }
>  
>  /**
> @@ -10070,11 +10069,10 @@ lpfc_sli4_iocb2wqe(struct lpfc_hba *phba, struct lpfc_iocbq *iocbq,
>  	default:
>  		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
>  				"2014 Invalid command 0x%x\n",
>  				iocbq->iocb.ulpCommand);
>  		return IOCB_ERROR;
> -		break;
>  	}
>  
>  	if (iocbq->iocb_flag & LPFC_IO_DIF_PASS)
>  		bf_set(wqe_dif, &wqe->generic.wqe_com, LPFC_WQE_DIF_PASSTHRU);
>  	else if (iocbq->iocb_flag & LPFC_IO_DIF_STRIP)
> @@ -10232,11 +10230,10 @@ lpfc_sli_api_table_setup(struct lpfc_hba *phba, uint8_t dev_grp)
>  	default:
>  		lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT,
>  				"1419 Invalid HBA PCI-device group: 0x%x\n",
>  				dev_grp);
>  		return -ENODEV;
> -		break;
>  	}
>  	phba->lpfc_get_iocb_from_iocbq = lpfc_get_iocb_from_iocbq;
>  	return 0;
>  }
>  
> diff --git a/drivers/scsi/mvumi.c b/drivers/scsi/mvumi.c
> index 0354898d7cac..2f7a52bd653a 100644
> --- a/drivers/scsi/mvumi.c
> +++ b/drivers/scsi/mvumi.c
> @@ -2294,11 +2294,10 @@ static int mvumi_cfg_hw_reg(struct mvumi_hba *mhba)
>  		regs->int_drbl_int_mask     = 0x3FFFFFFF;
>  		regs->int_mu = regs->int_dl_cpu2pciea | regs->int_comaout;
>  		break;
>  	default:
>  		return -1;
> -		break;
>  	}
>  
>  	return 0;
>  }
>  
> diff --git a/drivers/scsi/pcmcia/nsp_cs.c b/drivers/scsi/pcmcia/nsp_cs.c
> index bc5a623519e7..bb3b3884f968 100644
> --- a/drivers/scsi/pcmcia/nsp_cs.c
> +++ b/drivers/scsi/pcmcia/nsp_cs.c
> @@ -1100,12 +1100,10 @@ static irqreturn_t nspintr(int irq, void *dev_id)
>  		nsp_index_write(base, SCSIBUSCTRL, SCSI_ATN);
>  		udelay(1);
>  		nsp_index_write(base, SCSIBUSCTRL, SCSI_ATN | AUTODIRECTION | ACKENB);
>  		return IRQ_HANDLED;
>  
> -		break;
> -
>  	case PH_RESELECT:
>  		//nsp_dbg(NSP_DEBUG_INTR, "phase reselect");
>  		// *sync_neg = SYNC_NOT_YET;
>  		if ((phase & BUSMON_PHASE_MASK) != BUSPHASE_MESSAGE_IN) {
>  
> diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
> index 07afd0d8a8f3..40af7f1524ce 100644
> --- a/drivers/scsi/qla2xxx/qla_mbx.c
> +++ b/drivers/scsi/qla2xxx/qla_mbx.c
> @@ -4028,11 +4028,10 @@ qla24xx_report_id_acquisition(scsi_qla_host_t *vha,
>  
>  			/* if our portname is higher then initiate N2N login */
>  
>  			set_bit(N2N_LOGIN_NEEDED, &vha->dpc_flags);
>  			return;
> -			break;
>  		case TOPO_FL:
>  			ha->current_topology = ISP_CFG_FL;
>  			break;
>  		case TOPO_F:
>  			ha->current_topology = ISP_CFG_F;
> diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
> index e2e5356a997d..43f7624508a9 100644
> --- a/drivers/scsi/st.c
> +++ b/drivers/scsi/st.c
> @@ -2844,11 +2844,10 @@ static int st_int_ioctl(struct scsi_tape *STp, unsigned int cmd_in, unsigned lon
>  		fileno = blkno = at_sm = 0;
>  		break;
>  	case MTNOP:
>  		DEBC_printk(STp, "No op on tape.\n");
>  		return 0;	/* Should do something ? */
> -		break;
>  	case MTRETEN:
>  		cmd[0] = START_STOP;
>  		if (STp->immediate) {
>  			cmd[1] = 1;	/* Don't wait for completion */
>  			timeout = STp->device->request_queue->rq_timeout;
> diff --git a/drivers/scsi/sym53c8xx_2/sym_hipd.c b/drivers/scsi/sym53c8xx_2/sym_hipd.c
> index a9fe092a4906..255a2d48d421 100644
> --- a/drivers/scsi/sym53c8xx_2/sym_hipd.c
> +++ b/drivers/scsi/sym53c8xx_2/sym_hipd.c
> @@ -4594,11 +4594,10 @@ static void sym_int_sir(struct sym_hcb *np)
>  				sym_print_addr(cp->cmd,
>  					"M_REJECT received (%x:%x).\n",
>  					scr_to_cpu(np->lastmsg), np->msgout[0]);
>  			}
>  			goto out_clrack;
> -			break;
>  		default:
>  			goto out_reject;
>  		}
>  		break;
>  	/*
> diff --git a/drivers/staging/media/atomisp/pci/sh_css.c b/drivers/staging/media/atomisp/pci/sh_css.c
> index ddee04c8248d..eab7a048ca7a 100644
> --- a/drivers/staging/media/atomisp/pci/sh_css.c
> +++ b/drivers/staging/media/atomisp/pci/sh_css.c
> @@ -6284,11 +6284,10 @@ allocate_delay_frames(struct ia_css_pipe *pipe) {
>  	case IA_CSS_PIPE_ID_CAPTURE: {
>  		struct ia_css_capture_settings *mycs_capture = &pipe->pipe_settings.capture;
>  		(void)mycs_capture;
>  		return err;
>  	}
> -	break;
>  	case IA_CSS_PIPE_ID_VIDEO: {
>  		struct ia_css_video_settings *mycs_video = &pipe->pipe_settings.video;
>  
>  		ref_info = mycs_video->video_binary.internal_frame_info;
>  		/*The ref frame expects
> diff --git a/drivers/staging/rts5208/rtsx_scsi.c b/drivers/staging/rts5208/rtsx_scsi.c
> index 1deb74112ad4..672248be7bf3 100644
> --- a/drivers/staging/rts5208/rtsx_scsi.c
> +++ b/drivers/staging/rts5208/rtsx_scsi.c
> @@ -569,12 +569,10 @@ static int start_stop_unit(struct scsi_cmnd *srb, struct rtsx_chip *chip)
>  	case LOAD_MEDIUM:
>  		if (check_card_ready(chip, lun))
>  			return TRANSPORT_GOOD;
>  		set_sense_type(chip, lun, SENSE_TYPE_MEDIA_NOT_PRESENT);
>  		return TRANSPORT_FAILED;
> -
> -		break;
>  	}
>  
>  	return TRANSPORT_ERROR;
>  }
>  
> diff --git a/drivers/staging/vme/devices/vme_user.c b/drivers/staging/vme/devices/vme_user.c
> index fd0ea4dbcb91..7c7d3858e6ca 100644
> --- a/drivers/staging/vme/devices/vme_user.c
> +++ b/drivers/staging/vme/devices/vme_user.c
> @@ -355,12 +355,10 @@ static int vme_user_ioctl(struct inode *inode, struct file *file,
>  			 *	to userspace as they are
>  			 */
>  			return vme_master_set(image[minor].resource,
>  				master.enable, master.vme_addr, master.size,
>  				master.aspace, master.cycle, master.dwidth);
> -
> -			break;
>  		}
>  		break;
>  	case SLAVE_MINOR:
>  		switch (cmd) {
>  		case VME_GET_SLAVE:
> @@ -396,12 +394,10 @@ static int vme_user_ioctl(struct inode *inode, struct file *file,
>  			 */
>  			return vme_slave_set(image[minor].resource,
>  				slave.enable, slave.vme_addr, slave.size,
>  				image[minor].pci_buf, slave.aspace,
>  				slave.cycle);
> -
> -			break;
>  		}
>  		break;
>  	}
>  
>  	return -EINVAL;
> diff --git a/drivers/tty/nozomi.c b/drivers/tty/nozomi.c
> index d42b854cb7df..861e95043191 100644
> --- a/drivers/tty/nozomi.c
> +++ b/drivers/tty/nozomi.c
> @@ -412,15 +412,13 @@ static void read_mem32(u32 *buf, const void __iomem *mem_addr_start,
>  	switch (size_bytes) {
>  	case 2:	/* 2 bytes */
>  		buf16 = (u16 *) buf;
>  		*buf16 = __le16_to_cpu(readw(ptr));
>  		goto out;
> -		break;
>  	case 4:	/* 4 bytes */
>  		*(buf) = __le32_to_cpu(readl(ptr));
>  		goto out;
> -		break;
>  	}
>  
>  	while (i < size_bytes) {
>  		if (size_bytes - i == 2) {
>  			/* Handle 2 bytes in the end */
> @@ -458,19 +456,18 @@ static u32 write_mem32(void __iomem *mem_addr_start, const u32 *buf,
>  	switch (size_bytes) {
>  	case 2:	/* 2 bytes */
>  		buf16 = (const u16 *)buf;
>  		writew(__cpu_to_le16(*buf16), ptr);
>  		return 2;
> -		break;
>  	case 1: /*
>  		 * also needs to write 4 bytes in this case
>  		 * so falling through..
>  		 */
> +		fallthrough;
>  	case 4: /* 4 bytes */
>  		writel(__cpu_to_le32(*buf), ptr);
>  		return 4;
> -		break;
>  	}
>  
>  	while (i < size_bytes) {
>  		if (size_bytes - i == 2) {
>  			/* 2 bytes */
> diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
> index 1731d9728865..09703079db7b 100644
> --- a/drivers/tty/serial/imx.c
> +++ b/drivers/tty/serial/imx.c
> @@ -318,31 +318,26 @@ static void imx_uart_writel(struct imx_port *sport, u32 val, u32 offset)
>  static u32 imx_uart_readl(struct imx_port *sport, u32 offset)
>  {
>  	switch (offset) {
>  	case UCR1:
>  		return sport->ucr1;
> -		break;
>  	case UCR2:
>  		/*
>  		 * UCR2_SRST is the only bit in the cached registers that might
>  		 * differ from the value that was last written. As it only
>  		 * automatically becomes one after being cleared, reread
>  		 * conditionally.
>  		 */
>  		if (!(sport->ucr2 & UCR2_SRST))
>  			sport->ucr2 = readl(sport->port.membase + offset);
>  		return sport->ucr2;
> -		break;
>  	case UCR3:
>  		return sport->ucr3;
> -		break;
>  	case UCR4:
>  		return sport->ucr4;
> -		break;
>  	case UFCR:
>  		return sport->ufcr;
> -		break;
>  	default:
>  		return readl(sport->port.membase + offset);
>  	}
>  }
>  
> diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
> index 1125f4715830..5204769834d1 100644
> --- a/drivers/usb/gadget/function/f_hid.c
> +++ b/drivers/usb/gadget/function/f_hid.c
> @@ -509,27 +509,23 @@ static int hidg_setup(struct usb_function *f,
>  		VDBG(cdev, "get_report\n");
>  
>  		/* send an empty report */
>  		length = min_t(unsigned, length, hidg->report_length);
>  		memset(req->buf, 0x0, length);
> -
>  		goto respond;
> -		break;
>  
>  	case ((USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
>  		  | HID_REQ_GET_PROTOCOL):
>  		VDBG(cdev, "get_protocol\n");
>  		length = min_t(unsigned int, length, 1);
>  		((u8 *) req->buf)[0] = hidg->protocol;
>  		goto respond;
> -		break;
>  
>  	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
>  		  | HID_REQ_SET_REPORT):
>  		VDBG(cdev, "set_report | wLength=%d\n", ctrl->wLength);
>  		goto stall;
> -		break;
>  
>  	case ((USB_DIR_OUT | USB_TYPE_CLASS | USB_RECIP_INTERFACE) << 8
>  		  | HID_REQ_SET_PROTOCOL):
>  		VDBG(cdev, "set_protocol\n");
>  		if (value > HID_REPORT_PROTOCOL)
> @@ -542,11 +538,10 @@ static int hidg_setup(struct usb_function *f,
>  		if (hidg->bInterfaceSubClass == USB_INTERFACE_SUBCLASS_BOOT) {
>  			hidg->protocol = value;
>  			goto respond;
>  		}
>  		goto stall;
> -		break;
>  
>  	case ((USB_DIR_IN | USB_TYPE_STANDARD | USB_RECIP_INTERFACE) << 8
>  		  | USB_REQ_GET_DESCRIPTOR):
>  		switch (value >> 8) {
>  		case HID_DT_HID:
> @@ -560,33 +555,29 @@ static int hidg_setup(struct usb_function *f,
>  
>  			length = min_t(unsigned short, length,
>  						   hidg_desc_copy.bLength);
>  			memcpy(req->buf, &hidg_desc_copy, length);
>  			goto respond;
> -			break;
>  		}
>  		case HID_DT_REPORT:
>  			VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: REPORT\n");
>  			length = min_t(unsigned short, length,
>  						   hidg->report_desc_length);
>  			memcpy(req->buf, hidg->report_desc, length);
>  			goto respond;
> -			break;
>  
>  		default:
>  			VDBG(cdev, "Unknown descriptor request 0x%x\n",
>  				 value >> 8);
>  			goto stall;
> -			break;
>  		}
>  		break;
>  
>  	default:
>  		VDBG(cdev, "Unknown request 0x%x\n",
>  			 ctrl->bRequest);
>  		goto stall;
> -		break;
>  	}
>  
>  stall:
>  	return -EOPNOTSUPP;
>  
> diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
> index fe405cd38dbc..b46ef45c4d25 100644
> --- a/drivers/usb/host/xhci-mem.c
> +++ b/drivers/usb/host/xhci-mem.c
> @@ -1142,11 +1142,10 @@ int xhci_setup_addressable_virt_dev(struct xhci_hcd *xhci, struct usb_device *ud
>  		max_packets = MAX_PACKET(8);
>  		break;
>  	case USB_SPEED_WIRELESS:
>  		xhci_dbg(xhci, "FIXME xHCI doesn't support wireless speeds\n");
>  		return -EINVAL;
> -		break;
>  	default:
>  		/* Speed was set earlier, this shouldn't happen. */
>  		return -EINVAL;
>  	}
>  	/* Find the root hub port this device is under */
> diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
> index 70ec29681526..efbd317f2f25 100644
> --- a/drivers/usb/misc/iowarrior.c
> +++ b/drivers/usb/misc/iowarrior.c
> @@ -382,11 +382,10 @@ static ssize_t iowarrior_write(struct file *file,
>  			goto exit;
>  		}
>  		retval = usb_set_report(dev->interface, 2, 0, buf, count);
>  		kfree(buf);
>  		goto exit;
> -		break;
>  	case USB_DEVICE_ID_CODEMERCS_IOW56:
>  	case USB_DEVICE_ID_CODEMERCS_IOW56AM:
>  	case USB_DEVICE_ID_CODEMERCS_IOW28:
>  	case USB_DEVICE_ID_CODEMERCS_IOW28L:
>  	case USB_DEVICE_ID_CODEMERCS_IOW100:
> @@ -452,18 +451,16 @@ static ssize_t iowarrior_write(struct file *file,
>  		}
>  		/* submit was ok */
>  		retval = count;
>  		usb_free_urb(int_out_urb);
>  		goto exit;
> -		break;
>  	default:
>  		/* what do we have here ? An unsupported Product-ID ? */
>  		dev_err(&dev->interface->dev, "%s - not supported for product=0x%x\n",
>  			__func__, dev->product_id);
>  		retval = -EFAULT;
>  		goto exit;
> -		break;
>  	}
>  error:
>  	usb_free_coherent(dev->udev, dev->report_size, buf,
>  			  int_out_urb->transfer_dma);
>  error_no_buffer:
> diff --git a/drivers/usb/serial/iuu_phoenix.c b/drivers/usb/serial/iuu_phoenix.c
> index b4ba79123d9d..f1201d4de297 100644
> --- a/drivers/usb/serial/iuu_phoenix.c
> +++ b/drivers/usb/serial/iuu_phoenix.c
> @@ -848,11 +848,10 @@ static int iuu_uart_baud(struct usb_serial_port *port, u32 baud_base,
>  		dataout[DataCount++] = 0x04;
>  		break;
>  	default:
>  		kfree(dataout);
>  		return IUU_INVALID_PARAMETER;
> -		break;
>  	}
>  
>  	switch (parity & 0xF0) {
>  	case IUU_ONE_STOP_BIT:
>  		dataout[DataCount - 1] |= IUU_ONE_STOP_BIT;
> @@ -862,11 +861,10 @@ static int iuu_uart_baud(struct usb_serial_port *port, u32 baud_base,
>  		dataout[DataCount - 1] |= IUU_TWO_STOP_BITS;
>  		break;
>  	default:
>  		kfree(dataout);
>  		return IUU_INVALID_PARAMETER;
> -		break;
>  	}
>  
>  	status = bulk_immediate(port, dataout, DataCount);
>  	if (status != IUU_OPERATION_OK)
>  		dev_dbg(&port->dev, "%s - uart_off error\n", __func__);
> diff --git a/drivers/usb/storage/freecom.c b/drivers/usb/storage/freecom.c
> index 3d5f7d0ff0f1..2b098b55c4cb 100644
> --- a/drivers/usb/storage/freecom.c
> +++ b/drivers/usb/storage/freecom.c
> @@ -429,11 +429,10 @@ static int freecom_transport(struct scsi_cmnd *srb, struct us_data *us)
>  		/* should never hit here -- filtered in usb.c */
>  		usb_stor_dbg(us, "freecom unimplemented direction: %d\n",
>  			     us->srb->sc_data_direction);
>  		/* Return fail, SCSI seems to handle this better. */
>  		return USB_STOR_TRANSPORT_FAILED;
> -		break;
>  	}
>  
>  	return USB_STOR_TRANSPORT_GOOD;
>  }
>  
> diff --git a/drivers/vme/bridges/vme_tsi148.c b/drivers/vme/bridges/vme_tsi148.c
> index 50ae26977a02..1227ea937059 100644
> --- a/drivers/vme/bridges/vme_tsi148.c
> +++ b/drivers/vme/bridges/vme_tsi148.c
> @@ -504,11 +504,10 @@ static int tsi148_slave_set(struct vme_slave_resource *image, int enabled,
>  		addr |= TSI148_LCSR_ITAT_AS_A64;
>  		break;
>  	default:
>  		dev_err(tsi148_bridge->parent, "Invalid address space\n");
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	/* Convert 64-bit variables to 2x 32-bit variables */
>  	reg_split(vme_base, &vme_base_high, &vme_base_low);
>  
> @@ -993,11 +992,10 @@ static int tsi148_master_set(struct vme_master_resource *image, int enabled,
>  	default:
>  		spin_unlock(&image->lock);
>  		dev_err(tsi148_bridge->parent, "Invalid address space\n");
>  		retval = -EINVAL;
>  		goto err_aspace;
> -		break;
>  	}
>  
>  	temp_ctl &= ~(3<<4);
>  	if (cycle & VME_SUPER)
>  		temp_ctl |= TSI148_LCSR_OTAT_SUP;
> @@ -1501,11 +1499,10 @@ static int tsi148_dma_set_vme_src_attributes(struct device *dev, __be32 *attr,
>  		val |= TSI148_LCSR_DSAT_AMODE_USER4;
>  		break;
>  	default:
>  		dev_err(dev, "Invalid address space\n");
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	if (cycle & VME_SUPER)
>  		val |= TSI148_LCSR_DSAT_SUP;
>  	if (cycle & VME_PROG)
> @@ -1601,11 +1598,10 @@ static int tsi148_dma_set_vme_dest_attributes(struct device *dev, __be32 *attr,
>  		val |= TSI148_LCSR_DDAT_AMODE_USER4;
>  		break;
>  	default:
>  		dev_err(dev, "Invalid address space\n");
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	if (cycle & VME_SUPER)
>  		val |= TSI148_LCSR_DDAT_SUP;
>  	if (cycle & VME_PROG)
> @@ -1699,11 +1695,10 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
>  		break;
>  	default:
>  		dev_err(tsi148_bridge->parent, "Invalid source type\n");
>  		retval = -EINVAL;
>  		goto err_source;
> -		break;
>  	}
>  
>  	/* Assume last link - this will be over-written by adding another */
>  	entry->descriptor.dnlau = cpu_to_be32(0);
>  	entry->descriptor.dnlal = cpu_to_be32(TSI148_LCSR_DNLAL_LLA);
> @@ -1736,11 +1731,10 @@ static int tsi148_dma_list_add(struct vme_dma_list *list,
>  		break;
>  	default:
>  		dev_err(tsi148_bridge->parent, "Invalid destination type\n");
>  		retval = -EINVAL;
>  		goto err_dest;
> -		break;
>  	}
>  
>  	/* Fill out count */
>  	entry->descriptor.dcnt = cpu_to_be32((u32)count);
>  
> @@ -1962,11 +1956,10 @@ static int tsi148_lm_set(struct vme_lm_resource *lm, unsigned long long lm_base,
>  		break;
>  	default:
>  		mutex_unlock(&lm->mtx);
>  		dev_err(tsi148_bridge->parent, "Invalid address space\n");
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	if (cycle & VME_SUPER)
>  		lm_ctl |= TSI148_LCSR_LMAT_SUPR ;
>  	if (cycle & VME_USER)
> diff --git a/drivers/vme/vme.c b/drivers/vme/vme.c
> index b398293980b6..e1a940e43327 100644
> --- a/drivers/vme/vme.c
> +++ b/drivers/vme/vme.c
> @@ -50,27 +50,22 @@ static struct vme_bridge *find_bridge(struct vme_resource *resource)
>  	/* Get list to search */
>  	switch (resource->type) {
>  	case VME_MASTER:
>  		return list_entry(resource->entry, struct vme_master_resource,
>  			list)->parent;
> -		break;
>  	case VME_SLAVE:
>  		return list_entry(resource->entry, struct vme_slave_resource,
>  			list)->parent;
> -		break;
>  	case VME_DMA:
>  		return list_entry(resource->entry, struct vme_dma_resource,
>  			list)->parent;
> -		break;
>  	case VME_LM:
>  		return list_entry(resource->entry, struct vme_lm_resource,
>  			list)->parent;
> -		break;
>  	default:
>  		printk(KERN_ERR "Unknown resource type\n");
>  		return NULL;
> -		break;
>  	}
>  }
>  
>  /**
>   * vme_free_consistent - Allocate contiguous memory.
> @@ -177,26 +172,22 @@ size_t vme_get_size(struct vme_resource *resource)
>  			&aspace, &cycle, &dwidth);
>  		if (retval)
>  			return 0;
>  
>  		return size;
> -		break;
>  	case VME_SLAVE:
>  		retval = vme_slave_get(resource, &enabled, &base, &size,
>  			&buf_base, &aspace, &cycle);
>  		if (retval)
>  			return 0;
>  
>  		return size;
> -		break;
>  	case VME_DMA:
>  		return 0;
> -		break;
>  	default:
>  		printk(KERN_ERR "Unknown resource type\n");
>  		return 0;
> -		break;
>  	}
>  }
>  EXPORT_SYMBOL(vme_get_size);
>  
>  int vme_check_window(u32 aspace, unsigned long long vme_base,
> diff --git a/drivers/watchdog/geodewdt.c b/drivers/watchdog/geodewdt.c
> index 83418924e30a..0b699c783d57 100644
> --- a/drivers/watchdog/geodewdt.c
> +++ b/drivers/watchdog/geodewdt.c
> @@ -148,12 +148,10 @@ static long geodewdt_ioctl(struct file *file, unsigned int cmd,
>  
>  	switch (cmd) {
>  	case WDIOC_GETSUPPORT:
>  		return copy_to_user(argp, &ident,
>  				    sizeof(ident)) ? -EFAULT : 0;
> -		break;
> -
>  	case WDIOC_GETSTATUS:
>  	case WDIOC_GETBOOTSTATUS:
>  		return put_user(0, p);
>  
>  	case WDIOC_SETOPTIONS:
> diff --git a/fs/efs/inode.c b/fs/efs/inode.c
> index 89e73a6f0d36..64f3a54a0f72 100644
> --- a/fs/efs/inode.c
> +++ b/fs/efs/inode.c
> @@ -161,11 +161,10 @@ struct inode *efs_iget(struct super_block *super, unsigned long ino)
>  			init_special_inode(inode, inode->i_mode, device);
>  			break;
>  		default:
>  			pr_warn("unsupported inode mode %o\n", inode->i_mode);
>  			goto read_inode_error;
> -			break;
>  	}
>  
>  	unlock_new_inode(inode);
>  	return inode;
>          
> diff --git a/fs/ocfs2/cluster/tcp.c b/fs/ocfs2/cluster/tcp.c
> index 79a231719460..3bd8119bed5e 100644
> --- a/fs/ocfs2/cluster/tcp.c
> +++ b/fs/ocfs2/cluster/tcp.c
> @@ -1196,11 +1196,10 @@ static int o2net_process_message(struct o2net_sock_container *sc,
>  			break;
>  		default:
>  			msglog(hdr, "bad magic\n");
>  			ret = -EINVAL;
>  			goto out;
> -			break;
>  	}
>  
>  	/* find a handler for it */
>  	handler_status = 0;
>  	nmh = o2net_handler_get(be16_to_cpu(hdr->msg_type),
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 1110ecd7d1f3..8f50c9c19f1b 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -2911,11 +2911,10 @@ attach_type_to_prog_type(enum bpf_attach_type attach_type)
>  {
>  	switch (attach_type) {
>  	case BPF_CGROUP_INET_INGRESS:
>  	case BPF_CGROUP_INET_EGRESS:
>  		return BPF_PROG_TYPE_CGROUP_SKB;
> -		break;
>  	case BPF_CGROUP_INET_SOCK_CREATE:
>  	case BPF_CGROUP_INET_SOCK_RELEASE:
>  	case BPF_CGROUP_INET4_POST_BIND:
>  	case BPF_CGROUP_INET6_POST_BIND:
>  		return BPF_PROG_TYPE_CGROUP_SOCK;
> diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
> index 3dd8c2e4314e..f400a6122b3c 100644
> --- a/security/integrity/ima/ima_appraise.c
> +++ b/security/integrity/ima/ima_appraise.c
> @@ -179,11 +179,10 @@ enum hash_algo ima_get_hash_algo(struct evm_ima_xattr_data *xattr_value,
>  	case EVM_IMA_XATTR_DIGSIG:
>  		sig = (typeof(sig))xattr_value;
>  		if (sig->version != 2 || xattr_len <= sizeof(*sig))
>  			return ima_hash_algo;
>  		return sig->hash_algo;
> -		break;
>  	case IMA_XATTR_DIGEST_NG:
>  		/* first byte contains algorithm id */
>  		ret = xattr_value->data[0];
>  		if (ret < HASH_ALGO__LAST)
>  			return ret;
> diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
> index b9fe02e5f84f..eddc9477d42a 100644
> --- a/security/keys/trusted-keys/trusted_tpm1.c
> +++ b/security/keys/trusted-keys/trusted_tpm1.c
> @@ -899,11 +899,10 @@ static int datablob_parse(char *datablob, struct trusted_key_payload *p,
>  			return ret;
>  		ret = Opt_update;
>  		break;
>  	case Opt_err:
>  		return -EINVAL;
> -		break;
>  	}
>  	return ret;
>  }
>  
>  static struct trusted_key_options *trusted_options_alloc(void)
> diff --git a/security/safesetid/lsm.c b/security/safesetid/lsm.c
> index 8a176b6adbe5..1079c6d54784 100644
> --- a/security/safesetid/lsm.c
> +++ b/security/safesetid/lsm.c
> @@ -123,11 +123,10 @@ static int safesetid_security_capable(const struct cred *cred,
>  		 * set*uid() (e.g. setting up userns uid mappings).
>  		 */
>  		pr_warn("Operation requires CAP_SETUID, which is not available to UID %u for operations besides approved set*uid transitions\n",
>  			__kuid_val(cred->uid));
>  		return -EPERM;
> -		break;
>  	case CAP_SETGID:
>  		/*
>  		* If no policy applies to this task, allow the use of CAP_SETGID for
>  		* other purposes.
>  		*/
> @@ -138,15 +137,13 @@ static int safesetid_security_capable(const struct cred *cred,
>  		 * set*gid() (e.g. setting up userns gid mappings).
>  		 */
>  		pr_warn("Operation requires CAP_SETGID, which is not available to GID %u for operations besides approved set*gid transitions\n",
>  			__kuid_val(cred->uid));
>  		return -EPERM;
> -		break;
>  	default:
>  		/* Error, the only capabilities were checking for is CAP_SETUID/GID */
>  		return 0;
> -		break;
>  	}
>  	return 0;
>  }
>  
>  /*
> diff --git a/sound/pci/rme32.c b/sound/pci/rme32.c
> index 869af8a32c98..4eabece4dcba 100644
> --- a/sound/pci/rme32.c
> +++ b/sound/pci/rme32.c
> @@ -466,11 +466,10 @@ static int snd_rme32_capture_getrate(struct rme32 * rme32, int *is_adat)
>  			return 44100;
>  		case 7:
>  			return 32000;
>  		default:
>  			return -1;
> -			break;
>  		} 
>  	else
>  		switch (n) {	/* supporting the CS8412 */
>  		case 0:
>  			return -1;
> diff --git a/sound/pci/rme9652/hdspm.c b/sound/pci/rme9652/hdspm.c
> index 4a1f576dd9cf..3382c069fd3d 100644
> --- a/sound/pci/rme9652/hdspm.c
> +++ b/sound/pci/rme9652/hdspm.c
> @@ -2284,11 +2284,10 @@ static int hdspm_get_wc_sample_rate(struct hdspm *hdspm)
>  	switch (hdspm->io_type) {
>  	case RayDAT:
>  	case AIO:
>  		status = hdspm_read(hdspm, HDSPM_RD_STATUS_1);
>  		return (status >> 16) & 0xF;
> -		break;
>  	case AES32:
>  		status = hdspm_read(hdspm, HDSPM_statusRegister);
>  		return (status >> HDSPM_AES32_wcFreq_bit) & 0xF;
>  	default:
>  		break;
> @@ -2310,11 +2309,10 @@ static int hdspm_get_tco_sample_rate(struct hdspm *hdspm)
>  		switch (hdspm->io_type) {
>  		case RayDAT:
>  		case AIO:
>  			status = hdspm_read(hdspm, HDSPM_RD_STATUS_1);
>  			return (status >> 20) & 0xF;
> -			break;
>  		case AES32:
>  			status = hdspm_read(hdspm, HDSPM_statusRegister);
>  			return (status >> 1) & 0xF;
>  		default:
>  			break;
> @@ -2336,11 +2334,10 @@ static int hdspm_get_sync_in_sample_rate(struct hdspm *hdspm)
>  		switch (hdspm->io_type) {
>  		case RayDAT:
>  		case AIO:
>  			status = hdspm_read(hdspm, HDSPM_RD_STATUS_2);
>  			return (status >> 12) & 0xF;
> -			break;
>  		default:
>  			break;
>  		}
>  	}
>  
> @@ -2356,11 +2353,10 @@ static int hdspm_get_aes_sample_rate(struct hdspm *hdspm, int index)
>  
>  	switch (hdspm->io_type) {
>  	case AES32:
>  		timecode = hdspm_read(hdspm, HDSPM_timecodeRegister);
>  		return (timecode >> (4*index)) & 0xF;
> -		break;
>  	default:
>  		break;
>  	}
>  	return 0;
>  }
> @@ -3843,22 +3839,20 @@ static int hdspm_wc_sync_check(struct hdspm *hdspm)
>  				return 2;
>  			else
>  				return 1;
>  		}
>  		return 0;
> -		break;
>  
>  	case MADI:
>  		status2 = hdspm_read(hdspm, HDSPM_statusRegister2);
>  		if (status2 & HDSPM_wcLock) {
>  			if (status2 & HDSPM_wcSync)
>  				return 2;
>  			else
>  				return 1;
>  		}
>  		return 0;
> -		break;
>  
>  	case RayDAT:
>  	case AIO:
>  		status = hdspm_read(hdspm, HDSPM_statusRegister);
>  
> @@ -3866,12 +3860,10 @@ static int hdspm_wc_sync_check(struct hdspm *hdspm)
>  			return 2;
>  		else if (status & 0x1000000)
>  			return 1;
>  		return 0;
>  
> -		break;
> -
>  	case MADIface:
>  		break;
>  	}
>  
>  
> diff --git a/sound/pci/rme9652/rme9652.c b/sound/pci/rme9652/rme9652.c
> index 7ab10028d9fa..012fbec5e6a7 100644
> --- a/sound/pci/rme9652/rme9652.c
> +++ b/sound/pci/rme9652/rme9652.c
> @@ -730,38 +730,31 @@ static inline int rme9652_spdif_sample_rate(struct snd_rme9652 *s)
>  	rate_bits = rme9652_read(s, RME9652_status_register) & RME9652_F;
>  
>  	switch (rme9652_decode_spdif_rate(rate_bits)) {
>  	case 0x7:
>  		return 32000;
> -		break;
>  
>  	case 0x6:
>  		return 44100;
> -		break;
>  
>  	case 0x5:
>  		return 48000;
> -		break;
>  
>  	case 0x4:
>  		return 88200;
> -		break;
>  
>  	case 0x3:
>  		return 96000;
> -		break;
>  
>  	case 0x0:
>  		return 64000;
> -		break;
>  
>  	default:
>  		dev_err(s->card->dev,
>  			"%s: unknown S/PDIF input rate (bits = 0x%x)\n",
>  			   s->card_name, rate_bits);
>  		return 0;
> -		break;
>  	}
>  }
>  
>  /*-----------------------------------------------------------------------------
>    Control Interface
> diff --git a/sound/soc/codecs/wcd-clsh-v2.c b/sound/soc/codecs/wcd-clsh-v2.c
> index 1be82113c59a..817d8259758c 100644
> --- a/sound/soc/codecs/wcd-clsh-v2.c
> +++ b/sound/soc/codecs/wcd-clsh-v2.c
> @@ -478,11 +478,10 @@ static int _wcd_clsh_ctrl_set_state(struct wcd_clsh_ctrl *ctrl, int req_state,
>  		wcd_clsh_state_hph_l(ctrl, req_state, is_enable, mode);
>  		break;
>  	case WCD_CLSH_STATE_HPHR:
>  		wcd_clsh_state_hph_r(ctrl, req_state, is_enable, mode);
>  		break;
> -		break;
>  	case WCD_CLSH_STATE_LO:
>  		wcd_clsh_state_lo(ctrl, req_state, is_enable, mode);
>  		break;
>  	default:
>  		break;
> diff --git a/sound/soc/codecs/wl1273.c b/sound/soc/codecs/wl1273.c
> index c56b9329240f..d8ced4559bf2 100644
> --- a/sound/soc/codecs/wl1273.c
> +++ b/sound/soc/codecs/wl1273.c
> @@ -309,11 +309,10 @@ static int wl1273_startup(struct snd_pcm_substream *substream,
>  			return -EINVAL;
>  		}
>  		break;
>  	default:
>  		return -EINVAL;
> -		break;
>  	}
>  
>  	return 0;
>  }
>  
> diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
> index bbe8d782e0af..b1ca64d2f7ea 100644
> --- a/sound/soc/intel/skylake/skl-pcm.c
> +++ b/sound/soc/intel/skylake/skl-pcm.c
> @@ -500,11 +500,10 @@ static int skl_pcm_trigger(struct snd_pcm_substream *substream, int cmd,
>  		 */
>  		ret = skl_decoupled_trigger(substream, cmd);
>  		if (ret < 0)
>  			return ret;
>  		return skl_run_pipe(skl, mconfig->pipe);
> -		break;
>  
>  	case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
>  	case SNDRV_PCM_TRIGGER_SUSPEND:
>  	case SNDRV_PCM_TRIGGER_STOP:
>  		/*
> diff --git a/sound/soc/ti/davinci-mcasp.c b/sound/soc/ti/davinci-mcasp.c
> index a6b72ad53b43..2d85cc4c67fb 100644
> --- a/sound/soc/ti/davinci-mcasp.c
> +++ b/sound/soc/ti/davinci-mcasp.c
> @@ -2383,11 +2383,10 @@ static int davinci_mcasp_probe(struct platform_device *pdev)
>  		break;
>  	default:
>  		dev_err(&pdev->dev, "No DMA controller found (%d)\n", ret);
>  	case -EPROBE_DEFER:
>  		goto err;
> -		break;
>  	}
>  
>  	if (ret) {
>  		dev_err(&pdev->dev, "register PCM failed: %d\n", ret);
>  		goto err;
> 



From xen-devel-bounces@lists.xenproject.org Sun Oct 18 09:48:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 09:48:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8508.22690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU5Ie-0001MD-Cu; Sun, 18 Oct 2020 09:48:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8508.22690; Sun, 18 Oct 2020 09:48:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU5Ie-0001M6-A4; Sun, 18 Oct 2020 09:48:24 +0000
Received: by outflank-mailman (input) for mailman id 8508;
 Sun, 18 Oct 2020 09:48:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kU5Ic-0001M1-Tq
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 09:48:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22b7add6-75fd-4c71-a016-5be085824d96;
 Sun, 18 Oct 2020 09:48:21 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU5Ib-0006xw-Cu; Sun, 18 Oct 2020 09:48:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU5Ib-00086Y-3E; Sun, 18 Oct 2020 09:48:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kU5Ib-0006VK-2i; Sun, 18 Oct 2020 09:48:21 +0000
X-Inumbo-ID: 22b7add6-75fd-4c71-a016-5be085824d96
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5hp/k5C1vG6bt5mHJUbOpM68uXteZ6OCQoYabMWDrYI=; b=u4U1GN6hxR+syyHYhdNTCTKx67
	XTkhqI3Kaykj+b8sZVxLp9+GTKZxXII4YvLb0xGgZy/RoRKDrWwkdOrBCNTGBRwSxDn9L928OGmOQ
	y3l4jT/7S7bl4DnHGVM+ghgAxDB30y6rQQhm8cq0IGXM5lBx6Gck61orpdFZ5QHCHfog=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155955-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 155955: all pass - PUSHED
X-Osstest-Versions-This:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
X-Osstest-Versions-That:
    xen=25849c8b16f2a5b7fcd0a823e80a5f1b590291f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 09:48:21 +0000

flight 155955 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155955/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc
baseline version:
 xen                  25849c8b16f2a5b7fcd0a823e80a5f1b590291f9

Last test of basis   155687  2020-10-11 09:19:29 Z    7 days
Testing same since   155955  2020-10-18 09:19:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Chen Yu <yu.c.chen@intel.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Michal Orzel <michal.orzel@arm.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Trammell Hudson <hudson@trmm.net>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   25849c8b16..0dfddb2116  0dfddb2116e3757f77a691a3fe335173088d69dc -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 11:05:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 11:05:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8512.22705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU6Up-0008Ao-4I; Sun, 18 Oct 2020 11:05:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8512.22705; Sun, 18 Oct 2020 11:05:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU6Up-0008Ah-1K; Sun, 18 Oct 2020 11:05:03 +0000
Received: by outflank-mailman (input) for mailman id 8512;
 Sun, 18 Oct 2020 11:05:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kU6Un-0008Ac-Ph
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 11:05:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c90927b6-7baa-4a32-b6fc-ab82ad491c3c;
 Sun, 18 Oct 2020 11:05:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU6Ul-00009a-WF; Sun, 18 Oct 2020 11:05:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU6Ul-0003J2-Mc; Sun, 18 Oct 2020 11:04:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kU6Ul-0004SS-M5; Sun, 18 Oct 2020 11:04:59 +0000
X-Inumbo-ID: c90927b6-7baa-4a32-b6fc-ab82ad491c3c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XPyAnAJnardDVqqHXppcGzWDhFKW+0FiMdnz4rao4dU=; b=t561U7w9ebM9V1UqcWXyrkDj5X
	ggzeBYKWt5kXe/uQkOm2uheytBGvshtGHSnV0xcKTNrXSolBId3gleHWuuSr1giEsSUlgfiBO8s+e
	M+FlG9OKXVDtU7gFnzXNdtkNbxKAOsSyo6QWv7PbbLmaCihzA3FcMbkm8ZE7DlbiEFIs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155942-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155942: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=73e3cb6c7eea4f5db81c87574dcefe1282de4772
X-Osstest-Versions-That:
    ovmf=30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 11:04:59 +0000

flight 155942 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155942/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 73e3cb6c7eea4f5db81c87574dcefe1282de4772
baseline version:
 ovmf                 30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1

Last test of basis   155908  2020-10-16 18:15:11 Z    1 days
Testing same since   155942  2020-10-18 01:09:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Matthew Carlson <macarl@microsoft.com>
  Matthew Carlson <matthewfcarlson@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   30f0ec8d80..73e3cb6c7e  73e3cb6c7eea4f5db81c87574dcefe1282de4772 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 11:28:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 11:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8517.22724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU6rc-0001aM-4D; Sun, 18 Oct 2020 11:28:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8517.22724; Sun, 18 Oct 2020 11:28:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU6rc-0001aF-0e; Sun, 18 Oct 2020 11:28:36 +0000
Received: by outflank-mailman (input) for mailman id 8517;
 Sun, 18 Oct 2020 11:28:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kU6ra-0001Zg-68
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 11:28:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fe38b430-c72b-4725-a3aa-ce329174fbd6;
 Sun, 18 Oct 2020 11:28:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU6rR-0000db-QE; Sun, 18 Oct 2020 11:28:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU6rR-0004qG-Hz; Sun, 18 Oct 2020 11:28:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kU6rR-0008Cf-HD; Sun, 18 Oct 2020 11:28:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kU6ra-0001Zg-68
	for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 11:28:34 +0000
X-Inumbo-ID: fe38b430-c72b-4725-a3aa-ce329174fbd6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id fe38b430-c72b-4725-a3aa-ce329174fbd6;
	Sun, 18 Oct 2020 11:28:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X/Y75ZgiuXGw3uZNWQF7DJNCnY4YjKGDOEmnNL1gkQM=; b=N8XHVYKQb1bhQxlSAX4N58F3Y1
	ffrsEvv4St/8uiQk3NfWUat0qmelqrKAvdDtvzUB5G1Hn2Gb0BGeHx46NKF+6sAx0UDDpXNjHx/gO
	CGqgbmcxIwUy0lyJheztTHPx92u24t5e4MVhZKCjT1LPf20NQLwDDDfIq9PiNmNVOfbc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kU6rR-0000db-QE; Sun, 18 Oct 2020 11:28:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kU6rR-0004qG-Hz; Sun, 18 Oct 2020 11:28:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kU6rR-0008Cf-HD; Sun, 18 Oct 2020 11:28:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155948-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155948: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=6a0e0dc7ba8a62035fb1693e0c91bb53214ec41f
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 11:28:25 +0000

flight 155948 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155948/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6a0e0dc7ba8a62035fb1693e0c91bb53214ec41f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  100 days
Failing since        151818  2020-07-11 04:18:52 Z   99 days   94 attempts
Testing same since   155885  2020-10-16 04:19:10 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 21456 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 14:05:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 14:05:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8527.22747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU9Iz-0006a4-R1; Sun, 18 Oct 2020 14:05:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8527.22747; Sun, 18 Oct 2020 14:05:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU9Iz-0006Zx-O4; Sun, 18 Oct 2020 14:05:01 +0000
Received: by outflank-mailman (input) for mailman id 8527;
 Sun, 18 Oct 2020 14:05:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lyNC=DZ=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kU9Iy-0006ZE-3f
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 14:05:00 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 5ec35790-cc60-4e6a-ac46-d97376614ff4;
 Sun, 18 Oct 2020 14:04:59 +0000 (UTC)
Received: from mail-qk1-f197.google.com (mail-qk1-f197.google.com
 [209.85.222.197]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-380-QBBG1ZpeOSuxXa8F5RZ81A-1; Sun, 18 Oct 2020 10:04:56 -0400
Received: by mail-qk1-f197.google.com with SMTP id q15so5461309qkq.23
 for <xen-devel@lists.xenproject.org>; Sun, 18 Oct 2020 07:04:56 -0700 (PDT)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id u16sm3288927qth.42.2020.10.18.07.04.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 18 Oct 2020 07:04:55 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lyNC=DZ=redhat.com=trix@srs-us1.protection.inumbo.net>)
	id 1kU9Iy-0006ZE-3f
	for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 14:05:00 +0000
X-Inumbo-ID: 5ec35790-cc60-4e6a-ac46-d97376614ff4
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 5ec35790-cc60-4e6a-ac46-d97376614ff4;
	Sun, 18 Oct 2020 14:04:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603029898;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kBVMPnQfqrYuRmZ0mFLSkNqYWCaxPfshC6ZYuKW12NQ=;
	b=CE3TTm0ZWS3xUbYzjwLA7+TYTk7Dg+VxIL4Zp73oDsqTuzAXXNjBX0Doq5M5Cu6ffXkYev
	bZ3C9BpL47kMqRFuXYSmeuVfmCYD0R8cyybmNyMbwair6n3qDaCsRQ2KiiaxG8Fe9Zs8sm
	6xUrramWNvo4ZIP3aUKUh3vEKSsfIgI=
Received: from mail-qk1-f197.google.com (mail-qk1-f197.google.com
 [209.85.222.197]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-380-QBBG1ZpeOSuxXa8F5RZ81A-1; Sun, 18 Oct 2020 10:04:56 -0400
X-MC-Unique: QBBG1ZpeOSuxXa8F5RZ81A-1
Received: by mail-qk1-f197.google.com with SMTP id q15so5461309qkq.23
        for <xen-devel@lists.xenproject.org>; Sun, 18 Oct 2020 07:04:56 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=kBVMPnQfqrYuRmZ0mFLSkNqYWCaxPfshC6ZYuKW12NQ=;
        b=qI9fMTbIoZJtZhqkAiOhCiT9P2l7dw+TqXX0QyAQ+Bg80qofXCXyML8bEW95eQUQfz
         ck6Y7AMqw3AcEh8rkKQ79YorXwV7utsiYLELD+JcQkmGI78Kt8JvqMBSKLn6CHd4KWLJ
         P+o7v3MSrzFcElJcU9UpXio4VtC/KBibIvDyMdOiH1h7vBfmz2Mq82e0GMZrJa9QbG51
         Q7wuI22vuOSXvTkeJvlkgTohBWpVPLP8RzerhuRB79Qv/3H3ELFT/l7c00V5lVMOumC9
         ChXEifYMdcjxVPdP6gWsmcH3uDeOT7hcmtb/kLjL9mWTZh5WPsz4YEARI0y7A4Puw/Xj
         9wbA==
X-Gm-Message-State: AOAM532P0GeWshsd8nDhnjwCqCLwGMI89vJwBCoPqEc7xIZ8pfjyXXwU
	MttYx6ALa5aYDXtjFc4EpJlmNeF2OKfelb+NEnu2SYI9WdHu8w3bLRve6Wf0cplDiUpx6MEWGeE
	zPq59ZLMYn/54FoWjGOc1wc1FhnU=
X-Received: by 2002:a05:620a:1287:: with SMTP id w7mr12724307qki.436.1603029896365;
        Sun, 18 Oct 2020 07:04:56 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJy6z5rS79nBj0nCbIqVRZ9qmkAzjArqewdITB0rtwnhi1UUe/kvxLZTENMJDRITA4iBlrUAlw==
X-Received: by 2002:a05:620a:1287:: with SMTP id w7mr12724258qki.436.1603029896034;
        Sun, 18 Oct 2020 07:04:56 -0700 (PDT)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com. [75.142.250.213])
        by smtp.gmail.com with ESMTPSA id u16sm3288927qth.42.2020.10.18.07.04.49
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Sun, 18 Oct 2020 07:04:55 -0700 (PDT)
Subject: Re: [RFC] treewide: cleanup unreachable breaks
To: Greg KH <gregkh@linuxfoundation.org>
Cc: linux-kernel@vger.kernel.org, linux-edac@vger.kernel.org,
 linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 openipmi-developer@lists.sourceforge.net, linux-crypto@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-power@fi.rohmeurope.com,
 linux-gpio@vger.kernel.org, amd-gfx@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org,
 spice-devel@lists.freedesktop.org, linux-iio@vger.kernel.org,
 linux-amlogic@lists.infradead.org, industrypack-devel@lists.sourceforge.net,
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-can@vger.kernel.org, netdev@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, ath10k@lists.infradead.org,
 linux-wireless@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
 linux-nfc@lists.01.org, linux-nvdimm@lists.01.org,
 linux-pci@vger.kernel.org, linux-samsung-soc@vger.kernel.org,
 platform-driver-x86@vger.kernel.org, patches@opensource.cirrus.com,
 storagedev@microchip.com, devel@driverdev.osuosl.org,
 linux-serial@vger.kernel.org, linux-usb@vger.kernel.org,
 usb-storage@lists.one-eyed-alien.net, linux-watchdog@vger.kernel.org,
 ocfs2-devel@oss.oracle.com, bpf@vger.kernel.org,
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org,
 keyrings@vger.kernel.org, alsa-devel@alsa-project.org,
 clang-built-linux@googlegroups.com
References: <20201017160928.12698-1-trix@redhat.com>
 <20201018054332.GB593954@kroah.com>
From: Tom Rix <trix@redhat.com>
Message-ID: <eecb7c3e-88b2-ec2f-0235-280da51ae69c@redhat.com>
Date: Sun, 18 Oct 2020 07:04:49 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <20201018054332.GB593954@kroah.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 10/17/20 10:43 PM, Greg KH wrote:
> On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
>> From: Tom Rix <trix@redhat.com>
>>
>> This is an upcoming change to clean up a new warning treewide.
>> I am wondering whether the change should go in as one mega patch (see
>> below), as one patch per file (about 100 patches), or somewhere in
>> between by collecting early acks.
> Please break it up into one-patch-per-subsystem, like normal, and get it
> merged that way.

OK.

Thanks,

Tom

>
> Sending us a patch, without even a diffstat to review, isn't going to
> get you very far...
>
> thanks,
>
> greg k-h
>



From xen-devel-bounces@lists.xenproject.org Sun Oct 18 14:46:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 14:46:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8537.22787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU9xO-0001iA-1g; Sun, 18 Oct 2020 14:46:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8537.22787; Sun, 18 Oct 2020 14:46:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kU9xN-0001i3-Ul; Sun, 18 Oct 2020 14:46:45 +0000
Received: by outflank-mailman (input) for mailman id 8537;
 Sun, 18 Oct 2020 14:46:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kU9xM-0001hV-Lf
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 14:46:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56239418-610b-4b3d-bc94-ea6f831c17eb;
 Sun, 18 Oct 2020 14:46:37 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU9xE-0004lG-OM; Sun, 18 Oct 2020 14:46:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kU9xE-0008T3-D7; Sun, 18 Oct 2020 14:46:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kU9xE-0006ZA-Cd; Sun, 18 Oct 2020 14:46:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kU9xM-0001hV-Lf
	for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 14:46:44 +0000
X-Inumbo-ID: 56239418-610b-4b3d-bc94-ea6f831c17eb
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 56239418-610b-4b3d-bc94-ea6f831c17eb;
	Sun, 18 Oct 2020 14:46:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MptqMdXReT/F6q8JQYbKjYNauNnn0nf+UnE7UAlwqpA=; b=Rsu2JKr4JPRmD3hAUcLbfis4qK
	FRhwItU+85hA+kTdjwRC5hUuXBwgxKJF9rvdznNOPI0D4P3VsptBJWPGQkj16IvV7HNC6T5tMgL0/
	aLneCIkMyyEC19Jx9P7JpKjcOeUEskdlLpsSI8BzAdA1UXoOqcslwSwoIVMUDF4HFUTY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kU9xE-0004lG-OM; Sun, 18 Oct 2020 14:46:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kU9xE-0008T3-D7; Sun, 18 Oct 2020 14:46:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kU9xE-0006ZA-Cd; Sun, 18 Oct 2020 14:46:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155937-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155937: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-libvirt-vhd:debian-di-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:debian-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
X-Osstest-Versions-That:
    xen=6ee2e66674f36b6d27a95f4ddf27226905cc63a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 14:46:36 +0000

flight 155937 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155937/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 12 debian-di-install fail in 155921 pass in 155937
 test-amd64-amd64-xl-pvhv2-intel 12 debian-install          fail pass in 155921
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 155921

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155894
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155894
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155894
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155894
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155894
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155894
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155894
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155894
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155894
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155894
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155894
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc
baseline version:
 xen                  6ee2e66674f36b6d27a95f4ddf27226905cc63a4

Last test of basis   155894  2020-10-16 11:23:43 Z    2 days
Testing same since   155921  2020-10-17 04:28:03 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              fail    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6ee2e66674..0dfddb2116  0dfddb2116e3757f77a691a3fe335173088d69dc -> master


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 16:10:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 16:10:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8542.22805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUBFy-0000is-E5; Sun, 18 Oct 2020 16:10:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8542.22805; Sun, 18 Oct 2020 16:10:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUBFy-0000iH-99; Sun, 18 Oct 2020 16:10:02 +0000
Received: by outflank-mailman (input) for mailman id 8542;
 Sun, 18 Oct 2020 16:10:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUBFx-0000dK-0R
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 16:10:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7351642-4387-420c-ac2b-3a204a63cceb;
 Sun, 18 Oct 2020 16:09:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUBFs-0006zo-Vz; Sun, 18 Oct 2020 16:09:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUBFs-0004db-Nk; Sun, 18 Oct 2020 16:09:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUBFs-0002ke-NB; Sun, 18 Oct 2020 16:09:56 +0000
X-Inumbo-ID: d7351642-4387-420c-ac2b-3a204a63cceb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KEYwvGWxEMm5069XmHGea2xEGRr8Kq2yGksCAPGPphA=; b=JoZrqdHkYdSn0Qr/wHx5KwBkSv
	DAxTOwHlGUNvppMb6Gm9lYEpM8rfpXZpyVhIRRQwdBAV3YB0oRBbIKmIHgRXVaFkSvjDufzW3jkyt
	jxCw/IJVrCZZhetx16+Jf4N1DO7TB4YoDjsxm9kNst2slTSQr6PePXuuMX9pRP1b68Co=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155945-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 155945: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    linux-5.4:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=52f6ded2a377ac4f191c84182488e454b1386239
X-Osstest-Versions-That:
    linux=85b0841aab15c12948af951d477183ab3df7de14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 16:09:56 +0000

flight 155945 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155945/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 155815

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start      fail in 155926 pass in 155945
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 155926

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 155926 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 155926 never pass
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155799
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155815
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155815
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155815
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155815
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155815
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155815
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155815
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                52f6ded2a377ac4f191c84182488e454b1386239
baseline version:
 linux                85b0841aab15c12948af951d477183ab3df7de14

Last test of basis   155815  2020-10-14 20:39:44 Z    3 days
Testing same since   155926  2020-10-17 08:41:55 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anand Jain <anand.jain@oracle.com>
  Anant Thazhemadam <anant.thazhemadam@gmail.com>
  Arjan van de Ven <arjan@linux.intel.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  David Sterba <dsterba@suse.com>
  Dmitry Golovin <dima@golovin.in>
  Dominik Przychodni <dominik.przychodni@intel.com>
  Giovanni Cabiddu <giovanni.cabiddu@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Jan Kara <jack@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josef Bacik <josef@toxicpanda.com>
  Juergen Gross <jgross@suse.com>
  Leo Yan <leo.yan@linaro.org>
  Leonid Bloch <lb.workbox@gmail.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Marcel Holtmann <marcel@holtmann.org>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mike Leach <mike.leach@linaro.org>
  Mychaela N. Falconia <falcon@freecalypso.org>
  Nathan Chancellor <natechancellor@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Patrick Steinhardt <ps@pks.im>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sashal@kernel.org>
  Scott Chen <scott@labau.com.tw>
  Stefan Bader <stefan.bader@canonical.com>
  syzbot+009f546aa1370056b1c2@syzkaller.appspotmail.com
  Thomas Backlund <tmb@mageia.org>
  Wilken Gottwalt <wilken.gottwalt@mailbox.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 542 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 19:00:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 19:00:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8550.22831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUDuk-0007As-Ax; Sun, 18 Oct 2020 19:00:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8550.22831; Sun, 18 Oct 2020 19:00:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUDuk-0007Al-7k; Sun, 18 Oct 2020 19:00:18 +0000
Received: by outflank-mailman (input) for mailman id 8550;
 Sun, 18 Oct 2020 19:00:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I16y=DZ=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kUDui-0007Ag-0P
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 19:00:17 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a83ab8c-2f22-4553-874c-7741e9241a5b;
 Sun, 18 Oct 2020 19:00:11 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kUDuB-0007Wk-NT; Sun, 18 Oct 2020 18:59:43 +0000
X-Inumbo-ID: 5a83ab8c-2f22-4553-874c-7741e9241a5b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=q0BhqrN3gbd6Z80uxUuHxnuOGNa3MQpvn/ujKpbjoKc=; b=qhWiWAYm9fZ2KNDR/Yk3WeXFl4
	HF2RciAjYHBQMCf3sjKhvFkXSfibnOaNYa7rSxO4GAjMdOb5lKFie8w5W48ySlrw8CCYSSnOZO0Xg
	5v+vzHZ7K/VxCxY2aBAso6UvZLXpaapi00aBgR18RXogoXyWfK3p9m9265NrgW55bmABmFf/ljB+D
	9Af5uL1UZOzL5n8T3eza+cGiO5HDs2I27gnorvuwnA1XSsP05jdLks+n6C8PrE6p/2IOOkRu9s0Qf
	2gQIITEDGF8BYWCD3hQsOGqb9beW/Q62vFU/IQef0kDvAQTCEweywjgtWsVjiKxR3wbF0iuVRtPwK
	m1h79KbA==;
Date: Sun, 18 Oct 2020 19:59:43 +0100
From: Matthew Wilcox <willy@infradead.org>
To: trix@redhat.com
Cc: linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org,
	clang-built-linux@googlegroups.com, linux-iio@vger.kernel.org,
	nouveau@lists.freedesktop.org, storagedev@microchip.com,
	dri-devel@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org, keyrings@vger.kernel.org,
	linux-mtd@lists.infradead.org, ath10k@lists.infradead.org,
	MPT-FusionLinux.pdl@broadcom.com,
	linux-stm32@st-md-mailman.stormreply.com,
	usb-storage@lists.one-eyed-alien.net,
	linux-watchdog@vger.kernel.org, devel@driverdev.osuosl.org,
	linux-samsung-soc@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, amd-gfx@lists.freedesktop.org,
	linux-acpi@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
	industrypack-devel@lists.sourceforge.net, linux-pci@vger.kernel.org,
	spice-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
	linux-serial@vger.kernel.org, linux-nfc@lists.01.org,
	linux-pm@vger.kernel.org, linux-can@vger.kernel.org,
	linux-block@vger.kernel.org, linux-gpio@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-amlogic@lists.infradead.org,
	openipmi-developer@lists.sourceforge.net,
	platform-driver-x86@vger.kernel.org,
	linux-integrity@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-edac@vger.kernel.org,
	netdev@vger.kernel.org, linux-usb@vger.kernel.org,
	linux-wireless@vger.kernel.org,
	linux-security-module@vger.kernel.org, linux-crypto@vger.kernel.org,
	patches@opensource.cirrus.com, bpf@vger.kernel.org,
	ocfs2-devel@oss.oracle.com, linux-power@fi.rohmeurope.com
Subject: Re: [Ocfs2-devel] [RFC] treewide: cleanup unreachable breaks
Message-ID: <20201018185943.GM20115@casper.infradead.org>
References: <20201017160928.12698-1-trix@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201017160928.12698-1-trix@redhat.com>

On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> clang has a number of useful, new warnings see
> https://urldefense.com/v3/__https://clang.llvm.org/docs/DiagnosticsReference.html__;!!GqivPVa7Brio!Krxz78O3RKcB9JBMVo_F98FupVhj_jxX60ddN6tKGEbv_cnooXc1nnBmchm-e_O9ieGnyQ$ 

Please get your IT department to remove that stupidity.  If you can't,
please send email from a non-Red Hat email address.

I don't understand why this is a useful warning to fix.  What actual
problem is caused by the code below?

> return and break
> 
>  	switch (c->x86_vendor) {
>  	case X86_VENDOR_INTEL:
>  		intel_p5_mcheck_init(c);
>  		return 1;
> -		break;

Sure, it's unnecessary, but it's not masking a bug.  It's not unclear.
Why do we want to enable this warning?



From xen-devel-bounces@lists.xenproject.org Sun Oct 18 19:06:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 19:06:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8552.22843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUE16-0007Ng-0y; Sun, 18 Oct 2020 19:06:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8552.22843; Sun, 18 Oct 2020 19:06:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUE15-0007NZ-UN; Sun, 18 Oct 2020 19:06:51 +0000
Received: by outflank-mailman (input) for mailman id 8552;
 Sun, 18 Oct 2020 19:06:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3iQ=DZ=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kUE15-0007NT-JW
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 19:06:51 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.22])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dea77b87-ca3c-4e17-a67a-19b53ed81275;
 Sun, 18 Oct 2020 19:06:49 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay01.hostedemail.com (Postfix) with ESMTP id 70CB3100E7B40;
 Sun, 18 Oct 2020 19:06:49 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.133.149])
 (Authenticated sender: joe@perches.com)
 by omf09.hostedemail.com (Postfix) with ESMTPA;
 Sun, 18 Oct 2020 19:06:42 +0000 (UTC)
X-Inumbo-ID: dea77b87-ca3c-4e17-a67a-19b53ed81275
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:800:967:968:973:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1534:1540:1593:1594:1711:1730:1747:1777:1792:2194:2199:2393:2525:2553:2561:2564:2682:2685:2692:2828:2859:2905:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3352:3622:3865:3866:3867:3868:3870:3871:3872:3873:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4250:4321:5007:6119:6742:6743:7903:8957:8985:9025:10004:10400:10848:11232:11658:11914:12043:12295:12297:12438:12555:12740:12760:12895:12986:13069:13072:13311:13357:13439:14096:14097:14181:14659:14721:14777:21080:21347:21433:21451:21627:21811:21819:30003:30012:30022:30034:30054:30083:30090:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: year67_630d5f827230
X-Filterd-Recvd-Size: 3209
Received: from XPS-9350.home (unknown [47.151.133.149])
	(Authenticated sender: joe@perches.com)
	by omf09.hostedemail.com (Postfix) with ESMTPA;
	Sun, 18 Oct 2020 19:06:42 +0000 (UTC)
Message-ID: <18981cad4ac27b4a22b2e38d40bd112432d4a4e7.camel@perches.com>
Subject: Re: [Ocfs2-devel] [RFC] treewide: cleanup unreachable breaks
From: Joe Perches <joe@perches.com>
To: Matthew Wilcox <willy@infradead.org>, trix@redhat.com
Cc: linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org, 
 clang-built-linux@googlegroups.com, linux-iio@vger.kernel.org, 
 nouveau@lists.freedesktop.org, storagedev@microchip.com, 
 dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
  keyrings@vger.kernel.org, linux-mtd@lists.infradead.org, 
 ath10k@lists.infradead.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-stm32@st-md-mailman.stormreply.com,
 usb-storage@lists.one-eyed-alien.net,  linux-watchdog@vger.kernel.org,
 devel@driverdev.osuosl.org,  linux-samsung-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org,  linux-nvdimm@lists.01.org,
 amd-gfx@lists.freedesktop.org,  linux-acpi@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, 
 industrypack-devel@lists.sourceforge.net, linux-pci@vger.kernel.org, 
 spice-devel@lists.freedesktop.org, linux-media@vger.kernel.org, 
 linux-serial@vger.kernel.org, linux-nfc@lists.01.org,
 linux-pm@vger.kernel.org,  linux-can@vger.kernel.org,
 linux-block@vger.kernel.org,  linux-gpio@vger.kernel.org,
 xen-devel@lists.xenproject.org,  linux-amlogic@lists.infradead.org,
 openipmi-developer@lists.sourceforge.net, 
 platform-driver-x86@vger.kernel.org, linux-integrity@vger.kernel.org, 
 linux-arm-kernel@lists.infradead.org, linux-edac@vger.kernel.org, 
 netdev@vger.kernel.org, linux-usb@vger.kernel.org, 
 linux-wireless@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-crypto@vger.kernel.org, patches@opensource.cirrus.com,
 bpf@vger.kernel.org,  ocfs2-devel@oss.oracle.com,
 linux-power@fi.rohmeurope.com
Date: Sun, 18 Oct 2020 12:06:40 -0700
In-Reply-To: <20201018185943.GM20115@casper.infradead.org>
References: <20201017160928.12698-1-trix@redhat.com>
	 <20201018185943.GM20115@casper.infradead.org>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.36.4-0ubuntu1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-10-18 at 19:59 +0100, Matthew Wilcox wrote:
> On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> > clang has a number of useful, new warnings see
> > https://urldefense.com/v3/__https://clang.llvm.org/docs/DiagnosticsReference.html__;!!GqivPVa7Brio!Krxz78O3RKcB9JBMVo_F98FupVhj_jxX60ddN6tKGEbv_cnooXc1nnBmchm-e_O9ieGnyQ$ 
> 
> Please get your IT department to remove that stupidity.  If you can't,
> please send email from a non-Red Hat email address.

I didn't get it this way, neither did lore.
It's on your end.

https://lore.kernel.org/lkml/20201017160928.12698-1-trix@redhat.com/

> I don't understand why this is a useful warning to fix.

Precision in coding style intent and code minimization
would be the biggest factors IMO.

> What actual problem is caused by the code below?

Obviously none.




From xen-devel-bounces@lists.xenproject.org Sun Oct 18 19:13:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 19:13:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8554.22856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUE7l-0008IY-PS; Sun, 18 Oct 2020 19:13:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8554.22856; Sun, 18 Oct 2020 19:13:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUE7l-0008IR-MD; Sun, 18 Oct 2020 19:13:45 +0000
Received: by outflank-mailman (input) for mailman id 8554;
 Sun, 18 Oct 2020 19:13:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lHmQ=DZ=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kUE7j-0008IM-VF
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 19:13:45 +0000
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 392aa003-e3e8-4804-9d9f-4426377877b0;
 Sun, 18 Oct 2020 19:13:40 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 6E9191280300;
 Sun, 18 Oct 2020 12:13:38 -0700 (PDT)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id JsjuUxZpqgrR; Sun, 18 Oct 2020 12:13:38 -0700 (PDT)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::c447])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 1ED2912802BA;
 Sun, 18 Oct 2020 12:13:36 -0700 (PDT)
X-Inumbo-ID: 392aa003-e3e8-4804-9d9f-4426377877b0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=hansenpartnership.com;
	s=20151216; t=1603048418;
	bh=z260bBy56dgN8y3lDRHRQeuKrIr1eALRZNoNPoXlSSo=;
	h=Subject:From:To:Cc:Date:In-Reply-To:References:From;
	b=QOZMzgTQH16i5avnDNVW0Ktw3okhOk0UWDzTWl7121p+F93A1Quw31cCRVLQH+SGW
	 zn7QBaZkwCTKpROUCnm6j2wnXTspVYtdUU68F1++PaWc9wHgdj5YeqG30CVReBaxSQ
	 7zgTTXbB0BvsKBkgxIK2UI/Iu/DopS5PndAYThf8=
Message-ID: <45efa7780c79972eae9ca9bdeb9f7edbab4f3643.camel@HansenPartnership.com>
Subject: Re: [Ocfs2-devel] [RFC] treewide: cleanup unreachable breaks
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Matthew Wilcox <willy@infradead.org>, trix@redhat.com
Cc: linux-kernel@vger.kernel.org, alsa-devel@alsa-project.org, 
 clang-built-linux@googlegroups.com, linux-iio@vger.kernel.org, 
 nouveau@lists.freedesktop.org, storagedev@microchip.com, 
 dri-devel@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
  keyrings@vger.kernel.org, linux-mtd@lists.infradead.org, 
 ath10k@lists.infradead.org, MPT-FusionLinux.pdl@broadcom.com, 
 linux-stm32@st-md-mailman.stormreply.com,
 usb-storage@lists.one-eyed-alien.net,  linux-watchdog@vger.kernel.org,
 devel@driverdev.osuosl.org,  linux-samsung-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org,  linux-nvdimm@lists.01.org,
 amd-gfx@lists.freedesktop.org,  linux-acpi@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, 
 industrypack-devel@lists.sourceforge.net, linux-pci@vger.kernel.org, 
 spice-devel@lists.freedesktop.org, linux-media@vger.kernel.org, 
 linux-serial@vger.kernel.org, linux-nfc@lists.01.org,
 linux-pm@vger.kernel.org,  linux-can@vger.kernel.org,
 linux-block@vger.kernel.org,  linux-gpio@vger.kernel.org,
 xen-devel@lists.xenproject.org,  linux-amlogic@lists.infradead.org,
 openipmi-developer@lists.sourceforge.net, 
 platform-driver-x86@vger.kernel.org, linux-integrity@vger.kernel.org, 
 linux-arm-kernel@lists.infradead.org, linux-edac@vger.kernel.org, 
 netdev@vger.kernel.org, linux-usb@vger.kernel.org, 
 linux-wireless@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-crypto@vger.kernel.org, patches@opensource.cirrus.com,
 bpf@vger.kernel.org,  ocfs2-devel@oss.oracle.com,
 linux-power@fi.rohmeurope.com
Date: Sun, 18 Oct 2020 12:13:35 -0700
In-Reply-To: <20201018185943.GM20115@casper.infradead.org>
References: <20201017160928.12698-1-trix@redhat.com>
	 <20201018185943.GM20115@casper.infradead.org>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-10-18 at 19:59 +0100, Matthew Wilcox wrote:
> On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> > clang has a number of useful, new warnings see
> > https://urldefense.com/v3/__https://clang.llvm.org/docs/DiagnosticsReference.html__;!!GqivPVa7Brio!Krxz78O3RKcB9JBMVo_F98FupVhj_jxX60ddN6tKGEbv_cnooXc1nnBmchm-e_O9ieGnyQ$ 
> 
> Please get your IT department to remove that stupidity.  If you
> can't, please send email from a non-Red Hat email address.

Actually, the problem is at Oracle's end somewhere in the ocfs2 list
... if you could fix it, that would be great.  The usual real mailing
lists didn't get this transformation

https://lore.kernel.org/bpf/20201017160928.12698-1-trix@redhat.com/

but the ocfs2 list archive did:

https://oss.oracle.com/pipermail/ocfs2-devel/2020-October/015330.html

I bet Oracle IT has put some spam filter on the list that mangles URLs
this way.

James




From xen-devel-bounces@lists.xenproject.org Sun Oct 18 19:16:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 19:16:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8556.22868 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUEAd-0008RJ-7b; Sun, 18 Oct 2020 19:16:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8556.22868; Sun, 18 Oct 2020 19:16:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUEAd-0008RC-4U; Sun, 18 Oct 2020 19:16:43 +0000
Received: by outflank-mailman (input) for mailman id 8556;
 Sun, 18 Oct 2020 19:16:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I16y=DZ=infradead.org=willy@srs-us1.protection.inumbo.net>)
 id 1kUEAb-0008R6-Mh
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 19:16:41 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0613700d-df47-46c3-872e-070c23c8ed9f;
 Sun, 18 Oct 2020 19:16:39 +0000 (UTC)
Received: from willy by casper.infradead.org with local (Exim 4.92.3 #3 (Red
 Hat Linux)) id 1kUEAE-0008Qi-Sy; Sun, 18 Oct 2020 19:16:19 +0000
X-Inumbo-ID: 0613700d-df47-46c3-872e-070c23c8ed9f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=jSZ48E7Pk5au6oJE5B/SpoPyGzHAu2LHc5e92XLrkbA=; b=iJ9Ua4Sb+7c2gBb5X093C2g3JF
	uPXvpy94QM9OfF4QdLHnbdoyPK+foTfLSCfptUIJX1L1QmBGwqRRLC+FO4yttdeacV1S+hl8hKo0C
	WTqtwQkEZQTbeO+X3m7Juje7eQPdNT7ZY2bxJ15gxf5bGTukHh/PFeI2Wotfd6qSzqn3KwhwiSJ8q
	nHMmLI1n6mOTyu1OQHnDgD5bqj+pk9E7DasCqQG55sL9hd/rW8umvQBQI/4FGFQAjFO02dSWITtwv
	yXkpHo9Iys1nXXFCivdyuKxTY6HM4UcykOYUxv1R/rziPsZPtETfWCfkBycLom8Snu0zA9/3jwQkF
	l4f6J+YQ==;
Date: Sun, 18 Oct 2020 20:16:18 +0100
From: Matthew Wilcox <willy@infradead.org>
To: James Bottomley <James.Bottomley@hansenpartnership.com>
Cc: trix@redhat.com, linux-kernel@vger.kernel.org,
	alsa-devel@alsa-project.org, clang-built-linux@googlegroups.com,
	linux-iio@vger.kernel.org, nouveau@lists.freedesktop.org,
	storagedev@microchip.com, dri-devel@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org, keyrings@vger.kernel.org,
	linux-mtd@lists.infradead.org, ath10k@lists.infradead.org,
	MPT-FusionLinux.pdl@broadcom.com,
	linux-stm32@st-md-mailman.stormreply.com,
	usb-storage@lists.one-eyed-alien.net,
	linux-watchdog@vger.kernel.org, devel@driverdev.osuosl.org,
	linux-samsung-soc@vger.kernel.org, linux-scsi@vger.kernel.org,
	linux-nvdimm@lists.01.org, amd-gfx@lists.freedesktop.org,
	linux-acpi@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
	industrypack-devel@lists.sourceforge.net, linux-pci@vger.kernel.org,
	spice-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
	linux-serial@vger.kernel.org, linux-nfc@lists.01.org,
	linux-pm@vger.kernel.org, linux-can@vger.kernel.org,
	linux-block@vger.kernel.org, linux-gpio@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-amlogic@lists.infradead.org,
	openipmi-developer@lists.sourceforge.net,
	platform-driver-x86@vger.kernel.org,
	linux-integrity@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-edac@vger.kernel.org,
	netdev@vger.kernel.org, linux-usb@vger.kernel.org,
	linux-wireless@vger.kernel.org,
	linux-security-module@vger.kernel.org, linux-crypto@vger.kernel.org,
	patches@opensource.cirrus.com, bpf@vger.kernel.org,
	ocfs2-devel@oss.oracle.com, linux-power@fi.rohmeurope.com
Subject: Re: [Ocfs2-devel] [RFC] treewide: cleanup unreachable breaks
Message-ID: <20201018191618.GO20115@casper.infradead.org>
References: <20201017160928.12698-1-trix@redhat.com>
 <20201018185943.GM20115@casper.infradead.org>
 <45efa7780c79972eae9ca9bdeb9f7edbab4f3643.camel@HansenPartnership.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <45efa7780c79972eae9ca9bdeb9f7edbab4f3643.camel@HansenPartnership.com>

On Sun, Oct 18, 2020 at 12:13:35PM -0700, James Bottomley wrote:
> On Sun, 2020-10-18 at 19:59 +0100, Matthew Wilcox wrote:
> > On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> > > clang has a number of useful, new warnings see
> > > https://urldefense.com/v3/__https://clang.llvm.org/docs/DiagnosticsReference.html__;!!GqivPVa7Brio!Krxz78O3RKcB9JBMVo_F98FupVhj_jxX60ddN6tKGEbv_cnooXc1nnBmchm-e_O9ieGnyQ$ 
> > 
> > Please get your IT department to remove that stupidity.  If you
> > can't, please send email from a non-Red Hat email address.
> 
> Actually, the problem is at Oracle's end somewhere in the ocfs2 list
> ... if you could fix it, that would be great.  The usual real mailing
> lists didn't get this transformation
> 
> https://lore.kernel.org/bpf/20201017160928.12698-1-trix@redhat.com/
> 
> but the ocfs2 list archive did:
> 
> https://oss.oracle.com/pipermail/ocfs2-devel/2020-October/015330.html
> 
> I bet Oracle IT has put some spam filter on the list that mangles URLs
> this way.

*sigh*.  I'm sure there's a way.  I've raised it with someone who should
be able to fix it.


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 19:18:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 19:18:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8557.22880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUEBz-00008r-Jc; Sun, 18 Oct 2020 19:18:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8557.22880; Sun, 18 Oct 2020 19:18:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUEBz-00008k-FV; Sun, 18 Oct 2020 19:18:07 +0000
Received: by outflank-mailman (input) for mailman id 8557;
 Sun, 18 Oct 2020 19:18:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lHmQ=DZ=hansenpartnership.com=james.bottomley@srs-us1.protection.inumbo.net>)
 id 1kUEBy-00008d-A5
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 19:18:06 +0000
Received: from bedivere.hansenpartnership.com (unknown [2607:fcd0:100:8a00::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b33386f-9c92-4eae-89b8-0cc15b0f4389;
 Sun, 18 Oct 2020 19:18:03 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by bedivere.hansenpartnership.com (Postfix) with ESMTP id 7A1D5128046A;
 Sun, 18 Oct 2020 12:18:02 -0700 (PDT)
Received: from bedivere.hansenpartnership.com ([127.0.0.1])
 by localhost (bedivere.hansenpartnership.com [127.0.0.1]) (amavisd-new,
 port 10024)
 with ESMTP id ywWMwLsGscQI; Sun, 18 Oct 2020 12:18:02 -0700 (PDT)
Received: from jarvis.int.hansenpartnership.com (unknown
 [IPv6:2601:600:8280:66d1::c447])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by bedivere.hansenpartnership.com (Postfix) with ESMTPSA id 340C31280456;
 Sun, 18 Oct 2020 12:18:00 -0700 (PDT)
X-Inumbo-ID: 9b33386f-9c92-4eae-89b8-0cc15b0f4389
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=hansenpartnership.com;
	s=20151216; t=1603048682;
	bh=aTHhEGSm6DrUFNt/hKuOWL0f+WzaMvx/rc4IP5RJYRs=;
	h=Subject:From:To:Cc:Date:In-Reply-To:References:From;
	b=WcP5INjnqihCMCJ+2ZkHdzEWBsqi3wavZOcf0NGlcoun37UNiQ4GoZk+2AoMrr8hd
	 1s2t7Y8IzQcDUGm581+QcIuy/enpzpZm6HswhyX4zoKl9l3S5fk96frr/LU4I9kVw8
	 r8s7AtR/5wOGbwEsua/QdQrVgo3j6VSqYIXYukhg=
Message-ID: <0a739bcd421a3154c2521b49779b287e6c0d08a2.camel@HansenPartnership.com>
Subject: Re: [Ocfs2-devel] [RFC] treewide: cleanup unreachable breaks
From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: trix@redhat.com, linux-kernel@vger.kernel.org,
 alsa-devel@alsa-project.org,  clang-built-linux@googlegroups.com,
 linux-iio@vger.kernel.org,  nouveau@lists.freedesktop.org,
 storagedev@microchip.com,  dri-devel@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org,  keyrings@vger.kernel.org,
 linux-mtd@lists.infradead.org,  ath10k@lists.infradead.org,
 MPT-FusionLinux.pdl@broadcom.com, 
 linux-stm32@st-md-mailman.stormreply.com,
 usb-storage@lists.one-eyed-alien.net,  linux-watchdog@vger.kernel.org,
 devel@driverdev.osuosl.org,  linux-samsung-soc@vger.kernel.org,
 linux-scsi@vger.kernel.org,  linux-nvdimm@lists.01.org,
 amd-gfx@lists.freedesktop.org,  linux-acpi@vger.kernel.org,
 intel-wired-lan@lists.osuosl.org, 
 industrypack-devel@lists.sourceforge.net, linux-pci@vger.kernel.org, 
 spice-devel@lists.freedesktop.org, linux-media@vger.kernel.org, 
 linux-serial@vger.kernel.org, linux-nfc@lists.01.org,
 linux-pm@vger.kernel.org,  linux-can@vger.kernel.org,
 linux-block@vger.kernel.org,  linux-gpio@vger.kernel.org,
 xen-devel@lists.xenproject.org,  linux-amlogic@lists.infradead.org,
 openipmi-developer@lists.sourceforge.net, 
 platform-driver-x86@vger.kernel.org, linux-integrity@vger.kernel.org, 
 linux-arm-kernel@lists.infradead.org, linux-edac@vger.kernel.org, 
 netdev@vger.kernel.org, linux-usb@vger.kernel.org, 
 linux-wireless@vger.kernel.org, linux-security-module@vger.kernel.org, 
 linux-crypto@vger.kernel.org, patches@opensource.cirrus.com,
 bpf@vger.kernel.org,  ocfs2-devel@oss.oracle.com,
 linux-power@fi.rohmeurope.com
Date: Sun, 18 Oct 2020 12:17:59 -0700
In-Reply-To: <20201018191618.GO20115@casper.infradead.org>
References: <20201017160928.12698-1-trix@redhat.com>
	 <20201018185943.GM20115@casper.infradead.org>
	 <45efa7780c79972eae9ca9bdeb9f7edbab4f3643.camel@HansenPartnership.com>
	 <20201018191618.GO20115@casper.infradead.org>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.34.4 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Sun, 2020-10-18 at 20:16 +0100, Matthew Wilcox wrote:
> On Sun, Oct 18, 2020 at 12:13:35PM -0700, James Bottomley wrote:
> > On Sun, 2020-10-18 at 19:59 +0100, Matthew Wilcox wrote:
> > > On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> > > > clang has a number of useful, new warnings see
> > > > https://urldefense.com/v3/__https://clang.llvm.org/docs/DiagnosticsReference.html__;!!GqivPVa7Brio!Krxz78O3RKcB9JBMVo_F98FupVhj_jxX60ddN6tKGEbv_cnooXc1nnBmchm-e_O9ieGnyQ$ 
> > > 
> > > Please get your IT department to remove that stupidity.  If you
> > > can't, please send email from a non-Red Hat email address.
> > 
> > Actually, the problem is at Oracle's end somewhere in the ocfs2
> > list ... if you could fix it, that would be great.  The usual real
> > mailing lists didn't get this transformation
> > 
> > https://lore.kernel.org/bpf/20201017160928.12698-1-trix@redhat.com/
> > 
> > but the ocfs2 list archive did:
> > 
> > https://oss.oracle.com/pipermail/ocfs2-devel/2020-October/015330.html
> > 
> > I bet Oracle IT has put some spam filter on the list that mangles
> > URLs this way.
> 
> *sigh*.  I'm sure there's a way.  I've raised it with someone who
> should be able to fix it.

As someone who works for IBM I can only say I feel your pain ...

James




From xen-devel-bounces@lists.xenproject.org Sun Oct 18 19:31:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 19:31:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8561.22895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUEPH-0001o8-PP; Sun, 18 Oct 2020 19:31:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8561.22895; Sun, 18 Oct 2020 19:31:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUEPH-0001o1-L8; Sun, 18 Oct 2020 19:31:51 +0000
Received: by outflank-mailman (input) for mailman id 8561;
 Sun, 18 Oct 2020 19:31:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUEPF-0001nT-Jj
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 19:31:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a1c6f72-6168-45b1-a971-a839937a4ace;
 Sun, 18 Oct 2020 19:31:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUEP8-0002gh-2f; Sun, 18 Oct 2020 19:31:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUEP7-0004V2-PE; Sun, 18 Oct 2020 19:31:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUEP7-0002tR-Oh; Sun, 18 Oct 2020 19:31:41 +0000
X-Inumbo-ID: 5a1c6f72-6168-45b1-a971-a839937a4ace
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=McH3/zie5BtVzwaMnxEI4RR5ycZ1OZKqcQM2ppkm/mY=; b=Cm7ltMxk+vLAK0k4jB/AcGUckU
	yQTVzZUPe2+vEgKa1rHJUs7S88TqMRMuo+DIPmfsRFK5JVodSej2IrPx/OZ89T9uJJGqyIdTH4+sx
	VO3kYyp3TVowWjHTFF1haFtH2dLFMNzxW1rrc/fR49DC5svHWOXaUynHamzFTacw5Xec=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155950-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155950: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9d9af1007bc08971953ae915d88dc9bb21344b53
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 19:31:41 +0000

flight 155950 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155950/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9d9af1007bc08971953ae915d88dc9bb21344b53
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   78 days
Failing since        152366  2020-08-01 20:49:34 Z   77 days  132 attempts
Testing same since   155950  2020-10-18 06:34:43 Z    0 days    1 attempts

------------------------------------------------------------
3213 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 584544 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 20:20:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 20:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8567.22912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUF9z-00067n-KG; Sun, 18 Oct 2020 20:20:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8567.22912; Sun, 18 Oct 2020 20:20:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUF9z-00067g-HI; Sun, 18 Oct 2020 20:20:07 +0000
Received: by outflank-mailman (input) for mailman id 8567;
 Sun, 18 Oct 2020 20:20:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUF9y-00063t-LK
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 20:20:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4055ad10-0cf0-406a-907b-88545e325fb4;
 Sun, 18 Oct 2020 20:20:04 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUF9v-0003k4-MG; Sun, 18 Oct 2020 20:20:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUF9v-0006CU-EZ; Sun, 18 Oct 2020 20:20:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUF9v-0006T5-Dc; Sun, 18 Oct 2020 20:20:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUF9y-00063t-LK
	for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 20:20:06 +0000
X-Inumbo-ID: 4055ad10-0cf0-406a-907b-88545e325fb4
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4055ad10-0cf0-406a-907b-88545e325fb4;
	Sun, 18 Oct 2020 20:20:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JPBKx+B6eNbUtFzMMLFpTstIGh+R3YvCs9CFC4RQBkA=; b=jcawda7sv6HeR4HHrI/jxptn2N
	oHrZly6fAMjcVbNYQ49k6dfZmzFxEoqtFSZf5WkigYAcg6JR4t8t8TezLQkaGYkPWQkvMY5wT5YDX
	0R3JejFNJAyAxdENcPKc10KTifR1h9xTkRgUu7CaoP1fo4Jm+8jEufk2O5pOcjgVLeOI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUF9v-0003k4-MG; Sun, 18 Oct 2020 20:20:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUF9v-0006CU-EZ; Sun, 18 Oct 2020 20:20:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUF9v-0006T5-Dc; Sun, 18 Oct 2020 20:20:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155957-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155957: regressions - FAIL
X-Osstest-Failures:
    ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-localmigrate/x10:fail:regression
X-Osstest-Versions-This:
    ovmf=709b163940c55604b983400eb49dad144a2aa091
X-Osstest-Versions-That:
    ovmf=73e3cb6c7eea4f5db81c87574dcefe1282de4772
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 20:20:03 +0000

flight 155957 ovmf real [real]
flight 155967 ovmf real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/155957/
http://logs.test-lab.xenproject.org/osstest/logs/155967/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ovmf-amd64 18 guest-localmigrate/x10 fail REGR. vs. 155942

version targeted for testing:
 ovmf                 709b163940c55604b983400eb49dad144a2aa091
baseline version:
 ovmf                 73e3cb6c7eea4f5db81c87574dcefe1282de4772

Last test of basis   155942  2020-10-18 01:09:50 Z    0 days
Testing same since   155957  2020-10-18 11:07:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Terry Lee <terry.lee@hpe.com>
  Zhichao Gao <zhichao.gao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 709b163940c55604b983400eb49dad144a2aa091
Author: Terry Lee <terry.lee@hpe.com>
Date:   Thu Jul 9 10:46:47 2020 +0800

    SecurityPkg/Tcg2PhysicalPresenceLib: Fix incorrect TCG VER comparison
    
    REF: https://bugzilla.tianocore.org/show_bug.cgi?id=2697
    
    Tcg2PhysicalPresenceLibConstructor set the module variable
    mIsTcg2PPVerLowerThan_1_3 with incorrect TCG version comparison.
    
    Cc: Jiewen Yao <jiewen.yao@intel.com>
    Cc: Jian J Wang <jian.j.wang@intel.com>
    Cc: Chao Zhang <chao.b.zhang@intel.com>
    Signed-off-by: Zhichao Gao <zhichao.gao@intel.com>
    Reviewed-by: Jiewen Yao <jiewen.yao@intel.com>


From xen-devel-bounces@lists.xenproject.org Sun Oct 18 22:03:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 18 Oct 2020 22:03:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8575.22940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUGm4-0006CJ-AU; Sun, 18 Oct 2020 22:03:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8575.22940; Sun, 18 Oct 2020 22:03:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUGm4-0006CC-7N; Sun, 18 Oct 2020 22:03:32 +0000
Received: by outflank-mailman (input) for mailman id 8575;
 Sun, 18 Oct 2020 22:03:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUGm2-0006Be-5N
 for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 22:03:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a344ada9-ea29-4747-a1cc-0750312f4c01;
 Sun, 18 Oct 2020 22:03:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUGlr-0005qh-1V; Sun, 18 Oct 2020 22:03:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUGlq-0003gV-Pv; Sun, 18 Oct 2020 22:03:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUGlq-0001rH-PS; Sun, 18 Oct 2020 22:03:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=MNWJ=DZ=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUGm2-0006Be-5N
	for xen-devel@lists.xenproject.org; Sun, 18 Oct 2020 22:03:30 +0000
X-Inumbo-ID: a344ada9-ea29-4747-a1cc-0750312f4c01
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a344ada9-ea29-4747-a1cc-0750312f4c01;
	Sun, 18 Oct 2020 22:03:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=npR6xhOZUgC70QBxzrJUU4G3zmZmlaysZPvGgvLsTzI=; b=AYNB5ejfqlVpxiBhymdBIHjTMR
	574rVMRWKodsnLL1wrNIX7xnrI0R3PzoDD+Ks11zGbpcJdTH+RE0iW81RiIq7YfgrS6QN60vGScAS
	8ALD0c8LhXF2w1LhpFpzc2WmUasM8MSnftVjI4yniHI+q3xubDnZCh5azSSR/DBIMGJY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUGlr-0005qh-1V; Sun, 18 Oct 2020 22:03:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUGlq-0003gV-Pv; Sun, 18 Oct 2020 22:03:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUGlq-0001rH-PS; Sun, 18 Oct 2020 22:03:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155953-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155953: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:redhat-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:windows-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e12ce85b2c79d83a340953291912875c30b3af06
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 18 Oct 2020 22:03:18 +0000

flight 155953 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155953/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e12ce85b2c79d83a340953291912875c30b3af06
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   59 days
Failing since        152659  2020-08-21 14:07:39 Z   58 days  101 attempts
Testing same since   155931  2020-10-17 13:40:59 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 46332 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 01:02:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 01:02:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8582.22961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUJYa-0005SR-OH; Mon, 19 Oct 2020 01:01:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8582.22961; Mon, 19 Oct 2020 01:01:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUJYa-0005PY-Gb; Mon, 19 Oct 2020 01:01:48 +0000
Received: by outflank-mailman (input) for mailman id 8582;
 Mon, 19 Oct 2020 01:01:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUJYY-0002fQ-Gr
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 01:01:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a82899ac-d278-46e9-9f0f-6ee265777f43;
 Mon, 19 Oct 2020 01:01:38 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUJYP-0003EC-SZ; Mon, 19 Oct 2020 01:01:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUJYP-00056k-Kc; Mon, 19 Oct 2020 01:01:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUJYP-0000zr-K8; Mon, 19 Oct 2020 01:01:37 +0000
X-Inumbo-ID: a82899ac-d278-46e9-9f0f-6ee265777f43
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=pYZjd0gbHbvUKoHcm/gFijeXp4y+oKW8EaKuTn36C/Q=; b=ppbX7cf4jm1+k+LTbIOPkeNfrW
	Qnr6tGfZd+jXi0vlG6/U+4u69rfYxRR8hEORTOJlnElmEFC+IsIYGGJAgu2Md1gWKQ9Y3Dxgt+4Zj
	/4ql3dFZDmDx3c35xhj65hXjcnXa9BLnoYG0f+QiQIWuFrgNCF5/AWJLB5dYu8C2NDIQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-xl-qcow2
Message-Id: <E1kUJYP-0000zr-K8@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 01:01:37 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qcow2
testid guest-start/debian.repeat

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  2d24a64661549732fc77f632928318dd52f5bce5
  Bug not present: 7bed89958bfbf40df9ca681cefbdca63abdde39d
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155972/


  commit 2d24a64661549732fc77f632928318dd52f5bce5
  Author: Maxim Levitsky <mlevitsk@redhat.com>
  Date:   Tue Oct 6 15:38:59 2020 +0300
  
      device-core: use RCU for list of children of a bus
      
      This fixes the race between device emulation code that tries to find
      a child device to dispatch a request to (e.g. a SCSI disk)
      and hotplug of a new device onto that bus.
      
      Note that this doesn't convert all readers of the list,
      only those that might traverse it without the BQL held.
      
      This is a very small first step towards making this code thread-safe.
      
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-Id: <20200913160259.32145-5-mlevitsk@redhat.com>
      [Use RCU_READ_LOCK_GUARD in more places, adjust testcase now that
       the delay in DEVICE_DELETED due to RCU is more consistent. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20201006123904.610658-9-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qcow2.guest-start--debian.repeat.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qcow2.guest-start--debian.repeat --summary-out=tmp/155972.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-xl-qcow2 guest-start/debian.repeat
Searching for failure / basis pass:
 155953 fail [host=fiano1] / 155769 [host=elbling1] 155743 [host=fiano0] 155729 [host=pinot0] 155713 [host=godello1] 155703 [host=chardonnay1] 155695 [host=huxelrebe0] 155675 [host=pinot1] 155665 [host=huxelrebe1] 155645 [host=albana0] 155613 [host=rimava1] 155585 [host=albana1] 155544 [host=godello1] 155518 [host=chardonnay0] 155509 [host=godello0] 152631 [host=godello1] 152615 [host=chardonnay0] 152573 [host=huxelrebe0] 152563 [host=huxelrebe1] 152497 [host=godello0] 152480 [host=rimava1] 152456 [host=elbling1] 152411 [host=pinot1] 152380 [host=albana1] 152337 ok.
Failure / basis pass flights: 155953 / 152337
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 6ee2e66674f36b6d27a95f4ddf27226905cc63a4
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7f79b736b0a57da71d87c987357db0227cd16ac6 3c659044118e34603161457db9934a34f816d78b d74824cf7c8b352f9045e949dc636c7207a41eee d9c812dda519a1a73e8370e1b81ddf46eb22ed16 98bed5de1de3352c63cfe29a00f17e8d9ce72689
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#7f79b736b0a57da71d87c987357db0227cd16ac6-30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1 git://xenbits.xen.org/qemu-xen-traditional.git#3c659044118e34603161457db9934a34f816d78b-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#d74824cf7c8b352f9045e949dc636c7207a41eee-e12ce85b2c79d83a340953291912875c30b3af06 git://xenbits.xen.org/osstest/seabios.git#d9c812dda519a1a73e8370e1b81ddf46eb22ed16-58a44be024f69d2e4d2b58553529230abdd3935e git://xenbits.xen.org/xen.git#98bed5de1de3352c63cfe29a00f17e8d9ce72689-6ee2e66674f36b6d27a95f4ddf27226905cc63a4
Loaded 43461 nodes in revision graph
Searching for test results:
 152337 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7f79b736b0a57da71d87c987357db0227cd16ac6 3c659044118e34603161457db9934a34f816d78b d74824cf7c8b352f9045e949dc636c7207a41eee d9c812dda519a1a73e8370e1b81ddf46eb22ed16 98bed5de1de3352c63cfe29a00f17e8d9ce72689
 152380 [host=albana1]
 152411 [host=pinot1]
 152480 [host=rimava1]
 152456 [host=elbling1]
 152497 [host=godello0]
 152563 [host=huxelrebe1]
 152573 [host=huxelrebe0]
 152615 [host=chardonnay0]
 152631 [host=godello1]
 152659 [host=godello0]
 152668 [host=godello0]
 152682 [host=godello0]
 152696 [host=godello0]
 152712 [host=godello0]
 152726 [host=godello0]
 152771 [host=godello0]
 152793 [host=godello0]
 152836 [host=godello0]
 152856 [host=godello0]
 152878 [host=godello0]
 152911 []
 152923 []
 152946 []
 152965 []
 152992 []
 153007 []
 153025 []
 153047 []
 153075 []
 153113 []
 153138 []
 153166 []
 153270 []
 153288 []
 153311 []
 153336 []
 153362 []
 153383 []
 153406 []
 153435 []
 153452 []
 153478 []
 153502 []
 153531 []
 153548 []
 153576 []
 153597 []
 153611 []
 153625 []
 153663 []
 153692 []
 153762 []
 153776 []
 153793 []
 153818 []
 153847 []
 153891 []
 153922 []
 153946 []
 153971 []
 153998 []
 154023 [host=godello0]
 154038 [host=godello0]
 154061 [host=godello0]
 154096 [host=godello0]
 154137 [host=godello0]
 154138 [host=godello0]
 154140 [host=godello0]
 154141 [host=godello0]
 154142 [host=godello0]
 154466 [host=godello0]
 154485 [host=godello0]
 154496 [host=godello0]
 154508 [host=godello0]
 154526 [host=godello0]
 154544 [host=godello0]
 154552 [host=godello0]
 154566 [host=godello0]
 154583 [host=godello0]
 154607 [host=godello0]
 154629 [host=godello0]
 155018 [host=godello0]
 155098 [host=godello0]
 155184 [host=godello0]
 155318 [host=godello0]
 155434 [host=godello0]
 155483 [host=godello0]
 155509 [host=godello0]
 155518 [host=chardonnay0]
 155544 [host=godello1]
 155585 [host=albana1]
 155613 [host=rimava1]
 155645 [host=albana0]
 155665 [host=huxelrebe1]
 155675 [host=pinot1]
 155695 [host=huxelrebe0]
 155703 [host=chardonnay1]
 155713 [host=godello1]
 155729 [host=pinot0]
 155743 [host=fiano0]
 155754 [host=elbling1]
 155769 [host=elbling1]
 155785 fail irrelevant
 155802 fail irrelevant
 155819 fail irrelevant
 155841 fail irrelevant
 155877 fail irrelevant
 155888 fail irrelevant
 155927 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7f79b736b0a57da71d87c987357db0227cd16ac6 3c659044118e34603161457db9934a34f816d78b d74824cf7c8b352f9045e949dc636c7207a41eee d9c812dda519a1a73e8370e1b81ddf46eb22ed16 98bed5de1de3352c63cfe29a00f17e8d9ce72689
 155928 fail irrelevant
 155911 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a7d977040bd82b89d1fe5ef32d488bfd10db2dbc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7daf8f8d011cdd5d3e86930ed2bde969425c790c 58a44be024f69d2e4d2b58553529230abdd3935e 6ee2e66674f36b6d27a95f4ddf27226905cc63a4
 155930 fail irrelevant
 155932 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 a7d977040bd82b89d1fe5ef32d488bfd10db2dbc 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7daf8f8d011cdd5d3e86930ed2bde969425c790c 58a44be024f69d2e4d2b58553529230abdd3935e 6ee2e66674f36b6d27a95f4ddf27226905cc63a4
 155933 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 19c87b7d446c3273e84b238cb02cd1c0ae69c43e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3e40748834923798aa57e3751db13a069e2c617b 58a44be024f69d2e4d2b58553529230abdd3935e 6ee2e66674f36b6d27a95f4ddf27226905cc63a4
 155934 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 19c87b7d446c3273e84b238cb02cd1c0ae69c43e 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 57c98ea9acdcef5021f5671efa6475a5794a51c4 58a44be024f69d2e4d2b58553529230abdd3935e 6ee2e66674f36b6d27a95f4ddf27226905cc63a4
 155935 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5d0a827122cccd1f884faf75b2a065d88a58bce1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 96292515c07e3a99f5a29540ed2f257b1ff75111 c685fe3ff2d402caefc1487d99bb486c4a510b8b 884ef07f4f66b9d12fc4811047db95ba649db85c
 155939 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9380177354387f03c8ff9eadb7ae94aa453b9469 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b37da837630ca7cdbc45de4c5339bbfc6d21beed c685fe3ff2d402caefc1487d99bb486c4a510b8b 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155940 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a0bdf866873467271eff9a92f179ab0f77d735cb 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155941 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 c45a70d8c271056896a057fbcdc7743a2942d0ec 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155956 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 7f79b736b0a57da71d87c987357db0227cd16ac6 3c659044118e34603161457db9934a34f816d78b d74824cf7c8b352f9045e949dc636c7207a41eee d9c812dda519a1a73e8370e1b81ddf46eb22ed16 98bed5de1de3352c63cfe29a00f17e8d9ce72689
 155943 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9380177354387f03c8ff9eadb7ae94aa453b9469 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a4339de2de4def4beb33e22e6f506bcc8b9d9326 c685fe3ff2d402caefc1487d99bb486c4a510b8b 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155947 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 07a47d4a1879370009baab44f1f387610d88a299 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155949 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d9f24bf57241453e078dba28d16fe3a430f06da1 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155931 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 6ee2e66674f36b6d27a95f4ddf27226905cc63a4
 155952 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bab88ead6fcbc7097ed75981622cce7850da1cc7 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155958 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 6ee2e66674f36b6d27a95f4ddf27226905cc63a4
 155959 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7bed89958bfbf40df9ca681cefbdca63abdde39d 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a23151e8cc8cc08546252dc9c7671171d9c44615 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155962 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2d24a64661549732fc77f632928318dd52f5bce5 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155964 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7bed89958bfbf40df9ca681cefbdca63abdde39d 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155966 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2d24a64661549732fc77f632928318dd52f5bce5 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155953 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 30f0ec8d80072ae3ab58e08014e6b2ffe3ef97e1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 6ee2e66674f36b6d27a95f4ddf27226905cc63a4
 155970 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7bed89958bfbf40df9ca681cefbdca63abdde39d 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
 155972 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2d24a64661549732fc77f632928318dd52f5bce5 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
Searching for interesting versions
 Result found: flight 152337 (pass), for basis pass
 Result found: flight 155931 (fail), for basis failure
 Repro found: flight 155956 (pass), for basis pass
 Repro found: flight 155958 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cc942105ede58a300ba46f3df0edfa86b3abd4dd 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7bed89958bfbf40df9ca681cefbdca63abdde39d 849c5e50b6f474df6cc113130575bcdccfafcd9e 534b3d09958fdc4df64872c2ab19feb4b1eebc5a
No revisions left to test, checking graph state.
 Result found: flight 155959 (pass), for last pass
 Result found: flight 155962 (fail), for first failure
 Repro found: flight 155964 (pass), for last pass
 Repro found: flight 155966 (fail), for first failure
 Repro found: flight 155970 (pass), for last pass
 Repro found: flight 155972 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  2d24a64661549732fc77f632928318dd52f5bce5
  Bug not present: 7bed89958bfbf40df9ca681cefbdca63abdde39d
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155972/


  commit 2d24a64661549732fc77f632928318dd52f5bce5
  Author: Maxim Levitsky <mlevitsk@redhat.com>
  Date:   Tue Oct 6 15:38:59 2020 +0300
  
      device-core: use RCU for list of children of a bus
      
      This fixes the race between device emulation code that tries to find
      a child device to dispatch a request to (e.g. a SCSI disk)
      and hotplug of a new device onto that bus.
      
      Note that this doesn't convert all readers of the list,
      only those that might traverse it without the BQL held.
      
      This is a very small first step towards making this code thread-safe.
      
      Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
      Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
      Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
      Message-Id: <20200913160259.32145-5-mlevitsk@redhat.com>
      [Use RCU_READ_LOCK_GUARD in more places, adjust testcase now that
       the delay in DEVICE_DELETED due to RCU is more consistent. - Paolo]
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
      Message-Id: <20201006123904.610658-9-mlevitsk@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.316537 to fit
pnmtopng: 99 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qcow2.guest-start--debian.repeat.{dot,ps,png,html,svg}.
----------------------------------------
155972: tolerable ALL FAIL

flight 155972 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155972/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2  21 guest-start/debian.repeat fail baseline untested


jobs:
 test-amd64-amd64-xl-qcow2                                    fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 02:48:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 02:48:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8584.22973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kULDI-0006Bb-MC; Mon, 19 Oct 2020 02:47:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8584.22973; Mon, 19 Oct 2020 02:47:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kULDI-0006BU-Id; Mon, 19 Oct 2020 02:47:56 +0000
Received: by outflank-mailman (input) for mailman id 8584;
 Mon, 19 Oct 2020 02:47:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Kus=D2=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1kULDG-0006BP-VP
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 02:47:55 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 342d17e0-d348-4515-a558-5ca559d8c559;
 Mon, 19 Oct 2020 02:47:53 +0000 (UTC)
X-Inumbo-ID: 342d17e0-d348-4515-a558-5ca559d8c559
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603075673;
  h=from:to:cc:subject:date:message-id:mime-version;
  bh=i6jWet7wGgHzs5thnQOYofjpCBTMSaeHPfauKXNT09s=;
  b=baqeWE94OwOoOmKRQuvqS3oGAQGoYGarab+b5ggXU8eqb2puWjlul3IS
   9lSoQRj5sfikesRrFXWtK9fbZPTFHExfvFVRFCJJd3TEW3suLjEo2fmMf
   FrU7oLBuj0A2eUbfSZH0Lr+gxzGhmX28wzDqdIz2oLW4NRElK6Lgwxq5N
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jjvKeHmaVy0rgK1y3N3S3TQfJQgpolCCv4TBYw67leV04XWuMDR+Ljo/1E/WWqnNbDu0A2JgAP
 RuNyU0cLfIds2APDgLqzYHGZLbbNoPdDT7oEhckxokFkNx8Gkxogjw3Ex8ta0hv0DAMGAXReFb
 I19++EihawBznTUgnYjmC3bca2OuKLxTDsW/+2hNi+Z2Ce4rfiO0fOsrv2R0s5qrEeTjilVyxt
 /Bv/05tvwXJlLAgszdsOO/SR5wcfQcGS6kmrZYAN75K0VSq12wp+BsI4Fh/LeFF1RRNzMNhqWN
 QII=
X-SBRS: 2.5
X-MesageID: 29240102
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,393,1596513600"; 
   d="scan'208";a="29240102"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <andrew.cooper3@citrix.com>, <roger.pau@citrix.com>,
	<wl@xen.org>, <jun.nakajima@intel.com>, <kevin.tian@intel.com>, "Igor
 Druzhinin" <igor.druzhinin@citrix.com>
Subject: [PATCH v2] x86/intel: insert Ice Lake X (server) model numbers
Date: Mon, 19 Oct 2020 03:47:26 +0100
Message-ID: <1603075646-24995-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain

LBR and C-state MSR handling, as well as if_pschange_mc erratum applicability,
should correspond to those of Ice Lake desktop, according to the External
Design Specification vol. 2.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
Changes in v2:
- keep partial sorting

Andrew, since you have access to these documents, please review as you have time.
---
 xen/arch/x86/acpi/cpu_idle.c | 1 +
 xen/arch/x86/hvm/vmx/vmx.c   | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 27e0b52..eca423c 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -181,6 +181,7 @@ static void do_get_hw_residencies(void *arg)
     case 0x55:
     case 0x5E:
     /* Ice Lake */
+    case 0x6A:
     case 0x7D:
     case 0x7E:
     /* Kaby Lake */
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 86b8916..8382917 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2427,6 +2427,7 @@ static bool __init has_if_pschange_mc(void)
     case 0x4e: /* Skylake M */
     case 0x5e: /* Skylake D */
     case 0x55: /* Skylake-X / Cascade Lake */
+    case 0x6a: /* Ice Lake-X */
     case 0x7d: /* Ice Lake */
     case 0x7e: /* Ice Lake */
     case 0x8e: /* Kaby / Coffee / Whiskey Lake M */
@@ -2775,7 +2776,7 @@ static const struct lbr_info *last_branch_msr_get(void)
         /* Goldmont Plus */
         case 0x7a:
         /* Ice Lake */
-        case 0x7d: case 0x7e:
+        case 0x6a: case 0x7d: case 0x7e:
         /* Tremont */
         case 0x86:
         /* Kaby Lake */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 03:06:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 03:06:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8587.22988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kULV8-0008QS-8y; Mon, 19 Oct 2020 03:06:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8587.22988; Mon, 19 Oct 2020 03:06:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kULV8-0008QL-5L; Mon, 19 Oct 2020 03:06:22 +0000
Received: by outflank-mailman (input) for mailman id 8587;
 Mon, 19 Oct 2020 03:06:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kULV7-0008PM-HU
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 03:06:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56548077-bfc9-430e-9ce2-72d9efb9841a;
 Mon, 19 Oct 2020 03:06:13 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kULUy-0005WI-Sv; Mon, 19 Oct 2020 03:06:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kULUy-0003eI-IZ; Mon, 19 Oct 2020 03:06:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kULUy-0002ta-H0; Mon, 19 Oct 2020 03:06:12 +0000
X-Inumbo-ID: 56548077-bfc9-430e-9ce2-72d9efb9841a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j4Ki5Gmkx5hF64wsbpt8m/inDzady2Q1PRs+nstDlx0=; b=AWeFUFXD1xQzT3eIbd7wz1qOgW
	LJcm9fPMUczxqmFHahqzcRzYVPUU71J629OGyp8XCgDtduHte5nLaVYC2AoCelqQmahIqW3Hby+G9
	3gsTOp63vzxMHIG8xTCB0/BsLBQofTtX2CdotQ8Mt64f+2wQA8ozj6IbMkRJnozywWu8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155960-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155960: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:debian-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 03:06:12 +0000

flight 155960 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155960/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-intel 12 debian-install fail in 155937 pass in 155960
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail in 155937 pass in 155960
 test-amd64-i386-libvirt-xsm  20 guest-start/debian.repeat  fail pass in 155937
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 155937

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155937
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155937
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155937
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155937
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155937
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155937
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155937
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155937
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155937
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155937
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155937
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   155960  2020-10-18 14:48:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 06:27:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 06:27:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8594.23012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUOdh-0000PD-OF; Mon, 19 Oct 2020 06:27:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8594.23012; Mon, 19 Oct 2020 06:27:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUOdh-0000P6-KP; Mon, 19 Oct 2020 06:27:25 +0000
Received: by outflank-mailman (input) for mailman id 8594;
 Mon, 19 Oct 2020 06:27:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUOdg-0000No-R3
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 06:27:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c223ab08-5883-48b5-a78b-c325dacb0d85;
 Mon, 19 Oct 2020 06:27:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUOdY-0001k1-TB; Mon, 19 Oct 2020 06:27:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUOdY-000552-H8; Mon, 19 Oct 2020 06:27:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUOdY-0000ap-Ge; Mon, 19 Oct 2020 06:27:16 +0000
X-Inumbo-ID: c223ab08-5883-48b5-a78b-c325dacb0d85
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BWqIgAHSvu2LlN8EuRKFAv3KPUgK4zv5DEyR+Q2c4bU=; b=TJwA5MT0xnU4yeaZtNJ0VRRXOz
	qkrU+LFOOCd+JYutR5/ckfUhgG9fMeWI/b5enSFK9e+JlB5ra5/PPfho2t1B8i4GGjxukznLaUzB3
	tof92ZwzCi+sIOnh0Y/hm1wUQwKMpvb7J7mg733nz6TuVCAgtV/oooHX+eVvGUF2p3Mg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155963-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 155963: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:guest-start/debianhvm.repeat:fail:heisenbug
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=52f6ded2a377ac4f191c84182488e454b1386239
X-Osstest-Versions-That:
    linux=85b0841aab15c12948af951d477183ab3df7de14
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 06:27:16 +0000

flight 155963 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155963/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 155945 pass in 155963
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 155945 pass in 155963
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 14 guest-start/debianhvm.repeat fail pass in 155945
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 155945

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155799
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155815
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155815
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155815
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155815
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155815
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155815
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155815
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155815
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                52f6ded2a377ac4f191c84182488e454b1386239
baseline version:
 linux                85b0841aab15c12948af951d477183ab3df7de14

Last test of basis   155815  2020-10-14 20:39:44 Z    4 days
Testing same since   155926  2020-10-17 08:41:55 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anand Jain <anand.jain@oracle.com>
  Anant Thazhemadam <anant.thazhemadam@gmail.com>
  Arjan van de Ven <arjan@linux.intel.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  David Sterba <dsterba@suse.com>
  Dmitry Golovin <dima@golovin.in>
  Dominik Przychodni <dominik.przychodni@intel.com>
  Giovanni Cabiddu <giovanni.cabiddu@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Herbert Xu <herbert@gondor.apana.org.au>
  Jan Kara <jack@suse.cz>
  Johan Hovold <johan@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josef Bacik <josef@toxicpanda.com>
  Juergen Gross <jgross@suse.com>
  Leo Yan <leo.yan@linaro.org>
  Leonid Bloch <lb.workbox@gmail.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Marcel Holtmann <marcel@holtmann.org>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Mike Leach <mike.leach@linaro.org>
  Mychaela N. Falconia <falcon@freecalypso.org>
  Nathan Chancellor <natechancellor@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Patrick Steinhardt <ps@pks.im>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Samuel Ortiz <sameo@linux.intel.com>
  Sasha Levin <sashal@kernel.org>
  Scott Chen <scott@labau.com.tw>
  Stefan Bader <stefan.bader@canonical.com>
  syzbot+009f546aa1370056b1c2@syzkaller.appspotmail.com
  Thomas Backlund <tmb@mageia.org>
  Wilken Gottwalt <wilken.gottwalt@mailbox.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   85b0841aab15..52f6ded2a377  52f6ded2a377ac4f191c84182488e454b1386239 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 07:19:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 07:19:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8597.23023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPS1-0004j0-Pk; Mon, 19 Oct 2020 07:19:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8597.23023; Mon, 19 Oct 2020 07:19:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPS1-0004it-Ml; Mon, 19 Oct 2020 07:19:25 +0000
Received: by outflank-mailman (input) for mailman id 8597;
 Mon, 19 Oct 2020 07:19:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUPRz-0004io-Ux
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 07:19:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a37262c2-e655-43ea-87f0-1f6583655d4e;
 Mon, 19 Oct 2020 07:19:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5AB3CABD5;
 Mon, 19 Oct 2020 07:19:22 +0000 (UTC)
X-Inumbo-ID: a37262c2-e655-43ea-87f0-1f6583655d4e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603091962;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=iEd1reE4mBWWJcVdcup4nuKoiAsz84JQ8s20z7nRMv0=;
	b=cehT+UtFMhRWilwnG/fdvzrI0I7SDSkQGE7kqa2ZIRhwnV3ZEilMPpvvDpxQfZ4VD1XfcE
	bS2tbfH5W85C7AMXGVK75rS2emA73mjIYOF6YldKrCziBGceFkAqL5VeOSByvuudHN9bz0
	MB5L84ETkenao/nPkSmS+jpEfcCNllg=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] tools/libs: fix build rules to correctly deal with
 multiple public headers
Message-ID: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
Date: Mon, 19 Oct 2020 09:19:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: fix header symlinking rule
2: fix uninstall rule for header files

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 07:21:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 07:21:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8599.23036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPTn-0005Uy-5n; Mon, 19 Oct 2020 07:21:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8599.23036; Mon, 19 Oct 2020 07:21:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPTn-0005Ur-2g; Mon, 19 Oct 2020 07:21:15 +0000
Received: by outflank-mailman (input) for mailman id 8599;
 Mon, 19 Oct 2020 07:21:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUPTl-0005Um-95
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 07:21:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e4da4b60-e98a-4069-ae32-3e024146e9c9;
 Mon, 19 Oct 2020 07:21:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6FF97AC2F;
 Mon, 19 Oct 2020 07:21:08 +0000 (UTC)
X-Inumbo-ID: e4da4b60-e98a-4069-ae32-3e024146e9c9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603092068;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=teGLCrcUKZL1AHCBSXxSaZpkvLlDxc0tV0m8ZZiUxoA=;
	b=euWyR9e0hN7uo1YDG9zpjzrssDrV7zHfebwHXQYbo/+weTHYYclaWlqvVJPsBnGO6jAHdd
	GsNl2OBB8/kl1qhTH7uT330sic+iV/VUw9IbXoql2TNAcSe6lM0QsktDmhB7qq/rK/Mr7n
	eqm1IxE3Pw92Q6OJzi3MW7HQMsF5P84=
Subject: [PATCH 1/2] tools/libs: fix header symlinking rule
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
Message-ID: <f4daea32-89bd-dafb-833f-1288882e58d8@suse.com>
Date: Mon, 19 Oct 2020 09:21:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Unlike pattern rules, ordinary rules with multiple targets have their
commands executed once per target. Hence when $(LIBHEADERS) expands to
more than one item, multiple identical commands get issued, which has
been observed to cause build failures relatively frequently after the
libx{c,l} code was moved to tools/libs/{ctrl,light}/.
Use a static pattern rule instead.
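The make behaviour being relied on here can be reproduced with a tiny
self-contained experiment. This is only a sketch with hypothetical header
names and directories, not the real libs.mk; `.RECIPEPREFIX` is used purely
to avoid literal tabs inside the heredoc:

```shell
# Sketch (hypothetical names): an ordinary rule listing several targets
# runs its recipe once per out-of-date target, so a loop over all headers
# executes in full N times; a static pattern rule pairs each target with
# its own prerequisite and handles one file per execution.
set -e
dir=$(mktemp -d); cd "$dir"
mkdir include
touch include/a.h include/b.h include/c.h

cat > Makefile <<'EOF'
# ">" marks recipe lines, avoiding literal tabs in this heredoc.
.RECIPEPREFIX = >
HDRS  := include/a.h include/b.h include/c.h
LINKS := $(patsubst include/%,links/%,$(HDRS))
FIXED := $(patsubst include/%,fixed/%,$(HDRS))

# Ordinary rule, three targets: the recipe runs once per target. It only
# logs (never creating the targets), so the repetition is visible even in
# a serial run; in a parallel build the identical loops would also race.
old: $(LINKS)
$(LINKS): $(HDRS)
>@for i in $(HDRS); do echo "ln $$i" >> old.log; done

# Static pattern rule, in the style of the patch: each target is paired
# with its own header, and each execution symlinks exactly one file.
new: $(FIXED)
$(FIXED): fixed/%.h: include/%.h
>@mkdir -p fixed
>@echo "ln $<" >> new.log
>@ln -sf $$(pwd)/$< fixed/
EOF

make -s old
make -s new
echo "ordinary rule logged $(wc -l < old.log) ln commands"       # 9 = 3 runs x 3 headers
echo "static pattern rule logged $(wc -l < new.log) ln commands" # 3, one per header
```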

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I'm aware Jürgen has a series pending to entirely remove the rule in
question, but this being an isolated fix which ought to be easier to
review, I thought I'd still post it. Re-basing his series over this
change should be straightforward.
However, for the above reason I'm not bothering to get right the
theoretical case of headers in subdirs of the respective include/ being
mentioned in $(LIBHEADER).

--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -79,8 +79,8 @@ headers.chk: $(LIBHEADERSGLOB) $(AUTOINC
 libxen$(LIBNAME).map:
 	echo 'VERS_$(MAJOR).$(MINOR) { global: *; };' >$@
 
-$(LIBHEADERSGLOB): $(LIBHEADERS)
-	for i in $(realpath $(LIBHEADERS)); do ln -sf $$i $(XEN_ROOT)/tools/include; done
+$(LIBHEADERSGLOB): $(XEN_ROOT)/tools/include/%.h: include/%.h
+	ln -sf $(CURDIR)/$< $(XEN_ROOT)/tools/include/
 
 lib$(LIB_FILE_NAME).a: $(LIB_OBJS)
 	$(AR) rc $@ $^



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 07:21:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 07:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8601.23048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPUE-0005bP-Es; Mon, 19 Oct 2020 07:21:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8601.23048; Mon, 19 Oct 2020 07:21:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPUE-0005bI-BF; Mon, 19 Oct 2020 07:21:42 +0000
Received: by outflank-mailman (input) for mailman id 8601;
 Mon, 19 Oct 2020 07:21:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUPUC-0005b8-Mc
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 07:21:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d53d7282-ed35-47c9-9f7f-68a60b4a4cf4;
 Mon, 19 Oct 2020 07:21:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A3956AB0E;
 Mon, 19 Oct 2020 07:21:38 +0000 (UTC)
X-Inumbo-ID: d53d7282-ed35-47c9-9f7f-68a60b4a4cf4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603092098;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QX2MHQbpg3wjWq1A1IoLHU+ocuPAEaIOhlM4GNE6C+o=;
	b=nwKqcS7kPpBWrpUpyuCJRoOpTcJV4fStXdAlG/L0Itto5ZgeKpA3nMTpinwb0tfCwiWZTT
	8AasZKgHHN+d5OjmabWLIHtbWx+0mTC193mBun4lhksol/AqiOy7cm/ihfdcfxTOxMBT2Y
	DcXJu06Gdr31Kpw6VJoEgHo9EqSX+7w=
Subject: [PATCH 2/2] tools/libs: fix uninstall rule for header files
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
Message-ID: <74c629db-0f63-aba0-f294-9668c29b8f70@suse.com>
Date: Mon, 19 Oct 2020 09:21:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This again worked correctly only as long as $(LIBHEADER) consisted of
a single entry.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
An alternative would be to use $(addprefix ) without any shell loop.

--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -107,7 +107,7 @@ install: build
 .PHONY: uninstall
 uninstall:
 	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/$(LIB_FILE_NAME).pc
-	for i in $(LIBHEADER); do rm -f $(DESTDIR)$(includedir)/$(LIBHEADER); done
+	for i in $(LIBHEADER); do rm -f $(DESTDIR)$(includedir)/$$i; done
 	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so
 	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so.$(MAJOR)
 	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)

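For readers following along, here is a minimal, self-contained shell sketch of why the old recipe removed only the first header (the sandbox paths and variable values below are illustrative, not the actual Xen tree): make expands $(LIBHEADER) to the full word list identically in every loop iteration, so only the first word ends up prefixed with the include directory.

```shell
# Illustrative sandbox; names mirror libs.mk but are assumptions.
mkdir -p /tmp/hdr-demo/include
cd /tmp/hdr-demo
touch include/a.h include/b.h include/c.h

LIBHEADER="a.h b.h c.h"   # stands in for make's $(LIBHEADER)
includedir=include

# Old (buggy) recipe: every iteration runs the same command,
#   rm -f include/a.h b.h c.h
# deleting include/a.h but looking for b.h and c.h in the cwd.
for i in $LIBHEADER; do rm -f $includedir/$LIBHEADER; done
test -e include/b.h && echo "b.h survived the buggy loop"

# Fixed recipe (the patch): use the loop variable instead.
for i in $LIBHEADER; do rm -f $includedir/$i; done
test ! -e include/b.h && echo "b.h removed by the fixed loop"
```

(The $(addprefix) alternative mentioned above would instead be, roughly, `rm -f $(addprefix $(DESTDIR)$(includedir)/,$(LIBHEADER))`, letting make build the full paths and dropping the shell loop entirely.)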


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 07:23:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 07:23:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8605.23060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPW9-0005oZ-UU; Mon, 19 Oct 2020 07:23:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8605.23060; Mon, 19 Oct 2020 07:23:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPW9-0005oS-Qr; Mon, 19 Oct 2020 07:23:41 +0000
Received: by outflank-mailman (input) for mailman id 8605;
 Mon, 19 Oct 2020 07:23:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSGA=D2=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUPW8-0005oM-U5
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 07:23:40 +0000
Received: from mail-wm1-x32d.google.com (unknown [2a00:1450:4864:20::32d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 840569d3-30a6-44ab-9c17-a072b9b8341a;
 Mon, 19 Oct 2020 07:23:39 +0000 (UTC)
Received: by mail-wm1-x32d.google.com with SMTP id q5so11632142wmq.0
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 00:23:39 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id p9sm15195256wma.12.2020.10.19.00.23.37
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 19 Oct 2020 00:23:38 -0700 (PDT)
X-Inumbo-ID: 840569d3-30a6-44ab-9c17-a072b9b8341a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=Yfcbj1U2V9MXMVv6f17g69PJ8r9vg/TmZ0vVVq4aOuw=;
        b=g5EQYaV7J/oYzm6yaMgs124XnLHS5mGI531XFf/o3EB+m+nN50W3zIPjbZcaFImcOt
         c4h3XQCqfa0mb5+hlDWT+c3h4Q3PCyilhAggiidp9W/IRLfJmpYkFSk9131beJjtrmPB
         utUhFQRSwHN8TZamk2ieJcJuOot/kdR65dWx9dX5BwJTLMpFGADGJCcBGGdV9zm/vNqQ
         kzogwG+sicy0tA3LPi/lfnQyDKJCoNrYetR36nl0u5+r+UgL6u6VEYvxvZuuk8DZfQeM
         d0TraeVJc8Aznu6vREtNyT/TbtzLGv6V6bDj+jKmyJeZ6u57n9EbKwLFYKoyqkXrorVH
         l8Xw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=Yfcbj1U2V9MXMVv6f17g69PJ8r9vg/TmZ0vVVq4aOuw=;
        b=C7ieIYNBck0Gdo8xipxcSNCajjZ453AR1NVQ0CpFXcvtPE8G68IBMGVl6pcVxumMVJ
         YUcP3NV0iskqSbORegx96EVCpXtL54yf53M5FLLfVw6iPuc7QuDNKsq+uHDsi8OG3JMK
         OfZZZCG25l0s19m31Rllg/M0r0sN/r7T+/9A7HQ7/nCoraYMn0iBypeCNNZwYNjrT/UF
         3YhdoX0F0EbO4ugyTIaFa60pHPm20jfnjbD5a3uk9tHYaxLK/2iJg+b/iCSSpYrtArCd
         6Jt2FPGFrael7CYX8hYyMDjWdjgLFNPb7ssvsyYhDJbPkDxz1FqibPP4K4azvTA27DBB
         Lq/Q==
X-Gm-Message-State: AOAM532yn2SVXtzwXtAEOLfMuAI+gi/7lfj1Xge8nbVcaV6OnSbLlHZO
	66EqQYRJKpUBK+tRyScLccs=
X-Google-Smtp-Source: ABdhPJzXujww7LqkUau/9jCXuYo1aHObW4z53PsBF42iRS+ri+3+VsQDWFzruHwRW+f1D1TBbXcF1A==
X-Received: by 2002:a1c:1f87:: with SMTP id f129mr16435034wmf.182.1603092218907;
        Mon, 19 Oct 2020 00:23:38 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Daniel De Graaf'" <dgdegra@tycho.nsa.gov>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>
References: <20201005094905.2929-1-paul@xen.org> <20201005094905.2929-3-paul@xen.org> <97648df3-dcce-cd19-9074-6ca63d94b518@xen.org>
In-Reply-To: <97648df3-dcce-cd19-9074-6ca63d94b518@xen.org>
Subject: RE: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
Date: Mon, 19 Oct 2020 08:23:36 +0100
Message-ID: <002a01d6a5e8$c36bb5a0$4a4320e0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQH+ubxU5MmdI1NaD6dn626TcODZdgGmi3hFAim7XvWpL9smcA==
Content-Language: en-gb

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 16 October 2020 16:47
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Daniel De Graaf <dgdegra@tycho.nsa.gov>; Ian Jackson
> <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
> <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: Re: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
> 
> Hi Paul,
> 
> On 05/10/2020 10:49, Paul Durrant wrote:
> > diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> > index 791f0a2592..75e855625a 100644
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -1130,6 +1130,18 @@ struct xen_domctl_vuart_op {
> >                                    */
> >   };
> >
> > +/*
> > + * XEN_DOMCTL_iommu_ctl
> > + *
> > + * Control of VM IOMMU settings
> > + */
> > +
> > +#define XEN_DOMCTL_IOMMU_INVALID 0
> 
> I can't find any user of XEN_DOMCTL_IOMMU_INVALID. What's the purpose
> for it?
> 

It's just a placeholder. I think it's generally safer that a zero opcode value is invalid.

> [...]
> 
> > diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> > index b21c3783d3..1a96d3502c 100644
> > --- a/xen/include/xsm/xsm.h
> > +++ b/xen/include/xsm/xsm.h
> > @@ -106,17 +106,19 @@ struct xsm_operations {
> >       int (*irq_permission) (struct domain *d, int pirq, uint8_t allow);
> >       int (*iomem_permission) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
> >       int (*iomem_mapping) (struct domain *d, uint64_t s, uint64_t e, uint8_t allow);
> > -    int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t
> end, uint8_t access);
> > +    int (*pci_config_permission) (struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t
> end, uint8_t access);
> 
> I can't figure out what changed here. Is it an intended change?
> 

No, I'll check and get rid of it.

  Paul

> >
> > -#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
> > +#if defined(CONFIG_HAS_PASSTHROUGH)
> > +    int (*iommu_ctl) (struct domain *d, unsigned int op);
> > +#if defined(CONFIG_HAS_PCI)
> >       int (*get_device_group) (uint32_t machine_bdf);
> >       int (*assign_device) (struct domain *d, uint32_t machine_bdf);
> >       int (*deassign_device) (struct domain *d, uint32_t machine_bdf);
> >   #endif
> > -
> > -#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
> > +#if defined(CONFIG_HAS_DEVICE_TREE)
> >       int (*assign_dtdevice) (struct domain *d, const char *dtpath);
> >       int (*deassign_dtdevice) (struct domain *d, const char *dtpath);
> > +#endif
> >   #endif
> 
> Cheers,
> 
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 07:30:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 07:30:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8609.23072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPc5-00061j-Jy; Mon, 19 Oct 2020 07:29:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8609.23072; Mon, 19 Oct 2020 07:29:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPc5-00061c-Gu; Mon, 19 Oct 2020 07:29:49 +0000
Received: by outflank-mailman (input) for mailman id 8609;
 Mon, 19 Oct 2020 07:29:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUPc4-00061X-Mj
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 07:29:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c4f8ad6-12a5-453c-a8a3-3a5415855b8b;
 Mon, 19 Oct 2020 07:29:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ACF70AC8B;
 Mon, 19 Oct 2020 07:29:46 +0000 (UTC)
X-Inumbo-ID: 2c4f8ad6-12a5-453c-a8a3-3a5415855b8b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603092586;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SyjgKY1Si4K6wfxaWj8TKW9wqTWRjKNXPgMcltgR6zs=;
	b=It3YIAh8Tbtoj4VYIigE17U+Z4WsKeAhVF8eWE5FXAR4A6ZqLuM7zWaNhsTQqTZ/HFIJ7T
	uPA9lwF1okOdmnMGwqeKjUsipEZcYNrQ8teL8XwT1IQC81o/o7tjzBJUvRmoYa8BWPW+jB
	jxlE2bwvLExroL33LnNQ+y6fVYjtRRM=
Subject: Re: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
To: paul@xen.org
Cc: 'Julien Grall' <julien@xen.org>, xen-devel@lists.xenproject.org,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>, 'Ian Jackson'
 <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-3-paul@xen.org>
 <97648df3-dcce-cd19-9074-6ca63d94b518@xen.org>
 <002a01d6a5e8$c36bb5a0$4a4320e0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7c4a0cda-5fff-c9ae-2fc1-4256aec5f694@suse.com>
Date: Mon, 19 Oct 2020 09:29:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <002a01d6a5e8$c36bb5a0$4a4320e0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.10.2020 09:23, Paul Durrant wrote:
>> From: Julien Grall <julien@xen.org>
>> Sent: 16 October 2020 16:47
>>
>> On 05/10/2020 10:49, Paul Durrant wrote:
>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>> index 791f0a2592..75e855625a 100644
>>> --- a/xen/include/public/domctl.h
>>> +++ b/xen/include/public/domctl.h
>>> @@ -1130,6 +1130,18 @@ struct xen_domctl_vuart_op {
>>>                                    */
>>>   };
>>>
>>> +/*
>>> + * XEN_DOMCTL_iommu_ctl
>>> + *
>>> + * Control of VM IOMMU settings
>>> + */
>>> +
>>> +#define XEN_DOMCTL_IOMMU_INVALID 0
>>
>> I can't find any user of XEN_DOMCTL_IOMMU_INVALID. What's the purpose
>> for it?
>>
> 
> It's just a placeholder. I think it's generally safer that a zero opcode value is invalid.

But does this then need a #define? Starting valid commands from 1
ought to be sufficient?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 07:31:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 07:31:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8611.23084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPdE-0006ml-V7; Mon, 19 Oct 2020 07:31:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8611.23084; Mon, 19 Oct 2020 07:31:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPdE-0006me-Rh; Mon, 19 Oct 2020 07:31:00 +0000
Received: by outflank-mailman (input) for mailman id 8611;
 Mon, 19 Oct 2020 07:30:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSGA=D2=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUPdC-0006mV-Sl
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 07:30:58 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 694a70b5-7173-494f-afe6-49879c748879;
 Mon, 19 Oct 2020 07:30:58 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id n15so10182132wrq.2
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 00:30:58 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id c130sm15061465wma.17.2020.10.19.00.30.55
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 19 Oct 2020 00:30:56 -0700 (PDT)
X-Inumbo-ID: 694a70b5-7173-494f-afe6-49879c748879
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=3UaDZvd2RGQZ+R4RoT1ZOULGUmWOFiVK7Le4dLFx6ew=;
        b=PNspJM3/4DX0AgPfRv3za0iK/HDPq+1VrPjU24WxPIfpRhRYINDr32iVT2h3dGlVA4
         cpCMhVKgiqzFjxBK4o0CWNjTdeExnt+1/BZPewr3C3bSdkltiaE457b4DXFxRNfZq7zZ
         XHcbVoeAkZjheFyynDiD3+CW51cmFDCyiDsOmjxPiUj8HOYLl2QV8c/4i0K/37AzMVoI
         lRwWG52Fl/xKpq1cRA3KgceACx1Pgut9lHleo1WTKwTTs2hmAPgKVkv5/wXoELEVD76m
         w1SFwRrSE2Sy5BrLKVfqJIVg3lwupmTJgy5OP37FohDkTaE5hopJA3MrgGjbIAnSP4xH
         7eMw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=3UaDZvd2RGQZ+R4RoT1ZOULGUmWOFiVK7Le4dLFx6ew=;
        b=DCio7H8LHVTs+0xOzUUtMOKGa/yw2PVtaN0tjqhzupb7ppInslQj8zrwy+xX/7fnpl
         hWxLAvm0ogbJYBnaj+PGu4Wndu+I2s/VPghPhm+hjAjeOjsGqY5MgfXLdLWydgGTBaD+
         WEyJbEJBnltIM811pE9yMARbQMIZhgP1aZrhwRxjVBgrdcamRdcahwrgn3wyhu2ZLmw7
         Z50otAjPT1m/98rE/9oraxg1hNa6ZVy9hIwJTWe1K8/dqFke7SewBkuFJCuBcJA5I0q8
         BQnw6WuYfw21DxVU3SvHzOIU0C/QST5kbjIYUFkCSTXw1H6dpGLOvR9olvIyD01qXYoH
         cQvg==
X-Gm-Message-State: AOAM531BmI8UEu2XzVmzXHxTaMVPeAwmpovJRfbw+J7N9P/yEuljsdAO
	VxEPoC5GbHXXJzh5aiWOUOI=
X-Google-Smtp-Source: ABdhPJzAUvEkPsz1TKSyh41TsRpeAIW378YBambYEUCQvhRO6hg9ydpoN8IyhzXe5sBEFx6HJTesmA==
X-Received: by 2002:adf:f9cb:: with SMTP id w11mr17847067wrr.131.1603092657195;
        Mon, 19 Oct 2020 00:30:57 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
Cc: "'Paul Durrant'" <pdurrant@amazon.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Anthony PERARD'" <anthony.perard@citrix.com>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
References: <20201005094905.2929-1-paul@xen.org> <20201005094905.2929-4-paul@xen.org> <471c9800-2ff0-d180-0840-e29dee4d3b4f@xen.org>
In-Reply-To: <471c9800-2ff0-d180-0840-e29dee4d3b4f@xen.org>
Subject: RE: [PATCH 3/5] libxl / iommu / domctl: introduce XEN_DOMCTL_IOMMU_SET_ALLOCATION...
Date: Mon, 19 Oct 2020 08:30:55 +0100
Message-ID: <002c01d6a5e9$c8a778f0$59f66ad0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQH+ubxU5MmdI1NaD6dn626TcODZdgKhLAozArQHbBSpI7R0MA==
Content-Language: en-gb

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: 16 October 2020 16:55
> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
> Cc: Paul Durrant <pdurrant@amazon.com>; Ian Jackson <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Andrew
> Cooper <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Jan Beulich
> <jbeulich@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Anthony PERARD
> <anthony.perard@citrix.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné
> <roger.pau@citrix.com>
> Subject: Re: [PATCH 3/5] libxl / iommu / domctl: introduce XEN_DOMCTL_IOMMU_SET_ALLOCATION...
>
> Hi Paul,
>
> On 05/10/2020 10:49, Paul Durrant wrote:
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > ... sub-operation of XEN_DOMCTL_iommu_ctl.
> >
> > This patch adds a new sub-operation into the domctl. The code in iommu_ctl()
> > is extended to call a new arch-specific iommu_set_allocation() function which
> > will be called with the IOMMU page-table overhead (in 4k pages) in response
>
> Why 4KB? Wouldn't it be better to use the hypervisor page size instead?
>

I think I'll follow the shadow/hap code more closely and just pass a value in MB, then any issue with page size is left inside Xen.

> > diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> > index 75e855625a..6402678838 100644
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -1138,8 +1138,16 @@ struct xen_domctl_vuart_op {
> >
> >   #define XEN_DOMCTL_IOMMU_INVALID 0
> >
> > +#define XEN_DOMCTL_IOMMU_SET_ALLOCATION 1
> > +struct xen_domctl_iommu_set_allocation {
> > +    uint32_t nr_pages;
>
> Shouldn't this be a 64-bit value?

If I pass the value in MB then 32-bits will cover it, I think. I do need to add padding though.

  Paul

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 07:33:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 07:33:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8616.23096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPfC-0006y8-B5; Mon, 19 Oct 2020 07:33:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8616.23096; Mon, 19 Oct 2020 07:33:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPfC-0006y1-83; Mon, 19 Oct 2020 07:33:02 +0000
Received: by outflank-mailman (input) for mailman id 8616;
 Mon, 19 Oct 2020 07:33:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSGA=D2=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUPfA-0006xw-Kf
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 07:33:00 +0000
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bea8250-dc53-49df-9d9d-584004cf35b9;
 Mon, 19 Oct 2020 07:32:59 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id q5so11660151wmq.0
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 00:32:59 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-235.amazon.com. [54.240.197.235])
 by smtp.gmail.com with ESMTPSA id v8sm15657225wmb.20.2020.10.19.00.32.57
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 19 Oct 2020 00:32:58 -0700 (PDT)
X-Inumbo-ID: 5bea8250-dc53-49df-9d9d-584004cf35b9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=jZa8/08xg2HhiRyEdtd7JzfFusEcmIi51CFreIpCG4Q=;
        b=oaknzuBuzHg+xmxEA0qkQsjp/Efv6ufRT5UVz5FpHhDcRjL/KWpwbpK+zbzqHLCem+
         SK9S0ZbEW+ZfGD/lqF3oN4xZ+Yd/Ja+mafVKxSjeMDB4sCt4P7st7UE+4ZI0TGT8rpq6
         wEI4MpUsde4SgX3VL7GLiImsD9gmR2LLP7vTRxLICUKIic4NtU/aM0VLBS35UqzGAa7y
         kz8eNgpxbdol3QN+dC+DzkLrzNLNIUngGj40H+n9SBYOSZ1C+cVj+0rkgTPRykDuQdDu
         hr1Dq0adX2Q94ZTexSCLxct0YPxww8Rx5E+0SIHaEzXTw4BuekNRtTQp2foyVXQptVWN
         vkpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=jZa8/08xg2HhiRyEdtd7JzfFusEcmIi51CFreIpCG4Q=;
        b=qFennhfQc4vI/4aOI60+6Lub8K2pORi5Ty6w1WkA7wG6eDOP8arPTTYeLSXdITFoM6
         f2AdAa5G23lzA1iKjhk7gTEJbTdHiAJ7SWDmwm88XcDudGm92Vimf3aHJNcC9rGgfPH+
         hjwkzFrhNotsSLwf2EvaIuwMxIylN9ZH/LHx3ZqO69BUBk0Q30UIzX/9TMHZHUOFQy2j
         vP7vBWawGk4kPRBL2QesBoI9YYhUo0GW2t4NEH+RS1dc8QiVBdCm2nIKeBvMuVv8U8B0
         Fw55ccfZ9+4NA6dVBahZwXECdfJo5p1GWLZnicreLre2jAiiaAif+KWcaa8tZ9FD8nft
         T3SQ==
X-Gm-Message-State: AOAM531ZwlEWvS5NdbR22G6Al/AR/gVSIswDlm0gbg+wawIVd2spS4L2
	xrIHdxmiKUkjmlUk+i7HLfU=
X-Google-Smtp-Source: ABdhPJx2BiYyfhVZ1HGWs4WCT5Aj9TYeliEOXidJVSUkUcsFTbrMhQeZhGr4xK2koWgpYBcfHES3Mw==
X-Received: by 2002:a1c:9a4b:: with SMTP id c72mr15711301wme.157.1603092778890;
        Mon, 19 Oct 2020 00:32:58 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>,
	"'Paul Durrant'" <pdurrant@amazon.com>,
	"'Daniel De Graaf'" <dgdegra@tycho.nsa.gov>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>
References: <20201005094905.2929-1-paul@xen.org> <20201005094905.2929-3-paul@xen.org> <97648df3-dcce-cd19-9074-6ca63d94b518@xen.org> <002a01d6a5e8$c36bb5a0$4a4320e0$@xen.org> <7c4a0cda-5fff-c9ae-2fc1-4256aec5f694@suse.com>
In-Reply-To: <7c4a0cda-5fff-c9ae-2fc1-4256aec5f694@suse.com>
Subject: RE: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
Date: Mon, 19 Oct 2020 08:32:57 +0100
Message-ID: <002d01d6a5ea$112d9aa0$3388cfe0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQH+ubxU5MmdI1NaD6dn626TcODZdgGmi3hFAim7XvUAu7WuFwHUyrZTqRtZupA=
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 October 2020 08:30
> To: paul@xen.org
> Cc: 'Julien Grall' <julien@xen.org>; xen-devel@lists.xenproject.org; =
'Paul Durrant'
> <pdurrant@amazon.com>; 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>; 'Ian =
Jackson' <iwj@xenproject.org>;
> 'Wei Liu' <wl@xen.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; =
'George Dunlap'
> <george.dunlap@citrix.com>; 'Stefano Stabellini' =
<sstabellini@kernel.org>
> Subject: Re: [PATCH 2/5] iommu / domctl: introduce =
XEN_DOMCTL_iommu_ctl
>=20
> On 19.10.2020 09:23, Paul Durrant wrote:
> >> From: Julien Grall <julien@xen.org>
> >> Sent: 16 October 2020 16:47
> >>
> >> On 05/10/2020 10:49, Paul Durrant wrote:
> >>> diff --git a/xen/include/public/domctl.h =
b/xen/include/public/domctl.h
> >>> index 791f0a2592..75e855625a 100644
> >>> --- a/xen/include/public/domctl.h
> >>> +++ b/xen/include/public/domctl.h
> >>> @@ -1130,6 +1130,18 @@ struct xen_domctl_vuart_op {
> >>>                                    */
> >>>   };
> >>>
> >>> +/*
> >>> + * XEN_DOMCTL_iommu_ctl
> >>> + *
> >>> + * Control of VM IOMMU settings
> >>> + */
> >>> +
> >>> +#define XEN_DOMCTL_IOMMU_INVALID 0
> >>
> >> I can't find any user of XEN_DOMCTL_IOMMU_INVALID. What's the purpose
> >> for it?
> >>
> >
> > It's just a placeholder. I think it's generally safer that a zero opcode value is invalid.
>
> But does this then need a #define? Starting valid command from 1
> ought to be sufficient?
>

Seems harmless enough, and it also seemed the best way to reserve 0,
since this patch doesn't actually introduce any ops. As it has caused
so much controversy though, I'll remove it.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 07:38:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 07:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8620.23108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPkX-0007Bn-Vw; Mon, 19 Oct 2020 07:38:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8620.23108; Mon, 19 Oct 2020 07:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUPkX-0007Bg-Sz; Mon, 19 Oct 2020 07:38:33 +0000
Received: by outflank-mailman (input) for mailman id 8620;
 Mon, 19 Oct 2020 07:38:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUPkX-0007Bb-8k
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 07:38:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fa34437-509c-4bbc-9af8-8ebbc3cb6b8b;
 Mon, 19 Oct 2020 07:38:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C5806ACD5;
 Mon, 19 Oct 2020 07:38:31 +0000 (UTC)
X-Inumbo-ID: 8fa34437-509c-4bbc-9af8-8ebbc3cb6b8b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603093111;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b/rptmWF1YODygFtjxwcSlxU0uXAJxRUrPQcm5wLcJw=;
	b=QZiVZmm7sXRhMlHRcFsw46aFihDABPBlPqjUkPJJpSPH+Nm3nnYF2823Chi4SpQYB/y/Vd
	VAwht7eobCeyOzD9rf5tTESEx4AFtC5dtUafadudQyClHdaBYpzHQeyUu5bo2220XpfnQV
	PjPdd1TyHH2BAtUqBr6knGmpy6D0xHo=
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20201014153150.83875-1-jandryuk@gmail.com>
 <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
 <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
 <CAKf6xpt_VhJ5r4scuAkWU3aGxgwiYNtHaBDpMoFJS+q837aFiA@mail.gmail.com>
 <d8e93366-0f99-37c7-e5f4-8efaf804d2e2@suse.com>
 <CAKf6xpv9qHJydjQ_TyZEKZAK14T4m2GLLqEwyMTraUxqvg+1Xw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d10143d9-0082-fa09-3ef0-2d13e5ee54ef@suse.com>
Date: Mon, 19 Oct 2020 09:38:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <CAKf6xpv9qHJydjQ_TyZEKZAK14T4m2GLLqEwyMTraUxqvg+1Xw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.10.2020 18:28, Jason Andryuk wrote:
> Looks like we can pass XC_DOM_PV_CONTAINER/XC_DOM_HVM_CONTAINER down
> into elf_xen_parse().  Then we would just validate phys_entry for HVM
> and virt_entry for PV.  Does that sound reasonable?

I think so, yes. Assuming of course that you'll convert the XC_DOM_*
into a boolean, so that the hypervisor's use of libelf/ can also be
suitably adjusted.
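A rough sketch of what that validation split might look like (purely illustrative - the structure, field, and function names here stand in for the real libelf code, with the container type already reduced to the boolean discussed above):

```c
#include <assert.h>

#define DEMO_UNSET_ADDR (~0ULL)

/* Hypothetical subset of the parsed ELF notes. */
struct demo_elf_parms {
    unsigned long long virt_entry;  /* PV entry point (virtual address) */
    unsigned long long phys_entry;  /* PVH/HVM entry point (physical address) */
};

/* Validate only the entry point relevant to the container type:
 * for an HVM-style container check phys_entry, for PV check virt_entry. */
static int demo_validate_entry(const struct demo_elf_parms *parms, int hvm)
{
    unsigned long long entry = hvm ? parms->phys_entry : parms->virt_entry;

    return entry == DEMO_UNSET_ADDR ? -1 : 0;
}
```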

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 08:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 08:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8655.23151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQZy-0004Ri-SP; Mon, 19 Oct 2020 08:31:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8655.23151; Mon, 19 Oct 2020 08:31:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQZy-0004Rb-Pa; Mon, 19 Oct 2020 08:31:42 +0000
Received: by outflank-mailman (input) for mailman id 8655;
 Mon, 19 Oct 2020 08:31:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUQZx-0004RW-65
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 08:31:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bd6afc0-4b3d-4217-bf89-4345c8549902;
 Mon, 19 Oct 2020 08:31:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8F22AACF1;
 Mon, 19 Oct 2020 08:31:38 +0000 (UTC)
X-Inumbo-ID: 6bd6afc0-4b3d-4217-bf89-4345c8549902
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603096298;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=gDl2iUCT19sfCpQsI79DfcMdnshL273KLKcJMUbWfTY=;
	b=UsmOAvcPaA5bM1y/Cdcp7Dy1bKhYxB2DkZYYMAph4cAC/kEJic52b+iTwz5SOcQp113SUg
	CoBT3UjLM96XVaEbO1NUjVyRTvCaGBPMUwgcvgYfqdthiq6Q/c9qJ20arDnsbVjJ5FY3y7
	EyXld53OncBbg4TYJlkoOd2ojkN90gQ=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Marek Marczykowski <marmarek@invisiblethingslab.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] tools/python: pass more -rpath-link options to ld
Message-ID: <d10bb94f-c572-6977-40a4-57a61da4094b@suse.com>
Date: Mon, 19 Oct 2020 10:31:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

With the split of libraries, I've observed a number of warnings from
(old?) ld.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
It's unclear to me whether this is ld version dependent - the pattern
of where I've seen such warnings doesn't suggest a clear version
dependency.

--- a/tools/python/setup.py
+++ b/tools/python/setup.py
@@ -7,10 +7,15 @@ XEN_ROOT = "../.."
 extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
 
 PATH_XEN      = XEN_ROOT + "/tools/include"
+PATH_LIBXENTOOLCORE = XEN_ROOT + "/tools/libs/toolcore"
 PATH_LIBXENTOOLLOG = XEN_ROOT + "/tools/libs/toollog"
+PATH_LIBXENCALL = XEN_ROOT + "/tools/libs/call"
 PATH_LIBXENEVTCHN = XEN_ROOT + "/tools/libs/evtchn"
+PATH_LIBXENGNTTAB = XEN_ROOT + "/tools/libs/gnttab"
 PATH_LIBXENCTRL = XEN_ROOT + "/tools/libs/ctrl"
 PATH_LIBXENGUEST = XEN_ROOT + "/tools/libs/guest"
+PATH_LIBXENDEVICEMODEL = XEN_ROOT + "/tools/libs/devicemodel"
+PATH_LIBXENFOREIGNMEMORY = XEN_ROOT + "/tools/libs/foreignmemory"
 PATH_XENSTORE = XEN_ROOT + "/tools/libs/store"
 
 xc = Extension("xc",
@@ -24,7 +29,13 @@ xc = Extension("xc",
                library_dirs       = [ PATH_LIBXENCTRL, PATH_LIBXENGUEST ],
                libraries          = [ "xenctrl", "xenguest" ],
                depends            = [ PATH_LIBXENCTRL + "/libxenctrl.so", PATH_LIBXENGUEST + "/libxenguest.so" ],
-               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],
+               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENCALL,
+                                      "-Wl,-rpath-link="+PATH_LIBXENDEVICEMODEL,
+                                      "-Wl,-rpath-link="+PATH_LIBXENEVTCHN,
+                                      "-Wl,-rpath-link="+PATH_LIBXENFOREIGNMEMORY,
+                                      "-Wl,-rpath-link="+PATH_LIBXENGNTTAB,
+                                      "-Wl,-rpath-link="+PATH_LIBXENTOOLCORE,
+                                      "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],
                sources            = [ "xen/lowlevel/xc/xc.c" ])
 
 xs = Extension("xs",
@@ -33,6 +44,7 @@ xs = Extension("xs",
                library_dirs       = [ PATH_XENSTORE ],
                libraries          = [ "xenstore" ],
                depends            = [ PATH_XENSTORE + "/libxenstore.so" ],
+               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLCORE ],
                sources            = [ "xen/lowlevel/xs/xs.c" ])
 
 plat = os.uname()[0]


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 08:38:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 08:38:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8659.23167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQgA-0004ho-MY; Mon, 19 Oct 2020 08:38:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8659.23167; Mon, 19 Oct 2020 08:38:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQgA-0004hh-JU; Mon, 19 Oct 2020 08:38:06 +0000
Received: by outflank-mailman (input) for mailman id 8659;
 Mon, 19 Oct 2020 08:38:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUQg9-0004h8-JX
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 08:38:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b0e93bc-e5e3-4433-bea9-c146400dbc0c;
 Mon, 19 Oct 2020 08:37:58 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUQg2-0004wY-0o; Mon, 19 Oct 2020 08:37:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUQg1-0002aI-NO; Mon, 19 Oct 2020 08:37:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUQg1-000554-Mt; Mon, 19 Oct 2020 08:37:57 +0000
X-Inumbo-ID: 8b0e93bc-e5e3-4433-bea9-c146400dbc0c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QodAaPzjw7PTNAXYmza0/o943cAwCq7V7a0qdMMdLOI=; b=lKISZqEBIwPBthSn0XJfSBGyRI
	IFC3wbuJr78JuR/dogqYDYdR9ORgUa97o+gIgkSTe1SEtFX2C2FbGL8aX5VX+mSjunwfHIyrRIYva
	dNR5n0mEOyGQcwAuerfRjxknQnWfx/g1UKgz4pYKm/i+BaQN4CqgV/N8Hsg0yS8eSDS0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155969-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155969: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=709b163940c55604b983400eb49dad144a2aa091
X-Osstest-Versions-That:
    ovmf=73e3cb6c7eea4f5db81c87574dcefe1282de4772
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 08:37:57 +0000

flight 155969 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155969/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 709b163940c55604b983400eb49dad144a2aa091
baseline version:
 ovmf                 73e3cb6c7eea4f5db81c87574dcefe1282de4772

Last test of basis   155942  2020-10-18 01:09:50 Z    1 days
Testing same since   155957  2020-10-18 11:07:14 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Terry Lee <terry.lee@hpe.com>
  Zhichao Gao <zhichao.gao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   73e3cb6c7e..709b163940  709b163940c55604b983400eb49dad144a2aa091 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 08:42:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 08:42:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8662.23178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQkg-0005Wo-8U; Mon, 19 Oct 2020 08:42:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8662.23178; Mon, 19 Oct 2020 08:42:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQkg-0005Wh-5d; Mon, 19 Oct 2020 08:42:46 +0000
Received: by outflank-mailman (input) for mailman id 8662;
 Mon, 19 Oct 2020 08:42:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUQke-0005Wc-U3
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 08:42:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 782ea23b-ddc4-4ec7-b6c8-478d4d14405a;
 Mon, 19 Oct 2020 08:42:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 46892ADCC;
 Mon, 19 Oct 2020 08:42:43 +0000 (UTC)
X-Inumbo-ID: 782ea23b-ddc4-4ec7-b6c8-478d4d14405a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603096963;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=NjUdc3Xz8megZ6ucNz+/Pu0EanI1CIIppZmzrzwJ0/s=;
	b=RzIdiMSiBO1bvyDiJYkCHM/o7xr9/c2inP/eSueQb7r2+aRiiRBH+ShXDeL5eM3FOMV6am
	CBoan3gPuU4h4jPuNvUbf8zZ7CghKIUdTt0zmIx6N242KGj6IOmsCbrmvtxNA8Duv+hnQz
	nVhTp7kZMmXeeb2+FeVkWVe5ETxMosg=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/3] x86: shim building adjustments (plus shadow follow-on)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Message-ID: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Date: Mon, 19 Oct 2020 10:42:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The change addressing the shadow-related build issue noticed by
Andrew went in already. The build breakage goes beyond this
specific combination though - PV_SHIM_EXCLUSIVE plus HVM is
similarly an issue. This is what the 1st patch tries to take care
of, in a shape already noticed on IRC to be controversial. I'm
submitting the change nevertheless because for the moment there
looks to be a majority in favor of going this route. One argument
not voiced there yet: what good does it do to allow a user to
enable HVM when, on the resulting hypervisor, they still can't
run HVM guests (the hypervisor still being a dedicated PV shim
one)? On top of this, the alternative approach is likely going
to get ugly.

The shadow-related adjustments are here merely because the desire
to make them was noticed in the context of the patch which has
already gone in.

1: don't permit HVM and PV_SHIM_EXCLUSIVE at the same time
2: refactor shadow_vram_{get,put}_l1e()
3: sh_{make,destroy}_monitor_table() are "even more" HVM-only

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 08:44:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 08:44:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8665.23191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQm6-0005g3-KG; Mon, 19 Oct 2020 08:44:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8665.23191; Mon, 19 Oct 2020 08:44:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQm6-0005fw-Gn; Mon, 19 Oct 2020 08:44:14 +0000
Received: by outflank-mailman (input) for mailman id 8665;
 Mon, 19 Oct 2020 08:44:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUQm4-0005fr-NR
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 08:44:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84e74d34-620b-4c58-bddd-da16b64806cb;
 Mon, 19 Oct 2020 08:44:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9DA8AC8C;
 Mon, 19 Oct 2020 08:44:10 +0000 (UTC)
X-Inumbo-ID: 84e74d34-620b-4c58-bddd-da16b64806cb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603097050;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=COuzPGSKu8kTkPCTbkSS7vettxOSsQ5QPecx7OAzo4Q=;
	b=M/ycs8f9xbKzqQQ04tWGJXJ2QwQ6sByoJJF0QHtkI66kiu6i/pxfZDATX9E5zF4dgmzObp
	lHy9UzyJOcCskoIXgknoNjdNb/gMM5l3/d2pUXcRfzpFJjLscUZpDlqCFt22JBOuTTfVG2
	Ec50qgsXmhT4S8WP9d7ENosGF62BIfI=
Subject: [PATCH v3 1/3] x86/shim: don't permit HVM and PV_SHIM_EXCLUSIVE at
 the same time
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Message-ID: <2c687e27-9219-b2e4-f383-9ab6b3e1fbfe@suse.com>
Date: Mon, 19 Oct 2020 10:44:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

This combination doesn't really make sense (and there are likely more
that don't); in particular, even if the code built with both options
set, HVM guests wouldn't work (and I think one wouldn't be able to
create one in the first place). The alternative here would be some
presumably intrusive #ifdef-ary to get this combination to actually
build (but still not work) again.

Fixes: 8b5b49ceb3d9 ("x86: don't include domctl and alike in shim-exclusive builds")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
v2: Restore lost default setting.

--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -23,7 +23,7 @@ config X86
 	select HAS_PDX
 	select HAS_SCHED_GRANULARITY
 	select HAS_UBSAN
-	select HAS_VPCI if !PV_SHIM_EXCLUSIVE && HVM
+	select HAS_VPCI if HVM
 	select NEEDS_LIBELF
 	select NUMA
 
@@ -90,8 +90,9 @@ config PV_LINEAR_PT
          If unsure, say Y.
 
 config HVM
-	def_bool !PV_SHIM_EXCLUSIVE
-	prompt "HVM support"
+	bool "HVM support"
+	depends on !PV_SHIM_EXCLUSIVE
+	default y
 	---help---
 	  Interfaces to support HVM domains.  HVM domains require hardware
 	  virtualisation extensions (e.g. Intel VT-x, AMD SVM), but can boot


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 08:44:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 08:44:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8666.23203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQmT-0005lm-ST; Mon, 19 Oct 2020 08:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8666.23203; Mon, 19 Oct 2020 08:44:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQmT-0005le-PC; Mon, 19 Oct 2020 08:44:37 +0000
Received: by outflank-mailman (input) for mailman id 8666;
 Mon, 19 Oct 2020 08:44:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUQmS-0005lV-Ga
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 08:44:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e516737-8928-4f4e-b14c-517189b4b996;
 Mon, 19 Oct 2020 08:44:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D5601AC8C;
 Mon, 19 Oct 2020 08:44:31 +0000 (UTC)
X-Inumbo-ID: 4e516737-8928-4f4e-b14c-517189b4b996
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603097071;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LBtWaFEBZUGhu8aWD/XRiEveL+8Latdj2CST4lhLqjg=;
	b=DhVmTROQwDsSTzNNRaxfAbgh4UyMGjhWLp8CovLI77x1qbEZYK8KN5l+s65WcuJpvkfOjO
	OJnprwjMkedZOTfxU75qz5pfXBHi8TqvSy6kQBw9Rl1lZVfNYEijZgYnewow4izgUxPJZ0
	6dWQNY4ziuMw1BZ1wbZ0N3WEpF5XoGs=
Subject: [PATCH v3 2/3] x86/shadow: refactor shadow_vram_{get,put}_l1e()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Message-ID: <bc686036-18c0-ba7b-b8e5-a20b914aac68@suse.com>
Date: Mon, 19 Oct 2020 10:44:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

By passing the functions an MFN and the L1 entry's flags instead of a
shadow L1 entry, only a single instance of each is needed (rather than
one per shadow mode); they were pretty large for inline functions
anyway.

While moving the code, also adjust coding style and add const where
sensible / possible.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
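For readers skimming the diff, the bookkeeping the two (now unified)
helpers share can be sketched in plain userspace C. All names and types
below are simplified stand-ins, not Xen's real ones; the real helpers
additionally validate the MFN, translate it to a GFN, and BUG_ON()
shared pages:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define INVALID_PADDR ((uint64_t)~0ULL)

/* Simplified stand-in for struct sh_dirty_vram (illustrative only). */
struct dirty_vram {
    unsigned long begin_pfn, end_pfn;   /* tracked GFN range */
    uint64_t sl1ma[8];                  /* recorded shadow-PTE address per frame */
    uint8_t dirty_bitmap[1];            /* one bit per tracked frame */
};

/* "get" path: on the first writable mapping of a tracked frame, record
 * where its shadow L1 entry lives so a later "put" can recognise it. */
static void vram_get(struct dirty_vram *dv, unsigned long gfn,
                     uint64_t sl1ma, unsigned int type_refs)
{
    if (gfn >= dv->begin_pfn && gfn < dv->end_pfn && type_refs == 1)
        dv->sl1ma[gfn - dv->begin_pfn] = sl1ma;
}

/* "put" path: a writable mapping goes away; decide whether the frame
 * must be considered dirty, mirroring the logic in the patch. */
static void vram_put(struct dirty_vram *dv, unsigned long gfn,
                     uint64_t sl1ma, unsigned int type_refs, bool pte_dirty)
{
    unsigned long i;
    bool dirty = false;

    if (gfn < dv->begin_pfn || gfn >= dv->end_pfn)
        return;
    i = gfn - dv->begin_pfn;

    if (type_refs == 1) {                    /* last writable reference */
        if (dv->sl1ma[i] == INVALID_PADDR) {
            dirty = true;                    /* unknown mapping: assume dirty */
        } else {
            dv->sl1ma[i] = INVALID_PADDR;    /* it was the recorded one */
            if (pte_dirty)
                dirty = true;
        }
    } else {
        dirty = true;                        /* several mappings: assume dirty */
        if (dv->sl1ma[i] == sl1ma)           /* forget it if it was recorded */
            dv->sl1ma[i] = INVALID_PADDR;
    }

    if (dirty)
        dv->dirty_bitmap[i / 8] |= 1 << (i % 8);
}
```

The asymmetry is deliberate: "get" only records, while "put" must decide
dirtiness even when the recorded entry and the one going away disagree.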

--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -903,6 +903,104 @@ int shadow_track_dirty_vram(struct domai
     return rc;
 }
 
+void shadow_vram_get_mfn(mfn_t mfn, unsigned int l1f,
+                         mfn_t sl1mfn, const void *sl1e,
+                         const struct domain *d)
+{
+    unsigned long gfn;
+    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
+
+    ASSERT(is_hvm_domain(d));
+
+    if ( !dirty_vram /* tracking disabled? */ ||
+         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
+         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
+        return;
+
+    gfn = gfn_x(mfn_to_gfn(d, mfn));
+    /* Page sharing not supported on shadow PTs */
+    BUG_ON(SHARED_M2P(gfn));
+
+    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
+    {
+        unsigned long i = gfn - dirty_vram->begin_pfn;
+        const struct page_info *page = mfn_to_page(mfn);
+
+        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
+            /* Initial guest reference, record it */
+            dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn) |
+                                   PAGE_OFFSET(sl1e);
+    }
+}
+
+void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
+                         mfn_t sl1mfn, const void *sl1e,
+                         const struct domain *d)
+{
+    unsigned long gfn;
+    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
+
+    ASSERT(is_hvm_domain(d));
+
+    if ( !dirty_vram /* tracking disabled? */ ||
+         !(l1f & _PAGE_RW) /* read-only mapping? */ ||
+         !mfn_valid(mfn) /* mfn can be invalid in mmio_direct */)
+        return;
+
+    gfn = gfn_x(mfn_to_gfn(d, mfn));
+    /* Page sharing not supported on shadow PTs */
+    BUG_ON(SHARED_M2P(gfn));
+
+    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
+    {
+        unsigned long i = gfn - dirty_vram->begin_pfn;
+        const struct page_info *page = mfn_to_page(mfn);
+        bool dirty = false;
+        paddr_t sl1ma = mfn_to_maddr(sl1mfn) | PAGE_OFFSET(sl1e);
+
+        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
+        {
+            /* Last reference */
+            if ( dirty_vram->sl1ma[i] == INVALID_PADDR )
+            {
+                /* We didn't know it was that one, let's say it is dirty */
+                dirty = true;
+            }
+            else
+            {
+                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
+                dirty_vram->sl1ma[i] = INVALID_PADDR;
+                if ( l1f & _PAGE_DIRTY )
+                    dirty = true;
+            }
+        }
+        else
+        {
+            /* We had more than one reference, just consider the page dirty. */
+            dirty = true;
+            /* Check that it's not the one we recorded. */
+            if ( dirty_vram->sl1ma[i] == sl1ma )
+            {
+                /* Too bad, we remembered the wrong one... */
+                dirty_vram->sl1ma[i] = INVALID_PADDR;
+            }
+            else
+            {
+                /*
+                 * Ok, our recorded sl1e is still pointing to this page, let's
+                 * just hope it will remain.
+                 */
+            }
+        }
+
+        if ( dirty )
+        {
+            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
+            dirty_vram->last_dirty = NOW();
+        }
+    }
+}
+
 /*
  * Local variables:
  * mode: C
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1047,107 +1047,6 @@ static int shadow_set_l2e(struct domain
     return flags;
 }
 
-static inline void shadow_vram_get_l1e(shadow_l1e_t new_sl1e,
-                                       shadow_l1e_t *sl1e,
-                                       mfn_t sl1mfn,
-                                       struct domain *d)
-{
-#ifdef CONFIG_HVM
-    mfn_t mfn = shadow_l1e_get_mfn(new_sl1e);
-    int flags = shadow_l1e_get_flags(new_sl1e);
-    unsigned long gfn;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
-
-    if ( !is_hvm_domain(d) || !dirty_vram /* tracking disabled? */
-         || !(flags & _PAGE_RW) /* read-only mapping? */
-         || !mfn_valid(mfn) )   /* mfn can be invalid in mmio_direct */
-        return;
-
-    gfn = gfn_x(mfn_to_gfn(d, mfn));
-    /* Page sharing not supported on shadow PTs */
-    BUG_ON(SHARED_M2P(gfn));
-
-    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
-    {
-        unsigned long i = gfn - dirty_vram->begin_pfn;
-        struct page_info *page = mfn_to_page(mfn);
-
-        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
-            /* Initial guest reference, record it */
-            dirty_vram->sl1ma[i] = mfn_to_maddr(sl1mfn)
-                | ((unsigned long)sl1e & ~PAGE_MASK);
-    }
-#endif
-}
-
-static inline void shadow_vram_put_l1e(shadow_l1e_t old_sl1e,
-                                       shadow_l1e_t *sl1e,
-                                       mfn_t sl1mfn,
-                                       struct domain *d)
-{
-#ifdef CONFIG_HVM
-    mfn_t mfn = shadow_l1e_get_mfn(old_sl1e);
-    int flags = shadow_l1e_get_flags(old_sl1e);
-    unsigned long gfn;
-    struct sh_dirty_vram *dirty_vram = d->arch.hvm.dirty_vram;
-
-    if ( !is_hvm_domain(d) || !dirty_vram /* tracking disabled? */
-         || !(flags & _PAGE_RW) /* read-only mapping? */
-         || !mfn_valid(mfn) )   /* mfn can be invalid in mmio_direct */
-        return;
-
-    gfn = gfn_x(mfn_to_gfn(d, mfn));
-    /* Page sharing not supported on shadow PTs */
-    BUG_ON(SHARED_M2P(gfn));
-
-    if ( (gfn >= dirty_vram->begin_pfn) && (gfn < dirty_vram->end_pfn) )
-    {
-        unsigned long i = gfn - dirty_vram->begin_pfn;
-        struct page_info *page = mfn_to_page(mfn);
-        int dirty = 0;
-        paddr_t sl1ma = mfn_to_maddr(sl1mfn)
-            | ((unsigned long)sl1e & ~PAGE_MASK);
-
-        if ( (page->u.inuse.type_info & PGT_count_mask) == 1 )
-        {
-            /* Last reference */
-            if ( dirty_vram->sl1ma[i] == INVALID_PADDR ) {
-                /* We didn't know it was that one, let's say it is dirty */
-                dirty = 1;
-            }
-            else
-            {
-                ASSERT(dirty_vram->sl1ma[i] == sl1ma);
-                dirty_vram->sl1ma[i] = INVALID_PADDR;
-                if ( flags & _PAGE_DIRTY )
-                    dirty = 1;
-            }
-        }
-        else
-        {
-            /* We had more than one reference, just consider the page dirty. */
-            dirty = 1;
-            /* Check that it's not the one we recorded. */
-            if ( dirty_vram->sl1ma[i] == sl1ma )
-            {
-                /* Too bad, we remembered the wrong one... */
-                dirty_vram->sl1ma[i] = INVALID_PADDR;
-            }
-            else
-            {
-                /* Ok, our recorded sl1e is still pointing to this page, let's
-                 * just hope it will remain. */
-            }
-        }
-        if ( dirty )
-        {
-            dirty_vram->dirty_bitmap[i / 8] |= 1 << (i % 8);
-            dirty_vram->last_dirty = NOW();
-        }
-    }
-#endif
-}
-
 static int shadow_set_l1e(struct domain *d,
                           shadow_l1e_t *sl1e,
                           shadow_l1e_t new_sl1e,
@@ -1156,6 +1055,7 @@ static int shadow_set_l1e(struct domain
 {
     int flags = 0;
     shadow_l1e_t old_sl1e;
+    unsigned int old_sl1f;
 #if SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC
     mfn_t new_gmfn = shadow_l1e_get_mfn(new_sl1e);
 #endif
@@ -1194,7 +1094,9 @@ static int shadow_set_l1e(struct domain
                 new_sl1e = shadow_l1e_flip_flags(new_sl1e, rc);
                 /* fall through */
             case 0:
-                shadow_vram_get_l1e(new_sl1e, sl1e, sl1mfn, d);
+                shadow_vram_get_mfn(shadow_l1e_get_mfn(new_sl1e),
+                                    shadow_l1e_get_flags(new_sl1e),
+                                    sl1mfn, sl1e, d);
                 break;
             }
 #undef PAGE_FLIPPABLE
@@ -1205,20 +1107,19 @@ static int shadow_set_l1e(struct domain
     shadow_write_entries(sl1e, &new_sl1e, 1, sl1mfn);
     flags |= SHADOW_SET_CHANGED;
 
-    if ( (shadow_l1e_get_flags(old_sl1e) & _PAGE_PRESENT)
-         && !sh_l1e_is_magic(old_sl1e) )
+    old_sl1f = shadow_l1e_get_flags(old_sl1e);
+    if ( (old_sl1f & _PAGE_PRESENT) && !sh_l1e_is_magic(old_sl1e) &&
+         shadow_mode_refcounts(d) )
     {
         /* We lost a reference to an old mfn. */
         /* N.B. Unlike higher-level sets, never need an extra flush
          * when writing an l1e.  Because it points to the same guest frame
          * as the guest l1e did, it's the guest's responsibility to
          * trigger a flush later. */
-        if ( shadow_mode_refcounts(d) )
-        {
-            shadow_vram_put_l1e(old_sl1e, sl1e, sl1mfn, d);
-            shadow_put_page_from_l1e(old_sl1e, d);
-            TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_SHADOW_L1_PUT_REF);
-        }
+        shadow_vram_put_mfn(shadow_l1e_get_mfn(old_sl1e), old_sl1f,
+                            sl1mfn, sl1e, d);
+        shadow_put_page_from_l1e(old_sl1e, d);
+        TRACE_SHADOW_PATH_FLAG(TRCE_SFLAG_SHADOW_L1_PUT_REF);
     }
     return flags;
 }
@@ -1944,9 +1845,12 @@ void sh_destroy_l1_shadow(struct domain
         /* Decrement refcounts of all the old entries */
         mfn_t sl1mfn = smfn;
         SHADOW_FOREACH_L1E(sl1mfn, sl1e, 0, 0, {
-            if ( (shadow_l1e_get_flags(*sl1e) & _PAGE_PRESENT)
-                 && !sh_l1e_is_magic(*sl1e) ) {
-                shadow_vram_put_l1e(*sl1e, sl1e, sl1mfn, d);
+            unsigned int sl1f = shadow_l1e_get_flags(*sl1e);
+
+            if ( (sl1f & _PAGE_PRESENT) && !sh_l1e_is_magic(*sl1e) )
+            {
+                shadow_vram_put_mfn(shadow_l1e_get_mfn(*sl1e), sl1f,
+                                    sl1mfn, sl1e, d);
                 shadow_put_page_from_l1e(*sl1e, d);
             }
         });
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -410,6 +410,14 @@ void shadow_update_paging_modes(struct v
  * With user_only == 1, unhooks only the user-mode mappings. */
 void shadow_unhook_mappings(struct domain *d, mfn_t smfn, int user_only);
 
+/* VRAM dirty tracking helpers. */
+void shadow_vram_get_mfn(mfn_t mfn, unsigned int l1f,
+                         mfn_t sl1mfn, const void *sl1e,
+                         const struct domain *d);
+void shadow_vram_put_mfn(mfn_t mfn, unsigned int l1f,
+                         mfn_t sl1mfn, const void *sl1e,
+                         const struct domain *d);
+
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
 /* Allow a shadowed page to go out of sync */
 int sh_unsync(struct vcpu *v, mfn_t gmfn);



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 08:45:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 08:45:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8667.23215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQmu-0005rt-62; Mon, 19 Oct 2020 08:45:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8667.23215; Mon, 19 Oct 2020 08:45:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUQmu-0005rm-2P; Mon, 19 Oct 2020 08:45:04 +0000
Received: by outflank-mailman (input) for mailman id 8667;
 Mon, 19 Oct 2020 08:45:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUQmt-0005re-Bq
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 08:45:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13766e3a-8a20-45ae-8533-faba3f602fd7;
 Mon, 19 Oct 2020 08:45:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EAD76AC19;
 Mon, 19 Oct 2020 08:45:00 +0000 (UTC)
X-Inumbo-ID: 13766e3a-8a20-45ae-8533-faba3f602fd7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603097101;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0wq4/U2npj/ohoEwA/sWA4Jd1CxybC1RMbF7I0BG8rk=;
	b=D+IIQ7/BACLyKcfr6fujM8A5D1rK1yNb0h6lYf2Py990qHKg67uB8COCnrA0fCysGoUgrK
	tZOCMDDuZ5uZPSnMOGYf3HP/MKgvU42Mb7e0l4UEpTZjR192tfrHT2Nqp3kEoFLFRls4Tu
	Kd1O7eqRJrqd3TzAePlq8VH1j2EV+Bc=
Subject: [PATCH v3 3/3] x86/shadow: sh_{make,destroy}_monitor_table() are
 "even more" HVM-only
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Message-ID: <cd39abe3-5a5c-6ebc-a11e-3d4ed1d74907@suse.com>
Date: Mon, 19 Oct 2020 10:45:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Since they depend on just the number of shadow levels, there's no need
for more than one instance of each, and hence no need for any hook (IOW
452219e24648 ["x86/shadow: monitor table is HVM-only"] didn't go quite
far enough). Move the functions to hvm.c, dropping the dead
is_pv_32bit_domain() code paths in the process.

While moving the code, replace a stale comment reference to
sh_install_xen_entries_in_l4(). Doing so made me notice the function
also didn't have its prototype dropped in 8d7b633adab7 ("x86/mm:
Consolidate all Xen L4 slot writing into init_xen_l4_slots()"), which
gets done here as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.
---
TBD: In principle both functions could have their first parameter
     constified. In fact, "destroy" doesn't depend on the vCPU at all
     and hence could be passed a struct domain *. Not sure whether such
     an asymmetry would be acceptable.
     In principle "make" would also not need passing of the number of
     shadow levels (can be derived from v), which would result in yet
     another asymmetry.
     If these asymmetries were acceptable, "make" could then also update
     v->arch.hvm.monitor_table, instead of doing so at both call sites.
TBD: Collides with Andrew's "xen/x86: Fix memory leak in vcpu_create()
     error path".
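The make/destroy pairing the patch unifies can be illustrated with a
heap-based toy model (all names here are illustrative; Xen's real code
allocates from the per-domain shadow pool and writes actual page-table
entries): with fewer than 4 shadow levels, an extra l3/l2 pair hangs off
the monitor l4's linear-map slot and must be freed again, innermost
first, on destroy.

```c
#include <assert.h>
#include <stdlib.h>

/* Stands in for the shadow pool's allocation count. */
static int pages_in_use;

struct mpage { struct mpage *linear_child; };

static struct mpage *page_alloc(void)
{
    pages_in_use++;
    return calloc(1, sizeof(struct mpage));
}

static void page_free(struct mpage *p)
{
    pages_in_use--;
    free(p);
}

/* Model of sh_make_monitor_table(): one l4, plus an l3/l2 pair to hold
 * the shadow-linear entries when running with fewer than 4 levels. */
static struct mpage *make_monitor_table(unsigned int shadow_levels)
{
    struct mpage *l4 = page_alloc();

    if (shadow_levels < 4) {
        struct mpage *l3 = page_alloc();   /* holds shadow-linear entries */
        l3->linear_child = page_alloc();   /* the l2 underneath it */
        l4->linear_child = l3;
    }
    return l4;
}

/* Model of sh_destroy_monitor_table(): tear down in reverse order. */
static void destroy_monitor_table(struct mpage *l4,
                                  unsigned int shadow_levels)
{
    if (shadow_levels < 4) {
        struct mpage *l3 = l4->linear_child;

        page_free(l3->linear_child);       /* l2 first */
        page_free(l3);
    }
    page_free(l4);
}
```

Passing shadow_levels to both sides is what lets a single instance
replace the per-mode hooks: the only behavioral difference between
modes was this "< 4 levels" branch.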

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2467,7 +2467,9 @@ static void sh_update_paging_modes(struc
 
         if ( pagetable_is_null(v->arch.hvm.monitor_table) )
         {
-            mfn_t mmfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+            mfn_t mmfn = sh_make_monitor_table(
+                             v, v->arch.paging.mode->shadow.shadow_levels);
+
             v->arch.hvm.monitor_table = pagetable_from_mfn(mmfn);
             make_cr3(v, mmfn);
             hvm_update_host_cr3(v);
@@ -2504,7 +2506,8 @@ static void sh_update_paging_modes(struc
 
                 old_mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
                 v->arch.hvm.monitor_table = pagetable_null();
-                new_mfn = v->arch.paging.mode->shadow.make_monitor_table(v);
+                new_mfn = sh_make_monitor_table(
+                              v, v->arch.paging.mode->shadow.shadow_levels);
                 v->arch.hvm.monitor_table = pagetable_from_mfn(new_mfn);
                 SHADOW_PRINTK("new monitor table %"PRI_mfn "\n",
                                mfn_x(new_mfn));
@@ -2516,7 +2519,8 @@ static void sh_update_paging_modes(struc
                 if ( v == current )
                     write_ptbase(v);
                 hvm_update_host_cr3(v);
-                old_mode->shadow.destroy_monitor_table(v, old_mfn);
+                sh_destroy_monitor_table(v, old_mfn,
+                                         old_mode->shadow.shadow_levels);
             }
         }
 
@@ -2801,7 +2805,9 @@ void shadow_teardown(struct domain *d, b
                     mfn_t mfn = pagetable_get_mfn(v->arch.hvm.monitor_table);
 
                     if ( mfn_valid(mfn) && (mfn_x(mfn) != 0) )
-                        v->arch.paging.mode->shadow.destroy_monitor_table(v, mfn);
+                        sh_destroy_monitor_table(
+                            v, mfn,
+                            v->arch.paging.mode->shadow.shadow_levels);
                     v->arch.hvm.monitor_table = pagetable_null();
                 }
 #endif /* CONFIG_HVM */
--- a/xen/arch/x86/mm/shadow/hvm.c
+++ b/xen/arch/x86/mm/shadow/hvm.c
@@ -691,6 +691,88 @@ static void sh_emulate_unmap_dest(struct
     atomic_inc(&v->domain->arch.paging.shadow.gtable_dirty_version);
 }
 
+mfn_t sh_make_monitor_table(struct vcpu *v, unsigned int shadow_levels)
+{
+    struct domain *d = v->domain;
+    mfn_t m4mfn;
+    l4_pgentry_t *l4e;
+
+    ASSERT(!pagetable_get_pfn(v->arch.hvm.monitor_table));
+
+    /* Guarantee we can get the memory we need */
+    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
+    m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
+    mfn_to_page(m4mfn)->shadow_flags = 4;
+
+    l4e = map_domain_page(m4mfn);
+
+    /*
+     * Create a self-linear mapping, but no shadow-linear mapping.  A
+     * shadow-linear mapping will either be inserted below when creating
+     * lower level monitor tables, or later in sh_update_cr3().
+     */
+    init_xen_l4_slots(l4e, m4mfn, d, INVALID_MFN, false);
+
+    if ( shadow_levels < 4 )
+    {
+        mfn_t m3mfn, m2mfn;
+        l3_pgentry_t *l3e;
+
+        /*
+         * Install an l3 table and an l2 table that will hold the shadow
+         * linear map entries.  This overrides the empty entry that was
+         * installed by init_xen_l4_slots().
+         */
+        m3mfn = shadow_alloc(d, SH_type_monitor_table, 0);
+        mfn_to_page(m3mfn)->shadow_flags = 3;
+        l4e[l4_table_offset(SH_LINEAR_PT_VIRT_START)]
+            = l4e_from_mfn(m3mfn, __PAGE_HYPERVISOR_RW);
+
+        m2mfn = shadow_alloc(d, SH_type_monitor_table, 0);
+        mfn_to_page(m2mfn)->shadow_flags = 2;
+        l3e = map_domain_page(m3mfn);
+        l3e[0] = l3e_from_mfn(m2mfn, __PAGE_HYPERVISOR_RW);
+        unmap_domain_page(l3e);
+    }
+
+    unmap_domain_page(l4e);
+
+    return m4mfn;
+}
+
+void sh_destroy_monitor_table(struct vcpu *v, mfn_t mmfn,
+                              unsigned int shadow_levels)
+{
+    struct domain *d = v->domain;
+
+    ASSERT(mfn_to_page(mmfn)->u.sh.type == SH_type_monitor_table);
+
+    if ( shadow_levels < 4 )
+    {
+        mfn_t m3mfn;
+        l4_pgentry_t *l4e = map_domain_page(mmfn);
+        l3_pgentry_t *l3e;
+        unsigned int linear_slot = l4_table_offset(SH_LINEAR_PT_VIRT_START);
+
+        /*
+         * Need to destroy the l3 and l2 monitor pages used
+         * for the linear map.
+         */
+        ASSERT(l4e_get_flags(l4e[linear_slot]) & _PAGE_PRESENT);
+        m3mfn = l4e_get_mfn(l4e[linear_slot]);
+        l3e = map_domain_page(m3mfn);
+        ASSERT(l3e_get_flags(l3e[0]) & _PAGE_PRESENT);
+        shadow_free(d, l3e_get_mfn(l3e[0]));
+        unmap_domain_page(l3e);
+        shadow_free(d, m3mfn);
+
+        unmap_domain_page(l4e);
+    }
+
+    /* Put the memory back in the pool */
+    shadow_free(d, mmfn);
+}
+
 /**************************************************************************/
 /* VRAM dirty tracking support */
 int shadow_track_dirty_vram(struct domain *d,
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1405,84 +1405,6 @@ make_fl1_shadow(struct domain *d, gfn_t
 }
 
 
-#if SHADOW_PAGING_LEVELS == GUEST_PAGING_LEVELS && defined(CONFIG_HVM)
-mfn_t
-sh_make_monitor_table(struct vcpu *v)
-{
-    struct domain *d = v->domain;
-
-    ASSERT(pagetable_get_pfn(v->arch.hvm.monitor_table) == 0);
-
-    /* Guarantee we can get the memory we need */
-    shadow_prealloc(d, SH_type_monitor_table, CONFIG_PAGING_LEVELS);
-
-    {
-        mfn_t m4mfn;
-        l4_pgentry_t *l4e;
-
-        m4mfn = shadow_alloc(d, SH_type_monitor_table, 0);
-        mfn_to_page(m4mfn)->shadow_flags = 4;
-
-        l4e = map_domain_page(m4mfn);
-
-        /*
-         * Create a self-linear mapping, but no shadow-linear mapping.  A
-         * shadow-linear mapping will either be inserted below when creating
-         * lower level monitor tables, or later in sh_update_cr3().
-         */
-        init_xen_l4_slots(l4e, m4mfn, d, INVALID_MFN, false);
-
-#if SHADOW_PAGING_LEVELS < 4
-        {
-            mfn_t m3mfn, m2mfn;
-            l3_pgentry_t *l3e;
-            /* Install an l3 table and an l2 table that will hold the shadow
-             * linear map entries.  This overrides the linear map entry that
-             * was installed by sh_install_xen_entries_in_l4. */
-
-            m3mfn = shadow_alloc(d, SH_type_monitor_table, 0);
-            mfn_to_page(m3mfn)->shadow_flags = 3;
-            l4e[shadow_l4_table_offset(SH_LINEAR_PT_VIRT_START)]
-                = l4e_from_mfn(m3mfn, __PAGE_HYPERVISOR_RW);
-
-            m2mfn = shadow_alloc(d, SH_type_monitor_table, 0);
-            mfn_to_page(m2mfn)->shadow_flags = 2;
-            l3e = map_domain_page(m3mfn);
-            l3e[0] = l3e_from_mfn(m2mfn, __PAGE_HYPERVISOR_RW);
-            unmap_domain_page(l3e);
-
-            if ( is_pv_32bit_domain(d) )
-            {
-                l2_pgentry_t *l2t;
-
-                /* For 32-bit PV guests, we need to map the 32-bit Xen
-                 * area into its usual VAs in the monitor tables */
-                m3mfn = shadow_alloc(d, SH_type_monitor_table, 0);
-                mfn_to_page(m3mfn)->shadow_flags = 3;
-                l4e[0] = l4e_from_mfn(m3mfn, __PAGE_HYPERVISOR_RW);
-
-                m2mfn = shadow_alloc(d, SH_type_monitor_table, 0);
-                mfn_to_page(m2mfn)->shadow_flags = 2;
-                l3e = map_domain_page(m3mfn);
-                l3e[3] = l3e_from_mfn(m2mfn, _PAGE_PRESENT);
-
-                l2t = map_domain_page(m2mfn);
-                init_xen_pae_l2_slots(l2t, d);
-                unmap_domain_page(l2t);
-
-                unmap_domain_page(l3e);
-            }
-
-        }
-#endif /* SHADOW_PAGING_LEVELS < 4 */
-
-        unmap_domain_page(l4e);
-
-        return m4mfn;
-    }
-}
-#endif /* SHADOW_PAGING_LEVELS == GUEST_PAGING_LEVELS */
-
 /**************************************************************************/
 /* These functions also take a virtual address and return the level-N
  * shadow table mfn and entry, but they create the shadow pagetables if
@@ -1860,50 +1782,6 @@ void sh_destroy_l1_shadow(struct domain
     shadow_free(d, smfn);
 }
 
-#if SHADOW_PAGING_LEVELS == GUEST_PAGING_LEVELS && defined(CONFIG_HVM)
-void sh_destroy_monitor_table(struct vcpu *v, mfn_t mmfn)
-{
-    struct domain *d = v->domain;
-    ASSERT(mfn_to_page(mmfn)->u.sh.type == SH_type_monitor_table);
-
-#if SHADOW_PAGING_LEVELS != 4
-    {
-        mfn_t m3mfn;
-        l4_pgentry_t *l4e = map_domain_page(mmfn);
-        l3_pgentry_t *l3e;
-        int linear_slot = shadow_l4_table_offset(SH_LINEAR_PT_VIRT_START);
-
-        /* Need to destroy the l3 and l2 monitor pages used
-         * for the linear map */
-        ASSERT(l4e_get_flags(l4e[linear_slot]) & _PAGE_PRESENT);
-        m3mfn = l4e_get_mfn(l4e[linear_slot]);
-        l3e = map_domain_page(m3mfn);
-        ASSERT(l3e_get_flags(l3e[0]) & _PAGE_PRESENT);
-        shadow_free(d, l3e_get_mfn(l3e[0]));
-        unmap_domain_page(l3e);
-        shadow_free(d, m3mfn);
-
-        if ( is_pv_32bit_domain(d) )
-        {
-            /* Need to destroy the l3 and l2 monitor pages that map the
-             * Xen VAs at 3GB-4GB */
-            ASSERT(l4e_get_flags(l4e[0]) & _PAGE_PRESENT);
-            m3mfn = l4e_get_mfn(l4e[0]);
-            l3e = map_domain_page(m3mfn);
-            ASSERT(l3e_get_flags(l3e[3]) & _PAGE_PRESENT);
-            shadow_free(d, l3e_get_mfn(l3e[3]));
-            unmap_domain_page(l3e);
-            shadow_free(d, m3mfn);
-        }
-        unmap_domain_page(l4e);
-    }
-#endif
-
-    /* Put the memory back in the pool */
-    shadow_free(d, mmfn);
-}
-#endif
-
 /**************************************************************************/
 /* Functions to destroy non-Xen mappings in a pagetable hierarchy.
  * These are called from common code when we are running out of shadow
@@ -4705,8 +4583,6 @@ const struct paging_mode sh_paging_mode
     .shadow.cmpxchg_guest_entry    = sh_cmpxchg_guest_entry,
 #endif
 #ifdef CONFIG_HVM
-    .shadow.make_monitor_table     = sh_make_monitor_table,
-    .shadow.destroy_monitor_table  = sh_destroy_monitor_table,
 #if SHADOW_OPTIMIZATIONS & SHOPT_WRITABLE_HEURISTIC
     .shadow.guess_wrmap            = sh_guess_wrmap,
 #endif
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -366,9 +366,6 @@ void sh_set_toplevel_shadow(struct vcpu
                                                  mfn_t gmfn,
                                                  uint32_t shadow_type));
 
-/* Install the xen mappings in various flavours of shadow */
-void sh_install_xen_entries_in_l4(struct domain *, mfn_t gl4mfn, mfn_t sl4mfn);
-
 /* Update the shadows in response to a pagetable write from Xen */
 int sh_validate_guest_entry(struct vcpu *v, mfn_t gmfn, void *entry, u32 size);
 
@@ -410,6 +407,14 @@ void shadow_update_paging_modes(struct v
  * With user_only == 1, unhooks only the user-mode mappings. */
 void shadow_unhook_mappings(struct domain *d, mfn_t smfn, int user_only);
 
+/*
+ * sh_{make,destroy}_monitor_table() depend only on the number of shadow
+ * levels.
+ */
+mfn_t sh_make_monitor_table(struct vcpu *v, unsigned int shadow_levels);
+void sh_destroy_monitor_table(struct vcpu *v, mfn_t mmfn,
+                              unsigned int shadow_levels);
+
 /* VRAM dirty tracking helpers. */
 void shadow_vram_get_mfn(mfn_t mfn, unsigned int l1f,
                          mfn_t sl1mfn, const void *sl1e,
--- a/xen/arch/x86/mm/shadow/types.h
+++ b/xen/arch/x86/mm/shadow/types.h
@@ -262,15 +262,6 @@ static inline shadow_l4e_t shadow_l4e_fr
 #define sh_rm_write_access_from_sl1p INTERNAL_NAME(sh_rm_write_access_from_sl1p)
 #endif
 
-/* sh_make_monitor_table depends only on the number of shadow levels */
-#define sh_make_monitor_table \
-        SHADOW_SH_NAME(sh_make_monitor_table, SHADOW_PAGING_LEVELS)
-#define sh_destroy_monitor_table \
-        SHADOW_SH_NAME(sh_destroy_monitor_table, SHADOW_PAGING_LEVELS)
-
-mfn_t sh_make_monitor_table(struct vcpu *v);
-void sh_destroy_monitor_table(struct vcpu *v, mfn_t mmfn);
-
 #if SHADOW_PAGING_LEVELS == 3
 #define MFN_FITS_IN_HVM_CR3(_MFN) !(mfn_x(_MFN) >> 20)
 #endif
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -107,8 +107,6 @@ struct shadow_paging_mode {
                                             mfn_t gmfn);
 #endif
 #ifdef CONFIG_HVM
-    mfn_t         (*make_monitor_table    )(struct vcpu *v);
-    void          (*destroy_monitor_table )(struct vcpu *v, mfn_t mmfn);
     int           (*guess_wrmap           )(struct vcpu *v, 
                                             unsigned long vaddr, mfn_t gmfn);
     void          (*pagetable_dying       )(paddr_t gpa);



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 09:09:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 09:09:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8676.23227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kURA2-0007o8-8Z; Mon, 19 Oct 2020 09:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8676.23227; Mon, 19 Oct 2020 09:08:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kURA2-0007o1-5d; Mon, 19 Oct 2020 09:08:58 +0000
Received: by outflank-mailman (input) for mailman id 8676;
 Mon, 19 Oct 2020 09:08:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cRFv=D2=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kURA0-0007nw-FS
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 09:08:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 440fc63b-2afa-4e34-950c-39a79c973e54;
 Mon, 19 Oct 2020 09:08:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D8DEBB298;
 Mon, 19 Oct 2020 09:08:53 +0000 (UTC)
X-Inumbo-ID: 440fc63b-2afa-4e34-950c-39a79c973e54
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
To: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
 nouveau@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
 amd-gfx@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-rockchip@lists.infradead.org,
 dri-devel@lists.freedesktop.org, xen-devel@lists.xenproject.org,
 spice-devel@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
 linux-media@vger.kernel.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-6-tzimmermann@suse.de>
 <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
From: Thomas Zimmermann <tzimmermann@suse.de>
Message-ID: <87c7c342-88dc-9a36-31f7-dae6edd34626@suse.de>
Date: Mon, 19 Oct 2020 11:08:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi Christian

On 15.10.20 16:08, Christian König wrote:
> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann:
>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
>> address space. The mapping's address is returned as struct dma_buf_map.
>> Each function is a simplified version of TTM's existing kmap code. Both
>> functions respect the memory's location and/or writecombine flags.
>>
>> On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
>> two helpers that convert a GEM object into the TTM BO and forward the call
>> to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
>> object callbacks.
>>
>> v4:
>>     * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
>>       Christian)
> 
> Bunch of minor comments below, but over all look very solid to me.
> 
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> ---
>>   drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>>   drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
>>   include/drm/drm_gem_ttm_helper.h     |  6 +++
>>   include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
>>   include/linux/dma-buf-map.h          | 20 ++++++++
>>   5 files changed, 164 insertions(+)
>>
>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c
>> b/drivers/gpu/drm/drm_gem_ttm_helper.c
>> index 0e4fb9ba43ad..db4c14d78a30 100644
>> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p,
>> unsigned int indent,
>>   }
>>   EXPORT_SYMBOL(drm_gem_ttm_print_info);
>>   +/**
>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
>> + * @gem: GEM object.
>> + * @map: [out] returns the dma-buf mapping.
>> + *
>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
>> + * &drm_gem_object_funcs.vmap callback.
>> + *
>> + * Returns:
>> + * 0 on success, or a negative errno code otherwise.
>> + */
>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>> +             struct dma_buf_map *map)
>> +{
>> +    struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>> +
>> +    return ttm_bo_vmap(bo, map);
>> +}
>> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
>> +
>> +/**
>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
>> + * @gem: GEM object.
>> + * @map: dma-buf mapping.
>> + *
>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
>> + * &drm_gem_object_funcs.vunmap callback.
>> + */
>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>> +            struct dma_buf_map *map)
>> +{
>> +    struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
>> +
>> +    ttm_bo_vunmap(bo, map);
>> +}
>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
>> +
>>   /**
>>    * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>>    * @gem: GEM object.
>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c
>> b/drivers/gpu/drm/ttm/ttm_bo_util.c
>> index bdee4df1f3f2..80c42c774c7d 100644
>> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
>> @@ -32,6 +32,7 @@
>>   #include <drm/ttm/ttm_bo_driver.h>
>>   #include <drm/ttm/ttm_placement.h>
>>   #include <drm/drm_vma_manager.h>
>> +#include <linux/dma-buf-map.h>
>>   #include <linux/io.h>
>>   #include <linux/highmem.h>
>>   #include <linux/wait.h>
>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>>   }
>>   EXPORT_SYMBOL(ttm_bo_kunmap);
>>   +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>> +{
>> +    struct ttm_resource *mem = &bo->mem;
>> +    int ret;
>> +
>> +    ret = ttm_mem_io_reserve(bo->bdev, mem);
>> +    if (ret)
>> +        return ret;
>> +
>> +    if (mem->bus.is_iomem) {
>> +        void __iomem *vaddr_iomem;
>> +        unsigned long size = bo->num_pages << PAGE_SHIFT;
> 
> Please use uint64_t here and make sure to cast bo->num_pages before
> shifting.

I thought the rule of thumb is to use u64 in source code. Yet TTM only
uses uint*_t types. Is there anything special about TTM?

> 
> We have a unit test allocating an 8GB BO, and that should work on a
> 32bit machine as well :)
> 
>> +
>> +        if (mem->bus.addr)
>> +            vaddr_iomem = (void *)(((u8 *)mem->bus.addr));

After reading the patch again, I realized that this is the
'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
options here: ignore this case in _vunmap(), or do an ioremap()
unconditionally. Which one is preferable?

Best regards
Thomas

>> +        else if (mem->placement & TTM_PL_FLAG_WC)
> 
> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> mem->bus.caching enum as replacement.
> 
>> +            vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>> +        else
>> +            vaddr_iomem = ioremap(mem->bus.offset, size);
>> +
>> +        if (!vaddr_iomem)
>> +            return -ENOMEM;
>> +
>> +        dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>> +
>> +    } else {
>> +        struct ttm_operation_ctx ctx = {
>> +            .interruptible = false,
>> +            .no_wait_gpu = false
>> +        };
>> +        struct ttm_tt *ttm = bo->ttm;
>> +        pgprot_t prot;
>> +        void *vaddr;
>> +
>> +        BUG_ON(!ttm);
> 
> I think we can drop this, populate will just crash badly anyway.
> 
>> +
>> +        ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>> +        if (ret)
>> +            return ret;
>> +
>> +        /*
>> +         * We need to use vmap to get the desired page protection
>> +         * or to make the buffer object look contiguous.
>> +         */
>> +        prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> 
> The calling convention has changed on drm-misc-next as well, but should
> be trivial to adapt.
> 
> Regards,
> Christian.
> 
>> +        vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>> +        if (!vaddr)
>> +            return -ENOMEM;
>> +
>> +        dma_buf_map_set_vaddr(map, vaddr);
>> +    }
>> +
>> +    return 0;
>> +}
>> +EXPORT_SYMBOL(ttm_bo_vmap);
>> +
>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>> *map)
>> +{
>> +    if (dma_buf_map_is_null(map))
>> +        return;
>> +
>> +    if (map->is_iomem)
>> +        iounmap(map->vaddr_iomem);
>> +    else
>> +        vunmap(map->vaddr);
>> +    dma_buf_map_clear(map);
>> +
>> +    ttm_mem_io_free(bo->bdev, &bo->mem);
>> +}
>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>> +
>>   static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>                    bool dst_use_tt)
>>   {
>> diff --git a/include/drm/drm_gem_ttm_helper.h
>> b/include/drm/drm_gem_ttm_helper.h
>> index 118cef76f84f..7c6d874910b8 100644
>> --- a/include/drm/drm_gem_ttm_helper.h
>> +++ b/include/drm/drm_gem_ttm_helper.h
>> @@ -10,11 +10,17 @@
>>   #include <drm/ttm/ttm_bo_api.h>
>>   #include <drm/ttm/ttm_bo_driver.h>
>>   +struct dma_buf_map;
>> +
>>   #define drm_gem_ttm_of_gem(gem_obj) \
>>       container_of(gem_obj, struct ttm_buffer_object, base)
>>     void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>> indent,
>>                   const struct drm_gem_object *gem);
>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>> +             struct dma_buf_map *map);
>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>> +            struct dma_buf_map *map);
>>   int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>                struct vm_area_struct *vma);
>>   diff --git a/include/drm/ttm/ttm_bo_api.h
>> b/include/drm/ttm/ttm_bo_api.h
>> index 37102e45e496..2c59a785374c 100644
>> --- a/include/drm/ttm/ttm_bo_api.h
>> +++ b/include/drm/ttm/ttm_bo_api.h
>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>     struct ttm_bo_device;
>>   +struct dma_buf_map;
>> +
>>   struct drm_mm_node;
>>     struct ttm_placement;
>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>> unsigned long start_page,
>>    */
>>   void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>   +/**
>> + * ttm_bo_vmap
>> + *
>> + * @bo: The buffer object.
>> + * @map: pointer to a struct dma_buf_map representing the map.
>> + *
>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
>> + * data in the buffer object. The parameter @map returns the virtual
>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>> + *
>> + * Returns
>> + * -ENOMEM: Out of memory.
>> + * -EINVAL: Invalid range.
>> + */
>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>> +
>> +/**
>> + * ttm_bo_vunmap
>> + *
>> + * @bo: The buffer object.
>> + * @map: Object describing the map to unmap.
>> + *
>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>> + */
>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>> *map);
>> +
>>   /**
>>    * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>    *
>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>> index fd1aba545fdf..2e8bbecb5091 100644
>> --- a/include/linux/dma-buf-map.h
>> +++ b/include/linux/dma-buf-map.h
>> @@ -45,6 +45,12 @@
>>    *
>>    *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>    *
>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>> + *
>> + * .. code-block:: c
>> + *
>> + *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>> + *
>>    * Test if a mapping is valid with either dma_buf_map_is_set() or
>>    * dma_buf_map_is_null().
>>    *
>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>> dma_buf_map *map, void *vaddr)
>>       map->is_iomem = false;
>>   }
>>   +/**
>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>> an address in I/O memory
>> + * @map:        The dma-buf mapping structure
>> + * @vaddr_iomem:    An I/O-memory address
>> + *
>> + * Sets the address and the I/O-memory flag.
>> + */
>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>> +                           void __iomem *vaddr_iomem)
>> +{
>> +    map->vaddr_iomem = vaddr_iomem;
>> +    map->is_iomem = true;
>> +}
>> +
>>   /**
>>    * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>> for equality
>>    * @lhs:    The dma-buf mapping structure
> 
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 09:09:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 09:09:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8677.23239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kURA9-0007q1-H8; Mon, 19 Oct 2020 09:09:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8677.23239; Mon, 19 Oct 2020 09:09:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kURA9-0007pu-Ds; Mon, 19 Oct 2020 09:09:05 +0000
Received: by outflank-mailman (input) for mailman id 8677;
 Mon, 19 Oct 2020 09:09:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kURA8-0007pe-IU
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 09:09:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8e6ba33b-a0b9-4a3a-b173-5386f6458568;
 Mon, 19 Oct 2020 09:09:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7E45EB290;
 Mon, 19 Oct 2020 09:09:01 +0000 (UTC)
X-Inumbo-ID: 8e6ba33b-a0b9-4a3a-b173-5386f6458568
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603098541;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ldr9vP5enLyLasdYmjZrwcRMN/AHiax7zlq6K/z5+0k=;
	b=fe6eBpj6EXE+FDHeLRlspMLdjYW4QogZGp1qr4s/XX8MhgO41iDwu+M139Wvy2bnZ9VvuW
	TNOGEafp9VCiYZW5tmVXV9SAc3j4Uc2PS9HJyxn18smJfkE3kG6MqI9Pvf1lSI8zY/Asu7
	48PqwT+Pm8Lvc5KoyQ8onJVGVcgdshg=
Subject: Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Kevin Tian <kevin.tian@intel.com>
References: <20201009150948.31063-1-andrew.cooper3@citrix.com>
 <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
 <01bb2f27-4e0b-3637-e456-09eb7b9b233e@citrix.com>
 <1786f728-15c2-3877-c01a-035b11bd8504@suse.com>
 <82e64d10-50be-68ab-127b-99d205a0a768@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6430fef8-23f1-f4ef-8741-5e089eaa0df9@suse.com>
Date: Mon, 19 Oct 2020 11:09:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <82e64d10-50be-68ab-127b-99d205a0a768@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 16.10.2020 17:38, Andrew Cooper wrote:
> On 15/10/2020 09:01, Jan Beulich wrote:
>> On 14.10.2020 15:57, Andrew Cooper wrote:
>>> On 13/10/2020 16:58, Jan Beulich wrote:
>>>> On 09.10.2020 17:09, Andrew Cooper wrote:
>>>>> At the time of XSA-170, the x86 instruction emulator really was broken, and
>>>>> would allow arbitrary non-canonical values to be loaded into %rip.  This was
>>>>> fixed after the embargo by c/s 81d3a0b26c1 "x86emul: limit-check branch
>>>>> targets".
>>>>>
>>>>> However, in a demonstration that off-by-one errors really are one of the
>>>>> hardest programming issues we face, everyone involved with XSA-170, myself
>>>>> included, mistook the statement in the SDM which says:
>>>>>
>>>>>   If the processor supports N < 64 linear-address bits, bits 63:N must be identical
>>>>>
>>>>> to mean "must be canonical".  A real canonical check is bits 63:N-1.
>>>>>
>>>>> VMEntries really do tolerate a not-quite-canonical %rip, specifically to cater
>>>>> to the boundary condition at 0x0000800000000000.
>>>>>
>>>>> Now that the emulator has been fixed, revert the XSA-170 change to fix
>>>>> architectural behaviour at the boundary case.  The XTF test case for XSA-170
>>>>> exercises this corner case, and still passes.
>>>>>
>>>>> Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> But why revert the change rather than fix ...
>>>>
>>>>> @@ -4280,38 +4280,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>>>  out:
>>>>>      if ( nestedhvm_vcpu_in_guestmode(v) )
>>>>>          nvmx_idtv_handling();
>>>>> -
>>>>> -    /*
>>>>> -     * VM entry will fail (causing the guest to get crashed) if rIP (and
>>>>> -     * rFLAGS, but we don't have an issue there) doesn't meet certain
>>>>> -     * criteria. As we must not allow less than fully privileged mode to have
>>>>> -     * such an effect on the domain, we correct rIP in that case (accepting
>>>>> -     * this not being architecturally correct behavior, as the injected #GP
>>>>> -     * fault will then not see the correct [invalid] return address).
>>>>> -     * And since we know the guest will crash, we crash it right away if it
>>>>> -     * already is in most privileged mode.
>>>>> -     */
>>>>> -    mode = vmx_guest_x86_mode(v);
>>>>> -    if ( mode == 8 ? !is_canonical_address(regs->rip)
>>>> ... the wrong use of is_canonical_address() here? By reverting
>>>> you open up avenues for XSAs in case we get things wrong elsewhere,
>>>> including ...
>>>>
>>>>> -                   : regs->rip != regs->eip )
>>>> ... for 32-bit guests.
>>> Because the only appropriate alternative would be ASSERT_UNREACHABLE()
>>> and domain crash.
>>>
>>> This logic corrupts guest state.
>>>
>>> Running with corrupt state is every bit as much an XSA as hitting a VMEntry
>>> failure if it can be triggered by userspace, but the latter is safer and
>>> much more obvious.
>> I disagree. For CPL > 0 we don't "corrupt" guest state any more
>> than reporting a #GP fault when one is going to be reported
>> anyway (as long as the VM entry doesn't fail, and hence the
>> guest won't get crashed). IOW this raising of #GP actually is a
>> precautionary measure to _avoid_ XSAs.
> 
> It does not remove any XSAs.  It merely hides them.

How so? If we convert the ability of guest user mode to crash
the guest into delivery of #GP(0), how is there a hidden XSA then?

> There are legal states where RIP is 0x0000800000000000 and #GP is the
> wrong thing to do.  Any async VMExit (Processor Trace Prefetch in
> particular), or with debug traps pending.

You realize we're in agreement about this pseudo-canonical check
needing fixing?

>> Nor do I agree with the "much more obvious" aspect:
> 
> A domain crash is far more likely to be reported to xen-devel/security
> than something which bodges state in an almost-silent way.
> 
>> A VM entry
>> failure requires quite a bit of analysis to recognize what has
>> caused it; whether a non-pseudo-canonical RIP is what catches your
>> eye right away is simply unknown. The gprintk() that you delete,
>> otoh, says very clearly what we have found to be wrong.
> 
> Non-canonical values are easier to spot than most of the other rules, IMO.

Which will get less obvious with 5-level paging capable hardware
in mind.

>>> It was the appropriate security fix (give or take the functional bug in
>>> it) at the time, given the complexity of retrofitting zero length
>>> instruction fetches to the emulator.
>>>
>>> However, it is one of a very long list of guest-state-induced VMEntry
>>> failures, with non-trivial logic which we assert will pass, on a
>>> fastpath, where hardware also performs the same checks and we already
>>> have a runtime safe way of dealing with errors.  (Hence not actually
>>> using ASSERT_UNREACHABLE() here.)
>> "Runtime safe" as far as Xen is concerned, I take it. This isn't safe
>> for the guest at all, as vmx_failed_vmentry() results in an
>> unconditional domain_crash().
> 
> Any VMEntry failure is a bug in Xen.  If userspace can trigger it, it is
> an XSA, *irrespective* of whether we crash the domain then and there, or
> whether we let it try and limp on with corrupted state.

Allowing the guest to continue with corrupted state is not a
useful thing to do, I agree. However, what falls under
"corrupted" seems to be different for you and me. I'd not call
delivery of #GP "corruption" in any way. The primary goal ought
to be that we don't corrupt the guest kernel view of the world.
It may then have the opportunity to kill the offending user
mode process.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 09:20:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 09:20:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8683.23254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kURKl-0000fN-IO; Mon, 19 Oct 2020 09:20:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8683.23254; Mon, 19 Oct 2020 09:20:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kURKl-0000fG-Ev; Mon, 19 Oct 2020 09:20:03 +0000
Received: by outflank-mailman (input) for mailman id 8683;
 Mon, 19 Oct 2020 09:20:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kURKk-0000R4-3G
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 09:20:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0633b052-48aa-4e61-b613-b5a9444bdc8f;
 Mon, 19 Oct 2020 09:19:55 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kURKc-0005os-Ji; Mon, 19 Oct 2020 09:19:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kURKc-0003uf-A9; Mon, 19 Oct 2020 09:19:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kURKc-0000Hm-9f; Mon, 19 Oct 2020 09:19:54 +0000
X-Inumbo-ID: 0633b052-48aa-4e61-b613-b5a9444bdc8f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JUXoSKE9CsmtueRk0IT+rpt4VEnNlS6KNZlbtHn8mCc=; b=sGsXv0TYOCrfFtq8PMWlX0M037
	Xn7YYGP7bVw8sVscaO1neLrAxj4aRLFkKMZLl03RdgOXYofU3ZLF3w6QGv3rgk5F8j8ovl2zBP0er
	dhQktNCwr6SC/E3BcBBvZR8afjgWCq5rf15hKeaav9HNbdzZXc5XX4uSI5k0BNFSlC4Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155965-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155965: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9453b2d4694c2cb6c30d99e65d4a3deb09e94ac3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 09:19:54 +0000

flight 155965 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155965/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9453b2d4694c2cb6c30d99e65d4a3deb09e94ac3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   79 days
Failing since        152366  2020-08-01 20:49:34 Z   78 days  133 attempts
Testing same since   155965  2020-10-18 19:40:17 Z    0 days    1 attempts

------------------------------------------------------------
3213 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 585086 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 09:45:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 09:45:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8687.23266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kURjI-0002yt-SV; Mon, 19 Oct 2020 09:45:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8687.23266; Mon, 19 Oct 2020 09:45:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kURjI-0002ym-OX; Mon, 19 Oct 2020 09:45:24 +0000
Received: by outflank-mailman (input) for mailman id 8687;
 Mon, 19 Oct 2020 09:45:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PNq7=D2=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kURjG-0002yh-Vj
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 09:45:23 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com (unknown
 [40.107.92.44]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e95c614-42ec-44ae-822c-443508628d24;
 Mon, 19 Oct 2020 09:45:21 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by BL0PR12MB2417.namprd12.prod.outlook.com (2603:10b6:207:45::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Mon, 19 Oct
 2020 09:45:17 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.028; Mon, 19 Oct 2020
 09:45:17 +0000
Received: from [IPv6:2a02:908:1252:fb60:be8a:bd56:1f94:86e7]
 (2a02:908:1252:fb60:be8a:bd56:1f94:86e7) by
 AM0PR07CA0027.eurprd07.prod.outlook.com (2603:10a6:208:ac::40) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.9 via Frontend Transport; Mon, 19 Oct 2020 09:45:10 +0000
X-Inumbo-ID: 2e95c614-42ec-44ae-822c-443508628d24
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j/FnliMyyc6PzCm4aGI3rSXrefs5zb8xHdj9d4DoBgWJGSx5tqXDWQNbDsrfwSW64QJXT3XpiEmjQaKx2vI6Nrlwn4sBuq4DsmOg04T61+sn1kk0kLOyVz0WvIGI3psXcXI5nMG9G/Dmg0mR9/R6ERPeYsQo9othw9gpZVI1f2e5k0rhFU6N8yTPH9M1vdOwQKkg432WGS1wQHe/dvgYzS1Ub0lWneymsjUi/oOeusurrTiBxRIoLvzf+pbbgAk8C7j9yE6ik3AO2tqfQ3f/hkJZfkAwvT2+LMf3N2enasi0bLT8oMM/Q93CNjzMu7TxcNvF6CrwSH9SI1C9u81htQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zy5wcv5D2MUJuqKGL0V1ByCyOy9bh9LcggVVNvf/G48=;
 b=XOY4lcQe5HP+yb73hEvle0YLe/CjrfoogLOfcozRJe/tyx2S5zM/lUvq7PE0orshiVfvL+5cQXSTgtFn7LaNy18omq9h3JFx2orDxBnQYk+O5xez1N7aCShbv2a/R2q7RL42rofSB0odkrwjFhHU/m3A1nB7AMfFl9+XvBG8pJBhiYHUsYmASfs1YZB/u/90LS8FOpOJQ1HRAc/k0l66NKSrV/9dEBKbBMh8wgysDXUaZFkSRQzqZxah8MMbnM/zTvIQ8yI17/qmyKHrsDujb3G9pGAKYDBdr5Col73Qt6iqlHzpIm1KqTFIB4tCKTJDt4+qTxzJ6YRc/FqgtlGRjA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amdcloud.onmicrosoft.com; s=selector2-amdcloud-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Zy5wcv5D2MUJuqKGL0V1ByCyOy9bh9LcggVVNvf/G48=;
 b=HAisxrqOxM+yFzBIeivQca5MQl1W0EvQh9xsuS0W2b87Rt1SPj5cCrl7+Oi4yjd2jCISBR4uXY/rDuYAs9kGwzzAz7O+AYk9MEcNArQG26gfIl5eJieuGbvjQCOBJge35t749Mpuih+3wZ7PTUAejRJiSusdxGVeUQynWh6tM2w=
Authentication-Results: vger.kernel.org; dkim=none (message not signed)
 header.d=none;vger.kernel.org; dmarc=none action=none header.from=amd.com;
Subject: Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
To: Thomas Zimmermann <tzimmermann@suse.de>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
 nouveau@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
 amd-gfx@lists.freedesktop.org, virtualization@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-rockchip@lists.infradead.org,
 dri-devel@lists.freedesktop.org, xen-devel@lists.xenproject.org,
 spice-devel@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
 linux-media@vger.kernel.org
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-6-tzimmermann@suse.de>
 <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
 <87c7c342-88dc-9a36-31f7-dae6edd34626@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <9236f51c-c1fa-dadc-c7cc-d9d0c09251d1@amd.com>
Date: Mon, 19 Oct 2020 11:45:05 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <87c7c342-88dc-9a36-31f7-dae6edd34626@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [2a02:908:1252:fb60:be8a:bd56:1f94:86e7]
X-ClientProxiedBy: AM0PR07CA0027.eurprd07.prod.outlook.com
 (2603:10a6:208:ac::40) To MN2PR12MB3775.namprd12.prod.outlook.com
 (2603:10b6:208:159::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
Received: from [IPv6:2a02:908:1252:fb60:be8a:bd56:1f94:86e7] (2a02:908:1252:fb60:be8a:bd56:1f94:86e7) by AM0PR07CA0027.eurprd07.prod.outlook.com (2603:10a6:208:ac::40) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.9 via Frontend Transport; Mon, 19 Oct 2020 09:45:10 +0000
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 2de45ee1-d684-4c5f-c143-08d87413af3e
X-MS-TrafficTypeDiagnostic: BL0PR12MB2417:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<BL0PR12MB2417E35B821E9EF7A5B5AFC6831E0@BL0PR12MB2417.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	d4JLIMQwIVNzm92UzbJlG+1uGqD85bpaIBIWdkIvqy+5bO5umDecUzbdA2OrHnWTG50cA0ny/JCI1CzZvgyw+1FjN6aHmgVWrigZavYKj4LEcL+cX1UuDfTRfkC8NzdRfzBHjF3yyBdvUo89iJHR7FZMTD0ZlbISd1mPI8Fa5XZpx5SjPmGla1TR5B2rOimNL5gyzLsN/REXOw2vHcHZUuVpHzPapMDpBgw2stBq2jE5cJ9/1g7aIAe24EC6dbmiq0izN119FshZyW9GSTbyY1M+N358gLMv2N13z9bTckIsQ6IA2m5YRbmaev473/bBwUziHD1Gy88gW5JCfltSlE8k1Ia+qPkLFXHGxMrJEs2EZPqnwwDTOjA9VU2UXcpnNRzvMQpqXhcSS3p7NG2ZxaXmUKVj/X/7GDfIarrkPcDn2jqmmV2rzfthVFUk+jt7
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MN2PR12MB3775.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(396003)(376002)(346002)(136003)(39860400002)(83380400001)(45080400002)(966005)(8676002)(4326008)(31686004)(6666004)(2906002)(186003)(16526019)(36756003)(86362001)(2616005)(478600001)(5660300002)(8936002)(66556008)(66476007)(316002)(66946007)(31696002)(7416002)(7406005)(6486002)(52116002)(921003)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	DG1IF2aGtntU1HnH8WrGQv0BxrOWgvoFrUvrZ/ETE+qVydH2YByWQoPMDbLtl9PnDVbVhG0frqeMFhQaf3Vtz58lN+BSVBxX/5YUoCuHfVPbHNJ131OF8BG8o6HCx7qfKAWQKi23SjRRvPudmHmWydDaVWdoqlx94dmGGZApy3JBO+X666gv4eq6P1Ayr5gK8mtM4FY0/7ymmY5CLucrj9cD580hMoYunzbjo0HfzPktCM7UcpiT2vhkO+wSXS0SsmI0azvvvWDuBLVpGNm8+l0yiOtuU/dxHEaCNGB3WkK/An6zQrQqh008D0JHt84NNz4bvORAypo23Przrlyso7NFrkuYqC9jaOoFFmI3+T8agjgKPHWUTQ51akC/pRhom8yKrGoOBEnG71Q2UNIplHahwdmMVp/1LRC+khx1EM2TjUWmhOAwm7lgtIJWF0PKBJWS30Vl8X9LENEebXdouzN6nWe3nCnGVUHadIrKdTpq+NbSUTcx+UbOMyZhTs168y9mbcqx+R3rdsxnkJoDT0bF4CJpPCD3DDcTXrxX0amQ/6h4CMtW3V0114sgtEqNNrvMiHRmJH1dN12YFa/MB+4p0JrR+eBXjMZhu9PyaWZpVjCop9rk71seQY2VElj+z0N0U4v+eAESUsMqU9kJs9fohrQ1+WkOCQxetK3hLiK1hTG+R/85SbfwKIcuX7ABSxg5EviUsj6gJOIbQiqtkg==
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2de45ee1-d684-4c5f-c143-08d87413af3e
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Oct 2020 09:45:17.7275
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ufX6vJ7VDtZTX3Z4vj04OPUaH+aqZVdTPZh+3zWPb5h0W+0bW29R9aWCT+GnVwx0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR12MB2417

Hi Thomas,

[SNIP]
>>>    +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
>>> +{
>>> +    struct ttm_resource *mem = &bo->mem;
>>> +    int ret;
>>> +
>>> +    ret = ttm_mem_io_reserve(bo->bdev, mem);
>>> +    if (ret)
>>> +        return ret;
>>> +
>>> +    if (mem->bus.is_iomem) {
>>> +        void __iomem *vaddr_iomem;
>>> +        unsigned long size = bo->num_pages << PAGE_SHIFT;
>> Please use uint64_t here and make sure to cast bo->num_pages before
>> shifting.
> I thought the rule of thumb is to use u64 in source code. Yet TTM only
> uses uint*_t types. Is there anything special about TTM?

My latest understanding is that you can use both; my personal preference is 
the uint*_t types because they are part of a higher-level standard.

>> We have a unit test that allocates an 8GB BO, and that should work on a
>> 32-bit machine as well :)
>>
>>> +
>>> +        if (mem->bus.addr)
>>> +            vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> After reading the patch again, I realized that this is the
> 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> options here: ignore this case in _vunmap(), or do an ioremap()
> unconditionally. Which one is preferable?

An unconditional ioremap() would be very, very bad; we should just do 
nothing in _vunmap() for the premapped case.

Thanks,
Christian.

>
> Best regards
> Thomas
>
>>> +        else if (mem->placement & TTM_PL_FLAG_WC)
>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
>> mem->bus.caching enum as replacement.
>>
>>> +            vaddr_iomem = ioremap_wc(mem->bus.offset, size);
>>> +        else
>>> +            vaddr_iomem = ioremap(mem->bus.offset, size);
>>> +
>>> +        if (!vaddr_iomem)
>>> +            return -ENOMEM;
>>> +
>>> +        dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
>>> +
>>> +    } else {
>>> +        struct ttm_operation_ctx ctx = {
>>> +            .interruptible = false,
>>> +            .no_wait_gpu = false
>>> +        };
>>> +        struct ttm_tt *ttm = bo->ttm;
>>> +        pgprot_t prot;
>>> +        void *vaddr;
>>> +
>>> +        BUG_ON(!ttm);
>> I think we can drop this, populate will just crash badly anyway.
>>
>>> +
>>> +        ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
>>> +        if (ret)
>>> +            return ret;
>>> +
>>> +        /*
>>> +         * We need to use vmap to get the desired page protection
>>> +         * or to make the buffer object look contiguous.
>>> +         */
>>> +        prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
>> The calling convention has changed on drm-misc-next as well, but should
>> be trivial to adapt.
>>
>> Regards,
>> Christian.
>>
>>> +        vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
>>> +        if (!vaddr)
>>> +            return -ENOMEM;
>>> +
>>> +        dma_buf_map_set_vaddr(map, vaddr);
>>> +    }
>>> +
>>> +    return 0;
>>> +}
>>> +EXPORT_SYMBOL(ttm_bo_vmap);
>>> +
>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>> *map)
>>> +{
>>> +    if (dma_buf_map_is_null(map))
>>> +        return;
>>> +
>>> +    if (map->is_iomem)
>>> +        iounmap(map->vaddr_iomem);
>>> +    else
>>> +        vunmap(map->vaddr);
>>> +    dma_buf_map_clear(map);
>>> +
>>> +    ttm_mem_io_free(bo->bdev, &bo->mem);
>>> +}
>>> +EXPORT_SYMBOL(ttm_bo_vunmap);
>>> +
>>>    static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>>>                     bool dst_use_tt)
>>>    {
>>> diff --git a/include/drm/drm_gem_ttm_helper.h
>>> b/include/drm/drm_gem_ttm_helper.h
>>> index 118cef76f84f..7c6d874910b8 100644
>>> --- a/include/drm/drm_gem_ttm_helper.h
>>> +++ b/include/drm/drm_gem_ttm_helper.h
>>> @@ -10,11 +10,17 @@
>>>    #include <drm/ttm/ttm_bo_api.h>
>>>    #include <drm/ttm/ttm_bo_driver.h>
>>>    +struct dma_buf_map;
>>> +
>>>    #define drm_gem_ttm_of_gem(gem_obj) \
>>>        container_of(gem_obj, struct ttm_buffer_object, base)
>>>      void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
>>> indent,
>>>                    const struct drm_gem_object *gem);
>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
>>> +             struct dma_buf_map *map);
>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
>>> +            struct dma_buf_map *map);
>>>    int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>>>                 struct vm_area_struct *vma);
>>>    diff --git a/include/drm/ttm/ttm_bo_api.h
>>> b/include/drm/ttm/ttm_bo_api.h
>>> index 37102e45e496..2c59a785374c 100644
>>> --- a/include/drm/ttm/ttm_bo_api.h
>>> +++ b/include/drm/ttm/ttm_bo_api.h
>>> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>>>      struct ttm_bo_device;
>>>    +struct dma_buf_map;
>>> +
>>>    struct drm_mm_node;
>>>      struct ttm_placement;
>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
>>> unsigned long start_page,
>>>     */
>>>    void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>>>    +/**
>>> + * ttm_bo_vmap
>>> + *
>>> + * @bo: The buffer object.
>>> + * @map: pointer to a struct dma_buf_map representing the map.
>>> + *
>>> + * Sets up a kernel virtual mapping, using ioremap or vmap, to the
>>> + * data in the buffer object. The parameter @map returns the virtual
>>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
>>> + *
>>> + * Returns
>>> + * -ENOMEM: Out of memory.
>>> + * -EINVAL: Invalid range.
>>> + */
>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
>>> +
>>> +/**
>>> + * ttm_bo_vunmap
>>> + *
>>> + * @bo: The buffer object.
>>> + * @map: Object describing the map to unmap.
>>> + *
>>> + * Unmaps a kernel map set up by ttm_bo_vmap().
>>> + */
>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
>>> *map);
>>> +
>>>    /**
>>>     * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>>>     *
>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
>>> index fd1aba545fdf..2e8bbecb5091 100644
>>> --- a/include/linux/dma-buf-map.h
>>> +++ b/include/linux/dma-buf-map.h
>>> @@ -45,6 +45,12 @@
>>>     *
>>>     *    dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>>>     *
>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
>>> + *
>>> + * .. code-block:: c
>>> + *
>>>     *    dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
>>> + *
>>>     * Test if a mapping is valid with either dma_buf_map_is_set() or
>>>     * dma_buf_map_is_null().
>>>     *
>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
>>> dma_buf_map *map, void *vaddr)
>>>        map->is_iomem = false;
>>>    }
>>>    +/**
>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
>>> an address in I/O memory
>>> + * @map:        The dma-buf mapping structure
>>> + * @vaddr_iomem:    An I/O-memory address
>>> + *
>>> + * Sets the address and the I/O-memory flag.
>>> + */
>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
>>> +                           void __iomem *vaddr_iomem)
>>> +{
>>> +    map->vaddr_iomem = vaddr_iomem;
>>> +    map->is_iomem = true;
>>> +}
>>> +
>>>    /**
>>>     * dma_buf_map_is_equal - Compares two dma-buf mapping structures
>>> for equality
>>>     * @lhs:    The dma-buf mapping structure
>> _______________________________________________
>> dri-devel mailing list
>> dri-devel@lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/dri-devel



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 10:12:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 10:12:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8690.23278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUS90-0005ZU-3F; Mon, 19 Oct 2020 10:11:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8690.23278; Mon, 19 Oct 2020 10:11:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUS8z-0005ZN-Vv; Mon, 19 Oct 2020 10:11:57 +0000
Received: by outflank-mailman (input) for mailman id 8690;
 Mon, 19 Oct 2020 10:11:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUS8y-0005ZI-EI
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 10:11:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a59fb15-3bd7-4472-91cf-47b88bc5cecd;
 Mon, 19 Oct 2020 10:11:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUS8v-0006wq-Ri; Mon, 19 Oct 2020 10:11:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUS8v-0006LF-IV; Mon, 19 Oct 2020 10:11:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUS8v-0008UH-I2; Mon, 19 Oct 2020 10:11:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUS8y-0005ZI-EI
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 10:11:56 +0000
X-Inumbo-ID: 3a59fb15-3bd7-4472-91cf-47b88bc5cecd
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3a59fb15-3bd7-4472-91cf-47b88bc5cecd;
	Mon, 19 Oct 2020 10:11:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1Rg5nVx3e+jnAsXaW5vnuNXNQgKWmx2VBj2vYgFdLsQ=; b=RJft0bcLBlkME2waC4Fz+xBAH/
	m/DCti6VAWJEHlDDkdWQox0VVK58TkMlDyZfiHaWkBQPGqkmPUeatMW3RurqyhkP8SOTGt0nFY+5D
	rl8KI+kj7q1l/odefESn98cNKgPlPKdtpGz2nc4IEArJ1QNgE/9Mjv4nipEWzwzsT2n4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUS8v-0006wq-Ri; Mon, 19 Oct 2020 10:11:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUS8v-0006LF-IV; Mon, 19 Oct 2020 10:11:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUS8v-0008UH-I2; Mon, 19 Oct 2020 10:11:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155974-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 155974: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=6a0e0dc7ba8a62035fb1693e0c91bb53214ec41f
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 10:11:53 +0000

flight 155974 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155974/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6a0e0dc7ba8a62035fb1693e0c91bb53214ec41f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  101 days
Failing since        151818  2020-07-11 04:18:52 Z  100 days   95 attempts
Testing same since   155885  2020-10-16 04:19:10 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 21456 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 11:01:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 11:01:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8700.23301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUSuE-0001Rv-35; Mon, 19 Oct 2020 11:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8700.23301; Mon, 19 Oct 2020 11:00:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUSuE-0001Ro-00; Mon, 19 Oct 2020 11:00:46 +0000
Received: by outflank-mailman (input) for mailman id 8700;
 Mon, 19 Oct 2020 11:00:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GZg0=D2=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kUSuC-0001Rj-JS
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 11:00:44 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14f711cb-7c44-4597-9d87-0d059df63ff5;
 Mon, 19 Oct 2020 11:00:43 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GZg0=D2=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
	id 1kUSuC-0001Rj-JS
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 11:00:44 +0000
X-Inumbo-ID: 14f711cb-7c44-4597-9d87-0d059df63ff5
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 14f711cb-7c44-4597-9d87-0d059df63ff5;
	Mon, 19 Oct 2020 11:00:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603105243;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=DSFoGlvzf1IlAYNb1LzPQyz0U5BnYVTIcg0pPUWdenA=;
  b=Y+5KlKsT7E+4jdWj5OxNZCrXrCswIDFatAVcEMvUv1DXuQFjikqDuCsQ
   RhH7IPFPrIO/uXk6DXcHsyHbBboXOJy7nagHj+VcP9148sbCNo4czZDmb
   XQnNz5+VSW6x9E72OvfO5b0UMjEA6vpzdditxw9JLhHtRjl4AiZ8UhTAc
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: v9t/f9TQSgyyb8szPH97tR8seIGQtZRLY54ohttaEZRq70kVNaquYswqSnklLo2k4c3GaTUspE
 LL8Mg13bP9LnXiKGA1tkql0tKZvYtlZrFboszDHOyg7457pAP9S1my2eDts5q9FvYASRjY9R22
 wyQvp0BK4sr8MmL60snx/sR9Xk0lF/gS7V5e6GDZ92WJXkSuDUcyyd0IWphD5fOAh5pLUqP9GL
 CbtjyLe9NQ23ghas/cmKiVQkDla8hIl6W42/24Pt7zR93EVL+MqO1CAUmDAkmollCy8McKKUGL
 lIA=
X-SBRS: None
X-MesageID: 29619253
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,394,1596513600"; 
   d="scan'208";a="29619253"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Mt/UwtDwCWVSAHkaCAv76r3pp/fX7IPFZ+9di9sNLzCe5ZiFoM/mcrtVEXuUbXCYzTssbCL24RbE5wQn0JTWyqSFO23A1ENerV+cq54m8dFDcmdka7+6mYAsinYgrBSUH8v/QNHWvFu4cdeuZyZkb2mcTp8EYjFW7BUf0v7rZoIQ7KT/oc8L6TXfqwUJaINEsUFBTwugDU0b1leuPl4qd01XaVz32z9VdLQxYxkTWWtuyUY2ScJ2hZ1zYI/76BjTsk/M9mV/AtD7TJw95RJU98/5cKgrMuTlrTbSaG41gvXHJL4SgXhw/SQDczqJOunrvIaCeM6SExKlgxF+0lbYRw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DSFoGlvzf1IlAYNb1LzPQyz0U5BnYVTIcg0pPUWdenA=;
 b=VCycboW1kHvsh8J2WmxtwJhEoGY9ZhAwOwVslOdbnkd9AFgeGZ7wV6LogLCiBeDpmp4bv6kZC00A6Z6EuzEoEFj+/QuPt1amlKSqJPWaoTA8kvkpXRRvSFSEXoqsJXmb+lqPBQQw1Dm27J5mD9zD3DMZR7xwENsH9QAeTtj6+1X/rjKP4Ql0VZZ9xMbFNSGkgFucBlO/z6zwykqNLpGZthCh1aU/IUlnrh1feaY8xM7c63C66u36TeRUHj9bsYBfaH6MxiyiX0MjbZW5Tzp+U7Z22hAbiBEKgZ/jjjoAkJ5BCZoMNoESCuwZCD33HuKqGrz+ifWkHRmcZK5bcCZ84w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DSFoGlvzf1IlAYNb1LzPQyz0U5BnYVTIcg0pPUWdenA=;
 b=DpguoH+OUwU/eB3/GrpTTtCLOIMkSs4fWrh5JC6lGBnOyB2REnZZzDQeabPnUWjGTloQFhIisA7p/OvivMLBNBxLsp68MHxFCQFXoIsDg+mlFYkiL+mSLUX8LQyYQ/y7gTeyl+oM2a1Kkn8tYEnGktufjykJo2AcSkwcCpkrnJs=
From: George Dunlap <George.Dunlap@citrix.com>
To: Wei Liu <wl@xen.org>
CC: Rich Persaud <persaur@gmail.com>, =?utf-8?B?UGFzaSBLw6Rya2vDpGluZW4=?=
	<pasik@iki.fi>, xen-devel <xen-devel@lists.xenproject.org>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, "Konrad
 Rzeszutek Wilk" <konrad.wilk@oracle.com>, Jan Beulich <JBeulich@suse.com>,
	"bhelgaas@google.com" <bhelgaas@google.com>,
	=?utf-8?B?SMOla29uIEFsc3RhZGhlaW0=?= <hakon@alstadheim.priv.no>, "Roger Pau
 Monne" <roger.pau@citrix.com>,
	=?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>, Jason Andryuk <jandryuk@gmail.com>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, Ian Jackson <Ian.Jackson@citrix.com>,
	Paul Durrant <pdurrant@amazon.com>, Anthony Perard
	<anthony.perard@citrix.com>
Subject: Re: [Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI
 flr/slot/bus reset with 'reset' SysFS attribute
Thread-Topic: [Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI
 flr/slot/bus reset with 'reset' SysFS attribute
Thread-Index: AQHV2EvZHHXF37OTcE6DktrDrf9FJ6mgXYmA
Date: Mon, 19 Oct 2020 11:00:34 +0000
Message-ID: <A325DB30-0282-4512-96D4-06AE661ADB5A@citrix.com>
References: <646A4BEA-C544-4C62-A7A3-B736D3860912@gmail.com>
 <20200131153332.r4oe3sadhvoly7ho@debian>
In-Reply-To: <20200131153332.r4oe3sadhvoly7ho@debian>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d1e0884d-63fc-4e0f-7f08-08d8741e355b
x-ms-traffictypediagnostic: BYAPR03MB4198:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB41982F2E662D10EE8BA16C74991E0@BYAPR03MB4198.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 5kvKBTKDW2hkaCZ/l5v8jzpsjxuQ+8Q7MefA/UJ3zUYPp2xGI7TdihDDhqr5r0dUCdrXWKnTmjkWxg5lLh/tex53Ap0ZBm0KZayF9m7gdOFXNW4AOswOf75VEzO7sGZW1S8bZdDl3YGOkELDVJWL1aGktM+Uu/Nk2aueKosPs808BXL9tUbOscSRNw58oU98LMCoTw//5qkLkdHna84RMcKLm7p3ZMX5ugfCYHeKA5fJ0X+ITEvArpSX4JjM7xl1G/7CaO2lrGVbgXicMNXwBAXhLo49fKaK6hzRNznVK2K+pqnnmADEbMM2KSP1jDJUo/HDlYPfHorclTauePNuZOGYJ040eeSdDUGT0dqeJbiMIm80pCaoBgMe/MBGlUkKKa9r0pmgwBpPCQhxcSdLDg==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(39860400002)(396003)(376002)(346002)(86362001)(6916009)(966005)(6512007)(478600001)(2906002)(5660300002)(54906003)(36756003)(316002)(33656002)(8936002)(83380400001)(8676002)(186003)(66574015)(2616005)(7416002)(53546011)(55236004)(91956017)(26005)(66476007)(6506007)(66446008)(71200400001)(76116006)(64756008)(66556008)(66946007)(6486002)(107886003)(4326008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: EGiKMfF8LDxQri6auJ3iLg5OAEcjHEqhBswlXC0oySyzDydgvBZfV6gZv9W7mybtSjsFPoa4P24aP/JSB13vH82G7N/QkFA+p37TbjfRDmeHiCd5ul0/meXpV/8debSDKl1azueeFtwot15m0GHfDw97gGtU3rVrth7gnThRPClLVs0973rtppuujwuXdkuvT225HlIJlR/ATV2qgBbbIS4XhNCyRpoBOaFo3f00qsc8xqbOm5q7QF1HwHlkbIei1NnAT/JXUbKyi9HpDwt5xVq9v9ju0PcEoEtENYegx6fFYH1zV6psFq42vHbdNLnF85kC41f/7XyQrg7CVBskF6OIUU5GXv23nMrqtbnwe5p2tTjKVzcRU9CYbU9M/BnOEL9u2RrTl5LMZ1YZOTNHMxdjzNMlUj5a6anVc6AeZjj7RJMzjLLUJxWMQHfmOfQZ9CUf+umsv69TFo2W8HjicXmlyYB3RR7UeFKpFYoLklGqzFSzNAK3Y9+R1kYKWn9UVTt8k7gpk8bHQbKJ9cJeKBEe6L0gr66Kf53oY22HgtPjjfchac7mSjt6+V/4kSDb1Exu4Igo5TbyDpW03taenaY6L6YGmeiOp/XF1smRTwEx7OmEOKjYuIlGPb/bskPAH67F0gXmHyhJJrdxmbTvGQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <6B1FD84D1E8F194BB94E48866DBB10B0@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d1e0884d-63fc-4e0f-7f08-08d8741e355b
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Oct 2020 11:00:34.9764
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: cy4pSqODLIdAxZs/B1LFyhML0pX6UbDwVgr37EXNJYab7TJBbxgo1HdZ+3bU8fYKPy1H2jTZLSGksAdxs5D6uBnRTbzHsEkVhhV5SQ1d1q8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4198
X-OriginatorOrg: citrix.com

DQoNCj4gT24gSmFuIDMxLCAyMDIwLCBhdCAzOjMzIFBNLCBXZWkgTGl1IDx3bEB4ZW4ub3JnPiB3
cm90ZToNCj4gDQo+IE9uIEZyaSwgSmFuIDE3LCAyMDIwIGF0IDAyOjEzOjA0UE0gLTA1MDAsIFJp
Y2ggUGVyc2F1ZCB3cm90ZToNCj4+IE9uIEF1ZyAyNiwgMjAxOSwgYXQgMTc6MDgsIFBhc2kgS8Ok
cmtrw6RpbmVuIDxwYXNpa0Bpa2kuZmk+IHdyb3RlOg0KPj4+IO+7v0hpLA0KPj4+IA0KPj4+PiBP
biBNb24sIE9jdCAwOCwgMjAxOCBhdCAxMDozMjo0NUFNIC0wNDAwLCBCb3JpcyBPc3Ryb3Zza3kg
d3JvdGU6DQo+Pj4+PiBPbiAxMC8zLzE4IDExOjUxIEFNLCBQYXNpIEvDpHJra8OkaW5lbiB3cm90
ZToNCj4+Pj4+IE9uIFdlZCwgU2VwIDE5LCAyMDE4IGF0IDExOjA1OjI2QU0gKzAyMDAsIFJvZ2Vy
IFBhdSBNb25uw6kgd3JvdGU6DQo+Pj4+Pj4gT24gVHVlLCBTZXAgMTgsIDIwMTggYXQgMDI6MDk6
NTNQTSAtMDQwMCwgQm9yaXMgT3N0cm92c2t5IHdyb3RlOg0KPj4+Pj4+PiBPbiA5LzE4LzE4IDU6
MzIgQU0sIEdlb3JnZSBEdW5sYXAgd3JvdGU6DQo+Pj4+Pj4+Pj4gT24gU2VwIDE4LCAyMDE4LCBh
dCA4OjE1IEFNLCBQYXNpIEvDpHJra8OkaW5lbiA8cGFzaWtAaWtpLmZpPiB3cm90ZToNCj4+Pj4+
Pj4+PiBIaSwNCj4+Pj4+Pj4+PiBPbiBNb24sIFNlcCAxNywgMjAxOCBhdCAwMjowNjowMlBNIC0w
NDAwLCBCb3JpcyBPc3Ryb3Zza3kgd3JvdGU6DQo+Pj4+Pj4+Pj4+IFdoYXQgYWJvdXQgdGhlIHRv
b2xzdGFjayBjaGFuZ2VzPyBIYXZlIHRoZXkgYmVlbiBhY2NlcHRlZD8gSSB2YWd1ZWx5DQo+Pj4+
Pj4+Pj4+IHJlY2FsbCB0aGVyZSB3YXMgYSBkaXNjdXNzaW9uIGFib3V0IHRob3NlIGNoYW5nZXMg
YnV0IGRvbid0IHJlbWVtYmVyIGhvdw0KPj4+Pj4+Pj4+PiBpdCBlbmRlZC4NCj4+Pj4+Pj4+PiBJ
IGRvbid0IHRoaW5rIHRvb2xzdGFjay9saWJ4bCBwYXRjaCBoYXMgYmVlbiBhcHBsaWVkIHlldCBl
aXRoZXIuDQo+Pj4+Pj4+Pj4gIltQQVRDSCBWMSAwLzFdIFhlbi9Ub29sczogUENJIHJlc2V0IHVz
aW5nICdyZXNldCcgU3lzRlMgYXR0cmlidXRlIjoNCj4+Pj4+Pj4+PiBodHRwczovL2xpc3RzLnhl
bi5vcmcvYXJjaGl2ZXMvaHRtbC94ZW4tZGV2ZWwvMjAxNy0xMi9tc2cwMDY2NC5odG1sDQo+Pj4+
Pj4+Pj4gIltQQVRDSCBWMSAxLzFdIFhlbi9saWJ4bDogUGVyZm9ybSBQQ0kgcmVzZXQgdXNpbmcg
J3Jlc2V0JyBTeXNGUyBhdHRyaWJ1dGUiOg0KPj4+Pj4+Pj4+IGh0dHBzOi8vbGlzdHMueGVuLm9y
Zy9hcmNoaXZlcy9odG1sL3hlbi1kZXZlbC8yMDE3LTEyL21zZzAwNjYzLmh0bWwNCj4+Pj4+Pj4g
V2lsbCB0aGlzIHBhdGNoIHdvcmsgZm9yICpCU0Q/IFJvZ2VyPw0KPj4+Pj4+IEF0IGxlYXN0IEZy
ZWVCU0QgZG9uJ3Qgc3VwcG9ydCBwY2ktcGFzc3Rocm91Z2gsIHNvIG5vbmUgb2YgdGhpcyB3b3Jr
cw0KPj4+Pj4+IEFUTS4gVGhlcmUncyBubyBzeXNmcyBvbiBCU0QsIHNvIG11Y2ggb2Ygd2hhdCdz
IGluIGxpYnhsX3BjaS5jIHdpbGwNCj4+Pj4+PiBoYXZlIHRvIGJlIG1vdmVkIHRvIGxpYnhsX2xp
bnV4LmMgd2hlbiBCU0Qgc3VwcG9ydCBpcyBhZGRlZC4NCj4+Pj4+IE9rLiBUaGF0IHNvdW5kcyBs
aWtlIGl0J3MgT0sgZm9yIHRoZSBpbml0aWFsIHBjaSAncmVzZXQnIGltcGxlbWVudGF0aW9uIGlu
IHhsL2xpYnhsIHRvIGJlIGxpbnV4LW9ubHkuLg0KPj4+PiANCj4+Pj4gQXJlIHRoZXNlIHR3byBw
YXRjaGVzIHN0aWxsIG5lZWRlZD8gSVNUUiB0aGV5IHdlcmUgIHdyaXR0ZW4gb3JpZ2luYWxseQ0K
>>>> to deal with guest trying to use device that was previously assigned to
>>>> another guest. But pcistub_put_pci_dev() calls
>>>> __pci_reset_function_locked() which first tries FLR, and it looks like
>>>> it was added relatively recently.
>>> 
>>> Replying to an old thread.. I only now realized I forgot to reply to this message earlier.
>>> 
>>> afaik these patches are still needed. Håkon (CC'd) wrote to me in private that
>>> he gets a (dom0) Linux kernel crash if he doesn't have these patches applied.
>>> 
>>> 
>>> Here are the links to both the linux kernel and libxl patches:
>>> 
>>> 
>>> "[Xen-devel] [PATCH V3 0/2] Xen/PCIback: PCI reset using 'reset' SysFS attribute":
>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00659.html
>>> 
>>> [Note that PATCH V3 1/2 "Drivers/PCI: Export pcie_has_flr() interface" is already applied in upstream linux kernel, so it's not needed anymore]
>>> 
>>> "[Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI flr/slot/bus reset with 'reset' SysFS attribute":
>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00661.html
>>> 
>>> 
>>> "[Xen-devel] [PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS attribute":
>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html
>>> 
>>> "[Xen-devel] [PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' SysFS attribute":
>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html
>> 
>> [dropping Linux mailing lists]
>> 
>> What is required to get the Xen patches merged?  Rebasing against Xen
>> master?  OpenXT has been carrying a similar patch for many years and
>> we would like to move to an upstream implementation.  Xen users of PCI
>> passthrough would benefit from more reliable device reset.
> 
> Rebase and resend?
> 
> Skimming that thread I think the major concern was backward
> compatibility. That seemed to have been addressed.
> 
> Unfortunately I don't have the time to dig into Linux to see if the
> claim there is true or not.
> 
> It would be helpful to write a concise paragraph to say why backward
> compatibility is not required.

Just going through my old "make sure something happens with this" mails.  Did anything ever happen with this?  Who has the ball here / who is this stuck on?

 -George
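For context on the mechanism the patches above discuss: the `reset` SysFS attribute is the standard Linux userspace interface for requesting a PCI function/slot/bus reset. A minimal sketch follows; the BDF address `0000:03:00.0` is a placeholder example, and the write requires root on a device whose kernel driver exposes a usable reset method (e.g. FLR):

```shell
# Build the sysfs path of the 'reset' attribute for a PCI device,
# given its BDF (domain:bus:device.function) address.
pci_reset_path() {
    printf '/sys/bus/pci/devices/%s/reset' "$1"
}

# Usage (requires root; the attribute only exists when the kernel
# knows how to reset the device, e.g. via FLR):
#
#   bdf=0000:03:00.0                      # placeholder address
#   if [ -w "$(pci_reset_path "$bdf")" ]; then
#       echo 1 > "$(pci_reset_path "$bdf")"
#   fi
```

Writing `1` to this attribute is what the libxl patch delegates to, instead of libxl issuing resets through its own pciback-specific path.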


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 11:45:25 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155971-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155971: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 11:45:11 +0000

flight 155971 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155971/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start    fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-amd 12 redhat-install     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 13 guest-start            fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-qemuu-rhel6hvm-intel 12 redhat-install   fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-amd 12 debian-hvm-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 12 windows-install   fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 12 windows-install  fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 13 guest-start           fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e12ce85b2c79d83a340953291912875c30b3af06
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   60 days
Failing since        152659  2020-08-21 14:07:39 Z   58 days  102 attempts
Testing same since   155931  2020-10-17 13:40:59 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 46332 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 13:41:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 13:41:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8709.23327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVP1-0006ZT-9y; Mon, 19 Oct 2020 13:40:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8709.23327; Mon, 19 Oct 2020 13:40:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVP1-0006ZM-6z; Mon, 19 Oct 2020 13:40:43 +0000
Received: by outflank-mailman (input) for mailman id 8709;
 Mon, 19 Oct 2020 13:40:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUVP0-0006ZH-6S
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 13:40:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d76b381d-7cd2-4a15-8ffb-ff150168e3ba;
 Mon, 19 Oct 2020 13:40:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 32411AFCD;
 Mon, 19 Oct 2020 13:40:40 +0000 (UTC)
X-Inumbo-ID: d76b381d-7cd2-4a15-8ffb-ff150168e3ba
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603114840;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=F96zGFEWJ0bk6yCEfbveljepHQGb0tH3SrWs5zRqVF0=;
	b=EirkYcGjAFu6yJ+YZKfA7Xh+vE7hfqVbcvMsfGbhB8RMBYz+MeGIw7O0N1/ewYkOrqlOI5
	wQLPw9/voTRi9qAS6JJx276/5HrcV+qIRSujk1C/w+7gOzEveGEbhLoAPaqLzGkNpqVcMH
	qtRQnxSFh9Wx0YNA3rO9XNgecngkwcw=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
Message-ID: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
Date: Mon, 19 Oct 2020 15:40:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Of the state saved by the insn and reloaded by the corresponding VMLOAD
- TR, syscall, and sysenter state are invariant while having Xen's state
  loaded,
- FS, GS, and LDTR are not used by Xen and get suitably set in PV
  context switch code.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -984,7 +984,6 @@ static void svm_ctxt_switch_to(struct vc
 
     svm_restore_dr(v);
 
-    svm_vmsave_pa(per_cpu(host_vmcb, cpu));
     vmcb->cleanbits.raw = 0;
     svm_tsc_ratio_load(v);
 


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 13:46:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 13:46:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8712.23339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVUo-0006mS-TJ; Mon, 19 Oct 2020 13:46:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8712.23339; Mon, 19 Oct 2020 13:46:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVUo-0006mL-QN; Mon, 19 Oct 2020 13:46:42 +0000
Received: by outflank-mailman (input) for mailman id 8712;
 Mon, 19 Oct 2020 13:46:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUVUn-0006mF-9E
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 13:46:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 19610d89-3d32-43b0-af75-7f8cd3222931;
 Mon, 19 Oct 2020 13:46:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 19102B22A;
 Mon, 19 Oct 2020 13:46:39 +0000 (UTC)
X-Inumbo-ID: 19610d89-3d32-43b0-af75-7f8cd3222931
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603115199;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=o3uuDviRWo1QtDtFjPhK/Nzml2kSnSQTZRgY9EKGnbA=;
	b=pc7syDGASrgxu9p4ycGJlPyBIstMPB88fF1qdLL8pr0c57Qii3W0y/Zxki0PWN8A5aipNq
	6ZgRhUIb8Y9ikU0eEKZw193t/kgGQ8IDEEBA9PFujmjbzowlAl9HFmLTlj2uBgM3941OZO
	0voeUVVHnEuqJL22B/Nl7ReB4N4ig/U=
Subject: Re: [PATCH v10 01/11] docs / include: introduce a new framework for
 'domain context' records
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201008185735.29875-1-paul@xen.org>
 <20201008185735.29875-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4f0b7537-807c-54cc-0c0b-23e30e833f45@suse.com>
Date: Mon, 19 Oct 2020 15:46:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201008185735.29875-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.10.2020 20:57, Paul Durrant wrote:
> --- /dev/null
> +++ b/xen/include/public/save.h
> @@ -0,0 +1,66 @@
> +/*
> + * save.h
> + *
> + * Structure definitions for common PV/HVM domain state that is held by Xen.
> + *
> + * Copyright Amazon.com Inc. or its affiliates.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef XEN_PUBLIC_SAVE_H
> +#define XEN_PUBLIC_SAVE_H
> +
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +#include "xen.h"
> +
> +/*
> + * C structures for the Domain Context v1 format.
> + * See docs/specs/domain-context.md
> + */
> +
> +struct domain_context_record {
> +    uint32_t type;
> +    uint32_t instance;
> +    uint64_t length;

Should this be uint64_aligned_t, such that alignof() will
produce consistent values regardless of bitness of the invoking
domain?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 14:07:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 14:07:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8715.23351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVok-0000D4-J0; Mon, 19 Oct 2020 14:07:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8715.23351; Mon, 19 Oct 2020 14:07:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVok-0000Cx-G0; Mon, 19 Oct 2020 14:07:18 +0000
Received: by outflank-mailman (input) for mailman id 8715;
 Mon, 19 Oct 2020 14:07:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUVoj-0000Cs-I6
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 14:07:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 42ff7a6f-05cd-4561-8917-80d1a0a300d8;
 Mon, 19 Oct 2020 14:07:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 63606AE0D;
 Mon, 19 Oct 2020 14:07:15 +0000 (UTC)
X-Inumbo-ID: 42ff7a6f-05cd-4561-8917-80d1a0a300d8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603116435;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1nKgLPlVkEfrTGaZqmrHtaozVAlPCHq7SQ2nOKcKHBM=;
	b=AB/AIHgRV7vh5gXD9zT5qizeFS3nRofXGBZIcVuTiyBRYTz5C5aAWv80QLswZ1pQ19uKEu
	vl4RVBs8/NQ7d0KfwdTw82WQjY8vvigfiUwuSgNtUEJpoibmNZCaoTy1wwF69GG2Ray5fL
	ct5IX9KcDcQPpT2mbCafarpmRfLteuM=
Subject: Re: [PATCH v10 02/11] xen: introduce implementation of save/restore
 of 'domain context'
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20201008185735.29875-1-paul@xen.org>
 <20201008185735.29875-3-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a0bb2db8-e69a-8107-194c-538e2a85fecf@suse.com>
Date: Mon, 19 Oct 2020 16:07:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201008185735.29875-3-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.10.2020 20:57, Paul Durrant wrote:
> +void __init domain_register_ctxt_type(unsigned int type, const char *name,
> +                                      domain_save_ctxt_type save,
> +                                      domain_load_ctxt_type load)
> +{
> +    BUG_ON(type >= ARRAY_SIZE(fns));
> +
> +    ASSERT(!fns[type].save);
> +    ASSERT(!fns[type].load);
> +
> +    fns[type].name = name;

I expect I merely didn't spot this (perhaps just latent) issue in
earlier versions: If the caller lives in code getting built into
*.init.o, the string passed in will live in .init.rodata. That's a
general shortcoming (if you like) of the .o -> .init.o
transformation, but I see no good alternative (or else all format
strings passed to printk() and alike won't get moved either).
Therefore I wonder whether it wouldn't be safer to have the struct
field be e.g. char[16], assuming 15 characters will allow for
meaningful names.

> +int domain_load_ctxt_rec_data(struct domain_ctxt_state *c, void *dst,
> +                              size_t len)
> +{
> +    int rc = 0;
> +
> +    c->len += len;
> +    if (c->len > c->rec.length)

Nit: Missing blanks.

> +int domain_load_ctxt(struct domain *d, const struct domain_load_ctxt_ops *ops,
> +                     void *priv)
> +{
> +    struct domain_ctxt_state c = { .d = d, .ops.load = ops, .priv = priv, };
> +    domain_load_ctxt_type load;
> +    int rc;
> +
> +    ASSERT(d != current->domain);
> +
> +    rc = c.ops.load->read(c.priv, &c.rec, sizeof(c.rec));
> +    if ( rc )
> +        return rc;
> +
> +    load = fns[DOMAIN_CONTEXT_START].load;
> +    BUG_ON(!load);
> +
> +    rc = load(d, &c);
> +    if ( rc )
> +        return rc;
> +
> +    domain_pause(d);
> +
> +    for (;;)

Nit: Missing blanks again.

> +    {
> +        unsigned int type;
> +
> +        rc = c.ops.load->read(c.priv, &c.rec, sizeof(c.rec));
> +        if ( rc )
> +            break;
> +
> +        type = c.rec.type;
> +        if ( type == DOMAIN_CONTEXT_END )
> +            break;
> +
> +        rc = -EINVAL;
> +        if ( type >= ARRAY_SIZE(fns) )
> +            break;
> +
> +        load = fns[type].load;

While this is meant to be used by Dom0 only, I think it would be
better if it nevertheless used array_access_nospec().

> +static int load_start(struct domain *d, struct domain_ctxt_state *c)
> +{
> +    static struct domain_context_start s;
> +    unsigned int i;
> +    int rc = domain_load_ctxt_rec(c, DOMAIN_CONTEXT_START, &i, &s, sizeof(s));
> +
> +    if ( rc )
> +        return rc;
> +
> +    if ( i )
> +        return -EINVAL;
> +
> +    /*
> +     * Make sure we are not attempting to load an image generated by a newer
> +     * version of Xen.
> +     */
> +    if ( s.xen_major > XEN_VERSION && s.xen_minor > XEN_SUBVERSION )
> +        return -EOPNOTSUPP;

Are you sure this needs to be excluded here and unilaterally?
And if this is to stay, then it wants to be

    if ( s.xen_major > XEN_VERSION ||
         (s.xen_major == XEN_VERSION && s.xen_minor > XEN_SUBVERSION) )
        return -EOPNOTSUPP;

> +/*
> + * Register save and load handlers for a record type.
> + *
> + * Save handlers will be invoked in an order which copes with any inter-
> + * entry dependencies. For now this means that HEADER will come first and
> + * END will come last, all others being invoked in order of 'typecode'.
> + *
> + * Load handlers will be invoked in the order of entries present in the
> + * buffer.
> + */
> +#define DOMAIN_REGISTER_CTXT_TYPE(x, s, l)                    \
> +    static int __init __domain_register_##x##_ctxt_type(void) \
> +    {                                                         \
> +        domain_register_ctxt_type(                            \
> +            DOMAIN_CONTEXT_ ## x,                             \
> +            #x,                                               \
> +            &(s),                                             \
> +            &(l));                                            \

I don't think there's a need for each of these to consume a separate
line.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 14:07:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 14:07:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8716.23364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVpC-0000IK-Sm; Mon, 19 Oct 2020 14:07:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8716.23364; Mon, 19 Oct 2020 14:07:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVpC-0000IC-Ot; Mon, 19 Oct 2020 14:07:46 +0000
Received: by outflank-mailman (input) for mailman id 8716;
 Mon, 19 Oct 2020 14:07:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jOPR=D2=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1kUVpB-0000I7-Pv
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 14:07:45 +0000
Received: from mail-io1-xd2f.google.com (unknown [2607:f8b0:4864:20::d2f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f1ed62b-e9c2-4a6a-8006-96f0ed54b4d1;
 Mon, 19 Oct 2020 14:07:44 +0000 (UTC)
Received: by mail-io1-xd2f.google.com with SMTP id q25so13106981ioh.4
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 07:07:44 -0700 (PDT)
Received: from FED-nrosbr-BE.crux.rad.ainfosec.com
 (209-217-208-226.northland.net. [209.217.208.226])
 by smtp.gmail.com with ESMTPSA id b2sm10457909ila.62.2020.10.19.07.07.41
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 19 Oct 2020 07:07:43 -0700 (PDT)
X-Inumbo-ID: 7f1ed62b-e9c2-4a6a-8006-96f0ed54b4d1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=7NycqiBF8FosQT/JZwxwlEhE1XoDQEpaYFkJ/zcz7Fc=;
        b=DFbAyS3DUpQ5fvNvLP1kOGrGm8S7XzL3pJrMIcE+DOPHkOmHX6MA553zoCpLgyQ069
         UM+mzp57BkYYtZA9biU6c4EQgjkvtAR710tOXivfDbeozhkfJf1T23/sTj7Zbw+Wbi13
         or81s3aMZ9LP/qvD7mOIukXJbZN2NZ5kOnTuo9MGCOdhBRWavE2llRnUM+14vIN2B4aS
         XM2tBPdqWrQf46C0HTS3LpczMUmzpsRBIwWzEQFNbTlWXdQqiKlaEtd7F9ZcZdCkLEdE
         mhN4Ji7sZw+GoSBN/gd861gcqZ9GxMrcnRxFvGuUp/5kEpLO5rea0OhPk3BM2dN75wd3
         q9lQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=7NycqiBF8FosQT/JZwxwlEhE1XoDQEpaYFkJ/zcz7Fc=;
        b=Tb1BuzA9CzQXqURdlZncXWG+zXA0lCCcEtGIoQJhJKfqSTKu7w8y/XkiKOheMdUXgF
         68C8mWq82ddWJc32tJmIRjE4ekMWO//WymnNcGiKHW5E9fSh+Sa870VxMBqsb+MqtJ42
         sfG69OLb/BXqhtCGbZN/Uc2C/3YMR3OKbROme8uJleFZfRKyiMi/dH+KREKqkjv5Johw
         VPWd/KsQHNeVVB6IoJs0n2TDW0qWOx4gc4uqam+iAt5Uh8/QY3ZyW74hjWX1Ecmn69Lz
         l8ZTiDtBvDO1h/dovcfGk7eI8wSb5xGAJ0L5nbczEDyjZJosd27Wy1matqVA0LcN8oPA
         TqDw==
X-Gm-Message-State: AOAM531mLRplMvnAXtUzc07Lw1EnsbDFbC1utCKi59NLfNhzqCzJQZEN
	c7kYZkmll/BGkfPrrPo5fd8=
X-Google-Smtp-Source: ABdhPJyEuAmwLrm+xzUvMjd18feNAgxOua5BKKqkixX1h5HAXuH77RLYRNDAAc+v+XpBFv1ZugUTNA==
X-Received: by 2002:a6b:6016:: with SMTP id r22mr3570519iog.93.1603116463863;
        Mon, 19 Oct 2020 07:07:43 -0700 (PDT)
Date: Mon, 19 Oct 2020 10:07:39 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
	Rich Persaud <persaur@gmail.com>, Ian Jackson <iwj@xenproject.org>,
	Olivier Lambert <olivier.lambert@vates.fr>,
	Edwin Torok <edvin.torok@citrix.com>
Subject: Re: RFC: Early mock-up of a xenopsd based on golang libxl bindings
Message-ID: <20201019140739.GA347335@FED-nrosbr-BE.crux.rad.ainfosec.com>
References: <84FEBEAF-5859-421E-B595-2358D8490D3F@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <84FEBEAF-5859-421E-B595-2358D8490D3F@citrix.com>

On Fri, Oct 16, 2020 at 04:29:58PM +0000, George Dunlap wrote:
> https://gitlab.com/martyros/go-xen branch `working/xenops` contains a super-basic mock-up of a unix domain xenopsd based on the golang libxl bindings.
> 
> To use:
> 
> * Install Xen >= 4.14 on your target system
> 
> * Make sure you have go >= 1.11 installed
> 
> * Clone & build the server
> 
> $ git clone https://gitlab.com/martyros/go-xen
> 
> $ cd go-xen
> 
> $ git checkout working/xenops
> 
> Note that this is *not* a fast-forwarding branch.
> 
> $ cd xenops/xenopsd
> 
> $ go build
> 
> $ ./xenopsd
> 
> Theoretically this will now accept jsonrpc v1 calls on `/tmp/xenops`.  I haven’t dug into exactly what the wire protocol looks like, but you can test golang’s version of it by using one of the “client examples”.  In another terminal:
> 
> $ cd xenops/client-examples
> 
> $ go run get-domains-example.go
> 
> It should list the currently-running domains and their domain names.
> 
> The core of the actual implementation is in go-xen/xenops/xenops/xenops.go.  Basically, every method you add to the Xenops type of the correct format (described in the “net/rpc” documentation) will be exposed as a method available via RPC.
> 
I haven't had a chance to run it yet, but the code all seems very
straightforward. Looks like a promising approach for prototyping.

> The current code only does a Unix socket, but it could easily be modified to work over http as well.
> 
> Once we have function signatures in the libxl IDL, the xenops methods could all be autogenerated, just like the types are for the golang bindings.
> 
It's on my todo list to get that RFC going again. :)

> It should be noted that at the moment there will be two “layers” of translation both ways here: The golang package will be converting rpc into golang structures, then the libxl libraries will be converting golang structures into C structures; then any return values have to be converted from C structures into golang structures, and then converted again from golang structures into json before being sent back over the wire.  This may or may not be a big overhead.
> 
> Two things that are currently sub-optimal about the `xenlight` package for this use case.
> 
> First, although we have a xenlight.Error type, a lot of the xenlight wrappers return a generic “error”.  I’m not sure how that will end up being converted into json, but we might think about making the xenlight wrappers all return xenlight.Error instead.
>
Returning the "generic error" (i.e. the builtin error interface) is what
we want. Doing otherwise would be very awkward for callers. E.g., if
NameToDomid returned (xenlight.Domid, xenlight.Error), we would have:

  domid, err := ctx.NameToDomid(name)
  if err != nil { // <- Compile-time error; can't compare int to nil
   
  }

So, callers would need to explicitly convert to `error`, or always make
sure they declare `var err error` before assignment. 

This is essentially why the builtin `error` interface exists. 

Re: json, the RPC server will call Error() on any error we return from a
registered function, and set it in the Error field of the response
header (see [1]). So, I think it would be our responsibility to add an
additional "libxl error code" field to our RPC return types.

> Secondly, at the moment the xenlight types are in the same package as the function wrappers.  This means that in order to build even the client, you need to be able to link against an installed libxl library — even though the final binary won’t need to link against libxl at all, and could theoretically be on a completely separate host.
> 
> Unfortunately the way we’ve structured xenlight, it’s not simple to move types.gen.go into its own package, because of the toC and fromC wrappers, which *do* need to link against libxl (for the init and dispose functions).  Nick, we might think about whether we should make separate toC and fromC functions for each of the types, rather than making those methods.
> 
Splitting the package would be one option, but we could also just use a
build tag that allows users to say "I don't need to link against libxl, I
just want the types." E.g., users could run `go build -tags nolibxl`, 
assuming we put `// +build !nolibxl` in the appropriate .go files.

I can send a proof of concept patch for that.

Thanks,
NR

[1] https://golang.org/pkg/net/rpc/#Response


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 14:11:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 14:11:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8721.23376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVsZ-0001As-CH; Mon, 19 Oct 2020 14:11:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8721.23376; Mon, 19 Oct 2020 14:11:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUVsZ-0001Al-8m; Mon, 19 Oct 2020 14:11:15 +0000
Received: by outflank-mailman (input) for mailman id 8721;
 Mon, 19 Oct 2020 14:11:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L3Wa=D2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUVsX-0001Ae-En
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 14:11:13 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5d267e0-b21b-492b-bc33-e0b8ff10cbb5;
 Mon, 19 Oct 2020 14:11:11 +0000 (UTC)
X-Inumbo-ID: c5d267e0-b21b-492b-bc33-e0b8ff10cbb5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603116672;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=5f7+/MNH6SaV19O3fTW/yecNFlNGLoKclSOSAsEY1Zo=;
  b=b6AQtf3vhINp/lKZNuMogwSB107/kbZx4IEyjkKs6kcz5BUnZuSwxqRs
   p8i66nsjGS6cmjYiA6I4arByEEg3Fb/GAI5+0lg+0Ie3hVKuFpL8VrO1D
   UNbkBagMcKPTR6XBnkiGEDReoJe9K2P6Sl7XOYt5657g+znYy9TSb+4yE
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vLVREriFjx0v8ejZNAN8fPnm3PHtWYqAF3XvpoLdTRi5TGNpr+ziIhwt6+vG/2vUT6njaNnT1u
 PhGLPX2r0Fv1yAYQgS+jEzOQilZlHLok2setgqlN5xFsrk/x+V3DQtV3K+IKoHKH9GpXboLLHe
 T+L16yN4gbG8ZeONtd4tAdbDPdc45WF5/YPwDuXK9V927gNJJ0UjZDuGxhwIJEzC3G9NEGZzPA
 NQBZ3HMjmDJQqOhWtR8dvGyGU5YgCGOqgWbwBi2xKtq0dZttWH2zb8ftC/GnqYX1gT9pHxFI3O
 iyk=
X-SBRS: None
X-MesageID: 29304147
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,394,1596513600"; 
   d="scan'208";a="29304147"
Subject: Re: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
References: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
Date: Mon, 19 Oct 2020 15:10:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 19/10/2020 14:40, Jan Beulich wrote:
> Of the state saved by the insn and reloaded by the corresponding VMLOAD
> - TR, syscall, and sysenter state are invariant while having Xen's state
>   loaded,
> - FS, GS, and LDTR are not used by Xen and get suitably set in PV
>   context switch code.

I think it would be helpful to state that Xen's state is suitably cached
in _svm_cpu_up(), as this does now introduce an ordering dependency
during boot.

Is it possibly worth putting an assert checking the TR selector?  That
ought to be good enough to catch stray init reordering problems.

> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Either way, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 14:21:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 14:21:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8724.23387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUW2R-00027r-Au; Mon, 19 Oct 2020 14:21:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8724.23387; Mon, 19 Oct 2020 14:21:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUW2R-00027k-7t; Mon, 19 Oct 2020 14:21:27 +0000
Received: by outflank-mailman (input) for mailman id 8724;
 Mon, 19 Oct 2020 14:21:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUW2P-00027f-Qa
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 14:21:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee4737ec-21bb-451f-8947-1273f4065393;
 Mon, 19 Oct 2020 14:21:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 46496AFD7;
 Mon, 19 Oct 2020 14:21:24 +0000 (UTC)
X-Inumbo-ID: ee4737ec-21bb-451f-8947-1273f4065393
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603117284;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OWHYh1SsJxYGo0Nv55Nki964xR3/rN0Hml5imNVbnRc=;
	b=W+dc3vNa9qAgN34yJCZVMBHZKXSDxI2wB/9zY2CKvymIaLG9V0K+R6QXMnxqf5SGnbRiKW
	RDoNNHZrcb21m9lAE7ZVLYG+6YPx5MDmqCZCJgcd10/g5rHpOYP8+AD2tlQOVDXKzPRnu0
	QYtDAW7cWpa6pm64ZXnhoSxt0rqTq8Y=
Subject: Re: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
 <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <b3b581fc-b1c9-cdc2-add6-900a4305623a@suse.com>
Date: Mon, 19 Oct 2020 16:21:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.10.2020 16:10, Andrew Cooper wrote:
> On 19/10/2020 14:40, Jan Beulich wrote:
>> Of the state saved by the insn and reloaded by the corresponding VMLOAD
>> - TR, syscall, and sysenter state are invariant while having Xen's state
>>   loaded,
>> - FS, GS, and LDTR are not used by Xen and get suitably set in PV
>>   context switch code.
> 
> I think it would be helpful to state that Xen's state is suitably cached
> in _svm_cpu_up(), as this does now introduce an ordering dependency
> during boot.

I've added a sentence.

> Is it possibly worth putting an assert checking the TR selector?  That
> ought to be good enough to catch stray init reordering problems.

How would checking just the TR selector help? If other pieces of TR
or syscall/sysenter were out of sync, we'd be hosed, too.

I'm also not sure what exactly you mean to do for such an assertion:
Merely check the host VMCB field against TSS_SELECTOR, or do an
actual STR to be closer to what the VMSAVE actually did?

>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Either way, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 14:31:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 14:31:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8727.23400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWBh-00034q-Fj; Mon, 19 Oct 2020 14:31:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8727.23400; Mon, 19 Oct 2020 14:31:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWBh-00034j-B0; Mon, 19 Oct 2020 14:31:01 +0000
Received: by outflank-mailman (input) for mailman id 8727;
 Mon, 19 Oct 2020 14:31:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUWBg-00034e-Tq
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 14:31:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ad2fe33-7376-4003-a082-c54b18cb8549;
 Mon, 19 Oct 2020 14:30:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1BD2AE16;
 Mon, 19 Oct 2020 14:30:58 +0000 (UTC)
X-Inumbo-ID: 0ad2fe33-7376-4003-a082-c54b18cb8549
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603117859;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZtR3m5DNeHACrZG7hBrgHqJDYkbABQ8PlejywjfYnOg=;
	b=dGFvWwhr35WZ7qVK4QyFy12l3HlmGPhonEVJ/I3taYdFQBceL8hZgey95715viE9x29UgM
	p+9ddO4bhb1u/C3IYdLJYx+aiN2TDUpn375Qx33/zRenBF95RTP1djNWFwnliUhfqeoGd8
	u55koVjg29Ks7EEBh1JFDj4KoKXpZjI=
Subject: Re: [PATCH v10 03/11] xen/common/domctl: introduce
 XEN_DOMCTL_get/set_domain_context
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Ian Jackson <iwj@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201008185735.29875-1-paul@xen.org>
 <20201008185735.29875-4-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <025d8753-5dd7-a114-6b27-f50ec863582c@suse.com>
Date: Mon, 19 Oct 2020 16:30:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201008185735.29875-4-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.10.2020 20:57, Paul Durrant wrote:
> +static int dry_run_end(void *priv, size_t len)
> +{
> +    struct domctl_context *c = priv;
> +
> +    ASSERT(IS_ALIGNED(c->len, DOMAIN_CONTEXT_RECORD_ALIGN));
> +
> +    return 0;
> +}
> +
> +static struct domain_save_ctxt_ops dry_run_ops = {

const? (same for save_ops and load_ops then)

> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1132,6 +1132,41 @@ struct xen_domctl_vuart_op {
>                                   */
>  };
>  
> +/*
> + * XEN_DOMCTL_get_domain_context
> + * -----------------------------
> + *
> + * buffer (IN):   The buffer into which the context data should be
> + *                copied, or NULL to query the buffer size that should
> + *                be allocated.
> + * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
> + *                zero, and the value passed out will be the size of the
> + *                buffer to allocate.
> + *                If 'buffer' is non-NULL then the value passed in must
> + *                be the size of the buffer into which data may be copied.
> + *                The value passed out will be the size of data written.
> + */
> +struct xen_domctl_get_domain_context {
> +    uint64_t size;

uint64_aligned_t (also again below)?

With these adjusted
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 14:31:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 14:31:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8728.23412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWBq-00037J-Nv; Mon, 19 Oct 2020 14:31:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8728.23412; Mon, 19 Oct 2020 14:31:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWBq-00037C-KQ; Mon, 19 Oct 2020 14:31:10 +0000
Received: by outflank-mailman (input) for mailman id 8728;
 Mon, 19 Oct 2020 14:31:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L3Wa=D2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUWBp-00036o-6T
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 14:31:09 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e41c120-92eb-49d6-95fe-e95dc6bdf72b;
 Mon, 19 Oct 2020 14:30:56 +0000 (UTC)
X-Inumbo-ID: 5e41c120-92eb-49d6-95fe-e95dc6bdf72b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603117857;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=52UN4GH401zBIqFd+LIiuHFO8x4myJdJe/Zsm2McNhY=;
  b=BXUtpANq14C+aEWAe716tUc9/wyvtZzY/cwBFISHdCHKyH7T1CQ+qdtA
   9Ry6i8P2LkohopCtCikp4zJx/MCamYnsFFbm7eY9+jh3aBmjVm8LIQrPk
   mVmt+d8eWxDOVciy47C4hxwzY/GDT6ZI2wCxBhCCNmBp29LZ3UyBR7K4p
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 2NW4VCpKbbpQTiUYVJwqnjH7RxwYcekGibTeUbWeJV4InFtEtHA+RLaPu/e+N+TzkxVnnc3NdC
 Cji8B/eeCxt9dbcLPkVNYcwJLU07QcC3Y9SHqzXHqtyr76GsmC5NtY0/5ud4aE9hiUqFfPT1mg
 FoRS6YgRUQv4JrDnIh6hCf+esNiSWjMr2mJJD2Q1HGgrqNHEmpv9htfADJqxq6QruQuCREuois
 3+SI0daBAOW4IfMBc87D0X3/luU6fDBNN25H/BIbfnoUe66+8SHwhOcsiTkp6GKJjLtww1y7IQ
 /7A=
X-SBRS: None
X-MesageID: 29636912
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,394,1596513600"; 
   d="scan'208";a="29636912"
Subject: Re: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
 <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
 <b3b581fc-b1c9-cdc2-add6-900a4305623a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6af1bbb6-d717-affa-6409-2b983e48ed30@citrix.com>
Date: Mon, 19 Oct 2020 15:30:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b3b581fc-b1c9-cdc2-add6-900a4305623a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 19/10/2020 15:21, Jan Beulich wrote:
> On 19.10.2020 16:10, Andrew Cooper wrote:
>> On 19/10/2020 14:40, Jan Beulich wrote:
>>> Of the state saved by the insn and reloaded by the corresponding VMLOAD
>>> - TR, syscall, and sysenter state are invariant while having Xen's state
>>>   loaded,
>>> - FS, GS, and LDTR are not used by Xen and get suitably set in PV
>>>   context switch code.
>> I think it would be helpful to state that Xen's state is suitably cached
>> in _svm_cpu_up(), as this does now introduce an ordering dependency
>> during boot.
> I've added a sentence.
>
>> Is it possibly worth putting an assert checking the TR selector?  That
>> ought to be good enough to catch stray init reordering problems.
> How would checking just the TR selector help? If other pieces of TR
> or syscall/sysenter were out of sync, we'd be hosed, too.

They're far less likely to move relative to TR than everything relative
to hvm_up().

> I'm also not sure what exactly you mean to do for such an assertion:
> Merely check the host VMCB field against TSS_SELECTOR, or do an
> actual STR to be closer to what the VMSAVE actually did?

ASSERT(str() == TSS_SELECTOR);

The problem with the other state is that it is compile-time/runtime
dependent, and we don't want to be open-coding the logic a second time.
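
(For concreteness, a sketch of what such a check might look like. The
str() inline-assembly wrapper and its placement in the SVM bring-up /
context-switch path are assumptions for illustration, not part of the
posted patch; ASSERT and TSS_SELECTOR are the Xen-internal names used
in this thread:)

```c
/* Hypothetical helper, reading the live task register via STR: */
static inline unsigned int str(void)
{
    unsigned int sel;

    asm volatile ( "str %0" : "=r" (sel) );

    return sel;
}

/* ... then, at the point relying on the cached host state: */
ASSERT(str() == TSS_SELECTOR);
```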

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:03:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8734.23423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWgZ-0005pz-Ax; Mon, 19 Oct 2020 15:02:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8734.23423; Mon, 19 Oct 2020 15:02:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWgZ-0005ps-7w; Mon, 19 Oct 2020 15:02:55 +0000
Received: by outflank-mailman (input) for mailman id 8734;
 Mon, 19 Oct 2020 15:02:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUWgY-0005pn-0p
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:02:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 290af866-af43-481c-869a-d8cf361f1287;
 Mon, 19 Oct 2020 15:02:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6CF42ADF7;
 Mon, 19 Oct 2020 15:02:51 +0000 (UTC)
X-Inumbo-ID: 290af866-af43-481c-869a-d8cf361f1287
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603119771;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YLlIyCd/5KJtDGeP3FKDOG2rOrTGcnbz3rxNcUDALps=;
	b=KdrRlcJb9E7VlDJEr4Xw9B7gOANr8CuG3e3fRq+m8rSOWT4yjDS/uNExtW3ICv/H4EBGjq
	40ztMF/c4o9iPbbGP9Fbu1j8Wf8YjZQv7DKQ/cauZ3anVJDzq5PuEBDUXFgFqxeGcwHZu4
	ss+B9n1skwcIa9KF8JnjjmF+beGsTJM=
Subject: Re: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
 <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
 <b3b581fc-b1c9-cdc2-add6-900a4305623a@suse.com>
 <6af1bbb6-d717-affa-6409-2b983e48ed30@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <59f3e399-8676-bb44-ec85-500583f97b2f@suse.com>
Date: Mon, 19 Oct 2020 17:02:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <6af1bbb6-d717-affa-6409-2b983e48ed30@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.10.2020 16:30, Andrew Cooper wrote:
> On 19/10/2020 15:21, Jan Beulich wrote:
>> On 19.10.2020 16:10, Andrew Cooper wrote:
>>> On 19/10/2020 14:40, Jan Beulich wrote:
>>>> Of the state saved by the insn and reloaded by the corresponding VMLOAD
>>>> - TR, syscall, and sysenter state are invariant while having Xen's state
>>>>   loaded,
>>>> - FS, GS, and LDTR are not used by Xen and get suitably set in PV
>>>>   context switch code.
>>> I think it would be helpful to state that Xen's state is suitably cached
>>> in _svm_cpu_up(), as this does now introduce an ordering dependency
>>> during boot.
>> I've added a sentence.
>>
>>> Is it possibly worth putting an assert checking the TR selector?  That
>>> ought to be good enough to catch stray init reordering problems.
>> How would checking just the TR selector help? If other pieces of TR
>> or syscall/sysenter were out of sync, we'd be hosed, too.
> 
> They're far less likely to move relative to TR than everything relative
> to hvm_up().
> 
>> I'm also not sure what exactly you mean to do for such an assertion:
>> Merely check the host VMCB field against TSS_SELECTOR, or do an
>> actual STR to be closer to what the VMSAVE actually did?
> 
> ASSERT(str() == TSS_SELECTOR);

Oh, that's odd. How would this help with the VMCB? I thought
you wanted the VMCB field checked (which is what we're going to
have host state loaded from, and which hence is what shouldn't
be wrong). If the assert as you suggest passes, it tells us
nothing about our environment after the next VM exit.

> The problem with the other state is that it is compile-time/runtime
> dependent, and we don't want to be open-coding the logic a second time.

Right, but the assertion should be useful in at least some way,
which may make it unavoidable to duplicate some of the logic.
In effect the assertion as you're suggesting it does so, too, in
that it further implies VMCB.trsel == TSS_SELECTOR.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:06:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:06:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8737.23436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWjm-00060k-R7; Mon, 19 Oct 2020 15:06:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8737.23436; Mon, 19 Oct 2020 15:06:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWjm-00060d-NE; Mon, 19 Oct 2020 15:06:14 +0000
Received: by outflank-mailman (input) for mailman id 8737;
 Mon, 19 Oct 2020 15:06:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUWjl-00060X-A5
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:06:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79b20b0d-5411-4c45-9d63-85c980849898;
 Mon, 19 Oct 2020 15:06:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E318FAD2C;
 Mon, 19 Oct 2020 15:06:10 +0000 (UTC)
X-Inumbo-ID: 79b20b0d-5411-4c45-9d63-85c980849898
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603119971;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/79kF2ExC6eVhHBfS6zXHWQ6VawLjcyrM3D9bYLTy5w=;
	b=Hfn/wlTzOBfUhcPu6kNBhC7G8I1/eqBCJsyggUO3eBe5o1O3nXPxzlJdL9w1URdr5lovIi
	CcNCZDPsAvXcTjyPJzUBUUgSRvkkc0RzWZOR/1fEG04dGB2AHuGmDIsDCLmt8jCajw7bWn
	GFJfj2kihcpsisbd1Imf3nmhTrNMxoY=
Subject: Re: [PATCH v10 03/11] xen/common/domctl: introduce
 XEN_DOMCTL_get/set_domain_context
From: Jan Beulich <jbeulich@suse.com>
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Ian Jackson <iwj@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201008185735.29875-1-paul@xen.org>
 <20201008185735.29875-4-paul@xen.org>
 <025d8753-5dd7-a114-6b27-f50ec863582c@suse.com>
Message-ID: <16f30ecf-6aa5-264f-1c2c-9db573d1cbb2@suse.com>
Date: Mon, 19 Oct 2020 17:06:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <025d8753-5dd7-a114-6b27-f50ec863582c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.10.2020 16:30, Jan Beulich wrote:
> On 08.10.2020 20:57, Paul Durrant wrote:
>> +static int dry_run_end(void *priv, size_t len)
>> +{
>> +    struct domctl_context *c = priv;
>> +
>> +    ASSERT(IS_ALIGNED(c->len, DOMAIN_CONTEXT_RECORD_ALIGN));
>> +
>> +    return 0;
>> +}
>> +
>> +static struct domain_save_ctxt_ops dry_run_ops = {
> 
> const? (same for save_ops and load_ops then)
> 
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -1132,6 +1132,41 @@ struct xen_domctl_vuart_op {
>>                                   */
>>  };
>>  
>> +/*
>> + * XEN_DOMCTL_get_domain_context
>> + * -----------------------------
>> + *
>> + * buffer (IN):   The buffer into which the context data should be
>> + *                copied, or NULL to query the buffer size that should
>> + *                be allocated.
>> + * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
>> + *                zero, and the value passed out will be the size of the
>> + *                buffer to allocate.
>> + *                If 'buffer' is non-NULL then the value passed in must
>> + *                be the size of the buffer into which data may be copied.
>> + *                The value passed out will be the size of data written.
>> + */
>> +struct xen_domctl_get_domain_context {
>> +    uint64_t size;
> 
> uint64_aligned_t (also again below)?
> 
> With these adjusted
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

FAOD: Non-XSM hypervisor pieces only.

Jan
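The buffer/size protocol documented in the header comment above is the usual two-call pattern: query with a NULL buffer to learn the required size, allocate, then call again to fetch the data. A minimal caller sketch, with `get_domain_context()` as a hypothetical stand-in for issuing the domctl (the real call goes through the libxc/privcmd layer, and the context payload here is mocked):

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for XEN_DOMCTL_get_domain_context.  Protocol as
 * documented in domctl.h: buf == NULL queries the required size (*size
 * must be 0 on entry); buf != NULL copies out data (*size is the buffer
 * size on entry, bytes written on exit).  Mocked with fixed data. */
static const char mock_ctxt[] = "domain-context-records";

static int get_domain_context(void *buf, uint64_t *size)
{
    if ( !buf )
    {
        if ( *size != 0 )
            return -1;                 /* must pass zero when querying */
        *size = sizeof(mock_ctxt);
        return 0;
    }

    if ( *size < sizeof(mock_ctxt) )
        return -1;                     /* caller's buffer too small */

    memcpy(buf, mock_ctxt, sizeof(mock_ctxt));
    *size = sizeof(mock_ctxt);         /* bytes actually written */
    return 0;
}

/* The two-call dance a toolstack caller performs. */
static void *fetch_domain_context(uint64_t *len)
{
    uint64_t size = 0;
    void *buf;

    if ( get_domain_context(NULL, &size) )      /* 1: learn the size */
        return NULL;

    buf = malloc(size);
    if ( !buf )
        return NULL;

    if ( get_domain_context(buf, &size) )       /* 2: fetch the data */
    {
        free(buf);
        return NULL;
    }

    *len = size;
    return buf;
}
```

Note the comment in the interface requires the size to be passed as zero on the query call, which is why `size` starts at 0 above.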


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:16:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:16:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8740.23447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWtg-0006wo-QK; Mon, 19 Oct 2020 15:16:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8740.23447; Mon, 19 Oct 2020 15:16:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUWtg-0006wh-ND; Mon, 19 Oct 2020 15:16:28 +0000
Received: by outflank-mailman (input) for mailman id 8740;
 Mon, 19 Oct 2020 15:16:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xInf=D2=alstadheim.priv.no=hakon@srs-us1.protection.inumbo.net>)
 id 1kUWtf-0006wc-Ra
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:16:28 +0000
Received: from asav22.altibox.net (unknown [109.247.116.9])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 233d84c1-7aaa-406e-8c35-6876ba07cd29;
 Mon, 19 Oct 2020 15:16:25 +0000 (UTC)
Received: from postfix-relay.alstadheim.priv.no
 (148-252-96.12.3p.ntebredband.no [148.252.96.12])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 (Authenticated sender: hakon.alstadheim@ntebb.no)
 by asav22.altibox.net (Postfix) with ESMTPSA id 487CC201E4;
 Mon, 19 Oct 2020 17:16:23 +0200 (CEST)
Received: from smtps.alstadheim.priv.no (localhost [127.0.0.1])
 by postfix-relay.alstadheim.priv.no (Postfix) with ESMTP id EC6AE625EC1A;
 Mon, 19 Oct 2020 17:16:20 +0200 (CEST)
Received: from [192.168.2.201] (unknown [192.168.2.201])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: hakon)
 by smtps.alstadheim.priv.no (Postfix) with ESMTPSA id A0CA92410673;
 Mon, 19 Oct 2020 17:16:20 +0200 (CEST)
X-Finnesikke-B-A-I-T: finnesikke@alstadheim.priv.no
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=alstadheim.priv.no;
	s=smtp; t=1603120580;
	bh=JJ8KE9aax60EZuFhLwHCU8UNqNosfxGFDPEIN4Y4Vkg=;
	h=Subject:To:Cc:References:From:Date:In-Reply-To:From;
	b=u0BjycYxzVnQQva5xDIv0Sn3sbk0xB/DbBO8IvX0ttJP0M5IjkLTQfjvwI7OJfqZX
	 rt1jF1QjNHes4zQoRIE3cF1NOxpBJZcK14lJ/6rQUroK/EIcwOYKTPFrHt2jqvsM9i
	 STk8eaKArssRqeH4uv+5VnEzuylJ32rMO8ZljsbM=
Subject: Re: [Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI
 flr/slot/bus reset with 'reset' SysFS attribute
To: George Dunlap <George.Dunlap@citrix.com>, Wei Liu <wl@xen.org>
Cc: Rich Persaud <persaur@gmail.com>, =?UTF-8?B?UGFzaSBLw6Rya2vDpGluZW4=?=
 <pasik@iki.fi>, xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Jan Beulich <JBeulich@suse.com>, "bhelgaas@google.com"
 <bhelgaas@google.com>, Roger Pau Monne <roger.pau@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <646A4BEA-C544-4C62-A7A3-B736D3860912@gmail.com>
 <20200131153332.r4oe3sadhvoly7ho@debian>
 <A325DB30-0282-4512-96D4-06AE661ADB5A@citrix.com>
From: =?UTF-8?Q?H=c3=a5kon_Alstadheim?= <hakon@alstadheim.priv.no>
Message-ID: <30109c0a-3aa3-8af7-dadf-66f508071963@alstadheim.priv.no>
Date: Mon, 19 Oct 2020 17:16:19 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <A325DB30-0282-4512-96D4-06AE661ADB5A@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=Du94Bl3+ c=1 sm=1 tr=0
	a=pSdy67GPd25lyaVjMobptA==:117 a=pSdy67GPd25lyaVjMobptA==:17
	a=IkcTkHD0fZMA:10 a=afefHYAZSVUA:10 a=M51BFTxLslgA:10 a=mLnsDVdbAAAA:8
	a=ZD_xPv-c6uJpPFqAP2MA:9 a=QEXdDO2ut3YA:10 a=KNCrymi1MkcA:10
	a=n3xS-gtqsLMA:10 a=L6mzFnP-7RsA:10 a=LJOHJtgqUKgA:10
	a=xnp1pY6zelCj5OLna2To:22

Den 19.10.2020 13:00, skrev George Dunlap:
>
>> On Jan 31, 2020, at 3:33 PM, Wei Liu <wl@xen.org> wrote:
>>
>> On Fri, Jan 17, 2020 at 02:13:04PM -0500, Rich Persaud wrote:
>>> On Aug 26, 2019, at 17:08, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>>>> Hi,
>>>>
>>>>> On Mon, Oct 08, 2018 at 10:32:45AM -0400, Boris Ostrovsky wrote:
>>>>>> On 10/3/18 11:51 AM, Pasi Kärkkäinen wrote:
>>>>>> On Wed, Sep 19, 2018 at 11:05:26AM +0200, Roger Pau Monné wrote:
>>>>>>> On Tue, Sep 18, 2018 at 02:09:53PM -0400, Boris Ostrovsky wrote:
>>>>>>>> On 9/18/18 5:32 AM, George Dunlap wrote:
>>>>>>>>>> On Sep 18, 2018, at 8:15 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>>>>>>>>>> Hi,
>>>>>>>>>> On Mon, Sep 17, 2018 at 02:06:02PM -0400, Boris Ostrovsky wrote:
>>>>>>>>>>> What about the toolstack changes? Have they been accepted? I vaguely
>>>>>>>>>>> recall there was a discussion about those changes but don't remember how
>>>>>>>>>>> it ended.
>>>>>>>>>> I don't think toolstack/libxl patch has been applied yet either.
>>>>>>>>>> "[PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS attribute":
>>>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html
>>>>>>>>>> "[PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' SysFS attribute":
>>>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html
>>>>>>>> Will this patch work for *BSD? Roger?
>>>>>>> At least FreeBSD don't support pci-passthrough, so none of this works
>>>>>>> ATM. There's no sysfs on BSD, so much of what's in libxl_pci.c will
>>>>>>> have to be moved to libxl_linux.c when BSD support is added.
>>>>>> Ok. That sounds like it's OK for the initial pci 'reset' implementation in xl/libxl to be linux-only..
>>>>> Are these two patches still needed? ISTR they were  written originally
>>>>> to deal with guest trying to use device that was previously assigned to
>>>>> another guest. But pcistub_put_pci_dev() calls
>>>>> __pci_reset_function_locked() which first tries FLR, and it looks like
>>>>> it was added relatively recently.
>>>> Replying to an old thread.. I only now realized I forgot to reply to this message earlier.
>>>>
>>>> afaik these patches are still needed. Håkon (CC'd) wrote to me in private that
>>>> he gets a (dom0) Linux kernel crash if he doesn't have these patches applied.
>>>>
>>>>
>>>> Here are the links to both the linux kernel and libxl patches:
>>>>
>>>>
>>>> "[Xen-devel] [PATCH V3 0/2] Xen/PCIback: PCI reset using 'reset' SysFS attribute":
>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00659.html
>>>>
>>>> [Note that PATCH V3 1/2 "Drivers/PCI: Export pcie_has_flr() interface" is already applied in upstream linux kernel, so it's not needed anymore]
>>>>
>>>> "[Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI flr/slot/bus reset with 'reset' SysFS attribute":
>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00661.html
>>>>
>>>>
>>>> "[Xen-devel] [PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS attribute":
>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html
>>>>
>>>> "[Xen-devel] [PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' SysFS attribute":
>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html
>>> [dropping Linux mailing lists]
>>>
>>> What is required to get the Xen patches merged?  Rebasing against Xen
>>> master?  OpenXT has been carrying a similar patch for many years and
>>> we would like to move to an upstream implementation.  Xen users of PCI
>>> passthrough would benefit from more reliable device reset.
>> Rebase and resend?
>>
>> Skimming that thread I think the major concern was backward
>> compatibility. That seemed to have been addressed.
>>
>> Unfortunately I don't have the time to dig into Linux to see if the
>> claim there is true or not.
>>
>> It would be helpful to write a concise paragraph to say why backward
>> compatibility is not required.
> Just going through my old “make sure something happens with this” mails.  Did anything ever happen with this?  Who has the ball here / who is this stuck on?

We're waiting for "somebody" to testify that fixing this will not
adversely affect anyone. I'm not qualified, but my strong belief is that
since "reset" / "do_flr" is not currently implemented or used in any
official distribution's Linux kernel, it should be OK.

The patches still work in current staging-4.14, btw.
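For reference, the interface these patches build on is the kernel's per-device sysfs `reset` attribute: writing "1" to it asks the kernel to perform whichever reset method (FLR, slot, or bus reset) it considers available for the device. A hedged sketch of a toolstack-side helper; the path is taken as a parameter (against a real device it would be `/sys/bus/pci/devices/<BDF>/reset`), and the real libxl patch of course adds existence checks and error reporting around this:

```c
#include <stdio.h>

/* Write "1" to a device's sysfs 'reset' attribute to request a reset.
 * 'path' is e.g. "/sys/bus/pci/devices/0000:01:00.0/reset"; taken as a
 * parameter so the helper can be exercised against any writable file.
 * Returns 0 on success, -1 on failure. */
static int pci_sysfs_reset(const char *path)
{
    FILE *f = fopen(path, "w");

    if ( !f )
        return -1;

    if ( fwrite("1", 1, 1, f) != 1 )
    {
        fclose(f);
        return -1;
    }

    return fclose(f) ? -1 : 0;
}
```

A device without any usable reset method simply has no `reset` attribute, which is why the libxl side needs a fallback path when the open fails.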




From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:23:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:23:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8743.23460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUX03-0007rF-HJ; Mon, 19 Oct 2020 15:23:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8743.23460; Mon, 19 Oct 2020 15:23:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUX03-0007r8-EK; Mon, 19 Oct 2020 15:23:03 +0000
Received: by outflank-mailman (input) for mailman id 8743;
 Mon, 19 Oct 2020 15:23:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L3Wa=D2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUX02-0007r3-O4
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:23:02 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78771f21-7380-4f52-9d43-fda1e4067b16;
 Mon, 19 Oct 2020 15:23:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603120980;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=R11Xod3XK+6Hs4DEahgUgiVyeSCOxOG/2tTH0iJS3is=;
  b=ZKXyNiYLeh5sISGgMWm/HkbIXVlDHW5OXOAT2oGbnlLSm1Iri6rQSrmF
   I6gRZffp5Y4o4HelOJbmVLW77ALLYK1PA1cRMfz7XjZICkDDbwE+LMIeO
   pACWaJcTuB5z1752DI+7QmRjEWdEzjb9vByONAKRiGuOZMbkvrJJV+4r0
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: rW5d9cAXVkVUmWSfYS9Q90kGhpwTbSh46fca+7PQKdiRboF5luMIGifyTUo2QhVVwhKOBoD5RZ
 qLiVNoKZJ2bdbFnBTlvBY0GzZssHEaCyXum/5xdHYxPf12hdbVHcFtXfL6TOt7nbpPBa3pje2N
 nMrGu5o2IyJEY4rKuaNIfTqdiTMg6aCDkaXb0HW96zqiRbISlGIoPqJ+AQertp+pridls5+n3F
 OrRyiekNyMayOKcE2xUo9K73gknnlzL4/aqIGL9K5b/3jZN7ZMl7gKmm6cBnEupHcr7X0cGYMS
 Mpc=
X-SBRS: None
X-MesageID: 29555130
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,394,1596513600"; 
   d="scan'208";a="29555130"
Subject: Re: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
 <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
 <b3b581fc-b1c9-cdc2-add6-900a4305623a@suse.com>
 <6af1bbb6-d717-affa-6409-2b983e48ed30@citrix.com>
 <59f3e399-8676-bb44-ec85-500583f97b2f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <23d02e3b-7dac-ceb8-ebdd-3b77f264d6b4@citrix.com>
Date: Mon, 19 Oct 2020 16:22:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <59f3e399-8676-bb44-ec85-500583f97b2f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 19/10/2020 16:02, Jan Beulich wrote:
> On 19.10.2020 16:30, Andrew Cooper wrote:
>> On 19/10/2020 15:21, Jan Beulich wrote:
>>> On 19.10.2020 16:10, Andrew Cooper wrote:
>>>> On 19/10/2020 14:40, Jan Beulich wrote:
>>>>> Of the state saved by the insn and reloaded by the corresponding VMLOAD
>>>>> - TR, syscall, and sysenter state are invariant while having Xen's state
>>>>>   loaded,
>>>>> - FS, GS, and LDTR are not used by Xen and get suitably set in PV
>>>>>   context switch code.
>>>> I think it would be helpful to state that Xen's state is suitably cached
>>>> in _svm_cpu_up(), as this does now introduce an ordering dependency
>>>> during boot.
>>> I've added a sentence.
>>>
>>>> Is it possibly worth putting an assert checking the TR selector?  That
>>>> ought to be good enough to catch stray init reordering problems.
>>> How would checking just the TR selector help? If other pieces of TR
>>> or syscall/sysenter were out of sync, we'd be hosed, too.
>> They're far less likely to move relative to tr, than everything relative
>> to hvm_up().
>>
>>> I'm also not sure what exactly you mean to do for such an assertion:
>>> Merely check the host VMCB field against TSS_SELECTOR, or do an
>>> actual STR to be closer to what the VMSAVE actually did?
>> ASSERT(str() == TSS_SELECTOR);
> Oh, that's odd. How would this help with the VMCB?

It won't.

We're not checking the behaviour of the VMSAVE instruction. We just
want to sanity-check that %tr is already configured.

This version is far simpler than checking VMCB.trsel, which would
require a map_domain_page().

~Andrew
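The check being proposed is cheap because STR simply reads the visible task-register selector. A sketch of what it might look like in the SVM context-switch path (hypervisor-context pseudocode: ASSERT, TSS_SELECTOR, and the surrounding code are Xen internals, so this is not standalone compilable code):

```c
/* Read %tr via STR; returns the visible TSS selector. */
static inline unsigned int str(void)
{
    unsigned int sel;

    asm volatile ( "str %0" : "=r" (sel) );
    return sel;
}

/* In the SVM ctxt-switch-to path: catch init reordering where the
 * VMSAVE-side host state was cached (in _svm_cpu_up()) before %tr
 * was actually loaded. */
ASSERT(str() == TSS_SELECTOR);
```

As noted above, this avoids mapping the VMCB just to compare its trsel field.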


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:25:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:25:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8746.23472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUX2j-00081I-4U; Mon, 19 Oct 2020 15:25:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8746.23472; Mon, 19 Oct 2020 15:25:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUX2j-00081B-1N; Mon, 19 Oct 2020 15:25:49 +0000
Received: by outflank-mailman (input) for mailman id 8746;
 Mon, 19 Oct 2020 15:25:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUX2h-000816-6g
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:25:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 168b3bbc-b38d-4cb9-8b86-068a83c51cf2;
 Mon, 19 Oct 2020 15:25:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 94507AC8B;
 Mon, 19 Oct 2020 15:25:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603121144;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gh2o7skQDVKv2Z2IafZlyzrok3EROZphdgRUYxxQBDE=;
	b=GiuaglY7qP0hNg1bL8JZAn7L8pW9TynUz3smaxMpYhX7oM+rbpkN9MBinTwwxk1hNqZW7z
	bCOwcQgr8d58SKMmbtm7THOBa3UxRtaxz9NCwLU5zT4pjg0xnCLYNH0aItowtCGdcdPjCx
	oP8HalWjXaiVUZnb0ilqNLW8WmonzPs=
Subject: Re: [PATCH v10 05/11] common/domain: add a domain context record for
 shared_info...
To: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201008185735.29875-1-paul@xen.org>
 <20201008185735.29875-6-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <053295cf-9150-8ba4-0427-ba65b639f4ae@suse.com>
Date: Mon, 19 Oct 2020 17:25:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201008185735.29875-6-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 08.10.2020 20:57, Paul Durrant wrote:
> @@ -1671,6 +1672,118 @@ int continue_hypercall_on_cpu(
>      return 0;
>  }
>  
> +static int save_shared_info(struct domain *d, struct domain_ctxt_state *c,
> +                            bool dry_run)
> +{
> +#ifdef CONFIG_COMPAT
> +    struct domain_context_shared_info s = {
> +        .flags = has_32bit_shinfo(d) ? DOMAIN_CONTEXT_32BIT_SHARED_INFO : 0,
> +    };
> +    size_t size = has_32bit_shinfo(d) ?
> +        sizeof(struct compat_shared_info) :
> +        sizeof(struct shared_info);
> +#else
> +    struct domain_context_shared_info s = {};
> +    size_t size = sizeof(struct shared_info);

All of these would imo better be expressed in terms of d->shared_info.
While chances are zero that these types will change in any way, it
still sets a bad precedent for people seeing this and then introducing
similar disconnects elsewhere. (Same in the load handling then.)
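Concretely (stand-in types below, not the real Xen definitions): with CONFIG_COMPAT, shared_info_t is a union of the native and compat layouts, so both sizes can be derived from the d->shared_info field itself rather than naming the types again:

```c
/* Stand-ins for the Xen types under discussion. */
struct demo_native { unsigned long x[4]; };  /* struct shared_info */
struct demo_compat { unsigned int  x[4]; };  /* struct compat_shared_info */

typedef union {
    struct demo_native native;
    struct demo_compat compat;
} demo_shared_info_t;

struct demo_domain { demo_shared_info_t *shared_info; };

/* Tie the size to the object: if the field's type ever changes, the
 * sizeof expressions follow automatically, with no second place
 * naming the types that could fall out of sync. */
static unsigned long shinfo_size(const struct demo_domain *d, int has_32bit)
{
    return has_32bit ? sizeof(d->shared_info->compat)
                     : sizeof(d->shared_info->native);
}
```

`sizeof` on a pointer dereference is evaluated at compile time, so this works even before the field is populated.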

> +static int load_shared_info(struct domain *d, struct domain_ctxt_state *c)
> +{
> +    struct domain_context_shared_info s = {};
> +    size_t size;
> +    unsigned int i;
> +    int rc;
> +
> +    rc = domain_load_ctxt_rec_begin(c, DOMAIN_CONTEXT_SHARED_INFO, &i);
> +    if ( rc )
> +        return rc;
> +
> +    if ( i ) /* expect only a single instance */
> +        return -ENXIO;
> +
> +    rc = domain_load_ctxt_rec_data(c, &s, offsetof(typeof(s), buffer));
> +    if ( rc )
> +        return rc;
> +
> +    if ( s.flags & ~DOMAIN_CONTEXT_32BIT_SHARED_INFO )
> +        return -EINVAL;
> +
> +    if ( s.flags & DOMAIN_CONTEXT_32BIT_SHARED_INFO )
> +    {
> +#ifdef CONFIG_COMPAT
> +        d->arch.has_32bit_shinfo = true;
> +        size = sizeof(struct compat_shared_info);

I realize this has been more or less this way already in prior
versions, but aren't you introducing a way to have a degenerate
64-bit PV guest with 32-bit shared info (or vice versa), in that
shared info bitness isn't strictly tied to guest bitness anymore?
Rejecting this case may not need to live here, but it needs to be
present / added somewhere.

> +#else
> +        return -EINVAL;
> +#endif
> +    }
> +    else
> +        size = sizeof(struct shared_info);
> +
> +    if ( is_pv_domain(d) )
> +    {
> +        shared_info_t *shinfo = xzalloc(shared_info_t);
> +
> +        if ( !shinfo )
> +            return -ENOMEM;
> +
> +        rc = domain_load_ctxt_rec_data(c, shinfo, size);
> +        if ( rc )
> +            goto out;
> +
> +        memcpy(&shared_info(d, vcpu_info), &__shared_info(d, shinfo, vcpu_info),
> +               sizeof(shared_info(d, vcpu_info)));
> +        memcpy(&shared_info(d, arch), &__shared_info(d, shinfo, arch),
> +               sizeof(shared_info(d, arch)));
> +
> +        memset(&shared_info(d, evtchn_pending), 0,
> +               sizeof(shared_info(d, evtchn_pending)));
> +        memset(&shared_info(d, evtchn_mask), 0xff,
> +               sizeof(shared_info(d, evtchn_mask)));
> +
> +#ifdef CONFIG_X86
> +        shared_info(d, arch.pfn_to_mfn_frame_list_list) = 0;
> +#endif
> +        for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
> +            shared_info(d, vcpu_info[i].evtchn_pending_sel) = 0;

Again I realize this has been this way in earlier versions, and
it was also me to ask for streamlining the code, but is this
actually correct? I ask in particular in light of this comment

/*
 * Compat field is never larger than native field, so cast to that as it
 * is the largest memory range it is safe for the caller to modify without
 * further discrimination between compat and native cases.
 */

in xen/shared.h, next to the __shared_info() #define. I can't
help thinking that you'll fill only half of some of the fields
in the 64-bit case.

> @@ -58,6 +59,16 @@ struct domain_context_start {
>      uint32_t xen_major, xen_minor;
>  };
>  
> +struct domain_context_shared_info {
> +    uint32_t flags;
> +
> +#define _DOMAIN_CONTEXT_32BIT_SHARED_INFO 0

Is this separate constant actually needed for anything?

Speaking of which - wouldn't all your additions to this header
better be proper name spacing citizens, by having xen_ / XEN_
prefixes?

Jan
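For context on the underscore-prefixed constant Jan queries: Xen public headers conventionally define a flag's bit position alongside its mask, and the position-only form earns its keep only when something uses bit-indexed primitives (e.g. test_bit()). A minimal illustration of the pairing (names hypothetical, modelled on the patch's flag):

```c
/* Bit-position / mask pairing as used in Xen public headers: the
 * underscore-prefixed name is the bit number, the plain name the mask.
 * If nothing ever needs the bit number itself, only the mask is
 * strictly required. */
#define _DEMO_32BIT_SHARED_INFO 0
#define DEMO_32BIT_SHARED_INFO  (1u << _DEMO_32BIT_SHARED_INFO)

static int demo_has_32bit_shinfo(unsigned int flags)
{
    return !!(flags & DEMO_32BIT_SHARED_INFO);
}
```

The name-spacing point stands independently: additions to a public header would carry xen_/XEN_ prefixes under this convention.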


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:26:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:26:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8748.23483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUX3Q-00087G-Dw; Mon, 19 Oct 2020 15:26:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8748.23483; Mon, 19 Oct 2020 15:26:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUX3Q-000879-Az; Mon, 19 Oct 2020 15:26:32 +0000
Received: by outflank-mailman (input) for mailman id 8748;
 Mon, 19 Oct 2020 15:26:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=85l8=D2=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kUX3O-00086w-GZ
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:26:30 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad5d1bca-7b4c-4780-9406-2c7a76caf4f5;
 Mon, 19 Oct 2020 15:26:29 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id j30so14680432lfp.4
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 08:26:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=QIs+2ZoR3i3pz+qqVrJep3R4w4Yivn+en8MAoLnkTA0=;
        b=gk7Ft9DBlookh7ly0b+NVSlnGjwjtOCnh4QX5qp7SRNMTgWA0DEx08n3OTaPzoPw+g
         HO2TS3iMeO6k+yx5cFX1SpvtlbI8QFgz1TmkVGsd1Q/eN1XsFAM7nj3I+jj2IGbuPEHl
         Q90SiRzFK8dMY1xCTc7FHDjXij+zLgKc1rsXBgTEkb9VtOxHBxHhDZK2LBQ9Ex6cH2by
         EmaUGcqTA1qKF36KpKMGq5yAKbmzm8jnbB7QQU6snjCjE3tXGJWSFX4jpBMvwjmqAiYa
         oy5avgoqbt5CqZ1dNpTBKMyEzS3sijy++DH/9yNcN/3jyIDfcWEU6A8RxIAfBSEKm8o1
         uPCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=QIs+2ZoR3i3pz+qqVrJep3R4w4Yivn+en8MAoLnkTA0=;
        b=j+303N7Lixa6KuxgSOvJVJQacTf54rMIw5puW3sQX/8hFopMxsv8/JpE4mhwYjv2T5
         EgtwObZGSWPtMKbb7n7c0IkXU+eFwr+Oz4IgSZ/HeL7gnsX4+LetuaO979I2rSfZuci3
         5gZvUDxGNfq5hETX+YCGV7gLxv5bCBC+NZN8vDJnfDWTVf0/bpkNfokO8ooW4tc+6Sl+
         5GsBPsVZClb6INThwvNAXKnTgj/5fyRIQDC+d10lcuAMqbHNN6pJeFkUm4zh7xHF0XZ5
         05udjyYVzZd26SZlyT9F9kVgDEvB7nPDF0elkO7jVWykbaj29ftSIf7m3Kzl0peh3cAY
         +dsw==
X-Gm-Message-State: AOAM533D2HWcuxmmc3isBQdmuJ+mGlbaeUhWASyq2vP7aVg21e3CGYDt
	33C5nOmAO8zrMY48K26QuLDibw8VS+0hMlIvA2A=
X-Google-Smtp-Source: ABdhPJx4eyRaCUZG26uK8S97QMfZRuh+9xPt2obMCSmXVopsTcfufsRxM6589WGoDdlq0bwkiXbMZerHBanKIBTlaXc=
X-Received: by 2002:ac2:52b7:: with SMTP id r23mr78904lfm.30.1603121188232;
 Mon, 19 Oct 2020 08:26:28 -0700 (PDT)
MIME-Version: 1.0
References: <20201014153150.83875-1-jandryuk@gmail.com> <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
 <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com> <CAKf6xpt_VhJ5r4scuAkWU3aGxgwiYNtHaBDpMoFJS+q837aFiA@mail.gmail.com>
 <d8e93366-0f99-37c7-e5f4-8efaf804d2e2@suse.com> <CAKf6xpv9qHJydjQ_TyZEKZAK14T4m2GLLqEwyMTraUxqvg+1Xw@mail.gmail.com>
 <d10143d9-0082-fa09-3ef0-2d13e5ee54ef@suse.com>
In-Reply-To: <d10143d9-0082-fa09-3ef0-2d13e5ee54ef@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 19 Oct 2020 11:26:16 -0400
Message-ID: <CAKf6xpvhTLVdBEXjz4MB_cmbfMGR0pjdAxHHoHz9hTsa+ObpOg@mail.gmail.com>
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Mon, Oct 19, 2020 at 3:38 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 16.10.2020 18:28, Jason Andryuk wrote:
> > Looks like we can pass XC_DOM_PV_CONTAINER/XC_DOM_HVM_CONTAINER down
> > into elf_xen_parse().  Then we would just validate phys_entry for HVM
> > and virt_entry for PV.  Does that sound reasonable?
>
> I think so, yes. Assuming of course that you'll convert the XC_DOM_*
> into a boolean, so that the hypervisor's use of libelf/ can also be
> suitably adjusted.

Are you okay with:
-int elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms);
+int elf_xen_parse_pvh(struct elf_binary *elf,
+                      struct elf_dom_parms *parms);
+int elf_xen_parse_pv(struct elf_binary *elf,
+                     struct elf_dom_parms *parms);
?

I prefer avoiding boolean arguments, since I find that helps readability.
The Xen dom0 builders can just call their variant, while the xenguest
elfloader and hvmloader would call the appropriate one based on the
container_type.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:36:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:36:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8752.23496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXDM-0000eT-Eb; Mon, 19 Oct 2020 15:36:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8752.23496; Mon, 19 Oct 2020 15:36:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXDM-0000eM-AW; Mon, 19 Oct 2020 15:36:48 +0000
Received: by outflank-mailman (input) for mailman id 8752;
 Mon, 19 Oct 2020 15:36:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUXDL-0000eH-Es
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:36:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f1ba7c1-4d4c-4747-83b9-596542e3db6b;
 Mon, 19 Oct 2020 15:36:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B6C47B1B6;
 Mon, 19 Oct 2020 15:36:45 +0000 (UTC)
X-Inumbo-ID: 6f1ba7c1-4d4c-4747-83b9-596542e3db6b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603121805;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LhI41s9uuQ9FvOUUfeBpwh8KwhupqRV/agwhY1nG4Ko=;
	b=RMEBy54pGb372TEx9pQqudlVTxPqJtgnxj3YAIzT6HwntnmmUb5rsMreHTOlqTVeiDMvHr
	RmnaiI4NP1WLUcCB/DLHT9/WjZQJDKp+6BDBgxr+kwcGWy2/jS7Dzvue5CaorRkY44EVZY
	tP6FvEtGbuXai9rFoftNGnLk9cAo6sA=
Subject: Re: [PATCH] libelf: Handle PVH kernels lacking ENTRY elfnote
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20201014153150.83875-1-jandryuk@gmail.com>
 <6d373cae-c7dc-e109-1df3-ccbbe4bdd9c8@suse.com>
 <CAKf6xpv5GNjw0pjOxEqdVj2+C6v+O5PDZG5yYkNfytDjUT_r5w@mail.gmail.com>
 <4229544b-e98d-6f3c-14aa-a884c403ba74@suse.com>
 <CAKf6xpt_VhJ5r4scuAkWU3aGxgwiYNtHaBDpMoFJS+q837aFiA@mail.gmail.com>
 <d8e93366-0f99-37c7-e5f4-8efaf804d2e2@suse.com>
 <CAKf6xpv9qHJydjQ_TyZEKZAK14T4m2GLLqEwyMTraUxqvg+1Xw@mail.gmail.com>
 <d10143d9-0082-fa09-3ef0-2d13e5ee54ef@suse.com>
 <CAKf6xpvhTLVdBEXjz4MB_cmbfMGR0pjdAxHHoHz9hTsa+ObpOg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ec5d41a2-18e8-15a3-85e0-d6a72deab5fc@suse.com>
Date: Mon, 19 Oct 2020 17:36:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <CAKf6xpvhTLVdBEXjz4MB_cmbfMGR0pjdAxHHoHz9hTsa+ObpOg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.10.2020 17:26, Jason Andryuk wrote:
> On Mon, Oct 19, 2020 at 3:38 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 16.10.2020 18:28, Jason Andryuk wrote:
>>> Looks like we can pass XC_DOM_PV_CONTAINER/XC_DOM_HVM_CONTAINER down
>>> into elf_xen_parse().  Then we would just validate phys_entry for HVM
>>> and virt_entry for PV.  Does that sound reasonable?
>>
>> I think so, yes. Assuming of course that you'll convert the XC_DOM_*
>> into a boolean, so that the hypervisor's use of libelf/ can also be
>> suitably adjusted.
> 
> Are you okay with:
> -int elf_xen_parse(struct elf_binary *elf,
> -                  struct elf_dom_parms *parms);
> +int elf_xen_parse_pvh(struct elf_binary *elf,
> +                      struct elf_dom_parms *parms);
> +int elf_xen_parse_pv(struct elf_binary *elf,
> +                     struct elf_dom_parms *parms);
> ?
> 
> I prefer avoiding boolean arguments, since I find that helps readability.

And I view things the other way around. If readability is of
concern, how about adding an unsigned int flags parameter
and #define-ing two suitable constants to be passed?

And of course it's not me alone who needs to be okay with
whatever route you/we go.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:41:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:41:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8757.23510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXIA-0001W9-2z; Mon, 19 Oct 2020 15:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8757.23510; Mon, 19 Oct 2020 15:41:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXI9-0001W2-Vx; Mon, 19 Oct 2020 15:41:45 +0000
Received: by outflank-mailman (input) for mailman id 8757;
 Mon, 19 Oct 2020 15:41:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUXI8-0001VO-Um
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:41:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2732ce1-9d05-4080-8d27-e097dfdbe319;
 Mon, 19 Oct 2020 15:41:36 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUXI0-0005Ko-9j; Mon, 19 Oct 2020 15:41:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUXHz-0005wm-Qu; Mon, 19 Oct 2020 15:41:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUXHz-0003sN-Q1; Mon, 19 Oct 2020 15:41:35 +0000
X-Inumbo-ID: e2732ce1-9d05-4080-8d27-e097dfdbe319
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XAsb1IGMvgsyiBVISRTCLhnXZDAB2i/nUI578jgH10g=; b=hqEIBrr/gw0uxJ0ThGNREBK+sp
	uc9DS+G5m/faNhp/s1+OhLP17oX68v6ZPIib6I1xIAwjbwq5mVOoycpH5WPRwutqsdXgCYlfrMkRk
	DSaTPWnopzqjfrRReOSXtzeUii+0dNCj/J4GPfzObzyzW1PC8fAMlsprdBwmhK4kR/3c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155979-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155979: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=ba2a9a9e6318bfd93a2306dec40137e198205b86
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 15:41:35 +0000

flight 155979 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155979/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                ba2a9a9e6318bfd93a2306dec40137e198205b86
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   60 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  103 attempts
Testing same since   155979  2020-10-19 11:47:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47700 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:46:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:46:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8760.23523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXN5-0001k3-R7; Mon, 19 Oct 2020 15:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8760.23523; Mon, 19 Oct 2020 15:46:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXN5-0001jw-Ne; Mon, 19 Oct 2020 15:46:51 +0000
Received: by outflank-mailman (input) for mailman id 8760;
 Mon, 19 Oct 2020 15:46:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cPh9=D2=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kUXN3-0001jr-8i
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:46:49 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13d1bc33-a47e-4c53-9fdc-5f9d01d1a12c;
 Mon, 19 Oct 2020 15:46:47 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id g12so232113wrp.10
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 08:46:47 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id s11sm138139wrm.56.2020.10.19.08.46.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 19 Oct 2020 08:46:45 -0700 (PDT)
X-Inumbo-ID: 13d1bc33-a47e-4c53-9fdc-5f9d01d1a12c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=HO1NWy71VB/XmKCrtzy6L4ESvVEPBzXrL3scn+Up8bc=;
        b=UC61EvUAH1ghGWvhgPIvwb0tJKvmN85XcCXlrZCX7WhPqQ5Kal9er+D9EGMj6Qy1Mw
         fLsBXywmZ7A82r1QjiPvCD/FrGiVsWYE0GJST415xH593KKFVZRAlHpTMAzkBzgOwrUv
         csDczr2hma4nlAuIXgtB1TgmwV8Q0Ey3gd2fI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=HO1NWy71VB/XmKCrtzy6L4ESvVEPBzXrL3scn+Up8bc=;
        b=Ae1Ejf2LNK0OenoLHH4UC/GFD7iPiSoqExqi/UuMr31kDwNXlAKaFSQe3CFSQsx7G4
         yjGDSciW8n7QS39T0ZFhaE0LkuTByNjxjeIuuexvspMrFwb14SUC4KctXdDjRRK7Ak1Z
         Abm5yLQlElPY4I4d04AzwZBqj0+cGImCTFpheGsbhnUhmxetAjw4a+EXjRVH0oZu4twV
         uUEow51JYkZsC5zpznwVbIzZEWnBN29Kv5LEO/0HVtIgc9AVZ2fmImZZB2v5cb4PYEmZ
         uJTsNAQnae8MNvQxU4npqVuDlv0qWpkcurY3Hd7BL5QgoaKoqrkNZ5JIWXhMkL+S80B5
         g3IQ==
X-Gm-Message-State: AOAM530VjFGkDSMwOG4kftS2ySEXwk2vZEhkWA9obvY2TvQdjLXiqZ0G
	2NUlKs5rsVLoDqiL+mNXi5pmTA==
X-Google-Smtp-Source: ABdhPJw3AhcfVn5Ol/2itQNrpLAPMvDH8DFfv/LdwaotJFWGF+wLGszDKhZ9OSoAyWYoQMAHgXYlBQ==
X-Received: by 2002:adf:ea4d:: with SMTP id j13mr121871wrn.345.1603122406591;
        Mon, 19 Oct 2020 08:46:46 -0700 (PDT)
Date: Mon, 19 Oct 2020 17:46:42 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Christian =?iso-8859-1?Q?K=F6nig?= <christian.koenig@amd.com>
Cc: Thomas Zimmermann <tzimmermann@suse.de>,
	maarten.lankhorst@linux.intel.com, mripard@kernel.org,
	airlied@linux.ie, daniel@ffwll.ch, sam@ravnborg.org,
	alexander.deucher@amd.com, kraxel@redhat.com,
	l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com, inki.dae@samsung.com,
	jy0922.shim@samsung.com, sw0312.kim@samsung.com,
	kyungmin.park@samsung.com, kgene@kernel.org, krzk@kernel.org,
	yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
	tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, etnaviv@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	linaro-mm-sig@lists.linaro.org, linux-rockchip@lists.infradead.org,
	dri-devel@lists.freedesktop.org, xen-devel@lists.xenproject.org,
	spice-devel@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org, linux-media@vger.kernel.org
Subject: Re: [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
Message-ID: <20201019154642.GF401619@phenom.ffwll.local>
References: <20201015123806.32416-1-tzimmermann@suse.de>
 <20201015123806.32416-6-tzimmermann@suse.de>
 <935d5771-5645-62a6-849c-31e286db1e30@amd.com>
 <87c7c342-88dc-9a36-31f7-dae6edd34626@suse.de>
 <9236f51c-c1fa-dadc-c7cc-d9d0c09251d1@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9236f51c-c1fa-dadc-c7cc-d9d0c09251d1@amd.com>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Mon, Oct 19, 2020 at 11:45:05AM +0200, Christian König wrote:
> Hi Thomas,
> 
> [SNIP]
> > > >   +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> > > > +{
> > > > + struct ttm_resource *mem = &bo->mem;
> > > > + int ret;
> > > > +
> > > > + ret = ttm_mem_io_reserve(bo->bdev, mem);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + if (mem->bus.is_iomem) {
> > > > + void __iomem *vaddr_iomem;
> > > > + unsigned long size = bo->num_pages << PAGE_SHIFT;
> > > Please use uint64_t here and make sure to cast bo->num_pages before
> > > shifting.
> > I thought the rule of thumb is to use u64 in source code. Yet TTM only
> > uses uint*_t types. Is there anything special about TTM?
> 
> My understanding is that you can use both, and my personal preference is to
> use the uint*_t types because they are part of a higher-level standard.

Yeah the only hard rule is that in uapi headers you need to use the __u64
and similar typedefs, to avoid cluttering the namespace for unrelated
stuff in userspace.

In the kernel, C99 types are perfectly fine, and I think slowly on the
rise.
-Daniel

> 
> > > We have a unit test allocating an 8GB BO and that should work on a
> > > 32bit machine as well :)
> > > 
> > > > +
> > > > + if (mem->bus.addr)
> > > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr));
> > After reading the patch again, I realized that this is the
> > 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two
> > options here: ignore this case in _vunmap(), or do an ioremap()
> > unconditionally. Which one is preferable?
> 
> ioremap would be very very bad, so we should just do nothing.
> 
> Thanks,
> Christian.
> 
> > 
> > Best regards
> > Thomas
> > 
> > > > + else if (mem->placement & TTM_PL_FLAG_WC)
> > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new
> > > mem->bus.caching enum as replacement.
> > > 
> > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> > > > + else
> > > > + vaddr_iomem = ioremap(mem->bus.offset, size);
> > > > +
> > > > + if (!vaddr_iomem)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> > > > +
> > > > + } else {
> > > > + struct ttm_operation_ctx ctx = {
> > > > + .interruptible = false,
> > > > + .no_wait_gpu = false
> > > > + };
> > > > + struct ttm_tt *ttm = bo->ttm;
> > > > + pgprot_t prot;
> > > > + void *vaddr;
> > > > +
> > > > + BUG_ON(!ttm);
> > > I think we can drop this, populate will just crash badly anyway.
> > > 
> > > > +
> > > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> > > > + if (ret)
> > > > + return ret;
> > > > +
> > > > + /*
> > > > + * We need to use vmap to get the desired page protection
> > > > + * or to make the buffer object look contiguous.
> > > > + */
> > > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL);
> > > The calling convention has changed on drm-misc-next as well, but should
> > > be trivial to adapt.
> > > 
> > > Regards,
> > > Christian.
> > > 
> > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> > > > + if (!vaddr)
> > > > + return -ENOMEM;
> > > > +
> > > > + dma_buf_map_set_vaddr(map, vaddr);
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vmap);
> > > > +
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map)
> > > > +{
> > > > + if (dma_buf_map_is_null(map))
> > > > + return;
> > > > +
> > > > + if (map->is_iomem)
> > > > + iounmap(map->vaddr_iomem);
> > > > + else
> > > > + vunmap(map->vaddr);
> > > > + dma_buf_map_clear(map);
> > > > +
> > > > + ttm_mem_io_free(bo->bdev, &bo->mem);
> > > > +}
> > > > +EXPORT_SYMBOL(ttm_bo_vunmap);
> > > > +
> > > >   static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
> > > >   bool dst_use_tt)
> > > >   {
> > > > diff --git a/include/drm/drm_gem_ttm_helper.h
> > > > b/include/drm/drm_gem_ttm_helper.h
> > > > index 118cef76f84f..7c6d874910b8 100644
> > > > --- a/include/drm/drm_gem_ttm_helper.h
> > > > +++ b/include/drm/drm_gem_ttm_helper.h
> > > > @@ -10,11 +10,17 @@
> > > >   #include <drm/ttm/ttm_bo_api.h>
> > > >   #include <drm/ttm/ttm_bo_driver.h>
> > > >   +struct dma_buf_map;
> > > > +
> > > >   #define drm_gem_ttm_of_gem(gem_obj) \
> > > >   container_of(gem_obj, struct ttm_buffer_object, base)
> > > >    void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int
> > > > indent,
> > > >   const struct drm_gem_object *gem);
> > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> > > > + struct dma_buf_map *map);
> > > >   int drm_gem_ttm_mmap(struct drm_gem_object *gem,
> > > >   struct vm_area_struct *vma);
> > > >   diff --git a/include/drm/ttm/ttm_bo_api.h
> > > > b/include/drm/ttm/ttm_bo_api.h
> > > > index 37102e45e496..2c59a785374c 100644
> > > > --- a/include/drm/ttm/ttm_bo_api.h
> > > > +++ b/include/drm/ttm/ttm_bo_api.h
> > > > @@ -48,6 +48,8 @@ struct ttm_bo_global;
> > > >    struct ttm_bo_device;
> > > >   +struct dma_buf_map;
> > > > +
> > > >   struct drm_mm_node;
> > > >    struct ttm_placement;
> > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo,
> > > > unsigned long start_page,
> > > >   */
> > > >   void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
> > > >   +/**
> > > > + * ttm_bo_vmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: pointer to a struct dma_buf_map representing the map.
> > > > + *
> > > > + * Sets up a kernel virtual mapping, using ioremap or vmap, of the
> > > > + * data in the buffer object. The parameter @map returns the virtual
> > > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> > > > + *
> > > > + * Returns
> > > > + * -ENOMEM: Out of memory.
> > > > + * -EINVAL: Invalid range.
> > > > + */
> > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> > > > +
> > > > +/**
> > > > + * ttm_bo_vunmap
> > > > + *
> > > > + * @bo: The buffer object.
> > > > + * @map: Object describing the map to unmap.
> > > > + *
> > > > + * Unmaps a kernel map set up by ttm_bo_vmap().
> > > > + */
> > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map
> > > > *map);
> > > > +
> > > >   /**
> > > >   * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
> > > >   *
> > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> > > > index fd1aba545fdf..2e8bbecb5091 100644
> > > > --- a/include/linux/dma-buf-map.h
> > > > +++ b/include/linux/dma-buf-map.h
> > > > @@ -45,6 +45,12 @@
> > > >   *
> > > >   * dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
> > > >   *
> > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> > > > + *
> > > > + * .. code-block:: c
> > > > + *
> > > > + * dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> > > > + *
> > > >   * Test if a mapping is valid with either dma_buf_map_is_set() or
> > > >   * dma_buf_map_is_null().
> > > >   *
> > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct
> > > > dma_buf_map *map, void *vaddr)
> > > >   map->is_iomem = false;
> > > >   }
> > > >   +/**
> > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to
> > > > an address in I/O memory
> > > > + * @map: The dma-buf mapping structure
> > > > + * @vaddr_iomem: An I/O-memory address
> > > > + *
> > > > + * Sets the address and the I/O-memory flag.
> > > > + */
> > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> > > > + void __iomem *vaddr_iomem)
> > > > +{
> > > > + map->vaddr_iomem = vaddr_iomem;
> > > > + map->is_iomem = true;
> > > > +}
> > > > +
> > > >   /**
> > > >   * dma_buf_map_is_equal - Compares two dma-buf mapping structures
> > > > for equality
> > > >   * @lhs: The dma-buf mapping structure
> > > _______________________________________________
> > > dri-devel mailing list
> > > dri-devel@lists.freedesktop.org
> > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:47:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8761.23534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXNR-0001oV-4d; Mon, 19 Oct 2020 15:47:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8761.23534; Mon, 19 Oct 2020 15:47:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXNR-0001oO-0q; Mon, 19 Oct 2020 15:47:13 +0000
Received: by outflank-mailman (input) for mailman id 8761;
 Mon, 19 Oct 2020 15:47:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nhcc=D2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUXNP-0001o5-UM
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:47:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c0c6364-26be-425b-9038-609058425dcc;
 Mon, 19 Oct 2020 15:47:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 548BFACD8;
 Mon, 19 Oct 2020 15:47:10 +0000 (UTC)
X-Inumbo-ID: 4c0c6364-26be-425b-9038-609058425dcc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603122430;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hONELVPcykQ8DjVweymjlKhpwT4/BSkuMu0H8zH6ULU=;
	b=S9HxywYjRadzId7QyATsalrNDbHzSadG8i5pTvmFlD7z4mAy3HF1N4HGvRCsltoMZOvb8U
	3DshpzLbM5RKDWC+WNNBpal9871SfXzLO58i0qollV1TJsBkVy5uGIl1GzYVLthWM6zkim
	tOjxLS/9gXNFhj5lp9q7cd7BOtVUcGQ=
Subject: Re: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
 <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
 <b3b581fc-b1c9-cdc2-add6-900a4305623a@suse.com>
 <6af1bbb6-d717-affa-6409-2b983e48ed30@citrix.com>
 <59f3e399-8676-bb44-ec85-500583f97b2f@suse.com>
 <23d02e3b-7dac-ceb8-ebdd-3b77f264d6b4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a5e30124-c1aa-e13f-cb09-8402b85209eb@suse.com>
Date: Mon, 19 Oct 2020 17:47:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <23d02e3b-7dac-ceb8-ebdd-3b77f264d6b4@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.10.2020 17:22, Andrew Cooper wrote:
> On 19/10/2020 16:02, Jan Beulich wrote:
>> On 19.10.2020 16:30, Andrew Cooper wrote:
>>> On 19/10/2020 15:21, Jan Beulich wrote:
>>>> On 19.10.2020 16:10, Andrew Cooper wrote:
>>>>> On 19/10/2020 14:40, Jan Beulich wrote:
>>>>>> Of the state saved by the insn and reloaded by the corresponding VMLOAD
>>>>>> - TR, syscall, and sysenter state are invariant while having Xen's state
>>>>>>   loaded,
>>>>>> - FS, GS, and LDTR are not used by Xen and get suitably set in PV
>>>>>>   context switch code.
>>>>> I think it would be helpful to state that Xen's state is suitably cached
>>>>> in _svm_cpu_up(), as this does now introduce an ordering dependency
>>>>> during boot.
>>>> I've added a sentence.
>>>>
>>>>> Is it possibly worth putting an assert checking the TR selector?  That
>>>>> ought to be good enough to catch stray init reordering problems.
>>>> How would checking just the TR selector help? If other pieces of TR
>>>> or syscall/sysenter were out of sync, we'd be hosed, too.
>>> They're far less likely to move relative to tr than everything relative
>>> to hvm_up().
>>>
>>>> I'm also not sure what exactly you mean to do for such an assertion:
>>>> Merely check the host VMCB field against TSS_SELECTOR, or do an
>>>> actual STR to be closer to what the VMSAVE actually did?
>>> ASSERT(str() == TSS_SELECTOR);
>> Oh, that's odd. How would this help with the VMCB?
> 
> It won't.
> 
> We're not checking the behaviour of the VMSAVE instruction.  We just
> want to sanity check that %tr is already configured.

TR not already being configured is out of the question in a function
involved in context switching, don't you think? I thought you were
worried about the VMCB not being set up correctly? Or are you, in
the end, asking for the suggested assertion to go into
_svm_cpu_up(), and I just didn't understand it that way?

> This version is far more simple than checking VMCB.trsel, which will
> require a map_domain_page().

In the general case yes, but in the most common case (PV also
enabled) we have a mapping already (host_vmcb_va).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:52:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:52:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8766.23547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXSI-0002i3-QN; Mon, 19 Oct 2020 15:52:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8766.23547; Mon, 19 Oct 2020 15:52:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXSI-0002hw-LM; Mon, 19 Oct 2020 15:52:14 +0000
Received: by outflank-mailman (input) for mailman id 8766;
 Mon, 19 Oct 2020 15:52:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xInf=D2=alstadheim.priv.no=hakon@srs-us1.protection.inumbo.net>)
 id 1kUXSH-0002hr-Bi
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:52:13 +0000
Received: from asav22.altibox.net (unknown [109.247.116.9])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6946994f-7421-43fc-95de-0f4a55e99e9a;
 Mon, 19 Oct 2020 15:52:09 +0000 (UTC)
Received: from postfix-relay.alstadheim.priv.no
 (148-252-96.12.3p.ntebredband.no [148.252.96.12])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 (Authenticated sender: hakon.alstadheim@ntebb.no)
 by asav22.altibox.net (Postfix) with ESMTPSA id 39D3D20053;
 Mon, 19 Oct 2020 17:52:07 +0200 (CEST)
Received: from smtps.alstadheim.priv.no (localhost [127.0.0.1])
 by postfix-relay.alstadheim.priv.no (Postfix) with ESMTP id 5E9A3625EC1A;
 Mon, 19 Oct 2020 17:52:06 +0200 (CEST)
Received: from [192.168.2.201] (unknown [192.168.2.201])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested) (Authenticated sender: hakon)
 by smtps.alstadheim.priv.no (Postfix) with ESMTPSA id 043FD2410673;
 Mon, 19 Oct 2020 17:52:06 +0200 (CEST)
X-Finnesikke-B-A-I-T: finnesikke@alstadheim.priv.no
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=alstadheim.priv.no;
	s=smtp; t=1603122726;
	bh=gCQkAYoXZAqzIbnZy6o19rInCC30/RPKQBAXmh1Gs2k=;
	h=Subject:From:To:Cc:References:Date:In-Reply-To:From;
	b=nQ2v03XQ/gS4QguIdw2gm5OBq5n1vLZVlC1ZegGW4r/thR7n1GzwfUWMikF4EjQEL
	 dBo938sjwNFHsaHSTp6T6cSUJNL4WLfm7KOYOF6QoKrICKB563vjPuX1AZSJYQlNWi
	 aMS4e6IyNOwXWsoFsgeEGTeTSilgN4tg8lDa/nU4=
Subject: Re: [Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI
 flr/slot/bus reset with 'reset' SysFS attribute
From: =?UTF-8?Q?H=c3=a5kon_Alstadheim?= <hakon@alstadheim.priv.no>
To: George Dunlap <George.Dunlap@citrix.com>, Wei Liu <wl@xen.org>
Cc: Rich Persaud <persaur@gmail.com>, =?UTF-8?B?UGFzaSBLw6Rya2vDpGluZW4=?=
 <pasik@iki.fi>, xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Jan Beulich <JBeulich@suse.com>, "bhelgaas@google.com"
 <bhelgaas@google.com>, Roger Pau Monne <roger.pau@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <646A4BEA-C544-4C62-A7A3-B736D3860912@gmail.com>
 <20200131153332.r4oe3sadhvoly7ho@debian>
 <A325DB30-0282-4512-96D4-06AE661ADB5A@citrix.com>
 <30109c0a-3aa3-8af7-dadf-66f508071963@alstadheim.priv.no>
Message-ID: <2d2693c9-f2a9-7914-7362-947a61c28acd@alstadheim.priv.no>
Date: Mon, 19 Oct 2020 17:52:05 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <30109c0a-3aa3-8af7-dadf-66f508071963@alstadheim.priv.no>
Content-Type: multipart/mixed;
 boundary="------------33D4BB3561895DBA69F01C76"
Content-Language: en-US
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=Du94Bl3+ c=1 sm=1 tr=0
	a=pSdy67GPd25lyaVjMobptA==:117 a=pSdy67GPd25lyaVjMobptA==:17
	a=afefHYAZSVUA:10 a=M51BFTxLslgA:10 a=r77TgQKjGQsHNAKrUKIA:9
	a=mLnsDVdbAAAA:8 a=i0ZhjynrcT7MiOowqgUA:9 a=QEXdDO2ut3YA:10
	a=KNCrymi1MkcA:10 a=n3xS-gtqsLMA:10 a=L6mzFnP-7RsA:10 a=LJOHJtgqUKgA:10
	a=1T5jtObTL9Je9aXSS0AA:9 a=sdNDni57TT4C-w57:21 a=IGFs5n7OhTyz5cud:21
	a=B2y7HmGcmWMA:10 a=cWRNjhkoAAAA:8 a=wbyiEnzrXmTRt8bfFNkA:9
	a=e_foPgPAjL4A:10 a=ErHS4BFoGQUSCPzqX2cA:9 a=uJ3BPXZVW3gA:10
	a=xnp1pY6zelCj5OLna2To:22 a=sVa6W5Aao32NNC1mekxh:22

This is a multi-part message in MIME format.
--------------33D4BB3561895DBA69F01C76
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit


On 19.10.2020 17:16, Håkon Alstadheim wrote:
> On 19.10.2020 13:00, George Dunlap wrote:
>>
>>> On Jan 31, 2020, at 3:33 PM, Wei Liu <wl@xen.org> wrote:
>>>
>>> On Fri, Jan 17, 2020 at 02:13:04PM -0500, Rich Persaud wrote:
>>>> On Aug 26, 2019, at 17:08, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>>>>> Hi,
>>>>>
>>>>>> On Mon, Oct 08, 2018 at 10:32:45AM -0400, Boris Ostrovsky wrote:
>>>>>>> On 10/3/18 11:51 AM, Pasi Kärkkäinen wrote:
>>>>>>> On Wed, Sep 19, 2018 at 11:05:26AM +0200, Roger Pau Monné wrote:
>>>>>>>> On Tue, Sep 18, 2018 at 02:09:53PM -0400, Boris Ostrovsky wrote:
>>>>>>>>> On 9/18/18 5:32 AM, George Dunlap wrote:
>>>>>>>>>>> On Sep 18, 2018, at 8:15 AM, Pasi Kärkkäinen <pasik@iki.fi> 
>>>>>>>>>>> wrote:
>>>>>>>>>>> Hi,
>>>>>>>>>>> On Mon, Sep 17, 2018 at 02:06:02PM -0400, Boris Ostrovsky 
>>>>>>>>>>> wrote:
>>>>>>>>>>>> What about the toolstack changes? Have they been accepted? 
>>>>>>>>>>>> I vaguely
>>>>>>>>>>>> recall there was a discussion about those changes but don't 
>>>>>>>>>>>> remember how
>>>>>>>>>>>> it ended.
>>>>>>>>>>> I don't think toolstack/libxl patch has been applied yet 
>>>>>>>>>>> either.
>>>>>>>>>>> "[PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS 
>>>>>>>>>>> attribute":
>>>>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html 
>>>>>>>>>>>
>>>>>>>>>>> "[PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' 
>>>>>>>>>>> SysFS attribute":
>>>>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html 
>>>>>>>>>>>
>>>>>>>>> Will this patch work for *BSD? Roger?
>>>>>>>> At least FreeBSD doesn't support pci-passthrough, so none of this 
>>>>>>>> works
>>>>>>>> ATM. There's no sysfs on BSD, so much of what's in libxl_pci.c 
>>>>>>>> will
>>>>>>>> have to be moved to libxl_linux.c when BSD support is added.
>>>>>>> Ok. That sounds like it's OK for the initial pci 'reset' 
>>>>>>> implementation in xl/libxl to be linux-only..
>>>>>> Are these two patches still needed? ISTR they were written 
>>>>>> originally
>>>>>> to deal with a guest trying to use a device that was previously 
>>>>>> assigned to
>>>>>> another guest. But pcistub_put_pci_dev() calls
>>>>>> __pci_reset_function_locked() which first tries FLR, and it looks 
>>>>>> like
>>>>>> it was added relatively recently.
>>>>> Replying to an old thread.. I only now realized I forgot to reply 
>>>>> to this message earlier.
>>>>>
>>>>> afaik these patches are still needed. Håkon (CC'd) wrote to me in 
>>>>> private that
>>>>> he gets a (dom0) Linux kernel crash if he doesn't have these 
>>>>> patches applied.
>>>>>
>>>>>
>>>>> Here are the links to both the linux kernel and libxl patches:
>>>>>
>>>>>
>>>>> "[Xen-devel] [PATCH V3 0/2] Xen/PCIback: PCI reset using 'reset' 
>>>>> SysFS attribute":
>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00659.html
>>>>>
>>>>> [Note that PATCH V3 1/2 "Drivers/PCI: Export pcie_has_flr() 
>>>>> interface" is already applied in upstream linux kernel, so it's 
>>>>> not needed anymore]
>>>>>
>>>>> "[Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI 
>>>>> flr/slot/bus reset with 'reset' SysFS attribute":
>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00661.html
>>>>>
>>>>>
>>>>> "[Xen-devel] [PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' 
>>>>> SysFS attribute":
>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html
>>>>>
>>>>> "[Xen-devel] [PATCH V1 1/1] Xen/libxl: Perform PCI reset using 
>>>>> 'reset' SysFS attribute":
>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html
>>>> [dropping Linux mailing lists]
>>>>
>>>> What is required to get the Xen patches merged?  Rebasing against Xen
>>>> master?  OpenXT has been carrying a similar patch for many years and
>>>> we would like to move to an upstream implementation.  Xen users of PCI
>>>> passthrough would benefit from more reliable device reset.
>>> Rebase and resend?
>>>
>>> Skimming that thread I think the major concern was backward
>>> compatibility. That seemed to have been addressed.
>>>
>>> Unfortunately I don't have the time to dig into Linux to see if the
>>> claim there is true or not.
>>>
>>> It would be helpful to write a concise paragraph to say why backward
>>> compatibility is not required.
>> Just going through my old “make sure something happens with this” 
>> mails.  Did anything ever happen with this?  Who has the ball here / 
>> who is this stuck on?
>
> We're waiting for "somebody" to testify that fixing this will not 
> adversely affect anyone. I'm not qualified, but my strong belief is 
> that since "reset" or "do_flr"  in the linux kernel is not currently 
> implemented/used in any official distribution, it should be OK.
>
> Patches still work in current staging-4.14 btw.
>
Just for the record, attached are the patches I am running on top of 
linux gentoo-sources-5.9.1  and xen-staging-4.14 respectively. (I am 
also running with the patch to mark populated reserved memory that 
contains ACPI tables as "ACPI NVS", not attached here ).


--------------33D4BB3561895DBA69F01C76
Content-Type: text/x-patch; charset=UTF-8;
 name="pci_brute_reset-home-hack.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="pci_brute_reset-home-hack.patch"

--- a/drivers/xen/xen-pciback/pci_stub.c	2020-03-30 21:08:39.406994339 +0200
+++ b/drivers/xen/xen-pciback/pci_stub.c	2020-03-30 20:56:18.225810279 +0200
@@ -245,6 +245,90 @@
 	return found_dev;
 }
 
+struct pcistub_args {
+	struct pci_dev *dev;
+	unsigned int dcount;
+};
+
+static int pcistub_search_dev(struct pci_dev *dev, void *data)
+{
+	struct pcistub_device *psdev;
+	struct pcistub_args *arg = data;
+	bool found_dev = false;
+	unsigned long flags;
+
+	spin_lock_irqsave(&pcistub_devices_lock, flags);
+
+	list_for_each_entry(psdev, &pcistub_devices, dev_list) {
+		if (psdev->dev == dev) {
+			found_dev = true;
+			arg->dcount++;
+			break;
+		}
+	}
+
+	spin_unlock_irqrestore(&pcistub_devices_lock, flags);
+
+	/* Device not owned by pcistub; someone else owns it.  Abort the walk. */
+	if (!found_dev)
+		arg->dev = dev;
+
+	return found_dev ? 0 : 1;
+}
+
+static int pcistub_reset_dev(struct pci_dev *dev)
+{
+	struct xen_pcibk_dev_data *dev_data;
+	bool slot = false, bus = false;
+	struct pcistub_args arg = {};
+
+	if (!dev)
+		return -EINVAL;
+
+	dev_dbg(&dev->dev, "[%s]\n", __func__);
+
+	if (!pci_probe_reset_slot(dev->slot))
+		slot = true;
+	else if ((!pci_probe_reset_bus(dev->bus)) &&
+		 (!pci_is_root_bus(dev->bus)))
+		bus = true;
+
+	if (!bus && !slot)
+		return -EOPNOTSUPP;
+
+	/*
+	 * Make sure all devices on this bus are owned by the
+	 * PCI backend so that we can safely reset the whole bus.
+	 */
+	pci_walk_bus(dev->bus, pcistub_search_dev, &arg);
+
+	/* All devices under the bus should be part of pcistub! */
+	if (arg.dev) {
+		dev_err(&dev->dev, "%s device on bus 0x%x is not owned by pcistub\n",
+			pci_name(arg.dev), dev->bus->number);
+
+		return -EBUSY;
+	}
+
+	dev_dbg(&dev->dev, "pcistub owns %d devices on bus 0x%x\n",
+		arg.dcount, dev->bus->number);
+
+	dev_data = pci_get_drvdata(dev);
+	if (!pci_load_saved_state(dev, dev_data->pci_saved_state))
+		pci_restore_state(dev);
+
+	/* This disables the device. */
+	xen_pcibk_reset_device(dev);
+
+	/* Clean up any emulated fields */
+	xen_pcibk_config_reset_dev(dev);
+
+	dev_dbg(&dev->dev, "resetting %s device using %s reset\n",
+		pci_name(dev), slot ? "slot" : "bus");
+
+	return pci_reset_bus(dev);
+}
+
 /*
  * Called when:
  *  - XenBus state has been reconfigure (pci unplug). See xen_pcibk_remove_device
@@ -1492,6 +1576,33 @@
 }
 static DRIVER_ATTR_RW(allow_interrupt_control);
 
+static ssize_t reset_store(struct device_driver *drv, const char *buf,
+			      size_t count)
+{
+	struct pcistub_device *psdev;
+	int domain, bus, slot, func;
+	int err;
+
+	err = str_to_slot(buf, &domain, &bus, &slot, &func);
+	if (err)
+		return err;
+
+	psdev = pcistub_device_find(domain, bus, slot, func);
+	if (psdev) {
+		err = pcistub_reset_dev(psdev->dev);
+		pcistub_device_put(psdev);
+	} else {
+		err = -ENODEV;
+	}
+
+	if (!err)
+		err = count;
+
+	return err;
+}
+
+static DRIVER_ATTR_WO(reset);
+
 static void pcistub_exit(void)
 {
 	driver_remove_file(&xen_pcibk_pci_driver.driver, &driver_attr_new_slot);
@@ -1507,6 +1618,8 @@
 			   &driver_attr_irq_handlers);
 	driver_remove_file(&xen_pcibk_pci_driver.driver,
 			   &driver_attr_irq_handler_state);
+	driver_remove_file(&xen_pcibk_pci_driver.driver,
+			   &driver_attr_reset);
 	pci_unregister_driver(&xen_pcibk_pci_driver);
 }
 
@@ -1603,6 +1716,11 @@
 	if (!err)
 		err = driver_create_file(&xen_pcibk_pci_driver.driver,
 					&driver_attr_irq_handler_state);
+
+	if (!err)
+		err = driver_create_file(&xen_pcibk_pci_driver.driver,
+					 &driver_attr_reset);
+
 	if (err)
 		pcistub_exit();
 

--------------33D4BB3561895DBA69F01C76
Content-Type: text/x-patch; charset=UTF-8;
 name="pci_brute_reset-home-hack-doc.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="pci_brute_reset-home-hack-doc.patch"

--- a/Documentation/ABI/testing/sysfs-driver-pciback	2020-03-30 00:25:41.000000000 +0200
+++ b/Documentation/ABI/testing/sysfs-driver-pciback	2020-03-30 21:01:58.909583316 +0200
@@ -12,6 +12,19 @@
                 will allow the guest to read and write to the configuration
                 register 0x0E.
 
+
+What:           /sys/bus/pci/drivers/pciback/reset
+Date:           Nov 2017
+KernelVersion:  none
+Contact:        xen-devel@lists.xenproject.org
+Description:
+                An option to perform a slot or bus reset when a PCI device
+		is owned by Xen PCI backend. Writing a string of DDDD:BB:DD.F
+		will cause the pciback driver to perform a slot or bus reset
+		if the device supports it. It also checks to make sure that
+		all of the devices under the bridge are owned by Xen PCI
+		backend.
+
 What:           /sys/bus/pci/drivers/pciback/allow_interrupt_control
 Date:           Jan 2020
 KernelVersion:  5.6

--------------33D4BB3561895DBA69F01C76
Content-Type: text/x-patch; charset=UTF-8;
 name="pci-reset-min-egen.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename="pci-reset-min-egen.patch"

--- a/tools/libxl/libxl_pci.c	2020-04-09 09:26:54.000000000 +0200
+++ b/tools/libxl/libxl_pci.c	2020-04-14 23:39:12.830752851 +0200
@@ -1452,7 +1452,7 @@
     char *reset;
     int fd, rc;
 
-    reset = GCSPRINTF("%s/do_flr", SYSFS_PCIBACK_DRIVER);
+    reset = GCSPRINTF("%s/reset", SYSFS_PCIBACK_DRIVER);
     fd = open(reset, O_WRONLY);
     if (fd >= 0) {
         char *buf = GCSPRINTF(PCI_BDF, domain, bus, dev, func);

--------------33D4BB3561895DBA69F01C76--
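As a footnote to the libxl hunk above: the change amounts to formatting the device BDF and writing it to the pciback "reset" node instead of "do_flr". A standalone C sketch of that write follows; the helper name and the path argument are illustrative only, not libxl's actual API.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Illustrative sketch only: format "DDDD:BB:DD.F" and write it to the
 * given sysfs attribute, as the libxl hunk does with
 * SYSFS_PCIBACK_DRIVER "/reset".  The function name is made up here. */
static int pciback_reset_write(const char *sysfs_reset, unsigned int domain,
                               unsigned int bus, unsigned int dev,
                               unsigned int func)
{
    char buf[32];
    int fd, rc = -1;

    snprintf(buf, sizeof(buf), "%04x:%02x:%02x.%01x", domain, bus, dev, func);

    fd = open(sysfs_reset, O_WRONLY);
    if (fd < 0)
        return -1;

    if (write(fd, buf, strlen(buf)) == (ssize_t)strlen(buf))
        rc = 0;  /* kernel side accepted the BDF and attempted the reset */

    close(fd);
    return rc;
}
```

On a real system the path would be /sys/bus/pci/drivers/pciback/reset, and the kernel side refuses the write (e.g. with -ENODEV) unless the device is bound to pciback.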


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 15:52:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 15:52:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8767.23559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXSn-0002oM-6o; Mon, 19 Oct 2020 15:52:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8767.23559; Mon, 19 Oct 2020 15:52:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXSn-0002oE-2z; Mon, 19 Oct 2020 15:52:45 +0000
Received: by outflank-mailman (input) for mailman id 8767;
 Mon, 19 Oct 2020 15:52:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L3Wa=D2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUXSl-0002o3-Oi
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 15:52:43 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06449247-8170-4378-a8cf-8a30014a13a7;
 Mon, 19 Oct 2020 15:52:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603122763;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=z7PqvPWXlAIeoXgjeWm7IAbeoFRA/7zzBYNHkA0kV6U=;
  b=TSknMctzPivm+mJ+nu3Q4369kQmwL0fP53F388QWBGcfrSeo67pG+fXe
   gbfAg7WqovWdBaXzYR8uTklhpgzYmlYUBFoOXBXbsXf7JZKQ1ySGLVS45
   nO6rZypLNuAuVm6CRnZOnkUb8fu/vUrxcTKWx21AQAlP3OeZLOGtnlh8J
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 6NVJcIB61YKqhRjmvbljT1RA1LC67gi89xaDcbMmJ+aWEYpjdv0N3iSsx7YiE6ZsFvmS490tDS
 bRY1PX2iw/oe2EH+zcl5ZLo52+h8FxqcQ0o0pt3ofHazw8Iv+opCxXhFmE627RQZK49eGaX0og
 5GPv1lZ0Acn5FvnzpaDgTpnN6gwU5LjIXrn9cezZ52x4KTucKlbrqCFi5f+bYKoCCNHpGgcnGh
 /ozKuP+IGtVB396KlXDZMB5r0TcVjvx8kaxFxg3wZn+g0kNUV0vAnoCFkxF+KKuXHFYBthOQtt
 fOo=
X-SBRS: None
X-MesageID: 29317521
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,394,1596513600"; 
   d="scan'208";a="29317521"
Subject: Re: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
 <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
 <b3b581fc-b1c9-cdc2-add6-900a4305623a@suse.com>
 <6af1bbb6-d717-affa-6409-2b983e48ed30@citrix.com>
 <59f3e399-8676-bb44-ec85-500583f97b2f@suse.com>
 <23d02e3b-7dac-ceb8-ebdd-3b77f264d6b4@citrix.com>
 <a5e30124-c1aa-e13f-cb09-8402b85209eb@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e8ee8c84-f949-c882-07a6-58079987a308@citrix.com>
Date: Mon, 19 Oct 2020 16:52:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a5e30124-c1aa-e13f-cb09-8402b85209eb@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 19/10/2020 16:47, Jan Beulich wrote:
> On 19.10.2020 17:22, Andrew Cooper wrote:
>> On 19/10/2020 16:02, Jan Beulich wrote:
>>> On 19.10.2020 16:30, Andrew Cooper wrote:
>>>> On 19/10/2020 15:21, Jan Beulich wrote:
>>>>> On 19.10.2020 16:10, Andrew Cooper wrote:
>>>>>> On 19/10/2020 14:40, Jan Beulich wrote:
>>>>>>> Of the state saved by the insn and reloaded by the corresponding VMLOAD
>>>>>>> - TR, syscall, and sysenter state are invariant while having Xen's state
>>>>>>>   loaded,
>>>>>>> - FS, GS, and LDTR are not used by Xen and get suitably set in PV
>>>>>>>   context switch code.
>>>>>> I think it would be helpful to state that Xen's state is suitably cached
>>>>>> in _svm_cpu_up(), as this does now introduce an ordering dependency
>>>>>> during boot.
>>>>> I've added a sentence.
>>>>>
>>>>>> Is it possibly worth putting an assert checking the TR selector?  That
>>>>>> ought to be good enough to catch stray init reordering problems.
>>>>> How would checking just the TR selector help? If other pieces of TR
>>>>> or syscall/sysenter were out of sync, we'd be hosed, too.
>>>> They're far less likely to move relative to %tr than everything relative
>>>> to hvm_up().
>>>>
>>>>> I'm also not sure what exactly you mean to do for such an assertion:
>>>>> Merely check the host VMCB field against TSS_SELECTOR, or do an
>>>>> actual STR to be closer to what the VMSAVE actually did?
>>>> ASSERT(str() == TSS_SELECTOR);
>>> Oh, that's odd. How would this help with the VMCB?
>> It won't.
>>
>> We're not checking the behaviour of the VMSAVE instruction.  We just
>> want to sanity check that %tr is already configured.
> TR not already being configured is out of the question in a function
> involved in context switching, don't you think? I thought you were
> worried about the VMCB not being set up correctly? Or are you in
> the end asking for the suggested assertion to go into
> _svm_cpu_up(), and I just didn't understand it that way?

I meant in _svm_cpu_up().  It is only at __init time that the new
implicit dependency is created.

>
>> This version is far simpler than checking VMCB.trsel, which will
>> require a map_domain_page().
> In the general case yes, but in the most common case (PV also
> enabled) we have a mapping already (host_vmcb_va).

Still rather more invasive than I was hoping for, for a quick sanity
check that ought never to fire.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 16:13:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 16:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8773.23574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXmw-0005BC-Vo; Mon, 19 Oct 2020 16:13:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8773.23574; Mon, 19 Oct 2020 16:13:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUXmw-0005B5-Si; Mon, 19 Oct 2020 16:13:34 +0000
Received: by outflank-mailman (input) for mailman id 8773;
 Mon, 19 Oct 2020 16:13:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L3Wa=D2=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUXmv-0005B0-3N
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 16:13:33 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0547481b-1c9d-4318-8fd8-2b9ec8910443;
 Mon, 19 Oct 2020 16:13:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603124011;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=XhaqLBrbFS3MUcwH6e0X4EW+qzzLQQducc+2zEuq/Aw=;
  b=WgU/uuwtmOBF6pcoGypjO4VfaJrwWPZG7uevdqIy686FT/93Yjpp7jog
   L3E6IuxWjGqZlQkTPSSSD2pzbG004YtYnKhqwOo9D8tkmr9GEfhNyWyXr
   jVjqtP8NIBPhlW9QIiw4RX11qM2InE/ZLPwyT53Fli+1N0tayhcskhu/I
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: kTVhxl+dGOPb+HXnJ4yqb4Ch19FveEYE2poqENH3DEZCnk09vUVHfOChSHEwq6pQrs1dgHOIty
 J1k46BZkHB87eVJmc+Yl9sC6/gh5dTmDtvSxtqPhJ97I2fE1IEBe5kmD8xj6QSVuolRl6UVsMh
 gzf/uL8cw6o5KKF+3fP+ftRwIYDrDcs+CBPsbqNqeLLaIDN7GMgdO3SBHvjmRTwNsWFXbBflWd
 ySqYC9aaQ4S2UVUQ46irB4SOkqWwy61JJewWJzrphsHCnNEgXIS+ldUX7MMrvH8IZfo02dIyMk
 JIs=
X-SBRS: None
X-MesageID: 29319364
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,394,1596513600"; 
   d="scan'208";a="29319364"
Subject: Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian
	<kevin.tian@intel.com>
References: <20201009150948.31063-1-andrew.cooper3@citrix.com>
 <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
 <01bb2f27-4e0b-3637-e456-09eb7b9b233e@citrix.com>
 <1786f728-15c2-3877-c01a-035b11bd8504@suse.com>
 <82e64d10-50be-68ab-127b-99d205a0a768@citrix.com>
 <6430fef8-23f1-f4ef-8741-5e089eaa0df9@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8b618252-4535-a8d9-efb9-0c1fba176ff4@citrix.com>
Date: Mon, 19 Oct 2020 17:12:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <6430fef8-23f1-f4ef-8741-5e089eaa0df9@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 19/10/2020 10:09, Jan Beulich wrote:
> On 16.10.2020 17:38, Andrew Cooper wrote:
>> On 15/10/2020 09:01, Jan Beulich wrote:
>>> On 14.10.2020 15:57, Andrew Cooper wrote:
>>>> On 13/10/2020 16:58, Jan Beulich wrote:
>>>>> On 09.10.2020 17:09, Andrew Cooper wrote:
>>>>>> At the time of XSA-170, the x86 instruction emulator really was broken, and
>>>>>> would allow arbitrary non-canonical values to be loaded into %rip.  This was
>>>>>> fixed after the embargo by c/s 81d3a0b26c1 "x86emul: limit-check branch
>>>>>> targets".
>>>>>>
>>>>>> However, in a demonstration that off-by-one errors really are one of the
>>>>>> hardest programming issues we face, everyone involved with XSA-170, myself
>>>>>> included, mistook the statement in the SDM which says:
>>>>>>
>>>>>>   If the processor supports N < 64 linear-address bits, bits 63:N must be identical
>>>>>>
>>>>>> to mean "must be canonical".  A real canonical check is bits 63:N-1.
>>>>>>
>>>>>> VMEntries really do tolerate a not-quite-canonical %rip, specifically to cater
>>>>>> to the boundary condition at 0x0000800000000000.
>>>>>>
>>>>>> Now that the emulator has been fixed, revert the XSA-170 change to fix
>>>>>> architectural behaviour at the boundary case.  The XTF test case for XSA-170
>>>>>> exercises this corner case, and still passes.
>>>>>>
>>>>>> Fixes: ffbbfda377 ("x86/VMX: sanitize rIP before re-entering guest")
>>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> But why revert the change rather than fix ...
>>>>>
>>>>>> @@ -4280,38 +4280,6 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
>>>>>>  out:
>>>>>>      if ( nestedhvm_vcpu_in_guestmode(v) )
>>>>>>          nvmx_idtv_handling();
>>>>>> -
>>>>>> -    /*
>>>>>> -     * VM entry will fail (causing the guest to get crashed) if rIP (and
>>>>>> -     * rFLAGS, but we don't have an issue there) doesn't meet certain
>>>>>> -     * criteria. As we must not allow less than fully privileged mode to have
>>>>>> -     * such an effect on the domain, we correct rIP in that case (accepting
>>>>>> -     * this not being architecturally correct behavior, as the injected #GP
>>>>>> -     * fault will then not see the correct [invalid] return address).
>>>>>> -     * And since we know the guest will crash, we crash it right away if it
>>>>>> -     * already is in most privileged mode.
>>>>>> -     */
>>>>>> -    mode = vmx_guest_x86_mode(v);
>>>>>> -    if ( mode == 8 ? !is_canonical_address(regs->rip)
>>>>> ... the wrong use of is_canonical_address() here? By reverting
>>>>> you open up avenues for XSAs in case we get things wrong elsewhere,
>>>>> including ...
>>>>>
>>>>>> -                   : regs->rip != regs->eip )
>>>>> ... for 32-bit guests.
>>>> Because the only appropriate alternative would be ASSERT_UNREACHABLE()
>>>> and domain crash.
>>>>
>>>> This logic corrupts guest state.
>>>>
>>>> Running with corrupt state is every bit an XSA as hitting a VMEntry
>>>> failure if it can be triggered by userspace, but the latter is safer and
>>>> much more obvious.
>>> I disagree. For CPL > 0 we don't "corrupt" guest state any more
>>> than reporting a #GP fault when one is going to be reported
>>> anyway (as long as the VM entry doesn't fail, and hence the
>>> guest won't get crashed). IOW this raising of #GP actually is a
>>> precautionary measure to _avoid_ XSAs.
>> It does not remove any XSAs.  It merely hides them.
> How so? If we convert the ability of guest user mode to crash
> the guest into delivery of #GP(0), how is there a hidden XSA then?

Because userspace being able to trigger this fixup is still an XSA.

>> There are legal states where RIP is 0x0000800000000000 and #GP is the
>> wrong thing to do.  Any async VMExit (Processor Trace Prefetch in
>> particular), or with debug traps pending.
> You realize we're in agreement about this pseudo-canonical check
> needing fixing?

Anything other than deleting this clause does not fix the bugs above.

>>>> It was the appropriate security fix (give or take the functional bug in
>>>> it) at the time, given the complexity of retrofitting zero length
>>>> instruction fetches to the emulator.
>>>>
>>>> However, it is one of a very long list of guest-state-induced VMEntry
>>>> failures, with non-trivial logic which we assert will pass, on a
>>>> fastpath, where hardware also performs the same checks and we already
>>>> have a runtime safe way of dealing with errors.  (Hence not actually
>>>> using ASSERT_UNREACHABLE() here.)
>>> "Runtime safe" as far as Xen is concerned, I take it. This isn't safe
>>> for the guest at all, as vmx_failed_vmentry() results in an
>>> unconditional domain_crash().
>> Any VMEntry failure is a bug in Xen.  If userspace can trigger it, it is
>> an XSA, *irrespective* of whether we crash the domain then and there, or
>> whether we let it try and limp on with corrupted state.
> Allowing the guest to continue with corrupted state is not a
> useful thing to do, I agree. However, what falls under
> "corrupted" seems to be different for you and me. I'd not call
> delivery of #GP "corruption" in any way.

I can only repeat my previous statement:

> There are legal states where RIP is 0x0000800000000000 and #GP is the
> wrong thing to do.

Blindly raising #GP is not always the right thing to do.

>  The primary goal ought
> to be that we don't corrupt the guest kernel view of the world.
> It may then have the opportunity to kill the offending user
> mode process.

By the time we have hit this case, all bets are off, because Xen *is*
malfunctioning.  We have no idea if kernel context is still intact.  You
don't even know that current user context is the correct offending
context to clobber, and you might be creating a user=>user DoS vulnerability.

We definitely have an XSA to find and fix, and we can either make it
very obvious and likely to be reported, or hidden and liable to go
unnoticed for a long period of time.

Every rational argument is on the side of killing the domain in an
obvious way.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 17:27:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 17:27:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8789.23613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUYwG-0002pc-BY; Mon, 19 Oct 2020 17:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8789.23613; Mon, 19 Oct 2020 17:27:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUYwG-0002pV-83; Mon, 19 Oct 2020 17:27:16 +0000
Received: by outflank-mailman (input) for mailman id 8789;
 Mon, 19 Oct 2020 17:27:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUYwE-0002op-P3
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 17:27:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ec2c5f9b-f9ec-4f54-8d98-6bbd2c6c9d2d;
 Mon, 19 Oct 2020 17:27:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUYw8-00083h-16; Mon, 19 Oct 2020 17:27:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUYw7-00053z-Pn; Mon, 19 Oct 2020 17:27:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUYw7-0001d5-PJ; Mon, 19 Oct 2020 17:27:07 +0000
X-Inumbo-ID: ec2c5f9b-f9ec-4f54-8d98-6bbd2c6c9d2d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=igO872ibP5RL7m8HgLxKbH4dV3O/7rLzrSkI9SXLwmQ=; b=ow3xsprCmUvSlYgvnkKL5bXkaA
	hjTPwALQ4SJMNuhInGlZme0xh0Y0R7LgFTopHM2aVfbtzrRCMwN2N7agomm2zrvDp/n8Bv6NQQMZQ
	zmB4rQSKV2q+oUIoiKs1OF+d05OkqN6P+Ob9bi+QcZHh7mCb5AhfqJJGOks+Qf5GKvDQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155976-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155976: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=92e9c44f205a876556abe1a1addea5c40e4f3ccf
X-Osstest-Versions-That:
    ovmf=709b163940c55604b983400eb49dad144a2aa091
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 17:27:07 +0000

flight 155976 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155976/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 92e9c44f205a876556abe1a1addea5c40e4f3ccf
baseline version:
 ovmf                 709b163940c55604b983400eb49dad144a2aa091

Last test of basis   155969  2020-10-18 20:39:43 Z    0 days
Testing same since   155976  2020-10-19 08:40:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   709b163940..92e9c44f20  92e9c44f205a876556abe1a1addea5c40e4f3ccf -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 17:37:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 17:37:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8794.23631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUZ6M-0003mv-9O; Mon, 19 Oct 2020 17:37:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8794.23631; Mon, 19 Oct 2020 17:37:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUZ6M-0003mo-6F; Mon, 19 Oct 2020 17:37:42 +0000
Received: by outflank-mailman (input) for mailman id 8794;
 Mon, 19 Oct 2020 17:37:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JRC9=D2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kUZ6K-0003mj-Pw
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 17:37:40 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66bd54fb-aef6-4dd0-ab0d-58ed2ba280d7;
 Mon, 19 Oct 2020 17:37:40 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id EB8B12224D;
 Mon, 19 Oct 2020 17:37:37 +0000 (UTC)
X-Inumbo-ID: 66bd54fb-aef6-4dd0-ab0d-58ed2ba280d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603129058;
	bh=3dO9Yv7AAoBgYo09ik4pB/TLqrtmdeRFwgz65VwvxAs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ecsEtqeMgoPSv46Hw/EbT3fZh9AsCwHoHRVSn+O2Ltlfv1+a/01a8GEEe3PtK1OSy
	 gfm4RCxGLWVG7/7NThxzAHv0SV5K4HUSXfTvy0a8utiZY10D2mhQBsTRJNcsHExH8r
	 1+Wn4QNv1SeREs3wVqwRWbftTsBwB59nUkIvvEuU=
Date: Mon, 19 Oct 2020 10:37:37 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Print message if reset did not work
In-Reply-To: <74a7359983a9d25ca62a6edd41805ab92918e2a1.1602856636.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2010191036230.12247@sstabellini-ThinkPad-T480s>
References: <74a7359983a9d25ca62a6edd41805ab92918e2a1.1602856636.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 16 Oct 2020, Bertrand Marquis wrote:
> If for some reason the hardware reset is not working, print a message to
> the user every 5 seconds to warn him that the system did not reset
> properly and Xen is still looping.
> 
> The message is printed infinitely so that someone connecting to a serial
> console with no history would see the message coming after 5 seconds.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>  xen/arch/arm/shutdown.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
> index b32f07ec0e..600088ec48 100644
> --- a/xen/arch/arm/shutdown.c
> +++ b/xen/arch/arm/shutdown.c
> @@ -36,6 +36,7 @@ void machine_halt(void)
>  void machine_restart(unsigned int delay_millisecs)
>  {
>      int timeout = 10;
> +    unsigned long count = 0;
>  
>      watchdog_disable();
>      console_start_sync();
> @@ -59,6 +60,9 @@ void machine_restart(unsigned int delay_millisecs)
>      {
>          platform_reset();
>          mdelay(100);
> +        if ( (count % 50) == 0 )
> +            printk(XENLOG_ERR "Xen: Platform reset did not work properly!!\n");
> +        count++;

I'd think that one "!" is enough :-) but anyway

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 18:08:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 18:08:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8802.23655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUZZi-0006W9-SO; Mon, 19 Oct 2020 18:08:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8802.23655; Mon, 19 Oct 2020 18:08:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUZZi-0006W2-OY; Mon, 19 Oct 2020 18:08:02 +0000
Received: by outflank-mailman (input) for mailman id 8802;
 Mon, 19 Oct 2020 18:08:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JRC9=D2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kUZZh-0006Vx-BN
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 18:08:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb2e32d7-0909-4613-b0fb-644524ccc002;
 Mon, 19 Oct 2020 18:08:00 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 34FB62225F;
 Mon, 19 Oct 2020 18:07:59 +0000 (UTC)
X-Inumbo-ID: eb2e32d7-0909-4613-b0fb-644524ccc002
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603130879;
	bh=szW4QMEKptLRzsjukhcSstsKndsQNBu+FwXo2aGT3yw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bcllUHHWhF30+zcpO3aScHmBmlfHyB90la0fdxkI1lRLJjWsPig/FTkXaC3jxvvQb
	 QufqyJwSmKDJfkylWbhppgE5BrqqhPaACjawDCZuf79v4wJk+5IXSL11bBWzq+Rpp2
	 pBUPunZZE8Ozhw5GOpXGoEuyAsijF4tK6O30FLJM=
Date: Mon, 19 Oct 2020 11:07:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Artem Mygaiev <Artem_Mygaiev@epam.com>
cc: Julien Grall <julien@xen.org>, 
    Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>, 
    "jbeulich@suse.com" <jbeulich@suse.com>, 
    "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>, 
    "vicooodin@gmail.com" <vicooodin@gmail.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    "committers@xenproject.org" <committers@xenproject.org>, 
    "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: RE: Xen Coding style and clang-format
In-Reply-To: <AM6PR03MB3687A99424FA9FD062F5FE4BF4030@AM6PR03MB3687.eurprd03.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2010191101250.12247@sstabellini-ThinkPad-T480s>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com> <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com> <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com> <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com> <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com> <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com> <b4d7e9a7-6c25-1f7f-86ce-867083beb81a@suse.com> <4d4f351b152df2c50e18676ccd6ab6b4dc667801.camel@epam.com> <5bd7cc00-c4c9-0737-897d-e76f22e2fd5b@xen.org>
 <AM6PR03MB3687A99424FA9FD062F5FE4BF4030@AM6PR03MB3687.eurprd03.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-844236279-1603130502=:12247"
Content-ID: <alpine.DEB.2.21.2010191102390.12247@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-844236279-1603130502=:12247
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2010191102391.12247@sstabellini-ThinkPad-T480s>

On Fri, 16 Oct 2020, Artem Mygaiev wrote:
> -----Original Message-----
> From: Julien Grall <julien@xen.org> 
> Sent: Friday, 16 October 2020, 13:24
> To: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>; jbeulich@suse.com; George.Dunlap@citrix.com
> Cc: Artem Mygaiev <Artem_Mygaiev@epam.com>; vicooodin@gmail.com; xen-devel@lists.xenproject.org; committers@xenproject.org; viktor.mitin.19@gmail.com; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> Subject: Re: Xen Coding style and clang-format
> 
> > Hi,
> >
> > On 16/10/2020 10:42, Anastasiia Lukianenko wrote:
> > > Thanks for your advice, which helped me improve the checker. I
> > > understand that there are still some disagreements about the
> > > formatting, but as I said before, the checker cannot be very flexible
> > > and take into account all the author's ideas.
> >
> > I am not sure what you refer by "author's ideas" here. The checker 
> > should follow a coding style (Xen or a modified version):
> >     - Anything not following the coding style should be considered as 
> > invalid.
> >     - Anything not written in the coding style should be left 
> > untouched/uncommented by the checker.
> >
> 
> Agree
> 
> > > I suggest using the
> > > checker not as a mandatory check, but as an indication to the author of
> > > possible formatting errors that he can correct or ignore.
> >
> > I can understand that short term we would want to make it optional so 
> > either the coding style or the checker can be tuned. But I don't think 
> > this is an ideal situation to be in long term.
> >
> > The goal of the checker is to automatically verify the coding style and 
> > get it consistent across Xen. If we make it optional or it is 
> > "unreliable", then we lose the two benefits and possibly increase the 
> > contributor frustration as the checker would say A but we need B.
> >
> > Therefore, we need to make sure the checker and the coding style match. 
> > I don't have any opinions on the approach to achieve that.
> 
> Of the list of remaining issues from Anastasiia, it looks like only items 5
> and 6 conform to the official Xen coding style. As for the remaining ones,
> I would suggest disabling those that are controversial (items 1,
> 2, 4, 8, 9, 10). If we want to have further discussion on refining the
> coding style, we can use these as a starting point. If we are open to
> extending the style now, I would suggest adding the rules that seem
> meaningful (items 3, 7) and keeping them in the checker.

Good approach. Yes, I would like to keep 3, 7 in the checker.

I would also keep 8 and add a small note to the coding style to say that
comments should be aligned where possible.
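
As a side note on tooling: a clang-format based checker is driven by a style
file. The fragment below is purely illustrative, an unofficial approximation
of the documented Xen style, and the controversial options are exactly the
kind that would need the tuning discussed above:

```yaml
# Illustrative approximation of Xen's CODING_STYLE -- not an official
# project file. Note SpacesInParentheses also affects function calls,
# one example of where clang-format and the Xen style diverge.
BasedOnStyle: LLVM
IndentWidth: 4
UseTab: Never
ColumnLimit: 80
BreakBeforeBraces: Allman
SpacesInParentheses: true     # Xen writes "if ( condition )"
SpaceInEmptyParentheses: false
AllowShortIfStatementsOnASingleLine: false
IndentCaseLabels: false
```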
--8323329-844236279-1603130502=:12247--


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 19:42:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 19:42:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8816.23682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUb39-0006QF-Pe; Mon, 19 Oct 2020 19:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8816.23682; Mon, 19 Oct 2020 19:42:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUb39-0006Q8-Me; Mon, 19 Oct 2020 19:42:31 +0000
Received: by outflank-mailman (input) for mailman id 8816;
 Mon, 19 Oct 2020 19:42:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4z4C=D2=google.com=ndesaulniers@srs-us1.protection.inumbo.net>)
 id 1kUb37-0006Q0-KN
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 19:42:29 +0000
Received: from mail-pj1-x1041.google.com (unknown [2607:f8b0:4864:20::1041])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0e009de-e8e8-4da7-8710-9dec48a20810;
 Mon, 19 Oct 2020 19:42:28 +0000 (UTC)
Received: by mail-pj1-x1041.google.com with SMTP id j8so366150pjy.5
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 12:42:27 -0700 (PDT)
X-Inumbo-ID: b0e009de-e8e8-4da7-8710-9dec48a20810
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=no6WOfZDuAXhTDfVia9Vunkz4L7BthY0F8m0jo4//vs=;
        b=vqIoQqfPtCUD7ceHGEmbvFqqUZd/9dE0IpLQQ4p4nyONBXQMSFxBG5OzgCaJfT8eu8
         Ez0DMwUntzcU9c2WJzs/WqMvxG3bzlj0mstlcz+E4fW0gt02sZNjrL1HdU6SHVwmCE5R
         WFeHYzJRJuPZspqj8YJ2wlUmUN4Mc8MNrI6kLekCJ8yejCepkvkgEUeb7TbpDze1NnEh
         TP2SjhqdakZDUedR00qYjd62k5W2m7FgfHIpcuS/rhjqMGxVL3Apppn+UDRO/ftdIfoh
         duvoKavmXDacw9mysFV+xdp1Tg4QxCPiDxNSeCeZyO+34Y2S1kYSdX61NJzSKhmGsX5T
         HLpA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=no6WOfZDuAXhTDfVia9Vunkz4L7BthY0F8m0jo4//vs=;
        b=WEj/Rs9xF8AojPbS515ZN3v2brfo7wpASIYogzECP11Iwbf3NvNIHwWYBHs40NKGL+
         CeVssUHTlSvapvhxn5vQfrV6AmBA5JcHBOGCo5TYWNI9NxBp2zJmPmhxzpaVgql2M3yo
         MNG2CZ/fGzF2uFkHshJFez9/HFy0ow3/e0F/VRDzHM7mQwPzQh6hJEwV2uZVuReBI3OC
         31LPVhQO9GSz4norgyRTt4cZ0Q3afwcjZ10hr2qji1Y1JZ1gBQ7wnHZXG8aNErCd1izG
         A0PHli5XKRYCrthVE6RMZG3D2VO2MDRG/f0Im4s+fQUVu6Yd1g67+3yTXpq5QAM4cz3X
         Iuaw==
X-Gm-Message-State: AOAM530pShFIXZ4L4coV2qlGzxPkIqRNOWERD2fUKGqTUFuH30EmacdO
	DbX0zqOHmO/N4Np8i3dn+9NyLKbgGTYKLYfD8/zaMA==
X-Google-Smtp-Source: ABdhPJxy4K+2uRaBuhFTeTSlHPetqrP1uAAP7dKvm6UBZz10SCa23PJUxb54E5JJmlle9J/y892Qp+TTrKvO4GeNXIk=
X-Received: by 2002:a17:90a:ee87:: with SMTP id i7mr921476pjz.25.1603136546933;
 Mon, 19 Oct 2020 12:42:26 -0700 (PDT)
MIME-Version: 1.0
References: <20201017160928.12698-1-trix@redhat.com> <20201018054332.GB593954@kroah.com>
In-Reply-To: <20201018054332.GB593954@kroah.com>
From: Nick Desaulniers <ndesaulniers@google.com>
Date: Mon, 19 Oct 2020 12:42:15 -0700
Message-ID: <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
Subject: Re: [RFC] treewide: cleanup unreachable breaks
To: Tom Rix <trix@redhat.com>
Cc: LKML <linux-kernel@vger.kernel.org>, linux-edac@vger.kernel.org, 
	linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org, 
	xen-devel@lists.xenproject.org, linux-block@vger.kernel.org, 
	openipmi-developer@lists.sourceforge.net, 
	"open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, linux-power@fi.rohmeurope.com, 
	linux-gpio@vger.kernel.org, amd-gfx list <amd-gfx@lists.freedesktop.org>, 
	dri-devel <dri-devel@lists.freedesktop.org>, nouveau@lists.freedesktop.org, 
	virtualization@lists.linux-foundation.org, spice-devel@lists.freedesktop.org, 
	linux-iio@vger.kernel.org, linux-amlogic@lists.infradead.org, 
	industrypack-devel@lists.sourceforge.net, linux-media@vger.kernel.org, 
	MPT-FusionLinux.pdl@broadcom.com, linux-scsi@vger.kernel.org, 
	linux-mtd@lists.infradead.org, linux-can@vger.kernel.org, 
	Network Development <netdev@vger.kernel.org>, intel-wired-lan@lists.osuosl.org, 
	ath10k@lists.infradead.org, linux-wireless <linux-wireless@vger.kernel.org>, 
	linux-stm32@st-md-mailman.stormreply.com, linux-nfc@lists.01.org, 
	linux-nvdimm <linux-nvdimm@lists.01.org>, linux-pci@vger.kernel.org, 
	linux-samsung-soc@vger.kernel.org, platform-driver-x86@vger.kernel.org, 
	patches@opensource.cirrus.com, storagedev@microchip.com, 
	devel@driverdev.osuosl.org, linux-serial@vger.kernel.org, 
	linux-usb@vger.kernel.org, usb-storage@lists.one-eyed-alien.net, 
	linux-watchdog@vger.kernel.org, ocfs2-devel@oss.oracle.com, 
	bpf <bpf@vger.kernel.org>, linux-integrity@vger.kernel.org, 
	linux-security-module@vger.kernel.org, keyrings@vger.kernel.org, 
	alsa-devel@alsa-project.org, 
	clang-built-linux <clang-built-linux@googlegroups.com>, Greg KH <gregkh@linuxfoundation.org>, 
	George Burgess <gbiv@google.com>
Content-Type: text/plain; charset="UTF-8"

On Sat, Oct 17, 2020 at 10:43 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>
> On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> > From: Tom Rix <trix@redhat.com>
> >
> > This is an upcoming change to clean up a new warning treewide.
> > I am wondering whether the change should be one mega patch (see below),
> > one normal patch per file (about 100 patches), or somewhere half way,
> > collecting early acks.
>
> Please break it up into one-patch-per-subsystem, like normal, and get it
> merged that way.
>
> Sending us a patch, without even a diffstat to review, isn't going to
> get you very far...

Tom,
If you're able to automate this cleanup, I suggest checking in a
script that can be run on a directory.  Then for each subsystem you
can say in your commit "I ran scripts/fix_whatever.py on this subdir,"
and others can help you drive the tree-wide cleanup.  After that, we
can enable -Wunreachable-code-break either by default or under W=2;
the latter might be a good idea right now.

Ah, George (gbiv@, cc'ed) did an analysis recently of
`-Wunreachable-code-loop-increment`, `-Wunreachable-code-break`, and
`-Wunreachable-code-return` for Android userspace.  From the review:
```
Spoilers: of these, it seems useful to turn on
-Wunreachable-code-loop-increment and -Wunreachable-code-return by
default for Android
...
While these conventions about always having break arguably became
obsolete when we enabled -Wfallthrough, my sample turned up zero
potential bugs caught by this warning, and we'd need to put a lot of
effort into getting a clean tree. So this warning doesn't seem to be
worth it.
```
Looks like there are an order of magnitude more instances of
`-Wunreachable-code-break` than of the other two.

We should probably add all three to W=2 builds (wrapped in cc-option).
I've filed https://github.com/ClangBuiltLinux/linux/issues/1180 to
follow up.
-- 
Thanks,
~Nick Desaulniers


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 19:46:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 19:46:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8819.23695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUb6l-0006bR-8O; Mon, 19 Oct 2020 19:46:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8819.23695; Mon, 19 Oct 2020 19:46:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUb6l-0006bK-59; Mon, 19 Oct 2020 19:46:15 +0000
Received: by outflank-mailman (input) for mailman id 8819;
 Mon, 19 Oct 2020 19:46:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JRC9=D2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kUb6k-0006bF-IH
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 19:46:14 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1b1cd7b-c992-4251-9e39-63d032ae9a08;
 Mon, 19 Oct 2020 19:46:13 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C00BE222EA;
 Mon, 19 Oct 2020 19:46:12 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JRC9=D2=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kUb6k-0006bF-IH
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 19:46:14 +0000
X-Inumbo-ID: f1b1cd7b-c992-4251-9e39-63d032ae9a08
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f1b1cd7b-c992-4251-9e39-63d032ae9a08;
	Mon, 19 Oct 2020 19:46:13 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id C00BE222EA;
	Mon, 19 Oct 2020 19:46:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603136773;
	bh=f9jUoDlKj3bp6hvA75GWJ5qRIX5T6tQDnN4TNMvYYm0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=bBlNZpMMjp9zWk4KVHSeJMPq8HozfAW5KWCV4EangbvxtVfIG99Ee0alsARAC0Vjt
	 M1ryLwvqNEqONCRE89uAzXXcwlYpmWrhWfCGBw33uZgIwI5ocLxwq25Bq5Wh5Zy/1M
	 Y77biisCCLy/ctYk3yUyaCaQPUNNROfcNX+eR9is=
Date: Mon, 19 Oct 2020 12:46:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>
Subject: Re: [PATCH] arm: optee: don't print warning about "wrong" RPC
 buffer
In-Reply-To: <20201005091212.186934-1-volodymyr_babchuk@epam.com>
Message-ID: <alpine.DEB.2.21.2010191245470.12247@sstabellini-ThinkPad-T480s>
References: <20201005091212.186934-1-volodymyr_babchuk@epam.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 5 Oct 2020, Volodymyr Babchuk wrote:
> The OP-TEE mediator tracks the cookie value of the last buffer that
> was requested by OP-TEE. This tracked value serves one important
> purpose: if OP-TEE wants to request another buffer, we know that it
> has finished importing the previous one, so we can free the page
> list associated with it.
> 
> We also had the false premise that OP-TEE would free requested
> buffers in reverse order. So we checked rpc_data_cookie while
> handling the OPTEE_RPC_CMD_SHM_FREE call and printed a warning if
> the cookie of the buffer being freed differed from the last
> allocated one.
> 
> While testing the RPMB FS services, I discovered that the RPMB code
> frees request and response buffers in the same order it allocated
> them. And this is perfectly fine, actually.
> 
> So we are removing the mentioned warning message from Xen, as it is
> perfectly normal to free buffers in arbitrary order.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>


Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

I am going to fix the grammar on commit.


> ---
>  xen/arch/arm/tee/optee.c | 20 +-------------------
>  1 file changed, 1 insertion(+), 19 deletions(-)
> 
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> index 8a39fe33b0..ee85359742 100644
> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -1127,25 +1127,7 @@ static int handle_rpc_return(struct optee_domain *ctx,
>           */
>          if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
>          {
> -            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
> -
> -            free_optee_shm_buf(ctx, cookie);
> -
> -            /*
> -             * OP-TEE asks to free the buffer, but this is not the same
> -             * buffer we previously allocated for it. While nothing
> -             * prevents OP-TEE from asking this, it is the strange
> -             * situation. This may or may not be caused by a bug in
> -             * OP-TEE or mediator. But is better to print warning.
> -             */
> -            if ( call->rpc_data_cookie && call->rpc_data_cookie != cookie )
> -            {
> -                gprintk(XENLOG_ERR,
> -                        "Saved RPC cookie does not corresponds to OP-TEE's (%"PRIx64" != %"PRIx64")\n",
> -                        call->rpc_data_cookie, cookie);
> -
> -                WARN();
> -            }
> +            free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
>              call->rpc_data_cookie = 0;
>          }
>          unmap_domain_page(shm_rpc->xen_arg);
> -- 
> 2.27.0
> 


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 19:50:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 19:50:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8822.23706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUbAq-0007S7-Ph; Mon, 19 Oct 2020 19:50:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8822.23706; Mon, 19 Oct 2020 19:50:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUbAq-0007S0-Mj; Mon, 19 Oct 2020 19:50:28 +0000
Received: by outflank-mailman (input) for mailman id 8822;
 Mon, 19 Oct 2020 19:50:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fLnO=D2=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kUbAp-0007Rv-W8
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 19:50:28 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id cf0248a9-b928-4f09-969a-6e7f011d050d;
 Mon, 19 Oct 2020 19:50:26 +0000 (UTC)
Received: from mail-qk1-f200.google.com (mail-qk1-f200.google.com
 [209.85.222.200]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-303-a2_yWrXGPeKRGS6HwGc5WA-1; Mon, 19 Oct 2020 15:50:24 -0400
Received: by mail-qk1-f200.google.com with SMTP id a81so523241qkg.10
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 12:50:24 -0700 (PDT)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id l25sm401073qtf.18.2020.10.19.12.50.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 19 Oct 2020 12:50:22 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=fLnO=D2=redhat.com=trix@srs-us1.protection.inumbo.net>)
	id 1kUbAp-0007Rv-W8
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 19:50:28 +0000
X-Inumbo-ID: cf0248a9-b928-4f09-969a-6e7f011d050d
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
	id cf0248a9-b928-4f09-969a-6e7f011d050d;
	Mon, 19 Oct 2020 19:50:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603137026;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:content-type:content-type;
	bh=gurUpVJY92BNatRtGPJGA4twsuCjo97EHFL7kW3iTCQ=;
	b=iz05p/CCrBPlCjPg3iRXT4Y+T2jxZ0idujFsBeP1HPLER0DIOdRc7hmYxDD+zK3LfV2Llb
	qkWGhHi8aXc3bO2VgIF+QYKW9MsmAFBwcBVAfPL9as5iCQemzvLbF3hvKg40dF3a7ac/1s
	W7z3+rC5LY+EZ32kfLrw8GxpaJPjXMk=
Received: from mail-qk1-f200.google.com (mail-qk1-f200.google.com
 [209.85.222.200]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-303-a2_yWrXGPeKRGS6HwGc5WA-1; Mon, 19 Oct 2020 15:50:24 -0400
X-MC-Unique: a2_yWrXGPeKRGS6HwGc5WA-1
Received: by mail-qk1-f200.google.com with SMTP id a81so523241qkg.10
        for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 12:50:24 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=gurUpVJY92BNatRtGPJGA4twsuCjo97EHFL7kW3iTCQ=;
        b=tmKhMQVUegRsDFfQV3XrMxRWnkOCtZoaqvb+imBO+bPUx5iNCvyRjMm9z6tHKa7+DS
         tmyqMIPTOsZ6ydT4Y9b9UzkK/3y207NtE1T5lssbBjzT4dKyV0FB3nSRLuTTSTVH2WVw
         etvAXq3Kh5pMLJOI8Rc8kXy688wO88Wnqw2WFSkUAzYH9TxoBtBWnjdL/Gnoeytvinwc
         B93cdIidF8orCSuQ03gOxSQeMnUEkx4Gij6GGAVlNAScyYp1vIe+OlAn1JBLiG7TZibo
         rzyM4P+4gIYX6O+yd6Ef89wXhS6JgfO2JCj/u0wAnDq5VF5h1KQsQgUh+89RczzGGM3Z
         4dSA==
X-Gm-Message-State: AOAM533z+pOUGk/fqAqHKwBloZaA4McQWfpEu5xARJw5CkOLc44BR86T
	sNEGk86kXSXJsaJtvNkIj0vFa2GwZNYQKf/LwynGOQk+rfWlg14sBiWo7UtvJqAee2MvM2UNwBM
	LmGz+TcpkOjlJmtNKnUiBuUp0zsQ=
X-Received: by 2002:ac8:705b:: with SMTP id y27mr1114108qtm.192.1603137023756;
        Mon, 19 Oct 2020 12:50:23 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyvDlls0C3FPTa4CLaTFO3RK5ocpNR7SlP5iafd30354sGejOe41Jo8FGhAcBQn8KNGmwvqIQ==
X-Received: by 2002:ac8:705b:: with SMTP id y27mr1114095qtm.192.1603137023534;
        Mon, 19 Oct 2020 12:50:23 -0700 (PDT)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com. [75.142.250.213])
        by smtp.gmail.com with ESMTPSA id l25sm401073qtf.18.2020.10.19.12.50.21
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 19 Oct 2020 12:50:22 -0700 (PDT)
From: trix@redhat.com
To: konrad.wilk@oracle.com,
	axboe@kernel.dk
Cc: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Tom Rix <trix@redhat.com>
Subject: [PATCH] block: xen-blkback: remove unneeded break
Date: Mon, 19 Oct 2020 12:50:16 -0700
Message-Id: <20201019195016.15337-1-trix@redhat.com>
X-Mailer: git-send-email 2.18.1
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="US-ASCII"

From: Tom Rix <trix@redhat.com>

A break statement is not needed if it is preceded by a goto.

Signed-off-by: Tom Rix <trix@redhat.com>
---
 drivers/block/xen-blkback/blkback.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index adfc9352351d..f769fbd1b4c4 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -1269,7 +1269,6 @@ static int dispatch_rw_block_io(struct xen_blkif_ring *ring,
 	default:
 		operation = 0; /* make gcc happy */
 		goto fail_response;
-		break;
 	}
 
 	/* Check that the number of segments is sane. */
-- 
2.18.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 20:01:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 20:01:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8825.23718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUbLF-0008Tv-Iv; Mon, 19 Oct 2020 20:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8825.23718; Mon, 19 Oct 2020 20:01:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUbLF-0008To-Fu; Mon, 19 Oct 2020 20:01:13 +0000
Received: by outflank-mailman (input) for mailman id 8825;
 Mon, 19 Oct 2020 20:01:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=85l8=D2=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kUbLE-0008Tj-Nl
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 20:01:12 +0000
Received: from mail-io1-xd44.google.com (unknown [2607:f8b0:4864:20::d44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e333733-8481-45e8-b598-2c21f80f53db;
 Mon, 19 Oct 2020 20:01:06 +0000 (UTC)
Received: by mail-io1-xd44.google.com with SMTP id k6so1300374ior.2
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 13:01:06 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
 by smtp.gmail.com with ESMTPSA id
 r12sm748051ilm.28.2020.10.19.13.01.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 19 Oct 2020 13:01:05 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=85l8=D2=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kUbLE-0008Tj-Nl
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 20:01:12 +0000
X-Inumbo-ID: 1e333733-8481-45e8-b598-2c21f80f53db
Received: from mail-io1-xd44.google.com (unknown [2607:f8b0:4864:20::d44])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1e333733-8481-45e8-b598-2c21f80f53db;
	Mon, 19 Oct 2020 20:01:06 +0000 (UTC)
Received: by mail-io1-xd44.google.com with SMTP id k6so1300374ior.2
        for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 13:01:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=+eQKL04xasUcDd+ftioh0iFMePD/ue+0Y1d/ZN01kBg=;
        b=XCEbA9S54338AP5dTI8F9hUleHSbLlg34igGxVjXXq/rnVp5Yuymde3Slc91gNLSqW
         7zG7zYxDigc6rWP1HgoqbvZQKysbKtv+CPOgh6CigBHdXZD61lAsG597Zj3Cgb/xYOmu
         bVOBVpHOgSTFNBCvgEeSv5e09oixObUy0onRC9tWLt6j5o8WJ1g4LzP+O+SA/3F+VhnT
         ONEs89tl+/fqI3AywtYxKe4oNWZtjXIisalBHbwEM9NLNwH7/2Md8v5sEM7/XyaF2N6e
         dikejV/M0/v+fQ6GIeTw3ChSn9cq48Opc0AVce6/RlhkmK722QOWmu4lNXGZv2MwUhg6
         yhiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=+eQKL04xasUcDd+ftioh0iFMePD/ue+0Y1d/ZN01kBg=;
        b=oWH8MFbXK7WcxK4mHB65egBT9UH8/gC71tgxUqC4DrrQgZ43ucm/y5Z5Hu6nGu+Pz8
         5ipg6TC8yREC7IYE6sz3SFH4B6PavVRUEoUdTM1scGOE/aV346sPHK91Y0jbZzlcIhjY
         R2XejfmOy15WyhTPeflSqYt96t6EmN+ROFq6UZ3kxtk0P7VJXokXybWlQ91gB4UVso7P
         cyL+7SrRPuOS5hmNIkOWSl48Xh7aN397FmJ/DpuIfFD+bMHamB5FM/pchsNNXOQsfsc3
         PeMi3n1wd09grA/E/yMy1QOmaYbUEYQ1rfNVU9bJ87SrwhKjPUL3Y+OrmCzbxh4Wa1TK
         Fmtg==
X-Gm-Message-State: AOAM533v//4/FJPD8BZz5QLeEcn1GVG27rWwUvDOyySO5bTHI7sxD7lS
	CC58QcalVe8hRxDM+2KU73w=
X-Google-Smtp-Source: ABdhPJyNHzUu4H6hURcTDLEKIci573YL5yT9qlQBpApJVZ8Ka0Qz6NplrJDui0k+P45kWTlLlZZY2Q==
X-Received: by 2002:a02:cd2c:: with SMTP id h12mr1404514jaq.138.1603137665874;
        Mon, 19 Oct 2020 13:01:05 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
        by smtp.gmail.com with ESMTPSA id r12sm748051ilm.28.2020.10.19.13.01.04
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 19 Oct 2020 13:01:05 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: anthony.perard@citrix.com,
	xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] libxl: Add suppress-vmdesc to QEMU -machine pc options
Date: Mon, 19 Oct 2020 16:00:50 -0400
Message-Id: <20201019200050.103360-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The device model state saved by QMP xen-save-devices-state doesn't
include the vmdesc json.  When restoring an HVM, xen-load-devices-state
always triggers "Expected vmdescription section, but got 0".  This is
not a problem when the restore comes from a file.  However, when QEMU
runs in a Linux stubdom and the state comes over a console, EOF is not
received.  This causes a delay in restoring, though it does restore.

Setting suppress-vmdesc skips looking for the vmdesc during restore and
avoids the wait.

This is a libxl change for the non-xenfv case to match the xenfv
change made in QEMU.
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---

Should this also add suppress-vmdesc to xenfv for backwards
compatibility?  In that case, the change in QEMU is redundant.  Since
this only really matters for the stubdom case, it could be conditioned
on that.

 tools/libs/light/libxl_dm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index d1ff35dda3..7d735ffcd9 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1778,7 +1778,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             /* Switching here to the machine "pc" which does not add
              * the xen-platform device instead of the default "xenfv" machine.
              */
-            machinearg = libxl__strdup(gc, "pc,accel=xen");
+            machinearg = libxl__strdup(gc, "pc,accel=xen,suppress-vmdesc=on");
         } else {
             machinearg = libxl__strdup(gc, "xenfv");
         }
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 20:02:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 20:02:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8827.23731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUbML-00008t-Us; Mon, 19 Oct 2020 20:02:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8827.23731; Mon, 19 Oct 2020 20:02:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUbML-00008m-Qu; Mon, 19 Oct 2020 20:02:21 +0000
Received: by outflank-mailman (input) for mailman id 8827;
 Mon, 19 Oct 2020 20:02:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUbMK-00008f-BT
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 20:02:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f8801ea-2829-444c-9312-338d4fc49346;
 Mon, 19 Oct 2020 20:02:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUbMB-0002wR-OK; Mon, 19 Oct 2020 20:02:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUbMB-0003fV-Dj; Mon, 19 Oct 2020 20:02:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUbMB-0004bP-DE; Mon, 19 Oct 2020 20:02:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUbMK-00008f-BT
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 20:02:20 +0000
X-Inumbo-ID: 3f8801ea-2829-444c-9312-338d4fc49346
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 3f8801ea-2829-444c-9312-338d4fc49346;
	Mon, 19 Oct 2020 20:02:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OB4ygmKpTf4BOJdbIQa86qmeYoBwbvoyqb+yE3gONss=; b=rduvaCQBDSAnvmnh5ufwmrYqDx
	hvcfqGSDXESBDqpfTD5v9SeCKIyPrH2P2Q3oLhJLNTQpM20mQv+3k+4j+0P2CT1WSmbSRiYLTCPo1
	um0x3BAihbuGXQwfr//iaMkq5zNDZgW0oZRwaMzW4l2C1e5cAsGLYTaHv9L0i9FIiCIM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUbMB-0002wR-OK; Mon, 19 Oct 2020 20:02:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUbMB-0003fV-Dj; Mon, 19 Oct 2020 20:02:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUbMB-0004bP-DE; Mon, 19 Oct 2020 20:02:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155981-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155981: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-pvops:host-reuse/final/host:broken:nonblocking
    qemu-mainline:build-i386-pvops:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=d76f4f97eb2772bf85fe286097183d0c7db19ae8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 20:02:11 +0000

flight 155981 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155981/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 build-i386-pvops              4 host-reuse/final/host broken blocked in 152631
 build-i386-pvops              2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                d76f4f97eb2772bf85fe286097183d0c7db19ae8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   60 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  104 attempts
Testing same since   155981  2020-10-19 16:06:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             starved 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step build-i386-pvops host-reuse/final/host

Not pushing.

(No revision log; it would be 47947 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 20:03:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 20:03:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8830.23746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUbNN-0000Jn-FM; Mon, 19 Oct 2020 20:03:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8830.23746; Mon, 19 Oct 2020 20:03:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUbNN-0000Jg-BV; Mon, 19 Oct 2020 20:03:25 +0000
Received: by outflank-mailman (input) for mailman id 8830;
 Mon, 19 Oct 2020 20:03:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=85l8=D2=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kUbNL-0000JZ-Tj
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 20:03:23 +0000
Received: from mail-il1-x144.google.com (unknown [2607:f8b0:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 425876ac-93aa-4309-857f-533d156cbb83;
 Mon, 19 Oct 2020 20:03:23 +0000 (UTC)
Received: by mail-il1-x144.google.com with SMTP id j13so1605774ilc.4
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 13:03:23 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
 by smtp.gmail.com with ESMTPSA id
 s17sm796805ioa.38.2020.10.19.13.03.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 19 Oct 2020 13:03:21 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=85l8=D2=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kUbNL-0000JZ-Tj
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 20:03:23 +0000
X-Inumbo-ID: 425876ac-93aa-4309-857f-533d156cbb83
Received: from mail-il1-x144.google.com (unknown [2607:f8b0:4864:20::144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 425876ac-93aa-4309-857f-533d156cbb83;
	Mon, 19 Oct 2020 20:03:23 +0000 (UTC)
Received: by mail-il1-x144.google.com with SMTP id j13so1605774ilc.4
        for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 13:03:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=F+TtsE/7V779qs9kiKK93Q96YivNooCbDhHQNdsZbls=;
        b=hlQ76gZyAKCrqHEeCaUzSMvlCFHBpOudrJ3lwG4Lgsef9BJpwhHTUWQdeQIGJBNBNO
         hWeAkHuBx3KPKIASzvOp4BcNrGZ988CmJOn7BKZ8onQYcGosz0bx9J1rJ3gSAsEz7/lm
         /1+g2MWvbkQI6jJV3G1V0FxJn0UazPX0IYwWHsciWTGWbwURmk0oo81oE7H5HWITXoh3
         yYI2womHos0kAlyT8T+tOOJp2miDnsr6DxzyVS6cY0dw4fJeZtpGzzDTNnkPh96c/gGf
         slIpkB3z3MfOk6nh9bgdskIyQH++b07kTw+vt5cdOs2pjcdkdFOgZMlTm1bvKB8hFJgT
         6HNw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=F+TtsE/7V779qs9kiKK93Q96YivNooCbDhHQNdsZbls=;
        b=OrdbzFtRpbqxM8rnJy4GwyFT5hWratpawK0PGGq6rTJBsyt6vwoJYcvR9o5XtQjjAn
         JcyCRq7DKmrpeoaxGKp20Oq3uexCC5shL0W6aB6ZwR5rC2z48IIIfrIvHPJWyasThlJO
         ooCxOxyO+g4sM9ni3LIBLSt5wTFOzrt4pvJzkAtmpkpV0Yv73IbMsgcJ5BkWbcga7MKg
         4LyU0oocOk+tsyxE2AXXYSbwhaRn9ZzNwdEzeD1PI2y5MxQ1TMnFCGO++Q25u+7C86Ti
         XzcboEEMPYVV+tV6oe1DTWqjd1ne1os1y8iKMJHX4MDFfVglYvJOB4mSRPyxVbyBUbMN
         MMjw==
X-Gm-Message-State: AOAM5333WS9DicWU0zbEEtqBYusmvmxzOiPt4jabFH7XJ7RPMJV1W14S
	8QDYxlzxh210KqYzLHSuzr5XhCG6rtQ=
X-Google-Smtp-Source: ABdhPJzTPJh0vmNmmjYWc6FOA26Dq1fObmGyJ5ws7awxheE/44q2fAUUmXRGdU2/Auvsj7iO2xPMxA==
X-Received: by 2002:a92:d6cc:: with SMTP id z12mr1453737ilp.172.1603137802463;
        Mon, 19 Oct 2020 13:03:22 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
        by smtp.gmail.com with ESMTPSA id s17sm796805ioa.38.2020.10.19.13.03.21
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 19 Oct 2020 13:03:21 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] flask: label-pci: Allow specifying irq label
Date: Mon, 19 Oct 2020 16:03:18 -0400
Message-Id: <20201019200318.103781-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

IRQs can be shared, so uniquely labeling them doesn't always work.  Problems
arise when domA_t is allowed access to device_A_t and domB_t to
device_B_t: the shared IRQ can only be labeled as one of device_A_t or
device_B_t, so assignment of the second device fails since domA_t
doesn't have permission for device_B_t and vice versa.

Add the ability to specify an irq label to flask-label-pci.  A
shared_irq_t label can then be used for the PIRQ.  The default remains
to use the device label if an IRQ label isn't specified.
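A hypothetical invocation, assuming two PCI devices sharing an IRQ and
policy types named device_A_t, device_B_t and shared_irq_t (the SBDFs
and type names here are illustrative, not from a real policy):

```shell
# Label each device with its own type, but give the shared IRQ a
# common label so both domains' policies can reference it.
flask-label-pci 0000:03:00.0 system_u:object_r:device_A_t shared_irq_t
flask-label-pci 0000:04:00.0 system_u:object_r:device_B_t shared_irq_t

# Omitting the third argument preserves the old behaviour: the IRQ
# inherits the device's label.
flask-label-pci 0000:05:00.0 system_u:object_r:device_C_t
```

With both devices' PIRQ labeled shared_irq_t, granting domA_t and
domB_t access to shared_irq_t avoids the cross-permission failure
described above.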

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/flask/utils/label-pci.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/tools/flask/utils/label-pci.c b/tools/flask/utils/label-pci.c
index 9ddb713cf4..897b772804 100644
--- a/tools/flask/utils/label-pci.c
+++ b/tools/flask/utils/label-pci.c
@@ -28,7 +28,7 @@
 
 static void usage (int argCnt, char *argv[])
 {
-	fprintf(stderr, "Usage: %s SBDF label\n", argv[0]);
+	fprintf(stderr, "Usage: %s SBDF label <irq_label>\n", argv[0]);
 	exit(1);
 }
 
@@ -39,12 +39,19 @@ int main (int argCnt, char *argv[])
 	int seg, bus, dev, fn;
 	uint32_t sbdf;
 	uint64_t start, end, flags;
+	char *pirq_label;
 	char buf[1024];
 	FILE *f;
 
-	if (argCnt != 3)
+	if (argCnt < 3 || argCnt > 4)
 		usage(argCnt, argv);
 
+	if (argCnt == 4) {
+	    pirq_label = argv[3];
+	} else {
+	    pirq_label = argv[2];
+	}
+
 	xch = xc_interface_open(0,0,0);
 	if ( !xch )
 	{
@@ -107,7 +114,7 @@ int main (int argCnt, char *argv[])
 	if (fscanf(f, "%" SCNu64, &start) != 1)
 		start = 0;
 	if (start) {
-		ret = xc_flask_add_pirq(xch, start, argv[2]);
+		ret = xc_flask_add_pirq(xch, start, pirq_label);
 		if (ret) {
 			fprintf(stderr, "xc_flask_add_pirq %"PRIu64" failed: %d\n",
 					start, ret);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 21:37:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 21:37:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8836.23758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUcpa-0007xC-FO; Mon, 19 Oct 2020 21:36:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8836.23758; Mon, 19 Oct 2020 21:36:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUcpa-0007x5-BU; Mon, 19 Oct 2020 21:36:38 +0000
Received: by outflank-mailman (input) for mailman id 8836;
 Mon, 19 Oct 2020 21:36:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUcpY-0007x0-VH
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 21:36:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b4c8d91-2a75-4258-9b12-ea411f9fd54b;
 Mon, 19 Oct 2020 21:36:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUcpT-0004sC-7Q; Mon, 19 Oct 2020 21:36:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUcpS-00085G-Ty; Mon, 19 Oct 2020 21:36:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUcpS-0003lF-TS; Mon, 19 Oct 2020 21:36:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zzxy=D2=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUcpY-0007x0-VH
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 21:36:37 +0000
X-Inumbo-ID: 4b4c8d91-2a75-4258-9b12-ea411f9fd54b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4b4c8d91-2a75-4258-9b12-ea411f9fd54b;
	Mon, 19 Oct 2020 21:36:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=YquLcXDrRumlIbhqisAdHVhtio7h/rwHCsb4qH6XEsk=; b=fLWTZn3wb0hX9C1qY7LkSf/WT3
	0AHgBvwR3iUmRnuq5yPvoZf4+djq6bhZHPfCfKM28Yv/AHm9koKylNZwWs/kWH9p41CHF3BB906xx
	swvhb2Dk/FgkfLCsp/oj8hi/N5WZ9CI2ugBHSqOYdWzB49EB3AIKvO334NeFTe356IHY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUcpT-0004sC-7Q; Mon, 19 Oct 2020 21:36:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUcpS-00085G-Ty; Mon, 19 Oct 2020 21:36:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUcpS-0003lF-TS; Mon, 19 Oct 2020 21:36:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete build-arm64-xsm
Message-Id: <E1kUcpS-0003lF-TS@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 19 Oct 2020 21:36:30 +0000

branch xen-unstable
xenbranch xen-unstable
job build-arm64-xsm
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  782d7b30dd8e27ba24346e7c411b476db88b59e7
  Bug not present: e12ce85b2c79d83a340953291912875c30b3af06
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155993/


  commit 782d7b30dd8e27ba24346e7c411b476db88b59e7
  Merge: e12ce85b2c c47110d90f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sat Oct 17 20:52:55 2020 +0100
  
      Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging
      
      * Drop ninjatool and just require ninja (Paolo)
      * Fix docs build under msys2 (Yonggang)
      * HAX snafu fix (Claudio)
      * Disable signal handlers during fuzzing (Alex)
      * Miscellaneous fixes (Bruce, Greg)
      
      # gpg: Signature made Sat 17 Oct 2020 15:45:56 BST
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * remotes/bonzini-gitlab/tags/for-upstream: (22 commits)
        ci: include configure and meson logs in all jobs if configure fails
        hax: unbreak accelerator cpu code after cpus.c split
        fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
        cirrus: Enable doc build on msys2/mingw
        meson: Move the detection logic for sphinx to meson
        meson: move SPHINX_ARGS references within "if build_docs"
        docs: Fix Sphinx configuration for msys2/mingw
        meson: Only install icons and qemu.desktop if have_system
        configure: fix handling of --docdir parameter
        meson: cleanup curses/iconv test
        meson.build: don't condition iconv detection on library detection
        build: add --enable/--disable-libudev
        build: replace ninjatool with ninja
        build: cleanups to Makefile
        add ninja to dockerfiles, CI configurations and test VMs
        dockerfiles: enable Centos 8 PowerTools
        configure: move QEMU_INCLUDES to meson
        tests: add missing generated sources to testqapi
        make: run shell with pipefail
        tests/Makefile.include: unbreak non-tcg builds
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit c47110d90fa5401bcc42c17f8ae0724a1c96599a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 05:49:28 2020 -0400
  
      ci: include configure and meson logs in all jobs if configure fails
      
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a1b0e4613006704fb02209df548ce9fde62232e0
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Fri Oct 16 10:00:32 2020 +0200
  
      hax: unbreak accelerator cpu code after cpus.c split
      
      During my split of cpus.c, the line
      "current_cpu = cpu"
      was removed by mistake, causing HAX to break.
      
      This commit fixes the situation by restoring it.
      
      Reported-by: Volker Rümelin <vr_qemu@t-online.de>
      Fixes: e92558e4bf8059ce4f0b310afe218802b72766bc
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Message-Id: <20201016080032.13914-1-cfontana@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit fc69fa216cf52709b1279a592364e50c674db6ff
  Author: Alexander Bulekov <alxndr@bu.edu>
  Date:   Wed Oct 14 10:21:57 2020 -0400
  
      fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
      
      Prior to this patch, the only way I found to terminate the fuzzer was
      either to:
       1. Explicitly specify the number of fuzzer runs with the -runs= flag
       2. SIGKILL the process with "pkill -9 qemu-fuzz-*" or similar
      
      In addition to being annoying to deal with, SIGKILLing the process skips
      over any exit handlers (e.g. those registered with atexit()). This is bad,
      since some fuzzers might create temporary files that should ideally be
      removed on exit using an exit handler. The only way to achieve a clean
      exit now is to specify -runs=N, but the desired "N" is tricky to
      identify prior to fuzzing.
      
      Why doesn't the process exit with standard SIGINT,SIGHUP,SIGTERM
      signals? QEMU installs its own handlers for these signals in
      os-posix.c:os_setup_signal_handling, which notify the main loop that an
      exit was requested. The fuzzer, however, does not run qemu_main_loop,
      which performs the main_loop_should_exit() check.  This means that the
      fuzzer effectively ignores these signals. As we don't really care about
      cleanly stopping the disposable fuzzer "VM", this patch uninstalls
      QEMU's signal handlers. Thus, we can stop the fuzzer with
      SIG{INT,HUP,TERM} and the fuzzing code can optionally use atexit() to
      clean up temporary files/resources.
      
      Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
      Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
      Message-Id: <20201014142157.46028-1-alxndr@bu.edu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5bfb4f52fe897f5594a0089891e19c78d3ecd672
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:26 2020 +0800
  
      cirrus: Enable doc build on msys2/mingw
      
      Currently the rST build depends on the old sphinx-2.x series;
      install it by downloading it directly.
      Remove the need for the university mirror, as the main repo has recovered.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-5-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e36676604683c1ee12963d83eaaf3d3c2a1790ce
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:25 2020 +0800
  
      meson: Move the detection logic for sphinx to meson
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-4-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9dc6ee3fd78a478935eecf936cddd575c6dfb20a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 04:05:26 2020 -0400
  
      meson: move SPHINX_ARGS references within "if build_docs"
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a94a689cc5c5b2a1fbba4dd418e456a14e6e12e5
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:23 2020 +0800
  
      docs: Fix Sphinx configuration for msys2/mingw
      
      Python doesn't support running ../scripts/kernel-doc directly.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-2-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3856873ee404c028a47115147f21cdc4b0d25566
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 14:18:40 2020 -0600
  
      meson: Only install icons and qemu.desktop if have_system
      
      These files are not needed for a linux-user only install.
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Message-Id: <20201015201840.282956-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c6502638075557ff38fbb874af32f91186b667eb
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 13:07:42 2020 -0600
  
      configure: fix handling of --docdir parameter
      
      Commit ca8c0909f01 changed qemu_docdir to be docdir, then later uses the
      qemu_docdir name in the final assignment. Unfortunately, one instance of
      qemu_docdir was missed: the one which comes from the --docdir parameter.
      This patch restores the proper handling of the --docdir parameter.
      
      Fixes: ca8c0909f01 ("configure: build docdir like other suffixed
      directories")
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20201015190742.270629-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 30fe76b17cc5aad395eb8a8a3da59e377a0b3d8e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 13:26:50 2020 -0400
  
      meson: cleanup curses/iconv test
      
      Skip the test if system emulation is not requested, and
      differentiate the errors for lack of iconv and lack of curses.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ac0c8351abf79f3b65105ea27bd0491387d804f6
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Wed Oct 14 16:19:39 2020 -0600
  
      meson.build: don't condition iconv detection on library detection
      
      It isn't necessarily the case that use of iconv requires an additional
      library. For that reason we shouldn't conditionalize iconv detection on
      libiconv.found.
      
      Fixes: 5285e593c33 (configure: Fixes ncursesw detection under msys2/mingw by convert them to meson)
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201014221939.196958-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5c53015a480b3fe137ebd8b3b584a595c65e8f21
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 06:09:27 2020 -0400
  
      build: add --enable/--disable-libudev
      
      Initially, libudev detection was bundled with --enable-mpath because
      qemu-pr-helper was the only user of libudev.  Recently however the USB
      U2F emulation has also started using libudev, so add a separate
      option.  This also allows 1) disabling libudev if desired for static
      builds and 2) for non-static builds, requiring libudev even if
      multipath support is undesirable.
      
      The multipath test is adjusted, because it is now possible to enter it
      with configurations that should fail, such as --static --enable-mpath
      --disable-libudev.
      
      Reported-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 09e93326e448ab43fa26a9e2d9cc20ecf951f32b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:28:11 2020 -0400
  
      build: replace ninjatool with ninja
      
      Now that the build is done entirely by Meson, there is no need
      to keep the Makefile conversion.  Instead, we can ask Ninja about
      the targets it exposes and forward them.
      
      The main advantages are, from smallest to largest:
      
      - reducing the possible namespace pollution within the Makefile
      
      - removal of a relatively large Python program
      
      - faster build because parsing Makefile.ninja is slower than
      parsing build.ninja; and faster build after Meson runs because
      we do not have to generate Makefile.ninja.
      
      - tracking of command lines, which provides more accurate rebuilds
      
      In addition the change removes the requirement for GNU make 3.82, which
      was annoying on Mac, and avoids bugs on Windows due to ninjatool not
      knowing how to convert Windows escapes to POSIX escapes.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
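The approach the commit describes — asking Ninja which targets it exposes and forwarding them — might look roughly like this in a wrapper Makefile. This is an illustrative sketch under assumed names, not QEMU's actual Makefile; `ninja -t targets` is Ninja's built-in target-listing tool.

```make
NINJA = ninja

# Ask Ninja which targets it exposes; "-t targets all" lists every
# target in build.ninja, one "name: rule" pair per line.
ninja-targets := $(shell $(NINJA) -t targets all 2>/dev/null | cut -d: -f1)

# Forward each of those targets straight to Ninja instead of
# maintaining a converted Makefile.ninja.
.PHONY: $(ninja-targets)
$(ninja-targets):
	$(NINJA) $@
```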
  
  commit 2b8575bd5fbc8a8880e9ecfb1c7e7990feb1fea6
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 12:20:02 2020 -0400
  
      build: cleanups to Makefile
      
      Group similar rules, add comments to "else" and "endif" lines,
      detect too-old config-host.mak before messing things up.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 345d7053ca4a39b0496366f3c953ae2681570ce3
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:58:50 2020 -0400
  
      add ninja to dockerfiles, CI configurations and test VMs
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Acked-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f2f984a3b3bc8322df2efa3937bf11e8ea2bcaa5
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:12:37 2020 -0400
  
      dockerfiles: enable Centos 8 PowerTools
      
      ninja is included in the CentOS PowerTools repository.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1e6e616dc21a8117cbe36a7e9026221b566cdf56
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 08:45:42 2020 -0400
  
      configure: move QEMU_INCLUDES to meson
      
      Confusingly, QEMU_INCLUDES is not used by configure tests.  Moving
      it to meson.build ensures that Windows paths are specified instead of
      the msys paths like /c/Users/...
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 97d6efd0a3f3a08942de6c2aee5d2983c54ca84c
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:20:17 2020 -0400
  
      tests: add missing generated sources to testqapi
      
      Ninja notices them due to a different order in visiting the graph.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3bf4583580ab705de1beff6222e934239c3a0356
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:35:13 2020 -0400
  
      make: run shell with pipefail
      
      Without pipefail, it is possible to miss failures if the recipes
      include pipes.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 88da4b043b4f91a265947149b1cd6758c046a4bd
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 13 21:21:21 2020 +0200
  
      tests/Makefile.include: unbreak non-tcg builds
      
      Remove from check-block the requirement that all TARGET_DIRS are built.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e90df5eada4e6047548203d781bd61ddcc45d7b4
  Author: Greg Kurz <groug@kaod.org>
  Date:   Thu Oct 15 16:49:06 2020 +0200
  
      Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree
      
      Tools usually expect the index files to be in the source tree, eg. emacs.
      This is already the case when doing out-of-tree builds, but with in-tree
      builds they end up in the build directory.
      
      Force cscope, ctags and etags to put them in the source tree.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <160277334665.1754102.10921580280105870386.stgit@bahia.lan>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6ebd89cf9ca3f5a6948542c4522b9380b1e9539f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 03:20:45 2020 -0400
  
      submodules: bump meson to 0.55.3
      
      This adds some bugfixes, and allows MSYS2 to configure
      without "--ninja=ninja".
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-arm64-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-arm64-xsm.xen-build --summary-out=tmp/155993.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline build-arm64-xsm xen-build
Searching for failure / basis pass:
 155979 fail [host=rochester0] / 155971 ok.
Failure / basis pass flights: 155979 / 155971
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 709b163940c55604b983400eb49dad144a2aa091 ba2a9a9e6318bfd93a2306dec40137e198205b86 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Basis pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#73e3cb6c7eea4f5db81c87574dcefe1282de4772-709b163940c55604b983400eb49dad144a2aa091 git://git.qemu.org/qemu.git#e12ce85b2c79d83a340953291912875c30b3af06-ba2a9a9e6318bfd93a2306dec40137e198205b86 git://xenbits.xen.org/osstest/seabios.git#58a44be024f69d2e4d2b58553529230abdd3935e-58a44be024f69d2e4d2b58553529230abdd3935e git://xenbits.xen.org/xen.git#0dfddb2116e3757f77a691a3fe335173088d69dc-0dfddb2116e3757f77a691a3fe335173088d69dc
Loaded 29910 nodes in revision graph
Searching for test results:
 155953 [host=laxton1]
 155971 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155979 fail 709b163940c55604b983400eb49dad144a2aa091 ba2a9a9e6318bfd93a2306dec40137e198205b86 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155980 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155982 fail 709b163940c55604b983400eb49dad144a2aa091 ba2a9a9e6318bfd93a2306dec40137e198205b86 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155984 pass 709b163940c55604b983400eb49dad144a2aa091 dc7a05da69613d5c87ec0359c5dbb9d2b4765301 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155985 pass 709b163940c55604b983400eb49dad144a2aa091 31a6f3534aba275aa9b3da21a58e79065ba865b5 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155986 fail 709b163940c55604b983400eb49dad144a2aa091 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155987 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155989 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155990 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155992 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155993 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Searching for interesting versions
 Result found: flight 155971 (pass), for basis pass
 Result found: flight 155979 (fail), for basis failure
 Repro found: flight 155980 (pass), for basis pass
 Repro found: flight 155982 (fail), for basis failure
 0 revisions at 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
No revisions left to test, checking graph state.
 Result found: flight 155971 (pass), for last pass
 Result found: flight 155987 (fail), for first failure
 Repro found: flight 155989 (pass), for last pass
 Repro found: flight 155990 (fail), for first failure
 Repro found: flight 155992 (pass), for last pass
 Repro found: flight 155993 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  782d7b30dd8e27ba24346e7c411b476db88b59e7
  Bug not present: e12ce85b2c79d83a340953291912875c30b3af06
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/155993/


  commit 782d7b30dd8e27ba24346e7c411b476db88b59e7
  Merge: e12ce85b2c c47110d90f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sat Oct 17 20:52:55 2020 +0100
  
      Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging
      
      * Drop ninjatool and just require ninja (Paolo)
      * Fix docs build under msys2 (Yonggang)
      * HAX snafu fix (Claudio)
      * Disable signal handlers during fuzzing (Alex)
      * Miscellaneous fixes (Bruce, Greg)
      
      # gpg: Signature made Sat 17 Oct 2020 15:45:56 BST
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * remotes/bonzini-gitlab/tags/for-upstream: (22 commits)
        ci: include configure and meson logs in all jobs if configure fails
        hax: unbreak accelerator cpu code after cpus.c split
        fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
        cirrus: Enable doc build on msys2/mingw
        meson: Move the detection logic for sphinx to meson
        meson: move SPHINX_ARGS references within "if build_docs"
        docs: Fix Sphinx configuration for msys2/mingw
        meson: Only install icons and qemu.desktop if have_system
        configure: fix handling of --docdir parameter
        meson: cleanup curses/iconv test
        meson.build: don't condition iconv detection on library detection
        build: add --enable/--disable-libudev
        build: replace ninjatool with ninja
        build: cleanups to Makefile
        add ninja to dockerfiles, CI configurations and test VMs
        dockerfiles: enable Centos 8 PowerTools
        configure: move QEMU_INCLUDES to meson
        tests: add missing generated sources to testqapi
        make: run shell with pipefail
        tests/Makefile.include: unbreak non-tcg builds
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit c47110d90fa5401bcc42c17f8ae0724a1c96599a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 05:49:28 2020 -0400
  
      ci: include configure and meson logs in all jobs if configure fails
      
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a1b0e4613006704fb02209df548ce9fde62232e0
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Fri Oct 16 10:00:32 2020 +0200
  
      hax: unbreak accelerator cpu code after cpus.c split
      
      During my split of cpus.c, the line
      "current_cpu = cpu"
      was removed by mistake, causing HAX to break.
      
      This commit fixes the situation by restoring it.
      
      Reported-by: Volker Rümelin <vr_qemu@t-online.de>
      Fixes: e92558e4bf8059ce4f0b310afe218802b72766bc
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Message-Id: <20201016080032.13914-1-cfontana@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit fc69fa216cf52709b1279a592364e50c674db6ff
  Author: Alexander Bulekov <alxndr@bu.edu>
  Date:   Wed Oct 14 10:21:57 2020 -0400
  
      fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
      
      Prior to this patch, the only way I found to terminate the fuzzer was
      either to:
       1. Explicitly specify the number of fuzzer runs with the -runs= flag
       2. SIGKILL the process with "pkill -9 qemu-fuzz-*" or similar
      
      In addition to being annoying to deal with, SIGKILLing the process skips
      over any exit handlers (e.g. those registered with atexit()). This is bad,
      since some fuzzers might create temporary files that should ideally be
      removed on exit using an exit handler. The only way to achieve a clean
      exit now is to specify -runs=N, but the desired "N" is tricky to
      identify prior to fuzzing.
      
      Why doesn't the process exit with standard SIGINT,SIGHUP,SIGTERM
      signals? QEMU installs its own handlers for these signals in
      os-posix.c:os_setup_signal_handling, which notify the main loop that an
      exit was requested. The fuzzer, however, does not run qemu_main_loop,
      which performs the main_loop_should_exit() check.  This means that the
      fuzzer effectively ignores these signals. As we don't really care about
      cleanly stopping the disposable fuzzer "VM", this patch uninstalls
      QEMU's signal handlers. Thus, we can stop the fuzzer with
      SIG{INT,HUP,TERM} and the fuzzing code can optionally use atexit() to
      clean up temporary files/resources.
      
      Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
      Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
      Message-Id: <20201014142157.46028-1-alxndr@bu.edu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5bfb4f52fe897f5594a0089891e19c78d3ecd672
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:26 2020 +0800
  
      cirrus: Enable doc build on msys2/mingw
      
      Currently the rST build depends on the old sphinx-2.x series;
      install it by downloading it directly.
      Remove the need for the university mirror, as the main repo has recovered.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-5-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e36676604683c1ee12963d83eaaf3d3c2a1790ce
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:25 2020 +0800
  
      meson: Move the detection logic for sphinx to meson
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-4-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9dc6ee3fd78a478935eecf936cddd575c6dfb20a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 04:05:26 2020 -0400
  
      meson: move SPHINX_ARGS references within "if build_docs"
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a94a689cc5c5b2a1fbba4dd418e456a14e6e12e5
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:23 2020 +0800
  
      docs: Fix Sphinx configuration for msys2/mingw
      
      Python doesn't support running ../scripts/kernel-doc directly.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-2-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3856873ee404c028a47115147f21cdc4b0d25566
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 14:18:40 2020 -0600
  
      meson: Only install icons and qemu.desktop if have_system
      
      These files are not needed for a linux-user only install.
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Message-Id: <20201015201840.282956-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c6502638075557ff38fbb874af32f91186b667eb
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 13:07:42 2020 -0600
  
      configure: fix handling of --docdir parameter
      
      Commit ca8c0909f01 changed qemu_docdir to be docdir, then later uses the
      qemu_docdir name in the final assignment. Unfortunately, one instance of
      qemu_docdir was missed: the one which comes from the --docdir parameter.
      This patch restores the proper handling of the --docdir parameter.
      
      Fixes: ca8c0909f01 ("configure: build docdir like other suffixed
      directories")
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20201015190742.270629-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 30fe76b17cc5aad395eb8a8a3da59e377a0b3d8e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 13:26:50 2020 -0400
  
      meson: cleanup curses/iconv test
      
      Skip the test if system emulation is not requested, and
      differentiate errors for lack of iconv and lack of curses.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ac0c8351abf79f3b65105ea27bd0491387d804f6
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Wed Oct 14 16:19:39 2020 -0600
  
      meson.build: don't condition iconv detection on library detection
      
      It isn't necessarily the case that use of iconv requires an additional
      library. For that reason we shouldn't conditionalize iconv detection on
      libiconv.found.
      
      Fixes: 5285e593c33 (configure: Fixes ncursesw detection under msys2/mingw by convert them to meson)
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201014221939.196958-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5c53015a480b3fe137ebd8b3b584a595c65e8f21
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 06:09:27 2020 -0400
  
      build: add --enable/--disable-libudev
      
      Initially, libudev detection was bundled with --enable-mpath because
      qemu-pr-helper was the only user of libudev.  Recently however the USB
      U2F emulation has also started using libudev, so add a separate
      option.  This also allows 1) disabling libudev if desired for static
      builds and 2) for non-static builds, requiring libudev even if
      multipath support is undesirable.
      
      The multipath test is adjusted, because it is now possible to enter it
      with configurations that should fail, such as --static --enable-mpath
      --disable-libudev.
      
      Reported-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 09e93326e448ab43fa26a9e2d9cc20ecf951f32b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:28:11 2020 -0400
  
      build: replace ninjatool with ninja
      
      Now that the build is done entirely by Meson, there is no need
      to keep the Makefile conversion.  Instead, we can ask Ninja about
      the targets it exposes and forward them.
      
      The main advantages are, from smallest to largest:
      
      - reducing the possible namespace pollution within the Makefile
      
      - removal of a relatively large Python program
      
      - faster build because parsing Makefile.ninja is slower than
      parsing build.ninja; and faster build after Meson runs because
      we do not have to generate Makefile.ninja.
      
      - tracking of command lines, which provides more accurate rebuilds
      
      In addition the change removes the requirement for GNU make 3.82, which
      was annoying on Mac, and avoids bugs on Windows due to ninjatool not
      knowing how to convert Windows escapes to POSIX escapes.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
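
The forwarding idea above can be sketched as a thin Makefile. This is a simplified, hypothetical fragment, not QEMU's actual Makefile (which additionally asks `ninja -t targets` for the list of valid goals before forwarding them); it assumes Meson has already generated `build.ninja` in the current directory.

```make
# Hypothetical forwarding sketch -- not QEMU's actual Makefile.
NINJA ?= ninja

# Default goal: just run Ninja on its own build graph (build.ninja).
all: force
	$(NINJA)

# Catch-all: any other goal make receives is handed straight to Ninja,
# which knows the real dependency graph and command lines.
%: force
	$(NINJA) $@

force: ;
.PHONY: all force
```

Because Ninja tracks the exact command lines in its build log, rebuilds after a flag change are accurate without any Makefile.ninja regeneration step.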
  
  commit 2b8575bd5fbc8a8880e9ecfb1c7e7990feb1fea6
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 12:20:02 2020 -0400
  
      build: cleanups to Makefile
      
      Group similar rules, add comments to "else" and "endif" lines,
      detect too-old config-host.mak before messing things up.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 345d7053ca4a39b0496366f3c953ae2681570ce3
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:58:50 2020 -0400
  
      add ninja to dockerfiles, CI configurations and test VMs
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Acked-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f2f984a3b3bc8322df2efa3937bf11e8ea2bcaa5
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:12:37 2020 -0400
  
      dockerfiles: enable Centos 8 PowerTools
      
      ninja is included in the CentOS PowerTools repository.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1e6e616dc21a8117cbe36a7e9026221b566cdf56
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 08:45:42 2020 -0400
  
      configure: move QEMU_INCLUDES to meson
      
      Confusingly, QEMU_INCLUDES is not used by configure tests.  Moving
      it to meson.build ensures that Windows paths are specified instead of
      the msys paths like /c/Users/...
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 97d6efd0a3f3a08942de6c2aee5d2983c54ca84c
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:20:17 2020 -0400
  
      tests: add missing generated sources to testqapi
      
      Ninja notices them due to a different order in visiting the graph.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3bf4583580ab705de1beff6222e934239c3a0356
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:35:13 2020 -0400
  
      make: run shell with pipefail
      
      Without pipefail, it is possible to miss failures if the recipes
      include pipes.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
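
The failure mode described above is easy to demonstrate in a shell that supports pipefail (bash here; a minimal sketch, not taken from the QEMU tree):

```shell
#!/usr/bin/env bash
# By default a pipeline's exit status is that of its *last* command, so a
# failing producer hidden behind a pipe is silently ignored.
false | cat
echo "default:  $?"     # prints 0 -- the failure of `false` is masked

# With pipefail the pipeline reports failure if any component fails,
# which lets make notice broken recipes.
set -o pipefail
status=0
false | cat || status=$?
echo "pipefail: $status"   # prints 1 -- the failure propagates
```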
  
  commit 88da4b043b4f91a265947149b1cd6758c046a4bd
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 13 21:21:21 2020 +0200
  
      tests/Makefile.include: unbreak non-tcg builds
      
      Remove from check-block the requirement that all TARGET_DIRS are built.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e90df5eada4e6047548203d781bd61ddcc45d7b4
  Author: Greg Kurz <groug@kaod.org>
  Date:   Thu Oct 15 16:49:06 2020 +0200
  
      Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree
      
      Tools, e.g. emacs, usually expect the index files to be in the source tree.
      This is already the case when doing out-of-tree builds, but with in-tree
      builds they end up in the build directory.
      
      Force cscope, ctags and etags to put them in the source tree.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <160277334665.1754102.10921580280105870386.stgit@bahia.lan>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6ebd89cf9ca3f5a6948542c4522b9380b1e9539f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 03:20:45 2020 -0400
  
      submodules: bump meson to 0.55.3
      
      This adds some bugfixes, and allows MSYS2 to configure
      without "--ninja=ninja".
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-arm64-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
155993: tolerable ALL FAIL

flight 155993 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/155993/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64-xsm               6 xen-build               fail baseline untested


jobs:
 build-arm64-xsm                                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 22:43:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 22:43:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8850.23800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUdsC-0005UB-T8; Mon, 19 Oct 2020 22:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8850.23800; Mon, 19 Oct 2020 22:43:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUdsC-0005U4-QE; Mon, 19 Oct 2020 22:43:24 +0000
Received: by outflank-mailman (input) for mailman id 8850;
 Mon, 19 Oct 2020 22:43:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HaMS=D2=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1kUdsC-0005Tz-88
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 22:43:24 +0000
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cdb63341-1118-4a6c-80ed-b6453e630d9e;
 Mon, 19 Oct 2020 22:43:22 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id q9so1914904iow.6
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 15:43:22 -0700 (PDT)
Received: from ?IPv6:2607:fb90:7a46:92b5:dce5:8829:1020:d33a?
 ([2607:fb90:7a46:92b5:dce5:8829:1020:d33a])
 by smtp.gmail.com with ESMTPSA id b9sm1083408ilq.51.2020.10.19.15.43.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 19 Oct 2020 15:43:21 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=HaMS=D2=gmail.com=persaur@srs-us1.protection.inumbo.net>)
	id 1kUdsC-0005Tz-88
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 22:43:24 +0000
X-Inumbo-ID: cdb63341-1118-4a6c-80ed-b6453e630d9e
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cdb63341-1118-4a6c-80ed-b6453e630d9e;
	Mon, 19 Oct 2020 22:43:22 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id q9so1914904iow.6
        for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 15:43:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:date:message-id
         :references:cc:in-reply-to:to;
        bh=mxYceR9bLFp686VU0Upx9lU0iujDWt5Dv/uOnwECcMo=;
        b=DDDiaDKDFslmiyNR1q/FEVkm9rAoIYg2dEwPttACcX3YVQg8aopCES/6PogKl4x8/A
         khiRjptOcv0tJuEK1rP9uIZh8pvCmWMnlnmJBH1XYXQqQ8SNuABEMi10opp9L+Cho0p3
         zlap8/JIZNMDzfpNbUPnhq5Tr6XVh76xE03urffRkSB1/gmB2+mRq8whkrW6F+asuwE3
         alhOiMYCPNtRtvXt7dPvPA0UrXiSmTI1yNdI6hxjYtILw+g6EycJLBjSXBNYkVtZK5Gu
         VxfVIKuijpkTMpIs18Ua4p9VfqH7tq1vzD2xzJy07coKdN2NvKClIQUZ/90nYpXgaZQ7
         f18g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:date:message-id:references:cc:in-reply-to:to;
        bh=mxYceR9bLFp686VU0Upx9lU0iujDWt5Dv/uOnwECcMo=;
        b=LqHy4tzGNGY4yik9bLf22dhGIfMKxarnTndhrK8h6qrhptzOtSEiciuBCeWFY5yNkQ
         mIoS1ghHtzpbrAfgFEyiJHIgOWSaDSWVyfr2m94ikcpcoafwJWSczDshraGey+SNnVhn
         089iukQ5hdNzWS3qzOIsWFh+pWUmgywsAlwYnOa3dKPfpFfxj3oRRRrzGgvy3A5bZZNL
         +6aRB9NodatnQDM+Ry1gkwx0sAmeIUV2ifqi0F2kyBxJ7BKeq2jNEWCUFhbmq928OLJa
         kWnF7ZtfGQCmMGs2XqGfP3QIuwatL3nRlFk6ztCNsoovCUdtldcMAq91K2LKd8KpLlPg
         8fHg==
X-Gm-Message-State: AOAM531S8f09lLQH6URyKycsYBr8DgjeyPQtR1LcuET9d8Po4WrGfsvO
	ZWb27kmbtDsyn/gpGu0NnMI=
X-Google-Smtp-Source: ABdhPJyerB03H51qpTEmoM9XMfJDy0SqyXQtM3B5zxanvr+z8xrZFIQm0YiEZSo0/bPdMOhzBgPQIg==
X-Received: by 2002:a6b:7511:: with SMTP id l17mr1369099ioh.199.1603147402299;
        Mon, 19 Oct 2020 15:43:22 -0700 (PDT)
Received: from ?IPv6:2607:fb90:7a46:92b5:dce5:8829:1020:d33a? ([2607:fb90:7a46:92b5:dce5:8829:1020:d33a])
        by smtp.gmail.com with ESMTPSA id b9sm1083408ilq.51.2020.10.19.15.43.21
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Mon, 19 Oct 2020 15:43:21 -0700 (PDT)
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: [Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI flr/slot/bus reset with 'reset' SysFS attribute
Date: Mon, 19 Oct 2020 18:43:20 -0400
Message-Id: <6D51F096-C247-486B-B4A2-50F85855EA06@gmail.com>
References: <2d2693c9-f2a9-7914-7362-947a61c28acd@alstadheim.priv.no>
Cc: Wei Liu <wl@xen.org>,
 =?utf-8?Q?Pasi_K=C3=A4rkk=C3=A4inen?= <pasik@iki.fi>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Jan Beulich <jbeulich@suse.com>, bhelgaas@google.com,
 Roger Pau Monne <roger.pau@citrix.com>,
 =?utf-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
 Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Anthony Perard <anthony.perard@citrix.com>
In-Reply-To: <2d2693c9-f2a9-7914-7362-947a61c28acd@alstadheim.priv.no>
To: =?utf-8?Q?H=C3=A5kon_Alstadheim?= <hakon@alstadheim.priv.no>,
 George Dunlap <George.Dunlap@citrix.com>
X-Mailer: iPhone Mail (18A393)

On Oct 19, 2020, at 11:52, Håkon Alstadheim <hakon@alstadheim.priv.no> wrote:
> 
> 
> On 19.10.2020 17:16, Håkon Alstadheim wrote:
>> On 19.10.2020 13:00, George Dunlap wrote:
>>> 
>>>> On Jan 31, 2020, at 3:33 PM, Wei Liu <wl@xen.org> wrote:
>>>> 
>>>> On Fri, Jan 17, 2020 at 02:13:04PM -0500, Rich Persaud wrote:
>>>>> On Aug 26, 2019, at 17:08, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>>>>>> Hi,
>>>>>> 
>>>>>>> On Mon, Oct 08, 2018 at 10:32:45AM -0400, Boris Ostrovsky wrote:
>>>>>>>> On 10/3/18 11:51 AM, Pasi Kärkkäinen wrote:
>>>>>>>> On Wed, Sep 19, 2018 at 11:05:26AM +0200, Roger Pau Monné wrote:
>>>>>>>>> On Tue, Sep 18, 2018 at 02:09:53PM -0400, Boris Ostrovsky wrote:
>>>>>>>>>> On 9/18/18 5:32 AM, George Dunlap wrote:
>>>>>>>>>>>> On Sep 18, 2018, at 8:15 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>>>>>>>>>>>> Hi,
>>>>>>>>>>>>> On Mon, Sep 17, 2018 at 02:06:02PM -0400, Boris Ostrovsky wrote:
>>>>>>>>>>>>>> What about the toolstack changes? Have they been accepted? I vaguely
>>>>>>>>>>>>>> recall there was a discussion about those changes but don't remember how
>>>>>>>>>>>>>> it ended.
>>>>>>>>>>>>> I don't think toolstack/libxl patch has been applied yet either.
>>>>>>>>>>>>> "[PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS attribute":
>>>>>>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html
>>>>>>>>>>>>> "[PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' SysFS attribute":
>>>>>>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html
>>>>>>>>>>> Will this patch work for *BSD? Roger?
>>>>>>>>>> At least FreeBSD don't support pci-passthrough, so none of this works
>>>>>>>>>> ATM. There's no sysfs on BSD, so much of what's in libxl_pci.c will
>>>>>>>>>> have to be moved to libxl_linux.c when BSD support is added.
>>>>>>>>> Ok. That sounds like it's OK for the initial pci 'reset' implementation in xl/libxl to be linux-only..
>>>>>>>> Are these two patches still needed? ISTR they were written originally
>>>>>>>> to deal with guest trying to use device that was previously assigned to
>>>>>>>> another guest. But pcistub_put_pci_dev() calls
>>>>>>>> __pci_reset_function_locked() which first tries FLR, and it looks like
>>>>>>>> it was added relatively recently.
>>>>>>> Replying to an old thread.. I only now realized I forgot to reply to this message earlier.
>>>>>>> 
>>>>>>> afaik these patches are still needed. Håkon (CC'd) wrote to me in private that
>>>>>>> he gets a (dom0) Linux kernel crash if he doesn't have these patches applied.
>>>>>>> 
>>>>>>> 
>>>>>>> Here are the links to both the linux kernel and libxl patches:
>>>>>>> 
>>>>>>> 
>>>>>>> "[Xen-devel] [PATCH V3 0/2] Xen/PCIback: PCI reset using 'reset' SysFS attribute":
>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00659.html
>>>>>>> 
>>>>>>> [Note that PATCH V3 1/2 "Drivers/PCI: Export pcie_has_flr() interface" is already applied in upstream linux kernel, so it's not needed anymore]
>>>>>>> 
>>>>>>> "[Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI flr/slot/bus reset with 'reset' SysFS attribute":
>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00661.html
>>>>>>> 
>>>>>>> 
>>>>>>> "[Xen-devel] [PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS attribute":
>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html
>>>>>>> 
>>>>>>> "[Xen-devel] [PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' SysFS attribute":
>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html
>>>>>> [dropping Linux mailing lists]
>>>>>> 
>>>>>> What is required to get the Xen patches merged?  Rebasing against Xen
>>>>>> master?  OpenXT has been carrying a similar patch for many years and
>>>>>> we would like to move to an upstream implementation.  Xen users of PCI
>>>>>> passthrough would benefit from more reliable device reset.
>>>>> Rebase and resend?
>>>>> 
>>>>> Skimming that thread I think the major concern was backward
>>>>> compatibility. That seemed to have been addressed.
>>>>> 
>>>>> Unfortunately I don't have the time to dig into Linux to see if the
>>>>> claim there is true or not.
>>>>> 
>>>>> It would be helpful to write a concise paragraph to say why backward
>>>>> compatibility is not required.
>>> Just going through my old "make sure something happens with this" mails.  Did anything ever happen with this?  Who has the ball here / who is this stuck on?
>> 
>> We're waiting for "somebody" to testify that fixing this will not adversely affect anyone. I'm not qualified, but my strong belief is that since "reset" or "do_flr" in the linux kernel is not currently implemented/used in any official distribution, it should be OK.
>> 
>> Patches still work in current staging-4.14 btw.
>> 
> Just for the record, attached are the patches I am running on top of linux gentoo-sources-5.9.1 and xen-staging-4.14 respectively. (I am also running with the patch to mark populated reserved memory that contains ACPI tables as "ACPI NVS", not attached here).
> 
> <pci_brute_reset-home-hack.patch>
> <pci_brute_reset-home-hack-doc.patch>
> <pci-reset-min-egen.patch>

Is there any reason not to merge the Xen patch, while waiting for the Linux patch to be upstreamed?  Similar versions have been deployed in downstream production systems for years, we can at least de-dupe those Xen patches.

Do (can) we have an in-tree location to queue Xen-relevant Linux patches while they go through an upstreaming process that may last several (5+ here) years?

Rich
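
For context, the generic Linux 'reset' attribute discussed in this thread is driven from userspace roughly like this (a sketch with a hypothetical BDF address, not code from the patches; writing to the node triggers whichever reset method the kernel selected for the device, e.g. FLR, slot or bus reset, and requires root):

```shell
#!/usr/bin/env bash
# Hypothetical device address -- substitute the BDF of the device being
# passed through.
dev=0000:01:00.0
node=/sys/bus/pci/devices/$dev/reset

# The attribute only exists when the kernel knows a reset method for the
# device, so guard before writing.
if [ -w "$node" ]; then
    echo 1 > "$node"        # performs the reset
    echo "reset issued for $dev"
else
    echo "no usable reset method exposed for $dev"
fi
```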


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 23:06:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 23:06:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8857.23824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUeDu-0007K6-Rs; Mon, 19 Oct 2020 23:05:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8857.23824; Mon, 19 Oct 2020 23:05:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUeDu-0007Jz-OQ; Mon, 19 Oct 2020 23:05:50 +0000
Received: by outflank-mailman (input) for mailman id 8857;
 Mon, 19 Oct 2020 23:05:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yU5t=D2=ziepe.ca=jgg@srs-us1.protection.inumbo.net>)
 id 1kUeDt-0007Ju-PM
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 23:05:49 +0000
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e4781db-af63-4ff4-8dac-bb36411dfc46;
 Mon, 19 Oct 2020 23:05:49 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id k21so36627ioa.9
 for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 16:05:49 -0700 (PDT)
Received: from ziepe.ca
 (hlfxns017vw-156-34-48-30.dhcp-dynamic.fibreop.ns.bellaliant.net.
 [156.34.48.30])
 by smtp.gmail.com with ESMTPSA id u8sm7938ilm.36.2020.10.19.16.05.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 19 Oct 2020 16:05:47 -0700 (PDT)
Received: from jgg by mlx with local (Exim 4.94) (envelope-from <jgg@ziepe.ca>)
 id 1kUeDq-002hRf-LL; Mon, 19 Oct 2020 20:05:46 -0300
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=yU5t=D2=ziepe.ca=jgg@srs-us1.protection.inumbo.net>)
	id 1kUeDt-0007Ju-PM
	for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 23:05:49 +0000
X-Inumbo-ID: 0e4781db-af63-4ff4-8dac-bb36411dfc46
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0e4781db-af63-4ff4-8dac-bb36411dfc46;
	Mon, 19 Oct 2020 23:05:49 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id k21so36627ioa.9
        for <xen-devel@lists.xenproject.org>; Mon, 19 Oct 2020 16:05:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ziepe.ca; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=KZcwJitFojA7RhzeD/UU8gbzehCBdvf6g5ia0ZYCrq4=;
        b=R0THCPfeT+NjRv5n7wRuWr3+iQQVH5mYQugrcFEorv7jMlZOJpq4gWO8x2sltRZ1S3
         8+uXkfK+0xraFRPc7RLEyC+L1Eqn+lwfgcQ60rCu3Ir6T0iqCUlHxkXPI8IxQxljNihW
         MxA7dERE+Fo0B6yhfEPLGm6gbjuMrGvt0ee7i4ozPAa6C0OwTV1SJBaz+sj8rzyyiIix
         DQ1LhxNguLsVQ2r9xWcmCur9QDHoeimXQtC/UVpN+4Yl8O9ZbpYKUwlrKFZtzYHwjpgZ
         iC+kREvCZvwgmOBCIm7DmgxG6/6ncKrp6QDCnbxkp/qIzrhuMyauJsg53LWTp//HSslk
         z+3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=KZcwJitFojA7RhzeD/UU8gbzehCBdvf6g5ia0ZYCrq4=;
        b=KI5TO5uyyggyq3f4bqhArpazrWcb8PVEosCThRiFmDP3TVOW9MugCeIdbpgOm3moIr
         +K24Dx6kqisyVhoCFF377kdBfKTWtB0kMiBdCs3FSwOU0mbxzqtyqLBpKgL0IDnTDT7f
         KBWtkBgVc+zS9F2ARrXAZIeIC42pVWI1kU7IbdqsMHFoxjDxdGO9qn+GlpT4Y24MShz9
         b7HAd8x94EDcenqouuWwT0GvsuTFg2luH4Gb3qwmJDrnszILd8VWYFYHG/JUQR9oFtLq
         kkfNvuLzWCnyLH95yM22A5X4PXyjLcp7isOJ2+O4jV3KAmM7vie5yuJuJSKoSKKp9YUR
         ztlw==
X-Gm-Message-State: AOAM530/N4+gvwnryi5EAoyKbr91Io6ZQaC9X6gEPnmQo7y75aBWWnr7
	KCLiXwYYAzw5PNkKSfxYh3kh+g==
X-Google-Smtp-Source: ABdhPJwE/qhLAedndnNRaUrUDMs331Onaq8Iz+VDEVRJN+4h4B5ckC67pNXDnvS9MRF/DxLJjNlnIQ==
X-Received: by 2002:a6b:5019:: with SMTP id e25mr44377iob.123.1603148748578;
        Mon, 19 Oct 2020 16:05:48 -0700 (PDT)
Received: from ziepe.ca (hlfxns017vw-156-34-48-30.dhcp-dynamic.fibreop.ns.bellaliant.net. [156.34.48.30])
        by smtp.gmail.com with ESMTPSA id u8sm7938ilm.36.2020.10.19.16.05.47
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 19 Oct 2020 16:05:47 -0700 (PDT)
Received: from jgg by mlx with local (Exim 4.94)
	(envelope-from <jgg@ziepe.ca>)
	id 1kUeDq-002hRf-LL; Mon, 19 Oct 2020 20:05:46 -0300
Date: Mon, 19 Oct 2020 20:05:46 -0300
From: Jason Gunthorpe <jgg@ziepe.ca>
To: Nick Desaulniers <ndesaulniers@google.com>
Cc: Tom Rix <trix@redhat.com>, LKML <linux-kernel@vger.kernel.org>,
	linux-edac@vger.kernel.org, linux-acpi@vger.kernel.org,
	linux-pm@vger.kernel.org, xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	openipmi-developer@lists.sourceforge.net,
	"open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>,
	Linux ARM <linux-arm-kernel@lists.infradead.org>,
	linux-power@fi.rohmeurope.com, linux-gpio@vger.kernel.org,
	amd-gfx list <amd-gfx@lists.freedesktop.org>,
	dri-devel <dri-devel@lists.freedesktop.org>,
	nouveau@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	spice-devel@lists.freedesktop.org, linux-iio@vger.kernel.org,
	linux-amlogic@lists.infradead.org,
	industrypack-devel@lists.sourceforge.net,
	linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com,
	linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org,
	linux-can@vger.kernel.org,
	Network Development <netdev@vger.kernel.org>,
	intel-wired-lan@lists.osuosl.org, ath10k@lists.infradead.org,
	linux-wireless <linux-wireless@vger.kernel.org>,
	linux-stm32@st-md-mailman.stormreply.com, linux-nfc@lists.01.org,
	linux-nvdimm <linux-nvdimm@lists.01.org>, linux-pci@vger.kernel.org,
	linux-samsung-soc@vger.kernel.org,
	platform-driver-x86@vger.kernel.org, patches@opensource.cirrus.com,
	storagedev@microchip.com, devel@driverdev.osuosl.org,
	linux-serial@vger.kernel.org, linux-usb@vger.kernel.org,
	usb-storage@lists.one-eyed-alien.net,
	linux-watchdog@vger.kernel.org, ocfs2-devel@oss.oracle.com,
	bpf <bpf@vger.kernel.org>, linux-integrity@vger.kernel.org,
	linux-security-module@vger.kernel.org, keyrings@vger.kernel.org,
	alsa-devel@alsa-project.org,
	clang-built-linux <clang-built-linux@googlegroups.com>,
	Greg KH <gregkh@linuxfoundation.org>,
	George Burgess <gbiv@google.com>
Subject: Re: [RFC] treewide: cleanup unreachable breaks
Message-ID: <20201019230546.GH36674@ziepe.ca>
References: <20201017160928.12698-1-trix@redhat.com>
 <20201018054332.GB593954@kroah.com>
 <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>

On Mon, Oct 19, 2020 at 12:42:15PM -0700, Nick Desaulniers wrote:
> On Sat, Oct 17, 2020 at 10:43 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> >
> > On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> > > From: Tom Rix <trix@redhat.com>
> > >
> > > This is an upcoming change to clean up a new warning treewide.
> > > I am wondering if the change could be one mega patch (see below) or
> > > normal patch per file about 100 patches or somewhere half way by collecting
> > > early acks.
> >
> > Please break it up into one-patch-per-subsystem, like normal, and get it
> > merged that way.
> >
> > Sending us a patch, without even a diffstat to review, isn't going to
> > get you very far...
> 
> Tom,
> If you're able to automate this cleanup, I suggest checking in a
> script that can be run on a directory.  Then for each subsystem you
> can say in your commit "I ran scripts/fix_whatever.py on this subdir."
>  Then others can help you drive the tree wide cleanup.  Then we can
> enable -Wunreachable-code-break either by default, or W=2 right now
> might be a good idea.

I remember using clang-modernize in the past to fix issues very
similar to this, if clang machinery can generate the warning, can't
something like clang-tidy directly generate the patch?

You can send me a patch for drivers/infiniband/* as well

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Mon Oct 19 23:37:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 23:37:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8860.23836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUeiQ-0001VG-9F; Mon, 19 Oct 2020 23:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8860.23836; Mon, 19 Oct 2020 23:37:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUeiQ-0001V9-5g; Mon, 19 Oct 2020 23:37:22 +0000
Received: by outflank-mailman (input) for mailman id 8860;
 Mon, 19 Oct 2020 23:37:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wIFX=D2=protonmail.com=dylangerdaly@srs-us1.protection.inumbo.net>)
 id 1kUeiO-0001V4-E4
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 23:37:20 +0000
Received: from mail1.protonmail.ch (unknown [185.70.40.18])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21374878-229a-480d-9843-3c83272f1226;
 Mon, 19 Oct 2020 23:37:18 +0000 (UTC)
X-Inumbo-ID: 21374878-229a-480d-9843-3c83272f1226
Date: Mon, 19 Oct 2020 23:37:06 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=protonmail.com;
	s=protonmail; t=1603150637;
	bh=MEid225IZ3x8VHvybxTNCCYhY9RSdgTQX17KRNNbzcI=;
	h=Date:To:From:Cc:Reply-To:Subject:In-Reply-To:References:From;
	b=c8RMxPZRjG9aJydHR5NMAFRJXM56b9zrR9jumvoqz/XGCdTZMP1zO6XzeJxzqTZh2
	 tig0Svx1nTAGzgWxLf8unfZeUYlgHhjp/7pSOMMD6XbWY8W680CvOe2zNyzuHrxBDN
	 irdiJ4hFrsoXR3zg3sEoDb19neu7gc/cK5Me8rl4=
To: Andrew Cooper <andrew.cooper3@citrix.com>
From: Dylanger Daly <dylangerdaly@protonmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Reply-To: Dylanger Daly <dylangerdaly@protonmail.com>
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
Message-ID: <3pKjdPYCiRimYjqHQP0xd_vqhoTOJqthTXOrY_rLeNvnQEpIF24gXDKgRhmr95JfARJzbVJVbfTrrJeiovGVHGbV0QBSZ2jez2Y_wt6db7g=@protonmail.com>
In-Reply-To: <2cc5da3e-0ad0-4647-f1ca-190788c2910b@citrix.com>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com> <2cc5da3e-0ad0-4647-f1ca-190788c2910b@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/alternative;
 boundary="b1_nv5ozyXJOfrNDuJXtqBmfNxxUi5U7IULQTexHTkE"
X-Spam-Status: No, score=-1.2 required=10.0 tests=ALL_TRUSTED,DKIM_SIGNED,
	DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,FREEMAIL_FROM,HTML_MESSAGE
	shortcircuit=no autolearn=disabled version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on
	mailout.protonmail.ch

This is a multi-part message in MIME format.

--b1_nv5ozyXJOfrNDuJXtqBmfNxxUi5U7IULQTexHTkE
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: base64

PiBEb2VzIGJvb3Rpbmcgd2l0aCBzY2hlZD1jcmVkaXQgYWx0ZXIgdGhlIHN5bXB0b21zPwoKSW5k
ZWVkIEkndmUgdHJpZWQgdGhpcywgdGhlIHJlc3VsdCBpcyBhbiBvYnNlcnZhYmxlIGRlbGF5LCB1
bnVzYWJsZSBwZXJmb3JtYW5jZSwgY3JlZGl0MiBzZWVtcyB0byBiZSB0aGUgb25seSB1c2FibGUg
c2NoZWR1bGVyLCBJJ20gY2VydGFpbiBpdCBoYXMgc29tZXRoaW5nIHRvIGRvIHdpdGggU01UIGJl
aW5nIGRpc2FibGVkLCByZXN1bHRpbmcgaW4gOCBjb3JlcyBpbnN0ZWFkIG9mIHRoZSBleHBlY3Rl
ZCAxNiB0aHJlYWRzLgoKPiBBcyBhIHJhbmRvbSB0aG91Z2h0LCBoYXZlIHlvdSB0cmllZCBkaXNh
YmxpbmcgdXNlIG9mIChkZWVwKSBDLXN0YXRlcz8KClllYWggSSd2ZSB0cmllZCBib3RoIGBwcm9j
ZXNzb3IubWF4X2NzdGF0ZT0xfDVgCkkndmUgYWxzbyB0cmllZCBhZGRpbmcgYDB4QzAwMTAyOTJg
IGFuZCBgMHhDMDAxMDI5NmAgTVNScyBpbnRvIGFyY2gveDg2L21zci5jIChndWVzdF97cmRtc3Is
d3Jtc3J9KQoKVGhlIGFib3ZlIGFsbG93ZWQgbWUgdG8gdXNlIGh0dHBzOi8vZ2l0aHViLmNvbS9y
NG0wbi9aZW5TdGF0ZXMtTGludXgvYmxvYi9tYXN0ZXIvemVuc3RhdGVzLnB5CgpBZnRlciByZW1v
dmluZyBgZG9tMF9tYXhfdmNwdXM9MSBkb20wX3ZjcHVzX3BpbmAgZnJvbSBYZW4ncyBDTURMSU5F
LCBhbmQgZGlzYWJsaW5nIEM2IEkgb2JzZXJ2ZWQgbm8gY2hhbmdlLgoKPiBUaGlzIHdhbnRzIHJl
cG9ydGluZyAod2l0aCBzdWZmaWNpZW50IGRhdGEsIGkuZS4gYXQgbGVhc3QgYSBzZXJpYWwgbG9n
KQoKSG0sIEknbSBub3Qgc3VyZSB0aGVyZSdzIFVBUlQgb24gdGhpcyBMYXB0b3AsIGNhbiBJIHNh
dmUgdGhlIGJvb3QgbG9nIHNvbWV3aGVyZT8K4oCQ4oCQ4oCQ4oCQ4oCQ4oCQ4oCQIE9yaWdpbmFs
IE1lc3NhZ2Ug4oCQ4oCQ4oCQ4oCQ4oCQ4oCQ4oCQCk9uIFRodXJzZGF5LCBPY3RvYmVyIDE1dGgs
IDIwMjAgYXQgMTA6NTcgUE0sIEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5j
b20+IHdyb3RlOgoKPiBPbiAxNS8xMC8yMDIwIDAxOjM4LCBEeWxhbmdlciBEYWx5IHdyb3RlOgo+
Cj4+IEhpIEFsbCwKPj4KPj4gSSdtIGN1cnJlbnRseSB1c2luZyBYZW4gNC4xNCAoUXViZXMgNC4x
IE9TKSBvbiBhIFJ5emVuIDcgNDc1MFUgUFJPLCBieSBkZWZhdWx0IEknbGwgZXhwZXJpZW5jZSBz
b2Z0bG9ja3Mgd2hlcmUgdGhlIG1vdXNlIGZvciBleGFtcGxlIHdpbGwgam9sdCBmcm9tIHRpbWUg
dG8gdGltZSwgaW4gdGhpcyBzdGF0ZSBpdCdzIG5vdCB1c2FibGUuCj4+Cj4+IEFkZGluZyBgZG9t
MF9tYXhfdmNwdXM9MSBkb20wX3ZjcHVzX3BpbmAgdG8gWGVuJ3MgQ01ETElORSByZXN1bHRzIGlu
IG5vIG1vcmUgam9sdGluZyBob3dldmVyIHBlcmZvcm1hbmNlIGlzbid0IHdoYXQgaXQgc2hvdWxk
IGJlIG9uIGFuIDggY29yZSBDUFUsIHNvZnRsb2NrcyBhcmUgc3RpbGwgYSBwcm9ibGVtIHdpdGhp
biBkb21VJ3MsIGFueSBzb3J0IG9mIFVJIGFuaW1hdGlvbiBmb3IgZXhhbXBsZS4KPj4KPj4gUmV2
ZXJ0aW5nIFt0aGlzIGNvbW1pdCAoOGUyYWE3NmRjMTY3MGU4MmVhYTE1NjgzMzUzODUzYmM2NmJm
NTRmYyldKGh0dHBzOi8vZ2l0aHViLmNvbS94ZW4tcHJvamVjdC94ZW4vY29tbWl0LzhlMmFhNzZk
YzE2NzBlODJlYWExNTY4MzM1Mzg1M2JjNjZiZjU0ZmMpIHJlc3VsdHMgaW4gZXZlbiB3b3JzZSBw
ZXJmb3JtYW5jZSB3aXRoIG9yIHdpdGhvdXQgdGhlIGFib3ZlIGNoYW5nZXMgdG8gQ01ETElORSwg
YW5kIGl0J3Mgbm90IHVzYWJsZSBhdCBhbGwuCj4+Cj4+IERvZXMgYW55b25lIGhhdmUgYW55IHBv
aW50ZXJzPwo+Cj4gRG9lcyBib290aW5nIHdpdGggc2NoZWQ9Y3JlZGl0IGFsdGVyIHRoZSBzeW1w
dG9tcz8KPgo+IH5BbmRyZXc=

--b1_nv5ozyXJOfrNDuJXtqBmfNxxUi5U7IULQTexHTkE
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: base64

PGRpdj4mZ3Q7Jm5ic3A7PHNwYW4gY2xhc3M9ImhpZ2hsaWdodCIgc3R5bGU9ImJhY2tncm91bmQt
Y29sb3I6cmdiKDI1NSwgMjU1LCAyNTUpIj48c3BhbiBjbGFzcz0iY29sb3VyIiBzdHlsZT0iY29s
b3I6cmdiKDM4LCA0MiwgNTEpIj48c3BhbiBjbGFzcz0iZm9udCIgc3R5bGU9ImZvbnQtZmFtaWx5
Oi1hcHBsZS1zeXN0ZW0sIEJsaW5rTWFjU3lzdGVtRm9udCwgJnF1b3Q7U2Vnb2UgVUkmcXVvdDss
IFJvYm90bywgT3h5Z2VuLVNhbnMsIFVidW50dSwgQ2FudGFyZWxsLCAmcXVvdDtIZWx2ZXRpY2Eg
TmV1ZSZxdW90Oywgc2Fucy1zZXJpZiI+PHNwYW4gY2xhc3M9InNpemUiIHN0eWxlPSJmb250LXNp
emU6MTRweCI+RG9lcyBib290aW5nIHdpdGggc2NoZWQ9Y3JlZGl0IGFsdGVyIHRoZSBzeW1wdG9t
cz88L3NwYW4+PC9zcGFuPjwvc3Bhbj48L3NwYW4+PGJyPjwvZGl2PjxkaXY+PGJyPjwvZGl2Pjxk
aXY+SW5kZWVkIEkndmUgdHJpZWQgdGhpcywgdGhlIHJlc3VsdCBpcyBhbiBvYnNlcnZhYmxlIGRl
bGF5LCB1bnVzYWJsZSBwZXJmb3JtYW5jZSwgY3JlZGl0MiBzZWVtcyB0byBiZSB0aGUgb25seSB1
c2FibGUgc2NoZWR1bGVyLCBJJ20gY2VydGFpbiBpdCBoYXMgc29tZXRoaW5nIHRvIGRvIHdpdGgg
U01UIGJlaW5nIGRpc2FibGVkLCByZXN1bHRpbmcgaW4gOCBjb3JlcyBpbnN0ZWFkIG9mIHRoZSBl
eHBlY3RlZCAxNiB0aHJlYWRzLjxicj48L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2PiZndDsmbmJz
cDtBcyBhIHJhbmRvbSB0aG91Z2h0LCBoYXZlIHlvdSB0cmllZCBkaXNhYmxpbmcgdXNlIG9mIChk
ZWVwKSBDLXN0YXRlcz88YnI+PC9kaXY+PGRpdj48YnI+PC9kaXY+PGRpdj5ZZWFoIEkndmUgdHJp
ZWQgYm90aCBgcHJvY2Vzc29yLm1heF9jc3RhdGU9MXw1YDxicj48L2Rpdj48ZGl2PkkndmUgYWxz
byB0cmllZCBhZGRpbmcgYDB4QzAwMTAyOTJgIGFuZCBgMHhDMDAxMDI5NmAgTVNScyBpbnRvJm5i
c3A7YXJjaC94ODYvbXNyLmMgKGd1ZXN0X3tyZG1zcix3cm1zcn0pPGJyPjwvZGl2PjxkaXY+PGJy
PjwvZGl2PjxkaXY+VGhlIGFib3ZlIGFsbG93ZWQgbWUgdG8gdXNlJm5ic3A7PGEgaHJlZj0iaHR0
cHM6Ly9naXRodWIuY29tL3I0bTBuL1plblN0YXRlcy1MaW51eC9ibG9iL21hc3Rlci96ZW5zdGF0
ZXMucHkiPmh0dHBzOi8vZ2l0aHViLmNvbS9yNG0wbi9aZW5TdGF0ZXMtTGludXgvYmxvYi9tYXN0
ZXIvemVuc3RhdGVzLnB5PC9hPjxicj48L2Rpdj48ZGl2Pjxicj48L2Rpdj48ZGl2PkFmdGVyIHJl
bW92aW5nJm5ic3A7PHNwYW4gY2xhc3M9ImhpZ2hsaWdodCIgc3R5bGU9ImJhY2tncm91bmQtY29s
b3I6cmdiKDI1NSwgMjU1LCAyNTUpIj48c3BhbiBjbGFzcz0iY29sb3VyIiBzdHlsZT0iY29sb3I6
cmdiKDM4LCA0MiwgNTEpIj48c3BhbiBjbGFzcz0iZm9udCIgc3R5bGU9ImZvbnQtZmFtaWx5Oi1h
cHBsZS1zeXN0ZW0sIEJsaW5rTWFjU3lzdGVtRm9udCwgJnF1b3Q7U2Vnb2UgVUkmcXVvdDssIFJv
Ym90bywgT3h5Z2VuLVNhbnMsIFVidW50dSwgQ2FudGFyZWxsLCAmcXVvdDtIZWx2ZXRpY2EgTmV1
ZSZxdW90Oywgc2Fucy1zZXJpZiI+PHNwYW4gY2xhc3M9InNpemUiIHN0eWxlPSJmb250LXNpemU6
MTRweCI+YGRvbTBfbWF4X3ZjcHVzPTEgZG9tMF92Y3B1c19waW5gIGZyb20gWGVuJ3MgQ01ETElO
RSwgYW5kIGRpc2FibGluZyBDNiBJIG9ic2VydmVkIG5vIGNoYW5nZS48L3NwYW4+PC9zcGFuPjwv
c3Bhbj48L3NwYW4+PGJyPjwvZGl2PjxkaXY+PGJyPjwvZGl2PjxkaXY+PHNwYW4gY2xhc3M9Imhp
Z2hsaWdodCIgc3R5bGU9ImJhY2tncm91bmQtY29sb3I6cmdiKDI1NSwgMjU1LCAyNTUpIj48c3Bh
biBjbGFzcz0iY29sb3VyIiBzdHlsZT0iY29sb3I6cmdiKDM4LCA0MiwgNTEpIj48c3BhbiBjbGFz
cz0iZm9udCIgc3R5bGU9ImZvbnQtZmFtaWx5Oi1hcHBsZS1zeXN0ZW0sIEJsaW5rTWFjU3lzdGVt
Rm9udCwgJnF1b3Q7U2Vnb2UgVUkmcXVvdDssIFJvYm90bywgT3h5Z2VuLVNhbnMsIFVidW50dSwg
Q2FudGFyZWxsLCAmcXVvdDtIZWx2ZXRpY2EgTmV1ZSZxdW90Oywgc2Fucy1zZXJpZiI+PHNwYW4g
Y2xhc3M9InNpemUiIHN0eWxlPSJmb250LXNpemU6MTRweCI+Jmd0OyZuYnNwOzwvc3Bhbj48L3Nw
YW4+PC9zcGFuPjwvc3Bhbj5UaGlzIHdhbnRzIHJlcG9ydGluZyAod2l0aCBzdWZmaWNpZW50IGRh
dGEsIGkuZS4gYXQgbGVhc3QgYSBzZXJpYWwgbG9nKTwvZGl2PjxkaXY+PGJyPjwvZGl2PjxkaXY+
SG0sIEknbSBub3Qgc3VyZSB0aGVyZSdzIFVBUlQgb24gdGhpcyBMYXB0b3AsIGNhbiBJIHNhdmUg
dGhlIGJvb3QgbG9nIHNvbWV3aGVyZT88YnI+PC9kaXY+PGRpdiBjbGFzcz0icHJvdG9ubWFpbF9x
dW90ZSI+DQogICAgICAgIOKAkOKAkOKAkOKAkOKAkOKAkOKAkCBPcmlnaW5hbCBNZXNzYWdlIOKA
kOKAkOKAkOKAkOKAkOKAkOKAkDxicj4NCiAgICAgICAgT24gVGh1cnNkYXksIE9jdG9iZXIgMTV0
aCwgMjAyMCBhdCAxMDo1NyBQTSwgQW5kcmV3IENvb3BlciAmbHQ7YW5kcmV3LmNvb3BlcjNAY2l0
cml4LmNvbSZndDsgd3JvdGU6PGJyPg0KICAgICAgICA8YmxvY2txdW90ZSBjbGFzcz0icHJvdG9u
bWFpbF9xdW90ZSIgdHlwZT0iY2l0ZSI+DQogICAgICAgICAgICA8ZGl2IGNsYXNzPSJtb3otY2l0
ZS1wcmVmaXgiPk9uIDE1LzEwLzIwMjAgMDE6MzgsIER5bGFuZ2VyIERhbHkNCiAgICAgIHdyb3Rl
Ojxicj4NCiAgICA8L2Rpdj4NCiAgICA8YmxvY2txdW90ZSB0eXBlPSJjaXRlIj4NCg0KICAgICAg
PGRpdj5IaSBBbGwsPGJyPg0KICAgICAgPC9kaXY+DQogICAgICA8ZGl2Pjxicj4NCiAgICAgIDwv
ZGl2Pg0KICAgICAgPGRpdj5JJ20gY3VycmVudGx5IHVzaW5nIFhlbiA0LjE0IChRdWJlcyA0LjEg
T1MpIG9uIGEgUnl6ZW4NCiAgICAgICAgNyZuYnNwOzQ3NTBVIFBSTywgYnkgZGVmYXVsdCBJJ2xs
IGV4cGVyaWVuY2Ugc29mdGxvY2tzIHdoZXJlIHRoZQ0KICAgICAgICBtb3VzZSBmb3IgZXhhbXBs
ZSB3aWxsIGpvbHQgZnJvbSB0aW1lIHRvIHRpbWUsIGluIHRoaXMgc3RhdGUNCiAgICAgICAgaXQn
cyBub3QgdXNhYmxlLjxicj4NCiAgICAgIDwvZGl2Pg0KICAgICAgPGRpdj48YnI+DQogICAgICA8
L2Rpdj4NCiAgICAgIDxkaXY+QWRkaW5nIGBkb20wX21heF92Y3B1cz0xIGRvbTBfdmNwdXNfcGlu
YCB0byBYZW4ncyBDTURMSU5FDQogICAgICAgIHJlc3VsdHMgaW4gbm8gbW9yZSBqb2x0aW5nIGhv
d2V2ZXIgcGVyZm9ybWFuY2UgaXNuJ3Qgd2hhdCBpdA0KICAgICAgICBzaG91bGQgYmUgb24gYW4g
OCBjb3JlIENQVSwgc29mdGxvY2tzIGFyZSBzdGlsbCBhIHByb2JsZW0gd2l0aGluDQogICAgICAg
IGRvbVUncywgYW55IHNvcnQgb2YgVUkgYW5pbWF0aW9uIGZvciBleGFtcGxlLjxicj4NCiAgICAg
IDwvZGl2Pg0KICAgICAgPGRpdj48YnI+DQogICAgICA8L2Rpdj4NCiAgICAgIDxkaXY+UmV2ZXJ0
aW5nJm5ic3A7PGEgaHJlZj0iaHR0cHM6Ly9naXRodWIuY29tL3hlbi1wcm9qZWN0L3hlbi9jb21t
aXQvOGUyYWE3NmRjMTY3MGU4MmVhYTE1NjgzMzUzODUzYmM2NmJmNTRmYyIgdGFyZ2V0PSJfYmxh
bmsiIHRpdGxlPSJodHRwczovL2dpdGh1Yi5jb20veGVuLXByb2plY3QveGVuL2NvbW1pdC84ZTJh
YTc2ZGMxNjcwZTgyZWFhMTU2ODMzNTM4NTNiYzY2YmY1NGZjIiByZWw9Im5vcmVmZXJyZXIgbm9m
b2xsb3cgbm9vcGVuZXIiPnRoaXMgY29tbWl0DQogICAgICAgICAgKDhlMmFhNzZkYzE2NzBlODJl
YWExNTY4MzM1Mzg1M2JjNjZiZjU0ZmMpPC9hPiZuYnNwO3Jlc3VsdHMgaW4gZXZlbg0KICAgICAg
ICB3b3JzZSBwZXJmb3JtYW5jZSB3aXRoIG9yIHdpdGhvdXQgdGhlIGFib3ZlIGNoYW5nZXMgdG8g
Q01ETElORSwNCiAgICAgICAgYW5kIGl0J3Mgbm90IHVzYWJsZSBhdCBhbGwuPGJyPg0KICAgICAg
PC9kaXY+DQogICAgICA8ZGl2Pjxicj4NCiAgICAgIDwvZGl2Pg0KICAgICAgPGRpdj5Eb2VzIGFu
eW9uZSBoYXZlIGFueSBwb2ludGVycz88YnI+DQogICAgICA8L2Rpdj4NCiAgICA8L2Jsb2NrcXVv
dGU+DQogICAgPGJyPg0KICAgIERvZXMgYm9vdGluZyB3aXRoIHNjaGVkPWNyZWRpdCBhbHRlciB0
aGUgc3ltcHRvbXM/PGJyPg0KICAgIDxicj4NCiAgICB+QW5kcmV3PGJyPg0KDQoNCg0KICAgICAg
ICA8L2Jsb2NrcXVvdGU+PGJyPg0KICAgIDwvZGl2Pg==


--b1_nv5ozyXJOfrNDuJXtqBmfNxxUi5U7IULQTexHTkE--



From xen-devel-bounces@lists.xenproject.org Mon Oct 19 23:50:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 19 Oct 2020 23:50:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8867.23853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUeuX-0002Wu-Et; Mon, 19 Oct 2020 23:49:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8867.23853; Mon, 19 Oct 2020 23:49:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUeuX-0002Wn-Bu; Mon, 19 Oct 2020 23:49:53 +0000
Received: by outflank-mailman (input) for mailman id 8867;
 Mon, 19 Oct 2020 23:49:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uBDh=D2=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kUeuW-0002W4-00
 for xen-devel@lists.xenproject.org; Mon, 19 Oct 2020 23:49:52 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 151c5437-5ed4-4757-93bd-c745c80acde2;
 Mon, 19 Oct 2020 23:49:50 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09JNeWmu008892;
 Mon, 19 Oct 2020 23:49:25 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2120.oracle.com with ESMTP id 347s8mr587-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 19 Oct 2020 23:49:25 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09JNdpgP184956;
 Mon, 19 Oct 2020 23:49:24 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3020.oracle.com with ESMTP id 348acq34kk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 19 Oct 2020 23:49:24 +0000
Received: from abhmp0016.oracle.com (abhmp0016.oracle.com [141.146.116.22])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 09JNnFrC018030;
 Mon, 19 Oct 2020 23:49:15 GMT
Received: from [10.39.233.101] (/10.39.233.101)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 19 Oct 2020 16:49:14 -0700
X-Inumbo-ID: 151c5437-5ed4-4757-93bd-c745c80acde2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=bhZdIZ6uyaAnD5Lu5Uhke8liuugcYbL/KEuYLGPYBCw=;
 b=uOhEmuToYeCEFCRlRVs3zOCo29etlOg6bv8a389akvmhTZ7RlMCAgWL76XFroG40IpCF
 1X7V2UD2l/1zSoY6LB1bdV2K04ox9P8SSUzvIE8KwBU4UpZZCYmDCZ0wA9MicERc0wN7
 mN/VW0GN0i/wk56xsCKVdREgz3I/x32V/KixWQRR3/HJomhjZw4VjIfUHUG3CHc+FQBc
 QszoDPh2C5grHbcpQuXfDnEEFt9BRJhjW8f/0xgTGXnyqPl3evhpKyE+oaW/Ucz5pPCx
 /nTmNoPCn2P3GF92zn5veNuIKSUECCmBJ/NhBQ3AhaKi9hcven1z9jGx45E6T5mm0bB4 RA== 
Subject: Re: [Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI
 flr/slot/bus reset with 'reset' SysFS attribute
To: Rich Persaud <persaur@gmail.com>,
        =?UTF-8?Q?H=c3=a5kon_Alstadheim?= <hakon@alstadheim.priv.no>,
        George Dunlap <George.Dunlap@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?B?UGFzaSBLw6Rya2vDpGluZW4=?=
 <pasik@iki.fi>,
        xen-devel <xen-devel@lists.xenproject.org>,
        Juergen Gross <jgross@suse.com>,
        Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
        Jan Beulich <jbeulich@suse.com>, bhelgaas@google.com,
        Roger Pau Monne <roger.pau@citrix.com>,
        =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>,
        Jason Andryuk <jandryuk@gmail.com>,
        Andrew Cooper <Andrew.Cooper3@citrix.com>,
        Ian Jackson <Ian.Jackson@citrix.com>,
        Paul Durrant <pdurrant@amazon.com>,
        Anthony Perard <anthony.perard@citrix.com>
References: <2d2693c9-f2a9-7914-7362-947a61c28acd@alstadheim.priv.no>
 <6D51F096-C247-486B-B4A2-50F85855EA06@gmail.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <1c0c3e2e-51c0-1c94-d5fc-cfcf17f61236@oracle.com>
Date: Mon, 19 Oct 2020 19:49:11 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <6D51F096-C247-486B-B4A2-50F85855EA06@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9779 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 bulkscore=0 adultscore=0
 mlxscore=0 malwarescore=0 suspectscore=0 spamscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010190160
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9779 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 suspectscore=0
 lowpriorityscore=0 mlxlogscore=999 priorityscore=1501 spamscore=0
 phishscore=0 clxscore=1011 bulkscore=0 impostorscore=0 adultscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010190160


On 10/19/20 6:43 PM, Rich Persaud wrote:
> On Oct 19, 2020, at 11:52, Håkon Alstadheim <hakon@alstadheim.priv.no> wrote:
>>
>> Den 19.10.2020 17:16, skrev Håkon Alstadheim:
>>> Den 19.10.2020 13:00, skrev George Dunlap:
>>>>> On Jan 31, 2020, at 3:33 PM, Wei Liu <wl@xen.org> wrote:
>>>>>
>>>>> On Fri, Jan 17, 2020 at 02:13:04PM -0500, Rich Persaud wrote:
>>>>>> On Aug 26, 2019, at 17:08, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>>> On Mon, Oct 08, 2018 at 10:32:45AM -0400, Boris Ostrovsky wrote:
>>>>>>>>> On 10/3/18 11:51 AM, Pasi Kärkkäinen wrote:
>>>>>>>>> On Wed, Sep 19, 2018 at 11:05:26AM +0200, Roger Pau Monné wrote:
>>>>>>>>>> On Tue, Sep 18, 2018 at 02:09:53PM -0400, Boris Ostrovsky wrote:
>>>>>>>>>>> On 9/18/18 5:32 AM, George Dunlap wrote:
>>>>>>>>>>>>> On Sep 18, 2018, at 8:15 AM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>> On Mon, Sep 17, 2018 at 02:06:02PM -0400, Boris Ostrovsky wrote:
>>>>>>>>>>>>>>> What about the toolstack changes? Have they been accepted? I vaguely
>>>>>>>>>>>>>>> recall there was a discussion about those changes but don't remember how
>>>>>>>>>>>>>>> it ended.
>>>>>>>>>>>>>> I don't think toolstack/libxl patch has been applied yet either.
>>>>>>>>>>>>>> "[PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS attribute":
>>>>>>>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html 
>>>>>>>>>>>>>> "[PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' SysFS attribute":
>>>>>>>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html 
>>>>>>>>>>>> Will this patch work for *BSD? Roger?
>>>>>>>>>>> At least FreeBSD doesn't support PCI passthrough, so none of this works
>>>>>>>>>>> ATM. There's no sysfs on BSD, so much of what's in libxl_pci.c will
>>>>>>>>>>> have to be moved to libxl_linux.c when BSD support is added.
>>>>>>>>>> Ok. That sounds like it's OK for the initial PCI 'reset' implementation in xl/libxl to be Linux-only.
>>>>>>>>> Are these two patches still needed? ISTR they were written originally
>>>>>>>>> to deal with guest trying to use device that was previously assigned to
>>>>>>>>> another guest. But pcistub_put_pci_dev() calls
>>>>>>>>> __pci_reset_function_locked() which first tries FLR, and it looks like
>>>>>>>>> it was added relatively recently.
>>>>>>>> Replying to an old thread.. I only now realized I forgot to reply to this message earlier.
>>>>>>>>
>>>>>>>> afaik these patches are still needed. Håkon (CC'd) wrote to me in private that
>>>>>>>> he gets a (dom0) Linux kernel crash if he doesn't have these patches applied.


If this is still crashing I'd be curious to see the splat.


>>>>>>>>
>>>>>>>>
>>>>>>>> Here are the links to both the linux kernel and libxl patches:
>>>>>>>>
>>>>>>>>
>>>>>>>> "[Xen-devel] [PATCH V3 0/2] Xen/PCIback: PCI reset using 'reset' SysFS attribute":
>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00659.html
>>>>>>>>
>>>>>>>> [Note that PATCH V3 1/2 "Drivers/PCI: Export pcie_has_flr() interface" is already applied in upstream linux kernel, so it's not needed anymore]
>>>>>>>>
>>>>>>>> "[Xen-devel] [PATCH V3 2/2] Xen/PCIback: Implement PCI flr/slot/bus reset with 'reset' SysFS attribute":
>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00661.html
>>>>>>>>
>>>>>>>>
>>>>>>>> "[Xen-devel] [PATCH V1 0/1] Xen/Tools: PCI reset using 'reset' SysFS attribute":
>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00664.html
>>>>>>>>
>>>>>>>> "[Xen-devel] [PATCH V1 1/1] Xen/libxl: Perform PCI reset using 'reset' SysFS attribute":
>>>>>>>> https://lists.xen.org/archives/html/xen-devel/2017-12/msg00663.html
>>>>>>> [dropping Linux mailing lists]
>>>>>>>
>>>>>>> What is required to get the Xen patches merged?  Rebasing against Xen
>>>>>>> master?  OpenXT has been carrying a similar patch for many years and
>>>>>>> we would like to move to an upstream implementation.  Xen users of PCI
>>>>>>> passthrough would benefit from more reliable device reset.
>>>>>> Rebase and resend?


If you are going to resend, then please address Jan's comments from https://lists.xen.org/archives/html/xen-devel/2017-12/msg00695.html (although I am not sure about the usefulness of the last one).


>>>>>>
>>>>>> Skimming that thread I think the major concern was backward
>>>>>> compatibility. That seemed to have been addressed.
>>>>>>
>>>>>> Unfortunately I don't have the time to dig into Linux to see if the
>>>>>> claim there is true or not.
>>>>>>
>>>>>> It would be helpful to write a concise paragraph to say why backward
>>>>>> compatibility is not required.
>>>> Just going through my old “make sure something happens with this” mails.  Did anything ever happen with this?  Who has the ball here / who is this stuck on?
>>> We're waiting for "somebody" to testify that fixing this will not adversely affect anyone. I'm not qualified, but my strong belief is that since "reset" or "do_flr" in the Linux kernel is not currently implemented or used in any official distribution, it should be OK.
>>>
>>> Patches still work in current staging-4.14 btw.
>>>
>> Just for the record, attached are the patches I am running on top of linux gentoo-sources-5.9.1  and xen-staging-4.14 respectively. (I am also running with the patch to mark populated reserved memory that contains ACPI tables as "ACPI NVS", not attached here ).
>>
>> <pci_brute_reset-home-hack.patch>
>> <pci_brute_reset-home-hack-doc.patch>
>> <pci-reset-min-egen.patch>
> Is there any reason not to merge the Xen patch while waiting for the Linux patch to be upstreamed?  Similar versions have been deployed in downstream production systems for years; we can at least de-dupe those Xen patches.
>
> Do (can) we have an in-tree location to queue Xen-relevant Linux patches while they go through an upstreaming process that may last several (5+ here) years?


No (at least not that I can think of), but then I can't remember another instance of patches taking that long.


-boris
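
[For readers following the thread: the mechanism being discussed is the kernel's per-device 'reset' attribute under sysfs, which the proposed pciback/libxl patches would write to in order to trigger an FLR (or slot/bus) reset before reassigning a passed-through device. A minimal, hypothetical sketch of that sysfs interaction — not the patched libxl code itself, and the helper names are made up for illustration:]

```python
import os

def pci_reset_path(bdf: str) -> str:
    """Sysfs 'reset' attribute for a PCI device given its full BDF,
    e.g. '0000:03:00.0'. The attribute only exists if the kernel knows
    a reset method (FLR, power management, slot or bus reset) for it."""
    return f"/sys/bus/pci/devices/{bdf}/reset"

def reset_pci_device(bdf: str) -> None:
    """Ask the kernel to reset the device; the kernel picks the best
    available method. Requires root and a device exposing 'reset'."""
    path = pci_reset_path(bdf)
    if not os.path.exists(path):
        raise RuntimeError(f"device {bdf} exposes no 'reset' attribute")
    with open(path, "w") as f:
        f.write("1")
```

As Roger noted earlier in the thread, this is inherently Linux-only: there is no sysfs on the BSDs, so such logic would live in a Linux-specific toolstack file.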



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 00:20:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 00:20:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8881.23881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUfO5-0006RF-GY; Tue, 20 Oct 2020 00:20:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8881.23881; Tue, 20 Oct 2020 00:20:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUfO5-0006R8-DX; Tue, 20 Oct 2020 00:20:25 +0000
Received: by outflank-mailman (input) for mailman id 8881;
 Tue, 20 Oct 2020 00:20:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUfO4-0006R3-QI
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 00:20:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9341acca-2b76-4be8-a888-a4d84ad98dc2;
 Tue, 20 Oct 2020 00:20:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUfNy-0000PB-DY; Tue, 20 Oct 2020 00:20:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUfNy-0001iY-3q; Tue, 20 Oct 2020 00:20:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUfNy-0005uj-3K; Tue, 20 Oct 2020 00:20:18 +0000
X-Inumbo-ID: 9341acca-2b76-4be8-a888-a4d84ad98dc2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=aJzi1yffyHMfRCZ06fe1HBViR+9WTyOOcxVUi53eVuU=; b=CUHyp+opZYvXv3rV5LBZJUCA3A
	CGTuFeJj75ec9UA+T/V7yxwTC0G55kZKmISfeykFZA1pdyjDcDvsp1bgMtKSnIsVjSwfaqfn85IyC
	mCg/B902BUFeX32ZGiPWpImGC+sD/pTTKUNrElp9lRQuYgZVE/KtbR3ZXkg2sDaXPbg0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete build-arm64
Message-Id: <E1kUfNy-0005uj-3K@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 00:20:18 +0000

branch xen-unstable
xenbranch xen-unstable
job build-arm64
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  782d7b30dd8e27ba24346e7c411b476db88b59e7
  Bug not present: e12ce85b2c79d83a340953291912875c30b3af06
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156005/


  commit 782d7b30dd8e27ba24346e7c411b476db88b59e7
  Merge: e12ce85b2c c47110d90f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sat Oct 17 20:52:55 2020 +0100
  
      Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging
      
      * Drop ninjatool and just require ninja (Paolo)
      * Fix docs build under msys2 (Yonggang)
      * HAX snafu fix (Claudio)
      * Disable signal handlers during fuzzing (Alex)
      * Miscellaneous fixes (Bruce, Greg)
      
      # gpg: Signature made Sat 17 Oct 2020 15:45:56 BST
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * remotes/bonzini-gitlab/tags/for-upstream: (22 commits)
        ci: include configure and meson logs in all jobs if configure fails
        hax: unbreak accelerator cpu code after cpus.c split
        fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
        cirrus: Enable doc build on msys2/mingw
        meson: Move the detection logic for sphinx to meson
        meson: move SPHINX_ARGS references within "if build_docs"
        docs: Fix Sphinx configuration for msys2/mingw
        meson: Only install icons and qemu.desktop if have_system
        configure: fix handling of --docdir parameter
        meson: cleanup curses/iconv test
        meson.build: don't condition iconv detection on library detection
        build: add --enable/--disable-libudev
        build: replace ninjatool with ninja
        build: cleanups to Makefile
        add ninja to dockerfiles, CI configurations and test VMs
        dockerfiles: enable Centos 8 PowerTools
        configure: move QEMU_INCLUDES to meson
        tests: add missing generated sources to testqapi
        make: run shell with pipefail
        tests/Makefile.include: unbreak non-tcg builds
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit c47110d90fa5401bcc42c17f8ae0724a1c96599a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 05:49:28 2020 -0400
  
      ci: include configure and meson logs in all jobs if configure fails
      
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a1b0e4613006704fb02209df548ce9fde62232e0
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Fri Oct 16 10:00:32 2020 +0200
  
      hax: unbreak accelerator cpu code after cpus.c split
      
      During my split of cpus.c, the line
      "current_cpu = cpu"
      was removed by mistake, causing hax to break.
      
      This commit fixes the situation by restoring it.
      
      Reported-by: Volker Rümelin <vr_qemu@t-online.de>
      Fixes: e92558e4bf8059ce4f0b310afe218802b72766bc
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Message-Id: <20201016080032.13914-1-cfontana@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit fc69fa216cf52709b1279a592364e50c674db6ff
  Author: Alexander Bulekov <alxndr@bu.edu>
  Date:   Wed Oct 14 10:21:57 2020 -0400
  
      fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
      
      Prior to this patch, the only way I found to terminate the fuzzer was
      either to:
       1. Explicitly specify the number of fuzzer runs with the -runs= flag
       2. SIGKILL the process with "pkill -9 qemu-fuzz-*" or similar
      
      In addition to being annoying to deal with, SIGKILLing the process skips
      over any exit handlers (e.g. those registered with atexit()). This is bad,
      since some fuzzers might create temporary files that should ideally be
      removed on exit using an exit handler. The only way to achieve a clean
      exit now is to specify -runs=N, but the desired "N" is tricky to
      identify prior to fuzzing.
      
      Why doesn't the process exit with standard SIGINT,SIGHUP,SIGTERM
      signals? QEMU installs its own handlers for these signals in
      os-posix.c:os_setup_signal_handling, which notify the main loop that an
      exit was requested. The fuzzer, however, does not run qemu_main_loop,
      which performs the main_loop_should_exit() check.  This means that the
      fuzzer effectively ignores these signals. As we don't really care about
      cleanly stopping the disposable fuzzer "VM", this patch uninstalls
      QEMU's signal handlers. Thus, we can stop the fuzzer with
      SIG{INT,HUP,TERM} and the fuzzing code can optionally use atexit() to
      clean up temporary files/resources.
      
      Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
      Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
      Message-Id: <20201014142157.46028-1-alxndr@bu.edu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
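
The SIGKILL-versus-SIGTERM point above can be illustrated with a small, generic shell sketch (a hypothetical demo process, unrelated to the fuzzer itself): a cleanup handler fires on SIGTERM but never on SIGKILL.

```shell
# Start a demo "victim" that installs a cleanup handler for SIGTERM.
# 'sleep 2 & wait' keeps the inner shell interruptible so the trap can fire.
start_victim() {
    bash -c 'trap "echo cleanup ran; exit 0" TERM; sleep 2 & wait' &
    victim=$!
    sleep 0.2                   # give the trap time to be installed
}

start_victim
kill -TERM "$victim"            # graceful stop: the TERM trap runs
wait "$victim" 2>/dev/null

start_victim
kill -KILL "$victim"            # SIGKILL: the handler never runs, no output
wait "$victim" 2>/dev/null || true
```

Only the SIGTERM case prints "cleanup ran"; the SIGKILL case produces nothing, which is exactly why the fuzzer wants ordinary catchable signals to work.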
  
  commit 5bfb4f52fe897f5594a0089891e19c78d3ecd672
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:26 2020 +0800
  
      cirrus: Enable doc build on msys2/mingw
      
      Currently the rST docs depend on the old sphinx-2.x version,
      so install it by downloading it directly.
      Remove the need for the university mirror; the main repo has recovered.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-5-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e36676604683c1ee12963d83eaaf3d3c2a1790ce
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:25 2020 +0800
  
      meson: Move the detection logic for sphinx to meson
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-4-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9dc6ee3fd78a478935eecf936cddd575c6dfb20a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 04:05:26 2020 -0400
  
      meson: move SPHINX_ARGS references within "if build_docs"
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a94a689cc5c5b2a1fbba4dd418e456a14e6e12e5
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:23 2020 +0800
  
      docs: Fix Sphinx configuration for msys2/mingw
      
      Python doesn't support running ../scripts/kernel-doc directly.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-2-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3856873ee404c028a47115147f21cdc4b0d25566
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 14:18:40 2020 -0600
  
      meson: Only install icons and qemu.desktop if have_system
      
      These files are not needed for a linux-user only install.
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Message-Id: <20201015201840.282956-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c6502638075557ff38fbb874af32f91186b667eb
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 13:07:42 2020 -0600
  
      configure: fix handling of --docdir parameter
      
      Commit ca8c0909f01 changed qemu_docdir to docdir, but then used the
      qemu_docdir name in the final assignment. Unfortunately, one instance of
      qemu_docdir was missed: the one which comes from the --docdir parameter.
      This patch restores the proper handling of the --docdir parameter.
      
      Fixes: ca8c0909f01 ("configure: build docdir like other suffixed
      directories")
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20201015190742.270629-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 30fe76b17cc5aad395eb8a8a3da59e377a0b3d8e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 13:26:50 2020 -0400
  
      meson: cleanup curses/iconv test
      
      Skip the test if system emulation is not requested, and
      differentiate errors for lack of iconv and lack of curses.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ac0c8351abf79f3b65105ea27bd0491387d804f6
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Wed Oct 14 16:19:39 2020 -0600
  
      meson.build: don't condition iconv detection on library detection
      
      It isn't necessarily the case that use of iconv requires an additional
      library. For that reason we shouldn't conditionalize iconv detection on
      libiconv.found.
      
      Fixes: 5285e593c33 (configure: Fixes ncursesw detection under msys2/mingw by convert them to meson)
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201014221939.196958-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5c53015a480b3fe137ebd8b3b584a595c65e8f21
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 06:09:27 2020 -0400
  
      build: add --enable/--disable-libudev
      
      Initially, libudev detection was bundled with --enable-mpath because
      qemu-pr-helper was the only user of libudev.  Recently however the USB
      U2F emulation has also started using libudev, so add a separate
      option.  This also allows 1) disabling libudev if desired for static
      builds and 2) for non-static builds, requiring libudev even if
      multipath support is undesirable.
      
      The multipath test is adjusted, because it is now possible to enter it
      with configurations that should fail, such as --static --enable-mpath
      --disable-libudev.
      
      Reported-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 09e93326e448ab43fa26a9e2d9cc20ecf951f32b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:28:11 2020 -0400
  
      build: replace ninjatool with ninja
      
      Now that the build is done entirely by Meson, there is no need
      to keep the Makefile conversion.  Instead, we can ask Ninja about
      the targets it exposes and forward them.
      
      The main advantages are, from smallest to largest:
      
      - reducing the possible namespace pollution within the Makefile
      
      - removal of a relatively large Python program
      
      - faster build because parsing Makefile.ninja is slower than
      parsing build.ninja; and faster build after Meson runs because
      we do not have to generate Makefile.ninja.
      
      - tracking of command lines, which provides more accurate rebuilds
      
      In addition the change removes the requirement for GNU make 3.82, which
      was annoying on Mac, and avoids bugs on Windows due to ninjatool not
      knowing how to convert Windows escapes to POSIX escapes.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2b8575bd5fbc8a8880e9ecfb1c7e7990feb1fea6
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 12:20:02 2020 -0400
  
      build: cleanups to Makefile
      
      Group similar rules, add comments to "else" and "endif" lines,
      detect too-old config-host.mak before messing things up.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 345d7053ca4a39b0496366f3c953ae2681570ce3
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:58:50 2020 -0400
  
      add ninja to dockerfiles, CI configurations and test VMs
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Acked-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f2f984a3b3bc8322df2efa3937bf11e8ea2bcaa5
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:12:37 2020 -0400
  
      dockerfiles: enable Centos 8 PowerTools
      
      ninja is included in the CentOS PowerTools repository.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1e6e616dc21a8117cbe36a7e9026221b566cdf56
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 08:45:42 2020 -0400
  
      configure: move QEMU_INCLUDES to meson
      
      Confusingly, QEMU_INCLUDES is not used by configure tests.  Moving
      it to meson.build ensures that Windows paths are specified instead of
      the msys paths like /c/Users/...
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 97d6efd0a3f3a08942de6c2aee5d2983c54ca84c
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:20:17 2020 -0400
  
      tests: add missing generated sources to testqapi
      
      Ninja notices them due to a different order in visiting the graph.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3bf4583580ab705de1beff6222e934239c3a0356
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:35:13 2020 -0400
  
      make: run shell with pipefail
      
      Without pipefail, it is possible to miss failures if the recipes
      include pipes.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
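
The failure mode being closed here can be seen in a two-line shell experiment (a generic bash demonstration, not taken from the QEMU build recipes): without pipefail, a failing producer hidden behind a pipe reports success.

```shell
# (bash) Demonstrate how pipefail changes a pipeline's reported status.
set +e             # keep going after the deliberately failing pipelines

# Default behaviour: a pipeline's status is that of its LAST command,
# so the failure of 'false' is masked by the succeeding 'cat'.
set +o pipefail
false | cat
echo "without pipefail: $?"    # prints: without pipefail: 0

# With pipefail, any failing stage fails the whole pipeline.
set -o pipefail
false | cat
echo "with pipefail: $?"       # prints: with pipefail: 1
```

GNU make can opt every recipe line into this behaviour by selecting bash as SHELL and adding `-o pipefail` to `.SHELLFLAGS`, which is the usual way such a change is wired up.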
  
  commit 88da4b043b4f91a265947149b1cd6758c046a4bd
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 13 21:21:21 2020 +0200
  
      tests/Makefile.include: unbreak non-tcg builds
      
      Remove from check-block the requirement that all TARGET_DIRS are built.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e90df5eada4e6047548203d781bd61ddcc45d7b4
  Author: Greg Kurz <groug@kaod.org>
  Date:   Thu Oct 15 16:49:06 2020 +0200
  
      Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree
      
      Tools such as emacs usually expect the index files to be in the source tree.
      This is already the case when doing out-of-tree builds, but with in-tree
      builds they end up in the build directory.
      
      Force cscope, ctags and etags to put them in the source tree.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <160277334665.1754102.10921580280105870386.stgit@bahia.lan>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6ebd89cf9ca3f5a6948542c4522b9380b1e9539f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 03:20:45 2020 -0400
  
      submodules: bump meson to 0.55.3
      
      This adds some bugfixes, and allows MSYS2 to configure
      without "--ninja=ninja".
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-arm64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-arm64.xen-build --summary-out=tmp/156005.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline build-arm64 xen-build
Searching for failure / basis pass:
 155979 fail [host=rochester0] / 155971 ok.
Failure / basis pass flights: 155979 / 155971
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 709b163940c55604b983400eb49dad144a2aa091 ba2a9a9e6318bfd93a2306dec40137e198205b86 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Basis pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#73e3cb6c7eea4f5db81c87574dcefe1282de4772-709b163940c55604b983400eb49dad144a2aa091 git://git.qemu.org/qemu.git#e12ce85b2c79d83a340953291912875c30b3af06-ba2a9a9e6318bfd93a2306dec40137e198205b86 git://xenbits.xen.org/osstest/seabios.git#58a44be024f69d2e4d2b58553529230abdd3935e-58a44be024f69d2e4d2b58553529230abdd3935e git://xenbits.xen.org/xen.git#0dfddb2116e3757f77a691a3fe335173088d69dc-0dfddb2116e3757f77a691a3fe335173088d69dc
Loaded 29910 nodes in revision graph
Searching for test results:
 155953 [host=laxton1]
 155971 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155979 fail 709b163940c55604b983400eb49dad144a2aa091 ba2a9a9e6318bfd93a2306dec40137e198205b86 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155995 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155996 fail 709b163940c55604b983400eb49dad144a2aa091 ba2a9a9e6318bfd93a2306dec40137e198205b86 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155997 pass 709b163940c55604b983400eb49dad144a2aa091 dc7a05da69613d5c87ec0359c5dbb9d2b4765301 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155999 pass 709b163940c55604b983400eb49dad144a2aa091 31a6f3534aba275aa9b3da21a58e79065ba865b5 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156000 fail 709b163940c55604b983400eb49dad144a2aa091 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156001 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156002 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156003 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156004 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156005 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Searching for interesting versions
 Result found: flight 155971 (pass), for basis pass
 Result found: flight 155979 (fail), for basis failure
 Repro found: flight 155995 (pass), for basis pass
 Repro found: flight 155996 (fail), for basis failure
 0 revisions at 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
No revisions left to test, checking graph state.
 Result found: flight 155971 (pass), for last pass
 Result found: flight 156001 (fail), for first failure
 Repro found: flight 156002 (pass), for last pass
 Repro found: flight 156003 (fail), for first failure
 Repro found: flight 156004 (pass), for last pass
 Repro found: flight 156005 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  782d7b30dd8e27ba24346e7c411b476db88b59e7
  Bug not present: e12ce85b2c79d83a340953291912875c30b3af06
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156005/


  commit 782d7b30dd8e27ba24346e7c411b476db88b59e7
  Merge: e12ce85b2c c47110d90f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sat Oct 17 20:52:55 2020 +0100
  
      Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging
      
      * Drop ninjatool and just require ninja (Paolo)
      * Fix docs build under msys2 (Yonggang)
      * HAX snafu fix (Claudio)
      * Disable signal handlers during fuzzing (Alex)
      * Miscellaneous fixes (Bruce, Greg)
      
      # gpg: Signature made Sat 17 Oct 2020 15:45:56 BST
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * remotes/bonzini-gitlab/tags/for-upstream: (22 commits)
        ci: include configure and meson logs in all jobs if configure fails
        hax: unbreak accelerator cpu code after cpus.c split
        fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
        cirrus: Enable doc build on msys2/mingw
        meson: Move the detection logic for sphinx to meson
        meson: move SPHINX_ARGS references within "if build_docs"
        docs: Fix Sphinx configuration for msys2/mingw
        meson: Only install icons and qemu.desktop if have_system
        configure: fix handling of --docdir parameter
        meson: cleanup curses/iconv test
        meson.build: don't condition iconv detection on library detection
        build: add --enable/--disable-libudev
        build: replace ninjatool with ninja
        build: cleanups to Makefile
        add ninja to dockerfiles, CI configurations and test VMs
        dockerfiles: enable Centos 8 PowerTools
        configure: move QEMU_INCLUDES to meson
        tests: add missing generated sources to testqapi
        make: run shell with pipefail
        tests/Makefile.include: unbreak non-tcg builds
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-arm64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
156005: tolerable ALL FAIL

flight 156005 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156005/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64                   6 xen-build               fail baseline untested


jobs:
 build-arm64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 02:50:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 02:50:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8891.23905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUhig-0002bB-PA; Tue, 20 Oct 2020 02:49:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8891.23905; Tue, 20 Oct 2020 02:49:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUhig-0002b4-KR; Tue, 20 Oct 2020 02:49:50 +0000
Received: by outflank-mailman (input) for mailman id 8891;
 Tue, 20 Oct 2020 02:49:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUhif-0002az-Bz
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 02:49:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f3d51a7-68bf-4df4-99fc-467220eae863;
 Tue, 20 Oct 2020 02:49:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUhib-0004FP-CY; Tue, 20 Oct 2020 02:49:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUhib-0008JW-2W; Tue, 20 Oct 2020 02:49:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUhib-0002pZ-22; Tue, 20 Oct 2020 02:49:45 +0000
X-Inumbo-ID: 8f3d51a7-68bf-4df4-99fc-467220eae863
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s59cH1v3V87nXJCsollYStDE3xSC+pCMoTmxaVEpRtM=; b=vVm8yrkTXtX6Y4+757YGnwy+Ek
	pOb0601B2l5ZK6Tlu4Gc1My/JwN6gNzRHB0vdO4fki7MLy77QjL7/11vXm9HhrOZHy3yoS0tMPTay
	0ugvQ9IoScoriNvcw8Jny4lBQRerpeRlQMDn3/ubTFNxBwxCr8WY7a08YguMjXkJITgw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155994-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 155994: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d76f4f97eb2772bf85fe286097183d0c7db19ae8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 02:49:45 +0000

flight 155994 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155994/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                d76f4f97eb2772bf85fe286097183d0c7db19ae8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   60 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  105 attempts
Testing same since   155981  2020-10-19 16:06:45 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47947 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 03:30:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 03:30:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8898.23926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUiLU-00064O-4l; Tue, 20 Oct 2020 03:29:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8898.23926; Tue, 20 Oct 2020 03:29:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUiLU-00064H-1e; Tue, 20 Oct 2020 03:29:56 +0000
Received: by outflank-mailman (input) for mailman id 8898;
 Tue, 20 Oct 2020 03:29:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUiLS-00064C-QE
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 03:29:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1be4ff14-5e40-40b2-af1f-9c0287cadcde;
 Tue, 20 Oct 2020 03:29:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUiLO-00054N-SO; Tue, 20 Oct 2020 03:29:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUiLO-0001es-Ko; Tue, 20 Oct 2020 03:29:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUiLO-0000CN-KK; Tue, 20 Oct 2020 03:29:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUiLS-00064C-QE
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 03:29:54 +0000
X-Inumbo-ID: 1be4ff14-5e40-40b2-af1f-9c0287cadcde
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1be4ff14-5e40-40b2-af1f-9c0287cadcde;
	Tue, 20 Oct 2020 03:29:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=euyCmsFnIwsBdvE7B03NoUhT83R20yDREV8T5lvyXEg=; b=xtDs7IdEhrUOCtVcdCkEIBj5iE
	pwSJhJJb4G5R03oT6QF+JKaHoT9lCQ+FyKnm0FKcsY+zxtQDzZY3e/aj82X3kgHpYxgF3A95CZp5O
	Vvl5PFoHCgV07mjErPJ+vSOQ4HeIOkVs5Z5NxurCdDFoFt0UBcrmwSB2v7XHep3rD+TM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUiLO-00054N-SO; Tue, 20 Oct 2020 03:29:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUiLO-0001es-Ko; Tue, 20 Oct 2020 03:29:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUiLO-0000CN-KK; Tue, 20 Oct 2020 03:29:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155998-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 155998: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=93edd1887e34c3959ce927da1a22e8c54ce18a83
X-Osstest-Versions-That:
    ovmf=92e9c44f205a876556abe1a1addea5c40e4f3ccf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 03:29:50 +0000

flight 155998 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155998/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 93edd1887e34c3959ce927da1a22e8c54ce18a83
baseline version:
 ovmf                 92e9c44f205a876556abe1a1addea5c40e4f3ccf

Last test of basis   155976  2020-10-19 08:40:33 Z    0 days
Testing same since   155998  2020-10-19 22:09:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   92e9c44f20..93edd1887e  93edd1887e34c3959ce927da1a22e8c54ce18a83 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 04:00:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 04:00:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8903.23944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUioT-0000EC-JV; Tue, 20 Oct 2020 03:59:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8903.23944; Tue, 20 Oct 2020 03:59:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUioT-0000E5-GS; Tue, 20 Oct 2020 03:59:53 +0000
Received: by outflank-mailman (input) for mailman id 8903;
 Tue, 20 Oct 2020 03:59:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUioS-0000DQ-1o
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 03:59:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b19b161a-4a24-488d-b97b-3efd8591fa3e;
 Tue, 20 Oct 2020 03:59:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUioJ-0005eC-CV; Tue, 20 Oct 2020 03:59:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUioJ-0002Uj-3P; Tue, 20 Oct 2020 03:59:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUioJ-00007i-2u; Tue, 20 Oct 2020 03:59:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUioS-0000DQ-1o
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 03:59:52 +0000
X-Inumbo-ID: b19b161a-4a24-488d-b97b-3efd8591fa3e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b19b161a-4a24-488d-b97b-3efd8591fa3e;
	Tue, 20 Oct 2020 03:59:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=K2P/2Ntafy56ceJYaYHFn84coMXcxCDP1rat7safxOo=; b=HalNOXmAu8XcIQfLbsp1v2547P
	UJvokJoUGrgBEQWiE76ljmdrTrov0Vk5fZdFRtTJqEbnx5RfkO9Kl/5gO3HBOpphqTJHs1H+I4Td1
	B9SNIsQFM7WUafMTwLSUN7McwDtoGF78hy2CxApcXYmduLLFbEq8eRNQF0o0rfkAsGQA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUioJ-0005eC-CV; Tue, 20 Oct 2020 03:59:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUioJ-0002Uj-3P; Tue, 20 Oct 2020 03:59:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUioJ-00007i-2u; Tue, 20 Oct 2020 03:59:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156008-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156008: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d76f4f97eb2772bf85fe286097183d0c7db19ae8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 03:59:43 +0000

flight 156008 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156008/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                d76f4f97eb2772bf85fe286097183d0c7db19ae8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   60 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  106 attempts
Testing same since   155981  2020-10-19 16:06:45 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47947 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 04:29:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 04:29:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8907.23956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUjGl-0004PK-SE; Tue, 20 Oct 2020 04:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8907.23956; Tue, 20 Oct 2020 04:29:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUjGl-0004PD-O8; Tue, 20 Oct 2020 04:29:07 +0000
Received: by outflank-mailman (input) for mailman id 8907;
 Tue, 20 Oct 2020 04:29:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUjGk-0004P8-GF
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 04:29:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b43d522-d3f9-41fe-b723-041f05082658;
 Tue, 20 Oct 2020 04:29:04 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjGi-0006Kp-11; Tue, 20 Oct 2020 04:29:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjGh-0003CX-NY; Tue, 20 Oct 2020 04:29:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjGh-0001DU-N3; Tue, 20 Oct 2020 04:29:03 +0000
X-Inumbo-ID: 1b43d522-d3f9-41fe-b723-041f05082658
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GIZ2Y0w/8E4tuJeaUl+hUYdqC+4c0+yyRy7/UtucEDY=; b=4xe/mspWZhOJJ3HehIvr947HNg
	+A8e41MWeEuDIXHBRr3oC6CtsUJSPE0kLvd6xw8RQhddZCetXWH8tKME0JW2WDadI3S5zGKeR4Dz7
	MjPaHOu6EzuVPmYM3DyyrQRkznVHtnCutjhBCX0Y47L4fEZKq0kK+ZJIFcPel4JU/n88=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155973-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 155973: tolerable trouble: fail/pass/starved
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 04:29:03 +0000

flight 155973 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155973/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155960
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155960
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155960
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155960
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155960
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155960
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155960
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155960
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155960
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155960
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155960
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm   3 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   155973  2020-10-19 03:09:23 Z    1 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  starved 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 05:13:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 05:13:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8913.23976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUjxD-0000Rb-Cl; Tue, 20 Oct 2020 05:12:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8913.23976; Tue, 20 Oct 2020 05:12:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUjxD-0000RU-9v; Tue, 20 Oct 2020 05:12:59 +0000
Received: by outflank-mailman (input) for mailman id 8913;
 Tue, 20 Oct 2020 05:12:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUjxB-0000RP-Rj
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 05:12:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0885c0c8-008e-41b5-9be4-cb8589989421;
 Tue, 20 Oct 2020 05:12:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjxA-0000aA-5b; Tue, 20 Oct 2020 05:12:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjx9-0004oI-UQ; Tue, 20 Oct 2020 05:12:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjx9-0002X0-Tu; Tue, 20 Oct 2020 05:12:55 +0000
X-Inumbo-ID: 0885c0c8-008e-41b5-9be4-cb8589989421
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KHa6Ra4k2kt6uUrN9IdVgUiT9len4dtoB2ieXyTscRA=; b=xYT1gOsXjvxPz0bduXT9/szuHp
	DYf0jloe/98LrhzZiqBTQKGB520j83tyTCPpIT9gLYvUDybxgRE3AJBqKdySNpLuaZUkdQeCHZiK7
	CP7z+0ceKhYyAcO7kv0TETWNKNAMyatNe/t2LGe1vY80fn3PegRG2ZNQhrhDxuvMlvyM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156010-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156010: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=29d14d3a30fdfbe017d39b759423832054280f10
X-Osstest-Versions-That:
    ovmf=93edd1887e34c3959ce927da1a22e8c54ce18a83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 05:12:55 +0000

flight 156010 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156010/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 29d14d3a30fdfbe017d39b759423832054280f10
baseline version:
 ovmf                 93edd1887e34c3959ce927da1a22e8c54ce18a83

Last test of basis   155998  2020-10-19 22:09:56 Z    0 days
Testing same since   156010  2020-10-20 03:31:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  fengyunhua <fengyunhua@byosoft.com.cn>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   93edd1887e..29d14d3a30  29d14d3a30fdfbe017d39b759423832054280f10 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 05:15:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 05:15:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8918.23994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUjzq-0000cl-TW; Tue, 20 Oct 2020 05:15:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8918.23994; Tue, 20 Oct 2020 05:15:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUjzq-0000ce-QU; Tue, 20 Oct 2020 05:15:42 +0000
Received: by outflank-mailman (input) for mailman id 8918;
 Tue, 20 Oct 2020 05:15:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUjzq-0000c0-0j
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 05:15:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15a0225b-7f09-4ed1-b78c-6d2df27f25ce;
 Tue, 20 Oct 2020 05:15:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjzg-0000eE-NF; Tue, 20 Oct 2020 05:15:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjzg-0004sG-F6; Tue, 20 Oct 2020 05:15:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUjzg-0002vf-Ec; Tue, 20 Oct 2020 05:15:32 +0000
X-Inumbo-ID: 15a0225b-7f09-4ed1-b78c-6d2df27f25ce
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BTdgQQ8Pea8HC0CAeP9OAc0sf55ertKEuAJmAg9c2II=; b=W6ztqVlrQb24IHjOICJuwfU4bo
	hg0ofSQhrc9VJ0+f/vdFIyibCe/75u5fKlUkmmyDYbKch7nr5GmCk0CcE9Oo5U3BHB94FuOrBCPlo
	LQBu6yySzxUdRjaQ1IsLcC1Izpoka2sEpwvT3TZ541PgdKEmMHGRELNChZUWFRAXMf8c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156011-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156011: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d76f4f97eb2772bf85fe286097183d0c7db19ae8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 05:15:32 +0000

flight 156011 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156011/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                d76f4f97eb2772bf85fe286097183d0c7db19ae8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   60 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  107 attempts
Testing same since   155981  2020-10-19 16:06:45 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47947 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 06:23:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 06:23:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8924.24013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUl36-0006aZ-2I; Tue, 20 Oct 2020 06:23:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8924.24013; Tue, 20 Oct 2020 06:23:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUl35-0006aS-VW; Tue, 20 Oct 2020 06:23:07 +0000
Received: by outflank-mailman (input) for mailman id 8924;
 Tue, 20 Oct 2020 06:23:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUl34-0006aN-AH
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 06:23:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cd159aa-879c-4849-8aba-ff113a0c9aaf;
 Tue, 20 Oct 2020 06:23:04 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUl31-000261-Ep; Tue, 20 Oct 2020 06:23:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUl31-0007P2-4c; Tue, 20 Oct 2020 06:23:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUl31-000431-47; Tue, 20 Oct 2020 06:23:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUl34-0006aN-AH
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 06:23:06 +0000
X-Inumbo-ID: 8cd159aa-879c-4849-8aba-ff113a0c9aaf
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8cd159aa-879c-4849-8aba-ff113a0c9aaf;
	Tue, 20 Oct 2020 06:23:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ZHIloMS0peOAxrJeLyCqRJ696gM4k/hZfe346DL99eY=; b=uTj+dLHUi65NNl2u97Q1IrMT2s
	1viNMDaQyCpLzBr63U31AHj/4ylUpy5WHmUTUR3fyW76xMBjBH6oYOPpoosOo5fT+JUae2w8WKVkT
	kHVE/u0ggRKwkumsK+0sVR9oYNB+NeZi75WQLOp9KCRjtklXEa8fGR2m9NaPxUJRicAc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUl31-000261-Ep; Tue, 20 Oct 2020 06:23:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUl31-0007P2-4c; Tue, 20 Oct 2020 06:23:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUl31-000431-47; Tue, 20 Oct 2020 06:23:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156012-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156012: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=07d0a64ddb5db272ca583074d2211d3e27b8ec4d
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 06:23:03 +0000

flight 156012 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156012/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              07d0a64ddb5db272ca583074d2211d3e27b8ec4d
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  102 days
Failing since        151818  2020-07-11 04:18:52 Z  101 days   96 attempts
Testing same since   156012  2020-10-20 04:19:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 21899 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 06:55:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 06:55:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8929.24031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUlYc-0000ol-Nd; Tue, 20 Oct 2020 06:55:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8929.24031; Tue, 20 Oct 2020 06:55:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUlYc-0000oe-K3; Tue, 20 Oct 2020 06:55:42 +0000
Received: by outflank-mailman (input) for mailman id 8929;
 Tue, 20 Oct 2020 06:55:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUlYb-0000o0-Cd
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 06:55:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48777639-e852-4e06-9226-9e5f5bc265e5;
 Tue, 20 Oct 2020 06:55:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUlYS-0002k7-NE; Tue, 20 Oct 2020 06:55:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUlYS-0000ZK-EG; Tue, 20 Oct 2020 06:55:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUlYS-0004cr-Di; Tue, 20 Oct 2020 06:55:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kUlYb-0000o0-Cd
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 06:55:41 +0000
X-Inumbo-ID: 48777639-e852-4e06-9226-9e5f5bc265e5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 48777639-e852-4e06-9226-9e5f5bc265e5;
	Tue, 20 Oct 2020 06:55:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N+mDszlJ2Ppe7kQaErgv5AVKufjXPzZ/BX3hpWyoM8c=; b=yzj+Ti5+1OEUo5YHw41lNrxrZC
	FP9H71IBMndNvsgaod3kOiv0tdsQV8tO2QtfRkda/SvsUoD4Symubn63gomu7VP2VV08zMFUo8kz5
	YT3mF8UeINg9s61vlQejB7M16pXt2Ck9wEWeI1w86xqZyhG+POH/TiUeDOMHriWgDJH8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUlYS-0002k7-NE; Tue, 20 Oct 2020 06:55:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUlYS-0000ZK-EG; Tue, 20 Oct 2020 06:55:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kUlYS-0004cr-Di; Tue, 20 Oct 2020 06:55:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156015-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156015: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d76f4f97eb2772bf85fe286097183d0c7db19ae8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 06:55:32 +0000

flight 156015 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156015/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                d76f4f97eb2772bf85fe286097183d0c7db19ae8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   60 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  108 attempts
Testing same since   155981  2020-10-19 16:06:45 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47947 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 07:00:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 07:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8936.24059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUldV-0001n2-Jq; Tue, 20 Oct 2020 07:00:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8936.24059; Tue, 20 Oct 2020 07:00:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUldV-0001mv-Ff; Tue, 20 Oct 2020 07:00:45 +0000
Received: by outflank-mailman (input) for mailman id 8936;
 Tue, 20 Oct 2020 07:00:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUldU-0001ml-5a
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 07:00:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59e9d0be-5e06-412d-a360-6b869e02bd11;
 Tue, 20 Oct 2020 07:00:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1E8ECACB8;
 Tue, 20 Oct 2020 07:00:42 +0000 (UTC)
X-Inumbo-ID: 59e9d0be-5e06-412d-a360-6b869e02bd11
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603177242;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3/tP+QNMkjjbjZ0T80Qe0PFK/FrbD6gO8SanRBnsWT0=;
	b=IlUmwqUzSmgLr8WKmva3Rz/XO0oKPt0Fe+QdiZPqLNZtT2P4/nYHXLuN/mz+L2Pvsc3G/M
	3uz7JEoGfkL4cxJacamQmdYexsbGKMtlgSZROos1XKztSivzxys0FgbdA0cmIOJm+eG9CQ
	vo287Mhj+JuEQuUV4VkckwyPn1rUBQQ=
Subject: Re: [PATCH] SVM: avoid VMSAVE in ctxt-switch-to
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a01862b8-6e16-5ddc-7f48-2d3bed2f34b6@suse.com>
 <9d0cae4e-f849-f2a3-4261-d3efb977deeb@citrix.com>
 <b3b581fc-b1c9-cdc2-add6-900a4305623a@suse.com>
 <6af1bbb6-d717-affa-6409-2b983e48ed30@citrix.com>
 <59f3e399-8676-bb44-ec85-500583f97b2f@suse.com>
 <23d02e3b-7dac-ceb8-ebdd-3b77f264d6b4@citrix.com>
 <a5e30124-c1aa-e13f-cb09-8402b85209eb@suse.com>
 <e8ee8c84-f949-c882-07a6-58079987a308@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <746db5e1-f78f-2446-c877-4abfa4ba8f39@suse.com>
Date: Tue, 20 Oct 2020 09:00:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <e8ee8c84-f949-c882-07a6-58079987a308@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.10.2020 17:52, Andrew Cooper wrote:
> On 19/10/2020 16:47, Jan Beulich wrote:
>> On 19.10.2020 17:22, Andrew Cooper wrote:
>>> On 19/10/2020 16:02, Jan Beulich wrote:
>>>> On 19.10.2020 16:30, Andrew Cooper wrote:
>>>>> On 19/10/2020 15:21, Jan Beulich wrote:
>>>>>> On 19.10.2020 16:10, Andrew Cooper wrote:
>>>>>>> On 19/10/2020 14:40, Jan Beulich wrote:
>>>>>>>> Of the state saved by the insn and reloaded by the corresponding VMLOAD
>>>>>>>> - TR, syscall, and sysenter state are invariant while having Xen's state
>>>>>>>>   loaded,
>>>>>>>> - FS, GS, and LDTR are not used by Xen and get suitably set in PV
>>>>>>>>   context switch code.
>>>>>>> I think it would be helpful to state that Xen's state is suitably cached
>>>>>>> in _svm_cpu_up(), as this does now introduce an ordering dependency
>>>>>>> during boot.
>>>>>> I've added a sentence.
>>>>>>
>>>>>>> Is it possibly worth putting an assert checking the TR selector?  That
>>>>>>> ought to be good enough to catch stray init reordering problems.
>>>>>> How would checking just the TR selector help? If other pieces of TR
>>>>>> or syscall/sysenter were out of sync, we'd be hosed, too.
>>>>> They're far less likely to move relative to tr, than everything relative
>>>>> to hvm_up().
>>>>>
>>>>>> I'm also not sure what exactly you mean to do for such an assertion:
>>>>>> Merely check the host VMCB field against TSS_SELECTOR, or do an
>>>>>> actual STR to be closer to what the VMSAVE actually did?
>>>>> ASSERT(str() == TSS_SELECTOR);
>>>> Oh, that's odd. How would this help with the VMCB?
>>> It won't.
>>>
>>> We're not checking the behaviour of the VMSAVE instruction.  We just
>>> want to sanity check that %tr is already configured.
>> TR not already being configured is out of the question in a function
>> involved in context switching, don't you think? I thought you were
>> worried about the VMCB not being set up correctly? Or are you in
>> the end asking for the suggested assertion to go into
>> _svm_cpu_up(), yet I didn't understand it that way?
> 
> I meant in _svm_cpu_up().  It is only at __init time where the new
> implicit dependency is created.

Okay, so just a misunderstanding on my part. I've done as you've
suggested, but I'd like to note that load_system_tables() runs
(often far) earlier than percpu_traps_init(), and hence ordering
issues with the latter are more likely. In fact the most worrying
case is its use by reinit_bsp_stack(), which is not a problem
only because the only relevant stack relative adjustment done is
the writing of SYSENTER_ESP, which gets skipped for AMD.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 07:13:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 07:13:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8940.24070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUlq9-0002nU-PW; Tue, 20 Oct 2020 07:13:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8940.24070; Tue, 20 Oct 2020 07:13:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUlq9-0002nN-Ma; Tue, 20 Oct 2020 07:13:49 +0000
Received: by outflank-mailman (input) for mailman id 8940;
 Tue, 20 Oct 2020 07:13:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUlq8-0002nI-HV
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 07:13:48 +0000
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31b43878-27ea-435e-ab4e-6769c12a6027;
 Tue, 20 Oct 2020 07:13:46 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id z5so1129026ejw.7
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 00:13:46 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id s29sm1265294edc.30.2020.10.20.00.13.44
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 00:13:45 -0700 (PDT)
X-Inumbo-ID: 31b43878-27ea-435e-ab4e-6769c12a6027
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=impcEQiRjn947J8FDCa/SgQe5WbnVHLHszymxM0Eh9E=;
        b=jUdCO9glnO34bN0rvLBwXGNZjJA1jtM5p1/3Vt+mepbDxsUFLoza1fcUIO6bqYsrbl
         VL9rQA6XxWjjqiKY4MuIQ9ewbhkgSHYOTgblgz2gBbDQPHgGd4cY0Vcmk0hivpQGas9p
         xO60uGszo2PTWurB7rVFNbAQjzGyOIaSWvtT0KeRsmycv6czL/XeN6cw24p/Ltgdi6AC
         AwLdpKkrLnKf8DTH3Mn02EdxxSu5X7DIDBAC2S0tqOTchrKiXlVDifjWSt6lXs68TiPL
         N9l5hGBlqzgMQBbbSvStNVw+vLiR31ZM8mNCCW//bn1CokuAfbLyHlmDKXaSfIFxGC3z
         uOSg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=impcEQiRjn947J8FDCa/SgQe5WbnVHLHszymxM0Eh9E=;
        b=ScYnYpUUY1OFbo12nO4krHFS95lCctNAhjCNQwZhkQKvREheDERE5KIC1Cucnc/fxF
         PAzwZcyr9xKWjOowYFkYf0lPevmZv8FfjIEj3ijjcV1aeQ9fDawcIkQ7bSK6bB1RPDkQ
         8AVDs3yuf+ReQsqhX1J0PTRe+cIAEO76KV/Ko0i8GC9LFfuVPZBl05wfrmWxVU2i/FZn
         GBHJcKmqnobwNnhhul/K61LNOig9tbu1LfaH5ENncgTSjdsBAe7ospBbZ/i9A+t1jBuM
         dEHYR+UvtduBDfvo5qHfj8nZ2as5fcR5GFXAU0LIpXGPBjPwzHypeRIKReD64TsmKlqT
         C2pQ==
X-Gm-Message-State: AOAM532b76+mHu3Ty41FHeS95oHPnG22NQMvwk176j2pCEBL3lVykHKW
	H8SfBYp5+asjkdsbZA+SZ5i2KHUE/neTsg==
X-Google-Smtp-Source: ABdhPJysn9VmtZYYsKidsB9RaH2tHJjz1F9ZB4QE7/9aEy6SvUwi06J6irHsUr0zlvUtOJYCbKAOaA==
X-Received: by 2002:a17:906:388d:: with SMTP id q13mr1686209ejd.92.1603178025764;
        Tue, 20 Oct 2020 00:13:45 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-2-git-send-email-olekstysh@gmail.com>
Subject: RE: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common
Date: Tue, 20 Oct 2020 08:13:43 +0100
Message-ID: <003c01d6a6b0$8c418f50$a4c4adf0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAH3I7RCqmhWqiA=
Content-Language: en-gb

> -----Original Message-----
> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>;
> Wei Liu <wl@xen.org>; Julien Grall <julien.grall@arm.com>
> Subject: [PATCH V2 01/23] x86/ioreq: Prepare IOREQ feature for making it common
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> As a lot of x86 code can be re-used on Arm later on, this
> patch makes some preparation to x86/hvm/ioreq.c before moving
> to the common code. This way we will get a verbatim copy for
> a code movement in a subsequent patch (arch/x86/hvm/ioreq.c
> will be *just* renamed to common/ioreq).
> 
> This patch does the following:
> 1. Introduce *inline* arch_hvm_ioreq_init(), arch_hvm_ioreq_destroy(),
>    arch_hvm_io_completion(), arch_hvm_destroy_ioreq_server() and
>    hvm_ioreq_server_get_type_addr() to abstract arch specific materials.
> 2. Make hvm_map_mem_type_to_ioreq_server() *inline*. It is not going
>    to be called from the common code.
> 3. Make get_ioreq_server() global. It is going to be called from
>    a few places.
> 4. Add IOREQ_STATUS_* #define-s and update candidates for moving.
> 5. Re-order #include-s alphabetically.
> 
> This support is going to be used on Arm to be able to run a device
> emulator outside of the Xen hypervisor.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - new patch, was split from:
>      "[RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common"
>    - fold the check of p->type into hvm_get_ioreq_server_range_type()
>      and make it return success/failure
>    - remove relocate_portio_handler() call from arch_hvm_ioreq_destroy()
>      in arch/x86/hvm/ioreq.c
>    - introduce arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()
> 
> Changes V1 -> V2:
>    - update patch description
>    - make arch functions inline and put them into arch header
>      to achieve a true rename by the subsequent patch
>    - return void in arch_hvm_destroy_ioreq_server()
>    - return bool in arch_hvm_ioreq_destroy()
>    - bring relocate_portio_handler() back to arch_hvm_ioreq_destroy()
>    - rename IOREQ_IO* to IOREQ_STATUS*
>    - remove *handle* from arch_handle_hvm_io_completion()
>    - re-order #include-s alphabetically
>    - rename hvm_get_ioreq_server_range_type() to hvm_ioreq_server_get_type_addr()
>      and add "const" to several arguments
> ---
>  xen/arch/x86/hvm/ioreq.c        | 153 +++--------------------------------
>  xen/include/asm-x86/hvm/ioreq.h | 165 +++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 184 insertions(+), 134 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 1cc27df..d3433d7 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -1,5 +1,5 @@
>  /*
> - * hvm/io.c: hardware virtual machine I/O emulation
> + * ioreq.c: hardware virtual machine I/O emulation
>   *
>   * Copyright (c) 2016 Citrix Systems Inc.
>   *
> @@ -17,21 +17,18 @@
>   */
> 
>  #include <xen/ctype.h>
> +#include <xen/domain.h>
> +#include <xen/event.h>
>  #include <xen/init.h>
> +#include <xen/irq.h>
>  #include <xen/lib.h>
> -#include <xen/trace.h>
> +#include <xen/paging.h>
>  #include <xen/sched.h>
> -#include <xen/irq.h>
>  #include <xen/softirq.h>
> -#include <xen/domain.h>
> -#include <xen/event.h>
> -#include <xen/paging.h>
> +#include <xen/trace.h>
>  #include <xen/vpci.h>
> 
> -#include <asm/hvm/emulate.h>
> -#include <asm/hvm/hvm.h>
>  #include <asm/hvm/ioreq.h>
> -#include <asm/hvm/vmx/vmx.h>
> 
>  #include <public/hvm/ioreq.h>
>  #include <public/hvm/params.h>
> @@ -48,8 +45,8 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>  #define GET_IOREQ_SERVER(d, id) \
>      (d)->arch.hvm.ioreq_server.server[id]
> 
> -static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
> -                                                 unsigned int id)
> +struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
> +                                          unsigned int id)
>  {
>      if ( id >= MAX_NR_IOREQ_SERVERS )
>          return NULL;
> @@ -209,19 +206,8 @@ bool handle_hvm_io_completion(struct vcpu *v)
>          return handle_pio(vio->io_req.addr, vio->io_req.size,
>                            vio->io_req.dir);
> 
> -    case HVMIO_realmode_completion:
> -    {
> -        struct hvm_emulate_ctxt ctxt;
> -
> -        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> -        vmx_realmode_emulate_one(&ctxt);
> -        hvm_emulate_writeback(&ctxt);
> -
> -        break;
> -    }
>      default:
> -        ASSERT_UNREACHABLE();
> -        break;
> +        return arch_hvm_io_completion(io_completion);
>      }
> 
>      return true;
> @@ -855,7 +841,7 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
> 
>      domain_pause(d);
> 
> -    p2m_set_ioreq_server(d, 0, s);
> +    arch_hvm_destroy_ioreq_server(s);
> 
>      hvm_ioreq_server_disable(s);
> 
> @@ -1080,54 +1066,6 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>      return rc;
>  }
> 
> -/*
> - * Map or unmap an ioreq server to specific memory type. For now, only
> - * HVMMEM_ioreq_server is supported, and in the future new types can be
> - * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
> - * currently, only write operations are to be forwarded to an ioreq server.
> - * Support for the emulation of read operations can be added when an ioreq
> - * server has such requirement in the future.
> - */
> -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
> -                                     uint32_t type, uint32_t flags)
> -{
> -    struct hvm_ioreq_server *s;
> -    int rc;
> -
> -    if ( type != HVMMEM_ioreq_server )
> -        return -EINVAL;
> -
> -    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
> -        return -EINVAL;
> -
> -    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> -
> -    s = get_ioreq_server(d, id);
> -
> -    rc = -ENOENT;
> -    if ( !s )
> -        goto out;
> -
> -    rc = -EPERM;
> -    if ( s->emulator != current->domain )
> -        goto out;
> -
> -    rc = p2m_set_ioreq_server(d, flags, s);
> -
> - out:
> -    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> -
> -    if ( rc == 0 && flags == 0 )
> -    {
> -        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> -
> -        if ( read_atomic(&p2m->ioreq.entry_count) )
> -            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> -    }
> -
> -    return rc;
> -}
> -
>  int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
>                                 bool enabled)
>  {
> @@ -1215,7 +1153,7 @@ void hvm_destroy_all_ioreq_servers(struct domain *d)
>      struct hvm_ioreq_server *s;
>      unsigned int id;
> 
> -    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
> +    if ( !arch_hvm_ioreq_destroy(d) )
>          return;
> 
>      spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> @@ -1243,50 +1181,13 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>                                                   ioreq_t *p)
>  {
>      struct hvm_ioreq_server *s;
> -    uint32_t cf8;
>      uint8_t type;
>      uint64_t addr;
>      unsigned int id;
> 
> -    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> +    if ( hvm_ioreq_server_get_type_addr(d, p, &type, &addr) )
>          return NULL;
> 
> -    cf8 = d->arch.hvm.pci_cf8;
> -
> -    if ( p->type == IOREQ_TYPE_PIO &&
> -         (p->addr & ~3) == 0xcfc &&
> -         CF8_ENABLED(cf8) )
> -    {
> -        uint32_t x86_fam;
> -        pci_sbdf_t sbdf;
> -        unsigned int reg;
> -
> -        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
> -
> -        /* PCI config data cycle */
> -        type = XEN_DMOP_IO_RANGE_PCI;
> -        addr = ((uint64_t)sbdf.sbdf << 32) | reg;
> -        /* AMD extended configuration space access? */
> -        if ( CF8_ADDR_HI(cf8) &&
> -             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
> -             (x86_fam = get_cpu_family(
> -                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
> -             x86_fam < 0x17 )
> -        {
> -            uint64_t msr_val;
> -
> -            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
> -                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
> -                addr |= CF8_ADDR_HI(cf8);
> -        }
> -    }
> -    else
> -    {
> -        type = (p->type == IOREQ_TYPE_PIO) ?
> -                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
> -        addr = p->addr;
> -    }
> -
>      FOR_EACH_IOREQ_SERVER(d, id, s)
>      {
>          struct rangeset *r;
> @@ -1351,7 +1252,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
>      pg = iorp->va;
> 
>      if ( !pg )
> -        return X86EMUL_UNHANDLEABLE;
> +        return IOREQ_STATUS_UNHANDLED;
> 
>      /*
>       * Return 0 for the cases we can't deal with:
> @@ -1381,7 +1282,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
>          break;
>      default:
>          gdprintk(XENLOG_WARNING, "unexpected ioreq size: %u\n", p->size);
> -        return X86EMUL_UNHANDLEABLE;
> +        return IOREQ_STATUS_UNHANDLED;
>      }
> 
>      spin_lock(&s->bufioreq_lock);
> @@ -1391,7 +1292,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
>      {
>          /* The queue is full: send the iopacket through the normal path. */
>          spin_unlock(&s->bufioreq_lock);
> -        return X86EMUL_UNHANDLEABLE;
> +        return IOREQ_STATUS_UNHANDLED;
>      }
> 
>      pg->buf_ioreq[pg->ptrs.write_pointer % IOREQ_BUFFER_SLOT_NUM] = bp;
> @@ -1422,7 +1323,7 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
>      notify_via_xen_event_channel(d, s->bufioreq_evtchn);
>      spin_unlock(&s->bufioreq_lock);
> 
> -    return X86EMUL_OKAY;
> +    return IOREQ_STATUS_HANDLED;
>  }
> 
>  int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
> @@ -1438,7 +1339,7 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
>          return hvm_send_buffered_ioreq(s, proto_p);
> 
>      if ( unlikely(!vcpu_start_shutdown_deferral(curr)) )
> -        return X86EMUL_RETRY;
> +        return IOREQ_STATUS_RETRY;
> 
>      list_for_each_entry ( sv,
>                            &s->ioreq_vcpu_list,
> @@ -1478,11 +1379,11 @@ int hvm_send_ioreq(struct hvm_ioreq_server *s, ioreq_t *proto_p,
>              notify_via_xen_event_channel(d, port);
> 
>              sv->pending = true;
> -            return X86EMUL_RETRY;
> +            return IOREQ_STATUS_RETRY;
>          }
>      }
> 
> -    return X86EMUL_UNHANDLEABLE;
> +    return IOREQ_STATUS_UNHANDLED;
>  }
> 
>  unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
> @@ -1496,30 +1397,18 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered)
>          if ( !s->enabled )
>              continue;
> 
> -        if ( hvm_send_ioreq(s, p, buffered) == X86EMUL_UNHANDLEABLE )
> +        if ( hvm_send_ioreq(s, p, buffered) == IOREQ_STATUS_UNHANDLED )
>              failed++;
>      }
> 
>      return failed;
>  }
> 
> -static int hvm_access_cf8(
> -    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
> -{
> -    struct domain *d = current->domain;
> -
> -    if ( dir == IOREQ_WRITE && bytes == 4 )
> -        d->arch.hvm.pci_cf8 = *val;
> -
> -    /* We always need to fall through to the catch all emulator */
> -    return X86EMUL_UNHANDLEABLE;
> -}
> -
>  void hvm_ioreq_init(struct domain *d)
>  {
>      spin_lock_init(&d->arch.hvm.ioreq_server.lock);
> 
> -    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
> +    arch_hvm_ioreq_init(d);
>  }
> 
>  /*
> diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
> index e2588e9..376e2ef 100644
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -19,6 +19,165 @@
>  #ifndef __ASM_X86_HVM_IOREQ_H__
>  #define __ASM_X86_HVM_IOREQ_H__
> 
> +#include <asm/hvm/emulate.h>
> +#include <asm/hvm/vmx/vmx.h>
> +
> +#include <public/hvm/params.h>
> +
> +struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
> +                                          unsigned int id);
> +
> +static inline bool arch_hvm_io_completion(enum hvm_io_completion io_completion)
> +{
> +    switch ( io_completion )
> +    {
> +    case HVMIO_realmode_completion:
> +    {
> +        struct hvm_emulate_ctxt ctxt;
> +
> +        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> +        vmx_realmode_emulate_one(&ctxt);
> +        hvm_emulate_writeback(&ctxt);
> +
> +        break;
> +    }
> +
> +    default:
> +        ASSERT_UNREACHABLE();
> +        break;
> +    }
> +
> +    return true;
> +}
> +
> +/* Called when target domain is paused */
> +static inline void arch_hvm_destroy_ioreq_server(struct hvm_ioreq_server *s)
> +{
> +    p2m_set_ioreq_server(s->target, 0, s);
> +}
> +
> +/*
> + * Map or unmap an ioreq server to specific memory type. For now, only
> + * HVMMEM_ioreq_server is supported, and in the future new types can be
> + * introduced, e.g. HVMMEM_ioreq_serverX mapped to ioreq server X. And
> + * currently, only write operations are to be forwarded to an ioreq server.
> + * Support for the emulation of read operations can be added when an ioreq
> + * server has such requirement in the future.
> + */
> +static inline int hvm_map_mem_type_to_ioreq_server(struct domain *d,
> +                                                   ioservid_t id,
> +                                                   uint32_t type,
> +                                                   uint32_t flags)
> +{
> +    struct hvm_ioreq_server *s;
> +    int rc;
> +
> +    if ( type != HVMMEM_ioreq_server )
> +        return -EINVAL;
> +
> +    if ( flags & ~XEN_DMOP_IOREQ_MEM_ACCESS_WRITE )
> +        return -EINVAL;
> +
> +    spin_lock_recursive(&d->arch.hvm.ioreq_server.lock);
> +
> +    s = get_ioreq_server(d, id);
> +
> +    rc = -ENOENT;
> +    if ( !s )
> +        goto out;
> +
> +    rc = -EPERM;
> +    if ( s->emulator != current->domain )
> +        goto out;
> +
> +    rc = p2m_set_ioreq_server(d, flags, s);
> +
> + out:
> +    spin_unlock_recursive(&d->arch.hvm.ioreq_server.lock);
> +
> +    if ( rc == 0 && flags == 0 )
> +    {
> +        struct p2m_domain *p2m = p2m_get_hostp2m(d);
> +
> +        if ( read_atomic(&p2m->ioreq.entry_count) )
> +            p2m_change_entry_type_global(d, p2m_ioreq_server, p2m_ram_rw);
> +    }
> +
> +    return rc;
> +}
> +

The above doesn't really feel right to me. It's really an entry point
into the ioreq server code and as such I think it ought to be left in
the common code. I suggest replacing the p2m_set_ioreq_server() function
with an arch specific function (also taking the type) which you can then
implement here.

The rest of the patch looks ok.

  Paul

> +static inline int hvm_ioreq_server_get_type_addr(const struct domain *d,
> +                                                 const ioreq_t *p,
> +                                                 uint8_t *type,
> +                                                 uint64_t *addr)
> +{
> +    uint32_t cf8 = d->arch.hvm.pci_cf8;
> +
> +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> +        return -EINVAL;
> +
> +    if ( p->type == IOREQ_TYPE_PIO &&
> +         (p->addr & ~3) == 0xcfc &&
> +         CF8_ENABLED(cf8) )
> +    {
> +        uint32_t x86_fam;
> +        pci_sbdf_t sbdf;
> +        unsigned int reg;
> +
> +        reg = hvm_pci_decode_addr(cf8, p->addr, &sbdf);
> +
> +        /* PCI config data cycle */
> +        *type = XEN_DMOP_IO_RANGE_PCI;
> +        *addr = ((uint64_t)sbdf.sbdf << 32) | reg;
> +        /* AMD extended configuration space access? */
> +        if ( CF8_ADDR_HI(cf8) &&
> +             d->arch.cpuid->x86_vendor == X86_VENDOR_AMD &&
> +             (x86_fam = get_cpu_family(
> +                 d->arch.cpuid->basic.raw_fms, NULL, NULL)) >= 0x10 &&
> +             x86_fam < 0x17 )
> +        {
> +            uint64_t msr_val;
> +
> +            if ( !rdmsr_safe(MSR_AMD64_NB_CFG, msr_val) &&
> +                 (msr_val & (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT)) )
> +                *addr |= CF8_ADDR_HI(cf8);
> +        }
> +    }
> +    else
> +    {
> +        *type = (p->type == IOREQ_TYPE_PIO) ?
> +                 XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
> +        *addr = p->addr;
> +    }
> +
> +    return 0;
> +}
> +
> +static inline int hvm_access_cf8(
> +    int dir, unsigned int port, unsigned int bytes, uint32_t *val)
> +{
> +    struct domain *d = current->domain;
> +
> +    if ( dir == IOREQ_WRITE && bytes == 4 )
> +        d->arch.hvm.pci_cf8 = *val;
> +
> +    /* We always need to fall through to the catch all emulator */
> +    return X86EMUL_UNHANDLEABLE;
> +}
> +
> +static inline void arch_hvm_ioreq_init(struct domain *d)
> +{
> +    register_portio_handler(d, 0xcf8, 4, hvm_access_cf8);
> +}
> +
> +static inline bool arch_hvm_ioreq_destroy(struct domain *d)
> +{
> +    if ( !relocate_portio_handler(d, 0xcf8, 0xcf8, 4) )
> +        return false;
> +
> +    return true;
> +}
> +
>  bool hvm_io_pending(struct vcpu *v);
>  bool handle_hvm_io_completion(struct vcpu *v);
>  bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
> @@ -38,8 +197,6 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
>  int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
>                                           uint32_t type, uint64_t start,
>                                           uint64_t end);
> -int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
> -                                     uint32_t type, uint32_t flags);
>  int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
>                                 bool enabled);
> 
> @@ -55,6 +212,10 @@ unsigned int hvm_broadcast_ioreq(ioreq_t *p, bool buffered);
> 
>  void hvm_ioreq_init(struct domain *d);
> 
> +#define IOREQ_STATUS_HANDLED     X86EMUL_OKAY
> +#define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
> +#define IOREQ_STATUS_RETRY       X86EMUL_RETRY
> +
>  #endif /* __ASM_X86_HVM_IOREQ_H__ */
> 
>  /*
> --
> 2.7.4




From xen-devel-bounces@lists.xenproject.org Tue Oct 20 07:26:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 07:26:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8944.24086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUm1o-0003md-VV; Tue, 20 Oct 2020 07:25:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8944.24086; Tue, 20 Oct 2020 07:25:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUm1o-0003mW-Rn; Tue, 20 Oct 2020 07:25:52 +0000
Received: by outflank-mailman (input) for mailman id 8944;
 Tue, 20 Oct 2020 07:25:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUm1n-0003ly-UC
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 07:25:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b612e8b0-fc9a-4518-860f-ea34403960f1;
 Tue, 20 Oct 2020 07:25:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUm1f-0003O1-Cp; Tue, 20 Oct 2020 07:25:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUm1f-0003R8-3B; Tue, 20 Oct 2020 07:25:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUm1f-0002mp-2h; Tue, 20 Oct 2020 07:25:43 +0000
X-Inumbo-ID: b612e8b0-fc9a-4518-860f-ea34403960f1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HEdsLyUBGf3K7XT3RJGmqazYBsN15gKUKWrqr3ulPPc=; b=whNS6kqvipVnQshpIv1viSXpnx
	0ClcCp+OIUcQZo2IssEvqZLHcCLwqWVnRMNzr07/+vnu9rbaEFrrE9HTRkVQP1UUpL4p0yB97FIuI
	vlJ2oAoAvSBM8Jkit++07mhI0oOFAY+TngpMRBuvcFmzXxzTXLfOx6FBkkAFVyL5olDk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-155977-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 155977: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7cf726a59435301046250c42131554d9ccc566b8
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 07:25:43 +0000

flight 155977 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/155977/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                7cf726a59435301046250c42131554d9ccc566b8
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   80 days
Failing since        152366  2020-08-01 20:49:34 Z   79 days  134 attempts
Testing same since   155977  2020-10-19 09:23:10 Z    0 days    1 attempts

------------------------------------------------------------
3219 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 588383 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 07:57:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 07:57:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8948.24098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUmWF-0006Ts-Iv; Tue, 20 Oct 2020 07:57:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8948.24098; Tue, 20 Oct 2020 07:57:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUmWF-0006Tl-FN; Tue, 20 Oct 2020 07:57:19 +0000
Received: by outflank-mailman (input) for mailman id 8948;
 Tue, 20 Oct 2020 07:57:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUmWD-0006Tg-Sr
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 07:57:17 +0000
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62c0b558-ab37-44d1-840e-2cd45a042cad;
 Tue, 20 Oct 2020 07:57:16 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id ce10so1292522ejc.5
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 00:57:16 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id k9sm1496515ejv.113.2020.10.20.00.57.14
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 00:57:15 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kUmWD-0006Tg-Sr
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 07:57:17 +0000
X-Inumbo-ID: 62c0b558-ab37-44d1-840e-2cd45a042cad
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 62c0b558-ab37-44d1-840e-2cd45a042cad;
	Tue, 20 Oct 2020 07:57:16 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id ce10so1292522ejc.5
        for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 00:57:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=0bOY2HFRrUjp3hY8B5bQp5UE09BO7L9vwPLsux92Lcs=;
        b=oCKQfpOC3QFae54cnhhAROBIBdjbdAJX3XF0l9nmHUxJS/FWwXP2ZHISANWLenjdRb
         aVnYN2aWy6R+6qgAUe50cB8a1kD+/EOdFEGybeTQBnhqUAf+TmHH3pRdS7LTy+6IKTx9
         4lpv8320DyOI84OGTuEIKr/eeFbFft9zf2Sxal+W4LKnbOHW/8704R4sV3sQqh6ot8aW
         0WJLGJDU1b7TFCQ0ubjLxQ2BmYDovqFvnj5fB662/g34gWMS7a28hiGpEu1D7SeeSltd
         P1iQy/bb20nCaT9S9+FS7WbwO95Isf4CsnUIsRpiZ33/RQzmiuw22gnpka+zX9GK9lDQ
         c1aQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=0bOY2HFRrUjp3hY8B5bQp5UE09BO7L9vwPLsux92Lcs=;
        b=gaRUsZnkBYVVKqd7V253tHRSV85TT1r5l3Dv/9CmyUPwEP4WB+/Aqa9GiyUIsnPaib
         0hsTImDQ4aP9EPswm6HMAydRerS7uc2WDfoe4z6yFAJPB174tQvFSaX37jsWtN8nNp63
         70QdZ9pqJY0MVJw7oEneO0mrBsXkBO+SOSFcVcaQ6lwyPxIyjt6LiFXXRHC3N84WotyX
         1OE1sROsnzWRGaoLefCTLchEgThmHg13fjd399JQqRQ9e80z6CbqdehkcecgiD5N/06u
         NkGszsFTv9E76x/Tk37Ax5khxXS1rzrOqOfZ2YzSKoJhhLgIkDf49ajjCzeouWcUwM4p
         77TQ==
X-Gm-Message-State: AOAM531gsBHpo0wMs4DV/XHvT8rtJNa9OebYXkfAAq3HI4HwbEti3Nhp
	OgOrEOix/r6y6+XL9ig0ey4=
X-Google-Smtp-Source: ABdhPJxi0FcaunaorqVm4yX8WQKGEbIFDU8QOU3lVZX5Ez5prpCm+A7EK0wp7CtYS76Ibg1Ap1Fm4g==
X-Received: by 2002:a17:906:28db:: with SMTP id p27mr1922657ejd.424.1603180635945;
        Tue, 20 Oct 2020 00:57:15 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
        by smtp.gmail.com with ESMTPSA id k9sm1496515ejv.113.2020.10.20.00.57.14
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 20 Oct 2020 00:57:15 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Wei Liu'" <wl@xen.org>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Tim Deegan'" <tim@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-3-git-send-email-olekstysh@gmail.com>
Subject: RE: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
Date: Tue, 20 Oct 2020 08:57:13 +0100
Message-ID: <004001d6a6b6$9ffd3ac0$dff7b040$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAJRJ+K5qmWNg2A=
Content-Language: en-gb

> -----Original Message-----
> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Andrew Cooper <andrew.cooper3@citrix.com>;
> George Dunlap <george.dunlap@citrix.com>; Ian Jackson <iwj@xenproject.org>; Jan Beulich
> <jbeulich@suse.com>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei
> Liu <wl@xen.org>; Roger Pau Monné <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>; Tim Deegan
> <tim@xen.org>; Julien Grall <julien.grall@arm.com>
> Subject: [PATCH V2 02/23] xen/ioreq: Make x86's IOREQ feature common
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> As a lot of x86 code can be re-used on Arm later on, this patch
> moves the previously prepared x86/hvm/ioreq.c to the common code.
> 
> The common IOREQ feature is supposed to be built with the IOREQ_SERVER
> option enabled, which is selected by x86's HVM config for now.
> 
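[For reference, the Kconfig wiring described above amounts to something like the following sketch. The IOREQ_SERVER option name is from the patch itself; the surrounding entries are assumed, not quoted from the series.]

```kconfig
# xen/common/Kconfig (sketch): a hidden option, enabled only via "select"
config IOREQ_SERVER
	bool

# xen/arch/x86/Kconfig (sketch): x86's HVM support pulls it in
config HVM
	...
	select IOREQ_SERVER
```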
> In order to avoid having a gigantic patch here, the subsequent
> patches will update remaining bits in the common code step by step:
> - Make IOREQ related structs/materials common
> - Drop the "hvm" prefixes and infixes

FWIW you could tackle the naming changes in patch #1.

> - Remove layering violation by moving corresponding fields
>   out of *arch.hvm* or abstracting away accesses to them
> 
> This support is going to be used on Arm to be able to run a device
> emulator outside of the Xen hypervisor.
>=20
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> ***
> Please note, this patch depends on the following, which is
> under review:
> https://patchwork.kernel.org/patch/11816689/
> ***
> 
> Changes RFC -> V1:
>    - was split into three patches:
>      - x86/ioreq: Prepare IOREQ feature for making it common
>      - xen/ioreq: Make x86's IOREQ feature common
>      - xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
>    - update MAINTAINERS file
>    - do not use a separate subdir for the IOREQ stuff, move it to:
>      - xen/common/ioreq.c
>      - xen/include/xen/ioreq.h
>    - update x86's files to include xen/ioreq.h
>    - remove unneeded headers in arch/x86/hvm/ioreq.c
>    - re-order the headers alphabetically in common/ioreq.c
>    - update common/ioreq.c according to the newly introduced arch functions:
>      arch_hvm_destroy_ioreq_server()/arch_handle_hvm_io_completion()
> 
> Changes V1 -> V2:
>    - update patch description
>    - make everything needed in the previous patch to achieve
>      a true rename here
>    - don't include unnecessary headers from asm-x86/hvm/ioreq.h
>      and xen/ioreq.h
>    - use __XEN_IOREQ_H__ instead of __IOREQ_H__
>    - move get_ioreq_server() to common/ioreq.c
> ---
>  MAINTAINERS                     |    8 +-
>  xen/arch/x86/Kconfig            |    1 +
>  xen/arch/x86/hvm/Makefile       |    1 -
>  xen/arch/x86/hvm/ioreq.c        | 1422 ---------------------------------------
>  xen/arch/x86/mm.c               |    2 +-
>  xen/arch/x86/mm/shadow/common.c |    2 +-
>  xen/common/Kconfig              |    3 +
>  xen/common/Makefile             |    1 +
>  xen/common/ioreq.c              | 1422 +++++++++++++++++++++++++++++++++++++++
>  xen/include/asm-x86/hvm/ioreq.h |   39 +-
>  xen/include/xen/ioreq.h         |   71 ++
>  11 files changed, 1509 insertions(+), 1463 deletions(-)
>  delete mode 100644 xen/arch/x86/hvm/ioreq.c
>  create mode 100644 xen/common/ioreq.c
>  create mode 100644 xen/include/xen/ioreq.h
> 
[snip]
> +static void hvm_remove_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct domain *d = s->target;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +
> +    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> +        return;
> +
> +    if ( guest_physmap_remove_page(d, iorp->gfn,
> +                                   page_to_mfn(iorp->page), 0) )
> +        domain_crash(d);
> +    clear_page(iorp->va);
> +}
> +
> +static int hvm_add_ioreq_gfn(struct hvm_ioreq_server *s, bool buf)
> +{
> +    struct domain *d = s->target;
> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
> +    int rc;
> +
> +    if ( gfn_eq(iorp->gfn, INVALID_GFN) )
> +        return 0;
> +
> +    clear_page(iorp->va);
> +
> +    rc = guest_physmap_add_page(d, iorp->gfn,
> +                                page_to_mfn(iorp->page), 0);
> +    if ( rc == 0 )
> +        paging_mark_pfn_dirty(d, _pfn(gfn_x(iorp->gfn)));
> +
> +    return rc;
> +}
> +

The 'legacy' mechanism of mapping magic pages for ioreq servers should
remain x86-specific; I think that aspect of the code needs to stay
behind and not get moved into common code. You could do that via
arch-specific calls in hvm_ioreq_server_enable/disable() and
hvm_get_ioreq_server_info().

  Paul



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 08:00:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 08:00:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8958.24116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUmYt-0007J0-G7; Tue, 20 Oct 2020 08:00:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8958.24116; Tue, 20 Oct 2020 08:00:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUmYt-0007IY-87; Tue, 20 Oct 2020 08:00:03 +0000
Received: by outflank-mailman (input) for mailman id 8958;
 Tue, 20 Oct 2020 08:00:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUmYs-00072o-E0
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:00:02 +0000
Received: from mail-ej1-x633.google.com (unknown [2a00:1450:4864:20::633])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b8c3803-4e72-47aa-b4f6-b5bc93dc2fb0;
 Tue, 20 Oct 2020 08:00:01 +0000 (UTC)
Received: by mail-ej1-x633.google.com with SMTP id md26so1282852ejb.10
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 01:00:01 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-230.amazon.com. [54.240.197.230])
 by smtp.gmail.com with ESMTPSA id v1sm1400374eds.47.2020.10.20.00.59.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 01:00:00 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kUmYs-00072o-E0
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:00:02 +0000
X-Inumbo-ID: 2b8c3803-4e72-47aa-b4f6-b5bc93dc2fb0
Received: from mail-ej1-x633.google.com (unknown [2a00:1450:4864:20::633])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2b8c3803-4e72-47aa-b4f6-b5bc93dc2fb0;
	Tue, 20 Oct 2020 08:00:01 +0000 (UTC)
Received: by mail-ej1-x633.google.com with SMTP id md26so1282852ejb.10
        for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 01:00:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=7a66diox9vHPT1Z9E79vY+WL73WqD7Lj+skCoPAkZNM=;
        b=VSa2OTKJ73WWkWHTo1NYNSda5Mi/f1t93CZjLPkE5OyWIjQ+VXTKH2o5DwAFEtOHD1
         gpTXGUI5FZ7XFmBRZGXZJznGBGsk5aYxaCgxmnDOeAr5ofTozpTZE0NRWs0rvL6om08+
         etWQ3/RxC7XgtnBS/IzgTdD0Spp+F0eInNPD8oIfjEEHeqxBVxROE3ZUEDCSdyg1Di/Q
         0Ts4Qgxalud/vVVRXpmMMpBVBpF33Iz/kfxli15k8KqMlafc7lxamoO7YzoUZY6T4KDK
         VkwazBs3k/sHAIkPkciaUY2sBGHA8gJrjIfFepYZemZfGz+UigEMFARKPBs2BkelAqJM
         yqIg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=7a66diox9vHPT1Z9E79vY+WL73WqD7Lj+skCoPAkZNM=;
        b=esBv7QhTVbPjhFCUthR97GiCybGD7yqElWzQ4OTnI7zbUb/ltuHzQw+yrpRVkrExQM
         XeWihEfkmg3QkVT4/LI2Y7eG9uuyaDd8gyJo7wInMj4+AbXOgmBMe8PTCMn/oy8KzNND
         f5MYbthHtemHSpoR2YCXyM3kcbfYbAbagyKkFGQY96TeYWlFxRmfYPy/yS/QJpi8VX++
         CPzywVXCTFjN6cLeAZYhuOi995bvjtLTSrbV+GV0ghrafO1xNor+w8putSarf59N9fPn
         CAfwDlMkqlHbl5OHf7/JRGKG6c+gXrXfiXQxAyIT/pjhE/c2knx2/cunGhzDu979ec9O
         OeCA==
X-Gm-Message-State: AOAM531mzER6Z4X/3/Wi/TCbyBCeFtEVGxP23XTgfZK7zEk9xITMxkWN
	EhE9NMyZeE2xVZKSJvyYXvg=
X-Google-Smtp-Source: ABdhPJy1yG1mAAOmS5ajJQR1wC7D0H0FoOdKcIJHfPs6q/UvZbl+3BMujicBk7QAdJJG17h+YnnzMA==
X-Received: by 2002:a17:907:40bb:: with SMTP id nu19mr1786359ejb.246.1603180800820;
        Tue, 20 Oct 2020 01:00:00 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-230.amazon.com. [54.240.197.230])
        by smtp.gmail.com with ESMTPSA id v1sm1400374eds.47.2020.10.20.00.59.59
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 20 Oct 2020 01:00:00 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-4-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-4-git-send-email-olekstysh@gmail.com>
Subject: RE: [PATCH V2 03/23] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
Date: Tue, 20 Oct 2020 08:59:58 +0100
Message-ID: <004101d6a6b7$02452440$06cf6cc0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAH+e+xpqmgq2XA=
Content-Language: en-gb

> -----Original Message-----
> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien.grall@arm.com>
> Subject: [PATCH V2 03/23] xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
>
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> IOREQ is a common feature now and this helper will be used
> on Arm as-is. Move it to xen/ioreq.h and remove the "hvm" prefix.
>
> Although PIO handling on Arm is not introduced with the current series
> (it will be implemented when we add support for vPCI), technically
> PIOs exist on Arm (though they are accessed the same way as MMIO)
> and it would be better not to diverge now.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 08:09:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 08:09:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8964.24128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUmiF-0008Aj-El; Tue, 20 Oct 2020 08:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8964.24128; Tue, 20 Oct 2020 08:09:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUmiF-0008Ac-B4; Tue, 20 Oct 2020 08:09:43 +0000
Received: by outflank-mailman (input) for mailman id 8964;
 Tue, 20 Oct 2020 08:09:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUmiD-0008AW-Ki
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:09:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4bcebfb7-c15e-4e9f-8460-b0d60cee3510;
 Tue, 20 Oct 2020 08:09:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A8138B2B1;
 Tue, 20 Oct 2020 08:09:39 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kUmiD-0008AW-Ki
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:09:41 +0000
X-Inumbo-ID: 4bcebfb7-c15e-4e9f-8460-b0d60cee3510
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4bcebfb7-c15e-4e9f-8460-b0d60cee3510;
	Tue, 20 Oct 2020 08:09:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603181379;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NlkD+PpM+3EgdR7gKys/vxIhMN4702yOowUMx0BeWDc=;
	b=jgfPuOKq5p+ruQZA2Qjbpn5/plOU4UHUQGd4GDJ5nS5wQier6T1Q1Qklcr8eZsvSzX83F9
	neRRiBBSohqZWVXOouF/zZG5bcXS0fQ+sQqa2uTtSxAYErkSeysdOR15SxCcFVV0w+rpGv
	YUn/ksiielHeM3kv6eR6xylPsc03Fnc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A8138B2B1;
	Tue, 20 Oct 2020 08:09:39 +0000 (UTC)
Subject: Re: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201009150948.31063-1-andrew.cooper3@citrix.com>
 <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
 <01bb2f27-4e0b-3637-e456-09eb7b9b233e@citrix.com>
 <1786f728-15c2-3877-c01a-035b11bd8504@suse.com>
 <82e64d10-50be-68ab-127b-99d205a0a768@citrix.com>
 <6430fef8-23f1-f4ef-8741-5e089eaa0df9@suse.com>
 <8b618252-4535-a8d9-efb9-0c1fba176ff4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4eb096ab-7052-f6a9-a5ee-74d18683dde3@suse.com>
Date: Tue, 20 Oct 2020 10:09:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <8b618252-4535-a8d9-efb9-0c1fba176ff4@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 19.10.2020 18:12, Andrew Cooper wrote:
> On 19/10/2020 10:09, Jan Beulich wrote:
>> On 16.10.2020 17:38, Andrew Cooper wrote:
>>> On 15/10/2020 09:01, Jan Beulich wrote:
>>>> On 14.10.2020 15:57, Andrew Cooper wrote:
>>>>> Running with corrupt state is every bit as much an XSA as hitting a VMEntry
>>>>> failure if it can be triggered by userspace, but the latter is safer and
>>>>> much more obvious.
>>>> I disagree. For CPL > 0 we don't "corrupt" guest state any more
>>>> than reporting a #GP fault when one is going to be reported
>>>> anyway (as long as the VM entry doesn't fail, and hence the
>>>> guest won't get crashed). IOW this raising of #GP actually is a
>>>> precautionary measure to _avoid_ XSAs.
>>> It does not remove any XSAs.  It merely hides them.
>> How that? If we convert the ability of guest user mode to crash
>> the guest into delivery of #GP(0), how is there a hidden XSA then?
> 
> Because userspace being able to trigger this fixup is still an XSA.

How do you know without a specific case at hand? It may well be
that all that's impacted is guest user space, in which case I
don't see why there would need to be an XSA. It's still a bug
then, sure.

>>>>> It was the appropriate security fix (give or take the functional bug in
>>>>> it) at the time, given the complexity of retrofitting zero length
>>>>> instruction fetches to the emulator.
>>>>>
>>>>> However, it is one of a very long list of guest-state-induced VMEntry
>>>>> failures, with non-trivial logic which we assert will pass, on a
>>>>> fastpath, where hardware also performs the same checks and we already
>>>>> have a runtime safe way of dealing with errors.  (Hence not actually
>>>>> using ASSERT_UNREACHABLE() here.)
>>>> "Runtime safe" as far as Xen is concerned, I take it. This isn't safe
>>>> for the guest at all, as vmx_failed_vmentry() results in an
>>>> unconditional domain_crash().
>>> Any VMEntry failure is a bug in Xen.  If userspace can trigger it, it is
>>> an XSA, *irrespective* of whether we crash the domain then and there, or
>>> whether we let it try and limp on with corrupted state.
>> Allowing the guest to continue with corrupted state is not a
>> useful thing to do, I agree. However, what falls under
>> "corrupted" seems to be different for you and me. I'd not call
>> delivery of #GP "corruption" in any way.
> 
> I can only repeat my previous statement:
> 
>> There are legal states where RIP is 0x0000800000000000 and #GP is the
>> wrong thing to do.
> 
> Blindly raising #GP is not always the right thing to do.

Again - we're in agreement about "blindly". Let's be less blind.

>>  The primary goal ought
>> to be that we don't corrupt the guest kernel view of the world.
>> It may then have the opportunity to kill the offending user
>> mode process.
> 
> By the time we have hit this case, all bets are off, because Xen *is*
> malfunctioning.  We have no idea if kernel context is still intact.  You
> don't even know that current user context is the correct offending
> context to clobber, and might be creating a user=>user DoS vulnerability.
> 
> We definitely have an XSA to find and fix, and we can either make it
> very obvious and likely to be reported, or hidden and liable to go
> unnoticed for a long period of time.

Why would it go unnoticed when we log the incident? I very much
hope people inspect their logs at least every once in a while ...

And as per above - I disagree with your use of "definitely" here.
We have a bug to analyze and fix, yes. Whether it's an XSA-worthy
one isn't known ahead of time, unless we crash the guest in such
a case.

In any event I think it's about time that the VMX maintainers
voice their views here, as they're the ones to approve of
whichever change we end up with. Kevin, Jun?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 08:15:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 08:15:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8966.24139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUmnv-0000ck-3V; Tue, 20 Oct 2020 08:15:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8966.24139; Tue, 20 Oct 2020 08:15:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUmnv-0000cd-0Z; Tue, 20 Oct 2020 08:15:35 +0000
Received: by outflank-mailman (input) for mailman id 8966;
 Tue, 20 Oct 2020 08:15:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUmnt-0000cY-6H
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:15:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c2ee5f0-32e1-4504-874a-3da47fcbb2c5;
 Tue, 20 Oct 2020 08:15:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79859AC6D;
 Tue, 20 Oct 2020 08:15:31 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kUmnt-0000cY-6H
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:15:33 +0000
X-Inumbo-ID: 6c2ee5f0-32e1-4504-874a-3da47fcbb2c5
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 6c2ee5f0-32e1-4504-874a-3da47fcbb2c5;
	Tue, 20 Oct 2020 08:15:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603181731;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5LcSdpqMy4IhpO30RX9eV6Ws/DFikDuqIjDhsqzf0W8=;
	b=kgPjyDFXCDXNq6FnFRPVAA7+zKAn8tYRRRxrCMuah0Hig6GJXYxjCccKv3/T8fZQqTs4Dy
	jWejzJa6tNlxZv8OnjMCUeRIGqj7WAtgQpxetyr8sRhc12bgUs4XhrAR1u9LaSNh5IX7Po
	p3Mu/0xhZW4iO+Pf5YQr9fwAimVirbg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 79859AC6D;
	Tue, 20 Oct 2020 08:15:31 +0000 (UTC)
Subject: Re: Ryzen 4000 (Mobile) Softlocks/Micro-stutters
To: Dylanger Daly <dylangerdaly@protonmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <9lQU_gCfRzGyyNb2j86pxTMi1IET1Iq7iK3994agUZPrTI5Xd-aCJAaRYuJlD3L5LT2WaV4N3-YF4xKl5ukialT0M_YD0ve6gmDFFfatpXw=@protonmail.com>
 <2cc5da3e-0ad0-4647-f1ca-190788c2910b@citrix.com>
 <3pKjdPYCiRimYjqHQP0xd_vqhoTOJqthTXOrY_rLeNvnQEpIF24gXDKgRhmr95JfARJzbVJVbfTrrJeiovGVHGbV0QBSZ2jez2Y_wt6db7g=@protonmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <768d9dbb-4387-099f-b489-7952d7e883b0@suse.com>
Date: Tue, 20 Oct 2020 10:15:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <3pKjdPYCiRimYjqHQP0xd_vqhoTOJqthTXOrY_rLeNvnQEpIF24gXDKgRhmr95JfARJzbVJVbfTrrJeiovGVHGbV0QBSZ2jez2Y_wt6db7g=@protonmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.2020 01:37, Dylanger Daly wrote:
>> This wants reporting (with sufficient data, i.e. at least a serial log)
> 
> Hm, I'm not sure there's UART on this Laptop, can I save the boot log somewhere?

If the system remains sufficiently usable, "xl dmesg" will give you
the log. But you won't be able to get away without a serial-like
console (USB2 debug port may be an alternative, if you have a
suitable cable and if the USB topology in the laptop doesn't
prevent it functioning). Yes, laptops are always problematic in
this regard.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 08:49:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 08:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8970.24152 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnKD-0003I2-On; Tue, 20 Oct 2020 08:48:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8970.24152; Tue, 20 Oct 2020 08:48:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnKD-0003Hv-LX; Tue, 20 Oct 2020 08:48:57 +0000
Received: by outflank-mailman (input) for mailman id 8970;
 Tue, 20 Oct 2020 08:48:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UdfY=D3=oracle.com=john.haxby@srs-us1.protection.inumbo.net>)
 id 1kUnKC-0003Hq-V4
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:48:57 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d34c97ec-9473-4cef-a996-941de3f98055;
 Tue, 20 Oct 2020 08:48:54 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09K8i7rI188082;
 Tue, 20 Oct 2020 08:48:12 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2120.oracle.com with ESMTP id 347s8msmp0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 20 Oct 2020 08:48:12 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09K8is45150623;
 Tue, 20 Oct 2020 08:48:12 GMT
Received: from pps.reinject (localhost [127.0.0.1])
 by userp3030.oracle.com with ESMTP id 348ahw07cp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 20 Oct 2020 08:48:12 +0000
Received: from userp3030.oracle.com (userp3030.oracle.com [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 09K8mAEe159753;
 Tue, 20 Oct 2020 08:48:10 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 348ahw07bh-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 20 Oct 2020 08:48:10 +0000
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 09K8lvTX021447;
 Tue, 20 Oct 2020 08:47:58 GMT
Received: from [10.175.164.120] (/10.175.164.120)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 20 Oct 2020 01:47:57 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=UdfY=D3=oracle.com=john.haxby@srs-us1.protection.inumbo.net>)
	id 1kUnKC-0003Hq-V4
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:48:57 +0000
X-Inumbo-ID: d34c97ec-9473-4cef-a996-941de3f98055
Received: from userp2120.oracle.com (unknown [156.151.31.85])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d34c97ec-9473-4cef-a996-941de3f98055;
	Tue, 20 Oct 2020 08:48:54 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
	by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09K8i7rI188082;
	Tue, 20 Oct 2020 08:48:12 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=from : message-id :
 content-type : mime-version : subject : date : in-reply-to : cc : to :
 references; s=corp-2020-01-29;
 bh=UBSvDuZX45rcdCwz1yIzvCzCqfd2joSCizhea4x+Xao=;
 b=q8gn83IPPWdFm1HT3taLhd9DUF/VTUU+yXtsd8f9nD6wriupB19ul4FsqzWGdtIZMcde
 GW+G9oeRVHaGmJfZ8muagtVwvuWLE6AywXuhak+OXkSgdFP6EIR2H2OqKDUhR7yIW2Vz
 zpamQMFlTWRfwdWHBA7I0p8HYGgPlEg7NOi5pNpKeOCI5/Zqu82RI3DyvlSb3YeNhNvu
 1nAbi2LxPOnr/RtC4QoVHdNGHfdCdQB+x9xvmqx+BqjtbEr8lrxt1aMjIali/bjhTn7W
 dzqjxrXdOv4FjsFo2kwRKNjQX5RScYby9/qqSUitUFXIKeMy4YBvXYVTZYkhax1TMfnU 0w== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
	by userp2120.oracle.com with ESMTP id 347s8msmp0-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
	Tue, 20 Oct 2020 08:48:12 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
	by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09K8is45150623;
	Tue, 20 Oct 2020 08:48:12 GMT
Received: from pps.reinject (localhost [127.0.0.1])
	by userp3030.oracle.com with ESMTP id 348ahw07cp-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
	Tue, 20 Oct 2020 08:48:12 +0000
Received: from userp3030.oracle.com (userp3030.oracle.com [127.0.0.1])
	by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 09K8mAEe159753;
	Tue, 20 Oct 2020 08:48:10 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
	by userp3030.oracle.com with ESMTP id 348ahw07bh-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
	Tue, 20 Oct 2020 08:48:10 +0000
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
	by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 09K8lvTX021447;
	Tue, 20 Oct 2020 08:47:58 GMT
Received: from [10.175.164.120] (/10.175.164.120)
	by default (Oracle Beehive Gateway v4.0)
	with ESMTP ; Tue, 20 Oct 2020 01:47:57 -0700
From: John Haxby <john.haxby@oracle.com>
Message-Id: <27A23102-A7F5-48C5-8972-48CE4C283C6E@oracle.com>
Content-Type: multipart/signed;
	boundary="Apple-Mail=_9F9749E9-79EA-41AB-B516-003ECE07BEE3";
	protocol="application/pgp-signature";
	micalg=pgp-sha256
Mime-Version: 1.0 (Mac OS X Mail 13.4 \(3608.120.23.2.4\))
Subject: Re: [Ocfs2-devel] [RFC] treewide: cleanup unreachable breaks
Date: Tue, 20 Oct 2020 09:47:45 +0100
In-Reply-To: <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
Cc: Tom Rix <trix@redhat.com>, alsa-devel@alsa-project.org,
        clang-built-linux <clang-built-linux@googlegroups.com>,
        Greg KH <gregkh@linuxfoundation.org>, linux-iio@vger.kernel.org,
        nouveau@lists.freedesktop.org, storagedev@microchip.com,
        dri-devel <dri-devel@lists.freedesktop.org>,
        virtualization@lists.linux-foundation.org, keyrings@vger.kernel.org,
        linux-mtd@lists.infradead.org, ath10k@lists.infradead.org,
        MPT-FusionLinux.pdl@broadcom.com,
        linux-stm32@st-md-mailman.stormreply.com,
        usb-storage@lists.one-eyed-alien.net, linux-watchdog@vger.kernel.org,
        devel@driverdev.osuosl.org, linux-samsung-soc@vger.kernel.org,
        linux-scsi@vger.kernel.org, linux-nvdimm <linux-nvdimm@lists.01.org>,
        amd-gfx list <amd-gfx@lists.freedesktop.org>,
        linux-acpi@vger.kernel.org, intel-wired-lan@lists.osuosl.org,
        industrypack-devel@lists.sourceforge.net, linux-pci@vger.kernel.org,
        spice-devel@lists.freedesktop.org, linux-media@vger.kernel.org,
        linux-serial@vger.kernel.org, linux-nfc@lists.01.org,
        linux-pm@vger.kernel.org, linux-can@vger.kernel.org,
        linux-block@vger.kernel.org, linux-gpio@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-amlogic@lists.infradead.org,
        openipmi-developer@lists.sourceforge.net,
        platform-driver-x86@vger.kernel.org, linux-integrity@vger.kernel.org,
        Linux ARM <linux-arm-kernel@lists.infradead.org>,
        linux-edac@vger.kernel.org, George Burgess <gbiv@google.com>,
        Network Development <netdev@vger.kernel.org>,
        linux-usb@vger.kernel.org,
        linux-wireless <linux-wireless@vger.kernel.org>,
        LKML <linux-kernel@vger.kernel.org>,
        linux-security-module@vger.kernel.org,
        "open list:HARDWARE RANDOM NUMBER GENERATOR CORE" <linux-crypto@vger.kernel.org>,
        patches@opensource.cirrus.com, bpf <bpf@vger.kernel.org>,
        ocfs2-devel@oss.oracle.com, linux-power@fi.rohmeurope.com
To: Nick Desaulniers <ndesaulniers@google.com>
References: <20201017160928.12698-1-trix@redhat.com>
 <20201018054332.GB593954@kroah.com>
 <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
X-Mailer: Apple Mail (2.3608.120.23.2.4)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9779 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 suspectscore=0
 lowpriorityscore=0 mlxlogscore=999 priorityscore=1501 spamscore=0
 phishscore=0 clxscore=1011 bulkscore=0 impostorscore=0 adultscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010200059


--Apple-Mail=_9F9749E9-79EA-41AB-B516-003ECE07BEE3
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=us-ascii



> On 19 Oct 2020, at 20:42, Nick Desaulniers <ndesaulniers@google.com> wrote:
>
> We probably should add all 3 to W=2 builds (wrapped in cc-option).
> I've filed https://github.com/ClangBuiltLinux/linux/issues/1180 to
> follow up on.

It looks as though the URL mangling has been fixed. If anyone sees
that specific URL mangled, please let me know.

jch
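[Editorial note: the cc-option wrapping Nick mentions in the quoted message
could be sketched as a Kbuild fragment roughly like the one below. This is
a sketch only; the specific warning flags shown are illustrative
assumptions, not confirmed by this thread.]

```make
# Hypothetical Kbuild fragment: enable the extra warnings only at W=2,
# and only when the compiler actually supports each flag, so builds with
# older compilers are unaffected.
ifneq ($(findstring 2, $(KBUILD_EXTRA_WARN)),)
KBUILD_CFLAGS += $(call cc-option,-Wunreachable-code-break)
KBUILD_CFLAGS += $(call cc-option,-Wunreachable-code-return)
endif
```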

--Apple-Mail=_9F9749E9-79EA-41AB-B516-003ECE07BEE3
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
	filename=signature.asc
Content-Type: application/pgp-signature;
	name=signature.asc
Content-Description: Message signed with OpenPGP

-----BEGIN PGP SIGNATURE-----
Comment: GPGTools - http://gpgtools.org

iHUEAREIAB0WIQT+pxvb11CFWUkNSOVFC7t+lC+jyAUCX46kMQAKCRBFC7t+lC+j
yBKiAP90JVXdPzuAwtRGkROpw1eVCo7wCaZ5nOa8Oo0sN6gC9gD/S0eGTqQhmg+n
sXPJxPYqQsg09qmS6k/HX+AP5Oz2AMo=
=xx66
-----END PGP SIGNATURE-----

--Apple-Mail=_9F9749E9-79EA-41AB-B516-003ECE07BEE3--


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 08:51:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 08:51:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8981.24164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnN3-00047p-AT; Tue, 20 Oct 2020 08:51:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8981.24164; Tue, 20 Oct 2020 08:51:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnN3-00047i-7P; Tue, 20 Oct 2020 08:51:53 +0000
Received: by outflank-mailman (input) for mailman id 8981;
 Tue, 20 Oct 2020 08:51:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KWTV=D3=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kUnN2-00047d-AQ
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:51:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 60b49909-be9c-4136-8178-d67340ccc431;
 Tue, 20 Oct 2020 08:51:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0D73AAF44;
 Tue, 20 Oct 2020 08:51:50 +0000 (UTC)
X-Inumbo-ID: 60b49909-be9c-4136-8178-d67340ccc431
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603183910;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=arOgXeiLTKU2wh0YCFEyIQpNOe0gMByJ4Vpip6u4gUs=;
	b=L6ns0KGI+kutmE2u3/upVcEkNBgS5qKkk/wTYpjsXQf0ks8A9ZP4ysy9Mev6klPJ/EyZu2
	qWSrh4hIr+voFf7iC1c53YoXAm44SkWDbsQjQY0ty54UdehohZ9QNSAsbthCfyXEYQ6MRp
	gLOP65mdJW1GXL5i7oV+Iiz1DJtJfNw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0D73AAF44;
	Tue, 20 Oct 2020 08:51:50 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/libs/ctrl: fix dumping of ballooned guest
Date: Tue, 20 Oct 2020 10:51:43 +0200
Message-Id: <20201020085143.21303-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today a guest with memory < maxmem can't be dumped via xl dump-core without
triggering an error message:

xc: info: exceeded nr_pages (262144) losing pages

If the last page of the guest isn't allocated, the loop in
xc_domain_dumpcore_via_callback() will always emit this message, because
the number of already dumped pages is tested before the next page is
checked for validity.
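In simplified form, the ordering problem looks roughly like the sketch below (hypothetical stand-in helpers, not the actual libxc code): the old loop tests the dump counter before skipping unallocated (ballooned) pages, so a hole at the end of the p2m space spuriously trips the "exceeded nr_pages" path, while deferring the check lets the loop merely count any genuine overrun and report it once afterwards.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of a guest: pfns below hole_from are allocated,
 * pfns from hole_from up to max_pfn are ballooned-out holes. */
static bool pfn_is_valid(size_t pfn, size_t hole_from)
{
    return pfn < hole_from;
}

/* Old ordering: the dump counter is tested *before* page validity.
 * Returns true if the "exceeded nr_pages" condition fires. */
static bool old_order_overflows(size_t max_pfn, size_t nr_pages,
                                size_t hole_from)
{
    size_t j = 0;
    for (size_t i = 0; i < max_pfn; i++) {
        if (j >= nr_pages)
            return true;                /* "losing pages" message */
        if (!pfn_is_valid(i, hole_from))
            continue;                   /* ballooned page, skip */
        j++;                            /* page gets dumped */
    }
    return false;
}

/* Fixed ordering: skip invalid pages first, then count; a single
 * check after the loop reports how many pages were really lost. */
static size_t new_order_lost(size_t max_pfn, size_t nr_pages,
                             size_t hole_from)
{
    size_t j = 0;
    for (size_t i = 0; i < max_pfn; i++) {
        if (!pfn_is_valid(i, hole_from))
            continue;
        j++;
    }
    return j > nr_pages ? j - nr_pages : 0;
}
```

With a guest of 8 pfns, 6 allocated pages and the last two ballooned out, the old ordering reports an overflow even though exactly nr_pages pages are dumpable; the fixed ordering reports nothing lost.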

Signed-off-by: Juergen Gross <jgross@suse.com>
---
Unfortunately this patch isn't a complete fix, although I believe it is
needed. There is still a mismatch of exactly one page, and I have no idea
where it comes from.

---
 tools/libs/ctrl/xc_core.c | 32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/tools/libs/ctrl/xc_core.c b/tools/libs/ctrl/xc_core.c
index e8c6fb96f9..d83e3726b6 100644
--- a/tools/libs/ctrl/xc_core.c
+++ b/tools/libs/ctrl/xc_core.c
@@ -818,16 +818,6 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         {
             uint64_t gmfn;
             void *vaddr;
-            
-            if ( j >= nr_pages )
-            {
-                /*
-                 * When live dump-mode (-L option) is specified,
-                 * guest domain may increase memory.
-                 */
-                IPRINTF("exceeded nr_pages (%ld) losing pages", nr_pages);
-                goto copy_done;
-            }
 
             if ( !auto_translated_physmap )
             {
@@ -847,6 +837,12 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                        continue;
                 }
 
+                if ( j >= nr_pages )
+                {
+                    j++;
+                    continue;
+                }
+
                 p2m_array[j].pfn = i;
                 p2m_array[j].gmfn = gmfn;
             }
@@ -855,6 +851,12 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
                 if ( !xc_core_arch_gpfn_may_present(&arch_ctxt, i) )
                     continue;
 
+                if ( j >= nr_pages )
+                {
+                    j++;
+                    continue;
+                }
+
                 gmfn = i;
                 pfn_array[j] = i;
             }
@@ -879,7 +881,15 @@ xc_domain_dumpcore_via_callback(xc_interface *xch,
         }
     }
 
-copy_done:
+    if ( j > nr_pages )
+    {
+        /*
+         * When live dump-mode (-L option) is specified,
+         * guest domain may increase memory.
+         */
+        IPRINTF("exceeded nr_pages (%ld) losing %ld pages", nr_pages, j - nr_pages);
+    }
+
     sts = dump_rtn(xch, args, dump_mem_start, dump_mem - dump_mem_start);
     if ( sts != 0 )
         goto out;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 08:58:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 08:58:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.8986.24179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnSu-0004MP-1b; Tue, 20 Oct 2020 08:57:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 8986.24179; Tue, 20 Oct 2020 08:57:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnSt-0004MI-UL; Tue, 20 Oct 2020 08:57:55 +0000
Received: by outflank-mailman (input) for mailman id 8986;
 Tue, 20 Oct 2020 08:57:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUnSs-0004LV-Cs
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 08:57:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c1f69db-f1a6-4ba0-b554-c49d28ff03dd;
 Tue, 20 Oct 2020 08:57:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUnSi-0005qp-QO; Tue, 20 Oct 2020 08:57:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUnSi-0008LL-Gw; Tue, 20 Oct 2020 08:57:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUnSi-00060V-GS; Tue, 20 Oct 2020 08:57:44 +0000
X-Inumbo-ID: 6c1f69db-f1a6-4ba0-b554-c49d28ff03dd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lTtJwmQh3/ovITxYFXZ20WxY3LKOtihYDwfku/WoKpo=; b=1LYEcDwXm672SPUTEA8Eu5IPzH
	LLV2yHQ1Rds7ggvuKkNpGA2P4PrXQimxz3kWiNCwLOnkftJgI7wcIIP5M3mLWOrBIJlT4W/Fe0bAn
	MSoBbY3NAZqTrjzSn3g9/H9RFAz00ffHAyLgobC+JqH0dqwzAlz+GYiblz4uL63XX4/Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156019-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156019: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d76f4f97eb2772bf85fe286097183d0c7db19ae8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 08:57:44 +0000

flight 156019 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156019/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                d76f4f97eb2772bf85fe286097183d0c7db19ae8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   60 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  109 attempts
Testing same since   155981  2020-10-19 16:06:45 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47947 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:10:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:10:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9000.24191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUneh-00061R-Br; Tue, 20 Oct 2020 09:10:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9000.24191; Tue, 20 Oct 2020 09:10:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUneh-00061K-6Z; Tue, 20 Oct 2020 09:10:07 +0000
Received: by outflank-mailman (input) for mailman id 9000;
 Tue, 20 Oct 2020 09:10:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUnef-0005tU-QE
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:10:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6c8d09ac-30c7-4b0e-8a7a-3453196e9dfb;
 Tue, 20 Oct 2020 09:10:03 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUned-00067T-DM; Tue, 20 Oct 2020 09:10:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUned-0000uu-4w; Tue, 20 Oct 2020 09:10:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUned-0002Ok-4P; Tue, 20 Oct 2020 09:10:03 +0000
X-Inumbo-ID: 6c8d09ac-30c7-4b0e-8a7a-3453196e9dfb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dicV9po4tldQlm1lo7BQIso0mM/U99RD61wSSj2IiGw=; b=hejyUDI2IghI/IMAj+nXsqj3zJ
	EhywXWg2RV82IqthX7URyYV0JU5AcwaaYtAJNHOBRNy9tEBaVOY9G4HKPMy6dRrrH84WAla3BLKTN
	2nltClP0U2pNclrpVVh+Wc5xJPyixpAJVvVUn3DKURAlcZJQb7cJeA19bUQ57YgqfDPE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156018-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156018: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a7f0831e58bf4681d710e9a029644b6fa07b7cb0
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 09:10:03 +0000

flight 156018 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156018/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a7f0831e58bf4681d710e9a029644b6fa07b7cb0
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   155900  2020-10-16 15:04:05 Z    3 days
Testing same since   156018  2020-10-20 07:00:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0dfddb2116..a7f0831e58  a7f0831e58bf4681d710e9a029644b6fa07b7cb0 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:14:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:14:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9004.24206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnj1-0006Ho-TO; Tue, 20 Oct 2020 09:14:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9004.24206; Tue, 20 Oct 2020 09:14:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnj1-0006Hh-PM; Tue, 20 Oct 2020 09:14:35 +0000
Received: by outflank-mailman (input) for mailman id 9004;
 Tue, 20 Oct 2020 09:14:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUnj0-0006Hc-Rt
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:14:34 +0000
Received: from mail-ed1-x543.google.com (unknown [2a00:1450:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 648354df-1dee-4118-8ffb-f5f869deb208;
 Tue, 20 Oct 2020 09:14:34 +0000 (UTC)
Received: by mail-ed1-x543.google.com with SMTP id 33so1039262edq.13
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 02:14:33 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id b8sm1651263edv.20.2020.10.20.02.14.31
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 02:14:32 -0700 (PDT)
X-Inumbo-ID: 648354df-1dee-4118-8ffb-f5f869deb208
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=W4N5mWRts6PavQ4l3YyiTpohks7cMtr9uYUq5R+2IFQ=;
        b=t+YpzKmAb0+LVML1n5jCCspqoOHFiuwJJRQoLYdqTUY7QMYdn+6rIVX1Pbae2dRcFF
         LciwftYB3g1G/EF0yIWXiEvq8qPH0LajSVXOamRu9CzOjBT55SVgHbzfObSznIbhtiUy
         Zl78jywPL69+n0ZrmgFTbuT9mi9Q/dg+aoO6Yc6QsDv+V7kyTX/VBHGd1x56bxo+sB8v
         sJTBUCM26MC7F8zi6cnb5nLAdop97D1GVWwdCk0i5+a+Fa7NCNV/jUv6QihSnN6DAv8n
         r6uSi5/jk+1Qwe0gSlqOT8J2ygNIpLxsgx8M/23NfzqvFgnLF83MNV+WMQQVXYiKv298
         Ve0A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=W4N5mWRts6PavQ4l3YyiTpohks7cMtr9uYUq5R+2IFQ=;
        b=OgAdbOoaja+MgLg3cfjYeiUF1zBsMsUCt433k5wS0/ymjcvQ+XIRSQiCsuEfEaeaNz
         0ITZmvX1dybgIiluHGg9DJ4dRxxPVgXn3PDyhm5x7y8nCuKzNBBBFxeWP7rUPtX5YF1Q
         xdJs5R6b3hxdkv7zFwoRQhgmE34Ft+p54CLc0K3pliyloPAaxpG35Q4bTw+Qw9kkAgSy
         CRbdNwV5NFOTW/homCZr/0ioH9wCYEy2tEl769tJmO0PlsmHbsXG7djtuH/DgEs+fYcC
         Eu0WXsFDbB1W75GzpPDJn8tSyC2dq61U8NYkmIELzQKmtIC2RtymkKq1hUz8fQoENRTd
         osQA==
X-Gm-Message-State: AOAM531UKmQEfrMC9QCe7VIIMNYWMm1hygCqgkWYauUXiNWY3Stdms99
	/xtjD15PL0yBePYaj12DiHw=
X-Google-Smtp-Source: ABdhPJxKKj56OW/luoAHEz7Pu9q7uY2e/6elN5KrRgGIxiVxtgfb9MuSWL556BmYYIcAu3OqCC1uTw==
X-Received: by 2002:a05:6402:31af:: with SMTP id dj15mr1691550edb.275.1603185273175;
        Tue, 20 Oct 2020 02:14:33 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-5-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-5-git-send-email-olekstysh@gmail.com>
Subject: RE: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
Date: Tue, 20 Oct 2020 10:14:31 +0100
Message-ID: <004701d6a6c1$6c09f860$441de920$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAG4T9I6qmpcqnA=
Content-Language: en-gb

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien.grall@arm.com>
> Subject: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The IOREQ is a common feature now and Arm will have its own
> implementation.
> 
> But the name of the function is pretty generic and can be confusing
> on Arm (we already have a try_handle_mmio()).
> 
> In order not to rename the function (which is used for a varying
> set of purposes on x86) globally and get non-confusing variant on Arm
> provide an alias ioreq_complete_mmio() to be used on common and
> Arm code.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - new patch
> 
> Changes V1 -> V2:
>    - remove "handle"
>    - add Jan's A-b
> ---
>  xen/common/ioreq.c              | 2 +-
>  xen/include/asm-x86/hvm/ioreq.h | 2 ++
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index c89df7a..29ad48e 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -200,7 +200,7 @@ bool handle_hvm_io_completion(struct vcpu *v)
>          break;
> 
>      case HVMIO_mmio_completion:
> -        return handle_mmio();
> +        return ioreq_complete_mmio();
> 
>      case HVMIO_pio_completion:
>          return handle_pio(vio->io_req.addr, vio->io_req.size,
> diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
> index a3d8faa..a147856 100644
> --- a/xen/include/asm-x86/hvm/ioreq.h
> +++ b/xen/include/asm-x86/hvm/ioreq.h
> @@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
>  #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
>  #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
> 
> +#define ioreq_complete_mmio   handle_mmio
> +

A #define? Really? Can we not have a static inline?

  Paul

>  #endif /* __ASM_X86_HVM_IOREQ_H__ */
> 
>  /*
> --
> 2.7.4
> 




From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:16:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:16:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9006.24218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnkS-0006Ox-7r; Tue, 20 Oct 2020 09:16:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9006.24218; Tue, 20 Oct 2020 09:16:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnkS-0006Oq-4s; Tue, 20 Oct 2020 09:16:04 +0000
Received: by outflank-mailman (input) for mailman id 9006;
 Tue, 20 Oct 2020 09:16:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUnkQ-0006Ol-C1
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:16:02 +0000
Received: from mail-ej1-x632.google.com (unknown [2a00:1450:4864:20::632])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ef99ae0-19dd-4272-aacb-cc74fdb0e37f;
 Tue, 20 Oct 2020 09:16:01 +0000 (UTC)
Received: by mail-ej1-x632.google.com with SMTP id u8so1629422ejg.1
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 02:16:01 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id m6sm2056922ejl.94.2020.10.20.02.15.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 02:16:00 -0700 (PDT)
X-Inumbo-ID: 8ef99ae0-19dd-4272-aacb-cc74fdb0e37f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=GRAttMbN/GcTabZL2HUfvGNlRSn0zQv6Um7v05WMMG4=;
        b=C2NDlWHKUTTeBiGQggAsU3O35HRh3ogDeGUzF7TjXDrHqMSagxqViXojg+naxG3MO+
         ZKGg116pfVPt9ofImjjpyrwrnq1MATA0XD8uqfmSerTOBrzi1pqC3h7Z354CmvLIOKoJ
         CnLW44MINUrOFGImC06xxnbJaJJYdvpey5GWFKGfzZWSOVBtjxeYBNxdnGrX1gM34uNe
         Wz5899HXfxBPo44MwkAUbcvIkgxKkvSEinyepuErijEWYf0AQnrpYkVyK/tonu0KQqmF
         B8Dk/sQTNhQVP8eUtEXs281mc5pmkD1kk7QlmxGc9Bc+CZpOr9c6fjK0NtsRtmO8jwJg
         0J7Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=GRAttMbN/GcTabZL2HUfvGNlRSn0zQv6Um7v05WMMG4=;
        b=E0iJDRgX+WSGJG+HxZsN7WhMISxi1R2JzCL26Rg3OfArQORFRpoJzbhpUhaKMVQ/2m
         acMJxdrTudOkX9QyliFS2R8wnNOJnAv5Nyq8V6sHH8O7tPc68TsiIKcJBHLJcgsmtoGk
         h1aa88pz4InWIZiqDyoGVgBQSgHXm1LapvFAeX6vXEEt+/IjINSeusfTyN2o772sJinv
         KX1ucpNlD/2VkyoHG5HcWLMDgc4QfHntuOKqrHNvy/3rMok4KbIBB+/ZRXHekRXb/wZn
         K0k2Ysrs6P1gXqOa8gv7iufZPOW5QX7XSo+Fn9bFHhuGdMzrJiq1xaz1DbUOpuk5bVFj
         feeg==
X-Gm-Message-State: AOAM533RrUU5azdq8k8A6m7bajYBD2W8UxTieuTgVFhIHOlkYBjDAIgw
	iyvCqE9JhKgZuzwFgylReyY=
X-Google-Smtp-Source: ABdhPJz1+5NjxWARe/OSr65OJVNY8jninmojNdKdiTyYMD1HOp5fKvZkS4ZgZJ/+l42RcnRKLmd82w==
X-Received: by 2002:a17:906:4a98:: with SMTP id x24mr2084363eju.319.1603185360609;
        Tue, 20 Oct 2020 02:16:00 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-6-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-6-git-send-email-olekstysh@gmail.com>
Subject: RE: [PATCH V2 05/23] xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
Date: Tue, 20 Oct 2020 10:15:58 +0100
Message-ID: <004801d6a6c1$a0381fe0$e0a85fa0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAF29sarqmx8LuA=
Content-Language: en-gb

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien.grall@arm.com>
> Subject: [PATCH V2 05/23] xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The IOREQ is a common feature now and these helpers will be used
> on Arm as is. Move them to xen/ioreq.h and replace "hvm" prefixes
> with "ioreq".
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:20:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:20:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9012.24235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnow-0007HK-Rb; Tue, 20 Oct 2020 09:20:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9012.24235; Tue, 20 Oct 2020 09:20:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnow-0007HD-Oh; Tue, 20 Oct 2020 09:20:42 +0000
Received: by outflank-mailman (input) for mailman id 9012;
 Tue, 20 Oct 2020 09:20:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUnov-0007H8-8O
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:20:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cc6517b-685c-404c-a606-24c41c5362b3;
 Tue, 20 Oct 2020 09:20:39 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUnos-0006Kq-UQ; Tue, 20 Oct 2020 09:20:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUnos-0001ZT-Dp; Tue, 20 Oct 2020 09:20:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUnos-0007oX-DJ; Tue, 20 Oct 2020 09:20:38 +0000
X-Inumbo-ID: 2cc6517b-685c-404c-a606-24c41c5362b3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Eb08fu9/fdixZRZw43jtIhX66jTfVY37IBIcOZ9nO3A=; b=OF57XSKpAHty9zsNvGQP5OpBbs
	8XhwzLxoLkb6AtSD22knK9FFCJ8+0hx9NsZQ8EjOazb9WYIqby9xr+MLxvwGgWEk4CI5T994P9CDe
	i7djpLgtDEhvTdNrmzEVS47XImuu9uKsKzpcB6kT7JE4Tp9qJmCruOoUZijl9/HxVpew=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156017-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156017: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f82b827c92f77eac8debdce6ef9689d156771871
X-Osstest-Versions-That:
    ovmf=29d14d3a30fdfbe017d39b759423832054280f10
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 09:20:38 +0000

flight 156017 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156017/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f82b827c92f77eac8debdce6ef9689d156771871
baseline version:
 ovmf                 29d14d3a30fdfbe017d39b759423832054280f10

Last test of basis   156010  2020-10-20 03:31:16 Z    0 days
Testing same since   156017  2020-10-20 06:40:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gary Lin <glin@suse.com>
  Zhichao Gao <zhichao.gao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   29d14d3a30..f82b827c92  f82b827c92f77eac8debdce6ef9689d156771871 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:25:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:25:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9016.24251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUntN-0007V9-GD; Tue, 20 Oct 2020 09:25:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9016.24251; Tue, 20 Oct 2020 09:25:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUntN-0007V2-By; Tue, 20 Oct 2020 09:25:17 +0000
Received: by outflank-mailman (input) for mailman id 9016;
 Tue, 20 Oct 2020 09:25:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TRnX=D3=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kUntM-0007Ux-5r
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:25:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6529c936-f715-4e44-8c76-53f4f8469560;
 Tue, 20 Oct 2020 09:25:14 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUntH-0006RL-Mt; Tue, 20 Oct 2020 09:25:11 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUntH-0000eV-Ab; Tue, 20 Oct 2020 09:25:11 +0000
X-Inumbo-ID: 6529c936-f715-4e44-8c76-53f4f8469560
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=IP0aLCAZ0NPf9jmG0UGvDcOQv+dvf5RqIa86OPq8vuE=; b=26QXCBC1L8WOiCOqHGNuMPg+kn
	FG4dvK527rBcxZRKpw6aXamWZUlmK9PMOiuSjCz4qjOhhFXejYP2vm6ois1zxerZQBXzuKFPDfPqO
	SCHOK5cLu5YGIr7rAvDW1/ypDgPHHl4BwYFZxiAyBxU3qZ2gjyQ+V2oJ4I7+bUpKzJBo=;
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
 <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
 <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
 <4b77ba6d-bf49-7286-8f2a-53f7b2e7d122@xen.org>
 <4eb073bb-67ca-5376-bae1-e555d3c5fb30@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2eb42b0e-f31e-2c1e-28bf-32c366fb1688@xen.org>
Date: Tue, 20 Oct 2020 10:25:07 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <4eb073bb-67ca-5376-bae1-e555d3c5fb30@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 16/10/2020 13:09, Jan Beulich wrote:
> On 16.10.2020 11:36, Julien Grall wrote:
>> On 15/10/2020 13:07, Jan Beulich wrote:
>>> On 14.10.2020 13:40, Julien Grall wrote:
>>>> On 13/10/2020 15:26, Jan Beulich wrote:
>>>>> On 13.10.2020 16:20, Jürgen Groß wrote:
>>>>>> Especially Julien was rather worried by the current situation. In
>>>>>> case you can convince him the current handling is fine, we can
>>>>>> easily drop this patch.
>>>>>
>>>>> Julien, in the light of the above - can you clarify the specific
>>>>> concerns you (still) have?
>>>>
>>>> Let me start with the assumption that evtchn->lock is not held when
>>>> evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.
>>>
>>> But this isn't interesting - we know there are paths where it is
>>> held, and ones (interdomain sending) where it's the remote port's
>>> lock instead which is held. What's important here is that a
>>> _consistent_ lock be held (but it doesn't need to be evtchn's).
>>
>> Yes, a _consistent_ lock *should* be sufficient. But it is better to use
>> the same lock everywhere so it is easier to reason (see more below).
> 
> But that's already not the case, due to the way interdomain channels
> have events sent. You did suggest acquiring both locks, but as
> indicated at the time I think this goes too far. As far as the doc
> aspect - we can improve the situation. Iirc it was you who made me
> add the respective comment ahead of struct evtchn_port_ops.
> 
>>>>    From my understanding, the goal of lock_old_queue() is to return the
>>>> old queue used.
>>>>
>>>> last_priority and last_vcpu_id may be updated separately and I could not
>>>> convince myself that it would not be possible to return a queue that is
>>>> neither the current one nor the old one.
>>>>
>>>> The following could happen if evtchn->priority and
>>>> evtchn->notify_vcpu_id keep changing between calls.
>>>>
>>>> pCPU0				| pCPU1
>>>> 				|
>>>> evtchn_fifo_set_pending(v0,...)	|
>>>> 				| evtchn_fifo_set_pending(v1, ...)
>>>>     [...]				|
>>>>     /* Queue has changed */	|
>>>>     evtchn->last_vcpu_id = v0 	|
>>>> 				| -> evtchn_old_queue()
>>>> 				| v = d->vcpu[evtchn->last_vcpu_id];
>>>>      				| old_q = ...
>>>> 				| spin_lock(old_q->...)
>>>> 				| v = ...
>>>> 				| q = ...
>>>> 				| /* q and old_q would be the same */
>>>> 				|
>>>>     evtchn->last_priority = priority|
>>>>
>>>> If my diagram is correct, then pCPU1 would return a queue that is
>>>> neither the current nor old one.
>>>
>>> I think I agree.
>>>
>>>> In which case, I think it would at least be possible to corrupt the
>>>> queue. From evtchn_fifo_set_pending():
>>>>
>>>>            /*
>>>>             * If this event was a tail, the old queue is now empty and
>>>>             * its tail must be invalidated to prevent adding an event to
>>>>             * the old queue from corrupting the new queue.
>>>>             */
>>>>            if ( old_q->tail == port )
>>>>                old_q->tail = 0;
>>>>
>>>> Did I miss anything?
>>>
>>> I don't think you did. The important point though is that a consistent
>>> lock is being held whenever we come here, so two racing set_pending()
>>> aren't possible for one and the same evtchn. As a result I don't think
>>> the patch here is actually needed.
>>
>> I haven't yet read the rest of the patches in full detail, so I cannot
>> say whether this is necessary or not. However, at first glance, I don't
>> think it is sane to rely on a different lock to protect us. And don't
>> get me started on the lack of documentation...
>>
>> Furthermore, the implementation of lock_old_queue() suggests that the
>> code was planned to be lockless. Why would you need the loop otherwise?
> 
> The lock-less aspect of this affects multiple accesses to e.g.
> the same queue, I think.
I don't think we are talking about the same thing. What I was referring 
to is the following code:

static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
                                                 struct evtchn *evtchn,
                                                 unsigned long *flags)
{
     struct vcpu *v;
     struct evtchn_fifo_queue *q, *old_q;
     unsigned int try;

     for ( try = 0; try < 3; try++ )
     {
         v = d->vcpu[evtchn->last_vcpu_id];
         old_q = &v->evtchn_fifo->queue[evtchn->last_priority];

         spin_lock_irqsave(&old_q->lock, *flags);

         v = d->vcpu[evtchn->last_vcpu_id];
         q = &v->evtchn_fifo->queue[evtchn->last_priority];

         if ( old_q == q )
             return old_q;

         spin_unlock_irqrestore(&old_q->lock, *flags);
     }

     gprintk(XENLOG_WARNING,
             "dom%d port %d lost event (too many queue changes)\n",
             d->domain_id, evtchn->port);
     return NULL;
}

Given that evtchn->last_vcpu_id and evtchn->last_priority can only be 
modified in evtchn_fifo_set_pending(), this suggests that the function is 
expected to be called multiple times concurrently on the same event channel.

> I'm unconvinced it was really considered
> whether racing sending on the same channel is also safe this way.

How would you explain the three tries in lock_old_queue() then?

> 
>> Therefore, regardless the rest of the discussion, I think this patch
>> would be useful to have for our peace of mind.
> 
> That's a fair position to take. My counterargument is mainly
> that readability (and hence maintainability) suffers with those
> changes.

We surely have different opinions... I don't particularly care about the 
approach as long as it is *properly* documented.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:28:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:28:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9019.24262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnwH-0007hE-1Y; Tue, 20 Oct 2020 09:28:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9019.24262; Tue, 20 Oct 2020 09:28:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUnwG-0007h7-Ug; Tue, 20 Oct 2020 09:28:16 +0000
Received: by outflank-mailman (input) for mailman id 9019;
 Tue, 20 Oct 2020 09:28:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUnwG-0007h2-2I
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:28:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28d1500d-3cf4-4c7e-8b9c-526fd1e0456f;
 Tue, 20 Oct 2020 09:28:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BD5A6ABCC;
 Tue, 20 Oct 2020 09:28:13 +0000 (UTC)
X-Inumbo-ID: 28d1500d-3cf4-4c7e-8b9c-526fd1e0456f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603186093;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lt0Wrl6rx3rsiH6AE/De57HWZ93c+gPsRNaUPfUJcpE=;
	b=uDubRB768QTfgwBcKZMaRAs/ydYCT6r+vC1jryc5aj7gay9u1mcjrO9nhwScZSVzV8iOmv
	MKx46NDGo1QZOIpsnGD8eoruDZa2k4TPKHa5nPON/9Wlhc/OVCAg6tT364PjL3PD9vSrms
	crU3Fny5s+rDi0ed24vMQuo8C1/t4uo=
Subject: Re: [PATCH v3 2/2] xen/evtchn: rework per event channel lock
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20201016105839.14796-1-jgross@suse.com>
 <20201016105839.14796-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0c5975b1-97ec-9bbb-0ed9-9055556215cd@suse.com>
Date: Tue, 20 Oct 2020 11:28:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201016105839.14796-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.10.2020 12:58, Juergen Gross wrote:
> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -660,11 +660,12 @@ void pv_shim_inject_evtchn(unsigned int port)
>      if ( port_is_valid(guest, port) )
>      {
>          struct evtchn *chn = evtchn_from_port(guest, port);
> -        unsigned long flags;
>  
> -        spin_lock_irqsave(&chn->lock, flags);
> -        evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
> -        spin_unlock_irqrestore(&chn->lock, flags);
> +        if ( evtchn_read_trylock(chn) )
> +        {
> +            evtchn_port_set_pending(guest, chn->notify_vcpu_id, chn);
> +            evtchn_read_unlock(chn);
> +        }

Does this want some form of else, e.g. at least a printk_once()?

> @@ -360,7 +352,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
>      if ( rc )
>          goto out;
>  
> -    flags = double_evtchn_lock(lchn, rchn);
> +    double_evtchn_lock(lchn, rchn);

This introduces an unfortunate conflict with my conversion of
the per-domain event lock to an rw one: It acquires rd's lock
in read mode only, while the requirements here would not allow
doing so. (Same in evtchn_close() then.)

> @@ -736,7 +723,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
>  
>      lchn = evtchn_from_port(ld, lport);
>  
> -    spin_lock_irqsave(&lchn->lock, flags);
> +    if ( !evtchn_read_trylock(lchn) )
> +        return 0;

With this, the auxiliary call to xsm_evtchn_send() up from
here should also go away again (possibly in a separate follow-
on, which would then likely be a clean revert).

> @@ -798,9 +786,11 @@ void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
>  
>      d = v->domain;
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, v->vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    if ( evtchn_read_trylock(chn) )
> +    {
> +        evtchn_port_set_pending(d, v->vcpu_id, chn);
> +        evtchn_read_unlock(chn);
> +    }
>  
>   out:
>      spin_unlock_irqrestore(&v->virq_lock, flags);
> @@ -829,9 +819,11 @@ void send_guest_global_virq(struct domain *d, uint32_t virq)
>          goto out;
>  
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    if ( evtchn_read_trylock(chn) )
> +    {
> +        evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
> +        evtchn_read_unlock(chn);
> +    }
>  
>   out:
>      spin_unlock_irqrestore(&v->virq_lock, flags);

As said before, I think these lock uses can go away altogether.
I shall put together a patch.

And on the whole I'd really prefer if we first convinced ourselves
that there's no way to simply get rid of the IRQ-safe locking
forms instead, before taking a decision to go with this model with
its extra constraints.

> @@ -1060,15 +1053,16 @@ int evtchn_unmask(unsigned int port)
>  {
>      struct domain *d = current->domain;
>      struct evtchn *evtchn;
> -    unsigned long flags;
>  
>      if ( unlikely(!port_is_valid(d, port)) )
>          return -EINVAL;
>  
>      evtchn = evtchn_from_port(d, port);
> -    spin_lock_irqsave(&evtchn->lock, flags);
> -    evtchn_port_unmask(d, evtchn);
> -    spin_unlock_irqrestore(&evtchn->lock, flags);
> +    if ( evtchn_read_trylock(evtchn) )
> +    {
> +        evtchn_port_unmask(d, evtchn);
> +        evtchn_read_unlock(evtchn);
> +    }

I think this wants mentioning together with send / query in the
description.

> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -105,6 +105,60 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>  #define bucket_from_port(d, p) \
>      ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
>  
> +#define EVENT_WRITE_LOCK_INC    INT_MIN
> +
> +/*
> + * Lock an event channel exclusively. This is allowed only with holding
> + * d->event_lock AND when the channel is free or unbound either when taking
> + * or when releasing the lock, as any concurrent operation on the event
> + * channel using evtchn_read_trylock() will just assume the event channel is
> + * free or unbound at the moment.

... when the evtchn_read_trylock() returns false.

> + */
> +static inline void evtchn_write_lock(struct evtchn *evtchn)
> +{
> +    int val;
> +
> +    /*
> +     * The lock can't be held by a writer already, as all writers need to
> +     * hold d->event_lock.
> +     */
> +    ASSERT(atomic_read(&evtchn->lock) >= 0);
> +
> +    /* No barrier needed, atomic_add_return() is full barrier. */
> +    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +          val != EVENT_WRITE_LOCK_INC;

The _INC suffix is slightly odd for this 2nd use, but I guess
the dual use will make it so for about any name you may pick.

> +          val = atomic_read(&evtchn->lock) )
> +        cpu_relax();
> +}
> +
> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
> +{
> +    arch_lock_release_barrier();
> +
> +    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +}
> +
> +static inline bool evtchn_read_trylock(struct evtchn *evtchn)
> +{
> +    if ( atomic_read(&evtchn->lock) < 0 )
> +        return false;
> +
> +    /* No barrier needed, atomic_inc_return() is full barrier. */
> +    if ( atomic_inc_return(&evtchn->lock) >= 0 )

atomic_*_return() return the new value, so I think you mean ">"
here?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:34:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:34:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9023.24274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUo2O-00009h-ON; Tue, 20 Oct 2020 09:34:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9023.24274; Tue, 20 Oct 2020 09:34:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUo2O-00009a-Ku; Tue, 20 Oct 2020 09:34:36 +0000
Received: by outflank-mailman (input) for mailman id 9023;
 Tue, 20 Oct 2020 09:34:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUo2M-00009V-RJ
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:34:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 29b9364d-9ce0-40f4-ad82-008cd254cf06;
 Tue, 20 Oct 2020 09:34:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C46E0AFEA;
 Tue, 20 Oct 2020 09:34:32 +0000 (UTC)
X-Inumbo-ID: 29b9364d-9ce0-40f4-ad82-008cd254cf06
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603186472;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A/qsYyzz3ne4SLzXXczuoLGZspSjo2Gi2Dp4PFXWkrU=;
	b=T92FsbIE+MhO+l5yiJqcC+xCxwvbWlOCPp9uni95XGNmLz56FTDJbvDCxox1h3/3FirsUK
	aUi+qIcZ8jSBU3Ub6FRXzEwAVWN3+TuLogw1ODXwR8rCQFbnr/0HWtjkvfGtvJfWlEQyHg
	CM83bg8FhMqrO6qLBk5UxeZdBWQN6s8=
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Julien Grall <julien@xen.org>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
 <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
 <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
 <4b77ba6d-bf49-7286-8f2a-53f7b2e7d122@xen.org>
 <4eb073bb-67ca-5376-bae1-e555d3c5fb30@suse.com>
 <2eb42b0e-f31e-2c1e-28bf-32c366fb1688@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <abaf4b52-df8d-6df0-9100-a4c9884c09da@suse.com>
Date: Tue, 20 Oct 2020 11:34:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <2eb42b0e-f31e-2c1e-28bf-32c366fb1688@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.10.2020 11:25, Julien Grall wrote:
> Hi Jan,
> 
> On 16/10/2020 13:09, Jan Beulich wrote:
>> On 16.10.2020 11:36, Julien Grall wrote:
>>> On 15/10/2020 13:07, Jan Beulich wrote:
>>>> On 14.10.2020 13:40, Julien Grall wrote:
>>>>> On 13/10/2020 15:26, Jan Beulich wrote:
>>>>>> On 13.10.2020 16:20, Jürgen Groß wrote:
>>>>>>> Especially Julien was rather worried by the current situation. In
>>>>>>> case you can convince him the current handling is fine, we can
>>>>>>> easily drop this patch.
>>>>>>
>>>>>> Julien, in the light of the above - can you clarify the specific
>>>>>> concerns you (still) have?
>>>>>
>>>>> Let me start with the assumption that evtchn->lock is not held when
>>>>> evtchn_fifo_set_pending() is called. If it is held, then my comment is moot.
>>>>
>>>> But this isn't interesting - we know there are paths where it is
>>>> held, and ones (interdomain sending) where it's the remote port's
>>>> lock instead which is held. What's important here is that a
>>>> _consistent_ lock be held (but it doesn't need to be evtchn's).
>>>
>>> Yes, a _consistent_ lock *should* be sufficient. But it is better to use
>>> the same lock everywhere so it is easier to reason (see more below).
>>
>> But that's already not the case, due to the way interdomain channels
>> have events sent. You did suggest acquiring both locks, but as
>> indicated at the time I think this goes too far. As far as the doc
>> aspect - we can improve the situation. Iirc it was you who made me
>> add the respective comment ahead of struct evtchn_port_ops.
>>
>>>>>    From my understanding, the goal of lock_old_queue() is to return the
>>>>> old queue used.
>>>>>
>>>>> last_priority and last_vcpu_id may be updated separately and I could not
>>>>> convince myself that it would not be possible to return a queue that is
>>>>> neither the current one nor the old one.
>>>>>
>>>>> The following could happen if evtchn->priority and
>>>>> evtchn->notify_vcpu_id keeps changing between calls.
>>>>>
>>>>> pCPU0				| pCPU1
>>>>> 				|
>>>>> evtchn_fifo_set_pending(v0,...)	|
>>>>> 				| evtchn_fifo_set_pending(v1, ...)
>>>>>     [...]				|
>>>>>     /* Queue has changed */	|
>>>>>     evtchn->last_vcpu_id = v0 	|
>>>>> 				| -> lock_old_queue()
>>>>> 				| v = d->vcpu[evtchn->last_vcpu_id];
>>>>>      				| old_q = ...
>>>>> 				| spin_lock(old_q->...)
>>>>> 				| v = ...
>>>>> 				| q = ...
>>>>> 				| /* q and old_q would be the same */
>>>>> 				|
>>>>>     evtchn->last_priority = priority|
>>>>>
>>>>> If my diagram is correct, then pCPU1 would return a queue that is
>>>>> neither the current nor old one.
>>>>
>>>> I think I agree.
>>>>
>>>>> In which case, I think it would at least be possible to corrupt the
>>>>> queue. From evtchn_fifo_set_pending():
>>>>>
>>>>>            /*
>>>>>             * If this event was a tail, the old queue is now empty and
>>>>>             * its tail must be invalidated to prevent adding an event to
>>>>>             * the old queue from corrupting the new queue.
>>>>>             */
>>>>>            if ( old_q->tail == port )
>>>>>                old_q->tail = 0;
>>>>>
>>>>> Did I miss anything?
>>>>
>>>> I don't think you did. The important point though is that a consistent
>>>> lock is being held whenever we come here, so two racing set_pending()
>>>> aren't possible for one and the same evtchn. As a result I don't think
>>>> the patch here is actually needed.
>>>
>>> I haven't yet read the rest of the patches in full detail, so I can't
>>> say whether this is necessary or not. However, at first glance, I don't
>>> think it is sane to rely on different locks to protect us. And don't
>>> get me started on the lack of documentation...
>>>
>>> Furthermore, the implementation of lock_old_queue() suggests that the
>>> code was planned to be lockless. Why would you need the loop otherwise?
>>
>> The lock-less aspect of this affects multiple accesses to e.g.
>> the same queue, I think.
> I don't think we are talking about the same thing. What I was referring 
> to is the following code:
> 
> static struct evtchn_fifo_queue *lock_old_queue(const struct domain *d,
>                                                  struct evtchn *evtchn,
>                                                  unsigned long *flags)
> {
>      struct vcpu *v;
>      struct evtchn_fifo_queue *q, *old_q;
>      unsigned int try;
> 
>      for ( try = 0; try < 3; try++ )
>      {
>          v = d->vcpu[evtchn->last_vcpu_id];
>          old_q = &v->evtchn_fifo->queue[evtchn->last_priority];
> 
>          spin_lock_irqsave(&old_q->lock, *flags);
> 
>          v = d->vcpu[evtchn->last_vcpu_id];
>          q = &v->evtchn_fifo->queue[evtchn->last_priority];
> 
>          if ( old_q == q )
>              return old_q;
> 
>          spin_unlock_irqrestore(&old_q->lock, *flags);
>      }
> 
>      gprintk(XENLOG_WARNING,
>              "dom%d port %d lost event (too many queue changes)\n",
>              d->domain_id, evtchn->port);
>      return NULL;
> }
> 
> Given that evtchn->last_vcpu_id and evtchn->last_priority can only be
> modified in evtchn_fifo_set_pending(), this suggests that the function is
> expected to be called multiple times concurrently on the same event channel.
> 
>> I'm unconvinced it was really considered
>> whether racing sending on the same channel is also safe this way.
> 
> How would you explain the three tries in lock_old_queue() then?

Queue changes (as said by the gprintk()) can't result from sending
alone, but require re-binding to a different vCPU or altering the
priority. I'm simply unconvinced that the code indeed fully reflects
the original intentions. IOW I'm unsure whether we're talking about
the same thing ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:36:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9026.24287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUo48-0000IX-3x; Tue, 20 Oct 2020 09:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9026.24287; Tue, 20 Oct 2020 09:36:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUo48-0000IQ-0v; Tue, 20 Oct 2020 09:36:24 +0000
Received: by outflank-mailman (input) for mailman id 9026;
 Tue, 20 Oct 2020 09:36:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8HF=D3=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kUo46-0000IL-9t
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:36:22 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49c8fcf0-143c-479b-9876-f8357475d70b;
 Tue, 20 Oct 2020 09:36:21 +0000 (UTC)
X-Inumbo-ID: 49c8fcf0-143c-479b-9876-f8357475d70b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603186581;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=/nKCzpQ8X+4DBrdKHo+8uPOvKbU/EZW5MQG7/VZedVU=;
  b=Nfruz7OwEQCqMMvIeeX3kThz8JT3bV5wapFC77tv/IyRKwtImltI6EmK
   WOFs01+jITvn6VZQhq2DkhCLdP7mDMBGn1Nbqd5Uk9lio+IIwEk6iTHS9
   adyuySkkWLqHwseFzPpSU31BtuMNcKyR1TaERQSEhUeEFfsqGvpt/7197
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: MlBOqquFyx623KlBzQYtjTVwtk1W0nI/uPeCoFOiHA0lE8JB2ZQe70TBpRr3L34Di6TtvAS1kn
 sPxynJ4O4a6ho/ZEMWwi6gGf1d3m1ACca1Ioz1OPc5ye2oq3V8w7tKYiJogm76xtJ+Wc83Bcve
 vXTXDuKXW0C7J/Y+Ii3MsUhjb8SQRYmb1dlZo4Oy4eS+znWoExsLUVZQ/bHtRnOTPvSPCeFRK6
 z6NXBacwc6b1VT81WgwVWgHCNQ/XR5+s1z+UrxOc7FBt9mtLw14Y29jYhw8ZW+pi+/OUeCIhtG
 pwM=
X-SBRS: None
X-MesageID: 30418410
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,396,1596513600"; 
   d="scan'208";a="30418410"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Ian Jackson
	<Ian.Jackson@eu.citrix.com>
Subject: [OSSTEST PATCH] ts-xen-build-prep: Install ninja
Date: Tue, 20 Oct 2020 10:35:49 +0100
Message-ID: <20201020093549.270000-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

QEMU upstream now requires ninja to build. (Probably since QEMU commit
09e93326e448 ("build: replace ninjatool with ninja"))

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 ts-xen-build-prep | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index fcabf75a1686..5ec70dd538e9 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -199,7 +199,7 @@ END
 sub prep () {
     my @packages = qw(mercurial rsync
                       build-essential bin86 bcc iasl bc
-                      flex bison cmake
+                      flex bison cmake ninja-build
                       libpci-dev libncurses5-dev libssl-dev python-dev
                       libx11-dev git-core uuid-dev gettext gawk
                       libsdl-dev libyajl-dev libaio-dev libpixman-1-dev
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 09:47:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 09:47:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9031.24299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUoEf-0001Hr-4I; Tue, 20 Oct 2020 09:47:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9031.24299; Tue, 20 Oct 2020 09:47:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUoEf-0001Hk-1D; Tue, 20 Oct 2020 09:47:17 +0000
Received: by outflank-mailman (input) for mailman id 9031;
 Tue, 20 Oct 2020 09:47:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUoEd-0001Hf-Ot
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 09:47:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56d0a7b3-ee29-41f4-a3b9-b3bf2cab7c1e;
 Tue, 20 Oct 2020 09:47:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUoEa-0006tI-2a; Tue, 20 Oct 2020 09:47:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUoEZ-00034D-S1; Tue, 20 Oct 2020 09:47:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUoEZ-0005jo-RX; Tue, 20 Oct 2020 09:47:11 +0000
X-Inumbo-ID: 56d0a7b3-ee29-41f4-a3b9-b3bf2cab7c1e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m1pWzeb3+qrLPyZK513UTUwEqXTRe0Qzw4vzN1Kwacc=; b=nKvQFbRYSsi4OdN8v1ZfIzSP1Q
	4mIOpA5UbjCTiR/ViUIROMoVPMNU8gdUnGJR1hvlsUMIAM2E71y2xe77go+sW6iSvpfe1lj2b7ytJ
	/ejssK+9crxO3HdW5WpSK/gik/dqH3uJ5JdfWmZg+ZtUqdb8IUi9ZSJqKccGrsf5+dss=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156022-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156022: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d76f4f97eb2772bf85fe286097183d0c7db19ae8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 09:47:11 +0000

flight 156022 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156022/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                d76f4f97eb2772bf85fe286097183d0c7db19ae8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   61 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  110 attempts
Testing same since   155981  2020-10-19 16:06:45 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47947 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 10:01:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 10:01:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9065.24356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUoSF-0003XN-Br; Tue, 20 Oct 2020 10:01:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9065.24356; Tue, 20 Oct 2020 10:01:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUoSF-0003XG-8Q; Tue, 20 Oct 2020 10:01:19 +0000
Received: by outflank-mailman (input) for mailman id 9065;
 Tue, 20 Oct 2020 10:01:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TRnX=D3=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kUoSE-0003XB-74
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:01:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bee38097-a65c-4440-9d9d-d9ff51422cb0;
 Tue, 20 Oct 2020 10:01:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUoSA-0007HD-A0; Tue, 20 Oct 2020 10:01:14 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUoSA-0002rF-0a; Tue, 20 Oct 2020 10:01:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=TRnX=D3=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kUoSE-0003XB-74
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:01:18 +0000
X-Inumbo-ID: bee38097-a65c-4440-9d9d-d9ff51422cb0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id bee38097-a65c-4440-9d9d-d9ff51422cb0;
	Tue, 20 Oct 2020 10:01:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=bElc4AckwBa4LOpKqtZoTyUlEwTZQqk23g9/WLIMdSk=; b=VOFC38vLlDMd9HLCAroF/4CrK0
	CzVufYd5Lf1Va9fSRR1wjjjcqfU0CoVo5iPcxFY49spCKOANH3OaelJ5XW5hkjNfwM9cpPiG7X+Ff
	qHGXOIfYXD2Euoo/5U+xxr0SJBmg23GdpY3BKjWtz0Is+LKHfcSYzzmqR/NAU7PK0NQk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kUoSA-0007HD-A0; Tue, 20 Oct 2020 10:01:14 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236] helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kUoSA-0002rF-0a; Tue, 20 Oct 2020 10:01:14 +0000
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
 <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
 <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
 <4b77ba6d-bf49-7286-8f2a-53f7b2e7d122@xen.org>
 <4eb073bb-67ca-5376-bae1-e555d3c5fb30@suse.com>
 <2eb42b0e-f31e-2c1e-28bf-32c366fb1688@xen.org>
 <abaf4b52-df8d-6df0-9100-a4c9884c09da@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <244a4487-044b-5813-dd03-fe4ff5aac3b3@xen.org>
Date: Tue, 20 Oct 2020 11:01:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <abaf4b52-df8d-6df0-9100-a4c9884c09da@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 20/10/2020 10:34, Jan Beulich wrote:
> On 20.10.2020 11:25, Julien Grall wrote:
>> Given that evtchn->last_vcpu_id and evtchn->last_priority can only be
>> modified in evtchn_fifo_set_pending(), this suggests that the function
>> is expected to be called multiple times concurrently on the same event
>> channel.
>>
>>> I'm unconvinced it was really considered
>>> whether racing sending on the same channel is also safe this way.
>>
>> How would you explain the 3 tries in lock_old_queue then?
> 
> Queue changes (as said by the gprintk()) can't result from sending
> alone, but require re-binding to a different vCPU or altering the
> priority. 

I agree with that. However, this doesn't change the fact that updates to 
evtchn->last_priority and evtchn->last_vcpu can only happen when calling 
evtchn_fifo_set_pending().

If evtchn_fifo_set_pending() cannot be called concurrently for the same 
event, then there is *no* way for evtchn->last_{priority, vcpu} to be 
updated concurrently.

> I'm simply unconvinced that the code indeed fully reflects
> the original intentions. 

Do you mind (re-)sharing what the original intentions were?

Cheers,

-- 
Julien Grall
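
[Editorial note: the patch under discussion makes last_priority and
last_vcpu_id be read and written together. A minimal, hypothetical C
sketch of the general technique -- packing the two fields into one word
so a single atomic store replaces two separate, racy writes -- is below.
Field widths, names, and the memory ordering are illustrative only; this
is not Xen's actual layout or code.]

```c
#include <stdint.h>
#include <stdatomic.h>

/* Pack the two per-event fields into one 32-bit word so that both are
 * always observed consistently: a reader can never see a new vcpu_id
 * paired with a stale priority, or vice versa. */
union evtchn_last {
    uint32_t word;
    struct {
        uint16_t vcpu_id;   /* stand-in for last_vcpu_id  */
        uint8_t  priority;  /* stand-in for last_priority */
        uint8_t  pad;
    };
};

/* Update both fields with one atomic store. */
static void set_last(_Atomic uint32_t *slot, uint16_t vcpu, uint8_t prio)
{
    union evtchn_last l = { .word = 0 };
    l.vcpu_id = vcpu;
    l.priority = prio;
    atomic_store_explicit(slot, l.word, memory_order_relaxed);
}

/* Read both fields from one atomic load. */
static void get_last(const _Atomic uint32_t *slot,
                     uint16_t *vcpu, uint8_t *prio)
{
    union evtchn_last l;
    l.word = atomic_load_explicit(slot, memory_order_relaxed);
    *vcpu = l.vcpu_id;
    *prio = l.priority;
}
```

Whether the relaxed ordering shown here suffices depends on exactly the
question debated in this thread: whether evtchn_fifo_set_pending() can
race against itself on the same event channel.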


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 10:05:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 10:05:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9069.24368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUoW6-0003jb-T6; Tue, 20 Oct 2020 10:05:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9069.24368; Tue, 20 Oct 2020 10:05:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUoW6-0003jU-PY; Tue, 20 Oct 2020 10:05:18 +0000
Received: by outflank-mailman (input) for mailman id 9069;
 Tue, 20 Oct 2020 10:05:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUoW5-0003jP-Bk
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:05:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a87d765d-7032-401f-a927-c37fd2723422;
 Tue, 20 Oct 2020 10:05:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9317FAFFA;
 Tue, 20 Oct 2020 10:05:14 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kUoW5-0003jP-Bk
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:05:17 +0000
X-Inumbo-ID: a87d765d-7032-401f-a927-c37fd2723422
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a87d765d-7032-401f-a927-c37fd2723422;
	Tue, 20 Oct 2020 10:05:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603188314;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ADiIGIaN1wqYaEgkx2lrsJJhZVaGpCJeg86fFJjvVLQ=;
	b=frDqcAy65cBpAYBZSGKwYXCaArUG6Y0cCl5WqJU0edktJhEJHV4uR7DCJesaWl6jDrPi6X
	zEVKV22fRo2gwV1MVcxLMvy2pDGK9AM28bXXkY/a8+ghUDJ8UTrijeE+hwx1oiE3Rxhmor
	2+R+obpcpf6PqyTlQf2CTf4rp+hVKIo=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 9317FAFFA;
	Tue, 20 Oct 2020 10:05:14 +0000 (UTC)
Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
To: paul@xen.org
Cc: 'Oleksandr Tyshchenko' <olekstysh@gmail.com>,
 xen-devel@lists.xenproject.org,
 'Oleksandr Tyshchenko' <oleksandr_tyshchenko@epam.com>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>,
 'Wei Liu' <wl@xen.org>, 'Julien Grall' <julien@xen.org>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <1602780274-29141-5-git-send-email-olekstysh@gmail.com>
 <004701d6a6c1$6c09f860$441de920$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <38ba45dd-f1cd-a289-3ea3-75148782e126@suse.com>
Date: Tue, 20 Oct 2020 12:05:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <004701d6a6c1$6c09f860$441de920$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.2020 11:14, Paul Durrant wrote:
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
>> Sent: 15 October 2020 17:44
>>
>> --- a/xen/include/asm-x86/hvm/ioreq.h
>> +++ b/xen/include/asm-x86/hvm/ioreq.h
>> @@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
>>  #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
>>  #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
>>
>> +#define ioreq_complete_mmio   handle_mmio
>> +
> 
> A #define? Really? Can we not have a static inline?

I guess this would require further shuffling: handle_mmio() is
an inline function in hvm/emulate.h, and hvm/ioreq.h has no
need to include the former (and imo it also shouldn't have).

Jan
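
[Editorial note: the trade-off Paul and Jan are discussing -- a #define
alias versus a static inline wrapper -- can be sketched as below. The
handle_mmio() here is a local stub standing in for the inline function
in hvm/emulate.h; it is not Xen's real implementation.]

```c
#include <stdbool.h>

/* Stub standing in for the static inline handle_mmio() that lives in
 * hvm/emulate.h in the real tree. */
static inline bool handle_mmio(void)
{
    return true;
}

/* Option 1: the alias from the patch. No declaration of handle_mmio()
 * needs to be visible where the #define appears, which is why it avoids
 * making hvm/ioreq.h include hvm/emulate.h -- but the alias is not
 * type-checked at the point of definition. */
#define ioreq_complete_mmio handle_mmio

/* Option 2: a static inline wrapper (named _fn here only to coexist
 * with the macro above). It is type-checked, but the full definition of
 * handle_mmio() must be in scope, which is the header-layering problem
 * Jan points out. */
static inline bool ioreq_complete_mmio_fn(void)
{
    return handle_mmio();
}
```

Both spellings compile to the same call; the difference is purely where
the declaration must be visible and when a type mismatch is diagnosed.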


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 10:06:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 10:06:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9071.24380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUoWr-0003pE-70; Tue, 20 Oct 2020 10:06:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9071.24380; Tue, 20 Oct 2020 10:06:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUoWr-0003p7-3N; Tue, 20 Oct 2020 10:06:05 +0000
Received: by outflank-mailman (input) for mailman id 9071;
 Tue, 20 Oct 2020 10:06:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUoWp-0003p1-Rg
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:06:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47586c08-a771-4519-ba03-ea7c40b38896;
 Tue, 20 Oct 2020 10:06:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C5979B8F2;
 Tue, 20 Oct 2020 10:06:00 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kUoWp-0003p1-Rg
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:06:03 +0000
X-Inumbo-ID: 47586c08-a771-4519-ba03-ea7c40b38896
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 47586c08-a771-4519-ba03-ea7c40b38896;
	Tue, 20 Oct 2020 10:06:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603188360;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VKuxEdiM9Ji36iyr7rdBh7O9VHjAms2kvNny2MEr28s=;
	b=S5u5hk3vin0Yz2Ak3oultYGGiAFeF56HzMv5m/HGj4gmYI4MHN7HAtxi6j1HSLehHm8h3d
	rRy4Yfzdr/nRYNohkbMR/WDqlubyDcriISlYJuuNSvDQrbUkw4v6V/0VzoJmRlI0UnEriY
	69HP1eBz1jR+Q/GR2Jvv3IZ+HAyNe+Y=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C5979B8F2;
	Tue, 20 Oct 2020 10:06:00 +0000 (UTC)
Subject: Re: [PATCH v2 1/2] xen/events: access last_priority and last_vcpu_id
 together
To: Julien Grall <julien@xen.org>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201012092740.1617-1-jgross@suse.com>
 <20201012092740.1617-2-jgross@suse.com>
 <9485004c-b739-5590-202b-c8e6f84e5e54@suse.com>
 <821a77d3-7e37-d1d2-d904-94db0177893a@suse.com>
 <350a5738-b239-e36b-59aa-05b8f86648b8@suse.com>
 <548f80a9-0fa3-cd9e-ec44-5cd37d98eadc@xen.org>
 <4f4ecc8d-f5d2-81e9-1615-0f2925b928ba@suse.com>
 <4b77ba6d-bf49-7286-8f2a-53f7b2e7d122@xen.org>
 <4eb073bb-67ca-5376-bae1-e555d3c5fb30@suse.com>
 <2eb42b0e-f31e-2c1e-28bf-32c366fb1688@xen.org>
 <abaf4b52-df8d-6df0-9100-a4c9884c09da@suse.com>
 <244a4487-044b-5813-dd03-fe4ff5aac3b3@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6ff40bdc-69f5-2746-a163-c9376d497e49@suse.com>
Date: Tue, 20 Oct 2020 12:06:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <244a4487-044b-5813-dd03-fe4ff5aac3b3@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.2020 12:01, Julien Grall wrote:
> 
> 
> On 20/10/2020 10:34, Jan Beulich wrote:
>> On 20.10.2020 11:25, Julien Grall wrote:
>>> Given that evtchn->last_vcpu_id and evtchn->last_priority can only be
>>> modified in evtchn_fifo_set_pending(), this suggests that the function
>>> is expected to be called multiple times concurrently on the same event
>>> channel.
>>>
>>>> I'm unconvinced it was really considered
>>>> whether racing sending on the same channel is also safe this way.
>>>
>>> How would you explain the 3 tries in lock_old_queue then?
>>
>> Queue changes (as said by the gprintk()) can't result from sending
>> alone, but require re-binding to a different vCPU or altering the
>> priority. 
> 
> I agree with that. However, this doesn't change the fact that updates to 
> evtchn->last_priority and evtchn->last_vcpu can only happen when calling 
> evtchn_fifo_set_pending().
> 
> If evtchn_fifo_set_pending() cannot be called concurrently for the same 
> event, then there is *no* way for evtchn->last_{priority, vcpu} to be 
> updated concurrently.
> 
>> I'm simply unconvinced that the code indeed fully reflects
>> the original intentions. 
> 
> Do you mind (re-)sharing what the original intentions were?

If only I knew, I would.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 10:38:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 10:38:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9075.24392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUp25-0006XS-MA; Tue, 20 Oct 2020 10:38:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9075.24392; Tue, 20 Oct 2020 10:38:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUp25-0006XL-HJ; Tue, 20 Oct 2020 10:38:21 +0000
Received: by outflank-mailman (input) for mailman id 9075;
 Tue, 20 Oct 2020 10:38:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUp24-0006XG-Ei
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:38:20 +0000
Received: from mail-ed1-x530.google.com (unknown [2a00:1450:4864:20::530])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a962fe0b-e350-4eeb-8372-77b2799decca;
 Tue, 20 Oct 2020 10:38:19 +0000 (UTC)
Received: by mail-ed1-x530.google.com with SMTP id t21so1296068eds.6
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 03:38:19 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-230.amazon.com. [54.240.197.230])
 by smtp.gmail.com with ESMTPSA id g9sm1887628edv.81.2020.10.20.03.38.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 03:38:17 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kUp24-0006XG-Ei
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:38:20 +0000
X-Inumbo-ID: a962fe0b-e350-4eeb-8372-77b2799decca
Received: from mail-ed1-x530.google.com (unknown [2a00:1450:4864:20::530])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a962fe0b-e350-4eeb-8372-77b2799decca;
	Tue, 20 Oct 2020 10:38:19 +0000 (UTC)
Received: by mail-ed1-x530.google.com with SMTP id t21so1296068eds.6
        for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 03:38:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=j3qlUyBmKXWhPcO4B44JdevayqvreT7vzraRFkngAJM=;
        b=lr9eO/yjwgM2j9hlfCDgaFQydxTjPObKYevUXLK7IGV2bGTQixUZGWB7b/ZDuv22vY
         Ew2VdHB1hIBg+6HGUwE63YnISNfixEIxAJkDHWi6LBfCCE+Pjt/uU0NTuYArFWPdRyfC
         tkYz67XVaYH0POhHSqogZggo2VYotO8GPEpyKCwSP0T67+RELAT6MZV6U56Xz7dQ7n/v
         rI2W9PqOKtqhICedoXaNuEDB+dWLclml8JaEMJsvlrrUHDlXC1WGg8Z871jy6civBQS6
         blAfXq7LcTvX01dpn/xztsx6ZxS+0mDqcolq8m8ZjwXAiR4w1BWDsT2jZue0tM/jYJaj
         cMHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=j3qlUyBmKXWhPcO4B44JdevayqvreT7vzraRFkngAJM=;
        b=KM2FWv3M7FxbSe+OSmxR52cgWByEpMj5+wlSAKkDFDO2MHALXbp/LZQpv85+vZ6rr/
         yTc5fC1BuflPnUSoV7q7ugrJYrvvbQ0Fd/Ek0TrnK4wLedzwK18yL0Z/miOxIA/A0A+y
         h/aVcd/iwvc2g1RFwnQRWqBbZI6klwK7t8Qzm6uXryfji3cd4qeK9hOZOqUYx1j9lwgR
         qdwcJw1m2GLdtMx24MYJrMn2cxNNCDt4mIBivl6AqnbkyrgsGojCfcGTCHVQiy6H/nzm
         4GSQ1GzE6b67G3MzkxFVgwwkS6P4GMmcGntrzNIWWaeEkKikJKFcJrr/SQf3O1saYn4o
         h0kg==
X-Gm-Message-State: AOAM532qJAcE6dBDWFon8K553F7X7RmelLRD6Vld8sz8O+On8+/iDRq6
	Did7nRewaruN4mXJ9+bPFfc=
X-Google-Smtp-Source: ABdhPJzWTlmfDDy3ot2+SujuF15GMmeKMUhcZqFEXGfgSfHdN0gxL1nfWIAH0in6xFaAQIVu8jMtxA==
X-Received: by 2002:a05:6402:22cb:: with SMTP id dm11mr2100443edb.23.1603190298332;
        Tue, 20 Oct 2020 03:38:18 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-230.amazon.com. [54.240.197.230])
        by smtp.gmail.com with ESMTPSA id g9sm1887628edv.81.2020.10.20.03.38.15
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 20 Oct 2020 03:38:17 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
Cc: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>,
	"'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-5-git-send-email-olekstysh@gmail.com> <004701d6a6c1$6c09f860$441de920$@xen.org> <38ba45dd-f1cd-a289-3ea3-75148782e126@suse.com>
In-Reply-To: <38ba45dd-f1cd-a289-3ea3-75148782e126@suse.com>
Subject: RE: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
Date: Tue, 20 Oct 2020 11:38:15 +0100
Message-ID: <004a01d6a6cd$1f4684b0$5dd38e10$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAG4T9I6AbhrsZgB8lcTCapNMgiw
Content-Language: en-gb

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 20 October 2020 11:05
> To: paul@xen.org
> Cc: 'Oleksandr Tyshchenko' <olekstysh@gmail.com>; xen-devel@lists.xenproject.org; 'Oleksandr
> Tyshchenko' <oleksandr_tyshchenko@epam.com>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Roger Pau
> Monné' <roger.pau@citrix.com>; 'Wei Liu' <wl@xen.org>; 'Julien Grall' <julien@xen.org>; 'Stefano
> Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien.grall@arm.com>
> Subject: Re: [PATCH V2 04/23] xen/ioreq: Provide alias for the handle_mmio()
> 
> On 20.10.2020 11:14, Paul Durrant wrote:
> >> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Oleksandr Tyshchenko
> >> Sent: 15 October 2020 17:44
> >>
> >> --- a/xen/include/asm-x86/hvm/ioreq.h
> >> +++ b/xen/include/asm-x86/hvm/ioreq.h
> >> @@ -181,6 +181,8 @@ static inline bool arch_hvm_ioreq_destroy(struct domain *d)
> >>  #define IOREQ_STATUS_UNHANDLED   X86EMUL_UNHANDLEABLE
> >>  #define IOREQ_STATUS_RETRY       X86EMUL_RETRY
> >>
> >> +#define ioreq_complete_mmio   handle_mmio
> >> +
> >
> > A #define? Really? Can we not have a static inline?
> 
> I guess this would require further shuffling: handle_mmio() is
> an inline function in hvm/emulate.h, and hvm/ioreq.h has no
> need to include the former (and imo it also shouldn't have).
> 

I see. I think we need an x86 ioreq.c anyway, to deal with the legacy use
of magic pages, so it could be dealt with there instead.

  Paul

> Jan
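
[The trade-off Jan describes can be shown in isolation. The sketch below
uses stand-in types (this struct vcpu is not the real Xen one, and
handle_mmio() here is a stub) to contrast the #define alias from the patch
with the static inline Paul asks about:]

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the real Xen struct vcpu; the real handle_mmio() is an
 * inline function declared in xen/include/asm-x86/hvm/emulate.h. */
struct vcpu { int pending_mmio; };

static bool handle_mmio(struct vcpu *v)
{
    v->pending_mmio = 0;      /* pretend the MMIO emulation completed */
    return true;
}

/* The patch spells the alias as a #define:
 *
 *     #define ioreq_complete_mmio   handle_mmio
 *
 * The static-inline alternative type-checks its argument, but it needs
 * handle_mmio()'s declaration to be visible in hvm/ioreq.h -- the extra
 * include shuffling Jan refers to: */
static inline bool ioreq_complete_mmio(struct vcpu *v)
{
    return handle_mmio(v);
}
```

[Behaviour of the two forms is identical at the call site; only the
type checking and the header dependency differ.]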



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 10:41:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 10:41:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9079.24403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUp50-0007LN-2n; Tue, 20 Oct 2020 10:41:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9079.24403; Tue, 20 Oct 2020 10:41:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUp4z-0007LG-W9; Tue, 20 Oct 2020 10:41:21 +0000
Received: by outflank-mailman (input) for mailman id 9079;
 Tue, 20 Oct 2020 10:41:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUp4z-0007LB-3C
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:41:21 +0000
Received: from mail-ej1-x644.google.com (unknown [2a00:1450:4864:20::644])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0cc6fc9e-57af-4ccc-8e40-b2248c23c285;
 Tue, 20 Oct 2020 10:41:20 +0000 (UTC)
Received: by mail-ej1-x644.google.com with SMTP id e22so1958841ejr.4
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 03:41:20 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-230.amazon.com. [54.240.197.230])
 by smtp.gmail.com with ESMTPSA id v2sm2000163ejh.57.2020.10.20.03.41.18
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 03:41:18 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kUp4z-0007LB-3C
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:41:21 +0000
X-Inumbo-ID: 0cc6fc9e-57af-4ccc-8e40-b2248c23c285
Received: from mail-ej1-x644.google.com (unknown [2a00:1450:4864:20::644])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0cc6fc9e-57af-4ccc-8e40-b2248c23c285;
	Tue, 20 Oct 2020 10:41:20 +0000 (UTC)
Received: by mail-ej1-x644.google.com with SMTP id e22so1958841ejr.4
        for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 03:41:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=jMMJAuqMJd9IwVqmRHD9s4S3XfgSMs6de7upEmAz9ms=;
        b=XocC9RKzCwbr5Y2Xe3XjxoLhS6G71tzSLhFVF+8vmAfEwbWph4Xr7CYoPy+c3l9Cjw
         HmPUz8FEqSkvFpZQ+lvsusotJXB6JWH6mBkNJAobtRaWKJow0i3/Q19QHt4iDzTh8FNv
         2FOXdlnPl0gipwTPgMHQXXQRjgOwWR+JcE5PrimYbNdtiugGuPi//qzawJwjR3Gf0Udo
         C3aI3Pm32Gpmx27Zmgs8D1R9A0K9/pzYcN+BTQdUVAj0fbjYzgrqpGSyerFl83TnYgxA
         k7RDinNVAjmcP0ving3Y12HDt9FhjDimTLBDOFYqK2ZJzUuPK/U4QCZZdx4sWrNPHfpR
         +jEA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=jMMJAuqMJd9IwVqmRHD9s4S3XfgSMs6de7upEmAz9ms=;
        b=OhkhGuYeICpIXdmPKQrLdcKP4elEoTXy/3Zi3K9NUoMkrBOT91OKWe7m/+kSxvRIR4
         6yiNy7UBP4tR38eEuXu0psOOnybbaDaqUsohUc8rMp50WmBz5hxQdIwaSUJMhA1p66tq
         yQjYXC1fmlFPCtDBFINWdKc6YR9PqON/K6X7G/kPQQsbHw/kAE7+jQaQwH/KIhL5K8nr
         5SJ9QAh+eJD8DM+JHRidM5lz7JsUSrX3Yhxw5xk0N9blpcFTaq0QfUgf7DKUE5KW13DT
         cBCG4qsdM6CqBu5xOBYiur64NkGeluwabpcSno+4qk5yvTaLg93PoYibSG14DwMU8lLj
         jD8w==
X-Gm-Message-State: AOAM531b0+33SZ8fHpjNmyGo8rl6bYlFJu7ab2Nv5AQ9RT/5G2Uz3njR
	LQa1kEVscFAKF+fDhXU7CCk=
X-Google-Smtp-Source: ABdhPJyUVMVtvA5KSETiwHrRy/IApkaY7X05GHs1FxJMaRXedpHnjS5KY7uxC42I3lFy80mFpg1wdA==
X-Received: by 2002:a17:906:52c6:: with SMTP id w6mr2299919ejn.199.1603190479392;
        Tue, 20 Oct 2020 03:41:19 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-230.amazon.com. [54.240.197.230])
        by smtp.gmail.com with ESMTPSA id v2sm2000163ejh.57.2020.10.20.03.41.18
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 20 Oct 2020 03:41:18 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-9-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-9-git-send-email-olekstysh@gmail.com>
Subject: RE: [PATCH V2 08/23] xen/ioreq: Introduce ioreq_params to abstract accesses to arch.hvm.params
Date: Tue, 20 Oct 2020 11:41:17 +0100
Message-ID: <004c01d6a6cd$8b3e22e0$a1ba68a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAHfL4kBqmlR/vA=
Content-Language: en-gb

> -----Original Message-----
> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Julien Grall <julien.grall@arm.com>
> Subject: [PATCH V2 08/23] xen/ioreq: Introduce ioreq_params to abstract accesses to arch.hvm.params
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> We don't want to move HVM params field out of *arch.hvm* in this particular
> case as although it stores a few IOREQ params, it is not a (completely)
> IOREQ stuff and is specific to the architecture. Instead, abstract
> accesses by the proposed macro.
> 
> This is a follow up action to reduce layering violation in the common code.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 

Keeping the 'legacy' magic page code under an x86 ioreq.c would avoid the
need for this patch.

  Paul

> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes V1 -> V2:
>    - new patch
> ---
>  xen/common/ioreq.c               | 4 ++--
>  xen/include/asm-x86/hvm/domain.h | 2 ++
>  2 files changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index 7f91bc2..a07f1d7 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -223,7 +223,7 @@ static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
>      for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
>      {
>          if ( !test_and_clear_bit(i, &d->ioreq_gfn.legacy_mask) )
> -            return _gfn(d->arch.hvm.params[i]);
> +            return _gfn(ioreq_params(d, i));
>      }
> 
>      return INVALID_GFN;
> @@ -255,7 +255,7 @@ static bool hvm_free_legacy_ioreq_gfn(struct ioreq_server *s,
> 
>      for ( i = HVM_PARAM_IOREQ_PFN; i <= HVM_PARAM_BUFIOREQ_PFN; i++ )
>      {
> -        if ( gfn_eq(gfn, _gfn(d->arch.hvm.params[i])) )
> +        if ( gfn_eq(gfn, _gfn(ioreq_params(d, i))) )
>               break;
>      }
>      if ( i > HVM_PARAM_BUFIOREQ_PFN )
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 5d60737..c3af339 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -63,6 +63,8 @@ struct hvm_pi_ops {
>      void (*vcpu_block)(struct vcpu *);
>  };
> 
> +#define ioreq_params(d, i) ((d)->arch.hvm.params[i])
> +
>  struct hvm_domain {
>      /* Cached CF8 for guest PCI config cycles */
>      uint32_t                pci_cf8;
> --
> 2.7.4




From xen-devel-bounces@lists.xenproject.org Tue Oct 20 10:44:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 10:44:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9082.24416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUp7h-0007Wm-Hm; Tue, 20 Oct 2020 10:44:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9082.24416; Tue, 20 Oct 2020 10:44:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUp7h-0007Wf-EP; Tue, 20 Oct 2020 10:44:09 +0000
Received: by outflank-mailman (input) for mailman id 9082;
 Tue, 20 Oct 2020 10:44:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ryym=D3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kUp7g-0007Vw-CT
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:44:08 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.219])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1eabbd5-69bf-4743-bda4-65d4484bed17;
 Tue, 20 Oct 2020 10:44:06 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.1 DYNA|AUTH)
 with ESMTPSA id e003b5w9KAhuBan
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 20 Oct 2020 12:43:56 +0200 (CEST)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ryym=D3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kUp7g-0007Vw-CT
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:44:08 +0000
X-Inumbo-ID: e1eabbd5-69bf-4743-bda4-65d4484bed17
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.219])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e1eabbd5-69bf-4743-bda4-65d4484bed17;
	Tue, 20 Oct 2020 10:44:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603190645;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=BeKfgTRL0Ve2EEdsp0Vygx0an64RH8IwvMCvdkEBUoY=;
	b=kmJfs6ro+FNdeSpJoN+Nn4osjKExM+WW54RKpRn8Al3mBfaZupSGlfsMS0ds9gPFng
	OF7kIVE2tRosj/knsnnFpa+LIjRNPZ76SeACypioaIec1oRZRxJzaE3lwY+CUk5JJrG8
	CskaUYwY7xe84ajK0KUHgTkihX1mIU9TxOI0U+3S+0og0I96IRrw5pATcocHLmj7fcLQ
	uMKFgerb6Qx6484VxxkCfZRyGvkKUKkRp++ZKgkIpaSonwgBseZGwmlDROGjswzBFCxc
	u+UKJhmdS06adYwAcJa1mEWrMoHUL33FHCA1BmuJ2K1zZ0ts8IfKH6KYFm1UtVBalRJD
	R0IA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.2.1 DYNA|AUTH)
	with ESMTPSA id e003b5w9KAhuBan
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Tue, 20 Oct 2020 12:43:56 +0200 (CEST)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1] xl: optionally print timestamps during xl migrate
Date: Tue, 20 Oct 2020 12:43:53 +0200
Message-Id: <20201020104353.13425-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

During 'xl -v.. migrate domU host' a large amount of debug output is
generated. It is difficult to attribute each line to the sending or the
receiving side, and the time spent on the migration is not reported.

With 'xl migrate -T domU host' both sides will print timestamps and
also the pid of the invoked xl process to make it more obvious which
side produced a given log line.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in   |  4 ++++
 tools/xl/xl_cmdtable.c |  1 +
 tools/xl/xl_migrate.c  | 25 +++++++++++++++++++++----
 3 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 5f7d3a7134..4bde0672fa 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -482,6 +482,10 @@ domain. See the corresponding option of the I<create> subcommand.
 Send the specified <config> file instead of the file used on creation of the
 domain.
 
+=item B<-T>
+
+Include timestamps in output.
+
 =item B<--debug>
 
 Display huge (!) amount of debug information during the migration process.
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b927..fd2dc0aef2 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -167,6 +167,7 @@ struct cmd_spec cmd_table[] = {
       "                migrate-receive [-d -e]\n"
       "-e              Do not wait in the background (on <host>) for the death\n"
       "                of the domain.\n"
+      "-T              Show timestamps during the migration process.\n"
       "--debug         Print huge (!) amount of debug during the migration process.\n"
       "-p              Do not unpause domain after migrating it.\n"
       "-D              Preserve the domain id"
diff --git a/tools/xl/xl_migrate.c b/tools/xl/xl_migrate.c
index 0813beb801..856a6e2be1 100644
--- a/tools/xl/xl_migrate.c
+++ b/tools/xl/xl_migrate.c
@@ -32,6 +32,8 @@
 
 #ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 
+static bool timestamps;
+
 static pid_t create_migration_child(const char *rune, int *send_fd,
                                         int *recv_fd)
 {
@@ -187,6 +189,7 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     char rc_buf;
     uint8_t *config_data;
     int config_len, flags = LIBXL_SUSPEND_LIVE;
+    unsigned xtl_flags = XTL_STDIOSTREAM_HIDE_PROGRESS;
 
     save_domain_core_begin(domid, preserve_domid, override_config_file,
                            &config_data, &config_len);
@@ -202,7 +205,9 @@ static void migrate_domain(uint32_t domid, int preserve_domid,
     migrate_do_preamble(send_fd, recv_fd, child, config_data, config_len,
                         rune);
 
-    xtl_stdiostream_adjust_flags(logger, XTL_STDIOSTREAM_HIDE_PROGRESS, 0);
+    if (timestamps)
+        xtl_flags |= XTL_STDIOSTREAM_SHOW_DATE | XTL_STDIOSTREAM_SHOW_PID;
+    xtl_stdiostream_adjust_flags(logger, xtl_flags, 0);
 
     if (debug)
         flags |= LIBXL_SUSPEND_DEBUG;
@@ -328,6 +333,11 @@ static void migrate_receive(int debug, int daemonize, int monitor,
     char rc_buf;
     char *migration_domname;
     struct domain_create dom_info;
+    unsigned xtl_flags = 0;
+
+    if (timestamps)
+        xtl_flags |= XTL_STDIOSTREAM_SHOW_DATE | XTL_STDIOSTREAM_SHOW_PID;
+    xtl_stdiostream_adjust_flags(logger, xtl_flags, 0);
 
     signal(SIGPIPE, SIG_IGN);
     /* if we get SIGPIPE we'd rather just have it as an error */
@@ -491,7 +501,7 @@ int main_migrate_receive(int argc, char **argv)
         COMMON_LONG_OPTS
     };
 
-    SWITCH_FOREACH_OPT(opt, "Fedrp", opts, "migrate-receive", 0) {
+    SWITCH_FOREACH_OPT(opt, "FedrpT", opts, "migrate-receive", 0) {
     case 'F':
         daemonize = 0;
         break;
@@ -517,6 +527,9 @@ int main_migrate_receive(int argc, char **argv)
     case 'p':
         pause_after_migration = 1;
         break;
+    case 'T':
+        timestamps = 1;
+        break;
     }
 
     if (argc-optind != 0) {
@@ -545,7 +558,7 @@ int main_migrate(int argc, char **argv)
         COMMON_LONG_OPTS
     };
 
-    SWITCH_FOREACH_OPT(opt, "FC:s:epD", opts, "migrate", 2) {
+    SWITCH_FOREACH_OPT(opt, "FC:s:eTpD", opts, "migrate", 2) {
     case 'C':
         config_filename = optarg;
         break;
@@ -559,6 +572,9 @@ int main_migrate(int argc, char **argv)
         daemonize = 0;
         monitor = 0;
         break;
+    case 'T':
+        timestamps = 1;
+        break;
     case 'p':
         pause_after_migration = 1;
         break;
@@ -592,11 +608,12 @@ int main_migrate(int argc, char **argv)
         } else {
             verbose_len = (minmsglevel_default - minmsglevel) + 2;
         }
-        xasprintf(&rune, "exec %s %s xl%s%.*s migrate-receive%s%s%s",
+        xasprintf(&rune, "exec %s %s xl%s%.*s migrate-receive%s%s%s%s",
                   ssh_command, host,
                   pass_tty_arg ? " -t" : "",
                   verbose_len, verbose_buf,
                   daemonize ? "" : " -e",
+                  timestamps ? " -T" : "",
                   debug ? " -d" : "",
                   pause_after_migration ? " -p" : "");
     }
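
[The logger-flag handling the patch adds to migrate_domain() can be
sketched on its own. The XTL_STDIOSTREAM_* bit values below are
illustrative stand-ins, not the real xentoollog constants:]

```c
#include <assert.h>

/* Illustrative stand-ins for the xentoollog stdio-stream flags. */
#define XTL_STDIOSTREAM_HIDE_PROGRESS (1u << 0)
#define XTL_STDIOSTREAM_SHOW_DATE     (1u << 1)
#define XTL_STDIOSTREAM_SHOW_PID      (1u << 2)

/* On the sending side the patch always hides progress output and, only
 * when -T was given, additionally turns on the date and pid in every
 * log line; the accumulated mask is then handed to
 * xtl_stdiostream_adjust_flags() in a single call. */
static unsigned int sender_xtl_flags(int timestamps)
{
    unsigned int xtl_flags = XTL_STDIOSTREAM_HIDE_PROGRESS;

    if (timestamps)
        xtl_flags |= XTL_STDIOSTREAM_SHOW_DATE | XTL_STDIOSTREAM_SHOW_PID;
    return xtl_flags;
}
```

[The receive side does the same, just without the hide-progress bit, and
the rune builder appends " -T" so the remote xl makes the matching choice.]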


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 10:52:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 10:52:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9085.24428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUpFN-0008Ps-GN; Tue, 20 Oct 2020 10:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9085.24428; Tue, 20 Oct 2020 10:52:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUpFN-0008Pl-D7; Tue, 20 Oct 2020 10:52:05 +0000
Received: by outflank-mailman (input) for mailman id 9085;
 Tue, 20 Oct 2020 10:52:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUpFL-0008Pg-L7
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:52:03 +0000
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b3c36b7-642d-4890-9e05-dfde22723f77;
 Tue, 20 Oct 2020 10:52:02 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id x7so1972147eje.8
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 03:52:02 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
 by smtp.gmail.com with ESMTPSA id yz15sm2130137ejb.9.2020.10.20.03.51.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 03:52:00 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kUpFL-0008Pg-L7
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:52:03 +0000
X-Inumbo-ID: 3b3c36b7-642d-4890-9e05-dfde22723f77
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3b3c36b7-642d-4890-9e05-dfde22723f77;
	Tue, 20 Oct 2020 10:52:02 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id x7so1972147eje.8
        for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 03:52:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=N9snk6wIxMy2vhViaLw1HdAzvnXDeCoIdpd888YMbN0=;
        b=pZ59wM4N3SxMPd4qf++rEbaom9ltWVJy55cGIrNG2eXy5MjNIkwvWfWpGQRq68c21B
         hH/ATinVvMCjuzh1nbfaUqL5aIHEZOoub3hwyRrRHyz8Bri7DYs83MiYLueOCwGYG6xD
         ir0ZRGDV9t3eAEO30MdZ88QBYVdFQC/TmKfa4naaaBmPaQqD/e1zypLLJCZIRp0hNhXZ
         cdvklksUdM9JocOQ86WUWXscR7B9FGA6Fk62xB6vvVQILUetstZ/8Jf/JCXRq35cyM3U
         E9erblTm6MlvooUHORpIzWw/7iRr09ldJn3EIaZDxuRfWYKjsu7dO19s2T6X3EL+6jz1
         yO2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=N9snk6wIxMy2vhViaLw1HdAzvnXDeCoIdpd888YMbN0=;
        b=gO22N3KsWBFyuQjW/EWz5KGlSPkk3fYrqpuFwSP6wSuGLc1z7Be1Pe8v+GqIhWLwrb
         0Ni0OdN+7ttRIcSyJo/O7S/niXgxmk55DfDZI5eayOJk56ZottAbFgaUE8tLSWE+O0el
         VWdcOdxK+PeZUx9ki09fHE2sjCWLi8F+cQ58VJJaII3cPO88UPWA3xezpz42WE/Gb72i
         D5hck6f2Iz0Myk9h908/BqEeCL857pF4+VNSdZXZAVlKUKafSnqn/lr9FM48KuJSQ6SC
         RbEgcnc8QnkNiFRE5Daq2yvOYT2AdDkkT6EP7KlZKtr/Jq1/jlj8IBxFac4SxSGOn8T4
         5/IA==
X-Gm-Message-State: AOAM531+Kegg47/bHYYvDdB/6njUDl1jjuDI3C+gB5wYXEopQBgvloyJ
	05SsZfXdXjOJ/13cevJMaBs=
X-Google-Smtp-Source: ABdhPJxQ1TijZUERYfNl3SFDtw1yyNsIxuQHXTZpxMfmqctzj3EFwVrzPaEEEEp3VZTEBU7a38+kgw==
X-Received: by 2002:a17:906:364f:: with SMTP id r15mr2448472ejb.388.1603191121414;
        Tue, 20 Oct 2020 03:52:01 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-238.amazon.com. [54.240.197.238])
        by smtp.gmail.com with ESMTPSA id yz15sm2130137ejb.9.2020.10.20.03.51.59
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 20 Oct 2020 03:52:00 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Volodymyr Babchuk'" <Volodymyr_Babchuk@epam.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-18-git-send-email-olekstysh@gmail.com>
Subject: RE: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
Date: Tue, 20 Oct 2020 11:51:59 +0100
Message-ID: <004e01d6a6cf$09cd9f40$1d68ddc0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAGr5WYaqmrtRJA=
Content-Language: en-gb



> -----Original Message-----
> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Stefano Stabellini <sstabellini@kernel.org>;
> Julien Grall <julien@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <iwj@xenproject.org>; Jan Beulich <jbeulich@suse.com>; Wei Liu <wl@xen.org>; Paul Durrant
> <paul@xen.org>; Julien Grall <julien.grall@arm.com>
> Subject: [PATCH V2 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> This patch introduces a helper whose main purpose is to check
> if a domain is using IOREQ server(s).
> 
> On Arm the current benefit is to avoid calling handle_io_completion()
> (which implies iterating over all possible IOREQ servers anyway)
> on every return in leave_hypervisor_to_guest() if there are no active
> servers for the particular domain.
> Also this helper will be used by one of the subsequent patches on Arm.
> 
> This involves adding an extra per-domain variable to store the count
> of servers in use.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> Changes RFC -> V1:
>    - new patch
> 
> Changes V1 -> V2:
>    - update patch description
>    - guard helper with CONFIG_IOREQ_SERVER
>    - remove "hvm" prefix
>    - modify helper to just return d->arch.hvm.ioreq_server.nr_servers
>    - put suitable ASSERT()s
>    - use ASSERT(d->ioreq_server.server[id] ? !s : !!s) in set_ioreq_server()
>    - remove d->ioreq_server.nr_servers = 0 from hvm_ioreq_init()
> ---
>  xen/arch/arm/traps.c    | 15 +++++++++------
>  xen/common/ioreq.c      |  7 ++++++-
>  xen/include/xen/ioreq.h | 14 ++++++++++++++
>  xen/include/xen/sched.h |  1 +
>  4 files changed, 30 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 507c095..a8f5fdf 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2261,14 +2261,17 @@ static bool check_for_vcpu_work(void)
>      struct vcpu *v = current;
> 
>  #ifdef CONFIG_IOREQ_SERVER
> -    bool handled;
> +    if ( domain_has_ioreq_server(v->domain) )
> +    {
> +        bool handled;
> 
> -    local_irq_enable();
> -    handled = handle_io_completion(v);
> -    local_irq_disable();
> +        local_irq_enable();
> +        handled = handle_io_completion(v);
> +        local_irq_disable();
> 
> -    if ( !handled )
> -        return true;
> +        if ( !handled )
> +            return true;
> +    }
>  #endif
> 
>      if ( likely(!v->arch.need_flush_to_ram) )
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index bcd4961..a72bc0e 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -39,9 +39,14 @@ static void set_ioreq_server(struct domain *d, unsigned int id,
>                               struct ioreq_server *s)
>  {
>      ASSERT(id < MAX_NR_IOREQ_SERVERS);
> -    ASSERT(!s || !d->ioreq_server.server[id]);
> +    ASSERT(d->ioreq_server.server[id] ? !s : !!s);

That looks odd. How about ASSERT(!s ^ !d->ioreq_server.server[id])?

  Paul

> 
>      d->ioreq_server.server[id] = s;
> +
> +    if ( s )
> +        d->ioreq_server.nr_servers++;
> +    else
> +        d->ioreq_server.nr_servers--;
>  }
> 
>  #define GET_IOREQ_SERVER(d, id) \
> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
> index 7b03ab5..0679fef 100644
> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -55,6 +55,20 @@ struct ioreq_server {
>      uint8_t                bufioreq_handling;
>  };
> 
> +#ifdef CONFIG_IOREQ_SERVER
> +static inline bool domain_has_ioreq_server(const struct domain *d)
> +{
> +    ASSERT((current->domain == d) || atomic_read(&d->pause_count));
> +

This seems like an odd place to put such an assertion.

> +    return d->ioreq_server.nr_servers;
> +}
> +#else
> +static inline bool domain_has_ioreq_server(const struct domain *d)
> +{
> +    return false;
> +}
> +#endif
> +

Can this be any more compact? E.g.

return IS_ENABLED(CONFIG_IOREQ_SERVER) && d->ioreq_server.nr_servers;

?

>  struct ioreq_server *get_ioreq_server(const struct domain *d,
>                                        unsigned int id);
> 
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index f9ce14c..290cddb 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -553,6 +553,7 @@ struct domain
>      struct {
>          spinlock_t              lock;
>          struct ioreq_server     *server[MAX_NR_IOREQ_SERVERS];
> +        unsigned int            nr_servers;
>      } ioreq_server;
>  #endif
>  };
> --
> 2.7.4




From xen-devel-bounces@lists.xenproject.org Tue Oct 20 10:55:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 10:55:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9088.24440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUpIN-0000BG-VB; Tue, 20 Oct 2020 10:55:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9088.24440; Tue, 20 Oct 2020 10:55:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUpIN-0000B9-SF; Tue, 20 Oct 2020 10:55:11 +0000
Received: by outflank-mailman (input) for mailman id 9088;
 Tue, 20 Oct 2020 10:55:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kUpIM-0000B3-OH
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:55:10 +0000
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 438461d1-4f92-448c-a00a-471d55fdc3d0;
 Tue, 20 Oct 2020 10:55:10 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id x1so1361205eds.1
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 03:55:09 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-230.amazon.com. [54.240.197.230])
 by smtp.gmail.com with ESMTPSA id w22sm2043189edu.15.2020.10.20.03.55.07
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 20 Oct 2020 03:55:08 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dZuW=D3=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
	id 1kUpIM-0000B3-OH
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 10:55:10 +0000
X-Inumbo-ID: 438461d1-4f92-448c-a00a-471d55fdc3d0
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 438461d1-4f92-448c-a00a-471d55fdc3d0;
	Tue, 20 Oct 2020 10:55:10 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id x1so1361205eds.1
        for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 03:55:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:thread-index
         :content-language;
        bh=QDpxbzaKWhiOUrC0mixBdsCIdgIRhodTiFBVVSGm43Q=;
        b=L0PsJD1IcUwlXwLsX/7U9jFdlOMQ6sPFWxkEi2Tu+gSVmMZ1cCy/Bl4AzHOtFgyaHA
         WlWCpVUWq3WdD2TMMYUVGyuw5RvoS4w54nZNoPkVjMSw/uTk487BLpDfqKF7ON0NDtfA
         sGebQA7n8Kz9RCBzp5tDTZfa7QchG2MTdDM8bGo/HUcZVm7QS747ag6VjlyQvDKLHGnb
         AivRtgx9IAJXHj4x+DMq5nNkor3vnql9O/71bx3InPOnv6BD9MuxALTCVmsHPq4skowF
         cWMORZQQ1Tmy6T1n30BZVeCffCgr/+qbBKnekN+TydX/jJQ1jKJ6LhUMCqRUgK1gO61e
         iaZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :thread-index:content-language;
        bh=QDpxbzaKWhiOUrC0mixBdsCIdgIRhodTiFBVVSGm43Q=;
        b=soex0GDsOMb/T0Jan+W6qDmArxUCtQQyVetyib9d/0y5ahE80htS3Fhq+Tgns8L7B6
         VOgPAKZjs76Fm/7Qrvf+ZBq6Zf4qQtBuyWY/UtdF534oJ9R78gHeeH+VaUPn+JGOVA5T
         JeCPgdy2kF6vnSH7WV7cGqPFa1FmjFjgs+RH6AMLwTanpn9f7IhIxpkE5d/8zV1OxnY5
         eMrRnDFgKuJXW+cUlZMNCgkCBaaTmATgo+MMSuf8SfYTIfBtcd6EOWvF6THBn0xUVEUU
         Ez0THLYOviUc4H80ywS4GSweTRR3RD/8/joRTaZ5NI8fyVjrJpwcW1C0mPKOZg6erSSp
         W6/Q==
X-Gm-Message-State: AOAM531u4jCAhj16nh9RO7BGUqaO0YrZ9sS+kmrYnfbMArMkEVq2hRNu
	AldDBgddpFtYFQ3zbRTxqmY=
X-Google-Smtp-Source: ABdhPJxSxTpqy/Es07re/7wJNHCZ7KAS/02YaDdE7x4y/1z3o2CSFcX81IYlYZGZ3+oJ+MiuhfhEUQ==
X-Received: by 2002:a50:d751:: with SMTP id i17mr2128076edj.337.1603191309187;
        Tue, 20 Oct 2020 03:55:09 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-230.amazon.com. [54.240.197.230])
        by smtp.gmail.com with ESMTPSA id w22sm2043189edu.15.2020.10.20.03.55.07
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Tue, 20 Oct 2020 03:55:08 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Oleksandr Tyshchenko'" <olekstysh@gmail.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Oleksandr Tyshchenko'" <oleksandr_tyshchenko@epam.com>,
	"'Jan Beulich'" <jbeulich@suse.com>,
	"'Andrew Cooper'" <andrew.cooper3@citrix.com>,
	=?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>,
	"'Wei Liu'" <wl@xen.org>,
	"'George Dunlap'" <george.dunlap@citrix.com>,
	"'Ian Jackson'" <iwj@xenproject.org>,
	"'Julien Grall'" <julien@xen.org>,
	"'Stefano Stabellini'" <sstabellini@kernel.org>,
	"'Jun Nakajima'" <jun.nakajima@intel.com>,
	"'Kevin Tian'" <kevin.tian@intel.com>,
	"'Julien Grall'" <julien.grall@arm.com>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <1602780274-29141-12-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-12-git-send-email-olekstysh@gmail.com>
Subject: RE: [PATCH V2 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
Date: Tue, 20 Oct 2020 11:55:07 +0100
Message-ID: <004f01d6a6cf$79d0daa0$6d728fe0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQFqp5MaNUj6MKEiN9RM6S6pfA5bVAEW73X1qm+YG6A=
Content-Language: en-gb

> -----Original Message-----
> From: Oleksandr Tyshchenko <olekstysh@gmail.com>
> Sent: 15 October 2020 17:44
> To: xen-devel@lists.xenproject.org
> Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>; Paul Durrant <paul@xen.org>; Jan Beulich
> <jbeulich@suse.com>; Andrew Cooper <andrew.cooper3@citrix.com>; Roger Pau Monné
> <roger.pau@citrix.com>; Wei Liu <wl@xen.org>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <iwj@xenproject.org>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Jun
> Nakajima <jun.nakajima@intel.com>; Kevin Tian <kevin.tian@intel.com>; Julien Grall
> <julien.grall@arm.com>
> Subject: [PATCH V2 11/23] xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
> 
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> 
> The IOREQ is a common feature now and these fields will be used
> on Arm as is. Move them to the common struct vcpu as a part of the
> new struct vcpu_io. Also move enum hvm_io_completion to xen/sched.h
> and remove the "hvm" prefixes.
> 
> This patch completely removes the layering violation in the common code.
> 
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
> CC: Julien Grall <julien.grall@arm.com>
> 
> ---
> Please note, this is a split/cleanup/hardening of Julien's PoC:
> "Add support for Guest IO forwarding to a device emulator"
> 
> ***
> I was thinking that it may be better to place these two fields
> into struct vcpu directly (without the intermediate "io" struct).
> I think the code that operates on these fields would become
> cleaner this way. Another possible option would be either
> to rename the "io" struct (I failed to think of a better name) or
> to drop (or replace?) the duplicated "io" prefixes from these fields.

Just drop the 'io_' prefix from the field names.

  Paul



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:01:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:01:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9116.24523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqK1-0006LK-5B; Tue, 20 Oct 2020 12:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9116.24523; Tue, 20 Oct 2020 12:00:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqK1-0006LA-0x; Tue, 20 Oct 2020 12:00:57 +0000
Received: by outflank-mailman (input) for mailman id 9116;
 Tue, 20 Oct 2020 12:00:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kUqJz-0006DX-L0
 for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:00:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce2b6d7c-38b0-4162-8c70-60e68f1fdc40;
 Tue, 20 Oct 2020 12:00:39 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJd-0001Ja-4u; Tue, 20 Oct 2020 12:00:33 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJd-0001yN-2q; Tue, 20 Oct 2020 12:00:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
	id 1kUqJz-0006DX-L0
	for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:00:55 +0000
X-Inumbo-ID: ce2b6d7c-38b0-4162-8c70-60e68f1fdc40
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ce2b6d7c-38b0-4162-8c70-60e68f1fdc40;
	Tue, 20 Oct 2020 12:00:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=03oVv59KE3qloTC2GXz56bxSgfUd293N6vDaNG5POWU=; b=o6i1k7+agkIQZkUtyHYq0MCw6h
	iU+AFyvv2kkB98w2zRrth1GHZxmu0iPDVoA6k80fqClcFGW9lMlkdA7jQrWmPdfkJuSPrrPeUR0HE
	Ar18jKm1AQmTyW3I13olHndppo2xJHmDuT7US0hHPHhDrh+myV1kx09qXe2sx7U3v0Ng=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1kUqJd-0001Ja-4u; Tue, 20 Oct 2020 12:00:33 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1kUqJd-0001yN-2q; Tue, 20 Oct 2020 12:00:33 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 331 v2 - Race condition in Linux event
 handler may crash dom0
Message-Id: <E1kUqJd-0001yN-2q@xenbits.xenproject.org>
Date: Tue, 20 Oct 2020 12:00:33 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-331
                              version 2

         Race condition in Linux event handler may crash dom0

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

The Linux kernel event channel handling code doesn't defend the
handling of an event against the same event channel being removed in
parallel.

This can result in accesses to already freed memory areas or NULL
pointer dereferences in the event handling code, leading to
misbehaviour of the system or even crashes.

IMPACT
======

A misbehaving guest can trigger a dom0 crash by sending events for a
paravirtualized device while simultaneously reconfiguring it.

VULNERABLE SYSTEMS
==================

All systems with a Linux dom0 are vulnerable.

All Linux kernel versions are vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Jinoh Kang of Theori.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa331-linux.patch     Linux

$ sha256sum xsa331*
8583392c0c573f7baa85e41c9afbdf74dcb04aea1be992d78991f0787230a193  xsa331-linux.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+OzqMMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZuo4H/R4b4Z7ZTMwwpL4u3PrguNZduaTc3vy9R+Gd0+5z
hY0Zfif7SfhJ2apN4Ihs1eAGxyWLI/I8kQQGE4xKgZy2ygciMbTK0OCsoGxfEr6v
bi4RKV9I03g3fQHy48z+lOt4XKTY8+OpHw8LYY3W7jdnQ0YJrPCOmap0Xkv91QhP
+EkmxzahVQv0T16cP4fxZFUvY0M9gijEjE9h9Gv23M+tLP9SGkW9Hd11qM135AKh
vVSYUIuvyd20zb5uiqXono9qP1CeKyCOXHL+YQ+K7eOjYCVbEDdREneBegFlS9By
jaFukH/psQDdemQDT4amzOmtBzdImIzkGhflvj+b5axRlrw=
=FLDG
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa331-linux.patch"
Content-Disposition: attachment; filename="xsa331-linux.patch"
Content-Transfer-Encoding: base64

RnJvbSBhNWFiMWQ3OGJlZmViNTNjM2I2MzI3MjFhYTViN2I0YjVlYzliNjQ4
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyNyArMDIwMApTdWJqZWN0OiBbUEFUQ0hdIHhlbi9ldmVudHM6IGF2b2lk
IHJlbW92aW5nIGFuIGV2ZW50IGNoYW5uZWwgd2hpbGUKIGhhbmRsaW5nIGl0
CgpUb2RheSBpdCBjYW4gaGFwcGVuIHRoYXQgYW4gZXZlbnQgY2hhbm5lbCBp
cyBiZWluZyByZW1vdmVkIGZyb20gdGhlCnN5c3RlbSB3aGlsZSB0aGUgZXZl
bnQgaGFuZGxpbmcgbG9vcCBpcyBhY3RpdmUuIFRoaXMgY2FuIGxlYWQgdG8g
YQpyYWNlIHJlc3VsdGluZyBpbiBjcmFzaGVzIG9yIFdBUk4oKSBzcGxhdHMg
d2hlbiB0cnlpbmcgdG8gYWNjZXNzIHRoZQppcnFfaW5mbyBzdHJ1Y3R1cmUg
cmVsYXRlZCB0byB0aGUgZXZlbnQgY2hhbm5lbC4KCkZpeCB0aGlzIHByb2Js
ZW0gYnkgdXNpbmcgYSByd2xvY2sgdGFrZW4gYXMgcmVhZGVyIGluIHRoZSBl
dmVudApoYW5kbGluZyBsb29wIGFuZCBhcyB3cml0ZXIgd2hlbiBkZWFsbG9j
YXRpbmcgdGhlIGlycV9pbmZvIHN0cnVjdHVyZS4KCkFzIHRoZSBvYnNlcnZl
ZCBwcm9ibGVtIHdhcyBhIE5VTEwgZGVyZWZlcmVuY2UgaW4gZXZ0Y2huX2Zy
b21faXJxKCkKbWFrZSB0aGlzIGZ1bmN0aW9uIG1vcmUgcm9idXN0IGFnYWlu
c3QgcmFjZXMgYnkgdGVzdGluZyB0aGUgaXJxX2luZm8KcG9pbnRlciB0byBi
ZSBub3QgTlVMTCBiZWZvcmUgZGVyZWZlcmVuY2luZyBpdC4KCkFuZCBmaW5h
bGx5IG1ha2UgYWxsIGFjY2Vzc2VzIHRvIGV2dGNobl90b19pcnFbcm93XVtj
b2xdIGF0b21pYyBvbmVzCmluIG9yZGVyIHRvIGF2b2lkIHNlZWluZyBwYXJ0
aWFsIHVwZGF0ZXMgb2YgYW4gYXJyYXkgZWxlbWVudCBpbiBpcnEKaGFuZGxp
bmcuIE5vdGUgdGhhdCBpcnEgaGFuZGxpbmcgY2FuIGJlIGVudGVyZWQgb25s
eSBmb3IgZXZlbnQgY2hhbm5lbHMKd2hpY2ggaGF2ZSBiZWVuIHZhbGlkIGJl
Zm9yZSwgc28gYW55IG5vdCBwb3B1bGF0ZWQgcm93IGlzbid0IGEgcHJvYmxl
bQppbiB0aGlzIHJlZ2FyZCwgYXMgcm93cyBhcmUgb25seSBldmVyIGFkZGVk
IGFuZCBuZXZlciByZW1vdmVkLgoKVGhpcyBpcyBYU0EtMzMxLgoKQ2M6IHN0
YWJsZUB2Z2VyLmtlcm5lbC5vcmcKUmVwb3J0ZWQtYnk6IEppbm9oIEthbmcg
PGx1a2UxMzM3QHRoZW9yaS5pbz4KU2lnbmVkLW9mZi1ieTogSnVlcmdlbiBH
cm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBT
dGFiZWxsaW5pIDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPgpSZXZpZXdlZC1i
eTogV2VpIExpdSA8d2xAeGVuLm9yZz4KLS0tCiBkcml2ZXJzL3hlbi9ldmVu
dHMvZXZlbnRzX2Jhc2UuYyB8IDQxICsrKysrKysrKysrKysrKysrKysrKysr
KysrKystLS0tCiAxIGZpbGUgY2hhbmdlZCwgMzYgaW5zZXJ0aW9ucygrKSwg
NSBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS9kcml2ZXJzL3hlbi9ldmVu
dHMvZXZlbnRzX2Jhc2UuYyBiL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNf
YmFzZS5jCmluZGV4IDZmMDJjMThmYTY1Yy4uNDA3NzQxZWNlMDg0IDEwMDY0
NAotLS0gYS9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYworKysg
Yi9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYwpAQCAtMzMsNiAr
MzMsNyBAQAogI2luY2x1ZGUgPGxpbnV4L3NsYWIuaD4KICNpbmNsdWRlIDxs
aW51eC9pcnFuci5oPgogI2luY2x1ZGUgPGxpbnV4L3BjaS5oPgorI2luY2x1
ZGUgPGxpbnV4L3NwaW5sb2NrLmg+CiAKICNpZmRlZiBDT05GSUdfWDg2CiAj
aW5jbHVkZSA8YXNtL2Rlc2MuaD4KQEAgLTcxLDYgKzcyLDIzIEBAIGNvbnN0
IHN0cnVjdCBldnRjaG5fb3BzICpldnRjaG5fb3BzOwogICovCiBzdGF0aWMg
REVGSU5FX01VVEVYKGlycV9tYXBwaW5nX3VwZGF0ZV9sb2NrKTsKIAorLyoK
KyAqIExvY2sgcHJvdGVjdGluZyBldmVudCBoYW5kbGluZyBsb29wIGFnYWlu
c3QgcmVtb3ZpbmcgZXZlbnQgY2hhbm5lbHMuCisgKiBBZGRpbmcgb2YgZXZl
bnQgY2hhbm5lbHMgaXMgbm8gaXNzdWUgYXMgdGhlIGFzc29jaWF0ZWQgSVJR
IGJlY29tZXMgYWN0aXZlCisgKiBvbmx5IGFmdGVyIGV2ZXJ5dGhpbmcgaXMg
c2V0dXAgKGJlZm9yZSByZXF1ZXN0X1t0aHJlYWRlZF9daXJxKCkgdGhlIGhh
bmRsZXIKKyAqIGNhbid0IGJlIGVudGVyZWQgZm9yIGFuIGV2ZW50LCBhcyB0
aGUgZXZlbnQgY2hhbm5lbCB3aWxsIGJlIHVubWFza2VkIG9ubHkKKyAqIHRo
ZW4pLgorICovCitzdGF0aWMgREVGSU5FX1JXTE9DSyhldnRjaG5fcndsb2Nr
KTsKKworLyoKKyAqIExvY2sgaGllcmFyY2h5OgorICoKKyAqIGlycV9tYXBw
aW5nX3VwZGF0ZV9sb2NrCisgKiAgIGV2dGNobl9yd2xvY2sKKyAqICAgICBJ
UlEtZGVzYyBsb2NrCisgKi8KKwogc3RhdGljIExJU1RfSEVBRCh4ZW5faXJx
X2xpc3RfaGVhZCk7CiAKIC8qIElSUSA8LT4gVklSUSBtYXBwaW5nLiAqLwpA
QCAtMTA1LDcgKzEyMyw3IEBAIHN0YXRpYyB2b2lkIGNsZWFyX2V2dGNobl90
b19pcnFfcm93KHVuc2lnbmVkIHJvdykKIAl1bnNpZ25lZCBjb2w7CiAKIAlm
b3IgKGNvbCA9IDA7IGNvbCA8IEVWVENITl9QRVJfUk9XOyBjb2wrKykKLQkJ
ZXZ0Y2huX3RvX2lycVtyb3ddW2NvbF0gPSAtMTsKKwkJV1JJVEVfT05DRShl
dnRjaG5fdG9faXJxW3Jvd11bY29sXSwgLTEpOwogfQogCiBzdGF0aWMgdm9p
ZCBjbGVhcl9ldnRjaG5fdG9faXJxX2FsbCh2b2lkKQpAQCAtMTQyLDcgKzE2
MCw3IEBAIHN0YXRpYyBpbnQgc2V0X2V2dGNobl90b19pcnEoZXZ0Y2huX3Bv
cnRfdCBldnRjaG4sIHVuc2lnbmVkIGludCBpcnEpCiAJCWNsZWFyX2V2dGNo
bl90b19pcnFfcm93KHJvdyk7CiAJfQogCi0JZXZ0Y2huX3RvX2lycVtyb3dd
W2NvbF0gPSBpcnE7CisJV1JJVEVfT05DRShldnRjaG5fdG9faXJxW3Jvd11b
Y29sXSwgaXJxKTsKIAlyZXR1cm4gMDsKIH0KIApAQCAtMTUyLDcgKzE3MCw3
IEBAIGludCBnZXRfZXZ0Y2huX3RvX2lycShldnRjaG5fcG9ydF90IGV2dGNo
bikKIAkJcmV0dXJuIC0xOwogCWlmIChldnRjaG5fdG9faXJxW0VWVENITl9S
T1coZXZ0Y2huKV0gPT0gTlVMTCkKIAkJcmV0dXJuIC0xOwotCXJldHVybiBl
dnRjaG5fdG9faXJxW0VWVENITl9ST1coZXZ0Y2huKV1bRVZUQ0hOX0NPTChl
dnRjaG4pXTsKKwlyZXR1cm4gUkVBRF9PTkNFKGV2dGNobl90b19pcnFbRVZU
Q0hOX1JPVyhldnRjaG4pXVtFVlRDSE5fQ09MKGV2dGNobildKTsKIH0KIAog
LyogR2V0IGluZm8gZm9yIElSUSAqLwpAQCAtMjYxLDEwICsyNzksMTQgQEAg
c3RhdGljIHZvaWQgeGVuX2lycV9pbmZvX2NsZWFudXAoc3RydWN0IGlycV9p
bmZvICppbmZvKQogICovCiBldnRjaG5fcG9ydF90IGV2dGNobl9mcm9tX2ly
cSh1bnNpZ25lZCBpcnEpCiB7Ci0JaWYgKFdBUk4oaXJxID49IG5yX2lycXMs
ICJJbnZhbGlkIGlycSAlZCFcbiIsIGlycSkpCisJY29uc3Qgc3RydWN0IGly
cV9pbmZvICppbmZvID0gTlVMTDsKKworCWlmIChsaWtlbHkoaXJxIDwgbnJf
aXJxcykpCisJCWluZm8gPSBpbmZvX2Zvcl9pcnEoaXJxKTsKKwlpZiAoIWlu
Zm8pCiAJCXJldHVybiAwOwogCi0JcmV0dXJuIGluZm9fZm9yX2lycShpcnEp
LT5ldnRjaG47CisJcmV0dXJuIGluZm8tPmV2dGNobjsKIH0KIAogdW5zaWdu
ZWQgaW50IGlycV9mcm9tX2V2dGNobihldnRjaG5fcG9ydF90IGV2dGNobikK
QEAgLTQ0MCwxNiArNDYyLDIxIEBAIHN0YXRpYyBpbnQgX19tdXN0X2NoZWNr
IHhlbl9hbGxvY2F0ZV9pcnFfZ3NpKHVuc2lnbmVkIGdzaSkKIHN0YXRpYyB2
b2lkIHhlbl9mcmVlX2lycSh1bnNpZ25lZCBpcnEpCiB7CiAJc3RydWN0IGly
cV9pbmZvICppbmZvID0gaW5mb19mb3JfaXJxKGlycSk7CisJdW5zaWduZWQg
bG9uZyBmbGFnczsKIAogCWlmIChXQVJOX09OKCFpbmZvKSkKIAkJcmV0dXJu
OwogCisJd3JpdGVfbG9ja19pcnFzYXZlKCZldnRjaG5fcndsb2NrLCBmbGFn
cyk7CisKIAlsaXN0X2RlbCgmaW5mby0+bGlzdCk7CiAKIAlzZXRfaW5mb19m
b3JfaXJxKGlycSwgTlVMTCk7CiAKIAlXQVJOX09OKGluZm8tPnJlZmNudCA+
IDApOwogCisJd3JpdGVfdW5sb2NrX2lycXJlc3RvcmUoJmV2dGNobl9yd2xv
Y2ssIGZsYWdzKTsKKwogCWtmcmVlKGluZm8pOwogCiAJLyogTGVnYWN5IElS
USBkZXNjcmlwdG9ycyBhcmUgbWFuYWdlZCBieSB0aGUgYXJjaC4gKi8KQEAg
LTEyMzMsNiArMTI2MCw4IEBAIHN0YXRpYyB2b2lkIF9feGVuX2V2dGNobl9k
b191cGNhbGwodm9pZCkKIAlzdHJ1Y3QgdmNwdV9pbmZvICp2Y3B1X2luZm8g
PSBfX3RoaXNfY3B1X3JlYWQoeGVuX3ZjcHUpOwogCWludCBjcHUgPSBzbXBf
cHJvY2Vzc29yX2lkKCk7CiAKKwlyZWFkX2xvY2soJmV2dGNobl9yd2xvY2sp
OworCiAJZG8gewogCQl2Y3B1X2luZm8tPmV2dGNobl91cGNhbGxfcGVuZGlu
ZyA9IDA7CiAKQEAgLTEyNDMsNiArMTI3Miw4IEBAIHN0YXRpYyB2b2lkIF9f
eGVuX2V2dGNobl9kb191cGNhbGwodm9pZCkKIAkJdmlydF9ybWIoKTsgLyog
SHlwZXJ2aXNvciBjYW4gc2V0IHVwY2FsbCBwZW5kaW5nLiAqLwogCiAJfSB3
aGlsZSAodmNwdV9pbmZvLT5ldnRjaG5fdXBjYWxsX3BlbmRpbmcpOworCisJ
cmVhZF91bmxvY2soJmV2dGNobl9yd2xvY2spOwogfQogCiB2b2lkIHhlbl9l
dnRjaG5fZG9fdXBjYWxsKHN0cnVjdCBwdF9yZWdzICpyZWdzKQotLSAKMi4y
Ni4yCgo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:01:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:01:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9122.24556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqK9-0006Uc-Fs; Tue, 20 Oct 2020 12:01:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9122.24556; Tue, 20 Oct 2020 12:01:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqK9-0006UT-BY; Tue, 20 Oct 2020 12:01:05 +0000
Received: by outflank-mailman (input) for mailman id 9122;
 Tue, 20 Oct 2020 12:01:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kUqK7-0006So-Nh
 for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:01:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c73edf6-5685-4d42-b9de-7a9fef054c5e;
 Tue, 20 Oct 2020 12:00:40 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJZ-0001JQ-UX; Tue, 20 Oct 2020 12:00:29 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJZ-0001xT-Qb; Tue, 20 Oct 2020 12:00:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
	id 1kUqK7-0006So-Nh
	for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:01:03 +0000
X-Inumbo-ID: 5c73edf6-5685-4d42-b9de-7a9fef054c5e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5c73edf6-5685-4d42-b9de-7a9fef054c5e;
	Tue, 20 Oct 2020 12:00:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=RiwhqTEH0E0roI1zI7wWAXFmOGUyLlkz1tsekO2T5EU=; b=diov/HDtqI9d2lsgEohuxN9y4N
	ijG5fkcNT1HixB2fyPttj+0XpMf1vgNTFGrzLEjv7Hl38WEX+lmbmiEyPM9H5fl/jV93V45J7e4xn
	KEVbwiEvg+fTt9JlJL3SkndUFGGc1FLUXv9CLaKjTtoGs+vDvuUKFLuk1WBbs7Jd0G2E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1kUqJZ-0001JQ-UX; Tue, 20 Oct 2020 12:00:29 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1kUqJZ-0001xT-Qb; Tue, 20 Oct 2020 12:00:29 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 286 v4 - x86 PV guest INVLPG-like flushes
 may leave stale TLB entries
Message-Id: <E1kUqJZ-0001xT-Qb@xenbits.xenproject.org>
Date: Tue, 20 Oct 2020 12:00:29 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-286
                              version 4

     x86 PV guest INVLPG-like flushes may leave stale TLB entries

UPDATES IN VERSION 4
====================

Warn about performance impact.

Public release.

ISSUE DESCRIPTION
=================

x86 PV guest kernels may use hypercalls with INVLPG-like behavior to
invalidate TLB entries even after changes to non-leaf page tables.  Such
changes to non-leaf page tables will, however, also render stale
possible TLB entries created by Xen's internal use of linear page tables
to process guest requests like update-va-mapping.  Invalidation of these
TLB entries has been missing, allowing subsequent guest requests to
change address mappings for one process to potentially modify memory
meanwhile in use elsewhere.

IMPACT
======

Malicious x86 PV guest user mode may be able to escalate their privilege
to that of the guest kernel.

VULNERABLE SYSTEMS
==================

All versions of Xen expose the vulnerability.

The vulnerability is exposed to x86 PV guests only.  x86 HVM/PVH guests
as well as ARM ones are not vulnerable.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Jann Horn of Google Project Zero.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

Note that these patches are known to produce serious performance
problems for at least some workloads.  Work is ongoing to improve the
performance, and this XSA will be updated when new patches are
available.

xsa286/*.patch           xen-unstable
xsa286-4.14/*.patch      Xen 4.14.x
xsa286-4.13/*.patch      Xen 4.13.x
xsa286-4.12/*.patch      Xen 4.12.x
xsa286-4.11/*.patch      Xen 4.11.x
xsa286-4.10/*.patch      Xen 4.10.x

$ sha256sum xsa286* xsa286*/*
e67a0828be2157c54282a4cc6956234581d32b793021e12ee61676bad4d3b740  xsa286.meta
95c1650a7e0496577929fd5d3240b14ab69e4086b613a52117fcfd879c9aea0d  xsa286-4.10/0001-x86-shadow-drop-further-32-bit-relics.patch
18218556f8f9218a57dc9afeffddeaa2133fbe2788871082d8b040ab67abb68c  xsa286-4.10/0002-x86-shadow-don-t-pass-wrong-L4-MFN-to-guest_walk_tab.patch
e8c89338a74b5fce9ee2c82e360889123afd7efe72536c236cf521c28e48ecc9  xsa286-4.10/0003-x86-shadow-don-t-use-map_domain_page_global-on-paths.patch
a677ddb308359450087c21085536d64a15d9339cd02a88eecd8d694c9d26837c  xsa286-4.10/0004-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch
927b68d2ffb4afb677b2f3d820f2e12ae61a37a2b4797fdc8316fe65adc0e46f  xsa286-4.10/0005-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch
9c6dfc0a2bf7408c1852de2410fecbaab48a4b885f8cf4836781d31727f0c69d  xsa286-4.10/0006-x86-mm-check-page-types-in-do_page_walk.patch
e4c0fcbfd558a95d52b5312902b10aced0dd8d23d1d1af5b3f7d39c3641010c4  xsa286-4.10/0007-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch
c0e481010cc801c1455008f52e5f2799240223dbfda2f252c872381d30f5af74  xsa286-4.10/0008-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch
89cf1192938027eaa7bb38dcbe268f54771bac0e107fa33f4f625c1fef3397c1  xsa286-4.10/0009-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch
6c01ba250ae9ffcd894f603d519bf3a170c92c9e0bc3bb7a79c3e67412ffcf35  xsa286-4.10/0010-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch
21d8f1f05a537bae19088ed28cfb8990c2c19f3f93fbe894f40b354ea7702d3a  xsa286-4.11/0001-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch
6e4023e7d366e53ccda8f80a4b36bd77f194b84ef0a30d1af6f18d56c11c5256  xsa286-4.11/0002-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch
aabccc3a6c267e41c3ac47916592ce645f6e6788cad900845dbf254801f9dd23  xsa286-4.11/0003-x86-mm-check-page-types-in-do_page_walk.patch
9bbc53a6533209f85edb005e8a517d5a80fda9db8af39b9378eba74857b9fa6b  xsa286-4.11/0004-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch
19f09374a4d3383eb1e5b9f0b465ff33e17f71fe7131e6cd61599ac9f79e8b00  xsa286-4.11/0005-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch
7b17dd013bdd9520a7c0cec2a6b56da677d980b982b6869865c99f374b3c1560  xsa286-4.11/0006-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch
c4c52a34efd14745a0a80f15a73daf7d7e72cfdf2925045a5ec1d22c798baaf2  xsa286-4.11/0007-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch
e831e18b528d845e3aef329df98244ebed1ade0388f1bfb083d8de626e148388  xsa286-4.12/0001-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch
9db102623f51bb5757fff3a2364a675893ca52e09f47868dad15ce1cb44c40a8  xsa286-4.12/0002-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch
3b0768c05123681db5256a96af3a8abe18e0a488b767183be728cb0a06969333  xsa286-4.12/0003-x86-mm-check-page-types-in-do_page_walk.patch
30342a0c11e0a48b50cd461b6388dce8640f6380cde1d738a3ddd8b95a8c7b1b  xsa286-4.12/0004-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch
3c1a687243e2df7b64a49279cd99ceabdcc00d9f48476faacd8915a9cc61e775  xsa286-4.12/0005-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch
c5aacc47b696b74e80e368c25f0deba20aa69e1de6823f334fddd6a40c6f6a22  xsa286-4.12/0006-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch
07e3f5bcbfbdec070e28fc1bd727b0eb7adc2a75c26636e081800e9939c9bcc2  xsa286-4.12/0007-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch
089d284b29b7179b5c9c04beaf90f24b4b79d81eb3bf55b607405755f3f8b6d8  xsa286-4.13/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch
96a6391f2a027a034f0e3308119b2cc9b3543db985f9975067db42eb553d2ff9  xsa286-4.13/0002-x86-mm-check-page-types-in-do_page_walk.patch
4060ed8ef41314ddf413e307d39254a899a8e65bd28d5dcb6da73a0d922b5cfb  xsa286-4.13/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch
38ed58aceaeb3ab4d9991f12ec11b1eaa0a6d4853023e0e59c0d92589ed112b6  xsa286-4.13/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch
99a0afbc30acf6df26e152752bb65a1de33d9a0a45a5e7ad7693c01c1d1f53d6  xsa286-4.13/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch
4ce310f802a3fcc55d704af755ef4f6d04e029cddc7df03827f7518d3a8faaf9  xsa286-4.13/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch
36f1631c0880a615de48006fc5318c1b305be2ba5ed9c3cd0dd4ed82bd481bed  xsa286-4.14/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch
4c10f102b71f26e86b2f5ef9d7bbb31000e389b64185cc23491a113311d70983  xsa286-4.14/0002-x86-mm-check-page-types-in-do_page_walk.patch
ee227e37cb6de2bed50395a72c6ce9493bdfc0018ed2dc70c81334c98751564d  xsa286-4.14/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch
4892411356967838e49a5063e80f6169446e460bc3bee8765fe8f4852b801851  xsa286-4.14/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch
e840cb758e21be76cfcbd85be82a23bb91e2766d435d314047a62e3b333b7dcb  xsa286-4.14/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch
16763d9a407ae30d79e1f849d1ae1e03e9a1d5dca15181f5d53f49e4eb4708a2  xsa286-4.14/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch
34c00849d8cf56897d5c17c65d83b9ef2f0ce849766c9d7ca07b56de4a4c1307  xsa286/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch
b82ff481239b72f7b364cc4a66363a812a9205b3e605b215b2267cd6f13f9a06  xsa286/0002-x86-mm-check-page-types-in-do_page_walk.patch
527f7a74c0bee0f137b86b3b6475ff4648266f9dcaa5368df2f06b91efd9dbb8  xsa286/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch
fd11aaacef7fd090fbc6409069b57d76a06a7e835b9a9eb5de8e15ace3ffb3e5  xsa286/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch
dbfce6e2975d191b6568392593a835b5fc87d3869a5fad1917aecc11d1f2a62a  xsa286/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch
5ad8a48a603b3fd195168d62d481516999422f86e5bbc65b01cde1ff1d262fb2  xsa286/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch
$
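The checksum listing above can be fed directly to `sha256sum --check`.
A minimal self-contained sketch of that verification step (the
`demo.patch` file and its checksum file are stand-ins for illustration;
in practice the inputs are the saved attachments and the list printed
above):

```shell
# Stand-in for a saved attachment; normally this is one of the
# xsa286*/  patch files from the advisory.
printf 'example patch contents\n' > demo.patch

# Stand-in for the published checksum list (one "<hash>  <file>" line
# per attachment, exactly as in the listing above).
sha256sum demo.patch > demo.sha256

# Recipient side: verify before applying anything.
# Prints "demo.patch: OK" and exits non-zero on any mismatch.
sha256sum --check demo.sha256
```

After verification, the series matching the deployed Xen version is
applied in order against the corresponding tree, for example with
`git am xsa286-4.13/*.patch` or `patch -p1` per file (which of the two
fits depends on the local workflow).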

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+OzqAMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZf44IAITjC6CiRoB4BdmqcMwpQ2bJbC/XqNN/xV/DXTsg
p0sv0w3lXQQIOIzR7UG70IlA2vjW9LNX6k6qjqDGpOJQn5d2Pbj4jFkd11kq24IK
PWuoxpswQRaVu0CU5aPvvtAIkOu9v0wZ6//M3cpe81h1Pl+Mg413SSArP6qRFjhY
tVdzlBzOwqXYMH5prlvWG+td43D6e5UeMPZM4o4Rkovdjk3QPkpsBrElnlZInmqH
ntbFSTCCUcIpaLRY88yPOksTHXxPtdrDh2l9okNYhLhf7Ywk0z1SWkiueazT15t4
ytDw6OdmYjajqFhI9+FvyYLRiQK+twl6iPUXKarqEQyBbEU=
=ori5
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa286.meta"
Content-Disposition: attachment; filename="xsa286.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAyODYsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxNzE5Zjc5YTBlZmQzNmQxNTgzN2M1MTk4MjE3M2RkMWMy
ODdkY2VkIiwKICAgICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAi
UGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTI4Ni00LjEwLyoucGF0Y2gi
CiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQu
MTEiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAg
ICAgICAgICAiU3RhYmxlUmVmIjogIjM2MzBhMzY3ODU0Yzk4YmJmOGU3NDdk
MDllZWFiN2U2OGYzNzAwMDMiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwK
ICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMjg2LTQu
MTEvKi5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAg
IH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7CiAgICAgICAg
InhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNjg4ODAxNzM5MmFj
MjViNWU1ODg1NTQwMzA2NDJhZmZhYzI1YTk1ZCIsCiAgICAgICAgICAiUHJl
cmVxcyI6IFtdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EyODYtNC4xMi8qLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0K
ICAgICAgfQogICAgfSwKICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6
IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI4
ZTdlNTg1N2EyMDNjOWQ5ZGY3NzMzZmQ2ODc2ODU1NWM3ZTc2ODM5IiwKICAg
ICAgICAgICJQcmVyZXFzIjogW10sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTI4Ni00LjEzLyoucGF0Y2giCiAgICAgICAgICBd
CiAgICAgICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTQiOiB7CiAgICAg
ICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3Rh
YmxlUmVmIjogImM5M2I1MjBhNDFmMjc4N2RkNzZiZmIyZTQ1NDgzNmQxZDU3
ODc1MDUiLAogICAgICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMjg2LTQuMTQvKi5wYXRjaCIK
ICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAibWFz
dGVyIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewog
ICAgICAgICAgIlN0YWJsZVJlZiI6ICI5MzUwODU5NWQ1ODhhZmU5ZGNhMDg3
Zjk1MjAwZWZmYjdjZWRjODFmIiwKICAgICAgICAgICJQcmVyZXFzIjogW10s
CiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTI4Ni8q
LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfQog
IH0KfQ==

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0001-x86-shadow-drop-further-32-bit-relics.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0001-x86-shadow-drop-further-32-bit-relics.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvc2hhZG93OiBkcm9wIGZ1cnRoZXIgMzItYml0IHJlbGljcwoKUFYg
Z3Vlc3RzIGRvbid0IGV2ZXIgZ2V0IHNoYWRvd2VkIGluIG90aGVyIHRoYW4g
NC1sZXZlbCBtb2RlIGFueW1vcmU7CmNvbW1pdCA1YTNjZThmODVlICgieDg2
L3NoYWRvdzogZHJvcCBzdHJheSBuYW1lIHRhZ3MgZnJvbQpzaF97Z3Vlc3Rf
Z2V0LG1hcH1fZWZmX2wxZSgpIikgZGlkbid0IGdvIHF1aXRlIGZhcmUgZW5v
dWdoIChhbmQgdGhlcmUncwphIGdvb2QgY2hhbmNlIHRoYXQgZnVydGhlciBj
bGVhbnVwIG9wcG9ydHVuaXR5IGV4aXN0cywgd2hpY2ggSSBzaW1wbHkKZGlk
bid0IG5vdGljZSkuCgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJl
dWxpY2hAc3VzZS5jb20+CkFja2VkLWJ5OiBUaW0gRGVlZ2FuIDx0aW1AeGVu
Lm9yZz4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0vc2hhZG93L211
bHRpLmMgYi94ZW4vYXJjaC94ODYvbW0vc2hhZG93L211bHRpLmMKaW5kZXgg
Nzc4Zjc5MDdmMS4uMzVlMDhmOTA5NyAxMDA2NDQKLS0tIGEveGVuL2FyY2gv
eDg2L21tL3NoYWRvdy9tdWx0aS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9z
aGFkb3cvbXVsdGkuYwpAQCAtMzgwNCw4ICszODA0LDcgQEAgc2hfdXBkYXRl
X2xpbmVhcl9lbnRyaWVzKHN0cnVjdCB2Y3B1ICp2KQogCiAjZWxpZiBTSEFE
T1dfUEFHSU5HX0xFVkVMUyA9PSAzCiAKLSAgICAvKiBQVjogWFhYCi0gICAg
ICoKKyAgICAvKgogICAgICAqIEhWTTogVG8gZ2l2ZSBvdXJzZWx2ZXMgYSBs
aW5lYXIgbWFwIG9mIHRoZSAgc2hhZG93cywgd2UgbmVlZCB0bwogICAgICAq
IGV4dGVuZCBhIFBBRSBzaGFkb3cgdG8gNCBsZXZlbHMuICBXZSBkbyB0aGlz
IGJ5ICBoYXZpbmcgYSBtb25pdG9yCiAgICAgICogbDMgaW4gc2xvdCAwIG9m
IHRoZSBtb25pdG9yIGw0IHRhYmxlLCBhbmQgIGNvcHlpbmcgdGhlIFBBRSBs
MwpAQCAtMzgxNCw3ICszODEzLDcgQEAgc2hfdXBkYXRlX2xpbmVhcl9lbnRy
aWVzKHN0cnVjdCB2Y3B1ICp2KQogICAgICAqIHRoZSBzaGFkb3dzLgogICAg
ICAqLwogCi0gICAgaWYgKCBzaGFkb3dfbW9kZV9leHRlcm5hbChkKSApCisg
ICAgQVNTRVJUKHNoYWRvd19tb2RlX2V4dGVybmFsKGQpKTsKICAgICB7CiAg
ICAgICAgIC8qIEluc3RhbGwgY29waWVzIG9mIHRoZSBzaGFkb3cgbDNlcyBp
bnRvIHRoZSBtb25pdG9yIGwyIHRhYmxlCiAgICAgICAgICAqIHRoYXQgbWFw
cyBTSF9MSU5FQVJfUFRfVklSVF9TVEFSVC4gKi8KQEAgLTM4NjAsOCArMzg1
OSw2IEBAIHNoX3VwZGF0ZV9saW5lYXJfZW50cmllcyhzdHJ1Y3QgdmNwdSAq
dikKICAgICAgICAgaWYgKCB2ICE9IGN1cnJlbnQgKQogICAgICAgICAgICAg
dW5tYXBfZG9tYWluX3BhZ2UobWwyZSk7CiAgICAgfQotICAgIGVsc2UKLSAg
ICAgICAgZG9tYWluX2NyYXNoKGQpOyAvKiBYWFggKi8KIAogI2Vsc2UKICNl
cnJvciB0aGlzIHNob3VsZCBub3QgaGFwcGVuCkBAIC00MDkwLDEyICs0MDg3
LDkgQEAgc2hfdXBkYXRlX2NyMyhzdHJ1Y3QgdmNwdSAqdiwgaW50IGRvX2xv
Y2tpbmcpCiAgICAgICAqIHVudGlsIHRoZSBuZXh0IENSMyB3cml0ZSBtYWtl
cyB1cyByZWZyZXNoIG91ciBjYWNoZS4gKi8KICAgICAgQVNTRVJUKHYtPmFy
Y2gucGFnaW5nLnNoYWRvdy5ndWVzdF92dGFibGUgPT0gTlVMTCk7CiAKLSAg
ICAgaWYgKCBzaGFkb3dfbW9kZV9leHRlcm5hbChkKSApCi0gICAgICAgICAv
KiBGaW5kIHdoZXJlIGluIHRoZSBwYWdlIHRoZSBsMyB0YWJsZSBpcyAqLwot
ICAgICAgICAgZ3Vlc3RfaWR4ID0gZ3Vlc3RfaW5kZXgoKHZvaWQgKil2LT5h
cmNoLmh2bV92Y3B1Lmd1ZXN0X2NyWzNdKTsKLSAgICAgZWxzZQotICAgICAg
ICAgLyogUFYgZ3Vlc3Q6IGwzIGlzIGF0IHRoZSBzdGFydCBvZiBhIHBhZ2Ug
Ki8KLSAgICAgICAgIGd1ZXN0X2lkeCA9IDA7CisgICAgIEFTU0VSVChzaGFk
b3dfbW9kZV9leHRlcm5hbChkKSk7CisgICAgIC8qIEZpbmQgd2hlcmUgaW4g
dGhlIHBhZ2UgdGhlIGwzIHRhYmxlIGlzICovCisgICAgIGd1ZXN0X2lkeCA9
IGd1ZXN0X2luZGV4KCh2b2lkICopdi0+YXJjaC5odm1fdmNwdS5ndWVzdF9j
clszXSk7CiAKICAgICAgLy8gSWdub3JlIHRoZSBsb3cgMiBiaXRzIG9mIGd1
ZXN0X2lkeCAtLSB0aGV5IGFyZSByZWFsbHkganVzdAogICAgICAvLyBjYWNo
ZSBjb250cm9sLgpAQCAtNDEwNiwxNyArNDEwMCwxMyBAQCBzaF91cGRhdGVf
Y3IzKHN0cnVjdCB2Y3B1ICp2LCBpbnQgZG9fbG9ja2luZykKICAgICAgICAg
IHYtPmFyY2gucGFnaW5nLnNoYWRvdy5nbDNlW2ldID0gZ2wzZVtpXTsKICAg
ICAgdW5tYXBfZG9tYWluX3BhZ2UoZ2wzZSk7CiAjZWxpZiBHVUVTVF9QQUdJ
TkdfTEVWRUxTID09IDIKLSAgICBpZiAoIHNoYWRvd19tb2RlX2V4dGVybmFs
KGQpIHx8IHNoYWRvd19tb2RlX3RyYW5zbGF0ZShkKSApCi0gICAgewotICAg
ICAgICBpZiAoIHYtPmFyY2gucGFnaW5nLnNoYWRvdy5ndWVzdF92dGFibGUg
KQotICAgICAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2VfZ2xvYmFsKHYtPmFy
Y2gucGFnaW5nLnNoYWRvdy5ndWVzdF92dGFibGUpOwotICAgICAgICB2LT5h
cmNoLnBhZ2luZy5zaGFkb3cuZ3Vlc3RfdnRhYmxlID0gbWFwX2RvbWFpbl9w
YWdlX2dsb2JhbChnbWZuKTsKLSAgICAgICAgLyogRG9lcyB0aGlzIHJlYWxs
eSBuZWVkIG1hcF9kb21haW5fcGFnZV9nbG9iYWw/ICBIYW5kbGUgdGhlCi0g
ICAgICAgICAqIGVycm9yIHByb3Blcmx5IGlmIHNvLiAqLwotICAgICAgICBC
VUdfT04odi0+YXJjaC5wYWdpbmcuc2hhZG93Lmd1ZXN0X3Z0YWJsZSA9PSBO
VUxMKTsgLyogWFhYICovCi0gICAgfQotICAgIGVsc2UKLSAgICAgICAgdi0+
YXJjaC5wYWdpbmcuc2hhZG93Lmd1ZXN0X3Z0YWJsZSA9IF9fbGluZWFyX2wy
X3RhYmxlOworICAgIEFTU0VSVChzaGFkb3dfbW9kZV9leHRlcm5hbChkKSk7
CisgICAgaWYgKCB2LT5hcmNoLnBhZ2luZy5zaGFkb3cuZ3Vlc3RfdnRhYmxl
ICkKKyAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2VfZ2xvYmFsKHYtPmFyY2gu
cGFnaW5nLnNoYWRvdy5ndWVzdF92dGFibGUpOworICAgIHYtPmFyY2gucGFn
aW5nLnNoYWRvdy5ndWVzdF92dGFibGUgPSBtYXBfZG9tYWluX3BhZ2VfZ2xv
YmFsKGdtZm4pOworICAgIC8qIERvZXMgdGhpcyByZWFsbHkgbmVlZCBtYXBf
ZG9tYWluX3BhZ2VfZ2xvYmFsPyAgSGFuZGxlIHRoZQorICAgICAqIGVycm9y
IHByb3Blcmx5IGlmIHNvLiAqLworICAgIEJVR19PTih2LT5hcmNoLnBhZ2lu
Zy5zaGFkb3cuZ3Vlc3RfdnRhYmxlID09IE5VTEwpOyAvKiBYWFggKi8KICNl
bHNlCiAjZXJyb3IgdGhpcyBzaG91bGQgbmV2ZXIgaGFwcGVuCiAjZW5kaWYK
QEAgLTQyMjUsMjEgKzQyMTUsMTUgQEAgc2hfdXBkYXRlX2NyMyhzdHJ1Y3Qg
dmNwdSAqdiwgaW50IGRvX2xvY2tpbmcpCiAgICAgewogICAgICAgICBtYWtl
X2NyMyh2LCBwYWdldGFibGVfZ2V0X21mbih2LT5hcmNoLm1vbml0b3JfdGFi
bGUpKTsKICAgICB9CisjaWYgU0hBRE9XX1BBR0lOR19MRVZFTFMgPT0gNAog
ICAgIGVsc2UgLy8gbm90IHNoYWRvd19tb2RlX2V4dGVybmFsLi4uCiAgICAg
ewogICAgICAgICAvKiBXZSBkb24ndCBzdXBwb3J0IFBWIGV4Y2VwdCBndWVz
dCA9PSBzaGFkb3cgPT0gY29uZmlnIGxldmVscyAqLwotICAgICAgICBCVUdf
T04oR1VFU1RfUEFHSU5HX0xFVkVMUyAhPSBTSEFET1dfUEFHSU5HX0xFVkVM
Uyk7Ci0jaWYgU0hBRE9XX1BBR0lOR19MRVZFTFMgPT0gMwotICAgICAgICAv
KiAyLW9uLTMgb3IgMy1vbi0zOiBVc2UgdGhlIFBBRSBzaGFkb3cgbDMgdGFi
bGUgd2UganVzdCBmYWJyaWNhdGVkLgotICAgICAgICAgKiBEb24ndCB1c2Ug
bWFrZV9jcjMgYmVjYXVzZSAoYSkgd2Uga25vdyBpdCdzIGJlbG93IDRHQiwg
YW5kCi0gICAgICAgICAqIChiKSBpdCdzIG5vdCBuZWNlc3NhcmlseSBwYWdl
LWFsaWduZWQsIGFuZCBtYWtlX2NyMyB0YWtlcyBhIHBmbiAqLwotICAgICAg
ICBBU1NFUlQodmlydF90b19tYWRkcigmdi0+YXJjaC5wYWdpbmcuc2hhZG93
LmwzdGFibGUpIDw9IDB4ZmZmZmZmZTBVTEwpOwotICAgICAgICB2LT5hcmNo
LmNyMyA9IHZpcnRfdG9fbWFkZHIoJnYtPmFyY2gucGFnaW5nLnNoYWRvdy5s
M3RhYmxlKTsKLSNlbHNlCi0gICAgICAgIC8qIDQtb24tNDogSnVzdCB1c2Ug
dGhlIHNoYWRvdyB0b3AtbGV2ZWwgZGlyZWN0bHkgKi8KKyAgICAgICAgQlVJ
TERfQlVHX09OKEdVRVNUX1BBR0lOR19MRVZFTFMgIT0gU0hBRE9XX1BBR0lO
R19MRVZFTFMpOworICAgICAgICAvKiBKdXN0IHVzZSB0aGUgc2hhZG93IHRv
cC1sZXZlbCBkaXJlY3RseSAqLwogICAgICAgICBtYWtlX2NyMyh2LCBwYWdl
dGFibGVfZ2V0X21mbih2LT5hcmNoLnNoYWRvd190YWJsZVswXSkpOwotI2Vu
ZGlmCiAgICAgfQorI2VuZGlmCiAKIAogICAgIC8vLwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0002-x86-shadow-don-t-pass-wrong-L4-MFN-to-guest_walk_tab.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0002-x86-shadow-don-t-pass-wrong-L4-MFN-to-guest_walk_tab.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvc2hhZG93OiBkb24ndCBwYXNzIHdyb25nIEw0IE1GTiB0byBndWVz
dF93YWxrX3RhYmxlcygpCgo2NC1iaXQgUFYgZ3Vlc3QgdXNlciBtb2RlIHJ1
bnMgb24gYSBkaWZmZXJlbnQgTDQgdGFibGUuIE1ha2Ugc3VyZQotIHRoZSBh
Y2Nlc3NlZCBiaXQgZ2V0cyBzZXQgaW4gdGhlIGNvcnJlY3QgdGFibGUgKGFu
ZCBpbiBsb2ctZGlydHkKICBtb2RlIHRoZSBjb3JyZWN0IHBhZ2UgZ2V0cyBt
YXJrZWQgZGlydHkpIGR1cmluZyBndWVzdCB3YWxrcywKLSB0aGUgY29ycmVj
dCB0YWJsZSBnZXRzIGF1ZGl0ZWQgYnkgc2hfYXVkaXRfZ3coKSwKLSBjb3Jy
ZWN0IGluZm8gZ2V0cyBsb2dnZWQgYnkgcHJpbnRfZ3coKS4KClNpZ25lZC1v
ZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3
ZWQtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5j
b20+CkFja2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNp
dHJpeC5jb20+Cm1hc3RlciBjb21taXQ6IGRiMmFmMjNkMTUwNzc2MDVmMjg2
ZDhlZjg2YzhmNWQ5YzFiODMwMmEKbWFzdGVyIGRhdGU6IDIwMTktMDItMjAg
MTc6MDc6MTcgKzAxMDAKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0v
c2hhZG93L211bHRpLmMgYi94ZW4vYXJjaC94ODYvbW0vc2hhZG93L211bHRp
LmMKaW5kZXggMzVlMDhmOTA5Ny4uZTllNGRlZDQyNyAxMDA2NDQKLS0tIGEv
eGVuL2FyY2gveDg2L21tL3NoYWRvdy9tdWx0aS5jCisrKyBiL3hlbi9hcmNo
L3g4Ni9tbS9zaGFkb3cvbXVsdGkuYwpAQCAtMTgwLDcgKzE4MCwxMCBAQCBz
aF93YWxrX2d1ZXN0X3RhYmxlcyhzdHJ1Y3QgdmNwdSAqdiwgdW5zaWduZWQg
bG9uZyB2YSwgd2Fsa190ICpndywKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgSU5WQUxJRF9NRk4sCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHYtPmFyY2gucGFnaW5nLnNoYWRvdy5nbDNlCiAjZWxzZSAvKiAzMiBv
ciA2NCAqLwotICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwYWdldGFi
bGVfZ2V0X21mbih2LT5hcmNoLmd1ZXN0X3RhYmxlKSwKKyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgKCgodi0+YXJjaC5mbGFncyAmIFRGX2tlcm5l
bF9tb2RlKSB8fAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGlz
X3B2XzMyYml0X3ZjcHUodikpCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICA/IHBhZ2V0YWJsZV9nZXRfbWZuKHYtPmFyY2guZ3Vlc3RfdGFibGUp
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA6IHBhZ2V0YWJsZV9n
ZXRfbWZuKHYtPmFyY2guZ3Vlc3RfdGFibGVfdXNlcikpLAogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICB2LT5hcmNoLnBhZ2luZy5zaGFkb3cuZ3Vl
c3RfdnRhYmxlCiAjZW5kaWYKICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgKTsK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0003-x86-shadow-don-t-use-map_domain_page_global-on-paths.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0003-x86-shadow-don-t-use-map_domain_page_global-on-paths.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvc2hhZG93OiBkb24ndCB1c2UgbWFwX2RvbWFpbl9wYWdlX2dsb2Jh
bCgpIG9uIHBhdGhzIHRoYXQgbWF5IG5vdAogZmFpbAoKVGhlIGFzc3VtcHRp
b24gKGFjY29yZGluZyB0byBvbmUgY29tbWVudCkgYW5kIGhvcGUgKGFjY29y
ZGluZyB0bwphbm90aGVyKSB0aGF0IG1hcF9kb21haW5fcGFnZV9nbG9iYWwo
KSBjYW4ndCBmYWlsIGFyZSBib3RoIHdyb25nIG9uCmxhcmdlIGVub3VnaCBz
eXN0ZW1zLiBEbyBhd2F5IHdpdGggdGhlIGd1ZXN0X3Z0YWJsZSBmaWVsZCBh
bHRvZ2V0aGVyLAphbmQgZXN0YWJsaXNoIC8gdGVhciBkb3duIHRoZSBkZXNp
cmVkIG1hcHBpbmcgYXMgbmVjZXNzYXJ5LgoKVGhlIGFsdGVybmF0aXZlcywg
ZGlzY2FyZGVkIGFzIGJlaW5nIHVuZGVzaXJhYmxlLCB3b3VsZCBoYXZlIGJl
ZW4gdG8KZWl0aGVyIGNyYXNoIHRoZSBndWVzdCBpbiBzaF91cGRhdGVfY3Iz
KCkgd2hlbiB0aGUgbWFwcGluZyBmYWlscywgb3IgdG8KYnViYmxlIHVwIGFu
IGVycm9yIGluZGljYXRvciwgd2hpY2ggdXBwZXIgbGF5ZXJzIHdvdWxkIGhh
dmUgYSBoYXJkIHRpbWUKdG8gZGVhbCB3aXRoIChvdGhlciB0aGFuIGFnYWlu
IGJ5IGNyYXNoaW5nIHRoZSBndWVzdCkuCgpTaWduZWQtb2ZmLWJ5OiBKYW4g
QmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBBbmRy
ZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPgpBY2tlZC1i
eTogVGltIERlZWdhbiA8dGltQHhlbi5vcmc+CgpkaWZmIC0tZ2l0IGEveGVu
L2FyY2gveDg2L21tL3NoYWRvdy9tdWx0aS5jIGIveGVuL2FyY2gveDg2L21t
L3NoYWRvdy9tdWx0aS5jCmluZGV4IGU5ZTRkZWQ0MjcuLjRmOGNmZDYyMTIg
MTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvbXVsdGkuYwor
KysgYi94ZW4vYXJjaC94ODYvbW0vc2hhZG93L211bHRpLmMKQEAgLTE3NSwx
OCArMTc1LDIyIEBAIHN0YXRpYyBpbmxpbmUgYm9vbAogc2hfd2Fsa19ndWVz
dF90YWJsZXMoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVkIGxvbmcgdmEsIHdh
bGtfdCAqZ3csCiAgICAgICAgICAgICAgICAgICAgICB1aW50MzJfdCBwZmVj
KQogewotICAgIHJldHVybiBndWVzdF93YWxrX3RhYmxlcyh2LCBwMm1fZ2V0
X2hvc3RwMm0odi0+ZG9tYWluKSwgdmEsIGd3LCBwZmVjLAogI2lmIEdVRVNU
X1BBR0lOR19MRVZFTFMgPT0gMyAvKiBQQUUgKi8KLSAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgSU5WQUxJRF9NRk4sCi0gICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIHYtPmFyY2gucGFnaW5nLnNoYWRvdy5nbDNlCisgICAg
cmV0dXJuIGd1ZXN0X3dhbGtfdGFibGVzKHYsIHAybV9nZXRfaG9zdHAybSh2
LT5kb21haW4pLCB2YSwgZ3csIHBmZWMsCisgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIElOVkFMSURfTUZOLCB2LT5hcmNoLnBhZ2luZy5zaGFkb3cu
Z2wzZSk7CiAjZWxzZSAvKiAzMiBvciA2NCAqLwotICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAoKCh2LT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21v
ZGUpIHx8Ci0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgaXNfcHZf
MzJiaXRfdmNwdSh2KSkKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ID8gcGFnZXRhYmxlX2dldF9tZm4odi0+YXJjaC5ndWVzdF90YWJsZSkKLSAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIDogcGFnZXRhYmxlX2dldF9t
Zm4odi0+YXJjaC5ndWVzdF90YWJsZV91c2VyKSksCi0gICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIHYtPmFyY2gucGFnaW5nLnNoYWRvdy5ndWVzdF92
dGFibGUKKyAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpkID0gdi0+ZG9tYWlu
OworICAgIG1mbl90IHJvb3RfbWZuID0gKCh2LT5hcmNoLmZsYWdzICYgVEZf
a2VybmVsX21vZGUpIHx8IGlzX3B2XzMyYml0X2RvbWFpbihkKQorICAgICAg
ICAgICAgICAgICAgICAgID8gcGFnZXRhYmxlX2dldF9tZm4odi0+YXJjaC5n
dWVzdF90YWJsZSkKKyAgICAgICAgICAgICAgICAgICAgICA6IHBhZ2V0YWJs
ZV9nZXRfbWZuKHYtPmFyY2guZ3Vlc3RfdGFibGVfdXNlcikpOworICAgIHZv
aWQgKnJvb3RfbWFwID0gbWFwX2RvbWFpbl9wYWdlKHJvb3RfbWZuKTsKKyAg
ICBib29sIG9rID0gZ3Vlc3Rfd2Fsa190YWJsZXModiwgcDJtX2dldF9ob3N0
cDJtKGQpLCB2YSwgZ3csIHBmZWMsCisgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHJvb3RfbWZuLCByb290X21hcCk7CisKKyAgICB1bm1hcF9k
b21haW5fcGFnZShyb290X21hcCk7CisKKyAgICByZXR1cm4gb2s7CiAjZW5k
aWYKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgKTsKIH0KIAogLyog
VGhpcyB2YWxpZGF0aW9uIGlzIGNhbGxlZCB3aXRoIGxvY2sgaGVsZCwgYW5k
IGFmdGVyIHdyaXRlIHBlcm1pc3Npb24KQEAgLTIyNiw4ICsyMzAsOSBAQCBz
aGFkb3dfY2hlY2tfZ3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVkIGxv
bmcgdmEsIHdhbGtfdCAqZ3csIGludCB2ZXJzaW9uKQogICAgIHBlcmZjX2lu
Y3Ioc2hhZG93X2NoZWNrX2d3YWxrKTsKICNpZiBHVUVTVF9QQUdJTkdfTEVW
RUxTID49IDMgLyogUEFFIG9yIDY0Li4uICovCiAjaWYgR1VFU1RfUEFHSU5H
X0xFVkVMUyA+PSA0IC8qIDY0LWJpdCBvbmx5Li4uICovCi0gICAgbDRwID0g
KGd1ZXN0X2w0ZV90ICopdi0+YXJjaC5wYWdpbmcuc2hhZG93Lmd1ZXN0X3Z0
YWJsZTsKKyAgICBsNHAgPSBtYXBfZG9tYWluX3BhZ2UoZ3ctPmw0bWZuKTsK
ICAgICBtaXNtYXRjaCB8PSAoZ3ctPmw0ZS5sNCAhPSBsNHBbZ3Vlc3RfbDRf
dGFibGVfb2Zmc2V0KHZhKV0ubDQpOworICAgIHVubWFwX2RvbWFpbl9wYWdl
KGw0cCk7CiAgICAgbDNwID0gbWFwX2RvbWFpbl9wYWdlKGd3LT5sM21mbik7
CiAgICAgbWlzbWF0Y2ggfD0gKGd3LT5sM2UubDMgIT0gbDNwW2d1ZXN0X2wz
X3RhYmxlX29mZnNldCh2YSldLmwzKTsKICAgICB1bm1hcF9kb21haW5fcGFn
ZShsM3ApOwpAQCAtMjM1LDEzICsyNDAsMTEgQEAgc2hhZG93X2NoZWNrX2d3
YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNpZ25lZCBsb25nIHZhLCB3YWxrX3Qg
Kmd3LCBpbnQgdmVyc2lvbikKICAgICBtaXNtYXRjaCB8PSAoZ3ctPmwzZS5s
MyAhPQogICAgICAgICAgICAgICAgICB2LT5hcmNoLnBhZ2luZy5zaGFkb3cu
Z2wzZVtndWVzdF9sM190YWJsZV9vZmZzZXQodmEpXS5sMyk7CiAjZW5kaWYK
KyNlbmRpZgogICAgIGwycCA9IG1hcF9kb21haW5fcGFnZShndy0+bDJtZm4p
OwogICAgIG1pc21hdGNoIHw9IChndy0+bDJlLmwyICE9IGwycFtndWVzdF9s
Ml90YWJsZV9vZmZzZXQodmEpXS5sMik7CiAgICAgdW5tYXBfZG9tYWluX3Bh
Z2UobDJwKTsKLSNlbHNlCi0gICAgbDJwID0gKGd1ZXN0X2wyZV90ICopdi0+
YXJjaC5wYWdpbmcuc2hhZG93Lmd1ZXN0X3Z0YWJsZTsKLSAgICBtaXNtYXRj
aCB8PSAoZ3ctPmwyZS5sMiAhPSBsMnBbZ3Vlc3RfbDJfdGFibGVfb2Zmc2V0
KHZhKV0ubDIpOwotI2VuZGlmCisKICAgICBpZiAoICEoZ3Vlc3RfY2FuX3Vz
ZV9sMl9zdXBlcnBhZ2VzKHYpICYmCiAgICAgICAgICAgIChndWVzdF9sMmVf
Z2V0X2ZsYWdzKGd3LT5sMmUpICYgX1BBR0VfUFNFKSkgKQogICAgIHsKQEAg
LTM4ODQsNyArMzg4Nyw4IEBAIHNoX3VwZGF0ZV9saW5lYXJfZW50cmllcyhz
dHJ1Y3QgdmNwdSAqdikKIH0KIAogCi0vKiBSZW1vdmVzIHZjcHUtPmFyY2gu
cGFnaW5nLnNoYWRvdy5ndWVzdF92dGFibGUgYW5kIHZjcHUtPmFyY2guc2hh
ZG93X3RhYmxlW10uCisvKgorICogUmVtb3ZlcyB2Y3B1LT5hcmNoLnNoYWRv
d190YWJsZVtdLgogICogRG9lcyBhbGwgYXBwcm9wcmlhdGUgbWFuYWdlbWVu
dC9ib29ra2VlcGluZy9yZWZjb3VudGluZy9ldGMuLi4KICAqLwogc3RhdGlj
IHZvaWQKQEAgLTM4OTUsMjMgKzM4OTksNiBAQCBzaF9kZXRhY2hfb2xkX3Rh
YmxlcyhzdHJ1Y3QgdmNwdSAqdikKICAgICBpbnQgaSA9IDA7CiAKICAgICAv
Ly8vCi0gICAgLy8vLyB2Y3B1LT5hcmNoLnBhZ2luZy5zaGFkb3cuZ3Vlc3Rf
dnRhYmxlCi0gICAgLy8vLwotCi0jaWYgR1VFU1RfUEFHSU5HX0xFVkVMUyA9
PSAzCi0gICAgLyogUEFFIGd1ZXN0cyBkb24ndCBoYXZlIGEgbWFwcGluZyBv
ZiB0aGUgZ3Vlc3QgdG9wLWxldmVsIHRhYmxlICovCi0gICAgQVNTRVJUKHYt
PmFyY2gucGFnaW5nLnNoYWRvdy5ndWVzdF92dGFibGUgPT0gTlVMTCk7Ci0j
ZWxzZQotICAgIGlmICggdi0+YXJjaC5wYWdpbmcuc2hhZG93Lmd1ZXN0X3Z0
YWJsZSApCi0gICAgewotICAgICAgICBpZiAoIHNoYWRvd19tb2RlX2V4dGVy
bmFsKGQpIHx8IHNoYWRvd19tb2RlX3RyYW5zbGF0ZShkKSApCi0gICAgICAg
ICAgICB1bm1hcF9kb21haW5fcGFnZV9nbG9iYWwodi0+YXJjaC5wYWdpbmcu
c2hhZG93Lmd1ZXN0X3Z0YWJsZSk7Ci0gICAgICAgIHYtPmFyY2gucGFnaW5n
LnNoYWRvdy5ndWVzdF92dGFibGUgPSBOVUxMOwotICAgIH0KLSNlbmRpZiAv
LyAhTkRFQlVHCi0KLQotICAgIC8vLy8KICAgICAvLy8vIHZjcHUtPmFyY2gu
c2hhZG93X3RhYmxlW10KICAgICAvLy8vCiAKQEAgLTQwNjgsMjggKzQwNTUs
MTAgQEAgc2hfdXBkYXRlX2NyMyhzdHJ1Y3QgdmNwdSAqdiwgaW50IGRvX2xv
Y2tpbmcpCiAjZW5kaWYKICAgICAgICAgZ21mbiA9IHBhZ2V0YWJsZV9nZXRf
bWZuKHYtPmFyY2guZ3Vlc3RfdGFibGUpOwogCi0KLSAgICAvLy8vCi0gICAg
Ly8vLyB2Y3B1LT5hcmNoLnBhZ2luZy5zaGFkb3cuZ3Vlc3RfdnRhYmxlCi0g
ICAgLy8vLwotI2lmIEdVRVNUX1BBR0lOR19MRVZFTFMgPT0gNAotICAgIGlm
ICggc2hhZG93X21vZGVfZXh0ZXJuYWwoZCkgfHwgc2hhZG93X21vZGVfdHJh
bnNsYXRlKGQpICkKLSAgICB7Ci0gICAgICAgIGlmICggdi0+YXJjaC5wYWdp
bmcuc2hhZG93Lmd1ZXN0X3Z0YWJsZSApCi0gICAgICAgICAgICB1bm1hcF9k
b21haW5fcGFnZV9nbG9iYWwodi0+YXJjaC5wYWdpbmcuc2hhZG93Lmd1ZXN0
X3Z0YWJsZSk7Ci0gICAgICAgIHYtPmFyY2gucGFnaW5nLnNoYWRvdy5ndWVz
dF92dGFibGUgPSBtYXBfZG9tYWluX3BhZ2VfZ2xvYmFsKGdtZm4pOwotICAg
ICAgICAvKiBQQUdJTkdfTEVWRUxTPT00IGltcGxpZXMgNjQtYml0LCB3aGlj
aCBtZWFucyB0aGF0Ci0gICAgICAgICAqIG1hcF9kb21haW5fcGFnZV9nbG9i
YWwgY2FuJ3QgZmFpbCAqLwotICAgICAgICBCVUdfT04odi0+YXJjaC5wYWdp
bmcuc2hhZG93Lmd1ZXN0X3Z0YWJsZSA9PSBOVUxMKTsKLSAgICB9Ci0gICAg
ZWxzZQotICAgICAgICB2LT5hcmNoLnBhZ2luZy5zaGFkb3cuZ3Vlc3RfdnRh
YmxlID0gX19saW5lYXJfbDRfdGFibGU7Ci0jZWxpZiBHVUVTVF9QQUdJTkdf
TEVWRUxTID09IDMKKyNpZiBHVUVTVF9QQUdJTkdfTEVWRUxTID09IDMKICAg
ICAgLyogT24gUEFFIGd1ZXN0cyB3ZSBkb24ndCB1c2UgYSBtYXBwaW5nIG9m
IHRoZSBndWVzdCdzIG93biB0b3AtbGV2ZWwKICAgICAgICogdGFibGUuICBX
ZSBjYWNoZSB0aGUgY3VycmVudCBzdGF0ZSBvZiB0aGF0IHRhYmxlIGFuZCBz
aGFkb3cgdGhhdCwKICAgICAgICogdW50aWwgdGhlIG5leHQgQ1IzIHdyaXRl
IG1ha2VzIHVzIHJlZnJlc2ggb3VyIGNhY2hlLiAqLwotICAgICBBU1NFUlQo
di0+YXJjaC5wYWdpbmcuc2hhZG93Lmd1ZXN0X3Z0YWJsZSA9PSBOVUxMKTsK
LQogICAgICBBU1NFUlQoc2hhZG93X21vZGVfZXh0ZXJuYWwoZCkpOwogICAg
ICAvKiBGaW5kIHdoZXJlIGluIHRoZSBwYWdlIHRoZSBsMyB0YWJsZSBpcyAq
LwogICAgICBndWVzdF9pZHggPSBndWVzdF9pbmRleCgodm9pZCAqKXYtPmFy
Y2guaHZtX3ZjcHUuZ3Vlc3RfY3JbM10pOwpAQCAtNDEwMiwxNiArNDA3MSw2
IEBAIHNoX3VwZGF0ZV9jcjMoc3RydWN0IHZjcHUgKnYsIGludCBkb19sb2Nr
aW5nKQogICAgICBmb3IgKCBpID0gMDsgaSA8IDQgOyBpKysgKQogICAgICAg
ICAgdi0+YXJjaC5wYWdpbmcuc2hhZG93LmdsM2VbaV0gPSBnbDNlW2ldOwog
ICAgICB1bm1hcF9kb21haW5fcGFnZShnbDNlKTsKLSNlbGlmIEdVRVNUX1BB
R0lOR19MRVZFTFMgPT0gMgotICAgIEFTU0VSVChzaGFkb3dfbW9kZV9leHRl
cm5hbChkKSk7Ci0gICAgaWYgKCB2LT5hcmNoLnBhZ2luZy5zaGFkb3cuZ3Vl
c3RfdnRhYmxlICkKLSAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2VfZ2xvYmFs
KHYtPmFyY2gucGFnaW5nLnNoYWRvdy5ndWVzdF92dGFibGUpOwotICAgIHYt
PmFyY2gucGFnaW5nLnNoYWRvdy5ndWVzdF92dGFibGUgPSBtYXBfZG9tYWlu
X3BhZ2VfZ2xvYmFsKGdtZm4pOwotICAgIC8qIERvZXMgdGhpcyByZWFsbHkg
bmVlZCBtYXBfZG9tYWluX3BhZ2VfZ2xvYmFsPyAgSGFuZGxlIHRoZQotICAg
ICAqIGVycm9yIHByb3Blcmx5IGlmIHNvLiAqLwotICAgIEJVR19PTih2LT5h
cmNoLnBhZ2luZy5zaGFkb3cuZ3Vlc3RfdnRhYmxlID09IE5VTEwpOyAvKiBY
WFggKi8KLSNlbHNlCi0jZXJyb3IgdGhpcyBzaG91bGQgbmV2ZXIgaGFwcGVu
CiAjZW5kaWYKIAogCmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2
L2RvbWFpbi5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9kb21haW4uaAppbmRl
eCA1YWZhZjZiOWRlLi5hZjAzZWJkNzcyIDEwMDY0NAotLS0gYS94ZW4vaW5j
bHVkZS9hc20teDg2L2RvbWFpbi5oCisrKyBiL3hlbi9pbmNsdWRlL2FzbS14
ODYvZG9tYWluLmgKQEAgLTEzNSw4ICsxMzUsNiBAQCBzdHJ1Y3Qgc2hhZG93
X3ZjcHUgewogICAgIGwzX3BnZW50cnlfdCBsM3RhYmxlWzRdIF9fYXR0cmli
dXRlX18oKF9fYWxpZ25lZF9fKDMyKSkpOwogICAgIC8qIFBBRSBndWVzdHM6
IHBlci12Y3B1IGNhY2hlIG9mIHRoZSB0b3AtbGV2ZWwgKmd1ZXN0KiBlbnRy
aWVzICovCiAgICAgbDNfcGdlbnRyeV90IGdsM2VbNF0gX19hdHRyaWJ1dGVf
XygoX19hbGlnbmVkX18oMzIpKSk7Ci0gICAgLyogTm9uLVBBRSBndWVzdHM6
IHBvaW50ZXIgdG8gZ3Vlc3QgdG9wLWxldmVsIHBhZ2V0YWJsZSAqLwotICAg
IHZvaWQgKmd1ZXN0X3Z0YWJsZTsKICAgICAvKiBMYXN0IE1GTiB0aGF0IHdl
IGVtdWxhdGVkIGEgd3JpdGUgdG8gYXMgdW5zaGFkb3cgaGV1cmlzdGljcy4g
Ki8KICAgICB1bnNpZ25lZCBsb25nIGxhc3RfZW11bGF0ZWRfbWZuX2Zvcl91
bnNoYWRvdzsKICAgICAvKiBNRk4gb2YgdGhlIGxhc3Qgc2hhZG93IHRoYXQg
d2Ugc2hvdCBhIHdyaXRlYWJsZSBtYXBwaW5nIGluICovCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0004-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0004-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGRvbid0IGFsbG93IGNsZWFyaW5nIG9mIFRGX2tlcm5lbF9tb2Rl
IGZvciBvdGhlciB0aGFuIDY0LWJpdCBQVgoKVGhlIGZsYWcgaXMgcmVhbGx5
IG9ubHkgbWVhbnQgZm9yIHRob3NlLCBib3RoIEhWTSBhbmQgMzItYml0IFBW
IHRlbGwKa2VybmVsIGZyb20gdXNlciBtb2RlIGJhc2VkIG9uIENQTC9SUEwu
IFJlbW92ZSB0aGUgYWxsLXF1ZXN0aW9uLW1hcmtzCmNvbW1lbnQgYW5kIGxl
dCdzIGJlIG9uIHRoZSBzYWZlIHNpZGUgaGVyZSBhbmQgYWxzbyBzdXBwcmVz
cyBjbGVhcmluZwpmb3IgMzItYml0IFBWICh0aGlzIGlzbid0IGEgZmFzdCBw
YXRoIGFmdGVyIGFsbCkuCgpSZW1vdmUgbm8gbG9uZ2VyIG5lY2Vzc2FyeSBp
c19wdl8zMmJpdF8qKCkgZnJvbSBzaF91cGRhdGVfY3IzKCkgYW5kCnNoX3dh
bGtfZ3Vlc3RfdGFibGVzKCkuIE5vdGUgdGhhdCBzaGFkb3dfb25lX2JpdF9k
aXNhYmxlKCkgYWxyZWFkeQphc3N1bWVzIHRoZSBuZXcgYmVoYXZpb3IuCgpT
aWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+
ClJldmlld2VkLWJ5OiBXZWkgTGl1IDx3ZWkubGl1MkBjaXRyaXguY29tPgpB
Y2tlZC1ieTogR2VvcmdlIER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBjaXRyaXgu
Y29tPgpBY2tlZC1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNA
Y2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvZG9tYWlu
LmMgYi94ZW4vYXJjaC94ODYvZG9tYWluLmMKaW5kZXggN2FlNzI2NmM1Zi4u
MzQxNTY5Mzk3ZiAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2RvbWFpbi5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9kb21haW4uYwpAQCAtODA4LDkgKzgwOCwx
NSBAQCBpbnQgYXJjaF9zZXRfaW5mb19ndWVzdCgKIAogICAgIHYtPmZwdV9p
bml0aWFsaXNlZCA9ICEhKGZsYWdzICYgVkdDRl9JMzg3X1ZBTElEKTsKIAot
ICAgIHYtPmFyY2guZmxhZ3MgJj0gflRGX2tlcm5lbF9tb2RlOwotICAgIGlm
ICggKGZsYWdzICYgVkdDRl9pbl9rZXJuZWwpIHx8IGlzX2h2bV9kb21haW4o
ZCkvKj8/PyovICkKLSAgICAgICAgdi0+YXJjaC5mbGFncyB8PSBURl9rZXJu
ZWxfbW9kZTsKKyAgICB2LT5hcmNoLmZsYWdzIHw9IFRGX2tlcm5lbF9tb2Rl
OworICAgIGlmICggdW5saWtlbHkoIShmbGFncyAmIFZHQ0ZfaW5fa2VybmVs
KSkgJiYKKyAgICAgICAgIC8qCisgICAgICAgICAgKiBURl9rZXJuZWxfbW9k
ZSBpcyBvbmx5IGFsbG93ZWQgdG8gYmUgY2xlYXIgZm9yIDY0LWJpdCBQVi4g
U2VlCisgICAgICAgICAgKiB1cGRhdGVfY3IzKCksIHNoX3VwZGF0ZV9jcjMo
KSwgc2hfd2Fsa19ndWVzdF90YWJsZXMoKSwgYW5kCisgICAgICAgICAgKiBz
aGFkb3dfb25lX2JpdF9kaXNhYmxlKCkgZm9yIHdoeSB0aGF0IGlzLgorICAg
ICAgICAgICovCisgICAgICAgICAhaXNfaHZtX2RvbWFpbihkKSAmJiAhaXNf
cHZfMzJiaXRfZG9tYWluKGQpICkKKyAgICAgICAgdi0+YXJjaC5mbGFncyAm
PSB+VEZfa2VybmVsX21vZGU7CiAKICAgICB2LT5hcmNoLnZnY19mbGFncyA9
IGZsYWdzOwogCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0vc2hhZG93
L211bHRpLmMgYi94ZW4vYXJjaC94ODYvbW0vc2hhZG93L211bHRpLmMKaW5k
ZXggNGY4Y2ZkNjIxMi4uNzM0ZGMxNDMyMiAxMDA2NDQKLS0tIGEveGVuL2Fy
Y2gveDg2L21tL3NoYWRvdy9tdWx0aS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9t
bS9zaGFkb3cvbXVsdGkuYwpAQCAtMTgwLDcgKzE4MCw3IEBAIHNoX3dhbGtf
Z3Vlc3RfdGFibGVzKHN0cnVjdCB2Y3B1ICp2LCB1bnNpZ25lZCBsb25nIHZh
LCB3YWxrX3QgKmd3LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICBJ
TlZBTElEX01GTiwgdi0+YXJjaC5wYWdpbmcuc2hhZG93LmdsM2UpOwogI2Vs
c2UgLyogMzIgb3IgNjQgKi8KICAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpk
ID0gdi0+ZG9tYWluOwotICAgIG1mbl90IHJvb3RfbWZuID0gKCh2LT5hcmNo
LmZsYWdzICYgVEZfa2VybmVsX21vZGUpIHx8IGlzX3B2XzMyYml0X2RvbWFp
bihkKQorICAgIG1mbl90IHJvb3RfbWZuID0gKHYtPmFyY2guZmxhZ3MgJiBU
Rl9rZXJuZWxfbW9kZQogICAgICAgICAgICAgICAgICAgICAgID8gcGFnZXRh
YmxlX2dldF9tZm4odi0+YXJjaC5ndWVzdF90YWJsZSkKICAgICAgICAgICAg
ICAgICAgICAgICA6IHBhZ2V0YWJsZV9nZXRfbWZuKHYtPmFyY2guZ3Vlc3Rf
dGFibGVfdXNlcikpOwogICAgIHZvaWQgKnJvb3RfbWFwID0gbWFwX2RvbWFp
bl9wYWdlKHJvb3RfbWZuKTsKQEAgLTQwNDksNyArNDA0OSw3IEBAIHNoX3Vw
ZGF0ZV9jcjMoc3RydWN0IHZjcHUgKnYsIGludCBkb19sb2NraW5nKQogICAg
ICAgICAgICAgICAgICAgdiwgKHVuc2lnbmVkIGxvbmcpcGFnZXRhYmxlX2dl
dF9wZm4odi0+YXJjaC5ndWVzdF90YWJsZSkpOwogCiAjaWYgR1VFU1RfUEFH
SU5HX0xFVkVMUyA9PSA0Ci0gICAgaWYgKCAhKHYtPmFyY2guZmxhZ3MgJiBU
Rl9rZXJuZWxfbW9kZSkgJiYgIWlzX3B2XzMyYml0X2RvbWFpbihkKSApCisg
ICAgaWYgKCAhKHYtPmFyY2guZmxhZ3MgJiBURl9rZXJuZWxfbW9kZSkgKQog
ICAgICAgICBnbWZuID0gcGFnZXRhYmxlX2dldF9tZm4odi0+YXJjaC5ndWVz
dF90YWJsZV91c2VyKTsKICAgICBlbHNlCiAjZW5kaWYK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0005-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0005-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHNwbGl0IEw0IGFuZCBMMyBwYXJ0cyBvZiB0aGUgd2FsayBv
dXQgb2YgZG9fcGFnZV93YWxrKCkKClRoZSBMMyBvbmUgYXQgbGVhc3QgaXMg
Z29pbmcgdG8gYmUgcmUtdXNlZCBieSBhIHN1YnNlcXVlbnQgcGF0Y2gsIGFu
ZApzcGxpdHRpbmcgdGhlIEw0IG9uZSB0aGVuIGFzIHdlbGwgc2VlbXMgb25s
eSBuYXR1cmFsLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmll
d2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5j
b20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVy
M0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94ODZf
NjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCAzNGNk
ODQ1N2NmLi5jMDA4OTk5Zjk2IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
eDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCkBA
IC00NCwyNiArNDQsNDcgQEAgdW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkg
bTJwX2NvbXBhdF92c3RhcnQgPSBfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRf
U1RBUlQ7CiAKIGwyX3BnZW50cnlfdCAqY29tcGF0X2lkbGVfcGdfdGFibGVf
bDI7CiAKLXZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwgdW5z
aWduZWQgbG9uZyBhZGRyKQorc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbih2LT5hcmNoLmd1ZXN0X3RhYmxlKTsKLSAgICBsNF9wZ2VudHJ5X3Qg
bDRlLCAqbDR0OwotICAgIGwzX3BnZW50cnlfdCBsM2UsICpsM3Q7Ci0gICAg
bDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5X3QgbDFl
LCAqbDF0OworICAgIHVuc2lnbmVkIGxvbmcgbWZuID0gcGFnZXRhYmxlX2dl
dF9wZm4ocm9vdCk7CisgICAgbDRfcGdlbnRyeV90ICpsNHQsIGw0ZTsKIAot
ICAgIGlmICggIWlzX3B2X3ZjcHUodikgfHwgIWlzX2Nhbm9uaWNhbF9hZGRy
ZXNzKGFkZHIpICkKLSAgICAgICAgcmV0dXJuIE5VTEw7CisgICAgaWYgKCAh
aXNfY2Fub25pY2FsX2FkZHJlc3MoYWRkcikgKQorICAgICAgICByZXR1cm4g
bDRlX2VtcHR5KCk7CiAKICAgICBsNHQgPSBtYXBfZG9tYWluX3BhZ2UoX21m
bihtZm4pKTsKICAgICBsNGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIp
XTsKICAgICB1bm1hcF9kb21haW5fcGFnZShsNHQpOworCisgICAgcmV0dXJu
IGw0ZTsKK30KKworc3RhdGljIGwzX3BnZW50cnlfdCBwYWdlX3dhbGtfZ2V0
X2wzZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFkZHIpCit7
CisgICAgbDRfcGdlbnRyeV90IGw0ZSA9IHBhZ2Vfd2Fsa19nZXRfbDRlKHJv
b3QsIGFkZHIpOworICAgIGwzX3BnZW50cnlfdCAqbDN0LCBsM2U7CisKICAg
ICBpZiAoICEobDRlX2dldF9mbGFncyhsNGUpICYgX1BBR0VfUFJFU0VOVCkg
KQotICAgICAgICByZXR1cm4gTlVMTDsKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOwogCiAgICAgbDN0ID0gbWFwX2wzdF9mcm9tX2w0ZShsNGUpOwog
ICAgIGwzZSA9IGwzdFtsM190YWJsZV9vZmZzZXQoYWRkcildOwogICAgIHVu
bWFwX2RvbWFpbl9wYWdlKGwzdCk7CisKKyAgICByZXR1cm4gbDNlOworfQor
Cit2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcikKK3sKKyAgICBsM19wZ2VudHJ5X3QgbDNlOworICAgIGwy
X3BnZW50cnlfdCBsMmUsICpsMnQ7CisgICAgbDFfcGdlbnRyeV90IGwxZSwg
KmwxdDsKKyAgICB1bnNpZ25lZCBsb25nIG1mbjsKKworICAgIGlmICggIWlz
X3B2X3ZjcHUodikgKQorICAgICAgICByZXR1cm4gTlVMTDsKKworICAgIGwz
ZSA9IHBhZ2Vfd2Fsa19nZXRfbDNlKHYtPmFyY2guZ3Vlc3RfdGFibGUsIGFk
ZHIpOwogICAgIG1mbiA9IGwzZV9nZXRfcGZuKGwzZSk7CiAgICAgaWYgKCAh
KGwzZV9nZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5f
dmFsaWQoX21mbihtZm4pKSApCiAgICAgICAgIHJldHVybiBOVUxMOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0006-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0006-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGNoZWNrIHBhZ2UgdHlwZXMgaW4gZG9fcGFnZV93YWxrKCkK
CkZvciBwYWdlIHRhYmxlIGVudHJpZXMgcmVhZCB0byBiZSBndWFyYW50ZWVk
IHZhbGlkLCB0cmFuc2llbnRseSBsb2NraW5nCnRoZSBwYWdlcyBhbmQgdmFs
aWRhdGluZyB0aGVpciB0eXBlcyBpcyBuZWNlc3NhcnkuIE5vdGUgdGhhdCBn
dWVzdCB1c2UKb2YgbGluZWFyIHBhZ2UgdGFibGVzIGlzIGludGVudGlvbmFs
bHkgbm90IHRha2VuIGludG8gYWNjb3VudCBoZXJlLCBhcwpvcmRpbmFyeSBk
YXRhIChndWVzdCBzdGFja3MpIGNhbid0IHBvc3NpYmx5IGxpdmUgaW5zaWRl
IHBhZ2UgdGFibGVzLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWdu
ZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJl
dmlld2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJp
eC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94
ODZfNjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCBj
MDA4OTk5Zjk2Li43YmNkNmViZTMxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYveDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5j
CkBAIC00NiwxNSArNDYsMjkgQEAgbDJfcGdlbnRyeV90ICpjb21wYXRfaWRs
ZV9wZ190YWJsZV9sMjsKIAogc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbihyb290KTsKLSAgICBsNF9wZ2VudHJ5X3QgKmw0dCwgbDRlOworICAg
IG1mbl90IG1mbiA9IHBhZ2V0YWJsZV9nZXRfbWZuKHJvb3QpOworICAgIC8q
IGN1cnJlbnQncyByb290IHBhZ2UgdGFibGUgY2FuJ3QgZGlzYXBwZWFyIHVu
ZGVyIG91ciBmZWV0LiAqLworICAgIGJvb2wgbmVlZF9sb2NrID0gIW1mbl9l
cShtZm4sIHBhZ2V0YWJsZV9nZXRfbWZuKGN1cnJlbnQtPmFyY2guZ3Vlc3Rf
dGFibGUpKTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwZzsKKyAgICBsNF9w
Z2VudHJ5X3QgbDRlID0gbDRlX2VtcHR5KCk7CiAKICAgICBpZiAoICFpc19j
YW5vbmljYWxfYWRkcmVzcyhhZGRyKSApCiAgICAgICAgIHJldHVybiBsNGVf
ZW1wdHkoKTsKIAotICAgIGw0dCA9IG1hcF9kb21haW5fcGFnZShfbWZuKG1m
bikpOwotICAgIGw0ZSA9IGw0dFtsNF90YWJsZV9vZmZzZXQoYWRkcildOwot
ICAgIHVubWFwX2RvbWFpbl9wYWdlKGw0dCk7CisgICAgcGcgPSBtZm5fdG9f
cGFnZShtZm5feChtZm4pKTsKKyAgICBpZiAoIG5lZWRfbG9jayAmJiAhcGFn
ZV9sb2NrKHBnKSApCisgICAgICAgIHJldHVybiBsNGVfZW1wdHkoKTsKKwor
ICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90eXBlX21h
c2spID09IFBHVF9sNF9wYWdlX3RhYmxlICkKKyAgICB7CisgICAgICAgIGw0
X3BnZW50cnlfdCAqbDR0ID0gbWFwX2RvbWFpbl9wYWdlKG1mbik7CisKKyAg
ICAgICAgbDRlID0gbDR0W2w0X3RhYmxlX29mZnNldChhZGRyKV07CisgICAg
ICAgIHVubWFwX2RvbWFpbl9wYWdlKGw0dCk7CisgICAgfQorCisgICAgaWYg
KCBuZWVkX2xvY2sgKQorICAgICAgICBwYWdlX3VubG9jayhwZyk7CiAKICAg
ICByZXR1cm4gbDRlOwogfQpAQCAtNjIsMTQgKzc2LDI2IEBAIHN0YXRpYyBs
NF9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sNGUocGFnZXRhYmxlX3Qgcm9v
dCwgdW5zaWduZWQgbG9uZyBhZGRyKQogc3RhdGljIGwzX3BnZW50cnlfdCBw
YWdlX3dhbGtfZ2V0X2wzZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBs
b25nIGFkZHIpCiB7CiAgICAgbDRfcGdlbnRyeV90IGw0ZSA9IHBhZ2Vfd2Fs
a19nZXRfbDRlKHJvb3QsIGFkZHIpOwotICAgIGwzX3BnZW50cnlfdCAqbDN0
LCBsM2U7CisgICAgbWZuX3QgbWZuID0gbDRlX2dldF9tZm4obDRlKTsKKyAg
ICBzdHJ1Y3QgcGFnZV9pbmZvICpwZzsKKyAgICBsM19wZ2VudHJ5X3QgbDNl
ID0gbDNlX2VtcHR5KCk7CiAKICAgICBpZiAoICEobDRlX2dldF9mbGFncyhs
NGUpICYgX1BBR0VfUFJFU0VOVCkgKQogICAgICAgICByZXR1cm4gbDNlX2Vt
cHR5KCk7CiAKLSAgICBsM3QgPSBtYXBfbDN0X2Zyb21fbDRlKGw0ZSk7Ci0g
ICAgbDNlID0gbDN0W2wzX3RhYmxlX29mZnNldChhZGRyKV07Ci0gICAgdW5t
YXBfZG9tYWluX3BhZ2UobDN0KTsKKyAgICBwZyA9IG1mbl90b19wYWdlKG1m
bl94KG1mbikpOworICAgIGlmICggIXBhZ2VfbG9jayhwZykgKQorICAgICAg
ICByZXR1cm4gbDNlX2VtcHR5KCk7CisKKyAgICBpZiAoIChwZy0+dS5pbnVz
ZS50eXBlX2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1RfbDNfcGFnZV90
YWJsZSApCisgICAgeworICAgICAgICBsM19wZ2VudHJ5X3QgKmwzdCA9IG1h
cF9kb21haW5fcGFnZShtZm4pOworCisgICAgICAgIGwzZSA9IGwzdFtsM190
YWJsZV9vZmZzZXQoYWRkcildOworICAgICAgICB1bm1hcF9kb21haW5fcGFn
ZShsM3QpOworICAgIH0KKworICAgIHBhZ2VfdW5sb2NrKHBnKTsKIAogICAg
IHJldHVybiBsM2U7CiB9CkBAIC03Nyw0NCArMTAzLDY3IEBAIHN0YXRpYyBs
M19wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sM2UocGFnZXRhYmxlX3Qgcm9v
dCwgdW5zaWduZWQgbG9uZyBhZGRyKQogdm9pZCAqZG9fcGFnZV93YWxrKHN0
cnVjdCB2Y3B1ICp2LCB1bnNpZ25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNf
cGdlbnRyeV90IGwzZTsKLSAgICBsMl9wZ2VudHJ5X3QgbDJlLCAqbDJ0Owot
ICAgIGwxX3BnZW50cnlfdCBsMWUsICpsMXQ7Ci0gICAgdW5zaWduZWQgbG9u
ZyBtZm47CisgICAgbDJfcGdlbnRyeV90IGwyZSA9IGwyZV9lbXB0eSgpOwor
ICAgIGwxX3BnZW50cnlfdCBsMWUgPSBsMWVfZW1wdHkoKTsKKyAgICBtZm5f
dCBtZm47CisgICAgc3RydWN0IHBhZ2VfaW5mbyAqcGc7CiAKICAgICBpZiAo
ICFpc19wdl92Y3B1KHYpICkKICAgICAgICAgcmV0dXJuIE5VTEw7CiAKICAg
ICBsM2UgPSBwYWdlX3dhbGtfZ2V0X2wzZSh2LT5hcmNoLmd1ZXN0X3RhYmxl
LCBhZGRyKTsKLSAgICBtZm4gPSBsM2VfZ2V0X3BmbihsM2UpOwotICAgIGlm
ICggIShsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fCAh
bWZuX3ZhbGlkKF9tZm4obWZuKSkgKQorICAgIG1mbiA9IGwzZV9nZXRfbWZu
KGwzZSk7CisgICAgaWYgKCAhKGwzZV9nZXRfZmxhZ3MobDNlKSAmIF9QQUdF
X1BSRVNFTlQpIHx8ICFtZm5fdmFsaWQobWZuKSApCiAgICAgICAgIHJldHVy
biBOVUxMOwogICAgIGlmICggKGwzZV9nZXRfZmxhZ3MobDNlKSAmIF9QQUdF
X1BTRSkgKQogICAgIHsKLSAgICAgICAgbWZuICs9IFBGTl9ET1dOKGFkZHIg
JiAoKDFVTCA8PCBMM19QQUdFVEFCTEVfU0hJRlQpIC0gMSkpOworICAgICAg
ICBtZm4gPSBtZm5fYWRkKG1mbiwgUEZOX0RPV04oYWRkciAmICgoMVVMIDw8
IEwzX1BBR0VUQUJMRV9TSElGVCkgLSAxKSkpOwogICAgICAgICBnb3RvIHJl
dDsKICAgICB9CiAKLSAgICBsMnQgPSBtYXBfZG9tYWluX3BhZ2UoX21mbiht
Zm4pKTsKLSAgICBsMmUgPSBsMnRbbDJfdGFibGVfb2Zmc2V0KGFkZHIpXTsK
LSAgICB1bm1hcF9kb21haW5fcGFnZShsMnQpOwotICAgIG1mbiA9IGwyZV9n
ZXRfcGZuKGwyZSk7Ci0gICAgaWYgKCAhKGwyZV9nZXRfZmxhZ3MobDJlKSAm
IF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFsaWQoX21mbihtZm4pKSApCisg
ICAgcGcgPSBtZm5fdG9fcGFnZShtZm5feChtZm4pKTsKKyAgICBpZiAoICFw
YWdlX2xvY2socGcpICkKKyAgICAgICAgcmV0dXJuIE5VTEw7CisKKyAgICBp
ZiAoIChwZy0+dS5pbnVzZS50eXBlX2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9
PSBQR1RfbDJfcGFnZV90YWJsZSApCisgICAgeworICAgICAgICBjb25zdCBs
Ml9wZ2VudHJ5X3QgKmwydCA9IG1hcF9kb21haW5fcGFnZShtZm4pOworCisg
ICAgICAgIGwyZSA9IGwydFtsMl90YWJsZV9vZmZzZXQoYWRkcildOworICAg
ICAgICB1bm1hcF9kb21haW5fcGFnZShsMnQpOworICAgIH0KKworICAgIHBh
Z2VfdW5sb2NrKHBnKTsKKworICAgIG1mbiA9IGwyZV9nZXRfbWZuKGwyZSk7
CisgICAgaWYgKCAhKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdFX1BSRVNF
TlQpIHx8ICFtZm5fdmFsaWQobWZuKSApCiAgICAgICAgIHJldHVybiBOVUxM
OwogICAgIGlmICggKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdFX1BTRSkg
KQogICAgIHsKLSAgICAgICAgbWZuICs9IFBGTl9ET1dOKGFkZHIgJiAoKDFV
TCA8PCBMMl9QQUdFVEFCTEVfU0hJRlQpIC0gMSkpOworICAgICAgICBtZm4g
PSBtZm5fYWRkKG1mbiwgUEZOX0RPV04oYWRkciAmICgoMVVMIDw8IEwyX1BB
R0VUQUJMRV9TSElGVCkgLSAxKSkpOwogICAgICAgICBnb3RvIHJldDsKICAg
ICB9CiAKLSAgICBsMXQgPSBtYXBfZG9tYWluX3BhZ2UoX21mbihtZm4pKTsK
LSAgICBsMWUgPSBsMXRbbDFfdGFibGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1
bm1hcF9kb21haW5fcGFnZShsMXQpOwotICAgIG1mbiA9IGwxZV9nZXRfcGZu
KGwxZSk7Ci0gICAgaWYgKCAhKGwxZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdF
X1BSRVNFTlQpIHx8ICFtZm5fdmFsaWQoX21mbihtZm4pKSApCisgICAgcGcg
PSBtZm5fdG9fcGFnZShtZm5feChtZm4pKTsKKyAgICBpZiAoICFwYWdlX2xv
Y2socGcpICkKKyAgICAgICAgcmV0dXJuIE5VTEw7CisKKyAgICBpZiAoIChw
Zy0+dS5pbnVzZS50eXBlX2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1Rf
bDFfcGFnZV90YWJsZSApCisgICAgeworICAgICAgICBjb25zdCBsMV9wZ2Vu
dHJ5X3QgKmwxdCA9IG1hcF9kb21haW5fcGFnZShtZm4pOworCisgICAgICAg
IGwxZSA9IGwxdFtsMV90YWJsZV9vZmZzZXQoYWRkcildOworICAgICAgICB1
bm1hcF9kb21haW5fcGFnZShsMXQpOworICAgIH0KKworICAgIHBhZ2VfdW5s
b2NrKHBnKTsKKworICAgIG1mbiA9IGwxZV9nZXRfbWZuKGwxZSk7CisgICAg
aWYgKCAhKGwxZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX1BSRVNFTlQpIHx8
ICFtZm5fdmFsaWQobWZuKSApCiAgICAgICAgIHJldHVybiBOVUxMOwogCiAg
cmV0OgotICAgIHJldHVybiBtYXBfZG9tYWluX3BhZ2UoX21mbihtZm4pKSAr
IChhZGRyICYgflBBR0VfTUFTSyk7CisgICAgcmV0dXJuIG1hcF9kb21haW5f
cGFnZShtZm4pICsgKGFkZHIgJiB+UEFHRV9NQVNLKTsKIH0KIAogLyoK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0007-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0007-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBt
YXBfZ3Vlc3RfbDFlKCkKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBl
OiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXIt
RW5jb2Rpbmc6IDhiaXQKClJlcGxhY2UgdGhlIGxpbmVhciBMMiB0YWJsZSBh
Y2Nlc3MgYnkgYW4gYWN0dWFsIHBhZ2Ugd2Fsay4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMjg2LgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwppbmRleCA1NzQ5MGUxOGIwLi5lY2Q5ZGM0NjJlIDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwpAQCAtNDYsMTEgKzQ2LDE0IEBAIGwxX3BnZW50cnlfdCAqbWFw
X2d1ZXN0X2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhciwgbWZuX3QgKmdsMW1m
bikKICAgICBpZiAoIHVubGlrZWx5KCFfX2FkZHJfb2sobGluZWFyKSkgKQog
ICAgICAgICByZXR1cm4gTlVMTDsKIAotICAgIC8qIEZpbmQgdGhpcyBsMWUg
YW5kIGl0cyBlbmNsb3NpbmcgbDFtZm4gaW4gdGhlIGxpbmVhciBtYXAuICov
Ci0gICAgaWYgKCBfX2NvcHlfZnJvbV91c2VyKCZsMmUsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMl90YWJsZVtsMl9saW5lYXJf
b2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAgICAgICAgICAgICAgICBz
aXplb2YobDJfcGdlbnRyeV90KSkgKQorICAgIGlmICggdW5saWtlbHkoIShj
dXJyZW50LT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpKSApCisgICAg
eworICAgICAgICBBU1NFUlRfVU5SRUFDSEFCTEUoKTsKICAgICAgICAgcmV0
dXJuIE5VTEw7CisgICAgfQorCisgICAgLyogRmluZCB0aGlzIGwxZSBhbmQg
aXRzIGVuY2xvc2luZyBsMW1mbi4gKi8KKyAgICBsMmUgPSBwYWdlX3dhbGtf
Z2V0X2wyZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwog
CiAgICAgLyogQ2hlY2sgZmxhZ3MgdGhhdCBpdCB3aWxsIGJlIHNhZmUgdG8g
cmVhZCB0aGUgbDFlLiAqLwogICAgIGlmICggKGwyZV9nZXRfZmxhZ3MobDJl
KSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUFNFKSkgIT0gX1BBR0VfUFJF
U0VOVCApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMg
Yi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggN2JjZDZlYmUzMS4u
NzYzZjY0MWY2MSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9t
bS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtMTAwLDYg
KzEwMCwzNCBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fsa19nZXRf
bDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRkcikKICAg
ICByZXR1cm4gbDNlOwogfQogCitsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dl
dF9sMmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQor
eworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBwYWdlX3dhbGtfZ2V0X2wzZShy
b290LCBhZGRyKTsKKyAgICBtZm5fdCBtZm4gPSBsM2VfZ2V0X21mbihsM2Up
OworICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOworICAgIGwyX3BnZW50cnlf
dCBsMmUgPSBsMmVfZW1wdHkoKTsKKworICAgIGlmICggIShsM2VfZ2V0X2Zs
YWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fAorICAgICAgICAgKGwzZV9n
ZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BTRSkgKQorICAgICAgICByZXR1cm4g
bDJlX2VtcHR5KCk7CisKKyAgICBwZyA9IG1mbl90b19wYWdlKG1mbl94KG1m
bikpOworICAgIGlmICggIXBhZ2VfbG9jayhwZykgKQorICAgICAgICByZXR1
cm4gbDJlX2VtcHR5KCk7CisKKyAgICBpZiAoIChwZy0+dS5pbnVzZS50eXBl
X2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1RfbDJfcGFnZV90YWJsZSAp
CisgICAgeworICAgICAgICBsMl9wZ2VudHJ5X3QgKmwydCA9IG1hcF9kb21h
aW5fcGFnZShtZm4pOworCisgICAgICAgIGwyZSA9IGwydFtsMl90YWJsZV9v
ZmZzZXQoYWRkcildOworICAgICAgICB1bm1hcF9kb21haW5fcGFnZShsMnQp
OworICAgIH0KKworICAgIHBhZ2VfdW5sb2NrKHBnKTsKKworICAgIHJldHVy
biBsMmU7Cit9CisKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAq
diwgdW5zaWduZWQgbG9uZyBhZGRyKQogewogICAgIGwzX3BnZW50cnlfdCBs
M2U7CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKaW5kZXggNzA0MzQ1MzM1Yy4uNWMy
MTAyNzVmNyAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5o
CisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaApAQCAtNTkzLDcgKzU5
Myw5IEBAIHZvaWQgYXVkaXRfZG9tYWlucyh2b2lkKTsKIHZvaWQgbWFrZV9j
cjMoc3RydWN0IHZjcHUgKnYsIG1mbl90IG1mbik7CiB2b2lkIHVwZGF0ZV9j
cjMoc3RydWN0IHZjcHUgKnYpOwogaW50IHZjcHVfZGVzdHJveV9wYWdldGFi
bGVzKHN0cnVjdCB2Y3B1ICopOworCiB2b2lkICpkb19wYWdlX3dhbGsoc3Ry
dWN0IHZjcHUgKnYsIHVuc2lnbmVkIGxvbmcgYWRkcik7CitsMl9wZ2VudHJ5
X3QgcGFnZV93YWxrX2dldF9sMmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWdu
ZWQgbG9uZyBhZGRyKTsKIAogaW50IF9fc3luY19sb2NhbF9leGVjc3RhdGUo
dm9pZCk7CiAK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0008-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0008-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBn
dWVzdF9nZXRfZWZmX2tlcm5fbDFlKCkKCkZpcnN0IG9mIGFsbCBkcm9wIGd1
ZXN0X2dldF9lZmZfbDFlKCkgZW50aXJlbHkgLSB0aGVyZSdzIG5vIGFjdHVh
bCB1c2VyCm9mIGl0OiBwdl9yb19wYWdlX2ZhdWx0KCkgaGFzIGEgZ3Vlc3Rf
a2VybmVsX21vZGUoKSBjb25kaXRpb25hbCBhcm91bmQKaXRzIG9ubHkgY2Fs
bCBzaXRlLgoKVGhlbiByZXBsYWNlIHRoZSBsaW5lYXIgTDEgdGFibGUgYWNj
ZXNzIGJ5IGFuIGFjdHVhbCBwYWdlIHdhbGsuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4gPGphbm5oQGdvb2ds
ZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5k
dW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVu
L2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwppbmRl
eCBlY2Q5ZGM0NjJlLi40ZWNkNzA3MDYxIDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwpAQCAt
NjUsMjcgKzY1LDYgQEAgbDFfcGdlbnRyeV90ICptYXBfZ3Vlc3RfbDFlKHVu
c2lnbmVkIGxvbmcgbGluZWFyLCBtZm5fdCAqZ2wxbWZuKQogfQogCiAvKgot
ICogUmVhZCB0aGUgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgYWRkcmVz
cywgZnJvbSB0aGUga2VybmVsLW1vZGUKLSAqIHBhZ2UgdGFibGVzLgotICov
Ci1zdGF0aWMgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9sMWUo
dW5zaWduZWQgbG9uZyBsaW5lYXIpCi17Ci0gICAgc3RydWN0IHZjcHUgKmN1
cnIgPSBjdXJyZW50OwotICAgIGNvbnN0IGJvb2wgdXNlcl9tb2RlID0gIShj
dXJyLT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpOwotICAgIGwxX3Bn
ZW50cnlfdCBsMWU7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0gICAgICAg
IHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIGwxZSA9IGd1ZXN0X2dl
dF9lZmZfbDFlKGxpbmVhcik7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0g
ICAgICAgIHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIHJldHVybiBs
MWU7Ci19Ci0KLS8qCiAgKiBNYXAgYSBndWVzdCdzIExEVCBwYWdlIChjb3Zl
cmluZyB0aGUgYnl0ZSBhdCBAb2Zmc2V0IGZyb20gc3RhcnQgb2YgdGhlIExE
VCkKICAqIGludG8gWGVuJ3MgdmlydHVhbCByYW5nZS4gIFJldHVybnMgdHJ1
ZSBpZiB0aGUgbWFwcGluZyBjaGFuZ2VkLCBmYWxzZQogICogb3RoZXJ3aXNl
LgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmggYi94ZW4vYXJj
aC94ODYvcHYvbW0uaAppbmRleCA5NzYyMDliYTRjLi5jYzRlZTFhZmZiIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uaAorKysgYi94ZW4vYXJj
aC94ODYvcHYvbW0uaApAQCAtNSwxOSArNSwxOSBAQCBsMV9wZ2VudHJ5X3Qg
Km1hcF9ndWVzdF9sMWUodW5zaWduZWQgbG9uZyBsaW5lYXIsIG1mbl90ICpn
bDFtZm4pOwogCiBpbnQgbmV3X2d1ZXN0X2NyMyhtZm5fdCBtZm4pOwogCi0v
KiBSZWFkIGEgUFYgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgbGluZWFy
IGFkZHJlc3MuICovCi1zdGF0aWMgaW5saW5lIGwxX3BnZW50cnlfdCBndWVz
dF9nZXRfZWZmX2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhcikKKy8qCisgKiBS
ZWFkIHRoZSBndWVzdCdzIGwxZSB0aGF0IG1hcHMgdGhpcyBhZGRyZXNzLCBm
cm9tIHRoZSBrZXJuZWwtbW9kZQorICogcGFnZSB0YWJsZXMuCisgKi8KK3N0
YXRpYyBpbmxpbmUgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9s
MWUodW5zaWduZWQgbG9uZyBsaW5lYXIpCiB7Ci0gICAgbDFfcGdlbnRyeV90
IGwxZTsKKyAgICBsMV9wZ2VudHJ5X3QgbDFlID0gbDFlX2VtcHR5KCk7CiAK
ICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShjdXJyZW50LT5k
b21haW4pKTsKICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX2V4dGVybmFsKGN1
cnJlbnQtPmRvbWFpbikpOwogCi0gICAgaWYgKCB1bmxpa2VseSghX19hZGRy
X29rKGxpbmVhcikpIHx8Ci0gICAgICAgICBfX2NvcHlfZnJvbV91c2VyKCZs
MWUsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMV90
YWJsZVtsMV9saW5lYXJfb2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICBzaXplb2YobDFfcGdlbnRyeV90KSkgKQotICAgICAg
ICBsMWUgPSBsMWVfZW1wdHkoKTsKKyAgICBpZiAoIGxpa2VseShfX2FkZHJf
b2sobGluZWFyKSkgKQorICAgICAgICBsMWUgPSBwYWdlX3dhbGtfZ2V0X2wx
ZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwogCiAgICAg
cmV0dXJuIGwxZTsKIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9y
by1wYWdlLWZhdWx0LmMgYi94ZW4vYXJjaC94ODYvcHYvcm8tcGFnZS1mYXVs
dC5jCmluZGV4IDYyMmJiN2RmZjAuLmFiNmZmZjcwMDggMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9wdi9yby1wYWdlLWZhdWx0LmMKKysrIGIveGVuL2Fy
Y2gveDg2L3B2L3JvLXBhZ2UtZmF1bHQuYwpAQCAtMzQ1LDcgKzM0NSw3IEBA
IGludCBwdl9yb19wYWdlX2ZhdWx0KHVuc2lnbmVkIGxvbmcgYWRkciwgc3Ry
dWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCiAgICAgYm9vbCBtbWlvX3JvOwog
CiAgICAgLyogQXR0ZW1wdCB0byByZWFkIHRoZSBQVEUgdGhhdCBtYXBzIHRo
ZSBWQSBiZWluZyBhY2Nlc3NlZC4gKi8KLSAgICBwdGUgPSBndWVzdF9nZXRf
ZWZmX2wxZShhZGRyKTsKKyAgICBwdGUgPSBndWVzdF9nZXRfZWZmX2tlcm5f
bDFlKGFkZHIpOwogCiAgICAgLyogV2UgYXJlIG9ubHkgbG9va2luZyBmb3Ig
cmVhZC1vbmx5IG1hcHBpbmdzICovCiAgICAgaWYgKCAoKGwxZV9nZXRfZmxh
Z3MocHRlKSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUlcpKSAhPSBfUEFH
RV9QUkVTRU5UKSApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0
L21tLmMgYi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggNzYzZjY0
MWY2MS4uZGNmNWNjMTU4NiAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl82NC9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAt
MTI4LDYgKzEyOCw2MiBAQCBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9s
MmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQogICAg
IHJldHVybiBsMmU7CiB9CiAKKy8qCisgKiBGb3Igbm93IG5vICJzZXRfYWNj
ZXNzZWQiIHBhcmFtZXRlciwgYXMgYWxsIGNhbGxlcnMgd2FudCBpdCBzZXQg
dG8gdHJ1ZS4KKyAqIEZvciBub3cgYWxzbyBubyAic2V0X2RpcnR5IiBwYXJh
bWV0ZXIsIGFzIGFsbCBjYWxsZXJzIGRlYWwgd2l0aCByL28KKyAqIG1hcHBp
bmdzLCBhbmQgd2UgZG9uJ3Qgd2FudCB0byBzZXQgdGhlIGRpcnR5IGJpdCB0
aGVyZSAoY29uZmxpY3RzIHdpdGgKKyAqIENFVC1TUykuIEhvd2V2ZXIsIGFz
IHRoZXJlIGFyZSBDUFVzIHdoaWNoIG1heSBzZXQgdGhlIGRpcnR5IGJpdCBv
biByL28KKyAqIFBURXMsIHRoZSBsb2dpYyBiZWxvdyB0b2xlcmF0ZXMgdGhl
IGJpdCBiZWNvbWluZyBzZXQgImJlaGluZCBvdXIgYmFja3MiLgorICovCits
MV9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMWUocGFnZXRhYmxlX3Qgcm9v
dCwgdW5zaWduZWQgbG9uZyBhZGRyKQoreworICAgIGwyX3BnZW50cnlfdCBs
MmUgPSBwYWdlX3dhbGtfZ2V0X2wyZShyb290LCBhZGRyKTsKKyAgICBtZm5f
dCBtZm4gPSBsMmVfZ2V0X21mbihsMmUpOworICAgIHN0cnVjdCBwYWdlX2lu
Zm8gKnBnOworICAgIGwxX3BnZW50cnlfdCBsMWUgPSBsMWVfZW1wdHkoKTsK
KworICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVT
RU5UKSB8fAorICAgICAgICAgKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdF
X1BTRSkgKQorICAgICAgICByZXR1cm4gbDFlX2VtcHR5KCk7CisKKyAgICBw
ZyA9IG1mbl90b19wYWdlKG1mbl94KG1mbikpOworICAgIGlmICggIXBhZ2Vf
bG9jayhwZykgKQorICAgICAgICByZXR1cm4gbDFlX2VtcHR5KCk7CisKKyAg
ICBpZiAoIChwZy0+dS5pbnVzZS50eXBlX2luZm8gJiBQR1RfdHlwZV9tYXNr
KSA9PSBQR1RfbDFfcGFnZV90YWJsZSApCisgICAgeworICAgICAgICBsMV9w
Z2VudHJ5X3QgKmwxdCA9IG1hcF9kb21haW5fcGFnZShtZm4pOworCisgICAg
ICAgIGwxZSA9IGwxdFtsMV90YWJsZV9vZmZzZXQoYWRkcildOworCisgICAg
ICAgIGlmICggKGwxZV9nZXRfZmxhZ3MobDFlKSAmIChfUEFHRV9BQ0NFU1NF
RCB8IF9QQUdFX1BSRVNFTlQpKSA9PQorICAgICAgICAgICAgIF9QQUdFX1BS
RVNFTlQgKQorICAgICAgICB7CisgICAgICAgICAgICBsMV9wZ2VudHJ5X3Qg
b2wxZSA9IGwxZTsKKworICAgICAgICAgICAgbDFlX2FkZF9mbGFncyhsMWUs
IF9QQUdFX0FDQ0VTU0VEKTsKKyAgICAgICAgICAgIC8qCisgICAgICAgICAg
ICAgKiBCZXN0IGVmZm9ydCBvbmx5OyB3aXRoIHRoZSBsb2NrIGhlbGQgdGhl
IHBhZ2Ugc2hvdWxkbid0CisgICAgICAgICAgICAgKiBjaGFuZ2UgYW55d2F5
LCBleGNlcHQgZm9yIHRoZSBkaXJ0eSBiaXQgdG8gcGVyaGFwcyBiZWNvbWUg
c2V0LgorICAgICAgICAgICAgICovCisgICAgICAgICAgICB3aGlsZSAoIGNt
cHhjaGcoJmwxZV9nZXRfaW50cHRlKGwxdFtsMV90YWJsZV9vZmZzZXQoYWRk
cildKSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICBsMWVfZ2V0X2lu
dHB0ZShvbDFlKSwgbDFlX2dldF9pbnRwdGUobDFlKSkgIT0KKyAgICAgICAg
ICAgICAgICAgICAgbDFlX2dldF9pbnRwdGUob2wxZSkgJiYKKyAgICAgICAg
ICAgICAgICAgICAgIShsMWVfZ2V0X2ZsYWdzKGwxZSkgJiBfUEFHRV9ESVJU
WSkgKQorICAgICAgICAgICAgeworICAgICAgICAgICAgICAgIGwxZV9hZGRf
ZmxhZ3Mob2wxZSwgX1BBR0VfRElSVFkpOworICAgICAgICAgICAgICAgIGwx
ZV9hZGRfZmxhZ3MobDFlLCBfUEFHRV9ESVJUWSk7CisgICAgICAgICAgICB9
CisgICAgICAgIH0KKworICAgICAgICB1bm1hcF9kb21haW5fcGFnZShsMXQp
OworICAgIH0KKworICAgIHBhZ2VfdW5sb2NrKHBnKTsKKworICAgIHJldHVy
biBsMWU7Cit9CisKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAq
diwgdW5zaWduZWQgbG9uZyBhZGRyKQogewogICAgIGwzX3BnZW50cnlfdCBs
M2U7CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKaW5kZXggNWMyMTAyNzVmNy4uZjRj
YWY0ZjZiMiAxMDA2NDQKLS0tIGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5o
CisrKyBiL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaApAQCAtNTk2LDYgKzU5
Niw3IEBAIGludCB2Y3B1X2Rlc3Ryb3lfcGFnZXRhYmxlcyhzdHJ1Y3QgdmNw
dSAqKTsKIAogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1
bnNpZ25lZCBsb25nIGFkZHIpOwogbDJfcGdlbnRyeV90IHBhZ2Vfd2Fsa19n
ZXRfbDJlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRkcik7
CitsMV9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMWUocGFnZXRhYmxlX3Qg
cm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKTsKIAogaW50IF9fc3luY19sb2Nh
bF9leGVjc3RhdGUodm9pZCk7CiAK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0009-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0009-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIHRvcCBsZXZlbCBsaW5lYXIgcGFnZSB0
YWJsZXMgaW4KIHssdW59bWFwX2RvbWFpbl9wYWdlKCkKCk1vdmUgdGhlIHBh
Z2UgdGFibGUgcmVjdXJzaW9uIHR3byBsZXZlbHMgZG93bi4gVGhpcyBlbnRh
aWxzIGF2b2lkaW5nCnRvIGZyZWUgdGhlIHJlY3Vyc2l2ZSBtYXBwaW5nIHBy
ZW1hdHVyZWx5IGluIGZyZWVfcGVyZG9tYWluX21hcHBpbmdzKCkuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4g
PGphbm5oQGdvb2dsZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbl9wYWdlLmMgYi94ZW4vYXJj
aC94ODYvZG9tYWluX3BhZ2UuYwppbmRleCA5YTUyMjc2ODY2Li5kYzMyMmI1
MWUxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwor
KysgYi94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwpAQCAtNjUsNyArNjUs
OCBAQCB2b2lkIF9faW5pdCBtYXBjYWNoZV9vdmVycmlkZV9jdXJyZW50KHN0
cnVjdCB2Y3B1ICp2KQogI2RlZmluZSBtYXBjYWNoZV9sMl9lbnRyeShlKSAo
KGUpID4+IFBBR0VUQUJMRV9PUkRFUikKICNkZWZpbmUgTUFQQ0FDSEVfTDJf
RU5UUklFUyAobWFwY2FjaGVfbDJfZW50cnkoTUFQQ0FDSEVfRU5UUklFUyAt
IDEpICsgMSkKICNkZWZpbmUgTUFQQ0FDSEVfTDFFTlQoaWR4KSBcCi0gICAg
X19saW5lYXJfbDFfdGFibGVbbDFfbGluZWFyX29mZnNldChNQVBDQUNIRV9W
SVJUX1NUQVJUICsgcGZuX3RvX3BhZGRyKGlkeCkpXQorICAgICgobDFfcGdl
bnRyeV90ICopKE1BUENBQ0hFX1ZJUlRfU1RBUlQgfCBcCisgICAgICAgICAg
ICAgICAgICAgICAgKChMMl9QQUdFVEFCTEVfRU5UUklFUyAtIDEpIDw8IEwy
X1BBR0VUQUJMRV9TSElGVCkpKVtpZHhdCiAKIHZvaWQgKm1hcF9kb21haW5f
cGFnZShtZm5fdCBtZm4pCiB7CkBAIC0yMzUsNiArMjM2LDcgQEAgaW50IG1h
cGNhY2hlX2RvbWFpbl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiB7CiAgICAg
c3RydWN0IG1hcGNhY2hlX2RvbWFpbiAqZGNhY2hlID0gJmQtPmFyY2gucHZf
ZG9tYWluLm1hcGNhY2hlOwogICAgIHVuc2lnbmVkIGludCBiaXRtYXBfcGFn
ZXM7CisgICAgaW50IHJjOwogCiAgICAgaWYgKCAhaXNfcHZfZG9tYWluKGQp
IHx8IGlzX2lkbGVfZG9tYWluKGQpICkKICAgICAgICAgcmV0dXJuIDA7CkBA
IC0yNDQsOCArMjQ2LDEwIEBAIGludCBtYXBjYWNoZV9kb21haW5faW5pdChz
dHJ1Y3QgZG9tYWluICpkKQogICAgICAgICByZXR1cm4gMDsKICNlbmRpZgog
CisgICAgQlVJTERfQlVHX09OKE1BUENBQ0hFX1ZJUlRfU1RBUlQgJiAoKDEg
PDwgTDNfUEFHRVRBQkxFX1NISUZUKSAtIDEpKTsKICAgICBCVUlMRF9CVUdf
T04oTUFQQ0FDSEVfVklSVF9FTkQgKyBQQUdFX1NJWkUgKiAoMyArCi0gICAg
ICAgICAgICAgICAgIDIgKiBQRk5fVVAoQklUU19UT19MT05HUyhNQVBDQUNI
RV9FTlRSSUVTKSAqIHNpemVvZihsb25nKSkpID4KKyAgICAgICAgICAgICAg
ICAgMiAqIFBGTl9VUChCSVRTX1RPX0xPTkdTKE1BUENBQ0hFX0VOVFJJRVMp
ICogc2l6ZW9mKGxvbmcpKSkgKworICAgICAgICAgICAgICAgICAoMVUgPDwg
TDJfUEFHRVRBQkxFX1NISUZUKSA+CiAgICAgICAgICAgICAgICAgIE1BUENB
Q0hFX1ZJUlRfU1RBUlQgKyAoUEVSRE9NQUlOX1NMT1RfTUJZVEVTIDw8IDIw
KSk7CiAgICAgYml0bWFwX3BhZ2VzID0gUEZOX1VQKEJJVFNfVE9fTE9OR1Mo
TUFQQ0FDSEVfRU5UUklFUykgKiBzaXplb2YobG9uZykpOwogICAgIGRjYWNo
ZS0+aW51c2UgPSAodm9pZCAqKU1BUENBQ0hFX1ZJUlRfRU5EICsgUEFHRV9T
SVpFOwpAQCAtMjU0LDkgKzI1OCwyNSBAQCBpbnQgbWFwY2FjaGVfZG9tYWlu
X2luaXQoc3RydWN0IGRvbWFpbiAqZCkKIAogICAgIHNwaW5fbG9ja19pbml0
KCZkY2FjaGUtPmxvY2spOwogCi0gICAgcmV0dXJuIGNyZWF0ZV9wZXJkb21h
aW5fbWFwcGluZyhkLCAodW5zaWduZWQgbG9uZylkY2FjaGUtPmludXNlLAot
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMiAqIGJpdG1h
cF9wYWdlcyArIDEsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBOSUwobDFfcGdlbnRyeV90ICopLCBOVUxMKTsKKyAgICByYyA9IGNy
ZWF0ZV9wZXJkb21haW5fbWFwcGluZyhkLCAodW5zaWduZWQgbG9uZylkY2Fj
aGUtPmludXNlLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IDIgKiBiaXRtYXBfcGFnZXMgKyAxLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIE5JTChsMV9wZ2VudHJ5X3QgKiksIE5VTEwpOworICAg
IGlmICggIXJjICkKKyAgICB7CisgICAgICAgIC8qCisgICAgICAgICAqIElu
c3RhbGwgbWFwcGluZyBvZiBvdXIgTDIgdGFibGUgaW50byBpdHMgb3duIGxh
c3Qgc2xvdCwgZm9yIGVhc3kKKyAgICAgICAgICogYWNjZXNzIHRvIHRoZSBM
MSBlbnRyaWVzIHZpYSBNQVBDQUNIRV9MMUVOVCgpLgorICAgICAgICAgKi8K
KyAgICAgICAgbDNfcGdlbnRyeV90ICpsM3QgPSBfX21hcF9kb21haW5fcGFn
ZShkLT5hcmNoLnBlcmRvbWFpbl9sM19wZyk7CisgICAgICAgIGwzX3BnZW50
cnlfdCBsM2UgPSBsM3RbbDNfdGFibGVfb2Zmc2V0KE1BUENBQ0hFX1ZJUlRf
RU5EKV07CisgICAgICAgIGwyX3BnZW50cnlfdCAqbDJ0ID0gbWFwX2wydF9m
cm9tX2wzZShsM2UpOworCisgICAgICAgIGwyZV9nZXRfaW50cHRlKGwydFtM
Ml9QQUdFVEFCTEVfRU5UUklFUyAtIDFdKSA9IGwzZV9nZXRfaW50cHRlKGwz
ZSk7CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwydCk7CisgICAgICAg
IHVubWFwX2RvbWFpbl9wYWdlKGwzdCk7CisgICAgfQorCisgICAgcmV0dXJu
IHJjOwogfQogCiBpbnQgbWFwY2FjaGVfdmNwdV9pbml0KHN0cnVjdCB2Y3B1
ICp2KQpAQCAtMzQ3LDcgKzM2Nyw3IEBAIHVuc2lnbmVkIGxvbmcgZG9tYWlu
X3BhZ2VfbWFwX3RvX21mbihjb25zdCB2b2lkICpwdHIpCiAgICAgZWxzZQog
ICAgIHsKICAgICAgICAgQVNTRVJUKHZhID49IE1BUENBQ0hFX1ZJUlRfU1RB
UlQgJiYgdmEgPCBNQVBDQUNIRV9WSVJUX0VORCk7Ci0gICAgICAgIHBsMWUg
PSAmX19saW5lYXJfbDFfdGFibGVbbDFfbGluZWFyX29mZnNldCh2YSldOwor
ICAgICAgICBwbDFlID0gJk1BUENBQ0hFX0wxRU5UKFBGTl9ET1dOKHZhIC0g
TUFQQ0FDSEVfVklSVF9TVEFSVCkpOwogICAgIH0KIAogICAgIHJldHVybiBs
MWVfZ2V0X3BmbigqcGwxZSk7CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYv
bW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IGZkNzM0ZmY5NDcuLjk4
MWM4NDQ3ZTkgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC01OTUyLDYgKzU5NTIsMTAgQEAgdm9p
ZCBmcmVlX3BlcmRvbWFpbl9tYXBwaW5ncyhzdHJ1Y3QgZG9tYWluICpkKQog
ICAgICAgICAgICAgICAgIHsKICAgICAgICAgICAgICAgICAgICAgc3RydWN0
IHBhZ2VfaW5mbyAqbDFwZyA9IGwyZV9nZXRfcGFnZShsMnRhYltqXSk7CiAK
KyAgICAgICAgICAgICAgICAgICAgLyogbWFwY2FjaGVfZG9tYWluX2luaXQo
KSBpbnN0YWxscyBhIHJlY3Vyc2l2ZSBlbnRyeS4gKi8KKyAgICAgICAgICAg
ICAgICAgICAgaWYgKCBsMXBnID09IGwycGcgKQorICAgICAgICAgICAgICAg
ICAgICAgICAgY29udGludWU7CisKICAgICAgICAgICAgICAgICAgICAgaWYg
KCBsMmVfZ2V0X2ZsYWdzKGwydGFiW2pdKSAmIF9QQUdFX0FWQUlMMCApCiAg
ICAgICAgICAgICAgICAgICAgIHsKICAgICAgICAgICAgICAgICAgICAgICAg
IGwxX3BnZW50cnlfdCAqbDF0YWIgPSBfX21hcF9kb21haW5fcGFnZShsMXBn
KTsK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.10/0010-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Disposition: attachment;
 filename="xsa286-4.10/0010-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHJlc3RyaWN0IHVzZSBvZiBsaW5lYXIgcGFnZSB0YWJsZXMg
dG8gc2hhZG93IG1vZGUgY29kZQoKT3RoZXIgY29kZSBkb2VzIG5vdCByZXF1
aXJlIHRoZW0gdG8gYmUgc2V0IHVwIGFueW1vcmUsIHNvIHJlc3RyaWN0IHdo
ZW4KdG8gcG9wdWxhdGUgdGhlIHJlc3BlY3RpdmUgTDQgc2xvdCBhbmQgcmVk
dWNlIHZpc2liaWxpdHkgb2YgdGhlCmFjY2Vzc29ycy4KCldoaWxlIHdpdGgg
dGhlIHJlbW92YWwgb2YgYWxsIHVzZXMgdGhlIHZ1bG5lcmFiaWxpdHkgaXMg
YWN0dWFsbHkgZml4ZWQsCnJlbW92aW5nIHRoZSBjcmVhdGlvbiBvZiB0aGUg
bGluZWFyIG1hcHBpbmcgYWRkcyBhbiBleHRyYSBsYXllciBvZgpwcm90ZWN0
aW9uLiBTaW1pbGFybHkgcmVkdWNpbmcgdmlzaWJpbGl0eSBvZiB0aGUgYWNj
ZXNzb3JzIG1vc3RseQplbGltaW5hdGVzIHRoZSByaXNrIG9mIHVuZHVlIHJl
LWludHJvZHVjdGlvbiBvZiB1c2VzIG9mIHRoZSBsaW5lYXIKbWFwcGluZ3Mu
CgpUaGlzIGlzIChub3Qgc3RyaWN0bHkpIHBhcnQgb2YgWFNBLTI4Ni4KClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2
L21tLmMgYi94ZW4vYXJjaC94ODYvbW0uYwppbmRleCA5ODFjODQ0N2U5Li4x
NmQ1MGU4MWMzIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysg
Yi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtMTc4Niw5ICsxNzg2LDEwIEBAIHZv
aWQgaW5pdF94ZW5fbDRfc2xvdHMobDRfcGdlbnRyeV90ICpsNHQsIG1mbl90
IGw0bWZuLAogICAgIGw0dFtsNF90YWJsZV9vZmZzZXQoUENJX01DRkdfVklS
VF9TVEFSVCldID0KICAgICAgICAgaWRsZV9wZ190YWJsZVtsNF90YWJsZV9v
ZmZzZXQoUENJX01DRkdfVklSVF9TVEFSVCldOwogCi0gICAgLyogU2xvdCAy
NTg6IFNlbGYgbGluZWFyIG1hcHBpbmdzLiAqLworICAgIC8qIFNsb3QgMjU4
OiBTZWxmIGxpbmVhciBtYXBwaW5ncyAoc2hhZG93IHB0IG9ubHkpLiAqLwog
ICAgIEFTU0VSVCghbWZuX2VxKGw0bWZuLCBJTlZBTElEX01GTikpOwogICAg
IGw0dFtsNF90YWJsZV9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQpXSA9
CisgICAgICAgICFzaGFkb3dfbW9kZV9leHRlcm5hbChkKSA/IGw0ZV9lbXB0
eSgpIDoKICAgICAgICAgbDRlX2Zyb21fbWZuKGw0bWZuLCBfX1BBR0VfSFlQ
RVJWSVNPUl9SVyk7CiAKICAgICAvKiBTbG90IDI1OTogU2hhZG93IGxpbmVh
ciBtYXBwaW5ncyAoaWYgYXBwbGljYWJsZSkgLiovCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0vc2hhZG93L3ByaXZhdGUuaCBiL3hlbi9hcmNoL3g4
Ni9tbS9zaGFkb3cvcHJpdmF0ZS5oCmluZGV4IGIyZGFlN2E0NzMuLjgwMDBh
M2U3NTQgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJp
dmF0ZS5oCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJpdmF0ZS5o
CkBAIC0xMzcsNiArMTM3LDE1IEBAIGVudW0gewogIyBkZWZpbmUgR1VFU1Rf
UFRFX1NJWkUgNAogI2VuZGlmCiAKKy8qIFdoZXJlIHRvIGZpbmQgZWFjaCBs
ZXZlbCBvZiB0aGUgbGluZWFyIG1hcHBpbmcgKi8KKyNkZWZpbmUgX19saW5l
YXJfbDFfdGFibGUgKChsMV9wZ2VudHJ5X3QgKikoTElORUFSX1BUX1ZJUlRf
U1RBUlQpKQorI2RlZmluZSBfX2xpbmVhcl9sMl90YWJsZSBcCisgKChsMl9w
Z2VudHJ5X3QgKikoX19saW5lYXJfbDFfdGFibGUgKyBsMV9saW5lYXJfb2Zm
c2V0KExJTkVBUl9QVF9WSVJUX1NUQVJUKSkpCisjZGVmaW5lIF9fbGluZWFy
X2wzX3RhYmxlIFwKKyAoKGwzX3BnZW50cnlfdCAqKShfX2xpbmVhcl9sMl90
YWJsZSArIGwyX2xpbmVhcl9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQp
KSkKKyNkZWZpbmUgX19saW5lYXJfbDRfdGFibGUgXAorICgobDRfcGdlbnRy
eV90ICopKF9fbGluZWFyX2wzX3RhYmxlICsgbDNfbGluZWFyX29mZnNldChM
SU5FQVJfUFRfVklSVF9TVEFSVCkpKQorCiAvKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqCiAgKiBBdWRpdGluZyByb3V0aW5lcwogICovCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMgYi94ZW4vYXJj
aC94ODYveDg2XzY0L21tLmMKaW5kZXggZGNmNWNjMTU4Ni4uMGNiMjA3M2Vi
NSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCisrKyBi
L3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtODI4LDkgKzgyOCw2IEBA
IHZvaWQgX19pbml0IHBhZ2luZ19pbml0KHZvaWQpCiAKICAgICBtYWNoaW5l
X3RvX3BoeXNfbWFwcGluZ192YWxpZCA9IDE7CiAKLSAgICAvKiBTZXQgdXAg
bGluZWFyIHBhZ2UgdGFibGUgbWFwcGluZy4gKi8KLSAgICBsNGVfd3JpdGUo
JmlkbGVfcGdfdGFibGVbbDRfdGFibGVfb2Zmc2V0KExJTkVBUl9QVF9WSVJU
X1NUQVJUKV0sCi0gICAgICAgICAgICAgIGw0ZV9mcm9tX3BhZGRyKF9fcGEo
aWRsZV9wZ190YWJsZSksIF9fUEFHRV9IWVBFUlZJU09SX1JXKSk7CiAgICAg
cmV0dXJuOwogCiAgbm9tZW06CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9h
c20teDg2L2NvbmZpZy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jb25maWcu
aAppbmRleCA5ZWY5ZDAzY2E3Li40NjcwYWI5OWY2IDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCisrKyBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvY29uZmlnLmgKQEAgLTE5Myw3ICsxOTMsNyBAQCBleHRlcm4g
dW5zaWduZWQgY2hhciBib290X2VkaWRfaW5mb1sxMjhdOwogICovCiAjZGVm
aW5lIFBDSV9NQ0ZHX1ZJUlRfU1RBUlQgICAgIChQTUw0X0FERFIoMjU3KSkK
ICNkZWZpbmUgUENJX01DRkdfVklSVF9FTkQgICAgICAgKFBDSV9NQ0ZHX1ZJ
UlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQotLyogU2xvdCAyNTg6IGxp
bmVhciBwYWdlIHRhYmxlIChndWVzdCB0YWJsZSkuICovCisvKiBTbG90IDI1
ODogbGluZWFyIHBhZ2UgdGFibGUgKG1vbml0b3IgdGFibGUsIEhWTSBvbmx5
KS4gKi8KICNkZWZpbmUgTElORUFSX1BUX1ZJUlRfU1RBUlQgICAgKFBNTDRf
QUREUigyNTgpKQogI2RlZmluZSBMSU5FQVJfUFRfVklSVF9FTkQgICAgICAo
TElORUFSX1BUX1ZJUlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQogLyog
U2xvdCAyNTk6IGxpbmVhciBwYWdlIHRhYmxlIChzaGFkb3cgdGFibGUpLiAq
LwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9wYWdlLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaAppbmRleCA0NWNhNzQyNjc4Li44
ZDQyYjE0Zjk0IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3Bh
Z2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAtMjc2
LDE5ICsyNzYsNiBAQCB2b2lkIGNvcHlfcGFnZV9zc2UyKHZvaWQgKiwgY29u
c3Qgdm9pZCAqKTsKICNkZWZpbmUgdm1hcF90b19tZm4odmEpICAgICBsMWVf
Z2V0X3BmbigqdmlydF90b194ZW5fbDFlKCh1bnNpZ25lZCBsb25nKSh2YSkp
KQogI2RlZmluZSB2bWFwX3RvX3BhZ2UodmEpICAgIG1mbl90b19wYWdlKHZt
YXBfdG9fbWZuKHZhKSkKIAotI2VuZGlmIC8qICFkZWZpbmVkKF9fQVNTRU1C
TFlfXykgKi8KLQotLyogV2hlcmUgdG8gZmluZCBlYWNoIGxldmVsIG9mIHRo
ZSBsaW5lYXIgbWFwcGluZyAqLwotI2RlZmluZSBfX2xpbmVhcl9sMV90YWJs
ZSAoKGwxX3BnZW50cnlfdCAqKShMSU5FQVJfUFRfVklSVF9TVEFSVCkpCi0j
ZGVmaW5lIF9fbGluZWFyX2wyX3RhYmxlIFwKLSAoKGwyX3BnZW50cnlfdCAq
KShfX2xpbmVhcl9sMV90YWJsZSArIGwxX2xpbmVhcl9vZmZzZXQoTElORUFS
X1BUX1ZJUlRfU1RBUlQpKSkKLSNkZWZpbmUgX19saW5lYXJfbDNfdGFibGUg
XAotICgobDNfcGdlbnRyeV90ICopKF9fbGluZWFyX2wyX3RhYmxlICsgbDJf
bGluZWFyX29mZnNldChMSU5FQVJfUFRfVklSVF9TVEFSVCkpKQotI2RlZmlu
ZSBfX2xpbmVhcl9sNF90YWJsZSBcCi0gKChsNF9wZ2VudHJ5X3QgKikoX19s
aW5lYXJfbDNfdGFibGUgKyBsM19saW5lYXJfb2Zmc2V0KExJTkVBUl9QVF9W
SVJUX1NUQVJUKSkpCi0KLQotI2lmbmRlZiBfX0FTU0VNQkxZX18KIGV4dGVy
biByb290X3BnZW50cnlfdCBpZGxlX3BnX3RhYmxlW1JPT1RfUEFHRVRBQkxF
X0VOVFJJRVNdOwogZXh0ZXJuIGwyX3BnZW50cnlfdCAgKmNvbXBhdF9pZGxl
X3BnX3RhYmxlX2wyOwogZXh0ZXJuIHVuc2lnbmVkIGludCAgIG0ycF9jb21w
YXRfdnN0YXJ0Owo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0001-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0001-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGRvbid0IGFsbG93IGNsZWFyaW5nIG9mIFRGX2tlcm5lbF9tb2Rl
IGZvciBvdGhlciB0aGFuIDY0LWJpdCBQVgoKVGhlIGZsYWcgaXMgcmVhbGx5
IG9ubHkgbWVhbnQgZm9yIHRob3NlLCBib3RoIEhWTSBhbmQgMzItYml0IFBW
IHRlbGwKa2VybmVsIGZyb20gdXNlciBtb2RlIGJhc2VkIG9uIENQTC9SUEwu
IFJlbW92ZSB0aGUgYWxsLXF1ZXN0aW9uLW1hcmtzCmNvbW1lbnQgYW5kIGxl
dCdzIGJlIG9uIHRoZSBzYWZlIHNpZGUgaGVyZSBhbmQgYWxzbyBzdXBwcmVz
cyBjbGVhcmluZwpmb3IgMzItYml0IFBWICh0aGlzIGlzbid0IGEgZmFzdCBw
YXRoIGFmdGVyIGFsbCkuCgpSZW1vdmUgbm8gbG9uZ2VyIG5lY2Vzc2FyeSBp
c19wdl8zMmJpdF8qKCkgZnJvbSBzaF91cGRhdGVfY3IzKCkgYW5kCnNoX3dh
bGtfZ3Vlc3RfdGFibGVzKCkuIE5vdGUgdGhhdCBzaGFkb3dfb25lX2JpdF9k
aXNhYmxlKCkgYWxyZWFkeQphc3N1bWVzIHRoZSBuZXcgYmVoYXZpb3IuCgpT
aWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+
ClJldmlld2VkLWJ5OiBXZWkgTGl1IDx3ZWkubGl1MkBjaXRyaXguY29tPgpB
Y2tlZC1ieTogR2VvcmdlIER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBjaXRyaXgu
Y29tPgpBY2tlZC1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNA
Y2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvZG9tYWlu
LmMgYi94ZW4vYXJjaC94ODYvZG9tYWluLmMKaW5kZXggMzU4NTdkYmU4Ni4u
MWQwYWM4MWM1YiAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2RvbWFpbi5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9kb21haW4uYwpAQCAtODA0LDkgKzgwNCwx
NSBAQCBpbnQgYXJjaF9zZXRfaW5mb19ndWVzdCgKIAogICAgIHYtPmZwdV9p
bml0aWFsaXNlZCA9ICEhKGZsYWdzICYgVkdDRl9JMzg3X1ZBTElEKTsKIAot
ICAgIHYtPmFyY2guZmxhZ3MgJj0gflRGX2tlcm5lbF9tb2RlOwotICAgIGlm
ICggKGZsYWdzICYgVkdDRl9pbl9rZXJuZWwpIHx8IGlzX2h2bV9kb21haW4o
ZCkvKj8/PyovICkKLSAgICAgICAgdi0+YXJjaC5mbGFncyB8PSBURl9rZXJu
ZWxfbW9kZTsKKyAgICB2LT5hcmNoLmZsYWdzIHw9IFRGX2tlcm5lbF9tb2Rl
OworICAgIGlmICggdW5saWtlbHkoIShmbGFncyAmIFZHQ0ZfaW5fa2VybmVs
KSkgJiYKKyAgICAgICAgIC8qCisgICAgICAgICAgKiBURl9rZXJuZWxfbW9k
ZSBpcyBvbmx5IGFsbG93ZWQgdG8gYmUgY2xlYXIgZm9yIDY0LWJpdCBQVi4g
U2VlCisgICAgICAgICAgKiB1cGRhdGVfY3IzKCksIHNoX3VwZGF0ZV9jcjMo
KSwgc2hfd2Fsa19ndWVzdF90YWJsZXMoKSwgYW5kCisgICAgICAgICAgKiBz
aGFkb3dfb25lX2JpdF9kaXNhYmxlKCkgZm9yIHdoeSB0aGF0IGlzLgorICAg
ICAgICAgICovCisgICAgICAgICAhaXNfaHZtX2RvbWFpbihkKSAmJiAhaXNf
cHZfMzJiaXRfZG9tYWluKGQpICkKKyAgICAgICAgdi0+YXJjaC5mbGFncyAm
PSB+VEZfa2VybmVsX21vZGU7CiAKICAgICB2LT5hcmNoLnZnY19mbGFncyA9
IGZsYWdzOwogCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0vc2hhZG93
L211bHRpLmMgYi94ZW4vYXJjaC94ODYvbW0vc2hhZG93L211bHRpLmMKaW5k
ZXggOGFiMzQzZDE2ZS4uYTJlYmI0OTQzZiAxMDA2NDQKLS0tIGEveGVuL2Fy
Y2gveDg2L21tL3NoYWRvdy9tdWx0aS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9t
bS9zaGFkb3cvbXVsdGkuYwpAQCAtMTgwLDcgKzE4MCw3IEBAIHNoX3dhbGtf
Z3Vlc3RfdGFibGVzKHN0cnVjdCB2Y3B1ICp2LCB1bnNpZ25lZCBsb25nIHZh
LCB3YWxrX3QgKmd3LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICBJ
TlZBTElEX01GTiwgdi0+YXJjaC5wYWdpbmcuc2hhZG93LmdsM2UpOwogI2Vs
c2UgLyogMzIgb3IgNjQgKi8KICAgICBjb25zdCBzdHJ1Y3QgZG9tYWluICpk
ID0gdi0+ZG9tYWluOwotICAgIG1mbl90IHJvb3RfbWZuID0gKCh2LT5hcmNo
LmZsYWdzICYgVEZfa2VybmVsX21vZGUpIHx8IGlzX3B2XzMyYml0X2RvbWFp
bihkKQorICAgIG1mbl90IHJvb3RfbWZuID0gKHYtPmFyY2guZmxhZ3MgJiBU
Rl9rZXJuZWxfbW9kZQogICAgICAgICAgICAgICAgICAgICAgID8gcGFnZXRh
YmxlX2dldF9tZm4odi0+YXJjaC5ndWVzdF90YWJsZSkKICAgICAgICAgICAg
ICAgICAgICAgICA6IHBhZ2V0YWJsZV9nZXRfbWZuKHYtPmFyY2guZ3Vlc3Rf
dGFibGVfdXNlcikpOwogICAgIHZvaWQgKnJvb3RfbWFwID0gbWFwX2RvbWFp
bl9wYWdlKHJvb3RfbWZuKTsKQEAgLTQwMTgsNyArNDAxOCw3IEBAIHNoX3Vw
ZGF0ZV9jcjMoc3RydWN0IHZjcHUgKnYsIGludCBkb19sb2NraW5nLCBib29s
IG5vZmx1c2gpCiAgICAgICAgICAgICAgICAgICB2LCAodW5zaWduZWQgbG9u
ZylwYWdldGFibGVfZ2V0X3Bmbih2LT5hcmNoLmd1ZXN0X3RhYmxlKSk7CiAK
ICNpZiBHVUVTVF9QQUdJTkdfTEVWRUxTID09IDQKLSAgICBpZiAoICEodi0+
YXJjaC5mbGFncyAmIFRGX2tlcm5lbF9tb2RlKSAmJiAhaXNfcHZfMzJiaXRf
ZG9tYWluKGQpICkKKyAgICBpZiAoICEodi0+YXJjaC5mbGFncyAmIFRGX2tl
cm5lbF9tb2RlKSApCiAgICAgICAgIGdtZm4gPSBwYWdldGFibGVfZ2V0X21m
bih2LT5hcmNoLmd1ZXN0X3RhYmxlX3VzZXIpOwogICAgIGVsc2UKICNlbmRp
Zgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0002-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0002-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHNwbGl0IEw0IGFuZCBMMyBwYXJ0cyBvZiB0aGUgd2FsayBv
dXQgb2YgZG9fcGFnZV93YWxrKCkKClRoZSBMMyBvbmUgYXQgbGVhc3QgaXMg
Z29pbmcgdG8gYmUgcmUtdXNlZCBieSBhIHN1YnNlcXVlbnQgcGF0Y2gsIGFu
ZApzcGxpdHRpbmcgdGhlIEw0IG9uZSB0aGVuIGFzIHdlbGwgc2VlbXMgb25s
eSBuYXR1cmFsLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmll
d2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5j
b20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVy
M0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94ODZf
NjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCAzYmQx
NTc5NjdhLi5lNzNkYWE1NWU0IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
eDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCkBA
IC00NCwyNiArNDQsNDcgQEAgdW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkg
bTJwX2NvbXBhdF92c3RhcnQgPSBfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRf
U1RBUlQ7CiAKIGwyX3BnZW50cnlfdCAqY29tcGF0X2lkbGVfcGdfdGFibGVf
bDI7CiAKLXZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwgdW5z
aWduZWQgbG9uZyBhZGRyKQorc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbih2LT5hcmNoLmd1ZXN0X3RhYmxlKTsKLSAgICBsNF9wZ2VudHJ5X3Qg
bDRlLCAqbDR0OwotICAgIGwzX3BnZW50cnlfdCBsM2UsICpsM3Q7Ci0gICAg
bDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5X3QgbDFl
LCAqbDF0OworICAgIHVuc2lnbmVkIGxvbmcgbWZuID0gcGFnZXRhYmxlX2dl
dF9wZm4ocm9vdCk7CisgICAgbDRfcGdlbnRyeV90ICpsNHQsIGw0ZTsKIAot
ICAgIGlmICggIWlzX3B2X3ZjcHUodikgfHwgIWlzX2Nhbm9uaWNhbF9hZGRy
ZXNzKGFkZHIpICkKLSAgICAgICAgcmV0dXJuIE5VTEw7CisgICAgaWYgKCAh
aXNfY2Fub25pY2FsX2FkZHJlc3MoYWRkcikgKQorICAgICAgICByZXR1cm4g
bDRlX2VtcHR5KCk7CiAKICAgICBsNHQgPSBtYXBfZG9tYWluX3BhZ2UoX21m
bihtZm4pKTsKICAgICBsNGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIp
XTsKICAgICB1bm1hcF9kb21haW5fcGFnZShsNHQpOworCisgICAgcmV0dXJu
IGw0ZTsKK30KKworc3RhdGljIGwzX3BnZW50cnlfdCBwYWdlX3dhbGtfZ2V0
X2wzZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFkZHIpCit7
CisgICAgbDRfcGdlbnRyeV90IGw0ZSA9IHBhZ2Vfd2Fsa19nZXRfbDRlKHJv
b3QsIGFkZHIpOworICAgIGwzX3BnZW50cnlfdCAqbDN0LCBsM2U7CisKICAg
ICBpZiAoICEobDRlX2dldF9mbGFncyhsNGUpICYgX1BBR0VfUFJFU0VOVCkg
KQotICAgICAgICByZXR1cm4gTlVMTDsKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOwogCiAgICAgbDN0ID0gbWFwX2wzdF9mcm9tX2w0ZShsNGUpOwog
ICAgIGwzZSA9IGwzdFtsM190YWJsZV9vZmZzZXQoYWRkcildOwogICAgIHVu
bWFwX2RvbWFpbl9wYWdlKGwzdCk7CisKKyAgICByZXR1cm4gbDNlOworfQor
Cit2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcikKK3sKKyAgICBsM19wZ2VudHJ5X3QgbDNlOworICAgIGwy
X3BnZW50cnlfdCBsMmUsICpsMnQ7CisgICAgbDFfcGdlbnRyeV90IGwxZSwg
KmwxdDsKKyAgICB1bnNpZ25lZCBsb25nIG1mbjsKKworICAgIGlmICggIWlz
X3B2X3ZjcHUodikgKQorICAgICAgICByZXR1cm4gTlVMTDsKKworICAgIGwz
ZSA9IHBhZ2Vfd2Fsa19nZXRfbDNlKHYtPmFyY2guZ3Vlc3RfdGFibGUsIGFk
ZHIpOwogICAgIG1mbiA9IGwzZV9nZXRfcGZuKGwzZSk7CiAgICAgaWYgKCAh
KGwzZV9nZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5f
dmFsaWQoX21mbihtZm4pKSApCiAgICAgICAgIHJldHVybiBOVUxMOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0003-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0003-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGNoZWNrIHBhZ2UgdHlwZXMgaW4gZG9fcGFnZV93YWxrKCkK
CkZvciBwYWdlIHRhYmxlIGVudHJpZXMgcmVhZCB0byBiZSBndWFyYW50ZWVk
IHZhbGlkLCB0cmFuc2llbnRseSBsb2NraW5nCnRoZSBwYWdlcyBhbmQgdmFs
aWRhdGluZyB0aGVpciB0eXBlcyBpcyBuZWNlc3NhcnkuIE5vdGUgdGhhdCBn
dWVzdCB1c2UKb2YgbGluZWFyIHBhZ2UgdGFibGVzIGlzIGludGVudGlvbmFs
bHkgbm90IHRha2VuIGludG8gYWNjb3VudCBoZXJlLCBhcwpvcmRpbmFyeSBk
YXRhIChndWVzdCBzdGFja3MpIGNhbid0IHBvc3NpYmx5IGxpdmUgaW5zaWRl
IHBhZ2UgdGFibGVzLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWdu
ZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJl
dmlld2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJp
eC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94
ODZfNjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCBl
NzNkYWE1NWU0Li4xY2E5NTQ3ZDY4IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYveDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5j
CkBAIC00NiwxNSArNDYsMjkgQEAgbDJfcGdlbnRyeV90ICpjb21wYXRfaWRs
ZV9wZ190YWJsZV9sMjsKIAogc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbihyb290KTsKLSAgICBsNF9wZ2VudHJ5X3QgKmw0dCwgbDRlOworICAg
IG1mbl90IG1mbiA9IHBhZ2V0YWJsZV9nZXRfbWZuKHJvb3QpOworICAgIC8q
IGN1cnJlbnQncyByb290IHBhZ2UgdGFibGUgY2FuJ3QgZGlzYXBwZWFyIHVu
ZGVyIG91ciBmZWV0LiAqLworICAgIGJvb2wgbmVlZF9sb2NrID0gIW1mbl9l
cShtZm4sIHBhZ2V0YWJsZV9nZXRfbWZuKGN1cnJlbnQtPmFyY2guZ3Vlc3Rf
dGFibGUpKTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwZzsKKyAgICBsNF9w
Z2VudHJ5X3QgbDRlID0gbDRlX2VtcHR5KCk7CiAKICAgICBpZiAoICFpc19j
YW5vbmljYWxfYWRkcmVzcyhhZGRyKSApCiAgICAgICAgIHJldHVybiBsNGVf
ZW1wdHkoKTsKIAotICAgIGw0dCA9IG1hcF9kb21haW5fcGFnZShfbWZuKG1m
bikpOwotICAgIGw0ZSA9IGw0dFtsNF90YWJsZV9vZmZzZXQoYWRkcildOwot
ICAgIHVubWFwX2RvbWFpbl9wYWdlKGw0dCk7CisgICAgcGcgPSBtZm5fdG9f
cGFnZShtZm4pOworICAgIGlmICggbmVlZF9sb2NrICYmICFwYWdlX2xvY2so
cGcpICkKKyAgICAgICAgcmV0dXJuIGw0ZV9lbXB0eSgpOworCisgICAgaWYg
KCAocGctPnUuaW51c2UudHlwZV9pbmZvICYgUEdUX3R5cGVfbWFzaykgPT0g
UEdUX2w0X3BhZ2VfdGFibGUgKQorICAgIHsKKyAgICAgICAgbDRfcGdlbnRy
eV90ICpsNHQgPSBtYXBfZG9tYWluX3BhZ2UobWZuKTsKKworICAgICAgICBs
NGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIpXTsKKyAgICAgICAgdW5t
YXBfZG9tYWluX3BhZ2UobDR0KTsKKyAgICB9CisKKyAgICBpZiAoIG5lZWRf
bG9jayApCisgICAgICAgIHBhZ2VfdW5sb2NrKHBnKTsKIAogICAgIHJldHVy
biBsNGU7CiB9CkBAIC02MiwxNCArNzYsMjYgQEAgc3RhdGljIGw0X3BnZW50
cnlfdCBwYWdlX3dhbGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fs
a19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRk
cikKIHsKICAgICBsNF9wZ2VudHJ5X3QgbDRlID0gcGFnZV93YWxrX2dldF9s
NGUocm9vdCwgYWRkcik7Ci0gICAgbDNfcGdlbnRyeV90ICpsM3QsIGwzZTsK
KyAgICBtZm5fdCBtZm4gPSBsNGVfZ2V0X21mbihsNGUpOworICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnOworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBsM2Vf
ZW1wdHkoKTsKIAogICAgIGlmICggIShsNGVfZ2V0X2ZsYWdzKGw0ZSkgJiBf
UEFHRV9QUkVTRU5UKSApCiAgICAgICAgIHJldHVybiBsM2VfZW1wdHkoKTsK
IAotICAgIGwzdCA9IG1hcF9sM3RfZnJvbV9sNGUobDRlKTsKLSAgICBsM2Ug
PSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9kb21h
aW5fcGFnZShsM3QpOworICAgIHBnID0gbWZuX3RvX3BhZ2UobWZuKTsKKyAg
ICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOworCisgICAgaWYgKCAocGctPnUuaW51c2UudHlwZV9pbmZvICYg
UEdUX3R5cGVfbWFzaykgPT0gUEdUX2wzX3BhZ2VfdGFibGUgKQorICAgIHsK
KyAgICAgICAgbDNfcGdlbnRyeV90ICpsM3QgPSBtYXBfZG9tYWluX3BhZ2Uo
bWZuKTsKKworICAgICAgICBsM2UgPSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFk
ZHIpXTsKKyAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2UobDN0KTsKKyAgICB9
CisKKyAgICBwYWdlX3VubG9jayhwZyk7CiAKICAgICByZXR1cm4gbDNlOwog
fQpAQCAtNzcsNDQgKzEwMyw2NyBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBh
Z2Vfd2Fsa19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxv
bmcgYWRkcikKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwg
dW5zaWduZWQgbG9uZyBhZGRyKQogewogICAgIGwzX3BnZW50cnlfdCBsM2U7
Ci0gICAgbDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5
X3QgbDFlLCAqbDF0OwotICAgIHVuc2lnbmVkIGxvbmcgbWZuOworICAgIGwy
X3BnZW50cnlfdCBsMmUgPSBsMmVfZW1wdHkoKTsKKyAgICBsMV9wZ2VudHJ5
X3QgbDFlID0gbDFlX2VtcHR5KCk7CisgICAgbWZuX3QgbWZuOworICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBnOwogCiAgICAgaWYgKCAhaXNfcHZfdmNwdSh2
KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogCiAgICAgbDNlID0gcGFnZV93
YWxrX2dldF9sM2Uodi0+YXJjaC5ndWVzdF90YWJsZSwgYWRkcik7Ci0gICAg
bWZuID0gbDNlX2dldF9wZm4obDNlKTsKLSAgICBpZiAoICEobDNlX2dldF9m
bGFncyhsM2UpICYgX1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZChfbWZu
KG1mbikpICkKKyAgICBtZm4gPSBsM2VfZ2V0X21mbihsM2UpOworICAgIGlm
ICggIShsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fCAh
bWZuX3ZhbGlkKG1mbikgKQogICAgICAgICByZXR1cm4gTlVMTDsKICAgICBp
ZiAoIChsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QU0UpICkKICAgICB7
Ci0gICAgICAgIG1mbiArPSBQRk5fRE9XTihhZGRyICYgKCgxVUwgPDwgTDNf
UEFHRVRBQkxFX1NISUZUKSAtIDEpKTsKKyAgICAgICAgbWZuID0gbWZuX2Fk
ZChtZm4sIFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBMM19QQUdFVEFCTEVf
U0hJRlQpIC0gMSkpKTsKICAgICAgICAgZ290byByZXQ7CiAgICAgfQogCi0g
ICAgbDJ0ID0gbWFwX2RvbWFpbl9wYWdlKF9tZm4obWZuKSk7Ci0gICAgbDJl
ID0gbDJ0W2wyX3RhYmxlX29mZnNldChhZGRyKV07Ci0gICAgdW5tYXBfZG9t
YWluX3BhZ2UobDJ0KTsKLSAgICBtZm4gPSBsMmVfZ2V0X3BmbihsMmUpOwot
ICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVTRU5U
KSB8fCAhbWZuX3ZhbGlkKF9tZm4obWZuKSkgKQorICAgIHBnID0gbWZuX3Rv
X3BhZ2UobWZuKTsKKyAgICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAg
ICAgcmV0dXJuIE5VTEw7CisKKyAgICBpZiAoIChwZy0+dS5pbnVzZS50eXBl
X2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1RfbDJfcGFnZV90YWJsZSAp
CisgICAgeworICAgICAgICBjb25zdCBsMl9wZ2VudHJ5X3QgKmwydCA9IG1h
cF9kb21haW5fcGFnZShtZm4pOworCisgICAgICAgIGwyZSA9IGwydFtsMl90
YWJsZV9vZmZzZXQoYWRkcildOworICAgICAgICB1bm1hcF9kb21haW5fcGFn
ZShsMnQpOworICAgIH0KKworICAgIHBhZ2VfdW5sb2NrKHBnKTsKKworICAg
IG1mbiA9IGwyZV9nZXRfbWZuKGwyZSk7CisgICAgaWYgKCAhKGwyZV9nZXRf
ZmxhZ3MobDJlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFsaWQobWZu
KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogICAgIGlmICggKGwyZV9nZXRf
ZmxhZ3MobDJlKSAmIF9QQUdFX1BTRSkgKQogICAgIHsKLSAgICAgICAgbWZu
ICs9IFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBMMl9QQUdFVEFCTEVfU0hJ
RlQpIC0gMSkpOworICAgICAgICBtZm4gPSBtZm5fYWRkKG1mbiwgUEZOX0RP
V04oYWRkciAmICgoMVVMIDw8IEwyX1BBR0VUQUJMRV9TSElGVCkgLSAxKSkp
OwogICAgICAgICBnb3RvIHJldDsKICAgICB9CiAKLSAgICBsMXQgPSBtYXBf
ZG9tYWluX3BhZ2UoX21mbihtZm4pKTsKLSAgICBsMWUgPSBsMXRbbDFfdGFi
bGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9kb21haW5fcGFnZShsMXQp
OwotICAgIG1mbiA9IGwxZV9nZXRfcGZuKGwxZSk7Ci0gICAgaWYgKCAhKGwx
ZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFs
aWQoX21mbihtZm4pKSApCisgICAgcGcgPSBtZm5fdG9fcGFnZShtZm4pOwor
ICAgIGlmICggIXBhZ2VfbG9jayhwZykgKQorICAgICAgICByZXR1cm4gTlVM
TDsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90
eXBlX21hc2spID09IFBHVF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAg
ICAgIGNvbnN0IGwxX3BnZW50cnlfdCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDFlID0gbDF0W2wxX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgbWZuID0gbDFlX2dl
dF9tZm4obDFlKTsKKyAgICBpZiAoICEobDFlX2dldF9mbGFncyhsMWUpICYg
X1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZChtZm4pICkKICAgICAgICAg
cmV0dXJuIE5VTEw7CiAKICByZXQ6Ci0gICAgcmV0dXJuIG1hcF9kb21haW5f
cGFnZShfbWZuKG1mbikpICsgKGFkZHIgJiB+UEFHRV9NQVNLKTsKKyAgICBy
ZXR1cm4gbWFwX2RvbWFpbl9wYWdlKG1mbikgKyAoYWRkciAmIH5QQUdFX01B
U0spOwogfQogCiAvKgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0004-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0004-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBt
YXBfZ3Vlc3RfbDFlKCkKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBl
OiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXIt
RW5jb2Rpbmc6IDhiaXQKClJlcGxhY2UgdGhlIGxpbmVhciBMMiB0YWJsZSBh
Y2Nlc3MgYnkgYW4gYWN0dWFsIHBhZ2Ugd2Fsay4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMjg2LgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwppbmRleCA4MGJmMjgwZmIyLi5lZTA4YzEzODgxIDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwpAQCAtNDAsMTEgKzQwLDE0IEBAIGwxX3BnZW50cnlfdCAqbWFw
X2d1ZXN0X2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhciwgbWZuX3QgKmdsMW1m
bikKICAgICBpZiAoIHVubGlrZWx5KCFfX2FkZHJfb2sobGluZWFyKSkgKQog
ICAgICAgICByZXR1cm4gTlVMTDsKIAotICAgIC8qIEZpbmQgdGhpcyBsMWUg
YW5kIGl0cyBlbmNsb3NpbmcgbDFtZm4gaW4gdGhlIGxpbmVhciBtYXAuICov
Ci0gICAgaWYgKCBfX2NvcHlfZnJvbV91c2VyKCZsMmUsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMl90YWJsZVtsMl9saW5lYXJf
b2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAgICAgICAgICAgICAgICBz
aXplb2YobDJfcGdlbnRyeV90KSkgKQorICAgIGlmICggdW5saWtlbHkoIShj
dXJyZW50LT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpKSApCisgICAg
eworICAgICAgICBBU1NFUlRfVU5SRUFDSEFCTEUoKTsKICAgICAgICAgcmV0
dXJuIE5VTEw7CisgICAgfQorCisgICAgLyogRmluZCB0aGlzIGwxZSBhbmQg
aXRzIGVuY2xvc2luZyBsMW1mbi4gKi8KKyAgICBsMmUgPSBwYWdlX3dhbGtf
Z2V0X2wyZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwog
CiAgICAgLyogQ2hlY2sgZmxhZ3MgdGhhdCBpdCB3aWxsIGJlIHNhZmUgdG8g
cmVhZCB0aGUgbDFlLiAqLwogICAgIGlmICggKGwyZV9nZXRfZmxhZ3MobDJl
KSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUFNFKSkgIT0gX1BBR0VfUFJF
U0VOVCApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMg
Yi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggMWNhOTU0N2Q2OC4u
ZGZhMzNiYTg5NCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9t
bS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtMTAwLDYg
KzEwMCwzNCBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fsa19nZXRf
bDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRkcikKICAg
ICByZXR1cm4gbDNlOwogfQogCitsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dl
dF9sMmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQor
eworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBwYWdlX3dhbGtfZ2V0X2wzZShy
b290LCBhZGRyKTsKKyAgICBtZm5fdCBtZm4gPSBsM2VfZ2V0X21mbihsM2Up
OworICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOworICAgIGwyX3BnZW50cnlf
dCBsMmUgPSBsMmVfZW1wdHkoKTsKKworICAgIGlmICggIShsM2VfZ2V0X2Zs
YWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fAorICAgICAgICAgKGwzZV9n
ZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BTRSkgKQorICAgICAgICByZXR1cm4g
bDJlX2VtcHR5KCk7CisKKyAgICBwZyA9IG1mbl90b19wYWdlKG1mbik7Cisg
ICAgaWYgKCAhcGFnZV9sb2NrKHBnKSApCisgICAgICAgIHJldHVybiBsMmVf
ZW1wdHkoKTsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAm
IFBHVF90eXBlX21hc2spID09IFBHVF9sMl9wYWdlX3RhYmxlICkKKyAgICB7
CisgICAgICAgIGwyX3BnZW50cnlfdCAqbDJ0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDJlID0gbDJ0W2wyX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwydCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwyZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCA3ODI1NjkxZDA2Li5hZmFmZTg3ZmU3
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01ODUsNyArNTg1LDkgQEAg
dm9pZCBhdWRpdF9kb21haW5zKHZvaWQpOwogdm9pZCBtYWtlX2NyMyhzdHJ1
Y3QgdmNwdSAqdiwgbWZuX3QgbWZuKTsKIHZvaWQgdXBkYXRlX2NyMyhzdHJ1
Y3QgdmNwdSAqdik7CiBpbnQgdmNwdV9kZXN0cm95X3BhZ2V0YWJsZXMoc3Ry
dWN0IHZjcHUgKik7CisKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNw
dSAqdiwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wyX3BnZW50cnlfdCBwYWdl
X3dhbGtfZ2V0X2wyZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25n
IGFkZHIpOwogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNzdGF0ZSh2b2lkKTsK
IAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0005-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0005-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBn
dWVzdF9nZXRfZWZmX2tlcm5fbDFlKCkKCkZpcnN0IG9mIGFsbCBkcm9wIGd1
ZXN0X2dldF9lZmZfbDFlKCkgZW50aXJlbHkgLSB0aGVyZSdzIG5vIGFjdHVh
bCB1c2VyCm9mIGl0OiBwdl9yb19wYWdlX2ZhdWx0KCkgaGFzIGEgZ3Vlc3Rf
a2VybmVsX21vZGUoKSBjb25kaXRpb25hbCBhcm91bmQKaXRzIG9ubHkgY2Fs
bCBzaXRlLgoKVGhlbiByZXBsYWNlIHRoZSBsaW5lYXIgTDEgdGFibGUgYWNj
ZXNzIGJ5IGFuIGFjdHVhbCBwYWdlIHdhbGsuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4gPGphbm5oQGdvb2ds
ZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5k
dW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVu
L2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwppbmRl
eCBlZTA4YzEzODgxLi5jNzA3ODVkMGNmIDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwpAQCAt
NTksMjcgKzU5LDYgQEAgbDFfcGdlbnRyeV90ICptYXBfZ3Vlc3RfbDFlKHVu
c2lnbmVkIGxvbmcgbGluZWFyLCBtZm5fdCAqZ2wxbWZuKQogfQogCiAvKgot
ICogUmVhZCB0aGUgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgYWRkcmVz
cywgZnJvbSB0aGUga2VybmVsLW1vZGUKLSAqIHBhZ2UgdGFibGVzLgotICov
Ci1zdGF0aWMgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9sMWUo
dW5zaWduZWQgbG9uZyBsaW5lYXIpCi17Ci0gICAgc3RydWN0IHZjcHUgKmN1
cnIgPSBjdXJyZW50OwotICAgIGNvbnN0IGJvb2wgdXNlcl9tb2RlID0gIShj
dXJyLT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpOwotICAgIGwxX3Bn
ZW50cnlfdCBsMWU7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0gICAgICAg
IHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIGwxZSA9IGd1ZXN0X2dl
dF9lZmZfbDFlKGxpbmVhcik7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0g
ICAgICAgIHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIHJldHVybiBs
MWU7Ci19Ci0KLS8qCiAgKiBNYXAgYSBndWVzdCdzIExEVCBwYWdlIChjb3Zl
cmluZyB0aGUgYnl0ZSBhdCBAb2Zmc2V0IGZyb20gc3RhcnQgb2YgdGhlIExE
VCkKICAqIGludG8gWGVuJ3MgdmlydHVhbCByYW5nZS4gIFJldHVybnMgdHJ1
ZSBpZiB0aGUgbWFwcGluZyBjaGFuZ2VkLCBmYWxzZQogICogb3RoZXJ3aXNl
LgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmggYi94ZW4vYXJj
aC94ODYvcHYvbW0uaAppbmRleCA5NzYyMDliYTRjLi5jYzRlZTFhZmZiIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uaAorKysgYi94ZW4vYXJj
aC94ODYvcHYvbW0uaApAQCAtNSwxOSArNSwxOSBAQCBsMV9wZ2VudHJ5X3Qg
Km1hcF9ndWVzdF9sMWUodW5zaWduZWQgbG9uZyBsaW5lYXIsIG1mbl90ICpn
bDFtZm4pOwogCiBpbnQgbmV3X2d1ZXN0X2NyMyhtZm5fdCBtZm4pOwogCi0v
KiBSZWFkIGEgUFYgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgbGluZWFy
IGFkZHJlc3MuICovCi1zdGF0aWMgaW5saW5lIGwxX3BnZW50cnlfdCBndWVz
dF9nZXRfZWZmX2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhcikKKy8qCisgKiBS
ZWFkIHRoZSBndWVzdCdzIGwxZSB0aGF0IG1hcHMgdGhpcyBhZGRyZXNzLCBm
cm9tIHRoZSBrZXJuZWwtbW9kZQorICogcGFnZSB0YWJsZXMuCisgKi8KK3N0
YXRpYyBpbmxpbmUgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9s
MWUodW5zaWduZWQgbG9uZyBsaW5lYXIpCiB7Ci0gICAgbDFfcGdlbnRyeV90
IGwxZTsKKyAgICBsMV9wZ2VudHJ5X3QgbDFlID0gbDFlX2VtcHR5KCk7CiAK
ICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShjdXJyZW50LT5k
b21haW4pKTsKICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX2V4dGVybmFsKGN1
cnJlbnQtPmRvbWFpbikpOwogCi0gICAgaWYgKCB1bmxpa2VseSghX19hZGRy
X29rKGxpbmVhcikpIHx8Ci0gICAgICAgICBfX2NvcHlfZnJvbV91c2VyKCZs
MWUsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMV90
YWJsZVtsMV9saW5lYXJfb2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICBzaXplb2YobDFfcGdlbnRyeV90KSkgKQotICAgICAg
ICBsMWUgPSBsMWVfZW1wdHkoKTsKKyAgICBpZiAoIGxpa2VseShfX2FkZHJf
b2sobGluZWFyKSkgKQorICAgICAgICBsMWUgPSBwYWdlX3dhbGtfZ2V0X2wx
ZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwogCiAgICAg
cmV0dXJuIGwxZTsKIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9y
by1wYWdlLWZhdWx0LmMgYi94ZW4vYXJjaC94ODYvcHYvcm8tcGFnZS1mYXVs
dC5jCmluZGV4IGEzYzBjMmRkMTkuLmM5ZWU1MTU2ZjggMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9wdi9yby1wYWdlLWZhdWx0LmMKKysrIGIveGVuL2Fy
Y2gveDg2L3B2L3JvLXBhZ2UtZmF1bHQuYwpAQCAtMzU3LDcgKzM1Nyw3IEBA
IGludCBwdl9yb19wYWdlX2ZhdWx0KHVuc2lnbmVkIGxvbmcgYWRkciwgc3Ry
dWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCiAgICAgYm9vbCBtbWlvX3JvOwog
CiAgICAgLyogQXR0ZW1wdCB0byByZWFkIHRoZSBQVEUgdGhhdCBtYXBzIHRo
ZSBWQSBiZWluZyBhY2Nlc3NlZC4gKi8KLSAgICBwdGUgPSBndWVzdF9nZXRf
ZWZmX2wxZShhZGRyKTsKKyAgICBwdGUgPSBndWVzdF9nZXRfZWZmX2tlcm5f
bDFlKGFkZHIpOwogCiAgICAgLyogV2UgYXJlIG9ubHkgbG9va2luZyBmb3Ig
cmVhZC1vbmx5IG1hcHBpbmdzICovCiAgICAgaWYgKCAoKGwxZV9nZXRfZmxh
Z3MocHRlKSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUlcpKSAhPSBfUEFH
RV9QUkVTRU5UKSApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0
L21tLmMgYi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggZGZhMzNi
YTg5NC4uY2NhN2VhNmU5ZCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl82NC9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAt
MTI4LDYgKzEyOCw2MiBAQCBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9s
MmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQogICAg
IHJldHVybiBsMmU7CiB9CiAKKy8qCisgKiBGb3Igbm93IG5vICJzZXRfYWNj
ZXNzZWQiIHBhcmFtZXRlciwgYXMgYWxsIGNhbGxlcnMgd2FudCBpdCBzZXQg
dG8gdHJ1ZS4KKyAqIEZvciBub3cgYWxzbyBubyAic2V0X2RpcnR5IiBwYXJh
bWV0ZXIsIGFzIGFsbCBjYWxsZXJzIGRlYWwgd2l0aCByL28KKyAqIG1hcHBp
bmdzLCBhbmQgd2UgZG9uJ3Qgd2FudCB0byBzZXQgdGhlIGRpcnR5IGJpdCB0
aGVyZSAoY29uZmxpY3RzIHdpdGgKKyAqIENFVC1TUykuIEhvd2V2ZXIsIGFz
IHRoZXJlIGFyZSBDUFVzIHdoaWNoIG1heSBzZXQgdGhlIGRpcnR5IGJpdCBv
biByL28KKyAqIFBURXMsIHRoZSBsb2dpYyBiZWxvdyB0b2xlcmF0ZXMgdGhl
IGJpdCBiZWNvbWluZyBzZXQgImJlaGluZCBvdXIgYmFja3MiLgorICovCits
MV9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMWUocGFnZXRhYmxlX3Qgcm9v
dCwgdW5zaWduZWQgbG9uZyBhZGRyKQoreworICAgIGwyX3BnZW50cnlfdCBs
MmUgPSBwYWdlX3dhbGtfZ2V0X2wyZShyb290LCBhZGRyKTsKKyAgICBtZm5f
dCBtZm4gPSBsMmVfZ2V0X21mbihsMmUpOworICAgIHN0cnVjdCBwYWdlX2lu
Zm8gKnBnOworICAgIGwxX3BnZW50cnlfdCBsMWUgPSBsMWVfZW1wdHkoKTsK
KworICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVT
RU5UKSB8fAorICAgICAgICAgKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdF
X1BTRSkgKQorICAgICAgICByZXR1cm4gbDFlX2VtcHR5KCk7CisKKyAgICBw
ZyA9IG1mbl90b19wYWdlKG1mbik7CisgICAgaWYgKCAhcGFnZV9sb2NrKHBn
KSApCisgICAgICAgIHJldHVybiBsMWVfZW1wdHkoKTsKKworICAgIGlmICgg
KHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90eXBlX21hc2spID09IFBH
VF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAgICAgIGwxX3BnZW50cnlf
dCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdlKG1mbik7CisKKyAgICAgICAgbDFl
ID0gbDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV07CisKKyAgICAgICAgaWYg
KCAobDFlX2dldF9mbGFncyhsMWUpICYgKF9QQUdFX0FDQ0VTU0VEIHwgX1BB
R0VfUFJFU0VOVCkpID09CisgICAgICAgICAgICAgX1BBR0VfUFJFU0VOVCAp
CisgICAgICAgIHsKKyAgICAgICAgICAgIGwxX3BnZW50cnlfdCBvbDFlID0g
bDFlOworCisgICAgICAgICAgICBsMWVfYWRkX2ZsYWdzKGwxZSwgX1BBR0Vf
QUNDRVNTRUQpOworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAqIEJl
c3QgZWZmb3J0IG9ubHk7IHdpdGggdGhlIGxvY2sgaGVsZCB0aGUgcGFnZSBz
aG91bGRuJ3QKKyAgICAgICAgICAgICAqIGNoYW5nZSBhbnl3YXksIGV4Y2Vw
dCBmb3IgdGhlIGRpcnR5IGJpdCB0byBwZXJoYXBzIGJlY29tZSBzZXQuCisg
ICAgICAgICAgICAgKi8KKyAgICAgICAgICAgIHdoaWxlICggY21weGNoZygm
bDFlX2dldF9pbnRwdGUobDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV0pLAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGwxZV9nZXRfaW50cHRlKG9s
MWUpLCBsMWVfZ2V0X2ludHB0ZShsMWUpKSAhPQorICAgICAgICAgICAgICAg
ICAgICBsMWVfZ2V0X2ludHB0ZShvbDFlKSAmJgorICAgICAgICAgICAgICAg
ICAgICAhKGwxZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX0RJUlRZKSApCisg
ICAgICAgICAgICB7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9mbGFncyhv
bDFlLCBfUEFHRV9ESVJUWSk7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9m
bGFncyhsMWUsIF9QQUdFX0RJUlRZKTsKKyAgICAgICAgICAgIH0KKyAgICAg
ICAgfQorCisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwxZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCBhZmFmZTg3ZmU3Li40MjMzMTNhZTNh
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01ODgsNiArNTg4LDcgQEAg
aW50IHZjcHVfZGVzdHJveV9wYWdldGFibGVzKHN0cnVjdCB2Y3B1ICopOwog
CiB2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcik7CiBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMmUo
cGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wxX3Bn
ZW50cnlfdCBwYWdlX3dhbGtfZ2V0X2wxZShwYWdldGFibGVfdCByb290LCB1
bnNpZ25lZCBsb25nIGFkZHIpOwogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNz
dGF0ZSh2b2lkKTsKIAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0006-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0006-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIHRvcCBsZXZlbCBsaW5lYXIgcGFnZSB0
YWJsZXMgaW4KIHssdW59bWFwX2RvbWFpbl9wYWdlKCkKCk1vdmUgdGhlIHBh
Z2UgdGFibGUgcmVjdXJzaW9uIHR3byBsZXZlbHMgZG93bi4gVGhpcyBlbnRh
aWxzIGF2b2lkaW5nCnRvIGZyZWUgdGhlIHJlY3Vyc2l2ZSBtYXBwaW5nIHBy
ZW1hdHVyZWx5IGluIGZyZWVfcGVyZG9tYWluX21hcHBpbmdzKCkuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4g
PGphbm5oQGdvb2dsZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbl9wYWdlLmMgYi94ZW4vYXJj
aC94ODYvZG9tYWluX3BhZ2UuYwppbmRleCAwYzI0NTMwZWQ5Li5kODlmYTI3
ZjhlIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwor
KysgYi94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwpAQCAtNjUsNyArNjUs
OCBAQCB2b2lkIF9faW5pdCBtYXBjYWNoZV9vdmVycmlkZV9jdXJyZW50KHN0
cnVjdCB2Y3B1ICp2KQogI2RlZmluZSBtYXBjYWNoZV9sMl9lbnRyeShlKSAo
KGUpID4+IFBBR0VUQUJMRV9PUkRFUikKICNkZWZpbmUgTUFQQ0FDSEVfTDJf
RU5UUklFUyAobWFwY2FjaGVfbDJfZW50cnkoTUFQQ0FDSEVfRU5UUklFUyAt
IDEpICsgMSkKICNkZWZpbmUgTUFQQ0FDSEVfTDFFTlQoaWR4KSBcCi0gICAg
X19saW5lYXJfbDFfdGFibGVbbDFfbGluZWFyX29mZnNldChNQVBDQUNIRV9W
SVJUX1NUQVJUICsgcGZuX3RvX3BhZGRyKGlkeCkpXQorICAgICgobDFfcGdl
bnRyeV90ICopKE1BUENBQ0hFX1ZJUlRfU1RBUlQgfCBcCisgICAgICAgICAg
ICAgICAgICAgICAgKChMMl9QQUdFVEFCTEVfRU5UUklFUyAtIDEpIDw8IEwy
X1BBR0VUQUJMRV9TSElGVCkpKVtpZHhdCiAKIHZvaWQgKm1hcF9kb21haW5f
cGFnZShtZm5fdCBtZm4pCiB7CkBAIC0yMzUsNiArMjM2LDcgQEAgaW50IG1h
cGNhY2hlX2RvbWFpbl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiB7CiAgICAg
c3RydWN0IG1hcGNhY2hlX2RvbWFpbiAqZGNhY2hlID0gJmQtPmFyY2gucHZf
ZG9tYWluLm1hcGNhY2hlOwogICAgIHVuc2lnbmVkIGludCBiaXRtYXBfcGFn
ZXM7CisgICAgaW50IHJjOwogCiAgICAgQVNTRVJUKGlzX3B2X2RvbWFpbihk
KSk7CiAKQEAgLTI0Myw4ICsyNDUsMTAgQEAgaW50IG1hcGNhY2hlX2RvbWFp
bl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiAgICAgICAgIHJldHVybiAwOwog
I2VuZGlmCiAKKyAgICBCVUlMRF9CVUdfT04oTUFQQ0FDSEVfVklSVF9TVEFS
VCAmICgoMSA8PCBMM19QQUdFVEFCTEVfU0hJRlQpIC0gMSkpOwogICAgIEJV
SUxEX0JVR19PTihNQVBDQUNIRV9WSVJUX0VORCArIFBBR0VfU0laRSAqICgz
ICsKLSAgICAgICAgICAgICAgICAgMiAqIFBGTl9VUChCSVRTX1RPX0xPTkdT
KE1BUENBQ0hFX0VOVFJJRVMpICogc2l6ZW9mKGxvbmcpKSkgPgorICAgICAg
ICAgICAgICAgICAyICogUEZOX1VQKEJJVFNfVE9fTE9OR1MoTUFQQ0FDSEVf
RU5UUklFUykgKiBzaXplb2YobG9uZykpKSArCisgICAgICAgICAgICAgICAg
ICgxVSA8PCBMMl9QQUdFVEFCTEVfU0hJRlQpID4KICAgICAgICAgICAgICAg
ICAgTUFQQ0FDSEVfVklSVF9TVEFSVCArIChQRVJET01BSU5fU0xPVF9NQllU
RVMgPDwgMjApKTsKICAgICBiaXRtYXBfcGFnZXMgPSBQRk5fVVAoQklUU19U
T19MT05HUyhNQVBDQUNIRV9FTlRSSUVTKSAqIHNpemVvZihsb25nKSk7CiAg
ICAgZGNhY2hlLT5pbnVzZSA9ICh2b2lkICopTUFQQ0FDSEVfVklSVF9FTkQg
KyBQQUdFX1NJWkU7CkBAIC0yNTMsOSArMjU3LDI1IEBAIGludCBtYXBjYWNo
ZV9kb21haW5faW5pdChzdHJ1Y3QgZG9tYWluICpkKQogCiAgICAgc3Bpbl9s
b2NrX2luaXQoJmRjYWNoZS0+bG9jayk7CiAKLSAgICByZXR1cm4gY3JlYXRl
X3BlcmRvbWFpbl9tYXBwaW5nKGQsICh1bnNpZ25lZCBsb25nKWRjYWNoZS0+
aW51c2UsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAy
ICogYml0bWFwX3BhZ2VzICsgMSwKLSAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIE5JTChsMV9wZ2VudHJ5X3QgKiksIE5VTEwpOworICAg
IHJjID0gY3JlYXRlX3BlcmRvbWFpbl9tYXBwaW5nKGQsICh1bnNpZ25lZCBs
b25nKWRjYWNoZS0+aW51c2UsCisgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgMiAqIGJpdG1hcF9wYWdlcyArIDEsCisgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgTklMKGwxX3BnZW50cnlfdCAqKSwgTlVM
TCk7CisgICAgaWYgKCAhcmMgKQorICAgIHsKKyAgICAgICAgLyoKKyAgICAg
ICAgICogSW5zdGFsbCBtYXBwaW5nIG9mIG91ciBMMiB0YWJsZSBpbnRvIGl0
cyBvd24gbGFzdCBzbG90LCBmb3IgZWFzeQorICAgICAgICAgKiBhY2Nlc3Mg
dG8gdGhlIEwxIGVudHJpZXMgdmlhIE1BUENBQ0hFX0wxRU5UKCkuCisgICAg
ICAgICAqLworICAgICAgICBsM19wZ2VudHJ5X3QgKmwzdCA9IF9fbWFwX2Rv
bWFpbl9wYWdlKGQtPmFyY2gucGVyZG9tYWluX2wzX3BnKTsKKyAgICAgICAg
bDNfcGdlbnRyeV90IGwzZSA9IGwzdFtsM190YWJsZV9vZmZzZXQoTUFQQ0FD
SEVfVklSVF9FTkQpXTsKKyAgICAgICAgbDJfcGdlbnRyeV90ICpsMnQgPSBt
YXBfbDJ0X2Zyb21fbDNlKGwzZSk7CisKKyAgICAgICAgbDJlX2dldF9pbnRw
dGUobDJ0W0wyX1BBR0VUQUJMRV9FTlRSSUVTIC0gMV0pID0gbDNlX2dldF9p
bnRwdGUobDNlKTsKKyAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2UobDJ0KTsK
KyAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2UobDN0KTsKKyAgICB9CisKKyAg
ICByZXR1cm4gcmM7CiB9CiAKIGludCBtYXBjYWNoZV92Y3B1X2luaXQoc3Ry
dWN0IHZjcHUgKnYpCkBAIC0zNDYsNyArMzY2LDcgQEAgbWZuX3QgZG9tYWlu
X3BhZ2VfbWFwX3RvX21mbihjb25zdCB2b2lkICpwdHIpCiAgICAgZWxzZQog
ICAgIHsKICAgICAgICAgQVNTRVJUKHZhID49IE1BUENBQ0hFX1ZJUlRfU1RB
UlQgJiYgdmEgPCBNQVBDQUNIRV9WSVJUX0VORCk7Ci0gICAgICAgIHBsMWUg
PSAmX19saW5lYXJfbDFfdGFibGVbbDFfbGluZWFyX29mZnNldCh2YSldOwor
ICAgICAgICBwbDFlID0gJk1BUENBQ0hFX0wxRU5UKFBGTl9ET1dOKHZhIC0g
TUFQQ0FDSEVfVklSVF9TVEFSVCkpOwogICAgIH0KIAogICAgIHJldHVybiBs
MWVfZ2V0X21mbigqcGwxZSk7CmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYv
bW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4IDYyNjc2OGE5NTAuLjhm
OTc1YTc0N2QgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBi
L3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC02MDM4LDYgKzYwMzgsMTAgQEAgdm9p
ZCBmcmVlX3BlcmRvbWFpbl9tYXBwaW5ncyhzdHJ1Y3QgZG9tYWluICpkKQog
ICAgICAgICAgICAgICAgIHsKICAgICAgICAgICAgICAgICAgICAgc3RydWN0
IHBhZ2VfaW5mbyAqbDFwZyA9IGwyZV9nZXRfcGFnZShsMnRhYltqXSk7CiAK
KyAgICAgICAgICAgICAgICAgICAgLyogbWFwY2FjaGVfZG9tYWluX2luaXQo
KSBpbnN0YWxscyBhIHJlY3Vyc2l2ZSBlbnRyeS4gKi8KKyAgICAgICAgICAg
ICAgICAgICAgaWYgKCBsMXBnID09IGwycGcgKQorICAgICAgICAgICAgICAg
ICAgICAgICAgY29udGludWU7CisKICAgICAgICAgICAgICAgICAgICAgaWYg
KCBsMmVfZ2V0X2ZsYWdzKGwydGFiW2pdKSAmIF9QQUdFX0FWQUlMMCApCiAg
ICAgICAgICAgICAgICAgICAgIHsKICAgICAgICAgICAgICAgICAgICAgICAg
IGwxX3BnZW50cnlfdCAqbDF0YWIgPSBfX21hcF9kb21haW5fcGFnZShsMXBn
KTsK

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.11/0007-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Disposition: attachment;
 filename="xsa286-4.11/0007-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHJlc3RyaWN0IHVzZSBvZiBsaW5lYXIgcGFnZSB0YWJsZXMg
dG8gc2hhZG93IG1vZGUgY29kZQoKT3RoZXIgY29kZSBkb2VzIG5vdCByZXF1
aXJlIHRoZW0gdG8gYmUgc2V0IHVwIGFueW1vcmUsIHNvIHJlc3RyaWN0IHdo
ZW4KdG8gcG9wdWxhdGUgdGhlIHJlc3BlY3RpdmUgTDQgc2xvdCBhbmQgcmVk
dWNlIHZpc2liaWxpdHkgb2YgdGhlCmFjY2Vzc29ycy4KCldoaWxlIHdpdGgg
dGhlIHJlbW92YWwgb2YgYWxsIHVzZXMgdGhlIHZ1bG5lcmFiaWxpdHkgaXMg
YWN0dWFsbHkgZml4ZWQsCnJlbW92aW5nIHRoZSBjcmVhdGlvbiBvZiB0aGUg
bGluZWFyIG1hcHBpbmcgYWRkcyBhbiBleHRyYSBsYXllciBvZgpwcm90ZWN0
aW9uLiBTaW1pbGFybHkgcmVkdWNpbmcgdmlzaWJpbGl0eSBvZiB0aGUgYWNj
ZXNzb3JzIG1vc3RseQplbGltaW5hdGVzIHRoZSByaXNrIG9mIHVuZHVlIHJl
LWludHJvZHVjdGlvbiBvZiB1c2VzIG9mIHRoZSBsaW5lYXIKbWFwcGluZ3Mu
CgpUaGlzIGlzIChub3Qgc3RyaWN0bHkpIHBhcnQgb2YgWFNBLTI4Ni4KClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2
L21tLmMgYi94ZW4vYXJjaC94ODYvbW0uYwppbmRleCA4Zjk3NWE3NDdkLi4x
MDE3NTc2NGU4IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysg
Yi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtMTc1NSw5ICsxNzU1LDEwIEBAIHZv
aWQgaW5pdF94ZW5fbDRfc2xvdHMobDRfcGdlbnRyeV90ICpsNHQsIG1mbl90
IGw0bWZuLAogICAgIGw0dFtsNF90YWJsZV9vZmZzZXQoUENJX01DRkdfVklS
VF9TVEFSVCldID0KICAgICAgICAgaWRsZV9wZ190YWJsZVtsNF90YWJsZV9v
ZmZzZXQoUENJX01DRkdfVklSVF9TVEFSVCldOwogCi0gICAgLyogU2xvdCAy
NTg6IFNlbGYgbGluZWFyIG1hcHBpbmdzLiAqLworICAgIC8qIFNsb3QgMjU4
OiBTZWxmIGxpbmVhciBtYXBwaW5ncyAoc2hhZG93IHB0IG9ubHkpLiAqLwog
ICAgIEFTU0VSVCghbWZuX2VxKGw0bWZuLCBJTlZBTElEX01GTikpOwogICAg
IGw0dFtsNF90YWJsZV9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQpXSA9
CisgICAgICAgICFzaGFkb3dfbW9kZV9leHRlcm5hbChkKSA/IGw0ZV9lbXB0
eSgpIDoKICAgICAgICAgbDRlX2Zyb21fbWZuKGw0bWZuLCBfX1BBR0VfSFlQ
RVJWSVNPUl9SVyk7CiAKICAgICAvKiBTbG90IDI1OTogU2hhZG93IGxpbmVh
ciBtYXBwaW5ncyAoaWYgYXBwbGljYWJsZSkgLiovCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0vc2hhZG93L3ByaXZhdGUuaCBiL3hlbi9hcmNoL3g4
Ni9tbS9zaGFkb3cvcHJpdmF0ZS5oCmluZGV4IGM3ZmExODkyNWIuLjE5MzNh
NmEyYTIgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJp
dmF0ZS5oCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJpdmF0ZS5o
CkBAIC0xMzcsNiArMTM3LDE1IEBAIGVudW0gewogIyBkZWZpbmUgR1VFU1Rf
UFRFX1NJWkUgNAogI2VuZGlmCiAKKy8qIFdoZXJlIHRvIGZpbmQgZWFjaCBs
ZXZlbCBvZiB0aGUgbGluZWFyIG1hcHBpbmcgKi8KKyNkZWZpbmUgX19saW5l
YXJfbDFfdGFibGUgKChsMV9wZ2VudHJ5X3QgKikoTElORUFSX1BUX1ZJUlRf
U1RBUlQpKQorI2RlZmluZSBfX2xpbmVhcl9sMl90YWJsZSBcCisgKChsMl9w
Z2VudHJ5X3QgKikoX19saW5lYXJfbDFfdGFibGUgKyBsMV9saW5lYXJfb2Zm
c2V0KExJTkVBUl9QVF9WSVJUX1NUQVJUKSkpCisjZGVmaW5lIF9fbGluZWFy
X2wzX3RhYmxlIFwKKyAoKGwzX3BnZW50cnlfdCAqKShfX2xpbmVhcl9sMl90
YWJsZSArIGwyX2xpbmVhcl9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQp
KSkKKyNkZWZpbmUgX19saW5lYXJfbDRfdGFibGUgXAorICgobDRfcGdlbnRy
eV90ICopKF9fbGluZWFyX2wzX3RhYmxlICsgbDNfbGluZWFyX29mZnNldChM
SU5FQVJfUFRfVklSVF9TVEFSVCkpKQorCiAvKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqCiAgKiBBdWRpdGluZyByb3V0aW5lcwogICovCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMgYi94ZW4vYXJj
aC94ODYveDg2XzY0L21tLmMKaW5kZXggY2NhN2VhNmU5ZC4uZDc1NTFlNTk0
YSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCisrKyBi
L3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtODMzLDkgKzgzMyw2IEBA
IHZvaWQgX19pbml0IHBhZ2luZ19pbml0KHZvaWQpCiAKICAgICBtYWNoaW5l
X3RvX3BoeXNfbWFwcGluZ192YWxpZCA9IDE7CiAKLSAgICAvKiBTZXQgdXAg
bGluZWFyIHBhZ2UgdGFibGUgbWFwcGluZy4gKi8KLSAgICBsNGVfd3JpdGUo
JmlkbGVfcGdfdGFibGVbbDRfdGFibGVfb2Zmc2V0KExJTkVBUl9QVF9WSVJU
X1NUQVJUKV0sCi0gICAgICAgICAgICAgIGw0ZV9mcm9tX3BhZGRyKF9fcGEo
aWRsZV9wZ190YWJsZSksIF9fUEFHRV9IWVBFUlZJU09SX1JXKSk7CiAgICAg
cmV0dXJuOwogCiAgbm9tZW06CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9h
c20teDg2L2NvbmZpZy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jb25maWcu
aAppbmRleCA5ZWY5ZDAzY2E3Li40NjcwYWI5OWY2IDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCisrKyBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvY29uZmlnLmgKQEAgLTE5Myw3ICsxOTMsNyBAQCBleHRlcm4g
dW5zaWduZWQgY2hhciBib290X2VkaWRfaW5mb1sxMjhdOwogICovCiAjZGVm
aW5lIFBDSV9NQ0ZHX1ZJUlRfU1RBUlQgICAgIChQTUw0X0FERFIoMjU3KSkK
ICNkZWZpbmUgUENJX01DRkdfVklSVF9FTkQgICAgICAgKFBDSV9NQ0ZHX1ZJ
UlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQotLyogU2xvdCAyNTg6IGxp
bmVhciBwYWdlIHRhYmxlIChndWVzdCB0YWJsZSkuICovCisvKiBTbG90IDI1
ODogbGluZWFyIHBhZ2UgdGFibGUgKG1vbml0b3IgdGFibGUsIEhWTSBvbmx5
KS4gKi8KICNkZWZpbmUgTElORUFSX1BUX1ZJUlRfU1RBUlQgICAgKFBNTDRf
QUREUigyNTgpKQogI2RlZmluZSBMSU5FQVJfUFRfVklSVF9FTkQgICAgICAo
TElORUFSX1BUX1ZJUlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQogLyog
U2xvdCAyNTk6IGxpbmVhciBwYWdlIHRhYmxlIChzaGFkb3cgdGFibGUpLiAq
LwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9wYWdlLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaAppbmRleCBjMWU5MjkzN2MwLi5l
NzJjMjc3YjlmIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3Bh
Z2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAtMjc0
LDE5ICsyNzQsNiBAQCB2b2lkIGNvcHlfcGFnZV9zc2UyKHZvaWQgKiwgY29u
c3Qgdm9pZCAqKTsKICNkZWZpbmUgdm1hcF90b19tZm4odmEpICAgICBfbWZu
KGwxZV9nZXRfcGZuKCp2aXJ0X3RvX3hlbl9sMWUoKHVuc2lnbmVkIGxvbmcp
KHZhKSkpKQogI2RlZmluZSB2bWFwX3RvX3BhZ2UodmEpICAgIG1mbl90b19w
YWdlKHZtYXBfdG9fbWZuKHZhKSkKIAotI2VuZGlmIC8qICFkZWZpbmVkKF9f
QVNTRU1CTFlfXykgKi8KLQotLyogV2hlcmUgdG8gZmluZCBlYWNoIGxldmVs
IG9mIHRoZSBsaW5lYXIgbWFwcGluZyAqLwotI2RlZmluZSBfX2xpbmVhcl9s
MV90YWJsZSAoKGwxX3BnZW50cnlfdCAqKShMSU5FQVJfUFRfVklSVF9TVEFS
VCkpCi0jZGVmaW5lIF9fbGluZWFyX2wyX3RhYmxlIFwKLSAoKGwyX3BnZW50
cnlfdCAqKShfX2xpbmVhcl9sMV90YWJsZSArIGwxX2xpbmVhcl9vZmZzZXQo
TElORUFSX1BUX1ZJUlRfU1RBUlQpKSkKLSNkZWZpbmUgX19saW5lYXJfbDNf
dGFibGUgXAotICgobDNfcGdlbnRyeV90ICopKF9fbGluZWFyX2wyX3RhYmxl
ICsgbDJfbGluZWFyX29mZnNldChMSU5FQVJfUFRfVklSVF9TVEFSVCkpKQot
I2RlZmluZSBfX2xpbmVhcl9sNF90YWJsZSBcCi0gKChsNF9wZ2VudHJ5X3Qg
KikoX19saW5lYXJfbDNfdGFibGUgKyBsM19saW5lYXJfb2Zmc2V0KExJTkVB
Ul9QVF9WSVJUX1NUQVJUKSkpCi0KLQotI2lmbmRlZiBfX0FTU0VNQkxZX18K
IGV4dGVybiByb290X3BnZW50cnlfdCBpZGxlX3BnX3RhYmxlW1JPT1RfUEFH
RVRBQkxFX0VOVFJJRVNdOwogZXh0ZXJuIGwyX3BnZW50cnlfdCAgKmNvbXBh
dF9pZGxlX3BnX3RhYmxlX2wyOwogZXh0ZXJuIHVuc2lnbmVkIGludCAgIG0y
cF9jb21wYXRfdnN0YXJ0Owo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0001-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0001-x86-don-t-allow-clearing-of-TF_kernel_mode-for-other.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODY6IGRvbid0IGFsbG93IGNsZWFyaW5nIG9mIFRGX2tlcm5lbF9tb2Rl
IGZvciBvdGhlciB0aGFuIDY0LWJpdCBQVgoKVGhlIGZsYWcgaXMgcmVhbGx5
IG9ubHkgbWVhbnQgZm9yIHRob3NlLCBib3RoIEhWTSBhbmQgMzItYml0IFBW
IHRlbGwKa2VybmVsIGZyb20gdXNlciBtb2RlIGJhc2VkIG9uIENQTC9SUEwu
IFJlbW92ZSB0aGUgYWxsLXF1ZXN0aW9uLW1hcmtzCmNvbW1lbnQgYW5kIGxl
dCdzIGJlIG9uIHRoZSBzYWZlIHNpZGUgaGVyZSBhbmQgYWxzbyBzdXBwcmVz
cyBjbGVhcmluZwpmb3IgMzItYml0IFBWICh0aGlzIGlzbid0IGEgZmFzdCBw
YXRoIGFmdGVyIGFsbCkuCgpSZW1vdmUgbm8gbG9uZ2VyIG5lY2Vzc2FyeSBp
c19wdl8zMmJpdF8qKCkgZnJvbSBzaF91cGRhdGVfY3IzKCkgYW5kCnNoX3dh
bGtfZ3Vlc3RfdGFibGVzKCkuIE5vdGUgdGhhdCBzaGFkb3dfb25lX2JpdF9k
aXNhYmxlKCkgYWxyZWFkeQphc3N1bWVzIHRoZSBuZXcgYmVoYXZpb3IuCgpT
aWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+
ClJldmlld2VkLWJ5OiBXZWkgTGl1IDx3ZWkubGl1MkBjaXRyaXguY29tPgpB
Y2tlZC1ieTogR2VvcmdlIER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBjaXRyaXgu
Y29tPgpBY2tlZC1ieTogQW5kcmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNA
Y2l0cml4LmNvbT4KCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvZG9tYWlu
LmMgYi94ZW4vYXJjaC94ODYvZG9tYWluLmMKaW5kZXggMDE5ZGM1NzQ1OS4u
NjA3YjM0NjczNyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L2RvbWFpbi5j
CisrKyBiL3hlbi9hcmNoL3g4Ni9kb21haW4uYwpAQCAtODQwLDkgKzg0MCwx
NSBAQCBpbnQgYXJjaF9zZXRfaW5mb19ndWVzdCgKICAgICAgICAgICAgIHJl
dHVybiAtRUlOVkFMOwogICAgIH0KIAotICAgIHYtPmFyY2guZmxhZ3MgJj0g
flRGX2tlcm5lbF9tb2RlOwotICAgIGlmICggKGZsYWdzICYgVkdDRl9pbl9r
ZXJuZWwpIHx8IGlzX2h2bV9kb21haW4oZCkvKj8/PyovICkKLSAgICAgICAg
di0+YXJjaC5mbGFncyB8PSBURl9rZXJuZWxfbW9kZTsKKyAgICB2LT5hcmNo
LmZsYWdzIHw9IFRGX2tlcm5lbF9tb2RlOworICAgIGlmICggdW5saWtlbHko
IShmbGFncyAmIFZHQ0ZfaW5fa2VybmVsKSkgJiYKKyAgICAgICAgIC8qCisg
ICAgICAgICAgKiBURl9rZXJuZWxfbW9kZSBpcyBvbmx5IGFsbG93ZWQgdG8g
YmUgY2xlYXIgZm9yIDY0LWJpdCBQVi4gU2VlCisgICAgICAgICAgKiB1cGRh
dGVfY3IzKCksIHNoX3VwZGF0ZV9jcjMoKSwgc2hfd2Fsa19ndWVzdF90YWJs
ZXMoKSwgYW5kCisgICAgICAgICAgKiBzaGFkb3dfb25lX2JpdF9kaXNhYmxl
KCkgZm9yIHdoeSB0aGF0IGlzLgorICAgICAgICAgICovCisgICAgICAgICAh
aXNfaHZtX2RvbWFpbihkKSAmJiAhaXNfcHZfMzJiaXRfZG9tYWluKGQpICkK
KyAgICAgICAgdi0+YXJjaC5mbGFncyAmPSB+VEZfa2VybmVsX21vZGU7CiAK
ICAgICB2LT5hcmNoLnZnY19mbGFncyA9IGZsYWdzOwogCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvbW0vc2hhZG93L211bHRpLmMgYi94ZW4vYXJjaC94
ODYvbW0vc2hhZG93L211bHRpLmMKaW5kZXggM2U1NjUxZDAyOS4uNGFlOGUw
NWVjOCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L21tL3NoYWRvdy9tdWx0
aS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvbXVsdGkuYwpAQCAt
MTgwLDcgKzE4MCw3IEBAIHNoX3dhbGtfZ3Vlc3RfdGFibGVzKHN0cnVjdCB2
Y3B1ICp2LCB1bnNpZ25lZCBsb25nIHZhLCB3YWxrX3QgKmd3LAogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBJTlZBTElEX01GTiwgdi0+YXJjaC5w
YWdpbmcuc2hhZG93LmdsM2UpOwogI2Vsc2UgLyogMzIgb3IgNjQgKi8KICAg
ICBjb25zdCBzdHJ1Y3QgZG9tYWluICpkID0gdi0+ZG9tYWluOwotICAgIG1m
bl90IHJvb3RfbWZuID0gKCh2LT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21v
ZGUpIHx8IGlzX3B2XzMyYml0X2RvbWFpbihkKQorICAgIG1mbl90IHJvb3Rf
bWZuID0gKHYtPmFyY2guZmxhZ3MgJiBURl9rZXJuZWxfbW9kZQogICAgICAg
ICAgICAgICAgICAgICAgID8gcGFnZXRhYmxlX2dldF9tZm4odi0+YXJjaC5n
dWVzdF90YWJsZSkKICAgICAgICAgICAgICAgICAgICAgICA6IHBhZ2V0YWJs
ZV9nZXRfbWZuKHYtPmFyY2guZ3Vlc3RfdGFibGVfdXNlcikpOwogICAgIHZv
aWQgKnJvb3RfbWFwID0gbWFwX2RvbWFpbl9wYWdlKHJvb3RfbWZuKTsKQEAg
LTQwMjUsNyArNDAyNSw3IEBAIHNoX3VwZGF0ZV9jcjMoc3RydWN0IHZjcHUg
KnYsIGludCBkb19sb2NraW5nLCBib29sIG5vZmx1c2gpCiAgICAgICAgICAg
ICAgICAgICB2LCAodW5zaWduZWQgbG9uZylwYWdldGFibGVfZ2V0X3Bmbih2
LT5hcmNoLmd1ZXN0X3RhYmxlKSk7CiAKICNpZiBHVUVTVF9QQUdJTkdfTEVW
RUxTID09IDQKLSAgICBpZiAoICEodi0+YXJjaC5mbGFncyAmIFRGX2tlcm5l
bF9tb2RlKSAmJiAhaXNfcHZfMzJiaXRfZG9tYWluKGQpICkKKyAgICBpZiAo
ICEodi0+YXJjaC5mbGFncyAmIFRGX2tlcm5lbF9tb2RlKSApCiAgICAgICAg
IGdtZm4gPSBwYWdldGFibGVfZ2V0X21mbih2LT5hcmNoLmd1ZXN0X3RhYmxl
X3VzZXIpOwogICAgIGVsc2UKICNlbmRpZgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0002-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0002-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHNwbGl0IEw0IGFuZCBMMyBwYXJ0cyBvZiB0aGUgd2FsayBv
dXQgb2YgZG9fcGFnZV93YWxrKCkKClRoZSBMMyBvbmUgYXQgbGVhc3QgaXMg
Z29pbmcgdG8gYmUgcmUtdXNlZCBieSBhIHN1YnNlcXVlbnQgcGF0Y2gsIGFu
ZApzcGxpdHRpbmcgdGhlIEw0IG9uZSB0aGVuIGFzIHdlbGwgc2VlbXMgb25s
eSBuYXR1cmFsLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmll
d2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5j
b20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVy
M0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94ODZf
NjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCA3MjQ2
ZWU1MGVmLi45Yzg5YzdkMDc2IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
eDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCkBA
IC00NCwyNiArNDQsNDcgQEAgdW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkg
bTJwX2NvbXBhdF92c3RhcnQgPSBfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRf
U1RBUlQ7CiAKIGwyX3BnZW50cnlfdCAqY29tcGF0X2lkbGVfcGdfdGFibGVf
bDI7CiAKLXZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwgdW5z
aWduZWQgbG9uZyBhZGRyKQorc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbih2LT5hcmNoLmd1ZXN0X3RhYmxlKTsKLSAgICBsNF9wZ2VudHJ5X3Qg
bDRlLCAqbDR0OwotICAgIGwzX3BnZW50cnlfdCBsM2UsICpsM3Q7Ci0gICAg
bDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5X3QgbDFl
LCAqbDF0OworICAgIHVuc2lnbmVkIGxvbmcgbWZuID0gcGFnZXRhYmxlX2dl
dF9wZm4ocm9vdCk7CisgICAgbDRfcGdlbnRyeV90ICpsNHQsIGw0ZTsKIAot
ICAgIGlmICggIWlzX3B2X3ZjcHUodikgfHwgIWlzX2Nhbm9uaWNhbF9hZGRy
ZXNzKGFkZHIpICkKLSAgICAgICAgcmV0dXJuIE5VTEw7CisgICAgaWYgKCAh
aXNfY2Fub25pY2FsX2FkZHJlc3MoYWRkcikgKQorICAgICAgICByZXR1cm4g
bDRlX2VtcHR5KCk7CiAKICAgICBsNHQgPSBtYXBfZG9tYWluX3BhZ2UoX21m
bihtZm4pKTsKICAgICBsNGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIp
XTsKICAgICB1bm1hcF9kb21haW5fcGFnZShsNHQpOworCisgICAgcmV0dXJu
IGw0ZTsKK30KKworc3RhdGljIGwzX3BnZW50cnlfdCBwYWdlX3dhbGtfZ2V0
X2wzZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFkZHIpCit7
CisgICAgbDRfcGdlbnRyeV90IGw0ZSA9IHBhZ2Vfd2Fsa19nZXRfbDRlKHJv
b3QsIGFkZHIpOworICAgIGwzX3BnZW50cnlfdCAqbDN0LCBsM2U7CisKICAg
ICBpZiAoICEobDRlX2dldF9mbGFncyhsNGUpICYgX1BBR0VfUFJFU0VOVCkg
KQotICAgICAgICByZXR1cm4gTlVMTDsKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOwogCiAgICAgbDN0ID0gbWFwX2wzdF9mcm9tX2w0ZShsNGUpOwog
ICAgIGwzZSA9IGwzdFtsM190YWJsZV9vZmZzZXQoYWRkcildOwogICAgIHVu
bWFwX2RvbWFpbl9wYWdlKGwzdCk7CisKKyAgICByZXR1cm4gbDNlOworfQor
Cit2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcikKK3sKKyAgICBsM19wZ2VudHJ5X3QgbDNlOworICAgIGwy
X3BnZW50cnlfdCBsMmUsICpsMnQ7CisgICAgbDFfcGdlbnRyeV90IGwxZSwg
KmwxdDsKKyAgICB1bnNpZ25lZCBsb25nIG1mbjsKKworICAgIGlmICggIWlz
X3B2X3ZjcHUodikgKQorICAgICAgICByZXR1cm4gTlVMTDsKKworICAgIGwz
ZSA9IHBhZ2Vfd2Fsa19nZXRfbDNlKHYtPmFyY2guZ3Vlc3RfdGFibGUsIGFk
ZHIpOwogICAgIG1mbiA9IGwzZV9nZXRfcGZuKGwzZSk7CiAgICAgaWYgKCAh
KGwzZV9nZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5f
dmFsaWQoX21mbihtZm4pKSApCiAgICAgICAgIHJldHVybiBOVUxMOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0003-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0003-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGNoZWNrIHBhZ2UgdHlwZXMgaW4gZG9fcGFnZV93YWxrKCkK
CkZvciBwYWdlIHRhYmxlIGVudHJpZXMgcmVhZCB0byBiZSBndWFyYW50ZWVk
IHZhbGlkLCB0cmFuc2llbnRseSBsb2NraW5nCnRoZSBwYWdlcyBhbmQgdmFs
aWRhdGluZyB0aGVpciB0eXBlcyBpcyBuZWNlc3NhcnkuIE5vdGUgdGhhdCBn
dWVzdCB1c2UKb2YgbGluZWFyIHBhZ2UgdGFibGVzIGlzIGludGVudGlvbmFs
bHkgbm90IHRha2VuIGludG8gYWNjb3VudCBoZXJlLCBhcwpvcmRpbmFyeSBk
YXRhIChndWVzdCBzdGFja3MpIGNhbid0IHBvc3NpYmx5IGxpdmUgaW5zaWRl
IHBhZ2UgdGFibGVzLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWdu
ZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJl
dmlld2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJp
eC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94
ODZfNjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCA5
Yzg5YzdkMDc2Li5jNzY0MjM3NjUxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYveDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5j
CkBAIC00NiwxNSArNDYsMjkgQEAgbDJfcGdlbnRyeV90ICpjb21wYXRfaWRs
ZV9wZ190YWJsZV9sMjsKIAogc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbihyb290KTsKLSAgICBsNF9wZ2VudHJ5X3QgKmw0dCwgbDRlOworICAg
IG1mbl90IG1mbiA9IHBhZ2V0YWJsZV9nZXRfbWZuKHJvb3QpOworICAgIC8q
IGN1cnJlbnQncyByb290IHBhZ2UgdGFibGUgY2FuJ3QgZGlzYXBwZWFyIHVu
ZGVyIG91ciBmZWV0LiAqLworICAgIGJvb2wgbmVlZF9sb2NrID0gIW1mbl9l
cShtZm4sIHBhZ2V0YWJsZV9nZXRfbWZuKGN1cnJlbnQtPmFyY2guZ3Vlc3Rf
dGFibGUpKTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwZzsKKyAgICBsNF9w
Z2VudHJ5X3QgbDRlID0gbDRlX2VtcHR5KCk7CiAKICAgICBpZiAoICFpc19j
YW5vbmljYWxfYWRkcmVzcyhhZGRyKSApCiAgICAgICAgIHJldHVybiBsNGVf
ZW1wdHkoKTsKIAotICAgIGw0dCA9IG1hcF9kb21haW5fcGFnZShfbWZuKG1m
bikpOwotICAgIGw0ZSA9IGw0dFtsNF90YWJsZV9vZmZzZXQoYWRkcildOwot
ICAgIHVubWFwX2RvbWFpbl9wYWdlKGw0dCk7CisgICAgcGcgPSBtZm5fdG9f
cGFnZShtZm4pOworICAgIGlmICggbmVlZF9sb2NrICYmICFwYWdlX2xvY2so
cGcpICkKKyAgICAgICAgcmV0dXJuIGw0ZV9lbXB0eSgpOworCisgICAgaWYg
KCAocGctPnUuaW51c2UudHlwZV9pbmZvICYgUEdUX3R5cGVfbWFzaykgPT0g
UEdUX2w0X3BhZ2VfdGFibGUgKQorICAgIHsKKyAgICAgICAgbDRfcGdlbnRy
eV90ICpsNHQgPSBtYXBfZG9tYWluX3BhZ2UobWZuKTsKKworICAgICAgICBs
NGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIpXTsKKyAgICAgICAgdW5t
YXBfZG9tYWluX3BhZ2UobDR0KTsKKyAgICB9CisKKyAgICBpZiAoIG5lZWRf
bG9jayApCisgICAgICAgIHBhZ2VfdW5sb2NrKHBnKTsKIAogICAgIHJldHVy
biBsNGU7CiB9CkBAIC02MiwxNCArNzYsMjYgQEAgc3RhdGljIGw0X3BnZW50
cnlfdCBwYWdlX3dhbGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fs
a19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRk
cikKIHsKICAgICBsNF9wZ2VudHJ5X3QgbDRlID0gcGFnZV93YWxrX2dldF9s
NGUocm9vdCwgYWRkcik7Ci0gICAgbDNfcGdlbnRyeV90ICpsM3QsIGwzZTsK
KyAgICBtZm5fdCBtZm4gPSBsNGVfZ2V0X21mbihsNGUpOworICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnOworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBsM2Vf
ZW1wdHkoKTsKIAogICAgIGlmICggIShsNGVfZ2V0X2ZsYWdzKGw0ZSkgJiBf
UEFHRV9QUkVTRU5UKSApCiAgICAgICAgIHJldHVybiBsM2VfZW1wdHkoKTsK
IAotICAgIGwzdCA9IG1hcF9sM3RfZnJvbV9sNGUobDRlKTsKLSAgICBsM2Ug
PSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9kb21h
aW5fcGFnZShsM3QpOworICAgIHBnID0gbWZuX3RvX3BhZ2UobWZuKTsKKyAg
ICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOworCisgICAgaWYgKCAocGctPnUuaW51c2UudHlwZV9pbmZvICYg
UEdUX3R5cGVfbWFzaykgPT0gUEdUX2wzX3BhZ2VfdGFibGUgKQorICAgIHsK
KyAgICAgICAgbDNfcGdlbnRyeV90ICpsM3QgPSBtYXBfZG9tYWluX3BhZ2Uo
bWZuKTsKKworICAgICAgICBsM2UgPSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFk
ZHIpXTsKKyAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2UobDN0KTsKKyAgICB9
CisKKyAgICBwYWdlX3VubG9jayhwZyk7CiAKICAgICByZXR1cm4gbDNlOwog
fQpAQCAtNzcsNDQgKzEwMyw2NyBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBh
Z2Vfd2Fsa19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxv
bmcgYWRkcikKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwg
dW5zaWduZWQgbG9uZyBhZGRyKQogewogICAgIGwzX3BnZW50cnlfdCBsM2U7
Ci0gICAgbDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5
X3QgbDFlLCAqbDF0OwotICAgIHVuc2lnbmVkIGxvbmcgbWZuOworICAgIGwy
X3BnZW50cnlfdCBsMmUgPSBsMmVfZW1wdHkoKTsKKyAgICBsMV9wZ2VudHJ5
X3QgbDFlID0gbDFlX2VtcHR5KCk7CisgICAgbWZuX3QgbWZuOworICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBnOwogCiAgICAgaWYgKCAhaXNfcHZfdmNwdSh2
KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogCiAgICAgbDNlID0gcGFnZV93
YWxrX2dldF9sM2Uodi0+YXJjaC5ndWVzdF90YWJsZSwgYWRkcik7Ci0gICAg
bWZuID0gbDNlX2dldF9wZm4obDNlKTsKLSAgICBpZiAoICEobDNlX2dldF9m
bGFncyhsM2UpICYgX1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZChfbWZu
KG1mbikpICkKKyAgICBtZm4gPSBsM2VfZ2V0X21mbihsM2UpOworICAgIGlm
ICggIShsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fCAh
bWZuX3ZhbGlkKG1mbikgKQogICAgICAgICByZXR1cm4gTlVMTDsKICAgICBp
ZiAoIChsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QU0UpICkKICAgICB7
Ci0gICAgICAgIG1mbiArPSBQRk5fRE9XTihhZGRyICYgKCgxVUwgPDwgTDNf
UEFHRVRBQkxFX1NISUZUKSAtIDEpKTsKKyAgICAgICAgbWZuID0gbWZuX2Fk
ZChtZm4sIFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBMM19QQUdFVEFCTEVf
U0hJRlQpIC0gMSkpKTsKICAgICAgICAgZ290byByZXQ7CiAgICAgfQogCi0g
ICAgbDJ0ID0gbWFwX2RvbWFpbl9wYWdlKF9tZm4obWZuKSk7Ci0gICAgbDJl
ID0gbDJ0W2wyX3RhYmxlX29mZnNldChhZGRyKV07Ci0gICAgdW5tYXBfZG9t
YWluX3BhZ2UobDJ0KTsKLSAgICBtZm4gPSBsMmVfZ2V0X3BmbihsMmUpOwot
ICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVTRU5U
KSB8fCAhbWZuX3ZhbGlkKF9tZm4obWZuKSkgKQorICAgIHBnID0gbWZuX3Rv
X3BhZ2UobWZuKTsKKyAgICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAg
ICAgcmV0dXJuIE5VTEw7CisKKyAgICBpZiAoIChwZy0+dS5pbnVzZS50eXBl
X2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1RfbDJfcGFnZV90YWJsZSAp
CisgICAgeworICAgICAgICBjb25zdCBsMl9wZ2VudHJ5X3QgKmwydCA9IG1h
cF9kb21haW5fcGFnZShtZm4pOworCisgICAgICAgIGwyZSA9IGwydFtsMl90
YWJsZV9vZmZzZXQoYWRkcildOworICAgICAgICB1bm1hcF9kb21haW5fcGFn
ZShsMnQpOworICAgIH0KKworICAgIHBhZ2VfdW5sb2NrKHBnKTsKKworICAg
IG1mbiA9IGwyZV9nZXRfbWZuKGwyZSk7CisgICAgaWYgKCAhKGwyZV9nZXRf
ZmxhZ3MobDJlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFsaWQobWZu
KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogICAgIGlmICggKGwyZV9nZXRf
ZmxhZ3MobDJlKSAmIF9QQUdFX1BTRSkgKQogICAgIHsKLSAgICAgICAgbWZu
ICs9IFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBMMl9QQUdFVEFCTEVfU0hJ
RlQpIC0gMSkpOworICAgICAgICBtZm4gPSBtZm5fYWRkKG1mbiwgUEZOX0RP
V04oYWRkciAmICgoMVVMIDw8IEwyX1BBR0VUQUJMRV9TSElGVCkgLSAxKSkp
OwogICAgICAgICBnb3RvIHJldDsKICAgICB9CiAKLSAgICBsMXQgPSBtYXBf
ZG9tYWluX3BhZ2UoX21mbihtZm4pKTsKLSAgICBsMWUgPSBsMXRbbDFfdGFi
bGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9kb21haW5fcGFnZShsMXQp
OwotICAgIG1mbiA9IGwxZV9nZXRfcGZuKGwxZSk7Ci0gICAgaWYgKCAhKGwx
ZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFs
aWQoX21mbihtZm4pKSApCisgICAgcGcgPSBtZm5fdG9fcGFnZShtZm4pOwor
ICAgIGlmICggIXBhZ2VfbG9jayhwZykgKQorICAgICAgICByZXR1cm4gTlVM
TDsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90
eXBlX21hc2spID09IFBHVF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAg
ICAgIGNvbnN0IGwxX3BnZW50cnlfdCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDFlID0gbDF0W2wxX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgbWZuID0gbDFlX2dl
dF9tZm4obDFlKTsKKyAgICBpZiAoICEobDFlX2dldF9mbGFncyhsMWUpICYg
X1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZChtZm4pICkKICAgICAgICAg
cmV0dXJuIE5VTEw7CiAKICByZXQ6Ci0gICAgcmV0dXJuIG1hcF9kb21haW5f
cGFnZShfbWZuKG1mbikpICsgKGFkZHIgJiB+UEFHRV9NQVNLKTsKKyAgICBy
ZXR1cm4gbWFwX2RvbWFpbl9wYWdlKG1mbikgKyAoYWRkciAmIH5QQUdFX01B
U0spOwogfQogCiAvKgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0004-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0004-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBt
YXBfZ3Vlc3RfbDFlKCkKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBl
OiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXIt
RW5jb2Rpbmc6IDhiaXQKClJlcGxhY2UgdGhlIGxpbmVhciBMMiB0YWJsZSBh
Y2Nlc3MgYnkgYW4gYWN0dWFsIHBhZ2Ugd2Fsay4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMjg2LgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwppbmRleCAyYjBkYWRjOGRhLi5hY2ViZjllOTU3IDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwpAQCAtNDAsMTEgKzQwLDE0IEBAIGwxX3BnZW50cnlfdCAqbWFw
X2d1ZXN0X2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhciwgbWZuX3QgKmdsMW1m
bikKICAgICBpZiAoIHVubGlrZWx5KCFfX2FkZHJfb2sobGluZWFyKSkgKQog
ICAgICAgICByZXR1cm4gTlVMTDsKIAotICAgIC8qIEZpbmQgdGhpcyBsMWUg
YW5kIGl0cyBlbmNsb3NpbmcgbDFtZm4gaW4gdGhlIGxpbmVhciBtYXAuICov
Ci0gICAgaWYgKCBfX2NvcHlfZnJvbV91c2VyKCZsMmUsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMl90YWJsZVtsMl9saW5lYXJf
b2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAgICAgICAgICAgICAgICBz
aXplb2YobDJfcGdlbnRyeV90KSkgKQorICAgIGlmICggdW5saWtlbHkoIShj
dXJyZW50LT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpKSApCisgICAg
eworICAgICAgICBBU1NFUlRfVU5SRUFDSEFCTEUoKTsKICAgICAgICAgcmV0
dXJuIE5VTEw7CisgICAgfQorCisgICAgLyogRmluZCB0aGlzIGwxZSBhbmQg
aXRzIGVuY2xvc2luZyBsMW1mbi4gKi8KKyAgICBsMmUgPSBwYWdlX3dhbGtf
Z2V0X2wyZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwog
CiAgICAgLyogQ2hlY2sgZmxhZ3MgdGhhdCBpdCB3aWxsIGJlIHNhZmUgdG8g
cmVhZCB0aGUgbDFlLiAqLwogICAgIGlmICggKGwyZV9nZXRfZmxhZ3MobDJl
KSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUFNFKSkgIT0gX1BBR0VfUFJF
U0VOVCApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMg
Yi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggYzc2NDIzNzY1MS4u
MDYyMDY5YTkwOCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9t
bS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtMTAwLDYg
KzEwMCwzNCBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fsa19nZXRf
bDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRkcikKICAg
ICByZXR1cm4gbDNlOwogfQogCitsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dl
dF9sMmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQor
eworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBwYWdlX3dhbGtfZ2V0X2wzZShy
b290LCBhZGRyKTsKKyAgICBtZm5fdCBtZm4gPSBsM2VfZ2V0X21mbihsM2Up
OworICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOworICAgIGwyX3BnZW50cnlf
dCBsMmUgPSBsMmVfZW1wdHkoKTsKKworICAgIGlmICggIShsM2VfZ2V0X2Zs
YWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fAorICAgICAgICAgKGwzZV9n
ZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BTRSkgKQorICAgICAgICByZXR1cm4g
bDJlX2VtcHR5KCk7CisKKyAgICBwZyA9IG1mbl90b19wYWdlKG1mbik7Cisg
ICAgaWYgKCAhcGFnZV9sb2NrKHBnKSApCisgICAgICAgIHJldHVybiBsMmVf
ZW1wdHkoKTsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAm
IFBHVF90eXBlX21hc2spID09IFBHVF9sMl9wYWdlX3RhYmxlICkKKyAgICB7
CisgICAgICAgIGwyX3BnZW50cnlfdCAqbDJ0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDJlID0gbDJ0W2wyX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwydCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwyZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCA4NDU1NTZlYjZkLi41MDRmM2ZkYzE0
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01ODUsNyArNTg1LDkgQEAg
dm9pZCBhdWRpdF9kb21haW5zKHZvaWQpOwogdm9pZCBtYWtlX2NyMyhzdHJ1
Y3QgdmNwdSAqdiwgbWZuX3QgbWZuKTsKIHZvaWQgdXBkYXRlX2NyMyhzdHJ1
Y3QgdmNwdSAqdik7CiBpbnQgdmNwdV9kZXN0cm95X3BhZ2V0YWJsZXMoc3Ry
dWN0IHZjcHUgKik7CisKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNw
dSAqdiwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wyX3BnZW50cnlfdCBwYWdl
X3dhbGtfZ2V0X2wyZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25n
IGFkZHIpOwogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNzdGF0ZSh2b2lkKTsK
IAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0005-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0005-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBn
dWVzdF9nZXRfZWZmX2tlcm5fbDFlKCkKCkZpcnN0IG9mIGFsbCBkcm9wIGd1
ZXN0X2dldF9lZmZfbDFlKCkgZW50aXJlbHkgLSB0aGVyZSdzIG5vIGFjdHVh
bCB1c2VyCm9mIGl0OiBwdl9yb19wYWdlX2ZhdWx0KCkgaGFzIGEgZ3Vlc3Rf
a2VybmVsX21vZGUoKSBjb25kaXRpb25hbCBhcm91bmQKaXRzIG9ubHkgY2Fs
bCBzaXRlLgoKVGhlbiByZXBsYWNlIHRoZSBsaW5lYXIgTDEgdGFibGUgYWNj
ZXNzIGJ5IGFuIGFjdHVhbCBwYWdlIHdhbGsuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4gPGphbm5oQGdvb2ds
ZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5k
dW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVu
L2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwppbmRl
eCBhY2ViZjllOTU3Li43NjI0NDQ3MjQ2IDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwpAQCAt
NTksMjcgKzU5LDYgQEAgbDFfcGdlbnRyeV90ICptYXBfZ3Vlc3RfbDFlKHVu
c2lnbmVkIGxvbmcgbGluZWFyLCBtZm5fdCAqZ2wxbWZuKQogfQogCiAvKgot
ICogUmVhZCB0aGUgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgYWRkcmVz
cywgZnJvbSB0aGUga2VybmVsLW1vZGUKLSAqIHBhZ2UgdGFibGVzLgotICov
Ci1zdGF0aWMgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9sMWUo
dW5zaWduZWQgbG9uZyBsaW5lYXIpCi17Ci0gICAgc3RydWN0IHZjcHUgKmN1
cnIgPSBjdXJyZW50OwotICAgIGNvbnN0IGJvb2wgdXNlcl9tb2RlID0gIShj
dXJyLT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpOwotICAgIGwxX3Bn
ZW50cnlfdCBsMWU7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0gICAgICAg
IHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIGwxZSA9IGd1ZXN0X2dl
dF9lZmZfbDFlKGxpbmVhcik7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0g
ICAgICAgIHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIHJldHVybiBs
MWU7Ci19Ci0KLS8qCiAgKiBNYXAgYSBndWVzdCdzIExEVCBwYWdlIChjb3Zl
cmluZyB0aGUgYnl0ZSBhdCBAb2Zmc2V0IGZyb20gc3RhcnQgb2YgdGhlIExE
VCkKICAqIGludG8gWGVuJ3MgdmlydHVhbCByYW5nZS4gIFJldHVybnMgdHJ1
ZSBpZiB0aGUgbWFwcGluZyBjaGFuZ2VkLCBmYWxzZQogICogb3RoZXJ3aXNl
LgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmggYi94ZW4vYXJj
aC94ODYvcHYvbW0uaAppbmRleCA5NzYyMDliYTRjLi5jYzRlZTFhZmZiIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uaAorKysgYi94ZW4vYXJj
aC94ODYvcHYvbW0uaApAQCAtNSwxOSArNSwxOSBAQCBsMV9wZ2VudHJ5X3Qg
Km1hcF9ndWVzdF9sMWUodW5zaWduZWQgbG9uZyBsaW5lYXIsIG1mbl90ICpn
bDFtZm4pOwogCiBpbnQgbmV3X2d1ZXN0X2NyMyhtZm5fdCBtZm4pOwogCi0v
KiBSZWFkIGEgUFYgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgbGluZWFy
IGFkZHJlc3MuICovCi1zdGF0aWMgaW5saW5lIGwxX3BnZW50cnlfdCBndWVz
dF9nZXRfZWZmX2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhcikKKy8qCisgKiBS
ZWFkIHRoZSBndWVzdCdzIGwxZSB0aGF0IG1hcHMgdGhpcyBhZGRyZXNzLCBm
cm9tIHRoZSBrZXJuZWwtbW9kZQorICogcGFnZSB0YWJsZXMuCisgKi8KK3N0
YXRpYyBpbmxpbmUgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9s
MWUodW5zaWduZWQgbG9uZyBsaW5lYXIpCiB7Ci0gICAgbDFfcGdlbnRyeV90
IGwxZTsKKyAgICBsMV9wZ2VudHJ5X3QgbDFlID0gbDFlX2VtcHR5KCk7CiAK
ICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShjdXJyZW50LT5k
b21haW4pKTsKICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX2V4dGVybmFsKGN1
cnJlbnQtPmRvbWFpbikpOwogCi0gICAgaWYgKCB1bmxpa2VseSghX19hZGRy
X29rKGxpbmVhcikpIHx8Ci0gICAgICAgICBfX2NvcHlfZnJvbV91c2VyKCZs
MWUsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMV90
YWJsZVtsMV9saW5lYXJfb2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICBzaXplb2YobDFfcGdlbnRyeV90KSkgKQotICAgICAg
ICBsMWUgPSBsMWVfZW1wdHkoKTsKKyAgICBpZiAoIGxpa2VseShfX2FkZHJf
b2sobGluZWFyKSkgKQorICAgICAgICBsMWUgPSBwYWdlX3dhbGtfZ2V0X2wx
ZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwogCiAgICAg
cmV0dXJuIGwxZTsKIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9y
by1wYWdlLWZhdWx0LmMgYi94ZW4vYXJjaC94ODYvcHYvcm8tcGFnZS1mYXVs
dC5jCmluZGV4IGU3YTcxNzlkZGEuLmViYWMzZjQ3NmEgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9wdi9yby1wYWdlLWZhdWx0LmMKKysrIGIveGVuL2Fy
Y2gveDg2L3B2L3JvLXBhZ2UtZmF1bHQuYwpAQCAtMzYwLDcgKzM2MCw3IEBA
IGludCBwdl9yb19wYWdlX2ZhdWx0KHVuc2lnbmVkIGxvbmcgYWRkciwgc3Ry
dWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCiAgICAgYm9vbCBtbWlvX3JvOwog
CiAgICAgLyogQXR0ZW1wdCB0byByZWFkIHRoZSBQVEUgdGhhdCBtYXBzIHRo
ZSBWQSBiZWluZyBhY2Nlc3NlZC4gKi8KLSAgICBwdGUgPSBndWVzdF9nZXRf
ZWZmX2wxZShhZGRyKTsKKyAgICBwdGUgPSBndWVzdF9nZXRfZWZmX2tlcm5f
bDFlKGFkZHIpOwogCiAgICAgLyogV2UgYXJlIG9ubHkgbG9va2luZyBmb3Ig
cmVhZC1vbmx5IG1hcHBpbmdzICovCiAgICAgaWYgKCAoKGwxZV9nZXRfZmxh
Z3MocHRlKSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUlcpKSAhPSBfUEFH
RV9QUkVTRU5UKSApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0
L21tLmMgYi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggMDYyMDY5
YTkwOC4uN2RiZmFiZjI2NyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl82NC9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAt
MTI4LDYgKzEyOCw2MiBAQCBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9s
MmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQogICAg
IHJldHVybiBsMmU7CiB9CiAKKy8qCisgKiBGb3Igbm93IG5vICJzZXRfYWNj
ZXNzZWQiIHBhcmFtZXRlciwgYXMgYWxsIGNhbGxlcnMgd2FudCBpdCBzZXQg
dG8gdHJ1ZS4KKyAqIEZvciBub3cgYWxzbyBubyAic2V0X2RpcnR5IiBwYXJh
bWV0ZXIsIGFzIGFsbCBjYWxsZXJzIGRlYWwgd2l0aCByL28KKyAqIG1hcHBp
bmdzLCBhbmQgd2UgZG9uJ3Qgd2FudCB0byBzZXQgdGhlIGRpcnR5IGJpdCB0
aGVyZSAoY29uZmxpY3RzIHdpdGgKKyAqIENFVC1TUykuIEhvd2V2ZXIsIGFz
IHRoZXJlIGFyZSBDUFVzIHdoaWNoIG1heSBzZXQgdGhlIGRpcnR5IGJpdCBv
biByL28KKyAqIFBURXMsIHRoZSBsb2dpYyBiZWxvdyB0b2xlcmF0ZXMgdGhl
IGJpdCBiZWNvbWluZyBzZXQgImJlaGluZCBvdXIgYmFja3MiLgorICovCits
MV9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMWUocGFnZXRhYmxlX3Qgcm9v
dCwgdW5zaWduZWQgbG9uZyBhZGRyKQoreworICAgIGwyX3BnZW50cnlfdCBs
MmUgPSBwYWdlX3dhbGtfZ2V0X2wyZShyb290LCBhZGRyKTsKKyAgICBtZm5f
dCBtZm4gPSBsMmVfZ2V0X21mbihsMmUpOworICAgIHN0cnVjdCBwYWdlX2lu
Zm8gKnBnOworICAgIGwxX3BnZW50cnlfdCBsMWUgPSBsMWVfZW1wdHkoKTsK
KworICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVT
RU5UKSB8fAorICAgICAgICAgKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdF
X1BTRSkgKQorICAgICAgICByZXR1cm4gbDFlX2VtcHR5KCk7CisKKyAgICBw
ZyA9IG1mbl90b19wYWdlKG1mbik7CisgICAgaWYgKCAhcGFnZV9sb2NrKHBn
KSApCisgICAgICAgIHJldHVybiBsMWVfZW1wdHkoKTsKKworICAgIGlmICgg
KHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90eXBlX21hc2spID09IFBH
VF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAgICAgIGwxX3BnZW50cnlf
dCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdlKG1mbik7CisKKyAgICAgICAgbDFl
ID0gbDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV07CisKKyAgICAgICAgaWYg
KCAobDFlX2dldF9mbGFncyhsMWUpICYgKF9QQUdFX0FDQ0VTU0VEIHwgX1BB
R0VfUFJFU0VOVCkpID09CisgICAgICAgICAgICAgX1BBR0VfUFJFU0VOVCAp
CisgICAgICAgIHsKKyAgICAgICAgICAgIGwxX3BnZW50cnlfdCBvbDFlID0g
bDFlOworCisgICAgICAgICAgICBsMWVfYWRkX2ZsYWdzKGwxZSwgX1BBR0Vf
QUNDRVNTRUQpOworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAqIEJl
c3QgZWZmb3J0IG9ubHk7IHdpdGggdGhlIGxvY2sgaGVsZCB0aGUgcGFnZSBz
aG91bGRuJ3QKKyAgICAgICAgICAgICAqIGNoYW5nZSBhbnl3YXksIGV4Y2Vw
dCBmb3IgdGhlIGRpcnR5IGJpdCB0byBwZXJoYXBzIGJlY29tZSBzZXQuCisg
ICAgICAgICAgICAgKi8KKyAgICAgICAgICAgIHdoaWxlICggY21weGNoZygm
bDFlX2dldF9pbnRwdGUobDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV0pLAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGwxZV9nZXRfaW50cHRlKG9s
MWUpLCBsMWVfZ2V0X2ludHB0ZShsMWUpKSAhPQorICAgICAgICAgICAgICAg
ICAgICBsMWVfZ2V0X2ludHB0ZShvbDFlKSAmJgorICAgICAgICAgICAgICAg
ICAgICAhKGwxZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX0RJUlRZKSApCisg
ICAgICAgICAgICB7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9mbGFncyhv
bDFlLCBfUEFHRV9ESVJUWSk7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9m
bGFncyhsMWUsIF9QQUdFX0RJUlRZKTsKKyAgICAgICAgICAgIH0KKyAgICAg
ICAgfQorCisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwxZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCA1MDRmM2ZkYzE0Li4wYWVkNGMzYjc3
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01ODgsNiArNTg4LDcgQEAg
aW50IHZjcHVfZGVzdHJveV9wYWdldGFibGVzKHN0cnVjdCB2Y3B1ICopOwog
CiB2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcik7CiBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMmUo
cGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wxX3Bn
ZW50cnlfdCBwYWdlX3dhbGtfZ2V0X2wxZShwYWdldGFibGVfdCByb290LCB1
bnNpZ25lZCBsb25nIGFkZHIpOwogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNz
dGF0ZSh2b2lkKTsKIAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0006-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0006-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIHRvcCBsZXZlbCBsaW5lYXIgcGFnZSB0
YWJsZXMgaW4KIHssdW59bWFwX2RvbWFpbl9wYWdlKCkKCk1vdmUgdGhlIHBh
Z2UgdGFibGUgcmVjdXJzaW9uIHR3byBsZXZlbHMgZG93bi4gVGhpcyBlbnRh
aWxzIGF2b2lkaW5nCnRvIGZyZWUgdGhlIHJlY3Vyc2l2ZSBtYXBwaW5nIHBy
ZW1hdHVyZWx5IGluIGZyZWVfcGVyZG9tYWluX21hcHBpbmdzKCkuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4g
PGphbm5oQGdvb2dsZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbl9wYWdlLmMgYi94ZW4vYXJj
aC94ODYvZG9tYWluX3BhZ2UuYwppbmRleCA0YTA3Y2ZiMThlLi42NjBiZDA2
YWFmIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwor
KysgYi94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwpAQCAtNjUsNyArNjUs
OCBAQCB2b2lkIF9faW5pdCBtYXBjYWNoZV9vdmVycmlkZV9jdXJyZW50KHN0
cnVjdCB2Y3B1ICp2KQogI2RlZmluZSBtYXBjYWNoZV9sMl9lbnRyeShlKSAo
KGUpID4+IFBBR0VUQUJMRV9PUkRFUikKICNkZWZpbmUgTUFQQ0FDSEVfTDJf
RU5UUklFUyAobWFwY2FjaGVfbDJfZW50cnkoTUFQQ0FDSEVfRU5UUklFUyAt
IDEpICsgMSkKICNkZWZpbmUgTUFQQ0FDSEVfTDFFTlQoaWR4KSBcCi0gICAg
X19saW5lYXJfbDFfdGFibGVbbDFfbGluZWFyX29mZnNldChNQVBDQUNIRV9W
SVJUX1NUQVJUICsgcGZuX3RvX3BhZGRyKGlkeCkpXQorICAgICgobDFfcGdl
bnRyeV90ICopKE1BUENBQ0hFX1ZJUlRfU1RBUlQgfCBcCisgICAgICAgICAg
ICAgICAgICAgICAgKChMMl9QQUdFVEFCTEVfRU5UUklFUyAtIDEpIDw8IEwy
X1BBR0VUQUJMRV9TSElGVCkpKVtpZHhdCiAKIHZvaWQgKm1hcF9kb21haW5f
cGFnZShtZm5fdCBtZm4pCiB7CkBAIC0yMzUsNiArMjM2LDcgQEAgaW50IG1h
cGNhY2hlX2RvbWFpbl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiB7CiAgICAg
c3RydWN0IG1hcGNhY2hlX2RvbWFpbiAqZGNhY2hlID0gJmQtPmFyY2gucHYu
bWFwY2FjaGU7CiAgICAgdW5zaWduZWQgaW50IGJpdG1hcF9wYWdlczsKKyAg
ICBpbnQgcmM7CiAKICAgICBBU1NFUlQoaXNfcHZfZG9tYWluKGQpKTsKIApA
QCAtMjQzLDggKzI0NSwxMCBAQCBpbnQgbWFwY2FjaGVfZG9tYWluX2luaXQo
c3RydWN0IGRvbWFpbiAqZCkKICAgICAgICAgcmV0dXJuIDA7CiAjZW5kaWYK
IAorICAgIEJVSUxEX0JVR19PTihNQVBDQUNIRV9WSVJUX1NUQVJUICYgKCgx
IDw8IEwzX1BBR0VUQUJMRV9TSElGVCkgLSAxKSk7CiAgICAgQlVJTERfQlVH
X09OKE1BUENBQ0hFX1ZJUlRfRU5EICsgUEFHRV9TSVpFICogKDMgKwotICAg
ICAgICAgICAgICAgICAyICogUEZOX1VQKEJJVFNfVE9fTE9OR1MoTUFQQ0FD
SEVfRU5UUklFUykgKiBzaXplb2YobG9uZykpKSA+CisgICAgICAgICAgICAg
ICAgIDIgKiBQRk5fVVAoQklUU19UT19MT05HUyhNQVBDQUNIRV9FTlRSSUVT
KSAqIHNpemVvZihsb25nKSkpICsKKyAgICAgICAgICAgICAgICAgKDFVIDw8
IEwyX1BBR0VUQUJMRV9TSElGVCkgPgogICAgICAgICAgICAgICAgICBNQVBD
QUNIRV9WSVJUX1NUQVJUICsgKFBFUkRPTUFJTl9TTE9UX01CWVRFUyA8PCAy
MCkpOwogICAgIGJpdG1hcF9wYWdlcyA9IFBGTl9VUChCSVRTX1RPX0xPTkdT
KE1BUENBQ0hFX0VOVFJJRVMpICogc2l6ZW9mKGxvbmcpKTsKICAgICBkY2Fj
aGUtPmludXNlID0gKHZvaWQgKilNQVBDQUNIRV9WSVJUX0VORCArIFBBR0Vf
U0laRTsKQEAgLTI1Myw5ICsyNTcsMjUgQEAgaW50IG1hcGNhY2hlX2RvbWFp
bl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiAKICAgICBzcGluX2xvY2tfaW5p
dCgmZGNhY2hlLT5sb2NrKTsKIAotICAgIHJldHVybiBjcmVhdGVfcGVyZG9t
YWluX21hcHBpbmcoZCwgKHVuc2lnbmVkIGxvbmcpZGNhY2hlLT5pbnVzZSwK
LSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIDIgKiBiaXRt
YXBfcGFnZXMgKyAxLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgTklMKGwxX3BnZW50cnlfdCAqKSwgTlVMTCk7CisgICAgcmMgPSBj
cmVhdGVfcGVyZG9tYWluX21hcHBpbmcoZCwgKHVuc2lnbmVkIGxvbmcpZGNh
Y2hlLT5pbnVzZSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAyICogYml0bWFwX3BhZ2VzICsgMSwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBOSUwobDFfcGdlbnRyeV90ICopLCBOVUxMKTsKKyAg
ICBpZiAoICFyYyApCisgICAgeworICAgICAgICAvKgorICAgICAgICAgKiBJ
bnN0YWxsIG1hcHBpbmcgb2Ygb3VyIEwyIHRhYmxlIGludG8gaXRzIG93biBs
YXN0IHNsb3QsIGZvciBlYXN5CisgICAgICAgICAqIGFjY2VzcyB0byB0aGUg
TDEgZW50cmllcyB2aWEgTUFQQ0FDSEVfTDFFTlQoKS4KKyAgICAgICAgICov
CisgICAgICAgIGwzX3BnZW50cnlfdCAqbDN0ID0gX19tYXBfZG9tYWluX3Bh
Z2UoZC0+YXJjaC5wZXJkb21haW5fbDNfcGcpOworICAgICAgICBsM19wZ2Vu
dHJ5X3QgbDNlID0gbDN0W2wzX3RhYmxlX29mZnNldChNQVBDQUNIRV9WSVJU
X0VORCldOworICAgICAgICBsMl9wZ2VudHJ5X3QgKmwydCA9IG1hcF9sMnRf
ZnJvbV9sM2UobDNlKTsKKworICAgICAgICBsMmVfZ2V0X2ludHB0ZShsMnRb
TDJfUEFHRVRBQkxFX0VOVFJJRVMgLSAxXSkgPSBsM2VfZ2V0X2ludHB0ZShs
M2UpOworICAgICAgICB1bm1hcF9kb21haW5fcGFnZShsMnQpOworICAgICAg
ICB1bm1hcF9kb21haW5fcGFnZShsM3QpOworICAgIH0KKworICAgIHJldHVy
biByYzsKIH0KIAogaW50IG1hcGNhY2hlX3ZjcHVfaW5pdChzdHJ1Y3QgdmNw
dSAqdikKQEAgLTM0Niw3ICszNjYsNyBAQCBtZm5fdCBkb21haW5fcGFnZV9t
YXBfdG9fbWZuKGNvbnN0IHZvaWQgKnB0cikKICAgICBlbHNlCiAgICAgewog
ICAgICAgICBBU1NFUlQodmEgPj0gTUFQQ0FDSEVfVklSVF9TVEFSVCAmJiB2
YSA8IE1BUENBQ0hFX1ZJUlRfRU5EKTsKLSAgICAgICAgcGwxZSA9ICZfX2xp
bmVhcl9sMV90YWJsZVtsMV9saW5lYXJfb2Zmc2V0KHZhKV07CisgICAgICAg
IHBsMWUgPSAmTUFQQ0FDSEVfTDFFTlQoUEZOX0RPV04odmEgLSBNQVBDQUNI
RV9WSVJUX1NUQVJUKSk7CiAgICAgfQogCiAgICAgcmV0dXJuIGwxZV9nZXRf
bWZuKCpwbDFlKTsKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tbS5jIGIv
eGVuL2FyY2gveDg2L21tLmMKaW5kZXggYjRjOTBiZDA1NC4uZDIzNGUxOTFh
NiAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tLmMKQEAgLTYwNzEsNiArNjA3MSwxMCBAQCB2b2lkIGZyZWVf
cGVyZG9tYWluX21hcHBpbmdzKHN0cnVjdCBkb21haW4gKmQpCiAgICAgICAg
ICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgcGFnZV9p
bmZvICpsMXBnID0gbDJlX2dldF9wYWdlKGwydGFiW2pdKTsKIAorICAgICAg
ICAgICAgICAgICAgICAvKiBtYXBjYWNoZV9kb21haW5faW5pdCgpIGluc3Rh
bGxzIGEgcmVjdXJzaXZlIGVudHJ5LiAqLworICAgICAgICAgICAgICAgICAg
ICBpZiAoIGwxcGcgPT0gbDJwZyApCisgICAgICAgICAgICAgICAgICAgICAg
ICBjb250aW51ZTsKKwogICAgICAgICAgICAgICAgICAgICBpZiAoIGwyZV9n
ZXRfZmxhZ3MobDJ0YWJbal0pICYgX1BBR0VfQVZBSUwwICkKICAgICAgICAg
ICAgICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICAgICAgbDFfcGdl
bnRyeV90ICpsMXRhYiA9IF9fbWFwX2RvbWFpbl9wYWdlKGwxcGcpOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.12/0007-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Disposition: attachment;
 filename="xsa286-4.12/0007-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHJlc3RyaWN0IHVzZSBvZiBsaW5lYXIgcGFnZSB0YWJsZXMg
dG8gc2hhZG93IG1vZGUgY29kZQoKT3RoZXIgY29kZSBkb2VzIG5vdCByZXF1
aXJlIHRoZW0gdG8gYmUgc2V0IHVwIGFueW1vcmUsIHNvIHJlc3RyaWN0IHdo
ZW4KdG8gcG9wdWxhdGUgdGhlIHJlc3BlY3RpdmUgTDQgc2xvdCBhbmQgcmVk
dWNlIHZpc2liaWxpdHkgb2YgdGhlCmFjY2Vzc29ycy4KCldoaWxlIHdpdGgg
dGhlIHJlbW92YWwgb2YgYWxsIHVzZXMgdGhlIHZ1bG5lcmFiaWxpdHkgaXMg
YWN0dWFsbHkgZml4ZWQsCnJlbW92aW5nIHRoZSBjcmVhdGlvbiBvZiB0aGUg
bGluZWFyIG1hcHBpbmcgYWRkcyBhbiBleHRyYSBsYXllciBvZgpwcm90ZWN0
aW9uLiBTaW1pbGFybHkgcmVkdWNpbmcgdmlzaWJpbGl0eSBvZiB0aGUgYWNj
ZXNzb3JzIG1vc3RseQplbGltaW5hdGVzIHRoZSByaXNrIG9mIHVuZHVlIHJl
LWludHJvZHVjdGlvbiBvZiB1c2VzIG9mIHRoZSBsaW5lYXIKbWFwcGluZ3Mu
CgpUaGlzIGlzIChub3Qgc3RyaWN0bHkpIHBhcnQgb2YgWFNBLTI4Ni4KClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2
L21tLmMgYi94ZW4vYXJjaC94ODYvbW0uYwppbmRleCBkMjM0ZTE5MWE2Li41
MTgyNmVjYjE0IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysg
Yi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtMTc4Niw5ICsxNzg2LDEwIEBAIHZv
aWQgaW5pdF94ZW5fbDRfc2xvdHMobDRfcGdlbnRyeV90ICpsNHQsIG1mbl90
IGw0bWZuLAogICAgIGw0dFtsNF90YWJsZV9vZmZzZXQoUENJX01DRkdfVklS
VF9TVEFSVCldID0KICAgICAgICAgaWRsZV9wZ190YWJsZVtsNF90YWJsZV9v
ZmZzZXQoUENJX01DRkdfVklSVF9TVEFSVCldOwogCi0gICAgLyogU2xvdCAy
NTg6IFNlbGYgbGluZWFyIG1hcHBpbmdzLiAqLworICAgIC8qIFNsb3QgMjU4
OiBTZWxmIGxpbmVhciBtYXBwaW5ncyAoc2hhZG93IHB0IG9ubHkpLiAqLwog
ICAgIEFTU0VSVCghbWZuX2VxKGw0bWZuLCBJTlZBTElEX01GTikpOwogICAg
IGw0dFtsNF90YWJsZV9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQpXSA9
CisgICAgICAgICFzaGFkb3dfbW9kZV9leHRlcm5hbChkKSA/IGw0ZV9lbXB0
eSgpIDoKICAgICAgICAgbDRlX2Zyb21fbWZuKGw0bWZuLCBfX1BBR0VfSFlQ
RVJWSVNPUl9SVyk7CiAKICAgICAvKiBTbG90IDI1OTogU2hhZG93IGxpbmVh
ciBtYXBwaW5ncyAoaWYgYXBwbGljYWJsZSkgLiovCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0vc2hhZG93L3ByaXZhdGUuaCBiL3hlbi9hcmNoL3g4
Ni9tbS9zaGFkb3cvcHJpdmF0ZS5oCmluZGV4IDU4MGVmM2UyOWUuLmYwODJm
MDI2NzAgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJp
dmF0ZS5oCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJpdmF0ZS5o
CkBAIC0xMzUsNiArMTM1LDE1IEBAIGVudW0gewogIyBkZWZpbmUgR1VFU1Rf
UFRFX1NJWkUgNAogI2VuZGlmCiAKKy8qIFdoZXJlIHRvIGZpbmQgZWFjaCBs
ZXZlbCBvZiB0aGUgbGluZWFyIG1hcHBpbmcgKi8KKyNkZWZpbmUgX19saW5l
YXJfbDFfdGFibGUgKChsMV9wZ2VudHJ5X3QgKikoTElORUFSX1BUX1ZJUlRf
U1RBUlQpKQorI2RlZmluZSBfX2xpbmVhcl9sMl90YWJsZSBcCisgKChsMl9w
Z2VudHJ5X3QgKikoX19saW5lYXJfbDFfdGFibGUgKyBsMV9saW5lYXJfb2Zm
c2V0KExJTkVBUl9QVF9WSVJUX1NUQVJUKSkpCisjZGVmaW5lIF9fbGluZWFy
X2wzX3RhYmxlIFwKKyAoKGwzX3BnZW50cnlfdCAqKShfX2xpbmVhcl9sMl90
YWJsZSArIGwyX2xpbmVhcl9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQp
KSkKKyNkZWZpbmUgX19saW5lYXJfbDRfdGFibGUgXAorICgobDRfcGdlbnRy
eV90ICopKF9fbGluZWFyX2wzX3RhYmxlICsgbDNfbGluZWFyX29mZnNldChM
SU5FQVJfUFRfVklSVF9TVEFSVCkpKQorCiAvKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqCiAgKiBBdWRpdGluZyByb3V0aW5lcwogICovCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMgYi94ZW4vYXJj
aC94ODYveDg2XzY0L21tLmMKaW5kZXggN2RiZmFiZjI2Ny4uNTc3OTM4YmMx
NyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCisrKyBi
L3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtODMzLDkgKzgzMyw2IEBA
IHZvaWQgX19pbml0IHBhZ2luZ19pbml0KHZvaWQpCiAKICAgICBtYWNoaW5l
X3RvX3BoeXNfbWFwcGluZ192YWxpZCA9IDE7CiAKLSAgICAvKiBTZXQgdXAg
bGluZWFyIHBhZ2UgdGFibGUgbWFwcGluZy4gKi8KLSAgICBsNGVfd3JpdGUo
JmlkbGVfcGdfdGFibGVbbDRfdGFibGVfb2Zmc2V0KExJTkVBUl9QVF9WSVJU
X1NUQVJUKV0sCi0gICAgICAgICAgICAgIGw0ZV9mcm9tX3BhZGRyKF9fcGEo
aWRsZV9wZ190YWJsZSksIF9fUEFHRV9IWVBFUlZJU09SX1JXKSk7CiAgICAg
cmV0dXJuOwogCiAgbm9tZW06CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9h
c20teDg2L2NvbmZpZy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jb25maWcu
aAppbmRleCBmZWViNDU2YmVlLi4wMWE1NmRjYWIwIDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCisrKyBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvY29uZmlnLmgKQEAgLTE5Myw3ICsxOTMsNyBAQCBleHRlcm4g
dW5zaWduZWQgY2hhciBib290X2VkaWRfaW5mb1sxMjhdOwogICovCiAjZGVm
aW5lIFBDSV9NQ0ZHX1ZJUlRfU1RBUlQgICAgIChQTUw0X0FERFIoMjU3KSkK
ICNkZWZpbmUgUENJX01DRkdfVklSVF9FTkQgICAgICAgKFBDSV9NQ0ZHX1ZJ
UlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQotLyogU2xvdCAyNTg6IGxp
bmVhciBwYWdlIHRhYmxlIChndWVzdCB0YWJsZSkuICovCisvKiBTbG90IDI1
ODogbGluZWFyIHBhZ2UgdGFibGUgKG1vbml0b3IgdGFibGUsIEhWTSBvbmx5
KS4gKi8KICNkZWZpbmUgTElORUFSX1BUX1ZJUlRfU1RBUlQgICAgKFBNTDRf
QUREUigyNTgpKQogI2RlZmluZSBMSU5FQVJfUFRfVklSVF9FTkQgICAgICAo
TElORUFSX1BUX1ZJUlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQogLyog
U2xvdCAyNTk6IGxpbmVhciBwYWdlIHRhYmxlIChzaGFkb3cgdGFibGUpLiAq
LwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9wYWdlLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaAppbmRleCBjMWU5MjkzN2MwLi5l
NzJjMjc3YjlmIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3Bh
Z2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAtMjc0
LDE5ICsyNzQsNiBAQCB2b2lkIGNvcHlfcGFnZV9zc2UyKHZvaWQgKiwgY29u
c3Qgdm9pZCAqKTsKICNkZWZpbmUgdm1hcF90b19tZm4odmEpICAgICBfbWZu
KGwxZV9nZXRfcGZuKCp2aXJ0X3RvX3hlbl9sMWUoKHVuc2lnbmVkIGxvbmcp
KHZhKSkpKQogI2RlZmluZSB2bWFwX3RvX3BhZ2UodmEpICAgIG1mbl90b19w
YWdlKHZtYXBfdG9fbWZuKHZhKSkKIAotI2VuZGlmIC8qICFkZWZpbmVkKF9f
QVNTRU1CTFlfXykgKi8KLQotLyogV2hlcmUgdG8gZmluZCBlYWNoIGxldmVs
IG9mIHRoZSBsaW5lYXIgbWFwcGluZyAqLwotI2RlZmluZSBfX2xpbmVhcl9s
MV90YWJsZSAoKGwxX3BnZW50cnlfdCAqKShMSU5FQVJfUFRfVklSVF9TVEFS
VCkpCi0jZGVmaW5lIF9fbGluZWFyX2wyX3RhYmxlIFwKLSAoKGwyX3BnZW50
cnlfdCAqKShfX2xpbmVhcl9sMV90YWJsZSArIGwxX2xpbmVhcl9vZmZzZXQo
TElORUFSX1BUX1ZJUlRfU1RBUlQpKSkKLSNkZWZpbmUgX19saW5lYXJfbDNf
dGFibGUgXAotICgobDNfcGdlbnRyeV90ICopKF9fbGluZWFyX2wyX3RhYmxl
ICsgbDJfbGluZWFyX29mZnNldChMSU5FQVJfUFRfVklSVF9TVEFSVCkpKQot
I2RlZmluZSBfX2xpbmVhcl9sNF90YWJsZSBcCi0gKChsNF9wZ2VudHJ5X3Qg
KikoX19saW5lYXJfbDNfdGFibGUgKyBsM19saW5lYXJfb2Zmc2V0KExJTkVB
Ul9QVF9WSVJUX1NUQVJUKSkpCi0KLQotI2lmbmRlZiBfX0FTU0VNQkxZX18K
IGV4dGVybiByb290X3BnZW50cnlfdCBpZGxlX3BnX3RhYmxlW1JPT1RfUEFH
RVRBQkxFX0VOVFJJRVNdOwogZXh0ZXJuIGwyX3BnZW50cnlfdCAgKmNvbXBh
dF9pZGxlX3BnX3RhYmxlX2wyOwogZXh0ZXJuIHVuc2lnbmVkIGludCAgIG0y
cF9jb21wYXRfdnN0YXJ0Owo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.13/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Disposition: attachment;
 filename="xsa286-4.13/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHNwbGl0IEw0IGFuZCBMMyBwYXJ0cyBvZiB0aGUgd2FsayBv
dXQgb2YgZG9fcGFnZV93YWxrKCkKClRoZSBMMyBvbmUgYXQgbGVhc3QgaXMg
Z29pbmcgdG8gYmUgcmUtdXNlZCBieSBhIHN1YnNlcXVlbnQgcGF0Y2gsIGFu
ZApzcGxpdHRpbmcgdGhlIEw0IG9uZSB0aGVuIGFzIHdlbGwgc2VlbXMgb25s
eSBuYXR1cmFsLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmll
d2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5j
b20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVy
M0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94ODZf
NjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCBkYjRm
MDM1ZDhkLi5iMTU4MmI1NmZiIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
eDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCkBA
IC00NCwyNiArNDQsNDcgQEAgdW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkg
bTJwX2NvbXBhdF92c3RhcnQgPSBfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRf
U1RBUlQ7CiAKIGwyX3BnZW50cnlfdCAqY29tcGF0X2lkbGVfcGdfdGFibGVf
bDI7CiAKLXZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwgdW5z
aWduZWQgbG9uZyBhZGRyKQorc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbih2LT5hcmNoLmd1ZXN0X3RhYmxlKTsKLSAgICBsNF9wZ2VudHJ5X3Qg
bDRlLCAqbDR0OwotICAgIGwzX3BnZW50cnlfdCBsM2UsICpsM3Q7Ci0gICAg
bDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5X3QgbDFl
LCAqbDF0OworICAgIHVuc2lnbmVkIGxvbmcgbWZuID0gcGFnZXRhYmxlX2dl
dF9wZm4ocm9vdCk7CisgICAgbDRfcGdlbnRyeV90ICpsNHQsIGw0ZTsKIAot
ICAgIGlmICggIWlzX3B2X3ZjcHUodikgfHwgIWlzX2Nhbm9uaWNhbF9hZGRy
ZXNzKGFkZHIpICkKLSAgICAgICAgcmV0dXJuIE5VTEw7CisgICAgaWYgKCAh
aXNfY2Fub25pY2FsX2FkZHJlc3MoYWRkcikgKQorICAgICAgICByZXR1cm4g
bDRlX2VtcHR5KCk7CiAKICAgICBsNHQgPSBtYXBfZG9tYWluX3BhZ2UoX21m
bihtZm4pKTsKICAgICBsNGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIp
XTsKICAgICB1bm1hcF9kb21haW5fcGFnZShsNHQpOworCisgICAgcmV0dXJu
IGw0ZTsKK30KKworc3RhdGljIGwzX3BnZW50cnlfdCBwYWdlX3dhbGtfZ2V0
X2wzZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFkZHIpCit7
CisgICAgbDRfcGdlbnRyeV90IGw0ZSA9IHBhZ2Vfd2Fsa19nZXRfbDRlKHJv
b3QsIGFkZHIpOworICAgIGwzX3BnZW50cnlfdCAqbDN0LCBsM2U7CisKICAg
ICBpZiAoICEobDRlX2dldF9mbGFncyhsNGUpICYgX1BBR0VfUFJFU0VOVCkg
KQotICAgICAgICByZXR1cm4gTlVMTDsKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOwogCiAgICAgbDN0ID0gbWFwX2wzdF9mcm9tX2w0ZShsNGUpOwog
ICAgIGwzZSA9IGwzdFtsM190YWJsZV9vZmZzZXQoYWRkcildOwogICAgIHVu
bWFwX2RvbWFpbl9wYWdlKGwzdCk7CisKKyAgICByZXR1cm4gbDNlOworfQor
Cit2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcikKK3sKKyAgICBsM19wZ2VudHJ5X3QgbDNlOworICAgIGwy
X3BnZW50cnlfdCBsMmUsICpsMnQ7CisgICAgbDFfcGdlbnRyeV90IGwxZSwg
KmwxdDsKKyAgICB1bnNpZ25lZCBsb25nIG1mbjsKKworICAgIGlmICggIWlz
X3B2X3ZjcHUodikgKQorICAgICAgICByZXR1cm4gTlVMTDsKKworICAgIGwz
ZSA9IHBhZ2Vfd2Fsa19nZXRfbDNlKHYtPmFyY2guZ3Vlc3RfdGFibGUsIGFk
ZHIpOwogICAgIG1mbiA9IGwzZV9nZXRfcGZuKGwzZSk7CiAgICAgaWYgKCAh
KGwzZV9nZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5f
dmFsaWQoX21mbihtZm4pKSApCiAgICAgICAgIHJldHVybiBOVUxMOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.13/0002-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Disposition: attachment;
 filename="xsa286-4.13/0002-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGNoZWNrIHBhZ2UgdHlwZXMgaW4gZG9fcGFnZV93YWxrKCkK
CkZvciBwYWdlIHRhYmxlIGVudHJpZXMgcmVhZCB0byBiZSBndWFyYW50ZWVk
IHZhbGlkLCB0cmFuc2llbnRseSBsb2NraW5nCnRoZSBwYWdlcyBhbmQgdmFs
aWRhdGluZyB0aGVpciB0eXBlcyBpcyBuZWNlc3NhcnkuIE5vdGUgdGhhdCBn
dWVzdCB1c2UKb2YgbGluZWFyIHBhZ2UgdGFibGVzIGlzIGludGVudGlvbmFs
bHkgbm90IHRha2VuIGludG8gYWNjb3VudCBoZXJlLCBhcwpvcmRpbmFyeSBk
YXRhIChndWVzdCBzdGFja3MpIGNhbid0IHBvc3NpYmx5IGxpdmUgaW5zaWRl
IHBhZ2UgdGFibGVzLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWdu
ZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJl
dmlld2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJp
eC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94
ODZfNjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCBi
MTU4MmI1NmZiLi43ZDQzOTYzOWI3IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYveDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5j
CkBAIC00NiwxNSArNDYsMjkgQEAgbDJfcGdlbnRyeV90ICpjb21wYXRfaWRs
ZV9wZ190YWJsZV9sMjsKIAogc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbihyb290KTsKLSAgICBsNF9wZ2VudHJ5X3QgKmw0dCwgbDRlOworICAg
IG1mbl90IG1mbiA9IHBhZ2V0YWJsZV9nZXRfbWZuKHJvb3QpOworICAgIC8q
IGN1cnJlbnQncyByb290IHBhZ2UgdGFibGUgY2FuJ3QgZGlzYXBwZWFyIHVu
ZGVyIG91ciBmZWV0LiAqLworICAgIGJvb2wgbmVlZF9sb2NrID0gIW1mbl9l
cShtZm4sIHBhZ2V0YWJsZV9nZXRfbWZuKGN1cnJlbnQtPmFyY2guZ3Vlc3Rf
dGFibGUpKTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwZzsKKyAgICBsNF9w
Z2VudHJ5X3QgbDRlID0gbDRlX2VtcHR5KCk7CiAKICAgICBpZiAoICFpc19j
YW5vbmljYWxfYWRkcmVzcyhhZGRyKSApCiAgICAgICAgIHJldHVybiBsNGVf
ZW1wdHkoKTsKIAotICAgIGw0dCA9IG1hcF9kb21haW5fcGFnZShfbWZuKG1m
bikpOwotICAgIGw0ZSA9IGw0dFtsNF90YWJsZV9vZmZzZXQoYWRkcildOwot
ICAgIHVubWFwX2RvbWFpbl9wYWdlKGw0dCk7CisgICAgcGcgPSBtZm5fdG9f
cGFnZShtZm4pOworICAgIGlmICggbmVlZF9sb2NrICYmICFwYWdlX2xvY2so
cGcpICkKKyAgICAgICAgcmV0dXJuIGw0ZV9lbXB0eSgpOworCisgICAgaWYg
KCAocGctPnUuaW51c2UudHlwZV9pbmZvICYgUEdUX3R5cGVfbWFzaykgPT0g
UEdUX2w0X3BhZ2VfdGFibGUgKQorICAgIHsKKyAgICAgICAgbDRfcGdlbnRy
eV90ICpsNHQgPSBtYXBfZG9tYWluX3BhZ2UobWZuKTsKKworICAgICAgICBs
NGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIpXTsKKyAgICAgICAgdW5t
YXBfZG9tYWluX3BhZ2UobDR0KTsKKyAgICB9CisKKyAgICBpZiAoIG5lZWRf
bG9jayApCisgICAgICAgIHBhZ2VfdW5sb2NrKHBnKTsKIAogICAgIHJldHVy
biBsNGU7CiB9CkBAIC02MiwxNCArNzYsMjYgQEAgc3RhdGljIGw0X3BnZW50
cnlfdCBwYWdlX3dhbGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fs
a19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRk
cikKIHsKICAgICBsNF9wZ2VudHJ5X3QgbDRlID0gcGFnZV93YWxrX2dldF9s
NGUocm9vdCwgYWRkcik7Ci0gICAgbDNfcGdlbnRyeV90ICpsM3QsIGwzZTsK
KyAgICBtZm5fdCBtZm4gPSBsNGVfZ2V0X21mbihsNGUpOworICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnOworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBsM2Vf
ZW1wdHkoKTsKIAogICAgIGlmICggIShsNGVfZ2V0X2ZsYWdzKGw0ZSkgJiBf
UEFHRV9QUkVTRU5UKSApCiAgICAgICAgIHJldHVybiBsM2VfZW1wdHkoKTsK
IAotICAgIGwzdCA9IG1hcF9sM3RfZnJvbV9sNGUobDRlKTsKLSAgICBsM2Ug
PSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9kb21h
aW5fcGFnZShsM3QpOworICAgIHBnID0gbWZuX3RvX3BhZ2UobWZuKTsKKyAg
ICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOworCisgICAgaWYgKCAocGctPnUuaW51c2UudHlwZV9pbmZvICYg
UEdUX3R5cGVfbWFzaykgPT0gUEdUX2wzX3BhZ2VfdGFibGUgKQorICAgIHsK
KyAgICAgICAgbDNfcGdlbnRyeV90ICpsM3QgPSBtYXBfZG9tYWluX3BhZ2Uo
bWZuKTsKKworICAgICAgICBsM2UgPSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFk
ZHIpXTsKKyAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2UobDN0KTsKKyAgICB9
CisKKyAgICBwYWdlX3VubG9jayhwZyk7CiAKICAgICByZXR1cm4gbDNlOwog
fQpAQCAtNzcsNDQgKzEwMyw2NyBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBh
Z2Vfd2Fsa19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxv
bmcgYWRkcikKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwg
dW5zaWduZWQgbG9uZyBhZGRyKQogewogICAgIGwzX3BnZW50cnlfdCBsM2U7
Ci0gICAgbDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5
X3QgbDFlLCAqbDF0OwotICAgIHVuc2lnbmVkIGxvbmcgbWZuOworICAgIGwy
X3BnZW50cnlfdCBsMmUgPSBsMmVfZW1wdHkoKTsKKyAgICBsMV9wZ2VudHJ5
X3QgbDFlID0gbDFlX2VtcHR5KCk7CisgICAgbWZuX3QgbWZuOworICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBnOwogCiAgICAgaWYgKCAhaXNfcHZfdmNwdSh2
KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogCiAgICAgbDNlID0gcGFnZV93
YWxrX2dldF9sM2Uodi0+YXJjaC5ndWVzdF90YWJsZSwgYWRkcik7Ci0gICAg
bWZuID0gbDNlX2dldF9wZm4obDNlKTsKLSAgICBpZiAoICEobDNlX2dldF9m
bGFncyhsM2UpICYgX1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZChfbWZu
KG1mbikpICkKKyAgICBtZm4gPSBsM2VfZ2V0X21mbihsM2UpOworICAgIGlm
ICggIShsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fCAh
bWZuX3ZhbGlkKG1mbikgKQogICAgICAgICByZXR1cm4gTlVMTDsKICAgICBp
ZiAoIChsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QU0UpICkKICAgICB7
Ci0gICAgICAgIG1mbiArPSBQRk5fRE9XTihhZGRyICYgKCgxVUwgPDwgTDNf
UEFHRVRBQkxFX1NISUZUKSAtIDEpKTsKKyAgICAgICAgbWZuID0gbWZuX2Fk
ZChtZm4sIFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBMM19QQUdFVEFCTEVf
U0hJRlQpIC0gMSkpKTsKICAgICAgICAgZ290byByZXQ7CiAgICAgfQogCi0g
ICAgbDJ0ID0gbWFwX2RvbWFpbl9wYWdlKF9tZm4obWZuKSk7Ci0gICAgbDJl
ID0gbDJ0W2wyX3RhYmxlX29mZnNldChhZGRyKV07Ci0gICAgdW5tYXBfZG9t
YWluX3BhZ2UobDJ0KTsKLSAgICBtZm4gPSBsMmVfZ2V0X3BmbihsMmUpOwot
ICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVTRU5U
KSB8fCAhbWZuX3ZhbGlkKF9tZm4obWZuKSkgKQorICAgIHBnID0gbWZuX3Rv
X3BhZ2UobWZuKTsKKyAgICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAg
ICAgcmV0dXJuIE5VTEw7CisKKyAgICBpZiAoIChwZy0+dS5pbnVzZS50eXBl
X2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1RfbDJfcGFnZV90YWJsZSAp
CisgICAgeworICAgICAgICBjb25zdCBsMl9wZ2VudHJ5X3QgKmwydCA9IG1h
cF9kb21haW5fcGFnZShtZm4pOworCisgICAgICAgIGwyZSA9IGwydFtsMl90
YWJsZV9vZmZzZXQoYWRkcildOworICAgICAgICB1bm1hcF9kb21haW5fcGFn
ZShsMnQpOworICAgIH0KKworICAgIHBhZ2VfdW5sb2NrKHBnKTsKKworICAg
IG1mbiA9IGwyZV9nZXRfbWZuKGwyZSk7CisgICAgaWYgKCAhKGwyZV9nZXRf
ZmxhZ3MobDJlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFsaWQobWZu
KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogICAgIGlmICggKGwyZV9nZXRf
ZmxhZ3MobDJlKSAmIF9QQUdFX1BTRSkgKQogICAgIHsKLSAgICAgICAgbWZu
ICs9IFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBMMl9QQUdFVEFCTEVfU0hJ
RlQpIC0gMSkpOworICAgICAgICBtZm4gPSBtZm5fYWRkKG1mbiwgUEZOX0RP
V04oYWRkciAmICgoMVVMIDw8IEwyX1BBR0VUQUJMRV9TSElGVCkgLSAxKSkp
OwogICAgICAgICBnb3RvIHJldDsKICAgICB9CiAKLSAgICBsMXQgPSBtYXBf
ZG9tYWluX3BhZ2UoX21mbihtZm4pKTsKLSAgICBsMWUgPSBsMXRbbDFfdGFi
bGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9kb21haW5fcGFnZShsMXQp
OwotICAgIG1mbiA9IGwxZV9nZXRfcGZuKGwxZSk7Ci0gICAgaWYgKCAhKGwx
ZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFs
aWQoX21mbihtZm4pKSApCisgICAgcGcgPSBtZm5fdG9fcGFnZShtZm4pOwor
ICAgIGlmICggIXBhZ2VfbG9jayhwZykgKQorICAgICAgICByZXR1cm4gTlVM
TDsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90
eXBlX21hc2spID09IFBHVF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAg
ICAgIGNvbnN0IGwxX3BnZW50cnlfdCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDFlID0gbDF0W2wxX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgbWZuID0gbDFlX2dl
dF9tZm4obDFlKTsKKyAgICBpZiAoICEobDFlX2dldF9mbGFncyhsMWUpICYg
X1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZChtZm4pICkKICAgICAgICAg
cmV0dXJuIE5VTEw7CiAKICByZXQ6Ci0gICAgcmV0dXJuIG1hcF9kb21haW5f
cGFnZShfbWZuKG1mbikpICsgKGFkZHIgJiB+UEFHRV9NQVNLKTsKKyAgICBy
ZXR1cm4gbWFwX2RvbWFpbl9wYWdlKG1mbikgKyAoYWRkciAmIH5QQUdFX01B
U0spOwogfQogCiAvKgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.13/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Disposition: attachment;
 filename="xsa286-4.13/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBt
YXBfZ3Vlc3RfbDFlKCkKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBl
OiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXIt
RW5jb2Rpbmc6IDhiaXQKClJlcGxhY2UgdGhlIGxpbmVhciBMMiB0YWJsZSBh
Y2Nlc3MgYnkgYW4gYWN0dWFsIHBhZ2Ugd2Fsay4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMjg2LgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwppbmRleCAyYjBkYWRjOGRhLi5hY2ViZjllOTU3IDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwpAQCAtNDAsMTEgKzQwLDE0IEBAIGwxX3BnZW50cnlfdCAqbWFw
X2d1ZXN0X2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhciwgbWZuX3QgKmdsMW1m
bikKICAgICBpZiAoIHVubGlrZWx5KCFfX2FkZHJfb2sobGluZWFyKSkgKQog
ICAgICAgICByZXR1cm4gTlVMTDsKIAotICAgIC8qIEZpbmQgdGhpcyBsMWUg
YW5kIGl0cyBlbmNsb3NpbmcgbDFtZm4gaW4gdGhlIGxpbmVhciBtYXAuICov
Ci0gICAgaWYgKCBfX2NvcHlfZnJvbV91c2VyKCZsMmUsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMl90YWJsZVtsMl9saW5lYXJf
b2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAgICAgICAgICAgICAgICBz
aXplb2YobDJfcGdlbnRyeV90KSkgKQorICAgIGlmICggdW5saWtlbHkoIShj
dXJyZW50LT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpKSApCisgICAg
eworICAgICAgICBBU1NFUlRfVU5SRUFDSEFCTEUoKTsKICAgICAgICAgcmV0
dXJuIE5VTEw7CisgICAgfQorCisgICAgLyogRmluZCB0aGlzIGwxZSBhbmQg
aXRzIGVuY2xvc2luZyBsMW1mbi4gKi8KKyAgICBsMmUgPSBwYWdlX3dhbGtf
Z2V0X2wyZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwog
CiAgICAgLyogQ2hlY2sgZmxhZ3MgdGhhdCBpdCB3aWxsIGJlIHNhZmUgdG8g
cmVhZCB0aGUgbDFlLiAqLwogICAgIGlmICggKGwyZV9nZXRfZmxhZ3MobDJl
KSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUFNFKSkgIT0gX1BBR0VfUFJF
U0VOVCApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMg
Yi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggN2Q0Mzk2MzliNy4u
NjcwYWEzZjg5MiAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9t
bS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtMTAwLDYg
KzEwMCwzNCBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fsa19nZXRf
bDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRkcikKICAg
ICByZXR1cm4gbDNlOwogfQogCitsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dl
dF9sMmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQor
eworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBwYWdlX3dhbGtfZ2V0X2wzZShy
b290LCBhZGRyKTsKKyAgICBtZm5fdCBtZm4gPSBsM2VfZ2V0X21mbihsM2Up
OworICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOworICAgIGwyX3BnZW50cnlf
dCBsMmUgPSBsMmVfZW1wdHkoKTsKKworICAgIGlmICggIShsM2VfZ2V0X2Zs
YWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fAorICAgICAgICAgKGwzZV9n
ZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BTRSkgKQorICAgICAgICByZXR1cm4g
bDJlX2VtcHR5KCk7CisKKyAgICBwZyA9IG1mbl90b19wYWdlKG1mbik7Cisg
ICAgaWYgKCAhcGFnZV9sb2NrKHBnKSApCisgICAgICAgIHJldHVybiBsMmVf
ZW1wdHkoKTsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAm
IFBHVF90eXBlX21hc2spID09IFBHVF9sMl9wYWdlX3RhYmxlICkKKyAgICB7
CisgICAgICAgIGwyX3BnZW50cnlfdCAqbDJ0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDJlID0gbDJ0W2wyX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwydCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwyZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCAzMjBjNmNkMTk2Li5jZDNlN2VjNTAx
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01NzcsNyArNTc3LDkgQEAg
dm9pZCBhdWRpdF9kb21haW5zKHZvaWQpOwogdm9pZCBtYWtlX2NyMyhzdHJ1
Y3QgdmNwdSAqdiwgbWZuX3QgbWZuKTsKIHZvaWQgdXBkYXRlX2NyMyhzdHJ1
Y3QgdmNwdSAqdik7CiBpbnQgdmNwdV9kZXN0cm95X3BhZ2V0YWJsZXMoc3Ry
dWN0IHZjcHUgKik7CisKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNw
dSAqdiwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wyX3BnZW50cnlfdCBwYWdl
X3dhbGtfZ2V0X2wyZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25n
IGFkZHIpOwogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNzdGF0ZSh2b2lkKTsK
IAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.13/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Disposition: attachment;
 filename="xsa286-4.13/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBn
dWVzdF9nZXRfZWZmX2tlcm5fbDFlKCkKCkZpcnN0IG9mIGFsbCBkcm9wIGd1
ZXN0X2dldF9lZmZfbDFlKCkgZW50aXJlbHkgLSB0aGVyZSdzIG5vIGFjdHVh
bCB1c2VyCm9mIGl0OiBwdl9yb19wYWdlX2ZhdWx0KCkgaGFzIGEgZ3Vlc3Rf
a2VybmVsX21vZGUoKSBjb25kaXRpb25hbCBhcm91bmQKaXRzIG9ubHkgY2Fs
bCBzaXRlLgoKVGhlbiByZXBsYWNlIHRoZSBsaW5lYXIgTDEgdGFibGUgYWNj
ZXNzIGJ5IGFuIGFjdHVhbCBwYWdlIHdhbGsuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4gPGphbm5oQGdvb2ds
ZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5k
dW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVu
L2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwppbmRl
eCBhY2ViZjllOTU3Li43NjI0NDQ3MjQ2IDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwpAQCAt
NTksMjcgKzU5LDYgQEAgbDFfcGdlbnRyeV90ICptYXBfZ3Vlc3RfbDFlKHVu
c2lnbmVkIGxvbmcgbGluZWFyLCBtZm5fdCAqZ2wxbWZuKQogfQogCiAvKgot
ICogUmVhZCB0aGUgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgYWRkcmVz
cywgZnJvbSB0aGUga2VybmVsLW1vZGUKLSAqIHBhZ2UgdGFibGVzLgotICov
Ci1zdGF0aWMgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9sMWUo
dW5zaWduZWQgbG9uZyBsaW5lYXIpCi17Ci0gICAgc3RydWN0IHZjcHUgKmN1
cnIgPSBjdXJyZW50OwotICAgIGNvbnN0IGJvb2wgdXNlcl9tb2RlID0gIShj
dXJyLT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpOwotICAgIGwxX3Bn
ZW50cnlfdCBsMWU7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0gICAgICAg
IHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIGwxZSA9IGd1ZXN0X2dl
dF9lZmZfbDFlKGxpbmVhcik7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0g
ICAgICAgIHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIHJldHVybiBs
MWU7Ci19Ci0KLS8qCiAgKiBNYXAgYSBndWVzdCdzIExEVCBwYWdlIChjb3Zl
cmluZyB0aGUgYnl0ZSBhdCBAb2Zmc2V0IGZyb20gc3RhcnQgb2YgdGhlIExE
VCkKICAqIGludG8gWGVuJ3MgdmlydHVhbCByYW5nZS4gIFJldHVybnMgdHJ1
ZSBpZiB0aGUgbWFwcGluZyBjaGFuZ2VkLCBmYWxzZQogICogb3RoZXJ3aXNl
LgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmggYi94ZW4vYXJj
aC94ODYvcHYvbW0uaAppbmRleCBhMWJkNDczYjI5Li40M2QzM2ExZmQxIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uaAorKysgYi94ZW4vYXJj
aC94ODYvcHYvbW0uaApAQCAtNSwxOSArNSwxOSBAQCBsMV9wZ2VudHJ5X3Qg
Km1hcF9ndWVzdF9sMWUodW5zaWduZWQgbG9uZyBsaW5lYXIsIG1mbl90ICpn
bDFtZm4pOwogCiBpbnQgbmV3X2d1ZXN0X2NyMyhtZm5fdCBtZm4pOwogCi0v
KiBSZWFkIGEgUFYgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgbGluZWFy
IGFkZHJlc3MuICovCi1zdGF0aWMgaW5saW5lIGwxX3BnZW50cnlfdCBndWVz
dF9nZXRfZWZmX2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhcikKKy8qCisgKiBS
ZWFkIHRoZSBndWVzdCdzIGwxZSB0aGF0IG1hcHMgdGhpcyBhZGRyZXNzLCBm
cm9tIHRoZSBrZXJuZWwtbW9kZQorICogcGFnZSB0YWJsZXMuCisgKi8KK3N0
YXRpYyBpbmxpbmUgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9s
MWUodW5zaWduZWQgbG9uZyBsaW5lYXIpCiB7Ci0gICAgbDFfcGdlbnRyeV90
IGwxZTsKKyAgICBsMV9wZ2VudHJ5X3QgbDFlID0gbDFlX2VtcHR5KCk7CiAK
ICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShjdXJyZW50LT5k
b21haW4pKTsKICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX2V4dGVybmFsKGN1
cnJlbnQtPmRvbWFpbikpOwogCi0gICAgaWYgKCB1bmxpa2VseSghX19hZGRy
X29rKGxpbmVhcikpIHx8Ci0gICAgICAgICBfX2NvcHlfZnJvbV91c2VyKCZs
MWUsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMV90
YWJsZVtsMV9saW5lYXJfb2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICBzaXplb2YobDFfcGdlbnRyeV90KSkgKQotICAgICAg
ICBsMWUgPSBsMWVfZW1wdHkoKTsKKyAgICBpZiAoIGxpa2VseShfX2FkZHJf
b2sobGluZWFyKSkgKQorICAgICAgICBsMWUgPSBwYWdlX3dhbGtfZ2V0X2wx
ZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwogCiAgICAg
cmV0dXJuIGwxZTsKIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9y
by1wYWdlLWZhdWx0LmMgYi94ZW4vYXJjaC94ODYvcHYvcm8tcGFnZS1mYXVs
dC5jCmluZGV4IGE5MjBmYjVlMTUuLjJiZjQ0OTdhMTYgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9wdi9yby1wYWdlLWZhdWx0LmMKKysrIGIveGVuL2Fy
Y2gveDg2L3B2L3JvLXBhZ2UtZmF1bHQuYwpAQCAtMzU3LDcgKzM1Nyw3IEBA
IGludCBwdl9yb19wYWdlX2ZhdWx0KHVuc2lnbmVkIGxvbmcgYWRkciwgc3Ry
dWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCiAgICAgYm9vbCBtbWlvX3JvOwog
CiAgICAgLyogQXR0ZW1wdCB0byByZWFkIHRoZSBQVEUgdGhhdCBtYXBzIHRo
ZSBWQSBiZWluZyBhY2Nlc3NlZC4gKi8KLSAgICBwdGUgPSBndWVzdF9nZXRf
ZWZmX2wxZShhZGRyKTsKKyAgICBwdGUgPSBndWVzdF9nZXRfZWZmX2tlcm5f
bDFlKGFkZHIpOwogCiAgICAgLyogV2UgYXJlIG9ubHkgbG9va2luZyBmb3Ig
cmVhZC1vbmx5IG1hcHBpbmdzICovCiAgICAgaWYgKCAoKGwxZV9nZXRfZmxh
Z3MocHRlKSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUlcpKSAhPSBfUEFH
RV9QUkVTRU5UKSApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0
L21tLmMgYi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggNjcwYWEz
Zjg5Mi4uYzU2ODZlMGQyNSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl82NC9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAt
MTI4LDYgKzEyOCw2MiBAQCBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9s
MmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQogICAg
IHJldHVybiBsMmU7CiB9CiAKKy8qCisgKiBGb3Igbm93IG5vICJzZXRfYWNj
ZXNzZWQiIHBhcmFtZXRlciwgYXMgYWxsIGNhbGxlcnMgd2FudCBpdCBzZXQg
dG8gdHJ1ZS4KKyAqIEZvciBub3cgYWxzbyBubyAic2V0X2RpcnR5IiBwYXJh
bWV0ZXIsIGFzIGFsbCBjYWxsZXJzIGRlYWwgd2l0aCByL28KKyAqIG1hcHBp
bmdzLCBhbmQgd2UgZG9uJ3Qgd2FudCB0byBzZXQgdGhlIGRpcnR5IGJpdCB0
aGVyZSAoY29uZmxpY3RzIHdpdGgKKyAqIENFVC1TUykuIEhvd2V2ZXIsIGFz
IHRoZXJlIGFyZSBDUFVzIHdoaWNoIG1heSBzZXQgdGhlIGRpcnR5IGJpdCBv
biByL28KKyAqIFBURXMsIHRoZSBsb2dpYyBiZWxvdyB0b2xlcmF0ZXMgdGhl
IGJpdCBiZWNvbWluZyBzZXQgImJlaGluZCBvdXIgYmFja3MiLgorICovCits
MV9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMWUocGFnZXRhYmxlX3Qgcm9v
dCwgdW5zaWduZWQgbG9uZyBhZGRyKQoreworICAgIGwyX3BnZW50cnlfdCBs
MmUgPSBwYWdlX3dhbGtfZ2V0X2wyZShyb290LCBhZGRyKTsKKyAgICBtZm5f
dCBtZm4gPSBsMmVfZ2V0X21mbihsMmUpOworICAgIHN0cnVjdCBwYWdlX2lu
Zm8gKnBnOworICAgIGwxX3BnZW50cnlfdCBsMWUgPSBsMWVfZW1wdHkoKTsK
KworICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVT
RU5UKSB8fAorICAgICAgICAgKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdF
X1BTRSkgKQorICAgICAgICByZXR1cm4gbDFlX2VtcHR5KCk7CisKKyAgICBw
ZyA9IG1mbl90b19wYWdlKG1mbik7CisgICAgaWYgKCAhcGFnZV9sb2NrKHBn
KSApCisgICAgICAgIHJldHVybiBsMWVfZW1wdHkoKTsKKworICAgIGlmICgg
KHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90eXBlX21hc2spID09IFBH
VF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAgICAgIGwxX3BnZW50cnlf
dCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdlKG1mbik7CisKKyAgICAgICAgbDFl
ID0gbDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV07CisKKyAgICAgICAgaWYg
KCAobDFlX2dldF9mbGFncyhsMWUpICYgKF9QQUdFX0FDQ0VTU0VEIHwgX1BB
R0VfUFJFU0VOVCkpID09CisgICAgICAgICAgICAgX1BBR0VfUFJFU0VOVCAp
CisgICAgICAgIHsKKyAgICAgICAgICAgIGwxX3BnZW50cnlfdCBvbDFlID0g
bDFlOworCisgICAgICAgICAgICBsMWVfYWRkX2ZsYWdzKGwxZSwgX1BBR0Vf
QUNDRVNTRUQpOworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAqIEJl
c3QgZWZmb3J0IG9ubHk7IHdpdGggdGhlIGxvY2sgaGVsZCB0aGUgcGFnZSBz
aG91bGRuJ3QKKyAgICAgICAgICAgICAqIGNoYW5nZSBhbnl3YXksIGV4Y2Vw
dCBmb3IgdGhlIGRpcnR5IGJpdCB0byBwZXJoYXBzIGJlY29tZSBzZXQuCisg
ICAgICAgICAgICAgKi8KKyAgICAgICAgICAgIHdoaWxlICggY21weGNoZygm
bDFlX2dldF9pbnRwdGUobDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV0pLAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGwxZV9nZXRfaW50cHRlKG9s
MWUpLCBsMWVfZ2V0X2ludHB0ZShsMWUpKSAhPQorICAgICAgICAgICAgICAg
ICAgICBsMWVfZ2V0X2ludHB0ZShvbDFlKSAmJgorICAgICAgICAgICAgICAg
ICAgICAhKGwxZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX0RJUlRZKSApCisg
ICAgICAgICAgICB7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9mbGFncyhv
bDFlLCBfUEFHRV9ESVJUWSk7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9m
bGFncyhsMWUsIF9QQUdFX0RJUlRZKTsKKyAgICAgICAgICAgIH0KKyAgICAg
ICAgfQorCisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwxZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCBjZDNlN2VjNTAxLi44NjVkYjk5OWMx
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01ODAsNiArNTgwLDcgQEAg
aW50IHZjcHVfZGVzdHJveV9wYWdldGFibGVzKHN0cnVjdCB2Y3B1ICopOwog
CiB2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcik7CiBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMmUo
cGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wxX3Bn
ZW50cnlfdCBwYWdlX3dhbGtfZ2V0X2wxZShwYWdldGFibGVfdCByb290LCB1
bnNpZ25lZCBsb25nIGFkZHIpOwogCiBpbnQgX19zeW5jX2xvY2FsX2V4ZWNz
dGF0ZSh2b2lkKTsKIAo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.13/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Disposition: attachment;
 filename="xsa286-4.13/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIHRvcCBsZXZlbCBsaW5lYXIgcGFnZSB0
YWJsZXMgaW4KIHssdW59bWFwX2RvbWFpbl9wYWdlKCkKCk1vdmUgdGhlIHBh
Z2UgdGFibGUgcmVjdXJzaW9uIHR3byBsZXZlbHMgZG93bi4gVGhpcyBlbnRh
aWxzIGF2b2lkaW5nCnRvIGZyZWUgdGhlIHJlY3Vyc2l2ZSBtYXBwaW5nIHBy
ZW1hdHVyZWx5IGluIGZyZWVfcGVyZG9tYWluX21hcHBpbmdzKCkuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4g
PGphbm5oQGdvb2dsZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbl9wYWdlLmMgYi94ZW4vYXJj
aC94ODYvZG9tYWluX3BhZ2UuYwppbmRleCA0YTA3Y2ZiMThlLi42NjBiZDA2
YWFmIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwor
KysgYi94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwpAQCAtNjUsNyArNjUs
OCBAQCB2b2lkIF9faW5pdCBtYXBjYWNoZV9vdmVycmlkZV9jdXJyZW50KHN0
cnVjdCB2Y3B1ICp2KQogI2RlZmluZSBtYXBjYWNoZV9sMl9lbnRyeShlKSAo
KGUpID4+IFBBR0VUQUJMRV9PUkRFUikKICNkZWZpbmUgTUFQQ0FDSEVfTDJf
RU5UUklFUyAobWFwY2FjaGVfbDJfZW50cnkoTUFQQ0FDSEVfRU5UUklFUyAt
IDEpICsgMSkKICNkZWZpbmUgTUFQQ0FDSEVfTDFFTlQoaWR4KSBcCi0gICAg
X19saW5lYXJfbDFfdGFibGVbbDFfbGluZWFyX29mZnNldChNQVBDQUNIRV9W
SVJUX1NUQVJUICsgcGZuX3RvX3BhZGRyKGlkeCkpXQorICAgICgobDFfcGdl
bnRyeV90ICopKE1BUENBQ0hFX1ZJUlRfU1RBUlQgfCBcCisgICAgICAgICAg
ICAgICAgICAgICAgKChMMl9QQUdFVEFCTEVfRU5UUklFUyAtIDEpIDw8IEwy
X1BBR0VUQUJMRV9TSElGVCkpKVtpZHhdCiAKIHZvaWQgKm1hcF9kb21haW5f
cGFnZShtZm5fdCBtZm4pCiB7CkBAIC0yMzUsNiArMjM2LDcgQEAgaW50IG1h
cGNhY2hlX2RvbWFpbl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiB7CiAgICAg
c3RydWN0IG1hcGNhY2hlX2RvbWFpbiAqZGNhY2hlID0gJmQtPmFyY2gucHYu
bWFwY2FjaGU7CiAgICAgdW5zaWduZWQgaW50IGJpdG1hcF9wYWdlczsKKyAg
ICBpbnQgcmM7CiAKICAgICBBU1NFUlQoaXNfcHZfZG9tYWluKGQpKTsKIApA
QCAtMjQzLDggKzI0NSwxMCBAQCBpbnQgbWFwY2FjaGVfZG9tYWluX2luaXQo
c3RydWN0IGRvbWFpbiAqZCkKICAgICAgICAgcmV0dXJuIDA7CiAjZW5kaWYK
IAorICAgIEJVSUxEX0JVR19PTihNQVBDQUNIRV9WSVJUX1NUQVJUICYgKCgx
IDw8IEwzX1BBR0VUQUJMRV9TSElGVCkgLSAxKSk7CiAgICAgQlVJTERfQlVH
X09OKE1BUENBQ0hFX1ZJUlRfRU5EICsgUEFHRV9TSVpFICogKDMgKwotICAg
ICAgICAgICAgICAgICAyICogUEZOX1VQKEJJVFNfVE9fTE9OR1MoTUFQQ0FD
SEVfRU5UUklFUykgKiBzaXplb2YobG9uZykpKSA+CisgICAgICAgICAgICAg
ICAgIDIgKiBQRk5fVVAoQklUU19UT19MT05HUyhNQVBDQUNIRV9FTlRSSUVT
KSAqIHNpemVvZihsb25nKSkpICsKKyAgICAgICAgICAgICAgICAgKDFVIDw8
IEwyX1BBR0VUQUJMRV9TSElGVCkgPgogICAgICAgICAgICAgICAgICBNQVBD
QUNIRV9WSVJUX1NUQVJUICsgKFBFUkRPTUFJTl9TTE9UX01CWVRFUyA8PCAy
MCkpOwogICAgIGJpdG1hcF9wYWdlcyA9IFBGTl9VUChCSVRTX1RPX0xPTkdT
KE1BUENBQ0hFX0VOVFJJRVMpICogc2l6ZW9mKGxvbmcpKTsKICAgICBkY2Fj
aGUtPmludXNlID0gKHZvaWQgKilNQVBDQUNIRV9WSVJUX0VORCArIFBBR0Vf
U0laRTsKQEAgLTI1Myw5ICsyNTcsMjUgQEAgaW50IG1hcGNhY2hlX2RvbWFp
bl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiAKICAgICBzcGluX2xvY2tfaW5p
dCgmZGNhY2hlLT5sb2NrKTsKIAotICAgIHJldHVybiBjcmVhdGVfcGVyZG9t
YWluX21hcHBpbmcoZCwgKHVuc2lnbmVkIGxvbmcpZGNhY2hlLT5pbnVzZSwK
LSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIDIgKiBiaXRt
YXBfcGFnZXMgKyAxLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgTklMKGwxX3BnZW50cnlfdCAqKSwgTlVMTCk7CisgICAgcmMgPSBj
cmVhdGVfcGVyZG9tYWluX21hcHBpbmcoZCwgKHVuc2lnbmVkIGxvbmcpZGNh
Y2hlLT5pbnVzZSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAyICogYml0bWFwX3BhZ2VzICsgMSwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBOSUwobDFfcGdlbnRyeV90ICopLCBOVUxMKTsKKyAg
ICBpZiAoICFyYyApCisgICAgeworICAgICAgICAvKgorICAgICAgICAgKiBJ
bnN0YWxsIG1hcHBpbmcgb2Ygb3VyIEwyIHRhYmxlIGludG8gaXRzIG93biBs
YXN0IHNsb3QsIGZvciBlYXN5CisgICAgICAgICAqIGFjY2VzcyB0byB0aGUg
TDEgZW50cmllcyB2aWEgTUFQQ0FDSEVfTDFFTlQoKS4KKyAgICAgICAgICov
CisgICAgICAgIGwzX3BnZW50cnlfdCAqbDN0ID0gX19tYXBfZG9tYWluX3Bh
Z2UoZC0+YXJjaC5wZXJkb21haW5fbDNfcGcpOworICAgICAgICBsM19wZ2Vu
dHJ5X3QgbDNlID0gbDN0W2wzX3RhYmxlX29mZnNldChNQVBDQUNIRV9WSVJU
X0VORCldOworICAgICAgICBsMl9wZ2VudHJ5X3QgKmwydCA9IG1hcF9sMnRf
ZnJvbV9sM2UobDNlKTsKKworICAgICAgICBsMmVfZ2V0X2ludHB0ZShsMnRb
TDJfUEFHRVRBQkxFX0VOVFJJRVMgLSAxXSkgPSBsM2VfZ2V0X2ludHB0ZShs
M2UpOworICAgICAgICB1bm1hcF9kb21haW5fcGFnZShsMnQpOworICAgICAg
ICB1bm1hcF9kb21haW5fcGFnZShsM3QpOworICAgIH0KKworICAgIHJldHVy
biByYzsKIH0KIAogaW50IG1hcGNhY2hlX3ZjcHVfaW5pdChzdHJ1Y3QgdmNw
dSAqdikKQEAgLTM0Niw3ICszNjYsNyBAQCBtZm5fdCBkb21haW5fcGFnZV9t
YXBfdG9fbWZuKGNvbnN0IHZvaWQgKnB0cikKICAgICBlbHNlCiAgICAgewog
ICAgICAgICBBU1NFUlQodmEgPj0gTUFQQ0FDSEVfVklSVF9TVEFSVCAmJiB2
YSA8IE1BUENBQ0hFX1ZJUlRfRU5EKTsKLSAgICAgICAgcGwxZSA9ICZfX2xp
bmVhcl9sMV90YWJsZVtsMV9saW5lYXJfb2Zmc2V0KHZhKV07CisgICAgICAg
IHBsMWUgPSAmTUFQQ0FDSEVfTDFFTlQoUEZOX0RPV04odmEgLSBNQVBDQUNI
RV9WSVJUX1NUQVJUKSk7CiAgICAgfQogCiAgICAgcmV0dXJuIGwxZV9nZXRf
bWZuKCpwbDFlKTsKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tbS5jIGIv
eGVuL2FyY2gveDg2L21tLmMKaW5kZXggMzBkZmZiNjhlOC4uMjc5NjY0YTgz
ZSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tLmMKQEAgLTYwMzEsNiArNjAzMSwxMCBAQCB2b2lkIGZyZWVf
cGVyZG9tYWluX21hcHBpbmdzKHN0cnVjdCBkb21haW4gKmQpCiAgICAgICAg
ICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgcGFnZV9p
bmZvICpsMXBnID0gbDJlX2dldF9wYWdlKGwydGFiW2pdKTsKIAorICAgICAg
ICAgICAgICAgICAgICAvKiBtYXBjYWNoZV9kb21haW5faW5pdCgpIGluc3Rh
bGxzIGEgcmVjdXJzaXZlIGVudHJ5LiAqLworICAgICAgICAgICAgICAgICAg
ICBpZiAoIGwxcGcgPT0gbDJwZyApCisgICAgICAgICAgICAgICAgICAgICAg
ICBjb250aW51ZTsKKwogICAgICAgICAgICAgICAgICAgICBpZiAoIGwyZV9n
ZXRfZmxhZ3MobDJ0YWJbal0pICYgX1BBR0VfQVZBSUwwICkKICAgICAgICAg
ICAgICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICAgICAgbDFfcGdl
bnRyeV90ICpsMXRhYiA9IF9fbWFwX2RvbWFpbl9wYWdlKGwxcGcpOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.13/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Disposition: attachment;
 filename="xsa286-4.13/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHJlc3RyaWN0IHVzZSBvZiBsaW5lYXIgcGFnZSB0YWJsZXMg
dG8gc2hhZG93IG1vZGUgY29kZQoKT3RoZXIgY29kZSBkb2VzIG5vdCByZXF1
aXJlIHRoZW0gdG8gYmUgc2V0IHVwIGFueW1vcmUsIHNvIHJlc3RyaWN0IHdo
ZW4KdG8gcG9wdWxhdGUgdGhlIHJlc3BlY3RpdmUgTDQgc2xvdCBhbmQgcmVk
dWNlIHZpc2liaWxpdHkgb2YgdGhlCmFjY2Vzc29ycy4KCldoaWxlIHdpdGgg
dGhlIHJlbW92YWwgb2YgYWxsIHVzZXMgdGhlIHZ1bG5lcmFiaWxpdHkgaXMg
YWN0dWFsbHkgZml4ZWQsCnJlbW92aW5nIHRoZSBjcmVhdGlvbiBvZiB0aGUg
bGluZWFyIG1hcHBpbmcgYWRkcyBhbiBleHRyYSBsYXllciBvZgpwcm90ZWN0
aW9uLiBTaW1pbGFybHkgcmVkdWNpbmcgdmlzaWJpbGl0eSBvZiB0aGUgYWNj
ZXNzb3JzIG1vc3RseQplbGltaW5hdGVzIHRoZSByaXNrIG9mIHVuZHVlIHJl
LWludHJvZHVjdGlvbiBvZiB1c2VzIG9mIHRoZSBsaW5lYXIKbWFwcGluZ3Mu
CgpUaGlzIGlzIChub3Qgc3RyaWN0bHkpIHBhcnQgb2YgWFNBLTI4Ni4KClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2
L21tLmMgYi94ZW4vYXJjaC94ODYvbW0uYwppbmRleCAyNzk2NjRhODNlLi5m
YTBmODEzZDI5IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysg
Yi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtMTc1Nyw5ICsxNzU3LDEwIEBAIHZv
aWQgaW5pdF94ZW5fbDRfc2xvdHMobDRfcGdlbnRyeV90ICpsNHQsIG1mbl90
IGw0bWZuLAogICAgIGw0dFtsNF90YWJsZV9vZmZzZXQoUENJX01DRkdfVklS
VF9TVEFSVCldID0KICAgICAgICAgaWRsZV9wZ190YWJsZVtsNF90YWJsZV9v
ZmZzZXQoUENJX01DRkdfVklSVF9TVEFSVCldOwogCi0gICAgLyogU2xvdCAy
NTg6IFNlbGYgbGluZWFyIG1hcHBpbmdzLiAqLworICAgIC8qIFNsb3QgMjU4
OiBTZWxmIGxpbmVhciBtYXBwaW5ncyAoc2hhZG93IHB0IG9ubHkpLiAqLwog
ICAgIEFTU0VSVCghbWZuX2VxKGw0bWZuLCBJTlZBTElEX01GTikpOwogICAg
IGw0dFtsNF90YWJsZV9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQpXSA9
CisgICAgICAgICFzaGFkb3dfbW9kZV9leHRlcm5hbChkKSA/IGw0ZV9lbXB0
eSgpIDoKICAgICAgICAgbDRlX2Zyb21fbWZuKGw0bWZuLCBfX1BBR0VfSFlQ
RVJWSVNPUl9SVyk7CiAKICAgICAvKiBTbG90IDI1OTogU2hhZG93IGxpbmVh
ciBtYXBwaW5ncyAoaWYgYXBwbGljYWJsZSkgLiovCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0vc2hhZG93L3ByaXZhdGUuaCBiL3hlbi9hcmNoL3g4
Ni9tbS9zaGFkb3cvcHJpdmF0ZS5oCmluZGV4IDMyMTc3Nzc5MjEuLmIyMTQw
ODcxOTQgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJp
dmF0ZS5oCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJpdmF0ZS5o
CkBAIC0xMzUsNiArMTM1LDE1IEBAIGVudW0gewogIyBkZWZpbmUgR1VFU1Rf
UFRFX1NJWkUgNAogI2VuZGlmCiAKKy8qIFdoZXJlIHRvIGZpbmQgZWFjaCBs
ZXZlbCBvZiB0aGUgbGluZWFyIG1hcHBpbmcgKi8KKyNkZWZpbmUgX19saW5l
YXJfbDFfdGFibGUgKChsMV9wZ2VudHJ5X3QgKikoTElORUFSX1BUX1ZJUlRf
U1RBUlQpKQorI2RlZmluZSBfX2xpbmVhcl9sMl90YWJsZSBcCisgKChsMl9w
Z2VudHJ5X3QgKikoX19saW5lYXJfbDFfdGFibGUgKyBsMV9saW5lYXJfb2Zm
c2V0KExJTkVBUl9QVF9WSVJUX1NUQVJUKSkpCisjZGVmaW5lIF9fbGluZWFy
X2wzX3RhYmxlIFwKKyAoKGwzX3BnZW50cnlfdCAqKShfX2xpbmVhcl9sMl90
YWJsZSArIGwyX2xpbmVhcl9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQp
KSkKKyNkZWZpbmUgX19saW5lYXJfbDRfdGFibGUgXAorICgobDRfcGdlbnRy
eV90ICopKF9fbGluZWFyX2wzX3RhYmxlICsgbDNfbGluZWFyX29mZnNldChM
SU5FQVJfUFRfVklSVF9TVEFSVCkpKQorCiAvKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqCiAgKiBBdWRpdGluZyByb3V0aW5lcwogICovCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMgYi94ZW4vYXJj
aC94ODYveDg2XzY0L21tLmMKaW5kZXggYzU2ODZlMGQyNS4uZGNiMjBkMWQ5
ZCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCisrKyBi
L3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtODMzLDkgKzgzMyw2IEBA
IHZvaWQgX19pbml0IHBhZ2luZ19pbml0KHZvaWQpCiAKICAgICBtYWNoaW5l
X3RvX3BoeXNfbWFwcGluZ192YWxpZCA9IDE7CiAKLSAgICAvKiBTZXQgdXAg
bGluZWFyIHBhZ2UgdGFibGUgbWFwcGluZy4gKi8KLSAgICBsNGVfd3JpdGUo
JmlkbGVfcGdfdGFibGVbbDRfdGFibGVfb2Zmc2V0KExJTkVBUl9QVF9WSVJU
X1NUQVJUKV0sCi0gICAgICAgICAgICAgIGw0ZV9mcm9tX3BhZGRyKF9fcGEo
aWRsZV9wZ190YWJsZSksIF9fUEFHRV9IWVBFUlZJU09SX1JXKSk7CiAgICAg
cmV0dXJuOwogCiAgbm9tZW06CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9h
c20teDg2L2NvbmZpZy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jb25maWcu
aAppbmRleCA4ZDc5YTcxMzk4Li45ZDU4N2EwNzZhIDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCisrKyBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvY29uZmlnLmgKQEAgLTE5Myw3ICsxOTMsNyBAQCBleHRlcm4g
dW5zaWduZWQgY2hhciBib290X2VkaWRfaW5mb1sxMjhdOwogICovCiAjZGVm
aW5lIFBDSV9NQ0ZHX1ZJUlRfU1RBUlQgICAgIChQTUw0X0FERFIoMjU3KSkK
ICNkZWZpbmUgUENJX01DRkdfVklSVF9FTkQgICAgICAgKFBDSV9NQ0ZHX1ZJ
UlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQotLyogU2xvdCAyNTg6IGxp
bmVhciBwYWdlIHRhYmxlIChndWVzdCB0YWJsZSkuICovCisvKiBTbG90IDI1
ODogbGluZWFyIHBhZ2UgdGFibGUgKG1vbml0b3IgdGFibGUsIEhWTSBvbmx5
KS4gKi8KICNkZWZpbmUgTElORUFSX1BUX1ZJUlRfU1RBUlQgICAgKFBNTDRf
QUREUigyNTgpKQogI2RlZmluZSBMSU5FQVJfUFRfVklSVF9FTkQgICAgICAo
TElORUFSX1BUX1ZJUlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQogLyog
U2xvdCAyNTk6IGxpbmVhciBwYWdlIHRhYmxlIChzaGFkb3cgdGFibGUpLiAq
LwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9wYWdlLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaAppbmRleCBjMWU5MjkzN2MwLi5l
NzJjMjc3YjlmIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3Bh
Z2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAtMjc0
LDE5ICsyNzQsNiBAQCB2b2lkIGNvcHlfcGFnZV9zc2UyKHZvaWQgKiwgY29u
c3Qgdm9pZCAqKTsKICNkZWZpbmUgdm1hcF90b19tZm4odmEpICAgICBfbWZu
KGwxZV9nZXRfcGZuKCp2aXJ0X3RvX3hlbl9sMWUoKHVuc2lnbmVkIGxvbmcp
KHZhKSkpKQogI2RlZmluZSB2bWFwX3RvX3BhZ2UodmEpICAgIG1mbl90b19w
YWdlKHZtYXBfdG9fbWZuKHZhKSkKIAotI2VuZGlmIC8qICFkZWZpbmVkKF9f
QVNTRU1CTFlfXykgKi8KLQotLyogV2hlcmUgdG8gZmluZCBlYWNoIGxldmVs
IG9mIHRoZSBsaW5lYXIgbWFwcGluZyAqLwotI2RlZmluZSBfX2xpbmVhcl9s
MV90YWJsZSAoKGwxX3BnZW50cnlfdCAqKShMSU5FQVJfUFRfVklSVF9TVEFS
VCkpCi0jZGVmaW5lIF9fbGluZWFyX2wyX3RhYmxlIFwKLSAoKGwyX3BnZW50
cnlfdCAqKShfX2xpbmVhcl9sMV90YWJsZSArIGwxX2xpbmVhcl9vZmZzZXQo
TElORUFSX1BUX1ZJUlRfU1RBUlQpKSkKLSNkZWZpbmUgX19saW5lYXJfbDNf
dGFibGUgXAotICgobDNfcGdlbnRyeV90ICopKF9fbGluZWFyX2wyX3RhYmxl
ICsgbDJfbGluZWFyX29mZnNldChMSU5FQVJfUFRfVklSVF9TVEFSVCkpKQot
I2RlZmluZSBfX2xpbmVhcl9sNF90YWJsZSBcCi0gKChsNF9wZ2VudHJ5X3Qg
KikoX19saW5lYXJfbDNfdGFibGUgKyBsM19saW5lYXJfb2Zmc2V0KExJTkVB
Ul9QVF9WSVJUX1NUQVJUKSkpCi0KLQotI2lmbmRlZiBfX0FTU0VNQkxZX18K
IGV4dGVybiByb290X3BnZW50cnlfdCBpZGxlX3BnX3RhYmxlW1JPT1RfUEFH
RVRBQkxFX0VOVFJJRVNdOwogZXh0ZXJuIGwyX3BnZW50cnlfdCAgKmNvbXBh
dF9pZGxlX3BnX3RhYmxlX2wyOwogZXh0ZXJuIHVuc2lnbmVkIGludCAgIG0y
cF9jb21wYXRfdnN0YXJ0Owo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.14/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Disposition: attachment;
 filename="xsa286-4.14/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHNwbGl0IEw0IGFuZCBMMyBwYXJ0cyBvZiB0aGUgd2FsayBv
dXQgb2YgZG9fcGFnZV93YWxrKCkKClRoZSBMMyBvbmUgYXQgbGVhc3QgaXMg
Z29pbmcgdG8gYmUgcmUtdXNlZCBieSBhIHN1YnNlcXVlbnQgcGF0Y2gsIGFu
ZApzcGxpdHRpbmcgdGhlIEw0IG9uZSB0aGVuIGFzIHdlbGwgc2VlbXMgb25s
eSBuYXR1cmFsLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmll
d2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5j
b20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVy
M0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94ODZf
NjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCA0OGZk
NjBhODc2Li5jMjVlYjAxZTQxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
eDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCkBA
IC00NCwyNiArNDQsNDcgQEAgdW5zaWduZWQgaW50IF9fcmVhZF9tb3N0bHkg
bTJwX2NvbXBhdF92c3RhcnQgPSBfX0hZUEVSVklTT1JfQ09NUEFUX1ZJUlRf
U1RBUlQ7CiAKIGwyX3BnZW50cnlfdCAqY29tcGF0X2lkbGVfcGdfdGFibGVf
bDI7CiAKLXZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwgdW5z
aWduZWQgbG9uZyBhZGRyKQorc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbih2LT5hcmNoLmd1ZXN0X3RhYmxlKTsKLSAgICBsNF9wZ2VudHJ5X3Qg
bDRlLCAqbDR0OwotICAgIGwzX3BnZW50cnlfdCBsM2UsICpsM3Q7Ci0gICAg
bDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5X3QgbDFl
LCAqbDF0OworICAgIHVuc2lnbmVkIGxvbmcgbWZuID0gcGFnZXRhYmxlX2dl
dF9wZm4ocm9vdCk7CisgICAgbDRfcGdlbnRyeV90ICpsNHQsIGw0ZTsKIAot
ICAgIGlmICggIWlzX3B2X3ZjcHUodikgfHwgIWlzX2Nhbm9uaWNhbF9hZGRy
ZXNzKGFkZHIpICkKLSAgICAgICAgcmV0dXJuIE5VTEw7CisgICAgaWYgKCAh
aXNfY2Fub25pY2FsX2FkZHJlc3MoYWRkcikgKQorICAgICAgICByZXR1cm4g
bDRlX2VtcHR5KCk7CiAKICAgICBsNHQgPSBtYXBfZG9tYWluX3BhZ2UoX21m
bihtZm4pKTsKICAgICBsNGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIp
XTsKICAgICB1bm1hcF9kb21haW5fcGFnZShsNHQpOworCisgICAgcmV0dXJu
IGw0ZTsKK30KKworc3RhdGljIGwzX3BnZW50cnlfdCBwYWdlX3dhbGtfZ2V0
X2wzZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFkZHIpCit7
CisgICAgbDRfcGdlbnRyeV90IGw0ZSA9IHBhZ2Vfd2Fsa19nZXRfbDRlKHJv
b3QsIGFkZHIpOworICAgIGwzX3BnZW50cnlfdCAqbDN0LCBsM2U7CisKICAg
ICBpZiAoICEobDRlX2dldF9mbGFncyhsNGUpICYgX1BBR0VfUFJFU0VOVCkg
KQotICAgICAgICByZXR1cm4gTlVMTDsKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOwogCiAgICAgbDN0ID0gbWFwX2wzdF9mcm9tX2w0ZShsNGUpOwog
ICAgIGwzZSA9IGwzdFtsM190YWJsZV9vZmZzZXQoYWRkcildOwogICAgIHVu
bWFwX2RvbWFpbl9wYWdlKGwzdCk7CisKKyAgICByZXR1cm4gbDNlOworfQor
Cit2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcikKK3sKKyAgICBsM19wZ2VudHJ5X3QgbDNlOworICAgIGwy
X3BnZW50cnlfdCBsMmUsICpsMnQ7CisgICAgbDFfcGdlbnRyeV90IGwxZSwg
KmwxdDsKKyAgICB1bnNpZ25lZCBsb25nIG1mbjsKKworICAgIGlmICggIWlz
X3B2X3ZjcHUodikgKQorICAgICAgICByZXR1cm4gTlVMTDsKKworICAgIGwz
ZSA9IHBhZ2Vfd2Fsa19nZXRfbDNlKHYtPmFyY2guZ3Vlc3RfdGFibGUsIGFk
ZHIpOwogICAgIG1mbiA9IGwzZV9nZXRfcGZuKGwzZSk7CiAgICAgaWYgKCAh
KGwzZV9nZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5f
dmFsaWQoX21mbihtZm4pKSApCiAgICAgICAgIHJldHVybiBOVUxMOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.14/0002-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Disposition: attachment;
 filename="xsa286-4.14/0002-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGNoZWNrIHBhZ2UgdHlwZXMgaW4gZG9fcGFnZV93YWxrKCkK
CkZvciBwYWdlIHRhYmxlIGVudHJpZXMgcmVhZCB0byBiZSBndWFyYW50ZWVk
IHZhbGlkLCB0cmFuc2llbnRseSBsb2NraW5nCnRoZSBwYWdlcyBhbmQgdmFs
aWRhdGluZyB0aGVpciB0eXBlcyBpcyBuZWNlc3NhcnkuIE5vdGUgdGhhdCBn
dWVzdCB1c2UKb2YgbGluZWFyIHBhZ2UgdGFibGVzIGlzIGludGVudGlvbmFs
bHkgbm90IHRha2VuIGludG8gYWNjb3VudCBoZXJlLCBhcwpvcmRpbmFyeSBk
YXRhIChndWVzdCBzdGFja3MpIGNhbid0IHBvc3NpYmx5IGxpdmUgaW5zaWRl
IHBhZ2UgdGFibGVzLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWdu
ZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJl
dmlld2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJp
eC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94
ODZfNjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCBj
MjVlYjAxZTQxLi42MzA1Y2Y2MDMzIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYveDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5j
CkBAIC00NiwxNSArNDYsMjkgQEAgbDJfcGdlbnRyeV90ICpjb21wYXRfaWRs
ZV9wZ190YWJsZV9sMjsKIAogc3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFibGVfZ2V0
X3Bmbihyb290KTsKLSAgICBsNF9wZ2VudHJ5X3QgKmw0dCwgbDRlOworICAg
IG1mbl90IG1mbiA9IHBhZ2V0YWJsZV9nZXRfbWZuKHJvb3QpOworICAgIC8q
IGN1cnJlbnQncyByb290IHBhZ2UgdGFibGUgY2FuJ3QgZGlzYXBwZWFyIHVu
ZGVyIG91ciBmZWV0LiAqLworICAgIGJvb2wgbmVlZF9sb2NrID0gIW1mbl9l
cShtZm4sIHBhZ2V0YWJsZV9nZXRfbWZuKGN1cnJlbnQtPmFyY2guZ3Vlc3Rf
dGFibGUpKTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwZzsKKyAgICBsNF9w
Z2VudHJ5X3QgbDRlID0gbDRlX2VtcHR5KCk7CiAKICAgICBpZiAoICFpc19j
YW5vbmljYWxfYWRkcmVzcyhhZGRyKSApCiAgICAgICAgIHJldHVybiBsNGVf
ZW1wdHkoKTsKIAotICAgIGw0dCA9IG1hcF9kb21haW5fcGFnZShfbWZuKG1m
bikpOwotICAgIGw0ZSA9IGw0dFtsNF90YWJsZV9vZmZzZXQoYWRkcildOwot
ICAgIHVubWFwX2RvbWFpbl9wYWdlKGw0dCk7CisgICAgcGcgPSBtZm5fdG9f
cGFnZShtZm4pOworICAgIGlmICggbmVlZF9sb2NrICYmICFwYWdlX2xvY2so
cGcpICkKKyAgICAgICAgcmV0dXJuIGw0ZV9lbXB0eSgpOworCisgICAgaWYg
KCAocGctPnUuaW51c2UudHlwZV9pbmZvICYgUEdUX3R5cGVfbWFzaykgPT0g
UEdUX2w0X3BhZ2VfdGFibGUgKQorICAgIHsKKyAgICAgICAgbDRfcGdlbnRy
eV90ICpsNHQgPSBtYXBfZG9tYWluX3BhZ2UobWZuKTsKKworICAgICAgICBs
NGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIpXTsKKyAgICAgICAgdW5t
YXBfZG9tYWluX3BhZ2UobDR0KTsKKyAgICB9CisKKyAgICBpZiAoIG5lZWRf
bG9jayApCisgICAgICAgIHBhZ2VfdW5sb2NrKHBnKTsKIAogICAgIHJldHVy
biBsNGU7CiB9CkBAIC02MiwxNCArNzYsMjYgQEAgc3RhdGljIGw0X3BnZW50
cnlfdCBwYWdlX3dhbGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fs
a19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRk
cikKIHsKICAgICBsNF9wZ2VudHJ5X3QgbDRlID0gcGFnZV93YWxrX2dldF9s
NGUocm9vdCwgYWRkcik7Ci0gICAgbDNfcGdlbnRyeV90ICpsM3QsIGwzZTsK
KyAgICBtZm5fdCBtZm4gPSBsNGVfZ2V0X21mbihsNGUpOworICAgIHN0cnVj
dCBwYWdlX2luZm8gKnBnOworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBsM2Vf
ZW1wdHkoKTsKIAogICAgIGlmICggIShsNGVfZ2V0X2ZsYWdzKGw0ZSkgJiBf
UEFHRV9QUkVTRU5UKSApCiAgICAgICAgIHJldHVybiBsM2VfZW1wdHkoKTsK
IAotICAgIGwzdCA9IG1hcF9sM3RfZnJvbV9sNGUobDRlKTsKLSAgICBsM2Ug
PSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9kb21h
aW5fcGFnZShsM3QpOworICAgIHBnID0gbWZuX3RvX3BhZ2UobWZuKTsKKyAg
ICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAgICAgcmV0dXJuIGwzZV9l
bXB0eSgpOworCisgICAgaWYgKCAocGctPnUuaW51c2UudHlwZV9pbmZvICYg
UEdUX3R5cGVfbWFzaykgPT0gUEdUX2wzX3BhZ2VfdGFibGUgKQorICAgIHsK
KyAgICAgICAgbDNfcGdlbnRyeV90ICpsM3QgPSBtYXBfZG9tYWluX3BhZ2Uo
bWZuKTsKKworICAgICAgICBsM2UgPSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFk
ZHIpXTsKKyAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2UobDN0KTsKKyAgICB9
CisKKyAgICBwYWdlX3VubG9jayhwZyk7CiAKICAgICByZXR1cm4gbDNlOwog
fQpAQCAtNzcsNDQgKzEwMyw2NyBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBh
Z2Vfd2Fsa19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxv
bmcgYWRkcikKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAqdiwg
dW5zaWduZWQgbG9uZyBhZGRyKQogewogICAgIGwzX3BnZW50cnlfdCBsM2U7
Ci0gICAgbDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5
X3QgbDFlLCAqbDF0OwotICAgIHVuc2lnbmVkIGxvbmcgbWZuOworICAgIGwy
X3BnZW50cnlfdCBsMmUgPSBsMmVfZW1wdHkoKTsKKyAgICBsMV9wZ2VudHJ5
X3QgbDFlID0gbDFlX2VtcHR5KCk7CisgICAgbWZuX3QgbWZuOworICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBnOwogCiAgICAgaWYgKCAhaXNfcHZfdmNwdSh2
KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogCiAgICAgbDNlID0gcGFnZV93
YWxrX2dldF9sM2Uodi0+YXJjaC5ndWVzdF90YWJsZSwgYWRkcik7Ci0gICAg
bWZuID0gbDNlX2dldF9wZm4obDNlKTsKLSAgICBpZiAoICEobDNlX2dldF9m
bGFncyhsM2UpICYgX1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZChfbWZu
KG1mbikpICkKKyAgICBtZm4gPSBsM2VfZ2V0X21mbihsM2UpOworICAgIGlm
ICggIShsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fCAh
bWZuX3ZhbGlkKG1mbikgKQogICAgICAgICByZXR1cm4gTlVMTDsKICAgICBp
ZiAoIChsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFHRV9QU0UpICkKICAgICB7
Ci0gICAgICAgIG1mbiArPSBQRk5fRE9XTihhZGRyICYgKCgxVUwgPDwgTDNf
UEFHRVRBQkxFX1NISUZUKSAtIDEpKTsKKyAgICAgICAgbWZuID0gbWZuX2Fk
ZChtZm4sIFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBMM19QQUdFVEFCTEVf
U0hJRlQpIC0gMSkpKTsKICAgICAgICAgZ290byByZXQ7CiAgICAgfQogCi0g
ICAgbDJ0ID0gbWFwX2RvbWFpbl9wYWdlKF9tZm4obWZuKSk7Ci0gICAgbDJl
ID0gbDJ0W2wyX3RhYmxlX29mZnNldChhZGRyKV07Ci0gICAgdW5tYXBfZG9t
YWluX3BhZ2UobDJ0KTsKLSAgICBtZm4gPSBsMmVfZ2V0X3BmbihsMmUpOwot
ICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVTRU5U
KSB8fCAhbWZuX3ZhbGlkKF9tZm4obWZuKSkgKQorICAgIHBnID0gbWZuX3Rv
X3BhZ2UobWZuKTsKKyAgICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAg
ICAgcmV0dXJuIE5VTEw7CisKKyAgICBpZiAoIChwZy0+dS5pbnVzZS50eXBl
X2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1RfbDJfcGFnZV90YWJsZSAp
CisgICAgeworICAgICAgICBjb25zdCBsMl9wZ2VudHJ5X3QgKmwydCA9IG1h
cF9kb21haW5fcGFnZShtZm4pOworCisgICAgICAgIGwyZSA9IGwydFtsMl90
YWJsZV9vZmZzZXQoYWRkcildOworICAgICAgICB1bm1hcF9kb21haW5fcGFn
ZShsMnQpOworICAgIH0KKworICAgIHBhZ2VfdW5sb2NrKHBnKTsKKworICAg
IG1mbiA9IGwyZV9nZXRfbWZuKGwyZSk7CisgICAgaWYgKCAhKGwyZV9nZXRf
ZmxhZ3MobDJlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFsaWQobWZu
KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogICAgIGlmICggKGwyZV9nZXRf
ZmxhZ3MobDJlKSAmIF9QQUdFX1BTRSkgKQogICAgIHsKLSAgICAgICAgbWZu
ICs9IFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBMMl9QQUdFVEFCTEVfU0hJ
RlQpIC0gMSkpOworICAgICAgICBtZm4gPSBtZm5fYWRkKG1mbiwgUEZOX0RP
V04oYWRkciAmICgoMVVMIDw8IEwyX1BBR0VUQUJMRV9TSElGVCkgLSAxKSkp
OwogICAgICAgICBnb3RvIHJldDsKICAgICB9CiAKLSAgICBsMXQgPSBtYXBf
ZG9tYWluX3BhZ2UoX21mbihtZm4pKTsKLSAgICBsMWUgPSBsMXRbbDFfdGFi
bGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9kb21haW5fcGFnZShsMXQp
OwotICAgIG1mbiA9IGwxZV9nZXRfcGZuKGwxZSk7Ci0gICAgaWYgKCAhKGwx
ZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX1BSRVNFTlQpIHx8ICFtZm5fdmFs
aWQoX21mbihtZm4pKSApCisgICAgcGcgPSBtZm5fdG9fcGFnZShtZm4pOwor
ICAgIGlmICggIXBhZ2VfbG9jayhwZykgKQorICAgICAgICByZXR1cm4gTlVM
TDsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90
eXBlX21hc2spID09IFBHVF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAg
ICAgIGNvbnN0IGwxX3BnZW50cnlfdCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDFlID0gbDF0W2wxX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgbWZuID0gbDFlX2dl
dF9tZm4obDFlKTsKKyAgICBpZiAoICEobDFlX2dldF9mbGFncyhsMWUpICYg
X1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZChtZm4pICkKICAgICAgICAg
cmV0dXJuIE5VTEw7CiAKICByZXQ6Ci0gICAgcmV0dXJuIG1hcF9kb21haW5f
cGFnZShfbWZuKG1mbikpICsgKGFkZHIgJiB+UEFHRV9NQVNLKTsKKyAgICBy
ZXR1cm4gbWFwX2RvbWFpbl9wYWdlKG1mbikgKyAoYWRkciAmIH5QQUdFX01B
U0spOwogfQogCiAvKgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.14/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Disposition: attachment;
 filename="xsa286-4.14/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBt
YXBfZ3Vlc3RfbDFlKCkKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBl
OiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXIt
RW5jb2Rpbmc6IDhiaXQKClJlcGxhY2UgdGhlIGxpbmVhciBMMiB0YWJsZSBh
Y2Nlc3MgYnkgYW4gYWN0dWFsIHBhZ2Ugd2Fsay4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMjg2LgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwppbmRleCA1ZDRjZDAwOTQxLi43YmUwOThmNWVmIDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwpAQCAtNDAsMTEgKzQwLDE0IEBAIGwxX3BnZW50cnlfdCAqbWFw
X2d1ZXN0X2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhciwgbWZuX3QgKmdsMW1m
bikKICAgICBpZiAoIHVubGlrZWx5KCFfX2FkZHJfb2sobGluZWFyKSkgKQog
ICAgICAgICByZXR1cm4gTlVMTDsKIAotICAgIC8qIEZpbmQgdGhpcyBsMWUg
YW5kIGl0cyBlbmNsb3NpbmcgbDFtZm4gaW4gdGhlIGxpbmVhciBtYXAuICov
Ci0gICAgaWYgKCBfX2NvcHlfZnJvbV91c2VyKCZsMmUsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMl90YWJsZVtsMl9saW5lYXJf
b2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAgICAgICAgICAgICAgICBz
aXplb2YobDJfcGdlbnRyeV90KSkgKQorICAgIGlmICggdW5saWtlbHkoIShj
dXJyZW50LT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpKSApCisgICAg
eworICAgICAgICBBU1NFUlRfVU5SRUFDSEFCTEUoKTsKICAgICAgICAgcmV0
dXJuIE5VTEw7CisgICAgfQorCisgICAgLyogRmluZCB0aGlzIGwxZSBhbmQg
aXRzIGVuY2xvc2luZyBsMW1mbi4gKi8KKyAgICBsMmUgPSBwYWdlX3dhbGtf
Z2V0X2wyZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwog
CiAgICAgLyogQ2hlY2sgZmxhZ3MgdGhhdCBpdCB3aWxsIGJlIHNhZmUgdG8g
cmVhZCB0aGUgbDFlLiAqLwogICAgIGlmICggKGwyZV9nZXRfZmxhZ3MobDJl
KSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUFNFKSkgIT0gX1BBR0VfUFJF
U0VOVCApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMg
Yi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggNjMwNWNmNjAzMy4u
NzFhOGJmYzAyNCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9t
bS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtMTAwLDYg
KzEwMCwzNCBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fsa19nZXRf
bDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRkcikKICAg
ICByZXR1cm4gbDNlOwogfQogCitsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dl
dF9sMmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQor
eworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBwYWdlX3dhbGtfZ2V0X2wzZShy
b290LCBhZGRyKTsKKyAgICBtZm5fdCBtZm4gPSBsM2VfZ2V0X21mbihsM2Up
OworICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOworICAgIGwyX3BnZW50cnlf
dCBsMmUgPSBsMmVfZW1wdHkoKTsKKworICAgIGlmICggIShsM2VfZ2V0X2Zs
YWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fAorICAgICAgICAgKGwzZV9n
ZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BTRSkgKQorICAgICAgICByZXR1cm4g
bDJlX2VtcHR5KCk7CisKKyAgICBwZyA9IG1mbl90b19wYWdlKG1mbik7Cisg
ICAgaWYgKCAhcGFnZV9sb2NrKHBnKSApCisgICAgICAgIHJldHVybiBsMmVf
ZW1wdHkoKTsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAm
IFBHVF90eXBlX21hc2spID09IFBHVF9sMl9wYWdlX3RhYmxlICkKKyAgICB7
CisgICAgICAgIGwyX3BnZW50cnlfdCAqbDJ0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDJlID0gbDJ0W2wyX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwydCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwyZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCA3ZTc0OTk2MDUzLi4xMmVhODEyMzgx
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01NzksNyArNTc5LDkgQEAg
dm9pZCBhdWRpdF9kb21haW5zKHZvaWQpOwogdm9pZCBtYWtlX2NyMyhzdHJ1
Y3QgdmNwdSAqdiwgbWZuX3QgbWZuKTsKIHZvaWQgdXBkYXRlX2NyMyhzdHJ1
Y3QgdmNwdSAqdik7CiBpbnQgdmNwdV9kZXN0cm95X3BhZ2V0YWJsZXMoc3Ry
dWN0IHZjcHUgKik7CisKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNw
dSAqdiwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wyX3BnZW50cnlfdCBwYWdl
X3dhbGtfZ2V0X2wyZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25n
IGFkZHIpOwogCiAvKiBBbGxvY2F0b3IgZnVuY3Rpb25zIGZvciBYZW4gcGFn
ZXRhYmxlcy4gKi8KIHZvaWQgKmFsbG9jX3hlbl9wYWdldGFibGUodm9pZCk7
Cg==

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.14/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Disposition: attachment;
 filename="xsa286-4.14/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBn
dWVzdF9nZXRfZWZmX2tlcm5fbDFlKCkKCkZpcnN0IG9mIGFsbCBkcm9wIGd1
ZXN0X2dldF9lZmZfbDFlKCkgZW50aXJlbHkgLSB0aGVyZSdzIG5vIGFjdHVh
bCB1c2VyCm9mIGl0OiBwdl9yb19wYWdlX2ZhdWx0KCkgaGFzIGEgZ3Vlc3Rf
a2VybmVsX21vZGUoKSBjb25kaXRpb25hbCBhcm91bmQKaXRzIG9ubHkgY2Fs
bCBzaXRlLgoKVGhlbiByZXBsYWNlIHRoZSBsaW5lYXIgTDEgdGFibGUgYWNj
ZXNzIGJ5IGFuIGFjdHVhbCBwYWdlIHdhbGsuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4gPGphbm5oQGdvb2ds
ZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5k
dW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVu
L2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwppbmRl
eCA3YmUwOThmNWVmLi41ZTQwODFhZWNkIDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwpAQCAt
NTksMjcgKzU5LDYgQEAgbDFfcGdlbnRyeV90ICptYXBfZ3Vlc3RfbDFlKHVu
c2lnbmVkIGxvbmcgbGluZWFyLCBtZm5fdCAqZ2wxbWZuKQogfQogCiAvKgot
ICogUmVhZCB0aGUgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgYWRkcmVz
cywgZnJvbSB0aGUga2VybmVsLW1vZGUKLSAqIHBhZ2UgdGFibGVzLgotICov
Ci1zdGF0aWMgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9sMWUo
dW5zaWduZWQgbG9uZyBsaW5lYXIpCi17Ci0gICAgc3RydWN0IHZjcHUgKmN1
cnIgPSBjdXJyZW50OwotICAgIGNvbnN0IGJvb2wgdXNlcl9tb2RlID0gIShj
dXJyLT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpOwotICAgIGwxX3Bn
ZW50cnlfdCBsMWU7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0gICAgICAg
IHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIGwxZSA9IGd1ZXN0X2dl
dF9lZmZfbDFlKGxpbmVhcik7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0g
ICAgICAgIHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIHJldHVybiBs
MWU7Ci19Ci0KLS8qCiAgKiBNYXAgYSBndWVzdCdzIExEVCBwYWdlIChjb3Zl
cmluZyB0aGUgYnl0ZSBhdCBAb2Zmc2V0IGZyb20gc3RhcnQgb2YgdGhlIExE
VCkKICAqIGludG8gWGVuJ3MgdmlydHVhbCByYW5nZS4gIFJldHVybnMgdHJ1
ZSBpZiB0aGUgbWFwcGluZyBjaGFuZ2VkLCBmYWxzZQogICogb3RoZXJ3aXNl
LgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmggYi94ZW4vYXJj
aC94ODYvcHYvbW0uaAppbmRleCBhMWJkNDczYjI5Li40M2QzM2ExZmQxIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uaAorKysgYi94ZW4vYXJj
aC94ODYvcHYvbW0uaApAQCAtNSwxOSArNSwxOSBAQCBsMV9wZ2VudHJ5X3Qg
Km1hcF9ndWVzdF9sMWUodW5zaWduZWQgbG9uZyBsaW5lYXIsIG1mbl90ICpn
bDFtZm4pOwogCiBpbnQgbmV3X2d1ZXN0X2NyMyhtZm5fdCBtZm4pOwogCi0v
KiBSZWFkIGEgUFYgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgbGluZWFy
IGFkZHJlc3MuICovCi1zdGF0aWMgaW5saW5lIGwxX3BnZW50cnlfdCBndWVz
dF9nZXRfZWZmX2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhcikKKy8qCisgKiBS
ZWFkIHRoZSBndWVzdCdzIGwxZSB0aGF0IG1hcHMgdGhpcyBhZGRyZXNzLCBm
cm9tIHRoZSBrZXJuZWwtbW9kZQorICogcGFnZSB0YWJsZXMuCisgKi8KK3N0
YXRpYyBpbmxpbmUgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9s
MWUodW5zaWduZWQgbG9uZyBsaW5lYXIpCiB7Ci0gICAgbDFfcGdlbnRyeV90
IGwxZTsKKyAgICBsMV9wZ2VudHJ5X3QgbDFlID0gbDFlX2VtcHR5KCk7CiAK
ICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShjdXJyZW50LT5k
b21haW4pKTsKICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX2V4dGVybmFsKGN1
cnJlbnQtPmRvbWFpbikpOwogCi0gICAgaWYgKCB1bmxpa2VseSghX19hZGRy
X29rKGxpbmVhcikpIHx8Ci0gICAgICAgICBfX2NvcHlfZnJvbV91c2VyKCZs
MWUsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMV90
YWJsZVtsMV9saW5lYXJfb2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICBzaXplb2YobDFfcGdlbnRyeV90KSkgKQotICAgICAg
ICBsMWUgPSBsMWVfZW1wdHkoKTsKKyAgICBpZiAoIGxpa2VseShfX2FkZHJf
b2sobGluZWFyKSkgKQorICAgICAgICBsMWUgPSBwYWdlX3dhbGtfZ2V0X2wx
ZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwogCiAgICAg
cmV0dXJuIGwxZTsKIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9y
by1wYWdlLWZhdWx0LmMgYi94ZW4vYXJjaC94ODYvcHYvcm8tcGFnZS1mYXVs
dC5jCmluZGV4IDBlZWRiNzAwMDIuLmNlMzFkZDQwMWQgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9wdi9yby1wYWdlLWZhdWx0LmMKKysrIGIveGVuL2Fy
Y2gveDg2L3B2L3JvLXBhZ2UtZmF1bHQuYwpAQCAtMzQ5LDcgKzM0OSw3IEBA
IGludCBwdl9yb19wYWdlX2ZhdWx0KHVuc2lnbmVkIGxvbmcgYWRkciwgc3Ry
dWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCiAgICAgYm9vbCBtbWlvX3JvOwog
CiAgICAgLyogQXR0ZW1wdCB0byByZWFkIHRoZSBQVEUgdGhhdCBtYXBzIHRo
ZSBWQSBiZWluZyBhY2Nlc3NlZC4gKi8KLSAgICBwdGUgPSBndWVzdF9nZXRf
ZWZmX2wxZShhZGRyKTsKKyAgICBwdGUgPSBndWVzdF9nZXRfZWZmX2tlcm5f
bDFlKGFkZHIpOwogCiAgICAgLyogV2UgYXJlIG9ubHkgbG9va2luZyBmb3Ig
cmVhZC1vbmx5IG1hcHBpbmdzICovCiAgICAgaWYgKCAoKGwxZV9nZXRfZmxh
Z3MocHRlKSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUlcpKSAhPSBfUEFH
RV9QUkVTRU5UKSApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0
L21tLmMgYi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggNzFhOGJm
YzAyNC4uOWU4N2E1NTE3NCAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl82NC9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAt
MTI4LDYgKzEyOCw2MiBAQCBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9s
MmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQogICAg
IHJldHVybiBsMmU7CiB9CiAKKy8qCisgKiBGb3Igbm93IG5vICJzZXRfYWNj
ZXNzZWQiIHBhcmFtZXRlciwgYXMgYWxsIGNhbGxlcnMgd2FudCBpdCBzZXQg
dG8gdHJ1ZS4KKyAqIEZvciBub3cgYWxzbyBubyAic2V0X2RpcnR5IiBwYXJh
bWV0ZXIsIGFzIGFsbCBjYWxsZXJzIGRlYWwgd2l0aCByL28KKyAqIG1hcHBp
bmdzLCBhbmQgd2UgZG9uJ3Qgd2FudCB0byBzZXQgdGhlIGRpcnR5IGJpdCB0
aGVyZSAoY29uZmxpY3RzIHdpdGgKKyAqIENFVC1TUykuIEhvd2V2ZXIsIGFz
IHRoZXJlIGFyZSBDUFVzIHdoaWNoIG1heSBzZXQgdGhlIGRpcnR5IGJpdCBv
biByL28KKyAqIFBURXMsIHRoZSBsb2dpYyBiZWxvdyB0b2xlcmF0ZXMgdGhl
IGJpdCBiZWNvbWluZyBzZXQgImJlaGluZCBvdXIgYmFja3MiLgorICovCits
MV9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMWUocGFnZXRhYmxlX3Qgcm9v
dCwgdW5zaWduZWQgbG9uZyBhZGRyKQoreworICAgIGwyX3BnZW50cnlfdCBs
MmUgPSBwYWdlX3dhbGtfZ2V0X2wyZShyb290LCBhZGRyKTsKKyAgICBtZm5f
dCBtZm4gPSBsMmVfZ2V0X21mbihsMmUpOworICAgIHN0cnVjdCBwYWdlX2lu
Zm8gKnBnOworICAgIGwxX3BnZW50cnlfdCBsMWUgPSBsMWVfZW1wdHkoKTsK
KworICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVT
RU5UKSB8fAorICAgICAgICAgKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdF
X1BTRSkgKQorICAgICAgICByZXR1cm4gbDFlX2VtcHR5KCk7CisKKyAgICBw
ZyA9IG1mbl90b19wYWdlKG1mbik7CisgICAgaWYgKCAhcGFnZV9sb2NrKHBn
KSApCisgICAgICAgIHJldHVybiBsMWVfZW1wdHkoKTsKKworICAgIGlmICgg
KHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90eXBlX21hc2spID09IFBH
VF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAgICAgIGwxX3BnZW50cnlf
dCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdlKG1mbik7CisKKyAgICAgICAgbDFl
ID0gbDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV07CisKKyAgICAgICAgaWYg
KCAobDFlX2dldF9mbGFncyhsMWUpICYgKF9QQUdFX0FDQ0VTU0VEIHwgX1BB
R0VfUFJFU0VOVCkpID09CisgICAgICAgICAgICAgX1BBR0VfUFJFU0VOVCAp
CisgICAgICAgIHsKKyAgICAgICAgICAgIGwxX3BnZW50cnlfdCBvbDFlID0g
bDFlOworCisgICAgICAgICAgICBsMWVfYWRkX2ZsYWdzKGwxZSwgX1BBR0Vf
QUNDRVNTRUQpOworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAqIEJl
c3QgZWZmb3J0IG9ubHk7IHdpdGggdGhlIGxvY2sgaGVsZCB0aGUgcGFnZSBz
aG91bGRuJ3QKKyAgICAgICAgICAgICAqIGNoYW5nZSBhbnl3YXksIGV4Y2Vw
dCBmb3IgdGhlIGRpcnR5IGJpdCB0byBwZXJoYXBzIGJlY29tZSBzZXQuCisg
ICAgICAgICAgICAgKi8KKyAgICAgICAgICAgIHdoaWxlICggY21weGNoZygm
bDFlX2dldF9pbnRwdGUobDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV0pLAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGwxZV9nZXRfaW50cHRlKG9s
MWUpLCBsMWVfZ2V0X2ludHB0ZShsMWUpKSAhPQorICAgICAgICAgICAgICAg
ICAgICBsMWVfZ2V0X2ludHB0ZShvbDFlKSAmJgorICAgICAgICAgICAgICAg
ICAgICAhKGwxZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX0RJUlRZKSApCisg
ICAgICAgICAgICB7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9mbGFncyhv
bDFlLCBfUEFHRV9ESVJUWSk7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9m
bGFncyhsMWUsIF9QQUdFX0RJUlRZKTsKKyAgICAgICAgICAgIH0KKyAgICAg
ICAgfQorCisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwxZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCAxMmVhODEyMzgxLi5kYTFhNmY1NzEy
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01ODIsNiArNTgyLDcgQEAg
aW50IHZjcHVfZGVzdHJveV9wYWdldGFibGVzKHN0cnVjdCB2Y3B1ICopOwog
CiB2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcik7CiBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMmUo
cGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wxX3Bn
ZW50cnlfdCBwYWdlX3dhbGtfZ2V0X2wxZShwYWdldGFibGVfdCByb290LCB1
bnNpZ25lZCBsb25nIGFkZHIpOwogCiAvKiBBbGxvY2F0b3IgZnVuY3Rpb25z
IGZvciBYZW4gcGFnZXRhYmxlcy4gKi8KIHZvaWQgKmFsbG9jX3hlbl9wYWdl
dGFibGUodm9pZCk7Cg==

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.14/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Disposition: attachment;
 filename="xsa286-4.14/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIHRvcCBsZXZlbCBsaW5lYXIgcGFnZSB0
YWJsZXMgaW4KIHssdW59bWFwX2RvbWFpbl9wYWdlKCkKCk1vdmUgdGhlIHBh
Z2UgdGFibGUgcmVjdXJzaW9uIHR3byBsZXZlbHMgZG93bi4gVGhpcyBlbnRh
aWxzIGF2b2lkaW5nCnRvIGZyZWUgdGhlIHJlY3Vyc2l2ZSBtYXBwaW5nIHBy
ZW1hdHVyZWx5IGluIGZyZWVfcGVyZG9tYWluX21hcHBpbmdzKCkuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4g
PGphbm5oQGdvb2dsZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L2RvbWFpbl9wYWdlLmMgYi94ZW4vYXJj
aC94ODYvZG9tYWluX3BhZ2UuYwppbmRleCBiMDM3MjhlMThlLi5lZDZhMmJm
MDgxIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwor
KysgYi94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYwpAQCAtNjUsNyArNjUs
OCBAQCB2b2lkIF9faW5pdCBtYXBjYWNoZV9vdmVycmlkZV9jdXJyZW50KHN0
cnVjdCB2Y3B1ICp2KQogI2RlZmluZSBtYXBjYWNoZV9sMl9lbnRyeShlKSAo
KGUpID4+IFBBR0VUQUJMRV9PUkRFUikKICNkZWZpbmUgTUFQQ0FDSEVfTDJf
RU5UUklFUyAobWFwY2FjaGVfbDJfZW50cnkoTUFQQ0FDSEVfRU5UUklFUyAt
IDEpICsgMSkKICNkZWZpbmUgTUFQQ0FDSEVfTDFFTlQoaWR4KSBcCi0gICAg
X19saW5lYXJfbDFfdGFibGVbbDFfbGluZWFyX29mZnNldChNQVBDQUNIRV9W
SVJUX1NUQVJUICsgcGZuX3RvX3BhZGRyKGlkeCkpXQorICAgICgobDFfcGdl
bnRyeV90ICopKE1BUENBQ0hFX1ZJUlRfU1RBUlQgfCBcCisgICAgICAgICAg
ICAgICAgICAgICAgKChMMl9QQUdFVEFCTEVfRU5UUklFUyAtIDEpIDw8IEwy
X1BBR0VUQUJMRV9TSElGVCkpKVtpZHhdCiAKIHZvaWQgKm1hcF9kb21haW5f
cGFnZShtZm5fdCBtZm4pCiB7CkBAIC0yMzUsNiArMjM2LDcgQEAgaW50IG1h
cGNhY2hlX2RvbWFpbl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiB7CiAgICAg
c3RydWN0IG1hcGNhY2hlX2RvbWFpbiAqZGNhY2hlID0gJmQtPmFyY2gucHYu
bWFwY2FjaGU7CiAgICAgdW5zaWduZWQgaW50IGJpdG1hcF9wYWdlczsKKyAg
ICBpbnQgcmM7CiAKICAgICBBU1NFUlQoaXNfcHZfZG9tYWluKGQpKTsKIApA
QCAtMjQzLDggKzI0NSwxMCBAQCBpbnQgbWFwY2FjaGVfZG9tYWluX2luaXQo
c3RydWN0IGRvbWFpbiAqZCkKICAgICAgICAgcmV0dXJuIDA7CiAjZW5kaWYK
IAorICAgIEJVSUxEX0JVR19PTihNQVBDQUNIRV9WSVJUX1NUQVJUICYgKCgx
IDw8IEwzX1BBR0VUQUJMRV9TSElGVCkgLSAxKSk7CiAgICAgQlVJTERfQlVH
X09OKE1BUENBQ0hFX1ZJUlRfRU5EICsgUEFHRV9TSVpFICogKDMgKwotICAg
ICAgICAgICAgICAgICAyICogUEZOX1VQKEJJVFNfVE9fTE9OR1MoTUFQQ0FD
SEVfRU5UUklFUykgKiBzaXplb2YobG9uZykpKSA+CisgICAgICAgICAgICAg
ICAgIDIgKiBQRk5fVVAoQklUU19UT19MT05HUyhNQVBDQUNIRV9FTlRSSUVT
KSAqIHNpemVvZihsb25nKSkpICsKKyAgICAgICAgICAgICAgICAgKDFVIDw8
IEwyX1BBR0VUQUJMRV9TSElGVCkgPgogICAgICAgICAgICAgICAgICBNQVBD
QUNIRV9WSVJUX1NUQVJUICsgKFBFUkRPTUFJTl9TTE9UX01CWVRFUyA8PCAy
MCkpOwogICAgIGJpdG1hcF9wYWdlcyA9IFBGTl9VUChCSVRTX1RPX0xPTkdT
KE1BUENBQ0hFX0VOVFJJRVMpICogc2l6ZW9mKGxvbmcpKTsKICAgICBkY2Fj
aGUtPmludXNlID0gKHZvaWQgKilNQVBDQUNIRV9WSVJUX0VORCArIFBBR0Vf
U0laRTsKQEAgLTI1Myw5ICsyNTcsMjUgQEAgaW50IG1hcGNhY2hlX2RvbWFp
bl9pbml0KHN0cnVjdCBkb21haW4gKmQpCiAKICAgICBzcGluX2xvY2tfaW5p
dCgmZGNhY2hlLT5sb2NrKTsKIAotICAgIHJldHVybiBjcmVhdGVfcGVyZG9t
YWluX21hcHBpbmcoZCwgKHVuc2lnbmVkIGxvbmcpZGNhY2hlLT5pbnVzZSwK
LSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIDIgKiBiaXRt
YXBfcGFnZXMgKyAxLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgTklMKGwxX3BnZW50cnlfdCAqKSwgTlVMTCk7CisgICAgcmMgPSBj
cmVhdGVfcGVyZG9tYWluX21hcHBpbmcoZCwgKHVuc2lnbmVkIGxvbmcpZGNh
Y2hlLT5pbnVzZSwKKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAyICogYml0bWFwX3BhZ2VzICsgMSwKKyAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBOSUwobDFfcGdlbnRyeV90ICopLCBOVUxMKTsKKyAg
ICBpZiAoICFyYyApCisgICAgeworICAgICAgICAvKgorICAgICAgICAgKiBJ
bnN0YWxsIG1hcHBpbmcgb2Ygb3VyIEwyIHRhYmxlIGludG8gaXRzIG93biBs
YXN0IHNsb3QsIGZvciBlYXN5CisgICAgICAgICAqIGFjY2VzcyB0byB0aGUg
TDEgZW50cmllcyB2aWEgTUFQQ0FDSEVfTDFFTlQoKS4KKyAgICAgICAgICov
CisgICAgICAgIGwzX3BnZW50cnlfdCAqbDN0ID0gX19tYXBfZG9tYWluX3Bh
Z2UoZC0+YXJjaC5wZXJkb21haW5fbDNfcGcpOworICAgICAgICBsM19wZ2Vu
dHJ5X3QgbDNlID0gbDN0W2wzX3RhYmxlX29mZnNldChNQVBDQUNIRV9WSVJU
X0VORCldOworICAgICAgICBsMl9wZ2VudHJ5X3QgKmwydCA9IG1hcF9sMnRf
ZnJvbV9sM2UobDNlKTsKKworICAgICAgICBsMmVfZ2V0X2ludHB0ZShsMnRb
TDJfUEFHRVRBQkxFX0VOVFJJRVMgLSAxXSkgPSBsM2VfZ2V0X2ludHB0ZShs
M2UpOworICAgICAgICB1bm1hcF9kb21haW5fcGFnZShsMnQpOworICAgICAg
ICB1bm1hcF9kb21haW5fcGFnZShsM3QpOworICAgIH0KKworICAgIHJldHVy
biByYzsKIH0KIAogaW50IG1hcGNhY2hlX3ZjcHVfaW5pdChzdHJ1Y3QgdmNw
dSAqdikKQEAgLTM0Niw3ICszNjYsNyBAQCBtZm5fdCBkb21haW5fcGFnZV9t
YXBfdG9fbWZuKGNvbnN0IHZvaWQgKnB0cikKICAgICBlbHNlCiAgICAgewog
ICAgICAgICBBU1NFUlQodmEgPj0gTUFQQ0FDSEVfVklSVF9TVEFSVCAmJiB2
YSA8IE1BUENBQ0hFX1ZJUlRfRU5EKTsKLSAgICAgICAgcGwxZSA9ICZfX2xp
bmVhcl9sMV90YWJsZVtsMV9saW5lYXJfb2Zmc2V0KHZhKV07CisgICAgICAg
IHBsMWUgPSAmTUFQQ0FDSEVfTDFFTlQoUEZOX0RPV04odmEgLSBNQVBDQUNI
RV9WSVJUX1NUQVJUKSk7CiAgICAgfQogCiAgICAgcmV0dXJuIGwxZV9nZXRf
bWZuKCpwbDFlKTsKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9tbS5jIGIv
eGVuL2FyY2gveDg2L21tLmMKaW5kZXggODJiYzY3NjU1My4uNTgyZWEwOTcy
NSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tLmMKQEAgLTU5NTMsNiArNTk1MywxMCBAQCB2b2lkIGZyZWVf
cGVyZG9tYWluX21hcHBpbmdzKHN0cnVjdCBkb21haW4gKmQpCiAgICAgICAg
ICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICBzdHJ1Y3QgcGFnZV9p
bmZvICpsMXBnID0gbDJlX2dldF9wYWdlKGwydGFiW2pdKTsKIAorICAgICAg
ICAgICAgICAgICAgICAvKiBtYXBjYWNoZV9kb21haW5faW5pdCgpIGluc3Rh
bGxzIGEgcmVjdXJzaXZlIGVudHJ5LiAqLworICAgICAgICAgICAgICAgICAg
ICBpZiAoIGwxcGcgPT0gbDJwZyApCisgICAgICAgICAgICAgICAgICAgICAg
ICBjb250aW51ZTsKKwogICAgICAgICAgICAgICAgICAgICBpZiAoIGwyZV9n
ZXRfZmxhZ3MobDJ0YWJbal0pICYgX1BBR0VfQVZBSUwwICkKICAgICAgICAg
ICAgICAgICAgICAgewogICAgICAgICAgICAgICAgICAgICAgICAgbDFfcGdl
bnRyeV90ICpsMXRhYiA9IF9fbWFwX2RvbWFpbl9wYWdlKGwxcGcpOwo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286-4.14/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Disposition: attachment;
 filename="xsa286-4.14/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHJlc3RyaWN0IHVzZSBvZiBsaW5lYXIgcGFnZSB0YWJsZXMg
dG8gc2hhZG93IG1vZGUgY29kZQoKT3RoZXIgY29kZSBkb2VzIG5vdCByZXF1
aXJlIHRoZW0gdG8gYmUgc2V0IHVwIGFueW1vcmUsIHNvIHJlc3RyaWN0IHdo
ZW4KdG8gcG9wdWxhdGUgdGhlIHJlc3BlY3RpdmUgTDQgc2xvdCBhbmQgcmVk
dWNlIHZpc2liaWxpdHkgb2YgdGhlCmFjY2Vzc29ycy4KCldoaWxlIHdpdGgg
dGhlIHJlbW92YWwgb2YgYWxsIHVzZXMgdGhlIHZ1bG5lcmFiaWxpdHkgaXMg
YWN0dWFsbHkgZml4ZWQsCnJlbW92aW5nIHRoZSBjcmVhdGlvbiBvZiB0aGUg
bGluZWFyIG1hcHBpbmcgYWRkcyBhbiBleHRyYSBsYXllciBvZgpwcm90ZWN0
aW9uLiBTaW1pbGFybHkgcmVkdWNpbmcgdmlzaWJpbGl0eSBvZiB0aGUgYWNj
ZXNzb3JzIG1vc3RseQplbGltaW5hdGVzIHRoZSByaXNrIG9mIHVuZHVlIHJl
LWludHJvZHVjdGlvbiBvZiB1c2VzIG9mIHRoZSBsaW5lYXIKbWFwcGluZ3Mu
CgpUaGlzIGlzIChub3Qgc3RyaWN0bHkpIHBhcnQgb2YgWFNBLTI4Ni4KClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2
L21tLmMgYi94ZW4vYXJjaC94ODYvbW0uYwppbmRleCA1ODJlYTA5NzI1Li41
NzMzM2JiMTIwIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysg
Yi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtMTY4Miw5ICsxNjgyLDEwIEBAIHZv
aWQgaW5pdF94ZW5fbDRfc2xvdHMobDRfcGdlbnRyeV90ICpsNHQsIG1mbl90
IGw0bWZuLAogICAgIGw0dFtsNF90YWJsZV9vZmZzZXQoUENJX01DRkdfVklS
VF9TVEFSVCldID0KICAgICAgICAgaWRsZV9wZ190YWJsZVtsNF90YWJsZV9v
ZmZzZXQoUENJX01DRkdfVklSVF9TVEFSVCldOwogCi0gICAgLyogU2xvdCAy
NTg6IFNlbGYgbGluZWFyIG1hcHBpbmdzLiAqLworICAgIC8qIFNsb3QgMjU4
OiBTZWxmIGxpbmVhciBtYXBwaW5ncyAoc2hhZG93IHB0IG9ubHkpLiAqLwog
ICAgIEFTU0VSVCghbWZuX2VxKGw0bWZuLCBJTlZBTElEX01GTikpOwogICAg
IGw0dFtsNF90YWJsZV9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQpXSA9
CisgICAgICAgICFzaGFkb3dfbW9kZV9leHRlcm5hbChkKSA/IGw0ZV9lbXB0
eSgpIDoKICAgICAgICAgbDRlX2Zyb21fbWZuKGw0bWZuLCBfX1BBR0VfSFlQ
RVJWSVNPUl9SVyk7CiAKICAgICAvKiBTbG90IDI1OTogU2hhZG93IGxpbmVh
ciBtYXBwaW5ncyAoaWYgYXBwbGljYWJsZSkgLiovCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0vc2hhZG93L3ByaXZhdGUuaCBiL3hlbi9hcmNoL3g4
Ni9tbS9zaGFkb3cvcHJpdmF0ZS5oCmluZGV4IDNmZDNmMDYxN2EuLmJiMmY1
MGNiNmUgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJp
dmF0ZS5oCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJpdmF0ZS5o
CkBAIC0xMzksNiArMTM5LDE1IEBAIGVudW0gewogIyBkZWZpbmUgR1VFU1Rf
UFRFX1NJWkUgNAogI2VuZGlmCiAKKy8qIFdoZXJlIHRvIGZpbmQgZWFjaCBs
ZXZlbCBvZiB0aGUgbGluZWFyIG1hcHBpbmcgKi8KKyNkZWZpbmUgX19saW5l
YXJfbDFfdGFibGUgKChsMV9wZ2VudHJ5X3QgKikoTElORUFSX1BUX1ZJUlRf
U1RBUlQpKQorI2RlZmluZSBfX2xpbmVhcl9sMl90YWJsZSBcCisgKChsMl9w
Z2VudHJ5X3QgKikoX19saW5lYXJfbDFfdGFibGUgKyBsMV9saW5lYXJfb2Zm
c2V0KExJTkVBUl9QVF9WSVJUX1NUQVJUKSkpCisjZGVmaW5lIF9fbGluZWFy
X2wzX3RhYmxlIFwKKyAoKGwzX3BnZW50cnlfdCAqKShfX2xpbmVhcl9sMl90
YWJsZSArIGwyX2xpbmVhcl9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQp
KSkKKyNkZWZpbmUgX19saW5lYXJfbDRfdGFibGUgXAorICgobDRfcGdlbnRy
eV90ICopKF9fbGluZWFyX2wzX3RhYmxlICsgbDNfbGluZWFyX29mZnNldChM
SU5FQVJfUFRfVklSVF9TVEFSVCkpKQorCiAvKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqCiAgKiBBdWRpdGluZyByb3V0aW5lcwogICovCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMgYi94ZW4vYXJj
aC94ODYveDg2XzY0L21tLmMKaW5kZXggOWU4N2E1NTE3NC4uY2UwM2Y4M2Y1
MiAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCisrKyBi
L3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtODA4LDkgKzgwOCw2IEBA
IHZvaWQgX19pbml0IHBhZ2luZ19pbml0KHZvaWQpCiAKICAgICBtYWNoaW5l
X3RvX3BoeXNfbWFwcGluZ192YWxpZCA9IDE7CiAKLSAgICAvKiBTZXQgdXAg
bGluZWFyIHBhZ2UgdGFibGUgbWFwcGluZy4gKi8KLSAgICBsNGVfd3JpdGUo
JmlkbGVfcGdfdGFibGVbbDRfdGFibGVfb2Zmc2V0KExJTkVBUl9QVF9WSVJU
X1NUQVJUKV0sCi0gICAgICAgICAgICAgIGw0ZV9mcm9tX3BhZGRyKF9fcGEo
aWRsZV9wZ190YWJsZSksIF9fUEFHRV9IWVBFUlZJU09SX1JXKSk7CiAgICAg
cmV0dXJuOwogCiAgbm9tZW06CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9h
c20teDg2L2NvbmZpZy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jb25maWcu
aAppbmRleCA2NjVlOWNjMzFkLi4xN2I4ZWEwY2ZkIDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCisrKyBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvY29uZmlnLmgKQEAgLTE5Nyw3ICsxOTcsNyBAQCBleHRlcm4g
dW5zaWduZWQgY2hhciBib290X2VkaWRfaW5mb1sxMjhdOwogICovCiAjZGVm
aW5lIFBDSV9NQ0ZHX1ZJUlRfU1RBUlQgICAgIChQTUw0X0FERFIoMjU3KSkK
ICNkZWZpbmUgUENJX01DRkdfVklSVF9FTkQgICAgICAgKFBDSV9NQ0ZHX1ZJ
UlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQotLyogU2xvdCAyNTg6IGxp
bmVhciBwYWdlIHRhYmxlIChndWVzdCB0YWJsZSkuICovCisvKiBTbG90IDI1
ODogbGluZWFyIHBhZ2UgdGFibGUgKG1vbml0b3IgdGFibGUsIEhWTSBvbmx5
KS4gKi8KICNkZWZpbmUgTElORUFSX1BUX1ZJUlRfU1RBUlQgICAgKFBNTDRf
QUREUigyNTgpKQogI2RlZmluZSBMSU5FQVJfUFRfVklSVF9FTkQgICAgICAo
TElORUFSX1BUX1ZJUlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQogLyog
U2xvdCAyNTk6IGxpbmVhciBwYWdlIHRhYmxlIChzaGFkb3cgdGFibGUpLiAq
LwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9wYWdlLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaAppbmRleCBmNjMyYWZmYWVmLi5m
ZDI1NzQyNjdjIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3Bh
Z2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAtMjk0
LDE5ICsyOTQsNiBAQCB2b2lkIGNvcHlfcGFnZV9zc2UyKHZvaWQgKiwgY29u
c3Qgdm9pZCAqKTsKICNkZWZpbmUgdm1hcF90b19tZm4odmEpICAgICBfbWZu
KGwxZV9nZXRfcGZuKCp2aXJ0X3RvX3hlbl9sMWUoKHVuc2lnbmVkIGxvbmcp
KHZhKSkpKQogI2RlZmluZSB2bWFwX3RvX3BhZ2UodmEpICAgIG1mbl90b19w
YWdlKHZtYXBfdG9fbWZuKHZhKSkKIAotI2VuZGlmIC8qICFkZWZpbmVkKF9f
QVNTRU1CTFlfXykgKi8KLQotLyogV2hlcmUgdG8gZmluZCBlYWNoIGxldmVs
IG9mIHRoZSBsaW5lYXIgbWFwcGluZyAqLwotI2RlZmluZSBfX2xpbmVhcl9s
MV90YWJsZSAoKGwxX3BnZW50cnlfdCAqKShMSU5FQVJfUFRfVklSVF9TVEFS
VCkpCi0jZGVmaW5lIF9fbGluZWFyX2wyX3RhYmxlIFwKLSAoKGwyX3BnZW50
cnlfdCAqKShfX2xpbmVhcl9sMV90YWJsZSArIGwxX2xpbmVhcl9vZmZzZXQo
TElORUFSX1BUX1ZJUlRfU1RBUlQpKSkKLSNkZWZpbmUgX19saW5lYXJfbDNf
dGFibGUgXAotICgobDNfcGdlbnRyeV90ICopKF9fbGluZWFyX2wyX3RhYmxl
ICsgbDJfbGluZWFyX29mZnNldChMSU5FQVJfUFRfVklSVF9TVEFSVCkpKQot
I2RlZmluZSBfX2xpbmVhcl9sNF90YWJsZSBcCi0gKChsNF9wZ2VudHJ5X3Qg
KikoX19saW5lYXJfbDNfdGFibGUgKyBsM19saW5lYXJfb2Zmc2V0KExJTkVB
Ul9QVF9WSVJUX1NUQVJUKSkpCi0KLQotI2lmbmRlZiBfX0FTU0VNQkxZX18K
IGV4dGVybiByb290X3BnZW50cnlfdCBpZGxlX3BnX3RhYmxlW1JPT1RfUEFH
RVRBQkxFX0VOVFJJRVNdOwogZXh0ZXJuIGwyX3BnZW50cnlfdCAgKmNvbXBh
dF9pZGxlX3BnX3RhYmxlX2wyOwogZXh0ZXJuIHVuc2lnbmVkIGludCAgIG0y
cF9jb21wYXRfdnN0YXJ0Owo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Disposition: attachment;
 filename="xsa286/0001-x86-mm-split-L4-and-L3-parts-of-the-walk-out-of-do_p.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHNwbGl0IEw0IGFuZCBMMyBwYXJ0cyBvZiB0aGUgd2FsayBv
dXQgb2YgZG9fcGFnZV93YWxrKCkKClRoZSBMMyBvbmUgYXQgbGVhc3QgaXMg
Z29pbmcgdG8gYmUgcmUtdXNlZCBieSBhIHN1YnNlcXVlbnQgcGF0Y2gsIGFu
ZApzcGxpdHRpbmcgdGhlIEw0IG9uZSB0aGVuIGFzIHdlbGwgc2VlbXMgb25s
eSBuYXR1cmFsLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWduZWQt
b2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmll
d2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5j
b20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29vcGVy
M0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94ODZf
NjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCBiY2Ux
NTYxZTFhLi5jNzViYWMwNzEyIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYv
eDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCkBA
IC01OSwyNiArNTksNDcgQEAgZXh0ZXJuIHVuc2lnbmVkIGludCBjb21wYXRf
bWFjaGluZV90b19waHlzX21hcHBpbmdbXTsKIAogI2VuZGlmIC8qIENPTkZJ
R19QVjMyICovCiAKLXZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNwdSAq
diwgdW5zaWduZWQgbG9uZyBhZGRyKQorc3RhdGljIGw0X3BnZW50cnlfdCBw
YWdlX3dhbGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBs
b25nIGFkZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4gPSBwYWdldGFi
bGVfZ2V0X3Bmbih2LT5hcmNoLmd1ZXN0X3RhYmxlKTsKLSAgICBsNF9wZ2Vu
dHJ5X3QgbDRlLCAqbDR0OwotICAgIGwzX3BnZW50cnlfdCBsM2UsICpsM3Q7
Ci0gICAgbDJfcGdlbnRyeV90IGwyZSwgKmwydDsKLSAgICBsMV9wZ2VudHJ5
X3QgbDFlLCAqbDF0OworICAgIHVuc2lnbmVkIGxvbmcgbWZuID0gcGFnZXRh
YmxlX2dldF9wZm4ocm9vdCk7CisgICAgbDRfcGdlbnRyeV90ICpsNHQsIGw0
ZTsKIAotICAgIGlmICggIWlzX3B2X3ZjcHUodikgfHwgIWlzX2Nhbm9uaWNh
bF9hZGRyZXNzKGFkZHIpICkKLSAgICAgICAgcmV0dXJuIE5VTEw7CisgICAg
aWYgKCAhaXNfY2Fub25pY2FsX2FkZHJlc3MoYWRkcikgKQorICAgICAgICBy
ZXR1cm4gbDRlX2VtcHR5KCk7CiAKICAgICBsNHQgPSBtYXBfZG9tYWluX3Bh
Z2UoX21mbihtZm4pKTsKICAgICBsNGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0
KGFkZHIpXTsKICAgICB1bm1hcF9kb21haW5fcGFnZShsNHQpOworCisgICAg
cmV0dXJuIGw0ZTsKK30KKworc3RhdGljIGwzX3BnZW50cnlfdCBwYWdlX3dh
bGtfZ2V0X2wzZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFk
ZHIpCit7CisgICAgbDRfcGdlbnRyeV90IGw0ZSA9IHBhZ2Vfd2Fsa19nZXRf
bDRlKHJvb3QsIGFkZHIpOworICAgIGwzX3BnZW50cnlfdCAqbDN0LCBsM2U7
CisKICAgICBpZiAoICEobDRlX2dldF9mbGFncyhsNGUpICYgX1BBR0VfUFJF
U0VOVCkgKQotICAgICAgICByZXR1cm4gTlVMTDsKKyAgICAgICAgcmV0dXJu
IGwzZV9lbXB0eSgpOwogCiAgICAgbDN0ID0gbWFwX2wzdF9mcm9tX2w0ZShs
NGUpOwogICAgIGwzZSA9IGwzdFtsM190YWJsZV9vZmZzZXQoYWRkcildOwog
ICAgIHVubWFwX2RvbWFpbl9wYWdlKGwzdCk7CisKKyAgICByZXR1cm4gbDNl
OworfQorCit2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVu
c2lnbmVkIGxvbmcgYWRkcikKK3sKKyAgICBsM19wZ2VudHJ5X3QgbDNlOwor
ICAgIGwyX3BnZW50cnlfdCBsMmUsICpsMnQ7CisgICAgbDFfcGdlbnRyeV90
IGwxZSwgKmwxdDsKKyAgICB1bnNpZ25lZCBsb25nIG1mbjsKKworICAgIGlm
ICggIWlzX3B2X3ZjcHUodikgKQorICAgICAgICByZXR1cm4gTlVMTDsKKwor
ICAgIGwzZSA9IHBhZ2Vfd2Fsa19nZXRfbDNlKHYtPmFyY2guZ3Vlc3RfdGFi
bGUsIGFkZHIpOwogICAgIG1mbiA9IGwzZV9nZXRfcGZuKGwzZSk7CiAgICAg
aWYgKCAhKGwzZV9nZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BSRVNFTlQpIHx8
ICFtZm5fdmFsaWQoX21mbihtZm4pKSApCiAgICAgICAgIHJldHVybiBOVUxM
Owo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286/0002-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Disposition: attachment;
 filename="xsa286/0002-x86-mm-check-page-types-in-do_page_walk.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGNoZWNrIHBhZ2UgdHlwZXMgaW4gZG9fcGFnZV93YWxrKCkK
CkZvciBwYWdlIHRhYmxlIGVudHJpZXMgcmVhZCB0byBiZSBndWFyYW50ZWVk
IHZhbGlkLCB0cmFuc2llbnRseSBsb2NraW5nCnRoZSBwYWdlcyBhbmQgdmFs
aWRhdGluZyB0aGVpciB0eXBlcyBpcyBuZWNlc3NhcnkuIE5vdGUgdGhhdCBn
dWVzdCB1c2UKb2YgbGluZWFyIHBhZ2UgdGFibGVzIGlzIGludGVudGlvbmFs
bHkgbm90IHRha2VuIGludG8gYWNjb3VudCBoZXJlLCBhcwpvcmRpbmFyeSBk
YXRhIChndWVzdCBzdGFja3MpIGNhbid0IHBvc3NpYmx5IGxpdmUgaW5zaWRl
IHBhZ2UgdGFibGVzLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0yODYuCgpTaWdu
ZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJl
dmlld2VkLWJ5OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJp
eC5jb20+ClJldmlld2VkLWJ5OiBBbmRyZXcgQ29vcGVyIDxhbmRyZXcuY29v
cGVyM0BjaXRyaXguY29tPgoKZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni94
ODZfNjQvbW0uYyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwppbmRleCBj
NzViYWMwNzEyLi5jNGE1MmE1YzRjIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94
ODYveDg2XzY0L21tLmMKKysrIGIveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5j
CkBAIC02MSwxNSArNjEsMjkgQEAgZXh0ZXJuIHVuc2lnbmVkIGludCBjb21w
YXRfbWFjaGluZV90b19waHlzX21hcHBpbmdbXTsKIAogc3RhdGljIGw0X3Bn
ZW50cnlfdCBwYWdlX3dhbGtfZ2V0X2w0ZShwYWdldGFibGVfdCByb290LCB1
bnNpZ25lZCBsb25nIGFkZHIpCiB7Ci0gICAgdW5zaWduZWQgbG9uZyBtZm4g
PSBwYWdldGFibGVfZ2V0X3Bmbihyb290KTsKLSAgICBsNF9wZ2VudHJ5X3Qg
Kmw0dCwgbDRlOworICAgIG1mbl90IG1mbiA9IHBhZ2V0YWJsZV9nZXRfbWZu
KHJvb3QpOworICAgIC8qIGN1cnJlbnQncyByb290IHBhZ2UgdGFibGUgY2Fu
J3QgZGlzYXBwZWFyIHVuZGVyIG91ciBmZWV0LiAqLworICAgIGJvb2wgbmVl
ZF9sb2NrID0gIW1mbl9lcShtZm4sIHBhZ2V0YWJsZV9nZXRfbWZuKGN1cnJl
bnQtPmFyY2guZ3Vlc3RfdGFibGUpKTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZv
ICpwZzsKKyAgICBsNF9wZ2VudHJ5X3QgbDRlID0gbDRlX2VtcHR5KCk7CiAK
ICAgICBpZiAoICFpc19jYW5vbmljYWxfYWRkcmVzcyhhZGRyKSApCiAgICAg
ICAgIHJldHVybiBsNGVfZW1wdHkoKTsKIAotICAgIGw0dCA9IG1hcF9kb21h
aW5fcGFnZShfbWZuKG1mbikpOwotICAgIGw0ZSA9IGw0dFtsNF90YWJsZV9v
ZmZzZXQoYWRkcildOwotICAgIHVubWFwX2RvbWFpbl9wYWdlKGw0dCk7Cisg
ICAgcGcgPSBtZm5fdG9fcGFnZShtZm4pOworICAgIGlmICggbmVlZF9sb2Nr
ICYmICFwYWdlX2xvY2socGcpICkKKyAgICAgICAgcmV0dXJuIGw0ZV9lbXB0
eSgpOworCisgICAgaWYgKCAocGctPnUuaW51c2UudHlwZV9pbmZvICYgUEdU
X3R5cGVfbWFzaykgPT0gUEdUX2w0X3BhZ2VfdGFibGUgKQorICAgIHsKKyAg
ICAgICAgbDRfcGdlbnRyeV90ICpsNHQgPSBtYXBfZG9tYWluX3BhZ2UobWZu
KTsKKworICAgICAgICBsNGUgPSBsNHRbbDRfdGFibGVfb2Zmc2V0KGFkZHIp
XTsKKyAgICAgICAgdW5tYXBfZG9tYWluX3BhZ2UobDR0KTsKKyAgICB9CisK
KyAgICBpZiAoIG5lZWRfbG9jayApCisgICAgICAgIHBhZ2VfdW5sb2NrKHBn
KTsKIAogICAgIHJldHVybiBsNGU7CiB9CkBAIC03NywxNCArOTEsMjYgQEAg
c3RhdGljIGw0X3BnZW50cnlfdCBwYWdlX3dhbGtfZ2V0X2w0ZShwYWdldGFi
bGVfdCByb290LCB1bnNpZ25lZCBsb25nIGFkZHIpCiBzdGF0aWMgbDNfcGdl
bnRyeV90IHBhZ2Vfd2Fsa19nZXRfbDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVu
c2lnbmVkIGxvbmcgYWRkcikKIHsKICAgICBsNF9wZ2VudHJ5X3QgbDRlID0g
cGFnZV93YWxrX2dldF9sNGUocm9vdCwgYWRkcik7Ci0gICAgbDNfcGdlbnRy
eV90ICpsM3QsIGwzZTsKKyAgICBtZm5fdCBtZm4gPSBsNGVfZ2V0X21mbihs
NGUpOworICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOworICAgIGwzX3BnZW50
cnlfdCBsM2UgPSBsM2VfZW1wdHkoKTsKIAogICAgIGlmICggIShsNGVfZ2V0
X2ZsYWdzKGw0ZSkgJiBfUEFHRV9QUkVTRU5UKSApCiAgICAgICAgIHJldHVy
biBsM2VfZW1wdHkoKTsKIAotICAgIGwzdCA9IG1hcF9sM3RfZnJvbV9sNGUo
bDRlKTsKLSAgICBsM2UgPSBsM3RbbDNfdGFibGVfb2Zmc2V0KGFkZHIpXTsK
LSAgICB1bm1hcF9kb21haW5fcGFnZShsM3QpOworICAgIHBnID0gbWZuX3Rv
X3BhZ2UobWZuKTsKKyAgICBpZiAoICFwYWdlX2xvY2socGcpICkKKyAgICAg
ICAgcmV0dXJuIGwzZV9lbXB0eSgpOworCisgICAgaWYgKCAocGctPnUuaW51
c2UudHlwZV9pbmZvICYgUEdUX3R5cGVfbWFzaykgPT0gUEdUX2wzX3BhZ2Vf
dGFibGUgKQorICAgIHsKKyAgICAgICAgbDNfcGdlbnRyeV90ICpsM3QgPSBt
YXBfZG9tYWluX3BhZ2UobWZuKTsKKworICAgICAgICBsM2UgPSBsM3RbbDNf
dGFibGVfb2Zmc2V0KGFkZHIpXTsKKyAgICAgICAgdW5tYXBfZG9tYWluX3Bh
Z2UobDN0KTsKKyAgICB9CisKKyAgICBwYWdlX3VubG9jayhwZyk7CiAKICAg
ICByZXR1cm4gbDNlOwogfQpAQCAtOTIsNDQgKzExOCw2NyBAQCBzdGF0aWMg
bDNfcGdlbnRyeV90IHBhZ2Vfd2Fsa19nZXRfbDNlKHBhZ2V0YWJsZV90IHJv
b3QsIHVuc2lnbmVkIGxvbmcgYWRkcikKIHZvaWQgKmRvX3BhZ2Vfd2Fsayhz
dHJ1Y3QgdmNwdSAqdiwgdW5zaWduZWQgbG9uZyBhZGRyKQogewogICAgIGwz
X3BnZW50cnlfdCBsM2U7Ci0gICAgbDJfcGdlbnRyeV90IGwyZSwgKmwydDsK
LSAgICBsMV9wZ2VudHJ5X3QgbDFlLCAqbDF0OwotICAgIHVuc2lnbmVkIGxv
bmcgbWZuOworICAgIGwyX3BnZW50cnlfdCBsMmUgPSBsMmVfZW1wdHkoKTsK
KyAgICBsMV9wZ2VudHJ5X3QgbDFlID0gbDFlX2VtcHR5KCk7CisgICAgbWZu
X3QgbWZuOworICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOwogCiAgICAgaWYg
KCAhaXNfcHZfdmNwdSh2KSApCiAgICAgICAgIHJldHVybiBOVUxMOwogCiAg
ICAgbDNlID0gcGFnZV93YWxrX2dldF9sM2Uodi0+YXJjaC5ndWVzdF90YWJs
ZSwgYWRkcik7Ci0gICAgbWZuID0gbDNlX2dldF9wZm4obDNlKTsKLSAgICBp
ZiAoICEobDNlX2dldF9mbGFncyhsM2UpICYgX1BBR0VfUFJFU0VOVCkgfHwg
IW1mbl92YWxpZChfbWZuKG1mbikpICkKKyAgICBtZm4gPSBsM2VfZ2V0X21m
bihsM2UpOworICAgIGlmICggIShsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFH
RV9QUkVTRU5UKSB8fCAhbWZuX3ZhbGlkKG1mbikgKQogICAgICAgICByZXR1
cm4gTlVMTDsKICAgICBpZiAoIChsM2VfZ2V0X2ZsYWdzKGwzZSkgJiBfUEFH
RV9QU0UpICkKICAgICB7Ci0gICAgICAgIG1mbiArPSBQRk5fRE9XTihhZGRy
ICYgKCgxVUwgPDwgTDNfUEFHRVRBQkxFX1NISUZUKSAtIDEpKTsKKyAgICAg
ICAgbWZuID0gbWZuX2FkZChtZm4sIFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8
PCBMM19QQUdFVEFCTEVfU0hJRlQpIC0gMSkpKTsKICAgICAgICAgZ290byBy
ZXQ7CiAgICAgfQogCi0gICAgbDJ0ID0gbWFwX2RvbWFpbl9wYWdlKF9tZm4o
bWZuKSk7Ci0gICAgbDJlID0gbDJ0W2wyX3RhYmxlX29mZnNldChhZGRyKV07
Ci0gICAgdW5tYXBfZG9tYWluX3BhZ2UobDJ0KTsKLSAgICBtZm4gPSBsMmVf
Z2V0X3BmbihsMmUpOwotICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkg
JiBfUEFHRV9QUkVTRU5UKSB8fCAhbWZuX3ZhbGlkKF9tZm4obWZuKSkgKQor
ICAgIHBnID0gbWZuX3RvX3BhZ2UobWZuKTsKKyAgICBpZiAoICFwYWdlX2xv
Y2socGcpICkKKyAgICAgICAgcmV0dXJuIE5VTEw7CisKKyAgICBpZiAoIChw
Zy0+dS5pbnVzZS50eXBlX2luZm8gJiBQR1RfdHlwZV9tYXNrKSA9PSBQR1Rf
bDJfcGFnZV90YWJsZSApCisgICAgeworICAgICAgICBjb25zdCBsMl9wZ2Vu
dHJ5X3QgKmwydCA9IG1hcF9kb21haW5fcGFnZShtZm4pOworCisgICAgICAg
IGwyZSA9IGwydFtsMl90YWJsZV9vZmZzZXQoYWRkcildOworICAgICAgICB1
bm1hcF9kb21haW5fcGFnZShsMnQpOworICAgIH0KKworICAgIHBhZ2VfdW5s
b2NrKHBnKTsKKworICAgIG1mbiA9IGwyZV9nZXRfbWZuKGwyZSk7CisgICAg
aWYgKCAhKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdFX1BSRVNFTlQpIHx8
ICFtZm5fdmFsaWQobWZuKSApCiAgICAgICAgIHJldHVybiBOVUxMOwogICAg
IGlmICggKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdFX1BTRSkgKQogICAg
IHsKLSAgICAgICAgbWZuICs9IFBGTl9ET1dOKGFkZHIgJiAoKDFVTCA8PCBM
Ml9QQUdFVEFCTEVfU0hJRlQpIC0gMSkpOworICAgICAgICBtZm4gPSBtZm5f
YWRkKG1mbiwgUEZOX0RPV04oYWRkciAmICgoMVVMIDw8IEwyX1BBR0VUQUJM
RV9TSElGVCkgLSAxKSkpOwogICAgICAgICBnb3RvIHJldDsKICAgICB9CiAK
LSAgICBsMXQgPSBtYXBfZG9tYWluX3BhZ2UoX21mbihtZm4pKTsKLSAgICBs
MWUgPSBsMXRbbDFfdGFibGVfb2Zmc2V0KGFkZHIpXTsKLSAgICB1bm1hcF9k
b21haW5fcGFnZShsMXQpOwotICAgIG1mbiA9IGwxZV9nZXRfcGZuKGwxZSk7
Ci0gICAgaWYgKCAhKGwxZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX1BSRVNF
TlQpIHx8ICFtZm5fdmFsaWQoX21mbihtZm4pKSApCisgICAgcGcgPSBtZm5f
dG9fcGFnZShtZm4pOworICAgIGlmICggIXBhZ2VfbG9jayhwZykgKQorICAg
ICAgICByZXR1cm4gTlVMTDsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5
cGVfaW5mbyAmIFBHVF90eXBlX21hc2spID09IFBHVF9sMV9wYWdlX3RhYmxl
ICkKKyAgICB7CisgICAgICAgIGNvbnN0IGwxX3BnZW50cnlfdCAqbDF0ID0g
bWFwX2RvbWFpbl9wYWdlKG1mbik7CisKKyAgICAgICAgbDFlID0gbDF0W2wx
X3RhYmxlX29mZnNldChhZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9w
YWdlKGwxdCk7CisgICAgfQorCisgICAgcGFnZV91bmxvY2socGcpOworCisg
ICAgbWZuID0gbDFlX2dldF9tZm4obDFlKTsKKyAgICBpZiAoICEobDFlX2dl
dF9mbGFncyhsMWUpICYgX1BBR0VfUFJFU0VOVCkgfHwgIW1mbl92YWxpZCht
Zm4pICkKICAgICAgICAgcmV0dXJuIE5VTEw7CiAKICByZXQ6Ci0gICAgcmV0
dXJuIG1hcF9kb21haW5fcGFnZShfbWZuKG1mbikpICsgKGFkZHIgJiB+UEFH
RV9NQVNLKTsKKyAgICByZXR1cm4gbWFwX2RvbWFpbl9wYWdlKG1mbikgKyAo
YWRkciAmIH5QQUdFX01BU0spOwogfQogCiAvKgo=

--=separator
Content-Type: application/octet-stream;
 name="xsa286/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Disposition: attachment;
 filename="xsa286/0003-x86-mm-avoid-using-linear-page-tables-in-map_guest_l.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBt
YXBfZ3Vlc3RfbDFlKCkKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UeXBl
OiB0ZXh0L3BsYWluOyBjaGFyc2V0PVVURi04CkNvbnRlbnQtVHJhbnNmZXIt
RW5jb2Rpbmc6IDhiaXQKClJlcGxhY2UgdGhlIGxpbmVhciBMMiB0YWJsZSBh
Y2Nlc3MgYnkgYW4gYWN0dWFsIHBhZ2Ugd2Fsay4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMjg2LgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8amFubmhAZ29v
Z2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPgpTaWduZWQtb2ZmLWJ5OiBSb2dlciBQYXUgTW9ubsOpIDxy
b2dlci5wYXVAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5s
YXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZm
IC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwppbmRleCAxNGNiMGYyZDRlLi42MWJmNzM2ZWQ5IDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYv
cHYvbW0uYwpAQCAtNDAsMTEgKzQwLDE0IEBAIGwxX3BnZW50cnlfdCAqbWFw
X2d1ZXN0X2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhciwgbWZuX3QgKmdsMW1m
bikKICAgICBpZiAoIHVubGlrZWx5KCFfX2FkZHJfb2sobGluZWFyKSkgKQog
ICAgICAgICByZXR1cm4gTlVMTDsKIAotICAgIC8qIEZpbmQgdGhpcyBsMWUg
YW5kIGl0cyBlbmNsb3NpbmcgbDFtZm4gaW4gdGhlIGxpbmVhciBtYXAuICov
Ci0gICAgaWYgKCBfX2NvcHlfZnJvbV91c2VyKCZsMmUsCi0gICAgICAgICAg
ICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMl90YWJsZVtsMl9saW5lYXJf
b2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAgICAgICAgICAgICAgICBz
aXplb2YobDJfcGdlbnRyeV90KSkgKQorICAgIGlmICggdW5saWtlbHkoIShj
dXJyZW50LT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpKSApCisgICAg
eworICAgICAgICBBU1NFUlRfVU5SRUFDSEFCTEUoKTsKICAgICAgICAgcmV0
dXJuIE5VTEw7CisgICAgfQorCisgICAgLyogRmluZCB0aGlzIGwxZSBhbmQg
aXRzIGVuY2xvc2luZyBsMW1mbi4gKi8KKyAgICBsMmUgPSBwYWdlX3dhbGtf
Z2V0X2wyZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwog
CiAgICAgLyogQ2hlY2sgZmxhZ3MgdGhhdCBpdCB3aWxsIGJlIHNhZmUgdG8g
cmVhZCB0aGUgbDFlLiAqLwogICAgIGlmICggKGwyZV9nZXRfZmxhZ3MobDJl
KSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUFNFKSkgIT0gX1BBR0VfUFJF
U0VOVCApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMg
Yi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggYzRhNTJhNWM0Yy4u
MTdjMzA1NjVkYSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9t
bS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtMTE1LDYg
KzExNSwzNCBAQCBzdGF0aWMgbDNfcGdlbnRyeV90IHBhZ2Vfd2Fsa19nZXRf
bDNlKHBhZ2V0YWJsZV90IHJvb3QsIHVuc2lnbmVkIGxvbmcgYWRkcikKICAg
ICByZXR1cm4gbDNlOwogfQogCitsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dl
dF9sMmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQor
eworICAgIGwzX3BnZW50cnlfdCBsM2UgPSBwYWdlX3dhbGtfZ2V0X2wzZShy
b290LCBhZGRyKTsKKyAgICBtZm5fdCBtZm4gPSBsM2VfZ2V0X21mbihsM2Up
OworICAgIHN0cnVjdCBwYWdlX2luZm8gKnBnOworICAgIGwyX3BnZW50cnlf
dCBsMmUgPSBsMmVfZW1wdHkoKTsKKworICAgIGlmICggIShsM2VfZ2V0X2Zs
YWdzKGwzZSkgJiBfUEFHRV9QUkVTRU5UKSB8fAorICAgICAgICAgKGwzZV9n
ZXRfZmxhZ3MobDNlKSAmIF9QQUdFX1BTRSkgKQorICAgICAgICByZXR1cm4g
bDJlX2VtcHR5KCk7CisKKyAgICBwZyA9IG1mbl90b19wYWdlKG1mbik7Cisg
ICAgaWYgKCAhcGFnZV9sb2NrKHBnKSApCisgICAgICAgIHJldHVybiBsMmVf
ZW1wdHkoKTsKKworICAgIGlmICggKHBnLT51LmludXNlLnR5cGVfaW5mbyAm
IFBHVF90eXBlX21hc2spID09IFBHVF9sMl9wYWdlX3RhYmxlICkKKyAgICB7
CisgICAgICAgIGwyX3BnZW50cnlfdCAqbDJ0ID0gbWFwX2RvbWFpbl9wYWdl
KG1mbik7CisKKyAgICAgICAgbDJlID0gbDJ0W2wyX3RhYmxlX29mZnNldChh
ZGRyKV07CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwydCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwyZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCBkZWViYTc1YTFjLi4xMDFkNDAyZDYy
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01NjksNyArNTY5LDkgQEAg
dm9pZCBhdWRpdF9kb21haW5zKHZvaWQpOwogdm9pZCBtYWtlX2NyMyhzdHJ1
Y3QgdmNwdSAqdiwgbWZuX3QgbWZuKTsKIHZvaWQgdXBkYXRlX2NyMyhzdHJ1
Y3QgdmNwdSAqdik7CiBpbnQgdmNwdV9kZXN0cm95X3BhZ2V0YWJsZXMoc3Ry
dWN0IHZjcHUgKik7CisKIHZvaWQgKmRvX3BhZ2Vfd2FsayhzdHJ1Y3QgdmNw
dSAqdiwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wyX3BnZW50cnlfdCBwYWdl
X3dhbGtfZ2V0X2wyZShwYWdldGFibGVfdCByb290LCB1bnNpZ25lZCBsb25n
IGFkZHIpOwogCiAvKiBBbGxvY2F0b3IgZnVuY3Rpb25zIGZvciBYZW4gcGFn
ZXRhYmxlcy4gKi8KIHZvaWQgKmFsbG9jX3hlbl9wYWdldGFibGUodm9pZCk7
Cg==

--=separator
Content-Type: application/octet-stream;
 name="xsa286/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Disposition: attachment;
 filename="xsa286/0004-x86-mm-avoid-using-linear-page-tables-in-guest_get_e.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIGxpbmVhciBwYWdlIHRhYmxlcyBpbiBn
dWVzdF9nZXRfZWZmX2tlcm5fbDFlKCkKCkZpcnN0IG9mIGFsbCBkcm9wIGd1
ZXN0X2dldF9lZmZfbDFlKCkgZW50aXJlbHkgLSB0aGVyZSdzIG5vIGFjdHVh
bCB1c2VyCm9mIGl0OiBwdl9yb19wYWdlX2ZhdWx0KCkgaGFzIGEgZ3Vlc3Rf
a2VybmVsX21vZGUoKSBjb25kaXRpb25hbCBhcm91bmQKaXRzIG9ubHkgY2Fs
bCBzaXRlLgoKVGhlbiByZXBsYWNlIHRoZSBsaW5lYXIgTDEgdGFibGUgYWNj
ZXNzIGJ5IGFuIGFjdHVhbCBwYWdlIHdhbGsuCgpUaGlzIGlzIHBhcnQgb2Yg
WFNBLTI4Ni4KClJlcG9ydGVkLWJ5OiBKYW5uIEhvcm4gPGphbm5oQGdvb2ds
ZS5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5k
dW5sYXBAY2l0cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIg
PGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVu
L2FyY2gveDg2L3B2L21tLmMgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwppbmRl
eCA2MWJmNzM2ZWQ5Li5lZDEwOTg2ZDExIDEwMDY0NAotLS0gYS94ZW4vYXJj
aC94ODYvcHYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvcHYvbW0uYwpAQCAt
NTksMjcgKzU5LDYgQEAgbDFfcGdlbnRyeV90ICptYXBfZ3Vlc3RfbDFlKHVu
c2lnbmVkIGxvbmcgbGluZWFyLCBtZm5fdCAqZ2wxbWZuKQogfQogCiAvKgot
ICogUmVhZCB0aGUgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgYWRkcmVz
cywgZnJvbSB0aGUga2VybmVsLW1vZGUKLSAqIHBhZ2UgdGFibGVzLgotICov
Ci1zdGF0aWMgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9sMWUo
dW5zaWduZWQgbG9uZyBsaW5lYXIpCi17Ci0gICAgc3RydWN0IHZjcHUgKmN1
cnIgPSBjdXJyZW50OwotICAgIGNvbnN0IGJvb2wgdXNlcl9tb2RlID0gIShj
dXJyLT5hcmNoLmZsYWdzICYgVEZfa2VybmVsX21vZGUpOwotICAgIGwxX3Bn
ZW50cnlfdCBsMWU7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0gICAgICAg
IHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIGwxZSA9IGd1ZXN0X2dl
dF9lZmZfbDFlKGxpbmVhcik7Ci0KLSAgICBpZiAoIHVzZXJfbW9kZSApCi0g
ICAgICAgIHRvZ2dsZV9ndWVzdF9wdChjdXJyKTsKLQotICAgIHJldHVybiBs
MWU7Ci19Ci0KLS8qCiAgKiBNYXAgYSBndWVzdCdzIExEVCBwYWdlIChjb3Zl
cmluZyB0aGUgYnl0ZSBhdCBAb2Zmc2V0IGZyb20gc3RhcnQgb2YgdGhlIExE
VCkKICAqIGludG8gWGVuJ3MgdmlydHVhbCByYW5nZS4gIFJldHVybnMgdHJ1
ZSBpZiB0aGUgbWFwcGluZyBjaGFuZ2VkLCBmYWxzZQogICogb3RoZXJ3aXNl
LgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L3B2L21tLmggYi94ZW4vYXJj
aC94ODYvcHYvbW0uaAppbmRleCBiMWI2NmU0NmM4Li44Mzc3NDNkNzEyIDEw
MDY0NAotLS0gYS94ZW4vYXJjaC94ODYvcHYvbW0uaAorKysgYi94ZW4vYXJj
aC94ODYvcHYvbW0uaApAQCAtNSwxOSArNSwxOSBAQCBsMV9wZ2VudHJ5X3Qg
Km1hcF9ndWVzdF9sMWUodW5zaWduZWQgbG9uZyBsaW5lYXIsIG1mbl90ICpn
bDFtZm4pOwogCiBpbnQgbmV3X2d1ZXN0X2NyMyhtZm5fdCBtZm4pOwogCi0v
KiBSZWFkIGEgUFYgZ3Vlc3QncyBsMWUgdGhhdCBtYXBzIHRoaXMgbGluZWFy
IGFkZHJlc3MuICovCi1zdGF0aWMgaW5saW5lIGwxX3BnZW50cnlfdCBndWVz
dF9nZXRfZWZmX2wxZSh1bnNpZ25lZCBsb25nIGxpbmVhcikKKy8qCisgKiBS
ZWFkIHRoZSBndWVzdCdzIGwxZSB0aGF0IG1hcHMgdGhpcyBhZGRyZXNzLCBm
cm9tIHRoZSBrZXJuZWwtbW9kZQorICogcGFnZSB0YWJsZXMuCisgKi8KK3N0
YXRpYyBpbmxpbmUgbDFfcGdlbnRyeV90IGd1ZXN0X2dldF9lZmZfa2Vybl9s
MWUodW5zaWduZWQgbG9uZyBsaW5lYXIpCiB7Ci0gICAgbDFfcGdlbnRyeV90
IGwxZTsKKyAgICBsMV9wZ2VudHJ5X3QgbDFlID0gbDFlX2VtcHR5KCk7CiAK
ICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX3RyYW5zbGF0ZShjdXJyZW50LT5k
b21haW4pKTsKICAgICBBU1NFUlQoIXBhZ2luZ19tb2RlX2V4dGVybmFsKGN1
cnJlbnQtPmRvbWFpbikpOwogCi0gICAgaWYgKCB1bmxpa2VseSghX19hZGRy
X29rKGxpbmVhcikpIHx8Ci0gICAgICAgICBfX2NvcHlfZnJvbV91c2VyKCZs
MWUsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICZfX2xpbmVhcl9sMV90
YWJsZVtsMV9saW5lYXJfb2Zmc2V0KGxpbmVhcildLAotICAgICAgICAgICAg
ICAgICAgICAgICAgICBzaXplb2YobDFfcGdlbnRyeV90KSkgKQotICAgICAg
ICBsMWUgPSBsMWVfZW1wdHkoKTsKKyAgICBpZiAoIGxpa2VseShfX2FkZHJf
b2sobGluZWFyKSkgKQorICAgICAgICBsMWUgPSBwYWdlX3dhbGtfZ2V0X2wx
ZShjdXJyZW50LT5hcmNoLmd1ZXN0X3RhYmxlLCBsaW5lYXIpOwogCiAgICAg
cmV0dXJuIGwxZTsKIH0KZGlmZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9wdi9y
by1wYWdlLWZhdWx0LmMgYi94ZW4vYXJjaC94ODYvcHYvcm8tcGFnZS1mYXVs
dC5jCmluZGV4IDdmNmZiYzkyZmIuLjhkMDAwN2VkZTUgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9wdi9yby1wYWdlLWZhdWx0LmMKKysrIGIveGVuL2Fy
Y2gveDg2L3B2L3JvLXBhZ2UtZmF1bHQuYwpAQCAtMzQyLDcgKzM0Miw3IEBA
IGludCBwdl9yb19wYWdlX2ZhdWx0KHVuc2lnbmVkIGxvbmcgYWRkciwgc3Ry
dWN0IGNwdV91c2VyX3JlZ3MgKnJlZ3MpCiAgICAgYm9vbCBtbWlvX3JvOwog
CiAgICAgLyogQXR0ZW1wdCB0byByZWFkIHRoZSBQVEUgdGhhdCBtYXBzIHRo
ZSBWQSBiZWluZyBhY2Nlc3NlZC4gKi8KLSAgICBwdGUgPSBndWVzdF9nZXRf
ZWZmX2wxZShhZGRyKTsKKyAgICBwdGUgPSBndWVzdF9nZXRfZWZmX2tlcm5f
bDFlKGFkZHIpOwogCiAgICAgLyogV2UgYXJlIG9ubHkgbG9va2luZyBmb3Ig
cmVhZC1vbmx5IG1hcHBpbmdzICovCiAgICAgaWYgKCAoKGwxZV9nZXRfZmxh
Z3MocHRlKSAmIChfUEFHRV9QUkVTRU5UIHwgX1BBR0VfUlcpKSAhPSBfUEFH
RV9QUkVTRU5UKSApCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0
L21tLmMgYi94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMKaW5kZXggMTdjMzA1
NjVkYS4uZTFlZmVmNWM0YyAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4
Nl82NC9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAt
MTQzLDYgKzE0Myw2MiBAQCBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9s
MmUocGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKQogICAg
IHJldHVybiBsMmU7CiB9CiAKKy8qCisgKiBGb3Igbm93IG5vICJzZXRfYWNj
ZXNzZWQiIHBhcmFtZXRlciwgYXMgYWxsIGNhbGxlcnMgd2FudCBpdCBzZXQg
dG8gdHJ1ZS4KKyAqIEZvciBub3cgYWxzbyBubyAic2V0X2RpcnR5IiBwYXJh
bWV0ZXIsIGFzIGFsbCBjYWxsZXJzIGRlYWwgd2l0aCByL28KKyAqIG1hcHBp
bmdzLCBhbmQgd2UgZG9uJ3Qgd2FudCB0byBzZXQgdGhlIGRpcnR5IGJpdCB0
aGVyZSAoY29uZmxpY3RzIHdpdGgKKyAqIENFVC1TUykuIEhvd2V2ZXIsIGFz
IHRoZXJlIGFyZSBDUFVzIHdoaWNoIG1heSBzZXQgdGhlIGRpcnR5IGJpdCBv
biByL28KKyAqIFBURXMsIHRoZSBsb2dpYyBiZWxvdyB0b2xlcmF0ZXMgdGhl
IGJpdCBiZWNvbWluZyBzZXQgImJlaGluZCBvdXIgYmFja3MiLgorICovCits
MV9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMWUocGFnZXRhYmxlX3Qgcm9v
dCwgdW5zaWduZWQgbG9uZyBhZGRyKQoreworICAgIGwyX3BnZW50cnlfdCBs
MmUgPSBwYWdlX3dhbGtfZ2V0X2wyZShyb290LCBhZGRyKTsKKyAgICBtZm5f
dCBtZm4gPSBsMmVfZ2V0X21mbihsMmUpOworICAgIHN0cnVjdCBwYWdlX2lu
Zm8gKnBnOworICAgIGwxX3BnZW50cnlfdCBsMWUgPSBsMWVfZW1wdHkoKTsK
KworICAgIGlmICggIShsMmVfZ2V0X2ZsYWdzKGwyZSkgJiBfUEFHRV9QUkVT
RU5UKSB8fAorICAgICAgICAgKGwyZV9nZXRfZmxhZ3MobDJlKSAmIF9QQUdF
X1BTRSkgKQorICAgICAgICByZXR1cm4gbDFlX2VtcHR5KCk7CisKKyAgICBw
ZyA9IG1mbl90b19wYWdlKG1mbik7CisgICAgaWYgKCAhcGFnZV9sb2NrKHBn
KSApCisgICAgICAgIHJldHVybiBsMWVfZW1wdHkoKTsKKworICAgIGlmICgg
KHBnLT51LmludXNlLnR5cGVfaW5mbyAmIFBHVF90eXBlX21hc2spID09IFBH
VF9sMV9wYWdlX3RhYmxlICkKKyAgICB7CisgICAgICAgIGwxX3BnZW50cnlf
dCAqbDF0ID0gbWFwX2RvbWFpbl9wYWdlKG1mbik7CisKKyAgICAgICAgbDFl
ID0gbDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV07CisKKyAgICAgICAgaWYg
KCAobDFlX2dldF9mbGFncyhsMWUpICYgKF9QQUdFX0FDQ0VTU0VEIHwgX1BB
R0VfUFJFU0VOVCkpID09CisgICAgICAgICAgICAgX1BBR0VfUFJFU0VOVCAp
CisgICAgICAgIHsKKyAgICAgICAgICAgIGwxX3BnZW50cnlfdCBvbDFlID0g
bDFlOworCisgICAgICAgICAgICBsMWVfYWRkX2ZsYWdzKGwxZSwgX1BBR0Vf
QUNDRVNTRUQpOworICAgICAgICAgICAgLyoKKyAgICAgICAgICAgICAqIEJl
c3QgZWZmb3J0IG9ubHk7IHdpdGggdGhlIGxvY2sgaGVsZCB0aGUgcGFnZSBz
aG91bGRuJ3QKKyAgICAgICAgICAgICAqIGNoYW5nZSBhbnl3YXksIGV4Y2Vw
dCBmb3IgdGhlIGRpcnR5IGJpdCB0byBwZXJoYXBzIGJlY29tZSBzZXQuCisg
ICAgICAgICAgICAgKi8KKyAgICAgICAgICAgIHdoaWxlICggY21weGNoZygm
bDFlX2dldF9pbnRwdGUobDF0W2wxX3RhYmxlX29mZnNldChhZGRyKV0pLAor
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGwxZV9nZXRfaW50cHRlKG9s
MWUpLCBsMWVfZ2V0X2ludHB0ZShsMWUpKSAhPQorICAgICAgICAgICAgICAg
ICAgICBsMWVfZ2V0X2ludHB0ZShvbDFlKSAmJgorICAgICAgICAgICAgICAg
ICAgICAhKGwxZV9nZXRfZmxhZ3MobDFlKSAmIF9QQUdFX0RJUlRZKSApCisg
ICAgICAgICAgICB7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9mbGFncyhv
bDFlLCBfUEFHRV9ESVJUWSk7CisgICAgICAgICAgICAgICAgbDFlX2FkZF9m
bGFncyhsMWUsIF9QQUdFX0RJUlRZKTsKKyAgICAgICAgICAgIH0KKyAgICAg
ICAgfQorCisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwxdCk7CisgICAg
fQorCisgICAgcGFnZV91bmxvY2socGcpOworCisgICAgcmV0dXJuIGwxZTsK
K30KKwogdm9pZCAqZG9fcGFnZV93YWxrKHN0cnVjdCB2Y3B1ICp2LCB1bnNp
Z25lZCBsb25nIGFkZHIpCiB7CiAgICAgbDNfcGdlbnRyeV90IGwzZTsKZGlm
ZiAtLWdpdCBhL3hlbi9pbmNsdWRlL2FzbS14ODYvbW0uaCBiL3hlbi9pbmNs
dWRlL2FzbS14ODYvbW0uaAppbmRleCAxMDFkNDAyZDYyLi43MDE3YTE5M2I4
IDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L21tLmgKKysrIGIv
eGVuL2luY2x1ZGUvYXNtLXg4Ni9tbS5oCkBAIC01NzIsNiArNTcyLDcgQEAg
aW50IHZjcHVfZGVzdHJveV9wYWdldGFibGVzKHN0cnVjdCB2Y3B1ICopOwog
CiB2b2lkICpkb19wYWdlX3dhbGsoc3RydWN0IHZjcHUgKnYsIHVuc2lnbmVk
IGxvbmcgYWRkcik7CiBsMl9wZ2VudHJ5X3QgcGFnZV93YWxrX2dldF9sMmUo
cGFnZXRhYmxlX3Qgcm9vdCwgdW5zaWduZWQgbG9uZyBhZGRyKTsKK2wxX3Bn
ZW50cnlfdCBwYWdlX3dhbGtfZ2V0X2wxZShwYWdldGFibGVfdCByb290LCB1
bnNpZ25lZCBsb25nIGFkZHIpOwogCiAvKiBBbGxvY2F0b3IgZnVuY3Rpb25z
IGZvciBYZW4gcGFnZXRhYmxlcy4gKi8KIHZvaWQgKmFsbG9jX3hlbl9wYWdl
dGFibGUodm9pZCk7Cg==

--=separator
Content-Type: application/octet-stream;
 name="xsa286/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Disposition: attachment;
 filename="xsa286/0005-x86-mm-avoid-using-top-level-linear-page-tables-in-u.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IGF2b2lkIHVzaW5nIHRvcCBsZXZlbCBsaW5lYXIgcGFnZSB0
YWJsZXMgaW4geyx1bn1tYXBfZG9tYWluX3BhZ2UoKQoKTW92ZSB0aGUgcGFn
ZSB0YWJsZSByZWN1cnNpb24gdHdvIGxldmVscyBkb3duLiBUaGlzIGVudGFp
bHMgYXZvaWRpbmcKdG8gZnJlZSB0aGUgcmVjdXJzaXZlIG1hcHBpbmcgcHJl
bWF0dXJlbHkgaW4gZnJlZV9wZXJkb21haW5fbWFwcGluZ3MoKS4KClRoaXMg
aXMgcGFydCBvZiBYU0EtMjg2LgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8
amFubmhAZ29vZ2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogSmFuIEJldWxpY2gg
PGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogR2VvcmdlIER1bmxh
cCA8Z2VvcmdlLmR1bmxhcEBjaXRyaXguY29tPgpSZXZpZXdlZC1ieTogQW5k
cmV3IENvb3BlciA8YW5kcmV3LmNvb3BlcjNAY2l0cml4LmNvbT4KCmRpZmYg
LS1naXQgYS94ZW4vYXJjaC94ODYvZG9tYWluX3BhZ2UuYyBiL3hlbi9hcmNo
L3g4Ni9kb21haW5fcGFnZS5jCmluZGV4IGIwMzcyOGUxOGUuLmVkNmEyYmYw
ODEgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9kb21haW5fcGFnZS5jCisr
KyBiL3hlbi9hcmNoL3g4Ni9kb21haW5fcGFnZS5jCkBAIC02NSw3ICs2NSw4
IEBAIHZvaWQgX19pbml0IG1hcGNhY2hlX292ZXJyaWRlX2N1cnJlbnQoc3Ry
dWN0IHZjcHUgKnYpCiAjZGVmaW5lIG1hcGNhY2hlX2wyX2VudHJ5KGUpICgo
ZSkgPj4gUEFHRVRBQkxFX09SREVSKQogI2RlZmluZSBNQVBDQUNIRV9MMl9F
TlRSSUVTIChtYXBjYWNoZV9sMl9lbnRyeShNQVBDQUNIRV9FTlRSSUVTIC0g
MSkgKyAxKQogI2RlZmluZSBNQVBDQUNIRV9MMUVOVChpZHgpIFwKLSAgICBf
X2xpbmVhcl9sMV90YWJsZVtsMV9saW5lYXJfb2Zmc2V0KE1BUENBQ0hFX1ZJ
UlRfU1RBUlQgKyBwZm5fdG9fcGFkZHIoaWR4KSldCisgICAgKChsMV9wZ2Vu
dHJ5X3QgKikoTUFQQ0FDSEVfVklSVF9TVEFSVCB8IFwKKyAgICAgICAgICAg
ICAgICAgICAgICAoKEwyX1BBR0VUQUJMRV9FTlRSSUVTIC0gMSkgPDwgTDJf
UEFHRVRBQkxFX1NISUZUKSkpW2lkeF0KIAogdm9pZCAqbWFwX2RvbWFpbl9w
YWdlKG1mbl90IG1mbikKIHsKQEAgLTIzNSw2ICsyMzYsNyBAQCBpbnQgbWFw
Y2FjaGVfZG9tYWluX2luaXQoc3RydWN0IGRvbWFpbiAqZCkKIHsKICAgICBz
dHJ1Y3QgbWFwY2FjaGVfZG9tYWluICpkY2FjaGUgPSAmZC0+YXJjaC5wdi5t
YXBjYWNoZTsKICAgICB1bnNpZ25lZCBpbnQgYml0bWFwX3BhZ2VzOworICAg
IGludCByYzsKIAogICAgIEFTU0VSVChpc19wdl9kb21haW4oZCkpOwogCkBA
IC0yNDMsOCArMjQ1LDEwIEBAIGludCBtYXBjYWNoZV9kb21haW5faW5pdChz
dHJ1Y3QgZG9tYWluICpkKQogICAgICAgICByZXR1cm4gMDsKICNlbmRpZgog
CisgICAgQlVJTERfQlVHX09OKE1BUENBQ0hFX1ZJUlRfU1RBUlQgJiAoKDEg
PDwgTDNfUEFHRVRBQkxFX1NISUZUKSAtIDEpKTsKICAgICBCVUlMRF9CVUdf
T04oTUFQQ0FDSEVfVklSVF9FTkQgKyBQQUdFX1NJWkUgKiAoMyArCi0gICAg
ICAgICAgICAgICAgIDIgKiBQRk5fVVAoQklUU19UT19MT05HUyhNQVBDQUNI
RV9FTlRSSUVTKSAqIHNpemVvZihsb25nKSkpID4KKyAgICAgICAgICAgICAg
ICAgMiAqIFBGTl9VUChCSVRTX1RPX0xPTkdTKE1BUENBQ0hFX0VOVFJJRVMp
ICogc2l6ZW9mKGxvbmcpKSkgKworICAgICAgICAgICAgICAgICAoMVUgPDwg
TDJfUEFHRVRBQkxFX1NISUZUKSA+CiAgICAgICAgICAgICAgICAgIE1BUENB
Q0hFX1ZJUlRfU1RBUlQgKyAoUEVSRE9NQUlOX1NMT1RfTUJZVEVTIDw8IDIw
KSk7CiAgICAgYml0bWFwX3BhZ2VzID0gUEZOX1VQKEJJVFNfVE9fTE9OR1Mo
TUFQQ0FDSEVfRU5UUklFUykgKiBzaXplb2YobG9uZykpOwogICAgIGRjYWNo
ZS0+aW51c2UgPSAodm9pZCAqKU1BUENBQ0hFX1ZJUlRfRU5EICsgUEFHRV9T
SVpFOwpAQCAtMjUzLDkgKzI1NywyNSBAQCBpbnQgbWFwY2FjaGVfZG9tYWlu
X2luaXQoc3RydWN0IGRvbWFpbiAqZCkKIAogICAgIHNwaW5fbG9ja19pbml0
KCZkY2FjaGUtPmxvY2spOwogCi0gICAgcmV0dXJuIGNyZWF0ZV9wZXJkb21h
aW5fbWFwcGluZyhkLCAodW5zaWduZWQgbG9uZylkY2FjaGUtPmludXNlLAot
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgMiAqIGJpdG1h
cF9wYWdlcyArIDEsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBOSUwobDFfcGdlbnRyeV90ICopLCBOVUxMKTsKKyAgICByYyA9IGNy
ZWF0ZV9wZXJkb21haW5fbWFwcGluZyhkLCAodW5zaWduZWQgbG9uZylkY2Fj
aGUtPmludXNlLAorICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IDIgKiBiaXRtYXBfcGFnZXMgKyAxLAorICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIE5JTChsMV9wZ2VudHJ5X3QgKiksIE5VTEwpOworICAg
IGlmICggIXJjICkKKyAgICB7CisgICAgICAgIC8qCisgICAgICAgICAqIElu
c3RhbGwgbWFwcGluZyBvZiBvdXIgTDIgdGFibGUgaW50byBpdHMgb3duIGxh
c3Qgc2xvdCwgZm9yIGVhc3kKKyAgICAgICAgICogYWNjZXNzIHRvIHRoZSBM
MSBlbnRyaWVzIHZpYSBNQVBDQUNIRV9MMUVOVCgpLgorICAgICAgICAgKi8K
KyAgICAgICAgbDNfcGdlbnRyeV90ICpsM3QgPSBfX21hcF9kb21haW5fcGFn
ZShkLT5hcmNoLnBlcmRvbWFpbl9sM19wZyk7CisgICAgICAgIGwzX3BnZW50
cnlfdCBsM2UgPSBsM3RbbDNfdGFibGVfb2Zmc2V0KE1BUENBQ0hFX1ZJUlRf
RU5EKV07CisgICAgICAgIGwyX3BnZW50cnlfdCAqbDJ0ID0gbWFwX2wydF9m
cm9tX2wzZShsM2UpOworCisgICAgICAgIGwyZV9nZXRfaW50cHRlKGwydFtM
Ml9QQUdFVEFCTEVfRU5UUklFUyAtIDFdKSA9IGwzZV9nZXRfaW50cHRlKGwz
ZSk7CisgICAgICAgIHVubWFwX2RvbWFpbl9wYWdlKGwydCk7CisgICAgICAg
IHVubWFwX2RvbWFpbl9wYWdlKGwzdCk7CisgICAgfQorCisgICAgcmV0dXJu
IHJjOwogfQogCiBpbnQgbWFwY2FjaGVfdmNwdV9pbml0KHN0cnVjdCB2Y3B1
ICp2KQpAQCAtMzQ2LDcgKzM2Niw3IEBAIG1mbl90IGRvbWFpbl9wYWdlX21h
cF90b19tZm4oY29uc3Qgdm9pZCAqcHRyKQogICAgIGVsc2UKICAgICB7CiAg
ICAgICAgIEFTU0VSVCh2YSA+PSBNQVBDQUNIRV9WSVJUX1NUQVJUICYmIHZh
IDwgTUFQQ0FDSEVfVklSVF9FTkQpOwotICAgICAgICBwbDFlID0gJl9fbGlu
ZWFyX2wxX3RhYmxlW2wxX2xpbmVhcl9vZmZzZXQodmEpXTsKKyAgICAgICAg
cGwxZSA9ICZNQVBDQUNIRV9MMUVOVChQRk5fRE9XTih2YSAtIE1BUENBQ0hF
X1ZJUlRfU1RBUlQpKTsKICAgICB9CiAKICAgICByZXR1cm4gbDFlX2dldF9t
Zm4oKnBsMWUpOwpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94
ZW4vYXJjaC94ODYvbW0uYwppbmRleCA4YzhmMDU0MTg2Li5mMTg4YzBmNjdm
IDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJj
aC94ODYvbW0uYwpAQCAtNTk0NCw2ICs1OTQ0LDEwIEBAIHZvaWQgZnJlZV9w
ZXJkb21haW5fbWFwcGluZ3Moc3RydWN0IGRvbWFpbiAqZCkKICAgICAgICAg
ICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgIHN0cnVjdCBwYWdlX2lu
Zm8gKmwxcGcgPSBsMmVfZ2V0X3BhZ2UobDJ0YWJbal0pOwogCisgICAgICAg
ICAgICAgICAgICAgIC8qIG1hcGNhY2hlX2RvbWFpbl9pbml0KCkgaW5zdGFs
bHMgYSByZWN1cnNpdmUgZW50cnkuICovCisgICAgICAgICAgICAgICAgICAg
IGlmICggbDFwZyA9PSBsMnBnICkKKyAgICAgICAgICAgICAgICAgICAgICAg
IGNvbnRpbnVlOworCiAgICAgICAgICAgICAgICAgICAgIGlmICggbDJlX2dl
dF9mbGFncyhsMnRhYltqXSkgJiBfUEFHRV9BVkFJTDAgKQogICAgICAgICAg
ICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgICAgICAgICBsMV9wZ2Vu
dHJ5X3QgKmwxdGFiID0gX19tYXBfZG9tYWluX3BhZ2UobDFwZyk7Cg==

--=separator
Content-Type: application/octet-stream;
 name="xsa286/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Disposition: attachment;
 filename="xsa286/0006-x86-mm-restrict-use-of-linear-page-tables-to-shadow-.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiB4ODYvbW06IHJlc3RyaWN0IHVzZSBvZiBsaW5lYXIgcGFnZSB0YWJsZXMg
dG8gc2hhZG93IG1vZGUgY29kZQoKT3RoZXIgY29kZSBkb2VzIG5vdCByZXF1
aXJlIHRoZW0gdG8gYmUgc2V0IHVwIGFueW1vcmUsIHNvIHJlc3RyaWN0IHdo
ZW4KdG8gcG9wdWxhdGUgdGhlIHJlc3BlY3RpdmUgTDQgc2xvdCBhbmQgcmVk
dWNlIHZpc2liaWxpdHkgb2YgdGhlCmFjY2Vzc29ycy4KCldoaWxlIHdpdGgg
dGhlIHJlbW92YWwgb2YgYWxsIHVzZXMgdGhlIHZ1bG5lcmFiaWxpdHkgaXMg
YWN0dWFsbHkgZml4ZWQsCnJlbW92aW5nIHRoZSBjcmVhdGlvbiBvZiB0aGUg
bGluZWFyIG1hcHBpbmcgYWRkcyBhbiBleHRyYSBsYXllciBvZgpwcm90ZWN0
aW9uLiBTaW1pbGFybHkgcmVkdWNpbmcgdmlzaWJpbGl0eSBvZiB0aGUgYWNj
ZXNzb3JzIG1vc3RseQplbGltaW5hdGVzIHRoZSByaXNrIG9mIHVuZHVlIHJl
LWludHJvZHVjdGlvbiBvZiB1c2VzIG9mIHRoZSBsaW5lYXIKbWFwcGluZ3Mu
CgpUaGlzIGlzIChub3Qgc3RyaWN0bHkpIHBhcnQgb2YgWFNBLTI4Ni4KClNp
Z25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4K
UmV2aWV3ZWQtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0
cml4LmNvbT4KUmV2aWV3ZWQtYnk6IEFuZHJldyBDb29wZXIgPGFuZHJldy5j
b29wZXIzQGNpdHJpeC5jb20+CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2
L21tLmMgYi94ZW4vYXJjaC94ODYvbW0uYwppbmRleCBmMTg4YzBmNjdmLi44
YzUxYzI5MTNkIDEwMDY0NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysg
Yi94ZW4vYXJjaC94ODYvbW0uYwpAQCAtMTY3Nyw5ICsxNjc3LDEwIEBAIHZv
aWQgaW5pdF94ZW5fbDRfc2xvdHMobDRfcGdlbnRyeV90ICpsNHQsIG1mbl90
IGw0bWZuLAogICAgIGw0dFtsNF90YWJsZV9vZmZzZXQoUENJX01DRkdfVklS
VF9TVEFSVCldID0KICAgICAgICAgaWRsZV9wZ190YWJsZVtsNF90YWJsZV9v
ZmZzZXQoUENJX01DRkdfVklSVF9TVEFSVCldOwogCi0gICAgLyogU2xvdCAy
NTg6IFNlbGYgbGluZWFyIG1hcHBpbmdzLiAqLworICAgIC8qIFNsb3QgMjU4
OiBTZWxmIGxpbmVhciBtYXBwaW5ncyAoc2hhZG93IHB0IG9ubHkpLiAqLwog
ICAgIEFTU0VSVCghbWZuX2VxKGw0bWZuLCBJTlZBTElEX01GTikpOwogICAg
IGw0dFtsNF90YWJsZV9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQpXSA9
CisgICAgICAgICFzaGFkb3dfbW9kZV9leHRlcm5hbChkKSA/IGw0ZV9lbXB0
eSgpIDoKICAgICAgICAgbDRlX2Zyb21fbWZuKGw0bWZuLCBfX1BBR0VfSFlQ
RVJWSVNPUl9SVyk7CiAKICAgICAvKiBTbG90IDI1OTogU2hhZG93IGxpbmVh
ciBtYXBwaW5ncyAoaWYgYXBwbGljYWJsZSkgLiovCmRpZmYgLS1naXQgYS94
ZW4vYXJjaC94ODYvbW0vc2hhZG93L3ByaXZhdGUuaCBiL3hlbi9hcmNoL3g4
Ni9tbS9zaGFkb3cvcHJpdmF0ZS5oCmluZGV4IDk5MjQ2MWQ0YmMuLmE3YWU5
OWYwODYgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJp
dmF0ZS5oCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS9zaGFkb3cvcHJpdmF0ZS5o
CkBAIC0xMzksNiArMTM5LDE1IEBAIGVudW0gewogIyBkZWZpbmUgR1VFU1Rf
UFRFX1NJWkUgNAogI2VuZGlmCiAKKy8qIFdoZXJlIHRvIGZpbmQgZWFjaCBs
ZXZlbCBvZiB0aGUgbGluZWFyIG1hcHBpbmcgKi8KKyNkZWZpbmUgX19saW5l
YXJfbDFfdGFibGUgKChsMV9wZ2VudHJ5X3QgKikoTElORUFSX1BUX1ZJUlRf
U1RBUlQpKQorI2RlZmluZSBfX2xpbmVhcl9sMl90YWJsZSBcCisgKChsMl9w
Z2VudHJ5X3QgKikoX19saW5lYXJfbDFfdGFibGUgKyBsMV9saW5lYXJfb2Zm
c2V0KExJTkVBUl9QVF9WSVJUX1NUQVJUKSkpCisjZGVmaW5lIF9fbGluZWFy
X2wzX3RhYmxlIFwKKyAoKGwzX3BnZW50cnlfdCAqKShfX2xpbmVhcl9sMl90
YWJsZSArIGwyX2xpbmVhcl9vZmZzZXQoTElORUFSX1BUX1ZJUlRfU1RBUlQp
KSkKKyNkZWZpbmUgX19saW5lYXJfbDRfdGFibGUgXAorICgobDRfcGdlbnRy
eV90ICopKF9fbGluZWFyX2wzX3RhYmxlICsgbDNfbGluZWFyX29mZnNldChM
SU5FQVJfUFRfVklSVF9TVEFSVCkpKQorCiAvKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqCiAgKiBBdWRpdGluZyByb3V0aW5lcwogICovCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYveDg2XzY0L21tLmMgYi94ZW4vYXJj
aC94ODYveDg2XzY0L21tLmMKaW5kZXggZTFlZmVmNWM0Yy4uZWFlYjgyNzA5
OSAxMDA2NDQKLS0tIGEveGVuL2FyY2gveDg2L3g4Nl82NC9tbS5jCisrKyBi
L3hlbi9hcmNoL3g4Ni94ODZfNjQvbW0uYwpAQCAtODMyLDkgKzgzMiw2IEBA
IHZvaWQgX19pbml0IHBhZ2luZ19pbml0KHZvaWQpCiAKICAgICBtYWNoaW5l
X3RvX3BoeXNfbWFwcGluZ192YWxpZCA9IDE7CiAKLSAgICAvKiBTZXQgdXAg
bGluZWFyIHBhZ2UgdGFibGUgbWFwcGluZy4gKi8KLSAgICBsNGVfd3JpdGUo
JmlkbGVfcGdfdGFibGVbbDRfdGFibGVfb2Zmc2V0KExJTkVBUl9QVF9WSVJU
X1NUQVJUKV0sCi0gICAgICAgICAgICAgIGw0ZV9mcm9tX3BhZGRyKF9fcGEo
aWRsZV9wZ190YWJsZSksIF9fUEFHRV9IWVBFUlZJU09SX1JXKSk7CiAgICAg
cmV0dXJuOwogCiAgbm9tZW06CmRpZmYgLS1naXQgYS94ZW4vaW5jbHVkZS9h
c20teDg2L2NvbmZpZy5oIGIveGVuL2luY2x1ZGUvYXNtLXg4Ni9jb25maWcu
aAppbmRleCBlYjI1ZmM0NzU4Li5jNmY4YjE4OTQyIDEwMDY0NAotLS0gYS94
ZW4vaW5jbHVkZS9hc20teDg2L2NvbmZpZy5oCisrKyBiL3hlbi9pbmNsdWRl
L2FzbS14ODYvY29uZmlnLmgKQEAgLTE5Nyw3ICsxOTcsNyBAQCBleHRlcm4g
dW5zaWduZWQgY2hhciBib290X2VkaWRfaW5mb1sxMjhdOwogICovCiAjZGVm
aW5lIFBDSV9NQ0ZHX1ZJUlRfU1RBUlQgICAgIChQTUw0X0FERFIoMjU3KSkK
ICNkZWZpbmUgUENJX01DRkdfVklSVF9FTkQgICAgICAgKFBDSV9NQ0ZHX1ZJ
UlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQotLyogU2xvdCAyNTg6IGxp
bmVhciBwYWdlIHRhYmxlIChndWVzdCB0YWJsZSkuICovCisvKiBTbG90IDI1
ODogbGluZWFyIHBhZ2UgdGFibGUgKG1vbml0b3IgdGFibGUsIEhWTSBvbmx5
KS4gKi8KICNkZWZpbmUgTElORUFSX1BUX1ZJUlRfU1RBUlQgICAgKFBNTDRf
QUREUigyNTgpKQogI2RlZmluZSBMSU5FQVJfUFRfVklSVF9FTkQgICAgICAo
TElORUFSX1BUX1ZJUlRfU1RBUlQgKyBQTUw0X0VOVFJZX0JZVEVTKQogLyog
U2xvdCAyNTk6IGxpbmVhciBwYWdlIHRhYmxlIChzaGFkb3cgdGFibGUpLiAq
LwpkaWZmIC0tZ2l0IGEveGVuL2luY2x1ZGUvYXNtLXg4Ni9wYWdlLmggYi94
ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaAppbmRleCBmNjMyYWZmYWVmLi5m
ZDI1NzQyNjdjIDEwMDY0NAotLS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L3Bh
Z2UuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L3BhZ2UuaApAQCAtMjk0
LDE5ICsyOTQsNiBAQCB2b2lkIGNvcHlfcGFnZV9zc2UyKHZvaWQgKiwgY29u
c3Qgdm9pZCAqKTsKICNkZWZpbmUgdm1hcF90b19tZm4odmEpICAgICBfbWZu
KGwxZV9nZXRfcGZuKCp2aXJ0X3RvX3hlbl9sMWUoKHVuc2lnbmVkIGxvbmcp
KHZhKSkpKQogI2RlZmluZSB2bWFwX3RvX3BhZ2UodmEpICAgIG1mbl90b19w
YWdlKHZtYXBfdG9fbWZuKHZhKSkKIAotI2VuZGlmIC8qICFkZWZpbmVkKF9f
QVNTRU1CTFlfXykgKi8KLQotLyogV2hlcmUgdG8gZmluZCBlYWNoIGxldmVs
IG9mIHRoZSBsaW5lYXIgbWFwcGluZyAqLwotI2RlZmluZSBfX2xpbmVhcl9s
MV90YWJsZSAoKGwxX3BnZW50cnlfdCAqKShMSU5FQVJfUFRfVklSVF9TVEFS
VCkpCi0jZGVmaW5lIF9fbGluZWFyX2wyX3RhYmxlIFwKLSAoKGwyX3BnZW50
cnlfdCAqKShfX2xpbmVhcl9sMV90YWJsZSArIGwxX2xpbmVhcl9vZmZzZXQo
TElORUFSX1BUX1ZJUlRfU1RBUlQpKSkKLSNkZWZpbmUgX19saW5lYXJfbDNf
dGFibGUgXAotICgobDNfcGdlbnRyeV90ICopKF9fbGluZWFyX2wyX3RhYmxl
ICsgbDJfbGluZWFyX29mZnNldChMSU5FQVJfUFRfVklSVF9TVEFSVCkpKQot
I2RlZmluZSBfX2xpbmVhcl9sNF90YWJsZSBcCi0gKChsNF9wZ2VudHJ5X3Qg
KikoX19saW5lYXJfbDNfdGFibGUgKyBsM19saW5lYXJfb2Zmc2V0KExJTkVB
Ul9QVF9WSVJUX1NUQVJUKSkpCi0KLQotI2lmbmRlZiBfX0FTU0VNQkxZX18K
IGV4dGVybiByb290X3BnZW50cnlfdCBpZGxlX3BnX3RhYmxlW1JPT1RfUEFH
RVRBQkxFX0VOVFJJRVNdOwogZXh0ZXJuIGwyX3BnZW50cnlfdCAgKmNvbXBh
dF9pZGxlX3BnX3RhYmxlX2wyOwogZXh0ZXJuIHVuc2lnbmVkIGludCAgIG0y
cF9jb21wYXRfdnN0YXJ0Owo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:01:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:01:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9132.24634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqKL-0006sJ-SQ; Tue, 20 Oct 2020 12:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9132.24634; Tue, 20 Oct 2020 12:01:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqKL-0006rz-LK; Tue, 20 Oct 2020 12:01:17 +0000
Received: by outflank-mailman (input) for mailman id 9132;
 Tue, 20 Oct 2020 12:01:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kUqKJ-0006DX-KL
 for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:01:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 606dbaa2-065d-4b85-8699-989412608bdc;
 Tue, 20 Oct 2020 12:00:51 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJo-0001KQ-1l; Tue, 20 Oct 2020 12:00:44 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJo-00021v-0d; Tue, 20 Oct 2020 12:00:44 +0000
X-Inumbo-ID: 606dbaa2-065d-4b85-8699-989412608bdc
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=uHlIz/z4uRF9R68bRtWybynZpCQOwe0wXxuZRBPwdKc=; b=64AmIrQTuhYHXQOeXzNZQAY2vr
	z/RhTTXQnVOqsnfb6VlDMm6K0uN3Grhr9/IMzbU5SRjTjKTQixbpjNWHQm/gr0Kb4LVAex4RTk3mk
	/acwcS0JANeTIgfxP0a3ebGq8SvoEWpQoqxibfX2ElfJtr5VnNoCrKqMGb9uzj/Hyd7A=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 345 v3 - x86: Race condition in Xen mapping
 code
Message-Id: <E1kUqJo-00021v-0d@xenbits.xenproject.org>
Date: Tue, 20 Oct 2020 12:00:44 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-345
                              version 3

                x86: Race condition in Xen mapping code

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The Xen code handling the updating of the hypervisor's own pagetables
tries to use 2MiB and 1GiB superpages as much as possible to maximize
TLB efficiency.  Some of the operations for checking and coalescing
superpages take a non-negligible amount of time; to avoid potential
lock contention, this code also tries to avoid holding locks for the
entire operation.

Unfortunately, several potential race conditions were not considered;
precisely-timed guest actions could potentially lead to the code
writing to a page which has been freed (and thus potentially already
reused).

IMPACT
======

A malicious guest can cause a host denial-of-service.  Data corruption
or privilege escalation cannot be ruled out.

VULNERABLE SYSTEMS
==================

Versions of Xen from at least 3.2 onward are affected.

Only x86 systems are vulnerable.  ARM systems are not vulnerable.

Guests can only exercise the vulnerability if they have hardware
devices passed through to them.  Guests without passthrough configured
cannot exploit the vulnerability.

Furthermore, HVM and PVH guests can only exercise the vulnerability if
they are running in shadow mode, and only when running on VT-x capable
hardware (as opposed to SVM); this is believed to mean Intel, Centaur
and Shanghai CPUs.

MITIGATION
==========

Running all guests in HVM or PVH mode, in each case with HAP enabled,
prevents those guests from exploiting the vulnerability.

CREDITS
=======

This issue was discovered by Hongyan Xia of Amazon.

RESOLUTION
==========

Applying the appropriate attached patch resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa345/*.patch           xen-unstable
xsa345-4.14/*.patch      Xen 4.14.x
xsa345-4.13/*.patch      Xen 4.12.x, Xen 4.13.x
xsa345-4.11/*.patch      Xen 4.11.x
xsa345-4.10/*.patch      Xen 4.10.x

$ sha256sum xsa345* xsa345*/*
c8b9445b05aa4c585d9817c2e6cbf08466452a15381ca5b9a0224a377522edf9  xsa345.meta
4ed69dce620449bedda29f3ce1ed767908d2bbeb888701e7c4c2461216b724f7  xsa345-4.10/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch
98d3b171b197c1ff9f26ff70499a0cde3b23d048d622b12bf2ea0899de4f9e7f  xsa345-4.10/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch
78c4be2f1747051d13869001180ee25bdeabe5e8138d0604a33db610b24e38f1  xsa345-4.10/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch
4abd8271a70593fcde683071fdf0ac342ff9b0859b60c9790b14dd7e5ae85129  xsa345-4.11/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch
3209195c1a7e8a6186b704d6bb791a3fb3c251d59e15b42bcb0ecc0d38f13a4f  xsa345-4.11/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch
7e73f6c14718a0d4b25b4453b45c20bf265bd54c91b77678815be1ef7beae61f  xsa345-4.11/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch
b68b82911c96feee9d05abcddf174c2f6b278829bc8c3bf3062739de8c4704b2  xsa345-4.12/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch
fe2a1568a3e273ae01b3984c193e75aea16da53c6c9db27d21a2196d0f204c6e  xsa345-4.12/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch
22c98f4a264bc6b15ed29da8698a733947849c16a3e9da58de88bf16986b6aad  xsa345-4.12/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch
16299d885c19e1cd378a856caf8c1d1365c341bea648c0a0d5f24ae7d56015ae  xsa345-4.13/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch
b820061c242c7fa4da44cbb44fa16e0d0542c16815a89699385da0c87321f7ea  xsa345-4.13/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch
8a87ac2478c9bda6ef28c480b256448d51393a5e04f6e8a68ea29d9aeba92e47  xsa345-4.13/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch
acf093741fecccccce0018d4a5c0f5dba367373dd1d6d04ed76ff3f178579670  xsa345-4.14/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch
616f2547b4bb6d5eb9f853b1659e6e2a1fc0f67866665f4f6cdd8d763effcdfc  xsa345-4.14/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch
17ae72d2af6759da17ce777e0fc9eef8f8eb6be3fe6d5b38b3589f641fc0f918  xsa345-4.14/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch
65c56cb4d34ff4e97220311b303c09b54bfa44bcf4adc8e81d4a50c50eeb6b95  xsa345/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch
5512bd167c29ba7da06b2ace1397fc43ed33a362174ea927d6ca3f9bdd31748b  xsa345/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch
392524c9b0a01618e6c86a39dc1c68288065300b49548e29e9e6672947858060  xsa345/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch
$
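
Before applying the patches, the extracted attachments can be checked
against the list above with `sha256sum --check`.  The snippet below is
an illustrative, self-contained sketch: the throwaway file and checksum
list are stand-ins, not part of the advisory; real use would check the
advisory's attachments against the published list.

```shell
# Create a throwaway file and a checksum list for it, purely to
# demonstrate the verification step.
tmp=$(mktemp -d)
printf 'patch body\n' > "$tmp/xsa345-demo.patch"
( cd "$tmp" && sha256sum xsa345-demo.patch > SHA256SUMS )
# Verify: prints "xsa345-demo.patch: OK" and exits 0 on success.
( cd "$tmp" && sha256sum --check SHA256SUMS )
```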

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+OzqoMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZLt4H/2hHMHfpQsPiWUXQj6/SXmjZrnIuBsBeP6Hno3p+
aKVzdFJkdBHN6thRlIgir1tffawxbzrFG4ARN3A4mBfdEJYFMLo79v6dn1FtCdzw
OFdI95/sZ+zeOR8InfjedX67S0fNzVW4QkU2dpS5pwupdn+wg+Z4313FIyV7Oteo
sbN8dCeCn9t2mDBXa6D9Tyhc5iTfPBU09AZWh29wjnjGH4nOgarDwHX4x7VzZLyY
CB18RZ/Ezwud3thlsZdLWfzGOvpRDMKFq2pYwBHd3Dc7cSOLRGf6x8FLAHVc7XzR
a5cLY0oYOppJa++a/yyG8pKs7O410943SZ7292mDv0hwjnw=
=pVu8
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa345.meta"
Content-Disposition: attachment; filename="xsa345.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNDUsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxNzE5Zjc5YTBlZmQzNmQxNTgzN2M1MTk4MjE3M2RkMWMy
ODdkY2VkIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAy
ODYKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsKICAgICAg
ICAgICAgInhzYTM0NS00LjEwLyoucGF0Y2giCiAgICAgICAgICBdCiAgICAg
ICAgfQogICAgICB9CiAgICB9LAogICAgIjQuMTEiOiB7CiAgICAgICJSZWNp
cGVzIjogewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVm
IjogIjM2MzBhMzY3ODU0Yzk4YmJmOGU3NDdkMDllZWFiN2U2OGYzNzAwMDMi
LAogICAgICAgICAgIlByZXJlcXMiOiBbCiAgICAgICAgICAgIDI4NgogICAg
ICAgICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAi
eHNhMzQ1LTQuMTEvKi5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9CiAg
ICAgIH0KICAgIH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMiOiB7
CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAiNjg4
ODAxNzM5MmFjMjViNWU1ODg1NTQwMzA2NDJhZmZhYzI1YTk1ZCIsCiAgICAg
ICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMjg2CiAgICAgICAgICBd
LAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2EzNDUt
NC4xMy8qLnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQog
ICAgfSwKICAgICI0LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAg
ICAieGVuIjogewogICAgICAgICAgIlN0YWJsZVJlZiI6ICI4ZTdlNTg1N2Ey
MDNjOWQ5ZGY3NzMzZmQ2ODc2ODU1NWM3ZTc2ODM5IiwKICAgICAgICAgICJQ
cmVyZXFzIjogWwogICAgICAgICAgICAyODYKICAgICAgICAgIF0sCiAgICAg
ICAgICAiUGF0Y2hlcyI6IFsKICAgICAgICAgICAgInhzYTM0NS00LjEzLyou
cGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAog
ICAgIjQuMTQiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4i
OiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogImM5M2I1MjBhNDFmMjc4N2Rk
NzZiZmIyZTQ1NDgzNmQxZDU3ODc1MDUiLAogICAgICAgICAgIlByZXJlcXMi
OiBbCiAgICAgICAgICAgIDI4NgogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzQ1LTQuMTQvKi5wYXRjaCIK
ICAgICAgICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAibWFz
dGVyIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewog
ICAgICAgICAgIlN0YWJsZVJlZiI6ICI1OWIyN2YzNjBlM2Q5ZGMwMzc4YzEy
ODhlNjdhOTFmYTQxYTc3MTU4IiwKICAgICAgICAgICJQcmVyZXFzIjogWwog
ICAgICAgICAgICAyODYKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hl
cyI6IFsKICAgICAgICAgICAgInhzYTM0NS8qLnBhdGNoIgogICAgICAgICAg
XQogICAgICAgIH0KICAgICAgfQogICAgfQogIH0KfQ==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.10/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Disposition: attachment;
 filename="xsa345-4.10/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Transfer-Encoding: base64

RnJvbSAzZWNkZmY0OGMyZTdmMDY0MDA2NWRmNDgzNjFhNTI2NTExNjVjZmRh
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQxICswMDAwClN1YmplY3Q6IFtQQVRDSCAxLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbWFwX3BhZ2VzX3RvX3hlbiB0byBoYXZlIG9ubHkgYSBzaW5nbGUKIGV4
aXQgcGF0aAoKV2Ugd2lsbCBzb29uIG5lZWQgdG8gcGVyZm9ybSBjbGVhbi11
cHMgYmVmb3JlIHJldHVybmluZy4KCk5vIGZ1bmN0aW9uYWwgY2hhbmdlLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDUuCgpSZXBvcnRlZC1ieTogSG9uZ3lh
biBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IFdl
aSBMaXUgPHdlaS5saXUyQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEhv
bmd5YW4gWGlhIDxob25neXhpYUBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+CkFj
a2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+Ci0tLQog
eGVuL2FyY2gveDg2L21tLmMgfCAxNyArKysrKysrKysrKy0tLS0tLQogMSBm
aWxlIGNoYW5nZWQsIDExIGluc2VydGlvbnMoKyksIDYgZGVsZXRpb25zKC0p
CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJjaC94
ODYvbW0uYwppbmRleCBmZDczNGZmOTQ3Li5mNmRmMGE0MWYxIDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvbW0u
YwpAQCAtNTExMiw2ICs1MTEyLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgbDJfcGdlbnRyeV90ICpwbDJlLCBvbDJlOwogICAgIGwxX3BnZW50
cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CisgICAg
aW50IHJjID0gLUVOT01FTTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhvbGRm
KSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50IG9f
ID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUxMzIsNyArNTEz
Myw4IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAgICAgICBsM19wZ2Vu
dHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7CiAK
ICAgICAgICAgaWYgKCAhcGwzZSApCi0gICAgICAgICAgICByZXR1cm4gLUVO
T01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgIG9sM2Ug
PSAqcGwzZTsKIAogICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAmJgpA
QCAtNTIxOCw3ICs1MjIwLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAK
ICAgICAgICAgICAgIHBsMmUgPSBhbGxvY194ZW5fcGFnZXRhYmxlKCk7CiAg
ICAgICAgICAgICBpZiAoIHBsMmUgPT0gTlVMTCApCi0gICAgICAgICAgICAg
ICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAgICAgZ290byBvdXQ7
CiAKICAgICAgICAgICAgIGZvciAoIGkgPSAwOyBpIDwgTDJfUEFHRVRBQkxF
X0VOVFJJRVM7IGkrKyApCiAgICAgICAgICAgICAgICAgbDJlX3dyaXRlKHBs
MmUgKyBpLApAQCAtNTI0Nyw3ICs1MjQ5LDcgQEAgaW50IG1hcF9wYWdlc190
b194ZW4oCiAKICAgICAgICAgcGwyZSA9IHZpcnRfdG9feGVuX2wyZSh2aXJ0
KTsKICAgICAgICAgaWYgKCAhcGwyZSApCi0gICAgICAgICAgICByZXR1cm4g
LUVOT01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OwogCiAgICAgICAgIGlm
ICggKCgoKHZpcnQgPj4gUEFHRV9TSElGVCkgfCBtZm4pICYKICAgICAgICAg
ICAgICAgICgoMXUgPDwgUEFHRVRBQkxFX09SREVSKSAtIDEpKSA9PSAwKSAm
JgpAQCAtNTI4OSw3ICs1MjkxLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgICAgICAgICB7CiAgICAgICAgICAgICAgICAgcGwxZSA9IHZpcnRf
dG9feGVuX2wxZSh2aXJ0KTsKICAgICAgICAgICAgICAgICBpZiAoIHBsMWUg
PT0gTlVMTCApCi0gICAgICAgICAgICAgICAgICAgIHJldHVybiAtRU5PTUVN
OworICAgICAgICAgICAgICAgICAgICBnb3RvIG91dDsKICAgICAgICAgICAg
IH0KICAgICAgICAgICAgIGVsc2UgaWYgKCBsMmVfZ2V0X2ZsYWdzKCpwbDJl
KSAmIF9QQUdFX1BTRSApCiAgICAgICAgICAgICB7CkBAIC01MzE1LDcgKzUz
MTcsNyBAQCBpbnQgbWFwX3BhZ2VzX3RvX3hlbigKIAogICAgICAgICAgICAg
ICAgIHBsMWUgPSBhbGxvY194ZW5fcGFnZXRhYmxlKCk7CiAgICAgICAgICAg
ICAgICAgaWYgKCBwbDFlID09IE5VTEwgKQotICAgICAgICAgICAgICAgICAg
ICByZXR1cm4gLUVOT01FTTsKKyAgICAgICAgICAgICAgICAgICAgZ290byBv
dXQ7CiAKICAgICAgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8IEwxX1BB
R0VUQUJMRV9FTlRSSUVTOyBpKysgKQogICAgICAgICAgICAgICAgICAgICBs
MWVfd3JpdGUoJnBsMWVbaV0sCkBAIC01NDU4LDcgKzU0NjAsMTAgQEAgaW50
IG1hcF9wYWdlc190b194ZW4oCiAKICN1bmRlZiBmbHVzaF9mbGFncwogCi0g
ICAgcmV0dXJuIDA7CisgICAgcmMgPSAwOworCisgb3V0OgorICAgIHJldHVy
biByYzsKIH0KIAogaW50IHBvcHVsYXRlX3B0X3JhbmdlKHVuc2lnbmVkIGxv
bmcgdmlydCwgdW5zaWduZWQgbG9uZyBtZm4sCi0tIAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.10/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Disposition: attachment;
 filename="xsa345-4.10/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Transfer-Encoding: base64

RnJvbSAyYWU0ODkxZjNhNDQyOGE2ODljOWEyMGUwOGFlYmQ2NjJjMGRjZjk3
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQyICswMDAwClN1YmplY3Q6IFtQQVRDSCAyLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbW9kaWZ5X3hlbl9tYXBwaW5ncyB0byBoYXZlIG9uZSBleGl0CiBwYXRo
CgpXZSB3aWxsIHNvb24gbmVlZCB0byBwZXJmb3JtIGNsZWFuLXVwcyBiZWZv
cmUgcmV0dXJuaW5nLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5OiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogV2VpIExpdSA8
d2VpLmxpdTJAY2l0cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogSG9uZ3lhbiBY
aWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEdlb3Jn
ZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KQWNrZWQtYnk6
IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4vYXJj
aC94ODYvbW0uYyB8IDEyICsrKysrKysrKy0tLQogMSBmaWxlIGNoYW5nZWQs
IDkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4
IGY2ZGYwYTQxZjEuLmU2NDQ2ODVjYWIgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC01NDkxLDYg
KzU0OTEsNyBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBs
b25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAg
IGwxX3BnZW50cnlfdCAqcGwxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CiAg
ICAgdW5zaWduZWQgbG9uZyB2ID0gczsKKyAgICBpbnQgcmMgPSAtRU5PTUVN
OwogCiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBi
ZSBhbHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxf
UEFHRV9SV3xfUEFHRV9QUkVTRU5UKQpAQCAtNTUzMiw3ICs1NTMzLDggQEAg
aW50IG1vZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNp
Z25lZCBsb25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICAgICAgICAgIC8q
IFBBR0UxR0I6IHNoYXR0ZXIgdGhlIHN1cGVycGFnZSBhbmQgZmFsbCB0aHJv
dWdoLiAqLwogICAgICAgICAgICAgcGwyZSA9IGFsbG9jX3hlbl9wYWdldGFi
bGUoKTsKICAgICAgICAgICAgIGlmICggIXBsMmUgKQotICAgICAgICAgICAg
ICAgIHJldHVybiAtRU5PTUVNOworICAgICAgICAgICAgICAgIGdvdG8gb3V0
OworCiAgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8IEwyX1BBR0VUQUJM
RV9FTlRSSUVTOyBpKysgKQogICAgICAgICAgICAgICAgIGwyZV93cml0ZShw
bDJlICsgaSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgbDJlX2Zyb21f
cGZuKGwzZV9nZXRfcGZuKCpwbDNlKSArCkBAIC01NTg3LDcgKzU1ODksOCBA
QCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVu
c2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAgICAgICAgICAg
ICAgIC8qIFBTRTogc2hhdHRlciB0aGUgc3VwZXJwYWdlIGFuZCB0cnkgYWdh
aW4uICovCiAgICAgICAgICAgICAgICAgcGwxZSA9IGFsbG9jX3hlbl9wYWdl
dGFibGUoKTsKICAgICAgICAgICAgICAgICBpZiAoICFwbDFlICkKLSAgICAg
ICAgICAgICAgICAgICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAg
ICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgICAgICAgICAgZm9yICggaSA9
IDA7IGkgPCBMMV9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAgICAgICAg
ICAgICAgICAgICAgbDFlX3dyaXRlKCZwbDFlW2ldLAogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgbDFlX2Zyb21fcGZuKGwyZV9nZXRfcGZuKCpw
bDJlKSArIGksCkBAIC01NzE2LDcgKzU3MTksMTAgQEAgaW50IG1vZGlmeV94
ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25nIGUs
IHVuc2lnbmVkIGludCBuZikKICAgICBmbHVzaF9hcmVhKE5VTEwsIEZMVVNI
X1RMQl9HTE9CQUwpOwogCiAjdW5kZWYgRkxBR1NfTUFTSwotICAgIHJldHVy
biAwOworICAgIHJjID0gMDsKKworIG91dDoKKyAgICByZXR1cm4gcmM7CiB9
CiAKICN1bmRlZiBmbHVzaF9hcmVhCi0tIAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.10/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Disposition: attachment;
 filename="xsa345-4.10/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Transfer-Encoding: base64

RnJvbSAyNDE4MWY3NjM1YzEwNmY4NzE5YjBlYmE1YWFjMTBjZjc0N2E0Y2Vj
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KRGF0ZTogU2F0LCAxMSBKYW4gMjAyMCAy
MTo1Nzo0MyArMDAwMApTdWJqZWN0OiBbUEFUQ0ggMy8zXSB4ODYvbW06IFBy
ZXZlbnQgc29tZSByYWNlcyBpbiBoeXBlcnZpc29yIG1hcHBpbmcgdXBkYXRl
cwoKbWFwX3BhZ2VzX3RvX3hlbiB3aWxsIGF0dGVtcHQgdG8gY29hbGVzY2Ug
bWFwcGluZ3MgaW50byAyTWlCIGFuZCAxR2lCCnN1cGVycGFnZXMgaWYgcG9z
c2libGUsIHRvIG1heGltaXplIFRMQiBlZmZpY2llbmN5LiAgVGhpcyBtZWFu
cyBib3RoCnJlcGxhY2luZyBzdXBlcnBhZ2UgZW50cmllcyB3aXRoIHNtYWxs
ZXIgZW50cmllcywgYW5kIHJlcGxhY2luZwpzbWFsbGVyIGVudHJpZXMgd2l0
aCBzdXBlcnBhZ2VzLgoKVW5mb3J0dW5hdGVseSwgd2hpbGUgc29tZSBwb3Rl
bnRpYWwgcmFjZXMgYXJlIGhhbmRsZWQgY29ycmVjdGx5LApvdGhlcnMgYXJl
IG5vdC4gIFRoZXNlIGluY2x1ZGU6CgoxLiBXaGVuIG9uZSBwcm9jZXNzb3Ig
bW9kaWZpZXMgYSBzdWItc3VwZXJwYWdlIG1hcHBpbmcgd2hpbGUgYW5vdGhl
cgpwcm9jZXNzb3IgcmVwbGFjZXMgdGhlIGVudGlyZSByYW5nZSB3aXRoIGEg
c3VwZXJwYWdlLgoKVGFrZSB0aGUgZm9sbG93aW5nIGV4YW1wbGU6CgpTdXBw
b3NlIEwzW05dIHBvaW50cyB0byBMMi4gIEFuZCBzdXBwb3NlIHdlIGhhdmUg
dHdvIHByb2Nlc3NvcnMsIEEgYW5kCkIuCgoqIEEgd2Fsa3MgdGhlIHBhZ2V0
YWJsZXMsIGdldCBhIHBvaW50ZXIgdG8gTDIuCiogQiByZXBsYWNlcyBMM1tO
XSB3aXRoIGEgMUdpQiBtYXBwaW5nLgoqIEIgRnJlZXMgTDIKKiBBIHdyaXRl
cyBMMltNXSAjCgpUaGlzIGlzIHJhY2UgZXhhY2VyYmF0ZWQgYnkgdGhlIGZh
Y3QgdGhhdCB2aXJ0X3RvX3hlbl9sWzIxXWUgZG9lc24ndApoYW5kbGUgaGln
aGVyLWxldmVsIHN1cGVycGFnZXMgcHJvcGVybHk6IElmIHlvdSBjYWxsIHZp
cnRfeGVuX3RvX2wyZQpvbiBhIHZpcnR1YWwgYWRkcmVzcyB3aXRoaW4gYW4g
TDMgc3VwZXJwYWdlLCB5b3UnbGwgZWl0aGVyIGhpdCBhIEJVRygpCihtb3N0
IGxpa2VseSksIG9yIGdldCBhIHBvaW50ZXIgaW50byB0aGUgbWlkZGxlIG9m
IGEgZGF0YSBwYWdlOyBzYW1lCndpdGggdmlydF94ZW5fdG9fbDEgb24gYSB2
aXJ0dWFsIGFkZHJlc3Mgd2l0aGluIGVpdGhlciBhbiBMMyBvciBMMgpzdXBl
cnBhZ2UuCgpTbyB0YWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToKCiogQSBy
ZWFkcyBwbDNlIGFuZCBkaXNjb3ZlcnMgaXQgdG8gcG9pbnQgdG8gYW4gTDIu
CiogQiByZXBsYWNlcyBMM1tOXSB3aXRoIGEgMUdpQiBtYXBwaW5nCiogQSBj
YWxscyB2aXJ0X3RvX3hlbl9sMmUoKSBhbmQgaGl0cyB0aGUgQlVHX09OKCkg
IwoKMi4gV2hlbiB0d28gcHJvY2Vzc29ycyBzaW11bHRhbmVvdXNseSB0cnkg
dG8gcmVwbGFjZSBhIHN1Yi1zdXBlcnBhZ2UKbWFwcGluZyB3aXRoIGEgc3Vw
ZXJwYWdlIG1hcHBpbmcuCgpUYWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToK
ClN1cHBvc2UgTDNbTl0gcG9pbnRzIHRvIEwyLiAgQW5kIHN1cHBvc2Ugd2Ug
aGF2ZSB0d28gcHJvY2Vzc29ycywgQSBhbmQgQiwKYm90aCB0cnlpbmcgdG8g
cmVwbGFjZSBMM1tOXSB3aXRoIGEgc3VwZXJwYWdlLgoKKiBBIHdhbGtzIHRo
ZSBwYWdldGFibGVzLCBnZXQgYSBwb2ludGVyIHRvIHBsM2UsIGFuZCB0YWtl
cyBhIGNvcHkgb2wzZSBwb2ludGluZyB0byBMMi4KKiBCIHdhbGtzIHRoZSBw
YWdldGFibGVzLCBnZXRzIGEgcG9pbnRyZSB0byBwbDNlLCBhbmQgdGFrZXMg
YSBjb3B5IG9sM2UgcG9pbnRpbmcgdG8gTDIuCiogQSB3cml0ZXMgdGhlIG5l
dyB2YWx1ZSBpbnRvIEwzW05dCiogQiB3cml0ZXMgdGhlIG5ldyB2YWx1ZSBp
bnRvIEwzW05dCiogQSByZWN1cnNpdmVseSBmcmVlcyBhbGwgdGhlIEwxJ3Mg
dW5kZXIgTDIsIHRoZW4gZnJlZXMgTDIKKiBCIHJlY3Vyc2l2ZWx5IGRvdWJs
ZS1mcmVlcyBhbGwgdGhlIEwxJ3MgdW5kZXIgTDIsIHRoZW4gZG91YmxlLWZy
ZWVzIEwyICMKCkZpeCB0aGlzIGJ5IGdyYWJiaW5nIGEgbG9jayBmb3IgdGhl
IGVudGlyZXR5IG9mIHRoZSBtYXBwaW5nIHVwZGF0ZQpvcGVyYXRpb24uCgpS
YXRoZXIgdGhhbiBncmFiYmluZyBtYXBfcGdkaXJfbG9jayBmb3IgdGhlIGVu
dGlyZSBvcGVyYXRpb24sIGhvd2V2ZXIsCnJlcHVycG9zZSB0aGUgUEdUX2xv
Y2tlZCBiaXQgZnJvbSBMMydzIHBhZ2UtPnR5cGVfaW5mbyBhcyBhIGxvY2su
ClRoaXMgbWVhbnMgdGhhdCByYXRoZXIgdGhhbiBsb2NraW5nIHRoZSBlbnRp
cmUgYWRkcmVzcyBzcGFjZSwgd2UKIm9ubHkiIGxvY2sgYSBzaW5nbGUgNTEy
R2lCIGNodW5rIG9mIGh5cGVydmlzb3IgYWRkcmVzcyBzcGFjZSBhdCBhCnRp
bWUuCgpUaGVyZSB3YXMgYSBwcm9wb3NhbCBmb3IgYSBsb2NrLWFuZC1yZXZl
cmlmeSBhcHByb2FjaCwgd2hlcmUgd2Ugd2Fsawp0aGUgcGFnZXRhYmxlcyB0
byB0aGUgcG9pbnQgd2hlcmUgd2UgZGVjaWRlIHdoYXQgdG8gZG87IHRoZW4g
Z3JhYiB0aGUKbWFwX3BnZGlyX2xvY2ssIHJlLXZlcmlmeSB0aGUgaW5mb3Jt
YXRpb24gd2UgY29sbGVjdGVkIHdpdGhvdXQgdGhlCmxvY2ssIGFuZCBmaW5h
bGx5IG1ha2UgdGhlIGNoYW5nZSAoc3RhcnRpbmcgb3ZlciBhZ2FpbiBpZiBh
bnl0aGluZyBoYWQKY2hhbmdlZCkuICBXaXRob3V0IGJlaW5nIGFibGUgdG8g
Z3VhcmFudGVlIHRoYXQgdGhlIEwyIHRhYmxlIHdhc24ndApmcmVlZCwgaG93
ZXZlciwgdGhhdCBtZWFucyBldmVyeSByZWFkIHdvdWxkIG5lZWQgdG8gYmUg
Y29uc2lkZXJlZApwb3RlbnRpYWxseSB1bnNhZmUuICBUaGlua2luZyBjYXJl
ZnVsbHkgYWJvdXQgdGhhdCBpcyBwcm9iYWJseQpzb21ldGhpbmcgdGhhdCB3
YW50cyB0byBiZSBkb25lIG9uIHB1YmxpYywgbm90IHVuZGVyIHRpbWUgcHJl
c3N1cmUuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5
OiBIb25neWFuIFhpYSA8aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9m
Zi1ieTogSG9uZ3lhbiBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25l
ZC1vZmYtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNl
LmNvbT4KLS0tCiB4ZW4vYXJjaC94ODYvbW0uYyB8IDkyICsrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tCiAxIGZpbGUg
Y2hhbmdlZCwgODkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9t
bS5jCmluZGV4IGU2NDQ2ODVjYWIuLmZhYTdkZjMwYzEgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBA
IC0yMTMyLDYgKzIxMzIsNTAgQEAgdm9pZCBwYWdlX3VubG9jayhzdHJ1Y3Qg
cGFnZV9pbmZvICpwYWdlKQogICAgIH0gd2hpbGUgKCAoeSA9IGNtcHhjaGco
JnBhZ2UtPnUuaW51c2UudHlwZV9pbmZvLCB4LCBueCkpICE9IHggKTsKIH0K
IAorLyoKKyAqIEwzIHRhYmxlIGxvY2tzOgorICoKKyAqIFVzZWQgZm9yIHNl
cmlhbGl6YXRpb24gaW4gbWFwX3BhZ2VzX3RvX3hlbigpIGFuZCBtb2RpZnlf
eGVuX21hcHBpbmdzKCkuCisgKgorICogRm9yIFhlbiBQVCBwYWdlcywgdGhl
IHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvIGlzIHVudXNlZCBhbmQgaXQgaXMg
c2FmZSB0bworICogcmV1c2UgdGhlIFBHVF9sb2NrZWQgZmxhZy4gVGhpcyBs
b2NrIGlzIHRha2VuIG9ubHkgd2hlbiB3ZSBtb3ZlIGRvd24gdG8gTDMKKyAq
IHRhYmxlcyBhbmQgYmVsb3csIHNpbmNlIEw0IChhbmQgYWJvdmUsIGZvciA1
LWxldmVsIHBhZ2luZykgaXMgc3RpbGwgZ2xvYmFsbHkKKyAqIHByb3RlY3Rl
ZCBieSBtYXBfcGdkaXJfbG9jay4KKyAqCisgKiBQViBNTVUgdXBkYXRlIGh5
cGVyY2FsbHMgY2FsbCBtYXBfcGFnZXNfdG9feGVuIHdoaWxlIGhvbGRpbmcg
YSBwYWdlJ3MgcGFnZV9sb2NrKCkuCisgKiBUaGlzIGhhcyB0d28gaW1wbGlj
YXRpb25zOgorICogLSBXZSBjYW5ub3QgcmV1c2UgcmV1c2UgY3VycmVudF9s
b2NrZWRfcGFnZV8qIGZvciBkZWJ1Z2dpbmcKKyAqIC0gVG8gYXZvaWQgdGhl
IGNoYW5jZSBvZiBkZWFkbG9jaywgZXZlbiBmb3IgZGlmZmVyZW50IHBhZ2Vz
LCB3ZQorICogICBtdXN0IG5ldmVyIGdyYWIgcGFnZV9sb2NrKCkgYWZ0ZXIg
Z3JhYmJpbmcgbDN0X2xvY2soKS4gIFRoaXMKKyAqICAgaW5jbHVkZXMgYW55
IHBhZ2VfbG9jaygpLWJhc2VkIGxvY2tzLCBzdWNoIGFzCisgKiAgIG1lbV9z
aGFyaW5nX3BhZ2VfbG9jaygpLgorICoKKyAqIEFsc28gbm90ZSB0aGF0IHdl
IGdyYWIgdGhlIG1hcF9wZ2Rpcl9sb2NrIHdoaWxlIGhvbGRpbmcgdGhlCisg
KiBsM3RfbG9jaygpLCBzbyB0byBhdm9pZCBkZWFkbG9jayB3ZSBtdXN0IGF2
b2lkIGdyYWJiaW5nIHRoZW0gaW4KKyAqIHJldmVyc2Ugb3JkZXIuCisgKi8K
K3N0YXRpYyB2b2lkIGwzdF9sb2NrKHN0cnVjdCBwYWdlX2luZm8gKnBhZ2Up
Cit7CisgICAgdW5zaWduZWQgbG9uZyB4LCBueDsKKworICAgIGRvIHsKKyAg
ICAgICAgd2hpbGUgKCAoeCA9IHBhZ2UtPnUuaW51c2UudHlwZV9pbmZvKSAm
IFBHVF9sb2NrZWQgKQorICAgICAgICAgICAgY3B1X3JlbGF4KCk7CisgICAg
ICAgIG54ID0geCB8IFBHVF9sb2NrZWQ7CisgICAgfSB3aGlsZSAoIGNtcHhj
aGcoJnBhZ2UtPnUuaW51c2UudHlwZV9pbmZvLCB4LCBueCkgIT0geCApOwor
fQorCitzdGF0aWMgdm9pZCBsM3RfdW5sb2NrKHN0cnVjdCBwYWdlX2luZm8g
KnBhZ2UpCit7CisgICAgdW5zaWduZWQgbG9uZyB4LCBueCwgeSA9IHBhZ2Ut
PnUuaW51c2UudHlwZV9pbmZvOworCisgICAgZG8geworICAgICAgICB4ID0g
eTsKKyAgICAgICAgQlVHX09OKCEoeCAmIFBHVF9sb2NrZWQpKTsKKyAgICAg
ICAgbnggPSB4ICYgflBHVF9sb2NrZWQ7CisgICAgfSB3aGlsZSAoICh5ID0g
Y21weGNoZygmcGFnZS0+dS5pbnVzZS50eXBlX2luZm8sIHgsIG54KSkgIT0g
eCApOworfQorCiAvKgogICogUFRFIGZsYWdzIHRoYXQgYSBndWVzdCBtYXkg
Y2hhbmdlIHdpdGhvdXQgcmUtdmFsaWRhdGluZyB0aGUgUFRFLgogICogQWxs
IG90aGVyIGJpdHMgYWZmZWN0IHRyYW5zbGF0aW9uLCBjYWNoaW5nLCBvciBY
ZW4ncyBzYWZldHkuCkBAIC01MTAyLDYgKzUxNDYsMjMgQEAgbDFfcGdlbnRy
eV90ICp2aXJ0X3RvX3hlbl9sMWUodW5zaWduZWQgbG9uZyB2KQogICAgICAg
ICAgICAgICAgICAgICAgICAgIGZsdXNoX2FyZWFfbG9jYWwoKGNvbnN0IHZv
aWQgKil2LCBmKSA6IFwKICAgICAgICAgICAgICAgICAgICAgICAgICBmbHVz
aF9hcmVhX2FsbCgoY29uc3Qgdm9pZCAqKXYsIGYpKQogCisjZGVmaW5lIEwz
VF9JTklUKHBhZ2UpIChwYWdlKSA9IFpFUk9fQkxPQ0tfUFRSCisKKyNkZWZp
bmUgTDNUX0xPQ0socGFnZSkgICAgICAgIFwKKyAgICBkbyB7ICAgICAgICAg
ICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBsb2NraW5nICkgICAgICAg
IFwKKyAgICAgICAgICAgIGwzdF9sb2NrKHBhZ2UpOyAgIFwKKyAgICB9IHdo
aWxlICggZmFsc2UgKQorCisjZGVmaW5lIEwzVF9VTkxPQ0socGFnZSkgICAg
ICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgZG8geyAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgIGlm
ICggbG9ja2luZyAmJiAocGFnZSkgIT0gWkVST19CTE9DS19QVFIgKSBcCisg
ICAgICAgIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBcCisgICAgICAgICAgICBsM3RfdW5sb2NrKHBhZ2UpOyAgICAgICAg
ICAgICAgICAgICAgICBcCisgICAgICAgICAgICAocGFnZSkgPSBaRVJPX0JM
T0NLX1BUUjsgICAgICAgICAgICAgICBcCisgICAgICAgIH0gICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgfSB3aGls
ZSAoIGZhbHNlICkKKwogaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgdW5z
aWduZWQgbG9uZyB2aXJ0LAogICAgIHVuc2lnbmVkIGxvbmcgbWZuLApAQCAt
NTExMyw2ICs1MTc0LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAg
bDFfcGdlbnRyeV90ICpwbDFlLCBvbDFlOwogICAgIHVuc2lnbmVkIGludCAg
aTsKICAgICBpbnQgcmMgPSAtRU5PTUVNOworICAgIHN0cnVjdCBwYWdlX2lu
Zm8gKmN1cnJlbnRfbDNwYWdlOwogCiAjZGVmaW5lIGZsdXNoX2ZsYWdzKG9s
ZGYpIGRvIHsgICAgICAgICAgICAgICAgIFwKICAgICB1bnNpZ25lZCBpbnQg
b18gPSAob2xkZik7ICAgICAgICAgICAgICAgICAgXApAQCAtNTEyOCwxMyAr
NTE5MCwyMCBAQCBpbnQgbWFwX3BhZ2VzX3RvX3hlbigKICAgICB9ICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogfSB3aGls
ZSAoMCkKIAorICAgIEwzVF9JTklUKGN1cnJlbnRfbDNwYWdlKTsKKwogICAg
IHdoaWxlICggbnJfbWZucyAhPSAwICkKICAgICB7Ci0gICAgICAgIGwzX3Bn
ZW50cnlfdCBvbDNlLCAqcGwzZSA9IHZpcnRfdG9feGVuX2wzZSh2aXJ0KTsK
KyAgICAgICAgbDNfcGdlbnRyeV90ICpwbDNlLCBvbDNlOwogCisgICAgICAg
IEwzVF9VTkxPQ0soY3VycmVudF9sM3BhZ2UpOworCisgICAgICAgIHBsM2Ug
PSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7CiAgICAgICAgIGlmICggIXBsM2Ug
KQogICAgICAgICAgICAgZ290byBvdXQ7CiAKKyAgICAgICAgY3VycmVudF9s
M3BhZ2UgPSB2aXJ0X3RvX3BhZ2UocGwzZSk7CisgICAgICAgIEwzVF9MT0NL
KGN1cnJlbnRfbDNwYWdlKTsKICAgICAgICAgb2wzZSA9ICpwbDNlOwogCiAg
ICAgICAgIGlmICggY3B1X2hhc19wYWdlMWdiICYmCkBAIC01NDYzLDYgKzU1
MzIsNyBAQCBpbnQgbWFwX3BhZ2VzX3RvX3hlbigKICAgICByYyA9IDA7CiAK
ICBvdXQ6CisgICAgTDNUX1VOTE9DSyhjdXJyZW50X2wzcGFnZSk7CiAgICAg
cmV0dXJuIHJjOwogfQogCkBAIC01NDkyLDYgKzU1NjIsNyBAQCBpbnQgbW9k
aWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2lnbmVkIGxv
bmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAgIHVuc2lnbmVkIGludCAgaTsK
ICAgICB1bnNpZ25lZCBsb25nIHYgPSBzOwogICAgIGludCByYyA9IC1FTk9N
RU07CisgICAgc3RydWN0IHBhZ2VfaW5mbyAqY3VycmVudF9sM3BhZ2U7CiAK
ICAgICAvKiBTZXQgb2YgdmFsaWQgUFRFIGJpdHMgd2hpY2ggbWF5IGJlIGFs
dGVyZWQuICovCiAjZGVmaW5lIEZMQUdTX01BU0sgKF9QQUdFX05YfF9QQUdF
X1JXfF9QQUdFX1BSRVNFTlQpCkBAIC01NTAwLDExICs1NTcxLDIyIEBAIGlu
dCBtb2RpZnlfeGVuX21hcHBpbmdzKHVuc2lnbmVkIGxvbmcgcywgdW5zaWdu
ZWQgbG9uZyBlLCB1bnNpZ25lZCBpbnQgbmYpCiAgICAgQVNTRVJUKElTX0FM
SUdORUQocywgUEFHRV9TSVpFKSk7CiAgICAgQVNTRVJUKElTX0FMSUdORUQo
ZSwgUEFHRV9TSVpFKSk7CiAKKyAgICBMM1RfSU5JVChjdXJyZW50X2wzcGFn
ZSk7CisKICAgICB3aGlsZSAoIHYgPCBlICkKICAgICB7Ci0gICAgICAgIGwz
X3BnZW50cnlfdCAqcGwzZSA9IHZpcnRfdG9feGVuX2wzZSh2KTsKKyAgICAg
ICAgbDNfcGdlbnRyeV90ICpwbDNlOworCisgICAgICAgIEwzVF9VTkxPQ0so
Y3VycmVudF9sM3BhZ2UpOwogCi0gICAgICAgIGlmICggIXBsM2UgfHwgIShs
M2VfZ2V0X2ZsYWdzKCpwbDNlKSAmIF9QQUdFX1BSRVNFTlQpICkKKyAgICAg
ICAgcGwzZSA9IHZpcnRfdG9feGVuX2wzZSh2KTsKKyAgICAgICAgaWYgKCAh
cGwzZSApCisgICAgICAgICAgICBnb3RvIG91dDsKKworICAgICAgICBjdXJy
ZW50X2wzcGFnZSA9IHZpcnRfdG9fcGFnZShwbDNlKTsKKyAgICAgICAgTDNU
X0xPQ0soY3VycmVudF9sM3BhZ2UpOworCisgICAgICAgIGlmICggIShsM2Vf
Z2V0X2ZsYWdzKCpwbDNlKSAmIF9QQUdFX1BSRVNFTlQpICkKICAgICAgICAg
ewogICAgICAgICAgICAgLyogQ29uZmlybSB0aGUgY2FsbGVyIGlzbid0IHRy
eWluZyB0byBjcmVhdGUgbmV3IG1hcHBpbmdzLiAqLwogICAgICAgICAgICAg
QVNTRVJUKCEobmYgJiBfUEFHRV9QUkVTRU5UKSk7CkBAIC01NzIyLDkgKzU4
MDQsMTMgQEAgaW50IG1vZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9u
ZyBzLCB1bnNpZ25lZCBsb25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICBy
YyA9IDA7CiAKICBvdXQ6CisgICAgTDNUX1VOTE9DSyhjdXJyZW50X2wzcGFn
ZSk7CiAgICAgcmV0dXJuIHJjOwogfQogCisjdW5kZWYgTDNUX0xPQ0sKKyN1
bmRlZiBMM1RfVU5MT0NLCisKICN1bmRlZiBmbHVzaF9hcmVhCiAKIGludCBk
ZXN0cm95X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2lnbmVk
IGxvbmcgZSkKLS0gCjIuMjUuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.11/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Disposition: attachment;
 filename="xsa345-4.11/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Transfer-Encoding: base64

RnJvbSBlZGJlNzA0MjdlMTc3NDMzNTFmMWI3MzllYTE1MzZhY2Q3NTdhZTZj
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQxICswMDAwClN1YmplY3Q6IFtQQVRDSCAxLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbWFwX3BhZ2VzX3RvX3hlbiB0byBoYXZlIG9ubHkgYSBzaW5nbGUKIGV4
aXQgcGF0aAoKV2Ugd2lsbCBzb29uIG5lZWQgdG8gcGVyZm9ybSBjbGVhbi11
cHMgYmVmb3JlIHJldHVybmluZy4KCk5vIGZ1bmN0aW9uYWwgY2hhbmdlLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDUuCgpSZXBvcnRlZC1ieTogSG9uZ3lh
biBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IFdl
aSBMaXUgPHdlaS5saXUyQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEhv
bmd5YW4gWGlhIDxob25neXhpYUBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+CkFj
a2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+Ci0tLQog
eGVuL2FyY2gveDg2L21tLmMgfCAxNyArKysrKysrKysrKy0tLS0tLQogMSBm
aWxlIGNoYW5nZWQsIDExIGluc2VydGlvbnMoKyksIDYgZGVsZXRpb25zKC0p
CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJjaC94
ODYvbW0uYwppbmRleCA2MjY3NjhhOTUwLi43OWEzZmFjM2NjIDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvbW0u
YwpAQCAtNTE5NCw2ICs1MTk0LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgbDJfcGdlbnRyeV90ICpwbDJlLCBvbDJlOwogICAgIGwxX3BnZW50
cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CisgICAg
aW50IHJjID0gLUVOT01FTTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhvbGRm
KSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50IG9f
ID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUyMTQsNyArNTIx
NSw4IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAgICAgICBsM19wZ2Vu
dHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7CiAK
ICAgICAgICAgaWYgKCAhcGwzZSApCi0gICAgICAgICAgICByZXR1cm4gLUVO
T01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgIG9sM2Ug
PSAqcGwzZTsKIAogICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAmJgpA
QCAtNTMwMiw3ICs1MzA0LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAK
ICAgICAgICAgICAgIHBsMmUgPSBhbGxvY194ZW5fcGFnZXRhYmxlKCk7CiAg
ICAgICAgICAgICBpZiAoIHBsMmUgPT0gTlVMTCApCi0gICAgICAgICAgICAg
ICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAgICAgZ290byBvdXQ7
CiAKICAgICAgICAgICAgIGZvciAoIGkgPSAwOyBpIDwgTDJfUEFHRVRBQkxF
X0VOVFJJRVM7IGkrKyApCiAgICAgICAgICAgICAgICAgbDJlX3dyaXRlKHBs
MmUgKyBpLApAQCAtNTMzMSw3ICs1MzMzLDcgQEAgaW50IG1hcF9wYWdlc190
b194ZW4oCiAKICAgICAgICAgcGwyZSA9IHZpcnRfdG9feGVuX2wyZSh2aXJ0
KTsKICAgICAgICAgaWYgKCAhcGwyZSApCi0gICAgICAgICAgICByZXR1cm4g
LUVOT01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OwogCiAgICAgICAgIGlm
ICggKCgoKHZpcnQgPj4gUEFHRV9TSElGVCkgfCBtZm5feChtZm4pKSAmCiAg
ICAgICAgICAgICAgICAoKDF1IDw8IFBBR0VUQUJMRV9PUkRFUikgLSAxKSkg
PT0gMCkgJiYKQEAgLTUzNzQsNyArNTM3Niw3IEBAIGludCBtYXBfcGFnZXNf
dG9feGVuKAogICAgICAgICAgICAgewogICAgICAgICAgICAgICAgIHBsMWUg
PSB2aXJ0X3RvX3hlbl9sMWUodmlydCk7CiAgICAgICAgICAgICAgICAgaWYg
KCBwbDFlID09IE5VTEwgKQotICAgICAgICAgICAgICAgICAgICByZXR1cm4g
LUVOT01FTTsKKyAgICAgICAgICAgICAgICAgICAgZ290byBvdXQ7CiAgICAg
ICAgICAgICB9CiAgICAgICAgICAgICBlbHNlIGlmICggbDJlX2dldF9mbGFn
cygqcGwyZSkgJiBfUEFHRV9QU0UgKQogICAgICAgICAgICAgewpAQCAtNTQw
MSw3ICs1NDAzLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAKICAgICAg
ICAgICAgICAgICBwbDFlID0gYWxsb2NfeGVuX3BhZ2V0YWJsZSgpOwogICAg
ICAgICAgICAgICAgIGlmICggcGwxZSA9PSBOVUxMICkKLSAgICAgICAgICAg
ICAgICAgICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAgICAgICAg
IGdvdG8gb3V0OwogCiAgICAgICAgICAgICAgICAgZm9yICggaSA9IDA7IGkg
PCBMMV9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAgICAgICAgICAgICAg
ICAgICAgbDFlX3dyaXRlKCZwbDFlW2ldLApAQCAtNTU0NSw3ICs1NTQ3LDEw
IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogCiAjdW5kZWYgZmx1c2hfZmxh
Z3MKIAotICAgIHJldHVybiAwOworICAgIHJjID0gMDsKKworIG91dDoKKyAg
ICByZXR1cm4gcmM7CiB9CiAKIGludCBwb3B1bGF0ZV9wdF9yYW5nZSh1bnNp
Z25lZCBsb25nIHZpcnQsIHVuc2lnbmVkIGxvbmcgbnJfbWZucykKLS0gCjIu
MjUuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.11/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Disposition: attachment;
 filename="xsa345-4.11/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Transfer-Encoding: base64

RnJvbSA3MTAxNzg2YmU5MWRjZTY1MGI2ZTc5ZjEzNzRjNTgwYzczMWJiMzQ4
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQyICswMDAwClN1YmplY3Q6IFtQQVRDSCAyLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbW9kaWZ5X3hlbl9tYXBwaW5ncyB0byBoYXZlIG9uZSBleGl0CiBwYXRo
CgpXZSB3aWxsIHNvb24gbmVlZCB0byBwZXJmb3JtIGNsZWFuLXVwcyBiZWZv
cmUgcmV0dXJuaW5nLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5OiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogV2VpIExpdSA8
d2VpLmxpdTJAY2l0cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogSG9uZ3lhbiBY
aWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEdlb3Jn
ZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KQWNrZWQtYnk6
IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4vYXJj
aC94ODYvbW0uYyB8IDEyICsrKysrKysrKy0tLQogMSBmaWxlIGNoYW5nZWQs
IDkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4
IDc5YTNmYWMzY2MuLjhlZDNlY2FjYmUgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC01NTc3LDYg
KzU1NzcsNyBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBs
b25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAg
IGwxX3BnZW50cnlfdCAqcGwxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CiAg
ICAgdW5zaWduZWQgbG9uZyB2ID0gczsKKyAgICBpbnQgcmMgPSAtRU5PTUVN
OwogCiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBi
ZSBhbHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxf
UEFHRV9SV3xfUEFHRV9QUkVTRU5UKQpAQCAtNTYxOCw3ICs1NjE5LDggQEAg
aW50IG1vZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNp
Z25lZCBsb25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICAgICAgICAgIC8q
IFBBR0UxR0I6IHNoYXR0ZXIgdGhlIHN1cGVycGFnZSBhbmQgZmFsbCB0aHJv
dWdoLiAqLwogICAgICAgICAgICAgcGwyZSA9IGFsbG9jX3hlbl9wYWdldGFi
bGUoKTsKICAgICAgICAgICAgIGlmICggIXBsMmUgKQotICAgICAgICAgICAg
ICAgIHJldHVybiAtRU5PTUVNOworICAgICAgICAgICAgICAgIGdvdG8gb3V0
OworCiAgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8IEwyX1BBR0VUQUJM
RV9FTlRSSUVTOyBpKysgKQogICAgICAgICAgICAgICAgIGwyZV93cml0ZShw
bDJlICsgaSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgbDJlX2Zyb21f
cGZuKGwzZV9nZXRfcGZuKCpwbDNlKSArCkBAIC01NjczLDcgKzU2NzUsOCBA
QCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVu
c2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAgICAgICAgICAg
ICAgIC8qIFBTRTogc2hhdHRlciB0aGUgc3VwZXJwYWdlIGFuZCB0cnkgYWdh
aW4uICovCiAgICAgICAgICAgICAgICAgcGwxZSA9IGFsbG9jX3hlbl9wYWdl
dGFibGUoKTsKICAgICAgICAgICAgICAgICBpZiAoICFwbDFlICkKLSAgICAg
ICAgICAgICAgICAgICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAg
ICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgICAgICAgICAgZm9yICggaSA9
IDA7IGkgPCBMMV9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAgICAgICAg
ICAgICAgICAgICAgbDFlX3dyaXRlKCZwbDFlW2ldLAogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgbDFlX2Zyb21fcGZuKGwyZV9nZXRfcGZuKCpw
bDJlKSArIGksCkBAIC01ODAyLDcgKzU4MDUsMTAgQEAgaW50IG1vZGlmeV94
ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25nIGUs
IHVuc2lnbmVkIGludCBuZikKICAgICBmbHVzaF9hcmVhKE5VTEwsIEZMVVNI
X1RMQl9HTE9CQUwpOwogCiAjdW5kZWYgRkxBR1NfTUFTSwotICAgIHJldHVy
biAwOworICAgIHJjID0gMDsKKworIG91dDoKKyAgICByZXR1cm4gcmM7CiB9
CiAKICN1bmRlZiBmbHVzaF9hcmVhCi0tIAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.11/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Disposition: attachment;
 filename="xsa345-4.11/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Transfer-Encoding: base64

RnJvbSBlN2JiYzRhMGI1YWY3NmE4MmYwZGNmNGFmY2JmMTUwOWIwMjBlYjcz
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KRGF0ZTogU2F0LCAxMSBKYW4gMjAyMCAy
MTo1Nzo0MyArMDAwMApTdWJqZWN0OiBbUEFUQ0ggMy8zXSB4ODYvbW06IFBy
ZXZlbnQgc29tZSByYWNlcyBpbiBoeXBlcnZpc29yIG1hcHBpbmcgdXBkYXRl
cwoKbWFwX3BhZ2VzX3RvX3hlbiB3aWxsIGF0dGVtcHQgdG8gY29hbGVzY2Ug
bWFwcGluZ3MgaW50byAyTWlCIGFuZCAxR2lCCnN1cGVycGFnZXMgaWYgcG9z
c2libGUsIHRvIG1heGltaXplIFRMQiBlZmZpY2llbmN5LiAgVGhpcyBtZWFu
cyBib3RoCnJlcGxhY2luZyBzdXBlcnBhZ2UgZW50cmllcyB3aXRoIHNtYWxs
ZXIgZW50cmllcywgYW5kIHJlcGxhY2luZwpzbWFsbGVyIGVudHJpZXMgd2l0
aCBzdXBlcnBhZ2VzLgoKVW5mb3J0dW5hdGVseSwgd2hpbGUgc29tZSBwb3Rl
bnRpYWwgcmFjZXMgYXJlIGhhbmRsZWQgY29ycmVjdGx5LApvdGhlcnMgYXJl
IG5vdC4gIFRoZXNlIGluY2x1ZGU6CgoxLiBXaGVuIG9uZSBwcm9jZXNzb3Ig
bW9kaWZpZXMgYSBzdWItc3VwZXJwYWdlIG1hcHBpbmcgd2hpbGUgYW5vdGhl
cgpwcm9jZXNzb3IgcmVwbGFjZXMgdGhlIGVudGlyZSByYW5nZSB3aXRoIGEg
c3VwZXJwYWdlLgoKVGFrZSB0aGUgZm9sbG93aW5nIGV4YW1wbGU6CgpTdXBw
b3NlIEwzW05dIHBvaW50cyB0byBMMi4gIEFuZCBzdXBwb3NlIHdlIGhhdmUg
dHdvIHByb2Nlc3NvcnMsIEEgYW5kCkIuCgoqIEEgd2Fsa3MgdGhlIHBhZ2V0
YWJsZXMsIGdldCBhIHBvaW50ZXIgdG8gTDIuCiogQiByZXBsYWNlcyBMM1tO
XSB3aXRoIGEgMUdpQiBtYXBwaW5nLgoqIEIgRnJlZXMgTDIKKiBBIHdyaXRl
cyBMMltNXSAjCgpUaGlzIGlzIHJhY2UgZXhhY2VyYmF0ZWQgYnkgdGhlIGZh
Y3QgdGhhdCB2aXJ0X3RvX3hlbl9sWzIxXWUgZG9lc24ndApoYW5kbGUgaGln
aGVyLWxldmVsIHN1cGVycGFnZXMgcHJvcGVybHk6IElmIHlvdSBjYWxsIHZp
cnRfeGVuX3RvX2wyZQpvbiBhIHZpcnR1YWwgYWRkcmVzcyB3aXRoaW4gYW4g
TDMgc3VwZXJwYWdlLCB5b3UnbGwgZWl0aGVyIGhpdCBhIEJVRygpCihtb3N0
IGxpa2VseSksIG9yIGdldCBhIHBvaW50ZXIgaW50byB0aGUgbWlkZGxlIG9m
IGEgZGF0YSBwYWdlOyBzYW1lCndpdGggdmlydF94ZW5fdG9fbDEgb24gYSB2
aXJ0dWFsIGFkZHJlc3Mgd2l0aGluIGVpdGhlciBhbiBMMyBvciBMMgpzdXBl
cnBhZ2UuCgpTbyB0YWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToKCiogQSBy
ZWFkcyBwbDNlIGFuZCBkaXNjb3ZlcnMgaXQgdG8gcG9pbnQgdG8gYW4gTDIu
CiogQiByZXBsYWNlcyBMM1tOXSB3aXRoIGEgMUdpQiBtYXBwaW5nCiogQSBj
YWxscyB2aXJ0X3RvX3hlbl9sMmUoKSBhbmQgaGl0cyB0aGUgQlVHX09OKCkg
IwoKMi4gV2hlbiB0d28gcHJvY2Vzc29ycyBzaW11bHRhbmVvdXNseSB0cnkg
dG8gcmVwbGFjZSBhIHN1Yi1zdXBlcnBhZ2UKbWFwcGluZyB3aXRoIGEgc3Vw
ZXJwYWdlIG1hcHBpbmcuCgpUYWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToK
ClN1cHBvc2UgTDNbTl0gcG9pbnRzIHRvIEwyLiAgQW5kIHN1cHBvc2Ugd2Ug
aGF2ZSB0d28gcHJvY2Vzc29ycywgQSBhbmQgQiwKYm90aCB0cnlpbmcgdG8g
cmVwbGFjZSBMM1tOXSB3aXRoIGEgc3VwZXJwYWdlLgoKKiBBIHdhbGtzIHRo
ZSBwYWdldGFibGVzLCBnZXQgYSBwb2ludGVyIHRvIHBsM2UsIGFuZCB0YWtl
cyBhIGNvcHkgb2wzZSBwb2ludGluZyB0byBMMi4KKiBCIHdhbGtzIHRoZSBw
YWdldGFibGVzLCBnZXRzIGEgcG9pbnRyZSB0byBwbDNlLCBhbmQgdGFrZXMg
YSBjb3B5IG9sM2UgcG9pbnRpbmcgdG8gTDIuCiogQSB3cml0ZXMgdGhlIG5l
dyB2YWx1ZSBpbnRvIEwzW05dCiogQiB3cml0ZXMgdGhlIG5ldyB2YWx1ZSBp
bnRvIEwzW05dCiogQSByZWN1cnNpdmVseSBmcmVlcyBhbGwgdGhlIEwxJ3Mg
dW5kZXIgTDIsIHRoZW4gZnJlZXMgTDIKKiBCIHJlY3Vyc2l2ZWx5IGRvdWJs
ZS1mcmVlcyBhbGwgdGhlIEwxJ3MgdW5kZXIgTDIsIHRoZW4gZG91YmxlLWZy
ZWVzIEwyICMKCkZpeCB0aGlzIGJ5IGdyYWJiaW5nIGEgbG9jayBmb3IgdGhl
IGVudGlyZXR5IG9mIHRoZSBtYXBwaW5nIHVwZGF0ZQpvcGVyYXRpb24uCgpS
YXRoZXIgdGhhbiBncmFiYmluZyBtYXBfcGdkaXJfbG9jayBmb3IgdGhlIGVu
dGlyZSBvcGVyYXRpb24sIGhvd2V2ZXIsCnJlcHVycG9zZSB0aGUgUEdUX2xv
Y2tlZCBiaXQgZnJvbSBMMydzIHBhZ2UtPnR5cGVfaW5mbyBhcyBhIGxvY2su
ClRoaXMgbWVhbnMgdGhhdCByYXRoZXIgdGhhbiBsb2NraW5nIHRoZSBlbnRp
cmUgYWRkcmVzcyBzcGFjZSwgd2UKIm9ubHkiIGxvY2sgYSBzaW5nbGUgNTEy
R2lCIGNodW5rIG9mIGh5cGVydmlzb3IgYWRkcmVzcyBzcGFjZSBhdCBhCnRp
bWUuCgpUaGVyZSB3YXMgYSBwcm9wb3NhbCBmb3IgYSBsb2NrLWFuZC1yZXZl
cmlmeSBhcHByb2FjaCwgd2hlcmUgd2Ugd2Fsawp0aGUgcGFnZXRhYmxlcyB0
byB0aGUgcG9pbnQgd2hlcmUgd2UgZGVjaWRlIHdoYXQgdG8gZG87IHRoZW4g
Z3JhYiB0aGUKbWFwX3BnZGlyX2xvY2ssIHJlLXZlcmlmeSB0aGUgaW5mb3Jt
YXRpb24gd2UgY29sbGVjdGVkIHdpdGhvdXQgdGhlCmxvY2ssIGFuZCBmaW5h
bGx5IG1ha2UgdGhlIGNoYW5nZSAoc3RhcnRpbmcgb3ZlciBhZ2FpbiBpZiBh
bnl0aGluZyBoYWQKY2hhbmdlZCkuICBXaXRob3V0IGJlaW5nIGFibGUgdG8g
Z3VhcmFudGVlIHRoYXQgdGhlIEwyIHRhYmxlIHdhc24ndApmcmVlZCwgaG93
ZXZlciwgdGhhdCBtZWFucyBldmVyeSByZWFkIHdvdWxkIG5lZWQgdG8gYmUg
Y29uc2lkZXJlZApwb3RlbnRpYWxseSB1bnNhZmUuICBUaGlua2luZyBjYXJl
ZnVsbHkgYWJvdXQgdGhhdCBpcyBwcm9iYWJseQpzb21ldGhpbmcgdGhhdCB3
YW50cyB0byBiZSBkb25lIG9uIHB1YmxpYywgbm90IHVuZGVyIHRpbWUgcHJl
c3N1cmUuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5
OiBIb25neWFuIFhpYSA8aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9m
Zi1ieTogSG9uZ3lhbiBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25l
ZC1vZmYtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNl
LmNvbT4KLS0tCiB4ZW4vYXJjaC94ODYvbW0uYyB8IDkyICsrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tCiAxIGZpbGUg
Y2hhbmdlZCwgODkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9t
bS5jCmluZGV4IDhlZDNlY2FjYmUuLjRmZjI0ZGU3M2QgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBA
IC0yMTUzLDYgKzIxNTMsNTAgQEAgdm9pZCBwYWdlX3VubG9jayhzdHJ1Y3Qg
cGFnZV9pbmZvICpwYWdlKQogICAgIGN1cnJlbnRfbG9ja2VkX3BhZ2Vfc2V0
KE5VTEwpOwogfQogCisvKgorICogTDMgdGFibGUgbG9ja3M6CisgKgorICog
VXNlZCBmb3Igc2VyaWFsaXphdGlvbiBpbiBtYXBfcGFnZXNfdG9feGVuKCkg
YW5kIG1vZGlmeV94ZW5fbWFwcGluZ3MoKS4KKyAqCisgKiBGb3IgWGVuIFBU
IHBhZ2VzLCB0aGUgcGFnZS0+dS5pbnVzZS50eXBlX2luZm8gaXMgdW51c2Vk
IGFuZCBpdCBpcyBzYWZlIHRvCisgKiByZXVzZSB0aGUgUEdUX2xvY2tlZCBm
bGFnLiBUaGlzIGxvY2sgaXMgdGFrZW4gb25seSB3aGVuIHdlIG1vdmUgZG93
biB0byBMMworICogdGFibGVzIGFuZCBiZWxvdywgc2luY2UgTDQgKGFuZCBh
Ym92ZSwgZm9yIDUtbGV2ZWwgcGFnaW5nKSBpcyBzdGlsbCBnbG9iYWxseQor
ICogcHJvdGVjdGVkIGJ5IG1hcF9wZ2Rpcl9sb2NrLgorICoKKyAqIFBWIE1N
VSB1cGRhdGUgaHlwZXJjYWxscyBjYWxsIG1hcF9wYWdlc190b194ZW4gd2hp
bGUgaG9sZGluZyBhIHBhZ2UncyBwYWdlX2xvY2soKS4KKyAqIFRoaXMgaGFz
IHR3byBpbXBsaWNhdGlvbnM6CisgKiAtIFdlIGNhbm5vdCByZXVzZSByZXVz
ZSBjdXJyZW50X2xvY2tlZF9wYWdlXyogZm9yIGRlYnVnZ2luZworICogLSBU
byBhdm9pZCB0aGUgY2hhbmNlIG9mIGRlYWRsb2NrLCBldmVuIGZvciBkaWZm
ZXJlbnQgcGFnZXMsIHdlCisgKiAgIG11c3QgbmV2ZXIgZ3JhYiBwYWdlX2xv
Y2soKSBhZnRlciBncmFiYmluZyBsM3RfbG9jaygpLiAgVGhpcworICogICBp
bmNsdWRlcyBhbnkgcGFnZV9sb2NrKCktYmFzZWQgbG9ja3MsIHN1Y2ggYXMK
KyAqICAgbWVtX3NoYXJpbmdfcGFnZV9sb2NrKCkuCisgKgorICogQWxzbyBu
b3RlIHRoYXQgd2UgZ3JhYiB0aGUgbWFwX3BnZGlyX2xvY2sgd2hpbGUgaG9s
ZGluZyB0aGUKKyAqIGwzdF9sb2NrKCksIHNvIHRvIGF2b2lkIGRlYWRsb2Nr
IHdlIG11c3QgYXZvaWQgZ3JhYmJpbmcgdGhlbSBpbgorICogcmV2ZXJzZSBv
cmRlci4KKyAqLworc3RhdGljIHZvaWQgbDN0X2xvY2soc3RydWN0IHBhZ2Vf
aW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54OworCisg
ICAgZG8geworICAgICAgICB3aGlsZSAoICh4ID0gcGFnZS0+dS5pbnVzZS50
eXBlX2luZm8pICYgUEdUX2xvY2tlZCApCisgICAgICAgICAgICBjcHVfcmVs
YXgoKTsKKyAgICAgICAgbnggPSB4IHwgUEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggY21weGNoZygmcGFnZS0+dS5pbnVzZS50eXBlX2luZm8sIHgsIG54
KSAhPSB4ICk7Cit9CisKK3N0YXRpYyB2b2lkIGwzdF91bmxvY2soc3RydWN0
IHBhZ2VfaW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54
LCB5ID0gcGFnZS0+dS5pbnVzZS50eXBlX2luZm87CisKKyAgICBkbyB7Cisg
ICAgICAgIHggPSB5OworICAgICAgICBCVUdfT04oISh4ICYgUEdUX2xvY2tl
ZCkpOworICAgICAgICBueCA9IHggJiB+UEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggKHkgPSBjbXB4Y2hnKCZwYWdlLT51LmludXNlLnR5cGVfaW5mbywg
eCwgbngpKSAhPSB4ICk7Cit9CisKIC8qCiAgKiBQVEUgZmxhZ3MgdGhhdCBh
IGd1ZXN0IG1heSBjaGFuZ2Ugd2l0aG91dCByZS12YWxpZGF0aW5nIHRoZSBQ
VEUuCiAgKiBBbGwgb3RoZXIgYml0cyBhZmZlY3QgdHJhbnNsYXRpb24sIGNh
Y2hpbmcsIG9yIFhlbidzIHNhZmV0eS4KQEAgLTUxODQsNiArNTIyOCwyMyBA
QCBsMV9wZ2VudHJ5X3QgKnZpcnRfdG9feGVuX2wxZSh1bnNpZ25lZCBsb25n
IHYpCiAgICAgICAgICAgICAgICAgICAgICAgICAgZmx1c2hfYXJlYV9sb2Nh
bCgoY29uc3Qgdm9pZCAqKXYsIGYpIDogXAogICAgICAgICAgICAgICAgICAg
ICAgICAgIGZsdXNoX2FyZWFfYWxsKChjb25zdCB2b2lkICopdiwgZikpCiAK
KyNkZWZpbmUgTDNUX0lOSVQocGFnZSkgKHBhZ2UpID0gWkVST19CTE9DS19Q
VFIKKworI2RlZmluZSBMM1RfTE9DSyhwYWdlKSAgICAgICAgXAorICAgIGRv
IHsgICAgICAgICAgICAgICAgICAgICAgXAorICAgICAgICBpZiAoIGxvY2tp
bmcgKSAgICAgICAgXAorICAgICAgICAgICAgbDN0X2xvY2socGFnZSk7ICAg
XAorICAgIH0gd2hpbGUgKCBmYWxzZSApCisKKyNkZWZpbmUgTDNUX1VOTE9D
SyhwYWdlKSAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICBkbyB7
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICAgICAgaWYgKCBsb2NraW5nICYmIChwYWdlKSAhPSBaRVJPX0JMT0NL
X1BUUiApIFwKKyAgICAgICAgeyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgIGwzdF91bmxvY2socGFn
ZSk7ICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgICAgIChwYWdl
KSA9IFpFUk9fQkxPQ0tfUFRSOyAgICAgICAgICAgICAgIFwKKyAgICAgICAg
fSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwK
KyAgICB9IHdoaWxlICggZmFsc2UgKQorCiBpbnQgbWFwX3BhZ2VzX3RvX3hl
bigKICAgICB1bnNpZ25lZCBsb25nIHZpcnQsCiAgICAgbWZuX3QgbWZuLApA
QCAtNTE5NSw2ICs1MjU2LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAg
ICAgbDFfcGdlbnRyeV90ICpwbDFlLCBvbDFlOwogICAgIHVuc2lnbmVkIGlu
dCAgaTsKICAgICBpbnQgcmMgPSAtRU5PTUVNOworICAgIHN0cnVjdCBwYWdl
X2luZm8gKmN1cnJlbnRfbDNwYWdlOwogCiAjZGVmaW5lIGZsdXNoX2ZsYWdz
KG9sZGYpIGRvIHsgICAgICAgICAgICAgICAgIFwKICAgICB1bnNpZ25lZCBp
bnQgb18gPSAob2xkZik7ICAgICAgICAgICAgICAgICAgXApAQCAtNTIxMCwx
MyArNTI3MiwyMCBAQCBpbnQgbWFwX3BhZ2VzX3RvX3hlbigKICAgICB9ICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgXAogfSB3
aGlsZSAoMCkKIAorICAgIEwzVF9JTklUKGN1cnJlbnRfbDNwYWdlKTsKKwog
ICAgIHdoaWxlICggbnJfbWZucyAhPSAwICkKICAgICB7Ci0gICAgICAgIGwz
X3BnZW50cnlfdCBvbDNlLCAqcGwzZSA9IHZpcnRfdG9feGVuX2wzZSh2aXJ0
KTsKKyAgICAgICAgbDNfcGdlbnRyeV90ICpwbDNlLCBvbDNlOwogCisgICAg
ICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3BhZ2UpOworCisgICAgICAgIHBs
M2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7CiAgICAgICAgIGlmICggIXBs
M2UgKQogICAgICAgICAgICAgZ290byBvdXQ7CiAKKyAgICAgICAgY3VycmVu
dF9sM3BhZ2UgPSB2aXJ0X3RvX3BhZ2UocGwzZSk7CisgICAgICAgIEwzVF9M
T0NLKGN1cnJlbnRfbDNwYWdlKTsKICAgICAgICAgb2wzZSA9ICpwbDNlOwog
CiAgICAgICAgIGlmICggY3B1X2hhc19wYWdlMWdiICYmCkBAIC01NTUwLDYg
KzU2MTksNyBAQCBpbnQgbWFwX3BhZ2VzX3RvX3hlbigKICAgICByYyA9IDA7
CiAKICBvdXQ6CisgICAgTDNUX1VOTE9DSyhjdXJyZW50X2wzcGFnZSk7CiAg
ICAgcmV0dXJuIHJjOwogfQogCkBAIC01NTc4LDYgKzU2NDgsNyBAQCBpbnQg
bW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2lnbmVk
IGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAgIHVuc2lnbmVkIGludCAg
aTsKICAgICB1bnNpZ25lZCBsb25nIHYgPSBzOwogICAgIGludCByYyA9IC1F
Tk9NRU07CisgICAgc3RydWN0IHBhZ2VfaW5mbyAqY3VycmVudF9sM3BhZ2U7
CiAKICAgICAvKiBTZXQgb2YgdmFsaWQgUFRFIGJpdHMgd2hpY2ggbWF5IGJl
IGFsdGVyZWQuICovCiAjZGVmaW5lIEZMQUdTX01BU0sgKF9QQUdFX05YfF9Q
QUdFX1JXfF9QQUdFX1BSRVNFTlQpCkBAIC01NTg2LDExICs1NjU3LDIyIEBA
IGludCBtb2RpZnlfeGVuX21hcHBpbmdzKHVuc2lnbmVkIGxvbmcgcywgdW5z
aWduZWQgbG9uZyBlLCB1bnNpZ25lZCBpbnQgbmYpCiAgICAgQVNTRVJUKElT
X0FMSUdORUQocywgUEFHRV9TSVpFKSk7CiAgICAgQVNTRVJUKElTX0FMSUdO
RUQoZSwgUEFHRV9TSVpFKSk7CiAKKyAgICBMM1RfSU5JVChjdXJyZW50X2wz
cGFnZSk7CisKICAgICB3aGlsZSAoIHYgPCBlICkKICAgICB7Ci0gICAgICAg
IGwzX3BnZW50cnlfdCAqcGwzZSA9IHZpcnRfdG9feGVuX2wzZSh2KTsKKyAg
ICAgICAgbDNfcGdlbnRyeV90ICpwbDNlOworCisgICAgICAgIEwzVF9VTkxP
Q0soY3VycmVudF9sM3BhZ2UpOwogCi0gICAgICAgIGlmICggIXBsM2UgfHwg
IShsM2VfZ2V0X2ZsYWdzKCpwbDNlKSAmIF9QQUdFX1BSRVNFTlQpICkKKyAg
ICAgICAgcGwzZSA9IHZpcnRfdG9feGVuX2wzZSh2KTsKKyAgICAgICAgaWYg
KCAhcGwzZSApCisgICAgICAgICAgICBnb3RvIG91dDsKKworICAgICAgICBj
dXJyZW50X2wzcGFnZSA9IHZpcnRfdG9fcGFnZShwbDNlKTsKKyAgICAgICAg
TDNUX0xPQ0soY3VycmVudF9sM3BhZ2UpOworCisgICAgICAgIGlmICggIShs
M2VfZ2V0X2ZsYWdzKCpwbDNlKSAmIF9QQUdFX1BSRVNFTlQpICkKICAgICAg
ICAgewogICAgICAgICAgICAgLyogQ29uZmlybSB0aGUgY2FsbGVyIGlzbid0
IHRyeWluZyB0byBjcmVhdGUgbmV3IG1hcHBpbmdzLiAqLwogICAgICAgICAg
ICAgQVNTRVJUKCEobmYgJiBfUEFHRV9QUkVTRU5UKSk7CkBAIC01ODA4LDkg
KzU4OTAsMTMgQEAgaW50IG1vZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQg
bG9uZyBzLCB1bnNpZ25lZCBsb25nIGUsIHVuc2lnbmVkIGludCBuZikKICAg
ICByYyA9IDA7CiAKICBvdXQ6CisgICAgTDNUX1VOTE9DSyhjdXJyZW50X2wz
cGFnZSk7CiAgICAgcmV0dXJuIHJjOwogfQogCisjdW5kZWYgTDNUX0xPQ0sK
KyN1bmRlZiBMM1RfVU5MT0NLCisKICN1bmRlZiBmbHVzaF9hcmVhCiAKIGlu
dCBkZXN0cm95X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2ln
bmVkIGxvbmcgZSkKLS0gCjIuMjUuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.12/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Disposition: attachment;
 filename="xsa345-4.12/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Transfer-Encoding: base64

RnJvbSBlMzNmYWQzMDQ0YWFhZWVjNmVkOTkxNDkyNWQ5NTU4Njk1YmRiMDlk
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQxICswMDAwClN1YmplY3Q6IFtQQVRDSCAxLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbWFwX3BhZ2VzX3RvX3hlbiB0byBoYXZlIG9ubHkgYSBzaW5nbGUKIGV4
aXQgcGF0aAoKV2Ugd2lsbCBzb29uIG5lZWQgdG8gcGVyZm9ybSBjbGVhbi11
cHMgYmVmb3JlIHJldHVybmluZy4KCk5vIGZ1bmN0aW9uYWwgY2hhbmdlLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDUuCgpSZXBvcnRlZC1ieTogSG9uZ3lh
biBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IFdl
aSBMaXUgPHdlaS5saXUyQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEhv
bmd5YW4gWGlhIDxob25neXhpYUBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+Ci0t
LQogeGVuL2FyY2gveDg2L21tLmMgfCAxNyArKysrKysrKysrKy0tLS0tLQog
MSBmaWxlIGNoYW5nZWQsIDExIGluc2VydGlvbnMoKyksIDYgZGVsZXRpb25z
KC0pCgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJj
aC94ODYvbW0uYwppbmRleCBiNGM5MGJkMDU0Li4wZTU0MGYxNDNiIDEwMDY0
NAotLS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYv
bW0uYwpAQCAtNTIyNyw2ICs1MjI3LDcgQEAgaW50IG1hcF9wYWdlc190b194
ZW4oCiAgICAgbDJfcGdlbnRyeV90ICpwbDJlLCBvbDJlOwogICAgIGwxX3Bn
ZW50cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7Cisg
ICAgaW50IHJjID0gLUVOT01FTTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhv
bGRmKSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50
IG9fID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUyNDcsNyAr
NTI0OCw4IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAgICAgICBsM19w
Z2VudHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7
CiAKICAgICAgICAgaWYgKCAhcGwzZSApCi0gICAgICAgICAgICByZXR1cm4g
LUVOT01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgIG9s
M2UgPSAqcGwzZTsKIAogICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAm
JgpAQCAtNTMzNSw3ICs1MzM3LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4o
CiAKICAgICAgICAgICAgIHBsMmUgPSBhbGxvY194ZW5fcGFnZXRhYmxlKCk7
CiAgICAgICAgICAgICBpZiAoIHBsMmUgPT0gTlVMTCApCi0gICAgICAgICAg
ICAgICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAgICAgZ290byBv
dXQ7CiAKICAgICAgICAgICAgIGZvciAoIGkgPSAwOyBpIDwgTDJfUEFHRVRB
QkxFX0VOVFJJRVM7IGkrKyApCiAgICAgICAgICAgICAgICAgbDJlX3dyaXRl
KHBsMmUgKyBpLApAQCAtNTM2NCw3ICs1MzY2LDcgQEAgaW50IG1hcF9wYWdl
c190b194ZW4oCiAKICAgICAgICAgcGwyZSA9IHZpcnRfdG9feGVuX2wyZSh2
aXJ0KTsKICAgICAgICAgaWYgKCAhcGwyZSApCi0gICAgICAgICAgICByZXR1
cm4gLUVOT01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OwogCiAgICAgICAg
IGlmICggKCgoKHZpcnQgPj4gUEFHRV9TSElGVCkgfCBtZm5feChtZm4pKSAm
CiAgICAgICAgICAgICAgICAoKDF1IDw8IFBBR0VUQUJMRV9PUkRFUikgLSAx
KSkgPT0gMCkgJiYKQEAgLTU0MDcsNyArNTQwOSw3IEBAIGludCBtYXBfcGFn
ZXNfdG9feGVuKAogICAgICAgICAgICAgewogICAgICAgICAgICAgICAgIHBs
MWUgPSB2aXJ0X3RvX3hlbl9sMWUodmlydCk7CiAgICAgICAgICAgICAgICAg
aWYgKCBwbDFlID09IE5VTEwgKQotICAgICAgICAgICAgICAgICAgICByZXR1
cm4gLUVOT01FTTsKKyAgICAgICAgICAgICAgICAgICAgZ290byBvdXQ7CiAg
ICAgICAgICAgICB9CiAgICAgICAgICAgICBlbHNlIGlmICggbDJlX2dldF9m
bGFncygqcGwyZSkgJiBfUEFHRV9QU0UgKQogICAgICAgICAgICAgewpAQCAt
NTQzNCw3ICs1NDM2LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAKICAg
ICAgICAgICAgICAgICBwbDFlID0gYWxsb2NfeGVuX3BhZ2V0YWJsZSgpOwog
ICAgICAgICAgICAgICAgIGlmICggcGwxZSA9PSBOVUxMICkKLSAgICAgICAg
ICAgICAgICAgICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAgICAg
ICAgIGdvdG8gb3V0OwogCiAgICAgICAgICAgICAgICAgZm9yICggaSA9IDA7
IGkgPCBMMV9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAgICAgICAgICAg
ICAgICAgICAgbDFlX3dyaXRlKCZwbDFlW2ldLApAQCAtNTU3OCw3ICs1NTgw
LDEwIEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogCiAjdW5kZWYgZmx1c2hf
ZmxhZ3MKIAotICAgIHJldHVybiAwOworICAgIHJjID0gMDsKKworIG91dDoK
KyAgICByZXR1cm4gcmM7CiB9CiAKIGludCBwb3B1bGF0ZV9wdF9yYW5nZSh1
bnNpZ25lZCBsb25nIHZpcnQsIHVuc2lnbmVkIGxvbmcgbnJfbWZucykKLS0g
CjIuMjUuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.12/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Disposition: attachment;
 filename="xsa345-4.12/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Transfer-Encoding: base64

RnJvbSBhMTUxYjA3ZDUwNGQ3ZjQ0MjI1NmUxYzgyOTE3MzM0MjE2YzU4YTIx
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQyICswMDAwClN1YmplY3Q6IFtQQVRDSCAyLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbW9kaWZ5X3hlbl9tYXBwaW5ncyB0byBoYXZlIG9uZSBleGl0CiBwYXRo
CgpXZSB3aWxsIHNvb24gbmVlZCB0byBwZXJmb3JtIGNsZWFuLXVwcyBiZWZv
cmUgcmV0dXJuaW5nLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5OiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogV2VpIExpdSA8
d2VpLmxpdTJAY2l0cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogSG9uZ3lhbiBY
aWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEdlb3Jn
ZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KLS0tCiB4ZW4v
YXJjaC94ODYvbW0uYyB8IDEyICsrKysrKysrKy0tLQogMSBmaWxlIGNoYW5n
ZWQsIDkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRpZmYgLS1n
aXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmlu
ZGV4IDBlNTQwZjE0M2IuLmJmZjI2ODllNjAgMTAwNjQ0Ci0tLSBhL3hlbi9h
cmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC01NjEw
LDYgKzU2MTAsNyBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25l
ZCBsb25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQog
ICAgIGwxX3BnZW50cnlfdCAqcGwxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7
CiAgICAgdW5zaWduZWQgbG9uZyB2ID0gczsKKyAgICBpbnQgcmMgPSAtRU5P
TUVNOwogCiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1h
eSBiZSBhbHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9O
WHxfUEFHRV9SV3xfUEFHRV9QUkVTRU5UKQpAQCAtNTY1MSw3ICs1NjUyLDgg
QEAgaW50IG1vZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1
bnNpZ25lZCBsb25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICAgICAgICAg
IC8qIFBBR0UxR0I6IHNoYXR0ZXIgdGhlIHN1cGVycGFnZSBhbmQgZmFsbCB0
aHJvdWdoLiAqLwogICAgICAgICAgICAgcGwyZSA9IGFsbG9jX3hlbl9wYWdl
dGFibGUoKTsKICAgICAgICAgICAgIGlmICggIXBsMmUgKQotICAgICAgICAg
ICAgICAgIHJldHVybiAtRU5PTUVNOworICAgICAgICAgICAgICAgIGdvdG8g
b3V0OworCiAgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8IEwyX1BBR0VU
QUJMRV9FTlRSSUVTOyBpKysgKQogICAgICAgICAgICAgICAgIGwyZV93cml0
ZShwbDJlICsgaSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgbDJlX2Zy
b21fcGZuKGwzZV9nZXRfcGZuKCpwbDNlKSArCkBAIC01NzA2LDcgKzU3MDgs
OCBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMs
IHVuc2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAgICAgICAg
ICAgICAgIC8qIFBTRTogc2hhdHRlciB0aGUgc3VwZXJwYWdlIGFuZCB0cnkg
YWdhaW4uICovCiAgICAgICAgICAgICAgICAgcGwxZSA9IGFsbG9jX3hlbl9w
YWdldGFibGUoKTsKICAgICAgICAgICAgICAgICBpZiAoICFwbDFlICkKLSAg
ICAgICAgICAgICAgICAgICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAg
ICAgICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgICAgICAgICAgZm9yICgg
aSA9IDA7IGkgPCBMMV9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAgICAg
ICAgICAgICAgICAgICAgbDFlX3dyaXRlKCZwbDFlW2ldLAogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgbDFlX2Zyb21fcGZuKGwyZV9nZXRfcGZu
KCpwbDJlKSArIGksCkBAIC01ODM1LDcgKzU4MzgsMTAgQEAgaW50IG1vZGlm
eV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25n
IGUsIHVuc2lnbmVkIGludCBuZikKICAgICBmbHVzaF9hcmVhKE5VTEwsIEZM
VVNIX1RMQl9HTE9CQUwpOwogCiAjdW5kZWYgRkxBR1NfTUFTSwotICAgIHJl
dHVybiAwOworICAgIHJjID0gMDsKKworIG91dDoKKyAgICByZXR1cm4gcmM7
CiB9CiAKICN1bmRlZiBmbHVzaF9hcmVhCi0tIAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.12/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Disposition: attachment;
 filename="xsa345-4.12/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Transfer-Encoding: base64

RnJvbSBhYmFmMDVlMTgzZjNiYzM5Mjc4NDRhZTE2ZDY2NDJhMzgyYzBkYmVm
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KRGF0ZTogU2F0LCAxMSBKYW4gMjAyMCAy
MTo1Nzo0MyArMDAwMApTdWJqZWN0OiBbUEFUQ0ggMy8zXSB4ODYvbW06IFBy
ZXZlbnQgc29tZSByYWNlcyBpbiBoeXBlcnZpc29yIG1hcHBpbmcgdXBkYXRl
cwoKbWFwX3BhZ2VzX3RvX3hlbiB3aWxsIGF0dGVtcHQgdG8gY29hbGVzY2Ug
bWFwcGluZ3MgaW50byAyTWlCIGFuZCAxR2lCCnN1cGVycGFnZXMgaWYgcG9z
c2libGUsIHRvIG1heGltaXplIFRMQiBlZmZpY2llbmN5LiAgVGhpcyBtZWFu
cyBib3RoCnJlcGxhY2luZyBzdXBlcnBhZ2UgZW50cmllcyB3aXRoIHNtYWxs
ZXIgZW50cmllcywgYW5kIHJlcGxhY2luZwpzbWFsbGVyIGVudHJpZXMgd2l0
aCBzdXBlcnBhZ2VzLgoKVW5mb3J0dW5hdGVseSwgd2hpbGUgc29tZSBwb3Rl
bnRpYWwgcmFjZXMgYXJlIGhhbmRsZWQgY29ycmVjdGx5LApvdGhlcnMgYXJl
IG5vdC4gIFRoZXNlIGluY2x1ZGU6CgoxLiBXaGVuIG9uZSBwcm9jZXNzb3Ig
bW9kaWZpZXMgYSBzdWItc3VwZXJwYWdlIG1hcHBpbmcgd2hpbGUgYW5vdGhl
cgpwcm9jZXNzb3IgcmVwbGFjZXMgdGhlIGVudGlyZSByYW5nZSB3aXRoIGEg
c3VwZXJwYWdlLgoKVGFrZSB0aGUgZm9sbG93aW5nIGV4YW1wbGU6CgpTdXBw
b3NlIEwzW05dIHBvaW50cyB0byBMMi4gIEFuZCBzdXBwb3NlIHdlIGhhdmUg
dHdvIHByb2Nlc3NvcnMsIEEgYW5kCkIuCgoqIEEgd2Fsa3MgdGhlIHBhZ2V0
YWJsZXMsIGdldCBhIHBvaW50ZXIgdG8gTDIuCiogQiByZXBsYWNlcyBMM1tO
XSB3aXRoIGEgMUdpQiBtYXBwaW5nLgoqIEIgRnJlZXMgTDIKKiBBIHdyaXRl
cyBMMltNXSAjCgpUaGlzIGlzIHJhY2UgZXhhY2VyYmF0ZWQgYnkgdGhlIGZh
Y3QgdGhhdCB2aXJ0X3RvX3hlbl9sWzIxXWUgZG9lc24ndApoYW5kbGUgaGln
aGVyLWxldmVsIHN1cGVycGFnZXMgcHJvcGVybHk6IElmIHlvdSBjYWxsIHZp
cnRfeGVuX3RvX2wyZQpvbiBhIHZpcnR1YWwgYWRkcmVzcyB3aXRoaW4gYW4g
TDMgc3VwZXJwYWdlLCB5b3UnbGwgZWl0aGVyIGhpdCBhIEJVRygpCihtb3N0
IGxpa2VseSksIG9yIGdldCBhIHBvaW50ZXIgaW50byB0aGUgbWlkZGxlIG9m
IGEgZGF0YSBwYWdlOyBzYW1lCndpdGggdmlydF94ZW5fdG9fbDEgb24gYSB2
aXJ0dWFsIGFkZHJlc3Mgd2l0aGluIGVpdGhlciBhbiBMMyBvciBMMgpzdXBl
cnBhZ2UuCgpTbyB0YWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToKCiogQSBy
ZWFkcyBwbDNlIGFuZCBkaXNjb3ZlcnMgaXQgdG8gcG9pbnQgdG8gYW4gTDIu
CiogQiByZXBsYWNlcyBMM1tOXSB3aXRoIGEgMUdpQiBtYXBwaW5nCiogQSBj
YWxscyB2aXJ0X3RvX3hlbl9sMmUoKSBhbmQgaGl0cyB0aGUgQlVHX09OKCkg
IwoKMi4gV2hlbiB0d28gcHJvY2Vzc29ycyBzaW11bHRhbmVvdXNseSB0cnkg
dG8gcmVwbGFjZSBhIHN1Yi1zdXBlcnBhZ2UKbWFwcGluZyB3aXRoIGEgc3Vw
ZXJwYWdlIG1hcHBpbmcuCgpUYWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToK
ClN1cHBvc2UgTDNbTl0gcG9pbnRzIHRvIEwyLiAgQW5kIHN1cHBvc2Ugd2Ug
aGF2ZSB0d28gcHJvY2Vzc29ycywgQSBhbmQgQiwKYm90aCB0cnlpbmcgdG8g
cmVwbGFjZSBMM1tOXSB3aXRoIGEgc3VwZXJwYWdlLgoKKiBBIHdhbGtzIHRo
ZSBwYWdldGFibGVzLCBnZXQgYSBwb2ludGVyIHRvIHBsM2UsIGFuZCB0YWtl
cyBhIGNvcHkgb2wzZSBwb2ludGluZyB0byBMMi4KKiBCIHdhbGtzIHRoZSBw
YWdldGFibGVzLCBnZXRzIGEgcG9pbnRyZSB0byBwbDNlLCBhbmQgdGFrZXMg
YSBjb3B5IG9sM2UgcG9pbnRpbmcgdG8gTDIuCiogQSB3cml0ZXMgdGhlIG5l
dyB2YWx1ZSBpbnRvIEwzW05dCiogQiB3cml0ZXMgdGhlIG5ldyB2YWx1ZSBp
bnRvIEwzW05dCiogQSByZWN1cnNpdmVseSBmcmVlcyBhbGwgdGhlIEwxJ3Mg
dW5kZXIgTDIsIHRoZW4gZnJlZXMgTDIKKiBCIHJlY3Vyc2l2ZWx5IGRvdWJs
ZS1mcmVlcyBhbGwgdGhlIEwxJ3MgdW5kZXIgTDIsIHRoZW4gZG91YmxlLWZy
ZWVzIEwyICMKCkZpeCB0aGlzIGJ5IGdyYWJiaW5nIGEgbG9jayBmb3IgdGhl
IGVudGlyZXR5IG9mIHRoZSBtYXBwaW5nIHVwZGF0ZQpvcGVyYXRpb24uCgpS
YXRoZXIgdGhhbiBncmFiYmluZyBtYXBfcGdkaXJfbG9jayBmb3IgdGhlIGVu
dGlyZSBvcGVyYXRpb24sIGhvd2V2ZXIsCnJlcHVycG9zZSB0aGUgUEdUX2xv
Y2tlZCBiaXQgZnJvbSBMMydzIHBhZ2UtPnR5cGVfaW5mbyBhcyBhIGxvY2su
ClRoaXMgbWVhbnMgdGhhdCByYXRoZXIgdGhhbiBsb2NraW5nIHRoZSBlbnRp
cmUgYWRkcmVzcyBzcGFjZSwgd2UKIm9ubHkiIGxvY2sgYSBzaW5nbGUgNTEy
R2lCIGNodW5rIG9mIGh5cGVydmlzb3IgYWRkcmVzcyBzcGFjZSBhdCBhCnRp
bWUuCgpUaGVyZSB3YXMgYSBwcm9wb3NhbCBmb3IgYSBsb2NrLWFuZC1yZXZl
cmlmeSBhcHByb2FjaCwgd2hlcmUgd2Ugd2Fsawp0aGUgcGFnZXRhYmxlcyB0
byB0aGUgcG9pbnQgd2hlcmUgd2UgZGVjaWRlIHdoYXQgdG8gZG87IHRoZW4g
Z3JhYiB0aGUKbWFwX3BnZGlyX2xvY2ssIHJlLXZlcmlmeSB0aGUgaW5mb3Jt
YXRpb24gd2UgY29sbGVjdGVkIHdpdGhvdXQgdGhlCmxvY2ssIGFuZCBmaW5h
bGx5IG1ha2UgdGhlIGNoYW5nZSAoc3RhcnRpbmcgb3ZlciBhZ2FpbiBpZiBh
bnl0aGluZyBoYWQKY2hhbmdlZCkuICBXaXRob3V0IGJlaW5nIGFibGUgdG8g
Z3VhcmFudGVlIHRoYXQgdGhlIEwyIHRhYmxlIHdhc24ndApmcmVlZCwgaG93
ZXZlciwgdGhhdCBtZWFucyBldmVyeSByZWFkIHdvdWxkIG5lZWQgdG8gYmUg
Y29uc2lkZXJlZApwb3RlbnRpYWxseSB1bnNhZmUuICBUaGlua2luZyBjYXJl
ZnVsbHkgYWJvdXQgdGhhdCBpcyBwcm9iYWJseQpzb21ldGhpbmcgdGhhdCB3
YW50cyB0byBiZSBkb25lIG9uIHB1YmxpYywgbm90IHVuZGVyIHRpbWUgcHJl
c3N1cmUuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5
OiBIb25neWFuIFhpYSA8aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9m
Zi1ieTogSG9uZ3lhbiBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25l
ZC1vZmYtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KLS0tCiB4ZW4vYXJjaC94ODYvbW0uYyB8IDkyICsrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tCiAxIGZpbGUg
Y2hhbmdlZCwgODkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9t
bS5jCmluZGV4IGJmZjI2ODllNjAuLmQ2YmE4YzRiYjQgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBA
IC0yMTk3LDYgKzIxOTcsNTAgQEAgdm9pZCBwYWdlX3VubG9jayhzdHJ1Y3Qg
cGFnZV9pbmZvICpwYWdlKQogICAgIGN1cnJlbnRfbG9ja2VkX3BhZ2Vfc2V0
KE5VTEwpOwogfQogCisvKgorICogTDMgdGFibGUgbG9ja3M6CisgKgorICog
VXNlZCBmb3Igc2VyaWFsaXphdGlvbiBpbiBtYXBfcGFnZXNfdG9feGVuKCkg
YW5kIG1vZGlmeV94ZW5fbWFwcGluZ3MoKS4KKyAqCisgKiBGb3IgWGVuIFBU
IHBhZ2VzLCB0aGUgcGFnZS0+dS5pbnVzZS50eXBlX2luZm8gaXMgdW51c2Vk
IGFuZCBpdCBpcyBzYWZlIHRvCisgKiByZXVzZSB0aGUgUEdUX2xvY2tlZCBm
bGFnLiBUaGlzIGxvY2sgaXMgdGFrZW4gb25seSB3aGVuIHdlIG1vdmUgZG93
biB0byBMMworICogdGFibGVzIGFuZCBiZWxvdywgc2luY2UgTDQgKGFuZCBh
Ym92ZSwgZm9yIDUtbGV2ZWwgcGFnaW5nKSBpcyBzdGlsbCBnbG9iYWxseQor
ICogcHJvdGVjdGVkIGJ5IG1hcF9wZ2Rpcl9sb2NrLgorICoKKyAqIFBWIE1N
VSB1cGRhdGUgaHlwZXJjYWxscyBjYWxsIG1hcF9wYWdlc190b194ZW4gd2hp
bGUgaG9sZGluZyBhIHBhZ2UncyBwYWdlX2xvY2soKS4KKyAqIFRoaXMgaGFz
IHR3byBpbXBsaWNhdGlvbnM6CisgKiAtIFdlIGNhbm5vdCByZXVzZSByZXVz
ZSBjdXJyZW50X2xvY2tlZF9wYWdlXyogZm9yIGRlYnVnZ2luZworICogLSBU
byBhdm9pZCB0aGUgY2hhbmNlIG9mIGRlYWRsb2NrLCBldmVuIGZvciBkaWZm
ZXJlbnQgcGFnZXMsIHdlCisgKiAgIG11c3QgbmV2ZXIgZ3JhYiBwYWdlX2xv
Y2soKSBhZnRlciBncmFiYmluZyBsM3RfbG9jaygpLiAgVGhpcworICogICBp
bmNsdWRlcyBhbnkgcGFnZV9sb2NrKCktYmFzZWQgbG9ja3MsIHN1Y2ggYXMK
KyAqICAgbWVtX3NoYXJpbmdfcGFnZV9sb2NrKCkuCisgKgorICogQWxzbyBu
b3RlIHRoYXQgd2UgZ3JhYiB0aGUgbWFwX3BnZGlyX2xvY2sgd2hpbGUgaG9s
ZGluZyB0aGUKKyAqIGwzdF9sb2NrKCksIHNvIHRvIGF2b2lkIGRlYWRsb2Nr
IHdlIG11c3QgYXZvaWQgZ3JhYmJpbmcgdGhlbSBpbgorICogcmV2ZXJzZSBv
cmRlci4KKyAqLworc3RhdGljIHZvaWQgbDN0X2xvY2soc3RydWN0IHBhZ2Vf
aW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54OworCisg
ICAgZG8geworICAgICAgICB3aGlsZSAoICh4ID0gcGFnZS0+dS5pbnVzZS50
eXBlX2luZm8pICYgUEdUX2xvY2tlZCApCisgICAgICAgICAgICBjcHVfcmVs
YXgoKTsKKyAgICAgICAgbnggPSB4IHwgUEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggY21weGNoZygmcGFnZS0+dS5pbnVzZS50eXBlX2luZm8sIHgsIG54
KSAhPSB4ICk7Cit9CisKK3N0YXRpYyB2b2lkIGwzdF91bmxvY2soc3RydWN0
IHBhZ2VfaW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54
LCB5ID0gcGFnZS0+dS5pbnVzZS50eXBlX2luZm87CisKKyAgICBkbyB7Cisg
ICAgICAgIHggPSB5OworICAgICAgICBCVUdfT04oISh4ICYgUEdUX2xvY2tl
ZCkpOworICAgICAgICBueCA9IHggJiB+UEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggKHkgPSBjbXB4Y2hnKCZwYWdlLT51LmludXNlLnR5cGVfaW5mbywg
eCwgbngpKSAhPSB4ICk7Cit9CisKICNpZmRlZiBDT05GSUdfUFYKIC8qCiAg
KiBQVEUgZmxhZ3MgdGhhdCBhIGd1ZXN0IG1heSBjaGFuZ2Ugd2l0aG91dCBy
ZS12YWxpZGF0aW5nIHRoZSBQVEUuCkBAIC01MjE3LDYgKzUyNjEsMjMgQEAg
bDFfcGdlbnRyeV90ICp2aXJ0X3RvX3hlbl9sMWUodW5zaWduZWQgbG9uZyB2
KQogICAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2FyZWFfbG9jYWwo
KGNvbnN0IHZvaWQgKil2LCBmKSA6IFwKICAgICAgICAgICAgICAgICAgICAg
ICAgICBmbHVzaF9hcmVhX2FsbCgoY29uc3Qgdm9pZCAqKXYsIGYpKQogCisj
ZGVmaW5lIEwzVF9JTklUKHBhZ2UpIChwYWdlKSA9IFpFUk9fQkxPQ0tfUFRS
CisKKyNkZWZpbmUgTDNUX0xPQ0socGFnZSkgICAgICAgIFwKKyAgICBkbyB7
ICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBsb2NraW5n
ICkgICAgICAgIFwKKyAgICAgICAgICAgIGwzdF9sb2NrKHBhZ2UpOyAgIFwK
KyAgICB9IHdoaWxlICggZmFsc2UgKQorCisjZGVmaW5lIEwzVF9VTkxPQ0so
cGFnZSkgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgZG8geyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisg
ICAgICAgIGlmICggbG9ja2luZyAmJiAocGFnZSkgIT0gWkVST19CTE9DS19Q
VFIgKSBcCisgICAgICAgIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCisgICAgICAgICAgICBsM3RfdW5sb2NrKHBhZ2Up
OyAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICAocGFnZSkg
PSBaRVJPX0JMT0NLX1BUUjsgICAgICAgICAgICAgICBcCisgICAgICAgIH0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisg
ICAgfSB3aGlsZSAoIGZhbHNlICkKKwogaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgdW5zaWduZWQgbG9uZyB2aXJ0LAogICAgIG1mbl90IG1mbiwKQEAg
LTUyMjgsNiArNTI4OSw3IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAg
IGwxX3BnZW50cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQg
IGk7CiAgICAgaW50IHJjID0gLUVOT01FTTsKKyAgICBzdHJ1Y3QgcGFnZV9p
bmZvICpjdXJyZW50X2wzcGFnZTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhv
bGRmKSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50
IG9fID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUyNDMsMTMg
KzUzMDUsMjAgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgfSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKIH0gd2hp
bGUgKDApCiAKKyAgICBMM1RfSU5JVChjdXJyZW50X2wzcGFnZSk7CisKICAg
ICB3aGlsZSAoIG5yX21mbnMgIT0gMCApCiAgICAgewotICAgICAgICBsM19w
Z2VudHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7
CisgICAgICAgIGwzX3BnZW50cnlfdCAqcGwzZSwgb2wzZTsKIAorICAgICAg
ICBMM1RfVU5MT0NLKGN1cnJlbnRfbDNwYWdlKTsKKworICAgICAgICBwbDNl
ID0gdmlydF90b194ZW5fbDNlKHZpcnQpOwogICAgICAgICBpZiAoICFwbDNl
ICkKICAgICAgICAgICAgIGdvdG8gb3V0OwogCisgICAgICAgIGN1cnJlbnRf
bDNwYWdlID0gdmlydF90b19wYWdlKHBsM2UpOworICAgICAgICBMM1RfTE9D
SyhjdXJyZW50X2wzcGFnZSk7CiAgICAgICAgIG9sM2UgPSAqcGwzZTsKIAog
ICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAmJgpAQCAtNTU4Myw2ICs1
NjUyLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgcmMgPSAwOwog
CiAgb3V0OgorICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3BhZ2UpOwogICAg
IHJldHVybiByYzsKIH0KIApAQCAtNTYxMSw2ICs1NjgxLDcgQEAgaW50IG1v
ZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBs
b25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICB1bnNpZ25lZCBpbnQgIGk7
CiAgICAgdW5zaWduZWQgbG9uZyB2ID0gczsKICAgICBpbnQgcmMgPSAtRU5P
TUVNOworICAgIHN0cnVjdCBwYWdlX2luZm8gKmN1cnJlbnRfbDNwYWdlOwog
CiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBiZSBh
bHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxfUEFH
RV9SV3xfUEFHRV9QUkVTRU5UKQpAQCAtNTYxOSwxMSArNTY5MCwyMiBAQCBp
bnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2ln
bmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAgIEFTU0VSVChJU19B
TElHTkVEKHMsIFBBR0VfU0laRSkpOwogICAgIEFTU0VSVChJU19BTElHTkVE
KGUsIFBBR0VfU0laRSkpOwogCisgICAgTDNUX0lOSVQoY3VycmVudF9sM3Bh
Z2UpOworCiAgICAgd2hpbGUgKCB2IDwgZSApCiAgICAgewotICAgICAgICBs
M19wZ2VudHJ5X3QgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2Uodik7CisgICAg
ICAgIGwzX3BnZW50cnlfdCAqcGwzZTsKKworICAgICAgICBMM1RfVU5MT0NL
KGN1cnJlbnRfbDNwYWdlKTsKIAotICAgICAgICBpZiAoICFwbDNlIHx8ICEo
bDNlX2dldF9mbGFncygqcGwzZSkgJiBfUEFHRV9QUkVTRU5UKSApCisgICAg
ICAgIHBsM2UgPSB2aXJ0X3RvX3hlbl9sM2Uodik7CisgICAgICAgIGlmICgg
IXBsM2UgKQorICAgICAgICAgICAgZ290byBvdXQ7CisKKyAgICAgICAgY3Vy
cmVudF9sM3BhZ2UgPSB2aXJ0X3RvX3BhZ2UocGwzZSk7CisgICAgICAgIEwz
VF9MT0NLKGN1cnJlbnRfbDNwYWdlKTsKKworICAgICAgICBpZiAoICEobDNl
X2dldF9mbGFncygqcGwzZSkgJiBfUEFHRV9QUkVTRU5UKSApCiAgICAgICAg
IHsKICAgICAgICAgICAgIC8qIENvbmZpcm0gdGhlIGNhbGxlciBpc24ndCB0
cnlpbmcgdG8gY3JlYXRlIG5ldyBtYXBwaW5ncy4gKi8KICAgICAgICAgICAg
IEFTU0VSVCghKG5mICYgX1BBR0VfUFJFU0VOVCkpOwpAQCAtNTg0MSw5ICs1
OTIzLDEzIEBAIGludCBtb2RpZnlfeGVuX21hcHBpbmdzKHVuc2lnbmVkIGxv
bmcgcywgdW5zaWduZWQgbG9uZyBlLCB1bnNpZ25lZCBpbnQgbmYpCiAgICAg
cmMgPSAwOwogCiAgb3V0OgorICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3Bh
Z2UpOwogICAgIHJldHVybiByYzsKIH0KIAorI3VuZGVmIEwzVF9MT0NLCisj
dW5kZWYgTDNUX1VOTE9DSworCiAjdW5kZWYgZmx1c2hfYXJlYQogCiBpbnQg
ZGVzdHJveV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25l
ZCBsb25nIGUpCi0tIAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.13/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Disposition: attachment;
 filename="xsa345-4.13/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Transfer-Encoding: base64

RnJvbSBiM2UwZDRlMzdiNzkwMjUzM2E0NjM4MTIzNzQ5NDdkNGQ2ZDJlNDYz
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQxICswMDAwClN1YmplY3Q6IFtQQVRDSCAxLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbWFwX3BhZ2VzX3RvX3hlbiB0byBoYXZlIG9ubHkgYSBzaW5nbGUKIGV4
aXQgcGF0aAoKV2Ugd2lsbCBzb29uIG5lZWQgdG8gcGVyZm9ybSBjbGVhbi11
cHMgYmVmb3JlIHJldHVybmluZy4KCk5vIGZ1bmN0aW9uYWwgY2hhbmdlLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDUuCgpSZXBvcnRlZC1ieTogSG9uZ3lh
biBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IFdl
aSBMaXUgPHdlaS5saXUyQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEhv
bmd5YW4gWGlhIDxob25neXhpYUBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+CkFj
a2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+Ci0tLQog
eGVuL2FyY2gveDg2L21tLmMgfCAxNyArKysrKysrKysrKy0tLS0tLQogMSBm
aWxlIGNoYW5nZWQsIDExIGluc2VydGlvbnMoKyksIDYgZGVsZXRpb25zKC0p
CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJjaC94
ODYvbW0uYwppbmRleCAzMGRmZmI2OGU4Li4xMzNhMzkzODc1IDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvbW0u
YwpAQCAtNTE4Nyw2ICs1MTg3LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgbDJfcGdlbnRyeV90ICpwbDJlLCBvbDJlOwogICAgIGwxX3BnZW50
cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CisgICAg
aW50IHJjID0gLUVOT01FTTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhvbGRm
KSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50IG9f
ID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUyMDcsNyArNTIw
OCw4IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAgICAgICBsM19wZ2Vu
dHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7CiAK
ICAgICAgICAgaWYgKCAhcGwzZSApCi0gICAgICAgICAgICByZXR1cm4gLUVO
T01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgIG9sM2Ug
PSAqcGwzZTsKIAogICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAmJgpA
QCAtNTI5NSw3ICs1Mjk3LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAK
ICAgICAgICAgICAgIHBsMmUgPSBhbGxvY194ZW5fcGFnZXRhYmxlKCk7CiAg
ICAgICAgICAgICBpZiAoIHBsMmUgPT0gTlVMTCApCi0gICAgICAgICAgICAg
ICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAgICAgZ290byBvdXQ7
CiAKICAgICAgICAgICAgIGZvciAoIGkgPSAwOyBpIDwgTDJfUEFHRVRBQkxF
X0VOVFJJRVM7IGkrKyApCiAgICAgICAgICAgICAgICAgbDJlX3dyaXRlKHBs
MmUgKyBpLApAQCAtNTMyNCw3ICs1MzI2LDcgQEAgaW50IG1hcF9wYWdlc190
b194ZW4oCiAKICAgICAgICAgcGwyZSA9IHZpcnRfdG9feGVuX2wyZSh2aXJ0
KTsKICAgICAgICAgaWYgKCAhcGwyZSApCi0gICAgICAgICAgICByZXR1cm4g
LUVOT01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OwogCiAgICAgICAgIGlm
ICggKCgoKHZpcnQgPj4gUEFHRV9TSElGVCkgfCBtZm5feChtZm4pKSAmCiAg
ICAgICAgICAgICAgICAoKDF1IDw8IFBBR0VUQUJMRV9PUkRFUikgLSAxKSkg
PT0gMCkgJiYKQEAgLTUzNjcsNyArNTM2OSw3IEBAIGludCBtYXBfcGFnZXNf
dG9feGVuKAogICAgICAgICAgICAgewogICAgICAgICAgICAgICAgIHBsMWUg
PSB2aXJ0X3RvX3hlbl9sMWUodmlydCk7CiAgICAgICAgICAgICAgICAgaWYg
KCBwbDFlID09IE5VTEwgKQotICAgICAgICAgICAgICAgICAgICByZXR1cm4g
LUVOT01FTTsKKyAgICAgICAgICAgICAgICAgICAgZ290byBvdXQ7CiAgICAg
ICAgICAgICB9CiAgICAgICAgICAgICBlbHNlIGlmICggbDJlX2dldF9mbGFn
cygqcGwyZSkgJiBfUEFHRV9QU0UgKQogICAgICAgICAgICAgewpAQCAtNTM5
NCw3ICs1Mzk2LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAKICAgICAg
ICAgICAgICAgICBwbDFlID0gYWxsb2NfeGVuX3BhZ2V0YWJsZSgpOwogICAg
ICAgICAgICAgICAgIGlmICggcGwxZSA9PSBOVUxMICkKLSAgICAgICAgICAg
ICAgICAgICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAgICAgICAg
IGdvdG8gb3V0OwogCiAgICAgICAgICAgICAgICAgZm9yICggaSA9IDA7IGkg
PCBMMV9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAgICAgICAgICAgICAg
ICAgICAgbDFlX3dyaXRlKCZwbDFlW2ldLApAQCAtNTUzOCw3ICs1NTQwLDEw
IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogCiAjdW5kZWYgZmx1c2hfZmxh
Z3MKIAotICAgIHJldHVybiAwOworICAgIHJjID0gMDsKKworIG91dDoKKyAg
ICByZXR1cm4gcmM7CiB9CiAKIGludCBwb3B1bGF0ZV9wdF9yYW5nZSh1bnNp
Z25lZCBsb25nIHZpcnQsIHVuc2lnbmVkIGxvbmcgbnJfbWZucykKLS0gCjIu
MjUuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.13/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Disposition: attachment;
 filename="xsa345-4.13/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Transfer-Encoding: base64

RnJvbSA5ZjZmMzViODMzZDI5NWFjYWFhMmQ4ZmY4Y2YzMDliZjY4OGNmZDUw
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQyICswMDAwClN1YmplY3Q6IFtQQVRDSCAyLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbW9kaWZ5X3hlbl9tYXBwaW5ncyB0byBoYXZlIG9uZSBleGl0CiBwYXRo
CgpXZSB3aWxsIHNvb24gbmVlZCB0byBwZXJmb3JtIGNsZWFuLXVwcyBiZWZv
cmUgcmV0dXJuaW5nLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5OiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogV2VpIExpdSA8
d2VpLmxpdTJAY2l0cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogSG9uZ3lhbiBY
aWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEdlb3Jn
ZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KQWNrZWQtYnk6
IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4vYXJj
aC94ODYvbW0uYyB8IDEyICsrKysrKysrKy0tLQogMSBmaWxlIGNoYW5nZWQs
IDkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4
IDEzM2EzOTM4NzUuLmFmNzI2ZDMyNzQgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC01NTcwLDYg
KzU1NzAsNyBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBs
b25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAg
IGwxX3BnZW50cnlfdCAqcGwxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CiAg
ICAgdW5zaWduZWQgbG9uZyB2ID0gczsKKyAgICBpbnQgcmMgPSAtRU5PTUVN
OwogCiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBi
ZSBhbHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxf
UEFHRV9SV3xfUEFHRV9QUkVTRU5UKQpAQCAtNTYxMSw3ICs1NjEyLDggQEAg
aW50IG1vZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNp
Z25lZCBsb25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICAgICAgICAgIC8q
IFBBR0UxR0I6IHNoYXR0ZXIgdGhlIHN1cGVycGFnZSBhbmQgZmFsbCB0aHJv
dWdoLiAqLwogICAgICAgICAgICAgcGwyZSA9IGFsbG9jX3hlbl9wYWdldGFi
bGUoKTsKICAgICAgICAgICAgIGlmICggIXBsMmUgKQotICAgICAgICAgICAg
ICAgIHJldHVybiAtRU5PTUVNOworICAgICAgICAgICAgICAgIGdvdG8gb3V0
OworCiAgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8IEwyX1BBR0VUQUJM
RV9FTlRSSUVTOyBpKysgKQogICAgICAgICAgICAgICAgIGwyZV93cml0ZShw
bDJlICsgaSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgbDJlX2Zyb21f
cGZuKGwzZV9nZXRfcGZuKCpwbDNlKSArCkBAIC01NjY2LDcgKzU2NjgsOCBA
QCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVu
c2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAgICAgICAgICAg
ICAgIC8qIFBTRTogc2hhdHRlciB0aGUgc3VwZXJwYWdlIGFuZCB0cnkgYWdh
aW4uICovCiAgICAgICAgICAgICAgICAgcGwxZSA9IGFsbG9jX3hlbl9wYWdl
dGFibGUoKTsKICAgICAgICAgICAgICAgICBpZiAoICFwbDFlICkKLSAgICAg
ICAgICAgICAgICAgICAgcmV0dXJuIC1FTk9NRU07CisgICAgICAgICAgICAg
ICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgICAgICAgICAgZm9yICggaSA9
IDA7IGkgPCBMMV9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAgICAgICAg
ICAgICAgICAgICAgbDFlX3dyaXRlKCZwbDFlW2ldLAogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgbDFlX2Zyb21fcGZuKGwyZV9nZXRfcGZuKCpw
bDJlKSArIGksCkBAIC01Nzk1LDcgKzU3OTgsMTAgQEAgaW50IG1vZGlmeV94
ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25nIGUs
IHVuc2lnbmVkIGludCBuZikKICAgICBmbHVzaF9hcmVhKE5VTEwsIEZMVVNI
X1RMQl9HTE9CQUwpOwogCiAjdW5kZWYgRkxBR1NfTUFTSwotICAgIHJldHVy
biAwOworICAgIHJjID0gMDsKKworIG91dDoKKyAgICByZXR1cm4gcmM7CiB9
CiAKICN1bmRlZiBmbHVzaF9hcmVhCi0tIAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.13/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Disposition: attachment;
 filename="xsa345-4.13/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Transfer-Encoding: base64

RnJvbSAwZmY5YTg0NTNkYzQ3Y2Q0N2VlZTk2NTlkNTkxNmFmYjUwOTRlODcx
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KRGF0ZTogU2F0LCAxMSBKYW4gMjAyMCAy
MTo1Nzo0MyArMDAwMApTdWJqZWN0OiBbUEFUQ0ggMy8zXSB4ODYvbW06IFBy
ZXZlbnQgc29tZSByYWNlcyBpbiBoeXBlcnZpc29yIG1hcHBpbmcgdXBkYXRl
cwoKbWFwX3BhZ2VzX3RvX3hlbiB3aWxsIGF0dGVtcHQgdG8gY29hbGVzY2Ug
bWFwcGluZ3MgaW50byAyTWlCIGFuZCAxR2lCCnN1cGVycGFnZXMgaWYgcG9z
c2libGUsIHRvIG1heGltaXplIFRMQiBlZmZpY2llbmN5LiAgVGhpcyBtZWFu
cyBib3RoCnJlcGxhY2luZyBzdXBlcnBhZ2UgZW50cmllcyB3aXRoIHNtYWxs
ZXIgZW50cmllcywgYW5kIHJlcGxhY2luZwpzbWFsbGVyIGVudHJpZXMgd2l0
aCBzdXBlcnBhZ2VzLgoKVW5mb3J0dW5hdGVseSwgd2hpbGUgc29tZSBwb3Rl
bnRpYWwgcmFjZXMgYXJlIGhhbmRsZWQgY29ycmVjdGx5LApvdGhlcnMgYXJl
IG5vdC4gIFRoZXNlIGluY2x1ZGU6CgoxLiBXaGVuIG9uZSBwcm9jZXNzb3Ig
bW9kaWZpZXMgYSBzdWItc3VwZXJwYWdlIG1hcHBpbmcgd2hpbGUgYW5vdGhl
cgpwcm9jZXNzb3IgcmVwbGFjZXMgdGhlIGVudGlyZSByYW5nZSB3aXRoIGEg
c3VwZXJwYWdlLgoKVGFrZSB0aGUgZm9sbG93aW5nIGV4YW1wbGU6CgpTdXBw
b3NlIEwzW05dIHBvaW50cyB0byBMMi4gIEFuZCBzdXBwb3NlIHdlIGhhdmUg
dHdvIHByb2Nlc3NvcnMsIEEgYW5kCkIuCgoqIEEgd2Fsa3MgdGhlIHBhZ2V0
YWJsZXMsIGdldCBhIHBvaW50ZXIgdG8gTDIuCiogQiByZXBsYWNlcyBMM1tO
XSB3aXRoIGEgMUdpQiBtYXBwaW5nLgoqIEIgRnJlZXMgTDIKKiBBIHdyaXRl
cyBMMltNXSAjCgpUaGlzIGlzIHJhY2UgZXhhY2VyYmF0ZWQgYnkgdGhlIGZh
Y3QgdGhhdCB2aXJ0X3RvX3hlbl9sWzIxXWUgZG9lc24ndApoYW5kbGUgaGln
aGVyLWxldmVsIHN1cGVycGFnZXMgcHJvcGVybHk6IElmIHlvdSBjYWxsIHZp
cnRfeGVuX3RvX2wyZQpvbiBhIHZpcnR1YWwgYWRkcmVzcyB3aXRoaW4gYW4g
TDMgc3VwZXJwYWdlLCB5b3UnbGwgZWl0aGVyIGhpdCBhIEJVRygpCihtb3N0
IGxpa2VseSksIG9yIGdldCBhIHBvaW50ZXIgaW50byB0aGUgbWlkZGxlIG9m
IGEgZGF0YSBwYWdlOyBzYW1lCndpdGggdmlydF94ZW5fdG9fbDEgb24gYSB2
aXJ0dWFsIGFkZHJlc3Mgd2l0aGluIGVpdGhlciBhbiBMMyBvciBMMgpzdXBl
cnBhZ2UuCgpTbyB0YWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToKCiogQSBy
ZWFkcyBwbDNlIGFuZCBkaXNjb3ZlcnMgaXQgdG8gcG9pbnQgdG8gYW4gTDIu
CiogQiByZXBsYWNlcyBMM1tOXSB3aXRoIGEgMUdpQiBtYXBwaW5nCiogQSBj
YWxscyB2aXJ0X3RvX3hlbl9sMmUoKSBhbmQgaGl0cyB0aGUgQlVHX09OKCkg
IwoKMi4gV2hlbiB0d28gcHJvY2Vzc29ycyBzaW11bHRhbmVvdXNseSB0cnkg
dG8gcmVwbGFjZSBhIHN1Yi1zdXBlcnBhZ2UKbWFwcGluZyB3aXRoIGEgc3Vw
ZXJwYWdlIG1hcHBpbmcuCgpUYWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToK
ClN1cHBvc2UgTDNbTl0gcG9pbnRzIHRvIEwyLiAgQW5kIHN1cHBvc2Ugd2Ug
aGF2ZSB0d28gcHJvY2Vzc29ycywgQSBhbmQgQiwKYm90aCB0cnlpbmcgdG8g
cmVwbGFjZSBMM1tOXSB3aXRoIGEgc3VwZXJwYWdlLgoKKiBBIHdhbGtzIHRo
ZSBwYWdldGFibGVzLCBnZXQgYSBwb2ludGVyIHRvIHBsM2UsIGFuZCB0YWtl
cyBhIGNvcHkgb2wzZSBwb2ludGluZyB0byBMMi4KKiBCIHdhbGtzIHRoZSBw
YWdldGFibGVzLCBnZXRzIGEgcG9pbnRyZSB0byBwbDNlLCBhbmQgdGFrZXMg
YSBjb3B5IG9sM2UgcG9pbnRpbmcgdG8gTDIuCiogQSB3cml0ZXMgdGhlIG5l
dyB2YWx1ZSBpbnRvIEwzW05dCiogQiB3cml0ZXMgdGhlIG5ldyB2YWx1ZSBp
bnRvIEwzW05dCiogQSByZWN1cnNpdmVseSBmcmVlcyBhbGwgdGhlIEwxJ3Mg
dW5kZXIgTDIsIHRoZW4gZnJlZXMgTDIKKiBCIHJlY3Vyc2l2ZWx5IGRvdWJs
ZS1mcmVlcyBhbGwgdGhlIEwxJ3MgdW5kZXIgTDIsIHRoZW4gZG91YmxlLWZy
ZWVzIEwyICMKCkZpeCB0aGlzIGJ5IGdyYWJiaW5nIGEgbG9jayBmb3IgdGhl
IGVudGlyZXR5IG9mIHRoZSBtYXBwaW5nIHVwZGF0ZQpvcGVyYXRpb24uCgpS
YXRoZXIgdGhhbiBncmFiYmluZyBtYXBfcGdkaXJfbG9jayBmb3IgdGhlIGVu
dGlyZSBvcGVyYXRpb24sIGhvd2V2ZXIsCnJlcHVycG9zZSB0aGUgUEdUX2xv
Y2tlZCBiaXQgZnJvbSBMMydzIHBhZ2UtPnR5cGVfaW5mbyBhcyBhIGxvY2su
ClRoaXMgbWVhbnMgdGhhdCByYXRoZXIgdGhhbiBsb2NraW5nIHRoZSBlbnRp
cmUgYWRkcmVzcyBzcGFjZSwgd2UKIm9ubHkiIGxvY2sgYSBzaW5nbGUgNTEy
R2lCIGNodW5rIG9mIGh5cGVydmlzb3IgYWRkcmVzcyBzcGFjZSBhdCBhCnRp
bWUuCgpUaGVyZSB3YXMgYSBwcm9wb3NhbCBmb3IgYSBsb2NrLWFuZC1yZXZl
cmlmeSBhcHByb2FjaCwgd2hlcmUgd2Ugd2Fsawp0aGUgcGFnZXRhYmxlcyB0
byB0aGUgcG9pbnQgd2hlcmUgd2UgZGVjaWRlIHdoYXQgdG8gZG87IHRoZW4g
Z3JhYiB0aGUKbWFwX3BnZGlyX2xvY2ssIHJlLXZlcmlmeSB0aGUgaW5mb3Jt
YXRpb24gd2UgY29sbGVjdGVkIHdpdGhvdXQgdGhlCmxvY2ssIGFuZCBmaW5h
bGx5IG1ha2UgdGhlIGNoYW5nZSAoc3RhcnRpbmcgb3ZlciBhZ2FpbiBpZiBh
bnl0aGluZyBoYWQKY2hhbmdlZCkuICBXaXRob3V0IGJlaW5nIGFibGUgdG8g
Z3VhcmFudGVlIHRoYXQgdGhlIEwyIHRhYmxlIHdhc24ndApmcmVlZCwgaG93
ZXZlciwgdGhhdCBtZWFucyBldmVyeSByZWFkIHdvdWxkIG5lZWQgdG8gYmUg
Y29uc2lkZXJlZApwb3RlbnRpYWxseSB1bnNhZmUuICBUaGlua2luZyBjYXJl
ZnVsbHkgYWJvdXQgdGhhdCBpcyBwcm9iYWJseQpzb21ldGhpbmcgdGhhdCB3
YW50cyB0byBiZSBkb25lIG9uIHB1YmxpYywgbm90IHVuZGVyIHRpbWUgcHJl
c3N1cmUuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5
OiBIb25neWFuIFhpYSA8aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9m
Zi1ieTogSG9uZ3lhbiBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25l
ZC1vZmYtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNl
LmNvbT4KLS0tCiB4ZW4vYXJjaC94ODYvbW0uYyB8IDkyICsrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tCiAxIGZpbGUg
Y2hhbmdlZCwgODkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9t
bS5jCmluZGV4IGFmNzI2ZDMyNzQuLmQ2YTA3NjFmNDMgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBA
IC0yMTY3LDYgKzIxNjcsNTAgQEAgdm9pZCBwYWdlX3VubG9jayhzdHJ1Y3Qg
cGFnZV9pbmZvICpwYWdlKQogICAgIGN1cnJlbnRfbG9ja2VkX3BhZ2Vfc2V0
KE5VTEwpOwogfQogCisvKgorICogTDMgdGFibGUgbG9ja3M6CisgKgorICog
VXNlZCBmb3Igc2VyaWFsaXphdGlvbiBpbiBtYXBfcGFnZXNfdG9feGVuKCkg
YW5kIG1vZGlmeV94ZW5fbWFwcGluZ3MoKS4KKyAqCisgKiBGb3IgWGVuIFBU
IHBhZ2VzLCB0aGUgcGFnZS0+dS5pbnVzZS50eXBlX2luZm8gaXMgdW51c2Vk
IGFuZCBpdCBpcyBzYWZlIHRvCisgKiByZXVzZSB0aGUgUEdUX2xvY2tlZCBm
bGFnLiBUaGlzIGxvY2sgaXMgdGFrZW4gb25seSB3aGVuIHdlIG1vdmUgZG93
biB0byBMMworICogdGFibGVzIGFuZCBiZWxvdywgc2luY2UgTDQgKGFuZCBh
Ym92ZSwgZm9yIDUtbGV2ZWwgcGFnaW5nKSBpcyBzdGlsbCBnbG9iYWxseQor
ICogcHJvdGVjdGVkIGJ5IG1hcF9wZ2Rpcl9sb2NrLgorICoKKyAqIFBWIE1N
VSB1cGRhdGUgaHlwZXJjYWxscyBjYWxsIG1hcF9wYWdlc190b194ZW4gd2hp
bGUgaG9sZGluZyBhIHBhZ2UncyBwYWdlX2xvY2soKS4KKyAqIFRoaXMgaGFz
IHR3byBpbXBsaWNhdGlvbnM6CisgKiAtIFdlIGNhbm5vdCByZXVzZSByZXVz
ZSBjdXJyZW50X2xvY2tlZF9wYWdlXyogZm9yIGRlYnVnZ2luZworICogLSBU
byBhdm9pZCB0aGUgY2hhbmNlIG9mIGRlYWRsb2NrLCBldmVuIGZvciBkaWZm
ZXJlbnQgcGFnZXMsIHdlCisgKiAgIG11c3QgbmV2ZXIgZ3JhYiBwYWdlX2xv
Y2soKSBhZnRlciBncmFiYmluZyBsM3RfbG9jaygpLiAgVGhpcworICogICBp
bmNsdWRlcyBhbnkgcGFnZV9sb2NrKCktYmFzZWQgbG9ja3MsIHN1Y2ggYXMK
KyAqICAgbWVtX3NoYXJpbmdfcGFnZV9sb2NrKCkuCisgKgorICogQWxzbyBu
b3RlIHRoYXQgd2UgZ3JhYiB0aGUgbWFwX3BnZGlyX2xvY2sgd2hpbGUgaG9s
ZGluZyB0aGUKKyAqIGwzdF9sb2NrKCksIHNvIHRvIGF2b2lkIGRlYWRsb2Nr
IHdlIG11c3QgYXZvaWQgZ3JhYmJpbmcgdGhlbSBpbgorICogcmV2ZXJzZSBv
cmRlci4KKyAqLworc3RhdGljIHZvaWQgbDN0X2xvY2soc3RydWN0IHBhZ2Vf
aW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54OworCisg
ICAgZG8geworICAgICAgICB3aGlsZSAoICh4ID0gcGFnZS0+dS5pbnVzZS50
eXBlX2luZm8pICYgUEdUX2xvY2tlZCApCisgICAgICAgICAgICBjcHVfcmVs
YXgoKTsKKyAgICAgICAgbnggPSB4IHwgUEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggY21weGNoZygmcGFnZS0+dS5pbnVzZS50eXBlX2luZm8sIHgsIG54
KSAhPSB4ICk7Cit9CisKK3N0YXRpYyB2b2lkIGwzdF91bmxvY2soc3RydWN0
IHBhZ2VfaW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54
LCB5ID0gcGFnZS0+dS5pbnVzZS50eXBlX2luZm87CisKKyAgICBkbyB7Cisg
ICAgICAgIHggPSB5OworICAgICAgICBCVUdfT04oISh4ICYgUEdUX2xvY2tl
ZCkpOworICAgICAgICBueCA9IHggJiB+UEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggKHkgPSBjbXB4Y2hnKCZwYWdlLT51LmludXNlLnR5cGVfaW5mbywg
eCwgbngpKSAhPSB4ICk7Cit9CisKICNpZmRlZiBDT05GSUdfUFYKIC8qCiAg
KiBQVEUgZmxhZ3MgdGhhdCBhIGd1ZXN0IG1heSBjaGFuZ2Ugd2l0aG91dCBy
ZS12YWxpZGF0aW5nIHRoZSBQVEUuCkBAIC01MTc3LDYgKzUyMjEsMjMgQEAg
bDFfcGdlbnRyeV90ICp2aXJ0X3RvX3hlbl9sMWUodW5zaWduZWQgbG9uZyB2
KQogICAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2FyZWFfbG9jYWwo
KGNvbnN0IHZvaWQgKil2LCBmKSA6IFwKICAgICAgICAgICAgICAgICAgICAg
ICAgICBmbHVzaF9hcmVhX2FsbCgoY29uc3Qgdm9pZCAqKXYsIGYpKQogCisj
ZGVmaW5lIEwzVF9JTklUKHBhZ2UpIChwYWdlKSA9IFpFUk9fQkxPQ0tfUFRS
CisKKyNkZWZpbmUgTDNUX0xPQ0socGFnZSkgICAgICAgIFwKKyAgICBkbyB7
ICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBsb2NraW5n
ICkgICAgICAgIFwKKyAgICAgICAgICAgIGwzdF9sb2NrKHBhZ2UpOyAgIFwK
KyAgICB9IHdoaWxlICggZmFsc2UgKQorCisjZGVmaW5lIEwzVF9VTkxPQ0so
cGFnZSkgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgZG8geyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisg
ICAgICAgIGlmICggbG9ja2luZyAmJiAocGFnZSkgIT0gWkVST19CTE9DS19Q
VFIgKSBcCisgICAgICAgIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCisgICAgICAgICAgICBsM3RfdW5sb2NrKHBhZ2Up
OyAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICAocGFnZSkg
PSBaRVJPX0JMT0NLX1BUUjsgICAgICAgICAgICAgICBcCisgICAgICAgIH0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisg
ICAgfSB3aGlsZSAoIGZhbHNlICkKKwogaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgdW5zaWduZWQgbG9uZyB2aXJ0LAogICAgIG1mbl90IG1mbiwKQEAg
LTUxODgsNiArNTI0OSw3IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAg
IGwxX3BnZW50cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQg
IGk7CiAgICAgaW50IHJjID0gLUVOT01FTTsKKyAgICBzdHJ1Y3QgcGFnZV9p
bmZvICpjdXJyZW50X2wzcGFnZTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhv
bGRmKSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50
IG9fID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUyMDMsMTMg
KzUyNjUsMjAgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgfSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKIH0gd2hp
bGUgKDApCiAKKyAgICBMM1RfSU5JVChjdXJyZW50X2wzcGFnZSk7CisKICAg
ICB3aGlsZSAoIG5yX21mbnMgIT0gMCApCiAgICAgewotICAgICAgICBsM19w
Z2VudHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7
CisgICAgICAgIGwzX3BnZW50cnlfdCAqcGwzZSwgb2wzZTsKIAorICAgICAg
ICBMM1RfVU5MT0NLKGN1cnJlbnRfbDNwYWdlKTsKKworICAgICAgICBwbDNl
ID0gdmlydF90b194ZW5fbDNlKHZpcnQpOwogICAgICAgICBpZiAoICFwbDNl
ICkKICAgICAgICAgICAgIGdvdG8gb3V0OwogCisgICAgICAgIGN1cnJlbnRf
bDNwYWdlID0gdmlydF90b19wYWdlKHBsM2UpOworICAgICAgICBMM1RfTE9D
SyhjdXJyZW50X2wzcGFnZSk7CiAgICAgICAgIG9sM2UgPSAqcGwzZTsKIAog
ICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAmJgpAQCAtNTU0Myw2ICs1
NjEyLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgcmMgPSAwOwog
CiAgb3V0OgorICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3BhZ2UpOwogICAg
IHJldHVybiByYzsKIH0KIApAQCAtNTU3MSw2ICs1NjQxLDcgQEAgaW50IG1v
ZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBs
b25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICB1bnNpZ25lZCBpbnQgIGk7
CiAgICAgdW5zaWduZWQgbG9uZyB2ID0gczsKICAgICBpbnQgcmMgPSAtRU5P
TUVNOworICAgIHN0cnVjdCBwYWdlX2luZm8gKmN1cnJlbnRfbDNwYWdlOwog
CiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBiZSBh
bHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxfUEFH
RV9SV3xfUEFHRV9QUkVTRU5UKQpAQCAtNTU3OSwxMSArNTY1MCwyMiBAQCBp
bnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2ln
bmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAgIEFTU0VSVChJU19B
TElHTkVEKHMsIFBBR0VfU0laRSkpOwogICAgIEFTU0VSVChJU19BTElHTkVE
KGUsIFBBR0VfU0laRSkpOwogCisgICAgTDNUX0lOSVQoY3VycmVudF9sM3Bh
Z2UpOworCiAgICAgd2hpbGUgKCB2IDwgZSApCiAgICAgewotICAgICAgICBs
M19wZ2VudHJ5X3QgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2Uodik7CisgICAg
ICAgIGwzX3BnZW50cnlfdCAqcGwzZTsKKworICAgICAgICBMM1RfVU5MT0NL
KGN1cnJlbnRfbDNwYWdlKTsKIAotICAgICAgICBpZiAoICFwbDNlIHx8ICEo
bDNlX2dldF9mbGFncygqcGwzZSkgJiBfUEFHRV9QUkVTRU5UKSApCisgICAg
ICAgIHBsM2UgPSB2aXJ0X3RvX3hlbl9sM2Uodik7CisgICAgICAgIGlmICgg
IXBsM2UgKQorICAgICAgICAgICAgZ290byBvdXQ7CisKKyAgICAgICAgY3Vy
cmVudF9sM3BhZ2UgPSB2aXJ0X3RvX3BhZ2UocGwzZSk7CisgICAgICAgIEwz
VF9MT0NLKGN1cnJlbnRfbDNwYWdlKTsKKworICAgICAgICBpZiAoICEobDNl
X2dldF9mbGFncygqcGwzZSkgJiBfUEFHRV9QUkVTRU5UKSApCiAgICAgICAg
IHsKICAgICAgICAgICAgIC8qIENvbmZpcm0gdGhlIGNhbGxlciBpc24ndCB0
cnlpbmcgdG8gY3JlYXRlIG5ldyBtYXBwaW5ncy4gKi8KICAgICAgICAgICAg
IEFTU0VSVCghKG5mICYgX1BBR0VfUFJFU0VOVCkpOwpAQCAtNTgwMSw5ICs1
ODgzLDEzIEBAIGludCBtb2RpZnlfeGVuX21hcHBpbmdzKHVuc2lnbmVkIGxv
bmcgcywgdW5zaWduZWQgbG9uZyBlLCB1bnNpZ25lZCBpbnQgbmYpCiAgICAg
cmMgPSAwOwogCiAgb3V0OgorICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3Bh
Z2UpOwogICAgIHJldHVybiByYzsKIH0KIAorI3VuZGVmIEwzVF9MT0NLCisj
dW5kZWYgTDNUX1VOTE9DSworCiAjdW5kZWYgZmx1c2hfYXJlYQogCiBpbnQg
ZGVzdHJveV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25l
ZCBsb25nIGUpCi0tIAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.14/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Disposition: attachment;
 filename="xsa345-4.14/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Transfer-Encoding: base64

RnJvbSBlOWM1YTllZTVlMmU4ODhmOGJiMDVjZjBhMzUzZWQ2MzUzMDBhYmUz
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQxICswMDAwClN1YmplY3Q6IFtQQVRDSCAxLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbWFwX3BhZ2VzX3RvX3hlbiB0byBoYXZlIG9ubHkgYSBzaW5nbGUKIGV4
aXQgcGF0aAoKV2Ugd2lsbCBzb29uIG5lZWQgdG8gcGVyZm9ybSBjbGVhbi11
cHMgYmVmb3JlIHJldHVybmluZy4KCk5vIGZ1bmN0aW9uYWwgY2hhbmdlLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDUuCgpSZXBvcnRlZC1ieTogSG9uZ3lh
biBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IFdl
aSBMaXUgPHdlaS5saXUyQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEhv
bmd5YW4gWGlhIDxob25neXhpYUBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+CkFj
a2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+Ci0tLQog
eGVuL2FyY2gveDg2L21tLmMgfCAxNyArKysrKysrKysrKy0tLS0tLQogMSBm
aWxlIGNoYW5nZWQsIDExIGluc2VydGlvbnMoKyksIDYgZGVsZXRpb25zKC0p
CgpkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L21tLmMgYi94ZW4vYXJjaC94
ODYvbW0uYwppbmRleCA4MmJjNjc2NTUzLi4wM2Y2ZTZhYTYyIDEwMDY0NAot
LS0gYS94ZW4vYXJjaC94ODYvbW0uYworKysgYi94ZW4vYXJjaC94ODYvbW0u
YwpAQCAtNTA4OCw2ICs1MDg4LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgbDJfcGdlbnRyeV90ICpwbDJlLCBvbDJlOwogICAgIGwxX3BnZW50
cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CisgICAg
aW50IHJjID0gLUVOT01FTTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhvbGRm
KSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50IG9f
ID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUxMDgsNyArNTEw
OSw4IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAgICAgICBsM19wZ2Vu
dHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7CiAK
ICAgICAgICAgaWYgKCAhcGwzZSApCi0gICAgICAgICAgICByZXR1cm4gLUVO
T01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OworCiAgICAgICAgIG9sM2Ug
PSAqcGwzZTsKIAogICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAmJgpA
QCAtNTE5OCw3ICs1MjAwLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAK
ICAgICAgICAgICAgIGwydCA9IGFsbG9jX3hlbl9wYWdldGFibGUoKTsKICAg
ICAgICAgICAgIGlmICggbDJ0ID09IE5VTEwgKQotICAgICAgICAgICAgICAg
IHJldHVybiAtRU5PTUVNOworICAgICAgICAgICAgICAgIGdvdG8gb3V0Owog
CiAgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8IEwyX1BBR0VUQUJMRV9F
TlRSSUVTOyBpKysgKQogICAgICAgICAgICAgICAgIGwyZV93cml0ZShsMnQg
KyBpLApAQCAtNTIyNyw3ICs1MjI5LDcgQEAgaW50IG1hcF9wYWdlc190b194
ZW4oCiAKICAgICAgICAgcGwyZSA9IHZpcnRfdG9feGVuX2wyZSh2aXJ0KTsK
ICAgICAgICAgaWYgKCAhcGwyZSApCi0gICAgICAgICAgICByZXR1cm4gLUVO
T01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0OwogCiAgICAgICAgIGlmICgg
KCgoKHZpcnQgPj4gUEFHRV9TSElGVCkgfCBtZm5feChtZm4pKSAmCiAgICAg
ICAgICAgICAgICAoKDF1IDw8IFBBR0VUQUJMRV9PUkRFUikgLSAxKSkgPT0g
MCkgJiYKQEAgLTUyNzEsNyArNTI3Myw3IEBAIGludCBtYXBfcGFnZXNfdG9f
eGVuKAogICAgICAgICAgICAgewogICAgICAgICAgICAgICAgIHBsMWUgPSB2
aXJ0X3RvX3hlbl9sMWUodmlydCk7CiAgICAgICAgICAgICAgICAgaWYgKCBw
bDFlID09IE5VTEwgKQotICAgICAgICAgICAgICAgICAgICByZXR1cm4gLUVO
T01FTTsKKyAgICAgICAgICAgICAgICAgICAgZ290byBvdXQ7CiAgICAgICAg
ICAgICB9CiAgICAgICAgICAgICBlbHNlIGlmICggbDJlX2dldF9mbGFncygq
cGwyZSkgJiBfUEFHRV9QU0UgKQogICAgICAgICAgICAgewpAQCAtNTI5OSw3
ICs1MzAxLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAKICAgICAgICAg
ICAgICAgICBsMXQgPSBhbGxvY194ZW5fcGFnZXRhYmxlKCk7CiAgICAgICAg
ICAgICAgICAgaWYgKCBsMXQgPT0gTlVMTCApCi0gICAgICAgICAgICAgICAg
ICAgIHJldHVybiAtRU5PTUVNOworICAgICAgICAgICAgICAgICAgICBnb3Rv
IG91dDsKIAogICAgICAgICAgICAgICAgIGZvciAoIGkgPSAwOyBpIDwgTDFf
UEFHRVRBQkxFX0VOVFJJRVM7IGkrKyApCiAgICAgICAgICAgICAgICAgICAg
IGwxZV93cml0ZSgmbDF0W2ldLApAQCAtNTQ0NSw3ICs1NDQ3LDEwIEBAIGlu
dCBtYXBfcGFnZXNfdG9feGVuKAogCiAjdW5kZWYgZmx1c2hfZmxhZ3MKIAot
ICAgIHJldHVybiAwOworICAgIHJjID0gMDsKKworIG91dDoKKyAgICByZXR1
cm4gcmM7CiB9CiAKIGludCBwb3B1bGF0ZV9wdF9yYW5nZSh1bnNpZ25lZCBs
b25nIHZpcnQsIHVuc2lnbmVkIGxvbmcgbnJfbWZucykKLS0gCjIuMjUuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.14/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Disposition: attachment;
 filename="xsa345-4.14/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Transfer-Encoding: base64

RnJvbSA4NjQ1YWRiN2FjNjc5ZTVkZGM1YzM5ZTBjNWM5MThlNGEyYmE1Mzkx
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQyICswMDAwClN1YmplY3Q6IFtQQVRDSCAyLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbW9kaWZ5X3hlbl9tYXBwaW5ncyB0byBoYXZlIG9uZSBleGl0CiBwYXRo
CgpXZSB3aWxsIHNvb24gbmVlZCB0byBwZXJmb3JtIGNsZWFuLXVwcyBiZWZv
cmUgcmV0dXJuaW5nLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5OiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogV2VpIExpdSA8
d2VpLmxpdTJAY2l0cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogSG9uZ3lhbiBY
aWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEdlb3Jn
ZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KQWNrZWQtYnk6
IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4vYXJj
aC94ODYvbW0uYyB8IDEyICsrKysrKysrKy0tLQogMSBmaWxlIGNoYW5nZWQs
IDkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4
IDAzZjZlNmFhNjIuLjI0NjgzNDdhNDUgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC01NDc3LDYg
KzU0NzcsNyBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBs
b25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAg
IGwxX3BnZW50cnlfdCAqcGwxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CiAg
ICAgdW5zaWduZWQgbG9uZyB2ID0gczsKKyAgICBpbnQgcmMgPSAtRU5PTUVN
OwogCiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBi
ZSBhbHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxf
UEFHRV9ESVJUWXxfUEFHRV9BQ0NFU1NFRHxfUEFHRV9SV3xfUEFHRV9QUkVT
RU5UKQpAQCAtNTUyMCw3ICs1NTIxLDggQEAgaW50IG1vZGlmeV94ZW5fbWFw
cGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25nIGUsIHVuc2ln
bmVkIGludCBuZikKICAgICAgICAgICAgIC8qIFBBR0UxR0I6IHNoYXR0ZXIg
dGhlIHN1cGVycGFnZSBhbmQgZmFsbCB0aHJvdWdoLiAqLwogICAgICAgICAg
ICAgbDJ0ID0gYWxsb2NfeGVuX3BhZ2V0YWJsZSgpOwogICAgICAgICAgICAg
aWYgKCAhbDJ0ICkKLSAgICAgICAgICAgICAgICByZXR1cm4gLUVOT01FTTsK
KyAgICAgICAgICAgICAgICBnb3RvIG91dDsKKwogICAgICAgICAgICAgZm9y
ICggaSA9IDA7IGkgPCBMMl9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAg
ICAgICAgICAgICAgICBsMmVfd3JpdGUobDJ0ICsgaSwKICAgICAgICAgICAg
ICAgICAgICAgICAgICAgbDJlX2Zyb21fcGZuKGwzZV9nZXRfcGZuKCpwbDNl
KSArCkBAIC01NTc3LDcgKzU1NzksOCBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBw
aW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWdu
ZWQgaW50IG5mKQogICAgICAgICAgICAgICAgIC8qIFBTRTogc2hhdHRlciB0
aGUgc3VwZXJwYWdlIGFuZCB0cnkgYWdhaW4uICovCiAgICAgICAgICAgICAg
ICAgbDF0ID0gYWxsb2NfeGVuX3BhZ2V0YWJsZSgpOwogICAgICAgICAgICAg
ICAgIGlmICggIWwxdCApCi0gICAgICAgICAgICAgICAgICAgIHJldHVybiAt
RU5PTUVNOworICAgICAgICAgICAgICAgICAgICBnb3RvIG91dDsKKwogICAg
ICAgICAgICAgICAgIGZvciAoIGkgPSAwOyBpIDwgTDFfUEFHRVRBQkxFX0VO
VFJJRVM7IGkrKyApCiAgICAgICAgICAgICAgICAgICAgIGwxZV93cml0ZSgm
bDF0W2ldLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbDFlX2Zy
b21fcGZuKGwyZV9nZXRfcGZuKCpwbDJlKSArIGksCkBAIC01NzEwLDcgKzU3
MTMsMTAgQEAgaW50IG1vZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9u
ZyBzLCB1bnNpZ25lZCBsb25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICBm
bHVzaF9hcmVhKE5VTEwsIEZMVVNIX1RMQl9HTE9CQUwpOwogCiAjdW5kZWYg
RkxBR1NfTUFTSwotICAgIHJldHVybiAwOworICAgIHJjID0gMDsKKworIG91
dDoKKyAgICByZXR1cm4gcmM7CiB9CiAKICN1bmRlZiBmbHVzaF9hcmVhCi0t
IAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345-4.14/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Disposition: attachment;
 filename="xsa345-4.14/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Transfer-Encoding: base64

RnJvbSA2YjAyMDQxOGQwNTU0ZDllYzZlYjIwMWY1MDc3NmE3MmRiNjc3Mzli
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KRGF0ZTogU2F0LCAxMSBKYW4gMjAyMCAy
MTo1Nzo0MyArMDAwMApTdWJqZWN0OiBbUEFUQ0ggMy8zXSB4ODYvbW06IFBy
ZXZlbnQgc29tZSByYWNlcyBpbiBoeXBlcnZpc29yIG1hcHBpbmcgdXBkYXRl
cwoKbWFwX3BhZ2VzX3RvX3hlbiB3aWxsIGF0dGVtcHQgdG8gY29hbGVzY2Ug
bWFwcGluZ3MgaW50byAyTWlCIGFuZCAxR2lCCnN1cGVycGFnZXMgaWYgcG9z
c2libGUsIHRvIG1heGltaXplIFRMQiBlZmZpY2llbmN5LiAgVGhpcyBtZWFu
cyBib3RoCnJlcGxhY2luZyBzdXBlcnBhZ2UgZW50cmllcyB3aXRoIHNtYWxs
ZXIgZW50cmllcywgYW5kIHJlcGxhY2luZwpzbWFsbGVyIGVudHJpZXMgd2l0
aCBzdXBlcnBhZ2VzLgoKVW5mb3J0dW5hdGVseSwgd2hpbGUgc29tZSBwb3Rl
bnRpYWwgcmFjZXMgYXJlIGhhbmRsZWQgY29ycmVjdGx5LApvdGhlcnMgYXJl
IG5vdC4gIFRoZXNlIGluY2x1ZGU6CgoxLiBXaGVuIG9uZSBwcm9jZXNzb3Ig
bW9kaWZpZXMgYSBzdWItc3VwZXJwYWdlIG1hcHBpbmcgd2hpbGUgYW5vdGhl
cgpwcm9jZXNzb3IgcmVwbGFjZXMgdGhlIGVudGlyZSByYW5nZSB3aXRoIGEg
c3VwZXJwYWdlLgoKVGFrZSB0aGUgZm9sbG93aW5nIGV4YW1wbGU6CgpTdXBw
b3NlIEwzW05dIHBvaW50cyB0byBMMi4gIEFuZCBzdXBwb3NlIHdlIGhhdmUg
dHdvIHByb2Nlc3NvcnMsIEEgYW5kCkIuCgoqIEEgd2Fsa3MgdGhlIHBhZ2V0
YWJsZXMsIGdldCBhIHBvaW50ZXIgdG8gTDIuCiogQiByZXBsYWNlcyBMM1tO
XSB3aXRoIGEgMUdpQiBtYXBwaW5nLgoqIEIgRnJlZXMgTDIKKiBBIHdyaXRl
cyBMMltNXSAjCgpUaGlzIGlzIHJhY2UgZXhhY2VyYmF0ZWQgYnkgdGhlIGZh
Y3QgdGhhdCB2aXJ0X3RvX3hlbl9sWzIxXWUgZG9lc24ndApoYW5kbGUgaGln
aGVyLWxldmVsIHN1cGVycGFnZXMgcHJvcGVybHk6IElmIHlvdSBjYWxsIHZp
cnRfeGVuX3RvX2wyZQpvbiBhIHZpcnR1YWwgYWRkcmVzcyB3aXRoaW4gYW4g
TDMgc3VwZXJwYWdlLCB5b3UnbGwgZWl0aGVyIGhpdCBhIEJVRygpCihtb3N0
IGxpa2VseSksIG9yIGdldCBhIHBvaW50ZXIgaW50byB0aGUgbWlkZGxlIG9m
IGEgZGF0YSBwYWdlOyBzYW1lCndpdGggdmlydF94ZW5fdG9fbDEgb24gYSB2
aXJ0dWFsIGFkZHJlc3Mgd2l0aGluIGVpdGhlciBhbiBMMyBvciBMMgpzdXBl
cnBhZ2UuCgpTbyB0YWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToKCiogQSBy
ZWFkcyBwbDNlIGFuZCBkaXNjb3ZlcnMgaXQgdG8gcG9pbnQgdG8gYW4gTDIu
CiogQiByZXBsYWNlcyBMM1tOXSB3aXRoIGEgMUdpQiBtYXBwaW5nCiogQSBj
YWxscyB2aXJ0X3RvX3hlbl9sMmUoKSBhbmQgaGl0cyB0aGUgQlVHX09OKCkg
IwoKMi4gV2hlbiB0d28gcHJvY2Vzc29ycyBzaW11bHRhbmVvdXNseSB0cnkg
dG8gcmVwbGFjZSBhIHN1Yi1zdXBlcnBhZ2UKbWFwcGluZyB3aXRoIGEgc3Vw
ZXJwYWdlIG1hcHBpbmcuCgpUYWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToK
ClN1cHBvc2UgTDNbTl0gcG9pbnRzIHRvIEwyLiAgQW5kIHN1cHBvc2Ugd2Ug
aGF2ZSB0d28gcHJvY2Vzc29ycywgQSBhbmQgQiwKYm90aCB0cnlpbmcgdG8g
cmVwbGFjZSBMM1tOXSB3aXRoIGEgc3VwZXJwYWdlLgoKKiBBIHdhbGtzIHRo
ZSBwYWdldGFibGVzLCBnZXQgYSBwb2ludGVyIHRvIHBsM2UsIGFuZCB0YWtl
cyBhIGNvcHkgb2wzZSBwb2ludGluZyB0byBMMi4KKiBCIHdhbGtzIHRoZSBw
YWdldGFibGVzLCBnZXRzIGEgcG9pbnRyZSB0byBwbDNlLCBhbmQgdGFrZXMg
YSBjb3B5IG9sM2UgcG9pbnRpbmcgdG8gTDIuCiogQSB3cml0ZXMgdGhlIG5l
dyB2YWx1ZSBpbnRvIEwzW05dCiogQiB3cml0ZXMgdGhlIG5ldyB2YWx1ZSBp
bnRvIEwzW05dCiogQSByZWN1cnNpdmVseSBmcmVlcyBhbGwgdGhlIEwxJ3Mg
dW5kZXIgTDIsIHRoZW4gZnJlZXMgTDIKKiBCIHJlY3Vyc2l2ZWx5IGRvdWJs
ZS1mcmVlcyBhbGwgdGhlIEwxJ3MgdW5kZXIgTDIsIHRoZW4gZG91YmxlLWZy
ZWVzIEwyICMKCkZpeCB0aGlzIGJ5IGdyYWJiaW5nIGEgbG9jayBmb3IgdGhl
IGVudGlyZXR5IG9mIHRoZSBtYXBwaW5nIHVwZGF0ZQpvcGVyYXRpb24uCgpS
YXRoZXIgdGhhbiBncmFiYmluZyBtYXBfcGdkaXJfbG9jayBmb3IgdGhlIGVu
dGlyZSBvcGVyYXRpb24sIGhvd2V2ZXIsCnJlcHVycG9zZSB0aGUgUEdUX2xv
Y2tlZCBiaXQgZnJvbSBMMydzIHBhZ2UtPnR5cGVfaW5mbyBhcyBhIGxvY2su
ClRoaXMgbWVhbnMgdGhhdCByYXRoZXIgdGhhbiBsb2NraW5nIHRoZSBlbnRp
cmUgYWRkcmVzcyBzcGFjZSwgd2UKIm9ubHkiIGxvY2sgYSBzaW5nbGUgNTEy
R2lCIGNodW5rIG9mIGh5cGVydmlzb3IgYWRkcmVzcyBzcGFjZSBhdCBhCnRp
bWUuCgpUaGVyZSB3YXMgYSBwcm9wb3NhbCBmb3IgYSBsb2NrLWFuZC1yZXZl
cmlmeSBhcHByb2FjaCwgd2hlcmUgd2Ugd2Fsawp0aGUgcGFnZXRhYmxlcyB0
byB0aGUgcG9pbnQgd2hlcmUgd2UgZGVjaWRlIHdoYXQgdG8gZG87IHRoZW4g
Z3JhYiB0aGUKbWFwX3BnZGlyX2xvY2ssIHJlLXZlcmlmeSB0aGUgaW5mb3Jt
YXRpb24gd2UgY29sbGVjdGVkIHdpdGhvdXQgdGhlCmxvY2ssIGFuZCBmaW5h
bGx5IG1ha2UgdGhlIGNoYW5nZSAoc3RhcnRpbmcgb3ZlciBhZ2FpbiBpZiBh
bnl0aGluZyBoYWQKY2hhbmdlZCkuICBXaXRob3V0IGJlaW5nIGFibGUgdG8g
Z3VhcmFudGVlIHRoYXQgdGhlIEwyIHRhYmxlIHdhc24ndApmcmVlZCwgaG93
ZXZlciwgdGhhdCBtZWFucyBldmVyeSByZWFkIHdvdWxkIG5lZWQgdG8gYmUg
Y29uc2lkZXJlZApwb3RlbnRpYWxseSB1bnNhZmUuICBUaGlua2luZyBjYXJl
ZnVsbHkgYWJvdXQgdGhhdCBpcyBwcm9iYWJseQpzb21ldGhpbmcgdGhhdCB3
YW50cyB0byBiZSBkb25lIG9uIHB1YmxpYywgbm90IHVuZGVyIHRpbWUgcHJl
c3N1cmUuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5
OiBIb25neWFuIFhpYSA8aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9m
Zi1ieTogSG9uZ3lhbiBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25l
ZC1vZmYtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNl
LmNvbT4KLS0tCiB4ZW4vYXJjaC94ODYvbW0uYyB8IDkyICsrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tCiAxIGZpbGUg
Y2hhbmdlZCwgODkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9t
bS5jCmluZGV4IDI0NjgzNDdhNDUuLjljNTViMmI5ZTMgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBA
IC0yMDg4LDYgKzIwODgsNTAgQEAgdm9pZCBwYWdlX3VubG9jayhzdHJ1Y3Qg
cGFnZV9pbmZvICpwYWdlKQogICAgIGN1cnJlbnRfbG9ja2VkX3BhZ2Vfc2V0
KE5VTEwpOwogfQogCisvKgorICogTDMgdGFibGUgbG9ja3M6CisgKgorICog
VXNlZCBmb3Igc2VyaWFsaXphdGlvbiBpbiBtYXBfcGFnZXNfdG9feGVuKCkg
YW5kIG1vZGlmeV94ZW5fbWFwcGluZ3MoKS4KKyAqCisgKiBGb3IgWGVuIFBU
IHBhZ2VzLCB0aGUgcGFnZS0+dS5pbnVzZS50eXBlX2luZm8gaXMgdW51c2Vk
IGFuZCBpdCBpcyBzYWZlIHRvCisgKiByZXVzZSB0aGUgUEdUX2xvY2tlZCBm
bGFnLiBUaGlzIGxvY2sgaXMgdGFrZW4gb25seSB3aGVuIHdlIG1vdmUgZG93
biB0byBMMworICogdGFibGVzIGFuZCBiZWxvdywgc2luY2UgTDQgKGFuZCBh
Ym92ZSwgZm9yIDUtbGV2ZWwgcGFnaW5nKSBpcyBzdGlsbCBnbG9iYWxseQor
ICogcHJvdGVjdGVkIGJ5IG1hcF9wZ2Rpcl9sb2NrLgorICoKKyAqIFBWIE1N
VSB1cGRhdGUgaHlwZXJjYWxscyBjYWxsIG1hcF9wYWdlc190b194ZW4gd2hp
bGUgaG9sZGluZyBhIHBhZ2UncyBwYWdlX2xvY2soKS4KKyAqIFRoaXMgaGFz
IHR3byBpbXBsaWNhdGlvbnM6CisgKiAtIFdlIGNhbm5vdCByZXVzZSByZXVz
ZSBjdXJyZW50X2xvY2tlZF9wYWdlXyogZm9yIGRlYnVnZ2luZworICogLSBU
byBhdm9pZCB0aGUgY2hhbmNlIG9mIGRlYWRsb2NrLCBldmVuIGZvciBkaWZm
ZXJlbnQgcGFnZXMsIHdlCisgKiAgIG11c3QgbmV2ZXIgZ3JhYiBwYWdlX2xv
Y2soKSBhZnRlciBncmFiYmluZyBsM3RfbG9jaygpLiAgVGhpcworICogICBp
bmNsdWRlcyBhbnkgcGFnZV9sb2NrKCktYmFzZWQgbG9ja3MsIHN1Y2ggYXMK
KyAqICAgbWVtX3NoYXJpbmdfcGFnZV9sb2NrKCkuCisgKgorICogQWxzbyBu
b3RlIHRoYXQgd2UgZ3JhYiB0aGUgbWFwX3BnZGlyX2xvY2sgd2hpbGUgaG9s
ZGluZyB0aGUKKyAqIGwzdF9sb2NrKCksIHNvIHRvIGF2b2lkIGRlYWRsb2Nr
IHdlIG11c3QgYXZvaWQgZ3JhYmJpbmcgdGhlbSBpbgorICogcmV2ZXJzZSBv
cmRlci4KKyAqLworc3RhdGljIHZvaWQgbDN0X2xvY2soc3RydWN0IHBhZ2Vf
aW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54OworCisg
ICAgZG8geworICAgICAgICB3aGlsZSAoICh4ID0gcGFnZS0+dS5pbnVzZS50
eXBlX2luZm8pICYgUEdUX2xvY2tlZCApCisgICAgICAgICAgICBjcHVfcmVs
YXgoKTsKKyAgICAgICAgbnggPSB4IHwgUEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggY21weGNoZygmcGFnZS0+dS5pbnVzZS50eXBlX2luZm8sIHgsIG54
KSAhPSB4ICk7Cit9CisKK3N0YXRpYyB2b2lkIGwzdF91bmxvY2soc3RydWN0
IHBhZ2VfaW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54
LCB5ID0gcGFnZS0+dS5pbnVzZS50eXBlX2luZm87CisKKyAgICBkbyB7Cisg
ICAgICAgIHggPSB5OworICAgICAgICBCVUdfT04oISh4ICYgUEdUX2xvY2tl
ZCkpOworICAgICAgICBueCA9IHggJiB+UEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggKHkgPSBjbXB4Y2hnKCZwYWdlLT51LmludXNlLnR5cGVfaW5mbywg
eCwgbngpKSAhPSB4ICk7Cit9CisKICNpZmRlZiBDT05GSUdfUFYKIC8qCiAg
KiBQVEUgZmxhZ3MgdGhhdCBhIGd1ZXN0IG1heSBjaGFuZ2Ugd2l0aG91dCBy
ZS12YWxpZGF0aW5nIHRoZSBQVEUuCkBAIC01MDc4LDYgKzUxMjIsMjMgQEAg
bDFfcGdlbnRyeV90ICp2aXJ0X3RvX3hlbl9sMWUodW5zaWduZWQgbG9uZyB2
KQogICAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2FyZWFfbG9jYWwo
KGNvbnN0IHZvaWQgKil2LCBmKSA6IFwKICAgICAgICAgICAgICAgICAgICAg
ICAgICBmbHVzaF9hcmVhX2FsbCgoY29uc3Qgdm9pZCAqKXYsIGYpKQogCisj
ZGVmaW5lIEwzVF9JTklUKHBhZ2UpIChwYWdlKSA9IFpFUk9fQkxPQ0tfUFRS
CisKKyNkZWZpbmUgTDNUX0xPQ0socGFnZSkgICAgICAgIFwKKyAgICBkbyB7
ICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBsb2NraW5n
ICkgICAgICAgIFwKKyAgICAgICAgICAgIGwzdF9sb2NrKHBhZ2UpOyAgIFwK
KyAgICB9IHdoaWxlICggZmFsc2UgKQorCisjZGVmaW5lIEwzVF9VTkxPQ0so
cGFnZSkgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgZG8geyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisg
ICAgICAgIGlmICggbG9ja2luZyAmJiAocGFnZSkgIT0gWkVST19CTE9DS19Q
VFIgKSBcCisgICAgICAgIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCisgICAgICAgICAgICBsM3RfdW5sb2NrKHBhZ2Up
OyAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICAocGFnZSkg
PSBaRVJPX0JMT0NLX1BUUjsgICAgICAgICAgICAgICBcCisgICAgICAgIH0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisg
ICAgfSB3aGlsZSAoIGZhbHNlICkKKwogaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgdW5zaWduZWQgbG9uZyB2aXJ0LAogICAgIG1mbl90IG1mbiwKQEAg
LTUwODksNiArNTE1MCw3IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAg
IGwxX3BnZW50cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQg
IGk7CiAgICAgaW50IHJjID0gLUVOT01FTTsKKyAgICBzdHJ1Y3QgcGFnZV9p
bmZvICpjdXJyZW50X2wzcGFnZTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhv
bGRmKSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50
IG9fID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUxMDQsMTMg
KzUxNjYsMjAgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgfSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKIH0gd2hp
bGUgKDApCiAKKyAgICBMM1RfSU5JVChjdXJyZW50X2wzcGFnZSk7CisKICAg
ICB3aGlsZSAoIG5yX21mbnMgIT0gMCApCiAgICAgewotICAgICAgICBsM19w
Z2VudHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7
CisgICAgICAgIGwzX3BnZW50cnlfdCAqcGwzZSwgb2wzZTsKIAorICAgICAg
ICBMM1RfVU5MT0NLKGN1cnJlbnRfbDNwYWdlKTsKKworICAgICAgICBwbDNl
ID0gdmlydF90b194ZW5fbDNlKHZpcnQpOwogICAgICAgICBpZiAoICFwbDNl
ICkKICAgICAgICAgICAgIGdvdG8gb3V0OwogCisgICAgICAgIGN1cnJlbnRf
bDNwYWdlID0gdmlydF90b19wYWdlKHBsM2UpOworICAgICAgICBMM1RfTE9D
SyhjdXJyZW50X2wzcGFnZSk7CiAgICAgICAgIG9sM2UgPSAqcGwzZTsKIAog
ICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAmJgpAQCAtNTQ1MCw2ICs1
NTE5LDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgcmMgPSAwOwog
CiAgb3V0OgorICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3BhZ2UpOwogICAg
IHJldHVybiByYzsKIH0KIApAQCAtNTQ3OCw2ICs1NTQ4LDcgQEAgaW50IG1v
ZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBs
b25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICB1bnNpZ25lZCBpbnQgIGk7
CiAgICAgdW5zaWduZWQgbG9uZyB2ID0gczsKICAgICBpbnQgcmMgPSAtRU5P
TUVNOworICAgIHN0cnVjdCBwYWdlX2luZm8gKmN1cnJlbnRfbDNwYWdlOwog
CiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBiZSBh
bHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxfUEFH
RV9ESVJUWXxfUEFHRV9BQ0NFU1NFRHxfUEFHRV9SV3xfUEFHRV9QUkVTRU5U
KQpAQCAtNTQ4NiwxMSArNTU1NywyMiBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBw
aW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWdu
ZWQgaW50IG5mKQogICAgIEFTU0VSVChJU19BTElHTkVEKHMsIFBBR0VfU0la
RSkpOwogICAgIEFTU0VSVChJU19BTElHTkVEKGUsIFBBR0VfU0laRSkpOwog
CisgICAgTDNUX0lOSVQoY3VycmVudF9sM3BhZ2UpOworCiAgICAgd2hpbGUg
KCB2IDwgZSApCiAgICAgewotICAgICAgICBsM19wZ2VudHJ5X3QgKnBsM2Ug
PSB2aXJ0X3RvX3hlbl9sM2Uodik7CisgICAgICAgIGwzX3BnZW50cnlfdCAq
cGwzZTsKKworICAgICAgICBMM1RfVU5MT0NLKGN1cnJlbnRfbDNwYWdlKTsK
IAotICAgICAgICBpZiAoICFwbDNlIHx8ICEobDNlX2dldF9mbGFncygqcGwz
ZSkgJiBfUEFHRV9QUkVTRU5UKSApCisgICAgICAgIHBsM2UgPSB2aXJ0X3Rv
X3hlbl9sM2Uodik7CisgICAgICAgIGlmICggIXBsM2UgKQorICAgICAgICAg
ICAgZ290byBvdXQ7CisKKyAgICAgICAgY3VycmVudF9sM3BhZ2UgPSB2aXJ0
X3RvX3BhZ2UocGwzZSk7CisgICAgICAgIEwzVF9MT0NLKGN1cnJlbnRfbDNw
YWdlKTsKKworICAgICAgICBpZiAoICEobDNlX2dldF9mbGFncygqcGwzZSkg
JiBfUEFHRV9QUkVTRU5UKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIC8q
IENvbmZpcm0gdGhlIGNhbGxlciBpc24ndCB0cnlpbmcgdG8gY3JlYXRlIG5l
dyBtYXBwaW5ncy4gKi8KICAgICAgICAgICAgIEFTU0VSVCghKG5mICYgX1BB
R0VfUFJFU0VOVCkpOwpAQCAtNTcxNiw5ICs1Nzk4LDEzIEBAIGludCBtb2Rp
ZnlfeGVuX21hcHBpbmdzKHVuc2lnbmVkIGxvbmcgcywgdW5zaWduZWQgbG9u
ZyBlLCB1bnNpZ25lZCBpbnQgbmYpCiAgICAgcmMgPSAwOwogCiAgb3V0Ogor
ICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3BhZ2UpOwogICAgIHJldHVybiBy
YzsKIH0KIAorI3VuZGVmIEwzVF9MT0NLCisjdW5kZWYgTDNUX1VOTE9DSwor
CiAjdW5kZWYgZmx1c2hfYXJlYQogCiBpbnQgZGVzdHJveV94ZW5fbWFwcGlu
Z3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25nIGUpCi0tIAoyLjI1
LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Disposition: attachment;
 filename="xsa345/0001-x86-mm-Refactor-map_pages_to_xen-to-have-only-a-sing.patch"
Content-Transfer-Encoding: base64

RnJvbSBkODllODRmMzgyYjYwNDVlMzExYTA2N2FmNmQyODY4MGU1YTQ0MGZm
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQxICswMDAwClN1YmplY3Q6IFtQQVRDSCAxLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbWFwX3BhZ2VzX3RvX3hlbiB0byBoYXZlIG9ubHkgYSBzaW5nbGUKIGV4
aXQgcGF0aAoKV2Ugd2lsbCBzb29uIG5lZWQgdG8gcGVyZm9ybSBjbGVhbi11
cHMgYmVmb3JlIHJldHVybmluZy4KCk5vIGZ1bmN0aW9uYWwgY2hhbmdlLgoK
VGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDUuCgpSZXBvcnRlZC1ieTogSG9uZ3lh
biBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IFdl
aSBMaXUgPHdlaS5saXUyQGNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEhv
bmd5YW4gWGlhIDxob25neXhpYUBhbWF6b24uY29tPgpTaWduZWQtb2ZmLWJ5
OiBHZW9yZ2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNpdHJpeC5jb20+CkFj
a2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+Ci0tLQog
eGVuL2FyY2gveDg2L21tLmMgfCAxOSArKysrKysrKysrKystLS0tLS0tCiAx
IGZpbGUgY2hhbmdlZCwgMTIgaW5zZXJ0aW9ucygrKSwgNyBkZWxldGlvbnMo
LSkKCmRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNo
L3g4Ni9tbS5jCmluZGV4IGQxY2ZjOGZiNGEuLmI5MDkwODNiYzIgMTAwNjQ0
Ci0tLSBhL3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9t
bS5jCkBAIC0yNzUsNyArMjc1LDcgQEAgc3RhdGljIGw0X3BnZW50cnlfdCBf
X3JlYWRfbW9zdGx5IHNwbGl0X2w0ZTsKICAqIE9yaWdpbmFsbHkgY2xvbmVk
IGZyb20gc2hhcmVfeGVuX3BhZ2Vfd2l0aF9ndWVzdCgpLCBqdXN0IHRvIGF2
b2lkIHNldHRpbmcKICAqIFBHQ194ZW5faGVhcCBvbiBub24taGVhcCAodHlw
aWNhbGx5KSBNTUlPIHBhZ2VzLiBPdGhlciBwaWVjZXMgZ290IGRyb3BwZWQK
ICAqIHNpbXBseSBiZWNhdXNlIHRoZXkncmUgbm90IG5lZWRlZCBpbiB0aGlz
IGNvbnRleHQuCi0gKi8gCisgKi8KIHN0YXRpYyB2b2lkIF9faW5pdCBhc3Np
Z25faW9fcGFnZShzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlKQogewogICAgIHNl
dF9ncGZuX2Zyb21fbWZuKG1mbl94KHBhZ2VfdG9fbWZuKHBhZ2UpKSwgSU5W
QUxJRF9NMlBfRU5UUlkpOwpAQCAtNTA3OSw2ICs1MDc5LDcgQEAgaW50IG1h
cF9wYWdlc190b194ZW4oCiAgICAgbDJfcGdlbnRyeV90ICpwbDJlLCBvbDJl
OwogICAgIGwxX3BnZW50cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25l
ZCBpbnQgIGk7CisgICAgaW50IHJjID0gLUVOT01FTTsKIAogI2RlZmluZSBm
bHVzaF9mbGFncyhvbGRmKSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAg
dW5zaWduZWQgaW50IG9fID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwK
QEAgLTUwOTksNyArNTEwMCw4IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAog
ICAgICAgICBsM19wZ2VudHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hl
bl9sM2UodmlydCk7CiAKICAgICAgICAgaWYgKCAhcGwzZSApCi0gICAgICAg
ICAgICByZXR1cm4gLUVOT01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0Owor
CiAgICAgICAgIG9sM2UgPSAqcGwzZTsKIAogICAgICAgICBpZiAoIGNwdV9o
YXNfcGFnZTFnYiAmJgpAQCAtNTE4OSw3ICs1MTkxLDcgQEAgaW50IG1hcF9w
YWdlc190b194ZW4oCiAKICAgICAgICAgICAgIGwydCA9IGFsbG9jX3hlbl9w
YWdldGFibGUoKTsKICAgICAgICAgICAgIGlmICggbDJ0ID09IE5VTEwgKQot
ICAgICAgICAgICAgICAgIHJldHVybiAtRU5PTUVNOworICAgICAgICAgICAg
ICAgIGdvdG8gb3V0OwogCiAgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8
IEwyX1BBR0VUQUJMRV9FTlRSSUVTOyBpKysgKQogICAgICAgICAgICAgICAg
IGwyZV93cml0ZShsMnQgKyBpLApAQCAtNTIxOCw3ICs1MjIwLDcgQEAgaW50
IG1hcF9wYWdlc190b194ZW4oCiAKICAgICAgICAgcGwyZSA9IHZpcnRfdG9f
eGVuX2wyZSh2aXJ0KTsKICAgICAgICAgaWYgKCAhcGwyZSApCi0gICAgICAg
ICAgICByZXR1cm4gLUVOT01FTTsKKyAgICAgICAgICAgIGdvdG8gb3V0Owog
CiAgICAgICAgIGlmICggKCgoKHZpcnQgPj4gUEFHRV9TSElGVCkgfCBtZm5f
eChtZm4pKSAmCiAgICAgICAgICAgICAgICAoKDF1IDw8IFBBR0VUQUJMRV9P
UkRFUikgLSAxKSkgPT0gMCkgJiYKQEAgLTUyNjIsNyArNTI2NCw3IEBAIGlu
dCBtYXBfcGFnZXNfdG9feGVuKAogICAgICAgICAgICAgewogICAgICAgICAg
ICAgICAgIHBsMWUgPSB2aXJ0X3RvX3hlbl9sMWUodmlydCk7CiAgICAgICAg
ICAgICAgICAgaWYgKCBwbDFlID09IE5VTEwgKQotICAgICAgICAgICAgICAg
ICAgICByZXR1cm4gLUVOT01FTTsKKyAgICAgICAgICAgICAgICAgICAgZ290
byBvdXQ7CiAgICAgICAgICAgICB9CiAgICAgICAgICAgICBlbHNlIGlmICgg
bDJlX2dldF9mbGFncygqcGwyZSkgJiBfUEFHRV9QU0UgKQogICAgICAgICAg
ICAgewpAQCAtNTI5MCw3ICs1MjkyLDcgQEAgaW50IG1hcF9wYWdlc190b194
ZW4oCiAKICAgICAgICAgICAgICAgICBsMXQgPSBhbGxvY194ZW5fcGFnZXRh
YmxlKCk7CiAgICAgICAgICAgICAgICAgaWYgKCBsMXQgPT0gTlVMTCApCi0g
ICAgICAgICAgICAgICAgICAgIHJldHVybiAtRU5PTUVNOworICAgICAgICAg
ICAgICAgICAgICBnb3RvIG91dDsKIAogICAgICAgICAgICAgICAgIGZvciAo
IGkgPSAwOyBpIDwgTDFfUEFHRVRBQkxFX0VOVFJJRVM7IGkrKyApCiAgICAg
ICAgICAgICAgICAgICAgIGwxZV93cml0ZSgmbDF0W2ldLApAQCAtNTQzNiw3
ICs1NDM4LDEwIEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogCiAjdW5kZWYg
Zmx1c2hfZmxhZ3MKIAotICAgIHJldHVybiAwOworICAgIHJjID0gMDsKKwor
IG91dDoKKyAgICByZXR1cm4gcmM7CiB9CiAKIGludCBwb3B1bGF0ZV9wdF9y
YW5nZSh1bnNpZ25lZCBsb25nIHZpcnQsIHVuc2lnbmVkIGxvbmcgbnJfbWZu
cykKLS0gCjIuMjUuMQoK

--=separator
Content-Type: application/octet-stream;
 name="xsa345/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Disposition: attachment;
 filename="xsa345/0002-x86-mm-Refactor-modify_xen_mappings-to-have-one-exit.patch"
Content-Transfer-Encoding: base64

RnJvbSA3YWNhOTA3NjY2NDhmMWI4ZWJjMDRlMGFiZDRkNzliYmQ0ZGI3N2Ew
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBXZWkgTGl1IDx3ZWku
bGl1MkBjaXRyaXguY29tPgpEYXRlOiBTYXQsIDExIEphbiAyMDIwIDIxOjU3
OjQyICswMDAwClN1YmplY3Q6IFtQQVRDSCAyLzNdIHg4Ni9tbTogUmVmYWN0
b3IgbW9kaWZ5X3hlbl9tYXBwaW5ncyB0byBoYXZlIG9uZSBleGl0CiBwYXRo
CgpXZSB3aWxsIHNvb24gbmVlZCB0byBwZXJmb3JtIGNsZWFuLXVwcyBiZWZv
cmUgcmV0dXJuaW5nLgoKTm8gZnVuY3Rpb25hbCBjaGFuZ2UuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5OiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9mZi1ieTogV2VpIExpdSA8
d2VpLmxpdTJAY2l0cml4LmNvbT4KU2lnbmVkLW9mZi1ieTogSG9uZ3lhbiBY
aWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25lZC1vZmYtYnk6IEdlb3Jn
ZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KQWNrZWQtYnk6
IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCiB4ZW4vYXJj
aC94ODYvbW0uYyB8IDEyICsrKysrKysrKy0tLQogMSBmaWxlIGNoYW5nZWQs
IDkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQg
YS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9tbS5jCmluZGV4
IGI5MDkwODNiYzIuLmQwZmM4YTgxNDIgMTAwNjQ0Ci0tLSBhL3hlbi9hcmNo
L3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBAIC01NDY4LDYg
KzU0NjgsNyBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBwaW5ncyh1bnNpZ25lZCBs
b25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWduZWQgaW50IG5mKQogICAg
IGwxX3BnZW50cnlfdCAqcGwxZTsKICAgICB1bnNpZ25lZCBpbnQgIGk7CiAg
ICAgdW5zaWduZWQgbG9uZyB2ID0gczsKKyAgICBpbnQgcmMgPSAtRU5PTUVN
OwogCiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBi
ZSBhbHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxf
UEFHRV9ESVJUWXxfUEFHRV9BQ0NFU1NFRHxfUEFHRV9SV3xfUEFHRV9QUkVT
RU5UKQpAQCAtNTUxMSw3ICs1NTEyLDggQEAgaW50IG1vZGlmeV94ZW5fbWFw
cGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25nIGUsIHVuc2ln
bmVkIGludCBuZikKICAgICAgICAgICAgIC8qIFBBR0UxR0I6IHNoYXR0ZXIg
dGhlIHN1cGVycGFnZSBhbmQgZmFsbCB0aHJvdWdoLiAqLwogICAgICAgICAg
ICAgbDJ0ID0gYWxsb2NfeGVuX3BhZ2V0YWJsZSgpOwogICAgICAgICAgICAg
aWYgKCAhbDJ0ICkKLSAgICAgICAgICAgICAgICByZXR1cm4gLUVOT01FTTsK
KyAgICAgICAgICAgICAgICBnb3RvIG91dDsKKwogICAgICAgICAgICAgZm9y
ICggaSA9IDA7IGkgPCBMMl9QQUdFVEFCTEVfRU5UUklFUzsgaSsrICkKICAg
ICAgICAgICAgICAgICBsMmVfd3JpdGUobDJ0ICsgaSwKICAgICAgICAgICAg
ICAgICAgICAgICAgICAgbDJlX2Zyb21fcGZuKGwzZV9nZXRfcGZuKCpwbDNl
KSArCkBAIC01NTY4LDcgKzU1NzAsOCBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBw
aW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWdu
ZWQgaW50IG5mKQogICAgICAgICAgICAgICAgIC8qIFBTRTogc2hhdHRlciB0
aGUgc3VwZXJwYWdlIGFuZCB0cnkgYWdhaW4uICovCiAgICAgICAgICAgICAg
ICAgbDF0ID0gYWxsb2NfeGVuX3BhZ2V0YWJsZSgpOwogICAgICAgICAgICAg
ICAgIGlmICggIWwxdCApCi0gICAgICAgICAgICAgICAgICAgIHJldHVybiAt
RU5PTUVNOworICAgICAgICAgICAgICAgICAgICBnb3RvIG91dDsKKwogICAg
ICAgICAgICAgICAgIGZvciAoIGkgPSAwOyBpIDwgTDFfUEFHRVRBQkxFX0VO
VFJJRVM7IGkrKyApCiAgICAgICAgICAgICAgICAgICAgIGwxZV93cml0ZSgm
bDF0W2ldLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgbDFlX2Zy
b21fcGZuKGwyZV9nZXRfcGZuKCpwbDJlKSArIGksCkBAIC01NzAxLDcgKzU3
MDQsMTAgQEAgaW50IG1vZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9u
ZyBzLCB1bnNpZ25lZCBsb25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICBm
bHVzaF9hcmVhKE5VTEwsIEZMVVNIX1RMQl9HTE9CQUwpOwogCiAjdW5kZWYg
RkxBR1NfTUFTSwotICAgIHJldHVybiAwOworICAgIHJjID0gMDsKKworIG91
dDoKKyAgICByZXR1cm4gcmM7CiB9CiAKICN1bmRlZiBmbHVzaF9hcmVhCi0t
IAoyLjI1LjEKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa345/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Disposition: attachment;
 filename="xsa345/0003-x86-mm-Prevent-some-races-in-hypervisor-mapping-upda.patch"
Content-Transfer-Encoding: base64

RnJvbSBkMjUzNTlkNWM0MjM4M2QyYmM3ZDFhOTQ0ZmFiOGJjMDM5ZGJiNDQy
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBIb25neWFuIFhpYSA8
aG9uZ3l4aWFAYW1hem9uLmNvbT4KRGF0ZTogU2F0LCAxMSBKYW4gMjAyMCAy
MTo1Nzo0MyArMDAwMApTdWJqZWN0OiBbUEFUQ0ggMy8zXSB4ODYvbW06IFBy
ZXZlbnQgc29tZSByYWNlcyBpbiBoeXBlcnZpc29yIG1hcHBpbmcgdXBkYXRl
cwoKbWFwX3BhZ2VzX3RvX3hlbiB3aWxsIGF0dGVtcHQgdG8gY29hbGVzY2Ug
bWFwcGluZ3MgaW50byAyTWlCIGFuZCAxR2lCCnN1cGVycGFnZXMgaWYgcG9z
c2libGUsIHRvIG1heGltaXplIFRMQiBlZmZpY2llbmN5LiAgVGhpcyBtZWFu
cyBib3RoCnJlcGxhY2luZyBzdXBlcnBhZ2UgZW50cmllcyB3aXRoIHNtYWxs
ZXIgZW50cmllcywgYW5kIHJlcGxhY2luZwpzbWFsbGVyIGVudHJpZXMgd2l0
aCBzdXBlcnBhZ2VzLgoKVW5mb3J0dW5hdGVseSwgd2hpbGUgc29tZSBwb3Rl
bnRpYWwgcmFjZXMgYXJlIGhhbmRsZWQgY29ycmVjdGx5LApvdGhlcnMgYXJl
IG5vdC4gIFRoZXNlIGluY2x1ZGU6CgoxLiBXaGVuIG9uZSBwcm9jZXNzb3Ig
bW9kaWZpZXMgYSBzdWItc3VwZXJwYWdlIG1hcHBpbmcgd2hpbGUgYW5vdGhl
cgpwcm9jZXNzb3IgcmVwbGFjZXMgdGhlIGVudGlyZSByYW5nZSB3aXRoIGEg
c3VwZXJwYWdlLgoKVGFrZSB0aGUgZm9sbG93aW5nIGV4YW1wbGU6CgpTdXBw
b3NlIEwzW05dIHBvaW50cyB0byBMMi4gIEFuZCBzdXBwb3NlIHdlIGhhdmUg
dHdvIHByb2Nlc3NvcnMsIEEgYW5kCkIuCgoqIEEgd2Fsa3MgdGhlIHBhZ2V0
YWJsZXMsIGdldCBhIHBvaW50ZXIgdG8gTDIuCiogQiByZXBsYWNlcyBMM1tO
XSB3aXRoIGEgMUdpQiBtYXBwaW5nLgoqIEIgRnJlZXMgTDIKKiBBIHdyaXRl
cyBMMltNXSAjCgpUaGlzIGlzIHJhY2UgZXhhY2VyYmF0ZWQgYnkgdGhlIGZh
Y3QgdGhhdCB2aXJ0X3RvX3hlbl9sWzIxXWUgZG9lc24ndApoYW5kbGUgaGln
aGVyLWxldmVsIHN1cGVycGFnZXMgcHJvcGVybHk6IElmIHlvdSBjYWxsIHZp
cnRfeGVuX3RvX2wyZQpvbiBhIHZpcnR1YWwgYWRkcmVzcyB3aXRoaW4gYW4g
TDMgc3VwZXJwYWdlLCB5b3UnbGwgZWl0aGVyIGhpdCBhIEJVRygpCihtb3N0
IGxpa2VseSksIG9yIGdldCBhIHBvaW50ZXIgaW50byB0aGUgbWlkZGxlIG9m
IGEgZGF0YSBwYWdlOyBzYW1lCndpdGggdmlydF94ZW5fdG9fbDEgb24gYSB2
aXJ0dWFsIGFkZHJlc3Mgd2l0aGluIGVpdGhlciBhbiBMMyBvciBMMgpzdXBl
cnBhZ2UuCgpTbyB0YWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToKCiogQSBy
ZWFkcyBwbDNlIGFuZCBkaXNjb3ZlcnMgaXQgdG8gcG9pbnQgdG8gYW4gTDIu
CiogQiByZXBsYWNlcyBMM1tOXSB3aXRoIGEgMUdpQiBtYXBwaW5nCiogQSBj
YWxscyB2aXJ0X3RvX3hlbl9sMmUoKSBhbmQgaGl0cyB0aGUgQlVHX09OKCkg
IwoKMi4gV2hlbiB0d28gcHJvY2Vzc29ycyBzaW11bHRhbmVvdXNseSB0cnkg
dG8gcmVwbGFjZSBhIHN1Yi1zdXBlcnBhZ2UKbWFwcGluZyB3aXRoIGEgc3Vw
ZXJwYWdlIG1hcHBpbmcuCgpUYWtlIHRoZSBmb2xsb3dpbmcgZXhhbXBsZToK
ClN1cHBvc2UgTDNbTl0gcG9pbnRzIHRvIEwyLiAgQW5kIHN1cHBvc2Ugd2Ug
aGF2ZSB0d28gcHJvY2Vzc29ycywgQSBhbmQgQiwKYm90aCB0cnlpbmcgdG8g
cmVwbGFjZSBMM1tOXSB3aXRoIGEgc3VwZXJwYWdlLgoKKiBBIHdhbGtzIHRo
ZSBwYWdldGFibGVzLCBnZXQgYSBwb2ludGVyIHRvIHBsM2UsIGFuZCB0YWtl
cyBhIGNvcHkgb2wzZSBwb2ludGluZyB0byBMMi4KKiBCIHdhbGtzIHRoZSBw
YWdldGFibGVzLCBnZXRzIGEgcG9pbnRyZSB0byBwbDNlLCBhbmQgdGFrZXMg
YSBjb3B5IG9sM2UgcG9pbnRpbmcgdG8gTDIuCiogQSB3cml0ZXMgdGhlIG5l
dyB2YWx1ZSBpbnRvIEwzW05dCiogQiB3cml0ZXMgdGhlIG5ldyB2YWx1ZSBp
bnRvIEwzW05dCiogQSByZWN1cnNpdmVseSBmcmVlcyBhbGwgdGhlIEwxJ3Mg
dW5kZXIgTDIsIHRoZW4gZnJlZXMgTDIKKiBCIHJlY3Vyc2l2ZWx5IGRvdWJs
ZS1mcmVlcyBhbGwgdGhlIEwxJ3MgdW5kZXIgTDIsIHRoZW4gZG91YmxlLWZy
ZWVzIEwyICMKCkZpeCB0aGlzIGJ5IGdyYWJiaW5nIGEgbG9jayBmb3IgdGhl
IGVudGlyZXR5IG9mIHRoZSBtYXBwaW5nIHVwZGF0ZQpvcGVyYXRpb24uCgpS
YXRoZXIgdGhhbiBncmFiYmluZyBtYXBfcGdkaXJfbG9jayBmb3IgdGhlIGVu
dGlyZSBvcGVyYXRpb24sIGhvd2V2ZXIsCnJlcHVycG9zZSB0aGUgUEdUX2xv
Y2tlZCBiaXQgZnJvbSBMMydzIHBhZ2UtPnR5cGVfaW5mbyBhcyBhIGxvY2su
ClRoaXMgbWVhbnMgdGhhdCByYXRoZXIgdGhhbiBsb2NraW5nIHRoZSBlbnRp
cmUgYWRkcmVzcyBzcGFjZSwgd2UKIm9ubHkiIGxvY2sgYSBzaW5nbGUgNTEy
R2lCIGNodW5rIG9mIGh5cGVydmlzb3IgYWRkcmVzcyBzcGFjZSBhdCBhCnRp
bWUuCgpUaGVyZSB3YXMgYSBwcm9wb3NhbCBmb3IgYSBsb2NrLWFuZC1yZXZl
cmlmeSBhcHByb2FjaCwgd2hlcmUgd2Ugd2Fsawp0aGUgcGFnZXRhYmxlcyB0
byB0aGUgcG9pbnQgd2hlcmUgd2UgZGVjaWRlIHdoYXQgdG8gZG87IHRoZW4g
Z3JhYiB0aGUKbWFwX3BnZGlyX2xvY2ssIHJlLXZlcmlmeSB0aGUgaW5mb3Jt
YXRpb24gd2UgY29sbGVjdGVkIHdpdGhvdXQgdGhlCmxvY2ssIGFuZCBmaW5h
bGx5IG1ha2UgdGhlIGNoYW5nZSAoc3RhcnRpbmcgb3ZlciBhZ2FpbiBpZiBh
bnl0aGluZyBoYWQKY2hhbmdlZCkuICBXaXRob3V0IGJlaW5nIGFibGUgdG8g
Z3VhcmFudGVlIHRoYXQgdGhlIEwyIHRhYmxlIHdhc24ndApmcmVlZCwgaG93
ZXZlciwgdGhhdCBtZWFucyBldmVyeSByZWFkIHdvdWxkIG5lZWQgdG8gYmUg
Y29uc2lkZXJlZApwb3RlbnRpYWxseSB1bnNhZmUuICBUaGlua2luZyBjYXJl
ZnVsbHkgYWJvdXQgdGhhdCBpcyBwcm9iYWJseQpzb21ldGhpbmcgdGhhdCB3
YW50cyB0byBiZSBkb25lIG9uIHB1YmxpYywgbm90IHVuZGVyIHRpbWUgcHJl
c3N1cmUuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0NS4KClJlcG9ydGVkLWJ5
OiBIb25neWFuIFhpYSA8aG9uZ3l4aWFAYW1hem9uLmNvbT4KU2lnbmVkLW9m
Zi1ieTogSG9uZ3lhbiBYaWEgPGhvbmd5eGlhQGFtYXpvbi5jb20+ClNpZ25l
ZC1vZmYtYnk6IEdlb3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4
LmNvbT4KUmV2aWV3ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNl
LmNvbT4KLS0tCiB4ZW4vYXJjaC94ODYvbW0uYyB8IDkyICsrKysrKysrKysr
KysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tCiAxIGZpbGUg
Y2hhbmdlZCwgODkgaW5zZXJ0aW9ucygrKSwgMyBkZWxldGlvbnMoLSkKCmRp
ZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvbW0uYyBiL3hlbi9hcmNoL3g4Ni9t
bS5jCmluZGV4IGQwZmM4YTgxNDIuLjAwYjM1NTQyZDEgMTAwNjQ0Ci0tLSBh
L3hlbi9hcmNoL3g4Ni9tbS5jCisrKyBiL3hlbi9hcmNoL3g4Ni9tbS5jCkBA
IC0yMDgzLDYgKzIwODMsNTAgQEAgdm9pZCBwYWdlX3VubG9jayhzdHJ1Y3Qg
cGFnZV9pbmZvICpwYWdlKQogICAgIGN1cnJlbnRfbG9ja2VkX3BhZ2Vfc2V0
KE5VTEwpOwogfQogCisvKgorICogTDMgdGFibGUgbG9ja3M6CisgKgorICog
VXNlZCBmb3Igc2VyaWFsaXphdGlvbiBpbiBtYXBfcGFnZXNfdG9feGVuKCkg
YW5kIG1vZGlmeV94ZW5fbWFwcGluZ3MoKS4KKyAqCisgKiBGb3IgWGVuIFBU
IHBhZ2VzLCB0aGUgcGFnZS0+dS5pbnVzZS50eXBlX2luZm8gaXMgdW51c2Vk
IGFuZCBpdCBpcyBzYWZlIHRvCisgKiByZXVzZSB0aGUgUEdUX2xvY2tlZCBm
bGFnLiBUaGlzIGxvY2sgaXMgdGFrZW4gb25seSB3aGVuIHdlIG1vdmUgZG93
biB0byBMMworICogdGFibGVzIGFuZCBiZWxvdywgc2luY2UgTDQgKGFuZCBh
Ym92ZSwgZm9yIDUtbGV2ZWwgcGFnaW5nKSBpcyBzdGlsbCBnbG9iYWxseQor
ICogcHJvdGVjdGVkIGJ5IG1hcF9wZ2Rpcl9sb2NrLgorICoKKyAqIFBWIE1N
VSB1cGRhdGUgaHlwZXJjYWxscyBjYWxsIG1hcF9wYWdlc190b194ZW4gd2hp
bGUgaG9sZGluZyBhIHBhZ2UncyBwYWdlX2xvY2soKS4KKyAqIFRoaXMgaGFz
IHR3byBpbXBsaWNhdGlvbnM6CisgKiAtIFdlIGNhbm5vdCByZXVzZSByZXVz
ZSBjdXJyZW50X2xvY2tlZF9wYWdlXyogZm9yIGRlYnVnZ2luZworICogLSBU
byBhdm9pZCB0aGUgY2hhbmNlIG9mIGRlYWRsb2NrLCBldmVuIGZvciBkaWZm
ZXJlbnQgcGFnZXMsIHdlCisgKiAgIG11c3QgbmV2ZXIgZ3JhYiBwYWdlX2xv
Y2soKSBhZnRlciBncmFiYmluZyBsM3RfbG9jaygpLiAgVGhpcworICogICBp
bmNsdWRlcyBhbnkgcGFnZV9sb2NrKCktYmFzZWQgbG9ja3MsIHN1Y2ggYXMK
KyAqICAgbWVtX3NoYXJpbmdfcGFnZV9sb2NrKCkuCisgKgorICogQWxzbyBu
b3RlIHRoYXQgd2UgZ3JhYiB0aGUgbWFwX3BnZGlyX2xvY2sgd2hpbGUgaG9s
ZGluZyB0aGUKKyAqIGwzdF9sb2NrKCksIHNvIHRvIGF2b2lkIGRlYWRsb2Nr
IHdlIG11c3QgYXZvaWQgZ3JhYmJpbmcgdGhlbSBpbgorICogcmV2ZXJzZSBv
cmRlci4KKyAqLworc3RhdGljIHZvaWQgbDN0X2xvY2soc3RydWN0IHBhZ2Vf
aW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54OworCisg
ICAgZG8geworICAgICAgICB3aGlsZSAoICh4ID0gcGFnZS0+dS5pbnVzZS50
eXBlX2luZm8pICYgUEdUX2xvY2tlZCApCisgICAgICAgICAgICBjcHVfcmVs
YXgoKTsKKyAgICAgICAgbnggPSB4IHwgUEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggY21weGNoZygmcGFnZS0+dS5pbnVzZS50eXBlX2luZm8sIHgsIG54
KSAhPSB4ICk7Cit9CisKK3N0YXRpYyB2b2lkIGwzdF91bmxvY2soc3RydWN0
IHBhZ2VfaW5mbyAqcGFnZSkKK3sKKyAgICB1bnNpZ25lZCBsb25nIHgsIG54
LCB5ID0gcGFnZS0+dS5pbnVzZS50eXBlX2luZm87CisKKyAgICBkbyB7Cisg
ICAgICAgIHggPSB5OworICAgICAgICBCVUdfT04oISh4ICYgUEdUX2xvY2tl
ZCkpOworICAgICAgICBueCA9IHggJiB+UEdUX2xvY2tlZDsKKyAgICB9IHdo
aWxlICggKHkgPSBjbXB4Y2hnKCZwYWdlLT51LmludXNlLnR5cGVfaW5mbywg
eCwgbngpKSAhPSB4ICk7Cit9CisKICNpZmRlZiBDT05GSUdfUFYKIC8qCiAg
KiBQVEUgZmxhZ3MgdGhhdCBhIGd1ZXN0IG1heSBjaGFuZ2Ugd2l0aG91dCBy
ZS12YWxpZGF0aW5nIHRoZSBQVEUuCkBAIC01MDY5LDYgKzUxMTMsMjMgQEAg
bDFfcGdlbnRyeV90ICp2aXJ0X3RvX3hlbl9sMWUodW5zaWduZWQgbG9uZyB2
KQogICAgICAgICAgICAgICAgICAgICAgICAgIGZsdXNoX2FyZWFfbG9jYWwo
KGNvbnN0IHZvaWQgKil2LCBmKSA6IFwKICAgICAgICAgICAgICAgICAgICAg
ICAgICBmbHVzaF9hcmVhX2FsbCgoY29uc3Qgdm9pZCAqKXYsIGYpKQogCisj
ZGVmaW5lIEwzVF9JTklUKHBhZ2UpIChwYWdlKSA9IFpFUk9fQkxPQ0tfUFRS
CisKKyNkZWZpbmUgTDNUX0xPQ0socGFnZSkgICAgICAgIFwKKyAgICBkbyB7
ICAgICAgICAgICAgICAgICAgICAgIFwKKyAgICAgICAgaWYgKCBsb2NraW5n
ICkgICAgICAgIFwKKyAgICAgICAgICAgIGwzdF9sb2NrKHBhZ2UpOyAgIFwK
KyAgICB9IHdoaWxlICggZmFsc2UgKQorCisjZGVmaW5lIEwzVF9VTkxPQ0so
cGFnZSkgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisgICAgZG8geyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisg
ICAgICAgIGlmICggbG9ja2luZyAmJiAocGFnZSkgIT0gWkVST19CTE9DS19Q
VFIgKSBcCisgICAgICAgIHsgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBcCisgICAgICAgICAgICBsM3RfdW5sb2NrKHBhZ2Up
OyAgICAgICAgICAgICAgICAgICAgICBcCisgICAgICAgICAgICAocGFnZSkg
PSBaRVJPX0JMT0NLX1BUUjsgICAgICAgICAgICAgICBcCisgICAgICAgIH0g
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBcCisg
ICAgfSB3aGlsZSAoIGZhbHNlICkKKwogaW50IG1hcF9wYWdlc190b194ZW4o
CiAgICAgdW5zaWduZWQgbG9uZyB2aXJ0LAogICAgIG1mbl90IG1mbiwKQEAg
LTUwODAsNiArNTE0MSw3IEBAIGludCBtYXBfcGFnZXNfdG9feGVuKAogICAg
IGwxX3BnZW50cnlfdCAqcGwxZSwgb2wxZTsKICAgICB1bnNpZ25lZCBpbnQg
IGk7CiAgICAgaW50IHJjID0gLUVOT01FTTsKKyAgICBzdHJ1Y3QgcGFnZV9p
bmZvICpjdXJyZW50X2wzcGFnZTsKIAogI2RlZmluZSBmbHVzaF9mbGFncyhv
bGRmKSBkbyB7ICAgICAgICAgICAgICAgICBcCiAgICAgdW5zaWduZWQgaW50
IG9fID0gKG9sZGYpOyAgICAgICAgICAgICAgICAgIFwKQEAgLTUwOTUsMTMg
KzUxNTcsMjAgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgfSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFwKIH0gd2hp
bGUgKDApCiAKKyAgICBMM1RfSU5JVChjdXJyZW50X2wzcGFnZSk7CisKICAg
ICB3aGlsZSAoIG5yX21mbnMgIT0gMCApCiAgICAgewotICAgICAgICBsM19w
Z2VudHJ5X3Qgb2wzZSwgKnBsM2UgPSB2aXJ0X3RvX3hlbl9sM2UodmlydCk7
CisgICAgICAgIGwzX3BnZW50cnlfdCAqcGwzZSwgb2wzZTsKIAorICAgICAg
ICBMM1RfVU5MT0NLKGN1cnJlbnRfbDNwYWdlKTsKKworICAgICAgICBwbDNl
ID0gdmlydF90b194ZW5fbDNlKHZpcnQpOwogICAgICAgICBpZiAoICFwbDNl
ICkKICAgICAgICAgICAgIGdvdG8gb3V0OwogCisgICAgICAgIGN1cnJlbnRf
bDNwYWdlID0gdmlydF90b19wYWdlKHBsM2UpOworICAgICAgICBMM1RfTE9D
SyhjdXJyZW50X2wzcGFnZSk7CiAgICAgICAgIG9sM2UgPSAqcGwzZTsKIAog
ICAgICAgICBpZiAoIGNwdV9oYXNfcGFnZTFnYiAmJgpAQCAtNTQ0MSw2ICs1
NTEwLDcgQEAgaW50IG1hcF9wYWdlc190b194ZW4oCiAgICAgcmMgPSAwOwog
CiAgb3V0OgorICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3BhZ2UpOwogICAg
IHJldHVybiByYzsKIH0KIApAQCAtNTQ2OSw2ICs1NTM5LDcgQEAgaW50IG1v
ZGlmeV94ZW5fbWFwcGluZ3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBs
b25nIGUsIHVuc2lnbmVkIGludCBuZikKICAgICB1bnNpZ25lZCBpbnQgIGk7
CiAgICAgdW5zaWduZWQgbG9uZyB2ID0gczsKICAgICBpbnQgcmMgPSAtRU5P
TUVNOworICAgIHN0cnVjdCBwYWdlX2luZm8gKmN1cnJlbnRfbDNwYWdlOwog
CiAgICAgLyogU2V0IG9mIHZhbGlkIFBURSBiaXRzIHdoaWNoIG1heSBiZSBh
bHRlcmVkLiAqLwogI2RlZmluZSBGTEFHU19NQVNLIChfUEFHRV9OWHxfUEFH
RV9ESVJUWXxfUEFHRV9BQ0NFU1NFRHxfUEFHRV9SV3xfUEFHRV9QUkVTRU5U
KQpAQCAtNTQ3NywxMSArNTU0OCwyMiBAQCBpbnQgbW9kaWZ5X3hlbl9tYXBw
aW5ncyh1bnNpZ25lZCBsb25nIHMsIHVuc2lnbmVkIGxvbmcgZSwgdW5zaWdu
ZWQgaW50IG5mKQogICAgIEFTU0VSVChJU19BTElHTkVEKHMsIFBBR0VfU0la
RSkpOwogICAgIEFTU0VSVChJU19BTElHTkVEKGUsIFBBR0VfU0laRSkpOwog
CisgICAgTDNUX0lOSVQoY3VycmVudF9sM3BhZ2UpOworCiAgICAgd2hpbGUg
KCB2IDwgZSApCiAgICAgewotICAgICAgICBsM19wZ2VudHJ5X3QgKnBsM2Ug
PSB2aXJ0X3RvX3hlbl9sM2Uodik7CisgICAgICAgIGwzX3BnZW50cnlfdCAq
cGwzZTsKKworICAgICAgICBMM1RfVU5MT0NLKGN1cnJlbnRfbDNwYWdlKTsK
IAotICAgICAgICBpZiAoICFwbDNlIHx8ICEobDNlX2dldF9mbGFncygqcGwz
ZSkgJiBfUEFHRV9QUkVTRU5UKSApCisgICAgICAgIHBsM2UgPSB2aXJ0X3Rv
X3hlbl9sM2Uodik7CisgICAgICAgIGlmICggIXBsM2UgKQorICAgICAgICAg
ICAgZ290byBvdXQ7CisKKyAgICAgICAgY3VycmVudF9sM3BhZ2UgPSB2aXJ0
X3RvX3BhZ2UocGwzZSk7CisgICAgICAgIEwzVF9MT0NLKGN1cnJlbnRfbDNw
YWdlKTsKKworICAgICAgICBpZiAoICEobDNlX2dldF9mbGFncygqcGwzZSkg
JiBfUEFHRV9QUkVTRU5UKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIC8q
IENvbmZpcm0gdGhlIGNhbGxlciBpc24ndCB0cnlpbmcgdG8gY3JlYXRlIG5l
dyBtYXBwaW5ncy4gKi8KICAgICAgICAgICAgIEFTU0VSVCghKG5mICYgX1BB
R0VfUFJFU0VOVCkpOwpAQCAtNTcwNyw5ICs1Nzg5LDEzIEBAIGludCBtb2Rp
ZnlfeGVuX21hcHBpbmdzKHVuc2lnbmVkIGxvbmcgcywgdW5zaWduZWQgbG9u
ZyBlLCB1bnNpZ25lZCBpbnQgbmYpCiAgICAgcmMgPSAwOwogCiAgb3V0Ogor
ICAgIEwzVF9VTkxPQ0soY3VycmVudF9sM3BhZ2UpOwogICAgIHJldHVybiBy
YzsKIH0KIAorI3VuZGVmIEwzVF9MT0NLCisjdW5kZWYgTDNUX1VOTE9DSwor
CiAjdW5kZWYgZmx1c2hfYXJlYQogCiBpbnQgZGVzdHJveV94ZW5fbWFwcGlu
Z3ModW5zaWduZWQgbG9uZyBzLCB1bnNpZ25lZCBsb25nIGUpCi0tIAoyLjI1
LjEKCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:01:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:01:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9133.24646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqKN-0006wb-Qh; Tue, 20 Oct 2020 12:01:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9133.24646; Tue, 20 Oct 2020 12:01:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqKN-0006wE-Es; Tue, 20 Oct 2020 12:01:19 +0000
Received: by outflank-mailman (input) for mailman id 9133;
 Tue, 20 Oct 2020 12:01:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kUqKM-0006So-DP
 for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:01:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fe5417ff-5478-4f43-bdc2-fb4fa3c2d855;
 Tue, 20 Oct 2020 12:00:43 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJf-0001Jq-T0; Tue, 20 Oct 2020 12:00:35 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJf-0001zq-Rt; Tue, 20 Oct 2020 12:00:35 +0000
X-Inumbo-ID: fe5417ff-5478-4f43-bdc2-fb4fa3c2d855
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=K6P5v3UgSsI3wHFL2Fuy0ZAZrHVkXxn2Ev27tW0qf+0=; b=XPdQbGK/H/24rEEBlTXP7BdbCa
	KTJWMU1tYCKMaw2dp7HYqOnYxraNaY52LfgPpsZNvXdZk3LGCG1CBYIF50tw1bFkGH3twwkK6GB33
	DvXFgO5pdIYW2aZoua/BHPcedjJQex1KVwcOQtM3kXOqmigAKSIH9tMB9GUmGWlbsRl4=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 332 v3 - Rogue guests can cause DoS of Dom0
 via high frequency events
Message-Id: <E1kUqJf-0001zq-Rt@xenbits.xenproject.org>
Date: Tue, 20 Oct 2020 12:00:35 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-332
                              version 3

     Rogue guests can cause DoS of Dom0 via high frequency events

UPDATES IN VERSION 3
====================

Public release.

ISSUE DESCRIPTION
=================

The handling of Xen events in the Linux kernel runs with interrupts
disabled in a loop until no further event is pending.

Whenever an event has been accepted by the kernel, another event can
come in via the same event channel.  This can result in the event
handling loop running for an extended time if new events are coming in
at a high rate.  In extreme cases this can lead to a complete hang of
the kernel, resulting in a DoS situation of the host when dom0 is
affected.

IMPACT
======

Malicious guests can hang the host by sending events to dom0 at a high
frequency.

VULNERABLE SYSTEMS
==================

All systems with a Linux dom0 are affected.

All Linux kernel versions are affected.

MITIGATION
==========

There is no known mitigation.

CREDITS
=======

This issue was discovered by Julien Grall from Arm.

RESOLUTION
==========

Applying the appropriate attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa332-linux-??.patch  Linux

$ sha256sum xsa332*
92d0789e8e5b9ec7ae0cd8b01ef31e27930dbe9b81b727521d46328107f3c719  xsa332-linux-01.patch
0bd82febcaf7fc72b88082f46cae9b67f39786d03b3e6aae5f0789cf855e6143  xsa332-linux-02.patch
e646b7caf11ded7f22b209635b209f50ac583cbaeb3270148ce66a3cd922f0c1  xsa332-linux-03.patch
9bed2213774a8107a2f2c157aeb0ebfda7cc6384cee0a245017b3a9eb28cff7f  xsa332-linux-04.patch
8839af506b71946db35f223ff614aa92b4386aaf95e4d8b1408fbf31436ff80f  xsa332-linux-05.patch
b261706bd7f7120fadff0e928be366924cfc13418c81a67ad45724b4179e8a5c  xsa332-linux-06.patch
fc0c963a9a965fc7a72468b1a1ce0834dc866e77392ca0c1d9c8162457a526a0  xsa332-linux-07.patch
5d821c58dd7fcdb157c2844ba34675305c320de25f54409305ffcba610d5922b  xsa332-linux-08.patch
242eb83eca8e3b6d2d303e2943aa041b5f19ea54242cd0de20252d2ae3d128d1  xsa332-linux-09.patch
70a042006d1df3dbbefc4c7d4dfd50da8f3a8e47ee77c2d6d0ba1eda405ae574  xsa332-linux-10.patch
ebbfa66d11b8c81353b72ed5f381672e6784a67895df482f7e791a9fb4c6fbf0  xsa332-linux-11.patch
cda1cbcca19860d43804e80ec2d7d13b295a140b42aa7d16118bb2d20bd63cae  xsa332-linux-12.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations should contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+OzqQMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ3MIIAJR5SsBiZM7dhNHSJWMv1OXZK9MBpIxUgJuLY6da
dlpsb6c5eb7ppAfHzkg+JABzc1hIKQkzKBL9n/tvP57KAWqnCbrPfk3/pVrvAf9E
Vubra4+Ec8hY+8JqJsxHS6ZPyLzViFaE505pBEHlFOGZYkSgqM/s96SgoZtgMSpx
pUpFGJCAUPZ7uR+urznM4QrWvvytsRbZo3fUrqn0f9WgMXFge0U9vE7Clt1yzZns
J5nmYq2gBJkrMINreth8T7oDCx7l+I+Cq4yJ0hreUWCxp6svl7kbjI55sdlrI99O
J7rXH6uaGEHSFfy/Zx4aek3eB5LP6Asgp2pQZkXOcSg8RLE=
=q2XX
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-01.patch"
Content-Disposition: attachment; filename="xsa332-linux-01.patch"
Content-Transfer-Encoding: base64

RnJvbSBiYTBmOTZmOWM1MDk3NTNmZGIzZDM2NDRlODc1Yjc5NGQ4OTNlNGU4
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyNyArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMDEvMTJdIHhlbi9ldmVudHM6
IGFkZCBhIHByb3BlciBiYXJyaWVyIHRvIDItbGV2ZWwgdWV2ZW50CiB1bm1h
c2tpbmcKCkEgZm9sbG93LXVwIHBhdGNoIHdpbGwgcmVxdWlyZSBjZXJ0YWlu
IHdyaXRlIHRvIGhhcHBlbiBiZWZvcmUgYW4gZXZlbnQKY2hhbm5lbCBpcyB1
bm1hc2tlZC4KCldoaWxlIHRoZSBtZW1vcnkgYmFycmllciBpcyBub3Qgc3Ry
aWN0bHkgbmVjZXNzYXJ5IGZvciBhbGwgdGhlIGNhbGxlcnMsCnRoZSBtYWlu
IG9uZSB3aWxsIG5lZWQgaXQuIEluIG9yZGVyIHRvIGF2b2lkIGFuIGV4dHJh
IG1lbW9yeSBiYXJyaWVyCndoZW4gdXNpbmcgZmlmbyBldmVudCBjaGFubmVs
cywgbWFuZGF0ZSBldnRjaG5fdW5tYXNrKCkgdG8gcHJvdmlkZQp3cml0ZSBv
cmRlcmluZy4KClRoZSAyLWxldmVsIGV2ZW50IGhhbmRsaW5nIHVubWFzayBv
cGVyYXRpb24gaXMgbWlzc2luZyBhbiBhcHByb3ByaWF0ZQpiYXJyaWVyLCBz
byBhZGQgaXQuIEZpZm8gZXZlbnQgY2hhbm5lbHMgYXJlIGZpbmUgaW4gdGhp
cyByZWdhcmQgZHVlIHRvCnVzaW5nIHN5bmNfY21weGNoZygpLgoKVGhpcyBp
cyBwYXJ0IG9mIFhTQS0zMzIuCgpDYzogc3RhYmxlQHZnZXIua2VybmVsLm9y
ZwpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4K
U3VnZ2VzdGVkLWJ5OiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPgpT
aWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+
ClJldmlld2VkLWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29t
PgpSZXZpZXdlZC1ieTogV2VpIExpdSA8d2xAeGVuLm9yZz4KLS0tCiBkcml2
ZXJzL3hlbi9ldmVudHMvZXZlbnRzXzJsLmMgfCAyICsrCiAxIGZpbGUgY2hh
bmdlZCwgMiBpbnNlcnRpb25zKCspCgpkaWZmIC0tZ2l0IGEvZHJpdmVycy94
ZW4vZXZlbnRzL2V2ZW50c18ybC5jIGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2
ZW50c18ybC5jCmluZGV4IDY0ZGY5MTlhMjExMS4uZTFhZjVlMDkzZmY0IDEw
MDY0NAotLS0gYS9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzXzJsLmMKKysr
IGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c18ybC5jCkBAIC05MSw2ICs5
MSw4IEBAIHN0YXRpYyB2b2lkIGV2dGNobl8ybF91bm1hc2soZXZ0Y2huX3Bv
cnRfdCBwb3J0KQogCiAJQlVHX09OKCFpcnFzX2Rpc2FibGVkKCkpOwogCisJ
c21wX3dtYigpOwkvKiBBbGwgd3JpdGVzIGJlZm9yZSB1bm1hc2sgbXVzdCBi
ZSB2aXNpYmxlLiAqLworCiAJaWYgKHVubGlrZWx5KChjcHUgIT0gY3B1X2Zy
b21fZXZ0Y2huKHBvcnQpKSkpCiAJCWRvX2h5cGVyY2FsbCA9IDE7CiAJZWxz
ZSB7Ci0tIAoyLjI2LjIKCg==

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-02.patch"
Content-Disposition: attachment; filename="xsa332-linux-02.patch"
Content-Transfer-Encoding: base64

RnJvbSA4ZjA3ODkwOGYwZDQyNjE2OGY1YmMzZDg5YWViOTc5ZDg1ZGE2Y2Vm
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFR1ZSwgMjAgT2N0IDIwMjAgMDY6
NTI6NTUgKzAyMDAKU3ViamVjdDogW1BBVENIIDAyLzEyXSB4ZW4vZXZlbnRz
OiBmaXggcmFjZSBpbiBldnRjaG5fZmlmb191bm1hc2soKQoKVW5tYXNraW5n
IGEgZmlmbyBldmVudCBjaGFubmVsIGNhbiByZXN1bHQgaW4gdW5tYXNraW5n
IGl0IHR3aWNlLCBvbmNlCmRpcmVjdGx5IGluIHRoZSBrZXJuZWwgYW5kIG9u
Y2UgdmlhIGEgaHlwZXJjYWxsIGluIGNhc2UgdGhlIGV2ZW50IHdhcwpwZW5k
aW5nLgoKRml4IHRoYXQgYnkgZG9pbmcgdGhlIGxvY2FsIHVubWFzayBvbmx5
IGlmIHRoZSBldmVudCBpcyBub3QgcGVuZGluZy4KClRoaXMgaXMgcGFydCBv
ZiBYU0EtMzMyLgoKQ2M6IHN0YWJsZUB2Z2VyLmtlcm5lbC5vcmcKUmVwb3J0
ZWQtYnk6IEp1bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+ClNpZ25lZC1v
ZmYtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3
ZWQtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCiBk
cml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2ZpZm8uYyB8IDEzICsrKysrKysr
Ky0tLS0KIDEgZmlsZSBjaGFuZ2VkLCA5IGluc2VydGlvbnMoKyksIDQgZGVs
ZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2
ZW50c19maWZvLmMgYi9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2ZpZm8u
YwppbmRleCBjNjBlZTA0NTAxNzMuLjdmZDM5YzY0ZDRiNSAxMDA2NDQKLS0t
IGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19maWZvLmMKKysrIGIvZHJp
dmVycy94ZW4vZXZlbnRzL2V2ZW50c19maWZvLmMKQEAgLTIyNywxOSArMjI3
LDI1IEBAIHN0YXRpYyBib29sIGV2dGNobl9maWZvX2lzX21hc2tlZChldnRj
aG5fcG9ydF90IHBvcnQpCiAJcmV0dXJuIHN5bmNfdGVzdF9iaXQoRVZUQ0hO
X0ZJRk9fQklUKE1BU0tFRCwgd29yZCksIEJNKHdvcmQpKTsKIH0KIC8qCi0g
KiBDbGVhciBNQVNLRUQsIHNwaW5uaW5nIGlmIEJVU1kgaXMgc2V0LgorICog
Q2xlYXIgTUFTS0VEIGlmIG5vdCBQRU5ESU5HLCBzcGlubmluZyBpZiBCVVNZ
IGlzIHNldC4KKyAqIFJldHVybiB0cnVlIGlmIG1hc2sgd2FzIGNsZWFyZWQu
CiAgKi8KLXN0YXRpYyB2b2lkIGNsZWFyX21hc2tlZCh2b2xhdGlsZSBldmVu
dF93b3JkX3QgKndvcmQpCitzdGF0aWMgYm9vbCBjbGVhcl9tYXNrZWRfY29u
ZCh2b2xhdGlsZSBldmVudF93b3JkX3QgKndvcmQpCiB7CiAJZXZlbnRfd29y
ZF90IG5ldywgb2xkLCB3OwogCiAJdyA9ICp3b3JkOwogCiAJZG8geworCQlp
ZiAodyAmICgxIDw8IEVWVENITl9GSUZPX1BFTkRJTkcpKQorCQkJcmV0dXJu
IGZhbHNlOworCiAJCW9sZCA9IHcgJiB+KDEgPDwgRVZUQ0hOX0ZJRk9fQlVT
WSk7CiAJCW5ldyA9IG9sZCAmIH4oMSA8PCBFVlRDSE5fRklGT19NQVNLRUQp
OwogCQl3ID0gc3luY19jbXB4Y2hnKHdvcmQsIG9sZCwgbmV3KTsKIAl9IHdo
aWxlICh3ICE9IG9sZCk7CisKKwlyZXR1cm4gdHJ1ZTsKIH0KIAogc3RhdGlj
IHZvaWQgZXZ0Y2huX2ZpZm9fdW5tYXNrKGV2dGNobl9wb3J0X3QgcG9ydCkK
QEAgLTI0OCw4ICsyNTQsNyBAQCBzdGF0aWMgdm9pZCBldnRjaG5fZmlmb191
bm1hc2soZXZ0Y2huX3BvcnRfdCBwb3J0KQogCiAJQlVHX09OKCFpcnFzX2Rp
c2FibGVkKCkpOwogCi0JY2xlYXJfbWFza2VkKHdvcmQpOwotCWlmIChldnRj
aG5fZmlmb19pc19wZW5kaW5nKHBvcnQpKSB7CisJaWYgKCFjbGVhcl9tYXNr
ZWRfY29uZCh3b3JkKSkgewogCQlzdHJ1Y3QgZXZ0Y2huX3VubWFzayB1bm1h
c2sgPSB7IC5wb3J0ID0gcG9ydCB9OwogCQkodm9pZClIWVBFUlZJU09SX2V2
ZW50X2NoYW5uZWxfb3AoRVZUQ0hOT1BfdW5tYXNrLCAmdW5tYXNrKTsKIAl9
Ci0tIAoyLjI2LjIKCg==

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-03.patch"
Content-Disposition: attachment; filename="xsa332-linux-03.patch"
Content-Transfer-Encoding: base64

RnJvbSAyYjk0NTIwZTg5MzdkZjNjZmFjYjE4M2Y2MTc3NGQyN2UwZmZiNGM5
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyNyArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMDMvMTJdIHhlbi9ldmVudHM6
IGFkZCBhIG5ldyAibGF0ZSBFT0kiIGV2dGNobiBmcmFtZXdvcmsKCkluIG9y
ZGVyIHRvIGF2b2lkIHRpZ2h0IGV2ZW50IGNoYW5uZWwgcmVsYXRlZCBJUlEg
bG9vcHMgYWRkIGEgbmV3CmZyYW1ld29yayBvZiAibGF0ZSBFT0kiIGhhbmRs
aW5nOiB0aGUgSVJRIHRoZSBldmVudCBjaGFubmVsIGlzIGJvdW5kCnRvIHdp
bGwgYmUgbWFza2VkIHVudGlsIHRoZSBldmVudCBoYXMgYmVlbiBoYW5kbGVk
IGFuZCB0aGUgcmVsYXRlZApkcml2ZXIgaXMgY2FwYWJsZSB0byBoYW5kbGUg
YW5vdGhlciBldmVudC4gVGhlIGRyaXZlciBpcyByZXNwb25zaWJsZQpmb3Ig
dW5tYXNraW5nIHRoZSBldmVudCBjaGFubmVsIHZpYSB0aGUgbmV3IGZ1bmN0
aW9uIHhlbl9pcnFfbGF0ZWVvaSgpLgoKVGhpcyBpcyBzaW1pbGFyIHRvIGJp
bmRpbmcgYW4gZXZlbnQgY2hhbm5lbCB0byBhIHRocmVhZGVkIElSUSwgYnV0
CndpdGhvdXQgaGF2aW5nIHRvIHN0cnVjdHVyZSB0aGUgZHJpdmVyIGFjY29y
ZGluZ2x5LgoKSW4gb3JkZXIgdG8gc3VwcG9ydCBhIGZ1dHVyZSBzcGVjaWFs
IGhhbmRsaW5nIGluIGNhc2UgYSByb2d1ZSBndWVzdAppcyBzZW5kaW5nIGxv
dHMgb2YgdW5zb2xpY2l0ZWQgZXZlbnRzLCBhZGQgYSBmbGFnIHRvIHhlbl9p
cnFfbGF0ZWVvaSgpCndoaWNoIGNhbiBiZSBzZXQgYnkgdGhlIGNhbGxlciB0
byBpbmRpY2F0ZSB0aGUgZXZlbnQgd2FzIGEgc3B1cmlvdXMKb25lLgoKVGhp
cyBpcyBwYXJ0IG9mIFhTQS0zMzIuCgpDYzogc3RhYmxlQHZnZXIua2VybmVs
Lm9yZwpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9y
Zz4KU2lnbmVkLW9mZi1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2Uu
Y29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFiZWxsaW5pIDxzc3RhYmVs
bGluaUBrZXJuZWwub3JnPgpSZXZpZXdlZC1ieTogV2VpIExpdSA8d2xAeGVu
Lm9yZz4KLS0tCiBkcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYyB8
IDE1MSArKysrKysrKysrKysrKysrKysrKysrKysrKystLS0tCiBpbmNsdWRl
L3hlbi9ldmVudHMuaCAgICAgICAgICAgICB8ICAyMSArKysrKwogMiBmaWxl
cyBjaGFuZ2VkLCAxNTUgaW5zZXJ0aW9ucygrKSwgMTcgZGVsZXRpb25zKC0p
CgpkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19iYXNl
LmMgYi9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYwppbmRleCA0
MDc3NDFlY2UwODQuLjFlYmE4YmMyMDlhZCAxMDA2NDQKLS0tIGEvZHJpdmVy
cy94ZW4vZXZlbnRzL2V2ZW50c19iYXNlLmMKKysrIGIvZHJpdmVycy94ZW4v
ZXZlbnRzL2V2ZW50c19iYXNlLmMKQEAgLTExMyw2ICsxMTMsNyBAQCBzdGF0
aWMgYm9vbCAoKnBpcnFfbmVlZHNfZW9pKSh1bnNpZ25lZCBpcnEpOwogc3Rh
dGljIHN0cnVjdCBpcnFfaW5mbyAqbGVnYWN5X2luZm9fcHRyc1tOUl9JUlFT
X0xFR0FDWV07CiAKIHN0YXRpYyBzdHJ1Y3QgaXJxX2NoaXAgeGVuX2R5bmFt
aWNfY2hpcDsKK3N0YXRpYyBzdHJ1Y3QgaXJxX2NoaXAgeGVuX2xhdGVlb2lf
Y2hpcDsKIHN0YXRpYyBzdHJ1Y3QgaXJxX2NoaXAgeGVuX3BlcmNwdV9jaGlw
Owogc3RhdGljIHN0cnVjdCBpcnFfY2hpcCB4ZW5fcGlycV9jaGlwOwogc3Rh
dGljIHZvaWQgZW5hYmxlX2R5bmlycShzdHJ1Y3QgaXJxX2RhdGEgKmRhdGEp
OwpAQCAtMzk3LDYgKzM5OCwzMyBAQCB2b2lkIG5vdGlmeV9yZW1vdGVfdmlh
X2lycShpbnQgaXJxKQogfQogRVhQT1JUX1NZTUJPTF9HUEwobm90aWZ5X3Jl
bW90ZV92aWFfaXJxKTsKIAorc3RhdGljIHZvaWQgeGVuX2lycV9sYXRlZW9p
X2xvY2tlZChzdHJ1Y3QgaXJxX2luZm8gKmluZm8pCit7CisJZXZ0Y2huX3Bv
cnRfdCBldnRjaG47CisKKwlldnRjaG4gPSBpbmZvLT5ldnRjaG47CisJaWYg
KCFWQUxJRF9FVlRDSE4oZXZ0Y2huKSkKKwkJcmV0dXJuOworCisJdW5tYXNr
X2V2dGNobihldnRjaG4pOworfQorCit2b2lkIHhlbl9pcnFfbGF0ZWVvaSh1
bnNpZ25lZCBpbnQgaXJxLCB1bnNpZ25lZCBpbnQgZW9pX2ZsYWdzKQorewor
CXN0cnVjdCBpcnFfaW5mbyAqaW5mbzsKKwl1bnNpZ25lZCBsb25nIGZsYWdz
OworCisJcmVhZF9sb2NrX2lycXNhdmUoJmV2dGNobl9yd2xvY2ssIGZsYWdz
KTsKKworCWluZm8gPSBpbmZvX2Zvcl9pcnEoaXJxKTsKKworCWlmIChpbmZv
KQorCQl4ZW5faXJxX2xhdGVlb2lfbG9ja2VkKGluZm8pOworCisJcmVhZF91
bmxvY2tfaXJxcmVzdG9yZSgmZXZ0Y2huX3J3bG9jaywgZmxhZ3MpOworfQor
RVhQT1JUX1NZTUJPTF9HUEwoeGVuX2lycV9sYXRlZW9pKTsKKwogc3RhdGlj
IHZvaWQgeGVuX2lycV9pbml0KHVuc2lnbmVkIGlycSkKIHsKIAlzdHJ1Y3Qg
aXJxX2luZm8gKmluZm87CkBAIC04NjgsNyArODk2LDcgQEAgaW50IHhlbl9w
aXJxX2Zyb21faXJxKHVuc2lnbmVkIGlycSkKIH0KIEVYUE9SVF9TWU1CT0xf
R1BMKHhlbl9waXJxX2Zyb21faXJxKTsKIAotaW50IGJpbmRfZXZ0Y2huX3Rv
X2lycShldnRjaG5fcG9ydF90IGV2dGNobikKK3N0YXRpYyBpbnQgYmluZF9l
dnRjaG5fdG9faXJxX2NoaXAoZXZ0Y2huX3BvcnRfdCBldnRjaG4sIHN0cnVj
dCBpcnFfY2hpcCAqY2hpcCkKIHsKIAlpbnQgaXJxOwogCWludCByZXQ7CkBA
IC04ODUsNyArOTEzLDcgQEAgaW50IGJpbmRfZXZ0Y2huX3RvX2lycShldnRj
aG5fcG9ydF90IGV2dGNobikKIAkJaWYgKGlycSA8IDApCiAJCQlnb3RvIG91
dDsKIAotCQlpcnFfc2V0X2NoaXBfYW5kX2hhbmRsZXJfbmFtZShpcnEsICZ4
ZW5fZHluYW1pY19jaGlwLAorCQlpcnFfc2V0X2NoaXBfYW5kX2hhbmRsZXJf
bmFtZShpcnEsIGNoaXAsCiAJCQkJCSAgICAgIGhhbmRsZV9lZGdlX2lycSwg
ImV2ZW50Iik7CiAKIAkJcmV0ID0geGVuX2lycV9pbmZvX2V2dGNobl9zZXR1
cChpcnEsIGV2dGNobik7CkBAIC05MDYsOCArOTM0LDE5IEBAIGludCBiaW5k
X2V2dGNobl90b19pcnEoZXZ0Y2huX3BvcnRfdCBldnRjaG4pCiAKIAlyZXR1
cm4gaXJxOwogfQorCitpbnQgYmluZF9ldnRjaG5fdG9faXJxKGV2dGNobl9w
b3J0X3QgZXZ0Y2huKQoreworCXJldHVybiBiaW5kX2V2dGNobl90b19pcnFf
Y2hpcChldnRjaG4sICZ4ZW5fZHluYW1pY19jaGlwKTsKK30KIEVYUE9SVF9T
WU1CT0xfR1BMKGJpbmRfZXZ0Y2huX3RvX2lycSk7CiAKK2ludCBiaW5kX2V2
dGNobl90b19pcnFfbGF0ZWVvaShldnRjaG5fcG9ydF90IGV2dGNobikKK3sK
KwlyZXR1cm4gYmluZF9ldnRjaG5fdG9faXJxX2NoaXAoZXZ0Y2huLCAmeGVu
X2xhdGVlb2lfY2hpcCk7Cit9CitFWFBPUlRfU1lNQk9MX0dQTChiaW5kX2V2
dGNobl90b19pcnFfbGF0ZWVvaSk7CisKIHN0YXRpYyBpbnQgYmluZF9pcGlf
dG9faXJxKHVuc2lnbmVkIGludCBpcGksIHVuc2lnbmVkIGludCBjcHUpCiB7
CiAJc3RydWN0IGV2dGNobl9iaW5kX2lwaSBiaW5kX2lwaTsKQEAgLTk0OSw4
ICs5ODgsOSBAQCBzdGF0aWMgaW50IGJpbmRfaXBpX3RvX2lycSh1bnNpZ25l
ZCBpbnQgaXBpLCB1bnNpZ25lZCBpbnQgY3B1KQogCXJldHVybiBpcnE7CiB9
CiAKLWludCBiaW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnEodW5zaWdu
ZWQgaW50IHJlbW90ZV9kb21haW4sCi0JCQkJICAgZXZ0Y2huX3BvcnRfdCBy
ZW1vdGVfcG9ydCkKK3N0YXRpYyBpbnQgYmluZF9pbnRlcmRvbWFpbl9ldnRj
aG5fdG9faXJxX2NoaXAodW5zaWduZWQgaW50IHJlbW90ZV9kb21haW4sCisJ
CQkJCSAgICAgICBldnRjaG5fcG9ydF90IHJlbW90ZV9wb3J0LAorCQkJCQkg
ICAgICAgc3RydWN0IGlycV9jaGlwICpjaGlwKQogewogCXN0cnVjdCBldnRj
aG5fYmluZF9pbnRlcmRvbWFpbiBiaW5kX2ludGVyZG9tYWluOwogCWludCBl
cnI7CkBAIC05NjEsMTAgKzEwMDEsMjYgQEAgaW50IGJpbmRfaW50ZXJkb21h
aW5fZXZ0Y2huX3RvX2lycSh1bnNpZ25lZCBpbnQgcmVtb3RlX2RvbWFpbiwK
IAllcnIgPSBIWVBFUlZJU09SX2V2ZW50X2NoYW5uZWxfb3AoRVZUQ0hOT1Bf
YmluZF9pbnRlcmRvbWFpbiwKIAkJCQkJICAmYmluZF9pbnRlcmRvbWFpbik7
CiAKLQlyZXR1cm4gZXJyID8gOiBiaW5kX2V2dGNobl90b19pcnEoYmluZF9p
bnRlcmRvbWFpbi5sb2NhbF9wb3J0KTsKKwlyZXR1cm4gZXJyID8gOiBiaW5k
X2V2dGNobl90b19pcnFfY2hpcChiaW5kX2ludGVyZG9tYWluLmxvY2FsX3Bv
cnQsCisJCQkJCSAgICAgICBjaGlwKTsKK30KKworaW50IGJpbmRfaW50ZXJk
b21haW5fZXZ0Y2huX3RvX2lycSh1bnNpZ25lZCBpbnQgcmVtb3RlX2RvbWFp
biwKKwkJCQkgICBldnRjaG5fcG9ydF90IHJlbW90ZV9wb3J0KQoreworCXJl
dHVybiBiaW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnFfY2hpcChyZW1v
dGVfZG9tYWluLCByZW1vdGVfcG9ydCwKKwkJCQkJCSAgICZ4ZW5fZHluYW1p
Y19jaGlwKTsKIH0KIEVYUE9SVF9TWU1CT0xfR1BMKGJpbmRfaW50ZXJkb21h
aW5fZXZ0Y2huX3RvX2lycSk7CiAKK2ludCBiaW5kX2ludGVyZG9tYWluX2V2
dGNobl90b19pcnFfbGF0ZWVvaSh1bnNpZ25lZCBpbnQgcmVtb3RlX2RvbWFp
biwKKwkJCQkJICAgZXZ0Y2huX3BvcnRfdCByZW1vdGVfcG9ydCkKK3sKKwly
ZXR1cm4gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJxX2NoaXAocmVt
b3RlX2RvbWFpbiwgcmVtb3RlX3BvcnQsCisJCQkJCQkgICAmeGVuX2xhdGVl
b2lfY2hpcCk7Cit9CitFWFBPUlRfU1lNQk9MX0dQTChiaW5kX2ludGVyZG9t
YWluX2V2dGNobl90b19pcnFfbGF0ZWVvaSk7CisKIHN0YXRpYyBpbnQgZmlu
ZF92aXJxKHVuc2lnbmVkIGludCB2aXJxLCB1bnNpZ25lZCBpbnQgY3B1LCBl
dnRjaG5fcG9ydF90ICpldnRjaG4pCiB7CiAJc3RydWN0IGV2dGNobl9zdGF0
dXMgc3RhdHVzOwpAQCAtMTA2MSwxNCArMTExNywxNSBAQCBzdGF0aWMgdm9p
ZCB1bmJpbmRfZnJvbV9pcnEodW5zaWduZWQgaW50IGlycSkKIAltdXRleF91
bmxvY2soJmlycV9tYXBwaW5nX3VwZGF0ZV9sb2NrKTsKIH0KIAotaW50IGJp
bmRfZXZ0Y2huX3RvX2lycWhhbmRsZXIoZXZ0Y2huX3BvcnRfdCBldnRjaG4s
Ci0JCQkgICAgICBpcnFfaGFuZGxlcl90IGhhbmRsZXIsCi0JCQkgICAgICB1
bnNpZ25lZCBsb25nIGlycWZsYWdzLAotCQkJICAgICAgY29uc3QgY2hhciAq
ZGV2bmFtZSwgdm9pZCAqZGV2X2lkKQorc3RhdGljIGludCBiaW5kX2V2dGNo
bl90b19pcnFoYW5kbGVyX2NoaXAoZXZ0Y2huX3BvcnRfdCBldnRjaG4sCisJ
CQkJCSAgaXJxX2hhbmRsZXJfdCBoYW5kbGVyLAorCQkJCQkgIHVuc2lnbmVk
IGxvbmcgaXJxZmxhZ3MsCisJCQkJCSAgY29uc3QgY2hhciAqZGV2bmFtZSwg
dm9pZCAqZGV2X2lkLAorCQkJCQkgIHN0cnVjdCBpcnFfY2hpcCAqY2hpcCkK
IHsKIAlpbnQgaXJxLCByZXR2YWw7CiAKLQlpcnEgPSBiaW5kX2V2dGNobl90
b19pcnEoZXZ0Y2huKTsKKwlpcnEgPSBiaW5kX2V2dGNobl90b19pcnFfY2hp
cChldnRjaG4sIGNoaXApOwogCWlmIChpcnEgPCAwKQogCQlyZXR1cm4gaXJx
OwogCXJldHZhbCA9IHJlcXVlc3RfaXJxKGlycSwgaGFuZGxlciwgaXJxZmxh
Z3MsIGRldm5hbWUsIGRldl9pZCk7CkBAIC0xMDc5LDE4ICsxMTM2LDM4IEBA
IGludCBiaW5kX2V2dGNobl90b19pcnFoYW5kbGVyKGV2dGNobl9wb3J0X3Qg
ZXZ0Y2huLAogCiAJcmV0dXJuIGlycTsKIH0KKworaW50IGJpbmRfZXZ0Y2hu
X3RvX2lycWhhbmRsZXIoZXZ0Y2huX3BvcnRfdCBldnRjaG4sCisJCQkgICAg
ICBpcnFfaGFuZGxlcl90IGhhbmRsZXIsCisJCQkgICAgICB1bnNpZ25lZCBs
b25nIGlycWZsYWdzLAorCQkJICAgICAgY29uc3QgY2hhciAqZGV2bmFtZSwg
dm9pZCAqZGV2X2lkKQoreworCXJldHVybiBiaW5kX2V2dGNobl90b19pcnFo
YW5kbGVyX2NoaXAoZXZ0Y2huLCBoYW5kbGVyLCBpcnFmbGFncywKKwkJCQkJ
ICAgICAgZGV2bmFtZSwgZGV2X2lkLAorCQkJCQkgICAgICAmeGVuX2R5bmFt
aWNfY2hpcCk7Cit9CiBFWFBPUlRfU1lNQk9MX0dQTChiaW5kX2V2dGNobl90
b19pcnFoYW5kbGVyKTsKIAotaW50IGJpbmRfaW50ZXJkb21haW5fZXZ0Y2hu
X3RvX2lycWhhbmRsZXIodW5zaWduZWQgaW50IHJlbW90ZV9kb21haW4sCi0J
CQkJCSAgZXZ0Y2huX3BvcnRfdCByZW1vdGVfcG9ydCwKLQkJCQkJICBpcnFf
aGFuZGxlcl90IGhhbmRsZXIsCi0JCQkJCSAgdW5zaWduZWQgbG9uZyBpcnFm
bGFncywKLQkJCQkJICBjb25zdCBjaGFyICpkZXZuYW1lLAotCQkJCQkgIHZv
aWQgKmRldl9pZCkKK2ludCBiaW5kX2V2dGNobl90b19pcnFoYW5kbGVyX2xh
dGVlb2koZXZ0Y2huX3BvcnRfdCBldnRjaG4sCisJCQkJICAgICAgaXJxX2hh
bmRsZXJfdCBoYW5kbGVyLAorCQkJCSAgICAgIHVuc2lnbmVkIGxvbmcgaXJx
ZmxhZ3MsCisJCQkJICAgICAgY29uc3QgY2hhciAqZGV2bmFtZSwgdm9pZCAq
ZGV2X2lkKQoreworCXJldHVybiBiaW5kX2V2dGNobl90b19pcnFoYW5kbGVy
X2NoaXAoZXZ0Y2huLCBoYW5kbGVyLCBpcnFmbGFncywKKwkJCQkJICAgICAg
ZGV2bmFtZSwgZGV2X2lkLAorCQkJCQkgICAgICAmeGVuX2xhdGVlb2lfY2hp
cCk7Cit9CitFWFBPUlRfU1lNQk9MX0dQTChiaW5kX2V2dGNobl90b19pcnFo
YW5kbGVyX2xhdGVlb2kpOworCitzdGF0aWMgaW50IGJpbmRfaW50ZXJkb21h
aW5fZXZ0Y2huX3RvX2lycWhhbmRsZXJfY2hpcCgKKwkJdW5zaWduZWQgaW50
IHJlbW90ZV9kb21haW4sIGV2dGNobl9wb3J0X3QgcmVtb3RlX3BvcnQsCisJ
CWlycV9oYW5kbGVyX3QgaGFuZGxlciwgdW5zaWduZWQgbG9uZyBpcnFmbGFn
cywKKwkJY29uc3QgY2hhciAqZGV2bmFtZSwgdm9pZCAqZGV2X2lkLCBzdHJ1
Y3QgaXJxX2NoaXAgKmNoaXApCiB7CiAJaW50IGlycSwgcmV0dmFsOwogCi0J
aXJxID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJxKHJlbW90ZV9k
b21haW4sIHJlbW90ZV9wb3J0KTsKKwlpcnEgPSBiaW5kX2ludGVyZG9tYWlu
X2V2dGNobl90b19pcnFfY2hpcChyZW1vdGVfZG9tYWluLCByZW1vdGVfcG9y
dCwKKwkJCQkJCSAgY2hpcCk7CiAJaWYgKGlycSA8IDApCiAJCXJldHVybiBp
cnE7CiAKQEAgLTExMDIsOCArMTE3OSwzMyBAQCBpbnQgYmluZF9pbnRlcmRv
bWFpbl9ldnRjaG5fdG9faXJxaGFuZGxlcih1bnNpZ25lZCBpbnQgcmVtb3Rl
X2RvbWFpbiwKIAogCXJldHVybiBpcnE7CiB9CisKK2ludCBiaW5kX2ludGVy
ZG9tYWluX2V2dGNobl90b19pcnFoYW5kbGVyKHVuc2lnbmVkIGludCByZW1v
dGVfZG9tYWluLAorCQkJCQkgIGV2dGNobl9wb3J0X3QgcmVtb3RlX3BvcnQs
CisJCQkJCSAgaXJxX2hhbmRsZXJfdCBoYW5kbGVyLAorCQkJCQkgIHVuc2ln
bmVkIGxvbmcgaXJxZmxhZ3MsCisJCQkJCSAgY29uc3QgY2hhciAqZGV2bmFt
ZSwKKwkJCQkJICB2b2lkICpkZXZfaWQpCit7CisJcmV0dXJuIGJpbmRfaW50
ZXJkb21haW5fZXZ0Y2huX3RvX2lycWhhbmRsZXJfY2hpcChyZW1vdGVfZG9t
YWluLAorCQkJCXJlbW90ZV9wb3J0LCBoYW5kbGVyLCBpcnFmbGFncywgZGV2
bmFtZSwKKwkJCQlkZXZfaWQsICZ4ZW5fZHluYW1pY19jaGlwKTsKK30KIEVY
UE9SVF9TWU1CT0xfR1BMKGJpbmRfaW50ZXJkb21haW5fZXZ0Y2huX3RvX2ly
cWhhbmRsZXIpOwogCitpbnQgYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9f
aXJxaGFuZGxlcl9sYXRlZW9pKHVuc2lnbmVkIGludCByZW1vdGVfZG9tYWlu
LAorCQkJCQkJICBldnRjaG5fcG9ydF90IHJlbW90ZV9wb3J0LAorCQkJCQkJ
ICBpcnFfaGFuZGxlcl90IGhhbmRsZXIsCisJCQkJCQkgIHVuc2lnbmVkIGxv
bmcgaXJxZmxhZ3MsCisJCQkJCQkgIGNvbnN0IGNoYXIgKmRldm5hbWUsCisJ
CQkJCQkgIHZvaWQgKmRldl9pZCkKK3sKKwlyZXR1cm4gYmluZF9pbnRlcmRv
bWFpbl9ldnRjaG5fdG9faXJxaGFuZGxlcl9jaGlwKHJlbW90ZV9kb21haW4s
CisJCQkJcmVtb3RlX3BvcnQsIGhhbmRsZXIsIGlycWZsYWdzLCBkZXZuYW1l
LAorCQkJCWRldl9pZCwgJnhlbl9sYXRlZW9pX2NoaXApOworfQorRVhQT1JU
X1NZTUJPTF9HUEwoYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJxaGFu
ZGxlcl9sYXRlZW9pKTsKKwogaW50IGJpbmRfdmlycV90b19pcnFoYW5kbGVy
KHVuc2lnbmVkIGludCB2aXJxLCB1bnNpZ25lZCBpbnQgY3B1LAogCQkJICAg
IGlycV9oYW5kbGVyX3QgaGFuZGxlciwKIAkJCSAgICB1bnNpZ25lZCBsb25n
IGlycWZsYWdzLCBjb25zdCBjaGFyICpkZXZuYW1lLCB2b2lkICpkZXZfaWQp
CkBAIC0xNjM3LDYgKzE3MzksMjEgQEAgc3RhdGljIHN0cnVjdCBpcnFfY2hp
cCB4ZW5fZHluYW1pY19jaGlwIF9fcmVhZF9tb3N0bHkgPSB7CiAJLmlycV9y
ZXRyaWdnZXIJCT0gcmV0cmlnZ2VyX2R5bmlycSwKIH07CiAKK3N0YXRpYyBz
dHJ1Y3QgaXJxX2NoaXAgeGVuX2xhdGVlb2lfY2hpcCBfX3JlYWRfbW9zdGx5
ID0geworCS8qIFRoZSBjaGlwIG5hbWUgbmVlZHMgdG8gY29udGFpbiAieGVu
LWR5biIgZm9yIGlycWJhbGFuY2UgdG8gd29yay4gKi8KKwkubmFtZQkJCT0g
Inhlbi1keW4tbGF0ZWVvaSIsCisKKwkuaXJxX2Rpc2FibGUJCT0gZGlzYWJs
ZV9keW5pcnEsCisJLmlycV9tYXNrCQk9IGRpc2FibGVfZHluaXJxLAorCS5p
cnFfdW5tYXNrCQk9IGVuYWJsZV9keW5pcnEsCisKKwkuaXJxX2FjawkJPSBt
YXNrX2Fja19keW5pcnEsCisJLmlycV9tYXNrX2FjawkJPSBtYXNrX2Fja19k
eW5pcnEsCisKKwkuaXJxX3NldF9hZmZpbml0eQk9IHNldF9hZmZpbml0eV9p
cnEsCisJLmlycV9yZXRyaWdnZXIJCT0gcmV0cmlnZ2VyX2R5bmlycSwKK307
CisKIHN0YXRpYyBzdHJ1Y3QgaXJxX2NoaXAgeGVuX3BpcnFfY2hpcCBfX3Jl
YWRfbW9zdGx5ID0gewogCS5uYW1lCQkJPSAieGVuLXBpcnEiLAogCmRpZmYg
LS1naXQgYS9pbmNsdWRlL3hlbi9ldmVudHMuaCBiL2luY2x1ZGUveGVuL2V2
ZW50cy5oCmluZGV4IGRmMWU2MzkxZjYzZi4uM2I4MTU1YzJlYTAzIDEwMDY0
NAotLS0gYS9pbmNsdWRlL3hlbi9ldmVudHMuaAorKysgYi9pbmNsdWRlL3hl
bi9ldmVudHMuaApAQCAtMTUsMTAgKzE1LDE1IEBACiB1bnNpZ25lZCB4ZW5f
ZXZ0Y2huX25yX2NoYW5uZWxzKHZvaWQpOwogCiBpbnQgYmluZF9ldnRjaG5f
dG9faXJxKGV2dGNobl9wb3J0X3QgZXZ0Y2huKTsKK2ludCBiaW5kX2V2dGNo
bl90b19pcnFfbGF0ZWVvaShldnRjaG5fcG9ydF90IGV2dGNobik7CiBpbnQg
YmluZF9ldnRjaG5fdG9faXJxaGFuZGxlcihldnRjaG5fcG9ydF90IGV2dGNo
biwKIAkJCSAgICAgIGlycV9oYW5kbGVyX3QgaGFuZGxlciwKIAkJCSAgICAg
IHVuc2lnbmVkIGxvbmcgaXJxZmxhZ3MsIGNvbnN0IGNoYXIgKmRldm5hbWUs
CiAJCQkgICAgICB2b2lkICpkZXZfaWQpOworaW50IGJpbmRfZXZ0Y2huX3Rv
X2lycWhhbmRsZXJfbGF0ZWVvaShldnRjaG5fcG9ydF90IGV2dGNobiwKKwkJ
CSAgICAgIGlycV9oYW5kbGVyX3QgaGFuZGxlciwKKwkJCSAgICAgIHVuc2ln
bmVkIGxvbmcgaXJxZmxhZ3MsIGNvbnN0IGNoYXIgKmRldm5hbWUsCisJCQkg
ICAgICB2b2lkICpkZXZfaWQpOwogaW50IGJpbmRfdmlycV90b19pcnEodW5z
aWduZWQgaW50IHZpcnEsIHVuc2lnbmVkIGludCBjcHUsIGJvb2wgcGVyY3B1
KTsKIGludCBiaW5kX3ZpcnFfdG9faXJxaGFuZGxlcih1bnNpZ25lZCBpbnQg
dmlycSwgdW5zaWduZWQgaW50IGNwdSwKIAkJCSAgICBpcnFfaGFuZGxlcl90
IGhhbmRsZXIsCkBAIC0zMiwxMiArMzcsMjAgQEAgaW50IGJpbmRfaXBpX3Rv
X2lycWhhbmRsZXIoZW51bSBpcGlfdmVjdG9yIGlwaSwKIAkJCSAgIHZvaWQg
KmRldl9pZCk7CiBpbnQgYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJx
KHVuc2lnbmVkIGludCByZW1vdGVfZG9tYWluLAogCQkJCSAgIGV2dGNobl9w
b3J0X3QgcmVtb3RlX3BvcnQpOworaW50IGJpbmRfaW50ZXJkb21haW5fZXZ0
Y2huX3RvX2lycV9sYXRlZW9pKHVuc2lnbmVkIGludCByZW1vdGVfZG9tYWlu
LAorCQkJCQkgICBldnRjaG5fcG9ydF90IHJlbW90ZV9wb3J0KTsKIGludCBi
aW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnFoYW5kbGVyKHVuc2lnbmVk
IGludCByZW1vdGVfZG9tYWluLAogCQkJCQkgIGV2dGNobl9wb3J0X3QgcmVt
b3RlX3BvcnQsCiAJCQkJCSAgaXJxX2hhbmRsZXJfdCBoYW5kbGVyLAogCQkJ
CQkgIHVuc2lnbmVkIGxvbmcgaXJxZmxhZ3MsCiAJCQkJCSAgY29uc3QgY2hh
ciAqZGV2bmFtZSwKIAkJCQkJICB2b2lkICpkZXZfaWQpOworaW50IGJpbmRf
aW50ZXJkb21haW5fZXZ0Y2huX3RvX2lycWhhbmRsZXJfbGF0ZWVvaSh1bnNp
Z25lZCBpbnQgcmVtb3RlX2RvbWFpbiwKKwkJCQkJCSAgZXZ0Y2huX3BvcnRf
dCByZW1vdGVfcG9ydCwKKwkJCQkJCSAgaXJxX2hhbmRsZXJfdCBoYW5kbGVy
LAorCQkJCQkJICB1bnNpZ25lZCBsb25nIGlycWZsYWdzLAorCQkJCQkJICBj
b25zdCBjaGFyICpkZXZuYW1lLAorCQkJCQkJICB2b2lkICpkZXZfaWQpOwog
CiAvKgogICogQ29tbW9uIHVuYmluZCBmdW5jdGlvbiBmb3IgYWxsIGV2ZW50
IHNvdXJjZXMuIFRha2VzIElSUSB0byB1bmJpbmQgZnJvbS4KQEAgLTQ2LDYg
KzU5LDE0IEBAIGludCBiaW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnFo
YW5kbGVyKHVuc2lnbmVkIGludCByZW1vdGVfZG9tYWluLAogICovCiB2b2lk
IHVuYmluZF9mcm9tX2lycWhhbmRsZXIodW5zaWduZWQgaW50IGlycSwgdm9p
ZCAqZGV2X2lkKTsKIAorLyoKKyAqIFNlbmQgbGF0ZSBFT0kgZm9yIGFuIElS
USBib3VuZCB0byBhbiBldmVudCBjaGFubmVsIHZpYSBvbmUgb2YgdGhlICpf
bGF0ZWVvaQorICogZnVuY3Rpb25zIGFib3ZlLgorICovCit2b2lkIHhlbl9p
cnFfbGF0ZWVvaSh1bnNpZ25lZCBpbnQgaXJxLCB1bnNpZ25lZCBpbnQgZW9p
X2ZsYWdzKTsKKy8qIFNpZ25hbCBhbiBldmVudCB3YXMgc3B1cmlvdXMsIGku
ZS4gdGhlcmUgd2FzIG5vIGFjdGlvbiByZXN1bHRpbmcgZnJvbSBpdC4gKi8K
KyNkZWZpbmUgWEVOX0VPSV9GTEFHX1NQVVJJT1VTCTB4MDAwMDAwMDEKKwog
I2RlZmluZSBYRU5fSVJRX1BSSU9SSVRZX01BWCAgICAgRVZUQ0hOX0ZJRk9f
UFJJT1JJVFlfTUFYCiAjZGVmaW5lIFhFTl9JUlFfUFJJT1JJVFlfREVGQVVM
VCBFVlRDSE5fRklGT19QUklPUklUWV9ERUZBVUxUCiAjZGVmaW5lIFhFTl9J
UlFfUFJJT1JJVFlfTUlOICAgICBFVlRDSE5fRklGT19QUklPUklUWV9NSU4K
LS0gCjIuMjYuMgoK

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-04.patch"
Content-Disposition: attachment; filename="xsa332-linux-04.patch"
Content-Transfer-Encoding: base64

RnJvbSBjMmYxNzY0YTZhZTI1YzJlZmEwNDhlN2YyZTA3MDI0NzM5ZTk2OWQ2
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyNyArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMDQvMTJdIHhlbi9ibGtiYWNr
OiB1c2UgbGF0ZWVvaSBpcnEgYmluZGluZwoKSW4gb3JkZXIgdG8gcmVkdWNl
IHRoZSBjaGFuY2UgZm9yIHRoZSBzeXN0ZW0gYmVjb21pbmcgdW5yZXNwb25z
aXZlIGR1ZQp0byBldmVudCBzdG9ybXMgdHJpZ2dlcmVkIGJ5IGEgbWlzYmVo
YXZpbmcgYmxrZnJvbnQgdXNlIHRoZSBsYXRlZW9pCmlycSBiaW5kaW5nIGZv
ciBibGtiYWNrIGFuZCB1bm1hc2sgdGhlIGV2ZW50IGNoYW5uZWwgb25seSBh
ZnRlcgpwcm9jZXNzaW5nIGFsbCBwZW5kaW5nIHJlcXVlc3RzLgoKQXMgdGhl
IHRocmVhZCBwcm9jZXNzaW5nIHJlcXVlc3RzIGlzIHVzZWQgdG8gZG8gcHVy
Z2luZyB3b3JrIGluIHJlZ3VsYXIKaW50ZXJ2YWxzIGFuIEVPSSBtYXkgYmUg
c2VudCBvbmx5IGFmdGVyIGhhdmluZyByZWNlaXZlZCBhbiBldmVudC4gSWYK
dGhlcmUgd2FzIG5vIHBlbmRpbmcgSS9PIHJlcXVlc3QgZmxhZyB0aGUgRU9J
IGFzIHNwdXJpb3VzLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMzIuCgpDYzog
c3RhYmxlQHZnZXIua2VybmVsLm9yZwpSZXBvcnRlZC1ieTogSnVsaWVuIEdy
YWxsIDxqdWxpZW5AeGVuLm9yZz4KU2lnbmVkLW9mZi1ieTogSnVlcmdlbiBH
cm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogSmFuIEJldWxp
Y2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogV2VpIExpdSA8
d2xAeGVuLm9yZz4KLS0tCiBkcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Js
a2JhY2suYyB8IDIyICsrKysrKysrKysrKysrKysrLS0tLS0KIGRyaXZlcnMv
YmxvY2sveGVuLWJsa2JhY2sveGVuYnVzLmMgIHwgIDUgKystLS0KIDIgZmls
ZXMgY2hhbmdlZCwgMTkgaW5zZXJ0aW9ucygrKSwgOCBkZWxldGlvbnMoLSkK
CmRpZmYgLS1naXQgYS9kcml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2Jh
Y2suYyBiL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2svYmxrYmFjay5jCmlu
ZGV4IGFkZmM5MzUyMzUxZC4uNTAxZTlkYWNmZmY5IDEwMDY0NAotLS0gYS9k
cml2ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYworKysgYi9kcml2
ZXJzL2Jsb2NrL3hlbi1ibGtiYWNrL2Jsa2JhY2suYwpAQCAtMjAxLDcgKzIw
MSw3IEBAIHN0YXRpYyBpbmxpbmUgdm9pZCBzaHJpbmtfZnJlZV9wYWdlcG9v
bChzdHJ1Y3QgeGVuX2Jsa2lmX3JpbmcgKnJpbmcsIGludCBudW0pCiAKICNk
ZWZpbmUgdmFkZHIocGFnZSkgKCh1bnNpZ25lZCBsb25nKXBmbl90b19rYWRk
cihwYWdlX3RvX3BmbihwYWdlKSkpCiAKLXN0YXRpYyBpbnQgZG9fYmxvY2tf
aW9fb3Aoc3RydWN0IHhlbl9ibGtpZl9yaW5nICpyaW5nKTsKK3N0YXRpYyBp
bnQgZG9fYmxvY2tfaW9fb3Aoc3RydWN0IHhlbl9ibGtpZl9yaW5nICpyaW5n
LCB1bnNpZ25lZCBpbnQgKmVvaV9mbGFncyk7CiBzdGF0aWMgaW50IGRpc3Bh
dGNoX3J3X2Jsb2NrX2lvKHN0cnVjdCB4ZW5fYmxraWZfcmluZyAqcmluZywK
IAkJCQlzdHJ1Y3QgYmxraWZfcmVxdWVzdCAqcmVxLAogCQkJCXN0cnVjdCBw
ZW5kaW5nX3JlcSAqcGVuZGluZ19yZXEpOwpAQCAtNjEyLDYgKzYxMiw4IEBA
IGludCB4ZW5fYmxraWZfc2NoZWR1bGUodm9pZCAqYXJnKQogCXN0cnVjdCB4
ZW5fdmJkICp2YmQgPSAmYmxraWYtPnZiZDsKIAl1bnNpZ25lZCBsb25nIHRp
bWVvdXQ7CiAJaW50IHJldDsKKwlib29sIGRvX2VvaTsKKwl1bnNpZ25lZCBp
bnQgZW9pX2ZsYWdzID0gWEVOX0VPSV9GTEFHX1NQVVJJT1VTOwogCiAJc2V0
X2ZyZWV6YWJsZSgpOwogCXdoaWxlICgha3RocmVhZF9zaG91bGRfc3RvcCgp
KSB7CkBAIC02MzYsMTYgKzYzOCwyMyBAQCBpbnQgeGVuX2Jsa2lmX3NjaGVk
dWxlKHZvaWQgKmFyZykKIAkJaWYgKHRpbWVvdXQgPT0gMCkKIAkJCWdvdG8g
cHVyZ2VfZ250X2xpc3Q7CiAKKwkJZG9fZW9pID0gcmluZy0+d2FpdGluZ19y
ZXFzOworCiAJCXJpbmctPndhaXRpbmdfcmVxcyA9IDA7CiAJCXNtcF9tYigp
OyAvKiBjbGVhciBmbGFnICpiZWZvcmUqIGNoZWNraW5nIGZvciB3b3JrICov
CiAKLQkJcmV0ID0gZG9fYmxvY2tfaW9fb3AocmluZyk7CisJCXJldCA9IGRv
X2Jsb2NrX2lvX29wKHJpbmcsICZlb2lfZmxhZ3MpOwogCQlpZiAocmV0ID4g
MCkKIAkJCXJpbmctPndhaXRpbmdfcmVxcyA9IDE7CiAJCWlmIChyZXQgPT0g
LUVBQ0NFUykKIAkJCXdhaXRfZXZlbnRfaW50ZXJydXB0aWJsZShyaW5nLT5z
aHV0ZG93bl93cSwKIAkJCQkJCSBrdGhyZWFkX3Nob3VsZF9zdG9wKCkpOwog
CisJCWlmIChkb19lb2kgJiYgIXJpbmctPndhaXRpbmdfcmVxcykgeworCQkJ
eGVuX2lycV9sYXRlZW9pKHJpbmctPmlycSwgZW9pX2ZsYWdzKTsKKwkJCWVv
aV9mbGFncyB8PSBYRU5fRU9JX0ZMQUdfU1BVUklPVVM7CisJCX0KKwogcHVy
Z2VfZ250X2xpc3Q6CiAJCWlmIChibGtpZi0+dmJkLmZlYXR1cmVfZ250X3Bl
cnNpc3RlbnQgJiYKIAkJICAgIHRpbWVfYWZ0ZXIoamlmZmllcywgcmluZy0+
bmV4dF9scnUpKSB7CkBAIC0xMTIxLDcgKzExMzAsNyBAQCBzdGF0aWMgdm9p
ZCBlbmRfYmxvY2tfaW9fb3Aoc3RydWN0IGJpbyAqYmlvKQogICogYW5kIHRy
YW5zbXV0ZSAgaXQgdG8gdGhlIGJsb2NrIEFQSSB0byBoYW5kIGl0IG92ZXIg
dG8gdGhlIHByb3BlciBibG9jayBkaXNrLgogICovCiBzdGF0aWMgaW50Ci1f
X2RvX2Jsb2NrX2lvX29wKHN0cnVjdCB4ZW5fYmxraWZfcmluZyAqcmluZykK
K19fZG9fYmxvY2tfaW9fb3Aoc3RydWN0IHhlbl9ibGtpZl9yaW5nICpyaW5n
LCB1bnNpZ25lZCBpbnQgKmVvaV9mbGFncykKIHsKIAl1bmlvbiBibGtpZl9i
YWNrX3JpbmdzICpibGtfcmluZ3MgPSAmcmluZy0+YmxrX3JpbmdzOwogCXN0
cnVjdCBibGtpZl9yZXF1ZXN0IHJlcTsKQEAgLTExNDQsNiArMTE1Myw5IEBA
IF9fZG9fYmxvY2tfaW9fb3Aoc3RydWN0IHhlbl9ibGtpZl9yaW5nICpyaW5n
KQogCQlpZiAoUklOR19SRVFVRVNUX0NPTlNfT1ZFUkZMT1coJmJsa19yaW5n
cy0+Y29tbW9uLCByYykpCiAJCQlicmVhazsKIAorCQkvKiBXZSd2ZSBzZWVu
IGEgcmVxdWVzdCwgc28gY2xlYXIgc3B1cmlvdXMgZW9pIGZsYWcuICovCisJ
CSplb2lfZmxhZ3MgJj0gflhFTl9FT0lfRkxBR19TUFVSSU9VUzsKKwogCQlp
ZiAoa3RocmVhZF9zaG91bGRfc3RvcCgpKSB7CiAJCQltb3JlX3RvX2RvID0g
MTsKIAkJCWJyZWFrOwpAQCAtMTIwMiwxMyArMTIxNCwxMyBAQCBfX2RvX2Js
b2NrX2lvX29wKHN0cnVjdCB4ZW5fYmxraWZfcmluZyAqcmluZykKIH0KIAog
c3RhdGljIGludAotZG9fYmxvY2tfaW9fb3Aoc3RydWN0IHhlbl9ibGtpZl9y
aW5nICpyaW5nKQorZG9fYmxvY2tfaW9fb3Aoc3RydWN0IHhlbl9ibGtpZl9y
aW5nICpyaW5nLCB1bnNpZ25lZCBpbnQgKmVvaV9mbGFncykKIHsKIAl1bmlv
biBibGtpZl9iYWNrX3JpbmdzICpibGtfcmluZ3MgPSAmcmluZy0+YmxrX3Jp
bmdzOwogCWludCBtb3JlX3RvX2RvOwogCiAJZG8gewotCQltb3JlX3RvX2Rv
ID0gX19kb19ibG9ja19pb19vcChyaW5nKTsKKwkJbW9yZV90b19kbyA9IF9f
ZG9fYmxvY2tfaW9fb3AocmluZywgZW9pX2ZsYWdzKTsKIAkJaWYgKG1vcmVf
dG9fZG8pCiAJCQlicmVhazsKIApkaWZmIC0tZ2l0IGEvZHJpdmVycy9ibG9j
ay94ZW4tYmxrYmFjay94ZW5idXMuYyBiL2RyaXZlcnMvYmxvY2sveGVuLWJs
a2JhY2sveGVuYnVzLmMKaW5kZXggYjlhYTVkMWFjMTBiLi41ZTdjMzZkNzNk
YzYgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMvYmxvY2sveGVuLWJsa2JhY2sveGVu
YnVzLmMKKysrIGIvZHJpdmVycy9ibG9jay94ZW4tYmxrYmFjay94ZW5idXMu
YwpAQCAtMjQ2LDkgKzI0Niw4IEBAIHN0YXRpYyBpbnQgeGVuX2Jsa2lmX21h
cChzdHJ1Y3QgeGVuX2Jsa2lmX3JpbmcgKnJpbmcsIGdyYW50X3JlZl90ICpn
cmVmLAogCWlmIChyZXFfcHJvZCAtIHJzcF9wcm9kID4gc2l6ZSkKIAkJZ290
byBmYWlsOwogCi0JZXJyID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9f
aXJxaGFuZGxlcihibGtpZi0+ZG9taWQsIGV2dGNobiwKLQkJCQkJCSAgICB4
ZW5fYmxraWZfYmVfaW50LCAwLAotCQkJCQkJICAgICJibGtpZi1iYWNrZW5k
IiwgcmluZyk7CisJZXJyID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9f
aXJxaGFuZGxlcl9sYXRlZW9pKGJsa2lmLT5kb21pZCwKKwkJCWV2dGNobiwg
eGVuX2Jsa2lmX2JlX2ludCwgMCwgImJsa2lmLWJhY2tlbmQiLCByaW5nKTsK
IAlpZiAoZXJyIDwgMCkKIAkJZ290byBmYWlsOwogCXJpbmctPmlycSA9IGVy
cjsKLS0gCjIuMjYuMgoK

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-05.patch"
Content-Disposition: attachment; filename="xsa332-linux-05.patch"
Content-Transfer-Encoding: base64

RnJvbSAxODNhYTEzNjZlY2UxNjZlZDg0YzM3ZmVhYTBiNGY4NjIzMWQyYmJk
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyOCArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMDUvMTJdIHhlbi9uZXRiYWNr
OiB1c2UgbGF0ZWVvaSBpcnEgYmluZGluZwoKSW4gb3JkZXIgdG8gcmVkdWNl
IHRoZSBjaGFuY2UgZm9yIHRoZSBzeXN0ZW0gYmVjb21pbmcgdW5yZXNwb25z
aXZlIGR1ZQp0byBldmVudCBzdG9ybXMgdHJpZ2dlcmVkIGJ5IGEgbWlzYmVo
YXZpbmcgbmV0ZnJvbnQgdXNlIHRoZSBsYXRlZW9pCmlycSBiaW5kaW5nIGZv
ciBuZXRiYWNrIGFuZCB1bm1hc2sgdGhlIGV2ZW50IGNoYW5uZWwgb25seSBq
dXN0IGJlZm9yZQpnb2luZyB0byBzbGVlcCB3YWl0aW5nIGZvciBuZXcgZXZl
bnRzLgoKTWFrZSBzdXJlIG5vdCB0byBpc3N1ZSBhbiBFT0kgd2hlbiBub25l
IGlzIHBlbmRpbmcgYnkgaW50cm9kdWNpbmcgYW4KZW9pX3BlbmRpbmcgZWxl
bWVudCB0byBzdHJ1Y3QgeGVudmlmX3F1ZXVlLgoKV2hlbiBubyByZXF1ZXN0
IGhhcyBiZWVuIGNvbnN1bWVkIHNldCB0aGUgc3B1cmlvdXMgZmxhZyB3aGVu
IHNlbmRpbmcKdGhlIEVPSSBmb3IgYW4gaW50ZXJydXB0LgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zMzIuCgpDYzogc3RhYmxlQHZnZXIua2VybmVsLm9yZwpS
ZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4KU2ln
bmVkLW9mZi1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogV2VpIExpdSA8d2xAeGVuLm9yZz4KLS0tCiBkcml2ZXJz
L25ldC94ZW4tbmV0YmFjay9jb21tb24uaCAgICB8IDE1ICsrKysrKysKIGRy
aXZlcnMvbmV0L3hlbi1uZXRiYWNrL2ludGVyZmFjZS5jIHwgNjEgKysrKysr
KysrKysrKysrKysrKysrKysrLS0tLS0KIGRyaXZlcnMvbmV0L3hlbi1uZXRi
YWNrL25ldGJhY2suYyAgIHwgMTEgKysrKystCiBkcml2ZXJzL25ldC94ZW4t
bmV0YmFjay9yeC5jICAgICAgICB8IDEzICsrKystLQogNCBmaWxlcyBjaGFu
Z2VkLCA4NiBpbnNlcnRpb25zKCspLCAxNCBkZWxldGlvbnMoLSkKCmRpZmYg
LS1naXQgYS9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9jb21tb24uaCBiL2Ry
aXZlcnMvbmV0L3hlbi1uZXRiYWNrL2NvbW1vbi5oCmluZGV4IGFlNDc3Zjc3
NTZhZi4uOGVlMjRlMzUxYmRjIDEwMDY0NAotLS0gYS9kcml2ZXJzL25ldC94
ZW4tbmV0YmFjay9jb21tb24uaAorKysgYi9kcml2ZXJzL25ldC94ZW4tbmV0
YmFjay9jb21tb24uaApAQCAtMTQwLDYgKzE0MCwyMCBAQCBzdHJ1Y3QgeGVu
dmlmX3F1ZXVlIHsgLyogUGVyLXF1ZXVlIGRhdGEgZm9yIHhlbnZpZiAqLwog
CWNoYXIgbmFtZVtRVUVVRV9OQU1FX1NJWkVdOyAvKiBERVZOQU1FLXFOICov
CiAJc3RydWN0IHhlbnZpZiAqdmlmOyAvKiBQYXJlbnQgVklGICovCiAKKwkv
KgorCSAqIFRYL1JYIGNvbW1vbiBFT0kgaGFuZGxpbmcuCisJICogV2hlbiBm
ZWF0dXJlLXNwbGl0LWV2ZW50LWNoYW5uZWxzID0gMCwgaW50ZXJydXB0IGhh
bmRsZXIgc2V0cworCSAqIE5FVEJLX0NPTU1PTl9FT0ksIG90aGVyd2lzZSBO
RVRCS19SWF9FT0kgYW5kIE5FVEJLX1RYX0VPSSBhcmUgc2V0CisJICogYnkg
dGhlIFJYIGFuZCBUWCBpbnRlcnJ1cHQgaGFuZGxlcnMuCisJICogUlggYW5k
IFRYIGhhbmRsZXIgdGhyZWFkcyB3aWxsIGlzc3VlIGFuIEVPSSB3aGVuIGVp
dGhlcgorCSAqIE5FVEJLX0NPTU1PTl9FT0kgb3IgdGhlaXIgc3BlY2lmaWMg
Yml0cyAoTkVUQktfUlhfRU9JIG9yCisJICogTkVUQktfVFhfRU9JKSBhcmUg
c2V0IGFuZCB0aGV5IHdpbGwgcmVzZXQgdGhvc2UgYml0cy4KKwkgKi8KKwlh
dG9taWNfdCBlb2lfcGVuZGluZzsKKyNkZWZpbmUgTkVUQktfUlhfRU9JCQkw
eDAxCisjZGVmaW5lIE5FVEJLX1RYX0VPSQkJMHgwMgorI2RlZmluZSBORVRC
S19DT01NT05fRU9JCTB4MDQKKwogCS8qIFVzZSBOQVBJIGZvciBndWVzdCBU
WCAqLwogCXN0cnVjdCBuYXBpX3N0cnVjdCBuYXBpOwogCS8qIFdoZW4gZmVh
dHVyZS1zcGxpdC1ldmVudC1jaGFubmVscyA9IDAsIHR4X2lycSA9IHJ4X2ly
cS4gKi8KQEAgLTM3OCw2ICszOTIsNyBAQCBpbnQgeGVudmlmX2RlYWxsb2Nf
a3RocmVhZCh2b2lkICpkYXRhKTsKIAogaXJxcmV0dXJuX3QgeGVudmlmX2N0
cmxfaXJxX2ZuKGludCBpcnEsIHZvaWQgKmRhdGEpOwogCitib29sIHhlbnZp
Zl9oYXZlX3J4X3dvcmsoc3RydWN0IHhlbnZpZl9xdWV1ZSAqcXVldWUsIGJv
b2wgdGVzdF9rdGhyZWFkKTsKIHZvaWQgeGVudmlmX3J4X2FjdGlvbihzdHJ1
Y3QgeGVudmlmX3F1ZXVlICpxdWV1ZSk7CiB2b2lkIHhlbnZpZl9yeF9xdWV1
ZV90YWlsKHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlLCBzdHJ1Y3Qgc2tf
YnVmZiAqc2tiKTsKIApkaWZmIC0tZ2l0IGEvZHJpdmVycy9uZXQveGVuLW5l
dGJhY2svaW50ZXJmYWNlLmMgYi9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9p
bnRlcmZhY2UuYwppbmRleCA4YWY0OTcyODU2OTEuLmFjYjc4NmQ4YjFkOCAx
MDA2NDQKLS0tIGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svaW50ZXJmYWNl
LmMKKysrIGIvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svaW50ZXJmYWNlLmMK
QEAgLTc3LDEyICs3NywyOCBAQCBpbnQgeGVudmlmX3NjaGVkdWxhYmxlKHN0
cnVjdCB4ZW52aWYgKnZpZikKIAkJIXZpZi0+ZGlzYWJsZWQ7CiB9CiAKK3N0
YXRpYyBib29sIHhlbnZpZl9oYW5kbGVfdHhfaW50ZXJydXB0KHN0cnVjdCB4
ZW52aWZfcXVldWUgKnF1ZXVlKQoreworCWJvb2wgcmM7CisKKwlyYyA9IFJJ
TkdfSEFTX1VOQ09OU1VNRURfUkVRVUVTVFMoJnF1ZXVlLT50eCk7CisJaWYg
KHJjKQorCQluYXBpX3NjaGVkdWxlKCZxdWV1ZS0+bmFwaSk7CisJcmV0dXJu
IHJjOworfQorCiBzdGF0aWMgaXJxcmV0dXJuX3QgeGVudmlmX3R4X2ludGVy
cnVwdChpbnQgaXJxLCB2b2lkICpkZXZfaWQpCiB7CiAJc3RydWN0IHhlbnZp
Zl9xdWV1ZSAqcXVldWUgPSBkZXZfaWQ7CisJaW50IG9sZDsKIAotCWlmIChS
SU5HX0hBU19VTkNPTlNVTUVEX1JFUVVFU1RTKCZxdWV1ZS0+dHgpKQotCQlu
YXBpX3NjaGVkdWxlKCZxdWV1ZS0+bmFwaSk7CisJb2xkID0gYXRvbWljX2Zl
dGNoX29yKE5FVEJLX1RYX0VPSSwgJnF1ZXVlLT5lb2lfcGVuZGluZyk7CisJ
V0FSTihvbGQgJiBORVRCS19UWF9FT0ksICJJbnRlcnJ1cHQgd2hpbGUgRU9J
IHBlbmRpbmdcbiIpOworCisJaWYgKCF4ZW52aWZfaGFuZGxlX3R4X2ludGVy
cnVwdChxdWV1ZSkpIHsKKwkJYXRvbWljX2FuZG5vdChORVRCS19UWF9FT0ks
ICZxdWV1ZS0+ZW9pX3BlbmRpbmcpOworCQl4ZW5faXJxX2xhdGVlb2koaXJx
LCBYRU5fRU9JX0ZMQUdfU1BVUklPVVMpOworCX0KIAogCXJldHVybiBJUlFf
SEFORExFRDsKIH0KQEAgLTExNiwxOSArMTMyLDQ2IEBAIHN0YXRpYyBpbnQg
eGVudmlmX3BvbGwoc3RydWN0IG5hcGlfc3RydWN0ICpuYXBpLCBpbnQgYnVk
Z2V0KQogCXJldHVybiB3b3JrX2RvbmU7CiB9CiAKK3N0YXRpYyBib29sIHhl
bnZpZl9oYW5kbGVfcnhfaW50ZXJydXB0KHN0cnVjdCB4ZW52aWZfcXVldWUg
KnF1ZXVlKQoreworCWJvb2wgcmM7CisKKwlyYyA9IHhlbnZpZl9oYXZlX3J4
X3dvcmsocXVldWUsIGZhbHNlKTsKKwlpZiAocmMpCisJCXhlbnZpZl9raWNr
X3RocmVhZChxdWV1ZSk7CisJcmV0dXJuIHJjOworfQorCiBzdGF0aWMgaXJx
cmV0dXJuX3QgeGVudmlmX3J4X2ludGVycnVwdChpbnQgaXJxLCB2b2lkICpk
ZXZfaWQpCiB7CiAJc3RydWN0IHhlbnZpZl9xdWV1ZSAqcXVldWUgPSBkZXZf
aWQ7CisJaW50IG9sZDsKIAotCXhlbnZpZl9raWNrX3RocmVhZChxdWV1ZSk7
CisJb2xkID0gYXRvbWljX2ZldGNoX29yKE5FVEJLX1JYX0VPSSwgJnF1ZXVl
LT5lb2lfcGVuZGluZyk7CisJV0FSTihvbGQgJiBORVRCS19SWF9FT0ksICJJ
bnRlcnJ1cHQgd2hpbGUgRU9JIHBlbmRpbmdcbiIpOworCisJaWYgKCF4ZW52
aWZfaGFuZGxlX3J4X2ludGVycnVwdChxdWV1ZSkpIHsKKwkJYXRvbWljX2Fu
ZG5vdChORVRCS19SWF9FT0ksICZxdWV1ZS0+ZW9pX3BlbmRpbmcpOworCQl4
ZW5faXJxX2xhdGVlb2koaXJxLCBYRU5fRU9JX0ZMQUdfU1BVUklPVVMpOwor
CX0KIAogCXJldHVybiBJUlFfSEFORExFRDsKIH0KIAogaXJxcmV0dXJuX3Qg
eGVudmlmX2ludGVycnVwdChpbnQgaXJxLCB2b2lkICpkZXZfaWQpCiB7Ci0J
eGVudmlmX3R4X2ludGVycnVwdChpcnEsIGRldl9pZCk7Ci0JeGVudmlmX3J4
X2ludGVycnVwdChpcnEsIGRldl9pZCk7CisJc3RydWN0IHhlbnZpZl9xdWV1
ZSAqcXVldWUgPSBkZXZfaWQ7CisJaW50IG9sZDsKKworCW9sZCA9IGF0b21p
Y19mZXRjaF9vcihORVRCS19DT01NT05fRU9JLCAmcXVldWUtPmVvaV9wZW5k
aW5nKTsKKwlXQVJOKG9sZCwgIkludGVycnVwdCB3aGlsZSBFT0kgcGVuZGlu
Z1xuIik7CisKKwkvKiBVc2UgYml0d2lzZSBvciBhcyB3ZSBuZWVkIHRvIGNh
bGwgYm90aCBmdW5jdGlvbnMuICovCisJaWYgKCgheGVudmlmX2hhbmRsZV90
eF9pbnRlcnJ1cHQocXVldWUpIHwKKwkgICAgICF4ZW52aWZfaGFuZGxlX3J4
X2ludGVycnVwdChxdWV1ZSkpKSB7CisJCWF0b21pY19hbmRub3QoTkVUQktf
Q09NTU9OX0VPSSwgJnF1ZXVlLT5lb2lfcGVuZGluZyk7CisJCXhlbl9pcnFf
bGF0ZWVvaShpcnEsIFhFTl9FT0lfRkxBR19TUFVSSU9VUyk7CisJfQogCiAJ
cmV0dXJuIElSUV9IQU5ETEVEOwogfQpAQCAtNjA1LDcgKzY0OCw3IEBAIGlu
dCB4ZW52aWZfY29ubmVjdF9jdHJsKHN0cnVjdCB4ZW52aWYgKnZpZiwgZ3Jh
bnRfcmVmX3QgcmluZ19yZWYsCiAJaWYgKHJlcV9wcm9kIC0gcnNwX3Byb2Qg
PiBSSU5HX1NJWkUoJnZpZi0+Y3RybCkpCiAJCWdvdG8gZXJyX3VubWFwOwog
Ci0JZXJyID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJxKHZpZi0+
ZG9taWQsIGV2dGNobik7CisJZXJyID0gYmluZF9pbnRlcmRvbWFpbl9ldnRj
aG5fdG9faXJxX2xhdGVlb2kodmlmLT5kb21pZCwgZXZ0Y2huKTsKIAlpZiAo
ZXJyIDwgMCkKIAkJZ290byBlcnJfdW5tYXA7CiAKQEAgLTcwOSw3ICs3NTIs
NyBAQCBpbnQgeGVudmlmX2Nvbm5lY3RfZGF0YShzdHJ1Y3QgeGVudmlmX3F1
ZXVlICpxdWV1ZSwKIAogCWlmICh0eF9ldnRjaG4gPT0gcnhfZXZ0Y2huKSB7
CiAJCS8qIGZlYXR1cmUtc3BsaXQtZXZlbnQtY2hhbm5lbHMgPT0gMCAqLwot
CQllcnIgPSBiaW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnFoYW5kbGVy
KAorCQllcnIgPSBiaW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnFoYW5k
bGVyX2xhdGVlb2koCiAJCQlxdWV1ZS0+dmlmLT5kb21pZCwgdHhfZXZ0Y2hu
LCB4ZW52aWZfaW50ZXJydXB0LCAwLAogCQkJcXVldWUtPm5hbWUsIHF1ZXVl
KTsKIAkJaWYgKGVyciA8IDApCkBAIC03MjAsNyArNzYzLDcgQEAgaW50IHhl
bnZpZl9jb25uZWN0X2RhdGEoc3RydWN0IHhlbnZpZl9xdWV1ZSAqcXVldWUs
CiAJCS8qIGZlYXR1cmUtc3BsaXQtZXZlbnQtY2hhbm5lbHMgPT0gMSAqLwog
CQlzbnByaW50ZihxdWV1ZS0+dHhfaXJxX25hbWUsIHNpemVvZihxdWV1ZS0+
dHhfaXJxX25hbWUpLAogCQkJICIlcy10eCIsIHF1ZXVlLT5uYW1lKTsKLQkJ
ZXJyID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJxaGFuZGxlcigK
KwkJZXJyID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJxaGFuZGxl
cl9sYXRlZW9pKAogCQkJcXVldWUtPnZpZi0+ZG9taWQsIHR4X2V2dGNobiwg
eGVudmlmX3R4X2ludGVycnVwdCwgMCwKIAkJCXF1ZXVlLT50eF9pcnFfbmFt
ZSwgcXVldWUpOwogCQlpZiAoZXJyIDwgMCkKQEAgLTczMCw3ICs3NzMsNyBA
QCBpbnQgeGVudmlmX2Nvbm5lY3RfZGF0YShzdHJ1Y3QgeGVudmlmX3F1ZXVl
ICpxdWV1ZSwKIAogCQlzbnByaW50ZihxdWV1ZS0+cnhfaXJxX25hbWUsIHNp
emVvZihxdWV1ZS0+cnhfaXJxX25hbWUpLAogCQkJICIlcy1yeCIsIHF1ZXVl
LT5uYW1lKTsKLQkJZXJyID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9f
aXJxaGFuZGxlcigKKwkJZXJyID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5f
dG9faXJxaGFuZGxlcl9sYXRlZW9pKAogCQkJcXVldWUtPnZpZi0+ZG9taWQs
IHJ4X2V2dGNobiwgeGVudmlmX3J4X2ludGVycnVwdCwgMCwKIAkJCXF1ZXVl
LT5yeF9pcnFfbmFtZSwgcXVldWUpOwogCQlpZiAoZXJyIDwgMCkKZGlmZiAt
LWdpdCBhL2RyaXZlcnMvbmV0L3hlbi1uZXRiYWNrL25ldGJhY2suYyBiL2Ry
aXZlcnMvbmV0L3hlbi1uZXRiYWNrL25ldGJhY2suYwppbmRleCA2ZGZjYTcy
NjU2NDQuLmJjMzQyMWQxNDU3NiAxMDA2NDQKLS0tIGEvZHJpdmVycy9uZXQv
eGVuLW5ldGJhY2svbmV0YmFjay5jCisrKyBiL2RyaXZlcnMvbmV0L3hlbi1u
ZXRiYWNrL25ldGJhY2suYwpAQCAtMTY5LDYgKzE2OSwxMCBAQCB2b2lkIHhl
bnZpZl9uYXBpX3NjaGVkdWxlX29yX2VuYWJsZV9ldmVudHMoc3RydWN0IHhl
bnZpZl9xdWV1ZSAqcXVldWUpCiAKIAlpZiAobW9yZV90b19kbykKIAkJbmFw
aV9zY2hlZHVsZSgmcXVldWUtPm5hcGkpOworCWVsc2UgaWYgKGF0b21pY19m
ZXRjaF9hbmRub3QoTkVUQktfVFhfRU9JIHwgTkVUQktfQ09NTU9OX0VPSSwK
KwkJCQkgICAgICZxdWV1ZS0+ZW9pX3BlbmRpbmcpICYKKwkJIChORVRCS19U
WF9FT0kgfCBORVRCS19DT01NT05fRU9JKSkKKwkJeGVuX2lycV9sYXRlZW9p
KHF1ZXVlLT50eF9pcnEsIDApOwogfQogCiBzdGF0aWMgdm9pZCB0eF9hZGRf
Y3JlZGl0KHN0cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKQpAQCAtMTY0Myw5
ICsxNjQ3LDE0IEBAIHN0YXRpYyBib29sIHhlbnZpZl9jdHJsX3dvcmtfdG9k
byhzdHJ1Y3QgeGVudmlmICp2aWYpCiBpcnFyZXR1cm5fdCB4ZW52aWZfY3Ry
bF9pcnFfZm4oaW50IGlycSwgdm9pZCAqZGF0YSkKIHsKIAlzdHJ1Y3QgeGVu
dmlmICp2aWYgPSBkYXRhOworCXVuc2lnbmVkIGludCBlb2lfZmxhZyA9IFhF
Tl9FT0lfRkxBR19TUFVSSU9VUzsKIAotCXdoaWxlICh4ZW52aWZfY3RybF93
b3JrX3RvZG8odmlmKSkKKwl3aGlsZSAoeGVudmlmX2N0cmxfd29ya190b2Rv
KHZpZikpIHsKIAkJeGVudmlmX2N0cmxfYWN0aW9uKHZpZik7CisJCWVvaV9m
bGFnID0gMDsKKwl9CisKKwl4ZW5faXJxX2xhdGVlb2koaXJxLCBlb2lfZmxh
Zyk7CiAKIAlyZXR1cm4gSVJRX0hBTkRMRUQ7CiB9CmRpZmYgLS1naXQgYS9k
cml2ZXJzL25ldC94ZW4tbmV0YmFjay9yeC5jIGIvZHJpdmVycy9uZXQveGVu
LW5ldGJhY2svcnguYwppbmRleCBhYzAzNGY2OWExNzAuLmI4ZmViZTFkMWJm
ZCAxMDA2NDQKLS0tIGEvZHJpdmVycy9uZXQveGVuLW5ldGJhY2svcnguYwor
KysgYi9kcml2ZXJzL25ldC94ZW4tbmV0YmFjay9yeC5jCkBAIC01MDMsMTMg
KzUwMywxMyBAQCBzdGF0aWMgYm9vbCB4ZW52aWZfcnhfcXVldWVfcmVhZHko
c3RydWN0IHhlbnZpZl9xdWV1ZSAqcXVldWUpCiAJcmV0dXJuIHF1ZXVlLT5z
dGFsbGVkICYmIHByb2QgLSBjb25zID49IDE7CiB9CiAKLXN0YXRpYyBib29s
IHhlbnZpZl9oYXZlX3J4X3dvcmsoc3RydWN0IHhlbnZpZl9xdWV1ZSAqcXVl
dWUpCitib29sIHhlbnZpZl9oYXZlX3J4X3dvcmsoc3RydWN0IHhlbnZpZl9x
dWV1ZSAqcXVldWUsIGJvb2wgdGVzdF9rdGhyZWFkKQogewogCXJldHVybiB4
ZW52aWZfcnhfcmluZ19zbG90c19hdmFpbGFibGUocXVldWUpIHx8CiAJCShx
dWV1ZS0+dmlmLT5zdGFsbF90aW1lb3V0ICYmCiAJCSAoeGVudmlmX3J4X3F1
ZXVlX3N0YWxsZWQocXVldWUpIHx8CiAJCSAgeGVudmlmX3J4X3F1ZXVlX3Jl
YWR5KHF1ZXVlKSkpIHx8Ci0JCWt0aHJlYWRfc2hvdWxkX3N0b3AoKSB8fAor
CQkodGVzdF9rdGhyZWFkICYmIGt0aHJlYWRfc2hvdWxkX3N0b3AoKSkgfHwK
IAkJcXVldWUtPnZpZi0+ZGlzYWJsZWQ7CiB9CiAKQEAgLTU0MCwxNSArNTQw
LDIwIEBAIHN0YXRpYyB2b2lkIHhlbnZpZl93YWl0X2Zvcl9yeF93b3JrKHN0
cnVjdCB4ZW52aWZfcXVldWUgKnF1ZXVlKQogewogCURFRklORV9XQUlUKHdh
aXQpOwogCi0JaWYgKHhlbnZpZl9oYXZlX3J4X3dvcmsocXVldWUpKQorCWlm
ICh4ZW52aWZfaGF2ZV9yeF93b3JrKHF1ZXVlLCB0cnVlKSkKIAkJcmV0dXJu
OwogCiAJZm9yICg7OykgewogCQlsb25nIHJldDsKIAogCQlwcmVwYXJlX3Rv
X3dhaXQoJnF1ZXVlLT53cSwgJndhaXQsIFRBU0tfSU5URVJSVVBUSUJMRSk7
Ci0JCWlmICh4ZW52aWZfaGF2ZV9yeF93b3JrKHF1ZXVlKSkKKwkJaWYgKHhl
bnZpZl9oYXZlX3J4X3dvcmsocXVldWUsIHRydWUpKQogCQkJYnJlYWs7CisJ
CWlmIChhdG9taWNfZmV0Y2hfYW5kbm90KE5FVEJLX1JYX0VPSSB8IE5FVEJL
X0NPTU1PTl9FT0ksCisJCQkJCSZxdWV1ZS0+ZW9pX3BlbmRpbmcpICYKKwkJ
ICAgIChORVRCS19SWF9FT0kgfCBORVRCS19DT01NT05fRU9JKSkKKwkJCXhl
bl9pcnFfbGF0ZWVvaShxdWV1ZS0+cnhfaXJxLCAwKTsKKwogCQlyZXQgPSBz
Y2hlZHVsZV90aW1lb3V0KHhlbnZpZl9yeF9xdWV1ZV90aW1lb3V0KHF1ZXVl
KSk7CiAJCWlmICghcmV0KQogCQkJYnJlYWs7Ci0tIAoyLjI2LjIKCg==

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-06.patch"
Content-Disposition: attachment; filename="xsa332-linux-06.patch"
Content-Transfer-Encoding: base64

RnJvbSBmNDQ1MDI2NjRiM2YzYjg4MjkzNzc3MmMzZmI3MTY5NzFkYWE0YTdk
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyOCArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMDYvMTJdIHhlbi9zY3NpYmFj
azogdXNlIGxhdGVlb2kgaXJxIGJpbmRpbmcKCkluIG9yZGVyIHRvIHJlZHVj
ZSB0aGUgY2hhbmNlIGZvciB0aGUgc3lzdGVtIGJlY29taW5nIHVucmVzcG9u
c2l2ZSBkdWUKdG8gZXZlbnQgc3Rvcm1zIHRyaWdnZXJlZCBieSBhIG1pc2Jl
aGF2aW5nIHNjc2lmcm9udCB1c2UgdGhlIGxhdGVlb2kKaXJxIGJpbmRpbmcg
Zm9yIHNjc2liYWNrIGFuZCB1bm1hc2sgdGhlIGV2ZW50IGNoYW5uZWwgb25s
eSBqdXN0IGJlZm9yZQpsZWF2aW5nIHRoZSBldmVudCBoYW5kbGluZyBmdW5j
dGlvbi4KCkluIGNhc2Ugb2YgYSByaW5nIHByb3RvY29sIGVycm9yIGRvbid0
IGlzc3VlIGFuIEVPSSBpbiBvcmRlciB0byBhdm9pZAp0aGUgcG9zc2liaWxp
dHkgdG8gdXNlIHRoYXQgZm9yIHByb2R1Y2luZyBhbiBldmVudCBzdG9ybS4g
VGhpcyBhdCBvbmNlCndpbGwgcmVzdWx0IGluIG5vIGZ1cnRoZXIgY2FsbCBv
ZiBzY3NpYmFja19pcnFfZm4oKSwgc28gdGhlIHJpbmdfZXJyb3IKc3RydWN0
IG1lbWJlciBjYW4gYmUgZHJvcHBlZCBhbmQgc2NzaWJhY2tfZG9fY21kX2Zu
KCkgY2FuIHNpZ25hbCB0aGUKcHJvdG9jb2wgZXJyb3IgdmlhIGEgbmVnYXRp
dmUgcmV0dXJuIHZhbHVlLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMzIuCgpD
Yzogc3RhYmxlQHZnZXIua2VybmVsLm9yZwpSZXBvcnRlZC1ieTogSnVsaWVu
IEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4KU2lnbmVkLW9mZi1ieTogSnVlcmdl
biBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogSmFuIEJl
dWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogV2VpIExp
dSA8d2xAeGVuLm9yZz4KLS0tCiBkcml2ZXJzL3hlbi94ZW4tc2NzaWJhY2su
YyB8IDIzICsrKysrKysrKysrKystLS0tLS0tLS0tCiAxIGZpbGUgY2hhbmdl
ZCwgMTMgaW5zZXJ0aW9ucygrKSwgMTAgZGVsZXRpb25zKC0pCgpkaWZmIC0t
Z2l0IGEvZHJpdmVycy94ZW4veGVuLXNjc2liYWNrLmMgYi9kcml2ZXJzL3hl
bi94ZW4tc2NzaWJhY2suYwppbmRleCAxZThjZmQ4MGE0ZTYuLjRhY2M0ZTg5
OTYwMCAxMDA2NDQKLS0tIGEvZHJpdmVycy94ZW4veGVuLXNjc2liYWNrLmMK
KysrIGIvZHJpdmVycy94ZW4veGVuLXNjc2liYWNrLmMKQEAgLTkxLDcgKzkx
LDYgQEAgc3RydWN0IHZzY3NpYmtfaW5mbyB7CiAJdW5zaWduZWQgaW50IGly
cTsKIAogCXN0cnVjdCB2c2NzaWlmX2JhY2tfcmluZyByaW5nOwotCWludCBy
aW5nX2Vycm9yOwogCiAJc3BpbmxvY2tfdCByaW5nX2xvY2s7CiAJYXRvbWlj
X3QgbnJfdW5yZXBsaWVkX3JlcXM7CkBAIC03MjIsNyArNzIxLDggQEAgc3Rh
dGljIHN0cnVjdCB2c2NzaWJrX3BlbmQgKnByZXBhcmVfcGVuZGluZ19yZXFz
KHN0cnVjdCB2c2NzaWJrX2luZm8gKmluZm8sCiAJcmV0dXJuIHBlbmRpbmdf
cmVxOwogfQogCi1zdGF0aWMgaW50IHNjc2liYWNrX2RvX2NtZF9mbihzdHJ1
Y3QgdnNjc2lia19pbmZvICppbmZvKQorc3RhdGljIGludCBzY3NpYmFja19k
b19jbWRfZm4oc3RydWN0IHZzY3NpYmtfaW5mbyAqaW5mbywKKwkJCSAgICAg
IHVuc2lnbmVkIGludCAqZW9pX2ZsYWdzKQogewogCXN0cnVjdCB2c2NzaWlm
X2JhY2tfcmluZyAqcmluZyA9ICZpbmZvLT5yaW5nOwogCXN0cnVjdCB2c2Nz
aWlmX3JlcXVlc3QgcmluZ19yZXE7CkBAIC03MzksMTEgKzczOSwxMiBAQCBz
dGF0aWMgaW50IHNjc2liYWNrX2RvX2NtZF9mbihzdHJ1Y3QgdnNjc2lia19p
bmZvICppbmZvKQogCQlyYyA9IHJpbmctPnJzcF9wcm9kX3B2dDsKIAkJcHJf
d2FybigiRG9tJWQgcHJvdmlkZWQgYm9ndXMgcmluZyByZXF1ZXN0cyAoJSN4
IC0gJSN4ID0gJXUpLiBIYWx0aW5nIHJpbmcgcHJvY2Vzc2luZ1xuIiwKIAkJ
CSAgIGluZm8tPmRvbWlkLCBycCwgcmMsIHJwIC0gcmMpOwotCQlpbmZvLT5y
aW5nX2Vycm9yID0gMTsKLQkJcmV0dXJuIDA7CisJCXJldHVybiAtRUlOVkFM
OwogCX0KIAogCXdoaWxlICgocmMgIT0gcnApKSB7CisJCSplb2lfZmxhZ3Mg
Jj0gflhFTl9FT0lfRkxBR19TUFVSSU9VUzsKKwogCQlpZiAoUklOR19SRVFV
RVNUX0NPTlNfT1ZFUkZMT1cocmluZywgcmMpKQogCQkJYnJlYWs7CiAKQEAg
LTgwMiwxMyArODAzLDE2IEBAIHN0YXRpYyBpbnQgc2NzaWJhY2tfZG9fY21k
X2ZuKHN0cnVjdCB2c2NzaWJrX2luZm8gKmluZm8pCiBzdGF0aWMgaXJxcmV0
dXJuX3Qgc2NzaWJhY2tfaXJxX2ZuKGludCBpcnEsIHZvaWQgKmRldl9pZCkK
IHsKIAlzdHJ1Y3QgdnNjc2lia19pbmZvICppbmZvID0gZGV2X2lkOworCWlu
dCByYzsKKwl1bnNpZ25lZCBpbnQgZW9pX2ZsYWdzID0gWEVOX0VPSV9GTEFH
X1NQVVJJT1VTOwogCi0JaWYgKGluZm8tPnJpbmdfZXJyb3IpCi0JCXJldHVy
biBJUlFfSEFORExFRDsKLQotCXdoaWxlIChzY3NpYmFja19kb19jbWRfZm4o
aW5mbykpCisJd2hpbGUgKChyYyA9IHNjc2liYWNrX2RvX2NtZF9mbihpbmZv
LCAmZW9pX2ZsYWdzKSkgPiAwKQogCQljb25kX3Jlc2NoZWQoKTsKIAorCS8q
IEluIGNhc2Ugb2YgYSByaW5nIGVycm9yIHdlIGtlZXAgdGhlIGV2ZW50IGNo
YW5uZWwgbWFza2VkLiAqLworCWlmICghcmMpCisJCXhlbl9pcnFfbGF0ZWVv
aShpcnEsIGVvaV9mbGFncyk7CisKIAlyZXR1cm4gSVJRX0hBTkRMRUQ7CiB9
CiAKQEAgLTgyOSw3ICs4MzMsNyBAQCBzdGF0aWMgaW50IHNjc2liYWNrX2lu
aXRfc3Jpbmcoc3RydWN0IHZzY3NpYmtfaW5mbyAqaW5mbywgZ3JhbnRfcmVm
X3QgcmluZ19yZWYsCiAJc3JpbmcgPSAoc3RydWN0IHZzY3NpaWZfc3Jpbmcg
KilhcmVhOwogCUJBQ0tfUklOR19JTklUKCZpbmZvLT5yaW5nLCBzcmluZywg
UEFHRV9TSVpFKTsKIAotCWVyciA9IGJpbmRfaW50ZXJkb21haW5fZXZ0Y2hu
X3RvX2lycShpbmZvLT5kb21pZCwgZXZ0Y2huKTsKKwllcnIgPSBiaW5kX2lu
dGVyZG9tYWluX2V2dGNobl90b19pcnFfbGF0ZWVvaShpbmZvLT5kb21pZCwg
ZXZ0Y2huKTsKIAlpZiAoZXJyIDwgMCkKIAkJZ290byB1bm1hcF9wYWdlOwog
CkBAIC0xMjUzLDcgKzEyNTcsNiBAQCBzdGF0aWMgaW50IHNjc2liYWNrX3By
b2JlKHN0cnVjdCB4ZW5idXNfZGV2aWNlICpkZXYsCiAKIAlpbmZvLT5kb21p
ZCA9IGRldi0+b3RoZXJlbmRfaWQ7CiAJc3Bpbl9sb2NrX2luaXQoJmluZm8t
PnJpbmdfbG9jayk7Ci0JaW5mby0+cmluZ19lcnJvciA9IDA7CiAJYXRvbWlj
X3NldCgmaW5mby0+bnJfdW5yZXBsaWVkX3JlcXMsIDApOwogCWluaXRfd2Fp
dHF1ZXVlX2hlYWQoJmluZm8tPndhaXRpbmdfdG9fZnJlZSk7CiAJaW5mby0+
ZGV2ID0gZGV2OwotLSAKMi4yNi4yCgo=

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-07.patch"
Content-Disposition: attachment; filename="xsa332-linux-07.patch"
Content-Transfer-Encoding: base64

RnJvbSAxNjUyOTQ3OTVmZGEzMWNmMjBhMjZjOGE3OWUxMjBkMzIwN2QxMzEx
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyOCArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMDcvMTJdIHhlbi9wdmNhbGxz
YmFjazogdXNlIGxhdGVlb2kgaXJxIGJpbmRpbmcKCkluIG9yZGVyIHRvIHJl
ZHVjZSB0aGUgY2hhbmNlIGZvciB0aGUgc3lzdGVtIGJlY29taW5nIHVucmVz
cG9uc2l2ZSBkdWUKdG8gZXZlbnQgc3Rvcm1zIHRyaWdnZXJlZCBieSBhIG1p
c2JlaGF2aW5nIHB2Y2FsbHNmcm9udCB1c2UgdGhlIGxhdGVlb2kKaXJxIGJp
bmRpbmcgZm9yIHB2Y2FsbHNiYWNrIGFuZCB1bm1hc2sgdGhlIGV2ZW50IGNo
YW5uZWwgb25seSBhZnRlcgpoYW5kbGluZyBhbGwgd3JpdGUgcmVxdWVzdHMs
IHdoaWNoIGFyZSB0aGUgb25lcyBjb21pbmcgaW4gdmlhIGFuIGlycS4KClRo
aXMgcmVxdWlyZXMgbW9kaWZ5aW5nIHRoZSBsb2dpYyBhIGxpdHRsZSBiaXQg
dG8gbm90IHJlcXVpcmUgYW4gZXZlbnQKZm9yIGVhY2ggd3JpdGUgcmVxdWVz
dCwgYnV0IHRvIGtlZXAgdGhlIGlvd29ya2VyIHJ1bm5pbmcgdW50aWwgbm8K
ZnVydGhlciBkYXRhIGlzIGZvdW5kIG9uIHRoZSByaW5nIHBhZ2UgdG8gYmUg
cHJvY2Vzc2VkLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zMzIuCgpDYzogc3Rh
YmxlQHZnZXIua2VybmVsLm9yZwpSZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxs
IDxqdWxpZW5AeGVuLm9yZz4KU2lnbmVkLW9mZi1ieTogSnVlcmdlbiBHcm9z
cyA8amdyb3NzQHN1c2UuY29tPgpSZXZpZXdlZC1ieTogU3RlZmFubyBTdGFi
ZWxsaW5pIDxzc3RhYmVsbGluaUBrZXJuZWwub3JnPgpSZXZpZXdlZC1ieTog
V2VpIExpdSA8d2xAeGVuLm9yZz4KLS0tCiBkcml2ZXJzL3hlbi9wdmNhbGxz
LWJhY2suYyB8IDc2ICsrKysrKysrKysrKysrKysrKysrKysrLS0tLS0tLS0t
LS0tLS0tCiAxIGZpbGUgY2hhbmdlZCwgNDYgaW5zZXJ0aW9ucygrKSwgMzAg
ZGVsZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4vcHZjYWxs
cy1iYWNrLmMgYi9kcml2ZXJzL3hlbi9wdmNhbGxzLWJhY2suYwppbmRleCA5
ZWFlMWZjZWVjMWUuLmE3ZDI5M2ZhOGQxNCAxMDA2NDQKLS0tIGEvZHJpdmVy
cy94ZW4vcHZjYWxscy1iYWNrLmMKKysrIGIvZHJpdmVycy94ZW4vcHZjYWxs
cy1iYWNrLmMKQEAgLTY2LDYgKzY2LDcgQEAgc3RydWN0IHNvY2tfbWFwcGlu
ZyB7CiAJYXRvbWljX3Qgd3JpdGU7CiAJYXRvbWljX3QgaW87CiAJYXRvbWlj
X3QgcmVsZWFzZTsKKwlhdG9taWNfdCBlb2k7CiAJdm9pZCAoKnNhdmVkX2Rh
dGFfcmVhZHkpKHN0cnVjdCBzb2NrICpzayk7CiAJc3RydWN0IHB2Y2FsbHNf
aW93b3JrZXIgaW93b3JrZXI7CiB9OwpAQCAtODcsNyArODgsNyBAQCBzdGF0
aWMgaW50IHB2Y2FsbHNfYmFja19yZWxlYXNlX2FjdGl2ZShzdHJ1Y3QgeGVu
YnVzX2RldmljZSAqZGV2LAogCQkJCSAgICAgICBzdHJ1Y3QgcHZjYWxsc19m
ZWRhdGEgKmZlZGF0YSwKIAkJCQkgICAgICAgc3RydWN0IHNvY2tfbWFwcGlu
ZyAqbWFwKTsKIAotc3RhdGljIHZvaWQgcHZjYWxsc19jb25uX2JhY2tfcmVh
ZCh2b2lkICpvcGFxdWUpCitzdGF0aWMgYm9vbCBwdmNhbGxzX2Nvbm5fYmFj
a19yZWFkKHZvaWQgKm9wYXF1ZSkKIHsKIAlzdHJ1Y3Qgc29ja19tYXBwaW5n
ICptYXAgPSAoc3RydWN0IHNvY2tfbWFwcGluZyAqKW9wYXF1ZTsKIAlzdHJ1
Y3QgbXNnaGRyIG1zZzsKQEAgLTEwNywxNyArMTA4LDE3IEBAIHN0YXRpYyB2
b2lkIHB2Y2FsbHNfY29ubl9iYWNrX3JlYWQodm9pZCAqb3BhcXVlKQogCXZp
cnRfbWIoKTsKIAogCWlmIChlcnJvcikKLQkJcmV0dXJuOworCQlyZXR1cm4g
ZmFsc2U7CiAKIAlzaXplID0gcHZjYWxsc19xdWV1ZWQocHJvZCwgY29ucywg
YXJyYXlfc2l6ZSk7CiAJaWYgKHNpemUgPj0gYXJyYXlfc2l6ZSkKLQkJcmV0
dXJuOworCQlyZXR1cm4gZmFsc2U7CiAJc3Bpbl9sb2NrX2lycXNhdmUoJm1h
cC0+c29jay0+c2stPnNrX3JlY2VpdmVfcXVldWUubG9jaywgZmxhZ3MpOwog
CWlmIChza2JfcXVldWVfZW1wdHkoJm1hcC0+c29jay0+c2stPnNrX3JlY2Vp
dmVfcXVldWUpKSB7CiAJCWF0b21pY19zZXQoJm1hcC0+cmVhZCwgMCk7CiAJ
CXNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJm1hcC0+c29jay0+c2stPnNrX3Jl
Y2VpdmVfcXVldWUubG9jaywKIAkJCQlmbGFncyk7Ci0JCXJldHVybjsKKwkJ
cmV0dXJuIHRydWU7CiAJfQogCXNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJm1h
cC0+c29jay0+c2stPnNrX3JlY2VpdmVfcXVldWUubG9jaywgZmxhZ3MpOwog
CXdhbnRlZCA9IGFycmF5X3NpemUgLSBzaXplOwpAQCAtMTQxLDcgKzE0Miw3
IEBAIHN0YXRpYyB2b2lkIHB2Y2FsbHNfY29ubl9iYWNrX3JlYWQodm9pZCAq
b3BhcXVlKQogCXJldCA9IGluZXRfcmVjdm1zZyhtYXAtPnNvY2ssICZtc2cs
IHdhbnRlZCwgTVNHX0RPTlRXQUlUKTsKIAlXQVJOX09OKHJldCA+IHdhbnRl
ZCk7CiAJaWYgKHJldCA9PSAtRUFHQUlOKSAvKiBzaG91bGRuJ3QgaGFwcGVu
ICovCi0JCXJldHVybjsKKwkJcmV0dXJuIHRydWU7CiAJaWYgKCFyZXQpCiAJ
CXJldCA9IC1FTk9UQ09OTjsKIAlzcGluX2xvY2tfaXJxc2F2ZSgmbWFwLT5z
b2NrLT5zay0+c2tfcmVjZWl2ZV9xdWV1ZS5sb2NrLCBmbGFncyk7CkBAIC0x
NjAsMTAgKzE2MSwxMCBAQCBzdGF0aWMgdm9pZCBwdmNhbGxzX2Nvbm5fYmFj
a19yZWFkKHZvaWQgKm9wYXF1ZSkKIAl2aXJ0X3dtYigpOwogCW5vdGlmeV9y
ZW1vdGVfdmlhX2lycShtYXAtPmlycSk7CiAKLQlyZXR1cm47CisJcmV0dXJu
IHRydWU7CiB9CiAKLXN0YXRpYyB2b2lkIHB2Y2FsbHNfY29ubl9iYWNrX3dy
aXRlKHN0cnVjdCBzb2NrX21hcHBpbmcgKm1hcCkKK3N0YXRpYyBib29sIHB2
Y2FsbHNfY29ubl9iYWNrX3dyaXRlKHN0cnVjdCBzb2NrX21hcHBpbmcgKm1h
cCkKIHsKIAlzdHJ1Y3QgcHZjYWxsc19kYXRhX2ludGYgKmludGYgPSBtYXAt
PnJpbmc7CiAJc3RydWN0IHB2Y2FsbHNfZGF0YSAqZGF0YSA9ICZtYXAtPmRh
dGE7CkBAIC0xODAsNyArMTgxLDcgQEAgc3RhdGljIHZvaWQgcHZjYWxsc19j
b25uX2JhY2tfd3JpdGUoc3RydWN0IHNvY2tfbWFwcGluZyAqbWFwKQogCWFy
cmF5X3NpemUgPSBYRU5fRkxFWF9SSU5HX1NJWkUobWFwLT5yaW5nX29yZGVy
KTsKIAlzaXplID0gcHZjYWxsc19xdWV1ZWQocHJvZCwgY29ucywgYXJyYXlf
c2l6ZSk7CiAJaWYgKHNpemUgPT0gMCkKLQkJcmV0dXJuOworCQlyZXR1cm4g
ZmFsc2U7CiAKIAltZW1zZXQoJm1zZywgMCwgc2l6ZW9mKG1zZykpOwogCW1z
Zy5tc2dfZmxhZ3MgfD0gTVNHX0RPTlRXQUlUOwpAQCAtMTk4LDEyICsxOTks
MTEgQEAgc3RhdGljIHZvaWQgcHZjYWxsc19jb25uX2JhY2tfd3JpdGUoc3Ry
dWN0IHNvY2tfbWFwcGluZyAqbWFwKQogCiAJYXRvbWljX3NldCgmbWFwLT53
cml0ZSwgMCk7CiAJcmV0ID0gaW5ldF9zZW5kbXNnKG1hcC0+c29jaywgJm1z
Zywgc2l6ZSk7Ci0JaWYgKHJldCA9PSAtRUFHQUlOIHx8IChyZXQgPj0gMCAm
JiByZXQgPCBzaXplKSkgeworCWlmIChyZXQgPT0gLUVBR0FJTikgewogCQlh
dG9taWNfaW5jKCZtYXAtPndyaXRlKTsKIAkJYXRvbWljX2luYygmbWFwLT5p
byk7CisJCXJldHVybiB0cnVlOwogCX0KLQlpZiAocmV0ID09IC1FQUdBSU4p
Ci0JCXJldHVybjsKIAogCS8qIHdyaXRlIHRoZSBkYXRhLCB0aGVuIHVwZGF0
ZSB0aGUgaW5kZXhlcyAqLwogCXZpcnRfd21iKCk7CkBAIC0yMTYsOSArMjE2
LDEzIEBAIHN0YXRpYyB2b2lkIHB2Y2FsbHNfY29ubl9iYWNrX3dyaXRlKHN0
cnVjdCBzb2NrX21hcHBpbmcgKm1hcCkKIAl9CiAJLyogdXBkYXRlIHRoZSBp
bmRleGVzLCB0aGVuIG5vdGlmeSB0aGUgb3RoZXIgZW5kICovCiAJdmlydF93
bWIoKTsKLQlpZiAocHJvZCAhPSBjb25zICsgcmV0KQorCWlmIChwcm9kICE9
IGNvbnMgKyByZXQpIHsKIAkJYXRvbWljX2luYygmbWFwLT53cml0ZSk7CisJ
CWF0b21pY19pbmMoJm1hcC0+aW8pOworCX0KIAlub3RpZnlfcmVtb3RlX3Zp
YV9pcnEobWFwLT5pcnEpOworCisJcmV0dXJuIHRydWU7CiB9CiAKIHN0YXRp
YyB2b2lkIHB2Y2FsbHNfYmFja19pb3dvcmtlcihzdHJ1Y3Qgd29ya19zdHJ1
Y3QgKndvcmspCkBAIC0yMjcsNiArMjMxLDcgQEAgc3RhdGljIHZvaWQgcHZj
YWxsc19iYWNrX2lvd29ya2VyKHN0cnVjdCB3b3JrX3N0cnVjdCAqd29yaykK
IAkJc3RydWN0IHB2Y2FsbHNfaW93b3JrZXIsIHJlZ2lzdGVyX3dvcmspOwog
CXN0cnVjdCBzb2NrX21hcHBpbmcgKm1hcCA9IGNvbnRhaW5lcl9vZihpb3dv
cmtlciwgc3RydWN0IHNvY2tfbWFwcGluZywKIAkJaW93b3JrZXIpOworCXVu
c2lnbmVkIGludCBlb2lfZmxhZ3MgPSBYRU5fRU9JX0ZMQUdfU1BVUklPVVM7
CiAKIAl3aGlsZSAoYXRvbWljX3JlYWQoJm1hcC0+aW8pID4gMCkgewogCQlp
ZiAoYXRvbWljX3JlYWQoJm1hcC0+cmVsZWFzZSkgPiAwKSB7CkBAIC0yMzQs
MTAgKzIzOSwxOCBAQCBzdGF0aWMgdm9pZCBwdmNhbGxzX2JhY2tfaW93b3Jr
ZXIoc3RydWN0IHdvcmtfc3RydWN0ICp3b3JrKQogCQkJcmV0dXJuOwogCQl9
CiAKLQkJaWYgKGF0b21pY19yZWFkKCZtYXAtPnJlYWQpID4gMCkKLQkJCXB2
Y2FsbHNfY29ubl9iYWNrX3JlYWQobWFwKTsKLQkJaWYgKGF0b21pY19yZWFk
KCZtYXAtPndyaXRlKSA+IDApCi0JCQlwdmNhbGxzX2Nvbm5fYmFja193cml0
ZShtYXApOworCQlpZiAoYXRvbWljX3JlYWQoJm1hcC0+cmVhZCkgPiAwICYm
CisJCSAgICBwdmNhbGxzX2Nvbm5fYmFja19yZWFkKG1hcCkpCisJCQllb2lf
ZmxhZ3MgPSAwOworCQlpZiAoYXRvbWljX3JlYWQoJm1hcC0+d3JpdGUpID4g
MCAmJgorCQkgICAgcHZjYWxsc19jb25uX2JhY2tfd3JpdGUobWFwKSkKKwkJ
CWVvaV9mbGFncyA9IDA7CisKKwkJaWYgKGF0b21pY19yZWFkKCZtYXAtPmVv
aSkgPiAwICYmICFhdG9taWNfcmVhZCgmbWFwLT53cml0ZSkpIHsKKwkJCWF0
b21pY19zZXQoJm1hcC0+ZW9pLCAwKTsKKwkJCXhlbl9pcnFfbGF0ZWVvaSht
YXAtPmlycSwgZW9pX2ZsYWdzKTsKKwkJCWVvaV9mbGFncyA9IFhFTl9FT0lf
RkxBR19TUFVSSU9VUzsKKwkJfQogCiAJCWF0b21pY19kZWMoJm1hcC0+aW8p
OwogCX0KQEAgLTMzNCwxMiArMzQ3LDkgQEAgc3RhdGljIHN0cnVjdCBzb2Nr
X21hcHBpbmcgKnB2Y2FsbHNfbmV3X2FjdGl2ZV9zb2NrZXQoCiAJCWdvdG8g
b3V0OwogCW1hcC0+Ynl0ZXMgPSBwYWdlOwogCi0JcmV0ID0gYmluZF9pbnRl
cmRvbWFpbl9ldnRjaG5fdG9faXJxaGFuZGxlcihmZWRhdGEtPmRldi0+b3Ro
ZXJlbmRfaWQsCi0JCQkJCQkgICAgZXZ0Y2huLAotCQkJCQkJICAgIHB2Y2Fs
bHNfYmFja19jb25uX2V2ZW50LAotCQkJCQkJICAgIDAsCi0JCQkJCQkgICAg
InB2Y2FsbHMtYmFja2VuZCIsCi0JCQkJCQkgICAgbWFwKTsKKwlyZXQgPSBi
aW5kX2ludGVyZG9tYWluX2V2dGNobl90b19pcnFoYW5kbGVyX2xhdGVlb2ko
CisJCQlmZWRhdGEtPmRldi0+b3RoZXJlbmRfaWQsIGV2dGNobiwKKwkJCXB2
Y2FsbHNfYmFja19jb25uX2V2ZW50LCAwLCAicHZjYWxscy1iYWNrZW5kIiwg
bWFwKTsKIAlpZiAocmV0IDwgMCkKIAkJZ290byBvdXQ7CiAJbWFwLT5pcnEg
PSByZXQ7CkBAIC04NzMsMTUgKzg4MywxOCBAQCBzdGF0aWMgaXJxcmV0dXJu
X3QgcHZjYWxsc19iYWNrX2V2ZW50KGludCBpcnEsIHZvaWQgKmRldl9pZCkK
IHsKIAlzdHJ1Y3QgeGVuYnVzX2RldmljZSAqZGV2ID0gZGV2X2lkOwogCXN0
cnVjdCBwdmNhbGxzX2ZlZGF0YSAqZmVkYXRhID0gTlVMTDsKKwl1bnNpZ25l
ZCBpbnQgZW9pX2ZsYWdzID0gWEVOX0VPSV9GTEFHX1NQVVJJT1VTOwogCi0J
aWYgKGRldiA9PSBOVUxMKQotCQlyZXR1cm4gSVJRX0hBTkRMRUQ7CisJaWYg
KGRldikgeworCQlmZWRhdGEgPSBkZXZfZ2V0X2RydmRhdGEoJmRldi0+ZGV2
KTsKKwkJaWYgKGZlZGF0YSkgeworCQkJcHZjYWxsc19iYWNrX3dvcmsoZmVk
YXRhKTsKKwkJCWVvaV9mbGFncyA9IDA7CisJCX0KKwl9CiAKLQlmZWRhdGEg
PSBkZXZfZ2V0X2RydmRhdGEoJmRldi0+ZGV2KTsKLQlpZiAoZmVkYXRhID09
IE5VTEwpCi0JCXJldHVybiBJUlFfSEFORExFRDsKKwl4ZW5faXJxX2xhdGVl
b2koaXJxLCBlb2lfZmxhZ3MpOwogCi0JcHZjYWxsc19iYWNrX3dvcmsoZmVk
YXRhKTsKIAlyZXR1cm4gSVJRX0hBTkRMRUQ7CiB9CiAKQEAgLTg5MSwxMiAr
OTA0LDE1IEBAIHN0YXRpYyBpcnFyZXR1cm5fdCBwdmNhbGxzX2JhY2tfY29u
bl9ldmVudChpbnQgaXJxLCB2b2lkICpzb2NrX21hcCkKIAlzdHJ1Y3QgcHZj
YWxsc19pb3dvcmtlciAqaW93OwogCiAJaWYgKG1hcCA9PSBOVUxMIHx8IG1h
cC0+c29jayA9PSBOVUxMIHx8IG1hcC0+c29jay0+c2sgPT0gTlVMTCB8fAot
CQltYXAtPnNvY2stPnNrLT5za191c2VyX2RhdGEgIT0gbWFwKQorCQltYXAt
PnNvY2stPnNrLT5za191c2VyX2RhdGEgIT0gbWFwKSB7CisJCXhlbl9pcnFf
bGF0ZWVvaShpcnEsIDApOwogCQlyZXR1cm4gSVJRX0hBTkRMRUQ7CisJfQog
CiAJaW93ID0gJm1hcC0+aW93b3JrZXI7CiAKIAlhdG9taWNfaW5jKCZtYXAt
PndyaXRlKTsKKwlhdG9taWNfaW5jKCZtYXAtPmVvaSk7CiAJYXRvbWljX2lu
YygmbWFwLT5pbyk7CiAJcXVldWVfd29yayhpb3ctPndxLCAmaW93LT5yZWdp
c3Rlcl93b3JrKTsKIApAQCAtOTMyLDcgKzk0OCw3IEBAIHN0YXRpYyBpbnQg
YmFja2VuZF9jb25uZWN0KHN0cnVjdCB4ZW5idXNfZGV2aWNlICpkZXYpCiAJ
CWdvdG8gZXJyb3I7CiAJfQogCi0JZXJyID0gYmluZF9pbnRlcmRvbWFpbl9l
dnRjaG5fdG9faXJxKGRldi0+b3RoZXJlbmRfaWQsIGV2dGNobik7CisJZXJy
ID0gYmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJxX2xhdGVlb2koZGV2
LT5vdGhlcmVuZF9pZCwgZXZ0Y2huKTsKIAlpZiAoZXJyIDwgMCkKIAkJZ290
byBlcnJvcjsKIAlmZWRhdGEtPmlycSA9IGVycjsKLS0gCjIuMjYuMgoK

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-08.patch"
Content-Disposition: attachment; filename="xsa332-linux-08.patch"
Content-Transfer-Encoding: base64

RnJvbSAyNzFlNjZiYTRkYjFiNTZlYzM4NmNjNjNhOGNjMjk1N2JmMWJhOGE4
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyOSArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMDgvMTJdIHhlbi9wY2liYWNr
OiB1c2UgbGF0ZWVvaSBpcnEgYmluZGluZwoKSW4gb3JkZXIgdG8gcmVkdWNl
IHRoZSBjaGFuY2UgZm9yIHRoZSBzeXN0ZW0gYmVjb21pbmcgdW5yZXNwb25z
aXZlIGR1ZQp0byBldmVudCBzdG9ybXMgdHJpZ2dlcmVkIGJ5IGEgbWlzYmVo
YXZpbmcgcGNpZnJvbnQgdXNlIHRoZSBsYXRlZW9pIGlycQpiaW5kaW5nIGZv
ciBwY2liYWNrIGFuZCB1bm1hc2sgdGhlIGV2ZW50IGNoYW5uZWwgb25seSBq
dXN0IGJlZm9yZQpsZWF2aW5nIHRoZSBldmVudCBoYW5kbGluZyBmdW5jdGlv
bi4KClJlc3RydWN0dXJlIHRoZSBoYW5kbGluZyB0byBzdXBwb3J0IHRoYXQg
c2NoZW1lLiBCYXNpY2FsbHkgYW4gZXZlbnQgY2FuCmNvbWUgaW4gZm9yIHR3
byByZWFzb25zOiBlaXRoZXIgYSBub3JtYWwgcmVxdWVzdCBmb3IgYSBwY2li
YWNrIGFjdGlvbiwKd2hpY2ggaXMgaGFuZGxlZCBpbiBhIHdvcmtlciwgb3Ig
aW4gY2FzZSB0aGUgZ3Vlc3QgaGFzIGZpbmlzaGVkIGFuIEFFUgpyZXF1ZXN0
IHdoaWNoIHdhcyByZXF1ZXN0ZWQgYnkgcGNpYmFjay4KCldoZW4gYW4gQUVS
IHJlcXVlc3QgaXMgaXNzdWVkIHRvIHRoZSBndWVzdCBhbmQgYSBub3JtYWwg
cGNpYmFjayBhY3Rpb24KaXMgY3VycmVudGx5IGFjdGl2ZSBpc3N1ZSBhbiBF
T0kgZWFybHkgaW4gb3JkZXIgdG8gYmUgYWJsZSB0byByZWNlaXZlCmFub3Ro
ZXIgZXZlbnQgd2hlbiB0aGUgQUVSIHJlcXVlc3QgaGFzIGJlZW4gZmluaXNo
ZWQgYnkgdGhlIGd1ZXN0LgoKTGV0IHRoZSB3b3JrZXIgcHJvY2Vzc2luZyB0
aGUgbm9ybWFsIHJlcXVlc3RzIHJ1biB1bnRpbCBubyBmdXJ0aGVyCnJlcXVl
c3QgaXMgcGVuZGluZywgaW5zdGVhZCBvZiBzdGFydGluZyBhIG5ldyB3b3Jr
ZXIgaW9uIHRoYXQgY2FzZS4KSXNzdWUgdGhlIEVPSSBvbmx5IGp1c3QgYmVm
b3JlIGxlYXZpbmcgdGhlIHdvcmtlci4KClRoaXMgc2NoZW1lIGFsbG93cyB0
byBkcm9wIGNhbGxpbmcgdGhlIGdlbmVyaWMgZnVuY3Rpb24KeGVuX3BjaWJr
X3Rlc3RfYW5kX3NjaGVkdWxlX29wKCkgYWZ0ZXIgcHJvY2Vzc2luZyBvZiBh
bnkgcmVxdWVzdCBhcwp0aGUgaGFuZGxpbmcgb2YgYm90aCByZXF1ZXN0IHR5
cGVzIGlzIG5vdyBzZXBhcmF0ZWQgbW9yZSBjbGVhbmx5LgoKVGhpcyBpcyBw
YXJ0IG9mIFhTQS0zMzIuCgpDYzogc3RhYmxlQHZnZXIua2VybmVsLm9yZwpS
ZXBvcnRlZC1ieTogSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4KU2ln
bmVkLW9mZi1ieTogSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogV2VpIExpdSA8d2xAeGVuLm9yZz4KLS0tCiBkcml2ZXJz
L3hlbi94ZW4tcGNpYmFjay9wY2lfc3R1Yi5jICAgIHwgMTMgKysrKy0tLS0K
IGRyaXZlcnMveGVuL3hlbi1wY2liYWNrL3BjaWJhY2suaCAgICAgfCAxMiAr
KysrKy0tCiBkcml2ZXJzL3hlbi94ZW4tcGNpYmFjay9wY2liYWNrX29wcy5j
IHwgNDggKysrKysrKysrKysrKysrKysrKysrLS0tLS0tCiBkcml2ZXJzL3hl
bi94ZW4tcGNpYmFjay94ZW5idXMuYyAgICAgIHwgIDIgKy0KIDQgZmlsZXMg
Y2hhbmdlZCwgNTYgaW5zZXJ0aW9ucygrKSwgMTkgZGVsZXRpb25zKC0pCgpk
aWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4veGVuLXBjaWJhY2svcGNpX3N0dWIu
YyBiL2RyaXZlcnMveGVuL3hlbi1wY2liYWNrL3BjaV9zdHViLmMKaW5kZXgg
ZTg3NmMzZDZkYWQxLi5jYjkwNGFjODMwMDYgMTAwNjQ0Ci0tLSBhL2RyaXZl
cnMveGVuL3hlbi1wY2liYWNrL3BjaV9zdHViLmMKKysrIGIvZHJpdmVycy94
ZW4veGVuLXBjaWJhY2svcGNpX3N0dWIuYwpAQCAtNzM0LDEwICs3MzQsMTcg
QEAgc3RhdGljIHBjaV9lcnNfcmVzdWx0X3QgY29tbW9uX3Byb2Nlc3Moc3Ry
dWN0IHBjaXN0dWJfZGV2aWNlICpwc2RldiwKIAl3bWIoKTsKIAlub3RpZnlf
cmVtb3RlX3ZpYV9pcnEocGRldi0+ZXZ0Y2huX2lycSk7CiAKKwkvKiBFbmFi
bGUgSVJRIHRvIHNpZ25hbCAicmVxdWVzdCBkb25lIi4gKi8KKwl4ZW5fcGNp
YmtfbGF0ZWVvaShwZGV2LCAwKTsKKwogCXJldCA9IHdhaXRfZXZlbnRfdGlt
ZW91dCh4ZW5fcGNpYmtfYWVyX3dhaXRfcXVldWUsCiAJCQkJICEodGVzdF9i
aXQoX1hFTl9QQ0lCX2FjdGl2ZSwgKHVuc2lnbmVkIGxvbmcgKikKIAkJCQkg
JnNoX2luZm8tPmZsYWdzKSksIDMwMCpIWik7CiAKKwkvKiBFbmFibGUgSVJR
IGZvciBwY2lmcm9udCByZXF1ZXN0IGlmIG5vdCBhbHJlYWR5IGFjdGl2ZS4g
Ki8KKwlpZiAoIXRlc3RfYml0KF9QREVWRl9vcF9hY3RpdmUsICZwZGV2LT5m
bGFncykpCisJCXhlbl9wY2lia19sYXRlZW9pKHBkZXYsIDApOworCiAJaWYg
KCFyZXQpIHsKIAkJaWYgKHRlc3RfYml0KF9YRU5fUENJQl9hY3RpdmUsCiAJ
CQkodW5zaWduZWQgbG9uZyAqKSZzaF9pbmZvLT5mbGFncykpIHsKQEAgLTc1
MSwxMiArNzU4LDYgQEAgc3RhdGljIHBjaV9lcnNfcmVzdWx0X3QgY29tbW9u
X3Byb2Nlc3Moc3RydWN0IHBjaXN0dWJfZGV2aWNlICpwc2RldiwKIAl9CiAJ
Y2xlYXJfYml0KF9QQ0lCX29wX3BlbmRpbmcsICh1bnNpZ25lZCBsb25nICop
JnBkZXYtPmZsYWdzKTsKIAotCWlmICh0ZXN0X2JpdChfWEVOX1BDSUZfYWN0
aXZlLAotCQkodW5zaWduZWQgbG9uZyAqKSZzaF9pbmZvLT5mbGFncykpIHsK
LQkJZGV2X2RiZygmcHNkZXYtPmRldi0+ZGV2LCAic2NoZWR1bGUgcGNpX2Nv
bmYgc2VydmljZVxuIik7Ci0JCXhlbl9wY2lia190ZXN0X2FuZF9zY2hlZHVs
ZV9vcChwc2Rldi0+cGRldik7Ci0JfQotCiAJcmVzID0gKHBjaV9lcnNfcmVz
dWx0X3QpYWVyX29wLT5lcnI7CiAJcmV0dXJuIHJlczsKIH0KZGlmZiAtLWdp
dCBhL2RyaXZlcnMveGVuL3hlbi1wY2liYWNrL3BjaWJhY2suaCBiL2RyaXZl
cnMveGVuL3hlbi1wY2liYWNrL3BjaWJhY2suaAppbmRleCBmMWVkMmRiZjY4
NWMuLjk1ZTI4ZWU0OGQ1MiAxMDA2NDQKLS0tIGEvZHJpdmVycy94ZW4veGVu
LXBjaWJhY2svcGNpYmFjay5oCisrKyBiL2RyaXZlcnMveGVuL3hlbi1wY2li
YWNrL3BjaWJhY2suaApAQCAtMTQsNiArMTQsNyBAQAogI2luY2x1ZGUgPGxp
bnV4L3NwaW5sb2NrLmg+CiAjaW5jbHVkZSA8bGludXgvd29ya3F1ZXVlLmg+
CiAjaW5jbHVkZSA8bGludXgvYXRvbWljLmg+CisjaW5jbHVkZSA8eGVuL2V2
ZW50cy5oPgogI2luY2x1ZGUgPHhlbi9pbnRlcmZhY2UvaW8vcGNpaWYuaD4K
IAogI2RlZmluZSBEUlZfTkFNRQkieGVuLXBjaWJhY2siCkBAIC0yNyw2ICsy
OCw4IEBAIHN0cnVjdCBwY2lfZGV2X2VudHJ5IHsKICNkZWZpbmUgUERFVkZf
b3BfYWN0aXZlCQkoMTw8KF9QREVWRl9vcF9hY3RpdmUpKQogI2RlZmluZSBf
UENJQl9vcF9wZW5kaW5nCSgxKQogI2RlZmluZSBQQ0lCX29wX3BlbmRpbmcJ
CSgxPDwoX1BDSUJfb3BfcGVuZGluZykpCisjZGVmaW5lIF9FT0lfcGVuZGlu
ZwkJKDIpCisjZGVmaW5lIEVPSV9wZW5kaW5nCQkoMTw8KF9FT0lfcGVuZGlu
ZykpCiAKIHN0cnVjdCB4ZW5fcGNpYmtfZGV2aWNlIHsKIAl2b2lkICpwY2lf
ZGV2X2RhdGE7CkBAIC0xODMsMTAgKzE4NiwxNSBAQCBzdGF0aWMgaW5saW5l
IHZvaWQgeGVuX3BjaWJrX3JlbGVhc2VfZGV2aWNlcyhzdHJ1Y3QgeGVuX3Bj
aWJrX2RldmljZSAqcGRldikKIGlycXJldHVybl90IHhlbl9wY2lia19oYW5k
bGVfZXZlbnQoaW50IGlycSwgdm9pZCAqZGV2X2lkKTsKIHZvaWQgeGVuX3Bj
aWJrX2RvX29wKHN0cnVjdCB3b3JrX3N0cnVjdCAqZGF0YSk7CiAKK3N0YXRp
YyBpbmxpbmUgdm9pZCB4ZW5fcGNpYmtfbGF0ZWVvaShzdHJ1Y3QgeGVuX3Bj
aWJrX2RldmljZSAqcGRldiwKKwkJCQkgICAgIHVuc2lnbmVkIGludCBlb2lf
ZmxhZykKK3sKKwlpZiAodGVzdF9hbmRfY2xlYXJfYml0KF9FT0lfcGVuZGlu
ZywgJnBkZXYtPmZsYWdzKSkKKwkJeGVuX2lycV9sYXRlZW9pKHBkZXYtPmV2
dGNobl9pcnEsIGVvaV9mbGFnKTsKK30KKwogaW50IHhlbl9wY2lia194ZW5i
dXNfcmVnaXN0ZXIodm9pZCk7CiB2b2lkIHhlbl9wY2lia194ZW5idXNfdW5y
ZWdpc3Rlcih2b2lkKTsKLQotdm9pZCB4ZW5fcGNpYmtfdGVzdF9hbmRfc2No
ZWR1bGVfb3Aoc3RydWN0IHhlbl9wY2lia19kZXZpY2UgKnBkZXYpOwogI2Vu
ZGlmCiAKIC8qIEhhbmRsZXMgc2hhcmVkIElSUXMgdGhhdCBjYW4gdG8gZGV2
aWNlIGRvbWFpbiBhbmQgY29udHJvbCBkb21haW4uICovCmRpZmYgLS1naXQg
YS9kcml2ZXJzL3hlbi94ZW4tcGNpYmFjay9wY2liYWNrX29wcy5jIGIvZHJp
dmVycy94ZW4veGVuLXBjaWJhY2svcGNpYmFja19vcHMuYwppbmRleCBlMTFh
NzQzOGUxYTIuLjNmYmMyMTQ2NmE5MyAxMDA2NDQKLS0tIGEvZHJpdmVycy94
ZW4veGVuLXBjaWJhY2svcGNpYmFja19vcHMuYworKysgYi9kcml2ZXJzL3hl
bi94ZW4tcGNpYmFjay9wY2liYWNrX29wcy5jCkBAIC0yNzYsMjYgKzI3Niw0
MSBAQCBpbnQgeGVuX3BjaWJrX2Rpc2FibGVfbXNpeChzdHJ1Y3QgeGVuX3Bj
aWJrX2RldmljZSAqcGRldiwKIAlyZXR1cm4gMDsKIH0KICNlbmRpZgorCitz
dGF0aWMgaW5saW5lIGJvb2wgeGVuX3BjaWJrX3Rlc3Rfb3BfcGVuZGluZyhz
dHJ1Y3QgeGVuX3BjaWJrX2RldmljZSAqcGRldikKK3sKKwlyZXR1cm4gdGVz
dF9iaXQoX1hFTl9QQ0lGX2FjdGl2ZSwKKwkJCSh1bnNpZ25lZCBsb25nICop
JnBkZXYtPnNoX2luZm8tPmZsYWdzKSAmJgorCSAgICAgICAhdGVzdF9hbmRf
c2V0X2JpdChfUERFVkZfb3BfYWN0aXZlLCAmcGRldi0+ZmxhZ3MpOworfQor
CiAvKgogKiBOb3cgdGhlIHNhbWUgZXZ0Y2huIGlzIHVzZWQgZm9yIGJvdGgg
cGNpZnJvbnQgY29uZl9yZWFkX3dyaXRlIHJlcXVlc3QKICogYXMgd2VsbCBh
cyBwY2llIGFlciBmcm9udCBlbmQgYWNrLiBXZSB1c2UgYSBuZXcgd29ya19x
dWV1ZSB0byBzY2hlZHVsZQogKiB4ZW5fcGNpYmsgY29uZl9yZWFkX3dyaXRl
IHNlcnZpY2UgZm9yIGF2b2lkaW5nIGNvbmZpY3Qgd2l0aCBhZXJfY29yZQog
KiBkb19yZWNvdmVyeSBqb2Igd2hpY2ggYWxzbyB1c2UgdGhlIHN5c3RlbSBk
ZWZhdWx0IHdvcmtfcXVldWUKICovCi12b2lkIHhlbl9wY2lia190ZXN0X2Fu
ZF9zY2hlZHVsZV9vcChzdHJ1Y3QgeGVuX3BjaWJrX2RldmljZSAqcGRldikK
K3N0YXRpYyB2b2lkIHhlbl9wY2lia190ZXN0X2FuZF9zY2hlZHVsZV9vcChz
dHJ1Y3QgeGVuX3BjaWJrX2RldmljZSAqcGRldikKIHsKKwlib29sIGVvaSA9
IHRydWU7CisKIAkvKiBDaGVjayB0aGF0IGZyb250ZW5kIGlzIHJlcXVlc3Rp
bmcgYW4gb3BlcmF0aW9uIGFuZCB0aGF0IHdlIGFyZSBub3QKIAkgKiBhbHJl
YWR5IHByb2Nlc3NpbmcgYSByZXF1ZXN0ICovCi0JaWYgKHRlc3RfYml0KF9Y
RU5fUENJRl9hY3RpdmUsICh1bnNpZ25lZCBsb25nICopJnBkZXYtPnNoX2lu
Zm8tPmZsYWdzKQotCSAgICAmJiAhdGVzdF9hbmRfc2V0X2JpdChfUERFVkZf
b3BfYWN0aXZlLCAmcGRldi0+ZmxhZ3MpKSB7CisJaWYgKHhlbl9wY2lia190
ZXN0X29wX3BlbmRpbmcocGRldikpIHsKIAkJc2NoZWR1bGVfd29yaygmcGRl
di0+b3Bfd29yayk7CisJCWVvaSA9IGZhbHNlOwogCX0KIAkvKl9YRU5fUENJ
Ql9hY3RpdmUgc2hvdWxkIGhhdmUgYmVlbiBjbGVhcmVkIGJ5IHBjaWZyb250
LiBBbmQgYWxzbyBtYWtlCiAJc3VyZSB4ZW5fcGNpYmsgaXMgd2FpdGluZyBm
b3IgYWNrIGJ5IGNoZWNraW5nIF9QQ0lCX29wX3BlbmRpbmcqLwogCWlmICgh
dGVzdF9iaXQoX1hFTl9QQ0lCX2FjdGl2ZSwgKHVuc2lnbmVkIGxvbmcgKikm
cGRldi0+c2hfaW5mby0+ZmxhZ3MpCiAJICAgICYmIHRlc3RfYml0KF9QQ0lC
X29wX3BlbmRpbmcsICZwZGV2LT5mbGFncykpIHsKIAkJd2FrZV91cCgmeGVu
X3BjaWJrX2Flcl93YWl0X3F1ZXVlKTsKKwkJZW9pID0gZmFsc2U7CiAJfQor
CisJLyogRU9JIGlmIHRoZXJlIHdhcyBub3RoaW5nIHRvIGRvLiAqLworCWlm
IChlb2kpCisJCXhlbl9wY2lia19sYXRlZW9pKHBkZXYsIFhFTl9FT0lfRkxB
R19TUFVSSU9VUyk7CiB9CiAKIC8qIFBlcmZvcm1pbmcgdGhlIGNvbmZpZ3Vy
YXRpb24gc3BhY2UgcmVhZHMvd3JpdGVzIG11c3Qgbm90IGJlIGRvbmUgaW4g
YXRvbWljCkBAIC0zMDMsMTAgKzMxOCw4IEBAIHZvaWQgeGVuX3BjaWJrX3Rl
c3RfYW5kX3NjaGVkdWxlX29wKHN0cnVjdCB4ZW5fcGNpYmtfZGV2aWNlICpw
ZGV2KQogICogdXNlIG9mIHNlbWFwaG9yZXMpLiBUaGlzIGZ1bmN0aW9uIGlz
IGludGVuZGVkIHRvIGJlIGNhbGxlZCBmcm9tIGEgd29yawogICogcXVldWUg
aW4gcHJvY2VzcyBjb250ZXh0IHRha2luZyBhIHN0cnVjdCB4ZW5fcGNpYmtf
ZGV2aWNlIGFzIGEgcGFyYW1ldGVyICovCiAKLXZvaWQgeGVuX3BjaWJrX2Rv
X29wKHN0cnVjdCB3b3JrX3N0cnVjdCAqZGF0YSkKK3N0YXRpYyB2b2lkIHhl
bl9wY2lia19kb19vbmVfb3Aoc3RydWN0IHhlbl9wY2lia19kZXZpY2UgKnBk
ZXYpCiB7Ci0Jc3RydWN0IHhlbl9wY2lia19kZXZpY2UgKnBkZXYgPQotCQlj
b250YWluZXJfb2YoZGF0YSwgc3RydWN0IHhlbl9wY2lia19kZXZpY2UsIG9w
X3dvcmspOwogCXN0cnVjdCBwY2lfZGV2ICpkZXY7CiAJc3RydWN0IHhlbl9w
Y2lia19kZXZfZGF0YSAqZGV2X2RhdGEgPSBOVUxMOwogCXN0cnVjdCB4ZW5f
cGNpX29wICpvcCA9ICZwZGV2LT5vcDsKQEAgLTM3OSwxNiArMzkyLDMxIEBA
IHZvaWQgeGVuX3BjaWJrX2RvX29wKHN0cnVjdCB3b3JrX3N0cnVjdCAqZGF0
YSkKIAlzbXBfbWJfX2JlZm9yZV9hdG9taWMoKTsgLyogL2FmdGVyLyBjbGVh
cmluZyBQQ0lGX2FjdGl2ZSAqLwogCWNsZWFyX2JpdChfUERFVkZfb3BfYWN0
aXZlLCAmcGRldi0+ZmxhZ3MpOwogCXNtcF9tYl9fYWZ0ZXJfYXRvbWljKCk7
IC8qIC9iZWZvcmUvIGZpbmFsIGNoZWNrIGZvciB3b3JrICovCit9CiAKLQkv
KiBDaGVjayB0byBzZWUgaWYgdGhlIGRyaXZlciBkb21haW4gdHJpZWQgdG8g
c3RhcnQgYW5vdGhlciByZXF1ZXN0IGluCi0JICogYmV0d2VlbiBjbGVhcmlu
ZyBfWEVOX1BDSUZfYWN0aXZlIGFuZCBjbGVhcmluZyBfUERFVkZfb3BfYWN0
aXZlLgotCSovCi0JeGVuX3BjaWJrX3Rlc3RfYW5kX3NjaGVkdWxlX29wKHBk
ZXYpOwordm9pZCB4ZW5fcGNpYmtfZG9fb3Aoc3RydWN0IHdvcmtfc3RydWN0
ICpkYXRhKQoreworCXN0cnVjdCB4ZW5fcGNpYmtfZGV2aWNlICpwZGV2ID0K
KwkJY29udGFpbmVyX29mKGRhdGEsIHN0cnVjdCB4ZW5fcGNpYmtfZGV2aWNl
LCBvcF93b3JrKTsKKworCWRvIHsKKwkJeGVuX3BjaWJrX2RvX29uZV9vcChw
ZGV2KTsKKwl9IHdoaWxlICh4ZW5fcGNpYmtfdGVzdF9vcF9wZW5kaW5nKHBk
ZXYpKTsKKworCXhlbl9wY2lia19sYXRlZW9pKHBkZXYsIDApOwogfQogCiBp
cnFyZXR1cm5fdCB4ZW5fcGNpYmtfaGFuZGxlX2V2ZW50KGludCBpcnEsIHZv
aWQgKmRldl9pZCkKIHsKIAlzdHJ1Y3QgeGVuX3BjaWJrX2RldmljZSAqcGRl
diA9IGRldl9pZDsKKwlib29sIGVvaTsKKworCS8qIElSUXMgbWlnaHQgY29t
ZSBpbiBiZWZvcmUgcGRldi0+ZXZ0Y2huX2lycSBpcyB3cml0dGVuLiAqLwor
CWlmICh1bmxpa2VseShwZGV2LT5ldnRjaG5faXJxICE9IGlycSkpCisJCXBk
ZXYtPmV2dGNobl9pcnEgPSBpcnE7CisKKwllb2kgPSB0ZXN0X2FuZF9zZXRf
Yml0KF9FT0lfcGVuZGluZywgJnBkZXYtPmZsYWdzKTsKKwlXQVJOKGVvaSwg
IklSUSB3aGlsZSBFT0kgcGVuZGluZ1xuIik7CiAKIAl4ZW5fcGNpYmtfdGVz
dF9hbmRfc2NoZWR1bGVfb3AocGRldik7CiAKZGlmZiAtLWdpdCBhL2RyaXZl
cnMveGVuL3hlbi1wY2liYWNrL3hlbmJ1cy5jIGIvZHJpdmVycy94ZW4veGVu
LXBjaWJhY2sveGVuYnVzLmMKaW5kZXggYjUwMDQ2NmE2YzM3Li40Yjk5ZWMz
ZGVjNTggMTAwNjQ0Ci0tLSBhL2RyaXZlcnMveGVuL3hlbi1wY2liYWNrL3hl
bmJ1cy5jCisrKyBiL2RyaXZlcnMveGVuL3hlbi1wY2liYWNrL3hlbmJ1cy5j
CkBAIC0xMjMsNyArMTIzLDcgQEAgc3RhdGljIGludCB4ZW5fcGNpYmtfZG9f
YXR0YWNoKHN0cnVjdCB4ZW5fcGNpYmtfZGV2aWNlICpwZGV2LCBpbnQgZ250
X3JlZiwKIAogCXBkZXYtPnNoX2luZm8gPSB2YWRkcjsKIAotCWVyciA9IGJp
bmRfaW50ZXJkb21haW5fZXZ0Y2huX3RvX2lycWhhbmRsZXIoCisJZXJyID0g
YmluZF9pbnRlcmRvbWFpbl9ldnRjaG5fdG9faXJxaGFuZGxlcl9sYXRlZW9p
KAogCQlwZGV2LT54ZGV2LT5vdGhlcmVuZF9pZCwgcmVtb3RlX2V2dGNobiwg
eGVuX3BjaWJrX2hhbmRsZV9ldmVudCwKIAkJMCwgRFJWX05BTUUsIHBkZXYp
OwogCWlmIChlcnIgPCAwKSB7Ci0tIAoyLjI2LjIKCg==

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-09.patch"
Content-Disposition: attachment; filename="xsa332-linux-09.patch"
Content-Transfer-Encoding: base64

RnJvbSA3ZDNjZmJlZWZlMTZmODkwOWMyOTI0ZTk4MDY2YmMzMTVhYWQ4NjU2
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzoyOSArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMDkvMTJdIHhlbi9ldmVudHM6
IHN3aXRjaCB1c2VyIGV2ZW50IGNoYW5uZWxzIHRvIGxhdGVlb2kKIG1vZGVs
CgpJbnN0ZWFkIG9mIGRpc2FibGluZyB0aGUgaXJxIHdoZW4gYW4gZXZlbnQg
aXMgcmVjZWl2ZWQgYW5kIGVuYWJsaW5nCml0IGFnYWluIHdoZW4gaGFuZGxl
ZCBieSB0aGUgdXNlciBwcm9jZXNzIHVzZSB0aGUgbGF0ZWVvaSBtb2RlbC4K
ClRoaXMgaXMgcGFydCBvZiBYU0EtMzMyLgoKQ2M6IHN0YWJsZUB2Z2VyLmtl
cm5lbC5vcmcKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8anVsaWVuQHhl
bi5vcmc+ClNpZ25lZC1vZmYtYnk6IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0Bz
dXNlLmNvbT4KVGVzdGVkLWJ5OiBTdGVmYW5vIFN0YWJlbGxpbmkgPHNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmc+ClJldmlld2VkLWJ5OiBTdGVmYW5vIFN0YWJl
bGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+ClJldmlld2VkLWJ5OiBK
YW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJldmlld2VkLWJ5OiBX
ZWkgTGl1IDx3bEB4ZW4ub3JnPgotLS0KIGRyaXZlcnMveGVuL2V2dGNobi5j
IHwgNyArKystLS0tCiAxIGZpbGUgY2hhbmdlZCwgMyBpbnNlcnRpb25zKCsp
LCA0IGRlbGV0aW9ucygtKQoKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL2V2
dGNobi5jIGIvZHJpdmVycy94ZW4vZXZ0Y2huLmMKaW5kZXggNmUwYjFkZDU1
NzNjLi41ZGMwMTZkNjhmODMgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMveGVuL2V2
dGNobi5jCisrKyBiL2RyaXZlcnMveGVuL2V2dGNobi5jCkBAIC0xNjcsNyAr
MTY3LDYgQEAgc3RhdGljIGlycXJldHVybl90IGV2dGNobl9pbnRlcnJ1cHQo
aW50IGlycSwgdm9pZCAqZGF0YSkKIAkgICAgICJJbnRlcnJ1cHQgZm9yIHBv
cnQgJXUsIGJ1dCBhcHBhcmVudGx5IG5vdCBlbmFibGVkOyBwZXItdXNlciAl
cFxuIiwKIAkgICAgIGV2dGNobi0+cG9ydCwgdSk7CiAKLQlkaXNhYmxlX2ly
cV9ub3N5bmMoaXJxKTsKIAlldnRjaG4tPmVuYWJsZWQgPSBmYWxzZTsKIAog
CXNwaW5fbG9jaygmdS0+cmluZ19wcm9kX2xvY2spOwpAQCAtMjkzLDcgKzI5
Miw3IEBAIHN0YXRpYyBzc2l6ZV90IGV2dGNobl93cml0ZShzdHJ1Y3QgZmls
ZSAqZmlsZSwgY29uc3QgY2hhciBfX3VzZXIgKmJ1ZiwKIAkJZXZ0Y2huID0g
ZmluZF9ldnRjaG4odSwgcG9ydCk7CiAJCWlmIChldnRjaG4gJiYgIWV2dGNo
bi0+ZW5hYmxlZCkgewogCQkJZXZ0Y2huLT5lbmFibGVkID0gdHJ1ZTsKLQkJ
CWVuYWJsZV9pcnEoaXJxX2Zyb21fZXZ0Y2huKHBvcnQpKTsKKwkJCXhlbl9p
cnFfbGF0ZWVvaShpcnFfZnJvbV9ldnRjaG4ocG9ydCksIDApOwogCQl9CiAJ
fQogCkBAIC0zOTMsOCArMzkyLDggQEAgc3RhdGljIGludCBldnRjaG5fYmlu
ZF90b191c2VyKHN0cnVjdCBwZXJfdXNlcl9kYXRhICp1LCBldnRjaG5fcG9y
dF90IHBvcnQpCiAJaWYgKHJjIDwgMCkKIAkJZ290byBlcnI7CiAKLQlyYyA9
IGJpbmRfZXZ0Y2huX3RvX2lycWhhbmRsZXIocG9ydCwgZXZ0Y2huX2ludGVy
cnVwdCwgMCwKLQkJCQkgICAgICAgdS0+bmFtZSwgZXZ0Y2huKTsKKwlyYyA9
IGJpbmRfZXZ0Y2huX3RvX2lycWhhbmRsZXJfbGF0ZWVvaShwb3J0LCBldnRj
aG5faW50ZXJydXB0LCAwLAorCQkJCQkgICAgICAgdS0+bmFtZSwgZXZ0Y2hu
KTsKIAlpZiAocmMgPCAwKQogCQlnb3RvIGVycjsKIAotLSAKMi4yNi4yCgo=

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-10.patch"
Content-Disposition: attachment; filename="xsa332-linux-10.patch"
Content-Transfer-Encoding: base64

RnJvbSA3NmZmNGJjYzBlNzlhZDc1NDY5OTAwMDYyMTM0YTM1NGI2ZTYzNzA4
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IFN1biwgMTMgU2VwIDIwMjAgMTQ6
MjM6MDIgKzAyMDAKU3ViamVjdDogW1BBVENIIDEwLzEyXSB4ZW4vZXZlbnRz
OiB1c2UgYSBjb21tb24gY3B1IGhvdHBsdWcgaG9vayBmb3IgZXZlbnQKIGNo
YW5uZWxzCgpUb2RheSBvbmx5IGZpZm8gZXZlbnQgY2hhbm5lbHMgaGF2ZSBh
IGNwdSBob3RwbHVnIGNhbGxiYWNrLiBJbiBvcmRlcgp0byBwcmVwYXJlIGZv
ciBtb3JlIHBlcmNwdSAoZGUpaW5pdCB3b3JrIG1vdmUgdGhhdCBjYWxsYmFj
ayBpbnRvCmV2ZW50c19iYXNlLmMgYW5kIGFkZCBwZXJjcHVfaW5pdCgpIGFu
ZCBwZXJjcHVfZGVpbml0KCkgaG9va3MgdG8Kc3RydWN0IGV2dGNobl9vcHMu
CgpUaGlzIGlzIHBhcnQgb2YgWFNBLTMzMi4KCkNjOiBzdGFibGVAdmdlci5r
ZXJuZWwub3JnClJlcG9ydGVkLWJ5OiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4
ZW4ub3JnPgpTaWduZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NA
c3VzZS5jb20+ClJldmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hA
c3VzZS5jb20+ClJldmlld2VkLWJ5OiBXZWkgTGl1IDx3bEB4ZW4ub3JnPgot
LS0KIGRyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jICAgICB8IDI1
ICsrKysrKysrKysrKysrKysrCiBkcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRz
X2ZpZm8uYyAgICAgfCA0MCArKysrKysrKysrKysrLS0tLS0tLS0tLS0tLS0t
CiBkcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2ludGVybmFsLmggfCAgMyAr
KysKIDMgZmlsZXMgY2hhbmdlZCwgNDcgaW5zZXJ0aW9ucygrKSwgMjEgZGVs
ZXRpb25zKC0pCgpkaWZmIC0tZ2l0IGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2
ZW50c19iYXNlLmMgYi9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2Uu
YwppbmRleCAxZWJhOGJjMjA5YWQuLjljYmZlYTVlOWEwOCAxMDA2NDQKLS0t
IGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19iYXNlLmMKKysrIGIvZHJp
dmVycy94ZW4vZXZlbnRzL2V2ZW50c19iYXNlLmMKQEAgLTM0LDYgKzM0LDcg
QEAKICNpbmNsdWRlIDxsaW51eC9pcnFuci5oPgogI2luY2x1ZGUgPGxpbnV4
L3BjaS5oPgogI2luY2x1ZGUgPGxpbnV4L3NwaW5sb2NrLmg+CisjaW5jbHVk
ZSA8bGludXgvY3B1aG90cGx1Zy5oPgogCiAjaWZkZWYgQ09ORklHX1g4Ngog
I2luY2x1ZGUgPGFzbS9kZXNjLmg+CkBAIC0xODMwLDYgKzE4MzEsMjYgQEAg
c3RhdGljIGlubGluZSB2b2lkIHhlbl9hbGxvY19jYWxsYmFja192ZWN0b3Io
dm9pZCkge30KIHN0YXRpYyBib29sIGZpZm9fZXZlbnRzID0gdHJ1ZTsKIG1v
ZHVsZV9wYXJhbShmaWZvX2V2ZW50cywgYm9vbCwgMCk7CiAKK3N0YXRpYyBp
bnQgeGVuX2V2dGNobl9jcHVfcHJlcGFyZSh1bnNpZ25lZCBpbnQgY3B1KQor
eworCWludCByZXQgPSAwOworCisJaWYgKGV2dGNobl9vcHMtPnBlcmNwdV9p
bml0KQorCQlyZXQgPSBldnRjaG5fb3BzLT5wZXJjcHVfaW5pdChjcHUpOwor
CisJcmV0dXJuIHJldDsKK30KKworc3RhdGljIGludCB4ZW5fZXZ0Y2huX2Nw
dV9kZWFkKHVuc2lnbmVkIGludCBjcHUpCit7CisJaW50IHJldCA9IDA7CisK
KwlpZiAoZXZ0Y2huX29wcy0+cGVyY3B1X2RlaW5pdCkKKwkJcmV0ID0gZXZ0
Y2huX29wcy0+cGVyY3B1X2RlaW5pdChjcHUpOworCisJcmV0dXJuIHJldDsK
K30KKwogdm9pZCBfX2luaXQgeGVuX2luaXRfSVJRKHZvaWQpCiB7CiAJaW50
IHJldCA9IC1FSU5WQUw7CkBAIC0xODQwLDYgKzE4NjEsMTAgQEAgdm9pZCBf
X2luaXQgeGVuX2luaXRfSVJRKHZvaWQpCiAJaWYgKHJldCA8IDApCiAJCXhl
bl9ldnRjaG5fMmxfaW5pdCgpOwogCisJY3B1aHBfc2V0dXBfc3RhdGVfbm9j
YWxscyhDUFVIUF9YRU5fRVZUQ0hOX1BSRVBBUkUsCisJCQkJICAieGVuL2V2
dGNobjpwcmVwYXJlIiwKKwkJCQkgIHhlbl9ldnRjaG5fY3B1X3ByZXBhcmUs
IHhlbl9ldnRjaG5fY3B1X2RlYWQpOworCiAJZXZ0Y2huX3RvX2lycSA9IGtj
YWxsb2MoRVZUQ0hOX1JPVyh4ZW5fZXZ0Y2huX21heF9jaGFubmVscygpKSwK
IAkJCQlzaXplb2YoKmV2dGNobl90b19pcnEpLCBHRlBfS0VSTkVMKTsKIAlC
VUdfT04oIWV2dGNobl90b19pcnEpOwpkaWZmIC0tZ2l0IGEvZHJpdmVycy94
ZW4vZXZlbnRzL2V2ZW50c19maWZvLmMgYi9kcml2ZXJzL3hlbi9ldmVudHMv
ZXZlbnRzX2ZpZm8uYwppbmRleCA3ZmQzOWM2NGQ0YjUuLjQwZTRjYTE2ODVh
YSAxMDA2NDQKLS0tIGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19maWZv
LmMKKysrIGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19maWZvLmMKQEAg
LTM4NSwyMSArMzg1LDYgQEAgc3RhdGljIHZvaWQgZXZ0Y2huX2ZpZm9fcmVz
dW1lKHZvaWQpCiAJZXZlbnRfYXJyYXlfcGFnZXMgPSAwOwogfQogCi1zdGF0
aWMgY29uc3Qgc3RydWN0IGV2dGNobl9vcHMgZXZ0Y2huX29wc19maWZvID0g
ewotCS5tYXhfY2hhbm5lbHMgICAgICA9IGV2dGNobl9maWZvX21heF9jaGFu
bmVscywKLQkubnJfY2hhbm5lbHMgICAgICAgPSBldnRjaG5fZmlmb19ucl9j
aGFubmVscywKLQkuc2V0dXAgICAgICAgICAgICAgPSBldnRjaG5fZmlmb19z
ZXR1cCwKLQkuYmluZF90b19jcHUgICAgICAgPSBldnRjaG5fZmlmb19iaW5k
X3RvX2NwdSwKLQkuY2xlYXJfcGVuZGluZyAgICAgPSBldnRjaG5fZmlmb19j
bGVhcl9wZW5kaW5nLAotCS5zZXRfcGVuZGluZyAgICAgICA9IGV2dGNobl9m
aWZvX3NldF9wZW5kaW5nLAotCS5pc19wZW5kaW5nICAgICAgICA9IGV2dGNo
bl9maWZvX2lzX3BlbmRpbmcsCi0JLnRlc3RfYW5kX3NldF9tYXNrID0gZXZ0
Y2huX2ZpZm9fdGVzdF9hbmRfc2V0X21hc2ssCi0JLm1hc2sgICAgICAgICAg
ICAgID0gZXZ0Y2huX2ZpZm9fbWFzaywKLQkudW5tYXNrICAgICAgICAgICAg
PSBldnRjaG5fZmlmb191bm1hc2ssCi0JLmhhbmRsZV9ldmVudHMgICAgID0g
ZXZ0Y2huX2ZpZm9faGFuZGxlX2V2ZW50cywKLQkucmVzdW1lICAgICAgICAg
ICAgPSBldnRjaG5fZmlmb19yZXN1bWUsCi19OwotCiBzdGF0aWMgaW50IGV2
dGNobl9maWZvX2FsbG9jX2NvbnRyb2xfYmxvY2sodW5zaWduZWQgY3B1KQog
ewogCXZvaWQgKmNvbnRyb2xfYmxvY2sgPSBOVUxMOwpAQCAtNDIyLDE5ICs0
MDcsMzYgQEAgc3RhdGljIGludCBldnRjaG5fZmlmb19hbGxvY19jb250cm9s
X2Jsb2NrKHVuc2lnbmVkIGNwdSkKIAlyZXR1cm4gcmV0OwogfQogCi1zdGF0
aWMgaW50IHhlbl9ldnRjaG5fY3B1X3ByZXBhcmUodW5zaWduZWQgaW50IGNw
dSkKK3N0YXRpYyBpbnQgZXZ0Y2huX2ZpZm9fcGVyY3B1X2luaXQodW5zaWdu
ZWQgaW50IGNwdSkKIHsKIAlpZiAoIXBlcl9jcHUoY3B1X2NvbnRyb2xfYmxv
Y2ssIGNwdSkpCiAJCXJldHVybiBldnRjaG5fZmlmb19hbGxvY19jb250cm9s
X2Jsb2NrKGNwdSk7CiAJcmV0dXJuIDA7CiB9CiAKLXN0YXRpYyBpbnQgeGVu
X2V2dGNobl9jcHVfZGVhZCh1bnNpZ25lZCBpbnQgY3B1KQorc3RhdGljIGlu
dCBldnRjaG5fZmlmb19wZXJjcHVfZGVpbml0KHVuc2lnbmVkIGludCBjcHUp
CiB7CiAJX19ldnRjaG5fZmlmb19oYW5kbGVfZXZlbnRzKGNwdSwgdHJ1ZSk7
CiAJcmV0dXJuIDA7CiB9CiAKK3N0YXRpYyBjb25zdCBzdHJ1Y3QgZXZ0Y2hu
X29wcyBldnRjaG5fb3BzX2ZpZm8gPSB7CisJLm1heF9jaGFubmVscyAgICAg
ID0gZXZ0Y2huX2ZpZm9fbWF4X2NoYW5uZWxzLAorCS5ucl9jaGFubmVscyAg
ICAgICA9IGV2dGNobl9maWZvX25yX2NoYW5uZWxzLAorCS5zZXR1cCAgICAg
ICAgICAgICA9IGV2dGNobl9maWZvX3NldHVwLAorCS5iaW5kX3RvX2NwdSAg
ICAgICA9IGV2dGNobl9maWZvX2JpbmRfdG9fY3B1LAorCS5jbGVhcl9wZW5k
aW5nICAgICA9IGV2dGNobl9maWZvX2NsZWFyX3BlbmRpbmcsCisJLnNldF9w
ZW5kaW5nICAgICAgID0gZXZ0Y2huX2ZpZm9fc2V0X3BlbmRpbmcsCisJLmlz
X3BlbmRpbmcgICAgICAgID0gZXZ0Y2huX2ZpZm9faXNfcGVuZGluZywKKwku
dGVzdF9hbmRfc2V0X21hc2sgPSBldnRjaG5fZmlmb190ZXN0X2FuZF9zZXRf
bWFzaywKKwkubWFzayAgICAgICAgICAgICAgPSBldnRjaG5fZmlmb19tYXNr
LAorCS51bm1hc2sgICAgICAgICAgICA9IGV2dGNobl9maWZvX3VubWFzaywK
KwkuaGFuZGxlX2V2ZW50cyAgICAgPSBldnRjaG5fZmlmb19oYW5kbGVfZXZl
bnRzLAorCS5yZXN1bWUgICAgICAgICAgICA9IGV2dGNobl9maWZvX3Jlc3Vt
ZSwKKwkucGVyY3B1X2luaXQgICAgICAgPSBldnRjaG5fZmlmb19wZXJjcHVf
aW5pdCwKKwkucGVyY3B1X2RlaW5pdCAgICAgPSBldnRjaG5fZmlmb19wZXJj
cHVfZGVpbml0LAorfTsKKwogaW50IF9faW5pdCB4ZW5fZXZ0Y2huX2ZpZm9f
aW5pdCh2b2lkKQogewogCWludCBjcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7
CkBAIC00NDgsOSArNDUwLDUgQEAgaW50IF9faW5pdCB4ZW5fZXZ0Y2huX2Zp
Zm9faW5pdCh2b2lkKQogCiAJZXZ0Y2huX29wcyA9ICZldnRjaG5fb3BzX2Zp
Zm87CiAKLQljcHVocF9zZXR1cF9zdGF0ZV9ub2NhbGxzKENQVUhQX1hFTl9F
VlRDSE5fUFJFUEFSRSwKLQkJCQkgICJ4ZW4vZXZ0Y2huOnByZXBhcmUiLAot
CQkJCSAgeGVuX2V2dGNobl9jcHVfcHJlcGFyZSwgeGVuX2V2dGNobl9jcHVf
ZGVhZCk7Ci0KIAlyZXR1cm4gcmV0OwogfQpkaWZmIC0tZ2l0IGEvZHJpdmVy
cy94ZW4vZXZlbnRzL2V2ZW50c19pbnRlcm5hbC5oIGIvZHJpdmVycy94ZW4v
ZXZlbnRzL2V2ZW50c19pbnRlcm5hbC5oCmluZGV4IDEwNjg0ZmViMDk0ZS4u
NTU4YWJlYTE5ZDBkIDEwMDY0NAotLS0gYS9kcml2ZXJzL3hlbi9ldmVudHMv
ZXZlbnRzX2ludGVybmFsLmgKKysrIGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2
ZW50c19pbnRlcm5hbC5oCkBAIC02OSw2ICs2OSw5IEBAIHN0cnVjdCBldnRj
aG5fb3BzIHsKIAogCXZvaWQgKCpoYW5kbGVfZXZlbnRzKSh1bnNpZ25lZCBj
cHUpOwogCXZvaWQgKCpyZXN1bWUpKHZvaWQpOworCisJaW50ICgqcGVyY3B1
X2luaXQpKHVuc2lnbmVkIGludCBjcHUpOworCWludCAoKnBlcmNwdV9kZWlu
aXQpKHVuc2lnbmVkIGludCBjcHUpOwogfTsKIAogZXh0ZXJuIGNvbnN0IHN0
cnVjdCBldnRjaG5fb3BzICpldnRjaG5fb3BzOwotLSAKMi4yNi4yCgo=

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-11.patch"
Content-Disposition: attachment; filename="xsa332-linux-11.patch"
Content-Transfer-Encoding: base64

RnJvbSBhYjJkYjdjOGVhYWY3ZDFmYjU0NjYxNzUwM2YxN2I3OTQ5N2VkY2U3
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgNyBTZXAgMjAyMCAxNTo0
NzozMCArMDIwMApTdWJqZWN0OiBbUEFUQ0ggMTEvMTJdIHhlbi9ldmVudHM6
IGRlZmVyIGVvaSBpbiBjYXNlIG9mIGV4Y2Vzc2l2ZSBudW1iZXIKIG9mIGV2
ZW50cwoKSW4gY2FzZSByb2d1ZSBndWVzdHMgYXJlIHNlbmRpbmcgZXZlbnRz
IGF0IGhpZ2ggZnJlcXVlbmN5IGl0IG1pZ2h0CmhhcHBlbiB0aGF0IHhlbl9l
dnRjaG5fZG9fdXBjYWxsKCkgd29uJ3Qgc3RvcCBwcm9jZXNzaW5nIGV2ZW50
cyBpbgpkb20wLiBBcyB0aGlzIGlzIGRvbmUgaW4gaXJxIGhhbmRsaW5nIGEg
Y3Jhc2ggbWlnaHQgYmUgdGhlIHJlc3VsdC4KCkluIG9yZGVyIHRvIGF2b2lk
IHRoYXQsIGRlbGF5IGZ1cnRoZXIgaW50ZXItZG9tYWluIGV2ZW50cyBhZnRl
ciBzb21lCnRpbWUgaW4geGVuX2V2dGNobl9kb191cGNhbGwoKSBieSBmb3Jj
aW5nIGVvaSBwcm9jZXNzaW5nIGludG8gYQp3b3JrZXIgb24gdGhlIHNhbWUg
Y3B1LCB0aHVzIGluaGliaXRpbmcgbmV3IGV2ZW50cyBjb21pbmcgaW4uCgpU
aGUgdGltZSBhZnRlciB3aGljaCBlb2kgcHJvY2Vzc2luZyBpcyB0byBiZSBk
ZWxheWVkIGlzIGNvbmZpZ3VyYWJsZQp2aWEgYSBuZXcgbW9kdWxlIHBhcmFt
ZXRlciAiZXZlbnRfbG9vcF90aW1lb3V0IiB3aGljaCBzcGVjaWZpZXMgdGhl
Cm1heGltdW0gZXZlbnQgbG9vcCB0aW1lIGluIGppZmZpZXMgKGRlZmF1bHQ6
IDIsIHRoZSB2YWx1ZSB3YXMgY2hvc2VuCmFmdGVyIHNvbWUgdGVzdHMgc2hv
d2luZyB0aGF0IGEgdmFsdWUgb2YgMiB3YXMgdGhlIGxvd2VzdCB3aXRoIGFu
Cm9ubHkgc2xpZ2h0IGRyb3Agb2YgZG9tMCBuZXR3b3JrIHRocm91Z2hwdXQg
d2hpbGUgbXVsdGlwbGUgZ3Vlc3RzCnBlcmZvcm1lZCBhbiBldmVudCBzdG9y
bSkuCgpIb3cgbG9uZyBlb2kgcHJvY2Vzc2luZyB3aWxsIGJlIGRlbGF5ZWQg
Y2FuIGJlIHNwZWNpZmllZCB2aWEgYW5vdGhlcgpwYXJhbWV0ZXIgImV2ZW50
X2VvaV9kZWxheSIgKGFnYWluIGluIGppZmZpZXMsIGRlZmF1bHQgMTAsIGFn
YWluIHRoZQp2YWx1ZSB3YXMgY2hvc2VuIGFmdGVyIHRlc3Rpbmcgd2l0aCBk
aWZmZXJlbnQgZGVsYXkgdmFsdWVzKS4KClRoaXMgaXMgcGFydCBvZiBYU0Et
MzMyLgoKQ2M6IHN0YWJsZUB2Z2VyLmtlcm5lbC5vcmcKUmVwb3J0ZWQtYnk6
IEp1bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+ClNpZ25lZC1vZmYtYnk6
IEp1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6
IFN0ZWZhbm8gU3RhYmVsbGluaSA8c3N0YWJlbGxpbmlAa2VybmVsLm9yZz4K
UmV2aWV3ZWQtYnk6IFdlaSBMaXUgPHdsQHhlbi5vcmc+Ci0tLQogLi4uL2Fk
bWluLWd1aWRlL2tlcm5lbC1wYXJhbWV0ZXJzLnR4dCAgICAgICAgIHwgICA4
ICsKIGRyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfMmwuYyAgICAgICAgICAg
ICAgICB8ICAgNyArLQogZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19iYXNl
LmMgICAgICAgICAgICAgIHwgMTg5ICsrKysrKysrKysrKysrKysrLQogZHJp
dmVycy94ZW4vZXZlbnRzL2V2ZW50c19maWZvLmMgICAgICAgICAgICAgIHwg
IDMwICstLQogZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19pbnRlcm5hbC5o
ICAgICAgICAgIHwgIDE0ICstCiA1IGZpbGVzIGNoYW5nZWQsIDIxNiBpbnNl
cnRpb25zKCspLCAzMiBkZWxldGlvbnMoLSkKCmRpZmYgLS1naXQgYS9Eb2N1
bWVudGF0aW9uL2FkbWluLWd1aWRlL2tlcm5lbC1wYXJhbWV0ZXJzLnR4dCBi
L0RvY3VtZW50YXRpb24vYWRtaW4tZ3VpZGUva2VybmVsLXBhcmFtZXRlcnMu
dHh0CmluZGV4IGExMDY4NzQyYTZkZi4uODlkOTc3ZjBiNzg2IDEwMDY0NAot
LS0gYS9Eb2N1bWVudGF0aW9uL2FkbWluLWd1aWRlL2tlcm5lbC1wYXJhbWV0
ZXJzLnR4dAorKysgYi9Eb2N1bWVudGF0aW9uL2FkbWluLWd1aWRlL2tlcm5l
bC1wYXJhbWV0ZXJzLnR4dApAQCAtNTgyOCw2ICs1ODI4LDE0IEBACiAJCQlp
bXByb3ZlIHRpbWVyIHJlc29sdXRpb24gYXQgdGhlIGV4cGVuc2Ugb2YgcHJv
Y2Vzc2luZwogCQkJbW9yZSB0aW1lciBpbnRlcnJ1cHRzLgogCisJeGVuLmV2
ZW50X2VvaV9kZWxheT0JW1hFTl0KKwkJCUhvdyBsb25nIHRvIGRlbGF5IEVP
SSBoYW5kbGluZyBpbiBjYXNlIG9mIGV2ZW50CisJCQlzdG9ybXMgKGppZmZp
ZXMpLiBEZWZhdWx0IGlzIDEwLgorCisJeGVuLmV2ZW50X2xvb3BfdGltZW91
dD0JW1hFTl0KKwkJCUFmdGVyIHdoaWNoIHRpbWUgKGppZmZpZXMpIHRoZSBl
dmVudCBoYW5kbGluZyBsb29wCisJCQlzaG91bGQgc3RhcnQgdG8gZGVsYXkg
RU9JIGhhbmRsaW5nLiBEZWZhdWx0IGlzIDIuCisKIAlub3B2PQkJW1g4NixY
RU4sS1ZNLEhZUEVSX1YsVk1XQVJFXQogCQkJRGlzYWJsZXMgdGhlIFBWIG9w
dGltaXphdGlvbnMgZm9yY2luZyB0aGUgZ3Vlc3QgdG8gcnVuCiAJCQlhcyBn
ZW5lcmljIGd1ZXN0IHdpdGggbm8gUFYgZHJpdmVycy4gQ3VycmVudGx5IHN1
cHBvcnQKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNf
MmwuYyBiL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfMmwuYwppbmRleCBl
MWFmNWUwOTNmZjQuLmZlNWFkMGU4OWNkOCAxMDA2NDQKLS0tIGEvZHJpdmVy
cy94ZW4vZXZlbnRzL2V2ZW50c18ybC5jCisrKyBiL2RyaXZlcnMveGVuL2V2
ZW50cy9ldmVudHNfMmwuYwpAQCAtMTYxLDcgKzE2MSw3IEBAIHN0YXRpYyBp
bmxpbmUgeGVuX3Vsb25nX3QgYWN0aXZlX2V2dGNobnModW5zaWduZWQgaW50
IGNwdSwKICAqIGEgYml0c2V0IG9mIHdvcmRzIHdoaWNoIGNvbnRhaW4gcGVu
ZGluZyBldmVudCBiaXRzLiAgVGhlIHNlY29uZAogICogbGV2ZWwgaXMgYSBi
aXRzZXQgb2YgcGVuZGluZyBldmVudHMgdGhlbXNlbHZlcy4KICAqLwotc3Rh
dGljIHZvaWQgZXZ0Y2huXzJsX2hhbmRsZV9ldmVudHModW5zaWduZWQgY3B1
KQorc3RhdGljIHZvaWQgZXZ0Y2huXzJsX2hhbmRsZV9ldmVudHModW5zaWdu
ZWQgY3B1LCBzdHJ1Y3QgZXZ0Y2huX2xvb3BfY3RybCAqY3RybCkKIHsKIAlp
bnQgaXJxOwogCXhlbl91bG9uZ190IHBlbmRpbmdfd29yZHM7CkBAIC0yNDIs
MTAgKzI0Miw3IEBAIHN0YXRpYyB2b2lkIGV2dGNobl8ybF9oYW5kbGVfZXZl
bnRzKHVuc2lnbmVkIGNwdSkKIAogCQkJLyogUHJvY2VzcyBwb3J0LiAqLwog
CQkJcG9ydCA9ICh3b3JkX2lkeCAqIEJJVFNfUEVSX0VWVENITl9XT1JEKSAr
IGJpdF9pZHg7Ci0JCQlpcnEgPSBnZXRfZXZ0Y2huX3RvX2lycShwb3J0KTsK
LQotCQkJaWYgKGlycSAhPSAtMSkKLQkJCQlnZW5lcmljX2hhbmRsZV9pcnEo
aXJxKTsKKwkJCWhhbmRsZV9pcnFfZm9yX3BvcnQocG9ydCwgY3RybCk7CiAK
IAkJCWJpdF9pZHggPSAoYml0X2lkeCArIDEpICUgQklUU19QRVJfRVZUQ0hO
X1dPUkQ7CiAKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVu
dHNfYmFzZS5jIGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19iYXNlLmMK
aW5kZXggOWNiZmVhNWU5YTA4Li5jZGUwOTZhNmYxMWQgMTAwNjQ0Ci0tLSBh
L2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jCisrKyBiL2RyaXZl
cnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jCkBAIC0zNSw2ICszNSw4IEBA
CiAjaW5jbHVkZSA8bGludXgvcGNpLmg+CiAjaW5jbHVkZSA8bGludXgvc3Bp
bmxvY2suaD4KICNpbmNsdWRlIDxsaW51eC9jcHVob3RwbHVnLmg+CisjaW5j
bHVkZSA8bGludXgvYXRvbWljLmg+CisjaW5jbHVkZSA8bGludXgva3RpbWUu
aD4KIAogI2lmZGVmIENPTkZJR19YODYKICNpbmNsdWRlIDxhc20vZGVzYy5o
PgpAQCAtNjUsNiArNjcsMTUgQEAKIAogI2luY2x1ZGUgImV2ZW50c19pbnRl
cm5hbC5oIgogCisjdW5kZWYgTU9EVUxFX1BBUkFNX1BSRUZJWAorI2RlZmlu
ZSBNT0RVTEVfUEFSQU1fUFJFRklYICJ4ZW4uIgorCitzdGF0aWMgdWludCBf
X3JlYWRfbW9zdGx5IGV2ZW50X2xvb3BfdGltZW91dCA9IDI7Cittb2R1bGVf
cGFyYW0oZXZlbnRfbG9vcF90aW1lb3V0LCB1aW50LCAwNjQ0KTsKKworc3Rh
dGljIHVpbnQgX19yZWFkX21vc3RseSBldmVudF9lb2lfZGVsYXkgPSAxMDsK
K21vZHVsZV9wYXJhbShldmVudF9lb2lfZGVsYXksIHVpbnQsIDA2NDQpOwor
CiBjb25zdCBzdHJ1Y3QgZXZ0Y2huX29wcyAqZXZ0Y2huX29wczsKIAogLyoK
QEAgLTg4LDYgKzk5LDcgQEAgc3RhdGljIERFRklORV9SV0xPQ0soZXZ0Y2hu
X3J3bG9jayk7CiAgKiBpcnFfbWFwcGluZ191cGRhdGVfbG9jawogICogICBl
dnRjaG5fcndsb2NrCiAgKiAgICAgSVJRLWRlc2MgbG9jaworICogICAgICAg
cGVyY3B1IGVvaV9saXN0X2xvY2sKICAqLwogCiBzdGF0aWMgTElTVF9IRUFE
KHhlbl9pcnFfbGlzdF9oZWFkKTsKQEAgLTEyMCw2ICsxMzIsOCBAQCBzdGF0
aWMgc3RydWN0IGlycV9jaGlwIHhlbl9waXJxX2NoaXA7CiBzdGF0aWMgdm9p
ZCBlbmFibGVfZHluaXJxKHN0cnVjdCBpcnFfZGF0YSAqZGF0YSk7CiBzdGF0
aWMgdm9pZCBkaXNhYmxlX2R5bmlycShzdHJ1Y3QgaXJxX2RhdGEgKmRhdGEp
OwogCitzdGF0aWMgREVGSU5FX1BFUl9DUFUodW5zaWduZWQgaW50LCBpcnFf
ZXBvY2gpOworCiBzdGF0aWMgdm9pZCBjbGVhcl9ldnRjaG5fdG9faXJxX3Jv
dyh1bnNpZ25lZCByb3cpCiB7CiAJdW5zaWduZWQgY29sOwpAQCAtMzk5LDE3
ICs0MTMsMTIwIEBAIHZvaWQgbm90aWZ5X3JlbW90ZV92aWFfaXJxKGludCBp
cnEpCiB9CiBFWFBPUlRfU1lNQk9MX0dQTChub3RpZnlfcmVtb3RlX3ZpYV9p
cnEpOwogCitzdHJ1Y3QgbGF0ZWVvaV93b3JrIHsKKwlzdHJ1Y3QgZGVsYXll
ZF93b3JrIGRlbGF5ZWQ7CisJc3BpbmxvY2tfdCBlb2lfbGlzdF9sb2NrOwor
CXN0cnVjdCBsaXN0X2hlYWQgZW9pX2xpc3Q7Cit9OworCitzdGF0aWMgREVG
SU5FX1BFUl9DUFUoc3RydWN0IGxhdGVlb2lfd29yaywgbGF0ZWVvaSk7CisK
K3N0YXRpYyB2b2lkIGxhdGVlb2lfbGlzdF9kZWwoc3RydWN0IGlycV9pbmZv
ICppbmZvKQoreworCXN0cnVjdCBsYXRlZW9pX3dvcmsgKmVvaSA9ICZwZXJf
Y3B1KGxhdGVlb2ksIGluZm8tPmVvaV9jcHUpOworCXVuc2lnbmVkIGxvbmcg
ZmxhZ3M7CisKKwlzcGluX2xvY2tfaXJxc2F2ZSgmZW9pLT5lb2lfbGlzdF9s
b2NrLCBmbGFncyk7CisJbGlzdF9kZWxfaW5pdCgmaW5mby0+ZW9pX2xpc3Qp
OworCXNwaW5fdW5sb2NrX2lycXJlc3RvcmUoJmVvaS0+ZW9pX2xpc3RfbG9j
aywgZmxhZ3MpOworfQorCitzdGF0aWMgdm9pZCBsYXRlZW9pX2xpc3RfYWRk
KHN0cnVjdCBpcnFfaW5mbyAqaW5mbykKK3sKKwlzdHJ1Y3QgbGF0ZWVvaV93
b3JrICplb2kgPSAmcGVyX2NwdShsYXRlZW9pLCBpbmZvLT5lb2lfY3B1KTsK
KwlzdHJ1Y3QgaXJxX2luZm8gKmVsZW07CisJdTY0IG5vdyA9IGdldF9qaWZm
aWVzXzY0KCk7CisJdW5zaWduZWQgbG9uZyBkZWxheTsKKwl1bnNpZ25lZCBs
b25nIGZsYWdzOworCisJaWYgKG5vdyA8IGluZm8tPmVvaV90aW1lKQorCQlk
ZWxheSA9IGluZm8tPmVvaV90aW1lIC0gbm93OworCWVsc2UKKwkJZGVsYXkg
PSAxOworCisJc3Bpbl9sb2NrX2lycXNhdmUoJmVvaS0+ZW9pX2xpc3RfbG9j
aywgZmxhZ3MpOworCisJaWYgKGxpc3RfZW1wdHkoJmVvaS0+ZW9pX2xpc3Qp
KSB7CisJCWxpc3RfYWRkKCZpbmZvLT5lb2lfbGlzdCwgJmVvaS0+ZW9pX2xp
c3QpOworCQltb2RfZGVsYXllZF93b3JrX29uKGluZm8tPmVvaV9jcHUsIHN5
c3RlbV93cSwKKwkJCQkgICAgJmVvaS0+ZGVsYXllZCwgZGVsYXkpOworCX0g
ZWxzZSB7CisJCWxpc3RfZm9yX2VhY2hfZW50cnlfcmV2ZXJzZShlbGVtLCAm
ZW9pLT5lb2lfbGlzdCwgZW9pX2xpc3QpIHsKKwkJCWlmIChlbGVtLT5lb2lf
dGltZSA8PSBpbmZvLT5lb2lfdGltZSkKKwkJCQlicmVhazsKKwkJfQorCQls
aXN0X2FkZCgmaW5mby0+ZW9pX2xpc3QsICZlbGVtLT5lb2lfbGlzdCk7CisJ
fQorCisJc3Bpbl91bmxvY2tfaXJxcmVzdG9yZSgmZW9pLT5lb2lfbGlzdF9s
b2NrLCBmbGFncyk7Cit9CisKIHN0YXRpYyB2b2lkIHhlbl9pcnFfbGF0ZWVv
aV9sb2NrZWQoc3RydWN0IGlycV9pbmZvICppbmZvKQogewogCWV2dGNobl9w
b3J0X3QgZXZ0Y2huOworCXVuc2lnbmVkIGludCBjcHU7CiAKIAlldnRjaG4g
PSBpbmZvLT5ldnRjaG47Ci0JaWYgKCFWQUxJRF9FVlRDSE4oZXZ0Y2huKSkK
KwlpZiAoIVZBTElEX0VWVENITihldnRjaG4pIHx8ICFsaXN0X2VtcHR5KCZp
bmZvLT5lb2lfbGlzdCkpCiAJCXJldHVybjsKIAorCWNwdSA9IGluZm8tPmVv
aV9jcHU7CisJaWYgKGluZm8tPmVvaV90aW1lICYmIGluZm8tPmlycV9lcG9j
aCA9PSBwZXJfY3B1KGlycV9lcG9jaCwgY3B1KSkgeworCQlsYXRlZW9pX2xp
c3RfYWRkKGluZm8pOworCQlyZXR1cm47CisJfQorCisJaW5mby0+ZW9pX3Rp
bWUgPSAwOwogCXVubWFza19ldnRjaG4oZXZ0Y2huKTsKIH0KIAorc3RhdGlj
IHZvaWQgeGVuX2lycV9sYXRlZW9pX3dvcmtlcihzdHJ1Y3Qgd29ya19zdHJ1
Y3QgKndvcmspCit7CisJc3RydWN0IGxhdGVlb2lfd29yayAqZW9pOworCXN0
cnVjdCBpcnFfaW5mbyAqaW5mbzsKKwl1NjQgbm93ID0gZ2V0X2ppZmZpZXNf
NjQoKTsKKwl1bnNpZ25lZCBsb25nIGZsYWdzOworCisJZW9pID0gY29udGFp
bmVyX29mKHRvX2RlbGF5ZWRfd29yayh3b3JrKSwgc3RydWN0IGxhdGVlb2lf
d29yaywgZGVsYXllZCk7CisKKwlyZWFkX2xvY2tfaXJxc2F2ZSgmZXZ0Y2hu
X3J3bG9jaywgZmxhZ3MpOworCisJd2hpbGUgKHRydWUpIHsKKwkJc3Bpbl9s
b2NrKCZlb2ktPmVvaV9saXN0X2xvY2spOworCisJCWluZm8gPSBsaXN0X2Zp
cnN0X2VudHJ5X29yX251bGwoJmVvaS0+ZW9pX2xpc3QsIHN0cnVjdCBpcnFf
aW5mbywKKwkJCQkJCWVvaV9saXN0KTsKKworCQlpZiAoaW5mbyA9PSBOVUxM
IHx8IG5vdyA8IGluZm8tPmVvaV90aW1lKSB7CisJCQlzcGluX3VubG9jaygm
ZW9pLT5lb2lfbGlzdF9sb2NrKTsKKwkJCWJyZWFrOworCQl9CisKKwkJbGlz
dF9kZWxfaW5pdCgmaW5mby0+ZW9pX2xpc3QpOworCisJCXNwaW5fdW5sb2Nr
KCZlb2ktPmVvaV9saXN0X2xvY2spOworCisJCWluZm8tPmVvaV90aW1lID0g
MDsKKworCQl4ZW5faXJxX2xhdGVlb2lfbG9ja2VkKGluZm8pOworCX0KKwor
CWlmIChpbmZvKQorCQltb2RfZGVsYXllZF93b3JrX29uKGluZm8tPmVvaV9j
cHUsIHN5c3RlbV93cSwKKwkJCQkgICAgJmVvaS0+ZGVsYXllZCwgaW5mby0+
ZW9pX3RpbWUgLSBub3cpOworCisJcmVhZF91bmxvY2tfaXJxcmVzdG9yZSgm
ZXZ0Y2huX3J3bG9jaywgZmxhZ3MpOworfQorCitzdGF0aWMgdm9pZCB4ZW5f
Y3B1X2luaXRfZW9pKHVuc2lnbmVkIGludCBjcHUpCit7CisJc3RydWN0IGxh
dGVlb2lfd29yayAqZW9pID0gJnBlcl9jcHUobGF0ZWVvaSwgY3B1KTsKKwor
CUlOSVRfREVMQVlFRF9XT1JLKCZlb2ktPmRlbGF5ZWQsIHhlbl9pcnFfbGF0
ZWVvaV93b3JrZXIpOworCXNwaW5fbG9ja19pbml0KCZlb2ktPmVvaV9saXN0
X2xvY2spOworCUlOSVRfTElTVF9IRUFEKCZlb2ktPmVvaV9saXN0KTsKK30K
Kwogdm9pZCB4ZW5faXJxX2xhdGVlb2kodW5zaWduZWQgaW50IGlycSwgdW5z
aWduZWQgaW50IGVvaV9mbGFncykKIHsKIAlzdHJ1Y3QgaXJxX2luZm8gKmlu
Zm87CkBAIC00MjksNiArNTQ2LDcgQEAgRVhQT1JUX1NZTUJPTF9HUEwoeGVu
X2lycV9sYXRlZW9pKTsKIHN0YXRpYyB2b2lkIHhlbl9pcnFfaW5pdCh1bnNp
Z25lZCBpcnEpCiB7CiAJc3RydWN0IGlycV9pbmZvICppbmZvOworCiAjaWZk
ZWYgQ09ORklHX1NNUAogCS8qIEJ5IGRlZmF1bHQgYWxsIGV2ZW50IGNoYW5u
ZWxzIG5vdGlmeSBDUFUjMC4gKi8KIAljcHVtYXNrX2NvcHkoaXJxX2dldF9h
ZmZpbml0eV9tYXNrKGlycSksIGNwdW1hc2tfb2YoMCkpOwpAQCAtNDQzLDYg
KzU2MSw3IEBAIHN0YXRpYyB2b2lkIHhlbl9pcnFfaW5pdCh1bnNpZ25lZCBp
cnEpCiAKIAlzZXRfaW5mb19mb3JfaXJxKGlycSwgaW5mbyk7CiAKKwlJTklU
X0xJU1RfSEVBRCgmaW5mby0+ZW9pX2xpc3QpOwogCWxpc3RfYWRkX3RhaWwo
JmluZm8tPmxpc3QsICZ4ZW5faXJxX2xpc3RfaGVhZCk7CiB9CiAKQEAgLTQ5
OCw2ICs2MTcsOSBAQCBzdGF0aWMgdm9pZCB4ZW5fZnJlZV9pcnEodW5zaWdu
ZWQgaXJxKQogCiAJd3JpdGVfbG9ja19pcnFzYXZlKCZldnRjaG5fcndsb2Nr
LCBmbGFncyk7CiAKKwlpZiAoIWxpc3RfZW1wdHkoJmluZm8tPmVvaV9saXN0
KSkKKwkJbGF0ZWVvaV9saXN0X2RlbChpbmZvKTsKKwogCWxpc3RfZGVsKCZp
bmZvLT5saXN0KTsKIAogCXNldF9pbmZvX2Zvcl9pcnEoaXJxLCBOVUxMKTsK
QEAgLTEzNTgsMTcgKzE0ODAsNjYgQEAgdm9pZCB4ZW5fc2VuZF9JUElfb25l
KHVuc2lnbmVkIGludCBjcHUsIGVudW0gaXBpX3ZlY3RvciB2ZWN0b3IpCiAJ
bm90aWZ5X3JlbW90ZV92aWFfaXJxKGlycSk7CiB9CiAKK3N0cnVjdCBldnRj
aG5fbG9vcF9jdHJsIHsKKwlrdGltZV90IHRpbWVvdXQ7CisJdW5zaWduZWQg
Y291bnQ7CisJYm9vbCBkZWZlcl9lb2k7Cit9OworCit2b2lkIGhhbmRsZV9p
cnFfZm9yX3BvcnQoZXZ0Y2huX3BvcnRfdCBwb3J0LCBzdHJ1Y3QgZXZ0Y2hu
X2xvb3BfY3RybCAqY3RybCkKK3sKKwlpbnQgaXJxOworCXN0cnVjdCBpcnFf
aW5mbyAqaW5mbzsKKworCWlycSA9IGdldF9ldnRjaG5fdG9faXJxKHBvcnQp
OworCWlmIChpcnEgPT0gLTEpCisJCXJldHVybjsKKworCS8qCisJICogQ2hl
Y2sgZm9yIHRpbWVvdXQgZXZlcnkgMjU2IGV2ZW50cy4KKwkgKiBXZSBhcmUg
c2V0dGluZyB0aGUgdGltZW91dCB2YWx1ZSBvbmx5IGFmdGVyIHRoZSBmaXJz
dCAyNTYKKwkgKiBldmVudHMgaW4gb3JkZXIgdG8gbm90IGh1cnQgdGhlIGNv
bW1vbiBjYXNlIG9mIGZldyBsb29wCisJICogaXRlcmF0aW9ucy4gVGhlIDI1
NiBpcyBiYXNpY2FsbHkgYW4gYXJiaXRyYXJ5IHZhbHVlLgorCSAqCisJICog
SW4gY2FzZSB3ZSBhcmUgaGl0dGluZyB0aGUgdGltZW91dCB3ZSBuZWVkIHRv
IGRlZmVyIGFsbCBmdXJ0aGVyCisJICogRU9JcyBpbiBvcmRlciB0byBlbnN1
cmUgdG8gbGVhdmUgdGhlIGV2ZW50IGhhbmRsaW5nIGxvb3AgcmF0aGVyCisJ
ICogc29vbmVyIHRoYW4gbGF0ZXIuCisJICovCisJaWYgKCFjdHJsLT5kZWZl
cl9lb2kgJiYgISgrK2N0cmwtPmNvdW50ICYgMHhmZikpIHsKKwkJa3RpbWVf
dCBrdCA9IGt0aW1lX2dldCgpOworCisJCWlmICghY3RybC0+dGltZW91dCkg
eworCQkJa3QgPSBrdGltZV9hZGRfbXMoa3QsCisJCQkJCSAgamlmZmllc190
b19tc2VjcyhldmVudF9sb29wX3RpbWVvdXQpKTsKKwkJCWN0cmwtPnRpbWVv
dXQgPSBrdDsKKwkJfSBlbHNlIGlmIChrdCA+IGN0cmwtPnRpbWVvdXQpIHsK
KwkJCWN0cmwtPmRlZmVyX2VvaSA9IHRydWU7CisJCX0KKwl9CisKKwlpbmZv
ID0gaW5mb19mb3JfaXJxKGlycSk7CisKKwlpZiAoY3RybC0+ZGVmZXJfZW9p
KSB7CisJCWluZm8tPmVvaV9jcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7CisJ
CWluZm8tPmlycV9lcG9jaCA9IF9fdGhpc19jcHVfcmVhZChpcnFfZXBvY2gp
OworCQlpbmZvLT5lb2lfdGltZSA9IGdldF9qaWZmaWVzXzY0KCkgKyBldmVu
dF9lb2lfZGVsYXk7CisJfQorCisJZ2VuZXJpY19oYW5kbGVfaXJxKGlycSk7
Cit9CisKIHN0YXRpYyB2b2lkIF9feGVuX2V2dGNobl9kb191cGNhbGwodm9p
ZCkKIHsKIAlzdHJ1Y3QgdmNwdV9pbmZvICp2Y3B1X2luZm8gPSBfX3RoaXNf
Y3B1X3JlYWQoeGVuX3ZjcHUpOwogCWludCBjcHUgPSBzbXBfcHJvY2Vzc29y
X2lkKCk7CisJc3RydWN0IGV2dGNobl9sb29wX2N0cmwgY3RybCA9IHsgMCB9
OwogCiAJcmVhZF9sb2NrKCZldnRjaG5fcndsb2NrKTsKIAogCWRvIHsKIAkJ
dmNwdV9pbmZvLT5ldnRjaG5fdXBjYWxsX3BlbmRpbmcgPSAwOwogCi0JCXhl
bl9ldnRjaG5faGFuZGxlX2V2ZW50cyhjcHUpOworCQl4ZW5fZXZ0Y2huX2hh
bmRsZV9ldmVudHMoY3B1LCAmY3RybCk7CiAKIAkJQlVHX09OKCFpcnFzX2Rp
c2FibGVkKCkpOwogCkBAIC0xMzc3LDYgKzE1NDgsMTMgQEAgc3RhdGljIHZv
aWQgX194ZW5fZXZ0Y2huX2RvX3VwY2FsbCh2b2lkKQogCX0gd2hpbGUgKHZj
cHVfaW5mby0+ZXZ0Y2huX3VwY2FsbF9wZW5kaW5nKTsKIAogCXJlYWRfdW5s
b2NrKCZldnRjaG5fcndsb2NrKTsKKworCS8qCisJICogSW5jcmVtZW50IGly
cV9lcG9jaCBvbmx5IG5vdyB0byBkZWZlciBFT0lzIG9ubHkgZm9yCisJICog
eGVuX2lycV9sYXRlZW9pKCkgaW52b2NhdGlvbnMgb2NjdXJyaW5nIGZyb20g
aW5zaWRlIHRoZSBsb29wCisJICogYWJvdmUuCisJICovCisJX190aGlzX2Nw
dV9pbmMoaXJxX2Vwb2NoKTsKIH0KIAogdm9pZCB4ZW5fZXZ0Y2huX2RvX3Vw
Y2FsbChzdHJ1Y3QgcHRfcmVncyAqcmVncykKQEAgLTE4MjUsOSArMjAwMyw2
IEBAIHZvaWQgeGVuX3NldHVwX2NhbGxiYWNrX3ZlY3Rvcih2b2lkKSB7fQog
c3RhdGljIGlubGluZSB2b2lkIHhlbl9hbGxvY19jYWxsYmFja192ZWN0b3Io
dm9pZCkge30KICNlbmRpZgogCi0jdW5kZWYgTU9EVUxFX1BBUkFNX1BSRUZJ
WAotI2RlZmluZSBNT0RVTEVfUEFSQU1fUFJFRklYICJ4ZW4uIgotCiBzdGF0
aWMgYm9vbCBmaWZvX2V2ZW50cyA9IHRydWU7CiBtb2R1bGVfcGFyYW0oZmlm
b19ldmVudHMsIGJvb2wsIDApOwogCkBAIC0xODM1LDYgKzIwMTAsOCBAQCBz
dGF0aWMgaW50IHhlbl9ldnRjaG5fY3B1X3ByZXBhcmUodW5zaWduZWQgaW50
IGNwdSkKIHsKIAlpbnQgcmV0ID0gMDsKIAorCXhlbl9jcHVfaW5pdF9lb2ko
Y3B1KTsKKwogCWlmIChldnRjaG5fb3BzLT5wZXJjcHVfaW5pdCkKIAkJcmV0
ID0gZXZ0Y2huX29wcy0+cGVyY3B1X2luaXQoY3B1KTsKIApAQCAtMTg2MSw2
ICsyMDM4LDggQEAgdm9pZCBfX2luaXQgeGVuX2luaXRfSVJRKHZvaWQpCiAJ
aWYgKHJldCA8IDApCiAJCXhlbl9ldnRjaG5fMmxfaW5pdCgpOwogCisJeGVu
X2NwdV9pbml0X2VvaShzbXBfcHJvY2Vzc29yX2lkKCkpOworCiAJY3B1aHBf
c2V0dXBfc3RhdGVfbm9jYWxscyhDUFVIUF9YRU5fRVZUQ0hOX1BSRVBBUkUs
CiAJCQkJICAieGVuL2V2dGNobjpwcmVwYXJlIiwKIAkJCQkgIHhlbl9ldnRj
aG5fY3B1X3ByZXBhcmUsIHhlbl9ldnRjaG5fY3B1X2RlYWQpOwpkaWZmIC0t
Z2l0IGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19maWZvLmMgYi9kcml2
ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2ZpZm8uYwppbmRleCA0MGU0Y2ExNjg1
YWEuLjYwODVhODA4ZGE5NSAxMDA2NDQKLS0tIGEvZHJpdmVycy94ZW4vZXZl
bnRzL2V2ZW50c19maWZvLmMKKysrIGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2
ZW50c19maWZvLmMKQEAgLTI3NSwxOSArMjc1LDkgQEAgc3RhdGljIHVpbnQz
Ml90IGNsZWFyX2xpbmtlZCh2b2xhdGlsZSBldmVudF93b3JkX3QgKndvcmQp
CiAJcmV0dXJuIHcgJiBFVlRDSE5fRklGT19MSU5LX01BU0s7CiB9CiAKLXN0
YXRpYyB2b2lkIGhhbmRsZV9pcnFfZm9yX3BvcnQoZXZ0Y2huX3BvcnRfdCBw
b3J0KQotewotCWludCBpcnE7Ci0KLQlpcnEgPSBnZXRfZXZ0Y2huX3RvX2ly
cShwb3J0KTsKLQlpZiAoaXJxICE9IC0xKQotCQlnZW5lcmljX2hhbmRsZV9p
cnEoaXJxKTsKLX0KLQotc3RhdGljIHZvaWQgY29uc3VtZV9vbmVfZXZlbnQo
dW5zaWduZWQgY3B1LAorc3RhdGljIHZvaWQgY29uc3VtZV9vbmVfZXZlbnQo
dW5zaWduZWQgY3B1LCBzdHJ1Y3QgZXZ0Y2huX2xvb3BfY3RybCAqY3RybCwK
IAkJCSAgICAgIHN0cnVjdCBldnRjaG5fZmlmb19jb250cm9sX2Jsb2NrICpj
b250cm9sX2Jsb2NrLAotCQkJICAgICAgdW5zaWduZWQgcHJpb3JpdHksIHVu
c2lnbmVkIGxvbmcgKnJlYWR5LAotCQkJICAgICAgYm9vbCBkcm9wKQorCQkJ
ICAgICAgdW5zaWduZWQgcHJpb3JpdHksIHVuc2lnbmVkIGxvbmcgKnJlYWR5
KQogewogCXN0cnVjdCBldnRjaG5fZmlmb19xdWV1ZSAqcSA9ICZwZXJfY3B1
KGNwdV9xdWV1ZSwgY3B1KTsKIAl1aW50MzJfdCBoZWFkOwpAQCAtMzIwLDE2
ICszMTAsMTcgQEAgc3RhdGljIHZvaWQgY29uc3VtZV9vbmVfZXZlbnQodW5z
aWduZWQgY3B1LAogCQljbGVhcl9iaXQocHJpb3JpdHksIHJlYWR5KTsKIAog
CWlmIChldnRjaG5fZmlmb19pc19wZW5kaW5nKHBvcnQpICYmICFldnRjaG5f
Zmlmb19pc19tYXNrZWQocG9ydCkpIHsKLQkJaWYgKHVubGlrZWx5KGRyb3Ap
KQorCQlpZiAodW5saWtlbHkoIWN0cmwpKQogCQkJcHJfd2FybigiRHJvcHBp
bmcgcGVuZGluZyBldmVudCBmb3IgcG9ydCAldVxuIiwgcG9ydCk7CiAJCWVs
c2UKLQkJCWhhbmRsZV9pcnFfZm9yX3BvcnQocG9ydCk7CisJCQloYW5kbGVf
aXJxX2Zvcl9wb3J0KHBvcnQsIGN0cmwpOwogCX0KIAogCXEtPmhlYWRbcHJp
b3JpdHldID0gaGVhZDsKIH0KIAotc3RhdGljIHZvaWQgX19ldnRjaG5fZmlm
b19oYW5kbGVfZXZlbnRzKHVuc2lnbmVkIGNwdSwgYm9vbCBkcm9wKQorc3Rh
dGljIHZvaWQgX19ldnRjaG5fZmlmb19oYW5kbGVfZXZlbnRzKHVuc2lnbmVk
IGNwdSwKKwkJCQkJc3RydWN0IGV2dGNobl9sb29wX2N0cmwgKmN0cmwpCiB7
CiAJc3RydWN0IGV2dGNobl9maWZvX2NvbnRyb2xfYmxvY2sgKmNvbnRyb2xf
YmxvY2s7CiAJdW5zaWduZWQgbG9uZyByZWFkeTsKQEAgLTM0MSwxNCArMzMy
LDE1IEBAIHN0YXRpYyB2b2lkIF9fZXZ0Y2huX2ZpZm9faGFuZGxlX2V2ZW50
cyh1bnNpZ25lZCBjcHUsIGJvb2wgZHJvcCkKIAogCXdoaWxlIChyZWFkeSkg
ewogCQlxID0gZmluZF9maXJzdF9iaXQoJnJlYWR5LCBFVlRDSE5fRklGT19N
QVhfUVVFVUVTKTsKLQkJY29uc3VtZV9vbmVfZXZlbnQoY3B1LCBjb250cm9s
X2Jsb2NrLCBxLCAmcmVhZHksIGRyb3ApOworCQljb25zdW1lX29uZV9ldmVu
dChjcHUsIGN0cmwsIGNvbnRyb2xfYmxvY2ssIHEsICZyZWFkeSk7CiAJCXJl
YWR5IHw9IHhjaGcoJmNvbnRyb2xfYmxvY2stPnJlYWR5LCAwKTsKIAl9CiB9
CiAKLXN0YXRpYyB2b2lkIGV2dGNobl9maWZvX2hhbmRsZV9ldmVudHModW5z
aWduZWQgY3B1KQorc3RhdGljIHZvaWQgZXZ0Y2huX2ZpZm9faGFuZGxlX2V2
ZW50cyh1bnNpZ25lZCBjcHUsCisJCQkJICAgICAgc3RydWN0IGV2dGNobl9s
b29wX2N0cmwgKmN0cmwpCiB7Ci0JX19ldnRjaG5fZmlmb19oYW5kbGVfZXZl
bnRzKGNwdSwgZmFsc2UpOworCV9fZXZ0Y2huX2ZpZm9faGFuZGxlX2V2ZW50
cyhjcHUsIGN0cmwpOwogfQogCiBzdGF0aWMgdm9pZCBldnRjaG5fZmlmb19y
ZXN1bWUodm9pZCkKQEAgLTQxNiw3ICs0MDgsNyBAQCBzdGF0aWMgaW50IGV2
dGNobl9maWZvX3BlcmNwdV9pbml0KHVuc2lnbmVkIGludCBjcHUpCiAKIHN0
YXRpYyBpbnQgZXZ0Y2huX2ZpZm9fcGVyY3B1X2RlaW5pdCh1bnNpZ25lZCBp
bnQgY3B1KQogewotCV9fZXZ0Y2huX2ZpZm9faGFuZGxlX2V2ZW50cyhjcHUs
IHRydWUpOworCV9fZXZ0Y2huX2ZpZm9faGFuZGxlX2V2ZW50cyhjcHUsIE5V
TEwpOwogCXJldHVybiAwOwogfQogCmRpZmYgLS1naXQgYS9kcml2ZXJzL3hl
bi9ldmVudHMvZXZlbnRzX2ludGVybmFsLmggYi9kcml2ZXJzL3hlbi9ldmVu
dHMvZXZlbnRzX2ludGVybmFsLmgKaW5kZXggNTU4YWJlYTE5ZDBkLi5hYWMw
NWNmNTJjZWQgMTAwNjQ0Ci0tLSBhL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVu
dHNfaW50ZXJuYWwuaAorKysgYi9kcml2ZXJzL3hlbi9ldmVudHMvZXZlbnRz
X2ludGVybmFsLmgKQEAgLTMwLDExICszMCwxNSBAQCBlbnVtIHhlbl9pcnFf
dHlwZSB7CiAgKi8KIHN0cnVjdCBpcnFfaW5mbyB7CiAJc3RydWN0IGxpc3Rf
aGVhZCBsaXN0OworCXN0cnVjdCBsaXN0X2hlYWQgZW9pX2xpc3Q7CiAJaW50
IHJlZmNudDsKIAllbnVtIHhlbl9pcnFfdHlwZSB0eXBlOwkvKiB0eXBlICov
CiAJdW5zaWduZWQgaXJxOwogCWV2dGNobl9wb3J0X3QgZXZ0Y2huOwkvKiBl
dmVudCBjaGFubmVsICovCiAJdW5zaWduZWQgc2hvcnQgY3B1OwkvKiBjcHUg
Ym91bmQgKi8KKwl1bnNpZ25lZCBzaG9ydCBlb2lfY3B1OwkvKiBFT0kgbXVz
dCBoYXBwZW4gb24gdGhpcyBjcHUgKi8KKwl1bnNpZ25lZCBpbnQgaXJxX2Vw
b2NoOwkvKiBJZiBlb2lfY3B1IHZhbGlkOiBpcnFfZXBvY2ggb2YgZXZlbnQg
Ki8KKwl1NjQgZW9pX3RpbWU7CQkvKiBUaW1lIGluIGppZmZpZXMgd2hlbiB0
byBFT0kuICovCiAKIAl1bmlvbiB7CiAJCXVuc2lnbmVkIHNob3J0IHZpcnE7
CkBAIC01Myw2ICs1Nyw4IEBAIHN0cnVjdCBpcnFfaW5mbyB7CiAjZGVmaW5l
IFBJUlFfU0hBUkVBQkxFCSgxIDw8IDEpCiAjZGVmaW5lIFBJUlFfTVNJX0dS
T1VQCSgxIDw8IDIpCiAKK3N0cnVjdCBldnRjaG5fbG9vcF9jdHJsOworCiBz
dHJ1Y3QgZXZ0Y2huX29wcyB7CiAJdW5zaWduZWQgKCptYXhfY2hhbm5lbHMp
KHZvaWQpOwogCXVuc2lnbmVkICgqbnJfY2hhbm5lbHMpKHZvaWQpOwpAQCAt
NjcsNyArNzMsNyBAQCBzdHJ1Y3QgZXZ0Y2huX29wcyB7CiAJdm9pZCAoKm1h
c2spKGV2dGNobl9wb3J0X3QgcG9ydCk7CiAJdm9pZCAoKnVubWFzaykoZXZ0
Y2huX3BvcnRfdCBwb3J0KTsKIAotCXZvaWQgKCpoYW5kbGVfZXZlbnRzKSh1
bnNpZ25lZCBjcHUpOworCXZvaWQgKCpoYW5kbGVfZXZlbnRzKSh1bnNpZ25l
ZCBjcHUsIHN0cnVjdCBldnRjaG5fbG9vcF9jdHJsICpjdHJsKTsKIAl2b2lk
ICgqcmVzdW1lKSh2b2lkKTsKIAogCWludCAoKnBlcmNwdV9pbml0KSh1bnNp
Z25lZCBpbnQgY3B1KTsKQEAgLTc4LDYgKzg0LDcgQEAgZXh0ZXJuIGNvbnN0
IHN0cnVjdCBldnRjaG5fb3BzICpldnRjaG5fb3BzOwogCiBleHRlcm4gaW50
ICoqZXZ0Y2huX3RvX2lycTsKIGludCBnZXRfZXZ0Y2huX3RvX2lycShldnRj
aG5fcG9ydF90IGV2dGNobik7Cit2b2lkIGhhbmRsZV9pcnFfZm9yX3BvcnQo
ZXZ0Y2huX3BvcnRfdCBwb3J0LCBzdHJ1Y3QgZXZ0Y2huX2xvb3BfY3RybCAq
Y3RybCk7CiAKIHN0cnVjdCBpcnFfaW5mbyAqaW5mb19mb3JfaXJxKHVuc2ln
bmVkIGlycSk7CiB1bnNpZ25lZCBjcHVfZnJvbV9pcnEodW5zaWduZWQgaXJx
KTsKQEAgLTEzNSw5ICsxNDIsMTAgQEAgc3RhdGljIGlubGluZSB2b2lkIHVu
bWFza19ldnRjaG4oZXZ0Y2huX3BvcnRfdCBwb3J0KQogCXJldHVybiBldnRj
aG5fb3BzLT51bm1hc2socG9ydCk7CiB9CiAKLXN0YXRpYyBpbmxpbmUgdm9p
ZCB4ZW5fZXZ0Y2huX2hhbmRsZV9ldmVudHModW5zaWduZWQgY3B1KQorc3Rh
dGljIGlubGluZSB2b2lkIHhlbl9ldnRjaG5faGFuZGxlX2V2ZW50cyh1bnNp
Z25lZCBjcHUsCisJCQkJCSAgICBzdHJ1Y3QgZXZ0Y2huX2xvb3BfY3RybCAq
Y3RybCkKIHsKLQlyZXR1cm4gZXZ0Y2huX29wcy0+aGFuZGxlX2V2ZW50cyhj
cHUpOworCXJldHVybiBldnRjaG5fb3BzLT5oYW5kbGVfZXZlbnRzKGNwdSwg
Y3RybCk7CiB9CiAKIHN0YXRpYyBpbmxpbmUgdm9pZCB4ZW5fZXZ0Y2huX3Jl
c3VtZSh2b2lkKQotLSAKMi4yNi4yCgo=

--=separator
Content-Type: application/octet-stream; name="xsa332-linux-12.patch"
Content-Disposition: attachment; filename="xsa332-linux-12.patch"
Content-Transfer-Encoding: base64

RnJvbSAxNjVlZGNkZWYzYTdiYjE4ZjlmZmRlM2NkN2E2YTJiMjQ2NTYyYmQ2
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBKdWVyZ2VuIEdyb3Nz
IDxqZ3Jvc3NAc3VzZS5jb20+CkRhdGU6IE1vbiwgMTQgU2VwIDIwMjAgMTQ6
MDE6MDIgKzAyMDAKU3ViamVjdDogW1BBVENIIDEyLzEyXSB4ZW4vZXZlbnRz
OiBibG9jayByb2d1ZSBldmVudHMgZm9yIHNvbWUgdGltZQoKSW4gb3JkZXIg
dG8gYXZvaWQgaGlnaCBkb20wIGxvYWQgZHVlIHRvIHJvZ3VlIGd1ZXN0cyBz
ZW5kaW5nIGV2ZW50cyBhdApoaWdoIGZyZXF1ZW5jeSwgYmxvY2sgdGhvc2Ug
ZXZlbnRzIGluIGNhc2UgdGhlcmUgd2FzIG5vIGFjdGlvbiBuZWVkZWQKaW4g
ZG9tMCB0byBoYW5kbGUgdGhlIGV2ZW50cy4KClRoaXMgaXMgZG9uZSBieSBh
ZGRpbmcgYSBwZXItZXZlbnQgY291bnRlciwgd2hpY2ggc2V0IHRvIHplcm8g
aW4gY2FzZQphbiBFT0kgd2l0aG91dCB0aGUgWEVOX0VPSV9GTEFHX1NQVVJJ
T1VTIGlzIHJlY2VpdmVkIGZyb20gYSBiYWNrZW5kCmRyaXZlciwgYW5kIGlu
Y3JlbWVudGVkIHdoZW4gdGhpcyBmbGFnIGhhcyBiZWVuIHNldC4gSW4gY2Fz
ZSB0aGUKY291bnRlciBpcyAyIG9yIGhpZ2hlciBkZWxheSB0aGUgRU9JIGJ5
IDEgPDwgKGNudCAtIDIpIGppZmZpZXMsIGJ1dApub3QgbW9yZSB0aGFuIDEg
c2Vjb25kLgoKSW4gb3JkZXIgbm90IHRvIHdhc3RlIG1lbW9yeSBzaG9ydGVu
IHRoZSBwZXItZXZlbnQgcmVmY250IHRvIHR3byBieXRlcwooaXQgc2hvdWxk
IG5vcm1hbGx5IG5ldmVyIGV4Y2VlZCBhIHZhbHVlIG9mIDIpLiBBZGQgYW4g
b3ZlcmZsb3cgY2hlY2sKdG8gZXZ0Y2huX2dldCgpIHRvIG1ha2Ugc3VyZSB0
aGUgMiBieXRlcyByZWFsbHkgd29uJ3Qgb3ZlcmZsb3cuCgpUaGlzIGlzIHBh
cnQgb2YgWFNBLTMzMi4KCkNjOiBzdGFibGVAdmdlci5rZXJuZWwub3JnClJl
cG9ydGVkLWJ5OiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPgpTaWdu
ZWQtb2ZmLWJ5OiBKdWVyZ2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+ClJl
dmlld2VkLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5jb20+ClJl
dmlld2VkLWJ5OiBTdGVmYW5vIFN0YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtl
cm5lbC5vcmc+ClJldmlld2VkLWJ5OiBXZWkgTGl1IDx3bEB4ZW4ub3JnPgot
LS0KIGRyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfYmFzZS5jICAgICB8IDI3
ICsrKysrKysrKysrKysrKysrKysrKystLS0tLQogZHJpdmVycy94ZW4vZXZl
bnRzL2V2ZW50c19pbnRlcm5hbC5oIHwgIDMgKystCiAyIGZpbGVzIGNoYW5n
ZWQsIDI0IGluc2VydGlvbnMoKyksIDYgZGVsZXRpb25zKC0pCgpkaWZmIC0t
Z2l0IGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19iYXNlLmMgYi9kcml2
ZXJzL3hlbi9ldmVudHMvZXZlbnRzX2Jhc2UuYwppbmRleCBjZGUwOTZhNmYx
MWQuLmNjMzE3NzM5ZTc4NiAxMDA2NDQKLS0tIGEvZHJpdmVycy94ZW4vZXZl
bnRzL2V2ZW50c19iYXNlLmMKKysrIGIvZHJpdmVycy94ZW4vZXZlbnRzL2V2
ZW50c19iYXNlLmMKQEAgLTQ2MSwxNyArNDYxLDM0IEBAIHN0YXRpYyB2b2lk
IGxhdGVlb2lfbGlzdF9hZGQoc3RydWN0IGlycV9pbmZvICppbmZvKQogCXNw
aW5fdW5sb2NrX2lycXJlc3RvcmUoJmVvaS0+ZW9pX2xpc3RfbG9jaywgZmxh
Z3MpOwogfQogCi1zdGF0aWMgdm9pZCB4ZW5faXJxX2xhdGVlb2lfbG9ja2Vk
KHN0cnVjdCBpcnFfaW5mbyAqaW5mbykKK3N0YXRpYyB2b2lkIHhlbl9pcnFf
bGF0ZWVvaV9sb2NrZWQoc3RydWN0IGlycV9pbmZvICppbmZvLCBib29sIHNw
dXJpb3VzKQogewogCWV2dGNobl9wb3J0X3QgZXZ0Y2huOwogCXVuc2lnbmVk
IGludCBjcHU7CisJdW5zaWduZWQgaW50IGRlbGF5ID0gMDsKIAogCWV2dGNo
biA9IGluZm8tPmV2dGNobjsKIAlpZiAoIVZBTElEX0VWVENITihldnRjaG4p
IHx8ICFsaXN0X2VtcHR5KCZpbmZvLT5lb2lfbGlzdCkpCiAJCXJldHVybjsK
IAorCWlmIChzcHVyaW91cykgeworCQlpZiAoKDEgPDwgaW5mby0+c3B1cmlv
dXNfY250KSA8IChIWiA8PCAyKSkKKwkJCWluZm8tPnNwdXJpb3VzX2NudCsr
OworCQlpZiAoaW5mby0+c3B1cmlvdXNfY250ID4gMSkgeworCQkJZGVsYXkg
PSAxIDw8IChpbmZvLT5zcHVyaW91c19jbnQgLSAyKTsKKwkJCWlmIChkZWxh
eSA+IEhaKQorCQkJCWRlbGF5ID0gSFo7CisJCQlpZiAoIWluZm8tPmVvaV90
aW1lKQorCQkJCWluZm8tPmVvaV9jcHUgPSBzbXBfcHJvY2Vzc29yX2lkKCk7
CisJCQlpbmZvLT5lb2lfdGltZSA9IGdldF9qaWZmaWVzXzY0KCkgKyBkZWxh
eTsKKwkJfQorCX0gZWxzZSB7CisJCWluZm8tPnNwdXJpb3VzX2NudCA9IDA7
CisJfQorCiAJY3B1ID0gaW5mby0+ZW9pX2NwdTsKLQlpZiAoaW5mby0+ZW9p
X3RpbWUgJiYgaW5mby0+aXJxX2Vwb2NoID09IHBlcl9jcHUoaXJxX2Vwb2No
LCBjcHUpKSB7CisJaWYgKGluZm8tPmVvaV90aW1lICYmCisJICAgIChpbmZv
LT5pcnFfZXBvY2ggPT0gcGVyX2NwdShpcnFfZXBvY2gsIGNwdSkgfHwgZGVs
YXkpKSB7CiAJCWxhdGVlb2lfbGlzdF9hZGQoaW5mbyk7CiAJCXJldHVybjsK
IAl9CkBAIC01MDgsNyArNTI1LDcgQEAgc3RhdGljIHZvaWQgeGVuX2lycV9s
YXRlZW9pX3dvcmtlcihzdHJ1Y3Qgd29ya19zdHJ1Y3QgKndvcmspCiAKIAkJ
aW5mby0+ZW9pX3RpbWUgPSAwOwogCi0JCXhlbl9pcnFfbGF0ZWVvaV9sb2Nr
ZWQoaW5mbyk7CisJCXhlbl9pcnFfbGF0ZWVvaV9sb2NrZWQoaW5mbywgZmFs
c2UpOwogCX0KIAogCWlmIChpbmZvKQpAQCAtNTM3LDcgKzU1NCw3IEBAIHZv
aWQgeGVuX2lycV9sYXRlZW9pKHVuc2lnbmVkIGludCBpcnEsIHVuc2lnbmVk
IGludCBlb2lfZmxhZ3MpCiAJaW5mbyA9IGluZm9fZm9yX2lycShpcnEpOwog
CiAJaWYgKGluZm8pCi0JCXhlbl9pcnFfbGF0ZWVvaV9sb2NrZWQoaW5mbyk7
CisJCXhlbl9pcnFfbGF0ZWVvaV9sb2NrZWQoaW5mbywgZW9pX2ZsYWdzICYg
WEVOX0VPSV9GTEFHX1NQVVJJT1VTKTsKIAogCXJlYWRfdW5sb2NrX2lycXJl
c3RvcmUoJmV2dGNobl9yd2xvY2ssIGZsYWdzKTsKIH0KQEAgLTE0NDEsNyAr
MTQ1OCw3IEBAIGludCBldnRjaG5fZ2V0KGV2dGNobl9wb3J0X3QgZXZ0Y2hu
KQogCQlnb3RvIGRvbmU7CiAKIAllcnIgPSAtRUlOVkFMOwotCWlmIChpbmZv
LT5yZWZjbnQgPD0gMCkKKwlpZiAoaW5mby0+cmVmY250IDw9IDAgfHwgaW5m
by0+cmVmY250ID09IFNIUlRfTUFYKQogCQlnb3RvIGRvbmU7CiAKIAlpbmZv
LT5yZWZjbnQrKzsKZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL2V2ZW50cy9l
dmVudHNfaW50ZXJuYWwuaCBiL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNf
aW50ZXJuYWwuaAppbmRleCBhYWMwNWNmNTJjZWQuLjgyOTM3ZDkwZDdkNyAx
MDA2NDQKLS0tIGEvZHJpdmVycy94ZW4vZXZlbnRzL2V2ZW50c19pbnRlcm5h
bC5oCisrKyBiL2RyaXZlcnMveGVuL2V2ZW50cy9ldmVudHNfaW50ZXJuYWwu
aApAQCAtMzEsNyArMzEsOCBAQCBlbnVtIHhlbl9pcnFfdHlwZSB7CiBzdHJ1
Y3QgaXJxX2luZm8gewogCXN0cnVjdCBsaXN0X2hlYWQgbGlzdDsKIAlzdHJ1
Y3QgbGlzdF9oZWFkIGVvaV9saXN0OwotCWludCByZWZjbnQ7CisJc2hvcnQg
cmVmY250OworCXNob3J0IHNwdXJpb3VzX2NudDsKIAllbnVtIHhlbl9pcnFf
dHlwZSB0eXBlOwkvKiB0eXBlICovCiAJdW5zaWduZWQgaXJxOwogCWV2dGNo
bl9wb3J0X3QgZXZ0Y2huOwkvKiBldmVudCBjaGFubmVsICovCi0tIAoyLjI2
LjIKCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:03:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:03:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9229.24691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqMK-0000WI-4a; Tue, 20 Oct 2020 12:03:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9229.24691; Tue, 20 Oct 2020 12:03:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqMJ-0000W8-Uj; Tue, 20 Oct 2020 12:03:19 +0000
Received: by outflank-mailman (input) for mailman id 9229;
 Tue, 20 Oct 2020 12:03:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kUqKR-0006Dt-Da
 for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:01:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84dd45ad-8c89-4d82-8625-9e52f44973bf;
 Tue, 20 Oct 2020 12:00:53 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJq-0001Kg-Rp; Tue, 20 Oct 2020 12:00:46 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJq-00023M-Qq; Tue, 20 Oct 2020 12:00:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
	id 1kUqKR-0006Dt-Da
	for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:01:23 +0000
X-Inumbo-ID: 84dd45ad-8c89-4d82-8625-9e52f44973bf
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 84dd45ad-8c89-4d82-8625-9e52f44973bf;
	Tue, 20 Oct 2020 12:00:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=7wB4uqY6kGMCF7XlR5ffd2ID0NPde8ssaqxMMSKWoo4=; b=Q2XFWCe6TurX2E3SWg1y2UwU4w
	yzX5Db6l+sBwAZSY9InK70f8J8Xkc+Wot07chELPpKbo85iGG3K7OKwR/TPDPhPuvNTS601e+ccaU
	NObrl/dfCNn1iZ0y6f4xqWxeGNtTnFcGiZkVnIfPqIgmzwOYdz6LGUtaXSGoG9kF6ar4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1kUqJq-0001Kg-Rp; Tue, 20 Oct 2020 12:00:46 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1kUqJq-00023M-Qq; Tue, 20 Oct 2020 12:00:46 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 346 v2 - undue deferral of IOMMU TLB flushes
Message-Id: <E1kUqJq-00023M-Qq@xenbits.xenproject.org>
Date: Tue, 20 Oct 2020 12:00:46 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-346
                              version 2

                  undue deferral of IOMMU TLB flushes

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

To efficiently change the physical to machine address mappings of a
larger range of addresses for fully virtualized guests, Xen contains
an optimization to coalesce per-page IOMMU TLB flushes into a single,
wider flush after all adjustments have been made.  While this is fine
to do for newly introduced page mappings, the possible removal of
pages from such guests during this operation should not be "optimized"
in the same way.  This is because the (typically) final reference of
such pages is dropped before the coalesced flush, and hence the pages
may have been put to a different use even though DMA initiated by
their original owner might still be in progress.

IMPACT
======

A malicious guest might be able to cause data corruption and data
leaks.  Host or guest Denial of Service (DoS), and privilege
escalation, cannot be ruled out.

VULNERABLE SYSTEMS
==================

All Xen versions from 4.2 onwards are vulnerable.  Xen versions 4.1 and
earlier are not vulnerable.

Only x86 HVM and PVH guests can leverage the vulnerability.  Arm guests
as well as x86 PV ones cannot leverage the vulnerability.

Only x86 HVM and PVH guests which have physical devices passed through
to them can leverage the vulnerability.

Only x86 HVM and PVH guests configured to not share IOMMU and CPU
page tables can leverage the vulnerability.  Sharing these page tables
is the default on capable Intel (VT-d) hardware.  On AMD hardware
sharing is not possible.  On Intel (VT-d) hardware sharing may also not
be possible, depending on hardware properties.  Whether it is possible
can be seen from the presence (or absence) of "iommu_hap_pt_share" on
the "virt_caps" line of "xl info" output.  Guests run in shadow mode
can leverage the vulnerability.

MITIGATION
==========

Not passing through physical devices to untrusted guests will avoid
the vulnerability.

On systems permitting page table sharing, not suppressing use of the
functionality makes it possible to avoid the vulnerability.  This means
guests should not be run in
* shadow mode, i.e. hardware needs to be HAP (Hardware Assisted Paging)
  capable, there should not be "hap=0" in the guest's xl configuration
  file, and there should not be "hap=0" or equivalent on Xen's command
  line,
* non-shared page table mode, i.e. hardware needs to be capable of
  sharing, there should not be "passthrough=sync_pt" in the guest's xl
  configuration file, and there should not be "iommu=no-sharept" or
  equivalent on Xen's command line.
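A hypothetical guest configuration fragment consistent with both constraints (only the HAP-related lines matter here; other values are illustrative):

```
# Hypothetical guest.cfg fragment honouring the mitigation:
# HAP stays enabled (do not set hap=0), and synchronous/non-shared
# IOMMU page tables are not requested (no passthrough=sync_pt).
type = "hvm"
hap = 1
# No physical devices passed through to untrusted guests,
# i.e. no "pci = [...]" line.
```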

CREDITS
=======

This issue was discovered by Jan Beulich of SUSE.

RESOLUTION
==========

Applying the appropriate pair of attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa346/xsa346-?.patch           Xen 4.14 - xen-unstable
xsa346/xsa346-4.13-?.patch      Xen 4.13
xsa346/xsa346-4.12-?.patch      Xen 4.12
xsa346/xsa346-4.11-?.patch      Xen 4.11
xsa346/xsa346-4.10-?.patch      Xen 4.10

$ sha256sum xsa346* xsa346*/*
ba560d34cb46f45d6da0ba5d672cb896c173e90de5c022d22415ace20c5e47b8  xsa346.meta
5f8b3e5565bc7d87283af173f5f2b35975e4ab6bff502780799d14fb263f730d  xsa346/xsa346-1.patch
9de89ca360f303e7aa3b42529cdf4191b0700ee7cb6928a22068195e047a4db7  xsa346/xsa346-2.patch
f3612bfad219160917a3bc46ea5b31673137593d62ae4f819a8e80ade0339c5b  xsa346/xsa346-4.10-1.patch
734ed82d583bbce342ffabeb9dd84e300f2717ec71e3de866670b0ddf18d57aa  xsa346/xsa346-4.10-2.patch
7a41bf06e19590cfc69d4f2ac132a23843dcec2ea5f98d86c4be971f9eec86af  xsa346/xsa346-4.11-1.patch
1359801b8f64ac62dc8de4e3acc15ec42c040f692f3a1ee9986acb478ee330cd  xsa346/xsa346-4.11-2.patch
190f594bb77dd044af8f0a051ab1d4143c348da192206da9b390af91c0a2cdec  xsa346/xsa346-4.12-1.patch
5bcb65dc45f6d74c644ee6b6add518044c9875e6759254773d3816e718c2be28  xsa346/xsa346-4.12-2.patch
69e0158276a922829eb60dc5bb13e60a71a232ace808843f45dac407716b107b  xsa346/xsa346-4.13-1.patch
eb8132a02c252dc65be1f334939f252db0c30ae2db8aa23f0d9e67f8148e2d2d  xsa346/xsa346-4.13-2.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

HOWEVER, deployment of the mitigations is NOT permitted (except where
all the affected systems and VMs are administered and used only by
organisations which are members of the Xen Project Security Issues
Predisclosure List).  Specifically, deployment on public cloud systems
is NOT permitted.

This is because removal of pass-through devices or their replacement by
emulated devices is a guest-visible configuration change, which may lead
to re-discovery of the issue.  Similarly, it cannot be excluded that the
possible guest configuration changes would be noticeable to guests.

Deployment of this mitigation is permitted only AFTER the embargo ends.

AND: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+OzqwMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZVwEH/1NuF+s6eI2O9rFtIrKLdHXtT6ehGJsj+UAHR18N
V6COF6AbGJ3me/RB5OVo1Fl7WGE5js2sMUpP8s6hO+y+iCiRoOF/Um7eXivmp3Yv
xKqHKr/6fjs2g8WP+SX/02bwPWS6qupBiZeC+EGKDbJRO2uBeGlXrVD0Nxrdx33Y
ATA92CEUnJvqRQAHo15pL/32AK2B+fNHY/voAWMMp3PXKCBMhdw9HVlQz2tJS+2s
mX7SRWzOMjBwo7jCz88nKIBjWkGNObuREEogt2hWrICUaKCQH1Gv4TFBI5UvE4YJ
MsaPmCQAyZDPs1N0VB3OZykm5Z1bktzsVWykab2b/xhSqgU=
=2aos
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa346.meta"
Content-Disposition: attachment; filename="xsa346.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNDYsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxNzE5Zjc5YTBlZmQzNmQxNTgzN2M1MTk4MjE3M2RkMWMy
ODdkY2VkIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAy
ODYsCiAgICAgICAgICAgIDM0NQogICAgICAgICAgXSwKICAgICAgICAgICJQ
YXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzQ2L3hzYTM0Ni00LjEwLT8u
cGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAgICB9LAog
ICAgIjQuMTEiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAgICJ4ZW4i
OiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjM2MzBhMzY3ODU0Yzk4YmJm
OGU3NDdkMDllZWFiN2U2OGYzNzAwMDMiLAogICAgICAgICAgIlByZXJlcXMi
OiBbCiAgICAgICAgICAgIDI4NiwKICAgICAgICAgICAgMzQ1CiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NDYveHNhMzQ2LTQuMTEtPy5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9
CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMiI6IHsKICAgICAgIlJlY2lwZXMi
OiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAi
Njg4ODAxNzM5MmFjMjViNWU1ODg1NTQwMzA2NDJhZmZhYzI1YTk1ZCIsCiAg
ICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMjg2LAogICAgICAg
ICAgICAzNDUKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6IFsK
ICAgICAgICAgICAgInhzYTM0Ni94c2EzNDYtNC4xMi0/LnBhdGNoIgogICAg
ICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0LjEzIjog
ewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAg
ICAgIlN0YWJsZVJlZiI6ICI4ZTdlNTg1N2EyMDNjOWQ5ZGY3NzMzZmQ2ODc2
ODU1NWM3ZTc2ODM5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAg
ICAgICAyODYsCiAgICAgICAgICAgIDM0NQogICAgICAgICAgXSwKICAgICAg
ICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzQ2L3hzYTM0Ni00
LjEzLT8ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAg
ICB9LAogICAgIjQuMTQiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAg
ICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogImM5M2I1MjBhNDFm
Mjc4N2RkNzZiZmIyZTQ1NDgzNmQxZDU3ODc1MDUiLAogICAgICAgICAgIlBy
ZXJlcXMiOiBbCiAgICAgICAgICAgIDI4NiwKICAgICAgICAgICAgMzQ1CiAg
ICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAg
ICJ4c2EzNDYveHNhMzQ2LT8ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lw
ZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYi
OiAiOTM1MDg1OTVkNTg4YWZlOWRjYTA4N2Y5NTIwMGVmZmI3Y2VkYzgxZiIs
CiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMjg2LAogICAg
ICAgICAgICAzNDUKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hlcyI6
IFsKICAgICAgICAgICAgInhzYTM0Ni94c2EzNDYtPy5wYXRjaCIKICAgICAg
ICAgIF0KICAgICAgICB9CiAgICAgIH0KICAgIH0KICB9Cn0=

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-1.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogc3VwcHJlc3MgImlvbW11X2RvbnRfZmx1c2hfaW90bGIiIHdo
ZW4gYWJvdXQgdG8gZnJlZSBhIHBhZ2UKCkRlZmVycmluZyBmbHVzaGVzIHRv
IGEgc2luZ2xlLCB3aWRlIHJhbmdlIG9uZSAtIGFzIGlzIGRvbmUgd2hlbgpo
YW5kbGluZyBYRU5NQVBTUEFDRV9nbWZuX3JhbmdlIC0gaXMgb2theSBvbmx5
IGFzIGxvbmcgYXMKcGFnZXMgZG9uJ3QgZ2V0IGZyZWVkIGFoZWFkIG9mIHRo
ZSBldmVudHVhbCBmbHVzaC4gV2hpbGUgdGhlIG9ubHkKZnVuY3Rpb24gc2V0
dGluZyB0aGUgZmxhZyAoeGVubWVtX2FkZF90b19waHlzbWFwKCkpIHN1Z2dl
c3RzIGJ5IGl0cyBuYW1lCnRoYXQgaXQncyBvbmx5IG1hcHBpbmcgbmV3IGVu
dHJpZXMsIGluIHJlYWxpdHkgdGhlIHdheQp4ZW5tZW1fYWRkX3RvX3BoeXNt
YXBfb25lKCkgd29ya3MgbWVhbnMgYW4gdW5tYXAgd291bGQgaGFwcGVuIG5v
dCBvbmx5CmZvciB0aGUgcGFnZSBiZWluZyBtb3ZlZCAoYnV0IG5vdCBmcmVl
ZCkgYnV0LCBpZiB0aGUgZGVzdGluYXRpb24gR0ZOIGlzCnBvcHVsYXRlZCwg
YWxzbyBmb3IgdGhlIHBhZ2UgYmVpbmcgZGlzcGxhY2VkIGZyb20gdGhhdCBH
Rk4uIENvbGxhcHNpbmcKdGhlIHR3byBmbHVzaGVzIGZvciB0aGlzIEdGTiBp
bnRvIGp1c3Qgb25lIChlbmQgZXZlbiBtb3JlIHNvIGRlZmVycmluZwppdCB0
byBhIGJhdGNoZWQgaW52b2NhdGlvbikgaXMgbm90IGNvcnJlY3QuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJhOWZkNWEgKCJp
b21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVfZG9udF9mbHVz
aF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIuLi4gIikKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2Vk
LWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgoKLS0tIGEv
eGVuL2NvbW1vbi9tZW1vcnkuYworKysgYi94ZW4vY29tbW9uL21lbW9yeS5j
CkBAIC0yOTMsNiArMjkzLDcgQEAgaW50IGd1ZXN0X3JlbW92ZV9wYWdlKHN0
cnVjdCBkb21haW4gKmQsCiAgICAgcDJtX3R5cGVfdCBwMm10OwogI2VuZGlm
CiAgICAgbWZuX3QgbWZuOworICAgIGJvb2wgKmRvbnRfZmx1c2hfcCwgZG9u
dF9mbHVzaDsKICAgICBpbnQgcmM7CiAKICNpZmRlZiBDT05GSUdfWDg2CkBA
IC0zNzksOCArMzgwLDE4IEBAIGludCBndWVzdF9yZW1vdmVfcGFnZShzdHJ1
Y3QgZG9tYWluICpkLAogICAgICAgICByZXR1cm4gLUVOWElPOwogICAgIH0K
IAorICAgIC8qCisgICAgICogU2luY2Ugd2UncmUgbGlrZWx5IHRvIGZyZWUg
dGhlIHBhZ2UgYmVsb3csIHdlIG5lZWQgdG8gc3VzcGVuZAorICAgICAqIHhl
bm1lbV9hZGRfdG9fcGh5c21hcCgpJ3Mgc3VwcHJlc3Npbmcgb2YgSU9NTVUg
VExCIGZsdXNoZXMuCisgICAgICovCisgICAgZG9udF9mbHVzaF9wID0gJnRo
aXNfY3B1KGlvbW11X2RvbnRfZmx1c2hfaW90bGIpOworICAgIGRvbnRfZmx1
c2ggPSAqZG9udF9mbHVzaF9wOworICAgICpkb250X2ZsdXNoX3AgPSBmYWxz
ZTsKKwogICAgIHJjID0gZ3Vlc3RfcGh5c21hcF9yZW1vdmVfcGFnZShkLCBf
Z2ZuKGdtZm4pLCBtZm4sIDApOwogCisgICAgKmRvbnRfZmx1c2hfcCA9IGRv
bnRfZmx1c2g7CisKICAgICAvKgogICAgICAqIFdpdGggdGhlIGxhY2sgb2Yg
YW4gSU9NTVUgb24gc29tZSBwbGF0Zm9ybXMsIGRvbWFpbnMgd2l0aCBETUEt
Y2FwYWJsZQogICAgICAqIGRldmljZSBtdXN0IHJldHJpZXZlIHRoZSBzYW1l
IHBmbiB3aGVuIHRoZSBoeXBlcmNhbGwgcG9wdWxhdGVfcGh5c21hcAo=

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-2.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogaG9sZCBwYWdlIHJlZiB1bnRpbCBhZnRlciBkZWZlcnJlZCBU
TEIgZmx1c2gKCldoZW4gbW92aW5nIGFyb3VuZCBhIHBhZ2UgdmlhIFhFTk1B
UFNQQUNFX2dtZm5fcmFuZ2UsIGRlZmVycmluZyB0aGUgVExCCmZsdXNoIGZv
ciB0aGUgImZyb20iIEdGTiByYW5nZSByZXF1aXJlcyB0aGF0IHRoZSBwYWdl
IHJlbWFpbnMgYWxsb2NhdGVkCnRvIHRoZSBndWVzdCB1bnRpbCB0aGUgVExC
IGZsdXNoIGhhcyBhY3R1YWxseSBvY2N1cnJlZC4gT3RoZXJ3aXNlIGEKcGFy
YWxsZWwgaHlwZXJjYWxsIHRvIHJlbW92ZSB0aGUgcGFnZSB3b3VsZCBvbmx5
IGZsdXNoIHRoZSBUTEIgZm9yIHRoZQpHRk4gaXQgaGFzIGJlZW4gbW92ZWQg
dG8sIGJ1dCBub3QgdGhlIG9uZSBpcyB3YXMgbWFwcGVkIGF0IG9yaWdpbmFs
bHkuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJh
OWZkNWEgKCJpb21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVf
ZG9udF9mbHVzaF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIu
Li4gIikKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+CgotLS0gYS94ZW4vYXJjaC9hcm0vbW0uYworKysgYi94ZW4vYXJj
aC9hcm0vbW0uYwpAQCAtMTQwNyw3ICsxNDA3LDcgQEAgdm9pZCBzaGFyZV94
ZW5fcGFnZV93aXRoX2d1ZXN0KHN0cnVjdCBwYQogaW50IHhlbm1lbV9hZGRf
dG9fcGh5c21hcF9vbmUoCiAgICAgc3RydWN0IGRvbWFpbiAqZCwKICAgICB1
bnNpZ25lZCBpbnQgc3BhY2UsCi0gICAgdW5pb24geGVuX2FkZF90b19waHlz
bWFwX2JhdGNoX2V4dHJhIGV4dHJhLAorICAgIHVuaW9uIGFkZF90b19waHlz
bWFwX2V4dHJhIGV4dHJhLAogICAgIHVuc2lnbmVkIGxvbmcgaWR4LAogICAg
IGdmbl90IGdmbikKIHsKQEAgLTE0ODAsMTAgKzE0ODAsNiBAQCBpbnQgeGVu
bWVtX2FkZF90b19waHlzbWFwX29uZSgKICAgICAgICAgYnJlYWs7CiAgICAg
fQogICAgIGNhc2UgWEVOTUFQU1BBQ0VfZGV2X21taW86Ci0gICAgICAgIC8q
IGV4dHJhIHNob3VsZCBiZSAwLiBSZXNlcnZlZCBmb3IgZnV0dXJlIHVzZS4g
Ki8KLSAgICAgICAgaWYgKCBleHRyYS5yZXMwICkKLSAgICAgICAgICAgIHJl
dHVybiAtRU9QTk9UU1VQUDsKLQogICAgICAgICByYyA9IG1hcF9kZXZfbW1p
b19yZWdpb24oZCwgZ2ZuLCAxLCBfbWZuKGlkeCkpOwogICAgICAgICByZXR1
cm4gcmM7CiAKLS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tLmMKQEAgLTQ0OTcsNyArNDQ5Nyw3IEBAIHN0YXRpYyBpbnQg
aGFuZGxlX2lvbWVtX3JhbmdlKHVuc2lnbmVkIGwKIGludCB4ZW5tZW1fYWRk
X3RvX3BoeXNtYXBfb25lKAogICAgIHN0cnVjdCBkb21haW4gKmQsCiAgICAg
dW5zaWduZWQgaW50IHNwYWNlLAotICAgIHVuaW9uIHhlbl9hZGRfdG9fcGh5
c21hcF9iYXRjaF9leHRyYSBleHRyYSwKKyAgICB1bmlvbiBhZGRfdG9fcGh5
c21hcF9leHRyYSBleHRyYSwKICAgICB1bnNpZ25lZCBsb25nIGlkeCwKICAg
ICBnZm5fdCBncGZuKQogewpAQCAtNDU4MSw5ICs0NTgxLDIwIEBAIGludCB4
ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKAogICAgICAgICByYyA9IGd1ZXN0
X3BoeXNtYXBfYWRkX3BhZ2UoZCwgZ3BmbiwgbWZuLCBQQUdFX09SREVSXzRL
KTsKIAogIHB1dF9ib3RoOgotICAgIC8qIEluIHRoZSBYRU5NQVBTUEFDRV9n
bWZuIGNhc2UsIHdlIHRvb2sgYSByZWYgb2YgdGhlIGdmbiBhdCB0aGUgdG9w
LiAqLworICAgIC8qCisgICAgICogSW4gdGhlIFhFTk1BUFNQQUNFX2dtZm4g
Y2FzZSwgd2UgdG9vayBhIHJlZiBvZiB0aGUgZ2ZuIGF0IHRoZSB0b3AuCisg
ICAgICogV2UgYWxzbyBtYXkgbmVlZCB0byB0cmFuc2ZlciBvd25lcnNoaXAg
b2YgdGhlIHBhZ2UgcmVmZXJlbmNlIHRvIG91cgorICAgICAqIGNhbGxlci4K
KyAgICAgKi8KICAgICBpZiAoIHNwYWNlID09IFhFTk1BUFNQQUNFX2dtZm4g
KQorICAgIHsKICAgICAgICAgcHV0X2dmbihkLCBnZm4pOworICAgICAgICBp
ZiAoICFyYyAmJiBleHRyYS5wcGFnZSApCisgICAgICAgIHsKKyAgICAgICAg
ICAgICpleHRyYS5wcGFnZSA9IHBhZ2U7CisgICAgICAgICAgICBwYWdlID0g
TlVMTDsKKyAgICAgICAgfQorICAgIH0KIAogICAgIGlmICggcGFnZSApCiAg
ICAgICAgIHB1dF9wYWdlKHBhZ2UpOwotLS0gYS94ZW4vY29tbW9uL21lbW9y
eS5jCisrKyBiL3hlbi9jb21tb24vbWVtb3J5LmMKQEAgLTgxNSwxMyArODE1
LDEyIEBAIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXAoc3RydWN0IGRvbWFp
bgogewogICAgIHVuc2lnbmVkIGludCBkb25lID0gMDsKICAgICBsb25nIHJj
ID0gMDsKLSAgICB1bmlvbiB4ZW5fYWRkX3RvX3BoeXNtYXBfYmF0Y2hfZXh0
cmEgZXh0cmE7CisgICAgdW5pb24gYWRkX3RvX3BoeXNtYXBfZXh0cmEgZXh0
cmEgPSB7fTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlc1sxNl07CiAK
ICAgICBBU1NFUlQocGFnaW5nX21vZGVfdHJhbnNsYXRlKGQpKTsKIAotICAg
IGlmICggeGF0cC0+c3BhY2UgIT0gWEVOTUFQU1BBQ0VfZ21mbl9mb3JlaWdu
ICkKLSAgICAgICAgZXh0cmEucmVzMCA9IDA7Ci0gICAgZWxzZQorICAgIGlm
ICggeGF0cC0+c3BhY2UgPT0gWEVOTUFQU1BBQ0VfZ21mbl9mb3JlaWduICkK
ICAgICAgICAgZXh0cmEuZm9yZWlnbl9kb21pZCA9IERPTUlEX0lOVkFMSUQ7
CiAKICAgICBpZiAoIHhhdHAtPnNwYWNlICE9IFhFTk1BUFNQQUNFX2dtZm5f
cmFuZ2UgKQpAQCAtODM2LDcgKzgzNSwxMCBAQCBpbnQgeGVubWVtX2FkZF90
b19waHlzbWFwKHN0cnVjdCBkb21haW4KICAgICB4YXRwLT5zaXplIC09IHN0
YXJ0OwogCiAgICAgaWYgKCBpc19pb21tdV9lbmFibGVkKGQpICkKKyAgICB7
CiAgICAgICAgdGhpc19jcHUoaW9tbXVfZG9udF9mbHVzaF9pb3RsYikgPSAx
OworICAgICAgIGV4dHJhLnBwYWdlID0gJnBhZ2VzWzBdOworICAgIH0KIAog
ICAgIHdoaWxlICggeGF0cC0+c2l6ZSA+IGRvbmUgKQogICAgIHsKQEAgLTg0
OCw4ICs4NTAsMTIgQEAgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1
Y3QgZG9tYWluCiAgICAgICAgIHhhdHAtPmlkeCsrOwogICAgICAgICB4YXRw
LT5ncGZuKys7CiAKKyAgICAgICAgaWYgKCBleHRyYS5wcGFnZSApCisgICAg
ICAgICAgICArK2V4dHJhLnBwYWdlOworCiAgICAgICAgIC8qIENoZWNrIGZv
ciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaXRlcmF0aW9u
LiAqLwotICAgICAgICBpZiAoIHhhdHAtPnNpemUgPiArK2RvbmUgJiYgaHlw
ZXJjYWxsX3ByZWVtcHRfY2hlY2soKSApCisgICAgICAgIGlmICggKCsrZG9u
ZSA+IEFSUkFZX1NJWkUocGFnZXMpICYmIGV4dHJhLnBwYWdlKSB8fAorICAg
ICAgICAgICAgICh4YXRwLT5zaXplID4gZG9uZSAmJiBoeXBlcmNhbGxfcHJl
ZW1wdF9jaGVjaygpKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIHJjID0g
c3RhcnQgKyBkb25lOwogICAgICAgICAgICAgYnJlYWs7CkBAIC04NTksNiAr
ODY1LDcgQEAgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1Y3QgZG9t
YWluCiAgICAgaWYgKCBpc19pb21tdV9lbmFibGVkKGQpICkKICAgICB7CiAg
ICAgICAgIGludCByZXQ7CisgICAgICAgIHVuc2lnbmVkIGludCBpOwogCiAg
ICAgICAgIHRoaXNfY3B1KGlvbW11X2RvbnRfZmx1c2hfaW90bGIpID0gMDsK
IApAQCAtODY3LDYgKzg3NCwxNSBAQCBpbnQgeGVubWVtX2FkZF90b19waHlz
bWFwKHN0cnVjdCBkb21haW4KICAgICAgICAgaWYgKCB1bmxpa2VseShyZXQp
ICYmIHJjID49IDAgKQogICAgICAgICAgICAgcmMgPSByZXQ7CiAKKyAgICAg
ICAgLyoKKyAgICAgICAgICogTm93IHRoYXQgdGhlIElPTU1VIFRMQiBmbHVz
aCB3YXMgZG9uZSBmb3IgdGhlIG9yaWdpbmFsIEdGTiwgZHJvcAorICAgICAg
ICAgKiB0aGUgcGFnZSByZWZlcmVuY2VzLiBUaGUgMm5kIGZsdXNoIGJlbG93
IGlzIGZpbmUgdG8gbWFrZSBsYXRlciwgYXMKKyAgICAgICAgICogd2hvZXZl
ciByZW1vdmVzIHRoZSBwYWdlIGFnYWluIGZyb20gaXRzIG5ldyBHRk4gd2ls
bCBoYXZlIHRvIGRvCisgICAgICAgICAqIGFub3RoZXIgZmx1c2ggYW55d2F5
LgorICAgICAgICAgKi8KKyAgICAgICAgZm9yICggaSA9IDA7IGkgPCBkb25l
OyArK2kgKQorICAgICAgICAgICAgcHV0X3BhZ2UocGFnZXNbaV0pOworCiAg
ICAgICAgIHJldCA9IGlvbW11X2lvdGxiX2ZsdXNoKGQsIF9kZm4oeGF0cC0+
Z3BmbiAtIGRvbmUpLCBkb25lLAogICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBJT01NVV9GTFVTSEZfYWRkZWQgfCBJT01NVV9GTFVTSEZfbW9k
aWZpZWQpOwogICAgICAgICBpZiAoIHVubGlrZWx5KHJldCkgJiYgcmMgPj0g
MCApCkBAIC04ODAsNiArODk2LDggQEAgc3RhdGljIGludCB4ZW5tZW1fYWRk
X3RvX3BoeXNtYXBfYmF0Y2gocwogICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgc3RydWN0IHhlbl9hZGRfdG9fcGh5c21hcF9iYXRj
aCAqeGF0cGIsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICB1bnNpZ25lZCBpbnQgZXh0ZW50KQogeworICAgIHVuaW9uIGFkZF90
b19waHlzbWFwX2V4dHJhIGV4dHJhID0ge307CisKICAgICBpZiAoIHVubGlr
ZWx5KHhhdHBiLT5zaXplIDwgZXh0ZW50KSApCiAgICAgICAgIHJldHVybiAt
RUlMU0VROwogCkBAIC04OTEsNiArOTA5LDE5IEBAIHN0YXRpYyBpbnQgeGVu
bWVtX2FkZF90b19waHlzbWFwX2JhdGNoKHMKICAgICAgICAgICFndWVzdF9o
YW5kbGVfc3VicmFuZ2Vfb2theSh4YXRwYi0+ZXJycywgZXh0ZW50LCB4YXRw
Yi0+c2l6ZSAtIDEpICkKICAgICAgICAgcmV0dXJuIC1FRkFVTFQ7CiAKKyAg
ICBzd2l0Y2ggKCB4YXRwYi0+c3BhY2UgKQorICAgIHsKKyAgICBjYXNlIFhF
Tk1BUFNQQUNFX2Rldl9tbWlvOgorICAgICAgICAvKiByZXMwIGlzIHJlc2Vy
dmVkIGZvciBmdXR1cmUgdXNlLiAqLworICAgICAgICBpZiAoIHhhdHBiLT51
LnJlczAgKQorICAgICAgICAgICAgcmV0dXJuIC1FT1BOT1RTVVBQOworICAg
ICAgICBicmVhazsKKworICAgIGNhc2UgWEVOTUFQU1BBQ0VfZ21mbl9mb3Jl
aWduOgorICAgICAgICBleHRyYS5mb3JlaWduX2RvbWlkID0geGF0cGItPnUu
Zm9yZWlnbl9kb21pZDsKKyAgICAgICAgYnJlYWs7CisgICAgfQorCiAgICAg
d2hpbGUgKCB4YXRwYi0+c2l6ZSA+IGV4dGVudCApCiAgICAgewogICAgICAg
ICB4ZW5fdWxvbmdfdCBpZHg7CkBAIC05MDMsOCArOTM0LDcgQEAgc3RhdGlj
IGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXBfYmF0Y2gocwogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBleHRlbnQs
IDEpKSApCiAgICAgICAgICAgICByZXR1cm4gLUVGQVVMVDsKIAotICAgICAg
ICByYyA9IHhlbm1lbV9hZGRfdG9fcGh5c21hcF9vbmUoZCwgeGF0cGItPnNw
YWNlLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
eGF0cGItPnUsCisgICAgICAgIHJjID0geGVubWVtX2FkZF90b19waHlzbWFw
X29uZShkLCB4YXRwYi0+c3BhY2UsIGV4dHJhLAogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgaWR4LCBfZ2ZuKGdwZm4pKTsKIAog
ICAgICAgICBpZiAoIHVubGlrZWx5KF9fY29weV90b19ndWVzdF9vZmZzZXQo
eGF0cGItPmVycnMsIGV4dGVudCwgJnJjLCAxKSkgKQotLS0gYS94ZW4vaW5j
bHVkZS94ZW4vbW0uaAorKysgYi94ZW4vaW5jbHVkZS94ZW4vbW0uaApAQCAt
NTkyLDggKzU5MiwyMiBAQCB2b2lkIHNjcnViX29uZV9wYWdlKHN0cnVjdCBw
YWdlX2luZm8gKik7CiAgICAgcGFnZV9saXN0X2RlbChwZywgcGFnZV90b19s
aXN0KGQsIHBnKSkKICNlbmRpZgogCit1bmlvbiBhZGRfdG9fcGh5c21hcF9l
eHRyYSB7CisgICAgLyoKKyAgICAgKiBYRU5NQVBTUEFDRV9nbWZuOiBXaGVu
IGRlZmVycmluZyBUTEIgZmx1c2hlcywgYSBwYWdlIHJlZmVyZW5jZSBuZWVk
cworICAgICAqIHRvIGJlIGtlcHQgdW50aWwgYWZ0ZXIgdGhlIGZsdXNoLCBz
byB0aGUgcGFnZSBjYW4ndCBnZXQgcmVtb3ZlZCBmcm9tCisgICAgICogdGhl
IGRvbWFpbiAoYW5kIHJlLXVzZWQgZm9yIGFub3RoZXIgcHVycG9zZSkgYmVm
b3JlaGFuZC4gQnkgcGFzc2luZworICAgICAqIG5vbi1OVUxMLCB0aGUgY2Fs
bGVyIG9mIHhlbm1lbV9hZGRfdG9fcGh5c21hcF9vbmUoKSBpbmRpY2F0ZXMg
aXQgd2FudHMKKyAgICAgKiB0byBoYXZlIG93bmVyc2hpcCBvZiBzdWNoIGEg
cmVmZXJlbmNlIHRyYW5zZmVycmVkIGluIHRoZSBzdWNjZXNzIGNhc2UuCisg
ICAgICovCisgICAgc3RydWN0IHBhZ2VfaW5mbyAqKnBwYWdlOworCisgICAg
LyogWEVOTUFQU1BBQ0VfZ21mbl9mb3JlaWduICovCisgICAgZG9taWRfdCBm
b3JlaWduX2RvbWlkOworfTsKKwogaW50IHhlbm1lbV9hZGRfdG9fcGh5c21h
cF9vbmUoc3RydWN0IGRvbWFpbiAqZCwgdW5zaWduZWQgaW50IHNwYWNlLAot
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5pb24geGVuX2FkZF90
b19waHlzbWFwX2JhdGNoX2V4dHJhIGV4dHJhLAorICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgdW5pb24gYWRkX3RvX3BoeXNtYXBfZXh0cmEgZXh0
cmEsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBs
b25nIGlkeCwgZ2ZuX3QgZ2ZuKTsKIAogaW50IHhlbm1lbV9hZGRfdG9fcGh5
c21hcChzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgeGVuX2FkZF90b19waHlz
bWFwICp4YXRwLAo=

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-4.10-1.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-4.10-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogc3VwcHJlc3MgImlvbW11X2RvbnRfZmx1c2hfaW90bGIiIHdo
ZW4gYWJvdXQgdG8gZnJlZSBhIHBhZ2UKCkRlZmVycmluZyBmbHVzaGVzIHRv
IGEgc2luZ2xlLCB3aWRlIHJhbmdlIG9uZSAtIGFzIGlzIGRvbmUgd2hlbgpo
YW5kbGluZyBYRU5NQVBTUEFDRV9nbWZuX3JhbmdlIC0gaXMgb2theSBvbmx5
IGFzIGxvbmcgYXMKcGFnZXMgZG9uJ3QgZ2V0IGZyZWVkIGFoZWFkIG9mIHRo
ZSBldmVudHVhbCBmbHVzaC4gV2hpbGUgdGhlIG9ubHkKZnVuY3Rpb24gc2V0
dGluZyB0aGUgZmxhZyAoeGVubWVtX2FkZF90b19waHlzbWFwKCkpIHN1Z2dl
c3RzIGJ5IGl0cyBuYW1lCnRoYXQgaXQncyBvbmx5IG1hcHBpbmcgbmV3IGVu
dHJpZXMsIGluIHJlYWxpdHkgdGhlIHdheQp4ZW5tZW1fYWRkX3RvX3BoeXNt
YXBfb25lKCkgd29ya3MgbWVhbnMgYW4gdW5tYXAgd291bGQgaGFwcGVuIG5v
dCBvbmx5CmZvciB0aGUgcGFnZSBiZWluZyBtb3ZlZCAoYnV0IG5vdCBmcmVl
ZCkgYnV0LCBpZiB0aGUgZGVzdGluYXRpb24gR0ZOIGlzCnBvcHVsYXRlZCwg
YWxzbyBmb3IgdGhlIHBhZ2UgYmVpbmcgZGlzcGxhY2VkIGZyb20gdGhhdCBH
Rk4uIENvbGxhcHNpbmcKdGhlIHR3byBmbHVzaGVzIGZvciB0aGlzIEdGTiBp
bnRvIGp1c3Qgb25lIChlbmQgZXZlbiBtb3JlIHNvIGRlZmVycmluZwppdCB0
byBhIGJhdGNoZWQgaW52b2NhdGlvbikgaXMgbm90IGNvcnJlY3QuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJhOWZkNWEgKCJp
b21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVfZG9udF9mbHVz
aF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIuLi4gIikKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2Vk
LWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgoKLS0tIGEv
eGVuL2NvbW1vbi9tZW1vcnkuYworKysgYi94ZW4vY29tbW9uL21lbW9yeS5j
CkBAIC0yODQsNyArMjg0LDEwIEBAIGludCBndWVzdF9yZW1vdmVfcGFnZShz
dHJ1Y3QgZG9tYWluICpkLAogICAgIHAybV90eXBlX3QgcDJtdDsKICNlbmRp
ZgogICAgIG1mbl90IG1mbjsKKyNpZmRlZiBDT05GSUdfSEFTX1BBU1NUSFJP
VUdICisgICAgYm9vbCAqZG9udF9mbHVzaF9wLCBkb250X2ZsdXNoOwogICAg
IGludCByYzsKKyNlbmRpZgogCiAjaWZkZWYgQ09ORklHX1g4NgogICAgIG1m
biA9IGdldF9nZm5fcXVlcnkoZCwgZ21mbiwgJnAybXQpOwpAQCAtMzU5LDgg
KzM2MiwyMiBAQCBpbnQgZ3Vlc3RfcmVtb3ZlX3BhZ2Uoc3RydWN0IGRvbWFp
biAqZCwKICAgICAgICAgcmV0dXJuIC1FTlhJTzsKICAgICB9CiAKKyNpZmRl
ZiBDT05GSUdfSEFTX1BBU1NUSFJPVUdICisgICAgLyoKKyAgICAgKiBTaW5j
ZSB3ZSdyZSBsaWtlbHkgdG8gZnJlZSB0aGUgcGFnZSBiZWxvdywgd2UgbmVl
ZCB0byBzdXNwZW5kCisgICAgICogeGVubWVtX2FkZF90b19waHlzbWFwKCkn
cyBzdXBwcmVzc2luZyBvZiBJT01NVSBUTEIgZmx1c2hlcy4KKyAgICAgKi8K
KyAgICBkb250X2ZsdXNoX3AgPSAmdGhpc19jcHUoaW9tbXVfZG9udF9mbHVz
aF9pb3RsYik7CisgICAgZG9udF9mbHVzaCA9ICpkb250X2ZsdXNoX3A7Cisg
ICAgKmRvbnRfZmx1c2hfcCA9IGZhbHNlOworI2VuZGlmCisKICAgICByYyA9
IGd1ZXN0X3BoeXNtYXBfcmVtb3ZlX3BhZ2UoZCwgX2dmbihnbWZuKSwgbWZu
LCAwKTsKIAorI2lmZGVmIENPTkZJR19IQVNfUEFTU1RIUk9VR0gKKyAgICAq
ZG9udF9mbHVzaF9wID0gZG9udF9mbHVzaDsKKyNlbmRpZgorCiAgICAgLyoK
ICAgICAgKiBXaXRoIHRoZSBsYWNrIG9mIGFuIElPTU1VIG9uIHNvbWUgcGxh
dGZvcm1zLCBkb21haW5zIHdpdGggRE1BLWNhcGFibGUKICAgICAgKiBkZXZp
Y2UgbXVzdCByZXRyaWV2ZSB0aGUgc2FtZSBwZm4gd2hlbiB0aGUgaHlwZXJj
YWxsIHBvcHVsYXRlX3BoeXNtYXAK

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-4.10-2.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-4.10-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogaG9sZCBwYWdlIHJlZiB1bnRpbCBhZnRlciBkZWZlcnJlZCBU
TEIgZmx1c2gKCldoZW4gbW92aW5nIGFyb3VuZCBhIHBhZ2UgdmlhIFhFTk1B
UFNQQUNFX2dtZm5fcmFuZ2UsIGRlZmVycmluZyB0aGUgVExCCmZsdXNoIGZv
ciB0aGUgImZyb20iIEdGTiByYW5nZSByZXF1aXJlcyB0aGF0IHRoZSBwYWdl
IHJlbWFpbnMgYWxsb2NhdGVkCnRvIHRoZSBndWVzdCB1bnRpbCB0aGUgVExC
IGZsdXNoIGhhcyBhY3R1YWxseSBvY2N1cnJlZC4gT3RoZXJ3aXNlIGEKcGFy
YWxsZWwgaHlwZXJjYWxsIHRvIHJlbW92ZSB0aGUgcGFnZSB3b3VsZCBvbmx5
IGZsdXNoIHRoZSBUTEIgZm9yIHRoZQpHRk4gaXQgaGFzIGJlZW4gbW92ZWQg
dG8sIGJ1dCBub3QgdGhlIG9uZSBpcyB3YXMgbWFwcGVkIGF0IG9yaWdpbmFs
bHkuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJh
OWZkNWEgKCJpb21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVf
ZG9udF9mbHVzaF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIu
Li4gIikKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+CgotLS0gYS94ZW4vYXJjaC9hcm0vbW0uYworKysgYi94ZW4vYXJj
aC9hcm0vbW0uYwpAQCAtMTIyNSw3ICsxMjI1LDcgQEAgdm9pZCBzaGFyZV94
ZW5fcGFnZV93aXRoX3ByaXZpbGVnZWRfZ3VlcwogaW50IHhlbm1lbV9hZGRf
dG9fcGh5c21hcF9vbmUoCiAgICAgc3RydWN0IGRvbWFpbiAqZCwKICAgICB1
bnNpZ25lZCBpbnQgc3BhY2UsCi0gICAgdW5pb24geGVuX2FkZF90b19waHlz
bWFwX2JhdGNoX2V4dHJhIGV4dHJhLAorICAgIHVuaW9uIGFkZF90b19waHlz
bWFwX2V4dHJhIGV4dHJhLAogICAgIHVuc2lnbmVkIGxvbmcgaWR4LAogICAg
IGdmbl90IGdmbikKIHsKQEAgLTEyOTcsMTAgKzEyOTcsNiBAQCBpbnQgeGVu
bWVtX2FkZF90b19waHlzbWFwX29uZSgKICAgICAgICAgYnJlYWs7CiAgICAg
fQogICAgIGNhc2UgWEVOTUFQU1BBQ0VfZGV2X21taW86Ci0gICAgICAgIC8q
IGV4dHJhIHNob3VsZCBiZSAwLiBSZXNlcnZlZCBmb3IgZnV0dXJlIHVzZS4g
Ki8KLSAgICAgICAgaWYgKCBleHRyYS5yZXMwICkKLSAgICAgICAgICAgIHJl
dHVybiAtRU9QTk9UU1VQUDsKLQogICAgICAgICByYyA9IG1hcF9kZXZfbW1p
b19yZWdpb24oZCwgZ2ZuLCAxLCBfbWZuKGlkeCkpOwogICAgICAgICByZXR1
cm4gcmM7CiAKLS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tLmMKQEAgLTQ1OTIsNyArNDU5Miw3IEBAIHN0YXRpYyBpbnQg
aGFuZGxlX2lvbWVtX3JhbmdlKHVuc2lnbmVkIGwKIGludCB4ZW5tZW1fYWRk
X3RvX3BoeXNtYXBfb25lKAogICAgIHN0cnVjdCBkb21haW4gKmQsCiAgICAg
dW5zaWduZWQgaW50IHNwYWNlLAotICAgIHVuaW9uIHhlbl9hZGRfdG9fcGh5
c21hcF9iYXRjaF9leHRyYSBleHRyYSwKKyAgICB1bmlvbiBhZGRfdG9fcGh5
c21hcF9leHRyYSBleHRyYSwKICAgICB1bnNpZ25lZCBsb25nIGlkeCwKICAg
ICBnZm5fdCBncGZuKQogewpAQCAtNDY4Miw5ICs0NjgyLDIwIEBAIGludCB4
ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKAogICAgICAgICByYyA9IGd1ZXN0
X3BoeXNtYXBfYWRkX3BhZ2UoZCwgZ3BmbiwgbWZuLCBQQUdFX09SREVSXzRL
KTsKIAogIHB1dF9ib3RoOgotICAgIC8qIEluIHRoZSBYRU5NQVBTUEFDRV9n
bWZuLCB3ZSB0b29rIGEgcmVmIG9mIHRoZSBnZm4gYXQgdGhlIHRvcCAqLwor
ICAgIC8qCisgICAgICogSW4gdGhlIFhFTk1BUFNQQUNFX2dtZm4gY2FzZSwg
d2UgdG9vayBhIHJlZiBvZiB0aGUgZ2ZuIGF0IHRoZSB0b3AuCisgICAgICog
V2UgYWxzbyBtYXkgbmVlZCB0byB0cmFuc2ZlciBvd25lcnNoaXAgb2YgdGhl
IHBhZ2UgcmVmZXJlbmNlIHRvIG91cgorICAgICAqIGNhbGxlci4KKyAgICAg
Ki8KICAgICBpZiAoIHNwYWNlID09IFhFTk1BUFNQQUNFX2dtZm4gfHwgc3Bh
Y2UgPT0gWEVOTUFQU1BBQ0VfZ21mbl9yYW5nZSApCisgICAgewogICAgICAg
ICBwdXRfZ2ZuKGQsIGdmbik7CisgICAgICAgIGlmICggIXJjICYmIGV4dHJh
LnBwYWdlICkKKyAgICAgICAgeworICAgICAgICAgICAgKmV4dHJhLnBwYWdl
ID0gcGFnZTsKKyAgICAgICAgICAgIHBhZ2UgPSBOVUxMOworICAgICAgICB9
CisgICAgfQogCiAgICAgaWYgKCBwYWdlICkKICAgICAgICAgcHV0X3BhZ2Uo
cGFnZSk7Ci0tLSBhL3hlbi9jb21tb24vbWVtb3J5LmMKKysrIGIveGVuL2Nv
bW1vbi9tZW1vcnkuYwpAQCAtNzY4LDExICs3NjgsMTAgQEAgc3RhdGljIGlu
dCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXAoc3RydWN0CiB7CiAgICAgdW5zaWdu
ZWQgaW50IGRvbmUgPSAwOwogICAgIGxvbmcgcmMgPSAwOwotICAgIHVuaW9u
IHhlbl9hZGRfdG9fcGh5c21hcF9iYXRjaF9leHRyYSBleHRyYTsKKyAgICB1
bmlvbiBhZGRfdG9fcGh5c21hcF9leHRyYSBleHRyYSA9IHt9OworICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBhZ2VzWzE2XTsKIAotICAgIGlmICggeGF0cC0+
c3BhY2UgIT0gWEVOTUFQU1BBQ0VfZ21mbl9mb3JlaWduICkKLSAgICAgICAg
ZXh0cmEucmVzMCA9IDA7Ci0gICAgZWxzZQorICAgIGlmICggeGF0cC0+c3Bh
Y2UgPT0gWEVOTUFQU1BBQ0VfZ21mbl9mb3JlaWduICkKICAgICAgICAgZXh0
cmEuZm9yZWlnbl9kb21pZCA9IERPTUlEX0lOVkFMSUQ7CiAKICAgICBpZiAo
IHhhdHAtPnNwYWNlICE9IFhFTk1BUFNQQUNFX2dtZm5fcmFuZ2UgKQpAQCAt
Nzg4LDcgKzc4NywxMCBAQCBzdGF0aWMgaW50IHhlbm1lbV9hZGRfdG9fcGh5
c21hcChzdHJ1Y3QKIAogI2lmZGVmIENPTkZJR19IQVNfUEFTU1RIUk9VR0gK
ICAgICBpZiAoIG5lZWRfaW9tbXUoZCkgKQorICAgIHsKICAgICAgICAgdGhp
c19jcHUoaW9tbXVfZG9udF9mbHVzaF9pb3RsYikgPSAxOworICAgICAgICBl
eHRyYS5wcGFnZSA9ICZwYWdlc1swXTsKKyAgICB9CiAjZW5kaWYKIAogICAg
IHdoaWxlICggeGF0cC0+c2l6ZSA+IGRvbmUgKQpAQCAtODAxLDggKzgwMywx
MiBAQCBzdGF0aWMgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1Y3QK
ICAgICAgICAgeGF0cC0+aWR4Kys7CiAgICAgICAgIHhhdHAtPmdwZm4rKzsK
IAorICAgICAgICBpZiAoIGV4dHJhLnBwYWdlICkKKyAgICAgICAgICAgICsr
ZXh0cmEucHBhZ2U7CisKICAgICAgICAgLyogQ2hlY2sgZm9yIGNvbnRpbnVh
dGlvbiBpZiBpdCdzIG5vdCB0aGUgbGFzdCBpdGVyYXRpb24uICovCi0gICAg
ICAgIGlmICggeGF0cC0+c2l6ZSA+ICsrZG9uZSAmJiBoeXBlcmNhbGxfcHJl
ZW1wdF9jaGVjaygpICkKKyAgICAgICAgaWYgKCAoKytkb25lID4gQVJSQVlf
U0laRShwYWdlcykgJiYgZXh0cmEucHBhZ2UpIHx8CisgICAgICAgICAgICAg
KHhhdHAtPnNpemUgPiBkb25lICYmIGh5cGVyY2FsbF9wcmVlbXB0X2NoZWNr
KCkpICkKICAgICAgICAgewogICAgICAgICAgICAgcmMgPSBzdGFydCArIGRv
bmU7CiAgICAgICAgICAgICBicmVhazsKQEAgLTgxMyw2ICs4MTksNyBAQCBz
dGF0aWMgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1Y3QKICAgICBp
ZiAoIG5lZWRfaW9tbXUoZCkgKQogICAgIHsKICAgICAgICAgaW50IHJldDsK
KyAgICAgICAgdW5zaWduZWQgaW50IGk7CiAKICAgICAgICAgdGhpc19jcHUo
aW9tbXVfZG9udF9mbHVzaF9pb3RsYikgPSAwOwogCkBAIC04MjAsNiArODI3
LDE1IEBAIHN0YXRpYyBpbnQgeGVubWVtX2FkZF90b19waHlzbWFwKHN0cnVj
dAogICAgICAgICBpZiAoIHVubGlrZWx5KHJldCkgJiYgcmMgPj0gMCApCiAg
ICAgICAgICAgICByYyA9IHJldDsKIAorICAgICAgICAvKgorICAgICAgICAg
KiBOb3cgdGhhdCB0aGUgSU9NTVUgVExCIGZsdXNoIHdhcyBkb25lIGZvciB0
aGUgb3JpZ2luYWwgR0ZOLCBkcm9wCisgICAgICAgICAqIHRoZSBwYWdlIHJl
ZmVyZW5jZXMuIFRoZSAybmQgZmx1c2ggYmVsb3cgaXMgZmluZSB0byBtYWtl
IGxhdGVyLCBhcworICAgICAgICAgKiB3aG9ldmVyIHJlbW92ZXMgdGhlIHBh
Z2UgYWdhaW4gZnJvbSBpdHMgbmV3IEdGTiB3aWxsIGhhdmUgdG8gZG8KKyAg
ICAgICAgICogYW5vdGhlciBmbHVzaCBhbnl3YXkuCisgICAgICAgICAqLwor
ICAgICAgICBmb3IgKCBpID0gMDsgaSA8IGRvbmU7ICsraSApCisgICAgICAg
ICAgICBwdXRfcGFnZShwYWdlc1tpXSk7CisKICAgICAgICAgcmV0ID0gaW9t
bXVfaW90bGJfZmx1c2goZCwgeGF0cC0+Z3BmbiAtIGRvbmUsIGRvbmUpOwog
ICAgICAgICBpZiAoIHVubGlrZWx5KHJldCkgJiYgcmMgPj0gMCApCiAgICAg
ICAgICAgICByYyA9IHJldDsKQEAgLTgzNSw2ICs4NTEsNyBAQCBzdGF0aWMg
aW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcF9iYXRjaChzCiB7CiAgICAgdW5z
aWduZWQgaW50IGRvbmUgPSAwOwogICAgIGludCByYzsKKyAgICB1bmlvbiBh
ZGRfdG9fcGh5c21hcF9leHRyYSBleHRyYSA9IHt9OwogCiAgICAgaWYgKCB4
YXRwYi0+c2l6ZSA8IHN0YXJ0ICkKICAgICAgICAgcmV0dXJuIC1FSUxTRVE7
CkBAIC04NDksNiArODY2LDE5IEBAIHN0YXRpYyBpbnQgeGVubWVtX2FkZF90
b19waHlzbWFwX2JhdGNoKHMKICAgICAgICAgICFndWVzdF9oYW5kbGVfb2th
eSh4YXRwYi0+ZXJycywgeGF0cGItPnNpemUpICkKICAgICAgICAgcmV0dXJu
IC1FRkFVTFQ7CiAKKyAgICBzd2l0Y2ggKCB4YXRwYi0+c3BhY2UgKQorICAg
IHsKKyAgICBjYXNlIFhFTk1BUFNQQUNFX2Rldl9tbWlvOgorICAgICAgICAv
KiByZXMwIGlzIHJlc2VydmVkIGZvciBmdXR1cmUgdXNlLiAqLworICAgICAg
ICBpZiAoIHhhdHBiLT51LnJlczAgKQorICAgICAgICAgICAgcmV0dXJuIC1F
T1BOT1RTVVBQOworICAgICAgICBicmVhazsKKworICAgIGNhc2UgWEVOTUFQ
U1BBQ0VfZ21mbl9mb3JlaWduOgorICAgICAgICBleHRyYS5mb3JlaWduX2Rv
bWlkID0geGF0cGItPnUuZm9yZWlnbl9kb21pZDsKKyAgICAgICAgYnJlYWs7
CisgICAgfQorCiAgICAgd2hpbGUgKCB4YXRwYi0+c2l6ZSA+IGRvbmUgKQog
ICAgIHsKICAgICAgICAgeGVuX3Vsb25nX3QgaWR4OwpAQCAtODY2LDggKzg5
Niw3IEBAIHN0YXRpYyBpbnQgeGVubWVtX2FkZF90b19waHlzbWFwX2JhdGNo
KHMKICAgICAgICAgICAgIGdvdG8gb3V0OwogICAgICAgICB9CiAKLSAgICAg
ICAgcmMgPSB4ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKGQsIHhhdHBiLT5z
cGFjZSwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHhhdHBiLT51LAorICAgICAgICByYyA9IHhlbm1lbV9hZGRfdG9fcGh5c21h
cF9vbmUoZCwgeGF0cGItPnNwYWNlLCBleHRyYSwKICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIGlkeCwgX2dmbihncGZuKSk7CiAK
ICAgICAgICAgaWYgKCB1bmxpa2VseShfX2NvcHlfdG9fZ3Vlc3Rfb2Zmc2V0
KHhhdHBiLT5lcnJzLCAwLCAmcmMsIDEpKSApCi0tLSBhL3hlbi9pbmNsdWRl
L3hlbi9tbS5oCisrKyBiL3hlbi9pbmNsdWRlL3hlbi9tbS5oCkBAIC01ODMs
OCArNTgzLDIyIEBAIHZvaWQgc2NydWJfb25lX3BhZ2Uoc3RydWN0IHBhZ2Vf
aW5mbyAqKTsKICAgICAgICAgICAgICAgICAgICAgICAmKGQpLT54ZW5wYWdl
X2xpc3QgOiAmKGQpLT5wYWdlX2xpc3QpCiAjZW5kaWYKIAordW5pb24gYWRk
X3RvX3BoeXNtYXBfZXh0cmEgeworICAgIC8qCisgICAgICogWEVOTUFQU1BB
Q0VfZ21mbjogV2hlbiBkZWZlcnJpbmcgVExCIGZsdXNoZXMsIGEgcGFnZSBy
ZWZlcmVuY2UgbmVlZHMKKyAgICAgKiB0byBiZSBrZXB0IHVudGlsIGFmdGVy
IHRoZSBmbHVzaCwgc28gdGhlIHBhZ2UgY2FuJ3QgZ2V0IHJlbW92ZWQgZnJv
bQorICAgICAqIHRoZSBkb21haW4gKGFuZCByZS11c2VkIGZvciBhbm90aGVy
IHB1cnBvc2UpIGJlZm9yZWhhbmQuIEJ5IHBhc3NpbmcKKyAgICAgKiBub24t
TlVMTCwgdGhlIGNhbGxlciBvZiB4ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25l
KCkgaW5kaWNhdGVzIGl0IHdhbnRzCisgICAgICogdG8gaGF2ZSBvd25lcnNo
aXAgb2Ygc3VjaCBhIHJlZmVyZW5jZSB0cmFuc2ZlcnJlZCBpbiB0aGUgc3Vj
Y2VzcyBjYXNlLgorICAgICAqLworICAgIHN0cnVjdCBwYWdlX2luZm8gKipw
cGFnZTsKKworICAgIC8qIFhFTk1BUFNQQUNFX2dtZm5fZm9yZWlnbiAqLwor
ICAgIGRvbWlkX3QgZm9yZWlnbl9kb21pZDsKK307CisKIGludCB4ZW5tZW1f
YWRkX3RvX3BoeXNtYXBfb25lKHN0cnVjdCBkb21haW4gKmQsIHVuc2lnbmVk
IGludCBzcGFjZSwKLSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVu
aW9uIHhlbl9hZGRfdG9fcGh5c21hcF9iYXRjaF9leHRyYSBleHRyYSwKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuaW9uIGFkZF90b19waHlz
bWFwX2V4dHJhIGV4dHJhLAogICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgdW5zaWduZWQgbG9uZyBpZHgsIGdmbl90IGdmbik7CiAKIC8qIFJldHVy
biAwIG9uIHN1Y2Nlc3MsIG9yIG5lZ2F0aXZlIG9uIGVycm9yLiAqLwo=

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-4.11-1.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogc3VwcHJlc3MgImlvbW11X2RvbnRfZmx1c2hfaW90bGIiIHdo
ZW4gYWJvdXQgdG8gZnJlZSBhIHBhZ2UKCkRlZmVycmluZyBmbHVzaGVzIHRv
IGEgc2luZ2xlLCB3aWRlIHJhbmdlIG9uZSAtIGFzIGlzIGRvbmUgd2hlbgpo
YW5kbGluZyBYRU5NQVBTUEFDRV9nbWZuX3JhbmdlIC0gaXMgb2theSBvbmx5
IGFzIGxvbmcgYXMKcGFnZXMgZG9uJ3QgZ2V0IGZyZWVkIGFoZWFkIG9mIHRo
ZSBldmVudHVhbCBmbHVzaC4gV2hpbGUgdGhlIG9ubHkKZnVuY3Rpb24gc2V0
dGluZyB0aGUgZmxhZyAoeGVubWVtX2FkZF90b19waHlzbWFwKCkpIHN1Z2dl
c3RzIGJ5IGl0cyBuYW1lCnRoYXQgaXQncyBvbmx5IG1hcHBpbmcgbmV3IGVu
dHJpZXMsIGluIHJlYWxpdHkgdGhlIHdheQp4ZW5tZW1fYWRkX3RvX3BoeXNt
YXBfb25lKCkgd29ya3MgbWVhbnMgYW4gdW5tYXAgd291bGQgaGFwcGVuIG5v
dCBvbmx5CmZvciB0aGUgcGFnZSBiZWluZyBtb3ZlZCAoYnV0IG5vdCBmcmVl
ZCkgYnV0LCBpZiB0aGUgZGVzdGluYXRpb24gR0ZOIGlzCnBvcHVsYXRlZCwg
YWxzbyBmb3IgdGhlIHBhZ2UgYmVpbmcgZGlzcGxhY2VkIGZyb20gdGhhdCBH
Rk4uIENvbGxhcHNpbmcKdGhlIHR3byBmbHVzaGVzIGZvciB0aGlzIEdGTiBp
bnRvIGp1c3Qgb25lIChlbmQgZXZlbiBtb3JlIHNvIGRlZmVycmluZwppdCB0
byBhIGJhdGNoZWQgaW52b2NhdGlvbikgaXMgbm90IGNvcnJlY3QuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJhOWZkNWEgKCJp
b21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVfZG9udF9mbHVz
aF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIuLi4gIikKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2Vk
LWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgoKLS0tIGEv
eGVuL2NvbW1vbi9tZW1vcnkuYworKysgYi94ZW4vY29tbW9uL21lbW9yeS5j
CkBAIC0yOTgsNyArMjk4LDEwIEBAIGludCBndWVzdF9yZW1vdmVfcGFnZShz
dHJ1Y3QgZG9tYWluICpkLAogICAgIHAybV90eXBlX3QgcDJtdDsKICNlbmRp
ZgogICAgIG1mbl90IG1mbjsKKyNpZmRlZiBDT05GSUdfSEFTX1BBU1NUSFJP
VUdICisgICAgYm9vbCAqZG9udF9mbHVzaF9wLCBkb250X2ZsdXNoOwogICAg
IGludCByYzsKKyNlbmRpZgogCiAjaWZkZWYgQ09ORklHX1g4NgogICAgIG1m
biA9IGdldF9nZm5fcXVlcnkoZCwgZ21mbiwgJnAybXQpOwpAQCAtMzc2LDgg
KzM3OSwyMiBAQCBpbnQgZ3Vlc3RfcmVtb3ZlX3BhZ2Uoc3RydWN0IGRvbWFp
biAqZCwKICAgICAgICAgcmV0dXJuIC1FTlhJTzsKICAgICB9CiAKKyNpZmRl
ZiBDT05GSUdfSEFTX1BBU1NUSFJPVUdICisgICAgLyoKKyAgICAgKiBTaW5j
ZSB3ZSdyZSBsaWtlbHkgdG8gZnJlZSB0aGUgcGFnZSBiZWxvdywgd2UgbmVl
ZCB0byBzdXNwZW5kCisgICAgICogeGVubWVtX2FkZF90b19waHlzbWFwKCkn
cyBzdXBwcmVzc2luZyBvZiBJT01NVSBUTEIgZmx1c2hlcy4KKyAgICAgKi8K
KyAgICBkb250X2ZsdXNoX3AgPSAmdGhpc19jcHUoaW9tbXVfZG9udF9mbHVz
aF9pb3RsYik7CisgICAgZG9udF9mbHVzaCA9ICpkb250X2ZsdXNoX3A7Cisg
ICAgKmRvbnRfZmx1c2hfcCA9IGZhbHNlOworI2VuZGlmCisKICAgICByYyA9
IGd1ZXN0X3BoeXNtYXBfcmVtb3ZlX3BhZ2UoZCwgX2dmbihnbWZuKSwgbWZu
LCAwKTsKIAorI2lmZGVmIENPTkZJR19IQVNfUEFTU1RIUk9VR0gKKyAgICAq
ZG9udF9mbHVzaF9wID0gZG9udF9mbHVzaDsKKyNlbmRpZgorCiAgICAgLyoK
ICAgICAgKiBXaXRoIHRoZSBsYWNrIG9mIGFuIElPTU1VIG9uIHNvbWUgcGxh
dGZvcm1zLCBkb21haW5zIHdpdGggRE1BLWNhcGFibGUKICAgICAgKiBkZXZp
Y2UgbXVzdCByZXRyaWV2ZSB0aGUgc2FtZSBwZm4gd2hlbiB0aGUgaHlwZXJj
YWxsIHBvcHVsYXRlX3BoeXNtYXAK

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-4.11-2.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogaG9sZCBwYWdlIHJlZiB1bnRpbCBhZnRlciBkZWZlcnJlZCBU
TEIgZmx1c2gKCldoZW4gbW92aW5nIGFyb3VuZCBhIHBhZ2UgdmlhIFhFTk1B
UFNQQUNFX2dtZm5fcmFuZ2UsIGRlZmVycmluZyB0aGUgVExCCmZsdXNoIGZv
ciB0aGUgImZyb20iIEdGTiByYW5nZSByZXF1aXJlcyB0aGF0IHRoZSBwYWdl
IHJlbWFpbnMgYWxsb2NhdGVkCnRvIHRoZSBndWVzdCB1bnRpbCB0aGUgVExC
IGZsdXNoIGhhcyBhY3R1YWxseSBvY2N1cnJlZC4gT3RoZXJ3aXNlIGEKcGFy
YWxsZWwgaHlwZXJjYWxsIHRvIHJlbW92ZSB0aGUgcGFnZSB3b3VsZCBvbmx5
IGZsdXNoIHRoZSBUTEIgZm9yIHRoZQpHRk4gaXQgaGFzIGJlZW4gbW92ZWQg
dG8sIGJ1dCBub3QgdGhlIG9uZSBpcyB3YXMgbWFwcGVkIGF0IG9yaWdpbmFs
bHkuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJh
OWZkNWEgKCJpb21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVf
ZG9udF9mbHVzaF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIu
Li4gIikKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+CgotLS0gYS94ZW4vYXJjaC9hcm0vbW0uYworKysgYi94ZW4vYXJj
aC9hcm0vbW0uYwpAQCAtMTIyMiw3ICsxMjIyLDcgQEAgdm9pZCBzaGFyZV94
ZW5fcGFnZV93aXRoX2d1ZXN0KHN0cnVjdCBwYQogaW50IHhlbm1lbV9hZGRf
dG9fcGh5c21hcF9vbmUoCiAgICAgc3RydWN0IGRvbWFpbiAqZCwKICAgICB1
bnNpZ25lZCBpbnQgc3BhY2UsCi0gICAgdW5pb24geGVuX2FkZF90b19waHlz
bWFwX2JhdGNoX2V4dHJhIGV4dHJhLAorICAgIHVuaW9uIGFkZF90b19waHlz
bWFwX2V4dHJhIGV4dHJhLAogICAgIHVuc2lnbmVkIGxvbmcgaWR4LAogICAg
IGdmbl90IGdmbikKIHsKQEAgLTEyOTQsMTAgKzEyOTQsNiBAQCBpbnQgeGVu
bWVtX2FkZF90b19waHlzbWFwX29uZSgKICAgICAgICAgYnJlYWs7CiAgICAg
fQogICAgIGNhc2UgWEVOTUFQU1BBQ0VfZGV2X21taW86Ci0gICAgICAgIC8q
IGV4dHJhIHNob3VsZCBiZSAwLiBSZXNlcnZlZCBmb3IgZnV0dXJlIHVzZS4g
Ki8KLSAgICAgICAgaWYgKCBleHRyYS5yZXMwICkKLSAgICAgICAgICAgIHJl
dHVybiAtRU9QTk9UU1VQUDsKLQogICAgICAgICByYyA9IG1hcF9kZXZfbW1p
b19yZWdpb24oZCwgZ2ZuLCAxLCBfbWZuKGlkeCkpOwogICAgICAgICByZXR1
cm4gcmM7CiAKLS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tLmMKQEAgLTQ2MzQsNyArNDYzNCw3IEBAIHN0YXRpYyBpbnQg
aGFuZGxlX2lvbWVtX3JhbmdlKHVuc2lnbmVkIGwKIGludCB4ZW5tZW1fYWRk
X3RvX3BoeXNtYXBfb25lKAogICAgIHN0cnVjdCBkb21haW4gKmQsCiAgICAg
dW5zaWduZWQgaW50IHNwYWNlLAotICAgIHVuaW9uIHhlbl9hZGRfdG9fcGh5
c21hcF9iYXRjaF9leHRyYSBleHRyYSwKKyAgICB1bmlvbiBhZGRfdG9fcGh5
c21hcF9leHRyYSBleHRyYSwKICAgICB1bnNpZ25lZCBsb25nIGlkeCwKICAg
ICBnZm5fdCBncGZuKQogewpAQCAtNDcyMSw5ICs0NzIxLDIwIEBAIGludCB4
ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKAogICAgICAgICByYyA9IGd1ZXN0
X3BoeXNtYXBfYWRkX3BhZ2UoZCwgZ3BmbiwgbWZuLCBQQUdFX09SREVSXzRL
KTsKIAogIHB1dF9ib3RoOgotICAgIC8qIEluIHRoZSBYRU5NQVBTUEFDRV9n
bWZuIGNhc2UsIHdlIHRvb2sgYSByZWYgb2YgdGhlIGdmbiBhdCB0aGUgdG9w
LiAqLworICAgIC8qCisgICAgICogSW4gdGhlIFhFTk1BUFNQQUNFX2dtZm4g
Y2FzZSwgd2UgdG9vayBhIHJlZiBvZiB0aGUgZ2ZuIGF0IHRoZSB0b3AuCisg
ICAgICogV2UgYWxzbyBtYXkgbmVlZCB0byB0cmFuc2ZlciBvd25lcnNoaXAg
b2YgdGhlIHBhZ2UgcmVmZXJlbmNlIHRvIG91cgorICAgICAqIGNhbGxlci4K
KyAgICAgKi8KICAgICBpZiAoIHNwYWNlID09IFhFTk1BUFNQQUNFX2dtZm4g
KQorICAgIHsKICAgICAgICAgcHV0X2dmbihkLCBnZm4pOworICAgICAgICBp
ZiAoICFyYyAmJiBleHRyYS5wcGFnZSApCisgICAgICAgIHsKKyAgICAgICAg
ICAgICpleHRyYS5wcGFnZSA9IHBhZ2U7CisgICAgICAgICAgICBwYWdlID0g
TlVMTDsKKyAgICAgICAgfQorICAgIH0KIAogICAgIGlmICggcGFnZSApCiAg
ICAgICAgIHB1dF9wYWdlKHBhZ2UpOwotLS0gYS94ZW4vY29tbW9uL21lbW9y
eS5jCisrKyBiL3hlbi9jb21tb24vbWVtb3J5LmMKQEAgLTgxMSwxMSArODEx
LDEwIEBAIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXAoc3RydWN0IGRvbWFp
bgogewogICAgIHVuc2lnbmVkIGludCBkb25lID0gMDsKICAgICBsb25nIHJj
ID0gMDsKLSAgICB1bmlvbiB4ZW5fYWRkX3RvX3BoeXNtYXBfYmF0Y2hfZXh0
cmEgZXh0cmE7CisgICAgdW5pb24gYWRkX3RvX3BoeXNtYXBfZXh0cmEgZXh0
cmEgPSB7fTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlc1sxNl07CiAK
LSAgICBpZiAoIHhhdHAtPnNwYWNlICE9IFhFTk1BUFNQQUNFX2dtZm5fZm9y
ZWlnbiApCi0gICAgICAgIGV4dHJhLnJlczAgPSAwOwotICAgIGVsc2UKKyAg
ICBpZiAoIHhhdHAtPnNwYWNlID09IFhFTk1BUFNQQUNFX2dtZm5fZm9yZWln
biApCiAgICAgICAgIGV4dHJhLmZvcmVpZ25fZG9taWQgPSBET01JRF9JTlZB
TElEOwogCiAgICAgaWYgKCB4YXRwLT5zcGFjZSAhPSBYRU5NQVBTUEFDRV9n
bWZuX3JhbmdlICkKQEAgLTgzMSw3ICs4MzAsMTAgQEAgaW50IHhlbm1lbV9h
ZGRfdG9fcGh5c21hcChzdHJ1Y3QgZG9tYWluCiAKICNpZmRlZiBDT05GSUdf
SEFTX1BBU1NUSFJPVUdICiAgICAgaWYgKCBuZWVkX2lvbW11KGQpICkKKyAg
ICB7CiAgICAgICAgIHRoaXNfY3B1KGlvbW11X2RvbnRfZmx1c2hfaW90bGIp
ID0gMTsKKyAgICAgICAgZXh0cmEucHBhZ2UgPSAmcGFnZXNbMF07CisgICAg
fQogI2VuZGlmCiAKICAgICB3aGlsZSAoIHhhdHAtPnNpemUgPiBkb25lICkK
QEAgLTg0NCw4ICs4NDYsMTIgQEAgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21h
cChzdHJ1Y3QgZG9tYWluCiAgICAgICAgIHhhdHAtPmlkeCsrOwogICAgICAg
ICB4YXRwLT5ncGZuKys7CiAKKyAgICAgICAgaWYgKCBleHRyYS5wcGFnZSAp
CisgICAgICAgICAgICArK2V4dHJhLnBwYWdlOworCiAgICAgICAgIC8qIENo
ZWNrIGZvciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaXRl
cmF0aW9uLiAqLwotICAgICAgICBpZiAoIHhhdHAtPnNpemUgPiArK2RvbmUg
JiYgaHlwZXJjYWxsX3ByZWVtcHRfY2hlY2soKSApCisgICAgICAgIGlmICgg
KCsrZG9uZSA+IEFSUkFZX1NJWkUocGFnZXMpICYmIGV4dHJhLnBwYWdlKSB8
fAorICAgICAgICAgICAgICh4YXRwLT5zaXplID4gZG9uZSAmJiBoeXBlcmNh
bGxfcHJlZW1wdF9jaGVjaygpKSApCiAgICAgICAgIHsKICAgICAgICAgICAg
IHJjID0gc3RhcnQgKyBkb25lOwogICAgICAgICAgICAgYnJlYWs7CkBAIC04
NTYsNiArODYyLDcgQEAgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1
Y3QgZG9tYWluCiAgICAgaWYgKCBuZWVkX2lvbW11KGQpICkKICAgICB7CiAg
ICAgICAgIGludCByZXQ7CisgICAgICAgIHVuc2lnbmVkIGludCBpOwogCiAg
ICAgICAgIHRoaXNfY3B1KGlvbW11X2RvbnRfZmx1c2hfaW90bGIpID0gMDsK
IApAQCAtODYzLDYgKzg3MCwxNSBAQCBpbnQgeGVubWVtX2FkZF90b19waHlz
bWFwKHN0cnVjdCBkb21haW4KICAgICAgICAgaWYgKCB1bmxpa2VseShyZXQp
ICYmIHJjID49IDAgKQogICAgICAgICAgICAgcmMgPSByZXQ7CiAKKyAgICAg
ICAgLyoKKyAgICAgICAgICogTm93IHRoYXQgdGhlIElPTU1VIFRMQiBmbHVz
aCB3YXMgZG9uZSBmb3IgdGhlIG9yaWdpbmFsIEdGTiwgZHJvcAorICAgICAg
ICAgKiB0aGUgcGFnZSByZWZlcmVuY2VzLiBUaGUgMm5kIGZsdXNoIGJlbG93
IGlzIGZpbmUgdG8gbWFrZSBsYXRlciwgYXMKKyAgICAgICAgICogd2hvZXZl
ciByZW1vdmVzIHRoZSBwYWdlIGFnYWluIGZyb20gaXRzIG5ldyBHRk4gd2ls
bCBoYXZlIHRvIGRvCisgICAgICAgICAqIGFub3RoZXIgZmx1c2ggYW55d2F5
LgorICAgICAgICAgKi8KKyAgICAgICAgZm9yICggaSA9IDA7IGkgPCBkb25l
OyArK2kgKQorICAgICAgICAgICAgcHV0X3BhZ2UocGFnZXNbaV0pOworCiAg
ICAgICAgIHJldCA9IGlvbW11X2lvdGxiX2ZsdXNoKGQsIHhhdHAtPmdwZm4g
LSBkb25lLCBkb25lKTsKICAgICAgICAgaWYgKCB1bmxpa2VseShyZXQpICYm
IHJjID49IDAgKQogICAgICAgICAgICAgcmMgPSByZXQ7CkBAIC04NzYsNiAr
ODkyLDggQEAgc3RhdGljIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXBfYmF0
Y2gocwogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
c3RydWN0IHhlbl9hZGRfdG9fcGh5c21hcF9iYXRjaCAqeGF0cGIsCiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBp
bnQgZXh0ZW50KQogeworICAgIHVuaW9uIGFkZF90b19waHlzbWFwX2V4dHJh
IGV4dHJhID0ge307CisKICAgICBpZiAoIHhhdHBiLT5zaXplIDwgZXh0ZW50
ICkKICAgICAgICAgcmV0dXJuIC1FSUxTRVE7CiAKQEAgLTg4NCw2ICs5MDIs
MTkgQEAgc3RhdGljIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXBfYmF0Y2go
cwogICAgICAgICAgIWd1ZXN0X2hhbmRsZV9zdWJyYW5nZV9va2F5KHhhdHBi
LT5lcnJzLCBleHRlbnQsIHhhdHBiLT5zaXplIC0gMSkgKQogICAgICAgICBy
ZXR1cm4gLUVGQVVMVDsKIAorICAgIHN3aXRjaCAoIHhhdHBiLT5zcGFjZSAp
CisgICAgeworICAgIGNhc2UgWEVOTUFQU1BBQ0VfZGV2X21taW86CisgICAg
ICAgIC8qIHJlczAgaXMgcmVzZXJ2ZWQgZm9yIGZ1dHVyZSB1c2UuICovCisg
ICAgICAgIGlmICggeGF0cGItPnUucmVzMCApCisgICAgICAgICAgICByZXR1
cm4gLUVPUE5PVFNVUFA7CisgICAgICAgIGJyZWFrOworCisgICAgY2FzZSBY
RU5NQVBTUEFDRV9nbWZuX2ZvcmVpZ246CisgICAgICAgIGV4dHJhLmZvcmVp
Z25fZG9taWQgPSB4YXRwYi0+dS5mb3JlaWduX2RvbWlkOworICAgICAgICBi
cmVhazsKKyAgICB9CisKICAgICB3aGlsZSAoIHhhdHBiLT5zaXplID4gZXh0
ZW50ICkKICAgICB7CiAgICAgICAgIHhlbl91bG9uZ190IGlkeDsKQEAgLTg5
Niw4ICs5MjcsNyBAQCBzdGF0aWMgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21h
cF9iYXRjaChzCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgIGV4dGVudCwgMSkpICkKICAgICAgICAgICAgIHJldHVy
biAtRUZBVUxUOwogCi0gICAgICAgIHJjID0geGVubWVtX2FkZF90b19waHlz
bWFwX29uZShkLCB4YXRwYi0+c3BhY2UsCi0gICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICB4YXRwYi0+dSwKKyAgICAgICAgcmMgPSB4
ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKGQsIHhhdHBiLT5zcGFjZSwgZXh0
cmEsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBp
ZHgsIF9nZm4oZ3BmbikpOwogCiAgICAgICAgIGlmICggdW5saWtlbHkoX19j
b3B5X3RvX2d1ZXN0X29mZnNldCh4YXRwYi0+ZXJycywgZXh0ZW50LCAmcmMs
IDEpKSApCi0tLSBhL3hlbi9pbmNsdWRlL3hlbi9tbS5oCisrKyBiL3hlbi9p
bmNsdWRlL3hlbi9tbS5oCkBAIC01NzcsOCArNTc3LDIyIEBAIHZvaWQgc2Ny
dWJfb25lX3BhZ2Uoc3RydWN0IHBhZ2VfaW5mbyAqKTsKICAgICAgICAgICAg
ICAgICAgICAgICAmKGQpLT54ZW5wYWdlX2xpc3QgOiAmKGQpLT5wYWdlX2xp
c3QpCiAjZW5kaWYKIAordW5pb24gYWRkX3RvX3BoeXNtYXBfZXh0cmEgewor
ICAgIC8qCisgICAgICogWEVOTUFQU1BBQ0VfZ21mbjogV2hlbiBkZWZlcnJp
bmcgVExCIGZsdXNoZXMsIGEgcGFnZSByZWZlcmVuY2UgbmVlZHMKKyAgICAg
KiB0byBiZSBrZXB0IHVudGlsIGFmdGVyIHRoZSBmbHVzaCwgc28gdGhlIHBh
Z2UgY2FuJ3QgZ2V0IHJlbW92ZWQgZnJvbQorICAgICAqIHRoZSBkb21haW4g
KGFuZCByZS11c2VkIGZvciBhbm90aGVyIHB1cnBvc2UpIGJlZm9yZWhhbmQu
IEJ5IHBhc3NpbmcKKyAgICAgKiBub24tTlVMTCwgdGhlIGNhbGxlciBvZiB4
ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKCkgaW5kaWNhdGVzIGl0IHdhbnRz
CisgICAgICogdG8gaGF2ZSBvd25lcnNoaXAgb2Ygc3VjaCBhIHJlZmVyZW5j
ZSB0cmFuc2ZlcnJlZCBpbiB0aGUgc3VjY2VzcyBjYXNlLgorICAgICAqLwor
ICAgIHN0cnVjdCBwYWdlX2luZm8gKipwcGFnZTsKKworICAgIC8qIFhFTk1B
UFNQQUNFX2dtZm5fZm9yZWlnbiAqLworICAgIGRvbWlkX3QgZm9yZWlnbl9k
b21pZDsKK307CisKIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKHN0
cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGludCBzcGFjZSwKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHVuaW9uIHhlbl9hZGRfdG9fcGh5c21h
cF9iYXRjaF9leHRyYSBleHRyYSwKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHVuaW9uIGFkZF90b19waHlzbWFwX2V4dHJhIGV4dHJhLAogICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgbG9uZyBpZHgs
IGdmbl90IGdmbik7CiAKIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXAoc3Ry
dWN0IGRvbWFpbiAqZCwgc3RydWN0IHhlbl9hZGRfdG9fcGh5c21hcCAqeGF0
cCwK

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-4.12-1.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogc3VwcHJlc3MgImlvbW11X2RvbnRfZmx1c2hfaW90bGIiIHdo
ZW4gYWJvdXQgdG8gZnJlZSBhIHBhZ2UKCkRlZmVycmluZyBmbHVzaGVzIHRv
IGEgc2luZ2xlLCB3aWRlIHJhbmdlIG9uZSAtIGFzIGlzIGRvbmUgd2hlbgpo
YW5kbGluZyBYRU5NQVBTUEFDRV9nbWZuX3JhbmdlIC0gaXMgb2theSBvbmx5
IGFzIGxvbmcgYXMKcGFnZXMgZG9uJ3QgZ2V0IGZyZWVkIGFoZWFkIG9mIHRo
ZSBldmVudHVhbCBmbHVzaC4gV2hpbGUgdGhlIG9ubHkKZnVuY3Rpb24gc2V0
dGluZyB0aGUgZmxhZyAoeGVubWVtX2FkZF90b19waHlzbWFwKCkpIHN1Z2dl
c3RzIGJ5IGl0cyBuYW1lCnRoYXQgaXQncyBvbmx5IG1hcHBpbmcgbmV3IGVu
dHJpZXMsIGluIHJlYWxpdHkgdGhlIHdheQp4ZW5tZW1fYWRkX3RvX3BoeXNt
YXBfb25lKCkgd29ya3MgbWVhbnMgYW4gdW5tYXAgd291bGQgaGFwcGVuIG5v
dCBvbmx5CmZvciB0aGUgcGFnZSBiZWluZyBtb3ZlZCAoYnV0IG5vdCBmcmVl
ZCkgYnV0LCBpZiB0aGUgZGVzdGluYXRpb24gR0ZOIGlzCnBvcHVsYXRlZCwg
YWxzbyBmb3IgdGhlIHBhZ2UgYmVpbmcgZGlzcGxhY2VkIGZyb20gdGhhdCBH
Rk4uIENvbGxhcHNpbmcKdGhlIHR3byBmbHVzaGVzIGZvciB0aGlzIEdGTiBp
bnRvIGp1c3Qgb25lIChlbmQgZXZlbiBtb3JlIHNvIGRlZmVycmluZwppdCB0
byBhIGJhdGNoZWQgaW52b2NhdGlvbikgaXMgbm90IGNvcnJlY3QuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJhOWZkNWEgKCJp
b21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVfZG9udF9mbHVz
aF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIuLi4gIikKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2Vk
LWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgoKLS0tIGEv
eGVuL2NvbW1vbi9tZW1vcnkuYworKysgYi94ZW4vY29tbW9uL21lbW9yeS5j
CkBAIC0zMDAsNiArMzAwLDcgQEAgaW50IGd1ZXN0X3JlbW92ZV9wYWdlKHN0
cnVjdCBkb21haW4gKmQsCiAgICAgcDJtX3R5cGVfdCBwMm10OwogI2VuZGlm
CiAgICAgbWZuX3QgbWZuOworICAgIGJvb2wgKmRvbnRfZmx1c2hfcCwgZG9u
dF9mbHVzaDsKICAgICBpbnQgcmM7CiAKICNpZmRlZiBDT05GSUdfWDg2CkBA
IC0zODYsOCArMzg3LDE4IEBAIGludCBndWVzdF9yZW1vdmVfcGFnZShzdHJ1
Y3QgZG9tYWluICpkLAogICAgICAgICByZXR1cm4gLUVOWElPOwogICAgIH0K
IAorICAgIC8qCisgICAgICogU2luY2Ugd2UncmUgbGlrZWx5IHRvIGZyZWUg
dGhlIHBhZ2UgYmVsb3csIHdlIG5lZWQgdG8gc3VzcGVuZAorICAgICAqIHhl
bm1lbV9hZGRfdG9fcGh5c21hcCgpJ3Mgc3VwcHJlc3Npbmcgb2YgSU9NTVUg
VExCIGZsdXNoZXMuCisgICAgICovCisgICAgZG9udF9mbHVzaF9wID0gJnRo
aXNfY3B1KGlvbW11X2RvbnRfZmx1c2hfaW90bGIpOworICAgIGRvbnRfZmx1
c2ggPSAqZG9udF9mbHVzaF9wOworICAgICpkb250X2ZsdXNoX3AgPSBmYWxz
ZTsKKwogICAgIHJjID0gZ3Vlc3RfcGh5c21hcF9yZW1vdmVfcGFnZShkLCBf
Z2ZuKGdtZm4pLCBtZm4sIDApOwogCisgICAgKmRvbnRfZmx1c2hfcCA9IGRv
bnRfZmx1c2g7CisKICAgICAvKgogICAgICAqIFdpdGggdGhlIGxhY2sgb2Yg
YW4gSU9NTVUgb24gc29tZSBwbGF0Zm9ybXMsIGRvbWFpbnMgd2l0aCBETUEt
Y2FwYWJsZQogICAgICAqIGRldmljZSBtdXN0IHJldHJpZXZlIHRoZSBzYW1l
IHBmbiB3aGVuIHRoZSBoeXBlcmNhbGwgcG9wdWxhdGVfcGh5c21hcAo=

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-4.12-2.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogaG9sZCBwYWdlIHJlZiB1bnRpbCBhZnRlciBkZWZlcnJlZCBU
TEIgZmx1c2gKCldoZW4gbW92aW5nIGFyb3VuZCBhIHBhZ2UgdmlhIFhFTk1B
UFNQQUNFX2dtZm5fcmFuZ2UsIGRlZmVycmluZyB0aGUgVExCCmZsdXNoIGZv
ciB0aGUgImZyb20iIEdGTiByYW5nZSByZXF1aXJlcyB0aGF0IHRoZSBwYWdl
IHJlbWFpbnMgYWxsb2NhdGVkCnRvIHRoZSBndWVzdCB1bnRpbCB0aGUgVExC
IGZsdXNoIGhhcyBhY3R1YWxseSBvY2N1cnJlZC4gT3RoZXJ3aXNlIGEKcGFy
YWxsZWwgaHlwZXJjYWxsIHRvIHJlbW92ZSB0aGUgcGFnZSB3b3VsZCBvbmx5
IGZsdXNoIHRoZSBUTEIgZm9yIHRoZQpHRk4gaXQgaGFzIGJlZW4gbW92ZWQg
dG8sIGJ1dCBub3QgdGhlIG9uZSBpcyB3YXMgbWFwcGVkIGF0IG9yaWdpbmFs
bHkuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJh
OWZkNWEgKCJpb21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVf
ZG9udF9mbHVzaF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIu
Li4gIikKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+CgotLS0gYS94ZW4vYXJjaC9hcm0vbW0uYworKysgYi94ZW4vYXJj
aC9hcm0vbW0uYwpAQCAtMTIxMSw3ICsxMjExLDcgQEAgdm9pZCBzaGFyZV94
ZW5fcGFnZV93aXRoX2d1ZXN0KHN0cnVjdCBwYQogaW50IHhlbm1lbV9hZGRf
dG9fcGh5c21hcF9vbmUoCiAgICAgc3RydWN0IGRvbWFpbiAqZCwKICAgICB1
bnNpZ25lZCBpbnQgc3BhY2UsCi0gICAgdW5pb24geGVuX2FkZF90b19waHlz
bWFwX2JhdGNoX2V4dHJhIGV4dHJhLAorICAgIHVuaW9uIGFkZF90b19waHlz
bWFwX2V4dHJhIGV4dHJhLAogICAgIHVuc2lnbmVkIGxvbmcgaWR4LAogICAg
IGdmbl90IGdmbikKIHsKQEAgLTEyODQsMTAgKzEyODQsNiBAQCBpbnQgeGVu
bWVtX2FkZF90b19waHlzbWFwX29uZSgKICAgICAgICAgYnJlYWs7CiAgICAg
fQogICAgIGNhc2UgWEVOTUFQU1BBQ0VfZGV2X21taW86Ci0gICAgICAgIC8q
IGV4dHJhIHNob3VsZCBiZSAwLiBSZXNlcnZlZCBmb3IgZnV0dXJlIHVzZS4g
Ki8KLSAgICAgICAgaWYgKCBleHRyYS5yZXMwICkKLSAgICAgICAgICAgIHJl
dHVybiAtRU9QTk9UU1VQUDsKLQogICAgICAgICByYyA9IG1hcF9kZXZfbW1p
b19yZWdpb24oZCwgZ2ZuLCAxLCBfbWZuKGlkeCkpOwogICAgICAgICByZXR1
cm4gcmM7CiAKLS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tLmMKQEAgLTQ2NTMsNyArNDY1Myw3IEBAIHN0YXRpYyBpbnQg
aGFuZGxlX2lvbWVtX3JhbmdlKHVuc2lnbmVkIGwKIGludCB4ZW5tZW1fYWRk
X3RvX3BoeXNtYXBfb25lKAogICAgIHN0cnVjdCBkb21haW4gKmQsCiAgICAg
dW5zaWduZWQgaW50IHNwYWNlLAotICAgIHVuaW9uIHhlbl9hZGRfdG9fcGh5
c21hcF9iYXRjaF9leHRyYSBleHRyYSwKKyAgICB1bmlvbiBhZGRfdG9fcGh5
c21hcF9leHRyYSBleHRyYSwKICAgICB1bnNpZ25lZCBsb25nIGlkeCwKICAg
ICBnZm5fdCBncGZuKQogewpAQCAtNDc0MCw5ICs0NzQwLDIwIEBAIGludCB4
ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKAogICAgICAgICByYyA9IGd1ZXN0
X3BoeXNtYXBfYWRkX3BhZ2UoZCwgZ3BmbiwgbWZuLCBQQUdFX09SREVSXzRL
KTsKIAogIHB1dF9ib3RoOgotICAgIC8qIEluIHRoZSBYRU5NQVBTUEFDRV9n
bWZuIGNhc2UsIHdlIHRvb2sgYSByZWYgb2YgdGhlIGdmbiBhdCB0aGUgdG9w
LiAqLworICAgIC8qCisgICAgICogSW4gdGhlIFhFTk1BUFNQQUNFX2dtZm4g
Y2FzZSwgd2UgdG9vayBhIHJlZiBvZiB0aGUgZ2ZuIGF0IHRoZSB0b3AuCisg
ICAgICogV2UgYWxzbyBtYXkgbmVlZCB0byB0cmFuc2ZlciBvd25lcnNoaXAg
b2YgdGhlIHBhZ2UgcmVmZXJlbmNlIHRvIG91cgorICAgICAqIGNhbGxlci4K
KyAgICAgKi8KICAgICBpZiAoIHNwYWNlID09IFhFTk1BUFNQQUNFX2dtZm4g
KQorICAgIHsKICAgICAgICAgcHV0X2dmbihkLCBnZm4pOworICAgICAgICBp
ZiAoICFyYyAmJiBleHRyYS5wcGFnZSApCisgICAgICAgIHsKKyAgICAgICAg
ICAgICpleHRyYS5wcGFnZSA9IHBhZ2U7CisgICAgICAgICAgICBwYWdlID0g
TlVMTDsKKyAgICAgICAgfQorICAgIH0KIAogICAgIGlmICggcGFnZSApCiAg
ICAgICAgIHB1dF9wYWdlKHBhZ2UpOwotLS0gYS94ZW4vY29tbW9uL21lbW9y
eS5jCisrKyBiL3hlbi9jb21tb24vbWVtb3J5LmMKQEAgLTgyNCwxMSArODI0
LDEwIEBAIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXAoc3RydWN0IGRvbWFp
bgogewogICAgIHVuc2lnbmVkIGludCBkb25lID0gMDsKICAgICBsb25nIHJj
ID0gMDsKLSAgICB1bmlvbiB4ZW5fYWRkX3RvX3BoeXNtYXBfYmF0Y2hfZXh0
cmEgZXh0cmE7CisgICAgdW5pb24gYWRkX3RvX3BoeXNtYXBfZXh0cmEgZXh0
cmEgPSB7fTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlc1sxNl07CiAK
LSAgICBpZiAoIHhhdHAtPnNwYWNlICE9IFhFTk1BUFNQQUNFX2dtZm5fZm9y
ZWlnbiApCi0gICAgICAgIGV4dHJhLnJlczAgPSAwOwotICAgIGVsc2UKKyAg
ICBpZiAoIHhhdHAtPnNwYWNlID09IFhFTk1BUFNQQUNFX2dtZm5fZm9yZWln
biApCiAgICAgICAgIGV4dHJhLmZvcmVpZ25fZG9taWQgPSBET01JRF9JTlZB
TElEOwogCiAgICAgaWYgKCB4YXRwLT5zcGFjZSAhPSBYRU5NQVBTUEFDRV9n
bWZuX3JhbmdlICkKQEAgLTg0Myw3ICs4NDIsMTAgQEAgaW50IHhlbm1lbV9h
ZGRfdG9fcGh5c21hcChzdHJ1Y3QgZG9tYWluCiAgICAgeGF0cC0+c2l6ZSAt
PSBzdGFydDsKIAogICAgIGlmICggaGFzX2lvbW11X3B0KGQpICkKKyAgICB7
CiAgICAgICAgdGhpc19jcHUoaW9tbXVfZG9udF9mbHVzaF9pb3RsYikgPSAx
OworICAgICAgIGV4dHJhLnBwYWdlID0gJnBhZ2VzWzBdOworICAgIH0KIAog
ICAgIHdoaWxlICggeGF0cC0+c2l6ZSA+IGRvbmUgKQogICAgIHsKQEAgLTg1
NSw4ICs4NTcsMTIgQEAgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1
Y3QgZG9tYWluCiAgICAgICAgIHhhdHAtPmlkeCsrOwogICAgICAgICB4YXRw
LT5ncGZuKys7CiAKKyAgICAgICAgaWYgKCBleHRyYS5wcGFnZSApCisgICAg
ICAgICAgICArK2V4dHJhLnBwYWdlOworCiAgICAgICAgIC8qIENoZWNrIGZv
ciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaXRlcmF0aW9u
LiAqLwotICAgICAgICBpZiAoIHhhdHAtPnNpemUgPiArK2RvbmUgJiYgaHlw
ZXJjYWxsX3ByZWVtcHRfY2hlY2soKSApCisgICAgICAgIGlmICggKCsrZG9u
ZSA+IEFSUkFZX1NJWkUocGFnZXMpICYmIGV4dHJhLnBwYWdlKSB8fAorICAg
ICAgICAgICAgICh4YXRwLT5zaXplID4gZG9uZSAmJiBoeXBlcmNhbGxfcHJl
ZW1wdF9jaGVjaygpKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIHJjID0g
c3RhcnQgKyBkb25lOwogICAgICAgICAgICAgYnJlYWs7CkBAIC04NjYsNiAr
ODcyLDcgQEAgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1Y3QgZG9t
YWluCiAgICAgaWYgKCBoYXNfaW9tbXVfcHQoZCkgKQogICAgIHsKICAgICAg
ICAgaW50IHJldDsKKyAgICAgICAgdW5zaWduZWQgaW50IGk7CiAKICAgICAg
ICAgdGhpc19jcHUoaW9tbXVfZG9udF9mbHVzaF9pb3RsYikgPSAwOwogCkBA
IC04NzQsNiArODgxLDE1IEBAIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXAo
c3RydWN0IGRvbWFpbgogICAgICAgICBpZiAoIHVubGlrZWx5KHJldCkgJiYg
cmMgPj0gMCApCiAgICAgICAgICAgICByYyA9IHJldDsKIAorICAgICAgICAv
KgorICAgICAgICAgKiBOb3cgdGhhdCB0aGUgSU9NTVUgVExCIGZsdXNoIHdh
cyBkb25lIGZvciB0aGUgb3JpZ2luYWwgR0ZOLCBkcm9wCisgICAgICAgICAq
IHRoZSBwYWdlIHJlZmVyZW5jZXMuIFRoZSAybmQgZmx1c2ggYmVsb3cgaXMg
ZmluZSB0byBtYWtlIGxhdGVyLCBhcworICAgICAgICAgKiB3aG9ldmVyIHJl
bW92ZXMgdGhlIHBhZ2UgYWdhaW4gZnJvbSBpdHMgbmV3IEdGTiB3aWxsIGhh
dmUgdG8gZG8KKyAgICAgICAgICogYW5vdGhlciBmbHVzaCBhbnl3YXkuCisg
ICAgICAgICAqLworICAgICAgICBmb3IgKCBpID0gMDsgaSA8IGRvbmU7ICsr
aSApCisgICAgICAgICAgICBwdXRfcGFnZShwYWdlc1tpXSk7CisKICAgICAg
ICAgcmV0ID0gaW9tbXVfaW90bGJfZmx1c2goZCwgX2Rmbih4YXRwLT5ncGZu
IC0gZG9uZSksIGRvbmUsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIElPTU1VX0ZMVVNIRl9hZGRlZCB8IElPTU1VX0ZMVVNIRl9tb2RpZmll
ZCk7CiAgICAgICAgIGlmICggdW5saWtlbHkocmV0KSAmJiByYyA+PSAwICkK
QEAgLTg4Nyw2ICs5MDMsOCBAQCBzdGF0aWMgaW50IHhlbm1lbV9hZGRfdG9f
cGh5c21hcF9iYXRjaChzCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBzdHJ1Y3QgeGVuX2FkZF90b19waHlzbWFwX2JhdGNoICp4
YXRwYiwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
IHVuc2lnbmVkIGludCBleHRlbnQpCiB7CisgICAgdW5pb24gYWRkX3RvX3Bo
eXNtYXBfZXh0cmEgZXh0cmEgPSB7fTsKKwogICAgIGlmICggeGF0cGItPnNp
emUgPCBleHRlbnQgKQogICAgICAgICByZXR1cm4gLUVJTFNFUTsKIApAQCAt
ODk1LDYgKzkxMywxOSBAQCBzdGF0aWMgaW50IHhlbm1lbV9hZGRfdG9fcGh5
c21hcF9iYXRjaChzCiAgICAgICAgICAhZ3Vlc3RfaGFuZGxlX3N1YnJhbmdl
X29rYXkoeGF0cGItPmVycnMsIGV4dGVudCwgeGF0cGItPnNpemUgLSAxKSAp
CiAgICAgICAgIHJldHVybiAtRUZBVUxUOwogCisgICAgc3dpdGNoICggeGF0
cGItPnNwYWNlICkKKyAgICB7CisgICAgY2FzZSBYRU5NQVBTUEFDRV9kZXZf
bW1pbzoKKyAgICAgICAgLyogcmVzMCBpcyByZXNlcnZlZCBmb3IgZnV0dXJl
IHVzZS4gKi8KKyAgICAgICAgaWYgKCB4YXRwYi0+dS5yZXMwICkKKyAgICAg
ICAgICAgIHJldHVybiAtRU9QTk9UU1VQUDsKKyAgICAgICAgYnJlYWs7CisK
KyAgICBjYXNlIFhFTk1BUFNQQUNFX2dtZm5fZm9yZWlnbjoKKyAgICAgICAg
ZXh0cmEuZm9yZWlnbl9kb21pZCA9IHhhdHBiLT51LmZvcmVpZ25fZG9taWQ7
CisgICAgICAgIGJyZWFrOworICAgIH0KKwogICAgIHdoaWxlICggeGF0cGIt
PnNpemUgPiBleHRlbnQgKQogICAgIHsKICAgICAgICAgeGVuX3Vsb25nX3Qg
aWR4OwpAQCAtOTA3LDggKzkzOCw3IEBAIHN0YXRpYyBpbnQgeGVubWVtX2Fk
ZF90b19waHlzbWFwX2JhdGNoKHMKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgZXh0ZW50LCAxKSkgKQogICAgICAg
ICAgICAgcmV0dXJuIC1FRkFVTFQ7CiAKLSAgICAgICAgcmMgPSB4ZW5tZW1f
YWRkX3RvX3BoeXNtYXBfb25lKGQsIHhhdHBiLT5zcGFjZSwKLSAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHhhdHBiLT51LAorICAg
ICAgICByYyA9IHhlbm1lbV9hZGRfdG9fcGh5c21hcF9vbmUoZCwgeGF0cGIt
PnNwYWNlLCBleHRyYSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGlkeCwgX2dmbihncGZuKSk7CiAKICAgICAgICAgaWYgKCB1
bmxpa2VseShfX2NvcHlfdG9fZ3Vlc3Rfb2Zmc2V0KHhhdHBiLT5lcnJzLCBl
eHRlbnQsICZyYywgMSkpICkKLS0tIGEveGVuL2luY2x1ZGUveGVuL21tLmgK
KysrIGIveGVuL2luY2x1ZGUveGVuL21tLmgKQEAgLTU4Myw4ICs1ODMsMjIg
QEAgdm9pZCBzY3J1Yl9vbmVfcGFnZShzdHJ1Y3QgcGFnZV9pbmZvICopOwog
ICAgICAgICAgICAgICAgICAgICAgICYoZCktPnhlbnBhZ2VfbGlzdCA6ICYo
ZCktPnBhZ2VfbGlzdCkKICNlbmRpZgogCit1bmlvbiBhZGRfdG9fcGh5c21h
cF9leHRyYSB7CisgICAgLyoKKyAgICAgKiBYRU5NQVBTUEFDRV9nbWZuOiBX
aGVuIGRlZmVycmluZyBUTEIgZmx1c2hlcywgYSBwYWdlIHJlZmVyZW5jZSBu
ZWVkcworICAgICAqIHRvIGJlIGtlcHQgdW50aWwgYWZ0ZXIgdGhlIGZsdXNo
LCBzbyB0aGUgcGFnZSBjYW4ndCBnZXQgcmVtb3ZlZCBmcm9tCisgICAgICog
dGhlIGRvbWFpbiAoYW5kIHJlLXVzZWQgZm9yIGFub3RoZXIgcHVycG9zZSkg
YmVmb3JlaGFuZC4gQnkgcGFzc2luZworICAgICAqIG5vbi1OVUxMLCB0aGUg
Y2FsbGVyIG9mIHhlbm1lbV9hZGRfdG9fcGh5c21hcF9vbmUoKSBpbmRpY2F0
ZXMgaXQgd2FudHMKKyAgICAgKiB0byBoYXZlIG93bmVyc2hpcCBvZiBzdWNo
IGEgcmVmZXJlbmNlIHRyYW5zZmVycmVkIGluIHRoZSBzdWNjZXNzIGNhc2Uu
CisgICAgICovCisgICAgc3RydWN0IHBhZ2VfaW5mbyAqKnBwYWdlOworCisg
ICAgLyogWEVOTUFQU1BBQ0VfZ21mbl9mb3JlaWduICovCisgICAgZG9taWRf
dCBmb3JlaWduX2RvbWlkOworfTsKKwogaW50IHhlbm1lbV9hZGRfdG9fcGh5
c21hcF9vbmUoc3RydWN0IGRvbWFpbiAqZCwgdW5zaWduZWQgaW50IHNwYWNl
LAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5pb24geGVuX2Fk
ZF90b19waHlzbWFwX2JhdGNoX2V4dHJhIGV4dHJhLAorICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgdW5pb24gYWRkX3RvX3BoeXNtYXBfZXh0cmEg
ZXh0cmEsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25l
ZCBsb25nIGlkeCwgZ2ZuX3QgZ2ZuKTsKIAogaW50IHhlbm1lbV9hZGRfdG9f
cGh5c21hcChzdHJ1Y3QgZG9tYWluICpkLCBzdHJ1Y3QgeGVuX2FkZF90b19w
aHlzbWFwICp4YXRwLAo=

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-4.13-1.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogc3VwcHJlc3MgImlvbW11X2RvbnRfZmx1c2hfaW90bGIiIHdo
ZW4gYWJvdXQgdG8gZnJlZSBhIHBhZ2UKCkRlZmVycmluZyBmbHVzaGVzIHRv
IGEgc2luZ2xlLCB3aWRlIHJhbmdlIG9uZSAtIGFzIGlzIGRvbmUgd2hlbgpo
YW5kbGluZyBYRU5NQVBTUEFDRV9nbWZuX3JhbmdlIC0gaXMgb2theSBvbmx5
IGFzIGxvbmcgYXMKcGFnZXMgZG9uJ3QgZ2V0IGZyZWVkIGFoZWFkIG9mIHRo
ZSBldmVudHVhbCBmbHVzaC4gV2hpbGUgdGhlIG9ubHkKZnVuY3Rpb24gc2V0
dGluZyB0aGUgZmxhZyAoeGVubWVtX2FkZF90b19waHlzbWFwKCkpIHN1Z2dl
c3RzIGJ5IGl0cyBuYW1lCnRoYXQgaXQncyBvbmx5IG1hcHBpbmcgbmV3IGVu
dHJpZXMsIGluIHJlYWxpdHkgdGhlIHdheQp4ZW5tZW1fYWRkX3RvX3BoeXNt
YXBfb25lKCkgd29ya3MgbWVhbnMgYW4gdW5tYXAgd291bGQgaGFwcGVuIG5v
dCBvbmx5CmZvciB0aGUgcGFnZSBiZWluZyBtb3ZlZCAoYnV0IG5vdCBmcmVl
ZCkgYnV0LCBpZiB0aGUgZGVzdGluYXRpb24gR0ZOIGlzCnBvcHVsYXRlZCwg
YWxzbyBmb3IgdGhlIHBhZ2UgYmVpbmcgZGlzcGxhY2VkIGZyb20gdGhhdCBH
Rk4uIENvbGxhcHNpbmcKdGhlIHR3byBmbHVzaGVzIGZvciB0aGlzIEdGTiBp
bnRvIGp1c3Qgb25lIChlbmQgZXZlbiBtb3JlIHNvIGRlZmVycmluZwppdCB0
byBhIGJhdGNoZWQgaW52b2NhdGlvbikgaXMgbm90IGNvcnJlY3QuCgpUaGlz
IGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJhOWZkNWEgKCJp
b21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVfZG9udF9mbHVz
aF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIuLi4gIikKU2ln
bmVkLW9mZi1ieTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpS
ZXZpZXdlZC1ieTogUGF1bCBEdXJyYW50IDxwYXVsQHhlbi5vcmc+CkFja2Vk
LWJ5OiBKdWxpZW4gR3JhbGwgPGpncmFsbEBhbWF6b24uY29tPgoKLS0tIGEv
eGVuL2NvbW1vbi9tZW1vcnkuYworKysgYi94ZW4vY29tbW9uL21lbW9yeS5j
CkBAIC0yOTIsNiArMjkyLDcgQEAgaW50IGd1ZXN0X3JlbW92ZV9wYWdlKHN0
cnVjdCBkb21haW4gKmQsCiAgICAgcDJtX3R5cGVfdCBwMm10OwogI2VuZGlm
CiAgICAgbWZuX3QgbWZuOworICAgIGJvb2wgKmRvbnRfZmx1c2hfcCwgZG9u
dF9mbHVzaDsKICAgICBpbnQgcmM7CiAKICNpZmRlZiBDT05GSUdfWDg2CkBA
IC0zNzgsOCArMzc5LDE4IEBAIGludCBndWVzdF9yZW1vdmVfcGFnZShzdHJ1
Y3QgZG9tYWluICpkLAogICAgICAgICByZXR1cm4gLUVOWElPOwogICAgIH0K
IAorICAgIC8qCisgICAgICogU2luY2Ugd2UncmUgbGlrZWx5IHRvIGZyZWUg
dGhlIHBhZ2UgYmVsb3csIHdlIG5lZWQgdG8gc3VzcGVuZAorICAgICAqIHhl
bm1lbV9hZGRfdG9fcGh5c21hcCgpJ3Mgc3VwcHJlc3Npbmcgb2YgSU9NTVUg
VExCIGZsdXNoZXMuCisgICAgICovCisgICAgZG9udF9mbHVzaF9wID0gJnRo
aXNfY3B1KGlvbW11X2RvbnRfZmx1c2hfaW90bGIpOworICAgIGRvbnRfZmx1
c2ggPSAqZG9udF9mbHVzaF9wOworICAgICpkb250X2ZsdXNoX3AgPSBmYWxz
ZTsKKwogICAgIHJjID0gZ3Vlc3RfcGh5c21hcF9yZW1vdmVfcGFnZShkLCBf
Z2ZuKGdtZm4pLCBtZm4sIDApOwogCisgICAgKmRvbnRfZmx1c2hfcCA9IGRv
bnRfZmx1c2g7CisKICAgICAvKgogICAgICAqIFdpdGggdGhlIGxhY2sgb2Yg
YW4gSU9NTVUgb24gc29tZSBwbGF0Zm9ybXMsIGRvbWFpbnMgd2l0aCBETUEt
Y2FwYWJsZQogICAgICAqIGRldmljZSBtdXN0IHJldHJpZXZlIHRoZSBzYW1l
IHBmbiB3aGVuIHRoZSBoeXBlcmNhbGwgcG9wdWxhdGVfcGh5c21hcAo=

--=separator
Content-Type: application/octet-stream; name="xsa346/xsa346-4.13-2.patch"
Content-Disposition: attachment; filename="xsa346/xsa346-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBJT01NVTogaG9sZCBwYWdlIHJlZiB1bnRpbCBhZnRlciBkZWZlcnJlZCBU
TEIgZmx1c2gKCldoZW4gbW92aW5nIGFyb3VuZCBhIHBhZ2UgdmlhIFhFTk1B
UFNQQUNFX2dtZm5fcmFuZ2UsIGRlZmVycmluZyB0aGUgVExCCmZsdXNoIGZv
ciB0aGUgImZyb20iIEdGTiByYW5nZSByZXF1aXJlcyB0aGF0IHRoZSBwYWdl
IHJlbWFpbnMgYWxsb2NhdGVkCnRvIHRoZSBndWVzdCB1bnRpbCB0aGUgVExC
IGZsdXNoIGhhcyBhY3R1YWxseSBvY2N1cnJlZC4gT3RoZXJ3aXNlIGEKcGFy
YWxsZWwgaHlwZXJjYWxsIHRvIHJlbW92ZSB0aGUgcGFnZSB3b3VsZCBvbmx5
IGZsdXNoIHRoZSBUTEIgZm9yIHRoZQpHRk4gaXQgaGFzIGJlZW4gbW92ZWQg
dG8sIGJ1dCBub3QgdGhlIG9uZSBpcyB3YXMgbWFwcGVkIGF0IG9yaWdpbmFs
bHkuCgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0Ni4KCkZpeGVzOiBjZjk1YjJh
OWZkNWEgKCJpb21tdTogSW50cm9kdWNlIHBlciBjcHUgZmxhZyAoaW9tbXVf
ZG9udF9mbHVzaF9pb3RsYikgdG8gYXZvaWQgdW5uZWNlc3NhcnkgaW90bGIu
Li4gIikKUmVwb3J0ZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+ClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBz
dXNlLmNvbT4KQWNrZWQtYnk6IEp1bGllbiBHcmFsbCA8amdyYWxsQGFtYXpv
bi5jb20+CgotLS0gYS94ZW4vYXJjaC9hcm0vbW0uYworKysgYi94ZW4vYXJj
aC9hcm0vbW0uYwpAQCAtMTQwNyw3ICsxNDA3LDcgQEAgdm9pZCBzaGFyZV94
ZW5fcGFnZV93aXRoX2d1ZXN0KHN0cnVjdCBwYQogaW50IHhlbm1lbV9hZGRf
dG9fcGh5c21hcF9vbmUoCiAgICAgc3RydWN0IGRvbWFpbiAqZCwKICAgICB1
bnNpZ25lZCBpbnQgc3BhY2UsCi0gICAgdW5pb24geGVuX2FkZF90b19waHlz
bWFwX2JhdGNoX2V4dHJhIGV4dHJhLAorICAgIHVuaW9uIGFkZF90b19waHlz
bWFwX2V4dHJhIGV4dHJhLAogICAgIHVuc2lnbmVkIGxvbmcgaWR4LAogICAg
IGdmbl90IGdmbikKIHsKQEAgLTE0ODAsMTAgKzE0ODAsNiBAQCBpbnQgeGVu
bWVtX2FkZF90b19waHlzbWFwX29uZSgKICAgICAgICAgYnJlYWs7CiAgICAg
fQogICAgIGNhc2UgWEVOTUFQU1BBQ0VfZGV2X21taW86Ci0gICAgICAgIC8q
IGV4dHJhIHNob3VsZCBiZSAwLiBSZXNlcnZlZCBmb3IgZnV0dXJlIHVzZS4g
Ki8KLSAgICAgICAgaWYgKCBleHRyYS5yZXMwICkKLSAgICAgICAgICAgIHJl
dHVybiAtRU9QTk9UU1VQUDsKLQogICAgICAgICByYyA9IG1hcF9kZXZfbW1p
b19yZWdpb24oZCwgZ2ZuLCAxLCBfbWZuKGlkeCkpOwogICAgICAgICByZXR1
cm4gcmM7CiAKLS0tIGEveGVuL2FyY2gveDg2L21tLmMKKysrIGIveGVuL2Fy
Y2gveDg2L21tLmMKQEAgLTQ2MTcsNyArNDYxNyw3IEBAIHN0YXRpYyBpbnQg
aGFuZGxlX2lvbWVtX3JhbmdlKHVuc2lnbmVkIGwKIGludCB4ZW5tZW1fYWRk
X3RvX3BoeXNtYXBfb25lKAogICAgIHN0cnVjdCBkb21haW4gKmQsCiAgICAg
dW5zaWduZWQgaW50IHNwYWNlLAotICAgIHVuaW9uIHhlbl9hZGRfdG9fcGh5
c21hcF9iYXRjaF9leHRyYSBleHRyYSwKKyAgICB1bmlvbiBhZGRfdG9fcGh5
c21hcF9leHRyYSBleHRyYSwKICAgICB1bnNpZ25lZCBsb25nIGlkeCwKICAg
ICBnZm5fdCBncGZuKQogewpAQCAtNDcwMSw5ICs0NzAxLDIwIEBAIGludCB4
ZW5tZW1fYWRkX3RvX3BoeXNtYXBfb25lKAogICAgICAgICByYyA9IGd1ZXN0
X3BoeXNtYXBfYWRkX3BhZ2UoZCwgZ3BmbiwgbWZuLCBQQUdFX09SREVSXzRL
KTsKIAogIHB1dF9ib3RoOgotICAgIC8qIEluIHRoZSBYRU5NQVBTUEFDRV9n
bWZuIGNhc2UsIHdlIHRvb2sgYSByZWYgb2YgdGhlIGdmbiBhdCB0aGUgdG9w
LiAqLworICAgIC8qCisgICAgICogSW4gdGhlIFhFTk1BUFNQQUNFX2dtZm4g
Y2FzZSwgd2UgdG9vayBhIHJlZiBvZiB0aGUgZ2ZuIGF0IHRoZSB0b3AuCisg
ICAgICogV2UgYWxzbyBtYXkgbmVlZCB0byB0cmFuc2ZlciBvd25lcnNoaXAg
b2YgdGhlIHBhZ2UgcmVmZXJlbmNlIHRvIG91cgorICAgICAqIGNhbGxlci4K
KyAgICAgKi8KICAgICBpZiAoIHNwYWNlID09IFhFTk1BUFNQQUNFX2dtZm4g
KQorICAgIHsKICAgICAgICAgcHV0X2dmbihkLCBnZm4pOworICAgICAgICBp
ZiAoICFyYyAmJiBleHRyYS5wcGFnZSApCisgICAgICAgIHsKKyAgICAgICAg
ICAgICpleHRyYS5wcGFnZSA9IHBhZ2U7CisgICAgICAgICAgICBwYWdlID0g
TlVMTDsKKyAgICAgICAgfQorICAgIH0KIAogICAgIGlmICggcGFnZSApCiAg
ICAgICAgIHB1dF9wYWdlKHBhZ2UpOwotLS0gYS94ZW4vY29tbW9uL21lbW9y
eS5jCisrKyBiL3hlbi9jb21tb24vbWVtb3J5LmMKQEAgLTgxNCwxMyArODE0
LDEyIEBAIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXAoc3RydWN0IGRvbWFp
bgogewogICAgIHVuc2lnbmVkIGludCBkb25lID0gMDsKICAgICBsb25nIHJj
ID0gMDsKLSAgICB1bmlvbiB4ZW5fYWRkX3RvX3BoeXNtYXBfYmF0Y2hfZXh0
cmEgZXh0cmE7CisgICAgdW5pb24gYWRkX3RvX3BoeXNtYXBfZXh0cmEgZXh0
cmEgPSB7fTsKKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwYWdlc1sxNl07CiAK
ICAgICBBU1NFUlQocGFnaW5nX21vZGVfdHJhbnNsYXRlKGQpKTsKIAotICAg
IGlmICggeGF0cC0+c3BhY2UgIT0gWEVOTUFQU1BBQ0VfZ21mbl9mb3JlaWdu
ICkKLSAgICAgICAgZXh0cmEucmVzMCA9IDA7Ci0gICAgZWxzZQorICAgIGlm
ICggeGF0cC0+c3BhY2UgPT0gWEVOTUFQU1BBQ0VfZ21mbl9mb3JlaWduICkK
ICAgICAgICAgZXh0cmEuZm9yZWlnbl9kb21pZCA9IERPTUlEX0lOVkFMSUQ7
CiAKICAgICBpZiAoIHhhdHAtPnNwYWNlICE9IFhFTk1BUFNQQUNFX2dtZm5f
cmFuZ2UgKQpAQCAtODM1LDcgKzgzNCwxMCBAQCBpbnQgeGVubWVtX2FkZF90
b19waHlzbWFwKHN0cnVjdCBkb21haW4KICAgICB4YXRwLT5zaXplIC09IHN0
YXJ0OwogCiAgICAgaWYgKCBpc19pb21tdV9lbmFibGVkKGQpICkKKyAgICB7
CiAgICAgICAgdGhpc19jcHUoaW9tbXVfZG9udF9mbHVzaF9pb3RsYikgPSAx
OworICAgICAgIGV4dHJhLnBwYWdlID0gJnBhZ2VzWzBdOworICAgIH0KIAog
ICAgIHdoaWxlICggeGF0cC0+c2l6ZSA+IGRvbmUgKQogICAgIHsKQEAgLTg0
Nyw4ICs4NDksMTIgQEAgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1
Y3QgZG9tYWluCiAgICAgICAgIHhhdHAtPmlkeCsrOwogICAgICAgICB4YXRw
LT5ncGZuKys7CiAKKyAgICAgICAgaWYgKCBleHRyYS5wcGFnZSApCisgICAg
ICAgICAgICArK2V4dHJhLnBwYWdlOworCiAgICAgICAgIC8qIENoZWNrIGZv
ciBjb250aW51YXRpb24gaWYgaXQncyBub3QgdGhlIGxhc3QgaXRlcmF0aW9u
LiAqLwotICAgICAgICBpZiAoIHhhdHAtPnNpemUgPiArK2RvbmUgJiYgaHlw
ZXJjYWxsX3ByZWVtcHRfY2hlY2soKSApCisgICAgICAgIGlmICggKCsrZG9u
ZSA+IEFSUkFZX1NJWkUocGFnZXMpICYmIGV4dHJhLnBwYWdlKSB8fAorICAg
ICAgICAgICAgICh4YXRwLT5zaXplID4gZG9uZSAmJiBoeXBlcmNhbGxfcHJl
ZW1wdF9jaGVjaygpKSApCiAgICAgICAgIHsKICAgICAgICAgICAgIHJjID0g
c3RhcnQgKyBkb25lOwogICAgICAgICAgICAgYnJlYWs7CkBAIC04NTgsNiAr
ODY0LDcgQEAgaW50IHhlbm1lbV9hZGRfdG9fcGh5c21hcChzdHJ1Y3QgZG9t
YWluCiAgICAgaWYgKCBpc19pb21tdV9lbmFibGVkKGQpICkKICAgICB7CiAg
ICAgICAgIGludCByZXQ7CisgICAgICAgIHVuc2lnbmVkIGludCBpOwogCiAg
ICAgICAgIHRoaXNfY3B1KGlvbW11X2RvbnRfZmx1c2hfaW90bGIpID0gMDsK
IApAQCAtODY2LDYgKzg3MywxNSBAQCBpbnQgeGVubWVtX2FkZF90b19waHlz
bWFwKHN0cnVjdCBkb21haW4KICAgICAgICAgaWYgKCB1bmxpa2VseShyZXQp
ICYmIHJjID49IDAgKQogICAgICAgICAgICAgcmMgPSByZXQ7CiAKKyAgICAg
ICAgLyoKKyAgICAgICAgICogTm93IHRoYXQgdGhlIElPTU1VIFRMQiBmbHVz
aCB3YXMgZG9uZSBmb3IgdGhlIG9yaWdpbmFsIEdGTiwgZHJvcAorICAgICAg
ICAgKiB0aGUgcGFnZSByZWZlcmVuY2VzLiBUaGUgMm5kIGZsdXNoIGJlbG93
IGlzIGZpbmUgdG8gbWFrZSBsYXRlciwgYXMKKyAgICAgICAgICogd2hvZXZl
ciByZW1vdmVzIHRoZSBwYWdlIGFnYWluIGZyb20gaXRzIG5ldyBHRk4gd2ls
bCBoYXZlIHRvIGRvCisgICAgICAgICAqIGFub3RoZXIgZmx1c2ggYW55d2F5
LgorICAgICAgICAgKi8KKyAgICAgICAgZm9yICggaSA9IDA7IGkgPCBkb25l
OyArK2kgKQorICAgICAgICAgICAgcHV0X3BhZ2UocGFnZXNbaV0pOworCiAg
ICAgICAgIHJldCA9IGlvbW11X2lvdGxiX2ZsdXNoKGQsIF9kZm4oeGF0cC0+
Z3BmbiAtIGRvbmUpLCBkb25lLAogICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBJT01NVV9GTFVTSEZfYWRkZWQgfCBJT01NVV9GTFVTSEZfbW9k
aWZpZWQpOwogICAgICAgICBpZiAoIHVubGlrZWx5KHJldCkgJiYgcmMgPj0g
MCApCkBAIC04NzksNiArODk1LDggQEAgc3RhdGljIGludCB4ZW5tZW1fYWRk
X3RvX3BoeXNtYXBfYmF0Y2gocwogICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgc3RydWN0IHhlbl9hZGRfdG9fcGh5c21hcF9iYXRj
aCAqeGF0cGIsCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICB1bnNpZ25lZCBpbnQgZXh0ZW50KQogeworICAgIHVuaW9uIGFkZF90
b19waHlzbWFwX2V4dHJhIGV4dHJhID0ge307CisKICAgICBpZiAoIHVubGlr
ZWx5KHhhdHBiLT5zaXplIDwgZXh0ZW50KSApCiAgICAgICAgIHJldHVybiAt
RUlMU0VROwogCkBAIC04OTAsNiArOTA4LDE5IEBAIHN0YXRpYyBpbnQgeGVu
bWVtX2FkZF90b19waHlzbWFwX2JhdGNoKHMKICAgICAgICAgICFndWVzdF9o
YW5kbGVfc3VicmFuZ2Vfb2theSh4YXRwYi0+ZXJycywgZXh0ZW50LCB4YXRw
Yi0+c2l6ZSAtIDEpICkKICAgICAgICAgcmV0dXJuIC1FRkFVTFQ7CiAKKyAg
ICBzd2l0Y2ggKCB4YXRwYi0+c3BhY2UgKQorICAgIHsKKyAgICBjYXNlIFhF
Tk1BUFNQQUNFX2Rldl9tbWlvOgorICAgICAgICAvKiByZXMwIGlzIHJlc2Vy
dmVkIGZvciBmdXR1cmUgdXNlLiAqLworICAgICAgICBpZiAoIHhhdHBiLT51
LnJlczAgKQorICAgICAgICAgICAgcmV0dXJuIC1FT1BOT1RTVVBQOworICAg
ICAgICBicmVhazsKKworICAgIGNhc2UgWEVOTUFQU1BBQ0VfZ21mbl9mb3Jl
aWduOgorICAgICAgICBleHRyYS5mb3JlaWduX2RvbWlkID0geGF0cGItPnUu
Zm9yZWlnbl9kb21pZDsKKyAgICAgICAgYnJlYWs7CisgICAgfQorCiAgICAg
d2hpbGUgKCB4YXRwYi0+c2l6ZSA+IGV4dGVudCApCiAgICAgewogICAgICAg
ICB4ZW5fdWxvbmdfdCBpZHg7CkBAIC05MDIsOCArOTMzLDcgQEAgc3RhdGlj
IGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXBfYmF0Y2gocwogICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBleHRlbnQs
IDEpKSApCiAgICAgICAgICAgICByZXR1cm4gLUVGQVVMVDsKIAotICAgICAg
ICByYyA9IHhlbm1lbV9hZGRfdG9fcGh5c21hcF9vbmUoZCwgeGF0cGItPnNw
YWNlLAotICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
eGF0cGItPnUsCisgICAgICAgIHJjID0geGVubWVtX2FkZF90b19waHlzbWFw
X29uZShkLCB4YXRwYi0+c3BhY2UsIGV4dHJhLAogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgaWR4LCBfZ2ZuKGdwZm4pKTsKIAog
ICAgICAgICBpZiAoIHVubGlrZWx5KF9fY29weV90b19ndWVzdF9vZmZzZXQo
eGF0cGItPmVycnMsIGV4dGVudCwgJnJjLCAxKSkgKQotLS0gYS94ZW4vaW5j
bHVkZS94ZW4vbW0uaAorKysgYi94ZW4vaW5jbHVkZS94ZW4vbW0uaApAQCAt
NTg4LDggKzU4OCwyMiBAQCB2b2lkIHNjcnViX29uZV9wYWdlKHN0cnVjdCBw
YWdlX2luZm8gKik7CiAgICAgICAgICAgICAgICAgICAgICAgJihkKS0+eGVu
cGFnZV9saXN0IDogJihkKS0+cGFnZV9saXN0KQogI2VuZGlmCiAKK3VuaW9u
IGFkZF90b19waHlzbWFwX2V4dHJhIHsKKyAgICAvKgorICAgICAqIFhFTk1B
UFNQQUNFX2dtZm46IFdoZW4gZGVmZXJyaW5nIFRMQiBmbHVzaGVzLCBhIHBh
Z2UgcmVmZXJlbmNlIG5lZWRzCisgICAgICogdG8gYmUga2VwdCB1bnRpbCBh
ZnRlciB0aGUgZmx1c2gsIHNvIHRoZSBwYWdlIGNhbid0IGdldCByZW1vdmVk
IGZyb20KKyAgICAgKiB0aGUgZG9tYWluIChhbmQgcmUtdXNlZCBmb3IgYW5v
dGhlciBwdXJwb3NlKSBiZWZvcmVoYW5kLiBCeSBwYXNzaW5nCisgICAgICog
bm9uLU5VTEwsIHRoZSBjYWxsZXIgb2YgeGVubWVtX2FkZF90b19waHlzbWFw
X29uZSgpIGluZGljYXRlcyBpdCB3YW50cworICAgICAqIHRvIGhhdmUgb3du
ZXJzaGlwIG9mIHN1Y2ggYSByZWZlcmVuY2UgdHJhbnNmZXJyZWQgaW4gdGhl
IHN1Y2Nlc3MgY2FzZS4KKyAgICAgKi8KKyAgICBzdHJ1Y3QgcGFnZV9pbmZv
ICoqcHBhZ2U7CisKKyAgICAvKiBYRU5NQVBTUEFDRV9nbWZuX2ZvcmVpZ24g
Ki8KKyAgICBkb21pZF90IGZvcmVpZ25fZG9taWQ7Cit9OworCiBpbnQgeGVu
bWVtX2FkZF90b19waHlzbWFwX29uZShzdHJ1Y3QgZG9tYWluICpkLCB1bnNp
Z25lZCBpbnQgc3BhY2UsCi0gICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICB1bmlvbiB4ZW5fYWRkX3RvX3BoeXNtYXBfYmF0Y2hfZXh0cmEgZXh0cmEs
CisgICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bmlvbiBhZGRfdG9f
cGh5c21hcF9leHRyYSBleHRyYSwKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgIHVuc2lnbmVkIGxvbmcgaWR4LCBnZm5fdCBnZm4pOwogCiBpbnQg
eGVubWVtX2FkZF90b19waHlzbWFwKHN0cnVjdCBkb21haW4gKmQsIHN0cnVj
dCB4ZW5fYWRkX3RvX3BoeXNtYXAgKnhhdHAsCg==

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:03:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:03:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9243.24723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqMS-0000je-5c; Tue, 20 Oct 2020 12:03:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9243.24723; Tue, 20 Oct 2020 12:03:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqMS-0000jW-1H; Tue, 20 Oct 2020 12:03:28 +0000
Received: by outflank-mailman (input) for mailman id 9243;
 Tue, 20 Oct 2020 12:03:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kUqKT-0006DX-Kt
 for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:01:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9c2b219-4455-492b-8042-7d7cd498203c;
 Tue, 20 Oct 2020 12:00:58 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJu-0001Kz-EE; Tue, 20 Oct 2020 12:00:50 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
 (envelope-from <iwj@xenbits.xen.org>)
 id 1kUqJu-00024u-DS; Tue, 20 Oct 2020 12:00:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8aR8=D3=xenbits.xen.org=iwj@srs-us1.protection.inumbo.net>)
	id 1kUqKT-0006DX-Kt
	for xen-devel@lists.xen.org; Tue, 20 Oct 2020 12:01:25 +0000
X-Inumbo-ID: b9c2b219-4455-492b-8042-7d7cd498203c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b9c2b219-4455-492b-8042-7d7cd498203c;
	Tue, 20 Oct 2020 12:00:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Date:Message-Id:Subject:CC:From:To:MIME-Version:
	Content-Transfer-Encoding:Content-Type;
	bh=B5LnVXlXyI7SkKLwT8enjVPAkqRwCrJ9IY42lFZWuuA=; b=LJXYwkyAbAZxNvk8wG8LaOi8Dq
	/xotufhCLTRSvZwGA5bgibsxBxMYGw7ccCcdJOYyXIJ7jfxkCMWRItgC16VzbmG2Ziw6hirYggkPU
	tP7PNEzR9ZClsawMKWLIF8uRjTRZlF9A6Gnr/gHmnApOErFWm2UZsj+Pq1h7h/P1KyaI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1kUqJu-0001Kz-EE; Tue, 20 Oct 2020 12:00:50 +0000
Received: from iwj by xenbits.xenproject.org with local (Exim 4.92)
	(envelope-from <iwj@xenbits.xen.org>)
	id 1kUqJu-00024u-DS; Tue, 20 Oct 2020 12:00:50 +0000
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 347 v2 - unsafe AMD IOMMU page table updates
Message-Id: <E1kUqJu-00024u-DS@xenbits.xenproject.org>
Date: Tue, 20 Oct 2020 12:00:50 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

                    Xen Security Advisory XSA-347
                              version 2

                  unsafe AMD IOMMU page table updates

UPDATES IN VERSION 2
====================

Public release.

ISSUE DESCRIPTION
=================

AMD IOMMU page table entries are updated in a step by step manner,
without regard to them being potentially in use by the IOMMU.  Therefore
it was possible that the IOMMU would read and then use a half-updated
entry.  Furthermore, updates to Device Table entries lacked suitable
ordering enforcement for certain steps involved in these updates.

In both cases the specific outcome heavily depends on how exactly the
compiler translated the affected pieces of code.

IMPACT
======

A malicious guest might be able to cause data corruption and data
leaks.  Host or guest Denial of Service (DoS), and privilege
escalation, cannot be ruled out.

VULNERABLE SYSTEMS
==================

All Xen versions are potentially vulnerable.

Only x86 systems with AMD, Hygon, or compatible IOMMU hardware are
vulnerable.  Arm systems as well as x86 systems with VT-d hardware or
without any IOMMUs in use are not vulnerable.

Only x86 guests which have physical devices passed through to them can
leverage the vulnerability.

MITIGATION
==========

Not passing through physical devices to untrusted guests will avoid
the vulnerability.

CREDITS
=======

This issue was discovered by Paul Durrant of Amazon and Jan Beulich of
SUSE.

RESOLUTION
==========

Applying the appropriate set of attached patches resolves this issue.

Note that patches for released versions are generally prepared to
apply to the stable branches, and may not apply cleanly to the most
recent release tarball.  Downstreams are encouraged to update to the
tip of the stable branch before applying these patches.

xsa347/xsa347-?.patch           xen-unstable
xsa347/xsa347-4.14-?.patch      Xen 4.14
xsa347/xsa347-4.13-?.patch      Xen 4.13
xsa347/xsa347-4.12-?.patch      Xen 4.12
xsa347/xsa347-4.11-?.patch      Xen 4.10 - 4.11

$ sha256sum xsa347* xsa347*/*
f16e1a348b0e45601c96b2bd08afc4202bbccc92c8af8344b3c8286ca819acef  xsa347.meta
82e14d0507ec94f8cfac2b4d5d1b60681b925218ab927332bee338e6b6c679c9  xsa347/xsa347-1.patch
1bc6018c3685727ba4035bf0b5cea95940a1b9c4746fa9bddfd41507482d68a1  xsa347/xsa347-2.patch
f1bd8eba268300f564837ac37fe43b774ace885c9cbf8fcacae457128730bc80  xsa347/xsa347-3.patch
5aec8f3b15aa799e1ff7ec0dfe53523cb91aa5fd88033f7f034cb74ebaa6abe4  xsa347/xsa347-4.11-1.patch
4ab3a6fa181ce486b4c9943f6629b7c1a4337c7ccb92701ae6e40108533778ca  xsa347/xsa347-4.11-2.patch
fec82340dc65fc1001358de51d0639b2b401818fa1e831f8715cb1780b17dc7b  xsa347/xsa347-4.12-1.patch
be89e976fe03464ce3a73b162c07927128f41a8a03466e903ebfa4ea0dc46116  xsa347/xsa347-4.12-2.patch
5dc0abf73d1a9d21f2b57e6c57ee5c15cc3febbb783123c0946f3e5778671929  xsa347/xsa347-4.13-1.patch
6d2b6ea7a373fb1c4cce63db349bbafa8603b5e7c6b74fc6d029954075f2268d  xsa347/xsa347-4.13-2.patch
4e154bfca5101569c8260e307eb6439783bc99547b7dfb5aba2bafebbde46190  xsa347/xsa347-4.13-3.patch
6a70c2afba0d3ad73b12743a6808ba8002e9ee573d7c460397355e40de3b553f  xsa347/xsa347-4.14-1.patch
1bc6018c3685727ba4035bf0b5cea95940a1b9c4746fa9bddfd41507482d68a1  xsa347/xsa347-4.14-2.patch
f1bd8eba268300f564837ac37fe43b774ace885c9cbf8fcacae457128730bc80  xsa347/xsa347-4.14-3.patch
$

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches described above (or others which are
substantially similar) is permitted during the embargo, even on
public-facing systems with untrusted guest users and administrators.

HOWEVER, deployment of the mitigation is NOT permitted (except where
all the affected systems and VMs are administered and used only by
organisations which are members of the Xen Project Security Issues
Predisclosure List).  Specifically, deployment on public cloud systems
is NOT permitted.

This is because removal of pass-through devices or their replacement by
emulated devices is a guest visible configuration change, which may lead
to re-discovery of the issue.

Deployment of this mitigation is permitted only AFTER the embargo ends.

AND: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.

(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decision-making.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAl+Ozq4MHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZ49QH/jCHl5dId4Bj/FSfDEK1iTTpTMTIwIIv5PGaSBh1
F/VoQiS+e5hAlbucK8M362GJlHO3p4wHPgyZLNY82BZrPuzeL/GAV8p4qqrfsJjS
uk6UZQyyIAKH8NcbICzm06WrOQ2ayfGJvJtmyfkwqDcT+VSJ/ohmcBw9WQABACSS
+Wr2LRZAzucpY23z/sWMOYx312sRx8EvzAeA4qP0g1jAc54cNbCVuDV2iqcFot4F
vd+/vfkh7HuIvLk7gQ8KbKjXyGqR7Wt78EEDNTpSxvXuGUTFc+jCnA0429Se1NGw
cLgeaTr29RtDIlFtxqS0DR3Pu4HYL535Dkn/w8dfmL7mPjU=
=rjzR
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa347.meta"
Content-Disposition: attachment; filename="xsa347.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNDcsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIsCiAgICAiNC4xNCIsCiAgICAiNC4xMyIsCiAgICAiNC4xMiIs
CiAgICAiNC4xMSIsCiAgICAiNC4xMCIKICBdLAogICJUcmVlcyI6IFsKICAg
ICJ4ZW4iCiAgXSwKICAiUmVjaXBlcyI6IHsKICAgICI0LjEwIjogewogICAg
ICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewogICAgICAgICAgIlN0
YWJsZVJlZiI6ICIxNzE5Zjc5YTBlZmQzNmQxNTgzN2M1MTk4MjE3M2RkMWMy
ODdkY2VkIiwKICAgICAgICAgICJQcmVyZXFzIjogWwogICAgICAgICAgICAy
ODYsCiAgICAgICAgICAgIDM0NSwKICAgICAgICAgICAgMzQ2CiAgICAgICAg
ICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAgICAgICAgICJ4c2Ez
NDcveHNhMzQ3LTQuMTEtPy5wYXRjaCIKICAgICAgICAgIF0KICAgICAgICB9
CiAgICAgIH0KICAgIH0sCiAgICAiNC4xMSI6IHsKICAgICAgIlJlY2lwZXMi
OiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYiOiAi
MzYzMGEzNjc4NTRjOThiYmY4ZTc0N2QwOWVlYWI3ZTY4ZjM3MDAwMyIsCiAg
ICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMjg2LAogICAgICAg
ICAgICAzNDUsCiAgICAgICAgICAgIDM0NgogICAgICAgICAgXSwKICAgICAg
ICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzQ3L3hzYTM0Ny00
LjExLT8ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAgfQogICAgICB9CiAg
ICB9LAogICAgIjQuMTIiOiB7CiAgICAgICJSZWNpcGVzIjogewogICAgICAg
ICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjY4ODgwMTczOTJh
YzI1YjVlNTg4NTU0MDMwNjQyYWZmYWMyNWE5NWQiLAogICAgICAgICAgIlBy
ZXJlcXMiOiBbCiAgICAgICAgICAgIDI4NiwKICAgICAgICAgICAgMzQ1LAog
ICAgICAgICAgICAzNDYKICAgICAgICAgIF0sCiAgICAgICAgICAiUGF0Y2hl
cyI6IFsKICAgICAgICAgICAgInhzYTM0Ny94c2EzNDctNC4xMi0/LnBhdGNo
IgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAgfSwKICAgICI0
LjEzIjogewogICAgICAiUmVjaXBlcyI6IHsKICAgICAgICAieGVuIjogewog
ICAgICAgICAgIlN0YWJsZVJlZiI6ICI4ZTdlNTg1N2EyMDNjOWQ5ZGY3NzMz
ZmQ2ODc2ODU1NWM3ZTc2ODM5IiwKICAgICAgICAgICJQcmVyZXFzIjogWwog
ICAgICAgICAgICAyODYsCiAgICAgICAgICAgIDM0NSwKICAgICAgICAgICAg
MzQ2CiAgICAgICAgICBdLAogICAgICAgICAgIlBhdGNoZXMiOiBbCiAgICAg
ICAgICAgICJ4c2EzNDcveHNhMzQ3LTQuMTMtPy5wYXRjaCIKICAgICAgICAg
IF0KICAgICAgICB9CiAgICAgIH0KICAgIH0sCiAgICAiNC4xNCI6IHsKICAg
ICAgIlJlY2lwZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJT
dGFibGVSZWYiOiAiYzkzYjUyMGE0MWYyNzg3ZGQ3NmJmYjJlNDU0ODM2ZDFk
NTc4NzUwNSIsCiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAg
Mjg2LAogICAgICAgICAgICAzNDUsCiAgICAgICAgICAgIDM0NgogICAgICAg
ICAgXSwKICAgICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNh
MzQ3L3hzYTM0Ny00LjE0LT8ucGF0Y2giCiAgICAgICAgICBdCiAgICAgICAg
fQogICAgICB9CiAgICB9LAogICAgIm1hc3RlciI6IHsKICAgICAgIlJlY2lw
ZXMiOiB7CiAgICAgICAgInhlbiI6IHsKICAgICAgICAgICJTdGFibGVSZWYi
OiAiOTM1MDg1OTVkNTg4YWZlOWRjYTA4N2Y5NTIwMGVmZmI3Y2VkYzgxZiIs
CiAgICAgICAgICAiUHJlcmVxcyI6IFsKICAgICAgICAgICAgMjg2LAogICAg
ICAgICAgICAzNDUsCiAgICAgICAgICAgIDM0NgogICAgICAgICAgXSwKICAg
ICAgICAgICJQYXRjaGVzIjogWwogICAgICAgICAgICAieHNhMzQ3L3hzYTM0
Ny0/LnBhdGNoIgogICAgICAgICAgXQogICAgICAgIH0KICAgICAgfQogICAg
fQogIH0KfQ==

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-1.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGNvbnZlcnQgYW1kX2lvbW11X3B0ZSBmcm9tIHN0cnVj
dCB0byB1bmlvbgoKVGhpcyBpcyB0byBhZGQgYSAicmF3IiBjb3VudGVycGFy
dCB0byB0aGUgYml0ZmllbGQgZXF1aXZhbGVudC4gVGFrZSB0aGUKb3Bwb3J0
dW5pdHkgYW5kCiAtIGNvbnZlcnQgZmllbGRzIHRvIGJvb2wgLyB1bnNpZ25l
ZCBpbnQsCiAtIGRyb3AgdGhlIG5hbWluZyBvZiB0aGUgcmVzZXJ2ZWQgZmll
bGQsCiAtIHNob3J0ZW4gdGhlIG5hbWVzIG9mIHRoZSBpZ25vcmVkIG9uZXMu
CgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmll
d2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4KCi0tLSBhL3hl
bi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdS1kZWZzLmgKKysrIGIv
eGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11LWRlZnMuaApAQCAt
NDUxLDIwICs0NTEsMjMgQEAgdW5pb24gYW1kX2lvbW11X3gyYXBpY19jb250
cm9sIHsKICNkZWZpbmUgSU9NTVVfUEFHRV9UQUJMRV9VMzJfUEVSX0VOVFJZ
CShJT01NVV9QQUdFX1RBQkxFX0VOVFJZX1NJWkUgLyA0KQogI2RlZmluZSBJ
T01NVV9QQUdFX1RBQkxFX0FMSUdOTUVOVAk0MDk2CiAKLXN0cnVjdCBhbWRf
aW9tbXVfcHRlIHsKLSAgICB1aW50NjRfdCBwcjoxOwotICAgIHVpbnQ2NF90
IGlnbm9yZWQwOjQ7Ci0gICAgdWludDY0X3QgYToxOwotICAgIHVpbnQ2NF90
IGQ6MTsKLSAgICB1aW50NjRfdCBpZ25vcmVkMToyOwotICAgIHVpbnQ2NF90
IG5leHRfbGV2ZWw6MzsKLSAgICB1aW50NjRfdCBtZm46NDA7Ci0gICAgdWlu
dDY0X3QgcmVzZXJ2ZWQ6NzsKLSAgICB1aW50NjRfdCB1OjE7Ci0gICAgdWlu
dDY0X3QgZmM6MTsKLSAgICB1aW50NjRfdCBpcjoxOwotICAgIHVpbnQ2NF90
IGl3OjE7Ci0gICAgdWludDY0X3QgaWdub3JlZDI6MTsKK3VuaW9uIGFtZF9p
b21tdV9wdGUgeworICAgIHVpbnQ2NF90IHJhdzsKKyAgICBzdHJ1Y3Qgewor
ICAgICAgICBib29sIHByOjE7CisgICAgICAgIHVuc2lnbmVkIGludCBpZ24w
OjQ7CisgICAgICAgIGJvb2wgYToxOworICAgICAgICBib29sIGQ6MTsKKyAg
ICAgICAgdW5zaWduZWQgaW50IGlnbjE6MjsKKyAgICAgICAgdW5zaWduZWQg
aW50IG5leHRfbGV2ZWw6MzsKKyAgICAgICAgdWludDY0X3QgbWZuOjQwOwor
ICAgICAgICB1bnNpZ25lZCBpbnQgOjc7CisgICAgICAgIGJvb2wgdToxOwor
ICAgICAgICBib29sIGZjOjE7CisgICAgICAgIGJvb2wgaXI6MTsKKyAgICAg
ICAgYm9vbCBpdzoxOworICAgICAgICB1bnNpZ25lZCBpbnQgaWduMjoxOwor
ICAgIH07CiB9OwogCiAvKiBQYWdpbmcgbW9kZXMgKi8KLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9k
cml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtMzQsNyAr
MzQsNyBAQCBzdGF0aWMgdW5zaWduZWQgaW50IHBmbl90b19wZGVfaWR4KHVu
c2lnCiBzdGF0aWMgdW5zaWduZWQgaW50IGNsZWFyX2lvbW11X3B0ZV9wcmVz
ZW50KHVuc2lnbmVkIGxvbmcgbDFfbWZuLAogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBsb25nIGRmbikK
IHsKLSAgICBzdHJ1Y3QgYW1kX2lvbW11X3B0ZSAqdGFibGUsICpwdGU7Cisg
ICAgdW5pb24gYW1kX2lvbW11X3B0ZSAqdGFibGUsICpwdGU7CiAgICAgdW5z
aWduZWQgaW50IGZsdXNoX2ZsYWdzOwogCiAgICAgdGFibGUgPSBtYXBfZG9t
YWluX3BhZ2UoX21mbihsMV9tZm4pKTsKQEAgLTQ4LDcgKzQ4LDcgQEAgc3Rh
dGljIHVuc2lnbmVkIGludCBjbGVhcl9pb21tdV9wdGVfcHJlcwogICAgIHJl
dHVybiBmbHVzaF9mbGFnczsKIH0KIAotc3RhdGljIHVuc2lnbmVkIGludCBz
ZXRfaW9tbXVfcGRlX3ByZXNlbnQoc3RydWN0IGFtZF9pb21tdV9wdGUgKnB0
ZSwKK3N0YXRpYyB1bnNpZ25lZCBpbnQgc2V0X2lvbW11X3BkZV9wcmVzZW50
KHVuaW9uIGFtZF9pb21tdV9wdGUgKnB0ZSwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgbmV4dF9t
Zm4sCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICB1bnNpZ25lZCBpbnQgbmV4dF9sZXZlbCwgYm9vbCBpdywKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2wgaXIpCkBA
IC04Myw3ICs4Myw3IEBAIHN0YXRpYyB1bnNpZ25lZCBpbnQgc2V0X2lvbW11
X3B0ZV9wcmVzZW4KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGludCBwZGVfbGV2ZWwsCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBib29sIGl3LCBib29sIGlyKQogewot
ICAgIHN0cnVjdCBhbWRfaW9tbXVfcHRlICp0YWJsZSwgKnBkZTsKKyAgICB1
bmlvbiBhbWRfaW9tbXVfcHRlICp0YWJsZSwgKnBkZTsKICAgICB1bnNpZ25l
ZCBpbnQgZmx1c2hfZmxhZ3M7CiAKICAgICB0YWJsZSA9IG1hcF9kb21haW5f
cGFnZShfbWZuKHB0X21mbikpOwpAQCAtMTc0LDcgKzE3NCw3IEBAIHZvaWQg
aW9tbXVfZHRlX3NldF9ndWVzdF9jcjMoc3RydWN0IGFtZF8KIHN0YXRpYyBp
bnQgaW9tbXVfcGRlX2Zyb21fZGZuKHN0cnVjdCBkb21haW4gKmQsIHVuc2ln
bmVkIGxvbmcgZGZuLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
dW5zaWduZWQgbG9uZyBwdF9tZm5bXSwgYm9vbCBtYXApCiB7Ci0gICAgc3Ry
dWN0IGFtZF9pb21tdV9wdGUgKnBkZSwgKm5leHRfdGFibGVfdmFkZHI7Cisg
ICAgdW5pb24gYW1kX2lvbW11X3B0ZSAqcGRlLCAqbmV4dF90YWJsZV92YWRk
cjsKICAgICB1bnNpZ25lZCBsb25nICBuZXh0X3RhYmxlX21mbjsKICAgICB1
bnNpZ25lZCBpbnQgbGV2ZWw7CiAgICAgc3RydWN0IHBhZ2VfaW5mbyAqdGFi
bGU7CkBAIC00NDgsNyArNDQ4LDcgQEAgaW50IF9faW5pdCBhbWRfaW9tbXVf
cXVhcmFudGluZV9pbml0KHN0cgogICAgIHVuc2lnbmVkIGxvbmcgZW5kX2dm
biA9CiAgICAgICAgIDF1bCA8PCAoREVGQVVMVF9ET01BSU5fQUREUkVTU19X
SURUSCAtIFBBR0VfU0hJRlQpOwogICAgIHVuc2lnbmVkIGludCBsZXZlbCA9
IGFtZF9pb21tdV9nZXRfcGFnaW5nX21vZGUoZW5kX2dmbik7Ci0gICAgc3Ry
dWN0IGFtZF9pb21tdV9wdGUgKnRhYmxlOworICAgIHVuaW9uIGFtZF9pb21t
dV9wdGUgKnRhYmxlOwogCiAgICAgaWYgKCBoZC0+YXJjaC5hbWQucm9vdF90
YWJsZSApCiAgICAgewpAQCAtNDc5LDcgKzQ3OSw3IEBAIGludCBfX2luaXQg
YW1kX2lvbW11X3F1YXJhbnRpbmVfaW5pdChzdHIKIAogICAgICAgICBmb3Ig
KCBpID0gMDsgaSA8IFBURV9QRVJfVEFCTEVfU0laRTsgaSsrICkKICAgICAg
ICAgewotICAgICAgICAgICAgc3RydWN0IGFtZF9pb21tdV9wdGUgKnBkZSA9
ICZ0YWJsZVtpXTsKKyAgICAgICAgICAgIHVuaW9uIGFtZF9pb21tdV9wdGUg
KnBkZSA9ICZ0YWJsZVtpXTsKIAogICAgICAgICAgICAgLyoKICAgICAgICAg
ICAgICAqIFBERXMgYXJlIGVzc2VudGlhbGx5IGEgc3Vic2V0IG9mIFBURXMs
IHNvIHRoaXMgZnVuY3Rpb24KLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL3BjaV9hbWRfaW9tbXUuYworKysgYi94ZW4vZHJpdmVycy9wYXNz
dGhyb3VnaC9hbWQvcGNpX2FtZF9pb21tdS5jCkBAIC00OTUsNyArNDk1LDcg
QEAgc3RhdGljIHZvaWQgYW1kX2R1bXBfcDJtX3RhYmxlX2xldmVsKHN0cgog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhZGRyX3Qg
Z3BhLCBpbnQgaW5kZW50KQogewogICAgIHBhZGRyX3QgYWRkcmVzczsKLSAg
ICBzdHJ1Y3QgYW1kX2lvbW11X3B0ZSAqdGFibGVfdmFkZHI7CisgICAgY29u
c3QgdW5pb24gYW1kX2lvbW11X3B0ZSAqdGFibGVfdmFkZHI7CiAgICAgaW50
IGluZGV4OwogCiAgICAgaWYgKCBsZXZlbCA8IDEgKQpAQCAtNTExLDcgKzUx
MSw3IEBAIHN0YXRpYyB2b2lkIGFtZF9kdW1wX3AybV90YWJsZV9sZXZlbChz
dHIKIAogICAgIGZvciAoIGluZGV4ID0gMDsgaW5kZXggPCBQVEVfUEVSX1RB
QkxFX1NJWkU7IGluZGV4KysgKQogICAgIHsKLSAgICAgICAgc3RydWN0IGFt
ZF9pb21tdV9wdGUgKnBkZSA9ICZ0YWJsZV92YWRkcltpbmRleF07CisgICAg
ICAgIGNvbnN0IHVuaW9uIGFtZF9pb21tdV9wdGUgKnBkZSA9ICZ0YWJsZV92
YWRkcltpbmRleF07CiAKICAgICAgICAgaWYgKCAhKGluZGV4ICUgMikgKQog
ICAgICAgICAgICAgcHJvY2Vzc19wZW5kaW5nX3NvZnRpcnFzKCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-2.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHVwZGF0ZSBsaXZlIFBURXMgYXRvbWljYWxseQoKVXBk
YXRpbmcgYSBsaXZlIFBURSBiaXRmaWVsZCBieSBiaXRmaWVsZCByaXNrcyB0
aGUgY29tcGlsZXIgcmUtb3JkZXJpbmcKdGhlIGluZGl2aWR1YWwgdXBkYXRl
cyBhcyB3ZWxsIGFzIHNwbGl0dGluZyBpbmRpdmlkdWFsIHVwZGF0ZXMgaW50
bwptdWx0aXBsZSBtZW1vcnkgd3JpdGVzLiBDb25zdHJ1Y3QgdGhlIG5ldyBl
bnRyeSBmdWxseSBpbiBhIGxvY2FsCnZhcmlhYmxlLCBkbyB0aGUgY2hlY2sg
dG8gZGV0ZXJtaW5lIHRoZSBmbHVzaGluZyBuZWVkcyBvbiB0aGUgdGh1cwpl
c3RhYmxpc2hlZCBuZXcgZW50cnksIGFuZCB0aGVuIHdyaXRlIHRoZSBuZXcg
ZW50cnkgYnkgYSBzaW5nbGUgaW5zbi4KClNpbWlsYXJseSB1c2luZyBtZW1z
ZXQoKSB0byBjbGVhciBhIFBURSBpcyB1bnNhZmUsIGFzIHRoZSBvcmRlciBv
Zgp3cml0ZXMgdGhlIGZ1bmN0aW9uIGRvZXMgaXMsIGF0IGxlYXN0IGluIHBy
aW5jaXBsZSwgdW5kZWZpbmVkLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDcu
CgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5j
b20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4K
Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdV9tYXAu
YworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfbWFw
LmMKQEAgLTQxLDcgKzQxLDcgQEAgc3RhdGljIHVuc2lnbmVkIGludCBjbGVh
cl9pb21tdV9wdGVfcHJlcwogICAgIHB0ZSA9ICZ0YWJsZVtwZm5fdG9fcGRl
X2lkeChkZm4sIDEpXTsKIAogICAgIGZsdXNoX2ZsYWdzID0gcHRlLT5wciA/
IElPTU1VX0ZMVVNIRl9tb2RpZmllZCA6IDA7Ci0gICAgbWVtc2V0KHB0ZSwg
MCwgc2l6ZW9mKCpwdGUpKTsKKyAgICB3cml0ZV9hdG9taWMoJnB0ZS0+cmF3
LCAwKTsKIAogICAgIHVubWFwX2RvbWFpbl9wYWdlKHRhYmxlKTsKIApAQCAt
NTMsMjYgKzUzLDMwIEBAIHN0YXRpYyB1bnNpZ25lZCBpbnQgc2V0X2lvbW11
X3BkZV9wcmVzZW4KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHVuc2lnbmVkIGludCBuZXh0X2xldmVsLCBib29sIGl3LAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9v
bCBpcikKIHsKKyAgICB1bmlvbiBhbWRfaW9tbXVfcHRlIG5ldyA9IHt9LCBv
bGQ7CiAgICAgdW5zaWduZWQgaW50IGZsdXNoX2ZsYWdzID0gSU9NTVVfRkxV
U0hGX2FkZGVkOwogCi0gICAgaWYgKCBwdGUtPnByICYmCi0gICAgICAgICAo
cHRlLT5tZm4gIT0gbmV4dF9tZm4gfHwKLSAgICAgICAgICBwdGUtPml3ICE9
IGl3IHx8Ci0gICAgICAgICAgcHRlLT5pciAhPSBpciB8fAotICAgICAgICAg
IHB0ZS0+bmV4dF9sZXZlbCAhPSBuZXh0X2xldmVsKSApCi0gICAgICAgICAg
ICBmbHVzaF9mbGFncyB8PSBJT01NVV9GTFVTSEZfbW9kaWZpZWQ7Ci0KICAg
ICAvKgogICAgICAqIEZDIGJpdCBzaG91bGQgYmUgZW5hYmxlZCBpbiBQVEUs
IHRoaXMgaGVscHMgdG8gc29sdmUgcG90ZW50aWFsCiAgICAgICogaXNzdWVz
IHdpdGggQVRTIGRldmljZXMKICAgICAgKi8KLSAgICBwdGUtPmZjID0gIW5l
eHRfbGV2ZWw7CisgICAgbmV3LmZjID0gIW5leHRfbGV2ZWw7CisKKyAgICBu
ZXcubWZuID0gbmV4dF9tZm47CisgICAgbmV3Lml3ID0gaXc7CisgICAgbmV3
LmlyID0gaXI7CisgICAgbmV3Lm5leHRfbGV2ZWwgPSBuZXh0X2xldmVsOwor
ICAgIG5ldy5wciA9IHRydWU7CisKKyAgICBvbGQucmF3ID0gcmVhZF9hdG9t
aWMoJnB0ZS0+cmF3KTsKKyAgICBvbGQuaWduMCA9IDA7CisgICAgb2xkLmln
bjEgPSAwOworICAgIG9sZC5pZ24yID0gMDsKKworICAgIGlmICggb2xkLnBy
ICYmIG9sZC5yYXcgIT0gbmV3LnJhdyApCisgICAgICAgIGZsdXNoX2ZsYWdz
IHw9IElPTU1VX0ZMVVNIRl9tb2RpZmllZDsKIAotICAgIHB0ZS0+bWZuID0g
bmV4dF9tZm47Ci0gICAgcHRlLT5pdyA9IGl3OwotICAgIHB0ZS0+aXIgPSBp
cjsKLSAgICBwdGUtPm5leHRfbGV2ZWwgPSBuZXh0X2xldmVsOwotICAgIHB0
ZS0+cHIgPSAxOworICAgIHdyaXRlX2F0b21pYygmcHRlLT5yYXcsIG5ldy5y
YXcpOwogCiAgICAgcmV0dXJuIGZsdXNoX2ZsYWdzOwogfQo=

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-3.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGVuc3VyZSBzdWl0YWJsZSBvcmRlcmluZyBvZiBEVEUg
bW9kaWZpY2F0aW9ucwoKRE1BIGFuZCBpbnRlcnJ1cHQgdHJhbnNsYXRpb24g
c2hvdWxkIGJlIGVuYWJsZWQgb25seSBhZnRlciBvdGhlcgphcHBsaWNhYmxl
IERURSBmaWVsZHMgaGF2ZSBiZWVuIHdyaXR0ZW4uIFNpbWlsYXJseSB3aGVu
IGRpc2FibGluZwp0cmFuc2xhdGlvbiBvciB3aGVuIG1vdmluZyBhIGRldmlj
ZSBiZXR3ZWVuIGRvbWFpbnMsIHRyYW5zbGF0aW9uIHNob3VsZApmaXJzdCBi
ZSBkaXNhYmxlZCwgYmVmb3JlIG90aGVyIGVudHJ5IGZpZWxkcyBnZXQgbW9k
aWZpZWQuIE5vdGUgaG93ZXZlcgp0aGF0IHRoZSAibW92aW5nIiBhc3BlY3Qg
ZG9lc24ndCBhcHBseSB0byB0aGUgaW50ZXJydXB0IHJlbWFwcGluZyBzaWRl
LAphcyBkb21haW4gc3BlY2lmaWNzIGFyZSBtYWludGFpbmVkIGluIHRoZSBJ
UlRFcyBoZXJlLCBub3QgdGhlIERURS4gV2UKYWxzbyBuZXZlciBkaXNhYmxl
IGludGVycnVwdCByZW1hcHBpbmcgb25jZSBpdCBnb3QgZW5hYmxlZCBmb3Ig
YSBkZXZpY2UKKHRoZSByZXNwZWN0aXZlIGFyZ3VtZW50IHBhc3NlZCBpcyBh
bHdheXMgdGhlIGltbXV0YWJsZSBpb21tdV9pbnRyZW1hcCkuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFu
dCA8cGF1bEB4ZW4ub3JnPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtMTAzLDExICsxMDMsMTggQEAgdm9p
ZCBhbWRfaW9tbXVfc2V0X3Jvb3RfcGFnZV90YWJsZShzdHJ1YwogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1aW50NjRfdCByb290X3B0
ciwgdWludDE2X3QgZG9tYWluX2lkLAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICB1aW50OF90IHBhZ2luZ19tb2RlLCBib29sIHZhbGlk
KQogeworICAgIGlmICggdmFsaWQgfHwgZHRlLT52ICkKKyAgICB7CisgICAg
ICAgIGR0ZS0+dHYgPSBmYWxzZTsKKyAgICAgICAgZHRlLT52ID0gdHJ1ZTsK
KyAgICAgICAgc21wX3dtYigpOworICAgIH0KICAgICBkdGUtPmRvbWFpbl9p
ZCA9IGRvbWFpbl9pZDsKICAgICBkdGUtPnB0X3Jvb3QgPSBwYWRkcl90b19w
Zm4ocm9vdF9wdHIpOwogICAgIGR0ZS0+aXcgPSB0cnVlOwogICAgIGR0ZS0+
aXIgPSB0cnVlOwogICAgIGR0ZS0+cGFnaW5nX21vZGUgPSBwYWdpbmdfbW9k
ZTsKKyAgICBzbXBfd21iKCk7CiAgICAgZHRlLT50diA9IHRydWU7CiAgICAg
ZHRlLT52ID0gdmFsaWQ7CiB9CkBAIC0xMzAsNiArMTM3LDcgQEAgdm9pZCBh
bWRfaW9tbXVfc2V0X2ludHJlbWFwX3RhYmxlKAogICAgIH0KIAogICAgIGR0
ZS0+aWcgPSBmYWxzZTsgLyogdW5tYXBwZWQgaW50ZXJydXB0cyByZXN1bHQg
aW4gaS9vIHBhZ2UgZmF1bHRzICovCisgICAgc21wX3dtYigpOwogICAgIGR0
ZS0+aXYgPSB2YWxpZDsKIH0KIAotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC9hbWQvcGNpX2FtZF9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bh
c3N0aHJvdWdoL2FtZC9wY2lfYW1kX2lvbW11LmMKQEAgLTExNyw3ICsxMTcs
MTAgQEAgc3RhdGljIHZvaWQgYW1kX2lvbW11X3NldHVwX2RvbWFpbl9kZXZp
YwogICAgICAgICAvKiBVbmRvIHdoYXQgYW1kX2lvbW11X2Rpc2FibGVfZG9t
YWluX2RldmljZSgpIG1heSBoYXZlIGRvbmUuICovCiAgICAgICAgIGl2cnNf
ZGV2ID0gJmdldF9pdnJzX21hcHBpbmdzKGlvbW11LT5zZWcpW3JlcV9pZF07
CiAgICAgICAgIGlmICggZHRlLT5pdF9yb290ICkKKyAgICAgICAgewogICAg
ICAgICAgICAgZHRlLT5pbnRfY3RsID0gSU9NTVVfREVWX1RBQkxFX0lOVF9D
T05UUk9MX1RSQU5TTEFURUQ7CisgICAgICAgICAgICBzbXBfd21iKCk7Cisg
ICAgICAgIH0KICAgICAgICAgZHRlLT5pdiA9IGlvbW11X2ludHJlbWFwOwog
ICAgICAgICBkdGUtPmV4ID0gaXZyc19kZXYtPmR0ZV9hbGxvd19leGNsdXNp
b247CiAgICAgICAgIGR0ZS0+c3lzX21ndCA9IE1BU0tfRVhUUihpdnJzX2Rl
di0+ZGV2aWNlX2ZsYWdzLCBBQ1BJX0lWSERfU1lTVEVNX01HTVQpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.11-1.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.11-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHVwZGF0ZSBsaXZlIFBURXMgYXRvbWljYWxseQoKVXBk
YXRpbmcgYSBsaXZlIFBURSB3b3JkIGJ5IHdvcmQgYWxsb3dzIHRoZSBJT01N
VSB0byBzZWUgYSBwYXJ0aWFsbHkKdXBkYXRlZCBlbnRyeS4gQ29uc3RydWN0
IHRoZSBuZXcgZW50cnkgZnVsbHkgaW4gYSBsb2NhbCB2YXJpYWJsZSBhbmQK
dGhlbiB3cml0ZSB0aGUgbmV3IGVudHJ5IGJ5IGEgc2luZ2xlIGluc24uCgpU
aGlzIGlzIHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEphbiBC
ZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwg
RHVycmFudCA8cGF1bEB4ZW4ub3JnPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bh
c3N0aHJvdWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtNDEsNyArNDEsNyBAQCBz
dGF0aWMgdm9pZCBjbGVhcl9pb21tdV9wdGVfcHJlc2VudCh1bnNpCiAKICAg
ICB0YWJsZSA9IG1hcF9kb21haW5fcGFnZShfbWZuKGwxX21mbikpOwogICAg
IHB0ZSA9IHRhYmxlICsgcGZuX3RvX3BkZV9pZHgoZ2ZuLCBJT01NVV9QQUdJ
TkdfTU9ERV9MRVZFTF8xKTsKLSAgICAqcHRlID0gMDsKKyAgICB3cml0ZV9h
dG9taWMocHRlLCAwKTsKICAgICB1bm1hcF9kb21haW5fcGFnZSh0YWJsZSk7
CiB9CiAKQEAgLTQ5LDcgKzQ5LDcgQEAgc3RhdGljIGJvb2xfdCBzZXRfaW9t
bXVfcGRlX3ByZXNlbnQodTMyCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICB1bnNpZ25lZCBpbnQgbmV4dF9sZXZlbCwKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2xfdCBpdywgYm9vbF90
IGlyKQogewotICAgIHVpbnQ2NF90IGFkZHJfbG8sIGFkZHJfaGksIG1hZGRy
X25leHQ7CisgICAgdWludDY0X3QgYWRkcl9sbywgYWRkcl9oaSwgbWFkZHJf
bmV4dCwgZnVsbDsKICAgICB1MzIgZW50cnk7CiAgICAgYm9vbCBuZWVkX2Zs
dXNoID0gZmFsc2UsIG9sZF9wcmVzZW50OwogCkBAIC0xMDYsNyArMTA2LDcg
QEAgc3RhdGljIGJvb2xfdCBzZXRfaW9tbXVfcGRlX3ByZXNlbnQodTMyCiAg
ICAgaWYgKCBuZXh0X2xldmVsID09IElPTU1VX1BBR0lOR19NT0RFX0xFVkVM
XzAgKQogICAgICAgICBzZXRfZmllbGRfaW5fcmVnX3UzMihJT01NVV9DT05U
Uk9MX0VOQUJMRUQsIGVudHJ5LAogICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBJT01NVV9QVEVfRkNfTUFTSywgSU9NTVVfUFRFX0ZDX1NISUZULCAm
ZW50cnkpOwotICAgIHBkZVsxXSA9IGVudHJ5OworICAgIGZ1bGwgPSAodWlu
dDY0X3QpZW50cnkgPDwgMzI7CiAKICAgICAvKiBtYXJrIG5leHQgbGV2ZWwg
YXMgJ3ByZXNlbnQnICovCiAgICAgc2V0X2ZpZWxkX2luX3JlZ191MzIoKHUz
MilhZGRyX2xvID4+IFBBR0VfU0hJRlQsIDAsCkBAIC0xMTgsNyArMTE4LDkg
QEAgc3RhdGljIGJvb2xfdCBzZXRfaW9tbXVfcGRlX3ByZXNlbnQodTMyCiAg
ICAgc2V0X2ZpZWxkX2luX3JlZ191MzIoSU9NTVVfQ09OVFJPTF9FTkFCTEVE
LCBlbnRyeSwKICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9QREVf
UFJFU0VOVF9NQVNLLAogICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1V
X1BERV9QUkVTRU5UX1NISUZULCAmZW50cnkpOwotICAgIHBkZVswXSA9IGVu
dHJ5OworICAgIGZ1bGwgfD0gZW50cnk7CisKKyAgICB3cml0ZV9hdG9taWMo
KHVpbnQ2NF90ICopcGRlLCBmdWxsKTsKIAogICAgIHJldHVybiBuZWVkX2Zs
dXNoOwogfQo=

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.11-2.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.11-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGVuc3VyZSBzdWl0YWJsZSBvcmRlcmluZyBvZiBEVEUg
bW9kaWZpY2F0aW9ucwoKRE1BIGFuZCBpbnRlcnJ1cHQgdHJhbnNsYXRpb24g
c2hvdWxkIGJlIGVuYWJsZWQgb25seSBhZnRlciBvdGhlcgphcHBsaWNhYmxl
IERURSBmaWVsZHMgaGF2ZSBiZWVuIHdyaXR0ZW4uIFNpbWlsYXJseSB3aGVu
IGRpc2FibGluZwp0cmFuc2xhdGlvbiBvciB3aGVuIG1vdmluZyBhIGRldmlj
ZSBiZXR3ZWVuIGRvbWFpbnMsIHRyYW5zbGF0aW9uIHNob3VsZApmaXJzdCBi
ZSBkaXNhYmxlZCwgYmVmb3JlIG90aGVyIGVudHJ5IGZpZWxkcyBnZXQgbW9k
aWZpZWQuIE5vdGUgaG93ZXZlcgp0aGF0IHRoZSAibW92aW5nIiBhc3BlY3Qg
ZG9lc24ndCBhcHBseSB0byB0aGUgaW50ZXJydXB0IHJlbWFwcGluZyBzaWRl
LAphcyBkb21haW4gc3BlY2lmaWNzIGFyZSBtYWludGFpbmVkIGluIHRoZSBJ
UlRFcyBoZXJlLCBub3QgdGhlIERURS4gV2UKYWxzbyBuZXZlciBkaXNhYmxl
IGludGVycnVwdCByZW1hcHBpbmcgb25jZSBpdCBnb3QgZW5hYmxlZCBmb3Ig
YSBkZXZpY2UKKHRoZSByZXNwZWN0aXZlIGFyZ3VtZW50IHBhc3NlZCBpcyBh
bHdheXMgdGhlIGltbXV0YWJsZSBpb21tdV9pbnRyZW1hcCkuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFu
dCA8cGF1bEB4ZW4ub3JnPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtMTQ3LDcgKzE0NywyMiBAQCB2b2lk
IGFtZF9pb21tdV9zZXRfcm9vdF9wYWdlX3RhYmxlKAogICAgIHUzMiAqZHRl
LCB1NjQgcm9vdF9wdHIsIHUxNiBkb21haW5faWQsIHU4IHBhZ2luZ19tb2Rl
LCB1OCB2YWxpZCkKIHsKICAgICB1NjQgYWRkcl9oaSwgYWRkcl9sbzsKLSAg
ICB1MzIgZW50cnk7CisgICAgdTMyIGVudHJ5LCBkdGUwID0gZHRlWzBdOwor
CisgICAgaWYgKCB2YWxpZCB8fAorICAgICAgICAgZ2V0X2ZpZWxkX2Zyb21f
cmVnX3UzMihkdGUwLCBJT01NVV9ERVZfVEFCTEVfVkFMSURfTUFTSywKKyAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVfREVWX1RBQkxF
X1ZBTElEX1NISUZUKSApCisgICAgeworICAgICAgICBzZXRfZmllbGRfaW5f
cmVnX3UzMihJT01NVV9DT05UUk9MX0RJU0FCTEVELCBkdGUwLAorICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9ERVZfVEFCTEVfVFJBTlNM
QVRJT05fVkFMSURfTUFTSywKKyAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgSU9NTVVfREVWX1RBQkxFX1RSQU5TTEFUSU9OX1ZBTElEX1NISUZULCAm
ZHRlMCk7CisgICAgICAgIHNldF9maWVsZF9pbl9yZWdfdTMyKElPTU1VX0NP
TlRST0xfRU5BQkxFRCwgZHRlMCwKKyAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgSU9NTVVfREVWX1RBQkxFX1ZBTElEX01BU0ssCisgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgIElPTU1VX0RFVl9UQUJMRV9WQUxJRF9TSElG
VCwgJmR0ZTApOworICAgICAgICBkdGVbMF0gPSBkdGUwOworICAgICAgICBz
bXBfd21iKCk7CisgICAgfQorCiAgICAgc2V0X2ZpZWxkX2luX3JlZ191MzIo
ZG9tYWluX2lkLCAwLAogICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1V
X0RFVl9UQUJMRV9ET01BSU5fSURfTUFTSywKICAgICAgICAgICAgICAgICAg
ICAgICAgICBJT01NVV9ERVZfVEFCTEVfRE9NQUlOX0lEX1NISUZULCAmZW50
cnkpOwpAQCAtMTY2LDggKzE4MSw5IEBAIHZvaWQgYW1kX2lvbW11X3NldF9y
b290X3BhZ2VfdGFibGUoCiAgICAgICAgICAgICAgICAgICAgICAgICAgSU9N
TVVfREVWX1RBQkxFX0lPX1JFQURfUEVSTUlTU0lPTl9NQVNLLAogICAgICAg
ICAgICAgICAgICAgICAgICAgIElPTU1VX0RFVl9UQUJMRV9JT19SRUFEX1BF
Uk1JU1NJT05fU0hJRlQsICZlbnRyeSk7CiAgICAgZHRlWzFdID0gZW50cnk7
CisgICAgc21wX3dtYigpOwogCi0gICAgc2V0X2ZpZWxkX2luX3JlZ191MzIo
KHUzMilhZGRyX2xvID4+IFBBR0VfU0hJRlQsIDAsCisgICAgc2V0X2ZpZWxk
X2luX3JlZ191MzIoKHUzMilhZGRyX2xvID4+IFBBR0VfU0hJRlQsIGR0ZTAs
CiAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVfREVWX1RBQkxFX1BB
R0VfVEFCTEVfUFRSX0xPV19NQVNLLAogICAgICAgICAgICAgICAgICAgICAg
ICAgIElPTU1VX0RFVl9UQUJMRV9QQUdFX1RBQkxFX1BUUl9MT1dfU0hJRlQs
ICZlbnRyeSk7CiAgICAgc2V0X2ZpZWxkX2luX3JlZ191MzIocGFnaW5nX21v
ZGUsIGVudHJ5LApAQCAtMTgwLDcgKzE5Niw3IEBAIHZvaWQgYW1kX2lvbW11
X3NldF9yb290X3BhZ2VfdGFibGUoCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgSU9NTVVfQ09OVFJPTF9ESVNBQkxFRCwgZW50cnksCiAgICAgICAgICAg
ICAgICAgICAgICAgICAgSU9NTVVfREVWX1RBQkxFX1ZBTElEX01BU0ssCiAg
ICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVfREVWX1RBQkxFX1ZBTElE
X1NISUZULCAmZW50cnkpOwotICAgIGR0ZVswXSA9IGVudHJ5OworICAgIHdy
aXRlX2F0b21pYygmZHRlWzBdLCBlbnRyeSk7CiB9CiAKIHZvaWQgaW9tbXVf
ZHRlX3NldF9pb3RsYih1MzIgKmR0ZSwgdTggaSkKQEAgLTIxMiw2ICsyMjgs
NyBAQCB2b2lkIF9faW5pdCBhbWRfaW9tbXVfc2V0X2ludHJlbWFwX3RhYmxl
CiAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9ERVZfVEFCTEVfSU5U
X0NPTlRST0xfTUFTSywKICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1V
X0RFVl9UQUJMRV9JTlRfQ09OVFJPTF9TSElGVCwgJmVudHJ5KTsKICAgICBk
dGVbNV0gPSBlbnRyeTsKKyAgICBzbXBfd21iKCk7CiAKICAgICBzZXRfZmll
bGRfaW5fcmVnX3UzMigodTMyKWFkZHJfbG8gPj4gNiwgMCwKICAgICAgICAg
ICAgICAgICAgICAgICAgIElPTU1VX0RFVl9UQUJMRV9JTlRfVEFCTEVfUFRS
X0xPV19NQVNLLApAQCAtMjI5LDcgKzI0Niw3IEBAIHZvaWQgX19pbml0IGFt
ZF9pb21tdV9zZXRfaW50cmVtYXBfdGFibGUKICAgICAgICAgICAgICAgICAg
ICAgICAgICBJT01NVV9DT05UUk9MX0RJU0FCTEVELCBlbnRyeSwKICAgICAg
ICAgICAgICAgICAgICAgICAgICBJT01NVV9ERVZfVEFCTEVfSU5UX1ZBTElE
X01BU0ssCiAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVfREVWX1RB
QkxFX0lOVF9WQUxJRF9TSElGVCwgJmVudHJ5KTsKLSAgICBkdGVbNF0gPSBl
bnRyeTsKKyAgICB3cml0ZV9hdG9taWMoJmR0ZVs0XSwgZW50cnkpOwogfQog
CiB2b2lkIF9faW5pdCBpb21tdV9kdGVfYWRkX2RldmljZV9lbnRyeSh1MzIg
KmR0ZSwgc3RydWN0IGl2cnNfbWFwcGluZ3MgKml2cnNfZGV2KQo=

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.12-1.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.12-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHVwZGF0ZSBsaXZlIFBURXMgYXRvbWljYWxseQoKVXBk
YXRpbmcgYSBsaXZlIFBURSB3b3JkIGJ5IHdvcmQgYWxsb3dzIHRoZSBJT01N
VSB0byBzZWUgYSBwYXJ0aWFsbHkKdXBkYXRlZCBlbnRyeS4gQ29uc3RydWN0
IHRoZSBuZXcgZW50cnkgZnVsbHkgaW4gYSBsb2NhbCB2YXJpYWJsZSBhbmQK
dGhlbiB3cml0ZSB0aGUgbmV3IGVudHJ5IGJ5IGEgc2luZ2xlIGluc24uCgpU
aGlzIGlzIHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEphbiBC
ZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwg
RHVycmFudCA8cGF1bEB4ZW4ub3JnPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFz
c3Rocm91Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bh
c3N0aHJvdWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtNDksNyArNDksNyBAQCBz
dGF0aWMgdW5zaWduZWQgaW50IGNsZWFyX2lvbW11X3B0ZV9wcmVzCiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1VX1BU
RV9QUkVTRU5UX1NISUZUKSA/CiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgIElPTU1VX0ZMVVNIRl9tb2RpZmllZCA6IDA7CiAK
LSAgICAqcHRlID0gMDsKKyAgICB3cml0ZV9hdG9taWMocHRlLCAwKTsKICAg
ICB1bm1hcF9kb21haW5fcGFnZSh0YWJsZSk7CiAKICAgICByZXR1cm4gZmx1
c2hfZmxhZ3M7CkBAIC02MCw3ICs2MCw3IEBAIHN0YXRpYyB1bnNpZ25lZCBp
bnQgc2V0X2lvbW11X3BkZV9wcmVzZW4KICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGludCBuZXh0X2xldmVs
LCBib29sIGl3LAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgYm9vbCBpcikKIHsKLSAgICB1aW50NjRfdCBtYWRkcl9uZXh0
OworICAgIHVpbnQ2NF90IG1hZGRyX25leHQsIGZ1bGw7CiAgICAgdWludDMy
X3QgYWRkcl9sbywgYWRkcl9oaSwgZW50cnk7CiAgICAgYm9vbCBvbGRfcHJl
c2VudDsKICAgICB1bnNpZ25lZCBpbnQgZmx1c2hfZmxhZ3MgPSBJT01NVV9G
TFVTSEZfYWRkZWQ7CkBAIC0xMTksNyArMTE5LDcgQEAgc3RhdGljIHVuc2ln
bmVkIGludCBzZXRfaW9tbXVfcGRlX3ByZXNlbgogICAgIGlmICggbmV4dF9s
ZXZlbCA9PSAwICkKICAgICAgICAgc2V0X2ZpZWxkX2luX3JlZ191MzIoSU9N
TVVfQ09OVFJPTF9FTkFCTEVELCBlbnRyeSwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgSU9NTVVfUFRFX0ZDX01BU0ssIElPTU1VX1BURV9GQ19T
SElGVCwgJmVudHJ5KTsKLSAgICBwZGVbMV0gPSBlbnRyeTsKKyAgICBmdWxs
ID0gKHVpbnQ2NF90KWVudHJ5IDw8IDMyOwogCiAgICAgLyogbWFyayBuZXh0
IGxldmVsIGFzICdwcmVzZW50JyAqLwogICAgIHNldF9maWVsZF9pbl9yZWdf
dTMyKGFkZHJfbG8gPj4gUEFHRV9TSElGVCwgMCwKQEAgLTEzMSw3ICsxMzEs
OSBAQCBzdGF0aWMgdW5zaWduZWQgaW50IHNldF9pb21tdV9wZGVfcHJlc2Vu
CiAgICAgc2V0X2ZpZWxkX2luX3JlZ191MzIoSU9NTVVfQ09OVFJPTF9FTkFC
TEVELCBlbnRyeSwKICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9Q
REVfUFJFU0VOVF9NQVNLLAogICAgICAgICAgICAgICAgICAgICAgICAgIElP
TU1VX1BERV9QUkVTRU5UX1NISUZULCAmZW50cnkpOwotICAgIHBkZVswXSA9
IGVudHJ5OworICAgIGZ1bGwgfD0gZW50cnk7CisKKyAgICB3cml0ZV9hdG9t
aWMoKHVpbnQ2NF90ICopcGRlLCBmdWxsKTsKIAogICAgIHJldHVybiBmbHVz
aF9mbGFnczsKIH0K

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.12-2.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.12-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGVuc3VyZSBzdWl0YWJsZSBvcmRlcmluZyBvZiBEVEUg
bW9kaWZpY2F0aW9ucwoKRE1BIGFuZCBpbnRlcnJ1cHQgdHJhbnNsYXRpb24g
c2hvdWxkIGJlIGVuYWJsZWQgb25seSBhZnRlciBvdGhlcgphcHBsaWNhYmxl
IERURSBmaWVsZHMgaGF2ZSBiZWVuIHdyaXR0ZW4uIFNpbWlsYXJseSB3aGVu
IGRpc2FibGluZwp0cmFuc2xhdGlvbiBvciB3aGVuIG1vdmluZyBhIGRldmlj
ZSBiZXR3ZWVuIGRvbWFpbnMsIHRyYW5zbGF0aW9uIHNob3VsZApmaXJzdCBi
ZSBkaXNhYmxlZCwgYmVmb3JlIG90aGVyIGVudHJ5IGZpZWxkcyBnZXQgbW9k
aWZpZWQuIE5vdGUgaG93ZXZlcgp0aGF0IHRoZSAibW92aW5nIiBhc3BlY3Qg
ZG9lc24ndCBhcHBseSB0byB0aGUgaW50ZXJydXB0IHJlbWFwcGluZyBzaWRl
LAphcyBkb21haW4gc3BlY2lmaWNzIGFyZSBtYWludGFpbmVkIGluIHRoZSBJ
UlRFcyBoZXJlLCBub3QgdGhlIERURS4gV2UKYWxzbyBuZXZlciBkaXNhYmxl
IGludGVycnVwdCByZW1hcHBpbmcgb25jZSBpdCBnb3QgZW5hYmxlZCBmb3Ig
YSBkZXZpY2UKKHRoZSByZXNwZWN0aXZlIGFyZ3VtZW50IHBhc3NlZCBpcyBh
bHdheXMgdGhlIGltbXV0YWJsZSBpb21tdV9pbnRyZW1hcCkuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFu
dCA8cGF1bEB4ZW4ub3JnPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtMTYyLDcgKzE2MiwyMiBAQCB2b2lk
IGFtZF9pb21tdV9zZXRfcm9vdF9wYWdlX3RhYmxlKHVpbnQzCiAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVpbnQxNl90IGRvbWFpbl9p
ZCwgdWludDhfdCBwYWdpbmdfbW9kZSwKICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgdWludDhfdCB2YWxpZCkKIHsKLSAgICB1aW50MzJf
dCBhZGRyX2hpLCBhZGRyX2xvLCBlbnRyeTsKKyAgICB1aW50MzJfdCBhZGRy
X2hpLCBhZGRyX2xvLCBlbnRyeSwgZHRlMCA9IGR0ZVswXTsKKworICAgIGlm
ICggdmFsaWQgfHwKKyAgICAgICAgIGdldF9maWVsZF9mcm9tX3JlZ191MzIo
ZHRlMCwgSU9NTVVfREVWX1RBQkxFX1ZBTElEX01BU0ssCisgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgIElPTU1VX0RFVl9UQUJMRV9WQUxJRF9T
SElGVCkgKQorICAgIHsKKyAgICAgICAgc2V0X2ZpZWxkX2luX3JlZ191MzIo
SU9NTVVfQ09OVFJPTF9ESVNBQkxFRCwgZHRlMCwKKyAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgSU9NTVVfREVWX1RBQkxFX1RSQU5TTEFUSU9OX1ZB
TElEX01BU0ssCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1V
X0RFVl9UQUJMRV9UUkFOU0xBVElPTl9WQUxJRF9TSElGVCwgJmR0ZTApOwor
ICAgICAgICBzZXRfZmllbGRfaW5fcmVnX3UzMihJT01NVV9DT05UUk9MX0VO
QUJMRUQsIGR0ZTAsCisgICAgICAgICAgICAgICAgICAgICAgICAgICAgIElP
TU1VX0RFVl9UQUJMRV9WQUxJRF9NQVNLLAorICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICBJT01NVV9ERVZfVEFCTEVfVkFMSURfU0hJRlQsICZkdGUw
KTsKKyAgICAgICAgZHRlWzBdID0gZHRlMDsKKyAgICAgICAgc21wX3dtYigp
OworICAgIH0KKwogICAgIHNldF9maWVsZF9pbl9yZWdfdTMyKGRvbWFpbl9p
ZCwgMCwKICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9ERVZfVEFC
TEVfRE9NQUlOX0lEX01BU0ssCiAgICAgICAgICAgICAgICAgICAgICAgICAg
SU9NTVVfREVWX1RBQkxFX0RPTUFJTl9JRF9TSElGVCwgJmVudHJ5KTsKQEAg
LTE4MSw4ICsxOTYsOSBAQCB2b2lkIGFtZF9pb21tdV9zZXRfcm9vdF9wYWdl
X3RhYmxlKHVpbnQzCiAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVf
REVWX1RBQkxFX0lPX1JFQURfUEVSTUlTU0lPTl9NQVNLLAogICAgICAgICAg
ICAgICAgICAgICAgICAgIElPTU1VX0RFVl9UQUJMRV9JT19SRUFEX1BFUk1J
U1NJT05fU0hJRlQsICZlbnRyeSk7CiAgICAgZHRlWzFdID0gZW50cnk7Cisg
ICAgc21wX3dtYigpOwogCi0gICAgc2V0X2ZpZWxkX2luX3JlZ191MzIoYWRk
cl9sbyA+PiBQQUdFX1NISUZULCAwLAorICAgIHNldF9maWVsZF9pbl9yZWdf
dTMyKGFkZHJfbG8gPj4gUEFHRV9TSElGVCwgZHRlMCwKICAgICAgICAgICAg
ICAgICAgICAgICAgICBJT01NVV9ERVZfVEFCTEVfUEFHRV9UQUJMRV9QVFJf
TE9XX01BU0ssCiAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVfREVW
X1RBQkxFX1BBR0VfVEFCTEVfUFRSX0xPV19TSElGVCwgJmVudHJ5KTsKICAg
ICBzZXRfZmllbGRfaW5fcmVnX3UzMihwYWdpbmdfbW9kZSwgZW50cnksCkBA
IC0xOTUsNyArMjExLDcgQEAgdm9pZCBhbWRfaW9tbXVfc2V0X3Jvb3RfcGFn
ZV90YWJsZSh1aW50MwogICAgICAgICAgICAgICAgICAgICAgICAgIElPTU1V
X0NPTlRST0xfRElTQUJMRUQsIGVudHJ5LAogICAgICAgICAgICAgICAgICAg
ICAgICAgIElPTU1VX0RFVl9UQUJMRV9WQUxJRF9NQVNLLAogICAgICAgICAg
ICAgICAgICAgICAgICAgIElPTU1VX0RFVl9UQUJMRV9WQUxJRF9TSElGVCwg
JmVudHJ5KTsKLSAgICBkdGVbMF0gPSBlbnRyeTsKKyAgICB3cml0ZV9hdG9t
aWMoJmR0ZVswXSwgZW50cnkpOwogfQogCiB2b2lkIGlvbW11X2R0ZV9zZXRf
aW90bGIodWludDMyX3QgKmR0ZSwgdWludDhfdCBpKQpAQCAtMjI2LDYgKzI0
Miw3IEBAIHZvaWQgX19pbml0IGFtZF9pb21tdV9zZXRfaW50cmVtYXBfdGFi
bGUKICAgICAgICAgICAgICAgICAgICAgICAgICBJT01NVV9ERVZfVEFCTEVf
SU5UX0NPTlRST0xfTUFTSywKICAgICAgICAgICAgICAgICAgICAgICAgICBJ
T01NVV9ERVZfVEFCTEVfSU5UX0NPTlRST0xfU0hJRlQsICZlbnRyeSk7CiAg
ICAgZHRlWzVdID0gZW50cnk7CisgICAgc21wX3dtYigpOwogCiAgICAgc2V0
X2ZpZWxkX2luX3JlZ191MzIoYWRkcl9sbyA+PiA2LCAwLAogICAgICAgICAg
ICAgICAgICAgICAgICAgIElPTU1VX0RFVl9UQUJMRV9JTlRfVEFCTEVfUFRS
X0xPV19NQVNLLApAQCAtMjQzLDcgKzI2MCw3IEBAIHZvaWQgX19pbml0IGFt
ZF9pb21tdV9zZXRfaW50cmVtYXBfdGFibGUKICAgICAgICAgICAgICAgICAg
ICAgICAgICBJT01NVV9DT05UUk9MX0RJU0FCTEVELCBlbnRyeSwKICAgICAg
ICAgICAgICAgICAgICAgICAgICBJT01NVV9ERVZfVEFCTEVfSU5UX1ZBTElE
X01BU0ssCiAgICAgICAgICAgICAgICAgICAgICAgICAgSU9NTVVfREVWX1RB
QkxFX0lOVF9WQUxJRF9TSElGVCwgJmVudHJ5KTsKLSAgICBkdGVbNF0gPSBl
bnRyeTsKKyAgICB3cml0ZV9hdG9taWMoJmR0ZVs0XSwgZW50cnkpOwogfQog
CiB2b2lkIF9faW5pdCBpb21tdV9kdGVfYWRkX2RldmljZV9lbnRyeSh1aW50
MzJfdCAqZHRlLAo=

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.13-1.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.13-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGNvbnZlcnQgYW1kX2lvbW11X3B0ZSBmcm9tIHN0cnVj
dCB0byB1bmlvbgoKVGhpcyBpcyB0byBhZGQgYSAicmF3IiBjb3VudGVycGFy
dCB0byB0aGUgYml0ZmllbGQgZXF1aXZhbGVudC4gVGFrZSB0aGUKb3Bwb3J0
dW5pdHkgYW5kCiAtIGNvbnZlcnQgZmllbGRzIHRvIGJvb2wgLyB1bnNpZ25l
ZCBpbnQsCiAtIGRyb3AgdGhlIG5hbWluZyBvZiB0aGUgcmVzZXJ2ZWQgZmll
bGQsCiAtIHNob3J0ZW4gdGhlIG5hbWVzIG9mIHRoZSBpZ25vcmVkIG9uZXMu
CgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmll
d2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4KCi0tLSBhL3hl
bi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdV9tYXAuYworKysgYi94
ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfbWFwLmMKQEAgLTM4
LDcgKzM4LDcgQEAgc3RhdGljIHVuc2lnbmVkIGludCBwZm5fdG9fcGRlX2lk
eCh1bnNpZwogc3RhdGljIHVuc2lnbmVkIGludCBjbGVhcl9pb21tdV9wdGVf
cHJlc2VudCh1bnNpZ25lZCBsb25nIGwxX21mbiwKICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgdW5zaWduZWQgbG9uZyBk
Zm4pCiB7Ci0gICAgc3RydWN0IGFtZF9pb21tdV9wdGUgKnRhYmxlLCAqcHRl
OworICAgIHVuaW9uIGFtZF9pb21tdV9wdGUgKnRhYmxlLCAqcHRlOwogICAg
IHVuc2lnbmVkIGludCBmbHVzaF9mbGFnczsKIAogICAgIHRhYmxlID0gbWFw
X2RvbWFpbl9wYWdlKF9tZm4obDFfbWZuKSk7CkBAIC01Miw3ICs1Miw3IEBA
IHN0YXRpYyB1bnNpZ25lZCBpbnQgY2xlYXJfaW9tbXVfcHRlX3ByZXMKICAg
ICByZXR1cm4gZmx1c2hfZmxhZ3M7CiB9CiAKLXN0YXRpYyB1bnNpZ25lZCBp
bnQgc2V0X2lvbW11X3BkZV9wcmVzZW50KHN0cnVjdCBhbWRfaW9tbXVfcHRl
ICpwdGUsCitzdGF0aWMgdW5zaWduZWQgaW50IHNldF9pb21tdV9wZGVfcHJl
c2VudCh1bmlvbiBhbWRfaW9tbXVfcHRlICpwdGUsCiAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBsb25nIG5l
eHRfbWZuLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgdW5zaWduZWQgaW50IG5leHRfbGV2ZWwsIGJvb2wgaXcsCiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBib29sIGly
KQpAQCAtODcsNyArODcsNyBAQCBzdGF0aWMgdW5zaWduZWQgaW50IHNldF9p
b21tdV9wdGVfcHJlc2VuCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICBpbnQgcGRlX2xldmVsLAogICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9vbCBpdywgYm9vbCBpcikK
IHsKLSAgICBzdHJ1Y3QgYW1kX2lvbW11X3B0ZSAqdGFibGUsICpwZGU7Cisg
ICAgdW5pb24gYW1kX2lvbW11X3B0ZSAqdGFibGUsICpwZGU7CiAgICAgdW5z
aWduZWQgaW50IGZsdXNoX2ZsYWdzOwogCiAgICAgdGFibGUgPSBtYXBfZG9t
YWluX3BhZ2UoX21mbihwdF9tZm4pKTsKQEAgLTE3OCw3ICsxNzgsNyBAQCB2
b2lkIGlvbW11X2R0ZV9zZXRfZ3Vlc3RfY3IzKHN0cnVjdCBhbWRfCiBzdGF0
aWMgaW50IGlvbW11X3BkZV9mcm9tX2RmbihzdHJ1Y3QgZG9tYWluICpkLCB1
bnNpZ25lZCBsb25nIGRmbiwKICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgIHVuc2lnbmVkIGxvbmcgcHRfbWZuW10sIGJvb2wgbWFwKQogewotICAg
IHN0cnVjdCBhbWRfaW9tbXVfcHRlICpwZGUsICpuZXh0X3RhYmxlX3ZhZGRy
OworICAgIHVuaW9uIGFtZF9pb21tdV9wdGUgKnBkZSwgKm5leHRfdGFibGVf
dmFkZHI7CiAgICAgdW5zaWduZWQgbG9uZyAgbmV4dF90YWJsZV9tZm47CiAg
ICAgdW5zaWduZWQgaW50IGxldmVsOwogICAgIHN0cnVjdCBwYWdlX2luZm8g
KnRhYmxlOwpAQCAtNDU4LDcgKzQ1OCw3IEBAIGludCBfX2luaXQgYW1kX2lv
bW11X3F1YXJhbnRpbmVfaW5pdChzdHIKICAgICB1bnNpZ25lZCBsb25nIGVu
ZF9nZm4gPQogICAgICAgICAxdWwgPDwgKERFRkFVTFRfRE9NQUlOX0FERFJF
U1NfV0lEVEggLSBQQUdFX1NISUZUKTsKICAgICB1bnNpZ25lZCBpbnQgbGV2
ZWwgPSBhbWRfaW9tbXVfZ2V0X3BhZ2luZ19tb2RlKGVuZF9nZm4pOwotICAg
IHN0cnVjdCBhbWRfaW9tbXVfcHRlICp0YWJsZTsKKyAgICB1bmlvbiBhbWRf
aW9tbXVfcHRlICp0YWJsZTsKIAogICAgIGlmICggaGQtPmFyY2gucm9vdF90
YWJsZSApCiAgICAgewpAQCAtNDg5LDcgKzQ4OSw3IEBAIGludCBfX2luaXQg
YW1kX2lvbW11X3F1YXJhbnRpbmVfaW5pdChzdHIKIAogICAgICAgICBmb3Ig
KCBpID0gMDsgaSA8IFBURV9QRVJfVEFCTEVfU0laRTsgaSsrICkKICAgICAg
ICAgewotICAgICAgICAgICAgc3RydWN0IGFtZF9pb21tdV9wdGUgKnBkZSA9
ICZ0YWJsZVtpXTsKKyAgICAgICAgICAgIHVuaW9uIGFtZF9pb21tdV9wdGUg
KnBkZSA9ICZ0YWJsZVtpXTsKIAogICAgICAgICAgICAgLyoKICAgICAgICAg
ICAgICAqIFBERXMgYXJlIGVzc2VudGlhbGx5IGEgc3Vic2V0IG9mIFBURXMs
IHNvIHRoaXMgZnVuY3Rpb24KLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL3BjaV9hbWRfaW9tbXUuYworKysgYi94ZW4vZHJpdmVycy9wYXNz
dGhyb3VnaC9hbWQvcGNpX2FtZF9pb21tdS5jCkBAIC0zOTAsNyArMzkwLDcg
QEAgc3RhdGljIHZvaWQgZGVhbGxvY2F0ZV9uZXh0X3BhZ2VfdGFibGUocwog
CiBzdGF0aWMgdm9pZCBkZWFsbG9jYXRlX3BhZ2VfdGFibGUoc3RydWN0IHBh
Z2VfaW5mbyAqcGcpCiB7Ci0gICAgc3RydWN0IGFtZF9pb21tdV9wdGUgKnRh
YmxlX3ZhZGRyOworICAgIHVuaW9uIGFtZF9pb21tdV9wdGUgKnRhYmxlX3Zh
ZGRyOwogICAgIHVuc2lnbmVkIGludCBpbmRleCwgbGV2ZWwgPSBQRk5fT1JE
RVIocGcpOwogCiAgICAgUEZOX09SREVSKHBnKSA9IDA7CkBAIC00MDUsNyAr
NDA1LDcgQEAgc3RhdGljIHZvaWQgZGVhbGxvY2F0ZV9wYWdlX3RhYmxlKHN0
cnVjdAogCiAgICAgZm9yICggaW5kZXggPSAwOyBpbmRleCA8IFBURV9QRVJf
VEFCTEVfU0laRTsgaW5kZXgrKyApCiAgICAgewotICAgICAgICBzdHJ1Y3Qg
YW1kX2lvbW11X3B0ZSAqcGRlID0gJnRhYmxlX3ZhZGRyW2luZGV4XTsKKyAg
ICAgICAgdW5pb24gYW1kX2lvbW11X3B0ZSAqcGRlID0gJnRhYmxlX3ZhZGRy
W2luZGV4XTsKIAogICAgICAgICBpZiAoIHBkZS0+bWZuICYmIHBkZS0+bmV4
dF9sZXZlbCAmJiBwZGUtPnByICkKICAgICAgICAgewpAQCAtNTU3LDcgKzU1
Nyw3IEBAIHN0YXRpYyB2b2lkIGFtZF9kdW1wX3AybV90YWJsZV9sZXZlbChz
dHIKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwYWRk
cl90IGdwYSwgaW50IGluZGVudCkKIHsKICAgICBwYWRkcl90IGFkZHJlc3M7
Ci0gICAgc3RydWN0IGFtZF9pb21tdV9wdGUgKnRhYmxlX3ZhZGRyOworICAg
IGNvbnN0IHVuaW9uIGFtZF9pb21tdV9wdGUgKnRhYmxlX3ZhZGRyOwogICAg
IGludCBpbmRleDsKIAogICAgIGlmICggbGV2ZWwgPCAxICkKQEAgLTU3Myw3
ICs1NzMsNyBAQCBzdGF0aWMgdm9pZCBhbWRfZHVtcF9wMm1fdGFibGVfbGV2
ZWwoc3RyCiAKICAgICBmb3IgKCBpbmRleCA9IDA7IGluZGV4IDwgUFRFX1BF
Ul9UQUJMRV9TSVpFOyBpbmRleCsrICkKICAgICB7Ci0gICAgICAgIHN0cnVj
dCBhbWRfaW9tbXVfcHRlICpwZGUgPSAmdGFibGVfdmFkZHJbaW5kZXhdOwor
ICAgICAgICBjb25zdCB1bmlvbiBhbWRfaW9tbXVfcHRlICpwZGUgPSAmdGFi
bGVfdmFkZHJbaW5kZXhdOwogCiAgICAgICAgIGlmICggIShpbmRleCAlIDIp
ICkKICAgICAgICAgICAgIHByb2Nlc3NfcGVuZGluZ19zb2Z0aXJxcygpOwot
LS0gYS94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS9zdm0vYW1kLWlvbW11LWRl
ZnMuaAorKysgYi94ZW4vaW5jbHVkZS9hc20teDg2L2h2bS9zdm0vYW1kLWlv
bW11LWRlZnMuaApAQCAtNDY1LDIwICs0NjUsMjMgQEAgdW5pb24gYW1kX2lv
bW11X3gyYXBpY19jb250cm9sIHsKICNkZWZpbmUgSU9NTVVfUEFHRV9UQUJM
RV9VMzJfUEVSX0VOVFJZCShJT01NVV9QQUdFX1RBQkxFX0VOVFJZX1NJWkUg
LyA0KQogI2RlZmluZSBJT01NVV9QQUdFX1RBQkxFX0FMSUdOTUVOVAk0MDk2
CiAKLXN0cnVjdCBhbWRfaW9tbXVfcHRlIHsKLSAgICB1aW50NjRfdCBwcjox
OwotICAgIHVpbnQ2NF90IGlnbm9yZWQwOjQ7Ci0gICAgdWludDY0X3QgYTox
OwotICAgIHVpbnQ2NF90IGQ6MTsKLSAgICB1aW50NjRfdCBpZ25vcmVkMToy
OwotICAgIHVpbnQ2NF90IG5leHRfbGV2ZWw6MzsKLSAgICB1aW50NjRfdCBt
Zm46NDA7Ci0gICAgdWludDY0X3QgcmVzZXJ2ZWQ6NzsKLSAgICB1aW50NjRf
dCB1OjE7Ci0gICAgdWludDY0X3QgZmM6MTsKLSAgICB1aW50NjRfdCBpcjox
OwotICAgIHVpbnQ2NF90IGl3OjE7Ci0gICAgdWludDY0X3QgaWdub3JlZDI6
MTsKK3VuaW9uIGFtZF9pb21tdV9wdGUgeworICAgIHVpbnQ2NF90IHJhdzsK
KyAgICBzdHJ1Y3QgeworICAgICAgICBib29sIHByOjE7CisgICAgICAgIHVu
c2lnbmVkIGludCBpZ24wOjQ7CisgICAgICAgIGJvb2wgYToxOworICAgICAg
ICBib29sIGQ6MTsKKyAgICAgICAgdW5zaWduZWQgaW50IGlnbjE6MjsKKyAg
ICAgICAgdW5zaWduZWQgaW50IG5leHRfbGV2ZWw6MzsKKyAgICAgICAgdWlu
dDY0X3QgbWZuOjQwOworICAgICAgICB1bnNpZ25lZCBpbnQgOjc7CisgICAg
ICAgIGJvb2wgdToxOworICAgICAgICBib29sIGZjOjE7CisgICAgICAgIGJv
b2wgaXI6MTsKKyAgICAgICAgYm9vbCBpdzoxOworICAgICAgICB1bnNpZ25l
ZCBpbnQgaWduMjoxOworICAgIH07CiB9OwogCiAvKiBQYWdpbmcgbW9kZXMg
Ki8K

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.13-2.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.13-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHVwZGF0ZSBsaXZlIFBURXMgYXRvbWljYWxseQoKVXBk
YXRpbmcgYSBsaXZlIFBURSBiaXRmaWVsZCBieSBiaXRmaWVsZCByaXNrcyB0
aGUgY29tcGlsZXIgcmUtb3JkZXJpbmcKdGhlIGluZGl2aWR1YWwgdXBkYXRl
cyBhcyB3ZWxsIGFzIHNwbGl0dGluZyBpbmRpdmlkdWFsIHVwZGF0ZXMgaW50
bwptdWx0aXBsZSBtZW1vcnkgd3JpdGVzLiBDb25zdHJ1Y3QgdGhlIG5ldyBl
bnRyeSBmdWxseSBpbiBhIGxvY2FsCnZhcmlhYmxlLCBkbyB0aGUgY2hlY2sg
dG8gZGV0ZXJtaW5lIHRoZSBmbHVzaGluZyBuZWVkcyBvbiB0aGUgdGh1cwpl
c3RhYmxpc2hlZCBuZXcgZW50cnksIGFuZCB0aGVuIHdyaXRlIHRoZSBuZXcg
ZW50cnkgYnkgYSBzaW5nbGUgaW5zbi4KClNpbWlsYXJseSB1c2luZyBtZW1z
ZXQoKSB0byBjbGVhciBhIFBURSBpcyB1bnNhZmUsIGFzIHRoZSBvcmRlciBv
Zgp3cml0ZXMgdGhlIGZ1bmN0aW9uIGRvZXMgaXMsIGF0IGxlYXN0IGluIHBy
aW5jaXBsZSwgdW5kZWZpbmVkLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDcu
CgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5j
b20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4K
Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdV9tYXAu
YworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfbWFw
LmMKQEAgLTQ1LDcgKzQ1LDcgQEAgc3RhdGljIHVuc2lnbmVkIGludCBjbGVh
cl9pb21tdV9wdGVfcHJlcwogICAgIHB0ZSA9ICZ0YWJsZVtwZm5fdG9fcGRl
X2lkeChkZm4sIDEpXTsKIAogICAgIGZsdXNoX2ZsYWdzID0gcHRlLT5wciA/
IElPTU1VX0ZMVVNIRl9tb2RpZmllZCA6IDA7Ci0gICAgbWVtc2V0KHB0ZSwg
MCwgc2l6ZW9mKCpwdGUpKTsKKyAgICB3cml0ZV9hdG9taWMoJnB0ZS0+cmF3
LCAwKTsKIAogICAgIHVubWFwX2RvbWFpbl9wYWdlKHRhYmxlKTsKIApAQCAt
NTcsMjYgKzU3LDMwIEBAIHN0YXRpYyB1bnNpZ25lZCBpbnQgc2V0X2lvbW11
X3BkZV9wcmVzZW4KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHVuc2lnbmVkIGludCBuZXh0X2xldmVsLCBib29sIGl3LAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9v
bCBpcikKIHsKKyAgICB1bmlvbiBhbWRfaW9tbXVfcHRlIG5ldyA9IHt9LCBv
bGQ7CiAgICAgdW5zaWduZWQgaW50IGZsdXNoX2ZsYWdzID0gSU9NTVVfRkxV
U0hGX2FkZGVkOwogCi0gICAgaWYgKCBwdGUtPnByICYmCi0gICAgICAgICAo
cHRlLT5tZm4gIT0gbmV4dF9tZm4gfHwKLSAgICAgICAgICBwdGUtPml3ICE9
IGl3IHx8Ci0gICAgICAgICAgcHRlLT5pciAhPSBpciB8fAotICAgICAgICAg
IHB0ZS0+bmV4dF9sZXZlbCAhPSBuZXh0X2xldmVsKSApCi0gICAgICAgICAg
ICBmbHVzaF9mbGFncyB8PSBJT01NVV9GTFVTSEZfbW9kaWZpZWQ7Ci0KICAg
ICAvKgogICAgICAqIEZDIGJpdCBzaG91bGQgYmUgZW5hYmxlZCBpbiBQVEUs
IHRoaXMgaGVscHMgdG8gc29sdmUgcG90ZW50aWFsCiAgICAgICogaXNzdWVz
IHdpdGggQVRTIGRldmljZXMKICAgICAgKi8KLSAgICBwdGUtPmZjID0gIW5l
eHRfbGV2ZWw7CisgICAgbmV3LmZjID0gIW5leHRfbGV2ZWw7CisKKyAgICBu
ZXcubWZuID0gbmV4dF9tZm47CisgICAgbmV3Lml3ID0gaXc7CisgICAgbmV3
LmlyID0gaXI7CisgICAgbmV3Lm5leHRfbGV2ZWwgPSBuZXh0X2xldmVsOwor
ICAgIG5ldy5wciA9IHRydWU7CisKKyAgICBvbGQucmF3ID0gcmVhZF9hdG9t
aWMoJnB0ZS0+cmF3KTsKKyAgICBvbGQuaWduMCA9IDA7CisgICAgb2xkLmln
bjEgPSAwOworICAgIG9sZC5pZ24yID0gMDsKKworICAgIGlmICggb2xkLnBy
ICYmIG9sZC5yYXcgIT0gbmV3LnJhdyApCisgICAgICAgIGZsdXNoX2ZsYWdz
IHw9IElPTU1VX0ZMVVNIRl9tb2RpZmllZDsKIAotICAgIHB0ZS0+bWZuID0g
bmV4dF9tZm47Ci0gICAgcHRlLT5pdyA9IGl3OwotICAgIHB0ZS0+aXIgPSBp
cjsKLSAgICBwdGUtPm5leHRfbGV2ZWwgPSBuZXh0X2xldmVsOwotICAgIHB0
ZS0+cHIgPSAxOworICAgIHdyaXRlX2F0b21pYygmcHRlLT5yYXcsIG5ldy5y
YXcpOwogCiAgICAgcmV0dXJuIGZsdXNoX2ZsYWdzOwogfQo=

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.13-3.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.13-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGVuc3VyZSBzdWl0YWJsZSBvcmRlcmluZyBvZiBEVEUg
bW9kaWZpY2F0aW9ucwoKRE1BIGFuZCBpbnRlcnJ1cHQgdHJhbnNsYXRpb24g
c2hvdWxkIGJlIGVuYWJsZWQgb25seSBhZnRlciBvdGhlcgphcHBsaWNhYmxl
IERURSBmaWVsZHMgaGF2ZSBiZWVuIHdyaXR0ZW4uIFNpbWlsYXJseSB3aGVu
IGRpc2FibGluZwp0cmFuc2xhdGlvbiBvciB3aGVuIG1vdmluZyBhIGRldmlj
ZSBiZXR3ZWVuIGRvbWFpbnMsIHRyYW5zbGF0aW9uIHNob3VsZApmaXJzdCBi
ZSBkaXNhYmxlZCwgYmVmb3JlIG90aGVyIGVudHJ5IGZpZWxkcyBnZXQgbW9k
aWZpZWQuIE5vdGUgaG93ZXZlcgp0aGF0IHRoZSAibW92aW5nIiBhc3BlY3Qg
ZG9lc24ndCBhcHBseSB0byB0aGUgaW50ZXJydXB0IHJlbWFwcGluZyBzaWRl
LAphcyBkb21haW4gc3BlY2lmaWNzIGFyZSBtYWludGFpbmVkIGluIHRoZSBJ
UlRFcyBoZXJlLCBub3QgdGhlIERURS4gV2UKYWxzbyBuZXZlciBkaXNhYmxl
IGludGVycnVwdCByZW1hcHBpbmcgb25jZSBpdCBnb3QgZW5hYmxlZCBmb3Ig
YSBkZXZpY2UKKHRoZSByZXNwZWN0aXZlIGFyZ3VtZW50IHBhc3NlZCBpcyBh
bHdheXMgdGhlIGltbXV0YWJsZSBpb21tdV9pbnRyZW1hcCkuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFu
dCA8cGF1bEB4ZW4ub3JnPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtMTA3LDExICsxMDcsMTggQEAgdm9p
ZCBhbWRfaW9tbXVfc2V0X3Jvb3RfcGFnZV90YWJsZShzdHJ1YwogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1aW50NjRfdCByb290X3B0
ciwgdWludDE2X3QgZG9tYWluX2lkLAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICB1aW50OF90IHBhZ2luZ19tb2RlLCBib29sIHZhbGlk
KQogeworICAgIGlmICggdmFsaWQgfHwgZHRlLT52ICkKKyAgICB7CisgICAg
ICAgIGR0ZS0+dHYgPSBmYWxzZTsKKyAgICAgICAgZHRlLT52ID0gdHJ1ZTsK
KyAgICAgICAgc21wX3dtYigpOworICAgIH0KICAgICBkdGUtPmRvbWFpbl9p
ZCA9IGRvbWFpbl9pZDsKICAgICBkdGUtPnB0X3Jvb3QgPSBwYWRkcl90b19w
Zm4ocm9vdF9wdHIpOwogICAgIGR0ZS0+aXcgPSB0cnVlOwogICAgIGR0ZS0+
aXIgPSB0cnVlOwogICAgIGR0ZS0+cGFnaW5nX21vZGUgPSBwYWdpbmdfbW9k
ZTsKKyAgICBzbXBfd21iKCk7CiAgICAgZHRlLT50diA9IHRydWU7CiAgICAg
ZHRlLT52ID0gdmFsaWQ7CiB9CkBAIC0xMzQsNiArMTQxLDcgQEAgdm9pZCBh
bWRfaW9tbXVfc2V0X2ludHJlbWFwX3RhYmxlKAogICAgIH0KIAogICAgIGR0
ZS0+aWcgPSBmYWxzZTsgLyogdW5tYXBwZWQgaW50ZXJydXB0cyByZXN1bHQg
aW4gaS9vIHBhZ2UgZmF1bHRzICovCisgICAgc21wX3dtYigpOwogICAgIGR0
ZS0+aXYgPSB2YWxpZDsKIH0KIAotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC9hbWQvcGNpX2FtZF9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bh
c3N0aHJvdWdoL2FtZC9wY2lfYW1kX2lvbW11LmMKQEAgLTEyMCw3ICsxMjAs
MTAgQEAgc3RhdGljIHZvaWQgYW1kX2lvbW11X3NldHVwX2RvbWFpbl9kZXZp
YwogICAgICAgICAvKiBVbmRvIHdoYXQgYW1kX2lvbW11X2Rpc2FibGVfZG9t
YWluX2RldmljZSgpIG1heSBoYXZlIGRvbmUuICovCiAgICAgICAgIGl2cnNf
ZGV2ID0gJmdldF9pdnJzX21hcHBpbmdzKGlvbW11LT5zZWcpW3JlcV9pZF07
CiAgICAgICAgIGlmICggZHRlLT5pdF9yb290ICkKKyAgICAgICAgewogICAg
ICAgICAgICAgZHRlLT5pbnRfY3RsID0gSU9NTVVfREVWX1RBQkxFX0lOVF9D
T05UUk9MX1RSQU5TTEFURUQ7CisgICAgICAgICAgICBzbXBfd21iKCk7Cisg
ICAgICAgIH0KICAgICAgICAgZHRlLT5pdiA9IGlvbW11X2ludHJlbWFwOwog
ICAgICAgICBkdGUtPmV4ID0gaXZyc19kZXYtPmR0ZV9hbGxvd19leGNsdXNp
b247CiAgICAgICAgIGR0ZS0+c3lzX21ndCA9IE1BU0tfRVhUUihpdnJzX2Rl
di0+ZGV2aWNlX2ZsYWdzLCBBQ1BJX0lWSERfU1lTVEVNX01HTVQpOwo=

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.14-1.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.14-1.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGNvbnZlcnQgYW1kX2lvbW11X3B0ZSBmcm9tIHN0cnVj
dCB0byB1bmlvbgoKVGhpcyBpcyB0byBhZGQgYSAicmF3IiBjb3VudGVycGFy
dCB0byB0aGUgYml0ZmllbGQgZXF1aXZhbGVudC4gVGFrZSB0aGUKb3Bwb3J0
dW5pdHkgYW5kCiAtIGNvbnZlcnQgZmllbGRzIHRvIGJvb2wgLyB1bnNpZ25l
ZCBpbnQsCiAtIGRyb3AgdGhlIG5hbWluZyBvZiB0aGUgcmVzZXJ2ZWQgZmll
bGQsCiAtIHNob3J0ZW4gdGhlIG5hbWVzIG9mIHRoZSBpZ25vcmVkIG9uZXMu
CgpUaGlzIGlzIHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEph
biBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IEFu
ZHJldyBDb29wZXIgPGFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+ClJldmll
d2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4KCi0tLSBhL3hl
bi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdS1kZWZzLmgKKysrIGIv
eGVuL2RyaXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11LWRlZnMuaApAQCAt
NDUxLDIwICs0NTEsMjMgQEAgdW5pb24gYW1kX2lvbW11X3gyYXBpY19jb250
cm9sIHsKICNkZWZpbmUgSU9NTVVfUEFHRV9UQUJMRV9VMzJfUEVSX0VOVFJZ
CShJT01NVV9QQUdFX1RBQkxFX0VOVFJZX1NJWkUgLyA0KQogI2RlZmluZSBJ
T01NVV9QQUdFX1RBQkxFX0FMSUdOTUVOVAk0MDk2CiAKLXN0cnVjdCBhbWRf
aW9tbXVfcHRlIHsKLSAgICB1aW50NjRfdCBwcjoxOwotICAgIHVpbnQ2NF90
IGlnbm9yZWQwOjQ7Ci0gICAgdWludDY0X3QgYToxOwotICAgIHVpbnQ2NF90
IGQ6MTsKLSAgICB1aW50NjRfdCBpZ25vcmVkMToyOwotICAgIHVpbnQ2NF90
IG5leHRfbGV2ZWw6MzsKLSAgICB1aW50NjRfdCBtZm46NDA7Ci0gICAgdWlu
dDY0X3QgcmVzZXJ2ZWQ6NzsKLSAgICB1aW50NjRfdCB1OjE7Ci0gICAgdWlu
dDY0X3QgZmM6MTsKLSAgICB1aW50NjRfdCBpcjoxOwotICAgIHVpbnQ2NF90
IGl3OjE7Ci0gICAgdWludDY0X3QgaWdub3JlZDI6MTsKK3VuaW9uIGFtZF9p
b21tdV9wdGUgeworICAgIHVpbnQ2NF90IHJhdzsKKyAgICBzdHJ1Y3Qgewor
ICAgICAgICBib29sIHByOjE7CisgICAgICAgIHVuc2lnbmVkIGludCBpZ24w
OjQ7CisgICAgICAgIGJvb2wgYToxOworICAgICAgICBib29sIGQ6MTsKKyAg
ICAgICAgdW5zaWduZWQgaW50IGlnbjE6MjsKKyAgICAgICAgdW5zaWduZWQg
aW50IG5leHRfbGV2ZWw6MzsKKyAgICAgICAgdWludDY0X3QgbWZuOjQwOwor
ICAgICAgICB1bnNpZ25lZCBpbnQgOjc7CisgICAgICAgIGJvb2wgdToxOwor
ICAgICAgICBib29sIGZjOjE7CisgICAgICAgIGJvb2wgaXI6MTsKKyAgICAg
ICAgYm9vbCBpdzoxOworICAgICAgICB1bnNpZ25lZCBpbnQgaWduMjoxOwor
ICAgIH07CiB9OwogCiAvKiBQYWdpbmcgbW9kZXMgKi8KLS0tIGEveGVuL2Ry
aXZlcnMvcGFzc3Rocm91Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9k
cml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtMzQsNyAr
MzQsNyBAQCBzdGF0aWMgdW5zaWduZWQgaW50IHBmbl90b19wZGVfaWR4KHVu
c2lnCiBzdGF0aWMgdW5zaWduZWQgaW50IGNsZWFyX2lvbW11X3B0ZV9wcmVz
ZW50KHVuc2lnbmVkIGxvbmcgbDFfbWZuLAogICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICB1bnNpZ25lZCBsb25nIGRmbikK
IHsKLSAgICBzdHJ1Y3QgYW1kX2lvbW11X3B0ZSAqdGFibGUsICpwdGU7Cisg
ICAgdW5pb24gYW1kX2lvbW11X3B0ZSAqdGFibGUsICpwdGU7CiAgICAgdW5z
aWduZWQgaW50IGZsdXNoX2ZsYWdzOwogCiAgICAgdGFibGUgPSBtYXBfZG9t
YWluX3BhZ2UoX21mbihsMV9tZm4pKTsKQEAgLTQ4LDcgKzQ4LDcgQEAgc3Rh
dGljIHVuc2lnbmVkIGludCBjbGVhcl9pb21tdV9wdGVfcHJlcwogICAgIHJl
dHVybiBmbHVzaF9mbGFnczsKIH0KIAotc3RhdGljIHVuc2lnbmVkIGludCBz
ZXRfaW9tbXVfcGRlX3ByZXNlbnQoc3RydWN0IGFtZF9pb21tdV9wdGUgKnB0
ZSwKK3N0YXRpYyB1bnNpZ25lZCBpbnQgc2V0X2lvbW11X3BkZV9wcmVzZW50
KHVuaW9uIGFtZF9pb21tdV9wdGUgKnB0ZSwKICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVkIGxvbmcgbmV4dF9t
Zm4sCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICB1bnNpZ25lZCBpbnQgbmV4dF9sZXZlbCwgYm9vbCBpdywKICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIGJvb2wgaXIpCkBA
IC04Myw3ICs4Myw3IEBAIHN0YXRpYyB1bnNpZ25lZCBpbnQgc2V0X2lvbW11
X3B0ZV9wcmVzZW4KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIGludCBwZGVfbGV2ZWwsCiAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBib29sIGl3LCBib29sIGlyKQogewot
ICAgIHN0cnVjdCBhbWRfaW9tbXVfcHRlICp0YWJsZSwgKnBkZTsKKyAgICB1
bmlvbiBhbWRfaW9tbXVfcHRlICp0YWJsZSwgKnBkZTsKICAgICB1bnNpZ25l
ZCBpbnQgZmx1c2hfZmxhZ3M7CiAKICAgICB0YWJsZSA9IG1hcF9kb21haW5f
cGFnZShfbWZuKHB0X21mbikpOwpAQCAtMTc0LDcgKzE3NCw3IEBAIHZvaWQg
aW9tbXVfZHRlX3NldF9ndWVzdF9jcjMoc3RydWN0IGFtZF8KIHN0YXRpYyBp
bnQgaW9tbXVfcGRlX2Zyb21fZGZuKHN0cnVjdCBkb21haW4gKmQsIHVuc2ln
bmVkIGxvbmcgZGZuLAogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
dW5zaWduZWQgbG9uZyBwdF9tZm5bXSwgYm9vbCBtYXApCiB7Ci0gICAgc3Ry
dWN0IGFtZF9pb21tdV9wdGUgKnBkZSwgKm5leHRfdGFibGVfdmFkZHI7Cisg
ICAgdW5pb24gYW1kX2lvbW11X3B0ZSAqcGRlLCAqbmV4dF90YWJsZV92YWRk
cjsKICAgICB1bnNpZ25lZCBsb25nICBuZXh0X3RhYmxlX21mbjsKICAgICB1
bnNpZ25lZCBpbnQgbGV2ZWw7CiAgICAgc3RydWN0IHBhZ2VfaW5mbyAqdGFi
bGU7CkBAIC00NDgsNyArNDQ4LDcgQEAgaW50IF9faW5pdCBhbWRfaW9tbXVf
cXVhcmFudGluZV9pbml0KHN0cgogICAgIHVuc2lnbmVkIGxvbmcgZW5kX2dm
biA9CiAgICAgICAgIDF1bCA8PCAoREVGQVVMVF9ET01BSU5fQUREUkVTU19X
SURUSCAtIFBBR0VfU0hJRlQpOwogICAgIHVuc2lnbmVkIGludCBsZXZlbCA9
IGFtZF9pb21tdV9nZXRfcGFnaW5nX21vZGUoZW5kX2dmbik7Ci0gICAgc3Ry
dWN0IGFtZF9pb21tdV9wdGUgKnRhYmxlOworICAgIHVuaW9uIGFtZF9pb21t
dV9wdGUgKnRhYmxlOwogCiAgICAgaWYgKCBoZC0+YXJjaC5yb290X3RhYmxl
ICkKICAgICB7CkBAIC00NzksNyArNDc5LDcgQEAgaW50IF9faW5pdCBhbWRf
aW9tbXVfcXVhcmFudGluZV9pbml0KHN0cgogCiAgICAgICAgIGZvciAoIGkg
PSAwOyBpIDwgUFRFX1BFUl9UQUJMRV9TSVpFOyBpKysgKQogICAgICAgICB7
Ci0gICAgICAgICAgICBzdHJ1Y3QgYW1kX2lvbW11X3B0ZSAqcGRlID0gJnRh
YmxlW2ldOworICAgICAgICAgICAgdW5pb24gYW1kX2lvbW11X3B0ZSAqcGRl
ID0gJnRhYmxlW2ldOwogCiAgICAgICAgICAgICAvKgogICAgICAgICAgICAg
ICogUERFcyBhcmUgZXNzZW50aWFsbHkgYSBzdWJzZXQgb2YgUFRFcywgc28g
dGhpcyBmdW5jdGlvbgotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9h
bWQvcGNpX2FtZF9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9wY2lfYW1kX2lvbW11LmMKQEAgLTM4Nyw3ICszODcsNyBAQCBz
dGF0aWMgdm9pZCBkZWFsbG9jYXRlX25leHRfcGFnZV90YWJsZShzCiAKIHN0
YXRpYyB2b2lkIGRlYWxsb2NhdGVfcGFnZV90YWJsZShzdHJ1Y3QgcGFnZV9p
bmZvICpwZykKIHsKLSAgICBzdHJ1Y3QgYW1kX2lvbW11X3B0ZSAqdGFibGVf
dmFkZHI7CisgICAgdW5pb24gYW1kX2lvbW11X3B0ZSAqdGFibGVfdmFkZHI7
CiAgICAgdW5zaWduZWQgaW50IGluZGV4LCBsZXZlbCA9IFBGTl9PUkRFUihw
Zyk7CiAKICAgICBQRk5fT1JERVIocGcpID0gMDsKQEAgLTQwMiw3ICs0MDIs
NyBAQCBzdGF0aWMgdm9pZCBkZWFsbG9jYXRlX3BhZ2VfdGFibGUoc3RydWN0
CiAKICAgICBmb3IgKCBpbmRleCA9IDA7IGluZGV4IDwgUFRFX1BFUl9UQUJM
RV9TSVpFOyBpbmRleCsrICkKICAgICB7Ci0gICAgICAgIHN0cnVjdCBhbWRf
aW9tbXVfcHRlICpwZGUgPSAmdGFibGVfdmFkZHJbaW5kZXhdOworICAgICAg
ICB1bmlvbiBhbWRfaW9tbXVfcHRlICpwZGUgPSAmdGFibGVfdmFkZHJbaW5k
ZXhdOwogCiAgICAgICAgIGlmICggcGRlLT5tZm4gJiYgcGRlLT5uZXh0X2xl
dmVsICYmIHBkZS0+cHIgKQogICAgICAgICB7CkBAIC01NTQsNyArNTU0LDcg
QEAgc3RhdGljIHZvaWQgYW1kX2R1bXBfcDJtX3RhYmxlX2xldmVsKHN0cgog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhZGRyX3Qg
Z3BhLCBpbnQgaW5kZW50KQogewogICAgIHBhZGRyX3QgYWRkcmVzczsKLSAg
ICBzdHJ1Y3QgYW1kX2lvbW11X3B0ZSAqdGFibGVfdmFkZHI7CisgICAgY29u
c3QgdW5pb24gYW1kX2lvbW11X3B0ZSAqdGFibGVfdmFkZHI7CiAgICAgaW50
IGluZGV4OwogCiAgICAgaWYgKCBsZXZlbCA8IDEgKQpAQCAtNTcwLDcgKzU3
MCw3IEBAIHN0YXRpYyB2b2lkIGFtZF9kdW1wX3AybV90YWJsZV9sZXZlbChz
dHIKIAogICAgIGZvciAoIGluZGV4ID0gMDsgaW5kZXggPCBQVEVfUEVSX1RB
QkxFX1NJWkU7IGluZGV4KysgKQogICAgIHsKLSAgICAgICAgc3RydWN0IGFt
ZF9pb21tdV9wdGUgKnBkZSA9ICZ0YWJsZV92YWRkcltpbmRleF07CisgICAg
ICAgIGNvbnN0IHVuaW9uIGFtZF9pb21tdV9wdGUgKnBkZSA9ICZ0YWJsZV92
YWRkcltpbmRleF07CiAKICAgICAgICAgaWYgKCAhKGluZGV4ICUgMikgKQog
ICAgICAgICAgICAgcHJvY2Vzc19wZW5kaW5nX3NvZnRpcnFzKCk7Cg==

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.14-2.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.14-2.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IHVwZGF0ZSBsaXZlIFBURXMgYXRvbWljYWxseQoKVXBk
YXRpbmcgYSBsaXZlIFBURSBiaXRmaWVsZCBieSBiaXRmaWVsZCByaXNrcyB0
aGUgY29tcGlsZXIgcmUtb3JkZXJpbmcKdGhlIGluZGl2aWR1YWwgdXBkYXRl
cyBhcyB3ZWxsIGFzIHNwbGl0dGluZyBpbmRpdmlkdWFsIHVwZGF0ZXMgaW50
bwptdWx0aXBsZSBtZW1vcnkgd3JpdGVzLiBDb25zdHJ1Y3QgdGhlIG5ldyBl
bnRyeSBmdWxseSBpbiBhIGxvY2FsCnZhcmlhYmxlLCBkbyB0aGUgY2hlY2sg
dG8gZGV0ZXJtaW5lIHRoZSBmbHVzaGluZyBuZWVkcyBvbiB0aGUgdGh1cwpl
c3RhYmxpc2hlZCBuZXcgZW50cnksIGFuZCB0aGVuIHdyaXRlIHRoZSBuZXcg
ZW50cnkgYnkgYSBzaW5nbGUgaW5zbi4KClNpbWlsYXJseSB1c2luZyBtZW1z
ZXQoKSB0byBjbGVhciBhIFBURSBpcyB1bnNhZmUsIGFzIHRoZSBvcmRlciBv
Zgp3cml0ZXMgdGhlIGZ1bmN0aW9uIGRvZXMgaXMsIGF0IGxlYXN0IGluIHBy
aW5jaXBsZSwgdW5kZWZpbmVkLgoKVGhpcyBpcyBwYXJ0IG9mIFhTQS0zNDcu
CgpTaWduZWQtb2ZmLWJ5OiBKYW4gQmV1bGljaCA8amJldWxpY2hAc3VzZS5j
b20+ClJldmlld2VkLWJ5OiBQYXVsIER1cnJhbnQgPHBhdWxAeGVuLm9yZz4K
Ci0tLSBhL3hlbi9kcml2ZXJzL3Bhc3N0aHJvdWdoL2FtZC9pb21tdV9tYXAu
YworKysgYi94ZW4vZHJpdmVycy9wYXNzdGhyb3VnaC9hbWQvaW9tbXVfbWFw
LmMKQEAgLTQxLDcgKzQxLDcgQEAgc3RhdGljIHVuc2lnbmVkIGludCBjbGVh
cl9pb21tdV9wdGVfcHJlcwogICAgIHB0ZSA9ICZ0YWJsZVtwZm5fdG9fcGRl
X2lkeChkZm4sIDEpXTsKIAogICAgIGZsdXNoX2ZsYWdzID0gcHRlLT5wciA/
IElPTU1VX0ZMVVNIRl9tb2RpZmllZCA6IDA7Ci0gICAgbWVtc2V0KHB0ZSwg
MCwgc2l6ZW9mKCpwdGUpKTsKKyAgICB3cml0ZV9hdG9taWMoJnB0ZS0+cmF3
LCAwKTsKIAogICAgIHVubWFwX2RvbWFpbl9wYWdlKHRhYmxlKTsKIApAQCAt
NTMsMjYgKzUzLDMwIEBAIHN0YXRpYyB1bnNpZ25lZCBpbnQgc2V0X2lvbW11
X3BkZV9wcmVzZW4KICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgIHVuc2lnbmVkIGludCBuZXh0X2xldmVsLCBib29sIGl3LAog
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgYm9v
bCBpcikKIHsKKyAgICB1bmlvbiBhbWRfaW9tbXVfcHRlIG5ldyA9IHt9LCBv
bGQ7CiAgICAgdW5zaWduZWQgaW50IGZsdXNoX2ZsYWdzID0gSU9NTVVfRkxV
U0hGX2FkZGVkOwogCi0gICAgaWYgKCBwdGUtPnByICYmCi0gICAgICAgICAo
cHRlLT5tZm4gIT0gbmV4dF9tZm4gfHwKLSAgICAgICAgICBwdGUtPml3ICE9
IGl3IHx8Ci0gICAgICAgICAgcHRlLT5pciAhPSBpciB8fAotICAgICAgICAg
IHB0ZS0+bmV4dF9sZXZlbCAhPSBuZXh0X2xldmVsKSApCi0gICAgICAgICAg
ICBmbHVzaF9mbGFncyB8PSBJT01NVV9GTFVTSEZfbW9kaWZpZWQ7Ci0KICAg
ICAvKgogICAgICAqIEZDIGJpdCBzaG91bGQgYmUgZW5hYmxlZCBpbiBQVEUs
IHRoaXMgaGVscHMgdG8gc29sdmUgcG90ZW50aWFsCiAgICAgICogaXNzdWVz
IHdpdGggQVRTIGRldmljZXMKICAgICAgKi8KLSAgICBwdGUtPmZjID0gIW5l
eHRfbGV2ZWw7CisgICAgbmV3LmZjID0gIW5leHRfbGV2ZWw7CisKKyAgICBu
ZXcubWZuID0gbmV4dF9tZm47CisgICAgbmV3Lml3ID0gaXc7CisgICAgbmV3
LmlyID0gaXI7CisgICAgbmV3Lm5leHRfbGV2ZWwgPSBuZXh0X2xldmVsOwor
ICAgIG5ldy5wciA9IHRydWU7CisKKyAgICBvbGQucmF3ID0gcmVhZF9hdG9t
aWMoJnB0ZS0+cmF3KTsKKyAgICBvbGQuaWduMCA9IDA7CisgICAgb2xkLmln
bjEgPSAwOworICAgIG9sZC5pZ24yID0gMDsKKworICAgIGlmICggb2xkLnBy
ICYmIG9sZC5yYXcgIT0gbmV3LnJhdyApCisgICAgICAgIGZsdXNoX2ZsYWdz
IHw9IElPTU1VX0ZMVVNIRl9tb2RpZmllZDsKIAotICAgIHB0ZS0+bWZuID0g
bmV4dF9tZm47Ci0gICAgcHRlLT5pdyA9IGl3OwotICAgIHB0ZS0+aXIgPSBp
cjsKLSAgICBwdGUtPm5leHRfbGV2ZWwgPSBuZXh0X2xldmVsOwotICAgIHB0
ZS0+cHIgPSAxOworICAgIHdyaXRlX2F0b21pYygmcHRlLT5yYXcsIG5ldy5y
YXcpOwogCiAgICAgcmV0dXJuIGZsdXNoX2ZsYWdzOwogfQo=

--=separator
Content-Type: application/octet-stream; name="xsa347/xsa347-4.14-3.patch"
Content-Disposition: attachment; filename="xsa347/xsa347-4.14-3.patch"
Content-Transfer-Encoding: base64

RnJvbTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgpTdWJqZWN0
OiBBTUQvSU9NTVU6IGVuc3VyZSBzdWl0YWJsZSBvcmRlcmluZyBvZiBEVEUg
bW9kaWZpY2F0aW9ucwoKRE1BIGFuZCBpbnRlcnJ1cHQgdHJhbnNsYXRpb24g
c2hvdWxkIGJlIGVuYWJsZWQgb25seSBhZnRlciBvdGhlcgphcHBsaWNhYmxl
IERURSBmaWVsZHMgaGF2ZSBiZWVuIHdyaXR0ZW4uIFNpbWlsYXJseSB3aGVu
IGRpc2FibGluZwp0cmFuc2xhdGlvbiBvciB3aGVuIG1vdmluZyBhIGRldmlj
ZSBiZXR3ZWVuIGRvbWFpbnMsIHRyYW5zbGF0aW9uIHNob3VsZApmaXJzdCBi
ZSBkaXNhYmxlZCwgYmVmb3JlIG90aGVyIGVudHJ5IGZpZWxkcyBnZXQgbW9k
aWZpZWQuIE5vdGUgaG93ZXZlcgp0aGF0IHRoZSAibW92aW5nIiBhc3BlY3Qg
ZG9lc24ndCBhcHBseSB0byB0aGUgaW50ZXJydXB0IHJlbWFwcGluZyBzaWRl
LAphcyBkb21haW4gc3BlY2lmaWNzIGFyZSBtYWludGFpbmVkIGluIHRoZSBJ
UlRFcyBoZXJlLCBub3QgdGhlIERURS4gV2UKYWxzbyBuZXZlciBkaXNhYmxl
IGludGVycnVwdCByZW1hcHBpbmcgb25jZSBpdCBnb3QgZW5hYmxlZCBmb3Ig
YSBkZXZpY2UKKHRoZSByZXNwZWN0aXZlIGFyZ3VtZW50IHBhc3NlZCBpcyBh
bHdheXMgdGhlIGltbXV0YWJsZSBpb21tdV9pbnRyZW1hcCkuCgpUaGlzIGlz
IHBhcnQgb2YgWFNBLTM0Ny4KClNpZ25lZC1vZmYtYnk6IEphbiBCZXVsaWNo
IDxqYmV1bGljaEBzdXNlLmNvbT4KUmV2aWV3ZWQtYnk6IFBhdWwgRHVycmFu
dCA8cGF1bEB4ZW4ub3JnPgoKLS0tIGEveGVuL2RyaXZlcnMvcGFzc3Rocm91
Z2gvYW1kL2lvbW11X21hcC5jCisrKyBiL3hlbi9kcml2ZXJzL3Bhc3N0aHJv
dWdoL2FtZC9pb21tdV9tYXAuYwpAQCAtMTAzLDExICsxMDMsMTggQEAgdm9p
ZCBhbWRfaW9tbXVfc2V0X3Jvb3RfcGFnZV90YWJsZShzdHJ1YwogICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICB1aW50NjRfdCByb290X3B0
ciwgdWludDE2X3QgZG9tYWluX2lkLAogICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICB1aW50OF90IHBhZ2luZ19tb2RlLCBib29sIHZhbGlk
KQogeworICAgIGlmICggdmFsaWQgfHwgZHRlLT52ICkKKyAgICB7CisgICAg
ICAgIGR0ZS0+dHYgPSBmYWxzZTsKKyAgICAgICAgZHRlLT52ID0gdHJ1ZTsK
KyAgICAgICAgc21wX3dtYigpOworICAgIH0KICAgICBkdGUtPmRvbWFpbl9p
ZCA9IGRvbWFpbl9pZDsKICAgICBkdGUtPnB0X3Jvb3QgPSBwYWRkcl90b19w
Zm4ocm9vdF9wdHIpOwogICAgIGR0ZS0+aXcgPSB0cnVlOwogICAgIGR0ZS0+
aXIgPSB0cnVlOwogICAgIGR0ZS0+cGFnaW5nX21vZGUgPSBwYWdpbmdfbW9k
ZTsKKyAgICBzbXBfd21iKCk7CiAgICAgZHRlLT50diA9IHRydWU7CiAgICAg
ZHRlLT52ID0gdmFsaWQ7CiB9CkBAIC0xMzAsNiArMTM3LDcgQEAgdm9pZCBh
bWRfaW9tbXVfc2V0X2ludHJlbWFwX3RhYmxlKAogICAgIH0KIAogICAgIGR0
ZS0+aWcgPSBmYWxzZTsgLyogdW5tYXBwZWQgaW50ZXJydXB0cyByZXN1bHQg
aW4gaS9vIHBhZ2UgZmF1bHRzICovCisgICAgc21wX3dtYigpOwogICAgIGR0
ZS0+aXYgPSB2YWxpZDsKIH0KIAotLS0gYS94ZW4vZHJpdmVycy9wYXNzdGhy
b3VnaC9hbWQvcGNpX2FtZF9pb21tdS5jCisrKyBiL3hlbi9kcml2ZXJzL3Bh
c3N0aHJvdWdoL2FtZC9wY2lfYW1kX2lvbW11LmMKQEAgLTExNyw3ICsxMTcs
MTAgQEAgc3RhdGljIHZvaWQgYW1kX2lvbW11X3NldHVwX2RvbWFpbl9kZXZp
YwogICAgICAgICAvKiBVbmRvIHdoYXQgYW1kX2lvbW11X2Rpc2FibGVfZG9t
YWluX2RldmljZSgpIG1heSBoYXZlIGRvbmUuICovCiAgICAgICAgIGl2cnNf
ZGV2ID0gJmdldF9pdnJzX21hcHBpbmdzKGlvbW11LT5zZWcpW3JlcV9pZF07
CiAgICAgICAgIGlmICggZHRlLT5pdF9yb290ICkKKyAgICAgICAgewogICAg
ICAgICAgICAgZHRlLT5pbnRfY3RsID0gSU9NTVVfREVWX1RBQkxFX0lOVF9D
T05UUk9MX1RSQU5TTEFURUQ7CisgICAgICAgICAgICBzbXBfd21iKCk7Cisg
ICAgICAgIH0KICAgICAgICAgZHRlLT5pdiA9IGlvbW11X2ludHJlbWFwOwog
ICAgICAgICBkdGUtPmV4ID0gaXZyc19kZXYtPmR0ZV9hbGxvd19leGNsdXNp
b247CiAgICAgICAgIGR0ZS0+c3lzX21ndCA9IE1BU0tfRVhUUihpdnJzX2Rl
di0+ZGV2aWNlX2ZsYWdzLCBBQ1BJX0lWSERfU1lTVEVNX01HTVQpOwo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:10:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:10:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9320.24754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqSm-0002Du-8G; Tue, 20 Oct 2020 12:10:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9320.24754; Tue, 20 Oct 2020 12:10:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqSm-0002Dn-3e; Tue, 20 Oct 2020 12:10:00 +0000
Received: by outflank-mailman (input) for mailman id 9320;
 Tue, 20 Oct 2020 12:09:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KWTV=D3=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kUqSk-0002Di-MF
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:09:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bf22342-8e41-4c7a-90d2-3cca2c7f0fde;
 Tue, 20 Oct 2020 12:09:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B22AABD1;
 Tue, 20 Oct 2020 12:09:56 +0000 (UTC)
X-Inumbo-ID: 6bf22342-8e41-4c7a-90d2-3cca2c7f0fde
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603195796;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=mN66kxz6vMIrIs3ZjXoTD87KAy1HjYEMfY65Crt1jqM=;
	b=ZJQUGnUkeFZz4I3m4la3/hTxp8M3+65EnmMlOo312EmAiLYbBosB3T6twMQro/uu5SlDWN
	bYYYc9kCxdidGhS2VblE0TYeNWfB/JU5YKTxkP5gsHrg2e3y8yFyg+PDSBBqIalv8OPEhc
	xMeHLeQcI+E3NGUxYMTWkgaqfdXFNRk=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.10-rc1b
Date: Tue, 20 Oct 2020 14:09:56 +0200
Message-Id: <20201020120956.29708-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1b-tag

xen: branch for v5.10-rc1b

It contains the following:

- A single patch for fixing the Xen security issue XSA-331 (malicious
  guests can DoS dom0 by triggering NULL-pointer dereferences or access
  to stale data).

- A larger series for fixing the Xen security issue XSA-332 (malicious
  guests can DoS dom0 by sending events at high frequency, keeping
  dom0's vcpus busy in IRQ handling for prolonged periods).


Thanks.

Juergen

 Documentation/admin-guide/kernel-parameters.txt |   8 +
 drivers/block/xen-blkback/blkback.c             |  22 +-
 drivers/block/xen-blkback/xenbus.c              |   5 +-
 drivers/net/xen-netback/common.h                |  15 +
 drivers/net/xen-netback/interface.c             |  61 +++-
 drivers/net/xen-netback/netback.c               |  11 +-
 drivers/net/xen-netback/rx.c                    |  13 +-
 drivers/xen/events/events_2l.c                  |   9 +-
 drivers/xen/events/events_base.c                | 423 ++++++++++++++++++++++--
 drivers/xen/events/events_fifo.c                |  83 +++--
 drivers/xen/events/events_internal.h            |  20 +-
 drivers/xen/evtchn.c                            |   7 +-
 drivers/xen/pvcalls-back.c                      |  76 +++--
 drivers/xen/xen-pciback/pci_stub.c              |  13 +-
 drivers/xen/xen-pciback/pciback.h               |  12 +-
 drivers/xen/xen-pciback/pciback_ops.c           |  48 ++-
 drivers/xen/xen-pciback/xenbus.c                |   2 +-
 drivers/xen/xen-scsiback.c                      |  23 +-
 include/xen/events.h                            |  21 ++
 19 files changed, 707 insertions(+), 165 deletions(-)

Juergen Gross (13):
      xen/events: avoid removing an event channel while handling it
      xen/events: add a proper barrier to 2-level uevent unmasking
      xen/events: fix race in evtchn_fifo_unmask()
      xen/events: add a new "late EOI" evtchn framework
      xen/blkback: use lateeoi irq binding
      xen/netback: use lateeoi irq binding
      xen/scsiback: use lateeoi irq binding
      xen/pvcallsback: use lateeoi irq binding
      xen/pciback: use lateeoi irq binding
      xen/events: switch user event channels to lateeoi model
      xen/events: use a common cpu hotplug hook for event channels
      xen/events: defer eoi in case of excessive number of events
      xen/events: block rogue events for some time


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:13:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:13:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9424.24769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqWY-0003Fm-RG; Tue, 20 Oct 2020 12:13:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9424.24769; Tue, 20 Oct 2020 12:13:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqWY-0003Ff-O3; Tue, 20 Oct 2020 12:13:54 +0000
Received: by outflank-mailman (input) for mailman id 9424;
 Tue, 20 Oct 2020 12:13:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUqWX-0003DW-Uw
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:13:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ccc4d6f4-40c8-4ea1-aac2-95c1829a09a4;
 Tue, 20 Oct 2020 12:13:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUqWQ-0001du-7I; Tue, 20 Oct 2020 12:13:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUqWQ-0001eY-0j; Tue, 20 Oct 2020 12:13:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUqWQ-0002s5-0C; Tue, 20 Oct 2020 12:13:46 +0000
X-Inumbo-ID: ccc4d6f4-40c8-4ea1-aac2-95c1829a09a4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Jrtx7zRxCXlapcIUl9g8f2wyk7Ov2HLzu4cagZ8gZVQ=; b=cdq+wh9vro/mVQw4Af0/6/oFGr
	QW4mpcidfbgwfmjLIri0c+jslXp9Wq6ZkDl67MUFjHGwLj8DmjA/NFcJGtMH0muNpgvgqsxcdJLjE
	wods21ehKDEoBQ21dVkMsO9epv3r8swr198gR3FVQzXcLL5V5QeXrwDtECvr25HAlh+E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156013-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156013: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 12:13:46 +0000

flight 156013 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156013/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 155973

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155973
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155973
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155973
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155973
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155973
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155973
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155973
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155973
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155973
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155973
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155973
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm   3 hosts-allocate           starved in 155973 n/a

version targeted for testing:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   156013  2020-10-20 04:30:46 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:16:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:16:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9456.24783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqZH-0003SH-Bl; Tue, 20 Oct 2020 12:16:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9456.24783; Tue, 20 Oct 2020 12:16:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqZH-0003SA-8a; Tue, 20 Oct 2020 12:16:43 +0000
Received: by outflank-mailman (input) for mailman id 9456;
 Tue, 20 Oct 2020 12:16:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUqZG-0003RW-5p
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:16:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ccfdcf7-7b54-4d8d-9769-6b2b0fffca18;
 Tue, 20 Oct 2020 12:16:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUqZ7-0001gt-Bk; Tue, 20 Oct 2020 12:16:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUqZ7-0001lM-3w; Tue, 20 Oct 2020 12:16:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUqZ7-0003g5-3S; Tue, 20 Oct 2020 12:16:33 +0000
X-Inumbo-ID: 0ccfdcf7-7b54-4d8d-9769-6b2b0fffca18
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AHb6Fv5jv6Hsj1DZ1ZCZzRxr69jqBQXfSQUmtE+prEk=; b=TYy3PXiVr2E35dyPe0qOqhzoPv
	kMemD6IIhs63nnLUS/Ysb1SIA7oUDdktJVtpvJ59UlOQHVD2yGU+l17atHvNv+Weodb6yxkvhoWg7
	oyXPhHAHgS19g5KKLrrsyjyN4oZZP65hMlXactKKxmlRGlB//ShXyul7Y0FkjMGlzEgA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156025-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156025: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d76f4f97eb2772bf85fe286097183d0c7db19ae8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 12:16:33 +0000

flight 156025 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156025/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                d76f4f97eb2772bf85fe286097183d0c7db19ae8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   61 days
Failing since        152659  2020-08-21 14:07:39 Z   59 days  111 attempts
Testing same since   155981  2020-10-19 16:06:45 Z    0 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 47947 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:20:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:20:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9463.24802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdM-0004Ky-C3; Tue, 20 Oct 2020 12:20:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9463.24802; Tue, 20 Oct 2020 12:20:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdM-0004Kl-5f; Tue, 20 Oct 2020 12:20:56 +0000
Received: by outflank-mailman (input) for mailman id 9463;
 Tue, 20 Oct 2020 12:20:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdK-0004KL-UC
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:20:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ac03c3e-ddf7-43fd-b871-2269e19623e6;
 Tue, 20 Oct 2020 12:20:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E929FB1C6;
 Tue, 20 Oct 2020 12:20:51 +0000 (UTC)
X-Inumbo-ID: 8ac03c3e-ddf7-43fd-b871-2269e19623e6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v5 00/10] Support GEM object mappings from I/O memory
Date: Tue, 20 Oct 2020 14:20:36 +0200
Message-Id: <20201020122046.31167-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

DRM's fbdev console uses regular load and store operations to update
framebuffer memory. The bochs driver on sparc64 requires the use of
I/O-specific load and store operations. We have a workaround, but need
a long-term solution to the problem.

This patchset changes GEM's vmap/vunmap interfaces to forward pointers
of type struct dma_buf_map and updates the generic fbdev emulation to
use them correctly. This enables I/O-memory operations on all framebuffers
that require and support them.

Patches #1 to #4 prepare VRAM helpers and drivers.

Next is the update of the GEM vmap functions. Patch #5 adds vmap and vunmap
helpers that are usable with TTM-based GEM drivers, and patch #6 updates GEM's
vmap/vunmap callbacks to forward instances of type struct dma_buf_map. While
the patch touches many files throughout the DRM modules, the applied changes
are mostly trivial interface fixes. Several TTM-based GEM drivers now use
the new vmap code. Patch #7 updates GEM's internal vmap/vunmap functions to
forward struct dma_buf_map.

With struct dma_buf_map propagated through the layers, patches #8 to #10
convert DRM clients and the generic fbdev emulation to use it. Updating the
fbdev framebuffer then selects the correct functions for either system or
I/O memory.
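For readers not familiar with the type: struct dma_buf_map (declared in
include/linux/dma-buf-map.h) pairs a mapping address with a flag saying
whether it points into system or I/O memory, so callers can pick the right
store operations. The following is a minimal userspace sketch of that idea,
not the kernel code itself: the struct layout mirrors the kernel's (where
the I/O pointer is annotated void __iomem *), while mock_memcpy_toio() and
fb_memcpy_to_map() are illustrative stand-ins for the kernel's I/O accessors
and the fbdev helper's dispatch.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Userspace mock of the kernel's struct dma_buf_map: one address plus a
 * flag recording whether it refers to I/O or system memory. */
struct dma_buf_map {
	union {
		void *vaddr_iomem;	/* I/O memory (kernel: void __iomem *) */
		void *vaddr;		/* system memory */
	};
	bool is_iomem;
};

/* Stand-in for the kernel's memcpy_toio(); in this mock both paths are a
 * plain memcpy, but on e.g. sparc64 the I/O path must use I/O accessors. */
static void mock_memcpy_toio(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);
}

/* Sketch of the dispatch performed when flushing the fbdev shadow buffer
 * into the framebuffer: is_iomem selects system- or I/O-memory stores. */
static void fb_memcpy_to_map(struct dma_buf_map *map, const void *src,
			     size_t len)
{
	if (map->is_iomem)
		mock_memcpy_toio(map->vaddr_iomem, src, len);
	else
		memcpy(map->vaddr, src, len);
}
```

The point of threading the struct through vmap/vunmap is that this branch
happens once, in shared helper code, instead of every driver guessing which
kind of pointer it was handed.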

v5:
	* rebase onto latest TTM changes (Christian)
	* support TTM premapped memory correctly (Christian)
	* implement fb_read/fb_write internally (Sam, Daniel)
	* cleanups
v4:
	* provide TTM vmap/vunmap plus GEM helpers and convert drivers
	  over (Christian, Daniel)
	* remove several empty functions
	* more TODOs and documentation (Daniel)
v3:
	* recreate the whole patchset on top of struct dma_buf_map
v2:
	* RFC patchset


Thomas Zimmermann (10):
  drm/vram-helper: Remove invariant parameters from internal kmap
    function
  drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
  drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
  drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
  drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
  drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM
    backends
  drm/gem: Update internal GEM vmap/vunmap interfaces to use struct
    dma_buf_map
  drm/gem: Store client buffer mappings as struct dma_buf_map
  dma-buf-map: Add memcpy and pointer-increment interfaces
  drm/fb_helper: Support framebuffers in I/O memory

 Documentation/gpu/todo.rst                  |  37 ++-
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
 drivers/gpu/drm/ast/ast_cursor.c            |  27 +--
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/bochs/bochs_kms.c           |   1 -
 drivers/gpu/drm/drm_client.c                |  38 +--
 drivers/gpu/drm/drm_fb_helper.c             | 248 ++++++++++++++++++--
 drivers/gpu/drm/drm_gem.c                   |  29 ++-
 drivers/gpu/drm/drm_gem_cma_helper.c        |  27 +--
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 ++--
 drivers/gpu/drm/drm_gem_ttm_helper.c        |  38 +++
 drivers/gpu/drm/drm_gem_vram_helper.c       | 117 +++++----
 drivers/gpu/drm/drm_internal.h              |   5 +-
 drivers/gpu/drm/drm_prime.c                 |  14 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   3 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       |   1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  12 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c     |  12 -
 drivers/gpu/drm/exynos/exynos_drm_gem.h     |   2 -
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
 drivers/gpu/drm/nouveau/Kconfig             |   1 +
 drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
 drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 --
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +-
 drivers/gpu/drm/qxl/qxl_display.c           |  11 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |  14 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  31 ++-
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +-
 drivers/gpu/drm/radeon/radeon.h             |   1 -
 drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  20 --
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/ttm/ttm_bo_util.c           |  72 ++++++
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   7 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 +-
 drivers/gpu/drm/vkms/vkms_plane.c           |  15 +-
 drivers/gpu/drm/vkms/vkms_writeback.c       |  22 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_client.h                    |   7 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   3 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_ttm_helper.h            |   6 +
 include/drm/drm_gem_vram_helper.h           |  14 +-
 include/drm/drm_mode_config.h               |  12 -
 include/drm/ttm/ttm_bo_api.h                |  28 +++
 include/linux/dma-buf-map.h                 |  93 +++++++-
 64 files changed, 852 insertions(+), 436 deletions(-)

-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:20:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9462.24796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdM-0004KX-20; Tue, 20 Oct 2020 12:20:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9462.24796; Tue, 20 Oct 2020 12:20:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdL-0004KQ-Tj; Tue, 20 Oct 2020 12:20:55 +0000
Received: by outflank-mailman (input) for mailman id 9462;
 Tue, 20 Oct 2020 12:20:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdK-0004KG-5R
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:20:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 066f9791-a601-4729-9658-e9555c69e14b;
 Tue, 20 Oct 2020 12:20:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E71DAB1C0;
 Tue, 20 Oct 2020 12:20:51 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kUqdK-0004KG-5R
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:20:54 +0000
X-Inumbo-ID: 066f9791-a601-4729-9658-e9555c69e14b
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 066f9791-a601-4729-9658-e9555c69e14b;
	Tue, 20 Oct 2020 12:20:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E71DAB1C0;
	Tue, 20 Oct 2020 12:20:51 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v5 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
Date: Tue, 20 Oct 2020 14:20:39 +0200
Message-Id: <20201020122046.31167-4-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function etnaviv_gem_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       | 1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       | 1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 -----
 3 files changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 914f0867ff71..9682c26d89bb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
 void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index 67d9a2b9ea6a..bbd235473645 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = {
 	.unpin = etnaviv_gem_prime_unpin,
 	.get_sg_table = etnaviv_gem_prime_get_sg_table,
 	.vmap = etnaviv_gem_prime_vmap,
-	.vunmap = etnaviv_gem_prime_vunmap,
 	.vm_ops = &vm_ops,
 };
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 135fbff6fecf..a6d9932a32ae 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
 	return etnaviv_gem_vmap(obj);
 }
 
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* TODO msm_gem_vunmap() */
-}
-
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma)
 {
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9464.24820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdQ-0004Ol-LR; Tue, 20 Oct 2020 12:21:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9464.24820; Tue, 20 Oct 2020 12:21:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdQ-0004OZ-GM; Tue, 20 Oct 2020 12:21:00 +0000
Received: by outflank-mailman (input) for mailman id 9464;
 Tue, 20 Oct 2020 12:20:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdP-0004KG-48
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:20:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 295c6168-8e3e-49e3-9f17-132fa31bf4c2;
 Tue, 20 Oct 2020 12:20:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EA87BB1C8;
 Tue, 20 Oct 2020 12:20:51 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kUqdP-0004KG-48
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:20:59 +0000
X-Inumbo-ID: 295c6168-8e3e-49e3-9f17-132fa31bf4c2
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 295c6168-8e3e-49e3-9f17-132fa31bf4c2;
	Tue, 20 Oct 2020 12:20:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EA87BB1C8;
	Tue, 20 Oct 2020 12:20:51 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v5 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
Date: Tue, 20 Oct 2020 14:20:38 +0200
Message-Id: <20201020122046.31167-3-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function drm_gem_cma_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
 drivers/gpu/drm/vc4/vc4_bo.c         |  1 -
 include/drm/drm_gem_cma_helper.h     |  1 -
 3 files changed, 19 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 2165633c9b9e..d527485ea0b7 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
-/**
- * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
- *     address space
- * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
- *
- * This function removes a buffer exported via DRM PRIME from the kernel's
- * virtual address space. This is a no-op because CMA buffers cannot be
- * unmapped from kernel space. Drivers using the CMA helpers should set this
- * as their &drm_gem_object_funcs.vunmap callback.
- */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* Nothing to do */
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
-
 static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
 	.free = drm_gem_cma_free_object,
 	.print_info = drm_gem_cma_print_info,
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index f432278173cd..557f0d1e6437 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
 	.export = vc4_prime_export,
 	.get_sg_table = drm_gem_cma_prime_get_sg_table,
 	.vmap = vc4_prime_vmap,
-	.vunmap = drm_gem_cma_prime_vunmap,
 	.vm_ops = &vc4_vm_ops,
 };
 
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..a064b0d1c480 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9465.24827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdR-0004PW-1P; Tue, 20 Oct 2020 12:21:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9465.24827; Tue, 20 Oct 2020 12:21:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdQ-0004PK-Px; Tue, 20 Oct 2020 12:21:00 +0000
Received: by outflank-mailman (input) for mailman id 9465;
 Tue, 20 Oct 2020 12:20:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdP-0004KL-T4
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:20:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7d1e691-2c59-49f5-b17a-210fc1f92d92;
 Tue, 20 Oct 2020 12:20:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6E2A2B1AC;
 Tue, 20 Oct 2020 12:20:52 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kUqdP-0004KL-T4
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:20:59 +0000
X-Inumbo-ID: a7d1e691-2c59-49f5-b17a-210fc1f92d92
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a7d1e691-2c59-49f5-b17a-210fc1f92d92;
	Tue, 20 Oct 2020 12:20:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 6E2A2B1AC;
	Tue, 20 Oct 2020 12:20:52 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v5 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
Date: Tue, 20 Oct 2020 14:20:40 +0200
Message-Id: <20201020122046.31167-5-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
them before changing the interface to use struct dma_buf_map. As a side
effect of removing exynos_drm_gem_prime_vmap(), the error code changes
from ENOMEM to EOPNOTSUPP.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
 drivers/gpu/drm/exynos/exynos_drm_gem.h |  2 --
 2 files changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index e7a6eb96f692..13a35623ac04 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
 static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
 	.free = exynos_drm_gem_free_object,
 	.get_sg_table = exynos_drm_gem_prime_get_sg_table,
-	.vmap = exynos_drm_gem_prime_vmap,
-	.vunmap	= exynos_drm_gem_prime_vunmap,
 	.vm_ops = &exynos_drm_gem_vm_ops,
 };
 
@@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 	return &exynos_gem->base;
 }
 
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	return NULL;
-}
-
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* Nothing to do */
-}
-
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma)
 {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index 74e926abeff0..a23272fb96fb 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -107,8 +107,6 @@ struct drm_gem_object *
 exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 				     struct dma_buf_attachment *attach,
 				     struct sg_table *sgt);
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma);
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9466.24844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdV-0004VU-CN; Tue, 20 Oct 2020 12:21:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9466.24844; Tue, 20 Oct 2020 12:21:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdV-0004VL-7r; Tue, 20 Oct 2020 12:21:05 +0000
Received: by outflank-mailman (input) for mailman id 9466;
 Tue, 20 Oct 2020 12:21:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdU-0004KG-4T
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d64b1c76-3357-4cfc-ac32-c0d5546bfa18;
 Tue, 20 Oct 2020 12:20:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F029AB1D0;
 Tue, 20 Oct 2020 12:20:51 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kUqdU-0004KG-4T
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:04 +0000
X-Inumbo-ID: d64b1c76-3357-4cfc-ac32-c0d5546bfa18
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d64b1c76-3357-4cfc-ac32-c0d5546bfa18;
	Tue, 20 Oct 2020 12:20:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id F029AB1D0;
	Tue, 20 Oct 2020 12:20:51 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v5 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function
Date: Tue, 20 Oct 2020 14:20:37 +0200
Message-Id: <20201020122046.31167-2-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The parameters map and is_iomem are always set to the same values.
Remove them to prepare the function for conversion to struct dma_buf_map.

v4:
	* don't check for !kmap->virtual; will always be false

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 7aeb5daf2805..bfc059945e31 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -379,32 +379,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
-				      bool map, bool *is_iomem)
+static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
 {
 	int ret;
 	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	bool is_iomem;
 
 	if (gbo->kmap_use_count > 0)
 		goto out;
 
-	if (kmap->virtual || !map)
-		goto out;
-
 	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
 	if (ret)
 		return ERR_PTR(ret);
 
 out:
-	if (!kmap->virtual) {
-		if (is_iomem)
-			*is_iomem = false;
-		return NULL; /* not mapped; don't increment ref */
-	}
 	++gbo->kmap_use_count;
-	if (is_iomem)
-		return ttm_kmap_obj_virtual(kmap, is_iomem);
-	return kmap->virtual;
+	return ttm_kmap_obj_virtual(kmap, &is_iomem);
 }
 
 static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
@@ -449,7 +439,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
+	base = drm_gem_vram_kmap_locked(gbo);
 	if (IS_ERR(base)) {
 		ret = PTR_ERR(base);
 		goto err_drm_gem_vram_unpin_locked;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9467.24853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdW-0004X9-1A; Tue, 20 Oct 2020 12:21:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9467.24853; Tue, 20 Oct 2020 12:21:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdV-0004Wq-RB; Tue, 20 Oct 2020 12:21:05 +0000
Received: by outflank-mailman (input) for mailman id 9467;
 Tue, 20 Oct 2020 12:21:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdU-0004KL-TG
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ce3d91e-9c34-4cea-8f98-b6012487e091;
 Tue, 20 Oct 2020 12:20:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3B99DB1E9;
 Tue, 20 Oct 2020 12:20:55 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kUqdU-0004KL-TG
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:04 +0000
X-Inumbo-ID: 3ce3d91e-9c34-4cea-8f98-b6012487e091
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3ce3d91e-9c34-4cea-8f98-b6012487e091;
	Tue, 20 Oct 2020 12:20:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 3B99DB1E9;
	Tue, 20 Oct 2020 12:20:55 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v5 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map
Date: Tue, 20 Oct 2020 14:20:44 +0200
Message-Id: <20201020122046.31167-9-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Kernel DRM clients now store their framebuffer address in an instance
of struct dma_buf_map. Depending on the buffer's location, the address
refers to system or I/O memory.

Callers of drm_client_buffer_vmap() receive a copy of the mapping in
the call's supplied argument. The copy can be accessed and modified
through the dma_buf_map interfaces.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
 drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
 include/drm/drm_client.h        |  7 ++++---
 3 files changed, 38 insertions(+), 26 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index ac0082bed966..fe573acf1067 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	struct drm_device *dev = buffer->client->dev;
 
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	drm_gem_vunmap(buffer->gem, &buffer->map);
 
 	if (buffer->gem)
 		drm_gem_object_put(buffer->gem);
@@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
 /**
  * drm_client_buffer_vmap - Map DRM client buffer into address space
  * @buffer: DRM client buffer
+ * @map_copy: Returns the mapped memory's address
  *
  * This function maps a client buffer into kernel address space. If the
- * buffer is already mapped, it returns the mapping's address.
+ * buffer is already mapped, it returns the existing mapping's address.
  *
  * Client buffer mappings are not ref'counted. Each call to
  * drm_client_buffer_vmap() should be followed by a call to
  * drm_client_buffer_vunmap(); or the client buffer should be mapped
  * throughout its lifetime.
  *
+ * The returned address is a copy of the internal value. In contrast to
+ * other vmap interfaces, you don't need it for the client's vunmap
+ * function. So you can modify it at will during blit and draw operations.
+ *
  * Returns:
- *	The mapped memory's address
+ *	0 on success, or a negative errno code otherwise.
  */
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
+int
+drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
 {
-	struct dma_buf_map map;
+	struct dma_buf_map *map = &buffer->map;
 	int ret;
 
-	if (buffer->vaddr)
-		return buffer->vaddr;
+	if (dma_buf_map_is_set(map))
+		goto out;
 
 	/*
 	 * FIXME: The dependency on GEM here isn't required, we could
@@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	ret = drm_gem_vmap(buffer->gem, &map);
+	ret = drm_gem_vmap(buffer->gem, map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
-	buffer->vaddr = map.vaddr;
+out:
+	*map_copy = *map;
 
-	return map.vaddr;
+	return 0;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+	struct dma_buf_map *map = &buffer->map;
 
-	drm_gem_vunmap(buffer->gem, &map);
-	buffer->vaddr = NULL;
+	drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
 
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index c2f72bb6afb1..6212cd7cde1d 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->vaddr + offset;
+	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
@@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 	struct drm_clip_rect *clip = &helper->dirty_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	spin_lock_irqsave(&helper->dirty_lock, flags);
 	clip_copy = *clip;
@@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 
 		/* Generic fbdev uses a shadow buffer */
 		if (helper->buffer) {
-			vaddr = drm_client_buffer_vmap(helper->buffer);
-			if (IS_ERR(vaddr))
+			ret = drm_client_buffer_vmap(helper->buffer, &map);
+			if (ret)
 				return;
 			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
 		}
@@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 	struct drm_framebuffer *fb;
 	struct fb_info *fbi;
 	u32 format;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
 		    sizes->surface_width, sizes->surface_height,
@@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		fb_deferred_io_init(fbi);
 	} else {
 		/* buffer is mapped for HW framebuffer */
-		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
+		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+		if (ret)
+			return ret;
+		if (map.is_iomem)
+			fbi->screen_base = map.vaddr_iomem;
+		else
+			fbi->screen_buffer = map.vaddr;
 
-		fbi->screen_buffer = vaddr;
 		/* Shamelessly leak the physical address to user-space */
 #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
 		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 7aaea665bfc2..f07f2fb02e75 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -3,6 +3,7 @@
 #ifndef _DRM_CLIENT_H_
 #define _DRM_CLIENT_H_
 
+#include <linux/dma-buf-map.h>
 #include <linux/lockdep.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
@@ -141,9 +142,9 @@ struct drm_client_buffer {
 	struct drm_gem_object *gem;
 
 	/**
-	 * @vaddr: Virtual address for the buffer
+	 * @map: Virtual address for the buffer
 	 */
-	void *vaddr;
+	struct dma_buf_map map;
 
 	/**
 	 * @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
 
 int drm_client_modeset_create(struct drm_client_dev *client);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9468.24867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqda-0004ee-EF; Tue, 20 Oct 2020 12:21:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9468.24867; Tue, 20 Oct 2020 12:21:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqda-0004eV-AQ; Tue, 20 Oct 2020 12:21:10 +0000
Received: by outflank-mailman (input) for mailman id 9468;
 Tue, 20 Oct 2020 12:21:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdZ-0004KG-4Y
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8fcb0a6-07f5-4e4d-b674-b1fe9502c87d;
 Tue, 20 Oct 2020 12:20:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2CD15B1D7;
 Tue, 20 Oct 2020 12:20:53 +0000 (UTC)
X-Inumbo-ID: a8fcb0a6-07f5-4e4d-b674-b1fe9502c87d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v5 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
Date: Tue, 20 Oct 2020 14:20:41 +0200
Message-Id: <20201020122046.31167-6-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
address space. The mapping's address is returned as struct dma_buf_map.
Each function is a simplified version of TTM's existing kmap code. Both
functions respect the memory's location and writecombine flags.

On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
two helpers that convert a GEM object into a TTM BO and forward the call
to TTM's vmap/vunmap. These helpers can be dropped into the respective
GEM object callbacks.

v5:
	* use size_t for storing mapping size (Christian)
	* ignore premapped memory areas correctly in ttm_bo_vunmap()
	* rebase onto latest TTM interfaces (Christian)
	* remove BUG() from ttm_bo_vmap() (Christian)
v4:
	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
	  Christian)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
 drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
 include/drm/drm_gem_ttm_helper.h     |  6 +++
 include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
 include/linux/dma-buf-map.h          | 20 ++++++++
 5 files changed, 164 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index 0e4fb9ba43ad..db4c14d78a30 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 }
 EXPORT_SYMBOL(drm_gem_ttm_print_info);
 
+/**
+ * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: [out] returns the dma-buf mapping.
+ *
+ * Maps a GEM object with ttm_bo_vmap(). This function can be used as
+ * &drm_gem_object_funcs.vmap callback.
+ *
+ * Returns:
+ * 0 on success, or a negative errno code otherwise.
+ */
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+		     struct dma_buf_map *map)
+{
+	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+	return ttm_bo_vmap(bo, map);
+
+}
+EXPORT_SYMBOL(drm_gem_ttm_vmap);
+
+/**
+ * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: dma-buf mapping.
+ *
+ * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
+ * &drm_gem_object_funcs.vunmap callback.
+ */
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+			struct dma_buf_map *map)
+{
+	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+	ttm_bo_vunmap(bo, map);
+}
+EXPORT_SYMBOL(drm_gem_ttm_vunmap);
+
 /**
  * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
  * @gem: GEM object.
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index ba7ab5ed85d0..5c79418405ea 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -32,6 +32,7 @@
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
 #include <drm/drm_vma_manager.h>
+#include <linux/dma-buf-map.h>
 #include <linux/io.h>
 #include <linux/highmem.h>
 #include <linux/wait.h>
@@ -527,6 +528,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 }
 EXPORT_SYMBOL(ttm_bo_kunmap);
 
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+	struct ttm_resource *mem = &bo->mem;
+	int ret;
+
+	ret = ttm_mem_io_reserve(bo->bdev, mem);
+	if (ret)
+		return ret;
+
+	if (mem->bus.is_iomem) {
+		void __iomem *vaddr_iomem;
+		size_t size = bo->num_pages << PAGE_SHIFT;
+
+		if (mem->bus.addr)
+			vaddr_iomem = (void __iomem *)mem->bus.addr;
+		else if (mem->bus.caching == ttm_write_combined)
+			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
+		else
+			vaddr_iomem = ioremap(mem->bus.offset, size);
+
+		if (!vaddr_iomem)
+			return -ENOMEM;
+
+		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
+
+	} else {
+		struct ttm_operation_ctx ctx = {
+			.interruptible = false,
+			.no_wait_gpu = false
+		};
+		struct ttm_tt *ttm = bo->ttm;
+		pgprot_t prot;
+		void *vaddr;
+
+		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
+		if (ret)
+			return ret;
+
+		/*
+		 * We need to use vmap to get the desired page protection
+		 * or to make the buffer object look contiguous.
+		 */
+		prot = ttm_io_prot(bo, mem, PAGE_KERNEL);
+		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
+		if (!vaddr)
+			return -ENOMEM;
+
+		dma_buf_map_set_vaddr(map, vaddr);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(ttm_bo_vmap);
+
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+	struct ttm_resource *mem = &bo->mem;
+
+	if (dma_buf_map_is_null(map))
+		return;
+
+	if (!map->is_iomem)
+		vunmap(map->vaddr);
+	else if (!mem->bus.addr)
+		iounmap(map->vaddr_iomem);
+	dma_buf_map_clear(map);
+
+	ttm_mem_io_free(bo->bdev, &bo->mem);
+}
+EXPORT_SYMBOL(ttm_bo_vunmap);
+
 static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
 				 bool dst_use_tt)
 {
diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
index 118cef76f84f..7c6d874910b8 100644
--- a/include/drm/drm_gem_ttm_helper.h
+++ b/include/drm/drm_gem_ttm_helper.h
@@ -10,11 +10,17 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+struct dma_buf_map;
+
 #define drm_gem_ttm_of_gem(gem_obj) \
 	container_of(gem_obj, struct ttm_buffer_object, base)
 
 void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 			    const struct drm_gem_object *gem);
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+		     struct dma_buf_map *map);
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+			struct dma_buf_map *map);
 int drm_gem_ttm_mmap(struct drm_gem_object *gem,
 		     struct vm_area_struct *vma);
 
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 37102e45e496..2c59a785374c 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -48,6 +48,8 @@ struct ttm_bo_global;
 
 struct ttm_bo_device;
 
+struct dma_buf_map;
+
 struct drm_mm_node;
 
 struct ttm_placement;
@@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
  */
 void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
 
+/**
+ * ttm_bo_vmap
+ *
+ * @bo: The buffer object.
+ * @map: pointer to a struct dma_buf_map representing the map.
+ *
+ * Sets up a kernel virtual mapping of the data in the buffer object,
+ * using ioremap or vmap. The parameter @map returns the virtual
+ * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
+ *
+ * Returns
+ * -ENOMEM: Out of memory.
+ * -EINVAL: Invalid range.
+ */
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
+/**
+ * ttm_bo_vunmap
+ *
+ * @bo: The buffer object.
+ * @map: Object describing the map to unmap.
+ *
+ * Unmaps a kernel map set up by ttm_bo_vmap().
+ */
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
 /**
  * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
  *
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index fd1aba545fdf..2e8bbecb5091 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -45,6 +45,12 @@
  *
  *	dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
  *
+ * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
 	map->is_iomem = false;
 }
 
+/**
+ * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
+ * @map:		The dma-buf mapping structure
+ * @vaddr_iomem:	An I/O-memory address
+ *
+ * Sets the address and the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
+					       void __iomem *vaddr_iomem)
+{
+	map->vaddr_iomem = vaddr_iomem;
+	map->is_iomem = true;
+}
+
 /**
  * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
  * @lhs:	The dma-buf mapping structure
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9469.24880 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqde-0004mN-Ty; Tue, 20 Oct 2020 12:21:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9469.24880; Tue, 20 Oct 2020 12:21:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqde-0004mA-OX; Tue, 20 Oct 2020 12:21:14 +0000
Received: by outflank-mailman (input) for mailman id 9469;
 Tue, 20 Oct 2020 12:21:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqde-0004KG-4e
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a990c734-41f8-4c67-b278-5752ce3c6843;
 Tue, 20 Oct 2020 12:20:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 946B1B1E1;
 Tue, 20 Oct 2020 12:20:54 +0000 (UTC)
X-Inumbo-ID: a990c734-41f8-4c67-b278-5752ce3c6843
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v5 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map
Date: Tue, 20 Oct 2020 14:20:43 +0200
Message-Id: <20201020122046.31167-8-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

GEM's vmap and vunmap interfaces now wrap memory pointers in struct
dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_client.c   | 18 +++++++++++-------
 drivers/gpu/drm/drm_gem.c      | 26 +++++++++++++-------------
 drivers/gpu/drm/drm_internal.h |  5 +++--
 drivers/gpu/drm/drm_prime.c    | 14 ++++----------
 4 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 495f47d23d87..ac0082bed966 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -3,6 +3,7 @@
  * Copyright 2018 Noralf Trønnes
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
@@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
  */
 void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (buffer->vaddr)
 		return buffer->vaddr;
@@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	vaddr = drm_gem_vmap(buffer->gem);
-	if (IS_ERR(vaddr))
-		return vaddr;
+	ret = drm_gem_vmap(buffer->gem, &map);
+	if (ret)
+		return ERR_PTR(ret);
 
-	buffer->vaddr = vaddr;
+	buffer->vaddr = map.vaddr;
 
-	return vaddr;
+	return map.vaddr;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+
+	drm_gem_vunmap(buffer->gem, &map);
 	buffer->vaddr = NULL;
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index a89ad4570e3c..4d5fff4bd821 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->funcs->unpin(obj);
 }
 
-void *drm_gem_vmap(struct drm_gem_object *obj)
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map;
 	int ret;
 
 	if (!obj->funcs->vmap)
-		return ERR_PTR(-EOPNOTSUPP);
+		return -EOPNOTSUPP;
 
-	ret = obj->funcs->vmap(obj, &map);
+	ret = obj->funcs->vmap(obj, map);
 	if (ret)
-		return ERR_PTR(ret);
-	else if (dma_buf_map_is_null(&map))
-		return ERR_PTR(-ENOMEM);
+		return ret;
+	else if (dma_buf_map_is_null(map))
+		return -ENOMEM;
 
-	return map.vaddr;
+	return 0;
 }
 
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
-
-	if (!vaddr)
+	if (dma_buf_map_is_null(map))
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, &map);
+		obj->funcs->vunmap(obj, map);
+
+	/* Always set the mapping to NULL. Callers may rely on this. */
+	dma_buf_map_clear(map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index b65865c630b0..58832d75a9bd 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -33,6 +33,7 @@
 
 struct dentry;
 struct dma_buf;
+struct dma_buf_map;
 struct drm_connector;
 struct drm_crtc;
 struct drm_framebuffer;
@@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 89e2a2496734..cb8fbeeb731b 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
  * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
  * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
+ * The kernel virtual address is returned in @map.
  *
- * Returns the kernel virtual address or NULL on failure.
+ * Returns 0 on success or a negative errno code otherwise.
  */
 int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
-	void *vaddr;
 
-	vaddr = drm_gem_vmap(obj);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
-
-	dma_buf_map_set_vaddr(map, vaddr);
-
-	return 0;
+	return drm_gem_vmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
@@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	drm_gem_vunmap(obj, map->vaddr);
+	drm_gem_vunmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9470.24892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdk-0004uI-8Y; Tue, 20 Oct 2020 12:21:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9470.24892; Tue, 20 Oct 2020 12:21:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdk-0004u6-3a; Tue, 20 Oct 2020 12:21:20 +0000
Received: by outflank-mailman (input) for mailman id 9470;
 Tue, 20 Oct 2020 12:21:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdj-0004KG-4j
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1123e526-3f9f-49fe-9b94-ae675b820c2d;
 Tue, 20 Oct 2020 12:20:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D4090B1EA;
 Tue, 20 Oct 2020 12:20:55 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kUqdj-0004KG-4j
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:19 +0000
X-Inumbo-ID: 1123e526-3f9f-49fe-9b94-ae675b820c2d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1123e526-3f9f-49fe-9b94-ae675b820c2d;
	Tue, 20 Oct 2020 12:20:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D4090B1EA;
	Tue, 20 Oct 2020 12:20:55 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Tue, 20 Oct 2020 14:20:45 +0200
Message-Id: <20201020122046.31167-10-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To do framebuffer updates, one needs memcpy from system memory and a
pointer-increment function. Add both interfaces with documentation.

v5:
	* include <linux/string.h> to build on sparc64 (Sam)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 include/linux/dma-buf-map.h | 73 ++++++++++++++++++++++++++++++++-----
 1 file changed, 63 insertions(+), 10 deletions(-)

diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 2e8bbecb5091..583a3a1f9447 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -7,6 +7,7 @@
 #define __DMA_BUF_MAP_H__
 
 #include <linux/io.h>
+#include <linux/string.h>
 
 /**
  * DOC: overview
@@ -32,6 +33,14 @@
  * accessing the buffer. Use the returned instance and the helper functions
  * to access the buffer's memory in the correct way.
  *
+ * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
+ * actually independent from the dma-buf infrastructure. When sharing buffers
+ * among devices, drivers have to know the location of the memory to access
+ * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
+ * solves this problem for dma-buf and its users. If other drivers or
+ * sub-systems require similar functionality, the type could be generalized
+ * and moved to a more prominent header file.
+ *
  * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
 * considered bad style. Rather than accessing its fields directly, use one
  * of the provided helper functions, or implement your own. For example,
@@ -51,6 +60,14 @@
  *
 *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
  *
+ * Instances of struct dma_buf_map do not have to be cleaned up, but
+ * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+ * always refer to system memory.
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_clear(&map);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -73,17 +90,19 @@
  *	if (dma_buf_map_is_equal(&sys_map, &io_map))
  *		// always false
  *
- * Instances of struct dma_buf_map do not have to be cleaned up, but
- * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
- * always refer to system memory.
+ * A set-up instance of struct dma_buf_map can be used to access or manipulate
+ * the buffer memory. Depending on the location of the memory, the provided
+ * helpers will pick the correct operations. Data can be copied into the memory
+ * with dma_buf_map_memcpy_to(). The address can be manipulated with
+ * dma_buf_map_incr().
  *
- * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
- * actually independent from the dma-buf infrastructure. When sharing buffers
- * among devices, drivers have to know the location of the memory to access
- * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
- * solves this problem for dma-buf and its users. If other drivers or
- * sub-systems require similar functionality, the type could be generalized
- * and moved to a more prominent header file.
+ * .. code-block:: c
+ *
+ *	const void *src = ...; // source buffer
+ *	size_t len = ...; // length of src
+ *
+ *	dma_buf_map_memcpy_to(&map, src, len);
+ *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
  */
 
 /**
@@ -210,4 +229,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
 	}
 }
 
+/**
+ * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
+ * @dst:	The dma-buf mapping structure
+ * @src:	The source buffer
+ * @len:	The number of bytes in src
+ *
+ * Copies data into a dma-buf mapping. The source buffer is in system
+ * memory. Depending on the buffer's location, the helper picks the correct
+ * method of accessing the memory.
+ */
+static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
+{
+	if (dst->is_iomem)
+		memcpy_toio(dst->vaddr_iomem, src, len);
+	else
+		memcpy(dst->vaddr, src, len);
+}
+
+/**
+ * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
+ * @map:	The dma-buf mapping structure
+ * @incr:	The number of bytes to increment
+ *
+ * Increments the address stored in a dma-buf mapping. Depending on the
+ * buffer's location, the correct value will be updated.
+ */
+static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
+{
+	if (map->is_iomem)
+		map->vaddr_iomem += incr;
+	else
+		map->vaddr += incr;
+}
+
 #endif /* __DMA_BUF_MAP_H__ */
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9472.24904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdp-000526-Te; Tue, 20 Oct 2020 12:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9472.24904; Tue, 20 Oct 2020 12:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdp-00051w-Ov; Tue, 20 Oct 2020 12:21:25 +0000
Received: by outflank-mailman (input) for mailman id 9472;
 Tue, 20 Oct 2020 12:21:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdo-0004KG-58
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 82de9e46-6993-429b-be4b-41f0cb0a2394;
 Tue, 20 Oct 2020 12:20:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D8EB1B1DC;
 Tue, 20 Oct 2020 12:20:53 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kUqdo-0004KG-58
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:24 +0000
X-Inumbo-ID: 82de9e46-6993-429b-be4b-41f0cb0a2394
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 82de9e46-6993-429b-be4b-41f0cb0a2394;
	Tue, 20 Oct 2020 12:20:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D8EB1B1DC;
	Tue, 20 Oct 2020 12:20:53 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v5 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
Date: Tue, 20 Oct 2020 14:20:42 +0200
Message-Id: <20201020122046.31167-7-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch replaces the use of raw pointers in the GEM objects' vmap/vunmap
functions with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.

TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.

v5:
	* update vkms after switch to shmem
v4:
	* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
	* fix a trailing { in drm_gem_vmap()
	* remove several empty functions instead of converting them (Daniel)
	* comment uses of raw pointers with a TODO (Daniel)
	* TODO list: convert more helpers to use struct dma_buf_map

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 Documentation/gpu/todo.rst                  |  18 ++++
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 -------
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
 drivers/gpu/drm/ast/ast_cursor.c            |  27 +++--
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/drm_gem.c                   |  23 +++--
 drivers/gpu/drm/drm_gem_cma_helper.c        |  10 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 +++++----
 drivers/gpu/drm/drm_gem_vram_helper.c       | 107 ++++++++++----------
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |   9 +-
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
 drivers/gpu/drm/nouveau/Kconfig             |   1 +
 drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
 drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 ----
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +--
 drivers/gpu/drm/qxl/qxl_display.c           |  11 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |  14 ++-
 drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  31 +++---
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +--
 drivers/gpu/drm/radeon/radeon.h             |   1 -
 drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  20 ----
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 ++--
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   6 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 ++-
 drivers/gpu/drm/vkms/vkms_plane.c           |  15 ++-
 drivers/gpu/drm/vkms/vkms_writeback.c       |  22 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   2 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_vram_helper.h           |  14 +--
 49 files changed, 345 insertions(+), 308 deletions(-)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 700637e25ecd..7e6fc3c04add 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
 
 Level: Intermediate
 
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interfaces have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
 
 Core refactorings
 =================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 32257189e09b..e479b04e955e 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -239,6 +239,7 @@ config DRM_RADEON
 	select FW_LOADER
         select DRM_KMS_HELPER
         select DRM_TTM
+	select DRM_TTM_HELPER
 	select POWER_SUPPLY
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
@@ -259,6 +260,7 @@ config DRM_AMDGPU
 	select DRM_KMS_HELPER
 	select DRM_SCHED
 	select DRM_TTM
+	select DRM_TTM_HELPER
 	select POWER_SUPPLY
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
 #include <linux/dma-fence-array.h>
 #include <linux/pci-p2pdma.h>
 
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
-			  &bo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
-	ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
 /**
  * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
  * @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf);
 bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
 				      struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 			  struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index be08a63ef58c..576659827e74 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
 
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
 
 #include "amdgpu.h"
 #include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
 	.open = amdgpu_gem_object_open,
 	.close = amdgpu_gem_object_close,
 	.export = amdgpu_gem_prime_export,
-	.vmap = amdgpu_gem_prime_vmap,
-	.vunmap = amdgpu_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
 	struct amdgpu_bo		*parent;
 	struct amdgpu_bo		*shadow;
 
-	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	struct amdgpu_mn		*mn;
 
 
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
 
 	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
 	struct drm_device *dev = &ast->base;
 	size_t size, i;
 	struct drm_gem_vram_object *gbo;
-	void __iomem *vaddr;
+	struct dma_buf_map map;
 	int ret;
 
 	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
-		vaddr = drm_gem_vram_vmap(gbo);
-		if (IS_ERR(vaddr)) {
-			ret = PTR_ERR(vaddr);
+		ret = drm_gem_vram_vmap(gbo, &map);
+		if (ret) {
 			drm_gem_vram_unpin(gbo);
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
 
 		ast->cursor.gbo[i] = gbo;
-		ast->cursor.vaddr[i] = vaddr;
+		ast->cursor.map[i] = map;
 	}
 
 	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
 	while (i) {
 		--i;
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 {
 	struct drm_device *dev = &ast->base;
 	struct drm_gem_vram_object *gbo;
+	struct dma_buf_map map;
 	int ret;
 	void *src;
 	void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	ret = drm_gem_vram_pin(gbo, 0);
 	if (ret)
 		return ret;
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
-		ret = PTR_ERR(src);
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret)
 		goto err_drm_gem_vram_unpin;
-	}
+	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
 
 	/* do data transfer to cursor BO */
 	update_cursor_image(dst, src, fb->width, fb->height);
 
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 	drm_gem_vram_unpin(gbo);
 
 	return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 	u8 __iomem *sig;
 	u8 jreg;
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
 
 	sig = dst + AST_HWC_SIZE;
 	writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
 #ifndef __AST_DRV_H__
 #define __AST_DRV_H__
 
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
 
 #include <drm/drm_connector.h>
 #include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
 
 	struct {
 		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
-		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
 		unsigned int next_index;
 	} cursor;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..a89ad4570e3c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
 #include <linux/pagemap.h>
 #include <linux/shmem_fs.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
 #include <linux/mem_encrypt.h>
 #include <linux/pagevec.h>
 
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 
 void *drm_gem_vmap(struct drm_gem_object *obj)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
-	else
-		vaddr = ERR_PTR(-EOPNOTSUPP);
+	if (!obj->funcs->vmap)
+		return ERR_PTR(-EOPNOTSUPP);
 
-	if (!vaddr)
-		vaddr = ERR_PTR(-ENOMEM);
+	ret = obj->funcs->vmap(obj, &map);
+	if (ret)
+		return ERR_PTR(ret);
+	else if (dma_buf_map_is_null(&map))
+		return ERR_PTR(-ENOMEM);
 
-	return vaddr;
+	return map.vaddr;
 }
 
 void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
 {
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
 	if (!vaddr)
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, vaddr);
+		obj->funcs->vunmap(obj, &map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ *       store.
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * driver's &drm_gem_object_funcs.vmap callback.
  *
  * Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
-	return cma_obj->vaddr;
+	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map;
 	int ret = 0;
 
-	if (shmem->vmap_use_count++ > 0)
-		return shmem->vaddr;
+	if (shmem->vmap_use_count++ > 0) {
+		dma_buf_map_set_vaddr(map, shmem->vaddr);
+		return 0;
+	}
 
 	if (obj->import_attach) {
-		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
-		if (!ret)
-			shmem->vaddr = map.vaddr;
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+		if (!ret) {
+			if (WARN_ON(map->is_iomem)) {
+				ret = -EIO;
+				goto err_put_pages;
+			}
+			shmem->vaddr = map->vaddr;
+		}
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 				    VM_MAP, prot);
 		if (!shmem->vaddr)
 			ret = -ENOMEM;
+		else
+			dma_buf_map_set_vaddr(map, shmem->vaddr);
 	}
 
 	if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
-	return shmem->vaddr;
+	return 0;
 
 err_put_pages:
 	if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
-	return ERR_PTR(ret);
+	return ret;
 }
 
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
  *
  * This function makes sure that a contiguous kernel virtual address mapping
  * exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	void *vaddr;
 	int ret;
 
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
-		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+		return ret;
+	ret = drm_gem_shmem_vmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 
-	return vaddr;
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+					struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
 
 	if (WARN_ON_ONCE(!shmem->vmap_use_count))
 		return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 		return;
 
 	if (obj->import_attach)
-		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	else
 		vunmap(shmem->vaddr);
 
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping of a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
  * This function cleans up a kernel virtual address mapping acquired by
  * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
  * also be called by drivers directly, in which case it will hide the
  * differences between dma-buf imported and natively allocated objects.
  */
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem);
+	drm_gem_shmem_vunmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index bfc059945e31..96fbca6c2e5d 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 
 #include <drm/drm_debugfs.h>
@@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
 	 * up; only release the GEM object.
 	 */
 
-	WARN_ON(gbo->kmap_use_count);
-	WARN_ON(gbo->kmap.virtual);
+	WARN_ON(gbo->vmap_use_count);
+	WARN_ON(dma_buf_map_is_set(&gbo->map));
 
 	drm_gem_object_release(&gbo->bo.base);
 }
@@ -379,29 +380,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+				    struct dma_buf_map *map)
 {
 	int ret;
-	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
-	bool is_iomem;
 
-	if (gbo->kmap_use_count > 0)
+	if (gbo->vmap_use_count > 0)
 		goto out;
 
-	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+	ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 out:
-	++gbo->kmap_use_count;
-	return ttm_kmap_obj_virtual(kmap, &is_iomem);
+	++gbo->vmap_use_count;
+	*map = gbo->map;
+
+	return 0;
 }
 
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+				       struct dma_buf_map *map)
 {
-	if (WARN_ON_ONCE(!gbo->kmap_use_count))
+	struct drm_device *dev = gbo->bo.base.dev;
+
+	if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
 		return;
-	if (--gbo->kmap_use_count > 0)
+
+	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+		return; /* BUG: map not mapped from this BO */
+
+	if (--gbo->vmap_use_count > 0)
 		return;
 
 	/*
@@ -415,7 +424,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
 /**
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
- * @gbo:	The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * The vmap function pins a GEM VRAM object to its current location, either
  * system or video memory, and maps its buffer into kernel address space.
@@ -424,48 +435,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
  * unmap and unpin the GEM VRAM object.
  *
  * Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
-	void *base;
 
 	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo);
-	if (IS_ERR(base)) {
-		ret = PTR_ERR(base);
+	ret = drm_gem_vram_kmap_locked(gbo, map);
+	if (ret)
 		goto err_drm_gem_vram_unpin_locked;
-	}
 
 	ttm_bo_unreserve(&gbo->bo);
 
-	return base;
+	return 0;
 
 err_drm_gem_vram_unpin_locked:
 	drm_gem_vram_unpin_locked(gbo);
 err_ttm_bo_unreserve:
 	ttm_bo_unreserve(&gbo->bo);
-	return ERR_PTR(ret);
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
 
 /**
  * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo:	The GEM VRAM object to unmap
- * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  *
  * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
  * the documentation for drm_gem_vram_vmap() for more information.
  */
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
 
@@ -473,7 +480,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
 	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
 		return;
 
-	drm_gem_vram_kunmap_locked(gbo);
+	drm_gem_vram_kunmap_locked(gbo, map);
 	drm_gem_vram_unpin_locked(gbo);
 
 	ttm_bo_unreserve(&gbo->bo);
@@ -564,15 +571,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
 					       bool evict,
 					       struct ttm_resource *new_mem)
 {
-	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	struct ttm_buffer_object *bo = &gbo->bo;
+	struct drm_device *dev = bo->base.dev;
 
-	if (WARN_ON_ONCE(gbo->kmap_use_count))
+	if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
 		return;
 
-	if (!kmap->virtual)
-		return;
-	ttm_bo_kunmap(kmap);
-	kmap->virtual = NULL;
+	ttm_bo_vunmap(bo, &gbo->map);
 }
 
 static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -829,37 +834,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
 }
 
 /**
- * drm_gem_vram_object_vmap() - \
-	Implements &struct drm_gem_object_funcs.vmap
- * @gem:	The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ *	Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
-	void *base;
 
-	base = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(base))
-		return NULL;
-	return base;
+	return drm_gem_vram_vmap(gbo, map);
 }
 
 /**
- * drm_gem_vram_object_vunmap() - \
-	Implements &struct drm_gem_object_funcs.vunmap
- * @gem:	The GEM object to unmap
- * @vaddr:	The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ *	Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  */
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
-				       void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 
-	drm_gem_vram_vunmap(gbo, vaddr);
+	drm_gem_vram_vunmap(gbo, map);
 }
 
 /*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,15 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
 }
 
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	return etnaviv_gem_vmap(obj);
+	void *vaddr;
+
+	vaddr = etnaviv_gem_vmap(obj);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(obj);
 }
 
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 
 	if (bo->heap_size)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
-	return drm_gem_shmem_vmap(obj);
+	return drm_gem_shmem_vmap(obj, map);
 }
 
 static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
 
+#include <linux/dma-buf-map.h>
 #include <linux/kthread.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 	struct lima_dump_chunk_buffer *buffer_chunk;
 	u32 size, task_size, mem_size;
 	int i;
+	struct dma_buf_map map;
+	int ret;
 
 	mutex_lock(&dev->error_task_list_lock);
 
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);
 
-			data = drm_gem_shmem_vmap(&bo->base.base);
-			if (IS_ERR_OR_NULL(data)) {
+			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+			if (ret) {
 				kvfree(et);
 				goto out;
 			}
 
-			memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
 
-			drm_gem_shmem_vunmap(&bo->base.base, data);
+			drm_gem_shmem_vunmap(&bo->base.base, &map);
 		}
 
 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
 		      struct drm_rect *clip)
 {
 	struct drm_device *dev = &mdev->base;
+	struct dma_buf_map map;
 	void *vmap;
+	int ret;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (drm_WARN_ON(dev, !vmap))
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (drm_WARN_ON(dev, ret))
 		return; /* BUG: SHMEM BO should always be vmapped */
+	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 
 	/* Always scanout image at VRAM offset 0 */
 	mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
 	select FW_LOADER
 	select DRM_KMS_HELPER
 	select DRM_TTM
+	select DRM_TTM_HELPER
 	select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
 	select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
 	select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
 	unsigned mode;
 
 	struct nouveau_drm_tile *tile;
-
-	struct ttm_bo_kmap_obj dma_buf_vmap;
 };
 
 static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 9a421c3949de..f942b526b0a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
  *
  */
 
+#include <drm/drm_gem_ttm_helper.h>
+
 #include "nouveau_drv.h"
 #include "nouveau_dma.h"
 #include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
 	.pin = nouveau_gem_prime_pin,
 	.unpin = nouveau_gem_prime_unpin,
 	.get_sg_table = nouveau_gem_prime_get_sg_table,
-	.vmap = nouveau_gem_prime_vmap,
-	.vunmap = nouveau_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
 
 #endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
 }
 
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
-			  &nvbo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
-	ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
 struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
 							 struct dma_buf_attachment *attach,
 							 struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
 #include <drm/drm_gem_shmem_helper.h>
 #include <drm/panfrost_drm.h>
 #include <linux/completion.h>
+#include <linux/dma-buf-map.h>
 #include <linux/iopoll.h>
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map;
 	struct drm_gem_shmem_object *bo;
 	u32 cfg, as;
 	int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}
 
-	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
-	if (IS_ERR(perfcnt->buf)) {
-		ret = PTR_ERR(perfcnt->buf);
+	ret = drm_gem_shmem_vmap(&bo->base, &map);
+	if (ret)
 		goto err_put_mapping;
-	}
+	perfcnt->buf = map.vaddr;
 
 	/*
 	 * Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;
 
 err_vunmap:
-	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&bo->base, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
 
 	if (user != perfcnt->user)
 		return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 		  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
 
 	perfcnt->user = NULL;
-	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..e165fa9b2089 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
 
 #include <linux/crc32.h>
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_drv.h>
 #include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 	struct drm_gem_object *obj;
 	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
 	int ret;
+	struct dma_buf_map user_map;
+	struct dma_buf_map cursor_map;
 	void *user_ptr;
 	int size = 64*64*4;
 
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		user_bo = gem_to_qxl_bo(obj);
 
 		/* pinning is done in the prepare/cleanup framevbuffer */
-		ret = qxl_bo_kmap(user_bo, &user_ptr);
+		ret = qxl_bo_kmap(user_bo, &user_map);
 		if (ret)
 			goto out_free_release;
+		user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 		ret = qxl_alloc_bo_reserved(qdev, release,
 					    sizeof(struct qxl_cursor) + size,
@@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		if (ret)
 			goto out_unpin;
 
-		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		ret = qxl_bo_kmap(cursor_bo, &cursor_map);
 		if (ret)
 			goto out_backoff;
 
@@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 {
 	int ret;
 	struct drm_gem_object *gobj;
+	struct dma_buf_map map;
 	int monitors_config_size = sizeof(struct qxl_monitors_config) +
 		qxl_num_crtc * sizeof(struct qxl_head);
 
@@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 	if (ret)
 		return ret;
 
-	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+	qxl_bo_kmap(qdev->monitors_config_bo, &map);
 
 	qdev->monitors_config = qdev->monitors_config_bo->kptr;
 	qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
  * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  */
 
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_fourcc.h>
 
 #include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
 					      unsigned int num_clips,
 					      struct qxl_bo *clips_bo)
 {
+	struct dma_buf_map map;
 	struct qxl_clip_rects *dev_clips;
 	int ret;
 
-	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
-	if (ret) {
+	ret = qxl_bo_kmap(clips_bo, &map);
+	if (ret)
 		return NULL;
-	}
+	dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
 	dev_clips->num_rects = num_clips;
 	dev_clips->chunk.next_chunk = 0;
 	dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	int stride = fb->pitches[0];
 	/* depth is not actually interesting, we don't mask with it */
 	int depth = fb->format->cpp[0] * 8;
+	struct dma_buf_map surface_map;
 	uint8_t *surface_base;
 	struct qxl_release *release;
 	struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_bo_kmap(bo, (void **)&surface_base);
+	ret = qxl_bo_kmap(bo, &surface_map);
 	if (ret)
 		goto out_release_backoff;
+	surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	ret = qxl_image_init(qdev, release, dimage, surface_base,
 			     left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
  * Definitions taken from spice-protocol, plus kernel driver specific bits.
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/dma-fence.h>
 #include <linux/firmware.h>
 #include <linux/platform_device.h>
@@ -50,6 +51,8 @@
 
 #include "qxl_dev.h"
 
+struct dma_buf_map;
+
 #define DRIVER_AUTHOR		"Dave Airlie"
 
 #define DRIVER_NAME		"qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
 	/* Protected by tbo.reserved */
 	struct ttm_place		placements[3];
 	struct ttm_placement		placement;
-	struct ttm_bo_kmap_obj		kmap;
+	struct dma_buf_map		map;
 	void				*kptr;
 	unsigned int                    map_count;
 	int                             type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
 void qxl_gem_object_close(struct drm_gem_object *obj,
 			  struct drm_file *file_priv);
 void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
 
 /* qxl_dumb.c */
 int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 				struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 547d46c14d56..ceebc5881f68 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
  *          Alon Levy
  */
 
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
 #include "qxl_drv.h"
 #include "qxl_object.h"
 
-#include <linux/io-mapping.h>
 static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
 {
 	struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
 	return 0;
 }
 
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
 {
-	bool is_iomem;
 	int r;
 
 	if (bo->kptr) {
-		if (ptr)
-			*ptr = bo->kptr;
 		bo->map_count++;
-		return 0;
+		goto out;
 	}
-	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+	r = ttm_bo_vmap(&bo->tbo, &bo->map);
 	if (r)
 		return r;
-	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
-	if (ptr)
-		*ptr = bo->kptr;
 	bo->map_count = 1;
+
+	/* TODO: Remove kptr in favor of map everywhere. */
+	if (bo->map.is_iomem)
+		bo->kptr = (void *)bo->map.vaddr_iomem;
+	else
+		bo->kptr = bo->map.vaddr;
+
+out:
+	*map = bo->map;
 	return 0;
 }
 
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 	void *rptr;
 	int ret;
 	struct io_mapping *map;
+	struct dma_buf_map bo_map;
 
 	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
 		map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 		return rptr;
 	}
 
-	ret = qxl_bo_kmap(bo, &rptr);
+	ret = qxl_bo_kmap(bo, &bo_map);
 	if (ret)
 		return NULL;
+	rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	rptr += page_offset * PAGE_SIZE;
 	return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
 	if (bo->map_count > 0)
 		return;
 	bo->kptr = NULL;
-	ttm_bo_kunmap(&bo->kmap);
+	ttm_bo_vunmap(&bo->tbo, &bo->map);
 }
 
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 			 bool kernel, bool pinned, u32 domain,
 			 struct qxl_surface *surf,
 			 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
 extern void qxl_bo_kunmap(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
-	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr);
+	ret = qxl_bo_kmap(bo, map);
 	if (ret < 0)
-		return ERR_PTR(ret);
+		return ret;
 
-	return ptr;
+	return 0;
 }
 
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
 	/* Constant after initialization */
 	struct radeon_device		*rdev;
 
-	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	pid_t				pid;
 
 #ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
 #include <drm/drm_debugfs.h>
 #include <drm/drm_device.h>
 #include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
 #include <drm/radeon_drm.h>
 
 #include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
 struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 static const struct drm_gem_object_funcs radeon_gem_object_funcs;
 
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
 	.pin = radeon_gem_prime_pin,
 	.unpin = radeon_gem_prime_unpin,
 	.get_sg_table = radeon_gem_prime_get_sg_table,
-	.vmap = radeon_gem_prime_vmap,
-	.vunmap = radeon_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 /*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
 }
 
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct radeon_bo *bo = gem_to_radeon_bo(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
-			  &bo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
-	ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
 struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 							struct dma_buf_attachment *attach,
 							struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
 	return ERR_PTR(ret);
 }
 
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
-	if (rk_obj->pages)
-		return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
-			    pgprot_writecombine(PAGE_KERNEL));
+	if (rk_obj->pages) {
+		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+				  pgprot_writecombine(PAGE_KERNEL));
+		if (!vaddr)
+			return -ENOMEM;
+		dma_buf_map_set_vaddr(map, vaddr);
+		return 0;
+	}
 
 	if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
-		return NULL;
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
 
-	return rk_obj->kvaddr;
+	return 0;
 }
 
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
 	if (rk_obj->pages) {
-		vunmap(vaddr);
+		vunmap(map->vaddr);
 		return;
 	}
 
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
 rockchip_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct dma_buf_attachment *attach,
 				   struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm driver mmap file operations */
 int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
  */
 
 #include <linux/console.h>
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 #include <linux/pci.h>
 
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 			       struct drm_rect *rect)
 {
 	struct cirrus_device *cirrus = to_cirrus(fb->dev);
+	struct dma_buf_map map;
 	void *vmap;
 	int idx, ret;
 
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	if (!drm_dev_enter(&cirrus->dev, &idx))
 		goto out;
 
-	ret = -ENOMEM;
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (!vmap)
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret)
 		goto out_dev_exit;
+	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	if (cirrus->cpp == fb->format->cpp[0])
 		drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	else
 		WARN_ON_ONCE("cpp mismatch");
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 	ret = 0;
 
 out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 {
 	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
 	struct drm_framebuffer *fb;
+	struct dma_buf_map map;
 	void *vaddr;
 	u8 *src;
 
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
-		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
+		GM12U320_ERR("failed to vmap fb: %d\n", ret);
 		goto put_fb;
 	}
+	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	if (fb->obj[0]->import_attach) {
 		ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
 	}
 vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 put_fb:
 	drm_framebuffer_put(fb);
 	gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	struct urb *urb;
 	struct drm_rect clip;
 	int log_bpp;
+	struct dma_buf_map map;
 	void *vaddr;
 
 	ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 			return ret;
 	}
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
 		DRM_ERROR("failed to vmap fb\n");
 		goto out_dma_buf_end_cpu_access;
 	}
+	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	urb = udl_get_urb(dev);
 	if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	ret = 0;
 
 out_drm_gem_shmem_vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 out_dma_buf_end_cpu_access:
 	if (import_attach) {
 		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
  *          Michael Thayer <michael.thayer@oracle.com,
  *          Hans de Goede <hdegoede@redhat.com>
  */
+
+#include <linux/dma-buf-map.h>
 #include <linux/export.h>
 
 #include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	u32 height = plane->state->crtc_h;
 	size_t data_size, mask_size;
 	u32 flags;
+	struct dma_buf_map map;
+	int ret;
 	u8 *src;
 
 	/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	vbox_crtc->cursor_enabled = true;
 
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret) {
 		/*
 		 * BUG: we should have pinned the BO in prepare_fb().
 		 */
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 		DRM_WARN("Could not map cursor bo, skipping update\n");
 		return;
 	}
+	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	/*
 	 * The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	data_size = width * height * 4 + mask_size;
 
 	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_cma_prime_mmap(obj, vma);
 }
 
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
 	if (bo->validated_shader) {
 		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 	}
 
-	return drm_gem_cma_prime_vmap(obj);
+	return drm_gem_cma_prime_vmap(obj, map);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index cc79b1aaa878..904f2c36c963 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int vc4_bo_cache_init(struct drm_device *dev);
 void vc4_bo_cache_destroy(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	return &obj->base;
 }
 
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
 	struct page **pages;
+	void *vaddr;
 
 	pages = vgem_pin_pages(bo);
 	if (IS_ERR(pages))
-		return NULL;
+		return PTR_ERR(pages);
+
+	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
 
-	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	return 0;
 }
 
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 	vgem_unpin_pages(bo);
 }
 
diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
index 9890137bcb8d..0824327cc860 100644
--- a/drivers/gpu/drm/vkms/vkms_plane.c
+++ b/drivers/gpu/drm/vkms/vkms_plane.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0+
 
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
@@ -146,15 +148,16 @@ static int vkms_prepare_fb(struct drm_plane *plane,
 			   struct drm_plane_state *state)
 {
 	struct drm_gem_object *gem_obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (!state->fb)
 		return 0;
 
 	gem_obj = drm_gem_fb_get_obj(state->fb, 0);
-	vaddr = drm_gem_shmem_vmap(gem_obj);
-	if (IS_ERR(vaddr))
-		DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(gem_obj, &map);
+	if (ret)
+		DRM_ERROR("vmap failed: %d\n", ret);
 
 	return drm_gem_fb_prepare_fb(plane, state);
 }
@@ -164,13 +167,15 @@ static void vkms_cleanup_fb(struct drm_plane *plane,
 {
 	struct drm_gem_object *gem_obj;
 	struct drm_gem_shmem_object *shmem_obj;
+	struct dma_buf_map map;
 
 	if (!old_state->fb)
 		return;
 
 	gem_obj = drm_gem_fb_get_obj(old_state->fb, 0);
 	shmem_obj = to_drm_gem_shmem_obj(drm_gem_fb_get_obj(old_state->fb, 0));
-	drm_gem_shmem_vunmap(gem_obj, shmem_obj->vaddr);
+	dma_buf_map_set_vaddr(&map, shmem_obj->vaddr);
+	drm_gem_shmem_vunmap(gem_obj, &map);
 }
 
 static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = {
diff --git a/drivers/gpu/drm/vkms/vkms_writeback.c b/drivers/gpu/drm/vkms/vkms_writeback.c
index 26b903926872..67f80ab1e85f 100644
--- a/drivers/gpu/drm/vkms/vkms_writeback.c
+++ b/drivers/gpu/drm/vkms/vkms_writeback.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0+
 
-#include "vkms_drv.h"
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_fourcc.h>
 #include <drm/drm_writeback.h>
 #include <drm/drm_probe_helper.h>
@@ -8,6 +9,8 @@
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_gem_shmem_helper.h>
 
+#include "vkms_drv.h"
+
 static const u32 vkms_wb_formats[] = {
 	DRM_FORMAT_XRGB8888,
 };
@@ -65,19 +68,20 @@ static int vkms_wb_prepare_job(struct drm_writeback_connector *wb_connector,
 			       struct drm_writeback_job *job)
 {
 	struct drm_gem_object *gem_obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (!job->fb)
 		return 0;
 
 	gem_obj = drm_gem_fb_get_obj(job->fb, 0);
-	vaddr = drm_gem_shmem_vmap(gem_obj);
-	if (IS_ERR(vaddr)) {
-		DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr));
-		return PTR_ERR(vaddr);
+	ret = drm_gem_shmem_vmap(gem_obj, &map);
+	if (ret) {
+		DRM_ERROR("vmap failed: %d\n", ret);
+		return ret;
 	}
 
-	job->priv = vaddr;
+	job->priv = map.vaddr;
 
 	return 0;
 }
@@ -87,12 +91,14 @@ static void vkms_wb_cleanup_job(struct drm_writeback_connector *connector,
 {
 	struct drm_gem_object *gem_obj;
 	struct vkms_device *vkmsdev;
+	struct dma_buf_map map;
 
 	if (!job->fb)
 		return;
 
 	gem_obj = drm_gem_fb_get_obj(job->fb, 0);
-	drm_gem_shmem_vunmap(gem_obj, job->priv);
+	dma_buf_map_set_vaddr(&map, job->priv);
+	drm_gem_shmem_vunmap(gem_obj, &map);
 
 	vkmsdev = drm_device_to_vkms_device(gem_obj->dev);
 	vkms_set_composer(&vkmsdev->output, false);
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return gem_mmap_obj(xen_obj, vma);
 }
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	void *vaddr;
 
 	if (!xen_obj->pages)
-		return NULL;
+		return -ENOMEM;
 
 	/* Please see comment in gem_mmap_obj on mapping and attributes. */
-	return vmap(xen_obj->pages, xen_obj->num_pages,
-		    VM_MAP, PAGE_KERNEL);
+	vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+		     VM_MAP, PAGE_KERNEL);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr)
+				    struct dma_buf_map *map)
 {
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 }
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
 #define __XEN_DRM_FRONT_GEM_H
 
 struct dma_buf_attachment;
+struct dma_buf_map;
 struct drm_device;
 struct drm_gem_object;
 struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
 int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				 struct dma_buf_map *map);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr);
+				    struct dma_buf_map *map);
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
 				 struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
 
 #include <drm/drm_vma_manager.h>
 
+struct dma_buf_map;
 struct drm_gem_object;
 
 /**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
 
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+#include <linux/dma-buf-map.h>
 #include <linux/kernel.h> /* for container_of() */
 
 struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
 
 /**
  * struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem:	GEM object
  * @bo:		TTM buffer object
- * @kmap:	Mapping information for @bo
+ * @map:	Mapping information for @bo
  * @placement:	TTM placement information. Supported placements are \
 	%TTM_PL_VRAM and %TTM_PL_SYSTEM
  * @placements:	TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
  */
 struct drm_gem_vram_object {
 	struct ttm_buffer_object bo;
-	struct ttm_bo_kmap_obj kmap;
+	struct dma_buf_map map;
 
 	/**
-	 * @kmap_use_count:
+	 * @vmap_use_count:
 	 *
 	 * Reference count on the virtual address.
 	 * The address are un-mapped when the count reaches zero.
 	 */
-	unsigned int kmap_use_count;
+	unsigned int vmap_use_count;
 
 	/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
 	struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
 
 int drm_gem_vram_fill_create_dumb(struct drm_file *file,
 				  struct drm_device *dev,
-- 
2.28.0
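[Editor's note: the hunks above convert the GEM vmap/vunmap callbacks from returning a raw pointer (NULL or ERR_PTR on failure) to returning an errno code and filling in a struct dma_buf_map. The sketch below is a minimal user-space model of the two calling conventions; `dma_buf_map_model`, `vmap_old` and `vmap_new` are illustrative names, not kernel API, and malloc stands in for the real mapping call.]

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-in for the kernel's struct dma_buf_map: a virtual
 * address plus an is_iomem flag telling callers which accessors to use. */
struct dma_buf_map_model {
	void *vaddr;
	bool is_iomem;
};

/* Mirrors dma_buf_map_set_vaddr(): system memory, is_iomem cleared. */
static void dma_buf_map_set_vaddr_model(struct dma_buf_map_model *map,
					void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

/* Old convention: return the pointer, NULL (or ERR_PTR) on failure. */
static void *vmap_old(size_t size)
{
	return malloc(size);	/* NULL on failure */
}

/* New convention: return an errno code and fill in the mapping
 * descriptor, as drm_gem_shmem_vmap() and friends now do. */
static int vmap_new(size_t size, struct dma_buf_map_model *map)
{
	void *vaddr = malloc(size);

	if (!vaddr)
		return -ENOMEM;
	dma_buf_map_set_vaddr_model(map, vaddr);
	return 0;
}
```

The int-plus-descriptor convention lets one return value carry both the address and the memory type, which the old void*-returning interface could not express.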



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:21:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:21:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9474.24916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdu-000583-Eo; Tue, 20 Oct 2020 12:21:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9474.24916; Tue, 20 Oct 2020 12:21:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqdu-00057t-Az; Tue, 20 Oct 2020 12:21:30 +0000
Received: by outflank-mailman (input) for mailman id 9474;
 Tue, 20 Oct 2020 12:21:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wemb=D3=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kUqdt-0004KG-54
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:21:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84b209fa-2c86-48b0-8b70-b3f9ff926862;
 Tue, 20 Oct 2020 12:20:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7D68DB1FF;
 Tue, 20 Oct 2020 12:20:56 +0000 (UTC)
X-Inumbo-ID: 84b209fa-2c86-48b0-8b70-b3f9ff926862
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O memory
Date: Tue, 20 Oct 2020 14:20:46 +0200
Message-Id: <20201020122046.31167-11-tzimmermann@suse.de>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de>
References: <20201020122046.31167-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

At least sparc64 requires I/O-specific access to framebuffers. This
patch updates the fbdev console accordingly.

For drivers with direct access to the framebuffer memory, the callback
functions in struct fb_ops test for the type of memory and call the
respective fb_sys_ or fb_cfb_ functions. Read and write operations are
implemented internally by DRM's fbdev helper.

For drivers that employ a shadow buffer, fbdev's blit function retrieves
the framebuffer address as struct dma_buf_map, and uses dma_buf_map
interfaces to access the buffer.

The bochs driver on sparc64 uses a workaround to flag the framebuffer as
I/O memory and avoid a HW exception. With the introduction of struct
dma_buf_map, this workaround is no longer required. The patch removes the
respective code from both bochs and fbdev.
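[Editor's note: the per-mapping dispatch described above can be modeled in a few lines. This is a user-space sketch, not the kernel code: `fb_map_model`, `memcpy_toio_model` and `fb_memcpy_to_model` are illustrative stand-ins for struct dma_buf_map, memcpy_toio() and the reworked fb_ops helpers.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for struct dma_buf_map: union of a system-memory pointer and
 * an "I/O" pointer, discriminated by is_iomem. */
struct fb_map_model {
	union {
		void *vaddr;
		volatile unsigned char *vaddr_iomem; /* models void __iomem * */
	};
	bool is_iomem;
};

/* Stand-in for memcpy_toio(): byte-wise copy through the "I/O" pointer. */
static void memcpy_toio_model(volatile unsigned char *dst, const void *src,
			      size_t len)
{
	const unsigned char *s = src;

	while (len--)
		*dst++ = *s++;
}

/* Dispatch as the reworked fb_ops callbacks do: use the I/O-memory
 * accessor when is_iomem is set, plain memcpy otherwise. */
static void fb_memcpy_to_model(struct fb_map_model *dst, const void *src,
			       size_t len)
{
	if (dst->is_iomem)
		memcpy_toio_model(dst->vaddr_iomem, src, len);
	else
		memcpy(dst->vaddr, src, len);
}
```

Keeping the flag next to the pointer is what lets the mode_config-wide fbdev_use_iomem flag be dropped further down in this patch.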

v5:
	* implement fb_read/fb_write internally (Daniel, Sam)
v4:
	* move dma_buf_map changes into separate patch (Daniel)
	* TODO list: comment on fbdev updates (Daniel)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 Documentation/gpu/todo.rst        |  19 ++-
 drivers/gpu/drm/bochs/bochs_kms.c |   1 -
 drivers/gpu/drm/drm_fb_helper.c   | 227 ++++++++++++++++++++++++++++--
 include/drm/drm_mode_config.h     |  12 --
 4 files changed, 230 insertions(+), 29 deletions(-)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 7e6fc3c04add..638b7f704339 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
 ------------------------------------------------
 
 Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
-atomic modesetting and GEM vmap support. Current generic fbdev emulation
-expects the framebuffer in system memory (or system-like memory).
+atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
+expected the framebuffer in system memory or system-like memory. By employing
+struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
+as well.
 
 Contact: Maintainer of the driver you plan to convert
 
 Level: Intermediate
 
+Reimplement functions in drm_fbdev_fb_ops without fbdev
+-------------------------------------------------------
+
+A number of callback functions in drm_fbdev_fb_ops could benefit from
+being rewritten without dependencies on the fbdev module. Some of the
+helpers could further benefit from using struct dma_buf_map instead of
+raw pointers.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
+
+Level: Advanced
+
+
 drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
 -----------------------------------------------------------------
 
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
index 13d0d04c4457..853081d186d5 100644
--- a/drivers/gpu/drm/bochs/bochs_kms.c
+++ b/drivers/gpu/drm/bochs/bochs_kms.c
@@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
 	bochs->dev->mode_config.preferred_depth = 24;
 	bochs->dev->mode_config.prefer_shadow = 0;
 	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
-	bochs->dev->mode_config.fbdev_use_iomem = true;
 	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
 
 	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 6212cd7cde1d..1d3180841778 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
 }
 
 static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
-					  struct drm_clip_rect *clip)
+					  struct drm_clip_rect *clip,
+					  struct dma_buf_map *dst)
 {
 	struct drm_framebuffer *fb = fb_helper->fb;
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
-	for (y = clip->y1; y < clip->y2; y++) {
-		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
-			memcpy(dst, src, len);
-		else
-			memcpy_toio((void __iomem *)dst, src, len);
+	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
 
+	for (y = clip->y1; y < clip->y2; y++) {
+		dma_buf_map_memcpy_to(dst, src, len);
+		dma_buf_map_incr(dst, fb->pitches[0]);
 		src += fb->pitches[0];
-		dst += fb->pitches[0];
 	}
 }
 
@@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 			ret = drm_client_buffer_vmap(helper->buffer, &map);
 			if (ret)
 				return;
-			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
+			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
 		}
+
 		if (helper->fb->funcs->dirty)
 			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
 						 &clip_copy, 1);
@@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 		return -ENODEV;
 }
 
+static bool drm_fbdev_use_iomem(struct fb_info *info)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem;
+}
+
+static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count,
+				   loff_t pos)
+{
+	const char __iomem *src = info->screen_base + pos;
+	size_t alloc_size = min(count, PAGE_SIZE);
+	ssize_t ret = 0;
+	char *tmp;
+
+	tmp = kmalloc(alloc_size, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	while (count) {
+		size_t c = min(count, alloc_size);
+
+		memcpy_fromio(tmp, src, c);
+		if (copy_to_user(buf, tmp, c)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		src += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(tmp);
+
+	return ret;
+}
+
+static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count,
+				     loff_t pos)
+{
+	const char *src = info->screen_buffer + pos;
+
+	if (copy_to_user(buf, src, count))
+		return -EFAULT;
+
+	return count;
+}
+
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+				 size_t count, loff_t *ppos)
+{
+	loff_t pos = *ppos;
+	size_t total_size;
+	ssize_t ret;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	if (info->screen_size)
+		total_size = info->screen_size;
+	else
+		total_size = info->fix.smem_len;
+
+	if (pos >= total_size)
+		return 0;
+	if (count >= total_size)
+		count = total_size;
+	if (total_size - count < pos)
+		count = total_size - pos;
+
+	if (drm_fbdev_use_iomem(info))
+		ret = fb_read_screen_base(info, buf, count, pos);
+	else
+		ret = fb_read_screen_buffer(info, buf, count, pos);
+
+	if (ret > 0)
+		*ppos += ret;
+
+	return ret;
+}
+
+static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count,
+				    loff_t pos)
+{
+	char __iomem *dst = info->screen_base + pos;
+	size_t alloc_size = min(count, PAGE_SIZE);
+	ssize_t ret = 0;
+	u8 *tmp;
+
+	tmp = kmalloc(alloc_size, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	while (count) {
+		size_t c = min(count, alloc_size);
+
+		if (copy_from_user(tmp, buf, c)) {
+			ret = -EFAULT;
+			break;
+		}
+		memcpy_toio(dst, tmp, c);
+
+		dst += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(tmp);
+
+	return ret;
+}
+
+static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count,
+				      loff_t pos)
+{
+	char *dst = info->screen_buffer + pos;
+
+	if (copy_from_user(dst, buf, count))
+		return -EFAULT;
+
+	return count;
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+				  size_t count, loff_t *ppos)
+{
+	loff_t pos = *ppos;
+	size_t total_size;
+	ssize_t ret;
+	int err = 0;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	if (info->screen_size)
+		total_size = info->screen_size;
+	else
+		total_size = info->fix.smem_len;
+
+	if (pos > total_size)
+		return -EFBIG;
+	if (count > total_size) {
+		err = -EFBIG;
+		count = total_size;
+	}
+	if (total_size - count < pos) {
+		if (!err)
+			err = -ENOSPC;
+		count = total_size - pos;
+	}
+
+	/*
+	 * Copy to framebuffer even if we already logged an error. Emulates
+	 * the behavior of the original fbdev implementation.
+	 */
+	if (drm_fbdev_use_iomem(info))
+		ret = fb_write_screen_base(info, buf, count, pos);
+	else
+		ret = fb_write_screen_buffer(info, buf, count, pos);
+
+	if (ret > 0)
+		*ppos += ret;
+
+	if (err)
+		return err;
+
+	return ret;
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+				  const struct fb_fillrect *rect)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_fillrect(info, rect);
+	else
+		drm_fb_helper_sys_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+				  const struct fb_copyarea *area)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_copyarea(info, area);
+	else
+		drm_fb_helper_sys_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+				   const struct fb_image *image)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_imageblit(info, image);
+	else
+		drm_fb_helper_sys_imageblit(info, image);
+}
+
 static const struct fb_ops drm_fbdev_fb_ops = {
 	.owner		= THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
@@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
 	.fb_release	= drm_fbdev_fb_release,
 	.fb_destroy	= drm_fbdev_fb_destroy,
 	.fb_mmap	= drm_fbdev_fb_mmap,
-	.fb_read	= drm_fb_helper_sys_read,
-	.fb_write	= drm_fb_helper_sys_write,
-	.fb_fillrect	= drm_fb_helper_sys_fillrect,
-	.fb_copyarea	= drm_fb_helper_sys_copyarea,
-	.fb_imageblit	= drm_fb_helper_sys_imageblit,
+	.fb_read	= drm_fbdev_fb_read,
+	.fb_write	= drm_fbdev_fb_write,
+	.fb_fillrect	= drm_fbdev_fb_fillrect,
+	.fb_copyarea	= drm_fbdev_fb_copyarea,
+	.fb_imageblit	= drm_fbdev_fb_imageblit,
 };
 
 static struct fb_deferred_io drm_fbdev_defio = {
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 5ffbb4ed5b35..ab424ddd7665 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -877,18 +877,6 @@ struct drm_mode_config {
 	 */
 	bool prefer_shadow_fbdev;
 
-	/**
-	 * @fbdev_use_iomem:
-	 *
-	 * Set to true if framebuffer reside in iomem.
-	 * When set to true memcpy_toio() is used when copying the framebuffer in
-	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
-	 *
-	 * FIXME: This should be replaced with a per-mapping is_iomem
-	 * flag (like ttm does), and then used everywhere in fbdev code.
-	 */
-	bool fbdev_use_iomem;
-
 	/**
 	 * @quirk_addfb_prefer_xbgr_30bpp:
 	 *
-- 
2.28.0
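[Editor's note: the bounce-buffer loop used by fb_read_screen_base() above can be sketched in user space. `bounce_read_model` and `CHUNK` are illustrative names; memcpy stands in for memcpy_fromio() and copy_to_user(), and error handling for a failed user copy is omitted for brevity.]

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

#define CHUNK 4096	/* stands in for PAGE_SIZE */

/* Model of fb_read_screen_base(): the "I/O" pointer is never handed to
 * the user-copy routine directly; each chunk is bounced through a
 * temporary kernel buffer of at most one page. */
static long bounce_read_model(char *ubuf, const char *iosrc, size_t count)
{
	size_t alloc_size = count < CHUNK ? count : CHUNK;
	long ret = 0;
	char *tmp;

	tmp = malloc(alloc_size);
	if (!tmp)
		return -ENOMEM;

	while (count) {
		size_t c = count < alloc_size ? count : alloc_size;

		memcpy(tmp, iosrc, c);	/* memcpy_fromio() in the kernel */
		memcpy(ubuf, tmp, c);	/* copy_to_user() in the kernel */

		iosrc += c;
		ubuf += c;
		ret += c;
		count -= c;
	}

	free(tmp);
	return ret;
}
```

The bounce buffer is what makes the I/O path safe on architectures such as sparc64, where dereferencing I/O memory with ordinary loads and stores can fault.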



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:39:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9520.24959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqvY-0006zL-Be; Tue, 20 Oct 2020 12:39:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9520.24959; Tue, 20 Oct 2020 12:39:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUqvY-0006zE-8d; Tue, 20 Oct 2020 12:39:44 +0000
Received: by outflank-mailman (input) for mailman id 9520;
 Tue, 20 Oct 2020 12:39:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ryym=D3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kUqvV-0006z8-QS
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:39:42 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8ba0917-daaa-427c-ab01-3ca72fc8b542;
 Tue, 20 Oct 2020 12:39:40 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.1 DYNA|AUTH)
 with ESMTPSA id e003b5w9KCdVCCA
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 20 Oct 2020 14:39:31 +0200 (CEST)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Ryym=D3=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kUqvV-0006z8-QS
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:39:42 +0000
X-Inumbo-ID: f8ba0917-daaa-427c-ab01-3ca72fc8b542
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.25])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f8ba0917-daaa-427c-ab01-3ca72fc8b542;
	Tue, 20 Oct 2020 12:39:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603197579;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=PqavFAEZ5cTdGstAdYlAd/8S+6nO+Bcr5/jInWJ/Ous=;
	b=MCvXTRAfywO92g61wBRufXFveVHulBBCifhqliheJ5J279JxxqpTp1xzd0KApIiMDy
	Pn7c9R7ELpwW/eoHjjOWpXU8eq3oNlOE9STVbMAQuPVj0VW8Tn0EaQkrSjdWetj1o1T6
	lJryZsq9jFA14fsJ1kqk62ueH5VM25cGi+3i+gzFmlnmyfRmsn+vQjHUzNCQr4aH1CCt
	BkwBcRksw42ar2SCeP6VoZRTM7OnzXvxtyIAkaC14Y34RouEV7Q4D4sJXw3zakUmyDMg
	Nm7JJ/Y/sfd79XmdBuFDzJPoJaBfDvru9VMP4jL67ODLDzjoBnoSad3wbHBA4OzoWEiU
	C9tg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.2.1 DYNA|AUTH)
	with ESMTPSA id e003b5w9KCdVCCA
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Tue, 20 Oct 2020 14:39:31 +0200 (CEST)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1] xl: use proper name for bash_completion file
Date: Tue, 20 Oct 2020 14:39:28 +0200
Message-Id: <20201020123928.27099-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Files in the bash-completion directories should be named after the commands
they complete, without a suffix. Without this change 'xl' is not recognized
as a command with completion support when BASH_COMPLETION_DIR is set to
/usr/share/bash-completion/completions.
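
As a rough illustration (hypothetical temp directory, not part of the patch), bash-completion's lazy loader looks up a file named exactly like the command, so a file installed as "xl.sh" is never found while "xl" is:

```shell
# Simulate an installed completions dir with a correctly named file.
compdir=$(mktemp -d)
printf '_xl() { :; }\n' > "$compdir/xl"

cmd=xl
# This mimics the per-command lookup bash-completion performs on first use:
# it sources "$compdir/$cmd", not "$compdir/$cmd.sh".
if [ -r "$compdir/$cmd" ]; then
    . "$compdir/$cmd"
    echo "loaded completion for $cmd"
else
    echo "no completion for $cmd"
fi
```

With the file named "xl.sh" instead, the `[ -r ... ]` test fails and no completion is ever loaded for the command.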

Fixes commit 9136a919b19929ecb242ef327053d55d824397df

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/xl/Makefile        | 4 ++--
 tools/xl/bash-completion | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index bdf67c8464..656b21c7da 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -45,11 +45,11 @@ install: all
 	$(INSTALL_DIR) $(DESTDIR)$(sbindir)
 	$(INSTALL_DIR) $(DESTDIR)$(BASH_COMPLETION_DIR)
 	$(INSTALL_PROG) xl $(DESTDIR)$(sbindir)
-	$(INSTALL_DATA) bash-completion $(DESTDIR)$(BASH_COMPLETION_DIR)/xl.sh
+	$(INSTALL_DATA) bash-completion $(DESTDIR)$(BASH_COMPLETION_DIR)/xl
 
 .PHONY: uninstall
 uninstall:
-	rm -f $(DESTDIR)$(BASH_COMPLETION_DIR)/xl.sh
+	rm -f $(DESTDIR)$(BASH_COMPLETION_DIR)/xl
 	rm -f $(DESTDIR)$(sbindir)/xl
 
 .PHONY: clean
diff --git a/tools/xl/bash-completion b/tools/xl/bash-completion
index b7cd6b3992..7c6ed32f88 100644
--- a/tools/xl/bash-completion
+++ b/tools/xl/bash-completion
@@ -1,4 +1,4 @@
-# Copy this file to /etc/bash_completion.d/xl.sh
+# Copy this file to /etc/bash_completion.d/xl
 
 _xl()
 {


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 12:47:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 12:47:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9523.24972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUr3J-0007wR-8d; Tue, 20 Oct 2020 12:47:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9523.24972; Tue, 20 Oct 2020 12:47:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUr3J-0007wK-4v; Tue, 20 Oct 2020 12:47:45 +0000
Received: by outflank-mailman (input) for mailman id 9523;
 Tue, 20 Oct 2020 12:47:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Cu2Z=D3=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kUr3H-0007wF-Lt
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:47:43 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.87]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ccfaa31-2f37-436f-8b19-69d604387336;
 Tue, 20 Oct 2020 12:47:41 +0000 (UTC)
Received: from AM6P193CA0134.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:85::39)
 by AM6PR08MB5032.eurprd08.prod.outlook.com (2603:10a6:20b:ea::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Tue, 20 Oct
 2020 12:47:39 +0000
Received: from VE1EUR03FT040.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:85:cafe::4f) by AM6P193CA0134.outlook.office365.com
 (2603:10a6:209:85::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.22 via Frontend
 Transport; Tue, 20 Oct 2020 12:47:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT040.mail.protection.outlook.com (10.152.18.210) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Tue, 20 Oct 2020 12:47:36 +0000
Received: ("Tessian outbound ba2270a55485:v64");
 Tue, 20 Oct 2020 12:47:34 +0000
Received: from 83a7448c0371.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3CB12C91-ED4E-4603-A808-D84EF30EA363.1; 
 Tue, 20 Oct 2020 12:46:58 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 83a7448c0371.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 20 Oct 2020 12:46:58 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2168.eurprd08.prod.outlook.com (2603:10a6:4:84::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.27; Tue, 20 Oct
 2020 12:46:54 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Tue, 20 Oct 2020
 12:46:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Cu2Z=D3=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kUr3H-0007wF-Lt
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 12:47:43 +0000
X-Inumbo-ID: 3ccfaa31-2f37-436f-8b19-69d604387336
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown [40.107.8.87])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3ccfaa31-2f37-436f-8b19-69d604387336;
	Tue, 20 Oct 2020 12:47:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ymAY57WbRSpaYvoliG/60y285l3xgcLZeaAMKsM1xRI=;
 b=o2uuLoUjVwHm/NK+nbWUxRovmyHvgomxoBvHw646zjNlCW/1oymBGC2qu7brxFyJDGQI+Bz3Sq/nLzggdmG6zumiqvgX46bviY9X0wDVnRscna1HhgjMACIGFc6IMAu6HJ2JwEaZ0FTReLrX2/1+0NFv6ozyrAJ3Xlr9ms33rHY=
Received: from AM6P193CA0134.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:85::39)
 by AM6PR08MB5032.eurprd08.prod.outlook.com (2603:10a6:20b:ea::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Tue, 20 Oct
 2020 12:47:39 +0000
Received: from VE1EUR03FT040.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:85:cafe::4f) by AM6P193CA0134.outlook.office365.com
 (2603:10a6:209:85::39) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.22 via Frontend
 Transport; Tue, 20 Oct 2020 12:47:39 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT040.mail.protection.outlook.com (10.152.18.210) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21 via Frontend Transport; Tue, 20 Oct 2020 12:47:36 +0000
Received: ("Tessian outbound ba2270a55485:v64"); Tue, 20 Oct 2020 12:47:34 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 38b9bef268231f6c
X-CR-MTA-TID: 64aa7808
Received: from 83a7448c0371.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 3CB12C91-ED4E-4603-A808-D84EF30EA363.1;
	Tue, 20 Oct 2020 12:46:58 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 83a7448c0371.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Tue, 20 Oct 2020 12:46:58 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K2pDN0fYvZC4AcqXidKzY91/vE7/CoPrNDX7EwnrBvFEs+eBdedp9LE65icuIOw7oLFf1SMH+V2k2YoJwSu/x6+g0Rkn3tnQBaDOR/NzIse0KbjNPeyhDO2Mnq74Xgaa5rrjagSlGm1HwJ4kzd2RSc1tf6zeOYCUBvF92mAffOjT1kq+v/B8Cicu6pGPPnl6z76cuYSO0EKSW8Q5ARum2KMOi0gC6HjnreOmN2PYpkjJEECKk4Y1mUPrupMt6HipWyMkAQmLNJJQUrIWyQQh1+uxKmXy7MhYlEvdrF1DLcgCORXrcG/qA2vOj1fOWyFsj9rwhyw8elBe6dMkvgmOyg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ymAY57WbRSpaYvoliG/60y285l3xgcLZeaAMKsM1xRI=;
 b=bf9pn+9CFpa0G1UAMVe+vv33DBNfwMObwikfpr9oOgg5MKS4brBZjVs/JUlmFA+fVtHDNvN9C4F88OoFwloPuJHJeNJ0N9nVU6eihTS3cOXW1CxgMSuVjzBlwj1h4vlWXVdixixscEedUA3JShCJDkLpc0pBgU3VQTymdXgBeEOaUWhq4jp5WUTrXxMjMsb07jZVGNwUrQFeeTIKWMzMRmvRIjZT8Zgot3Mul4aUqkHXLj29JPGzafS4E+BGvYdejkfcNayDRe8G6U7U/TX7UMexO0CfBMxRdkw1ESR0lsvd0HLUZN3ljUNGhYE+1PdeFuV3ESv7/r2d1SuU9cuJSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ymAY57WbRSpaYvoliG/60y285l3xgcLZeaAMKsM1xRI=;
 b=o2uuLoUjVwHm/NK+nbWUxRovmyHvgomxoBvHw646zjNlCW/1oymBGC2qu7brxFyJDGQI+Bz3Sq/nLzggdmG6zumiqvgX46bviY9X0wDVnRscna1HhgjMACIGFc6IMAu6HJ2JwEaZ0FTReLrX2/1+0NFv6ozyrAJ3Xlr9ms33rHY=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2168.eurprd08.prod.outlook.com (2603:10a6:4:84::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.27; Tue, 20 Oct
 2020 12:46:54 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Tue, 20 Oct 2020
 12:46:53 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Julien Grall
	<julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Print message if reset did not work
Thread-Topic: [PATCH] xen/arm: Print message if reset did not work
Thread-Index: AQHWo8Slq58ZQPuN/USs6zrkcavsmamfNYiAgAFBGoA=
Date: Tue, 20 Oct 2020 12:46:53 +0000
Message-ID: <B6E323F6-34FD-4B7D-97DF-8B7B4B0F7448@arm.com>
References:
 <74a7359983a9d25ca62a6edd41805ab92918e2a1.1602856636.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2010191036230.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010191036230.12247@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 6402ba4c-9456-4187-c685-08d874f652a0
x-ms-traffictypediagnostic: DB6PR0802MB2168:|AM6PR08MB5032:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB5032522EBE40CB0AC26F2AB69D1F0@AM6PR08MB5032.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 1nJzqaiNS8lF1KWRYK6DKlAKtDh6sv+qkaggGnBTIweNi3nQZwVEW9N6JqXSBGe1Nk2Y22P0AzkZPFmoI2q5lCvnCpTOh6MvsAHoTN1rxApnHW55kn+Ny58NwNvHpqYQV4iJGHvQDjWw9B2YBGqSdQNA4NWPYxOFL8tEnYfqDCDeYgC4Cy6muiTo5laxSeKg5KHvD8eA4m5uarmSaTCo0yKhRjZVVPlU15Ul21mOuVIzz/d5Ywo9Ar6fZGoEaIMC9RqW4oJ6vFvV5qeg8p+72ez8N3e9MzRypcuZ6m9wn6m4FaHfBDytc2nqShRk7wQqAFw/89Ig4h33Rh5RhuWBfg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(396003)(376002)(136003)(346002)(86362001)(83380400001)(316002)(33656002)(26005)(4326008)(15650500001)(2616005)(6512007)(2906002)(478600001)(54906003)(5660300002)(36756003)(71200400001)(6916009)(64756008)(66446008)(186003)(66476007)(66556008)(76116006)(66946007)(91956017)(8936002)(6506007)(53546011)(6486002)(8676002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 0Wjl7OEV12LcJ+rTRTcPo/EZRaFmFnBRBZZtH60QWZhOcu3hSAEBKeq9Qg+Um6SCwmiXpDSuFNk977rmfOI9Gql1NJfLmGmUzxJG801xLWkLKa1zueGT0o+gFb0UQJozWOYCa1ZGhzz+6v42S0PA945CP5I0fWQE2pIPpnNjh5bLQSMiAsaxRkgAQ7x/qoDxKKgV3d1STdfZDzLh0yaABAg4PPeE0YeCgiT0dKDXffPHuWNGb1fNy3EixAKfIyf54btAxCDZdBCd6czTBkmAzsInOFr4jFNtX2gnfY0XpVNNCWDxVkChSiTFPFS2XXXMtTQzK/k2TZCKqeJHqEiqAX9Ni/myc2oo/6FLZ1RyRKphcSnxakZPuXCazwRuE3X3u75lmSAYbRZgOtfIxG3X7Xw9lz1yNACxYGJkv6hBr62CJLOoAAlSYRWG+q7lvK6vbhZM17lgLywmuhlYKiSU+TZROi7XAvz4rsJ9iTaAWo358+vif1ZYJeuD6hEXSh+Z4KgNhJkDxgvfPu65MPLmwkxIZJYueNXQXtHSR03tFQjuSXrAQaAYHIkye3BKn9sJk41Ho6Ay8QWTZ532PS/H+WV11uvvBGiaTX8rtwSLRSN38DfyenkdZKwXS8o2af46ypcVF/wXnWtb7S3chcF8cg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <62FB6FEA9B118A43B9608C7BAFD07147@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2168
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0bd6b3b4-9d1a-4705-be6b-08d874f638f8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5fp6EqmUKbZnwfqsMs40eZtL6w3MTHo1xR2t0sIF1HpMGR50+HD8zWOiOPMWwX9yVbCBRf1BC/cGhC8UVFIQmvdT/WG5PWaCQwQ7iXbg6WOzww20+aa8Q2XYaiDeRO6u7NNRoVCqTitmyvxMGYRG/bGTDfkf0Ljl8tf2evSYWP7BszKr54FkoNMu/zmOe9lLz8/m2iVLsDDt6A52h3uQl1rcVE5mqErwyK0SjzzKhp0u3+JC0YEvQoBgmM4VfmI6p1cmw0HXfaLju9lVc5efhUJsEaI1SxfuqGUzngHkpAuslj49DiAT0Wd5VUC+RyK+pSQisJlb7uZoHLlUw+TKfxgcYoOHO+xlqn/fGTjK1ndLrKatYOFHLr1RScVDGRKin7KzfoEPF1MWSuJn1ZdgcA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(39860400002)(346002)(396003)(46966005)(33656002)(70586007)(70206006)(36906005)(2906002)(2616005)(53546011)(6506007)(5660300002)(186003)(54906003)(26005)(8936002)(336012)(82310400003)(6486002)(6862004)(316002)(4326008)(47076004)(81166007)(6512007)(107886003)(86362001)(82740400003)(478600001)(356005)(83380400001)(36756003)(8676002)(15650500001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Oct 2020 12:47:36.8750
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6402ba4c-9456-4187-c685-08d874f652a0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT040.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5032



> On 19 Oct 2020, at 18:37, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Fri, 16 Oct 2020, Bertrand Marquis wrote:
>> If for some reason the hardware reset is not working, print a message to
>> the user every 5 seconds to warn him that the system did not reset
>> properly and Xen is still looping.
>> 
>> The message is printed infinitely so that someone connecting to a serial
>> console with no history would see the message coming after 5 seconds.
>> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> xen/arch/arm/shutdown.c | 4 ++++
>> 1 file changed, 4 insertions(+)
>> 
>> diff --git a/xen/arch/arm/shutdown.c b/xen/arch/arm/shutdown.c
>> index b32f07ec0e..600088ec48 100644
>> --- a/xen/arch/arm/shutdown.c
>> +++ b/xen/arch/arm/shutdown.c
>> @@ -36,6 +36,7 @@ void machine_halt(void)
>> void machine_restart(unsigned int delay_millisecs)
>> {
>>     int timeout = 10;
>> +    unsigned long count = 0;
>> 
>>     watchdog_disable();
>>     console_start_sync();
>> @@ -59,6 +60,9 @@ void machine_restart(unsigned int delay_millisecs)
>>     {
>>         platform_reset();
>>         mdelay(100);
>> +        if ( (count % 50) == 0 )
>> +            printk(XENLOG_ERR "Xen: Platform reset did not work properly!!\n");
>> +        count++;
> 
> I'd think that one "!" is enough :-) but anyway

True :-)
Feel free to limit the exclamation to one while committing.

> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thanks

Bertrand


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 13:28:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 13:28:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9621.25239 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUrg7-0004XW-Ht; Tue, 20 Oct 2020 13:27:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9621.25239; Tue, 20 Oct 2020 13:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUrg7-0004XP-EY; Tue, 20 Oct 2020 13:27:51 +0000
Received: by outflank-mailman (input) for mailman id 9621;
 Tue, 20 Oct 2020 13:27:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eGPc=D3=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kUrg7-0004XK-4B
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:27:51 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b7ca60b-e471-4536-b1ad-14e0fdbc4209;
 Tue, 20 Oct 2020 13:27:50 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id a28so1997542ljn.3
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 06:27:50 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=eGPc=D3=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kUrg7-0004XK-4B
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:27:51 +0000
X-Inumbo-ID: 0b7ca60b-e471-4536-b1ad-14e0fdbc4209
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0b7ca60b-e471-4536-b1ad-14e0fdbc4209;
	Tue, 20 Oct 2020 13:27:50 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id a28so1997542ljn.3
        for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 06:27:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=MVMEMru1q0EklwVF80mRcema72Gntri3WkELhwCXdrw=;
        b=TvEiGXj8b3VZh2T831h/P3cov6WviKazZKIaNzZeTGfr3b1gK4RbYNl4CmEQe7RXdA
         /QoWLo/XzlZ3kmBmSYHZVlnIhcC2suqqPrZC/7zhlg9Optjxincujl9Jyu8Hnz6oZufG
         FNVoVq2smATOilvfsJHs5WkUSGfeKeX7WNukeZRCQK4QgVui49h3D5NdcTg4wGfFqKEU
         BINE5w24tgDH/+hsObg/6n6jUeGyR38j1ZXz8nzZYiEC5MJRfYb4XDhrAuWvBq1Q/Q4R
         CUDfAXA47nWeKqrYcjgNrSgDm1Ts0JwzehhLIuL0uLE88lKofW+x3ErvWhojwzYi+GlS
         K8GQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=MVMEMru1q0EklwVF80mRcema72Gntri3WkELhwCXdrw=;
        b=TSHD+SVOa+whYK8G77r1N2DxqDWaWRgN5/bXZpaxhN2Ez7uMaekoMG/2g8GAxeWJLl
         9yMV9rIkcQzfA12hmlHIqMQ47FbYR4DjBCr8iuPTrR0cTaDUebVojkURaKlydvRn3npa
         r2gU3DPnE9KV7mRy4GELSlabOuUjElOHCn9IVrmCxFxdchjqFeuoAv/vrrbpd3SDhGD8
         F4Bl1a1iWpaytsHAhaPYGVHYOwj9KJVotNPEgLzeqz2ovxQBPXPVKflJ1seuRYHdMnFX
         PyffHtb9wXkqErXyzZ0RaX3ld9ZPL6S2I9YyGTBGXIsGdPokj2f5zhLQmZ+0iyziogvP
         AGgA==
X-Gm-Message-State: AOAM530ff5nbAihYUqZjGjNWs71hzSl7KG0GXLJotNI4QzTjJMAr2l5w
	NPmrQUenNgxuqmajuMjI6kpdMJCZfQEz1csWZzk=
X-Google-Smtp-Source: ABdhPJzfXcQtq0+uZSasJjPZLRzaeLpS+AdAhSMuNN7eN2MPXJ/JNiQcxbHTvaaXZhBxKGdfOSG4SG6euJbx3FiFXKg=
X-Received: by 2002:a2e:c49:: with SMTP id o9mr1141731ljd.296.1603200469168;
 Tue, 20 Oct 2020 06:27:49 -0700 (PDT)
MIME-Version: 1.0
References: <20200525025506.225959-1-jandryuk@gmail.com> <24269.8360.504075.118119@mariner.uk.xensource.com>
In-Reply-To: <24269.8360.504075.118119@mariner.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 20 Oct 2020 09:27:36 -0400
Message-ID: <CAKf6xpuqTdSc-qnfHu=yyEo6V45QLiSP6j=XsgEudoO4ojFaJw@mail.gmail.com>
Subject: Re: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
To: Ian Jackson <ian.jackson@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <Andrew.Cooper3@citrix.com>, George Dunlap <George.Dunlap@citrix.com>, 
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, May 26, 2020 at 10:13 AM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> Jason Andryuk writes ("[PATCH] SUPPORT: Add linux device model stubdom to Toolstack"):
> > Add qemu-xen linux device model stubdomain to the Toolstack section as a
> > Tech Preview.
>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Hi, this never got applied.  It should go to staging and 4.14.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 13:33:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 13:33:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9625.25258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUrlA-0005R4-6a; Tue, 20 Oct 2020 13:33:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9625.25258; Tue, 20 Oct 2020 13:33:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUrlA-0005Qx-3a; Tue, 20 Oct 2020 13:33:04 +0000
Received: by outflank-mailman (input) for mailman id 9625;
 Tue, 20 Oct 2020 13:33:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUrl9-0005Qs-Co
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:33:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28a590f3-f2cb-4477-b40b-95232c6847d0;
 Tue, 20 Oct 2020 13:33:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7C2C8AF00;
 Tue, 20 Oct 2020 13:33:01 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kUrl9-0005Qs-Co
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:33:03 +0000
X-Inumbo-ID: 28a590f3-f2cb-4477-b40b-95232c6847d0
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 28a590f3-f2cb-4477-b40b-95232c6847d0;
	Tue, 20 Oct 2020 13:33:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603200781;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Uz1lBFJttZWkkaoKJhEkaOliHTFYzlyYAbrw8KFy1QA=;
	b=YOeVH4TRSGpbYxzQb71CsT4Zfj6cSsoI+6R878yP12YikcyjfVbvetHeaQzYUb8b5RfCEG
	npcUNu99LTjINpPMOb6jK6niOAmGDPs/UpK+IJPqdHlSyAZTxVWFiROAsp7YWcVX+W7pkW
	4QtF1pulFjt865YEgTygQ58zz47ZtLM=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 7C2C8AF00;
	Tue, 20 Oct 2020 13:33:01 +0000 (UTC)
Subject: Re: [PATCH v3 1/3] xen/x86: add nmi continuation framework
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201016085350.10233-1-jgross@suse.com>
 <20201016085350.10233-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <12640bbf-475c-3d74-9bb0-57befcadd626@suse.com>
Date: Tue, 20 Oct 2020 15:33:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201016085350.10233-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 16.10.2020 10:53, Juergen Gross wrote:
> Actions in NMI context are rather limited as e.g. locking is rather
> fragile.
> 
> Add a generic framework to continue processing in normal interrupt
> context after leaving NMI processing.
> 
> This is done by a high priority interrupt vector triggered via a
> self IPI from NMI context, which will then call the continuation
> function specified during NMI handling.

I'm concerned by there being just a single handler allowed, when
the series already introduces two uses. A single NMI instance
may signal multiple things in one go. At the very least we then
need a priority, such that SERR could override oprofile.

> @@ -1799,6 +1800,57 @@ void unset_nmi_callback(void)
>      nmi_callback = dummy_nmi_callback;
>  }
>  
> +static DEFINE_PER_CPU(nmi_contfunc_t *, nmi_cont_func);
> +static DEFINE_PER_CPU(void *, nmi_cont_arg);
> +static DEFINE_PER_CPU(bool, nmi_cont_busy);
> +
> +bool nmi_check_continuation(void)
> +{
> +    unsigned int cpu = smp_processor_id();
> +    nmi_contfunc_t *func = per_cpu(nmi_cont_func, cpu);
> +    void *arg = per_cpu(nmi_cont_arg, cpu);
> +
> +    if ( per_cpu(nmi_cont_busy, cpu) )
> +    {
> +        per_cpu(nmi_cont_busy, cpu) = false;
> +        printk("Trying to set NMI continuation while still one active!\n");

I guess the bool would better be a const void *, written
with __builtin_return_address(0) and logged accordingly here
(also including the CPU number).
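
I.e. roughly the following (userspace sketch; GCC's
__builtin_return_address(0) yields the immediate caller, and the
slot name is made up):

```c
#include <stddef.h>
#include <stdio.h>

static const void *nmi_cont_owner; /* would be per-CPU, not a bool */

/* Log the prior owner (and the CPU) if a continuation is still
 * pending; otherwise record the new requester's return address. */
static int claim_continuation(unsigned int cpu)
{
    if ( nmi_cont_owner )
    {
        printf("CPU%u: continuation from %p still pending\n",
               cpu, nmi_cont_owner);
        return 0;
    }
    nmi_cont_owner = __builtin_return_address(0);
    return 1;
}
```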

Also (nit) maybe "... while one still active"?

> +    }
> +
> +    /* Reads must be done before following write (local cpu ordering only). */
> +    barrier();
> +
> +    per_cpu(nmi_cont_func, cpu) = NULL;

I think this still is too lax, and you really need to use
xchg() to make sure you won't lose any case of "busy" (which
may become set between the if() above and the write here).
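
Concretely, something along these lines (userspace sketch with C11
atomics in place of Xen's xchg() and per-CPU variables; all names
illustrative):

```c
#include <stdatomic.h>
#include <stddef.h>

typedef void nmi_contfunc_t(void *arg);

static _Atomic(nmi_contfunc_t *) nmi_cont_func; /* per-CPU in reality */
static void *nmi_cont_arg;

static int demo_count;
static void demo_handler(void *arg) { demo_count += *(int *)arg; }

/* NMI context (single producer per CPU): refuse to overwrite a
 * still-pending continuation, then publish arg before func. */
static _Bool set_continuation(nmi_contfunc_t *func, void *arg)
{
    if ( atomic_load(&nmi_cont_func) )
        return 0;                     /* the "busy" diagnostic case */
    nmi_cont_arg = arg;
    atomic_store(&nmi_cont_func, func);
    return 1;
}

/* Continuation vector: take ownership with a single exchange, so a
 * request published between a separate check and clear can't be lost. */
static _Bool run_continuation(void)
{
    nmi_contfunc_t *func = atomic_exchange(&nmi_cont_func, NULL);

    if ( !func )
        return 0;
    func(nmi_cont_arg);
    return 1;
}
```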

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 13:40:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 13:40:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9630.25270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUrrn-0005fs-Vk; Tue, 20 Oct 2020 13:39:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9630.25270; Tue, 20 Oct 2020 13:39:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUrrn-0005fl-SG; Tue, 20 Oct 2020 13:39:55 +0000
Received: by outflank-mailman (input) for mailman id 9630;
 Tue, 20 Oct 2020 13:39:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xpr5=D3=amd.com=christian.koenig@srs-us1.protection.inumbo.net>)
 id 1kUrrm-0005fe-T1
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:39:55 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com (unknown
 [40.107.223.74]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bbbc7556-87b5-4b7e-8620-87599d5924c7;
 Tue, 20 Oct 2020 13:39:53 +0000 (UTC)
Received: from MN2PR12MB3775.namprd12.prod.outlook.com (2603:10b6:208:159::19)
 by MN2PR12MB4357.namprd12.prod.outlook.com (2603:10b6:208:262::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Tue, 20 Oct
 2020 13:39:50 +0000
Received: from MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60]) by MN2PR12MB3775.namprd12.prod.outlook.com
 ([fe80::f8f7:7403:1c92:3a60%6]) with mapi id 15.20.3477.028; Tue, 20 Oct 2020
 13:39:50 +0000
Received: from [IPv6:2a02:908:1252:fb60:be8a:bd56:1f94:86e7]
 (2a02:908:1252:fb60:be8a:bd56:1f94:86e7) by
 AM0PR01CA0137.eurprd01.prod.exchangelabs.com (2603:10a6:208:168::42) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Tue, 20 Oct 2020 13:39:40 +0000
X-Inumbo-ID: bbbc7556-87b5-4b7e-8620-87599d5924c7
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eNLmLQEq2ogaJb89Y7a0DoyHU1dpu0akxG+g6Lj3M1spJtdg4xUXT5u3DWnog8xuhb4mHcDe4B5eFCGXwk7BJwkmsteVPmtz8Xy14cT6hiyaef0+Ih8M+CI7gxcp/vigxscIeESdvtqC/NiItwG6DJMLpl8iEU/1XU9puM/RCi+lRd0+TESk+HHMTxv3IcLZgaABA7GUj6CnDSJ5br8UhIz/+61FlhwrTRGmp0qD85QP/DCwd27VGkvji07Lf4kuzGv6aYDSDXX7cX7DjTct/gVg7/gwYUPM4KLrnUOH+4PNEcqO6EeHf0HylBajJD3a2uWq3SQGlQWX/usw5Kgmww==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3cV8pVoP5z/fZz9XlAM66A4AVF1LwjqpED/6cqT1bI0=;
 b=CF240AmTPyDLWPLu3UPZ6ztQkl+vBNIRlafeSeiWU99XWWG8GF5w2+fn/m4TTecGNkyMLU34gotif7/5R8w4R1njtniZMmi6FeBNphabS+mrI3Uolm4ZrK/v9CglqjI/Gd9VYAacbdtWsyagqRdAwHmtlAnhdsd8TrqaJgfJTZmKRM2LqfaXmJphJ4F5AcSJr1EF+cuIpUux7r4TkPiV8/Ah0CoCkZPr9LeSktuSSkvIKoWhV2HzTRoBx0+hX8Hv/qd1T7wecnXYeIs65OJXv8B8rj409hoZ9oE/Khx2hT1q+1pF8wvVQwHqaJagXhEqGkgDmsJpNxaPz9scK07yiQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amdcloud.onmicrosoft.com; s=selector2-amdcloud-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3cV8pVoP5z/fZz9XlAM66A4AVF1LwjqpED/6cqT1bI0=;
 b=sNC0fNgcvJAL/2ngyx/k7DGHYrof8M4TBS9HwyJ/L6b8U0Kvvrt8MUGhR3rZPI2uKITS94I0vsHU/aHAaKcUXSez75ZRLT+G7LDytUAJ/W3lIvnTD6fFw7qy1CDP8hqCraXBPH3lRRVMpn/fJf1CNkEE+aUiv1HVO0uUa9HOp6A=
Authentication-Results: ffwll.ch; dkim=none (message not signed)
 header.d=none;ffwll.ch; dmarc=none action=none header.from=amd.com;
Subject: Re: [PATCH v5 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM
 helpers
To: Thomas Zimmermann <tzimmermann@suse.de>,
 maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org, Daniel Vetter <daniel.vetter@ffwll.ch>
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-6-tzimmermann@suse.de>
From: =?UTF-8?Q?Christian_K=c3=b6nig?= <christian.koenig@amd.com>
Message-ID: <0d936127-ff9b-ace1-97b8-bdbc01921a65@amd.com>
Date: Tue, 20 Oct 2020 15:39:35 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
In-Reply-To: <20201020122046.31167-6-tzimmermann@suse.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Originating-IP: [2a02:908:1252:fb60:be8a:bd56:1f94:86e7]
X-ClientProxiedBy: AM0PR01CA0137.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:168::42) To MN2PR12MB3775.namprd12.prod.outlook.com
 (2603:10b6:208:159::19)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 43684d1a-3b9c-422c-7318-08d874fd9dc0
X-MS-TrafficTypeDiagnostic: MN2PR12MB4357:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<MN2PR12MB435723291B1DCFBCFEB8E74F831F0@MN2PR12MB4357.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QvusYaEJ9X1ocSxHJDO7zzVLFArw/xLRGeFQXXgs7ue5Et0DBV4W6wnP4nEGhplE0exsjKFdih5AFpaO4cRf+9cEhaztFsYxgPCeKBP5LDg4cNOhNt7VS0VvO3IEhKrN0dJ/6IvsgQx0svWkwWT4GJibLAilL+G7Hqwf4Up08W5sGYUwJfLAAMFhPdbdYxLTZEDRY+dvuGNTmZ2DdN5OER2B+NFrFk/n6r6HCq2tGMKNadvZ3nkFj68UbUjLGUNFl3ZAVYCK1DybuiQ1AUzVrLk11bX5g2ElEIDEW6u6V58rS0mACUKNRIcaK4AufpG34SoBgRmGIgmbUc+P3m3ztXoJb+SjtDiV31oRg97i96ghJor4+MGVk3XMmx3vHYfBg+Pz8/nKEoojK4Cm0bHc6Q==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MN2PR12MB3775.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(366004)(346002)(39860400002)(376002)(136003)(7416002)(36756003)(316002)(7406005)(8676002)(31696002)(6486002)(2906002)(4326008)(16526019)(186003)(478600001)(2616005)(52116002)(8936002)(66946007)(66476007)(66556008)(83380400001)(86362001)(5660300002)(6666004)(31686004)(921003)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	mqlMkNwHEJI59+/j1v0Zjx12kRVJNDKvDQwM7o75w/UVI9jURs9OzV26jg8oM+Rf24H25I+lfqn3fhn5foYIhnEuRbMimlcZdeqYJJ9ASilw7SfS1zNBfdaCFiIRKNBijtbmkB57WJ2KWCY/Zrwkqy1eNrP/UBWH4SX/j5jbosrwpOZ8MRDGDqTo8QM/r1ccqcaluycGIq6v9x7aBvLwX4RqoyNeTpu4d0zB/SNzd+sE8OUuyQ8uIxidM09wPZXMSxSB/a00xnuu3g7iS9+QSFfICP/PbPbZcOhs/OlEeD58RoJcY9FiN++KGkVOLwhJRIrU1nitJXeNpkAmDZ53qInJQuDzkr6fREoVPokUN6itjq8B1ZuzC6bCDTUaTfda4EUKv4vmEUosOh4k3w9Q4T+LA1Uu9lnz8VkkI7fCjrozJ06tIfG6auK6IOs2Z79VclTFZEz5OijRM72378deskaBNn1aRZgs5FUgrjFlXIFkf6jEidr3yR7FMLb8O4fNa+UxBdRk/DKkxcjD14rcmXHj1b+9sOYlM2MSOeboPlYzlxTOGU1kyWPibQWR8crb1OHtzAqy+w7N3MZVjovknQTnMfB9f1irm83sgHEbYKbGK+Yc9gE6DKOlK2rF3JTUVRflMd8NQ2zY3bUcwcZharKy/+hCcZ6iUXmgDLmOsrisAmAvE+qRM91VTSQ7VGuUX6TcdlZrSuyzxFQnILrb5g==
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 43684d1a-3b9c-422c-7318-08d874fd9dc0
X-MS-Exchange-CrossTenant-AuthSource: MN2PR12MB3775.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 Oct 2020 13:39:50.1814
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: c8RgvKt9S4Nq3RhDPJfN3dHb/5uTiTRNW42pC37NdmMjd5S95Q3zNocaKAgu3yIa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4357

Am 20.10.20 um 14:20 schrieb Thomas Zimmermann:
> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
> address space. The mapping's address is returned as struct dma_buf_map.
> Each function is a simplified version of TTM's existing kmap code. Both
> functions respect the memory's location and/or writecombine flags.
>
> On top of TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(),
> two helpers that convert a GEM object into the TTM BO and forward the call
> to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object
> callbacks.
>
> v5:
> 	* use size_t for storing mapping size (Christian)
> 	* ignore premapped memory areas correctly in ttm_bo_vunmap()
> 	* rebase onto latest TTM interfaces (Christian)
> 	* remove BUG() from ttm_bo_vmap() (Christian)
> v4:
> 	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
> 	  Christian)
>
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Tested-by: Sam Ravnborg <sam@ravnborg.org>

Reviewed-by: Christian König <christian.koenig@amd.com>

> ---
>   drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
>   drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
>   include/drm/drm_gem_ttm_helper.h     |  6 +++
>   include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
>   include/linux/dma-buf-map.h          | 20 ++++++++
>   5 files changed, 164 insertions(+)
>
> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
> index 0e4fb9ba43ad..db4c14d78a30 100644
> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c
> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>   }
>   EXPORT_SYMBOL(drm_gem_ttm_print_info);
>   
> +/**
> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: [out] returns the dma-buf mapping.
> + *
> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as
> + * &drm_gem_object_funcs.vmap callback.
> + *
> + * Returns:
> + * 0 on success, or a negative errno code otherwise.
> + */
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> +		     struct dma_buf_map *map)
> +{
> +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> +	return ttm_bo_vmap(bo, map);
> +
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vmap);
> +
> +/**
> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
> + * @gem: GEM object.
> + * @map: dma-buf mapping.
> + *
> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
> + * &drm_gem_object_funcs.vunmap callback.
> + */
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> +			struct dma_buf_map *map)
> +{
> +	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
> +
> +	ttm_bo_vunmap(bo, map);
> +}
> +EXPORT_SYMBOL(drm_gem_ttm_vunmap);
> +
>   /**
>    * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
>    * @gem: GEM object.
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
> index ba7ab5ed85d0..5c79418405ea 100644
> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c
> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
> @@ -32,6 +32,7 @@
>   #include <drm/ttm/ttm_bo_driver.h>
>   #include <drm/ttm/ttm_placement.h>
>   #include <drm/drm_vma_manager.h>
> +#include <linux/dma-buf-map.h>
>   #include <linux/io.h>
>   #include <linux/highmem.h>
>   #include <linux/wait.h>
> @@ -527,6 +528,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
>   }
>   EXPORT_SYMBOL(ttm_bo_kunmap);
>   
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> +	struct ttm_resource *mem = &bo->mem;
> +	int ret;
> +
> +	ret = ttm_mem_io_reserve(bo->bdev, mem);
> +	if (ret)
> +		return ret;
> +
> +	if (mem->bus.is_iomem) {
> +		void __iomem *vaddr_iomem;
> +		size_t size = bo->num_pages << PAGE_SHIFT;
> +
> +		if (mem->bus.addr)
> +			vaddr_iomem = (void __iomem *)mem->bus.addr;
> +		else if (mem->bus.caching == ttm_write_combined)
> +			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
> +		else
> +			vaddr_iomem = ioremap(mem->bus.offset, size);
> +
> +		if (!vaddr_iomem)
> +			return -ENOMEM;
> +
> +		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
> +
> +	} else {
> +		struct ttm_operation_ctx ctx = {
> +			.interruptible = false,
> +			.no_wait_gpu = false
> +		};
> +		struct ttm_tt *ttm = bo->ttm;
> +		pgprot_t prot;
> +		void *vaddr;
> +
> +		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
> +		if (ret)
> +			return ret;
> +
> +		/*
> +		 * We need to use vmap to get the desired page protection
> +		 * or to make the buffer object look contiguous.
> +		 */
> +		prot = ttm_io_prot(bo, mem, PAGE_KERNEL);
> +		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
> +		if (!vaddr)
> +			return -ENOMEM;
> +
> +		dma_buf_map_set_vaddr(map, vaddr);
> +	}
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL(ttm_bo_vmap);
> +
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
> +{
> +	struct ttm_resource *mem = &bo->mem;
> +
> +	if (dma_buf_map_is_null(map))
> +		return;
> +
> +	if (!map->is_iomem)
> +		vunmap(map->vaddr);
> +	else if (!mem->bus.addr)
> +		iounmap(map->vaddr_iomem);
> +	dma_buf_map_clear(map);
> +
> +	ttm_mem_io_free(bo->bdev, &bo->mem);
> +}
> +EXPORT_SYMBOL(ttm_bo_vunmap);
> +
>   static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
>   				 bool dst_use_tt)
>   {
> diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
> index 118cef76f84f..7c6d874910b8 100644
> --- a/include/drm/drm_gem_ttm_helper.h
> +++ b/include/drm/drm_gem_ttm_helper.h
> @@ -10,11 +10,17 @@
>   #include <drm/ttm/ttm_bo_api.h>
>   #include <drm/ttm/ttm_bo_driver.h>
>   
> +struct dma_buf_map;
> +
>   #define drm_gem_ttm_of_gem(gem_obj) \
>   	container_of(gem_obj, struct ttm_buffer_object, base)
>   
>   void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
>   			    const struct drm_gem_object *gem);
> +int drm_gem_ttm_vmap(struct drm_gem_object *gem,
> +		     struct dma_buf_map *map);
> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
> +			struct dma_buf_map *map);
>   int drm_gem_ttm_mmap(struct drm_gem_object *gem,
>   		     struct vm_area_struct *vma);
>   
> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
> index 37102e45e496..2c59a785374c 100644
> --- a/include/drm/ttm/ttm_bo_api.h
> +++ b/include/drm/ttm/ttm_bo_api.h
> @@ -48,6 +48,8 @@ struct ttm_bo_global;
>   
>   struct ttm_bo_device;
>   
> +struct dma_buf_map;
> +
>   struct drm_mm_node;
>   
>   struct ttm_placement;
> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
>    */
>   void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
>   
> +/**
> + * ttm_bo_vmap
> + *
> + * @bo: The buffer object.
> + * @map: pointer to a struct dma_buf_map representing the map.
> + *
> + * Sets up a kernel virtual mapping, using ioremap or vmap to the
> + * data in the buffer object. The parameter @map returns the virtual
> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
> + *
> + * Returns
> + * -ENOMEM: Out of memory.
> + * -EINVAL: Invalid range.
> + */
> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
> +/**
> + * ttm_bo_vunmap
> + *
> + * @bo: The buffer object.
> + * @map: Object describing the map to unmap.
> + *
> + * Unmaps a kernel map set up by ttm_bo_vmap().
> + */
> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
> +
>   /**
>    * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
>    *
> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
> index fd1aba545fdf..2e8bbecb5091 100644
> --- a/include/linux/dma-buf-map.h
> +++ b/include/linux/dma-buf-map.h
> @@ -45,6 +45,12 @@
>    *
> *	dma_buf_map_set_vaddr(&map, 0xdeadbeaf);
>    *
> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
> + *
> + * .. code-block:: c
> + *
> + *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
> + *
>    * Test if a mapping is valid with either dma_buf_map_is_set() or
>    * dma_buf_map_is_null().
>    *
> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
>   	map->is_iomem = false;
>   }
>   
> +/**
> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
> + * @map:		The dma-buf mapping structure
> + * @vaddr_iomem:	An I/O-memory address
> + *
> + * Sets the address and the I/O-memory flag.
> + */
> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
> +					       void __iomem *vaddr_iomem)
> +{
> +	map->vaddr_iomem = vaddr_iomem;
> +	map->is_iomem = true;
> +}
> +
>   /**
>    * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
>    * @lhs:	The dma-buf mapping structure



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 13:51:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 13:51:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9634.25285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUs2u-0007O8-44; Tue, 20 Oct 2020 13:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9634.25285; Tue, 20 Oct 2020 13:51:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUs2t-0007O1-WA; Tue, 20 Oct 2020 13:51:23 +0000
Received: by outflank-mailman (input) for mailman id 9634;
 Tue, 20 Oct 2020 13:51:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUs2r-0007Nw-UL
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:51:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6823dc1-6fd6-42a7-a395-0e39f7d329d5;
 Tue, 20 Oct 2020 13:51:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1755EAD12;
 Tue, 20 Oct 2020 13:51:19 +0000 (UTC)
X-Inumbo-ID: d6823dc1-6fd6-42a7-a395-0e39f7d329d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603201879;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=pJbgg5ny64DMvi9s/cG7CeqWeW1h4zTksNkoqgt5ArI=;
	b=aL4nBGm6EmXyyxFWz6YAYvoCmjsCmS3KcLf5NB3P184WiKfLxDBrkUNc5Or+ZI77EIOgVg
	YAediLBuSpKrLQNs3MPZ7O14lSeoK9kYAqhtZaRwn3xHIa58SXAhug9wyQ6oMPZpJAqmZ2
	XwHlOrxnEfLb1qQlo7vrLhvcVK5de2c=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/mm: avoid playing with directmap when self-snoop can be
 relied upon
Message-ID: <33f7168c-b177-eed5-14e8-5e7a38dee853@suse.com>
Date: Tue, 20 Oct 2020 15:51:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The set of systems affected by XSA-345 would have been smaller if we had
this in place already: When the processor is capable of dealing with
mismatched cacheability, there's no extra work we need to carry out.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -795,6 +795,9 @@ static int update_xen_mappings(unsigned
     unsigned long xen_va =
         XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
 
+    if ( boot_cpu_has(X86_FEATURE_XEN_SELFSNOOP) )
+        return 0;
+
     if ( unlikely(alias) && cacheattr )
         err = map_pages_to_xen(xen_va, _mfn(mfn), 1, 0);
     if ( !err )
@@ -802,6 +805,7 @@ static int update_xen_mappings(unsigned
                      PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
     if ( unlikely(alias) && !cacheattr && !err )
         err = map_pages_to_xen(xen_va, _mfn(mfn), 1, PAGE_HYPERVISOR);
+
     return err;
 }
 


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 13:53:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 13:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9636.25297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUs4V-0007XE-G4; Tue, 20 Oct 2020 13:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9636.25297; Tue, 20 Oct 2020 13:53:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUs4V-0007X7-C4; Tue, 20 Oct 2020 13:53:03 +0000
Received: by outflank-mailman (input) for mailman id 9636;
 Tue, 20 Oct 2020 13:53:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUs4U-0007X2-Oy
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:53:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6eaa1f2-b3da-41c5-b272-ccffa42e1654;
 Tue, 20 Oct 2020 13:53:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C6C9BB2A1;
 Tue, 20 Oct 2020 13:53:00 +0000 (UTC)
X-Inumbo-ID: c6eaa1f2-b3da-41c5-b272-ccffa42e1654
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603201980;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Qw9DdLlTTRCZ41LkBf5W6P/I4UyBKlxJ/5Koa8Aym5M=;
	b=uD9eaGvggrXjd/5iMNhKtudPbBweYTK0gPxxaRBPCvGy2CItogPjgsU+H0cmg3u8Cc0m07
	fwQADUANEXGICwgJ1pE7TLBs2o6hnhiYsHXHmmJ4nrYd8ajasCaw/JW/HdCVvZ99AQ3m7Z
	BstQu+kh/GBTP8KHEmjAhsObnFApJbw=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Paul Durrant <paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] IOMMU: avoid double flushing in shared page table case
Message-ID: <e54f4fbb-92e2-9785-8648-596c615213a2@suse.com>
Date: Tue, 20 Oct 2020 15:52:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While the flush coalescing optimization has been helping the non-shared
case, it has actually led to double flushes in the shared case (which
ought to be the more common one nowadays at least): Once from
*_set_entry() and a second time up the call tree from wherever the
overriding flag gets played with. In alignment with XSA-346 suppress
flushing in this case.

Similarly avoid excessive setting of IOMMU_FLUSHF_added on the batched
flushes: no new mapping was added for "idx".

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: The Arm part really is just for completeness (and hence could also
     be dropped) - the affected mapping spaces aren't currently
     supported there.

--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1045,7 +1045,7 @@ static int __p2m_set_entry(struct p2m_do
         p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
     }
 
-    if ( is_iommu_enabled(p2m->domain) &&
+    if ( is_iommu_enabled(p2m->domain) && !this_cpu(iommu_dont_flush_iotlb) &&
          (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
     {
         unsigned int flush_flags = 0;
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -842,7 +842,7 @@ out:
     if ( rc == 0 && p2m_is_hostp2m(p2m) &&
          need_modify_vtd_table )
     {
-        if ( iommu_use_hap_pt(d) )
+        if ( iommu_use_hap_pt(d) && !this_cpu(iommu_dont_flush_iotlb) )
             rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
                                    (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                                    (vtd_pte_present ? IOMMU_FLUSHF_modified
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -870,7 +870,7 @@ int xenmem_add_to_physmap(struct domain
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
         ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
-                                IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
+                                IOMMU_FLUSHF_modified);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 13:53:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 13:53:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9639.25308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUs5G-0007hd-UH; Tue, 20 Oct 2020 13:53:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9639.25308; Tue, 20 Oct 2020 13:53:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUs5G-0007hW-RG; Tue, 20 Oct 2020 13:53:50 +0000
Received: by outflank-mailman (input) for mailman id 9639;
 Tue, 20 Oct 2020 13:53:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUs5F-0007hP-V8
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:53:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8890cbd1-8e0e-4596-9656-c4e317b1876b;
 Tue, 20 Oct 2020 13:53:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4DD88AD12;
 Tue, 20 Oct 2020 13:53:48 +0000 (UTC)
X-Inumbo-ID: 8890cbd1-8e0e-4596-9656-c4e317b1876b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603202028;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=zsOTH7fEV0oWzw80gGikZZVHCHk5yVvlPuzIis7q6zY=;
	b=OwkHY/Y+ZLh48rMgV+kXwaGMbdC+1OaQla/ALCsaf4/UJJ2+W8dsMwYEgE3X+2fo1BuWC5
	DnsSdRGqzynFqQx+87In4C5tfnbI/7cD5/dZYUbC77KwV7eMcVvmU7DTLXWeU6dGmVMT1Z
	WYQbheMWfANq9V6mhNbmtcR801u2UpE=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] AMD/IOMMU: correct shattering of super pages
Message-ID: <7b8ad528-b0bd-4d93-f08b-42b5af376561@suse.com>
Date: Tue, 20 Oct 2020 15:53:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Fill the new page table _before_ installing into a live page table
hierarchy, as installing a blank page first risks I/O faults on
sub-ranges of the original super page which aren't part of the range
for which mappings are being updated.

While at it also do away with mapping and unmapping the same fresh
intermediate page table page once per entry to be written.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Afaict this corrects presently dead code: I don't think there are ways
for super pages to be created in the first place, i.e. none could ever
need shattering.

--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -81,19 +81,34 @@ static unsigned int set_iommu_pde_presen
     return flush_flags;
 }
 
-static unsigned int set_iommu_pte_present(unsigned long pt_mfn,
-                                          unsigned long dfn,
-                                          unsigned long next_mfn,
-                                          int pde_level,
-                                          bool iw, bool ir)
+static unsigned int set_iommu_ptes_present(unsigned long pt_mfn,
+                                           unsigned long dfn,
+                                           unsigned long next_mfn,
+                                           unsigned int nr_ptes,
+                                           unsigned int pde_level,
+                                           bool iw, bool ir)
 {
     union amd_iommu_pte *table, *pde;
-    unsigned int flush_flags;
+    unsigned int page_sz, flush_flags = 0;
 
     table = map_domain_page(_mfn(pt_mfn));
     pde = &table[pfn_to_pde_idx(dfn, pde_level)];
+    page_sz = 1U << (PTE_PER_TABLE_SHIFT * (pde_level - 1));
+
+    if ( (void *)(pde + nr_ptes) > (void *)table + PAGE_SIZE )
+    {
+        ASSERT_UNREACHABLE();
+        return 0;
+    }
+
+    while ( nr_ptes-- )
+    {
+        flush_flags |= set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
+
+        ++pde;
+        next_mfn += page_sz;
+    }
 
-    flush_flags = set_iommu_pde_present(pde, next_mfn, 0, iw, ir);
     unmap_domain_page(table);
 
     return flush_flags;
@@ -220,11 +235,8 @@ static int iommu_pde_from_dfn(struct dom
         /* Split super page frame into smaller pieces.*/
         if ( pde->pr && !pde->next_level && next_table_mfn )
         {
-            int i;
             unsigned long mfn, pfn;
-            unsigned int page_sz;
 
-            page_sz = 1 << (PTE_PER_TABLE_SHIFT * (next_level - 1));
             pfn =  dfn & ~((1 << (PTE_PER_TABLE_SHIFT * next_level)) - 1);
             mfn = next_table_mfn;
 
@@ -238,17 +250,13 @@ static int iommu_pde_from_dfn(struct dom
             }
 
             next_table_mfn = mfn_x(page_to_mfn(table));
+
+            set_iommu_ptes_present(next_table_mfn, pfn, mfn, PTE_PER_TABLE_SIZE,
+                                   next_level, true, true);
+            smp_wmb();
             set_iommu_pde_present(pde, next_table_mfn, next_level, true,
                                   true);
 
-            for ( i = 0; i < PTE_PER_TABLE_SIZE; i++ )
-            {
-                set_iommu_pte_present(next_table_mfn, pfn, mfn, next_level,
-                                      true, true);
-                mfn += page_sz;
-                pfn += page_sz;
-             }
-
             amd_iommu_flush_all_pages(d);
         }
 
@@ -318,9 +326,9 @@ int amd_iommu_map_page(struct domain *d,
     }
 
     /* Install 4k mapping */
-    *flush_flags |= set_iommu_pte_present(pt_mfn[1], dfn_x(dfn), mfn_x(mfn),
-                                          1, (flags & IOMMUF_writable),
-                                          (flags & IOMMUF_readable));
+    *flush_flags |= set_iommu_ptes_present(pt_mfn[1], dfn_x(dfn), mfn_x(mfn),
+                                           1, 1, (flags & IOMMUF_writable),
+                                           (flags & IOMMUF_readable));
 
     spin_unlock(&hd->arch.mapping_lock);
 


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 13:56:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 13:56:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9642.25321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUs7V-0007rE-BN; Tue, 20 Oct 2020 13:56:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9642.25321; Tue, 20 Oct 2020 13:56:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUs7V-0007r7-8E; Tue, 20 Oct 2020 13:56:09 +0000
Received: by outflank-mailman (input) for mailman id 9642;
 Tue, 20 Oct 2020 13:56:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0BvO=D3=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kUs7U-0007r2-4C
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 13:56:08 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id cf29dc5e-27ad-4ca8-bdf3-e5ed5c700bef;
 Tue, 20 Oct 2020 13:56:04 +0000 (UTC)
Received: from mail-qv1-f71.google.com (mail-qv1-f71.google.com
 [209.85.219.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-506-5R7MHkehMSef3ftuAUd6Yw-1; Tue, 20 Oct 2020 09:56:01 -0400
Received: by mail-qv1-f71.google.com with SMTP id t13so1395574qvm.14
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 06:56:01 -0700 (PDT)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id b8sm775938qkn.133.2020.10.20.06.55.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 20 Oct 2020 06:55:59 -0700 (PDT)
X-Inumbo-ID: cf29dc5e-27ad-4ca8-bdf3-e5ed5c700bef
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603202163;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HyQ28xr5/lapv6azTtFCvHFT5XQ1+OzeHk+84iEm66Y=;
	b=QUp+Tm/49C2D2ArfwdGiXcYEZLmxY3ITF9bXZyS+HEfcJJoqaGxmch+2EHiNHeADDWpHNz
	Ouzap4oFcZixuWmorb4Ub+lRVmWRZDipZO+0tL8W1vnU06zVM/xfLz81/8VfcBPp80A1RW
	EdBHqnK2mgm1gY9dzcmZeB2rW1Al8Fc=
X-MC-Unique: 5R7MHkehMSef3ftuAUd6Yw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=HyQ28xr5/lapv6azTtFCvHFT5XQ1+OzeHk+84iEm66Y=;
        b=gnESylwg3bfr2FliaHl2xILt8lpeI7jklBbtEGGxjhSD+84uMSQ9ZZuW6ZMAPMxhFP
         Ww2RqWTzJsrcEwoFkiaQ3YW2mdzix/A5qgoC8TrEKzLE8hzLafGdHlXCf4JujR1+ifdx
         vNe8AVmwlLhzShLvNw5LerMTILJe5Irlg1Mr3nAF80c2FMx3R7aGNmgdDhxdLKBIxtQ+
         qypnVCTPRDzYhTHR8wLFeFxjvopu0id9joglpHyHGKyyzkcgnWB0SvEDetg+P7U76IU6
         XqzIEIqvXz0KcZGGWz2jvTPcS1O/7nhXQi3ZLfc+Sz4vKQuQIqTHO/6rKFelUEOeN+p/
         WlAA==
X-Gm-Message-State: AOAM531rgHFs14OYQvtv2qCqAZpTvI1WPCpHo20pjaShUt3YqN9TFB9h
	R5EEoKnsQ1fqdBhZJTtq5sZFBN/CErlgRx9k0RRAGNJhOmlC9K6jFtOzJNfAQ2Ml/zYdpnJdyQB
	2GrltopGzBuLZvi15MCOLVg+oa08=
X-Received: by 2002:a05:6214:174f:: with SMTP id dc15mr3370433qvb.25.1603202160688;
        Tue, 20 Oct 2020 06:56:00 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyLUAfqrAOJzxwKF3+voCBF5yQYNbMOvfOkDZhumkJj3bEnT15V4x8vUJ5iQ5pWt9KusIZtsQ==
X-Received: by 2002:a05:6214:174f:: with SMTP id dc15mr3370377qvb.25.1603202160139;
        Tue, 20 Oct 2020 06:56:00 -0700 (PDT)
Subject: Re: [RFC] treewide: cleanup unreachable breaks
To: Nick Desaulniers <ndesaulniers@google.com>
Cc: LKML <linux-kernel@vger.kernel.org>, linux-edac@vger.kernel.org,
 linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 openipmi-developer@lists.sourceforge.net,
 "open list:HARDWARE RANDOM NUMBER GENERATOR CORE"
 <linux-crypto@vger.kernel.org>,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 linux-power@fi.rohmeurope.com, linux-gpio@vger.kernel.org,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 dri-devel <dri-devel@lists.freedesktop.org>, nouveau@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org,
 spice-devel@lists.freedesktop.org, linux-iio@vger.kernel.org,
 linux-amlogic@lists.infradead.org, industrypack-devel@lists.sourceforge.net,
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-can@vger.kernel.org, Network Development <netdev@vger.kernel.org>,
 intel-wired-lan@lists.osuosl.org, ath10k@lists.infradead.org,
 linux-wireless <linux-wireless@vger.kernel.org>,
 linux-stm32@st-md-mailman.stormreply.com, linux-nfc@lists.01.org,
 linux-nvdimm <linux-nvdimm@lists.01.org>, linux-pci@vger.kernel.org,
 linux-samsung-soc@vger.kernel.org, platform-driver-x86@vger.kernel.org,
 patches@opensource.cirrus.com, storagedev@microchip.com,
 devel@driverdev.osuosl.org, linux-serial@vger.kernel.org,
 linux-usb@vger.kernel.org, usb-storage@lists.one-eyed-alien.net,
 linux-watchdog@vger.kernel.org, ocfs2-devel@oss.oracle.com,
 bpf <bpf@vger.kernel.org>, linux-integrity@vger.kernel.org,
 linux-security-module@vger.kernel.org, keyrings@vger.kernel.org,
 alsa-devel@alsa-project.org,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 Greg KH <gregkh@linuxfoundation.org>, George Burgess <gbiv@google.com>,
 Joe Perches <joe@perches.com>
References: <20201017160928.12698-1-trix@redhat.com>
 <20201018054332.GB593954@kroah.com>
 <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
From: Tom Rix <trix@redhat.com>
Message-ID: <ca1f50d6-1005-8e3d-8d5c-98c82a704338@redhat.com>
Date: Tue, 20 Oct 2020 06:55:52 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 10/19/20 12:42 PM, Nick Desaulniers wrote:
> On Sat, Oct 17, 2020 at 10:43 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>> On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
>>> From: Tom Rix <trix@redhat.com>
>>>
>>> This is an upcoming change to clean up a new warning treewide.
>>> I am wondering whether the change should be one mega patch (see below),
>>> a normal patch per file (about 100 patches), or somewhere half way by
>>> collecting early acks.
>> Please break it up into one-patch-per-subsystem, like normal, and get it
>> merged that way.
>>
>> Sending us a patch, without even a diffstat to review, isn't going to
>> get you very far...
> Tom,
> If you're able to automate this cleanup, I suggest checking in a
> script that can be run on a directory.  Then for each subsystem you
> can say in your commit "I ran scripts/fix_whatever.py on this subdir."
>  Then others can help you drive the tree wide cleanup.  Then we can
> enable -Wunreachable-code-break either by default, or W=2 right now
> might be a good idea.

I should have waited for Joe Perches's fixer addition to checkpatch :)

The easy fixes I did only cover about half of the problems.

The remaining ones are mostly nested switches, which are bad from a
complexity standpoint.

>
> Ah, George (gbiv@, cc'ed), did an analysis recently of
> `-Wunreachable-code-loop-increment`, `-Wunreachable-code-break`, and
> `-Wunreachable-code-return` for Android userspace.  From the review:
> ```
> Spoilers: of these, it seems useful to turn on
> -Wunreachable-code-loop-increment and -Wunreachable-code-return by
> default for Android

In my simple add-a-cflag bot, I see there are about 250 issues for
-Wunreachable-code-return.

I'll see about doing this one next.

> ...
> While these conventions about always having break arguably became
> obsolete when we enabled -Wfallthrough, my sample turned up zero
> potential bugs caught by this warning, and we'd need to put a lot of
> effort into getting a clean tree. So this warning doesn't seem to be
> worth it.
> ```
> Looks like there's an order of magnitude more `-Wunreachable-code-break`
> than the other two.
>
> We probably should add all 3 to W=2 builds (wrapped in cc-option).
> I've filed https://github.com/ClangBuiltLinux/linux/issues/1180 to
> follow up on.

Yes, I think these should be added.

Tom



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:06:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:06:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9646.25333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsHb-0000Vi-B5; Tue, 20 Oct 2020 14:06:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9646.25333; Tue, 20 Oct 2020 14:06:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsHb-0000Vb-85; Tue, 20 Oct 2020 14:06:35 +0000
Received: by outflank-mailman (input) for mailman id 9646;
 Tue, 20 Oct 2020 14:06:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsHZ-0000VW-Ik
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:06:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49b2eaf7-8659-42b2-bc93-2e6f3d6888f7;
 Tue, 20 Oct 2020 14:06:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C8CFAC6A;
 Tue, 20 Oct 2020 14:06:31 +0000 (UTC)
X-Inumbo-ID: 49b2eaf7-8659-42b2-bc93-2e6f3d6888f7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603202791;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=2bEQvlLtJUOmcPiYR/FdIE43eZRz67l8BMCvXkO+7Vg=;
	b=MCDXh5COPvTqLLo9JQoXY+GXndOWhZdUjQ5M+fNgG3BtdEZ9nZ7PS+aMHI9VIDK5XTof+o
	X9JuPebvKzmn9mYA52cUfNz0ZHMUMvsdmDuPkhKfHsHTzt9FGD6LZ7mI0WXyBMQwZV+juB
	XMsWUxCGHWrw+Yq7r29QEMnws9kG1+U=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/8] evtchn: recent XSAs follow-on
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Date: Tue, 20 Oct 2020 16:06:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

These are grouped into a series largely because of their origin,
not so much because there are heavy dependencies among them.
Compared to v1, besides there being 3 new patches, some
re-ordering has been done; in particular the last patch isn't
ready yet, but I still wanted to include it to have a chance to
discuss what changes to make. See also the individual patches.

1: avoid race in get_xen_consumer()
2: replace FIFO-specific header by generic private one
3: rename and adjust guest_enabled_event()
4: let evtchn_set_priority() acquire the per-channel lock
5: drop acquiring of per-channel lock from send_guest_{global,vcpu}_virq()
6: convert vIRQ lock to an r/w one
7: convert domain event lock to an r/w one
8: don't call Xen consumer callback with per-channel lock held

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:08:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:08:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9648.25344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsJF-0000f6-Mj; Tue, 20 Oct 2020 14:08:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9648.25344; Tue, 20 Oct 2020 14:08:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsJF-0000ez-Jg; Tue, 20 Oct 2020 14:08:17 +0000
Received: by outflank-mailman (input) for mailman id 9648;
 Tue, 20 Oct 2020 14:08:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsJE-0000en-Pe
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:08:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9a56a16c-0143-4418-9ae9-953674bb583b;
 Tue, 20 Oct 2020 14:08:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2B588AC6A;
 Tue, 20 Oct 2020 14:08:14 +0000 (UTC)
X-Inumbo-ID: 9a56a16c-0143-4418-9ae9-953674bb583b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603202894;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gz4CIzBYeY0DzdSnosyvNHqxygRimtPUEZJgMtPh2JM=;
	b=ppTkP9wcauH3/HIG64Srdl/1gMbZuRl4B1z8mW6y8gH1+CfKlA8rJbDQZPk/vBWUtF4U8a
	UUgmRJRupzrTHZAJ7VpbNzpfGfoGz1IHp95oMVQHhr+a095GifJ73GKwK5GSjxB6+oTgCZ
	3rKddf3MuspuylQRA7r6zfF36jHL084=
Subject: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Message-ID: <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
Date: Tue, 20 Oct 2020 16:08:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There's no global lock around the updating of this global piece of data.
Make use of cmpxchgptr() to avoid two entities racing with their
updates.

While touching the functionality, mark xen_consumers[] read-mostly (or
else the if() condition could use the result of cmpxchgptr(), writing to
the slot unconditionally).

The use of cmpxchgptr() here points out (by way of clang warning about
it) that its original use of const was slightly wrong. Adjust the
placement, or else undefined behavior of const qualifying a function
type will result.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Use (and hence generalize) cmpxchgptr(). Add comment. Expand /
    adjust description.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -57,7 +57,8 @@
  * with a pointer, we stash them dynamically in a small lookup array which
  * can be indexed by a small integer.
  */
-static xen_event_channel_notification_t xen_consumers[NR_XEN_CONSUMERS];
+static xen_event_channel_notification_t __read_mostly
+    xen_consumers[NR_XEN_CONSUMERS];
 
 /* Default notification action: wake up from wait_on_xen_event_channel(). */
 static void default_xen_notification_fn(struct vcpu *v, unsigned int port)
@@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
 
     for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
     {
+        /* Use cmpxchgptr() in lieu of a global lock. */
         if ( xen_consumers[i] == NULL )
-            xen_consumers[i] = fn;
+            cmpxchgptr(&xen_consumers[i], NULL, fn);
         if ( xen_consumers[i] == fn )
             break;
     }
--- a/xen/include/asm-x86/system.h
+++ b/xen/include/asm-x86/system.h
@@ -148,13 +148,6 @@ static always_inline unsigned long cmpxc
     return prev;
 }
 
-#define cmpxchgptr(ptr,o,n) ({                                          \
-    const __typeof__(**(ptr)) *__o = (o);                               \
-    __typeof__(**(ptr)) *__n = (n);                                     \
-    ((__typeof__(*(ptr)))__cmpxchg((ptr),(unsigned long)__o,            \
-                                   (unsigned long)__n,sizeof(*(ptr)))); \
-})
-
 /*
  * Undefined symbol to cause link failure if a wrong size is used with
  * arch_fetch_and_add().
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -178,6 +178,17 @@ unsigned long long parse_size_and_unit(c
 
 uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c);
 
+/*
+ * A slightly more typesafe variant of cmpxchg() when the entities dealt with
+ * are pointers.
+ */
+#define cmpxchgptr(ptr, o, n) ({                                        \
+    __typeof__(**(ptr)) *const o_ = (o);                                \
+    __typeof__(**(ptr)) *n_ = (n);                                      \
+    ((__typeof__(*(ptr)))__cmpxchg(ptr, (unsigned long)o_,              \
+                                   (unsigned long)n_, sizeof(*(ptr)))); \
+})
+
 #define TAINT_SYNC_CONSOLE              (1u << 0)
 #define TAINT_MACHINE_CHECK             (1u << 1)
 #define TAINT_ERROR_INJECT              (1u << 2)



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:08:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:08:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9651.25356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsJZ-0000jZ-12; Tue, 20 Oct 2020 14:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9651.25356; Tue, 20 Oct 2020 14:08:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsJY-0000jS-Tf; Tue, 20 Oct 2020 14:08:36 +0000
Received: by outflank-mailman (input) for mailman id 9651;
 Tue, 20 Oct 2020 14:08:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsJX-0000jJ-W0
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:08:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 29a9faeb-f22d-4aa5-ad60-f6da90b5e2c2;
 Tue, 20 Oct 2020 14:08:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 160A0AC6A;
 Tue, 20 Oct 2020 14:08:34 +0000 (UTC)
X-Inumbo-ID: 29a9faeb-f22d-4aa5-ad60-f6da90b5e2c2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603202914;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UwtAw9MFG3NRnXuuCSs6MsSB/uT680OjsEYW6ot3L34=;
	b=FCYP1qk7r+4wlxSydtGqUEiw3PoHnCHPaGKkRO1cLpFIqt1TaaLOeSBe+JfLadJvrspKzW
	x1/d757gmJjQon5isYFNeeAb8YdRG7LMiO2zEG4WxVJR/0xN6N0ag5XRO1A9zYGED67v0j
	dAD5MJGmuRdXTGhPcTz9eLQQRlYTHt4=
Subject: [PATCH v2 2/8] evtchn: replace FIFO-specific header by generic
 private one
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Message-ID: <3fea358e-d6d1-21d4-2d83-d9bd457ba3b5@suse.com>
Date: Tue, 20 Oct 2020 16:08:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Having a FIFO-specific header is not (or at least no longer) warranted
with just three function declarations left there. Introduce a private
header instead, moving some further items from xen/event.h into it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
---
TBD: If - considering the layering violation that's there anyway - we
     allowed PV shim code to make use of this header, a few more items
     could be moved out of "public sight".

--- a/xen/common/event_2l.c
+++ b/xen/common/event_2l.c
@@ -7,11 +7,12 @@
  * Version 2 or later.  See the file COPYING for more details.
  */
 
+#include "event_channel.h"
+
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
-#include <xen/event.h>
 
 #include <asm/guest_atomics.h>
 
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -14,17 +14,17 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include "event_channel.h"
+
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
-#include <xen/event.h>
 #include <xen/irq.h>
 #include <xen/iocap.h>
 #include <xen/compat.h>
 #include <xen/guest_access.h>
 #include <xen/keyhandler.h>
-#include <xen/event_fifo.h>
 #include <asm/current.h>
 
 #include <public/xen.h>
--- /dev/null
+++ b/xen/common/event_channel.h
@@ -0,0 +1,63 @@
+/* Event channel handling private header. */
+
+#include <xen/event.h>
+
+static inline unsigned int max_evtchns(const struct domain *d)
+{
+    return d->evtchn_fifo ? EVTCHN_FIFO_NR_CHANNELS
+                          : BITS_PER_EVTCHN_WORD(d) * BITS_PER_EVTCHN_WORD(d);
+}
+
+static inline bool evtchn_is_busy(const struct domain *d,
+                                  const struct evtchn *evtchn)
+{
+    return d->evtchn_port_ops->is_busy &&
+           d->evtchn_port_ops->is_busy(d, evtchn);
+}
+
+static inline void evtchn_port_unmask(struct domain *d,
+                                      struct evtchn *evtchn)
+{
+    if ( evtchn_usable(evtchn) )
+        d->evtchn_port_ops->unmask(d, evtchn);
+}
+
+static inline int evtchn_port_set_priority(struct domain *d,
+                                           struct evtchn *evtchn,
+                                           unsigned int priority)
+{
+    if ( !d->evtchn_port_ops->set_priority )
+        return -ENOSYS;
+    if ( !evtchn_usable(evtchn) )
+        return -EACCES;
+    return d->evtchn_port_ops->set_priority(d, evtchn, priority);
+}
+
+static inline void evtchn_port_print_state(struct domain *d,
+                                           const struct evtchn *evtchn)
+{
+    d->evtchn_port_ops->print_state(d, evtchn);
+}
+
+/* 2-level */
+
+void evtchn_2l_init(struct domain *d);
+
+/* FIFO */
+
+struct evtchn_init_control;
+struct evtchn_expand_array;
+
+int evtchn_fifo_init_control(struct evtchn_init_control *init_control);
+int evtchn_fifo_expand_array(const struct evtchn_expand_array *expand_array);
+void evtchn_fifo_destroy(struct domain *d);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -7,12 +7,12 @@
  * Version 2 or later.  See the file COPYING for more details.
  */
 
+#include "event_channel.h"
+
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
-#include <xen/event.h>
-#include <xen/event_fifo.h>
 #include <xen/paging.h>
 #include <xen/mm.h>
 #include <xen/domain_page.h>
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,12 +105,6 @@ void notify_via_xen_event_channel(struct
 #define bucket_from_port(d, p) \
     ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
 
-static inline unsigned int max_evtchns(const struct domain *d)
-{
-    return d->evtchn_fifo ? EVTCHN_FIFO_NR_CHANNELS
-                          : BITS_PER_EVTCHN_WORD(d) * BITS_PER_EVTCHN_WORD(d);
-}
-
 static inline bool_t port_is_valid(struct domain *d, unsigned int p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
@@ -176,8 +170,6 @@ static bool evtchn_usable(const struct e
 
 void evtchn_check_pollers(struct domain *d, unsigned int port);
 
-void evtchn_2l_init(struct domain *d);
-
 /* Close all event channels and reset to 2-level ABI. */
 int evtchn_reset(struct domain *d, bool resuming);
 
@@ -227,13 +219,6 @@ static inline void evtchn_port_clear_pen
         d->evtchn_port_ops->clear_pending(d, evtchn);
 }
 
-static inline void evtchn_port_unmask(struct domain *d,
-                                      struct evtchn *evtchn)
-{
-    if ( evtchn_usable(evtchn) )
-        d->evtchn_port_ops->unmask(d, evtchn);
-}
-
 static inline bool evtchn_is_pending(const struct domain *d,
                                      const struct evtchn *evtchn)
 {
@@ -259,13 +244,6 @@ static inline bool evtchn_port_is_masked
     return rc;
 }
 
-static inline bool evtchn_is_busy(const struct domain *d,
-                                  const struct evtchn *evtchn)
-{
-    return d->evtchn_port_ops->is_busy &&
-           d->evtchn_port_ops->is_busy(d, evtchn);
-}
-
 /* Returns negative errno, zero for not pending, or positive for pending. */
 static inline int evtchn_port_poll(struct domain *d, evtchn_port_t port)
 {
@@ -285,21 +263,4 @@ static inline int evtchn_port_poll(struc
     return rc;
 }
 
-static inline int evtchn_port_set_priority(struct domain *d,
-                                           struct evtchn *evtchn,
-                                           unsigned int priority)
-{
-    if ( !d->evtchn_port_ops->set_priority )
-        return -ENOSYS;
-    if ( !evtchn_usable(evtchn) )
-        return -EACCES;
-    return d->evtchn_port_ops->set_priority(d, evtchn, priority);
-}
-
-static inline void evtchn_port_print_state(struct domain *d,
-                                           const struct evtchn *evtchn)
-{
-    d->evtchn_port_ops->print_state(d, evtchn);
-}
-
 #endif /* __XEN_EVENT_H__ */
--- a/xen/include/xen/event_fifo.h
+++ /dev/null
@@ -1,26 +0,0 @@
-/*
- * FIFO-based event channel ABI.
- *
- * Copyright (C) 2013 Citrix Systems R&D Ltd.
- *
- * This source code is licensed under the GNU General Public License,
- * Version 2 or later.  See the file COPYING for more details.
- */
-#ifndef __XEN_EVENT_FIFO_H__
-#define __XEN_EVENT_FIFO_H__
-
-int evtchn_fifo_init_control(struct evtchn_init_control *init_control);
-int evtchn_fifo_expand_array(const struct evtchn_expand_array *expand_array);
-void evtchn_fifo_destroy(struct domain *domain);
-
-#endif /* __XEN_EVENT_FIFO_H__ */
-
-/*
- * Local variables:
- * mode: C
- * c-file-style: "BSD"
- * c-basic-offset: 4
- * tab-width: 4
- * indent-tabs-mode: nil
- * End:
- */



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9654.25369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsKG-0000sa-Ev; Tue, 20 Oct 2020 14:09:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9654.25369; Tue, 20 Oct 2020 14:09:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsKG-0000sT-Bi; Tue, 20 Oct 2020 14:09:20 +0000
Received: by outflank-mailman (input) for mailman id 9654;
 Tue, 20 Oct 2020 14:09:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsKF-0000sI-9H
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:09:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c630dad3-4799-4377-bdf2-79302f110b96;
 Tue, 20 Oct 2020 14:09:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 17FC8AE09;
 Tue, 20 Oct 2020 14:09:17 +0000 (UTC)
X-Inumbo-ID: c630dad3-4799-4377-bdf2-79302f110b96
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603202957;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tHlxyP/F7gGB3JnsejxTwZrkjVu1g48ZBzUOABAs6/Y=;
	b=JDune0olJKBYxz3oNa5Q4OKvF+6lqD1XXXV2w7i4WOChIGlL/NE0j+H0YGt+sHz9A+UJOC
	tkrCocscQmxWy1M6S0bh/fEITVMIH2vzuKCozSJG90HSiitkOhZfBKFP6vMMLYQwmxDZUg
	zeFj/VNEFQuGXWJ9BZuUcsiscIXqRIE=
Subject: [PATCH v2 3/8] evtchn: rename and adjust guest_enabled_event()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Message-ID: <119ad32e-91f0-5c1d-c400-de78ab816839@suse.com>
Date: Tue, 20 Oct 2020 16:09:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The function isn't about an "event" in general, but about a vIRQ. It
also failed to honor global vIRQs, instead assuming the caller would
pass vCPU 0 in such a case.

While at it, also adjust the
- types the function uses,
- single user, to make use of domain_vcpu().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/cpu/mcheck/vmce.h
+++ b/xen/arch/x86/cpu/mcheck/vmce.h
@@ -5,9 +5,9 @@
 
 int vmce_init(struct cpuinfo_x86 *c);
 
-#define dom0_vmce_enabled() (hardware_domain && hardware_domain->max_vcpus \
-        && hardware_domain->vcpu[0] \
-        && guest_enabled_event(hardware_domain->vcpu[0], VIRQ_MCA))
+#define dom0_vmce_enabled() \
+    (hardware_domain && \
+     evtchn_virq_enabled(domain_vcpu(hardware_domain, 0), VIRQ_MCA))
 
 int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn);
 
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -778,9 +778,15 @@ out:
     return ret;
 }
 
-int guest_enabled_event(struct vcpu *v, uint32_t virq)
+bool evtchn_virq_enabled(const struct vcpu *v, unsigned int virq)
 {
-    return ((v != NULL) && (v->virq_to_evtchn[virq] != 0));
+    if ( !v )
+        return false;
+
+    if ( virq_is_global(virq) && v->vcpu_id )
+        v = domain_vcpu(v->domain, 0);
+
+    return v->virq_to_evtchn[virq];
 }
 
 void send_guest_vcpu_virq(struct vcpu *v, uint32_t virq)
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -85,8 +85,8 @@ int alloc_unbound_xen_event_channel(
     xen_event_channel_notification_t notification_fn);
 void free_xen_event_channel(struct domain *d, int port);
 
-/* Query if event channel is in use by the guest */
-int guest_enabled_event(struct vcpu *v, uint32_t virq);
+/* Query whether a vIRQ is in use by the guest. */
+bool evtchn_virq_enabled(const struct vcpu *v, unsigned int virq);
 
 /* Notify remote end of a Xen-attached event channel.*/
 void notify_via_xen_event_channel(struct domain *ld, int lport);



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:09:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:09:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9655.25381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsKU-0000wo-Nb; Tue, 20 Oct 2020 14:09:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9655.25381; Tue, 20 Oct 2020 14:09:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsKU-0000wg-KC; Tue, 20 Oct 2020 14:09:34 +0000
Received: by outflank-mailman (input) for mailman id 9655;
 Tue, 20 Oct 2020 14:09:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0BvO=D3=redhat.com=trix@srs-us1.protection.inumbo.net>)
 id 1kUsKT-0000wJ-6W
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:09:33 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6309b2c0-1d40-43cb-a39d-e6892d122521;
 Tue, 20 Oct 2020 14:09:32 +0000 (UTC)
Received: from mail-qv1-f69.google.com (mail-qv1-f69.google.com
 [209.85.219.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-255-crs-SrgkPQi59zbUaQsCog-1; Tue, 20 Oct 2020 10:09:30 -0400
Received: by mail-qv1-f69.google.com with SMTP id m11so1420326qvt.11
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 07:09:30 -0700 (PDT)
Received: from trix.remote.csb (075-142-250-213.res.spectrum.com.
 [75.142.250.213])
 by smtp.gmail.com with ESMTPSA id o14sm785284qto.16.2020.10.20.07.09.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 20 Oct 2020 07:09:28 -0700 (PDT)
X-Inumbo-ID: 6309b2c0-1d40-43cb-a39d-e6892d122521
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603202972;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rOdQp2iOSkahNa0CfH1+VDlgH/zqWEjNOjN0qUXEes0=;
	b=ewDv3aOrdJ+x5A6s7YiY6e3DZ/xG19fiCFYxEBc219ynzJxgu/51wlX4XbcPcMAU2oHyfD
	PtMot7BJZaqOvPJXq6YGOAgeRSVWl5/eHGglGFP0XXl1T9R1hHW25Ri1M+zqhWdXytm0kf
	dEEfBPozJhD9ETuNOOZz/yhFPpgdSTo=
X-MC-Unique: crs-SrgkPQi59zbUaQsCog-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=rOdQp2iOSkahNa0CfH1+VDlgH/zqWEjNOjN0qUXEes0=;
        b=iMPUrsx4ldXtzpmz3w5ub2agzkVd7ReSWu67R0EvJZjAUeQ0Fz/iD0LShrJuziZn4z
         KvL1wk6wT04TUVuxhTDQiWbMtdZrvEIRcJL/zCudLhkfe5Vc498WnMdwD9a7G6lRo3Xf
         rIwzOsXA9aNoMGxh3++vsTlzUD4HYrQlAcAFq1zqG2DQWuqr8z6XA2n4diAfw2On/axs
         n8noU3v4+KgndbkrVPYXiv218smDYYNCMeNIEWOozM11bZ6nEYqHd6/Iobz4ZAl9XJ1U
         Cu+YPF8L3nQgNKKWHIADfvOUVAsHxitE2lvMlw9knDrYRgxVupxeSyW26l6YDIUPZ/5+
         jNKQ==
X-Gm-Message-State: AOAM53161i/OxhEqiTYh/qd7Ztsfct6Mqn+zml8i4sDZmQsBSsrmza05
	4/3YOiJ9+e0jqilM+/3aSOCBNSjJaq++VbyIB+JhMAc0hybVhjYqeFcP75wVIMr5z1Ycyz93bfs
	YW4djY0HojDmNjLF/mu8EOsz0VTM=
X-Received: by 2002:a05:620a:2195:: with SMTP id g21mr2990080qka.358.1603202969736;
        Tue, 20 Oct 2020 07:09:29 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwKUy8N8weGLwzCzHDZ8oTBMtQsWbHEBLlPfZD5zpEfJ13ExA6QvxSe7FmqqHnm4D1jNxINeQ==
X-Received: by 2002:a05:620a:2195:: with SMTP id g21mr2990039qka.358.1603202969497;
        Tue, 20 Oct 2020 07:09:29 -0700 (PDT)
Subject: Re: [RFC] treewide: cleanup unreachable breaks
To: Jason Gunthorpe <jgg@ziepe.ca>, Nick Desaulniers <ndesaulniers@google.com>
Cc: LKML <linux-kernel@vger.kernel.org>, linux-edac@vger.kernel.org,
 linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 openipmi-developer@lists.sourceforge.net,
 "open list:HARDWARE RANDOM NUMBER GENERATOR CORE"
 <linux-crypto@vger.kernel.org>,
 Linux ARM <linux-arm-kernel@lists.infradead.org>,
 linux-power@fi.rohmeurope.com, linux-gpio@vger.kernel.org,
 amd-gfx list <amd-gfx@lists.freedesktop.org>,
 dri-devel <dri-devel@lists.freedesktop.org>, nouveau@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org,
 spice-devel@lists.freedesktop.org, linux-iio@vger.kernel.org,
 linux-amlogic@lists.infradead.org, industrypack-devel@lists.sourceforge.net,
 linux-media@vger.kernel.org, MPT-FusionLinux.pdl@broadcom.com,
 linux-scsi@vger.kernel.org, linux-mtd@lists.infradead.org,
 linux-can@vger.kernel.org, Network Development <netdev@vger.kernel.org>,
 intel-wired-lan@lists.osuosl.org, ath10k@lists.infradead.org,
 linux-wireless <linux-wireless@vger.kernel.org>,
 linux-stm32@st-md-mailman.stormreply.com, linux-nfc@lists.01.org,
 linux-nvdimm <linux-nvdimm@lists.01.org>, linux-pci@vger.kernel.org,
 linux-samsung-soc@vger.kernel.org, platform-driver-x86@vger.kernel.org,
 patches@opensource.cirrus.com, storagedev@microchip.com,
 devel@driverdev.osuosl.org, linux-serial@vger.kernel.org,
 linux-usb@vger.kernel.org, usb-storage@lists.one-eyed-alien.net,
 linux-watchdog@vger.kernel.org, ocfs2-devel@oss.oracle.com,
 bpf <bpf@vger.kernel.org>, linux-integrity@vger.kernel.org,
 linux-security-module@vger.kernel.org, keyrings@vger.kernel.org,
 alsa-devel@alsa-project.org,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 Greg KH <gregkh@linuxfoundation.org>, George Burgess <gbiv@google.com>
References: <20201017160928.12698-1-trix@redhat.com>
 <20201018054332.GB593954@kroah.com>
 <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
 <20201019230546.GH36674@ziepe.ca>
From: Tom Rix <trix@redhat.com>
Message-ID: <859ff6ff-3e10-195c-6961-7b2902b151d4@redhat.com>
Date: Tue, 20 Oct 2020 07:09:23 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <20201019230546.GH36674@ziepe.ca>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=trix@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 10/19/20 4:05 PM, Jason Gunthorpe wrote:
> On Mon, Oct 19, 2020 at 12:42:15PM -0700, Nick Desaulniers wrote:
>> On Sat, Oct 17, 2020 at 10:43 PM Greg KH <gregkh@linuxfoundation.org> wrote:
>>> On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
>>>> From: Tom Rix <trix@redhat.com>
>>>>
>>>> This is a upcoming change to clean up a new warning treewide.
>>>> I am wondering if the change could be one mega patch (see below) or
>>>> normal patch per file about 100 patches or somewhere half way by collecting
>>>> early acks.
>>> Please break it up into one-patch-per-subsystem, like normal, and get it
>>> merged that way.
>>>
>>> Sending us a patch, without even a diffstat to review, isn't going to
>>> get you very far...
>> Tom,
>> If you're able to automate this cleanup, I suggest checking in a
>> script that can be run on a directory.  Then for each subsystem you
>> can say in your commit "I ran scripts/fix_whatever.py on this subdir."
>>  Then others can help you drive the tree wide cleanup.  Then we can
>> enable -Wunreachable-code-break either by default, or W=2 right now
>> might be a good idea.
> I remember using clang-modernize in the past to fix issues very
> similar to this, if clang machinery can generate the warning, can't
> something like clang-tidy directly generate the patch?

Yes, clang-tidy and similar are good tools. Sometimes they change too
much, and your time shifts from editing to analyzing and dropping
changes.

I am looking at them for auto-changing APIs. When I have something
greater than half-baked I will post.

Tom

>
> You can send me a patch for drivers/infiniband/* as well
>
> Thanks,
> Jason
>



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:09:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:09:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9656.25393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsKe-00011A-1X; Tue, 20 Oct 2020 14:09:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9656.25393; Tue, 20 Oct 2020 14:09:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsKd-000113-UX; Tue, 20 Oct 2020 14:09:43 +0000
Received: by outflank-mailman (input) for mailman id 9656;
 Tue, 20 Oct 2020 14:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsKc-00010e-Sl
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:09:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4aa5e11a-d135-4517-803f-299d1505e455;
 Tue, 20 Oct 2020 14:09:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6B998AE09;
 Tue, 20 Oct 2020 14:09:41 +0000 (UTC)
X-Inumbo-ID: 4aa5e11a-d135-4517-803f-299d1505e455
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603202981;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=d1Xj/MOkNyuR81WgrEVOpCecqLMBcdoOGGhlDIYqrtI=;
	b=ABtTjs/uCyAQcnAUmVNC3t0s5n+GxoseUj44OjY9cG1LrB4T8wd1kCjlfyoY6KOM0+ts/z
	V7SUIXkHo86kMmmkMMhP4YDd8XNUwFVGnNf78ebRU3qfdv4U3jlUTOUdOtEddiOxb+RJhX
	+UExkW06BSjjDafAZyTEXLp/MNHtRD0=
Subject: [PATCH v2 4/8] evtchn: let evtchn_set_priority() acquire the
 per-channel lock
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Message-ID: <266c9178-700b-5663-4b5f-69f160a165ab@suse.com>
Date: Tue, 20 Oct 2020 16:09:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Some lock needs to be held to make sure the port doesn't change state,
but there's no point in holding the per-domain event lock here. Switch
to using the finer-grained per-channel lock instead.

FAOD this doesn't guarantee anything towards in particular
evtchn_fifo_set_pending(), as for interdomain channels that function
would be called with the remote side's per-channel lock held.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Drop acquiring of event lock. Re-write title and description.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1154,20 +1154,17 @@ static long evtchn_set_priority(const st
 {
     struct domain *d = current->domain;
     unsigned int port = set_priority->port;
+    struct evtchn *chn;
     long ret;
-
-    spin_lock(&d->event_lock);
+    unsigned long flags;
 
     if ( !port_is_valid(d, port) )
-    {
-        spin_unlock(&d->event_lock);
         return -EINVAL;
-    }
-
-    ret = evtchn_port_set_priority(d, evtchn_from_port(d, port),
-                                   set_priority->priority);
 
-    spin_unlock(&d->event_lock);
+    chn = evtchn_from_port(d, port);
+    spin_lock_irqsave(&chn->lock, flags);
+    ret = evtchn_port_set_priority(d, chn, set_priority->priority);
+    spin_unlock_irqrestore(&chn->lock, flags);
 
     return ret;
 }



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:10:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:10:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9661.25405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsL6-0001qc-AX; Tue, 20 Oct 2020 14:10:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9661.25405; Tue, 20 Oct 2020 14:10:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsL6-0001qV-7c; Tue, 20 Oct 2020 14:10:12 +0000
Received: by outflank-mailman (input) for mailman id 9661;
 Tue, 20 Oct 2020 14:10:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsL5-0001qB-4T
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:10:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee25d3e3-9aa1-48b8-ba7d-0f64b2da3bad;
 Tue, 20 Oct 2020 14:10:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 763AEAC6A;
 Tue, 20 Oct 2020 14:10:09 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kUsL5-0001qB-4T
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:10:11 +0000
X-Inumbo-ID: ee25d3e3-9aa1-48b8-ba7d-0f64b2da3bad
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ee25d3e3-9aa1-48b8-ba7d-0f64b2da3bad;
	Tue, 20 Oct 2020 14:10:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603203009;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iO8TnTaJzqrD1+b+3oC7Ha3LDiV5w9SikaRO6jqv6D8=;
	b=m+cpOEI8+QL1qSwSA+pUH3g+065t4UMNLiXsXQZfVsSuQPB2/p5GPg3eh0uDqLpgKt0qrn
	8prfLLpi5eJrPMxWw3O7z004Ix+Z1IN3GVfGuePr0NWjK3iHNDBRa3pRsPCFcY2lMExyQM
	sa6IlUfPwkFIllZyYWME4leEOkDm8cQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 763AEAC6A;
	Tue, 20 Oct 2020 14:10:09 +0000 (UTC)
Subject: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Message-ID: <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
Date: Tue, 20 Oct 2020 16:10:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The per-vCPU virq_lock, which is being held anyway, together with there
not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
is zero, provide sufficient guarantees. Undo the lock addition done for
XSA-343 (commit e045199c7c9c "evtchn: address races with
evtchn_reset()"). Update the description next to struct evtchn_port_ops
accordingly.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -794,7 +794,6 @@ void send_guest_vcpu_virq(struct vcpu *v
     unsigned long flags;
     int port;
     struct domain *d;
-    struct evtchn *chn;
 
     ASSERT(!virq_is_global(virq));
 
@@ -805,10 +804,7 @@ void send_guest_vcpu_virq(struct vcpu *v
         goto out;
 
     d = v->domain;
-    chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
-    evtchn_port_set_pending(d, v->vcpu_id, chn);
-    spin_unlock(&chn->lock);
+    evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
@@ -837,9 +833,7 @@ void send_guest_global_virq(struct domai
         goto out;
 
     chn = evtchn_from_port(d, port);
-    spin_lock(&chn->lock);
     evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
-    spin_unlock(&chn->lock);
 
  out:
     spin_unlock_irqrestore(&v->virq_lock, flags);
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
  * Low-level event channel port ops.
  *
  * All hooks have to be called with a lock held which prevents the channel
- * from changing state. This may be the domain event lock, the per-channel
- * lock, or in the case of sending interdomain events also the other side's
- * per-channel lock. Exceptions apply in certain cases for the PV shim.
+ * from changing state. This may be
+ * - the domain event lock,
+ * - the per-channel lock,
+ * - in the case of sending interdomain events the other side's per-channel
+ *   lock,
+ * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
+ *   combination with the ordering enforced through how the vCPU's
+ *   virq_to_evtchn[] gets updated),
+ * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
+ * Exceptions apply in certain cases for the PV shim.
  */
 struct evtchn_port_ops {
     void (*init)(struct domain *d, struct evtchn *evtchn);
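The guarantee the patch relies on, namely that the sender only dereferences the channel while holding virq_lock and while the published virq_to_evtchn[] slot is non-zero, can be modeled with a single mutex. Again a hedged sketch, not Xen code: send_virq(), close_virq() and the globals are invented here.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t virq_lock = PTHREAD_MUTEX_INITIALIZER;
static int virq_to_evtchn;   /* 0 means "no event channel bound" */
static int delivered;

/* Sender: only act on the channel if the binding is still live; with the
 * binding checked under virq_lock, no separate per-channel lock is needed. */
static void send_virq(void)
{
    pthread_mutex_lock(&virq_lock);
    if (virq_to_evtchn != 0)     /* no set_pending analogue when unbound */
        delivered++;
    pthread_mutex_unlock(&virq_lock);
}

/* Close path: clear the binding under the same lock, so no sender can
 * observe a stale port afterwards. */
static void close_virq(void)
{
    pthread_mutex_lock(&virq_lock);
    virq_to_evtchn = 0;
    pthread_mutex_unlock(&virq_lock);
}

int demo_virq(void)
{
    virq_to_evtchn = 5;  /* bind */
    delivered = 0;
    send_virq();         /* delivered once */
    close_virq();
    send_virq();         /* binding gone: nothing delivered */
    return delivered;
}
```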



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:10:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9667.25417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsLq-00020f-LL; Tue, 20 Oct 2020 14:10:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9667.25417; Tue, 20 Oct 2020 14:10:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsLq-00020Y-HN; Tue, 20 Oct 2020 14:10:58 +0000
Received: by outflank-mailman (input) for mailman id 9667;
 Tue, 20 Oct 2020 14:10:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsLp-00020L-7E
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:10:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8ab37ee-23aa-4929-9bc3-89900844491d;
 Tue, 20 Oct 2020 14:10:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EBC7AC6A;
 Tue, 20 Oct 2020 14:10:55 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kUsLp-00020L-7E
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:10:57 +0000
X-Inumbo-ID: d8ab37ee-23aa-4929-9bc3-89900844491d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d8ab37ee-23aa-4929-9bc3-89900844491d;
	Tue, 20 Oct 2020 14:10:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603203055;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=htCKeLk050dUSTPkbNMbWkMtxwQVWeeYQPtwH2yDG7s=;
	b=XQDD2rgF+nL8IETAnhFBJqQXLoRjN5PBxYjw/Z+puWmois3/RhdAXqzeqFefDZDN69dJHc
	sEPPhn3rLphXG9K1Nv28fWAh2QnJQ4+NHu3T8fSJGDYXQuQmKTPQYsxlWRPPQcfLvsFgze
	QZqX5ENS4U9nIt4RIxpb6Gm/str1lac=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2EBC7AC6A;
	Tue, 20 Oct 2020 14:10:55 +0000 (UTC)
Subject: [PATCH v2 6/8] evtchn: convert vIRQ lock to an r/w one
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Message-ID: <53a2fc39-1bf1-38ce-bbdf-b512c5279b4f@suse.com>
Date: Tue, 20 Oct 2020 16:10:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There's no need to serialize all sending of vIRQ-s; all that's needed
is serialization against the closing of the respective event channels
(so far by means of a barrier). To facilitate the conversion, switch to
an ordinary write locked region in evtchn_close().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Don't introduce/use rw_barrier() here. Add comment to
    evtchn_bind_virq(). Re-base.

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -160,7 +160,7 @@ struct vcpu *vcpu_create(struct domain *
     v->vcpu_id = vcpu_id;
     v->dirty_cpu = VCPU_CPU_CLEAN;
 
-    spin_lock_init(&v->virq_lock);
+    rwlock_init(&v->virq_lock);
 
     tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
 
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -449,6 +449,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t
 
     spin_unlock_irqrestore(&chn->lock, flags);
 
+    /*
+     * If anything, the update of virq_to_evtchn[] would want guarding by
+     * virq_lock, but since this is the last action here, there's no strict
+     * need to acquire the lock. Hence holding event_lock isn't helpful
+     * anymore at this point, but utilize that its unlocking acts as the
+     * otherwise necessary smp_wmb() here.
+     */
     v->virq_to_evtchn[virq] = bind->port = port;
 
  out:
@@ -638,10 +645,10 @@ int evtchn_close(struct domain *d1, int
     case ECS_VIRQ:
         for_each_vcpu ( d1, v )
         {
-            if ( v->virq_to_evtchn[chn1->u.virq] != port1 )
-                continue;
-            v->virq_to_evtchn[chn1->u.virq] = 0;
-            spin_barrier(&v->virq_lock);
+            write_lock_irqsave(&v->virq_lock, flags);
+            if ( v->virq_to_evtchn[chn1->u.virq] == port1 )
+                v->virq_to_evtchn[chn1->u.virq] = 0;
+            write_unlock_irqrestore(&v->virq_lock, flags);
         }
         break;
 
@@ -797,7 +804,7 @@ void send_guest_vcpu_virq(struct vcpu *v
 
     ASSERT(!virq_is_global(virq));
 
-    spin_lock_irqsave(&v->virq_lock, flags);
+    read_lock_irqsave(&v->virq_lock, flags);
 
     port = v->virq_to_evtchn[virq];
     if ( unlikely(port == 0) )
@@ -807,7 +814,7 @@ void send_guest_vcpu_virq(struct vcpu *v
     evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));
 
  out:
-    spin_unlock_irqrestore(&v->virq_lock, flags);
+    read_unlock_irqrestore(&v->virq_lock, flags);
 }
 
 void send_guest_global_virq(struct domain *d, uint32_t virq)
@@ -826,7 +833,7 @@ void send_guest_global_virq(struct domai
     if ( unlikely(v == NULL) )
         return;
 
-    spin_lock_irqsave(&v->virq_lock, flags);
+    read_lock_irqsave(&v->virq_lock, flags);
 
     port = v->virq_to_evtchn[virq];
     if ( unlikely(port == 0) )
@@ -836,7 +843,7 @@ void send_guest_global_virq(struct domai
     evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
 
  out:
-    spin_unlock_irqrestore(&v->virq_lock, flags);
+    read_unlock_irqrestore(&v->virq_lock, flags);
 }
 
 void send_guest_pirq(struct domain *d, const struct pirq *pirq)
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -235,7 +235,7 @@ struct vcpu
 
     /* IRQ-safe virq_lock protects against delivering VIRQ to stale evtchn. */
     evtchn_port_t    virq_to_evtchn[NR_VIRQS];
-    spinlock_t       virq_lock;
+    rwlock_t         virq_lock;
 
     /* Tasklet for continue_hypercall_on_cpu(). */
     struct tasklet   continue_hypercall_tasklet;



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:12:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:12:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9672.25429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsMr-00029x-Vh; Tue, 20 Oct 2020 14:12:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9672.25429; Tue, 20 Oct 2020 14:12:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsMr-00029q-SN; Tue, 20 Oct 2020 14:12:01 +0000
Received: by outflank-mailman (input) for mailman id 9672;
 Tue, 20 Oct 2020 14:12:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsMq-00029h-CK
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:12:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad435167-be4b-4abf-8ea5-fc1d4eed57a3;
 Tue, 20 Oct 2020 14:11:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B25D8B30B;
 Tue, 20 Oct 2020 14:11:56 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kUsMq-00029h-CK
	for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:12:00 +0000
X-Inumbo-ID: ad435167-be4b-4abf-8ea5-fc1d4eed57a3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ad435167-be4b-4abf-8ea5-fc1d4eed57a3;
	Tue, 20 Oct 2020 14:11:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603203117;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=c3HgUm976g98yd0aX222catewuO2q3dm6b07jqOgiMI=;
	b=YONrbe2RZu4lD3GjZgmM5hkkjK1Uy5VPl/ZIvy0amF8YmljsjrLXdwi9KOjLiVwj3cvLTT
	oqOaMqFprxC0/UfQ/frIAWP72urJ80xn32PoV+UabLKy7w5Z/RX+kukhaMAe80vJ3Xzfk4
	DDZSRYwJmkb9j/kUy7rFG7NI3qxDe/A=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B25D8B30B;
	Tue, 20 Oct 2020 14:11:56 +0000 (UTC)
Subject: [PATCH v2 7/8] evtchn: convert domain event lock to an r/w one
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Paul Durrant <paul@xen.org>, Daniel de Graaf <dgdegra@tycho.nsa.gov>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Message-ID: <7016755e-72a1-3bc2-3987-a483e1709605@suse.com>
Date: Tue, 20 Oct 2020 16:11:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
across pCPU-s) and the ones in EOI handling in PCI pass-through code,
serializing perhaps an entire domain isn't helpful when no state (which
isn't e.g. further protected by the per-channel lock) changes.

Unfortunately this implies dropping of lock profiling for this lock,
until r/w locks may get enabled for such functionality.

While ->notify_vcpu_id is now meant to be consistently updated with the
per-channel lock held, an extension applies to ECS_PIRQ: The field is
also guaranteed to not change with the per-domain event lock held for
writing. Therefore the unlink_pirq_port() call from evtchn_bind_vcpu()
as well as the link_pirq_port() one from evtchn_bind_pirq() could in
principle be moved out of the per-channel locked regions, but this
further code churn didn't seem worth it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Consistently lock for writing in evtchn_reset(). Fix error path in
    pci_clean_dpci_irqs(). Lock for writing in pt_irq_time_out(),
    hvm_dirq_assist(), hvm_dpci_eoi(), and hvm_dpci_isairq_eoi(). Move
    rw_barrier() introduction here. Re-base over changes earlier in the
    series.
---
RFC:
* In evtchn_bind_vcpu() the question is whether limiting the use of
  write_lock() to just the ECS_PIRQ case is really worth it.
* In flask_get_peer_sid() the question is whether we wouldn't better
  switch to using the per-channel lock.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -917,7 +917,7 @@ int arch_domain_soft_reset(struct domain
     if ( !is_hvm_domain(d) )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     for ( i = 0; i < d->nr_pirqs ; i++ )
     {
         if ( domain_pirq_to_emuirq(d, i) != IRQ_UNBOUND )
@@ -927,7 +927,7 @@ int arch_domain_soft_reset(struct domain
                 break;
         }
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( ret )
         return ret;
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -528,9 +528,9 @@ void hvm_migrate_pirqs(struct vcpu *v)
     if ( !is_iommu_enabled(d) || !hvm_domain_irq(d)->dpci )
        return;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     pt_pirq_iterate(d, migrate_pirq, v);
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 static bool hvm_get_pending_event(struct vcpu *v, struct x86_event *info)
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -404,9 +404,9 @@ int hvm_inject_msi(struct domain *d, uin
             {
                 int rc;
 
-                spin_lock(&d->event_lock);
+                write_lock(&d->event_lock);
                 rc = map_domain_emuirq_pirq(d, pirq, IRQ_MSI_EMU);
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 if ( rc )
                     return rc;
                 info = pirq_info(d, pirq);
--- a/xen/arch/x86/hvm/vioapic.c
+++ b/xen/arch/x86/hvm/vioapic.c
@@ -203,9 +203,9 @@ static int vioapic_hwdom_map_gsi(unsigne
     {
         gprintk(XENLOG_WARNING, "vioapic: error binding GSI %u: %d\n",
                 gsi, ret);
-        spin_lock(&currd->event_lock);
+        write_lock(&currd->event_lock);
         unmap_domain_pirq(currd, pirq);
-        spin_unlock(&currd->event_lock);
+        write_unlock(&currd->event_lock);
     }
     pcidevs_unlock();
 
--- a/xen/arch/x86/hvm/vmsi.c
+++ b/xen/arch/x86/hvm/vmsi.c
@@ -465,7 +465,7 @@ int msixtbl_pt_register(struct domain *d
     int r = -EINVAL;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !msixtbl_initialised(d) )
         return -ENODEV;
@@ -535,7 +535,7 @@ void msixtbl_pt_unregister(struct domain
     struct msixtbl_entry *entry;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !msixtbl_initialised(d) )
         return;
@@ -589,13 +589,13 @@ void msixtbl_pt_cleanup(struct domain *d
     if ( !msixtbl_initialised(d) )
         return;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     list_for_each_entry_safe( entry, temp,
                               &d->arch.hvm.msixtbl_list, list )
         del_msixtbl_entry(entry);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 void msix_write_completion(struct vcpu *v)
@@ -719,9 +719,9 @@ int vpci_msi_arch_update(struct vpci_msi
                          msi->arch.pirq, msi->mask);
     if ( rc )
     {
-        spin_lock(&pdev->domain->event_lock);
+        write_lock(&pdev->domain->event_lock);
         unmap_domain_pirq(pdev->domain, msi->arch.pirq);
-        spin_unlock(&pdev->domain->event_lock);
+        write_unlock(&pdev->domain->event_lock);
         pcidevs_unlock();
         msi->arch.pirq = INVALID_PIRQ;
         return rc;
@@ -760,9 +760,9 @@ static int vpci_msi_enable(const struct
     rc = vpci_msi_update(pdev, data, address, vectors, pirq, mask);
     if ( rc )
     {
-        spin_lock(&pdev->domain->event_lock);
+        write_lock(&pdev->domain->event_lock);
         unmap_domain_pirq(pdev->domain, pirq);
-        spin_unlock(&pdev->domain->event_lock);
+        write_unlock(&pdev->domain->event_lock);
         pcidevs_unlock();
         return rc;
     }
@@ -807,9 +807,9 @@ static void vpci_msi_disable(const struc
         ASSERT(!rc);
     }
 
-    spin_lock(&pdev->domain->event_lock);
+    write_lock(&pdev->domain->event_lock);
     unmap_domain_pirq(pdev->domain, pirq);
-    spin_unlock(&pdev->domain->event_lock);
+    write_unlock(&pdev->domain->event_lock);
     pcidevs_unlock();
 }
 
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2413,10 +2413,10 @@ int ioapic_guest_write(unsigned long phy
     }
     if ( pirq >= 0 )
     {
-        spin_lock(&hardware_domain->event_lock);
+        write_lock(&hardware_domain->event_lock);
         ret = map_domain_pirq(hardware_domain, pirq, irq,
                               MAP_PIRQ_TYPE_GSI, NULL);
-        spin_unlock(&hardware_domain->event_lock);
+        write_unlock(&hardware_domain->event_lock);
         if ( ret < 0 )
             return ret;
     }
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1536,7 +1536,7 @@ int pirq_guest_bind(struct vcpu *v, stru
     irq_guest_action_t *action, *newaction = NULL;
     int                 rc = 0;
 
-    WARN_ON(!spin_is_locked(&v->domain->event_lock));
+    WARN_ON(!rw_is_write_locked(&v->domain->event_lock));
     BUG_ON(!local_irq_is_enabled());
 
  retry:
@@ -1756,7 +1756,7 @@ void pirq_guest_unbind(struct domain *d,
     struct irq_desc *desc;
     int irq = 0;
 
-    WARN_ON(!spin_is_locked(&d->event_lock));
+    WARN_ON(!rw_is_write_locked(&d->event_lock));
 
     BUG_ON(!local_irq_is_enabled());
     desc = pirq_spin_lock_irq_desc(pirq, NULL);
@@ -1793,7 +1793,7 @@ static bool pirq_guest_force_unbind(stru
     unsigned int i;
     bool bound = false;
 
-    WARN_ON(!spin_is_locked(&d->event_lock));
+    WARN_ON(!rw_is_write_locked(&d->event_lock));
 
     BUG_ON(!local_irq_is_enabled());
     desc = pirq_spin_lock_irq_desc(pirq, NULL);
@@ -2037,7 +2037,7 @@ int get_free_pirq(struct domain *d, int
 {
     int i;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( type == MAP_PIRQ_TYPE_GSI )
     {
@@ -2062,7 +2062,7 @@ int get_free_pirqs(struct domain *d, uns
 {
     unsigned int i, found = 0;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     for ( i = d->nr_pirqs - 1; i >= nr_irqs_gsi; --i )
         if ( is_free_pirq(d, pirq_info(d, i)) )
@@ -2090,7 +2090,7 @@ int map_domain_pirq(
     DECLARE_BITMAP(prepared, MAX_MSI_IRQS) = {};
     DECLARE_BITMAP(granted, MAX_MSI_IRQS) = {};
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !irq_access_permitted(current->domain, irq))
         return -EPERM;
@@ -2309,7 +2309,7 @@ int unmap_domain_pirq(struct domain *d,
         return -EINVAL;
 
     ASSERT(pcidevs_locked());
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     info = pirq_info(d, pirq);
     if ( !info || (irq = info->arch.irq) <= 0 )
@@ -2436,13 +2436,13 @@ void free_domain_pirqs(struct domain *d)
     int i;
 
     pcidevs_lock();
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     for ( i = 0; i < d->nr_pirqs; i++ )
         if ( domain_pirq_to_irq(d, i) > 0 )
             unmap_domain_pirq(d, i);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
 }
 
@@ -2685,7 +2685,7 @@ int map_domain_emuirq_pirq(struct domain
     int old_emuirq = IRQ_UNBOUND, old_pirq = IRQ_UNBOUND;
     struct pirq *info;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     if ( !is_hvm_domain(d) )
         return -EINVAL;
@@ -2751,7 +2751,7 @@ int unmap_domain_pirq_emuirq(struct doma
     if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
         return -EINVAL;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     emuirq = domain_pirq_to_emuirq(d, pirq);
     if ( emuirq == IRQ_UNBOUND )
@@ -2799,7 +2799,7 @@ static int allocate_pirq(struct domain *
 {
     int current_pirq;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
     current_pirq = domain_irq_to_pirq(d, irq);
     if ( pirq < 0 )
     {
@@ -2871,7 +2871,7 @@ int allocate_and_map_gsi_pirq(struct dom
     }
 
     /* Verify or get pirq. */
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     pirq = allocate_pirq(d, index, *pirq_p, irq, MAP_PIRQ_TYPE_GSI, NULL);
     if ( pirq < 0 )
     {
@@ -2884,7 +2884,7 @@ int allocate_and_map_gsi_pirq(struct dom
         *pirq_p = pirq;
 
  done:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return ret;
 }
@@ -2925,7 +2925,7 @@ int allocate_and_map_msi_pirq(struct dom
 
     pcidevs_lock();
     /* Verify or get pirq. */
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     pirq = allocate_pirq(d, index, *pirq_p, irq, type, &msi->entry_nr);
     if ( pirq < 0 )
     {
@@ -2938,7 +2938,7 @@ int allocate_and_map_msi_pirq(struct dom
         *pirq_p = pirq;
 
  done:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
     if ( ret )
     {
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -34,7 +34,7 @@ static int physdev_hvm_map_pirq(
 
     ASSERT(!is_hardware_domain(d));
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     switch ( type )
     {
     case MAP_PIRQ_TYPE_GSI: {
@@ -84,7 +84,7 @@ static int physdev_hvm_map_pirq(
         break;
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     return ret;
 }
 
@@ -154,18 +154,18 @@ int physdev_unmap_pirq(domid_t domid, in
 
     if ( is_hvm_domain(d) && has_pirq(d) )
     {
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         if ( domain_pirq_to_emuirq(d, pirq) != IRQ_UNBOUND )
             ret = unmap_domain_pirq_emuirq(d, pirq);
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         if ( domid == DOMID_SELF || ret )
             goto free_domain;
     }
 
     pcidevs_lock();
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     ret = unmap_domain_pirq(d, pirq);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     pcidevs_unlock();
 
  free_domain:
@@ -192,10 +192,10 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         ret = -EINVAL;
         if ( eoi.irq >= currd->nr_pirqs )
             break;
-        spin_lock(&currd->event_lock);
+        read_lock(&currd->event_lock);
         pirq = pirq_info(currd, eoi.irq);
         if ( !pirq ) {
-            spin_unlock(&currd->event_lock);
+            read_unlock(&currd->event_lock);
             break;
         }
         if ( currd->arch.auto_unmask )
@@ -214,7 +214,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
                     && hvm_irq->gsi_assert_count[gsi] )
                 send_guest_pirq(currd, pirq);
         }
-        spin_unlock(&currd->event_lock);
+        read_unlock(&currd->event_lock);
         ret = 0;
         break;
     }
@@ -626,7 +626,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( copy_from_guest(&out, arg, 1) != 0 )
             break;
 
-        spin_lock(&currd->event_lock);
+        write_lock(&currd->event_lock);
 
         ret = get_free_pirq(currd, out.type);
         if ( ret >= 0 )
@@ -639,7 +639,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
                 ret = -ENOMEM;
         }
 
-        spin_unlock(&currd->event_lock);
+        write_unlock(&currd->event_lock);
 
         if ( ret >= 0 )
         {
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -448,7 +448,7 @@ static long pv_shim_event_channel_op(int
         if ( rc )                                                           \
             break;                                                          \
                                                                             \
-        spin_lock(&d->event_lock);                                          \
+        write_lock(&d->event_lock);                                         \
         rc = evtchn_allocate_port(d, op.port_field);                        \
         if ( rc )                                                           \
         {                                                                   \
@@ -457,7 +457,7 @@ static long pv_shim_event_channel_op(int
         }                                                                   \
         else                                                                \
             evtchn_reserve(d, op.port_field);                               \
-        spin_unlock(&d->event_lock);                                        \
+        write_unlock(&d->event_lock);                                       \
                                                                             \
         if ( !rc && __copy_to_guest(arg, &op, 1) )                          \
             rc = -EFAULT;                                                   \
@@ -585,11 +585,11 @@ static long pv_shim_event_channel_op(int
         if ( rc )
             break;
 
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         rc = evtchn_allocate_port(d, ipi.port);
         if ( rc )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
 
             close.port = ipi.port;
             BUG_ON(xen_hypercall_event_channel_op(EVTCHNOP_close, &close));
@@ -598,7 +598,7 @@ static long pv_shim_event_channel_op(int
 
         evtchn_assign_vcpu(d, ipi.port, ipi.vcpu);
         evtchn_reserve(d, ipi.port);
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
 
         if ( __copy_to_guest(arg, &ipi, 1) )
             rc = -EFAULT;
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -261,7 +261,7 @@ static long evtchn_alloc_unbound(evtchn_
     if ( d == NULL )
         return -ESRCH;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( (port = get_free_port(d)) < 0 )
         ERROR_EXIT_DOM(port, d);
@@ -284,7 +284,7 @@ static long evtchn_alloc_unbound(evtchn_
 
  out:
     check_free_port(d, port);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     rcu_unlock_domain(d);
 
     return rc;
@@ -337,14 +337,14 @@ static long evtchn_bind_interdomain(evtc
     /* Avoid deadlock by first acquiring lock of domain with smaller id. */
     if ( ld < rd )
     {
-        spin_lock(&ld->event_lock);
-        spin_lock(&rd->event_lock);
+        write_lock(&ld->event_lock);
+        read_lock(&rd->event_lock);
     }
     else
     {
         if ( ld != rd )
-            spin_lock(&rd->event_lock);
-        spin_lock(&ld->event_lock);
+            read_lock(&rd->event_lock);
+        write_lock(&ld->event_lock);
     }
 
     if ( (lport = get_free_port(ld)) < 0 )
@@ -385,9 +385,9 @@ static long evtchn_bind_interdomain(evtc
 
  out:
     check_free_port(ld, lport);
-    spin_unlock(&ld->event_lock);
+    write_unlock(&ld->event_lock);
     if ( ld != rd )
-        spin_unlock(&rd->event_lock);
+        read_unlock(&rd->event_lock);
     
     rcu_unlock_domain(rd);
 
@@ -419,7 +419,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t
     if ( (v = domain_vcpu(d, vcpu)) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( v->virq_to_evtchn[virq] != 0 )
         ERROR_EXIT(-EEXIST);
@@ -459,7 +459,7 @@ int evtchn_bind_virq(evtchn_bind_virq_t
     v->virq_to_evtchn[virq] = bind->port = port;
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -476,7 +476,7 @@ static long evtchn_bind_ipi(evtchn_bind_
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( (port = get_free_port(d)) < 0 )
         ERROR_EXIT(port);
@@ -494,7 +494,7 @@ static long evtchn_bind_ipi(evtchn_bind_
     bind->port = port;
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -541,7 +541,7 @@ static long evtchn_bind_pirq(evtchn_bind
     if ( !is_hvm_domain(d) && !pirq_access_permitted(d, pirq) )
         return -EPERM;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     if ( pirq_to_evtchn(d, pirq) != 0 )
         ERROR_EXIT(-EEXIST);
@@ -581,7 +581,7 @@ static long evtchn_bind_pirq(evtchn_bind
 
  out:
     check_free_port(d, port);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -597,7 +597,7 @@ int evtchn_close(struct domain *d1, int
     unsigned long  flags;
 
  again:
-    spin_lock(&d1->event_lock);
+    write_lock(&d1->event_lock);
 
     if ( !port_is_valid(d1, port1) )
     {
@@ -665,13 +665,11 @@ int evtchn_close(struct domain *d1, int
                 BUG();
 
             if ( d1 < d2 )
-            {
-                spin_lock(&d2->event_lock);
-            }
+                read_lock(&d2->event_lock);
             else if ( d1 != d2 )
             {
-                spin_unlock(&d1->event_lock);
-                spin_lock(&d2->event_lock);
+                write_unlock(&d1->event_lock);
+                read_lock(&d2->event_lock);
                 goto again;
             }
         }
@@ -718,11 +716,11 @@ int evtchn_close(struct domain *d1, int
     if ( d2 != NULL )
     {
         if ( d1 != d2 )
-            spin_unlock(&d2->event_lock);
+            read_unlock(&d2->event_lock);
         put_domain(d2);
     }
 
-    spin_unlock(&d1->event_lock);
+    write_unlock(&d1->event_lock);
 
     return rc;
 }
@@ -944,7 +942,7 @@ int evtchn_status(evtchn_status_t *statu
     if ( d == NULL )
         return -ESRCH;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
 
     if ( !port_is_valid(d, port) )
     {
@@ -997,7 +995,7 @@ int evtchn_status(evtchn_status_t *statu
     status->vcpu = chn->notify_vcpu_id;
 
  out:
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
     rcu_unlock_domain(d);
 
     return rc;
@@ -1010,20 +1008,19 @@ long evtchn_bind_vcpu(unsigned int port,
     struct evtchn *chn;
     long           rc = 0;
     struct vcpu   *v;
+    bool           write_locked = false;
+    unsigned long  flags;
 
     /* Use the vcpu info to prevent speculative out-of-bound accesses */
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
-
     if ( !port_is_valid(d, port) )
-    {
-        rc = -EINVAL;
-        goto out;
-    }
+        return -EINVAL;
 
     chn = evtchn_from_port(d, port);
+ again:
+    spin_lock_irqsave(&chn->lock, flags);
 
     /* Guest cannot re-bind a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(chn)) )
@@ -1047,19 +1044,32 @@ long evtchn_bind_vcpu(unsigned int port,
     case ECS_PIRQ:
         if ( chn->notify_vcpu_id == v->vcpu_id )
             break;
+        if ( !write_locked )
+        {
+            spin_unlock_irqrestore(&chn->lock, flags);
+            write_lock(&d->event_lock);
+            write_locked = true;
+            goto again;
+        }
+
         unlink_pirq_port(chn, d->vcpu[chn->notify_vcpu_id]);
         chn->notify_vcpu_id = v->vcpu_id;
+        spin_unlock_irqrestore(&chn->lock, flags);
         pirq_set_affinity(d, chn->u.pirq.irq,
                           cpumask_of(v->processor));
         link_pirq_port(port, chn, v);
-        break;
+
+        write_unlock(&d->event_lock);
+        return 0;
     default:
         rc = -EINVAL;
         break;
     }
 
  out:
-    spin_unlock(&d->event_lock);
+    spin_unlock_irqrestore(&chn->lock, flags);
+    if ( write_locked )
+        write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -1103,7 +1113,7 @@ int evtchn_reset(struct domain *d, bool
     if ( d != current->domain && !d->controller_pause_count )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     /*
      * If we are resuming, then start where we stopped. Otherwise, check
@@ -1114,7 +1124,7 @@ int evtchn_reset(struct domain *d, bool
     if ( i > d->next_evtchn )
         d->next_evtchn = i;
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( !i )
         return -EBUSY;
@@ -1126,14 +1136,14 @@ int evtchn_reset(struct domain *d, bool
         /* NB: Choice of frequency is arbitrary. */
         if ( !(i & 0x3f) && hypercall_preempt_check() )
         {
-            spin_lock(&d->event_lock);
+            write_lock(&d->event_lock);
             d->next_evtchn = i;
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -ERESTART;
         }
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     d->next_evtchn = 0;
 
@@ -1146,7 +1156,7 @@ int evtchn_reset(struct domain *d, bool
         evtchn_2l_init(d);
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
@@ -1335,7 +1345,7 @@ int alloc_unbound_xen_event_channel(
     int            port, rc;
     unsigned long  flags;
 
-    spin_lock(&ld->event_lock);
+    write_lock(&ld->event_lock);
 
     port = rc = get_free_port(ld);
     if ( rc < 0 )
@@ -1363,7 +1373,7 @@ int alloc_unbound_xen_event_channel(
 
  out:
     check_free_port(ld, port);
-    spin_unlock(&ld->event_lock);
+    write_unlock(&ld->event_lock);
 
     return rc < 0 ? rc : port;
 }
@@ -1451,7 +1461,8 @@ int evtchn_init(struct domain *d, unsign
         return -ENOMEM;
     d->valid_evtchns = EVTCHNS_PER_BUCKET;
 
-    spin_lock_init_prof(d, event_lock);
+    rwlock_init(&d->event_lock);
+
     if ( get_free_port(d) != 0 )
     {
         free_evtchn_bucket(d, d->evtchn);
@@ -1478,7 +1489,7 @@ int evtchn_destroy(struct domain *d)
 
     /* After this barrier no new event-channel allocations can occur. */
     BUG_ON(!d->is_dying);
-    spin_barrier(&d->event_lock);
+    rw_barrier(&d->event_lock);
 
     /* Close all existing event channels. */
     for ( i = d->valid_evtchns; --i; )
@@ -1536,13 +1547,13 @@ void evtchn_move_pirqs(struct vcpu *v)
     unsigned int port;
     struct evtchn *chn;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
     {
         chn = evtchn_from_port(d, port);
         pirq_set_affinity(d, chn->u.pirq.irq, mask);
     }
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 
@@ -1555,7 +1566,7 @@ static void domain_dump_evtchn_info(stru
            "Polling vCPUs: {%*pbl}\n"
            "    port [p/m/s]\n", d->domain_id, d->max_vcpus, d->poll_mask);
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
 
     for ( port = 1; port_is_valid(d, port); ++port )
     {
@@ -1602,7 +1613,7 @@ static void domain_dump_evtchn_info(stru
         }
     }
 
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 static void dump_evtchn_info(unsigned char key)
--- a/xen/common/event_fifo.c
+++ b/xen/common/event_fifo.c
@@ -561,7 +561,7 @@ int evtchn_fifo_init_control(struct evtc
     if ( offset & (8 - 1) )
         return -EINVAL;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     /*
      * If this is the first control block, setup an empty event array
@@ -593,13 +593,13 @@ int evtchn_fifo_init_control(struct evtc
     else
         rc = map_control_block(v, gfn, offset);
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 
  error:
     evtchn_fifo_destroy(d);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     return rc;
 }
 
@@ -652,9 +652,9 @@ int evtchn_fifo_expand_array(const struc
     if ( !d->evtchn_fifo )
         return -EOPNOTSUPP;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     rc = add_page_to_event_array(d, expand_array->array_gfn);
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     return rc;
 }
--- a/xen/drivers/passthrough/io.c
+++ b/xen/drivers/passthrough/io.c
@@ -105,7 +105,7 @@ static void pt_pirq_softirq_reset(struct
 {
     struct domain *d = pirq_dpci->dom;
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_write_locked(&d->event_lock));
 
     switch ( cmpxchg(&pirq_dpci->state, 1 << STATE_SCHED, 0) )
     {
@@ -162,7 +162,7 @@ static void pt_irq_time_out(void *data)
     const struct hvm_irq_dpci *dpci;
     const struct dev_intx_gsi_link *digl;
 
-    spin_lock(&irq_map->dom->event_lock);
+    write_lock(&irq_map->dom->event_lock);
 
     if ( irq_map->flags & HVM_IRQ_DPCI_IDENTITY_GSI )
     {
@@ -177,7 +177,7 @@ static void pt_irq_time_out(void *data)
         hvm_gsi_deassert(irq_map->dom, dpci_pirq(irq_map)->pirq);
         irq_map->flags |= HVM_IRQ_DPCI_EOI_LATCH;
         pt_irq_guest_eoi(irq_map->dom, irq_map, NULL);
-        spin_unlock(&irq_map->dom->event_lock);
+        write_unlock(&irq_map->dom->event_lock);
         return;
     }
 
@@ -185,7 +185,7 @@ static void pt_irq_time_out(void *data)
     if ( unlikely(!dpci) )
     {
         ASSERT_UNREACHABLE();
-        spin_unlock(&irq_map->dom->event_lock);
+        write_unlock(&irq_map->dom->event_lock);
         return;
     }
     list_for_each_entry ( digl, &irq_map->digl_list, list )
@@ -204,7 +204,7 @@ static void pt_irq_time_out(void *data)
 
     pt_pirq_iterate(irq_map->dom, pt_irq_guest_eoi, NULL);
 
-    spin_unlock(&irq_map->dom->event_lock);
+    write_unlock(&irq_map->dom->event_lock);
 }
 
 struct hvm_irq_dpci *domain_get_irq_dpci(const struct domain *d)
@@ -288,7 +288,7 @@ int pt_irq_create_bind(
         return -EINVAL;
 
  restart:
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     hvm_irq_dpci = domain_get_irq_dpci(d);
     if ( !hvm_irq_dpci && !is_hardware_domain(d) )
@@ -304,7 +304,7 @@ int pt_irq_create_bind(
         hvm_irq_dpci = xzalloc(struct hvm_irq_dpci);
         if ( hvm_irq_dpci == NULL )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -ENOMEM;
         }
         for ( i = 0; i < NR_HVM_DOMU_IRQS; i++ )
@@ -316,7 +316,7 @@ int pt_irq_create_bind(
     info = pirq_get_info(d, pirq);
     if ( !info )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -ENOMEM;
     }
     pirq_dpci = pirq_dpci(info);
@@ -331,7 +331,7 @@ int pt_irq_create_bind(
      */
     if ( pt_pirq_softirq_active(pirq_dpci) )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         cpu_relax();
         goto restart;
     }
@@ -389,7 +389,7 @@ int pt_irq_create_bind(
                 pirq_dpci->dom = NULL;
                 pirq_dpci->flags = 0;
                 pirq_cleanup_check(info, d);
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 return rc;
             }
         }
@@ -399,7 +399,7 @@ int pt_irq_create_bind(
 
             if ( (pirq_dpci->flags & mask) != mask )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 return -EBUSY;
             }
 
@@ -423,7 +423,7 @@ int pt_irq_create_bind(
 
         dest_vcpu_id = hvm_girq_dest_2_vcpu_id(d, dest, dest_mode);
         pirq_dpci->gmsi.dest_vcpu_id = dest_vcpu_id;
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
 
         pirq_dpci->gmsi.posted = false;
         vcpu = (dest_vcpu_id >= 0) ? d->vcpu[dest_vcpu_id] : NULL;
@@ -483,7 +483,7 @@ int pt_irq_create_bind(
 
             if ( !digl || !girq )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 xfree(girq);
                 xfree(digl);
                 return -ENOMEM;
@@ -510,7 +510,7 @@ int pt_irq_create_bind(
             if ( pt_irq_bind->irq_type != PT_IRQ_TYPE_PCI ||
                  pirq >= hvm_domain_irq(d)->nr_gsis )
             {
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
 
                 return -EINVAL;
             }
@@ -546,7 +546,7 @@ int pt_irq_create_bind(
 
                     if ( mask < 0 || trigger_mode < 0 )
                     {
-                        spin_unlock(&d->event_lock);
+                        write_unlock(&d->event_lock);
 
                         ASSERT_UNREACHABLE();
                         return -EINVAL;
@@ -594,14 +594,14 @@ int pt_irq_create_bind(
                 }
                 pirq_dpci->flags = 0;
                 pirq_cleanup_check(info, d);
-                spin_unlock(&d->event_lock);
+                write_unlock(&d->event_lock);
                 xfree(girq);
                 xfree(digl);
                 return rc;
             }
         }
 
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
 
         if ( iommu_verbose )
         {
@@ -619,7 +619,7 @@ int pt_irq_create_bind(
     }
 
     default:
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -EOPNOTSUPP;
     }
 
@@ -672,13 +672,13 @@ int pt_irq_destroy_bind(
         return -EOPNOTSUPP;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     hvm_irq_dpci = domain_get_irq_dpci(d);
 
     if ( !hvm_irq_dpci && !is_hardware_domain(d) )
     {
-        spin_unlock(&d->event_lock);
+        write_unlock(&d->event_lock);
         return -EINVAL;
     }
 
@@ -711,7 +711,7 @@ int pt_irq_destroy_bind(
 
         if ( girq )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return -EINVAL;
         }
 
@@ -755,7 +755,7 @@ int pt_irq_destroy_bind(
         pirq_cleanup_check(pirq, d);
     }
 
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 
     if ( what && iommu_verbose )
     {
@@ -799,7 +799,7 @@ int pt_pirq_iterate(struct domain *d,
     unsigned int pirq = 0, n, i;
     struct pirq *pirqs[8];
 
-    ASSERT(spin_is_locked(&d->event_lock));
+    ASSERT(rw_is_locked(&d->event_lock));
 
     do {
         n = radix_tree_gang_lookup(&d->pirq_tree, (void **)pirqs, pirq,
@@ -880,9 +880,9 @@ void hvm_dpci_msi_eoi(struct domain *d,
          (!hvm_domain_irq(d)->dpci && !is_hardware_domain(d)) )
        return;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
     pt_pirq_iterate(d, _hvm_dpci_msi_eoi, (void *)(long)vector);
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
 }
 
 static void hvm_dirq_assist(struct domain *d, struct hvm_pirq_dpci *pirq_dpci)
@@ -893,7 +893,7 @@ static void hvm_dirq_assist(struct domai
         return;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     if ( test_and_clear_bool(pirq_dpci->masked) )
     {
         struct pirq *pirq = dpci_pirq(pirq_dpci);
@@ -947,7 +947,7 @@ static void hvm_dirq_assist(struct domai
     }
 
  out:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 static void hvm_pirq_eoi(struct pirq *pirq,
@@ -1012,7 +1012,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
 
     if ( is_hardware_domain(d) )
     {
-        spin_lock(&d->event_lock);
+        write_lock(&d->event_lock);
         hvm_gsi_eoi(d, guest_gsi, ent);
         goto unlock;
     }
@@ -1023,7 +1023,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
         return;
     }
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     hvm_irq_dpci = domain_get_irq_dpci(d);
 
     if ( !hvm_irq_dpci )
@@ -1033,7 +1033,7 @@ void hvm_dpci_eoi(struct domain *d, unsi
         __hvm_dpci_eoi(d, girq, ent);
 
 unlock:
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
 
 /*
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -2,6 +2,7 @@
 #include <xen/irq.h>
 #include <xen/smp.h>
 #include <xen/time.h>
+#include <xen/rwlock.h>
 #include <xen/spinlock.h>
 #include <xen/guest_access.h>
 #include <xen/preempt.h>
@@ -334,6 +335,15 @@ void _spin_unlock_recursive(spinlock_t *
     }
 }
 
+void _rw_barrier(rwlock_t *lock)
+{
+    check_barrier(&lock->lock.debug);
+    smp_mb();
+    while ( _rw_is_locked(lock) )
+        arch_lock_relax();
+    smp_mb();
+}
+
 #ifdef CONFIG_DEBUG_LOCK_PROFILE
 
 struct lock_profile_anc {
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -883,7 +883,7 @@ static int pci_clean_dpci_irqs(struct do
     if ( !is_hvm_domain(d) )
         return 0;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
     hvm_irq_dpci = domain_get_irq_dpci(d);
     if ( hvm_irq_dpci != NULL )
     {
@@ -901,14 +901,14 @@ static int pci_clean_dpci_irqs(struct do
             ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
         if ( ret )
         {
-            spin_unlock(&d->event_lock);
+            write_unlock(&d->event_lock);
             return ret;
         }
 
         hvm_domain_irq(d)->dpci = NULL;
         free_hvm_irq_dpci(hvm_irq_dpci);
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
     return 0;
 }
 
--- a/xen/drivers/passthrough/vtd/x86/hvm.c
+++ b/xen/drivers/passthrough/vtd/x86/hvm.c
@@ -54,7 +54,7 @@ void hvm_dpci_isairq_eoi(struct domain *
     if ( !is_iommu_enabled(d) )
         return;
 
-    spin_lock(&d->event_lock);
+    write_lock(&d->event_lock);
 
     dpci = domain_get_irq_dpci(d);
 
@@ -63,5 +63,5 @@ void hvm_dpci_isairq_eoi(struct domain *
         /* Multiple mirq may be mapped to one isa irq */
         pt_pirq_iterate(d, _hvm_dpci_isairq_eoi, (void *)(long)isairq);
     }
-    spin_unlock(&d->event_lock);
+    write_unlock(&d->event_lock);
 }
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -237,6 +237,8 @@ static inline int _rw_is_write_locked(rw
     return (atomic_read(&lock->cnts) & _QW_WMASK) == _QW_LOCKED;
 }
 
+void _rw_barrier(rwlock_t *lock);
+
 #define read_lock(l)                  _read_lock(l)
 #define read_lock_irq(l)              _read_lock_irq(l)
 #define read_lock_irqsave(l, f)                                 \
@@ -266,6 +268,7 @@ static inline int _rw_is_write_locked(rw
 #define rw_is_locked(l)               _rw_is_locked(l)
 #define rw_is_write_locked(l)         _rw_is_write_locked(l)
 
+#define rw_barrier(l)                 _rw_barrier(l)
 
 typedef struct percpu_rwlock percpu_rwlock_t;
 
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -373,7 +373,7 @@ struct domain
     unsigned int     xen_evtchns;
     /* Port to resume from in evtchn_reset(), when in a continuation. */
     unsigned int     next_evtchn;
-    spinlock_t       event_lock;
+    rwlock_t         event_lock;
     const struct evtchn_port_ops *evtchn_port_ops;
     struct evtchn_fifo_domain *evtchn_fifo;
 
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -555,7 +555,7 @@ static int flask_get_peer_sid(struct xen
     struct evtchn *chn;
     struct domain_security_struct *dsec;
 
-    spin_lock(&d->event_lock);
+    read_lock(&d->event_lock);
 
     if ( !port_is_valid(d, arg->evtchn) )
         goto out;
@@ -573,7 +573,7 @@ static int flask_get_peer_sid(struct xen
     rv = 0;
 
  out:
-    spin_unlock(&d->event_lock);
+    read_unlock(&d->event_lock);
     return rv;
 }
 



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:13:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9675.25441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsOW-0002Ok-H3; Tue, 20 Oct 2020 14:13:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9675.25441; Tue, 20 Oct 2020 14:13:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsOW-0002Od-DK; Tue, 20 Oct 2020 14:13:44 +0000
Received: by outflank-mailman (input) for mailman id 9675;
 Tue, 20 Oct 2020 14:13:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsOU-0002OV-Vx
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:13:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6f4aeb2-f601-4a71-b4be-12c05831e4f9;
 Tue, 20 Oct 2020 14:13:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7BE94AF72;
 Tue, 20 Oct 2020 14:13:41 +0000 (UTC)
X-Inumbo-ID: f6f4aeb2-f601-4a71-b4be-12c05831e4f9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603203221;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Zoh/7Mty3v+n/dVbOdwhyCYcVSC+WmCqXrglut1Q6/8=;
	b=p4jvDGv3U00P7Soo1kqzkIqdMYRpUWl8spkt+u1lNH5V7V7JIhsCJ3/TZttd8JzUChuUao
	x8ZUDHGoKSl705K4LE9nPyTZvPL1wKeffrIMt+nsc38VoGb0NKkQCVZ1ug101RpGndhTIC
	KHfE4gE92/RlzDdq7+3O4mif5goXtlA=
Subject: [PATCH RFC v2 8/8] evtchn: don't call Xen consumer callback with
 per-channel lock held
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Message-ID: <247f0d77-9447-47d0-4fa6-8e17b3e6a6de@suse.com>
Date: Tue, 20 Oct 2020 16:13:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While this doesn't appear to cause any problems right now, the lock
order implications of holding the lock can be very difficult to follow
(and easy to violate unknowingly). The present callbacks don't have any
need for the lock to be held (and no such callback should).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TODO: vm_event_disable() frees the structures used by respective
      callbacks - need to either use call_rcu() for freeing, or maintain
      a count of in-progress calls, for evtchn_close() to wait to drop
      to zero before dropping the lock / returning.
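
The pattern applied in the hunk below - snapshot whatever the callback needs while the lock is held, take a reference that keeps the target alive, drop the lock, then invoke the callback - can be sketched in isolation as follows (a toy model with illustrative names, not Xen's actual types; the plain mutex stands in for the per-channel spinlock, and keeping the channel object valid stands in for rcu_lock_domain()):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

typedef void (*toy_notification_fn)(int port);

struct toy_channel {
    pthread_mutex_t lock;
    toy_notification_fn fn;   /* consumer callback, guarded by lock */
    int port;
};

/* Invoke the consumer callback without holding the channel lock:
 * copy the function pointer and port into locals under the lock,
 * unlock, then call.  The callback is then free to take other
 * locks without creating a lock-order dependency on ours. */
static void toy_notify(struct toy_channel *ch)
{
    toy_notification_fn fn;
    int port;

    pthread_mutex_lock(&ch->lock);
    fn = ch->fn;
    port = ch->port;
    pthread_mutex_unlock(&ch->lock);

    if (fn != NULL)
        fn(port);   /* lock no longer held here */
}
```

The cost, as the TODO above notes, is that nothing now prevents the callback's backing structures from being torn down between the unlock and the call; some liveness guarantee (RCU-deferred freeing, or a count of in-progress calls) has to replace the one the lock used to provide.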

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -763,9 +763,18 @@ int evtchn_send(struct domain *ld, unsig
         rport = lchn->u.interdomain.remote_port;
         rchn  = evtchn_from_port(rd, rport);
         if ( consumer_is_xen(rchn) )
-            xen_notification_fn(rchn)(rd->vcpu[rchn->notify_vcpu_id], rport);
-        else
-            evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
+        {
+            /* Don't keep holding the lock for the call below. */
+            xen_event_channel_notification_t fn = xen_notification_fn(rchn);
+            struct vcpu *rv = rd->vcpu[rchn->notify_vcpu_id];
+
+            rcu_lock_domain(rd);
+            spin_unlock_irqrestore(&lchn->lock, flags);
+            fn(rv, rport);
+            rcu_unlock_domain(rd);
+            return 0;
+        }
+        evtchn_port_set_pending(rd, rchn->notify_vcpu_id, rchn);
         break;
     case ECS_IPI:
         evtchn_port_set_pending(ld, lchn->notify_vcpu_id, lchn);



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 14:29:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 14:29:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9681.25453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsdM-0003Tn-Si; Tue, 20 Oct 2020 14:29:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9681.25453; Tue, 20 Oct 2020 14:29:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUsdM-0003Tg-Pd; Tue, 20 Oct 2020 14:29:04 +0000
Received: by outflank-mailman (input) for mailman id 9681;
 Tue, 20 Oct 2020 14:29:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUsdL-0003Tb-Tl
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 14:29:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9a9a76a-ef6e-4835-bdc5-abc1f6d6603d;
 Tue, 20 Oct 2020 14:29:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0E1B6AB8F;
 Tue, 20 Oct 2020 14:29:02 +0000 (UTC)
X-Inumbo-ID: e9a9a76a-ef6e-4835-bdc5-abc1f6d6603d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603204142;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tGrb4u8FwGicwu2LfELLCD03J3dVGpcxqGyc+0ewGWw=;
	b=H/9ygFZohciQ8nMONaUKItTlsUVhPsWnklY/14C5GLyIeNTeBBQNzEEUktj8pKYQbuh2Hv
	Mc+7KrAGZnBnsDcBXLPtI7EorhWjaTmBFZG58r/5iZdSihxoFQOYmsjVc4nySyO05rPECC
	BKrbXPFPTUEDYv+OnlOqiVoxTzoaXD4=
Subject: Re: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Paul Durrant <paul@xen.org>
References: <8a4c4486-cf27-66a0-5ff9-5329277deccf@suse.com>
 <c90b70f7-e52e-405c-adb4-1303d7d1c009@citrix.com>
 <58e33283-a883-3bde-c697-8605586abace@suse.com>
Message-ID: <48c0a58b-9a3c-d180-386c-7166986dd307@suse.com>
Date: Tue, 20 Oct 2020 16:29:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <58e33283-a883-3bde-c697-8605586abace@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.05.2020 12:30, Jan Beulich wrote:
> On 29.05.2020 12:07, Andrew Cooper wrote:
>> On 29/05/2020 10:34, Jan Beulich wrote:
>>> While the behavior of ignoring this option without FLASK support was
>>> properly documented, it is still somewhat surprising for someone using
>>> this option to then _not_ get the assumed security. Add a 2nd
>>> handler for the command line option for the XSM_FLASK=n case, and
>>> invoke panic() when the option is specified (and not subsequently
>>> overridden by "flask=disabled").
>>>
>>> Suggested-by: Ian Jackson <ian.jackson@citrix.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> I'm very tempted to nack this outright, lest I remind both of you of the
>> total disaster that was XSA-9, and the subsequent retraction of the code
>> which did exactly this.
>>
>> If you want to do something like this, prohibit creating guests so the
>> administrator can still log in and unbreak,
> 
> Unbreaking is as easy as removing the command line option, or
> adding "flask=disable" at the end of the command line.
> 
> Preventing guest creation is another option, but it is complicated
> by the late-hwdom feature we still have - to achieve what you
> want, we'd have to permit creating that one further domain.
> Dom0less would perhaps also need special treatment (and there
> I'm not sure we'd know which of the domains we are supposed to
> allow to be created, and which not).

Furthermore, the policy that would normally be loaded might
constrain Dom0 itself as well, so allowing Dom0 to boot is not
necessarily the right thing to do. Hence, while I agree overall
that generalizing x86's "allow_unsafe" may be helpful, it's not
what I would want to use here. Instead, together with ...

>> instead of having it
>> enter a reboot loop with no output.  The console isn't established
>> this early, so none of this text makes it out onto VGA/serial.
> 
> You didn't look at the patch then: I'm intentionally _not_
> panic()-ing from the command line parsing function, but from
> an initcall. Both VGA and serial have been set up by that time.
> (I was in fact considering pulling it a little earlier, into
> a pre-SMP initcall.)

... this, I think the patch wants to be re-considered as is.

Jan
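[Editorial note: purely as an illustration of the approach discussed above -- record the option at parse time, then decide about panic()-ing later, from an initcall run once the console is up -- here is a minimal, self-contained C sketch. All names are hypothetical stand-ins, not Xen's actual custom_param()/initcall machinery.]

```c
#include <string.h>

/* Hypothetical second "flask=" handler for the XSM_FLASK=n build: it only
 * records whether enforcing was requested; a later "flask=disabled" on the
 * command line overrides an earlier "flask=enforcing". */
static int flask_enforcing_requested;

void parse_flask_param(const char *val)
{
    if ( !strcmp(val, "enforcing") )
        flask_enforcing_requested = 1;
    else if ( !strcmp(val, "disabled") )
        flask_enforcing_requested = 0;   /* later option wins */
}

/* Simulated (pre-SMP) initcall: by this point VGA/serial are set up, so a
 * panic message would actually be visible.  Returns nonzero when the
 * initcall would invoke panic(). */
int flask_initcall_should_panic(void)
{
    return flask_enforcing_requested;
}
```

In Xen itself this second handler would only be compiled when XSM_FLASK=n, and unbreaking would remain as easy as dropping the option or appending "flask=disabled".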


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:00:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:00:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9686.25470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUt7k-0006wz-Ea; Tue, 20 Oct 2020 15:00:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9686.25470; Tue, 20 Oct 2020 15:00:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUt7k-0006ws-Bi; Tue, 20 Oct 2020 15:00:28 +0000
Received: by outflank-mailman (input) for mailman id 9686;
 Tue, 20 Oct 2020 15:00:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TRnX=D3=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kUt7i-0006wn-Ad
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:00:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5df2f849-06ce-4bfd-ad9e-2d2670fcbc42;
 Tue, 20 Oct 2020 15:00:25 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUt7c-0005MV-El; Tue, 20 Oct 2020 15:00:20 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUt7c-0000q0-3f; Tue, 20 Oct 2020 15:00:20 +0000
X-Inumbo-ID: 5df2f849-06ce-4bfd-ad9e-2d2670fcbc42
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=pml2VH/aJBySECuBJctkn8YHzhRubM/5Ho+AdXysPyE=; b=5gBAP7h17aUashmZF0D0G6jJwt
	9WzEgi9AOcyvtcLNiY5zfzu5xQstvNrH4Q89raW3wCWx3YxGbhK6fK2UV1CieBTsaQCil5RlcMuAU
	q7MeeASZrXM/0UwPLChEqFh15eWFJBtxgU9v515yzs2eTeEdW9P/flHDtbZ09EnIuaak=;
Subject: Re: [PATCH] IOMMU: avoid double flushing in shared page table case
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Paul Durrant <paul@xen.org>
References: <e54f4fbb-92e2-9785-8648-596c615213a2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <01a5840e-3250-246c-8d38-29a65d4937ea@xen.org>
Date: Tue, 20 Oct 2020 16:00:17 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <e54f4fbb-92e2-9785-8648-596c615213a2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 20/10/2020 14:52, Jan Beulich wrote:
> While the flush coalescing optimization has been helping the non-shared
> case, it has actually led to double flushes in the shared case (which
> ought to be the more common one nowadays at least): Once from
> *_set_entry() and a second time up the call tree from wherever the
> overriding flag gets played with. In alignment with XSA-346 suppress
> flushing in this case.
> 
> Similarly avoid excessive setting of IOMMU_FLUSHF_added on the batched
> flushes: "idx" hasn't had a new mapping added for it.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: The Arm part really is just for completeness (and hence could also
>       be dropped) - the affected mapping spaces aren't currently
>       supported there.

As I may have pointed out in the past, there are many ways to screw
things up when using iommu_dont_flush_iotlb.

So I would rather not introduce any usage on Arm until we see a use-case.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:06:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:06:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9689.25482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtD2-0007BZ-34; Tue, 20 Oct 2020 15:05:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9689.25482; Tue, 20 Oct 2020 15:05:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtD2-0007BS-0E; Tue, 20 Oct 2020 15:05:56 +0000
Received: by outflank-mailman (input) for mailman id 9689;
 Tue, 20 Oct 2020 15:05:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUtD1-0007BN-Ib
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:05:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 230a6fe0-aef5-4c56-a8ff-67cc8404a2cc;
 Tue, 20 Oct 2020 15:05:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CCCA4AD52;
 Tue, 20 Oct 2020 15:05:53 +0000 (UTC)
X-Inumbo-ID: 230a6fe0-aef5-4c56-a8ff-67cc8404a2cc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603206353;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oVqXOwluPVtwIxnZpXjiQN1OuEFTkA7XQpwqBQ4VJZY=;
	b=UkE3XIN0ss6kNgDzOf9CddCHiEv2CPCt6W1cigrUoh5pW2tWa8ttqjjyDPPlgWyDKA52Vb
	3tpsSe4plm1cMIvp5lfOABTG6S/xsCwYf0sXjXEij7VWxMEuHqFrfpFMZlmi+qhCq+WU6T
	SVX8OcVNafKlAk9hFVTTkfvUum5IsQE=
Subject: Re: [PATCH] IOMMU: avoid double flushing in shared page table case
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Paul Durrant <paul@xen.org>
References: <e54f4fbb-92e2-9785-8648-596c615213a2@suse.com>
 <01a5840e-3250-246c-8d38-29a65d4937ea@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <91fc14a3-6bc0-5929-2087-e4f57901fe14@suse.com>
Date: Tue, 20 Oct 2020 17:05:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <01a5840e-3250-246c-8d38-29a65d4937ea@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.2020 17:00, Julien Grall wrote:
> On 20/10/2020 14:52, Jan Beulich wrote:
>> While the flush coalescing optimization has been helping the non-shared
>> case, it has actually led to double flushes in the shared case (which
>> ought to be the more common one nowadays at least): Once from
>> *_set_entry() and a second time up the call tree from wherever the
>> overriding flag gets played with. In alignment with XSA-346 suppress
>> flushing in this case.
>>
>> Similarly avoid excessive setting of IOMMU_FLUSHF_added on the batched
>> flushes: "idx" hasn't had a new mapping added for it.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> TBD: The Arm part really is just for completeness (and hence could also
>>       be dropped) - the affected mapping spaces aren't currently
>>       supported there.
> 
> As I may have pointed out in the past, there are many ways to screw
> things up when using iommu_dont_flush_iotlb.
> 
> So I would rather not introduce any usage on Arm until we see a use-case.

"Usage" to me would mean a path actually setting the flag.
What I'm adding here, basically as a precautionary measure,
is a check of the flag. Does your use of "usage" imply you
don't want that either? Just to be sure: I'm okay with dropping
the Arm part.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:22:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:22:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9692.25495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtSk-0000Uv-IQ; Tue, 20 Oct 2020 15:22:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9692.25495; Tue, 20 Oct 2020 15:22:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtSk-0000Uo-Eg; Tue, 20 Oct 2020 15:22:10 +0000
Received: by outflank-mailman (input) for mailman id 9692;
 Tue, 20 Oct 2020 15:22:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUtSj-0000Uj-6o
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:22:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fee82aca-fb22-44ac-a554-6f6e0e808967;
 Tue, 20 Oct 2020 15:22:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUtSg-0005nP-Mq; Tue, 20 Oct 2020 15:22:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUtSg-0007Tg-FG; Tue, 20 Oct 2020 15:22:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUtSg-0000TU-Ej; Tue, 20 Oct 2020 15:22:06 +0000
X-Inumbo-ID: fee82aca-fb22-44ac-a554-6f6e0e808967
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nlGxOAzNKFZ0MFLxj2sei+GEnMQv5AlvwE+XCK/a/SM=; b=gexielMFTzolycm5ptZtQfv7Wn
	BhiuvkGlbhCqsTKH4S03G5pNSlHqCgGcHDxiwii8ZEi02DvN9M6QynzXhUh7GPd8cMOjYXeMXTl/m
	V+hfTTCSNcEdeElJGfuOfFvLs1giuhdbdQ1AD0uZWAEy4rqc73b2XINp4XvqMn8N0gXs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156029-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156029: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0514a3a25fb9ebff5d75cc8f00a9229385300858
X-Osstest-Versions-That:
    xen=a7f0831e58bf4681d710e9a029644b6fa07b7cb0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 15:22:06 +0000

flight 156029 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156029/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0514a3a25fb9ebff5d75cc8f00a9229385300858
baseline version:
 xen                  a7f0831e58bf4681d710e9a029644b6fa07b7cb0

Last test of basis   156018  2020-10-20 07:00:28 Z    0 days
Testing same since   156029  2020-10-20 13:01:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a7f0831e58..0514a3a25f  0514a3a25fb9ebff5d75cc8f00a9229385300858 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:24:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:24:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9696.25510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtV4-0000ig-0Z; Tue, 20 Oct 2020 15:24:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9696.25510; Tue, 20 Oct 2020 15:24:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtV3-0000iZ-Sx; Tue, 20 Oct 2020 15:24:33 +0000
Received: by outflank-mailman (input) for mailman id 9696;
 Tue, 20 Oct 2020 15:24:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yF9C=D3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUtV2-0000iT-BG
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:24:32 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28ede498-bd71-45bf-90ef-56d3858bae7d;
 Tue, 20 Oct 2020 15:24:31 +0000 (UTC)
X-Inumbo-ID: 28ede498-bd71-45bf-90ef-56d3858bae7d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603207471;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=30VkK78JtXNGwhUSVQX7oNytWgBsewZox5IUkixUCTs=;
  b=e+o/x5VEfcI8ZhNJGLz9reDTh7IPk3EiSavfde2wz8ZN+Z6+fFSjyppu
   OKdp0zFWGE8iAYpQoo8WYjDfWvslQziiTVtsTj167dYvC6RvkGFUzkB+7
   JtzxxAYHVACPJzwABst5z5DDPmuCcp7ZNeLvwKprvc+orNjPG0iRQKh2B
   0=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 9Azn97UajB6MH1YkPTf0RwoEw/w5nb5V/YWz/UT3H5Db3EOmmI9yPVfzvDY+LWfdU5iuEM1sVe
 q6kXwROGKKpP6wxcAsdEIdtMBARiFJBXIhped5lg/e3LXTaDzE5jbpCzTA1+Bpd7ufdwzDB47s
 RSJ7mkB/l9GuU6oTgTvBfnr9EwA25VEiHoKruAn/fzPIPVNgd/TXSlKIty9/nPGhU6uUCSB1uu
 yfvMCAPGXnvyvr+JLjxFL50V4Huzxn9GQY857qCEz4DeeQglYSBNKKA3dJI/xfPV8kPmO16Hyl
 mno=
X-SBRS: None
X-MesageID: 29374181
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,397,1596513600"; 
   d="scan'208";a="29374181"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/pv: Flush TLB in response to paging structure changes
Date: Tue, 20 Oct 2020 16:24:05 +0100
Message-ID: <20201020152405.26892-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
is safe from Xen's point of view (as the update only affects guest mappings),
and the guest is required to flush suitably after making updates.

However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
writeable pagetables, etc.) is an implementation detail outside of the
API/ABI.

Changes in the paging structure require invalidations in the linear pagetable
range for subsequent accesses into the linear pagetables to access non-stale
mappings.  Xen must provide suitable flushing to prevent intermixed guest
actions from accidentally accessing/modifying the wrong pagetable.

For all L2 and higher modifications, flush the full TLB.  (This could in
principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
mechanism exists in practice.)

As this combines with sync_guest for XPTI L4 "shadowing", replace the
sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
now always needs to flush; there is no point trying to exclude the current CPU
from the flush mask.  Use pt_owner->dirty_cpumask directly.

This is XSA-286.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

A couple of minor points.

 * PV guests can create global mappings.  I can't see any safe way to relax
   FLUSH_TLB_GLOBAL to just FLUSH_TLB.

 * Performance tests are still ongoing, but so far it is faring better than the
   embargoed alternative.
---
 xen/arch/x86/mm.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 918ee2bbe3..a6a7fcb56c 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3883,11 +3883,10 @@ long do_mmu_update(
     void *va = NULL;
     unsigned long gpfn, gmfn;
     struct page_info *page;
-    unsigned int cmd, i = 0, done = 0, pt_dom;
+    unsigned int cmd, i = 0, done = 0, pt_dom, flush_flags = 0;
     struct vcpu *curr = current, *v = curr;
     struct domain *d = v->domain, *pt_owner = d, *pg_owner;
     mfn_t map_mfn = INVALID_MFN, mfn;
-    bool sync_guest = false;
     uint32_t xsm_needed = 0;
     uint32_t xsm_checked = 0;
     int rc = put_old_guest_table(curr);
@@ -4037,6 +4036,8 @@ long do_mmu_update(
                         break;
                     rc = mod_l2_entry(va, l2e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
+                    if ( !rc )
+                        flush_flags |= FLUSH_TLB_GLOBAL;
                     break;
 
                 case PGT_l3_page_table:
@@ -4044,6 +4045,8 @@ long do_mmu_update(
                         break;
                     rc = mod_l3_entry(va, l3e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
+                    if ( !rc )
+                        flush_flags |= FLUSH_TLB_GLOBAL;
                     break;
 
                 case PGT_l4_page_table:
@@ -4051,6 +4054,8 @@ long do_mmu_update(
                         break;
                     rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
+                    if ( !rc )
+                        flush_flags |= FLUSH_TLB_GLOBAL;
                     if ( !rc && pt_owner->arch.pv.xpti )
                     {
                         bool local_in_use = false;
@@ -4071,7 +4076,7 @@ long do_mmu_update(
                              (1 + !!(page->u.inuse.type_info & PGT_pinned) +
                               mfn_eq(pagetable_get_mfn(curr->arch.guest_table_user),
                                      mfn) + local_in_use) )
-                            sync_guest = true;
+                            flush_flags |= FLUSH_ROOT_PGTBL;
                     }
                     break;
 
@@ -4173,19 +4178,13 @@ long do_mmu_update(
     if ( va )
         unmap_domain_page(va);
 
-    if ( sync_guest )
-    {
-        /*
-         * Force other vCPU-s of the affected guest to pick up L4 entry
-         * changes (if any).
-         */
-        unsigned int cpu = smp_processor_id();
-        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
-
-        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
-        if ( !cpumask_empty(mask) )
-            flush_mask(mask, FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL);
-    }
+    /*
+     * Flush TLBs if an L2 or higher entry was changed (this invalidates
+     * the structure of the linear pagetables), or if an L4 in use by
+     * other CPUs was modified (the XPTI copy of the table needs to be
+     * resynced).
+     */
+    if ( flush_flags )
+        flush_mask(pt_owner->dirty_cpumask, flush_flags);
 
     perfc_add(num_page_updates, i);
 
-- 
2.11.0
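The flush-coalescing pattern in the patch above — accumulate requirements per successful update, then issue one flush covering everything — can be sketched in a minimal, self-contained model. The names (`process_updates`, the counter, the two-input simplification) are hypothetical stand-ins, not Xen's actual interfaces:

```c
#include <assert.h>

/* Hypothetical sketch of the flush-coalescing pattern: each successful
 * update ORs its requirement into flush_flags, and a single flush is
 * issued at the end covering all accumulated requirements. */
#define FLUSH_TLB_GLOBAL  (1u << 0)
#define FLUSH_ROOT_PGTBL  (1u << 1)

static unsigned int flushes_issued;

/* Stand-in for flush_mask(): counts invocations instead of flushing. */
static void flush_mask(unsigned int flags)
{
    if ( flags )
        flushes_issued++;    /* one flush, whatever accumulated */
}

static unsigned int process_updates(int l2_changed, int l4_shared)
{
    unsigned int flush_flags = 0;

    if ( l2_changed )        /* L2+ change invalidates linear pagetables */
        flush_flags |= FLUSH_TLB_GLOBAL;
    if ( l4_shared )         /* L4 in use elsewhere: resync XPTI copy */
        flush_flags |= FLUSH_ROOT_PGTBL;

    flush_mask(flush_flags); /* single flush at the end */
    return flush_flags;
}
```

Two updates with different requirements still cost one flush, which is the double-flush avoidance the patch is after.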



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:26:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:26:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9699.25522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtWf-0000rB-Gp; Tue, 20 Oct 2020 15:26:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9699.25522; Tue, 20 Oct 2020 15:26:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtWf-0000r4-Dc; Tue, 20 Oct 2020 15:26:13 +0000
Received: by outflank-mailman (input) for mailman id 9699;
 Tue, 20 Oct 2020 15:26:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TRnX=D3=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kUtWe-0000qz-Hs
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:26:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4c0f42e5-9e30-464b-a217-29fe4522ad1b;
 Tue, 20 Oct 2020 15:26:11 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUtWY-0005t2-V5; Tue, 20 Oct 2020 15:26:06 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUtWY-0002ok-N0; Tue, 20 Oct 2020 15:26:06 +0000
X-Inumbo-ID: 4c0f42e5-9e30-464b-a217-29fe4522ad1b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4c0f42e5-9e30-464b-a217-29fe4522ad1b;
	Tue, 20 Oct 2020 15:26:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=F9G/pbmQAh/X7hBMDJhhroZkSRlVG49Ud+NSH7WAp/k=; b=lBAEOn7EB87TB72dviHxeU7kdQ
	RbOoLoC26/PI4QDynGf/vzpySh/gPZSpEg5EZ4f8ke9EDH2giM86VM2PX1oby553zuc//oyNcqhDQ
	ACjDnGJuBmZR5D0kKBF1p+a+VzzJZEH302DcrSuLqhQ0CMsZdOMLEzfABSpXXzKE2sy0=;
Subject: Re: [PATCH] IOMMU: avoid double flushing in shared page table case
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Paul Durrant <paul@xen.org>
References: <e54f4fbb-92e2-9785-8648-596c615213a2@suse.com>
 <01a5840e-3250-246c-8d38-29a65d4937ea@xen.org>
 <91fc14a3-6bc0-5929-2087-e4f57901fe14@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <147ab40f-9ee7-a43e-e40f-7620fd9c26ac@xen.org>
Date: Tue, 20 Oct 2020 16:26:04 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <91fc14a3-6bc0-5929-2087-e4f57901fe14@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 20/10/2020 16:05, Jan Beulich wrote:
> On 20.10.2020 17:00, Julien Grall wrote:
>> On 20/10/2020 14:52, Jan Beulich wrote:
>>> While the flush coalescing optimization has been helping the non-shared
>>> case, it has actually lead to double flushes in the shared case (which
>>> ought to be the more common one nowadays at least): Once from
>>> *_set_entry() and a second time up the call tree from wherever the
>>> overriding flag gets played with. In alignment with XSA-346 suppress
>>> flushing in this case.
>>>
>>> Similarly avoid excessive setting of IOMMU_FLUSHF_added on the batched
>>> flushes: "idx" hasn't been added a new mapping for.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> TBD: The Arm part really is just for completeness (and hence could also
>>>        be dropped) - the affected mapping spaces aren't currently
>>>        supported there.
>>
>> As I may I have pointed out in the past, there are many ways to screw
>> things up when using iommu_dont_flush_iotlb.
>>
>> So I would rather not introduce any usage on Arm until we see a use-case.
> 
> "Usage" to me would mean a path actually setting the flag.
> What I'm adding here, basically as a precautionary measure,
> is a check of the flag.

The code would always be safe without checking the flag (albeit doing a 
pointless flush). I wouldn't say the same if we check the flag, because 
it is not correct to set it everywhere.

> Does your use of "usage" imply you
> don't want that either? Just to be sure; I'm okay to drop
> the Arm part.
That's correct, I don't want this check to be present until there is a 
user. Only then can we assess whether this is the right approach for Arm.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:26:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:26:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9701.25534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtX5-0000xT-PW; Tue, 20 Oct 2020 15:26:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9701.25534; Tue, 20 Oct 2020 15:26:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtX5-0000xM-MO; Tue, 20 Oct 2020 15:26:39 +0000
Received: by outflank-mailman (input) for mailman id 9701;
 Tue, 20 Oct 2020 15:26:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dlxO=D3=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kUtX4-0000xF-LI
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:26:38 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8ffb76b9-003d-4512-a441-0b2e3cb691fd;
 Tue, 20 Oct 2020 15:26:33 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8B4501FB;
 Tue, 20 Oct 2020 08:26:33 -0700 (PDT)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 17ED93F66B;
 Tue, 20 Oct 2020 08:26:31 -0700 (PDT)
X-Inumbo-ID: 8ffb76b9-003d-4512-a441-0b2e3cb691fd
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Date: Tue, 20 Oct 2020 16:25:43 +0100
Message-Id: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1

Add support for ARM architected SMMUv3 implementations. It is based on
the Linux SMMUv3 driver.

Major differences from the Linux driver are as follows:
1. Only Stage-2 translation is supported, whereas the Linux driver
   supports both Stage-1 and Stage-2 translations.
2. The P2M page table is used instead of creating a separate one, as
   SMMUv3 has the capability to share page tables with the CPU.
3. Tasklets are used in place of Linux's threaded IRQs for event queue
   and priority queue IRQ handling.
4. The latest version of the Linux SMMUv3 code implements the command
   queue access functions using atomic operations implemented in Linux.
   The atomic functions used by the command queue access functions are
   not implemented in Xen, so we decided to port the earlier version of
   the code. Once proper atomic operations are available in Xen, the
   driver can be updated.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/Kconfig       |   10 +
 xen/drivers/passthrough/arm/Makefile  |    1 +
 xen/drivers/passthrough/arm/smmu-v3.c | 2847 +++++++++++++++++++++++++
 3 files changed, 2858 insertions(+)
 create mode 100644 xen/drivers/passthrough/arm/smmu-v3.c

diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 0036007ec4..5b71c59f47 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -13,6 +13,16 @@ config ARM_SMMU
 	  Say Y here if your SoC includes an IOMMU device implementing the
 	  ARM SMMU architecture.
 
+config ARM_SMMU_V3
+	bool "ARM Ltd. System MMU Version 3 (SMMUv3) Support" if EXPERT
+	depends on ARM_64
+	---help---
+	 Support for implementations of the ARM System MMU architecture
+	 version 3.
+
+	 Say Y here if your system includes an IOMMU device implementing
+	 the ARM SMMUv3 architecture.
+
 config IPMMU_VMSA
 	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs"
 	depends on ARM_64
diff --git a/xen/drivers/passthrough/arm/Makefile b/xen/drivers/passthrough/arm/Makefile
index fcd918ea3e..c5fb3b58a5 100644
--- a/xen/drivers/passthrough/arm/Makefile
+++ b/xen/drivers/passthrough/arm/Makefile
@@ -1,3 +1,4 @@
 obj-y += iommu.o iommu_helpers.o iommu_fwspec.o
 obj-$(CONFIG_ARM_SMMU) += smmu.o
 obj-$(CONFIG_IPMMU_VMSA) += ipmmu-vmsa.o
+obj-$(CONFIG_ARM_SMMU_V3) += smmu-v3.o
diff --git a/xen/drivers/passthrough/arm/smmu-v3.c b/xen/drivers/passthrough/arm/smmu-v3.c
new file mode 100644
index 0000000000..d6d26ac7af
--- /dev/null
+++ b/xen/drivers/passthrough/arm/smmu-v3.c
@@ -0,0 +1,2847 @@
+/*
+ * IOMMU API for ARM architected SMMUv3 implementations.
+ *
+ * Based on Linux's SMMUv3 driver:
+ *    drivers/iommu/arm-smmu-v3.c
+ *    commit: 7c288a5b27934281d9ea8b5807bc727268b7001a
+ * and Xen's SMMU driver:
+ *    xen/drivers/passthrough/arm/smmu.c
+ *
+ * Copyright (C) 2015 ARM Limited Will Deacon <will.deacon@arm.com>
+ *
+ * Copyright (C) 2020 Arm Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/acpi.h>
+#include <xen/config.h>
+#include <xen/delay.h>
+#include <xen/errno.h>
+#include <xen/err.h>
+#include <xen/irq.h>
+#include <xen/lib.h>
+#include <xen/list.h>
+#include <xen/mm.h>
+#include <xen/rbtree.h>
+#include <xen/sched.h>
+#include <xen/sizes.h>
+#include <xen/vmap.h>
+#include <asm/atomic.h>
+#include <asm/device.h>
+#include <asm/io.h>
+#include <asm/platform.h>
+#include <asm/iommu_fwspec.h>
+
+/* Linux compatibility functions. */
+
+/* Device logger functions */
+#define dev_name(dev) dt_node_full_name(dev->of_node)
+#define dev_dbg(dev, fmt, ...)      \
+    printk(XENLOG_DEBUG "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_notice(dev, fmt, ...)   \
+    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_warn(dev, fmt, ...)     \
+    printk(XENLOG_WARNING "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_err(dev, fmt, ...)      \
+    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_info(dev, fmt, ...)     \
+    printk(XENLOG_INFO "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+#define dev_err_ratelimited(dev, fmt, ...)      \
+    printk(XENLOG_ERR "SMMUv3: %s: " fmt, dev_name(dev), ## __VA_ARGS__)
+
+/*
+ * Periodically poll an address, waiting sleep_us microseconds between
+ * reads, until a condition is met or a timeout occurs.
+ */
+#define readx_poll_timeout(op, addr, val, cond, sleep_us, timeout_us) \
+({ \
+     s_time_t deadline = NOW() + MICROSECS(timeout_us); \
+     for (;;) { \
+        (val) = op(addr); \
+        if (cond) \
+            break; \
+        if (NOW() > deadline) { \
+            (val) = op(addr); \
+            break; \
+        } \
+        udelay(sleep_us); \
+     } \
+     (cond) ? 0 : -ETIMEDOUT; \
+})
+
+#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
+    readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
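The polling loop above can be modelled outside Xen. Below is a minimal, self-contained sketch of the same semantics, with a bounded attempt count standing in for the `NOW()`/`udelay()` deadline; `poll_until_set` and its parameters are hypothetical names, not part of the driver:

```c
#include <errno.h>

/* Model of the readx_poll_timeout() pattern: repeatedly read a
 * "register" until a condition holds or the deadline passes, with one
 * final read after the deadline, as the macro does. */
static int poll_until_set(const unsigned int *reg, unsigned int bit,
                          unsigned int max_attempts)
{
    unsigned int attempts;

    for ( attempts = 0; attempts < max_attempts; attempts++ )
    {
        if ( *reg & bit )        /* condition met: success */
            return 0;
    }

    /* Deadline expired: one last read decides the outcome. */
    return (*reg & bit) ? 0 : -ETIMEDOUT;
}
```

The final re-read matters in the real macro: the condition may become true between the last in-loop read and the deadline check.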
+
+#define FIELD_PREP(_mask, _val)         \
+    (((typeof(_mask))(_val) << (__builtin_ffsll(_mask) - 1)) & (_mask))
+
+#define FIELD_GET(_mask, _reg)          \
+    (typeof(_mask))(((_reg) & (_mask)) >> (__builtin_ffsll(_mask) - 1))
+
+/*
+ * Helpers for DMA allocation. Only the function name is reused for
+ * porting purposes; these allocations are not managed allocations.
+ */
+
+static void *dmam_alloc_coherent(size_t size, paddr_t *dma_handle)
+{
+    void *vaddr;
+    unsigned long alignment = size;
+
+    /*
+     * _xzalloc requires the alignment to be a power of two, i.e.
+     * (align & (align - 1)) == 0. Most allocations in the SMMU code
+     * should pass a suitable size. In case this is not true, print a
+     * warning and align to the size of a (void *).
+     */
+    if ( size & (size - 1) )
+    {
+        printk(XENLOG_WARNING "SMMUv3: Fixing alignment for the DMA buffer\n");
+        alignment = sizeof(void *);
+    }
+
+    vaddr = _xzalloc(size, alignment);
+    if ( !vaddr )
+    {
+        printk(XENLOG_ERR "SMMUv3: DMA allocation failed\n");
+        return NULL;
+    }
+
+    *dma_handle = virt_to_maddr(vaddr);
+
+    return vaddr;
+}
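The `size & (size - 1)` test used in `dmam_alloc_coherent()` above is the standard power-of-two check: subtracting one flips all bits below the lowest set bit, so the AND clears that bit only when exactly one bit was set. A tiny sketch (the helper name is illustrative, not from the driver):

```c
/* True iff size has exactly one bit set, i.e. is a power of two.
 * Zero is excluded explicitly, since 0 & (0 - 1) is also 0. */
static int is_power_of_two(unsigned long size)
{
    return size != 0 && (size & (size - 1)) == 0;
}
```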
+
+/* Xen specific code. */
+struct iommu_domain {
+    /* Runtime SMMU configuration for this iommu_domain */
+    atomic_t ref;
+    /*
+     * Used to link iommu_domain contexts belonging to the same domain.
+     * There is at least one per SMMU used by the domain.
+     */
+    struct list_head    list;
+};
+
+/* Describes information required for a Xen domain */
+struct arm_smmu_xen_domain {
+    spinlock_t      lock;
+
+    /* List of iommu domains associated to this domain */
+    struct list_head    contexts;
+};
+
+/*
+ * Information about each device stored in dev->archdata.iommu
+ * The dev->archdata.iommu stores the iommu_domain (runtime configuration of
+ * the SMMU).
+ */
+struct arm_smmu_xen_device {
+    struct iommu_domain *domain;
+};
+
+/* Keep a list of devices associated with this driver */
+static DEFINE_SPINLOCK(arm_smmu_devices_lock);
+static LIST_HEAD(arm_smmu_devices);
+
+
+static inline void *dev_iommu_priv_get(struct device *dev)
+{
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+    return fwspec && fwspec->iommu_priv ? fwspec->iommu_priv : NULL;
+}
+
+static inline void dev_iommu_priv_set(struct device *dev, void *priv)
+{
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+    fwspec->iommu_priv = priv;
+}
+
+/* Start of Linux SMMUv3 code */
+
+/* MMIO registers */
+#define ARM_SMMU_IDR0      0x0
+#define IDR0_ST_LVL      GENMASK(28, 27)
+#define IDR0_ST_LVL_2LVL    1
+#define IDR0_STALL_MODEL    GENMASK(25, 24)
+#define IDR0_STALL_MODEL_STALL    0
+#define IDR0_STALL_MODEL_FORCE    2
+#define IDR0_TTENDIAN      GENMASK(22, 21)
+#define IDR0_TTENDIAN_MIXED    0
+#define IDR0_TTENDIAN_LE    2
+#define IDR0_TTENDIAN_BE    3
+#define IDR0_CD2L      (1 << 19)
+#define IDR0_VMID16      (1 << 18)
+#define IDR0_PRI      (1 << 16)
+#define IDR0_SEV      (1 << 14)
+#define IDR0_MSI      (1 << 13)
+#define IDR0_ASID16      (1 << 12)
+#define IDR0_ATS      (1 << 10)
+#define IDR0_HYP      (1 << 9)
+#define IDR0_COHACC      (1 << 4)
+#define IDR0_TTF      GENMASK(3, 2)
+#define IDR0_TTF_AARCH64    2
+#define IDR0_TTF_AARCH32_64    3
+#define IDR0_S1P      (1 << 1)
+#define IDR0_S2P      (1 << 0)
+
+#define ARM_SMMU_IDR1      0x4
+#define IDR1_TABLES_PRESET    (1 << 30)
+#define IDR1_QUEUES_PRESET    (1 << 29)
+#define IDR1_REL      (1 << 28)
+#define IDR1_CMDQS      GENMASK(25, 21)
+#define IDR1_EVTQS      GENMASK(20, 16)
+#define IDR1_PRIQS      GENMASK(15, 11)
+#define IDR1_SSIDSIZE      GENMASK(10, 6)
+#define IDR1_SIDSIZE      GENMASK(5, 0)
+
+#define ARM_SMMU_IDR3      0xc
+#define IDR3_RIL      (1 << 10)
+
+#define ARM_SMMU_IDR5      0x14
+#define IDR5_STALL_MAX      GENMASK(31, 16)
+#define IDR5_GRAN64K      (1 << 6)
+#define IDR5_GRAN16K      (1 << 5)
+#define IDR5_GRAN4K      (1 << 4)
+#define IDR5_OAS      GENMASK(2, 0)
+#define IDR5_OAS_32_BIT      0
+#define IDR5_OAS_36_BIT      1
+#define IDR5_OAS_40_BIT      2
+#define IDR5_OAS_42_BIT      3
+#define IDR5_OAS_44_BIT      4
+#define IDR5_OAS_48_BIT      5
+#define IDR5_OAS_52_BIT      6
+#define IDR5_VAX      GENMASK(11, 10)
+#define IDR5_VAX_52_BIT      1
+
+#define ARM_SMMU_CR0      0x20
+#define CR0_ATSCHK      (1 << 4)
+#define CR0_CMDQEN      (1 << 3)
+#define CR0_EVTQEN      (1 << 2)
+#define CR0_PRIQEN      (1 << 1)
+#define CR0_SMMUEN      (1 << 0)
+
+#define ARM_SMMU_CR0ACK      0x24
+
+#define ARM_SMMU_CR1      0x28
+#define CR1_TABLE_SH      GENMASK(11, 10)
+#define CR1_TABLE_OC      GENMASK(9, 8)
+#define CR1_TABLE_IC      GENMASK(7, 6)
+#define CR1_QUEUE_SH      GENMASK(5, 4)
+#define CR1_QUEUE_OC      GENMASK(3, 2)
+#define CR1_QUEUE_IC      GENMASK(1, 0)
+/* CR1 cacheability fields don't quite follow the usual TCR-style encoding */
+#define CR1_CACHE_NC      0
+#define CR1_CACHE_WB      1
+#define CR1_CACHE_WT      2
+
+#define ARM_SMMU_CR2      0x2c
+#define CR2_PTM        (1 << 2)
+#define CR2_RECINVSID      (1 << 1)
+#define CR2_E2H        (1 << 0)
+
+#define ARM_SMMU_GBPA      0x44
+#define GBPA_UPDATE      (1 << 31)
+#define GBPA_ABORT      (1 << 20)
+
+#define ARM_SMMU_IRQ_CTRL    0x50
+#define IRQ_CTRL_EVTQ_IRQEN    (1 << 2)
+#define IRQ_CTRL_PRIQ_IRQEN    (1 << 1)
+#define IRQ_CTRL_GERROR_IRQEN    (1 << 0)
+
+#define ARM_SMMU_IRQ_CTRLACK    0x54
+
+#define ARM_SMMU_GERROR      0x60
+#define GERROR_SFM_ERR      (1 << 8)
+#define GERROR_MSI_GERROR_ABT_ERR  (1 << 7)
+#define GERROR_MSI_PRIQ_ABT_ERR    (1 << 6)
+#define GERROR_MSI_EVTQ_ABT_ERR    (1 << 5)
+#define GERROR_MSI_CMDQ_ABT_ERR    (1 << 4)
+#define GERROR_PRIQ_ABT_ERR    (1 << 3)
+#define GERROR_EVTQ_ABT_ERR    (1 << 2)
+#define GERROR_CMDQ_ERR      (1 << 0)
+#define GERROR_ERR_MASK      0xfd
+
+#define ARM_SMMU_GERRORN    0x64
+
+#define ARM_SMMU_GERROR_IRQ_CFG0  0x68
+#define ARM_SMMU_GERROR_IRQ_CFG1  0x70
+#define ARM_SMMU_GERROR_IRQ_CFG2  0x74
+
+#define ARM_SMMU_STRTAB_BASE    0x80
+#define STRTAB_BASE_RA      (1UL << 62)
+#define STRTAB_BASE_ADDR_MASK    GENMASK_ULL(51, 6)
+
+#define ARM_SMMU_STRTAB_BASE_CFG  0x88
+#define STRTAB_BASE_CFG_FMT    GENMASK(17, 16)
+#define STRTAB_BASE_CFG_FMT_LINEAR  0
+#define STRTAB_BASE_CFG_FMT_2LVL  1
+#define STRTAB_BASE_CFG_SPLIT    GENMASK(10, 6)
+#define STRTAB_BASE_CFG_LOG2SIZE  GENMASK(5, 0)
+
+#define ARM_SMMU_CMDQ_BASE    0x90
+#define ARM_SMMU_CMDQ_PROD    0x98
+#define ARM_SMMU_CMDQ_CONS    0x9c
+
+#define ARM_SMMU_EVTQ_BASE    0xa0
+#define ARM_SMMU_EVTQ_PROD    0x100a8
+#define ARM_SMMU_EVTQ_CONS    0x100ac
+#define ARM_SMMU_EVTQ_IRQ_CFG0    0xb0
+#define ARM_SMMU_EVTQ_IRQ_CFG1    0xb8
+#define ARM_SMMU_EVTQ_IRQ_CFG2    0xbc
+
+#define ARM_SMMU_PRIQ_BASE    0xc0
+#define ARM_SMMU_PRIQ_PROD    0x100c8
+#define ARM_SMMU_PRIQ_CONS    0x100cc
+#define ARM_SMMU_PRIQ_IRQ_CFG0    0xd0
+#define ARM_SMMU_PRIQ_IRQ_CFG1    0xd8
+#define ARM_SMMU_PRIQ_IRQ_CFG2    0xdc
+
+#define ARM_SMMU_REG_SZ      0xe00
+
+/* Common MSI config fields */
+#define MSI_CFG0_ADDR_MASK    GENMASK_ULL(51, 2)
+#define MSI_CFG2_SH      GENMASK(5, 4)
+#define MSI_CFG2_MEMATTR    GENMASK(3, 0)
+
+/* Common memory attribute values */
+#define ARM_SMMU_SH_NSH      0
+#define ARM_SMMU_SH_OSH      2
+#define ARM_SMMU_SH_ISH      3
+#define ARM_SMMU_MEMATTR_DEVICE_nGnRE  0x1
+#define ARM_SMMU_MEMATTR_OIWB    0xf
+
+#define Q_IDX(llq, p)      ((p) & ((1 << (llq)->max_n_shift) - 1))
+#define Q_WRP(llq, p)      ((p) & (1 << (llq)->max_n_shift))
+#define Q_OVERFLOW_FLAG      (1U << 31)
+#define Q_OVF(p)      ((p) & Q_OVERFLOW_FLAG)
+#define Q_ENT(q, p)      ((q)->base +      \
+                            Q_IDX(&((q)->llq), p) *  \
+                            (q)->ent_dwords)
+
+#define Q_BASE_RWA      (1UL << 62)
+#define Q_BASE_ADDR_MASK    GENMASK_ULL(51, 5)
+#define Q_BASE_LOG2SIZE      GENMASK(4, 0)
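The Q_IDX()/Q_WRP() macros above implement the usual SMMU ring-buffer arithmetic: the low `max_n_shift` bits of prod/cons index the ring, and the next bit is a wrap flag that distinguishes a full queue from an empty one when the indices coincide. A self-contained model (ring size and helper names are assumptions for illustration):

```c
/* 8-entry example ring: 3 index bits, bit 3 is the wrap flag. */
#define MAX_N_SHIFT 3
#define Q_IDX(p)   ((p) & ((1u << MAX_N_SHIFT) - 1))
#define Q_WRP(p)   ((p) & (1u << MAX_N_SHIFT))

/* Equal indices with differing wrap bits: producer lapped consumer. */
static int queue_full(unsigned int prod, unsigned int cons)
{
    return Q_IDX(prod) == Q_IDX(cons) && Q_WRP(prod) != Q_WRP(cons);
}

/* Equal indices with equal wrap bits: nothing outstanding. */
static int queue_empty(unsigned int prod, unsigned int cons)
{
    return Q_IDX(prod) == Q_IDX(cons) && Q_WRP(prod) == Q_WRP(cons);
}
```

Without the wrap bit, `prod == cons` would be ambiguous between full and empty; carrying one extra bit resolves it without a separate count.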
+
+/* Ensure DMA allocations are naturally aligned */
+#ifdef CONFIG_CMA_ALIGNMENT
+#define Q_MAX_SZ_SHIFT      (PAGE_SHIFT + CONFIG_CMA_ALIGNMENT)
+#else
+#define Q_MAX_SZ_SHIFT      (PAGE_SHIFT + MAX_ORDER - 1)
+#endif
+
+/*
+ * Stream table.
+ *
+ * Linear: Enough to cover 1 << IDR1.SIDSIZE entries
+ * 2lvl: 128k L1 entries,
+ *       256 lazy entries per table (each table covers a PCI bus)
+ */
+#define STRTAB_L1_SZ_SHIFT    20
+#define STRTAB_SPLIT      8
+
+#define STRTAB_L1_DESC_DWORDS    1
+#define STRTAB_L1_DESC_SPAN    GENMASK_ULL(4, 0)
+#define STRTAB_L1_DESC_L2PTR_MASK  GENMASK_ULL(51, 6)
+
+#define STRTAB_STE_DWORDS    8
+#define STRTAB_STE_0_V      (1UL << 0)
+#define STRTAB_STE_0_CFG    GENMASK_ULL(3, 1)
+#define STRTAB_STE_0_CFG_ABORT    0
+#define STRTAB_STE_0_CFG_BYPASS    4
+#define STRTAB_STE_0_CFG_S1_TRANS  5
+#define STRTAB_STE_0_CFG_S2_TRANS  6
+
+#define STRTAB_STE_0_S1FMT    GENMASK_ULL(5, 4)
+#define STRTAB_STE_0_S1FMT_LINEAR  0
+#define STRTAB_STE_0_S1FMT_64K_L2  2
+#define STRTAB_STE_0_S1CTXPTR_MASK  GENMASK_ULL(51, 6)
+#define STRTAB_STE_0_S1CDMAX    GENMASK_ULL(63, 59)
+
+#define STRTAB_STE_1_S1DSS    GENMASK_ULL(1, 0)
+#define STRTAB_STE_1_S1DSS_TERMINATE  0x0
+#define STRTAB_STE_1_S1DSS_BYPASS  0x1
+#define STRTAB_STE_1_S1DSS_SSID0  0x2
+
+#define STRTAB_STE_1_S1C_CACHE_NC  0UL
+#define STRTAB_STE_1_S1C_CACHE_WBRA  1UL
+#define STRTAB_STE_1_S1C_CACHE_WT  2UL
+#define STRTAB_STE_1_S1C_CACHE_WB  3UL
+#define STRTAB_STE_1_S1CIR    GENMASK_ULL(3, 2)
+#define STRTAB_STE_1_S1COR    GENMASK_ULL(5, 4)
+#define STRTAB_STE_1_S1CSH    GENMASK_ULL(7, 6)
+
+#define STRTAB_STE_1_S1STALLD    (1UL << 27)
+
+#define STRTAB_STE_1_EATS    GENMASK_ULL(29, 28)
+#define STRTAB_STE_1_EATS_ABT    0UL
+#define STRTAB_STE_1_EATS_TRANS    1UL
+#define STRTAB_STE_1_EATS_S1CHK    2UL
+
+#define STRTAB_STE_1_STRW    GENMASK_ULL(31, 30)
+#define STRTAB_STE_1_STRW_NSEL1    0UL
+#define STRTAB_STE_1_STRW_EL2    2UL
+
+#define STRTAB_STE_1_SHCFG    GENMASK_ULL(45, 44)
+#define STRTAB_STE_1_SHCFG_INCOMING  1UL
+
+#define STRTAB_STE_2_S2VMID    GENMASK_ULL(15, 0)
+#define STRTAB_STE_2_VTCR    GENMASK_ULL(50, 32)
+#define STRTAB_STE_2_VTCR_S2T0SZ  GENMASK_ULL(5, 0)
+#define STRTAB_STE_2_VTCR_S2SL0    GENMASK_ULL(7, 6)
+#define STRTAB_STE_2_VTCR_S2IR0    GENMASK_ULL(9, 8)
+#define STRTAB_STE_2_VTCR_S2OR0    GENMASK_ULL(11, 10)
+#define STRTAB_STE_2_VTCR_S2SH0    GENMASK_ULL(13, 12)
+#define STRTAB_STE_2_VTCR_S2TG    GENMASK_ULL(15, 14)
+#define STRTAB_STE_2_VTCR_S2PS    GENMASK_ULL(18, 16)
+#define STRTAB_STE_2_S2AA64    (1UL << 51)
+#define STRTAB_STE_2_S2ENDI    (1UL << 52)
+#define STRTAB_STE_2_S2PTW    (1UL << 54)
+#define STRTAB_STE_2_S2R    (1UL << 58)
+
+#define STRTAB_STE_3_S2TTB_MASK    GENMASK_ULL(51, 4)
+
+/*
+ * Context descriptors.
+ *
+ * Linear: when less than 1024 SSIDs are supported
+ * 2lvl: at most 1024 L1 entries,
+ *       1024 lazy entries per table.
+ */
+#define CTXDESC_SPLIT      10
+#define CTXDESC_L2_ENTRIES    (1 << CTXDESC_SPLIT)
+
+#define CTXDESC_L1_DESC_DWORDS    1
+#define CTXDESC_L1_DESC_V    (1UL << 0)
+#define CTXDESC_L1_DESC_L2PTR_MASK  GENMASK_ULL(51, 12)
+
+#define CTXDESC_CD_DWORDS    8
+#define CTXDESC_CD_0_TCR_T0SZ    GENMASK_ULL(5, 0)
+#define CTXDESC_CD_0_TCR_TG0    GENMASK_ULL(7, 6)
+#define CTXDESC_CD_0_TCR_IRGN0    GENMASK_ULL(9, 8)
+#define CTXDESC_CD_0_TCR_ORGN0    GENMASK_ULL(11, 10)
+#define CTXDESC_CD_0_TCR_SH0    GENMASK_ULL(13, 12)
+#define CTXDESC_CD_0_TCR_EPD0    (1ULL << 14)
+#define CTXDESC_CD_0_TCR_EPD1    (1ULL << 30)
+
+#define CTXDESC_CD_0_ENDI    (1UL << 15)
+#define CTXDESC_CD_0_V      (1UL << 31)
+
+#define CTXDESC_CD_0_TCR_IPS    GENMASK_ULL(34, 32)
+#define CTXDESC_CD_0_TCR_TBI0    (1ULL << 38)
+
+#define CTXDESC_CD_0_AA64    (1UL << 41)
+#define CTXDESC_CD_0_S      (1UL << 44)
+#define CTXDESC_CD_0_R      (1UL << 45)
+#define CTXDESC_CD_0_A      (1UL << 46)
+#define CTXDESC_CD_0_ASET    (1UL << 47)
+#define CTXDESC_CD_0_ASID    GENMASK_ULL(63, 48)
+
+#define CTXDESC_CD_1_TTB0_MASK    GENMASK_ULL(51, 4)
+
+/*
+ * When the SMMU only supports linear context descriptor tables, pick a
+ * reasonable size limit (64kB).
+ */
+#define CTXDESC_LINEAR_CDMAX    ilog2(SZ_64K / (CTXDESC_CD_DWORDS << 3))
+
+/* Command queue */
+#define CMDQ_ENT_SZ_SHIFT    4
+#define CMDQ_ENT_DWORDS      ((1 << CMDQ_ENT_SZ_SHIFT) >> 3)
+#define CMDQ_MAX_SZ_SHIFT    (Q_MAX_SZ_SHIFT - CMDQ_ENT_SZ_SHIFT)
+
+#define CMDQ_CONS_ERR      GENMASK(30, 24)
+#define CMDQ_ERR_CERROR_NONE_IDX  0
+#define CMDQ_ERR_CERROR_ILL_IDX    1
+#define CMDQ_ERR_CERROR_ABT_IDX    2
+#define CMDQ_ERR_CERROR_ATC_INV_IDX  3
+
+#define CMDQ_0_OP      GENMASK_ULL(7, 0)
+#define CMDQ_0_SSV      (1UL << 11)
+
+#define CMDQ_PREFETCH_0_SID    GENMASK_ULL(63, 32)
+#define CMDQ_PREFETCH_1_SIZE    GENMASK_ULL(4, 0)
+#define CMDQ_PREFETCH_1_ADDR_MASK  GENMASK_ULL(63, 12)
+
+#define CMDQ_CFGI_0_SSID    GENMASK_ULL(31, 12)
+#define CMDQ_CFGI_0_SID      GENMASK_ULL(63, 32)
+#define CMDQ_CFGI_1_LEAF    (1UL << 0)
+#define CMDQ_CFGI_1_RANGE    GENMASK_ULL(4, 0)
+
+#define CMDQ_TLBI_0_NUM      GENMASK_ULL(16, 12)
+#define CMDQ_TLBI_RANGE_NUM_MAX    31
+#define CMDQ_TLBI_0_SCALE    GENMASK_ULL(24, 20)
+#define CMDQ_TLBI_0_VMID    GENMASK_ULL(47, 32)
+#define CMDQ_TLBI_0_ASID    GENMASK_ULL(63, 48)
+#define CMDQ_TLBI_1_LEAF    (1UL << 0)
+#define CMDQ_TLBI_1_TTL      GENMASK_ULL(9, 8)
+#define CMDQ_TLBI_1_TG      GENMASK_ULL(11, 10)
+#define CMDQ_TLBI_1_VA_MASK    GENMASK_ULL(63, 12)
+#define CMDQ_TLBI_1_IPA_MASK    GENMASK_ULL(51, 12)
+
+#define CMDQ_ATC_0_SSID      GENMASK_ULL(31, 12)
+#define CMDQ_ATC_0_SID      GENMASK_ULL(63, 32)
+#define CMDQ_ATC_0_GLOBAL    (1UL << 9)
+#define CMDQ_ATC_1_SIZE      GENMASK_ULL(5, 0)
+#define CMDQ_ATC_1_ADDR_MASK    GENMASK_ULL(63, 12)
+
+#define CMDQ_PRI_0_SSID      GENMASK_ULL(31, 12)
+#define CMDQ_PRI_0_SID      GENMASK_ULL(63, 32)
+#define CMDQ_PRI_1_GRPID    GENMASK_ULL(8, 0)
+#define CMDQ_PRI_1_RESP      GENMASK_ULL(13, 12)
+
+#define CMDQ_SYNC_0_CS      GENMASK_ULL(13, 12)
+#define CMDQ_SYNC_0_CS_NONE    0
+#define CMDQ_SYNC_0_CS_IRQ    1
+#define CMDQ_SYNC_0_CS_SEV    2
+#define CMDQ_SYNC_0_MSH      GENMASK_ULL(23, 22)
+#define CMDQ_SYNC_0_MSIATTR    GENMASK_ULL(27, 24)
+#define CMDQ_SYNC_0_MSIDATA    GENMASK_ULL(63, 32)
+#define CMDQ_SYNC_1_MSIADDR_MASK  GENMASK_ULL(51, 2)
+
+/* Event queue */
+#define EVTQ_ENT_SZ_SHIFT    5
+#define EVTQ_ENT_DWORDS      ((1 << EVTQ_ENT_SZ_SHIFT) >> 3)
+#define EVTQ_MAX_SZ_SHIFT    (Q_MAX_SZ_SHIFT - EVTQ_ENT_SZ_SHIFT)
+
+#define EVTQ_0_ID      GENMASK_ULL(7, 0)
+
+/* PRI queue */
+#define PRIQ_ENT_SZ_SHIFT    4
+#define PRIQ_ENT_DWORDS      ((1 << PRIQ_ENT_SZ_SHIFT) >> 3)
+#define PRIQ_MAX_SZ_SHIFT    (Q_MAX_SZ_SHIFT - PRIQ_ENT_SZ_SHIFT)
+
+#define PRIQ_0_SID      GENMASK_ULL(31, 0)
+#define PRIQ_0_SSID      GENMASK_ULL(51, 32)
+#define PRIQ_0_PERM_PRIV    (1UL << 58)
+#define PRIQ_0_PERM_EXEC    (1UL << 59)
+#define PRIQ_0_PERM_READ    (1UL << 60)
+#define PRIQ_0_PERM_WRITE    (1UL << 61)
+#define PRIQ_0_PRG_LAST      (1UL << 62)
+#define PRIQ_0_SSID_V      (1UL << 63)
+
+#define PRIQ_1_PRG_IDX      GENMASK_ULL(8, 0)
+#define PRIQ_1_ADDR_MASK    GENMASK_ULL(63, 12)
+
+/* High-level queue structures */
+#define ARM_SMMU_POLL_TIMEOUT_US  100
+#define ARM_SMMU_CMDQ_SYNC_TIMEOUT_US  1000000 /* 1s! */
+#define ARM_SMMU_CMDQ_SYNC_SPIN_COUNT  10
+
+static bool disable_bypass = true;
+
+enum pri_resp {
+    PRI_RESP_DENY = 0,
+    PRI_RESP_FAIL = 1,
+    PRI_RESP_SUCC = 2,
+};
+
+struct arm_smmu_cmdq_ent {
+    /* Common fields */
+    u8        opcode;
+    bool        substream_valid;
+
+    /* Command-specific fields */
+    union {
+    #define CMDQ_OP_PREFETCH_CFG  0x1
+        struct {
+            u32      sid;
+            u8      size;
+            u64      addr;
+        } prefetch;
+
+    #define CMDQ_OP_CFGI_STE  0x3
+    #define CMDQ_OP_CFGI_ALL  0x4
+        struct {
+            u32      sid;
+            union {
+                bool    leaf;
+                u8    span;
+            };
+        } cfgi;
+
+    #define CMDQ_OP_TLBI_EL2_ALL  0x20
+    #define CMDQ_OP_TLBI_S12_VMALL  0x28
+    #define CMDQ_OP_TLBI_NSNH_ALL  0x30
+        struct {
+            u16      vmid;
+        } tlbi;
+
+    #define CMDQ_OP_PRI_RESP  0x41
+        struct {
+            u32      sid;
+            u32      ssid;
+            u16      grpid;
+            enum pri_resp    resp;
+        } pri;
+
+    #define CMDQ_OP_CMD_SYNC  0x46
+        struct {
+            u32      msidata;
+            u64      msiaddr;
+        } sync;
+    };
+};
+
+struct arm_smmu_ll_queue {
+    u32        prod;
+    u32        cons;
+    u32        max_n_shift;
+};
+
+struct arm_smmu_queue {
+    struct arm_smmu_ll_queue  llq;
+    int        irq; /* Wired interrupt */
+
+    __le64        *base;
+    paddr_t      base_dma;
+    u64        q_base;
+
+    size_t        ent_dwords;
+
+    u32 __iomem      *prod_reg;
+    u32 __iomem      *cons_reg;
+};
+
+struct arm_smmu_cmdq {
+    struct arm_smmu_queue    q;
+    spinlock_t      lock;
+};
+
+struct arm_smmu_evtq {
+    struct arm_smmu_queue    q;
+};
+
+struct arm_smmu_priq {
+    struct arm_smmu_queue    q;
+};
+
+/* High-level stream table and context descriptor structures */
+struct arm_smmu_strtab_l1_desc {
+    u8        span;
+
+    __le64        *l2ptr;
+    paddr_t      l2ptr_dma;
+};
+
+struct arm_smmu_s2_cfg {
+    u16        vmid;
+    u64        vttbr;
+    u64        vtcr;
+    struct domain           *domain;
+};
+
+struct arm_smmu_strtab_cfg {
+    __le64        *strtab;
+    paddr_t      strtab_dma;
+    struct arm_smmu_strtab_l1_desc  *l1_desc;
+    unsigned int      num_l1_ents;
+
+    u64        strtab_base;
+    u32        strtab_base_cfg;
+};
+
+/* An SMMUv3 instance */
+struct arm_smmu_device {
+    struct device      *dev;
+    void __iomem      *base;
+    void __iomem      *page1;
+
+#define ARM_SMMU_FEAT_2_LVL_STRTAB  (1 << 0)
+#define ARM_SMMU_FEAT_PRI    (1 << 4)
+#define ARM_SMMU_FEAT_ATS    (1 << 5)
+#define ARM_SMMU_FEAT_SEV    (1 << 6)
+#define ARM_SMMU_FEAT_COHERENCY    (1 << 8)
+#define ARM_SMMU_FEAT_TRANS_S1    (1 << 9)
+#define ARM_SMMU_FEAT_TRANS_S2    (1 << 10)
+#define ARM_SMMU_FEAT_STALLS    (1 << 11)
+#define ARM_SMMU_FEAT_HYP    (1 << 12)
+#define ARM_SMMU_FEAT_VAX    (1 << 14)
+    u32        features;
+
+#define ARM_SMMU_OPT_SKIP_PREFETCH  (1 << 0)
+#define ARM_SMMU_OPT_PAGE0_REGS_ONLY  (1 << 1)
+    u32        options;
+
+    struct arm_smmu_cmdq    cmdq;
+    struct arm_smmu_evtq    evtq;
+    struct arm_smmu_priq    priq;
+
+    int        gerr_irq;
+    int        combined_irq;
+    u8        prev_cmd_opcode;
+
+    unsigned long      ias; /* IPA */
+    unsigned long      oas; /* PA */
+    unsigned long      pgsize_bitmap;
+
+#define ARM_SMMU_MAX_VMIDS    (1 << 16)
+    unsigned int      vmid_bits;
+    DECLARE_BITMAP(vmid_map, ARM_SMMU_MAX_VMIDS);
+
+    unsigned int      sid_bits;
+
+    struct arm_smmu_strtab_cfg  strtab_cfg;
+
+    /* Need to keep a list of SMMU devices */
+    struct list_head                devices;
+
+    /* Tasklets for handling evts/faults and PCI page request IRQs */
+    struct tasklet      evtq_irq_tasklet;
+    struct tasklet      priq_irq_tasklet;
+    struct tasklet      combined_irq_tasklet;
+};
+
+/* SMMU private data for each master */
+struct arm_smmu_master {
+    struct arm_smmu_device    *smmu;
+    struct device      *dev;
+    struct arm_smmu_domain    *domain;
+    struct list_head    domain_head;
+    u32        *sids;
+    unsigned int      num_sids;
+    bool        ats_enabled;
+};
+
+/* SMMU private data for an IOMMU domain */
+enum arm_smmu_domain_stage {
+    ARM_SMMU_DOMAIN_S1 = 0,
+    ARM_SMMU_DOMAIN_S2,
+    ARM_SMMU_DOMAIN_NESTED,
+    ARM_SMMU_DOMAIN_BYPASS,
+};
+
+struct arm_smmu_domain {
+    struct arm_smmu_device    *smmu;
+    struct spinlock      init_lock; /* Protects smmu pointer */
+
+    enum arm_smmu_domain_stage  stage;
+    union {
+        struct arm_smmu_s2_cfg  s2_cfg;
+    };
+
+    struct iommu_domain    domain;
+
+    struct list_head    devices;
+    spinlock_t      devices_lock;
+};
+
+struct arm_smmu_option_prop {
+    u32 opt;
+    const char *prop;
+};
+
+static struct arm_smmu_option_prop arm_smmu_options[] = {
+    { ARM_SMMU_OPT_SKIP_PREFETCH, "hisilicon,broken-prefetch-cmd" },
+    { ARM_SMMU_OPT_PAGE0_REGS_ONLY, "cavium,cn9900-broken-page1-regspace"},
+    { 0, NULL},
+};
+
+static inline void __iomem *arm_smmu_page1_fixup(unsigned long offset,
+                                                 struct arm_smmu_device *smmu)
+{
+    if ( offset > SZ_64K )
+        return smmu->page1 + offset - SZ_64K;
+
+    return smmu->base + offset;
+}
+
+static struct arm_smmu_domain *to_smmu_domain(struct iommu_domain *dom)
+{
+    return container_of(dom, struct arm_smmu_domain, domain);
+}
+
+static void parse_driver_options(struct arm_smmu_device *smmu)
+{
+    int i = 0;
+
+    do {
+        if ( dt_property_read_bool(smmu->dev->of_node,
+                arm_smmu_options[i].prop) )
+        {
+            smmu->options |= arm_smmu_options[i].opt;
+            dev_notice(smmu->dev, "option %s\n",
+                    arm_smmu_options[i].prop);
+        }
+    } while ( arm_smmu_options[++i].opt );
+}
+
+/* Low-level queue manipulation functions */
+static bool queue_full(struct arm_smmu_ll_queue *q)
+{
+    return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+        Q_WRP(q, q->prod) != Q_WRP(q, q->cons);
+}
+
+static bool queue_empty(struct arm_smmu_ll_queue *q)
+{
+    return Q_IDX(q, q->prod) == Q_IDX(q, q->cons) &&
+        Q_WRP(q, q->prod) == Q_WRP(q, q->cons);
+}
+
+static void queue_sync_cons_in(struct arm_smmu_queue *q)
+{
+    q->llq.cons = readl_relaxed(q->cons_reg);
+}
+
+static void queue_sync_cons_out(struct arm_smmu_queue *q)
+{
+    /*
+     * Ensure that all CPU accesses (reads and writes) to the queue
+     * are complete before we update the cons pointer.
+     */
+    mb();
+    writel_relaxed(q->llq.cons, q->cons_reg);
+}
+
+static void queue_inc_cons(struct arm_smmu_ll_queue *q)
+{
+    u32 cons = (Q_WRP(q, q->cons) | Q_IDX(q, q->cons)) + 1;
+    q->cons = Q_OVF(q->cons) | Q_WRP(q, cons) | Q_IDX(q, cons);
+}
+
+static int queue_sync_prod_in(struct arm_smmu_queue *q)
+{
+    int ret = 0;
+    u32 prod = readl_relaxed(q->prod_reg);
+
+    if ( Q_OVF(prod) != Q_OVF(q->llq.prod) )
+        ret = -EOVERFLOW;
+
+    q->llq.prod = prod;
+    return ret;
+}
+
+static void queue_sync_prod_out(struct arm_smmu_queue *q)
+{
+    writel(q->llq.prod, q->prod_reg);
+}
+
+static void queue_inc_prod(struct arm_smmu_ll_queue *q)
+{
+    u32 prod = (Q_WRP(q, q->prod) | Q_IDX(q, q->prod)) + 1;
+    q->prod = Q_OVF(q->prod) | Q_WRP(q, prod) | Q_IDX(q, prod);
+}
+
+/*
+ * Wait for the SMMU to consume items. If sync is true, wait until the queue
+ * is empty. Otherwise, wait until there is at least one free slot.
+ */
+static int queue_poll_cons(struct arm_smmu_queue *q, bool sync, bool wfe)
+{
+    s_time_t timeout;
+    unsigned int delay = 1, spin_cnt = 0;
+
+    /* Wait longer if it's a CMD_SYNC */
+    timeout = NOW() + MICROSECS(sync ? ARM_SMMU_CMDQ_SYNC_TIMEOUT_US :
+                                ARM_SMMU_POLL_TIMEOUT_US);
+
+    while ( queue_sync_cons_in(q),
+            (sync ? !queue_empty(&q->llq) : queue_full(&q->llq)) )
+    {
+        if ( NOW() > timeout )
+            return -ETIMEDOUT;
+
+        if ( wfe )
+        {
+            wfe();
+        }
+        else if ( ++spin_cnt < ARM_SMMU_CMDQ_SYNC_SPIN_COUNT )
+        {
+            cpu_relax();
+            continue;
+        }
+        else
+        {
+            udelay(delay);
+            delay *= 2;
+            spin_cnt = 0;
+        }
+    }
+
+    return 0;
+}
+
+static void queue_write(__le64 *dst, u64 *src, size_t n_dwords)
+{
+    int i;
+
+    for ( i = 0; i < n_dwords; ++i )
+        *dst++ = cpu_to_le64(*src++);
+}
+
+static int queue_insert_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+    if ( queue_full(&q->llq) )
+        return -ENOSPC;
+
+    queue_write(Q_ENT(q, q->llq.prod), ent, q->ent_dwords);
+    queue_inc_prod(&q->llq);
+    queue_sync_prod_out(q);
+    return 0;
+}
+
+static void queue_read(__le64 *dst, u64 *src, size_t n_dwords)
+{
+    int i;
+
+    for ( i = 0; i < n_dwords; ++i )
+        *dst++ = le64_to_cpu(*src++);
+}
+
+static int queue_remove_raw(struct arm_smmu_queue *q, u64 *ent)
+{
+    if ( queue_empty(&q->llq) )
+        return -EAGAIN;
+
+    queue_read(ent, Q_ENT(q, q->llq.cons), q->ent_dwords);
+    queue_inc_cons(&q->llq);
+    queue_sync_cons_out(q);
+    return 0;
+}
+
+/* High-level queue accessors */
+static int arm_smmu_cmdq_build_cmd(u64 *cmd, struct arm_smmu_cmdq_ent *ent)
+{
+    memset(cmd, 0, 1 << CMDQ_ENT_SZ_SHIFT);
+    cmd[0] |= FIELD_PREP(CMDQ_0_OP, ent->opcode);
+
+    switch ( ent->opcode )
+    {
+    case CMDQ_OP_TLBI_EL2_ALL:
+    case CMDQ_OP_TLBI_NSNH_ALL:
+        break;
+    case CMDQ_OP_PREFETCH_CFG:
+        cmd[0] |= FIELD_PREP(CMDQ_PREFETCH_0_SID, ent->prefetch.sid);
+        cmd[1] |= FIELD_PREP(CMDQ_PREFETCH_1_SIZE, ent->prefetch.size);
+        cmd[1] |= ent->prefetch.addr & CMDQ_PREFETCH_1_ADDR_MASK;
+        break;
+    case CMDQ_OP_CFGI_STE:
+        cmd[0] |= FIELD_PREP(CMDQ_CFGI_0_SID, ent->cfgi.sid);
+        cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_LEAF, ent->cfgi.leaf);
+        break;
+    case CMDQ_OP_CFGI_ALL:
+        /* Cover the entire SID range */
+        cmd[1] |= FIELD_PREP(CMDQ_CFGI_1_RANGE, 31);
+        break;
+    case CMDQ_OP_TLBI_S12_VMALL:
+        cmd[0] |= FIELD_PREP(CMDQ_TLBI_0_VMID, ent->tlbi.vmid);
+        break;
+    case CMDQ_OP_PRI_RESP:
+        cmd[0] |= FIELD_PREP(CMDQ_0_SSV, ent->substream_valid);
+        cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SSID, ent->pri.ssid);
+        cmd[0] |= FIELD_PREP(CMDQ_PRI_0_SID, ent->pri.sid);
+        cmd[1] |= FIELD_PREP(CMDQ_PRI_1_GRPID, ent->pri.grpid);
+        switch ( ent->pri.resp )
+        {
+        case PRI_RESP_DENY:
+        case PRI_RESP_FAIL:
+        case PRI_RESP_SUCC:
+            break;
+        default:
+            return -EINVAL;
+        }
+        cmd[1] |= FIELD_PREP(CMDQ_PRI_1_RESP, ent->pri.resp);
+        break;
+    case CMDQ_OP_CMD_SYNC:
+        if ( ent->sync.msiaddr )
+            cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_IRQ);
+        else
+            cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_CS, CMDQ_SYNC_0_CS_SEV);
+        cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSH, ARM_SMMU_SH_ISH);
+        cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIATTR, ARM_SMMU_MEMATTR_OIWB);
+        /*
+         * Commands are written little-endian, but we want the SMMU to
+         * receive MSIData, and thus write it back to memory, in CPU
+         * byte order, so big-endian needs an extra byteswap here.
+         */
+        cmd[0] |= FIELD_PREP(CMDQ_SYNC_0_MSIDATA,
+                             cpu_to_le32(ent->sync.msidata));
+        cmd[1] |= ent->sync.msiaddr & CMDQ_SYNC_1_MSIADDR_MASK;
+        break;
+    default:
+        return -ENOENT;
+    }
+
+    return 0;
+}
+
+static void arm_smmu_cmdq_skip_err(struct arm_smmu_device *smmu)
+{
+    static const char *cerror_str[] = {
+        [CMDQ_ERR_CERROR_NONE_IDX]  = "No error",
+        [CMDQ_ERR_CERROR_ILL_IDX]  = "Illegal command",
+        [CMDQ_ERR_CERROR_ABT_IDX]  = "Abort on command fetch",
+        [CMDQ_ERR_CERROR_ATC_INV_IDX]  = "ATC invalidate timeout",
+    };
+
+    int i;
+    u64 cmd[CMDQ_ENT_DWORDS];
+    struct arm_smmu_queue *q = &smmu->cmdq.q;
+    u32 cons = readl_relaxed(q->cons_reg);
+    u32 idx = FIELD_GET(CMDQ_CONS_ERR, cons);
+    struct arm_smmu_cmdq_ent cmd_sync = {
+        .opcode = CMDQ_OP_CMD_SYNC,
+    };
+
+    dev_err(smmu->dev, "CMDQ error (cons 0x%08x): %s\n", cons,
+            idx < ARRAY_SIZE(cerror_str) ?  cerror_str[idx] : "Unknown");
+
+    switch ( idx )
+    {
+    case CMDQ_ERR_CERROR_ABT_IDX:
+        dev_err(smmu->dev, "retrying command fetch\n");
+        /* Fallthrough */
+    case CMDQ_ERR_CERROR_NONE_IDX:
+        return;
+    case CMDQ_ERR_CERROR_ATC_INV_IDX:
+        /*
+         * ATC Invalidation Completion timeout. CONS is still pointing
+         * at the CMD_SYNC. Attempt to complete other pending commands
+         * by repeating the CMD_SYNC, though we might well end up back
+         * here since the ATC invalidation may still be pending.
+         */
+        return;
+    case CMDQ_ERR_CERROR_ILL_IDX:
+        /* Fallthrough */
+    default:
+        break;
+    }
+
+    /*
+     * We may have concurrent producers, so we need to be careful
+     * not to touch any of the shadow cmdq state.
+     */
+    queue_read(cmd, Q_ENT(q, cons), q->ent_dwords);
+    dev_err(smmu->dev, "skipping command in error state:\n");
+    for ( i = 0; i < ARRAY_SIZE(cmd); ++i )
+        dev_err(smmu->dev, "\t0x%016llx\n", (unsigned long long)cmd[i]);
+
+    /* Convert the erroneous command into a CMD_SYNC */
+    if ( arm_smmu_cmdq_build_cmd(cmd, &cmd_sync) )
+    {
+        dev_err(smmu->dev, "failed to convert to CMD_SYNC\n");
+        return;
+    }
+
+    queue_write(Q_ENT(q, cons), cmd, q->ent_dwords);
+}
+
+static void arm_smmu_cmdq_insert_cmd(struct arm_smmu_device *smmu, u64 *cmd)
+{
+    struct arm_smmu_queue *q = &smmu->cmdq.q;
+    bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+
+    smmu->prev_cmd_opcode = FIELD_GET(CMDQ_0_OP, cmd[0]);
+
+    while ( queue_insert_raw(q, cmd) == -ENOSPC )
+    {
+        if ( queue_poll_cons(q, false, wfe) )
+            dev_err_ratelimited(smmu->dev, "CMDQ timeout\n");
+    }
+}
+
+static void arm_smmu_cmdq_issue_cmd(struct arm_smmu_device *smmu,
+                                    struct arm_smmu_cmdq_ent *ent)
+{
+    u64 cmd[CMDQ_ENT_DWORDS];
+    unsigned long flags;
+
+    if ( arm_smmu_cmdq_build_cmd(cmd, ent) )
+    {
+        dev_warn(smmu->dev, "ignoring unknown CMDQ opcode 0x%x\n",
+                ent->opcode);
+        return;
+    }
+
+    spin_lock_irqsave(&smmu->cmdq.lock, flags);
+    arm_smmu_cmdq_insert_cmd(smmu, cmd);
+    spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
+}
+
+static int __arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
+{
+    u64 cmd[CMDQ_ENT_DWORDS];
+    unsigned long flags;
+    bool wfe = !!(smmu->features & ARM_SMMU_FEAT_SEV);
+    struct arm_smmu_cmdq_ent ent = { .opcode = CMDQ_OP_CMD_SYNC };
+    int ret;
+
+    arm_smmu_cmdq_build_cmd(cmd, &ent);
+
+    spin_lock_irqsave(&smmu->cmdq.lock, flags);
+    arm_smmu_cmdq_insert_cmd(smmu, cmd);
+    ret = queue_poll_cons(&smmu->cmdq.q, true, wfe);
+    spin_unlock_irqrestore(&smmu->cmdq.lock, flags);
+
+    return ret;
+}
+
+static int arm_smmu_cmdq_issue_sync(struct arm_smmu_device *smmu)
+{
+    int ret;
+
+    ret = __arm_smmu_cmdq_issue_sync(smmu);
+    if ( ret )
+        dev_err_ratelimited(smmu->dev, "CMD_SYNC timeout\n");
+    return ret;
+}
+
+/* Stream table manipulation functions */
+static void arm_smmu_write_strtab_l1_desc(__le64 *dst,
+                                          struct arm_smmu_strtab_l1_desc *desc)
+{
+    u64 val = 0;
+
+    val |= FIELD_PREP(STRTAB_L1_DESC_SPAN, desc->span);
+    val |= desc->l2ptr_dma & STRTAB_L1_DESC_L2PTR_MASK;
+
+    *dst = cpu_to_le64(val);
+}
+
+static void arm_smmu_sync_ste_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+    struct arm_smmu_cmdq_ent cmd = {
+        .opcode  = CMDQ_OP_CFGI_STE,
+        .cfgi  = {
+            .sid  = sid,
+            .leaf  = true,
+        },
+    };
+
+    arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+    arm_smmu_cmdq_issue_sync(smmu);
+}
+
+static void arm_smmu_write_strtab_ent(struct arm_smmu_master *master, u32 sid,
+                                      __le64 *dst)
+{
+    /*
+     * This is hideously complicated, but we only really care about
+     * three cases at the moment:
+     *
+     * 1. Invalid (all zero) -> bypass/fault (init)
+     * 2. Bypass/fault -> translation/bypass (attach)
+     * 3. Translation/bypass -> bypass/fault (detach)
+     *
+     * Given that we can't update the STE atomically and the SMMU
+     * doesn't read the thing in a defined order, that leaves us
+     * with the following maintenance requirements:
+     *
+     * 1. Update Config, return (init time STEs aren't live)
+     * 2. Write everything apart from dword 0, sync, write dword 0, sync
+     * 3. Update Config, sync
+     */
+    u64 val = le64_to_cpu(dst[0]);
+    bool ste_live = false;
+    struct arm_smmu_device *smmu = NULL;
+    struct arm_smmu_s2_cfg *s2_cfg = NULL;
+    struct arm_smmu_domain *smmu_domain = NULL;
+    struct arm_smmu_cmdq_ent prefetch_cmd = {
+        .opcode    = CMDQ_OP_PREFETCH_CFG,
+        .prefetch  = {
+            .sid  = sid,
+        },
+    };
+
+    if ( master )
+    {
+        smmu_domain = master->domain;
+        smmu = master->smmu;
+    }
+
+    if ( smmu_domain )
+    {
+        s2_cfg = &smmu_domain->s2_cfg;
+    }
+
+    if ( val & STRTAB_STE_0_V )
+    {
+        switch ( FIELD_GET(STRTAB_STE_0_CFG, val) )
+        {
+        case STRTAB_STE_0_CFG_BYPASS:
+            break;
+        case STRTAB_STE_0_CFG_S2_TRANS:
+            ste_live = true;
+            break;
+        case STRTAB_STE_0_CFG_ABORT:
+            BUG_ON(!disable_bypass);
+            break;
+        default:
+            BUG(); /* STE corruption */
+        }
+    }
+
+    /* Nuke the existing STE_0 value, as we're going to rewrite it */
+    val = STRTAB_STE_0_V;
+
+    /* Bypass/fault */
+    if ( !smmu_domain || !s2_cfg )
+    {
+        if ( !smmu_domain && disable_bypass )
+            val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_ABORT);
+        else
+            val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_BYPASS);
+
+        dst[0] = cpu_to_le64(val);
+        dst[1] = cpu_to_le64(FIELD_PREP(STRTAB_STE_1_SHCFG,
+                    STRTAB_STE_1_SHCFG_INCOMING));
+        dst[2] = 0; /* Nuke the VMID */
+        /*
+         * The SMMU can perform negative caching, so we must sync
+         * the STE regardless of whether the old value was live.
+         */
+        if ( smmu )
+            arm_smmu_sync_ste_for_sid(smmu, sid);
+        return;
+    }
+
+    if ( s2_cfg )
+    {
+        BUG_ON(ste_live);
+        dst[2] = cpu_to_le64(
+                FIELD_PREP(STRTAB_STE_2_S2VMID, s2_cfg->vmid) |
+                FIELD_PREP(STRTAB_STE_2_VTCR, s2_cfg->vtcr) |
+#ifdef __BIG_ENDIAN
+                STRTAB_STE_2_S2ENDI |
+#endif
+                STRTAB_STE_2_S2PTW | STRTAB_STE_2_S2AA64 |
+                STRTAB_STE_2_S2R);
+
+        dst[3] = cpu_to_le64(s2_cfg->vttbr & STRTAB_STE_3_S2TTB_MASK);
+
+        val |= FIELD_PREP(STRTAB_STE_0_CFG, STRTAB_STE_0_CFG_S2_TRANS);
+    }
+
+    if ( master->ats_enabled )
+        dst[1] |= cpu_to_le64(FIELD_PREP(STRTAB_STE_1_EATS,
+                    STRTAB_STE_1_EATS_TRANS));
+
+    arm_smmu_sync_ste_for_sid(smmu, sid);
+    dst[0] = cpu_to_le64(val);
+    arm_smmu_sync_ste_for_sid(smmu, sid);
+
+    /* It's likely that we'll want to use the new STE soon */
+    if ( !(smmu->options & ARM_SMMU_OPT_SKIP_PREFETCH) )
+        arm_smmu_cmdq_issue_cmd(smmu, &prefetch_cmd);
+}
+
+static void arm_smmu_init_bypass_stes(u64 *strtab, unsigned int nent)
+{
+    unsigned int i;
+
+    for ( i = 0; i < nent; ++i )
+    {
+        arm_smmu_write_strtab_ent(NULL, -1, strtab);
+        strtab += STRTAB_STE_DWORDS;
+    }
+}
+
+static int arm_smmu_init_l2_strtab(struct arm_smmu_device *smmu, u32 sid)
+{
+    size_t size;
+    void *strtab;
+    struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+    struct arm_smmu_strtab_l1_desc *desc = &cfg->l1_desc[sid >> STRTAB_SPLIT];
+
+    if ( desc->l2ptr )
+        return 0;
+
+    size = 1 << (STRTAB_SPLIT + ilog2(STRTAB_STE_DWORDS) + 3);
+    strtab = &cfg->strtab[(sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS];
+
+    desc->span = STRTAB_SPLIT + 1;
+    desc->l2ptr = dmam_alloc_coherent(size, &desc->l2ptr_dma);
+    if ( !desc->l2ptr )
+    {
+        dev_err(smmu->dev,
+                "failed to allocate l2 stream table for SID %u\n",
+                sid);
+        return -ENOMEM;
+    }
+
+    arm_smmu_init_bypass_stes(desc->l2ptr, 1 << STRTAB_SPLIT);
+    arm_smmu_write_strtab_l1_desc(strtab, desc);
+    return 0;
+}
+
+/* IRQ and event handlers */
+static void arm_smmu_evtq_thread(void *dev)
+{
+    int i;
+    struct arm_smmu_device *smmu = dev;
+    struct arm_smmu_queue *q = &smmu->evtq.q;
+    struct arm_smmu_ll_queue *llq = &q->llq;
+    u64 evt[EVTQ_ENT_DWORDS];
+
+    do {
+        while ( !queue_remove_raw(q, evt) )
+        {
+            u8 id = FIELD_GET(EVTQ_0_ID, evt[0]);
+
+            dev_info(smmu->dev, "event 0x%02x received:\n", id);
+            for ( i = 0; i < ARRAY_SIZE(evt); ++i )
+                  dev_info(smmu->dev, "\t0x%016llx\n",
+                          (unsigned long long)evt[i]);
+
+        }
+
+        /*
+         * Not much we can do on overflow, so scream and pretend we're
+         * trying harder.
+         */
+        if ( queue_sync_prod_in(q) == -EOVERFLOW )
+            dev_err(smmu->dev, "EVTQ overflow detected -- events lost\n");
+    } while ( !queue_empty(llq) );
+
+    /* Sync our overflow flag, as we believe we're up to speed */
+    llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+        Q_IDX(llq, llq->cons);
+}
+
+static void arm_smmu_handle_ppr(struct arm_smmu_device *smmu, u64 *evt)
+{
+    u32 sid, ssid;
+    u16 grpid;
+    bool ssv, last;
+
+    sid = FIELD_GET(PRIQ_0_SID, evt[0]);
+    ssv = FIELD_GET(PRIQ_0_SSID_V, evt[0]);
+    ssid = ssv ? FIELD_GET(PRIQ_0_SSID, evt[0]) : 0;
+    last = FIELD_GET(PRIQ_0_PRG_LAST, evt[0]);
+    grpid = FIELD_GET(PRIQ_1_PRG_IDX, evt[1]);
+
+    dev_info(smmu->dev, "unexpected PRI request received:\n");
+    dev_info(smmu->dev,
+            "\tsid 0x%08x.0x%05x: [%u%s] %sprivileged %s%s%s access at iova 0x%016llx\n",
+            sid, ssid, grpid, last ? "L" : "",
+            evt[0] & PRIQ_0_PERM_PRIV ? "" : "un",
+            evt[0] & PRIQ_0_PERM_READ ? "R" : "",
+            evt[0] & PRIQ_0_PERM_WRITE ? "W" : "",
+            evt[0] & PRIQ_0_PERM_EXEC ? "X" : "",
+            evt[1] & PRIQ_1_ADDR_MASK);
+
+    if ( last )
+    {
+        struct arm_smmu_cmdq_ent cmd = {
+            .opcode      = CMDQ_OP_PRI_RESP,
+            .substream_valid  = ssv,
+            .pri      = {
+                .sid  = sid,
+                .ssid  = ssid,
+                .grpid  = grpid,
+                .resp  = PRI_RESP_DENY,
+            },
+        };
+
+        arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+    }
+}
+
+static void arm_smmu_priq_thread(void *dev)
+{
+    struct arm_smmu_device *smmu = dev;
+    struct arm_smmu_queue *q = &smmu->priq.q;
+    struct arm_smmu_ll_queue *llq = &q->llq;
+    u64 evt[PRIQ_ENT_DWORDS];
+
+    do {
+        while ( !queue_remove_raw(q, evt) )
+            arm_smmu_handle_ppr(smmu, evt);
+
+        if ( queue_sync_prod_in(q) == -EOVERFLOW )
+            dev_err(smmu->dev, "PRIQ overflow detected -- requests lost\n");
+    } while ( !queue_empty(llq) );
+
+    /* Sync our overflow flag, as we believe we're up to speed */
+    llq->cons = Q_OVF(llq->prod) | Q_WRP(llq, llq->cons) |
+        Q_IDX(llq, llq->cons);
+    queue_sync_cons_out(q);
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu);
+
+static void arm_smmu_gerror_handler(int irq, void *dev,
+                                    struct cpu_user_regs *regs)
+{
+    u32 gerror, gerrorn, active;
+    struct arm_smmu_device *smmu = dev;
+
+    gerror = readl_relaxed(smmu->base + ARM_SMMU_GERROR);
+    gerrorn = readl_relaxed(smmu->base + ARM_SMMU_GERRORN);
+
+    active = gerror ^ gerrorn;
+    if ( !(active & GERROR_ERR_MASK) )
+        return; /* No errors pending */
+
+    dev_warn(smmu->dev,
+            "unexpected global error reported (0x%08x), this could be serious\n",
+            active);
+
+    if ( active & GERROR_SFM_ERR )
+    {
+        dev_err(smmu->dev, "device has entered Service Failure Mode!\n");
+        arm_smmu_device_disable(smmu);
+    }
+
+    if ( active & GERROR_MSI_GERROR_ABT_ERR )
+        dev_warn(smmu->dev, "GERROR MSI write aborted\n");
+
+    if ( active & GERROR_MSI_PRIQ_ABT_ERR )
+        dev_warn(smmu->dev, "PRIQ MSI write aborted\n");
+
+    if ( active & GERROR_MSI_EVTQ_ABT_ERR )
+        dev_warn(smmu->dev, "EVTQ MSI write aborted\n");
+
+    if ( active & GERROR_MSI_CMDQ_ABT_ERR )
+        dev_warn(smmu->dev, "CMDQ MSI write aborted\n");
+
+    if ( active & GERROR_PRIQ_ABT_ERR )
+        dev_err(smmu->dev, "PRIQ write aborted -- events may have been lost\n");
+
+    if ( active & GERROR_EVTQ_ABT_ERR )
+        dev_err(smmu->dev, "EVTQ write aborted -- events may have been lost\n");
+
+    if ( active & GERROR_CMDQ_ERR )
+        arm_smmu_cmdq_skip_err(smmu);
+
+    writel(gerror, smmu->base + ARM_SMMU_GERRORN);
+}
+
+static void arm_smmu_combined_irq_handler(int irq, void *dev,
+                                          struct cpu_user_regs *regs)
+{
+    struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
+
+    arm_smmu_gerror_handler(irq, dev, regs);
+
+    tasklet_schedule(&(smmu->combined_irq_tasklet));
+}
+
+static void arm_smmu_combined_irq_thread(void *dev)
+{
+    struct arm_smmu_device *smmu = dev;
+
+    arm_smmu_evtq_thread(dev);
+    if ( smmu->features & ARM_SMMU_FEAT_PRI )
+        arm_smmu_priq_thread(dev);
+}
+
+static void arm_smmu_evtq_irq_tasklet(int irq, void *dev,
+                                       struct cpu_user_regs *regs)
+{
+    struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
+
+    tasklet_schedule(&(smmu->evtq_irq_tasklet));
+}
+
+static void arm_smmu_priq_irq_tasklet(int irq, void *dev,
+                                       struct cpu_user_regs *regs)
+{
+    struct arm_smmu_device *smmu = (struct arm_smmu_device *)dev;
+
+    tasklet_schedule(&(smmu->priq_irq_tasklet));
+}
+
+static void arm_smmu_tlb_inv_context(void *cookie)
+{
+    struct arm_smmu_domain *smmu_domain = cookie;
+    struct arm_smmu_device *smmu = smmu_domain->smmu;
+    struct arm_smmu_cmdq_ent cmd;
+
+    cmd.opcode  = CMDQ_OP_TLBI_S12_VMALL;
+    cmd.tlbi.vmid  = smmu_domain->s2_cfg.vmid;
+
+    /*
+     * NOTE: when io-pgtable is in non-strict mode, we may get here with
+     * PTEs previously cleared by unmaps on the current CPU not yet visible
+     * to the SMMU. We are relying on the DSB implicit in
+     * queue_sync_prod_out() to guarantee those are observed before the
+     * TLBI. Do be careful, 007.
+     */
+    arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+    arm_smmu_cmdq_issue_sync(smmu);
+}
+
+static struct iommu_domain *arm_smmu_domain_alloc(void)
+{
+    struct arm_smmu_domain *smmu_domain;
+
+    /*
+     * Allocate the domain and initialise some of its data structures.
+     * We can't really do anything meaningful until we've added a
+     * master.
+     */
+    smmu_domain = xzalloc(struct arm_smmu_domain);
+    if ( !smmu_domain )
+        return NULL;
+
+    spin_lock_init(&smmu_domain->init_lock);
+    INIT_LIST_HEAD(&smmu_domain->devices);
+    spin_lock_init(&smmu_domain->devices_lock);
+
+    return &smmu_domain->domain;
+}
+
+static int arm_smmu_bitmap_alloc(unsigned long *map, int span)
+{
+    int idx, size = 1 << span;
+
+    do {
+        idx = find_first_zero_bit(map, size);
+        if ( idx == size )
+            return -ENOSPC;
+    } while ( test_and_set_bit(idx, map) );
+
+    return idx;
+}
+
+static void arm_smmu_bitmap_free(unsigned long *map, int idx)
+{
+    clear_bit(idx, map);
+}
+
+static void arm_smmu_domain_free(struct iommu_domain *domain)
+{
+    struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+    struct arm_smmu_device *smmu = smmu_domain->smmu;
+    struct arm_smmu_s2_cfg *cfg;
+
+    cfg = &smmu_domain->s2_cfg;
+    if ( cfg->vmid )
+        arm_smmu_bitmap_free(smmu->vmid_map, cfg->vmid);
+
+    xfree(smmu_domain);
+}
+
+static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
+                                       struct arm_smmu_master *master)
+{
+    int vmid;
+    u64 reg;
+    struct arm_smmu_device *smmu = smmu_domain->smmu;
+    struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
+
+    /* VTCR */
+    reg = VTCR_RES1 | VTCR_SH0_IS | VTCR_IRGN0_WBWA | VTCR_ORGN0_WBWA;
+
+    switch ( PAGE_SIZE )
+    {
+    case SZ_4K:
+        reg |= VTCR_TG0_4K;
+        break;
+    case SZ_16K:
+        reg |= VTCR_TG0_16K;
+        break;
+    case SZ_64K:
+        reg |= VTCR_TG0_64K;
+        break;
+    }
+
+    switch ( smmu->oas )
+    {
+    case 32:
+        reg |= VTCR_PS(_AC(0x0,ULL));
+        break;
+    case 36:
+        reg |= VTCR_PS(_AC(0x1,ULL));
+        break;
+    case 40:
+        reg |= VTCR_PS(_AC(0x2,ULL));
+        break;
+    case 42:
+        reg |= VTCR_PS(_AC(0x3,ULL));
+        break;
+    case 44:
+        reg |= VTCR_PS(_AC(0x4,ULL));
+        break;
+    case 48:
+        reg |= VTCR_PS(_AC(0x5,ULL));
+        break;
+    case 52:
+        reg |= VTCR_PS(_AC(0x6,ULL));
+        break;
+    }
+
+    reg |= VTCR_T0SZ(64ULL - smmu->ias);
+    reg |= VTCR_SL0(0x2);
+    reg |= VTCR_VS;
+
+    cfg->vtcr   = reg;
+
+    vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
+    if ( vmid < 0 )
+        return vmid;
+    cfg->vmid  = (u16)vmid;
+
+    cfg->vttbr  = page_to_maddr(cfg->domain->arch.p2m.root);
+
+    printk(XENLOG_DEBUG "SMMUv3: d%u: vmid 0x%x vtcr 0x%"PRIpaddr" p2maddr 0x%"PRIpaddr"\n",
+            cfg->domain->domain_id, cfg->vmid, cfg->vtcr, cfg->vttbr);
+
+    return 0;
+}
+
+static int arm_smmu_domain_finalise(struct iommu_domain *domain,
+                                    struct arm_smmu_master *master)
+{
+    int ret;
+    struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+
+    /* Restrict the stage to what we can actually support */
+    smmu_domain->stage = ARM_SMMU_DOMAIN_S2;
+
+    ret = arm_smmu_domain_finalise_s2(smmu_domain, master);
+    if ( ret < 0 )
+    {
+        return ret;
+    }
+
+    return 0;
+}
+
+static __le64 *arm_smmu_get_step_for_sid(struct arm_smmu_device *smmu, u32 sid)
+{
+    __le64 *step;
+    struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+    if ( smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB )
+    {
+        struct arm_smmu_strtab_l1_desc *l1_desc;
+        int idx;
+
+        /* Two-level walk */
+        idx = (sid >> STRTAB_SPLIT) * STRTAB_L1_DESC_DWORDS;
+        l1_desc = &cfg->l1_desc[idx];
+        idx = (sid & ((1 << STRTAB_SPLIT) - 1)) * STRTAB_STE_DWORDS;
+        step = &l1_desc->l2ptr[idx];
+    }
+    else
+    {
+        /* Simple linear lookup */
+        step = &cfg->strtab[sid * STRTAB_STE_DWORDS];
+    }
+
+    return step;
+}
+
+static void arm_smmu_install_ste_for_dev(struct arm_smmu_master *master)
+{
+    int i, j;
+    struct arm_smmu_device *smmu = master->smmu;
+
+    for ( i = 0; i < master->num_sids; ++i )
+    {
+        u32 sid = master->sids[i];
+        __le64 *step = arm_smmu_get_step_for_sid(smmu, sid);
+
+        /* Bridged PCI devices may end up with duplicated IDs */
+        for ( j = 0; j < i; j++ )
+            if ( master->sids[j] == sid )
+                break;
+        if ( j < i )
+            continue;
+
+        arm_smmu_write_strtab_ent(master, sid, step);
+    }
+}
+
+static void arm_smmu_detach_dev(struct arm_smmu_master *master)
+{
+    unsigned long flags;
+    struct arm_smmu_domain *smmu_domain = master->domain;
+
+    if ( !smmu_domain )
+        return;
+
+    spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+    list_del(&master->domain_head);
+    spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+    master->domain = NULL;
+    master->ats_enabled = false;
+    arm_smmu_install_ste_for_dev(master);
+}
+
+static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+    int ret = 0;
+    unsigned long flags;
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+    struct arm_smmu_device *smmu;
+    struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
+    struct arm_smmu_master *master;
+
+    if ( !fwspec )
+        return -ENOENT;
+
+    master = dev_iommu_priv_get(dev);
+    smmu = master->smmu;
+
+    arm_smmu_detach_dev(master);
+
+    spin_lock(&smmu_domain->init_lock);
+
+    if ( !smmu_domain->smmu )
+    {
+        smmu_domain->smmu = smmu;
+        ret = arm_smmu_domain_finalise(domain, master);
+        if ( ret )
+        {
+            smmu_domain->smmu = NULL;
+            goto out_unlock;
+        }
+    }
+    else if ( smmu_domain->smmu != smmu )
+    {
+        dev_err(dev,
+                "cannot attach to SMMU %s (upstream of %s)\n",
+                dev_name(smmu_domain->smmu->dev),
+                dev_name(smmu->dev));
+        ret = -ENXIO;
+        goto out_unlock;
+    }
+
+    master->domain = smmu_domain;
+
+    arm_smmu_install_ste_for_dev(master);
+
+    spin_lock_irqsave(&smmu_domain->devices_lock, flags);
+    list_add(&master->domain_head, &smmu_domain->devices);
+    spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
+
+out_unlock:
+    spin_unlock(&smmu_domain->init_lock);
+    return ret;
+}
+
+static bool arm_smmu_sid_in_range(struct arm_smmu_device *smmu, u32 sid)
+{
+    unsigned long limit = smmu->strtab_cfg.num_l1_ents;
+
+    if ( smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB )
+        limit *= 1UL << STRTAB_SPLIT;
+
+    return sid < limit;
+}
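[Editorial note, not part of the patch: the range check above must account for the two-level case, where the table covers num_l1_ents * 2^STRTAB_SPLIT StreamIDs rather than num_l1_ents. A small standalone model, assuming STRTAB_SPLIT = 8 for illustration:]

```c
#include <stdbool.h>
#include <stdint.h>

#define STRTAB_SPLIT 8  /* assumed split for this sketch */

static bool sid_in_range(uint32_t sid, unsigned long num_l1_ents,
                         bool two_level)
{
    unsigned long limit = num_l1_ents;

    /* Each L1 descriptor spans a whole L2 table of 2^STRTAB_SPLIT STEs. */
    if ( two_level )
        limit *= 1UL << STRTAB_SPLIT;

    return sid < limit;
}
```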
+
+static int arm_smmu_init_one_queue(struct arm_smmu_device *smmu,
+                                   struct arm_smmu_queue *q,
+                                   unsigned long prod_off,
+                                   unsigned long cons_off,
+                                   size_t dwords, const char *name)
+{
+    size_t qsz;
+
+    do {
+        qsz = ((1 << q->llq.max_n_shift) * dwords) << 3;
+        q->base = dmam_alloc_coherent(qsz, &q->base_dma);
+        if ( q->base || qsz < PAGE_SIZE )
+            break;
+
+        q->llq.max_n_shift--;
+    } while (1);
+
+    if ( !q->base )
+    {
+        dev_err(smmu->dev,
+                "failed to allocate queue (0x%zx bytes) for %s\n",
+                qsz, name);
+        return -ENOMEM;
+    }
+
+    WARN_ON(q->base_dma & (qsz - 1));
+
+    if ( !(q->base_dma & (qsz - 1)) )
+    {
+        dev_info(smmu->dev, "allocated %u entries for %s\n",
+                1 << q->llq.max_n_shift, name);
+    }
+
+    q->prod_reg  = arm_smmu_page1_fixup(prod_off, smmu);
+    q->cons_reg  = arm_smmu_page1_fixup(cons_off, smmu);
+    q->ent_dwords  = dwords;
+
+    q->q_base  = Q_BASE_RWA;
+    q->q_base |= q->base_dma & Q_BASE_ADDR_MASK;
+    q->q_base |= FIELD_PREP(Q_BASE_LOG2SIZE, q->llq.max_n_shift);
+
+    q->llq.prod = q->llq.cons = 0;
+    return 0;
+}
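[Editorial note, not part of the patch: the allocation loop above degrades gracefully. If a contiguous buffer of the requested size cannot be obtained, the queue depth is halved (max_n_shift--) until either the allocation succeeds or the queue is already below one page. A hypothetical standalone model of that size calculation, where "allocation succeeds" is modelled as fitting within an available budget:]

```c
#include <stddef.h>

#define PAGE_SIZE 4096

/* Queue size in bytes: (entries * dwords per entry) * 8 bytes per dword. */
static size_t queue_bytes(unsigned int max_n_shift, size_t dwords)
{
    return ((size_t)1 << max_n_shift) * dwords * 8;
}

/*
 * Model of the shrink loop: a queue of qsz bytes "allocates" when
 * qsz <= avail.  Returns the shift actually used, or -1 if even a
 * sub-page queue could not be allocated.
 */
static int shrink_to_fit(unsigned int max_n_shift, size_t dwords,
                         size_t avail)
{
    for ( ;; )
    {
        size_t qsz = queue_bytes(max_n_shift, dwords);

        if ( qsz <= avail )      /* stands in for a successful allocation */
            return max_n_shift;
        if ( qsz < PAGE_SIZE )   /* cannot shrink below a page's worth */
            return -1;
        max_n_shift--;
    }
}
```

[With 8-dword entries and an 8 KiB budget, a requested shift of 10 (64 KiB) shrinks to shift 7 (8 KiB).]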
+
+static int arm_smmu_init_queues(struct arm_smmu_device *smmu)
+{
+    int ret;
+
+    /* cmdq */
+    spin_lock_init(&smmu->cmdq.lock);
+    ret = arm_smmu_init_one_queue(smmu, &smmu->cmdq.q, ARM_SMMU_CMDQ_PROD,
+            ARM_SMMU_CMDQ_CONS, CMDQ_ENT_DWORDS,
+            "cmdq");
+    if ( ret )
+        return ret;
+
+    /* evtq */
+    ret = arm_smmu_init_one_queue(smmu, &smmu->evtq.q, ARM_SMMU_EVTQ_PROD,
+            ARM_SMMU_EVTQ_CONS, EVTQ_ENT_DWORDS,
+            "evtq");
+    if ( ret )
+        return ret;
+
+    /* priq */
+    if ( !(smmu->features & ARM_SMMU_FEAT_PRI) )
+        return 0;
+
+    return arm_smmu_init_one_queue(smmu, &smmu->priq.q, ARM_SMMU_PRIQ_PROD,
+            ARM_SMMU_PRIQ_CONS, PRIQ_ENT_DWORDS,
+            "priq");
+}
+
+static int arm_smmu_init_l1_strtab(struct arm_smmu_device *smmu)
+{
+    unsigned int i;
+    struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+    size_t size = sizeof(*cfg->l1_desc) * cfg->num_l1_ents;
+    void *strtab = smmu->strtab_cfg.strtab;
+
+    cfg->l1_desc = _xzalloc(size, sizeof(void *));
+    if ( !cfg->l1_desc )
+    {
+        dev_err(smmu->dev, "failed to allocate l1 stream table desc\n");
+        return -ENOMEM;
+    }
+
+    for ( i = 0; i < cfg->num_l1_ents; ++i )
+    {
+        arm_smmu_write_strtab_l1_desc(strtab, &cfg->l1_desc[i]);
+        strtab += STRTAB_L1_DESC_DWORDS << 3;
+    }
+
+    return 0;
+}
+
+static int arm_smmu_init_strtab_2lvl(struct arm_smmu_device *smmu)
+{
+    void *strtab;
+    u64 reg;
+    u32 size, l1size;
+    struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+    /* Calculate the L1 size, capped to the SIDSIZE. */
+    size = STRTAB_L1_SZ_SHIFT - (ilog2(STRTAB_L1_DESC_DWORDS) + 3);
+    size = min(size, smmu->sid_bits - STRTAB_SPLIT);
+    cfg->num_l1_ents = 1 << size;
+
+    size += STRTAB_SPLIT;
+    if ( size < smmu->sid_bits )
+        dev_warn(smmu->dev,
+                "2-level strtab only covers %u/%u bits of SID\n",
+                size, smmu->sid_bits);
+
+    l1size = cfg->num_l1_ents * (STRTAB_L1_DESC_DWORDS << 3);
+    strtab = dmam_alloc_coherent(l1size, &cfg->strtab_dma);
+    if ( !strtab )
+    {
+        dev_err(smmu->dev,
+                "failed to allocate l1 stream table (%u bytes)\n",
+                l1size);
+        return -ENOMEM;
+    }
+    cfg->strtab = strtab;
+
+    /* Configure strtab_base_cfg for 2 levels */
+    reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_2LVL);
+    reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, size);
+    reg |= FIELD_PREP(STRTAB_BASE_CFG_SPLIT, STRTAB_SPLIT);
+    cfg->strtab_base_cfg = reg;
+
+    return arm_smmu_init_l1_strtab(smmu);
+}
+
+static int arm_smmu_init_strtab_linear(struct arm_smmu_device *smmu)
+{
+    void *strtab;
+    u64 reg;
+    u32 size;
+    struct arm_smmu_strtab_cfg *cfg = &smmu->strtab_cfg;
+
+    size = (1 << smmu->sid_bits) * (STRTAB_STE_DWORDS << 3);
+    strtab = dmam_alloc_coherent(size, &cfg->strtab_dma);
+    if ( !strtab )
+    {
+        dev_err(smmu->dev,
+                "failed to allocate linear stream table (%u bytes)\n",
+                size);
+        return -ENOMEM;
+    }
+    cfg->strtab = strtab;
+    cfg->num_l1_ents = 1 << smmu->sid_bits;
+
+    /* Configure strtab_base_cfg for a linear table covering all SIDs */
+    reg  = FIELD_PREP(STRTAB_BASE_CFG_FMT, STRTAB_BASE_CFG_FMT_LINEAR);
+    reg |= FIELD_PREP(STRTAB_BASE_CFG_LOG2SIZE, smmu->sid_bits);
+    cfg->strtab_base_cfg = reg;
+
+    arm_smmu_init_bypass_stes(strtab, cfg->num_l1_ents);
+    return 0;
+}
+
+static int arm_smmu_init_strtab(struct arm_smmu_device *smmu)
+{
+    u64 reg;
+    int ret;
+
+    if ( smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB )
+        ret = arm_smmu_init_strtab_2lvl(smmu);
+    else
+        ret = arm_smmu_init_strtab_linear(smmu);
+
+    if ( ret )
+        return ret;
+
+    /* Set the strtab base address */
+    reg  = smmu->strtab_cfg.strtab_dma & STRTAB_BASE_ADDR_MASK;
+    reg |= STRTAB_BASE_RA;
+    smmu->strtab_cfg.strtab_base = reg;
+
+    /* Allocate the first VMID for stage-2 bypass STEs */
+    set_bit(0, smmu->vmid_map);
+    return 0;
+}
+
+static int arm_smmu_init_structures(struct arm_smmu_device *smmu)
+{
+    int ret;
+
+    ret = arm_smmu_init_queues(smmu);
+    if ( ret )
+        return ret;
+
+    return arm_smmu_init_strtab(smmu);
+}
+
+static int arm_smmu_write_reg_sync(struct arm_smmu_device *smmu, u32 val,
+                                   unsigned int reg_off, unsigned int ack_off)
+{
+    u32 reg;
+
+    writel_relaxed(val, smmu->base + reg_off);
+    return readl_relaxed_poll_timeout(smmu->base + ack_off, reg, reg == val,
+            1, ARM_SMMU_POLL_TIMEOUT_US);
+}
+
+/* GBPA is "special" */
+static int arm_smmu_update_gbpa(struct arm_smmu_device *smmu, u32 set, u32 clr)
+{
+    int ret;
+    u32 reg, __iomem *gbpa = smmu->base + ARM_SMMU_GBPA;
+
+    ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+            1, ARM_SMMU_POLL_TIMEOUT_US);
+    if ( ret )
+        return ret;
+
+    reg &= ~clr;
+    reg |= set;
+    writel_relaxed(reg | GBPA_UPDATE, gbpa);
+    ret = readl_relaxed_poll_timeout(gbpa, reg, !(reg & GBPA_UPDATE),
+            1, ARM_SMMU_POLL_TIMEOUT_US);
+
+    if ( ret )
+        dev_err(smmu->dev, "GBPA not responding to update\n");
+    return ret;
+}
+
+static void arm_smmu_setup_unique_irqs(struct arm_smmu_device *smmu)
+{
+    int irq, ret;
+
+    /* Request interrupt lines */
+    irq = smmu->evtq.q.irq;
+    if ( irq )
+    {
+        irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
+        ret = request_irq(irq, 0, arm_smmu_evtq_irq_tasklet,
+                          "arm-smmu-v3-evtq", smmu);
+        if ( ret < 0 )
+            dev_warn(smmu->dev, "failed to enable evtq irq\n");
+    }
+    else
+    {
+        dev_warn(smmu->dev, "no evtq irq - events will not be reported!\n");
+    }
+
+    irq = smmu->gerr_irq;
+    if ( irq )
+    {
+        irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
+        ret = request_irq(irq, 0, arm_smmu_gerror_handler,
+                          "arm-smmu-v3-gerror", smmu);
+        if ( ret < 0 )
+            dev_warn(smmu->dev, "failed to enable gerror irq\n");
+    }
+    else
+    {
+        dev_warn(smmu->dev, "no gerr irq - errors will not be reported!\n");
+    }
+
+    if ( smmu->features & ARM_SMMU_FEAT_PRI )
+    {
+        irq = smmu->priq.q.irq;
+        if ( irq )
+        {
+            irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
+            ret = request_irq(irq, 0, arm_smmu_priq_irq_tasklet,
+                              "arm-smmu-v3-priq", smmu);
+            if ( ret < 0 )
+                dev_warn(smmu->dev,
+                        "failed to enable priq irq\n");
+        }
+        else
+        {
+            dev_warn(smmu->dev, "no priq irq - PRI will be broken\n");
+        }
+    }
+}
+
+static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
+{
+    int ret, irq;
+    u32 irqen_flags = IRQ_CTRL_EVTQ_IRQEN | IRQ_CTRL_GERROR_IRQEN;
+
+    /* Disable IRQs first */
+    ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_IRQ_CTRL,
+            ARM_SMMU_IRQ_CTRLACK);
+    if ( ret )
+    {
+        dev_err(smmu->dev, "failed to disable irqs\n");
+        return ret;
+    }
+
+    irq = smmu->combined_irq;
+    if ( irq )
+    {
+        /*
+         * Cavium ThunderX2 implementation doesn't support unique irq
+         * lines. Use a single irq line for all the SMMUv3 interrupts.
+         */
+        irq_set_type(irq, IRQ_TYPE_EDGE_BOTH);
+        ret = request_irq(irq, 0, arm_smmu_combined_irq_handler,
+                          "arm-smmu-v3-combined-irq", smmu);
+        if ( ret < 0 )
+            dev_warn(smmu->dev, "failed to enable combined irq\n");
+    }
+    else
+        arm_smmu_setup_unique_irqs(smmu);
+
+    if ( smmu->features & ARM_SMMU_FEAT_PRI )
+        irqen_flags |= IRQ_CTRL_PRIQ_IRQEN;
+
+    /* Enable interrupt generation on the SMMU */
+    ret = arm_smmu_write_reg_sync(smmu, irqen_flags,
+            ARM_SMMU_IRQ_CTRL, ARM_SMMU_IRQ_CTRLACK);
+    if ( ret )
+        dev_warn(smmu->dev, "failed to enable irqs\n");
+
+    return 0;
+}
+
+static int arm_smmu_device_disable(struct arm_smmu_device *smmu)
+{
+    int ret;
+
+    ret = arm_smmu_write_reg_sync(smmu, 0, ARM_SMMU_CR0, ARM_SMMU_CR0ACK);
+    if ( ret )
+        dev_err(smmu->dev, "failed to clear cr0\n");
+
+    return ret;
+}
+
+static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
+{
+    int ret;
+    u32 reg, enables;
+    struct arm_smmu_cmdq_ent cmd;
+
+    /* Clear CR0 and sync (disables SMMU and queue processing) */
+    reg = readl_relaxed(smmu->base + ARM_SMMU_CR0);
+    if ( reg & CR0_SMMUEN )
+    {
+        dev_warn(smmu->dev, "SMMU currently enabled! Resetting...\n");
+        WARN_ON(!disable_bypass);
+        arm_smmu_update_gbpa(smmu, GBPA_ABORT, 0);
+    }
+
+    ret = arm_smmu_device_disable(smmu);
+    if ( ret )
+        return ret;
+
+    /* CR1 (table and queue memory attributes) */
+    reg = FIELD_PREP(CR1_TABLE_SH, ARM_SMMU_SH_ISH) |
+        FIELD_PREP(CR1_TABLE_OC, CR1_CACHE_WB) |
+        FIELD_PREP(CR1_TABLE_IC, CR1_CACHE_WB) |
+        FIELD_PREP(CR1_QUEUE_SH, ARM_SMMU_SH_ISH) |
+        FIELD_PREP(CR1_QUEUE_OC, CR1_CACHE_WB) |
+        FIELD_PREP(CR1_QUEUE_IC, CR1_CACHE_WB);
+    writel_relaxed(reg, smmu->base + ARM_SMMU_CR1);
+
+    /* CR2 (miscellaneous configuration) */
+    reg = CR2_PTM | CR2_RECINVSID | CR2_E2H;
+    writel_relaxed(reg, smmu->base + ARM_SMMU_CR2);
+
+    /* Stream table */
+    writeq_relaxed(smmu->strtab_cfg.strtab_base,
+            smmu->base + ARM_SMMU_STRTAB_BASE);
+    writel_relaxed(smmu->strtab_cfg.strtab_base_cfg,
+            smmu->base + ARM_SMMU_STRTAB_BASE_CFG);
+
+    /* Command queue */
+    writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);
+    writel_relaxed(smmu->cmdq.q.llq.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
+    writel_relaxed(smmu->cmdq.q.llq.cons, smmu->base + ARM_SMMU_CMDQ_CONS);
+
+    enables = CR0_CMDQEN;
+    ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+            ARM_SMMU_CR0ACK);
+    if ( ret )
+    {
+        dev_err(smmu->dev, "failed to enable command queue\n");
+        return ret;
+    }
+
+    /* Invalidate any cached configuration */
+    cmd.opcode = CMDQ_OP_CFGI_ALL;
+    arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+    arm_smmu_cmdq_issue_sync(smmu);
+
+    /* Invalidate any stale TLB entries */
+    if ( smmu->features & ARM_SMMU_FEAT_HYP )
+    {
+        cmd.opcode = CMDQ_OP_TLBI_EL2_ALL;
+        arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+    }
+
+    cmd.opcode = CMDQ_OP_TLBI_NSNH_ALL;
+    arm_smmu_cmdq_issue_cmd(smmu, &cmd);
+    arm_smmu_cmdq_issue_sync(smmu);
+
+    /* Event queue */
+    writeq_relaxed(smmu->evtq.q.q_base, smmu->base + ARM_SMMU_EVTQ_BASE);
+    writel_relaxed(smmu->evtq.q.llq.prod,
+            arm_smmu_page1_fixup(ARM_SMMU_EVTQ_PROD, smmu));
+    writel_relaxed(smmu->evtq.q.llq.cons,
+            arm_smmu_page1_fixup(ARM_SMMU_EVTQ_CONS, smmu));
+
+    enables |= CR0_EVTQEN;
+    ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+            ARM_SMMU_CR0ACK);
+    if ( ret )
+    {
+        dev_err(smmu->dev, "failed to enable event queue\n");
+        return ret;
+    }
+
+    /* PRI queue */
+    if ( smmu->features & ARM_SMMU_FEAT_PRI )
+    {
+        writeq_relaxed(smmu->priq.q.q_base,
+                smmu->base + ARM_SMMU_PRIQ_BASE);
+        writel_relaxed(smmu->priq.q.llq.prod,
+                arm_smmu_page1_fixup(ARM_SMMU_PRIQ_PROD, smmu));
+        writel_relaxed(smmu->priq.q.llq.cons,
+                arm_smmu_page1_fixup(ARM_SMMU_PRIQ_CONS, smmu));
+
+        enables |= CR0_PRIQEN;
+        ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+                ARM_SMMU_CR0ACK);
+        if ( ret )
+        {
+            dev_err(smmu->dev, "failed to enable PRI queue\n");
+            return ret;
+        }
+    }
+
+    if ( smmu->features & ARM_SMMU_FEAT_ATS )
+    {
+        enables |= CR0_ATSCHK;
+        ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+                ARM_SMMU_CR0ACK);
+        if ( ret )
+        {
+            dev_err(smmu->dev, "failed to enable ATS check\n");
+            return ret;
+        }
+    }
+
+    ret = arm_smmu_setup_irqs(smmu);
+    if ( ret )
+    {
+        dev_err(smmu->dev, "failed to setup irqs\n");
+        return ret;
+    }
+
+    /* Initialise tasklets for threaded IRQs */
+    tasklet_init(&smmu->evtq_irq_tasklet, arm_smmu_evtq_thread, smmu);
+    tasklet_init(&smmu->priq_irq_tasklet, arm_smmu_priq_thread, smmu);
+    tasklet_init(&smmu->combined_irq_tasklet, arm_smmu_combined_irq_thread,
+                 smmu);
+
+    /* Enable the SMMU interface, or ensure bypass */
+    if ( !bypass || disable_bypass )
+    {
+        enables |= CR0_SMMUEN;
+    }
+    else
+    {
+        ret = arm_smmu_update_gbpa(smmu, 0, GBPA_ABORT);
+        if ( ret )
+            return ret;
+    }
+    ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
+            ARM_SMMU_CR0ACK);
+    if ( ret )
+    {
+        dev_err(smmu->dev, "failed to enable SMMU interface\n");
+        return ret;
+    }
+
+    return 0;
+}
+
+static int arm_smmu_device_hw_probe(struct arm_smmu_device *smmu)
+{
+    u32 reg;
+    bool coherent = smmu->features & ARM_SMMU_FEAT_COHERENCY;
+
+    /* IDR0 */
+    reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
+
+    /* 2-level structures */
+    if ( FIELD_GET(IDR0_ST_LVL, reg) == IDR0_ST_LVL_2LVL )
+        smmu->features |= ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+    /* Boolean feature flags */
+    if ( reg & IDR0_PRI )
+        smmu->features |= ARM_SMMU_FEAT_PRI;
+
+    if ( reg & IDR0_ATS )
+        smmu->features |= ARM_SMMU_FEAT_ATS;
+
+    if ( reg & IDR0_SEV )
+        smmu->features |= ARM_SMMU_FEAT_SEV;
+
+    if ( reg & IDR0_HYP )
+        smmu->features |= ARM_SMMU_FEAT_HYP;
+
+    /*
+     * The coherency feature as set by FW is used in preference to the ID
+     * register, but warn on mismatch.
+     */
+    if ( !!(reg & IDR0_COHACC) != coherent )
+        dev_warn(smmu->dev, "IDR0.COHACC overridden by FW configuration (%s)\n",
+                coherent ? "true" : "false");
+
+    if ( reg & IDR0_S2P )
+        smmu->features |= ARM_SMMU_FEAT_TRANS_S2;
+
+    if ( !(reg & IDR0_S2P) )
+    {
+        dev_err(smmu->dev, "no translation support!\n");
+        return -ENXIO;
+    }
+
+    /* We only support the AArch64 table format at present */
+    switch ( FIELD_GET(IDR0_TTF, reg) )
+    {
+    case IDR0_TTF_AARCH32_64:
+        smmu->ias = 40;
+        /* Fallthrough */
+    case IDR0_TTF_AARCH64:
+        break;
+    default:
+        dev_err(smmu->dev, "AArch64 table format not supported!\n");
+        return -ENXIO;
+    }
+
+    /* VMID sizes */
+    smmu->vmid_bits = reg & IDR0_VMID16 ? 16 : 8;
+
+    /* IDR1 */
+    reg = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
+    if ( reg & (IDR1_TABLES_PRESET | IDR1_QUEUES_PRESET | IDR1_REL) )
+    {
+        dev_err(smmu->dev, "embedded implementation not supported\n");
+        return -ENXIO;
+    }
+
+    /* Queue sizes, capped to ensure natural alignment */
+    smmu->cmdq.q.llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT,
+            FIELD_GET(IDR1_CMDQS, reg));
+    if ( !smmu->cmdq.q.llq.max_n_shift )
+    {
+        /* Odd alignment restrictions on the base, so ignore for now */
+        dev_err(smmu->dev, "unit-length command queue not supported\n");
+        return -ENXIO;
+    }
+
+    smmu->evtq.q.llq.max_n_shift = min_t(u32, EVTQ_MAX_SZ_SHIFT,
+            FIELD_GET(IDR1_EVTQS, reg));
+    smmu->priq.q.llq.max_n_shift = min_t(u32, PRIQ_MAX_SZ_SHIFT,
+            FIELD_GET(IDR1_PRIQS, reg));
+
+    /* SID sizes */
+    smmu->sid_bits = FIELD_GET(IDR1_SIDSIZE, reg);
+
+    /*
+     * If the SMMU supports fewer bits than would fill a single L2 stream
+     * table, use a linear table instead.
+     */
+    if ( smmu->sid_bits <= STRTAB_SPLIT )
+        smmu->features &= ~ARM_SMMU_FEAT_2_LVL_STRTAB;
+
+    /* IDR5 */
+    reg = readl_relaxed(smmu->base + ARM_SMMU_IDR5);
+
+    /* Page sizes */
+    if ( reg & IDR5_GRAN64K )
+        smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
+    if ( reg & IDR5_GRAN16K )
+        smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
+    if ( reg & IDR5_GRAN4K )
+        smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+
+    /* Output address size */
+    switch ( FIELD_GET(IDR5_OAS, reg) )
+    {
+    case IDR5_OAS_32_BIT:
+        smmu->oas = 32;
+        break;
+    case IDR5_OAS_36_BIT:
+        smmu->oas = 36;
+        break;
+    case IDR5_OAS_40_BIT:
+        smmu->oas = 40;
+        break;
+    case IDR5_OAS_42_BIT:
+        smmu->oas = 42;
+        break;
+    case IDR5_OAS_44_BIT:
+        smmu->oas = 44;
+        break;
+    case IDR5_OAS_52_BIT:
+        smmu->oas = 52;
+        smmu->pgsize_bitmap |= 1ULL << 42; /* 4TB */
+        break;
+    default:
+        dev_info(smmu->dev,
+                 "unknown output address size. Truncating to 48-bit\n");
+        /* Fallthrough */
+    case IDR5_OAS_48_BIT:
+        smmu->oas = 48;
+    }
+
+    smmu->ias = max(smmu->ias, smmu->oas);
+
+    dev_info(smmu->dev, "ias %lu-bit, oas %lu-bit (features 0x%08x)\n",
+            smmu->ias, smmu->oas, smmu->features);
+    return 0;
+}
+
+static int arm_smmu_device_dt_probe(struct device *dev,
+                                    struct arm_smmu_device *smmu)
+{
+    u32 cells;
+    int ret = -EINVAL;
+
+    if ( !dt_property_read_u32(dev->of_node, "#iommu-cells", &cells) )
+        dev_err(dev, "missing #iommu-cells property\n");
+    else if ( cells != 1 )
+        dev_err(dev, "invalid #iommu-cells value (%d)\n", cells);
+    else
+        ret = 0;
+
+    parse_driver_options(smmu);
+
+    return ret;
+}
+
+static unsigned long arm_smmu_resource_size(struct arm_smmu_device *smmu)
+{
+    if ( smmu->options & ARM_SMMU_OPT_PAGE0_REGS_ONLY )
+        return SZ_64K;
+    else
+        return SZ_128K;
+}
+
+static int platform_get_irq_byname(struct device *dev, const char *name)
+{
+    int ret = 0;
+    const struct dt_property *dtprop;
+    struct dt_irq irq;
+    struct dt_device_node *np  = dev_to_dt(dev);
+
+    dtprop = dt_find_property(np, "interrupt-names", NULL);
+    if ( !dtprop )
+    {
+        dev_err(dev, "SMMUv3: can't find 'interrupt-names' property\n");
+        return -EINVAL;
+    }
+
+    if ( dtprop->value != NULL )
+    {
+        dev_info(dev, "SMMUv3: DT value = %s\n", (char *)dtprop->value);
+        ret = dt_device_get_irq(np, 0, &irq);
+        if ( !ret )
+        {
+            return irq.irq;
+        }
+    }
+
+    return ret;
+}
+
+/* Start of Xen specific code. */
+
+static int arm_smmu_device_probe(struct device *dev)
+{
+    int irq, ret;
+    paddr_t ioaddr, iosize;
+    struct arm_smmu_device *smmu;
+    bool bypass;
+
+    smmu = xzalloc(struct arm_smmu_device);
+    if ( !smmu )
+    {
+        dev_err(dev, "failed to allocate arm_smmu_device\n");
+        return -ENOMEM;
+    }
+
+    smmu->dev = dev;
+
+    ret = arm_smmu_device_dt_probe(dev, smmu);
+
+    /* Set bypass mode according to firmware probing result */
+    bypass = !!ret;
+
+    /* Base address */
+    ret = dt_device_get_address(dev_to_dt(dev), 0, &ioaddr, &iosize);
+    if ( ret )
+        return -ENODEV;
+
+    if ( iosize < arm_smmu_resource_size(smmu) )
+    {
+        dev_err(dev, "MMIO region too small (%lx)\n", iosize);
+        return -EINVAL;
+    }
+
+    smmu->base = ioremap_nocache(ioaddr, iosize);
+    if ( IS_ERR(smmu->base) )
+    {
+        dev_err(dev, "ioremap failed (addr 0x%"PRIx64" size 0x%"PRIx64")\n",
+                ioaddr, iosize);
+        return PTR_ERR(smmu->base);
+    }
+
+    if ( iosize > SZ_64K )
+    {
+        smmu->page1 = ioremap_nocache(ioaddr + SZ_64K, ARM_SMMU_REG_SZ);
+        if ( IS_ERR(smmu->page1) )
+            return PTR_ERR(smmu->page1);
+    }
+    else
+    {
+        smmu->page1 = smmu->base;
+    }
+
+    /* Interrupt lines */
+    irq = platform_get_irq_byname(dev, "combined");
+    if ( irq > 0 )
+    {
+        smmu->combined_irq = irq;
+    }
+    else
+    {
+        irq = platform_get_irq_byname(dev, "eventq");
+        if ( irq > 0 )
+            smmu->evtq.q.irq = irq;
+
+        irq = platform_get_irq_byname(dev, "priq");
+        if ( irq > 0 )
+            smmu->priq.q.irq = irq;
+
+        irq = platform_get_irq_byname(dev, "gerror");
+        if ( irq > 0 )
+            smmu->gerr_irq = irq;
+    }
+
+    /* Probe the h/w */
+    ret = arm_smmu_device_hw_probe(smmu);
+    if ( ret )
+        return ret;
+
+    /* Initialise in-memory data structures */
+    ret = arm_smmu_init_structures(smmu);
+    if ( ret )
+        return ret;
+
+    /* Reset the device */
+    ret = arm_smmu_device_reset(smmu, bypass);
+    if ( ret )
+        return ret;
+
+    /*
+     * Keep a list of all probed devices. This will be used to query
+     * the smmu devices based on the fwnode.
+     */
+    INIT_LIST_HEAD(&smmu->devices);
+
+    spin_lock(&arm_smmu_devices_lock);
+    list_add(&smmu->devices, &arm_smmu_devices);
+    spin_unlock(&arm_smmu_devices_lock);
+
+    return 0;
+}
+
+static int __must_check arm_smmu_iotlb_flush_all(struct domain *d)
+{
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+    struct iommu_domain *io_domain;
+
+    spin_lock(&xen_domain->lock);
+
+    list_for_each_entry( io_domain, &xen_domain->contexts, list )
+    {
+        /*
+         * Only invalidate the context when SMMU is present.
+         * This is because the context initialization is delayed
+         * until a master has been added.
+         */
+        if ( unlikely(!ACCESS_ONCE(to_smmu_domain(io_domain)->smmu)) )
+            continue;
+
+        arm_smmu_tlb_inv_context(to_smmu_domain(io_domain));
+    }
+
+    spin_unlock(&xen_domain->lock);
+
+    return 0;
+}
+
+static int __must_check arm_smmu_iotlb_flush(struct domain *d, dfn_t dfn,
+                                             unsigned long page_count,
+                                             unsigned int flush_flags)
+{
+    return arm_smmu_iotlb_flush_all(d);
+}
+
+static struct arm_smmu_device *arm_smmu_get_by_dev(struct device *dev)
+{
+    struct arm_smmu_device *smmu = NULL;
+
+    spin_lock(&arm_smmu_devices_lock);
+    list_for_each_entry( smmu, &arm_smmu_devices, devices )
+    {
+        if ( smmu->dev  == dev )
+        {
+            spin_unlock(&arm_smmu_devices_lock);
+            return smmu;
+        }
+    }
+    spin_unlock(&arm_smmu_devices_lock);
+
+    return NULL;
+}
+
+/* Probing and initialisation functions */
+static struct iommu_domain *arm_smmu_get_domain(struct domain *d,
+                                                struct device *dev)
+{
+    struct iommu_domain *io_domain;
+    struct arm_smmu_domain *smmu_domain;
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+    struct arm_smmu_device *smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
+
+    if ( !smmu )
+        return NULL;
+
+    /*
+     * Loop through the xen_domain->contexts list to locate a context
+     * assigned to this SMMU.
+     */
+    list_for_each_entry( io_domain, &xen_domain->contexts, list )
+    {
+        smmu_domain = to_smmu_domain(io_domain);
+        if ( smmu_domain->smmu == smmu )
+            return io_domain;
+    }
+
+    return NULL;
+}
+
+static void arm_smmu_destroy_iommu_domain(struct iommu_domain *io_domain)
+{
+    list_del(&io_domain->list);
+    arm_smmu_domain_free(io_domain);
+}
+
+static int arm_smmu_assign_dev(struct domain *d, u8 devfn,
+                               struct device *dev, u32 flag)
+{
+    int ret = 0;
+    struct iommu_domain *io_domain;
+    struct arm_smmu_domain *smmu_domain;
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+
+    if ( !dev->archdata.iommu )
+    {
+        dev->archdata.iommu = xzalloc(struct arm_smmu_xen_device);
+        if ( !dev->archdata.iommu )
+            return -ENOMEM;
+    }
+
+    spin_lock(&xen_domain->lock);
+
+    /*
+     * Check to see if an iommu_domain already exists for this xen domain
+     * under the same SMMU
+     */
+    io_domain = arm_smmu_get_domain(d, dev);
+    if ( !io_domain )
+    {
+        io_domain = arm_smmu_domain_alloc();
+        if ( !io_domain )
+        {
+            ret = -ENOMEM;
+            goto out;
+        }
+
+        smmu_domain = to_smmu_domain(io_domain);
+        smmu_domain->s2_cfg.domain = d;
+
+        /* Chain the new context to the domain */
+        list_add(&io_domain->list, &xen_domain->contexts);
+    }
+
+    ret = arm_smmu_attach_dev(io_domain, dev);
+    if ( ret )
+    {
+        if ( atomic_read(&io_domain->ref) == 0 )
+            arm_smmu_destroy_iommu_domain(io_domain);
+    }
+    else
+    {
+        atomic_inc(&io_domain->ref);
+    }
+
+out:
+    spin_unlock(&xen_domain->lock);
+    return ret;
+}
+
+static int arm_smmu_deassign_dev(struct domain *d, struct device *dev)
+{
+    struct iommu_domain *io_domain = arm_smmu_get_domain(d, dev);
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+    struct arm_smmu_domain *arm_smmu = to_smmu_domain(io_domain);
+    struct arm_smmu_master *master = dev_iommu_priv_get(dev);
+
+    if ( !arm_smmu || arm_smmu->s2_cfg.domain != d )
+    {
+        dev_err(dev, "not attached to domain %d\n", d->domain_id);
+        return -ESRCH;
+    }
+
+    spin_lock(&xen_domain->lock);
+
+    arm_smmu_detach_dev(master);
+    atomic_dec(&io_domain->ref);
+
+    if ( atomic_read(&io_domain->ref) == 0 )
+        arm_smmu_destroy_iommu_domain(io_domain);
+
+    spin_unlock(&xen_domain->lock);
+
+    return 0;
+}
+
+static int arm_smmu_reassign_dev(struct domain *s, struct domain *t,
+                                 u8 devfn,  struct device *dev)
+{
+    int ret = 0;
+
+    /* Don't allow remapping on other domain than hwdom */
+    if ( t && t != hardware_domain )
+        return -EPERM;
+
+    if ( t == s )
+        return 0;
+
+    ret = arm_smmu_deassign_dev(s, dev);
+    if ( ret )
+        return ret;
+
+    if ( t )
+    {
+        /* No flags are defined for ARM. */
+        ret = arm_smmu_assign_dev(t, devfn, dev, 0);
+        if ( ret )
+            return ret;
+    }
+
+    return 0;
+}
+
+static int arm_smmu_iommu_xen_domain_init(struct domain *d)
+{
+    struct arm_smmu_xen_domain *xen_domain;
+
+    xen_domain = xzalloc(struct arm_smmu_xen_domain);
+    if ( !xen_domain )
+        return -ENOMEM;
+
+    spin_lock_init(&xen_domain->lock);
+    INIT_LIST_HEAD(&xen_domain->contexts);
+
+    dom_iommu(d)->arch.priv = xen_domain;
+
+    return 0;
+}
+
+static void __hwdom_init arm_smmu_iommu_hwdom_init(struct domain *d)
+{
+}
+
+static void arm_smmu_iommu_xen_domain_teardown(struct domain *d)
+{
+    struct arm_smmu_xen_domain *xen_domain = dom_iommu(d)->arch.priv;
+
+    ASSERT(list_empty(&xen_domain->contexts));
+    xfree(xen_domain);
+}
+
+static int arm_smmu_dt_xlate(struct device *dev,
+                             const struct dt_phandle_args *args)
+{
+    int ret;
+    struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
+
+    ret = iommu_fwspec_add_ids(dev, args->args, 1);
+    if ( ret )
+        return ret;
+
+    if ( dt_device_is_protected(dev_to_dt(dev)) )
+    {
+        dev_err(dev, "Already added to SMMUv3\n");
+        return -EEXIST;
+    }
+
+    /* Let Xen know that the master device is protected by an IOMMU. */
+    dt_device_set_protected(dev_to_dt(dev));
+
+    dev_info(dev, "Added master device (SMMUv3 %s StreamIds %u)\n",
+            dev_name(fwspec->iommu_dev), fwspec->num_ids);
+
+    return 0;
+}
+
+static int arm_smmu_add_device(u8 devfn, struct device *dev)
+{
+    int i, ret;
+    struct arm_smmu_device *smmu;
+    struct arm_smmu_master *master;
+    struct iommu_fwspec *fwspec;
+
+    fwspec = dev_iommu_fwspec_get(dev);
+    if ( !fwspec )
+        return -ENODEV;
+
+    smmu = arm_smmu_get_by_dev(fwspec->iommu_dev);
+    if ( !smmu )
+        return -ENODEV;
+
+    master = xzalloc(struct arm_smmu_master);
+    if ( !master )
+        return -ENOMEM;
+
+    master->dev = dev;
+    master->smmu = smmu;
+    master->sids = fwspec->ids;
+    master->num_sids = fwspec->num_ids;
+
+    dev_iommu_priv_set(dev, master);
+
+    /* Check the SIDs are in range of the SMMU and our stream table */
+    for ( i = 0; i < master->num_sids; i++ )
+    {
+        u32 sid = master->sids[i];
+
+        if ( !arm_smmu_sid_in_range(smmu, sid) )
+        {
+            ret = -ERANGE;
+            goto err_free_master;
+        }
+
+        /* Ensure l2 strtab is initialised */
+        if ( smmu->features & ARM_SMMU_FEAT_2_LVL_STRTAB )
+        {
+            ret = arm_smmu_init_l2_strtab(smmu, sid);
+            if ( ret )
+                goto err_free_master;
+        }
+    }
+
+    return 0;
+
+err_free_master:
+    xfree(master);
+    dev_iommu_priv_set(dev, NULL);
+    return ret;
+}
+
+static const struct iommu_ops arm_smmu_iommu_ops = {
+    .init = arm_smmu_iommu_xen_domain_init,
+    .hwdom_init = arm_smmu_iommu_hwdom_init,
+    .teardown = arm_smmu_iommu_xen_domain_teardown,
+    .iotlb_flush = arm_smmu_iotlb_flush,
+    .iotlb_flush_all = arm_smmu_iotlb_flush_all,
+    .assign_device = arm_smmu_assign_dev,
+    .reassign_device = arm_smmu_reassign_dev,
+    .map_page = arm_iommu_map_page,
+    .unmap_page = arm_iommu_unmap_page,
+    .dt_xlate = arm_smmu_dt_xlate,
+    .add_device = arm_smmu_add_device,
+};
+
+static const struct dt_device_match arm_smmu_of_match[] = {
+    { .compatible = "arm,smmu-v3", },
+    { },
+};
+
+static __init int arm_smmu_dt_init(struct dt_device_node *dev,
+                                   const void *data)
+{
+    int rc;
+
+    /*
+     * Even if the device can't be initialized, we don't want to
+     * give the SMMU device to dom0.
+     */
+    dt_device_set_used_by(dev, DOMID_XEN);
+
+    rc = arm_smmu_device_probe(dt_to_dev(dev));
+    if ( rc )
+        return rc;
+
+    iommu_set_ops(&arm_smmu_iommu_ops);
+    return 0;
+}
+
+DT_DEVICE_START(smmuv3, "ARM SMMU V3", DEVICE_IOMMU)
+    .dt_match = arm_smmu_of_match,
+    .init = arm_smmu_dt_init,
+DT_DEVICE_END
-- 
2.17.1
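The arm_smmu_add_device() path in the patch above allocates a master structure, validates every stream ID, and unwinds the allocation on the error path. A minimal standalone sketch of that validate-then-commit shape (hypothetical structure and function names, not the actual Xen SMMUv3 driver API):

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the SMMU device and master structures. */
struct smmu {
    unsigned int sid_bits;   /* width of the device's stream ID space */
};

struct master {
    const uint32_t *sids;
    size_t num_sids;
};

static int sid_in_range(const struct smmu *smmu, uint32_t sid)
{
    /* A stream ID is valid if it fits within the SMMU's SID space. */
    return sid < (1u << smmu->sid_bits);
}

/*
 * Mirrors the shape of arm_smmu_add_device(): allocate, validate every
 * SID, and free the allocation on failure so no partial state leaks.
 */
static int add_master(const struct smmu *smmu, const uint32_t *sids,
                      size_t num_sids, struct master **out)
{
    struct master *m = calloc(1, sizeof(*m));
    size_t i;

    if (!m)
        return -1;            /* -ENOMEM in the real driver */

    m->sids = sids;
    m->num_sids = num_sids;

    for (i = 0; i < num_sids; i++) {
        if (!sid_in_range(smmu, sids[i])) {
            free(m);          /* matches the err_free_master label */
            return -2;        /* -ERANGE in the real driver */
        }
    }

    *out = m;
    return 0;
}
```

The key property is that nothing observable is published until every SID has been checked, so a failed add leaves no partial state behind.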



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:40:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9705.25546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtk7-0002dr-6i; Tue, 20 Oct 2020 15:40:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9705.25546; Tue, 20 Oct 2020 15:40:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtk7-0002dk-1n; Tue, 20 Oct 2020 15:40:07 +0000
Received: by outflank-mailman (input) for mailman id 9705;
 Tue, 20 Oct 2020 15:40:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V8HF=D3=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kUtk5-0002VV-S7
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:40:05 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0852ae2-13c7-4a31-9846-626071fd9619;
 Tue, 20 Oct 2020 15:40:04 +0000 (UTC)
X-Inumbo-ID: e0852ae2-13c7-4a31-9846-626071fd9619
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603208404;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=6ISmU+JH6ZgpC176exqVC/z/+DWR4Rlaalo9uKhU8Bw=;
  b=ERYbeiK4/fEXAZffRCOO/2HtsL4AlhMJR8nQ1gjKKO+H7wOgTJcN/Z4f
   UxjYraCHppCEKTo+tm20HmOQNAqPKQNcaqcqchtfr58n+ktyLPKmrxqDM
   NHrkYr2PcTDn1HvhZ9F50HVtOBks8LzFtnzw+GDwdI4/fPl9RiPLsobdj
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ty+Xv5mccCX3qSWAgpu110vkMzwbup7wE1qWzSeXT+vIKnmmzjwpV3C1Pa30GbCulyCoJoRiT3
 CGTVI/Vh5uriWlUeGGp0GM/lw9HV/57Tf6obCdTiF10mKWMgQSSi0iaOFtJ4W7CZAq8owe9Wp5
 oMPkllspQrg4C/P4zqnfe9jU3Cuea1t0kRpZ/NHO6BTnqSnojfPe8fMkLkVf0Bn3laC6jxbQkA
 dyfrJHE4ZvBFkEta8WkCyZ8Xb4Z4DMxw8bte4quRNtZOKphHUCYEfFFUQGnOHKx+Dpoxc5/zGX
 tUM=
X-SBRS: None
X-MesageID: 29641228
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,397,1596513600"; 
   d="scan'208";a="29641228"
Date: Tue, 20 Oct 2020 16:39:59 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, "Wei
 Liu" <wl@xen.org>
Subject: Re: [PATCH] libxl: Add suppress-vmdesc to QEMU -machine pc options
Message-ID: <20201020153959.GA2214@perard.uk.xensource.com>
References: <20201019200050.103360-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20201019200050.103360-1-jandryuk@gmail.com>

On Mon, Oct 19, 2020 at 04:00:50PM -0400, Jason Andryuk wrote:
> The device model state saved by QMP xen-save-devices-state doesn't
> include the vmdesc json.  When restoring an HVM, xen-load-devices-state
> always triggers "Expected vmdescription section, but got 0".  This is
> not a problem when restore comes from a file.  However, when QEMU runs
> in a linux stubdom and comes over a console, EOF is not received.  This
> causes a delay restoring - though it does restore.
> 
> Setting suppress-vmdesc skips looking for the vmdesc during restore and
> avoids the wait.
> 
> This is a libxl change for the non-xenfv case to match the xenfv change
> made in QEMU.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
> 
> Should this also add suppress-vmdesc to xenfv for backwards
> compatibility?  In that case, the change in QEMU is redundant.  Since
> this only really matters for the stubdom case, it could be conditioned
> on that.

QEMU doesn't complain about having suppress-vmdesc set on the command
line and as a default for the xenfv machine, so I don't mind adding it
to the xenfv machine in libxl, while keeping the change in QEMU.

The change is already applied to QEMU, so unless there's an issue, I
don't want to revert it. It might be useful for tool stacks that don't
use libxl.

Also, the change matters for non-stubdom cases as well, since it removes
a cryptic error message from qemu-dm's logs :-).

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:44:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9710.25558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtob-0002wX-O2; Tue, 20 Oct 2020 15:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9710.25558; Tue, 20 Oct 2020 15:44:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtob-0002wQ-L0; Tue, 20 Oct 2020 15:44:45 +0000
Received: by outflank-mailman (input) for mailman id 9710;
 Tue, 20 Oct 2020 15:44:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yF9C=D3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUtoZ-0002wL-IE
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:44:43 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb82a996-8857-4b99-8268-92d7f18a9fd0;
 Tue, 20 Oct 2020 15:44:42 +0000 (UTC)
X-Inumbo-ID: cb82a996-8857-4b99-8268-92d7f18a9fd0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603208682;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=QfxXeBR/5loKw62lVjegBNzmhEv6ZY5V35qXRj3FuNg=;
  b=CdgqPmqgqH3oh3xdfqXYsxDI5/YsrkcWFhqDVJ4eWW2CEaexD7IDlvJM
   l0OTWjiDXhwXDhzn3qMw6z8XAyuQUQu99ZuFLkixSxUPSR4cO+J6JdU2D
   7gRW+kUXCvjpNQnxCqyMD3e+FW5HgNnFy7MkNhcc4H/4CgnswZQISUoOu
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: W3gz1SiEe/rT6/AP9VZUGD/Dr42dHiP5xIRBWoSZYRR/sa9C6ZXygAUA8GGL4qLOXtCaICoeZG
 SWC1+1SpHjHAMA3mNjFjeETiT7vKjLNUEgSG1Qx+PQ88U4EyuLVtdjre0onRHB/1AC5agYMHc+
 7gyJOMYHPDQ/RalylnUDMUjKAtfy88OVEDTrUOeDXdIfkQC7M6hnKkOQnmqhGXnhls/mT/ZxIU
 ZiZyHd41kfNvQ0dFD3vNAux1jGk15Re+u3koldGxrsngvFUU79qjbeKHCQlj0sbLXx+fABj6g3
 JN0=
X-SBRS: None
X-MesageID: 29641765
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,397,1596513600"; 
   d="scan'208";a="29641765"
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <03262d2d-7bc2-98a2-3d7f-bc5ab8a69d6b@citrix.com>
Date: Tue, 20 Oct 2020 16:44:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201020152405.26892-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 20/10/2020 16:24, Andrew Cooper wrote:
> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
> is safe from Xen's point of view (as the update only affects guest mappings), and
> the guest is required to flush suitably after making updates.
>
> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
> writeable pagetables, etc.) is an implementation detail outside of the
> API/ABI.
>
> Changes in the paging structure require invalidations in the linear pagetable
> range for subsequent accesses into the linear pagetables to access non-stale
> mappings.  Xen must provide suitable flushing to prevent intermixed guest
> actions from accidentally accessing/modifying the wrong pagetable.
>
> For all L2 and higher modifications, flush the full TLB.  (This could in
> principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
> mechanism exists in practice.)
>
> As this combines with sync_guest for XPTI L4 "shadowing", replace the
> sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
> now always needs to flush, there is no point trying to exclude the current CPU
> from the flush mask.  Use pt_owner->dirty_cpumask directly.
>
> This is XSA-286.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
>
> A couple of minor points.
>
>  * PV guests can create global mappings.  I can't reason any safe way to relax
>    FLUSH_TLB_GLOBAL to just FLUSH_TLB.

Sorry - forgot one of the points here.

We could in principle relax the flush entirely if we know that we're
editing from a not-present to present entry, but plumbing this up
through mod_l?_entry() isn't trivial, and it's also not obvious how
much of an optimisation it would be in practice.

~Andrew

>  * Performance tests are still ongoing, but so far it is faring better than the
>    embargoed alternative.
> ---
>  xen/arch/x86/mm.c | 31 +++++++++++++++----------------
>  1 file changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 918ee2bbe3..a6a7fcb56c 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3883,11 +3883,10 @@ long do_mmu_update(
>      void *va = NULL;
>      unsigned long gpfn, gmfn;
>      struct page_info *page;
> -    unsigned int cmd, i = 0, done = 0, pt_dom;
> +    unsigned int cmd, i = 0, done = 0, pt_dom, flush_flags = 0;
>      struct vcpu *curr = current, *v = curr;
>      struct domain *d = v->domain, *pt_owner = d, *pg_owner;
>      mfn_t map_mfn = INVALID_MFN, mfn;
> -    bool sync_guest = false;
>      uint32_t xsm_needed = 0;
>      uint32_t xsm_checked = 0;
>      int rc = put_old_guest_table(curr);
> @@ -4037,6 +4036,8 @@ long do_mmu_update(
>                          break;
>                      rc = mod_l2_entry(va, l2e_from_intpte(req.val), mfn,
>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
> +                    if ( !rc )
> +                        flush_flags |= FLUSH_TLB_GLOBAL;
>                      break;
>  
>                  case PGT_l3_page_table:
> @@ -4044,6 +4045,8 @@ long do_mmu_update(
>                          break;
>                      rc = mod_l3_entry(va, l3e_from_intpte(req.val), mfn,
>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
> +                    if ( !rc )
> +                        flush_flags |= FLUSH_TLB_GLOBAL;
>                      break;
>  
>                  case PGT_l4_page_table:
> @@ -4051,6 +4054,8 @@ long do_mmu_update(
>                          break;
>                      rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
> +                    if ( !rc )
> +                        flush_flags |= FLUSH_TLB_GLOBAL;
>                      if ( !rc && pt_owner->arch.pv.xpti )
>                      {
>                          bool local_in_use = false;
> @@ -4071,7 +4076,7 @@ long do_mmu_update(
>                               (1 + !!(page->u.inuse.type_info & PGT_pinned) +
>                                mfn_eq(pagetable_get_mfn(curr->arch.guest_table_user),
>                                       mfn) + local_in_use) )
> -                            sync_guest = true;
> +                            flush_flags |= FLUSH_ROOT_PGTBL;
>                      }
>                      break;
>  
> @@ -4173,19 +4178,13 @@ long do_mmu_update(
>      if ( va )
>          unmap_domain_page(va);
>  
> -    if ( sync_guest )
> -    {
> -        /*
> -         * Force other vCPU-s of the affected guest to pick up L4 entry
> -         * changes (if any).
> -         */
> -        unsigned int cpu = smp_processor_id();
> -        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
> -
> -        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
> -        if ( !cpumask_empty(mask) )
> -            flush_mask(mask, FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL);
> -    }
> +    /*
> +     * Flush TLBs if an L2 or higher was changed (invalidates the structure of
> +     * the linear pagetables), or an L4 in use by other CPUs was made (needs
> +     * to resync the XPTI copy of the table).
> +     */
> +    if ( flush_flags )
> +        flush_mask(pt_owner->dirty_cpumask, flush_flags);
>  
>      perfc_add(num_page_updates, i);
>  
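The refactor quoted above replaces the sync_guest boolean with accumulated flush flags, so a single flush at the end covers both the linear-pagetable invalidation and the XPTI root-table resync. A reduced, self-contained sketch of that accumulate-then-flush-once pattern (hypothetical flag values and a stubbed flush function, simplified from the real do_mmu_update() logic):

```c
/* Hypothetical flag values standing in for Xen's FLUSH_* constants. */
#define FLUSH_TLB_GLOBAL  (1u << 0)
#define FLUSH_ROOT_PGTBL  (1u << 1)

/* Record what the (stubbed) flush was asked to do, and how often. */
static unsigned int last_flush_flags;
static unsigned int flush_calls;

static void flush_mask(unsigned int flags)
{
    last_flush_flags = flags;
    flush_calls++;
}

/*
 * Shape of the reworked update loop: each successful update ORs in the
 * flags it needs; one flush at the end services all accumulated needs.
 * levels[i] is the pagetable level touched by update i (simplified:
 * the real code also conditions FLUSH_ROOT_PGTBL on XPTI and L4 usage).
 */
static void process_updates(const int *levels, int n)
{
    unsigned int flush_flags = 0;
    int i;

    for (i = 0; i < n; i++) {
        if (levels[i] >= 2)               /* L2 or higher changed */
            flush_flags |= FLUSH_TLB_GLOBAL;
        if (levels[i] == 4)               /* L4 change: resync XPTI copy */
            flush_flags |= FLUSH_ROOT_PGTBL;
    }

    if (flush_flags)
        flush_mask(flush_flags);          /* one flush covers everything */
}
```

Accumulating flags rather than flushing per update keeps the expensive operation to at most one invocation per hypercall batch.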



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:45:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:45:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9712.25570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtpf-00033i-2B; Tue, 20 Oct 2020 15:45:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9712.25570; Tue, 20 Oct 2020 15:45:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtpe-00033b-VP; Tue, 20 Oct 2020 15:45:50 +0000
Received: by outflank-mailman (input) for mailman id 9712;
 Tue, 20 Oct 2020 15:45:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eGPc=D3=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kUtpd-00033V-Pu
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:45:49 +0000
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65e69686-d3ad-4b14-a7e5-396a2c7da8a1;
 Tue, 20 Oct 2020 15:45:48 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id d24so2529122ljg.10
 for <xen-devel@lists.xenproject.org>; Tue, 20 Oct 2020 08:45:48 -0700 (PDT)
X-Inumbo-ID: 65e69686-d3ad-4b14-a7e5-396a2c7da8a1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=8TPwytybK8AeaSJbvdM4+b3/L13Qn0l/zvc24ImBolw=;
        b=vbegK7uuKxFQyT34ZGZGDoZPnDMvGchrS0YnoWwPQEocg05ivdBqd7epkOnIw31qJh
         cnD1N9D0ZENsEs+Ywyh6ELs5B9SMEFCNBsz+9zU7IKyPNCnj6BX9G2iaIiaLs+XZbhe8
         6jTkQ+omZfrzd7c0S1UJxqkGr1AnBo/kBmzf35ehLLDAILZDyZ804CgFRsLyR6OI57/o
         ntlnM52JsHeGQmz1iqo6UVtWx+zoOdafsb0TWR6I3TQjpBrRRo9u7ULU9qLmDVQKWz2K
         SU01LkIwjJ3xS2Pddx+kiz0zKdZOGMELMaG+8GwO8BzuNNaqGo63XwnKtQtWDvoxtqeM
         vIPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=8TPwytybK8AeaSJbvdM4+b3/L13Qn0l/zvc24ImBolw=;
        b=MbaTXT+pVCkPGHRxaWM81asXTFj6GlOmNijnN4hVrO0T1ErkRDrYBmR1RtfwTTLM57
         5dTvxXVaapVvoSZlDUcvUnQi/b0VPG9gld4uV1tov/gCIwZjcsBxhSdOeVG1sWm6gG32
         vdR3pmBv8VxQnHIQC5nZKC80DaXCNwAfC7C51ndrB8EGEKTdA7FjeKtcAwAideT4qXxs
         OJifSI5KYgkfZSzB+eduLAHWmYZCNk5+CRC+1oOwXLa58XtCVCfVND+Y6b4+KexJ5KfQ
         blad1RH+PuWmg/DIa0paZrRfywuXA1EvP2FMTo6VcBWlP10GiGjZXXIibAZKNOREbLtl
         0ecg==
X-Gm-Message-State: AOAM5318GhGyOuzwyurcVlpPMF9ajYbp3BobKXmLfUIK+EMduTXuscjl
	Oskt40Z/wAcMU86lQUqxucEmg5jT6AUlnzEGEgo=
X-Google-Smtp-Source: ABdhPJwuakbddeRoFEE2qHUpBtIFVeUn+nbJKtYki2xNQ+ErfbwVXEuvC8E+bVDjTonFmYuzC41NeuGimtvTf5eqTjI=
X-Received: by 2002:a2e:96d2:: with SMTP id d18mr1379344ljj.407.1603208747046;
 Tue, 20 Oct 2020 08:45:47 -0700 (PDT)
MIME-Version: 1.0
References: <20201019200050.103360-1-jandryuk@gmail.com> <20201020153959.GA2214@perard.uk.xensource.com>
In-Reply-To: <20201020153959.GA2214@perard.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 20 Oct 2020 11:45:33 -0400
Message-ID: <CAKf6xpsL1-7EKU3-dLnR8oki295OAEq-ZQ7hWq9uzuFN5a_F_Q@mail.gmail.com>
Subject: Re: [PATCH] libxl: Add suppress-vmdesc to QEMU -machine pc options
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Oct 20, 2020 at 11:40 AM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> On Mon, Oct 19, 2020 at 04:00:50PM -0400, Jason Andryuk wrote:
> > The device model state saved by QMP xen-save-devices-state doesn't
> > include the vmdesc json.  When restoring an HVM, xen-load-devices-state
> > always triggers "Expected vmdescription section, but got 0".  This is
> > not a problem when restore comes from a file.  However, when QEMU runs
> > in a linux stubdom and comes over a console, EOF is not received.  This
> > causes a delay restoring - though it does restore.
> >
> > Setting suppress-vmdesc skips looking for the vmdesc during restore and
> > avoids the wait.
> >
> > This is a libxl change for the non-xenfv case to match the xenfv change
> > made in QEMU.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> >
> > Should this also add suppress-vmdesc to xenfv for backwards
> > compatibility?  In that case, the change in QEMU is redundant.  Since
> > this only really matters for the stubdom case, it could be conditioned
> > on that.
>
> QEMU doesn't complain about having suppress-vmdesc set on the command
> line and as a default for the xenfv machine, so I don't mind adding it
> to the xenfv machine in libxl, while keeping the change in QEMU.

Okay.

> The change is already applied to QEMU, so unless there's an issue, I
> don't want to revert it. It might be useful for tool stacks that don't
> use libxl.

Good point about the alternative toolstacks.

> Also, the change matters as well for non-stubdom cases as it removed a
> cryptic error message from qemu-dm's logs :-).

:)

Yes, as I explained for the QEMU change, it is the correct thing to do.
It's just kind of a shame that libxl will need a compatibility change
kept around indefinitely.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 15:48:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 15:48:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9716.25582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtsC-0003FC-GW; Tue, 20 Oct 2020 15:48:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9716.25582; Tue, 20 Oct 2020 15:48:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUtsC-0003F5-Cr; Tue, 20 Oct 2020 15:48:28 +0000
Received: by outflank-mailman (input) for mailman id 9716;
 Tue, 20 Oct 2020 15:48:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oMcx=D3=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kUtsB-0003F0-3j
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 15:48:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 509d2e9e-df56-4c6a-a4f0-2650e554fdd4;
 Tue, 20 Oct 2020 15:48:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93A2FAF6D;
 Tue, 20 Oct 2020 15:48:20 +0000 (UTC)
X-Inumbo-ID: 509d2e9e-df56-4c6a-a4f0-2650e554fdd4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603208900;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4oaI/JO+vm57/aBMG5Zgn752QfMBhOA6ONSbRKRRf4g=;
	b=sCda7TrB3v8Iva7BeCaDPhvaqqCR52KJDNV4IGjQHNnx21EEhFLjbQRaHH9GGIuYZMqQHO
	DPM4rTv4PCBUKxPp+DBjJbHBFjdIOsDvktV5vK1zvBQsK6wsrdFPBkx8mgJWVGAObyu4Ym
	mJst7mhfJYM6fuuUNd2gG1jHDTnQ0oM=
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a50a19ce-321a-ceef-55e4-95ffbebff59d@suse.com>
Date: Tue, 20 Oct 2020 17:48:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201020152405.26892-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.2020 17:24, Andrew Cooper wrote:
> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
> is safe from Xen's point of view (as the update only affects guest mappings), and
> the guest is required to flush suitably after making updates.
> 
> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
> writeable pagetables, etc.) is an implementation detail outside of the
> API/ABI.
> 
> Changes in the paging structure require invalidations in the linear pagetable
> range for subsequent accesses into the linear pagetables to access non-stale
> mappings.  Xen must provide suitable flushing to prevent intermixed guest
> actions from accidentally accessing/modifying the wrong pagetable.
> 
> For all L2 and higher modifications, flush the full TLB.  (This could in
> principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
> mechanism exists in practice.)
> 
> As this combines with sync_guest for XPTI L4 "shadowing", replace the
> sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
> now always needs to flush, there is no point trying to exclude the current CPU
> from the flush mask.  Use pt_owner->dirty_cpumask directly.

Why is there no point? There's no need for the FLUSH_ROOT_PGTBL
part of the flushing on the local CPU. The draft you had sent
earlier looked better in this regard.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 16:05:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 16:05:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9719.25597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUu8S-0005aJ-Um; Tue, 20 Oct 2020 16:05:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9719.25597; Tue, 20 Oct 2020 16:05:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUu8S-0005aC-RZ; Tue, 20 Oct 2020 16:05:16 +0000
Received: by outflank-mailman (input) for mailman id 9719;
 Tue, 20 Oct 2020 16:05:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUu8R-0005a7-Fg
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 16:05:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0066674-b370-435f-90f3-1848e771d391;
 Tue, 20 Oct 2020 16:05:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUu8N-0007GD-Dq; Tue, 20 Oct 2020 16:05:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUu8N-00013y-6g; Tue, 20 Oct 2020 16:05:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUu8N-0000C0-5p; Tue, 20 Oct 2020 16:05:11 +0000
X-Inumbo-ID: e0066674-b370-435f-90f3-1848e771d391
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UkwqXSVcl5UoDV1uyHKquURfedx/97c4bFS7NcF8JSo=; b=sIOXjeZSvhS5WugdivuAQ6lMmB
	KQJNbiSJO6n38OWDcnO8uKDLUWt86vSGV3RrA+ZaGIEgXMwQYsD1A9xRDr6TreOS80mmkgCl/RZpk
	JK0/R5znxjbA8sLGTSaTMCGlgxykMkko+vCSjB275g/7SQXDNN4QTza5mA3HQ20aVefA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156028-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156028: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c41341af76cfc85b5a6c0f87de4838672ab9f89
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 16:05:11 +0000

flight 156028 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156028/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c41341af76cfc85b5a6c0f87de4838672ab9f89
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   61 days
Failing since        152659  2020-08-21 14:07:39 Z   60 days  112 attempts
Testing same since   156028  2020-10-20 12:37:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 48058 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 16:20:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 16:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9722.25612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUuNS-0007LB-IN; Tue, 20 Oct 2020 16:20:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9722.25612; Tue, 20 Oct 2020 16:20:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUuNS-0007L4-FJ; Tue, 20 Oct 2020 16:20:46 +0000
Received: by outflank-mailman (input) for mailman id 9722;
 Tue, 20 Oct 2020 16:20:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yF9C=D3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUuNQ-0007Kz-V6
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 16:20:44 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13b23195-7e38-4067-87d2-de19086ed943;
 Tue, 20 Oct 2020 16:20:43 +0000 (UTC)
X-Inumbo-ID: 13b23195-7e38-4067-87d2-de19086ed943
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603210843;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=go9aa7z/q6dnIKw3qSrHR1pu5Z0g28FmdeD2H5ZgtW0=;
  b=J/M/PgAOpIYfQL2srAFOYFA84TOMQ+SOaqbF+zAQdSeZC9Qh7tq8XK+R
   TAUZMaXljqki46bQNcmVq7iSVPZaSCCFHw3MC0cg+xNetE4v2F7C7WZC6
   mCn3VkcL/G/ksVh/YzXTG64YruXWAs8xK/xrmZ9nlG6i5NgHPMB/+9piS
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Osz6V6MPdWnG4N76+1M9Pamg1UsxHFzkJ1FPj/hlSxPpYf3TYTErz2Ww50IUmM1bgg3Y4bLj4a
 d020G8k+TXK02R/bS6ANSaaQpNzA1l0m1avptX0DSHN73bSGZW0lytLZR+T08mgz+OQIbkxVBT
 iteGRlhhXNTTKo1S6KB19YeH3dDJ7xMOyVMGNEB/Vl2pst4GMKb1nlRmaOrGWaDA9Q9Hij0aHC
 pGVgJfyZ/E0SYB3g6qEeIRPHoLDpJHmaiGFGI6OAlVP7QLRQVDe0vdurAperdIgPFUUMOfRtLv
 eXg=
X-SBRS: None
X-MesageID: 29646463
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,397,1596513600"; 
   d="scan'208";a="29646463"
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
 <a50a19ce-321a-ceef-55e4-95ffbebff59d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c359adee-1826-032b-2d07-c06c545e3b96@citrix.com>
Date: Tue, 20 Oct 2020 17:20:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a50a19ce-321a-ceef-55e4-95ffbebff59d@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 20/10/2020 16:48, Jan Beulich wrote:
> On 20.10.2020 17:24, Andrew Cooper wrote:
>> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
>> is safe from Xen's point of view (as the update only affects guest mappings), and
>> the guest is required to flush suitably after making updates.
>>
>> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
>> writeable pagetables, etc.) is an implementation detail outside of the
>> API/ABI.
>>
>> Changes in the paging structure require invalidations in the linear pagetable
>> range for subsequent accesses into the linear pagetables to access non-stale
>> mappings.  Xen must provide suitable flushing to prevent intermixed guest
>> actions from accidentally accessing/modifying the wrong pagetable.
>>
>> For all L2 and higher modifications, flush the full TLB.  (This could in
>> principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
>> mechanism exists in practice.)
>>
>> As this combines with sync_guest for XPTI L4 "shadowing", replace the
>> sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
>> now always needs to flush, there is no point trying to exclude the current CPU
>> from the flush mask.  Use pt_owner->dirty_cpumask directly.
> Why is there no point? There's no need for the FLUSH_ROOT_PGTBL
> part of the flushing on the local CPU. The draft you had sent
> earlier looked better in this regard.

This was the area which broke.  It is to do with a subtle difference in
the scope of L4 updates.

ROOT_PGTBL needs to resync current (if in use), and be broadcast if
other references to the pages are found.

The TLB flush needs to be broadcast to the whole domain dirty mask, as
we can't (easily) know if the update was part of the current structure.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 16:29:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 16:29:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9726.25624 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUuWD-0007eC-FS; Tue, 20 Oct 2020 16:29:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9726.25624; Tue, 20 Oct 2020 16:29:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUuWD-0007e5-CO; Tue, 20 Oct 2020 16:29:49 +0000
Received: by outflank-mailman (input) for mailman id 9726;
 Tue, 20 Oct 2020 16:29:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUuWC-0007e0-LY
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 16:29:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b776f298-8119-414d-b4e8-672927b293fd;
 Tue, 20 Oct 2020 16:29:46 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUuWA-0007kW-5K; Tue, 20 Oct 2020 16:29:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUuW9-0002ie-Qx; Tue, 20 Oct 2020 16:29:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUuW9-0003BL-QQ; Tue, 20 Oct 2020 16:29:45 +0000
X-Inumbo-ID: b776f298-8119-414d-b4e8-672927b293fd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7KlVrkV8a5h7NyL/0jQ6EKIjywzT951+HO2TuBP+iaI=; b=AHLeaDw90MRr7Yf9sjYxbmX7ai
	3PABEUWiqNgWU+NkeEuU3OoB09psCoRasdJStI9Pk+u0DMUegvw1R3YZYKZbiZ2jXjqx3pe+k8vI6
	38gegF3QFNDjTch74g6R6xzMyO7mC7RJ51Z1GWFILo87fW/P7Ul0T44Ealf6n0UlC6fw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156020-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156020: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=270315b8235e3d10c2e360cff56c2f9e0915a252
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 16:29:45 +0000

flight 156020 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156020/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle  10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                270315b8235e3d10c2e360cff56c2f9e0915a252
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   80 days
Failing since        152366  2020-08-01 20:49:34 Z   79 days  135 attempts
Testing same since   156020  2020-10-20 07:28:33 Z    0 days    1 attempts

------------------------------------------------------------
3226 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 590931 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 16:46:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 16:46:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9730.25639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUulu-00010g-TS; Tue, 20 Oct 2020 16:46:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9730.25639; Tue, 20 Oct 2020 16:46:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUulu-00010Z-PZ; Tue, 20 Oct 2020 16:46:02 +0000
Received: by outflank-mailman (input) for mailman id 9730;
 Tue, 20 Oct 2020 16:46:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C0Jl=D3=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1kUult-00010U-7m
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 16:46:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78497dd5-d72f-44df-8ae6-6193ebe13954;
 Tue, 20 Oct 2020 16:46:00 +0000 (UTC)
X-Inumbo-ID: 78497dd5-d72f-44df-8ae6-6193ebe13954
Subject: Re: [GIT PULL] xen: branch for v5.10-rc1b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603212359;
	bh=0uoEpXADZJVCdJfgfoNUy1KQNOLib6qZkZ6gL0UTJug=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=Kdw5wA9Blt4AX6Kc2sijE5AWBlZsMrLIoyB2x2qWPqIZgiumNhIKctaOPMn5/WSAq
	 UMKLBS4qoVnPfV1EQ6XXq6o/Xc5hCrZijXA4OLaxsCO2W0cKqjt9kESt4HWtIXbQcQ
	 kies03Bh2QTUPO5FGGuFuCnfFhsMrsIzELIdq5nE=
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201020120956.29708-1-jgross@suse.com>
References: <20201020120956.29708-1-jgross@suse.com>
X-PR-Tracked-List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
X-PR-Tracked-Message-Id: <20201020120956.29708-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1b-tag
X-PR-Tracked-Commit-Id: 5f7f77400ab5b357b5fdb7122c3442239672186c
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 4a5bb973fa0353d25dbe854694c71bb58eb4cf78
Message-Id: <160321235968.11581.17952625512231850079.pr-tracker-bot@kernel.org>
Date: Tue, 20 Oct 2020 16:45:59 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Tue, 20 Oct 2020 14:09:56 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1b-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/4a5bb973fa0353d25dbe854694c71bb58eb4cf78

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 17:04:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 17:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9737.25651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUv3H-0002s4-Cb; Tue, 20 Oct 2020 17:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9737.25651; Tue, 20 Oct 2020 17:03:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUv3H-0002rx-9B; Tue, 20 Oct 2020 17:03:59 +0000
Received: by outflank-mailman (input) for mailman id 9737;
 Tue, 20 Oct 2020 17:03:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TRnX=D3=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kUv3F-0002rs-Ve
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 17:03:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 29440690-6e8f-4eee-b653-7a69fd20a5f8;
 Tue, 20 Oct 2020 17:03:57 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUv3C-0008Ts-Fo; Tue, 20 Oct 2020 17:03:54 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUv3C-00068R-61; Tue, 20 Oct 2020 17:03:54 +0000
X-Inumbo-ID: 29440690-6e8f-4eee-b653-7a69fd20a5f8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=d+ED780SRrgtW5AzBF5UzDm/edLZ4md3isLRUfT2hzc=; b=EYB12XpSHayXTI6m7WUrNxBK6D
	0mgHAVceGy7B74diSAL1m+drgAj9RgT53uMixZKh6eOiUjGP+NbkUvfaqYHrjMFY7F8/pBtTFM588
	XUnzj7PkD+ul8Pg8ENV3ss6GAh4R7y0am4NRVY2eSSWW8pZNiWMIWWLWeeIVpk66cxu0=;
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Jan Beulich <jbeulich@suse.com>,
 Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
Date: Tue, 20 Oct 2020 18:03:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Rahul,

Thank you for the contribution. Let's make sure this attempt at SMMUv3 
support in Xen is more successful than the previous one :).

I haven't reviewed the code yet, but I wanted to provide feedback on the 
commit message.

On 20/10/2020 16:25, Rahul Singh wrote:
> Add support for ARM architected SMMUv3 implementations. It is based on
> the Linux SMMUv3 driver.
> 
> Major differences from the Linux driver are as follows:
> 1. Only Stage-2 translation is supported, whereas the Linux driver
>     supports both Stage-1 and Stage-2 translations.
> 2. The P2M page table is reused instead of creating a new one, as SMMUv3
>     has the capability to share the page tables with the CPU.
> 3. Tasklets are used in place of Linux's threaded IRQs for event queue
>     and priority queue IRQ handling.

Tasklets are not a replacement for threaded IRQs. In particular, they 
will take priority over anything else (IOW, nothing else will run on the 
pCPU until they are done).

Do you know why Linux is using a thread? Is it because of long-running 
operations?

> 4. The latest version of the Linux SMMUv3 code implements the command
>     queue access functions based on atomic operations implemented in Linux.

Can you provide more details?

>     The atomic functions used by the command queue access functions are
>     not implemented in Xen, therefore we decided to port the earlier
>     version of the code. Once proper atomic operations are available in
>     Xen, the driver can be updated.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>   xen/drivers/passthrough/Kconfig       |   10 +
>   xen/drivers/passthrough/arm/Makefile  |    1 +
>   xen/drivers/passthrough/arm/smmu-v3.c | 2847 +++++++++++++++++++++++++
>   3 files changed, 2858 insertions(+)

This is quite a significant patch to review. Is there any way to get it 
split (maybe a verbatim Linux copy + Xen modifications on top)?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 17:10:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 17:10:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9741.25662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUv9X-0003lv-8N; Tue, 20 Oct 2020 17:10:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9741.25662; Tue, 20 Oct 2020 17:10:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUv9X-0003lo-5Q; Tue, 20 Oct 2020 17:10:27 +0000
Received: by outflank-mailman (input) for mailman id 9741;
 Tue, 20 Oct 2020 17:10:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yF9C=D3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUv9V-0003lj-J7
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 17:10:25 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef7c16b3-d160-4b6d-8909-4b6d97b4a918;
 Tue, 20 Oct 2020 17:10:24 +0000 (UTC)
X-Inumbo-ID: ef7c16b3-d160-4b6d-8909-4b6d97b4a918
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603213823;
  h=subject:from:to:cc:references:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Iv2tB/XLJiY+MEHRe4shKjglOE0dADFInmu5biQD+Sg=;
  b=QMhGRCHIA3gv4vyWqi3sKRMEF5Ck87SLRXQP8rHjNOMGT0EogjkeS7cG
   2vM4bCVBbjwwi0PIk+V2WslNow8kgTUYREdIkdJmlVijIfx/LVPBVO3PR
   IuGX0X+K58kp9vZFxzzXssFI1TuONN/IejRhE+9IDXF1Xdo8JTtOxb8k/
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: UL76UzMU8x0P3WxZfBQvNqc0HLG6FNzTjs1spCbow5XJ01K8modbxUF4LV9Wc3GiiSJ1/oGQiE
 UJXMFPP/YZT16YFNcZdxYiXz2c4pK2dWqusUcZuBLBx0h37w8okfdME9HAAVspHy+6Kw5uEmjP
 5vTTSrt7OksI/pmLXAq1VOXAnFV4bXAJTaLBEocJtDWPN7X4Gpy6mGYxakvCFWWhrRdCRqdSbH
 J2+3j5Rp5+McVITilW9EgpD5Xb9vXHQSv2xXXeCorpRDXjviynyRnYSlIvDa1AzdU1iVbu8Sr5
 Eck=
X-SBRS: None
X-MesageID: 29466396
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,398,1596513600"; 
   d="scan'208";a="29466396"
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
 <a50a19ce-321a-ceef-55e4-95ffbebff59d@suse.com>
 <c359adee-1826-032b-2d07-c06c545e3b96@citrix.com>
Message-ID: <b24c21b0-607b-6add-e156-a37fcf7f2352@citrix.com>
Date: Tue, 20 Oct 2020 18:10:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c359adee-1826-032b-2d07-c06c545e3b96@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 20/10/2020 17:20, Andrew Cooper wrote:
> On 20/10/2020 16:48, Jan Beulich wrote:
>> On 20.10.2020 17:24, Andrew Cooper wrote:
>>> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
>>> is safe from Xen's point of view (as the update only affects guest mappings), and
>>> the guest is required to flush suitably after making updates.
>>>
>>> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
>>> writeable pagetables, etc.) is an implementation detail outside of the
>>> API/ABI.
>>>
>>> Changes in the paging structure require invalidations in the linear pagetable
>>> range for subsequent accesses into the linear pagetables to access non-stale
>>> mappings.  Xen must provide suitable flushing to prevent intermixed guest
>>> actions from accidentally accessing/modifying the wrong pagetable.
>>>
>>> For all L2 and higher modifications, flush the full TLB.  (This could in
>>> principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
>>> mechanism exists in practice.)
>>>
>>> As this combines with sync_guest for XPTI L4 "shadowing", replace the
>>> sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
>>> now always needs to flush, there is no point trying to exclude the current CPU
>>> from the flush mask.  Use pt_owner->dirty_cpumask directly.
>> Why is there no point? There's no need for the FLUSH_ROOT_PGTBL
>> part of the flushing on the local CPU. The draft you had sent
>> earlier looked better in this regard.
> This was the area which broke.  It is to do with subtle difference in
> the scope of L4 updates.
>
> ROOT_PGTBL needs to resync current (if in use), and be broadcasted if
> other references to the pages are found.
>
> The TLB flush needs to be broadcast to the whole domain dirty mask, as
> we can't (easily) know if the update was part of the current structure.

Actually - we can know whether an L4 update needs flushing locally or
not, in exactly the same way as the sync logic currently works.

However, unlike the opencoded get_cpu_info()->root_pgt_changed = true,
we can't just flush locally for free.

This is quite awkward to express.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 17:13:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 17:13:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9744.25676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvCl-0003xs-RL; Tue, 20 Oct 2020 17:13:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9744.25676; Tue, 20 Oct 2020 17:13:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvCl-0003xl-LN; Tue, 20 Oct 2020 17:13:47 +0000
Received: by outflank-mailman (input) for mailman id 9744;
 Tue, 20 Oct 2020 17:13:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TRnX=D3=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kUvCk-0003wl-I1
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 17:13:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 221f105f-c56e-4973-a7af-79ab9fac0e24;
 Tue, 20 Oct 2020 17:13:45 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUvCg-0000FM-0b; Tue, 20 Oct 2020 17:13:42 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUvCf-0006zy-PL; Tue, 20 Oct 2020 17:13:41 +0000
X-Inumbo-ID: 221f105f-c56e-4973-a7af-79ab9fac0e24
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=gEH0W/5uBUfmjQEj5YwJ9g+vVR/8JcE27YLqQ/85Mcw=; b=BCkF5dCIBtzNZF/alEsHdoJfDO
	aRXOiLLa2DzG7zVy72KdmY1yOsJpbxdNVHc3A8YzuZ+fyc9a3XPCV14ul4zy7kI7xeOmNSxbpiisg
	o7L6XzVwprG92VBg5a72QB256NUbd6Vh9V8YftBDEv+RvOChPMZwEYH78Urjw+KX3Cyc=;
Subject: Re: Xen Coding style and clang-format
To: Stefano Stabellini <sstabellini@kernel.org>,
 Artem Mygaiev <Artem_Mygaiev@epam.com>
Cc: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "vicooodin@gmail.com" <vicooodin@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "committers@xenproject.org" <committers@xenproject.org>,
 "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
 <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
 <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com>
 <b4d7e9a7-6c25-1f7f-86ce-867083beb81a@suse.com>
 <4d4f351b152df2c50e18676ccd6ab6b4dc667801.camel@epam.com>
 <5bd7cc00-c4c9-0737-897d-e76f22e2fd5b@xen.org>
 <AM6PR03MB3687A99424FA9FD062F5FE4BF4030@AM6PR03MB3687.eurprd03.prod.outlook.com>
 <alpine.DEB.2.21.2010191101250.12247@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <b0f9c9e0-d43e-e05b-d4ab-40f3bf437643@xen.org>
Date: Tue, 20 Oct 2020 18:13:39 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010191101250.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 19/10/2020 19:07, Stefano Stabellini wrote:
> On Fri, 16 Oct 2020, Artem Mygaiev wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Friday, 16 October 2020 13:24
>> To: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>; jbeulich@suse.com; George.Dunlap@citrix.com
>> Cc: Artem Mygaiev <Artem_Mygaiev@epam.com>; vicooodin@gmail.com; xen-devel@lists.xenproject.org; committers@xenproject.org; viktor.mitin.19@gmail.com; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
>> Subject: Re: Xen Coding style and clang-format
>>
>>> Hi,
>>>
>>> On 16/10/2020 10:42, Anastasiia Lukianenko wrote:
>>>> Thanks for your advice, which helped me improve the checker. I
>>>> understand that there are still some disagreements about the
>>>> formatting, but as I said before, the checker cannot be very flexible
>>>> and take into account all the author's ideas.
>>>
>>> I am not sure what you are referring to by "author's ideas" here. The
>>> checker should follow a coding style (Xen or a modified version):
>>>      - Anything not following the coding style should be considered
>>> invalid.
>>>      - Anything not written in the coding style should be left
>>> untouched/uncommented by the checker.
>>>
>>
>> Agree
>>
>>>> I suggest using the
>>>> checker not as a mandatory check, but as an indication to the author of
>>>> possible formatting errors that he can correct or ignore.
>>>
>>> I can understand that short term we would want to make it optional so
>>> either the coding style or the checker can be tuned. But I don't think
>>> this is an ideal situation to be in long term.
>>>
>>> The goal of the checker is to automatically verify the coding style and
>>> get it consistent across Xen. If we make it optional or it is
>>> "unreliable", then we lose the two benefits and possibly increase the
>>> contributor frustration as the checker would say A but we need B.
>>>
>>> Therefore, we need to make sure the checker and the coding style match.
>>> I don't have any opinions on the approach to achieve that.
>>
>> Of the list of remaining issues from Anastasiia, it looks like only
>> items 5 and 6 conform to the official Xen coding style. As for the
>> remaining ones, I would like to suggest disabling those that are
>> controversial (items 1, 2, 4, 8, 9, 10). Maybe we want to have a further
>> discussion on refining the coding style; we can use these as a starting
>> point. If we are open to extending the style now, I would suggest adding
>> the rules that seem meaningful (items 3, 7) and keeping them in the checker.
> 
> Good approach. Yes, I would like to keep 3, 7 in the checker.
> 
> I would also keep 8 and add a small note to the coding style to say that
> comments should be aligned where possible.

+1 for this. Although I don't mind which coding style is used, as long 
as we have a checker and the code is consistent :).
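For reference, a minimal .clang-format could serve as a starting point. The option names below are standard clang-format options, but the values are only my guess at the Xen defaults; the contentious items would still need deciding:

```yaml
# Sketch of a possible .clang-format for Xen; values are guesses, not agreed style.
BasedOnStyle: LLVM
IndentWidth: 4
UseTab: Never
ColumnLimit: 80
BreakBeforeBraces: Allman
SpaceBeforeParens: ControlStatements
SpacesInParentheses: true
```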

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 17:17:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 17:17:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9747.25687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvGF-00049j-8V; Tue, 20 Oct 2020 17:17:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9747.25687; Tue, 20 Oct 2020 17:17:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvGF-00049c-5G; Tue, 20 Oct 2020 17:17:23 +0000
Received: by outflank-mailman (input) for mailman id 9747;
 Tue, 20 Oct 2020 17:17:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TRnX=D3=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kUvGE-00049X-H1
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 17:17:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3f8709c-fe60-4d4d-baaf-d6f4ebe44010;
 Tue, 20 Oct 2020 17:17:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUvG6-0000K1-Sr; Tue, 20 Oct 2020 17:17:14 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kUvG6-0007JD-Le; Tue, 20 Oct 2020 17:17:14 +0000
X-Inumbo-ID: d3f8709c-fe60-4d4d-baaf-d6f4ebe44010
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=6XRVWEq9OP5/+7TDqIxD74K72OTW4DwG6dXwruk5jZI=; b=ED+EJ9pAuuNXMRv4ZNOnrp3DHe
	X+v2jtzE8zMpMZGywk+QiBFsqozS9syrNdAmk28dtZNleL8cTkE7pT+0mOAOLFGpZS2oM3vijDHNc
	g724OsazygwjrDaoEg2XZC+Oe26pkFDTGFGh5mO6o0f/AWLTx/Uyb0GYvGXQCNosKz/M=;
Subject: Re: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
To: paul@xen.org, xen-devel@lists.xenproject.org
Cc: 'Paul Durrant' <pdurrant@amazon.com>,
 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>, 'Ian Jackson'
 <iwj@xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich'
 <jbeulich@suse.com>, 'Stefano Stabellini' <sstabellini@kernel.org>
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-3-paul@xen.org>
 <97648df3-dcce-cd19-9074-6ca63d94b518@xen.org>
 <002a01d6a5e8$c36bb5a0$4a4320e0$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <e37bdd4a-f483-9143-7860-81be31916aca@xen.org>
Date: Tue, 20 Oct 2020 18:17:12 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <002a01d6a5e8$c36bb5a0$4a4320e0$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Paul,

On 19/10/2020 08:23, Paul Durrant wrote:
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: 16 October 2020 16:47
>> To: Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org
>> Cc: Paul Durrant <pdurrant@amazon.com>; Daniel De Graaf <dgdegra@tycho.nsa.gov>; Ian Jackson
>> <iwj@xenproject.org>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap
>> <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>; Stefano Stabellini
>> <sstabellini@kernel.org>
>> Subject: Re: [PATCH 2/5] iommu / domctl: introduce XEN_DOMCTL_iommu_ctl
>>
>> Hi Paul,
>>
>> On 05/10/2020 10:49, Paul Durrant wrote:
>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>> index 791f0a2592..75e855625a 100644
>>> --- a/xen/include/public/domctl.h
>>> +++ b/xen/include/public/domctl.h
>>> @@ -1130,6 +1130,18 @@ struct xen_domctl_vuart_op {
>>>                                     */
>>>    };
>>>
>>> +/*
>>> + * XEN_DOMCTL_iommu_ctl
>>> + *
>>> + * Control of VM IOMMU settings
>>> + */
>>> +
>>> +#define XEN_DOMCTL_IOMMU_INVALID 0
>>
>> I can't find any user of XEN_DOMCTL_IOMMU_INVALID. What's the purpose
>> for it?
>>
> 
> It's just a placeholder. I think it's generally safer that a zero opcode value is invalid.

Thanks for the explanation. I first thought the goal would be to somehow 
invalidate the IOMMU :).

Anyway, it might be worth adding /* Reserved - should never be used */ 
on top.
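i.e. something along the lines of:

```c
/* Reserved - should never be used */
#define XEN_DOMCTL_IOMMU_INVALID 0
```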

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 17:18:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 17:18:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9749.25699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvGv-0004Gv-Is; Tue, 20 Oct 2020 17:18:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9749.25699; Tue, 20 Oct 2020 17:18:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvGv-0004Go-FB; Tue, 20 Oct 2020 17:18:05 +0000
Received: by outflank-mailman (input) for mailman id 9749;
 Tue, 20 Oct 2020 17:18:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CtYs=D3=suse.cz=vbabka@srs-us1.protection.inumbo.net>)
 id 1kUvGu-0004Gi-EW
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 17:18:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 526093e9-8a9d-4952-a2b4-e4f137c9f6f7;
 Tue, 20 Oct 2020 17:18:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 456F0AD85;
 Tue, 20 Oct 2020 17:18:02 +0000 (UTC)
X-Inumbo-ID: 526093e9-8a9d-4952-a2b4-e4f137c9f6f7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v2 2/5] mm/page_alloc: place pages to tail in
 __putback_isolated_page()
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
 Andrew Morton <akpm@linux-foundation.org>,
 Matthew Wilcox <willy@infradead.org>,
 Alexander Duyck <alexander.h.duyck@linux.intel.com>,
 Oscar Salvador <osalvador@suse.de>,
 Wei Yang <richard.weiyang@linux.alibaba.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>, Michal Hocko <mhocko@suse.com>,
 Mel Gorman <mgorman@techsingularity.net>, Michal Hocko <mhocko@kernel.org>,
 Dave Hansen <dave.hansen@intel.com>, Mike Rapoport <rppt@kernel.org>,
 Scott Cheloha <cheloha@linux.ibm.com>, Michael Ellerman <mpe@ellerman.id.au>
References: <20201005121534.15649-1-david@redhat.com>
 <20201005121534.15649-3-david@redhat.com>
From: Vlastimil Babka <vbabka@suse.cz>
Message-ID: <ddeba755-eed5-d412-ffa0-4d1a6a4bc297@suse.cz>
Date: Tue, 20 Oct 2020 19:18:00 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201005121534.15649-3-david@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/5/20 2:15 PM, David Hildenbrand wrote:
> __putback_isolated_page() already documents that pages will be placed to
> the tail of the freelist - this is, however, not the case for
> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
> the case for all existing users.
> 
> This change affects two users:
> - free page reporting
> - page isolation, when undoing the isolation (including memory onlining).
> 
> This behavior is desirable for pages that haven't really been touched 
> lately, so exactly the two users that don't actually read/write page
> content, but rather move untouched pages.
> 
> The new behavior is especially desirable for memory onlining, where we
> allow allocation of newly onlined pages via undo_isolate_page_range()
> in online_pages(). Right now, we always place them to the head of the
> freelist, resulting in undesirable behavior: Assume we add
> individual memory chunks via add_memory() and online them right away to
> the NORMAL zone. We create a dependency chain of unmovable allocations
> e.g., via the memmap. The memmap of the next chunk will be placed onto
> previous chunks - if the last block cannot get offlined+removed, all
> dependent ones cannot get offlined+removed. While this can already be
> observed with individual DIMMs, it's more of an issue for virtio-mem
> (and I suspect also ppc DLPAR).
> 
> Document that this should only be used for optimizations, and no code
> should rely on this behavior for correctness (if the order of the
> freelists ever changes).
> 
> We won't care about page shuffling: memory onlining already properly
> shuffles after onlining. free page reporting doesn't care about
> physically contiguous ranges, and there are already cases where page
> isolation will simply move (physically close) free pages to (currently)
> the head of the freelists via move_freepages_block() instead of
> shuffling. If this becomes ever relevant, we should shuffle the whole
> zone when undoing isolation of larger ranges, and after
> free_contig_range().
> 
> Reviewed-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> Reviewed-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Acked-by: Michal Hocko <mhocko@suse.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
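
For reference, the head-vs-tail placement the patch talks about boils down to which side of a circular doubly-linked freelist a page is queued on. A minimal sketch, loosely modelled on the kernel's list helpers (simplified for illustration; not the actual mm/page_alloc.c code):

```c
#include <stddef.h>

/* Minimal circular doubly-linked list, loosely modelled on the kernel's
 * struct list_head.  Illustrative simplification only. */
struct list_head {
	struct list_head *prev, *next;
};

static void list_init(struct list_head *head)
{
	head->prev = head->next = head;
}

static void __list_insert(struct list_head *new,
			  struct list_head *prev, struct list_head *next)
{
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

/* Head insertion: the page is handed out again soon (assumed cache-hot). */
static void list_add_head(struct list_head *new, struct list_head *head)
{
	__list_insert(new, head, head->next);
}

/* Tail insertion: the page is handed out as late as possible (cache-cold),
 * which is what the patch wants for pages that were never touched. */
static void list_add_tail(struct list_head *new, struct list_head *head)
{
	__list_insert(new, head->prev, head);
}

/* Returns 1 if tail insertion preserves insertion order while head
 * insertion puts the newest element first, as expected. */
static int freelist_demo(void)
{
	struct list_head head, a, b, c;

	list_init(&head);
	list_add_tail(&a, &head);	/* list: a       */
	list_add_tail(&b, &head);	/* list: a, b    */
	list_add_head(&c, &head);	/* list: c, a, b */
	return head.next == &c && c.next == &a && a.next == &b &&
	       head.prev == &b;
}
```

Allocation takes pages from the head, so tail-queued pages sit in the freelist longest - exactly the property wanted for freshly onlined or reported pages.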


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 17:20:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 17:20:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9753.25710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvJD-000562-0A; Tue, 20 Oct 2020 17:20:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9753.25710; Tue, 20 Oct 2020 17:20:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvJC-00055v-TP; Tue, 20 Oct 2020 17:20:26 +0000
Received: by outflank-mailman (input) for mailman id 9753;
 Tue, 20 Oct 2020 17:20:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CtYs=D3=suse.cz=vbabka@srs-us1.protection.inumbo.net>)
 id 1kUvJA-00055p-RC
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 17:20:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4c8e74e-8ca4-4db0-8519-c179f02554a1;
 Tue, 20 Oct 2020 17:20:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EDB3BAC23;
 Tue, 20 Oct 2020 17:20:22 +0000 (UTC)
X-Inumbo-ID: b4c8e74e-8ca4-4db0-8519-c179f02554a1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v2 3/5] mm/page_alloc: move pages to tail in
 move_to_free_list()
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
 Andrew Morton <akpm@linux-foundation.org>,
 Matthew Wilcox <willy@infradead.org>, Oscar Salvador <osalvador@suse.de>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 Wei Yang <richard.weiyang@linux.alibaba.com>,
 Alexander Duyck <alexander.h.duyck@linux.intel.com>,
 Mel Gorman <mgorman@techsingularity.net>, Michal Hocko <mhocko@kernel.org>,
 Dave Hansen <dave.hansen@intel.com>, Mike Rapoport <rppt@kernel.org>,
 Scott Cheloha <cheloha@linux.ibm.com>, Michael Ellerman <mpe@ellerman.id.au>
References: <20201005121534.15649-1-david@redhat.com>
 <20201005121534.15649-4-david@redhat.com>
From: Vlastimil Babka <vbabka@suse.cz>
Message-ID: <505935d6-90d2-3fce-57f0-5946968d6372@suse.cz>
Date: Tue, 20 Oct 2020 19:20:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201005121534.15649-4-david@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/5/20 2:15 PM, David Hildenbrand wrote:
> Whenever we move pages between freelists via move_to_free_list()/
> move_freepages_block(), we don't actually touch the pages:
> 1. Page isolation doesn't actually touch the pages, it simply isolates
>     pageblocks and moves all free pages to the MIGRATE_ISOLATE freelist.
>     When undoing isolation, we move the pages back to the target list.
> 2. Page stealing (steal_suitable_fallback()) moves free pages directly
>     between lists without touching them.
> 3. reserve_highatomic_pageblock()/unreserve_highatomic_pageblock() moves
>     free pages directly between freelists without touching them.
> 
> We already place pages to the tail of the freelists when undoing isolation
> via __putback_isolated_page(), let's do it in any case (e.g., if order <=
> pageblock_order) and document the behavior. To simplify, let's move the
> pages to the tail for all move_to_free_list()/move_freepages_block() users.
> 
> In 2., the target list is empty, so there should be no change. In 3.,
> we might observe a change, however, highatomic is more concerned about
> allocations succeeding than cache hotness - if we ever realize this
> change degrades a workload, we can special-case this instance and add a
> proper comment.
> 
> This change results in all pages onlined via online_pages() being
> placed at the tail of the freelist.
> 
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>

Reviewed-by: Vlastimil Babka <vbabka@suse.cz>


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 17:21:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 17:21:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9757.25722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvKd-0005Dy-B9; Tue, 20 Oct 2020 17:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9757.25722; Tue, 20 Oct 2020 17:21:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUvKd-0005Dr-8C; Tue, 20 Oct 2020 17:21:55 +0000
Received: by outflank-mailman (input) for mailman id 9757;
 Tue, 20 Oct 2020 17:21:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CtYs=D3=suse.cz=vbabka@srs-us1.protection.inumbo.net>)
 id 1kUvKc-0005Dm-Fq
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 17:21:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d2de52c-04e8-4aa2-8522-18d336cc3566;
 Tue, 20 Oct 2020 17:21:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C225CADFF;
 Tue, 20 Oct 2020 17:21:52 +0000 (UTC)
X-Inumbo-ID: 8d2de52c-04e8-4aa2-8522-18d336cc3566
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v2 5/5] mm/memory_hotplug: update comment regarding zone
 shuffling
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-acpi@vger.kernel.org,
 Andrew Morton <akpm@linux-foundation.org>,
 Matthew Wilcox <willy@infradead.org>,
 Wei Yang <richard.weiyang@linux.alibaba.com>, Michal Hocko
 <mhocko@suse.com>, Alexander Duyck <alexander.h.duyck@linux.intel.com>,
 Mel Gorman <mgorman@techsingularity.net>, Michal Hocko <mhocko@kernel.org>,
 Dave Hansen <dave.hansen@intel.com>, Oscar Salvador <osalvador@suse.de>,
 Mike Rapoport <rppt@kernel.org>, Pankaj Gupta <pankaj.gupta.linux@gmail.com>
References: <20201005121534.15649-1-david@redhat.com>
 <20201005121534.15649-6-david@redhat.com>
From: Vlastimil Babka <vbabka@suse.cz>
Message-ID: <79f2eb3b-d3db-3187-ff7e-1b7bb8e769a3@suse.cz>
Date: Tue, 20 Oct 2020 19:21:52 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201005121534.15649-6-david@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10/5/20 2:15 PM, David Hildenbrand wrote:
> As we no longer shuffle via generic_online_page() and when undoing
> isolation, we can simplify the comment.
> 
> We now effectively shuffle only once (properly) when onlining new
> memory.
> 
> Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> Acked-by: Michal Hocko <mhocko@suse.com>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   mm/memory_hotplug.c | 11 ++++-------
>   1 file changed, 4 insertions(+), 7 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 03a00cb68bf7..b44d4c7ba73b 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -858,13 +858,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>   	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
>   
>   	/*
> -	 * When exposing larger, physically contiguous memory areas to the
> -	 * buddy, shuffling in the buddy (when freeing onlined pages, putting
> -	 * them either to the head or the tail of the freelist) is only helpful
> -	 * for maintaining the shuffle, but not for creating the initial
> -	 * shuffle. Shuffle the whole zone to make sure the just onlined pages
> -	 * are properly distributed across the whole freelist. Make sure to
> -	 * shuffle once pageblocks are no longer isolated.
> +	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
> +	 * the tail of the freelist when undoing isolation). Shuffle the whole
> +	 * zone to make sure the just onlined pages are properly distributed
> +	 * across the whole freelist - to create an initial shuffle.
>   	 */
>   	shuffle_zone(zone);
>   
> 
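
For readers unfamiliar with zone shuffling, the "initial shuffle" the comment refers to is conceptually a Fisher-Yates shuffle over the zone's free pages. A rough sketch over a plain index array (illustrative only; the kernel's shuffle_zone() actually operates on pageblock-sized regions of the buddy freelists):

```c
#include <stdlib.h>

/* Fisher-Yates shuffle over an array of page indices -- conceptually what
 * shuffling a zone's freelists achieves.  Illustrative simplification. */
static void shuffle(int *pages, int n, unsigned int seed)
{
	srand(seed);
	for (int i = n - 1; i > 0; i--) {
		int j = rand() % (i + 1);
		int tmp = pages[i];

		pages[i] = pages[j];
		pages[j] = tmp;
	}
}

/* Returns 1 if shuffling 0..N-1 still yields a permutation of 0..N-1. */
static int shuffle_demo(void)
{
	enum { N = 16 };
	int pages[N], seen[N] = { 0 };

	for (int i = 0; i < N; i++)
		pages[i] = i;
	shuffle(pages, N, 42);
	for (int i = 0; i < N; i++) {
		if (pages[i] < 0 || pages[i] >= N || seen[pages[i]])
			return 0;
		seen[pages[i]] = 1;
	}
	return 1;
}
```

The point of the patch is that per-page head/tail decisions only *maintain* randomness; creating it in the first place requires one whole-zone shuffle after onlining.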



From xen-devel-bounces@lists.xenproject.org Tue Oct 20 18:43:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 18:43:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9767.25740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUwaz-00041E-9Y; Tue, 20 Oct 2020 18:42:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9767.25740; Tue, 20 Oct 2020 18:42:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUwaz-000417-6f; Tue, 20 Oct 2020 18:42:53 +0000
Received: by outflank-mailman (input) for mailman id 9767;
 Tue, 20 Oct 2020 18:42:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ko+4=D3=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1kUway-00040w-E5
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 18:42:52 +0000
Received: from smtprelay.hostedemail.com (unknown [216.40.44.73])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67f6738a-9a97-4cd4-bdc6-ad77f90ad99f;
 Tue, 20 Oct 2020 18:42:51 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay02.hostedemail.com (Postfix) with ESMTP id 43FA51260;
 Tue, 20 Oct 2020 18:42:51 +0000 (UTC)
Received: from XPS-9350.home (unknown [47.151.133.149])
 (Authenticated sender: joe@perches.com)
 by omf17.hostedemail.com (Postfix) with ESMTPA;
 Tue, 20 Oct 2020 18:42:43 +0000 (UTC)
X-Inumbo-ID: 67f6738a-9a97-4cd4-bdc6-ad77f90ad99f
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 50,0,0,,d41d8cd98f00b204,joe@perches.com,,RULES_HIT:41:355:379:599:800:960:967:973:982:988:989:1260:1277:1311:1313:1314:1345:1359:1434:1437:1515:1516:1518:1534:1542:1593:1594:1711:1730:1747:1777:1792:2198:2199:2393:2525:2553:2560:2563:2682:2685:2731:2828:2859:2911:2933:2937:2939:2942:2945:2947:2951:2954:3022:3138:3139:3140:3141:3142:3354:3622:3865:3866:3867:3868:3870:3871:3872:3873:3874:3934:3936:3938:3941:3944:3947:3950:3953:3956:3959:4321:4425:5007:6742:6743:7576:7903:8957:9025:10004:10400:10450:10455:10848:11232:11658:11914:12043:12295:12297:12663:12740:12760:12895:13153:13228:13439:14181:14659:14721:19904:19999:21080:21451:21627:21939:21990:30012:30034:30054:30070:30090:30091,0,RBL:none,CacheIP:none,Bayesian:0.5,0.5,0.5,Netcheck:none,DomainCache:0,MSF:not bulk,SPF:,MSBL:0,DNSBL:none,Custom_rules:0:0:0,LFtime:1,LUA_SUMMARY:none
X-HE-Tag: humor84_3a06a8527241
X-Filterd-Recvd-Size: 4943
Message-ID: <3bc5c2e3b3edc22a4d167ec807ecdaaf8dcda76d.camel@perches.com>
Subject: Re: [RFC] treewide: cleanup unreachable breaks
From: Joe Perches <joe@perches.com>
To: Nick Desaulniers <ndesaulniers@google.com>, Tom Rix <trix@redhat.com>
Cc: LKML <linux-kernel@vger.kernel.org>, linux-edac@vger.kernel.org, 
 linux-acpi@vger.kernel.org, linux-pm@vger.kernel.org, 
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org, 
 openipmi-developer@lists.sourceforge.net, "open list:HARDWARE RANDOM NUMBER
 GENERATOR CORE" <linux-crypto@vger.kernel.org>, Linux ARM
 <linux-arm-kernel@lists.infradead.org>,  linux-power@fi.rohmeurope.com,
 linux-gpio@vger.kernel.org, amd-gfx list <amd-gfx@lists.freedesktop.org>,
 dri-devel <dri-devel@lists.freedesktop.org>, 
 nouveau@lists.freedesktop.org, virtualization@lists.linux-foundation.org, 
 spice-devel@lists.freedesktop.org, linux-iio@vger.kernel.org, 
 linux-amlogic@lists.infradead.org,
 industrypack-devel@lists.sourceforge.net,  linux-media@vger.kernel.org,
 MPT-FusionLinux.pdl@broadcom.com,  linux-scsi@vger.kernel.org,
 linux-mtd@lists.infradead.org,  linux-can@vger.kernel.org, Network
 Development <netdev@vger.kernel.org>,  intel-wired-lan@lists.osuosl.org,
 ath10k@lists.infradead.org, linux-wireless
 <linux-wireless@vger.kernel.org>, linux-stm32@st-md-mailman.stormreply.com,
  linux-nfc@lists.01.org, linux-nvdimm <linux-nvdimm@lists.01.org>, 
 linux-pci@vger.kernel.org, linux-samsung-soc@vger.kernel.org, 
 platform-driver-x86@vger.kernel.org, patches@opensource.cirrus.com, 
 storagedev@microchip.com, devel@driverdev.osuosl.org, 
 linux-serial@vger.kernel.org, linux-usb@vger.kernel.org, 
 usb-storage@lists.one-eyed-alien.net, linux-watchdog@vger.kernel.org, 
 ocfs2-devel@oss.oracle.com, bpf <bpf@vger.kernel.org>, 
 linux-integrity@vger.kernel.org, linux-security-module@vger.kernel.org, 
 keyrings@vger.kernel.org, alsa-devel@alsa-project.org, clang-built-linux
 <clang-built-linux@googlegroups.com>, Greg KH <gregkh@linuxfoundation.org>,
  George Burgess <gbiv@google.com>
Date: Tue, 20 Oct 2020 11:42:42 -0700
In-Reply-To: <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
References: <20201017160928.12698-1-trix@redhat.com>
	 <20201018054332.GB593954@kroah.com>
	 <CAKwvOdkR_Ttfo7_JKUiZFVqr=Uh=4b05KCPCSuzwk=zaWtA2_Q@mail.gmail.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.36.4-0ubuntu1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Mon, 2020-10-19 at 12:42 -0700, Nick Desaulniers wrote:
> On Sat, Oct 17, 2020 at 10:43 PM Greg KH <gregkh@linuxfoundation.org> wrote:
> > On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix@redhat.com wrote:
> > > From: Tom Rix <trix@redhat.com>
> > > 
> > > This is an upcoming change to clean up a new warning treewide.
> > > I am wondering if the change could be one mega patch (see below), one
> > > normal patch per file (about 100 patches), or somewhere halfway by
> > > collecting early acks.
> > 
> > Please break it up into one-patch-per-subsystem, like normal, and get it
> > merged that way.
> > 
> > Sending us a patch, without even a diffstat to review, isn't going to
> > get you very far...
> 
> Tom,
> If you're able to automate this cleanup, I suggest checking in a
> script that can be run on a directory.  Then for each subsystem you
> can say in your commit "I ran scripts/fix_whatever.py on this subdir."
>  Then others can help you drive the tree wide cleanup.  Then we can
> enable -Wunreachable-code-break either by default, or W=2 right now
> might be a good idea.
> 
> Ah, George (gbiv@, cc'ed), did an analysis recently of
> `-Wunreachable-code-loop-increment`, `-Wunreachable-code-break`, and
> `-Wunreachable-code-return` for Android userspace.  From the review:
> ```
> Spoilers: of these, it seems useful to turn on
> -Wunreachable-code-loop-increment and -Wunreachable-code-return by
> default for Android
> ...
> While these conventions about always having break arguably became
> obsolete when we enabled -Wfallthrough, my sample turned up zero
> potential bugs caught by this warning, and we'd need to put a lot of
> effort into getting a clean tree. So this warning doesn't seem to be
> worth it.
> ```
> Looks like there's an order of magnitude more `-Wunreachable-code-break`
> instances than the other two.
> 
> We probably should add all 3 to W=2 builds (wrapped in cc-option).
> I've filed https://github.com/ClangBuiltLinux/linux/issues/1180 to
> follow up on.

I suggest using W=1, as people who are doing cleanups
generally use that and not W=123 or any other style.

Every other use of W= is still quite noisy, and these
code warnings are relatively trivial to fix up.
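
As a concrete illustration of the warning under discussion: a `break` directly after a `return` can never execute, which is what clang's -Wunreachable-code-break flags. A toy example (not taken from the actual series):

```c
/* Before: the 'break' after 'return' is dead code; clang's
 * -Wunreachable-code-break flags it.  Toy example for illustration. */
static int classify_before(int x)
{
	switch (x) {
	case 0:
		return 10;
		break;		/* unreachable: flagged by the warning */
	default:
		return -1;
	}
}

/* After: the cleanup simply drops the dead 'break'; behaviour is
 * unchanged, which is why the fix can be automated treewide. */
static int classify_after(int x)
{
	switch (x) {
	case 0:
		return 10;
	default:
		return -1;
	}
}
```

Since the transformation never changes behaviour, it lends itself to the scripted, per-subsystem cleanup Nick suggests above.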




From xen-devel-bounces@lists.xenproject.org Tue Oct 20 18:46:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 18:46:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9770.25753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUweD-0004F1-RZ; Tue, 20 Oct 2020 18:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9770.25753; Tue, 20 Oct 2020 18:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUweD-0004Eu-Mr; Tue, 20 Oct 2020 18:46:13 +0000
Received: by outflank-mailman (input) for mailman id 9770;
 Tue, 20 Oct 2020 18:46:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yF9C=D3=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kUweC-0004Ep-AA
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 18:46:12 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05c95d79-f021-4f54-8eb1-f753cc125ad7;
 Tue, 20 Oct 2020 18:46:07 +0000 (UTC)
X-Inumbo-ID: 05c95d79-f021-4f54-8eb1-f753cc125ad7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603219568;
  h=subject:from:to:cc:references:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=wo7w9azPULKlvQIJ7Y/jP3i6QqYtsuiVG3CAvIZ9gac=;
  b=KxKROrWNZjI+Uf+Nkyo7Op2gcmwsapo0FSbHaRuZV7JyBFX36r321yu/
   jFyj/KcKRttfKjb0zLuZDGYLfctmKQECrBvWM9oGOgBEUvO+/7cZIUON0
   ziHtTKcwZMK4rjswcu/5Zx1iG3UFl71ZeDKrKtIAEoj9xX1uV695RjS57
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: z+X/ByM7+eqYJQGmuMfKCkjL/3AomXb9/xzc0yhCt07tJESOqx2KN/GQE8BOdDZq5VHz8TZSfw
 ueLuPaeSeGLh4G4o9GZctARzR5xAEFhB2EDFJfDNEA01yup/psbA9MPau/NAM8Ia5QrNmOmvFL
 PydIBlaqs9xQeZfnj7ySBAh2s43PHqM737MlTGt+SelzS44nLNMLiGimj7/+9lWeJWPHDPSFYy
 jPJ6kmePwh2jaREPAq1IVgQgONJY0BtHYWgJD3kLtN4oLae5chqz0nDvmZt190ZJxWvGo/WWdB
 awc=
X-SBRS: None
X-MesageID: 29394201
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,398,1596513600"; 
   d="scan'208";a="29394201"
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
 <a50a19ce-321a-ceef-55e4-95ffbebff59d@suse.com>
 <c359adee-1826-032b-2d07-c06c545e3b96@citrix.com>
 <b24c21b0-607b-6add-e156-a37fcf7f2352@citrix.com>
Message-ID: <9b54113c-9df2-2f44-1545-67ffe4831934@citrix.com>
Date: Tue, 20 Oct 2020 19:46:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b24c21b0-607b-6add-e156-a37fcf7f2352@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 20/10/2020 18:10, Andrew Cooper wrote:
> On 20/10/2020 17:20, Andrew Cooper wrote:
>> On 20/10/2020 16:48, Jan Beulich wrote:
>>> On 20.10.2020 17:24, Andrew Cooper wrote:
>>>> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
>>>> is safe from Xen's point of view (as the update only affects guest mappings), and
>>>> the guest is required to flush suitably after making updates.
>>>>
>>>> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
>>>> writeable pagetables, etc.) is an implementation detail outside of the
>>>> API/ABI.
>>>>
>>>> Changes in the paging structure require invalidations in the linear pagetable
>>>> range for subsequent accesses into the linear pagetables to access non-stale
>>>> mappings.  Xen must provide suitable flushing to prevent intermixed guest
>>>> actions from accidentally accessing/modifying the wrong pagetable.
>>>>
>>>> For all L2 and higher modifications, flush the full TLB.  (This could in
>>>> principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
>>>> mechanism exists in practice.)
>>>>
>>>> As this combines with sync_guest for XPTI L4 "shadowing", replace the
>>>> sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
>>>> now always needs to flush, there is no point trying to exclude the current CPU
>>>> from the flush mask.  Use pt_owner->dirty_cpumask directly.
>>> Why is there no point? There's no need for the FLUSH_ROOT_PGTBL
>>> part of the flushing on the local CPU. The draft you had sent
>>> earlier looked better in this regard.
>> This was the area which broke.  It is to do with subtle difference in
>> the scope of L4 updates.
>>
>> ROOT_PGTBL needs to resync current (if in use), and be broadcasted if
>> other references to the pages are found.
>>
>> The TLB flush needs to be broadcast to the whole domain dirty mask, as
>> we can't (easily) know if the update was part of the current structure.
> Actually - we can know whether an L4 update needs flushing locally or
> not, in exactly the same way as the sync logic currently works.
>
> However, unlike the opencoded get_cpu_info()->root_pgt_changed = true,
> we can't just flush locally for free.
>
> This is quite awkward to express.

And not safe.  Flushes may accumulate from multiple levels in a batch,
and pt_owner may not be equal to current.

I stand by the version submitted as the security fix.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 20 21:37:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 20 Oct 2020 21:37:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9786.25794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUzK2-0002bU-Sg; Tue, 20 Oct 2020 21:37:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9786.25794; Tue, 20 Oct 2020 21:37:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kUzK2-0002bN-Nz; Tue, 20 Oct 2020 21:37:34 +0000
Received: by outflank-mailman (input) for mailman id 9786;
 Tue, 20 Oct 2020 21:37:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Or1=D3=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kUzK1-0002ag-Jm
 for xen-devel@lists.xenproject.org; Tue, 20 Oct 2020 21:37:33 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c41d470d-47e9-4485-a9b7-079ba7fbaf18;
 Tue, 20 Oct 2020 21:37:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUzJt-0005hP-JN; Tue, 20 Oct 2020 21:37:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kUzJt-0003nq-9p; Tue, 20 Oct 2020 21:37:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kUzJt-00054f-9J; Tue, 20 Oct 2020 21:37:25 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uGtdXhKGPtQjsK79FAvxk91mEZIvCiNWRhRLgL7Ohsc=; b=v7XHMjfEQY53eRqkEhUV7+YbBE
	s/vQKpWTvkzYivCUr/NC5y9puvE+j8CsRWeyf+sO482xCGoSH9uXrP/FENnkqcb8oUVLAaR2W4Nhc
	gpRpnwMS6XwGUOEJIJW8B/C8fm+W/daQ0q9OldfJyHwxNmxEO5CPRftWZj6LD9S2dvfA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156027-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156027: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a7f0831e58bf4681d710e9a029644b6fa07b7cb0
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 20 Oct 2020 21:37:25 +0000

flight 156027 xen-unstable real [real]
flight 156048 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156027/
http://logs.test-lab.xenproject.org/osstest/logs/156048/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156013

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156013
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156013
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156013
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156013
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156013
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156013
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156013
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156013
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156013
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156013
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156013
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  a7f0831e58bf4681d710e9a029644b6fa07b7cb0
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   156013  2020-10-20 04:30:46 Z    0 days
Testing same since   156027  2020-10-20 12:37:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a7f0831e58bf4681d710e9a029644b6fa07b7cb0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 08:54:59 2020 +0200

    SVM: avoid VMSAVE in ctxt-switch-to
    
    Of the state saved by the insn and reloaded by the corresponding VMLOAD
    - TR and syscall state are invariant while having Xen's state loaded,
    - sysenter is unused altogether by Xen,
    - FS, GS, and LDTR are not used by Xen and get suitably set in PV
      context switch code.
    Note that state is suitably populated in _svm_cpu_up(); a minimal
    respective assertion gets added.
    
    Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit de6d188a519f9e3b7a1acc7784adf4c243865f9a
Author: Igor Druzhinin <igor.druzhinin@citrix.com>
Date:   Tue Oct 20 08:54:23 2020 +0200

    hvmloader: flip "ACPI data" to "ACPI NVS" type for ACPI table region
    
    ACPI specification contains statements describing memory marked with regular
    "ACPI data" type as reclaimable by the guest. Although the guest shouldn't
    really do it if it wants kexec or similar functionality to work, there
    could still be ambiguities in treating these regions as potentially regular
    RAM.
    
    One such example is SeaBIOS which currently reports "ACPI data" regions as
    RAM to the guest in its e801 call. Which it might have the right to do as any
    user of this is expected to be ACPI unaware. But a QEMU bootloader later seems
    to ignore that fact and is instead using e801 to find a place for initrd which
    causes the tables to be erased. While arguably QEMU bootloader or SeaBIOS need
    to be fixed / improved here, that is just one example of the potential problems
    from using a reclaimable memory type.
    
    Flip the type to "ACPI NVS" which doesn't have this ambiguity in it and is
    described by the spec as non-reclaimable (so cannot ever be treated like RAM).
    
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 7b36d16d21ae70a1eaabe577b7e4b42ed0f1a7d1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 08:53:53 2020 +0200

    xen-detect: make CPUID fallback CPUID-faulting aware
    
    Relying on presence / absence of hypervisor leaves in raw / escaped
    CPUID output cannot be used to tell apart PV and HVM on CPUID faulting
    capable hardware. Utilize a PV-only feature flag to avoid false positive
    HVM detection.
    
    While at it also short circuit the main detection loop: For PV, only
    the base group of leaves can possibly hold hypervisor information.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 25467bb5d121735af4969834a62bca752a7bfe10
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 08:52:53 2020 +0200

    EFI: free unused boot mem in at least some cases
    
    Address at least the primary reason why 52bba67f8b87 ("efi/boot: Don't
    free ebmalloc area at all") was put in place: Make xen_in_range() aware
    of the freed range. This is in particular relevant for EFI-enabled
    builds not actually running on EFI, as the entire range will be unused
    in this case.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9795.25839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1Y9-0007zl-IP; Wed, 21 Oct 2020 00:00:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9795.25839; Wed, 21 Oct 2020 00:00:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1Y9-0007ze-EA; Wed, 21 Oct 2020 00:00:17 +0000
Received: by outflank-mailman (input) for mailman id 9795;
 Wed, 21 Oct 2020 00:00:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1Y7-0007y0-P1
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:15 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a62aa01-00f0-4ef2-95e2-76fc75ec8221;
 Wed, 21 Oct 2020 00:00:15 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E7A6B223FB;
 Wed, 21 Oct 2020 00:00:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238414;
	bh=hxjQ0V81U0Cnve4cVEcBwbe968vdnzkI0O3rCTq2odE=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=rVg45YDpRhtSNG1QUlggs79es90EERqFSSu8R6zYkDvRKMa6yawv4MKaR4adGm4s7
	 zG7tffwDzdOuBMUxcO2FMRBNZbRE7rOwagEqjWU3ztrI5FceNX0vn/D7CAyXBpeha/
	 r1GzGqN6+/vGozkfG4oUrvVXZ8HNOcLfxQNBF+gg=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 03/14] kernel-doc: public/device_tree_defs.h
Date: Tue, 20 Oct 2020 17:00:00 -0700
Message-Id: <20201021000011.15351-3-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/include/public/device_tree_defs.h | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/xen/include/public/device_tree_defs.h b/xen/include/public/device_tree_defs.h
index 209d43de3f..be35598b53 100644
--- a/xen/include/public/device_tree_defs.h
+++ b/xen/include/public/device_tree_defs.h
@@ -2,7 +2,9 @@
 #define __XEN_DEVICE_TREE_DEFS_H__
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
-/*
+/**
+ * DOC: GUEST_PHANDLE_GIC
+ *
  * The device tree compiler (DTC) is allocating the phandle from 1 to
  * onwards. Reserve a high value for the GIC phandle.
  */
@@ -12,17 +14,17 @@
 #define GUEST_ROOT_SIZE_CELLS 2
 
 /**
- * IRQ line type.
+ * DOC: IRQ line type.
  *
- * DT_IRQ_TYPE_NONE            - default, unspecified type
- * DT_IRQ_TYPE_EDGE_RISING     - rising edge triggered
- * DT_IRQ_TYPE_EDGE_FALLING    - falling edge triggered
- * DT_IRQ_TYPE_EDGE_BOTH       - rising and falling edge triggered
- * DT_IRQ_TYPE_LEVEL_HIGH      - high level triggered
- * DT_IRQ_TYPE_LEVEL_LOW       - low level triggered
- * DT_IRQ_TYPE_LEVEL_MASK      - Mask to filter out the level bits
- * DT_IRQ_TYPE_SENSE_MASK      - Mask for all the above bits
- * DT_IRQ_TYPE_INVALID         - Use to initialize the type
+ * - DT_IRQ_TYPE_NONE            - default, unspecified type
+ * - DT_IRQ_TYPE_EDGE_RISING     - rising edge triggered
+ * - DT_IRQ_TYPE_EDGE_FALLING    - falling edge triggered
+ * - DT_IRQ_TYPE_EDGE_BOTH       - rising and falling edge triggered
+ * - DT_IRQ_TYPE_LEVEL_HIGH      - high level triggered
+ * - DT_IRQ_TYPE_LEVEL_LOW       - low level triggered
+ * - DT_IRQ_TYPE_LEVEL_MASK      - Mask to filter out the level bits
+ * - DT_IRQ_TYPE_SENSE_MASK      - Mask for all the above bits
+ * - DT_IRQ_TYPE_INVALID         - Use to initialize the type
  */
 #define DT_IRQ_TYPE_NONE           0x00000000
 #define DT_IRQ_TYPE_EDGE_RISING    0x00000001
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9796.25851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YA-00081R-Te; Wed, 21 Oct 2020 00:00:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9796.25851; Wed, 21 Oct 2020 00:00:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YA-00081J-Pa; Wed, 21 Oct 2020 00:00:18 +0000
Received: by outflank-mailman (input) for mailman id 9796;
 Wed, 21 Oct 2020 00:00:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1Y9-0007y0-3G
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60d0a6a7-e295-4694-9898-cc8ffab891de;
 Wed, 21 Oct 2020 00:00:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 45FD02242F;
 Wed, 21 Oct 2020 00:00:15 +0000 (UTC)
X-Inumbo-ID: 60d0a6a7-e295-4694-9898-cc8ffab891de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238415;
	bh=GUNzRumMSHbwvq+jjIpryPQ6OW8c5fAFKRBl0mSCZ28=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=A2hg26SnrRkSfiixGzeHd7k87Ufua+YcqZoi9h8pM4WTmc5pZrI7BxMqTC9ivMhqp
	 tuWvtkqpA2xbX3phw49dpbMnuSjx5uibCqqvezrJTn0BPT93Qr5rHAtSK0GPxzOgEb
	 zkw8MGJRLZna5KvY17zOXThQqkaUfqpk/hyohl0Y=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 05/14] kernel-doc: public/features.h
Date: Tue, 20 Oct 2020 17:00:02 -0700
Message-Id: <20201021000011.15351-5-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/include/public/features.h | 78 ++++++++++++++++++++++++++---------
 1 file changed, 59 insertions(+), 19 deletions(-)

diff --git a/xen/include/public/features.h b/xen/include/public/features.h
index 1613b2aab8..524d1758c4 100644
--- a/xen/include/public/features.h
+++ b/xen/include/public/features.h
@@ -27,8 +27,8 @@
 #ifndef __XEN_PUBLIC_FEATURES_H__
 #define __XEN_PUBLIC_FEATURES_H__
 
-/*
- * `incontents 200 elfnotes_features XEN_ELFNOTE_FEATURES
+/**
+ * DOC: XEN_ELFNOTE_FEATURES
  *
  * The list of all the features the guest supports. They are set by
  * parsing the XEN_ELFNOTE_FEATURES and XEN_ELFNOTE_SUPPORTED_FEATURES
@@ -41,19 +41,25 @@
  * XENFEAT_dom0 MUST be set if the guest is to be booted as dom0,
  */
 
-/*
- * If set, the guest does not need to write-protect its pagetables, and can
- * update them via direct writes.
+/**
+ * DOC: XENFEAT_writable_page_tables
+ *
+ * If set, the guest does not need to write-protect its pagetables, and
+ * can update them via direct writes.
  */
 #define XENFEAT_writable_page_tables       0
 
-/*
+/**
+ * DOC: XENFEAT_writable_descriptor_tables
+ *
  * If set, the guest does not need to write-protect its segment descriptor
  * tables, and can update them via direct writes.
  */
 #define XENFEAT_writable_descriptor_tables 1
 
-/*
+/**
+ * DOC: XENFEAT_auto_translated_physmap
+ *
  * If set, translation between the guest's 'pseudo-physical' address space
  * and the host's machine address space are handled by the hypervisor. In this
  * mode the guest does not need to perform phys-to/from-machine translations
@@ -61,37 +67,63 @@
  */
 #define XENFEAT_auto_translated_physmap    2
 
-/* If set, the guest is running in supervisor mode (e.g., x86 ring 0). */
+/**
+ * DOC: XENFEAT_supervisor_mode_kernel
+ *
+ * If set, the guest is running in supervisor mode (e.g., x86 ring 0).
+ */
 #define XENFEAT_supervisor_mode_kernel     3
 
-/*
+/**
+ * DOC: XENFEAT_pae_pgdir_above_4gb
+ *
  * If set, the guest does not need to allocate x86 PAE page directories
  * below 4GB. This flag is usually implied by auto_translated_physmap.
  */
 #define XENFEAT_pae_pgdir_above_4gb        4
 
-/* x86: Does this Xen host support the MMU_PT_UPDATE_PRESERVE_AD hypercall? */
+/**
+ * DOC: XENFEAT_mmu_pt_update_preserve_ad
+ * x86: Does this Xen host support the MMU_PT_UPDATE_PRESERVE_AD hypercall?
+ */
 #define XENFEAT_mmu_pt_update_preserve_ad  5
 
-/* x86: Does this Xen host support the MMU_{CLEAR,COPY}_PAGE hypercall? */
+/**
+ * DOC: XENFEAT_highmem_assist
+ * x86: Does this Xen host support the MMU_{CLEAR,COPY}_PAGE hypercall?
+ */
 #define XENFEAT_highmem_assist             6
 
-/*
+/**
+ * DOC: XENFEAT_gnttab_map_avail_bits
+ *
  * If set, GNTTABOP_map_grant_ref honors flags to be placed into guest kernel
  * available pte bits.
  */
 #define XENFEAT_gnttab_map_avail_bits      7
 
-/* x86: Does this Xen host support the HVM callback vector type? */
+/**
+ * DOC: XENFEAT_hvm_callback_vector
+ * x86: Does this Xen host support the HVM callback vector type?
+ */
 #define XENFEAT_hvm_callback_vector        8
 
-/* x86: pvclock algorithm is safe to use on HVM */
+/**
+ * DOC: XENFEAT_hvm_safe_pvclock
+ * x86: pvclock algorithm is safe to use on HVM
+ */
 #define XENFEAT_hvm_safe_pvclock           9
 
-/* x86: pirq can be used by HVM guests */
+/**
+ * DOC: XENFEAT_hvm_pirqs
+ * x86: pirq can be used by HVM guests
+ */
 #define XENFEAT_hvm_pirqs                 10
 
-/* operation as Dom0 is supported */
+/**
+ * DOC: XENFEAT_dom0
+ * operation as Dom0 is supported
+ */
 #define XENFEAT_dom0                      11
 
 /* Xen also maps grant references at pfn = mfn.
@@ -99,13 +131,21 @@
 #define XENFEAT_grant_map_identity        12
  */
 
-/* Guest can use XENMEMF_vnode to specify virtual node for memory op. */
+/**
+ * DOC: XENFEAT_memory_op_vnode_supported
+ * Guest can use XENMEMF_vnode to specify virtual node for memory op.
+ */
 #define XENFEAT_memory_op_vnode_supported 13
 
-/* arm: Hypervisor supports ARM SMC calling convention. */
+/**
+ * DOC: XENFEAT_ARM_SMCCC_supported
+ * arm: Hypervisor supports ARM SMC calling convention.
+ */
 #define XENFEAT_ARM_SMCCC_supported       14
 
-/*
+/**
+ * DOC: XENFEAT_linux_rsdp_unrestricted
+ *
  * x86/PVH: If set, ACPI RSDP can be placed at any address. Otherwise RSDP
  * must be located in lower 1MB, as required by ACPI Specification for IA-PC
  * systems.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9793.25815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1Xw-0007Lq-WA; Wed, 21 Oct 2020 00:00:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9793.25815; Wed, 21 Oct 2020 00:00:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1Xw-0007LP-TA; Wed, 21 Oct 2020 00:00:04 +0000
Received: by outflank-mailman (input) for mailman id 9793;
 Wed, 21 Oct 2020 00:00:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1Xv-00074U-A9
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26a960f5-de87-4614-b387-b20100d01318;
 Wed, 21 Oct 2020 00:00:02 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id ED62C2076A;
 Wed, 21 Oct 2020 00:00:00 +0000 (UTC)
X-Inumbo-ID: 26a960f5-de87-4614-b387-b20100d01318
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238401;
	bh=6ifM7mWeMV6n2fSjpynMysrrMZX/rqHLO3hhKeiec/I=;
	h=Date:From:To:cc:Subject:From;
	b=RU7wK8H0qMqFdGLH/ZepEr+VqIQLfpu5gcUjLYodS6Oa3EN8GDDBZ4f76fVydfUjb
	 zm22Wx9FZk2v9a3N7rJtvn/76lWhsESEoRmsw8NGHj27sxWI7twVBIZIrQMZchbhjL
	 bKXW/5IQTYUsrATkaD+3D+zA4NRVH7NTzcM85VUA=
Date: Tue, 20 Oct 2020 17:00:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: sstabellini@kernel.org, andrew.cooper3@citrix.com, 
    george.dunlap@citrix.com, ian.jackson@eu.citrix.com, jbeulich@suse.com, 
    julien@xen.org, wl@xen.org, Bertrand.Marquis@arm.com
Subject: [PATCH v2 00/14] kernel-doc: public/arch-arm.h
Message-ID: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

This patch series converts Xen in-code comments to the kernel-doc (and
doxygen) format:

https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html

Please note that this patch series is meant as groundwork. The end goal
is to enable more sophisticated document generation with doxygen; see:
https://marc.info/?l=xen-devel&m=160220627022339


# Changes compared to v1:
- addressed review comments
- use oneline comments even for nested struct members



# WHAT WAS CONVERTED

I started from the public/ header files as I thought they are the most
important ones to generate documentation for.

I didn't cover all files under xen/include/public/, but we don't have to
boil the ocean in one go.

For the header files I addressed, I covered all in-code comments except
for a very few cases where the conversion to kernel-doc format wasn't
easily doable without major changes to the comments/code.

The conversion was done by hand (sigh!) but was mechanical and purely
stylistic: I didn't change the content of the comments, except in a
couple of places to make the English read right, e.g. where a comment
had been split into two comments.


# THE KERNEL-DOC KEYWORDS USED

I used the "struct" keyword for structures, i.e.:

/**
 * struct foobar
 */

"struct" makes kernel-doc look at the struct that follows in the code,
parse the struct member comments, and generate documentation optimized
for describing a struct. Note that in these cases the struct needs to
immediately follow the comment. Thus, in a few places I had to move a
#define that sat between the comment and the struct.

Also note that kernel-doc supports nested structs, but due to a quirk,
comments for nested struct members are not recognized if they are on a
single line. Still, this version of the series uses single-line
comments, with the idea of fixing the document-generation tool later.


I used the "DOC" keyword otherwise. "DOC" is freeform, not particularly
tied to anything that follows (functions, enums, etc.). I kept a blank
line between the "DOC:" line and the comment body when the comment is
multiline, and no blank line when it is a single line.

  /**
   * DOC: doc1
   * single line comment
   */

  /**
   * DOC: doc2
   *
   * this is
   * multiline
   */

DOC doesn't generate any cross-document links, but it is still a great
place to start, as it makes the in-code comments immediately available
as documents. Linking and references can be added later.


# HOW TO TEST IT

Simply run kernel-doc on a header file, for instance:

  ../linux/scripts/kernel-doc xen/include/public/event_channel.h > /tmp/doc.rst

You can inspect the rst file, and also generate an HTML file out of it
with Sphinx:

  sphinx-quickstart
  sphinx-build . /path/to/out

Cheers,

Stefano




The following changes since commit 3b49791e4cc2f38dd84bf331b75217adaef636e3:

  xen/arm: Print message if reset did not work (2020-10-20 13:20:31 -0700)

are available in the Git repository at:

  http://xenbits.xenproject.org/git-http/people/sstabellini/xen-unstable.git hyp-docs-2

for you to fetch changes up to 393bd090ae4f09bc68aa35af74e087cd4615be5a:

  kernel-doc: public/hvm/params.h (2020-10-20 16:45:54 -0700)

----------------------------------------------------------------
Stefano Stabellini (14):
      kernel-doc: public/arch-arm.h
      kernel-doc: public/hvm/hvm_op.h
      kernel-doc: public/device_tree_defs.h
      kernel-doc: public/event_channel.h
      kernel-doc: public/features.h
      kernel-doc: public/grant_table.h
      kernel-doc: public/hypfs.h
      kernel-doc: public/memory.h
      kernel-doc: public/sched.h
      kernel-doc: public/vcpu.h
      kernel-doc: public/version.h
      kernel-doc: public/xen.h
      kernel-doc: public/elfnote.h
      kernel-doc: public/hvm/params.h

 xen/include/public/arch-arm.h         |  43 ++-
 xen/include/public/device_tree_defs.h |  24 +-
 xen/include/public/elfnote.h          | 109 +++++--
 xen/include/public/event_channel.h    | 184 ++++++-----
 xen/include/public/features.h         |  78 +++--
 xen/include/public/grant_table.h      | 447 +++++++++++++++------------
 xen/include/public/hvm/hvm_op.h       |  20 +-
 xen/include/public/hvm/params.h       | 153 +++++++--
 xen/include/public/hypfs.h            |  72 +++--
 xen/include/public/memory.h           | 236 +++++++++-----
 xen/include/public/sched.h            | 134 +++++---
 xen/include/public/vcpu.h             | 180 ++++++++---
 xen/include/public/version.h          |  73 ++++-
 xen/include/public/xen.h              | 566 ++++++++++++++++++++++------------
 14 files changed, 1544 insertions(+), 775 deletions(-)


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9794.25827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1Y8-0007yH-9z; Wed, 21 Oct 2020 00:00:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9794.25827; Wed, 21 Oct 2020 00:00:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1Y8-0007yA-6O; Wed, 21 Oct 2020 00:00:16 +0000
Received: by outflank-mailman (input) for mailman id 9794;
 Wed, 21 Oct 2020 00:00:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1Y7-0007xs-8i
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:15 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 83671988-2886-4227-9646-90ec5a7dc997;
 Wed, 21 Oct 2020 00:00:14 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9986F2076A;
 Wed, 21 Oct 2020 00:00:12 +0000 (UTC)
X-Inumbo-ID: 83671988-2886-4227-9646-90ec5a7dc997
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238413;
	bh=ZAlufUARB+MO+hXOZ2+s0SU7+f17uYJTPwjBRMscVvo=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=FhgXC9FX40tFcNlZf6C6aZksWsXnRjInguehqK+hTOgWCHUZCIoF2oDc9uHPOUVyG
	 Zbt5OYxarr4QlfO5uOsZi/WDAJEcHqNzypnCUpnojASzzz4UL2gF2IUfQcVMLrB3Oi
	 kaEyrrrfEL3YAs0sjHxL1vaTJZ/oJvXTS0+W2hKs=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 01/14] kernel-doc: public/arch-arm.h
Date: Tue, 20 Oct 2020 16:59:58 -0700
Message-Id: <20201021000011.15351-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/include/public/arch-arm.h | 43 ++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 16 deletions(-)

diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index c365b1b39e..409697dede 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -27,8 +27,8 @@
 #ifndef __XEN_PUBLIC_ARCH_ARM_H__
 #define __XEN_PUBLIC_ARCH_ARM_H__
 
-/*
- * `incontents 50 arm_abi Hypercall Calling Convention
+/**
+ * DOC: Hypercall Calling Convention
  *
  * A hypercall is issued using the ARM HVC instruction.
  *
@@ -72,8 +72,8 @@
  * Any cache allocation hints are acceptable.
  */
 
-/*
- * `incontents 55 arm_hcall Supported Hypercalls
+/**
+ * DOC: Supported Hypercalls
  *
  * Xen on ARM makes extensive use of hardware facilities and therefore
  * only a subset of the potential hypercalls are required.
@@ -175,10 +175,17 @@
     typedef union { type *p; uint64_aligned_t q; }              \
         __guest_handle_64_ ## name
 
-/*
+/**
+ * DOC: XEN_GUEST_HANDLE - a guest pointer in a struct
+ *
  * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
  * in a struct in memory. On ARM is always 8 bytes sizes and 8 bytes
  * aligned.
+ */
+
+/**
+ * DOC: XEN_GUEST_HANDLE_PARAM - a guest pointer as a hypercall arg
+ *
  * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as an
  * hypercall argument. It is 4 bytes on aarch32 and 8 bytes on aarch64.
  */
@@ -201,7 +208,9 @@ typedef uint64_t xen_pfn_t;
 #define PRI_xen_pfn PRIx64
 #define PRIu_xen_pfn PRIu64
 
-/*
+/**
+ * DOC: XEN_LEGACY_MAX_VCPUS
+ *
  * Maximum number of virtual CPUs in legacy multi-processor guests.
  * Only one. All other VCPUS must use VCPUOP_register_vcpu_info.
  */
@@ -299,26 +308,28 @@ struct vcpu_guest_context {
 typedef struct vcpu_guest_context vcpu_guest_context_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
 
-/*
+
+/**
+ * struct xen_arch_domainconfig - arch-specific domain creation params
+ *
  * struct xen_arch_domainconfig's ABI is covered by
  * XEN_DOMCTL_INTERFACE_VERSION.
  */
+struct xen_arch_domainconfig {
+    /** @gic_version: IN/OUT parameter */
 #define XEN_DOMCTL_CONFIG_GIC_NATIVE    0
 #define XEN_DOMCTL_CONFIG_GIC_V2        1
 #define XEN_DOMCTL_CONFIG_GIC_V3        2
-
+    uint8_t gic_version;
+    /** @tee_type: IN parameter */
 #define XEN_DOMCTL_CONFIG_TEE_NONE      0
 #define XEN_DOMCTL_CONFIG_TEE_OPTEE     1
-
-struct xen_arch_domainconfig {
-    /* IN/OUT */
-    uint8_t gic_version;
-    /* IN */
     uint16_t tee_type;
-    /* IN */
+    /** @nr_spis: IN parameter */
     uint32_t nr_spis;
-    /*
-     * OUT
+    /**
+     * @clock_frequency: OUT parameter
+     *
      * Based on the property clock-frequency in the DT timer node.
      * The property may be present when the bootloader/firmware doesn't
      * set correctly CNTFRQ which hold the timer frequency.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9797.25863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YE-00085T-6W; Wed, 21 Oct 2020 00:00:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9797.25863; Wed, 21 Oct 2020 00:00:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YE-00085M-1N; Wed, 21 Oct 2020 00:00:22 +0000
Received: by outflank-mailman (input) for mailman id 9797;
 Wed, 21 Oct 2020 00:00:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YC-0007xs-48
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 347a1fe3-2152-4264-a255-5bf320b4ff86;
 Wed, 21 Oct 2020 00:00:14 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4D3FF223BF;
 Wed, 21 Oct 2020 00:00:13 +0000 (UTC)
X-Inumbo-ID: 347a1fe3-2152-4264-a255-5bf320b4ff86
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238413;
	bh=/OW4xiRAVY2DWXEVmKmaedQ+klcsJQrs8oEIFZrDdxM=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=re4LNmevAUE/3b7wg3M4t8PkqKguIpGkMTpTjupMaq0wCsOiW5JKsEzuE/jkBcSJp
	 xDhqSlaAVJzcctQ4bu7vPXszvWayt18eeoEZR2qHnn7Dol1BNwyirErPO8fbnIjUMb
	 mjjgsZXjxgyLGokwCw7EU9IQFry2SscHWHIJnq5E=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 02/14] kernel-doc: public/hvm/hvm_op.h
Date: Tue, 20 Oct 2020 16:59:59 -0700
Message-Id: <20201021000011.15351-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/include/public/hvm/hvm_op.h | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 870ec52060..d62d3a96f8 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -27,14 +27,26 @@
 #include "../trace.h"
 #include "../event_channel.h"
 
-/* Get/set subcommands: extra argument == pointer to xen_hvm_param struct. */
+/**
+ * DOC: HVMOP_set_param and HVMOP_get_param
+ *
+ * Get/set subcommands: extra argument == pointer to xen_hvm_param struct.
+ */
 #define HVMOP_set_param           0
 #define HVMOP_get_param           1
+
+/**
+ * struct xen_hvm_param
+ */
 struct xen_hvm_param {
-    domid_t  domid;    /* IN */
+    /** @domid: IN parameter */
+    domid_t  domid;
+    /** @pad: padding */
     uint16_t pad;
-    uint32_t index;    /* IN */
-    uint64_t value;    /* IN/OUT */
+    /** @index: IN parameter */
+    uint32_t index;
+    /** @value: IN/OUT parameter */
+    uint64_t value;
 };
 typedef struct xen_hvm_param xen_hvm_param_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_param_t);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9798.25869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YE-00086M-ML; Wed, 21 Oct 2020 00:00:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9798.25869; Wed, 21 Oct 2020 00:00:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YE-00086B-CB; Wed, 21 Oct 2020 00:00:22 +0000
Received: by outflank-mailman (input) for mailman id 9798;
 Wed, 21 Oct 2020 00:00:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YC-0007y0-Np
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b007146-9efb-40e7-b4a0-3ffe504a0327;
 Wed, 21 Oct 2020 00:00:17 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id EF39E22453;
 Wed, 21 Oct 2020 00:00:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238416;
	bh=lGAJw225AnfmH/iLrtmRBGy8z5pR0eJow9EFtjbC/ag=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=nC17O/yOYMTgsZw0VmmMORL3fzcYjL0M8tCaYzntP3Su6MbxMAF7KxsnGzNKpbAS5
	 InbPQQYVgXPiRlaO7236mKgLKy/pN8eTrDWaXnZhvyK4yrxk0WcziThhky8iNz985z
	 xu32sI/I26dI1D6yKXt617bcMpMTt7oesWwF3Y24=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 06/14] kernel-doc: public/grant_table.h
Date: Tue, 20 Oct 2020 17:00:03 -0700
Message-Id: <20201021000011.15351-6-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- remove "enum" comments
- typedef grant_ref_t in the comment
- remove "New guests should use version 2."
- remove two redundant blanks
---
 xen/include/public/grant_table.h | 447 ++++++++++++++++++-------------
 1 file changed, 256 insertions(+), 191 deletions(-)

diff --git a/xen/include/public/grant_table.h b/xen/include/public/grant_table.h
index 3b7bf93d74..2fd50cc46f 100644
--- a/xen/include/public/grant_table.h
+++ b/xen/include/public/grant_table.h
@@ -30,8 +30,8 @@
 
 #include "xen.h"
 
-/*
- * `incontents 150 gnttab Grant Tables
+/**
+ * DOC: Grant Tables
  *
  * Xen's grant tables provide a generic mechanism to memory sharing
  * between domains. This shared memory interface underpins the split
@@ -53,11 +53,10 @@
  * fully virtualised memory.
  */
 
-/***********************************
- * GRANT TABLE REPRESENTATION
- */
-
-/* Some rough guidelines on accessing and updating grant-table entries
+/**
+ * DOC: GRANT TABLE REPRESENTATION
+ *
+ * Some rough guidelines on accessing and updating grant-table entries
  * in a concurrency-safe manner. For more information, Linux contains a
  * reference implementation for guest OSes (drivers/xen/grant_table.c, see
  * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
@@ -108,56 +107,66 @@
  *  Use SMP-safe bit-setting instruction.
  */
 
-/*
+/**
+ * typedef grant_ref_t
  * Reference to a grant entry in a specified domain's grant table.
  */
 typedef uint32_t grant_ref_t;
 
-/*
+/**
+ * DOC: grant table
+ *
  * A grant table comprises a packed array of grant entries in one or more
  * page frames shared between Xen and a guest.
  * [XEN]: This field is written by Xen and read by the sharing guest.
  * [GST]: This field is written by the guest and read by Xen.
  */
 
-/*
- * Version 1 of the grant table entry structure is maintained purely
- * for backwards compatibility.  New guests should use version 2.
- */
 #if __XEN_INTERFACE_VERSION__ < 0x0003020a
 #define grant_entry_v1 grant_entry
 #define grant_entry_v1_t grant_entry_t
 #endif
+/**
+ * struct grant_entry_v1
+ *
+ * Version 1 of the grant table entry structure is maintained purely
+ * for backwards compatibility.
+ */
 struct grant_entry_v1 {
-    /* GTF_xxx: various type and flag information.  [XEN,GST] */
+    /** @flags: GTF_xxx, various type and flag information.  [XEN,GST] */
     uint16_t flags;
-    /* The domain being granted foreign privileges. [GST] */
+    /** @domid: The domain being granted foreign privileges. [GST] */
     domid_t  domid;
-    /*
-     * GTF_permit_access: GFN that @domid is allowed to map and access. [GST]
-     * GTF_accept_transfer: GFN that @domid is allowed to transfer into. [GST]
-     * GTF_transfer_completed: MFN whose ownership transferred by @domid
-     *                         (non-translated guests only). [XEN]
+    /**
+     * @frame:
+     * - GTF_permit_access: GFN that @domid is allowed to map and access. [GST]
+     * - GTF_accept_transfer: GFN that @domid is allowed to transfer into. [GST]
+     * - GTF_transfer_completed: MFN whose ownership transferred by @domid
+     *                           (non-translated guests only). [XEN]
      */
     uint32_t frame;
 };
 typedef struct grant_entry_v1 grant_entry_v1_t;
 
-/* The first few grant table entries will be preserved across grant table
+/**
+ * DOC: GNTTAB_NR_RESERVED_ENTRIES
+ *
+ * The first few grant table entries will be preserved across grant table
  * version changes and may be pre-populated at domain creation by tools.
  */
 #define GNTTAB_NR_RESERVED_ENTRIES     8
 #define GNTTAB_RESERVED_CONSOLE        0
 #define GNTTAB_RESERVED_XENSTORE       1
 
-/*
- * Type of grant entry.
- *  GTF_invalid: This grant entry grants no privileges.
- *  GTF_permit_access: Allow @domid to map/access @frame.
- *  GTF_accept_transfer: Allow @domid to transfer ownership of one page frame
- *                       to this guest. Xen writes the page number to @frame.
- *  GTF_transitive: Allow @domid to transitively access a subrange of
- *                  @trans_grant in @trans_domid.  No mappings are allowed.
+/**
+ * DOC: Type of grant entry.
+ *
+ * - GTF_invalid: This grant entry grants no privileges.
+ * - GTF_permit_access: Allow @domid to map/access @frame.
+ * - GTF_accept_transfer: Allow @domid to transfer ownership of one page frame
+ *                        to this guest. Xen writes the page number to @frame.
+ * - GTF_transitive: Allow @domid to transitively access a subrange of
+ *                   @trans_grant in @trans_domid.  No mappings are allowed.
  */
 #define GTF_invalid         (0U<<0)
 #define GTF_permit_access   (1U<<0)
@@ -165,15 +174,16 @@ typedef struct grant_entry_v1 grant_entry_v1_t;
 #define GTF_transitive      (3U<<0)
 #define GTF_type_mask       (3U<<0)
 
-/*
- * Subflags for GTF_permit_access.
- *  GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST]
- *  GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN]
- *  GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN]
- *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags for the grant [GST]
- *  GTF_sub_page: Grant access to only a subrange of the page.  @domid
- *                will only be allowed to copy from the grant, and not
- *                map it. [GST]
+/**
+ * DOC: Subflags for GTF_permit_access.
+ *
+ * - GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST]
+ * - GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN]
+ * - GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN]
+ * - GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags for the grant [GST]
+ * - GTF_sub_page: Grant access to only a subrange of the page.  @domid
+ *                 will only be allowed to copy from the grant, and not
+ *                 map it. [GST]
  */
 #define _GTF_readonly       (2)
 #define GTF_readonly        (1U<<_GTF_readonly)
@@ -190,8 +200,9 @@ typedef struct grant_entry_v1 grant_entry_v1_t;
 #define _GTF_sub_page       (8)
 #define GTF_sub_page        (1U<<_GTF_sub_page)
 
-/*
- * Subflags for GTF_accept_transfer:
+/**
+ * DOC: Subflags for GTF_accept_transfer:
+ *
  *  GTF_transfer_committed: Xen sets this flag to indicate that it is committed
  *      to transferring ownership of a page frame. When a guest sees this flag
  *      it must /not/ modify the grant entry until GTF_transfer_completed is
@@ -205,38 +216,43 @@ typedef struct grant_entry_v1 grant_entry_v1_t;
 #define _GTF_transfer_completed (3)
 #define GTF_transfer_completed  (1U<<_GTF_transfer_completed)
 
-/*
- * Version 2 grant table entries.  These fulfil the same role as
- * version 1 entries, but can represent more complicated operations.
- * Any given domain will have either a version 1 or a version 2 table,
- * and every entry in the table will be the same version.
+#if __XEN_INTERFACE_VERSION__ >= 0x0003020a
+/**
+ * DOC: Version 2 grant table entries
+ *
+ * These fulfil the same role as version 1 entries, but can represent
+ * more complicated operations.  Any given domain will have either a
+ * version 1 or a version 2 table, and every entry in the table will be
+ * the same version.
  *
  * The interface by which domains use grant references does not depend
  * on the grant table version in use by the other domain.
- */
-#if __XEN_INTERFACE_VERSION__ >= 0x0003020a
-/*
+ *
  * Version 1 and version 2 grant entries share a common prefix.  The
  * fields of the prefix are documented as part of struct
  * grant_entry_v1.
  */
+/**
+ * struct grant_entry_header
+ */
 struct grant_entry_header {
     uint16_t flags;
     domid_t  domid;
 };
 typedef struct grant_entry_header grant_entry_header_t;
 
-/*
- * Version 2 of the grant entry structure.
+/**
+ * union grant_entry_v2 - Version 2 of the grant entry structure.
  */
 union grant_entry_v2 {
     grant_entry_header_t hdr;
 
-    /*
+    /**
+     * @full_page:
      * This member is used for V1-style full page grants, where either:
      *
-     * -- hdr.type is GTF_accept_transfer, or
-     * -- hdr.type is GTF_permit_access and GTF_sub_page is not set.
+     * - hdr.type is GTF_accept_transfer, or
+     * - hdr.type is GTF_permit_access and GTF_sub_page is not set.
      *
      * In that case, the frame field has the same semantics as the
      * field of the same name in the V1 entry structure.
@@ -247,7 +263,9 @@ union grant_entry_v2 {
         uint64_t frame;
     } full_page;
 
-    /*
+    /**
+     * @sub_page:
+     *
      * If the grant type is GTF_grant_access and GTF_sub_page is set,
      * @domid is allowed to access bytes [@page_off,@page_off+@length)
      * in frame @frame.
@@ -259,7 +277,9 @@ union grant_entry_v2 {
         uint64_t frame;
     } sub_page;
 
-    /*
+    /**
+     * @transitive:
+     *
      * If the grant is GTF_transitive, @domid is allowed to use the
      * grant @gref in domain @trans_domid, as if it was the local
      * domain.  Obviously, the transitive access must be compatible
@@ -275,7 +295,8 @@ union grant_entry_v2 {
         grant_ref_t gref;
     } transitive;
 
-    uint32_t __spacer[4]; /* Pad to a power of two */
+    /** @__spacer: Pad to a power of two */
+    uint32_t __spacer[4];
 };
 typedef union grant_entry_v2 grant_entry_v2_t;
 
@@ -283,21 +304,17 @@ typedef uint16_t grant_status_t;
 
 #endif /* __XEN_INTERFACE_VERSION__ */
 
-/***********************************
- * GRANT TABLE QUERIES AND USES
- */
-
-/* ` enum neg_errnoval
- * ` HYPERVISOR_grant_table_op(enum grant_table_op cmd,
- * `                           void *args,
- * `                           unsigned int count)
- * `
+/**
+ * DOC: GRANT TABLE QUERIES AND USES
  *
+ * enum neg_errnoval
+ * HYPERVISOR_grant_table_op(enum grant_table_op cmd,
+ *                           void *args,
+ *                           unsigned int count)
+ *
  * @args points to an array of a per-command data structure. The array
  * has @count members
  */
-
-/* ` enum grant_table_op { // GNTTABOP_* => struct gnttab_* */
 #define GNTTABOP_map_grant_ref        0
 #define GNTTABOP_unmap_grant_ref      1
 #define GNTTABOP_setup_table          2
@@ -313,18 +330,20 @@ typedef uint16_t grant_status_t;
 #define GNTTABOP_swap_grant_ref	      11
 #define GNTTABOP_cache_flush	      12
 #endif /* __XEN_INTERFACE_VERSION__ */
-/* ` } */
 
-/*
+/**
+ * typedef grant_handle_t
  * Handle to track a mapping created via a grant reference.
  */
 typedef uint32_t grant_handle_t;
 
-/*
- * GNTTABOP_map_grant_ref: Map the grant entry (<dom>,<ref>) for access
- * by devices and/or host CPUs. If successful, <handle> is a tracking number
- * that must be presented later to destroy the mapping(s). On error, <status>
- * is a negative status code.
+/**
+ * struct gnttab_map_grant_ref - GNTTABOP_map_grant_ref
+ *
+ * Map the grant entry (<dom>,<ref>) for access by devices and/or host
+ * CPUs. If successful, <handle> is a tracking number that must be
+ * presented later to destroy the mapping(s). On error, <status> is a
+ * negative status code.
  * NOTES:
  *  1. If GNTMAP_device_map is specified then <dev_bus_addr> is the address
  *     via which I/O devices may access the granted frame.
@@ -338,24 +357,31 @@ typedef uint32_t grant_handle_t;
  *     to be accounted to the correct grant reference!
  */
 struct gnttab_map_grant_ref {
-    /* IN parameters. */
+    /** @host_addr: IN parameter */
     uint64_t host_addr;
-    uint32_t flags;               /* GNTMAP_* */
+    /** @flags: IN parameter, GNTMAP_* */
+    uint32_t flags;
+    /** @ref: IN parameter */
     grant_ref_t ref;
+    /** @dom: IN parameter */
     domid_t  dom;
-    /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    /** @status: OUT parameter, enum grant_status */
+    int16_t  status;
+    /** @handle: OUT parameter */
     grant_handle_t handle;
+    /** @dev_bus_addr: OUT parameter */
     uint64_t dev_bus_addr;
 };
 typedef struct gnttab_map_grant_ref gnttab_map_grant_ref_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_map_grant_ref_t);
 
-/*
- * GNTTABOP_unmap_grant_ref: Destroy one or more grant-reference mappings
- * tracked by <handle>. If <host_addr> or <dev_bus_addr> is zero, that
- * field is ignored. If non-zero, they must refer to a device/host mapping
- * that is tracked by <handle>
+/**
+ * struct gnttab_unmap_grant_ref - GNTTABOP_unmap_grant_ref
+ *
+ * Destroy one or more grant-reference mappings tracked by <handle>. If
+ * <host_addr> or <dev_bus_addr> is zero, that field is ignored. If
+ * non-zero, they must refer to a device/host mapping that is tracked by
+ * <handle>
  * NOTES:
  *  1. The call may fail in an undefined manner if either mapping is not
  *     tracked by <handle>.
@@ -363,31 +389,37 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_map_grant_ref_t);
  *     mappings will remain in the device or host TLBs.
  */
 struct gnttab_unmap_grant_ref {
-    /* IN parameters. */
+    /** @host_addr: IN parameter */
     uint64_t host_addr;
+    /** @dev_bus_addr: IN parameter */
     uint64_t dev_bus_addr;
+    /** @handle: IN parameter */
     grant_handle_t handle;
-    /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    /** @status: OUT parameter, enum grant_status */
+    int16_t  status;
 };
 typedef struct gnttab_unmap_grant_ref gnttab_unmap_grant_ref_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t);
 
-/*
- * GNTTABOP_setup_table: Set up a grant table for <dom> comprising at least
- * <nr_frames> pages. The frame addresses are written to the <frame_list>.
- * Only <nr_frames> addresses are written, even if the table is larger.
+/**
+ * struct gnttab_setup_table - GNTTABOP_setup_table
+ *
+ * Set up a grant table for <dom> comprising at least <nr_frames> pages.
+ * The frame addresses are written to the <frame_list>.  Only
+ * <nr_frames> addresses are written, even if the table is larger.
  * NOTES:
  *  1. <dom> may be specified as DOMID_SELF.
  *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
  *  3. Xen may not support more than a single grant-table page per domain.
  */
 struct gnttab_setup_table {
-    /* IN parameters. */
+    /** @dom: IN parameter */
     domid_t  dom;
+    /** @nr_frames: IN parameter */
     uint32_t nr_frames;
-    /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    /** @status: OUT parameter, enum grant_status */
+    int16_t  status;
+    /** @frame_list: OUT parameter */
 #if __XEN_INTERFACE_VERSION__ < 0x00040300
     XEN_GUEST_HANDLE(ulong) frame_list;
 #else
@@ -397,21 +429,25 @@ struct gnttab_setup_table {
 typedef struct gnttab_setup_table gnttab_setup_table_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_t);
 
-/*
- * GNTTABOP_dump_table: Dump the contents of the grant table to the
- * xen console. Debugging use only.
+/**
+ * struct gnttab_dump_table - GNTTABOP_dump_table
+ *
+ * Dump the contents of the grant table to the xen console. Debugging
+ * use only.
  */
 struct gnttab_dump_table {
-    /* IN parameters. */
+    /** @dom: IN parameter */
     domid_t dom;
-    /* OUT parameters. */
-    int16_t status;               /* => enum grant_status */
+    /** @status: OUT parameter, enum grant_status */
+    int16_t status;
 };
 typedef struct gnttab_dump_table gnttab_dump_table_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_dump_table_t);
 
-/*
- * GNTTABOP_transfer: Transfer <frame> to a foreign domain. The foreign domain
+/**
+ * struct gnttab_transfer - GNTTABOP_transfer
+ *
+ * Transfer <frame> to a foreign domain. The foreign domain
  * has previously registered its interest in the transfer via <domid, ref>.
  *
  * Note that, even if the transfer fails, the specified page no longer belongs
@@ -420,19 +456,26 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_dump_table_t);
  * Note further that only PV guests can use this operation.
  */
 struct gnttab_transfer {
-    /* IN parameters. */
+    /** @mfn: IN parameter */
     xen_pfn_t     mfn;
+    /** @domid: IN parameter */
     domid_t       domid;
+    /** @ref: IN parameter */
     grant_ref_t   ref;
-    /* OUT parameters. */
+    /** @status: OUT parameter */
     int16_t       status;
 };
 typedef struct gnttab_transfer gnttab_transfer_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
 
 
-/*
- * GNTTABOP_copy: Hypervisor based copy
+#define _GNTCOPY_source_gref      (0)
+#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
+#define _GNTCOPY_dest_gref        (1)
+#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
+/**
+ * struct gnttab_copy - GNTTABOP_copy, Hypervisor based copy
+ *
  * source and destinations can be eithers MFNs or, for foreign domains,
  * grant references. the foreign domain has to grant read/write access
  * in its grant table.
@@ -448,14 +491,9 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
  * the offset in the target frame and  len specifies the number of
  * bytes to be copied.
  */
-
-#define _GNTCOPY_source_gref      (0)
-#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
-#define _GNTCOPY_dest_gref        (1)
-#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
-
 struct gnttab_copy {
-    /* IN parameters. */
+    /** @source: IN parameter */
+    /** @dest: IN parameter */
     struct gnttab_copy_ptr {
         union {
             grant_ref_t ref;
@@ -464,37 +502,44 @@ struct gnttab_copy {
         domid_t  domid;
         uint16_t offset;
     } source, dest;
+    /** @len: IN parameter */
     uint16_t      len;
-    uint16_t      flags;          /* GNTCOPY_* */
-    /* OUT parameters. */
+    /** @flags: IN parameter, GNTCOPY_* */
+    uint16_t      flags;
+    /** @status: OUT parameter */
     int16_t       status;
 };
 typedef struct gnttab_copy  gnttab_copy_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_copy_t);
 
-/*
- * GNTTABOP_query_size: Query the current and maximum sizes of the shared
- * grant table.
+/**
+ * struct gnttab_query_size - GNTTABOP_query_size
+ *
+ * Query the current and maximum sizes of the shared grant table.
  * NOTES:
  *  1. <dom> may be specified as DOMID_SELF.
  *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
  */
 struct gnttab_query_size {
-    /* IN parameters. */
+    /** @dom: IN parameter */
     domid_t  dom;
-    /* OUT parameters. */
+    /** @nr_frames: OUT parameter */
     uint32_t nr_frames;
+    /** @max_nr_frames: OUT parameter */
     uint32_t max_nr_frames;
-    int16_t  status;              /* => enum grant_status */
+    /** @status: OUT parameter, enum grant_status */
+    int16_t  status;
 };
 typedef struct gnttab_query_size gnttab_query_size_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_t);
 
-/*
- * GNTTABOP_unmap_and_replace: Destroy one or more grant-reference mappings
- * tracked by <handle> but atomically replace the page table entry with one
- * pointing to the machine address under <new_addr>.  <new_addr> will be
- * redirected to the null entry.
+/**
+ * struct gnttab_unmap_and_replace - GNTTABOP_unmap_and_replace
+ *
+ * Destroy one or more grant-reference mappings tracked by <handle> but
+ * atomically replace the page table entry with one pointing to the
+ * machine address under <new_addr>.  <new_addr> will be redirected to
+ * the null entry.
  * NOTES:
  *  1. The call may fail in an undefined manner if either mapping is not
  *     tracked by <handle>.
@@ -502,36 +547,42 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_t);
  *     mappings will remain in the device or host TLBs.
  */
 struct gnttab_unmap_and_replace {
-    /* IN parameters. */
+    /** @host_addr: IN parameter */
     uint64_t host_addr;
+    /** @new_addr: IN parameter */
     uint64_t new_addr;
+    /** @handle: IN parameter */
     grant_handle_t handle;
-    /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    /** @status: OUT parameter, enum grant_status */
+    int16_t  status;
 };
 typedef struct gnttab_unmap_and_replace gnttab_unmap_and_replace_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t);
 
 #if __XEN_INTERFACE_VERSION__ >= 0x0003020a
-/*
- * GNTTABOP_set_version: Request a particular version of the grant
- * table shared table structure.  This operation may be used to toggle
- * between different versions, but must be performed while no grants
- * are active.  The only defined versions are 1 and 2.
+/**
+ * struct gnttab_set_version - GNTTABOP_set_version
+ *
+ * Request a particular version of the grant table shared table
+ * structure.  This operation may be used to toggle between different
+ * versions, but must be performed while no grants are active.  The only
+ * defined versions are 1 and 2.
  */
 struct gnttab_set_version {
-    /* IN/OUT parameters */
+    /** @version: IN/OUT parameter */
     uint32_t version;
 };
 typedef struct gnttab_set_version gnttab_set_version_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_set_version_t);
 
 
-/*
- * GNTTABOP_get_status_frames: Get the list of frames used to store grant
- * status for <dom>. In grant format version 2, the status is separated
- * from the other shared grant fields to allow more efficient synchronization
- * using barriers instead of atomic cmpexch operations.
+/**
+ * struct gnttab_get_status_frames - GNTTABOP_get_status_frames
+ *
+ * Get the list of frames used to store grant status for <dom>. In grant
+ * format version 2, the status is separated from the other shared grant
+ * fields to allow more efficient synchronization using barriers instead
+ * of atomic cmpexch operations.
  * <nr_frames> specify the size of vector <frame_list>.
  * The frame addresses are returned in the <frame_list>.
  * Only <nr_frames> addresses are returned, even if the table is larger.
@@ -540,44 +591,53 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_set_version_t);
  *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
  */
 struct gnttab_get_status_frames {
-    /* IN parameters. */
+    /** @nr_frames: IN parameter */
     uint32_t nr_frames;
+    /** @dom: IN parameter */
     domid_t  dom;
-    /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    /** @status: OUT parameter, enum grant_status */
+    int16_t  status;
+    /** @frame_list: OUT parameter */
     XEN_GUEST_HANDLE(uint64_t) frame_list;
 };
 typedef struct gnttab_get_status_frames gnttab_get_status_frames_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_get_status_frames_t);
 
-/*
- * GNTTABOP_get_version: Get the grant table version which is in
- * effect for domain <dom>.
+/**
+ * struct gnttab_get_version - GNTTABOP_get_version
+ *
+ * Get the grant table version which is in effect for domain <dom>.
  */
 struct gnttab_get_version {
-    /* IN parameters */
+    /** @dom: IN parameter */
     domid_t dom;
+    /** @pad: padding */
     uint16_t pad;
-    /* OUT parameters */
+    /** @version: OUT parameter */
     uint32_t version;
 };
 typedef struct gnttab_get_version gnttab_get_version_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_get_version_t);
 
-/*
- * GNTTABOP_swap_grant_ref: Swap the contents of two grant entries.
+/**
+ * struct gnttab_swap_grant_ref - GNTTABOP_swap_grant_ref
+ *
+ * Swap the contents of two grant entries.
  */
 struct gnttab_swap_grant_ref {
-    /* IN parameters */
+    /** @ref_a: IN parameter */
     grant_ref_t ref_a;
+    /** @ref_b: IN parameter */
     grant_ref_t ref_b;
-    /* OUT parameters */
-    int16_t status;             /* => enum grant_status */
+    /** @status: OUT parameter, enum grant_status */
+    int16_t status;
 };
 typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
 
-/*
+/**
+ * struct gnttab_cache_flush - GNTTABOP_cache_flush
+ *
  * Issue one or more cache maintenance operations on a portion of a
  * page granted to the calling domain by a foreign domain.
  */
@@ -598,62 +658,67 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
 
 #endif /* __XEN_INTERFACE_VERSION__ */
 
-/*
- * Bitfield values for gnttab_map_grant_ref.flags.
+/**
+ * DOC: Bitfield values for gnttab_map_grant_ref.flags.
+ *
+ * - GNTMAP_device_map: Map the grant entry for access by I/O devices.
+ * - GNTMAP_host_map: Map the grant entry for access by host CPUs.
+ * - GNTMAP_readonly: Accesses to the granted frame will be restricted to read-only access.
+ * - GNTMAP_application_map: GNTMAP_host_map subflag; 0 => the host mapping is usable only by the guest OS,
+ *                           1 => usable by guest OS + current application.
+ * - GNTMAP_contains_pte: 0 => This map request contains a host virtual address.
+ *                        1 => This map request contains the machine address of the PTE to update.
+ * - GNTMAP_can_fail
+ * - GNTMAP_guest_avail_mask: Bits to be placed in guest kernel available PTE bits
+ *                            (architecture dependent; only supported when
+ *                             XENFEAT_gnttab_map_avail_bits is set).
  */
- /* Map the grant entry for access by I/O devices. */
 #define _GNTMAP_device_map      (0)
 #define GNTMAP_device_map       (1<<_GNTMAP_device_map)
- /* Map the grant entry for access by host CPUs. */
 #define _GNTMAP_host_map        (1)
 #define GNTMAP_host_map         (1<<_GNTMAP_host_map)
- /* Accesses to the granted frame will be restricted to read-only access. */
 #define _GNTMAP_readonly        (2)
 #define GNTMAP_readonly         (1<<_GNTMAP_readonly)
- /*
-  * GNTMAP_host_map subflag:
-  *  0 => The host mapping is usable only by the guest OS.
-  *  1 => The host mapping is usable by guest OS + current application.
-  */
 #define _GNTMAP_application_map (3)
 #define GNTMAP_application_map  (1<<_GNTMAP_application_map)
-
- /*
-  * GNTMAP_contains_pte subflag:
-  *  0 => This map request contains a host virtual address.
-  *  1 => This map request contains the machine addess of the PTE to update.
-  */
 #define _GNTMAP_contains_pte    (4)
 #define GNTMAP_contains_pte     (1<<_GNTMAP_contains_pte)
-
 #define _GNTMAP_can_fail        (5)
 #define GNTMAP_can_fail         (1<<_GNTMAP_can_fail)
-
-/*
- * Bits to be placed in guest kernel available PTE bits (architecture
- * dependent; only supported when XENFEAT_gnttab_map_avail_bits is set).
- */
 #define _GNTMAP_guest_avail0    (16)
 #define GNTMAP_guest_avail_mask ((uint32_t)~0 << _GNTMAP_guest_avail0)
 
-/*
- * Values for error status returns. All errors are -ve.
+/**
+ * DOC: Values for error status returns. All errors are -ve.
+ *
+ * - GNTST_okay: Normal return.
+ * - GNTST_general_error: General undefined error.
+ * - GNTST_bad_domain: Unrecognised domain id.
+ * - GNTST_bad_gntref: Unrecognised or inappropriate gntref.
+ * - GNTST_bad_handle: Unrecognised or inappropriate handle.
+ * - GNTST_bad_virt_addr: Inappropriate virtual address to map.
+ * - GNTST_bad_dev_addr: Inappropriate device address to unmap.
+ * - GNTST_no_device_space: Out of space in I/O MMU.
+ * - GNTST_permission_denied: Not enough privilege for operation.
+ * - GNTST_bad_page: Specified page was invalid for op.
+ * - GNTST_bad_copy_arg: copy arguments cross page boundary.
+ * - GNTST_address_too_big: transfer page address too large.
+ * - GNTST_eagain: Operation not done; try again.
+ *
  */
-/* ` enum grant_status { */
-#define GNTST_okay             (0)  /* Normal return.                        */
-#define GNTST_general_error    (-1) /* General undefined error.              */
-#define GNTST_bad_domain       (-2) /* Unrecognsed domain id.                */
-#define GNTST_bad_gntref       (-3) /* Unrecognised or inappropriate gntref. */
-#define GNTST_bad_handle       (-4) /* Unrecognised or inappropriate handle. */
-#define GNTST_bad_virt_addr    (-5) /* Inappropriate virtual address to map. */
-#define GNTST_bad_dev_addr     (-6) /* Inappropriate device address to unmap.*/
-#define GNTST_no_device_space  (-7) /* Out of space in I/O MMU.              */
-#define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
-#define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
-#define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary.   */
-#define GNTST_address_too_big (-11) /* transfer page address too large.      */
-#define GNTST_eagain          (-12) /* Operation not done; try again.        */
-/* ` } */
+#define GNTST_okay             (0)
+#define GNTST_general_error    (-1)
+#define GNTST_bad_domain       (-2)
+#define GNTST_bad_gntref       (-3)
+#define GNTST_bad_handle       (-4)
+#define GNTST_bad_virt_addr    (-5)
+#define GNTST_bad_dev_addr     (-6)
+#define GNTST_no_device_space  (-7)
+#define GNTST_permission_denied (-8)
+#define GNTST_bad_page         (-9)
+#define GNTST_bad_copy_arg    (-10)
+#define GNTST_address_too_big (-11)
+#define GNTST_eagain          (-12)
 
 #define GNTTABOP_error_msgs {                   \
     "okay",                                     \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9799.25887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YJ-0008Dv-93; Wed, 21 Oct 2020 00:00:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9799.25887; Wed, 21 Oct 2020 00:00:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YJ-0008Dk-4o; Wed, 21 Oct 2020 00:00:27 +0000
Received: by outflank-mailman (input) for mailman id 9799;
 Wed, 21 Oct 2020 00:00:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YH-0007xs-4B
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:25 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b15b8665-550e-4626-8ced-fbc5f125540d;
 Wed, 21 Oct 2020 00:00:15 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A22D922409;
 Wed, 21 Oct 2020 00:00:14 +0000 (UTC)
X-Inumbo-ID: b15b8665-550e-4626-8ced-fbc5f125540d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238415;
	bh=YgCTZXutLQJjVJMzhrxHs4z822UvEjWSqunwpdx9Yy4=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=HLcsKTmnDNw9oi2aG4j+lwSt5co3cNp8UFwLpyrTbXghcQcTy6ZQRZjp36dTWW7DS
	 5g6V6+7NGuC2jtfYthiA/fLM+kRuejN8GA6laJW7wbJwMOT/ZNLBNF2NkAj4otW/5c
	 fZ2NrFQkrTaFWSkAiMDkMdBUn4t/Vjqr0q2dLwYk=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 04/14] kernel-doc: public/event_channel.h
Date: Tue, 20 Oct 2020 17:00:01 -0700
Message-Id: <20201021000011.15351-4-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- use oneline comments even for nested struct members
- remove redundant "EVTCHNOP_status:" prefix
---
 xen/include/public/event_channel.h | 184 ++++++++++++++++++-----------
 1 file changed, 115 insertions(+), 69 deletions(-)

diff --git a/xen/include/public/event_channel.h b/xen/include/public/event_channel.h
index 73c9f38ce1..10b2d4d210 100644
--- a/xen/include/public/event_channel.h
+++ b/xen/include/public/event_channel.h
@@ -29,8 +29,8 @@
 
 #include "xen.h"
 
-/*
- * `incontents 150 evtchn Event Channels
+/**
+ * DOC: Event Channels
  *
  * Event channels are the basic primitive provided by Xen for event
  * notifications. An event is the Xen equivalent of a hardware
@@ -82,27 +82,34 @@
 typedef uint32_t evtchn_port_t;
 DEFINE_XEN_GUEST_HANDLE(evtchn_port_t);
 
-/*
- * EVTCHNOP_alloc_unbound: Allocate a port in domain <dom> and mark as
- * accepting interdomain bindings from domain <remote_dom>. A fresh port
- * is allocated in <dom> and returned as <port>.
+/**
+ * struct evtchn_alloc_unbound - EVTCHNOP_alloc_unbound
+ *
+ * Allocate a port in domain <dom> and mark as accepting interdomain
+ * bindings from domain <remote_dom>. A fresh port is allocated in <dom>
+ * and returned as <port>.
+ *
  * NOTES:
  *  1. If the caller is unprivileged then <dom> must be DOMID_SELF.
  *  2. <remote_dom> may be DOMID_SELF, allowing loopback connections.
  */
 struct evtchn_alloc_unbound {
-    /* IN parameters */
-    domid_t dom, remote_dom;
-    /* OUT parameters */
+    /** @dom: IN parameter */
+    domid_t dom;
+    /** @remote_dom: IN parameter */
+    domid_t remote_dom;
+    /** @port: OUT parameter */
     evtchn_port_t port;
 };
 typedef struct evtchn_alloc_unbound evtchn_alloc_unbound_t;
 
-/*
- * EVTCHNOP_bind_interdomain: Construct an interdomain event channel between
- * the calling domain and <remote_dom>. <remote_dom,remote_port> must identify
- * a port that is unbound and marked as accepting bindings from the calling
- * domain. A fresh port is allocated in the calling domain and returned as
+/**
+ * struct evtchn_bind_interdomain - EVTCHNOP_bind_interdomain
+ *
+ * Construct an interdomain event channel between the calling domain and
+ * <remote_dom>. <remote_dom,remote_port> must identify a port that is
+ * unbound and marked as accepting bindings from the calling domain. A
+ * fresh port is allocated in the calling domain and returned as
  * <local_port>.
  *
  * In case the peer domain has already tried to set our event channel
@@ -119,17 +126,20 @@ typedef struct evtchn_alloc_unbound evtchn_alloc_unbound_t;
  *  1. <remote_dom> may be DOMID_SELF, allowing loopback connections.
  */
 struct evtchn_bind_interdomain {
-    /* IN parameters. */
+    /** @remote_dom: IN parameter */
     domid_t remote_dom;
+    /** @remote_port: IN parameter */
     evtchn_port_t remote_port;
-    /* OUT parameters. */
+    /** @local_port: OUT parameter */
     evtchn_port_t local_port;
 };
 typedef struct evtchn_bind_interdomain evtchn_bind_interdomain_t;
 
-/*
- * EVTCHNOP_bind_virq: Bind a local event channel to VIRQ <irq> on specified
- * vcpu.
+/**
+ * struct evtchn_bind_virq - EVTCHNOP_bind_virq
+ *
+ * Bind a local event channel to VIRQ <irq> on specified vcpu.
+ *
  * NOTES:
  *  1. Virtual IRQs are classified as per-vcpu or global. See the VIRQ list
  *     in xen.h for the classification of each VIRQ.
@@ -140,77 +150,91 @@ typedef struct evtchn_bind_interdomain evtchn_bind_interdomain_t;
  *     binding cannot be changed.
  */
 struct evtchn_bind_virq {
-    /* IN parameters. */
-    uint32_t virq; /* enum virq */
+    /** @virq: IN parameter, enum virq */
+    uint32_t virq;
+    /** @vcpu: IN parameter */
     uint32_t vcpu;
-    /* OUT parameters. */
+    /** @port: OUT parameter */
     evtchn_port_t port;
 };
 typedef struct evtchn_bind_virq evtchn_bind_virq_t;
 
-/*
- * EVTCHNOP_bind_pirq: Bind a local event channel to a real IRQ (PIRQ <irq>).
+/**
+ * struct evtchn_bind_pirq - EVTCHNOP_bind_pirq
+ *
+ * Bind a local event channel to a real IRQ (PIRQ <irq>).
  * NOTES:
  *  1. A physical IRQ may be bound to at most one event channel per domain.
  *  2. Only a sufficiently-privileged domain may bind to a physical IRQ.
  */
 struct evtchn_bind_pirq {
-    /* IN parameters. */
+    /** @pirq: IN parameter */
     uint32_t pirq;
+    /** @flags: IN parameter, BIND_PIRQ__* */
 #define BIND_PIRQ__WILL_SHARE 1
-    uint32_t flags; /* BIND_PIRQ__* */
-    /* OUT parameters. */
+    uint32_t flags;
+    /** @port: OUT parameter */
     evtchn_port_t port;
 };
 typedef struct evtchn_bind_pirq evtchn_bind_pirq_t;
 
-/*
- * EVTCHNOP_bind_ipi: Bind a local event channel to receive events.
+/**
+ * struct evtchn_bind_ipi - EVTCHNOP_bind_ipi
+ *
+ * Bind a local event channel to receive events.
  * NOTES:
  *  1. The allocated event channel is bound to the specified vcpu. The binding
  *     may not be changed.
  */
 struct evtchn_bind_ipi {
+    /** @vcpu: IN parameter */
     uint32_t vcpu;
-    /* OUT parameters. */
+    /** @port: OUT parameter */
     evtchn_port_t port;
 };
 typedef struct evtchn_bind_ipi evtchn_bind_ipi_t;
 
-/*
- * EVTCHNOP_close: Close a local event channel <port>. If the channel is
- * interdomain then the remote end is placed in the unbound state
+/**
+ * struct evtchn_close - EVTCHNOP_close
+ *
+ * Close a local event channel <port>. If the channel is interdomain
+ * then the remote end is placed in the unbound state
  * (EVTCHNSTAT_unbound), awaiting a new connection.
  */
 struct evtchn_close {
-    /* IN parameters. */
+    /** @port: IN parameter */
     evtchn_port_t port;
 };
 typedef struct evtchn_close evtchn_close_t;
 
-/*
- * EVTCHNOP_send: Send an event to the remote end of the channel whose local
- * endpoint is <port>.
+/**
+ * struct evtchn_send - EVTCHNOP_send
+ *
+ * Send an event to the remote end of the channel whose local endpoint
+ * is <port>.
  */
 struct evtchn_send {
-    /* IN parameters. */
+    /** @port: IN parameter */
     evtchn_port_t port;
 };
 typedef struct evtchn_send evtchn_send_t;
 
-/*
- * EVTCHNOP_status: Get the current status of the communication channel which
- * has an endpoint at <dom, port>.
+/**
+ * struct evtchn_status - EVTCHNOP_status
+ *
+ * Get the current status of the communication channel which has an
+ * endpoint at <dom, port>.
  * NOTES:
  *  1. <dom> may be specified as DOMID_SELF.
  *  2. Only a sufficiently-privileged domain may obtain the status of an event
  *     channel for which <dom> is not DOMID_SELF.
  */
 struct evtchn_status {
-    /* IN parameters */
+    /** @dom: IN parameter */
     domid_t  dom;
+    /** @port: IN parameter */
     evtchn_port_t port;
-    /* OUT parameters */
+    /** @status: OUT parameter */
 #define EVTCHNSTAT_closed       0  /* Channel is not in use.                 */
 #define EVTCHNSTAT_unbound      1  /* Channel is waiting interdom connection.*/
 #define EVTCHNSTAT_interdomain  2  /* Channel is connected to remote domain. */
@@ -218,24 +242,31 @@ struct evtchn_status {
 #define EVTCHNSTAT_virq         4  /* Channel is bound to a virtual IRQ line */
 #define EVTCHNSTAT_ipi          5  /* Channel is bound to a virtual IPI line */
     uint32_t status;
-    uint32_t vcpu;                 /* VCPU to which this channel is bound.   */
+    /** @vcpu: OUT parameter, VCPU to which this channel is bound */
+    uint32_t vcpu;
+    /** @u: OUT parameter */
     union {
+        /** @u.unbound: EVTCHNSTAT_unbound */
         struct {
             domid_t dom;
-        } unbound;                 /* EVTCHNSTAT_unbound */
+        } unbound;
+        /** @u.interdomain: EVTCHNSTAT_interdomain */
         struct {
             domid_t dom;
             evtchn_port_t port;
-        } interdomain;             /* EVTCHNSTAT_interdomain */
-        uint32_t pirq;             /* EVTCHNSTAT_pirq        */
-        uint32_t virq;             /* EVTCHNSTAT_virq        */
+        } interdomain;
+        /** @u.pirq: EVTCHNSTAT_pirq */
+        uint32_t pirq;
+        /** @u.virq: EVTCHNSTAT_virq */
+        uint32_t virq;
     } u;
 };
 typedef struct evtchn_status evtchn_status_t;
 
-/*
- * EVTCHNOP_bind_vcpu: Specify which vcpu a channel should notify when an
- * event is pending.
+/**
+ * struct evtchn_bind_vcpu - EVTCHNOP_bind_vcpu
+ *
+ * Specify which vcpu a channel should notify when an event is pending.
  * NOTES:
  *  1. IPI-bound channels always notify the vcpu specified at bind time.
  *     This binding cannot be changed.
@@ -246,24 +277,29 @@ typedef struct evtchn_status evtchn_status_t;
  *     has its binding reset to vcpu0).
  */
 struct evtchn_bind_vcpu {
-    /* IN parameters. */
+    /** @port: IN parameter */
     evtchn_port_t port;
+    /** @vcpu: IN parameter */
     uint32_t vcpu;
 };
 typedef struct evtchn_bind_vcpu evtchn_bind_vcpu_t;
 
-/*
- * EVTCHNOP_unmask: Unmask the specified local event-channel port and deliver
- * a notification to the appropriate VCPU if an event is pending.
+/**
+ * struct evtchn_unmask - EVTCHNOP_unmask
+ *
+ * Unmask the specified local event-channel port and deliver a
+ * notification to the appropriate VCPU if an event is pending.
  */
 struct evtchn_unmask {
-    /* IN parameters. */
+    /** @port: IN parameter */
     evtchn_port_t port;
 };
 typedef struct evtchn_unmask evtchn_unmask_t;
 
-/*
- * EVTCHNOP_reset: Close all event channels associated with specified domain.
+/**
+ * struct evtchn_reset - EVTCHNOP_reset
+ *
+ * Close all event channels associated with specified domain.
  * NOTES:
  *  1. <dom> may be specified as DOMID_SELF.
  *  2. Only a sufficiently-privileged domain may specify other than DOMID_SELF.
@@ -273,44 +309,54 @@ typedef struct evtchn_unmask evtchn_unmask_t;
  *     as these events are likely to be lost.
  */
 struct evtchn_reset {
-    /* IN parameters. */
+    /** @dom: IN parameter */
     domid_t dom;
 };
 typedef struct evtchn_reset evtchn_reset_t;
 
-/*
- * EVTCHNOP_init_control: initialize the control block for the FIFO ABI.
+/**
+ * struct evtchn_init_control - EVTCHNOP_init_control
+ *
+ * Initialize the control block for the FIFO ABI.
  *
  * Note: any events that are currently pending will not be resent and
  * will be lost.  Guests should call this before binding any event to
  * avoid losing any events.
  */
 struct evtchn_init_control {
-    /* IN parameters. */
+    /** @control_gfn: IN parameter */
     uint64_t control_gfn;
+    /** @offset: IN parameter */
     uint32_t offset;
+    /** @vcpu: IN parameter */
     uint32_t vcpu;
-    /* OUT parameters. */
+    /** @link_bits: OUT parameter */
     uint8_t link_bits;
+    /** @_pad: padding */
     uint8_t _pad[7];
 };
 typedef struct evtchn_init_control evtchn_init_control_t;
 
-/*
- * EVTCHNOP_expand_array: add an additional page to the event array.
+/**
+ * struct evtchn_expand_array - EVTCHNOP_expand_array
+ *
+ * Add an additional page to the event array.
  */
 struct evtchn_expand_array {
-    /* IN parameters. */
+    /** @array_gfn: IN parameter */
     uint64_t array_gfn;
 };
 typedef struct evtchn_expand_array evtchn_expand_array_t;
 
-/*
- * EVTCHNOP_set_priority: set the priority for an event channel.
+/**
+ * struct evtchn_set_priority - EVTCHNOP_set_priority
+ *
+ * Set the priority for an event channel.
  */
 struct evtchn_set_priority {
-    /* IN parameters. */
+    /** @port: IN parameter */
     evtchn_port_t port;
+    /** @priority: IN parameter */
     uint32_t priority;
 };
 typedef struct evtchn_set_priority evtchn_set_priority_t;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9800.25895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YJ-0008FE-SI; Wed, 21 Oct 2020 00:00:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9800.25895; Wed, 21 Oct 2020 00:00:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YJ-0008En-In; Wed, 21 Oct 2020 00:00:27 +0000
Received: by outflank-mailman (input) for mailman id 9800;
 Wed, 21 Oct 2020 00:00:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YH-0007y0-O7
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:25 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5df9fff-7601-44a0-8d02-1977402000d7;
 Wed, 21 Oct 2020 00:00:18 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4D9652245C;
 Wed, 21 Oct 2020 00:00:17 +0000 (UTC)
X-Inumbo-ID: d5df9fff-7601-44a0-8d02-1977402000d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238417;
	bh=xqaH5TgPPZ9KHmziprJ9oKvkdhKN8THAHA2Rd9MFVnA=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=oT2Eb14OUu8Wct/ws0+QiOv/GxNv40sJrTxT2MIh+0o+JrkLTBBxnwXpMM/kzVT6O
	 vdnTCbMIxbmcG/gmYOGRHLa7bHNlYuNMSAaOpSa0ldRg8X3GRhsHu6xRixIMZDScps
	 iF41l+q6TODQU6zjdkFnKM0D1ZaCSoSQOUW+TdII=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 08/14] kernel-doc: public/memory.h
Date: Tue, 20 Oct 2020 17:00:05 -0700
Message-Id: <20201021000011.15351-8-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- remove "enum" comments
- fix line too long
- remove stray blank line
---
 xen/include/public/memory.h | 236 ++++++++++++++++++++++++------------
 1 file changed, 156 insertions(+), 80 deletions(-)

diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 21d483298e..3a8f668f5f 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -30,7 +30,9 @@
 #include "xen.h"
 #include "physdev.h"
 
-/*
+/**
+ * DOC: XENMEM_increase_reservation and XENMEM_decrease_reservation
+ *
  * Increase or decrease the specified domain's memory reservation. Returns the
  * number of extents successfully allocated or freed.
  * arg == addr of struct xen_memory_reservation.
@@ -40,29 +42,37 @@
 #define XENMEM_populate_physmap     6
 
 #if __XEN_INTERFACE_VERSION__ >= 0x00030209
-/*
- * Maximum # bits addressable by the user of the allocated region (e.g., I/O
- * devices often have a 32-bit limitation even in 64-bit systems). If zero
- * then the user has no addressing restriction. This field is not used by
- * XENMEM_decrease_reservation.
+/**
+ * DOC: XENMEMF_*
+ *
+ * - XENMEMF_address_bits, XENMEMF_get_address_bits:
+ *       Maximum # bits addressable by the user of the allocated region
+ *       (e.g., I/O devices often have a 32-bit limitation even in 64-bit
+ *       systems). If zero then the user has no addressing restriction. This
+ *       field is not used by XENMEM_decrease_reservation.
+ * - XENMEMF_node, XENMEMF_get_node: NUMA node to allocate from
+ * - XENMEMF_populate_on_demand: Flag to populate physmap with
+ *       populate-on-demand entries
+ * - XENMEMF_exact_node_request, XENMEMF_exact_node: Flag to request allocation
+ *       only from the node specified
+ * - XENMEMF_vnode: Flag to indicate the node specified is virtual node
  */
 #define XENMEMF_address_bits(x)     (x)
 #define XENMEMF_get_address_bits(x) ((x) & 0xffu)
-/* NUMA node to allocate from. */
 #define XENMEMF_node(x)     (((x) + 1) << 8)
 #define XENMEMF_get_node(x) ((((x) >> 8) - 1) & 0xffu)
-/* Flag to populate physmap with populate-on-demand entries */
 #define XENMEMF_populate_on_demand (1<<16)
-/* Flag to request allocation only from the node specified */
 #define XENMEMF_exact_node_request  (1<<17)
 #define XENMEMF_exact_node(n) (XENMEMF_node(n) | XENMEMF_exact_node_request)
-/* Flag to indicate the node specified is virtual node */
 #define XENMEMF_vnode  (1<<18)
 #endif
 
+/**
+ * struct xen_memory_reservation
+ */
 struct xen_memory_reservation {
-
-    /*
+    /**
+     * @extent_start:
+     *
      * XENMEM_increase_reservation:
      *   OUT: MFN (*not* GMFN) bases of extents that were allocated
      * XENMEM_decrease_reservation:
@@ -76,18 +86,20 @@ struct xen_memory_reservation {
      */
     XEN_GUEST_HANDLE(xen_pfn_t) extent_start;
 
-    /* Number of extents, and size/alignment of each (2^extent_order pages). */
+    /**
+     * @nr_extents: Number of extents, and size/alignment of each
+     * (2^extent_order pages).
+     */
     xen_ulong_t    nr_extents;
     unsigned int   extent_order;
 
 #if __XEN_INTERFACE_VERSION__ >= 0x00030209
-    /* XENMEMF flags. */
+    /** @mem_flags: XENMEMF flags. */
     unsigned int   mem_flags;
 #else
     unsigned int   address_bits;
 #endif
 
-    /*
+    /**
+     * @domid:
+     *
      * Domain whose reservation is being changed.
      * Unprivileged domains can specify only DOMID_SELF.
      */
@@ -96,7 +108,10 @@ struct xen_memory_reservation {
 typedef struct xen_memory_reservation xen_memory_reservation_t;
 DEFINE_XEN_GUEST_HANDLE(xen_memory_reservation_t);
 
-/*
+#define XENMEM_exchange             11
+/**
+ * struct xen_memory_exchange - XENMEM_exchange
+ *
  * An atomic exchange of memory pages. If return code is zero then
  * @out.extent_list provides GMFNs of the newly-allocated memory.
  * Returns zero on complete success, otherwise a negative error code.
@@ -105,15 +120,18 @@ DEFINE_XEN_GUEST_HANDLE(xen_memory_reservation_t);
  *
  * Note that only PV guests can use this operation.
  */
-#define XENMEM_exchange             11
 struct xen_memory_exchange {
-    /*
+    /**
+     * @in:
+     *
      * [IN] Details of memory extents to be exchanged (GMFN bases).
      * Note that @in.address_bits is ignored and unused.
      */
     struct xen_memory_reservation in;
 
-    /*
+    /**
+     * @out:
+     *
      * [IN/OUT] Details of new memory extents.
      * We require that:
      *  1. @in.domid == @out.domid
@@ -125,7 +143,9 @@ struct xen_memory_exchange {
      */
     struct xen_memory_reservation out;
 
-    /*
+    /**
+     * @nr_exchanged:
+     *
      * [OUT] Number of input extents that were successfully exchanged:
      *  1. The first @nr_exchanged input extents were successfully
      *     deallocated.
@@ -141,14 +161,18 @@ struct xen_memory_exchange {
 typedef struct xen_memory_exchange xen_memory_exchange_t;
 DEFINE_XEN_GUEST_HANDLE(xen_memory_exchange_t);
 
-/*
+/**
+ * DOC: XENMEM_maximum_ram_page
+ *
  * Returns the maximum machine frame number of mapped RAM in this system.
  * This command always succeeds (it never returns an error code).
  * arg == NULL.
  */
 #define XENMEM_maximum_ram_page     2
 
-/*
+/**
+ * DOC: XENMEM_current_reservation and XENMEM_maximum_reservation
+ *
  * Returns the current or maximum memory reservation, in pages, of the
  * specified domain (may be DOMID_SELF). Returns -ve errcode on failure.
  * arg == addr of domid_t.
@@ -156,33 +180,43 @@ DEFINE_XEN_GUEST_HANDLE(xen_memory_exchange_t);
 #define XENMEM_current_reservation  3
 #define XENMEM_maximum_reservation  4
 
-/*
+/**
+ * DOC: XENMEM_maximum_gpfn
+ *
  * Returns the maximum GPFN in use by the guest, or -ve errcode on failure.
  */
 #define XENMEM_maximum_gpfn         14
 
-/*
+#define XENMEM_machphys_mfn_list    5
+/**
+ * struct xen_machphys_mfn_list - XENMEM_machphys_mfn_list
+ *
  * Returns a list of MFN bases of 2MB extents comprising the machine_to_phys
  * mapping table. Architectures which do not have a m2p table do not implement
  * this command.
  * arg == addr of xen_machphys_mfn_list_t.
  */
-#define XENMEM_machphys_mfn_list    5
 struct xen_machphys_mfn_list {
-    /*
+    /**
+     * @max_extents:
+     *
      * Size of the 'extent_start' array. Fewer entries will be filled if the
      * machphys table is smaller than max_extents * 2MB.
      */
     unsigned int max_extents;
 
-    /*
+    /**
+     * @extent_start:
+     *
      * Pointer to buffer to fill with list of extent starts. If there are
      * any large discontiguities in the machine address space, 2MB gaps in
      * the machphys table will be represented by an MFN base of zero.
      */
     XEN_GUEST_HANDLE(xen_pfn_t) extent_start;
 
-    /*
+    /**
+     * @nr_extents:
+     *
      * Number of extents written to the above array. This will be smaller
      * than 'max_extents' if the machphys table is smaller than max_e * 2MB.
      */
@@ -191,7 +225,9 @@ struct xen_machphys_mfn_list {
 typedef struct xen_machphys_mfn_list xen_machphys_mfn_list_t;
 DEFINE_XEN_GUEST_HANDLE(xen_machphys_mfn_list_t);
 
-/*
+/**
+ * DOC: XENMEM_machphys_compat_mfn_list
+ *
  * For a compat caller, this is identical to XENMEM_machphys_mfn_list.
  *
  * For a non compat caller, this functions similarly to
@@ -200,90 +236,113 @@ DEFINE_XEN_GUEST_HANDLE(xen_machphys_mfn_list_t);
  */
 #define XENMEM_machphys_compat_mfn_list     25
 
-/*
+#define XENMEM_machphys_mapping     12
+/**
+ * struct xen_machphys_mapping - XENMEM_machphys_mapping
+ *
  * Returns the location in virtual address space of the machine_to_phys
  * mapping table. Architectures which do not have a m2p table, or which do not
  * map it by default into guest address space, do not implement this command.
  * arg == addr of xen_machphys_mapping_t.
  */
-#define XENMEM_machphys_mapping     12
 struct xen_machphys_mapping {
-    xen_ulong_t v_start, v_end; /* Start and end virtual addresses.   */
-    xen_ulong_t max_mfn;        /* Maximum MFN that can be looked up. */
+    /** @v_start: Start virtual address */
+    xen_ulong_t v_start;
+    /** @v_end: End virtual address */
+    xen_ulong_t v_end;
+    /** @max_mfn: Maximum MFN that can be looked up */
+    xen_ulong_t max_mfn;
 };
 typedef struct xen_machphys_mapping xen_machphys_mapping_t;
 DEFINE_XEN_GUEST_HANDLE(xen_machphys_mapping_t);
 
-/* Source mapping space. */
-/* ` enum phys_map_space { */
-#define XENMAPSPACE_shared_info  0 /* shared info page */
-#define XENMAPSPACE_grant_table  1 /* grant table page */
-#define XENMAPSPACE_gmfn         2 /* GMFN */
-#define XENMAPSPACE_gmfn_range   3 /* GMFN range, XENMEM_add_to_physmap only. */
-#define XENMAPSPACE_gmfn_foreign 4 /* GMFN from another dom,
-                                    * XENMEM_add_to_physmap_batch only. */
-#define XENMAPSPACE_dev_mmio     5 /* device mmio region
-                                      ARM only; the region is mapped in
-                                      Stage-2 using the Normal Memory
-                                      Inner/Outer Write-Back Cacheable
-                                      memory attribute. */
-/* ` } */
+/**
+ * DOC: Source mapping space.
+ *
+ * - XENMAPSPACE_shared_info:  shared info page
+ * - XENMAPSPACE_grant_table:  grant table page
+ * - XENMAPSPACE_gmfn:         GMFN
+ * - XENMAPSPACE_gmfn_range:   GMFN range, XENMEM_add_to_physmap only.
+ * - XENMAPSPACE_gmfn_foreign: GMFN from another dom,
+ *                             XENMEM_add_to_physmap_batch only.
+ * - XENMAPSPACE_dev_mmio:     device mmio region ARM only; the region is mapped
+ *                             in Stage-2 using the Normal Memory Inner/Outer
+ *                             Write-Back Cacheable memory attribute.
+ */
+#define XENMAPSPACE_shared_info  0
+#define XENMAPSPACE_grant_table  1
+#define XENMAPSPACE_gmfn         2
+#define XENMAPSPACE_gmfn_range   3
+#define XENMAPSPACE_gmfn_foreign 4
+#define XENMAPSPACE_dev_mmio     5
 
-/*
+#define XENMEM_add_to_physmap      7
+/**
+ * struct xen_add_to_physmap - XENMEM_add_to_physmap
+ *
  * Sets the GPFN at which a particular page appears in the specified guest's
  * physical address space (translated guests only).
  * arg == addr of xen_add_to_physmap_t.
  */
-#define XENMEM_add_to_physmap      7
 struct xen_add_to_physmap {
-    /* Which domain to change the mapping for. */
+    /** @domid: Which domain to change the mapping for. */
     domid_t domid;
 
-    /* Number of pages to go through for gmfn_range */
+    /** @size: Number of pages to go through for gmfn_range */
     uint16_t    size;
 
-    unsigned int space; /* => enum phys_map_space */
+    /** @space: enum phys_map_space */
+    unsigned int space;
 
 #define XENMAPIDX_grant_table_status 0x80000000
 
-    /* Index into space being mapped. */
+    /** @idx: Index into space being mapped. */
     xen_ulong_t idx;
 
-    /* GPFN in domid where the source mapping page should appear. */
+    /** @gpfn: GPFN in domid where the source mapping page should appear. */
     xen_pfn_t     gpfn;
 };
 typedef struct xen_add_to_physmap xen_add_to_physmap_t;
 DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_t);
 
-/* A batched version of add_to_physmap. */
 #define XENMEM_add_to_physmap_batch 23
+/**
+ * struct xen_add_to_physmap_batch - XENMEM_add_to_physmap_batch
+ *
+ * A batched version of add_to_physmap.
+ */
 struct xen_add_to_physmap_batch {
-    /* IN */
-    /* Which domain to change the mapping for. */
+    /** @domid: IN, which domain to change the mapping for */
     domid_t domid;
-    uint16_t space; /* => enum phys_map_space */
+    /** @space: IN, enum phys_map_space */
+    uint16_t space;
 
-    /* Number of pages to go through */
+    /** @size: IN, Number of pages to go through */
     uint16_t size;
 
 #if __XEN_INTERFACE_VERSION__ < 0x00040700
-    domid_t foreign_domid; /* IFF gmfn_foreign. Should be 0 for other spaces. */
+    /** @foreign_domid: IN, iff gmfn_foreign. Should be 0 for other spaces. */
+    domid_t foreign_domid;
 #else
+    /**
+     * @u: IN
+     *
+     * - u.foreign_domid: gmfn_foreign
+     * - u.res0: All the other spaces. Should be 0
+     */
     union xen_add_to_physmap_batch_extra {
-        domid_t foreign_domid; /* gmfn_foreign */
-        uint16_t res0;  /* All the other spaces. Should be 0 */
+        domid_t foreign_domid;
+        uint16_t res0;
     } u;
 #endif
 
-    /* Indexes into space being mapped. */
+    /** @idxs: IN, Indexes into space being mapped. */
     XEN_GUEST_HANDLE(xen_ulong_t) idxs;
 
-    /* GPFN in domid where the source mapping page should appear. */
+    /** @gpfns: IN, GPFNs in domid where the source mapping pages should appear. */
     XEN_GUEST_HANDLE(xen_pfn_t) gpfns;
 
-    /* OUT */
-
-    /* Per index error code. */
+    /** @errs: OUT, Per index error code. */
     XEN_GUEST_HANDLE(int) errs;
 };
 typedef struct xen_add_to_physmap_batch xen_add_to_physmap_batch_t;
@@ -296,17 +355,19 @@ typedef struct xen_add_to_physmap_batch xen_add_to_physmap_range_t;
 DEFINE_XEN_GUEST_HANDLE(xen_add_to_physmap_range_t);
 #endif
 
-/*
+#define XENMEM_remove_from_physmap      15
+/**
+ * struct xen_remove_from_physmap - XENMEM_remove_from_physmap
+ *
  * Unmaps the page appearing at a particular GPFN from the specified guest's
  * physical address space (translated guests only).
  * arg == addr of xen_remove_from_physmap_t.
  */
-#define XENMEM_remove_from_physmap      15
 struct xen_remove_from_physmap {
-    /* Which domain to change the mapping for. */
+    /** @domid: Which domain to change the mapping for. */
     domid_t domid;
 
-    /* GPFN of the current mapping of the page. */
+    /** @gpfn: GPFN of the current mapping of the page. */
     xen_pfn_t     gpfn;
 };
 typedef struct xen_remove_from_physmap xen_remove_from_physmap_t;
@@ -315,21 +376,27 @@ DEFINE_XEN_GUEST_HANDLE(xen_remove_from_physmap_t);
 /*** REMOVED ***/
 /*#define XENMEM_translate_gpfn_list  8*/
 
-/*
+#define XENMEM_memory_map           9
+/**
+ * struct xen_memory_map - XENMEM_memory_map
+ *
  * Returns the pseudo-physical memory map as it was when the domain
  * was started (specified by XENMEM_set_memory_map).
  * arg == addr of xen_memory_map_t.
  */
-#define XENMEM_memory_map           9
 struct xen_memory_map {
-    /*
+    /**
+     * @nr_entries:
+     *
      * On call the number of entries which can be stored in buffer. On
      * return the number of entries which have been stored in
      * buffer.
      */
     unsigned int nr_entries;
 
-    /*
+    /**
+     * @buffer:
+     *
      * Entries in the buffer are in the same format as returned by the
      * BIOS INT 0x15 EAX=0xE820 call.
      */
@@ -338,7 +405,9 @@ struct xen_memory_map {
 typedef struct xen_memory_map xen_memory_map_t;
 DEFINE_XEN_GUEST_HANDLE(xen_memory_map_t);
 
-/*
+/**
+ * DOC: XENMEM_machine_memory_map
+ *
  * Returns the real physical memory map. Passes the same structure as
  * XENMEM_memory_map.
  * Specifying buffer as NULL will return the number of entries required
@@ -347,12 +416,14 @@ DEFINE_XEN_GUEST_HANDLE(xen_memory_map_t);
  */
 #define XENMEM_machine_memory_map   10
 
-/*
+#define XENMEM_set_memory_map       13
+/**
+ * struct xen_foreign_memory_map - XENMEM_set_memory_map
+ *
  * Set the pseudo-physical memory map of a domain, as returned by
  * XENMEM_memory_map.
  * arg == addr of xen_foreign_memory_map_t.
  */
-#define XENMEM_set_memory_map       13
 struct xen_foreign_memory_map {
     domid_t domid;
     struct xen_memory_map map;
@@ -362,14 +433,19 @@ DEFINE_XEN_GUEST_HANDLE(xen_foreign_memory_map_t);
 
 #define XENMEM_set_pod_target       16
 #define XENMEM_get_pod_target       17
+/**
+ * struct xen_pod_target - XENMEM_set_pod_target and XENMEM_get_pod_target
+ */
 struct xen_pod_target {
-    /* IN */
+    /** @target_pages: IN */
     uint64_t target_pages;
-    /* OUT */
+    /** @tot_pages: OUT */
     uint64_t tot_pages;
+    /** @pod_cache_pages: OUT */
     uint64_t pod_cache_pages;
+    /** @pod_entries: OUT */
     uint64_t pod_entries;
-    /* IN */
+    /** @domid: IN */
     domid_t domid;
 };
 typedef struct xen_pod_target xen_pod_target_t;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9801.25911 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YO-0008Od-Eg; Wed, 21 Oct 2020 00:00:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9801.25911; Wed, 21 Oct 2020 00:00:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YO-0008OJ-9C; Wed, 21 Oct 2020 00:00:32 +0000
Received: by outflank-mailman (input) for mailman id 9801;
 Wed, 21 Oct 2020 00:00:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YM-0007xs-4E
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8554e9df-0934-4c6d-bbff-85f9cf6d6015;
 Wed, 21 Oct 2020 00:00:17 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id BB2D422456;
 Wed, 21 Oct 2020 00:00:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238417;
	bh=FBjcGWAj+YnmSwU5NH4MugavkY7ULNZqmOgxz/8AjMg=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=EJ8nE7nmg0R8bp2UE3eF9iTeyPeZGBY7QS8au/ZP4etB/s3I+4VBscy1Lv5K1du+M
	 g1rIUsKiqPLYzkzGYH65C/al2KaJplssCLQGh3BbE4uXqst82392N+/fizDmdpovRD
	 Ikutmgt+D/FIFkaMUAPIfcGLBBXtKrewjoIZLfdA=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 07/14] kernel-doc: public/hypfs.h
Date: Tue, 20 Oct 2020 17:00:04 -0700
Message-Id: <20201021000011.15351-7-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/include/public/hypfs.h | 72 ++++++++++++++++++++++++--------------
 1 file changed, 45 insertions(+), 27 deletions(-)

diff --git a/xen/include/public/hypfs.h b/xen/include/public/hypfs.h
index 2b7a66d68d..62a33a1646 100644
--- a/xen/include/public/hypfs.h
+++ b/xen/include/public/hypfs.h
@@ -32,12 +32,21 @@
  * Definitions for the __HYPERVISOR_hypfs_op hypercall.
  */
 
-/* Highest version number of the hypfs interface currently defined. */
+/**
+ * DOC: XEN_HYPFS_VERSION
+ * Highest version number of the hypfs interface currently defined.
+ */
 #define XEN_HYPFS_VERSION      1
 
-/* Maximum length of a path in the filesystem. */
+/**
+ * DOC: XEN_HYPFS_MAX_PATHLEN
+ * Maximum length of a path in the filesystem.
+ */
 #define XEN_HYPFS_MAX_PATHLEN  1024
 
+/**
+ * struct xen_hypfs_direntry
+ */
 struct xen_hypfs_direntry {
     uint8_t type;
 #define XEN_HYPFS_TYPE_DIR     0
@@ -49,17 +58,23 @@ struct xen_hypfs_direntry {
     uint8_t encoding;
 #define XEN_HYPFS_ENC_PLAIN    0
 #define XEN_HYPFS_ENC_GZIP     1
-    uint16_t pad;              /* Returned as 0. */
-    uint32_t content_len;      /* Current length of data. */
-    uint32_t max_write_len;    /* Max. length for writes (0 if read-only). */
+    /** @pad: Returned as 0. */
+    uint16_t pad;
+    /** @content_len: Current length of data. */
+    uint32_t content_len;
+    /** @max_write_len: Max. length for writes (0 if read-only). */
+    uint32_t max_write_len;
 };
 typedef struct xen_hypfs_direntry xen_hypfs_direntry_t;
 
+/**
+ * struct xen_hypfs_dirlistentry
+ */
 struct xen_hypfs_dirlistentry {
     xen_hypfs_direntry_t e;
-    /* Offset in bytes to next entry (0 == this is the last entry). */
+    /** @off_next: Offset in bytes to next entry (0 == this is the last entry). */
     uint16_t off_next;
-    /* Zero terminated entry name, possibly with some padding for alignment. */
+    /** @name: Zero terminated entry name, possibly with some padding for alignment. */
     char name[XEN_FLEX_ARRAY_DIM];
 };
 
@@ -67,21 +82,22 @@ struct xen_hypfs_dirlistentry {
  * Hypercall operations.
  */
 
-/*
- * XEN_HYPFS_OP_get_version
+/**
+ * DOC: XEN_HYPFS_OP_get_version
  *
  * Read highest interface version supported by the hypervisor.
  *
  * arg1 - arg4: all 0/NULL
  *
  * Possible return values:
- * >0: highest supported interface version
- * <0: negative Xen errno value
+ *
+ * - >0: highest supported interface version
+ * - <0: negative Xen errno value
  */
 #define XEN_HYPFS_OP_get_version     0
 
-/*
- * XEN_HYPFS_OP_read
+/**
+ * DOC: XEN_HYPFS_OP_read
  *
  * Read a filesystem entry.
  *
@@ -96,19 +112,20 @@ struct xen_hypfs_dirlistentry {
  * The contents of a directory are multiple struct xen_hypfs_dirlistentry
  * items.
  *
- * arg1: XEN_GUEST_HANDLE(path name)
- * arg2: length of path name (including trailing zero byte)
- * arg3: XEN_GUEST_HANDLE(data buffer written by hypervisor)
- * arg4: data buffer size
+ * - arg1: XEN_GUEST_HANDLE(path name)
+ * - arg2: length of path name (including trailing zero byte)
+ * - arg3: XEN_GUEST_HANDLE(data buffer written by hypervisor)
+ * - arg4: data buffer size
  *
  * Possible return values:
- * 0: success
- * <0 : negative Xen errno value
+ *
+ * - 0: success
+ * - <0: negative Xen errno value
  */
 #define XEN_HYPFS_OP_read              1
 
-/*
- * XEN_HYPFS_OP_write_contents
+/**
+ * DOC: XEN_HYPFS_OP_write_contents
  *
  * Write contents of a filesystem entry.
  *
@@ -116,14 +133,15 @@ struct xen_hypfs_dirlistentry {
  * The data type and encoding can't be changed. The size can be changed only
  * for blobs and strings.
  *
- * arg1: XEN_GUEST_HANDLE(path name)
- * arg2: length of path name (including trailing zero byte)
- * arg3: XEN_GUEST_HANDLE(content buffer read by hypervisor)
- * arg4: content buffer size
+ * - arg1: XEN_GUEST_HANDLE(path name)
+ * - arg2: length of path name (including trailing zero byte)
+ * - arg3: XEN_GUEST_HANDLE(content buffer read by hypervisor)
+ * - arg4: content buffer size
  *
  * Possible return values:
- * 0: success
- * <0 : negative Xen errno value
+ *
+ * - 0: success
+ * - <0: negative Xen errno value
  */
 #define XEN_HYPFS_OP_write_contents    2
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9802.25917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YP-0008Pr-0Y; Wed, 21 Oct 2020 00:00:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9802.25917; Wed, 21 Oct 2020 00:00:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YO-0008PT-Mr; Wed, 21 Oct 2020 00:00:32 +0000
Received: by outflank-mailman (input) for mailman id 9802;
 Wed, 21 Oct 2020 00:00:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YM-0007y0-OR
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6989bcbe-7596-4718-b401-affd15347489;
 Wed, 21 Oct 2020 00:00:19 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E49FC2245D;
 Wed, 21 Oct 2020 00:00:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238418;
	bh=+o3KM9eZPxssF3KyIcRr2NiHUn2Ukmo/akFdy6ptlI4=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Ul49e6LyZaclVsTzUS9MSVuAdLgHPJlt/ZvsjYVo0l+1v09h46ZOrYOH5jXkC71Pu
	 8DoiFejXSpB8yUZ5fKNb1WdsgwsQtdNHCvvwKnwSGnctO4+nI+OqE2KvGzskamB4cF
	 vj6XUUSXug0/tNelrhV4lxFxfCPBevRfxWjY754A=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 09/14] kernel-doc: public/sched.h
Date: Tue, 20 Oct 2020 17:00:06 -0700
Message-Id: <20201021000011.15351-9-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- remove "enum" comments
---
 xen/include/public/sched.h | 134 +++++++++++++++++++++++++------------
 1 file changed, 92 insertions(+), 42 deletions(-)

diff --git a/xen/include/public/sched.h b/xen/include/public/sched.h
index 811bd87c82..f6f0263569 100644
--- a/xen/include/public/sched.h
+++ b/xen/include/public/sched.h
@@ -29,41 +29,48 @@
 
 #include "event_channel.h"
 
-/*
- * `incontents 150 sched Guest Scheduler Operations
+/**
+ * DOC: Guest Scheduler Operations
  *
  * The SCHEDOP interface provides mechanisms for a guest to interact
  * with the scheduler, including yield, blocking and shutting itself
  * down.
  */
 
-/*
+/**
+ * DOC: HYPERVISOR_sched_op
+ *
  * The prototype for this hypercall is:
- * ` long HYPERVISOR_sched_op(enum sched_op cmd, void *arg, ...)
  *
- * @cmd == SCHEDOP_??? (scheduler operation).
- * @arg == Operation-specific extra argument(s), as described below.
- * ...  == Additional Operation-specific extra arguments, described below.
+ * long HYPERVISOR_sched_op(enum sched_op cmd, void *arg, ...)
+ *
+ * - @cmd == SCHEDOP_??? (scheduler operation).
+ * - @arg == Operation-specific extra argument(s), as described below.
+ * - ...  == Additional Operation-specific extra arguments, described below.
  *
  * Versions of Xen prior to 3.0.2 provided only the following legacy version
  * of this hypercall, supporting only the commands yield, block and shutdown:
+ *
  *  long sched_op(int cmd, unsigned long arg)
- * @cmd == SCHEDOP_??? (scheduler operation).
- * @arg == 0               (SCHEDOP_yield and SCHEDOP_block)
- *      == SHUTDOWN_* code (SCHEDOP_shutdown)
+ *
+ * - @cmd == SCHEDOP_??? (scheduler operation).
+ * - @arg == 0               (SCHEDOP_yield and SCHEDOP_block)
+ * - @arg == SHUTDOWN_* code (SCHEDOP_shutdown)
  *
  * This legacy version is available to new guests as:
- * ` long HYPERVISOR_sched_op_compat(enum sched_op cmd, unsigned long arg)
+ *
+ * long HYPERVISOR_sched_op_compat(enum sched_op cmd, unsigned long arg)
  */
 
-/* ` enum sched_op { // SCHEDOP_* => struct sched_* */
-/*
- * Voluntarily yield the CPU.
- * @arg == NULL.
+/**
+ * DOC: SCHEDOP_yield
+ * Voluntarily yield the CPU. @arg == NULL.
  */
 #define SCHEDOP_yield       0
 
-/*
+/**
+ * DOC: SCHEDOP_block
+ *
  * Block execution of this VCPU until an event is received for processing.
  * If called with event upcalls masked, this operation will atomically
  * reenable event delivery and check for pending events before blocking the
@@ -72,7 +79,9 @@
  */
 #define SCHEDOP_block       1
 
-/*
+/**
+ * DOC: SCHEDOP_shutdown
+ *
  * Halt execution of this domain (all VCPUs) and notify the system controller.
  * @arg == pointer to sched_shutdown_t structure.
  *
@@ -87,14 +96,18 @@
  */
 #define SCHEDOP_shutdown    2
 
-/*
+/**
+ * DOC: SCHEDOP_poll
+ *
  * Poll a set of event-channel ports. Return when one or more are pending. An
  * optional timeout may be specified.
  * @arg == pointer to sched_poll_t structure.
  */
 #define SCHEDOP_poll        3
 
-/*
+/**
+ * DOC: SCHEDOP_remote_shutdown
+ *
  * Declare a shutdown for another domain. The main use of this function is
  * in interpreting shutdown requests and reasons for fully-virtualized
  * domains.  A para-virtualized domain may use SCHEDOP_shutdown directly.
@@ -102,14 +115,18 @@
  */
 #define SCHEDOP_remote_shutdown        4
 
-/*
+/**
+ * DOC: SCHEDOP_shutdown_code
+ *
  * Latch a shutdown code, so that when the domain later shuts down it
  * reports this code to the control tools.
  * @arg == sched_shutdown_t, as for SCHEDOP_shutdown.
  */
 #define SCHEDOP_shutdown_code 5
 
-/*
+/**
+ * DOC: SCHEDOP_watchdog
+ *
  * Setup, poke and destroy a domain watchdog timer.
  * @arg == pointer to sched_watchdog_t structure.
  * With id == 0, setup a domain watchdog timer to cause domain shutdown
@@ -119,7 +136,9 @@
  */
 #define SCHEDOP_watchdog    6
 
-/*
+/**
+ * DOC: SCHEDOP_pin_override
+ *
  * Override the current vcpu affinity by pinning it to one physical cpu or
  * undo this override restoring the previous affinity.
  * @arg == pointer to sched_pin_override_t structure.
@@ -130,14 +149,20 @@
  * to be part of the domain's cpupool.
  */
 #define SCHEDOP_pin_override 7
-/* ` } */
 
+/**
+ * struct sched_shutdown
+ */
 struct sched_shutdown {
-    unsigned int reason; /* SHUTDOWN_* => enum sched_shutdown_reason */
+    /** @reason: SHUTDOWN_* => enum sched_shutdown_reason */
+    unsigned int reason;
 };
 typedef struct sched_shutdown sched_shutdown_t;
 DEFINE_XEN_GUEST_HANDLE(sched_shutdown_t);
 
+/**
+ * struct sched_poll
+ */
 struct sched_poll {
     XEN_GUEST_HANDLE(evtchn_port_t) ports;
     unsigned int nr_ports;
@@ -146,39 +171,61 @@ struct sched_poll {
 typedef struct sched_poll sched_poll_t;
 DEFINE_XEN_GUEST_HANDLE(sched_poll_t);
 
+/**
+ * struct sched_remote_shutdown
+ */
 struct sched_remote_shutdown {
-    domid_t domain_id;         /* Remote domain ID */
-    unsigned int reason;       /* SHUTDOWN_* => enum sched_shutdown_reason */
+    /** @domain_id: Remote domain ID */
+    domid_t domain_id;
+    /** @reason: SHUTDOWN_* => enum sched_shutdown_reason */
+    unsigned int reason;
 };
 typedef struct sched_remote_shutdown sched_remote_shutdown_t;
 DEFINE_XEN_GUEST_HANDLE(sched_remote_shutdown_t);
 
+/**
+ * struct sched_watchdog
+ */
 struct sched_watchdog {
-    uint32_t id;                /* watchdog ID */
-    uint32_t timeout;           /* timeout */
+    /** @id: watchdog ID */
+    uint32_t id;
+    /** @timeout: timeout */
+    uint32_t timeout;
 };
 typedef struct sched_watchdog sched_watchdog_t;
 DEFINE_XEN_GUEST_HANDLE(sched_watchdog_t);
 
+/**
+ * struct sched_pin_override
+ */
 struct sched_pin_override {
     int32_t pcpu;
 };
 typedef struct sched_pin_override sched_pin_override_t;
 DEFINE_XEN_GUEST_HANDLE(sched_pin_override_t);
 
-/*
- * Reason codes for SCHEDOP_shutdown. These may be interpreted by control
- * software to determine the appropriate action. For the most part, Xen does
- * not care about the shutdown code.
+/**
+ * DOC: Reason codes for SCHEDOP_shutdown
+ *
+ * These may be interpreted by control software to determine the
+ * appropriate action. For the most part, Xen does not care about the
+ * shutdown code.
+ *
+ * - SHUTDOWN_poweroff: Domain exited normally. Clean up and kill.
+ * - SHUTDOWN_reboot:   Clean up, kill, and then restart.
+ * - SHUTDOWN_suspend:  Clean up, save suspend info, kill.
+ * - SHUTDOWN_crash:    Tell controller we've crashed.
+ * - SHUTDOWN_watchdog: Restart because watchdog time expired.
  */
-/* ` enum sched_shutdown_reason { */
-#define SHUTDOWN_poweroff   0  /* Domain exited normally. Clean up and kill. */
-#define SHUTDOWN_reboot     1  /* Clean up, kill, and then restart.          */
-#define SHUTDOWN_suspend    2  /* Clean up, save suspend info, kill.         */
-#define SHUTDOWN_crash      3  /* Tell controller we've crashed.             */
-#define SHUTDOWN_watchdog   4  /* Restart because watchdog time expired.     */
-
-/*
+#define SHUTDOWN_poweroff   0
+#define SHUTDOWN_reboot     1
+#define SHUTDOWN_suspend    2
+#define SHUTDOWN_crash      3
+#define SHUTDOWN_watchdog   4
+
+/**
+ * DOC: SHUTDOWN_soft_reset
+ *
  * Domain asked to perform 'soft reset' for it. The expected behavior is to
  * reset internal Xen state for the domain returning it to the point where it
  * was created but leaving the domain's memory contents and vCPU contexts
@@ -186,8 +233,11 @@ DEFINE_XEN_GUEST_HANDLE(sched_pin_override_t);
  * interfaces again.
  */
 #define SHUTDOWN_soft_reset 5
-#define SHUTDOWN_MAX        5  /* Maximum valid shutdown reason.             */
-/* ` } */
+/**
+ * DOC: SHUTDOWN_MAX
+ * Maximum valid shutdown reason.
+ */
+#define SHUTDOWN_MAX        5
 
 #endif /* __XEN_PUBLIC_SCHED_H__ */
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9803.25935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YT-00007S-IW; Wed, 21 Oct 2020 00:00:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9803.25935; Wed, 21 Oct 2020 00:00:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YT-00007F-D9; Wed, 21 Oct 2020 00:00:37 +0000
Received: by outflank-mailman (input) for mailman id 9803;
 Wed, 21 Oct 2020 00:00:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YR-0007xs-4O
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62713062-16e8-4abe-8186-a32cb459fbc1;
 Wed, 21 Oct 2020 00:00:21 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A53AE22460;
 Wed, 21 Oct 2020 00:00:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238421;
	bh=bil3zAxsGICVbiRPDFmg+ayhtYh9RfCEVPeAMVzrx8M=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=xgRCeKVDWkdw2RJzOkHoOiYzXysCK/qN0WUW+w6BH/XZUptpE6CEdklalskokdaJ7
	 Z85gjqjeKCBr4A+0W1IJ9aNLMZSebReoZnLVqMoyy2IaU5Iw40+BQBxrdPCr/aqRmn
	 H7KyAi5GIhWCKWMe3BjvlcZDP/4BcHJrZRQ+xHqE=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 13/14] kernel-doc: public/elfnote.h
Date: Tue, 20 Oct 2020 17:00:10 -0700
Message-Id: <20201021000011.15351-13-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/include/public/elfnote.h | 109 ++++++++++++++++++++++++++---------
 1 file changed, 81 insertions(+), 28 deletions(-)

diff --git a/xen/include/public/elfnote.h b/xen/include/public/elfnote.h
index 181cbc4ec7..1dd567a6f1 100644
--- a/xen/include/public/elfnote.h
+++ b/xen/include/public/elfnote.h
@@ -27,8 +27,8 @@
 #ifndef __XEN_PUBLIC_ELFNOTE_H__
 #define __XEN_PUBLIC_ELFNOTE_H__
 
-/*
- * `incontents 200 elfnotes ELF notes
+/**
+ * DOC: ELF notes
  *
  * The notes should live in a PT_NOTE segment and have "Xen" in the
  * name field.
@@ -43,26 +43,35 @@
  * as ASCIZ type.
  */
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_INFO
  * NAME=VALUE pair (string).
  */
 #define XEN_ELFNOTE_INFO           0
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_ENTRY
+ *
  * The virtual address of the entry point (numeric).
  *
  * LEGACY: VIRT_ENTRY
  */
 #define XEN_ELFNOTE_ENTRY          1
 
-/* The virtual address of the hypercall transfer page (numeric).
+/**
+ * DOC: XEN_ELFNOTE_HYPERCALL_PAGE
+ *
+ * The virtual address of the hypercall transfer page (numeric).
  *
  * LEGACY: HYPERCALL_PAGE. (n.b. legacy value is a physical page
  * number not a virtual address)
  */
 #define XEN_ELFNOTE_HYPERCALL_PAGE 2
 
-/* The virtual address where the kernel image should be mapped (numeric).
+/**
+ * DOC: XEN_ELFNOTE_VIRT_BASE
+ *
+ * The virtual address where the kernel image should be mapped (numeric).
  *
  * Defaults to 0.
  *
@@ -70,7 +79,9 @@
  */
 #define XEN_ELFNOTE_VIRT_BASE      3
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_PADDR_OFFSET
+ *
  * The offset of the ELF paddr field from the actual required
  * pseudo-physical address (numeric).
  *
@@ -82,35 +93,45 @@
  */
 #define XEN_ELFNOTE_PADDR_OFFSET   4
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_XEN_VERSION
+ *
  * The version of Xen that we work with (string).
  *
  * LEGACY: XEN_VER
  */
 #define XEN_ELFNOTE_XEN_VERSION    5
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_GUEST_OS
+ *
  * The name of the guest operating system (string).
  *
  * LEGACY: GUEST_OS
  */
 #define XEN_ELFNOTE_GUEST_OS       6
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_GUEST_VERSION
+ *
  * The version of the guest operating system (string).
  *
  * LEGACY: GUEST_VER
  */
 #define XEN_ELFNOTE_GUEST_VERSION  7
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_LOADER
+ *
  * The loader type (string).
  *
  * LEGACY: LOADER
  */
 #define XEN_ELFNOTE_LOADER         8
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_PAE_MODE
+ *
  * The kernel supports PAE (x86/32 only, string = "yes", "no" or
  * "bimodal").
  *
@@ -126,7 +147,9 @@
  */
 #define XEN_ELFNOTE_PAE_MODE       9
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_FEATURES
+ *
  * The features supported/required by this kernel (string).
  *
  * The string must consist of a list of feature names (as given in
@@ -138,7 +161,9 @@
  */
 #define XEN_ELFNOTE_FEATURES      10
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_BSD_SYMTAB
+ *
  * The kernel requires the symbol table to be loaded (string = "yes" or "no")
  * LEGACY: BSD_SYMTAB (n.b. The legacy treated the presence or absence
  * of this string as a boolean flag rather than requiring "yes" or
@@ -146,7 +171,9 @@
  */
 #define XEN_ELFNOTE_BSD_SYMTAB    11
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_HV_START_LOW
+ *
  * The lowest address the hypervisor hole can begin at (numeric).
  *
  * This must not be set higher than HYPERVISOR_VIRT_START. Its presence
@@ -155,13 +182,17 @@
  */
 #define XEN_ELFNOTE_HV_START_LOW  12
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_L1_MFN_VALID
+ *
  * List of maddr_t-sized mask/value pairs describing how to recognize
  * (non-present) L1 page table entries carrying valid MFNs (numeric).
  */
 #define XEN_ELFNOTE_L1_MFN_VALID  13
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_SUSPEND_CANCEL
+ *
  * Whether or not the guest supports cooperative suspend cancellation.
  * This is a numeric value.
  *
@@ -169,7 +200,9 @@
  */
 #define XEN_ELFNOTE_SUSPEND_CANCEL 14
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_INIT_P2M
+ *
  * The (non-default) location the initial phys-to-machine map should be
  * placed at by the hypervisor (Dom0) or the tools (DomU).
  * The kernel must be prepared for this mapping to be established using
@@ -182,13 +215,17 @@
  */
 #define XEN_ELFNOTE_INIT_P2M      15
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_MOD_START_PFN
+ *
  * Whether or not the guest can deal with being passed an initrd not
  * mapped through its initial page tables.
  */
 #define XEN_ELFNOTE_MOD_START_PFN 16
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_SUPPORTED_FEATURES
+ *
  * The features supported by this kernel (numeric).
  *
  * Other than XEN_ELFNOTE_FEATURES on pre-4.2 Xen, this note allows a
@@ -201,7 +238,9 @@
  */
 #define XEN_ELFNOTE_SUPPORTED_FEATURES 17
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_PHYS32_ENTRY
+ *
  * Physical entry point into the kernel.
  *
  * 32bit entry point into the kernel. When requested to launch the
@@ -211,12 +250,16 @@
  */
 #define XEN_ELFNOTE_PHYS32_ENTRY 18
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_MAX
+ *
  * The number of the highest elfnote defined.
  */
 #define XEN_ELFNOTE_MAX XEN_ELFNOTE_PHYS32_ENTRY
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_CRASH_INFO
+ *
  * System information exported through crash notes.
  *
  * The kexec / kdump code will create one XEN_ELFNOTE_CRASH_INFO
@@ -225,7 +268,9 @@
  */
 #define XEN_ELFNOTE_CRASH_INFO 0x1000001
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_CRASH_REGS
+ *
  * System registers exported through crash notes.
  *
  * The kexec / kdump code will create one XEN_ELFNOTE_CRASH_REGS
@@ -236,7 +281,9 @@
 #define XEN_ELFNOTE_CRASH_REGS 0x1000002
 
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_DUMPCORE_NONE
+ *
  * xen dump-core none note.
  * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_NONE
  * in its dump file to indicate that the file is xen dump-core
@@ -245,7 +292,9 @@
  */
 #define XEN_ELFNOTE_DUMPCORE_NONE               0x2000000
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_DUMPCORE_HEADER
+ *
  * xen dump-core header note.
  * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_HEADER
  * in its dump file.
@@ -253,7 +302,9 @@
  */
 #define XEN_ELFNOTE_DUMPCORE_HEADER             0x2000001
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_DUMPCORE_XEN_VERSION
+ *
  * xen dump-core xen version note.
  * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_XEN_VERSION
  * in its dump file. It contains the xen version obtained via the
@@ -262,7 +313,9 @@
  */
 #define XEN_ELFNOTE_DUMPCORE_XEN_VERSION        0x2000002
 
-/*
+/**
+ * DOC: XEN_ELFNOTE_DUMPCORE_FORMAT_VERSION
+ *
  * xen dump-core format version note.
  * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_FORMAT_VERSION
  * in its dump file. It contains a format version identifier.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9804.25941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YU-00008v-6g; Wed, 21 Oct 2020 00:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9804.25941; Wed, 21 Oct 2020 00:00:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YT-00008Z-SA; Wed, 21 Oct 2020 00:00:37 +0000
Received: by outflank-mailman (input) for mailman id 9804;
 Wed, 21 Oct 2020 00:00:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YR-0007y0-Ob
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb288b8f-34a9-4214-9d31-511fb1d9151a;
 Wed, 21 Oct 2020 00:00:19 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 93AE12245F;
 Wed, 21 Oct 2020 00:00:18 +0000 (UTC)
X-Inumbo-ID: fb288b8f-34a9-4214-9d31-511fb1d9151a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238419;
	bh=iUURwQfkG0gQB7i5lmO2e9p5o2Tt6lx6q08IvoYLuag=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=Nx9ZVDaZu0Xa3neqPnZT1SFokf/7Ub9rnKqc9NsllrYgJqSS9s2sBgJSxWkfafhHN
	 lZaPoTb8Ye3vRIK9PTRInQHLKLcOmmtoY0MNmEb/l50UTLDi1J+iOlhB4iAULBFbxl
	 TUKsWWnbMWFmVRRgckFX764VyAzsjxnHpbVL+bjs=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 10/14] kernel-doc: public/vcpu.h
Date: Tue, 20 Oct 2020 17:00:07 -0700
Message-Id: <20201021000011.15351-10-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/include/public/vcpu.h | 180 ++++++++++++++++++++++++++++----------
 1 file changed, 136 insertions(+), 44 deletions(-)

diff --git a/xen/include/public/vcpu.h b/xen/include/public/vcpu.h
index 3623af932f..e50471e2b2 100644
--- a/xen/include/public/vcpu.h
+++ b/xen/include/public/vcpu.h
@@ -29,15 +29,20 @@
 
 #include "xen.h"
 
-/*
+/**
+ * DOC: VCPUOP hypercall
+ *
  * Prototype for this hypercall is:
  *  long vcpu_op(int cmd, unsigned int vcpuid, void *extra_args)
- * @cmd        == VCPUOP_??? (VCPU operation).
- * @vcpuid     == VCPU to operate on.
- * @extra_args == Operation-specific extra arguments (NULL if none).
+ *
+ * - @cmd        == VCPUOP_??? (VCPU operation).
+ * - @vcpuid     == VCPU to operate on.
+ * - @extra_args == Operation-specific extra arguments (NULL if none).
  */
 
-/*
+/**
+ * DOC: VCPUOP_initialise
+ *
  * Initialise a VCPU. Each VCPU can be initialised only once. A
  * newly-initialised VCPU will not run until it is brought up by VCPUOP_up.
  *
@@ -48,13 +53,17 @@
  */
 #define VCPUOP_initialise            0
 
-/*
+/**
+ * DOC: VCPUOP_up
+ *
  * Bring up a VCPU. This makes the VCPU runnable. This operation will fail
  * if the VCPU has not been initialised (VCPUOP_initialise).
  */
 #define VCPUOP_up                    1
 
-/*
+/**
+ * DOC: VCPUOP_down
+ *
  * Bring down a VCPU (i.e., make it non-runnable).
  * There are a few caveats that callers should observe:
  *  1. This operation may return, and VCPU_is_up may return false, before the
@@ -70,26 +79,36 @@
  */
 #define VCPUOP_down                  2
 
-/* Returns 1 if the given VCPU is up. */
+/**
+ * DOC: VCPUOP_is_up
+ * Returns 1 if the given VCPU is up.
+ */
 #define VCPUOP_is_up                 3
 
-/*
+#define VCPUOP_get_runstate_info     4
+/**
+ * struct vcpu_runstate_info - VCPUOP_get_runstate_info
+ *
  * Return information about the state and running time of a VCPU.
  * @extra_arg == pointer to vcpu_runstate_info structure.
  */
-#define VCPUOP_get_runstate_info     4
 struct vcpu_runstate_info {
-    /* VCPU's current state (RUNSTATE_*). */
+    /** @state: VCPU's current state (RUNSTATE_*). */
     int      state;
-    /* When was current state entered (system time, ns)? */
-    uint64_t state_entry_time;
-    /*
-     * Update indicator set in state_entry_time:
+    /**
+     * @state_entry_time:
+     *
+     * When was current state entered (system time, ns)?
+     *
+     * XEN_RUNSTATE_UPDATE is the update indicator in state_entry_time:
      * When activated via VMASST_TYPE_runstate_update_flag, set during
      * updates in guest memory mapped copy of vcpu_runstate_info.
      */
 #define XEN_RUNSTATE_UPDATE          (xen_mk_ullong(1) << 63)
-    /*
+    uint64_t state_entry_time;
+    /**
+     * @time:
+     *
      * Time spent in each RUNSTATE_* (ns). The sum of these times is
      * guaranteed not to drift from system time.
      */
@@ -98,16 +117,27 @@ struct vcpu_runstate_info {
 typedef struct vcpu_runstate_info vcpu_runstate_info_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_t);
 
-/* VCPU is currently running on a physical CPU. */
+/**
+ * DOC: RUNSTATE_running
+ * VCPU is currently running on a physical CPU.
+ */
 #define RUNSTATE_running  0
 
-/* VCPU is runnable, but not currently scheduled on any physical CPU. */
+/**
+ * DOC: RUNSTATE_runnable
+ * VCPU is runnable, but not currently scheduled on any physical CPU.
+ */
 #define RUNSTATE_runnable 1
 
-/* VCPU is blocked (a.k.a. idle). It is therefore not runnable. */
+/**
+ * DOC: RUNSTATE_blocked
+ * VCPU is blocked (a.k.a. idle). It is therefore not runnable.
+ */
 #define RUNSTATE_blocked  2
 
-/*
+/**
+ * DOC: RUNSTATE_offline
+ *
  * VCPU is not runnable, but it is not blocked.
  * This is a 'catch all' state for things like hotplug and pauses by the
  * system administrator (or for critical sections in the hypervisor).
@@ -115,7 +145,10 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_t);
  */
 #define RUNSTATE_offline  3
 
-/*
+#define VCPUOP_register_runstate_memory_area 5
+/**
+ * struct vcpu_register_runstate_memory_area - VCPUOP_register_runstate_memory_area
+ *
  * Register a shared memory area from which the guest may obtain its own
  * runstate information without needing to execute a hypercall.
  * Notes:
@@ -127,9 +160,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_runstate_info_t);
  *     runstate.state will always be RUNSTATE_running and
  *     runstate.state_entry_time will indicate the system time at which the
  *     VCPU was last scheduled to run.
+ *
  * @extra_arg == pointer to vcpu_register_runstate_memory_area structure.
  */
-#define VCPUOP_register_runstate_memory_area 5
 struct vcpu_register_runstate_memory_area {
     union {
         XEN_GUEST_HANDLE(vcpu_runstate_info_t) h;
@@ -140,38 +173,74 @@ struct vcpu_register_runstate_memory_area {
 typedef struct vcpu_register_runstate_memory_area vcpu_register_runstate_memory_area_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_runstate_memory_area_t);
 
-/*
- * Set or stop a VCPU's periodic timer. Every VCPU has one periodic timer
- * which can be set via these commands. Periods smaller than one millisecond
- * may not be supported.
+/**
+ * DOC: VCPUOP_set_periodic_timer
+ *
+ * Set a VCPU's periodic timer. Every VCPU has one periodic timer which
+ * can be set via this command. Periods smaller than one millisecond may
+ * not be supported.
+ *
+ * @arg == vcpu_set_periodic_timer_t
+ */
+#define VCPUOP_set_periodic_timer    6
+/**
+ * DOC: VCPUOP_stop_periodic_timer
+ *
+ * Stop a VCPU's periodic timer.
+ *
+ * @arg == NULL
+ */
+#define VCPUOP_stop_periodic_timer   7
+/**
+ * struct vcpu_set_periodic_timer
  */
-#define VCPUOP_set_periodic_timer    6 /* arg == vcpu_set_periodic_timer_t */
-#define VCPUOP_stop_periodic_timer   7 /* arg == NULL */
 struct vcpu_set_periodic_timer {
     uint64_t period_ns;
 };
 typedef struct vcpu_set_periodic_timer vcpu_set_periodic_timer_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_set_periodic_timer_t);
 
-/*
- * Set or stop a VCPU's single-shot timer. Every VCPU has one single-shot
- * timer which can be set via these commands.
+/**
+ * DOC: VCPUOP_set_singleshot_timer
+ *
+ * Set a VCPU's single-shot timer. Every VCPU has one single-shot timer
+ * which can be set via this command.
+ *
+ * @arg == vcpu_set_singleshot_timer_t
+ */
+#define VCPUOP_set_singleshot_timer  8
+/**
+ * DOC: VCPUOP_stop_singleshot_timer
+ *
+ * Stop a VCPU's single-shot timer.
+ *
+ * @arg == NULL
+ */
+#define VCPUOP_stop_singleshot_timer 9
+/**
+ * struct vcpu_set_singleshot_timer
  */
-#define VCPUOP_set_singleshot_timer  8 /* arg == vcpu_set_singleshot_timer_t */
-#define VCPUOP_stop_singleshot_timer 9 /* arg == NULL */
 struct vcpu_set_singleshot_timer {
-    uint64_t timeout_abs_ns;   /* Absolute system time value in nanoseconds. */
-    uint32_t flags;            /* VCPU_SSHOTTMR_??? */
+    /** @timeout_abs_ns: Absolute system time value in nanoseconds. */
+    uint64_t timeout_abs_ns;
+    /** @flags: VCPU_SSHOTTMR_??? */
+    uint32_t flags;
 };
 typedef struct vcpu_set_singleshot_timer vcpu_set_singleshot_timer_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_set_singleshot_timer_t);
 
-/* Flags to VCPUOP_set_singleshot_timer. */
- /* Require the timeout to be in the future (return -ETIME if it's passed). */
+/**
+ * DOC: Flags to VCPUOP_set_singleshot_timer
+ *
+ * VCPU_SSHOTTMR_future: Require the timeout to be in the future
+ *                       (return -ETIME if it's passed).
+ */
 #define _VCPU_SSHOTTMR_future (0)
 #define VCPU_SSHOTTMR_future  (1U << _VCPU_SSHOTTMR_future)
 
-/*
+/**
+ * DOC: VCPUOP_register_vcpu_info
+ *
  * Register a memory location in the guest address space for the
  * vcpu_info structure.  This allows the guest to place the vcpu_info
  * structure in a convenient place, such as in a per-cpu data area.
@@ -179,26 +248,44 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_set_singleshot_timer_t);
  * cross a page boundary.
  *
  * This may be called only once per vcpu.
+ *
+ * @arg == vcpu_register_vcpu_info_t
+ */
+#define VCPUOP_register_vcpu_info   10
+/**
+ * struct vcpu_register_vcpu_info - VCPUOP_register_vcpu_info
  */
-#define VCPUOP_register_vcpu_info   10  /* arg == vcpu_register_vcpu_info_t */
 struct vcpu_register_vcpu_info {
-    uint64_t mfn;    /* mfn of page to place vcpu_info */
-    uint32_t offset; /* offset within page */
-    uint32_t rsvd;   /* unused */
+    /** @mfn: mfn of page to place vcpu_info */
+    uint64_t mfn;
+    /** @offset: offset within page */
+    uint32_t offset;
+    /** @rsvd: unused */
+    uint32_t rsvd;
 };
 typedef struct vcpu_register_vcpu_info vcpu_register_vcpu_info_t;
 DEFINE_XEN_GUEST_HANDLE(vcpu_register_vcpu_info_t);
 
-/* Send an NMI to the specified VCPU. @extra_arg == NULL. */
+/**
+ * DOC: VCPUOP_send_nmi
+ * Send an NMI to the specified VCPU. @extra_arg == NULL.
+ */
 #define VCPUOP_send_nmi             11
 
-/*
+/**
+ * DOC: VCPUOP_get_physid
+ *
  * Get the physical ID information for a pinned vcpu's underlying physical
- * processor.  The physical ID informmation is architecture-specific.
+ * processor.  The physical ID information is architecture-specific.
  * On x86: id[31:0]=apic_id, id[63:32]=acpi_id.
  * This command returns -EINVAL if it is not a valid operation for this VCPU.
+ *
+ * @arg == vcpu_get_physid_t
+ */
+#define VCPUOP_get_physid           12
+/**
+ * struct vcpu_get_physid
  */
-#define VCPUOP_get_physid           12 /* arg == vcpu_get_physid_t */
 struct vcpu_get_physid {
     uint64_t phys_id;
 };
@@ -207,7 +294,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_get_physid_t);
 #define xen_vcpu_physid_to_x86_apicid(physid) ((uint32_t)(physid))
 #define xen_vcpu_physid_to_x86_acpiid(physid) ((uint32_t)((physid) >> 32))
 
-/*
+/**
+ * DOC: VCPUOP_register_vcpu_time_memory_area
+ *
  * Register a memory location to get a secondary copy of the vcpu time
  * parameters.  The master copy still exists as part of the vcpu shared
  * memory area, and this secondary copy is updated whenever the master copy
@@ -225,6 +314,9 @@ DEFINE_XEN_GUEST_HANDLE(vcpu_get_physid_t);
  */
 #define VCPUOP_register_vcpu_time_memory_area   13
 DEFINE_XEN_GUEST_HANDLE(vcpu_time_info_t);
+/**
+ * struct vcpu_register_time_memory_area
+ */
 struct vcpu_register_time_memory_area {
     union {
         XEN_GUEST_HANDLE(vcpu_time_info_t) h;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9806.25959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YX-0000GZ-7O; Wed, 21 Oct 2020 00:00:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9806.25959; Wed, 21 Oct 2020 00:00:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YW-0000G4-UY; Wed, 21 Oct 2020 00:00:40 +0000
Received: by outflank-mailman (input) for mailman id 9806;
 Wed, 21 Oct 2020 00:00:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YW-0007xs-4R
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:40 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d28842fa-b5a9-461b-b7ea-4e791c964b6c;
 Wed, 21 Oct 2020 00:00:22 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 45CEC22453;
 Wed, 21 Oct 2020 00:00:21 +0000 (UTC)
X-Inumbo-ID: d28842fa-b5a9-461b-b7ea-4e791c964b6c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238421;
	bh=BD/JHqXrWL/kgaMnyhDk7hBu4c8nq6XJ1V8apUZwUWE=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=MCKJrJJScGS/tkXYj7J1Qc409NSdXcfFozQZ17mb3hvCxrS2HviZVBlqcjuwDgwRk
	 l9z6jY8/LyenYqjdWCZihmzadRFaGdejU9Rxr8ZX9/bU1IGCSf+zISrj/GDXm9eybQ
	 Jsoy94Us+CklVPd20RpdNDZQ0Vv2+ve9J1lWGRFQ=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 14/14] kernel-doc: public/hvm/params.h
Date: Tue, 20 Oct 2020 17:00:11 -0700
Message-Id: <20201021000011.15351-14-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 xen/include/public/hvm/params.h | 153 +++++++++++++++++++++++++-------
 1 file changed, 120 insertions(+), 33 deletions(-)

diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
index 0e3fdca096..2017e4334d 100644
--- a/xen/include/public/hvm/params.h
+++ b/xen/include/public/hvm/params.h
@@ -41,13 +41,16 @@
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
-/*
+/**
+ * DOC: HVMOP_set_param and HVMOP_get_param
  * Parameter space for HVMOP_{set,get}_param.
  */
 
 #define HVM_PARAM_CALLBACK_IRQ 0
 #define HVM_PARAM_CALLBACK_IRQ_TYPE_MASK xen_mk_ullong(0xFF00000000000000)
-/*
+/**
+ * DOC: HVM_PARAM_CALLBACK_*
+ *
  * How should CPU0 event-channel notifications be delivered?
  *
  * If val == 0 then CPU0 event-channel notifications are not delivered.
@@ -55,26 +58,34 @@
  */
 
 #define HVM_PARAM_CALLBACK_TYPE_GSI      0
-/*
+/**
+ * DOC: HVM_PARAM_CALLBACK_TYPE_GSI
+ *
  * val[55:0] is a delivery GSI.  GSI 0 cannot be used, as it aliases val == 0,
  * and disables all notifications.
  */
 
 #define HVM_PARAM_CALLBACK_TYPE_PCI_INTX 1
-/*
+/**
+ * DOC: HVM_PARAM_CALLBACK_TYPE_PCI_INTX
+ *
  * val[55:0] is a delivery PCI INTx line:
  * Domain = val[47:32], Bus = val[31:16] DevFn = val[15:8], IntX = val[1:0]
  */
 
 #if defined(__i386__) || defined(__x86_64__)
 #define HVM_PARAM_CALLBACK_TYPE_VECTOR   2
-/*
+/**
+ * DOC: HVM_PARAM_CALLBACK_TYPE_VECTOR
+ *
  * val[7:0] is a vector number.  Check for XENFEAT_hvm_callback_vector to know
  * if this delivery method is available.
  */
 #elif defined(__arm__) || defined(__aarch64__)
 #define HVM_PARAM_CALLBACK_TYPE_PPI      2
-/*
+/**
+ * DOC: HVM_PARAM_CALLBACK_TYPE_PPI
+ *
  * val[55:16] needs to be zero.
  * val[15:8] is interrupt flag of the PPI used by event-channel:
  *  bit 8: the PPI is edge(1) or level(0) triggered
@@ -87,7 +98,9 @@
 #define HVM_PARAM_CALLBACK_TYPE_PPI_FLAG_LOW_LEVEL 2
 #endif
 
-/*
+/**
+ * DOC: HVM_PARAM_STORE_*, HVM_PARAM_IOREQ_PFN, HVM_PARAM_BUFIOREQ_PFN
+ *
  * These are not used by Xen. They are here for convenience of HVM-guest
  * xenbus implementations.
  */
@@ -100,7 +113,9 @@
 
 #if defined(__i386__) || defined(__x86_64__)
 
-/*
+/**
+ * DOC: HVM_PARAM_VIRIDIAN
+ *
  * Viridian enlightenments
  *
  * (See http://download.microsoft.com/download/A/B/4/AB43A34E-BDD0-4FA6-BDEF-79EEF16E880B/Hypervisor%20Top%20Level%20Functional%20Specification%20v4.0.docx)
@@ -111,7 +126,10 @@
  */
 #define HVM_PARAM_VIRIDIAN     9
 
-/* Base+Freq viridian feature sets:
+/**
+ * DOC: HVMPV_base_freq
+ *
+ * Base+Freq viridian feature sets:
  *
  * - Hypercall MSRs (HV_X64_MSR_GUEST_OS_ID and HV_X64_MSR_HYPERCALL)
  * - APIC access MSRs (HV_X64_MSR_EOI, HV_X64_MSR_ICR and HV_X64_MSR_TPR)
@@ -124,7 +142,10 @@
 
 /* Feature set modifications */
 
-/* Disable timer frequency MSRs (HV_X64_MSR_TSC_FREQUENCY and
+/**
+ * DOC: HVMPV_no_freq
+ *
+ * Disable timer frequency MSRs (HV_X64_MSR_TSC_FREQUENCY and
  * HV_X64_MSR_APIC_FREQUENCY).
  * This modification restores the viridian feature set to the
  * original 'base' set exposed in releases prior to Xen 4.4.
@@ -132,35 +153,59 @@
 #define _HVMPV_no_freq 1
 #define HVMPV_no_freq  (1 << _HVMPV_no_freq)
 
-/* Enable Partition Time Reference Counter (HV_X64_MSR_TIME_REF_COUNT) */
+/**
+ * DOC: HVMPV_time_ref_count
+ * Enable Partition Time Reference Counter (HV_X64_MSR_TIME_REF_COUNT)
+ */
 #define _HVMPV_time_ref_count 2
 #define HVMPV_time_ref_count  (1 << _HVMPV_time_ref_count)
 
-/* Enable Reference TSC Page (HV_X64_MSR_REFERENCE_TSC) */
+/**
+ * DOC: HVMPV_reference_tsc
+ * Enable Reference TSC Page (HV_X64_MSR_REFERENCE_TSC)
+ */
 #define _HVMPV_reference_tsc 3
 #define HVMPV_reference_tsc  (1 << _HVMPV_reference_tsc)
 
-/* Use Hypercall for remote TLB flush */
+/**
+ * DOC: HVMPV_hcall_remote_tlb_flush
+ * Use Hypercall for remote TLB flush
+ */
 #define _HVMPV_hcall_remote_tlb_flush 4
 #define HVMPV_hcall_remote_tlb_flush (1 << _HVMPV_hcall_remote_tlb_flush)
 
-/* Use APIC assist */
+/**
+ * DOC: HVMPV_apic_assist
+ * Use APIC assist
+ */
 #define _HVMPV_apic_assist 5
 #define HVMPV_apic_assist (1 << _HVMPV_apic_assist)
 
-/* Enable crash MSRs */
+/**
+ * DOC: HVMPV_crash_ctl
+ * Enable crash MSRs
+ */
 #define _HVMPV_crash_ctl 6
 #define HVMPV_crash_ctl (1 << _HVMPV_crash_ctl)
 
-/* Enable SYNIC MSRs */
+/**
+ * DOC: HVMPV_synic
+ * Enable SYNIC MSRs
+ */
 #define _HVMPV_synic 7
 #define HVMPV_synic (1 << _HVMPV_synic)
 
-/* Enable STIMER MSRs */
+/**
+ * DOC: HVMPV_stimer
+ * Enable STIMER MSRs
+ */
 #define _HVMPV_stimer 8
 #define HVMPV_stimer (1 << _HVMPV_stimer)
 
-/* Use Synthetic Cluster IPI Hypercall */
+/**
+ * DOC: HVMPV_hcall_ipi
+ * Use Synthetic Cluster IPI Hypercall
+ */
 #define _HVMPV_hcall_ipi 9
 #define HVMPV_hcall_ipi (1 << _HVMPV_hcall_ipi)
 
@@ -178,7 +223,9 @@
 
 #endif
 
-/*
+/**
+ * DOC: HVM_PARAM_TIMER_MODE
+ *
  * Set mode for virtual timers (currently x86 only):
  *  delay_for_missed_ticks (default):
  *   Do not advance a vcpu's time beyond the correct delivery time for
@@ -203,26 +250,47 @@
 #define HVMPTM_no_missed_ticks_pending   2
 #define HVMPTM_one_missed_tick_pending   3
 
-/* Boolean: Enable virtual HPET (high-precision event timer)? (x86-only) */
+/**
+ * DOC: HVM_PARAM_HPET_ENABLED
+ * Boolean: Enable virtual HPET (high-precision event timer)? (x86-only)
+ */
 #define HVM_PARAM_HPET_ENABLED 11
 
-/* Identity-map page directory used by Intel EPT when CR0.PG=0. */
+/**
+ * DOC: HVM_PARAM_IDENT_PT
+ * Identity-map page directory used by Intel EPT when CR0.PG=0.
+ */
 #define HVM_PARAM_IDENT_PT     12
 
-/* ACPI S state: currently support S0 and S3 on x86. */
+/**
+ * DOC: HVM_PARAM_ACPI_S_STATE
+ * ACPI S state: currently support S0 and S3 on x86.
+ */
 #define HVM_PARAM_ACPI_S_STATE 14
 
-/* TSS used on Intel when CR0.PE=0. */
+/**
+ * DOC: HVM_PARAM_VM86_TSS
+ * TSS used on Intel when CR0.PE=0.
+ */
 #define HVM_PARAM_VM86_TSS     15
 
-/* Boolean: Enable aligning all periodic vpts to reduce interrupts */
+/**
+ * DOC: HVM_PARAM_VPT_ALIGN
+ * Boolean: Enable aligning all periodic vpts to reduce interrupts
+ */
 #define HVM_PARAM_VPT_ALIGN    16
 
-/* Console debug shared memory ring and event channel */
+/**
+ * DOC: HVM_PARAM_CONSOLE_PFN and HVM_PARAM_CONSOLE_EVTCHN
+ *
+ * Console debug shared memory ring and event channel
+ */
 #define HVM_PARAM_CONSOLE_PFN    17
 #define HVM_PARAM_CONSOLE_EVTCHN 18
 
-/*
+/**
+ * DOC: HVM_PARAM_ACPI_IOPORTS_LOCATION
+ *
  * Select location of ACPI PM1a and TMR control blocks. Currently two locations
  * are supported, specified by version 0 or 1 in this parameter:
  *   - 0: default, use the old addresses
@@ -233,21 +301,33 @@
  */
 #define HVM_PARAM_ACPI_IOPORTS_LOCATION 19
 
-/* Params for the mem event rings */
+/**
+ * DOC: HVM_PARAM_*_RING_PFN
+ *
+ * Params for the mem event rings
+ */
 #define HVM_PARAM_PAGING_RING_PFN   27
 #define HVM_PARAM_MONITOR_RING_PFN  28
 #define HVM_PARAM_SHARING_RING_PFN  29
 
-/* SHUTDOWN_* action in case of a triple fault */
+/**
+ * DOC: HVM_PARAM_TRIPLE_FAULT_REASON
+ * SHUTDOWN_* action in case of a triple fault
+ */
 #define HVM_PARAM_TRIPLE_FAULT_REASON 31
 
 #define HVM_PARAM_IOREQ_SERVER_PFN 32
 #define HVM_PARAM_NR_IOREQ_SERVER_PAGES 33
 
-/* Location of the VM Generation ID in guest physical address space. */
+/**
+ * DOC: HVM_PARAM_VM_GENERATION_ID_ADDR
+ * Location of the VM Generation ID in guest physical address space.
+ */
 #define HVM_PARAM_VM_GENERATION_ID_ADDR 34
 
-/*
+/**
+ * DOC: HVM_PARAM_ALTP2M
+ *
  * Set mode for altp2m:
  *  disabled: don't activate altp2m (default)
  *  mixed: allow access to all altp2m ops for both in-guest and external tools
@@ -265,7 +345,9 @@
 #define XEN_ALTP2M_external      2
 #define XEN_ALTP2M_limited       3
 
-/*
+/**
+ * DOC: HVM_PARAM_X87_FIP_WIDTH
+ *
  * Size of the x87 FPU FIP/FDP registers that the hypervisor needs to
  * save/restore.  This is a workaround for a hardware limitation that
  * does not allow the full FIP/FDP and FCS/FDS to be restored.
@@ -287,13 +369,18 @@
  */
 #define HVM_PARAM_X87_FIP_WIDTH 36
 
-/*
+/**
+ * DOC: HVM_PARAM_VM86_TSS_SIZED
+ *
  * TSS (and its size) used on Intel when CR0.PE=0. The address occupies
  * the low 32 bits, while the size is in the high 32 ones.
  */
 #define HVM_PARAM_VM86_TSS_SIZED 37
 
-/* Enable MCA capabilities. */
+/**
+ * DOC: HVM_PARAM_MCA_CAP
+ * Enable MCA capabilities.
+ */
 #define HVM_PARAM_MCA_CAP 38
 #define XEN_HVM_MCA_CAP_LMCE   (xen_mk_ullong(1) << 0)
 #define XEN_HVM_MCA_CAP_MASK   XEN_HVM_MCA_CAP_LMCE
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9808.25971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YY-0000Ka-Ov; Wed, 21 Oct 2020 00:00:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9808.25971; Wed, 21 Oct 2020 00:00:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1YY-0000KF-F9; Wed, 21 Oct 2020 00:00:42 +0000
Received: by outflank-mailman (input) for mailman id 9808;
 Wed, 21 Oct 2020 00:00:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1YW-0007y0-Oa
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:40 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 594c8c1c-e70f-4bd0-b6c6-0fac37da93a2;
 Wed, 21 Oct 2020 00:00:20 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4A514223C6;
 Wed, 21 Oct 2020 00:00:19 +0000 (UTC)
X-Inumbo-ID: 594c8c1c-e70f-4bd0-b6c6-0fac37da93a2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238419;
	bh=6u2KSXtfrsaVwSC07uOvcRnYvoWFFX+5NVhxF6ZAfeE=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=uI445BEQ3WXlCwAp+fCf9meuqPiAQpURQ2fhdw9F2H1qOeAw5boeDg0tiXYXBFPr3
	 esoYaMG5hYToRovIhOkIB9BUCkyvGzVXgjvxwbc77B5ioQ50g4iKotEOJQx3bw7Oys
	 a0NOiUZlKb0xghZEcWyVaOYjlpcI7BRnSHTJNmdQ=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 11/14] kernel-doc: public/version.h
Date: Tue, 20 Oct 2020 17:00:08 -0700
Message-Id: <20201021000011.15351-11-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- remove "- XENVER_{version,pagesize}"
---
 xen/include/public/version.h | 73 +++++++++++++++++++++++++++++-------
 1 file changed, 59 insertions(+), 14 deletions(-)

diff --git a/xen/include/public/version.h b/xen/include/public/version.h
index 17a81e23cd..47bb04961e 100644
--- a/xen/include/public/version.h
+++ b/xen/include/public/version.h
@@ -30,19 +30,32 @@
 
 #include "xen.h"
 
-/* NB. All ops return zero on success, except XENVER_{version,pagesize}
- * XENVER_{version,pagesize,build_id} */
+/**
+ * DOC: XENVER_*
+ *
+ * NB. All ops return zero on success, except for:
+ *
+ * - XENVER_{version,pagesize,build_id}
+ */
 
-/* arg == NULL; returns major:minor (16:16). */
+/**
+ * DOC: XENVER_version
+ * @arg == NULL; returns major:minor (16:16).
+ */
 #define XENVER_version      0
 
-/* arg == xen_extraversion_t. */
+/**
+ * DOC: XENVER_extraversion
+ * @arg == xen_extraversion_t.
+ */
 #define XENVER_extraversion 1
 typedef char xen_extraversion_t[16];
 #define XEN_EXTRAVERSION_LEN (sizeof(xen_extraversion_t))
 
-/* arg == xen_compile_info_t. */
 #define XENVER_compile_info 2
+/**
+ * struct xen_compile_info - XENVER_compile_info
+ */
 struct xen_compile_info {
     char compiler[64];
     char compile_by[16];
@@ -51,52 +64,84 @@ struct xen_compile_info {
 };
 typedef struct xen_compile_info xen_compile_info_t;
 
+/**
+ * DOC: XENVER_capabilities
+ * @arg: xen_capabilities_info_t
+ */
 #define XENVER_capabilities 3
 typedef char xen_capabilities_info_t[1024];
 #define XEN_CAPABILITIES_INFO_LEN (sizeof(xen_capabilities_info_t))
 
+/**
+ * DOC: XENVER_changeset
+ * @arg: xen_changeset_info_t
+ */
 #define XENVER_changeset 4
 typedef char xen_changeset_info_t[64];
 #define XEN_CHANGESET_INFO_LEN (sizeof(xen_changeset_info_t))
 
 #define XENVER_platform_parameters 5
+/**
+ * struct xen_platform_parameters - XENVER_platform_parameters
+ */
 struct xen_platform_parameters {
     xen_ulong_t virt_start;
 };
 typedef struct xen_platform_parameters xen_platform_parameters_t;
 
 #define XENVER_get_features 6
+/**
+ * struct xen_feature_info - XENVER_get_features
+ */
 struct xen_feature_info {
-    unsigned int submap_idx;    /* IN: which 32-bit submap to return */
-    uint32_t     submap;        /* OUT: 32-bit submap */
+    /** @submap_idx: IN: which 32-bit submap to return */
+    unsigned int submap_idx;
+    /** @submap: OUT: 32-bit submap */
+    uint32_t     submap;
 };
 typedef struct xen_feature_info xen_feature_info_t;
 
-/* Declares the features reported by XENVER_get_features. */
+/**
+ * DOC: features.h
+ * Declares the features reported by XENVER_get_features.
+ */
 #include "features.h"
 
-/* arg == NULL; returns host memory page size. */
+/**
+ * DOC: XENVER_pagesize
+ * @arg == NULL; returns host memory page size.
+ */
 #define XENVER_pagesize 7
 
-/* arg == xen_domain_handle_t.
+/**
+ * DOC: XENVER_guest_handle
+ *
+ * @arg == xen_domain_handle_t.
  *
  * The toolstack fills it out for guest consumption. It is intended to hold
  * the UUID of the guest.
  */
 #define XENVER_guest_handle 8
 
+/**
+ * DOC: XENVER_commandline
+ * @arg: xen_commandline_t
+ */
 #define XENVER_commandline 9
 typedef char xen_commandline_t[1024];
 
-/*
+#define XENVER_build_id 10
+/**
+ * struct xen_build_id - XENVER_build_id
+ *
  * Return value is the number of bytes written, or XEN_Exx on error.
  * Calling with empty parameter returns the size of build_id.
  */
-#define XENVER_build_id 10
 struct xen_build_id {
-        uint32_t        len; /* IN: size of buf[]. */
+        /** @len: IN: size of buf[]. */
+        uint32_t        len;
+        /** @buf: OUT: Variable length buffer with build_id. */
         unsigned char   buf[XEN_FLEX_ARRAY_DIM];
-                             /* OUT: Variable length buffer with build_id. */
 };
 typedef struct xen_build_id xen_build_id_t;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:00:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9810.25982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1Yd-0000Vq-E1; Wed, 21 Oct 2020 00:00:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9810.25982; Wed, 21 Oct 2020 00:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1Yd-0000VY-5l; Wed, 21 Oct 2020 00:00:47 +0000
Received: by outflank-mailman (input) for mailman id 9810;
 Wed, 21 Oct 2020 00:00:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kV1Yb-0007y0-Op
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:00:45 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 595ad608-21a7-43ab-88ee-7caa93da50e1;
 Wed, 21 Oct 2020 00:00:21 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0104522409;
 Wed, 21 Oct 2020 00:00:19 +0000 (UTC)
X-Inumbo-ID: 595ad608-21a7-43ab-88ee-7caa93da50e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603238420;
	bh=m+Bt7REJylp9ULKwq9LZGyEvinEHkPdrUMKBkY9SP1g=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=jX+GYe87igyQpmmN/NchkyD42TfqbUQm5HY8r+tHC61pezuzIKrvDYXP9Df/kE2QQ
	 i34mFJMECnZMJebv/BJ9MnyeK8W7dxYYQnDw1Ycs0yQEs4+qruDa78KQ/XAOvXkY+S
	 sHJ+eA4Z9MG+k7n/Jmp2uzpDgEHv9wLS0VBO8y5E=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	ian.jackson@eu.citrix.com,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org,
	Bertrand.Marquis@arm.com,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 12/14] kernel-doc: public/xen.h
Date: Tue, 20 Oct 2020 17:00:09 -0700
Message-Id: <20201021000011.15351-12-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>

Convert in-code comments to kernel-doc format wherever possible.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- remove "enum" comments
- use oneline comments even for nested struct members
---
 xen/include/public/xen.h | 566 +++++++++++++++++++++++++--------------
 1 file changed, 369 insertions(+), 197 deletions(-)

diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index e373592c33..420b5f56cd 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -81,14 +81,62 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 
 #endif
 
-/*
- * HYPERCALLS
- */
-
-/* `incontents 100 hcalls List of hypercalls
- * ` enum hypercall_num { // __HYPERVISOR_* => HYPERVISOR_*()
+/**
+ * DOC: HYPERCALLS
+ *
+ * List of hypercalls
+ *
+ * - __HYPERVISOR_set_trap_table
+ * - __HYPERVISOR_mmu_update
+ * - __HYPERVISOR_set_gdt
+ * - __HYPERVISOR_stack_switch
+ * - __HYPERVISOR_set_callbacks
+ * - __HYPERVISOR_fpu_taskswitch
+ * - __HYPERVISOR_sched_op_compat
+ * - __HYPERVISOR_platform_op
+ * - __HYPERVISOR_set_debugreg
+ * - __HYPERVISOR_get_debugreg
+ * - __HYPERVISOR_update_descriptor
+ * - __HYPERVISOR_memory_op
+ * - __HYPERVISOR_multicall
+ * - __HYPERVISOR_update_va_mapping
+ * - __HYPERVISOR_set_timer_op
+ * - __HYPERVISOR_event_channel_op_compat
+ * - __HYPERVISOR_xen_version
+ * - __HYPERVISOR_console_io
+ * - __HYPERVISOR_physdev_op_compat
+ * - __HYPERVISOR_grant_table_op
+ * - __HYPERVISOR_vm_assist
+ * - __HYPERVISOR_update_va_mapping_otherdomain
+ * - __HYPERVISOR_iret
+ * - __HYPERVISOR_vcpu_op
+ * - __HYPERVISOR_set_segment_base
+ * - __HYPERVISOR_mmuext_op
+ * - __HYPERVISOR_xsm_op
+ * - __HYPERVISOR_nmi_op
+ * - __HYPERVISOR_sched_op
+ * - __HYPERVISOR_callback_op
+ * - __HYPERVISOR_xenoprof_op
+ * - __HYPERVISOR_event_channel_op
+ * - __HYPERVISOR_physdev_op
+ * - __HYPERVISOR_hvm_op
+ * - __HYPERVISOR_sysctl
+ * - __HYPERVISOR_domctl
+ * - __HYPERVISOR_kexec_op
+ * - __HYPERVISOR_tmem_op
+ * - __HYPERVISOR_argo_op
+ * - __HYPERVISOR_xenpmu_op
+ * - __HYPERVISOR_dm_op
+ * - __HYPERVISOR_hypfs_op
+ * - __HYPERVISOR_arch_0
+ * - __HYPERVISOR_arch_1
+ * - __HYPERVISOR_arch_2
+ * - __HYPERVISOR_arch_3
+ * - __HYPERVISOR_arch_4
+ * - __HYPERVISOR_arch_5
+ * - __HYPERVISOR_arch_6
+ * - __HYPERVISOR_arch_7
  */
-
 #define __HYPERVISOR_set_trap_table        0
 #define __HYPERVISOR_mmu_update            1
 #define __HYPERVISOR_set_gdt               2
@@ -142,8 +190,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_arch_6               54
 #define __HYPERVISOR_arch_7               55
 
-/* ` } */
-
 /*
  * HYPERCALL COMPATIBILITY.
  */
@@ -167,8 +213,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_dom0_op __HYPERVISOR_platform_op
 #endif
 
-/*
- * VIRTUAL INTERRUPTS
+/**
+ * DOC: VIRTUAL INTERRUPTS
  *
  * Virtual interrupts that a guest OS may receive from Xen.
  *
@@ -176,21 +222,42 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
  * global VIRQ. The former can be bound once per VCPU and cannot be re-bound.
  * The latter can be allocated only once per guest: they must initially be
  * allocated to VCPU0 but can subsequently be re-bound.
+ *
+ * - VIRQ_TIMER:      V. Timebase update, and/or requested timeout.
+ * - VIRQ_DEBUG:      V. Request guest to dump debug info.
+ * - VIRQ_CONSOLE:    G. (DOM0) Bytes received on emergency console.
+ * - VIRQ_DOM_EXC:    G. (DOM0) Exceptional event for some domain.
+ * - VIRQ_TBUF:       G. (DOM0) Trace buffer has records available.
+ * - VIRQ_DEBUGGER:   G. (DOM0) A domain has paused for debugging.
+ * - VIRQ_XENOPROF:   V. XenOprofile interrupt: new sample available
+ * - VIRQ_CON_RING:   G. (DOM0) Bytes received on console
+ * - VIRQ_PCPU_STATE: G. (DOM0) PCPU state changed
+ * - VIRQ_MEM_EVENT:  G. (DOM0) A memory event has occurred
+ * - VIRQ_ARGO:       G. Argo interdomain message notification
+ * - VIRQ_ENOMEM:     G. (DOM0) Low on heap memory
+ * - VIRQ_XENPMU:     V. PMC interrupt
+ * - VIRQ_ARCH_0
+ * - VIRQ_ARCH_1
+ * - VIRQ_ARCH_2
+ * - VIRQ_ARCH_3
+ * - VIRQ_ARCH_4
+ * - VIRQ_ARCH_5
+ * - VIRQ_ARCH_6
+ * - VIRQ_ARCH_7
  */
-/* ` enum virq { */
-#define VIRQ_TIMER      0  /* V. Timebase update, and/or requested timeout.  */
-#define VIRQ_DEBUG      1  /* V. Request guest to dump debug info.           */
-#define VIRQ_CONSOLE    2  /* G. (DOM0) Bytes received on emergency console. */
-#define VIRQ_DOM_EXC    3  /* G. (DOM0) Exceptional event for some domain.   */
-#define VIRQ_TBUF       4  /* G. (DOM0) Trace buffer has records available.  */
-#define VIRQ_DEBUGGER   6  /* G. (DOM0) A domain has paused for debugging.   */
-#define VIRQ_XENOPROF   7  /* V. XenOprofile interrupt: new sample available */
-#define VIRQ_CON_RING   8  /* G. (DOM0) Bytes received on console            */
-#define VIRQ_PCPU_STATE 9  /* G. (DOM0) PCPU state changed                   */
-#define VIRQ_MEM_EVENT  10 /* G. (DOM0) A memory event has occurred          */
-#define VIRQ_ARGO       11 /* G. Argo interdomain message notification       */
-#define VIRQ_ENOMEM     12 /* G. (DOM0) Low on heap memory       */
-#define VIRQ_XENPMU     13 /* V.  PMC interrupt                              */
+#define VIRQ_TIMER      0
+#define VIRQ_DEBUG      1
+#define VIRQ_CONSOLE    2
+#define VIRQ_DOM_EXC    3
+#define VIRQ_TBUF       4
+#define VIRQ_DEBUGGER   6
+#define VIRQ_XENOPROF   7
+#define VIRQ_CON_RING   8
+#define VIRQ_PCPU_STATE 9
+#define VIRQ_MEM_EVENT  10
+#define VIRQ_ARGO       11
+#define VIRQ_ENOMEM     12
+#define VIRQ_XENPMU     13
 
 /* Architecture-specific VIRQ definitions. */
 #define VIRQ_ARCH_0    16
@@ -201,16 +268,17 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define VIRQ_ARCH_5    21
 #define VIRQ_ARCH_6    22
 #define VIRQ_ARCH_7    23
-/* ` } */
 
 #define NR_VIRQS       24
 
-/*
- * ` enum neg_errnoval
- * ` HYPERVISOR_mmu_update(const struct mmu_update reqs[],
- * `                       unsigned count, unsigned *done_out,
- * `                       unsigned foreigndom)
- * `
+/**
+ * DOC: HYPERVISOR_mmu_update
+ *
+ * enum neg_errnoval
+ * HYPERVISOR_mmu_update(const struct mmu_update reqs[],
+ *                       unsigned count, unsigned *done_out,
+ *                       unsigned foreigndom)
+ *
  * @reqs is an array of mmu_update_t structures ((ptr, val) pairs).
  * @count is the length of the above array.
  * @pdone is an output parameter indicating number of completed operations
@@ -222,8 +290,9 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
  *                     x == 0 => PFD == DOMID_SELF
  *                     x != 0 => PFD == x - 1
  *
+ *
  * Sub-commands: ptr[1:0] specifies the appropriate MMU_* command.
- * -------------
+ *
  * ptr[1:0] == MMU_NORMAL_PT_UPDATE:
  * Updates an entry in a page table belonging to PFD. If updating an L1 table,
  * and the new table entry is valid/present, the mapped frame must belong to
@@ -354,16 +423,16 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define MMU_PT_UPDATE_NO_TRANSLATE 3 /* checked '*ptr = val'. ptr is MA.      */
                                      /* val never translated.                 */
 
-/*
- * MMU EXTENDED OPERATIONS
+/**
+ * DOC: MMU EXTENDED OPERATIONS
  *
- * ` enum neg_errnoval
- * ` HYPERVISOR_mmuext_op(mmuext_op_t uops[],
- * `                      unsigned int count,
- * `                      unsigned int *pdone,
- * `                      unsigned int foreigndom)
- */
-/* HYPERVISOR_mmuext_op() accepts a list of mmuext_op structures.
+ * enum neg_errnoval
+ * HYPERVISOR_mmuext_op(mmuext_op_t uops[],
+ *                      unsigned int count,
+ *                      unsigned int *pdone,
+ *                      unsigned int foreigndom)
+ *
+ * HYPERVISOR_mmuext_op() accepts a list of mmuext_op structures.
  * A foreigndom (FD) can be specified (or DOMID_SELF for none).
  * Where the FD has some effect, it is described below.
  *
@@ -418,7 +487,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
  * cmd: MMUEXT_[UN]MARK_SUPER
  * mfn: Machine frame number of head of superpage to be [un]marked.
  */
-/* ` enum mmuext_cmd { */
 #define MMUEXT_PIN_L1_TABLE      0
 #define MMUEXT_PIN_L2_TABLE      1
 #define MMUEXT_PIN_L3_TABLE      2
@@ -439,7 +507,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define MMUEXT_FLUSH_CACHE_GLOBAL 18
 #define MMUEXT_MARK_SUPER       19
 #define MMUEXT_UNMARK_SUPER     20
-/* ` } */
 
 #ifndef __ASSEMBLY__
 struct mmuext_op {
@@ -468,24 +535,25 @@ typedef struct mmuext_op mmuext_op_t;
 DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
 #endif
 
-/*
- * ` enum neg_errnoval
- * ` HYPERVISOR_update_va_mapping(unsigned long va, u64 val,
- * `                              enum uvm_flags flags)
- * `
- * ` enum neg_errnoval
- * ` HYPERVISOR_update_va_mapping_otherdomain(unsigned long va, u64 val,
- * `                                          enum uvm_flags flags,
- * `                                          domid_t domid)
- * `
- * ` @va: The virtual address whose mapping we want to change
- * ` @val: The new page table entry, must contain a machine address
- * ` @flags: Control TLB flushes
+/**
+ * DOC: HYPERVISOR_update_va_mapping
+ *
+ * enum neg_errnoval
+ * HYPERVISOR_update_va_mapping(unsigned long va, u64 val,
+ *                              enum uvm_flags flags)
+ *
+ * enum neg_errnoval
+ * HYPERVISOR_update_va_mapping_otherdomain(unsigned long va, u64 val,
+ *                                          enum uvm_flags flags,
+ *                                          domid_t domid)
+ *
+ * @va: The virtual address whose mapping we want to change
+ * @val: The new page table entry, must contain a machine address
+ * @flags: Control TLB flushes
  */
 /* These are passed as 'flags' to update_va_mapping. They can be ORed. */
 /* When specifying UVMF_MULTI, also OR in a pointer to a CPU bitmap.   */
 /* UVMF_LOCAL is merely UVMF_MULTI with a NULL bitmap pointer.         */
-/* ` enum uvm_flags { */
 #define UVMF_NONE           (xen_mk_ulong(0)<<0) /* No flushing at all.   */
 #define UVMF_TLB_FLUSH      (xen_mk_ulong(1)<<0) /* Flush entire TLB(s).  */
 #define UVMF_INVLPG         (xen_mk_ulong(2)<<0) /* Flush only one entry. */
@@ -493,13 +561,14 @@ DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
 #define UVMF_MULTI          (xen_mk_ulong(0)<<2) /* Flush subset of TLBs. */
 #define UVMF_LOCAL          (xen_mk_ulong(0)<<2) /* Flush local TLB.      */
 #define UVMF_ALL            (xen_mk_ulong(1)<<2) /* Flush all TLBs.       */
-/* ` } */
 
-/*
- * ` int
- * ` HYPERVISOR_console_io(unsigned int cmd,
- * `                       unsigned int count,
- * `                       char buffer[]);
+/**
+ * DOC: HYPERVISOR_console_io
+ *
+ * int
+ * HYPERVISOR_console_io(unsigned int cmd,
+ *                       unsigned int count,
+ *                       char buffer[]);
  *
  * @cmd: Command (see below)
  * @count: Size of the buffer to read/write
@@ -522,29 +591,43 @@ DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
 #define CONSOLEIO_write         0
 #define CONSOLEIO_read          1
 
-/*
+/**
+ * DOC: VMASST_CMD_enable and VMASST_CMD_disable
  * Commands to HYPERVISOR_vm_assist().
  */
 #define VMASST_CMD_enable                0
 #define VMASST_CMD_disable               1
 
-/* x86/32 guests: simulate full 4GB segment limits. */
+/**
+ * DOC: VMASST_TYPE_4gb_segments
+ * x86/32 guests: simulate full 4GB segment limits.
+ */
 #define VMASST_TYPE_4gb_segments         0
 
-/* x86/32 guests: trap (vector 15) whenever above vmassist is used. */
+/**
+ * DOC: VMASST_TYPE_4gb_segments_notify
+ * x86/32 guests: trap (vector 15) whenever above vmassist is used.
+ */
 #define VMASST_TYPE_4gb_segments_notify  1
 
-/*
+/**
+ * DOC: VMASST_TYPE_writable_pagetables
+ *
  * x86 guests: support writes to bottom-level PTEs.
  * NB1. Page-directory entries cannot be written.
  * NB2. Guest must continue to remove all writable mappings of PTEs.
  */
 #define VMASST_TYPE_writable_pagetables  2
 
-/* x86/PAE guests: support PDPTs above 4GB. */
+/**
+ * DOC: VMASST_TYPE_pae_extended_cr3
+ * x86/PAE guests: support PDPTs above 4GB.
+ */
 #define VMASST_TYPE_pae_extended_cr3     3
 
-/*
+/**
+ * DOC: VMASST_TYPE_architectural_iopl
+ *
  * x86 guests: Sane behaviour for virtual iopl
  *  - virtual iopl updated from do_iret() hypercalls.
  *  - virtual iopl reported in bounce frames.
@@ -552,14 +635,18 @@ DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
  */
 #define VMASST_TYPE_architectural_iopl   4
 
-/*
+/**
+ * DOC: VMASST_TYPE_runstate_update_flag
+ *
  * All guests: activate update indicator in vcpu_runstate_info
  * Enable setting the XEN_RUNSTATE_UPDATE flag in guest memory mapped
  * vcpu_runstate_info during updates of the runstate information.
  */
 #define VMASST_TYPE_runstate_update_flag 5
 
-/*
+/**
+ * DOC: VMASST_TYPE_m2p_strict
+ *
  * x86/64 guests: strictly hide M2P from user mode.
  * This allows the guest to control respective hypervisor behavior:
  * - when not set, L4 tables get created with the respective slot blank,
@@ -578,10 +665,15 @@ DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
 /* Domain ids >= DOMID_FIRST_RESERVED cannot be used for ordinary domains. */
 #define DOMID_FIRST_RESERVED xen_mk_uint(0x7FF0)
 
-/* DOMID_SELF is used in certain contexts to refer to oneself. */
+/**
+ * DOC: DOMID_SELF
+ * DOMID_SELF is used in certain contexts to refer to oneself.
+ */
 #define DOMID_SELF           xen_mk_uint(0x7FF0)
 
-/*
+/**
+ * DOC: DOMID_IO
+ *
  * DOMID_IO is used to restrict page-table updates to mapping I/O memory.
  * Although no Foreign Domain need be specified to map I/O pages, DOMID_IO
  * is useful to ensure that no mappings to the OS's own heap are accidentally
@@ -594,7 +686,9 @@ DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
  */
 #define DOMID_IO             xen_mk_uint(0x7FF1)
 
-/*
+/**
+ * DOC: DOMID_XEN
+ *
  * DOMID_XEN is used to allow privileged domains to map restricted parts of
  * Xen's heap space (e.g., the machine_to_phys table).
  * This only makes sense as
@@ -605,38 +699,55 @@ DEFINE_XEN_GUEST_HANDLE(mmuext_op_t);
  */
 #define DOMID_XEN            xen_mk_uint(0x7FF2)
 
-/*
- * DOMID_COW is used as the owner of sharable pages */
+/**
+ * DOC: DOMID_COW
+ * DOMID_COW is used as the owner of sharable pages.
+ */
 #define DOMID_COW            xen_mk_uint(0x7FF3)
 
-/* DOMID_INVALID is used to identify pages with unknown owner. */
+/**
+ * DOC: DOMID_INVALID
+ * DOMID_INVALID is used to identify pages with unknown owner.
+ */
 #define DOMID_INVALID        xen_mk_uint(0x7FF4)
 
-/* Idle domain. */
+/**
+ * DOC: DOMID_IDLE
+ * Idle domain.
+ */
 #define DOMID_IDLE           xen_mk_uint(0x7FFF)
 
-/* Mask for valid domain id values */
+/**
+ * DOC: DOMID_MASK
+ * Mask for valid domain id values
+ */
 #define DOMID_MASK           xen_mk_uint(0x7FFF)
 
 #ifndef __ASSEMBLY__
 
 typedef uint16_t domid_t;
 
-/*
+/**
+ * struct mmu_update - HYPERVISOR_mmu_update
+ *
  * Send an array of these to HYPERVISOR_mmu_update().
  * NB. The fields are natural pointer/address size for this architecture.
  */
 struct mmu_update {
-    uint64_t ptr;       /* Machine address of PTE. */
-    uint64_t val;       /* New contents of PTE.    */
+    /** @ptr: Machine address of PTE. */
+    uint64_t ptr;
+    /** @val: New contents of PTE. */
+    uint64_t val;
 };
 typedef struct mmu_update mmu_update_t;
 DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
 
-/*
- * ` enum neg_errnoval
- * ` HYPERVISOR_multicall(multicall_entry_t call_list[],
- * `                      uint32_t nr_calls);
+/**
+ * struct multicall_entry - HYPERVISOR_multicall
+ *
+ * enum neg_errnoval
+ * HYPERVISOR_multicall(multicall_entry_t call_list[],
+ *                      uint32_t nr_calls);
  *
  * NB. The fields are logically the natural register size for this
  * architecture. In cases where xen_ulong_t is larger than this then
@@ -650,34 +761,40 @@ typedef struct multicall_entry multicall_entry_t;
 DEFINE_XEN_GUEST_HANDLE(multicall_entry_t);
 
 #if __XEN_INTERFACE_VERSION__ < 0x00040400
-/*
+/**
+ * DOC: NR_EVENT_CHANNELS
+ *
  * Event channel endpoints per domain (when using the 2-level ABI):
  *  1024 if a long is 32 bits; 4096 if a long is 64 bits.
  */
 #define NR_EVENT_CHANNELS EVTCHN_2L_NR_CHANNELS
 #endif
 
+/**
+ * struct vcpu_time_info - per-vCPU time information
+ *
+ * Updates to the following values are preceded and followed by an
+ * increment of 'version'. The guest can therefore detect updates by
+ * looking for changes to 'version'. If the least-significant bit of
+ * the version number is set then an update is in progress and the guest
+ * must wait to read a consistent set of values.
+ * The correct way to interact with the version number is similar to
+ * Linux's seqlock: see the implementations of read_seqbegin/read_seqretry.
+ *
+ * Current system time:
+ *   system_time +
+ *   ((((tsc - tsc_timestamp) << tsc_shift) * tsc_to_system_mul) >> 32)
+ * CPU frequency (Hz):
+ *   ((10^9 << 32) / tsc_to_system_mul) >> tsc_shift
+ */
 struct vcpu_time_info {
-    /*
-     * Updates to the following values are preceded and followed by an
-     * increment of 'version'. The guest can therefore detect updates by
-     * looking for changes to 'version'. If the least-significant bit of
-     * the version number is set then an update is in progress and the guest
-     * must wait to read a consistent set of values.
-     * The correct way to interact with the version number is similar to
-     * Linux's seqlock: see the implementations of read_seqbegin/read_seqretry.
-     */
     uint32_t version;
     uint32_t pad0;
-    uint64_t tsc_timestamp;   /* TSC at last update of time vals.  */
-    uint64_t system_time;     /* Time, in nanosecs, since boot.    */
-    /*
-     * Current system time:
-     *   system_time +
-     *   ((((tsc - tsc_timestamp) << tsc_shift) * tsc_to_system_mul) >> 32)
-     * CPU frequency (Hz):
-     *   ((10^9 << 32) / tsc_to_system_mul) >> tsc_shift
-     */
+    /** @tsc_timestamp: TSC at last update of time vals. */
+    uint64_t tsc_timestamp;
+    /** @system_time: Time, in nanosecs, since boot. */
+    uint64_t system_time;
+
     uint32_t tsc_to_system_mul;
     int8_t   tsc_shift;
 #if __XEN_INTERFACE_VERSION__ > 0x040600
@@ -692,18 +809,23 @@ typedef struct vcpu_time_info vcpu_time_info_t;
 #define XEN_PVCLOCK_TSC_STABLE_BIT     (1 << 0)
 #define XEN_PVCLOCK_GUEST_STOPPED      (1 << 1)
 
+/**
+ * struct vcpu_info - per-vCPU shared state
+ */
 struct vcpu_info {
-    /*
-     * 'evtchn_upcall_pending' is written non-zero by Xen to indicate
-     * a pending notification for a particular VCPU. It is then cleared
-     * by the guest OS /before/ checking for pending work, thus avoiding
-     * a set-and-check race. Note that the mask is only accessed by Xen
-     * on the CPU that is currently hosting the VCPU. This means that the
-     * pending and mask flags can be updated by the guest without special
-     * synchronisation (i.e., no need for the x86 LOCK prefix).
-     * This may seem suboptimal because if the pending flag is set by
-     * a different CPU then an IPI may be scheduled even when the mask
-     * is set. However, note:
+    /**
+     * @evtchn_upcall_pending:
+     *
+     * It is written non-zero by Xen to indicate a pending notification
+     * for a particular VCPU. It is then cleared by the guest OS
+     * /before/ checking for pending work, thus avoiding a set-and-check
+     * race. Note that the mask is only accessed by Xen on the CPU that
+     * is currently hosting the VCPU. This means that the pending and
+     * mask flags can be updated by the guest without special
+     * synchronisation (i.e., no need for the x86 LOCK prefix).  This
+     * may seem suboptimal because if the pending flag is set by a
+     * different CPU then an IPI may be scheduled even when the mask is
+     * set. However, note:
      *  1. The task of 'interrupt holdoff' is covered by the per-event-
      *     channel mask bits. A 'noisy' event that is continually being
      *     triggered can be masked at source at this very precise
@@ -732,61 +854,69 @@ struct vcpu_info {
 typedef struct vcpu_info vcpu_info_t;
 #endif
 
-/*
- * `incontents 200 startofday_shared Start-of-day shared data structure
+/**
+ * struct shared_info - Start-of-day shared data structure
+ *
  * Xen/kernel shared data -- pointer provided in start_info.
  *
  * This structure is defined to be both smaller than a page, and the
  * only data on the shared page, but may vary in actual size even within
  * compatible Xen versions; guests should not rely on the size
  * of this structure remaining constant.
+ *
+ * A domain can create "event channels" on which it can send and receive
+ * asynchronous event notifications. There are three classes of event that
+ * are delivered by this mechanism:
+ *  1. Bi-directional inter- and intra-domain connections. Domains must
+ *     arrange out-of-band to set up a connection (usually by allocating
+ *     an unbound 'listener' port and advertising that via a storage service
+ *     such as xenstore).
+ *  2. Physical interrupts. A domain with suitable hardware-access
+ *     privileges can bind an event-channel port to a physical interrupt
+ *     source.
+ *  3. Virtual interrupts ('events'). A domain can bind an event-channel
+ *     port to a virtual interrupt source, such as the virtual-timer
+ *     device or the emergency console.
+ *
+ *
+ * @evtchn_pending: pending notifications
+ * @evtchn_mask: mask/unmask notifications
+ *
+ * Event channels are addressed by a "port index". Each channel is
+ * associated with two bits of information:
+ *  1. PENDING -- notifies the domain that there is a pending notification
+ *     to be processed. This bit is cleared by the guest.
+ *  2. MASK -- if this bit is clear then a 0->1 transition of PENDING
+ *     will cause an asynchronous upcall to be scheduled. This bit is only
+ *     updated by the guest. It is read-only within Xen. If a channel
+ *     becomes pending while the channel is masked then the 'edge' is lost
+ *     (i.e., when the channel is unmasked, the guest must manually handle
+ *     pending notifications as no upcall will be scheduled by Xen).
+ *
+ * To expedite scanning of pending notifications, any 0->1 pending
+ * transition on an unmasked channel causes a corresponding bit in a
+ * per-vcpu selector word to be set. Each bit in the selector covers a
+ * 'C long' in the PENDING bitfield array.
+ *
+ *
+ * @wc_version: wallclock time version
+ * @wc_sec: secs offset from Unix epoch
+ * @wc_nsec: nsecs offset from Unix epoch
+ *
+ * Wallclock time: updated by control software or RTC emulation.
+ * Guests should base their gettimeofday() syscall on this
+ * wallclock-base value.
+ * The values of wc_sec and wc_nsec are offsets from the Unix epoch
+ * adjusted by the domain's 'time offset' (in seconds) as set either
+ * by XEN_DOMCTL_settimeoffset, or adjusted via a guest write to the
+ * emulated RTC.
  */
 struct shared_info {
     struct vcpu_info vcpu_info[XEN_LEGACY_MAX_VCPUS];
 
-    /*
-     * A domain can create "event channels" on which it can send and receive
-     * asynchronous event notifications. There are three classes of event that
-     * are delivered by this mechanism:
-     *  1. Bi-directional inter- and intra-domain connections. Domains must
-     *     arrange out-of-band to set up a connection (usually by allocating
-     *     an unbound 'listener' port and avertising that via a storage service
-     *     such as xenstore).
-     *  2. Physical interrupts. A domain with suitable hardware-access
-     *     privileges can bind an event-channel port to a physical interrupt
-     *     source.
-     *  3. Virtual interrupts ('events'). A domain can bind an event-channel
-     *     port to a virtual interrupt source, such as the virtual-timer
-     *     device or the emergency console.
-     *
-     * Event channels are addressed by a "port index". Each channel is
-     * associated with two bits of information:
-     *  1. PENDING -- notifies the domain that there is a pending notification
-     *     to be processed. This bit is cleared by the guest.
-     *  2. MASK -- if this bit is clear then a 0->1 transition of PENDING
-     *     will cause an asynchronous upcall to be scheduled. This bit is only
-     *     updated by the guest. It is read-only within Xen. If a channel
-     *     becomes pending while the channel is masked then the 'edge' is lost
-     *     (i.e., when the channel is unmasked, the guest must manually handle
-     *     pending notifications as no upcall will be scheduled by Xen).
-     *
-     * To expedite scanning of pending notifications, any 0->1 pending
-     * transition on an unmasked channel causes a corresponding bit in a
-     * per-vcpu selector word to be set. Each bit in the selector covers a
-     * 'C long' in the PENDING bitfield array.
-     */
     xen_ulong_t evtchn_pending[sizeof(xen_ulong_t) * 8];
     xen_ulong_t evtchn_mask[sizeof(xen_ulong_t) * 8];
 
-    /*
-     * Wallclock time: updated by control software or RTC emulation.
-     * Guests should base their gettimeofday() syscall on this
-     * wallclock-base value.
-     * The values of wc_sec and wc_nsec are offsets from the Unix epoch
-     * adjusted by the domain's 'time offset' (in seconds) as set either
-     * by XEN_DOMCTL_settimeoffset, or adjusted via a guest write to the
-     * emulated RTC.
-     */
     uint32_t wc_version;      /* Version counter: see vcpu_time_info_t. */
     uint32_t wc_sec;
     uint32_t wc_nsec;
@@ -804,8 +934,9 @@ struct shared_info {
 typedef struct shared_info shared_info_t;
 #endif
 
-/*
- * `incontents 200 startofday Start-of-day memory layout
+#ifdef XEN_HAVE_PV_GUEST_ENTRY
+/**
+ * struct start_info - Start-of-day memory layout
  *
  *  1. The domain is started within contiguous virtual-memory region.
  *  2. The contiguous region ends on an aligned 4MB boundary.
@@ -841,38 +972,64 @@ typedef struct shared_info shared_info_t;
  * 32-bit and runs under a 64-bit hypervisor should _NOT_ use two of the
  * pages preceding pt_base and mark them as reserved/unused.
  */
-#ifdef XEN_HAVE_PV_GUEST_ENTRY
 struct start_info {
     /* THE FOLLOWING ARE FILLED IN BOTH ON INITIAL BOOT AND ON RESUME.    */
-    char magic[32];             /* "xen-<version>-<platform>".            */
-    unsigned long nr_pages;     /* Total pages allocated to this domain.  */
-    unsigned long shared_info;  /* MACHINE address of shared info struct. */
-    uint32_t flags;             /* SIF_xxx flags.                         */
-    xen_pfn_t store_mfn;        /* MACHINE page number of shared page.    */
-    uint32_t store_evtchn;      /* Event channel for store communication. */
+    /** @magic: "xen-<version>-<platform>". */
+    char magic[32];
+    /** @nr_pages: Total pages allocated to this domain. */
+    unsigned long nr_pages;
+    /** @shared_info: MACHINE address of shared info struct. */
+    unsigned long shared_info;
+    /** @flags: SIF_xxx flags. */
+    uint32_t flags;
+    /** @store_mfn: MACHINE page number of shared page. */
+    xen_pfn_t store_mfn;
+    /** @store_evtchn: Event channel for store communication. */
+    uint32_t store_evtchn;
     union {
         struct {
-            xen_pfn_t mfn;      /* MACHINE page number of console page.   */
-            uint32_t  evtchn;   /* Event channel for console page.        */
+            /** @console.domU.mfn: MACHINE page number of console page. */
+            xen_pfn_t mfn;
+            /** @console.domU.evtchn: Event channel for console page. */
+            uint32_t evtchn;
         } domU;
         struct {
-            uint32_t info_off;  /* Offset of console_info struct.         */
-            uint32_t info_size; /* Size of console_info struct from start.*/
+            /** @console.dom0.info_off: Offset of console_info struct. */
+            uint32_t info_off;
+            /** @console.dom0.info_size: Size of console_info struct from start. */
+            uint32_t info_size;
         } dom0;
     } console;
     /* THE FOLLOWING ARE ONLY FILLED IN ON INITIAL BOOT (NOT RESUME).     */
-    unsigned long pt_base;      /* VIRTUAL address of page directory.     */
-    unsigned long nr_pt_frames; /* Number of bootstrap p.t. frames.       */
-    unsigned long mfn_list;     /* VIRTUAL address of page-frame list.    */
-    unsigned long mod_start;    /* VIRTUAL address of pre-loaded module   */
-                                /* (PFN of pre-loaded module if           */
-                                /*  SIF_MOD_START_PFN set in flags).      */
-    unsigned long mod_len;      /* Size (bytes) of pre-loaded module.     */
+    /** @pt_base: VIRTUAL address of page directory. */
+    unsigned long pt_base;
+    /** @nr_pt_frames: Number of bootstrap p.t. frames. */
+    unsigned long nr_pt_frames;
+    /** @mfn_list: VIRTUAL address of page-frame list. */
+    unsigned long mfn_list;
+    /**
+     * @mod_start: VIRTUAL address of pre-loaded module.
+     * (PFN of pre-loaded module if SIF_MOD_START_PFN set in flags).
+     */
+    unsigned long mod_start;
+    /** @mod_len: Size (bytes) of pre-loaded module. */
+    unsigned long mod_len;
 #define MAX_GUEST_CMDLINE 1024
     int8_t cmd_line[MAX_GUEST_CMDLINE];
-    /* The pfn range here covers both page table and p->m table frames.   */
-    unsigned long first_p2m_pfn;/* 1st pfn forming initial P->M table.    */
-    unsigned long nr_p2m_frames;/* # of pfns forming initial P->M table.  */
+    /**
+     * @first_p2m_pfn:
+     *
+     * 1st pfn forming the initial P->M table. The pfn range here
+     * covers both page-table and p->m-table frames.
+     */
+    unsigned long first_p2m_pfn;
+    /**
+     * @nr_p2m_frames:
+     *
+     * Number of pfns forming the initial P->M table. The pfn range
+     * here covers both page-table and p->m-table frames.
+     */
+    unsigned long nr_p2m_frames;
 };
 typedef struct start_info start_info_t;
 
@@ -883,16 +1040,29 @@ typedef struct start_info start_info_t;
 #endif
 #endif /* XEN_HAVE_PV_GUEST_ENTRY */
 
-/* These flags are passed in the 'flags' field of start_info_t. */
-#define SIF_PRIVILEGED    (1<<0)  /* Is the domain privileged? */
-#define SIF_INITDOMAIN    (1<<1)  /* Is this the initial control domain? */
-#define SIF_MULTIBOOT_MOD (1<<2)  /* Is mod_start a multiboot module? */
-#define SIF_MOD_START_PFN (1<<3)  /* Is mod_start a PFN? */
-#define SIF_VIRT_P2M_4TOOLS (1<<4) /* Do Xen tools understand a virt. mapped */
-                                   /* P->M making the 3 level tree obsolete? */
-#define SIF_PM_MASK       (0xFF<<8) /* reserve 1 byte for xen-pm options */
-
-/*
+/**
+ * DOC: SIF_*
+ *
+ * These flags are passed in the 'flags' field of start_info_t.
+ *
+ * - SIF_PRIVILEGED:       Is the domain privileged?
+ * - SIF_INITDOMAIN:       Is this the initial control domain?
+ * - SIF_MULTIBOOT_MOD:    Is mod_start a multiboot module?
+ * - SIF_MOD_START_PFN:    Is mod_start a PFN?
+ * - SIF_VIRT_P2M_4TOOLS:  Do Xen tools understand a virt. mapped
+ *                         P->M making the 3 level tree obsolete?
+ * - SIF_PM_MASK:          reserve 1 byte for xen-pm options
+ */
+#define SIF_PRIVILEGED    (1<<0)
+#define SIF_INITDOMAIN    (1<<1)
+#define SIF_MULTIBOOT_MOD (1<<2)
+#define SIF_MOD_START_PFN (1<<3)
+#define SIF_VIRT_P2M_4TOOLS (1<<4)
+#define SIF_PM_MASK       (0xFF<<8)
+
+/**
+ * struct xen_multiboot_mod_list - Multiboot module descriptor
+ *
  * A multiboot module is a package containing modules very similar to a
  * multiboot module array. The only differences are:
  * - the array of module descriptors is by convention simply at the beginning
@@ -908,13 +1078,13 @@ typedef struct start_info start_info_t;
  */
 struct xen_multiboot_mod_list
 {
-    /* Address of first byte of the module */
+    /** @mod_start: Address of first byte of the module */
     uint32_t mod_start;
-    /* Address of last byte of the module (inclusive) */
+    /** @mod_end: Address of last byte of the module (inclusive) */
     uint32_t mod_end;
-    /* Address of zero-terminated command line */
+    /** @cmdline: Address of zero-terminated command line */
     uint32_t cmdline;
-    /* Unused, must be zero */
+    /** @pad: Unused, must be zero */
     uint32_t pad;
 };
 /*
@@ -984,7 +1154,9 @@ typedef struct {
     uint8_t a[16];
 } xen_uuid_t;
 
-/*
+/**
+ * DOC: XEN_DEFINE_UUID
+ *
  * XEN_DEFINE_UUID(0x00112233, 0x4455, 0x6677, 0x8899,
  *                 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff)
  * will construct UUID 00112233-4455-6677-8899-aabbccddeeff presented as
-- 
2.17.1
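As an aside for readers of the converted vcpu_time_info comments: the scaled-TSC conversion and the seqlock-style 'version' protocol they document can be sketched in C roughly as below. This is a minimal illustration, not Xen guest code: the demo_* names are invented here, struct demo_vcpu_time is a simplified mirror of vcpu_time_info, and real guest code additionally needs memory barriers around the version reads.

```c
/*
 * Minimal sketch (not Xen guest code): demo_* names are invented,
 * struct demo_vcpu_time is a simplified mirror of vcpu_time_info,
 * and real code also needs memory barriers around the version reads.
 */
#include <stdint.h>

struct demo_vcpu_time {
    uint32_t version;           /* odd while an update is in progress */
    uint64_t tsc_timestamp;     /* TSC at last update of time vals */
    uint64_t system_time;       /* ns since boot, at tsc_timestamp */
    uint32_t tsc_to_system_mul; /* 32.32 fixed-point ns-per-tick */
    int8_t   tsc_shift;
};

/* system_time + ((((tsc - tsc_timestamp) << tsc_shift) * mul) >> 32) */
static uint64_t demo_system_time_ns(const struct demo_vcpu_time *t,
                                    uint64_t tsc)
{
    uint64_t delta = tsc - t->tsc_timestamp;

    if (t->tsc_shift >= 0)
        delta <<= t->tsc_shift;
    else
        delta >>= -t->tsc_shift;

    /* 64x32 multiply, keeping bits [95:32] (GCC/Clang __int128) */
    return t->system_time +
           (uint64_t)(((unsigned __int128)delta * t->tsc_to_system_mul) >> 32);
}

/* CPU frequency (Hz): ((10^9 << 32) / tsc_to_system_mul) >> tsc_shift */
static uint64_t demo_cpu_hz(const struct demo_vcpu_time *t)
{
    uint64_t hz = (uint64_t)((((unsigned __int128)1000000000u) << 32) /
                             t->tsc_to_system_mul);

    return t->tsc_shift >= 0 ? hz >> t->tsc_shift : hz << -t->tsc_shift;
}

/* Seqlock-style read: retry while version is odd or changes underneath. */
static struct demo_vcpu_time
demo_read_consistent(const volatile struct demo_vcpu_time *src)
{
    struct demo_vcpu_time snap;
    uint32_t v;

    do {
        while ((v = src->version) & 1)
            ;                           /* update in progress */
        snap.tsc_timestamp     = src->tsc_timestamp;
        snap.system_time       = src->system_time;
        snap.tsc_to_system_mul = src->tsc_to_system_mul;
        snap.tsc_shift         = src->tsc_shift;
        snap.version           = v;
    } while (src->version != v);        /* changed while reading: retry */

    return snap;
}
```

For example, tsc_to_system_mul = 0x80000000 with tsc_shift = 0 means 0.5 ns per tick (a 2 GHz TSC), so 1000000 ticks past tsc_timestamp add 500000 ns.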


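Similarly, the PENDING/MASK bits and per-vcpu selector word described for struct shared_info suggest a scan loop of the following shape. demo_scan_pending() is a hypothetical helper, not a Xen API; it assumes the GCC/Clang __builtin_ctzl builtin, and a real guest would also clear the selector and pending bits atomically as it goes.

```c
/*
 * Hypothetical helper (not a Xen API): visit every pending, unmasked
 * port, using the selector word to skip words with no pending bits.
 * Assumes the GCC/Clang __builtin_ctzl builtin.
 */
#include <stddef.h>

#define DEMO_BITS_PER_LONG (sizeof(unsigned long) * 8)

/* Returns the number of ports visited; 'handle' may be NULL. */
static size_t demo_scan_pending(const unsigned long *pending,
                                const unsigned long *mask,
                                unsigned long selector,
                                void (*handle)(unsigned int port, void *ctx),
                                void *ctx)
{
    size_t visited = 0;

    while (selector) {
        unsigned int word = __builtin_ctzl(selector); /* lowest set bit */
        unsigned long bits = pending[word] & ~mask[word];

        selector &= selector - 1;                     /* clear that bit */
        while (bits) {
            unsigned int bit = __builtin_ctzl(bits);

            bits &= bits - 1;
            if (handle)
                handle((unsigned int)(word * DEMO_BITS_PER_LONG + bit), ctx);
            visited++;
        }
    }
    return visited;
}
```

Note how a port that becomes pending while its MASK bit is set is simply not visited, matching the "lost edge" caveat in the comment: after unmasking, the guest must check for pending notifications itself.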

From xen-devel-bounces@lists.xenproject.org Wed Oct 21 00:22:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 00:22:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9847.26001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1tb-00036C-Na; Wed, 21 Oct 2020 00:22:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9847.26001; Wed, 21 Oct 2020 00:22:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV1tb-000365-KV; Wed, 21 Oct 2020 00:22:27 +0000
Received: by outflank-mailman (input) for mailman id 9847;
 Wed, 21 Oct 2020 00:22:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kV1ta-00035R-4z
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 00:22:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 798ed7bd-89dc-4a22-b1b8-ade2b8b63b48;
 Wed, 21 Oct 2020 00:22:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kV1tS-0001Co-Qe; Wed, 21 Oct 2020 00:22:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kV1tS-0006lU-Iz; Wed, 21 Oct 2020 00:22:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kV1tS-0002hq-I2; Wed, 21 Oct 2020 00:22:18 +0000
X-Inumbo-ID: 798ed7bd-89dc-4a22-b1b8-ade2b8b63b48
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ANgK4tAH94dc+RZu6y5lbPhWmSUcul79o3yjMFLWSg4=; b=iIk+rfiNQhXYQ/MDktd8iVpzjM
	AgtE2ePzDK9QCRz0H7XyaOxmyZGbcIjV4B8haQNdHo+eJ6Vo2bkKrCFiTJiC+OqDwndXSH/+7QquP
	hryxRWL2bZU4Rk/tVMdU3nUPzEKkQi72n+bKN6/hoU+gnwXuCTGxkVHDfAA6MMAyQL+I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156047-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156047: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3b49791e4cc2f38dd84bf331b75217adaef636e3
X-Osstest-Versions-That:
    xen=0514a3a25fb9ebff5d75cc8f00a9229385300858
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 00:22:18 +0000

flight 156047 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156047/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3b49791e4cc2f38dd84bf331b75217adaef636e3
baseline version:
 xen                  0514a3a25fb9ebff5d75cc8f00a9229385300858

Last test of basis   156029  2020-10-20 13:01:23 Z    0 days
Testing same since   156047  2020-10-20 21:00:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0514a3a25f..3b49791e4c  3b49791e4cc2f38dd84bf331b75217adaef636e3 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 01:11:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 01:11:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9851.26016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV2fE-0003kr-IL; Wed, 21 Oct 2020 01:11:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9851.26016; Wed, 21 Oct 2020 01:11:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV2fE-0003kk-FE; Wed, 21 Oct 2020 01:11:40 +0000
Received: by outflank-mailman (input) for mailman id 9851;
 Wed, 21 Oct 2020 01:11:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kV2fC-0003kB-Jw
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 01:11:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 70ada92f-6ec8-4540-8c65-a93beaa8de01;
 Wed, 21 Oct 2020 01:11:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kV2f4-0006Qt-Mw; Wed, 21 Oct 2020 01:11:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kV2f4-0002IH-FR; Wed, 21 Oct 2020 01:11:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kV2f4-0003a8-D3; Wed, 21 Oct 2020 01:11:30 +0000
X-Inumbo-ID: 70ada92f-6ec8-4540-8c65-a93beaa8de01
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oNkbVOual+qd2qpSkOyoVcuhD3ab+hTalWweRDc5jJw=; b=BiEQEHcDG6KSQJY7PiJySP/9dA
	U/k28C+T3y4PSOoLdg0AEYIgbPkhR6hRf3CcDwJuHkJ55/rsQXf6sIlfi3QnHo/apdeSc/GDnUy8Q
	y+y8dErAs6yYalZwr9XywJYZict/+94SFhhit5hRRPafmS99sXBaHP/PFMpt8nQMV530=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156030-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 156030: regressions - FAIL
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dc38c1103cfdc643860e10c1b9e925dac83332dc
X-Osstest-Versions-That:
    xen=8e7e5857a203c9d9df7733fd68768555c7e76839
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 01:11:30 +0000

flight 156030 xen-4.13-testing real [real]
flight 156052 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156030/
http://logs.test-lab.xenproject.org/osstest/logs/156052/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 155377

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155377
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155377
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155377
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155377
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155377
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155377
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155377
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155377
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155377
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155377
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155377
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dc38c1103cfdc643860e10c1b9e925dac83332dc
baseline version:
 xen                  8e7e5857a203c9d9df7733fd68768555c7e76839

Last test of basis   155377  2020-10-03 13:48:36 Z   17 days
Testing same since   156030  2020-10-20 13:06:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Chen Yu <yu.c.chen@intel.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 392 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 03:21:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 03:21:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9859.26045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV4h1-0007Gy-57; Wed, 21 Oct 2020 03:21:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9859.26045; Wed, 21 Oct 2020 03:21:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV4h1-0007Gr-1Q; Wed, 21 Oct 2020 03:21:39 +0000
Received: by outflank-mailman (input) for mailman id 9859;
 Wed, 21 Oct 2020 03:21:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kV4gz-0007Fm-Vw
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 03:21:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca05980c-1197-40b6-962c-79394c7a9b00;
 Wed, 21 Oct 2020 03:21:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kV4gp-00013D-NP; Wed, 21 Oct 2020 03:21:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kV4gp-0002uJ-D4; Wed, 21 Oct 2020 03:21:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kV4gp-0001hv-Cb; Wed, 21 Oct 2020 03:21:27 +0000
X-Inumbo-ID: ca05980c-1197-40b6-962c-79394c7a9b00
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V/pRL+lGThzXnJiI8EM1eFEnvNLlKVai9Ve5rhTcPZE=; b=rlFNc8y1nYf7vg3tQjF1MaLQeL
	/z8w6ForpvBbwNdjYErkY5F2SVj4FW48cI8b5ArSnZeUMNvwFY6Pvoea+fR71JPh8JfFqVZPTVCfc
	8J2mcApxZiKombnM6765RbyuYEqY3xpjuqbx/tXyyB7rmLSs6zzXty5Z4QuQRn4kLS4Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156040-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156040: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c41341af76cfc85b5a6c0f87de4838672ab9f89
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 03:21:27 +0000

flight 156040 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156040/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c41341af76cfc85b5a6c0f87de4838672ab9f89
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   61 days
Failing since        152659  2020-08-21 14:07:39 Z   60 days  113 attempts
Testing same since   156028  2020-10-20 12:37:35 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 48058 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 05:58:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 05:58:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9869.26076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV78l-0004PV-C8; Wed, 21 Oct 2020 05:58:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9869.26076; Wed, 21 Oct 2020 05:58:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV78l-0004PO-8x; Wed, 21 Oct 2020 05:58:27 +0000
Received: by outflank-mailman (input) for mailman id 9869;
 Wed, 21 Oct 2020 05:58:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kV78j-0004PJ-QB
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 05:58:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8aa93d3-5c61-4bbb-a2db-c69f1c55463a;
 Wed, 21 Oct 2020 05:58:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB458AC48;
 Wed, 21 Oct 2020 05:58:22 +0000 (UTC)
X-Inumbo-ID: d8aa93d3-5c61-4bbb-a2db-c69f1c55463a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603259902;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oNswih2V7dAStqKuSd8QaGTZPVgz++In9Oe75UkhAMc=;
	b=RQbs7A3nfGLVlZlaxFegInOOP2664jJdhPsvAVK1XQMatRXxfqUqAWOpfieIXU2Aj2eKAS
	ANRgpUmDl3DGX5kRosMOYInftNi/3ORHCuZ8Usv1PQNMIGOSFZZt6HvohlzfxDMrmg+5Y3
	KFXGLknD8mF/NmV5aMfqH0JbpVrCDvI=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id AB458AC48;
	Wed, 21 Oct 2020 05:58:22 +0000 (UTC)
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e0f62840-0ed9-cd25-76aa-eeb4484799bb@suse.com>
Date: Wed, 21 Oct 2020 07:58:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201020152405.26892-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.2020 17:24, Andrew Cooper wrote:
> A couple of minor points.
> 
>  * PV guests can create global mappings.  I can't see any safe way to relax
>    FLUSH_TLB_GLOBAL to just FLUSH_TLB.

We only care about the guest's view here, and from the guest's view we
only care about non-leaf entries. Non-leaf entries can't be global,
and luckily (for now at least) the G bit also hasn't been assigned a
different meaning in non-leaf entries.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 05:59:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 05:59:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9871.26087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV79a-0004V9-LW; Wed, 21 Oct 2020 05:59:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9871.26087; Wed, 21 Oct 2020 05:59:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV79a-0004V2-Ie; Wed, 21 Oct 2020 05:59:18 +0000
Received: by outflank-mailman (input) for mailman id 9871;
 Wed, 21 Oct 2020 05:59:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kV79Z-0004Uv-2l
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 05:59:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3d6e9abe-ada6-4e78-a416-52c6700a934d;
 Wed, 21 Oct 2020 05:59:16 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kV79X-0004iv-Qp; Wed, 21 Oct 2020 05:59:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kV79X-0002ms-I9; Wed, 21 Oct 2020 05:59:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kV79X-0004zB-Hg; Wed, 21 Oct 2020 05:59:15 +0000
X-Inumbo-ID: 3d6e9abe-ada6-4e78-a416-52c6700a934d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XXdU5WYvmtJsU4Sg4+66J/3PGc7yw4EWILuWHF8ARJY=; b=bmqu+6Yru8YqXHde23ndtYqIEN
	biJnOHgl5oKOtmTfBmqGDaYX6for9Wls6BYGuOyabu5xPvI26xU8MNR/hUrVZG6FVOzvCg/Osq6ZQ
	cU/oSUm/kLQpQi2e+o/hoUp+qJFaAI60X8gSOdAVR0juj/NCgUItuL3aWVLCxfWjVMmo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156031-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156031: regressions - FAIL
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-i386-qemut-rhel6hvm-intel:guest-start/redhat.repeat:fail:regression
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7b1e587f25c2dda38236e48aae81729798f10663
X-Osstest-Versions-That:
    xen=c93b520a41f2787dd76bfb2e454836d1d5787505
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 05:59:15 +0000

flight 156031 xen-4.14-testing real [real]
flight 156062 xen-4.14-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156031/
http://logs.test-lab.xenproject.org/osstest/logs/156062/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel 14 guest-start/redhat.repeat fail REGR. vs. 155417
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 155417

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155417
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155417
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155417
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155417
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155417
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155417
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155417
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155417
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155417
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155417
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155417
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  7b1e587f25c2dda38236e48aae81729798f10663
baseline version:
 xen                  c93b520a41f2787dd76bfb2e454836d1d5787505

Last test of basis   155417  2020-10-04 02:29:19 Z   17 days
Testing same since   156031  2020-10-20 13:06:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Chen Yu <yu.c.chen@intel.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 532 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 06:55:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 06:55:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9879.26112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV824-0001WA-28; Wed, 21 Oct 2020 06:55:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9879.26112; Wed, 21 Oct 2020 06:55:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV823-0001W3-Ut; Wed, 21 Oct 2020 06:55:35 +0000
Received: by outflank-mailman (input) for mailman id 9879;
 Wed, 21 Oct 2020 06:55:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kV823-0001Vy-7C
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 06:55:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 403feec5-8818-40d1-b9bb-171a1f283e6a;
 Wed, 21 Oct 2020 06:55:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6D148ACB0;
 Wed, 21 Oct 2020 06:55:33 +0000 (UTC)
X-Inumbo-ID: 403feec5-8818-40d1-b9bb-171a1f283e6a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603263333;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=N2QsjJ7J3nDNd2jiA2bh+jmOtjqefpalvv9UwIj3N9M=;
	b=ep2ojyasEp8KnjuZF1q0TPYr+/RaE/4PcWXj0bQ+zFo4yGxVmF6ga4PJjMOo5bgC1otUaN
	fRBo1+7anO12Q+wHPwBu9VuQsMozj1KKkAeQkXuqBX5yeuTDWF6rNDKqnz6vMJS6MxDjCN
	ndEBCDiwYUdx5v4ShinlVTqU5ZOPAJ8=
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
 <a50a19ce-321a-ceef-55e4-95ffbebff59d@suse.com>
 <c359adee-1826-032b-2d07-c06c545e3b96@citrix.com>
 <b24c21b0-607b-6add-e156-a37fcf7f2352@citrix.com>
 <9b54113c-9df2-2f44-1545-67ffe4831934@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <da2d9140-c70d-33a4-a375-9615e806d7d4@suse.com>
Date: Wed, 21 Oct 2020 08:55:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <9b54113c-9df2-2f44-1545-67ffe4831934@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.10.2020 20:46, Andrew Cooper wrote:
> On 20/10/2020 18:10, Andrew Cooper wrote:
>> On 20/10/2020 17:20, Andrew Cooper wrote:
>>> On 20/10/2020 16:48, Jan Beulich wrote:
>>>> On 20.10.2020 17:24, Andrew Cooper wrote:
>>>>> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
>>>>> is safe from Xen's point of view (as the update only affects guest mappings), and
>>>>> the guest is required to flush suitably after making updates.
>>>>>
>>>>> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
>>>>> writeable pagetables, etc.) is an implementation detail outside of the
>>>>> API/ABI.
>>>>>
>>>>> Changes in the paging structure require invalidations in the linear pagetable
>>>>> range for subsequent accesses into the linear pagetables to access non-stale
>>>>> mappings.  Xen must provide suitable flushing to prevent intermixed guest
>>>>> actions from accidentally accessing/modifying the wrong pagetable.
>>>>>
>>>>> For all L2 and higher modifications, flush the full TLB.  (This could in
>>>>> principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
>>>>> mechanism exists in practice.)
>>>>>
>>>>> As this combines with sync_guest for XPTI L4 "shadowing", replace the
>>>>> sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
>>>>> now always needs to flush, there is no point trying to exclude the current CPU
>>>>> from the flush mask.  Use pt_owner->dirty_cpumask directly.
>>>> Why is there no point? There's no need for the FLUSH_ROOT_PGTBL
>>>> part of the flushing on the local CPU. The draft you had sent
>>>> earlier looked better in this regard.
>>> This was the area which broke.  It is to do with a subtle difference in
>>> the scope of L4 updates.
>>>
>>> ROOT_PGTBL needs to resync current (if in use), and be broadcasted if
>>> other references to the pages are found.
>>>
>>> The TLB flush needs to be broadcast to the whole domain dirty mask, as
>>> we can't (easily) know if the update was part of the current structure.
>> Actually - we can know whether an L4 update needs flushing locally or
>> not, in exactly the same way as the sync logic currently works.
>>
>> However, unlike the opencoded get_cpu_info()->root_pgt_changed = true,
>> we can't just flush locally for free.
>>
>> This is quite awkward to express.
> 
> And not safe.  Flushes may accumulate from multiple levels in a batch,
> and pt_owner may not be equal to current.

I'm not questioning the TLB flush - this needs to be the scope you
use (but just FLUSH_TLB as per my earlier reply). I'm questioning
the extra ROOT_PGTBL sync (meaning changes to levels other than L4
don't matter), which is redundant with the explicit setting right
after the call to mod_l4_entry(). But I guess since now you need
to issue _some_ flush_mask() for the local CPU anyway, perhaps
it's rather the explicit setting of ->root_pgt_changed which wants
dropping?

(If pt_owner != current->domain, then pt_owner->dirty_cpumask
can't have smp_processor_id()'s bit set, and hence there was no
reduction in scope in this case anyway. Similarly in this case
local_in_use is necessarily false, as page tables can't be
shared between domains.)

Taking both adjustments together
- all L[234] changes require FLUSH_TLB on dirty CPUs of
  pt_owner including the local CPU
- the converted sync_guest continues to require
  FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL on remote dirty CPUs of
  pt_owner
This difference, I think, still warrants treating the local CPU
specially, as the global flush continues to be unnecessary there.
Whether the local CPU's ->root_pgt_changed gets set via
flush_local() or explicitly is then a pretty benign secondary
aspect.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 08:20:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 08:20:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9892.26124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV9MG-0001Xz-RS; Wed, 21 Oct 2020 08:20:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9892.26124; Wed, 21 Oct 2020 08:20:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kV9MG-0001Xs-Ny; Wed, 21 Oct 2020 08:20:32 +0000
Received: by outflank-mailman (input) for mailman id 9892;
 Wed, 21 Oct 2020 08:20:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kV9MF-0001Xn-DN
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 08:20:31 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53246437-3c70-48f1-91e9-8d68ad3dfce4;
 Wed, 21 Oct 2020 08:20:29 +0000 (UTC)
X-Inumbo-ID: 53246437-3c70-48f1-91e9-8d68ad3dfce4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603268429;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Y1DsProsd+NskXZ7WmI5xqqNUBFyInGW0t+Wre805Zk=;
  b=DUFW7vml8Hcz5+B42oaS/7HrXvFxhrS3H/sRMBqAZT4mBgS5frzwhZau
   KfgCsnGvSPu0Jk7KXeB2ZJxrNLwTdB5yJgKAY45Z8z4ApMiYja1rV8bIU
   6Z/TublxmrFiHrQPf6l1rdUJsU4xR/ICyUaFSWufDl56ojm76bnPR86ad
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 5C733hWXykDkPs3r9anZHWXJn489rFYLVVlMW3rYOIs2sYhLzzkEalWK+2QkxYlnDGUGGP1yKw
 JS9016T5ZQcD/Y6d/ikMtintAYeGauk5cMhIMlmMsHUK6lutG9Bd3vBECyYzQFWjy6TtiXm73n
 Kri8kUEoJFPYwGMDVlLbQuKX/MVORFdxOaROemC1SaryRRpyqOUHUBfYkAeEQvQH8oCTljfLjI
 47ZSZZpMfgqJwY2ttMyP5UKpc+yycR/BFA6ceKAXjbq2jgqOIQ0uxSgKn6k7kzqAFFuryUMFR/
 TEw=
X-SBRS: 2.5
X-MesageID: 30513882
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,400,1596513600"; 
   d="scan'208";a="30513882"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH] pci: cleanup MSI interrupts before removing device from IOMMU
Date: Wed, 21 Oct 2020 10:19:45 +0200
Message-ID: <20201021081945.28425-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Doing the MSI cleanup after removing the device from the IOMMU leads
to the following panic on AMD hardware:

Assertion 'table.ptr && (index < intremap_table_entries(table.ptr, iommu))' failed at iommu_intr.c:172
----[ Xen-4.13.1-10.0.3-d  x86_64  debug=y   Not tainted ]----
CPU:    3
RIP:    e008:[<ffff82d08026ae3c>] drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
[...]
Xen call trace:
   [<ffff82d08026ae3c>] R drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
   [<ffff82d08026af25>] F drivers/passthrough/amd/iommu_intr.c#update_intremap_entry_from_msi_msg+0xc0/0x342
   [<ffff82d08026ba65>] F amd_iommu_msi_msg_update_ire+0x98/0x129
   [<ffff82d08025dd36>] F iommu_update_ire_from_msi+0x1e/0x21
   [<ffff82d080286862>] F msi_free_irq+0x55/0x1a0
   [<ffff82d080286f25>] F pci_cleanup_msi+0x8c/0xb0
   [<ffff82d08025cf52>] F pci_remove_device+0x1af/0x2da
   [<ffff82d0802a42d1>] F do_physdev_op+0xd18/0x1187
   [<ffff82d080383925>] F pv_hypercall+0x1f5/0x567
   [<ffff82d08038a432>] F lstar_enter+0x112/0x120

That's because the call to iommu_remove_device on AMD hardware will
remove the per-device interrupt remapping table, and hence the call to
pci_cleanup_msi done afterwards will find a null intremap table and
crash.

Reorder the calls so that MSI interrupts are torn down before removing
the device from the IOMMU.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
I've discussed the issue with Andrew and maybe we should try to avoid
removing the interrupt remapping table on device removal, but then the
tables would have to be sized to support the maximum number of
interrupts instead of the maximum supported by the device currently
plugged in.

I think the currently proposed fix can be easily backported. We can
see about improving the resilience going forward.
---
 xen/drivers/passthrough/pci.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index b035067975..64b8a77ce0 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -834,10 +834,15 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
         if ( pdev->bus == bus && pdev->devfn == devfn )
         {
+            /*
+             * Cleanup MSI interrupts before removing the device from the
+             * IOMMU, or else the internal IOMMU data used to track the device
+             * interrupts might be already gone.
+             */
+            pci_cleanup_msi(pdev);
             ret = iommu_remove_device(pdev);
             if ( pdev->domain )
                 list_del(&pdev->domain_list);
-            pci_cleanup_msi(pdev);
             printk(XENLOG_DEBUG "PCI remove device %pp\n", &pdev->sbdf);
             free_pdev(pseg, pdev);
             break;
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 09:03:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 09:03:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9896.26135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVA25-0005Iv-6y; Wed, 21 Oct 2020 09:03:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9896.26135; Wed, 21 Oct 2020 09:03:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVA25-0005Io-4B; Wed, 21 Oct 2020 09:03:45 +0000
Received: by outflank-mailman (input) for mailman id 9896;
 Wed, 21 Oct 2020 09:03:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=e6jT=D4=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVA24-0005Ij-Gq
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:03:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ea41638-d5df-47b1-b89b-cc10cb073e88;
 Wed, 21 Oct 2020 09:03:43 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVA22-0000g1-KX; Wed, 21 Oct 2020 09:03:42 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVA22-00029O-6V; Wed, 21 Oct 2020 09:03:42 +0000
X-Inumbo-ID: 6ea41638-d5df-47b1-b89b-cc10cb073e88
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=LjTQi++VgPcChdjuu9EVE6OJGolSd7lt/XatJW4HlVU=; b=LMEqECwwUqA+hQeO6cehxkC5dv
	8wQltEcx2AIRhO0dKw7JHXbXmPfTgpX0OjgOGzZjBd/oRRuhrgMKt1e1K7jgycuMW+5JbIkr8wBnC
	nqNrT/vOW3EMeVhIZa3199acVDUe4YC2ul4DAYMfbzyFW0M4bzLbfZH3vUjPorX61SRQ=;
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
To: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
 <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com>
 <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
 <alpine.DEB.2.21.2010141350170.10386@sstabellini-ThinkPad-T480s>
 <C07DA84A-6527-4480-99CC-F6B26553E3FE@arm.com>
 <alpine.DEB.2.21.2010151104200.10386@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <a418ff3a-0476-203c-d3d8-add3706eea14@xen.org>
Date: Wed, 21 Oct 2020 10:03:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010151104200.10386@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 15/10/2020 19:05, Stefano Stabellini wrote:
> On Thu, 15 Oct 2020, Bertrand Marquis wrote:
>>> On 14 Oct 2020, at 22:15, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>
>>> On Wed, 14 Oct 2020, Julien Grall wrote:
>>>> On 14/10/2020 17:03, Bertrand Marquis wrote:
>>>>>> On 14 Oct 2020, at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>>>>
>>>>>> On 14/10/2020 11:41, Bertrand Marquis wrote:
>>>>>>> When a Cortex A57 processor is affected by CPU erratum 832075, a guest
>>>>>>> not implementing the workaround for it could deadlock the system.
>>>>>>> Add a warning during boot informing the user that only trusted guests
>>>>>>> should be executed on the system.
>>>>>>> An equivalent warning is already given to the user by KVM on cores
>>>>>>> affected by this erratum.
>>>>>>>
>>>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>>> ---
>>>>>>> xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>>>>>>> 1 file changed, 21 insertions(+)
>>>>>>>
>>>>>>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>>>>>>> index 6c09017515..8f9ab6dde1 100644
>>>>>>> --- a/xen/arch/arm/cpuerrata.c
>>>>>>> +++ b/xen/arch/arm/cpuerrata.c
>>>>>>> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>>>>>>>
>>>>>>> #endif
>>>>>>>
>>>>>>> +#ifdef CONFIG_ARM64_ERRATUM_832075
>>>>>>> +
>>>>>>> +static int warn_device_load_acquire_errata(void *data)
>>>>>>> +{
>>>>>>> +    static bool warned = false;
>>>>>>> +
>>>>>>> +    if ( !warned )
>>>>>>> +    {
>>>>>>> +        warning_add("This CPU is affected by the errata 832075.\n"
>>>>>>> +                    "Guests without required CPU erratum workarounds\n"
>>>>>>> +                    "can deadlock the system!\n"
>>>>>>> +                    "Only trusted guests should be used on this system.\n");
>>>>>>> +        warned = true;
>>>>>>
>>>>>> This is an antipattern, which probably wants fixing elsewhere as well.
>>>>>>
>>>>>> warning_add() is __init.  It's not legitimate to call from a non-init
>>>>>> function, and a less useless build system would have modpost to object.
>>>>>>
>>>>>> The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
>>>>>> but this provides no safety at all.
>>>>>>
>>>>>>
>>>>>> What warning_add() actually does is queue messages for some point near
>>>>>> the end of boot.  It's not clear that this is even a clever thing to do.
>>>>>>
>>>>>> I'm very tempted to suggest a blanket change to printk_once().
>>>>>
>>>>> If this is needed then this could be done in another series?
>>>>
>>>> The callback ->enable() will be called when a CPU is onlined/offlined. So this
>>>> is going to be required if you plan to support CPU hotplug or suspend/resume.
>>>>
>>>>> Would be good to keep this patch as purely handling the errata.
>>>
>>> My preference would be to keep this patch small with just the errata,
>>> maybe using a simple printk_once as Andrew and Julien discussed.
>>>
>>> There is another instance of warning_add potentially being called
>>> outside __init in xen/arch/arm/cpuerrata.c:
>>> enable_smccc_arch_workaround_1. So if you are up for it, it would be
>>> good to produce a patch to fix that too.
>>>
>>>
>>>> In the case of this patch, how about moving the warning_add() in
>>>> enable_errata_workarounds()?
>>>>
>>>> By then we should know all the errata present on your platform. All CPUs
>>>> onlined afterwards (i.e. at runtime) should always abide by the set discovered
>>>> during boot.
>>>
>>> If I understand your suggestion correctly, it would work for
>>> warn_device_load_acquire_errata, because it is just a warning, but it
>>> would not work for enable_smccc_arch_workaround_1, because there is
>>> actually a call to be made there.
>>>
>>> Maybe it would be simpler to use printk_once in both cases? I don't have
>>> a strong preference either way.
>>
>> I could do the following (in a series of 2 patches):
>> - modify enable_smccc_arch_workaround_1 to use printk_once with a
>>    prefix/suffix “****” on each line printed (and maybe adapting the print to
>>    fit a line length of 80)
>> - modify my patch to do the print in enable_errata_workarounds, also using
>>    the prefix/suffix and printk_once
>>
>> Please confirm that this strategy would fit everyone.
> 
> I think it is OK but if you are going to use printk_once in your patch
> you might as well leave it in the .enable implementation.
> 
> Julien, what do you think?

Bertrand reminded me today that I forgot to answer the e-mail (sorry). I 
am happy with using printk_once().

I am also wondering if we should taint the hypervisor (via 
add_taint()). This would be helpful if someone reports errors on a Xen 
running on such a platform.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 09:37:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 09:37:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9899.26148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAYY-00085C-Q0; Wed, 21 Oct 2020 09:37:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9899.26148; Wed, 21 Oct 2020 09:37:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAYY-000855-L8; Wed, 21 Oct 2020 09:37:18 +0000
Received: by outflank-mailman (input) for mailman id 9899;
 Wed, 21 Oct 2020 09:37:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVAYX-000850-Oa
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:37:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b23a8bb0-1801-4741-8058-7ddfed32ab05;
 Wed, 21 Oct 2020 09:37:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAYS-0001Jz-3r; Wed, 21 Oct 2020 09:37:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAYR-0007QP-Sw; Wed, 21 Oct 2020 09:37:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAYR-0003Cy-Qc; Wed, 21 Oct 2020 09:37:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kVAYX-000850-Oa
	for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:37:17 +0000
X-Inumbo-ID: b23a8bb0-1801-4741-8058-7ddfed32ab05
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b23a8bb0-1801-4741-8058-7ddfed32ab05;
	Wed, 21 Oct 2020 09:37:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=T+aFEAdx3/x9IwAjDKKFO1h3hka7HV4HWa4N4q2PUt0=; b=4rm21rYI1Gpqwi61bDfM6k03pr
	UzD3prnik+FHog0qlFO+f7dF3lAhiHxbmRP01GpYLrmb6EfUda4HqC58ZpPjUUy+fOIJtwfxS0Awb
	J5snPewPV8yHIccxcKmtgFHfCSU2ubrln7AxrkhVT6qPz6K4ntgYwmJVSsIOuLQ7eg3s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVAYS-0001Jz-3r; Wed, 21 Oct 2020 09:37:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVAYR-0007QP-Sw; Wed, 21 Oct 2020 09:37:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVAYR-0003Cy-Qc; Wed, 21 Oct 2020 09:37:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete build-armhf
Message-Id: <E1kVAYR-0003Cy-Qc@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 09:37:11 +0000

branch xen-unstable
xenbranch xen-unstable
job build-armhf
testid xen-build

Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  782d7b30dd8e27ba24346e7c411b476db88b59e7
  Bug not present: e12ce85b2c79d83a340953291912875c30b3af06
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156066/


  commit 782d7b30dd8e27ba24346e7c411b476db88b59e7
  Merge: e12ce85b2c c47110d90f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sat Oct 17 20:52:55 2020 +0100
  
      Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging
      
      * Drop ninjatool and just require ninja (Paolo)
      * Fix docs build under msys2 (Yonggang)
      * HAX snafu fix (Claudio)
      * Disable signal handlers during fuzzing (Alex)
      * Miscellaneous fixes (Bruce, Greg)
      
      # gpg: Signature made Sat 17 Oct 2020 15:45:56 BST
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * remotes/bonzini-gitlab/tags/for-upstream: (22 commits)
        ci: include configure and meson logs in all jobs if configure fails
        hax: unbreak accelerator cpu code after cpus.c split
        fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
        cirrus: Enable doc build on msys2/mingw
        meson: Move the detection logic for sphinx to meson
        meson: move SPHINX_ARGS references within "if build_docs"
        docs: Fix Sphinx configuration for msys2/mingw
        meson: Only install icons and qemu.desktop if have_system
        configure: fix handling of --docdir parameter
        meson: cleanup curses/iconv test
        meson.build: don't condition iconv detection on library detection
        build: add --enable/--disable-libudev
        build: replace ninjatool with ninja
        build: cleanups to Makefile
        add ninja to dockerfiles, CI configurations and test VMs
        dockerfiles: enable Centos 8 PowerTools
        configure: move QEMU_INCLUDES to meson
        tests: add missing generated sources to testqapi
        make: run shell with pipefail
        tests/Makefile.include: unbreak non-tcg builds
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit c47110d90fa5401bcc42c17f8ae0724a1c96599a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 05:49:28 2020 -0400
  
      ci: include configure and meson logs in all jobs if configure fails
      
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a1b0e4613006704fb02209df548ce9fde62232e0
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Fri Oct 16 10:00:32 2020 +0200
  
      hax: unbreak accelerator cpu code after cpus.c split
      
      during my split of cpus.c, code line
      "current_cpu = cpu"
      was removed by mistake, causing hax to break.
      
      This commit fixes the situation restoring it.
      
      Reported-by: Volker Rümelin <vr_qemu@t-online.de>
      Fixes: e92558e4bf8059ce4f0b310afe218802b72766bc
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Message-Id: <20201016080032.13914-1-cfontana@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit fc69fa216cf52709b1279a592364e50c674db6ff
  Author: Alexander Bulekov <alxndr@bu.edu>
  Date:   Wed Oct 14 10:21:57 2020 -0400
  
      fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
      
      Prior to this patch, the only way I found to terminate the fuzzer was
      either to:
       1. Explicitly specify the number of fuzzer runs with the -runs= flag
       2. SIGKILL the process with "pkill -9 qemu-fuzz-*" or similar
      
      In addition to being annoying to deal with, SIGKILLing the process skips
      over any exit handlers (e.g. registered with atexit()). This is bad,
      since some fuzzers might create temporary files that should ideally be
      removed on exit using an exit handler. The only way to achieve a clean
      exit now is to specify -runs=N , but the desired "N" is tricky to
      identify prior to fuzzing.
      
      Why doesn't the process exit with standard SIGINT,SIGHUP,SIGTERM
      signals? QEMU installs its own handlers for these signals in
      os-posix.c:os_setup_signal_handling, which notify the main loop that an
      exit was requested. The fuzzer, however, does not run qemu_main_loop,
      which performs the main_loop_should_exit() check.  This means that the
      fuzzer effectively ignores these signals. As we don't really care about
      cleanly stopping the disposable fuzzer "VM", this patch uninstalls
      QEMU's signal handlers. Thus, we can stop the fuzzer with
      SIG{INT,HUP,TERM} and the fuzzing code can optionally use atexit() to
      clean up temporary files/resources.
      
      Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
      Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
      Message-Id: <20201014142157.46028-1-alxndr@bu.edu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5bfb4f52fe897f5594a0089891e19c78d3ecd672
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:26 2020 +0800
  
      cirrus: Enable doc build on msys2/mingw
      
      Currently rST depends on the old sphinx-2.x version.
      Install it by downloading it.
      Remove the need for the university mirror; the main repos are recovered.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-5-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e36676604683c1ee12963d83eaaf3d3c2a1790ce
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:25 2020 +0800
  
      meson: Move the detection logic for sphinx to meson
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-4-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9dc6ee3fd78a478935eecf936cddd575c6dfb20a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 04:05:26 2020 -0400
  
      meson: move SPHINX_ARGS references within "if build_docs"
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a94a689cc5c5b2a1fbba4dd418e456a14e6e12e5
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:23 2020 +0800
  
      docs: Fix Sphinx configuration for msys2/mingw
      
      Python doesn't support running ../scripts/kernel-doc directly.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-2-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3856873ee404c028a47115147f21cdc4b0d25566
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 14:18:40 2020 -0600
  
      meson: Only install icons and qemu.desktop if have_system
      
      These files are not needed for a linux-user only install.
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Message-Id: <20201015201840.282956-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c6502638075557ff38fbb874af32f91186b667eb
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 13:07:42 2020 -0600
  
      configure: fix handling of --docdir parameter
      
      Commit ca8c0909f01 changed qemu_docdir to be docdir, then later uses the
      qemu_docdir name in the final assignment. Unfortunately, one instance of
      qemu_docdir was missed: the one which comes from the --docdir parameter.
      This patch restores the proper handling of the --docdir parameter.
      
      Fixes: ca8c0909f01 ("configure: build docdir like other suffixed
      directories")
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20201015190742.270629-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 30fe76b17cc5aad395eb8a8a3da59e377a0b3d8e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 13:26:50 2020 -0400
  
      meson: cleanup curses/iconv test
      
      Skip the test if system emulation is not requested, and
      differentiate errors for lack of iconv and lack of curses.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ac0c8351abf79f3b65105ea27bd0491387d804f6
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Wed Oct 14 16:19:39 2020 -0600
  
      meson.build: don't condition iconv detection on library detection
      
      It isn't necessarily the case that use of iconv requires an additional
      library. For that reason we shouldn't conditionalize iconv detection on
      libiconv.found.
      
      Fixes: 5285e593c33 (configure: Fixes ncursesw detection under msys2/mingw by convert them to meson)
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201014221939.196958-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5c53015a480b3fe137ebd8b3b584a595c65e8f21
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 06:09:27 2020 -0400
  
      build: add --enable/--disable-libudev
      
      Initially, libudev detection was bundled with --enable-mpath because
      qemu-pr-helper was the only user of libudev.  Recently however the USB
      U2F emulation has also started using libudev, so add a separate
      option.  This also allows 1) disabling libudev if desired for static
      builds and 2) for non-static builds, requiring libudev even if
      multipath support is undesirable.
      
      The multipath test is adjusted, because it is now possible to enter it
      with configurations that should fail, such as --static --enable-mpath
      --disable-libudev.
      
      Reported-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 09e93326e448ab43fa26a9e2d9cc20ecf951f32b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:28:11 2020 -0400
  
      build: replace ninjatool with ninja
      
      Now that the build is done entirely by Meson, there is no need
      to keep the Makefile conversion.  Instead, we can ask Ninja about
      the targets it exposes and forward them.
      
      The main advantages are, from smallest to largest:
      
      - reducing the possible namespace pollution within the Makefile
      
      - removal of a relatively large Python program
      
      - faster build because parsing Makefile.ninja is slower than
      parsing build.ninja; and faster build after Meson runs because
      we do not have to generate Makefile.ninja.
      
      - tracking of command lines, which provides more accurate rebuilds
      
      In addition the change removes the requirement for GNU make 3.82, which
      was annoying on Mac, and avoids bugs on Windows due to ninjatool not
      knowing how to convert Windows escapes to POSIX escapes.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2b8575bd5fbc8a8880e9ecfb1c7e7990feb1fea6
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 12:20:02 2020 -0400
  
      build: cleanups to Makefile
      
      Group similar rules, add comments to "else" and "endif" lines,
      detect too-old config-host.mak before messing things up.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 345d7053ca4a39b0496366f3c953ae2681570ce3
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:58:50 2020 -0400
  
      add ninja to dockerfiles, CI configurations and test VMs
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Acked-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f2f984a3b3bc8322df2efa3937bf11e8ea2bcaa5
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:12:37 2020 -0400
  
      dockerfiles: enable Centos 8 PowerTools
      
      ninja is included in the CentOS PowerTools repository.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1e6e616dc21a8117cbe36a7e9026221b566cdf56
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 08:45:42 2020 -0400
  
      configure: move QEMU_INCLUDES to meson
      
      Confusingly, QEMU_INCLUDES is not used by configure tests.  Moving
      it to meson.build ensures that Windows paths are specified instead of
      the msys paths like /c/Users/...
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 97d6efd0a3f3a08942de6c2aee5d2983c54ca84c
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:20:17 2020 -0400
  
      tests: add missing generated sources to testqapi
      
      Ninja notices them due to a different order in visiting the graph.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3bf4583580ab705de1beff6222e934239c3a0356
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:35:13 2020 -0400
  
      make: run shell with pipefail
      
      Without pipefail, it is possible to miss failures if the recipes
      include pipes.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 88da4b043b4f91a265947149b1cd6758c046a4bd
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 13 21:21:21 2020 +0200
  
      tests/Makefile.include: unbreak non-tcg builds
      
      Remove from check-block the requirement that all TARGET_DIRS are built.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e90df5eada4e6047548203d781bd61ddcc45d7b4
  Author: Greg Kurz <groug@kaod.org>
  Date:   Thu Oct 15 16:49:06 2020 +0200
  
      Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree
      
      Tools usually expect the index files to be in the source tree, eg. emacs.
      This is already the case when doing out-of-tree builds, but with in-tree
      builds they end up in the build directory.
      
      Force cscope, ctags and etags to put them in the source tree.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <160277334665.1754102.10921580280105870386.stgit@bahia.lan>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6ebd89cf9ca3f5a6948542c4522b9380b1e9539f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 03:20:45 2020 -0400
  
      submodules: bump meson to 0.55.3
      
      This adds some bugfixes, and allows MSYS2 to configure
      without "--ninja=ninja".
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/build-armhf.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/build-armhf.xen-build --summary-out=tmp/156066.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline build-armhf xen-build
Searching for failure / basis pass:
 156040 fail [host=cubietruck-picasso] / 155971 ok.
Failure / basis pass flights: 156040 / 155971
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest f82b827c92f77eac8debdce6ef9689d156771871 4c41341af76cfc85b5a6c0f87de4838672ab9f89 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Basis pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/osstest/ovmf.git#73e3cb6c7eea4f5db81c87574dcefe1282de4772-f82b827c92f77eac8debdce6ef9689d156771871 git://git.qemu.org/qemu.git#e12ce85b2c79d83a340953291912875c30b3af06-4c41341af76cfc85b5a6c0f87de4838672ab9f89 git://xenbits.xen.org/osstest/seabios.git#58a44be024f69d2e4d2b58553529230abdd3935e-58a44be024f69d2e4d2b58553529230abdd3935e git://xenbits.xen.org/xen.git#0dfddb2116e3757f77a691a3fe335173088d69dc-0dfddb2116e3757f77a691a3fe335173088d69dc
Loaded 34884 nodes in revision graph
Searching for test results:
 155953 [host=cubietruck-gleizes]
 155971 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 155979 [host=cubietruck-braque]
 156006 [host=cubietruck-braque]
 155994 [host=cubietruck-gleizes]
 156007 [host=cubietruck-braque]
 156008 [host=cubietruck-gleizes]
 156009 [host=cubietruck-gleizes]
 156011 [host=cubietruck-braque]
 156014 [host=cubietruck-gleizes]
 156015 [host=cubietruck-gleizes]
 156016 [host=cubietruck-braque]
 156019 [host=cubietruck-braque]
 156021 [host=cubietruck-gleizes]
 156022 [host=cubietruck-gleizes]
 156023 [host=cubietruck-braque]
 156024 [host=cubietruck-gleizes]
 156025 [host=cubietruck-gleizes]
 156026 [host=cubietruck-gleizes]
 156032 [host=cubietruck-gleizes]
 156028 fail f82b827c92f77eac8debdce6ef9689d156771871 4c41341af76cfc85b5a6c0f87de4838672ab9f89 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156036 [host=cubietruck-gleizes]
 156043 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156044 fail f82b827c92f77eac8debdce6ef9689d156771871 4c41341af76cfc85b5a6c0f87de4838672ab9f89 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156045 fail 92e9c44f205a876556abe1a1addea5c40e4f3ccf 000f5b8f46f9a9f0a0d5304b605d89808ad92d4e 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156046 pass 92e9c44f205a876556abe1a1addea5c40e4f3ccf 3e7e134d827790c3714cae1d5b8aff8612000116 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156051 pass 92e9c44f205a876556abe1a1addea5c40e4f3ccf bb997e5c967b3b6f19f1461811df6317ed37c5ff 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156055 fail 92e9c44f205a876556abe1a1addea5c40e4f3ccf 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156040 fail f82b827c92f77eac8debdce6ef9689d156771871 4c41341af76cfc85b5a6c0f87de4838672ab9f89 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156056 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156058 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156060 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156061 pass 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
 156066 fail 73e3cb6c7eea4f5db81c87574dcefe1282de4772 782d7b30dd8e27ba24346e7c411b476db88b59e7 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
Searching for interesting versions
 Result found: flight 155971 (pass), for basis pass
 Result found: flight 156028 (fail), for basis failure
 Repro found: flight 156043 (pass), for basis pass
 Repro found: flight 156044 (fail), for basis failure
 0 revisions at 73e3cb6c7eea4f5db81c87574dcefe1282de4772 e12ce85b2c79d83a340953291912875c30b3af06 58a44be024f69d2e4d2b58553529230abdd3935e 0dfddb2116e3757f77a691a3fe335173088d69dc
No revisions left to test, checking graph state.
 Result found: flight 155971 (pass), for last pass
 Result found: flight 156056 (fail), for first failure
 Repro found: flight 156058 (pass), for last pass
 Repro found: flight 156060 (fail), for first failure
 Repro found: flight 156061 (pass), for last pass
 Repro found: flight 156066 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  782d7b30dd8e27ba24346e7c411b476db88b59e7
  Bug not present: e12ce85b2c79d83a340953291912875c30b3af06
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156066/


  commit 782d7b30dd8e27ba24346e7c411b476db88b59e7
  Merge: e12ce85b2c c47110d90f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Sat Oct 17 20:52:55 2020 +0100
  
      Merge remote-tracking branch 'remotes/bonzini-gitlab/tags/for-upstream' into staging
      
      * Drop ninjatool and just require ninja (Paolo)
      * Fix docs build under msys2 (Yonggang)
      * HAX snafu fix (Claudio)
      * Disable signal handlers during fuzzing (Alex)
      * Miscellaneous fixes (Bruce, Greg)
      
      # gpg: Signature made Sat 17 Oct 2020 15:45:56 BST
      # gpg:                using RSA key F13338574B662389866C7682BFFBD25F78C7AE83
      # gpg:                issuer "pbonzini@redhat.com"
      # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full]
      # gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
      # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4  E2F7 7E15 100C CD36 69B1
      #      Subkey fingerprint: F133 3857 4B66 2389 866C  7682 BFFB D25F 78C7 AE83
      
      * remotes/bonzini-gitlab/tags/for-upstream: (22 commits)
        ci: include configure and meson logs in all jobs if configure fails
        hax: unbreak accelerator cpu code after cpus.c split
        fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
        cirrus: Enable doc build on msys2/mingw
        meson: Move the detection logic for sphinx to meson
        meson: move SPHINX_ARGS references within "if build_docs"
        docs: Fix Sphinx configuration for msys2/mingw
        meson: Only install icons and qemu.desktop if have_system
        configure: fix handling of --docdir parameter
        meson: cleanup curses/iconv test
        meson.build: don't condition iconv detection on library detection
        build: add --enable/--disable-libudev
        build: replace ninjatool with ninja
        build: cleanups to Makefile
        add ninja to dockerfiles, CI configurations and test VMs
        dockerfiles: enable Centos 8 PowerTools
        configure: move QEMU_INCLUDES to meson
        tests: add missing generated sources to testqapi
        make: run shell with pipefail
        tests/Makefile.include: unbreak non-tcg builds
        ...
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit c47110d90fa5401bcc42c17f8ae0724a1c96599a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 05:49:28 2020 -0400
  
      ci: include configure and meson logs in all jobs if configure fails
      
      Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a1b0e4613006704fb02209df548ce9fde62232e0
  Author: Claudio Fontana <cfontana@suse.de>
  Date:   Fri Oct 16 10:00:32 2020 +0200
  
      hax: unbreak accelerator cpu code after cpus.c split
      
      During my split of cpus.c, the line
      "current_cpu = cpu;"
      was removed by mistake, causing hax to break.
      
      This commit fixes the situation by restoring it.
      
      Reported-by: Volker Rümelin <vr_qemu@t-online.de>
      Fixes: e92558e4bf8059ce4f0b310afe218802b72766bc
      Signed-off-by: Claudio Fontana <cfontana@suse.de>
      Message-Id: <20201016080032.13914-1-cfontana@suse.de>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit fc69fa216cf52709b1279a592364e50c674db6ff
  Author: Alexander Bulekov <alxndr@bu.edu>
  Date:   Wed Oct 14 10:21:57 2020 -0400
  
      fuzz: Disable QEMU's SIG{INT,HUP,TERM} handlers
      
      Prior to this patch, the only way I found to terminate the fuzzer was
      either to:
       1. Explicitly specify the number of fuzzer runs with the -runs= flag
       2. SIGKILL the process with "pkill -9 qemu-fuzz-*" or similar
      
      In addition to being annoying to deal with, SIGKILLing the process skips
      over any exit handlers (e.g. those registered with atexit()). This is bad,
      since some fuzzers might create temporary files that should ideally be
      removed on exit using an exit handler. The only way to achieve a clean
      exit now is to specify -runs=N , but the desired "N" is tricky to
      identify prior to fuzzing.
      
      Why doesn't the process exit with standard SIGINT,SIGHUP,SIGTERM
      signals? QEMU installs its own handlers for these signals in
      os-posix.c:os_setup_signal_handling, which notify the main loop that an
      exit was requested. The fuzzer, however, does not run qemu_main_loop,
      which performs the main_loop_should_exit() check.  This means that the
      fuzzer effectively ignores these signals. As we don't really care about
      cleanly stopping the disposable fuzzer "VM", this patch uninstalls
      QEMU's signal handlers. Thus, we can stop the fuzzer with
      SIG{INT,HUP,TERM} and the fuzzing code can optionally use atexit() to
      clean up temporary files/resources.
      
      Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
      Signed-off-by: Alexander Bulekov <alxndr@bu.edu>
      Message-Id: <20201014142157.46028-1-alxndr@bu.edu>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5bfb4f52fe897f5594a0089891e19c78d3ecd672
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:26 2020 +0800
  
      cirrus: Enable doc build on msys2/mingw
      
      Currently the rST docs depend on the old sphinx-2.x series, so
      install it by downloading it directly.
      Also remove the need for the university mirror, as the main repos
      have recovered.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-5-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e36676604683c1ee12963d83eaaf3d3c2a1790ce
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:25 2020 +0800
  
      meson: Move the detection logic for sphinx to meson
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-4-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 9dc6ee3fd78a478935eecf936cddd575c6dfb20a
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Fri Oct 16 04:05:26 2020 -0400
  
      meson: move SPHINX_ARGS references within "if build_docs"
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit a94a689cc5c5b2a1fbba4dd418e456a14e6e12e5
  Author: Yonggang Luo <luoyonggang@gmail.com>
  Date:   Fri Oct 16 06:06:23 2020 +0800
  
      docs: Fix Sphinx configuration for msys2/mingw
      
      Python doesn't support running ../scripts/kernel-doc directly.
      
      Signed-off-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201015220626.418-2-luoyonggang@gmail.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3856873ee404c028a47115147f21cdc4b0d25566
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 14:18:40 2020 -0600
  
      meson: Only install icons and qemu.desktop if have_system
      
      These files are not needed for a linux-user only install.
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Message-Id: <20201015201840.282956-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit c6502638075557ff38fbb874af32f91186b667eb
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Thu Oct 15 13:07:42 2020 -0600
  
      configure: fix handling of --docdir parameter
      
      Commit ca8c0909f01 changed qemu_docdir to be docdir, then later uses the
      qemu_docdir name in the final assignment. Unfortunately, one instance of
      qemu_docdir was missed: the one which comes from the --docdir parameter.
      This patch restores the proper handling of the --docdir parameter.
      
      Fixes: ca8c0909f01 ("configure: build docdir like other suffixed
      directories")
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
      Message-Id: <20201015190742.270629-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 30fe76b17cc5aad395eb8a8a3da59e377a0b3d8e
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 13:26:50 2020 -0400
  
      meson: cleanup curses/iconv test
      
      Skip the test if system emulation is not requested, and
      differentiate between errors for lack of iconv and lack of curses.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit ac0c8351abf79f3b65105ea27bd0491387d804f6
  Author: Bruce Rogers <brogers@suse.com>
  Date:   Wed Oct 14 16:19:39 2020 -0600
  
      meson.build: don't condition iconv detection on library detection
      
      It isn't necessarily the case that use of iconv requires an additional
      library. For that reason we shouldn't conditionalize iconv detection on
      libiconv.found.
      
      Fixes: 5285e593c33 (configure: Fixes ncursesw detection under msys2/mingw by convert them to meson)
      
      Signed-off-by: Bruce Rogers <brogers@suse.com>
      Reviewed-by: Yonggang Luo <luoyonggang@gmail.com>
      Message-Id: <20201014221939.196958-1-brogers@suse.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 5c53015a480b3fe137ebd8b3b584a595c65e8f21
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 06:09:27 2020 -0400
  
      build: add --enable/--disable-libudev
      
      Initially, libudev detection was bundled with --enable-mpath because
      qemu-pr-helper was the only user of libudev.  Recently however the USB
      U2F emulation has also started using libudev, so add a separate
      option.  This also allows 1) disabling libudev if desired for static
      builds and 2) for non-static builds, requiring libudev even if
      multipath support is undesirable.
      
      The multipath test is adjusted, because it is now possible to enter it
      with configurations that should fail, such as --static --enable-mpath
      --disable-libudev.
      
      Reported-by: Peter Maydell <peter.maydell@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 09e93326e448ab43fa26a9e2d9cc20ecf951f32b
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:28:11 2020 -0400
  
      build: replace ninjatool with ninja
      
      Now that the build is done entirely by Meson, there is no need
      to keep the Makefile conversion.  Instead, we can ask Ninja about
      the targets it exposes and forward them.
      
      The main advantages are, from smallest to largest:
      
      - reducing the possible namespace pollution within the Makefile
      
      - removal of a relatively large Python program
      
      - faster build because parsing Makefile.ninja is slower than
      parsing build.ninja; and faster build after Meson runs because
      we do not have to generate Makefile.ninja.
      
      - tracking of command lines, which provides more accurate rebuilds
      
      In addition the change removes the requirement for GNU make 3.82, which
      was annoying on Mac, and avoids bugs on Windows due to ninjatool not
      knowing how to convert Windows escapes to POSIX escapes.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 2b8575bd5fbc8a8880e9ecfb1c7e7990feb1fea6
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 12:20:02 2020 -0400
  
      build: cleanups to Makefile
      
      Group similar rules, add comments to "else" and "endif" lines,
      detect too-old config-host.mak before messing things up.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 345d7053ca4a39b0496366f3c953ae2681570ce3
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Aug 13 09:58:50 2020 -0400
  
      add ninja to dockerfiles, CI configurations and test VMs
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Acked-by: Alex Bennée <alex.bennee@linaro.org>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit f2f984a3b3bc8322df2efa3937bf11e8ea2bcaa5
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:12:37 2020 -0400
  
      dockerfiles: enable Centos 8 PowerTools
      
      ninja is included in the CentOS PowerTools repository.
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 1e6e616dc21a8117cbe36a7e9026221b566cdf56
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 08:45:42 2020 -0400
  
      configure: move QEMU_INCLUDES to meson
      
      Confusingly, QEMU_INCLUDES is not used by configure tests.  Moving
      it to meson.build ensures that Windows paths are specified instead of
      the msys paths like /c/Users/...
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 97d6efd0a3f3a08942de6c2aee5d2983c54ca84c
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:20:17 2020 -0400
  
      tests: add missing generated sources to testqapi
      
      Ninja notices them due to a different order in visiting the graph.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 3bf4583580ab705de1beff6222e934239c3a0356
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Wed Oct 14 07:35:13 2020 -0400
  
      make: run shell with pipefail
      
      Without pipefail, it is possible to miss failures if the recipes
      include pipes.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 88da4b043b4f91a265947149b1cd6758c046a4bd
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Tue Oct 13 21:21:21 2020 +0200
  
      tests/Makefile.include: unbreak non-tcg builds
      
      Remove from check-block the requirement that all TARGET_DIRS are built.
      
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit e90df5eada4e6047548203d781bd61ddcc45d7b4
  Author: Greg Kurz <groug@kaod.org>
  Date:   Thu Oct 15 16:49:06 2020 +0200
  
      Makefile: Ensure cscope.out/tags/TAGS are generated in the source tree
      
      Tools such as emacs usually expect the index files to be in the source tree.
      This is already the case when doing out-of-tree builds, but with in-tree
      builds they end up in the build directory.
      
      Force cscope, ctags and etags to put them in the source tree.
      
      Signed-off-by: Greg Kurz <groug@kaod.org>
      Message-Id: <160277334665.1754102.10921580280105870386.stgit@bahia.lan>
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
  
  commit 6ebd89cf9ca3f5a6948542c4522b9380b1e9539f
  Author: Paolo Bonzini <pbonzini@redhat.com>
  Date:   Thu Oct 15 03:20:45 2020 -0400
  
      submodules: bump meson to 0.55.3
      
      This adds some bugfixes, and allows MSYS2 to configure
      without "--ninja=ninja".
      
      Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Revision graph left in /home/logs/results/bisect/qemu-mainline/build-armhf.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
156066: tolerable ALL FAIL

flight 156066 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156066/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-armhf                   6 xen-build               fail baseline untested


jobs:
 build-armhf                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 09:40:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 09:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9903.26164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAbO-0000VJ-Hj; Wed, 21 Oct 2020 09:40:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9903.26164; Wed, 21 Oct 2020 09:40:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAbO-0000VC-Af; Wed, 21 Oct 2020 09:40:14 +0000
Received: by outflank-mailman (input) for mailman id 9903;
 Wed, 21 Oct 2020 09:40:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVAbN-0000V1-0k
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:40:13 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58e4250a-0d48-4fed-ba3e-54f8472fffac;
 Wed, 21 Oct 2020 09:40:10 +0000 (UTC)
X-Inumbo-ID: 58e4250a-0d48-4fed-ba3e-54f8472fffac
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603273211;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=y1TlbR+gOi1GdVpz8eW7YhNSSQuTMe9zPWilDB21238=;
  b=JCpsK14gIfrlyjo9Fj4svhsqeVojhIBuN4Zuup0JfS9n/9IVy55pcoU/
   cEaMSSKk5iGbJuUkhMKsqMk6XiD/iMalF2jSqVy1B0jE7Eu2kV2k0yYTb
   xhRKTy5LdpLT61s1UzMLb7EvYou5foUxGwP6u1UBCSaoY7pliq+lTFMFB
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ejDPsS1yydqkXGHoLjqb7zQa3VP6PPuatcMBvrV2pOrFGJkjxFRvYRGIN8nypJRlITckiI0IfD
 jp68bV+s8DY3oUCycMhvqkutwOdR52g7kZq0PZBWYoM0Wt4q/bTPCRFM/gU0Rsei6LuyMX9XNW
 ko7O9O/7ll+Xj6taOiffM8FYI4IKKjB80Q2V4jAt0/KnObfQRmLKK7zqWZW4J87aNAzG5Gw7l3
 MKjVp3RS7e69bOhYhlj2WnaDaKsBnwCgRX4Cpweo5IeKhUvcez+/fDRL+F+lo1JGykfCbuxGz5
 WWg=
X-SBRS: 2.5
X-MesageID: 29467375
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,400,1596513600"; 
   d="scan'208";a="29467375"
Date: Wed, 21 Oct 2020 11:39:58 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH] x86: XENMAPSPACE_gmfn{,_batch,_range} want to special
 case idx == gpfn
Message-ID: <20201021093958.e4kopykalddam7pk@Air-de-Roger>
References: <920fa307-190e-dc11-f338-5b44a2126050@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <920fa307-190e-dc11-f338-5b44a2126050@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Fri, Oct 16, 2020 at 08:44:10AM +0200, Jan Beulich wrote:
> In this case up to now we've been freeing the page (through
> guest_remove_page(), with the actual free typically happening at the
> put_page() later in the function), but then failing the call on the
> subsequent GFN consistency check. However, in my opinion such a request
> should complete as an "expensive" no-op (leaving aside the potential
> unsharing of the page).
> 
> This points out that f33d653f46f5 ("x86: replace bad ASSERT() in
> xenmem_add_to_physmap_one()") would really have needed an XSA, despite
> its description claiming otherwise, as in release builds we then put in
> place a P2M entry referencing the about to be freed page.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> I've been considering to make such operations "cheap" NOPs rather than
> "expensive" ones, by comparing idx and gpfn early in the function in
> the XENMAPSPACE_gmfn case block, but I've come to the conclusion that
> having the operation perform otherwise normally is better - this way,
> errors that would result if idx != gpfn will still result. While I'm
> open to reasons towards the other alternative, having the added check be
> MFN-based makes crystal clear that we're dealing with the same
> underlying physical resource, i.e. also covers the hypothetical(?) case
> of two GFNs referring to the same MFN.
> 
> I'm unconvinced that it is correct for prev_mfn's p2mt to not be
> inspected at all - I don't think things will go right if p2m_shared()
> was true for it. But I'm afraid I'm not up to correcting mem-sharing
> related logic.
> 
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4555,7 +4555,7 @@ int xenmem_add_to_physmap_one(
>          if ( is_special_page(mfn_to_page(prev_mfn)) )
>              /* Special pages are simply unhooked from this phys slot. */
>              rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
> -        else
> +        else if ( !mfn_eq(mfn, prev_mfn) )
>              /* Normal domain memory is freed, to avoid leaking memory. */
>              rc = guest_remove_page(d, gfn_x(gpfn));

What about the case where the access differs between the old and the new
entries while pointing to the same mfn? Would Xen install the new entry
successfully?

It seems easier to me to use guest_physmap_remove_page in that case, to
remove the entry from the p2m without freeing the page.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 09:45:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 09:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9906.26175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAg7-0000kj-2N; Wed, 21 Oct 2020 09:45:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9906.26175; Wed, 21 Oct 2020 09:45:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAg6-0000kc-Uy; Wed, 21 Oct 2020 09:45:06 +0000
Received: by outflank-mailman (input) for mailman id 9906;
 Wed, 21 Oct 2020 09:45:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrJA=D4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kVAg5-0000kX-Hr
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:45:05 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.84]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 193dcfd1-875a-4b44-bf67-037341ba42fe;
 Wed, 21 Oct 2020 09:45:03 +0000 (UTC)
Received: from DB6PR1001CA0041.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:55::27)
 by HE1PR0802MB2489.eurprd08.prod.outlook.com (2603:10a6:3:d8::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.28; Wed, 21 Oct
 2020 09:45:00 +0000
Received: from DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:55:cafe::e1) by DB6PR1001CA0041.outlook.office365.com
 (2603:10a6:4:55::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Wed, 21 Oct 2020 09:45:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT023.mail.protection.outlook.com (10.152.20.68) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Wed, 21 Oct 2020 09:45:00 +0000
Received: ("Tessian outbound ba2270a55485:v64");
 Wed, 21 Oct 2020 09:45:00 +0000
Received: from a933d3d9eb81.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2E47A783-C000-4BD8-A6A3-05D9ECF98971.1; 
 Wed, 21 Oct 2020 09:44:54 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a933d3d9eb81.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 21 Oct 2020 09:44:54 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0802MB2566.eurprd08.prod.outlook.com (2603:10a6:4:a2::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 21 Oct
 2020 09:44:52 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Wed, 21 Oct 2020
 09:44:52 +0000
X-Inumbo-ID: 193dcfd1-875a-4b44-bf67-037341ba42fe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qQRXNdeNhZJ+HP9AUpIowYt4o9AC864aJmcwQyxp1Aw=;
 b=3Bdbrrf+1nI3MuUGHgc7nnTtQCMH4yZrRQdeXZo1B7DYTk1zcdJovDdaLC+1s7CMoWaeg2Ivf9kvTMeoveDKi6OSs0yoXwNw7yoP0n2C3eeNpGCxpY1E2CWZKEIjNNsjD9bbwZSaYuo0VB6G5KqWQNMarG4bDGBQohq/wN8sulo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3f1322966e593602
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=baCO6QASQsu16i3fTmemANkcqOYKsLU1sfzm51GGTjz90H2LQePtdPKVMy0OZlk0oLsfYCFH/GnHyTRKzWuFYASkMwHdGupQXLMrTOw77JNe0BVuRetWZQ/Ic453WglOu3WZZl8WTXiHsmuGJctyUe8C6jLS3KPqcnLGGsPdqToCYhg1YS5D7FqVwU+K2OGX5zy9jKfypqHrxFIj03jdaniNf2N6lqloY8HHvkMWLiJacwF4esxjY1NFZCDn6uxKaFEMK+xqgkidlLtZnHdhpXqKlDvQ+T4bfrYtPGNM+z6UdLujhUZoGjxOxksKqqDo5QWrNjVzVt2B6CUHTnI8hg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qQRXNdeNhZJ+HP9AUpIowYt4o9AC864aJmcwQyxp1Aw=;
 b=QkCnZpJBIWhQJR59fJQbAHIEGSZx2RFO4uVFgQwoxFz/Rm1s3GDZhroSxoblgQz41x5YodxFTtaudiKNB0ntaZnKboueZrFZbeuAUE9cyrWUUfdm8ue1Yj3r3Zwqb1culP4aFb6LCTed3lIlNACuKJKHhoE7s/PD+4W8bcDB2gUQFY0mywFTijItYaXHbOpnRUl5PPv8o5rTxhRwf/XEflmxS3+Fjywsd2gg1JIY9ggMxzm0vYEkftAoMqFW+REGGfVO+jCB+rmG+B7Y6cZOvo2vo6jGy5eljqkqLdJwbAOfuWvBVggRvvpdRwpsieEKJWLAulkSu0BPqUdFxlyY2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Index:
 AQHWohbM+uoR14U9oEmwJFjy+kmnIqmW9/2AgABK1ACAABYOAIAAQVGAgADYZ4CAAITigIAI1nwAgAALgoA=
Date: Wed, 21 Oct 2020 09:44:52 +0000
Message-ID: <DFD23AC6-9F7A-4591-96B3-29F2A04718FF@arm.com>
References:
 <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
 <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com>
 <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
 <alpine.DEB.2.21.2010141350170.10386@sstabellini-ThinkPad-T480s>
 <C07DA84A-6527-4480-99CC-F6B26553E3FE@arm.com>
 <alpine.DEB.2.21.2010151104200.10386@sstabellini-ThinkPad-T480s>
 <a418ff3a-0476-203c-d3d8-add3706eea14@xen.org>
In-Reply-To: <a418ff3a-0476-203c-d3d8-add3706eea14@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 418578cf-11c5-4ecd-9b3a-08d875a5fa72
x-ms-traffictypediagnostic: DB6PR0802MB2566:|HE1PR0802MB2489:
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB2489E0DBAD010C95712D187B9D1C0@HE1PR0802MB2489.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 zkAjFfM/YV8+NG9Ur8oiFKjHlBxRlC5gqAQRlCx4jPn0K48BcQROwHMhtmO0CA3NuqPdsT9Hxi/IGOLuhwlNSMc8JSVXgoRuLxSZ5D2FOwNPJTpWHiEetA9gYqY7uXA3KdmUeLjwrNcropZ+KhYUj+eqAS+8pO3YOztAneXrs9ogf7vNUsLxPePOYJDRSfLdf16nqmZuxYYl+XHaf2vXRhFT0wPH8x76ROfHlMWaF395LMcLFqKfL3vperhE6UXMGqEKZgvDCgKMq4T8e06Iq3TPnistdkoeJJBRqdjR+kKczHI9IpLZAjYyPkndittQIy1m5EkNRlpuYq1leQB/uw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(366004)(136003)(376002)(346002)(86362001)(33656002)(5660300002)(186003)(8936002)(6512007)(64756008)(26005)(76116006)(91956017)(66476007)(36756003)(66556008)(6916009)(66946007)(316002)(6486002)(53546011)(6506007)(66446008)(2906002)(8676002)(4326008)(54906003)(478600001)(83380400001)(2616005)(71200400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 MltQyhl9tb9m5p0eQgKcrApUYgUxu3f1/OVeNSC8L1Vxz3bXHsvzi6v3p9ZOqu7T3TO/fZlcFNzsP2slCn4uc89mCjpR4vVZbqF53CK9vuwE/kII1Z2ObnIg9tUA3PgUMMMTb0ZkpSjM+6H7DhyXZRuD2Ekm3d7v44tXSfVNnQejFj/XlD5EYWGbHQduEBsewMI2337P24KHw3KAJcWM9lKCFGg54TEVVsmuvCyQTNtOJ5t8elrFhLuNSzKgX2HsprXsAy9JdK/EE7xNyWLTXYxpaD9QA+/pFiDQ7+qyrYliNppZUtH5/4gS9gIfpVPPToqoO16TOlAfQRNaXLeclGnwU8aoFwjvFl16+3LvS8SJaZW5j0vRAEJYnnhnMkgEombnfqEBYtu6ZIQa0TQpG2txNdimw3JjdBnpyNmH9HSN4Jh+PzvI5tK5h4Czb+E3LlEMNPKHH9Anehs6E7AOP+rHfD8Yj0Y1p+Ch9KpXW9fszWtlGe/NxmWT7LbCUvI5XD0Z1pnb1g6RylSQKN+nago/fsoXwOGmuDy/qAnTZoP+yIfPbg3BDRmm2U6UYtpOnQZcluYHPOPVQ9ahEGqBZJx0/xITrFS2WsFa1QTHC/kRxvuvrAJgHx7WVLNgJ75kPFnceUls8He5QR2BLEjmcw==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <372C236FB0E258498F6923893038A749@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2566
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	41348e52-dbf7-4366-5146-08d875a5f5a9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Hi22ZvTr/fOy9pp+Pz6tapIZQB4/I+RMozZVOwbiPY58Mm98+H/62QCATeqO/+Ba9BkfboNxY5Ohv+DLm2W9vThiB/imsgK80DBgrzRYn6JvNTZTSyde5KFSW1R4FbOLvXpuc86xZUzdvi9uq63Pr1CUNGz9Hepzn4vh80SxhCvC9fKHyeXZ8iD/YHiJ/kTezgWhnRP67W3EXGt4SISTbl1QwTq95WHS4DQGUky+mzEyX2lLyn30DIVo7BszbmJKoCCEohT1DE9K7cmr2DqmoR+DLMtpAAUNPKxZE/2yvi0kSe9Ht5JU1ny8APRxh/BKYOadCxDOaXx/pTpkqMiyEceVtIcvPV4rVvxXejAOTxzi3BMox1mGatnCws/vD//I3+Hb8yLqC8dcgdpUGfsa7A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(396003)(346002)(376002)(46966005)(8676002)(26005)(356005)(6862004)(2616005)(83380400001)(6486002)(186003)(82310400003)(6512007)(107886003)(54906003)(336012)(86362001)(316002)(36756003)(70206006)(478600001)(8936002)(4326008)(70586007)(33656002)(2906002)(53546011)(6506007)(81166007)(47076004)(82740400003)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Oct 2020 09:45:00.4723
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 418578cf-11c5-4ecd-9b3a-08d875a5fa72
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2489

Hi,

> On 21 Oct 2020, at 10:03, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 15/10/2020 19:05, Stefano Stabellini wrote:
>> On Thu, 15 Oct 2020, Bertrand Marquis wrote:
>>>> On 14 Oct 2020, at 22:15, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>> 
>>>> On Wed, 14 Oct 2020, Julien Grall wrote:
>>>>> On 14/10/2020 17:03, Bertrand Marquis wrote:
>>>>>>> On 14 Oct 2020, at 12:35, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>>>>> 
>>>>>>> On 14/10/2020 11:41, Bertrand Marquis wrote:
>>>>>>>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>>>>>>>> not implementing the workaround for it could deadlock the system.
>>>>>>>> Add a warning during boot informing the user that only trusted guests
>>>>>>>> should be executed on the system.
>>>>>>>> An equivalent warning is already given to the user by KVM on cores
>>>>>>>> affected by this errata.
>>>>>>>> 
>>>>>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>>>>>> ---
>>>>>>>> xen/arch/arm/cpuerrata.c | 21 +++++++++++++++++++++
>>>>>>>> 1 file changed, 21 insertions(+)
>>>>>>>> 
>>>>>>>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>>>>>>>> index 6c09017515..8f9ab6dde1 100644
>>>>>>>> --- a/xen/arch/arm/cpuerrata.c
>>>>>>>> +++ b/xen/arch/arm/cpuerrata.c
>>>>>>>> @@ -240,6 +240,26 @@ static int enable_ic_inv_hardening(void *data)
>>>>>>>> 
>>>>>>>> #endif
>>>>>>>> 
>>>>>>>> +#ifdef CONFIG_ARM64_ERRATUM_832075
>>>>>>>> +
>>>>>>>> +static int warn_device_load_acquire_errata(void *data)
>>>>>>>> +{
>>>>>>>> +    static bool warned = false;
>>>>>>>> +
>>>>>>>> +    if ( !warned )
>>>>>>>> +    {
>>>>>>>> +        warning_add("This CPU is affected by the errata 832075.\n"
>>>>>>>> +                    "Guests without required CPU erratum workarounds\n"
>>>>>>>> +                    "can deadlock the system!\n"
>>>>>>>> +                    "Only trusted guests should be used on this
>>>>>>>> system.\n");
>>>>>>>> +        warned = true;
>>>>>>> 
>>>>>>> This is an antipattern, which probably wants fixing elsewhere as well.
>>>>>>> 
>>>>>>> warning_add() is __init.  It's not legitimate to call from a non-init
>>>>>>> function, and a less useless build system would have modpost to object.
>>>>>>> 
>>>>>>> The ARM_SMCCC_ARCH_WORKAROUND_1 instance asserts based on system state,
>>>>>>> but this provides no safety at all.
>>>>>>> 
>>>>>>> 
>>>>>>> What warning_add() actually does is queue messages for some point near
>>>>>>> the end of boot.  It's not clear that this is even a clever thing to do.
>>>>>>> 
>>>>>>> I'm very tempted to suggest a blanket change to printk_once().
>>>>>> 
>>>>>> If this is needed then this could be done in an other serie ?
>>>>> 
>>>>> The callback ->enable() will be called when a CPU is onlined/offlined. So this
>>>>> is going to require if you plan to support CPU hotplugs or suspend resume.
>>>>> 
>>>>>> Would be good to keep this patch as purely handling the errata.
>>>> 
>>>> My preference would be to keep this patch small with just the errata,
>>>> maybe using a simple printk_once as Andrew and Julien discussed.
>>>> 
>>>> There is another instance of warning_add potentially being called
>>>> outside __init in xen/arch/arm/cpuerrata.c:
>>>> enable_smccc_arch_workaround_1. So if you are up for it, it would be
>>>> good to produce a patch to fix that too.
>>>> 
>>>> 
>>>>> In the case of this patch, how about moving the warning_add() in
>>>>> enable_errata_workarounds()?
>>>>> 
>>>>> By then we should now all the errata present on your platform. All CPUs
>>>>> onlined afterwards (i.e. runtime) should always abide to the set discover
>>>>> during boot.
>>>> 
>>>> If I understand your suggestion correctly, it would work for
>>>> warn_device_load_acquire_errata, because it is just a warning, but it
>>>> would not work for enable_smccc_arch_workaround_1, because there is
>>>> actually a call to be made there.
>>>> 
>>>> Maybe it would be simpler to use printk_once in both cases? I don't have
>>>> a strong preference either way.
>>> 
>>> I could do the following (in a serie of 2 patches):
>>> - modify enable_smccc_arch_workaround_1 to use printk_once with a
>>>   prefix/suffix “****” on each line printed (and maybe adapting print to fit a
>>>   line length of 80)
>>> - modify my patch to do the print in enable_errata_workarounds using also
>>>   the prefix/suffix and printk_once
>>> 
>>> Please confirm that this strategy would fit everyone.
>> I think it is OK but if you are going to use printk_once in your patch
>> you might as well leave it in the .enable implementation.
>> Julien, what do you think?
> 
> Bertrand reminded me today that I forgot to answer the e-mail (sorry). I am happy with using printk_once().

Shall i also keep the .enable implementation ?
At the end having:
 if ( cpus_have_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) )
in enable_errata_workarounds is quite clean.

> 
> I am also wondering if we should also taint the hypervisor (via add_taint()). This would be helpful if someone reports error on a Xen running on such platform.

Good idea yes.
I will add that and removing the core from the security supported ones to my patch.

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 09:47:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 09:47:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9916.26187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAhw-0000tF-Ee; Wed, 21 Oct 2020 09:47:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9916.26187; Wed, 21 Oct 2020 09:47:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAhw-0000t8-BI; Wed, 21 Oct 2020 09:47:00 +0000
Received: by outflank-mailman (input) for mailman id 9916;
 Wed, 21 Oct 2020 09:46:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=e6jT=D4=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVAhv-0000ss-Cj
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:46:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f722b5ae-1ec8-4871-bf55-4dfc958c245f;
 Wed, 21 Oct 2020 09:46:53 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVAhp-0001Wn-Ik; Wed, 21 Oct 2020 09:46:53 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVAhp-0005sC-B3; Wed, 21 Oct 2020 09:46:53 +0000
X-Inumbo-ID: f722b5ae-1ec8-4871-bf55-4dfc958c245f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JsbnLHAFf1gZLARWwZWUMurki5jcA5fTYKzle46cHIE=; b=cGnxzuLVOYsGL+RGm8Oo8BPrXE
	r+Vu76esM5wefKOPFrX4F4INnVwL36ecdFna6agX63p2lvoZIU5/vbmoo46UH0xbUls8fIgi1dNn4
	DWW9j5Nw/cYwUTVawqvKlXsrIOLmWkQ9Fiijy4odmm7zZ5Ae4Gcmm78d2tYhO6aQeEZE=;
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
 <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com>
 <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
 <alpine.DEB.2.21.2010141350170.10386@sstabellini-ThinkPad-T480s>
 <C07DA84A-6527-4480-99CC-F6B26553E3FE@arm.com>
 <alpine.DEB.2.21.2010151104200.10386@sstabellini-ThinkPad-T480s>
 <a418ff3a-0476-203c-d3d8-add3706eea14@xen.org>
 <DFD23AC6-9F7A-4591-96B3-29F2A04718FF@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1cd355a4-9907-1596-44d3-d524618e4a35@xen.org>
Date: Wed, 21 Oct 2020 10:46:51 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <DFD23AC6-9F7A-4591-96B3-29F2A04718FF@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 21/10/2020 10:44, Bertrand Marquis wrote:
>> Bertrand reminded me today that I forgot to answer the e-mail (sorry). I am happy with using printk_once().
> 
> Shall i also keep the .enable implementation ?
> At the end having:
>   if ( cpus_have_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) )
> in enable_errata_workarounds is quite clean.

You can pick the one you prefer :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 09:53:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 09:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9921.26199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAo0-0001pz-4t; Wed, 21 Oct 2020 09:53:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9921.26199; Wed, 21 Oct 2020 09:53:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAo0-0001ps-1U; Wed, 21 Oct 2020 09:53:16 +0000
Received: by outflank-mailman (input) for mailman id 9921;
 Wed, 21 Oct 2020 09:53:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xrJA=D4=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kVAny-0001mo-Nc
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:53:14 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.68]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c6e2c40-8933-4e1e-ae05-e9749d41174c;
 Wed, 21 Oct 2020 09:53:13 +0000 (UTC)
Received: from AM6P194CA0104.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:8f::45)
 by AM6PR08MB3685.eurprd08.prod.outlook.com (2603:10a6:20b:6f::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Wed, 21 Oct
 2020 09:53:11 +0000
Received: from VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8f:cafe::34) by AM6P194CA0104.outlook.office365.com
 (2603:10a6:209:8f::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Wed, 21 Oct 2020 09:53:11 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT018.mail.protection.outlook.com (10.152.18.135) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Wed, 21 Oct 2020 09:53:11 +0000
Received: ("Tessian outbound a64c3afb6fc9:v64");
 Wed, 21 Oct 2020 09:53:10 +0000
Received: from f470a11f0bf3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DD7DF5E9-F3B4-4517-A8DC-EF9019622224.1; 
 Wed, 21 Oct 2020 09:52:33 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f470a11f0bf3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 21 Oct 2020 09:52:33 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5372.eurprd08.prod.outlook.com (2603:10a6:10:f9::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.22; Wed, 21 Oct
 2020 09:52:32 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Wed, 21 Oct 2020
 09:52:31 +0000
X-Inumbo-ID: 0c6e2c40-8933-4e1e-ae05-e9749d41174c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v25lfyaUstm8PThpObUVYD9Bmh+bNjHIuMIjN7EsQ2U=;
 b=3xP7moyYaJJFQ3WIjFC6OHZCODUF5EZ4Gsy1ZK2e9okMyhB11IIeq53EJDw3jlxf3cNt9vyC54i6TQFvUO2KL7zOtG6Qom+Esl0HMxMDyvJg5eTRmDlBlaSCKhUWYwQf35ypmyz4YWIFfwmqWzQPqlYSAnxABAiBb1cb0D+O5/I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1bb1e6501042aa78
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cnIujTW/NzpxkdvaRLYqI+yjtPHxaj85CGXu67aZWPfKtaI0OMhLuqdODC9C1k2s95YoLkfrA+QcG3ijn7SfiRyh7/J6wE8XDoGkffadD/XPaxTqvEN4bG4OqMDAIZWepzD9OOUCoBlkCiAeAzakDD/fP7RLMJw/lG8t+zQ+Sqae3RO3pkcL7VPyR/Lbg1yexsG1e96U0GWSe6IoGopa4hX6X4OkrvAroe21geH7FMwZQdh6cv8Kyu2gfzoGlD7825WH4rQhEyZQAN44xZ04TPVU2z/IGp0EwF6dwilOptiiZ145tTVV/pyDuHAp1RMTmGam9oO5QdtnR3edzalGZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v25lfyaUstm8PThpObUVYD9Bmh+bNjHIuMIjN7EsQ2U=;
 b=jYcg2yvMfXDCeo/oYxzG5s8aGiF9Bw4h8QKU+Js/HDNTtoNIQBlbVnHmAgRuGpdew0b7oFjXL/PZITNneXB1Fz/dJZr8PWQcG0a2zOn4d5aQMrE9Rcn3Qg5R91vOS8kMxRjHl6DpR6ho9D85nWGEVypyNFfSG7yL9Rj+YQQIui4F1OjD7v1X0xR1XunmwECB2Z3B5wx2fgiF812Tfc5cBOyrM+zbJD6RMowaI5y2lw9plKoWue1XMNE3CIFqmt2OfZ10L6DEyEQwdWqzeT6Hz0Yg3eu4leBEC89O6dZXvK+dmPbhQtw4e8Ldp+QcFQD1m0trRc2g5KvngmBAch1r+Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH] xen/arm: Warn user on cpu errata 832075
Thread-Index:
 AQHWohbM+uoR14U9oEmwJFjy+kmnIqmW9/2AgABK1ACAABYOAIAAQVGAgADYZ4CAAITigIAI1nwAgAALgoCAAACPgIAAAZWA
Date: Wed, 21 Oct 2020 09:52:31 +0000
Message-ID: <3603E591-0936-4D2B-B310-8236AD4DCD7E@arm.com>
References:
 <f11fe960a111530501fd0c20893bec4e32edf3cb.1602671985.git.bertrand.marquis@arm.com>
 <26742825-25fc-0f82-2b20-d536e8380b2a@citrix.com>
 <90BC5355-EB52-469F-B0A6-ACAAB9AD9EF5@arm.com>
 <f49d478f-4efe-955e-c378-f2fa5fbc6a71@xen.org>
 <alpine.DEB.2.21.2010141350170.10386@sstabellini-ThinkPad-T480s>
 <C07DA84A-6527-4480-99CC-F6B26553E3FE@arm.com>
 <alpine.DEB.2.21.2010151104200.10386@sstabellini-ThinkPad-T480s>
 <a418ff3a-0476-203c-d3d8-add3706eea14@xen.org>
 <DFD23AC6-9F7A-4591-96B3-29F2A04718FF@arm.com>
 <1cd355a4-9907-1596-44d3-d524618e4a35@xen.org>
In-Reply-To: <1cd355a4-9907-1596-44d3-d524618e4a35@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 76d936e2-955f-476e-7045-08d875a71efb
x-ms-traffictypediagnostic: DB8PR08MB5372:|AM6PR08MB3685:
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB3685BE7F7493054C478FF8E09D1C0@AM6PR08MB3685.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4502;OLM:4502;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 yLOGIuIqMhjzzlvdovMbJnjGle4XgRV+S0y/Dglkvqu4jS/g2vpVK1vi5YSQ/w/1Jwpmv7M2oHvmevlYp9hC40L2rhWtxPi2y0R5Q40aMmuTjuZrvCvtzSvvZBkRlEmjteQiQG3eJaQ0IfnsLDFWn+vV1F5qhKTWsO2XdCvU3UMHM9D3fhZC13JdRwFgZ9UQIf8CSavuJ41coc7vRBWZru66ddMgjxkqrjJhQxoD5AZ6MHsCla6yuUo07bdWDzU4iM9oyP5GNxRPQM7fN/25BIuR3Pt0e4ptzr0dut2QBg3ZADLW6RG0WyT7f1HelaVm73sQQurn39t2sfMWJ/Ej6A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(376002)(346002)(396003)(366004)(71200400001)(66556008)(53546011)(66946007)(66446008)(54906003)(2906002)(64756008)(4744005)(36756003)(2616005)(76116006)(91956017)(4326008)(5660300002)(6512007)(66476007)(316002)(6506007)(186003)(478600001)(86362001)(6916009)(8936002)(8676002)(26005)(6486002)(33656002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 Hp3kml/o6Ad+rIgVNJiGM/GU41+z0Zi+sUNQpZnywRVi6vAxzFhSqu4st3KJ+byNL6jEAH7X+lDuLivs5z1x/Ig7iLPzLQz/rO9hOJ63EWam4yo2Ph/LnXzeFffWYMaUHikwQkIE+r4BShHpuNKuRlYdrLKe923ourkHz/+Aofa6WeS6b+hB72ofszJWqn0F0TiHFKfFpE9q8D/WGvrTo5bbBFdjKbrx2qIuH5Dqlf9IeSicQB8zYJaiuenL7vrKnnyiwSxr7dDKMG7y10V8oKWunauPeLmwVi4mEyNLTZfXulxIX5nytBbo/9f359XGRmFKVYf3aF+QD+d5aNJBbpO7GJiZt0xSiCqzSGP1Cl+KbnGdIrnoiccgakl85MunmsH91zuokqXwNh2r7eLdPxa6Qn4mV0iN4v06IH9vN+o9sV1w5xs1QGbx1/CgDf2DkMKh9kWYQJDy1Jnt5Uh/4VBGJfjKIQqTieEZ+OZdvd3dB9BFNsTZPOTuhXEUNiy8tUhWppq5iv/fXE94Bcx+MSfYQBZ5JdUsITZeATBn6rVHZnHSmKqSM6ocayvzcwjeEh+Qs0d06YLhEqaa7o/qvZfZJDNrEGvaEsToQgBEl6IV0srMVdF6+Uvy3LfNnzRdvO7p1w2OzLZKQRm7eAt/vg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2802934B64B8434B94F013BDE81D3063@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5372
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ad7c8607-a346-4c4f-64c9-08d875a7078c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SY1UpT6rjNdnPVy1m6Ljau/N2+78CgRXktLor6JyOXPD8oMgPNiQlPXUlO5P3tovhC0PCwfEbdLDKLvm1AcpBQaQRbN/xWJfnHfRhIK8s/qV/r5UgivxsDtHUTojsLWmXOArA850z1W+AHNCZ5Gzm7N8ha12FviIzShvL3TLLHD3voIOdn+1VSwsaqYMrbTYC/8JNvW+cMb8h0usmYtq4nHGBpeSurgzCwjKZs4wGZX5v92aRqnTGdym84oPHGwQjnz9KrWAfB2vOxc7OmUpxEDSSOi9/Z6ET7OgmLzf+9gC9hkRGpzIeYWwEsPB0Hf1C1WIjr5XLrqplF/bZV2O7mC45F3w6mmduPYB3VK6gwIoVyPXdRP5cfubbJa6JeUMIGaZtwhoa31CMPu4KvDaBQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(376002)(396003)(346002)(46966005)(8676002)(6486002)(6512007)(2906002)(107886003)(70206006)(4744005)(2616005)(36756003)(81166007)(356005)(82740400003)(5660300002)(82310400003)(70586007)(336012)(186003)(26005)(36906005)(316002)(54906003)(478600001)(47076004)(6506007)(86362001)(33656002)(6862004)(8936002)(4326008)(53546011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Oct 2020 09:53:11.1208
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 76d936e2-955f-476e-7045-08d875a71efb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3685



> On 21 Oct 2020, at 10:46, Julien Grall <julien@xen.org> wrote:
>=20
> Hi Bertrand,
>=20
> On 21/10/2020 10:44, Bertrand Marquis wrote:
>>> Bertrand reminded me today that I forgot to answer the e-mail (sorry). =
I am happy with using printk_once().
>> Shall I also keep the .enable implementation?
>> In the end, having:
>>  if ( cpus_have_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) )
>> in enable_errata_workarounds is quite clean.
>
> You can pick the one you prefer :).
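(For the archive, a rough sketch of the cpus_have_cap() variant being discussed — identifiers taken from the thread, not from the actual patch:)

```c
/* Sketch only: print the warning once, after all errata workarounds
 * have been enabled, instead of doing it from the capability's
 * .enable callback. */
if ( cpus_have_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) )
    printk_once(XENLOG_WARNING
                "Enabled workaround for device load-acquire erratum\n");
```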

Thanks, I will push a v2 shortly.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 09:58:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 09:58:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9937.26211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAsz-00023z-Tg; Wed, 21 Oct 2020 09:58:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9937.26211; Wed, 21 Oct 2020 09:58:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAsz-00023s-Pf; Wed, 21 Oct 2020 09:58:25 +0000
Received: by outflank-mailman (input) for mailman id 9937;
 Wed, 21 Oct 2020 09:58:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVAsy-00023n-Hp
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:58:24 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59e28261-7df4-4bfd-a099-de01a6f872b6;
 Wed, 21 Oct 2020 09:58:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603274303;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=O29350yNgiHYjXb3qrigNg8xxVDACGF1x6pzlHVVpQk=;
  b=N66B1oAgd7QE18kSD88oqA3I+5H4C33+scrZBAAkSG/Dmn//BZppbhVc
   V8dYWrSYZr2DaDp1JymVSnt9ZXybQVCDRhlh0rcZNGm6+Q9rNvYLYYs2i
   O7ONL1jHyWo86ls3KnGY9EL1otO/jIKtv8ljKU6C1G/mUq1EJiwFlIeg5
   s=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: WJ9qrBzI4wb5PDRnwJ8d6vwtT4H+i5G4WN1cj774c14rWpnFEGwNM3brmK9We5kd6PZu8QJSZQ
 Gn2eAJTDSYrsPq+U/nHwd9woMJaYgTmm6qqQ1PuSOqQcspURX0C242MOSbRpGEAbo3c+CaWQpb
 oniLk1HEGkq9UqR8t5gaD3x1ymm+mOrbY6437GV2UOCnfLAMZ2/F7wrOMtg1VLQAikHslL0rEZ
 KTvSunuW+/cbw7J+cQ6xH4icz8kSQamyDyBUGtkKC9Zjc7FqJWEaL2CV+DT5iuNxppYJRnq5wh
 KHs=
X-SBRS: 2.5
X-MesageID: 29468608
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29468608"
Date: Wed, 21 Oct 2020 11:58:09 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	<intel-gfx@lists.freedesktop.org>, xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: i915 dma faults on Xen
Message-ID: <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
 <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Fri, Oct 16, 2020 at 12:23:22PM -0400, Jason Andryuk wrote:
> On Thu, Oct 15, 2020 at 11:16 AM Jason Andryuk <jandryuk@gmail.com> wrote:
> >
> > On Thu, Oct 15, 2020 at 7:31 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> > >
> > > On Wed, Oct 14, 2020 at 08:37:06PM +0100, Andrew Cooper wrote:
> > > > On 14/10/2020 20:28, Jason Andryuk wrote:
> > > > > Hi,
> > > > >
> > > > > Bug opened at https://gitlab.freedesktop.org/drm/intel/-/issues/2576
> > > > >
> > > > > I'm seeing DMA faults for the i915 graphics hardware on a Dell
> > > > > Latitude 5500. These were captured when I plugged into a Dell
> > > > > Thunderbolt dock with two DisplayPort monitors attached.  Xen 4.12.4
> > > > > staging and Linux 5.4.70 (and some earlier versions).
> > > > >
> > > > > Oct 14 18:41:49.056490 kernel:[   85.570347] [drm:gen8_de_irq_handler
> > > > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > > > Oct 14 18:41:49.056494 kernel:[   85.570395] [drm:gen8_de_irq_handler
> > > > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > > > Oct 14 18:41:49.056589 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > > > > Request device [0000:00:02.0] fault addr 39b5845000, iommu reg =
> > > > > ffff82c00021d000
> > > > > Oct 14 18:41:49.056594 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > > > > PTE Read access is not set
> > > > > Oct 14 18:41:49.056784 kernel:[   85.570668] [drm:gen8_de_irq_handler
> > > > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > > > Oct 14 18:41:49.056789 kernel:[   85.570687] [drm:gen8_de_irq_handler
> > > > > [i915]] *ERROR* Fault errors on pipe A: 0x00000080
> > > > > Oct 14 18:41:49.056885 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > > > > Request device [0000:00:02.0] fault addr 4238d0a000, iommu reg =
> > > > > ffff82c00021d000
> > > > > Oct 14 18:41:49.056890 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > > > > PTE Read access is not set
> > > > >
> > > > > They repeat. In the log attached to
> > > > > https://gitlab.freedesktop.org/drm/intel/-/issues/2576, they start at
> > > > > "Oct 14 18:41:49.056589" and continue until I unplug the dock around
> > > > > "Oct 14 18:41:54.801802".
> > > > >
> > > > > I've also seen similar messages when attaching the laptop's HDMI port
> > > > > to a 4k monitor. The eDP display by itself seems okay.
> > > > >
> > > > > I tried Fedora 31 & 32 live images with intel_iommu=on, so no Xen, and
> > > > > didn't see any errors.
> > > > >
> > > > > This is a kernel & xen log with drm.debug=0x1e. It also includes some
> > > > > application (glass) logging when it changes resolutions which seems to
> > > > > set off the DMA faults. 5500-igfx-messages-kern-xen-glass
> > > > >
> > > > > Running xen with iommu=no-igfx disables the iommu for the i915
> > > > > graphics and no faults are reported. However, that breaks some other
> > > > > devices (Dell Latitude 7200 and 5580) giving a black screen with:
> > > > >
> > > > > Oct 10 13:24:37.022117 kernel:[   14.884759] i915 0000:00:02.0: Failed
> > > > > to idle engines, declaring wedged!
> > > > > Oct 10 13:24:37.022118 kernel:[   14.964794] i915 0000:00:02.0: Failed
> > > > > to initialize GPU, declaring it wedged!
> > > > >
> > > > > Any suggestions welcome.
> > > >
> > > > Presumably this is with a PV dom0.  What are 39b5845000 and 4238d0a000
> > > > in the machine memory map?
> >
> > They are bogus?
> > End of RAM is 0x47c800000
> > That's:
> > 0x047c800000
> > vs.
> > 0x39b5845000
> > 0x4238d0a000
> >
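Indeed — spelled out as a quick check, using the values quoted above (a hypothetical snippet, just to make the comparison explicit):

```python
# End of RAM and faulting addresses as reported in this thread.
END_OF_RAM = 0x47c800000
faults = [0x39b5845000, 0x4238d0a000]
# Both faulting addresses lie well past the end of RAM.
print(all(addr > END_OF_RAM for addr in faults))  # True
```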
> > > > This smells like a missing RMRR in the ACPI tables.
> 
> The RMRRs are:
> (XEN) [VT-D]Host address width 39
> (XEN) [VT-D]found ACPI_DMAR_DRHD:
> (XEN) [VT-D]  dmaru->address = fed90000
> (XEN) [VT-D]drhd->address = fed90000 iommu->reg = ffff82c00021d000
> (XEN) [VT-D]cap = 1c0000c40660462 ecap = 19e2ff0505e
> (XEN) [VT-D] endpoint: 0000:00:02.0
> (XEN) [VT-D]found ACPI_DMAR_DRHD:
> (XEN) [VT-D]  dmaru->address = fed91000
> (XEN) [VT-D]drhd->address = fed91000 iommu->reg = ffff82c00021f000
> (XEN) [VT-D]cap = d2008c40660462 ecap = f050da
> (XEN) [VT-D] IOAPIC: 0000:00:1e.7
> (XEN) [VT-D] MSI HPET: 0000:00:1e.6
> (XEN) [VT-D]  flags: INCLUDE_ALL
> (XEN) [VT-D]found ACPI_DMAR_RMRR:
> (XEN) [VT-D] endpoint: 0000:00:14.0
> (XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 78863000 end_addr 78882fff
> (XEN) [VT-D]found ACPI_DMAR_RMRR:
> (XEN) [VT-D] endpoint: 0000:00:02.0
> (XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 7d000000 end_addr 7f7fffff
> (XEN) [VT-D]found ACPI_DMAR_RMRR:
> (XEN) [VT-D] endpoint: 0000:00:16.7
> (XEN) [VT-D]dmar.c:581:  Non-existent device (0000:00:16.7) is
> reported in RMRR (78907000, 78986fff)'s scope!
> (XEN) [VT-D]dmar.c:596:   Ignore the RMRR (78907000, 78986fff) due to

This is also part of a reserved region, so it should be added to the
IOMMU page tables anyway, regardless of this message.

> devices under its scope are not PCI discoverable!
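Concretely, checking the two ranges from the logs in this thread (the ignored RMRR against the e820 reserved entry that covers it):

```python
# Values taken from the RMRR message and the e820 map quoted in this thread.
reserved = (0x77f19000, 0x78987000)  # e820 "(reserved)" entry, end exclusive
rmrr     = (0x78907000, 0x78986fff)  # ignored 0000:00:16.7 RMRR, end inclusive
# The ignored RMRR lies entirely inside the reserved region.
print(reserved[0] <= rmrr[0] and rmrr[1] < reserved[1])  # True
```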
> 
> > > I agree.
> > >
> > > Can you paste the memory map as printed by Xen when booting, and what
> > > command line are you using to boot Xen.
> >
> > So this is OpenXT, and it's booting EFI -> xen -> tboot -> xen
> >
> > Here's the memory map:
> > (XEN) TBOOT RAM map:
> > (XEN)  0000000000000000 - 0000000000060000 (usable)
> > (XEN)  0000000000060000 - 0000000000068000 (reserved)
> > (XEN)  0000000000068000 - 000000000009e000 (usable)
> > (XEN)  000000000009e000 - 000000000009f000 (reserved)
> > (XEN)  000000000009f000 - 00000000000a0000 (usable)
> > (XEN)  00000000000a0000 - 0000000000100000 (reserved)
> > (XEN)  0000000000100000 - 0000000040000000 (usable)
> > (XEN)  0000000040000000 - 0000000040400000 (reserved)
> > (XEN)  0000000040400000 - 000000007024b000 (usable)
> > (XEN)  000000007024b000 - 000000007024c000 (ACPI NVS)
> > (XEN)  000000007024c000 - 000000007024d000 (reserved)
> > (XEN)  000000007024d000 - 0000000077f19000 (usable)
> > (XEN)  0000000077f19000 - 0000000078987000 (reserved)
> > (XEN)  0000000078987000 - 0000000078a04000 (ACPI data)
> > (XEN)  0000000078a04000 - 0000000078ea3000 (ACPI NVS)
> > (XEN)  0000000078ea3000 - 000000007acff000 (reserved)
> > (XEN)  000000007acff000 - 000000007ad00000 (usable)
> > (XEN)  000000007ad00000 - 000000007f800000 (reserved)
> > (XEN)  00000000f0000000 - 00000000f8000000 (reserved)
> > (XEN)  00000000fe000000 - 00000000fe011000 (reserved)
> > (XEN)  00000000fec00000 - 00000000fec01000 (reserved)
> > (XEN)  00000000fee00000 - 00000000fee01000 (reserved)
> > (XEN)  00000000ff000000 - 0000000100000000 (reserved)
> > (XEN)  0000000100000000 - 000000047c800000 (usable)
> > (XEN) EFI memory map:
> > (XEN)  0000000000000-000000009dfff type=7 attr=000000000000000f
> > (XEN)  000000009e000-000000009efff type=0 attr=000000000000000f
> > (XEN)  000000009f000-000000009ffff type=3 attr=000000000000000f
> > (XEN)  0000000100000-000003fffffff type=7 attr=000000000000000f
> > (XEN)  0000040000000-00000403fffff type=0 attr=000000000000000f
> > (XEN)  0000040400000-000005e359fff type=7 attr=000000000000000f
> > (XEN)  000005e35a000-000005e399fff type=4 attr=000000000000000f
> > (XEN)  000005e39a000-000006a47dfff type=7 attr=000000000000000f
> > (XEN)  000006a47e000-000006c3eefff type=2 attr=000000000000000f
> > (XEN)  000006c3ef000-000006d5eefff type=1 attr=000000000000000f
> > (XEN)  000006d5ef000-000006d86cfff type=2 attr=000000000000000f
> > (XEN)  000006d86d000-000006d978fff type=1 attr=000000000000000f
> > (XEN)  000006d979000-000006dc7afff type=4 attr=000000000000000f
> > (XEN)  000006dc7b000-000006dc98fff type=3 attr=000000000000000f
> > (XEN)  000006dc99000-000006dcc7fff type=4 attr=000000000000000f
> > (XEN)  000006dcc8000-000006dccdfff type=3 attr=000000000000000f
> > (XEN)  000006dcce000-00000701a5fff type=4 attr=000000000000000f
> > (XEN)  00000701a6000-00000701c8fff type=3 attr=000000000000000f
> > (XEN)  00000701c9000-00000701edfff type=4 attr=000000000000000f
> > (XEN)  00000701ee000-0000070204fff type=3 attr=000000000000000f
> > (XEN)  0000070205000-000007022cfff type=4 attr=000000000000000f
> > (XEN)  000007022d000-000007024afff type=3 attr=000000000000000f
> > (XEN)  000007024b000-000007024bfff type=10 attr=000000000000000f
> > (XEN)  000007024c000-000007024cfff type=6 attr=800000000000000f
> > (XEN)  000007024d000-000007024dfff type=4 attr=000000000000000f
> > (XEN)  000007024e000-0000070282fff type=3 attr=000000000000000f
> > (XEN)  0000070283000-00000702c3fff type=4 attr=000000000000000f
> > (XEN)  00000702c4000-00000702c8fff type=3 attr=000000000000000f
> > (XEN)  00000702c9000-00000702defff type=4 attr=000000000000000f
> > (XEN)  00000702df000-0000070307fff type=3 attr=000000000000000f
> > (XEN)  0000070308000-0000070317fff type=4 attr=000000000000000f
> > (XEN)  0000070318000-0000070319fff type=3 attr=000000000000000f
> > (XEN)  000007031a000-0000070331fff type=4 attr=000000000000000f
> > (XEN)  0000070332000-0000070349fff type=3 attr=000000000000000f
> > (XEN)  000007034a000-0000070356fff type=2 attr=000000000000000f
> > (XEN)  0000070357000-0000070357fff type=7 attr=000000000000000f
> > (XEN)  0000070358000-0000070358fff type=2 attr=000000000000000f
> > (XEN)  0000070359000-0000076f3efff type=4 attr=000000000000000f
> > (XEN)  0000076f3f000-00000772affff type=7 attr=000000000000000f
> > (XEN)  00000772b0000-0000077f18fff type=3 attr=000000000000000f
> > (XEN)  0000077f19000-0000078986fff type=0 attr=000000000000000f
> > (XEN)  0000078987000-0000078a03fff type=9 attr=000000000000000f
> > (XEN)  0000078a04000-0000078ea2fff type=10 attr=000000000000000f
> > (XEN)  0000078ea3000-000007ab22fff type=6 attr=800000000000000f
> > (XEN)  000007ab23000-000007acfefff type=5 attr=800000000000000f
> > (XEN)  000007acff000-000007acfffff type=4 attr=000000000000000f
> > (XEN)  0000100000000-000047c7fffff type=7 attr=000000000000000f
> > (XEN)  00000000a0000-00000000fffff type=0 attr=0000000000000000
> > (XEN)  000007ad00000-000007adfffff type=0 attr=070000000000000f
> > (XEN)  000007ae00000-000007f7fffff type=0 attr=0000000000000000
> > (XEN)  00000f0000000-00000f7ffffff type=11 attr=800000000000100d
> > (XEN)  00000fe000000-00000fe010fff type=11 attr=8000000000000001
> > (XEN)  00000fec00000-00000fec00fff type=11 attr=8000000000000001
> > (XEN)  00000fee00000-00000fee00fff type=11 attr=8000000000000001
> > (XEN)  00000ff000000-00000ffffffff type=11 attr=800000000000100d
> >
> > Command line
> > console=com1 dom0_mem=min:420M,max:420M,420M efi=no-rs,attr=uc
> > com1=115200,8n1,pci mbi-video vga=current flask=enforcing loglvl=debug
> > guest_loglvl=debug smt=0 ucode=-1 bootscrub=1
> > argo=yes,mac-permissive=1 iommu=force,igfx
> >
> > iommu=force,igfx was to force igfx back on.  I added a dmi quirk to
> > set no-igfx on this platform as a temporary workaround.

I assume setting no-igfx fixed the issue and the card works fine in
that case?

> > > Have you tried adding dom0-iommu=map-inclusive to the Xen command
> > > line?
> 
> Still seeing faults with dom0-iommu=map-inclusive.  At a different
> address this time:
> Oct 16 15:58:05.110768 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> Request device [0000:00:02.0] fault addr ea0c4f000, iommu reg = ffff

That's also past the end of RAM.

> 82c00021d000
> Oct 16 15:58:05.110774 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> PTE Read access is not set
> Oct 16 15:58:05.110777 VM hypervisor: (XEN) print_vtd_entries: iommu
> #0 dev 0000:00:02.0 gmfn ea0c4f
> Oct 16 15:58:05.110780 VM hypervisor: (XEN)     root_entry[00] = 46e129001
> Oct 16 15:58:05.110782 VM hypervisor: (XEN)     context[10] = 2_46e128001
> Oct 16 15:58:05.110785 VM hypervisor: (XEN)     l4[000] = 46e11b003
> Oct 16 15:58:05.110787 VM hypervisor: (XEN)     l3[03a] = 0
> Oct 16 15:58:05.110789 VM hypervisor: (XEN)     l3[03a] not present
> 
> In the previous posting, the two faulting addresses repeated in pairs.
> Here, only this one address is repeating.
> 
> I plugged and unplugged and a different address was repeating with a
> few other random addresses with 1 or 2 faults.  Here is uniq -c output
> of the address and count pulled from the logs:
> 0x1ce9d6b000 2007
> 0x31b50d5000 1
> 0x1ce9d6b000 882
> 0x707741000 1
> 0x1ce9d6b000 1114
> 0x20d2099000 1
> 0x1ce9d6b000 3489
> 0xeb98eb000 1
> 0x1ce9d6b000 2430
> 0xeb98eb000 1
> 0x1ce9d6b000 1300
> 0x22f20bb000 1
> 0x1ce9d6b000 269
> 0x22f20bb000 1
> 0x1ce9d6b000 5091
> 0x6c99ec9000 1
> 0x1ce9d6b000 29
> 0xeb98eb000 1
> 0x1ce9d6b000 4599
> 0x6c99ec9000 1
> 0x1ce9d6b000 1989
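(For anyone reproducing this: the counts above can be pulled out of the logs with something like the pipeline below. The log file name and exact line format are assumptions based on the excerpts in this thread, so adjust to taste.)

```shell
# Extract the faulting addresses from the Xen/journal log and count
# consecutive repeats; "hypervisor.log" is a placeholder name.
grep -o 'fault addr [0-9a-f]*' hypervisor.log \
  | awk '{print "0x" $3}' \
  | uniq -c
```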

Hm, it's hard to tell what's going on. In my limited experience with
IOMMU faults on broken systems, there's a small range that initially
triggers those, and then the device goes wonky and starts accessing a
whole load of invalid addresses.

You could try adding those manually using the rmrr Xen command line
option [0]; maybe you can figure out which range(s) are missing?

Roger.

[0] https://xenbits.xen.org/docs/unstable/misc/xen-command-line.html#rmrr
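For reference, the documented syntax is rmrr=start<-end>=[s]bdf[,...], with start/end given as page frame numbers. A purely illustrative line for the repeating fault address above (whether that frame is actually part of the missing range is exactly the open question) would be:

```
rmrr=0x1ce9d6b=0:0:02.0
```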


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 09:59:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 09:59:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9939.26223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAtu-0002AW-6j; Wed, 21 Oct 2020 09:59:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9939.26223; Wed, 21 Oct 2020 09:59:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAtu-0002AO-3a; Wed, 21 Oct 2020 09:59:22 +0000
Received: by outflank-mailman (input) for mailman id 9939;
 Wed, 21 Oct 2020 09:59:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVAts-0002AJ-SC
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 09:59:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4be9edf6-b9ee-4a95-97ff-03f0c8ffd3b7;
 Wed, 21 Oct 2020 09:59:19 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAtr-0001mv-HY; Wed, 21 Oct 2020 09:59:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAtr-0008GI-9b; Wed, 21 Oct 2020 09:59:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAtr-0002yL-99; Wed, 21 Oct 2020 09:59:19 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Qj1hOcAZrtL2sPMq+58DIGEe1NKYO0xgpbEYG4pj+I0=; b=N0jbMCM29tGp0w9gwioZwFNUEY
	Bn1LlRkT2kYO4oztTfRKy4dXfHGGr9cB166pjoTe5BaYc8I0gx8GuLtFbfMLXk3U6uWcmEOhHVQOx
	ISQTEI1I/UDSQnTnViHvfYUYXPyvn1KwJZrz6OIOgWNJ0b9fAPfnMVplK+h8dPsODhX4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156067-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 156067: all pass - PUSHED
X-Osstest-Versions-This:
    xen=3b49791e4cc2f38dd84bf331b75217adaef636e3
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 09:59:19 +0000

flight 156067 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156067/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  3b49791e4cc2f38dd84bf331b75217adaef636e3
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   155955  2020-10-18 09:19:24 Z    3 days
Testing same since   156067  2020-10-21 09:19:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0dfddb2116..3b49791e4c  3b49791e4cc2f38dd84bf331b75217adaef636e3 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:01:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:01:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9944.26237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAvt-000348-Ju; Wed, 21 Oct 2020 10:01:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9944.26237; Wed, 21 Oct 2020 10:01:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAvt-000341-H4; Wed, 21 Oct 2020 10:01:25 +0000
Received: by outflank-mailman (input) for mailman id 9944;
 Wed, 21 Oct 2020 10:01:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yt4r=D4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVAvs-00033w-BB
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:01:24 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bf65b47-03dd-4574-8a94-8a3c9c216bb0;
 Wed, 21 Oct 2020 10:01:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603274482;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=W9Jm5X8cavR0hR8jXN5zI+G2fCQCtDcK4YikXrmOcU4=;
  b=SnvkZyqKvOocgY2rlWry6frDkPtk8AD/s79aeIpBP8p1OETeUzfINACL
   8D9c1Zd2BB6fT8jQsBaE1IhDywOWq6B4rsbX0SctgK8Qnvf/GMlpAKWGY
   5krZot9kxnsyfkF3VAKXqpo8K1YhuS12yFlrQE75OPhfa0HR++LF9m+OX
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Sb3VlgnxyA0nqx1ZwDbkswQHrpvQc1EamSX4zQcGTyQH1WV0Ak7DDpNSnHP2Bppjjj6ifVN0OK
 fcGmhvU0Nc9h7hFdNz51/Ts0MpSMkeREq3gXqyIRG48UekkG8Drw1si0/qjJsZVtitQ+s4ZYI3
 7W44Tihp/hxdR61CWHsulx0mcEwZ5LoXhIoSh9k9ch61ykicTVVeiaHQqU2pfHVaYAzntK3vJ1
 I+QTnaLT/p1PRpAyTO0S1tA9S7lv0m8PW7sVpukX2ahRAd+aejBm5FFWoBJd5TfW0yNyVIEM6p
 uDU=
X-SBRS: 2.5
X-MesageID: 29796762
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29796762"
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
 <a50a19ce-321a-ceef-55e4-95ffbebff59d@suse.com>
 <c359adee-1826-032b-2d07-c06c545e3b96@citrix.com>
 <b24c21b0-607b-6add-e156-a37fcf7f2352@citrix.com>
 <9b54113c-9df2-2f44-1545-67ffe4831934@citrix.com>
 <da2d9140-c70d-33a4-a375-9615e806d7d4@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6df749cd-bb2a-edd9-e74d-4e2f658e8929@citrix.com>
Date: Wed, 21 Oct 2020 11:01:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <da2d9140-c70d-33a4-a375-9615e806d7d4@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 21/10/2020 07:55, Jan Beulich wrote:
> On 20.10.2020 20:46, Andrew Cooper wrote:
>> On 20/10/2020 18:10, Andrew Cooper wrote:
>>> On 20/10/2020 17:20, Andrew Cooper wrote:
>>>> On 20/10/2020 16:48, Jan Beulich wrote:
>>>>> On 20.10.2020 17:24, Andrew Cooper wrote:
>>>>>> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
>>>>>> is from Xen's point of view (as the update only affects guest mappings), and
>>>>>> the guest is required to flush suitably after making updates.
>>>>>>
>>>>>> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
>>>>>> writeable pagetables, etc.) is an implementation detail outside of the
>>>>>> API/ABI.
>>>>>>
>>>>>> Changes in the paging structure require invalidations in the linear pagetable
>>>>>> range for subsequent accesses into the linear pagetables to access non-stale
>>>>>> mappings.  Xen must provide suitable flushing to prevent intermixed guest
>>>>>> actions from accidentally accessing/modifying the wrong pagetable.
>>>>>>
>>>>>> For all L2 and higher modifications, flush the full TLB.  (This could in
>>>>>> principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
>>>>>> mechanism exists in practice.)
>>>>>>
>>>>>> As this combines with sync_guest for XPTI L4 "shadowing", replace the
>>>>>> sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
>>>>>> now always needs to flush, there is no point trying to exclude the current CPU
>>>>>> from the flush mask.  Use pt_owner->dirty_cpumask directly.
>>>>> Why is there no point? There's no need for the FLUSH_ROOT_PGTBL
>>>>> part of the flushing on the local CPU. The draft you had sent
>>>>> earlier looked better in this regard.
>>>> This was the area which broke.  It is to do with subtle difference in
>>>> the scope of L4 updates.
>>>>
>>>> ROOT_PGTBL needs to resync current (if in use), and be broadcasted if
>>>> other references to the pages are found.
>>>>
>>>> The TLB flush needs to be broadcast to the whole domain dirty mask, as
>>>> we can't (easily) know if the update was part of the current structure.
>>> Actually - we can know whether an L4 update needs flushing locally or
>>> not, in exactly the same way as the sync logic currently works.
>>>
>>> However, unlike the opencoded get_cpu_info()->root_pgt_changed = true,
>>> we can't just flush locally for free.
>>>
>>> This is quite awkward to express.
>> And not safe.  Flushes may accumulate from multiple levels in a batch,
>> and pt_owner may not be equal to current.
> I'm not questioning the TLB flush - this needs to be the scope you
> use (but just FLUSH_TLB as per my earlier reply). I'm questioning
> the extra ROOT_PGTBL sync (meaning changes to levels other than L4
> don't matter), which is redundant with the explicit setting right
> after the call to mod_l4_entry(). But I guess since now you need
> to issue _some_ flush_mask() for the local CPU anyway, perhaps
> it's rather the explicit setting of ->root_pgt_changed which wants
> dropping?

No.  That was the delta which delayed posting in the first place.  Dom0
crashes very early without it.

The problem, as I said, is the asymmetry.

As dom0 is booting, the "only one use of this L4" logic kicks in, and
skips setting ROOT_PGTBL, which then causes the flush_mask() not to
flush on the local CPU either.
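The accumulation being discussed (replacing the old sync_guest boolean with a set of flush_flags gathered across a batch of pagetable updates) can be sketched roughly as below. This is an illustrative sketch only: the flag values and the helper name are hypothetical, modelled on Xen's flush interface rather than copied from the tree.

```c
#include <assert.h>

/* Hypothetical flag values, modelled on Xen's FLUSH_* constants. */
#define FLUSH_TLB         (1u << 0)  /* flush stale linear-PT TLB entries */
#define FLUSH_ROOT_PGTBL  (1u << 1)  /* resync the XPTI L4 "shadow"       */

/*
 * Accumulate flush requirements for one pagetable update in a batch.
 * L2 and higher modifications need a TLB flush (stale linear pagetable
 * mappings); L4 modifications may additionally need the XPTI root
 * pagetable resynced.
 */
static unsigned int account_pt_update(unsigned int flush_flags,
                                      unsigned int level,
                                      int l4_needs_sync)
{
    if ( level >= 2 )
        flush_flags |= FLUSH_TLB;
    if ( level == 4 && l4_needs_sync )
        flush_flags |= FLUSH_ROOT_PGTBL;
    return flush_flags;
}
```

The point of the asymmetry above is that the two flags have different scopes: FLUSH_TLB must go to the whole pt_owner->dirty_cpumask, while FLUSH_ROOT_PGTBL only matters where the modified L4 is in use, so skipping the latter must not cause the former to be skipped on the local CPU.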

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:04:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:04:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9952.26253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAyt-0003J5-4O; Wed, 21 Oct 2020 10:04:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9952.26253; Wed, 21 Oct 2020 10:04:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAyt-0003Iy-0g; Wed, 21 Oct 2020 10:04:31 +0000
Received: by outflank-mailman (input) for mailman id 9952;
 Wed, 21 Oct 2020 10:04:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVAys-0003IO-2u
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:04:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6fbdae02-b61c-4b12-b0fd-cfde73e76643;
 Wed, 21 Oct 2020 10:04:24 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAyl-00020C-Ud; Wed, 21 Oct 2020 10:04:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAyl-0008P3-Jp; Wed, 21 Oct 2020 10:04:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVAyl-00067y-JK; Wed, 21 Oct 2020 10:04:23 +0000
X-Inumbo-ID: 6fbdae02-b61c-4b12-b0fd-cfde73e76643
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=16iGHKNs1KdEnpY0DThezxOZcz5ub3kCGizc96oTecI=; b=tF6Am5w2wE9c5X6kPle3LIsZnx
	hIr9yEzBC+FwglvjiL5AMOpNn7WBmnxk4MelXFL2D/7414W4D5ivvPNOeb5Pvshd0AhvJECPVvtXV
	7tnjkEA9z8zrVXg0yjwS7M9KZwFDGcSJsCaOiwuLBQ7yzLdLODkUWtPgWbKUPt322dvo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156033-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 156033: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=71da63bbb83af8c8c537f3731dda7dc2d2fd31ac
X-Osstest-Versions-That:
    xen=1719f79a0efd36d15837c51982173dd1c287dced
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 10:04:23 +0000

flight 156033 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156033/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155514
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               fail  like 155541
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 155541
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 155541
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155541
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155541
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155541
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155541
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155541
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155541
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155541
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155541
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155541
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155541
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  71da63bbb83af8c8c537f3731dda7dc2d2fd31ac
baseline version:
 xen                  1719f79a0efd36d15837c51982173dd1c287dced

Last test of basis   155541  2020-10-08 01:38:48 Z   13 days
Testing same since   156033  2020-10-20 13:35:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1719f79a0e..71da63bbb8  71da63bbb83af8c8c537f3731dda7dc2d2fd31ac -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:05:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:05:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9954.26265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAzg-0003QL-Hh; Wed, 21 Oct 2020 10:05:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9954.26265; Wed, 21 Oct 2020 10:05:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVAzg-0003QE-EM; Wed, 21 Oct 2020 10:05:20 +0000
Received: by outflank-mailman (input) for mailman id 9954;
 Wed, 21 Oct 2020 10:05:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVAzf-0003Q8-1c
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:05:19 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 42c603b6-fe54-4dec-bdd4-26211da79e55;
 Wed, 21 Oct 2020 10:05:18 +0000 (UTC)
X-Inumbo-ID: 42c603b6-fe54-4dec-bdd4-26211da79e55
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603274719;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=ux6ZeF9tliAVd3JLr0GPv/0u9LqFcQi6AbLVXu+zPmk=;
  b=R/JHWc+aXDdvODOgC/DMCbQeKhSNFj8VSUeiYu9pvEi8lJ9Rs1HSSLdf
   kWRr/s+rTQFtYRj1YQ0RwZPNPOkZkXnKAS2OSH/3rZ8OtAkjh9FDYWO/u
   D7YiUybtlK3vHXNdy1RtakJT7Nk66OLfXsVai73t9nqaP5tuNwWt4J+ba
   M=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: uOfS8XzK4ftBq9H9NxtSJMWreYyTnn31qyjAiOB+B/4aTNNRuR0wBN44Xdawg9bzIN1el9lPOi
 mU38zDvm+1rnedvw0hbSdmRgNIoKQATEPkRdMgoqlBP1F0K+oJl8bxTsg0uyO9HCrs9/HZ1b50
 IRSp5AYuwbONbWa4nkuGqatDtqL1ayI+B52sNzS/Xutx2mzWeBtLiJnxGGss0NoR0RVJS/ERN2
 N3wiBOLyDAkfxs6tLFLUxvT+XgnyofulsHmOuo4KHrcRLglAZHRLpbu3H5rF8SHDjROP33p7JY
 SnI=
X-SBRS: 2.5
X-MesageID: 29438825
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29438825"
Date: Wed, 21 Oct 2020 12:05:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH v3 2/3] x86/shadow: refactor shadow_vram_{get,put}_l1e()
Message-ID: <20201021100505.bxnlpmrqwnb5gqqx@Air-de-Roger>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
 <bc686036-18c0-ba7b-b8e5-a20b914aac68@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <bc686036-18c0-ba7b-b8e5-a20b914aac68@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Mon, Oct 19, 2020 at 10:44:31AM +0200, Jan Beulich wrote:
> By passing the functions an MFN and flags, only a single instance of
> each is needed; they were pretty large for being inline functions
> anyway.
> 
> While moving the code, also adjust coding style and add const where
> sensible / possible.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:10:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:10:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9962.26277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVB4U-0004JB-4w; Wed, 21 Oct 2020 10:10:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9962.26277; Wed, 21 Oct 2020 10:10:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVB4U-0004J4-1t; Wed, 21 Oct 2020 10:10:18 +0000
Received: by outflank-mailman (input) for mailman id 9962;
 Wed, 21 Oct 2020 10:10:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVB4S-0004Iz-6h
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:10:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6004a910-c7d0-49b7-b173-32f3cfa0e013;
 Wed, 21 Oct 2020 10:10:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 73CECB2B8;
 Wed, 21 Oct 2020 10:10:14 +0000 (UTC)
X-Inumbo-ID: 6004a910-c7d0-49b7-b173-32f3cfa0e013
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603275014;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JUflINopUiLlYqrN5sufhJ97Pnst6amZ13r1ziOvJmg=;
	b=RntKs8irZM6otmiq+F1ExDUtCKICWbEFVGqk6Sg9IkgELvmIPYTpxCk1s9KNOcFrfi2wWE
	SeZoLKc5/mskn7OkqTIa0o+ghw/mU/vyxVs21bhlo/+7wQAuwATvwkEg/1lLH7qX9pP85Q
	c8oI+OZpxlfOfk20jPjk+5/C0NcAgvw=
Subject: Re: [xen-unstable test] 156027: regressions - FAIL
To: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
References: <osstest-156027-mainreport@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Cc: osstest service owner <osstest-admin@xenproject.org>
Message-ID: <432399e9-9bd2-07d2-c182-353e0b7f21d4@suse.com>
Date: Wed, 21 Oct 2020 12:10:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <osstest-156027-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.10.2020 23:37, osstest service owner wrote:
> flight 156027 xen-unstable real [real]
> flight 156048 xen-unstable real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/156027/
> http://logs.test-lab.xenproject.org/osstest/logs/156048/

Here, as well as in the respective 4.14 and 4.13 reports, the
"retest" flights don't appear to really work, so they don't
provide any additional useful data. Ian?

> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156013

Taking this together with the respective 4.14 and 4.13 reports,
there looks to be an interaction problem with qemu: the gpa
(guest physical address) of the last NPF vCPU1 did encounter
points at 04:00.0, i.e. the (emulated) NIC. In the 4.x flights
no such information is available (on VT-x the gpa of the last
EPT violation doesn't get dumped when dumping the VMCS), but
one of them shows Dom0 in the context of
XEN_DMOP_set_pci_intx_level, again for 04:00.0.

In any event the guests in all cases experience soft lockups.

What I'm unable to do for the moment is establish any kind of
connection to the commits under test, but it's highly likely to
be one of the security fixes committed yesterday.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:27:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9972.26317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBKw-0005bc-V5; Wed, 21 Oct 2020 10:27:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9972.26317; Wed, 21 Oct 2020 10:27:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBKw-0005bV-Rw; Wed, 21 Oct 2020 10:27:18 +0000
Received: by outflank-mailman (input) for mailman id 9972;
 Wed, 21 Oct 2020 10:27:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVBKv-0005bQ-Dq
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:27:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f46a4776-59bb-4428-a178-af3e3ae58343;
 Wed, 21 Oct 2020 10:27:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6897EACD9;
 Wed, 21 Oct 2020 10:27:15 +0000 (UTC)
X-Inumbo-ID: f46a4776-59bb-4428-a178-af3e3ae58343
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603276035;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mHNd1S3yTsyFbS3ZcMZ+fi6ZJD4zjmufIMU5+ykiN00=;
	b=uSGZEhiowsQRzIF/nyZR8JsrqLvxTob37Jn/yiFNwqCdfnFeLrIyb6L78W9t72qQ4nP86p
	6mzanFnN4mldhEvBEDmlDH/kgghVT0tbTHyYry82/zFA4B73xQpw/pY3di/ae6RbYGFXNF
	7kY8flTiyMH/TtG20W6l0yIw4Q0vngc=
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201020152405.26892-1-andrew.cooper3@citrix.com>
 <a50a19ce-321a-ceef-55e4-95ffbebff59d@suse.com>
 <c359adee-1826-032b-2d07-c06c545e3b96@citrix.com>
 <b24c21b0-607b-6add-e156-a37fcf7f2352@citrix.com>
 <9b54113c-9df2-2f44-1545-67ffe4831934@citrix.com>
 <da2d9140-c70d-33a4-a375-9615e806d7d4@suse.com>
 <6df749cd-bb2a-edd9-e74d-4e2f658e8929@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eeeabfa2-df36-42d7-326a-570f4a9373f9@suse.com>
Date: Wed, 21 Oct 2020 12:27:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <6df749cd-bb2a-edd9-e74d-4e2f658e8929@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 12:01, Andrew Cooper wrote:
> On 21/10/2020 07:55, Jan Beulich wrote:
>> On 20.10.2020 20:46, Andrew Cooper wrote:
>>> On 20/10/2020 18:10, Andrew Cooper wrote:
>>>> On 20/10/2020 17:20, Andrew Cooper wrote:
>>>>> On 20/10/2020 16:48, Jan Beulich wrote:
>>>>>> On 20.10.2020 17:24, Andrew Cooper wrote:
>>>>>>> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
>>>>>>> is safe from Xen's point of view (as the update only affects guest mappings), and
>>>>>>> the guest is required to flush suitably after making updates.
>>>>>>>
>>>>>>> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
>>>>>>> writeable pagetables, etc.) is an implementation detail outside of the
>>>>>>> API/ABI.
>>>>>>>
>>>>>>> Changes in the paging structure require invalidations in the linear pagetable
>>>>>>> range for subsequent accesses into the linear pagetables to access non-stale
>>>>>>> mappings.  Xen must provide suitable flushing to prevent intermixed guest
>>>>>>> actions from accidentally accessing/modifying the wrong pagetable.
>>>>>>>
>>>>>>> For all L2 and higher modifications, flush the full TLB.  (This could in
>>>>>>> principle be an order 39 flush starting at LINEAR_PT_VIRT_START, but no such
>>>>>>> mechanism exists in practice.)
>>>>>>>
>>>>>>> As this combines with sync_guest for XPTI L4 "shadowing", replace the
>>>>>>> sync_guest boolean with flush_flags and accumulate flags.  The sync_guest case
>>>>>>> now always needs to flush, there is no point trying to exclude the current CPU
>>>>>>> from the flush mask.  Use pt_owner->dirty_cpumask directly.
>>>>>> Why is there no point? There's no need for the FLUSH_ROOT_PGTBL
>>>>>> part of the flushing on the local CPU. The draft you had sent
>>>>>> earlier looked better in this regard.
>>>>> This was the area which broke.  It is to do with a subtle difference in
>>>>> the scope of L4 updates.
>>>>>
>>>>> ROOT_PGTBL needs to resync current (if in use), and be broadcast if
>>>>> other references to the pages are found.
>>>>>
>>>>> The TLB flush needs to be broadcast to the whole domain dirty mask, as
>>>>> we can't (easily) know if the update was part of the current structure.
>>>> Actually - we can know whether an L4 update needs flushing locally or
>>>> not, in exactly the same way as the sync logic currently works.
>>>>
>>>> However, unlike the opencoded get_cpu_info()->root_pgt_changed = true,
>>>> we can't just flush locally for free.
>>>>
>>>> This is quite awkward to express.
>>> And not safe.  Flushes may accumulate from multiple levels in a batch,
>>> and pt_owner may not be equal to current.
>> I'm not questioning the TLB flush - this needs to be the scope you
>> use (but just FLUSH_TLB as per my earlier reply). I'm questioning
>> the extra ROOT_PGTBL sync (meaning changes to levels other than L4
>> don't matter), which is redundant with the explicit setting right
>> after the call to mod_l4_entry(). But I guess since now you need
>> to issue _some_ flush_mask() for the local CPU anyway, perhaps
>> it's rather the explicit setting of ->root_pgt_changed which wants
>> dropping?
> 
> No.  That was the delta which delayed posting in the first place.  Dom0
> crashes very early without it.
> 
> The problem, as I said, is the asymmetry.
> 
> As dom0 is booting, the "only one use of this L4" logic kicks in, and
> skips setting ROOT_PGTBL, which then causes the flush_mask() not to
> flush on the local CPU either.

Ah, I think I finally got it. This asymmetry wants expressing then
in two different sets of flush flags (not sure whether two different
variables are needed), since - as per other replies - the local CPU
has different requirements anyway.

- all CPUs need FLUSH_TLB
- the local CPU may additionally need FLUSH_ROOT_PGTBL
- other CPUs may additionally (or instead, if you like) need
  FLUSH_ROOT_PGTBL | FLUSH_TLB_GLOBAL

But then of course the local CPU can as well have its
->root_pgt_changed updated directly - there's no difference whether
it gets done this way or by passing FLUSH_ROOT_PGTBL to flush_local().
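The split being described can be sketched as a toy model like the one below. This is illustration only, not Xen code: the flag values and helper names (local_flush_flags(), remote_flush_flags()) are made up for the example, while the real FLUSH_* definitions and flush_mask() live in Xen's flushtlb handling.

```c
#include <assert.h>

/* Hypothetical flag values, for illustration only. */
#define FLUSH_TLB          (1u << 0)
#define FLUSH_TLB_GLOBAL   (1u << 1)
#define FLUSH_ROOT_PGTBL   (1u << 2)

/*
 * All CPUs need FLUSH_TLB; the local CPU may additionally need
 * FLUSH_ROOT_PGTBL (when the L4 in use was modified).
 */
unsigned int local_flush_flags(int root_pgt_changed_locally)
{
    unsigned int flags = FLUSH_TLB;

    if ( root_pgt_changed_locally )
        flags |= FLUSH_ROOT_PGTBL;

    return flags;
}

/*
 * Other CPUs may additionally need FLUSH_ROOT_PGTBL (to resync their
 * XPTI shadows) together with a global TLB flush.
 */
unsigned int remote_flush_flags(int l4_referenced_elsewhere)
{
    unsigned int flags = FLUSH_TLB;

    if ( l4_referenced_elsewhere )
        flags |= FLUSH_ROOT_PGTBL | FLUSH_TLB_GLOBAL;

    return flags;
}
```

The point about ->root_pgt_changed then corresponds to the observation that, for the local CPU, setting the boolean directly and passing FLUSH_ROOT_PGTBL to a local flush are interchangeable.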

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:33:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9981.26332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBQa-0006W5-Kr; Wed, 21 Oct 2020 10:33:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9981.26332; Wed, 21 Oct 2020 10:33:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBQa-0006Vy-Hg; Wed, 21 Oct 2020 10:33:08 +0000
Received: by outflank-mailman (input) for mailman id 9981;
 Wed, 21 Oct 2020 10:33:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVBQZ-0006Vo-E2
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:33:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56b049c3-6dd2-4b41-a2cf-63cef2155d93;
 Wed, 21 Oct 2020 10:33:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C24A7ACA1;
 Wed, 21 Oct 2020 10:33:05 +0000 (UTC)
X-Inumbo-ID: 56b049c3-6dd2-4b41-a2cf-63cef2155d93
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603276385;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RIz6Fh1/9dRHRoJ4gps/Px3tXLhgXeXwx/bPJoKNxXw=;
	b=uYe7eMD26aZhibgoxe8vu3VSsAzWec/Nyqeuf3ySjVcg6YN7beo2GOqFZcnMcmbErzIvPl
	Xa+KkN2uVkKtADPzoXvd1z1rslwHgb5mW+yH6wNtF9/OWEyNTowuyWZMd44/2l+cC6dD/h
	OkNrk/NAUDoY5Sr90HRYZpDQjh+LEow=
Subject: Re: i915 dma faults on Xen
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, intel-gfx@lists.freedesktop.org,
 xen-devel <xen-devel@lists.xenproject.org>
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
 <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
 <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a855e542-4e12-14e2-b663-75e2efceb937@suse.com>
Date: Wed, 21 Oct 2020 12:33:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 11:58, Roger Pau Monné wrote:
> On Fri, Oct 16, 2020 at 12:23:22PM -0400, Jason Andryuk wrote:
>> The RMRRs are:
>> (XEN) [VT-D]Host address width 39
>> (XEN) [VT-D]found ACPI_DMAR_DRHD:
>> (XEN) [VT-D]  dmaru->address = fed90000
>> (XEN) [VT-D]drhd->address = fed90000 iommu->reg = ffff82c00021d000
>> (XEN) [VT-D]cap = 1c0000c40660462 ecap = 19e2ff0505e
>> (XEN) [VT-D] endpoint: 0000:00:02.0
>> (XEN) [VT-D]found ACPI_DMAR_DRHD:
>> (XEN) [VT-D]  dmaru->address = fed91000
>> (XEN) [VT-D]drhd->address = fed91000 iommu->reg = ffff82c00021f000
>> (XEN) [VT-D]cap = d2008c40660462 ecap = f050da
>> (XEN) [VT-D] IOAPIC: 0000:00:1e.7
>> (XEN) [VT-D] MSI HPET: 0000:00:1e.6
>> (XEN) [VT-D]  flags: INCLUDE_ALL
>> (XEN) [VT-D]found ACPI_DMAR_RMRR:
>> (XEN) [VT-D] endpoint: 0000:00:14.0
>> (XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 78863000 end_addr 78882fff
>> (XEN) [VT-D]found ACPI_DMAR_RMRR:
>> (XEN) [VT-D] endpoint: 0000:00:02.0
>> (XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 7d000000 end_addr 7f7fffff
>> (XEN) [VT-D]found ACPI_DMAR_RMRR:
>> (XEN) [VT-D] endpoint: 0000:00:16.7
>> (XEN) [VT-D]dmar.c:581:  Non-existent device (0000:00:16.7) is
>> reported in RMRR (78907000, 78986fff)'s scope!
>> (XEN) [VT-D]dmar.c:596:   Ignore the RMRR (78907000, 78986fff) due to
> 
> This is also part of a reserved region, so should be added to the
> iommu page tables anyway regardless of this message.

Could you clarify why you think so? RMRRs are tied to devices, so
if a device in reality doesn't exist (and no other one uses the
same range), I don't see why an IOMMU mapping would be needed
(unless to work around some related firmware bug). Plus, as I
understand it, none of the IOMMU faults actually report this
range as having been accessed.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:39:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:39:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9995.26344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBW8-0006ma-9p; Wed, 21 Oct 2020 10:38:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9995.26344; Wed, 21 Oct 2020 10:38:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBW8-0006mT-6d; Wed, 21 Oct 2020 10:38:52 +0000
Received: by outflank-mailman (input) for mailman id 9995;
 Wed, 21 Oct 2020 10:38:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVBW6-0006mO-9T
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:38:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7628a35e-a96c-4fc0-ac80-7b6528cfb389;
 Wed, 21 Oct 2020 10:38:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 935A3B2D9;
 Wed, 21 Oct 2020 10:38:48 +0000 (UTC)
X-Inumbo-ID: 7628a35e-a96c-4fc0-ac80-7b6528cfb389
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603276728;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XgubHRqIfP0cibpTbKfPdLKpJXoVZgrOsEZpWcsiIO8=;
	b=R7jsosuKkk62/+XRaS84e0U/6Pc4tIDvC6SgwCJgQ78aFgW6eC6ZeuakNqFv0StgoWILwx
	E8PItOIN247TBrx+fyJBhypYupwxIuaXiFkXmiJ5WvOg9hUZ6NhqyPoJs19K9Eai7iFstM
	IpKZDBcceQc9kiIvEvc8EoHM5RlEr90=
Subject: Re: [PATCH] x86: XENMAPSPACE_gmfn{,_batch,_range} want to special
 case idx == gpfn
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <920fa307-190e-dc11-f338-5b44a2126050@suse.com>
 <20201021093958.e4kopykalddam7pk@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a979d21d-efed-9493-efd1-2643bddbbdd9@suse.com>
Date: Wed, 21 Oct 2020 12:38:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021093958.e4kopykalddam7pk@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 11:39, Roger Pau Monné wrote:
> On Fri, Oct 16, 2020 at 08:44:10AM +0200, Jan Beulich wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -4555,7 +4555,7 @@ int xenmem_add_to_physmap_one(
>>          if ( is_special_page(mfn_to_page(prev_mfn)) )
>>              /* Special pages are simply unhooked from this phys slot. */
>>              rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
>> -        else
>> +        else if ( !mfn_eq(mfn, prev_mfn) )
>>              /* Normal domain memory is freed, to avoid leaking memory. */
>>              rc = guest_remove_page(d, gfn_x(gpfn));
> 
> What about the access differing between the old and the new entries,
> while pointing to the same mfn, would Xen install the new entry
> successfully?

Yes - guest_physmap_add_page() doesn't get bypassed.

> Seems easier to me to use guest_physmap_remove_page in that case to
> remove the entry from the p2m without freeing the page.

Why do any removal when none is really needed? I also don't see
this fitting the "special pages" clause and comment very well.
I'd question the other way around whether guest_physmap_remove_page()
needs calling at all (the instance above; the other one of course
is needed).
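The behaviour under discussion can be modelled by a standalone sketch like the one below. The enum and the classify_prev() helper are invented for illustration; the real logic is the xenmem_add_to_physmap_one() hunk quoted earlier in the thread.

```c
#include <assert.h>
#include <stdbool.h>

/* What to do with the page previously present at the target gpfn. */
enum prev_action {
    UNHOOK_SPECIAL,  /* special page: simply unhooked from this phys slot */
    FREE_NORMAL,     /* normal domain memory: freed to avoid leaking it */
    KEEP_IN_PLACE,   /* idx == gpfn: old and new mfn match, so don't free */
};

/*
 * Toy equivalent of the patched branch: with the change, a previous
 * mapping of the *same* mfn is no longer freed before the new entry
 * gets installed via guest_physmap_add_page().
 */
enum prev_action classify_prev(bool special, unsigned long mfn,
                               unsigned long prev_mfn)
{
    if ( special )
        return UNHOOK_SPECIAL;
    if ( mfn != prev_mfn )
        return FREE_NORMAL;
    return KEEP_IN_PLACE;
}
```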

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:46:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.9998.26356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBd7-0007ip-2g; Wed, 21 Oct 2020 10:46:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 9998.26356; Wed, 21 Oct 2020 10:46:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBd6-0007ii-V8; Wed, 21 Oct 2020 10:46:04 +0000
Received: by outflank-mailman (input) for mailman id 9998;
 Wed, 21 Oct 2020 10:46:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVBd6-0007id-Af
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:46:04 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff526090-a319-40e3-b1a6-e6954c111253;
 Wed, 21 Oct 2020 10:46:03 +0000 (UTC)
X-Inumbo-ID: ff526090-a319-40e3-b1a6-e6954c111253
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603277164;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=gIEQ33dXgjTvkAJ6j0Oyq0xbArK2F/V3iU4JiWwzL+U=;
  b=E+6uRVVBQm9FmHmszj/ChnMSpOAGQIkdO/uy5uOnHj0WxGTwJ7/pKDVC
   YCTPlvo1AMxOTlCnxxD7cy8HkQCbvy2ixICyszZqnvF2NtH/b8iKIl1aq
   I3492Z06HuqjuJ1E+MswPQd1CZMlU+EbIVZF3pnSlESWOwjU4WcLDstK9
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: axAjVyzTwrXwLLDYdBja0/JJm//svAONa2OIk52CGar0CNjnvRBdcQtyn2XD3PGBbMTcQqxVBl
 fA4JFPaK1nb9Wvpv8NQnTi2JQt0QSxnJ9cTV0OBwkNGYx0B4QZVslGaOOl2Ixoguv1kYaHk4gp
 bDC6XxoFcSyds2Szblb93b1ssLNcY+UyAea15/cyt7A+gaH4ERDpuo2jomkA9rySpmjg6r0Bze
 p0xZX5mciKE2EexMZilStEh/lKu/4lWMLcFwwjLIS76UkY0EYAnztn1nQT+2nmKdW2p405dKui
 XMg=
X-SBRS: 2.5
X-MesageID: 29799057
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29799057"
Date: Wed, 21 Oct 2020 12:45:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
Subject: Re: [PATCH v3 3/3] x86/shadow: sh_{make,destroy}_monitor_table() are
 "even more" HVM-only
Message-ID: <20201021104550.zhlxcqia3cqwmyju@Air-de-Roger>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
 <cd39abe3-5a5c-6ebc-a11e-3d4ed1d74907@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cd39abe3-5a5c-6ebc-a11e-3d4ed1d74907@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Mon, Oct 19, 2020 at 10:45:00AM +0200, Jan Beulich wrote:
> With them depending on just the number of shadow levels, there's no need
> for more than one instance of them, and hence no need for any hook (IOW
> 452219e24648 ["x86/shadow: monitor table is HVM-only"] didn't go quite
> far enough). Move the functions to hvm.c while dropping the dead
> is_pv_32bit_domain() code paths.
> 
> While moving the code, replace a stale comment reference to
> sh_install_xen_entries_in_l4(). Doing so made me notice the function
> also didn't have its prototype dropped in 8d7b633adab7 ("x86/mm:
> Consolidate all Xen L4 slot writing into init_xen_l4_slots()"), which
> gets done here as well.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> v3: New.
> ---
> TBD: In principle both functions could have their first parameter
>      constified. In fact, "destroy" doesn't depend on the vCPU at all
>      and hence could be passed a struct domain *. Not sure whether such
>      an asymmetry would be acceptable.
>      In principle "make" would also not need passing of the number of
>      shadow levels (can be derived from v), which would result in yet
>      another asymmetry.

I'm not especially fussed either way - having const v would be good
IMO.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:50:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:50:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10005.26367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBhU-000084-Kf; Wed, 21 Oct 2020 10:50:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10005.26367; Wed, 21 Oct 2020 10:50:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBhU-00007x-HW; Wed, 21 Oct 2020 10:50:36 +0000
Received: by outflank-mailman (input) for mailman id 10005;
 Wed, 21 Oct 2020 10:50:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FlF9=D4=oracle.com=dan.carpenter@srs-us1.protection.inumbo.net>)
 id 1kVBhT-00007s-Lw
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:50:35 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 92612200-8ffd-42b9-b227-3b7bdcd63073;
 Wed, 21 Oct 2020 10:50:35 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09LAoH74062157;
 Wed, 21 Oct 2020 10:50:32 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2120.oracle.com with ESMTP id 34ak16g412-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 21 Oct 2020 10:50:32 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09LAo67G104855;
 Wed, 21 Oct 2020 10:50:31 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by aserp3030.oracle.com with ESMTP id 348a6p9udr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 21 Oct 2020 10:50:31 +0000
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 09LAoU9k032054;
 Wed, 21 Oct 2020 10:50:30 GMT
Received: from mwanda (/41.57.98.10) by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 21 Oct 2020 03:50:28 -0700
X-Inumbo-ID: 92612200-8ffd-42b9-b227-3b7bdcd63073
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : mime-version : content-type; s=corp-2020-01-29;
 bh=6ZnmW0Uwc1tXoKg5slRMLYnB9wb7dyA7I4QxcQLbwpU=;
 b=hxeV/6KFBAJ1sEJfVxTqRLdiaGd1QkeMsv4RkFCu2Iu8VqG7SBLvPetL7CAjd3bydmy/
 DsEs7Jbbh+AUMVo6rabylweFTKEqBKBE1Ooka9PC9N3m8HpaOpkdv8joQ69J2fzDbEq/
 XaIwH9Psl6Y4drYS3zGdFqpK9BR+dE7AgHO7QeC8GvqWBkPgOpMv3ttgbyXzFJKXu7fL
 7bI9/rkqpDZ+ozt6GQtsRZDQogm4KacNOCb/NtosRjc440B2eZmTtJXkb5uO7MyNIBbZ
 /yW6xWRcFWwMwX3f5wnxB2TM+nVbfKo+3QOZd6qxcwdgRf//TV9IAPpTxFw8oQ4GdJ2c sA== 
Date: Wed, 21 Oct 2020 13:50:23 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: oleksandr_andrushchenko@epam.com
Cc: xen-devel@lists.xenproject.org
Subject: [bug report] ALSA: xen-front: Use Xen common shared buffer
 implementation
Message-ID: <20201021105023.GA957589@mwanda>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9780 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0 mlxlogscore=750
 bulkscore=0 spamscore=0 adultscore=0 suspectscore=11 mlxscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010210085
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9780 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 phishscore=0
 priorityscore=1501 clxscore=1011 malwarescore=0 mlxscore=0 adultscore=0
 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxlogscore=761
 suspectscore=11 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010210085

Hello Oleksandr Andrushchenko,

The patch 58f9d806d16a: "ALSA: xen-front: Use Xen common shared
buffer implementation" from Nov 30, 2018, leads to the following
static checker warning:

    sound/xen/xen_snd_front_alsa.c:495 alsa_hw_params()
    warn: 'stream->shbuf.directory' double freed
    sound/xen/xen_snd_front_alsa.c:495 alsa_hw_params()
    warn: 'stream->shbuf.grefs' double freed

sound/xen/xen_snd_front_alsa.c
   461  static int alsa_hw_params(struct snd_pcm_substream *substream,
   462                            struct snd_pcm_hw_params *params)
   463  {
   464          struct xen_snd_front_pcm_stream_info *stream = stream_get(substream);
   465          struct xen_snd_front_info *front_info = stream->front_info;
   466          struct xen_front_pgdir_shbuf_cfg buf_cfg;
   467          int ret;
   468  
   469          /*
   470           * This callback may be called multiple times,
   471           * so free the previously allocated shared buffer if any.
   472           */
   473          stream_free(stream);
                ^^^^^^^^^^^^^^^^^^^
This is freed here.

   474          ret = shbuf_setup_backstore(stream, params_buffer_bytes(params));
   475          if (ret < 0)
   476                  goto fail;
                        ^^^^^^^^^^
This leads to some double frees.  Probably more double frees than Smatch
is detecting.

   477  
   478          memset(&buf_cfg, 0, sizeof(buf_cfg));
   479          buf_cfg.xb_dev = front_info->xb_dev;
   480          buf_cfg.pgdir = &stream->shbuf;
   481          buf_cfg.num_pages = stream->num_pages;
   482          buf_cfg.pages = stream->pages;
   483  
   484          ret = xen_front_pgdir_shbuf_alloc(&buf_cfg);
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This is where "stream->shbuf.directory" is re-allocated on the success
path.

   485          if (ret < 0)
   486                  goto fail;
   487  
   488          ret = xen_front_pgdir_shbuf_map(&stream->shbuf);
   489          if (ret < 0)
   490                  goto fail;
   491  
   492          return 0;
   493  
   494  fail:
   495          stream_free(stream);
                ^^^^^^^^^^^^^^^^^^^^
Double free.

   496          dev_err(&front_info->xb_dev->dev,
   497                  "Failed to allocate buffers for stream with index %d\n",
   498                  stream->index);
   499          return ret;
   500  }

regards,
dan carpenter


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:52:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:52:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10008.26380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBin-0000H0-3v; Wed, 21 Oct 2020 10:51:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10008.26380; Wed, 21 Oct 2020 10:51:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBin-0000Gt-0V; Wed, 21 Oct 2020 10:51:57 +0000
Received: by outflank-mailman (input) for mailman id 10008;
 Wed, 21 Oct 2020 10:51:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVBim-0000Gm-0Q
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:51:56 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e991aee-1363-4b07-a967-0ddd7ef02b28;
 Wed, 21 Oct 2020 10:51:54 +0000 (UTC)
X-Inumbo-ID: 8e991aee-1363-4b07-a967-0ddd7ef02b28
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603277515;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=c/6PNeJMbMhVj9+L62PuhzsK96Dd1mycxD6opnCaWi4=;
  b=Otk9t4KqHIo+V8YszIM1mXI5M9en/Q/S36FOuV3FOmOSb8HQtH5vguVi
   /XDNcQFEJ8KFBV2FKzXzO3VuEz2gKnA6aDsSHulJxrS29ztgZoHT2m90M
   fbb997cagB4+2GXnMtkeYY7plW42TeXl6eoB68ZBi847YqvkDoNYdrsfr
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: X/9pofebGbC3SRGOQ0k4b7miJU30ivGMOuKXiblO+/Q4ptsCuoRB0kCcx/bA11VMpJeXnYF9IN
 qbEVdA1UDdxhEOXi/TCf7/Pg78e5bKuYA+HVsDQRs9InfJeO8jUNSvli05givy+ICEu7kClzQZ
 4uUKg/l0zvwtTQVlMG5Achlzyr8OcBEpiigCdHkBlvRJzuYL3H8E+ecpaHvF3BsYt5xKvCN2Ie
 tKb3o0r01YZOsbdRn9Ld7S8VrPnljIs+cMSBp3H5bUUldPjozjtT6wMCpgk/O249BnQEOpEN8/
 9ro=
X-SBRS: 2.5
X-MesageID: 29471648
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29471648"
Date: Wed, 21 Oct 2020 12:51:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Jason Andryuk <jandryuk@gmail.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, <intel-gfx@lists.freedesktop.org>, xen-devel
	<xen-devel@lists.xenproject.org>
Subject: Re: i915 dma faults on Xen
Message-ID: <20201021105110.w3nyd4xod363kp4d@Air-de-Roger>
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
 <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
 <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
 <a855e542-4e12-14e2-b663-75e2efceb937@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a855e542-4e12-14e2-b663-75e2efceb937@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 21, 2020 at 12:33:05PM +0200, Jan Beulich wrote:
> On 21.10.2020 11:58, Roger Pau Monné wrote:
> > On Fri, Oct 16, 2020 at 12:23:22PM -0400, Jason Andryuk wrote:
> >> The RMRRs are:
> >> (XEN) [VT-D]Host address width 39
> >> (XEN) [VT-D]found ACPI_DMAR_DRHD:
> >> (XEN) [VT-D]  dmaru->address = fed90000
> >> (XEN) [VT-D]drhd->address = fed90000 iommu->reg = ffff82c00021d000
> >> (XEN) [VT-D]cap = 1c0000c40660462 ecap = 19e2ff0505e
> >> (XEN) [VT-D] endpoint: 0000:00:02.0
> >> (XEN) [VT-D]found ACPI_DMAR_DRHD:
> >> (XEN) [VT-D]  dmaru->address = fed91000
> >> (XEN) [VT-D]drhd->address = fed91000 iommu->reg = ffff82c00021f000
> >> (XEN) [VT-D]cap = d2008c40660462 ecap = f050da
> >> (XEN) [VT-D] IOAPIC: 0000:00:1e.7
> >> (XEN) [VT-D] MSI HPET: 0000:00:1e.6
> >> (XEN) [VT-D]  flags: INCLUDE_ALL
> >> (XEN) [VT-D]found ACPI_DMAR_RMRR:
> >> (XEN) [VT-D] endpoint: 0000:00:14.0
> >> (XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 78863000 end_addr 78882fff
> >> (XEN) [VT-D]found ACPI_DMAR_RMRR:
> >> (XEN) [VT-D] endpoint: 0000:00:02.0
> >> (XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 7d000000 end_addr 7f7fffff
> >> (XEN) [VT-D]found ACPI_DMAR_RMRR:
> >> (XEN) [VT-D] endpoint: 0000:00:16.7
> >> (XEN) [VT-D]dmar.c:581:  Non-existent device (0000:00:16.7) is
> >> reported in RMRR (78907000, 78986fff)'s scope!
> >> (XEN) [VT-D]dmar.c:596:   Ignore the RMRR (78907000, 78986fff) due to
> > 
> > This is also part of a reserved region, so should be added to the
> > iommu page tables anyway regardless of this message.
> 
> Could you clarify why you think so? RMRRs are tied to devices, so
> if a device in reality doesn't exist (and no other one uses the
> same range), I don't see why an IOMMU mapping would be needed
> (unless to work around some related firmware bug). Plus aiui none
> of the IOMMU faults actually report this range as having got
> accessed.

Since it's the hardware domain that gets the gfx card assigned here, it
will get any reserved regions added to the IOMMU page tables in
arch_iommu_hwdom_init. I agree it's not relevant here, since those are
not the regions reported in the IOMMU faults.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:58:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:58:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10014.26391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBpX-0000bA-RX; Wed, 21 Oct 2020 10:58:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10014.26391; Wed, 21 Oct 2020 10:58:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBpX-0000b3-OM; Wed, 21 Oct 2020 10:58:55 +0000
Received: by outflank-mailman (input) for mailman id 10014;
 Wed, 21 Oct 2020 10:58:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVBpW-0000ay-9F
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:58:54 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a51e6f7a-89d1-4fc3-ba3d-148d59eece99;
 Wed, 21 Oct 2020 10:58:53 +0000 (UTC)
X-Inumbo-ID: a51e6f7a-89d1-4fc3-ba3d-148d59eece99
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603277934;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=4iSsupOD6Q4gc152pUGIf39Oy9WL839bdkn4POmCRe8=;
  b=ewhVWz7iiiy6kqMoALQGiYMfhrSIRSD7xzFAveKdAVIuZdgBqYovJG8t
   UFCEYueCnz48FzBXn0SGzkMd2apj/DlKh3737pv9P+djmUmVEco2yXrQr
   pbszDxjegU3ZBW/Gn3dy7Y8KtL4ETy2Xwlsw0/e/65N663BJWMzCuGoKa
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Abx5GA9fO+utGWBYzMaxwLe+qjMt0rrBzzDV383h303FqESbIrdEq9BZqvUIZUodt6aPVybGnL
 BsxayAldXeSqpMVJfhjmzq5yCv+qCz/8Gc3Ds67QpL3tDgeRoekZDIWG2qbI0MNdGQeyHCozhD
 PBJyknQHZR7q6k19N2W15bCU8uckNOPnb19Hdj3CjOIEhJPgAhllApScmdrzx+y0tILg7IcDBh
 InVSVIRLzW7EKLW/TTjdyQ9T5d74O3LV3Y3ISB3zKG/m32zCD/7r7DFBcBQlbO8kIgPqYXCIK+
 qFU=
X-SBRS: 2.5
X-MesageID: 29441339
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29441339"
Date: Wed, 21 Oct 2020 12:58:41 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH] x86: XENMAPSPACE_gmfn{,_batch,_range} want to special
 case idx == gpfn
Message-ID: <20201021105841.dqx3tnw3pkys5mun@Air-de-Roger>
References: <920fa307-190e-dc11-f338-5b44a2126050@suse.com>
 <20201021093958.e4kopykalddam7pk@Air-de-Roger>
 <a979d21d-efed-9493-efd1-2643bddbbdd9@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a979d21d-efed-9493-efd1-2643bddbbdd9@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 21, 2020 at 12:38:48PM +0200, Jan Beulich wrote:
> On 21.10.2020 11:39, Roger Pau Monné wrote:
> > On Fri, Oct 16, 2020 at 08:44:10AM +0200, Jan Beulich wrote:
> >> --- a/xen/arch/x86/mm.c
> >> +++ b/xen/arch/x86/mm.c
> >> @@ -4555,7 +4555,7 @@ int xenmem_add_to_physmap_one(
> >>          if ( is_special_page(mfn_to_page(prev_mfn)) )
> >>              /* Special pages are simply unhooked from this phys slot. */
> >>              rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
> >> -        else
> >> +        else if ( !mfn_eq(mfn, prev_mfn) )
> >>              /* Normal domain memory is freed, to avoid leaking memory. */
> >>              rc = guest_remove_page(d, gfn_x(gpfn));
> > 
> > What about the access differing between the old and the new entries,
> > while pointing to the same mfn, would Xen install the new entry
> > successfully?
> 
> Yes - guest_physmap_add_page() doesn't get bypassed.

But will it succeed if the default access is different from the one
the installed entry currently has? Will it update the access bits
to match the new ones?

> 
> > Seems easier to me to use guest_physmap_remove_page in that case to
> > remove the entry from the p2m without freeing the page.
> 
> Why do any removal when none is really needed? I also don't see
> this fit the "special pages" clause and comment very well. I'd
> question the other way around whether guest_physmap_remove_page()
> needs calling at all (the instance above; the other one of course
> is needed).

Right, replying to my question above: it will succeed, since
guest_physmap_add_entry will overwrite the previous entry.

I agree, it looks like the guest_physmap_remove_page call done for
special pages is not really needed, as guest_physmap_add_entry would
already overwrite such entries and not free the associated mfn?

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 10:58:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 10:58:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10015.26404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBpa-0000cO-3W; Wed, 21 Oct 2020 10:58:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10015.26404; Wed, 21 Oct 2020 10:58:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVBpa-0000cG-06; Wed, 21 Oct 2020 10:58:58 +0000
Received: by outflank-mailman (input) for mailman id 10015;
 Wed, 21 Oct 2020 10:58:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zsxo=D4=gmail.com=pankaj.gupta.linux@srs-us1.protection.inumbo.net>)
 id 1kVBpZ-0000c1-Dw
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 10:58:57 +0000
Received: from mail-io1-xd42.google.com (unknown [2607:f8b0:4864:20::d42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd35d6ce-8b8d-43aa-9446-57d42f91c94d;
 Wed, 21 Oct 2020 10:58:56 +0000 (UTC)
Received: by mail-io1-xd42.google.com with SMTP id p15so2697393ioh.0
 for <xen-devel@lists.xenproject.org>; Wed, 21 Oct 2020 03:58:56 -0700 (PDT)
X-Inumbo-ID: fd35d6ce-8b8d-43aa-9446-57d42f91c94d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=GICSwgN3sNFkPLzb9ZiuRIa+ZIKU5hbtJb9zZM8PdNs=;
        b=ENzNLNVw380JSauwEXTsNIwfSS+Pfic6wIwk3mcDkx2ZI8vSJTG9T4To95nV5K++3S
         wXrPtza7/nKOZyWO6GUYg/19vJCSiI3irHrOtUq4HfvmyEmN9+Er/0f8hl5JC3NY9rCU
         4VXyirx16ZxcwPrFkWNc5+fdgPzJzwoqh23QZ4siuTOAmG7kAyniMqDNFkdjAyBLV9KU
         6xUTE1fU2WS7Y2iHpg238szVhAOmVhcjtjhCua1z7pgont9fb1+CXR9apRekKvjEBXQ4
         Mjsvsuixy9gpE8FbEhU64igpCUNBdcJC/8ogPwXvQvXYuRk8lqRCeWNgGrCpzH1AQQOC
         Q+4Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=GICSwgN3sNFkPLzb9ZiuRIa+ZIKU5hbtJb9zZM8PdNs=;
        b=N8iTpknxlkPHFXddK9SpLj1ELntD/L7k/W4hwUs1zNjMbU+AYB+Z9dQEZIZFVAvyca
         tei2OaBzX9qT9umrygjef+g7nRWcnVc5RRA4sQrbdByUVxTC0pxRYd6PyhPHo4m5oVIc
         Ke5rnnjfqEd1nSw76MGbTIKZik/W3872MsZbrgqwxcSU25tHq2n+DmZ7/lzaQrS7owJ9
         SKZoxMlIcbZoAaYBzM2ec9F33Ypd1WSsXfAmagxeAYqtDjAVaW9tAq4fHs0PcbCJ44h2
         OY5F/C9mHpXyPx8PopSytsBnpbGd81oXur6p8yWTapnVuCZYCZmtknZGtLVw2LiOSYoz
         fgnA==
X-Gm-Message-State: AOAM530CjFXKJf/P9swbZCxCId1xxZzwi+FchSE3R2gqhSgc2HZ7GMdB
	lffRzOdw41SnA01O/SPhT+E7WYI31IzImvl43FQ=
X-Google-Smtp-Source: ABdhPJxVGoa8oGWf7NMjH5r135bL5msWqxriS77JIexiAMhTvnG4ddTh5FYX+VcY2vpk3tlFdrFb/MgirSrWDhsBaEo=
X-Received: by 2002:a5d:87c7:: with SMTP id q7mr2174472ios.162.1603277936222;
 Wed, 21 Oct 2020 03:58:56 -0700 (PDT)
MIME-Version: 1.0
References: <20201005121534.15649-1-david@redhat.com> <20201005121534.15649-6-david@redhat.com>
In-Reply-To: <20201005121534.15649-6-david@redhat.com>
From: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
Date: Wed, 21 Oct 2020 12:58:45 +0200
Message-ID: <CAM9Jb+jXR6iPvSxExaEJvm90mqRozh1wcJ6ukEmDy_pqc-37oQ@mail.gmail.com>
Subject: Re: [PATCH v2 5/5] mm/memory_hotplug: update comment regarding zone shuffling
To: David Hildenbrand <david@redhat.com>
Cc: LKML <linux-kernel@vger.kernel.org>, Linux MM <linux-mm@kvack.org>, 
	linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org, 
	linux-acpi@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>, 
	Matthew Wilcox <willy@infradead.org>, Wei Yang <richard.weiyang@linux.alibaba.com>, 
	Michal Hocko <mhocko@suse.com>, Alexander Duyck <alexander.h.duyck@linux.intel.com>, 
	Mel Gorman <mgorman@techsingularity.net>, Michal Hocko <mhocko@kernel.org>, 
	Dave Hansen <dave.hansen@intel.com>, Vlastimil Babka <vbabka@suse.cz>, 
	Oscar Salvador <osalvador@suse.de>, Mike Rapoport <rppt@kernel.org>
Content-Type: text/plain; charset="UTF-8"

> As we no longer shuffle via generic_online_page() and when undoing
> isolation, we can simplify the comment.
>
> We now effectively shuffle only once (properly) when onlining new
> memory.
>
> Reviewed-by: Wei Yang <richard.weiyang@linux.alibaba.com>
> Acked-by: Michal Hocko <mhocko@suse.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Dave Hansen <dave.hansen@intel.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Wei Yang <richard.weiyang@linux.alibaba.com>
> Cc: Oscar Salvador <osalvador@suse.de>
> Cc: Mike Rapoport <rppt@kernel.org>
> Cc: Pankaj Gupta <pankaj.gupta.linux@gmail.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/memory_hotplug.c | 11 ++++-------
>  1 file changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 03a00cb68bf7..b44d4c7ba73b 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -858,13 +858,10 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>         undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
>
>         /*
> -        * When exposing larger, physically contiguous memory areas to the
> -        * buddy, shuffling in the buddy (when freeing onlined pages, putting
> -        * them either to the head or the tail of the freelist) is only helpful
> -        * for maintaining the shuffle, but not for creating the initial
> -        * shuffle. Shuffle the whole zone to make sure the just onlined pages
> -        * are properly distributed across the whole freelist. Make sure to
> -        * shuffle once pageblocks are no longer isolated.
> +        * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
> +        * the tail of the freelist when undoing isolation). Shuffle the whole
> +        * zone to make sure the just onlined pages are properly distributed
> +        * across the whole freelist - to create an initial shuffle.
>          */
>         shuffle_zone(zone);
>

Acked-by: Pankaj Gupta <pankaj.gupta.linux@gmail.com>


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 11:10:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 11:10:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10020.26416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVC0d-0002PF-7c; Wed, 21 Oct 2020 11:10:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10020.26416; Wed, 21 Oct 2020 11:10:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVC0d-0002P8-4b; Wed, 21 Oct 2020 11:10:23 +0000
Received: by outflank-mailman (input) for mailman id 10020;
 Wed, 21 Oct 2020 11:10:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVC0c-0002P3-BU
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 11:10:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 77b764a2-fd47-4c4b-9b5d-fa932a9530a4;
 Wed, 21 Oct 2020 11:10:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B39CAB26A;
 Wed, 21 Oct 2020 11:10:20 +0000 (UTC)
X-Inumbo-ID: 77b764a2-fd47-4c4b-9b5d-fa932a9530a4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603278620;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uvaC1OGUKM/nTrlof1kFzS7RbCtPv4+jEYS7rMJJGE0=;
	b=pnv4sIYt/0VcaaSczwzlZzGuhkxlhlPID2palrZXjGT2fDkAlBjHNCeAppmcJvko0uJGQM
	imESIcJIIXh/h/F0duYRIfMDiltSQA3ug3KUrN4gUGgNJaF/Z+OHDHhqUZjiRa3/G2+Zd0
	RkL0J/y2ApPYcnVnxmvt75oexvFTGnA=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B39CAB26A;
	Wed, 21 Oct 2020 11:10:20 +0000 (UTC)
Subject: Re: [PATCH] x86: XENMAPSPACE_gmfn{,_batch,_range} want to special
 case idx == gpfn
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <920fa307-190e-dc11-f338-5b44a2126050@suse.com>
 <20201021093958.e4kopykalddam7pk@Air-de-Roger>
 <a979d21d-efed-9493-efd1-2643bddbbdd9@suse.com>
 <20201021105841.dqx3tnw3pkys5mun@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4de4c497-496b-d55f-0d4d-aa61246daca6@suse.com>
Date: Wed, 21 Oct 2020 13:10:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021105841.dqx3tnw3pkys5mun@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 12:58, Roger Pau Monné wrote:
> On Wed, Oct 21, 2020 at 12:38:48PM +0200, Jan Beulich wrote:
>> On 21.10.2020 11:39, Roger Pau Monné wrote:
>>> On Fri, Oct 16, 2020 at 08:44:10AM +0200, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -4555,7 +4555,7 @@ int xenmem_add_to_physmap_one(
>>>>          if ( is_special_page(mfn_to_page(prev_mfn)) )
>>>>              /* Special pages are simply unhooked from this phys slot. */
>>>>              rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
>>>> -        else
>>>> +        else if ( !mfn_eq(mfn, prev_mfn) )
>>>>              /* Normal domain memory is freed, to avoid leaking memory. */
>>>>              rc = guest_remove_page(d, gfn_x(gpfn));
>>>
>>> What about the access differing between the old and the new entries,
>>> while pointing to the same mfn, would Xen install the new entry
>>> successfully?
>>
>> Yes - guest_physmap_add_page() doesn't get bypassed.
> 
> But will it succeed if the default access is different from the one
> the installed entry currently has? Will it update the access bits
> to match the new ones?

It will construct and put in place a completely new entry. Old
values are of concern only for keeping statistics right, and
of course for refusing certain changes.

>>> Seems easier to me to use guest_physmap_remove_page in that case to
>>> remove the entry from the p2m without freeing the page.
>>
>> Why do any removal when none is really needed? I also don't see
>> this fit the "special pages" clause and comment very well. I'd
>> question the other way around whether guest_physmap_remove_page()
>> needs calling at all (the instance above; the other one of course
>> is needed).
> 
> Right, replying to my question above: it will succeed, since
> guest_physmap_add_entry will overwrite the previous entry.
> 
> I agree, it looks like the guest_physmap_remove_page call done for
> special pages is not really needed, as guest_physmap_add_entry would
> already overwrite such entries and not free the associated mfn?

That's my understanding, yes.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 11:20:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 11:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10023.26428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCAQ-0003P2-86; Wed, 21 Oct 2020 11:20:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10023.26428; Wed, 21 Oct 2020 11:20:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCAQ-0003Ov-4z; Wed, 21 Oct 2020 11:20:30 +0000
Received: by outflank-mailman (input) for mailman id 10023;
 Wed, 21 Oct 2020 11:20:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVCAO-0003Oq-Np
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 11:20:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1977d54b-4c15-44de-9130-f81f93f5c406;
 Wed, 21 Oct 2020 11:20:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C2322ACB8;
 Wed, 21 Oct 2020 11:20:26 +0000 (UTC)
X-Inumbo-ID: 1977d54b-4c15-44de-9130-f81f93f5c406
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603279226;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6Or0mI1KPB16g6QgFGEQT367aWfREX95j1EIY3ezxcY=;
	b=MQ/bK8aTFNxCCOXo8dewmNwMiuHdGVlRFYVuCH2Fk2Yd4hEBQ93q4xfmU3GUeroSZwOyXG
	f+r4CUn6ZfmLophlr1RuHUs5asEO3WRgBsf+Mh+gHbte51XRViIz0vbyR0Q8Bg244I7u7Z
	WZYGrzFLe3zYQqCd53mJDBTl4lIomhs=
Subject: Re: [PATCH] pci: cleanup MSI interrupts before removing device from
 IOMMU
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>
References: <20201021081945.28425-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3799f954-ca43-98a0-9e86-b100c86ea25b@suse.com>
Date: Wed, 21 Oct 2020 13:20:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021081945.28425-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 10:19, Roger Pau Monne wrote:
> Doing the MSI cleanup after removing the device from the IOMMU leads
> to the following panic on AMD hardware:
> 
> Assertion 'table.ptr && (index < intremap_table_entries(table.ptr, iommu))' failed at iommu_intr.c:172
> ----[ Xen-4.13.1-10.0.3-d  x86_64  debug=y   Not tainted ]----
> CPU:    3
> RIP:    e008:[<ffff82d08026ae3c>] drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
> [...]
> Xen call trace:
>    [<ffff82d08026ae3c>] R drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
>    [<ffff82d08026af25>] F drivers/passthrough/amd/iommu_intr.c#update_intremap_entry_from_msi_msg+0xc0/0x342
>    [<ffff82d08026ba65>] F amd_iommu_msi_msg_update_ire+0x98/0x129
>    [<ffff82d08025dd36>] F iommu_update_ire_from_msi+0x1e/0x21
>    [<ffff82d080286862>] F msi_free_irq+0x55/0x1a0
>    [<ffff82d080286f25>] F pci_cleanup_msi+0x8c/0xb0
>    [<ffff82d08025cf52>] F pci_remove_device+0x1af/0x2da
>    [<ffff82d0802a42d1>] F do_physdev_op+0xd18/0x1187
>    [<ffff82d080383925>] F pv_hypercall+0x1f5/0x567
>    [<ffff82d08038a432>] F lstar_enter+0x112/0x120
> 
> That's because the call to iommu_remove_device on AMD hardware will
> remove the per-device interrupt remapping table, and hence the call to
> pci_cleanup_msi done afterwards will find a null intremap table and
> crash.
> 
> Reorder the calls so that MSI interrupts are torn down before removing
> the device from the IOMMU.

I guess this wants

Fixes: d7cfeb7c13ed ("AMD/IOMMU: don't blindly allocate interrupt remapping tables")

?

> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> I've discussed the issue with Andrew and maybe we should try to avoid
> removing the interrupt remapping table on device removal, but then the
> tables would have to be sized to support the maximum amount of
> interrupts instead of the maximum supported by the device currently
> plugged in.

We've specifically limited allocation sizes not so long ago (the
commit above was the first of two steps in that direction). So
I'd rather not see us go back unless there's truly new information
available now.

> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -834,10 +834,15 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
>          if ( pdev->bus == bus && pdev->devfn == devfn )
>          {
> +            /*
> +             * Cleanup MSI interrupts before removing the device from the
> +             * IOMMU, or else the internal IOMMU data used to track the device
> +             * interrupts might be already gone.
> +             */
> +            pci_cleanup_msi(pdev);
>              ret = iommu_remove_device(pdev);
>              if ( pdev->domain )
>                  list_del(&pdev->domain_list);
> -            pci_cleanup_msi(pdev);

To be honest I'm not sure about the comment. It should really have
been this way from the very beginning, and VT-d not being affected
makes me wonder what possible improvements are there waiting to be
noticed and then carried out.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 11:25:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 11:25:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10026.26440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCFN-0003ea-Sj; Wed, 21 Oct 2020 11:25:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10026.26440; Wed, 21 Oct 2020 11:25:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCFN-0003eT-P3; Wed, 21 Oct 2020 11:25:37 +0000
Received: by outflank-mailman (input) for mailman id 10026;
 Wed, 21 Oct 2020 11:25:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7u2m=D4=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kVCFL-0003eO-Oc
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 11:25:35 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.43]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 606edfb3-9be4-4ee6-8d9e-132c6abf49a5;
 Wed, 21 Oct 2020 11:25:34 +0000 (UTC)
Received: from DB6PR07CA0060.eurprd07.prod.outlook.com (2603:10a6:6:2a::22) by
 VI1PR08MB3085.eurprd08.prod.outlook.com (2603:10a6:803:47::25) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.21; Wed, 21 Oct 2020 11:25:31 +0000
Received: from DB5EUR03FT048.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2a:cafe::e4) by DB6PR07CA0060.outlook.office365.com
 (2603:10a6:6:2a::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.10 via Frontend
 Transport; Wed, 21 Oct 2020 11:25:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT048.mail.protection.outlook.com (10.152.21.28) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Wed, 21 Oct 2020 11:25:31 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Wed, 21 Oct 2020 11:25:31 +0000
Received: from 8940dd5145f2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9C258253-1410-4C2D-B750-927DAE103C69.1; 
 Wed, 21 Oct 2020 11:25:11 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8940dd5145f2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 21 Oct 2020 11:25:11 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB4930.eurprd08.prod.outlook.com (2603:10a6:208:157::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.28; Wed, 21 Oct
 2020 11:25:09 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Wed, 21 Oct 2020
 11:25:09 +0000
X-Inumbo-ID: 606edfb3-9be4-4ee6-8d9e-132c6abf49a5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lt3ypk5J9n7FEDuyUQmx2wmugEqyA1CsfIiZPKiuawk=;
 b=RSQusM3fygUpQ4pnYF8k8thZGUEwVsicVTWVpDsz3J1dCh44InvS3e1stpfXAS++NGY//5W80/EmKgBwVXKRSpkAdIHfYrSYmnbhTv2DdG89zcPrYR1amjRUSFI3G90fanmkALi+5zW2AAGf03axIbaxBAAKwUBll8RBPj6cGDE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e175e76036575f01
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UwwDUJDwnr2++1WD2gqe741o8ywB4pdkhd4iGvD4EL/huLwCEPzVD18v2VLQdJxua/gnSqNhTB0egTjxtL6kJAXyEBURwQjQVfRamrIbATVMEjjqYP1coXiBV/dgexp22HauyzcJCNOuPxg+ZKjTgN2nrUsupBQkFvu1XD+8/gmGWeWTBqUE12Fpa2+L5641yRl1hXNlrjQgZ/LT/VPR7be3vjm4DmoXO2gVL1Rx3WQ22JHKfkrBaRfH6e5i1JW0ayuD/Lvlq+swAbulY4itM7qx6TY6TJEaUZ9FoH+CvCFc9+OTv2tqgXS91887zEBaWtBCgIO1CC1SEZBT4bvCTA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=lt3ypk5J9n7FEDuyUQmx2wmugEqyA1CsfIiZPKiuawk=;
 b=dvceMxGZMInyVKtVVCGrimUoTzGfUbpAK6ObK/YNBi96a6sRqcOd/J/rQAZZprJlwy44GP2NCYmcdU6KF913daqWTqbqMyZ3rrBXE4NoJ9knYucGjs1Yfp/mwCBxO+ElvV7jYHJeMwx+McwMA8G5rSqUYkk4+LAAbmVu7KQmCPzJdn+POqWxLntoirtmsut2H8nALXU17UfdebSpH/Ttf8LG0MZpooykSEvNmJIFvaDv1Fg4hGzt61FGPJg3ca0skzfCbJwjoc0Z6e3olOJQCLQZnZLsk7+7EFocUBvyKd4PL1rF9xT81BrxaX7MsrlXcy2LmLITjRx1GSS/TkRtMg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWoVvrkmJwOYERdUOadvid1OghFamgw0AAgAEzsQA=
Date: Wed, 21 Oct 2020 11:25:08 +0000
Message-ID: <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
In-Reply-To: <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 48fd1fdf-6f07-49f1-62b7-08d875b40519
x-ms-traffictypediagnostic: AM0PR08MB4930:|VI1PR08MB3085:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB3085EC2727A7A02CEAD4E2B8FC1C0@VI1PR08MB3085.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 37TRDc2FV2gjkbC79MGUJkwHiX4Rm4uqAr5rXNg8Elmgabo5a6Vwmb0zk81NZwFdnOtsi/I5gcj/HuULbj+qd6paA1ih/r2H5GP/TkS/dE1TU7Bp9Je3kd/twBPmcmRDUjiljmiVfWc6l3djIZoNqLUq64N4Rv4MXCCIH8+5/N8dX/JKs99IBm4VU4LscKkDyINo6k+qiQnO+QHeFGHVMBFY6AU8WYhSYmhL5C6BU/NiuYklhUsH9Lua/qDKjgKRYY9y/Ywm5yjTgIgnkFaQckUvfq8rCGTRoheSDn1fvGzgBV94Bi8z+tTKSxrC3f8xrWZtlm6meLRl+xKJoGZJ5Q==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(396003)(346002)(376002)(136003)(83380400001)(66946007)(2616005)(5660300002)(36756003)(6486002)(71200400001)(478600001)(8936002)(6916009)(26005)(33656002)(186003)(316002)(64756008)(6506007)(53546011)(76116006)(66556008)(86362001)(91956017)(4326008)(8676002)(55236004)(66446008)(54906003)(2906002)(6512007)(66476007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 KA+dX8KWl6u4QS1Pw70TssBz+vwg2YnuavgjPmB4PACBSfd6saJ0je1hAxN5bkQDwr3wigUglsboH+TBOKroBwaneUY5/iA9RjzPngiSmuN1E7lhGrLQrHiQkBAVUAvnCKqTWRtXb+BPl+903Sf9aCSk6pVRYTNovnB1dyQmIEozYrSWzMqHRCMTPSHKNH3YLYDleJvviigjm/kVavrpQIfjyGtN8pQwytmeQNqt7fmable/IlzezHafGfLlzfJh4DYeAxrNSZmH50faeSwU+4A/Bp+N7d+OzF99ak2u6BbZ8HwNlsQw9uJfOmpHteSiy9jqgjg6YsXIqqS4MUk0zrensv/sJRQhmrBkyEmBWJaVsBk3B5eYRFwAt0wH7FqRWkc8Nkt0qZg5InMQcBc5/MjOiVGtY48EtVUO19cf7K7AG5ThXiZSw0y/JVYx9VngLl80mzds43ybC4h9w9qSop8Y2/jU1uTI9oAUh1c8SP2HQoZzkVm4saxEXmvFuDLxHzY1EbM8X5/UAIM7lWK1EF2DSZDEB71lNwVNkyhSIs2rni2EQvv9DOqb5A2Xw7NbwqZeAf9JUK0R03fFDonEl2LOBYzZ6P1usQEBsJSKVC+e0GNMjoyqw4eU+DP97pRSMDokSw29NpUKmb1GGq3z3w==
Content-Type: text/plain; charset="utf-8"
Content-ID: <B83730D3261B0D4183A22F1989ABB0EB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4930
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c6801180-073d-4db8-dfa0-08d875b3f7d5
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SPv2iJc1tIeQbwJ3OKm09iEOErvKX294tMFBVn4Qf/mWGxlt7K+lBgcY3z8c3vh4Kp01QPO0zp3fF8Gk+TxhyBH6xv+qHThpGwqbK50bzglIRkiTKLUjXN8obumrWmJyqnrNW2NZgv/OjiuzPSqD4zOHXk91sdJHRXgi1byyO8g2iUXL/NB6+ozYJIOjh0G7oQX0O7SEwuHsH8Yo1XDj86iVt3kXMYAm+VY1Dse7bQrcB7qp8hUCcvPqG4rYGch+HqTdLoG3sLYNMdtKqYIsTo9VxeMuT8DfU+cH6gmmzQugwmQuYHkLBg2jfNSl6SWCJWnNYyBn7+BV1iQuxZhKzYm7rDtllz+zZhCAIuEmqqOoZtqLhoNFqQ889/yHQ0YQ/feCvvyYuTY8Klb2dTLWwA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(39860400002)(396003)(346002)(376002)(46966005)(8676002)(53546011)(33656002)(8936002)(316002)(186003)(70586007)(26005)(70206006)(55236004)(6506007)(2616005)(336012)(36756003)(81166007)(2906002)(6486002)(82310400003)(4326008)(356005)(5660300002)(86362001)(478600001)(54906003)(6862004)(6512007)(83380400001)(47076004)(82740400003)(107886003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Oct 2020 11:25:31.2958
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 48fd1fdf-6f07-49f1-62b7-08d875b40519
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT048.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3085

Hello Julien,

> On 20 Oct 2020, at 6:03 pm, Julien Grall <julien@xen.org> wrote:
> 
> Hi Rahul,
> 
> Thank you for the contribution. Lets make sure this attempt to SMMUv3 support in Xen will be more successful than the other one :).

Yes sure.
> 
> I haven't reviewed the code yet, but I wanted to provide feedback on the commit message.
> 
> On 20/10/2020 16:25, Rahul Singh wrote:
>> Add support for ARM architected SMMUv3 implementations. It is based on
>> the Linux SMMUv3 driver.
>> Major differences between the Linux driver are as follows:
>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>    that supports both Stage-1 and Stage-2 translations.
>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>    capability to share the page tables with the CPU.
>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>    and priority queue IRQ handling.
> 
> Tasklets are not a replacement for threaded IRQ. In particular, they will have priority over anything else (IOW nothing will run on the pCPU until they are done).
> 
> Do you know why Linux is using thread. Is it because of long running operations?

Yes, you are right: Linux uses threaded IRQs because of the long-running operations.

SMMUv3 reports faults/events via memory-based circular buffer queues rather than registers. As per my understanding, it is time-consuming to process the memory-based queues in interrupt context; because of that, Linux uses a threaded IRQ to process the faults/events from the SMMU.

I didn't find any other mechanism in Xen in place of a tasklet to defer the work; that's why I used a tasklet in Xen as a replacement for the threaded IRQs. If we do all the work in interrupt context we will make Xen less responsive.

If you know of another mechanism in Xen that can be used to defer the work from the interrupt handler, please let me know and I will try to use it.

>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>    access functions based on atomic operations implemented in Linux.
> 
> Can you provide more details?

I tried to port the latest version of the SMMUv3 code, but observed that in order to port it I would also have to port the atomic operations implemented in Linux to Xen, as the latest Linux code uses atomic operations to process the command queues (atomic_cond_read_relaxed(), atomic_long_cond_read_relaxed(), atomic_fetch_andnot_relaxed()).

> 
>>    Atomic functions used by the commands queue access functions is not
>>    implemented in XEN therefore we decided to port the earlier version
>>    of the code. Once the proper atomic operations will be available in XEN
>>    the driver can be updated.
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>>  xen/drivers/passthrough/Kconfig       |   10 +
>>  xen/drivers/passthrough/arm/Makefile  |    1 +
>>  xen/drivers/passthrough/arm/smmu-v3.c | 2847 +++++++++++++++++++++++++
>>  3 files changed, 2858 insertions(+)
> 
> This is quite significant patch to review. Is there any way to get it split (maybe a verbatim Linux copy + Xen modification)?

Yes, I understand this is quite a significant patch to review; let me think about how to get it split. If it is OK for you to review this patch and provide your comments, that would be great for us.

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 11:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 11:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10041.26452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCO2-0004es-UJ; Wed, 21 Oct 2020 11:34:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10041.26452; Wed, 21 Oct 2020 11:34:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCO2-0004el-QW; Wed, 21 Oct 2020 11:34:34 +0000
Received: by outflank-mailman (input) for mailman id 10041;
 Wed, 21 Oct 2020 11:34:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVCO1-0004eg-FH
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 11:34:33 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 779358ad-6197-4d9c-bef0-312110d0c995;
 Wed, 21 Oct 2020 11:34:32 +0000 (UTC)
X-Inumbo-ID: 779358ad-6197-4d9c-bef0-312110d0c995
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603280072;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=J9FAwl1Iadl4BWGhOj+igxHGZHD4tORBONzjQO9PtDQ=;
  b=QU/k9kr2SFo61imKYGkGETyf49y+m6G0iiibg6q/WAVs7Lc6FE+SSnOI
   pKyaS7pwJGcarIv/Afsocw+irwLgRXPiD4VRFy2P+RJrWXZuvC0Pzh72J
   WE6OUUKt3RLyrRiuZF2i0Z1MT1no6lq+81Xa7H0yfl8e9ArXJas0uHe9B
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: zzHIZr1zb7xQqPtakwZxhbNkxAe6FSNXlECuthiGZWMFc2oWOuLVXZFDBRt2y597h055NQgeow
 hMrDK7ClNvP7/9ZMEARkQZOKUe8ErQUeIuzjq4ik+9SDpgHybY4PO5c9Lj33LlVcZKcwMyQdPE
 5+GuXCGzeQdcK8BLW8j296OXHj83tpuLqt19Q7NRbVF0vKyVWoeeCFVyLQPH7nVeRTtuHUdgh0
 CKG0mgfW/QW2O8Y4JvwWDzmUY+GpoWKnOuTHO49WjZ8xYYBjQO7Iq8gk28uhImJ6NE2no0fUdQ
 RqA=
X-SBRS: 2.5
X-MesageID: 29443929
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29443929"
Date: Wed, 21 Oct 2020 13:34:17 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] pci: cleanup MSI interrupts before removing device from
 IOMMU
Message-ID: <20201021113417.t3cnbm4hqvmwk2up@Air-de-Roger>
References: <20201021081945.28425-1-roger.pau@citrix.com>
 <3799f954-ca43-98a0-9e86-b100c86ea25b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3799f954-ca43-98a0-9e86-b100c86ea25b@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 21, 2020 at 01:20:27PM +0200, Jan Beulich wrote:
> On 21.10.2020 10:19, Roger Pau Monne wrote:
> > Doing the MSI cleanup after removing the device from the IOMMU leads
> > to the following panic on AMD hardware:
> > 
> > Assertion 'table.ptr && (index < intremap_table_entries(table.ptr, iommu))' failed at iommu_intr.c:172
> > ----[ Xen-4.13.1-10.0.3-d  x86_64  debug=y   Not tainted ]----
> > CPU:    3
> > RIP:    e008:[<ffff82d08026ae3c>] drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
> > [...]
> > Xen call trace:
> >    [<ffff82d08026ae3c>] R drivers/passthrough/amd/iommu_intr.c#get_intremap_entry+0x52/0x7b
> >    [<ffff82d08026af25>] F drivers/passthrough/amd/iommu_intr.c#update_intremap_entry_from_msi_msg+0xc0/0x342
> >    [<ffff82d08026ba65>] F amd_iommu_msi_msg_update_ire+0x98/0x129
> >    [<ffff82d08025dd36>] F iommu_update_ire_from_msi+0x1e/0x21
> >    [<ffff82d080286862>] F msi_free_irq+0x55/0x1a0
> >    [<ffff82d080286f25>] F pci_cleanup_msi+0x8c/0xb0
> >    [<ffff82d08025cf52>] F pci_remove_device+0x1af/0x2da
> >    [<ffff82d0802a42d1>] F do_physdev_op+0xd18/0x1187
> >    [<ffff82d080383925>] F pv_hypercall+0x1f5/0x567
> >    [<ffff82d08038a432>] F lstar_enter+0x112/0x120
> > 
> > That's because the call to iommu_remove_device on AMD hardware will
> > remove the per-device interrupt remapping table, and hence the call to
> > pci_cleanup_msi done afterwards will find a null intremap table and
> > crash.
> > 
> > Reorder the calls so that MSI interrupts are torn down before removing
> > the device from the IOMMU.
> 
> I guess this wants
> 
> Fixes: d7cfeb7c13ed ("AMD/IOMMU: don't blindly allocate interrupt remapping tables")
> 
> ?

Oh yes, I didn't git blame the file to figure out when such allocation
and freeing was added.

> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> > --- a/xen/drivers/passthrough/pci.c
> > +++ b/xen/drivers/passthrough/pci.c
> > @@ -834,10 +834,15 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
> >      list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
> >          if ( pdev->bus == bus && pdev->devfn == devfn )
> >          {
> > +            /*
> > +             * Cleanup MSI interrupts before removing the device from the
> > +             * IOMMU, or else the internal IOMMU data used to track the device
> > +             * interrupts might be already gone.
> > +             */
> > +            pci_cleanup_msi(pdev);
> >              ret = iommu_remove_device(pdev);
> >              if ( pdev->domain )
> >                  list_del(&pdev->domain_list);
> > -            pci_cleanup_msi(pdev);
> 
> To be honest I'm not sure about the comment. It should really have
> been this way from the very beginning, and VT-d not being affected
> makes me wonder what possible improvements are there waiting to be
> noticed and then carried out.

I'm fine with dropping the comment; I would also expect the normal
flow to be to clean up any interrupts and then remove the device,
rather than the other way around.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 11:36:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 11:36:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10044.26464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCPr-0004mr-AR; Wed, 21 Oct 2020 11:36:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10044.26464; Wed, 21 Oct 2020 11:36:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCPr-0004mk-6X; Wed, 21 Oct 2020 11:36:27 +0000
Received: by outflank-mailman (input) for mailman id 10044;
 Wed, 21 Oct 2020 11:36:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVCPq-0004mf-7Y
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 11:36:26 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba2f1a79-833f-405b-8a89-7e6e213c2992;
 Wed, 21 Oct 2020 11:36:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603280186;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=Dt3zzdvMM4atAHxHc6GXfY9Zx2wAPx/MIkQZnoK/N9c=;
  b=ZT8qfXxl02EaAMSn9D6buao/fyRQYQnIrl3JLaJpYvNeICiJvJGPiyCL
   J1+2C/Ik9GDyAkfrMekwCkJu35HxJEzuERQtaitg8X4ZDUx288oh03OCJ
   BHUNu6k0diZV7+h2brvZ5DmS1/QMmrMFPWhZTE1wg4H/hFgcEiqcwNNbh
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jpVzgbIu5+VWEExXVG+ZsonI+bVKecWKfNBb+XnyceYvkvaGcBdbvYNUa26+fyvDCAm9MowHWi
 Zq/I7ZEWgmHChxUf/KneinoEKdffRIjM+7K9hLG1QyIQ7un0cdIF9/oaui+hmtWXSDF5PVuSkr
 wDhPaBJWK9VWv/9gCAUn65WsJCQ5GrFNjWPQTOA3s1qU4o/JB0bK6aXepLtB9Rc7xpR/1Pl8ZQ
 eX71ofuJ7s8WI2YfYRBHGpG6qUbJGOUhP+L+rEzJjCK+Iahbcg+jL1aD8RT21uUYFn8sXNJSyp
 dZ0=
X-SBRS: 2.5
X-MesageID: 29802679
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29802679"
Date: Wed, 21 Oct 2020 13:36:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH] x86: XENMAPSPACE_gmfn{,_batch,_range} want to special
 case idx == gpfn
Message-ID: <20201021113610.6foqzvhdcpwkzoxg@Air-de-Roger>
References: <920fa307-190e-dc11-f338-5b44a2126050@suse.com>
 <20201021093958.e4kopykalddam7pk@Air-de-Roger>
 <a979d21d-efed-9493-efd1-2643bddbbdd9@suse.com>
 <20201021105841.dqx3tnw3pkys5mun@Air-de-Roger>
 <4de4c497-496b-d55f-0d4d-aa61246daca6@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4de4c497-496b-d55f-0d4d-aa61246daca6@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Wed, Oct 21, 2020 at 01:10:20PM +0200, Jan Beulich wrote:
> On 21.10.2020 12:58, Roger Pau Monné wrote:
> > On Wed, Oct 21, 2020 at 12:38:48PM +0200, Jan Beulich wrote:
> >> On 21.10.2020 11:39, Roger Pau Monné wrote:
> >>> On Fri, Oct 16, 2020 at 08:44:10AM +0200, Jan Beulich wrote:
> >>>> --- a/xen/arch/x86/mm.c
> >>>> +++ b/xen/arch/x86/mm.c
> >>>> @@ -4555,7 +4555,7 @@ int xenmem_add_to_physmap_one(
> >>>>          if ( is_special_page(mfn_to_page(prev_mfn)) )
> >>>>              /* Special pages are simply unhooked from this phys slot. */
> >>>>              rc = guest_physmap_remove_page(d, gpfn, prev_mfn, PAGE_ORDER_4K);
> >>>> -        else
> >>>> +        else if ( !mfn_eq(mfn, prev_mfn) )
> >>>>              /* Normal domain memory is freed, to avoid leaking memory. */
> >>>>              rc = guest_remove_page(d, gfn_x(gpfn));
> >>>
> >>> What about the access differing between the old and the new entries,
> >>> while pointing to the same mfn, would Xen install the new entry
> >>> successfully?
> >>
> >> Yes - guest_physmap_add_page() doesn't get bypassed.
> > 
> > But will it succeed if the default access is different from the one
> > the installed entry currently has? Will it update the access bits
> > to match the new ones?
> 
> It will construct and put in place a completely new entry. Old
> values are of concern only for keeping statistics right, and
> of course for refusing certain changes.
> 
> >>> Seems easier to me to use guest_physmap_remove_page in that case to
> >>> remove the entry from the p2m without freeing the page.
> >>
> >> Why do any removal when none is really needed? I also don't see
> >> this fit the "special pages" clause and comment very well. I'd
> >> question the other way around whether guest_physmap_remove_page()
> >> needs calling at all (the instance above; the other one of course
> >> is needed).
> > 
> > Right, replying to my question above: it will succeed, since
> > guest_physmap_add_entry will overwrite the previous entry.
> > 
> > I agree, it looks like the guest_physmap_remove_page call done for
> > special pages is not really needed, as guest_physmap_add_entry would
> > already overwrite such entries and not free the associated mfn?
> 
> That's my understanding, yes.

Then:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Although I would also like to see the guest_physmap_remove_page call
for special pages removed, for consistency.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 12:05:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 12:05:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10055.26476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCrm-0007ap-Q4; Wed, 21 Oct 2020 12:05:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10055.26476; Wed, 21 Oct 2020 12:05:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVCrm-0007ai-L8; Wed, 21 Oct 2020 12:05:18 +0000
Received: by outflank-mailman (input) for mailman id 10055;
 Wed, 21 Oct 2020 12:05:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVCrk-0007ad-Jp
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 12:05:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ec8a28d-5e38-4130-8606-d6200dfeea44;
 Wed, 21 Oct 2020 12:05:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 82DD9AFB7;
 Wed, 21 Oct 2020 12:05:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603281914;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7VOUx0DkW26czOinrAAOSNh8nOmYfjZLZ1vWW+O55sE=;
	b=Km/3LS+6FmllUUOLRK18on/0dx+PQweec3Ocn58jwjjF8CznUozt7/37pxiLKkOozCBcMV
	B0ZkC692a1cTJHPOTLDpbRoJL9L+oWFrDBSUK5A1cHTJXcHRkY48mDEYo5V/Ux1uW1dnwN
	3V5PSnOjH0TEu8AMO/TSFkBp5pUa+wE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 82DD9AFB7;
	Wed, 21 Oct 2020 12:05:14 +0000 (UTC)
Subject: Re: [PATCH 0/2] tools/libs: fix build rules to correctly deal with
 multiple public headers
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
Message-ID: <d0e9f859-5ac5-d683-e7eb-535184a561b0@suse.com>
Date: Wed, 21 Oct 2020 14:05:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.10.2020 09:19, Jan Beulich wrote:
> 1: fix header symlinking rule
> 2: fix uninstall rule for header files

Actually I've noticed these issues were introduced only relatively
recently, in particular after 4.14. I've added

Fixes: bc44e2fb3199 ("tools: add a copy of library headers in tools/include")

to both of them, albeit with the above they won't even need to be
considered for backporting.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 12:45:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 12:45:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10060.26487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVDUq-0002mT-Oy; Wed, 21 Oct 2020 12:45:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10060.26487; Wed, 21 Oct 2020 12:45:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVDUq-0002mM-M3; Wed, 21 Oct 2020 12:45:40 +0000
Received: by outflank-mailman (input) for mailman id 10060;
 Wed, 21 Oct 2020 12:45:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=286h=D4=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kVDUq-0002mH-6R
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 12:45:40 +0000
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fdd37b36-e955-41b7-8bdd-950c3ddd4dae;
 Wed, 21 Oct 2020 12:45:38 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id l28so2887382lfp.10
 for <xen-devel@lists.xenproject.org>; Wed, 21 Oct 2020 05:45:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=8q1bxnQpOzUjFsYj7y5zeYUNO8C/aL6+R6t4O3DmJrE=;
        b=faMO7zxo1+6GUlcNFfAb+UjM8HEFLZHUVRdR0WGR7/GoK6KxmWlFuZznfx+WgnQwxB
         EnzZBVLZNRCfx/Z36/Uqi5XHiIAOCx8rgptx4srRxViTNbTFmZjdCaFAaq1yj8Ue4jU5
         QB64oIjBtR/hwIWQs6rknajj00KyTldbfDK9gyvm2FohmjanGgilgbxBv6UTpsGXPE6P
         y1PyIJOYSc/HjLpK7ZQHY72uIWH+39cXsx4PfZNoerr155IRBjjqTJxuflwp4MacXSEz
         lLP+ogyUKUmQVl905x9lA3wFtV94KPEQrU9s3er93JbHy4ADsdBBkKW1dltStAkVt6A7
         +/YA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=8q1bxnQpOzUjFsYj7y5zeYUNO8C/aL6+R6t4O3DmJrE=;
        b=HolQDVuhOZXFoQooHa7LXuG4uAs+P1AwYNC9MIc/yG+EYwYIDZ0NkU2YavEGYevByx
         Uck8J/S9EXLxQdDmIioL0pjCQ4cAFxbtD3hfBQDbS1mGyDEIKgP/S5FL689Agf/mb5EN
         lED2QEzkcy5+ovbZSMVsi+0hRu1E5siBdcOnO4AvA/+5NPA7R1mjO3HJixRG3C+lM6rf
         qX/rC+LiE0aLc6pALelXbqNVvgFoYTzQliwBMhyzaYbbsUj+kCagjDMjuALD7WH0qFbS
         FPpudgikZceQESZFZB6MOabVmOY/I/+jIXgXJVpG1BVKliiGGiQA3Vam79Hw8+SUxsfF
         h0zQ==
X-Gm-Message-State: AOAM532EGyl17AqcX/x8hg53QF6gVaZ7LflYNqdDt9luibY/GO9BaqgS
	L5340kR/bIFwqXhri+E63rhO9VWbH9Ys5BVpjgk=
X-Google-Smtp-Source: ABdhPJx7uwr7l+T8x9VhEwwwahy+m8BtASpeKWoVnO/SJkiEF+g5Iv36IO9teBJE0Ezkj5EvjWOBTtmBK41oET3NsBk=
X-Received: by 2002:ac2:47fc:: with SMTP id b28mr1122588lfp.454.1603284337480;
 Wed, 21 Oct 2020 05:45:37 -0700 (PDT)
MIME-Version: 1.0
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com> <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com> <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
In-Reply-To: <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 21 Oct 2020 08:45:25 -0400
Message-ID: <CAKf6xpuTE4gBNe4YXPYh_hAMLaJduDuKL5_6aC4H=y6DRxaxvw@mail.gmail.com>
Subject: Re: i915 dma faults on Xen
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, intel-gfx@lists.freedesktop.org, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Oct 21, 2020 at 5:58 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Fri, Oct 16, 2020 at 12:23:22PM -0400, Jason Andryuk wrote:
> >
> > The RMRRs are:
> > (XEN) [VT-D]Host address width 39
> > (XEN) [VT-D]found ACPI_DMAR_DRHD:
> > (XEN) [VT-D]  dmaru->address = fed90000
> > (XEN) [VT-D]drhd->address = fed90000 iommu->reg = ffff82c00021d000
> > (XEN) [VT-D]cap = 1c0000c40660462 ecap = 19e2ff0505e
> > (XEN) [VT-D] endpoint: 0000:00:02.0
> > (XEN) [VT-D]found ACPI_DMAR_DRHD:
> > (XEN) [VT-D]  dmaru->address = fed91000
> > (XEN) [VT-D]drhd->address = fed91000 iommu->reg = ffff82c00021f000
> > (XEN) [VT-D]cap = d2008c40660462 ecap = f050da
> > (XEN) [VT-D] IOAPIC: 0000:00:1e.7
> > (XEN) [VT-D] MSI HPET: 0000:00:1e.6
> > (XEN) [VT-D]  flags: INCLUDE_ALL
> > (XEN) [VT-D]found ACPI_DMAR_RMRR:
> > (XEN) [VT-D] endpoint: 0000:00:14.0
> > (XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 78863000 end_addr 78882fff
> > (XEN) [VT-D]found ACPI_DMAR_RMRR:
> > (XEN) [VT-D] endpoint: 0000:00:02.0
> > (XEN) [VT-D]dmar.c:615:   RMRR region: base_addr 7d000000 end_addr 7f7fffff
> > (XEN) [VT-D]found ACPI_DMAR_RMRR:
> > (XEN) [VT-D] endpoint: 0000:00:16.7
> > (XEN) [VT-D]dmar.c:581:  Non-existent device (0000:00:16.7) is
> > reported in RMRR (78907000, 78986fff)'s scope!
> > (XEN) [VT-D]dmar.c:596:   Ignore the RMRR (78907000, 78986fff) due to
>
> This is also part of a reserved region, so should be added to the
> iommu page tables anyway regardless of this message.

I wonder if this is for the Intel AMT PCI device?  I assumed it was
disabled, but I actually can't find it listed in the BIOS
configuration to verify.

> > devices under its scope are not PCI discoverable!
> >
> > > > I agree.
> > > >
> > > > Can you paste the memory map as printed by Xen when booting, and what
> > > > command line are you using to boot Xen.
> > >
> > > So this is OpenXT, and it's booting EFI -> xen -> tboot -> xen
> > >
> > > There's the memory map
> > > (XEN) TBOOT RAM map:
> > > (XEN)  0000000000000000 - 0000000000060000 (usable)
> > > (XEN)  0000000000060000 - 0000000000068000 (reserved)
> > > (XEN)  0000000000068000 - 000000000009e000 (usable)
> > > (XEN)  000000000009e000 - 000000000009f000 (reserved)
> > > (XEN)  000000000009f000 - 00000000000a0000 (usable)
> > > (XEN)  00000000000a0000 - 0000000000100000 (reserved)
> > > (XEN)  0000000000100000 - 0000000040000000 (usable)
> > > (XEN)  0000000040000000 - 0000000040400000 (reserved)
> > > (XEN)  0000000040400000 - 000000007024b000 (usable)
> > > (XEN)  000000007024b000 - 000000007024c000 (ACPI NVS)
> > > (XEN)  000000007024c000 - 000000007024d000 (reserved)
> > > (XEN)  000000007024d000 - 0000000077f19000 (usable)
> > > (XEN)  0000000077f19000 - 0000000078987000 (reserved)
> > > (XEN)  0000000078987000 - 0000000078a04000 (ACPI data)
> > > (XEN)  0000000078a04000 - 0000000078ea3000 (ACPI NVS)
> > > (XEN)  0000000078ea3000 - 000000007acff000 (reserved)
> > > (XEN)  000000007acff000 - 000000007ad00000 (usable)
> > > (XEN)  000000007ad00000 - 000000007f800000 (reserved)
> > > (XEN)  00000000f0000000 - 00000000f8000000 (reserved)
> > > (XEN)  00000000fe000000 - 00000000fe011000 (reserved)
> > > (XEN)  00000000fec00000 - 00000000fec01000 (reserved)
> > > (XEN)  00000000fee00000 - 00000000fee01000 (reserved)
> > > (XEN)  00000000ff000000 - 0000000100000000 (reserved)
> > > (XEN)  0000000100000000 - 000000047c800000 (usable)
> > > (XEN) EFI memory map:
> > > (XEN)  0000000000000-000000009dfff type=7 attr=000000000000000f
> > > (XEN)  000000009e000-000000009efff type=0 attr=000000000000000f
> > > (XEN)  000000009f000-000000009ffff type=3 attr=000000000000000f
> > > (XEN)  0000000100000-000003fffffff type=7 attr=000000000000000f
> > > (XEN)  0000040000000-00000403fffff type=0 attr=000000000000000f
> > > (XEN)  0000040400000-000005e359fff type=7 attr=000000000000000f
> > > (XEN)  000005e35a000-000005e399fff type=4 attr=000000000000000f
> > > (XEN)  000005e39a000-000006a47dfff type=7 attr=000000000000000f
> > > (XEN)  000006a47e000-000006c3eefff type=2 attr=000000000000000f
> > > (XEN)  000006c3ef000-000006d5eefff type=1 attr=000000000000000f
> > > (XEN)  000006d5ef000-000006d86cfff type=2 attr=000000000000000f
> > > (XEN)  000006d86d000-000006d978fff type=1 attr=000000000000000f
> > > (XEN)  000006d979000-000006dc7afff type=4 attr=000000000000000f
> > > (XEN)  000006dc7b000-000006dc98fff type=3 attr=000000000000000f
> > > (XEN)  000006dc99000-000006dcc7fff type=4 attr=000000000000000f
> > > (XEN)  000006dcc8000-000006dccdfff type=3 attr=000000000000000f
> > > (XEN)  000006dcce000-00000701a5fff type=4 attr=000000000000000f
> > > (XEN)  00000701a6000-00000701c8fff type=3 attr=000000000000000f
> > > (XEN)  00000701c9000-00000701edfff type=4 attr=000000000000000f
> > > (XEN)  00000701ee000-0000070204fff type=3 attr=000000000000000f
> > > (XEN)  0000070205000-000007022cfff type=4 attr=000000000000000f
> > > (XEN)  000007022d000-000007024afff type=3 attr=000000000000000f
> > > (XEN)  000007024b000-000007024bfff type=10 attr=000000000000000f
> > > (XEN)  000007024c000-000007024cfff type=6 attr=800000000000000f
> > > (XEN)  000007024d000-000007024dfff type=4 attr=000000000000000f
> > > (XEN)  000007024e000-0000070282fff type=3 attr=000000000000000f
> > > (XEN)  0000070283000-00000702c3fff type=4 attr=000000000000000f
> > > (XEN)  00000702c4000-00000702c8fff type=3 attr=000000000000000f
> > > (XEN)  00000702c9000-00000702defff type=4 attr=000000000000000f
> > > (XEN)  00000702df000-0000070307fff type=3 attr=000000000000000f
> > > (XEN)  0000070308000-0000070317fff type=4 attr=000000000000000f
> > > (XEN)  0000070318000-0000070319fff type=3 attr=000000000000000f
> > > (XEN)  000007031a000-0000070331fff type=4 attr=000000000000000f
> > > (XEN)  0000070332000-0000070349fff type=3 attr=000000000000000f
> > > (XEN)  000007034a000-0000070356fff type=2 attr=000000000000000f
> > > (XEN)  0000070357000-0000070357fff type=7 attr=000000000000000f
> > > (XEN)  0000070358000-0000070358fff type=2 attr=000000000000000f
> > > (XEN)  0000070359000-0000076f3efff type=4 attr=000000000000000f
> > > (XEN)  0000076f3f000-00000772affff type=7 attr=000000000000000f
> > > (XEN)  00000772b0000-0000077f18fff type=3 attr=000000000000000f
> > > (XEN)  0000077f19000-0000078986fff type=0 attr=000000000000000f
> > > (XEN)  0000078987000-0000078a03fff type=9 attr=000000000000000f
> > > (XEN)  0000078a04000-0000078ea2fff type=10 attr=000000000000000f
> > > (XEN)  0000078ea3000-000007ab22fff type=6 attr=800000000000000f
> > > (XEN)  000007ab23000-000007acfefff type=5 attr=800000000000000f
> > > (XEN)  000007acff000-000007acfffff type=4 attr=000000000000000f
> > > (XEN)  0000100000000-000047c7fffff type=7 attr=000000000000000f
> > > (XEN)  00000000a0000-00000000fffff type=0 attr=0000000000000000
> > > (XEN)  000007ad00000-000007adfffff type=0 attr=070000000000000f
> > > (XEN)  000007ae00000-000007f7fffff type=0 attr=0000000000000000
> > > (XEN)  00000f0000000-00000f7ffffff type=11 attr=800000000000100d
> > > (XEN)  00000fe000000-00000fe010fff type=11 attr=8000000000000001
> > > (XEN)  00000fec00000-00000fec00fff type=11 attr=8000000000000001
> > > (XEN)  00000fee00000-00000fee00fff type=11 attr=8000000000000001
> > > (XEN)  00000ff000000-00000ffffffff type=11 attr=800000000000100d
> > >
> > > Command line
> > > console=com1 dom0_mem=min:420M,max:420M,420M efi=no-rs,attr=uc
> > > com1=115200,8n1,pci mbi-video vga=current flask=enforcing loglvl=debug
> > > guest_loglvl=debug smt=0 ucode=-1 bootscrub=1
> > > argo=yes,mac-permissive=1 iommu=force,igfx
> > >
> > > iommu=force,igfx was to force igfx back on.  I added a dmi quirk to
> > > set no-igfx on this platform as a temporary workaround.
>
> I assume setting no-igfx fixed the issue and the card works fine in
> that case?

Yes, it seems to work.  The internal and 2 external monitors are
displaying and seem okay.  If I unplug the dock with those 2 displays,
then plug in a different dock with a different monitor, I've seen
(unclear how often) the i915 report errors configuring its
"pipe", and the built-in display (eDP) goes black.  But it may
recover sometimes.

> > > > Have you tried adding dom0-iommu=map-inclusive to the Xen command
> > > > line?
> >
> > Still seeing faults with dom0-iommu=map-inclusive.  At a different
> > address this time:
> > Oct 16 15:58:05.110768 VM hypervisor: (XEN) [VT-D]DMAR:[DMA Read]
> > Request device [0000:00:02.0] fault addr ea0c4f000, iommu reg = ffff
>
> That's also past the end of RAM.
>
> > 82c00021d000
> > Oct 16 15:58:05.110774 VM hypervisor: (XEN) [VT-D]DMAR: reason 06 -
> > PTE Read access is not set
> > Oct 16 15:58:05.110777 VM hypervisor: (XEN) print_vtd_entries: iommu
> > #0 dev 0000:00:02.0 gmfn ea0c4f
> > Oct 16 15:58:05.110780 VM hypervisor: (XEN)     root_entry[00] = 46e129001
> > Oct 16 15:58:05.110782 VM hypervisor: (XEN)     context[10] = 2_46e128001
> > Oct 16 15:58:05.110785 VM hypervisor: (XEN)     l4[000] = 46e11b003
> > Oct 16 15:58:05.110787 VM hypervisor: (XEN)     l3[03a] = 0
> > Oct 16 15:58:05.110789 VM hypervisor: (XEN)     l3[03a] not present
> >
> > The previous posting, the two faulting addresses repeated in pairs.
> > Here it is only this one address repeating.
> >
> > I plugged and unplugged and a different address was repeating with a
> > few other random addresses with 1 or 2 faults.  Here is uniq -c output
> > of the address and count pulled from the logs:
> > 0x1ce9d6b000 2007
> > 0x31b50d5000 1
> > 0x1ce9d6b000 882
> > 0x707741000 1
> > 0x1ce9d6b000 1114
> > 0x20d2099000 1
> > 0x1ce9d6b000 3489
> > 0xeb98eb000 1
> > 0x1ce9d6b000 2430
> > 0xeb98eb000 1
> > 0x1ce9d6b000 1300
> > 0x22f20bb000 1
> > 0x1ce9d6b000 269
> > 0x22f20bb000 1
> > 0x1ce9d6b000 5091
> > 0x6c99ec9000 1
> > 0x1ce9d6b000 29
> > 0xeb98eb000 1
> > 0x1ce9d6b000 4599
> > 0x6c99ec9000 1
> > 0x1ce9d6b000 1989
>
> Hm, it's hard to tell what's going on. In my limited experience with
> IOMMU faults on broken systems, there's a small range that initially
> triggers those, and then the device goes wonky and starts accessing a
> whole load of invalid addresses.
>
> You could try adding those manually using the rmrr Xen command line
> option [0], maybe you can figure out which range(s) are missing?

They seem to change, so it's hard to know.  Would there be harm in
adding one to cover the end of RAM ( 0x04,7c80,0000 ) to (
0xff,ffff,ffff )?  Maybe that would just quiet the pointless faults
while leaving the IOMMU enabled?

Thanks for taking a look.

Regards,
Jason
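[Editorial note: as a concrete illustration of Roger's suggestion above,
missing ranges can in principle be supplied via the `rmrr=` Xen
command-line option, whose documented syntax is
`rmrr=start<-end>=[s1]bdf1[,[s1]bdf2[,...]]`. The example below guesses a
range and device from the ignored RMRR quoted earlier in the thread
(0000:00:16.7, 78907000-78986fff); it is illustrative only and would need
to be checked against the Xen command-line documentation for the exact
BDF format.]

```
rmrr=0x78907000-0x78986fff=0:0:16.7
```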


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 12:53:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 12:53:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10063.26500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVDc5-0003hH-IJ; Wed, 21 Oct 2020 12:53:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10063.26500; Wed, 21 Oct 2020 12:53:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVDc5-0003hA-Em; Wed, 21 Oct 2020 12:53:09 +0000
Received: by outflank-mailman (input) for mailman id 10063;
 Wed, 21 Oct 2020 12:53:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVDc4-0003h5-AC
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 12:53:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5604dd80-3872-47fb-a6d5-ec577dc41bce;
 Wed, 21 Oct 2020 12:53:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1A251B007;
 Wed, 21 Oct 2020 12:53:04 +0000 (UTC)
X-Inumbo-ID: 5604dd80-3872-47fb-a6d5-ec577dc41bce
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603284784;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YW4GeGaaq+R5j4BAU33aDFRNJQJgkt5cTgR3nwU6Kfk=;
	b=OEqS13xgP1xXGjTUMzbe2fkZ/2Iio2FsHeiJFz4rLyhRrrh5qUESVn1RyzT9eHvgC/xWCA
	XcXw5frSoysAC2VTooKZTpEcXbPXp+3NdlYSc4vF5Iygr5KSgpZykL7dtaeiDFYRw5riEU
	U+Z+RJboJaB1kQHNKEX98jVaJgE6H2o=
Subject: Re: i915 dma faults on Xen
To: Jason Andryuk <jandryuk@gmail.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, intel-gfx@lists.freedesktop.org,
 xen-devel <xen-devel@lists.xenproject.org>
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
 <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
 <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
 <CAKf6xpuTE4gBNe4YXPYh_hAMLaJduDuKL5_6aC4H=y6DRxaxvw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a4dd7778-9bd4-00c1-3056-96d435b70d70@suse.com>
Date: Wed, 21 Oct 2020 14:52:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <CAKf6xpuTE4gBNe4YXPYh_hAMLaJduDuKL5_6aC4H=y6DRxaxvw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 14:45, Jason Andryuk wrote:
> On Wed, Oct 21, 2020 at 5:58 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>> Hm, it's hard to tell what's going on. In my limited experience with
>> IOMMU faults on broken systems, there's a small range that initially
>> triggers those, and then the device goes wonky and starts accessing a
>> whole load of invalid addresses.
>>
>> You could try adding those manually using the rmrr Xen command line
>> option [0], maybe you can figure out which range(s) are missing?
> 
> They seem to change, so it's hard to know.  Would there be harm in
> adding one to cover the end of RAM ( 0x04,7c80,0000 ) to (
> 0xff,ffff,ffff )?  Maybe that would just quiet the pointless faults
> while leaving the IOMMU enabled?

While they may quieten the faults, I don't think those faults are
pointless. They indicate some problem with the software (less
likely the hardware, possibly the firmware) that you're using.
Also there's the question of what the overall behavior is going
to be when devices are permitted to access unpopulated address
ranges. I assume you did check already that no devices have their
BARs placed in that range?
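
For reference, should anyone want to experiment with this despite the
above, extra unity-mapped ranges are specified with the rmrr= Xen command
line option. A hedged sketch of the documented syntax (the PFNs and the
BDF below are placeholders, not a recommendation):

```text
# rmrr=start<-end>=[s]bdf — start/end are page frame numbers,
# bdf identifies the device the mapping is created for.
# Placeholder values only:
rmrr=0x47c800-0xffffff=0:00:02.0
```

Whether any device's BARs actually sit in such a range can usually be
checked from dom0 with lspci -v, looking at the "Memory at ..." lines.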

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 13:01:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 13:01:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10073.26540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVDjt-0004oC-MV; Wed, 21 Oct 2020 13:01:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10073.26540; Wed, 21 Oct 2020 13:01:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVDjt-0004o5-Iq; Wed, 21 Oct 2020 13:01:13 +0000
Received: by outflank-mailman (input) for mailman id 10073;
 Wed, 21 Oct 2020 13:01:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVDjr-0004nv-RB
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 13:01:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 749ef855-1a57-4052-9da9-3da1e0905fd0;
 Wed, 21 Oct 2020 13:01:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVDjo-0005iJ-Qv; Wed, 21 Oct 2020 13:01:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVDjo-0000SV-JL; Wed, 21 Oct 2020 13:01:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVDjo-0007KI-IP; Wed, 21 Oct 2020 13:01:08 +0000
X-Inumbo-ID: 749ef855-1a57-4052-9da9-3da1e0905fd0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Uom5VydGY6cQDegvnJcg/hxe9BR+tln7WuZpxnw2oww=; b=IaHL1AYH/dzCN9ZBqja7nawBvN
	ivZZur/R3dxBf+LdQIXVl0Yg05AAQXTDO7Soyu5iv6emkjHx4awHUslJSSsq09NEmdH8hKyDnJ+Jo
	OTuwdMrjqwStC9CVxNlW/cfRRHK+viASLwLL81PJuXeDyXjgp3qDPRcrPc3/4EFS6c90=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156034-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 156034: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=63199dfd3a0418f1677c6ccd7fe05b123af4610a
X-Osstest-Versions-That:
    xen=3630a367854c98bbf8e747d09eeab7e68f370003
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 13:01:08 +0000

flight 156034 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156034/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 155388
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 155388
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155388
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155388
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155388
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155388
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155388
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155388
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155388
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155388
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155388
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155388
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155388
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  63199dfd3a0418f1677c6ccd7fe05b123af4610a
baseline version:
 xen                  3630a367854c98bbf8e747d09eeab7e68f370003

Last test of basis   155388  2020-10-03 17:40:53 Z   17 days
Testing same since   156034  2020-10-20 13:35:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3630a36785..63199dfd3a  63199dfd3a0418f1677c6ccd7fe05b123af4610a -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 13:07:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 13:07:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10077.26555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVDpv-00056Y-Ir; Wed, 21 Oct 2020 13:07:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10077.26555; Wed, 21 Oct 2020 13:07:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVDpv-00056R-Ft; Wed, 21 Oct 2020 13:07:27 +0000
Received: by outflank-mailman (input) for mailman id 10077;
 Wed, 21 Oct 2020 13:07:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yt4r=D4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVDpu-00056M-Ok
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 13:07:26 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aebc959f-3077-4a61-8964-2c036dc670c6;
 Wed, 21 Oct 2020 13:07:25 +0000 (UTC)
X-Inumbo-ID: aebc959f-3077-4a61-8964-2c036dc670c6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603285645;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=ClnLEGNUPj+vWOEdbAhPiu3vVyLMnxIDNxKGoCjlfbk=;
  b=LLL5yGPthG6e8IeidMRCp7Innmm0SGzrg4+ZWD1ARqirX69uB4xCA3M2
   Q4JmvAmxylZk1YX6yHl3vefte7LLgmGVKO34NUlxbwFRrUv0IUmXbu32U
   NVx9y5Ewdwkm1ttqNpNE+qnFWAR0JfoAyRRXffarP4mtC+oRjw40JkEFp
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: QzQyDFug2bnDXNed5yYiXo+hSko/zYmU8QQ2+xRD9bJ0g7yu6l7W1VzEcalKqPv3t+mat6uoQl
 bAoB+L4QBLt0rFPJTY3oYhL9DylF0ab4g3tKffZDd6XH5p6HJybUD3u/5Gxjfz/hDvuoLPfzTd
 rVAYeESMVoxulph7lOXVN6VYo1fsXrQoy6aspq2zNrTELMbMTLujkr0XoBpHv4EGG4FJZZNAau
 8VLiMYevf1MZpEVD3oArMdy2qO4fKLyJR+yoYBAmu+RpmRUKWIXGjmeJX7cKzk/vKkbJAiRHfX
 t3Q=
X-SBRS: 2.5
X-MesageID: 30534304
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="30534304"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/pv: Flush TLB in response to paging structure changes
Date: Wed, 21 Oct 2020 14:07:08 +0100
Message-ID: <20201021130708.12249-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
is safe from Xen's point of view (as the update only affects guest mappings),
and the guest is required to flush suitably after making updates.

However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
writeable pagetables, etc.) is an implementation detail outside of the
API/ABI.

Changes in the paging structure require invalidations in the linear pagetable
range for subsequent accesses into the linear pagetables to access non-stale
mappings.  Xen must provide suitable flushing to prevent intermixed guest
actions from accidentally accessing/modifying the wrong pagetable.

For all L2 and higher modifications, flush the TLB.  PV guests cannot create
L2 or higher entries with the Global bit set, so no mappings established in
the linear range can be global.  (This could in principle be an order 39 flush
starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)

This combines with sync_guest for XPTI L4 "shadowing", but has some asymmetry
between local and remote flush requirements.  Replace the sync_guest boolean
with flush_flags_{local,all} and accumulate flags, performing all required
flushing at the end.
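The accumulate-then-flush pattern described above can be sketched in isolation.  This is a minimal illustration only: the flag values and the `accumulate()` helper are made up for the example and are not Xen's real FLUSH_* constants or code.

```c
#include <assert.h>

/* Illustrative flag values; Xen's real FLUSH_* constants differ. */
#define FLUSH_TLB        (1u << 0)
#define FLUSH_ROOT_PGTBL (1u << 1)

/*
 * Instead of a single sync_guest boolean, accumulate per-update flush
 * requirements into a flag word and perform one combined flush at the
 * end of the batch.
 */
static unsigned int accumulate(unsigned int flags, int level, int local_in_use)
{
    if (level >= 2)              /* any L2+ update may change the structure */
        flags |= FLUSH_TLB;
    if (level == 4 && local_in_use)
        flags |= FLUSH_ROOT_PGTBL;
    return flags;
}
```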

This is XSA-286.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v2:
 * Use two separate flush flags.
 * Use non-global flushes.
---
 xen/arch/x86/mm.c | 61 +++++++++++++++++++++++++++++++++++++++----------------
 1 file changed, 43 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 918ee2bbe3..87860c2ca3 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3887,7 +3887,7 @@ long do_mmu_update(
     struct vcpu *curr = current, *v = curr;
     struct domain *d = v->domain, *pt_owner = d, *pg_owner;
     mfn_t map_mfn = INVALID_MFN, mfn;
-    bool sync_guest = false;
+    unsigned int flush_flags_local = 0, flush_flags_all = 0;
     uint32_t xsm_needed = 0;
     uint32_t xsm_checked = 0;
     int rc = put_old_guest_table(curr);
@@ -4037,6 +4037,9 @@ long do_mmu_update(
                         break;
                     rc = mod_l2_entry(va, l2e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
+                    /* Paging structure may have changed.  Flush linear range. */
+                    if ( !rc )
+                        flush_flags_all |= FLUSH_TLB;
                     break;
 
                 case PGT_l3_page_table:
@@ -4044,6 +4047,9 @@ long do_mmu_update(
                         break;
                     rc = mod_l3_entry(va, l3e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
+                    /* Paging structure may have changed.  Flush linear range. */
+                    if ( !rc )
+                        flush_flags_all |= FLUSH_TLB;
                     break;
 
                 case PGT_l4_page_table:
@@ -4051,27 +4057,28 @@ long do_mmu_update(
                         break;
                     rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
-                    if ( !rc && pt_owner->arch.pv.xpti )
+                    /* Paging structure may have changed.  Flush linear range. */
+                    if ( !rc )
                     {
-                        bool local_in_use = false;
+                        bool local_in_use = mfn_eq(
+                            pagetable_get_mfn(curr->arch.guest_table), mfn);
 
-                        if ( mfn_eq(pagetable_get_mfn(curr->arch.guest_table),
-                                    mfn) )
-                        {
-                            local_in_use = true;
-                            get_cpu_info()->root_pgt_changed = true;
-                        }
+                        flush_flags_all |= FLUSH_TLB;
+
+                        if ( local_in_use )
+                            flush_flags_local |= FLUSH_TLB | FLUSH_ROOT_PGTBL;
 
                         /*
                          * No need to sync if all uses of the page can be
                          * accounted to the page lock we hold, its pinned
                          * status, and uses on this (v)CPU.
                          */
-                        if ( (page->u.inuse.type_info & PGT_count_mask) >
+                        if ( pt_owner->arch.pv.xpti &&
+                             (page->u.inuse.type_info & PGT_count_mask) >
                              (1 + !!(page->u.inuse.type_info & PGT_pinned) +
                               mfn_eq(pagetable_get_mfn(curr->arch.guest_table_user),
                                      mfn) + local_in_use) )
-                            sync_guest = true;
+                            flush_flags_all |= FLUSH_ROOT_PGTBL;
                     }
                     break;
 
@@ -4173,18 +4180,36 @@ long do_mmu_update(
     if ( va )
         unmap_domain_page(va);
 
-    if ( sync_guest )
+    /*
+     * Flushing needs to occur for one of several reasons.
+     *
+     * 1) An update to an L2 or higher occurred.  This potentially changes the
+     *    pagetable structure, requiring a flush of the linear range.
+     * 2) An update to an L4 occurred, and XPTI is enabled.  All CPUs running
+     *    on a copy of this L4 need refreshing.
+     */
+    if ( flush_flags_all || flush_flags_local )
     {
+        cpumask_t *mask = pt_owner->dirty_cpumask;
+
         /*
-         * Force other vCPU-s of the affected guest to pick up L4 entry
-         * changes (if any).
+         * Local flushing may be asymmetric with remote.  If there is local
+         * flushing to do, perform it separately and omit the current CPU from
+         * pt_owner->dirty_cpumask.
          */
-        unsigned int cpu = smp_processor_id();
-        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
+        if ( flush_flags_local )
+        {
+            unsigned int cpu = smp_processor_id();
+
+            mask = per_cpu(scratch_cpumask, cpu);
+            cpumask_copy(mask, pt_owner->dirty_cpumask);
+            __cpumask_clear_cpu(cpu, mask);
+
+            flush_local(flush_flags_local);
+        }
 
-        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
         if ( !cpumask_empty(mask) )
-            flush_mask(mask, FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL);
+            flush_mask(mask, flush_flags_all);
     }
 
     perfc_add(num_page_updates, i);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 13:36:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 13:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10081.26566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVEIH-0007uB-RO; Wed, 21 Oct 2020 13:36:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10081.26566; Wed, 21 Oct 2020 13:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVEIH-0007u4-OO; Wed, 21 Oct 2020 13:36:45 +0000
Received: by outflank-mailman (input) for mailman id 10081;
 Wed, 21 Oct 2020 13:36:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=286h=D4=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kVEIG-0007tz-FK
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 13:36:44 +0000
Received: from mail-lf1-x12d.google.com (unknown [2a00:1450:4864:20::12d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 638910aa-936d-4132-9e20-1a64d75b6c20;
 Wed, 21 Oct 2020 13:36:43 +0000 (UTC)
Received: by mail-lf1-x12d.google.com with SMTP id a9so3139576lfc.7
 for <xen-devel@lists.xenproject.org>; Wed, 21 Oct 2020 06:36:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=lrH3D/+6YhDxpCFWKiukOFeDa/kuA2Z9B0hdudtFyXU=;
        b=CfWtwJB9GIYMeENoIM+rvSb94AJ/unbe3PgvR1hVri27vuz5zqfvjBCKlPzqUK6vK5
         Rfk9JTBIfTvNZaEDrDSdKBOJ0I3PIkO2FHm75nbXfem6jJe5PrEvs6ViMk2zVhxgO7Qs
         6JElLj1A6/m8z1p3nNM/0Q/MX9FbLR6avS1FXkr5UIZk94RM8AXEoWKT3gA5QkQQrAsJ
         rzajGxhdtWLxG0Az6YaO9G50j1+QpSt0FwtVyXOff1Q2OOcBC/pmO5FC1fFK3A3Fyl6/
         eXbthKKY/9YghsZApbck0oVGiGt3BdJgIm4XNusp5pUWMSuCAquhzko5Yh5phlO40G6f
         DiOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=lrH3D/+6YhDxpCFWKiukOFeDa/kuA2Z9B0hdudtFyXU=;
        b=s66kk8/uGhDyjGczjHzvvaLtjsraGGYXGVlG9GSK3L3uOvPiPcNFSSz/I5v9NmGNH1
         +8Kwac6pbTTcFcX04CuwwXwDfSkf7ylgCgB1QaCCaLP5d/zd45ZPOpAKQg7qd2vdR81w
         yROK6rFVTW2om1ukjZ6s+SONyhp49pahXFKkY2ypOiaswPWDbdtfsecFYnqXV/pf3Gvv
         DzQYxGQl+wnHezcCs4zNFmdWRR0Zmk89PaR4okU2eWflAgAPsVdCAMd/hIF0AxgiNoDF
         4/8jALV3q+KBzToo2Le1mrAYdvrXZTMqdrqWPx9mHN0C3cnrwbUBB2LYS6iEeCCXWhA+
         lzOg==
X-Gm-Message-State: AOAM530ZG85e/MPSFFAg5V5RtW1FCNzhqKeg9AZrqHN79ajfj7lZC2Zy
	nv0Ym0di8sxYmcCAKTyWEDGyWYSOfAn+H9wyDPE=
X-Google-Smtp-Source: ABdhPJwaw0B2BxohZx2W/5ekhIK+Zik2kh8+yEatERPJMG5UG0JGyrU6+fQrIXg1TmtzU3mrOH1+BLaZsvep/tAuT6k=
X-Received: by 2002:ac2:42ce:: with SMTP id n14mr1243449lfl.412.1603287402490;
 Wed, 21 Oct 2020 06:36:42 -0700 (PDT)
MIME-Version: 1.0
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com> <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
 <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger> <CAKf6xpuTE4gBNe4YXPYh_hAMLaJduDuKL5_6aC4H=y6DRxaxvw@mail.gmail.com>
 <a4dd7778-9bd4-00c1-3056-96d435b70d70@suse.com>
In-Reply-To: <a4dd7778-9bd4-00c1-3056-96d435b70d70@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 21 Oct 2020 09:36:30 -0400
Message-ID: <CAKf6xpvKiWiU5Wsv2C1EiEFr77nMZTd+VHgkdk7qcKw1OFD8Vg@mail.gmail.com>
Subject: Re: i915 dma faults on Xen
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, intel-gfx@lists.freedesktop.org, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Oct 21, 2020 at 8:53 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.10.2020 14:45, Jason Andryuk wrote:
> > On Wed, Oct 21, 2020 at 5:58 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
> >> Hm, it's hard to tell what's going on. My limited experience with
> >> IOMMU faults on broken systems is that there's a small range that initially
> >> triggers those, and then the device goes wonky and starts accessing a
> >> whole load of invalid addresses.
> >>
> >> You could try adding those manually using the rmrr Xen command line
> >> option [0], maybe you can figure out which range(s) are missing?
> >
> > They seem to change, so it's hard to know.  Would there be harm in
> > adding one to cover the end of RAM ( 0x04,7c80,0000 ) to (
> > 0xff,ffff,ffff )?  Maybe that would just quiet the pointless faults
> > while leaving the IOMMU enabled?
>
> While they may quieten the faults, I don't think those faults are
> pointless. They indicate some problem with the software (less
> likely the hardware, possibly the firmware) that you're using.
> Also there's the question of what the overall behavior is going
> to be when devices are permitted to access unpopulated address
> ranges. I assume you did check already that no devices have their
> BARs placed in that range?

Isn't no-igfx already letting them try to read those unpopulated addresses?

Looks like all PCI BARs are below 4GB.  The graphics ones are:
00:02.0 VGA compatible controller: Intel Corporation Device 3ea0 (rev
02) (prog-if 00 [VGA controller])
    Subsystem: Dell Device 08b9
    Flags: bus master, fast devsel, latency 0, IRQ 177
    Memory at cb000000 (64-bit, non-prefetchable) [size=16M]
    Memory at 80000000 (64-bit, prefetchable) [size=256M]

Yes, I agree the faults aren't pointless.  I'm wondering if it's
something with the i915 driver or hardware having assumptions that
aren't met by Xen swiotlb.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 13:57:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 13:57:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10084.26579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVEbk-0001Me-In; Wed, 21 Oct 2020 13:56:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10084.26579; Wed, 21 Oct 2020 13:56:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVEbk-0001MX-F3; Wed, 21 Oct 2020 13:56:52 +0000
Received: by outflank-mailman (input) for mailman id 10084;
 Wed, 21 Oct 2020 13:56:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVEbi-0001Lo-Ml
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 13:56:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a62ac48-afe3-43a5-bec1-79817ec655f7;
 Wed, 21 Oct 2020 13:56:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C55BEAE0C;
 Wed, 21 Oct 2020 13:56:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603288608;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Z3qXGbst+Pk6JDM4JK8cfOB9fQVYHkDFGQ6b7JIQTD0=;
	b=NMs9wwZXOdNIPvCIa36zvhy8DvgjRG5CMCasN8hbxDg9PIpyMXVZD76MZ4QrFGWlEQYojY
	7esvejxaEV3rGR1rVvYW5Y5YlQjMgIhmiX6yBs5D0QVDT9afmhk1/yd5Q6m7QggyOdg7bw
	9ELiDxI8weYszbZy6/YVX6MAS2AR02U=
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201021130708.12249-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7967fa6e-213d-50e2-87d3-dbd319aa2060@suse.com>
Date: Wed, 21 Oct 2020 15:56:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021130708.12249-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.10.2020 15:07, Andrew Cooper wrote:
> @@ -4037,6 +4037,9 @@ long do_mmu_update(
>                          break;
>                      rc = mod_l2_entry(va, l2e_from_intpte(req.val), mfn,
>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
> +                    /* Paging structure may have changed.  Flush linear range. */
> +                    if ( !rc )
> +                        flush_flags_all |= FLUSH_TLB;
>                      break;
>  
>                  case PGT_l3_page_table:
> @@ -4044,6 +4047,9 @@ long do_mmu_update(
>                          break;
>                      rc = mod_l3_entry(va, l3e_from_intpte(req.val), mfn,
>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
> +                    /* Paging structure may have changed.  Flush linear range. */
> +                    if ( !rc )
> +                        flush_flags_all |= FLUSH_TLB;
>                      break;
>  
>                  case PGT_l4_page_table:
> @@ -4051,27 +4057,28 @@ long do_mmu_update(
>                          break;
>                      rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
> -                    if ( !rc && pt_owner->arch.pv.xpti )
> +                    /* Paging structure may have changed.  Flush linear range. */
> +                    if ( !rc )
>                      {
> -                        bool local_in_use = false;
> +                        bool local_in_use = mfn_eq(
> +                            pagetable_get_mfn(curr->arch.guest_table), mfn);
>  
> -                        if ( mfn_eq(pagetable_get_mfn(curr->arch.guest_table),
> -                                    mfn) )
> -                        {
> -                            local_in_use = true;
> -                            get_cpu_info()->root_pgt_changed = true;
> -                        }
> +                        flush_flags_all |= FLUSH_TLB;
> +
> +                        if ( local_in_use )
> +                            flush_flags_local |= FLUSH_TLB | FLUSH_ROOT_PGTBL;

Aiui here (and in the code consuming the flags) you build upon
flush_flags_local, when not zero, always being a superset of
flush_flags_all. I think this is a trap to fall into when later
wanting to change this code, but as per below this won't hold
anymore anyway, I think. Hence here I think you want to set
FLUSH_TLB unconditionally, and above for L3 and L2 you want to
set it in both variables. Or, if I'm wrong below, a comment to
that effect may help people avoid falling into this trap.

An alternative would be to have

    flush_flags_local |= (flush_flags_all & FLUSH_TLB);

before doing the actual flush.
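That suggested fold can be sketched in isolation; the flag values and the `fixup_local()` helper below are illustrative only, not the real Xen constants or code:

```c
#include <assert.h>

/* Illustrative flag values; Xen's real FLUSH_* constants differ. */
#define FLUSH_TLB        (1u << 0)
#define FLUSH_ROOT_PGTBL (1u << 1)

/*
 * Before flushing, fold the TLB bit from the all-CPU flags into the
 * local flags, so the local flush is never weaker than what remote
 * CPUs receive, without relying on an implicit superset invariant.
 */
static unsigned int fixup_local(unsigned int local, unsigned int all)
{
    local |= (all & FLUSH_TLB);
    return local;
}
```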

>                          /*
>                           * No need to sync if all uses of the page can be
>                           * accounted to the page lock we hold, its pinned
>                           * status, and uses on this (v)CPU.
>                           */
> -                        if ( (page->u.inuse.type_info & PGT_count_mask) >
> +                        if ( pt_owner->arch.pv.xpti &&

I assume you've moved this here to avoid the further non-trivial
checks when possible, but you've not put it around the setting
of FLUSH_ROOT_PGTBL in flush_flags_local because setting
->root_pgt_changed is benign when !XPTI?

> +                             (page->u.inuse.type_info & PGT_count_mask) >
>                               (1 + !!(page->u.inuse.type_info & PGT_pinned) +
>                                mfn_eq(pagetable_get_mfn(curr->arch.guest_table_user),
>                                       mfn) + local_in_use) )
> -                            sync_guest = true;
> +                            flush_flags_all |= FLUSH_ROOT_PGTBL;

This needs to also specify FLUSH_TLB_GLOBAL, or else original
XPTI behavior gets broken. Since the local CPU doesn't need this,
the variable may then better be named flush_flags_remote?

Or if I'm wrong here, the reasoning behind the dropping of the
global flush in this case needs putting in the description, not
the least because it would mean the change introducing it went
too far.

> @@ -4173,18 +4180,36 @@ long do_mmu_update(
>      if ( va )
>          unmap_domain_page(va);
>  
> -    if ( sync_guest )
> +    /*
> +     * Flushing needs to occur for one of several reasons.
> +     *
> +     * 1) An update to an L2 or higher occurred.  This potentially changes the
> +     *    pagetable structure, requiring a flush of the linear range.
> +     * 2) An update to an L4 occurred, and XPTI is enabled.  All CPUs running
> +     *    on a copy of this L4 need refreshing.
> +     */
> +    if ( flush_flags_all || flush_flags_local )

Minor remark: At least in x86 code it is more efficient to use
| instead of || in such cases, to avoid relying on the compiler
to carry out this small optimization. It may well be that all
compilers we care about do so, but it's certainly not the case
for all the compilers I've ever worked with.
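The equivalence behind this remark can be shown in miniature (the helper names below are invented for illustration): for two already-computed flag words, `(a | b)` tests "either nonzero" with a single OR, whereas `a || b` nominally short-circuits and may compile to two tests and a branch.

```c
#include <assert.h>

/* Single bitwise OR, then one test against zero. */
static int either_bitwise(unsigned int a, unsigned int b)
{
    return (a | b) != 0;
}

/* Logical OR: short-circuit evaluation, potentially two tests. */
static int either_logical(unsigned int a, unsigned int b)
{
    return a || b;
}
```

For unsigned operands the two always agree on the truth value, so the substitution is purely a code-generation concern.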

>      {
> +        cpumask_t *mask = pt_owner->dirty_cpumask;
> +
>          /*
> -         * Force other vCPU-s of the affected guest to pick up L4 entry
> -         * changes (if any).
> +         * Local flushing may be asymmetric with remote.  If there is local
> +         * flushing to do, perform it separately and omit the current CPU from
> +         * pt_owner->dirty_cpumask.
>           */
> -        unsigned int cpu = smp_processor_id();
> -        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
> +        if ( flush_flags_local )
> +        {
> +            unsigned int cpu = smp_processor_id();
> +
> +            mask = per_cpu(scratch_cpumask, cpu);
> +            cpumask_copy(mask, pt_owner->dirty_cpumask);
> +            __cpumask_clear_cpu(cpu, mask);
> +
> +            flush_local(flush_flags_local);
> +        }
>  
> -        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));

I understand you're of the opinion that cpumask_copy() +
__cpumask_clear_cpu() is more efficient than cpumask_andnot()?
(I guess I agree for high enough CPU counts.)
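The two mask constructions compared here compute the same result; modelling a cpumask as a single word for illustration (real cpumasks are multi-word bitmaps, which is why clearing one bit can beat an andnot over the whole mask at high CPU counts):

```c
#include <assert.h>

/* One-word stand-in for a cpumask; real masks span multiple words. */
typedef unsigned long cpumask_model;

/* cpumask_andnot(mask, m, cpumask_of(cpu)): touches every word. */
static cpumask_model mask_andnot(cpumask_model m, unsigned int cpu)
{
    return m & ~(1ul << cpu);
}

/* cpumask_copy() + __cpumask_clear_cpu(): the clear touches one word. */
static cpumask_model mask_copy_clear(cpumask_model m, unsigned int cpu)
{
    cpumask_model out = m;
    out &= ~(1ul << cpu);
    return out;
}
```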

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 13:59:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 13:59:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10087.26591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVEeG-0001WZ-0V; Wed, 21 Oct 2020 13:59:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10087.26591; Wed, 21 Oct 2020 13:59:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVEeF-0001WS-Sj; Wed, 21 Oct 2020 13:59:27 +0000
Received: by outflank-mailman (input) for mailman id 10087;
 Wed, 21 Oct 2020 13:59:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JBpP=D4=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVEeF-0001WN-4i
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 13:59:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f32bf0b-bfd6-4569-9dab-e8d2cd0b16d0;
 Wed, 21 Oct 2020 13:59:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61683ACE6;
 Wed, 21 Oct 2020 13:59:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603288764;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/WERzru05Wb69j9MCKzq+AXXcyefJcxA2Zc/EP6RoqE=;
	b=WZwwrHeBy/Ul0HckOcAl5CtPMetXmjU4TkIwf/aC4TL5xdNtsXsTKqgPEyQiRpfsSuDzEv
	GO/1z7aAIU/TYUrNSyDXpB3fQm4bQm65EQoqempmfuNxyhqnk5IJsyJY3oGGX36b48b1Zv
	WJwO/2EXx0FYX2oczl2m+QcOkbfD09I=
Subject: Re: i915 dma faults on Xen
To: Jason Andryuk <jandryuk@gmail.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, intel-gfx@lists.freedesktop.org,
 xen-devel <xen-devel@lists.xenproject.org>
References: <CAKf6xpv-LRCuo-qHHWMuukYtvJiR-i+-YhLUOZeqoAFd-=swEQ@mail.gmail.com>
 <1a3b90f4-564e-84d3-fd6a-3454e8753579@citrix.com>
 <20201015113109.GA68032@Air-de-Roger>
 <CAKf6xpsJYT7VCeaf6TxPNK1QD+3U9E8ST7E+mWtfDjw0k9L9dA@mail.gmail.com>
 <CAKf6xps1q9zMBeFg7C7ZhD-JcwQ6EG6+bYvvA9QT8PzzxKqMNg@mail.gmail.com>
 <20201021095809.o53b6hpvjl2lbqsi@Air-de-Roger>
 <CAKf6xpuTE4gBNe4YXPYh_hAMLaJduDuKL5_6aC4H=y6DRxaxvw@mail.gmail.com>
 <a4dd7778-9bd4-00c1-3056-96d435b70d70@suse.com>
 <CAKf6xpvKiWiU5Wsv2C1EiEFr77nMZTd+VHgkdk7qcKw1OFD8Vg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9bbf6768-a39e-2b3c-c4de-fd883cc9ef85@suse.com>
Date: Wed, 21 Oct 2020 15:59:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <CAKf6xpvKiWiU5Wsv2C1EiEFr77nMZTd+VHgkdk7qcKw1OFD8Vg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 15:36, Jason Andryuk wrote:
> On Wed, Oct 21, 2020 at 8:53 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 21.10.2020 14:45, Jason Andryuk wrote:
>>> On Wed, Oct 21, 2020 at 5:58 AM Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>> Hm, it's hard to tell what's going on. My limited experience with
>>>> IOMMU faults on broken systems is that there's a small range that initially
>>>> triggers those, and then the device goes wonky and starts accessing a
>>>> whole load of invalid addresses.
>>>>
>>>> You could try adding those manually using the rmrr Xen command line
>>>> option [0], maybe you can figure out which range(s) are missing?
>>>
>>> They seem to change, so it's hard to know.  Would there be harm in
>>> adding one to cover the end of RAM ( 0x04,7c80,0000 ) to (
>>> 0xff,ffff,ffff )?  Maybe that would just quiet the pointless faults
>>> while leaving the IOMMU enabled?
>>
>> While they may quieten the faults, I don't think those faults are
>> pointless. They indicate some problem with the software (less
>> likely the hardware, possibly the firmware) that you're using.
>> Also there's the question of what the overall behavior is going
>> to be when devices are permitted to access unpopulated address
>> ranges. I assume you did check already that no devices have their
>> BARs placed in that range?
> 
> Isn't no-igfx already letting them try to read those unpopulated addresses?

Yes, and that is precisely why the documentation for the
option says "If specifying `no-igfx` fixes anything, please
report the problem."  I infer from this in particular that one
had better not use it for non-development purposes of any
kind.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 14:35:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 14:35:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10090.26603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVFCf-0005BN-Sy; Wed, 21 Oct 2020 14:35:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10090.26603; Wed, 21 Oct 2020 14:35:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVFCf-0005BG-O3; Wed, 21 Oct 2020 14:35:01 +0000
Received: by outflank-mailman (input) for mailman id 10090;
 Wed, 21 Oct 2020 14:35:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4T7O=D4=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1kVFCe-0005BB-2e
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 14:35:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19e33695-2708-4a2e-9155-ef2f118f7f74;
 Wed, 21 Oct 2020 14:34:59 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kVFCa-0007fA-Ke; Wed, 21 Oct 2020 14:34:56 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kVFCa-0002uy-7u; Wed, 21 Oct 2020 14:34:56 +0000
X-Inumbo-ID: 19e33695-2708-4a2e-9155-ef2f118f7f74
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:Date:To
	:From:Subject:Message-ID; bh=j3wf0tZlxmJNJZixJiYCQiUTDKdeJMxtQfWtZ7TexiQ=; b=
	EHTl5PBIk2XlUUAUFp6EYcbTnq2AgQSMvJdqqUe19ybLwBqGAC1kbDfYBsK8D3gMfBtqpRtcJtM0B
	otrNL8VGgP1VTodIHZrZIYhRAwBwBgK5gQNlxq5Agm/Vc915My0U+GwvCbZm3QPLXRSnlKNwTyshO
	zSKNjuYV8Lmex+tNE=;
Message-ID: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
Subject: XSM and the idle domain
From: Hongyan Xia <hx242@xen.org>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	jbeulich@suse.com, andrew.cooper3@citrix.com, jandryuk@gmail.com, 
	dgdegra@tycho.nsa.gov
Date: Wed, 21 Oct 2020 15:34:52 +0100
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

Hi,

A while ago there was a quick chat on IRC about how XSM interacts with
the idle domain. The conversation did not reach any clear conclusions,
so it seems worth summarising the questions in an email.

Basically there were two questions in that conversation:

1. In its current state, are security modules able to limit what the
idle domain can do?
2. Should security modules be able to restrict the idle domain?

The first question came up during ongoing work on LiveUpdate. After an
LU, the next Xen needs to restore all domains. To do that, some
hypercalls need to be issued from the idle domain context, and
apparently XSM does not like that: we had to introduce hacks in the
dummy module to leave the idle domain alone. Our work is not compiled
with CONFIG_XSM at all, but with CONFIG_XSM, would we be able to
enforce security policies against the idle domain? Of course, without
any LU work this makes no difference, because the idle domain does not
do any useful work to restrict anyway.

Also, should the idle domain be restricted? IMO the idle domain is Xen
itself: it mostly bootstraps the system, performs limited work when
switched to, and is not something a user (either dom0 or domU)
interacts with directly. I doubt XSM was designed to include the idle
domain (although there is an ID allocated for it in the code), so I
would say we should simply exclude the idle domain from all security
policy checks.

I may have missed some points in that discussion, so please feel free
to add.

Hongyan



From xen-devel-bounces@lists.xenproject.org Wed Oct 21 15:12:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 15:12:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10096.26621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVFmx-0000LO-3n; Wed, 21 Oct 2020 15:12:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10096.26621; Wed, 21 Oct 2020 15:12:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVFmx-0000LH-0r; Wed, 21 Oct 2020 15:12:31 +0000
Received: by outflank-mailman (input) for mailman id 10096;
 Wed, 21 Oct 2020 15:12:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7WgZ=D4=gmail.com=dav.sec.lists@srs-us1.protection.inumbo.net>)
 id 1kVFmv-0000L9-Eh
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 15:12:29 +0000
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 705fbda5-3143-4488-8600-f762eb3d4571;
 Wed, 21 Oct 2020 15:12:28 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id h24so3747413ejg.9
 for <xen-devel@lists.xenproject.org>; Wed, 21 Oct 2020 08:12:28 -0700 (PDT)
X-Inumbo-ID: 705fbda5-3143-4488-8600-f762eb3d4571
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=RYdevE8O7fcX49s4xRX8ALJXkbrB3dmIdvFUcG0HyOU=;
        b=tq6F7D5wyxDxGWvAPRsIFShyjSTzZXLaC5Zd67FR7446NA2CWtUS/rnMYbGp/XVanY
         rjIaJ/U3XiAw3Da4hPTb7U+BGJSnp8rJQ//VOOnmdPodeqibsPh/sy4lh9vxTz9tDLLw
         B9z/ksG3RXgSk4HhRD1A463Blc6C1oRY0hNGJz5gXaPrTLkVAmTOLQ7YO8S07l/WbYQV
         q9ORdGpVPvj3ZZmrPtJkTbs5K954Kx08hyVP7kKzMTpMGy3NVwg8EboGNtVxXahJGUoM
         mnrJDuByECNLD5QBC3uMxdVIBQUbXF0dLVK1jq/QdgKhQ4QzgSIwy8JWaLhjdhE5huDz
         WJNQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=RYdevE8O7fcX49s4xRX8ALJXkbrB3dmIdvFUcG0HyOU=;
        b=nwx5QdclhSpUqhMnq1xp1WSTse/jOPW+TZIjsp/QngviAg4lu25oFY7W1OWWM7Af4F
         Pk0A7FGJ8ndLlU6N9f2d5Te/n/M1Fr4M7NKUTlvANRit15k6aAzLsn0Oyn2lICIzx1St
         Li0YJzjoWqYSMEx+Rj6ahOKgom8lAasku1dRdpxHrMlInu5HeLjaonIHF/9cItHQfLv8
         6FV4TxQakugOKn9F7PMGECT3PXJIOgrhntnSGt1rOI6skcNB7H9SnVQDTGql/QtryRmI
         +4SXA5bt4Ucu9VTJe5iHDwwBIuEY8NFmtghdMGtMIt/N7SxLsMzbVoFn6FDEq3eWg96S
         S59A==
X-Gm-Message-State: AOAM532MTBbym7/43aaWFW6GknPXUJZPLuWBtWay6HxnHO3htBnHDkWi
	Evt7E+sLZxAg1Byqa3sjisRakuPwa+CfaMkaSbY=
X-Google-Smtp-Source: ABdhPJyy0bU7jGp+facbpScCyQkneW5xCWcCrLFApUCsPGjRNcpw9O0XYm+0zA6yK8+GPJ2qvu3kCuIQA5uqrWd7IQY=
X-Received: by 2002:a17:906:3641:: with SMTP id r1mr3820492ejb.405.1603293147428;
 Wed, 21 Oct 2020 08:12:27 -0700 (PDT)
MIME-Version: 1.0
References: <CA+js8Lk2f99BqeNgSyAh1jh5gH1iC2BZyz+AY7mGTqPTX_Qf=w@mail.gmail.com>
 <58e3421c-6939-831f-8f0e-0c83fa9f1366@citrix.com> <7217a50c-d1f7-8160-2405-c04a84abf61f@knorrie.org>
 <CA+js8L=dCJkE6y=GO2WNc0ufLaOXkx1BsMg3soCw+=wyDduPMQ@mail.gmail.com>
 <CA+js8LnzTkPtQXhQ-N85rM4Qd3HC2SpRQ5ZoSzh4CVx92tNYNQ@mail.gmail.com>
 <20200916161206.GA20338@aepfle.de> <e68fd134-bb40-8646-89f0-dd8241737342@knorrie.org>
In-Reply-To: <e68fd134-bb40-8646-89f0-dd8241737342@knorrie.org>
From: David I <dav.sec.lists@gmail.com>
Date: Wed, 21 Oct 2020 17:12:16 +0200
Message-ID: <CA+js8Lm9gQK_8A6zJnA6irZfVsFNAeuwGt+Jj3LsJR8MyjDd8A@mail.gmail.com>
Subject: Re: Compiling Xen from source
To: Hans van Kranenburg <hans@knorrie.org>
Cc: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>
Content-Type: multipart/alternative; boundary="00000000000085fae005b22fc6fe"

--00000000000085fae005b22fc6fe
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello,

For information, I was able to compile Xen from source on Debian 10 with
the following configuration:


david@debian:~/xen/xen$ uname -a
Linux debian 5.7.0-2-amd64 #1 SMP Debian 5.7.10-1 (2020-07-26) x86_64
GNU/Linux
david@debian:~/xen/xen$ gcc --version
gcc (Debian 8.4.0-4) 8.4.0
Copyright © 2018 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is
NO WARRANTY, not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR
PURPOSE.

david@debian:~/xen/xen$ git branch
  master
  stable-4.11
* stable-4.13
  stable-4.14
  staging
david@debian:~/xen/xen$

with a simple "./configure && make dist".
But I had to modify some config files, following the thread here:

http://buildroot-busybox.2317881.n4.nabble.com/PATCH-1-2-xen-Disable-Werror-when-building-td145149.html

The first build failed with the message "configure test passed without
-Werror but failed with -Werror". To resolve this, I had to edit these
files:

xen/Config.mk:

-HOSTCFLAGS  = -Wall -Werror -Wstrict-prototypes -O2 -fomit-frame-pointer
+HOSTCFLAGS  = -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer

xen/xen/arch/x86/Rules.mk:

-CFLAGS += '-D__OBJECT_LABEL__=$(subst /,$$,$(subst -,_,$(subst $(BASEDIR)/,,$(CURDIR))/$@))'
+CFLAGS += -U__OBJECT_LABEL__ '-D__OBJECT_LABEL__=$(subst /,$$,$(subst -,_,$(subst $(BASEDIR)/,,$(CURDIR))/$@))'

xen/Rules.mk:

-CFLAGS += '-D__OBJECT_FILE__="$@"'
+CFLAGS += -U__OBJECT_FILE__ '-D__OBJECT_FILE__="$@"'
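Editing the tree is one way to drop -Werror; an alternative worth trying (untested here, and depending on how the Xen makefiles consume the variable) is to filter the flag out and pass the result on the make command line. The filtering step itself is just:

```shell
# Sketch: strip -Werror from a flags string before overriding
# HOSTCFLAGS on the make command line. Whether the override is
# honoured depends on the Xen makefiles; this only shows the filter.
FLAGS="-Wall -Werror -Wstrict-prototypes -O2 -fomit-frame-pointer"
STRIPPED=$(printf '%s\n' "$FLAGS" | sed 's/ *-Werror//')
echo "$STRIPPED"
```

which prints the same flag set minus -Werror, avoiding a local patch to Config.mk.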

I haven't tested the binary yet.

Kind regards,

David

On Wed, 16 Sep 2020 at 22:59, Hans van Kranenburg <hans@knorrie.org> wrote:

> On 9/16/20 6:12 PM, Olaf Hering wrote:
> > On Wed, Sep 16, David I wrote:
> >
> >> So, how was the debian package compiled? Is this the same source
> >> code with different options?
> >
> > Xen 4.11 is from 2018. Use a compiler from that year as well.
> > Using this year's compiler will lead to errors...
>
> In Debian, Buster with Xen 4.11 uses gcc 8.
>
> The Xen 4.11 that is in Debian unstable now does not compile any more
> with gcc 10. That's why we really need to get Xen 4.14 in there now to
> unstick that (even with additional cherry-picked patches again).
>
> Hans
>
>

--00000000000085fae005b22fc6fe--


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 15:32:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 15:32:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10099.26633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVG5t-0002DU-P8; Wed, 21 Oct 2020 15:32:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10099.26633; Wed, 21 Oct 2020 15:32:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVG5t-0002DN-Lp; Wed, 21 Oct 2020 15:32:05 +0000
Received: by outflank-mailman (input) for mailman id 10099;
 Wed, 21 Oct 2020 15:32:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVG5s-0002DI-Ay
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 15:32:04 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22f8f12e-e119-49f7-9847-ea6e5094a216;
 Wed, 21 Oct 2020 15:32:02 +0000 (UTC)
X-Inumbo-ID: 22f8f12e-e119-49f7-9847-ea6e5094a216
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603294322;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=ZXd/UgLZ6xf6qq2Ng9dV8Ey1NbP3sb1HXy/BnQTzX7M=;
  b=W6VE2ElxfqnfF8qf+RRfzS2s0mHdEVGcLD7DtfKzNgFW/eLqzfpbaaaH
   kVWxN8HBIhOt9QRvRXMTxeT7KC6YEuTGe1UjAmUO8rAWQ80OWv7iMzs7z
   /A2j5a0MXpSh1xjXCl/Hqlfr3DK7uW08dlx42io5+zmHJGbGerrHE8mqJ
   I=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ZLxZ4n7eINdDjoDDUkyYUGTpWa6NUq0P9PdY1JhgxuqAeYDVHvflGBfwPVn9K/ElPLEZJAZO5c
 oYAga1AI0p04oy3Yin5yehLMb6H3TiwDNxQS6NuLm+4boi7Mm+ktJTdqObD12alE3Oa4WKH3UX
 YTjeKSucVkli9y90xGIrtgbvZRhdPn1ywbPWOrkEmeJjFeic3U/tevFZGh4J8KeXYwYodfwnFv
 FWfuFe+jeAfc/kLij4r8l3pY3TZDzrI9/XLJ1F6rtrd2MZsY3CSIimX0xjAabcGySqexXy+rtj
 vTo=
X-SBRS: 2.5
X-MesageID: 29550127
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29550127"
Date: Wed, 21 Oct 2020 17:23:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/mm: avoid playing with directmap when self-snoop can
 be relied upon
Message-ID: <20201021152321.cw6sdx3biyc2pmtx@Air-de-Roger>
References: <33f7168c-b177-eed5-14e8-5e7a38dee853@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <33f7168c-b177-eed5-14e8-5e7a38dee853@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Tue, Oct 20, 2020 at 03:51:18PM +0200, Jan Beulich wrote:
> The set of systems affected by XSA-345 would have been smaller if we had
> this in place already: When the processor is capable of dealing with
> mismatched cacheability, there's no extra work we need to carry out.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

I guess it's not worth using the alternatives framework to patch this
up at boot, in order to avoid the call in the first place?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 15:39:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 15:39:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10102.26645 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGD0-0002Wa-I9; Wed, 21 Oct 2020 15:39:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10102.26645; Wed, 21 Oct 2020 15:39:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGD0-0002WT-E4; Wed, 21 Oct 2020 15:39:26 +0000
Received: by outflank-mailman (input) for mailman id 10102;
 Wed, 21 Oct 2020 15:39:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yt4r=D4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVGCz-0002WO-65
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 15:39:25 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39edfadf-4cbd-46df-bdee-84d9fccedc3f;
 Wed, 21 Oct 2020 15:39:23 +0000 (UTC)
X-Inumbo-ID: 39edfadf-4cbd-46df-bdee-84d9fccedc3f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603294764;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=j1f/C6xHFI5imBkiewetuRffC+ZV4VWL4EInYpwkqik=;
  b=N2UvY/4pROXsE0/AQsWGfh0uH6bQjUR8n55TbsMe+RboGdHvpH17GKmb
   REORs+Yti1YLFSjdC3uKoDZPf/f0ahxcB9Lsv1xPgcOyOoAA+zZSDv3EV
   I6RvzZAERAOPbOXFg4bhxmHSLqxXhjiTvgz7KK9+af3JfeBzOSdgAA/BX
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: +cDykpqcAK1Q00ombkIHAnxKp0+9Ucxp5aNgtqaGvXH5YsLyG7dHGx1a8a0o44cNVDXfI/uvcb
 mzv2ClnsLDl/4r0PZZ8nfPGHE4fBp8OBnZgiojY8VutoqaA5SUios2B7hSUNBTjti1PoLs2vI1
 p6xlqFKRHl/Lj52kcDXXkiZO/6HN6pPE66zUX92FQN7OzrJrbTRpe+exhyRJiHFY1w76wUf5Si
 zCOHAzlAUJ7zgAgFV5NFkNArvUeAlb0NrGfCLTvrDL/jrMicO1RKsdKJhCxECmu0MkoKmXL2Bn
 kI4=
X-SBRS: 2.5
X-MesageID: 29827639
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29827639"
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201021130708.12249-1-andrew.cooper3@citrix.com>
 <7967fa6e-213d-50e2-87d3-dbd319aa2060@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9fe3d967-6bfe-71ef-6430-029de97dca8c@citrix.com>
Date: Wed, 21 Oct 2020 16:39:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <7967fa6e-213d-50e2-87d3-dbd319aa2060@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 21/10/2020 14:56, Jan Beulich wrote:
> On 21.10.2020 15:07, Andrew Cooper wrote:
>> @@ -4037,6 +4037,9 @@ long do_mmu_update(
>>                          break;
>>                      rc = mod_l2_entry(va, l2e_from_intpte(req.val), mfn,
>>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>> +                    /* Paging structure may have changed.  Flush linear range. */
>> +                    if ( !rc )
>> +                        flush_flags_all |= FLUSH_TLB;
>>                      break;
>>  
>>                  case PGT_l3_page_table:
>> @@ -4044,6 +4047,9 @@ long do_mmu_update(
>>                          break;
>>                      rc = mod_l3_entry(va, l3e_from_intpte(req.val), mfn,
>>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>> +                    /* Paging structure may have changed.  Flush linear range. */
>> +                    if ( !rc )
>> +                        flush_flags_all |= FLUSH_TLB;
>>                      break;
>>  
>>                  case PGT_l4_page_table:
>> @@ -4051,27 +4057,28 @@ long do_mmu_update(
>>                          break;
>>                      rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
>>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>> -                    if ( !rc && pt_owner->arch.pv.xpti )
>> +                    /* Paging structure may have changed.  Flush linear range. */
>> +                    if ( !rc )
>>                      {
>> -                        bool local_in_use = false;
>> +                        bool local_in_use = mfn_eq(
>> +                            pagetable_get_mfn(curr->arch.guest_table), mfn);
>>  
>> -                        if ( mfn_eq(pagetable_get_mfn(curr->arch.guest_table),
>> -                                    mfn) )
>> -                        {
>> -                            local_in_use = true;
>> -                            get_cpu_info()->root_pgt_changed = true;
>> -                        }
>> +                        flush_flags_all |= FLUSH_TLB;
>> +
>> +                        if ( local_in_use )
>> +                            flush_flags_local |= FLUSH_TLB | FLUSH_ROOT_PGTBL;
> Aiui here (and in the code consuming the flags) you build upon
> flush_flags_local, when not zero, always being a superset of
> flush_flags_all. I think this is a trap to fall into when later
> wanting to change this code, but as per below this won't hold
> anymore anyway, I think. Hence here I think you want to set
> FLUSH_TLB unconditionally, and above for L3 and L2 you want to
> set it in both variables. Or, if I'm wrong below, a comment to
> that effect may help people avoid falling into this trap.
>
> An alternative would be to have
>
>     flush_flags_local |= (flush_flags_all & FLUSH_TLB);
>
> before doing the actual flush.

Honestly, this is what I meant when I said that the asymmetry is a
total mess.

I originally named it all 'remote', but that is even less accurate: it
may still contain the current CPU.

Our matrix of complexity:

* FLUSH_TLB for L2+ structure changes
* FLUSH_TLB_GLOBAL/FLUSH_ROOT_PGTBL for XPTI

with optimisations to skip GLOBAL/ROOT_PGTBL on the local CPU if none
of the updates hit the L4 in use, and to skip the remote flush if we
hold all references on the L4.

Everything is complicated by the fact that pt_owner may not be current,
e.g. for toolstack operations constructing a new domain.

>
>>                          /*
>>                           * No need to sync if all uses of the page can be
>>                           * accounted to the page lock we hold, its pinned
>>                           * status, and uses on this (v)CPU.
>>                           */
>> -                        if ( (page->u.inuse.type_info & PGT_count_mask) >
>> +                        if ( pt_owner->arch.pv.xpti &&
> I assume you've moved this here to avoid the further non-trivial
> checks when possible, but you've not put it around the setting
> of FLUSH_ROOT_PGTBL in flush_flags_local because setting
> ->root_pgt_changed is benign when !XPTI?

No - that was accidental, while attempting to reduce the diff.

>
>> +                             (page->u.inuse.type_info & PGT_count_mask) >
>>                               (1 + !!(page->u.inuse.type_info & PGT_pinned) +
>>                                mfn_eq(pagetable_get_mfn(curr->arch.guest_table_user),
>>                                       mfn) + local_in_use) )
>> -                            sync_guest = true;
>> +                            flush_flags_all |= FLUSH_ROOT_PGTBL;
> This needs to also specify FLUSH_TLB_GLOBAL, or else original
> XPTI behavior gets broken. Since the local CPU doesn't need this,
> the variable may then better be named flush_flags_remote?

See above.  remote is even more confusing than all.

>
> Or if I'm wrong here, the reasoning behind the dropping of the
> global flush in this case needs putting in the description, not
> the least because it would mean the change introducing it went
> too far.
>
>> @@ -4173,18 +4180,36 @@ long do_mmu_update(
>>      if ( va )
>>          unmap_domain_page(va);
>>  
>> -    if ( sync_guest )
>> +    /*
>> +     * Flushing needs to occur for one of several reasons.
>> +     *
>> +     * 1) An update to an L2 or higher occurred.  This potentially changes the
>> +     *    pagetable structure, requiring a flush of the linear range.
>> +     * 2) An update to an L4 occurred, and XPTI is enabled.  All CPUs running
>> +     *    on a copy of this L4 need refreshing.
>> +     */
>> +    if ( flush_flags_all || flush_flags_local )
> Minor remark: At least in x86 code it is more efficient to use
> | instead of || in such cases, to avoid relying on the compiler
> to carry out this small optimization.

This transformation should not be recommended in general.  There are
cases, including this one, where it has at best no effect and is at
worst wrong.

I don't care about people using ancient compilers.  They've got far
bigger (== more impactful) problems than (the absence of) this
transformation, and the TLB flush will dwarf anything the compiler does
here.

However, the hand-"optimised" case prevents the compiler from spotting
that the entire second clause is actually redundant for now.

I specifically didn't encode the dependency, to avoid subtle bugs
if/when someone alters the logic.

>
>>      {
>> +        cpumask_t *mask = pt_owner->dirty_cpumask;
>> +
>>          /*
>> -         * Force other vCPU-s of the affected guest to pick up L4 entry
>> -         * changes (if any).
>> +         * Local flushing may be asymmetric with remote.  If there is local
>> +         * flushing to do, perform it separately and omit the current CPU from
>> +         * pt_owner->dirty_cpumask.
>>           */
>> -        unsigned int cpu = smp_processor_id();
>> -        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
>> +        if ( flush_flags_local )
>> +        {
>> +            unsigned int cpu = smp_processor_id();
>> +
>> +            mask = per_cpu(scratch_cpumask, cpu);
>> +            cpumask_copy(mask, pt_owner->dirty_cpumask);
>> +            __cpumask_clear_cpu(cpu, mask);
>> +
>> +            flush_local(flush_flags_local);
>> +        }
>>  
>> -        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
> I understand you're of the opinion that cpumask_copy() +
> __cpumask_clear_cpu() is more efficient than cpumask_andnot()?
> (I guess I agree for high enough CPU counts.)

It's faster in all cases, even at low CPU counts.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 15:47:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 15:47:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10105.26657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGL0-0003Ti-Br; Wed, 21 Oct 2020 15:47:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10105.26657; Wed, 21 Oct 2020 15:47:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGL0-0003Tb-8c; Wed, 21 Oct 2020 15:47:42 +0000
Received: by outflank-mailman (input) for mailman id 10105;
 Wed, 21 Oct 2020 15:47:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVGKy-0003TW-GH
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 15:47:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04f2b524-ebae-478e-8fb6-68fa4cab5302;
 Wed, 21 Oct 2020 15:47:39 +0000 (UTC)
X-Inumbo-ID: 04f2b524-ebae-478e-8fb6-68fa4cab5302
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603295259;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=HRSRWL1izdNolTPUiM6RFeUxAig9vH/jhnR/vXkiZKI=;
  b=BJKOu1sVCSFo6wPtbdER3cvVOf6pLuT9pwMRDm9m5B59wFQC0e2w1BY/
   3hVbCdARANRm9LONGbLSBnNd9gsTZyRCaxiIv8QH0sgV+NLubPgZ201CB
   0lWIhcdxi7hg1uVDKOaWBWX7x2ZA+XM+MfzzwShKcT1D+/1Vf3jktpvOO
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vR73awpzCRCKuvc5r/FkeNsHFwPdinOj8zIQii9Ibc8+bE9LD2/tubkWick7AF+nf4U8q9gR/S
 Jp3gm9hegJcpT3TXFzlAwUh5vnot+ESdYUQimdEmYdDbsHOy5uAx4HI1FZo2AGNKb3hFVxKbEC
 3r0wcST6Xk8E00XalowG0IVI2awgNikYSTfSAGiZtzspJ4YPR0av3k9Rh577BwKAI63USePGIC
 0+d7+ICE2Pl6Ktn8HbbNNLRcY5ElMuaPYIf0igpnqWywSy1W6MSNZtsllr2KtqwLe6hdnNeLbQ
 qd0=
X-SBRS: 2.5
X-MesageID: 29828493
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29828493"
Date: Wed, 21 Oct 2020 17:46:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
Message-ID: <20201021154650.zz77ircyuedr7gpm@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
> There's no global lock around the updating of this global piece of data.
> Make use of cmpxchgptr() to avoid two entities racing with their
> updates.
> 
> While touching the functionality, mark xen_consumers[] read-mostly (or
> else the if() condition could use the result of cmpxchgptr(), writing to
> the slot unconditionally).

I'm not sure I get this, likely related to the comment I have below.

> The use of cmpxchgptr() here points out (by way of clang warning about
> it) that its original use of const was slightly wrong. Adjust the
> placement, or else undefined behavior of const qualifying a function
> type will result.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> ---
> v2: Use (and hence generalize) cmpxchgptr(). Add comment. Expand /
>     adjust description.
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -57,7 +57,8 @@
>   * with a pointer, we stash them dynamically in a small lookup array which
>   * can be indexed by a small integer.
>   */
> -static xen_event_channel_notification_t xen_consumers[NR_XEN_CONSUMERS];
> +static xen_event_channel_notification_t __read_mostly
> +    xen_consumers[NR_XEN_CONSUMERS];
>  
>  /* Default notification action: wake up from wait_on_xen_event_channel(). */
>  static void default_xen_notification_fn(struct vcpu *v, unsigned int port)
> @@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
>  
>      for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
>      {
> +        /* Use cmpxchgptr() in lieu of a global lock. */
>          if ( xen_consumers[i] == NULL )
> -            xen_consumers[i] = fn;
> +            cmpxchgptr(&xen_consumers[i], NULL, fn);
>          if ( xen_consumers[i] == fn )
>              break;

I think you could join it as:

if ( !xen_consumers[i] &&
     !cmpxchgptr(&xen_consumers[i], NULL, fn) )
    break;

As cmpxchgptr() will return the previous value stored at &xen_consumers[i]?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 15:55:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 15:55:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10108.26669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGSm-0004SV-79; Wed, 21 Oct 2020 15:55:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10108.26669; Wed, 21 Oct 2020 15:55:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGSm-0004SO-2p; Wed, 21 Oct 2020 15:55:44 +0000
Received: by outflank-mailman (input) for mailman id 10108;
 Wed, 21 Oct 2020 15:55:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yt4r=D4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVGSk-0004SJ-GX
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 15:55:42 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a06dfe49-14eb-4bb3-8dad-2badd2c19f4a;
 Wed, 21 Oct 2020 15:55:41 +0000 (UTC)
X-Inumbo-ID: a06dfe49-14eb-4bb3-8dad-2badd2c19f4a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603295741;
  h=subject:from:to:cc:references:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=3Ij3KMWZF3vJCFollHAJrc9dutGwQkDI7uVM6NY8uc4=;
  b=Li5PEYVMKbyKAKQ/9KbVHPfwMfbBUmSW5wvE2LchRSLP23u2NwLf9chw
   /5YXJbEe+ESangySGeWBCf8L1JrV0k4oORVqGm6Mh2/pMPp4Nm98xMJtL
   AiTDSWao60fInR1pKLi0GPBgjykSlEq76TxlRJKoy7ZTpjO03J+P1zGtq
   g=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 0vfR56qbxVlDV4WFYB6R6d0WLUj/lXEP7hiFt7Rvb01HijHzU57P8eUBY6l4i9PBmvTPSVQL8d
 PEMSNaC4EJxtXp6RvYa8JahOTY+7EPRWd+9dPrpVOlo8paW+Zpoltkk1wShyqTcR0Oei4PNDiF
 okE0poYlXaHGzqad/F0Hcrm3kOBVoGSUoZkwPEoF7BOabiThHS56zHTVdEYU3MYsmAug3cC8iX
 KyS943gLUJBPjES37SNKNYYNwpHzjwXXA2bKdGROeSnf7l/EQx9eQ7th8xm3y54GAcc+kk5j/1
 oVE=
X-SBRS: 2.5
X-MesageID: 30555245
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="30555245"
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201021130708.12249-1-andrew.cooper3@citrix.com>
 <7967fa6e-213d-50e2-87d3-dbd319aa2060@suse.com>
 <9fe3d967-6bfe-71ef-6430-029de97dca8c@citrix.com>
Message-ID: <cd2bdd84-4b78-7f19-81a2-ffd358cb3b13@citrix.com>
Date: Wed, 21 Oct 2020 16:55:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9fe3d967-6bfe-71ef-6430-029de97dca8c@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 21/10/2020 16:39, Andrew Cooper wrote:
>>> @@ -4051,27 +4057,28 @@ long do_mmu_update(
>>>                          break;
>>>                      rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
>>>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>>> -                    if ( !rc && pt_owner->arch.pv.xpti )
>>> +                    /* Paging structure may have changed.  Flush linear range. */
>>> +                    if ( !rc )
>>>                      {
>>> -                        bool local_in_use = false;
>>> +                        bool local_in_use = mfn_eq(
>>> +                            pagetable_get_mfn(curr->arch.guest_table), mfn);
>>>  
>>> -                        if ( mfn_eq(pagetable_get_mfn(curr->arch.guest_table),
>>> -                                    mfn) )
>>> -                        {
>>> -                            local_in_use = true;
>>> -                            get_cpu_info()->root_pgt_changed = true;
>>> -                        }
>>> +                        flush_flags_all |= FLUSH_TLB;
>>> +
>>> +                        if ( local_in_use )
>>> +                            flush_flags_local |= FLUSH_TLB | FLUSH_ROOT_PGTBL;
>> Aiui here (and in the code consuming the flags) you build upon
>> flush_flags_local, when not zero, always being a superset of
>> flush_flags_all. I think this is a trap to fall into when later
>> wanting to change this code, but as per below this won't hold
>> anymore anyway, I think. Hence here I think you want to set
>> FLUSH_TLB unconditionally, and above for L3 and L2 you want to
>> set it in both variables. Or, if I'm wrong below, a comment to
>> that effect may help people avoid falling into this trap.
>>
>> An alternative would be to have
>>
>>     flush_flags_local |= (flush_flags_all & FLUSH_TLB);
>>
>> before doing the actual flush.

Also, what I forgot to say in the previous reply: this is still buggy
if, hypothetically speaking, FLUSH_CACHE had managed to be accumulated
in flush_flags_all.

You cannot have general accumulation logic, a special case for local,
and any catch-all logic like that, because the only correct way to do
the catch-all logic will clobber the special case for local.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 16:01:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 16:01:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10111.26680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGYE-0005sY-UX; Wed, 21 Oct 2020 16:01:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10111.26680; Wed, 21 Oct 2020 16:01:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGYE-0005sR-RG; Wed, 21 Oct 2020 16:01:22 +0000
Received: by outflank-mailman (input) for mailman id 10111;
 Wed, 21 Oct 2020 16:01:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hwko=D4=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVGYD-0005sM-Jj
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 16:01:21 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03e97207-5d55-4983-b852-8b5cc74957ce;
 Wed, 21 Oct 2020 16:01:20 +0000 (UTC)
X-Inumbo-ID: 03e97207-5d55-4983-b852-8b5cc74957ce
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603296079;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=5+br0Xv6fhu7mhelHBzTWatSM7V/mNQlATYx3z4riCo=;
  b=Tw4+N/XXkZJIGj/fY58xclfXt+qomSZnXrN34bTRPi6d++oTN4WBNJl4
   TR4p+JelUi/fnUKhsDQVYi7/vglW/if6TzrSyJV1G2Aj9/MnqMHDkuBfz
   YMthS4+fuLwE4ovvrGxf3rNAL2f3SrvBzkEgqtqUOnaQKtxss2zu9AH/T
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: N42q3vuJLIAiwoCu4Kq/cxmzAQfIFgbCbdyQ7yGARdoQIOLcHZzx1wG3MWOojlhDrHYb4xMjGn
 4WWm+s5WKDD10Q0TE5VPnc0CAC1CWHXouVf4j2NM7tNBXLlaB5ZgDT2zm5M8nLjJFlJ+0EM/M/
 i5p9JVP4Xo6Xh2QpDAN9u2fHOf2sZKmQOKHebugY5yoChpnIwHbcUCCaaxpJMRuiPwSHuuZGFZ
 2LYQ0wNn74tldgUgmu6am8HaK3dgnpfo1iZcg2NivF6B+Y5rNbNM17vet/aJDIbXQKXjs7rVqa
 nc8=
X-SBRS: 2.5
X-MesageID: 30555846
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="30555846"
Date: Wed, 21 Oct 2020 18:00:28 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 2/8] evtchn: replace FIFO-specific header by generic
 private one
Message-ID: <20201021160028.j4shjjvdysfti747@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <3fea358e-d6d1-21d4-2d83-d9bd457ba3b5@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3fea358e-d6d1-21d4-2d83-d9bd457ba3b5@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Tue, Oct 20, 2020 at 04:08:33PM +0200, Jan Beulich wrote:
> Having a FIFO specific header is not (or at least no longer) warranted
> with just three function declarations left there. Introduce a private
> header instead, moving there some further items from xen/event.h.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> v2: New.
> ---
> TBD: If - considering the layering violation that's there anyway - we
>      allowed PV shim code to make use of this header, a few more items
>      could be moved out of "public sight".
> 
> --- a/xen/common/event_2l.c
> +++ b/xen/common/event_2l.c
> @@ -7,11 +7,12 @@
>   * Version 2 or later.  See the file COPYING for more details.
>   */
>  
> +#include "event_channel.h"
> +
>  #include <xen/init.h>
>  #include <xen/lib.h>
>  #include <xen/errno.h>
>  #include <xen/sched.h>
> -#include <xen/event.h>
>  
>  #include <asm/guest_atomics.h>
>  
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -14,17 +14,17 @@
>   * along with this program; If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> +#include "event_channel.h"
> +
>  #include <xen/init.h>
>  #include <xen/lib.h>
>  #include <xen/errno.h>
>  #include <xen/sched.h>
> -#include <xen/event.h>
>  #include <xen/irq.h>
>  #include <xen/iocap.h>
>  #include <xen/compat.h>
>  #include <xen/guest_access.h>
>  #include <xen/keyhandler.h>
> -#include <xen/event_fifo.h>
>  #include <asm/current.h>
>  
>  #include <public/xen.h>
> --- /dev/null
> +++ b/xen/common/event_channel.h
> @@ -0,0 +1,63 @@
> +/* Event channel handling private header. */

I've got no idea whether it matters or not, but the code moved here
from xen/event.h had an explicit copyright banner with Keir's name;
should this be kept?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 16:22:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 16:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10113.26693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGsP-0007nJ-NB; Wed, 21 Oct 2020 16:22:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10113.26693; Wed, 21 Oct 2020 16:22:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVGsP-0007nC-Jp; Wed, 21 Oct 2020 16:22:13 +0000
Received: by outflank-mailman (input) for mailman id 10113;
 Wed, 21 Oct 2020 16:22:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=286h=D4=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kVGsO-0007n7-EF
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 16:22:12 +0000
Received: from mail-lj1-x22a.google.com (unknown [2a00:1450:4864:20::22a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e174ff0-ac62-4368-8320-3dc07b8e8ea3;
 Wed, 21 Oct 2020 16:22:11 +0000 (UTC)
Received: by mail-lj1-x22a.google.com with SMTP id y16so3316806ljk.1
 for <xen-devel@lists.xenproject.org>; Wed, 21 Oct 2020 09:22:11 -0700 (PDT)
X-Inumbo-ID: 0e174ff0-ac62-4368-8320-3dc07b8e8ea3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=l7c0S9vOF+ht2BiXcHdKl9h63RTgilNPsS+b9BKHDac=;
        b=PhfaJsniuA4LOyaSRG6ajIIJkvlX2R8YNkrDcalFA1i/L/CzWG7hrmKEpkbF9aGdKU
         QTMZUx667Dj716CiYgESrAaO7hGTXXrHOfw1sEkhwLIqahdZ2BRi9Gp6QTv78Ctt6XzJ
         fEi6YPKrjwskbdTKMnVXmK39M4flSaBOuUshcUmHxlfCDKAoBm+XLkggIXTKrWV0gcbB
         DaGU9BIf+uEjb8IV/7KyLGd3UOhhFMpWGEvHiQpFG4O796gB+PMMqTyx7EH+zn5srNHJ
         GXAyWwGDZl/n7j01yhiUtbnAWVOlZVbWfLze49Wg+7Sr8Mc2THj3y1AsxhraJ4iKICfY
         62FQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=l7c0S9vOF+ht2BiXcHdKl9h63RTgilNPsS+b9BKHDac=;
        b=aPcH3lq14Xw8U5ahTNE6Twb1Apzo7EbIOX6yc85plDt7dJkazPP767bprRqpQWHmip
         mLqwZZQymwxayA9Rst3CTFlKVqAsAvcjPx1G9z9Z+wPrkzASPfHS9zOkESoluCYaxawW
         dfjIZjfagNnfbwHDVIM29QgNKeOBVa3pRVaKBvgQlfpUDy1AGK5SKNK+EHHqYe34LNWE
         dtuwN9tee1Az7XgY6HoFj/EiAYP9riD4p6SfZACk12HyKFdmNYxlk8HuPcl7iKUiho+G
         ac+qmdD3ZciXkU9uF0hOiPVofrK3Sc/NTEVWfcWCk1ewSXS5BuMRTW4YNtm1V0X3ILu2
         34aw==
X-Gm-Message-State: AOAM531vtSsEb/GFqeAw3d6lqhncPAzm6RtSeeAGpWX+DBxXOYLsAz9p
	B/hp7wSvET0Ir3Ukimlz8cbTwPzcJECQsJ1JRiE=
X-Google-Smtp-Source: ABdhPJxTm93YGz2oS64DClXv+a1gKIcdRnNgChqTsns7+gRnEBNhsiz6jreAYCE/S7nDzhUv4KaSeqgsK0CG+bGx/bw=
X-Received: by 2002:a2e:96d2:: with SMTP id d18mr1701200ljj.407.1603297329809;
 Wed, 21 Oct 2020 09:22:09 -0700 (PDT)
MIME-Version: 1.0
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
In-Reply-To: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 21 Oct 2020 12:21:58 -0400
Message-ID: <CAKf6xpt7zgM3HghQru28kovd0m7z84bAR8Uqt6KKxbSrvQv8ZA@mail.gmail.com>
Subject: Re: XSM and the idle domain
To: Hongyan Xia <hx242@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Content-Type: text/plain; charset="UTF-8"

On Wed, Oct 21, 2020 at 10:35 AM Hongyan Xia <hx242@xen.org> wrote:
>
> Hi,

Hi, Hongyan.

I'm familiar with Flask but not particularly familiar with other XSMs
or CONFIG_XSM=n.

> A while ago there was a quick chat on IRC about how XSM interacts with
> the idle domain. The conversation did not reach any clear conclusions
> so it might be a good idea to summarise the questions in an email.
>
> Basically there were two questions in that conversation:
>
> 1. In its current state, are security modules able to limit what the
> idle domain can do?
> 2. Should security modules be able to restrict the idle domain?
>
> The first question came up during ongoing work in LiveUpdate. After an
> LU, the next Xen needs to restore all domains. To do that, some
> hypercalls need to be issued from the idle domain context and
> apparently XSM does not like it. We need to introduce hacks in the
> dummy module to leave the idle domain alone.

Is this modifying xsm_default_action() to add an is_idle_domain()
check which always succeeds?

> Our work is not compiled
> with CONFIG_XSM at all, but with CONFIG_XSM, are we able to enforce
> security policies against the idle domain?

It's not clear to me if you want to use CONFIG_XSM, or just don't want
to break it.

> Of course, without any LU
> work this does not make any difference because the idle domain does not
> do any useful work to be restricted anyway.

I think this last sentence is the main point.  It's always been
labeled xen_t, but since it doesn't go through any of the hook points,
it hasn't needed any restrictions.  Actually, reviewing the Flask
policy there is:
# Domain destruction can result in some access checks for actions performed by
# the hypervisor.  These should always be allowed.
allow xen_t resource_type : resource { remove_irq remove_ioport remove_iomem };

> Also, should the idle domain be restricted? IMO the idle domain is Xen
> itself which mostly bootstraps the system and performs limited work
> when switched to, and is not something a user (either dom0 or domU)
> directly interacts with. I doubt XSM was designed to include the idle
> domain (although there is an ID allocated for it in the code), so I
> would say just exclude idle in all security policy checks.

I think it makes sense to label xen_t, even if it doesn't do anything.
As you say, it is a distinct entity from dom0 and domU.  Yes, it can
circumvent the policy, but it's not actively hurting anything.  And it
can be good to catch when it does start doing something, as you found.

Might it make sense to create an LU domain instead of using the idle
domain for Live Update?  Another approach could be to run the idle
domain as "dom0" during Live Update, and then transition to the
regular idle domain when it completes.  You are re-creating dom0, but
you could flip is_privileged on during live update and then remove it
once complete.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 16:49:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 16:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10115.26705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVHI9-0001Qv-TD; Wed, 21 Oct 2020 16:48:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10115.26705; Wed, 21 Oct 2020 16:48:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVHI9-0001Qo-QA; Wed, 21 Oct 2020 16:48:49 +0000
Received: by outflank-mailman (input) for mailman id 10115;
 Wed, 21 Oct 2020 16:48:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZbQ=D4=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVHI8-0001Qj-Hx
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 16:48:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id befa7eca-f152-4bca-b82b-ad92633f4349;
 Wed, 21 Oct 2020 16:48:45 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVHI4-0002UM-Mv; Wed, 21 Oct 2020 16:48:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVHI4-00058C-D6; Wed, 21 Oct 2020 16:48:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVHI4-0008Ia-Cc; Wed, 21 Oct 2020 16:48:44 +0000
X-Inumbo-ID: befa7eca-f152-4bca-b82b-ad92633f4349
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5S4gIPSaQWYym/gzhOgB5seeJ1iAWoAekGw5K/2+fbU=; b=QGnQ7VeAcO7IWFU9F9HTCXdJgE
	3gMJtNMdnt2LfsXsLGwqiVdkRsboyy8mSFR2ocJlWpEfLLfX74hirJuAlYwdCeiNEVKnNPbIIdBL1
	ULlk26jwEn6ytYG98RvWszpA9jAAV3SOGbhd7y8Bkr0TtNIxN75HL5DNBtJ8UaaHNyY8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156035-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156035: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0108b011e133915a8ebd33636811d8c141b6e9f3
X-Osstest-Versions-That:
    xen=6888017392ac25b5e588554030642affac25a95d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 21 Oct 2020 16:48:44 +0000

flight 156035 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156035/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155402
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10       fail  like 155402
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155402
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155402
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155402
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155402
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155402
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155402
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155402
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155402
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155402
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155402
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0108b011e133915a8ebd33636811d8c141b6e9f3
baseline version:
 xen                  6888017392ac25b5e588554030642affac25a95d

Last test of basis   155402  2020-10-03 21:55:13 Z   17 days
Testing same since   156035  2020-10-20 13:36:02 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Chen Yu <yu.c.chen@intel.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6888017392..0108b011e1  0108b011e133915a8ebd33636811d8c141b6e9f3 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 16:53:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 16:53:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10118.26720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVHMs-0002NA-M3; Wed, 21 Oct 2020 16:53:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10118.26720; Wed, 21 Oct 2020 16:53:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVHMs-0002N3-Ig; Wed, 21 Oct 2020 16:53:42 +0000
Received: by outflank-mailman (input) for mailman id 10118;
 Wed, 21 Oct 2020 16:53:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yt4r=D4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVHMr-0002My-Uy
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 16:53:42 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8675a79f-4375-439e-86e0-7001351bea17;
 Wed, 21 Oct 2020 16:53:40 +0000 (UTC)
X-Inumbo-ID: 8675a79f-4375-439e-86e0-7001351bea17
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603299220;
  h=subject:from:to:cc:references:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=pGLUYrgKfbt27QziQs8IU9vQTxO7MYaXgcxQoQKaQgI=;
  b=IJCj2TjkjeBARGp0LZHpY7wjM8bloYvyKe1L/StFEnlUqL4emnpo07UF
   K1kuygdILVVMumfi/oZv9bL9c8x8LFSAxduMbIqBsXuFmSMRr8laqPh71
   qnpPZN9sLaVL6lpTKyOI1hXTfzkhEJoBkzRyvzUwLY0RMTbXxFTuiPi57
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: xwtpIL3xebimPrSkhT64oPOtMbqNU22vaZVVDEEbYF5vua5ZBvjXlYkV/Hq4M03rkOG/kkRD6V
 SX96RV4KWkVIPXtQrpdhE0vPJPcqnixZtBayMYa4vXuZ6j8we9ZdfKbg/hChtkkZcHDV1T8Z09
 iB4JeCaSe/NeVnJUaI4t9wjZXX/88cmWhyCRmPNFc3S5F+8K2E2SwdCL+XxgDFSN4GNV4+ycg+
 JWxXo4xqTbAqha2gj3QWKXm8HJonWY16C4i+5U8nBNpDwWgQmge1ScyBIkiiSNRjdR8GB0Wo/b
 UnI=
X-SBRS: 2.5
X-MesageID: 29833932
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29833932"
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201021130708.12249-1-andrew.cooper3@citrix.com>
 <7967fa6e-213d-50e2-87d3-dbd319aa2060@suse.com>
 <9fe3d967-6bfe-71ef-6430-029de97dca8c@citrix.com>
 <cd2bdd84-4b78-7f19-81a2-ffd358cb3b13@citrix.com>
Message-ID: <72c9dfbd-3ace-ee66-51a6-9490cdf5ffc9@citrix.com>
Date: Wed, 21 Oct 2020 17:53:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <cd2bdd84-4b78-7f19-81a2-ffd358cb3b13@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 21/10/2020 16:55, Andrew Cooper wrote:
> On 21/10/2020 16:39, Andrew Cooper wrote:
>>>> @@ -4051,27 +4057,28 @@ long do_mmu_update(
>>>>                          break;
>>>>                      rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
>>>>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>>>> -                    if ( !rc && pt_owner->arch.pv.xpti )
>>>> +                    /* Paging structure maybe changed.  Flush linear range. */
>>>> +                    if ( !rc )
>>>>                      {
>>>> -                        bool local_in_use = false;
>>>> +                        bool local_in_use = mfn_eq(
>>>> +                            pagetable_get_mfn(curr->arch.guest_table), mfn);
>>>>  
>>>> -                        if ( mfn_eq(pagetable_get_mfn(curr->arch.guest_table),
>>>> -                                    mfn) )
>>>> -                        {
>>>> -                            local_in_use = true;
>>>> -                            get_cpu_info()->root_pgt_changed = true;
>>>> -                        }
>>>> +                        flush_flags_all |= FLUSH_TLB;
>>>> +
>>>> +                        if ( local_in_use )
>>>> +                            flush_flags_local |= FLUSH_TLB | FLUSH_ROOT_PGTBL;
>>> Aiui here (and in the code consuming the flags) you build upon
>>> flush_flags_local, when not zero, always being a superset of
>>> flush_flags_all. I think this is a trap to fall into when later
>>> wanting to change this code, but as per below this won't hold
>>> anymore anyway, I think. Hence here I think you want to set
>>> FLUSH_TLB unconditionally, and above for L3 and L2 you want to
>>> set it in both variables. Or, if I'm wrong below, a comment to
>>> that effect may help people avoid falling into this trap.
>>>
>>> An alternative would be to have
>>>
>>>     flush_flags_local |= (flush_flags_all & FLUSH_TLB);
>>>
>>> before doing the actual flush.
> Also, what I forgot to say in the previous reply: this is still buggy
> if, hypothetically speaking, FLUSH_CACHE had managed to be accumulated
> in flush_flags_all.
>
> You cannot have general accumulation logic, a special case for local,
> and any catch-all logic like that, because the only correct way to do
> the catch-all logic will clobber the special case for local.

I'm going to try a third time, with flush_flags and a local_may_skip
which defaults to GLOBAL|ROOT_PGTBL and may get clobbered.

This seems like it might be a less fragile way of expressing the
optimisation.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 17:42:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 17:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10135.26787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVI7d-0007AB-1e; Wed, 21 Oct 2020 17:42:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10135.26787; Wed, 21 Oct 2020 17:42:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVI7c-0007A4-V0; Wed, 21 Oct 2020 17:42:00 +0000
Received: by outflank-mailman (input) for mailman id 10135;
 Wed, 21 Oct 2020 17:41:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yt4r=D4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVI7b-00079z-8v
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 17:41:59 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e7857f7-49b8-47e0-9d0a-43ac289923a2;
 Wed, 21 Oct 2020 17:41:57 +0000 (UTC)
X-Inumbo-ID: 4e7857f7-49b8-47e0-9d0a-43ac289923a2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603302118;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=nTqh4d75K3JROmU6X1GvO+WNAKP5DpJgavm1crRCtUI=;
  b=BNi+KJFeq6gHSj+Mu13tZHZusyzsgb9AsquvhIRrScx1IgFly4slAbKF
   52z3fo6DaYXQKi5B93D/HsjPcUio54Qy3cnCNsEUkZco2KXRPM/8M53I2
   Pvx80AF7dQnqi9G4ZPdJ1Zxfl9Y9Ef5VWtX1IDdcF+3Me/tcIl76QLc9I
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: J9eY8jhvX+cLa7Rpvcod0oBhFc/dWWOR+rbci/a9W7NHqZLoIZdUv1RHr6a7aO0USH3VigVv3a
 3MhJmMyAnGEU02fyi/ntBTU7fCdVnH6d8GtuoE3eSjz7gwEsEb29/NyAvI00J6h/LEm+075haJ
 X2qulEXE7OimTp5gD/Hm0oDYpao2/5CTKbgerxPV2uHB4Q1jSIaTEtOsbGA2SV7zeFB702zwR4
 P2Q+6HO46qNXw71tOxfr6nAEFBG3sirI1tNhaVR/Ni0S8bUIdmyLI3lEsAGYzY06uyzLPhGvb3
 1o8=
X-SBRS: 2.5
X-MesageID: 29510736
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29510736"
Subject: Re: [PATCH v2 00/14] kernel-doc: public/arch-arm.h
To: Stefano Stabellini <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>
CC: <george.dunlap@citrix.com>, <ian.jackson@eu.citrix.com>,
	<jbeulich@suse.com>, <julien@xen.org>, <wl@xen.org>,
	<Bertrand.Marquis@arm.com>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <85d505c1-1328-055e-e3f9-1b8cddde16d6@citrix.com>
Date: Wed, 21 Oct 2020 18:41:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 21/10/2020 01:00, Stefano Stabellini wrote:
> Hi all,
>
> This patch series converts Xen in-code comments to the kernel-doc (and
> doxygen) format:
>
> https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html
>
> Please note that this patch series is meant as groundwork. The end goal
> is to enable a more sophisticated documents generation with doxygen,
> see: https://marc.info/?l=xen-devel&m=160220627022339
>
>
> # Changes compared to v1:
> - addressed review comments
> - use oneline comments even for nested struct members

On the whole, good.

However, there is one thing which is problematic.  Right from patch 1,
you start breaking the content used to render
https://xenbits.xen.org/docs/unstable/hypercall/index.html

Either the patches need to incrementally feed the converted files into
Sphinx directly (possibly with some one-time plumbing ahead of time), or
patch 1 needs to be some script in docs/ capable of rendering kernel-doc
to HTML, so we at least keep the plain docs around until the Sphinx
integration is complete.

i.e. don't cause what we currently have to fall off
https://xenbits.xen.org/docs/ entirely as a consequence of this series.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 17:49:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 17:49:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10137.26800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVIEi-0007TY-QR; Wed, 21 Oct 2020 17:49:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10137.26800; Wed, 21 Oct 2020 17:49:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVIEi-0007TR-N7; Wed, 21 Oct 2020 17:49:20 +0000
Received: by outflank-mailman (input) for mailman id 10137;
 Wed, 21 Oct 2020 17:49:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yt4r=D4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVIEh-0007TM-N2
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 17:49:19 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13779129-d811-48ac-8c7a-6ce4c0cdd585;
 Wed, 21 Oct 2020 17:49:18 +0000 (UTC)
X-Inumbo-ID: 13779129-d811-48ac-8c7a-6ce4c0cdd585
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603302558;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=+HYpv9sM4gfGI+KPF7hJF4XqQpPinO3QulYldAM9FU4=;
  b=Ynjnc+0F+gg9pvHNPd8/jdDz82TrecUhMnvqFfxRtjtBUPzRcPFYOIKJ
   gn3ceXC7e3H4imJrEpBH2SuglO8Rr0pmnV5S9pEzAgaFuNWWghlP1N7wH
   l3MKGCgIjr0GOVjT8ijphCxs909y5IeX+7isqDhOceAmvvmDjaJ67N8vI
   I=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: nSO4HHEtLjwvDW/mEN8htJRTsK8GuD0HOuDkGRe3CuG6tN7+WZnk3CXKBzh1ILLLFb6iyGvhZ0
 Kx4I/xGKFiewTWRUuo+Q6RwEeUU5icU+fI69dXLInAMlhp/l5VGfmZbUv1OnmS3HknGlyGgh5x
 qSgVMRD6w3Yh6PcUXi1Ch8UdoUsrZM+jCE6vAHlcsplawvGZd5AlpKDmjNMxzhzlgjjPv2x60z
 5vR9d49mlmq2IsZBvuAFYy8RqFr85Ra5vB7ecSgYhYoASgF39GOkKMPtUj50kyAkpwH2W+mkVf
 zQw=
X-SBRS: 2.5
X-MesageID: 29562355
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,401,1596513600"; 
   d="scan'208";a="29562355"
Subject: Re: [PATCH v2 01/14] kernel-doc: public/arch-arm.h
To: Stefano Stabellini <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>
CC: <george.dunlap@citrix.com>, <ian.jackson@eu.citrix.com>,
	<jbeulich@suse.com>, <julien@xen.org>, <wl@xen.org>,
	<Bertrand.Marquis@arm.com>, Stefano Stabellini
	<stefano.stabellini@xilinx.com>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
 <20201021000011.15351-1-sstabellini@kernel.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1659282c-dc07-ee3e-9c1d-654861643b72@citrix.com>
Date: Wed, 21 Oct 2020 18:47:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201021000011.15351-1-sstabellini@kernel.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 21/10/2020 00:59, Stefano Stabellini wrote:
> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> index c365b1b39e..409697dede 100644
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -201,7 +208,9 @@ typedef uint64_t xen_pfn_t;
>  #define PRI_xen_pfn PRIx64
>  #define PRIu_xen_pfn PRIu64
>  
> -/*
> +/**
> + * DOC: XEN_LEGACY_MAX_VCPUS
> + *
>   * Maximum number of virtual CPUs in legacy multi-processor guests.
>   * Only one. All other VCPUS must use VCPUOP_register_vcpu_info.
>   */

I suppose I don't really want to know why this exists in the ARM ABI? 
It looks to be a misfeature.

Shouldn't it be labelled as obsolete?  (Is that even a thing you can do
in kernel-doc?  It surely must be...)

> @@ -299,26 +308,28 @@ struct vcpu_guest_context {
>  typedef struct vcpu_guest_context vcpu_guest_context_t;
>  DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
>  
> -/*
> +
> +/**
> + * struct xen_arch_domainconfig - arch-specific domain creation params
> + *
>   * struct xen_arch_domainconfig's ABI is covered by
>   * XEN_DOMCTL_INTERFACE_VERSION.
>   */
> +struct xen_arch_domainconfig {
> +    /** @gic_version: IN/OUT parameter */
>  #define XEN_DOMCTL_CONFIG_GIC_NATIVE    0
>  #define XEN_DOMCTL_CONFIG_GIC_V2        1
>  #define XEN_DOMCTL_CONFIG_GIC_V3        2
> -
> +    uint8_t gic_version;

Please can we have a newline in here, and elsewhere separating blocks of
logically connected field/constant/comments.

It will make a world of difference to the readability of the header
files themselves.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 22:08:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 22:08:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10141.26817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVMHc-0005hF-Du; Wed, 21 Oct 2020 22:08:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10141.26817; Wed, 21 Oct 2020 22:08:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVMHc-0005h8-An; Wed, 21 Oct 2020 22:08:36 +0000
Received: by outflank-mailman (input) for mailman id 10141;
 Wed, 21 Oct 2020 22:08:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kVMHb-0005h3-9v
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 22:08:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 30143480-acdd-4426-9f9e-74c59a2afd5d;
 Wed, 21 Oct 2020 22:08:33 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 63FA224198;
 Wed, 21 Oct 2020 22:08:32 +0000 (UTC)
X-Inumbo-ID: 30143480-acdd-4426-9f9e-74c59a2afd5d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603318112;
	bh=w5f5LBOO8QWVhF4rbFRxfWHUWPezr7wTxwtgOEfrsXs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=BebKciay2sobxlGbWIHQSkAADNdKLpfkzlhJBkg77ljcX5rTv4h0dp1gtWHhBosXm
	 hixKgretL8KHzZ7hDX5ewNP2mAL39o7EotX7yHu/18Uj8a9Yjpx/DoRq69TB7lWuhV
	 xj1rEF6N0b5Su/SKORN59q4qXTz5aAF8twFOo1yI=
Date: Wed, 21 Oct 2020 15:08:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, george.dunlap@citrix.com, 
    ian.jackson@eu.citrix.com, jbeulich@suse.com, julien@xen.org, wl@xen.org, 
    Bertrand.Marquis@arm.com
Subject: Re: [PATCH v2 00/14] kernel-doc: public/arch-arm.h
In-Reply-To: <85d505c1-1328-055e-e3f9-1b8cddde16d6@citrix.com>
Message-ID: <alpine.DEB.2.21.2010211504390.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s> <85d505c1-1328-055e-e3f9-1b8cddde16d6@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-485445970-1603318112=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-485445970-1603318112=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 21 Oct 2020, Andrew Cooper wrote:
> On 21/10/2020 01:00, Stefano Stabellini wrote:
> > Hi all,
> >
> > This patch series converts Xen in-code comments to the kernel-doc (and
> > doxygen) format:
> >
> > https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html
> >
> > Please note that this patch series is meant as groundwork. The end goal
> > is to enable a more sophisticated documents generation with doxygen,
> > see: https://marc.info/?l=xen-devel&m=160220627022339
> >
> >
> > # Changes compared to v1:
> > - addressed review comments
> > - use oneline comments even for nested struct members
> 
> On the whole, good.
> 
> However, there is one thing which is problematic.  Right from patch 1, you
> start breaking the content used to render
> https://xenbits.xen.org/docs/unstable/hypercall/index.html
> 
> Either the patches need to incrementally feed the converted files into
> Sphinx directly (possibly with some one-time plumbing ahead of time), or
> patch 1 needs to be some script in docs/ capable of rendering kernel-doc
> to HTML, so we at least keep the plain docs around until the Sphinx
> integration is complete.
> 
> i.e. don't cause what we currently have to fall off
> https://xenbits.xen.org/docs/ entirely as a consequence of this series.

Thanks for pointing this out, it was not my intention. In fact, I wasn't
aware of https://xenbits.xen.org/docs/unstable/hypercall/index.html at
all. How is it generated? I am asking because I need to understand how
that works in order not to break it...

Is it just a matter of retaining the keywords like `incontents 50 and other
comments starting with ` ?

Otherwise, yes, I could add kernel-doc to docs/ or scripts/ to generate
markdown documents, which could be turned to HTML with Sphinx.
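As an illustrative sketch of that pipeline (paths and output directories here are assumptions, not an agreed design; the kernel's scripts/kernel-doc does have an -rst output mode, but how it would be wired into Xen's docs/ is exactly what is under discussion):

```shell
# Sketch only: extract the kernel-doc comments as reStructuredText...
./scripts/kernel-doc -rst xen/include/public/arch-arm.h > docs/sphinx/arch-arm.rst

# ...then let Sphinx render it to HTML alongside the existing docs.
sphinx-build -b html docs/sphinx docs/html
```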
--8323329-485445970-1603318112=:12247--


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 22:13:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 22:13:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10144.26830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVMM0-0006YG-Vn; Wed, 21 Oct 2020 22:13:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10144.26830; Wed, 21 Oct 2020 22:13:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVMM0-0006Y9-Rx; Wed, 21 Oct 2020 22:13:08 +0000
Received: by outflank-mailman (input) for mailman id 10144;
 Wed, 21 Oct 2020 22:13:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aWaT=D4=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kVMLz-0006Y4-AY
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 22:13:07 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a79acf41-5742-430a-92ae-a722c85c6f90;
 Wed, 21 Oct 2020 22:13:06 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09LMCrQ2073233
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 21 Oct 2020 18:12:59 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09LMCr8q073232;
 Wed, 21 Oct 2020 15:12:53 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: a79acf41-5742-430a-92ae-a722c85c6f90
Date: Wed, 21 Oct 2020 15:12:53 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/acpi: Don't fail if SPCR table is absent
Message-ID: <20201021221253.GA73207@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Absence of an SPCR table likely means the console is a framebuffer.  In
that case acpi_iomem_deny_access() should NOT fail.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
 xen/arch/arm/acpi/domain_build.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/acpi/domain_build.c b/xen/arch/arm/acpi/domain_build.c
index 1b1cfabb00..bbdc90f92c 100644
--- a/xen/arch/arm/acpi/domain_build.c
+++ b/xen/arch/arm/acpi/domain_build.c
@@ -42,17 +42,18 @@ static int __init acpi_iomem_deny_access(struct domain *d)
     status = acpi_get_table(ACPI_SIG_SPCR, 0,
                             (struct acpi_table_header **)&spcr);
 
-    if ( ACPI_FAILURE(status) )
+    if ( ACPI_SUCCESS(status) )
     {
-        printk("Failed to get SPCR table\n");
-        return -EINVAL;
+        mfn = spcr->serial_port.address >> PAGE_SHIFT;
+        /* Deny MMIO access for UART */
+        rc = iomem_deny_access(d, mfn, mfn + 1);
+        if ( rc )
+            return rc;
+    }
+    else
+    {
+        printk("Failed to get SPCR table, Xen console may be unavailable\n");
     }
-
-    mfn = spcr->serial_port.address >> PAGE_SHIFT;
-    /* Deny MMIO access for UART */
-    rc = iomem_deny_access(d, mfn, mfn + 1);
-    if ( rc )
-        return rc;
 
     /* Deny MMIO access for GIC regions */
     return gic_iomem_deny_access(d);
-- 
2.20.1



-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Oct 21 22:17:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 22:17:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10147.26842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVMPw-0006mX-GN; Wed, 21 Oct 2020 22:17:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10147.26842; Wed, 21 Oct 2020 22:17:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVMPw-0006mQ-DN; Wed, 21 Oct 2020 22:17:12 +0000
Received: by outflank-mailman (input) for mailman id 10147;
 Wed, 21 Oct 2020 22:17:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=01QD=D4=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kVMPv-0006mL-0x
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 22:17:11 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b32b3a73-8ee3-4cfe-b3b2-9c2bc18c36fb;
 Wed, 21 Oct 2020 22:17:10 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id EE281241A3;
 Wed, 21 Oct 2020 22:17:08 +0000 (UTC)
X-Inumbo-ID: b32b3a73-8ee3-4cfe-b3b2-9c2bc18c36fb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603318629;
	bh=6KuZqfguu7FYtFJKwVcmdvYA+41YZTjOZ1wPXfq5t0I=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=0umyfJEOTCl7WFGZRdGyPhU7VCRIz5goxL6w8OJXMTqAY34qxejd8R3ilF9v898LM
	 jn4t62lXpJcvdsjasgbsmByjBb33SsjZyBwA5Hz83FpE3jOabcpVxknx8n3rCsBqE3
	 7/GzLelVB0On7/P4mDllPWeM8H+zidO6wcLOXw9Q=
Date: Wed, 21 Oct 2020 15:17:08 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, george.dunlap@citrix.com, 
    ian.jackson@eu.citrix.com, jbeulich@suse.com, julien@xen.org, wl@xen.org, 
    Bertrand.Marquis@arm.com, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2 01/14] kernel-doc: public/arch-arm.h
In-Reply-To: <1659282c-dc07-ee3e-9c1d-654861643b72@citrix.com>
Message-ID: <alpine.DEB.2.21.2010211508450.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s> <20201021000011.15351-1-sstabellini@kernel.org> <1659282c-dc07-ee3e-9c1d-654861643b72@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-152475288-1603318629=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-152475288-1603318629=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 21 Oct 2020, Andrew Cooper wrote:
> On 21/10/2020 00:59, Stefano Stabellini wrote:
> > diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
> > index c365b1b39e..409697dede 100644
> > --- a/xen/include/public/arch-arm.h
> > +++ b/xen/include/public/arch-arm.h
> > @@ -201,7 +208,9 @@ typedef uint64_t xen_pfn_t;
> >  #define PRI_xen_pfn PRIx64
> >  #define PRIu_xen_pfn PRIu64
> >  
> > -/*
> > +/**
> > + * DOC: XEN_LEGACY_MAX_VCPUS
> > + *
> >   * Maximum number of virtual CPUs in legacy multi-processor guests.
> >   * Only one. All other VCPUS must use VCPUOP_register_vcpu_info.
> >   */
> 
> I suppose I don't really want to know why this exists in the ARM ABI? 
> It looks to be a misfeature.
> 
> Shouldn't it be labelled as obsolete?  (Is that even a thing you can do
> in kernel-doc?  It surely must be...)

I tried not to make any content changes as part of this series, but as
we are looking into this, I could append patches to the end of the
series to make some additional changes. I.e. I'd prefer to keep the
mechanical patches mechanical.

In regards to XEN_LEGACY_MAX_VCPUS, it is part of struct shared_info so
it must be defined. It makes sense to define it to the smallest number
given that the newer interface (VCPUOP_register_vcpu_info) is preferred.

In regards to labelling things as obsolete, I couldn't find a way to do
it with kernel-doc, but keep in mind that the end goal is to use
doxygen. It might become possible then.


> > @@ -299,26 +308,28 @@ struct vcpu_guest_context {
> >  typedef struct vcpu_guest_context vcpu_guest_context_t;
> >  DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
> >  
> > -/*
> > +
> > +/**
> > + * struct xen_arch_domainconfig - arch-specific domain creation params
> > + *
> >   * struct xen_arch_domainconfig's ABI is covered by
> >   * XEN_DOMCTL_INTERFACE_VERSION.
> >   */
> > +struct xen_arch_domainconfig {
> > +    /** @gic_version: IN/OUT parameter */
> >  #define XEN_DOMCTL_CONFIG_GIC_NATIVE    0
> >  #define XEN_DOMCTL_CONFIG_GIC_V2        1
> >  #define XEN_DOMCTL_CONFIG_GIC_V3        2
> > -
> > +    uint8_t gic_version;
> 
> Please can we have a newline in here, and elsewhere separating blocks of
> logically connected field/constant/comments.
> 
> It will make a world of difference to the readability of the header
> files themselves.

Sure, I can do that.
--8323329-152475288-1603318629=:12247--


From xen-devel-bounces@lists.xenproject.org Wed Oct 21 22:20:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 21 Oct 2020 22:20:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10150.26854 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVMSd-0006wc-Uw; Wed, 21 Oct 2020 22:19:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10150.26854; Wed, 21 Oct 2020 22:19:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVMSd-0006wV-RT; Wed, 21 Oct 2020 22:19:59 +0000
Received: by outflank-mailman (input) for mailman id 10150;
 Wed, 21 Oct 2020 22:19:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yt4r=D4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVMSc-0006wQ-0l
 for xen-devel@lists.xenproject.org; Wed, 21 Oct 2020 22:19:58 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1bb2c559-8662-406d-8741-6be5f1479558;
 Wed, 21 Oct 2020 22:19:56 +0000 (UTC)
X-Inumbo-ID: 1bb2c559-8662-406d-8741-6be5f1479558
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603318796;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=9cBA/Wmi9yoXzSP093C/7HAdOeR5X/ffuX6BuKX7UUU=;
  b=J5RelrYjMjMa56G8uHSKw0dNr9FjcWQ4ezG87ymsahpy92i5GdkB44ZF
   9x7Limz4qhEuUqQUidRacINEqonvluABupL13KtAJ1i0bSEfM19P1WOhp
   9aYwXOgjlVvlSxFoA0YqgbDv4kg01hMLZJvGuPFydSCEFDvfPWDf61lNw
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: LJS+Iw0KcEJFzMh0CmMjH6z5cIONCpUJdhEtRxY0thjIgwi4wcOH2avndW8PIlBCd9yctBQG/E
 gLfAsmOnUK9DpxZ2b/pJPcrO9dFXFHqrPSd+jeXrP820ap2fyj+SGaUP0l5hdmlbUpyTwIWvAh
 1WanFPEwne5iIhf+NU4TQqdNlgQ3djX+ICFM9gy23Oqwcg/sbgDc76UUrqu/YxchjKaHo0Evn6
 KpjNrCum/j7UOOZitjtmhZ3b/biSh0DeG4y14Iva5/+XKIfeeC2YxThExUMC7Abt8n6nefnEbt
 DDI=
X-SBRS: 2.5
X-MesageID: 30584730
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,402,1596513600"; 
   d="scan'208";a="30584730"
Subject: Re: [PATCH v2 00/14] kernel-doc: public/arch-arm.h
To: Stefano Stabellini <sstabellini@kernel.org>
CC: <xen-devel@lists.xenproject.org>, <george.dunlap@citrix.com>,
	<ian.jackson@eu.citrix.com>, <jbeulich@suse.com>, <julien@xen.org>,
	<wl@xen.org>, <Bertrand.Marquis@arm.com>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
 <85d505c1-1328-055e-e3f9-1b8cddde16d6@citrix.com>
 <alpine.DEB.2.21.2010211504390.12247@sstabellini-ThinkPad-T480s>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <966c90f8-730c-7df1-17bf-95a8a0bcaf9d@citrix.com>
Date: Wed, 21 Oct 2020 23:19:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010211504390.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 21/10/2020 23:08, Stefano Stabellini wrote:
> On Wed, 21 Oct 2020, Andrew Cooper wrote:
>> On 21/10/2020 01:00, Stefano Stabellini wrote:
>>> Hi all,
>>>
>>> This patch series converts Xen in-code comments to the kernel-doc (and
>>> doxygen) format:
>>>
>>> https://www.kernel.org/doc/html/latest/doc-guide/kernel-doc.html
>>>
>>> Please note that this patch series is meant as groundwork. The end goal
>>> is to enable a more sophisticated documents generation with doxygen,
>>> see: https://marc.info/?l=xen-devel&m=160220627022339
>>>
>>>
>>> # Changes compared to v1:
>>> - addressed review comments
>>> - use oneline comments even for nested struct members
>> On the whole, good.
>>
>> However, there is one thing which is problematic.  Right from patch 1, you
>> start breaking the content used to render
>> https://xenbits.xen.org/docs/unstable/hypercall/index.html
>>
>> Either the patches need to incrementally feed the converted files into
>> Sphinx directly (possibly with some one-time plumbing ahead of time), or
>> patch 1 needs to be some script in docs/ capable of rendering kernel-doc
>> to HTML, so we at least keep the plain docs around until the Sphinx
>> integration is complete.
>>
>> i.e. don't cause what we currently have to fall off
>> https://xenbits.xen.org/docs/ entirely as a consequence of this series.
> Thanks for pointing this out, it was not my intention. In fact, I wasn't
> aware of https://xenbits.xen.org/docs/unstable/hypercall/index.html at
> all. How is it generated? I am asking because I need to understand how
> that works in order not to break it...

docs/xen-headers, which is a random perl script suffering from
NIH syndrome in a world with many far better alternatives.

> Is it just a matter of retaining the keywords like `incontents 50 and other
> comments starting with ` ?

I deliberately tried not to specify that "it should remain as is".

I will happily delete this script and infrastructure if you don't beat
me to it first; doing so was one of my goals in suggesting kernel-doc in
the first place.  After all, our long-term goal is to move fully over to
Sphinx.

If that means maintaining both the legacy and the new side-by-side,
fine.  If it means moving a single header at a time fully into Sphinx,
also fine.  (Observe that https://xenbits.xen.org/docs/latest/ is Sphinx
rendered from staging).  I certainly don't intend for the docs to
survive in their current form forever.

All I want to avoid is the hypercall documentation disappearing
entirely from https://xenbits.xen.org/docs/ in the interim.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 00:01:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 00:01:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10156.26872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVO35-0000Uy-Rg; Thu, 22 Oct 2020 00:01:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10156.26872; Thu, 22 Oct 2020 00:01:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVO35-0000Ur-Ob; Thu, 22 Oct 2020 00:01:43 +0000
Received: by outflank-mailman (input) for mailman id 10156;
 Thu, 22 Oct 2020 00:01:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVO34-0000UD-Kv
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 00:01:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fcf36a8f-e1b1-4c37-9f6d-72f6e5b97d44;
 Thu, 22 Oct 2020 00:01:36 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVO2y-0003a5-9C; Thu, 22 Oct 2020 00:01:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVO2x-0003hc-U0; Thu, 22 Oct 2020 00:01:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVO2x-00033Z-TY; Thu, 22 Oct 2020 00:01:35 +0000
X-Inumbo-ID: fcf36a8f-e1b1-4c37-9f6d-72f6e5b97d44
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3vRuRrPS0/t5oxiQWPX+QbAeIhVs9fEhHyPkujmxA3w=; b=2njuFdwntpd6yNGgNkq/vnfrUl
	vwQ55Qe9WxqhuMq5e9vw+H9T1+ch3SZAz+AvSOx3XIiSFEXcJuSV9FLLklnhdAtwpo4qEH6iHeT0J
	qehl6bQozR0pXUxbNG+UM9ULayy7ruj2L1n0Q48WJvCxYDroeBKCuDBpkbzfF8OQUSXM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156042-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156042: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=270315b8235e3d10c2e360cff56c2f9e0915a252
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 00:01:35 +0000

flight 156042 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156042/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 12 debian-install           fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle 10 host-ping-check-xen fail in 156020 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-examine      8 reboot           fail in 156020 pass in 156042
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 156020 pass in 156042
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 156020 pass in 156042
 test-arm64-arm64-xl-xsm   10 host-ping-check-xen fail in 156020 pass in 156042
 test-arm64-arm64-xl-seattle   8 xen-boot                   fail pass in 156020

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                270315b8235e3d10c2e360cff56c2f9e0915a252
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   82 days
Failing since        152366  2020-08-01 20:49:34 Z   81 days  136 attempts
Testing same since   156020  2020-10-20 07:28:33 Z    1 days    2 attempts

------------------------------------------------------------
3226 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 590931 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 00:20:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 00:20:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10159.26883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVOKu-0002NZ-FR; Thu, 22 Oct 2020 00:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10159.26883; Thu, 22 Oct 2020 00:20:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVOKu-0002NS-CN; Thu, 22 Oct 2020 00:20:08 +0000
Received: by outflank-mailman (input) for mailman id 10159;
 Thu, 22 Oct 2020 00:20:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WvX7=D5=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kVOKt-0002NN-Lr
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 00:20:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f725dc5b-7d78-486c-a20d-642730a1b244;
 Thu, 22 Oct 2020 00:20:06 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C7E502417F;
 Thu, 22 Oct 2020 00:20:05 +0000 (UTC)
X-Inumbo-ID: f725dc5b-7d78-486c-a20d-642730a1b244
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603326006;
	bh=lPzxm99+i5rO/lsk5iXuDzwjGK/Ff+ttH2/6pmLFRw4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=iqIhVxU5P4ndtg5ZXp1gLFvY41C+i9WB+M9RKTfLh86Q0WEK1EreOcucYnygh0Ye1
	 DKNOoFbDTQaV10fy2TddY0EAc5qedBFwCZfgmp8ttZUDsRqQum9EfQ7wXmQtEZq9Qw
	 vlPRzysmUVJXSFjflYJ/NTrUApxZV6HoZM+qlyfM=
Date: Wed, 21 Oct 2020 17:20:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
In-Reply-To: <20201021221253.GA73207@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2010211718150.12247@sstabellini-ThinkPad-T480s>
References: <20201021221253.GA73207@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 21 Oct 2020, Elliott Mitchell wrote:
> Absence of an SPCR table likely means the console is a framebuffer.  In
> such a case acpi_iomem_deny_access() should NOT fail.
> 
> Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/acpi/domain_build.c | 19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/domain_build.c b/xen/arch/arm/acpi/domain_build.c
> index 1b1cfabb00..bbdc90f92c 100644
> --- a/xen/arch/arm/acpi/domain_build.c
> +++ b/xen/arch/arm/acpi/domain_build.c
> @@ -42,17 +42,18 @@ static int __init acpi_iomem_deny_access(struct domain *d)
>      status = acpi_get_table(ACPI_SIG_SPCR, 0,
>                              (struct acpi_table_header **)&spcr);
>  
> -    if ( ACPI_FAILURE(status) )
> +    if ( ACPI_SUCCESS(status) )
>      {
> -        printk("Failed to get SPCR table\n");
> -        return -EINVAL;
> +        mfn = spcr->serial_port.address >> PAGE_SHIFT;
> +        /* Deny MMIO access for UART */
> +        rc = iomem_deny_access(d, mfn, mfn + 1);
> +        if ( rc )
> +            return rc;
> +    }
> +    else
> +    {
> +        printk("Failed to get SPCR table, Xen console may be unavailable\n");
>      }
> -
> -    mfn = spcr->serial_port.address >> PAGE_SHIFT;
> -    /* Deny MMIO access for UART */
> -    rc = iomem_deny_access(d, mfn, mfn + 1);
> -    if ( rc )
> -        return rc;
>  
>      /* Deny MMIO access for GIC regions */
>      return gic_iomem_deny_access(d);


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 01:24:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 01:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10163.26896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVPKj-00047w-8Z; Thu, 22 Oct 2020 01:24:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10163.26896; Thu, 22 Oct 2020 01:24:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVPKj-00047p-5U; Thu, 22 Oct 2020 01:24:01 +0000
Received: by outflank-mailman (input) for mailman id 10163;
 Thu, 22 Oct 2020 01:23:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DD7K=D5=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1kVPKf-00047g-LS
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 01:23:59 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe375d51-9392-4a7c-bb35-9fc94cc3a4c6;
 Thu, 22 Oct 2020 01:23:56 +0000 (UTC)
Received: from [10.10.1.137] (c-73-129-147-140.hsd1.md.comcast.net
 [73.129.147.140]) by mx.zohomail.com
 with SMTPS id 1603329828911614.8460963906813;
 Wed, 21 Oct 2020 18:23:48 -0700 (PDT)
ARC-Seal: i=1; a=rsa-sha256; t=1603329831; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=Gtp42vUhCZZYbSwUBHL+kPs0nVV+qubxqnOsbtv/VInei1/SP4N2YjV3d69YD3jTxwB/Ex8a50ItH4yiuyMjUI1G5NUs5VLmtyoRRuTUI64romQkcBTpy09SEkbIMRpsB0F4vEw0CzVo4+MpesOWb8FpWOwv1saZnD86wLNnPmo=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603329831; h=Content-Type:Content-Transfer-Encoding:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=stFXklIoiHG3PHIWlVEZraIR0nK8W927Dv5q75eD7oQ=; 
	b=Me9++XvXTmYhNv+sKnYL9X2mgcCdQk66KcxE8snvxdmVaCPMQJlIw8onX7lEhPMbWwUN1Y/56EFhsa7v0aCIK7SM16BiXVA6ZQ8X7VNXr7nmSNmz5vPBxqG74eNbrP9HOVeX/sniHwuSf+VKhb1lA22A4mKoO0rjCg/zZ/bDRrI=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603329831;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=stFXklIoiHG3PHIWlVEZraIR0nK8W927Dv5q75eD7oQ=;
	b=Qdw9ZPluGH+LZB2ZvbscAVVTbhYSyFcjnJ7N57lywm2zMcIatqIn5KFECdnL/S2q
	LGd0f6UUOhi1DBVgR8RHpnPBrHZtqL+9gwmsNkqdyI4/cRtpoxlFw23uOfJQKP31lRD
	j3EifDvcSKde8EVe47Oml0ncQtaZGbdDjNdTxjQ8=
Subject: Re: XSM and the idle domain
To: Hongyan Xia <hx242@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 jbeulich@suse.com, andrew.cooper3@citrix.com, jandryuk@gmail.com,
 dgdegra@tycho.nsa.gov
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <2dbee673-036a-077e-6cb4-556aac46ac33@apertussolutions.com>
Date: Wed, 21 Oct 2020 21:23:18 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 10/21/20 10:34 AM, Hongyan Xia wrote:
> Hi,
> 
> A while ago there was a quick chat on IRC about how XSM interacts with
> the idle domain. The conversation did not reach any clear conclusions
> so it might be a good idea to summarise the questions in an email.
> 
> Basically there were two questions in that conversation:
> 
> 1. In its current state, are security modules able to limit what the
> idle domain can do?

Yes, in that the idle domain has a type and you can constrain what
actions that type is allowed. In reality, though, the idle domain is
given the same type as the hypervisor itself and thus must be allowed
to perform certain actions.

> 2. Should security modules be able to restrict the idle domain?

IMHO this question should be reversed, to ask whether the actions the
idle domain is being used for are appropriate from a security point of
view. AIUI the idle domain is a mechanism for the scheduler to use as
a place to schedule an idle vcpu. And yes, I understand that some limited
work is done there, e.g. memory scrubbing, but 1.) there is a difference
between light/limited work that can be done within the confines of a
domain and work requiring hypercalls, and 2.) this precedent may have
been due to limitations rather than being the necessarily correct approach.

> The first question came up during ongoing work in LiveUpdate. After an
> LU, the next Xen needs to restore all domains. To do that, some
> hypercalls need to be issued from the idle domain context and
> apparently XSM does not like it. We need to introduce hacks in the
> dummy module to leave the idle domain alone. Our work is not compiled
> with CONFIG_XSM at all, but with CONFIG_XSM, are we able to enforce
> security policies against the idle domain? Of course, without any LU
> work this does not make any difference because the idle domain does not
> do any useful work to be restricted anyway.

Why do they "need to be issued from the idle domain"? As was suggested 
by Jason, why isn't this done from a construction domain context? I will 
interject here that this is what we will be doing with DomB, and it 
sounds like LiveUpdate is very similar to the relaunch concept that DomB 
is being constructed to support.

Yes, XSM did not like it, because an analogy for what is being done is 
trying to make a system call from inside an OS kernel. Again, AIUI the 
idle domain is not a real domain but an internal construct for the 
scheduler to manage idle vcpus, and attempting to make hypercalls from 
it is in fact attempting to turn it into a full-fledged domain.

From a security perspective, if hacks to the XSM hooks are necessary to 
make something work, then it is highly recommended to take a step back 
and ask why, and whether you are doing something that is not actually 
safe.

> Also, should the idle domain be restricted? IMO the idle domain is Xen
> itself which mostly bootstraps the system and performs limited work
> when switched to, and is not something a user (either dom0 or domU)
> directly interacts with. I doubt XSM was designed to include the idle
> domain (although there is an ID allocated for it in the code), so I
> would say just exclude idle in all security policy checks.

The idle domain is a limited, internal construct within the hypervisor 
and should be constrained as part of the hypervisor, which is why its 
domain id gets labeled with the same label as the hypervisor. For this 
reason I would wholeheartedly disagree with exempting the idle domain id 
from XSM hooks as that would effectively be saying the core hypervisor 
should not be constrained. The purpose of the XSM hooks is to control 
the flow of information in the system in a non-bypassable way. Codifying 
bypasses completely subverts the security model behind XSM, on which the 
flask security server depends.
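The non-bypassability argument can be illustrated with a deliberately simplified sketch. The types, ids, and helpers below are hypothetical stand-ins, not Xen's actual XSM/FLASK interface; the point is only that once a hook special-cases the idle domain, that path is never mediated by the loaded policy at all, whereas labelling idle with the hypervisor's type keeps it under policy control:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model -- not Xen's real XSM/FLASK code. */
struct domain {
    unsigned int domain_id;
    unsigned int sid;            /* security label */
};

#define DOMID_IDLE_STUB 0x7FFF   /* placeholder idle id for this sketch */
#define SID_XEN 1                /* label shared with the hypervisor */

static bool is_idle_domain(const struct domain *d)
{
    return d->domain_id == DOMID_IDLE_STUB;
}

/* Policy stub: only the hypervisor's own label may perform the action. */
static bool policy_allows(unsigned int sid)
{
    return sid == SID_XEN;
}

/* Hack style: idle escapes mediation entirely; the policy is never asked. */
static bool hook_with_bypass(const struct domain *d)
{
    if ( is_idle_domain(d) )
        return true;
    return policy_allows(d->sid);
}

/* Labelled style: idle carries the hypervisor's label and is checked
 * like any other subject, so the decision stays with the policy. */
static bool hook_with_label(const struct domain *d)
{
    return policy_allows(d->sid);
}
```

With the bypass, even a mislabelled idle domain is granted everything; with labelling, the policy can still be tightened or audited in one place.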

> I may have missed some points in that discussion, so please feel free
> to add.
> 
> Hongyan
> 
> 

V/r,
DPS


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 01:43:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 01:43:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10167.26908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVPdZ-00062Q-V0; Thu, 22 Oct 2020 01:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10167.26908; Thu, 22 Oct 2020 01:43:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVPdZ-00062J-RN; Thu, 22 Oct 2020 01:43:29 +0000
Received: by outflank-mailman (input) for mailman id 10167;
 Thu, 22 Oct 2020 01:43:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JUr7=D5=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kVPdY-00062E-91
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 01:43:28 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b67b9e34-3e10-4b00-89ac-b20675723f36;
 Thu, 22 Oct 2020 01:43:26 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09M1hAfT073941
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Wed, 21 Oct 2020 21:43:16 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09M1hAop073940;
 Wed, 21 Oct 2020 18:43:10 -0700 (PDT) (envelope-from ehem)
Date: Wed, 21 Oct 2020 18:43:10 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: Remove EXPERT dependency
Message-ID: <20201022014310.GA70872@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Linux requires UEFI support to be enabled on ARM64 devices.  While many
ARM64 devices lack ACPI, the writing seems to be on the wall that
UEFI/ACPI will eventually take over.  Some common devices may need ACPI
table support.

Presently I think it is worth removing the dependency on CONFIG_EXPERT.
I am rather tempted to make it default to enabled, but I'm not yet
confident about going that far.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
I hope a popular ARM device capable of running Xen will soon be running
Xen on ACPI/UEFI, but it isn't quite there yet.  As such I would like to
have "default y", but I don't think that is likely to be accepted yet.
---
 xen/arch/arm/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..f43d9074f9 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -32,7 +32,7 @@ menu "Architecture Features"
 source "arch/Kconfig"
 
 config ACPI
-	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
+	bool "ACPI (Advanced Configuration and Power Interface) Support"
 	depends on ARM_64
 	---help---
 
-- 
2.20.1



-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Oct 22 02:45:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 02:45:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10170.26920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVQb4-0003R6-Fk; Thu, 22 Oct 2020 02:44:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10170.26920; Thu, 22 Oct 2020 02:44:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVQb4-0003Qz-Bn; Thu, 22 Oct 2020 02:44:58 +0000
Received: by outflank-mailman (input) for mailman id 10170;
 Thu, 22 Oct 2020 02:44:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVQb3-0003Qu-3Q
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 02:44:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06e1c723-6e4e-41a4-a485-74437f95def9;
 Thu, 22 Oct 2020 02:44:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVQaz-00033Z-5r; Thu, 22 Oct 2020 02:44:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVQay-0005DO-TC; Thu, 22 Oct 2020 02:44:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVQay-000645-Si; Thu, 22 Oct 2020 02:44:52 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156059-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156059: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=f76848a7c11281c09c66086cafe66926d73170fa
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 02:44:52 +0000

flight 156059 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156059/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f76848a7c11281c09c66086cafe66926d73170fa
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  103 days
Failing since        151818  2020-07-11 04:18:52 Z  102 days   97 attempts
Testing same since   156059  2020-10-21 04:20:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 22386 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 03:02:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 03:02:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10175.26937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVQsF-0005Fl-Pn; Thu, 22 Oct 2020 03:02:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10175.26937; Thu, 22 Oct 2020 03:02:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVQsF-0005Fe-Mo; Thu, 22 Oct 2020 03:02:43 +0000
Received: by outflank-mailman (input) for mailman id 10175;
 Thu, 22 Oct 2020 03:02:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVQsE-0005F0-Fm
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 03:02:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 288f5e42-b071-4050-b7e5-91cdcd47f32f;
 Thu, 22 Oct 2020 03:02:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVQs4-0003RG-Aj; Thu, 22 Oct 2020 03:02:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVQs4-0007UA-2Y; Thu, 22 Oct 2020 03:02:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVQs4-0008HO-25; Thu, 22 Oct 2020 03:02:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kVQsE-0005F0-Fm
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 03:02:42 +0000
X-Inumbo-ID: 288f5e42-b071-4050-b7e5-91cdcd47f32f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 288f5e42-b071-4050-b7e5-91cdcd47f32f;
	Thu, 22 Oct 2020 03:02:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nVdEIeb2kCn9lwFyk9DYH8L2Dn05wzuu3ESal7vMujc=; b=j+wQqxXvT7967m6Kk/hLVEnkzN
	YPNVYNbmnnZv/wMQAssvvOGWYgk+W2TN0z/1gHMWDvIhi/HGV3rIVd0CaaMhlZ1OfMb5x6WFsLQ1D
	nwALJmGZuhSMWCxM7PqysA3H0rPwJaCltZq5wZ7Kh7ac24W3VGGQhEKiIHIxFL00EfHE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156057-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156057: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c41341af76cfc85b5a6c0f87de4838672ab9f89
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 03:02:32 +0000

flight 156057 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156057/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c41341af76cfc85b5a6c0f87de4838672ab9f89
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   62 days
Failing since        152659  2020-08-21 14:07:39 Z   61 days  114 attempts
Testing same since   156028  2020-10-20 12:37:35 Z    1 day     3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 48058 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 05:58:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 05:58:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10181.26956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVTc2-0004UV-FF; Thu, 22 Oct 2020 05:58:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10181.26956; Thu, 22 Oct 2020 05:58:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVTc2-0004UO-By; Thu, 22 Oct 2020 05:58:10 +0000
Received: by outflank-mailman (input) for mailman id 10181;
 Thu, 22 Oct 2020 05:58:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVTc1-0004UJ-F0
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 05:58:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67bd16c3-30e9-4133-94d1-5b37df8598fe;
 Thu, 22 Oct 2020 05:58:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVTbx-0007U9-By; Thu, 22 Oct 2020 05:58:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVTbx-0002mZ-2S; Thu, 22 Oct 2020 05:58:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVTbx-0003I1-1w; Thu, 22 Oct 2020 05:58:05 +0000
X-Inumbo-ID: 67bd16c3-30e9-4133-94d1-5b37df8598fe
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a5o/HXtYbP/87yFopW7lKYRgLgB2lhJKumZ4PEewvfE=; b=d/q85zolXsOU6bxK5tjvj+3Os/
	65mjvGAl2+2bc3FaAU+qFi3dQCIbena6o4o9H5tEueQfdvx/HkDIpTLZ4Nl7zxSlDpo8SeFTdkA79
	kX/3aabSby200GcmA1dCaDqCAIzN+05T5dfTGXKsGCW4T0LCYOBDPUwAI8gKa/zOppg4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156050-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156050: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0514a3a25fb9ebff5d75cc8f00a9229385300858
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 05:58:05 +0000

flight 156050 xen-unstable real [real]
flight 156084 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156050/
http://logs.test-lab.xenproject.org/osstest/logs/156084/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156013

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156013
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156013
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156013
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156013
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156013
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156013
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156013
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156013
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156013
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156013
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156013
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0514a3a25fb9ebff5d75cc8f00a9229385300858
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   156013  2020-10-20 04:30:46 Z    2 days
Failing since        156027  2020-10-20 12:37:35 Z    1 days    2 attempts
Testing same since   156050  2020-10-20 22:07:36 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0514a3a25fb9ebff5d75cc8f00a9229385300858
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 14:23:12 2020 +0200

    AMD/IOMMU: ensure suitable ordering of DTE modifications
    
    DMA and interrupt translation should be enabled only after other
    applicable DTE fields have been written. Similarly when disabling
    translation or when moving a device between domains, translation should
    first be disabled, before other entry fields get modified. Note however
    that the "moving" aspect doesn't apply to the interrupt remapping side,
    as domain specifics are maintained in the IRTEs here, not the DTE. We
    also never disable interrupt remapping once it got enabled for a device
    (the respective argument passed is always the immutable iommu_intremap).
    
    This is part of XSA-347.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>
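
The write-ordering constraint above can be sketched as follows. The DTE layout and field names here are hypothetical simplifications (a real AMD-Vi DTE is 256 bits wide), and wmb() is a plain compiler barrier standing in for whatever ordering primitive the real code uses:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, much-simplified device table entry; only the
 * write-ordering pattern matters here. */
struct dte {
    uint64_t tv        : 1;  /* translation valid / enabled */
    uint64_t domain_id : 16;
    uint64_t pt_root   : 40; /* page table root frame number */
};

/* Compiler barrier standing in for the ordering the commit requires. */
#define wmb() __asm__ __volatile__("" ::: "memory")

static void dte_enable(struct dte *d, uint16_t dom, uint64_t root)
{
    d->domain_id = dom;  /* 1. write all other DTE fields ... */
    d->pt_root   = root;
    wmb();
    d->tv = 1;           /* 2. ... and only then enable translation */
}

static void dte_reassign(struct dte *d, uint16_t dom, uint64_t root)
{
    d->tv = 0;           /* 1. disable translation first ... */
    wmb();
    d->domain_id = dom;  /* 2. ... then change the remaining fields ... */
    d->pt_root   = root;
    wmb();
    d->tv = 1;           /* 3. ... and re-enable */
}
```

The point is that the IOMMU must never observe translation enabled while the other fields are still in an intermediate state.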

commit 3b055121c5410e2c3105d6d06aa24ca0d58868cd
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 14:22:52 2020 +0200

    AMD/IOMMU: update live PTEs atomically
    
    Updating a live PTE bitfield by bitfield risks the compiler re-ordering
    the individual updates as well as splitting individual updates into
    multiple memory writes. Construct the new entry fully in a local
    variable, do the check to determine the flushing needs on the thus
    established new entry, and then write the new entry by a single insn.
    
    Similarly using memset() to clear a PTE is unsafe, as the order of
    writes the function does is, at least in principle, undefined.
    
    This is part of XSA-347.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>
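
A minimal sketch of the pattern, using a hypothetical 64-bit PTE layout (the field widths and positions are illustrative, not the real amd_iommu_pte layout):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 64-bit PTE; field widths/positions are illustrative only. */
union pte {
    uint64_t raw;
    struct {
        uint64_t pr   : 1;   /* present */
        uint64_t ign  : 8;
        uint64_t mfn  : 40;
        uint64_t rsvd : 14;
        uint64_t w    : 1;   /* writable */
    };
};

static void set_pte(union pte *p, uint64_t mfn, int writable)
{
    union pte npte = { .raw = 0 };  /* build the new entry locally ... */
    npte.pr  = 1;
    npte.mfn = mfn;
    npte.w   = !!writable;
    /* ... decide any flushing needs on npte here ... */
    p->raw = npte.raw;              /* ... then publish with a single write */
}

static void clear_pte(union pte *p)
{
    p->raw = 0;  /* one write; memset() would leave the order undefined */
}
```

Writing `p->pr`, `p->mfn`, `p->w` individually on the live entry would let the compiler reorder or split the stores, which is exactly what the commit rules out.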

commit 73f62c7380edf07469581a3049aba98abd63b275
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 14:22:26 2020 +0200

    AMD/IOMMU: convert amd_iommu_pte from struct to union
    
    This is to add a "raw" counterpart to the bitfield equivalent. Take the
    opportunity and
     - convert fields to bool / unsigned int,
     - drop the naming of the reserved field,
     - shorten the names of the ignored ones.
    
    This is part of XSA-347.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit 5777a3742d88ff1c0ebc626ceb4fd47f9b3dc6d5
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 14:21:32 2020 +0200

    IOMMU: hold page ref until after deferred TLB flush
    
    When moving around a page via XENMAPSPACE_gmfn_range, deferring the TLB
    flush for the "from" GFN range requires that the page remains allocated
    to the guest until the TLB flush has actually occurred. Otherwise a
    parallel hypercall to remove the page would only flush the TLB for the
    GFN it has been moved to, but not the one it was mapped at originally.
    
    This is part of XSA-346.
    
    Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ")
    Reported-by: Julien Grall <jgrall@amazon.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
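
The lifetime rule can be modelled with toy reference counting; all names below are illustrative stand-ins, not the real Xen get_page()/put_page() interfaces:

```c
#include <assert.h>

/* Toy model of the XSA-346 fix: the page must stay allocated until the
 * deferred IOTLB flush has run. All of this is illustrative scaffolding. */
static int refs, flushed, freed, freed_before_flush;

static void get_page(void) { ++refs; }

static void put_page(void)
{
    if (--refs == 0) {
        freed = 1;
        if (!flushed)
            freed_before_flush = 1;  /* the window the commit closes */
    }
}

static void move_gfn_range(void)
{
    get_page();   /* extra ref taken before tearing down the old mapping */
    put_page();   /* a parallel hypercall may drop the guest's ref here ... */
    flushed = 1;  /* ... the deferred, wide-range IOTLB flush runs ... */
    put_page();   /* ... and only now may the page actually be freed */
}
```

Without the extra reference, the parallel drop would free the page while the stale IOTLB entry for the original GFN still exists.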

commit dea460d86957bf1425a8a1572626099ac3f165a8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 14:21:09 2020 +0200

    IOMMU: suppress "iommu_dont_flush_iotlb" when about to free a page
    
    Deferring flushes to a single, wide-range one - as is done when
    handling XENMAPSPACE_gmfn_range - is okay only as long as
    pages don't get freed ahead of the eventual flush. While the only
    function setting the flag (xenmem_add_to_physmap()) suggests by its name
    that it's only mapping new entries, in reality the way
    xenmem_add_to_physmap_one() works means an unmap would happen not only
    for the page being moved (but not freed) but, if the destination GFN is
    populated, also for the page being displaced from that GFN. Collapsing
    the two flushes for this GFN into just one (and even more so deferring
    it to a batched invocation) is not correct.
    
    This is part of XSA-346.
    
    Fixes: cf95b2a9fd5a ("iommu: Introduce per cpu flag (iommu_dont_flush_iotlb) to avoid unnecessary iotlb... ")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>
    Acked-by: Julien Grall <jgrall@amazon.com>
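
The per-CPU flag pattern, and the suppression the commit adds, can be sketched like this; _Thread_local stands in for Xen's per-CPU variables and the function names are hypothetical:

```c
#include <assert.h>

/* Sketch of the iommu_dont_flush_iotlb pattern; illustrative only. */
static _Thread_local int iommu_dont_flush_iotlb;
static int flushes_issued;

static void iotlb_flush(void) { ++flushes_issued; }

static void unmap_gfn(void)
{
    if (!iommu_dont_flush_iotlb)
        iotlb_flush();   /* flush immediately */
    /* else: the caller has promised one wide-range flush later */
}

/* The fix: when the unmap may free the page, never defer the flush. */
static void unmap_and_free(void)
{
    int saved = iommu_dont_flush_iotlb;

    iommu_dont_flush_iotlb = 0;
    unmap_gfn();
    iommu_dont_flush_iotlb = saved;
}
```

Suppressing the flag around the freeing path keeps the batched-flush optimisation for the common case while guaranteeing no page is freed ahead of its flush.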

commit 1ce75e99d75907aaffae05fcf658a833802bce49
Author: Hongyan Xia <hongyxia@amazon.com>
Date:   Sat Jan 11 21:57:43 2020 +0000

    x86/mm: Prevent some races in hypervisor mapping updates
    
    map_pages_to_xen will attempt to coalesce mappings into 2MiB and 1GiB
    superpages if possible, to maximize TLB efficiency.  This means both
    replacing superpage entries with smaller entries, and replacing
    smaller entries with superpages.
    
    Unfortunately, while some potential races are handled correctly,
    others are not.  These include:
    
    1. When one processor modifies a sub-superpage mapping while another
    processor replaces the entire range with a superpage.
    
    Take the following example:
    
    Suppose L3[N] points to L2.  And suppose we have two processors, A and
    B.
    
    * A walks the pagetables, gets a pointer to L2.
    * B replaces L3[N] with a 1GiB mapping.
    * B frees L2
    * A writes L2[M] #
    
    This race is exacerbated by the fact that virt_to_xen_l[21]e doesn't
    handle higher-level superpages properly: if you call virt_to_xen_l2e
    on a virtual address within an L3 superpage, you'll either hit a BUG()
    (most likely), or get a pointer into the middle of a data page; same
    with virt_to_xen_l1e on a virtual address within either an L3 or L2
    superpage.
    
    So take the following example:
    
    * A reads pl3e and discovers it to point to an L2.
    * B replaces L3[N] with a 1GiB mapping
    * A calls virt_to_xen_l2e() and hits the BUG_ON() #
    
    2. When two processors simultaneously try to replace a sub-superpage
    mapping with a superpage mapping.
    
    Take the following example:
    
    Suppose L3[N] points to L2.  And suppose we have two processors, A and B,
    both trying to replace L3[N] with a superpage.
    
    * A walks the pagetables, gets a pointer to pl3e, and takes a copy, ol3e, pointing to L2.
    * B walks the pagetables, gets a pointer to pl3e, and takes a copy, ol3e, pointing to L2.
    * A writes the new value into L3[N]
    * B writes the new value into L3[N]
    * A recursively frees all the L1's under L2, then frees L2
    * B recursively double-frees all the L1's under L2, then double-frees L2 #
    
    Fix this by grabbing a lock for the entirety of the mapping update
    operation.
    
    Rather than grabbing map_pgdir_lock for the entire operation, however,
    repurpose the PGT_locked bit from L3's page->type_info as a lock.
    This means that rather than locking the entire address space, we
    "only" lock a single 512GiB chunk of hypervisor address space at a
    time.
    
    There was a proposal for a lock-and-reverify approach, where we walk
    the pagetables to the point where we decide what to do; then grab the
    map_pgdir_lock, re-verify the information we collected without the
    lock, and finally make the change (starting over again if anything had
    changed).  Without being able to guarantee that the L2 table wasn't
    freed, however, that means every read would need to be considered
    potentially unsafe.  Thinking carefully about that is probably
    something that wants to be done in public, not under time pressure.
    
    This is part of XSA-345.
    
    Reported-by: Hongyan Xia <hongyxia@amazon.com>
    Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
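
Repurposing a type_info bit as a lock can be sketched with GCC's atomic builtins; PGT_locked's actual bit position and the surrounding type-count invariants differ in Xen, so this is a minimal illustration only:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative: one bit of a page's type_info word used as a spinlock,
 * as the commit does for the L3 page with PGT_locked. */
#define PGT_locked (1ull << 63)

static void l3t_lock(uint64_t *type_info)
{
    uint64_t old;

    do {
        /* Expect the lock bit clear; spin until we set it ourselves. */
        old = __atomic_load_n(type_info, __ATOMIC_RELAXED) & ~PGT_locked;
    } while (!__atomic_compare_exchange_n(type_info, &old,
                                          old | PGT_locked, 0,
                                          __ATOMIC_ACQUIRE, __ATOMIC_RELAXED));
}

static void l3t_unlock(uint64_t *type_info)
{
    __atomic_and_fetch(type_info, ~PGT_locked, __ATOMIC_RELEASE);
}
```

Because the lock lives in the L3 page itself, contention is per 512GiB region rather than global, which is the granularity trade-off the commit describes.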

commit b733f8a8b8db83f2d438cab3adb38b387cecfce0
Author: Wei Liu <wei.liu2@citrix.com>
Date:   Sat Jan 11 21:57:42 2020 +0000

    x86/mm: Refactor modify_xen_mappings to have one exit path
    
    We will soon need to perform clean-ups before returning.
    
    No functional change.
    
    This is part of XSA-345.
    
    Reported-by: Hongyan Xia <hongyxia@amazon.com>
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 08e6c6f80b018878476adc2c4e5679d2ce5cb4b1
Author: Wei Liu <wei.liu2@citrix.com>
Date:   Sat Jan 11 21:57:41 2020 +0000

    x86/mm: Refactor map_pages_to_xen to have only a single exit path
    
    We will soon need to perform clean-ups before returning.
    
    No functional change.
    
    This is part of XSA-345.
    
    Reported-by: Hongyan Xia <hongyxia@amazon.com>
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
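
The shape of the refactor, sketched as hypothetical goto-based single-exit C (not the real map_pages_to_xen() body):

```c
#include <assert.h>
#include <stdlib.h>

/* Error-code stand-ins so the sketch is self-contained. */
#define EINVAL 22
#define ENOMEM 12

static int map_pages_sketch(int fail_early, int fail_late)
{
    int rc = 0;
    void *scratch = malloc(64);

    if (!scratch) {
        rc = -ENOMEM;
        goto out;
    }
    if (fail_early) {
        rc = -EINVAL;
        goto out;        /* every early "return rc" becomes "goto out" */
    }
    /* ... the actual mapping work would happen here ... */
    if (fail_late) {
        rc = -EINVAL;
        goto out;
    }

 out:
    free(scratch);       /* the single place to hang clean-ups later */
    return rc;
}
```

With one exit path, the follow-up XSA-345 patch can release the L3 lock (and any other state) in exactly one place instead of before every return.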

commit a7f0831e58bf4681d710e9a029644b6fa07b7cb0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 08:54:59 2020 +0200

    SVM: avoid VMSAVE in ctxt-switch-to
    
    Of the state saved by the insn and reloaded by the corresponding VMLOAD
    - TR and syscall state are invariant while having Xen's state loaded,
    - sysenter is unused altogether by Xen,
    - FS, GS, and LDTR are not used by Xen and get suitably set in PV
      context switch code.
    Note that state is suitably populated in _svm_cpu_up(); a minimal
    respective assertion gets added.
    
    Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit de6d188a519f9e3b7a1acc7784adf4c243865f9a
Author: Igor Druzhinin <igor.druzhinin@citrix.com>
Date:   Tue Oct 20 08:54:23 2020 +0200

    hvmloader: flip "ACPI data" to "ACPI NVS" type for ACPI table region
    
    The ACPI specification describes memory marked with the regular
    "ACPI data" type as reclaimable by the guest. Although the guest shouldn't
    really do it if it wants kexec or similar functionality to work, there
    could still be ambiguities in treating these regions as potentially regular
    RAM.
    
    One such example is SeaBIOS, which currently reports "ACPI data" regions
    as RAM to the guest in its e801 call. It might have the right to do so, as
    any user of this interface is expected to be ACPI-unaware. But a QEMU
    bootloader later seems to ignore that fact and instead uses e801 to find a
    place for the initrd, which causes the tables to be erased. While arguably
    the QEMU bootloader or SeaBIOS needs to be fixed / improved here, this is
    just one example of the potential problems from using a reclaimable memory
    type.
    
    Flip the type to "ACPI NVS", which doesn't have this ambiguity and is
    described by the spec as non-reclaimable (so it can never be treated like
    RAM).
    
    Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
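
    For illustration only: the flip amounts to changing one type code in the
    e820 map hvmloader hands to the guest. A minimal sketch using the standard
    e820 type numbers (3 = "ACPI data", 4 = "ACPI NVS"); the struct layout and
    helper name here are illustrative, not hvmloader's actual code:

    ```c
    #include <assert.h>
    #include <stdint.h>

    #define E820_ACPI 3u  /* "ACPI data": reclaimable by the OS */
    #define E820_NVS  4u  /* "ACPI NVS": never reclaimable      */

    /* Illustrative e820 entry layout; the real one differs in detail. */
    struct e820entry {
        uint64_t addr;
        uint64_t size;
        uint32_t type;
    };

    /* Mark ACPI table regions as NVS so nothing can treat them as RAM. */
    static void mark_acpi_region_nvs(struct e820entry *map, unsigned int nr)
    {
        for (unsigned int i = 0; i < nr; i++)
            if (map[i].type == E820_ACPI)
                map[i].type = E820_NVS;
    }

    int main(void)
    {
        struct e820entry map[] = {
            { 0x00000000, 0x000a0000, 1 },          /* low RAM */
            { 0xfc000000, 0x00020000, E820_ACPI },  /* ACPI tables */
        };

        mark_acpi_region_nvs(map, 2);
        assert(map[0].type == 1);        /* RAM entry untouched */
        assert(map[1].type == E820_NVS); /* tables now non-reclaimable */
        return 0;
    }
    ```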

commit 7b36d16d21ae70a1eaabe577b7e4b42ed0f1a7d1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 08:53:53 2020 +0200

    xen-detect: make CPUID fallback CPUID-faulting aware
    
    Relying on presence / absence of hypervisor leaves in raw / escaped
    CPUID output cannot be used to tell apart PV and HVM on CPUID faulting
    capable hardware. Utilize a PV-only feature flag to avoid false positive
    HVM detection.
    
    While at it, also short-circuit the main detection loop: for PV, only
    the base group of leaves can possibly hold hypervisor information.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 25467bb5d121735af4969834a62bca752a7bfe10
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue Oct 20 08:52:53 2020 +0200

    EFI: free unused boot mem in at least some cases
    
    Address at least the primary reason why 52bba67f8b87 ("efi/boot: Don't
    free ebmalloc area at all") was put in place: Make xen_in_range() aware
    of the freed range. This is in particular relevant for EFI-enabled
    builds not actually running on EFI, as the entire range will be unused
    in this case.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:22:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:22:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10187.26974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVUvU-0004Ap-Qd; Thu, 22 Oct 2020 07:22:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10187.26974; Thu, 22 Oct 2020 07:22:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVUvU-0004Ai-Ni; Thu, 22 Oct 2020 07:22:20 +0000
Received: by outflank-mailman (input) for mailman id 10187;
 Thu, 22 Oct 2020 07:22:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVUvT-0004Ad-5r
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:22:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 704cdbe9-1292-4c7c-bd20-1eae4100e3c0;
 Thu, 22 Oct 2020 07:22:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C0EC1ABB2;
 Thu, 22 Oct 2020 07:22:16 +0000 (UTC)
X-Inumbo-ID: 704cdbe9-1292-4c7c-bd20-1eae4100e3c0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603351336;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AEohr86YwyMbd6ENMvAUnEuFe7qV+XNj9RSib7V+qOw=;
	b=aPTzcGbwa+lxQfeUIZ8xWuHRPfqEmRG+qUuvVBLElKedfQUKbsgNgRpzflOVyARUYYKMli
	2iCx4A94NovZsbi4G9VQoWwYhtUUXWNAI0g4l/7tPQyvL0SSupEApVnPBEyQqs1uM12L5u
	SToyoI6vs4FNuTxjtGUa+BvPxjgVbHI=
Subject: Re: [PATCH] x86/pv: Flush TLB in response to paging structure changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20201021130708.12249-1-andrew.cooper3@citrix.com>
 <7967fa6e-213d-50e2-87d3-dbd319aa2060@suse.com>
 <9fe3d967-6bfe-71ef-6430-029de97dca8c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <74bd40bb-dfad-39dc-bcae-b5a6f6d16a0c@suse.com>
Date: Thu, 22 Oct 2020 09:22:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <9fe3d967-6bfe-71ef-6430-029de97dca8c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 17:39, Andrew Cooper wrote:
> On 21/10/2020 14:56, Jan Beulich wrote:
>> On 21.10.2020 15:07, Andrew Cooper wrote:
>>> @@ -4037,6 +4037,9 @@ long do_mmu_update(
>>>                          break;
>>>                      rc = mod_l2_entry(va, l2e_from_intpte(req.val), mfn,
>>>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>>> +                    /* Paging structure may have changed.  Flush linear range. */
>>> +                    if ( !rc )
>>> +                        flush_flags_all |= FLUSH_TLB;
>>>                      break;
>>>  
>>>                  case PGT_l3_page_table:
>>> @@ -4044,6 +4047,9 @@ long do_mmu_update(
>>>                          break;
>>>                      rc = mod_l3_entry(va, l3e_from_intpte(req.val), mfn,
>>>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>>> +                    /* Paging structure may have changed.  Flush linear range. */
>>> +                    if ( !rc )
>>> +                        flush_flags_all |= FLUSH_TLB;
>>>                      break;
>>>  
>>>                  case PGT_l4_page_table:
>>> @@ -4051,27 +4057,28 @@ long do_mmu_update(
>>>                          break;
>>>                      rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
>>>                                        cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
>>> -                    if ( !rc && pt_owner->arch.pv.xpti )
>>> +                    /* Paging structure maybe changed.  Flush linear range. */
>>> +                    if ( !rc )
>>>                      {
>>> -                        bool local_in_use = false;
>>> +                        bool local_in_use = mfn_eq(
>>> +                            pagetable_get_mfn(curr->arch.guest_table), mfn);
>>>  
>>> -                        if ( mfn_eq(pagetable_get_mfn(curr->arch.guest_table),
>>> -                                    mfn) )
>>> -                        {
>>> -                            local_in_use = true;
>>> -                            get_cpu_info()->root_pgt_changed = true;
>>> -                        }
>>> +                        flush_flags_all |= FLUSH_TLB;
>>> +
>>> +                        if ( local_in_use )
>>> +                            flush_flags_local |= FLUSH_TLB | FLUSH_ROOT_PGTBL;
>> Aiui here (and in the code consuming the flags) you build upon
>> flush_flags_local, when not zero, always being a superset of
>> flush_flags_all. I think this is a trap to fall into when later
>> wanting to change this code, but as per below this won't hold
>> anymore anyway, I think. Hence here I think you want to set
>> FLUSH_TLB unconditionally, and above for L3 and L2 you want to
>> set it in both variables. Or, if I'm wrong below, a comment to
>> that effect may help people avoid falling into this trap.
>>
>> An alternative would be to have
>>
>>     flush_flags_local |= (flush_flags_all & FLUSH_TLB);
>>
>> before doing the actual flush.
> 
> Honestly, this is what I meant by stating that the asymmetry is a total
> mess.
> 
> I originally named them all 'remote', but that is even less accurate; the
> mask may still contain the current CPU.
> 
> Our matrix of complexity:
> 
> * FLUSH_TLB for L2+ structure changes
> * FLUSH_TLB_GLOBAL/FLUSH_ROOT_PGTBL for XPTI
> 
> with optimisations to skip GLOBAL/ROOT_PGTBL on the local CPU if none of
> the updates hit the L4-in-use, and to skip the remote if we hold all
> references on the L4.
> 
> Everything is complicated because pt_owner may not be current, for
> toolstack operations constructing a new domain.

That is a case where I wonder why we flush in the first place.
The L4 under construction can't be in use yet, and hence
updates to it shouldn't need syncing. Otoh
pt_owner->dirty_cpumask obviously is empty at that point, i.e.
special-casing this likely isn't really worth it.

>>> @@ -4173,18 +4180,36 @@ long do_mmu_update(
>>>      if ( va )
>>>          unmap_domain_page(va);
>>>  
>>> -    if ( sync_guest )
>>> +    /*
>>> +     * Flushing needs to occur for one of several reasons.
>>> +     *
>>> +     * 1) An update to an L2 or higher occurred.  This potentially changes the
>>> +     *    pagetable structure, requiring a flush of the linear range.
>>> +     * 2) An update to an L4 occurred, and XPTI is enabled.  All CPUs running
>>> +     *    on a copy of this L4 need refreshing.
>>> +     */
>>> +    if ( flush_flags_all || flush_flags_local )
>> Minor remark: At least in x86 code it is more efficient to use
>> | instead of || in such cases, to avoid relying on the compiler
>> to carry out this small optimization.
> 
> This transformation should not be recommended in general.  There are
> cases, including this one, where it has, at best, no effect and is, at
> worst, wrong.
> 
> I don't care about people using ancient compilers.  They've got far
> bigger (== more impactful) problems than (the absence of) this
> transformation, and the TLB flush will dwarf anything the compiler does
> here.
> 
> However, the hand-"optimised" version prevents a compiler from spotting
> that the entire second clause is actually redundant for now.

Oh, you mean non-zero flush_flags_local implying non-zero
flush_flags_all? I'm not sure why a compiler able to optimize away ||
in this case shouldn't also be able to optimize away |. Anyway - minor
point, as said.

> I specifically didn't encode the dependency, to avoid subtle bugs
> if/when someone alters the logic.

Good point, but let me point out then that you encoded another
subtle dependency, which I didn't spell out completely before
because with the suggested adjustments it disappears: You may not
omit FLUSH_TLB from flush_flags_local when !local_in_use, as the L4
being changed may be one recursively hanging off of the L4 we're
running on.

>>>      {
>>> +        cpumask_t *mask = pt_owner->dirty_cpumask;
>>> +
>>>          /*
>>> -         * Force other vCPU-s of the affected guest to pick up L4 entry
>>> -         * changes (if any).
>>> +         * Local flushing may be asymmetric with remote.  If there is local
>>> +         * flushing to do, perform it separately and omit the current CPU from
>>> +         * pt_owner->dirty_cpumask.
>>>           */
>>> -        unsigned int cpu = smp_processor_id();
>>> -        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
>>> +        if ( flush_flags_local )
>>> +        {
>>> +            unsigned int cpu = smp_processor_id();
>>> +
>>> +            mask = per_cpu(scratch_cpumask, cpu);
>>> +            cpumask_copy(mask, pt_owner->dirty_cpumask);
>>> +            __cpumask_clear_cpu(cpu, mask);
>>> +
>>> +            flush_local(flush_flags_local);
>>> +        }
>>>  
>>> -        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
>> I understand you're of the opinion that cpumask_copy() +
>> __cpumask_clear_cpu() is more efficient than cpumask_andnot()?
>> (I guess I agree for high enough CPU counts.)
> 
> It's faster in all cases, even at low CPU counts.

Is it? Data for BTR with a memory operand is available in the ORM only
for some Atom CPUs, and its latency and throughput aren't very good
there. Anyway - again a minor aspect, but I'll keep this in mind for
future code I write, as so far I've preferred the "andnot" form in
such cases.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:29:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10190.26986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVV2a-0004Tq-HW; Thu, 22 Oct 2020 07:29:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10190.26986; Thu, 22 Oct 2020 07:29:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVV2a-0004Tj-EP; Thu, 22 Oct 2020 07:29:40 +0000
Received: by outflank-mailman (input) for mailman id 10190;
 Thu, 22 Oct 2020 07:29:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVV2Z-0004Te-6g
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:29:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 553d952c-424c-4114-a279-a254f167f95a;
 Thu, 22 Oct 2020 07:29:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93DDEACB0;
 Thu, 22 Oct 2020 07:29:37 +0000 (UTC)
X-Inumbo-ID: 553d952c-424c-4114-a279-a254f167f95a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603351777;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lHJcY36MeWxdzWNKBChh8Cu+HtOFu3lFHtl8gIkgDuA=;
	b=nNrXMVtM7xvOCWyZThDKJuYJ/fRSnDPCELMLTfv2I0FZlCpIUYNvPbn4QmKIRsxKjl3O0T
	ncNkheAvIoeAng4Xd6fCNk3/5BtvPyvJ1pkHUrnDzAGccb9y0Nrl4u6uBVmzss/paoEVHU
	ixwOFXD9pn92h1hfLgJk4OMs6RTnXZA=
Subject: Re: [PATCH] x86/mm: avoid playing with directmap when self-snoop can
 be relied upon
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <33f7168c-b177-eed5-14e8-5e7a38dee853@suse.com>
 <20201021152321.cw6sdx3biyc2pmtx@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <48679ecc-3e69-697f-14dd-d12a1ef058ec@suse.com>
Date: Thu, 22 Oct 2020 09:29:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021152321.cw6sdx3biyc2pmtx@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 17:23, Roger Pau Monné wrote:
> On Tue, Oct 20, 2020 at 03:51:18PM +0200, Jan Beulich wrote:
>> The set of systems affected by XSA-345 would have been smaller if we had
>> this in place already: When the processor is capable of dealing with
>> mismatched cacheability, there's no extra work we need to carry out.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> I guess it's not worth using the alternative framework to patch this
> up at boot in order to avoid the call in the first place?

It being non-trivial (afaict) in cases like this one makes me
think that the price of doing so would be higher than the gain
to be had. But I might be wrong ...

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:33:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:33:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10193.26998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVV6H-0005O8-36; Thu, 22 Oct 2020 07:33:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10193.26998; Thu, 22 Oct 2020 07:33:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVV6G-0005O1-W6; Thu, 22 Oct 2020 07:33:28 +0000
Received: by outflank-mailman (input) for mailman id 10193;
 Thu, 22 Oct 2020 07:33:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVV6F-0005Nw-KF
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:33:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8a15e05-b9ae-4383-bd6d-8c37c3d3e435;
 Thu, 22 Oct 2020 07:33:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0746FABB2;
 Thu, 22 Oct 2020 07:33:26 +0000 (UTC)
X-Inumbo-ID: b8a15e05-b9ae-4383-bd6d-8c37c3d3e435
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352006;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jZmrjZSUnLgn7qREAUQyzOG+LZUm0ECb13lDvq/bEYE=;
	b=t2kBQLDFxQqZ7ChAZlR/Lq9sxFhOms9xLFsOMVe0yHQeBzlZHAgYnhkhDJ4n+1rmQ5KYqn
	EqYjtpNywvTaUvGcdUSyFp+upImx+R2vWqgasPtUOIHiot2frDR9faC3C53aP9hwjNAaNK
	znUKMoJb7dV10wa8+m0b6Gec8Ge57yE=
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
 <20201021154650.zz77ircyuedr7gpm@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3fd4c197-617e-dd48-6781-9ff0b1a82bf8@suse.com>
Date: Thu, 22 Oct 2020 09:33:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021154650.zz77ircyuedr7gpm@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.10.2020 17:46, Roger Pau Monné wrote:
> On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
>> There's no global lock around the updating of this global piece of data.
>> Make use of cmpxchgptr() to avoid two entities racing with their
>> updates.
>>
>> While touching the functionality, mark xen_consumers[] read-mostly (or
>> else the if() condition could use the result of cmpxchgptr(), writing to
>> the slot unconditionally).
> 
> I'm not sure I get this, likely related to the comment I have below.

This is about the alternative case of invoking cmpxchgptr()
without the if() around it. On x86 this would mean always
writing the field, even if the designated value is already in
place.

>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -57,7 +57,8 @@
>>   * with a pointer, we stash them dynamically in a small lookup array which
>>   * can be indexed by a small integer.
>>   */
>> -static xen_event_channel_notification_t xen_consumers[NR_XEN_CONSUMERS];
>> +static xen_event_channel_notification_t __read_mostly
>> +    xen_consumers[NR_XEN_CONSUMERS];
>>  
>>  /* Default notification action: wake up from wait_on_xen_event_channel(). */
>>  static void default_xen_notification_fn(struct vcpu *v, unsigned int port)
>> @@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
>>  
>>      for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
>>      {
>> +        /* Use cmpxchgptr() in lieu of a global lock. */
>>          if ( xen_consumers[i] == NULL )
>> -            xen_consumers[i] = fn;
>> +            cmpxchgptr(&xen_consumers[i], NULL, fn);
>>          if ( xen_consumers[i] == fn )
>>              break;
> 
> I think you could join it as:
> 
> if ( !xen_consumers[i] &&
>      !cmpxchgptr(&xen_consumers[i], NULL, fn) )
>     break;
> 
> As cmpxchgptr will return the previous value of &xen_consumers[i]?

But then you also have to check whether the returned value is
fn (or retain the 2nd if()).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:42:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:42:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10198.27028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVEz-0006Ke-JJ; Thu, 22 Oct 2020 07:42:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10198.27028; Thu, 22 Oct 2020 07:42:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVEz-0006KU-C4; Thu, 22 Oct 2020 07:42:29 +0000
Received: by outflank-mailman (input) for mailman id 10198;
 Thu, 22 Oct 2020 07:42:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVVEx-0006JQ-D6
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4cac8e64-9e80-4c77-ab36-e75ad67bb889;
 Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BDAE6B1A0;
 Thu, 22 Oct 2020 07:42:25 +0000 (UTC)
X-Inumbo-ID: 4cac8e64-9e80-4c77-ab36-e75ad67bb889
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352545;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=FBBqK8HW3BIG0quz5o+GZ5G2pN19linyD06f3Ehf/3E=;
	b=Ku/7Poc7X6u54p4uLBdhgxlQmb+7ztGf4Y5XEcudfV9uDHBPLfP+yYzYjwltr+6HoxAx00
	U8V2bJtnD3lNNDInjMXYmr+eYx10+JmswutopyP/xpQsysO6ymER2ClLDV3NEERFMUEHlX
	yGSnKxIue8rBtuqwUSMoyv8L2q/GI+g=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	x86@kernel.org,
	linux-doc@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Jonathan Corbet <corbet@lwn.net>
Subject: [PATCH 0/5] xen: event handling cleanup
Date: Thu, 22 Oct 2020 09:42:09 +0200
Message-Id: <20201022074214.21693-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do some cleanups in Xen event handling code.

Juergen Gross (5):
  xen: remove no longer used functions
  xen/events: make struct irq_info private to events_base.c
  xen/events: only register debug interrupt for 2-level events
  xen/events: unmask a fifo event channel only if it was masked
  Documentation: add xen.fifo_events kernel parameter description

 .../admin-guide/kernel-parameters.txt         |  7 ++
 arch/x86/xen/smp.c                            | 19 ++--
 arch/x86/xen/xen-ops.h                        |  2 +
 drivers/xen/events/events_2l.c                |  7 +-
 drivers/xen/events/events_base.c              | 90 +++++++++++++------
 drivers/xen/events/events_fifo.c              |  9 +-
 drivers/xen/events/events_internal.h          | 70 ++-------------
 include/xen/events.h                          |  8 --
 8 files changed, 100 insertions(+), 112 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:42:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:42:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10197.27022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVEz-0006K3-73; Thu, 22 Oct 2020 07:42:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10197.27022; Thu, 22 Oct 2020 07:42:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVEz-0006Jv-2j; Thu, 22 Oct 2020 07:42:29 +0000
Received: by outflank-mailman (input) for mailman id 10197;
 Thu, 22 Oct 2020 07:42:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVVEx-0006JP-9I
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0acf22e-8065-4816-b469-dc09b6d636cf;
 Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AC1DCB19E;
 Thu, 22 Oct 2020 07:42:25 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kVVEx-0006JP-9I
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:27 +0000
X-Inumbo-ID: f0acf22e-8065-4816-b469-dc09b6d636cf
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f0acf22e-8065-4816-b469-dc09b6d636cf;
	Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352545;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8+gXPSZ4YK0mj6VX7zgMh/c3+7HoLEk5WluQT9Z0yeU=;
	b=WUkn4x9r0qlY9xTRnQ8DIhORZkjrTb516b1OQh3oXGCkHDoEK3O35H4KKsTIbKvF0ysIp7
	8PWXisHXPbb1IzERtCyrFPHmdmq7qQs5zbXO6stE4UHq13ZKspWu1Okwvlc+eT5RG12i72
	XabaRzfVA7Um6FUlRe41BIpTyRf6tvY=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id AC1DCB19E;
	Thu, 22 Oct 2020 07:42:25 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 1/5] xen: remove no longer used functions
Date: Thu, 22 Oct 2020 09:42:10 +0200
Message-Id: <20201022074214.21693-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022074214.21693-1-jgross@suse.com>
References: <20201022074214.21693-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With the switch to the lateeoi model for interdomain event channels,
some functions are no longer in use. Remove them.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_base.c | 21 ---------------------
 include/xen/events.h             |  8 --------
 2 files changed, 29 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index cc317739e786..436682db41c5 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1145,14 +1145,6 @@ static int bind_interdomain_evtchn_to_irq_chip(unsigned int remote_domain,
 					       chip);
 }
 
-int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
-				   evtchn_port_t remote_port)
-{
-	return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
-						   &xen_dynamic_chip);
-}
-EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq);
-
 int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
 					   evtchn_port_t remote_port)
 {
@@ -1320,19 +1312,6 @@ static int bind_interdomain_evtchn_to_irqhandler_chip(
 	return irq;
 }
 
-int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
-					  evtchn_port_t remote_port,
-					  irq_handler_t handler,
-					  unsigned long irqflags,
-					  const char *devname,
-					  void *dev_id)
-{
-	return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain,
-				remote_port, handler, irqflags, devname,
-				dev_id, &xen_dynamic_chip);
-}
-EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irqhandler);
-
 int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
diff --git a/include/xen/events.h b/include/xen/events.h
index 3b8155c2ea03..8ec418e30c7f 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -35,16 +35,8 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
 			   unsigned long irqflags,
 			   const char *devname,
 			   void *dev_id);
-int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
-				   evtchn_port_t remote_port);
 int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
 					   evtchn_port_t remote_port);
-int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
-					  evtchn_port_t remote_port,
-					  irq_handler_t handler,
-					  unsigned long irqflags,
-					  const char *devname,
-					  void *dev_id);
 int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:42:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:42:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10196.27009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVEr-0006Hu-U3; Thu, 22 Oct 2020 07:42:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10196.27009; Thu, 22 Oct 2020 07:42:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVEr-0006Hn-R8; Thu, 22 Oct 2020 07:42:21 +0000
Received: by outflank-mailman (input) for mailman id 10196;
 Thu, 22 Oct 2020 07:42:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVVEq-0006Hi-1L
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b93c37c-2174-4d27-b3fe-958718084834;
 Thu, 22 Oct 2020 07:42:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1EFBFB19C;
 Thu, 22 Oct 2020 07:42:17 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kVVEq-0006Hi-1L
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:20 +0000
X-Inumbo-ID: 7b93c37c-2174-4d27-b3fe-958718084834
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7b93c37c-2174-4d27-b3fe-958718084834;
	Thu, 22 Oct 2020 07:42:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352537;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XqnchFJhBknshCnEom8XMHWkGQ4hmPvPqTXGVG7OITs=;
	b=Q00wt85dC39LR5wWKYHF7gtyagBVsjr5sQ0GczfFIHTGL153K7Lu8j+cZV8jpJnwPswLha
	ZdcNC0U3hAgS5n3iVeVmpj0OtEoHKaViBATcgHnSOw9Rp3OBYFmwKz55RmnNN+MijuqcWM
	hDFxm8vmvvWm6cxcBt6jhUUfVOJ6ses=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 1EFBFB19C;
	Thu, 22 Oct 2020 07:42:17 +0000 (UTC)
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201021221253.GA73207@mattapan.m5p.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a960dd45-2867-5ef6-970c-952c03aa8cef@suse.com>
Date: Thu, 22 Oct 2020 09:42:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021221253.GA73207@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 00:12, Elliott Mitchell wrote:
> --- a/xen/arch/arm/acpi/domain_build.c
> +++ b/xen/arch/arm/acpi/domain_build.c
> @@ -42,17 +42,18 @@ static int __init acpi_iomem_deny_access(struct domain *d)
>      status = acpi_get_table(ACPI_SIG_SPCR, 0,
>                              (struct acpi_table_header **)&spcr);
>  
> -    if ( ACPI_FAILURE(status) )
> +    if ( ACPI_SUCCESS(status) )
>      {
> -        printk("Failed to get SPCR table\n");
> -        return -EINVAL;
> +        mfn = spcr->serial_port.address >> PAGE_SHIFT;
> +        /* Deny MMIO access for UART */
> +        rc = iomem_deny_access(d, mfn, mfn + 1);
> +        if ( rc )
> +            return rc;
> +    }
> +    else
> +    {
> +        printk("Failed to get SPCR table, Xen console may be unavailable\n");
>      }

Nit: While I see you've got Stefano's R-b already, in Xen we typically
omit the braces here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10199.27046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVF0-0006Nh-UL; Thu, 22 Oct 2020 07:42:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10199.27046; Thu, 22 Oct 2020 07:42:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVF0-0006NX-Qf; Thu, 22 Oct 2020 07:42:30 +0000
Received: by outflank-mailman (input) for mailman id 10199;
 Thu, 22 Oct 2020 07:42:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVVEz-0006JP-Re
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id edad78d9-e1aa-4794-9a9d-ba180114592a;
 Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CDC47B1A1;
 Thu, 22 Oct 2020 07:42:25 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kVVEz-0006JP-Re
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:29 +0000
X-Inumbo-ID: edad78d9-e1aa-4794-9a9d-ba180114592a
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id edad78d9-e1aa-4794-9a9d-ba180114592a;
	Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352545;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DCzo2x6WDAiWUVSQYeZls1IwPXkUYifyjGiWBjsTIGM=;
	b=oiuUMXtYSXgfUTl4bVKUWtF4DsZNuvZ3duK18sq6/l9VT5fhpTpl6ASGluJI/tBMZHobi/
	Pz7McQGoBilKn39mXstaaFIepoPGKEEDROJh/SL12oAihjb/La/1Y/gCa2bmj5dHgN7WYB
	aJs6uuf3rgKaOy2StoJI3MMrkEiZwmk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id CDC47B1A1;
	Thu, 22 Oct 2020 07:42:25 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/5] xen/events: make struct irq_info private to events_base.c
Date: Thu, 22 Oct 2020 09:42:11 +0200
Message-Id: <20201022074214.21693-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022074214.21693-1-jgross@suse.com>
References: <20201022074214.21693-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The struct irq_info of Xen's event handling is used only for two
evtchn_ops functions outside of events_base.c. Those two functions
can easily be switched to avoid that usage.

This allows making struct irq_info and its related access functions
private to events_base.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_2l.c       |  7 +--
 drivers/xen/events/events_base.c     | 63 ++++++++++++++++++++++---
 drivers/xen/events/events_fifo.c     |  6 +--
 drivers/xen/events/events_internal.h | 70 ++++------------------------
 4 files changed, 73 insertions(+), 73 deletions(-)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index fe5ad0e89cd8..da87f3a1e351 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,10 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
 	return EVTCHN_2L_NR_CHANNELS;
 }
 
-static void evtchn_2l_bind_to_cpu(struct irq_info *info, unsigned cpu)
+static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
+				  unsigned int old_cpu)
 {
-	clear_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, info->cpu)));
-	set_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+	clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, old_cpu)));
+	set_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
 }
 
 static void evtchn_2l_clear_pending(evtchn_port_t port)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 436682db41c5..1c25580c7691 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -70,6 +70,57 @@
 #undef MODULE_PARAM_PREFIX
 #define MODULE_PARAM_PREFIX "xen."
 
+/* Interrupt types. */
+enum xen_irq_type {
+	IRQT_UNBOUND = 0,
+	IRQT_PIRQ,
+	IRQT_VIRQ,
+	IRQT_IPI,
+	IRQT_EVTCHN
+};
+
+/*
+ * Packed IRQ information:
+ * type - enum xen_irq_type
+ * event channel - irq->event channel mapping
+ * cpu - cpu this event channel is bound to
+ * index - type-specific information:
+ *    PIRQ - vector, with MSB being "needs EIO", or physical IRQ of the HVM
+ *           guest, or GSI (real passthrough IRQ) of the device.
+ *    VIRQ - virq number
+ *    IPI - IPI vector
+ *    EVTCHN -
+ */
+struct irq_info {
+	struct list_head list;
+	struct list_head eoi_list;
+	short refcnt;
+	short spurious_cnt;
+	enum xen_irq_type type; /* type */
+	unsigned irq;
+	evtchn_port_t evtchn;   /* event channel */
+	unsigned short cpu;     /* cpu bound */
+	unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
+	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
+	u64 eoi_time;           /* Time in jiffies when to EOI. */
+
+	union {
+		unsigned short virq;
+		enum ipi_vector ipi;
+		struct {
+			unsigned short pirq;
+			unsigned short gsi;
+			unsigned char vector;
+			unsigned char flags;
+			uint16_t domid;
+		} pirq;
+	} u;
+};
+
+#define PIRQ_NEEDS_EOI	(1 << 0)
+#define PIRQ_SHAREABLE	(1 << 1)
+#define PIRQ_MSI_GROUP	(1 << 2)
+
 static uint __read_mostly event_loop_timeout = 2;
 module_param(event_loop_timeout, uint, 0644);
 
@@ -110,7 +161,7 @@ static DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1};
 /* IRQ <-> IPI mapping */
 static DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1};
 
-int **evtchn_to_irq;
+static int **evtchn_to_irq;
 #ifdef CONFIG_X86
 static unsigned long *pirq_eoi_map;
 #endif
@@ -190,7 +241,7 @@ int get_evtchn_to_irq(evtchn_port_t evtchn)
 }
 
 /* Get info for IRQ */
-struct irq_info *info_for_irq(unsigned irq)
+static struct irq_info *info_for_irq(unsigned irq)
 {
 	if (irq < nr_legacy_irqs())
 		return legacy_info_ptrs[irq];
@@ -228,7 +279,7 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 
 	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);
 
-	return xen_evtchn_port_setup(info);
+	return xen_evtchn_port_setup(evtchn);
 }
 
 static int xen_irq_info_evtchn_setup(unsigned irq,
@@ -351,7 +402,7 @@ static enum xen_irq_type type_from_irq(unsigned irq)
 	return info_for_irq(irq)->type;
 }
 
-unsigned cpu_from_irq(unsigned irq)
+static unsigned cpu_from_irq(unsigned irq)
 {
 	return info_for_irq(irq)->cpu;
 }
@@ -391,7 +442,7 @@ static void bind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int cpu)
 #ifdef CONFIG_SMP
 	cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(cpu));
 #endif
-	xen_evtchn_port_bind_to_cpu(info, cpu);
+	xen_evtchn_port_bind_to_cpu(evtchn, cpu, info->cpu);
 
 	info->cpu = cpu;
 }
@@ -745,7 +796,7 @@ static unsigned int __startup_pirq(unsigned int irq)
 	info->evtchn = evtchn;
 	bind_evtchn_to_cpu(evtchn, 0);
 
-	rc = xen_evtchn_port_setup(info);
+	rc = xen_evtchn_port_setup(evtchn);
 	if (rc)
 		goto err;
 
diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index 6085a808da95..243e7b6d7b96 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -138,9 +138,8 @@ static void init_array_page(event_word_t *array_page)
 		array_page[i] = 1 << EVTCHN_FIFO_MASKED;
 }
 
-static int evtchn_fifo_setup(struct irq_info *info)
+static int evtchn_fifo_setup(evtchn_port_t port)
 {
-	evtchn_port_t port = info->evtchn;
 	unsigned new_array_pages;
 	int ret;
 
@@ -186,7 +185,8 @@ static int evtchn_fifo_setup(struct irq_info *info)
 	return ret;
 }
 
-static void evtchn_fifo_bind_to_cpu(struct irq_info *info, unsigned cpu)
+static void evtchn_fifo_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu, 
+				    unsigned int old_cpu)
 {
 	/* no-op */
 }
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 82937d90d7d7..0a97c0549db7 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -7,65 +7,15 @@
 #ifndef __EVENTS_INTERNAL_H__
 #define __EVENTS_INTERNAL_H__
 
-/* Interrupt types. */
-enum xen_irq_type {
-	IRQT_UNBOUND = 0,
-	IRQT_PIRQ,
-	IRQT_VIRQ,
-	IRQT_IPI,
-	IRQT_EVTCHN
-};
-
-/*
- * Packed IRQ information:
- * type - enum xen_irq_type
- * event channel - irq->event channel mapping
- * cpu - cpu this event channel is bound to
- * index - type-specific information:
- *    PIRQ - vector, with MSB being "needs EIO", or physical IRQ of the HVM
- *           guest, or GSI (real passthrough IRQ) of the device.
- *    VIRQ - virq number
- *    IPI - IPI vector
- *    EVTCHN -
- */
-struct irq_info {
-	struct list_head list;
-	struct list_head eoi_list;
-	short refcnt;
-	short spurious_cnt;
-	enum xen_irq_type type;	/* type */
-	unsigned irq;
-	evtchn_port_t evtchn;	/* event channel */
-	unsigned short cpu;	/* cpu bound */
-	unsigned short eoi_cpu;	/* EOI must happen on this cpu */
-	unsigned int irq_epoch;	/* If eoi_cpu valid: irq_epoch of event */
-	u64 eoi_time;		/* Time in jiffies when to EOI. */
-
-	union {
-		unsigned short virq;
-		enum ipi_vector ipi;
-		struct {
-			unsigned short pirq;
-			unsigned short gsi;
-			unsigned char vector;
-			unsigned char flags;
-			uint16_t domid;
-		} pirq;
-	} u;
-};
-
-#define PIRQ_NEEDS_EOI	(1 << 0)
-#define PIRQ_SHAREABLE	(1 << 1)
-#define PIRQ_MSI_GROUP	(1 << 2)
-
 struct evtchn_loop_ctrl;
 
 struct evtchn_ops {
 	unsigned (*max_channels)(void);
 	unsigned (*nr_channels)(void);
 
-	int (*setup)(struct irq_info *info);
-	void (*bind_to_cpu)(struct irq_info *info, unsigned cpu);
+	int (*setup)(evtchn_port_t port);
+	void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
+			    unsigned int old_cpu);
 
 	void (*clear_pending)(evtchn_port_t port);
 	void (*set_pending)(evtchn_port_t port);
@@ -83,12 +33,9 @@ struct evtchn_ops {
 
 extern const struct evtchn_ops *evtchn_ops;
 
-extern int **evtchn_to_irq;
 int get_evtchn_to_irq(evtchn_port_t evtchn);
 void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl);
 
-struct irq_info *info_for_irq(unsigned irq);
-unsigned cpu_from_irq(unsigned irq);
 unsigned int cpu_from_evtchn(evtchn_port_t evtchn);
 
 static inline unsigned xen_evtchn_max_channels(void)
@@ -100,17 +47,18 @@ static inline unsigned xen_evtchn_max_channels(void)
  * Do any ABI specific setup for a bound event channel before it can
  * be unmasked and used.
  */
-static inline int xen_evtchn_port_setup(struct irq_info *info)
+static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
 {
 	if (evtchn_ops->setup)
-		return evtchn_ops->setup(info);
+		return evtchn_ops->setup(evtchn);
 	return 0;
 }
 
-static inline void xen_evtchn_port_bind_to_cpu(struct irq_info *info,
-					       unsigned cpu)
+static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
+					       unsigned int cpu,
+					       unsigned int old_cpu)
 {
-	evtchn_ops->bind_to_cpu(info, cpu);
+	evtchn_ops->bind_to_cpu(evtchn, cpu, old_cpu);
 }
 
 static inline void clear_evtchn(evtchn_port_t port)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:42:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:42:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10200.27058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVF4-0006SH-6N; Thu, 22 Oct 2020 07:42:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10200.27058; Thu, 22 Oct 2020 07:42:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVF4-0006S8-2F; Thu, 22 Oct 2020 07:42:34 +0000
Received: by outflank-mailman (input) for mailman id 10200;
 Thu, 22 Oct 2020 07:42:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVVF2-0006JQ-9x
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4234f1f-a4f2-4fce-823b-87302055cb70;
 Thu, 22 Oct 2020 07:42:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 60548B1A6;
 Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kVVF2-0006JQ-9x
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:32 +0000
X-Inumbo-ID: e4234f1f-a4f2-4fce-823b-87302055cb70
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e4234f1f-a4f2-4fce-823b-87302055cb70;
	Thu, 22 Oct 2020 07:42:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352546;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nHAk3DX3D0x2mTNVKivpF51Q9ju8eawUFMYlwMlLqII=;
	b=vFm0SaytDZT7HUOYCTHQFDI2uCpz4rcWn77wHy8dww39cSM0pDpTkPn9YL7t2Ax4aJ+HEd
	CKnHFM48SdWPOubGznfdTt+CRYTHR3c+F8RNasbOnBpcyAA1OELv3r1MrFsuGFUS0dqV6z
	1EeT51axrFMSZOTmfRvoS8dRL0W18j8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 60548B1A6;
	Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jonathan Corbet <corbet@lwn.net>
Subject: [PATCH 5/5] Documentation: add xen.fifo_events kernel parameter description
Date: Thu, 22 Oct 2020 09:42:14 +0200
Message-Id: <20201022074214.21693-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022074214.21693-1-jgross@suse.com>
References: <20201022074214.21693-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The kernel boot parameter xen.fifo_events isn't listed in
Documentation/admin-guide/kernel-parameters.txt. Add it.
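For reference, disabling fifo event handling on the kernel command line
would look like this (standard boolean parameter syntax; "0", "off" or
"n" work as well):

```
xen.fifo_events=false
```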

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 02d4adbf98d2..526d65d8573a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5978,6 +5978,13 @@
 			After which time (jiffies) the event handling loop
 			should start to delay EOI handling. Default is 2.
 
+	xen.fifo_events=	[XEN]
+			Boolean parameter to disable using fifo event handling
+			even if available. Normally fifo event handling is
+			preferred over the 2-level event handling, as it is
+			fairer and the number of possible event channels is
+			much higher. Default is on (use fifo events).
+
 	nopv=		[X86,XEN,KVM,HYPER_V,VMWARE]
 			Disables the PV optimizations forcing the guest to run
 			as generic guest with no PV drivers. Currently support
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:42:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:42:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10201.27070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVF6-0006W2-Hl; Thu, 22 Oct 2020 07:42:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10201.27070; Thu, 22 Oct 2020 07:42:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVF6-0006Vq-E5; Thu, 22 Oct 2020 07:42:36 +0000
Received: by outflank-mailman (input) for mailman id 10201;
 Thu, 22 Oct 2020 07:42:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVVF4-0006JP-Ry
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3ad582b-766e-4735-9d13-9a084aa73deb;
 Thu, 22 Oct 2020 07:42:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3E5C2B1A3;
 Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kVVF4-0006JP-Ry
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:34 +0000
X-Inumbo-ID: c3ad582b-766e-4735-9d13-9a084aa73deb
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c3ad582b-766e-4735-9d13-9a084aa73deb;
	Thu, 22 Oct 2020 07:42:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352546;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xSsu2Z5Xpwi6tRqum0tsNqNsbAp7L8TAp9ugr1gvsVI=;
	b=atQVgRElJPObBm4D7xYVyo/SuLwtBvhCQM9dSvPAK4KyHUJGtLTLR/ibBuL5cHA1w1q9BD
	TsaPzGWVrDRKrB/bssKNhDFnxlZJ5vs6T7Z0NsBy06HFiVay+eoJpwM7REhV/VPwOYQbl7
	rhiGlLE8UurIByn89SWD+6pLMzqgBUk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 3E5C2B1A3;
	Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 4/5] xen/events: unmask a fifo event channel only if it was masked
Date: Thu, 22 Oct 2020 09:42:13 +0200
Message-Id: <20201022074214.21693-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022074214.21693-1-jgross@suse.com>
References: <20201022074214.21693-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When fifo event channels are in use, unmasking an event channel can
require a hypercall, so try to avoid that by first checking whether
the event channel is really masked.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/events/events_fifo.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index 243e7b6d7b96..f60c5a9ec833 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -236,6 +236,9 @@ static bool clear_masked_cond(volatile event_word_t *word)
 
 	w = *word;
 
+	if (!(w & (1 << EVTCHN_FIFO_MASKED)))
+		return true;
+
 	do {
 		if (w & (1 << EVTCHN_FIFO_PENDING))
 			return false;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:42:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:42:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10202.27081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVFA-0006cj-Ry; Thu, 22 Oct 2020 07:42:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10202.27081; Thu, 22 Oct 2020 07:42:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVFA-0006cX-ND; Thu, 22 Oct 2020 07:42:40 +0000
Received: by outflank-mailman (input) for mailman id 10202;
 Thu, 22 Oct 2020 07:42:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVVF9-0006JP-SE
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:42:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88c88692-d876-4f25-8902-6597f0279a03;
 Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1902BB1A5;
 Thu, 22 Oct 2020 07:42:26 +0000 (UTC)
X-Inumbo-ID: 88c88692-d876-4f25-8902-6597f0279a03
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352546;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8iyLraNh4qsC0U9VAH0jIutbMoqMZ9rCRdDLW3b+ghk=;
	b=buOqT8orNctDSPPfKDdZ4gitpMoJYnx0HWmUlC1aRgdx74mP95sctuaKXF+JzlM36uSGkM
	p2xDJuqhtNo9VidkxVxYCRBcdUEkPOHDAmCxRS5Po5/c6d4/HReu+Z5NA3SH8iV7C+p2/I
	wrRCy+A/tIGlAMKNWyXwFMUaHgLtNRc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: [PATCH 3/5] xen/events: only register debug interrupt for 2-level events
Date: Thu, 22 Oct 2020 09:42:12 +0200
Message-Id: <20201022074214.21693-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022074214.21693-1-jgross@suse.com>
References: <20201022074214.21693-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_debug_interrupt() is specific to 2-level event handling, so don't
register it when fifo event handling is active.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 arch/x86/xen/smp.c               | 19 +++++++++++--------
 arch/x86/xen/xen-ops.h           |  2 ++
 drivers/xen/events/events_base.c |  6 ++++--
 3 files changed, 17 insertions(+), 10 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 2097fa0ebdb5..b544e511b3c2 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -88,14 +88,17 @@ int xen_smp_intr_init(unsigned int cpu)
 	per_cpu(xen_callfunc_irq, cpu).irq = rc;
 	per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
 
-	debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
-	rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu, xen_debug_interrupt,
-				     IRQF_PERCPU | IRQF_NOBALANCING,
-				     debug_name, NULL);
-	if (rc < 0)
-		goto fail;
-	per_cpu(xen_debug_irq, cpu).irq = rc;
-	per_cpu(xen_debug_irq, cpu).name = debug_name;
+	if (!fifo_events) {
+		debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
+		rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu,
+					     xen_debug_interrupt,
+					     IRQF_PERCPU | IRQF_NOBALANCING,
+					     debug_name, NULL);
+		if (rc < 0)
+			goto fail;
+		per_cpu(xen_debug_irq, cpu).irq = rc;
+		per_cpu(xen_debug_irq, cpu).name = debug_name;
+	}
 
 	callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 45d556f71858..e444c78b6e2b 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -29,6 +29,8 @@ extern struct start_info *xen_start_info;
 extern struct shared_info xen_dummy_shared_info;
 extern struct shared_info *HYPERVISOR_shared_info;
 
+extern bool fifo_events;
+
 void xen_setup_mfn_list_list(void);
 void xen_build_mfn_list_list(void);
 void xen_setup_machphys_mapping(void);
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 1c25580c7691..bb18cce4db06 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -2050,7 +2050,7 @@ void xen_setup_callback_vector(void) {}
 static inline void xen_alloc_callback_vector(void) {}
 #endif
 
-static bool fifo_events = true;
+bool fifo_events = true;
 module_param(fifo_events, bool, 0);
 
 static int xen_evtchn_cpu_prepare(unsigned int cpu)
@@ -2082,8 +2082,10 @@ void __init xen_init_IRQ(void)
 
 	if (fifo_events)
 		ret = xen_evtchn_fifo_init();
-	if (ret < 0)
+	if (ret < 0) {
 		xen_evtchn_2l_init();
+		fifo_events = false;
+	}
 
 	xen_cpu_init_eoi(smp_processor_id());
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:46:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:46:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10220.27097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVIz-0007G6-Et; Thu, 22 Oct 2020 07:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10220.27097; Thu, 22 Oct 2020 07:46:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVIz-0007Fz-Bc; Thu, 22 Oct 2020 07:46:37 +0000
Received: by outflank-mailman (input) for mailman id 10220;
 Thu, 22 Oct 2020 07:46:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVVIy-0007Fu-JG
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:46:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 93429545-b353-43ee-99f1-b5abfa3916bc;
 Thu, 22 Oct 2020 07:46:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3BB6BB1BF;
 Thu, 22 Oct 2020 07:46:34 +0000 (UTC)
X-Inumbo-ID: 93429545-b353-43ee-99f1-b5abfa3916bc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603352794;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0rHay0Yz4ByTnjfExn9doWqVZCdz4OQaLOPgGx0TQtM=;
	b=TZ8Sbw7NuXy2WDdTPXB9xnXSDAhWQfSQsAZrJpodQ+ujN8XUtHPfWd79KC0R5EZ1wr9iu9
	r8lJtM/XhO/nAwiCUVavBIiqh0GBtfrmqqVWzbkhXcalNAtJ5fZBD0M32RxBbxS8UjFhCU
	bfuiubesHfbVeGLvFXS6ZsWx/QcEDDw=
Subject: Re: [xen-unstable test] 156050: regressions - FAIL
To: xen-devel@lists.xenproject.org
References: <osstest-156050-mainreport@xen.org>
Cc: osstest service owner <osstest-admin@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c38b0b70-b85c-6a5c-6a94-d4845d59a9dd@suse.com>
Date: Thu, 22 Oct 2020 09:46:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <osstest-156050-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 07:58, osstest service owner wrote:
> flight 156050 xen-unstable real [real]
> flight 156084 xen-unstable real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/156050/
> http://logs.test-lab.xenproject.org/osstest/logs/156084/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156013

Continuing from my reply to the earlier flight yesterday, I'm
meanwhile even more puzzled: 4.12 and earlier have had pushes,
i.e. the tests that have been failing for 4.13 and newer were
successful there. It's further suspicious (to me) that in each
case it's just one of the qemu{u,t} tests which fails, while its
sibling succeeds. This may mean a dependency on the particular
hardware we're running on, but again I can't connect such
behavior to the commits under test.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:51:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:51:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10224.27112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVNC-00087g-3V; Thu, 22 Oct 2020 07:50:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10224.27112; Thu, 22 Oct 2020 07:50:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVNC-00087Z-0P; Thu, 22 Oct 2020 07:50:58 +0000
Received: by outflank-mailman (input) for mailman id 10224;
 Thu, 22 Oct 2020 07:50:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVVNA-00087U-4Q
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:50:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df2db677-704f-4a7c-9b39-ba4844631df3;
 Thu, 22 Oct 2020 07:50:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6EE36AB95;
 Thu, 22 Oct 2020 07:50:53 +0000 (UTC)
X-Inumbo-ID: df2db677-704f-4a7c-9b39-ba4844631df3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603353053;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zNLtVEILTRhSO9neaxgcuQDtNGcKm6gHkTPdV6pRK6Q=;
	b=MI5ELawq9JMeJ0xcVv/klGxR3zrlu0jnHvjsyo+88SnGhFjh/35SUV2brBNCuPc+F7M1Dx
	/OgZsV2PRkjK2dpooT6U6OGgzRQr20tquGLL5TrpKnNeYbs5m0JShnBfwqc/ZuWBJn1Kp9
	pRRMqdVTm1FIYms6cRVcgrsPhOknRBc=
Subject: Re: [xen-unstable test] 156050: regressions - FAIL
From: Jan Beulich <jbeulich@suse.com>
To: xen-devel@lists.xenproject.org
Cc: osstest service owner <osstest-admin@xenproject.org>
References: <osstest-156050-mainreport@xen.org>
 <c38b0b70-b85c-6a5c-6a94-d4845d59a9dd@suse.com>
Message-ID: <72d0b538-da5c-48fb-45d2-ff7407d4925c@suse.com>
Date: Thu, 22 Oct 2020 09:50:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <c38b0b70-b85c-6a5c-6a94-d4845d59a9dd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 09:46, Jan Beulich wrote:
> On 22.10.2020 07:58, osstest service owner wrote:
>> flight 156050 xen-unstable real [real]
>> flight 156084 xen-unstable real-retest [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/156050/
>> http://logs.test-lab.xenproject.org/osstest/logs/156084/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 156013
> 
> Continuing from my reply to the earlier flight yesterday, I'm
> meanwhile even more puzzled: 4.12 and earlier have had pushes,
> i.e. the tests that have been failing for 4.13 and newer were
> successful there. It's further suspicious (to me) that in each
> case it's just one of the qemu{u,t} tests which fails, while its
> sibling succeeds. This may mean a dependency on the particular
> hardware we're running on, but again I can't connect such
> behavior to the commits under test.

Actually, yesterday's 4.14 and 4.13 flights speak against a
hardware dependency: their failing and successful qemu{u,t}
tests each ran on the same host.

For now I'm lost; I'd appreciate it if others could take a look.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:54:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:54:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10227.27124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVQk-0008Os-O8; Thu, 22 Oct 2020 07:54:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10227.27124; Thu, 22 Oct 2020 07:54:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVQk-0008Ol-KY; Thu, 22 Oct 2020 07:54:38 +0000
Received: by outflank-mailman (input) for mailman id 10227;
 Thu, 22 Oct 2020 07:54:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVVQj-0008Og-Fa
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:54:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71463379-af1c-4a55-9f7c-68bb5f473271;
 Thu, 22 Oct 2020 07:54:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D7D48AB95;
 Thu, 22 Oct 2020 07:54:35 +0000 (UTC)
X-Inumbo-ID: 71463379-af1c-4a55-9f7c-68bb5f473271
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603353276;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cg+tcmkvWp5iSn5X367Swi4D8X/gEzacrPENhCVg7N8=;
	b=NztgMB21XpZkSCtyLdRmGo2ySnOxdLkxMUQyw7bb0ZJIJh4qh/5/Ob93wN9RaPEBcsGbrE
	Kf9W1NUvqgHVRqYhdIiW28hosxXIf5hcVsxzvHFeYZkDx+OMcBZ7jm/KKc0yAojtMfgBBn
	YUpBV4ZWovwng65Ha/UYmL5V033p6+M=
Subject: Re: [PATCH 3/5] xen/events: only register debug interrupt for 2-level
 events
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>
References: <20201022074214.21693-1-jgross@suse.com>
 <20201022074214.21693-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9bfc266f-1efb-7910-6ff7-9cea6e40d7c9@suse.com>
Date: Thu, 22 Oct 2020 09:54:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022074214.21693-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 09:42, Juergen Gross wrote:
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -2050,7 +2050,7 @@ void xen_setup_callback_vector(void) {}
>  static inline void xen_alloc_callback_vector(void) {}
>  #endif
>  
> -static bool fifo_events = true;
> +bool fifo_events = true;

When making this non-static, perhaps better to also prefix it with
xen_?
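
Roughly, the rename being suggested could look as follows (a sketch, not
taken from the patch; module_param_named() would keep the user-visible
parameter name unchanged):

```c
/* Sketch only: give the now-global variable a xen_ prefix, while
 * module_param_named() preserves "fifo_events" as the parameter
 * name seen on the kernel command line. */

/* drivers/xen/events/events_base.c */
bool xen_fifo_events = true;
module_param_named(fifo_events, xen_fifo_events, bool, 0);

/* arch/x86/xen/xen-ops.h */
extern bool xen_fifo_events;
```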

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 07:56:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 07:56:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10229.27136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVS4-0008W0-2n; Thu, 22 Oct 2020 07:56:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10229.27136; Thu, 22 Oct 2020 07:56:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVS3-0008Vt-Vx; Thu, 22 Oct 2020 07:55:59 +0000
Received: by outflank-mailman (input) for mailman id 10229;
 Thu, 22 Oct 2020 07:55:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVVS3-0008Vl-C5
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 07:55:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 517bff5a-cd4b-4b45-97a4-5fcc121826fb;
 Thu, 22 Oct 2020 07:55:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A1670AC5F;
 Thu, 22 Oct 2020 07:55:57 +0000 (UTC)
X-Inumbo-ID: 517bff5a-cd4b-4b45-97a4-5fcc121826fb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603353357;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=G1L1K+pLqf/UxGfuaoyoFjDiUcEiA4in4XnRyEgIr6g=;
	b=FTrImu4mPRJ6yrUt8DRnKbFPZn3OKJMXN4t/VfmJl5iw4jUj2yc3ObrXiULkDBTrT9+sJp
	wXtdVda1Cy74dr+SDvRITBJ/De2Wmd09N6dS2bZbCJ/pX4VSxaHsL88ufPD7r32lZ9H2IP
	KdGhBNfXNQ3fig/9Qz/FRuHOM65myAk=
Subject: Re: [PATCH 4/5] xen/events: unmask a fifo event channel only if it
 was masked
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201022074214.21693-1-jgross@suse.com>
 <20201022074214.21693-5-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e6dcce7e-acfb-0ca1-8ff1-e303932bc3c5@suse.com>
Date: Thu, 22 Oct 2020 09:55:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022074214.21693-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 09:42, Juergen Gross wrote:
> --- a/drivers/xen/events/events_fifo.c
> +++ b/drivers/xen/events/events_fifo.c
> @@ -236,6 +236,9 @@ static bool clear_masked_cond(volatile event_word_t *word)
>  
>  	w = *word;
>  
> +	if (!(w & (1 << EVTCHN_FIFO_MASKED)))
> +		return true;

Maybe better move this ...

>  	do {
>  		if (w & (1 << EVTCHN_FIFO_PENDING))
>  			return false;
> 

... into the loop, above this check?
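
I.e. something along these lines (a stand-alone sketch, with the kernel's
sync_cmpxchg() approximated by a GCC/Clang builtin and the bit positions
assumed to match xen/interface/event_channel.h), so a concurrent unmask
observed on a cmpxchg retry also ends the loop:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint32_t event_word_t;

/* Bit positions assumed from xen/interface/event_channel.h. */
#define EVTCHN_FIFO_PENDING 31
#define EVTCHN_FIFO_MASKED  30
#define EVTCHN_FIFO_BUSY    28

/* Sketch of clear_masked_cond() with the "already unmasked" test
 * moved inside the retry loop; returns true when no hypercall is
 * needed, false when the channel is pending and does need one. */
static bool clear_masked_cond(volatile event_word_t *word)
{
	event_word_t new, old, w;

	w = *word;

	do {
		if (!(w & (1U << EVTCHN_FIFO_MASKED)))
			return true;

		if (w & (1U << EVTCHN_FIFO_PENDING))
			return false;

		old = w & ~(1U << EVTCHN_FIFO_BUSY);
		new = old & ~(1U << EVTCHN_FIFO_MASKED);
		/* stand-in for the kernel's sync_cmpxchg() */
		w = __sync_val_compare_and_swap(word, old, new);
	} while (w != old);

	return true;
}
```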

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:02:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:02:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10242.27147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVXl-0001Xe-6s; Thu, 22 Oct 2020 08:01:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10242.27147; Thu, 22 Oct 2020 08:01:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVXl-0001XX-3z; Thu, 22 Oct 2020 08:01:53 +0000
Received: by outflank-mailman (input) for mailman id 10242;
 Thu, 22 Oct 2020 08:01:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVVXj-0001XS-U4
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:01:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 94db193d-5262-4ce4-819d-d849086619ad;
 Thu, 22 Oct 2020 08:01:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 270F9AE7A;
 Thu, 22 Oct 2020 08:01:50 +0000 (UTC)
X-Inumbo-ID: 94db193d-5262-4ce4-819d-d849086619ad
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603353710;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=al5X9k9TmXnWVtHnDRc3HqlGbm+NoIWAXwyHFtAngDw=;
	b=cvLN1NFO4flcJ7AqVjmOAOYFNPxed6JADv1j2NB9NnIE/WIqa9Y1zTrnZjfjt9QbjwXhcp
	9vW4/GI6DGzMeNr16xacgc9d6c58Z1h1WWKIRVyyPWF/OKW6Q+52qGGG/ZMXYmMvuK3hoC
	gDRGji9S/lulT/lCAcVc6a7EupoAzZY=
Subject: Re: [PATCH 0/5] xen: event handling cleanup
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 x86@kernel.org, linux-doc@vger.kernel.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Jonathan Corbet <corbet@lwn.net>
References: <20201022074214.21693-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9f2c78c8-da92-1b64-02ba-1130bfc79962@suse.com>
Date: Thu, 22 Oct 2020 10:01:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022074214.21693-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 09:42, Juergen Gross wrote:
> Do some cleanups in Xen event handling code.
> 
> Juergen Gross (5):
>   xen: remove no longer used functions
>   xen/events: make struct irq_info private to events_base.c
>   xen/events: only register debug interrupt for 2-level events
>   xen/events: unmask a fifo event channel only if it was masked
>   Documentation: add xen.fifo_events kernel parameter description

With the two remarks on individual patches suitably taken care of
one way or another:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:05:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:05:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10245.27159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVbS-0001nc-OU; Thu, 22 Oct 2020 08:05:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10245.27159; Thu, 22 Oct 2020 08:05:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVbS-0001nV-LX; Thu, 22 Oct 2020 08:05:42 +0000
Received: by outflank-mailman (input) for mailman id 10245;
 Thu, 22 Oct 2020 08:05:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FtFY=D5=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kVVbR-0001nQ-H6
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:05:42 +0000
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id adf2297a-7780-4b87-876d-204d83535e5c;
 Thu, 22 Oct 2020 08:05:39 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id v5so890840wmh.1
 for <xen-devel@lists.xenproject.org>; Thu, 22 Oct 2020 01:05:39 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id 32sm2105875wro.31.2020.10.22.01.05.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 22 Oct 2020 01:05:37 -0700 (PDT)
X-Inumbo-ID: adf2297a-7780-4b87-876d-204d83535e5c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=jo9DObZgd9Jq6ZPonfHxKOsM2RbE9S9o0V4+k1CciSY=;
        b=LpVtkQQEzDhGLYock7xQGko/zoS3bHGVeS2MwITVSQjnbfP4AMhqamrvOtAFmXYrPu
         M4rdxuuXfajVAiGf91dIAQUVAfjFyk3TE3h9FLC57hbSTRNI8HSEiW7wSnn833mLl3bT
         qYQcRr7XPtDbhyfcVfQaWMtP5UWNo6sIUEMEs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=jo9DObZgd9Jq6ZPonfHxKOsM2RbE9S9o0V4+k1CciSY=;
        b=YbHxh+PMW7j+BizprkGSAinTcdIE90hrtOcwkGoMRhS+VP3CBMWMaCKJG02Kowjzgd
         eNbd7SfBtQ/zppROG2impXsutoIxYjQmWrBK55m+w+Jq3AsAbmP6j2JeDq0RDDQ8bVqd
         YBubS+YOYGwxLQCB0BeLM00eu0i/WhRp6pTn91kk0cCXkMGJv46ngNfy9GmgIATvCAki
         9PDpm2hHmOOuPoZDYbjpRkFH7pQWtOtG9OLnw6s5h10vnXf/ogJnjuao3C+niIkaIe6r
         lz4gDUJyiJIcCRtK/SRMdUpZKPi5ATtSCgKAnMq1Hy/XhoGUv3wv9QIpg6NpwI2nVy26
         FMLg==
X-Gm-Message-State: AOAM533tgdtcOxAYVvtxZlUOdfBfQvRBInCKQgoUZbdzJ+kbxbUSVgdC
	QgnAl+O3Swy9AGbP9FX4C5G1Ow==
X-Google-Smtp-Source: ABdhPJz1mIaxROQHFnkDMtAkyfocV37SNrA+OnkarpcW450fCaDG5+KffMOrvXq4qfjJmhfMWT0ZNg==
X-Received: by 2002:a1c:2cc2:: with SMTP id s185mr1359132wms.77.1603353938320;
        Thu, 22 Oct 2020 01:05:38 -0700 (PDT)
Date: Thu, 22 Oct 2020 10:05:34 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <20201022080534.GT401619@phenom.ffwll.local>
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-11-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201020122046.31167-11-tzimmermann@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
> 
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions. Read and write operations are
> implemented
> internally by DRM's fbdev helper.
> 
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
> 
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is no longer required. The patch removes the
> respective code from both bochs and fbdev.
> 
> v5:
> 	* implement fb_read/fb_write internally (Daniel, Sam)
> v4:
> 	* move dma_buf_map changes into separate patch (Daniel)
> 	* TODO list: comment on fbdev updates (Daniel)
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Tested-by: Sam Ravnborg <sam@ravnborg.org>
> ---
>  Documentation/gpu/todo.rst        |  19 ++-
>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>  drivers/gpu/drm/drm_fb_helper.c   | 227 ++++++++++++++++++++++++++++--
>  include/drm/drm_mode_config.h     |  12 --
>  4 files changed, 230 insertions(+), 29 deletions(-)
> 
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
>  ------------------------------------------------
>  
>  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>  
>  Contact: Maintainer of the driver you plan to convert
>  
>  Level: Intermediate
>  
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
>  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
>  -----------------------------------------------------------------
>  
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>  	bochs->dev->mode_config.preferred_depth = 24;
>  	bochs->dev->mode_config.prefer_shadow = 0;
>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> -	bochs->dev->mode_config.fbdev_use_iomem = true;
>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>  
>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..1d3180841778 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>  }
>  
>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> -					  struct drm_clip_rect *clip)
> +					  struct drm_clip_rect *clip,
> +					  struct dma_buf_map *dst)
>  {
>  	struct drm_framebuffer *fb = fb_helper->fb;
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> -	for (y = clip->y1; y < clip->y2; y++) {
> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> -			memcpy(dst, src, len);
> -		else
> -			memcpy_toio((void __iomem *)dst, src, len);
> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>  
> +	for (y = clip->y1; y < clip->y2; y++) {
> +		dma_buf_map_memcpy_to(dst, src, len);
> +		dma_buf_map_incr(dst, fb->pitches[0]);
>  		src += fb->pitches[0];
> -		dst += fb->pitches[0];
>  	}
>  }
>  
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
>  			if (ret)
>  				return;
> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>  		}
> +
>  		if (helper->fb->funcs->dirty)
>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>  						 &clip_copy, 1);
> @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  		return -ENODEV;
>  }
>  
> +static bool drm_fbdev_use_iomem(struct fb_info *info)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem;
> +}
> +
> +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, 
> +				   loff_t pos)
> +{
> +	const char __iomem *src = info->screen_base + pos;

Maybe a bit of a bikeshed, but I'd write this in terms of drm objects,
like the dirty_blit function, using the dma_buf_map (instead of the
fb_info parameter). And then, instead of the screen_base and
screen_buffer suffixes, give them _mem and _iomem suffixes.

Same for the write path below. Otherwise I'm not quite sure why we do it
like this here - I don't think this code will be used outside of the
generic fbdev code, so we can always assume that drm_fb_helper->buffer
is set up.

The other thing I think we need is some minimal testcases to make sure
this keeps working. The fbtest tool used way back seems to have
disappeared; I couldn't find a copy of the source anywhere.

With all that: Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>

Cheers, Daniel

> +	size_t alloc_size = min(count, PAGE_SIZE);
> +	ssize_t ret = 0;
> +	char *tmp;
> +
> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!tmp)
> +		return -ENOMEM;
> +
> +	while (count) {
> +		size_t c = min(count, alloc_size);
> +
> +		memcpy_fromio(tmp, src, c);
> +		if (copy_to_user(buf, tmp, c)) {
> +			ret = -EFAULT;
> +			break;
> +		}
> +
> +		src += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(tmp);
> +
> +	return ret;
> +}
> +
> +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count,
> +				     loff_t pos)
> +{
> +	const char *src = info->screen_buffer + pos;
> +
> +	if (copy_to_user(buf, src, count))
> +		return -EFAULT;
> +
> +	return count;
> +}
> +
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> +				 size_t count, loff_t *ppos)
> +{
> +	loff_t pos = *ppos;
> +	size_t total_size;
> +	ssize_t ret;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	if (info->screen_size)
> +		total_size = info->screen_size;
> +	else
> +		total_size = info->fix.smem_len;
> +
> +	if (pos >= total_size)
> +		return 0;
> +	if (count >= total_size)
> +		count = total_size;
> +	if (total_size - count < pos)
> +		count = total_size - pos;
> +
> +	if (drm_fbdev_use_iomem(info))
> +		ret = fb_read_screen_base(info, buf, count, pos);
> +	else
> +		ret = fb_read_screen_buffer(info, buf, count, pos);
> +
> +	if (ret > 0)
> +		*ppos = ret;
> +
> +	return ret;
> +}
> +
> +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count,
> +				    loff_t pos)
> +{
> +	char __iomem *dst = info->screen_base + pos;
> +	size_t alloc_size = min(count, PAGE_SIZE);
> +	ssize_t ret = 0;
> +	u8 *tmp;
> +
> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!tmp)
> +		return -ENOMEM;
> +
> +	while (count) {
> +		size_t c = min(count, alloc_size);
> +
> +		if (copy_from_user(tmp, buf, c)) {
> +			ret = -EFAULT;
> +			break;
> +		}
> +		memcpy_toio(dst, tmp, c);
> +
> +		dst += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(tmp);
> +
> +	return ret;
> +}
> +
> +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count,
> +				      loff_t pos)
> +{
> +	char *dst = info->screen_buffer + pos;
> +
> +	if (copy_from_user(dst, buf, count))
> +		return -EFAULT;
> +
> +	return count;
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> +				  size_t count, loff_t *ppos)
> +{
> +	loff_t pos = *ppos;
> +	size_t total_size;
> +	ssize_t ret;
> +	int err = 0;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	if (info->screen_size)
> +		total_size = info->screen_size;
> +	else
> +		total_size = info->fix.smem_len;
> +
> +	if (pos > total_size)
> +		return -EFBIG;
> +	if (count > total_size) {
> +		err = -EFBIG;
> +		count = total_size;
> +	}
> +	if (total_size - count < pos) {
> +		if (!err)
> +			err = -ENOSPC;
> +		count = total_size - pos;
> +	}
> +
> +	/*
> +	 * Copy to framebuffer even if we already logged an error. Emulates
> +	 * the behavior of the original fbdev implementation.
> +	 */
> +	if (drm_fbdev_use_iomem(info))
> +		ret = fb_write_screen_base(info, buf, count, pos);
> +	else
> +		ret = fb_write_screen_buffer(info, buf, count, pos);
> +
> +	if (ret > 0)
> +		*ppos = ret;
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> +				  const struct fb_fillrect *rect)
> +{
> +	if (drm_fbdev_use_iomem(info))
> +		drm_fb_helper_cfb_fillrect(info, rect);
> +	else
> +		drm_fb_helper_sys_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> +				  const struct fb_copyarea *area)
> +{
> +	if (drm_fbdev_use_iomem(info))
> +		drm_fb_helper_cfb_copyarea(info, area);
> +	else
> +		drm_fb_helper_sys_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> +				   const struct fb_image *image)
> +{
> +	if (drm_fbdev_use_iomem(info))
> +		drm_fb_helper_cfb_imageblit(info, image);
> +	else
> +		drm_fb_helper_sys_imageblit(info, image);
> +}
> +
>  static const struct fb_ops drm_fbdev_fb_ops = {
>  	.owner		= THIS_MODULE,
>  	DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>  	.fb_release	= drm_fbdev_fb_release,
>  	.fb_destroy	= drm_fbdev_fb_destroy,
>  	.fb_mmap	= drm_fbdev_fb_mmap,
> -	.fb_read	= drm_fb_helper_sys_read,
> -	.fb_write	= drm_fb_helper_sys_write,
> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> +	.fb_read	= drm_fbdev_fb_read,
> +	.fb_write	= drm_fbdev_fb_write,
> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
>  };
>  
>  static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
>  	 */
>  	bool prefer_shadow_fbdev;
>  
> -	/**
> -	 * @fbdev_use_iomem:
> -	 *
> -	 * Set to true if framebuffer reside in iomem.
> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> -	 *
> -	 * FIXME: This should be replaced with a per-mapping is_iomem
> -	 * flag (like ttm does), and then used everywhere in fbdev code.
> -	 */
> -	bool fbdev_use_iomem;
> -
>  	/**
>  	 * @quirk_addfb_prefer_xbgr_30bpp:
>  	 *
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:11:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:11:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10249.27172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVgx-0002fV-DM; Thu, 22 Oct 2020 08:11:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10249.27172; Thu, 22 Oct 2020 08:11:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVgx-0002fO-A1; Thu, 22 Oct 2020 08:11:23 +0000
Received: by outflank-mailman (input) for mailman id 10249;
 Thu, 22 Oct 2020 08:11:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mO8V=D5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVVgw-0002fJ-HS
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:11:22 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2352429f-5a1f-4088-a736-96698618b774;
 Thu, 22 Oct 2020 08:11:21 +0000 (UTC)
X-Inumbo-ID: 2352429f-5a1f-4088-a736-96698618b774
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603354281;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=8mZiS1O4rn+XJrW2/0Ahdl+RjgsJk6lkb2Ch4WsgDGA=;
  b=TN+pNR2oK3YSKh5PqdTqjfNx0YVT2AcToMZDxbXkh5Xf8rnDs7aLQrQY
   JCBgvgZWyqbKv4lNO+P4F6buFRu5UAXTlKHS7QQWnpHT7/yPLYK9WEXSQ
   Yye31HfPBMjc8xcdxC0yhY0KPjiKTjl4ijlkP25YeQPuTxZJrsuDBttpU
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: pVW4pBa1EpyVhUq/aTYjkhlE+ZIid9t+92pUpor+liVcXi4DkxVt2PUkOreGWF0W6XjTA+15Al
 rmnh3Up3L/P8cdiVJQWNvwJlawEiOrfqGRYcjj7JzJgr37Hu+f5GEzHHkn7Mf2XbeAKFPChqoF
 RLcpdGBteUhgMbTLjtKfpDIGmJo+Y9i5YbrjiKKswi0dxrZVVELJb49nIHe7Dr2o8gw6DKQeI4
 HFkccQGHSt9gfIt+pYMO55P6yUWiMWUx8f6a7PX/XDXJr8w3vkdcl8jjPfJWm8fAcVu8+dm5LT
 wAY=
X-SBRS: 2.5
X-MesageID: 29879736
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29879736"
Date: Thu, 22 Oct 2020 10:11:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
Message-ID: <20201022081100.bedkkvuqf7ymjpbl@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
 <20201021154650.zz77ircyuedr7gpm@Air-de-Roger>
 <3fd4c197-617e-dd48-6781-9ff0b1a82bf8@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3fd4c197-617e-dd48-6781-9ff0b1a82bf8@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 22, 2020 at 09:33:27AM +0200, Jan Beulich wrote:
> On 21.10.2020 17:46, Roger Pau Monné wrote:
> > On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
> >> There's no global lock around the updating of this global piece of data.
> >> Make use of cmpxchgptr() to avoid two entities racing with their
> >> updates.
> >>
> >> While touching the functionality, mark xen_consumers[] read-mostly (or
> >> else the if() condition could use the result of cmpxchgptr(), writing to
> >> the slot unconditionally).
> > 
> > I'm not sure I get this, likely related to the comment I have below.
> 
> This is about the alternative case of invoking cmpxchgptr()
> without the if() around it. On x86 this would mean always
> writing the field, even if the designated value is already in
> place.
> 
> >> --- a/xen/common/event_channel.c
> >> +++ b/xen/common/event_channel.c
> >> @@ -57,7 +57,8 @@
> >>   * with a pointer, we stash them dynamically in a small lookup array which
> >>   * can be indexed by a small integer.
> >>   */
> >> -static xen_event_channel_notification_t xen_consumers[NR_XEN_CONSUMERS];
> >> +static xen_event_channel_notification_t __read_mostly
> >> +    xen_consumers[NR_XEN_CONSUMERS];
> >>  
> >>  /* Default notification action: wake up from wait_on_xen_event_channel(). */
> >>  static void default_xen_notification_fn(struct vcpu *v, unsigned int port)
> >> @@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
> >>  
> >>      for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
> >>      {
> >> +        /* Use cmpxchgptr() in lieu of a global lock. */
> >>          if ( xen_consumers[i] == NULL )
> >> -            xen_consumers[i] = fn;
> >> +            cmpxchgptr(&xen_consumers[i], NULL, fn);
> >>          if ( xen_consumers[i] == fn )
> >>              break;
> > 
> > I think you could join it as:
> > 
> > if ( !xen_consumers[i] &&
> >      !cmpxchgptr(&xen_consumers[i], NULL, fn) )
> >     break;
> > 
> > As cmpxchgptr will return the previous value of &xen_consumers[i]?
> 
> But then you also have to check whether the returned value is
> fn (or retain the 2nd if()).

The __cmpxchg comment says that success of the operation is indicated
when the returned value equals the old value, so it's my understanding
that cmpxchgptr returning NULL would mean the exchange has succeeded
and that xen_consumers[i] == fn?

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:13:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:13:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10254.27184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVjP-0002ur-Vh; Thu, 22 Oct 2020 08:13:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10254.27184; Thu, 22 Oct 2020 08:13:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVjP-0002uk-Ro; Thu, 22 Oct 2020 08:13:55 +0000
Received: by outflank-mailman (input) for mailman id 10254;
 Thu, 22 Oct 2020 08:13:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVVjO-0002uf-08
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:13:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 821790a2-601f-4033-bda9-e18cc0bd9a75;
 Thu, 22 Oct 2020 08:13:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 63D89AEB6;
 Thu, 22 Oct 2020 08:13:52 +0000 (UTC)
X-Inumbo-ID: 821790a2-601f-4033-bda9-e18cc0bd9a75
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603354432;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NiGj3diI1wYoQKy25zHuk6TRVDkf/COt9qFsW9GLczM=;
	b=sK7JfJlQeZk06be7dZ+pPIYgX/OsXSb0HGzwIfwvVZ2Qwm1TC4hv5/o/nTqxImEI8Hw6cI
	YHuSA207LuPRJLAZEPJh4FMyYGWKQLgOuj71TIgHM0jX0cru8PrNjZ8CfITB0oLV06oM44
	aGiH9t6AdB59BM8znqEKnV7nrIxd2bc=
Subject: Re: XSM and the idle domain
To: "Daniel P. Smith" <dpsmith@apertussolutions.com>
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
 <2dbee673-036a-077e-6cb4-556aac46ac33@apertussolutions.com>
Cc: Hongyan Xia <hx242@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 andrew.cooper3@citrix.com, jandryuk@gmail.com, dgdegra@tycho.nsa.gov
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <09aad1f6-b9bd-1ba4-5e08-198ab2815a5b@suse.com>
Date: Thu, 22 Oct 2020 10:13:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <2dbee673-036a-077e-6cb4-556aac46ac33@apertussolutions.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 03:23, Daniel P. Smith wrote:
> On 10/21/20 10:34 AM, Hongyan Xia wrote:
>> Also, should the idle domain be restricted? IMO the idle domain is Xen
>> itself which mostly bootstraps the system and performs limited work
>> when switched to, and is not something a user (either dom0 or domU)
>> directly interacts with. I doubt XSM was designed to include the idle
>> domain (although there is an ID allocated for it in the code), so I
>> would say just exclude idle in all security policy checks.
> 
> The idle domain is a limited, internal construct within the hypervisor 
> and should be constrained as part of the hypervisor, which is why its 
> domain id gets labeled with the same label as the hypervisor. For this 
> reason I would wholeheartedly disagree with exempting the idle domain id 
> from XSM hooks as that would effectively be saying the core hypervisor 
> should not be constrained. The purpose of the XSM hooks is to control 
> the flow of information in the system in a non-bypassable way. Codifying 
> bypasses completely subverts the security model behind XSM, on which 
> the flask security server depends.

While what you say may in general make sense, I have two questions:
1) When the idle domain is purely an internal construct of Xen, why
   does it need limiting in any way? In fact, if you restrict it in a
   bad way, aren't you risking preventing the system from functioning
   correctly?
2) LU is merely restoring the prior state of the system. This prior
   state was reached with security auditing as per the system's
   policy at the time. Why should there be anything denied in the
   process of re-establishing this same state? IOW can't XSM checking
   be globally disabled until the system is ready to be run normally
   again?
Please forgive me if this sounds like rubbish to you - I may not have a
good enough understanding of the abstract constraints involved here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:15:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:15:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10257.27195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVlD-000338-BH; Thu, 22 Oct 2020 08:15:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10257.27195; Thu, 22 Oct 2020 08:15:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVlD-000331-86; Thu, 22 Oct 2020 08:15:47 +0000
Received: by outflank-mailman (input) for mailman id 10257;
 Thu, 22 Oct 2020 08:15:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVVlB-00032u-Kn
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:15:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d149b65c-7f54-416a-82a5-28be2db73798;
 Thu, 22 Oct 2020 08:15:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E313DAFB2;
 Thu, 22 Oct 2020 08:15:43 +0000 (UTC)
X-Inumbo-ID: d149b65c-7f54-416a-82a5-28be2db73798
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603354544;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mdRP+lPDUxm3vony6awcQ1O4nfPcpohHENFL/HUfsQo=;
	b=UrBtSx3DwljbXMNXKpJk/zx7mFtkoiyQy8/aZO6Tsg1aApW+BXDwMgv9g2Hxkj9nKW8NNt
	5l5ezAsvaJTnr83/K96XAwSsL9vGTCXTwpjkpLHrIf9PRWBUJJu45Ir4/0agQo8p8M/OqS
	zTQOPkhEaB8qL4SeinkAHaGziCNV0Bg=
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
 <20201021154650.zz77ircyuedr7gpm@Air-de-Roger>
 <3fd4c197-617e-dd48-6781-9ff0b1a82bf8@suse.com>
 <20201022081100.bedkkvuqf7ymjpbl@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2172763f-9f3d-588e-b4f2-0f9187a40ff9@suse.com>
Date: Thu, 22 Oct 2020 10:15:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022081100.bedkkvuqf7ymjpbl@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.10.2020 10:11, Roger Pau Monné wrote:
> On Thu, Oct 22, 2020 at 09:33:27AM +0200, Jan Beulich wrote:
>> On 21.10.2020 17:46, Roger Pau Monné wrote:
>>> On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
>>>> @@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
>>>>  
>>>>      for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
>>>>      {
>>>> +        /* Use cmpxchgptr() in lieu of a global lock. */
>>>>          if ( xen_consumers[i] == NULL )
>>>> -            xen_consumers[i] = fn;
>>>> +            cmpxchgptr(&xen_consumers[i], NULL, fn);
>>>>          if ( xen_consumers[i] == fn )
>>>>              break;
>>>
>>> I think you could join it as:
>>>
>>> if ( !xen_consumers[i] &&
>>>      !cmpxchgptr(&xen_consumers[i], NULL, fn) )
>>>     break;
>>>
>>> As cmpxchgptr will return the previous value of &xen_consumers[i]?
>>
>> But then you also have to check whether the returned value is
>> fn (or retain the 2nd if()).
> 
> __cmpxchg comment says that success of the operation is indicated when
> the returned value equals the old value, so it's my understanding that
> cmpxchgptr returning NULL would mean the exchange has succeeded and that
> xen_consumers[i] == fn?

Correct. But if xen_consumers[i] == fn before the call, the return
value will be fn. The cmpxchg() wasn't "successful" in this case
(it didn't update anything), but the state of the array slot is what
we want.
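The shape being discussed can be sketched in plain C. This is an illustrative analogue only: GCC's __sync_val_compare_and_swap stands in for Xen's cmpxchgptr(), and all names here are made up, not Xen's actual code.

```c
#include <stddef.h>

#define NSLOTS 8

typedef void (*consumer_fn)(unsigned int);

static consumer_fn slots[NSLOTS];

static void demo_consumer_a(unsigned int port) { (void)port; }
static void demo_consumer_b(unsigned int port) { (void)port; }

/*
 * Claim a free slot for fn, or find the slot that already holds fn.
 * The cmpxchg replaces a global lock; the second check is what makes
 * a "failed" cmpxchg (slot already containing fn) count as success.
 */
static int claim_slot(consumer_fn fn)
{
    for (size_t i = 0; i < NSLOTS; i++) {
        if (slots[i] == NULL)
            __sync_val_compare_and_swap(&slots[i], (consumer_fn)NULL, fn);
        if (slots[i] == fn)
            return (int)i;
    }
    return -1; /* table full */
}
```

Note that the second check, not the cmpxchg's return value, decides success: a slot that already held fn makes the cmpxchg a no-op, yet the state of the slot is still the one wanted.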

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:28:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:28:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10260.27207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVxG-00046e-G8; Thu, 22 Oct 2020 08:28:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10260.27207; Thu, 22 Oct 2020 08:28:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVxG-00046X-Cg; Thu, 22 Oct 2020 08:28:14 +0000
Received: by outflank-mailman (input) for mailman id 10260;
 Thu, 22 Oct 2020 08:28:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVVxF-00046S-GH
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:28:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f34b6a47-71c4-4988-a5b4-5c42ab083134;
 Thu, 22 Oct 2020 08:28:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7C990AFAA;
 Thu, 22 Oct 2020 08:28:10 +0000 (UTC)
X-Inumbo-ID: f34b6a47-71c4-4988-a5b4-5c42ab083134
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603355290;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=G7vgWOyisgUYKXsvWM4vAN9Hxh9Ew9zQrWjKc2jf7cc=;
	b=TRO72CW2ZP+QLGGw7AR30kx3jSRdErch7W5/LxreppU58xFvOcYkzIBllkAjNHYU8lEL0x
	m16npEtf/ShX1SDZ+TjWZpb+l+GKeN+mmHTMnhnIvZmyEphEy1zo/U/Gw29t4XaPm9bvQQ
	PHqrwRlQ3Z91UP9I6vAwfRGkAPdjduw=
Subject: Re: [PATCH 3/5] xen/events: only register debug interrupt for 2-level
 events
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>
References: <20201022074214.21693-1-jgross@suse.com>
 <20201022074214.21693-4-jgross@suse.com>
 <9bfc266f-1efb-7910-6ff7-9cea6e40d7c9@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <1a549cfd-0a79-725d-d4a4-795a57092307@suse.com>
Date: Thu, 22 Oct 2020 10:28:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <9bfc266f-1efb-7910-6ff7-9cea6e40d7c9@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.20 09:54, Jan Beulich wrote:
> On 22.10.2020 09:42, Juergen Gross wrote:
>> --- a/drivers/xen/events/events_base.c
>> +++ b/drivers/xen/events/events_base.c
>> @@ -2050,7 +2050,7 @@ void xen_setup_callback_vector(void) {}
>>   static inline void xen_alloc_callback_vector(void) {}
>>   #endif
>>   
>> -static bool fifo_events = true;
>> +bool fifo_events = true;
> 
> When making this non-static, perhaps better to also prefix it with
> xen_?

Fine with me.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:28:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:28:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10261.27220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVxp-0004Bu-PX; Thu, 22 Oct 2020 08:28:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10261.27220; Thu, 22 Oct 2020 08:28:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVxp-0004Bn-MT; Thu, 22 Oct 2020 08:28:49 +0000
Received: by outflank-mailman (input) for mailman id 10261;
 Thu, 22 Oct 2020 08:28:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVVxo-0004Bf-50
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:28:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1ba4936-d556-4d11-b86f-072ff46a46ff;
 Thu, 22 Oct 2020 08:28:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 531E4AFA8;
 Thu, 22 Oct 2020 08:28:46 +0000 (UTC)
X-Inumbo-ID: f1ba4936-d556-4d11-b86f-072ff46a46ff
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603355326;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+ak4XkyQcVmIRaA1l0Rz4F8yZmad7xR9rPkb2Zs/I4w=;
	b=bXrR6ECZL37VdSqI0vbVU00hc8qxobp23r7xk9scCpOzO847J+xYSponAUnuU1AAaIy92H
	dWF/+ipteKPoLAHbzcWOadipU+15mcv73HClKvqBGugMCUoNjgWhxr2WMrxem55T6urPTr
	a95IHGfvdlCLfbV6wKdta6PHMmDYrAw=
Subject: Re: [PATCH 4/5] xen/events: unmask a fifo event channel only if it
 was masked
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201022074214.21693-1-jgross@suse.com>
 <20201022074214.21693-5-jgross@suse.com>
 <e6dcce7e-acfb-0ca1-8ff1-e303932bc3c5@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <72cb09c0-4fc7-f02e-c1fb-314e5add381f@suse.com>
Date: Thu, 22 Oct 2020 10:28:45 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <e6dcce7e-acfb-0ca1-8ff1-e303932bc3c5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.20 09:55, Jan Beulich wrote:
> On 22.10.2020 09:42, Juergen Gross wrote:
>> --- a/drivers/xen/events/events_fifo.c
>> +++ b/drivers/xen/events/events_fifo.c
>> @@ -236,6 +236,9 @@ static bool clear_masked_cond(volatile event_word_t *word)
>>   
>>   	w = *word;
>>   
>> +	if (!(w & (1 << EVTCHN_FIFO_MASKED)))
>> +		return true;
> 
> Maybe better move this ...
> 
>>   	do {
>>   		if (w & (1 << EVTCHN_FIFO_PENDING))
>>   			return false;
>>
> 
> ... into the loop, above this check?

Yes, that should be better.
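For illustration, the suggested shape would look roughly like this. This is a simplified, hypothetical sketch: GCC's __sync_val_compare_and_swap stands in for the kernel's sync_cmpxchg, the real driver's BUSY-bit handling is omitted, and the bit positions are taken from the FIFO event channel ABI.

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit positions from the Xen FIFO event channel ABI. */
#define EVTCHN_FIFO_PENDING 31
#define EVTCHN_FIFO_MASKED  30

typedef uint32_t event_word_t;

/*
 * Clear MASKED unless PENDING is set; returns true if the channel is
 * (or ends up) unmasked with no pending event. With the MASKED test
 * inside the loop, a concurrent unmask observed after a failed
 * cmpxchg is handled on the retry as well, not just on entry.
 */
static bool clear_masked_cond(volatile event_word_t *word)
{
    event_word_t new_w, old, w = *word;

    do {
        if (!(w & (1U << EVTCHN_FIFO_MASKED)))
            return true;
        if (w & (1U << EVTCHN_FIFO_PENDING))
            return false;

        old = w;
        new_w = old & ~(1U << EVTCHN_FIFO_MASKED);
        w = __sync_val_compare_and_swap(word, old, new_w);
    } while (w != old);

    return true;
}
```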


Juergen



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:29:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:29:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10264.27232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVyn-0004KH-4C; Thu, 22 Oct 2020 08:29:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10264.27232; Thu, 22 Oct 2020 08:29:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVVyn-0004K9-0P; Thu, 22 Oct 2020 08:29:49 +0000
Received: by outflank-mailman (input) for mailman id 10264;
 Thu, 22 Oct 2020 08:29:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mO8V=D5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVVyl-0004Jz-RG
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:29:47 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fcdcffb-567b-4753-8cfb-cab147d634d6;
 Thu, 22 Oct 2020 08:29:46 +0000 (UTC)
X-Inumbo-ID: 4fcdcffb-567b-4753-8cfb-cab147d634d6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603355386;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=bsq6SYNvGXv/qZ1zn6xSGKXNAGCME+xYpuJt65efJN4=;
  b=hrJxJG6h7TFMKmKRtfFw+93cjfxgzvvSNfBnKqjq0UdnQU4atK7zARle
   PKjWGx9fO5GHGjfljAMVXEk3TrjimX5r51tVOR0P1ZlP8YAOPPLyNZzd5
   PnZ2fO8CXABJ3z1bhdv2tsvA++hYzYnpw132MDgDDDWeLLcCFMN1CnBqU
   4=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: xGSNH5C6BRcsV81PAB1fKrRqZZ+ysFNXCIFh/7B3EOXOWBIdXJ1uikSNSEIkP1lhnCOZPhklbV
 jglPQZ7WbI2soeEKGwqvTuOHU9+vsi2M9c1LNE8gJSUk6QChbhN/af9CXm8Bqe4e+Q4kQfFcKj
 iSqt4OxEB48qZDLrunRAHiWUsGxPV+fxlJl98vyJB/5YL4cC+jKTyU3ofMTKexQovZMwZRVI2D
 sTwAo8XYX3poxfPEBaCZDP4W29KIfdXBxdL07onltW3P+65X9g4Hs0x9ApPQY7nhqZO8vv9fk/
 ZdQ=
X-SBRS: 2.5
X-MesageID: 29603125
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29603125"
Date: Thu, 22 Oct 2020 10:29:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
Message-ID: <20201022082938.jnpw7wvzuvxqa6iy@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
 <20201021154650.zz77ircyuedr7gpm@Air-de-Roger>
 <3fd4c197-617e-dd48-6781-9ff0b1a82bf8@suse.com>
 <20201022081100.bedkkvuqf7ymjpbl@Air-de-Roger>
 <2172763f-9f3d-588e-b4f2-0f9187a40ff9@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2172763f-9f3d-588e-b4f2-0f9187a40ff9@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 22, 2020 at 10:15:45AM +0200, Jan Beulich wrote:
> On 22.10.2020 10:11, Roger Pau Monné wrote:
> > On Thu, Oct 22, 2020 at 09:33:27AM +0200, Jan Beulich wrote:
> >> On 21.10.2020 17:46, Roger Pau Monné wrote:
> >>> On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
> >>>> @@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
> >>>>  
> >>>>      for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
> >>>>      {
> >>>> +        /* Use cmpxchgptr() in lieu of a global lock. */
> >>>>          if ( xen_consumers[i] == NULL )
> >>>> -            xen_consumers[i] = fn;
> >>>> +            cmpxchgptr(&xen_consumers[i], NULL, fn);
> >>>>          if ( xen_consumers[i] == fn )
> >>>>              break;
> >>>
> >>> I think you could join it as:
> >>>
> >>> if ( !xen_consumers[i] &&
> >>>      !cmpxchgptr(&xen_consumers[i], NULL, fn) )
> >>>     break;
> >>>
> >>> As cmpxchgptr will return the previous value of &xen_consumers[i]?
> >>
> >> But then you also have to check whether the returned value is
> >> fn (or retain the 2nd if()).
> > 
> > __cmpxchg comment says that success of the operation is indicated when
> > the returned value equals the old value, so it's my understanding that
> > cmpxchgptr returning NULL would mean the exchange has succeeded and that
> > xen_consumers[i] == fn?
> 
> Correct. But if xen_consumers[i] == fn before the call, the return
> value will be fn. The cmpxchg() wasn't "successful" in this case
> (it didn't update anything), but the state of the array slot is what
> we want.

Oh, I get it now. You don't want the same fn populating more than one
slot.

I assume the reads of xen_consumers are not using ACCESS_ONCE or
read_atomic because we rely on the compiler performing such reads as
single instructions?
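For context, the guarantee in question is what a READ_ONCE/ACCESS_ONCE-style helper would make explicit. A generic sketch of such a helper (not Xen's actual definition):

```c
#include <stdint.h>

/*
 * Read through a volatile lvalue so the compiler must emit exactly
 * one load and may not tear, re-read, or cache the value. Without
 * it, a single load for a naturally aligned word is merely the
 * *likely* compiler behaviour being relied upon.
 */
#define READ_ONCE_U32(x) (*(const volatile uint32_t *)&(x))

static uint32_t read_slot(const uint32_t *table, unsigned int i)
{
    return READ_ONCE_U32(table[i]);
}
```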

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:32:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:32:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10269.27244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVW1m-0005AD-Jt; Thu, 22 Oct 2020 08:32:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10269.27244; Thu, 22 Oct 2020 08:32:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVW1m-0005A6-G0; Thu, 22 Oct 2020 08:32:54 +0000
Received: by outflank-mailman (input) for mailman id 10269;
 Thu, 22 Oct 2020 08:32:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GjE6=D5=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVW1l-0005A1-5o
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:32:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1061199c-ff37-4d92-8866-0c8d08f4e86b;
 Thu, 22 Oct 2020 08:32:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVW1h-0002sq-Q3; Thu, 22 Oct 2020 08:32:49 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVW1h-0005Ik-G4; Thu, 22 Oct 2020 08:32:49 +0000
X-Inumbo-ID: 1061199c-ff37-4d92-8866-0c8d08f4e86b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Ju9JlLqIbF4Xjt9bUyXE8qrNmg37IAJ//vMAuq74Q0c=; b=DG9/LHWNFpl5sMA492PwxjWn99
	gR1/4P8YeSmOyG8UtRej110Zy3KKM+A09UHfonfU2puhTw/HoU/X/i6znYtb4CQBtQTT8sJVim4g9
	7DQOfbrTWhq9Ic4WeZ6EmuOH+zqbNQhzHHR7BKr3rUT/tqg00O3DmPiv/z1+jPXysS6g=;
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich
 <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
Date: Thu, 22 Oct 2020 09:32:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 21/10/2020 12:25, Rahul Singh wrote:
> Hello Julien,

Hi Rahul,

>> On 20 Oct 2020, at 6:03 pm, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Rahul,
>>
>> Thank you for the contribution. Let's make sure this attempt at SMMUv3 support in Xen will be more successful than the other one :).
> 
> Yes sure.
>>
>> I haven't reviewed the code yet, but I wanted to provide feedback on the commit message.
>>
>> On 20/10/2020 16:25, Rahul Singh wrote:
>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>> the Linux SMMUv3 driver.
>>> Major differences from the Linux driver are as follows:
>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>     that supports both Stage-1 and Stage-2 translations.
>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>     capability to share the page tables with the CPU.
>>> 3. Tasklets are used in place of Linux's threaded IRQs for event queue
>>>     and priority queue IRQ handling.
>>
>> Tasklets are not a replacement for threaded IRQs. In particular, they will have priority over anything else (IOW nothing will run on the pCPU until they are done).
>>
>> Do you know why Linux is using threads? Is it because of long-running operations?
> 
> Yes, you are right: Linux is using threaded IRQs because of long-running operations.
> 
> SMMUv3 reports faults/events via memory-based circular buffer queues rather than via registers. As per my understanding, it is time-consuming to process the memory-based queues in interrupt context, which is why Linux uses threaded IRQs to process the faults/events from the SMMU.
> 
> I didn’t find any solution in Xen other than tasklets to defer the work; that’s why I used tasklets in Xen as a replacement for threaded IRQs. If we do all the work in interrupt context we will make Xen less responsive.

So we need to make sure that Xen continues to receive interrupts, but we 
also need to make sure that a vCPU bound to the pCPU remains responsive.

> 
> If you know of another solution in Xen that can be used to defer the work out of the interrupt handler, please let me know and I will try to use that.

One of my colleagues encountered a similar problem recently. He had 
a long-running tasklet and wanted it broken down into smaller chunks.

We decided to use a timer to reschedule the tasklet in the future. This 
allows the scheduler to run other loads (e.g. vCPUs) for some time.

This is pretty hackish, but I couldn't find a better solution as tasklets 
have high priority.
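The approach can be sketched generically, with the Xen specifics stripped out (hypothetical names; in Xen the "more work" result would arm a timer whose handler calls tasklet_schedule() again):

```c
#include <stdbool.h>
#include <stddef.h>

#define QSIZE 256
#define CHUNK  16 /* max items handled per tasklet invocation */

struct workqueue {
    int items[QSIZE];
    size_t head, tail; /* head == tail means empty */
};

static int handled; /* counts processed items for the demo */

static void handle_item(int v) { (void)v; handled++; }

/*
 * One "tasklet" pass: drain at most CHUNK items, then report whether
 * another pass is needed. In Xen, a true return would arm a timer
 * whose handler re-schedules the tasklet, letting vCPUs run between
 * passes instead of the tasklet monopolising the pCPU.
 */
static bool process_chunk(struct workqueue *q, void (*handle)(int))
{
    size_t done = 0;

    while (q->head != q->tail && done < CHUNK) {
        handle(q->items[q->head]);
        q->head = (q->head + 1) % QSIZE;
        done++;
    }
    return q->head != q->tail; /* true => more work, reschedule */
}
```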

Maybe the others will have a better idea.

> 
>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>     access functions based on atomic operations implemented in Linux.
>>
>> Can you provide more details?
> 
> I tried to port the latest version of the SMMUv3 code, then I observed that in order to port that code I would also have to port the atomic operations implemented in Linux to Xen, as the latest Linux code uses atomic operations to process the command queues (atomic_cond_read_relaxed(), atomic_long_cond_read_relaxed(), atomic_fetch_andnot_relaxed()).

Thank you for the explanation. I think it would be best to import the 
atomic helpers and use the latest code.

This will ensure that we don't re-introduce bugs and also buy us some 
time before the Linux and Xen drivers diverge again too much.
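For reference, the kind of helper under discussion: Linux's atomic_cond_read_relaxed() spins until a condition on the loaded value holds. A C11-atomics sketch of the idea (illustrative only; the Linux version also inserts cpu_relax()/WFE in the wait loop, and Xen would want arch-specific equivalents):

```c
#include <stdatomic.h>
#include <stdint.h>

/*
 * Spin with relaxed loads until cond_expr (written in terms of VAL,
 * the last value read) holds, then return that value. Mirrors the
 * shape of Linux's atomic_cond_read_relaxed(); a real port would add
 * a pause/WFE hint inside the loop body.
 */
#define cond_read_relaxed(ptr, cond_expr)                              \
    ({                                                                 \
        uint32_t VAL;                                                  \
        do {                                                           \
            VAL = atomic_load_explicit((ptr), memory_order_relaxed);   \
        } while (!(cond_expr));                                        \
        VAL;                                                           \
    })
```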

Stefano, what do you think?

> 
>>
>>>     The atomic functions used by the command queue access functions are
>>>     not implemented in Xen; therefore we decided to port the earlier
>>>     version of the code. Once the proper atomic operations are available
>>>     in Xen, the driver can be updated.
>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>> ---
>>>   xen/drivers/passthrough/Kconfig       |   10 +
>>>   xen/drivers/passthrough/arm/Makefile  |    1 +
>>>   xen/drivers/passthrough/arm/smmu-v3.c | 2847 +++++++++++++++++++++++++
>>>   3 files changed, 2858 insertions(+)
>>
>> This is quite a significant patch to review. Is there any way to get it split (maybe a verbatim Linux copy + Xen modifications)?
> 
> Yes, I understand this is quite a significant patch to review; let me think about how to get it split. If it is OK for you to review this patch and provide your comments, that would be great for us.

I will try to have a look next week.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:38:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10272.27256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVW6m-0005Sn-6y; Thu, 22 Oct 2020 08:38:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10272.27256; Thu, 22 Oct 2020 08:38:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVW6m-0005Sg-3t; Thu, 22 Oct 2020 08:38:04 +0000
Received: by outflank-mailman (input) for mailman id 10272;
 Thu, 22 Oct 2020 08:38:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GzMw=D5=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kVW6j-0005Sb-RW
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:38:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 870c99d4-9ca0-47e3-8297-72da3ee97f1f;
 Thu, 22 Oct 2020 08:38:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0C486B00D;
 Thu, 22 Oct 2020 08:37:59 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GzMw=D5=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kVW6j-0005Sb-RW
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:38:01 +0000
X-Inumbo-ID: 870c99d4-9ca0-47e3-8297-72da3ee97f1f
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 870c99d4-9ca0-47e3-8297-72da3ee97f1f;
	Thu, 22 Oct 2020 08:38:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0C486B00D;
	Thu, 22 Oct 2020 08:37:59 +0000 (UTC)
To: Daniel Vetter <daniel@ffwll.ch>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
 sam@ravnborg.org, alexander.deucher@amd.com, christian.koenig@amd.com,
 kraxel@redhat.com, l.stach@pengutronix.de, linux+etnaviv@armlinux.org.uk,
 christian.gmeiner@gmail.com, inki.dae@samsung.com, jy0922.shim@samsung.com,
 sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
 krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com, robh@kernel.org,
 tomeu.vizoso@collabora.com, steven.price@arm.com,
 alyssa.rosenzweig@collabora.com, hjc@rock-chips.com, heiko@sntech.de,
 hdegoede@redhat.com, sean@poorly.run, eric@anholt.net,
 oleksandr_andrushchenko@epam.com, ray.huang@amd.com,
 sumit.semwal@linaro.org, emil.velikov@collabora.com, luben.tuikov@amd.com,
 apaneers@amd.com, linus.walleij@linaro.org, melissa.srw@gmail.com,
 chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
 dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
 virtualization@lists.linux-foundation.org, etnaviv@lists.freedesktop.org,
 linux-arm-kernel@lists.infradead.org, linux-samsung-soc@vger.kernel.org,
 lima@lists.freedesktop.org, nouveau@lists.freedesktop.org,
 spice-devel@lists.freedesktop.org, linux-rockchip@lists.infradead.org,
 xen-devel@lists.xenproject.org, linux-media@vger.kernel.org,
 linaro-mm-sig@lists.linaro.org
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-11-tzimmermann@suse.de>
 <20201022080534.GT401619@phenom.ffwll.local>
From: Thomas Zimmermann <tzimmermann@suse.de>
Subject: Re: [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <794e6ab4-041b-55f9-e95e-55ef0526edd5@suse.de>
Date: Thu, 22 Oct 2020 10:37:56 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201022080534.GT401619@phenom.ffwll.local>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi

On 22.10.20 10:05, Daniel Vetter wrote:
> On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote:
>> At least sparc64 requires I/O-specific access to framebuffers. This
>> patch updates the fbdev console accordingly.
>>
>> For drivers with direct access to the framebuffer memory, the callback
>> functions in struct fb_ops test for the type of memory and call the rsp
>> fb_sys_ or fb_cfb_ functions. Read and write operations are implemented
>> internally by DRM's fbdev helper.
>>
>> For drivers that employ a shadow buffer, fbdev's blit function retrieves
>> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
>> interfaces to access the buffer.
>>
>> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
>> I/O memory and avoid a HW exception. With the introduction of struct
>> dma_buf_map, this is not required any longer. The patch removes the rsp
>> code from both, bochs and fbdev.
>>
>> v5:
>> 	* implement fb_read/fb_write internally (Daniel, Sam)
>> v4:
>> 	* move dma_buf_map changes into separate patch (Daniel)
>> 	* TODO list: comment on fbdev updates (Daniel)
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> Tested-by: Sam Ravnborg <sam@ravnborg.org>
>> ---
>>  Documentation/gpu/todo.rst        |  19 ++-
>>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>>  drivers/gpu/drm/drm_fb_helper.c   | 227 ++++++++++++++++++++++++++++--
>>  include/drm/drm_mode_config.h     |  12 --
>>  4 files changed, 230 insertions(+), 29 deletions(-)
>>
>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>> index 7e6fc3c04add..638b7f704339 100644
>> --- a/Documentation/gpu/todo.rst
>> +++ b/Documentation/gpu/todo.rst
>> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
>>  ------------------------------------------------
>>  
>>  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
>> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
>> -expects the framebuffer in system memory (or system-like memory).
>> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
>> +expected the framebuffer in system memory or system-like memory. By employing
>> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
>> +as well.
>>  
>>  Contact: Maintainer of the driver you plan to convert
>>  
>>  Level: Intermediate
>>  
>> +Reimplement functions in drm_fbdev_fb_ops without fbdev
>> +-------------------------------------------------------
>> +
>> +A number of callback functions in drm_fbdev_fb_ops could benefit from
>> +being rewritten without dependencies on the fbdev module. Some of the
>> +helpers could further benefit from using struct dma_buf_map instead of
>> +raw pointers.
>> +
>> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
>> +
>> +Level: Advanced
>> +
>> +
>>  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
>>  -----------------------------------------------------------------
>>  
>> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
>> index 13d0d04c4457..853081d186d5 100644
>> --- a/drivers/gpu/drm/bochs/bochs_kms.c
>> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
>> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>>  	bochs->dev->mode_config.preferred_depth = 24;
>>  	bochs->dev->mode_config.prefer_shadow = 0;
>>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
>> -	bochs->dev->mode_config.fbdev_use_iomem = true;
>>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>>  
>>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
>> index 6212cd7cde1d..1d3180841778 100644
>> --- a/drivers/gpu/drm/drm_fb_helper.c
>> +++ b/drivers/gpu/drm/drm_fb_helper.c
>> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>>  }
>>  
>>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>> -					  struct drm_clip_rect *clip)
>> +					  struct drm_clip_rect *clip,
>> +					  struct dma_buf_map *dst)
>>  {
>>  	struct drm_framebuffer *fb = fb_helper->fb;
>>  	unsigned int cpp = fb->format->cpp[0];
>>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>>  	void *src = fb_helper->fbdev->screen_buffer + offset;
>> -	void *dst = fb_helper->buffer->map.vaddr + offset;
>>  	size_t len = (clip->x2 - clip->x1) * cpp;
>>  	unsigned int y;
>>  
>> -	for (y = clip->y1; y < clip->y2; y++) {
>> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
>> -			memcpy(dst, src, len);
>> -		else
>> -			memcpy_toio((void __iomem *)dst, src, len);
>> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>>  
>> +	for (y = clip->y1; y < clip->y2; y++) {
>> +		dma_buf_map_memcpy_to(dst, src, len);
>> +		dma_buf_map_incr(dst, fb->pitches[0]);
>>  		src += fb->pitches[0];
>> -		dst += fb->pitches[0];
>>  	}
>>  }
>>  
>> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
>>  			if (ret)
>>  				return;
>> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>>  		}
>> +
>>  		if (helper->fb->funcs->dirty)
>>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>>  						 &clip_copy, 1);
>> @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>>  		return -ENODEV;
>>  }
>>  
>> +static bool drm_fbdev_use_iomem(struct fb_info *info)
>> +{
>> +	struct drm_fb_helper *fb_helper = info->par;
>> +	struct drm_client_buffer *buffer = fb_helper->buffer;
>> +
>> +	return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem;
>> +}
>> +
>> +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, 
>> +				   loff_t pos)
>> +{
>> +	const char __iomem *src = info->screen_base + pos;
> 
> Maybe a bit much a bikeshed, but I'd write this in terms of drm objects,
> like the dirty_blit function, using the dma_buf_map (instead of the
> fb_info parameter). And then instead of
> screen_base and screen_buffer suffixes give them _mem and _iomem suffixes.

Screen_buffer can be a shadow buffer. Until the blit worker (see
drm_fb_helper_dirty_work()) completes, it might be more up to date than
the real buffer that's stored in the client.

The original fbdev code supported an fb_sync callback to synchronize with
outstanding screen updates (e.g., HW blit ops), but fb_sync is just
overhead here. Copying from screen_buffer or screen_base always returns
the most up-to-date image.

> 
> Same for write below. Or I'm not quite understanding why we do it like
> this here - I don't think this code will be used outside of the generic
> fbdev code, so we can always assume that drm_fb_helper->buffer is set up.

It's similar to the read case. If we write to the client's buffer, an
outstanding blit worker could write the now-outdated shadow buffer over
the user's newly written framebuffer data.

Thinking about it, we might want to schedule the blit worker at the end
of each fb_write, so that the data makes it into the HW buffer in time.

> 
> The other thing I think we need is some minimal testcases to make sure.
> The fbtest tool used way back seems to have disappeared, I couldn't find
> a copy of the source anywhere anymore.

As discussed on IRC, I'll add some testcases to the igt tests. I'll share
the link here when done.

Best regards
Thomas

> 
> With all that: Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> 
> Cheers, Daniel
> 
>> +	size_t alloc_size = min(count, PAGE_SIZE);
>> +	ssize_t ret = 0;
>> +	char *tmp;
>> +
>> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
>> +	if (!tmp)
>> +		return -ENOMEM;
>> +
>> +	while (count) {
>> +		size_t c = min(count, alloc_size);
>> +
>> +		memcpy_fromio(tmp, src, c);
>> +		if (copy_to_user(buf, tmp, c)) {
>> +			ret = -EFAULT;
>> +			break;
>> +		}
>> +
>> +		src += c;
>> +		buf += c;
>> +		ret += c;
>> +		count -= c;
>> +	}
>> +
>> +	kfree(tmp);
>> +
>> +	return ret;
>> +}
>> +
>> +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count,
>> +				     loff_t pos)
>> +{
>> +	const char *src = info->screen_buffer + pos;
>> +
>> +	if (copy_to_user(buf, src, count))
>> +		return -EFAULT;
>> +
>> +	return count;
>> +}
>> +
>> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
>> +				 size_t count, loff_t *ppos)
>> +{
>> +	loff_t pos = *ppos;
>> +	size_t total_size;
>> +	ssize_t ret;
>> +
>> +	if (info->state != FBINFO_STATE_RUNNING)
>> +		return -EPERM;
>> +
>> +	if (info->screen_size)
>> +		total_size = info->screen_size;
>> +	else
>> +		total_size = info->fix.smem_len;
>> +
>> +	if (pos >= total_size)
>> +		return 0;
>> +	if (count >= total_size)
>> +		count = total_size;
>> +	if (total_size - count < pos)
>> +		count = total_size - pos;
>> +
>> +	if (drm_fbdev_use_iomem(info))
>> +		ret = fb_read_screen_base(info, buf, count, pos);
>> +	else
>> +		ret = fb_read_screen_buffer(info, buf, count, pos);
>> +
>> +	if (ret > 0)
>> +		*ppos = ret;
>> +
>> +	return ret;
>> +}
>> +
>> +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count,
>> +				    loff_t pos)
>> +{
>> +	char __iomem *dst = info->screen_base + pos;
>> +	size_t alloc_size = min(count, PAGE_SIZE);
>> +	ssize_t ret = 0;
>> +	u8 *tmp;
>> +
>> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
>> +	if (!tmp)
>> +		return -ENOMEM;
>> +
>> +	while (count) {
>> +		size_t c = min(count, alloc_size);
>> +
>> +		if (copy_from_user(tmp, buf, c)) {
>> +			ret = -EFAULT;
>> +			break;
>> +		}
>> +		memcpy_toio(dst, tmp, c);
>> +
>> +		dst += c;
>> +		buf += c;
>> +		ret += c;
>> +		count -= c;
>> +	}
>> +
>> +	kfree(tmp);
>> +
>> +	return ret;
>> +}
>> +
>> +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count,
>> +				      loff_t pos)
>> +{
>> +	char *dst = info->screen_buffer + pos;
>> +
>> +	if (copy_from_user(dst, buf, count))
>> +		return -EFAULT;
>> +
>> +	return count;
>> +}
>> +
>> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
>> +				  size_t count, loff_t *ppos)
>> +{
>> +	loff_t pos = *ppos;
>> +	size_t total_size;
>> +	ssize_t ret;
>> +	int err = 0;
>> +
>> +	if (info->state != FBINFO_STATE_RUNNING)
>> +		return -EPERM;
>> +
>> +	if (info->screen_size)
>> +		total_size = info->screen_size;
>> +	else
>> +		total_size = info->fix.smem_len;
>> +
>> +	if (pos > total_size)
>> +		return -EFBIG;
>> +	if (count > total_size) {
>> +		err = -EFBIG;
>> +		count = total_size;
>> +	}
>> +	if (total_size - count < pos) {
>> +		if (!err)
>> +			err = -ENOSPC;
>> +		count = total_size - pos;
>> +	}
>> +
>> +	/*
>> +	 * Copy to framebuffer even if we already logged an error. Emulates
>> +	 * the behavior of the original fbdev implementation.
>> +	 */
>> +	if (drm_fbdev_use_iomem(info))
>> +		ret = fb_write_screen_base(info, buf, count, pos);
>> +	else
>> +		ret = fb_write_screen_buffer(info, buf, count, pos);
>> +
>> +	if (ret > 0)
>> +		*ppos = ret;
>> +
>> +	if (err)
>> +		return err;
>> +
>> +	return ret;
>> +}
>> +
>> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
>> +				  const struct fb_fillrect *rect)
>> +{
>> +	if (drm_fbdev_use_iomem(info))
>> +		drm_fb_helper_cfb_fillrect(info, rect);
>> +	else
>> +		drm_fb_helper_sys_fillrect(info, rect);
>> +}
>> +
>> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
>> +				  const struct fb_copyarea *area)
>> +{
>> +	if (drm_fbdev_use_iomem(info))
>> +		drm_fb_helper_cfb_copyarea(info, area);
>> +	else
>> +		drm_fb_helper_sys_copyarea(info, area);
>> +}
>> +
>> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
>> +				   const struct fb_image *image)
>> +{
>> +	if (drm_fbdev_use_iomem(info))
>> +		drm_fb_helper_cfb_imageblit(info, image);
>> +	else
>> +		drm_fb_helper_sys_imageblit(info, image);
>> +}
>> +
>>  static const struct fb_ops drm_fbdev_fb_ops = {
>>  	.owner		= THIS_MODULE,
>>  	DRM_FB_HELPER_DEFAULT_OPS,
>> @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>>  	.fb_release	= drm_fbdev_fb_release,
>>  	.fb_destroy	= drm_fbdev_fb_destroy,
>>  	.fb_mmap	= drm_fbdev_fb_mmap,
>> -	.fb_read	= drm_fb_helper_sys_read,
>> -	.fb_write	= drm_fb_helper_sys_write,
>> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
>> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
>> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
>> +	.fb_read	= drm_fbdev_fb_read,
>> +	.fb_write	= drm_fbdev_fb_write,
>> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
>> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
>> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
>>  };
>>  
>>  static struct fb_deferred_io drm_fbdev_defio = {
>> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
>> index 5ffbb4ed5b35..ab424ddd7665 100644
>> --- a/include/drm/drm_mode_config.h
>> +++ b/include/drm/drm_mode_config.h
>> @@ -877,18 +877,6 @@ struct drm_mode_config {
>>  	 */
>>  	bool prefer_shadow_fbdev;
>>  
>> -	/**
>> -	 * @fbdev_use_iomem:
>> -	 *
>> -	 * Set to true if framebuffer reside in iomem.
>> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
>> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
>> -	 *
>> -	 * FIXME: This should be replaced with a per-mapping is_iomem
>> -	 * flag (like ttm does), and then used everywhere in fbdev code.
>> -	 */
>> -	bool fbdev_use_iomem;
>> -
>>  	/**
>>  	 * @quirk_addfb_prefer_xbgr_30bpp:
>>  	 *
>> -- 
>> 2.28.0
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:49:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:49:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10279.27273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWHn-0006We-Do; Thu, 22 Oct 2020 08:49:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10279.27273; Thu, 22 Oct 2020 08:49:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWHn-0006WX-Ar; Thu, 22 Oct 2020 08:49:27 +0000
Received: by outflank-mailman (input) for mailman id 10279;
 Thu, 22 Oct 2020 08:49:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FtFY=D5=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kVWHl-0006WS-Qe
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:49:25 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31449820-5798-4836-8cad-07e21ea304e8;
 Thu, 22 Oct 2020 08:49:24 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id k18so1046815wmj.5
 for <xen-devel@lists.xenproject.org>; Thu, 22 Oct 2020 01:49:24 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id n9sm2717558wrq.72.2020.10.22.01.49.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 22 Oct 2020 01:49:22 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=FtFY=D5=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
	id 1kVWHl-0006WS-Qe
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:49:25 +0000
X-Inumbo-ID: 31449820-5798-4836-8cad-07e21ea304e8
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 31449820-5798-4836-8cad-07e21ea304e8;
	Thu, 22 Oct 2020 08:49:24 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id k18so1046815wmj.5
        for <xen-devel@lists.xenproject.org>; Thu, 22 Oct 2020 01:49:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=VWt6XRHT1mtrTh0CCtZLa0wzMoEJLJvCOa4qxU5Byfg=;
        b=SJNJAJHWWSatt37yFJRHVEbZn0P63mliHJEuywcMzW7beqB8NExePBJsBSEKTV2IyK
         GDiZ5ZjuTBNjAQrSx9GFFKDh9re66/Dcbmane4QOewhmKa5XnxMXe84rjVZUUPTyyOKy
         1ovUJK5+84JV3uKcgbGhlDkAVpKoTx9G/IdjY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=VWt6XRHT1mtrTh0CCtZLa0wzMoEJLJvCOa4qxU5Byfg=;
        b=VNr8FCrSqrCzKtAVznzgcYmHvRIBF56HPKoEmyPDoZN3Ms+VpqP779KdwFaW8PIf4k
         znB/+mEV2duKlq4nC9cMFJHYBXnRhiDWAomNRk2wSHcHmEdSf9931Y1EWMlzmP9DOFYT
         hNZmiOD0ogXW/N2/ouTyGdx7rRooVqSUoK+m+RRhKgQllP+gqIbbYQLZ3Cj0PsfvFaXg
         vpBIBEqxWxtUjlK9ZbIQ56rS3nDseY3kFa7ByHzYZJ/VnAs+3wy6hiwYYjSUdebzccP/
         +u4iejZIHZfW7dZ0H1OvcSgBNF1qaxo11cVDoPTWMTbaooJGn8gpvslhPUaJE1Pu12Ph
         leZQ==
X-Gm-Message-State: AOAM531kyFzgBYPDUgPX7Yd02yu7iSGin6OwOwKm9CzkmifM+wzRhp71
	s5dMoeaSlC+u1rgob8OhMlPC9A==
X-Google-Smtp-Source: ABdhPJxkyq8GVJelx7O70iPhsj8A18D7siq72V+fzAy3BQnehMq9ndDymNKOsvN6dFPrcy8yVu5tEA==
X-Received: by 2002:a05:600c:2905:: with SMTP id i5mr1458519wmd.9.1603356563173;
        Thu, 22 Oct 2020 01:49:23 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
        by smtp.gmail.com with ESMTPSA id n9sm2717558wrq.72.2020.10.22.01.49.20
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 22 Oct 2020 01:49:22 -0700 (PDT)
Date: Thu, 22 Oct 2020 10:49:19 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, sam@ravnborg.org, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: Re: [PATCH v5 08/10] drm/gem: Store client buffer mappings as struct
 dma_buf_map
Message-ID: <20201022084919.GU401619@phenom.ffwll.local>
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-9-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201020122046.31167-9-tzimmermann@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Tue, Oct 20, 2020 at 02:20:44PM +0200, Thomas Zimmermann wrote:
> Kernel DRM clients now store their framebuffer address in an instance
> of struct dma_buf_map. Depending on the buffer's location, the address
> refers to system or I/O memory.
> 
> Callers of drm_client_buffer_vmap() receive a copy of the value in
> the call's supplied arguments. It can be accessed and modified with
> dma_buf_map interfaces.
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> Tested-by: Sam Ravnborg <sam@ravnborg.org>
> ---
>  drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
>  drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
>  include/drm/drm_client.h        |  7 ++++---
>  3 files changed, 38 insertions(+), 26 deletions(-)
> 
> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> index ac0082bed966..fe573acf1067 100644
> --- a/drivers/gpu/drm/drm_client.c
> +++ b/drivers/gpu/drm/drm_client.c
> @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
>  {
>  	struct drm_device *dev = buffer->client->dev;
>  
> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
> +	drm_gem_vunmap(buffer->gem, &buffer->map);
>  
>  	if (buffer->gem)
>  		drm_gem_object_put(buffer->gem);
> @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
>  /**
>   * drm_client_buffer_vmap - Map DRM client buffer into address space
>   * @buffer: DRM client buffer
> + * @map_copy: Returns the mapped memory's address
>   *
>   * This function maps a client buffer into kernel address space. If the
> - * buffer is already mapped, it returns the mapping's address.
> + * buffer is already mapped, it returns the existing mapping's address.
>   *
>   * Client buffer mappings are not ref'counted. Each call to
>   * drm_client_buffer_vmap() should be followed by a call to
>   * drm_client_buffer_vunmap(); or the client buffer should be mapped
>   * throughout its lifetime.
>   *
> + * The returned address is a copy of the internal value. In contrast to
> + * other vmap interfaces, you don't need it for the client's vunmap
> + * function. So you can modify it at will during blit and draw operations.
> + *
>   * Returns:
> - *	The mapped memory's address
> + *	0 on success, or a negative errno code otherwise.
>   */
> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> +int
> +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
>  {
> -	struct dma_buf_map map;
> +	struct dma_buf_map *map = &buffer->map;
>  	int ret;
>  
> -	if (buffer->vaddr)
> -		return buffer->vaddr;
> +	if (dma_buf_map_is_set(map))
> +		goto out;
>  
>  	/*
>  	 * FIXME: The dependency on GEM here isn't required, we could
> @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>  	 * fd_install step out of the driver backend hooks, to make that
>  	 * final step optional for internal users.
>  	 */
> -	ret = drm_gem_vmap(buffer->gem, &map);
> +	ret = drm_gem_vmap(buffer->gem, map);
>  	if (ret)
> -		return ERR_PTR(ret);
> +		return ret;
>  
> -	buffer->vaddr = map.vaddr;
> +out:
> +	*map_copy = *map;
>  
> -	return map.vaddr;
> +	return 0;
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vmap);
>  
> @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
>   */
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
>  {
> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
> +	struct dma_buf_map *map = &buffer->map;
>  
> -	drm_gem_vunmap(buffer->gem, &map);
> -	buffer->vaddr = NULL;
> +	drm_gem_vunmap(buffer->gem, map);
>  }
>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
>  
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index c2f72bb6afb1..6212cd7cde1d 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->vaddr + offset;
> +	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  	struct drm_clip_rect *clip = &helper->dirty_clip;
>  	struct drm_clip_rect clip_copy;
>  	unsigned long flags;
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	spin_lock_irqsave(&helper->dirty_lock, flags);
>  	clip_copy = *clip;
> @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  
>  		/* Generic fbdev uses a shadow buffer */
>  		if (helper->buffer) {
> -			vaddr = drm_client_buffer_vmap(helper->buffer);
> -			if (IS_ERR(vaddr))
> +			ret = drm_client_buffer_vmap(helper->buffer, &map);
> +			if (ret)
>  				return;
>  			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>  		}
> @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>  	struct drm_framebuffer *fb;
>  	struct fb_info *fbi;
>  	u32 format;
> -	void *vaddr;
> +	struct dma_buf_map map;
> +	int ret;
>  
>  	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
>  		    sizes->surface_width, sizes->surface_height,
> @@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>  		fb_deferred_io_init(fbi);
>  	} else {
>  		/* buffer is mapped for HW framebuffer */
> -		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
> -		if (IS_ERR(vaddr))
> -			return PTR_ERR(vaddr);
> +		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
> +		if (ret)
> +			return ret;
> +		if (map.is_iomem)
> +			fbi->screen_base = map.vaddr_iomem;
> +		else
> +			fbi->screen_buffer = map.vaddr;
>  
> -		fbi->screen_buffer = vaddr;
>  		/* Shamelessly leak the physical address to user-space */
>  #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
>  		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)

Just noticed a tiny thing here: I think this needs to be patched to only
set smem_start when the map is _not_ iomem, since virt_to_page isn't
defined on iomem at all.

I guess it'd be neat if we can set this for iomem too, but I have no idea
how to convert an iomem pointer back to a bus_addr_t ...

Cheers, Daniel

> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
> index 7aaea665bfc2..f07f2fb02e75 100644
> --- a/include/drm/drm_client.h
> +++ b/include/drm/drm_client.h
> @@ -3,6 +3,7 @@
>  #ifndef _DRM_CLIENT_H_
>  #define _DRM_CLIENT_H_
>  
> +#include <linux/dma-buf-map.h>
>  #include <linux/lockdep.h>
>  #include <linux/mutex.h>
>  #include <linux/types.h>
> @@ -141,9 +142,9 @@ struct drm_client_buffer {
>  	struct drm_gem_object *gem;
>  
>  	/**
> -	 * @vaddr: Virtual address for the buffer
> +	 * @map: Virtual address for the buffer
>  	 */
> -	void *vaddr;
> +	struct dma_buf_map map;
>  
>  	/**
>  	 * @fb: DRM framebuffer
> @@ -155,7 +156,7 @@ struct drm_client_buffer *
>  drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
>  
>  int drm_client_modeset_create(struct drm_client_dev *client);
> -- 
> 2.28.0
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:51:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:51:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10281.27286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWJz-0007Jk-R7; Thu, 22 Oct 2020 08:51:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10281.27286; Thu, 22 Oct 2020 08:51:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWJz-0007Jd-Nn; Thu, 22 Oct 2020 08:51:43 +0000
Received: by outflank-mailman (input) for mailman id 10281;
 Thu, 22 Oct 2020 08:51:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FtFY=D5=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1kVWJy-0007JW-C8
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:51:42 +0000
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36aba427-d415-4488-ad7f-d20196ac57fd;
 Thu, 22 Oct 2020 08:51:39 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id c16so1068091wmd.2
 for <xen-devel@lists.xenproject.org>; Thu, 22 Oct 2020 01:51:39 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id a127sm2514868wmh.13.2020.10.22.01.51.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 22 Oct 2020 01:51:38 -0700 (PDT)
X-Inumbo-ID: 36aba427-d415-4488-ad7f-d20196ac57fd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=erb5WlpHsJRWwYX0e3SV0gcGv3BO7EJiAnUE12jJHE0=;
        b=KD8AshS9U2fiksKTvv7o0P37aMRbN3UM3FAbfWu9YxAfRMLaL/6Ngf1thLAOjXN2V0
         AIYJ3fir8VO8pp7di2VX7Sl5I1n2yikuLxTQ26ddK5q1xXMh/Sw3+WNhhNG6xIKpM6E9
         Rw+XW1hAnR3LSAc5fIYsSosGFAeu2aCsUaIEM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=erb5WlpHsJRWwYX0e3SV0gcGv3BO7EJiAnUE12jJHE0=;
        b=aMAoM9HgL/aMrOmv1Ms9BMaFYTTHJrueL1zFV9hadmv/HlIvxXHwQHOybdkf2gVBy3
         qncDGtKmceqvbW10PTQi/OkqkQh9bHzW1MN0a/ViL1VI2w2UGOtWj7J2RjNFSIITf+eK
         bVp1BeBWlJOe6RgTsuyROOzwOtgo8YO7vkS5dX/Pm7N4prb72SZi809S3+r5sddP5Ny9
         NQlYMwuOfYNNU4GsgScBaGxUuyTXVt/BkH8p9lW9wQhLJLR/nu7lQqrRTHa7dwq8RHFw
         k8GEk6ChL3wVRcgwDv2dvWq5X/BXFvZpFfeHFJneSwYuRWUukChJBsaHIj7l+QTks+O6
         YXGw==
X-Gm-Message-State: AOAM530bc6ECbq2IEhwCSDXN+t2ZXOPiHaCA42lK/92ZWzCgMnwH/Hqk
	J/PFz5rhHolkFVoGWoWQMtJyCw==
X-Google-Smtp-Source: ABdhPJwCnAYthezROMJuiM7f0KJt+lGyOM7pBOyPT6+M7iThubfoowBMxVTwJR24l+gtJ18xepp0pw==
X-Received: by 2002:a7b:c387:: with SMTP id s7mr1468832wmj.52.1603356699036;
        Thu, 22 Oct 2020 01:51:39 -0700 (PDT)
Date: Thu, 22 Oct 2020 10:51:35 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Daniel Vetter <daniel@ffwll.ch>, maarten.lankhorst@linux.intel.com,
	mripard@kernel.org, airlied@linux.ie, sam@ravnborg.org,
	alexander.deucher@amd.com, christian.koenig@amd.com,
	kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <20201022085135.GV401619@phenom.ffwll.local>
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-11-tzimmermann@suse.de>
 <20201022080534.GT401619@phenom.ffwll.local>
 <794e6ab4-041b-55f9-e95e-55ef0526edd5@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <794e6ab4-041b-55f9-e95e-55ef0526edd5@suse.de>
X-Operating-System: Linux phenom 5.7.0-1-amd64 

On Thu, Oct 22, 2020 at 10:37:56AM +0200, Thomas Zimmermann wrote:
> Hi
> 
> On 22.10.20 10:05, Daniel Vetter wrote:
> > On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote:
> >> At least sparc64 requires I/O-specific access to framebuffers. This
> >> patch updates the fbdev console accordingly.
> >>
> >> For drivers with direct access to the framebuffer memory, the callback
> >> functions in struct fb_ops test for the type of memory and call the
> >> respective fb_sys_ or fb_cfb_ functions. Read and write operations are implemented
> >> internally by DRM's fbdev helper.
> >>
> >> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> >> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> >> interfaces to access the buffer.
> >>
> >> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> >> I/O memory and avoid a HW exception. With the introduction of struct
> >> dma_buf_map, this is not required any longer. The patch removes the
> >> respective code from both bochs and fbdev.
> >>
> >> v5:
> >> 	* implement fb_read/fb_write internally (Daniel, Sam)
> >> v4:
> >> 	* move dma_buf_map changes into separate patch (Daniel)
> >> 	* TODO list: comment on fbdev updates (Daniel)
> >>
> >> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >> Tested-by: Sam Ravnborg <sam@ravnborg.org>
> >> ---
> >>  Documentation/gpu/todo.rst        |  19 ++-
> >>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
> >>  drivers/gpu/drm/drm_fb_helper.c   | 227 ++++++++++++++++++++++++++++--
> >>  include/drm/drm_mode_config.h     |  12 --
> >>  4 files changed, 230 insertions(+), 29 deletions(-)
> >>
> >> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> >> index 7e6fc3c04add..638b7f704339 100644
> >> --- a/Documentation/gpu/todo.rst
> >> +++ b/Documentation/gpu/todo.rst
> >> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
> >>  ------------------------------------------------
> >>  
> >>  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> >> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> >> -expects the framebuffer in system memory (or system-like memory).
> >> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> >> +expected the framebuffer in system memory or system-like memory. By employing
> >> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> >> +as well.
> >>  
> >>  Contact: Maintainer of the driver you plan to convert
> >>  
> >>  Level: Intermediate
> >>  
> >> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> >> +-------------------------------------------------------
> >> +
> >> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> >> +being rewritten without dependencies on the fbdev module. Some of the
> >> +helpers could further benefit from using struct dma_buf_map instead of
> >> +raw pointers.
> >> +
> >> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> >> +
> >> +Level: Advanced
> >> +
> >> +
> >>  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
> >>  -----------------------------------------------------------------
> >>  
> >> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> >> index 13d0d04c4457..853081d186d5 100644
> >> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> >> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> >> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
> >>  	bochs->dev->mode_config.preferred_depth = 24;
> >>  	bochs->dev->mode_config.prefer_shadow = 0;
> >>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> >> -	bochs->dev->mode_config.fbdev_use_iomem = true;
> >>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
> >>  
> >>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> >> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> >> index 6212cd7cde1d..1d3180841778 100644
> >> --- a/drivers/gpu/drm/drm_fb_helper.c
> >> +++ b/drivers/gpu/drm/drm_fb_helper.c
> >> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
> >>  }
> >>  
> >>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> >> -					  struct drm_clip_rect *clip)
> >> +					  struct drm_clip_rect *clip,
> >> +					  struct dma_buf_map *dst)
> >>  {
> >>  	struct drm_framebuffer *fb = fb_helper->fb;
> >>  	unsigned int cpp = fb->format->cpp[0];
> >>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> >>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> >> -	void *dst = fb_helper->buffer->map.vaddr + offset;
> >>  	size_t len = (clip->x2 - clip->x1) * cpp;
> >>  	unsigned int y;
> >>  
> >> -	for (y = clip->y1; y < clip->y2; y++) {
> >> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> >> -			memcpy(dst, src, len);
> >> -		else
> >> -			memcpy_toio((void __iomem *)dst, src, len);
> >> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
> >>  
> >> +	for (y = clip->y1; y < clip->y2; y++) {
> >> +		dma_buf_map_memcpy_to(dst, src, len);
> >> +		dma_buf_map_incr(dst, fb->pitches[0]);
> >>  		src += fb->pitches[0];
> >> -		dst += fb->pitches[0];
> >>  	}
> >>  }
> >>  
> >> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
> >>  			if (ret)
> >>  				return;
> >> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> >> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
> >>  		}
> >> +
> >>  		if (helper->fb->funcs->dirty)
> >>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
> >>  						 &clip_copy, 1);
> >> @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
> >>  		return -ENODEV;
> >>  }
> >>  
> >> +static bool drm_fbdev_use_iomem(struct fb_info *info)
> >> +{
> >> +	struct drm_fb_helper *fb_helper = info->par;
> >> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> >> +
> >> +	return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem;
> >> +}
> >> +
> >> +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, 
> >> +				   loff_t pos)
> >> +{
> >> +	const char __iomem *src = info->screen_base + pos;
> > 
> > Maybe a bit much a bikeshed, but I'd write this in terms of drm objects,
> > like the dirty_blit function, using the dma_buf_map (instead of the
> > fb_info parameter). And then instead of
> > screen_base and screen_buffer suffixes give them _mem and _iomem suffixes.
> 
> Screen_buffer can be a shadow buffer. Until the blit worker (see
> drm_fb_helper_dirty_work()) completes, it might be more up to date than
> the real buffer that's stored in the client.
> 
> The original fbdev code supported an fb_sync callback to synchronize with
> outstanding screen updates (e.g., HW blit ops), but fb_sync is just
> overhead here. Copying from screen_buffer or screen_base always returns
> the most up-to-date image.
> 
> > 
> > Same for write below. Or I'm not quite understanding why we do it like
> > this here - I don't think this code will be used outside of the generic
> > fbdev code, so we can always assume that drm_fb_helper->buffer is set up.
> 
> It's similar as in the read case. If we write to the client's buffer, an
> outstanding blit worker could write the now-outdated shadow buffer over
> the user's newly written framebuffer data.
> 
> Thinking about it, we might want to schedule the blit worker at the end
> of each fb_write, so that the data makes it into the HW buffer in time.

Hm ok, makes some sense. I think there's some potential for cleanup if we
add a dma_buf_map drm_fb_helper->uapi_map which points at the right thing
always. That could then also replace the drm_fbdev_use_iomem() helper and
make this all look really neat.

But maybe a follow-up cleanup patch, if you're bored. As-is:

Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>

While looking at this I also noticed a potential small issue in an earlier
patch.

> > The other thing I think we need is some minimal testcases to make sure.
> > The fbtest tool used way back seems to have disappeared, I couldn't find
> > a copy of the source anywhere anymore.
> 
> As discussed on IRC, I'll add some testcase to the igt test. I'll share
> the link here when done.
> 
> Best regards
> Thomas
> 
> > 
> > With all that: Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> > 
> > Cheers, Daniel
> > 
> >> +	size_t alloc_size = min(count, PAGE_SIZE);
> >> +	ssize_t ret = 0;
> >> +	char *tmp;
> >> +
> >> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
> >> +	if (!tmp)
> >> +		return -ENOMEM;
> >> +
> >> +	while (count) {
> >> +		size_t c = min(count, alloc_size);
> >> +
> >> +		memcpy_fromio(tmp, src, c);
> >> +		if (copy_to_user(buf, tmp, c)) {
> >> +			ret = -EFAULT;
> >> +			break;
> >> +		}
> >> +
> >> +		src += c;
> >> +		buf += c;
> >> +		ret += c;
> >> +		count -= c;
> >> +	}
> >> +
> >> +	kfree(tmp);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count,
> >> +				     loff_t pos)
> >> +{
> >> +	const char *src = info->screen_buffer + pos;
> >> +
> >> +	if (copy_to_user(buf, src, count))
> >> +		return -EFAULT;
> >> +
> >> +	return count;
> >> +}
> >> +
> >> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> >> +				 size_t count, loff_t *ppos)
> >> +{
> >> +	loff_t pos = *ppos;
> >> +	size_t total_size;
> >> +	ssize_t ret;
> >> +
> >> +	if (info->state != FBINFO_STATE_RUNNING)
> >> +		return -EPERM;
> >> +
> >> +	if (info->screen_size)
> >> +		total_size = info->screen_size;
> >> +	else
> >> +		total_size = info->fix.smem_len;
> >> +
> >> +	if (pos >= total_size)
> >> +		return 0;
> >> +	if (count >= total_size)
> >> +		count = total_size;
> >> +	if (total_size - count < pos)
> >> +		count = total_size - pos;
> >> +
> >> +	if (drm_fbdev_use_iomem(info))
> >> +		ret = fb_read_screen_base(info, buf, count, pos);
> >> +	else
> >> +		ret = fb_read_screen_buffer(info, buf, count, pos);
> >> +
> >> +	if (ret > 0)
> >> +		*ppos += ret;
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count,
> >> +				    loff_t pos)
> >> +{
> >> +	char __iomem *dst = info->screen_base + pos;
> >> +	size_t alloc_size = min(count, PAGE_SIZE);
> >> +	ssize_t ret = 0;
> >> +	u8 *tmp;
> >> +
> >> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
> >> +	if (!tmp)
> >> +		return -ENOMEM;
> >> +
> >> +	while (count) {
> >> +		size_t c = min(count, alloc_size);
> >> +
> >> +		if (copy_from_user(tmp, buf, c)) {
> >> +			ret = -EFAULT;
> >> +			break;
> >> +		}
> >> +		memcpy_toio(dst, tmp, c);
> >> +
> >> +		dst += c;
> >> +		buf += c;
> >> +		ret += c;
> >> +		count -= c;
> >> +	}
> >> +
> >> +	kfree(tmp);
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count,
> >> +				      loff_t pos)
> >> +{
> >> +	char *dst = info->screen_buffer + pos;
> >> +
> >> +	if (copy_from_user(dst, buf, count))
> >> +		return -EFAULT;
> >> +
> >> +	return count;
> >> +}
> >> +
> >> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> >> +				  size_t count, loff_t *ppos)
> >> +{
> >> +	loff_t pos = *ppos;
> >> +	size_t total_size;
> >> +	ssize_t ret;
> >> +	int err = 0;
> >> +
> >> +	if (info->state != FBINFO_STATE_RUNNING)
> >> +		return -EPERM;
> >> +
> >> +	if (info->screen_size)
> >> +		total_size = info->screen_size;
> >> +	else
> >> +		total_size = info->fix.smem_len;
> >> +
> >> +	if (pos > total_size)
> >> +		return -EFBIG;
> >> +	if (count > total_size) {
> >> +		err = -EFBIG;
> >> +		count = total_size;
> >> +	}
> >> +	if (total_size - count < pos) {
> >> +		if (!err)
> >> +			err = -ENOSPC;
> >> +		count = total_size - pos;
> >> +	}
> >> +
> >> +	/*
> >> +	 * Copy to framebuffer even if we already logged an error. Emulates
> >> +	 * the behavior of the original fbdev implementation.
> >> +	 */
> >> +	if (drm_fbdev_use_iomem(info))
> >> +		ret = fb_write_screen_base(info, buf, count, pos);
> >> +	else
> >> +		ret = fb_write_screen_buffer(info, buf, count, pos);
> >> +
> >> +	if (ret > 0)
> >> +		*ppos += ret;
> >> +
> >> +	if (err)
> >> +		return err;
> >> +
> >> +	return ret;
> >> +}
> >> +
> >> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> >> +				  const struct fb_fillrect *rect)
> >> +{
> >> +	if (drm_fbdev_use_iomem(info))
> >> +		drm_fb_helper_cfb_fillrect(info, rect);
> >> +	else
> >> +		drm_fb_helper_sys_fillrect(info, rect);
> >> +}
> >> +
> >> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> >> +				  const struct fb_copyarea *area)
> >> +{
> >> +	if (drm_fbdev_use_iomem(info))
> >> +		drm_fb_helper_cfb_copyarea(info, area);
> >> +	else
> >> +		drm_fb_helper_sys_copyarea(info, area);
> >> +}
> >> +
> >> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> >> +				   const struct fb_image *image)
> >> +{
> >> +	if (drm_fbdev_use_iomem(info))
> >> +		drm_fb_helper_cfb_imageblit(info, image);
> >> +	else
> >> +		drm_fb_helper_sys_imageblit(info, image);
> >> +}
> >> +
> >>  static const struct fb_ops drm_fbdev_fb_ops = {
> >>  	.owner		= THIS_MODULE,
> >>  	DRM_FB_HELPER_DEFAULT_OPS,
> >> @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
> >>  	.fb_release	= drm_fbdev_fb_release,
> >>  	.fb_destroy	= drm_fbdev_fb_destroy,
> >>  	.fb_mmap	= drm_fbdev_fb_mmap,
> >> -	.fb_read	= drm_fb_helper_sys_read,
> >> -	.fb_write	= drm_fb_helper_sys_write,
> >> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> >> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> >> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> >> +	.fb_read	= drm_fbdev_fb_read,
> >> +	.fb_write	= drm_fbdev_fb_write,
> >> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> >> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> >> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
> >>  };
> >>  
> >>  static struct fb_deferred_io drm_fbdev_defio = {
> >> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> >> index 5ffbb4ed5b35..ab424ddd7665 100644
> >> --- a/include/drm/drm_mode_config.h
> >> +++ b/include/drm/drm_mode_config.h
> >> @@ -877,18 +877,6 @@ struct drm_mode_config {
> >>  	 */
> >>  	bool prefer_shadow_fbdev;
> >>  
> >> -	/**
> >> -	 * @fbdev_use_iomem:
> >> -	 *
> >> -	 * Set to true if framebuffer reside in iomem.
> >> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> >> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> >> -	 *
> >> -	 * FIXME: This should be replaced with a per-mapping is_iomem
> >> -	 * flag (like ttm does), and then used everywhere in fbdev code.
> >> -	 */
> >> -	bool fbdev_use_iomem;
> >> -
> >>  	/**
> >>  	 * @quirk_addfb_prefer_xbgr_30bpp:
> >>  	 *
> >> -- 
> >> 2.28.0
> >>
> > 
> 
> -- 
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 08:56:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 08:56:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10285.27298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWOS-0007c0-Hp; Thu, 22 Oct 2020 08:56:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10285.27298; Thu, 22 Oct 2020 08:56:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWOS-0007bt-EN; Thu, 22 Oct 2020 08:56:20 +0000
Received: by outflank-mailman (input) for mailman id 10285;
 Thu, 22 Oct 2020 08:56:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVWOQ-0007bN-7e
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 08:56:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 35c70d9a-87f0-4fb9-b743-cbc7a57f3a85;
 Thu, 22 Oct 2020 08:56:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 34B45AC82;
 Thu, 22 Oct 2020 08:56:15 +0000 (UTC)
X-Inumbo-ID: 35c70d9a-87f0-4fb9-b743-cbc7a57f3a85
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603356975;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ffs8HVQOl285t4ppCdKCO8F+D0WzOhc8a+9BPoe5kE0=;
	b=FNMxUkIX2Cc0okDFSJ4KIbOPkURaV72SDFVT3TdQaXaBXjhJz3nbxpNfz3sLBiU0Dj2w+O
	kRko8jDaowthsJvyNPrns1KrttHfiwlo7gRn2+YsV1Bm/2MDnEzUL9l4t4VNFsp+XW18wp
	4SeTAAQOUDyyC+rfnGza+fG4EFpb5Ys=
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
 <20201021154650.zz77ircyuedr7gpm@Air-de-Roger>
 <3fd4c197-617e-dd48-6781-9ff0b1a82bf8@suse.com>
 <20201022081100.bedkkvuqf7ymjpbl@Air-de-Roger>
 <2172763f-9f3d-588e-b4f2-0f9187a40ff9@suse.com>
 <20201022082938.jnpw7wvzuvxqa6iy@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2bc15e6b-cc53-3092-2a56-492a302fbc1e@suse.com>
Date: Thu, 22 Oct 2020 10:56:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022082938.jnpw7wvzuvxqa6iy@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.10.2020 10:29, Roger Pau Monné wrote:
> On Thu, Oct 22, 2020 at 10:15:45AM +0200, Jan Beulich wrote:
>> On 22.10.2020 10:11, Roger Pau Monné wrote:
>>> On Thu, Oct 22, 2020 at 09:33:27AM +0200, Jan Beulich wrote:
>>>> On 21.10.2020 17:46, Roger Pau Monné wrote:
>>>>> On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
>>>>>> @@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
>>>>>>  
>>>>>>      for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
>>>>>>      {
>>>>>> +        /* Use cmpxchgptr() in lieu of a global lock. */
>>>>>>          if ( xen_consumers[i] == NULL )
>>>>>> -            xen_consumers[i] = fn;
>>>>>> +            cmpxchgptr(&xen_consumers[i], NULL, fn);
>>>>>>          if ( xen_consumers[i] == fn )
>>>>>>              break;
>>>>>
>>>>> I think you could join it as:
>>>>>
>>>>> if ( !xen_consumers[i] &&
>>>>>      !cmpxchgptr(&xen_consumers[i], NULL, fn) )
>>>>>     break;
>>>>>
>>>>> As cmpxchgptr will return the previous value of &xen_consumers[i]?
>>>>
>>>> But then you also have to check whether the returned value is
>>>> fn (or retain the 2nd if()).
>>>
>>> __cmpxchg comment says that success of the operation is indicated when
>>> the returned value equals the old value, so it's my understanding that
>>> cmpxchgptr returning NULL would mean the exchange has succeed and that
>>> xen_consumers[i] == fn?
>>
>> Correct. But if xen_consumers[i] == fn before the call, the return
>> value will be fn. The cmpxchg() wasn't "successful" in this case
>> (it didn't update anything), but the state of the array slot is what
>> we want.
> 
> Oh, I get it now. You don't want the same fn populating more than one
> slot.

FAOD it's not just "want", it's a strict requirement.

> I assume the reads of xen_consumers are not using ACCESS_ONCE or
> read_atomic because we rely on the compiler performing such reads as
> single instructions?

Yes, as in so many other places in the code base.
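
Put together, the slot-claiming logic under discussion can be sketched
with C11 atomics in userspace (illustration only, not the Xen code
itself; Xen's cmpxchgptr() plays the role of
atomic_compare_exchange_strong() here, and the names are made up for the
demo):

```c
#include <stdatomic.h>
#include <stddef.h>

typedef void (*xen_event_fn)(void);

#define NR_SLOTS 8
static _Atomic(xen_event_fn) xen_consumers[NR_SLOTS];

/* Demo consumers standing in for real event-channel callbacks. */
static void demo_consumer_a(void) { }
static void demo_consumer_b(void) { }

/*
 * Claim a slot for fn, or find the slot fn already occupies.
 * Losing the CAS race is harmless: we re-read the slot and stop if
 * the winner installed the same fn, so one fn never ends up in two
 * slots. Returns the slot index, or -1 if the array is full.
 */
static int claim_slot(xen_event_fn fn)
{
    for (int i = 0; i < NR_SLOTS; i++) {
        xen_event_fn expected = NULL;

        if (atomic_load(&xen_consumers[i]) == NULL)
            atomic_compare_exchange_strong(&xen_consumers[i],
                                           &expected, fn);
        if (atomic_load(&xen_consumers[i]) == fn)
            return i; /* we won the race, or fn was already here */
    }
    return -1;
}
```

Registering the same consumer twice returns its original slot rather
than claiming a second one, which is the strict requirement above.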

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:13:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:13:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10289.27309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWf9-00015l-1F; Thu, 22 Oct 2020 09:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10289.27309; Thu, 22 Oct 2020 09:13:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWf8-00015e-US; Thu, 22 Oct 2020 09:13:34 +0000
Received: by outflank-mailman (input) for mailman id 10289;
 Thu, 22 Oct 2020 09:13:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GjE6=D5=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVWf8-00015Z-90
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:13:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6d671e3-151e-465f-a228-4bba560ce2f8;
 Thu, 22 Oct 2020 09:13:33 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVWf6-0003jb-0d; Thu, 22 Oct 2020 09:13:32 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVWf5-0000It-Oj; Thu, 22 Oct 2020 09:13:31 +0000
X-Inumbo-ID: d6d671e3-151e-465f-a228-4bba560ce2f8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=eVX4KsOWO0VfBR4o9B+a/URh7DqYVZ4bWqLYe2Rt4WE=; b=KKQJmGHgOP/aWsqkQmfpNGhzOz
	Pkpt4311Ke0/+b0CgyH+hPgvJnEsH9Zq+wnku8uRkpvv5zItOeBSgY4OtYY+V9cjIYivm0kQacXqy
	sm5H8On5hOuESwPzNTf5SVdCNux8I3/KfeEcOUpOJJuOATuj/dbhKEELyjq+HyvHqe+o=;
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
To: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201022014310.GA70872@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
Date: Thu, 22 Oct 2020 10:13:29 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <20201022014310.GA70872@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 22/10/2020 02:43, Elliott Mitchell wrote:
> Linux requires UEFI support to be enabled on ARM64 devices.  While many
> ARM64 devices lack ACPI, the writing seems to be on the wall of UEFI/ACPI
> potentially taking over.  Some common devices may need ACPI table
> support.
> 
> Presently I think it is worth removing the dependency on CONFIG_EXPERT.

The idea behind EXPERT is to gate any feature that is not considered to 
be stable/complete enough to be used in production.

I don't consider the ACPI support complete because the parsing of the 
IORT (used to discover the SMMU and GICv3 ITS) is not there yet.

I vaguely remember some issues on systems using an SMMU (e.g. Thunder-X), 
because Dom0 will try to use the IOMMU and this would break PV drivers.

Therefore I think we at least want to consider hiding SMMUs from dom0 
before removing EXPERT. Ideally, I would also like the feature to be 
tested in Osstest.

The good news is that the Xen Project already has systems (e.g. Thunder-X, 
SoftIron) that can support ACPI. So it should hopefully just be a matter 
of telling them to boot with ACPI rather than DT.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:18:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:18:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10292.27321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWkA-0001Iz-LO; Thu, 22 Oct 2020 09:18:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10292.27321; Thu, 22 Oct 2020 09:18:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWkA-0001Is-IH; Thu, 22 Oct 2020 09:18:46 +0000
Received: by outflank-mailman (input) for mailman id 10292;
 Thu, 22 Oct 2020 09:18:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GzMw=D5=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kVWk9-0001In-K0
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:18:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dacc824b-b3db-416f-badc-cb13f0f9e7d2;
 Thu, 22 Oct 2020 09:18:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E375DB240;
 Thu, 22 Oct 2020 09:18:42 +0000 (UTC)
X-Inumbo-ID: dacc824b-b3db-416f-badc-cb13f0f9e7d2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: Daniel Vetter <daniel@ffwll.ch>
Cc: luben.tuikov@amd.com, airlied@linux.ie, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk,
 melissa.srw@gmail.com, ray.huang@amd.com, kraxel@redhat.com,
 sam@ravnborg.org, emil.velikov@collabora.com,
 linux-samsung-soc@vger.kernel.org, jy0922.shim@samsung.com,
 lima@lists.freedesktop.org, oleksandr_andrushchenko@epam.com,
 krzk@kernel.org, steven.price@arm.com, linux-rockchip@lists.infradead.org,
 kgene@kernel.org, bskeggs@redhat.com, linux+etnaviv@armlinux.org.uk,
 spice-devel@lists.freedesktop.org, alyssa.rosenzweig@collabora.com,
 etnaviv@lists.freedesktop.org, hdegoede@redhat.com,
 xen-devel@lists.xenproject.org, virtualization@lists.linux-foundation.org,
 sean@poorly.run, apaneers@amd.com, linux-arm-kernel@lists.infradead.org,
 linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
 tomeu.vizoso@collabora.com, Daniel Vetter <daniel.vetter@ffwll.ch>,
 sw0312.kim@samsung.com, hjc@rock-chips.com, kyungmin.park@samsung.com,
 miaoqinglang@huawei.com, yuq825@gmail.com, alexander.deucher@amd.com,
 linux-media@vger.kernel.org, christian.koenig@amd.com
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-9-tzimmermann@suse.de>
 <20201022084919.GU401619@phenom.ffwll.local>
From: Thomas Zimmermann <tzimmermann@suse.de>
Subject: Re: [PATCH v5 08/10] drm/gem: Store client buffer mappings as struct
 dma_buf_map
Message-ID: <f2d83a8b-91b3-ac64-b77f-2b1c78729014@suse.de>
Date: Thu, 22 Oct 2020 11:18:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.2
MIME-Version: 1.0
In-Reply-To: <20201022084919.GU401619@phenom.ffwll.local>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi

On 22.10.20 10:49, Daniel Vetter wrote:
> On Tue, Oct 20, 2020 at 02:20:44PM +0200, Thomas Zimmermann wrote:
>> Kernel DRM clients now store their framebuffer address in an instance
>> of struct dma_buf_map. Depending on the buffer's location, the address
>> refers to system or I/O memory.
>>
>> Callers of drm_client_buffer_vmap() receive a copy of the value in
>> the call's supplied arguments. It can be accessed and modified with
>> dma_buf_map interfaces.
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
>> Tested-by: Sam Ravnborg <sam@ravnborg.org>
>> ---
>>  drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
>>  drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
>>  include/drm/drm_client.h        |  7 ++++---
>>  3 files changed, 38 insertions(+), 26 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
>> index ac0082bed966..fe573acf1067 100644
>> --- a/drivers/gpu/drm/drm_client.c
>> +++ b/drivers/gpu/drm/drm_client.c
>> @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
>>  {
>>  	struct drm_device *dev = buffer->client->dev;
>>  
>> -	drm_gem_vunmap(buffer->gem, buffer->vaddr);
>> +	drm_gem_vunmap(buffer->gem, &buffer->map);
>>  
>>  	if (buffer->gem)
>>  		drm_gem_object_put(buffer->gem);
>> @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
>>  /**
>>   * drm_client_buffer_vmap - Map DRM client buffer into address space
>>   * @buffer: DRM client buffer
>> + * @map_copy: Returns the mapped memory's address
>>   *
>>   * This function maps a client buffer into kernel address space. If the
>> - * buffer is already mapped, it returns the mapping's address.
>> + * buffer is already mapped, it returns the existing mapping's address.
>>   *
>>   * Client buffer mappings are not ref'counted. Each call to
>>   * drm_client_buffer_vmap() should be followed by a call to
>>   * drm_client_buffer_vunmap(); or the client buffer should be mapped
>>   * throughout its lifetime.
>>   *
>> + * The returned address is a copy of the internal value. In contrast to
>> + * other vmap interfaces, you don't need it for the client's vunmap
>> + * function. So you can modify it at will during blit and draw operations.
>> + *
>>   * Returns:
>> - *	The mapped memory's address
>> + *	0 on success, or a negative errno code otherwise.
>>   */
>> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>> +int
>> +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
>>  {
>> -	struct dma_buf_map map;
>> +	struct dma_buf_map *map = &buffer->map;
>>  	int ret;
>>  
>> -	if (buffer->vaddr)
>> -		return buffer->vaddr;
>> +	if (dma_buf_map_is_set(map))
>> +		goto out;
>>  
>>  	/*
>>  	 * FIXME: The dependency on GEM here isn't required, we could
>> @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
>>  	 * fd_install step out of the driver backend hooks, to make that
>>  	 * final step optional for internal users.
>>  	 */
>> -	ret = drm_gem_vmap(buffer->gem, &map);
>> +	ret = drm_gem_vmap(buffer->gem, map);
>>  	if (ret)
>> -		return ERR_PTR(ret);
>> +		return ret;
>>  
>> -	buffer->vaddr = map.vaddr;
>> +out:
>> +	*map_copy = *map;
>>  
>> -	return map.vaddr;
>> +	return 0;
>>  }
>>  EXPORT_SYMBOL(drm_client_buffer_vmap);
>>  
>> @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
>>   */
>>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
>>  {
>> -	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
>> +	struct dma_buf_map *map = &buffer->map;
>>  
>> -	drm_gem_vunmap(buffer->gem, &map);
>> -	buffer->vaddr = NULL;
>> +	drm_gem_vunmap(buffer->gem, map);
>>  }
>>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
>>  
>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
>> index c2f72bb6afb1..6212cd7cde1d 100644
>> --- a/drivers/gpu/drm/drm_fb_helper.c
>> +++ b/drivers/gpu/drm/drm_fb_helper.c
>> @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>>  	unsigned int cpp = fb->format->cpp[0];
>>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>>  	void *src = fb_helper->fbdev->screen_buffer + offset;
>> -	void *dst = fb_helper->buffer->vaddr + offset;
>> +	void *dst = fb_helper->buffer->map.vaddr + offset;
>>  	size_t len = (clip->x2 - clip->x1) * cpp;
>>  	unsigned int y;
>>  
>> @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>>  	struct drm_clip_rect *clip = &helper->dirty_clip;
>>  	struct drm_clip_rect clip_copy;
>>  	unsigned long flags;
>> -	void *vaddr;
>> +	struct dma_buf_map map;
>> +	int ret;
>>  
>>  	spin_lock_irqsave(&helper->dirty_lock, flags);
>>  	clip_copy = *clip;
>> @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>>  
>>  		/* Generic fbdev uses a shadow buffer */
>>  		if (helper->buffer) {
>> -			vaddr = drm_client_buffer_vmap(helper->buffer);
>> -			if (IS_ERR(vaddr))
>> +			ret = drm_client_buffer_vmap(helper->buffer, &map);
>> +			if (ret)
>>  				return;
>>  			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>>  		}
>> @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>>  	struct drm_framebuffer *fb;
>>  	struct fb_info *fbi;
>>  	u32 format;
>> -	void *vaddr;
>> +	struct dma_buf_map map;
>> +	int ret;
>>  
>>  	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
>>  		    sizes->surface_width, sizes->surface_height,
>> @@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
>>  		fb_deferred_io_init(fbi);
>>  	} else {
>>  		/* buffer is mapped for HW framebuffer */
>> -		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
>> -		if (IS_ERR(vaddr))
>> -			return PTR_ERR(vaddr);
>> +		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
>> +		if (ret)
>> +			return ret;
>> +		if (map.is_iomem)
>> +			fbi->screen_base = map.vaddr_iomem;
>> +		else
>> +			fbi->screen_buffer = map.vaddr;
>>  
>> -		fbi->screen_buffer = vaddr;
>>  		/* Shamelessly leak the physical address to user-space */
>>  #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
>>  		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
> 
> Just noticed a tiny thing here: I think this needs to be patched to only
> set smem_start when the map is _not_ iomem. Since virt_to_page isn't
> defined on iomem at all.
> 
> I guess it'd be neat if we can set this for iomem too, but I have no idea
> how to convert an iomem pointer back to a bus_addr_t ...

Not that I disagree, but that should be reviewed by the right people.
Commit 4be9bd10e22d ("drm/fb_helper: Allow leaking fbdev
smem_start") appears to work around specific userspace drivers.

Best regards
Thomas

> 
> Cheers, Daniel
> 
>> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
>> index 7aaea665bfc2..f07f2fb02e75 100644
>> --- a/include/drm/drm_client.h
>> +++ b/include/drm/drm_client.h
>> @@ -3,6 +3,7 @@
>>  #ifndef _DRM_CLIENT_H_
>>  #define _DRM_CLIENT_H_
>>  
>> +#include <linux/dma-buf-map.h>
>>  #include <linux/lockdep.h>
>>  #include <linux/mutex.h>
>>  #include <linux/types.h>
>> @@ -141,9 +142,9 @@ struct drm_client_buffer {
>>  	struct drm_gem_object *gem;
>>  
>>  	/**
>> -	 * @vaddr: Virtual address for the buffer
>> +	 * @map: Virtual address for the buffer
>>  	 */
>> -	void *vaddr;
>> +	struct dma_buf_map map;
>>  
>>  	/**
>>  	 * @fb: DRM framebuffer
>> @@ -155,7 +156,7 @@ struct drm_client_buffer *
>>  drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
>>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
>>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
>> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
>> +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
>>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
>>  
>>  int drm_client_modeset_create(struct drm_client_dev *client);
>> -- 
>> 2.28.0
>>
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:21:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:21:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10295.27334 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWmp-00026o-4N; Thu, 22 Oct 2020 09:21:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10295.27334; Thu, 22 Oct 2020 09:21:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWmp-00026h-0T; Thu, 22 Oct 2020 09:21:31 +0000
Received: by outflank-mailman (input) for mailman id 10295;
 Thu, 22 Oct 2020 09:21:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mO8V=D5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVWmn-00026b-PR
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:21:29 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0478d580-44dc-40c7-8dce-2c237b880a2a;
 Thu, 22 Oct 2020 09:21:28 +0000 (UTC)
X-Inumbo-ID: 0478d580-44dc-40c7-8dce-2c237b880a2a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603358488;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=C9aqx/jjtwxMn9vAhpNQPx4C33vcoB8dih4gv2oWmT0=;
  b=WQnYZYleHNmYc7bL/gPOuftZkn8yTxyu8JGr094ibMOI77KE4A5xo0BY
   vvflK0ukEYGS1yrkx4J5tjyg+l4I6oCjZNjHzHC2qAy0TCGv36yF965L4
   9YogE3IAPeCQduybIXt+hDS3LEVPQPeFHFqQkuumEtZ0lDxUwjxcDL60q
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: kc51BmvvoDlcX8H+s15aRLIDcnoWU9hX00MmrF9NuEuAwhoa98UlGweTntLtzSJNZBYC4TObxV
 bVZ/6LuRPfS8Sg2/LnH5HchSSBoKrWB5ZZcU2Wwyfe6f90rjlOP0RVkkXCHxAdILG5MHvWtZlA
 G2DctSc7095U7pou1ObfgXyK6BoZEt7oCJ0KWpR+MkWk5QvTz4LqQ3EjwipK5M8sR8zSbygRh2
 kxF9OK3k/Z7o4SLbBOz0b2O/QhZjYot9FLmKXDJZmOorV1GLHL2LN01Hkg8U639gSRqveDYfuo
 6Rs=
X-SBRS: 2.5
X-MesageID: 29794353
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29794353"
Date: Thu, 22 Oct 2020 11:21:19 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
Message-ID: <20201022092119.kgm3nrwuwjhphcc7@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
> There's no global lock around the updating of this global piece of data.
> Make use of cmpxchgptr() to avoid two entities racing with their
> updates.
> 
> While touching the functionality, mark xen_consumers[] read-mostly (or
> else the if() condition could use the result of cmpxchgptr(), writing to
> the slot unconditionally).
> 
> The use of cmpxchgptr() here points out (by way of clang warning about
> it) that its original use of const was slightly wrong. Adjust the
> placement, or else undefined behavior of const qualifying a function
> type will result.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:24:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:24:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10298.27346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWpv-0002Mb-J4; Thu, 22 Oct 2020 09:24:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10298.27346; Thu, 22 Oct 2020 09:24:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWpv-0002MU-Fo; Thu, 22 Oct 2020 09:24:43 +0000
Received: by outflank-mailman (input) for mailman id 10298;
 Thu, 22 Oct 2020 09:24:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVWpu-0002MO-AZ
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:24:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b05557c5-060e-4f8c-898c-6b503bf7bbc2;
 Thu, 22 Oct 2020 09:24:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8F8D1AC3C;
 Thu, 22 Oct 2020 09:24:40 +0000 (UTC)
X-Inumbo-ID: b05557c5-060e-4f8c-898c-6b503bf7bbc2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603358680;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CvoA3+uVzibS3Bv3IS1AE3ffCg+FdCpAtcx7xV/jTjQ=;
	b=qHxb0cXD8V947WK/zTGCy4JRPTiiPnWbfjoUGIGDV7kIGwcQV3wtHATNop4KSafd8Iyz7r
	dBJYSt86g9B3azGQnNR+okxaki8tQWdFxwj+3+cWV+URNQDUZ0IsvxcYdKnXDvEJFSlUs+
	jbrllajztgztGwWmyQ26pACn0y2fEX0=
Subject: Re: [PATCH] x86/alternative: don't call text_poke() in lazy TLB mode
To: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Andy Lutomirski <luto@kernel.org>
References: <20201009144225.12019-1-jgross@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <28ccccfe-b95b-5c4d-af27-5004e9f02c40@suse.com>
Date: Thu, 22 Oct 2020 11:24:39 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201009144225.12019-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 09.10.20 16:42, Juergen Gross wrote:
> When running in lazy TLB mode the currently active page tables might
> be the ones of a previous process, e.g. when running a kernel thread.
> 
> This can be problematic in case kernel code is being modified via
> text_poke() in a kernel thread, and on another processor exit_mmap()
> is active for the process which was running on the first cpu before
> the kernel thread.
> 
> As text_poke() is using a temporary address space and the former
> address space (obtained via cpu_tlbstate.loaded_mm) is restored
> afterwards, there is a race possible in case the cpu on which
> exit_mmap() is running wants to make sure there are no stale
> references to that address space on any cpu active (this e.g. is
> required when running as a Xen PV guest, where this problem has been
> observed and analyzed).
> 
> In order to avoid that, drop off TLB lazy mode before switching to the
> temporary address space.
> 
> Fixes: cefa929c034eb5d ("x86/mm: Introduce temporary mm structs")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Can anyone look at this, please? It fixes a real problem which has
been seen several times.


Juergen

> ---
>   arch/x86/kernel/alternative.c | 9 +++++++++
>   1 file changed, 9 insertions(+)
> 
> diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
> index cdaab30880b9..cd6be6f143e8 100644
> --- a/arch/x86/kernel/alternative.c
> +++ b/arch/x86/kernel/alternative.c
> @@ -807,6 +807,15 @@ static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
>   	temp_mm_state_t temp_state;
>   
>   	lockdep_assert_irqs_disabled();
> +
> +	/*
> +	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
> +	 * with a stale address space WITHOUT being in lazy mode after
> +	 * restoring the previous mm.
> +	 */
> +	if (this_cpu_read(cpu_tlbstate.is_lazy))
> +		leave_mm(smp_processor_id());
> +
>   	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
>   	switch_mm_irqs_off(NULL, mm, current);
>   
> 



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:26:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:26:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10300.27357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWrE-0002Tu-TO; Thu, 22 Oct 2020 09:26:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10300.27357; Thu, 22 Oct 2020 09:26:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVWrE-0002Tn-QS; Thu, 22 Oct 2020 09:26:04 +0000
Received: by outflank-mailman (input) for mailman id 10300;
 Thu, 22 Oct 2020 09:26:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mO8V=D5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVWrD-0002Ti-Si
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:26:03 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22415894-75c2-4a61-ab6c-9c8f46d9b6e4;
 Thu, 22 Oct 2020 09:26:03 +0000 (UTC)
X-Inumbo-ID: 22415894-75c2-4a61-ab6c-9c8f46d9b6e4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603358763;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=sYRjzewN0a2IME8RSLdVajEVL/ZhMXvyWZlOdP+c8j4=;
  b=iLiJm7pJ3QkqZuHNlQPIUgcNQ60n7RlO5KPciQCuqqjUYL6ANvyHa3g3
   QfbBpPNd4l0GkQPal645YP8mv40sC9cnmLjBSV/dd3GP7V7G6jMvS4Cni
   HOurFUlEV/J1bZYyN01zLDshGkpDBEdO/aWuYvUnKh6JttrqvpT1b0VgA
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ir8dgV9DBXFyznyP+Om+OQWgSq8sGtxLnlSKtoxLu0bsQFaqY5g+4++2AmyfuvOcYilZcnDUhf
 Uz8ldwNgbie06CBHFDfDPUfbGZ8w3/2C0toSHwO+Mj1WDN/Nkvv6riL+pFi9CifqQlX1PvoyPQ
 35cBT/sa7yNrMUWgzC7JoRC9gXblozSDYHnPrs8az2mh7AiAjwpLJzvQEktUbZHc4NpWok+6DR
 wNBDBdFrlteegfPjmDBb/EnL6Ydjsg3LI8K7iiwGst81Zn9wtHU+cvZ9rfPlS0xD4Pqrw0RDWK
 +f4=
X-SBRS: 2.5
X-MesageID: 29524186
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29524186"
Date: Thu, 22 Oct 2020 11:25:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
Message-ID: <20201022092553.a45yqdy7zsdivi3r@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
 <20201021154650.zz77ircyuedr7gpm@Air-de-Roger>
 <3fd4c197-617e-dd48-6781-9ff0b1a82bf8@suse.com>
 <20201022081100.bedkkvuqf7ymjpbl@Air-de-Roger>
 <2172763f-9f3d-588e-b4f2-0f9187a40ff9@suse.com>
 <20201022082938.jnpw7wvzuvxqa6iy@Air-de-Roger>
 <2bc15e6b-cc53-3092-2a56-492a302fbc1e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2bc15e6b-cc53-3092-2a56-492a302fbc1e@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Thu, Oct 22, 2020 at 10:56:15AM +0200, Jan Beulich wrote:
> On 22.10.2020 10:29, Roger Pau Monné wrote:
> > On Thu, Oct 22, 2020 at 10:15:45AM +0200, Jan Beulich wrote:
> >> On 22.10.2020 10:11, Roger Pau Monné wrote:
> >>> On Thu, Oct 22, 2020 at 09:33:27AM +0200, Jan Beulich wrote:
> >>>> On 21.10.2020 17:46, Roger Pau Monné wrote:
> >>>>> On Tue, Oct 20, 2020 at 04:08:13PM +0200, Jan Beulich wrote:
> >>>>>> @@ -80,8 +81,9 @@ static uint8_t get_xen_consumer(xen_even
> >>>>>>  
> >>>>>>      for ( i = 0; i < ARRAY_SIZE(xen_consumers); i++ )
> >>>>>>      {
> >>>>>> +        /* Use cmpxchgptr() in lieu of a global lock. */
> >>>>>>          if ( xen_consumers[i] == NULL )
> >>>>>> -            xen_consumers[i] = fn;
> >>>>>> +            cmpxchgptr(&xen_consumers[i], NULL, fn);
> >>>>>>          if ( xen_consumers[i] == fn )
> >>>>>>              break;
> >>>>>
> >>>>> I think you could join it as:
> >>>>>
> >>>>> if ( !xen_consumers[i] &&
> >>>>>      !cmpxchgptr(&xen_consumers[i], NULL, fn) )
> >>>>>     break;
> >>>>>
> >>>>> As cmpxchgptr returns the previous value of xen_consumers[i]?
> >>>>
> >>>> But then you also have to check whether the returned value is
> >>>> fn (or retain the 2nd if()).
> >>>
> >>> __cmpxchg comment says that success of the operation is indicated when
> >>> the returned value equals the old value, so it's my understanding that
> >>> cmpxchgptr returning NULL would mean the exchange has succeeded and that
> >>> xen_consumers[i] == fn?
> >>
> >> Correct. But if xen_consumers[i] == fn before the call, the return
> >> value will be fn. The cmpxchg() wasn't "successful" in this case
> >> (it didn't update anything), but the state of the array slot is what
> >> we want.
> > 
> > Oh, I get it now. You don't want the same fn populating more than one
> > slot.
> 
> FAOD it's not just "want", it's a strict requirement.

I wouldn't mind having a comment to that effect in the function, but I
won't insist.

Thanks, Roger.
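[Editor's note: for readers following the thread, the lock-free registration pattern discussed above can be sketched in plain C. This is an illustrative model, not the Xen source: the GCC `__sync_val_compare_and_swap()` builtin stands in for Xen's `cmpxchgptr()`, the consumer type is modeled as an opaque pointer, and `NUM_SLOTS` and `get_xen_consumer_sim()` are invented names.]

```c
#include <assert.h>
#include <stddef.h>

/* Opaque stand-in for Xen's xen_event_channel_notification_t callback
 * pointer; any distinct non-NULL value works for the demonstration. */
typedef void *xen_consumer_t;

#define NUM_SLOTS 8
static xen_consumer_t xen_consumers[NUM_SLOTS];

/*
 * Claim a slot for fn without a global lock, using a compare-and-swap
 * (the GCC builtin models Xen's cmpxchgptr()).  The cmpxchg either
 * installs fn into an empty slot or loses the race to another CPU;
 * the follow-up equality check covers both "we won" and "another CPU
 * already registered this same fn", so a given fn can never occupy
 * more than one slot -- the strict requirement Jan refers to.
 */
static int get_xen_consumer_sim(xen_consumer_t fn)
{
    unsigned int i;

    for (i = 0; i < NUM_SLOTS; i++) {
        if (xen_consumers[i] == NULL)
            __sync_val_compare_and_swap(&xen_consumers[i], NULL, fn);
        if (xen_consumers[i] == fn)
            break;
    }
    return i < NUM_SLOTS ? (int)i : -1;
}
```

Registering the same consumer twice yields the same slot index, which is why the second `if` cannot be folded into the cmpxchg result check alone.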


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:49:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:49:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10308.27393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDi-0004XP-D9; Thu, 22 Oct 2020 09:49:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10308.27393; Thu, 22 Oct 2020 09:49:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDi-0004XH-9e; Thu, 22 Oct 2020 09:49:18 +0000
Received: by outflank-mailman (input) for mailman id 10308;
 Thu, 22 Oct 2020 09:49:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVXDg-0004UF-QW
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:49:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05c960fd-bc91-47e3-97bb-11cfa3823f0f;
 Thu, 22 Oct 2020 09:49:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 000A5AE2B;
 Thu, 22 Oct 2020 09:49:10 +0000 (UTC)
X-Inumbo-ID: 05c960fd-bc91-47e3-97bb-11cfa3823f0f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603360151;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YVuxwJ3hyt25tYMDo7KA9FGUbENDEaFuW9/mMP2hjWY=;
	b=XdZFpfUX5Q4edukPRqbno2sAVNX/9lStsFCY0SbUMIZdMDeuyPiBXOXTtEtl5tfur0ohB+
	v75te7c9mliV8OqxcIfKDJgE5Igkrfgo8YRH5iJ8yl5l20L8mN4rIR4CY6qW1Sv+fYvd/3
	q+PdCKHsYamkNp0VFS8CEvO750kZgLQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 4/5] xen/events: unmask a fifo event channel only if it was masked
Date: Thu, 22 Oct 2020 11:49:06 +0200
Message-Id: <20201022094907.28560-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022094907.28560-1-jgross@suse.com>
References: <20201022094907.28560-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When fifo event channels are in use, unmasking an event channel can
require a hypercall, so try to avoid that by first checking whether
the event channel was actually masked.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- move test for already unmasked into loop (Jan Beulich)
---
 drivers/xen/events/events_fifo.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index 243e7b6d7b96..b234f1766810 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -237,6 +237,9 @@ static bool clear_masked_cond(volatile event_word_t *word)
 	w = *word;
 
 	do {
+		if (!(w & (1 << EVTCHN_FIFO_MASKED)))
+			return true;
+
 		if (w & (1 << EVTCHN_FIFO_PENDING))
 			return false;
 
-- 
2.26.2
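[Editor's note: the cmpxchg loop this patch modifies can be illustrated with a self-contained sketch. The bit positions follow Xen's FIFO event channel ABI (PENDING = 31, MASKED = 30); `__sync_val_compare_and_swap()` stands in for the kernel's `sync_cmpxchg()`, the BUSY-bit handling of the real `clear_masked_cond()` is omitted, and `clear_masked_cond_sim()` is an invented name.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit positions from Xen's FIFO event channel ABI. */
#define EVTCHN_FIFO_MASKED  30
#define EVTCHN_FIFO_PENDING 31

typedef uint32_t event_word_t;

/*
 * Simplified model of clear_masked_cond() after this patch: return
 * early ("unmask done") when the word is not masked at all, refuse
 * (return false, so the caller issues the unmask hypercall) when an
 * event is pending, and otherwise clear the MASKED bit atomically.
 */
static bool clear_masked_cond_sim(event_word_t *word)
{
    event_word_t new, old, w = *word;

    do {
        if (!(w & (1u << EVTCHN_FIFO_MASKED)))
            return true;            /* already unmasked: nothing to do */

        if (w & (1u << EVTCHN_FIFO_PENDING))
            return false;           /* masked and pending: need hypercall */

        old = w;
        new = old & ~(1u << EVTCHN_FIFO_MASKED);
        w = __sync_val_compare_and_swap(word, old, new);
    } while (w != old);

    return true;
}
```

The early return added by the patch is what lets an already-unmasked channel skip the hypercall path entirely.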



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:49:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:49:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10309.27406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDn-0004cK-Mg; Thu, 22 Oct 2020 09:49:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10309.27406; Thu, 22 Oct 2020 09:49:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDn-0004cD-JJ; Thu, 22 Oct 2020 09:49:23 +0000
Received: by outflank-mailman (input) for mailman id 10309;
 Thu, 22 Oct 2020 09:49:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVXDl-0004UF-Qc
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:49:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abb62816-e269-4a10-99fe-2f5207b54c08;
 Thu, 22 Oct 2020 09:49:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 84D3DAE16;
 Thu, 22 Oct 2020 09:49:10 +0000 (UTC)
X-Inumbo-ID: abb62816-e269-4a10-99fe-2f5207b54c08
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603360150;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JcGWYIUvEotSFKztpkFYqbmREQ/+iyUWzVR4PQ0+LJk=;
	b=FVGKiUpzk67MlNKVh4Iu1U1BNH7HNyRtgob7UIEojDNYRgy5bqCimWkTfFawttBS1JCp1m
	4s/dfZECEcREU0JN4D+kmmZiq44cLh369XQhEphsWT4/d9IWC2x72qR4vqmsrTjRt8YqKn
	lAmPkAY5dIh+SArXdwslzl46isURLSM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 2/5] xen/events: make struct irq_info private to events_base.c
Date: Thu, 22 Oct 2020 11:49:04 +0200
Message-Id: <20201022094907.28560-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022094907.28560-1-jgross@suse.com>
References: <20201022094907.28560-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The struct irq_info of Xen's event handling is used by only two
evtchn_ops functions outside of events_base.c. Those two functions can
easily be switched to avoid that usage.

This allows making struct irq_info and its related access functions
private to events_base.c.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 drivers/xen/events/events_2l.c       |  7 +--
 drivers/xen/events/events_base.c     | 63 ++++++++++++++++++++++---
 drivers/xen/events/events_fifo.c     |  6 +--
 drivers/xen/events/events_internal.h | 70 ++++------------------------
 4 files changed, 73 insertions(+), 73 deletions(-)

diff --git a/drivers/xen/events/events_2l.c b/drivers/xen/events/events_2l.c
index fe5ad0e89cd8..da87f3a1e351 100644
--- a/drivers/xen/events/events_2l.c
+++ b/drivers/xen/events/events_2l.c
@@ -47,10 +47,11 @@ static unsigned evtchn_2l_max_channels(void)
 	return EVTCHN_2L_NR_CHANNELS;
 }
 
-static void evtchn_2l_bind_to_cpu(struct irq_info *info, unsigned cpu)
+static void evtchn_2l_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
+				  unsigned int old_cpu)
 {
-	clear_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, info->cpu)));
-	set_bit(info->evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
+	clear_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, old_cpu)));
+	set_bit(evtchn, BM(per_cpu(cpu_evtchn_mask, cpu)));
 }
 
 static void evtchn_2l_clear_pending(evtchn_port_t port)
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 436682db41c5..1c25580c7691 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -70,6 +70,57 @@
 #undef MODULE_PARAM_PREFIX
 #define MODULE_PARAM_PREFIX "xen."
 
+/* Interrupt types. */
+enum xen_irq_type {
+	IRQT_UNBOUND = 0,
+	IRQT_PIRQ,
+	IRQT_VIRQ,
+	IRQT_IPI,
+	IRQT_EVTCHN
+};
+
+/*
+ * Packed IRQ information:
+ * type - enum xen_irq_type
+ * event channel - irq->event channel mapping
+ * cpu - cpu this event channel is bound to
+ * index - type-specific information:
+ *    PIRQ - vector, with MSB being "needs EOI", or physical IRQ of the HVM
+ *           guest, or GSI (real passthrough IRQ) of the device.
+ *    VIRQ - virq number
+ *    IPI - IPI vector
+ *    EVTCHN -
+ */
+struct irq_info {
+	struct list_head list;
+	struct list_head eoi_list;
+	short refcnt;
+	short spurious_cnt;
+	enum xen_irq_type type; /* type */
+	unsigned irq;
+	evtchn_port_t evtchn;   /* event channel */
+	unsigned short cpu;     /* cpu bound */
+	unsigned short eoi_cpu; /* EOI must happen on this cpu-1 */
+	unsigned int irq_epoch; /* If eoi_cpu valid: irq_epoch of event */
+	u64 eoi_time;           /* Time in jiffies when to EOI. */
+
+	union {
+		unsigned short virq;
+		enum ipi_vector ipi;
+		struct {
+			unsigned short pirq;
+			unsigned short gsi;
+			unsigned char vector;
+			unsigned char flags;
+			uint16_t domid;
+		} pirq;
+	} u;
+};
+
+#define PIRQ_NEEDS_EOI	(1 << 0)
+#define PIRQ_SHAREABLE	(1 << 1)
+#define PIRQ_MSI_GROUP	(1 << 2)
+
 static uint __read_mostly event_loop_timeout = 2;
 module_param(event_loop_timeout, uint, 0644);
 
@@ -110,7 +161,7 @@ static DEFINE_PER_CPU(int [NR_VIRQS], virq_to_irq) = {[0 ... NR_VIRQS-1] = -1};
 /* IRQ <-> IPI mapping */
 static DEFINE_PER_CPU(int [XEN_NR_IPIS], ipi_to_irq) = {[0 ... XEN_NR_IPIS-1] = -1};
 
-int **evtchn_to_irq;
+static int **evtchn_to_irq;
 #ifdef CONFIG_X86
 static unsigned long *pirq_eoi_map;
 #endif
@@ -190,7 +241,7 @@ int get_evtchn_to_irq(evtchn_port_t evtchn)
 }
 
 /* Get info for IRQ */
-struct irq_info *info_for_irq(unsigned irq)
+static struct irq_info *info_for_irq(unsigned irq)
 {
 	if (irq < nr_legacy_irqs())
 		return legacy_info_ptrs[irq];
@@ -228,7 +279,7 @@ static int xen_irq_info_common_setup(struct irq_info *info,
 
 	irq_clear_status_flags(irq, IRQ_NOREQUEST|IRQ_NOAUTOEN);
 
-	return xen_evtchn_port_setup(info);
+	return xen_evtchn_port_setup(evtchn);
 }
 
 static int xen_irq_info_evtchn_setup(unsigned irq,
@@ -351,7 +402,7 @@ static enum xen_irq_type type_from_irq(unsigned irq)
 	return info_for_irq(irq)->type;
 }
 
-unsigned cpu_from_irq(unsigned irq)
+static unsigned cpu_from_irq(unsigned irq)
 {
 	return info_for_irq(irq)->cpu;
 }
@@ -391,7 +442,7 @@ static void bind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int cpu)
 #ifdef CONFIG_SMP
 	cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(cpu));
 #endif
-	xen_evtchn_port_bind_to_cpu(info, cpu);
+	xen_evtchn_port_bind_to_cpu(evtchn, cpu, info->cpu);
 
 	info->cpu = cpu;
 }
@@ -745,7 +796,7 @@ static unsigned int __startup_pirq(unsigned int irq)
 	info->evtchn = evtchn;
 	bind_evtchn_to_cpu(evtchn, 0);
 
-	rc = xen_evtchn_port_setup(info);
+	rc = xen_evtchn_port_setup(evtchn);
 	if (rc)
 		goto err;
 
diff --git a/drivers/xen/events/events_fifo.c b/drivers/xen/events/events_fifo.c
index 6085a808da95..243e7b6d7b96 100644
--- a/drivers/xen/events/events_fifo.c
+++ b/drivers/xen/events/events_fifo.c
@@ -138,9 +138,8 @@ static void init_array_page(event_word_t *array_page)
 		array_page[i] = 1 << EVTCHN_FIFO_MASKED;
 }
 
-static int evtchn_fifo_setup(struct irq_info *info)
+static int evtchn_fifo_setup(evtchn_port_t port)
 {
-	evtchn_port_t port = info->evtchn;
 	unsigned new_array_pages;
 	int ret;
 
@@ -186,7 +185,8 @@ static int evtchn_fifo_setup(struct irq_info *info)
 	return ret;
 }
 
-static void evtchn_fifo_bind_to_cpu(struct irq_info *info, unsigned cpu)
+static void evtchn_fifo_bind_to_cpu(evtchn_port_t evtchn, unsigned int cpu, 
+				    unsigned int old_cpu)
 {
 	/* no-op */
 }
diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
index 82937d90d7d7..0a97c0549db7 100644
--- a/drivers/xen/events/events_internal.h
+++ b/drivers/xen/events/events_internal.h
@@ -7,65 +7,15 @@
 #ifndef __EVENTS_INTERNAL_H__
 #define __EVENTS_INTERNAL_H__
 
-/* Interrupt types. */
-enum xen_irq_type {
-	IRQT_UNBOUND = 0,
-	IRQT_PIRQ,
-	IRQT_VIRQ,
-	IRQT_IPI,
-	IRQT_EVTCHN
-};
-
-/*
- * Packed IRQ information:
- * type - enum xen_irq_type
- * event channel - irq->event channel mapping
- * cpu - cpu this event channel is bound to
- * index - type-specific information:
- *    PIRQ - vector, with MSB being "needs EOI", or physical IRQ of the HVM
- *           guest, or GSI (real passthrough IRQ) of the device.
- *    VIRQ - virq number
- *    IPI - IPI vector
- *    EVTCHN -
- */
-struct irq_info {
-	struct list_head list;
-	struct list_head eoi_list;
-	short refcnt;
-	short spurious_cnt;
-	enum xen_irq_type type;	/* type */
-	unsigned irq;
-	evtchn_port_t evtchn;	/* event channel */
-	unsigned short cpu;	/* cpu bound */
-	unsigned short eoi_cpu;	/* EOI must happen on this cpu */
-	unsigned int irq_epoch;	/* If eoi_cpu valid: irq_epoch of event */
-	u64 eoi_time;		/* Time in jiffies when to EOI. */
-
-	union {
-		unsigned short virq;
-		enum ipi_vector ipi;
-		struct {
-			unsigned short pirq;
-			unsigned short gsi;
-			unsigned char vector;
-			unsigned char flags;
-			uint16_t domid;
-		} pirq;
-	} u;
-};
-
-#define PIRQ_NEEDS_EOI	(1 << 0)
-#define PIRQ_SHAREABLE	(1 << 1)
-#define PIRQ_MSI_GROUP	(1 << 2)
-
 struct evtchn_loop_ctrl;
 
 struct evtchn_ops {
 	unsigned (*max_channels)(void);
 	unsigned (*nr_channels)(void);
 
-	int (*setup)(struct irq_info *info);
-	void (*bind_to_cpu)(struct irq_info *info, unsigned cpu);
+	int (*setup)(evtchn_port_t port);
+	void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
+			    unsigned int old_cpu);
 
 	void (*clear_pending)(evtchn_port_t port);
 	void (*set_pending)(evtchn_port_t port);
@@ -83,12 +33,9 @@ struct evtchn_ops {
 
 extern const struct evtchn_ops *evtchn_ops;
 
-extern int **evtchn_to_irq;
 int get_evtchn_to_irq(evtchn_port_t evtchn);
 void handle_irq_for_port(evtchn_port_t port, struct evtchn_loop_ctrl *ctrl);
 
-struct irq_info *info_for_irq(unsigned irq);
-unsigned cpu_from_irq(unsigned irq);
 unsigned int cpu_from_evtchn(evtchn_port_t evtchn);
 
 static inline unsigned xen_evtchn_max_channels(void)
@@ -100,17 +47,18 @@ static inline unsigned xen_evtchn_max_channels(void)
  * Do any ABI specific setup for a bound event channel before it can
  * be unmasked and used.
  */
-static inline int xen_evtchn_port_setup(struct irq_info *info)
+static inline int xen_evtchn_port_setup(evtchn_port_t evtchn)
 {
 	if (evtchn_ops->setup)
-		return evtchn_ops->setup(info);
+		return evtchn_ops->setup(evtchn);
 	return 0;
 }
 
-static inline void xen_evtchn_port_bind_to_cpu(struct irq_info *info,
-					       unsigned cpu)
+static inline void xen_evtchn_port_bind_to_cpu(evtchn_port_t evtchn,
+					       unsigned int cpu,
+					       unsigned int old_cpu)
 {
-	evtchn_ops->bind_to_cpu(info, cpu);
+	evtchn_ops->bind_to_cpu(evtchn, cpu, old_cpu);
 }
 
 static inline void clear_evtchn(evtchn_port_t port)
-- 
2.26.2
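[Editor's note: the information-hiding pattern applied in this patch — ABI callbacks receive the plain event channel number instead of a pointer into the now-private struct irq_info — can be modeled with a small sketch. All `*_sim` names, the single-field ops table, and `last_bound_cpu` are invented for illustration.]

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t evtchn_port_t;

/* ABI callback table: after the patch, callbacks are keyed on the
 * event channel number plus plain cpu values, not on struct irq_info. */
struct evtchn_ops_sim {
    void (*bind_to_cpu)(evtchn_port_t evtchn, unsigned int cpu,
                        unsigned int old_cpu);
};

/* --- private to "events_base.c" in this model --- */
struct irq_info_sim {
    evtchn_port_t evtchn;
    unsigned int cpu;
};

static unsigned int last_bound_cpu;

static void bind_to_cpu_2l_sim(evtchn_port_t evtchn, unsigned int cpu,
                               unsigned int old_cpu)
{
    (void)evtchn; (void)old_cpu;
    last_bound_cpu = cpu;   /* stand-in for the per-cpu bitmask update */
}

static const struct evtchn_ops_sim ops = { .bind_to_cpu = bind_to_cpu_2l_sim };

/* The core keeps irq_info private and hands the ABI only plain values,
 * mirroring the new bind_evtchn_to_cpu() shape. */
static void bind_evtchn_to_cpu_sim(struct irq_info_sim *info, unsigned int cpu)
{
    ops.bind_to_cpu(info->evtchn, cpu, info->cpu);
    info->cpu = cpu;
}
```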



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:49:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:49:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10307.27382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDf-0004VO-5N; Thu, 22 Oct 2020 09:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10307.27382; Thu, 22 Oct 2020 09:49:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDf-0004VH-1a; Thu, 22 Oct 2020 09:49:15 +0000
Received: by outflank-mailman (input) for mailman id 10307;
 Thu, 22 Oct 2020 09:49:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVXDd-0004Uo-9w
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:49:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0925a2cd-08e7-45d3-b8b0-c3fcc99589ae;
 Thu, 22 Oct 2020 09:49:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CC373AE42;
 Thu, 22 Oct 2020 09:49:10 +0000 (UTC)
X-Inumbo-ID: 0925a2cd-08e7-45d3-b8b0-c3fcc99589ae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603360150;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I8ggVRtWIXEDLgeQFmioHekHeyrBQ6I5PgwEHN8hSsU=;
	b=Ju+czaxvRC6i22B87p6i1lSEv88o6z4+BLHbqlqBexcmz64qzWCTf4VZdzQgbq07WzQ3Am
	iuBhHY6Tskp2xObJCN0baZ6MoRfZ6UGmGpKaIAMDuGBlX7MjjxzQQM47vZpzCmxQGG/D8Y
	PuQVBSvvao19fFAPagIdg5YcBQ4XNK0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	x86@kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 3/5] xen/events: only register debug interrupt for 2-level events
Date: Thu, 22 Oct 2020 11:49:05 +0200
Message-Id: <20201022094907.28560-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022094907.28560-1-jgross@suse.com>
References: <20201022094907.28560-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen_debug_interrupt() is specific to 2-level event handling, so don't
register it when fifo event handling is active.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- rename fifo_events variable to xen_fifo_events (Jan Beulich)
---
 arch/x86/xen/smp.c               | 19 +++++++++++--------
 arch/x86/xen/xen-ops.h           |  2 ++
 drivers/xen/events/events_base.c | 10 ++++++----
 3 files changed, 19 insertions(+), 12 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 2097fa0ebdb5..c1b2f764b29a 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -88,14 +88,17 @@ int xen_smp_intr_init(unsigned int cpu)
 	per_cpu(xen_callfunc_irq, cpu).irq = rc;
 	per_cpu(xen_callfunc_irq, cpu).name = callfunc_name;
 
-	debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
-	rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu, xen_debug_interrupt,
-				     IRQF_PERCPU | IRQF_NOBALANCING,
-				     debug_name, NULL);
-	if (rc < 0)
-		goto fail;
-	per_cpu(xen_debug_irq, cpu).irq = rc;
-	per_cpu(xen_debug_irq, cpu).name = debug_name;
+	if (!xen_fifo_events) {
+		debug_name = kasprintf(GFP_KERNEL, "debug%d", cpu);
+		rc = bind_virq_to_irqhandler(VIRQ_DEBUG, cpu,
+					     xen_debug_interrupt,
+					     IRQF_PERCPU | IRQF_NOBALANCING,
+					     debug_name, NULL);
+		if (rc < 0)
+			goto fail;
+		per_cpu(xen_debug_irq, cpu).irq = rc;
+		per_cpu(xen_debug_irq, cpu).name = debug_name;
+	}
 
 	callfunc_name = kasprintf(GFP_KERNEL, "callfuncsingle%d", cpu);
 	rc = bind_ipi_to_irqhandler(XEN_CALL_FUNCTION_SINGLE_VECTOR,
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 45d556f71858..9546c3384c75 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -29,6 +29,8 @@ extern struct start_info *xen_start_info;
 extern struct shared_info xen_dummy_shared_info;
 extern struct shared_info *HYPERVISOR_shared_info;
 
+extern bool xen_fifo_events;
+
 void xen_setup_mfn_list_list(void);
 void xen_build_mfn_list_list(void);
 void xen_setup_machphys_mapping(void);
diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 1c25580c7691..6038c4c35db5 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -2050,8 +2050,8 @@ void xen_setup_callback_vector(void) {}
 static inline void xen_alloc_callback_vector(void) {}
 #endif
 
-static bool fifo_events = true;
-module_param(fifo_events, bool, 0);
+bool xen_fifo_events = true;
+module_param_named(fifo_events, xen_fifo_events, bool, 0);
 
 static int xen_evtchn_cpu_prepare(unsigned int cpu)
 {
@@ -2080,10 +2080,12 @@ void __init xen_init_IRQ(void)
 	int ret = -EINVAL;
 	evtchn_port_t evtchn;
 
-	if (fifo_events)
+	if (xen_fifo_events)
 		ret = xen_evtchn_fifo_init();
-	if (ret < 0)
+	if (ret < 0) {
 		xen_evtchn_2l_init();
+		xen_fifo_events = false;
+	}
 
 	xen_cpu_init_eoi(smp_processor_id());
 
-- 
2.26.2
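[Editor's note: the init-time fallback this patch introduces — clearing the flag when fifo setup fails so later consumers, such as the debug-IRQ registration, see the mode actually in use — can be modeled as follows. The `*_sim` names and the `fifo_init_result`/`using_2l` hooks are invented for the demonstration.]

```c
#include <assert.h>
#include <stdbool.h>

static bool xen_fifo_events = true;   /* module parameter default */

/* Hypothetical hook results, for demonstration only. */
static int fifo_init_result;
static bool using_2l;

static int xen_evtchn_fifo_init_sim(void) { return fifo_init_result; }
static void xen_evtchn_2l_init_sim(void) { using_2l = true; }

/* Mirrors the new xen_init_IRQ() logic: on fifo init failure, fall
 * back to 2-level and record that fifo events are not in use. */
static void xen_init_irq_sim(void)
{
    int ret = -1;

    using_2l = false;
    if (xen_fifo_events)
        ret = xen_evtchn_fifo_init_sim();
    if (ret < 0) {
        xen_evtchn_2l_init_sim();
        xen_fifo_events = false;
    }
}
```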



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:49:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10310.27418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDs-0004hA-15; Thu, 22 Oct 2020 09:49:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10310.27418; Thu, 22 Oct 2020 09:49:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDr-0004h1-T7; Thu, 22 Oct 2020 09:49:27 +0000
Received: by outflank-mailman (input) for mailman id 10310;
 Thu, 22 Oct 2020 09:49:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVXDq-0004UF-Qi
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:49:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07e93d5b-33d8-42b6-bf18-f0670b67f9c7;
 Thu, 22 Oct 2020 09:49:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A7AAAE3B;
 Thu, 22 Oct 2020 09:49:11 +0000 (UTC)
X-Inumbo-ID: 07e93d5b-33d8-42b6-bf18-f0670b67f9c7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603360151;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Crg06KGGfBCWdVR6siqyRllme38SLEsgnrcw++z2EXs=;
	b=ntkw3Oa1P44/sgDd0FbtwXok4p2xU0vrbQGLG4PPE9rd1nOYQw27Ptor1TwPnXkxR18tKM
	ry1Zlp/JBcMLyM9Aux5JT3P9ciKA7Tdnn+EJKLeFFQV6SrPeC5bYYGXpA2ToqOzp4trx4+
	93O6GQYNhH18xBfwGqp33Jge1+eOrf0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Jonathan Corbet <corbet@lwn.net>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 5/5] Documentation: add xen.fifo_events kernel parameter description
Date: Thu, 22 Oct 2020 11:49:07 +0200
Message-Id: <20201022094907.28560-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022094907.28560-1-jgross@suse.com>
References: <20201022094907.28560-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The kernel boot parameter xen.fifo_events isn't listed in
Documentation/admin-guide/kernel-parameters.txt. Add it.
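
For illustration only (not part of the patch): the parameter is set on the
guest kernel command line. A minimal sketch of how that might look in a
Debian-style GRUB configuration follows; the file path and regeneration
command are distribution-specific assumptions, not anything mandated by the
patch. Using `=0` is the safest spelling, since the kernel's boolean
parameter parser accepts `0`/`1`/`n`/`y`:

```shell
# /etc/default/grub (assumed Debian-style layout).
# Disable the fifo-based event channel ABI, falling back to
# 2-level event handling:
GRUB_CMDLINE_LINUX="xen.fifo_events=0"

# Then regenerate the GRUB configuration, e.g. (distribution-dependent):
#   update-grub                                # Debian/Ubuntu
#   grub2-mkconfig -o /boot/grub2/grub.cfg     # SUSE/Fedora
```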

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 02d4adbf98d2..526d65d8573a 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -5978,6 +5978,13 @@
 			After which time (jiffies) the event handling loop
 			should start to delay EOI handling. Default is 2.
 
+	xen.fifo_events=	[XEN]
+			Boolean parameter to disable using fifo event handling
+			even if available. Normally fifo event handling is
+			preferred over the 2-level event handling, as it is
+			fairer and the number of possible event channels is
+			much higher. Default is on (use fifo events).
+
 	nopv=		[X86,XEN,KVM,HYPER_V,VMWARE]
 			Disables the PV optimizations forcing the guest to run
 			as generic guest with no PV drivers. Currently support
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:49:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:49:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10306.27369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDc-0004UR-T6; Thu, 22 Oct 2020 09:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10306.27369; Thu, 22 Oct 2020 09:49:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDc-0004UK-QA; Thu, 22 Oct 2020 09:49:12 +0000
Received: by outflank-mailman (input) for mailman id 10306;
 Thu, 22 Oct 2020 09:49:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVXDb-0004UF-T9
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:49:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fd17f21-220d-4572-8d11-e6f15370c0c5;
 Thu, 22 Oct 2020 09:49:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4A394AD87;
 Thu, 22 Oct 2020 09:49:10 +0000 (UTC)
X-Inumbo-ID: 8fd17f21-220d-4572-8d11-e6f15370c0c5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603360150;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=h/u/r/R71NxCYcOyxSXeNfPtXwCtEz7D63O5chrEOVo=;
	b=UMv6UT0andZFdqJzA9T57KO5xZgBpaxBp0MwYbcL2oqPEIKF4LBgN2CJGiUBElIEmMfytC
	PIHXqXVWKV6fLLkXApYo0DHCHCb6ywsYd7UdSv0INLKi5dxaNWaF9pvD8e5FIpQL5rADuu
	jw/AAb47AszHf01DLw5KDCaXH39KMdA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	x86@kernel.org,
	linux-doc@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Jonathan Corbet <corbet@lwn.net>
Subject: [PATCH v2 0/5] xen: event handling cleanup
Date: Thu, 22 Oct 2020 11:49:02 +0200
Message-Id: <20201022094907.28560-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do some cleanups in Xen event handling code.

Changes in V2:
- addressed comments

Juergen Gross (5):
  xen: remove no longer used functions
  xen/events: make struct irq_info private to events_base.c
  xen/events: only register debug interrupt for 2-level events
  xen/events: unmask a fifo event channel only if it was masked
  Documentation: add xen.fifo_events kernel parameter description

 .../admin-guide/kernel-parameters.txt         |  7 ++
 arch/x86/xen/smp.c                            | 19 ++--
 arch/x86/xen/xen-ops.h                        |  2 +
 drivers/xen/events/events_2l.c                |  7 +-
 drivers/xen/events/events_base.c              | 94 +++++++++++++------
 drivers/xen/events/events_fifo.c              |  9 +-
 drivers/xen/events/events_internal.h          | 70 ++------------
 include/xen/events.h                          |  8 --
 8 files changed, 102 insertions(+), 114 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 09:49:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 09:49:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10311.27430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDx-0004nU-Cx; Thu, 22 Oct 2020 09:49:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10311.27430; Thu, 22 Oct 2020 09:49:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXDx-0004nK-9F; Thu, 22 Oct 2020 09:49:33 +0000
Received: by outflank-mailman (input) for mailman id 10311;
 Thu, 22 Oct 2020 09:49:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVXDv-0004UF-Qq
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 09:49:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a2e997f-e878-4f8d-92e1-2b81462c9c11;
 Thu, 22 Oct 2020 09:49:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5E94AADBB;
 Thu, 22 Oct 2020 09:49:10 +0000 (UTC)
X-Inumbo-ID: 6a2e997f-e878-4f8d-92e1-2b81462c9c11
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603360150;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+fw6jToo0XyIGVFmL1AVSJxKibDPcDAXWlsALH4ZCRY=;
	b=CLdksM9pz7V631JF9izi4td+aUTHbmuBiNNKiAlpgPZZOlD/X7PNWLhY9NFlnFx4v8V3BF
	C5TkzCSt4gQOCwGWGFtzZolGoDTfA2/WPRlTd6FISBXFXMCVgkcsSp0XP1kwaYaBxlQwy+
	jajaQ5HVad/0I+o7XEuXJhw2OYrYnro=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 1/5] xen: remove no longer used functions
Date: Thu, 22 Oct 2020 11:49:03 +0200
Message-Id: <20201022094907.28560-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201022094907.28560-1-jgross@suse.com>
References: <20201022094907.28560-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

With the switch to the lateeoi model for interdomain event channels
some functions are no longer in use. Remove them.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 drivers/xen/events/events_base.c | 21 ---------------------
 include/xen/events.h             |  8 --------
 2 files changed, 29 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index cc317739e786..436682db41c5 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1145,14 +1145,6 @@ static int bind_interdomain_evtchn_to_irq_chip(unsigned int remote_domain,
 					       chip);
 }
 
-int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
-				   evtchn_port_t remote_port)
-{
-	return bind_interdomain_evtchn_to_irq_chip(remote_domain, remote_port,
-						   &xen_dynamic_chip);
-}
-EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irq);
-
 int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
 					   evtchn_port_t remote_port)
 {
@@ -1320,19 +1312,6 @@ static int bind_interdomain_evtchn_to_irqhandler_chip(
 	return irq;
 }
 
-int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
-					  evtchn_port_t remote_port,
-					  irq_handler_t handler,
-					  unsigned long irqflags,
-					  const char *devname,
-					  void *dev_id)
-{
-	return bind_interdomain_evtchn_to_irqhandler_chip(remote_domain,
-				remote_port, handler, irqflags, devname,
-				dev_id, &xen_dynamic_chip);
-}
-EXPORT_SYMBOL_GPL(bind_interdomain_evtchn_to_irqhandler);
-
 int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
diff --git a/include/xen/events.h b/include/xen/events.h
index 3b8155c2ea03..8ec418e30c7f 100644
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -35,16 +35,8 @@ int bind_ipi_to_irqhandler(enum ipi_vector ipi,
 			   unsigned long irqflags,
 			   const char *devname,
 			   void *dev_id);
-int bind_interdomain_evtchn_to_irq(unsigned int remote_domain,
-				   evtchn_port_t remote_port);
 int bind_interdomain_evtchn_to_irq_lateeoi(unsigned int remote_domain,
 					   evtchn_port_t remote_port);
-int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
-					  evtchn_port_t remote_port,
-					  irq_handler_t handler,
-					  unsigned long irqflags,
-					  const char *devname,
-					  void *dev_id);
 int bind_interdomain_evtchn_to_irqhandler_lateeoi(unsigned int remote_domain,
 						  evtchn_port_t remote_port,
 						  irq_handler_t handler,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 10:18:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 10:18:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10326.27445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXgG-00081h-2E; Thu, 22 Oct 2020 10:18:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10326.27445; Thu, 22 Oct 2020 10:18:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXgF-00081a-Ue; Thu, 22 Oct 2020 10:18:47 +0000
Received: by outflank-mailman (input) for mailman id 10326;
 Thu, 22 Oct 2020 10:18:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVXgE-00080w-Qk
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 10:18:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 727ab74d-2c6a-4a96-9ab8-0deb39b41888;
 Thu, 22 Oct 2020 10:18:39 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVXg6-000592-WE; Thu, 22 Oct 2020 10:18:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVXg6-0006e6-Lf; Thu, 22 Oct 2020 10:18:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVXg6-0002IU-LB; Thu, 22 Oct 2020 10:18:38 +0000
X-Inumbo-ID: 727ab74d-2c6a-4a96-9ab8-0deb39b41888
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g4VK9AI2JVfBTnqPa+tOzdHlwVNm0JLtQ0JnVnL2DdQ=; b=fzqABLWlMw400E2/pMzj5MOgai
	Icqm3lZ5RXUEXQn0BMxvQjf+HDxohKH/XFmQRSwIChq0FiROCM+Dklnom2EjqXGVg0qg1SOIgT3qZ
	FcvbvIspB3lCxAt1Zvuk3LfexR2rkVuvwi3eGe+UQRAPbEBMtax4ng/HazNLKzNr023k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156054-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 156054: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dc38c1103cfdc643860e10c1b9e925dac83332dc
X-Osstest-Versions-That:
    xen=8e7e5857a203c9d9df7733fd68768555c7e76839
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 10:18:38 +0000

flight 156054 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156054/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155377
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155377
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155377
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155377
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155377
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155377
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155377
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155377
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155377
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155377
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155377
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  dc38c1103cfdc643860e10c1b9e925dac83332dc
baseline version:
 xen                  8e7e5857a203c9d9df7733fd68768555c7e76839

Last test of basis   155377  2020-10-03 13:48:36 Z   18 days
Testing same since   156030  2020-10-20 13:06:12 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Chen Yu <yu.c.chen@intel.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   8e7e5857a2..dc38c1103c  dc38c1103cfdc643860e10c1b9e925dac83332dc -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 10:21:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 10:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10330.27460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXim-0000QX-HG; Thu, 22 Oct 2020 10:21:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10330.27460; Thu, 22 Oct 2020 10:21:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXim-0000QQ-DZ; Thu, 22 Oct 2020 10:21:24 +0000
Received: by outflank-mailman (input) for mailman id 10330;
 Thu, 22 Oct 2020 10:21:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O+cM=D5=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
 id 1kVXil-0000QL-8I
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 10:21:23 +0000
Received: from mail-ot1-x343.google.com (unknown [2607:f8b0:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd47cfdd-4aa9-457d-9bc5-89abb3bc3ca1;
 Thu, 22 Oct 2020 10:21:21 +0000 (UTC)
Received: by mail-ot1-x343.google.com with SMTP id t15so1020081otk.0
 for <xen-devel@lists.xenproject.org>; Thu, 22 Oct 2020 03:21:21 -0700 (PDT)
X-Inumbo-ID: bd47cfdd-4aa9-457d-9bc5-89abb3bc3ca1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=5NIyadkpsx3TzJkX5ZsP+SweN4sKS84ZLyEuupL4jV0=;
        b=kX5gvpbI6Hf2MYOKYuXkmRgr2kaaY9M5LbYCnRyLjOp/Lz02Os2A6z4Aq+rsZLosJ2
         M40DsB8reXplQuB/qrDWVA5FqGJtYFM6i8vJW9MvS++v1iXLrhj8qzSZ7rNR1n5yxmI7
         vyrNJoQc1BgrM//GO5oEndyGtaxbYgwsx17/8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=5NIyadkpsx3TzJkX5ZsP+SweN4sKS84ZLyEuupL4jV0=;
        b=nAtaoMJpE0h8CcEETHkgrlFCH42ooMtQzcfJiBaEVMUHDDdYQACHMbeHhf8YElXqVm
         T+ijpZnDgYjvtMCKGcaXREz4uxTHQNX997YnLDb+9bzTB58bbtHpYKxWKIb7ZF1uRdgo
         QJ6QXAf395LUj8G0xc8rgDMvrqwUR5g1VXybI3d3F/JSGcu6Y74QYNrYJLdw4XaJ/oX1
         /L3xz+KQ7cRZN9xaT2MlEDTSS4k1urejCWjp/jTCEvZ7QElU0kReao63lwGH/s8yJX3u
         SQDdJWJ3sBOFAGN9rXYYhrBfWLEUu3LwATnJl7iEA+1Znqp2jOdh9OGWXiXjufIUumuI
         AQdw==
X-Gm-Message-State: AOAM532pwJw1mZm9uw46E8RK4/0yTEWZ8Dwj24QfKv6NGVmZKKeI7/JN
	aojN+XAiaGACIjp/v3h5csTxCK239S1+rAMLEuoh2A==
X-Google-Smtp-Source: ABdhPJygQ9KUnAKAlt6h9nc1uLqv50lVI2ThhRnR4aecj/Qy/MtijtMr1CHsIeYGKKQcbMMd5YG/A53hu9KqBTBsBQ8=
X-Received: by 2002:a05:6830:8b:: with SMTP id a11mr1304398oto.303.1603362080697;
 Thu, 22 Oct 2020 03:21:20 -0700 (PDT)
MIME-Version: 1.0
References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-9-tzimmermann@suse.de>
 <20201022084919.GU401619@phenom.ffwll.local> <f2d83a8b-91b3-ac64-b77f-2b1c78729014@suse.de>
In-Reply-To: <f2d83a8b-91b3-ac64-b77f-2b1c78729014@suse.de>
From: Daniel Vetter <daniel@ffwll.ch>
Date: Thu, 22 Oct 2020 12:21:07 +0200
Message-ID: <CAKMK7uFek_A-rFjBc7UUny8TUYx_9dk+-QzsTZFc93X0O=b1aA@mail.gmail.com>
Subject: Re: [PATCH v5 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Luben Tuikov <luben.tuikov@amd.com>, Dave Airlie <airlied@linux.ie>, 
	Nouveau Dev <nouveau@lists.freedesktop.org>, dri-devel <dri-devel@lists.freedesktop.org>, 
	"Wilson, Chris" <chris@chris-wilson.co.uk>, Melissa Wen <melissa.srw@gmail.com>, 
	Huang Rui <ray.huang@amd.com>, Gerd Hoffmann <kraxel@redhat.com>, Sam Ravnborg <sam@ravnborg.org>, 
	Emil Velikov <emil.velikov@collabora.com>, 
	linux-samsung-soc <linux-samsung-soc@vger.kernel.org>, 
	Joonyoung Shim <jy0922.shim@samsung.com>, lima@lists.freedesktop.org, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Krzysztof Kozlowski <krzk@kernel.org>, 
	Steven Price <steven.price@arm.com>, 
	"open list:ARM/Rockchip SoC..." <linux-rockchip@lists.infradead.org>, Kukjin Kim <kgene@kernel.org>, 
	Ben Skeggs <bskeggs@redhat.com>, Russell King <linux+etnaviv@armlinux.org.uk>, 
	"open list:DRM DRIVER FOR QXL VIRTUAL GPU" <spice-devel@lists.freedesktop.org>, 
	Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>, 
	The etnaviv authors <etnaviv@lists.freedesktop.org>, Hans de Goede <hdegoede@redhat.com>, 
	"moderated list:DRM DRIVERS FOR XEN" <xen-devel@lists.xenproject.org>, 
	"open list:VIRTIO CORE, NET..." <virtualization@lists.linux-foundation.org>, Sean Paul <sean@poorly.run>, 
	apaneers@amd.com, Linux ARM <linux-arm-kernel@lists.infradead.org>, 
	"moderated list:DMA BUFFER SHARING FRAMEWORK" <linaro-mm-sig@lists.linaro.org>, 
	amd-gfx list <amd-gfx@lists.freedesktop.org>, Tomeu Vizoso <tomeu.vizoso@collabora.com>, 
	Seung-Woo Kim <sw0312.kim@samsung.com>, Sandy Huang <hjc@rock-chips.com>, 
	Kyungmin Park <kyungmin.park@samsung.com>, Qinglang Miao <miaoqinglang@huawei.com>, 
	Qiang Yu <yuq825@gmail.com>, Alex Deucher <alexander.deucher@amd.com>, 
	"open list:DMA BUFFER SHARING FRAMEWORK" <linux-media@vger.kernel.org>, Christian König <christian.koenig@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 22, 2020 at 11:18 AM Thomas Zimmermann <tzimmermann@suse.de> wrote:
>
> Hi
>
> On 22.10.20 10:49, Daniel Vetter wrote:
> > On Tue, Oct 20, 2020 at 02:20:44PM +0200, Thomas Zimmermann wrote:
> >> Kernel DRM clients now store their framebuffer address in an instance
> >> of struct dma_buf_map. Depending on the buffer's location, the address
> >> refers to system or I/O memory.
> >>
> >> Callers of drm_client_buffer_vmap() receive a copy of the value in
> >> the call's supplied arguments. It can be accessed and modified with
> >> dma_buf_map interfaces.
> >>
> >> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> >> Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
> >> Tested-by: Sam Ravnborg <sam@ravnborg.org>
> >> ---
> >>  drivers/gpu/drm/drm_client.c    | 34 +++++++++++++--------------
> >>  drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++---------
> >>  include/drm/drm_client.h        |  7 ++++---
> >>  3 files changed, 38 insertions(+), 26 deletions(-)
> >>
> >> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
> >> index ac0082bed966..fe573acf1067 100644
> >> --- a/drivers/gpu/drm/drm_client.c
> >> +++ b/drivers/gpu/drm/drm_client.c
> >> @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
> >>  {
> >>      struct drm_device *dev = buffer->client->dev;
> >>
> >> -    drm_gem_vunmap(buffer->gem, buffer->vaddr);
> >> +    drm_gem_vunmap(buffer->gem, &buffer->map);
> >>
> >>      if (buffer->gem)
> >>              drm_gem_object_put(buffer->gem);
> >> @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
> >>  /**
> >>   * drm_client_buffer_vmap - Map DRM client buffer into address space
> >>   * @buffer: DRM client buffer
> >> + * @map_copy: Returns the mapped memory's address
> >>   *
> >>   * This function maps a client buffer into kernel address space. If the
> >> - * buffer is already mapped, it returns the mapping's address.
> >> + * buffer is already mapped, it returns the existing mapping's address.
> >>   *
> >>   * Client buffer mappings are not ref'counted. Each call to
> >>   * drm_client_buffer_vmap() should be followed by a call to
> >>   * drm_client_buffer_vunmap(); or the client buffer should be mapped
> >>   * throughout its lifetime.
> >>   *
> >> + * The returned address is a copy of the internal value. In contrast to
> >> + * other vmap interfaces, you don't need it for the client's vunmap
> >> + * function. So you can modify it at will during blit and draw operations.
> >> + *
> >>   * Returns:
> >> - *  The mapped memory's address
> >> + *  0 on success, or a negative errno code otherwise.
> >>   */
> >> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> >> +int
> >> +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
> >>  {
> >> -    struct dma_buf_map map;
> >> +    struct dma_buf_map *map = &buffer->map;
> >>      int ret;
> >>
> >> -    if (buffer->vaddr)
> >> -            return buffer->vaddr;
> >> +    if (dma_buf_map_is_set(map))
> >> +            goto out;
> >>
> >>      /*
> >>       * FIXME: The dependency on GEM here isn't required, we could
> >> @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
> >>       * fd_install step out of the driver backend hooks, to make that
> >>       * final step optional for internal users.
> >>       */
> >> -    ret = drm_gem_vmap(buffer->gem, &map);
> >> +    ret = drm_gem_vmap(buffer->gem, map);
> >>      if (ret)
> >> -            return ERR_PTR(ret);
> >> +            return ret;
> >>
> >> -    buffer->vaddr = map.vaddr;
> >> +out:
> >> +    *map_copy = *map;
> >>
> >> -    return map.vaddr;
> >> +    return 0;
> >>  }
> >>  EXPORT_SYMBOL(drm_client_buffer_vmap);
> >>
> >> @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
> >>   */
> >>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
> >>  {
> >> -    struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
> >> +    struct dma_buf_map *map = &buffer->map;
> >>
> >> -    drm_gem_vunmap(buffer->gem, &map);
> >> -    buffer->vaddr =3D NULL;
> >> +    drm_gem_vunmap(buffer->gem, map);
> >>  }
> >>  EXPORT_SYMBOL(drm_client_buffer_vunmap);
> >>
> >> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> >> index c2f72bb6afb1..6212cd7cde1d 100644
> >> --- a/drivers/gpu/drm/drm_fb_helper.c
> >> +++ b/drivers/gpu/drm/drm_fb_helper.c
> >> @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> >>      unsigned int cpp = fb->format->cpp[0];
> >>      size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
> >>      void *src = fb_helper->fbdev->screen_buffer + offset;
> >> -    void *dst = fb_helper->buffer->vaddr + offset;
> >> +    void *dst = fb_helper->buffer->map.vaddr + offset;
> >>      size_t len = (clip->x2 - clip->x1) * cpp;
> >>      unsigned int y;
> >>
> >> @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >>      struct drm_clip_rect *clip = &helper->dirty_clip;
> >>      struct drm_clip_rect clip_copy;
> >>      unsigned long flags;
> >> -    void *vaddr;
> >> +    struct dma_buf_map map;
> >> +    int ret;
> >>
> >>      spin_lock_irqsave(&helper->dirty_lock, flags);
> >>      clip_copy = *clip;
> >> @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
> >>
> >>              /* Generic fbdev uses a shadow buffer */
> >>              if (helper->buffer) {
> >> -                    vaddr = drm_client_buffer_vmap(helper->buffer);
> >> -                    if (IS_ERR(vaddr))
> >> +                    ret = drm_client_buffer_vmap(helper->buffer, &map);
> >> +                    if (ret)
> >>                              return;
> >>                      drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> >>              }
> >> @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
> >>      struct drm_framebuffer *fb;
> >>      struct fb_info *fbi;
> >>      u32 format;
> >> -    void *vaddr;
> >> +    struct dma_buf_map map;
> >> +    int ret;
> >>
> >>      drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
> >>                  sizes->surface_width, sizes->surface_height,
> >> @@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
> >>              fb_deferred_io_init(fbi);
> >>      } else {
> >>              /* buffer is mapped for HW framebuffer */
> >> -            vaddr = drm_client_buffer_vmap(fb_helper->buffer);
> >> -            if (IS_ERR(vaddr))
> >> -                    return PTR_ERR(vaddr);
> >> +            ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
> >> +            if (ret)
> >> +                    return ret;
> >> +            if (map.is_iomem)
> >> +                    fbi->screen_base = map.vaddr_iomem;
> >> +            else
> >> +                    fbi->screen_buffer = map.vaddr;
> >>
> >> -            fbi->screen_buffer = vaddr;
> >>              /* Shamelessly leak the physical address to user-space */
> >>  #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
> >>              if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
> >
> > Just noticed a tiny thing here: I think this needs to be patched to only
> > set smem_start when the map is _not_ iomem. Since virt_to_page isn't
> > defined on iomem at all.
> >
> > I guess it'd be neat if we can set this for iomem too, but I have no idea
> > how to convert an iomem pointer back to a bus_addr_t ...
>
> Not that I disagree, but that should be reviewed by the right people.
> The commit at 4be9bd10e22d ("drm/fb_helper: Allow leaking fbdev
> smem_start") appears to work around specific userspace drivers.

It's for SoC drivers, which all use either the shmem or CMA helpers, so
it's all system memory. That means your patch here doesn't break anything.
But we need to make sure that if someone enables this, it doesn't blow
up when used on a device where we map iomem.
-Daniel

> Best regards
> Thomas
>
> >
> > Cheers, Daniel
> >
> >> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
> >> index 7aaea665bfc2..f07f2fb02e75 100644
> >> --- a/include/drm/drm_client.h
> >> +++ b/include/drm/drm_client.h
> >> @@ -3,6 +3,7 @@
> >>  #ifndef _DRM_CLIENT_H_
> >>  #define _DRM_CLIENT_H_
> >>
> >> +#include <linux/dma-buf-map.h>
> >>  #include <linux/lockdep.h>
> >>  #include <linux/mutex.h>
> >>  #include <linux/types.h>
> >> @@ -141,9 +142,9 @@ struct drm_client_buffer {
> >>      struct drm_gem_object *gem;
> >>
> >>      /**
> >> -     * @vaddr: Virtual address for the buffer
> >> +     * @map: Virtual address for the buffer
> >>       */
> >> -    void *vaddr;
> >> +    struct dma_buf_map map;
> >>
> >>      /**
> >>       * @fb: DRM framebuffer
> >> @@ -155,7 +156,7 @@ struct drm_client_buffer *
> >>  drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
> >>  void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
> >>  int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
> >> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
> >> +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
> >>  void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
> >>
> >>  int drm_client_modeset_create(struct drm_client_dev *client);
> >> --
> >> 2.28.0
> >>
> >
>
> --
> Thomas Zimmermann
> Graphics Driver Developer
> SUSE Software Solutions Germany GmbH
> Maxfeldstr. 5, 90409 Nürnberg, Germany
> (HRB 36809, AG Nürnberg)
> Geschäftsführer: Felix Imendörffer



-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
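
For readers skimming the thread, the calling convention the patch introduces can be modeled with a small stand-alone sketch. The types and bodies below are simplified stand-ins (no GEM, no real mapping); only the API shape matches the patch:

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel types in the patch above. */
struct dma_buf_map { void *vaddr; bool is_iomem; };
struct drm_client_buffer {
    struct dma_buf_map map;  /* internal mapping, owned by the buffer */
    char storage[64];        /* pretend backing memory */
};

static bool dma_buf_map_is_set(const struct dma_buf_map *m)
{
    return m->vaddr != NULL;
}

/* New-style vmap: returns 0 on success and fills *map_copy with a copy
 * of the internal mapping, which the caller may then modify freely. */
static int drm_client_buffer_vmap(struct drm_client_buffer *b,
                                  struct dma_buf_map *map_copy)
{
    if (!dma_buf_map_is_set(&b->map))
        b->map.vaddr = b->storage;  /* "map" the buffer on first use */
    *map_copy = b->map;
    return 0;
}

/* vunmap needs only the buffer; the caller's copy is irrelevant. */
static void drm_client_buffer_vunmap(struct drm_client_buffer *b)
{
    b->map.vaddr = NULL;
}
```

A caller can advance its copy's `vaddr` during a blit loop without touching the buffer's internal state, which is the property the new kernel-doc comment advertises.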


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 10:29:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 10:29:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10351.27542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXq6-000187-Ax; Thu, 22 Oct 2020 10:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10351.27542; Thu, 22 Oct 2020 10:28:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXq6-000180-7p; Thu, 22 Oct 2020 10:28:58 +0000
Received: by outflank-mailman (input) for mailman id 10351;
 Thu, 22 Oct 2020 10:28:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mO8V=D5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVXq4-00017v-Ri
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 10:28:57 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa5974d7-4c6f-4323-9f56-0e707f91ed8b;
 Thu, 22 Oct 2020 10:28:54 +0000 (UTC)
X-Inumbo-ID: aa5974d7-4c6f-4323-9f56-0e707f91ed8b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603362535;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=OS9n6tbKBjphGjZrT8RrDGb83is9TdWWEG1i638Wfvs=;
  b=cGXvTgB+HLUe2M+EuMaePZY4vZSh/0D/TccvCijUDAJytA9cGpLx4bpy
   oCbIDIGFTnQe0nkYdaETzLyc2cSeqSnRk/FHiSPTfeDkphdd9QC6eX9yl
   t3L8GGecTBS4EmZ2rY2k/5OuS50wAPlFE4mjwSQRRs7jQ8u2wtCvHWOc7
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 55AJ6+kyIihsC+u4pev7e4wPeP4uv9dDBkfwOfvA1Yi1n9JNQNm1Nwp8DUCsimgcLLJ92hF1bl
 vkulAOM+rweziuTb6vTPalq9EuMUtFAjsOiFkfEsiw6ah3z3ICmbGsJ+ZRCUcWDIgcLeNygsIT
 pum0q8X/ENPk6aBXrR0PmOoRgB6IKl7TT6K4/OFxgZnYgGsNDiq10i25kL9YgVFLhoXpDc/qCa
 Ez0BcslaJnHF4957ZAtqN6Lxxn/DvjFqwQEyWeqgdRoAGRWTBhDMsFcRBkWWW6+2R6XtNT6iD4
 M+g=
X-SBRS: None
X-MesageID: 29559715
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29559715"
Date: Thu, 22 Oct 2020 12:28:44 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 3/8] evtchn: rename and adjust guest_enabled_event()
Message-ID: <20201022102844.4ijjwihdwzhuzqjt@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <119ad32e-91f0-5c1d-c400-de78ab816839@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <119ad32e-91f0-5c1d-c400-de78ab816839@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Tue, Oct 20, 2020 at 04:09:16PM +0200, Jan Beulich wrote:
> The function isn't about an "event" in general, but about a vIRQ. The
> function also failed to honor global vIRQ-s, instead assuming the caller
> would pass vCPU 0 in such a case.
> 
> While at it also adjust the
> - types the function uses,
> - single user to make use of domain_vcpu().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> v2: New.
> 
> --- a/xen/arch/x86/cpu/mcheck/vmce.h
> +++ b/xen/arch/x86/cpu/mcheck/vmce.h
> @@ -5,9 +5,9 @@
>  
>  int vmce_init(struct cpuinfo_x86 *c);
>  
> -#define dom0_vmce_enabled() (hardware_domain && hardware_domain->max_vcpus \
> -        && hardware_domain->vcpu[0] \
> -        && guest_enabled_event(hardware_domain->vcpu[0], VIRQ_MCA))
> +#define dom0_vmce_enabled() \
> +    (hardware_domain && \
> +     evtchn_virq_enabled(domain_vcpu(hardware_domain, 0), VIRQ_MCA))
>  
>  int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn);
>  
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -778,9 +778,15 @@ out:
>      return ret;
>  }
>  
> -int guest_enabled_event(struct vcpu *v, uint32_t virq)
> +bool evtchn_virq_enabled(const struct vcpu *v, unsigned int virq)
>  {
> -    return ((v != NULL) && (v->virq_to_evtchn[virq] != 0));
> +    if ( !v )

Not sure it's worth adding a check that virq < NR_VIRQS here just to
be on the safe side...

> +        return false;
> +
> +    if ( virq_is_global(virq) && v->vcpu_id )

...as virq_is_global has an ASSERT to that extent (but that would be a
no-op on release builds).

Thanks, Roger.
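
The defensive check Roger suggests can be illustrated with a toy model. NR_VIRQS, virq_is_global() and the vcpu layout below are hypothetical stand-ins, not Xen's real definitions, and the global-vIRQ branch is deliberately simplified:

```c
#include <stdbool.h>
#include <stddef.h>

#define NR_VIRQS 24  /* stand-in value, not Xen's actual constant */

struct vcpu {
    unsigned int vcpu_id;
    unsigned int virq_to_evtchn[NR_VIRQS];  /* 0 = not bound */
};

/* Pretend one vIRQ is global, for the sake of the example. */
static bool virq_is_global(unsigned int virq)
{
    return virq == 2;
}

static bool evtchn_virq_enabled(const struct vcpu *v, unsigned int virq)
{
    if (!v)
        return false;
    if (virq >= NR_VIRQS)  /* the extra bounds check being discussed */
        return false;
    /* Simplified: treat global vIRQs as bound on vCPU 0 only. */
    if (virq_is_global(virq) && v->vcpu_id)
        return false;
    return v->virq_to_evtchn[virq] != 0;
}
```

With the explicit range check, an out-of-range virq fails cleanly instead of relying on virq_is_global()'s ASSERT, which compiles away in release builds.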


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 10:35:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 10:35:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10354.27553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXwk-00026T-3t; Thu, 22 Oct 2020 10:35:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10354.27553; Thu, 22 Oct 2020 10:35:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVXwk-00026M-0o; Thu, 22 Oct 2020 10:35:50 +0000
Received: by outflank-mailman (input) for mailman id 10354;
 Thu, 22 Oct 2020 10:35:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVXwj-00026H-0f
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 10:35:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea524d12-ac89-4094-a9af-81b397cbd866;
 Thu, 22 Oct 2020 10:35:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6BDE9ACF6;
 Thu, 22 Oct 2020 10:35:47 +0000 (UTC)
X-Inumbo-ID: ea524d12-ac89-4094-a9af-81b397cbd866
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603362947;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7bj/gkiOtWxRriaas+Eg9fuSwfNM6Cn6VTUSXRtCPkU=;
	b=dZ2Bu4biyrSPNNa7TM1ZJ0TcOD4E9BZzt8z0SvvC2XdLwKtL1Q9Nj/ysADaaUau6dX8zlB
	zfXGYESxigbwEczzRNm+fnJVYsNBXnQzuaGwUT9jDDTW4dxdaLbY7bJhtczMRUpPV+GZqU
	r/mqRlp38QBpwr/sgWd4PIggZxgleb4=
Subject: Re: [PATCH v2 3/5] xen/events: only register debug interrupt for
 2-level events
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>
References: <20201022094907.28560-1-jgross@suse.com>
 <20201022094907.28560-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1de24e42-6cb7-4ecb-0eb2-c4a15dc8afc9@suse.com>
Date: Thu, 22 Oct 2020 12:35:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022094907.28560-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 11:49, Juergen Gross wrote:
> @@ -2080,10 +2080,12 @@ void __init xen_init_IRQ(void)
>  	int ret = -EINVAL;
>  	evtchn_port_t evtchn;
>  
> -	if (fifo_events)
> +	if (xen_fifo_events)
>  		ret = xen_evtchn_fifo_init();
> -	if (ret < 0)
> +	if (ret < 0) {
>  		xen_evtchn_2l_init();
> +		xen_fifo_events = false;
> +	}

Another note: While it may not matter right here, maybe better to
first set the variable and then call the function?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 10:45:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 10:45:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10358.27565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVY6J-00038W-2W; Thu, 22 Oct 2020 10:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10358.27565; Thu, 22 Oct 2020 10:45:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVY6I-00038P-Vw; Thu, 22 Oct 2020 10:45:42 +0000
Received: by outflank-mailman (input) for mailman id 10358;
 Thu, 22 Oct 2020 10:45:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/7zn=D5=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1kVY6G-00038K-Al
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 10:45:41 +0000
Received: from merlin.infradead.org (unknown [2001:8b0:10b:1231::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e22eaed2-bbc0-4d5d-bd07-cc1a537af39c;
 Thu, 22 Oct 2020 10:45:36 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by merlin.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kVY66-0008Se-5C; Thu, 22 Oct 2020 10:45:30 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id CEEA830377D;
 Thu, 22 Oct 2020 12:45:27 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id ADE07203D0836; Thu, 22 Oct 2020 12:45:27 +0200 (CEST)
X-Inumbo-ID: e22eaed2-bbc0-4d5d-bd07-cc1a537af39c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=merlin.20170209; h=In-Reply-To:Content-Transfer-Encoding:
	Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=Du1SSCXnSrdocEU/quDNtLt04UawTC5Df3w+0WlN3W8=; b=0HqndUpYEunyxC1QFg6Eps1FML
	a0rjNvZkuoHGWJU7KdGIfeL1nFDRxhAnSOvIgQb8JmNERKI64sI6gEC1tHZUSQhgCpTg4/EOExVov
	NHDvjgkMs7vZxCs9ZNsnj6dkbsIKozDSt4mQYsbp2iba8FtADuiHlt+2bJEvHO2+TvJnwMyLg6n8/
	g6X3im+tqS75DIxJ5beJXIGTEuwWa6xY6mC4BLYc3ZfuCmuZfX4K/vFpgN26JbVEXzu/cg5W1IUuR
	8+sWJhHh8TBCCLx6BWJmoy4fhffwNmNCTYbzo5wYOkww6DgR2GMRentfnaqfQT1NVGejpgDlxcc54
	dAw51jxg==;
Date: Thu, 22 Oct 2020 12:45:27 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
	linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>, Andy Lutomirski <luto@kernel.org>
Subject: Re: [PATCH] x86/alternative: don't call text_poke() in lazy TLB mode
Message-ID: <20201022104527.GI2594@hirez.programming.kicks-ass.net>
References: <20201009144225.12019-1-jgross@suse.com>
 <28ccccfe-b95b-5c4d-af27-5004e9f02c40@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <28ccccfe-b95b-5c4d-af27-5004e9f02c40@suse.com>

On Thu, Oct 22, 2020 at 11:24:39AM +0200, Jürgen Groß wrote:
> On 09.10.20 16:42, Juergen Gross wrote:
> > When running in lazy TLB mode the currently active page tables might
> > be the ones of a previous process, e.g. when running a kernel thread.
> > 
> > This can be problematic in case kernel code is being modified via
> > text_poke() in a kernel thread, and on another processor exit_mmap()
> > is active for the process which was running on the first cpu before
> > the kernel thread.
> > 
> > As text_poke() is using a temporary address space and the former
> > address space (obtained via cpu_tlbstate.loaded_mm) is restored
> > afterwards, there is a race possible in case the cpu on which
> > exit_mmap() is running wants to make sure there are no stale
> > references to that address space on any cpu active (this e.g. is
> > required when running as a Xen PV guest, where this problem has been
> > observed and analyzed).
> > 
> > In order to avoid that, drop off TLB lazy mode before switching to the
> > temporary address space.
> > 
> > Fixes: cefa929c034eb5d ("x86/mm: Introduce temporary mm structs")
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Can anyone look at this, please? It is fixing a real problem which has
> been seen several times.

As it happens I picked it up yesterday, just pushed it out for you.

Thanks!


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 10:48:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 10:48:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10361.27578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVY9D-0003Iv-IT; Thu, 22 Oct 2020 10:48:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10361.27578; Thu, 22 Oct 2020 10:48:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVY9D-0003Io-ER; Thu, 22 Oct 2020 10:48:43 +0000
Received: by outflank-mailman (input) for mailman id 10361;
 Thu, 22 Oct 2020 10:48:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVY9C-0003Ij-Eu
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 10:48:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 235be151-fe57-4da3-a2cc-737d36ade19f;
 Thu, 22 Oct 2020 10:48:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D57D8ACA3;
 Thu, 22 Oct 2020 10:48:40 +0000 (UTC)
X-Inumbo-ID: 235be151-fe57-4da3-a2cc-737d36ade19f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603363721;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0b+aO4geVivMxoOImVN5TC2oxRBqcdDWLWc9IG6Dja4=;
	b=eNZuxA34i5wiLPoll7WkPXL2PX8QR7fJehNfp3WFLosE56YkjFThkSySoANZJSUtVORyjn
	vTuaFa64zaqVevu+h3k5ObPjj9MuRVNO4EeA082UC/Yf2AJPz+zCqLSmoAHttaXSq3E31/
	HZtKCZJjbqa6RE6eAcjPoSNlWdpRUa8=
Subject: Re: [PATCH] x86/alternative: don't call text_poke() in lazy TLB mode
To: Peter Zijlstra <peterz@infradead.org>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "H. Peter Anvin" <hpa@zytor.com>, Andy Lutomirski <luto@kernel.org>
References: <20201009144225.12019-1-jgross@suse.com>
 <28ccccfe-b95b-5c4d-af27-5004e9f02c40@suse.com>
 <20201022104527.GI2594@hirez.programming.kicks-ass.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <61d30267-733f-49b5-8ca1-3246485e8151@suse.com>
Date: Thu, 22 Oct 2020 12:48:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201022104527.GI2594@hirez.programming.kicks-ass.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.10.20 12:45, Peter Zijlstra wrote:
> On Thu, Oct 22, 2020 at 11:24:39AM +0200, Jürgen Groß wrote:
>> On 09.10.20 16:42, Juergen Gross wrote:
>>> When running in lazy TLB mode the currently active page tables might
>>> be the ones of a previous process, e.g. when running a kernel thread.
>>>
>>> This can be problematic in case kernel code is being modified via
>>> text_poke() in a kernel thread, and on another processor exit_mmap()
>>> is active for the process which was running on the first cpu before
>>> the kernel thread.
>>>
>>> As text_poke() is using a temporary address space and the former
>>> address space (obtained via cpu_tlbstate.loaded_mm) is restored
>>> afterwards, there is a race possible in case the cpu on which
>>> exit_mmap() is running wants to make sure there are no stale
>>> references to that address space on any cpu active (this e.g. is
>>> required when running as a Xen PV guest, where this problem has been
>>> observed and analyzed).
>>>
>>> In order to avoid that, drop off TLB lazy mode before switching to the
>>> temporary address space.
>>>
>>> Fixes: cefa929c034eb5d ("x86/mm: Introduce temporary mm structs")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
>> Can anyone look at this, please? It is fixing a real problem which has
>> been seen several times.
> 
> As it happens I picked it up yesterday, just pushed it out for you.

Thank you very much!


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 11:17:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 11:17:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10373.27602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVYay-0006GY-0D; Thu, 22 Oct 2020 11:17:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10373.27602; Thu, 22 Oct 2020 11:17:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVYax-0006GR-Tb; Thu, 22 Oct 2020 11:17:23 +0000
Received: by outflank-mailman (input) for mailman id 10373;
 Thu, 22 Oct 2020 11:17:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mO8V=D5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVYaw-0006Fi-IZ
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 11:17:22 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da8b5dac-2abb-45e1-809a-d7e83a5cc64e;
 Thu, 22 Oct 2020 11:17:21 +0000 (UTC)
X-Inumbo-ID: da8b5dac-2abb-45e1-809a-d7e83a5cc64e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603365442;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=skjN9q8/nTEh5lu/TM7UYNGn5QCEmu0rcjUMMXxXrQg=;
  b=PdDpV182p/XGoQe7D1tB7LdH6c3ouo1YHHFf0kzRZ0B+cm0Rd32DOJCm
   vmAzjAMHM6UDrmgNN8n06HXXmFyUV9Rgan/zsjTbUoJbtYZ3C8UXlbdM0
   M+lLWcibHgzU3ipq3X0hA6JzNnyfpNfYprf1vcah23unWvz0UQX79zhoo
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: nztF+dxPBXyakSpRbZ4sVTF0NL6HTI1gmkAjNGqe8as/rZW59KAb9IUiA0EB7HuLxSCXBweFOE
 gm4aMuYyDOB1kXlWxrVcoL5gw7Z21NhGtVIqGQ1wjE2xHc8DUzE66OSGcrqVRHJ0xEZ7vZLoXk
 ZtUHOq1+UmnvB9OhpJxrba0FZeXrHRHTYfBmzLq1AWtVT0dkaSO/wed5ss/Jug4zrKGhzakg/U
 vsP4HuZJAVhB3FDJmIxVxOmvRIz5oXAYp1IbOkPLLzU0fpiccDFwH+TWjQJhCJcveTeNAhDy6Z
 Zao=
X-SBRS: None
X-MesageID: 29889853
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29889853"
Date: Thu, 22 Oct 2020 13:17:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 4/8] evtchn: let evtchn_set_priority() acquire the
 per-channel lock
Message-ID: <20201022111712.g7kvaducfgwa6whn@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <266c9178-700b-5663-4b5f-69f160a165ab@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <266c9178-700b-5663-4b5f-69f160a165ab@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Tue, Oct 20, 2020 at 04:09:41PM +0200, Jan Beulich wrote:
> Some lock wants to be held to make sure the port doesn't change state,
> but there's no point holding the per-domain event lock here. Switch to
> using the finer grained per-channel lock instead.

While it's true that's a finer-grained lock, it also disables
interrupts, which the global event_lock didn't.

> FAOD this doesn't guarantee anything towards in particular
> evtchn_fifo_set_pending(), as for interdomain channels that function
> would be called with the remote side's per-channel lock held.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 11:29:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 11:29:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10376.27613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVYmj-0007Lm-4a; Thu, 22 Oct 2020 11:29:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10376.27613; Thu, 22 Oct 2020 11:29:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVYmj-0007Lf-1i; Thu, 22 Oct 2020 11:29:33 +0000
Received: by outflank-mailman (input) for mailman id 10376;
 Thu, 22 Oct 2020 11:29:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oPFL=D5=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1kVYmh-0007LZ-5i
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 11:29:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f59e6666-c92f-416a-8430-348aeda8a86b;
 Thu, 22 Oct 2020 11:29:30 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kVYmc-0006e8-NA; Thu, 22 Oct 2020 11:29:26 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kVYmc-0003GN-C4; Thu, 22 Oct 2020 11:29:26 +0000
X-Inumbo-ID: f59e6666-c92f-416a-8430-348aeda8a86b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=+OgsHnWq3gEzM2l/AVtQUMDkm3YQtvZbw3/J0aQlw64=; b=xiIiqkJhxJyenua1JSB9pnVilc
	FIOymOndecfC6YWuaKITxklNslyPH3h7kWCSbWGbanhK/I/ppCz32vFzSA0yJfqE4NaKv8ilo92fH
	m+lafh+nqGntZsuVDvCxsbwJP2V1+IMUuEVsGHK2MOlLDZdAyYyrf516k3BqQlHDsXQ8=;
Message-ID: <9b6bc64b9f9fb2a7adad03c7da999a1babe243b9.camel@xen.org>
Subject: Re: XSM and the idle domain
From: Hongyan Xia <hx242@xen.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Date: Thu, 22 Oct 2020 12:29:22 +0100
In-Reply-To: <CAKf6xpt7zgM3HghQru28kovd0m7z84bAR8Uqt6KKxbSrvQv8ZA@mail.gmail.com>
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
	 <CAKf6xpt7zgM3HghQru28kovd0m7z84bAR8Uqt6KKxbSrvQv8ZA@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

(also replying to others in this thread.)

On Wed, 2020-10-21 at 12:21 -0400, Jason Andryuk wrote:
> On Wed, Oct 21, 2020 at 10:35 AM Hongyan Xia <hx242@xen.org> wrote:
> > 
> > Hi,
> 
> ...
> > 
> > The first question came up during ongoing work in LiveUpdate. After
> > an
> > LU, the next Xen needs to restore all domains. To do that, some
> > hypercalls need to be issued from the idle domain context and
> > apparently XSM does not like it. We need to introduce hacks in the
> > dummy module to leave the idle domain alone.
> 
> Is this modifying xsm_default_action() to add an is_idle_domain()
> check which always succeeds?

Yes. We had to do exactly that to avoid LU actions being denied by XSM.

> > Our work is not compiled
> > with CONFIG_XSM at all, but with CONFIG_XSM, are we able to enforce
> > security policies against the idle domain?
> 
> It's not clear to me if you want to use CONFIG_XSM, or just don't
> want
> to break it.

We don't (and won't) enable XSM in our build, but we still need a hack
to work around it, so I am just curious about what happens when people
use both LU and XSM at the same time.

> > Of course, without any LU
> > work this does not make any difference because the idle domain does
> > not
> > do any useful work to be restricted anyway.
> 
> I think this last sentence is the main point.  It's always been
> labeled xen_t, but since it doesn't go through any of the hook
> points,
> it hasn't needed any restrictions.  Actually, reviewing the Flask
> policy there is:
> # Domain destruction can result in some access checks for actions
> performed by
> # the hypervisor.  These should always be allowed.
> allow xen_t resource_type : resource { remove_irq remove_ioport
> remove_iomem };
> 
> > Also, should idle domain be restricted? IMO the idle domain is Xen
> > itself which mostly bootstraps the system and performs limited work
> > when switched to, and is not something a user (either dom0 or domU)
> > directly interacts with. I doubt XSM was designed to include the
> > idle
> > domain (although there is an ID allocated for it in the code), so I
> > would say just exclude idle in all security policy checks.
> 
> I think it makes sense to label xen_t, even if it doesn't do
> anything.
> As you say, it is a distinct entity from dom0 and domU.  Yes, it can
> circumvent the policy, but it's not actively hurting anything.  And
> it
> can be good to catch when it does start doing something, as you
> found.
> 
> Might it make sense to create a LU domain instead of using the idle
> domain for Live Update?  Another approach could be to run the
> idle_domain as "dom0" during Live Update, and then transition to the
> regular idle_domain when it completes?  You are re-creating dom0, but
> you could flip is_privileged on during live update and then remove it
> once complete.

Actually I think your suggestion and what Daniel suggested make sense.
We could just have a domLU that does all the restore work which has its
own security policies. That sounds like a clean solution to me.
However, one top priority of LU is to minimise the downtime so that
domains won't feel a thing, and every millisecond counts. I don't know
how much overhead this adds (maybe negligible if we just let domLU sit
in the idle domain's page tables, so that switching and passing the LU
save stream to it is painless), but it is something we need to keep in
mind.

But this still sidesteps the question of whether the idle domain should
be subject to security policies. From another reply it sounds like the
idle domain should not be exempt from XSM. Although, to me restrictions
on the idle domain are more like a debugging feature than a security
policy: they prevent, e.g., accidentally issuing hypercalls from it,
but if the idle domain really wants to do something then there is
nothing to stop it. This is different from enforcing policies on a real
domain, which guarantees things won't happen because the domain simply
has no mechanism to circumvent them (hopefully).

My experience with XSM is only the idle domain hack for LU so what I
said about it here may not make sense.

Hongyan



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 12:52:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 12:52:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10418.27671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVa4N-0007U1-TW; Thu, 22 Oct 2020 12:51:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10418.27671; Thu, 22 Oct 2020 12:51:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVa4N-0007Tu-Qe; Thu, 22 Oct 2020 12:51:51 +0000
Received: by outflank-mailman (input) for mailman id 10418;
 Thu, 22 Oct 2020 12:51:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L1Xo=D5=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVa4L-0007To-UV
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 12:51:50 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a25d93d5-bcef-4030-8aab-354a820dc891;
 Thu, 22 Oct 2020 12:51:48 +0000 (UTC)
X-Inumbo-ID: a25d93d5-bcef-4030-8aab-354a820dc891
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603371108;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to:content-transfer-encoding;
  bh=0kl3Fme65YD3FrzucuH0C9HkwK3F3k97YMMsqg/NDA8=;
  b=DNGVj/7LYOaEbSKhZ81pMjARXXJ7K3g2GxY0/pBZEqReFBVxJ3miq9UZ
   eLKdypDWuNY/e/mKexHw/dBo5G8RytbzzwSHgSZUQUWF8woSvfsE7R6o/
   1m17JRku1iRSb6pHCLgK+PsQ78s+oKROz9AZacQYPVzgXILAJ2Ic4+ANG
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ayDclvKFRdNuUlZcvTc8wwZoFBH417P0zV7UvJO/5QTHiPLiq5XcF8jM+6u7hKMXDBAqxG9POO
 clqMeZnz9K3SevTcxf9yqypAvC48xd/+JqrnKPBWZW4cVjlbvfPCyW3nINQCT5SSBXYQRbPa7x
 pPuBNqPM6eRX6rY2mnMC+rUrhvzTpcwsCDMlKqGT33c1xo8Pad9Fs4bdooNBeCqjkOezXLBiCb
 EAik/6Yu/vPzmsnenLbBlhzPjQ/b/JcwwT3hAfBl1A/6XOORWwkcYc3uaJyPKKmzUo7FNvEtIw
 tWg=
X-SBRS: None
X-MesageID: 29618024
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29618024"
Subject: Re: XSM and the idle domain
To: Hongyan Xia <hx242@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, <jbeulich@suse.com>, <jandryuk@gmail.com>,
	<dgdegra@tycho.nsa.gov>
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f8f5f354-aa8d-4bd0-9c0e-ef37702e80c5@citrix.com>
Date: Thu, 22 Oct 2020 13:51:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 21/10/2020 15:34, Hongyan Xia wrote:
> The first question came up during ongoing work in LiveUpdate. After an
> LU, the next Xen needs to restore all domains. To do that, some
> hypercalls need to be issued from the idle domain context and
> apparently XSM does not like it.

There is no such thing as issuing hypercalls from the idle domain
(context or otherwise), because the idle domain does not have enough
associated guest state for anything to make the requisite
SYSCALL/INT80/VMCALL/VMMCALL invocation.

I presume from this comment that what you mean is that you're calling
the plain hypercall functions, context checks and everything, from the
idle context?

If so, this is buggy for more reasons than just XSM objecting to its
calling context, and XSM is merely the first thing to explode.
Therefore, I don't think modifications to XSM are an appropriate way to
solve the problem.

(Of course, this is all speculation because there's no concrete
implementation to look at.)

~Andrew
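(A minimal illustration of the point above, as a standalone sketch; every name here is hypothetical, not Xen's actual code. Hypercall handlers, and the XSM hooks they invoke, reach guest state through `current`, which idle context cannot supply.)

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-ins for hypervisor structures. */
struct domain { unsigned int domain_id; bool is_idle; };
struct vcpu   { struct domain *domain; };

static struct domain dom0    = { .domain_id = 0, .is_idle = false };
static struct vcpu   dom0_v0 = { .domain = &dom0 };

static struct vcpu *current_vcpu;        /* stand-in for Xen's `current` */

static long do_example_hypercall(void)
{
    /* A handler entered via SYSCALL/INT80/VMCALL can assume a guest
     * vCPU context; calling the plain C function from idle context
     * breaks that assumption before XSM is even reached. */
    if (current_vcpu == NULL || current_vcpu->domain->is_idle)
        return -1;
    return 0;
}
```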


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 13:04:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 13:04:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10423.27684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVaGH-0000FT-1f; Thu, 22 Oct 2020 13:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10423.27684; Thu, 22 Oct 2020 13:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVaGG-0000FM-Up; Thu, 22 Oct 2020 13:04:08 +0000
Received: by outflank-mailman (input) for mailman id 10423;
 Thu, 22 Oct 2020 13:04:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DD7K=D5=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1kVaGE-0000FH-Vs
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 13:04:07 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0c1f949-0c04-4e47-a16d-39698c72a1ab;
 Thu, 22 Oct 2020 13:04:05 +0000 (UTC)
Received: from mail.zoho.com by mx.zohomail.com
 with SMTP id 1603371832291764.96684238394;
 Thu, 22 Oct 2020 06:03:52 -0700 (PDT)
X-Inumbo-ID: a0c1f949-0c04-4e47-a16d-39698c72a1ab
ARC-Seal: i=1; a=rsa-sha256; t=1603371842; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=DAShzTJ9WokckaDedtYPm7w1OE1Ds959YQbGOCWa4E5BZMDQXI8JT5kITGk/9fZigFkDhy++IhEhhoOQRtzzKL+Ll1Gu/CSSui7g2RZ5zmc3YdZRcUooX2vMi8lCnmwo8kEG8NdeYUSL3Z2NpZqBLM52wrUKnHdQf8wGRflFjKU=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603371842; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=pXjFeA1IYvBzIxRV5NYvGjEsogk6GRObXecbpb4CJXI=; 
	b=VGPCFY4AV0PlcCVLOVwtmWkjDl/UfHgGWwWnHallAojgzjTKEqH/n7+HtLqs1ipEuKVOwb9fX1rThV3c67DJrimnrS54+mRmtyeFY5bVv2Hp97lmwgWCuV3GD2SLO1YznNLwOayKqa4PjsuNDWb0pFOQZ03ieV/wztemmI+/B1A=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603371842;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Date:From:To:Cc:Message-ID:In-Reply-To:References:Subject:MIME-Version:Content-Type:Content-Transfer-Encoding;
	bh=pXjFeA1IYvBzIxRV5NYvGjEsogk6GRObXecbpb4CJXI=;
	b=u9aJ8Temn4eo7F6NRNrU6ZPdpIzPBeYSOzYR5CNQ1KY8u0p9WgtjJXv+NwSOj03O
	oPDIL9VFjDnSIQai25ZONg14+irm58xabNOV2DEgW6tibMI3Q1uRTKyysdDK7+EGPzN
	wWUWzOK5PsOBOf8yrKy8vTs/4MH+mn+RiNEhQrng=
Date: Thu, 22 Oct 2020 09:03:52 -0400
From: Daniel Smith <dpsmith@apertussolutions.com>
To: "Jan Beulich" <jbeulich@suse.com>
Cc: "Hongyan Xia" <hx242@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"andrew.cooper3" <andrew.cooper3@citrix.com>,
	"jandryuk" <jandryuk@gmail.com>, "dgdegra" <dgdegra@tycho.nsa.gov>
Message-ID: <175506893e2.fbe49bc728583.7319641449344681246@apertussolutions.com>
In-Reply-To: <09aad1f6-b9bd-1ba4-5e08-198ab2815a5b@suse.com>
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
 <2dbee673-036a-077e-6cb4-556aac46ac33@apertussolutions.com> <09aad1f6-b9bd-1ba4-5e08-198ab2815a5b@suse.com>
Subject: Re: XSM and the idle domain
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
Importance: Medium
User-Agent: Zoho Mail
X-Mailer: Zoho Mail

---- On Thu, 22 Oct 2020 04:13:53 -0400 Jan Beulich <jbeulich@suse.com> wrote ----

 > On 22.10.2020 03:23, Daniel P. Smith wrote:
 > > On 10/21/20 10:34 AM, Hongyan Xia wrote:
 > >> Also, should idle domain be restricted? IMO the idle domain is Xen
 > >> itself which mostly bootstraps the system and performs limited work
 > >> when switched to, and is not something a user (either dom0 or domU)
 > >> directly interacts with. I doubt XSM was designed to include the idle
 > >> domain (although there is an ID allocated for it in the code), so I
 > >> would say just exclude idle in all security policy checks.
 > >
 > > The idle domain is a limited, internal construct within the hypervisor
 > > and should be constrained as part of the hypervisor, which is why its
 > > domain id gets labeled with the same label as the hypervisor. For this
 > > reason I would wholeheartedly disagree with exempting the idle domain id
 > > from XSM hooks, as that would effectively be saying the core hypervisor
 > > should not be constrained. The purpose of the XSM hooks is to control
 > > the flow of information in the system in a non-bypassable way. Codifying
 > > bypasses completely subverts the security model behind XSM, upon which
 > > the flask security server depends.
 >
 > While what you say may in general make sense, I have two questions:

[Apologies for any poor formatting, responding from webmail interface ( ._. )]

Hey Jan, these are very legitimate questions.

 > 1) When the idle domain is purely an internal construct of Xen, why
 >  does it need limiting in any way? In fact, if restricting it in a
 >  bad way, aren't you risking to prevent the system from functioning
 >  correctly?

Think in terms of least privilege: do you want the idle domain, and by
extension the hypervisor, to have the additional privilege of imposing
state onto the system, as opposed to merely processing state changes? I
am not saying it is the wrong technical approach (though I do believe at
a minimum the implementation approach is flawed); I am just asking
whether it is wise from a privilege-delegation standpoint, and whether
it could be done differently technically. The underlying concern is that
once you grant the privilege, the hypervisor will forever have it, and
it can be used for good (LU) and for bad (corruption). Take for instance
what is being attempted with DomB: in that approach the privilege to
impose state (configure domains) is delegated to the Boot Domain, but
the Boot Domain is not delegated the privilege to create state (domain
creation). As I mentioned before, this is what Jason was suggesting:
have another domain type that is allowed to impose the state, and
transition from the idle domain to it to conduct the action.

Whether or not the idle domain is allowed to make hypercalls is not
necessarily a concern of the XSM hooks. If it is decided that this is
the desired path, then what is of concern is that the corrective action
does not weaken/break the hooks. If this ends up being the desired
approach, then IMHO the correct action is to update the dummy policy,
the flask policy, and SILO (if it applies) to allow the privilege/access
to occur, rather than putting bypasses into the security hooks.

 > 2) LU is merely restoring the prior state of the system. This prior
 >  state was reached with security auditing as per the system's
 >  policy at the time. Why should there be anything denied in the
 >  process of re-establishing this same state? IOW can't XSM checking
 >  be globally disabled until the system is ready to run normally
 >  again?

There is an assumption there that is being overlooked: you are assuming
it is the same state. It is important to understand what assumptions are
being made and, where possible, to impose those assumptions through
policy rather than code. Not everyone will want to make the same
assumptions, and some may want a better-controlled path for that state
to flow through.

No, you don't want to globally disable XSM checking, as that means you
have lost all control over the system: any and all policy violations
could occur without any auditing. This would open a huge hole for a
malicious actor to exploit in an attack against the system.

To reiterate: if this is decided to be the desired approach, then IMHO
the correct implementation is to encode the access in policy, not in
bypasses to the XSM hooks.

 > Please forgive if this sounds like rubbish to you - I may not have a
 > good enough understanding of the abstract constraints involved here.

No worries, it is always better to question when in doubt than to make
an assumption. Hopefully I helped in providing a better explanation.

 > Jan
 >
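(The policy-versus-bypass distinction above can be sketched as follows; this is a hypothetical illustration, not Xen's actual XSM code, and `policy_check` stands in for whatever the dummy/flask/SILO policy would decide.)

```c
#include <assert.h>
#include <stdbool.h>

struct domain { bool is_idle; };

/* Stand-in for a policy lookup.  Assumption for this sketch: the
 * policy has been updated to grant the idle/hypervisor label the
 * specific access it needs. */
static int policy_check(const struct domain *d)
{
    return d->is_idle ? 0 : -13;        /* -EACCES for everyone else */
}

/* Anti-pattern: the bypass lives in the hook itself, so the grant is
 * permanent, invisible to the policy, and never audited. */
static int hook_with_bypass(const struct domain *d)
{
    if (d->is_idle)
        return 0;
    return policy_check(d);
}

/* Preferred: the hook always consults policy, keeping the grant
 * visible, auditable, and revocable by a policy edit alone. */
static int hook_via_policy(const struct domain *d)
{
    return policy_check(d);
}
```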


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 13:09:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 13:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10426.27695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVaL6-0000S0-Lj; Thu, 22 Oct 2020 13:09:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10426.27695; Thu, 22 Oct 2020 13:09:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVaL6-0000Rt-IZ; Thu, 22 Oct 2020 13:09:08 +0000
Received: by outflank-mailman (input) for mailman id 10426;
 Thu, 22 Oct 2020 13:09:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVaL5-0000Ro-Lt
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 13:09:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8e187659-c318-4ec4-972f-b4132a3af574;
 Thu, 22 Oct 2020 13:09:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A540AAC3F;
 Thu, 22 Oct 2020 13:09:05 +0000 (UTC)
X-Inumbo-ID: 8e187659-c318-4ec4-972f-b4132a3af574
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603372145;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xnl30XZmbkdq+DrOW7dw1rQEfgCFIhyaC9tjnrTpvhk=;
	b=G3EvcoqIsqGs4aDO2mpBJ1JGmOUd/OPq7oygh7b6FF+0GTcrKqbng6hgIBfCH3Os3t/dIS
	uV/OIsKhlCM/Y5jWNUEQDJNFCgoh3+eFZBMSrsIC/BSdBaokhWtsWh9fyjw5n6pfIj+qVs
	MIV16z6zFR4NJs+Ydt0K7f5fFIaSdI4=
Subject: Re: [PATCH v2 3/5] xen/events: only register debug interrupt for
 2-level events
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>
References: <20201022094907.28560-1-jgross@suse.com>
 <20201022094907.28560-4-jgross@suse.com>
 <1de24e42-6cb7-4ecb-0eb2-c4a15dc8afc9@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <98f76c98-00d5-3238-a54f-cce52419160f@suse.com>
Date: Thu, 22 Oct 2020 15:09:04 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <1de24e42-6cb7-4ecb-0eb2-c4a15dc8afc9@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.20 12:35, Jan Beulich wrote:
> On 22.10.2020 11:49, Juergen Gross wrote:
>> @@ -2080,10 +2080,12 @@ void __init xen_init_IRQ(void)
>>   	int ret = -EINVAL;
>>   	evtchn_port_t evtchn;
>>   
>> -	if (fifo_events)
>> +	if (xen_fifo_events)
>>   		ret = xen_evtchn_fifo_init();
>> -	if (ret < 0)
>> +	if (ret < 0) {
>>   		xen_evtchn_2l_init();
>> +		xen_fifo_events = false;
>> +	}
> 
> Another note: While it may not matter right here, maybe better
> first set the variable and then call the function?

I don't think this is really important, TBH.

This code is executed before the other CPUs are up, and we'd have major
other problems if the sequence really mattered here.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 13:18:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 13:18:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10429.27708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVaTa-0001R1-IP; Thu, 22 Oct 2020 13:17:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10429.27708; Thu, 22 Oct 2020 13:17:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVaTa-0001Qu-FG; Thu, 22 Oct 2020 13:17:54 +0000
Received: by outflank-mailman (input) for mailman id 10429;
 Thu, 22 Oct 2020 13:17:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVaTZ-0001Qp-OT
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 13:17:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42b9b724-3785-4a91-a4ff-79ba6fac635a;
 Thu, 22 Oct 2020 13:17:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0FCABAE35;
 Thu, 22 Oct 2020 13:17:52 +0000 (UTC)
X-Inumbo-ID: 42b9b724-3785-4a91-a4ff-79ba6fac635a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603372672;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=q3XLuJAiAJI40OqF2MlzN6uIGy9R9OJtoodmMnAlqVA=;
	b=R/lMCrLLsJ9oxMG8VYEjoj5XbtqmyCxxnkVfZT529iQ9EyrPI8NQo9Ft0fvftXIwPN/QRw
	WuR6R+i/DjPdKXhmnkCtsIeWGL+zxgLuHolPEnxjO0iiZcsdVvaHoZZg8tE+obiBqgPbb8
	ioZMsgIylAxUhxAT7O3IyhVn5ER3N4w=
Subject: Re: [PATCH v2 3/5] xen/events: only register debug interrupt for
 2-level events
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>
References: <20201022094907.28560-1-jgross@suse.com>
 <20201022094907.28560-4-jgross@suse.com>
 <1de24e42-6cb7-4ecb-0eb2-c4a15dc8afc9@suse.com>
 <98f76c98-00d5-3238-a54f-cce52419160f@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b21d3544-2257-dee5-222d-f4dade94d167@suse.com>
Date: Thu, 22 Oct 2020 15:17:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <98f76c98-00d5-3238-a54f-cce52419160f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.10.2020 15:09, Jürgen Groß wrote:
> On 22.10.20 12:35, Jan Beulich wrote:
>> On 22.10.2020 11:49, Juergen Gross wrote:
>>> @@ -2080,10 +2080,12 @@ void __init xen_init_IRQ(void)
>>>   	int ret = -EINVAL;
>>>   	evtchn_port_t evtchn;
>>>   
>>> -	if (fifo_events)
>>> +	if (xen_fifo_events)
>>>   		ret = xen_evtchn_fifo_init();
>>> -	if (ret < 0)
>>> +	if (ret < 0) {
>>>   		xen_evtchn_2l_init();
>>> +		xen_fifo_events = false;
>>> +	}
>>
>> Another note: While it may not matter right here, maybe better
>> first set the variable and then call the function?
> 
> I don't think this is really important, TBH.
> 
> This code is executed before the other CPUs are up, and we'd have major
> other problems if the sequence really mattered here.

Fair enough; I was thinking in particular that it ought to be
legitimate for xen_evtchn_2l_init() to BUG_ON(xen_fifo_events).

Jan
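(The reordering Jan has in mind can be shown as a standalone sketch, with the Xen functions replaced by stubs; this is not the actual drivers/xen/events code.)

```c
#include <assert.h>
#include <stdbool.h>

static bool xen_fifo_events = true;

static int xen_evtchn_fifo_init(void)
{
    return -1;                  /* stub: pretend FIFO init failed */
}

static void xen_evtchn_2l_init(void)
{
    /* With the flag cleared first, this precondition can be asserted. */
    assert(!xen_fifo_events);
}

static void init_events(void)
{
    int ret = -22;              /* -EINVAL */

    if (xen_fifo_events)
        ret = xen_evtchn_fifo_init();
    if (ret < 0) {
        xen_fifo_events = false;        /* clear the flag first... */
        xen_evtchn_2l_init();           /* ...then fall back to 2-level */
    }
}
```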


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 13:34:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 13:34:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10435.27727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVajR-0003Kx-2i; Thu, 22 Oct 2020 13:34:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10435.27727; Thu, 22 Oct 2020 13:34:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVajQ-0003Kq-Va; Thu, 22 Oct 2020 13:34:16 +0000
Received: by outflank-mailman (input) for mailman id 10435;
 Thu, 22 Oct 2020 13:34:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVajP-0003Kl-Us
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 13:34:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c42782b7-6341-4e94-a04b-5afd271f396b;
 Thu, 22 Oct 2020 13:34:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4B4D7AC2F;
 Thu, 22 Oct 2020 13:34:14 +0000 (UTC)
X-Inumbo-ID: c42782b7-6341-4e94-a04b-5afd271f396b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603373654;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0QZIsBzvdYvQS6Vp6oOgnBeT+8YzewYeHdSYJwOlnQo=;
	b=PdrPb2nhcVeXpnsDu1mKva6XijwalG7ehExyqPcpgVQFVuAMrssdgM4P+h3BVCeXZBW0fD
	tPH/YP6k401iW2TVc9xZMOSqLM8iqxyIYZe22bW5tc9CUDK/EZguq0VDSwGPTCRtIWdSmf
	leUphOKxhy3rRCP58dT5d4u58i2sqRI=
Subject: Re: [PATCH v2 4/8] evtchn: let evtchn_set_priority() acquire the
 per-channel lock
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <266c9178-700b-5663-4b5f-69f160a165ab@suse.com>
 <20201022111712.g7kvaducfgwa6whn@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <36e0adac-21e7-235b-c569-54d9c97f3e77@suse.com>
Date: Thu, 22 Oct 2020 15:34:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022111712.g7kvaducfgwa6whn@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.10.2020 13:17, Roger Pau Monné wrote:
> On Tue, Oct 20, 2020 at 04:09:41PM +0200, Jan Beulich wrote:
>> Some lock wants to be held to make sure the port doesn't change state,
>> but there's no point holding the per-domain event lock here. Switch to
>> using the finer grained per-channel lock instead.
> 
> While true that's a fine grained lock, it also disables interrupts,
> which the global event_lock didn't.

True, yet we're aiming at dropping this aspect again. Hence I've
added "(albeit as a downside for the time being this requires
disabling interrupts for a short period of time)".

>> FAOD this doesn't guarantee anything towards in particular
>> evtchn_fifo_set_pending(), as for interdomain channels that function
>> would be called with the remote side's per-channel lock held.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

Jan
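(Roger's point is about the lock flavour rather than the granularity. A toy model of the difference, not Xen's real locking primitives: the per-domain event_lock is a plain spinlock, while the per-channel lock is taken with interrupts disabled for the critical section.)

```c
#include <assert.h>
#include <stdbool.h>

static bool irqs_on = true;
static bool domain_lock_held, channel_lock_held;

static void lock_domain(void)   { domain_lock_held = true; }  /* IRQs untouched */
static void unlock_domain(void) { domain_lock_held = false; }

static bool lock_channel(void)          /* returns the saved IRQ state */
{
    bool flags = irqs_on;
    irqs_on = false;                    /* finer-grained, but IRQs go off */
    channel_lock_held = true;
    return flags;
}

static void unlock_channel(bool flags)
{
    channel_lock_held = false;
    irqs_on = flags;                    /* restored on release */
}
```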


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 13:38:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 13:38:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10445.27747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVanL-0003Za-O8; Thu, 22 Oct 2020 13:38:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10445.27747; Thu, 22 Oct 2020 13:38:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVanL-0003ZT-KV; Thu, 22 Oct 2020 13:38:19 +0000
Received: by outflank-mailman (input) for mailman id 10445;
 Thu, 22 Oct 2020 13:38:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVanK-0003ZO-S3
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 13:38:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b98f4bb-79b1-450c-9483-18225160ea6d;
 Thu, 22 Oct 2020 13:38:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DE75CAC2F;
 Thu, 22 Oct 2020 13:38:13 +0000 (UTC)
X-Inumbo-ID: 7b98f4bb-79b1-450c-9483-18225160ea6d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603373894;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=cLEszLVcl3lbTdxN5S3K3DwgD7NiXsnnj7qZ4HCii80=;
	b=dScRkXefHj2hgt1Ub/m6+LkogeshpHquW8HPgMA8uHpJekRtX5DLMOby6k/KyAeQYiRq87
	1fpV35/HM2zD7n0lWt4aqBqTODb2dFezsN+7Ir7ZgXLminclE9rFBVUFS4ly3EwVID8q2n
	E0zDqFh7yk73UGsERtag7uyK7hS0MMo=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: fix PINSRW and adjust other {,V}PINSR*
Message-ID: <34eba71f-e92a-22e0-42ae-dd85e682a8ff@suse.com>
Date: Thu, 22 Oct 2020 15:38:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The use of simd_packed_int together with no further update to op_bytes
has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
aligned memory operand. Use simd_none instead and override it after
general decoding with simd_other, as is done for the B/D/Q siblings.

While benign, for consistency also use DstImplicit instead of DstReg
in x86_decode_twobyte().

PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
gets dropped.

For further consistency also
- use src.bytes instead of op_bytes in relevant memcpy() invocations,
- avoid the pointless updating of op_bytes (all we care about later is
  that the value be less than 16).

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -362,7 +362,7 @@ static const struct twobyte_table {
     [0xc1] = { DstMem|SrcReg|ModRM },
     [0xc2] = { DstImplicit|SrcImmByte|ModRM, simd_any_fp, d8s_vl },
     [0xc3] = { DstMem|SrcReg|ModRM|Mov },
-    [0xc4] = { DstReg|SrcImmByte|ModRM, simd_packed_int, 1 },
+    [0xc4] = { DstImplicit|SrcImmByte|ModRM, simd_none, 1 },
     [0xc5] = { DstReg|SrcImmByte|ModRM|Mov },
     [0xc6] = { DstImplicit|SrcImmByte|ModRM, simd_packed_fp, d8s_vl },
     [0xc7] = { ImplicitOps|ModRM },
@@ -2786,7 +2786,7 @@ x86_decode_twobyte(
         /* fall through */
     case X86EMUL_OPC_VEX_66(0, 0xc4): /* vpinsrw */
     case X86EMUL_OPC_EVEX_66(0, 0xc4): /* vpinsrw */
-        state->desc = DstReg | SrcMem16;
+        state->desc = DstImplicit | SrcMem16;
         break;
 
     case 0xf0:
@@ -8589,6 +8589,7 @@ x86_emulate(
         generate_exception_if(vex.l, EXC_UD);
         memcpy(mmvalp, &src.val, 2);
         ea.type = OP_MEM;
+        state->simd_size = simd_other;
         goto simd_0f_int_imm8;
 
 #ifndef X86EMUL_NO_SIMD
@@ -8603,9 +8604,8 @@ x86_emulate(
             host_and_vcpu_must_have(avx512bw);
         if ( !mode_64bit() )
             evex.w = 0;
-        memcpy(mmvalp, &src.val, op_bytes);
+        memcpy(mmvalp, &src.val, src.bytes);
         ea.type = OP_MEM;
-        op_bytes = src.bytes;
         d = SrcMem16; /* Fake for the common SIMD code below. */
         state->simd_size = simd_other;
         goto avx512f_imm8_no_sae;
@@ -10774,10 +10774,8 @@ x86_emulate(
     case X86EMUL_OPC_66(0x0f3a, 0x20): /* pinsrb $imm8,r32/m8,xmm */
     case X86EMUL_OPC_66(0x0f3a, 0x22): /* pinsr{d,q} $imm8,r/m,xmm */
         host_and_vcpu_must_have(sse4_1);
-        get_fpu(X86EMUL_FPU_xmm);
-        memcpy(mmvalp, &src.val, op_bytes);
+        memcpy(mmvalp, &src.val, src.bytes);
         ea.type = OP_MEM;
-        op_bytes = src.bytes;
         d = SrcMem16; /* Fake for the common SIMD code below. */
         state->simd_size = simd_other;
         goto simd_0f3a_common;
@@ -10787,9 +10785,8 @@ x86_emulate(
         generate_exception_if(vex.l, EXC_UD);
         if ( !mode_64bit() )
             vex.w = 0;
-        memcpy(mmvalp, &src.val, op_bytes);
+        memcpy(mmvalp, &src.val, src.bytes);
         ea.type = OP_MEM;
-        op_bytes = src.bytes;
         d = SrcMem16; /* Fake for the common SIMD code below. */
         state->simd_size = simd_other;
         goto simd_0f_int_imm8;


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 13:39:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 13:39:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10448.27763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVao9-0003h5-2Q; Thu, 22 Oct 2020 13:39:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10448.27763; Thu, 22 Oct 2020 13:39:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVao8-0003gy-Va; Thu, 22 Oct 2020 13:39:08 +0000
Received: by outflank-mailman (input) for mailman id 10448;
 Thu, 22 Oct 2020 13:39:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVao7-0003gm-Kp
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 13:39:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f973e6e-8983-43d6-8fe1-87ad82c7804e;
 Thu, 22 Oct 2020 13:39:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4280FB03F;
 Thu, 22 Oct 2020 13:39:06 +0000 (UTC)
X-Inumbo-ID: 0f973e6e-8983-43d6-8fe1-87ad82c7804e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603373946;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=3rDg9DSkuthC0rT6ckEwlAGdxHgxfASCErG7/P8qjFw=;
	b=SY7znojBY9fgniVHDfLv79vUMhdPibcCvRv22SzYo0p+hvE3RxBOzEvVkbkCTyRRCB0MuQ
	CNnCXaqrUPnM8KpbDXhKRL/knUTIE26VqaH6TpZmrjmEj+dk6pyYKPsOh3gU7+QVNX2Q3+
	GLKiqojGUHzHjepj+t3LfYEvgCy4gQ0=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: increase FPU save area in test harness/fuzzer
Message-ID: <adb5abbd-be50-82ba-c85d-c47024acc47c@suse.com>
Date: Thu, 22 Oct 2020 15:39:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Running them on a system (or emulator) with AMX support requires this
to be quite a bit larger than 8k, to avoid triggering the assert() in
emul_test_init(). Bump all the way up to 16k right away.
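The sizing argument can be sketched numerically. The AMX XTILEDATA
component alone is 8 KiB (eight 1-KiB tile registers), so with the
legacy region and XSAVE header added the image already exceeds 8k
before any AVX/AVX-512 state is counted. The figures below are
illustrative component sizes per the SDM, not the emulator's actual
CPUID-derived computation:

```python
# Back-of-the-envelope XSAVE image sizing once AMX state is enabled.
LEGACY_REGION = 512    # FXSAVE-compatible legacy area
XSAVE_HEADER  = 64
XTILECFG      = 64     # AMX component 17
XTILEDATA     = 8192   # AMX component 18: eight 1-KiB tile registers

minimum = LEGACY_REGION + XSAVE_HEADER + XTILECFG + XTILEDATA

assert minimum > 8192      # even an 8k buffer cannot hold AMX state
assert minimum <= 0x4000   # the 16k buffer leaves ample headroom
```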

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Intel's Software Development Emulator properly reports XSAVE-related AMX
CPUID data for Sapphire Rapids emulation as of 8.59.0.

--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -33,7 +33,7 @@
 uint32_t mxcsr_mask = 0x0000ffbf;
 struct cpuid_policy cp;
 
-static char fpu_save_area[4096] __attribute__((__aligned__((64))));
+static char fpu_save_area[0x4000] __attribute__((__aligned__((64))));
 static bool use_xsave;
 
 /*


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 13:56:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 13:56:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10463.27786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVb4a-0005eP-OH; Thu, 22 Oct 2020 13:56:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10463.27786; Thu, 22 Oct 2020 13:56:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVb4a-0005eI-LD; Thu, 22 Oct 2020 13:56:08 +0000
Received: by outflank-mailman (input) for mailman id 10463;
 Thu, 22 Oct 2020 13:56:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L1Xo=D5=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kVb4Y-0005eD-Q7
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 13:56:06 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab658084-3fbd-4905-b3ac-216890b9cebe;
 Thu, 22 Oct 2020 13:56:02 +0000 (UTC)
X-Inumbo-ID: ab658084-3fbd-4905-b3ac-216890b9cebe
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603374962;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=xIvhnaxGXwC7eRdxAHcJrIJSH93g+CeMtChMIroxYXE=;
  b=U+nYsk6YRlj6kplljuB/daJc+CR+avfe9IDaeU7n3sCfbrduhIl/X1hA
   oXAk9R1yZR56c3D18FGeQGvLypNEA/Y3xjAd80R1WkUY7Od5mAy/bc1e2
   f458FXUbWYILD4jHoGgqi1RV+YR9tZEUzUMBm9Hmoqki48qO8y/wd1Cl2
   c=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: gQv3tFbXCV1aKSXWEhDynYlB0+Tf38pRvB7oeQfQ1VTK2Cs7wlIQguIPk2F3Ka0XYNJiWJ7gdI
 b4lS6e4ImYW44HUn6OTQbRqN4QHtU94Pqz6MAq8R/0A71m9ly3aB9KfbP4QjRokYbLkdw2iZlm
 y2f9kow374XWJnNZgz/BelcfhbfN7HmZHs60lTURW40A7JruIif8Zx1Q54NrmUfAF/Xx6OjwdC
 M8ORIITQLmgBxDIGqvWcKiwx+PtTydxXss4U3AaDDzecKRe1QrGlT1Bw6NDg2fzPT4Gj1P8Jqv
 iz0=
X-SBRS: None
X-MesageID: 29543276
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29543276"
Subject: Re: [PATCH] x86emul: increase FPU save area in test harness/fuzzer
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <adb5abbd-be50-82ba-c85d-c47024acc47c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <367a2c7b-3cdf-e8ff-a0d8-50c200956004@citrix.com>
Date: Thu, 22 Oct 2020 14:55:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <adb5abbd-be50-82ba-c85d-c47024acc47c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL05.citrite.net (10.13.108.178)

On 22/10/2020 14:39, Jan Beulich wrote:
> Running them on a system (or emulator) with AMX support requires this
> to be quite a bit larger than 8k, to avoid triggering the assert() in
> emul_test_init(). Bump all the way up to 16k right away.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> Intel's Software Development Emulator properly reports XSAVE-related AMX
> CPUID data for Sapphire Rapids emulation as of 8.59.0.
>
> --- a/tools/tests/x86_emulator/x86-emulate.c
> +++ b/tools/tests/x86_emulator/x86-emulate.c
> @@ -33,7 +33,7 @@
>  uint32_t mxcsr_mask = 0x0000ffbf;
>  struct cpuid_policy cp;
>  
> -static char fpu_save_area[4096] __attribute__((__aligned__((64))));
> +static char fpu_save_area[0x4000] __attribute__((__aligned__((64))));
>  static bool use_xsave;
>  
>  /*



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 14:02:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 14:02:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10470.27808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbAO-0006dG-HT; Thu, 22 Oct 2020 14:02:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10470.27808; Thu, 22 Oct 2020 14:02:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbAO-0006d9-DB; Thu, 22 Oct 2020 14:02:08 +0000
Received: by outflank-mailman (input) for mailman id 10470;
 Thu, 22 Oct 2020 14:02:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVbAN-0006cb-Ha
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 14:02:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f7a5567-e46a-47a7-aa0a-54947caf5bd0;
 Thu, 22 Oct 2020 14:01:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbAC-0001QN-Lh; Thu, 22 Oct 2020 14:01:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbAC-0000Vk-DQ; Thu, 22 Oct 2020 14:01:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbAC-0004SO-Cw; Thu, 22 Oct 2020 14:01:56 +0000
X-Inumbo-ID: 6f7a5567-e46a-47a7-aa0a-54947caf5bd0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b18KICB5AE6D6vU7xswDRyHL2KdsBx3DeONdzHJOv68=; b=GDhphyN2nk+2P+7A3ETJtetwQF
	vz+n5GL+OWpB1TPZYG87OVaMkTtExj743kHPJBbUUm6iZmWyGYnibORTWQ1tatNqWcEOjAGzMKaEW
	priR5TVbHL3zxwYo3DKPhnXgnTvJAibeqWS2JvpAgFO6j9d8rVqPXNVwbH8X3VMNA3QM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156082-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156082: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=25cb07498e962e9e8452e6b17c397132e8364ec7
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 14:01:56 +0000

flight 156082 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156082/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              25cb07498e962e9e8452e6b17c397132e8364ec7
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  104 days
Failing since        151818  2020-07-11 04:18:52 Z  103 days   98 attempts
Testing same since   156082  2020-10-22 04:19:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 22680 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 14:13:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 14:13:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10474.27822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbLO-0007nE-Jz; Thu, 22 Oct 2020 14:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10474.27822; Thu, 22 Oct 2020 14:13:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbLO-0007n7-Gw; Thu, 22 Oct 2020 14:13:30 +0000
Received: by outflank-mailman (input) for mailman id 10474;
 Thu, 22 Oct 2020 14:13:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVbLN-0007lT-H7
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 14:13:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b396133-0dfc-4397-a2d1-dd3b3519ca2b;
 Thu, 22 Oct 2020 14:13:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbLG-0001fn-7r; Thu, 22 Oct 2020 14:13:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbLG-0001Bj-0y; Thu, 22 Oct 2020 14:13:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbLG-0006og-0V; Thu, 22 Oct 2020 14:13:22 +0000
X-Inumbo-ID: 3b396133-0dfc-4397-a2d1-dd3b3519ca2b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kDhvUohDJ+AJnCSr+olybovtBIog6cfMkRcQapbP8WU=; b=H2PMve2s+UNkyesnv1rzfo6qZ/
	B7YVbbZM/jx7BlGOaHF+dflS8g/ZsAWxmk4VCCNsFfEwmJNwdNfvDpPzRvltlka5aW+nGi1dKYeZJ
	Xpfmi5AW9h3JGVJ7WxsYDNx1HtAhXOy1cHpjIKLcPgCDNRnnJNO9/ad3AXyyGpl9BFg0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156065-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156065: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=26442d11e620a9e81c019a24a4ff38441c64ba10
X-Osstest-Versions-That:
    ovmf=f82b827c92f77eac8debdce6ef9689d156771871
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 14:13:22 +0000

flight 156065 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156065/

Perfect :-)
All tests in this flight passed as required

version targeted for testing:
 ovmf                 26442d11e620a9e81c019a24a4ff38441c64ba10
baseline version:
 ovmf                 f82b827c92f77eac8debdce6ef9689d156771871

Last test of basis   156017  2020-10-20 06:40:54 Z    2 days
Testing same since   156065  2020-10-21 06:40:48 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jian J Wang <jian.j.wang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f82b827c92..26442d11e6  26442d11e620a9e81c019a24a4ff38441c64ba10 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 14:39:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 14:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10477.27834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbkF-0001PO-PN; Thu, 22 Oct 2020 14:39:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10477.27834; Thu, 22 Oct 2020 14:39:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbkF-0001PH-MK; Thu, 22 Oct 2020 14:39:11 +0000
Received: by outflank-mailman (input) for mailman id 10477;
 Thu, 22 Oct 2020 14:39:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dfvK=D5=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVbkE-0001PC-0S
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 14:39:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 405ff595-6927-4046-bf2a-6287b3380861;
 Thu, 22 Oct 2020 14:39:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 24E54AC6D;
 Thu, 22 Oct 2020 14:39:08 +0000 (UTC)
X-Inumbo-ID: 405ff595-6927-4046-bf2a-6287b3380861
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603377548;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=OCAcuQfsMegAn8Lo2XBKi8Wp8cbS47CayeWcOW4FYX8=;
	b=EESPE/pUPHznm3BMkb6phgyJN73uUDFlGcho1FrD+Wv+Rk03+lsULI4ha3iojPTChAQxJu
	olXaAwpLy0fnaz1r0+8in/liczFR00z7DT4AIhHdJMoFeFvRY2fR4Sk5KQxEsAoMXwfJzC
	7ldekeMbJnt9ZjxLBBRcYA+s6Pj7ubQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen: add support for automatic debug key actions in case of crash
Date: Thu, 22 Oct 2020 16:39:05 +0200
Message-Id: <20201022143905.11032-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When the host crashes it is sometimes desirable to have additional
debug data available which could be produced via debug keys, but
halting the server for manual intervention might be impossible, as a
reboot or kexec is often needed sooner rather than later.

Add support for automatic debug key actions in case of a crash, which
can be activated via a boot or runtime parameter.

Depending on the type of crash the desired data might differ, so
support separate settings for each possible crash type.

The parameter is "crash-debug" with the following syntax:

  crash-debug-<type>=<string>

with <type> being one of:

  panic, hwdom, watchdog, kexeccmd, debugkey

and <string> a sequence of debug key characters, with '.' having the
special meaning of a 1 second pause.

So "crash-debug-watchdog=0.0qr" would result in special output in case
of a watchdog-triggered crash (dom0 state, 1 second pause, dom0 state,
domain info, run queues).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/xen-command-line.pandoc | 23 +++++++++++++++++++
 xen/common/kexec.c                |  8 ++++---
 xen/common/keyhandler.c           | 37 +++++++++++++++++++++++++++++++
 xen/common/shutdown.c             |  4 ++--
 xen/drivers/char/console.c        |  2 +-
 xen/include/xen/kexec.h           | 10 +++++++--
 xen/include/xen/keyhandler.h      | 11 +++++++++
 7 files changed, 87 insertions(+), 8 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 4ae9391fcd..f328c99cf8 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -574,6 +574,29 @@ reduction of features at Xen's disposal to manage guests.
 ### cpuinfo (x86)
 > `= <boolean>`
 
+### crash-debug-debugkey
+### crash-debug-hwdom
+### crash-debug-kexeccmd
+### crash-debug-panic
+### crash-debug-watchdog
+> `= <string>`
+
+> Can be modified at runtime
+
+Specify debug-key actions in cases of crashes. Each of the parameters applies
+to a different crash reason. The `<string>` is a sequence of debug key
+characters, with `.` having the special meaning of a 1 second pause.
+
+So e.g. `crash-debug-watchdog=0.0r` would dump dom0 state twice with a
+second between the two state dumps, followed by the run queues of the
+hypervisor, if the system crashes due to a watchdog timeout.
+
+These parameters should be used carefully, as e.g. specifying
+`crash-debug-debugkey=C` would result in an endless loop. Depending on the
+reason for the system crash, triggering some debug key action might
+result in a hang instead of dumping data and then doing a reboot or
+crash dump.
+
 ### crashinfo_maxaddr
 > `= <size>`
 
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 52cdc4ebc3..ebeee6405a 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -373,10 +373,12 @@ static int kexec_common_shutdown(void)
     return 0;
 }
 
-void kexec_crash(void)
+void kexec_crash(enum crash_reason reason)
 {
     int pos;
 
+    keyhandler_crash_action(reason);
+
     pos = (test_bit(KEXEC_FLAG_CRASH_POS, &kexec_flags) != 0);
     if ( !test_bit(KEXEC_IMAGE_CRASH_BASE + pos, &kexec_flags) )
         return;
@@ -409,7 +411,7 @@ static long kexec_reboot(void *_image)
 static void do_crashdump_trigger(unsigned char key)
 {
     printk("'%c' pressed -> triggering crashdump\n", key);
-    kexec_crash();
+    kexec_crash(CRASHREASON_DEBUGKEY);
     printk(" * no crash kernel loaded!\n");
 }
 
@@ -840,7 +842,7 @@ static int kexec_exec(XEN_GUEST_HANDLE_PARAM(void) uarg)
         ret = continue_hypercall_on_cpu(0, kexec_reboot, image);
         break;
     case KEXEC_TYPE_CRASH:
-        kexec_crash(); /* Does not return */
+        kexec_crash(CRASHREASON_KEXECCMD); /* Does not return */
         break;
     }
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 68364e987d..ac8229a4d7 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -3,7 +3,9 @@
  */
 
 #include <asm/regs.h>
+#include <xen/delay.h>
 #include <xen/keyhandler.h>
+#include <xen/param.h>
 #include <xen/shutdown.h>
 #include <xen/event.h>
 #include <xen/console.h>
@@ -507,6 +509,41 @@ void __init initialize_keytable(void)
     }
 }
 
+#define CRASHACTION_SIZE  32
+static char crash_debug_panic[CRASHACTION_SIZE];
+static char crash_debug_hwdom[CRASHACTION_SIZE];
+static char crash_debug_watchdog[CRASHACTION_SIZE];
+static char crash_debug_kexeccmd[CRASHACTION_SIZE];
+static char crash_debug_debugkey[CRASHACTION_SIZE];
+
+static char *crash_action[CRASHREASON_N] = {
+    [CRASHREASON_PANIC] = crash_debug_panic,
+    [CRASHREASON_HWDOM] = crash_debug_hwdom,
+    [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
+    [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
+    [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
+};
+
+string_runtime_param("crash-debug-panic", crash_debug_panic);
+string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
+string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
+string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
+string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
+
+void keyhandler_crash_action(enum crash_reason reason)
+{
+    const char *action = crash_action[reason];
+    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
+
+    while ( *action ) {
+        if ( *action == '.' )
+            mdelay(1000);
+        else
+            handle_keypress(*action, regs);
+        action++;
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 912593915b..abde48aa4c 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -43,7 +43,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_crash:
         debugger_trap_immediate();
         printk("Hardware Dom%u crashed: ", hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_HWDOM);
         maybe_reboot();
         break; /* not reached */
 
@@ -56,7 +56,7 @@ void hwdom_shutdown(u8 reason)
     case SHUTDOWN_watchdog:
         printk("Hardware Dom%u shutdown: watchdog rebooting machine\n",
                hardware_domain->domain_id);
-        kexec_crash();
+        kexec_crash(CRASHREASON_WATCHDOG);
         machine_restart(0);
         break; /* not reached */
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 861ad53a8f..acec277f5e 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -1271,7 +1271,7 @@ void panic(const char *fmt, ...)
 
     debugger_trap_immediate();
 
-    kexec_crash();
+    kexec_crash(CRASHREASON_PANIC);
 
     if ( opt_noreboot )
         machine_halt();
diff --git a/xen/include/xen/kexec.h b/xen/include/xen/kexec.h
index e85ba16405..9f7a912e97 100644
--- a/xen/include/xen/kexec.h
+++ b/xen/include/xen/kexec.h
@@ -1,6 +1,8 @@
 #ifndef __XEN_KEXEC_H__
 #define __XEN_KEXEC_H__
 
+#include <xen/keyhandler.h>
+
 #ifdef CONFIG_KEXEC
 
 #include <public/kexec.h>
@@ -48,7 +50,7 @@ void machine_kexec_unload(struct kexec_image *image);
 void machine_kexec_reserved(xen_kexec_reserve_t *reservation);
 void machine_reboot_kexec(struct kexec_image *image);
 void machine_kexec(struct kexec_image *image);
-void kexec_crash(void);
+void kexec_crash(enum crash_reason reason);
 void kexec_crash_save_cpu(void);
 struct crash_xen_info *kexec_crash_save_info(void);
 void machine_crash_shutdown(void);
@@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
 #define kexecing 0
 
 static inline void kexec_early_calculations(void) {}
-static inline void kexec_crash(void) {}
+static inline void kexec_crash(enum crash_reason reason)
+{
+    keyhandler_crash_action(reason);
+}
+
 static inline void kexec_crash_save_cpu(void) {}
 static inline void set_kexec_crash_area_size(u64 system_ram) {}
 
diff --git a/xen/include/xen/keyhandler.h b/xen/include/xen/keyhandler.h
index 5131e86cbc..dbf797a8b4 100644
--- a/xen/include/xen/keyhandler.h
+++ b/xen/include/xen/keyhandler.h
@@ -48,4 +48,15 @@ void register_irq_keyhandler(unsigned char key,
 /* Inject a keypress into the key-handling subsystem. */
 extern void handle_keypress(unsigned char key, struct cpu_user_regs *regs);
 
+enum crash_reason {
+    CRASHREASON_PANIC,
+    CRASHREASON_HWDOM,
+    CRASHREASON_WATCHDOG,
+    CRASHREASON_KEXECCMD,
+    CRASHREASON_DEBUGKEY,
+    CRASHREASON_N
+};
+
+void keyhandler_crash_action(enum crash_reason reason);
+
 #endif /* __XEN_KEYHANDLER_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 14:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 14:41:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10481.27850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbmk-0002Em-Cu; Thu, 22 Oct 2020 14:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10481.27850; Thu, 22 Oct 2020 14:41:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbmk-0002Ef-8y; Thu, 22 Oct 2020 14:41:46 +0000
Received: by outflank-mailman (input) for mailman id 10481;
 Thu, 22 Oct 2020 14:41:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVbmi-0002Dy-Kc
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 14:41:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f813df8a-f358-48c0-86e5-a8376da0b2a1;
 Thu, 22 Oct 2020 14:41:37 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbma-0002EK-Lr; Thu, 22 Oct 2020 14:41:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbma-0002CM-6w; Thu, 22 Oct 2020 14:41:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbma-0008Q1-6P; Thu, 22 Oct 2020 14:41:36 +0000
X-Inumbo-ID: f813df8a-f358-48c0-86e5-a8376da0b2a1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EMFYQg7zrgmsuANVz4zNQTao5+lqcf2ot1VL51aKqVo=; b=VhlyOm7BGXPAW8QUYsQ81uPZ7n
	JPknPEz4007AjVkANyYZcpG2XpDm+VcyJSMPzaj6q2dyInK511ZiORiBdWjNszTjL8NFiOdK4/jxT
	22lV/rJ9zagZ33/lHMRII06H2Vk6NrXmi+ORBDT5A7NmPuBbkoUJsyVALzcnP1nYgffM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156063-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156063: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-amd64-i386-qemut-rhel6hvm-intel:guest-start/redhat.repeat:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7b1e587f25c2dda38236e48aae81729798f10663
X-Osstest-Versions-That:
    xen=c93b520a41f2787dd76bfb2e454836d1d5787505
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 14:41:36 +0000

flight 156063 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156063/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-intel 14 guest-start/redhat.repeat fail in 156031 pass in 156063
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 156031 pass in 156063
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 156031

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155417
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155417
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155417
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155417
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155417
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155417
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155417
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155417
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155417
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155417
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155417
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7b1e587f25c2dda38236e48aae81729798f10663
baseline version:
 xen                  c93b520a41f2787dd76bfb2e454836d1d5787505

Last test of basis   155417  2020-10-04 02:29:19 Z   18 days
Testing same since   156031  2020-10-20 13:06:19 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Chen Yu <yu.c.chen@intel.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   c93b520a41..7b1e587f25  7b1e587f25c2dda38236e48aae81729798f10663 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 14:42:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 14:42:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10485.27865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbnM-0002L2-OM; Thu, 22 Oct 2020 14:42:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10485.27865; Thu, 22 Oct 2020 14:42:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbnM-0002Kv-Kx; Thu, 22 Oct 2020 14:42:24 +0000
Received: by outflank-mailman (input) for mailman id 10485;
 Thu, 22 Oct 2020 14:42:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVbnM-0002K9-2a
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 14:42:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf91e8bf-86fd-4a2a-9e5b-5eff744d040c;
 Thu, 22 Oct 2020 14:42:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbnC-0002Ei-KC; Thu, 22 Oct 2020 14:42:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbnC-0002FR-CD; Thu, 22 Oct 2020 14:42:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbnC-00016c-AE; Thu, 22 Oct 2020 14:42:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kVbnM-0002K9-2a
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 14:42:24 +0000
X-Inumbo-ID: bf91e8bf-86fd-4a2a-9e5b-5eff744d040c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id bf91e8bf-86fd-4a2a-9e5b-5eff744d040c;
	Thu, 22 Oct 2020 14:42:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RnYTAweY2TcVq9rzIPEISJTiPkAYE3WRHcsz/+N+hZY=; b=KGr8DbOVMnYmv5Z299mrefQRzu
	9vg89sFKBHm1q8oqvNZRUQCaBMumCDskLa6jCXIq2yMaGBj/UxfSOX8of1NsJ468n6YW1FrFnWg5I
	coMv/til9sIk5GwMQ4jQNE9crhcUY2h1PqLLDhGjRP5Hx7eiAN0uI7ayIiKdbCqJVzUI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVbnC-0002Ei-KC; Thu, 22 Oct 2020 14:42:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVbnC-0002FR-CD; Thu, 22 Oct 2020 14:42:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVbnC-00016c-AE; Thu, 22 Oct 2020 14:42:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156081-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156081: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=e06c687fdf24b52358539a52bba184e8f5ff5b35
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 14:42:14 +0000

flight 156081 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156081/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                e06c687fdf24b52358539a52bba184e8f5ff5b35
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   63 days
Failing since        152659  2020-08-21 14:07:39 Z   62 days  115 attempts
Testing same since   156081  2020-10-22 03:05:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49266 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 14:48:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 14:48:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10515.27963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbtd-00039K-IV; Thu, 22 Oct 2020 14:48:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10515.27963; Thu, 22 Oct 2020 14:48:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVbtd-00039D-FV; Thu, 22 Oct 2020 14:48:53 +0000
Received: by outflank-mailman (input) for mailman id 10515;
 Thu, 22 Oct 2020 14:48:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVbtc-00038Z-A6
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 14:48:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be8e1c5c-ddc6-431e-81fd-b4e03ee8a991;
 Thu, 22 Oct 2020 14:48:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbtU-0002Qo-HS; Thu, 22 Oct 2020 14:48:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbtU-0002Oq-96; Thu, 22 Oct 2020 14:48:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVbtU-0002wB-8b; Thu, 22 Oct 2020 14:48:44 +0000
X-Inumbo-ID: be8e1c5c-ddc6-431e-81fd-b4e03ee8a991
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VuNql1LunqDlgHpEPAApA3Mwg7qpg6ZaTR0lbX93qK8=; b=WleDsht2lieCQwAFJ5ovJrsKjG
	QGAlyUYJNmsNJseN26VnSNuzwMSD300bFtKK7gze9KxsC6IBP9OGUK+CesWlyctep1bD8j+CcZoaL
	S8gARTXPPsBIAFHnX22QZZv9hyaPtMIpX65UlH/yUq/LuY4scnhxe5jH/XVLbqdGP0jQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156085-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156085: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-amd64-xsm:xen-build:fail:regression
    xen-unstable:build-arm64:xen-build:fail:regression
    xen-unstable:build-arm64-xsm:xen-build:fail:regression
    xen-unstable:build-amd64:xen-build:fail:regression
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-unstable:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3b49791e4cc2f38dd84bf331b75217adaef636e3
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 14:48:44 +0000

flight 156085 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156085/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 156013
 build-arm64                   6 xen-build                fail REGR. vs. 156013
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156013
 build-amd64                   6 xen-build                fail REGR. vs. 156013

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 155960
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156013
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156013
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  3b49791e4cc2f38dd84bf331b75217adaef636e3
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   156013  2020-10-20 04:30:46 Z    2 days
Failing since        156027  2020-10-20 12:37:35 Z    2 days    3 attempts
Testing same since   156085  2020-10-22 05:59:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 341 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 15:18:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 15:18:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10534.27993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVcLX-0006FC-9L; Thu, 22 Oct 2020 15:17:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10534.27993; Thu, 22 Oct 2020 15:17:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVcLX-0006F5-6J; Thu, 22 Oct 2020 15:17:43 +0000
Received: by outflank-mailman (input) for mailman id 10534;
 Thu, 22 Oct 2020 15:17:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DPUU=D5=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kVcLV-0006F0-9t
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 15:17:41 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 028d8250-ae10-472c-9432-f31a0e5ddf58;
 Thu, 22 Oct 2020 15:17:40 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id d24so2332163ljg.10
 for <xen-devel@lists.xenproject.org>; Thu, 22 Oct 2020 08:17:40 -0700 (PDT)
X-Inumbo-ID: 028d8250-ae10-472c-9432-f31a0e5ddf58
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=aytwR5IL9OS0QKf9b+p2KT2CU6QbBd6febiqArxDsfs=;
        b=o1F6tn88p06VknvNMv7p3t1B7lxIT2SpksabxDLA6YW1NknghJFih4Q/8LRdLGsXvm
         +AAURNpxNyA9F4esCWet0LRP00jLElfOYszZ/BpslWrEEVmDmOLYcHgzXdVYpKo3Ukr7
         sUVVaUuUnwGoScwwr6QzaQU456Mvr743poQ29MzGeo8ydnPATQluWMPUagKC8mF7aXkR
         9ACVkixvjrdrEKQqGZ4CuKzvwy5Zlxw1pcpVIpXazrdM0/VP5hP3rIB1JmPO87m0fK19
         shrqu1JFDDnBINUxOqEIdDSKLHDr7D8li6HdA1wLpVGzwkbxsL9JENLAxI1WZuVc16G8
         HM3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=aytwR5IL9OS0QKf9b+p2KT2CU6QbBd6febiqArxDsfs=;
        b=EG+OaQgENyPzyxA9RbivLKF3y++hQNzbr7lC9jygNhKAad4akWqw5T6pwd+tMnhPWx
         ZYdoy8jb4v+JTMx+eRI3mtSWpCNxhXKUs76Xkp7/CbTAPYH3tmT3Rrh9NWjkSy2W/ZdB
         9iKYi24a4GuZZ2ZpcA+FLHl54ibCTobsjR+lofQ08BEUalCphBV8tOhy/5n6W4H3Udp7
         Ne+oE7rERXpPaCEMqLceSruXgHzkuzS5SdZxSr6TpS0v0Sj3ryul/M5PNIErpFYDszYH
         ojTSetEdHviUodomWNafqPUF1DQnuA9SX2U2hgcVLhdXlSQpyKxTEMyLaLunaOWgwseE
         erRA==
X-Gm-Message-State: AOAM533tJxu0HeELbGPThA40nlDXtwEvFs+FNzdYv6m11zPTdDyB+1aE
	lIMgk2jQN/QLHpD1z0ICQ9nEWbrXmQSSEOrOW28=
X-Google-Smtp-Source: ABdhPJw0h2w2fU5fufpA8MMTD61ePZqo/Hxao/L0FiX3RbIRviCcHhHIA/vc7KyVeYNssrZwXiP9KmHAaf52SkQ7Kaw=
X-Received: by 2002:a2e:b0c7:: with SMTP id g7mr1123806ljl.433.1603379859164;
 Thu, 22 Oct 2020 08:17:39 -0700 (PDT)
MIME-Version: 1.0
References: <20201013140511.5681-1-jandryuk@gmail.com> <ddb5c9c2-c206-28d6-2d9d-7954e7022c23@redhat.com>
In-Reply-To: <ddb5c9c2-c206-28d6-2d9d-7954e7022c23@redhat.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 22 Oct 2020 11:17:26 -0400
Message-ID: <CAKf6xpvpuG1jVdf3+heXzHFd_kc5kVHYdJgC+8iazFLciqOMZw@mail.gmail.com>
Subject: Re: [PATCH v2 0/3] Add Xen CpusAccel
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: QEMU <qemu-devel@nongnu.org>, Claudio Fontana <cfontana@suse.de>, 
	Thomas Huth <thuth@redhat.com>, Laurent Vivier <lvivier@redhat.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Paul Durrant <paul@xen.org>, xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Oct 13, 2020 at 1:16 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 13/10/20 16:05, Jason Andryuk wrote:
> > Xen was left behind when CpusAccel became mandatory and fails the assert
> > in qemu_init_vcpu().  It relied on the same dummy cpu threads as qtest.
> > Move the qtest cpu functions to a common location and reuse them for
> > Xen.
> >
> > v2:
> >   New patch "accel: Remove _WIN32 ifdef from qtest-cpus.c"
> >   Use accel/dummy-cpus.c for filename
> >   Put prototype in include/sysemu/cpus.h
> >
> > Jason Andryuk (3):
> >   accel: Remove _WIN32 ifdef from qtest-cpus.c
> >   accel: move qtest CpusAccel functions to a common location
> >   accel: Add xen CpusAccel using dummy-cpus
> >
> >  accel/{qtest/qtest-cpus.c => dummy-cpus.c} | 27 ++++------------------
> >  accel/meson.build                          |  8 +++++++
> >  accel/qtest/meson.build                    |  1 -
> >  accel/qtest/qtest-cpus.h                   | 17 --------------
> >  accel/qtest/qtest.c                        |  5 +++-
> >  accel/xen/xen-all.c                        |  8 +++++++
> >  include/sysemu/cpus.h                      |  3 +++
> >  7 files changed, 27 insertions(+), 42 deletions(-)
> >  rename accel/{qtest/qtest-cpus.c => dummy-cpus.c} (71%)
> >  delete mode 100644 accel/qtest/qtest-cpus.h
> >
>
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Thank you, Paolo.  Also Anthony Acked and Claudio Reviewed patch 3.
How can we get this into the tree?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 15:30:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 15:30:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10539.28012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVcXO-0007KZ-H5; Thu, 22 Oct 2020 15:29:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10539.28012; Thu, 22 Oct 2020 15:29:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVcXO-0007KS-Co; Thu, 22 Oct 2020 15:29:58 +0000
Received: by outflank-mailman (input) for mailman id 10539;
 Thu, 22 Oct 2020 15:29:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IPEr=D5=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kVcXN-0007KM-Nj
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 15:29:57 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8f014c1e-fbef-437e-907e-68fe43588320;
 Thu, 22 Oct 2020 15:29:56 +0000 (UTC)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-556-edzr7LrEMiKA4AuumLr6vQ-1; Thu, 22 Oct 2020 11:29:53 -0400
Received: by mail-wr1-f72.google.com with SMTP id f11so759361wro.15
 for <xen-devel@lists.xenproject.org>; Thu, 22 Oct 2020 08:29:52 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id h7sm4766264wrt.45.2020.10.22.08.29.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 22 Oct 2020 08:29:50 -0700 (PDT)
X-Inumbo-ID: 8f014c1e-fbef-437e-907e-68fe43588320
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603380596;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jkiSTsiwDrNQ72v/SazVSSbk21cua5TJ5JZP+u5sNmw=;
	b=a/GsyZaZiSCMqT8jswn81KuhUYvtVyfDsEaerdErUo0jsdFHJUOZgei8468Hkni8SOjb5c
	hehLdXwkMxUzH8FWolEr+m+B/CqdAOIBnz7AwNGC1Cc6RzterqMYCE0vRvdQOnfSC9iZqw
	Ule4N0f+8r0Nn56Y0V/S8AmIplGSauM=
X-MC-Unique: edzr7LrEMiKA4AuumLr6vQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=jkiSTsiwDrNQ72v/SazVSSbk21cua5TJ5JZP+u5sNmw=;
        b=jnC8KbUxSAlkI4fdY16hrFmw9F4IGUL8S+KJcy+L+4Q00Q3LvDbZqxDsq7CNvV2/yJ
         mBqSQdgRGQ7qiv8rAo47eoaTVjEopkcv/Yfw/rZ7/Tyn4v6l5OuRJkDZUhXTtxFFxvtB
         zQ2oapbZ0QjnUPFXuCS4xrx418Clb2TcDQ+cC+dNQz3HuiLGHfuZaPsHIexSaPAPRgFI
         CvGNThrb7JflmSXQEmW+BitT2T0X9yhpeTkWvRNT7/hs+hmrHHmR+Sswtg6ODFkaR2Um
         tQyAZbBkt0mkMoq7zU66bYUlNg0/v816aNM9zQ6WIxhrB+Vgo2yF8+5/Oqr/oRtocGTz
         KI2Q==
X-Gm-Message-State: AOAM53286CQylzc50V33HmHyinTVPyFLQ6Q6Nb1cBN8z89bND8F5Bkdi
	DivhGnP30qjXcv91PBzl9Eg25uSAnVMcb/dTrQx3MjDH+xvPZYBsu/12SN6/gmjrO/UbeqQDD0V
	PDQCOdb1+DUEy/5y5VMKg7RdSRLE=
X-Received: by 2002:a1c:7ec7:: with SMTP id z190mr3064252wmc.8.1603380591398;
        Thu, 22 Oct 2020 08:29:51 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwGX5JjI76ENng2y9LttGQf7DnpCKaOVKsBzUv2Ybaho8Ek92Wpmo5+190HqXDZuCF8ZiFmCQ==
X-Received: by 2002:a1c:7ec7:: with SMTP id z190mr3064228wmc.8.1603380591192;
        Thu, 22 Oct 2020 08:29:51 -0700 (PDT)
Subject: Re: [PATCH v2 0/3] Add Xen CpusAccel
To: Jason Andryuk <jandryuk@gmail.com>
Cc: QEMU <qemu-devel@nongnu.org>, Claudio Fontana <cfontana@suse.de>,
 Thomas Huth <thuth@redhat.com>, Laurent Vivier <lvivier@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <20201013140511.5681-1-jandryuk@gmail.com>
 <ddb5c9c2-c206-28d6-2d9d-7954e7022c23@redhat.com>
 <CAKf6xpvpuG1jVdf3+heXzHFd_kc5kVHYdJgC+8iazFLciqOMZw@mail.gmail.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <d9f23eee-c0af-d2dd-9b9d-f0255fc8e3d1@redhat.com>
Date: Thu, 22 Oct 2020 17:29:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <CAKf6xpvpuG1jVdf3+heXzHFd_kc5kVHYdJgC+8iazFLciqOMZw@mail.gmail.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22/10/20 17:17, Jason Andryuk wrote:
> On Tue, Oct 13, 2020 at 1:16 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>>
>> On 13/10/20 16:05, Jason Andryuk wrote:
>>> Xen was left behind when CpusAccel became mandatory and fails the assert
>>> in qemu_init_vcpu().  It relied on the same dummy cpu threads as qtest.
>>> Move the qtest cpu functions to a common location and reuse them for
>>> Xen.
>>>
>>> v2:
>>>   New patch "accel: Remove _WIN32 ifdef from qtest-cpus.c"
>>>   Use accel/dummy-cpus.c for filename
>>>   Put prototype in include/sysemu/cpus.h
>>>
>>> Jason Andryuk (3):
>>>   accel: Remove _WIN32 ifdef from qtest-cpus.c
>>>   accel: move qtest CpusAccel functions to a common location
>>>   accel: Add xen CpusAccel using dummy-cpus
>>>
>>>  accel/{qtest/qtest-cpus.c => dummy-cpus.c} | 27 ++++------------------
>>>  accel/meson.build                          |  8 +++++++
>>>  accel/qtest/meson.build                    |  1 -
>>>  accel/qtest/qtest-cpus.h                   | 17 --------------
>>>  accel/qtest/qtest.c                        |  5 +++-
>>>  accel/xen/xen-all.c                        |  8 +++++++
>>>  include/sysemu/cpus.h                      |  3 +++
>>>  7 files changed, 27 insertions(+), 42 deletions(-)
>>>  rename accel/{qtest/qtest-cpus.c => dummy-cpus.c} (71%)
>>>  delete mode 100644 accel/qtest/qtest-cpus.h
>>>
>>
>> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
> 
> Thank you, Paolo.  Also Anthony Acked and Claudio Reviewed patch 3.
> How can we get this into the tree?

I think Anthony should send a pull request?

Paolo



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:01:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:01:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10544.28029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVd1X-0002zh-1P; Thu, 22 Oct 2020 16:01:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10544.28029; Thu, 22 Oct 2020 16:01:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVd1W-0002za-Ud; Thu, 22 Oct 2020 16:01:06 +0000
Received: by outflank-mailman (input) for mailman id 10544;
 Thu, 22 Oct 2020 16:01:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mO8V=D5=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kVd1W-0002zV-3h
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:01:06 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 34bc227d-d751-49ae-81e5-606dae3d9519;
 Thu, 22 Oct 2020 16:01:05 +0000 (UTC)
X-Inumbo-ID: 34bc227d-d751-49ae-81e5-606dae3d9519
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603382465;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=ycOh5zIYrtDHzOnI9vkGH/2BnhNlPoyPcXDQJ+1Hg9s=;
  b=Oor8rARHe34G85BVP/Wh1JFbOESNIXCWtu5zvzEDt6BLvuzHxK/cRr49
   3WGnD7lGdp0wOkCCNaM2pauiqZ5GxXyQcdU1f6LJ98A+aBZxU5L2ZVh7d
   qBmqBff/tTqEFWvyC0teTxRI976eQTispeN8u0VnKFy6AaIFgfaMM/hQP
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: WnF1brx971V3mH0zByaXpVYo/7QElDkSbf7CJ+0Mu4aF1TXQfgP6m/1tAKRdbv5sMQSpqTVT7O
 sf4cUlwJawvZ/hPJwlMx5YMVCKYb/kjomjuP1nY/ydukI8xgbN6FrGD5lmvQB77mgN3BlsNvrI
 G/yQKzamKKTZpgmRhHjnawAwqgUm2BkClNVqOLKAXWoQHEsbas/dthAtEM4u7HhjajsIyVqGGn
 2XLrFWnGClNTVqh3suCTvpVn2KQKwFKMFToWzdEmUfqEoytX9GYZ6HVfgVUQmF3LWdfeogGfnq
 Ji8=
X-SBRS: None
X-MesageID: 29829737
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,404,1596513600"; 
   d="scan'208";a="29829737"
Date: Thu, 22 Oct 2020 18:00:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
Message-ID: <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL06.citrite.net (10.13.108.179)

On Tue, Oct 20, 2020 at 04:10:09PM +0200, Jan Beulich wrote:
> The per-vCPU virq_lock, which is being held anyway, together with there
> not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
> is zero, provide sufficient guarantees.

This is also fine because closing the event channel will also be done
with the per-vCPU virq_lock held, AFAICT.

> Undo the lock addition done for
> XSA-343 (commit e045199c7c9c "evtchn: address races with
> evtchn_reset()"). Update the description next to struct evtchn_port_ops
> accordingly.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -794,7 +794,6 @@ void send_guest_vcpu_virq(struct vcpu *v
>      unsigned long flags;
>      int port;
>      struct domain *d;
> -    struct evtchn *chn;
>  
>      ASSERT(!virq_is_global(virq));
>  
> @@ -805,10 +804,7 @@ void send_guest_vcpu_virq(struct vcpu *v
>          goto out;
>  
>      d = v->domain;
> -    chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
> -    evtchn_port_set_pending(d, v->vcpu_id, chn);
> -    spin_unlock(&chn->lock);
> +    evtchn_port_set_pending(d, v->vcpu_id, evtchn_from_port(d, port));
>  
>   out:
>      spin_unlock_irqrestore(&v->virq_lock, flags);
> @@ -837,9 +833,7 @@ void send_guest_global_virq(struct domai
>          goto out;
>  
>      chn = evtchn_from_port(d, port);
> -    spin_lock(&chn->lock);
>      evtchn_port_set_pending(d, chn->notify_vcpu_id, chn);
> -    spin_unlock(&chn->lock);
>  
>   out:
>      spin_unlock_irqrestore(&v->virq_lock, flags);
> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
>   * Low-level event channel port ops.
>   *
>   * All hooks have to be called with a lock held which prevents the channel
> - * from changing state. This may be the domain event lock, the per-channel
> - * lock, or in the case of sending interdomain events also the other side's
> - * per-channel lock. Exceptions apply in certain cases for the PV shim.
> + * from changing state. This may be
> + * - the domain event lock,
> + * - the per-channel lock,
> + * - in the case of sending interdomain events the other side's per-channel
> + *   lock,
> + * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
> + *   combination with the ordering enforced through how the vCPU's
> + *   virq_to_evtchn[] gets updated),
> + * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
> + * Exceptions apply in certain cases for the PV shim.

Having such a wide locking discipline looks dangerous to me; it's easy
to get things wrong without noticing, IMO.

Maybe we could add an assert to that effect in
evtchn_port_set_pending in order to make sure callers follow the
discipline?

Roger.


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:12:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:12:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10550.28054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdCY-00043z-7a; Thu, 22 Oct 2020 16:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10550.28054; Thu, 22 Oct 2020 16:12:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdCY-00043s-4P; Thu, 22 Oct 2020 16:12:30 +0000
Received: by outflank-mailman (input) for mailman id 10550;
 Thu, 22 Oct 2020 16:12:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVdCW-00043n-88
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:12:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d68f9fe-31e1-4f96-ab1f-86c6b972f0b4;
 Thu, 22 Oct 2020 16:12:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 57FA2AC48;
 Thu, 22 Oct 2020 16:12:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603383146;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tvy1ki8NPJCVlIgf2jzK2JopOOEnb4t999skvHfB3T8=;
	b=XRjcPUMxQnNqi/8NU1XFmNeE06w5B8lpP/eQozeypE5T2sFwzR4xgAjs+8vAXIKEGb8BSV
	5uaZcmyI45cmLIX8RAZ962HxBhPvly7XVtJi6fyFX8K5H4u+/PL9pj0h63HxnDi2OJ4nwx
	JnV/Ul+/F+JgHtyequAwuNCBaNRfd34=
Subject: Re: [PATCH v2 05/11] x86/vioapic: switch to use the EOI callback
 mechanism
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-6-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <04ae52fb-5579-4994-0b7c-72d48d127749@suse.com>
Date: Thu, 22 Oct 2020 18:12:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-6-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 12:41, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/vlapic.c
> +++ b/xen/arch/x86/hvm/vlapic.c
> @@ -189,7 +189,7 @@ void vlapic_set_irq_callback(struct vlapic *vlapic, uint8_t vec, uint8_t trig,
>  
>      if ( hvm_funcs.update_eoi_exit_bitmap )
>          alternative_vcall(hvm_funcs.update_eoi_exit_bitmap, target, vec,
> -                          trig || callback);
> +                          callback);

There's a shortcoming in the alternative call framework which I
see no way to eliminate but which makes it necessary to use
!!callback here. Otherwise, if the callback happens to sit on a
256-byte boundary (low address byte zero), you'll pass false
when you mean true. (The original use, i.e. prior to patch 3,
of just "trig" was sufficiently okay, because the parameter
- despite being u8 - is effectively used as a boolean by the
callers iirc.)

Or perhaps the best thing is to require wrappers for all hooks
taking bool parameters, because then the necessary conversion
will be done when calling the wrapper.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:17:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:17:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10552.28066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdHD-0004LQ-QW; Thu, 22 Oct 2020 16:17:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10552.28066; Thu, 22 Oct 2020 16:17:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdHD-0004LJ-N7; Thu, 22 Oct 2020 16:17:19 +0000
Received: by outflank-mailman (input) for mailman id 10552;
 Thu, 22 Oct 2020 16:17:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Z30Q=D5=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVdHB-0004LE-M1
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:17:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04716776-590d-45eb-8db0-1f53653a127a;
 Thu, 22 Oct 2020 16:17:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E5835AC23;
 Thu, 22 Oct 2020 16:17:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603383436;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=E58KvwYuARR/Vx0qqHrGJc9ZoqrlGrPMgauEM1VCzb8=;
	b=I3YlhQZdqZmpAE6dPPtOiFm1sc7BPfOuYDXN84AE2YMjKRgI7QqWcdqaPfh0yNNmpq4NDd
	oGLClKeCj5nqWsivmh29UWQ/xs2aFsLRdpGIv73ME9e8C7hvvWbfUDFxPf0Mtvfz1tOAZr
	xCZPHpABCqA+no1Tagm150+A86HhOE4=
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
Date: Thu, 22 Oct 2020 18:17:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 22.10.2020 18:00, Roger Pau Monné wrote:
> On Tue, Oct 20, 2020 at 04:10:09PM +0200, Jan Beulich wrote:
>> The per-vCPU virq_lock, which is being held anyway, together with there
>> not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
>> is zero, provide sufficient guarantees.
> 
> This is also fine because closing the event channel will be done with
> the VIRQ lock held also, AFAICT.

Right, I'm not going into these details (or else binding would also
need mentioning) here, as the code comment update should sufficiently
cover it. Hence just saying "sufficient guarantees".

>> --- a/xen/include/xen/event.h
>> +++ b/xen/include/xen/event.h
>> @@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
>>   * Low-level event channel port ops.
>>   *
>>   * All hooks have to be called with a lock held which prevents the channel
>> - * from changing state. This may be the domain event lock, the per-channel
>> - * lock, or in the case of sending interdomain events also the other side's
>> - * per-channel lock. Exceptions apply in certain cases for the PV shim.
>> + * from changing state. This may be
>> + * - the domain event lock,
>> + * - the per-channel lock,
>> + * - in the case of sending interdomain events the other side's per-channel
>> + *   lock,
>> + * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
>> + *   combination with the ordering enforced through how the vCPU's
>> + *   virq_to_evtchn[] gets updated),
>> + * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
>> + * Exceptions apply in certain cases for the PV shim.
> 
> Having such a wide locking discipline looks dangerous to me; it's easy
> to get things wrong without notice, IMO.

It is effectively only describing how things are (or were before
XSA-343, getting restored here).

> Maybe we could add an assert to that effect in
> evtchn_port_set_pending in order to make sure callers follow the
> discipline?

Would be nice, but (a) see the last sentence of the comment still
in context above and (b) it shouldn't be just set_pending(). The
comment starts with "All hooks ..." after all.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:21:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:21:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10554.28078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdLA-0005Bl-Ce; Thu, 22 Oct 2020 16:21:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10554.28078; Thu, 22 Oct 2020 16:21:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdLA-0005Be-8m; Thu, 22 Oct 2020 16:21:24 +0000
Received: by outflank-mailman (input) for mailman id 10554;
 Thu, 22 Oct 2020 16:21:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVdL9-0005BZ-Kg
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:21:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b912655a-a25d-4b6c-9898-c4696330fb2e;
 Thu, 22 Oct 2020 16:21:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVdL5-0004qa-Q3; Thu, 22 Oct 2020 16:21:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVdL5-0006Uv-Iq; Thu, 22 Oct 2020 16:21:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVdL5-0001ig-IM; Thu, 22 Oct 2020 16:21:19 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aeIs6QtE0H4Z75aTplpOQpUsffs8nzleSIr0kTqcXyI=; b=lAT7M4H5esDrLV7HdQIr9Ulwp5
	S7Tq9nJajHRLe1TNfAuLd3K1fl3EppmkZzjMGrvVVUVnlyuJncQzFriacruplkxmxPaHqIiFiqJpR
	F4coAmDKtyH+CXwc7iJh4oB9A0YWKIKboSSKVe9xKIHGg6hNftobzisPf2c0IHybUkzk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156094-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156094: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 16:21:19 +0000

flight 156094 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156094/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   63 days
Failing since        152659  2020-08-21 14:07:39 Z   62 days  116 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10561.28129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiO-0007Nq-GW; Thu, 22 Oct 2020 16:45:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10561.28129; Thu, 22 Oct 2020 16:45:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiO-0007Ni-Cj; Thu, 22 Oct 2020 16:45:24 +0000
Received: by outflank-mailman (input) for mailman id 10561;
 Thu, 22 Oct 2020 16:45:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdiM-0007J9-UC
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06f363db-1555-4fd7-92a8-8a5ce08226d0;
 Thu, 22 Oct 2020 16:45:17 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiH-0005Kf-DH
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiH-0007T1-CK
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiF-00059e-Ka; Thu, 22 Oct 2020 17:45:15 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdiM-0007J9-UC
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:22 +0000
X-Inumbo-ID: 06f363db-1555-4fd7-92a8-8a5ce08226d0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 06f363db-1555-4fd7-92a8-8a5ce08226d0;
	Thu, 22 Oct 2020 16:45:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=bFPgih86ZB+5E8DS2+fY36fGNFvkYHro0nXN3elXtQs=; b=mFv0ByEi14am0LJl3NtKkn8hzw
	93y3ookXVXDgNgILgqtiOMsaW1T5XgCZx8vN5vx+ScNCaN9IM7uVT3R7YSQ2ID0uEoH4NgsMovgye
	yIbmgnkSf4uneRQB1/et8pn0DgOORtB9sc7LRgYUPjw1GlX4GjcT3DzpD6teDnjqVKeY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-0005Kf-DH
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-0007T1-CK
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-00059e-Ka; Thu, 22 Oct 2020 17:45:15 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 08/16] PDU/MSW: Make show() return the value from get()
Date: Thu, 22 Oct 2020 17:44:58 +0100
Message-Id: <20201022164506.1552-9-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

No-one uses this return value yet, so NFC.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-msw | 1 +
 1 file changed, 1 insertion(+)

diff --git a/pdu-msw b/pdu-msw
index 196b6c45..2d4ec967 100755
--- a/pdu-msw
+++ b/pdu-msw
@@ -119,6 +119,7 @@ sub get () {
 sub show () {
     my $mean = get();
     printf "pdu-msw $dnsname: #%s \"%s\" = %s\n", $useport, $usename, $mean;
+    return $mean;
 }
 
 sub action_value () {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10558.28093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiJ-0007JL-KX; Thu, 22 Oct 2020 16:45:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10558.28093; Thu, 22 Oct 2020 16:45:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiJ-0007JE-HK; Thu, 22 Oct 2020 16:45:19 +0000
Received: by outflank-mailman (input) for mailman id 10558;
 Thu, 22 Oct 2020 16:45:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdiH-0007J4-LK
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 61030cba-ae82-48ad-b164-aef21b3733e6;
 Thu, 22 Oct 2020 16:45:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiG-0005KI-3b
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiG-0007RI-2W
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiE-00059e-Aq; Thu, 22 Oct 2020 17:45:14 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdiH-0007J4-LK
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
X-Inumbo-ID: 61030cba-ae82-48ad-b164-aef21b3733e6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 61030cba-ae82-48ad-b164-aef21b3733e6;
	Thu, 22 Oct 2020 16:45:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=VA0MrPoomWT3D4MkBrEL/LD+ZtLasw0letLMtdwmVoo=; b=p4+G4XG0pUFpbh94MGkdGu02z5
	WqnNdxe2RlLXu9VjaAGH2FXurQYatuo4cFX0LN8VOud5EL04EqVLbgzy2N+JYEAvZMMm9GVlEw5NJ
	lJNBqLCrTaMXAascAik3c1Cwr34TLJDtSlpopt+Eq0WptgWNnUss9FjtoBlaN+be3ovI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-0005KI-3b
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-0007RI-2W
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiE-00059e-Aq; Thu, 22 Oct 2020 17:45:14 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 03/16] PDU/IPMI: Retransmit, don't just wait
Date: Thu, 22 Oct 2020 17:44:53 +0100
Message-Id: <20201022164506.1552-4-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We have a system for which
   ipmitool -H sabro0m -U root -P XXXX -I lanplus power on
seems to work but doesn't take effect the first time.

Retransmit on each retry.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/PDU/ipmi.pm | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/Osstest/PDU/ipmi.pm b/Osstest/PDU/ipmi.pm
index 98e8957f..21c94d98 100644
--- a/Osstest/PDU/ipmi.pm
+++ b/Osstest/PDU/ipmi.pm
@@ -66,11 +66,12 @@ sub pdu_power_state {
 	return;
     }
 
-    system_checked((@cmd, qw(power), $onoff));
-
     my $count = 60;
     for (;;) {
         last if $getstatus->() eq $onoff;
+
+	system_checked((@cmd, qw(power), $onoff));
+
         die "did not power $onoff" unless --$count > 0;
         sleep(1);
     }
-- 
2.20.1
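The retry pattern in the patch above (check the status first, then re-send the power command on every iteration, rather than sending it once up front and only polling) can be paraphrased as the following sketch. This is Python rather than the osstest Perl, and `send_command`/`get_status` are hypothetical callables standing in for the ipmitool invocations; it is an illustration of the loop structure, not part of osstest.

```python
import time

def set_power_state(send_command, get_status, target, retries=60, delay=1.0):
    """Drive a flaky power controller to the target state.

    Instead of issuing the command once and waiting, the command is
    retransmitted on every retry until the reported status matches,
    which tolerates controllers that silently drop the first request.
    """
    for attempt in range(retries):
        # Check first: if a previous transmission took effect, stop.
        if get_status() == target:
            return attempt  # number of transmissions that were needed
        # Retransmit on every iteration, not just the first.
        send_command(target)
        time.sleep(delay)
    raise RuntimeError(f"did not power {target}")
```

For example, against a controller that ignores the first command, the loop succeeds on the second transmission, where a send-once-then-poll version would time out.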



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10560.28117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiN-0007M7-7y; Thu, 22 Oct 2020 16:45:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10560.28117; Thu, 22 Oct 2020 16:45:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiN-0007Lx-3c; Thu, 22 Oct 2020 16:45:23 +0000
Received: by outflank-mailman (input) for mailman id 10560;
 Thu, 22 Oct 2020 16:45:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdiM-0007J4-Fr
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f992d3f-b987-4b7f-9091-d96978f70784;
 Thu, 22 Oct 2020 16:45:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiG-0005KL-Ac
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiG-0007RX-9l
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiE-00059e-Jn; Thu, 22 Oct 2020 17:45:14 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdiM-0007J4-Fr
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:22 +0000
X-Inumbo-ID: 3f992d3f-b987-4b7f-9091-d96978f70784
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 3f992d3f-b987-4b7f-9091-d96978f70784;
	Thu, 22 Oct 2020 16:45:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=C+1e1LkAwGgC5IFexQaEraHAaghkz1YDhSurK5LkpAI=; b=05HUeVddBmvdon7uTzWT8QgIbQ
	g+PurN72ldhACXhYkec8QhHh7S6syxyIKF5W7+t9wsMQKWti+wgMHe6tYCD9dxg7RfQOetmcbK4O4
	4y8OSFgonK3LnFStk52FNjCTl8Tm/lLYkDjf7kU/OusOkEBB0VNAikSAgrND3zp/pPRg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-0005KL-Ac
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-0007RX-9l
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiE-00059e-Jn; Thu, 22 Oct 2020 17:45:14 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 04/16] PDU/MSW: Warn that SNMP status is often not immediately updated
Date: Thu, 22 Oct 2020 17:44:54 +0100
Message-Id: <20201022164506.1552-5-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If you don't know this, it's very confusing.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-msw | 1 +
 1 file changed, 1 insertion(+)

diff --git a/pdu-msw b/pdu-msw
index d2691567..04b03a22 100755
--- a/pdu-msw
+++ b/pdu-msw
@@ -133,4 +133,5 @@ if (!defined $action) {
     print "was: "; show();
     set();
     print "now: "; show();
+    print "^ note, PDUs often do not update returned info immediately\n";
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10559.28100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiJ-0007Jn-WC; Thu, 22 Oct 2020 16:45:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10559.28100; Thu, 22 Oct 2020 16:45:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiJ-0007JY-P6; Thu, 22 Oct 2020 16:45:19 +0000
Received: by outflank-mailman (input) for mailman id 10559;
 Thu, 22 Oct 2020 16:45:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdiI-0007J9-0W
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f11cb73-a3a8-404f-98a9-c4b5274c2d3a;
 Thu, 22 Oct 2020 16:45:17 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiH-0005KZ-3E
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiH-0007SY-2Q
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiF-00059e-Bv; Thu, 22 Oct 2020 17:45:15 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdiI-0007J9-0W
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:18 +0000
X-Inumbo-ID: 0f11cb73-a3a8-404f-98a9-c4b5274c2d3a
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0f11cb73-a3a8-404f-98a9-c4b5274c2d3a;
	Thu, 22 Oct 2020 16:45:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=lTKozFywdZBXuKF9Bj7Cq1GHxhWrqzo800hkAgAt6wI=; b=MylLuYkvvp9ULb4KjJpXm0fxNN
	wgEaHk7iPKq7S9V1OeJU79pTUGyB5znm4veg0iJDvwWdIweFwbzv5B6QIlELGW++9zX1KB/FO5Pgd
	GQkDGFLQhwGiZOSGBG2lEdKbpAvyY5cXMJ3U5Rs4ao6Qff4twuB0NbioXQNzR6cWrPEg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-0005KZ-3E
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-0007SY-2Q
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-00059e-Bv; Thu, 22 Oct 2020 17:45:15 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 07/16] PDU/MSW: Actually implement delayed-*
Date: Thu, 22 Oct 2020 17:44:57 +0100
Message-Id: <20201022164506.1552-8-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Nothing in our tree uses this, but having it here is useful documentation
for the protocol, so I shan't just delete it.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-msw | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pdu-msw b/pdu-msw
index 03b0f342..196b6c45 100755
--- a/pdu-msw
+++ b/pdu-msw
@@ -127,7 +127,7 @@ sub action_value () {
                  $action =~ m/^(?:1|on)$/ ? 1 :
                  $action =~ m/^(?:reboot)$/ ? 3 :
                  die "unknown action $action\n$usagemsg");
-    return $valset;
+    return $valset + $delayadd;
 }
 
 sub set ($) {
-- 
2.20.1
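For readers skimming the archive, the one-line fix above can be sketched standalone. This is a hypothetical Python rendering of the Perl `action_value()` logic; the numeric codes (1/2/3 for on/off/reboot) match the fragment visible in the diff, but the delayed-* offset of 3 and the `off` mapping are assumptions modelled on typical APC outlet-control values, not confirmed by this patch alone:

```python
# Hypothetical sketch of pdu-msw's action_value(), in Python.
# DELAY_ADD and the "off" code are assumptions; the real script is Perl.
DELAY_ADD = 3  # assumed offset turning on/off/reboot into delayed-*

BASE_VALUES = {
    "on": 1, "1": 1,
    "off": 2, "0": 2,
    "reboot": 3,
}

def action_value(action: str) -> int:
    """Map a user-supplied action name to the SNMP value to write."""
    delayed = action.startswith("delayed-")
    base = action[len("delayed-"):] if delayed else action
    try:
        valset = BASE_VALUES[base]
    except KeyError:
        raise ValueError(f"unknown action {action}")
    # Before this patch the delayed offset was computed but never applied;
    # the one-line fix is equivalent to adding it here.
    return valset + (DELAY_ADD if delayed else 0)
```

The bug fixed by the patch is exactly the last line: the pre-patch code returned `valset` alone, so `delayed-reboot` behaved like plain `reboot`.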



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10562.28141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiS-0007Sw-QJ; Thu, 22 Oct 2020 16:45:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10562.28141; Thu, 22 Oct 2020 16:45:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiS-0007Sl-MF; Thu, 22 Oct 2020 16:45:28 +0000
Received: by outflank-mailman (input) for mailman id 10562;
 Thu, 22 Oct 2020 16:45:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdiR-0007J4-GJ
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id da8692b0-de15-47c7-a238-7feafc4a7b64;
 Thu, 22 Oct 2020 16:45:15 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiF-0005K9-Ew
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiF-0007QV-Cy
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiD-00059e-Hp; Thu, 22 Oct 2020 17:45:13 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdiR-0007J4-GJ
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:27 +0000
X-Inumbo-ID: da8692b0-de15-47c7-a238-7feafc4a7b64
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id da8692b0-de15-47c7-a238-7feafc4a7b64;
	Thu, 22 Oct 2020 16:45:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=4zhHPcchVTIPM4EA2n/U98DOT589YDrA8xIM57+zDDI=; b=LAGVWRMRgcfmBDzNKkrr6r5S9j
	CTKuR0R8IvBhuQjRcnnavDB7gG4jVj4NOlGkmdlW/Vx2RVqQpwWmyE//7Guad8lD3YxyVPE+6uO25
	Z/L5Dqe+gE6GpdCeYDnRxGxxxT1FmBKLXVVExIg0rO5ptfy7OfxBL5j9ayVDNp6so6OM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-0005K9-Ew
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-0007QV-Cy
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiD-00059e-Hp; Thu, 22 Oct 2020 17:45:13 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 00/16] Bugfixes
Date: Thu, 22 Oct 2020 17:44:50 +0100
Message-Id: <20201022164506.1552-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Many of these are fixes to host sharing.

I'm still doing a formal dev test, but I expect to push these soon.

Ian Jackson (16):
  share in jobdb: Break out $checkconstraints and move call
  share in jobdb: Move out-of-flight special case higher up
  PDU/IPMI: Retransmit, don't just wait
  PDU/MSW: Warn that SNMP status is often not immediately updated
  PDU/MSW: Break out get()
  PDU/MSW: Break out action_value()
  PDU/MSW: Actually implement delayed-*
  PDU/MSW: Make show() return the value from get()
  PDU/MSW: Retransmit on/off until PDU has changed
  host reuse fixes: Fix running of steps adhoc
  host reuse fixes: Fix runvar entry for adhoc tasks
  Introduce guest_mk_lv_name
  Prefix guest LV names with the job name
  reporting: Minor fix to reporting of tasks with no subtask
  host reuse fixes: Do not break host-reuse if no host allocated
  starvation: Do not count more than half a flight as starved

 Osstest/Executive.pm        |  2 +-
 Osstest/JobDB/Executive.pm  | 46 +++++++++++++++++++++++--------------
 Osstest/PDU/ipmi.pm         |  5 ++--
 Osstest/TestSupport.pm      |  9 ++++++--
 pdu-msw                     | 37 +++++++++++++++++++++++++----
 ts-debian-fixup             | 22 ++++++++++++++++++
 ts-debian-install           |  2 +-
 ts-host-reuse               |  2 +-
 ts-hosts-allocate-Executive |  2 +-
 9 files changed, 97 insertions(+), 30 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10563.28148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiT-0007Tw-Am; Thu, 22 Oct 2020 16:45:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10563.28148; Thu, 22 Oct 2020 16:45:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiT-0007TZ-1w; Thu, 22 Oct 2020 16:45:29 +0000
Received: by outflank-mailman (input) for mailman id 10563;
 Thu, 22 Oct 2020 16:45:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdiR-0007J9-Uf
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa7c4406-f7f6-42b8-8760-d03d6f623f99;
 Thu, 22 Oct 2020 16:45:17 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiH-0005Kl-LN
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiH-0007TI-Kb
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiF-00059e-TV; Thu, 22 Oct 2020 17:45:15 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdiR-0007J9-Uf
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:27 +0000
X-Inumbo-ID: fa7c4406-f7f6-42b8-8760-d03d6f623f99
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id fa7c4406-f7f6-42b8-8760-d03d6f623f99;
	Thu, 22 Oct 2020 16:45:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=8uGskAcKrvh3idP+a9aM9+Bc0CPtEtIIFIHW822S8R4=; b=j31a3kr2aW1B4s4Ac5mX/Fkyco
	wY9Er0fLXxa2T9Hngjiek6YcNHKvO0x05IrTlPXmJWHLjwlGvEvpIrF5jKCGy0lKHsnCRZjx0tuP3
	iz9A1BhmFqYJqb3H9HK4yjgptZiY6S92SmjGFjGzM8yaXyw2p6zLn2AyrEJDzZ2XMGV8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-0005Kl-LN
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-0007TI-Kb
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:17 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-00059e-TV; Thu, 22 Oct 2020 17:45:15 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 09/16] PDU/MSW: Retransmit on/off until PDU has changed
Date: Thu, 22 Oct 2020 17:44:59 +0100
Message-Id: <20201022164506.1552-10-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The main effect of this is that the transcript will actually show the
new PDU state.  Previously we would call show(), but APC PDUs would
normally not change immediately, so the transcript would show the old
state.

This also guards against an unresponsive PDU or a packet getting lost.
I don't think we have ever seen that.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-msw | 21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)

diff --git a/pdu-msw b/pdu-msw
index 2d4ec967..c57f9f7c 100755
--- a/pdu-msw
+++ b/pdu-msw
@@ -41,6 +41,7 @@ while (@ARGV && $ARGV[0] =~ m/^-/) {
 
 if (@ARGV<2 || @ARGV>3 || $ARGV[0] =~ m/^-/) { die "bad usage\n$usagemsg"; }
 
+our ($max_retries) = 16; # timeout = 0.05 * max_retries^2
 our ($dnsname,$outlet,$action) = @ARGV;
 
 my ($session,$error) = Net::SNMP->session(
@@ -142,7 +143,21 @@ if (!defined $action) {
 } else {
     my $valset = action_value();
     print "was: "; show();
-    set($valset);
-    print "now: "; show();
-    print "^ note, PDUs often do not update returned info immediately\n";
+
+    my $retries = 0;
+    for (;;) {
+	set($valset);
+	sleep $retries * 0.1;
+	print "now: "; my $got = show();
+	if ($got eq $map[$valset]) { last; }
+	if ($map[$valset] !~ m{^(?:off|on)$}) {
+	    print
+ "^ note, PDUs often do not update returned info immediately\n";
+	    last;
+	}
+	if ($retries >= $max_retries) {
+	    die "PDU does not seem to be changing state!\n";
+	}
+	$retries++;
+    }
 }
-- 
2.20.1
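The retry loop above retransmits the SNMP set with a linearly growing delay (total wait roughly 0.05 * max_retries^2 seconds, as the comment in the diff notes). One thing worth flagging: core Perl `sleep` takes whole seconds, so the fractional `sleep $retries * 0.1` presumably relies on `Time::HiRes` being imported elsewhere in pdu-msw. A hypothetical Python sketch of the same pattern, with the setter/reader injected so it can be exercised without a PDU (names are illustrative, not osstest's real API):

```python
import time

MAX_RETRIES = 16  # total wait roughly 0.05 * MAX_RETRIES**2 seconds

def set_until_changed(set_outlet, read_outlet, want,
                      sleep=time.sleep, max_retries=MAX_RETRIES):
    """Re-send `want` until read_outlet() reports it, backing off each try.

    Returns the number of retries that were needed.
    """
    retries = 0
    while True:
        set_outlet(want)
        sleep(retries * 0.1)  # wait a little longer on each attempt
        got = read_outlet()
        if got == want:
            return retries
        if want not in ("off", "on"):
            # reboot/delayed-* states are transient, so the readback may
            # never equal the requested value; give up quietly.
            return retries
        if retries >= max_retries:
            raise RuntimeError("PDU does not seem to be changing state!")
        retries += 1
```

This mirrors the patch's three exits: success, a non-steady-state action where waiting is pointless, and the hard failure after `max_retries`.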



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10564.28165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiX-0007bF-J5; Thu, 22 Oct 2020 16:45:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10564.28165; Thu, 22 Oct 2020 16:45:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdiX-0007b2-E6; Thu, 22 Oct 2020 16:45:33 +0000
Received: by outflank-mailman (input) for mailman id 10564;
 Thu, 22 Oct 2020 16:45:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdiW-0007J4-GJ
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1bfb5d76-f5c8-40c1-9df1-aa70e381cd8b;
 Thu, 22 Oct 2020 16:45:15 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiF-0005KF-Pr
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiF-0007Qx-Ot
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiE-00059e-38; Thu, 22 Oct 2020 17:45:14 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdiW-0007J4-GJ
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:32 +0000
X-Inumbo-ID: 1bfb5d76-f5c8-40c1-9df1-aa70e381cd8b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1bfb5d76-f5c8-40c1-9df1-aa70e381cd8b;
	Thu, 22 Oct 2020 16:45:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=pVToY0m3Sc+yduYNkjYcp+9aOZ5fk0ZnWu2I935eP+I=; b=mbfhseFH6bQXqaNQMBa7wubdjY
	9aJgs0UHrwf56Pda0zztYS98M3mWdtQ6SCMltkv5tbQPsMMvKmwWmy7+kYJqDoSPewOzimwKQvPtj
	anFermhV+Lf2ugm+sMI1Biz8GkQUQLKANSVWRTimvAPBtjYK4nG3be9szLR0PwHEsVao=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-0005KF-Pr
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-0007Qx-Ot
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiE-00059e-38; Thu, 22 Oct 2020 17:45:14 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 02/16] share in jobdb: Move out-of-flight special case higher up
Date: Thu, 22 Oct 2020 17:44:52 +0100
Message-Id: <20201022164506.1552-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This avoids running the runvar computation loop outside flights.
Among other things, this is good because that loop prints warnings
about undef $flight and $job.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/JobDB/Executive.pm | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 071f31f1..4fa42e5d 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -587,6 +587,18 @@ END
 	$constraintsq->fetchrow_array() or confess "$hostname ?";
     };
 
+
+    if (!defined $flight) {
+	db_retry($dbh_tests,[], sub {
+	    $insertq->execute($hostname, $ttaskid,
+			      undef,undef,
+			      undef,
+			      undef,undef);
+	    $checkconstraints->();
+	});
+	return;
+    }
+
     my $ojvn = "$ho->{Ident}_lifecycle";
 
     if (length $r{$ojvn}) {
@@ -660,26 +672,17 @@ END
 	    }
 	}
 
-	if (defined $flight) {
-	    $insertq->execute($hostname, $ttaskid,
-			      $flight, $job,
-			      ($mode eq 'selectprep')+0,
+	$insertq->execute($hostname, $ttaskid,
+			  $flight, $job,
+			  ($mode eq 'selectprep')+0,
                 # ^ DBD::Pg doesn't accept perl canonical false for bool!
                 #   https://rt.cpan.org/Public/Bug/Display.html?id=133229
-			      $tident, $tstepno);
-	} else {
-	    $insertq->execute($hostname, $ttaskid,
-			      undef,undef,
-			      undef,
-			      undef,undef);
-	}
+			  $tident, $tstepno);
 	$checkconstraints->();
     });
 
-    if (defined $flight) {
-	push @lifecycle, $newsigil if length $newsigil;
-	store_runvar($ojvn, "@lifecycle");
-    }
+    push @lifecycle, $newsigil if length $newsigil;
+    store_runvar($ojvn, "@lifecycle");
 }
 
 sub current_stepno ($) { #method
-- 
2.20.1
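The refactoring in this patch is a guard clause: the out-of-flight case is hoisted into an early return, so the main path no longer needs `if (defined $flight)` checks around the insert and the runvar store. A hypothetical Python sketch of the pattern (the names `record_share`, `insert_row`, and `compute_runvars` are illustrative, not osstest's real API):

```python
# Illustrative guard-clause refactor: handle the special case up front
# and return, instead of threading "if flight is defined" through the body.
def record_share(flight, job, insert_row, compute_runvars):
    if flight is None:
        # Out-of-flight case: insert a row with NULL flight/job fields and
        # skip the runvar loop entirely (in the real code that loop warned
        # about undef $flight and $job).
        insert_row(None, None)
        return None
    # Main path: flight is known to be defined from here on.
    runvars = compute_runvars(flight, job)
    insert_row(flight, job)
    return runvars
```

The behaviour is unchanged; the payoff is that the later hunks of the patch can drop their conditionals, which is exactly what the diff above does.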



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10565.28177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdid-0007kN-1q; Thu, 22 Oct 2020 16:45:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10565.28177; Thu, 22 Oct 2020 16:45:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdic-0007k7-Re; Thu, 22 Oct 2020 16:45:38 +0000
Received: by outflank-mailman (input) for mailman id 10565;
 Thu, 22 Oct 2020 16:45:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdib-0007J4-GJ
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac949d06-cb98-42ec-8bf9-e6ec95e409e3;
 Thu, 22 Oct 2020 16:45:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiG-0005KO-L0
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiG-0007Rs-J3
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiE-00059e-S2; Thu, 22 Oct 2020 17:45:14 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdib-0007J4-GJ
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:37 +0000
X-Inumbo-ID: ac949d06-cb98-42ec-8bf9-e6ec95e409e3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ac949d06-cb98-42ec-8bf9-e6ec95e409e3;
	Thu, 22 Oct 2020 16:45:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=pa077BjlAALhun+Yyi8NReU98V341OBBWUC/iUdKDE4=; b=cvdEr3Hk5nzZwKzhmrsJJqzop0
	eskRHXJOZsEttWed1bDYUDVmuFyd2YHS4pFQ3qIy4u6KCqjt+qRk/0fIcdRmlFrHIRSkhiRxRUtgN
	ejryRhn/AW3zWK4IZFWxysj4Av/5rMczWeTOclFVZPR7xdb0VZ57mOsb4UhANBdYBDnA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-0005KO-L0
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-0007Rs-J3
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiE-00059e-S2; Thu, 22 Oct 2020 17:45:14 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 05/16] PDU/MSW: Break out get()
Date: Thu, 22 Oct 2020 17:44:55 +0100
Message-Id: <20201022164506.1552-6-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is going to be useful in a moment.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-msw | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/pdu-msw b/pdu-msw
index 04b03a22..58c33952 100755
--- a/pdu-msw
+++ b/pdu-msw
@@ -106,13 +106,18 @@ my @map= (undef, qw(
                     delayed-off
                     delayed-reboot));
 
-sub show () {
+sub get () {
     my $got= $session->get_request($read_oid);
     die "SNMP error reading $read_oid ".$session->error()." " unless $got;
     my $val= $got->{$read_oid};
     die unless $val;
     my $mean= $map[$val];
     die "$val ?" unless defined $mean;
+    return $mean;
+}
+
+sub show () {
+    my $mean = get();
     printf "pdu-msw $dnsname: #%s \"%s\" = %s\n", $useport, $usename, $mean;
 }
 
-- 
2.20.1
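The split above separates the SNMP fetch-and-decode (`get()`) from the presentation (`show()`), so that later patches in this series can compare `get()`'s return value in a retry loop. A hypothetical Python sketch of the shape, with the SNMP call injected; `STATE_MAP` here is an assumed subset of the script's real `@map`, and `show()` returning the value anticipates a later patch in the series ("Make show() return the value from get()"):

```python
# Illustrative fetch/presentation split, modelled on pdu-msw's get()/show().
STATE_MAP = {1: "on", 2: "off", 3: "reboot"}  # assumed subset of @map

def get(snmp_get, oid):
    """Fetch the outlet state via SNMP and decode it to a name."""
    raw = snmp_get(oid)
    if raw is None:
        raise RuntimeError(f"SNMP error reading {oid}")
    try:
        return STATE_MAP[raw]
    except KeyError:
        raise RuntimeError(f"{raw} ?")

def show(snmp_get, oid, name):
    """Thin printer over get(); returns the decoded state for callers."""
    mean = get(snmp_get, oid)
    print(f"pdu-msw: {name} = {mean}")
    return mean
```

Keeping `show()` as a thin wrapper means the decoding and error handling live in one place, which is what makes the retransmit loop in patch 09 a small change.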



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10568.28189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdih-0007sb-KI; Thu, 22 Oct 2020 16:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10568.28189; Thu, 22 Oct 2020 16:45:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdih-0007sR-FR; Thu, 22 Oct 2020 16:45:43 +0000
Received: by outflank-mailman (input) for mailman id 10568;
 Thu, 22 Oct 2020 16:45:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdig-0007J4-GN
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16143a18-e853-431d-a4a4-b49a52ec336c;
 Thu, 22 Oct 2020 16:45:15 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiF-0005KC-J3
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiF-0007Qi-H8
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiD-00059e-Pb; Thu, 22 Oct 2020 17:45:13 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdig-0007J4-GN
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:42 +0000
X-Inumbo-ID: 16143a18-e853-431d-a4a4-b49a52ec336c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 16143a18-e853-431d-a4a4-b49a52ec336c;
	Thu, 22 Oct 2020 16:45:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=V79wG/LQmL2k7Tyv4zkT6Uv1QsFd5R/iMJN6smjFWKQ=; b=vkinM3MIoJN07Athigc4mvVWxC
	LGxBzYZc9nN3fYT4Uo3b7H2XpVHUb/Lp5P4IPZ6nGxlfUxmWKZcRHQ2WnV7jbl/ZNGlRpu20EX1sk
	qFAyyFWk3kKPSsraoAWMq2uzY5Y40a8wcuMUDsqJOC6a4j1oo8Mbb5kB+HokgSVMO7Is=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-0005KC-J3
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-0007Qi-H8
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:15 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiD-00059e-Pb; Thu, 22 Oct 2020 17:45:13 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 01/16] share in jobdb: Break out $checkconstraints and move call
Date: Thu, 22 Oct 2020 17:44:51 +0100
Message-Id: <20201022164506.1552-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This check must happen after we introduce our new row, or it is not
effective!

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/JobDB/Executive.pm | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index f69ce277..071f31f1 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -582,6 +582,11 @@ END
           VALUES (?,        ?,      ?,      ?,   ?,      ?,     ?     )
 END
 
+    my $checkconstraints = sub {
+	$constraintsq->execute($hostname, $ttaskid);
+	$constraintsq->fetchrow_array() or confess "$hostname ?";
+    };
+
     my $ojvn = "$ho->{Ident}_lifecycle";
 
     if (length $r{$ojvn}) {
@@ -654,8 +659,6 @@ END
 		push @lifecycle, "$omarks$otj:$o->{stepno}$osuffix";
 	    }
 	}
-	$constraintsq->execute($hostname, $ttaskid);
-	$constraintsq->fetchrow_array() or confess "$hostname ?";
 
 	if (defined $flight) {
 	    $insertq->execute($hostname, $ttaskid,
@@ -670,6 +673,7 @@ END
 			      undef,
 			      undef,undef);
 	}
+	$checkconstraints->();
     });
 
     if (defined $flight) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:45:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:45:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10571.28201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdin-00080J-0g; Thu, 22 Oct 2020 16:45:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10571.28201; Thu, 22 Oct 2020 16:45:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdim-000808-SN; Thu, 22 Oct 2020 16:45:48 +0000
Received: by outflank-mailman (input) for mailman id 10571;
 Thu, 22 Oct 2020 16:45:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVdil-0007J4-Gg
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3e2e9d0-401e-4da9-ab8f-1d6bb4e73d93;
 Thu, 22 Oct 2020 16:45:17 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiG-0005KR-RH
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVdiG-0007SA-QQ
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiF-00059e-47; Thu, 22 Oct 2020 17:45:15 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVdil-0007J4-Gg
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:47 +0000
X-Inumbo-ID: a3e2e9d0-401e-4da9-ab8f-1d6bb4e73d93
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a3e2e9d0-401e-4da9-ab8f-1d6bb4e73d93;
	Thu, 22 Oct 2020 16:45:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ntqiqHkBhbltgpOYvMT5RzXTVuOOnIgx7foXFuhot5c=; b=2iAOwRkuvPW0EZagl7nN9EmOth
	fYgb6YvbHkqltJgvB032wVCXk+SgeIcmILNyTSzp6sCwpCV7qZGG7BzlFHxHNq2p0/fQWCaa/D1E0
	Fr1hLOBwZ2ek6j0hsJE7X1HlC6Cb0ycgBO0dO7A+2gvlLuhm/APGHU9o7sDDnNEpkKg4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-0005KR-RH
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-0007SA-QQ
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:45:16 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiF-00059e-47; Thu, 22 Oct 2020 17:45:15 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 06/16] PDU/MSW: Break out action_value()
Date: Thu, 22 Oct 2020 17:44:56 +0100
Message-Id: <20201022164506.1552-7-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is going to be useful in a moment.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-msw | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/pdu-msw b/pdu-msw
index 58c33952..03b0f342 100755
--- a/pdu-msw
+++ b/pdu-msw
@@ -121,13 +121,17 @@ sub show () {
     printf "pdu-msw $dnsname: #%s \"%s\" = %s\n", $useport, $usename, $mean;
 }
 
-sub set () {
+sub action_value () {
     my $delayadd= ($action =~ s/^delayed-// ? 3 : 0);
     my $valset= ($action =~ m/^(?:0|off)$/ ? 2 :
                  $action =~ m/^(?:1|on)$/ ? 1 :
                  $action =~ m/^(?:reboot)$/ ? 3 :
                  die "unknown action $action\n$usagemsg");
-        
+    return $valset;
+}
+
+sub set ($) {
+    my ($valset) = @_;
     my $res= $session->set_request(-varbindlist => [ $write_oid, INTEGER, $valset ]);
     die "SNMP set ".$session->error()." " unless $res;
 }
@@ -135,8 +139,9 @@ sub set () {
 if (!defined $action) {
     show();
 } else {
+    my $valset = action_value();
     print "was: "; show();
-    set();
+    set($valset);
     print "now: "; show();
     print "^ note, PDUs often do not update returned info immediately\n";
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 16:59:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 16:59:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10588.28213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdwH-00014R-A3; Thu, 22 Oct 2020 16:59:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10588.28213; Thu, 22 Oct 2020 16:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdwH-00014K-6A; Thu, 22 Oct 2020 16:59:45 +0000
Received: by outflank-mailman (input) for mailman id 10588;
 Thu, 22 Oct 2020 16:59:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVdwF-00014F-IP
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:59:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9f40cf1-d127-440f-8766-7d7d834824ed;
 Thu, 22 Oct 2020 16:59:41 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVdwD-0005fF-FR; Thu, 22 Oct 2020 16:59:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVdwD-0007tB-92; Thu, 22 Oct 2020 16:59:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVdwD-0008Cf-8Z; Thu, 22 Oct 2020 16:59:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kVdwF-00014F-IP
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 16:59:43 +0000
X-Inumbo-ID: d9f40cf1-d127-440f-8766-7d7d834824ed
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d9f40cf1-d127-440f-8766-7d7d834824ed;
	Thu, 22 Oct 2020 16:59:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JouCDd3T+gOmQVrPKpo8QoSC12ze1pGsPmknQWbddSM=; b=dHZRRh1OWfuxZkGjVbheN3Moyo
	E3Ii7zYDliRpElQsKH9YkntrknhIlnttpvC9Hp5hCFn9PDlb8jz6Gse/oczFHiFfJCilxTSFpvxr0
	ho+9EWfiha+FcwfnTqJdqzgOkRa6Liu5J5nF5hLY6WbsBvLaOyW8oNTVqOrU/vBf3c1k=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVdwD-0005fF-FR; Thu, 22 Oct 2020 16:59:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVdwD-0007tB-92; Thu, 22 Oct 2020 16:59:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVdwD-0008Cf-8Z; Thu, 22 Oct 2020 16:59:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156091-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156091: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=24cf72726564eac7ba9abb24f3d05795164d0a70
X-Osstest-Versions-That:
    ovmf=26442d11e620a9e81c019a24a4ff38441c64ba10
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 16:59:41 +0000

flight 156091 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156091/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 24cf72726564eac7ba9abb24f3d05795164d0a70
baseline version:
 ovmf                 26442d11e620a9e81c019a24a4ff38441c64ba10

Last test of basis   156065  2020-10-21 06:40:48 Z    1 days
Testing same since   156091  2020-10-22 14:14:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Pete Batard <pete@akeo.ie>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   26442d11e6..24cf727265  24cf72726564eac7ba9abb24f3d05795164d0a70 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:01:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:01:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10591.28228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdxi-0001sP-Lk; Thu, 22 Oct 2020 17:01:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10591.28228; Thu, 22 Oct 2020 17:01:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVdxi-0001sI-IX; Thu, 22 Oct 2020 17:01:14 +0000
Received: by outflank-mailman (input) for mailman id 10591;
 Thu, 22 Oct 2020 17:01:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oPFL=D5=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1kVdxh-0001sB-TQ
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:01:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c64d2098-81d4-4015-962f-0d2c855766f7;
 Thu, 22 Oct 2020 17:01:06 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kVdxY-0005ie-II; Thu, 22 Oct 2020 17:01:04 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1kVdxY-0004Sg-5W; Thu, 22 Oct 2020 17:01:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oPFL=D5=xen.org=hx242@srs-us1.protection.inumbo.net>)
	id 1kVdxh-0001sB-TQ
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:01:13 +0000
X-Inumbo-ID: c64d2098-81d4-4015-962f-0d2c855766f7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c64d2098-81d4-4015-962f-0d2c855766f7;
	Thu, 22 Oct 2020 17:01:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:To:From:Subject:Message-ID;
	bh=FGFjykp3RtKn11ymLf3GRpJEUYfHOM80hAw8IsNtoqY=; b=vSjTIpob+IlDsFaU0hrx00/U34
	hQ8cA0dJ78HriNVM3IVNGHsRBN03ITCX5t0A0G/Dy4LFW25ezQFhrxSgTw+7K0md+XPl9MQz46SBK
	jL6iUab/O5ztp6jKrTGpi6IBDEhEA6jPJ1DhaPPnlqHYhLmtpMTy8CO7QBRmqpZX8ffU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <hx242@xen.org>)
	id 1kVdxY-0005ie-II; Thu, 22 Oct 2020 17:01:04 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236] helo=u1bbd043a57dd5a.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <hx242@xen.org>)
	id 1kVdxY-0004Sg-5W; Thu, 22 Oct 2020 17:01:04 +0000
Message-ID: <48816c69ab2551a34c57a87392bb7f08ca6482ee.camel@xen.org>
Subject: Re: XSM and the idle domain
From: Hongyan Xia <hx242@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>, jbeulich@suse.com, jandryuk@gmail.com, 
	dgdegra@tycho.nsa.gov
Date: Thu, 22 Oct 2020 18:01:00 +0100
In-Reply-To: <f8f5f354-aa8d-4bd0-9c0e-ef37702e80c5@citrix.com>
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
	 <f8f5f354-aa8d-4bd0-9c0e-ef37702e80c5@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

On Thu, 2020-10-22 at 13:51 +0100, Andrew Cooper wrote:
> On 21/10/2020 15:34, Hongyan Xia wrote:
> > The first question came up during ongoing work in LiveUpdate. After
> > an
> > LU, the next Xen needs to restore all domains. To do that, some
> > hypercalls need to be issued from the idle domain context and
> > apparently XSM does not like it.
> 
> There is no such thing as issuing hypercalls from the idle domain
> (context or otherwise), because the idle domain does not have enough
> associated guest state for anything to make the requisite
> SYSCALL/INT80/VMCALL/VMMCALL invocation.
> 
> I presume from this comment that what you mean is that you're calling
> the plain hypercall functions, context checks and everything, from
> the
> idle context?

Yep, the restore code just calls the hypercall functions from idle
context.

> If so, this is buggy for more reasons than just XSM objecting to its
> calling context, and that XSM is merely the first thing to explode. 
> Therefore, I don't think modifications to XSM are applicable to
> solving
> the problem.
> 
> (Of course, this is all speculation because there's no concrete
> implementation to look at.)

Another explosion is the inability to create hypercall preemption,
which for now is disabled when the calling context is the idle domain.
Apart from XSM and preemption, the LU prototype works fine. We only
reuse a limited number of hypercall functions and are not trying to be
able to call all possible hypercalls from idle.

Having a dedicated domLU just like domB (or reusing domB) sounds like a
viable option. If the overhead can be made low enough then we won't
need to work around XSM and hypercall preemption.

The original question, though, was whether XSM should interact with the
idle domain at all. With a good design, LU should be able to sidestep this.

Hongyan



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:09:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:09:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10595.28240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5F-0002Eq-Gq; Thu, 22 Oct 2020 17:09:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10595.28240; Thu, 22 Oct 2020 17:09:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5F-0002Ej-CY; Thu, 22 Oct 2020 17:09:01 +0000
Received: by outflank-mailman (input) for mailman id 10595;
 Thu, 22 Oct 2020 17:09:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVe5E-0002Eb-7k
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfec223b-9a9f-4bac-8dd4-dfbb138bdaa5;
 Thu, 22 Oct 2020 17:08:59 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5C-0005sw-UZ
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:08:58 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5C-0005Tq-TA
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:08:58 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiG-00059e-L7; Thu, 22 Oct 2020 17:45:16 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVe5E-0002Eb-7k
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:00 +0000
X-Inumbo-ID: dfec223b-9a9f-4bac-8dd4-dfbb138bdaa5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dfec223b-9a9f-4bac-8dd4-dfbb138bdaa5;
	Thu, 22 Oct 2020 17:08:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=jomMV8uxLbcen1kF7hY6ev+qMAIxd7EdISVfjmqx5yM=; b=N2TvamoVcy3UV9381dfufm20ym
	w3ylL5ogO+Ryu0b6CCn0GtVCxiuXzqvNVFuTXpRoBfZCwjR/kKaq2ncideCTRWr3wXnGo5EekXx+4
	sXBT6v+zLG0p2NMkJrqUUevymK/hQ1C1HenDso0mEtKz4N0gYCJg9LHr8+wnuiHxG6lM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5C-0005sw-UZ
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:08:58 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5C-0005Tq-TA
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:08:58 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-00059e-L7; Thu, 22 Oct 2020 17:45:16 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 12/16] Introduce guest_mk_lv_name
Date: Thu, 22 Oct 2020 17:45:02 +0100
Message-Id: <20201022164506.1552-13-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This changes the way the disk name is constructed, but to no overall
effect.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/TestSupport.pm | 9 +++++++--
 ts-debian-install      | 2 +-
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 5e6b15d9..12aaba79 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -76,7 +76,7 @@ BEGIN {
                       target_jobdir target_extract_jobdistpath_subdir
                       target_extract_jobdistpath target_extract_distpart
 		      target_tftp_prefix
-                      lv_create lv_dev_mapper
+                      lv_create lv_dev_mapper guest_mk_lv_name
 
                       poll_loop tcpconnect await_tcp
                       contents_make_cpio file_simple_write_contents
@@ -2177,6 +2177,11 @@ sub guest_var_commalist ($$) {
     return split /\,/, guest_var($gho,$runvartail,'');
 }
 
+sub guest_mk_lv_name ($$) {
+    my ($gho, $suffix) = @_;
+    return "$gho->{Name}".$suffix;
+}
+
 sub prepareguest ($$$$$$) {
     my ($ho, $gn, $hostname, $tcpcheckport, $mb,
         $boot_timeout) = @_;
@@ -2205,7 +2210,7 @@ sub prepareguest ($$$$$$) {
     # If we have defined guest specific disksize, use it
     $mb = guest_var($gho,'disksize',$mb);
     if (defined $mb) {
-	store_runvar("${gn}_disk_lv", $r{"${gn}_hostname"}.'-disk');
+	store_runvar("${gn}_disk_lv", guest_mk_lv_name($gho, '-disk'));
     }
 
     if (defined $mb) {
diff --git a/ts-debian-install b/ts-debian-install
index f07dd676..8caa9d76 100755
--- a/ts-debian-install
+++ b/ts-debian-install
@@ -100,7 +100,7 @@ END
 
     my $cfg= "/etc/xen/$gho->{Name}.cfg";
     store_runvar("$gho->{Guest}_cfgpath", $cfg);
-    store_runvar("$gho->{Guest}_swap_lv", "$gho->{Name}-swap");
+    store_runvar("$gho->{Guest}_swap_lv", guest_mk_lv_name($gho, "-swap"));
 }
 
 prep();
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:09:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:09:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10596.28251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5K-0002Gr-PE; Thu, 22 Oct 2020 17:09:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10596.28251; Thu, 22 Oct 2020 17:09:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5K-0002Gj-Ll; Thu, 22 Oct 2020 17:09:06 +0000
Received: by outflank-mailman (input) for mailman id 10596;
 Thu, 22 Oct 2020 17:09:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVe5I-0002GB-Su
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e49d44e6-1121-4059-adb0-b8ea33001a15;
 Thu, 22 Oct 2020 17:09:03 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5H-0005tI-Iv
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:03 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5H-0005V6-GN
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:03 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiG-00059e-4k; Thu, 22 Oct 2020 17:45:16 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVe5I-0002GB-Su
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:04 +0000
X-Inumbo-ID: e49d44e6-1121-4059-adb0-b8ea33001a15
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e49d44e6-1121-4059-adb0-b8ea33001a15;
	Thu, 22 Oct 2020 17:09:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=jyYAayf5B+K21/iZZz63Au1pWfx5cy+b2HyWQ2k+sfw=; b=E3pXTXt0upA4zBxDHgZwtEZVsm
	wr1soaVACF6w6hzG5J3gaN7ZbKofp1JIgDhaeORq9iEB+dFuRr2pjNabUeFh4bkhT6Qa461KfS4A4
	DnMLAtypNzSwmrIXtDCoFuBQie/n1yEyQvt68n51GUYH2PKFVxyBolqANh+tOeSxXk40=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5H-0005tI-Iv
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:03 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5H-0005V6-GN
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:03 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-00059e-4k; Thu, 22 Oct 2020 17:45:16 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 10/16] host reuse fixes: Fix running of steps adhoc
Date: Thu, 22 Oct 2020 17:45:00 +0100
Message-Id: <20201022164506.1552-11-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a ts script is run by hand, for adhoc testing, there is no
OSSTEST_TESTID variable in the environment and the script does not
know its own step number.  Such adhoc runs are not tracked as steps
in the steps table.

For host lifecycle purposes, treat these as ad-hoc out-of-flight uses,
based only on the taskid (which will usually be a person's personal
static task).

Without this, these adhoc runs fail with a constraint violation when
trying to insert a flight/job/step row into the host lifecycle table:
the constraint requires the step to be specified, but it is NULL.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/JobDB/Executive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
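
A minimal sketch of the decision this patch makes, in Python rather
than osstest's Perl (function and column names here are illustrative
assumptions, not the real schema):

```python
# Sketch: which columns of the host lifecycle row get filled in.
# An ad-hoc run has no flight/step context, so only the taskid is
# recorded and the flight/job/step columns are left NULL (None),
# which the table's constraint permits for task-only rows.
def lifecycle_row(hostname, taskid, flight=None, job=None, stepno=None):
    if flight is None or stepno is None:
        # ad-hoc, out-of-flight use: record the task only
        return (hostname, taskid, None, None, None)
    # normal tracked step within a flight
    return (hostname, taskid, flight, job, stepno)

# An ad-hoc run (no flight, no step) degrades to a task-only row:
assert lifecycle_row("host1", 42) == ("host1", 42, None, None, None)
```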

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 4fa42e5d..04555113 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -588,7 +588,7 @@ END
     };
 
 
-    if (!defined $flight) {
+    if (!defined $flight || !defined $tstepno) {
 	db_retry($dbh_tests,[], sub {
 	    $insertq->execute($hostname, $ttaskid,
 			      undef,undef,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:09:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:09:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10597.28264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5Q-0002KN-1o; Thu, 22 Oct 2020 17:09:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10597.28264; Thu, 22 Oct 2020 17:09:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5P-0002KF-UN; Thu, 22 Oct 2020 17:09:11 +0000
Received: by outflank-mailman (input) for mailman id 10597;
 Thu, 22 Oct 2020 17:09:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVe5N-0002GB-Ok
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 586d40d2-9e6f-43e5-8ea2-2722e0efbe36;
 Thu, 22 Oct 2020 17:09:08 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5M-0005tU-Hj
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:08 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5M-0005W0-Gu
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:08 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiH-00059e-Cj; Thu, 22 Oct 2020 17:45:17 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVe5N-0002GB-Ok
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:09 +0000
X-Inumbo-ID: 586d40d2-9e6f-43e5-8ea2-2722e0efbe36
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 586d40d2-9e6f-43e5-8ea2-2722e0efbe36;
	Thu, 22 Oct 2020 17:09:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=uD3SLBKiMkKsr66Riht6zcwv/GQ882j78TtfYVzfoKE=; b=oNAJn4NbsakzNZDpST6aDvIwWy
	v3eFTy0uKSG6l3lsKn9PaVP0iLv3a+ssXfkNGpg6todEHlxijFZq9YSge1+0xH9F7OWgo4fCK60fQ
	ZwDecxOiN6ZVdbJ7vQ/7eLWwa6TwOTrGBKi3zNGiyHQa4KkhqK2nOvMyBmuITB/9Tivs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5M-0005tU-Hj
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:08 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5M-0005W0-Gu
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:08 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-00059e-Cj; Thu, 22 Oct 2020 17:45:17 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 15/16] host reuse fixes: Do not break host-reuse if no host allocated
Date: Thu, 22 Oct 2020 17:45:05 +0100
Message-Id: <20201022164506.1552-16-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If host allocation failed, or our dependency jobs failed, then we
won't have allocated a host.  The host runvar will not be set.
In this case, we want to do nothing.

But we forgot to pass $noneok to selecthost.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-host-reuse | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-host-reuse b/ts-host-reuse
index e2498bb6..b885a3e6 100755
--- a/ts-host-reuse
+++ b/ts-host-reuse
@@ -165,7 +165,7 @@ sub act_start_test () {
 
 sub act_final () {
     if (!@ARGV) {
-	$ho = selecthost($whhost);
+	$ho = selecthost($whhost, 1);
 	return unless $ho;
 	host_update_lifecycle_info($ho, 'final');
     } elsif ("@ARGV" eq "--post-test-ok") {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:09:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10598.28276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5T-0002O0-CN; Thu, 22 Oct 2020 17:09:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10598.28276; Thu, 22 Oct 2020 17:09:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5T-0002Ns-80; Thu, 22 Oct 2020 17:09:15 +0000
Received: by outflank-mailman (input) for mailman id 10598;
 Thu, 22 Oct 2020 17:09:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVe5S-0002N0-1M
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f495b60-ce14-4014-9d06-1feaf311c5d6;
 Thu, 22 Oct 2020 17:09:13 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5R-0005td-2I
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:13 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5R-0005X1-02
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:13 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiG-00059e-C8; Thu, 22 Oct 2020 17:45:16 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVe5S-0002N0-1M
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:14 +0000
X-Inumbo-ID: 0f495b60-ce14-4014-9d06-1feaf311c5d6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0f495b60-ce14-4014-9d06-1feaf311c5d6;
	Thu, 22 Oct 2020 17:09:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=hVU1+g4bJC8UHbCyURdy+z5xK2i7GZBXZk4m60xh6gI=; b=T4KHNm77Mgj+qRdM3pGkpeph6X
	I3K7wgG3miPJ55KYAG6ofd5iZwvJW2D9YoLyuT9nmsjM9xOv4OQFD1J86vckNQTL7NgvR4Y3AKQA4
	s3H4Ol3M+wtmFr72WVlxTe5vo3X7Jk4JZKPBiKH9K1bFY8xt2sXxhFJgjK/frYxz2B9Y=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5R-0005td-2I
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:13 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5R-0005X1-02
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:13 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-00059e-C8; Thu, 22 Oct 2020 17:45:16 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 11/16] host reuse fixes: Fix runvar entry for adhoc tasks
Date: Thu, 22 Oct 2020 17:45:01 +0100
Message-Id: <20201022164506.1552-12-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When processing an item from the host lifecycle table into the runvar,
we don't want to do all the processing of flight and job.  Instead, we
should simply put the ?<taskid> into the runvar.

Previously this would produce ?<taskid>: which the flight reporting
code would choke on.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/JobDB/Executive.pm | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 04555113..1dcf55ff 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -649,6 +649,11 @@ END
 	    }
 	    next if $tj_seen{$oisprepmark.$otj}++;
 
+	    if (!defined $o->{flight}) {
+		push @lifecycle, "$omarks$otj";
+		next;
+	    }
+
 	    if (!$omarks && !$olive && defined($o->{flight}) &&
 		$ho->{Shared} &&
 		$ho->{Shared}{Type} =~ m/^build-/ &&
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:09:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10599.28288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5Y-0002Te-T4; Thu, 22 Oct 2020 17:09:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10599.28288; Thu, 22 Oct 2020 17:09:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5Y-0002TW-Oo; Thu, 22 Oct 2020 17:09:20 +0000
Received: by outflank-mailman (input) for mailman id 10599;
 Thu, 22 Oct 2020 17:09:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVe5X-0002Sc-OG
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f7c8abf9-876c-4136-9dc3-beec4f6f91f0;
 Thu, 22 Oct 2020 17:09:19 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5W-0005tp-Ra
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:18 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5W-0005YD-Qs
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:18 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiH-00059e-L8; Thu, 22 Oct 2020 17:45:17 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVe5X-0002Sc-OG
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:19 +0000
X-Inumbo-ID: f7c8abf9-876c-4136-9dc3-beec4f6f91f0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f7c8abf9-876c-4136-9dc3-beec4f6f91f0;
	Thu, 22 Oct 2020 17:09:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=4+xtJRn0orKXngPYVez9yveephWwMruvVYUsWR0keuQ=; b=py3IxVqKXg9Kv+dUjFAFRkctIc
	Wj6kWqyNdx0kCaXJ91JJgEPYIHeuk7JabmNx6RglGaFJYHoiDKSzzxJ+jeO5hP1rG3OAc6bnDnvdi
	iIHPAejtEEFoyaCgonCIJ5mppPTsAyyfw11YQ0lFBA/B6PY13LfXAE5fmzLbt/DHxIMU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5W-0005tp-Ra
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:18 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5W-0005YD-Qs
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:18 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-00059e-L8; Thu, 22 Oct 2020 17:45:17 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 16/16] starvation: Do not count more than half a flight as starved
Date: Thu, 22 Oct 2020 17:45:06 +0100
Message-Id: <20201022164506.1552-17-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This seems like a sensible rule.

This also prevents the following bizarre behaviour: when a flight has
a handful of jobs that cannot be run at all (eg because it's a
commissioning flight for only hosts of a particular arch), those jobs
can complete quite quickly.  Even with a high X value, because only a
smallish portion of the flight has finished, this can lead to a modest
threshold value.  This combines particularly badly with commissioning
flights, where the duration estimates are often nonsense.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 ts-hosts-allocate-Executive | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
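
The rule this patch adds can be sketched as follows, in Python rather
than osstest's Perl (the parameter names are illustrative; in the diff
below they correspond to $projected_me, $lim, $d and $w):

```python
# Sketch: the starvation test after this patch.  A job is only
# counted as starved when its projected finish time exceeds the
# limit AND at least as many jobs are done as are still waiting,
# i.e. never more than half the flight can be "starved".
def starving(projected_me: float, lim: float, done: int, waiting: int) -> bool:
    over_limit = projected_me > lim
    half_done = done >= waiting  # d >= w guard added by this patch
    return over_limit and half_done

# Early in a flight, a long projection alone is no longer enough:
assert starving(projected_me=1000.0, lim=100.0, done=2, waiting=10) is False
```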

diff --git a/ts-hosts-allocate-Executive b/ts-hosts-allocate-Executive
index b216186a..459b9215 100755
--- a/ts-hosts-allocate-Executive
+++ b/ts-hosts-allocate-Executive
@@ -863,7 +863,7 @@ sub starving ($$) {
 	"D=%d W=%d X=%.3f t_D=%s t_me=%s t_lim=%.3f X'=%.4f (fi.s=%s)",
 	$d, $w, $X, $total_d, $projected_me, $lim, $Xcmp,
 	$fi->{started} - $now;
-    my $bad = $projected_me > $lim;
+    my $bad = $projected_me > $lim && $d >= $w;
     return ($bad, $m);
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:09:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:09:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10601.28299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5e-0002ZQ-6L; Thu, 22 Oct 2020 17:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10601.28299; Thu, 22 Oct 2020 17:09:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5e-0002ZE-24; Thu, 22 Oct 2020 17:09:26 +0000
Received: by outflank-mailman (input) for mailman id 10601;
 Thu, 22 Oct 2020 17:09:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVe5c-0002Xb-7U
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0cc435bc-f243-4d0d-abd9-0c2707d32535;
 Thu, 22 Oct 2020 17:09:23 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5b-0005tz-Eb
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:23 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5b-0005ZN-Dn
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:23 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiH-00059e-4G; Thu, 22 Oct 2020 17:45:17 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVe5c-0002Xb-7U
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:24 +0000
X-Inumbo-ID: 0cc435bc-f243-4d0d-abd9-0c2707d32535
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0cc435bc-f243-4d0d-abd9-0c2707d32535;
	Thu, 22 Oct 2020 17:09:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=dXejovIZMPj7AL5cM5Q0Y7OBdfs/GA2ayGeb8CLSfew=; b=PE56Vwgb/z5+PpFiMI/wQa0TlL
	TwX6Xaoao/AJEczGPuOvS1D+WwwQZBJsrs8Aam1fEy6cFHTGh+uX3Y1ZR5g46H4n9sHERRYaZavvN
	Okc7kReR21rkhA9pbsw5nIgoL//gZS1q67JV/D9g7nBohHKz4gTuqZSb7QUV7bfK8odI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5b-0005tz-Eb
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:23 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5b-0005ZN-Dn
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:23 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiH-00059e-4G; Thu, 22 Oct 2020 17:45:17 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 14/16] reporting: Minor fix to reporting of tasks with no subtask
Date: Thu, 22 Oct 2020 17:45:04 +0100
Message-Id: <20201022164506.1552-15-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

subtask can be NULL.  If so, do not include it.

This change fixes a warning and a minor cosmetic defect.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/Executive.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index e3ab1dc3..d95d848d 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -427,7 +427,7 @@ sub report_rogue_task_description ($) {
     my $info= "rogue task ";
     $info .= " $arow->{type} $arow->{refkey}";
     $info .= " ($arow->{comment})" if defined $arow->{comment};
-    $info .= " $arow->{subtask}";
+    $info .= " $arow->{subtask}" if defined $arow->{subtask};
     $info .= " (user $arow->{username})";
     return $info;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:09:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:09:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10603.28312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5i-0002eH-I7; Thu, 22 Oct 2020 17:09:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10603.28312; Thu, 22 Oct 2020 17:09:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe5i-0002e6-E9; Thu, 22 Oct 2020 17:09:30 +0000
Received: by outflank-mailman (input) for mailman id 10603;
 Thu, 22 Oct 2020 17:09:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVe5h-0002cx-1W
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a78ae94-1a34-442e-8627-7d4668985e07;
 Thu, 22 Oct 2020 17:09:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5f-0005uZ-Vr
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:27 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVe5f-0005aW-UB
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:27 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVdiG-00059e-T6; Thu, 22 Oct 2020 17:45:16 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A04T=D5=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVe5h-0002cx-1W
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:29 +0000
X-Inumbo-ID: 7a78ae94-1a34-442e-8627-7d4668985e07
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7a78ae94-1a34-442e-8627-7d4668985e07;
	Thu, 22 Oct 2020 17:09:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=tffHUjllB3zjo74/ONxS2tl01DELgFF4aymKZDnBl6s=; b=l85ZnbYzAVvw+Bl6L2MKk6LvYa
	Mhia8nAexPHBnkeZ12z2yFLRT95RfIt9lqPu2MoOpFIa63aCS+hhE6ZIUX/utCtFQW+4k7mIPTW+t
	fThutrSQocFtDz3ne6Q+u4ffmox/Au1esazD7tKyZblx7uLUUkW0WlyMH0rP+mOxErCU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5f-0005uZ-Vr
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:27 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVe5f-0005aW-UB
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:09:27 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVdiG-00059e-T6; Thu, 22 Oct 2020 17:45:16 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 13/16] Prefix guest LV names with the job name
Date: Thu, 22 Oct 2020 17:45:03 +0100
Message-Id: <20201022164506.1552-14-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201022164506.1552-1-iwj@xenproject.org>
References: <20201022164506.1552-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This means that a subsequent test which reuses the same host will not
use the same LVs.  This is a good idea because reusing the same LV
names in a subsequent job means relying on the "ad hoc run" cleanup
code, which is a bad idea because that code is rarely tested.

And because, depending on the situation, the old LVs may even still be
in use.  For example, in a pair test, the guest's LVs will still be
set up for use with nbd.

It seems better to fix this by using a fresh LV rather than adding
more teardown code.

The "wear limit" on host reuse is what prevents the disk filling up
with LVs from old guests.

ts-debian-fixup needs special handling, because Debian's xen-tools'
xen-create-image utility hardcodes its notion of LV name construction.
We need to rename the actual LVs (perhaps overwriting old ones from a
previous ad-hoc run) and also update the config.
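The resulting naming scheme can be sketched in shell (the job, guest,
and VG names below are purely illustrative, not taken from a real
flight; the actual construction lives in guest_mk_lv_name):

```shell
# Hedged sketch of the LV name mapping this patch introduces:
# "<guest>-<suffix>" becomes "<job>_<guest>-<suffix>".
job="test-amd64-i386-xl"
guest="debian"
for suffix in disk swap; do
    old="${guest}-${suffix}"
    new="${job}_${old}"
    # ts-debian-fixup performs the equivalent of:
    #   lvremove -f /dev/vg/$new   (if a stale copy exists)
    #   lvrename /dev/vg/$old $new
    echo "/dev/vg/${old} -> /dev/vg/${new}"
done
```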

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/TestSupport.pm |  2 +-
 ts-debian-fixup        | 22 ++++++++++++++++++++++
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 12aaba79..9362a865 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -2179,7 +2179,7 @@ sub guest_var_commalist ($$) {
 
 sub guest_mk_lv_name ($$) {
     my ($gho, $suffix) = @_;
-    return "$gho->{Name}".$suffix;
+    return $job."_$gho->{Name}".$suffix;
 }
 
 sub prepareguest ($$$$$$) {
diff --git a/ts-debian-fixup b/ts-debian-fixup
index a878fe50..810b3aba 100755
--- a/ts-debian-fixup
+++ b/ts-debian-fixup
@@ -37,6 +37,27 @@ sub savecfg () {
     $cfg= get_filecontents("$cfgstash.orig");
 }
 
+sub lvnames () {
+    my $lvs = target_cmd_output_root($ho, "lvdisplay --colon", 30);
+    foreach my $suffix (qw(disk swap)) {
+	my $old = "$gho->{Name}-$suffix";
+	my $new = "${job}_${old}";
+	my $full_old = "/dev/$gho->{Vg}/$old";
+	my $full_new = "/dev/$gho->{Vg}/$new";
+	$cfg =~ s{\Q$full_old\E(?![0-9a-zA-Z/_.-])}{
+            logm "Replacing in domain config \`$&' with \`$full_new'";
+            $full_new;
+        }ge;
+	if ($lvs =~ m{^ *\Q$full_old\E}m) {
+	    if ($lvs =~ m{^ *\Q$full_new\E}m) {
+		# In case we are re-running (eg, adhoc)
+		target_cmd_root($ho, "lvremove -f $full_new", 30);
+	    }
+	    target_cmd_root($ho, "lvrename $full_old $new", 30);
+	}
+    }
+}
+
 sub ether () {
 #    $cfg =~ s/^ [ \t]*
 #        ( vif [ \t]* \= [ \t]* \[ [ \t]* [\'\"]
@@ -207,6 +228,7 @@ sub writecfg () {
 }
 
 savecfg();
+lvnames();
 ether();
 access();
 $console = target_setup_rootdev_console_inittab($ho,$gho,"$mountpoint");
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:13:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:13:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10617.28324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe9s-0003nv-5Y; Thu, 22 Oct 2020 17:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10617.28324; Thu, 22 Oct 2020 17:13:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVe9s-0003no-2Z; Thu, 22 Oct 2020 17:13:48 +0000
Received: by outflank-mailman (input) for mailman id 10617;
 Thu, 22 Oct 2020 17:13:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JUr7=D5=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kVe9r-0003nj-0j
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:13:47 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8f37ddf2-425e-488f-abae-491b82ac0e9e;
 Thu, 22 Oct 2020 17:13:44 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09MHDSPX081607
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 22 Oct 2020 13:13:34 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09MHDS9Z081606;
 Thu, 22 Oct 2020 10:13:28 -0700 (PDT) (envelope-from ehem)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=JUr7=D5=m5p.com=ehem@srs-us1.protection.inumbo.net>)
	id 1kVe9r-0003nj-0j
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:13:47 +0000
X-Inumbo-ID: 8f37ddf2-425e-488f-abae-491b82ac0e9e
Received: from mailhost.m5p.com (unknown [74.104.188.4])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 8f37ddf2-425e-488f-abae-491b82ac0e9e;
	Thu, 22 Oct 2020 17:13:44 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
	by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09MHDSPX081607
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
	Thu, 22 Oct 2020 13:13:34 -0400 (EDT)
	(envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
	by m5p.com (8.15.2/8.15.2/Submit) id 09MHDS9Z081606;
	Thu, 22 Oct 2020 10:13:28 -0700 (PDT)
	(envelope-from ehem)
Date: Thu, 22 Oct 2020 10:13:28 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
Message-ID: <20201022171328.GA81455@mattapan.m5p.com>
References: <20201021221253.GA73207@mattapan.m5p.com>
 <a960dd45-2867-5ef6-970c-952c03aa8cef@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <a960dd45-2867-5ef6-970c-952c03aa8cef@suse.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Thu, Oct 22, 2020 at 09:42:17AM +0200, Jan Beulich wrote:
> On 22.10.2020 00:12, Elliott Mitchell wrote:
> > --- a/xen/arch/arm/acpi/domain_build.c
> > +++ b/xen/arch/arm/acpi/domain_build.c
> > @@ -42,17 +42,18 @@ static int __init acpi_iomem_deny_access(struct domain *d)
> >      status = acpi_get_table(ACPI_SIG_SPCR, 0,
> >                              (struct acpi_table_header **)&spcr);
> >  
> > -    if ( ACPI_FAILURE(status) )
> > +    if ( ACPI_SUCCESS(status) )
> >      {
> > -        printk("Failed to get SPCR table\n");
> > -        return -EINVAL;
> > +        mfn = spcr->serial_port.address >> PAGE_SHIFT;
> > +        /* Deny MMIO access for UART */
> > +        rc = iomem_deny_access(d, mfn, mfn + 1);
> > +        if ( rc )
> > +            return rc;
> > +    }
> > +    else
> > +    {
> > +        printk("Failed to get SPCR table, Xen console may be unavailable\n");
> >      }
> 
> Nit: While I see you've got Stefano's R-b already, in Xen we typically
> omit the braces here.

Personally, I prefer that myself, but I was unsure of the preference here.
I've seen multiple projects which *really* dislike having braces on some
clauses but not others (ie they want either all clauses with braces or
all without, rather than braces only where required).

I sent what I thought was the more commonly used format.  (I also like
tabs, and dislike having so many spaces; alas my preferences are
apparently uncommon.)


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Oct 22 17:53:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 17:53:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10623.28339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVelZ-0007Yn-Bc; Thu, 22 Oct 2020 17:52:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10623.28339; Thu, 22 Oct 2020 17:52:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVelZ-0007Yg-7y; Thu, 22 Oct 2020 17:52:45 +0000
Received: by outflank-mailman (input) for mailman id 10623;
 Thu, 22 Oct 2020 17:52:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVelX-0007Y7-Gz
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:52:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cda1d67-1496-4be4-9380-2db4e9e2bee3;
 Thu, 22 Oct 2020 17:52:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVelP-0006nA-4g; Thu, 22 Oct 2020 17:52:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVelO-00010l-O9; Thu, 22 Oct 2020 17:52:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVelO-0003jE-LA; Thu, 22 Oct 2020 17:52:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kVelX-0007Y7-Gz
	for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 17:52:43 +0000
X-Inumbo-ID: 2cda1d67-1496-4be4-9380-2db4e9e2bee3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2cda1d67-1496-4be4-9380-2db4e9e2bee3;
	Thu, 22 Oct 2020 17:52:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cGVxYb3n6Wk1I9n6ghTOi41/com01q0d00SZbtFzgrs=; b=cdR8axRRdUVG62bf8//Yww+f8e
	xMEvGNBtc8aV29RGCFgJfemJAT20tdwBJszQfpWa/Pl8qSFBBleBxhr3Hb71+1IsAEWMm57xT5D3I
	pAex3jE6TE5dqBU+am5sfjGUBXoUOVBC+GC7SUwKWZBdT9bpndqKDEcJEeA+Dc8lQJO8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVelP-0006nA-4g; Thu, 22 Oct 2020 17:52:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVelO-00010l-O9; Thu, 22 Oct 2020 17:52:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kVelO-0003jE-LA; Thu, 22 Oct 2020 17:52:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156079-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156079: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl-credit1:<job status>:broken:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:host-install(5):broken:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f804b3159482eedbb4250b1e9248c308fb63b805
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 17:52:34 +0000

flight 156079 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156079/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit1     <job status>                 broken
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   5 host-install(5)       broken blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f804b3159482eedbb4250b1e9248c308fb63b805
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   82 days
Failing since        152366  2020-08-01 20:49:34 Z   81 days  137 attempts
Testing same since   156079  2020-10-22 00:05:31 Z    0 days    1 attempts

------------------------------------------------------------
3253 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  broken  
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-arm64-arm64-xl-credit1 broken
broken-step test-arm64-arm64-xl-credit1 host-install(5)

Not pushing.

(No revision log; it would be 597806 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 18:38:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 18:38:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10630.28351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVfTq-0003Ba-Vr; Thu, 22 Oct 2020 18:38:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10630.28351; Thu, 22 Oct 2020 18:38:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVfTq-0003BT-Sm; Thu, 22 Oct 2020 18:38:30 +0000
Received: by outflank-mailman (input) for mailman id 10630;
 Thu, 22 Oct 2020 18:38:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GjE6=D5=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVfTq-0003BO-4I
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 18:38:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8f8278ce-56e1-4bc2-8810-3cb69b57df61;
 Thu, 22 Oct 2020 18:38:29 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVfTo-0007o3-2O; Thu, 22 Oct 2020 18:38:28 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVfTn-0005bY-Pd; Thu, 22 Oct 2020 18:38:27 +0000
X-Inumbo-ID: 8f8278ce-56e1-4bc2-8810-3cb69b57df61
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=dEPy2ktksSRMlir51+QXQcs9YqL40/F70X9QC8tXbkc=; b=yPOAIrN9RLzlS4wxFD6g1nhvb4
	Qhd5x39W2uT2lwtZmygDmUY2IPM3Ab9nyT5jW5wUpqjqa0gZHixioIFYHc6uZyzqijmET2Vc2Xl+3
	hPz6gTb4UowzyQxYM4KnoctDVikBYuIhoqqUtcvluMM8ldlh8eeB/tIMdwZh8Bn2lgHo=;
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
To: Elliott Mitchell <ehem+xen@m5p.com>, Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201021221253.GA73207@mattapan.m5p.com>
 <a960dd45-2867-5ef6-970c-952c03aa8cef@suse.com>
 <20201022171328.GA81455@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <001018b4-f700-6143-3fbf-a98c627e11bf@xen.org>
Date: Thu, 22 Oct 2020 19:38:26 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <20201022171328.GA81455@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

On 22/10/2020 18:13, Elliott Mitchell wrote:
> On Thu, Oct 22, 2020 at 09:42:17AM +0200, Jan Beulich wrote:
>> On 22.10.2020 00:12, Elliott Mitchell wrote:
>>> --- a/xen/arch/arm/acpi/domain_build.c
>>> +++ b/xen/arch/arm/acpi/domain_build.c
>>> @@ -42,17 +42,18 @@ static int __init acpi_iomem_deny_access(struct domain *d)
>>>       status = acpi_get_table(ACPI_SIG_SPCR, 0,
>>>                               (struct acpi_table_header **)&spcr);
>>>   
>>> -    if ( ACPI_FAILURE(status) )
>>> +    if ( ACPI_SUCCESS(status) )
>>>       {
>>> -        printk("Failed to get SPCR table\n");
>>> -        return -EINVAL;
>>> +        mfn = spcr->serial_port.address >> PAGE_SHIFT;
>>> +        /* Deny MMIO access for UART */
>>> +        rc = iomem_deny_access(d, mfn, mfn + 1);
>>> +        if ( rc )
>>> +            return rc;
>>> +    }
>>> +    else
>>> +    {
>>> +        printk("Failed to get SPCR table, Xen console may be unavailable\n");
>>>       }
>>
>> Nit: While I see you've got Stefano's R-b already, in Xen we typically
>> omit the braces here.
> 
> Personally, I prefer that myself, but was unsure of the preference here.

I don't think we are very consistent here... I would not add them 
myself, but I don't particularly mind them (I know some editors will add 
them automatically).

I will keep them while committing. For the patch:

Acked-by: Julien Grall <jgrall@amazon.com>

> I've seen multiple projects which *really* dislike having brackets
> for some clauses but not others (i.e. they want either all clauses with or
> all clauses without, instead of braces only where required).
> 
> I sent what I thought was the more commonly used format.  (I also like tabs,
> and dislike having so many spaces; alas my preferences are apparently
> uncommon.)

We have a few files in Xen using tabs (yes, we like mixing coding styles!).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 18:42:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 18:42:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10634.28366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVfY2-00041M-LO; Thu, 22 Oct 2020 18:42:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10634.28366; Thu, 22 Oct 2020 18:42:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVfY2-00041F-Gd; Thu, 22 Oct 2020 18:42:50 +0000
Received: by outflank-mailman (input) for mailman id 10634;
 Thu, 22 Oct 2020 18:42:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVfY1-00040b-0w
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 18:42:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cfe19b88-1c1b-40e5-9df7-191f7f2d6d5c;
 Thu, 22 Oct 2020 18:42:39 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVfXq-0007u8-TE; Thu, 22 Oct 2020 18:42:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVfXq-0003TP-JP; Thu, 22 Oct 2020 18:42:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVfXq-0005QG-It; Thu, 22 Oct 2020 18:42:38 +0000
X-Inumbo-ID: cfe19b88-1c1b-40e5-9df7-191f7f2d6d5c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=O2pAJTp5AMNcILOJt7CFsoKqsVKsqqCMcOzQn1eTZN0=; b=e5oLO2BJFE4MXwYR45deV7maV4
	bMbgBGT/Y75jMTkaBhsSFpSkHIMljNiE7Oc8DcuGkgYnnwSrMbqcsEdWiEK2qHecjO/6m89nyUyP2
	OZNaKIiUSSu4OwCdOTVLEHrEN+8ACD1cmZGE7OItBOrAL6t785A/kntjqFTkJ9TZsehA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156100-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156100: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 18:42:38 +0000

flight 156100 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156100/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   63 days
Failing since        152659  2020-08-21 14:07:39 Z   62 days  117 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 18:44:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 18:44:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10637.28381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVfZh-0004I0-5W; Thu, 22 Oct 2020 18:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10637.28381; Thu, 22 Oct 2020 18:44:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVfZh-0004Ht-2Q; Thu, 22 Oct 2020 18:44:33 +0000
Received: by outflank-mailman (input) for mailman id 10637;
 Thu, 22 Oct 2020 18:44:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GjE6=D5=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVfZf-0004Ho-A4
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 18:44:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6212469c-41fc-4145-ac83-55828beeed68;
 Thu, 22 Oct 2020 18:44:30 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVfZc-0007vV-PU; Thu, 22 Oct 2020 18:44:28 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVfZc-0006PY-H3; Thu, 22 Oct 2020 18:44:28 +0000
X-Inumbo-ID: 6212469c-41fc-4145-ac83-55828beeed68
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zKRIqE5QdB44gNecpq9/v2buwXY54W2I5s/epMF+zfY=; b=IvwEonIy05AykSGlxgHqIkYYMb
	UP4kh8NlEu7yrH5Gs9nYjF/r+v7zHHg9OjdMwmrDCfB9noT9cIPmzzfmiuDnUwOIbVyRbpC/jRBRe
	avS92hDWPNMsGTF+J6fogLWdRWb6rqkaWTKIXKQBP/xir9x6M26i3Rj1t4d3yrEbBgjY=;
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
To: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201021221253.GA73207@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <930267bd-5442-3ff0-bb5b-1ed8e2ebe37c@xen.org>
Date: Thu, 22 Oct 2020 19:44:26 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <20201021221253.GA73207@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

Thank you for the patch. FYI, I tweaked the commit title a bit before
committing.

The title is now: "xen/arm: acpi: Don't fail it SPCR table is absent".

Cheers,

On 21/10/2020 23:12, Elliott Mitchell wrote:
> Absence of a SPCR table likely means the console is a framebuffer.  In
> such case acpi_iomem_deny_access() should NOT fail.
> 
> Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> ---
>   xen/arch/arm/acpi/domain_build.c | 19 ++++++++++---------
>   1 file changed, 10 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/domain_build.c b/xen/arch/arm/acpi/domain_build.c
> index 1b1cfabb00..bbdc90f92c 100644
> --- a/xen/arch/arm/acpi/domain_build.c
> +++ b/xen/arch/arm/acpi/domain_build.c
> @@ -42,17 +42,18 @@ static int __init acpi_iomem_deny_access(struct domain *d)
>       status = acpi_get_table(ACPI_SIG_SPCR, 0,
>                               (struct acpi_table_header **)&spcr);
>   
> -    if ( ACPI_FAILURE(status) )
> +    if ( ACPI_SUCCESS(status) )
>       {
> -        printk("Failed to get SPCR table\n");
> -        return -EINVAL;
> +        mfn = spcr->serial_port.address >> PAGE_SHIFT;
> +        /* Deny MMIO access for UART */
> +        rc = iomem_deny_access(d, mfn, mfn + 1);
> +        if ( rc )
> +            return rc;
> +    }
> +    else
> +    {
> +        printk("Failed to get SPCR table, Xen console may be unavailable\n");
>       }
> -
> -    mfn = spcr->serial_port.address >> PAGE_SHIFT;
> -    /* Deny MMIO access for UART */
> -    rc = iomem_deny_access(d, mfn, mfn + 1);
> -    if ( rc )
> -        return rc;
>   
>       /* Deny MMIO access for GIC regions */
>       return gic_iomem_deny_access(d);
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 19:19:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 19:19:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10641.28394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVg6z-0007G3-RC; Thu, 22 Oct 2020 19:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10641.28394; Thu, 22 Oct 2020 19:18:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVg6z-0007Fw-OD; Thu, 22 Oct 2020 19:18:57 +0000
Received: by outflank-mailman (input) for mailman id 10641;
 Thu, 22 Oct 2020 19:18:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JUr7=D5=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kVg6z-0007Fr-1N
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 19:18:57 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34ca4d40-ad36-46f5-a35e-bab1a3cc54a7;
 Thu, 22 Oct 2020 19:18:56 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09MJIfm6082609
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 22 Oct 2020 15:18:47 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09MJIeBP082608;
 Thu, 22 Oct 2020 12:18:40 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: 34ca4d40-ad36-46f5-a35e-bab1a3cc54a7
Date: Thu, 22 Oct 2020 12:18:40 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
Message-ID: <20201022191840.GB81455@mattapan.m5p.com>
References: <20201021221253.GA73207@mattapan.m5p.com>
 <930267bd-5442-3ff0-bb5b-1ed8e2ebe37c@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <930267bd-5442-3ff0-bb5b-1ed8e2ebe37c@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Thu, Oct 22, 2020 at 07:38:26PM +0100, Julien Grall wrote:
> I don't think we are very consistent here... I would not add them 
> myself, but I don't particularly mind them (I know some editors will add 
> them automatically).
> 
> I will keep them while committing. For the patch:

I would tend to remove them on commit since I dislike them.  But, as
stated, I was unsure.

On default settings, clang-format will object to:

if (thing)
{
	foo();
}
else
	bar();

Or

if (thing)
	foo();
else
{
	bar();
}

I *like* those formats, but was under the impression most people did not.
The indentation is the more visually obvious indicator, but it is the
braces that the compiler actually uses.  As such I *like* the misleading
indentation warnings, as those seemed to have a fairly high true-positive
rate.


On Thu, Oct 22, 2020 at 07:44:26PM +0100, Julien Grall wrote:
> Thank you for the patch. FYI, I tweaked the commit title a bit before
> committing.
> 
> The title is now: "xen/arm: acpi: Don't fail it SPCR table is absent".

Perhaps "xen/arm: acpi: Don't fail on absent SPCR table"?

What you're suggesting doesn't read well to me.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Oct 22 21:17:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 21:17:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10649.28409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVhxV-0001YO-O6; Thu, 22 Oct 2020 21:17:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10649.28409; Thu, 22 Oct 2020 21:17:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVhxV-0001YH-L8; Thu, 22 Oct 2020 21:17:17 +0000
Received: by outflank-mailman (input) for mailman id 10649;
 Thu, 22 Oct 2020 21:17:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WvX7=D5=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kVhxU-0001YC-4Y
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 21:17:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 569779f4-b278-434b-8817-4aed74fc5a68;
 Thu, 22 Oct 2020 21:17:15 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 10F1A21D43;
 Thu, 22 Oct 2020 21:17:14 +0000 (UTC)
X-Inumbo-ID: 569779f4-b278-434b-8817-4aed74fc5a68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603401434;
	bh=HFynRJdHj89IlpNM0UxHVn/4elj4VJ9eirdXAUSC+g4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pB07Fpdt98P61ESMPoZa9B2F+TQ3XRiBI9XwAQkRs4mzPJ9NDFBknabIJy8BMFnla
	 u0DyLy9c251ImqWwsykbX2RYbiRESzH0jj6ixNp/22AgmWoiUHK4Oj0mKvIhEOYHn+
	 11xdoAxPSHCGfxRvpj/FoQUeR9qfqtQ991bYVvns=
Date: Thu, 22 Oct 2020 14:17:13 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
In-Reply-To: <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
Message-ID: <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
References: <20201022014310.GA70872@mattapan.m5p.com> <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 22 Oct 2020, Julien Grall wrote:
> On 22/10/2020 02:43, Elliott Mitchell wrote:
> > Linux requires UEFI support to be enabled on ARM64 devices.  While many
> > ARM64 devices lack ACPI, the writing seems to be on the wall that
> > UEFI/ACPI may take over.  Some common devices may need ACPI table
> > support.
> > 
> > Presently I think it is worth removing the dependency on CONFIG_EXPERT.
> 
> The idea behind EXPERT is to gate any feature that is not considered to be
> stable/complete enough to be used in production.

Yes, and from that point of view I don't think we want to remove EXPERT
from ACPI yet. However, the idea of hiding things behind EXPERT works
very well for new esoteric features, something like memory introspection
or memory overcommit. It does not work well for things that are actually
required to boot on the platform.

Typically ACPI systems don't come with device tree at all (RPi4 being an
exception), so users don't really have much of a choice in the matter.

From that point of view, it would be better to remove EXPERT from ACPI,
maybe even build ACPI by default, *but* to add a warning at boot saying
something like:

"ACPI support is experimental. Boot using Device Tree if you can."


That would better convey the risks of using ACPI, while at the same time
making it a bit easier for users to boot on their ACPI-only platforms.


> I don't consider the ACPI complete because the parsing of the IORT (used to
> discover SMMU and GICv3 ITS) is not there yet.
> 
> I vaguely remember some issues on system using SMMU (e.g. Thunder-X) because
> Dom0 will try to use the IOMMU and this would break PV drivers.

I am not sure why Dom0 using the IOMMU would break PV drivers? Is it
because the pagetable is not properly updated when mapping foreign
pages?


> Therefore I think we at least want to consider to hide SMMUs from dom0 before
> removing EXPERT. Ideally, I would also like the feature to be tested in
> Osstest.
> 
> The good news is Xen Project already has systems (e.g. Thunder-X, Softiron)
> that can support ACPI. So it should hopefully just be a matter of telling
> them to boot with ACPI rather than DT.

I agree that we want to keep ACPI "expert/experimental" given its
current state but maybe we can find a better way to carry that message
than to set EXPERT in Kconfig.

And yes, if we wanted to make ACPI less "expert/experimental" we
definitely need some testing in OSSTest and any critical bugs (e.g. PV
drivers not working) addressed.


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 22:20:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 22:20:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10653.28424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kViwF-0007R6-NA; Thu, 22 Oct 2020 22:20:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10653.28424; Thu, 22 Oct 2020 22:20:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kViwF-0007Qz-IA; Thu, 22 Oct 2020 22:20:03 +0000
Received: by outflank-mailman (input) for mailman id 10653;
 Thu, 22 Oct 2020 22:20:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A75Z=D5=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kViwD-0007Cn-KY
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 22:20:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f4efc31-ef90-4e2c-ba00-790d42fdab8a;
 Thu, 22 Oct 2020 22:19:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kViw5-000447-0k; Thu, 22 Oct 2020 22:19:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kViw4-00023v-NU; Thu, 22 Oct 2020 22:19:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kViw4-0005LV-Mw; Thu, 22 Oct 2020 22:19:52 +0000
X-Inumbo-ID: 4f4efc31-ef90-4e2c-ba00-790d42fdab8a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mRJ5no7t8344JIo3JvCWfylGgDoQhwMO69tXyneu2T0=; b=NZxOC8LQFPgUZmBzN+NAK6elAS
	e0Y4K7aCTAEKSMlfOkduvgJCNKIGFL53GmCHbeLsChRasgI6NghpwRP4mTVz1cuhr/YwARCSCRJ/Z
	CHwHTryht4+gIl+Rukg3WrOvky4fFPXZXrRn4IcsRofmcMgpHvrxoeDjJJAHa7ODHBYA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156108-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156108: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=861f0c110976fa8879b7bf63d9478b6be83d4ab6
X-Osstest-Versions-That:
    xen=3b49791e4cc2f38dd84bf331b75217adaef636e3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 22 Oct 2020 22:19:52 +0000

flight 156108 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156108/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  861f0c110976fa8879b7bf63d9478b6be83d4ab6
baseline version:
 xen                  3b49791e4cc2f38dd84bf331b75217adaef636e3

Last test of basis   156047  2020-10-20 21:00:31 Z    2 days
Testing same since   156108  2020-10-22 19:02:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Elliott Mitchell <ehem+xen@m5p.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3b49791e4c..861f0c1109  861f0c110976fa8879b7bf63d9478b6be83d4ab6 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Oct 22 22:27:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 22:27:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10656.28435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVj3l-0008Cr-FF; Thu, 22 Oct 2020 22:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10656.28435; Thu, 22 Oct 2020 22:27:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVj3l-0008Ck-CE; Thu, 22 Oct 2020 22:27:49 +0000
Received: by outflank-mailman (input) for mailman id 10656;
 Thu, 22 Oct 2020 22:27:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JUr7=D5=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kVj3k-0008Cf-P5
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 22:27:48 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07a2df02-8e59-41bd-a5e5-d38c2e1c3bd2;
 Thu, 22 Oct 2020 22:27:47 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09MMRYj0083433
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 22 Oct 2020 18:27:39 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09MMRXj0083432;
 Thu, 22 Oct 2020 15:27:33 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: 07a2df02-8e59-41bd-a5e5-d38c2e1c3bd2
Date: Thu, 22 Oct 2020 15:27:33 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
Message-ID: <20201022222733.GA83375@mattapan.m5p.com>
References: <20201022014310.GA70872@mattapan.m5p.com>
 <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
 <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Thu, Oct 22, 2020 at 02:17:13PM -0700, Stefano Stabellini wrote:
> On Thu, 22 Oct 2020, Julien Grall wrote:
> > On 22/10/2020 02:43, Elliott Mitchell wrote:
> > > Linux requires UEFI support to be enabled on ARM64 devices.  While many
> > > ARM64 devices lack ACPI, the writing seems to be on the wall that
> > > UEFI/ACPI may eventually take over.  Some common devices may need ACPI
> > > table support.
> > > 
> > > Presently I think it is worth removing the dependency on CONFIG_EXPERT.
> > 
> > The idea behind EXPERT is to gate any feature that is not considered to be
> > stable/complete enough to be used in production.
> 
> Yes, and from that point of view I don't think we want to remove EXPERT
> from ACPI yet. However, the idea of hiding things behind EXPERT works
> very well for new esoteric features, something like memory introspection
> or memory overcommit. It does not work well for things that are actually
> required to boot on the platform.
> 
> Typically ACPI systems don't come with device tree at all (RPi4 being an
> exception), so users don't really have much of a choice in the matter.
> 
> From that point of view, it would be better to remove EXPERT from ACPI,
> maybe even build ACPI by default, *but* to add a warning at boot saying
> something like:
> 
> "ACPI support is experimental. Boot using Device Tree if you can."
> 
> 
> That would better convey the risks of using ACPI, while at the same time
> making it a bit easier for users to boot on their ACPI-only platforms.

This matches my view.  I was thinking about including "default y", but I
felt the chances of that getting through were lower.  I concur with a
warning on boot being a good approach.
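
For reference, the change under discussion amounts to a small Kconfig tweak. A hedged sketch (the option text follows the style of xen/arch/arm/Kconfig, but the exact wording and the `default y` variant are illustrative, not the submitted patch):

```kconfig
# Before: the prompt is only visible when EXPERT is enabled
config ACPI
	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
	depends on ARM_64

# After (as discussed): always visible, optionally on by default,
# with the "ACPI support is experimental" warning printed at boot instead
config ACPI
	bool "ACPI (Advanced Configuration and Power Interface) Support"
	depends on ARM_64
	default y
```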


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Oct 22 22:59:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 22 Oct 2020 22:59:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10659.28448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVjYM-0002el-0I; Thu, 22 Oct 2020 22:59:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10659.28448; Thu, 22 Oct 2020 22:59:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVjYL-0002ee-TO; Thu, 22 Oct 2020 22:59:25 +0000
Received: by outflank-mailman (input) for mailman id 10659;
 Thu, 22 Oct 2020 22:59:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iz3t=D5=nask.pl=michall@srs-us1.protection.inumbo.net>)
 id 1kVjYK-0002eY-DL
 for xen-devel@lists.xenproject.org; Thu, 22 Oct 2020 22:59:24 +0000
Received: from mx.nask.net.pl (unknown [195.187.55.89])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3cc0e301-ef09-4ff5-bb8f-a019eba50933;
 Thu, 22 Oct 2020 22:59:21 +0000 (UTC)
X-Inumbo-ID: 3cc0e301-ef09-4ff5-bb8f-a019eba50933
X-Virus-Scanned: amavisd-new at 
Date: Fri, 23 Oct 2020 00:59:19 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Message-ID: <157653679.6164.1603407559737.JavaMail.zimbra@nask.pl>
Subject: BUG: sched=credit2 machine hang when using DRAKVUF
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [195.187.238.14]
X-Mailer: Zimbra 9.0.0_GA_3969 (ZimbraWebClient - GC86 (Win)/9.0.0_GA_3969)
Thread-Index: ER30NuT7OhIYTWnjxdfF+8Y02MLJ1Q==
Thread-Topic: sched=credit2 machine hang when using DRAKVUF

Hello,

when using DRAKVUF against a Windows 7 x64 DomU, the whole machine hangs
after a few minutes.

The chance of a hang seems to be correlated with the number of PCPUs: in
this case we have 14 PCPUs and the hang is very easily reproducible, while
on other machines with 2-4 PCPUs it's very rare (but still occurs
sometimes). The issue is observed with the default sched=credit2 and is no
longer reproducible once sched=credit is set.


Enclosed: panic log from my Dom0.

Best regards,
Michał Leszczyński
CERT Polska


paź 22 12:20:50 hostname kernel: rcu: INFO: rcu_sched self-detected stall on CPU
paź 22 12:20:50 hostname kernel: rcu:         3-....: (21002 ticks this GP) idle=7e2/1/0x4000000000000002 softirq=61729/61729 fqs=10490
paź 22 12:20:50 hostname kernel: rcu:          (t=21003 jiffies g=36437 q=9406)
paź 22 12:20:50 hostname kernel: NMI backtrace for cpu 3
paź 22 12:20:50 hostname kernel: CPU: 3 PID: 4153 Comm: drakvuf Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
paź 22 12:20:50 hostname kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T, BIOS 2.1.8 04/30/2019
paź 22 12:20:50 hostname kernel: Call Trace:
paź 22 12:20:50 hostname kernel:  <IRQ>
paź 22 12:20:50 hostname kernel:  dump_stack+0x5c/0x80
paź 22 12:20:50 hostname kernel:  nmi_cpu_backtrace.cold.4+0x13/0x50
paź 22 12:20:50 hostname kernel:  ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
paź 22 12:20:50 hostname kernel:  nmi_trigger_cpumask_backtrace+0xf9/0xfb
paź 22 12:20:50 hostname kernel:  rcu_dump_cpu_stacks+0x9b/0xcb
paź 22 12:20:50 hostname kernel:  rcu_check_callbacks.cold.81+0x1db/0x335
paź 22 12:20:50 hostname kernel:  ? tick_sched_do_timer+0x60/0x60
paź 22 12:20:50 hostname kernel:  update_process_times+0x28/0x60
paź 22 12:20:50 hostname kernel:  tick_sched_handle+0x22/0x60
paź 22 12:20:50 hostname kernel:  tick_sched_timer+0x37/0x70
paź 22 12:20:50 hostname kernel:  __hrtimer_run_queues+0x100/0x280
paź 22 12:20:50 hostname kernel:  hrtimer_interrupt+0x100/0x220
paź 22 12:20:50 hostname kernel:  xen_timer_interrupt+0x1e/0x30
paź 22 12:20:50 hostname kernel:  __handle_irq_event_percpu+0x46/0x190
paź 22 12:20:50 hostname kernel:  handle_irq_event_percpu+0x30/0x80
paź 22 12:20:50 hostname kernel:  handle_percpu_irq+0x40/0x60
paź 22 12:20:50 hostname kernel:  generic_handle_irq+0x27/0x30
paź 22 12:20:50 hostname kernel:  __evtchn_fifo_handle_events+0x17d/0x190
paź 22 12:20:50 hostname kernel:  __xen_evtchn_do_upcall+0x42/0x80
paź 22 12:20:50 hostname kernel:  xen_evtchn_do_upcall+0x27/0x40
paź 22 12:20:50 hostname kernel:  xen_do_hypervisor_callback+0x29/0x40
paź 22 12:20:50 hostname kernel:  </IRQ>
paź 22 12:20:50 hostname kernel: RIP: e030:smp_call_function_single+0xce/0xf0
paź 22 12:20:50 hostname kernel: Code: 8b 4c 24 38 65 48 33 0c 25 28 00 00 00 75 34 c9 c3 48 89 d1 48 89 f2 48 89 e6 e8 6d fe ff ff 8b 54 24 18 83 e2 01 74 0b f3 90 <8b> 54 24 18 83 e2 01 75 f5 eb ca 8b 05 b9 99 4d 01 85 c0 75 88 0f
paź 22 12:20:50 hostname kernel: RSP: e02b:ffffc9004713bd00 EFLAGS: 00000202
paź 22 12:20:50 hostname kernel: RAX: 0000000000000000 RBX: ffff888b0b6eea40 RCX: 0000000000000200
paź 22 12:20:50 hostname kernel: RDX: 0000000000000001 RSI: ffffffff8212e4a0 RDI: ffffffff81c2dec0
paź 22 12:20:50 hostname kernel: RBP: ffffc9004713bd50 R08: 0000000000000000 R09: ffff888c54052480
paź 22 12:20:50 hostname kernel: R10: ffff888c540524a8 R11: 0000000000000000 R12: ffffc9004713bd60
paź 22 12:20:50 hostname kernel: R13: 0000000080000000 R14: ffffffff80000000 R15: ffff888b0b6eeab0
paź 22 12:20:50 hostname kernel:  ? xen_pgd_alloc+0x110/0x110
paź 22 12:20:50 hostname kernel:  xen_exit_mmap+0xaa/0x100
paź 22 12:20:50 hostname kernel:  exit_mmap+0x64/0x180
paź 22 12:20:50 hostname kernel:  ? __raw_spin_unlock+0x5/0x10
paź 22 12:20:50 hostname kernel:  ? __handle_mm_fault+0x1090/0x1270
paź 22 12:20:50 hostname kernel:  ? _raw_spin_unlock_irqrestore+0x14/0x20
paź 22 12:20:50 hostname kernel:  ? exit_robust_list+0x5b/0x130
paź 22 12:20:50 hostname kernel:  mmput+0x54/0x130
paź 22 12:20:50 hostname kernel:  do_exit+0x290/0xb90
paź 22 12:20:50 hostname kernel:  ? handle_mm_fault+0xd6/0x200
paź 22 12:20:50 hostname kernel:  do_group_exit+0x3a/0xa0
paź 22 12:20:50 hostname kernel:  __x64_sys_exit_group+0x14/0x20
paź 22 12:20:50 hostname kernel:  do_syscall_64+0x53/0x110
paź 22 12:20:50 hostname kernel:  entry_SYSCALL_64_after_hwframe+0x44/0xa9
paź 22 12:20:50 hostname kernel: RIP: 0033:0x7f98d23ec9d6
paź 22 12:20:50 hostname kernel: Code: Bad RIP value.
paź 22 12:20:50 hostname kernel: RSP: 002b:00007ffc4a0327f8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
paź 22 12:20:50 hostname kernel: RAX: ffffffffffffffda RBX: 00007f98d24dd760 RCX: 00007f98d23ec9d6
paź 22 12:20:50 hostname kernel: RDX: 0000000000000000 RSI: 000000000000003c RDI: 0000000000000000
paź 22 12:20:50 hostname kernel: RBP: 0000000000000000 R08: 00000000000000e7 R09: ffffffffffffff60
paź 22 12:20:50 hostname kernel: R10: 0000000000000000 R11: 0000000000000246 R12: 00007f98d24dd760
paź 22 12:20:50 hostname kernel: R13: 000000000000005a R14: 00007f98d24e6428 R15: 0000000000000000


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 00:03:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 00:03:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10663.28460 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVkXv-0001Cw-7a; Fri, 23 Oct 2020 00:03:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10663.28460; Fri, 23 Oct 2020 00:03:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVkXv-0001Cp-3w; Fri, 23 Oct 2020 00:03:03 +0000
Received: by outflank-mailman (input) for mailman id 10663;
 Fri, 23 Oct 2020 00:03:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3H45=D6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kVkXs-0001Ck-Ti
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 00:03:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0f6332b-cb74-4e37-a702-e2b03b2b295c;
 Fri, 23 Oct 2020 00:02:57 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0303A223C7;
 Fri, 23 Oct 2020 00:02:55 +0000 (UTC)
X-Inumbo-ID: b0f6332b-cb74-4e37-a702-e2b03b2b295c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603411376;
	bh=hciVYpkjjxo85dXM0usTEYDPJ+OHhjhnZi3yVCHUB28=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qh9W9hsCQfQ/mmkw3mkX9kAy5HwoljLNOWTCR1TpBSZKCV7CMixBolJswgEISplVF
	 0nxV3Vn0R0X7QhGjANjJOj0eAtZDWkcvgUtnssdiiNbQWZrP423IZqxxsSc/gVUWtA
	 9yD6bXGKn37UfMcI3ksp3r4aZ8qzAsa7s8IgHlkk=
Date: Thu, 22 Oct 2020 17:02:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Rahul Singh <Rahul.Singh@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
In-Reply-To: <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
Message-ID: <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com> <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org> <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com> <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1704557436-1603410300=:12247"
Content-ID: <alpine.DEB.2.21.2010221645420.12247@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1704557436-1603410300=:12247
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2010221645421.12247@sstabellini-ThinkPad-T480s>

On Thu, 22 Oct 2020, Julien Grall wrote:
> > > On 20/10/2020 16:25, Rahul Singh wrote:
> > > > Add support for ARM architected SMMUv3 implementations. It is based on
> > > > the Linux SMMUv3 driver.
> > > > Major differences from the Linux driver are as follows:
> > > > 1. Only Stage-2 translation is supported, whereas the Linux driver
> > > >     supports both Stage-1 and Stage-2 translations.
> > > > 2. The P2M page table is used instead of creating a new one, as SMMUv3
> > > >     has the capability to share page tables with the CPU.
> > > > 3. Tasklets are used in place of Linux's threaded IRQs for event queue
> > > >     and priority queue IRQ handling.
> > > 
> > > Tasklets are not a replacement for threaded IRQs. In particular, they
> > > will have priority over anything else (IOW nothing will run on the pCPU
> > > until they are done).
> > > 
> > > Do you know why Linux is using threads? Is it because of long-running
> > > operations?
> > 
> > Yes, you are right: Linux is using threaded IRQs because of long-running
> > operations.
> > 
> > SMMUv3 reports faults/events via memory-based circular buffer queues, not
> > via registers. As per my understanding, it is time-consuming to process
> > the memory-based queues in interrupt context, which is why Linux uses
> > threaded IRQs to process the faults/events from the SMMU.
> > 
> > I didn't find any other mechanism in Xen besides tasklets to defer the
> > work; that's why I used tasklets in Xen as a replacement for threaded
> > IRQs. If we do all the work in interrupt context, we will make Xen less
> > responsive.
> 
> So we need to make sure that Xen continues to receive interrupts, but we
> also need to make sure that a vCPU bound to the pCPU stays responsive.
> 
> > 
> > If you know of another solution in Xen that can be used to defer work
> > from interrupt context, please let me know and I will try to use it.
> 
> One of my work colleagues encountered a similar problem recently. He had
> a long-running tasklet and wanted it broken down into smaller chunks.
> 
> We decided to use a timer to reschedule the tasklet in the future. This
> allows the scheduler to run other loads (e.g. vCPUs) for some time.
> 
> This is pretty hackish, but I couldn't find a better solution as tasklets
> have high priority.
> 
> Maybe the others will have a better idea.

Julien's suggestion is a good one.

But I think tasklets can be configured to be called from the idle_loop,
in which case they are not run in interrupt context?

Still, tasklets run until completion in Xen, which could take too long.
The code has to voluntarily release control of the execution flow once
it realizes it has been running for too long. The rescheduling via a
timer works.
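
The timer-rescheduling pattern can be sketched in standalone C. This is an illustrative simulation, not Xen code: the names are made up, and in Xen the re-arm would be done with a timer whose callback calls tasklet_schedule() again, not the plain loop used here to drive the example.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* A long-running job processes at most BUDGET items per tasklet
 * invocation, then reports that it wants to be rescheduled, so the
 * scheduler can run vCPUs in between passes. */
#define BUDGET 4

struct job {
    size_t next, total;   /* progress through the event queue */
    int passes;           /* number of tasklet invocations so far */
};

/* One tasklet pass; returns true if the caller should re-arm the timer. */
static bool job_run_once(struct job *j)
{
    size_t done = 0;

    while (j->next < j->total && done < BUDGET) {
        /* ... process one event-queue entry ... */
        j->next++;
        done++;
    }
    j->passes++;
    return j->next < j->total;
}
```

Driving job_run_once() in a loop stands in for the timer firing: a job of 10 entries completes in three passes of at most four entries each, instead of monopolizing the pCPU for one long pass.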


Now, to brainstorm other possible alternatives: for hypercalls we have
been using hypercall continuations. Continuations are a way to break up a
hypercall implementation that takes too long into multiple execution
chunks. It works by the hypercall calling into itself again with updated
arguments, so that the scheduler has a chance to do other operations in
between, including running other tasklets and softirqs.

That works well because the source of the work is a guest request,
specifically a hypercall. However, in the case of the SMMU driver, there
is no hypercall: the Xen driver has to do work in response to an
interrupt, and the work is not tied to one particular domain.

So I don't think the hypercall continuation model could work here. The
timer seems to be the best option.
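
The continuation idea described above can be sketched as follows. The names here are illustrative, not Xen's (Xen's real mechanism for hypercalls is hypercall_create_continuation()): the operation consumes a bounded budget and, if work remains, returns a "restart" code with its cursor argument updated, so the caller re-issues the same call later.

```c
#include <assert.h>

#define CHUNK    16
#define RESTART  (-1)   /* stand-in for -ERESTART */

/* Process up to CHUNK units of work starting at *cursor.  On return,
 * *cursor records how far we got; RESTART means "call me again". */
static int do_big_op(unsigned long *cursor, unsigned long end)
{
    unsigned long budget = CHUNK;

    while (*cursor < end && budget--) {
        /* ... process one unit of work at *cursor ... */
        (*cursor)++;
    }
    return (*cursor < end) ? RESTART : 0;
}
```

Re-invoking do_big_op() whenever it returns RESTART completes 40 units in three calls; between calls the scheduler is free to run other tasklets, softirqs, and vCPUs.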


> > > > 4. Latest version of the Linux SMMUv3 code implements the commands queue
> > > >     access functions based on atomic operations implemented in Linux.
> > > 
> > > Can you provide more details?
> > 
> > I tried to port the latest version of the SMMUv3 code, and I observed
> > that in order to port it I would also have to port atomic operations
> > implemented in Linux to Xen, as the latest Linux code uses atomic
> > operations to process the command queues (atomic_cond_read_relaxed(),
> > atomic_long_cond_read_relaxed(), atomic_fetch_andnot_relaxed()).
> 
> Thank you for the explanation. I think it would be best to import the atomic
> helpers and use the latest code.
> 
> This will ensure that we don't re-introduce bugs and also buy us some
> time before the Linux and Xen drivers diverge too much again.
> 
> Stefano, what do you think?

I think you are right.
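
For reference, the Linux helpers named above can be approximated with C11 atomics. This is a hedged sketch of the semantics only, not the Linux implementations, and the signatures are simplified for illustration.

```c
#include <stdatomic.h>

/* Like Linux's atomic_fetch_andnot_relaxed(): clear `mask` bits in *v,
 * returning the old value. */
static unsigned long fetch_andnot_relaxed(atomic_ulong *v, unsigned long mask)
{
    return atomic_fetch_and_explicit(v, ~mask, memory_order_relaxed);
}

/* Like atomic_cond_read_relaxed(v, cond): spin reading *v until `cond`
 * holds, where VAL is the value just read; evaluates to that value.
 * (GNU statement expression, as in the Linux original.) */
#define cond_read_relaxed(v, cond) ({                                   \
    unsigned long VAL;                                                  \
    for (;;) {                                                          \
        VAL = atomic_load_explicit((v), memory_order_relaxed);          \
        if (cond)                                                       \
            break;                                                      \
        /* a real implementation would cpu_relax() here */              \
    }                                                                   \
    VAL;                                                                \
})
```

These are the building blocks the SMMUv3 command-queue code relies on, e.g. clearing ownership bits with fetch-andnot and waiting for a producer index to advance with a conditional read.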
--8323329-1704557436-1603410300=:12247--


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 00:55:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 00:55:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10667.28475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVlMn-00060g-Co; Fri, 23 Oct 2020 00:55:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10667.28475; Fri, 23 Oct 2020 00:55:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVlMn-00060Z-9W; Fri, 23 Oct 2020 00:55:37 +0000
Received: by outflank-mailman (input) for mailman id 10667;
 Fri, 23 Oct 2020 00:55:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVlMm-0005zv-J7
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 00:55:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 05f6660e-ad59-4727-9167-503a20296f41;
 Fri, 23 Oct 2020 00:55:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVlMd-0007rv-Te; Fri, 23 Oct 2020 00:55:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVlMd-0003K8-K6; Fri, 23 Oct 2020 00:55:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVlMd-0004P0-Jd; Fri, 23 Oct 2020 00:55:27 +0000
X-Inumbo-ID: 05f6660e-ad59-4727-9167-503a20296f41
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5mPGKcIsv31h6JxIIj8afh92uE4mtJOJQmUlmL5Rht4=; b=G7lbSTJcAhGk3hbigKoCP45SvN
	+lH01CJTwpHhLRlkxV5M5M8q3grPC5ZKNfaLf6R3E1eSihkJDkKr6Gvplv+eO47Px1SF9Sm36j1PX
	wQK0Vv87ufar93ye19iMTBMylDZ6FunLgzIefyacJ4V1p3vcjITNuXJ6ux9PLAZzl4ag=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156109-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156109: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 00:55:27 +0000

flight 156109 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156109/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   63 days
Failing since        152659  2020-08-21 14:07:39 Z   62 days  118 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 01:38:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 01:38:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10670.28490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVm1p-000610-PZ; Fri, 23 Oct 2020 01:38:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10670.28490; Fri, 23 Oct 2020 01:38:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVm1p-00060r-Iz; Fri, 23 Oct 2020 01:38:01 +0000
Received: by outflank-mailman (input) for mailman id 10670;
 Fri, 23 Oct 2020 01:38:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVm1o-0005ym-Bm
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 01:38:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95e5b2d6-62b3-4065-8c62-00db10ac8b39;
 Fri, 23 Oct 2020 01:37:53 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVm1h-0004VD-6L; Fri, 23 Oct 2020 01:37:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVm1g-0005i1-Si; Fri, 23 Oct 2020 01:37:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVm1g-0004pJ-S9; Fri, 23 Oct 2020 01:37:52 +0000
X-Inumbo-ID: 95e5b2d6-62b3-4065-8c62-00db10ac8b39
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FK85YrbrSWBtVBXwYjjoX+MRvixr/UBfCnI8Uw85/Ks=; b=vr1ozvl9nJbyLfe2ma5KkBqpGu
	HlqXlQ6no5DwX3aR6PgmZdZk2j8c9QwFLoD+OMPrJCgc7bS1UDXJrD7CDkxWfTYOvUwKC7zfdYM9O
	S8f5JhM15x7DEIu9B+Gc75Wl18ayDkHpNQvXI2XJdkewMRiOXSVaFn6AZM6Hfzeu74YY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156102-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156102: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=264eccb5dfc345c2e004883f00e62959f818fafd
X-Osstest-Versions-That:
    ovmf=24cf72726564eac7ba9abb24f3d05795164d0a70
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 01:37:52 +0000

flight 156102 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156102/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 264eccb5dfc345c2e004883f00e62959f818fafd
baseline version:
 ovmf                 24cf72726564eac7ba9abb24f3d05795164d0a70

Last test of basis   156091  2020-10-22 14:14:55 Z    0 days
Testing same since   156102  2020-10-22 17:10:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   24cf727265..264eccb5df  264eccb5dfc345c2e004883f00e62959f818fafd -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 02:42:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 02:42:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10673.28502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVn1u-0004F8-FE; Fri, 23 Oct 2020 02:42:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10673.28502; Fri, 23 Oct 2020 02:42:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVn1u-0004F1-Ay; Fri, 23 Oct 2020 02:42:10 +0000
Received: by outflank-mailman (input) for mailman id 10673;
 Fri, 23 Oct 2020 02:42:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVn1t-0004Ew-By
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 02:42:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e910fba-4635-4e1e-bba9-2b4610c61a66;
 Fri, 23 Oct 2020 02:42:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVn1p-0006Cu-1h; Fri, 23 Oct 2020 02:42:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVn1o-0001mR-Nr; Fri, 23 Oct 2020 02:42:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVn1o-0007xy-NH; Fri, 23 Oct 2020 02:42:04 +0000
X-Inumbo-ID: 2e910fba-4635-4e1e-bba9-2b4610c61a66
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NO3GY79EljVYi04/Lt2zQIF4+zUDjI/9NaDGn7ummMw=; b=5a/CsSL5OSYF8yWib6K5qm+fMl
	qv8D5OAxSWZZ0bX6mXewX0G15ppJp2nNaazFz0avq06zSQSkXy0+B/nO5cnpg9V2EbBs3qGGVL2wM
	ISSYy10RO1es/1vpk0dyskcq6BeLLypUS12FFdNDrEVIluDDnmzPopjwx+6FrgcSQ/Zc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156110-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156110: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 02:42:04 +0000

flight 156110 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156110/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   63 days
Failing since        152659  2020-08-21 14:07:39 Z   62 days  119 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 03:35:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 03:35:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10678.28517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVnrO-0000hP-JD; Fri, 23 Oct 2020 03:35:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10678.28517; Fri, 23 Oct 2020 03:35:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVnrO-0000hI-GF; Fri, 23 Oct 2020 03:35:22 +0000
Received: by outflank-mailman (input) for mailman id 10678;
 Fri, 23 Oct 2020 03:35:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yA4w=D6=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kVnrM-0000hD-N1
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 03:35:20 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c52b60c9-6201-4adb-839c-9d949dde7327;
 Fri, 23 Oct 2020 03:35:19 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09N3Z6vb085398
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 22 Oct 2020 23:35:12 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09N3Z6uK085397;
 Thu, 22 Oct 2020 20:35:06 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: c52b60c9-6201-4adb-839c-9d949dde7327
Date: Thu, 22 Oct 2020 20:35:06 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: ACPI: Remove EXPERT dependency, default on for ARM64
Message-ID: <20201023033506.GC83870@mattapan.m5p.com>
References: <20201022014310.GA70872@mattapan.m5p.com>
 <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
 <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

Linux requires UEFI support to be enabled on ARM64 devices.  While many
ARM64 devices lack ACPI, the writing seems to be on the wall that UEFI/ACPI
may eventually take over.  Some common devices may require ACPI table
support to boot.

For devices which can boot in either mode, continue defaulting to
device-tree.  Add warnings about using ACPI, advising users of the present
situation.

Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
---
Okay, hopefully this is okay: a warning in Kconfig, and a warning on boot.
Perhaps "default y if ARM_64" is redundant, yet it guards against someone
later making it possible to boot aarch32 on an ACPI machine...

I also want a date in the message.  The theory is this warning won't be
there forever, so a date is essential.
---
 xen/arch/arm/Kconfig     | 7 ++++++-
 xen/arch/arm/acpi/boot.c | 9 +++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..29624d03fa 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -32,13 +32,18 @@ menu "Architecture Features"
 source "arch/Kconfig"
 
 config ACPI
-	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
+	bool "ACPI (Advanced Configuration and Power Interface) Support"
 	depends on ARM_64
+	default y if ARM_64
 	---help---
 
 	  Advanced Configuration and Power Interface (ACPI) support for Xen is
 	  an alternative to device tree on ARM64.
 
+	  Note this is presently EXPERIMENTAL.  If a given device has both
+	  device-tree and ACPI support, it is presently (October 2020)
+	  recommended to boot using the device-tree.
+
 config GICV3
 	bool "GICv3 driver"
 	depends on ARM_64 && !NEW_VGIC
diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 30e4bd1bc5..c0e8f85325 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -254,6 +254,15 @@ int __init acpi_boot_table_init(void)
                                    dt_scan_depth1_nodes, NULL) )
         goto disable;
 
+    printk("\n"
+"*************************************************************************\n"
+"*    WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING    *\n"
+"*                                                                       *\n"
+"* Xen-ARM ACPI support is EXPERIMENTAL.  It is presently (October 2020) *\n"
+"* recommended you boot your system in device-tree mode if you can.      *\n"
+"*************************************************************************\n"
+            "\n");
+
     /*
      * ACPI is disabled at this point. Enable it in order to parse
      * the ACPI tables.
-- 
2.20.1


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 03:53:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 03:53:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10680.28529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVo8x-0002bj-4d; Fri, 23 Oct 2020 03:53:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10680.28529; Fri, 23 Oct 2020 03:53:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVo8x-0002bc-1Q; Fri, 23 Oct 2020 03:53:31 +0000
Received: by outflank-mailman (input) for mailman id 10680;
 Fri, 23 Oct 2020 03:53:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVo8w-0002bX-7Z
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 03:53:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80cf4298-87ea-4380-94ed-4ea78b61ab3d;
 Fri, 23 Oct 2020 03:53:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVo8r-0007h7-U2; Fri, 23 Oct 2020 03:53:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVo8r-0005PY-LY; Fri, 23 Oct 2020 03:53:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVo8r-0005AE-L2; Fri, 23 Oct 2020 03:53:25 +0000
X-Inumbo-ID: 80cf4298-87ea-4380-94ed-4ea78b61ab3d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/ewh2dlWRInynoqyN1aC+lI6XswOW9+wOaChtZ8KJe0=; b=I3vMv/zamVceQ0ZOyg31YV1UAs
	pr7RDPHpVIe3IDO9XaVoUdPQCB65Ji9TyzfcYJpgpVvIDPfLFSAPhPxXAu8OcjQJtP/D7EQFXPSJK
	aM6sXLcxkQD8N+Z+H2bk5eBg2B5hZ1o80jU7Zn7PozIm9m03RyHrnygERF1oCpfghC/s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156111-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156111: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 03:53:25 +0000

flight 156111 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156111/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   63 days
Failing since        152659  2020-08-21 14:07:39 Z   62 days  120 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 04:48:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 04:48:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10685.28544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVozg-0007cB-8I; Fri, 23 Oct 2020 04:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10685.28544; Fri, 23 Oct 2020 04:48:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVozg-0007c4-4o; Fri, 23 Oct 2020 04:48:00 +0000
Received: by outflank-mailman (input) for mailman id 10685;
 Fri, 23 Oct 2020 04:47:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVoze-0007bz-TD
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 04:47:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d9ae739a-e6be-45af-93af-1d9983dcb6c0;
 Fri, 23 Oct 2020 04:47:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45FADAC12;
 Fri, 23 Oct 2020 04:47:57 +0000 (UTC)
X-Inumbo-ID: d9ae739a-e6be-45af-93af-1d9983dcb6c0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603428477;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YkR5342vNBYPqcxXX4kwtXys8DqB+7POEMSyGjxkSTk=;
	b=JSvSZoGnITPdQ1Cf3/SdVPtHe1SfZgCJuOhmzprg/xNV7NKOO0xNUfYfQElaKIqtWjlHpZ
	Dlkm38ihh6Fwnz457d504pUepkJvGi5sW/4Ak84yGqQm5Bk8PAIhZvCjP+vMvcZ3ciuZMk
	N4IgcrrobRu3ZBhRUkEMSkXcBsn7GH4=
Subject: Re: BUG: sched=credit2 machine hang when using DRAKVUF
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <157653679.6164.1603407559737.JavaMail.zimbra@nask.pl>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a80f05ac-bd18-563e-12f7-1a0f9f0d4f6b@suse.com>
Date: Fri, 23 Oct 2020 06:47:56 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <157653679.6164.1603407559737.JavaMail.zimbra@nask.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 23.10.20 00:59, Michał Leszczyński wrote:
> Hello,
> 
> When using DRAKVUF against a Windows 7 x64 DomU, the whole machine hangs after a few minutes.
> 
> The chance of a hang seems to be correlated with the number of PCPUs: in this case we have 14 PCPUs and the hang is very easily reproducible, while on other machines with 2-4 PCPUs it is very rare (but still occurs sometimes). The issue is observed with the default sched=credit2 and is no longer reproducible once sched=credit is set.

Interesting. Can you please share some more information?

Which Xen version are you using?

Is there any additional information in the dom0 log which could be
related to the hang (earlier WARN() splats, Oopses, Xen-related
messages, hardware failure messages, ...)?

Can you please try to get backtraces of all CPUs at the time of the
hang?

It would help to know which CPU was the target of the call to
smp_call_function_single(), so a disassembly of that function would
be needed to recover that information from the dumped registers.
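For reference, one way such per-CPU dumps are often collected (a sketch only, assuming a Linux dom0 with sysrq enabled and the standard `xl` toolstack; debug-key behaviour can differ between Xen versions):

```shell
# Dom0 side: ask the Linux kernel for backtraces of all active CPUs
# (sysrq 'l'); the output appears in the kernel ring buffer.
echo l > /proc/sysrq-trigger
dmesg | tail -n 100

# Hypervisor side: Xen's 'd' debug key dumps per-CPU registers and
# stacks; read the result back with xl dmesg.
xl debug-keys d
xl dmesg | tail -n 100
```

Both dumps need to be taken while the machine is wedged, e.g. from a serial console.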

I'm asking because I've seen a similar problem recently and I was
rather suspecting a fifo event channel issue than the Xen scheduler,
but your data suggests it could be the scheduler after all (if it is
the same issue, of course).


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 04:54:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 04:54:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10647.28555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVp6B-000081-Vj; Fri, 23 Oct 2020 04:54:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10647.28555; Fri, 23 Oct 2020 04:54:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVp6B-00007u-Sb; Fri, 23 Oct 2020 04:54:43 +0000
Received: by outflank-mailman (input) for mailman id 10647;
 Thu, 22 Oct 2020 20:56:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r7k1=D5=gmail.com=mshivam2196@srs-us1.protection.inumbo.net>)
 id 1kVhd7-00086e-Dk
 for xen-devel@lists.xen.org; Thu, 22 Oct 2020 20:56:13 +0000
Received: from mail-io1-xd2f.google.com (unknown [2607:f8b0:4864:20::d2f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c27bbc4e-7323-4c87-802c-bf410cbb119e;
 Thu, 22 Oct 2020 20:56:11 +0000 (UTC)
Received: by mail-io1-xd2f.google.com with SMTP id u62so3137198iod.8
 for <xen-devel@lists.xen.org>; Thu, 22 Oct 2020 13:56:11 -0700 (PDT)
X-Inumbo-ID: c27bbc4e-7323-4c87-802c-bf410cbb119e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:from:date:message-id:subject:to;
        bh=O9q7hsrq3YScP3PTnwo78McfYdpUs5BdfpImePRXR6w=;
        b=h2fAMiUz/1bM0XB3aF8r5m5jsHr4gxph+aWlCXvxbILScT7zxIO3uPIGXAnsZFQQH/
         ACvWokJU5zR3gTj2lArhTM7QkEHbi6eZu5bVc8CoU1oWoBvfVBkDBeXU6jx+da82iflp
         8i3Kmrk8X1+xlBIPV2wv3vkYJLWBxkIpmADBvJYDBE/rtA47yg7ybR5lH67udKRvBMI8
         Lzewp/rapMRz/eomP/DAomSLg7Mr4gFQ35ujEl+j4xezx7u1IEy1XFc7mYx4YP8llxjo
         IZmB0mtwmNzlZK7SKciGqZ7SrXufEGHfS9ZXOxuxvqKUno+TbIMk/+bc3ON69ntwxSCG
         bXDw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=O9q7hsrq3YScP3PTnwo78McfYdpUs5BdfpImePRXR6w=;
        b=fsMfveaMOF9zHeV4oJ36LOLyKcW51BKGRMleoRNmUSlK+o+q+OwPQEUlZlr0xDyD2Z
         H63hNu/+1Xo5ktgm7HU2zBih0KEAIlDljw68Ydwy51g4Odn7OQIBvU1cv1851I0O439h
         Vvn6RgypeK8k2YSmmYvG3msw6Juht99S7obWRhN0MVySP2MkrnqzVX3CIb/WV5I7pIo7
         Pc3eQMYRQNp+pjBmjsuL5MNeMEmqE9kXefv2JG/YZ7FxTgN2yKgKHv3I2QjxCg/3jyPv
         gcs+RR9RBR2HSCf4kYDAWbCTP+qHYSyAQt5WzqmKZLuFrBjQuuWV2mzdkBQbJMQJWU8i
         1bBQ==
X-Gm-Message-State: AOAM532xZu9qzpzkumsSLtglwFh8NLUveDfJrQoH9r5Mi1BR18Bra6Ye
	BoheKQ+9BuK1EV081/Rbqb/vHfmg2TrzeFQsgr5f8EEN640=
X-Google-Smtp-Source: ABdhPJyNebN4HW/x7tsIIRIFUsPKtsIq6t7GVJhsrYW0JRm+cMXr3i6jHxlzQWGPU1LXwWX30ysgQg5unN+nXxjavb4=
X-Received: by 2002:a02:7817:: with SMTP id p23mr3193520jac.57.1603400171351;
 Thu, 22 Oct 2020 13:56:11 -0700 (PDT)
MIME-Version: 1.0
From: Shivam Mehra <mshivam2196@gmail.com>
Date: Fri, 23 Oct 2020 02:26:00 +0530
Message-ID: <CANp2S65q-w7s9gBgE7bkx9v=z9DpzCeJXGN8Jc0EKcmMERV1Ow@mail.gmail.com>
Subject: urgent
To: xen-devel@lists.xen.org
Content-Type: multipart/alternative; boundary="000000000000a5710c05b248b1b5"

--000000000000a5710c05b248b1b5
Content-Type: text/plain; charset="UTF-8"

I want to know a few things: how have you implemented the repeated
migration of VM checkpoints from primary to backup? As I understand it,
whenever the backup receives a checkpoint it sends an acknowledgement,
and when the primary receives this acknowledgement it sends an output
release signal. Can you shed some light on how these things are
implemented? In particular, how does the backup detect that it has
received a checkpoint, so that it can send the acknowledgement?
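The flow being asked about (ship checkpoint, wait for the backup's acknowledgement, only then release buffered output, as in Remus-style checkpointing) can be sketched abstractly. All names below are illustrative; this is not the actual Xen/Remus code:

```python
import queue
import threading

def primary(checkpoints, to_backup, from_backup, released):
    """Ship each checkpoint; hold output until the backup acknowledges it."""
    for cp in checkpoints:
        to_backup.put(cp)        # 1. send the checkpoint to the backup
        ack = from_backup.get()  # 2. block until the acknowledgement arrives
        assert ack == cp
        released.append(cp)      # 3. only now release the buffered output

def backup(to_backup, from_backup, n):
    """Detect arrival by blocking on the channel, then acknowledge."""
    for _ in range(n):
        cp = to_backup.get()     # "detection" is just the blocking receive
        from_backup.put(cp)      # acknowledge once the checkpoint is stored

checkpoints = [1, 2, 3]
to_b, from_b = queue.Queue(), queue.Queue()
released = []
t = threading.Thread(target=backup, args=(to_b, from_b, len(checkpoints)))
t.start()
primary(checkpoints, to_b, from_b, released)
t.join()
print(released)  # checkpoints released in order, each only after its ack
```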

--000000000000a5710c05b248b1b5--


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 06:15:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 06:15:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10691.28568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVqLT-0007yC-0I; Fri, 23 Oct 2020 06:14:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10691.28568; Fri, 23 Oct 2020 06:14:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVqLS-0007y5-SF; Fri, 23 Oct 2020 06:14:34 +0000
Received: by outflank-mailman (input) for mailman id 10691;
 Fri, 23 Oct 2020 06:14:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q3Pz=D6=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1kVqLQ-0007y0-Mc
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 06:14:32 +0000
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e282ee3b-c436-4df9-a51a-263ea46f29b9;
 Fri, 23 Oct 2020 06:14:28 +0000 (UTC)
Received: from fmsmga001.fm.intel.com ([10.253.24.23])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Oct 2020 23:14:24 -0700
Received: from orsmsx602.amr.corp.intel.com ([10.22.229.15])
 by fmsmga001.fm.intel.com with ESMTP; 22 Oct 2020 23:14:22 -0700
Received: from orsmsx602.amr.corp.intel.com (10.22.229.15) by
 ORSMSX602.amr.corp.intel.com (10.22.229.15) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Thu, 22 Oct 2020 23:14:22 -0700
Received: from ORSEDG602.ED.cps.intel.com (10.7.248.7) by
 orsmsx602.amr.corp.intel.com (10.22.229.15) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.1713.5
 via Frontend Transport; Thu, 22 Oct 2020 23:14:22 -0700
Received: from NAM11-CO1-obe.outbound.protection.outlook.com (104.47.56.176)
 by edgegateway.intel.com (134.134.137.103) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1713.5; Thu, 22 Oct 2020 23:14:19 -0700
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR1101MB2093.namprd11.prod.outlook.com (2603:10b6:301:50::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.22; Fri, 23 Oct
 2020 06:14:19 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::c88f:585f:f117:930b]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::c88f:585f:f117:930b%8]) with mapi id 15.20.3477.028; Fri, 23 Oct 2020
 06:14:19 +0000
X-Inumbo-ID: e282ee3b-c436-4df9-a51a-263ea46f29b9
IronPort-SDR: L/bTAiSO9kjSHnSN2uvOTj7xOAodvbqofjbh50JrVDNJzmySe0sG9FfOI8P1SKOjaXkD6OVF7a
 nTR9wqHXZwlg==
X-IronPort-AV: E=McAfee;i="6000,8403,9782"; a="167756536"
X-IronPort-AV: E=Sophos;i="5.77,407,1596524400"; 
   d="scan'208";a="167756536"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: fNoKPtKDIoEM1Ovyrm8WnfHIz35JN5/rAHF/wTQPGuonrvcAsJGl5QpUrbuR625UyUTUqoG1Zz
 kswpY+B4iXDA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.77,407,1596524400"; 
   d="scan'208";a="423334573"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WLVWEOAJ86jajKGH0M9WMFacu1xI5OGoviQYsb0Jz+KWmWlJ/kGzoCzWr3q0RMRgaMXGqJ/8aSddW+JjCAidK1Yj/z/sPS2PEFEYVqTvZXQZweqgLsXvCFkOLoAfWTzMa7u5h0PAMzcMnWXtQHddVzYNFJrjK6hDC0Wahx85+ZmKndv/RLFdvWVJ2bKCpbHMCrrvP+PqN5z1rAZMO6mzOjPd5VKN6naUShezf9YWS9U8mY1mbLYC7IEbIpwevXBzbws5tJtSjShF1vNxcy2gYkj82Mpz4mj0hxPkhU7gmlgC/+Ovm2imQ/65ltIoiIBfRjY3+102yD9rZJsAv452Vg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N+4hRcPBg99tlqBnoQ1lN1cY0rIsJir5Oqdog60o+EE=;
 b=YWJ9rpaQvdTyzBH0/jp8QxRU2EQO4RrYL1qVg8NUWKynWvoQwEjFqQmWz37dg574Jy78VToz+HvTFv1awHgmKix8f7F6SyQd008cuMydmdQiLjOscbG3eD6JlJqiUqmmnqCcVcQiKXClXPG1PnLnp96l4U65WkoSVD8nImunCp7LJNSjwBCMv6xYsUeafS1gX78xqrbZQskG8q9xqoImMgaLWzXx9ntCJRMi2ZDFRsXzyhv936kWyoCFFDhc3MoauYDUlkAxdwZw0BuEOmE2qPQHaaqPW3yfNjArvxQWXJjoiJgTrTzFvhLA1PywmRzTfZ2BKAb4JQLQikDiY/AwSQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=N+4hRcPBg99tlqBnoQ1lN1cY0rIsJir5Oqdog60o+EE=;
 b=w3lvqTo5oMBa4KR+BveE6jkbtECtgolqsrG6mHnX14iy1o8HqYrEsnI/lgB+ac76ce8sSq58UPyzcJ4dyhuyohollZtPEIGXQtOHG50ekz6DIy2Bqzab7IenRHglorP2q8ZDu5qEsofvBhNniUYJUPYsdGtyNZUOTXKjSBZ2ElM=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "Cooper, Andrew"
	<andrew.cooper3@citrix.com>, "Nakajima, Jun" <jun.nakajima@intel.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>
Subject: RE: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before re-entering
 guest"
Thread-Topic: [PATCH] x86/vmx: Revert "x86/VMX: sanitize rIP before
 re-entering guest"
Thread-Index: AQHWnk5UOwH5YPFK10i1vYA9BQPqg6mVtt4AgAFwXACAAS7uAIACEe2AgARKRQCAAHZigIABC12AgASUbqA=
Date: Fri, 23 Oct 2020 06:14:18 +0000
Message-ID: <MWHPR11MB1645D1ECA854D208D933392A8C1A0@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <20201009150948.31063-1-andrew.cooper3@citrix.com>
 <fb4a7a1d-95ad-0b59-7cb9-4a94c3600960@suse.com>
 <01bb2f27-4e0b-3637-e456-09eb7b9b233e@citrix.com>
 <1786f728-15c2-3877-c01a-035b11bd8504@suse.com>
 <82e64d10-50be-68ab-127b-99d205a0a768@citrix.com>
 <6430fef8-23f1-f4ef-8741-5e089eaa0df9@suse.com>
 <8b618252-4535-a8d9-efb9-0c1fba176ff4@citrix.com>
 <4eb096ab-7052-f6a9-a5ee-74d18683dde3@suse.com>
In-Reply-To: <4eb096ab-7052-f6a9-a5ee-74d18683dde3@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.147.219]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: cb90937b-ec62-4e12-1945-08d8771ae05e
x-ms-traffictypediagnostic: MWHPR1101MB2093:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <MWHPR1101MB2093FF975DD4620AA780562E8C1A0@MWHPR1101MB2093.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 6IMZFA4w6JQgjk/qcHeheIe5oIZpLJL88fVnMeX+tM7vu4Ngs0/G9jt4gO0MAFYZ5KGkI+SVZTsqE01O8F90Ij8xzv1qEU2goksW9WQk0cXAGhp2lDquF2RyNad3pmx+zrTmz0pRGljQZaDrV/eOgSyu65wvh4clbW1FANCVrbifCFYaGQBA4XxlyJuDB8fiOo3QwzLKHrjX1YrfGOKNP11hTwtl2HYaKuC5xxvE2JNHyhdt8HZ7z9brwvHHwGA+1etYBcDVa0BFrHq8qWW0b9WWDmdG9JHJ9+R/VK+khlgNjxLfLm1iZ2FYNg4Zjj+8cb+4cCKWKpPYqSKpE/XpTw==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR11MB1645.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(39860400002)(396003)(366004)(376002)(76116006)(7696005)(186003)(66446008)(83380400001)(5660300002)(66946007)(66556008)(64756008)(6506007)(71200400001)(8676002)(52536014)(4326008)(53546011)(86362001)(2906002)(316002)(8936002)(66476007)(9686003)(110136005)(26005)(33656002)(6636002)(54906003)(478600001)(55016002);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata: 1qg2MuWuGWSEpR/TQwg6ORzn+NeW0xbWQBMkmzT7MXVA85I671YmJZ7cEJN+Lg+C8AiWKoeQHN0A4TwY4aY4/5qa93GGTiXzEKz+/Kxum2PNu2ua8X92driQukK7JT2wEtwlbviKlD/XuQ7YZS52Ze+wxNfaQ0ub9Lh9hxJDM2pOpZQigu2sV305b8GqRrCyU29wWQ8SG+Mgg33DfnWNhbgTUJ+BNrDtJHcB3/AJq2yi9mI2YTMx2lWpMZacwvbr0PGEJja16cAzqIRHdD2YvEynzsMktPH+zpmSFeOWPDePMpJlgTu95TrFi9m0RDBhZAUGOfvYR8QdDGEPtGg+0WjMW8Y7pzP7+7a0kv3aPZi36g9P3w0OGJFSCU8n3Eq1F2XZ0FLHAs1wJlGdJj7zLIGGNhVNgahhKFnwUEK2N0Ma10ps3P3oqSNuQooPQVpyf97x4Eq28UxjF11pV06AYiuFCDV5SuAITxZ1C76QPmeW2uiTc8vHHee1yiEF9UymAWR3eX2UCQxQJpOKZSopc1j4GWf6MFt1PEaDV7FZ9jy7TK+8ZKPIZe0Q1OurpaHm131UMap9j3NEbeV4blpyK05/J6l/Un2JYmMJIxQuf1hJBT2myLBR2qDhK1u29mf7WYt254WlUHOoVdV4KNPylw==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1645.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cb90937b-ec62-4e12-1945-08d8771ae05e
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Oct 2020 06:14:18.8873
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ejYb0WxBOovmgeknoagAcKCejeJ3U98AZsPDM4hPm/Wm/4jM/hW2vVyBD5FNqNZxFCxoN3ZG9PEa7AXQY0DitA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR1101MB2093
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, October 20, 2020 4:10 PM
>
> On 19.10.2020 18:12, Andrew Cooper wrote:
> > On 19/10/2020 10:09, Jan Beulich wrote:
> >> On 16.10.2020 17:38, Andrew Cooper wrote:
> >>> On 15/10/2020 09:01, Jan Beulich wrote:
> >>>> On 14.10.2020 15:57, Andrew Cooper wrote:
> >>>>> Running with corrupt state is every bit an XSA as hitting a VMEntry
> >>>>> failure if it can be triggered by userspace, but the latter is safer
> >>>>> and much more obvious.
> >>>> I disagree. For CPL > 0 we don't "corrupt" guest state any more
> >>>> than reporting a #GP fault when one is going to be reported
> >>>> anyway (as long as the VM entry doesn't fail, and hence the
> >>>> guest won't get crashed). IOW this raising of #GP actually is a
> >>>> precautionary measure to _avoid_ XSAs.
> >>> It does not remove any XSAs.  It merely hides them.
> >> How that? If we convert the ability of guest user mode to crash
> >> the guest into delivery of #GP(0), how is there a hidden XSA then?
> >
> > Because userspace being able to trigger this fixup is still an XSA.
>
> How do you know without a specific case at hand? It may well be
> that all that's impacted is guest user space, in which case I
> don't see why there would need to be an XSA. It's still a bug
> then, sure.
>
> >>>>> It was the appropriate security fix (give or take the functional bug
> >>>>> in it) at the time, given the complexity of retrofitting zero length
> >>>>> instruction fetches to the emulator.
> >>>>>
> >>>>> However, it is one of a very long list of guest-state-induced VMEntry
> >>>>> failures, with non-trivial logic which we assert will pass, on a
> >>>>> fastpath, where hardware also performs the same checks and we already
> >>>>> have a runtime safe way of dealing with errors.  (Hence not actually
> >>>>> using ASSERT_UNREACHABLE() here.)
> >>>> "Runtime safe" as far as Xen is concerned, I take it. This isn't safe
> >>>> for the guest at all, as vmx_failed_vmentry() results in an
> >>>> unconditional domain_crash().
> >>> Any VMEntry failure is a bug in Xen.  If userspace can trigger it, it
> >>> is an XSA, *irrespective* of whether we crash the domain then and
> >>> there, or whether we let it try and limp on with corrupted state.
> >> Allowing the guest to continue with corrupted state is not a
> >> useful thing to do, I agree. However, what falls under
> >> "corrupted" seems to be different for you and me. I'd not call
> >> delivery of #GP "corruption" in any way.
> >
> > I can only repeat my previous statement:
> >
> >> There are legal states where RIP is 0x0000800000000000 and #GP is the
> >> wrong thing to do.
> >
> > Blindly raising #GP is not always the right thing to do.
>
> Again - we're in agreement about "blindly". Let's be less blind.
>
> >> The primary goal ought
> >> to be that we don't corrupt the guest kernel view of the world.
> >> It may then have the opportunity to kill the offending user
> >> mode process.
> >
> > By the time we have hit this case, all bets are off, because Xen *is*
> > malfunctioning.  We have no idea if kernel context is still intact.  You
> > don't even know that current user context is the correct offending
> > context to clobber, and might be creating a user=>user DoS vulnerability.
> >
> > We definitely have an XSA to find and fix, and we can either make it
> > very obvious and likely to be reported, or hidden and liable to go
> > unnoticed for a long period of time.
>
> Why would it go unnoticed when we log the incident? I very much
> hope people inspect their logs at least every once in a while ...
>
> And as per above - I disagree with your use of "definitely" here.
> We have a bug to analyze and fix, yes. Whether it's an XSA-worthy
> one isn't known ahead of time, unless we crash the guest in such
> a case.
>
> In any event I think it's about time that the VMX maintainers
> voice their views here, as they're the ones to approve of
> whichever change we end up with. Kevin, Jun?

Honestly speaking, both of your options make some sense, and
I don't think there is a perfect answer here. Personally I'm more
aligned with Jan's point on preventing guest user space from
crashing the whole domain. But let's also hear others' opinions,
as I believe this dilemma might be seen in other places too and
thus may need a general consensus in the Xen community.

Thanks,
Kevin


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 06:17:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 06:17:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10697.28580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVqO6-00086p-DU; Fri, 23 Oct 2020 06:17:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10697.28580; Fri, 23 Oct 2020 06:17:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVqO6-00086i-AU; Fri, 23 Oct 2020 06:17:18 +0000
Received: by outflank-mailman (input) for mailman id 10697;
 Fri, 23 Oct 2020 06:17:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVqO5-00086d-EX
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 06:17:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4383c69-04b4-4fbc-b94e-3d3be06cd437;
 Fri, 23 Oct 2020 06:17:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B272DACE6;
 Fri, 23 Oct 2020 06:17:15 +0000 (UTC)
X-Inumbo-ID: a4383c69-04b4-4fbc-b94e-3d3be06cd437
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603433835;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hE8tQXWyEpxc+GIHZgnSHhMDuFUIkLvW+R9X8cPyhvE=;
	b=tMC8eupq2YSFUWa5FE2BWNjgbjDvpU5X6pKQYOh4Yr5VxssM7DyjpyPTT0Zdt1JJaOYSKs
	6N0a1gc+8Coy6OMhobQETL8DVajKheXdbdquDDyW12jPUQLsJGvA3h24UealHhUGo2kvn1
	9d5YrcpANb5O1zX7dVuwhBPaGud8eUo=
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201021221253.GA73207@mattapan.m5p.com>
 <a960dd45-2867-5ef6-970c-952c03aa8cef@suse.com>
 <20201022171328.GA81455@mattapan.m5p.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <da453eee-0736-b71d-d427-2a6c429e3aa4@suse.com>
Date: Fri, 23 Oct 2020 08:17:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022171328.GA81455@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 19:13, Elliott Mitchell wrote:
> On Thu, Oct 22, 2020 at 09:42:17AM +0200, Jan Beulich wrote:
>> On 22.10.2020 00:12, Elliott Mitchell wrote:
>>> --- a/xen/arch/arm/acpi/domain_build.c
>>> +++ b/xen/arch/arm/acpi/domain_build.c
>>> @@ -42,17 +42,18 @@ static int __init acpi_iomem_deny_access(struct domain *d)
>>>      status = acpi_get_table(ACPI_SIG_SPCR, 0,
>>>                              (struct acpi_table_header **)&spcr);
>>>  
>>> -    if ( ACPI_FAILURE(status) )
>>> +    if ( ACPI_SUCCESS(status) )
>>>      {
>>> -        printk("Failed to get SPCR table\n");
>>> -        return -EINVAL;
>>> +        mfn = spcr->serial_port.address >> PAGE_SHIFT;
>>> +        /* Deny MMIO access for UART */
>>> +        rc = iomem_deny_access(d, mfn, mfn + 1);
>>> +        if ( rc )
>>> +            return rc;
>>> +    }
>>> +    else
>>> +    {
>>> +        printk("Failed to get SPCR table, Xen console may be unavailable\n");
>>>      }
>>
>> Nit: While I see you've got Stefano's R-b already, in Xen we typically
>> omit the braces here.
> 
> Personally, I prefer that myself, but was unsure of the preference here.
> I've seen multiple projects which *really* dislike having brackets
> for some clauses but not others (i.e. they want either all clauses with
> braces or all without, instead of braces only where required).
> 
> I sent what I thought was the more often used format.  (I also like tabs,
> and dislike having so many spaces; alas my preferences are apparently
> uncommon)

"More often used" doesn't matter when there's an explicit statement
about this in ./CODING_STYLE: "Braces should be omitted for blocks
with a single statement." Yes, there are projects requiring all
if/else-if belonging together to consistently use or not use braces.
But there's explicitly no such statement in our doc. (Along these
lines, dislike of spaces [and favoring tabs] also doesn't matter, as
again the doc is explicit about it.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 06:28:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 06:28:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10703.28591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVqYk-0000m2-F6; Fri, 23 Oct 2020 06:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10703.28591; Fri, 23 Oct 2020 06:28:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVqYk-0000lv-C9; Fri, 23 Oct 2020 06:28:18 +0000
Received: by outflank-mailman (input) for mailman id 10703;
 Fri, 23 Oct 2020 06:28:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q3Pz=D6=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1kVqYj-0000lq-9x
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 06:28:17 +0000
Received: from mga06.intel.com (unknown [134.134.136.31])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b81548a9-9521-4773-87db-961948c2e7d6;
 Fri, 23 Oct 2020 06:28:13 +0000 (UTC)
Received: from orsmga007.jf.intel.com ([10.7.209.58])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Oct 2020 23:28:10 -0700
Received: from orsmsx601.amr.corp.intel.com ([10.22.229.14])
 by orsmga007.jf.intel.com with ESMTP; 22 Oct 2020 23:28:10 -0700
Received: from orsmsx608.amr.corp.intel.com (10.22.229.21) by
 ORSMSX601.amr.corp.intel.com (10.22.229.14) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Thu, 22 Oct 2020 23:28:09 -0700
Received: from orsmsx601.amr.corp.intel.com (10.22.229.14) by
 ORSMSX608.amr.corp.intel.com (10.22.229.21) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Thu, 22 Oct 2020 23:28:09 -0700
Received: from ORSEDG601.ED.cps.intel.com (10.7.248.6) by
 orsmsx601.amr.corp.intel.com (10.22.229.14) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.1713.5
 via Frontend Transport; Thu, 22 Oct 2020 23:28:09 -0700
Received: from NAM11-CO1-obe.outbound.protection.outlook.com (104.47.56.172)
 by edgegateway.intel.com (134.134.137.102) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.1713.5; Thu, 22 Oct 2020 23:28:07 -0700
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR11MB1711.namprd11.prod.outlook.com (2603:10b6:300:27::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Fri, 23 Oct
 2020 06:28:02 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::c88f:585f:f117:930b]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::c88f:585f:f117:930b%8]) with mapi id 15.20.3477.028; Fri, 23 Oct 2020
 06:28:02 +0000
X-Inumbo-ID: b81548a9-9521-4773-87db-961948c2e7d6
IronPort-SDR: qgP7VG7/f8BrYJsRkYUIP35PVdvsYFX4XxfuKxMJtfd9qvpYZftsQhkVFiakONLmiG95O8q7eK
 IxLGF49REC4w==
X-IronPort-AV: E=McAfee;i="6000,8403,9782"; a="229269346"
X-IronPort-AV: E=Sophos;i="5.77,407,1596524400"; 
   d="scan'208";a="229269346"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
IronPort-SDR: lf498Q5QhkfcmkxGIUFxXi66Ge+t9y1l8e5Y6TgWOkBF9qDl0NVboAtNpz1GwqaAHE53v2a/OR
 7hlgMRvNxYHA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.77,407,1596524400"; 
   d="scan'208";a="360131184"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=c1OQ72WkfJFzQKu5sXnSD2bf2L0ISTvgHZhOcBM5NUZNIvrgsTbbBu/4sJS9N5RTqqtzEA5tnn6nyN/UZfRnLbE1ASaC+dPUNDWJYKpF3303+67hTD4QrheRt7WDM8jB8rwWRMPLsmdnEqn+7a51uppZgG/PGNZFc/23NQeswZCVOZSxhBYBgRC1SxseYstjGMyri02LX3HXTbV9/MPBLn9/Z9pHOzgtuUh/TgDvTvx4EgG5A24d+fkyNQwcWAjir86WJVVWtN1lJ+8LpInvA+Cdh72v4pEUG+4dQ6NIlcmTZ/hLqZU16FRxKyai4Pd4wLb+ZRVriK1wy21oYZQ4Zw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f8XxxAvJfuAo9wQvDsy7omotezTw048hqHDi1GsBNPI=;
 b=VSvQRoE6rsRX8wLCwgBihczCCQVLQA+JjN15QuGAMJiG1dSB8mvKyz8yPlqyH0oVzYRUR/qf1vlhBuGcTpPD4Pbes0bBrJ8qvqzsH4hSC/JdF9QEX7j8ddes3tiM7qh3ygF4ha55A5ESpIWI+kTfDCTTOn48vw/QRHVZPE4MSDPVlS4faTHjvvQveqgvjkJH5Kt0eXcApXPI/CGYoIhofblH/8rXN1DQsQE8wkv4MVkNfEfCjyO/SW8MXQVp1VdWSna81sRWX5yH0AI04eQf9qbHIfBnf0++Fs0vNQHOCLpmrifHJ8LLtN1vWFJdEIufWEe12FU/NW+TQPH6vzKeiw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com;
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f8XxxAvJfuAo9wQvDsy7omotezTw048hqHDi1GsBNPI=;
 b=xq2QVaFDDoOU0A4mZCJTMaCNPcv1cCeH0HhA5z3cPwfWjrsS0pqpnqM34wLmwOflob72RkpO9BFX62Z3OwImcnLL+4IWDbcSw2EL6RykM/A0X+qeUhYJYkU+LAa5PLEd0EZ91o6ZW7KYi7kTceRxJkXxZm94eTbvTcfVkWOau6A=
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "Cooper, Andrew" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, "Nakajima, Jun" <jun.nakajima@intel.com>, "Paul
 Durrant" <paul@xen.org>
Subject: RE: [PATCH] IOMMU: avoid double flushing in shared page table case
Thread-Topic: [PATCH] IOMMU: avoid double flushing in shared page table case
Thread-Index: AQHWpuhdf0WK+guU3Ei2udEaPDozGamkvWZg
Date: Fri, 23 Oct 2020 06:28:02 +0000
Message-ID: <MWHPR11MB1645E262F83FEE9E2BDC9E8D8C1A0@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <e54f4fbb-92e2-9785-8648-596c615213a2@suse.com>
In-Reply-To: <e54f4fbb-92e2-9785-8648-596c615213a2@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
dlp-version: 11.5.1.3
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.198.147.219]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f1643216-2c13-4392-c6a7-08d8771ccb43
x-ms-traffictypediagnostic: MWHPR11MB1711:
x-ld-processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <MWHPR11MB1711C0DF10E99A3595A7884E8C1A0@MWHPR11MB1711.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: Hvz8lv4n7kMSJv4MMkxQnVI+givG5paqjVIgh/HiI++vwt7/oDQZf3VG/45wVD406ccisAbaWdtCl9ED99wuHGQKqneurfGmX60vu0lG1dXXYvy2apHR099+sP41E/8nMZMCPZTXg4bEEUDFB0LkTfovhjMAGYjmJS5Lo2AdLnGG/4zK0ZA02qD+F1oSz2e2nOkq0uz/UVdi14Rkd7balwzA9MFn6/D9zyV5EyEuw8Y/DLXkBZ1S+Ez9RPw/1B1IrrmA+H6MQBbOgitXUgmNwso3tNRXh0OZuEqZiF3KrZUJeODT43yv1Dhdqa1kf9A8
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:MWHPR11MB1645.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(396003)(136003)(39860400002)(346002)(71200400001)(9686003)(83380400001)(64756008)(7696005)(66476007)(2906002)(66556008)(66946007)(76116006)(55016002)(66446008)(26005)(5660300002)(8936002)(186003)(33656002)(478600001)(6506007)(8676002)(54906003)(52536014)(4326008)(110136005)(316002)(86362001);DIR:OUT;SFP:1102;
x-ms-exchange-antispam-messagedata: d8QHoHEyn3FbzLrEmsq8bRcUsbE4Ld8aRlnZ+Id4TvN4fBUVRXv/LHDrcpsjqZLydXfwrJ5+aLHELRySYabtBphb75qebkRNAhxtslCZJtjjDCPWdO0L2/XmJkaY3gsiylzko2gWN/zE8i1Fc1Vtd1q5CjxsHI/ajKE3KxfqNqOG2VlTyLMKjN4BWaIzSZZmu9vsK11xbDt2pXsxWMWia2Vzy8ifTkVhFenWDNayBpQNlzCSYStcvmO9FM8EtMZz7W2v5KhACAfqCSgvg8z5Lg/+8grmxYq9oIW5tn3GqWwxYVX2xDQ+zOTrsaDHvyT+M9kJvWkn+HKFG52aRZXQeni0YN3MLLG1XOAXpEEvLv1Q0W+8YXNMxwr3NAKLgOyO+ZUoiyQ7TEKFZplRv5TSzjgYY0gtdI52phYl3RJHGeqRBlMWHOMnooe3MzesH00C0fsdCbDuz/HWrxaJ9EbdB7SREgdYPwTFCEKU9muLnKzK/qfYgrg5tHxEbHrDNBdblW5GL5gFW5l2GJ7w7H6SWo6O+ttVMkt/AOyKcbB88JWO2w+OfqUE/AF0ZF2ECtTj6wSFCkPwelVZbbPWQOvnmpzyTAW/dD4ajTiFf4HJVfR77P8CIH3h6Tf/bMro2m2/YwVgXua/qTGOEujssOGCBw==
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: MWHPR11MB1645.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f1643216-2c13-4392-c6a7-08d8771ccb43
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Oct 2020 06:28:02.4831
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: s/bwPwXdPK4PLMW685EFm7LgTk/hI+eHiERy7q1/hphXFleW0N6/MIuL+Stl4anwyHVGntCdtMHFK4wDUSlSLQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB1711
X-OriginatorOrg: intel.com

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, October 20, 2020 9:53 PM
>
> While the flush coalescing optimization has been helping the non-shared
> case, it has actually led to double flushes in the shared case (which
> ought to be the more common one nowadays at least): Once from
> *_set_entry() and a second time up the call tree from wherever the
> overriding flag gets played with. In alignment with XSA-346 suppress
> flushing in this case.
>
> Similarly avoid excessive setting of IOMMU_FLUSHF_added on the batched
> flushes: "idx" hasn't been added a new mapping for.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
> TBD: The Arm part really is just for completeness (and hence could also
>      be dropped) - the affected mapping spaces aren't currently
>      supported there.
>
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -1045,7 +1045,7 @@ static int __p2m_set_entry(struct p2m_do
>          p2m->lowest_mapped_gfn = gfn_min(p2m->lowest_mapped_gfn, sgfn);
>      }
>
> -    if ( is_iommu_enabled(p2m->domain) &&
> +    if ( is_iommu_enabled(p2m->domain) && !this_cpu(iommu_dont_flush_iotlb) &&
>           (lpae_is_valid(orig_pte) || lpae_is_valid(*entry)) )
>      {
>          unsigned int flush_flags = 0;
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -842,7 +842,7 @@ out:
>      if ( rc == 0 && p2m_is_hostp2m(p2m) &&
>           need_modify_vtd_table )
>      {
> -        if ( iommu_use_hap_pt(d) )
> +        if ( iommu_use_hap_pt(d) && !this_cpu(iommu_dont_flush_iotlb) )
>              rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
>                                     (iommu_flags ? IOMMU_FLUSHF_added : 0) |
>                                     (vtd_pte_present ? IOMMU_FLUSHF_modified
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.
Yw0KPiBAQCAtODcwLDcgKzg3MCw3IEBAIGludCB4ZW5tZW1fYWRkX3RvX3BoeXNtYXAoc3RydWN0
IGRvbWFpbg0KPiAgICAgICAgICB0aGlzX2NwdShpb21tdV9kb250X2ZsdXNoX2lvdGxiKSA9IDA7
DQo+IA0KPiAgICAgICAgICByZXQgPSBpb21tdV9pb3RsYl9mbHVzaChkLCBfZGZuKHhhdHAtPmlk
eCAtIGRvbmUpLCBkb25lLA0KPiAtICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBJT01N
VV9GTFVTSEZfYWRkZWQgfCBJT01NVV9GTFVTSEZfbW9kaWZpZWQpOw0KPiArICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICBJT01NVV9GTFVTSEZfbW9kaWZpZWQpOw0KPiAgICAgICAgICBp
ZiAoIHVubGlrZWx5KHJldCkgJiYgcmMgPj0gMCApDQo+ICAgICAgICAgICAgICByYyA9IHJldDsN
Cj4gDQo=


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 06:33:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 06:33:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10717.28604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVqdh-0001fX-7y; Fri, 23 Oct 2020 06:33:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10717.28604; Fri, 23 Oct 2020 06:33:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVqdh-0001fQ-3l; Fri, 23 Oct 2020 06:33:25 +0000
Received: by outflank-mailman (input) for mailman id 10717;
 Fri, 23 Oct 2020 06:33:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yJbv=D6=casper.srs.infradead.org=batv+ae109258c1d10f479368+6270+infradead.org+hch@srs-us1.protection.inumbo.net>)
 id 1kVqdd-0001fL-Vk
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 06:33:23 +0000
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3645c3ea-2fe3-4bb8-afb6-0e351b6ed1ed;
 Fri, 23 Oct 2020 06:33:18 +0000 (UTC)
Received: from [2001:4bb8:18c:20bd:c70:4a89:bc61:3] (helo=localhost)
 by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
 id 1kVqdU-0005Dv-O2; Fri, 23 Oct 2020 06:33:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=yJbv=D6=casper.srs.infradead.org=batv+ae109258c1d10f479368+6270+infradead.org+hch@srs-us1.protection.inumbo.net>)
	id 1kVqdd-0001fL-Vk
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 06:33:23 +0000
X-Inumbo-ID: 3645c3ea-2fe3-4bb8-afb6-0e351b6ed1ed
Received: from casper.infradead.org (unknown [2001:8b0:10b:1236::1])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3645c3ea-2fe3-4bb8-afb6-0e351b6ed1ed;
	Fri, 23 Oct 2020 06:33:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=casper.20170209; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:Content-ID:
	Content-Description:In-Reply-To:References;
	bh=VgaUDy/MYBhcMGNLTA5dcYkNxjEeOYucsAXgukb+9OE=; b=F21OlX9RTzKj2RyK9S1jOONSVX
	fBGHNAfoEYPDghf73ahvw01VvwR8PBi1lgexR2AR9oXWJ8uVIwb97yh6lOectRl76KlmCZaRD7CZB
	Ne0xWy9GBkkZrB0i0jyH0wgEx7a71zWxL5vUy7EJQ0Y/eO9C5hS6Ecpx9DEFTJOx+t7KW8Cf+A9wz
	KjFNSkhJly6lhDd+duxQgCu1UbcPA+t2i5M9NdOsDKaAI4qr1LABguhWCwhXX8rdvsbJJE/VUbkbG
	mIkn/UOMzSrskutaE903B6fgNIIN4QFDY3iMmZhu6C/TArJvku+4c7lzLjoRYMzNsxXu8Tc01GkSV
	d+cOoHsQ==;
Received: from [2001:4bb8:18c:20bd:c70:4a89:bc61:3] (helo=localhost)
	by casper.infradead.org with esmtpsa (Exim 4.92.3 #3 (Red Hat Linux))
	id 1kVqdU-0005Dv-O2; Fri, 23 Oct 2020 06:33:13 +0000
From: Christoph Hellwig <hch@lst.de>
To: konrad.wilk@oracle.com
Cc: iommu@lists.linux-foundation.org,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH for-5.10] swiotlb: remove the tbl_dma_addr argument to swiotlb_tbl_map_single
Date: Fri, 23 Oct 2020 08:33:09 +0200
Message-Id: <20201023063309.3472987-1-hch@lst.de>
X-Mailer: git-send-email 2.28.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by casper.infradead.org. See http://www.infradead.org/rpr.html

The tbl_dma_addr argument is used to check the DMA boundary for the
allocations, and thus needs to be a dma_addr_t.  swiotlb-xen instead
passed a physical address, which could lead to incorrect results for
strange offsets.  Fix this by removing the parameter entirely and
hard-coding the DMA address for io_tlb_start instead.

Fixes: 91ffe4ad534a ("swiotlb-xen: introduce phys_to_dma/dma_to_phys translations")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 drivers/iommu/intel/iommu.c |  5 ++---
 drivers/xen/swiotlb-xen.c   |  3 +--
 include/linux/swiotlb.h     | 10 +++-------
 kernel/dma/swiotlb.c        | 16 ++++++----------
 4 files changed, 12 insertions(+), 22 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 8651f6d4dfa032..6b560e6f193096 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -3815,9 +3815,8 @@ bounce_map_single(struct device *dev, phys_addr_t paddr, size_t size,
 	 * page aligned, we don't need to use a bounce page.
 	 */
 	if (!IS_ALIGNED(paddr | size, VTD_PAGE_SIZE)) {
-		tlb_addr = swiotlb_tbl_map_single(dev,
-				phys_to_dma_unencrypted(dev, io_tlb_start),
-				paddr, size, aligned_size, dir, attrs);
+		tlb_addr = swiotlb_tbl_map_single(dev, paddr, size,
+				aligned_size, dir, attrs);
 		if (tlb_addr == DMA_MAPPING_ERROR) {
 			goto swiotlb_error;
 		} else {
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 71ce1b7a23d1cc..2b385c1b4a99cb 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -395,8 +395,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 */
 	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
 
-	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
-				     phys, size, size, dir, attrs);
+	map = swiotlb_tbl_map_single(dev, phys, size, size, dir, attrs);
 	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 513913ff748626..3bb72266a75a1d 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -45,13 +45,9 @@ enum dma_sync_target {
 	SYNC_FOR_DEVICE = 1,
 };
 
-extern phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
-					  dma_addr_t tbl_dma_addr,
-					  phys_addr_t phys,
-					  size_t mapping_size,
-					  size_t alloc_size,
-					  enum dma_data_direction dir,
-					  unsigned long attrs);
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir, unsigned long attrs);
 
 extern void swiotlb_tbl_unmap_single(struct device *hwdev,
 				     phys_addr_t tlb_addr,
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b4eea0abc3f002..92e2f54f24c01b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -441,14 +441,11 @@ static void swiotlb_bounce(phys_addr_t orig_addr, phys_addr_t tlb_addr,
 	}
 }
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
-				   dma_addr_t tbl_dma_addr,
-				   phys_addr_t orig_addr,
-				   size_t mapping_size,
-				   size_t alloc_size,
-				   enum dma_data_direction dir,
-				   unsigned long attrs)
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t orig_addr,
+		size_t mapping_size, size_t alloc_size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
+	dma_addr_t tbl_dma_addr = phys_to_dma_unencrypted(hwdev, io_tlb_start);
 	unsigned long flags;
 	phys_addr_t tlb_addr;
 	unsigned int nslots, stride, index, wrap;
@@ -667,9 +664,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 	trace_swiotlb_bounced(dev, phys_to_dma(dev, paddr), size,
 			      swiotlb_force);
 
-	swiotlb_addr = swiotlb_tbl_map_single(dev,
-			phys_to_dma_unencrypted(dev, io_tlb_start),
-			paddr, size, size, dir, attrs);
+	swiotlb_addr = swiotlb_tbl_map_single(dev, paddr, size, size, dir,
+			attrs);
 	if (swiotlb_addr == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
 
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 07:10:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 07:10:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10735.28616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVrD4-0004mr-1l; Fri, 23 Oct 2020 07:09:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10735.28616; Fri, 23 Oct 2020 07:09:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVrD3-0004mk-Un; Fri, 23 Oct 2020 07:09:57 +0000
Received: by outflank-mailman (input) for mailman id 10735;
 Fri, 23 Oct 2020 07:09:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8WB0=D6=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kVrD1-0004mf-SU
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 07:09:55 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 21626c04-6746-4d8c-901d-3e474dfe8c16;
 Fri, 23 Oct 2020 07:09:55 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-464-XE4oO9EOOLucvtmUytKZRQ-1; Fri, 23 Oct 2020 03:09:53 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C5EB7425F6;
 Fri, 23 Oct 2020 07:09:51 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-123.ams2.redhat.com [10.36.112.123])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 328A760C04;
 Fri, 23 Oct 2020 07:09:48 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8WB0=D6=redhat.com=thuth@srs-us1.protection.inumbo.net>)
	id 1kVrD1-0004mf-SU
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 07:09:55 +0000
X-Inumbo-ID: 21626c04-6746-4d8c-901d-3e474dfe8c16
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 21626c04-6746-4d8c-901d-3e474dfe8c16;
	Fri, 23 Oct 2020 07:09:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603436994;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IEF8a+t6dhaHnwPQStTb7j17q3j6NEA7UkQplxtsOTw=;
	b=dD7wUJNkgPjWhEmlzeXAq2epluMt0IW9hpUjASxKC6yuWXIhpPOOZFXHjwvbcn1mwvxkE1
	cIoIFgtScyWlTF+lmme8K1YzAUxVbfRmc4+3kQPyIadYR8WJ24ULFXSppnRidlGSNRK5GO
	va0fvrbaBBTbadKTfcFbBiyOMVi9vQo=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-464-XE4oO9EOOLucvtmUytKZRQ-1; Fri, 23 Oct 2020 03:09:53 -0400
X-MC-Unique: XE4oO9EOOLucvtmUytKZRQ-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com [10.5.11.12])
	(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
	(No client certificate requested)
	by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C5EB7425F6;
	Fri, 23 Oct 2020 07:09:51 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-123.ams2.redhat.com [10.36.112.123])
	by smtp.corp.redhat.com (Postfix) with ESMTP id 328A760C04;
	Fri, 23 Oct 2020 07:09:48 +0000 (UTC)
Subject: Re: [PATCH v2 0/3] Add Xen CpusAccel
To: Paolo Bonzini <pbonzini@redhat.com>, Jason Andryuk <jandryuk@gmail.com>
Cc: Laurent Vivier <lvivier@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 QEMU <qemu-devel@nongnu.org>, Claudio Fontana <cfontana@suse.de>,
 Anthony Perard <anthony.perard@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <20201013140511.5681-1-jandryuk@gmail.com>
 <ddb5c9c2-c206-28d6-2d9d-7954e7022c23@redhat.com>
 <CAKf6xpvpuG1jVdf3+heXzHFd_kc5kVHYdJgC+8iazFLciqOMZw@mail.gmail.com>
 <d9f23eee-c0af-d2dd-9b9d-f0255fc8e3d1@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <1927b32e-7919-5061-0285-d9c7184d0bae@redhat.com>
Date: Fri, 23 Oct 2020 09:09:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <d9f23eee-c0af-d2dd-9b9d-f0255fc8e3d1@redhat.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=thuth@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22/10/2020 17.29, Paolo Bonzini wrote:
> On 22/10/20 17:17, Jason Andryuk wrote:
>> On Tue, Oct 13, 2020 at 1:16 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>
>>> On 13/10/20 16:05, Jason Andryuk wrote:
>>>> Xen was left behind when CpusAccel became mandatory and fails the assert
>>>> in qemu_init_vcpu().  It relied on the same dummy cpu threads as qtest.
>>>> Move the qtest cpu functions to a common location and reuse them for
>>>> Xen.
>>>>
>>>> v2:
>>>>   New patch "accel: Remove _WIN32 ifdef from qtest-cpus.c"
>>>>   Use accel/dummy-cpus.c for filename
>>>>   Put prototype in include/sysemu/cpus.h
>>>>
>>>> Jason Andryuk (3):
>>>>   accel: Remove _WIN32 ifdef from qtest-cpus.c
>>>>   accel: move qtest CpusAccel functions to a common location
>>>>   accel: Add xen CpusAccel using dummy-cpus
>>>>
>>>>  accel/{qtest/qtest-cpus.c => dummy-cpus.c} | 27 ++++------------------
>>>>  accel/meson.build                          |  8 +++++++
>>>>  accel/qtest/meson.build                    |  1 -
>>>>  accel/qtest/qtest-cpus.h                   | 17 --------------
>>>>  accel/qtest/qtest.c                        |  5 +++-
>>>>  accel/xen/xen-all.c                        |  8 +++++++
>>>>  include/sysemu/cpus.h                      |  3 +++
>>>>  7 files changed, 27 insertions(+), 42 deletions(-)
>>>>  rename accel/{qtest/qtest-cpus.c => dummy-cpus.c} (71%)
>>>>  delete mode 100644 accel/qtest/qtest-cpus.h
>>>>
>>>
>>> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
>>
>> Thank you, Paolo.  Also Anthony Acked and Claudio Reviewed patch 3.
>> How can we get this into the tree?
> 
> I think Anthony should send a pull request?

Since Anthony acked patch 3, I think I can also take it through the qtest tree.

 Thomas




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 07:11:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 07:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10737.28628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVrEC-0005YA-CJ; Fri, 23 Oct 2020 07:11:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10737.28628; Fri, 23 Oct 2020 07:11:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVrEC-0005Y3-90; Fri, 23 Oct 2020 07:11:08 +0000
Received: by outflank-mailman (input) for mailman id 10737;
 Fri, 23 Oct 2020 07:11:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVrEB-0005Xw-4S
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 07:11:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ecefa0d8-755c-445d-90dd-1f43bac4a5a6;
 Fri, 23 Oct 2020 07:11:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93C5BB246;
 Fri, 23 Oct 2020 07:11:02 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kVrEB-0005Xw-4S
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 07:11:07 +0000
X-Inumbo-ID: ecefa0d8-755c-445d-90dd-1f43bac4a5a6
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ecefa0d8-755c-445d-90dd-1f43bac4a5a6;
	Fri, 23 Oct 2020 07:11:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603437062;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dZAjmlGDA3SvkfoxF5aRJClLORKh8gMMBmV6zwTvsRM=;
	b=kIW3BijKz/tMkBL+GuyGDgu9DU8KZmAwYz2RjuNjvC3QwHxk6snZPwriL4mV6GrOGb+TU3
	Cs/ExDOrRuvKer9CgL09STasS84Eo2/KYlokYeyWTGLgQyXozAYfnNy28Sk+4MYcvNZWfg
	DxP9QzwOxSK32kyI3FUY6yR4ArYwQ8I=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 93C5BB246;
	Fri, 23 Oct 2020 07:11:02 +0000 (UTC)
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201021221253.GA73207@mattapan.m5p.com>
 <930267bd-5442-3ff0-bb5b-1ed8e2ebe37c@xen.org>
 <20201022191840.GB81455@mattapan.m5p.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3e9ca63f-0202-7312-a095-48bf59e10a70@suse.com>
Date: Fri, 23 Oct 2020 09:11:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201022191840.GB81455@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 21:18, Elliott Mitchell wrote:
> On Thu, Oct 22, 2020 at 07:44:26PM +0100, Julien Grall wrote:
>> Thank you for the patch. FIY I tweak a bit the commit title before 
>> committing.
>>
>> The title is now: "xen/arm: acpi: Don't fail it SPCR table is absent".
> 
> Perhaps "xen/arm: acpi: Don't fail on absent SPCR table"?
> 
> What you're suggesting doesn't read well to me.

Perhaps Julien meant "if" instead of "it", i.e. a simple typo?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 07:24:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 07:24:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10741.28639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVrQa-0006ke-H5; Fri, 23 Oct 2020 07:23:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10741.28639; Fri, 23 Oct 2020 07:23:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVrQa-0006kX-E9; Fri, 23 Oct 2020 07:23:56 +0000
Received: by outflank-mailman (input) for mailman id 10741;
 Fri, 23 Oct 2020 07:23:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVrQZ-0006jo-5U
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 07:23:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3572783e-1b9d-4ce2-b6d9-c24579dccee8;
 Fri, 23 Oct 2020 07:23:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 303B9ACC4;
 Fri, 23 Oct 2020 07:23:53 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kVrQZ-0006jo-5U
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 07:23:55 +0000
X-Inumbo-ID: 3572783e-1b9d-4ce2-b6d9-c24579dccee8
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3572783e-1b9d-4ce2-b6d9-c24579dccee8;
	Fri, 23 Oct 2020 07:23:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603437833;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XKi9lIqkUcYox5yJhEdLIRNbCvTjIhSfcdCdbLv9Dxs=;
	b=F3++9Q6CAXssj7ySAz86mgcxN2wY4jEUZHrE9a6jLaZc/pkcomxYxeVhpdyhZlqGQDT6X/
	7tE2xEngsLZibVmfPPX01Ou+ytTqW1U7u+BfPUd72WbJTHMLIxrNmhWaSj54vaCG7i6hV7
	6ykzzRAFTNZPiBruXQ9VEl10J9gfTa4=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 303B9ACC4;
	Fri, 23 Oct 2020 07:23:53 +0000 (UTC)
Subject: Re: [PATCH] xen/arm: ACPI: Remove EXPERT dependancy, default on for
 ARM64
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201022014310.GA70872@mattapan.m5p.com>
 <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
 <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
 <20201023033506.GC83870@mattapan.m5p.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e9b6138a-e42c-a8d8-e07f-243ac6bc8d23@suse.com>
Date: Fri, 23 Oct 2020 09:23:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201023033506.GC83870@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.10.2020 05:35, Elliott Mitchell wrote:
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -32,13 +32,18 @@ menu "Architecture Features"
>  source "arch/Kconfig"
>  
>  config ACPI
> -	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
> +	bool "ACPI (Advanced Configuration and Power Interface) Support"
>  	depends on ARM_64
> +	default y if ARM_64

The "if" is pointless with the "depends on".
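
[Editorial sketch of the point being made: with `depends on ARM_64` already guarding the option, the `default` needs no qualifier.]

```kconfig
config ACPI
	bool "ACPI (Advanced Configuration and Power Interface) Support"
	depends on ARM_64
	default y
```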

> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -254,6 +254,15 @@ int __init acpi_boot_table_init(void)
>                                     dt_scan_depth1_nodes, NULL) )
>          goto disable;
>  
> +    printk("\n"
> +"*************************************************************************\n"
> +"*    WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING    *\n"
> +"*                                                                       *\n"
> +"* Xen-ARM ACPI support is EXPERIMENTAL.  It is presently (October 2020) *\n"
> +"* recommended you boot your system in device-tree mode if you can.      *\n"
> +"*************************************************************************\n"
> +            "\n");
> +

We have an abstraction for such warnings, causing them to appear
later in the boot process and then consistently all in one place
(both increasing, as we believe, the chances of being noticed):
warning_add(). There's a delay accompanied with this, so I think
you will want to also have a command line option allowing to
silence this warning. "acpi=on" or "acpi=force", as available on
x86 and (possibly wrongly right now) not documented as
x86-specific, may be (re-)usable, i.e. to avoid having to
introduce some entirely new option.

Also a few formal nits: The subject tag should have been [PATCH v2],
there should have been a short revision log outside of the commit
message area, and new patch versions would better start their own
new threads than being in-reply-to the earlier version's one.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:02:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:02:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10753.28651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVs25-0002Xh-TM; Fri, 23 Oct 2020 08:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10753.28651; Fri, 23 Oct 2020 08:02:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVs25-0002Xa-QO; Fri, 23 Oct 2020 08:02:41 +0000
Received: by outflank-mailman (input) for mailman id 10753;
 Fri, 23 Oct 2020 08:02:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVs25-0002XV-Cj
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:02:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8668d801-06ee-4a06-999d-cd941e1340f2;
 Fri, 23 Oct 2020 08:02:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 73A80AC83;
 Fri, 23 Oct 2020 08:02:38 +0000 (UTC)
X-Inumbo-ID: 8668d801-06ee-4a06-999d-cd941e1340f2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603440158;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=wWIMTLf4j3BA6CNc85qVTx7LGobZs+V9mQFsSd1mJyg=;
	b=XyI2w/gLt44KYYIk3/hehBDbA5w1VsrrOhm2oDSi47n1w++adySVjFVaW9S+f9/XAxgQWp
	G9yINnXVbZIIY2YJhXxhL3YiVvBsWojD4l+60JtpyaRKZHdqnAn0lu9JZ83dVF0oG24gI0
	YEYkBQCiWBmIHBR09O99n55GaWtJLo0=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] PCI: drop dead pci_lock_*pdev() declarations
Message-ID: <cb644565-92c9-8dbe-8c36-54e8b6b722ad@suse.com>
Date: Fri, 23 Oct 2020 10:02:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

They have no definitions, and hence no users, anywhere.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -155,9 +155,6 @@ bool_t pci_device_detect(u16 seg, u8 bus
 int scan_pci_devices(void);
 enum pdev_type pdev_type(u16 seg, u8 bus, u8 devfn);
 int find_upstream_bridge(u16 seg, u8 *bus, u8 *devfn, u8 *secbus);
-struct pci_dev *pci_lock_pdev(int seg, int bus, int devfn);
-struct pci_dev *pci_lock_domain_pdev(
-    struct domain *, int seg, int bus, int devfn);
 
 void setup_hwdom_pci_devices(struct domain *,
                             int (*)(u8 devfn, struct pci_dev *));
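
A quick way to double-check that a declaration like this is indeed dead - i.e. has no definition anywhere - is a pair of greps; shown here against a throwaway tree rather than the real Xen sources:

```shell
# Build a throwaway tree mimicking the situation: a header declares a
# function for which no definition exists anywhere in the sources.
tmp=$(mktemp -d)
mkdir -p "$tmp/include" "$tmp/src"
cat > "$tmp/include/pci.h" <<'EOF'
struct pci_dev *pci_lock_pdev(int seg, int bus, int devfn);
EOF
cat > "$tmp/src/pci.c" <<'EOF'
/* No definition of the locking helper here or anywhere else. */
EOF

# Count files mentioning the name vs. actual definition sites (a
# definition would have the name followed by an opening brace).
decls=$(grep -rl 'pci_lock_pdev' "$tmp" | wc -l)
defs=$(grep -rE 'pci_lock_pdev\(.*\)[[:space:]]*\{' "$tmp" | wc -l)
echo "files with declarations: $decls, definitions: $defs"
rm -rf "$tmp"
```

Against the actual tree one would expect the definition count to come back zero, matching the commit message.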


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:34:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:34:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10766.28699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsWi-0005td-Mh; Fri, 23 Oct 2020 08:34:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10766.28699; Fri, 23 Oct 2020 08:34:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsWi-0005tW-Jd; Fri, 23 Oct 2020 08:34:20 +0000
Received: by outflank-mailman (input) for mailman id 10766;
 Fri, 23 Oct 2020 08:34:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVsWg-0005tP-Kg
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:34:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 041ab3a9-529d-4d1b-9de3-b938425e909f;
 Fri, 23 Oct 2020 08:34:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 15FD9ABF4;
 Fri, 23 Oct 2020 08:34:16 +0000 (UTC)
X-Inumbo-ID: 041ab3a9-529d-4d1b-9de3-b938425e909f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603442056;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=VH+yNN6+P3LEDkeW4LMEv6Dufqx0Ycnx/RNfURcuaVY=;
	b=Jl0djD1rtDoKVdyK6tnEd9mLUiCWgrAIw03J476n7O4xz6sZAB6mhOdeiL70VM64963jR1
	pY5uOZddvwrhe3JT56VKtIvmLb9GA4EewJnEJQxxTRh+Uuu4zxFmInHAHG7/rF31N2pOXY
	HGO9Fbg0NLoEvCP/zgETR+eZGlhhbQQ=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/7] x86: some assembler macro rework
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Message-ID: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Date: Fri, 23 Oct 2020 10:34:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Parts of this were discussed in the context of Andrew's CET-SS work.
Further parts simply fit the underlying picture. And a few patches
towards the end get attached here simply because of their dependency.
Patch 7 is new.

All patches except for the new ones in principle have acks / R-b-s
which would allow them to go in. However, there is still controversy
over the naming of the newly introduced header in patch 1 (which
subsequent patches then add to). There hasn't been a name suggestion
which would - imo - truly represent an improvement.

It's also still not really clear to me what - if any - changes to
make to patch 2. As said there, I'd be willing to drop some of the
changes made, but not all. Prior discussion hasn't given me a clear
understanding of what is wanted to be kept or dropped. It may have
looked as if the entire patch was meant to go away, but I don't think
I can agree with that. (I could see about moving this to the end of
the series, to unblock what's currently the remainder.)

1: replace __ASM_{CL,ST}AC
2: reduce CET-SS related #ifdef-ary
3: drop ASM_{CL,ST}AC
4: fold indirect_thunk_asm.h into asm-defns.h
5: guard against straight-line speculation past RET
6: limit amount of INT3 in IND_THUNK_*
7: make guarding against straight-line speculation optional

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:36:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10768.28712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsYM-00061n-29; Fri, 23 Oct 2020 08:36:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10768.28712; Fri, 23 Oct 2020 08:36:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsYL-00061g-V8; Fri, 23 Oct 2020 08:36:01 +0000
Received: by outflank-mailman (input) for mailman id 10768;
 Fri, 23 Oct 2020 08:36:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVsYL-00061Z-5f
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:36:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd7183f6-ea87-4cf3-b687-ddb3e84bd66b;
 Fri, 23 Oct 2020 08:35:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A1D9AC5F;
 Fri, 23 Oct 2020 08:35:59 +0000 (UTC)
X-Inumbo-ID: cd7183f6-ea87-4cf3-b687-ddb3e84bd66b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603442159;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AnKusbjZAyoMSt2heKF01++irq1SZLWnDVB4dzApjOw=;
	b=FU7ZfaI1Q8/EcrW577jBaP7DcJS6N2aQXZphcqxIVtQilS8viBNLEg+aZzda2nTW/JKBK8
	gz5/oVUq4TGa09yxFf6srcggOwuLskZwgE/F2V4EFwZxHinV9zq0blu9DjaSMUlw0UUidE
	RltHxXaFUFOmOsbcRcOXdFr8PvoSZFo=
Subject: [PATCH v3 1/7] x86: replace __ASM_{CL,ST}AC
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Message-ID: <cb072975-78f9-eede-4005-b8e1a6d14f88@suse.com>
Date: Fri, 23 Oct 2020 10:36:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Introduce proper assembler macros instead, enabled only when the
assembler itself doesn't support the insns. To avoid duplicating the
macros for assembly and C files, have them processed into asm-macros.h.
This in turn requires adding a multiple inclusion guard when generating
that header.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -243,7 +243,10 @@ $(BASEDIR)/include/asm-x86/asm-macros.h:
 	echo '#if 0' >$@.new
 	echo '.if 0' >>$@.new
 	echo '#endif' >>$@.new
+	echo '#ifndef __ASM_MACROS_H__' >>$@.new
+	echo '#define __ASM_MACROS_H__' >>$@.new
 	echo 'asm ( ".include \"$@\"" );' >>$@.new
+	echo '#endif /* __ASM_MACROS_H__ */' >>$@.new
 	echo '#if 0' >>$@.new
 	echo '.endif' >>$@.new
 	cat $< >>$@.new
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -20,6 +20,7 @@ $(call as-option-add,CFLAGS,CC,"rdrand %
 $(call as-option-add,CFLAGS,CC,"rdfsbase %rax",-DHAVE_AS_FSGSBASE)
 $(call as-option-add,CFLAGS,CC,"xsaveopt (%rax)",-DHAVE_AS_XSAVEOPT)
 $(call as-option-add,CFLAGS,CC,"rdseed %eax",-DHAVE_AS_RDSEED)
+$(call as-option-add,CFLAGS,CC,"clac",-DHAVE_AS_CLAC_STAC)
 $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
--- a/xen/arch/x86/asm-macros.c
+++ b/xen/arch/x86/asm-macros.c
@@ -1 +1,2 @@
+#include <asm/asm-defns.h>
 #include <asm/alternative-asm.h>
--- /dev/null
+++ b/xen/include/asm-x86/asm-defns.h
@@ -0,0 +1,9 @@
+#ifndef HAVE_AS_CLAC_STAC
+.macro clac
+    .byte 0x0f, 0x01, 0xca
+.endm
+
+.macro stac
+    .byte 0x0f, 0x01, 0xcb
+.endm
+#endif
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -13,10 +13,12 @@
 #include <asm/alternative.h>
 
 #ifdef __ASSEMBLY__
+#include <asm/asm-defns.h>
 #ifndef CONFIG_INDIRECT_THUNK
 .equ CONFIG_INDIRECT_THUNK, 0
 #endif
 #else
+#include <asm/asm-macros.h>
 asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
       __stringify(IS_ENABLED(CONFIG_INDIRECT_THUNK)) );
 #endif
@@ -200,34 +202,27 @@ register unsigned long current_stack_poi
 
 #endif
 
-/* "Raw" instruction opcodes */
-#define __ASM_CLAC      ".byte 0x0f,0x01,0xca"
-#define __ASM_STAC      ".byte 0x0f,0x01,0xcb"
-
 #ifdef __ASSEMBLY__
 .macro ASM_STAC
-    ALTERNATIVE "", __ASM_STAC, X86_FEATURE_XEN_SMAP
+    ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
 .endm
 .macro ASM_CLAC
-    ALTERNATIVE "", __ASM_CLAC, X86_FEATURE_XEN_SMAP
+    ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 .endm
 #else
 static always_inline void clac(void)
 {
     /* Note: a barrier is implicit in alternative() */
-    alternative("", __ASM_CLAC, X86_FEATURE_XEN_SMAP);
+    alternative("", "clac", X86_FEATURE_XEN_SMAP);
 }
 
 static always_inline void stac(void)
 {
     /* Note: a barrier is implicit in alternative() */
-    alternative("", __ASM_STAC, X86_FEATURE_XEN_SMAP);
+    alternative("", "stac", X86_FEATURE_XEN_SMAP);
 }
 #endif
 
-#undef __ASM_STAC
-#undef __ASM_CLAC
-
 #ifdef __ASSEMBLY__
 .macro SAVE_ALL op, compat=0
 .ifeqs "\op", "CLAC"
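
The Makefile change above extends a header that has to parse both as C and as assembly. The trick - and the new multiple-inclusion guard - can be sketched as follows (a conceptual model of the generated prologue only, not the actual build rule; the real recipe also appends the macro text produced from asm-macros.c):

```python
def make_asm_macros_header(include_path: str) -> str:
    """Sketch of the prologue the Makefile rule writes into asm-macros.h.

    The '#if 0' / '.if 0' bracketing lets the C compiler and the
    assembler each skip the lines meant for the other, so one file
    serves both.  The patch adds the __ASM_MACROS_H__ guard so the
    header can safely be included more than once from C.
    """
    lines = [
        "#if 0",
        ".if 0",
        "#endif",
        "#ifndef __ASM_MACROS_H__",
        "#define __ASM_MACROS_H__",
        f'asm ( ".include \\"{include_path}\\"" );',
        "#endif /* __ASM_MACROS_H__ */",
        "#if 0",
        ".endif",
    ]
    return "\n".join(lines) + "\n"

header = make_asm_macros_header("asm-macros.h")
print(header)
```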


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:36:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:36:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10770.28724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsYj-00068F-FH; Fri, 23 Oct 2020 08:36:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10770.28724; Fri, 23 Oct 2020 08:36:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsYj-000688-BQ; Fri, 23 Oct 2020 08:36:25 +0000
Received: by outflank-mailman (input) for mailman id 10770;
 Fri, 23 Oct 2020 08:36:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVsYh-00067q-VY
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:36:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b607beda-8cc8-40fa-8a61-9930c404667a;
 Fri, 23 Oct 2020 08:36:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 126DFABF4;
 Fri, 23 Oct 2020 08:36:22 +0000 (UTC)
X-Inumbo-ID: b607beda-8cc8-40fa-8a61-9930c404667a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603442182;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DSzaqBucHamXYL/0i8gmtZ51yyje1EHpgu1r982aK4g=;
	b=NJCsOqLu0bAhtAhfDoOB37Kduh4hSvC/yxM1RdWJpg+7wdILPrcD9vs5S6o9F/OQyhOX3j
	Ao4YW4Wj2LDcVl2p5BQ0cPk/KB0IWs7e0jManND1H9E6bi1P0KoYpQ8ASSkUFhog00yOIY
	uwJOAa8pW8NNX5imvfzPs5jDsHm3dUQ=
Subject: [PATCH v3 2/7] x86: reduce CET-SS related #ifdef-ary
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Message-ID: <1c7f05af-a979-0c12-4951-26bc15fb5597@suse.com>
Date: Fri, 23 Oct 2020 10:36:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Commit b586a81b7a90 ("x86/CET: Fix build following c/s 43b98e7190") had
to introduce a number of #ifdef-s to make the build work with older tool
chains. Introduce an assembler macro to cover for tool chains that don't
know of CET-SS, allowing those conditionals whose only problem is
SETSSBSY to be dropped again.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Now that I've done this I'm no longer sure which direction is better to
follow: On one hand this introduces dead code (even if just NOPs) into
CET-SS-disabled builds. On the other hand, it is a step towards removing
the feature's dependency on the tool chain version.

I've also dropped conditionals around bigger chunks of code; while I
think that's preferable, I'm open to undoing those parts.

--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -31,7 +31,6 @@ ENTRY(__high_start)
         jz      .L_bsp
 
         /* APs.  Set up shadow stacks before entering C. */
-#ifdef CONFIG_XEN_SHSTK
         testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
                 CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + boot_cpu_data(%rip)
         je      .L_ap_shstk_done
@@ -55,7 +54,6 @@ ENTRY(__high_start)
         mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
         mov     %rcx, %cr4
         setssbsy
-#endif
 
 .L_ap_shstk_done:
         call    start_secondary
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -668,7 +668,7 @@ static void __init noreturn reinit_bsp_s
     stack_base[0] = stack;
     memguard_guard_stack(stack);
 
-    if ( IS_ENABLED(CONFIG_XEN_SHSTK) && cpu_has_xen_shstk )
+    if ( cpu_has_xen_shstk )
     {
         wrmsrl(MSR_PL0_SSP,
                (unsigned long)stack + (PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8);
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -197,9 +197,7 @@ ENTRY(cr4_pv32_restore)
 
 /* See lstar_enter for entry register state. */
 ENTRY(cstar_enter)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         CR4_PV32_RESTORE
         movq  8(%rsp),%rax /* Restore %rax. */
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -236,9 +236,7 @@ iret_exit_to_guest:
  * %ss must be saved into the space left by the trampoline.
  */
 ENTRY(lstar_enter)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         movq  8(%rsp),%rax /* Restore %rax. */
         movq  $FLAT_KERNEL_SS,8(%rsp)
@@ -272,9 +270,7 @@ ENTRY(lstar_enter)
         jmp   test_all_events
 
 ENTRY(sysenter_entry)
-#ifdef CONFIG_XEN_SHSTK
         ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
-#endif
         /* sti could live here when we don't switch page tables below. */
         pushq $FLAT_USER_SS
         pushq $0
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -7,3 +7,9 @@
     .byte 0x0f, 0x01, 0xcb
 .endm
 #endif
+
+#ifndef CONFIG_HAS_AS_CET_SS
+.macro setssbsy
+    .byte 0xf3, 0x0f, 0x01, 0xe8
+.endm
+#endif
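
The fallback pattern used here and in patch 1 - use the mnemonic when the assembler supports it, otherwise define a `.macro` expanding to the raw opcode bytes - can be sketched like this (encodings taken from the hunks above; how assembler support gets probed is left out):

```python
# Raw encodings, as used in asm-defns.h above.
ENCODINGS = {
    "clac":     (0x0f, 0x01, 0xca),
    "stac":     (0x0f, 0x01, 0xcb),
    "setssbsy": (0xf3, 0x0f, 0x01, 0xe8),
}

def emit(insn: str, assembler_supports: bool) -> str:
    """Return assembly text for `insn`.

    With a capable assembler the mnemonic is used directly; otherwise a
    .macro expanding to the raw opcode bytes stands in for it, so the
    use sites stay identical either way.
    """
    if assembler_supports:
        return f"    {insn}"
    byts = ", ".join(f"0x{b:02x}" for b in ENCODINGS[insn])
    return f".macro {insn}\n    .byte {byts}\n.endm"

print(emit("setssbsy", assembler_supports=False))
```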



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:36:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:36:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10772.28736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsZ8-0006Eu-Nw; Fri, 23 Oct 2020 08:36:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10772.28736; Fri, 23 Oct 2020 08:36:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsZ8-0006En-Kz; Fri, 23 Oct 2020 08:36:50 +0000
Received: by outflank-mailman (input) for mailman id 10772;
 Fri, 23 Oct 2020 08:36:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVsZ6-0006Ea-Vh
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:36:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c156b0f-fba9-4ea0-acc4-6b24ea400d3e;
 Fri, 23 Oct 2020 08:36:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A98E6AD79;
 Fri, 23 Oct 2020 08:36:46 +0000 (UTC)
X-Inumbo-ID: 3c156b0f-fba9-4ea0-acc4-6b24ea400d3e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603442206;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NoftNtm/8qU7dpqotJvRgXu3Zqs2DRJD+DRNHMewch4=;
	b=WAUTk+ANtN+ay00SoRKolQ9Z0V7TOGxhOAf596iZ7laf8Yu5OolwlQMqCT0O7RIS3tirc0
	7dEcS9xwCCNtw7BiCEjCz2Ao0EPsq8FF9RsNB6HTON3JEZK/qlLPpa4TU1x1rSL1BI/ign
	cm2GSiesFzAC5G6zILwklfLTpI8RgjM=
Subject: [PATCH v3 3/7] x86: drop ASM_{CL,ST}AC
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Message-ID: <b328db31-1549-1d06-b6b5-c7241ea7ce9e@suse.com>
Date: Fri, 23 Oct 2020 10:36:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Use ALTERNATIVE directly, so that it is visible at the use sites that
alternative code patching is in use. Similarly avoid hiding that fact
in SAVE_ALL.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v2: Further adjust comment in asm_domain_crash_synchronous().

--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2165,9 +2165,8 @@ void activate_debugregs(const struct vcp
 void asm_domain_crash_synchronous(unsigned long addr)
 {
     /*
-     * We need clear AC bit here because in entry.S AC is set
-     * by ASM_STAC to temporarily allow accesses to user pages
-     * which is prevented by SMAP by default.
+     * We need to clear the AC bit here because the exception fixup logic
+     * may leave user accesses enabled.
      *
      * For some code paths, where this function is called, clac()
      * is not needed, but adding clac() here instead of each place
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -12,7 +12,7 @@
 #include <irq_vectors.h>
 
 ENTRY(entry_int82)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $0
         movl  $HYPERCALL_VECTOR, 4(%rsp)
         SAVE_ALL compat=1 /* DPL1 gate, restricted to 32bit PV guests only. */
@@ -284,7 +284,7 @@ ENTRY(compat_int80_direct_trap)
 compat_create_bounce_frame:
         ASSERT_INTERRUPTS_ENABLED
         mov   %fs,%edi
-        ASM_STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
         testb $2,UREGS_cs+8(%rsp)
         jz    1f
         /* Push new frame at registered guest-OS stack base. */
@@ -331,7 +331,7 @@ compat_create_bounce_frame:
         movl  TRAPBOUNCE_error_code(%rdx),%eax
 .Lft8:  movl  %eax,%fs:(%rsi)           # ERROR CODE
 1:
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         /* Rewrite our stack frame and return to guest-OS mode. */
         /* IA32 Ref. Vol. 3: TF, VM, RF and NT flags are cleared on trap. */
         andl  $~(X86_EFLAGS_VM|X86_EFLAGS_RF|\
@@ -377,7 +377,7 @@ compat_crash_page_fault_4:
         addl  $4,%esi
 compat_crash_page_fault:
 .Lft14: mov   %edi,%fs
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         movl  %esi,%edi
         call  show_page_walk
         jmp   dom_crash_sync_extable
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -276,7 +276,7 @@ ENTRY(sysenter_entry)
         pushq $0
         pushfq
 GLOBAL(sysenter_eflags_saved)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $3 /* ring 3 null cs */
         pushq $0 /* null rip */
         pushq $0
@@ -329,7 +329,7 @@ UNLIKELY_END(sysenter_gpf)
         jmp   .Lbounce_exception
 
 ENTRY(int80_direct_trap)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         pushq $0
         movl  $0x80, 4(%rsp)
         SAVE_ALL
@@ -448,7 +448,7 @@ __UNLIKELY_END(create_bounce_frame_bad_s
 
         subq  $7*8,%rsi
         movq  UREGS_ss+8(%rsp),%rax
-        ASM_STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
         movq  VCPU_domain(%rbx),%rdi
         STORE_GUEST_STACK(rax,6)        # SS
         movq  UREGS_rsp+8(%rsp),%rax
@@ -486,7 +486,7 @@ __UNLIKELY_END(create_bounce_frame_bad_s
         STORE_GUEST_STACK(rax,1)        # R11
         movq  UREGS_rcx+8(%rsp),%rax
         STORE_GUEST_STACK(rax,0)        # RCX
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
 
 #undef STORE_GUEST_STACK
 
@@ -528,11 +528,11 @@ domain_crash_page_fault_2x8:
 domain_crash_page_fault_1x8:
         addq  $8,%rsi
 domain_crash_page_fault_0x8:
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         movq  %rsi,%rdi
         call  show_page_walk
 ENTRY(dom_crash_sync_extable)
-        ASM_CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
         # Get out of the guest-save area of the stack.
         GET_STACK_END(ax)
         leaq  STACK_CPUINFO_FIELD(guest_cpu_user_regs)(%rax),%rsp
@@ -590,7 +590,8 @@ UNLIKELY_END(exit_cr3)
         iretq
 
 ENTRY(common_interrupt)
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -622,7 +623,8 @@ ENTRY(page_fault)
         movl  $TRAP_page_fault,4(%rsp)
 /* No special register assumptions. */
 GLOBAL(handle_exception)
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -827,7 +829,8 @@ ENTRY(entry_CP)
 ENTRY(double_fault)
         movl  $TRAP_double_fault,4(%rsp)
         /* Set AC to reduce chance of further SMAP faults */
-        SAVE_ALL STAC
+        ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
@@ -860,7 +863,8 @@ ENTRY(nmi)
         pushq $0
         movl  $TRAP_nmi,4(%rsp)
 handle_ist_exception:
-        SAVE_ALL CLAC
+        ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
+        SAVE_ALL
 
         GET_STACK_END(14)
 
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -200,16 +200,6 @@ register unsigned long current_stack_poi
         UNLIKELY_END_SECTION "\n"          \
         ".Llikely." #tag ".%=:"
 
-#endif
-
-#ifdef __ASSEMBLY__
-.macro ASM_STAC
-    ALTERNATIVE "", stac, X86_FEATURE_XEN_SMAP
-.endm
-.macro ASM_CLAC
-    ALTERNATIVE "", clac, X86_FEATURE_XEN_SMAP
-.endm
-#else
 static always_inline void clac(void)
 {
     /* Note: a barrier is implicit in alternative() */
@@ -224,18 +214,7 @@ static always_inline void stac(void)
 #endif
 
 #ifdef __ASSEMBLY__
-.macro SAVE_ALL op, compat=0
-.ifeqs "\op", "CLAC"
-        ASM_CLAC
-.else
-.ifeqs "\op", "STAC"
-        ASM_STAC
-.else
-.ifnb \op
-        .err
-.endif
-.endif
-.endif
+.macro SAVE_ALL compat=0
         addq  $-(UREGS_error_code-UREGS_r15), %rsp
         cld
         movq  %rdi,UREGS_rdi(%rsp)



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:37:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:37:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10775.28748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsZf-0006Mp-0b; Fri, 23 Oct 2020 08:37:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10775.28748; Fri, 23 Oct 2020 08:37:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsZe-0006Mi-Tw; Fri, 23 Oct 2020 08:37:22 +0000
Received: by outflank-mailman (input) for mailman id 10775;
 Fri, 23 Oct 2020 08:37:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVsZd-0006MV-0t
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:37:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38c031f2-f3fe-4aaa-b6b6-86093f3d0834;
 Fri, 23 Oct 2020 08:37:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 55079AC5F;
 Fri, 23 Oct 2020 08:37:19 +0000 (UTC)
X-Inumbo-ID: 38c031f2-f3fe-4aaa-b6b6-86093f3d0834
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603442239;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XH8HtTX0dx7wuPJeWsqa7xFPd/3P0ZggIToNVH+JyHk=;
	b=TSInIO7YtfG0plD8YY/Zcxqs3oXJCunhpn5Vowwo97wMdZ7Ni2QJDPtbzHQjScN1cG3JHv
	WUerZ8s7MOB0AYmv/jGiG35KuXjaDUfFT0Ty1zTOYxQMNX7OjwsV6w6WHmQUsDbGiPhLy4
	MBYUiwsXfDQst4B9wRyTdPOZlbIyyIA=
Subject: [PATCH v3 4/7] x86: fold indirect_thunk_asm.h into asm-defns.h
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Message-ID: <f7d17cfc-c9dd-1131-2034-592fd3d5ce2d@suse.com>
Date: Fri, 23 Oct 2020 10:37:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

There's little point in having two separate headers both getting
included by asm_defns.h. Folding them in particular reduces the number
of places where asm(".include ...") needs to be suitably guarded in
such dual-use headers.

No change to generated code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -139,7 +139,7 @@ ifeq ($(TARGET_ARCH),x86)
 t1 = $(call as-insn,$(CC),".L0: .L1: .skip (.L1 - .L0)",,-no-integrated-as)
 
 # Check whether clang asm()-s support .include.
-t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include \"asm-x86/indirect_thunk_asm.h\"",,-no-integrated-as)
+t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include \"asm-x86/asm-defns.h\"",,-no-integrated-as)
 
 # Check whether clang keeps .macro-s between asm()-s:
 # https://bugs.llvm.org/show_bug.cgi?id=36110
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -13,3 +13,40 @@
     .byte 0xf3, 0x0f, 0x01, 0xe8
 .endm
 #endif
+
+.macro INDIRECT_BRANCH insn:req arg:req
+/*
+ * Create an indirect branch.  insn is one of call/jmp, arg is a single
+ * register.
+ *
+ * With no compiler support, this degrades into a plain indirect call/jmp.
+ * With compiler support, dispatch to the correct __x86_indirect_thunk_*
+ */
+    .if CONFIG_INDIRECT_THUNK == 1
+
+        $done = 0
+        .irp reg, ax, cx, dx, bx, bp, si, di, 8, 9, 10, 11, 12, 13, 14, 15
+        .ifeqs "\arg", "%r\reg"
+            \insn __x86_indirect_thunk_r\reg
+            $done = 1
+           .exitm
+        .endif
+        .endr
+
+        .if $done != 1
+            .error "Bad register arg \arg"
+        .endif
+
+    .else
+        \insn *\arg
+    .endif
+.endm
+
+/* Convenience wrappers. */
+.macro INDIRECT_CALL arg:req
+    INDIRECT_BRANCH call \arg
+.endm
+
+.macro INDIRECT_JMP arg:req
+    INDIRECT_BRANCH jmp \arg
+.endm
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -22,7 +22,6 @@
 asm ( "\t.equ CONFIG_INDIRECT_THUNK, "
       __stringify(IS_ENABLED(CONFIG_INDIRECT_THUNK)) );
 #endif
-#include <asm/indirect_thunk_asm.h>
 
 #ifndef __ASSEMBLY__
 void ret_from_intr(void);
--- a/xen/include/asm-x86/indirect_thunk_asm.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Trickery to allow this header to be included at the C level, to permit
- * proper dependency tracking in .*.o.d files, while still having it contain
- * assembler only macros.
- */
-#ifndef __ASSEMBLY__
-# if 0
-  .if 0
-# endif
-asm ( "\t.include \"asm/indirect_thunk_asm.h\"" );
-# if 0
-  .endif
-# endif
-#else
-
-.macro INDIRECT_BRANCH insn:req arg:req
-/*
- * Create an indirect branch.  insn is one of call/jmp, arg is a single
- * register.
- *
- * With no compiler support, this degrades into a plain indirect call/jmp.
- * With compiler support, dispatch to the correct __x86_indirect_thunk_*
- */
-    .if CONFIG_INDIRECT_THUNK == 1
-
-        $done = 0
-        .irp reg, ax, cx, dx, bx, bp, si, di, 8, 9, 10, 11, 12, 13, 14, 15
-        .ifeqs "\arg", "%r\reg"
-            \insn __x86_indirect_thunk_r\reg
-            $done = 1
-           .exitm
-        .endif
-        .endr
-
-        .if $done != 1
-            .error "Bad register arg \arg"
-        .endif
-
-    .else
-        \insn *\arg
-    .endif
-.endm
-
-/* Convenience wrappers. */
-.macro INDIRECT_CALL arg:req
-    INDIRECT_BRANCH call \arg
-.endm
-
-.macro INDIRECT_JMP arg:req
-    INDIRECT_BRANCH jmp \arg
-.endm
-
-#endif
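
[Editorial illustration] The .irp-based register dispatch in INDIRECT_BRANCH
can be modelled outside the assembler. A hypothetical Python rendering of the
macro's logic (function and parameter names are illustrative, not part of the
patch):

```python
# Register suffixes accepted by INDIRECT_BRANCH; note that "sp" (%rsp)
# is deliberately absent from the macro's .irp list.
REGS = ("ax", "cx", "dx", "bx", "bp", "si", "di",
        "8", "9", "10", "11", "12", "13", "14", "15")

def indirect_branch(insn, arg, thunk=True):
    """Model of the INDIRECT_BRANCH macro expansion."""
    if not thunk:
        # CONFIG_INDIRECT_THUNK == 0: degrade to a plain indirect call/jmp.
        return f"{insn} *{arg}"
    for reg in REGS:
        if arg == f"%r{reg}":
            # Dispatch to the matching __x86_indirect_thunk_* symbol.
            return f"{insn} __x86_indirect_thunk_r{reg}"
    raise ValueError(f"Bad register arg {arg}")
```

So, just as with the assembler macro, INDIRECT_CALL %rbx resolves to a direct
call of __x86_indirect_thunk_rbx, while an unlisted register is rejected.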



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:38:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:38:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10779.28759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsaL-0006Xn-Ap; Fri, 23 Oct 2020 08:38:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10779.28759; Fri, 23 Oct 2020 08:38:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsaL-0006Xg-7s; Fri, 23 Oct 2020 08:38:05 +0000
Received: by outflank-mailman (input) for mailman id 10779;
 Fri, 23 Oct 2020 08:38:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVsaK-0006Xa-HX
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:38:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14dc6f66-2615-401f-8786-26a944cfd467;
 Fri, 23 Oct 2020 08:38:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0E3A1ADC2;
 Fri, 23 Oct 2020 08:38:02 +0000 (UTC)
X-Inumbo-ID: 14dc6f66-2615-401f-8786-26a944cfd467
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603442282;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=X5eJiPZad/Wxk2WeL4Rv91+7VfsqFeYUymX6fbe2ERc=;
	b=ISRPQlnJe203eMoqR3qOCAhHEkcd4PKSuBsQLqMx9RhltVPjfWh3GEp0RjBX1Pud8O9LRu
	UmYn02GqXGA6//LibSLjZN0uzj7pPh59juAYLdYlXjaBUHa2Um7N5PLXveuSJilqMyWkv3
	6mfwZRlqIyAX4ZDEplSXsl54jrTDUfA=
Subject: [PATCH v3 5/7] x86: guard against straight-line speculation past RET
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Message-ID: <80ceea17-958d-f409-5f39-9f353e780f5b@suse.com>
Date: Fri, 23 Oct 2020 10:38:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Under certain conditions CPUs can speculate into the instruction stream
past a RET instruction. Guard against this just like 3b7dab93f240
("x86/spec-ctrl: Protect against CALL/JMP straight-line speculation")
did - by inserting an "INT $3" insn. It's merely the mechanics of how to
achieve this that differ: A set of macros gets introduced to post-
process RET insns issued by the compiler (or living in assembly files).

Unfortunately for clang this requires further features its built-in
assembler doesn't support: we need to be able to override insn mnemonics
produced by the compiler (which may be impossible, if internally
assembly mnemonics never get generated), and we want to use \(text)
escaping / quoting in the auxiliary macro.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
TBD: Would be nice to avoid the additions in .init.text, but a query to
     the binutils folks regarding the ability to identify the section
     that stuff is in (raised by Peter Zijlstra over a year ago:
     https://sourceware.org/pipermail/binutils/2019-July/107528.html)
     has been left without helpful replies.
---
v3: Use .byte 0xc[23] instead of the nested macros.
v2: Fix build with newer clang. Use int3 mnemonic. Also override retq.

--- a/xen/Makefile
+++ b/xen/Makefile
@@ -145,7 +145,15 @@ t2 = $(call as-insn,$(CC) -I$(BASEDIR)/i
 # https://bugs.llvm.org/show_bug.cgi?id=36110
 t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
 
-CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
+# Check whether \(text) escaping in macro bodies is supported.
+t4 = $(call as-insn,$(CC),".macro m ret:req; \\(ret) $$\\ret; .endm; m 8",,-no-integrated-as)
+
+# Check whether macros can override insn mnemonics in inline assembly.
+t5 = $(call as-insn,$(CC),".macro ret; .error; .endm; .macro retq; .error; .endm",-no-integrated-as)
+
+acc1 := $(call or,$(t1),$(t2),$(t3),$(t4))
+
+CLANG_FLAGS += $(call or,$(acc1),$(t5))
 endif
 
 CLANG_FLAGS += -Werror=unknown-warning-option
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -50,3 +50,19 @@
 .macro INDIRECT_JMP arg:req
     INDIRECT_BRANCH jmp \arg
 .endm
+
+/*
+ * To guard against speculation past RET, insert a breakpoint insn
+ * immediately after them.
+ */
+.macro ret operand:vararg
+    retq \operand
+.endm
+.macro retq operand:vararg
+    .ifb \operand
+    .byte 0xc3
+    .else
+    .byte 0xc2
+    .word \operand
+    .endif
+.endm
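
[Editorial illustration] The .byte/.word sequences emitted above are the
architectural encodings of near RET (0xC3) and near RET imm16 (0xC2 iw);
emitting raw bytes rather than the mnemonic also avoids re-invoking the
overriding macro. A small Python model of the emission logic (illustrative
only, the name emit_retq is not from the patch):

```python
def emit_retq(operand=None):
    # .ifb \operand -> plain near RET: 0xC3
    # otherwise     -> RET imm16: 0xC2 followed by a little-endian .word
    if operand is None:
        return bytes([0xC3])
    return bytes([0xC2]) + operand.to_bytes(2, "little")
```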



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:38:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:38:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10782.28771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsap-0006em-JZ; Fri, 23 Oct 2020 08:38:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10782.28771; Fri, 23 Oct 2020 08:38:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsap-0006ee-Gd; Fri, 23 Oct 2020 08:38:35 +0000
Received: by outflank-mailman (input) for mailman id 10782;
 Fri, 23 Oct 2020 08:38:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVsao-0006eS-0U
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:38:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a0b6461d-c37d-4b85-8b16-55e581202984;
 Fri, 23 Oct 2020 08:38:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 641DDABF4;
 Fri, 23 Oct 2020 08:38:32 +0000 (UTC)
X-Inumbo-ID: a0b6461d-c37d-4b85-8b16-55e581202984
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603442312;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ic529odUAZBshqLAt5wPXzrnZzZH82DsTSQiP600Yn0=;
	b=bxP8bIJU47nYZg50sLKyag7ATL/GZjft5keT1ynmaRds0BmNn5YJ6j7cAHnJ3FqDU//c/n
	tWV2KdZ+dPycpJZUpJ60VJ8VB0fcNetwIrzB67UdBqiDEU+2Hxa8m1W0JYqY2k222gyWQW
	twYDRzUzPmraEf5J4sjRDVSF0Bb3XtI=
Subject: [PATCH v3 6/7] x86: limit amount of INT3 in IND_THUNK_*
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Message-ID: <738249d7-521c-2ea3-332c-f2298b0b25a2@suse.com>
Date: Fri, 23 Oct 2020 10:38:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

There's no point in having every replacement variant also specify the
INT3 - just have it once in the base macro. When patching, NOPs will get
inserted, which are fine to speculate through (until reaching the INT3).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
I also wonder whether the LFENCE in IND_THUNK_RETPOLINE couldn't be
replaced by INT3 as well. Of course the effect will be marginal, as the
size of the thunk will still be 16 bytes when including tail padding
resulting from alignment.
---
v3: Add comment.
v2: New.

--- a/xen/arch/x86/indirect-thunk.S
+++ b/xen/arch/x86/indirect-thunk.S
@@ -11,6 +11,9 @@
 
 #include <asm/asm_defns.h>
 
+/* Don't transform the "ret" further down. */
+.purgem ret
+
 .macro IND_THUNK_RETPOLINE reg:req
         call 2f
 1:
@@ -24,12 +27,10 @@
 .macro IND_THUNK_LFENCE reg:req
         lfence
         jmp *%\reg
-        int3 /* Halt straight-line speculation */
 .endm
 
 .macro IND_THUNK_JMP reg:req
         jmp *%\reg
-        int3 /* Halt straight-line speculation */
 .endm
 
 /*
@@ -44,6 +45,8 @@ ENTRY(__x86_indirect_thunk_\reg)
         __stringify(IND_THUNK_LFENCE \reg), X86_FEATURE_IND_THUNK_LFENCE, \
         __stringify(IND_THUNK_JMP \reg),    X86_FEATURE_IND_THUNK_JMP
 
+        int3 /* Halt straight-line speculation */
+
         .size __x86_indirect_thunk_\reg, . - __x86_indirect_thunk_\reg
         .type __x86_indirect_thunk_\reg, @function
 .endm
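
[Editorial illustration] The resulting layout can be sketched roughly (this
models only the byte layout, not the actual alternatives machinery): a
shorter replacement is NOP-padded to the site length, and the single
trailing INT3 then stops straight-line speculation for every variant.

```python
NOP, INT3 = b"\x90", b"\xcc"

def thunk_bytes(site_len, replacement):
    # The patched ALTERNATIVE site is padded with NOPs (fine to
    # speculate through); one INT3 after the site halts speculation.
    assert len(replacement) <= site_len
    return replacement + NOP * (site_len - len(replacement)) + INT3
```

For instance, IND_THUNK_JMP for %rax boils down to jmp *%rax (bytes ff e0);
in a 4-byte site that becomes ff e0 90 90, followed by the shared cc.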



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:39:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:39:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10785.28784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsbI-0006o5-TL; Fri, 23 Oct 2020 08:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10785.28784; Fri, 23 Oct 2020 08:39:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsbI-0006ny-QO; Fri, 23 Oct 2020 08:39:04 +0000
Received: by outflank-mailman (input) for mailman id 10785;
 Fri, 23 Oct 2020 08:39:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVsbH-0006no-Go
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:39:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37783b02-83e7-45ee-8acf-b93102008a45;
 Fri, 23 Oct 2020 08:39:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1ADBBABF4;
 Fri, 23 Oct 2020 08:39:02 +0000 (UTC)
X-Inumbo-ID: 37783b02-83e7-45ee-8acf-b93102008a45
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603442342;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=M/CSsTj+azQBJhNlEDi7dDTrUxCNiHutJMIEl+th0O8=;
	b=t7vu6KTaiVJ6LAEIUaAxtoG8+NhI8Z8DpVtQ5GaRcQT7OaulrKF+4IKkvmuTJxQfjPw+j4
	kMJcvCLPepqJr/VsbKU0Yc9/cAwiqrkeWO1OnHOz8snflj7F6sO2dIlDAvwZWwh6oY7xz1
	nBtGBZ2uGOcV/R9ab/CRWbfROzGIU3A=
Subject: [PATCH v3 7/7] x86: make guarding against straight-line speculation
 optional
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Message-ID: <06067023-0f61-3e37-a0a4-4254df1f5c16@suse.com>
Date: Fri, 23 Oct 2020 10:39:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <7065e2dc-f846-be79-1081-682c2295358c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Put insertion of INT3 behind CONFIG_SPECULATIVE_HARDEN_BRANCH
conditionals.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.

--- a/xen/arch/x86/indirect-thunk.S
+++ b/xen/arch/x86/indirect-thunk.S
@@ -11,8 +11,10 @@
 
 #include <asm/asm_defns.h>
 
+#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
 /* Don't transform the "ret" further down. */
 .purgem ret
+#endif
 
 .macro IND_THUNK_RETPOLINE reg:req
         call 2f
@@ -45,7 +47,9 @@ ENTRY(__x86_indirect_thunk_\reg)
         __stringify(IND_THUNK_LFENCE \reg), X86_FEATURE_IND_THUNK_LFENCE, \
         __stringify(IND_THUNK_JMP \reg),    X86_FEATURE_IND_THUNK_JMP
 
+#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
         int3 /* Halt straight-line speculation */
+#endif
 
         .size __x86_indirect_thunk_\reg, . - __x86_indirect_thunk_\reg
         .type __x86_indirect_thunk_\reg, @function
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -51,6 +51,8 @@
     INDIRECT_BRANCH jmp \arg
 .endm
 
+#ifdef CONFIG_SPECULATIVE_HARDEN_BRANCH
+
 /*
  * To guard against speculation past RET, insert a breakpoint insn
  * immediately after them.
@@ -66,3 +68,5 @@
     .word \operand
     .endif
 .endm
+
+#endif



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 08:41:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 08:41:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10790.28796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsdV-0007g3-EX; Fri, 23 Oct 2020 08:41:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10790.28796; Fri, 23 Oct 2020 08:41:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVsdV-0007fw-Av; Fri, 23 Oct 2020 08:41:21 +0000
Received: by outflank-mailman (input) for mailman id 10790;
 Fri, 23 Oct 2020 08:41:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVsdT-0007fr-Gx
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 08:41:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd5ac4ca-fcea-4d2d-aa5c-3476333ffd07;
 Fri, 23 Oct 2020 08:41:18 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVsdS-0006OY-IF; Fri, 23 Oct 2020 08:41:18 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVsdS-00073R-AN; Fri, 23 Oct 2020 08:41:18 +0000
X-Inumbo-ID: bd5ac4ca-fcea-4d2d-aa5c-3476333ffd07
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=TWdewo9d5jxMv1zaQDshWMRbbwOmCgPsNqHYJ8g2Ax0=; b=je0p9o3RWMuknnnVWfsifXubqC
	6m4E4H31nD/3M0S4IhLCBme+8j5Q7rL3RmQ1sjddYwx6wuXj7ksc5QUx7Y64WsamQ+sumqBva2KSu
	TBTQSWncKFEDuu4Fl2xp7UoytT/if3+ibtHs8AbU6HOqYbzFVB5EVaCsuFrijRtJCDeE=;
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201022014310.GA70872@mattapan.m5p.com>
 <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
 <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <b4ec906d-ebb6-add9-1bc0-39ab8d588026@xen.org>
Date: Fri, 23 Oct 2020 09:41:16 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 22/10/2020 22:17, Stefano Stabellini wrote:
> On Thu, 22 Oct 2020, Julien Grall wrote:
>> On 22/10/2020 02:43, Elliott Mitchell wrote:
>>> Linux requires UEFI support to be enabled on ARM64 devices.  While many
>>> ARM64 devices lack ACPI, the writing seems to be on the wall of UEFI/ACPI
>>> potentially taking over.  Some common devices may need ACPI table
>>> support.
>>>
>>> Presently I think it is worth removing the dependency on CONFIG_EXPERT.
>>
>> The idea behind EXPERT is to gate any feature that is not considered to be
>> stable/complete enough to be used in production.
> 
> Yes, and from that point of view I don't think we want to remove EXPERT
> from ACPI yet. However, the idea of hiding things behind EXPERT works
> very well for new esoteric features, something like memory introspection
> or memory overcommit.

Memaccess is not very new ;).

> It does not work well for things that are actually
> required to boot on the platform.

I am not sure what the problem is. It is easy to select EXPERT from 
menuconfig. It also hints to the user that the feature may not fully work.

> 
> Typically ACPI systems don't come with device tree at all (RPi4 being an
> exception), so users don't really have much of a choice in the matter.

And they typically have IOMMUs.

> 
>  From that point of view, it would be better to remove EXPERT from ACPI,
> maybe even build ACPI by default, *but* to add a warning at boot saying
> something like:
> 
> "ACPI support is experimental. Boot using Device Tree if you can."
> 
> 
> That would better convey the risks of using ACPI, while at the same time
> making it a bit easier for users to boot on their ACPI-only platforms.

Right, I agree that this would make it easier for users to boot Xen on 
ACPI-only platforms. However, based on the above, it is easy enough for 
a developer to rebuild Xen with ACPI and EXPERT enabled.

So what sort of users are you targeting?

I am sort of okay with removing EXPERT. But I still think building ACPI 
by default would be wrong, because our default .config is meant to be 
(security) supported. I don't think ACPI can earn that qualification today.

In order to remove EXPERT, there are a few things that need to be done 
(or checked):
     1) SUPPORT.md has a statement about ACPI on Arm
     2) DT is favored over ACPI if both firmware tables are present.
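
For reference, the gating under discussion is a Kconfig prompt condition. 
A hypothetical sketch of an ACPI option gated behind EXPERT (not the 
literal entry from xen/arch/arm/Kconfig, which may differ in wording and 
dependencies):

```
# Hypothetical sketch only; check xen/arch/arm/Kconfig for the real entry.
config ACPI
	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
	depends on ARM_64
	---help---
	  Parse ACPI tables passed by the firmware.
	  The "if EXPERT" prompt condition is what hides the option
	  until EXPERT is selected; removing that condition is what
	  dropping the EXPERT dependency amounts to.
```

With the "if EXPERT" condition, the option only becomes visible in 
menuconfig once EXPERT is enabled, which is the behaviour being debated 
above.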

> 
> 
>> I don't consider the ACPI complete because the parsing of the IORT (used to
>> discover SMMU and GICv3 ITS) is not there yet.
>>
>> I vaguely remember some issues on system using SMMU (e.g. Thunder-X) because
>> Dom0 will try to use the IOMMU and this would break PV drivers.
> 
> I am not sure why Dom0 using the IOMMU would break PV drivers? Is it
> because the pagetable is not properly updated when mapping foreign
> pages?

IIRC, yes. This would need to be tested again.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 09:21:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 09:21:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10798.28807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtFn-0003HL-JU; Fri, 23 Oct 2020 09:20:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10798.28807; Fri, 23 Oct 2020 09:20:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtFn-0003HE-GI; Fri, 23 Oct 2020 09:20:55 +0000
Received: by outflank-mailman (input) for mailman id 10798;
 Fri, 23 Oct 2020 09:20:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVtFm-0003H9-9X
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 09:20:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed01f033-a1c3-4993-84ad-7b8d80283031;
 Fri, 23 Oct 2020 09:20:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVtFi-0007DI-BC; Fri, 23 Oct 2020 09:20:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVtFi-0006F3-2I; Fri, 23 Oct 2020 09:20:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVtFi-0008FR-1o; Fri, 23 Oct 2020 09:20:50 +0000
X-Inumbo-ID: ed01f033-a1c3-4993-84ad-7b8d80283031
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gvLZ84NuVhAb2Ht6MV7iLiiXURrQhq9jPvld6JKqdCs=; b=ZlWvnPR3sKzyuHtvO92WpxoBfo
	QOdLncpizLeH9II1xiP6xLEKkXCu1vYGZ9G2aiGUbvJ0qMAWZafBELE2prkuebJ57Iae6NQ+fSHFg
	k5zoNN64Lran89d4Pk8Edt8mkAaa4sB5NkgD53tvJGzVg0XPWRt96nGqsjk88BvZxQs8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156115-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156115: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 09:20:50 +0000

flight 156115 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156115/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   64 days
Failing since        152659  2020-08-21 14:07:39 Z   62 days  122 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 09:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 09:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10803.28822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtPM-0004Kd-Lv; Fri, 23 Oct 2020 09:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10803.28822; Fri, 23 Oct 2020 09:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtPM-0004KW-J2; Fri, 23 Oct 2020 09:30:48 +0000
Received: by outflank-mailman (input) for mailman id 10803;
 Fri, 23 Oct 2020 09:30:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVtPK-0004KR-R9
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 09:30:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d17a9f92-1800-4779-9876-12de0167bbd0;
 Fri, 23 Oct 2020 09:30:45 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVtPH-0007Q3-Nu; Fri, 23 Oct 2020 09:30:43 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVtPH-0003FN-EK; Fri, 23 Oct 2020 09:30:43 +0000
X-Inumbo-ID: d17a9f92-1800-4779-9876-12de0167bbd0
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=xuywh7oVNz7LfL/MOpxwaUr5izAxK1T5iWfPMpxDzRU=; b=aRZ3MKtR6SFpXPe6cEwhs4E/p/
	mCyCXu8tqCadBo8QvdYCzh9SA2HLVDLzmzOLcYm7z8yjX2Ehe2gt7U0pnnu3rG4Pemj56QaPT3arH
	ar/PZn+6GdgkbipsGVadsZy3Olvz5HrwWGuIiVHAv87TCe/ugksDXU6L4vloR47YnduQ=;
Subject: Re: [PATCH] xen/acpi: Don't fail if SPCR table is absent
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201021221253.GA73207@mattapan.m5p.com>
 <930267bd-5442-3ff0-bb5b-1ed8e2ebe37c@xen.org>
 <20201022191840.GB81455@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <13ce1c15-587c-2ceb-4504-90d19ed8b349@xen.org>
Date: Fri, 23 Oct 2020 10:30:41 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <20201022191840.GB81455@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

On 22/10/2020 20:18, Elliott Mitchell wrote:
> On Thu, Oct 22, 2020 at 07:38:26PM +0100, Julien Grall wrote:
>> Thank you for the patch. FYI I tweaked the commit title a bit before
>> committing.
>>
>> The title is now: "xen/arm: acpi: Don't fail it SPCR table is absent".
> 
> Perhaps "xen/arm: acpi: Don't fail on absent SPCR table"?
> 
> What you're suggesting doesn't read well to me.

Sorry, I made a typo when writing the title in the e-mail. Here is a 
direct copy from the commit:

"xen/arm: acpi: Don't fail if SPCR table is absent"

This is pretty much your original title with "arm: " added to clarify 
the subsystem modified.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 09:34:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 09:34:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10807.28835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtT3-0004bx-7B; Fri, 23 Oct 2020 09:34:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10807.28835; Fri, 23 Oct 2020 09:34:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtT3-0004bq-36; Fri, 23 Oct 2020 09:34:37 +0000
Received: by outflank-mailman (input) for mailman id 10807;
 Fri, 23 Oct 2020 09:34:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dX6+=D6=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kVtT0-0004bl-Qf
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 09:34:35 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 4d66e040-4cb4-4caa-82f4-718cc0bd2d28;
 Fri, 23 Oct 2020 09:34:33 +0000 (UTC)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-409-80gi-UXRMw6s8QFcHyWBhg-1; Fri, 23 Oct 2020 05:34:31 -0400
Received: by mail-wm1-f71.google.com with SMTP id s12so330908wmj.0
 for <xen-devel@lists.xenproject.org>; Fri, 23 Oct 2020 02:34:31 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id m8sm2152212wrw.17.2020.10.23.02.34.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 23 Oct 2020 02:34:29 -0700 (PDT)
X-Inumbo-ID: 4d66e040-4cb4-4caa-82f4-718cc0bd2d28
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603445673;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FGMVu3aa9dhMCzxAud5NWcJoHpEpj2AUgXtxCn9BG9E=;
	b=MyuKN4Zc1RS6hm4KMpIk2K8J9IMAIW54dVpNrRFSlAQ+3qyty1urLrP+6MhnUfuLRTvrNm
	rC8fCV44NYKPE8S03++F7UuTwLkte6N95gRosj2sZ63u/HHR4cGidx+j0WrUpIfZ29IoGM
	Xv9KO1MvivaLBG4IR+YYpDDiRgFGg+I=
X-MC-Unique: 80gi-UXRMw6s8QFcHyWBhg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=FGMVu3aa9dhMCzxAud5NWcJoHpEpj2AUgXtxCn9BG9E=;
        b=ulzavgIgPkFgNCSRsSHhcE82Xmk6ITArDIVLIqwoJ9wnVmzOFv5N+w8IpZeY7Rbk8v
         TVOlm+a4Fqqz7PZ51CEMeO6RbR63Brptq6p1CMaTbG3KgPIrep/Z2ZJ7gQYKRGe5mvpf
         pa9vmAaTgAkHoSuqgtwVTFsDnPjfsJM/OBRfuJOrvwyNrnrky5ndGqI5L1HK07PMUgo0
         vO8ZX5+ZPtWbgXkgNJ+aO0rdhTLXFpDl0WMD2DF1Tp+T4wF6dksMGAPfZBqsenWjSNkd
         hNACFq7VYuIkEIYw5lA1slkNuAZ6lo8i9cVCxiiRXGKwKIhulW091E3UmEk6wWCe6jmd
         f4PQ==
X-Gm-Message-State: AOAM530/IWlJU2WJ6to0+OidPInCnFwGwDIZIDoTtJX6Fxd4RqdpU6wy
	nSN9LAykp9qUdcMp7LBbU1fOU6GzYRLggEYjlhIF7KVgjGDssej7dBmFygoFPxAVwqPlIczaFT5
	A5ObaYzlMwCDrgafV80X1t1OajI8=
X-Received: by 2002:a5d:4648:: with SMTP id j8mr1595267wrs.131.1603445670129;
        Fri, 23 Oct 2020 02:34:30 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxZEPFC0cYdFNLJkhT7P2Q/rKqhDgKW2UfR2mE25lRSMsIsWGSZR75378dt7F6GcxlNTU540A==
X-Received: by 2002:a5d:4648:: with SMTP id j8mr1595234wrs.131.1603445669876;
        Fri, 23 Oct 2020 02:34:29 -0700 (PDT)
Subject: Re: [PATCH v2 0/3] Add Xen CpusAccel
To: Thomas Huth <thuth@redhat.com>, Jason Andryuk <jandryuk@gmail.com>
Cc: Laurent Vivier <lvivier@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 QEMU <qemu-devel@nongnu.org>, Claudio Fontana <cfontana@suse.de>,
 Anthony Perard <anthony.perard@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <20201013140511.5681-1-jandryuk@gmail.com>
 <ddb5c9c2-c206-28d6-2d9d-7954e7022c23@redhat.com>
 <CAKf6xpvpuG1jVdf3+heXzHFd_kc5kVHYdJgC+8iazFLciqOMZw@mail.gmail.com>
 <d9f23eee-c0af-d2dd-9b9d-f0255fc8e3d1@redhat.com>
 <1927b32e-7919-5061-0285-d9c7184d0bae@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <c0ef825b-ac02-67ab-aef2-f7722da1272a@redhat.com>
Date: Fri, 23 Oct 2020 11:34:28 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <1927b32e-7919-5061-0285-d9c7184d0bae@redhat.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23/10/20 09:09, Thomas Huth wrote:
> On 22/10/2020 17.29, Paolo Bonzini wrote:
>> On 22/10/20 17:17, Jason Andryuk wrote:
>>> On Tue, Oct 13, 2020 at 1:16 PM Paolo Bonzini <pbonzini@redhat.com> wrote:
>>>>
>>>> On 13/10/20 16:05, Jason Andryuk wrote:
>>>>> Xen was left behind when CpusAccel became mandatory and fails the assert
>>>>> in qemu_init_vcpu().  It relied on the same dummy cpu threads as qtest.
>>>>> Move the qtest cpu functions to a common location and reuse them for
>>>>> Xen.
>>>>>
>>>>> v2:
>>>>>   New patch "accel: Remove _WIN32 ifdef from qtest-cpus.c"
>>>>>   Use accel/dummy-cpus.c for filename
>>>>>   Put prototype in include/sysemu/cpus.h
>>>>>
>>>>> Jason Andryuk (3):
>>>>>   accel: Remove _WIN32 ifdef from qtest-cpus.c
>>>>>   accel: move qtest CpusAccel functions to a common location
>>>>>   accel: Add xen CpusAccel using dummy-cpus
>>>>>
>>>>>  accel/{qtest/qtest-cpus.c => dummy-cpus.c} | 27 ++++------------------
>>>>>  accel/meson.build                          |  8 +++++++
>>>>>  accel/qtest/meson.build                    |  1 -
>>>>>  accel/qtest/qtest-cpus.h                   | 17 --------------
>>>>>  accel/qtest/qtest.c                        |  5 +++-
>>>>>  accel/xen/xen-all.c                        |  8 +++++++
>>>>>  include/sysemu/cpus.h                      |  3 +++
>>>>>  7 files changed, 27 insertions(+), 42 deletions(-)
>>>>>  rename accel/{qtest/qtest-cpus.c => dummy-cpus.c} (71%)
>>>>>  delete mode 100644 accel/qtest/qtest-cpus.h
>>>>>
>>>>
>>>> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
>>>
>>> Thank you, Paolo.  Also Anthony Acked and Claudio Reviewed patch 3.
>>> How can we get this into the tree?
>>
>> I think Anthony should send a pull request?
> 
> Since Anthony acked patch 3, I think I can also take it through the qtest tree.

No objections, thanks!

Paolo



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 09:39:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 09:39:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10812.28847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtXr-0004ny-R7; Fri, 23 Oct 2020 09:39:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10812.28847; Fri, 23 Oct 2020 09:39:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtXr-0004nr-Ns; Fri, 23 Oct 2020 09:39:35 +0000
Received: by outflank-mailman (input) for mailman id 10812;
 Fri, 23 Oct 2020 09:39:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OWdy=D6=epam.com=prvs=8565980f29=anastasiia_lukianenko@srs-us1.protection.inumbo.net>)
 id 1kVtXq-0004nm-0D
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 09:39:34 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04563125-10c4-4ad9-9b4d-c78cb3a8852d;
 Fri, 23 Oct 2020 09:39:33 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09N9TXTA019557; Fri, 23 Oct 2020 09:39:29 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 by mx0a-0039f301.pphosted.com with ESMTP id 34bs0c0w8x-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 23 Oct 2020 09:39:29 +0000
Received: from AM7PR03MB6531.eurprd03.prod.outlook.com (2603:10a6:20b:1c2::6)
 by AM5PR03MB2946.eurprd03.prod.outlook.com (2603:10a6:206:24::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.24; Fri, 23 Oct
 2020 09:39:27 +0000
Received: from AM7PR03MB6531.eurprd03.prod.outlook.com
 ([fe80::9439:23f1:1063:ad8]) by AM7PR03MB6531.eurprd03.prod.outlook.com
 ([fe80::9439:23f1:1063:ad8%5]) with mapi id 15.20.3477.028; Fri, 23 Oct 2020
 09:39:27 +0000
X-Inumbo-ID: 04563125-10c4-4ad9-9b4d-c78cb3a8852d
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oKAFgYsIMOjE3wYto/Lg/sM8bzKfWnCd3XkTVec6Y8IQsU9Vz5ATLSDPx7ZGsFPvCVOVa2KAqZnopTWNehlj1WqTcFO4pc47hbtflsRs4366OWPV2u1kj7YB0eoGCeOQj/GufYDnIUY9BhJVeJHQSBF4jN1CrAfcpFsjuNEDSj8pHgxFPAkMkum578poQe1O0n3xZf9tLvqSIbBzVneImNG1Rs/9jjxBmeh9bINlGNxKPtN/t9N0QrsrZKK5vhaD4JTXj5XqrgHIEPFIKs++2X5TFVvnRdESYoYn+yGf+9/HgMI+ltKv4fkPfHuif65cwb7px+oWV9t/jvyHMRr/Fg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a0Hqt4b9mc6qsgWaLB9t91UVzflAV03FQ6d6JphpDKA=;
 b=Z64vHhi/5PhqHbhdJzTcmcUUeL3k/85NZweIskXkE0hgLB6AnRMClaZQ2mYCy1shHosOa1dpLIk1P02/QGlfI3QGm+7aWe3VNF+yoE3bvhOV/yPSx6jVOvZDgmR0XiDv8amf2jqbeHd5Kb8N80BUPSx673SP+KayVaE+GGz/r39++tiV7Wgf+rKavYzIyppYKtpwAP9l0lnv1qlidB7SE6fja5SUMJBBPFLryQkD9NaRarEu9EeL4ydxvDhrXSkcRDBw9v5GToeFHpcPlzGJ7gv2aWUWzKhDtBRxYmC0HTLYYdoD2a+Wfr0MT0wRq2tWnwVDLsOjeQN4Fr/hAtRxQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a0Hqt4b9mc6qsgWaLB9t91UVzflAV03FQ6d6JphpDKA=;
 b=0r7HUqkrj1fxjzPYIOo1R4SbU82KNdBTahRS+MTfIZPAv5mPA/H132ZyZNxar1PEBK2RTNh5/MeTCR8+4QpQ4APOxbR7wzvLGkcNWMS1+C3G3PhCpR+n9B8pSGqycSKimrfISLUZtv45U1cdGbS9w0BxW59Gdab3min1chRHnAKUpeIW/zskPWSLuuHaqhhFOF7+5OwK+UdBS9gKesmv3NIBTY/pinPEQWXxSuou25X8tGec/q0fFaKGnnlzmTVKuVr5PB6s7A99dpP79Y5fxcPMxRcJw498hYur48gPOpi8Pc6kJcxQY3gk+xF2PdFyngEcMbpUmSu2GPV6rtFBqA==
From: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>
To: Artem Mygaiev <Artem_Mygaiev@epam.com>,
        "sstabellini@kernel.org"
	<sstabellini@kernel.org>,
        "julien@xen.org" <julien@xen.org>
CC: "committers@xenproject.org" <committers@xenproject.org>,
        "jbeulich@suse.com" <jbeulich@suse.com>,
        "vicooodin@gmail.com"
	<vicooodin@gmail.com>,
        "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "viktor.mitin.19@gmail.com" <viktor.mitin.19@gmail.com>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: Xen Coding style and clang-format
Thread-Topic: Xen Coding style and clang-format
Thread-Index: 
 AQHWlwq4nKYEhMN38U+xmvwRsutq+amA8joAgAAHUgCAAXyKAIAAENkAgAlxpgCACF7sgIABM44AgASIOQCAAAt9gIAAE8BwgAUk8gCAAYMogIAEOBYA
Date: Fri, 23 Oct 2020 09:39:27 +0000
Message-ID: <4d717bf66ed3f4874ed95061f6324d6395e66064.camel@epam.com>
References: <300923eb27aea4d19bff3c21bc51d749c315f8e3.camel@epam.com>
	 <4238269c-3bf4-3acb-7464-3d753f377eef@suse.com>
	 <E068C671-8009-4976-87B8-0709F6A5C3BF@citrix.com>
	 <b16dfb26e0916166180d5cbbe95278dc99277330.camel@epam.com>
	 <B64C5E67-7BEA-4C31-9089-AB8CC1F1E80F@citrix.com>
	 <3ff3f7d16cdab692178ce638da1a6b880817fb7e.camel@epam.com>
	 <64FE5ADB-2359-4A31-B1A1-925750515D98@citrix.com>
	 <b4d7e9a7-6c25-1f7f-86ce-867083beb81a@suse.com>
	 <4d4f351b152df2c50e18676ccd6ab6b4dc667801.camel@epam.com>
	 <5bd7cc00-c4c9-0737-897d-e76f22e2fd5b@xen.org>
	 <AM6PR03MB3687A99424FA9FD062F5FE4BF4030@AM6PR03MB3687.eurprd03.prod.outlook.com>
	 <alpine.DEB.2.21.2010191101250.12247@sstabellini-ThinkPad-T480s>
	 <b0f9c9e0-d43e-e05b-d4ab-40f3bf437643@xen.org>
In-Reply-To: <b0f9c9e0-d43e-e05b-d4ab-40f3bf437643@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.213.80]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ad20fb3f-4cce-4a9c-4dc6-08d87737889d
x-ms-traffictypediagnostic: AM5PR03MB2946:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM5PR03MB2946BE253F62AE5986623D04F21A0@AM5PR03MB2946.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 3RhQCjEesb9OjUahSTeiegClHbjQ+dkTgHSe4wq8XFoQklIK2TEBiWf/bYG5w4P9l4kv71g5Lizttd2O8Ob+pH8e5QGX0bgvYyONROJELHn7G8iWXipGCDAGLBcx+5qxE03gkeXqbUpXekcDfBewdRzPb1hQqb5srbpJUx5Q7v45cQHsXSX/TjvFtI8Z1NIBi8wGusoan6Wg5vhJMXrn2nyf5uq0pDt9l9n6kW8S3xnnZ6Oql646O7B+poZNNo9WIEkjDffFZMzRKgnbv8oOxwKxrzdT2v97Sv6T+YGGHDmLNBaKZNA0xTYhWeU03qRyYcvNRUJTRMKODQ+mK92sGg==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM7PR03MB6531.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(366004)(39860400002)(376002)(396003)(55236004)(54906003)(110136005)(6512007)(4326008)(86362001)(71200400001)(8676002)(4001150100001)(66946007)(316002)(83380400001)(76116006)(2616005)(5660300002)(478600001)(186003)(26005)(8936002)(36756003)(66556008)(66446008)(66476007)(6506007)(53546011)(107886003)(6486002)(64756008)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 j+A9Ty2XewwEcPKfqDxN49kb+qXIGPt66Ozh+qkXyUg9tF0Y/JJRUujGl/wI7kJ+0ALYqTQT40Y2lW2NeTxaZbjwfuAP4kvRPArniUCqmh1y4cVunobFXzLMbOeFLdBoo+7f2IjMes5QD3VE9ND6HgBeU4A57T/YfZaZNlDd3iotfxadjjLbgNrdpgZCqh/cANuf5+57GBp8yZvZofYe5wOx9z8GhEVVutUGitBPibifyJ/2siJbnwMrEkfdK4HFg+X4B50b8mD6tbXM+mlGIC/6lqnI+Swy6CBfqvdCy1WH/nW3cmRCZIOZjHFDlR4d8MEq7ENhBTdsm5DzfPtpg14Vb8KsMvFoJUgpS2Becys+K6sBlSV6LcufA+BrKc5BhPyKqqrGUeAweW+KLm5tfge9BlKlVMes1LJYxUmnrBQntxwMc7BQI5jJYpchQtJre/SAcKZb0phqlbJrF6lQQPJEwRI7mpYZd3nxa2As+3PqVDp7/TvjJjaK62ES/7lAd3NC37oxCS3zlXJ2hcZI7zhnWjLQJmLcjyNuQi5Rb1D/uHgRVBwV+Qf16pCQrH1J8owLxNFWklZhaFWbnHfopkjwtAu/jMGFhKlPXhckPHQoOs28KZzS9z8Xcl/x0rC/K0Qy/J2TTfN4SKo7Ws3RVQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <4EDE1D24A5CE004485C3F02D7F21119E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM7PR03MB6531.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad20fb3f-4cce-4a9c-4dc6-08d87737889d
X-MS-Exchange-CrossTenant-originalarrivaltime: 23 Oct 2020 09:39:27.0856
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: L4+CfWUUU16NWl5HDiJr+Qs/i5B9XBVjFK2wJhDHiHtlA+/BP1rzlIlRZQiWqqdfaTBiiFJomuPBJUmhv7BaVDzd7VRCfSbwbUrXzle34tA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR03MB2946
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.737
 definitions=2020-10-23_04:2020-10-23,2020-10-23 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 phishscore=0 spamscore=0 clxscore=1011 impostorscore=0 mlxlogscore=999
 lowpriorityscore=0 suspectscore=0 bulkscore=0 mlxscore=0 adultscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010230066

Hi all,

On Tue, 2020-10-20 at 18:13 +0100, Julien Grall wrote:
> Hi,
> 
> On 19/10/2020 19:07, Stefano Stabellini wrote:
> > On Fri, 16 Oct 2020, Artem Mygaiev wrote:
> > > -----Original Message-----
> > > From: Julien Grall <julien@xen.org>
> > > Sent: Friday, October 16, 2020 13:24
> > > To: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>; 
> > > jbeulich@suse.com; George.Dunlap@citrix.com
> > > Cc: Artem Mygaiev <Artem_Mygaiev@epam.com>; vicooodin@gmail.com; 
> > > xen-devel@lists.xenproject.org; committers@xenproject.org; 
> > > viktor.mitin.19@gmail.com; Volodymyr Babchuk <
> > > Volodymyr_Babchuk@epam.com>
> > > Subject: Re: Xen Coding style and clang-format
> > > 
> > > > Hi,
> > > > 
> > > > On 16/10/2020 10:42, Anastasiia Lukianenko wrote:
> > > > > Thanks for your advices, which helped me improve the checker.
> > > > > I
> > > > > understand that there are still some disagreements about the
> > > > > formatting, but as I said before, the checker cannot be very
> > > > > flexible
> > > > > and take into account all the author's ideas.
> > > > 
> > > > I am not sure what you refer by "author's ideas" here. The
> > > > checker
> > > > should follow a coding style (Xen or a modified version):
> > > >      - Anything not following the coding style should be
> > > > considered as
> > > > invalid.
> > > >      - Anything not written in the coding style should be left
> > > > untouched/uncommented by the checker.
> > > 
> > > Agree
> > > 
> > > > > I suggest using the
> > > > > checker not as a mandatory check, but as an indication to the
> > > > > author of
> > > > > possible formatting errors that he can correct or ignore.
> > > > 
> > > > I can understand that short term we would want to make it
> > > > optional so
> > > > either the coding style or the checker can be tuned. But I
> > > > don't think
> > > > this is an ideal situation to be in long term.
> > > > 
> > > > The goal of the checker is to automatically verify the coding
> > > > style and
> > > > get it consistent across Xen. If we make it optional or it is
> > > > "unreliable", then we lose the two benefits and possibly
> > > > increase the
> > > > contributor frustration as the checker would say A but we need
> > > > B.
> > > > 
> > > > Therefore, we need to make sure the checker and the coding
> > > > style match.
> > > > I don't have any opinions on the approach to achieve that.
> > > 
> > > Of the list of remaining issues from Anastasiia, looks like only
> > > items 5
> > > and 6 are conform to official Xen coding style. As for remainning
> > > ones,
> > > I would like to suggest disabling those that are controversial
> > > (items 1,
> > > 2, 4, 8, 9, 10). Maybe we want to have further discussion on
> > > refining
> > > coding style, we can use these as starting point. If we are open
> > > to
> > > extending style now, I would suggest to add rules that seem to be
> > > meaningful (items 3, 7) and keep them in checker.
> > 
> > Good approach. Yes, I would like to keep 3, 7 in the checker.
> > 
> > I would also keep 8 and add a small note to the coding style to say
> > that
> > comments should be aligned where possible.
> 
> +1 for this. Although, I don't mind the coding style used as long as
> we 
> have a checker a
bmQgdGhlIGNvZGUgaXMgY29uc2lzdGVudCA6KS4NCj4gDQo+IENoZWVycywNCj4gDQpUaGFuayB5
b3UgZm9yIGFkdmljZXMgOikNCk5vdyBJJ20gdHJ5aW5nIHRvIGZpZ3VyZSBvdXQgdGhlIG9wdGlv
biB0aGF0IG5lZWRzIHRvIGJlIGNvcnJlY3RlZCBmb3INCnRoZSBjaGVja2VyIHRvIHdvcmsgY29y
cmVjdGx5Og0KV3JhcHBpbmcgYW4gb3BlcmF0aW9uIHRvIGEgbmV3IGxpbmUgd2hlbiB0aGUgc3Ry
aW5nIGxlbmd0aCBpcyBsb25nZXINCnRoYW4gdGhlIGFsbG93ZWQNCi0gICAgc3RhdHVzID0gYWNw
aV9nZXRfdGFibGUoQUNQSV9TSUdfU1BDUiwgMCwNCi0gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgKHN0cnVjdCBhY3BpX3RhYmxlX2hlYWRlciAqKikmc3Bjcik7DQorICAgIHN0YXR1cyA9DQor
ICAgICAgICBhY3BpX2dldF90YWJsZShBQ1BJX1NJR19TUENSLCAwLCAoc3RydWN0IGFjcGlfdGFi
bGVfaGVhZGVyDQoqKikmc3Bjcik7DQpBcyBpdCB0dXJuZWQgb3V0LCB0aGlzIGNhc2UgaXMgcXVp
dGUgcmFyZSBhbmQgdGhlIHJ1bGUgZm9yIHRyYW5zZmVycmluZw0KcGFyYW1ldGVycyB3b3JrcyBj
b3JyZWN0bHkgaW4gb3RoZXIgY2FzZXM6DQotICAgIHN0YXR1cyA9IGFjcGlfZ2V0X3RhYmxlKEFD
UElfU0lHX1NQQ1IsIDAsICZzcGNyLCBBQ1BJX1NJR19TUEMsIDAsDQpBQ1BJX1NJR19TUCwgMCk7
DQorICAgIHN0YXR1cyA9IGFjcGlfZ2V0X3RhYmxlKEFDUElfU0lHX1NQQ1IsIDAsICZzcGNyLCBB
Q1BJX1NJR19TUEMsIDAsDQorICAgICAgICAgICAgICAgICAgICAgICAgICAgIEFDUElfU0lHX1NQ
LCAwKTsNClRodXMgdGhlIGNoZWNrZXIgZG9lcyBub3Qgd29yayBjb3JyZWN0bHkgaW4gdGhlIGNh
c2Ugd2hlbiB0aGUgcHJvdG90eXBlDQpwYXJhbWV0ZXIgc3RhcnRzIHdpdGggYSBwYXJlbnRoZXNp
cy4gSSdtIGdvaW5nIHRvIGFzayBjbGFuZyBjb21tdW5pdHkNCmlzIHRoaXMgYmVoYXZpb3IgaXMg
ZXhwZWN0ZWQgb3IgbWF5YmUgaXQncyBhIGJ1Zy4NCg0KUmVnYXJkcywNCkFuYXN0YXNpaWENCg==


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 09:59:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 09:59:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10816.28859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtr7-0006ko-F2; Fri, 23 Oct 2020 09:59:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10816.28859; Fri, 23 Oct 2020 09:59:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtr7-0006kh-C2; Fri, 23 Oct 2020 09:59:29 +0000
Received: by outflank-mailman (input) for mailman id 10816;
 Fri, 23 Oct 2020 09:59:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVtr6-0006kc-FQ
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 09:59:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fd16667-e882-4ac7-99a0-5ca7349c6dfb;
 Fri, 23 Oct 2020 09:59:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B0CE9AC82;
 Fri, 23 Oct 2020 09:59:26 +0000 (UTC)
X-Inumbo-ID: 7fd16667-e882-4ac7-99a0-5ca7349c6dfb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603447166;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FK+KHeA73qDtchEwvOhQniPfB5Wh9g8p8YISf7Hd9fA=;
	b=dVM8a96gvIPqUZvj+6tWbw3k4Ol0WiJLvueXC9HCjAHXPWWVyLupPPd6Z8PABXmlg4Snm5
	QVf7ggCxYcVbiDmMsXDTPn4bZ7OzKExlwKgN+ZGMrqqpQn7NQBtNc7rPJRyULz8OdYxhUE
	YhvOja0ZSe0kfWoPdh7HyclWRoU+n50=
Subject: Re: [PATCH v2 10/14] kernel-doc: public/vcpu.h
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 george.dunlap@citrix.com, ian.jackson@eu.citrix.com, julien@xen.org,
 wl@xen.org, Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
 <20201021000011.15351-10-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <06711c5b-1ddd-a2dc-ccbe-17098c63bba8@suse.com>
Date: Fri, 23 Oct 2020 11:59:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021000011.15351-10-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.10.2020 02:00, Stefano Stabellini wrote:
> @@ -140,38 +173,74 @@ struct vcpu_register_runstate_memory_area {
>  typedef struct vcpu_register_runstate_memory_area vcpu_register_runstate_memory_area_t;
>  DEFINE_XEN_GUEST_HANDLE(vcpu_register_runstate_memory_area_t);
>  
> -/*
> - * Set or stop a VCPU's periodic timer. Every VCPU has one periodic timer
> - * which can be set via these commands. Periods smaller than one millisecond
> - * may not be supported.
> +/**
> + * DOC: VCPUOP_set_periodic_timer
> + *
> + * Set a VCPU's periodic timer. Every VCPU has one periodic timer which
> + * can be set via this command. Periods smaller than one millisecond may
> + * not be supported.
> + *
> + * @arg == vcpu_set_periodic_timer_t
> + */
> +#define VCPUOP_set_periodic_timer    6
> +/**
> + * DOC: VCPUOP_stop_periodic_timer
> + *
> + * Stop a VCPU's periodic timer.
> + *
> + * @arg == NULL
> + */
> +#define VCPUOP_stop_periodic_timer   7
> +/**
> + * struct vcpu_set_periodic_timer
>   */
> -#define VCPUOP_set_periodic_timer    6 /* arg == vcpu_set_periodic_timer_t */
> -#define VCPUOP_stop_periodic_timer   7 /* arg == NULL */
>  struct vcpu_set_periodic_timer {
>      uint64_t period_ns;
>  };
>  typedef struct vcpu_set_periodic_timer vcpu_set_periodic_timer_t;
>  DEFINE_XEN_GUEST_HANDLE(vcpu_set_periodic_timer_t);
>  
> -/*
> - * Set or stop a VCPU's single-shot timer. Every VCPU has one single-shot
> - * timer which can be set via these commands.
> +/**
> + * DOC: VCPUOP_set_singleshot_timer
> + *
> + * Set a VCPU's single-shot timer. Every VCPU has one single-shot timer
> + * which can be set via this command.
> + *
> + * @arg == vcpu_set_singleshot_timer_t
> + */
> +#define VCPUOP_set_singleshot_timer  8
> +/**
> + * DOC: VCPUOP_stop_singleshot_timer
> + *
> + * Stop a VCPU's single-shot timer.
> + *
> + * arg == NULL

Judging from earlier (and later instances) - @arg?

Jan



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:02:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:02:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10820.28871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtu7-0007da-UZ; Fri, 23 Oct 2020 10:02:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10820.28871; Fri, 23 Oct 2020 10:02:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtu7-0007dT-RX; Fri, 23 Oct 2020 10:02:35 +0000
Received: by outflank-mailman (input) for mailman id 10820;
 Fri, 23 Oct 2020 10:02:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVtu6-0007dO-Rm
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:02:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ad01d1d-e56c-4cd4-bc6b-f9c957a67499;
 Fri, 23 Oct 2020 10:02:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4B2BCB308;
 Fri, 23 Oct 2020 10:02:33 +0000 (UTC)
X-Inumbo-ID: 7ad01d1d-e56c-4cd4-bc6b-f9c957a67499
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603447353;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UiAwWQwxjwUyDse6kU5DAOIaLAbcxiGZF8selFBQ4vw=;
	b=URzPz2Sc7ROqNLbWsnONq+3DrLpXeWOm74LGrOhBKu/Ml6n86zsmjLwEc4dJndhtadNI6k
	GvqX1gZdcAXxgMrE9v62a+5dtshpMzIO6+Bn+L0EJO9RTigL2kS3OGp8rA7s+oeQGWW3ZA
	MdZjlvNUnPrAKRNmNZdTM6hv2iFJR8E=
Subject: Re: [PATCH v2 11/14] kernel-doc: public/version.h
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 george.dunlap@citrix.com, ian.jackson@eu.citrix.com, julien@xen.org,
 wl@xen.org, Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
 <20201021000011.15351-11-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e0ef4e16-eba2-1cdf-fc75-4181c12fb746@suse.com>
Date: Fri, 23 Oct 2020 12:02:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021000011.15351-11-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.10.2020 02:00, Stefano Stabellini wrote:
> --- a/xen/include/public/version.h
> +++ b/xen/include/public/version.h
> @@ -30,19 +30,32 @@
>  
>  #include "xen.h"
>  
> -/* NB. All ops return zero on success, except XENVER_{version,pagesize}
> - * XENVER_{version,pagesize,build_id} */
> +/**
> + * DOC: XENVER_*
> + *
> + * NB. All ops return zero on success, except for:
> + *
> + * - XENVER_{version,pagesize,build_id}
> + */
>  
> -/* arg == NULL; returns major:minor (16:16). */
> +/**
> + * DOC: XENVER_version
> + * @arg == NULL; returns major:minor (16:16).
> + */
>  #define XENVER_version      0
>  
> -/* arg == xen_extraversion_t. */
> +/**
> + * DOC: XENVER_extraversion
> + * @arg == xen_extraversion_t.
> + */
>  #define XENVER_extraversion 1
>  typedef char xen_extraversion_t[16];
>  #define XEN_EXTRAVERSION_LEN (sizeof(xen_extraversion_t))
>  
> -/* arg == xen_compile_info_t. */
>  #define XENVER_compile_info 2
> +/**
> + * struct xen_compile_info - XENVER_compile_info
> + */

At the example of this one - elsewhere I think I've seen you also
use single-line comments starting with /**. I can accept the choice
of multi-line here, but I think I'd like ./CODING_STYLE to then be
amended to allow such in certain cases.

Jan
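[For context on the two comment forms being discussed: kernel-doc accepts both a multi-line block above a declaration and a single-line inline form at a struct member. A sketch with a hypothetical struct, not taken from the patch:]

```c
/**
 * struct sample_info - multi-line kernel-doc form
 * @version: update counter, incremented around writes
 *
 * The long description sits after the member list.
 */
struct sample_info {
    unsigned int version;
    /** @stamp: single-line inline form, documented at the member */
    unsigned long stamp;
};
```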


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:08:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:08:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10823.28883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtzh-0007zG-Ll; Fri, 23 Oct 2020 10:08:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10823.28883; Fri, 23 Oct 2020 10:08:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVtzh-0007z9-Id; Fri, 23 Oct 2020 10:08:21 +0000
Received: by outflank-mailman (input) for mailman id 10823;
 Fri, 23 Oct 2020 10:08:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVtzg-0007z4-Ar
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:08:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 159b2d48-822c-472b-97db-a67788535ab3;
 Fri, 23 Oct 2020 10:08:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF3CCACF5;
 Fri, 23 Oct 2020 10:08:17 +0000 (UTC)
X-Inumbo-ID: 159b2d48-822c-472b-97db-a67788535ab3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603447698;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=F4eZzYV8NnxDrphyFdWRCQ11jStzPJyDHfZINY2aw1o=;
	b=YEfXbcROsUuzDbPyyhyVVthKfbyKG6ZSDdhk3Rmor9awPXg/alrhyeswnMe6QPJVQN6ruU
	n1ZEeKg16mWQEVVbNYx5DZz8vARJ9+6jgF8zPccw8uNcdrHAuLZJvx7PjGQqEYdH2Mgc85
	pA68WLcQN8x4s2PBnJpCZHgfTavr0n0=
Subject: Re: [PATCH v2 12/14] kernel-doc: public/xen.h
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, andrew.cooper3@citrix.com,
 george.dunlap@citrix.com, ian.jackson@eu.citrix.com, julien@xen.org,
 wl@xen.org, Bertrand.Marquis@arm.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <alpine.DEB.2.21.2010201646370.12247@sstabellini-ThinkPad-T480s>
 <20201021000011.15351-12-sstabellini@kernel.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9f762160-e742-a14f-96a9-8ba908b7ebce@suse.com>
Date: Fri, 23 Oct 2020 12:08:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201021000011.15351-12-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.10.2020 02:00, Stefano Stabellini wrote:
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -81,14 +81,62 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>  
>  #endif
>  
> -/*
> - * HYPERCALLS
> - */
> -
> -/* `incontents 100 hcalls List of hypercalls
> - * ` enum hypercall_num { // __HYPERVISOR_* => HYPERVISOR_*()
> +/**
> + * DOC: HYPERCALLS
> + *
> + * List of hypercalls
> + *
> + * - __HYPERVISOR_set_trap_table
> + * - __HYPERVISOR_mmu_update
> + * - __HYPERVISOR_set_gdt
> + * - __HYPERVISOR_stack_switch
> + * - __HYPERVISOR_set_callbacks
> + * - __HYPERVISOR_fpu_taskswitch
> + * - __HYPERVISOR_sched_op_compat
> + * - __HYPERVISOR_platform_op
> + * - __HYPERVISOR_set_debugreg
> + * - __HYPERVISOR_get_debugreg
> + * - __HYPERVISOR_update_descriptor
> + * - __HYPERVISOR_memory_op
> + * - __HYPERVISOR_multicall
> + * - __HYPERVISOR_update_va_mapping
> + * - __HYPERVISOR_set_timer_op
> + * - __HYPERVISOR_event_channel_op_compat
> + * - __HYPERVISOR_xen_version
> + * - __HYPERVISOR_console_io
> + * - __HYPERVISOR_physdev_op_compat
> + * - __HYPERVISOR_grant_table_op
> + * - __HYPERVISOR_vm_assist
> + * - __HYPERVISOR_update_va_mapping_otherdomain
> + * - __HYPERVISOR_iret
> + * - __HYPERVISOR_vcpu_op
> + * - __HYPERVISOR_set_segment_base
> + * - __HYPERVISOR_mmuext_op
> + * - __HYPERVISOR_xsm_op
> + * - __HYPERVISOR_nmi_op
> + * - __HYPERVISOR_sched_op
> + * - __HYPERVISOR_callback_op
> + * - __HYPERVISOR_xenoprof_op
> + * - __HYPERVISOR_event_channel_op
> + * - __HYPERVISOR_physdev_op
> + * - __HYPERVISOR_hvm_op
> + * - __HYPERVISOR_sysctl
> + * - __HYPERVISOR_domctl
> + * - __HYPERVISOR_kexec_op
> + * - __HYPERVISOR_tmem_op
> + * - __HYPERVISOR_argo_op
> + * - __HYPERVISOR_xenpmu_op
> + * - __HYPERVISOR_dm_op
> + * - __HYPERVISOR_hypfs_op
> + * - __HYPERVISOR_arch_0
> + * - __HYPERVISOR_arch_1
> + * - __HYPERVISOR_arch_2
> + * - __HYPERVISOR_arch_3
> + * - __HYPERVISOR_arch_4
> + * - __HYPERVISOR_arch_5
> + * - __HYPERVISOR_arch_6
> + * - __HYPERVISOR_arch_7
>   */
> -
>  #define __HYPERVISOR_set_trap_table        0
>  #define __HYPERVISOR_mmu_update            1
>  #define __HYPERVISOR_set_gdt               2

I find this (and more similar ones below) addition of redundancy
quite unhelpful. Is there really no way at all to avoid such?

> @@ -650,34 +761,40 @@ typedef struct multicall_entry multicall_entry_t;
>  DEFINE_XEN_GUEST_HANDLE(multicall_entry_t);
>  
>  #if __XEN_INTERFACE_VERSION__ < 0x00040400
> -/*
> +/**
> + * DOC: NR_EVENT_CHANNELS
> + *
>   * Event channel endpoints per domain (when using the 2-level ABI):
>   *  1024 if a long is 32 bits; 4096 if a long is 64 bits.
>   */
>  #define NR_EVENT_CHANNELS EVTCHN_2L_NR_CHANNELS
>  #endif
>  
> +/**
> + * struct vcpu_time_info
> + *
> + * Updates to the following values are preceded and followed by an
> + * increment of 'version'. The guest can therefore detect updates by
> + * looking for changes to 'version'. If the least-significant bit of
> + * the version number is set then an update is in progress and the guest
> + * must wait to read a consistent set of values.
> + * The correct way to interact with the version number is similar to
> + * Linux's seqlock: see the implementations of read_seqbegin/read_seqretry.
> + *
> + * Current system time:
> + *   system_time +
> + *   ((((tsc - tsc_timestamp) << tsc_shift) * tsc_to_system_mul) >> 32)
> + * CPU frequency (Hz):
> + *   ((10^9 << 32) / tsc_to_system_mul) >> tsc_shift
> + */
>  struct vcpu_time_info {
> -    /*
> -     * Updates to the following values are preceded and followed by an
> -     * increment of 'version'. The guest can therefore detect updates by
> -     * looking for changes to 'version'. If the least-significant bit of
> -     * the version number is set then an update is in progress and the guest
> -     * must wait to read a consistent set of values.
> -     * The correct way to interact with the version number is similar to
> -     * Linux's seqlock: see the implementations of read_seqbegin/read_seqretry.
> -     */
>      uint32_t version;
>      uint32_t pad0;
> -    uint64_t tsc_timestamp;   /* TSC at last update of time vals.  */
> -    uint64_t system_time;     /* Time, in nanosecs, since boot.    */
> -    /*
> -     * Current system time:
> -     *   system_time +
> -     *   ((((tsc - tsc_timestamp) << tsc_shift) * tsc_to_system_mul) >> 32)
> -     * CPU frequency (Hz):
> -     *   ((10^9 << 32) / tsc_to_system_mul) >> tsc_shift
> -     */
> +    /** @tsc_timestamp: TSC at last update of time vals. */
> +    uint64_t tsc_timestamp;
> +    /** @system_time: Time, in nanosecs, since boot. */
> +    uint64_t system_time;

At the example of this (there are more below) - why the moving of the
main comment out of the struct, when comments inside the struct are
still used and apparently serving the (doc) purpose? This is even
more seeing that you ...

> @@ -692,18 +809,23 @@ typedef struct vcpu_time_info vcpu_time_info_t;
>  #define XEN_PVCLOCK_TSC_STABLE_BIT     (1 << 0)
>  #define XEN_PVCLOCK_GUEST_STOPPED      (1 << 1)
>  
> +/**
> + * struct vcpu_info
> + */
>  struct vcpu_info {
> -    /*
> -     * 'evtchn_upcall_pending' is written non-zero by Xen to indicate
> -     * a pending notification for a particular VCPU. It is then cleared
> -     * by the guest OS /before/ checking for pending work, thus avoiding
> -     * a set-and-check race. Note that the mask is only accessed by Xen
> -     * on the CPU that is currently hosting the VCPU. This means that the
> -     * pending and mask flags can be updated by the guest without special
> -     * synchronisation (i.e., no need for the x86 LOCK prefix).
> -     * This may seem suboptimal because if the pending flag is set by
> -     * a different CPU then an IPI may be scheduled even when the mask
> -     * is set. However, note:
> +    /**
> +     * @evtchn_upcall_pending:
> +     *
> +     * it is written non-zero by Xen to indicate a pending notification
> +     * for a particular VCPU. It is then cleared by the guest OS
> +     * /before/ checking for pending work, thus avoiding a set-and-check
> +     * race. Note that the mask is only accessed by Xen on the CPU that
> +     * is currently hosting the VCPU. This means that the pending and
> +     * mask flags can be updated by the guest without special
> +     * synchronisation (i.e., no need for the x86 LOCK prefix).  This
> +     * may seem suboptimal because if the pending flag is set by a
> +     * different CPU then an IPI may be scheduled even when the mask is
> +     * set. However, note:
>       *  1. The task of 'interrupt holdoff' is covered by the per-event-
>       *     channel mask bits. A 'noisy' event that is continually being
>       *     triggered can be masked at source at this very precise

... don't move it here.

> @@ -732,61 +854,69 @@ struct vcpu_info {
>  typedef struct vcpu_info vcpu_info_t;
>  #endif
>  
> -/*
> - * `incontents 200 startofday_shared Start-of-day shared data structure
> +/**
> + * struct shared_info - Start-of-day shared data structure
> + *
>   * Xen/kernel shared data -- pointer provided in start_info.
>   *
>   * This structure is defined to be both smaller than a page, and the
>   * only data on the shared page, but may vary in actual size even within
>   * compatible Xen versions; guests should not rely on the size
>   * of this structure remaining constant.
> + *
> + * A domain can create "event channels" on which it can send and receive
> + * asynchronous event notifications. There are three classes of event that
> + * are delivered by this mechanism:
> + *  1. Bi-directional inter- and intra-domain connections. Domains must
> + *     arrange out-of-band to set up a connection (usually by allocating
> + *     an unbound 'listener' port and avertising that via a storage service
> + *     such as xenstore).
> + *  2. Physical interrupts. A domain with suitable hardware-access
> + *     privileges can bind an event-channel port to a physical interrupt
> + *     source.
> + *  3. Virtual interrupts ('events'). A domain can bind an event-channel
> + *     port to a virtual interrupt source, such as the virtual-timer
> + *     device or the emergency console.
> + *
> + *
> + * @evtchn_pending: pending notifications

Please avoid double blank lines (even if they're just almost blank).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:15:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:15:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10827.28895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu6n-0000Xn-EH; Fri, 23 Oct 2020 10:15:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10827.28895; Fri, 23 Oct 2020 10:15:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu6n-0000Xg-BE; Fri, 23 Oct 2020 10:15:41 +0000
Received: by outflank-mailman (input) for mailman id 10827;
 Fri, 23 Oct 2020 10:15:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVu6l-0000Xb-W0
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:15:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f769d8a3-91f5-44f3-8792-6680d4277fc5;
 Fri, 23 Oct 2020 10:15:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4959CABD1;
 Fri, 23 Oct 2020 10:15:38 +0000 (UTC)
X-Inumbo-ID: f769d8a3-91f5-44f3-8792-6680d4277fc5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448138;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=MLwvR0bR6+91bNT+kLoaVPWwhWujrDgATP2ywjwdqf0=;
	b=M+9SsSGoWFUFaYx7Q+hQ/Tteg4ewAK6PKLZW/ZiHZVKZpuZItTI0w0G/siF27huwQ9JBpw
	P81J0cI92BoyOE9/B9QURoNCH5c+yk0r6zkZ2BRob6XbRGw5NfmfLqSkhVfrONDNL01TnK
	XvEMMDuaccxOPlnbRgpv2aTeWCUWBhk=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/8] xen: beginnings of moving library-like code into an
 archive
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Date: Fri, 23 Oct 2020 12:15:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In a few cases we link in library-like functions when they're not
actually needed. While we could use Kconfig options for each one
of them, I think the better approach for such generic code is to
build it always (thus making sure a build issue can't be introduced
for these in any configuration, however exotic) and then put it into
an archive, for the linker to pick up as needed. The series here
presents a first few tiny steps towards such a goal.

Note that we can't use thin archives yet, due to our tool chain
(binutils) baseline being too low.

The first patch actually isn't directly related to the rest of the
series; I ran into the issue it addresses while (originally) making
patch 3 convert to using $(call if_changed,ld) "on the fly", and am
including it here to avoid undue redundancy. IOW it's a full
(contextual and functional) prerequisite to the series.

Further almost immediate steps I'd like to take if the approach
meets no opposition are
- split and move the rest of common/lib.c,
- split and move common/string.c, dropping the need for all the
  __HAVE_ARCH_* (implying possible per-arch archives then need to
  be specified ahead of lib/lib.a on the linker command lines),
- move common/libelf/ and common/libfdt/.

It's really only patch 7 which has changed in v2, but since no
other feedback arrived which would require adjustments, I'm
resending with just this one change.

1: lib: split _ctype[] into its own object, under lib/
2: lib: collect library files in an archive
3: lib: move list sorting code
4: lib: move parse_size_and_unit()
5: lib: move init_constructors()
6: lib: move rbtree code
7: lib: move bsearch code
8: lib: move sort code

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:16:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:16:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10830.28906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu7q-0000eZ-P6; Fri, 23 Oct 2020 10:16:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10830.28906; Fri, 23 Oct 2020 10:16:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu7q-0000eS-M9; Fri, 23 Oct 2020 10:16:46 +0000
Received: by outflank-mailman (input) for mailman id 10830;
 Fri, 23 Oct 2020 10:16:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVu7p-0000eL-Ay
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:16:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f24def1b-b3e2-4f0b-bc3b-f1b46a8fdc7a;
 Fri, 23 Oct 2020 10:16:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4AD78B7F2;
 Fri, 23 Oct 2020 10:16:42 +0000 (UTC)
X-Inumbo-ID: f24def1b-b3e2-4f0b-bc3b-f1b46a8fdc7a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448202;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6EOqujdf6R4/Hc88zQjNx0MmXisrFNGShOlDaeJv/5E=;
	b=E4NuXhz+vXbgShrxaN2Ti5LdlFvF0Xeloj0WBjRheE6XkvQddUNfZUVKFEtuIzPzVMBxpj
	MzcX3906AvxvEWCwEf1djqDxsWzEcCRJXsRqMmyefFH785Qp2K9luRQJtUDTbBPlCQmmVk
	16XbJsHPQKy8DncT57N3dM+XeMSEnnk=
Subject: [PATCH v2 1/8] lib: split _ctype[] into its own object, under lib/
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Message-ID: <8bf44bbd-8c39-7ab1-3ccb-52bf3744592b@suse.com>
Date: Fri, 23 Oct 2020 12:16:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Besides tidying, this is in preparation for starting to use an archive
rather than an object file for generic library code which individual
arches (or even specific configurations within a single arch) may or
may not need.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/Makefile     |  3 ++-
 xen/Rules.mk     |  2 +-
 xen/common/lib.c | 29 -----------------------------
 xen/lib/Makefile |  1 +
 xen/lib/ctype.c  | 38 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 42 insertions(+), 31 deletions(-)
 create mode 100644 xen/lib/ctype.c

diff --git a/xen/Makefile b/xen/Makefile
index bf0c804d4352..73bdc326c549 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -331,6 +331,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) include
 	$(MAKE) $(clean) common
 	$(MAKE) $(clean) drivers
+	$(MAKE) $(clean) lib
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
@@ -414,7 +415,7 @@ include/asm-$(TARGET_ARCH)/asm-offsets.h: arch/$(TARGET_ARCH)/asm-offsets.s
 	  echo ""; \
 	  echo "#endif") <$< >$@
 
-SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers test
+SUBDIRS = xsm arch/$(TARGET_ARCH) common drivers lib test
 define all_sources
     ( find include/asm-$(TARGET_ARCH) -name '*.h' -print; \
       find include -name 'asm-*' -prune -o -name '*.h' -print; \
diff --git a/xen/Rules.mk b/xen/Rules.mk
index 891c94e6ad00..333e19bec343 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -36,7 +36,7 @@ TARGET := $(BASEDIR)/xen
 # Note that link order matters!
 ALL_OBJS-y               += $(BASEDIR)/common/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/drivers/built_in.o
-ALL_OBJS-$(CONFIG_X86)   += $(BASEDIR)/lib/built_in.o
+ALL_OBJS-y               += $(BASEDIR)/lib/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
 ALL_OBJS-$(CONFIG_CRYPTO)   += $(BASEDIR)/crypto/built_in.o
diff --git a/xen/common/lib.c b/xen/common/lib.c
index b2b799da44c5..a224efa8f6e8 100644
--- a/xen/common/lib.c
+++ b/xen/common/lib.c
@@ -1,37 +1,8 @@
-
-#include <xen/ctype.h>
 #include <xen/lib.h>
 #include <xen/types.h>
 #include <xen/init.h>
 #include <asm/byteorder.h>
 
-/* for ctype.h */
-const unsigned char _ctype[] = {
-    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 0-7 */
-    _C,_C|_S,_C|_S,_C|_S,_C|_S,_C|_S,_C,_C,         /* 8-15 */
-    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 16-23 */
-    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 24-31 */
-    _S|_SP,_P,_P,_P,_P,_P,_P,_P,                    /* 32-39 */
-    _P,_P,_P,_P,_P,_P,_P,_P,                        /* 40-47 */
-    _D,_D,_D,_D,_D,_D,_D,_D,                        /* 48-55 */
-    _D,_D,_P,_P,_P,_P,_P,_P,                        /* 56-63 */
-    _P,_U|_X,_U|_X,_U|_X,_U|_X,_U|_X,_U|_X,_U,      /* 64-71 */
-    _U,_U,_U,_U,_U,_U,_U,_U,                        /* 72-79 */
-    _U,_U,_U,_U,_U,_U,_U,_U,                        /* 80-87 */
-    _U,_U,_U,_P,_P,_P,_P,_P,                        /* 88-95 */
-    _P,_L|_X,_L|_X,_L|_X,_L|_X,_L|_X,_L|_X,_L,      /* 96-103 */
-    _L,_L,_L,_L,_L,_L,_L,_L,                        /* 104-111 */
-    _L,_L,_L,_L,_L,_L,_L,_L,                        /* 112-119 */
-    _L,_L,_L,_P,_P,_P,_P,_C,                        /* 120-127 */
-    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,                /* 128-143 */
-    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,                /* 144-159 */
-    _S|_SP,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,   /* 160-175 */
-    _P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,       /* 176-191 */
-    _U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,       /* 192-207 */
-    _U,_U,_U,_U,_U,_U,_U,_P,_U,_U,_U,_U,_U,_U,_U,_L,       /* 208-223 */
-    _L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,       /* 224-239 */
-    _L,_L,_L,_L,_L,_L,_L,_P,_L,_L,_L,_L,_L,_L,_L,_L};      /* 240-255 */
-
 /*
  * A couple of 64 bit operations ported from FreeBSD.
  * The code within the '#if BITS_PER_LONG == 32' block below, and no other
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 7019ca00e8fd..53b1da025e0d 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1 +1,2 @@
+obj-y += ctype.o
 obj-$(CONFIG_X86) += x86/
diff --git a/xen/lib/ctype.c b/xen/lib/ctype.c
new file mode 100644
index 000000000000..7b233a335fdf
--- /dev/null
+++ b/xen/lib/ctype.c
@@ -0,0 +1,38 @@
+#include <xen/ctype.h>
+
+/* for ctype.h */
+const unsigned char _ctype[] = {
+    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 0-7 */
+    _C,_C|_S,_C|_S,_C|_S,_C|_S,_C|_S,_C,_C,         /* 8-15 */
+    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 16-23 */
+    _C,_C,_C,_C,_C,_C,_C,_C,                        /* 24-31 */
+    _S|_SP,_P,_P,_P,_P,_P,_P,_P,                    /* 32-39 */
+    _P,_P,_P,_P,_P,_P,_P,_P,                        /* 40-47 */
+    _D,_D,_D,_D,_D,_D,_D,_D,                        /* 48-55 */
+    _D,_D,_P,_P,_P,_P,_P,_P,                        /* 56-63 */
+    _P,_U|_X,_U|_X,_U|_X,_U|_X,_U|_X,_U|_X,_U,      /* 64-71 */
+    _U,_U,_U,_U,_U,_U,_U,_U,                        /* 72-79 */
+    _U,_U,_U,_U,_U,_U,_U,_U,                        /* 80-87 */
+    _U,_U,_U,_P,_P,_P,_P,_P,                        /* 88-95 */
+    _P,_L|_X,_L|_X,_L|_X,_L|_X,_L|_X,_L|_X,_L,      /* 96-103 */
+    _L,_L,_L,_L,_L,_L,_L,_L,                        /* 104-111 */
+    _L,_L,_L,_L,_L,_L,_L,_L,                        /* 112-119 */
+    _L,_L,_L,_P,_P,_P,_P,_C,                        /* 120-127 */
+    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,                /* 128-143 */
+    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,                /* 144-159 */
+    _S|_SP,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,   /* 160-175 */
+    _P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,_P,       /* 176-191 */
+    _U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,_U,       /* 192-207 */
+    _U,_U,_U,_U,_U,_U,_U,_P,_U,_U,_U,_U,_U,_U,_U,_L,       /* 208-223 */
+    _L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,_L,       /* 224-239 */
+    _L,_L,_L,_L,_L,_L,_L,_P,_L,_L,_L,_L,_L,_L,_L,_L};      /* 240-255 */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.22.0




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:17:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10831.28919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu86-0000jj-1u; Fri, 23 Oct 2020 10:17:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10831.28919; Fri, 23 Oct 2020 10:17:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu85-0000jc-UZ; Fri, 23 Oct 2020 10:17:01 +0000
Received: by outflank-mailman (input) for mailman id 10831;
 Fri, 23 Oct 2020 10:17:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVu84-0000jK-Mj
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:17:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9bb96d17-dae4-4dd4-b283-f43c8215731d;
 Fri, 23 Oct 2020 10:16:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 668EDB119;
 Fri, 23 Oct 2020 10:16:58 +0000 (UTC)
X-Inumbo-ID: 9bb96d17-dae4-4dd4-b283-f43c8215731d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448218;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RmpAqtIysPPO5lFZ7Uf4Eoflsy4m5GRirSii+2+b7OA=;
	b=aSr1PvsdOKruMZ8+mtgvBW7J1bYL48F7UBAj8fWyJfYpL8Rjth8nZd709IegO297/OIedL
	4IUH4iOfqnmiaL6vhtNJ5eXQd49bcWm8g1w3A+OaXzTLVgpAJNW7e8loa5rg80y/GntYQU
	/XgfiC1zen8jV1qqAUPEHWDQWhlkjek=
Subject: [PATCH v2 2/8] lib: collect library files in an archive
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Message-ID: <78dccec2-064f-d4b1-1865-eb3f1f14247a@suse.com>
Date: Fri, 23 Oct 2020 12:17:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In order to (subsequently) drop odd things like CONFIG_NEEDS_LIST_SORT
just to avoid bloating binaries when only some arches and/or
configurations need generic library routines, combine objects under lib/
into an archive, out of which the linker can then pick the necessary
objects.

Note that we can't use thin archives just yet, until we've raised the
minimum required binutils version suitably.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/Rules.mk          | 33 +++++++++++++++++++++++++++------
 xen/arch/arm/Makefile |  6 +++---
 xen/arch/x86/Makefile |  8 ++++----
 xen/lib/Makefile      |  3 ++-
 4 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/xen/Rules.mk b/xen/Rules.mk
index 333e19bec343..e59c7f213f77 100644
--- a/xen/Rules.mk
+++ b/xen/Rules.mk
@@ -41,12 +41,16 @@ ALL_OBJS-y               += $(BASEDIR)/xsm/built_in.o
 ALL_OBJS-y               += $(BASEDIR)/arch/$(TARGET_ARCH)/built_in.o
 ALL_OBJS-$(CONFIG_CRYPTO)   += $(BASEDIR)/crypto/built_in.o
 
+ALL_LIBS-y               := $(BASEDIR)/lib/lib.a
+
 # Initialise some variables
+lib-y :=
 targets :=
 CFLAGS-y :=
 AFLAGS-y :=
 
 ALL_OBJS := $(ALL_OBJS-y)
+ALL_LIBS := $(ALL_LIBS-y)
 
 SPECIAL_DATA_SECTIONS := rodata $(foreach a,1 2 4 8 16, \
                                             $(foreach w,1 2 4, \
@@ -60,7 +64,14 @@ include Makefile
 # ---------------------------------------------------------------------------
 
 quiet_cmd_ld = LD      $@
-cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(real-prereqs)
+cmd_ld = $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out %.a,$(real-prereqs)) \
+               --start-group $(filter %.a,$(real-prereqs)) --end-group
+
+# Archive
+# ---------------------------------------------------------------------------
+
+quiet_cmd_ar = AR      $@
+cmd_ar = rm -f $@; $(AR) cPrs $@ $(real-prereqs)
 
 # Objcopy
 # ---------------------------------------------------------------------------
@@ -86,6 +97,10 @@ obj-y    := $(patsubst %/, %/built_in.o, $(obj-y))
 # tell kbuild to descend
 subdir-obj-y := $(filter %/built_in.o, $(obj-y))
 
+# Libraries are always collected in one lib file.
+# Filter out objects already built-in
+lib-y := $(filter-out $(obj-y), $(sort $(lib-y)))
+
 $(filter %.init.o,$(obj-y) $(obj-bin-y) $(extra-y)): CFLAGS-y += -DINIT_SECTIONS_ONLY
 
 ifeq ($(CONFIG_COVERAGE),y)
@@ -129,19 +144,25 @@ include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk
 c_flags += $(CFLAGS-y)
 a_flags += $(CFLAGS-y) $(AFLAGS-y)
 
-built_in.o: $(obj-y) $(extra-y)
+built_in.o: $(obj-y) $(if $(strip $(lib-y)),lib.a) $(extra-y)
 ifeq ($(obj-y),)
 	$(CC) $(c_flags) -c -x c /dev/null -o $@
 else
 ifeq ($(CONFIG_LTO),y)
-	$(LD_LTO) -r -o $@ $(filter-out $(extra-y),$^)
+	$(LD_LTO) -r -o $@ $(filter-out lib.a $(extra-y),$^)
 else
-	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$^)
+	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out lib.a $(extra-y),$^)
 endif
 endif
 
+lib.a: $(lib-y) FORCE
+	$(call if_changed,ar)
+
 targets += built_in.o
-targets += $(filter-out $(subdir-obj-y), $(obj-y)) $(extra-y)
+ifneq ($(strip $(lib-y)),)
+targets += lib.a
+endif
+targets += $(filter-out $(subdir-obj-y), $(obj-y) $(lib-y)) $(extra-y)
 targets += $(MAKECMDGOALS)
 
 built_in_bin.o: $(obj-bin-y) $(extra-y)
@@ -155,7 +176,7 @@ endif
 PHONY += FORCE
 FORCE:
 
-%/built_in.o: FORCE
+%/built_in.o %/lib.a: FORCE
 	$(MAKE) -f $(BASEDIR)/Rules.mk -C $* built_in.o
 
 %/built_in_bin.o: FORCE
diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 296c5e68bbc3..612a83b315c8 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -90,14 +90,14 @@ endif
 
 ifeq ($(CONFIG_LTO),y)
 # Gather all LTO objects together
-prelink_lto.o: $(ALL_OBJS)
-	$(LD_LTO) -r -o $@ $^
+prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
+	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
 
 # Link it with all the binary objects
 prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o
 	$(call if_changed,ld)
 else
-prelink.o: $(ALL_OBJS) FORCE
+prelink.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
 	$(call if_changed,ld)
 endif
 
diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index 9b368632fb43..8f2180485b2b 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -132,8 +132,8 @@ EFI_OBJS-$(XEN_BUILD_EFI) := efi/relocs-dummy.o
 
 ifeq ($(CONFIG_LTO),y)
 # Gather all LTO objects together
-prelink_lto.o: $(ALL_OBJS)
-	$(LD_LTO) -r -o $@ $^
+prelink_lto.o: $(ALL_OBJS) $(ALL_LIBS)
+	$(LD_LTO) -r -o $@ $(filter-out %.a,$^) --start-group $(filter %.a,$^) --end-group
 
 # Link it with all the binary objects
 prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $(EFI_OBJS-y) FORCE
@@ -142,10 +142,10 @@ prelink.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o $
 prelink-efi.o: $(patsubst %/built_in.o,%/built_in_bin.o,$(ALL_OBJS)) prelink_lto.o FORCE
 	$(call if_changed,ld)
 else
-prelink.o: $(ALL_OBJS) $(EFI_OBJS-y) FORCE
+prelink.o: $(ALL_OBJS) $(ALL_LIBS) $(EFI_OBJS-y) FORCE
 	$(call if_changed,ld)
 
-prelink-efi.o: $(ALL_OBJS) FORCE
+prelink-efi.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
 	$(call if_changed,ld)
 endif
 
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 53b1da025e0d..b8814361d63e 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1,2 +1,3 @@
-obj-y += ctype.o
 obj-$(CONFIG_X86) += x86/
+
+lib-y += ctype.o
-- 
2.22.0




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:17:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:17:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10836.28931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu8d-0000sJ-EY; Fri, 23 Oct 2020 10:17:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10836.28931; Fri, 23 Oct 2020 10:17:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu8d-0000sC-AZ; Fri, 23 Oct 2020 10:17:35 +0000
Received: by outflank-mailman (input) for mailman id 10836;
 Fri, 23 Oct 2020 10:17:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVu8b-0000rz-Mz
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:17:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 779ed0c8-ef0a-479e-af2a-b6385233ac17;
 Fri, 23 Oct 2020 10:17:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79572ACE5;
 Fri, 23 Oct 2020 10:17:31 +0000 (UTC)
X-Inumbo-ID: 779ed0c8-ef0a-479e-af2a-b6385233ac17
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448251;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AnVuJHJ9HHCqUnhfZFlaHrQ/ZUOGi/mDmoJBR1ZocoE=;
	b=MXE0qkjidzOX8LMSTweK1pL/nRHWpAyJxJiQxNR2DU4zXuTptwhOAlz0mvSXErqEdBFYSe
	hgH/MZGTrWlQ+rPQVJKB5YkuRGqqayKi69AsA8acgDR0F8bIaK2fvDHm+YmojMDIW7DqV4
	7gjxTKRB/BsK5CW7kj2kRcLWXYKGXEQ=
Subject: [PATCH v2 3/8] lib: move list sorting code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Message-ID: <19d28bcc-9e5b-4902-8e8d-ae95fbc560a6@suse.com>
Date: Fri, 23 Oct 2020 12:17:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Build the source file always: by putting it into an archive it still
won't be linked into final binaries when not needed. This way possible
build breakage will be easier to notice, and it's more consistent with
how we already build other library-like code (e.g. sort() or
bsearch()) unconditionally.

While moving the source file, take the opportunity and drop the
pointless EXPORT_SYMBOL().

Signed-off-by: Jan Beulich <jbeulich@suse.com>

---
 xen/arch/arm/Kconfig                        | 4 +---
 xen/common/Kconfig                          | 3 ---
 xen/common/Makefile                         | 1 -
 xen/lib/Makefile                            | 1 +
 xen/{common/list_sort.c => lib/list-sort.c} | 2 --
 5 files changed, 2 insertions(+), 9 deletions(-)
 rename xen/{common/list_sort.c => lib/list-sort.c} (98%)

diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 277738826581..cb7e2523b6de 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -56,9 +56,7 @@ config HVM
         def_bool y
 
 config NEW_VGIC
-	bool
-	prompt "Use new VGIC implementation"
-	select NEEDS_LIST_SORT
+	bool "Use new VGIC implementation"
 	---help---
 
 	This is an alternative implementation of the ARM GIC interrupt
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 3e2cf2508899..0661328a99e7 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -66,9 +66,6 @@ config MEM_ACCESS
 config NEEDS_LIBELF
 	bool
 
-config NEEDS_LIST_SORT
-	bool
-
 menu "Speculative hardening"
 
 config SPECULATIVE_HARDEN_ARRAY
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 083f62acb634..52d3c2aa9384 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -21,7 +21,6 @@ obj-y += keyhandler.o
 obj-$(CONFIG_KEXEC) += kexec.o
 obj-$(CONFIG_KEXEC) += kimage.o
 obj-y += lib.o
-obj-$(CONFIG_NEEDS_LIST_SORT) += list_sort.o
 obj-$(CONFIG_LIVEPATCH) += livepatch.o livepatch_elf.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
 obj-y += memory.o
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index b8814361d63e..764f3624b5f9 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1,3 +1,4 @@
 obj-$(CONFIG_X86) += x86/
 
 lib-y += ctype.o
+lib-y += list-sort.o
diff --git a/xen/common/list_sort.c b/xen/lib/list-sort.c
similarity index 98%
rename from xen/common/list_sort.c
rename to xen/lib/list-sort.c
index af2b2f6519f1..f8d8bbf28178 100644
--- a/xen/common/list_sort.c
+++ b/xen/lib/list-sort.c
@@ -15,7 +15,6 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/lib.h>
 #include <xen/list.h>
 
 #define MAX_LIST_LENGTH_BITS 20
@@ -154,4 +153,3 @@ void list_sort(void *priv, struct list_head *head,
 
 	merge_and_restore_back_links(priv, cmp, head, part[max_lev], list);
 }
-EXPORT_SYMBOL(list_sort);
-- 
2.22.0




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:17:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:17:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10838.28943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu8z-0000zA-Mz; Fri, 23 Oct 2020 10:17:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10838.28943; Fri, 23 Oct 2020 10:17:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu8z-0000z3-Je; Fri, 23 Oct 2020 10:17:57 +0000
Received: by outflank-mailman (input) for mailman id 10838;
 Fri, 23 Oct 2020 10:17:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVu8y-0000y9-LC
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:17:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d1efb48a-f6e2-4908-9f61-dd4124a1059b;
 Fri, 23 Oct 2020 10:17:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CADF1ACE5;
 Fri, 23 Oct 2020 10:17:54 +0000 (UTC)
X-Inumbo-ID: d1efb48a-f6e2-4908-9f61-dd4124a1059b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448274;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lboJsbVVVivJxZqzJYi7juIk44SP83bI03BaDd0lLh0=;
	b=d7JRslfZpV10EQNMgStARo6DYp3Erxmt5QyZ7NOhx+0RYrveX2Dv0rybI26yPk6kSDz1EZ
	C5WOfuh/oH0vgxyaSslaMGJmWWOzqmqO4P0PelTqMt2m0litpYy3wei1DjCc7bzRRDFzX9
	6786r+dMtVfrKAVAd8ucuW17fQ857vA=
Subject: [PATCH v2 4/8] lib: move parse_size_and_unit()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Message-ID: <eaffac30-8bd0-6018-5186-ca53d1becfe5@suse.com>
Date: Fri, 23 Oct 2020 12:17:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

... into its own CU, to build it into an archive.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/lib.c     | 39 ----------------------------------
 xen/lib/Makefile     |  1 +
 xen/lib/parse-size.c | 50 ++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 51 insertions(+), 39 deletions(-)
 create mode 100644 xen/lib/parse-size.c

diff --git a/xen/common/lib.c b/xen/common/lib.c
index a224efa8f6e8..6cfa332142a5 100644
--- a/xen/common/lib.c
+++ b/xen/common/lib.c
@@ -423,45 +423,6 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
 #endif
 }
 
-unsigned long long parse_size_and_unit(const char *s, const char **ps)
-{
-    unsigned long long ret;
-    const char *s1;
-
-    ret = simple_strtoull(s, &s1, 0);
-
-    switch ( *s1 )
-    {
-    case 'T': case 't':
-        ret <<= 10;
-        /* fallthrough */
-    case 'G': case 'g':
-        ret <<= 10;
-        /* fallthrough */
-    case 'M': case 'm':
-        ret <<= 10;
-        /* fallthrough */
-    case 'K': case 'k':
-        ret <<= 10;
-        /* fallthrough */
-    case 'B': case 'b':
-        s1++;
-        break;
-    case '%':
-        if ( ps )
-            break;
-        /* fallthrough */
-    default:
-        ret <<= 10; /* default to kB */
-        break;
-    }
-
-    if ( ps != NULL )
-        *ps = s1;
-
-    return ret;
-}
-
 typedef void (*ctor_func_t)(void);
 extern const ctor_func_t __ctors_start[], __ctors_end[];
 
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 764f3624b5f9..99f857540c99 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -2,3 +2,4 @@ obj-$(CONFIG_X86) += x86/
 
 lib-y += ctype.o
 lib-y += list-sort.o
+lib-y += parse-size.o
diff --git a/xen/lib/parse-size.c b/xen/lib/parse-size.c
new file mode 100644
index 000000000000..ec980cadfff3
--- /dev/null
+++ b/xen/lib/parse-size.c
@@ -0,0 +1,50 @@
+#include <xen/lib.h>
+
+unsigned long long parse_size_and_unit(const char *s, const char **ps)
+{
+    unsigned long long ret;
+    const char *s1;
+
+    ret = simple_strtoull(s, &s1, 0);
+
+    switch ( *s1 )
+    {
+    case 'T': case 't':
+        ret <<= 10;
+        /* fallthrough */
+    case 'G': case 'g':
+        ret <<= 10;
+        /* fallthrough */
+    case 'M': case 'm':
+        ret <<= 10;
+        /* fallthrough */
+    case 'K': case 'k':
+        ret <<= 10;
+        /* fallthrough */
+    case 'B': case 'b':
+        s1++;
+        break;
+    case '%':
+        if ( ps )
+            break;
+        /* fallthrough */
+    default:
+        ret <<= 10; /* default to kB */
+        break;
+    }
+
+    if ( ps != NULL )
+        *ps = s1;
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.22.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:18:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:18:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10840.28955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu9N-00017d-0t; Fri, 23 Oct 2020 10:18:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10840.28955; Fri, 23 Oct 2020 10:18:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu9M-00017W-Tc; Fri, 23 Oct 2020 10:18:20 +0000
Received: by outflank-mailman (input) for mailman id 10840;
 Fri, 23 Oct 2020 10:18:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVu9L-00017F-GR
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:18:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb9262d6-4b1a-40e3-8b2a-c5fae1855b95;
 Fri, 23 Oct 2020 10:18:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E4078ABD1;
 Fri, 23 Oct 2020 10:18:17 +0000 (UTC)
X-Inumbo-ID: eb9262d6-4b1a-40e3-8b2a-c5fae1855b95
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448298;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hiYMWegp/VfyOCwZBfvaVvgad/gyWFZAtZKLyIpg09Y=;
	b=rd7FfW8aL3ICa21jysoMfzvGKWMD5ytujfcJa3P95iXA4EWJHRnp6Xzpxzz8vRusGl2EXZ
	5Jrx5/r4QDrjK3l9P1DdNs/bXfMzD86hjEN3q2Ko52Mq0RXxgjbxBUqQbJ54vtrG+mKV2w
	w40cSPEFAnqC9r3ZhrAdujL94BuYouQ=
Subject: [PATCH v2 5/8] lib: move init_constructors()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Message-ID: <c7b091f4-b2a4-965e-3a1a-de26f45f0d5d@suse.com>
Date: Fri, 23 Oct 2020 12:18:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

... into its own CU, as it is unrelated to the other code in
common/lib.c. For now it gets compiled into built_in.o rather than
lib.a, since it is used unconditionally by both Arm's and x86's
{,__}start_xen(). In principle this could change, especially as there
typically aren't any constructors to run; then again, it is all __init
code anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/lib.c | 14 --------------
 xen/lib/Makefile |  1 +
 xen/lib/ctors.c  | 25 +++++++++++++++++++++++++
 3 files changed, 26 insertions(+), 14 deletions(-)
 create mode 100644 xen/lib/ctors.c

diff --git a/xen/common/lib.c b/xen/common/lib.c
index 6cfa332142a5..f5ca179a0af4 100644
--- a/xen/common/lib.c
+++ b/xen/common/lib.c
@@ -1,6 +1,5 @@
 #include <xen/lib.h>
 #include <xen/types.h>
-#include <xen/init.h>
 #include <asm/byteorder.h>
 
 /*
@@ -423,19 +422,6 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
 #endif
 }
 
-typedef void (*ctor_func_t)(void);
-extern const ctor_func_t __ctors_start[], __ctors_end[];
-
-void __init init_constructors(void)
-{
-    const ctor_func_t *f;
-    for ( f = __ctors_start; f < __ctors_end; ++f )
-        (*f)();
-
-    /* Putting this here seems as good (or bad) as any other place. */
-    BUILD_BUG_ON(sizeof(size_t) != sizeof(ssize_t));
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 99f857540c99..ba1fb7bcdee2 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1,3 +1,4 @@
+obj-y += ctors.o
 obj-$(CONFIG_X86) += x86/
 
 lib-y += ctype.o
diff --git a/xen/lib/ctors.c b/xen/lib/ctors.c
new file mode 100644
index 000000000000..5bdc591cd50a
--- /dev/null
+++ b/xen/lib/ctors.c
@@ -0,0 +1,25 @@
+#include <xen/init.h>
+#include <xen/lib.h>
+
+typedef void (*ctor_func_t)(void);
+extern const ctor_func_t __ctors_start[], __ctors_end[];
+
+void __init init_constructors(void)
+{
+    const ctor_func_t *f;
+    for ( f = __ctors_start; f < __ctors_end; ++f )
+        (*f)();
+
+    /* Putting this here seems as good (or bad) as any other place. */
+    BUILD_BUG_ON(sizeof(size_t) != sizeof(ssize_t));
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.22.0




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:18:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:18:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10844.28967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu9l-0001Ex-9H; Fri, 23 Oct 2020 10:18:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10844.28967; Fri, 23 Oct 2020 10:18:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVu9l-0001Eo-5s; Fri, 23 Oct 2020 10:18:45 +0000
Received: by outflank-mailman (input) for mailman id 10844;
 Fri, 23 Oct 2020 10:18:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVu9j-0001EP-C1
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:18:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 473feced-7ce0-4248-b48d-f757c6930424;
 Fri, 23 Oct 2020 10:18:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9433AD04;
 Fri, 23 Oct 2020 10:18:41 +0000 (UTC)
X-Inumbo-ID: 473feced-7ce0-4248-b48d-f757c6930424
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448321;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jNlq7Fb23c+zpnGBIkWqGuhMfVaSAd3Up+ZrIih1Y4E=;
	b=qObMJquwH1Tb1DW/N461rruhf7lhaKuVIgOKvX0YkCBKrCg89wETxoBJIXsh8CU9dfyYZf
	Lsc7KBu33ppafNgYp5jTFF2w7R4gb5ymA9/+DbcZo1gPUUGnaUDyxdAOpvwpihQ3WNKUA2
	miLupi02MxxkHsHzsB+6vmYKpUMoVYo=
Subject: [PATCH v2 6/8] lib: move rbtree code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Message-ID: <e16975d3-c34b-1b3f-743f-1abe13aa06f7@suse.com>
Date: Fri, 23 Oct 2020 12:18:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Build this code into an archive, so that it no longer gets linked into
the final x86 binaries. This saves about 1.5k of dead code.

While moving the source file, take the opportunity and drop the
pointless EXPORT_SYMBOL().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/Makefile          | 1 -
 xen/lib/Makefile             | 1 +
 xen/{common => lib}/rbtree.c | 9 +--------
 3 files changed, 2 insertions(+), 9 deletions(-)
 rename xen/{common => lib}/rbtree.c (98%)

diff --git a/xen/common/Makefile b/xen/common/Makefile
index 52d3c2aa9384..7bb779f780a1 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -33,7 +33,6 @@ obj-y += preempt.o
 obj-y += random.o
 obj-y += rangeset.o
 obj-y += radix-tree.o
-obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
 obj-y += shutdown.o
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index ba1fb7bcdee2..b469d2dff7b8 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -4,3 +4,4 @@ obj-$(CONFIG_X86) += x86/
 lib-y += ctype.o
 lib-y += list-sort.o
 lib-y += parse-size.o
+lib-y += rbtree.o
diff --git a/xen/common/rbtree.c b/xen/lib/rbtree.c
similarity index 98%
rename from xen/common/rbtree.c
rename to xen/lib/rbtree.c
index 9f5498a89d4e..95e045d52461 100644
--- a/xen/common/rbtree.c
+++ b/xen/lib/rbtree.c
@@ -25,7 +25,7 @@
 #include <xen/rbtree.h>
 
 /*
- * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree 
+ * red-black trees properties:  http://en.wikipedia.org/wiki/Rbtree
  *
  *  1) A node is either red or black
  *  2) The root is black
@@ -223,7 +223,6 @@ void rb_insert_color(struct rb_node *node, struct rb_root *root)
 		}
 	}
 }
-EXPORT_SYMBOL(rb_insert_color);
 
 static void __rb_erase_color(struct rb_node *parent, struct rb_root *root)
 {
@@ -467,7 +466,6 @@ void rb_erase(struct rb_node *node, struct rb_root *root)
 	if (rebalance)
 		__rb_erase_color(rebalance, root);
 }
-EXPORT_SYMBOL(rb_erase);
 
 /*
  * This function returns the first node (in sort order) of the tree.
@@ -483,7 +481,6 @@ struct rb_node *rb_first(const struct rb_root *root)
 		n = n->rb_left;
 	return n;
 }
-EXPORT_SYMBOL(rb_first);
 
 struct rb_node *rb_last(const struct rb_root *root)
 {
@@ -496,7 +493,6 @@ struct rb_node *rb_last(const struct rb_root *root)
 		n = n->rb_right;
 	return n;
 }
-EXPORT_SYMBOL(rb_last);
 
 struct rb_node *rb_next(const struct rb_node *node)
 {
@@ -528,7 +524,6 @@ struct rb_node *rb_next(const struct rb_node *node)
 
 	return parent;
 }
-EXPORT_SYMBOL(rb_next);
 
 struct rb_node *rb_prev(const struct rb_node *node)
 {
@@ -557,7 +552,6 @@ struct rb_node *rb_prev(const struct rb_node *node)
 
 	return parent;
 }
-EXPORT_SYMBOL(rb_prev);
 
 void rb_replace_node(struct rb_node *victim, struct rb_node *new,
 		     struct rb_root *root)
@@ -574,4 +568,3 @@ void rb_replace_node(struct rb_node *victim, struct rb_node *new,
 	/* Copy the pointers/colour from the victim to the replacement */
 	*new = *victim;
 }
-EXPORT_SYMBOL(rb_replace_node);
-- 
2.22.0




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:19:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10849.28979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVuA9-0001Mj-Jy; Fri, 23 Oct 2020 10:19:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10849.28979; Fri, 23 Oct 2020 10:19:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVuA9-0001Mc-G7; Fri, 23 Oct 2020 10:19:09 +0000
Received: by outflank-mailman (input) for mailman id 10849;
 Fri, 23 Oct 2020 10:19:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVuA7-0001ML-Ne
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:19:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1014aeee-1e22-4d37-a6d0-95a688ff8cf7;
 Fri, 23 Oct 2020 10:19:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D9A7CAD04;
 Fri, 23 Oct 2020 10:19:05 +0000 (UTC)
X-Inumbo-ID: 1014aeee-1e22-4d37-a6d0-95a688ff8cf7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448346;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7kZoCIqv4yQVugUvj1z6blQttzij9hYwlGKaJUnpBgI=;
	b=NWq48x/RySaomVo5kON84Ox76WEKa0qzw91tc1UM95npyomVIUXhXGr6U4Zw6tXnaQW+cC
	YT/aTY8Y+LTxeq0LmQWSwK5wgFsNRKBaV4rtGU78eL9ByJgrmvKl2D8bT/2TETc2Rn1MW3
	X8JK1d/OE9HanmEibBWABAUXQJIUEQM=
Subject: [PATCH v2 7/8] lib: move bsearch code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Message-ID: <87a20884-5a76-a664-dcc9-bd4becee40b3@suse.com>
Date: Fri, 23 Oct 2020 12:19:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Convert this code to an inline function (backed by an out-of-line
instance in an archive, in case the compiler decides against inlining),
so that it no longer appears in the final x86 binaries. This saves a
little bit of dead code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Make the function an extern inline in its header.
---
 xen/common/Makefile        |  1 -
 xen/common/bsearch.c       | 51 --------------------------------------
 xen/include/xen/compiler.h |  1 +
 xen/include/xen/lib.h      | 42 ++++++++++++++++++++++++++++++-
 xen/lib/Makefile           |  1 +
 xen/lib/bsearch.c          | 13 ++++++++++
 6 files changed, 56 insertions(+), 53 deletions(-)
 delete mode 100644 xen/common/bsearch.c
 create mode 100644 xen/lib/bsearch.c

diff --git a/xen/common/Makefile b/xen/common/Makefile
index 7bb779f780a1..d8519a2cc163 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -1,6 +1,5 @@
 obj-$(CONFIG_ARGO) += argo.o
 obj-y += bitmap.o
-obj-y += bsearch.o
 obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
diff --git a/xen/common/bsearch.c b/xen/common/bsearch.c
deleted file mode 100644
index 7090930aab5c..000000000000
--- a/xen/common/bsearch.c
+++ /dev/null
@@ -1,51 +0,0 @@
-/*
- * A generic implementation of binary search for the Linux kernel
- *
- * Copyright (C) 2008-2009 Ksplice, Inc.
- * Author: Tim Abbott <tabbott@ksplice.com>
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License as
- * published by the Free Software Foundation; version 2.
- */
-
-#include <xen/lib.h>
-
-/*
- * bsearch - binary search an array of elements
- * @key: pointer to item being searched for
- * @base: pointer to first element to search
- * @num: number of elements
- * @size: size of each element
- * @cmp: pointer to comparison function
- *
- * This function does a binary search on the given array.  The
- * contents of the array should already be in ascending sorted order
- * under the provided comparison function.
- *
- * Note that the key need not have the same type as the elements in
- * the array, e.g. key could be a string and the comparison function
- * could compare the string with the struct's name field.  However, if
- * the key and elements in the array are of the same type, you can use
- * the same comparison function for both sort() and bsearch().
- */
-void *bsearch(const void *key, const void *base, size_t num, size_t size,
-	      int (*cmp)(const void *key, const void *elt))
-{
-	size_t start = 0, end = num;
-	int result;
-
-	while (start < end) {
-		size_t mid = start + (end - start) / 2;
-
-		result = cmp(key, base + mid * size);
-		if (result < 0)
-			end = mid;
-		else if (result > 0)
-			start = mid + 1;
-		else
-			return (void *)base + mid * size;
-	}
-
-	return NULL;
-}
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index c0e0ee9f27be..2b7acdf3b188 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -12,6 +12,7 @@
 
 #define inline        __inline__
 #define always_inline __inline__ __attribute__ ((__always_inline__))
+#define gnu_inline    __inline__ __attribute__ ((__gnu_inline__))
 #define noinline      __attribute__((__noinline__))
 
 #define noreturn      __attribute__((__noreturn__))
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 076bcfb67dbb..940d23755661 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -192,8 +192,48 @@ void dump_execstate(struct cpu_user_regs *);
 
 void init_constructors(void);
 
+/*
+ * bsearch - binary search an array of elements
+ * @key: pointer to item being searched for
+ * @base: pointer to first element to search
+ * @num: number of elements
+ * @size: size of each element
+ * @cmp: pointer to comparison function
+ *
+ * This function does a binary search on the given array.  The
+ * contents of the array should already be in ascending sorted order
+ * under the provided comparison function.
+ *
+ * Note that the key need not have the same type as the elements in
+ * the array, e.g. key could be a string and the comparison function
+ * could compare the string with the struct's name field.  However, if
+ * the key and elements in the array are of the same type, you can use
+ * the same comparison function for both sort() and bsearch().
+ */
+#ifndef BSEARCH_IMPLEMENTATION
+extern gnu_inline
+#endif
 void *bsearch(const void *key, const void *base, size_t num, size_t size,
-              int (*cmp)(const void *key, const void *elt));
+              int (*cmp)(const void *key, const void *elt))
+{
+    size_t start = 0, end = num;
+    int result;
+
+    while ( start < end )
+    {
+        size_t mid = start + (end - start) / 2;
+
+        result = cmp(key, base + mid * size);
+        if ( result < 0 )
+            end = mid;
+        else if ( result > 0 )
+            start = mid + 1;
+        else
+            return (void *)base + mid * size;
+    }
+
+    return NULL;
+}
 
 #endif /* __ASSEMBLY__ */
 
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index b469d2dff7b8..122eeb3d327b 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -1,6 +1,7 @@
 obj-y += ctors.o
 obj-$(CONFIG_X86) += x86/
 
+lib-y += bsearch.o
 lib-y += ctype.o
 lib-y += list-sort.o
 lib-y += parse-size.o
diff --git a/xen/lib/bsearch.c b/xen/lib/bsearch.c
new file mode 100644
index 000000000000..149f7feafd1f
--- /dev/null
+++ b/xen/lib/bsearch.c
@@ -0,0 +1,13 @@
+/*
+ * A generic implementation of binary search for the Linux kernel
+ *
+ * Copyright (C) 2008-2009 Ksplice, Inc.
+ * Author: Tim Abbott <tabbott@ksplice.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; version 2.
+ */
+
+#define BSEARCH_IMPLEMENTATION
+#include <xen/lib.h>
-- 
2.22.0




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 10:19:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 10:19:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10853.28991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVuAV-0001US-20; Fri, 23 Oct 2020 10:19:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10853.28991; Fri, 23 Oct 2020 10:19:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVuAU-0001UK-Tm; Fri, 23 Oct 2020 10:19:30 +0000
Received: by outflank-mailman (input) for mailman id 10853;
 Fri, 23 Oct 2020 10:19:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVuAU-0001UA-AA
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 10:19:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6efcfda2-f466-4fd9-9f2b-d863559826d4;
 Fri, 23 Oct 2020 10:19:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 67F9FABD1;
 Fri, 23 Oct 2020 10:19:28 +0000 (UTC)
X-Inumbo-ID: 6efcfda2-f466-4fd9-9f2b-d863559826d4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603448368;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fCENBea3g/sHEZshbnkZNqrNEizuK/QJ9N/S6MkmEq8=;
	b=oIEzmmO3m16L6bX9CkxGbuw5XprILZWZyYFfix657HEk4sWWwuiCSFyRxGrbHpPD0prqxA
	UYKKEW8P3iZK5z6D6GcBlJRKroj0pIEHFZ208aj/bBn+Fvehfp+StKOCdbLkVX5Orn+4nc
	bXMW2Kdrs79PHkiYZ1pezYcyFhDcQDg=
Subject: [PATCH v2 8/8] lib: move sort code
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Message-ID: <293585a3-5475-0c02-19ce-c2080b2deab1@suse.com>
Date: Fri, 23 Oct 2020 12:19:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <aa1ca5da-3ecf-8721-63f9-b86ebbc64330@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Build this code into an archive, partly paralleling bsearch().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 xen/common/Makefile        | 1 -
 xen/lib/Makefile           | 1 +
 xen/{common => lib}/sort.c | 0
 3 files changed, 1 insertion(+), 1 deletion(-)
 rename xen/{common => lib}/sort.c (100%)

diff --git a/xen/common/Makefile b/xen/common/Makefile
index d8519a2cc163..90c679958965 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -36,7 +36,6 @@ obj-y += rcupdate.o
 obj-y += rwlock.o
 obj-y += shutdown.o
 obj-y += softirq.o
-obj-y += sort.o
 obj-y += smp.o
 obj-y += spinlock.o
 obj-y += stop_machine.o
diff --git a/xen/lib/Makefile b/xen/lib/Makefile
index 122eeb3d327b..33ff322b1655 100644
--- a/xen/lib/Makefile
+++ b/xen/lib/Makefile
@@ -6,3 +6,4 @@ lib-y += ctype.o
 lib-y += list-sort.o
 lib-y += parse-size.o
 lib-y += rbtree.o
+lib-y += sort.o
diff --git a/xen/common/sort.c b/xen/lib/sort.c
similarity index 100%
rename from xen/common/sort.c
rename to xen/lib/sort.c
-- 
2.22.0




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 11:35:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 11:35:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10864.29003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVvLu-0000cN-OX; Fri, 23 Oct 2020 11:35:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10864.29003; Fri, 23 Oct 2020 11:35:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVvLu-0000cG-Ke; Fri, 23 Oct 2020 11:35:22 +0000
Received: by outflank-mailman (input) for mailman id 10864;
 Fri, 23 Oct 2020 11:35:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SP8M=D6=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kVvLt-0000cB-3o
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 11:35:21 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.42]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10b736e6-dc40-4c3d-acd2-b9be89a8fd9b;
 Fri, 23 Oct 2020 11:35:17 +0000 (UTC)
Received: from AM6PR05CA0012.eurprd05.prod.outlook.com (2603:10a6:20b:2e::25)
 by VI1PR08MB3725.eurprd08.prod.outlook.com (2603:10a6:803:b6::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 23 Oct
 2020 11:35:12 +0000
Received: from AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::24) by AM6PR05CA0012.outlook.office365.com
 (2603:10a6:20b:2e::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 23 Oct 2020 11:35:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT038.mail.protection.outlook.com (10.152.17.118) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Fri, 23 Oct 2020 11:35:10 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Fri, 23 Oct 2020 11:35:10 +0000
Received: from bc053f256823.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C6CB1098-6231-4894-9DB9-81AC5B4F3C16.1; 
 Fri, 23 Oct 2020 11:35:01 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bc053f256823.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 23 Oct 2020 11:35:01 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM9PR08MB5924.eurprd08.prod.outlook.com (2603:10a6:20b:282::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Fri, 23 Oct
 2020 11:35:00 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Fri, 23 Oct 2020
 11:35:00 +0000
X-Inumbo-ID: 10b736e6-dc40-4c3d-acd2-b9be89a8fd9b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AGEHtB9rNId9OdJGuNVRoSMtOawmeS4J9+jlHDhH3iU=;
 b=Vil2alzHlxXLtDXDVYQIxwh1JUb3ozno3ZbTRUldofU0r17MIepfZkYH/+Sw6aOALupeURT8BOF6gBXb45LR881nvb703SHaIRgp/pfvEbYK79+BswDTB5CqmCCyRZ8MI/uPZ6TeFvGEnzVsjhnOn3MMmoxZFPTkgAzCoxqYpEU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0797d5f54955c7d6
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aOsPaO781USbVjFuCN3O882eDeSVE73gr3L6BheiN8xoM8RdTABCmbVoZPepjG0JtfEXuK2Rl8PEasdbRAh1CJRqVzkwDAG0FjBN+7sM4XXaRXH5EUpITBvcjdzCL5gYKiy8h9lzKZOg/S9zeT1ckDjJcWi2d0UvaLZpQ2D+cUrYr89HSOmTKNmNEHcHppVR72sWF5t8MfJ0YrfDpawCr1d2AkycTT4EPtx9QSWwHmk1ksBj4shulM2ZgwCOGi4FuQXlVgBk8muM3+ZCAP+Ng33BjAu4ZFHT3PQYtQAAxx9HuOZEx+2HeMCgIAhJhWveeIXObrkny1oSXu6KAxlx4g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AGEHtB9rNId9OdJGuNVRoSMtOawmeS4J9+jlHDhH3iU=;
 b=dzZsgL4X8O5/QJJ9L5j5+CxfcKTxwt3OjtVXZcF9tGviQKgcEA/xdgKCQJxvQ9haQcyajrpaIVyEPirfOiP7tJWqfzmj6F+dITl08/JEhJ/r3jXqR5+JbJlJmOj6M+6lGoizgE5qdkMdPJq4g7AI8h0ktG3SLjypGcorh8KjmZoP3YC5fjS3xCshk7Yul6ILAId7DjEkQ4BFo0vkWIzelFfcWo5jT9I8uzsLWgMxajHcawh7t8xLmlXRYKxxRvgX5adz7XPRf1M8RwxMiov1qqRtnrVDAMBry7nNo11SXpb/NAAX80cx0nMJlnOVhK30PYuVDYN9TsU8mFkXa90LSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>, Paul Durrant
	<paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWoVvrkmJwOYERdUOadvid1OghFamgw0AAgAEzsQCAAWIugIABA+CAgADBXIA=
Date: Fri, 23 Oct 2020 11:35:00 +0000
Message-ID: <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 28eddf09-9d51-4162-da2a-08d87747b3f9
x-ms-traffictypediagnostic: AM9PR08MB5924:|VI1PR08MB3725:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB37253439B98A295F08C97D7AFC1A0@VI1PR08MB3725.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 zenOi7UX8txCp5r+UGKnZGn23WNBUbwaGRfNMYaszw4hvFT5DfbnBOKGN6pRdhWKqQAIQgcgUkqoVuqDDOBds4FAWQ7ErNymtezNmA+VVktBKgdwRLwOfO3Iy2exLBWB6NMQWN8NqIhHe3Ui/O3jZivEMZqh9/tOgn77Xr4dgD9x4UzVjyJqpegJjF/AH8A+O3b0hId2LQZVJ1cbDW5wap6BgiMtuK0ybmqqtpNFW531dkqcGzBe4b0eS70CvRVLvvFf8XUB1Zj+Yt2u/fbKm5J09gQvCMjDtwHW5c/a3dlN3h5PpGmQqoGYBWLQBNzpx4/lMaIWXmucY853zv+pEw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(376002)(366004)(396003)(39860400002)(53546011)(478600001)(55236004)(5660300002)(6506007)(26005)(6486002)(4326008)(86362001)(66946007)(2616005)(36756003)(8936002)(6916009)(6512007)(71200400001)(54906003)(33656002)(8676002)(64756008)(316002)(66476007)(91956017)(2906002)(76116006)(66556008)(66446008)(186003)(83380400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 wxEclyfqX38bYbvMpxq06yXhFZw4rx1bXI4mPeomPttdJtPUyt8X64UFPQMXtIDHRVX99QLMpI0tEfrP7T7+YmvNOq02jFyNXQ4XRn3RoSh6qpfFkJrjZ5yhza3SPSnG46KiwGlw/hlil+7Ubv2XccNs/7keqScYNZWEAp78+5M3S3cCaiWY2XZfzhjuahUK9ELhCR62g++hDuwam+qFnE4AKGbQc3+rc9K5Xe5cn+QpR+HK3iKY31chQR0S8AeAau9YEERZTFQpYPewyAckHfQm30iXN+xSfYBGhgLRvoJyAl6PtEBoQZ0E8pWx++bSLReKhEyQUQhPEsMynlLrYXxA8XDDksUiXFkl6VEkxFFS4I1apQHxoDKnws7LKXGEk+nP9KDbaBgSCeliyuXTpICIr5CtwQxtsGtpPCk6Pna61dRQTTUuEL+bOEbNreSprhxwQ0QucU6DeiyJuok+BAjHRM8yQj8bUSBTdLBwZPFGZVSBZIFIrNRSb099tGgUApNZeOfYl3EvV7o92f2F25krWeJ+DwtHhnVxH+T/xsiqgGiWNWtfiedjgofLnUAF1Tn/w0pkn2BvaXJMavMd2V4eJqHgRPJW34AXhaQ2/BzT1l7CHplhKuJHB8D9UJ3S6LtW1sGpb6+zXQzFXzBGog==
Content-Type: text/plain; charset="utf-8"
Content-ID: <F61CFFE78466DD448CA390CAB2DE2E31@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5924
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8071d114-283d-4083-0217-08d87747ad63
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zJtFGYVSIo79HgMjfAklV0IH6g46i60+utolX631pP2oKEQ7psrOD+lnPTp/LfkkjXCPVpTlWFP4Pf1BReP9H+G+fh46tllqqmSHCoF+z5/a4B659avMUaSeb92z3yP0NO/Zl9E1r/AU6owYUWdJ01iKCWNzU/DodFUEtyCHQ4o9n+Qw2eRMsxK2o62Yf+Q23l1EjwsJfVbo1hOu+VgAlImVF6mIQ59QCUHBtq9lFr9AtzbXPC4YmfkbEcLYfBYZo8NcahTvDi8S8K0y0EZ9dAgNg6OQKrjOOYQGOwoMsic4EUVBZo1b27HnX65a3tIucaQuduc0uQas3WcoI8XS/QSztJUCXQ+WQzGjV3ZGLiJn43Sp5Zl7sbgpWHUusx4GvgTj0HTRinhqOrsCH+OKpQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(376002)(39860400002)(136003)(46966005)(36906005)(81166007)(54906003)(316002)(336012)(8936002)(478600001)(33656002)(86362001)(2616005)(55236004)(8676002)(36756003)(26005)(186003)(53546011)(6486002)(2906002)(107886003)(6506007)(6862004)(6512007)(82310400003)(356005)(5660300002)(4326008)(47076004)(70206006)(82740400003)(70586007)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Oct 2020 11:35:10.7424
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 28eddf09-9d51-4162-da2a-08d87747b3f9
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3725

Hello,

> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>> the Linux SMMUv3 driver.
>>>>> Major differences between the Linux driver are as follows:
>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>    that supports both Stage-1 and Stage-2 translations.
>>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>>    capability to share the page tables with the CPU.
>>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>>    and priority queue IRQ handling.
>>>> 
>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>> they are done).
>>>> 
>>>> Do you know why Linux is using thread. Is it because of long running
>>>> operations?
>>> 
>>> Yes you are right because of long running operations Linux is using the
>>> threaded IRQs.
>>> 
>>> SMMUv3 reports fault/events bases on memory-based circular buffer queues not
>>> based on the register. As per my understanding, it is time-consuming to
>>> process the memory based queues in interrupt context because of that Linux
>>> is using threaded IRQ to process the faults/events from SMMU.
>>> 
>>> I didn’t find any other solution in XEN in place of tasklet to defer the
>>> work, that’s why I used tasklet in XEN in replacement of threaded IRQs. If
>>> we do all work in interrupt context we will make XEN less responsive.
>> 
>> So we need to make sure that Xen continue to receives interrupts, but we also
>> need to make sure that a vCPU bound to the pCPU is also responsive.
>> 
>>> 
>>> If you know another solution in XEN that will be used to defer the work in
>>> the interrupt please let me know I will try to use that.
>> 
>> One of my work colleague encountered a similar problem recently. He had a long
>> running tasklet and wanted to be broken down in smaller chunk.
>> 
>> We decided to use a timer to reschedule the tasklet in the future. This allows
>> the scheduler to run other loads (e.g. vCPU) for some time.
>> 
>> This is pretty hackish but I couldn't find a better solution as tasklet have
>> high priority.
>> 
>> Maybe the other will have a better idea.
> 
> Julien's suggestion is a good one.
> 
> But I think tasklets can be configured to be called from the idle_loop,
> in which case they are not run in interrupt context?
> 

Yes, you are right: the tasklet will be scheduled from the idle_loop, which is not interrupt context.

> Still, tasklets run until completion in Xen, which could take too long.
> The code has to voluntarily release control of the execution flow once
> it realizes it has been running for too long. The rescheduling via a
> timer works.
> 
> 
> Now, to brainstorm other possible alternatives, for hypercalls we have
> been using hypercall continuations.  Continuations is a way to break a
> hypercall implementation that takes too long into multiple execution
> chunks. It works by calling into itself again: making the same hypercall
> again with updated arguments, so that the scheduler has a chance to do
> other operations in between, including running other tasklets and
> softirqs.
> 
> That works well because  the source of the work is a guest request,
> specifically a hypercall. However, in the case of the SMMU driver, there
> is no hypercall. The Xen driver has to do work in response to an
> interrupt and the work is not tied to one particular domain.
> 
> So I don't think the hypercall continuation model could work here. The
> timer seems to be the best option.
> 

Yes, I agree with you: as the source of the work is not a guest request in the case of SMMU, I think we cannot use the hypercall continuation.

As suggested, I will try to use the timer to schedule the work and will share the findings.
> 
> 
>>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>>>    access functions based on atomic operations implemented in Linux.
>>>> 
>>>> Can you provide more details?
>>> 
>>> I tried to port the latest version of the SMMUv3 code than I observed that
>>> in order to port that code I have to also port atomic operation implemented
>>> in Linux to XEN. As latest Linux code uses atomic operation to process the
>>> command queues (atomic_cond_read_relaxed(),atomic_long_cond_read_relaxed() ,
>>> atomic_fetch_andnot_relaxed()) .
>> 
>> Thank you for the explanation. I think it would be best to import the atomic
>> helpers and use the latest code.
>> 
>> This will ensure that we don't re-introduce bugs and also buy us some time
>> before the Linux and Xen driver diverge again too much.
>> 
>> Stefano, what do you think?
> 
> I think you are right.

Yes, I agree with you that the XEN code should be kept in sync with the Linux code; that's why I started by porting the Linux atomic operations to XEN. I then realised that porting the atomic operations is not straightforward and requires a lot of effort and testing. Therefore I decided to port the code from before the atomic operations were introduced in Linux.


Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 11:41:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 11:41:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10878.29015 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVvRw-0001V1-En; Fri, 23 Oct 2020 11:41:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10878.29015; Fri, 23 Oct 2020 11:41:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVvRw-0001Uu-BG; Fri, 23 Oct 2020 11:41:36 +0000
Received: by outflank-mailman (input) for mailman id 10878;
 Fri, 23 Oct 2020 11:41:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SP8M=D6=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kVvRu-0001Up-Q3
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 11:41:34 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.54]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 991fe2f9-23f2-4339-96c4-8c464eb99903;
 Fri, 23 Oct 2020 11:41:33 +0000 (UTC)
Received: from DB6PR0501CA0038.eurprd05.prod.outlook.com (2603:10a6:4:67::24)
 by PA4PR08MB6109.eurprd08.prod.outlook.com (2603:10a6:102:e2::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Fri, 23 Oct
 2020 11:41:31 +0000
Received: from DB5EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:67:cafe::a6) by DB6PR0501CA0038.outlook.office365.com
 (2603:10a6:4:67::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 23 Oct 2020 11:41:31 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT043.mail.protection.outlook.com (10.152.20.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Fri, 23 Oct 2020 11:41:29 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Fri, 23 Oct 2020 11:41:29 +0000
Received: from ac0a0d56d676.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E51EDE9C-8625-49B9-A823-103ECC6C4F61.1; 
 Fri, 23 Oct 2020 11:41:05 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ac0a0d56d676.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 23 Oct 2020 11:41:05 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB3491.eurprd08.prod.outlook.com (2603:10a6:208:d3::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.22; Fri, 23 Oct
 2020 11:41:02 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Fri, 23 Oct 2020
 11:41:02 +0000
X-Inumbo-ID: 991fe2f9-23f2-4339-96c4-8c464eb99903
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4UglLBGLZ5V1tj8IgCmnb7EYeOzzLeZk/KUGgNgTsFo=;
 b=lkY8EaKppfa4dWm0QrYCHCTw4JMFNLzUuZMmbbUOjg08uS2ksvkw/Jt1x64ezVgq7veBF5TtKPHPPChM9a93OzvMsjC5gVV8riGPYEgNUKhLyaLu63x4EHMBn98mn4I3D3sSgfLtt1U7aofGz97f1n9A5oa1BkK38eWFmMIhDes=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3f9ac5679c940023
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DZNTaWxiMoPo1n3mmEo6nCRh+rZBCok/28zROfpdLfAGgCgba/iea3pvXqWuuI/xaAdwTpYGfzMzH7AQ4OuF7eWqiDQp1kCgjrUbt3Ucsl6VyJJmnTpOe8ri7e9mR/vpjCJyFa4JwNn3hs9/Z+p/f+cwuPqcLA7XMwDqSoywSj8NK23lkeXrvP1zWZyZoYxtPIbzcf7bKDowO3Eyfe/fyjPgv32kVmMvZPy4fIwg9aEF8xXj5Yw9KC4GMf8o7spOkka61c/YShFiKpRHz14zGtQ6Hwglrq1KEnhk6rpYTL2I2dUhPXXnxWeb1RD5Cxsb+COr1n/icwPooexJlC/ssQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4UglLBGLZ5V1tj8IgCmnb7EYeOzzLeZk/KUGgNgTsFo=;
 b=ALng+dWjKzGdoxRogD5QiikW0h6xpoF7Gt++TDWrqcCpGKRyJlQnXR5ARzItCR7YqNrWUm1u0TNzRuC31Aq1RSN+XYv0WsOpgRmU5R6kL1F7X6hK7d4tQ2kE/w5o5YsqoYjN/xQh7V7hLY45ff21VQQ58g62DqZO0mqPCtowfniiEHjfgej841VmWO2Mi0056RR25ubNvIolKXTP84OsA/ODfOdmqvKmVlBYxEysO3czMfxBlG2/DIZZEZ7ls8Q3ISr7CEBUan8HvFpBPe3OlBUs1Ph0/hmBMnSqbTb1/YNmfLmehuddQ7MBtZAQ45qPoK4NA4EHdSMiurG8gQIsww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB3491.eurprd08.prod.outlook.com (2603:10a6:208:d3::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.22; Fri, 23 Oct
 2020 11:41:02 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Fri, 23 Oct 2020
 11:41:02 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWoVvrkmJwOYERdUOadvid1OghFamgw0AAgAEzsQCAAWIugIABxu0A
Date: Fri, 23 Oct 2020 11:41:02 +0000
Message-ID: <B339BB42-A4E1-4EE6-BD29-B3AD48D6B621@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
In-Reply-To: <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: bcab4d95-1093-440f-26fd-08d87748952b
x-ms-traffictypediagnostic: AM0PR08MB3491:|PA4PR08MB6109:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<PA4PR08MB61096C345039DA44BAF15509FC1A0@PA4PR08MB6109.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 4SnUKURCFr5Q6qtjnHWy4xu0W7x7dGg6HHZsxOEudw9SwR7fNhuAeKBVIOxznB7nM4hKjO+T6LiJqCpp+0jFCXH/MOvnmDPinsNCRWTMsMS97qLLiOxwc7DJGM+ljmwWGrWflKQYFp05xjLrUhAnZTVJ/+bVtVioITxwiSRQ+uOCveZMshrqkGDP8DggJumC8t+jzXj34GZbTqhFqMHqPz1y2MHYMKlrqDN2r9zuxrs7SugHdPODWdnMyXTONUp2tkFksprTxTubc2VEDQzpa16wBdWpxoLytAlYs9fJy2k0tAW3lQjmF2trcD/nQLXy9T9WKA6y5EotTwl/+HRLJA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(39860400002)(366004)(136003)(4326008)(478600001)(33656002)(6506007)(186003)(6486002)(54906003)(26005)(53546011)(91956017)(66446008)(6916009)(316002)(6512007)(55236004)(76116006)(66556008)(66476007)(66946007)(64756008)(86362001)(2906002)(71200400001)(83380400001)(8936002)(5660300002)(8676002)(36756003)(2616005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 c0MRc7G7LstX+xRA1QVWEHDTL7NROHnN1K4S4nmxyfvQ0QvIBYrgfUFzM4IWCQ+KAnZVcZzpmLyxo/o+eoxUC4FJQtubEmwbG1QyBvfGwkxCL8zDOMkqiDrQ6x5DcWNkbeOVJZyM2Pn520adY7O+CSlXAD7dL6ejOnu/yKNu6mxVzxN6NWC+FqmVX+i5S+MsN217lDhxkpTCwU+a0G0+k71wxnJwQkyFszltfyJrzv+/cGQqlycyy3SYpmAPKw3UUZSRIqNgAAm0vX3e6E2WCH9O8GbhFexRSmsMkBv2wua8lKK3hhcNyPnNpnrbP2g5UtsRO/AFa24Yoj9Q13yVk6fnjOWtzk7cjLPOjPskQHyu1k604vd/pdJFxUI83KP6f6XEkBUrscdiTDQBS/68tpxmQSLiVD2FjAdi7xEd5g5J5KVambjhTE4m8KQKX5nKCvzCJHUaVsOvBX3/ATjwcG9tk1Tw/aRw+9tJDsVL3aUSjRabUtbPDMvtuqk57Ly+fWANhHChEi6R1S9TJIaoFeCGgbd6R+ZqugjmyMIpMjlLnng7XTEg5rMDBEVvI0mnmE74wiaLIBeO3rLoETA70VbP2C0DIOd5ytF+keBP2d4/Lt4Q/9P0q59EF/L0ts/gfFXcKVTfBE/pzzf+GSOShg==
Content-Type: text/plain; charset="utf-8"
Content-ID: <5536C43CCE36C64E896D78B6BE6E2F44@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3491
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ce3ae4be-2317-4f41-8e97-08d877488538
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Twg689vLvxy1gyuZR8UmPX2GJ9nPTSwovs7+lV0TAJVDglgtobbGATNJ8Z+eKuS3qQNOcsGbk12fi6wuEnNBzHyreG20M4UYzA4JencDUMai1EVREYK4qwKTCr5zILvx6aHJqUiX6r5Rg2eKZz4Fb/XxH7VEhM7L/0VRgOgNKbFwflH35U9vAnxddhE9aSAsFbgidnXZp4QYbVT9j1RaNIhixABzyyDYPCjnteZkn7VYz3zh/F+FeOF7k0sJs+sZXY3VXxPKho0uaIYR4g28OAugxPPSotQl71WK8Vy3tfLk0nJqOekUWoclHgdEBF7lDauIC5J5bV2dReuuk8Dj7/zbI7R2hjo2dAbk9WdzTfd+wH+vsn7Ko0tIdasOFImZ7osk+ufrm0lVr7F5bcAsKw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39860400002)(346002)(136003)(376002)(46966005)(26005)(356005)(81166007)(36756003)(83380400001)(8676002)(478600001)(47076004)(4326008)(6862004)(6506007)(53546011)(55236004)(336012)(82310400003)(2616005)(107886003)(186003)(70586007)(70206006)(33656002)(54906003)(6512007)(5660300002)(8936002)(2906002)(316002)(82740400003)(86362001)(6486002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Oct 2020 11:41:29.6834
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bcab4d95-1093-440f-26fd-08d87748952b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6109

Hello Julien,

> On 22 Oct 2020, at 9:32 am, Julien Grall <julien@xen.org> wrote:
> 
> 
> On 21/10/2020 12:25, Rahul Singh wrote:
>> Hello Julien,
> 
> Hi Rahul,
> 
>>> On 20 Oct 2020, at 6:03 pm, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Rahul,
>>> 
>>> Thank you for the contribution. Lets make sure this attempt to SMMUv3 support in Xen will be more successful than the other one :).
>> Yes sure.
>>> 
>>> I haven't reviewed the code yet, but I wanted to provide feedback on the commit message.
>>> 
>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>> the Linux SMMUv3 driver.
>>>> Major differences between the Linux driver are as follows:
>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>    that supports both Stage-1 and Stage-2 translations.
>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>    capability to share the page tables with the CPU.
>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>    and priority queue IRQ handling.
>>> 
>>> Tasklets are not a replacement for threaded IRQ. In particular, they will have priority over anything else (IOW nothing will run on the pCPU until they are done).
>>> 
>>> Do you know why Linux is using thread. Is it because of long running operations?
>> Yes you are right because of long running operations Linux is using the threaded IRQs.
>> SMMUv3 reports fault/events bases on memory-based circular buffer queues not based on the register. As per my understanding, it is time-consuming to process the memory based queues in interrupt context because of that Linux is using threaded IRQ to process the faults/events from SMMU.
>> I didn’t find any other solution in XEN in place of tasklet to defer the work, that’s why I used tasklet in XEN in replacement of threaded IRQs. If we do all work in interrupt context we will make XEN less responsive.
> 
> So we need to make sure that Xen continue to receives interrupts, but we also need to make sure that a vCPU bound to the pCPU is also responsive.

Yes I agree.
> 
>> If you know another solution in XEN that will be used to defer the work in the interrupt please let me know I will try to use that.
> 
> One of my work colleague encountered a similar problem recently. He had a long running tasklet and wanted to be broken down in smaller chunk.
> 
> We decided to use a timer to reschedule the taslket in the future. This allows the scheduler to run other loads (e.g. vCPU) for some time.
> 
> This is pretty hackish but I couldn't find a better solution as tasklet have high priority.
> 
> Maybe the other will have a better idea.

Let me try to use the timer and will share my findings.
> 
>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>>    access functions based on atomic operations implemented in Linux.
>>> 
>>> Can you provide more details?
>> I tried to port the latest version of the SMMUv3 code than I observed that in order to port that code I have to also port atomic operation implemented in Linux to XEN. As latest Linux code uses atomic operation to process the command queues (atomic_cond_read_relaxed(),atomic_long_cond_read_relaxed() , atomic_fetch_andnot_relaxed()) .
> 
> Thank you for the explanation. I think it would be best to import the atomic helpers and use the latest code.
> 
> This will ensure that we don't re-introduce bugs and also buy us some time before the Linux and Xen driver diverge again too much.
> 
> Stefano, what do you think?
> 
>>> 
>>>>    Atomic functions used by the commands queue access functions is not
>>>>    implemented in XEN therefore we decided to port the earlier version
>>>>    of the code. Once the proper atomic operations will be available in XEN
>>>>    the driver can be updated.
>>>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>>>> ---
>>>>  xen/drivers/passthrough/Kconfig       |   10 +
>>>>  xen/drivers/passthrough/arm/Makefile  |    1 +
>>>>  xen/drivers/passthrough/arm/smmu-v3.c | 2847 +++++++++++++++++++++++++
>>>>  3 files changed, 2858 insertions(+)
>>> 
>>> This is quite significant patch to review. Is there any way to get it split (maybe a verbatim Linux copy + Xen modification)?
>> Yes, I understand this is a quite significant patch to review let me think to get it split. If it is ok for you to review this patch and provide your comments then it will great for us.
> I will try to have a look next week.

Thanks in advance ☺️
> 
> Cheers,
> 
> -- 
> Julien Grall

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 11:46:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 11:46:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10891.29027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVvWe-0001os-6R; Fri, 23 Oct 2020 11:46:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10891.29027; Fri, 23 Oct 2020 11:46:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVvWe-0001ol-2Q; Fri, 23 Oct 2020 11:46:28 +0000
Received: by outflank-mailman (input) for mailman id 10891;
 Fri, 23 Oct 2020 11:46:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVvWd-0001og-FG
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 11:46:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 067deedc-ae4d-4971-876e-ddd83b8525bd;
 Fri, 23 Oct 2020 11:46:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVvWY-0001rU-H1; Fri, 23 Oct 2020 11:46:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVvWY-0004Do-6E; Fri, 23 Oct 2020 11:46:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVvWY-0006zp-5j; Fri, 23 Oct 2020 11:46:22 +0000
X-Inumbo-ID: 067deedc-ae4d-4971-876e-ddd83b8525bd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ia4fUy5X8BIstY/W+oUhyCxVZ+V/KEPXLLc8nlAkx0o=; b=QggjBKr1QSG269M4Jfha5Do8et
	v5HOuRO0R+XWN2r7PStY+DjW8NqVLBzBQZmXRJX+tTM7z+fcOzBJMRhzdMyQoK+r/mSR3Xf/BGQV+
	YOt5e2dJY32M5QSMyKRmxB45KzUVz631HvDZJU4NUDUy8vkuCRORd0jC+ljfyJfL0upo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156117-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156117: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
X-Osstest-Versions-That:
    xen=861f0c110976fa8879b7bf63d9478b6be83d4ab6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 11:46:22 +0000

flight 156117 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156117/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c
baseline version:
 xen                  861f0c110976fa8879b7bf63d9478b6be83d4ab6

Last test of basis   156108  2020-10-22 19:02:27 Z    0 days
Testing same since   156117  2020-10-23 09:01:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   861f0c1109..6ca70821b5  6ca70821b59849ad97c3fadc47e63c1a4af1a78c -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 12:02:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 12:02:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10923.29066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVvlk-0003qK-AY; Fri, 23 Oct 2020 12:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10923.29066; Fri, 23 Oct 2020 12:02:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVvlk-0003qD-6O; Fri, 23 Oct 2020 12:02:04 +0000
Received: by outflank-mailman (input) for mailman id 10923;
 Fri, 23 Oct 2020 12:02:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVvlj-0003pM-3p
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 12:02:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6536edf7-ff5e-4613-9946-93d05b4eac10;
 Fri, 23 Oct 2020 12:01:56 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVvlb-0002C6-5Q; Fri, 23 Oct 2020 12:01:55 +0000
Received: from cpc91186-cmbg18-2-0-cust22.5-4.cable.virginm.net ([80.1.50.23]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVvla-0006MH-U3; Fri, 23 Oct 2020 12:01:55 +0000
X-Inumbo-ID: 6536edf7-ff5e-4613-9946-93d05b4eac10
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=7J1/3GPU5DQ3O73f4AMt9xzhevm5e9HcjZMaRusFFPk=; b=hIwwWJS1gCVM6sVa0nTuyn/A3C
	ROkmC3ioeE4K+NUkdRdfKgIiLa4UC7Bb8m2WpYETZPhaVJFvJJpsBDf/M8CFCR+nPyDlEI7T2qLYq
	epQSssx97t0wcrK/DogoCzQyQm1Zm/UeqoAP6qhxu8bcFMf3+77Fkt4S5KF5wq+pPuL8=;
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Rahul Singh <Rahul.Singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich
 <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <d4be5451-809b-c793-8b6a-9e9bace0a52e@xen.org>
Date: Fri, 23 Oct 2020 13:01:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Stefano,

On 23/10/2020 01:02, Stefano Stabellini wrote:
> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>> the Linux SMMUv3 driver.
>>>>> Major differences between the Linux driver are as follows:
>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>      that supports both Stage-1 and Stage-2 translations.
>>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>>      capability to share the page tables with the CPU.
>>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>>      and priority queue IRQ handling.
>>>>
>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>> they are done).
>>>>
>>>> Do you know why Linux is using thread. Is it because of long running
>>>> operations?
>>>
>>> Yes you are right because of long running operations Linux is using the
>>> threaded IRQs.
>>>
>>> SMMUv3 reports fault/events bases on memory-based circular buffer queues not
>>> based on the register. As per my understanding, it is time-consuming to
>>> process the memory based queues in interrupt context because of that Linux
>>> is using threaded IRQ to process the faults/events from SMMU.
>>>
>>> I didn’t find any other solution in XEN in place of tasklet to defer the
>>> work, that’s why I used tasklet in XEN in replacement of threaded IRQs. If
>>> we do all work in interrupt context we will make XEN less responsive.
>>
>> So we need to make sure that Xen continue to receives interrupts, but we also
>> need to make sure that a vCPU bound to the pCPU is also responsive.
>>
>>>
>>> If you know another solution in XEN that will be used to defer the work in
>>> the interrupt please let me know I will try to use that.
>>
>> One of my work colleague encountered a similar problem recently. He had a long
>> running tasklet and wanted to be broken down in smaller chunk.
>>
>> We decided to use a timer to reschedule the taslket in the future. This allows
>> the scheduler to run other loads (e.g. vCPU) for some time.
>>
>> This is pretty hackish but I couldn't find a better solution as tasklet have
>> high priority.
>>
>> Maybe the other will have a better idea.
> 
> Julien's suggestion is a good one.
> 
> But I think tasklets can be configured to be called from the idle_loop,
> in which case they are not run in interrupt context?

Tasklets can either run from the IDLE loop or from a softirq context.

When running from a softirq context, it may happen on return from 
receiving an interrupt. However, interrupts will always be enabled.

So I am not sure what concern you are trying to raise here.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 12:29:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 12:29:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10947.29082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVwCU-00065y-JX; Fri, 23 Oct 2020 12:29:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10947.29082; Fri, 23 Oct 2020 12:29:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVwCU-00065r-GU; Fri, 23 Oct 2020 12:29:42 +0000
Received: by outflank-mailman (input) for mailman id 10947;
 Fri, 23 Oct 2020 12:29:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVwCS-00065m-Vv
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 12:29:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 700d1554-33fc-426d-b192-34a7332df53e;
 Fri, 23 Oct 2020 12:29:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 227B3AC6A;
 Fri, 23 Oct 2020 12:29:39 +0000 (UTC)
X-Inumbo-ID: 700d1554-33fc-426d-b192-34a7332df53e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603456179;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Y9ipaWFoA96wa51ziNHoT9h9mHLvv1E4EYwbX0EMRRw=;
	b=mUnAbejHL+98oFfpR9iQhJXfzjhYiCyLA09ZH6LE3oQtQOf46VrgEKMegA9j0TWvE/TzPo
	pFFSZoUpfDS6ywhoxBgv6M6YisPQK4FK25DMeTwreMnfOt9ywjkAife+3kDlXOHUf+F/8J
	UXX53Z90m9wwuIfCKj6FvbzNBULojwE=
Subject: Re: [PATCH v2 06/11] x86/hvm: allowing registering EOI callbacks for
 GSIs
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-7-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c3a7d136-0987-3be1-5b14-07e788354484@suse.com>
Date: Fri, 23 Oct 2020 14:29:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-7-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 12:41, Roger Pau Monne wrote:
> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -595,6 +595,66 @@ int hvm_local_events_need_delivery(struct vcpu *v)
>      return !hvm_interrupt_blocked(v, intack);
>  }
>  
> +int hvm_gsi_register_callback(struct domain *d, unsigned int gsi,
> +                              struct hvm_gsi_eoi_callback *cb)
> +{
> +    if ( gsi >= hvm_domain_irq(d)->nr_gsis )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return -EINVAL;
> +    }
> +
> +    write_lock(&hvm_domain_irq(d)->gsi_callbacks_lock);
> +    list_add(&cb->list, &hvm_domain_irq(d)->gsi_callbacks[gsi]);
> +    write_unlock(&hvm_domain_irq(d)->gsi_callbacks_lock);
> +
> +    return 0;
> +}
> +
> +void hvm_gsi_unregister_callback(struct domain *d, unsigned int gsi,
> +                                 struct hvm_gsi_eoi_callback *cb)
> +{
> +    struct list_head *tmp;

This could be const if you used ...

> +    if ( gsi >= hvm_domain_irq(d)->nr_gsis )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return;
> +    }
> +
> +    write_lock(&hvm_domain_irq(d)->gsi_callbacks_lock);
> +    list_for_each ( tmp, &hvm_domain_irq(d)->gsi_callbacks[gsi] )
> +        if ( tmp == &cb->list )
> +        {
> +            list_del(tmp);

... &cb->list here.

> +            break;
> +        }
> +    write_unlock(&hvm_domain_irq(d)->gsi_callbacks_lock);
> +}
> +
> +void hvm_gsi_execute_callbacks(unsigned int gsi, void *data)
> +{
> +    struct domain *currd = current->domain;
> +    struct hvm_gsi_eoi_callback *cb;
> +
> +    read_lock(&hvm_domain_irq(currd)->gsi_callbacks_lock);
> +    list_for_each_entry ( cb, &hvm_domain_irq(currd)->gsi_callbacks[gsi],
> +                          list )
> +        cb->callback(gsi, cb->data ?: data);

Are callback functions in principle permitted to unregister
themselves? If so, you'd need to use list_for_each_entry_safe()
here.

What's the idea of passing cb->data _or_ data?

Finally here and maybe in a few more places latch hvm_domain_irq()
into a local variable?

> +    read_unlock(&hvm_domain_irq(currd)->gsi_callbacks_lock);
> +}
> +
> +bool hvm_gsi_has_callbacks(struct domain *d, unsigned int gsi)

I think a function like this would want to have all const inputs,
and it looks to be possible thanks to hvm_domain_irq() yielding
a pointer.

> --- a/xen/arch/x86/hvm/vioapic.c
> +++ b/xen/arch/x86/hvm/vioapic.c
> @@ -393,6 +393,7 @@ static void eoi_callback(unsigned int vector, void *data)
>          for ( pin = 0; pin < vioapic->nr_pins; pin++ )
>          {
>              union vioapic_redir_entry *ent = &vioapic->redirtbl[pin];
> +            unsigned int gsi = vioapic->base_gsi + pin;
>  
>              if ( ent->fields.vector != vector )
>                  continue;
> @@ -402,13 +403,17 @@ static void eoi_callback(unsigned int vector, void *data)
>              if ( is_iommu_enabled(d) )
>              {
>                  spin_unlock(&d->arch.hvm.irq_lock);
> -                hvm_dpci_eoi(vioapic->base_gsi + pin, ent);
> +                hvm_dpci_eoi(gsi, ent);
>                  spin_lock(&d->arch.hvm.irq_lock);
>              }
>  
> +            spin_unlock(&d->arch.hvm.irq_lock);
> +            hvm_gsi_execute_callbacks(gsi, ent);
> +            spin_lock(&d->arch.hvm.irq_lock);

Iirc on an earlier patch Paul has already expressed concern about such
transient unlocking. At the very least I'd expect the description to
say why this is safe. One particular question would be in how far what
ent points to can't change across this window, disconnecting the uses
of it in the 1st locked section from those in the 2nd one.

> @@ -620,7 +628,7 @@ static int ioapic_load(struct domain *d, hvm_domain_context_t *h)
>           * Add a callback for each possible vector injected by a redirection
>           * entry.
>           */
> -        if ( vector < 16 || !ent->fields.remote_irr ||
> +        if ( vector < 16 ||
>               (delivery_mode != dest_LowestPrio && delivery_mode != dest_Fixed) )
>              continue;

I'm having trouble identifying what this gets replaced by.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 12:32:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 12:32:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10950.29094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVwEp-0006tc-0g; Fri, 23 Oct 2020 12:32:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10950.29094; Fri, 23 Oct 2020 12:32:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVwEo-0006tV-Th; Fri, 23 Oct 2020 12:32:06 +0000
Received: by outflank-mailman (input) for mailman id 10950;
 Fri, 23 Oct 2020 12:32:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVwEn-0006tP-5V
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 12:32:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64e30598-3705-42d2-9795-46d9ecbeec62;
 Fri, 23 Oct 2020 12:32:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C197EAE1A;
 Fri, 23 Oct 2020 12:32:03 +0000 (UTC)
X-Inumbo-ID: 64e30598-3705-42d2-9795-46d9ecbeec62
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603456323;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A8FWvXP1rN7ufKIsGtT6ekIiSnsRFQrKSbVD85n1nww=;
	b=UPM7tbPClMwKTPyX6TZCX3wzOHHaIehO9WXUCUzovLvr1/+tM9uH1HDu4NtJdEfsxFHvjd
	LyzgRA1xXttlAzyPyby/UzVIOcGVAtDrUxqn9+6UGjNuWsNXQ7LJLUu8ih7inE/KZXtNJX
	r+Vjvp6qo7k7oAbAYaU6kGmInwLxXhA=
Subject: Re: [PATCH v2 07/11] x86/dpci: move code
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-8-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <26aabe08-f44c-95f3-35d9-057abf6fa8ef@suse.com>
Date: Fri, 23 Oct 2020 14:32:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-8-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.09.2020 12:41, Roger Pau Monne wrote:
> This is code movement in order to simplify further changes.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
albeit ...

> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -276,6 +276,92 @@ static struct vcpu *vector_hashing_dest(const struct domain *d,
>      return dest;
>  }
>  
> +static void hvm_pirq_eoi(struct pirq *pirq,
> +                         const union vioapic_redir_entry *ent)
> +{
> +    struct hvm_pirq_dpci *pirq_dpci;
> +
> +    if ( !pirq )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return;
> +    }
> +
> +    pirq_dpci = pirq_dpci(pirq);
> +
> +    /*
> +     * No need to get vector lock for timer
> +     * since interrupt is still not EOIed
> +     */
> +    if ( --pirq_dpci->pending ||
> +         (ent && ent->fields.mask) ||
> +         !pt_irq_need_timer(pirq_dpci->flags) )
> +        return;
> +
> +    stop_timer(&pirq_dpci->timer);
> +    pirq_guest_eoi(pirq);
> +}
> +
> +static void __hvm_dpci_eoi(struct domain *d,

... could I talk you into dropping one of the two leading underscores
while moving the thing?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 12:48:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 12:48:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10958.29106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVwTx-00087h-Be; Fri, 23 Oct 2020 12:47:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10958.29106; Fri, 23 Oct 2020 12:47:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVwTx-00087a-7x; Fri, 23 Oct 2020 12:47:45 +0000
Received: by outflank-mailman (input) for mailman id 10958;
 Fri, 23 Oct 2020 12:47:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVwTv-00087V-HN
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 12:47:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a4e752b6-d737-40ce-8a0b-13ade3d34d38;
 Fri, 23 Oct 2020 12:47:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C00C2ABBE;
 Fri, 23 Oct 2020 12:47:41 +0000 (UTC)
X-Inumbo-ID: a4e752b6-d737-40ce-8a0b-13ade3d34d38
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603457261;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=y0WX4FA6Hy6b6OwtZDzIwH6drk/1fu8vfXyC+x29rBQ=;
	b=BST/sUpQxoBpS7P4aaSO9XB0jbU+qQuIO3d3NOWVN/ffV4D56j0284de5JDXq6YM+hrfYB
	2bU1ROiU9l57WghWe/cwQHDW0hUJ/ENhqZJ4KbmF90LKQG1b7OWa92aFR5rvMf7X+OAj05
	iLSPjQ08t3HQWfsD7weqAOF21VIf/Zs=
Subject: Re: [PATCH v2 08/11] x86/dpci: switch to use a GSI EOI callback
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-9-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4b37f6f4-3dbd-46c9-890b-f3b0205fd661@suse.com>
Date: Fri, 23 Oct 2020 14:47:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-9-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 12:41, Roger Pau Monne wrote:
> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -327,9 +327,10 @@ static void hvm_gsi_eoi(struct domain *d, unsigned int gsi,
>      hvm_pirq_eoi(pirq, ent);
>  }
>  
> -void hvm_dpci_eoi(unsigned int guest_gsi, const union vioapic_redir_entry *ent)
> +static void dpci_eoi(unsigned int guest_gsi, void *data)
>  {
>      struct domain *d = current->domain;
> +    const union vioapic_redir_entry *ent = data;
>      const struct hvm_irq_dpci *hvm_irq_dpci;
>      const struct hvm_girq_dpci_mapping *girq;
>  
> @@ -565,7 +566,7 @@ int pt_irq_create_bind(
>              unsigned int link;
>  
>              digl = xmalloc(struct dev_intx_gsi_link);
> -            girq = xmalloc(struct hvm_girq_dpci_mapping);
> +            girq = xzalloc(struct hvm_girq_dpci_mapping);
>  
>              if ( !digl || !girq )
>              {
> @@ -578,11 +579,22 @@ int pt_irq_create_bind(
>              girq->bus = digl->bus = pt_irq_bind->u.pci.bus;
>              girq->device = digl->device = pt_irq_bind->u.pci.device;
>              girq->intx = digl->intx = pt_irq_bind->u.pci.intx;
> -            list_add_tail(&digl->list, &pirq_dpci->digl_list);
> +            girq->cb.callback = dpci_eoi;
>  
>              guest_gsi = hvm_pci_intx_gsi(digl->device, digl->intx);
>              link = hvm_pci_intx_link(digl->device, digl->intx);
>  
> +            rc = hvm_gsi_register_callback(d, guest_gsi, &girq->cb);

So this is where my question on the earlier patch gets answered:
You utilize passing NULL data to the callback to actually get
passed the IO-APIC redir entry pointer into the callback. This is
perhaps okay in principle if it was half way visible. May I ask
that at the very least instead of switching to xzalloc above you
set ->data to NULL here explicitly, accompanied by a comment on
the effect?

However, I wonder whether it wouldn't be better to have the
callback be passed const union vioapic_redir_entry * right away.
Albeit I haven't looked at the later patches yet, where it may
well be I'd find arguments against.

> @@ -590,8 +602,17 @@ int pt_irq_create_bind(
>          }
>          else
>          {
> +            struct hvm_gsi_eoi_callback *cb =
> +                xzalloc(struct hvm_gsi_eoi_callback);

I can't seem to be able to spot anywhere that this would get freed
(except on an error path in this function).

>              ASSERT(is_hardware_domain(d));
>  
> +            if ( !cb )
> +            {
> +                spin_unlock(&d->event_lock);
> +                return -ENOMEM;
> +            }
> +
>              /* MSI_TRANSLATE is not supported for the hardware domain. */
>              if ( pt_irq_bind->irq_type != PT_IRQ_TYPE_PCI ||
>                   pirq >= hvm_domain_irq(d)->nr_gsis )
> @@ -601,6 +622,19 @@ int pt_irq_create_bind(
>                  return -EINVAL;
>              }

There's an error path here where you don't free cb, and I think
one or two more further down (where you then also may need to
unregister it first).

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 13:00:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 13:00:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10980.29142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVwfw-0001d2-Oo; Fri, 23 Oct 2020 13:00:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10980.29142; Fri, 23 Oct 2020 13:00:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVwfw-0001cv-LQ; Fri, 23 Oct 2020 13:00:08 +0000
Received: by outflank-mailman (input) for mailman id 10980;
 Fri, 23 Oct 2020 13:00:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVwfv-0001Z8-6p
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:00:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a89fd82a-be83-484f-886d-862fc6e7d5d6;
 Fri, 23 Oct 2020 13:00:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVwfr-0003Oy-FB; Fri, 23 Oct 2020 13:00:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVwfr-0003R5-2y; Fri, 23 Oct 2020 13:00:03 +0000
X-Inumbo-ID: a89fd82a-be83-484f-886d-862fc6e7d5d6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4x5X74mf7UfWgJG24rGYwYtiVhY42QZKSLbEpYcdMjo=; b=XkTTaLM4Qm9JwEnJ1R1Csl1DPr
	iU0zDxUCvbVRyx82Lyrf9IoQ6bwvGtstRjEAT7MCHsIQHtGYHc2xsqgBY9uMKgw1urup0QF4gW2UK
	h46bIIBqyV9vRWrHk9mQjWlpS2t79AxYEaAJNHbk5EGVHQCOPuwXYCHPdLcf1ISr5Vss=;
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich
 <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
Date: Fri, 23 Oct 2020 14:00:00 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 23/10/2020 12:35, Rahul Singh wrote:
> Hello,
> 
>> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>> the Linux SMMUv3 driver.
>>>>>> Major differences between the Linux driver are as follows:
>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>     that supports both Stage-1 and Stage-2 translations.
>>>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>>>     capability to share the page tables with the CPU.
>>>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>>>     and priority queue IRQ handling.
>>>>>
>>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>>> they are done).
>>>>>
>>>>> Do you know why Linux is using thread. Is it because of long running
>>>>> operations?
>>>>
>>>> Yes, you are right: because of long-running operations Linux is using
>>>> threaded IRQs.
>>>>
>>>> SMMUv3 reports faults/events based on memory-based circular buffer queues,
>>>> not on registers. As per my understanding, it is time-consuming to process
>>>> the memory-based queues in interrupt context, which is why Linux is using
>>>> threaded IRQs to process the faults/events from the SMMU.
>>>>
>>>> I didn’t find any other mechanism in XEN in place of tasklets to defer the
>>>> work, that’s why I used tasklets in XEN as a replacement for threaded IRQs.
>>>> If we do all the work in interrupt context we will make XEN less responsive.
>>>
>>> So we need to make sure that Xen continue to receives interrupts, but we also
>>> need to make sure that a vCPU bound to the pCPU is also responsive.
>>>
>>>>
>>>> If you know of another solution in XEN that can be used to defer the work
>>>> from the interrupt, please let me know and I will try to use it.
>>>
>>> One of my work colleagues encountered a similar problem recently. He had a
>>> long-running tasklet and wanted it broken down into smaller chunks.
>>>
>>> We decided to use a timer to reschedule the tasklet in the future. This allows
>>> the scheduler to run other loads (e.g. vCPU) for some time.
>>>
>>> This is pretty hackish but I couldn't find a better solution as tasklets have
>>> high priority.
>>>
>>> Maybe the others will have a better idea.
>>
>> Julien's suggestion is a good one.
>>
>> But I think tasklets can be configured to be called from the idle_loop,
>> in which case they are not run in interrupt context?
>>
> 
>   Yes, you are right: the tasklet will be scheduled from the idle_loop, which is not interrupt context.

This depends on your tasklet. Some will run from the softirq context 
which is usually (for Arm) on the return of an exception.

>>
>>>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>>>>     access functions based on atomic operations implemented in Linux.
>>>>>
>>>>> Can you provide more details?
>>>>
>>>> I tried to port the latest version of the SMMUv3 code, then I observed that
>>>> in order to port that code I would also have to port the atomic operations
>>>> implemented in Linux to XEN, as the latest Linux code uses atomic operations
>>>> to process the command queues (atomic_cond_read_relaxed(),
>>>> atomic_long_cond_read_relaxed(), atomic_fetch_andnot_relaxed()).
>>>
>>> Thank you for the explanation. I think it would be best to import the atomic
>>> helpers and use the latest code.
>>>
>>> This will ensure that we don't re-introduce bugs and also buy us some time
>>> before the Linux and Xen driver diverge again too much.
>>>
>>> Stefano, what do you think?
>>
>> I think you are right.
> 
> Yes, I agree with you on keeping the XEN code in sync with the Linux code; that's why I started by porting the Linux atomic operations to XEN. Then I realised that it is not straightforward to port the atomic operations and it requires a lot of effort and testing. Therefore I decided to port the code from before the atomic operations were introduced in Linux.

Hmmm... I would not have expected a lot of effort required to add the 3 
atomic operations above. Are you trying to also port the LSE support at 
the same time?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 13:26:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 13:26:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10984.29153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVx58-0003oj-Uh; Fri, 23 Oct 2020 13:26:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10984.29153; Fri, 23 Oct 2020 13:26:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVx58-0003oc-Re; Fri, 23 Oct 2020 13:26:10 +0000
Received: by outflank-mailman (input) for mailman id 10984;
 Fri, 23 Oct 2020 13:26:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuJs=D6=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kVx57-0003oW-5Z
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:26:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1cd2bb22-cdc2-426f-bf66-b68664d48636;
 Fri, 23 Oct 2020 13:26:08 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kVx56-0003u5-6c
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:26:08 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kVx56-0005lC-4K
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:26:08 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kVx52-0007fs-Rs; Fri, 23 Oct 2020 14:26:04 +0100
X-Inumbo-ID: 1cd2bb22-cdc2-426f-bf66-b68664d48636
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=b9VfwqB3+vPR0wPmnXFCMvn45MMEFkbthDp1NTUn50I=; b=CLI1+68mvERSD0wwVpCkr7Ce/e
	WzMz1u3R+3rUmcuAo0l6QToEBWd2gAAGAfE19BBQcCJZc8QYZnQN8Ek6UZuo7CpkjvdMIc9vJHoWf
	gsGL0OgOyJKDcUkZiLAy7lAUY+qpgub0FNY4nRBrf0rU2XwMTGcJ91K/02zlVyWG3qMY=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24466.55788.624955.556357@mariner.uk.xensource.com>
Date: Fri, 23 Oct 2020 14:26:04 +0100
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Samuel Thibault <samuel.thibault@ens-lyon.org>,
    Christian Lindig <christian.lindig@citrix.com>,
    David Scott <dave@recoil.org>
Subject: Re: [PATCH 0/3] tools: avoid creating symbolic links during make
In-Reply-To: <20201002142214.3438-1-jgross@suse.com>
References: <20201002142214.3438-1-jgross@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Juergen Gross writes ("[PATCH 0/3] tools: avoid creating symbolic links during make"):
> The rework of the Xen library build introduced creating some additional
> symbolic links during the build process.
> 
> This series is undoing that by moving all official Xen library headers
> to tools/include and by using include paths and the vpath directive
> when access to some private headers of another directory is needed.

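(For reference, the vpath approach described above amounts to something
like the following in a library Makefile -- directory and file names
here are purely illustrative, not taken from the actual tree:)

    # Hypothetical fragment of tools/libs/foo/Makefile.
    # Official headers now live centrally:
    CFLAGS += -I$(XEN_ROOT)/tools/include

    # Private headers of a sibling library are found via vpath,
    # instead of symlinking them into this directory:
    vpath %.h $(XEN_ROOT)/tools/libs/bar
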
I'm OK with these changes and inclined to give my ack and commit all
three.

I did have one observation: it is rather odd that each of the
autogenerated header files is generated by the relevant
tools/libs/foo/Makefile, while the file itself lives in tools/include/.

This is particularly odd given that tools/include/ has a Makefile of
its own which mostly does install stuff.

Can we at least have a comment in tools/include/Makefile saying that
it is forbidden to add rules which build include files here, and
suggesting to the reader which other Makefiles to read?
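
Something along these lines, say (hypothetical wording, to be adjusted):

    # This Makefile only installs headers.  Do NOT add rules here
    # which generate include files: autogenerated headers are built
    # by the Makefile of the library they belong to (see
    # tools/libs/*/Makefile), even though the output lands in this
    # directory.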

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 13:30:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 13:30:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10988.29166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVx9H-0004gE-Jc; Fri, 23 Oct 2020 13:30:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10988.29166; Fri, 23 Oct 2020 13:30:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVx9H-0004g7-Gb; Fri, 23 Oct 2020 13:30:27 +0000
Received: by outflank-mailman (input) for mailman id 10988;
 Fri, 23 Oct 2020 13:30:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuJs=D6=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kVx9G-0004g2-31
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:30:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f5d3a22-5a3c-4198-bbd8-329b7d92ef3c;
 Fri, 23 Oct 2020 13:30:25 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kVx9F-00040F-1f
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:30:25 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kVx9F-0005yE-0t
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:30:25 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kVx9B-0007sr-Sp; Fri, 23 Oct 2020 14:30:21 +0100
X-Inumbo-ID: 5f5d3a22-5a3c-4198-bbd8-329b7d92ef3c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=f3328DQndh6pgsRQk5BdExZnwN+spqoLKrPSfDlYn4c=; b=OW7Zd/lYX2vE/cd3aOPxIn7aBJ
	fETzneocVpImFwfzfP4/fzVPb8HlVF2OYOe9k/pG1SzyCSH+5KuAe+cXvwVNMYFg0O9oZp5DGdVeU
	imuDhFLsWl1YRNRjywUqVBPgc2/ZI/QJ8AKn5n1uOTKODroAio1kjk4+M0i3xxPoJhoI=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24466.56045.693953.101465@mariner.uk.xensource.com>
Date: Fri, 23 Oct 2020 14:30:21 +0100
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH 0/2] maintainers: correct some entries
In-Reply-To: <20200909115944.4181-1-jgross@suse.com>
References: <20200909115944.4181-1-jgross@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Juergen Gross writes ("[PATCH 0/2] maintainers: correct some entries"):
> Fix some paths after reorg of library locations, and drop unreachable
> maintainer.

Thanks, both

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

and committed.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 13:38:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 13:38:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.10994.29186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxH7-000555-FW; Fri, 23 Oct 2020 13:38:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 10994.29186; Fri, 23 Oct 2020 13:38:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxH7-00054y-Bn; Fri, 23 Oct 2020 13:38:33 +0000
Received: by outflank-mailman (input) for mailman id 10994;
 Fri, 23 Oct 2020 13:38:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qXPM=D6=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1kVxH6-00054t-Lk
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:38:32 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id baae3058-49e9-4e82-9e4e-d6c9268e788f;
 Fri, 23 Oct 2020 13:38:31 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09NDYJeI087810;
 Fri, 23 Oct 2020 13:38:12 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 34ak16uh2s-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 23 Oct 2020 13:38:11 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09NDZlV4073341;
 Fri, 23 Oct 2020 13:38:11 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3020.oracle.com with ESMTP id 348ah20hea-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 23 Oct 2020 13:38:11 +0000
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 09NDc2iq013614;
 Fri, 23 Oct 2020 13:38:03 GMT
Received: from [10.74.86.114] (/10.74.86.114)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 23 Oct 2020 06:38:02 -0700
X-Inumbo-ID: baae3058-49e9-4e82-9e4e-d6c9268e788f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=f68vJI26CdskCy/ucIn4jhkM8wmlpjwl4MW/M1mNHhg=;
 b=PfdBGxLVfz4qeCvoJmnJEqhZdQbNZ1LH0KcIGHn1uWllnU0bExSIMwArZ086J/bcI0W/
 npbxmjzJIXAymSYSw4RZGVPRw67Xw55VB709eHh7kQqajaNOa3OcdyEw0CXukKX+gEoR
 9l3An1MhlDJtG6SnH4BISnKpCZw8L5/jiN5sXX/lqToCvdGdRA3VSIb5immyNSYw8Hmz
 hiefXPzRYqCbkt9ORvymaTfBnr4oJ3Vgk6xEUxJnOQCs9/nek1LRZ1t+eWjfHvZWl90I
 1gj0dj1JEg5ZmPDBOhgbKaLDNS3DAGm3FRMav1auuVyhqeydK5wgRrj4J23ZzPsiOb9R KA== 
Subject: Re: [PATCH v2 0/5] xen: event handling cleanup
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org, x86@kernel.org,
        linux-doc@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
        Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
        Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
        Jonathan Corbet <corbet@lwn.net>
References: <20201022094907.28560-1-jgross@suse.com>
From: boris.ostrovsky@oracle.com
Organization: Oracle Corporation
Message-ID: <71ea84e2-31d8-328d-fce6-fafb2010a229@oracle.com>
Date: Fri, 23 Oct 2020 09:37:59 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <20201022094907.28560-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9782 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 mlxscore=0 phishscore=0
 malwarescore=0 spamscore=0 suspectscore=0 bulkscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010230095
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9782 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 phishscore=0
 priorityscore=1501 clxscore=1011 malwarescore=0 mlxscore=0 adultscore=0
 lowpriorityscore=0 impostorscore=0 spamscore=0 mlxlogscore=999
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010230095


On 10/22/20 5:49 AM, Juergen Gross wrote:
> Do some cleanups in Xen event handling code.
>
> Changes in V2:
> - addressed comments
>
> Juergen Gross (5):
>   xen: remove no longer used functions
>   xen/events: make struct irq_info private to events_base.c
>   xen/events: only register debug interrupt for 2-level events
>   xen/events: unmask a fifo event channel only if it was masked
>   Documentation: add xen.fifo_events kernel parameter description
>
>  .../admin-guide/kernel-parameters.txt         |  7 ++
>  arch/x86/xen/smp.c                            | 19 ++--
>  arch/x86/xen/xen-ops.h                        |  2 +
>  drivers/xen/events/events_2l.c                |  7 +-
>  drivers/xen/events/events_base.c              | 94 +++++++++++++------
>  drivers/xen/events/events_fifo.c              |  9 +-
>  drivers/xen/events/events_internal.h          | 70 ++------------
>  include/xen/events.h                          |  8 --
>  8 files changed, 102 insertions(+), 114 deletions(-)
>

Applied to for-linus-5.10b.


-boris



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 13:58:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 13:58:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11001.29197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxaa-00072f-6J; Fri, 23 Oct 2020 13:58:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11001.29197; Fri, 23 Oct 2020 13:58:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxaa-00072Y-3K; Fri, 23 Oct 2020 13:58:40 +0000
Received: by outflank-mailman (input) for mailman id 11001;
 Fri, 23 Oct 2020 13:58:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kVxaY-00072T-JG
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 13:58:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 31063ba2-e521-4810-ad6f-75d0e7c6ff94;
 Fri, 23 Oct 2020 13:58:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVxaS-0004Zl-K8; Fri, 23 Oct 2020 13:58:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kVxaS-0001rj-9E; Fri, 23 Oct 2020 13:58:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kVxaS-0004UV-8k; Fri, 23 Oct 2020 13:58:32 +0000
X-Inumbo-ID: 31063ba2-e521-4810-ad6f-75d0e7c6ff94
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MPWhS11rxBWwCMRoLIVydL3AhqJ1jBrgK67/dsySvV8=; b=03JXJB1zlYpsuiK61yJgZlx8Ea
	zQw11Nez+qLr7iI6XG9JqUMO0TFdYFv/2WRH+Oshk6nidQkIC4LFI/wrRPiKwyanY5ZDb33O60EQ9
	28mb+fCIvZHyk3fKb12OhqJYqF4kdbXK/cQWrS1tAr4u5YpyUJhLnn42wa3C8lDxjSgE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156118-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156118: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 13:58:32 +0000

flight 156118 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156118/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   64 days
Failing since        152659  2020-08-21 14:07:39 Z   62 days  123 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    0 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 14:19:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 14:19:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11015.29253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxuz-0000q5-10; Fri, 23 Oct 2020 14:19:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11015.29253; Fri, 23 Oct 2020 14:19:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxuy-0000pw-Tv; Fri, 23 Oct 2020 14:19:44 +0000
Received: by outflank-mailman (input) for mailman id 11015;
 Fri, 23 Oct 2020 14:19:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVxux-0000me-Li
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:19:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e36debdc-7f34-4275-b6a0-68932db8629c;
 Fri, 23 Oct 2020 14:19:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 51A85B2AE;
 Fri, 23 Oct 2020 14:19:37 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kVxux-0000me-Li
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:19:43 +0000
X-Inumbo-ID: e36debdc-7f34-4275-b6a0-68932db8629c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id e36debdc-7f34-4275-b6a0-68932db8629c;
	Fri, 23 Oct 2020 14:19:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603462777;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hnG6mBS9gSVm05A1WvJ2mFbfRf9HUb0Klxqf2yEGtiU=;
	b=VOpIkOShAiYoVQ1Z1LvjYBSS54URrWcvP2/Reabmd31f6qX9kAni+cmJZQfeadW3yIMAHQ
	9YqLBIApz9GgEoetRJpQWmYURjUzbwrDujymf2OOFVaXTfmf3JF7ASqz6uVJMAlR4sWab0
	VQnBclxUxEvJfUNiN2r22dTu/ksXQ0Q=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 51A85B2AE;
	Fri, 23 Oct 2020 14:19:37 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH v2 2/3] tools/libs/guest: don't use symbolic links for xenctrl headers
Date: Fri, 23 Oct 2020 16:19:33 +0200
Message-Id: <20201023141934.20062-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201023141934.20062-1-jgross@suse.com>
References: <20201023141934.20062-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using symbolic links to access the xenctrl private
headers, use an include path.
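
The change can be sketched outside the tree with a toy example (all paths
below are hypothetical, not the real Xen layout): a header reachable through
a symlink in the build directory is equally reachable through a compiler
include path, with no link to create or clean up.

```shell
# Toy sketch (hypothetical paths): replace a build-directory symlink
# with a -I include path when resolving a private header.
mkdir -p /tmp/xc_demo/ctrl /tmp/xc_demo/guest
printf '#define XC_PRIVATE 1\n' > /tmp/xc_demo/ctrl/xc_private.h

cat > /tmp/xc_demo/guest/demo.c <<'EOF'
#include "xc_private.h"
int main(void) { return XC_PRIVATE - 1; } /* exits 0 when the header is found */
EOF

# Old approach: symlink the header next to the consuming source.
ln -sf ../ctrl/xc_private.h /tmp/xc_demo/guest/xc_private.h
cc -o /tmp/xc_demo/demo_old /tmp/xc_demo/guest/demo.c

# New approach: drop the link and point the compiler at the owning directory.
rm /tmp/xc_demo/guest/xc_private.h
cc -I/tmp/xc_demo/ctrl -o /tmp/xc_demo/demo_new /tmp/xc_demo/guest/demo.c
```

The patch below does the in-tree equivalent: it deletes the $(LINK_FILES)
rule and adds a single -I$(XEN_libxenctrl) to CFLAGS.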

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 tools/libs/guest/Makefile | 9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index 5b4ad313cc..1c729040b3 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -6,11 +6,6 @@ ifeq ($(CONFIG_LIBXC_MINIOS),y)
 override CONFIG_MIGRATE := n
 endif
 
-LINK_FILES := xc_private.h xc_core.h xc_core_x86.h xc_core_arm.h xc_bitops.h
-
-$(LINK_FILES):
-	ln -sf $(XEN_ROOT)/tools/libs/ctrl/$(notdir $@) $@
-
 SRCS-y += xg_private.c
 SRCS-y += xg_domain.c
 SRCS-y += xg_suspend.c
@@ -29,6 +24,8 @@ else
 SRCS-y += xg_nomigrate.c
 endif
 
+CFLAGS += -I$(XEN_libxenctrl)
+
 vpath %.c ../../../xen/common/libelf
 CFLAGS += -I../../../xen/common/libelf
 
@@ -111,8 +108,6 @@ $(eval $(genpath-target))
 
 xc_private.h: _paths.h
 
-$(LIB_OBJS) $(PIC_OBJS): $(LINK_FILES)
-
 .PHONY: cleanlocal
 cleanlocal:
 	rm -f libxenguest.map
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 14:19:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 14:19:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11014.29241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxuw-0000np-K4; Fri, 23 Oct 2020 14:19:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11014.29241; Fri, 23 Oct 2020 14:19:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxuw-0000ni-Gl; Fri, 23 Oct 2020 14:19:42 +0000
Received: by outflank-mailman (input) for mailman id 11014;
 Fri, 23 Oct 2020 14:19:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVxuv-0000nb-9H
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:19:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0940eab8-7c47-487b-90cb-c3622471516b;
 Fri, 23 Oct 2020 14:19:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2CDCCB2A4;
 Fri, 23 Oct 2020 14:19:37 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kVxuv-0000nb-9H
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:19:41 +0000
X-Inumbo-ID: 0940eab8-7c47-487b-90cb-c3622471516b
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0940eab8-7c47-487b-90cb-c3622471516b;
	Fri, 23 Oct 2020 14:19:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603462777;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iY9Qn8kSLsYhVemyYxA+92+y1Yj78ZJBQagEpKpdTSU=;
	b=PJVlFY6xGJL5feekzk/MDJOPK7IQQ0Lq91cCqNE5NDSNzB5fbH+j1ZvcgfJ30gMRrqzNZy
	TZkCog/srgpj6QIPv+Y/R3AVf97Zn/YT0XdU8NAwxvqYddu3Ac+ylgbabO4nQzi7mjs810
	cAHMPnkoqBUyoPWHh7RoqcxbxwqCV+Q=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2CDCCB2A4;
	Fri, 23 Oct 2020 14:19:37 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH v2 1/3] tools/libs: move official headers to common directory
Date: Fri, 23 Oct 2020 16:19:32 +0200
Message-Id: <20201023141934.20062-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201023141934.20062-1-jgross@suse.com>
References: <20201023141934.20062-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of each library having its own include directory, move the
official headers to tools/include. This removes the need to link those
headers into tools/include, and library-specific include paths are no
longer needed when building Xen.

While at it remove setting of the unused variable
PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
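
The effect on build flags can be sketched with a toy layout (hypothetical
paths, not the real tree): once every official header lives in one common
directory, consumers need a single include flag instead of one per library.

```shell
# Toy sketch (hypothetical layout): per-library include dirs collapse
# into one shared directory, so one -I flag replaces many.
mkdir -p /tmp/inc_demo/libs/ctrl/include /tmp/inc_demo/libs/store/include \
         /tmp/inc_demo/include
touch /tmp/inc_demo/libs/ctrl/include/xenctrl.h \
      /tmp/inc_demo/libs/store/include/xenstore.h

# Before: each library contributes its own -I flag to every consumer.
OLD_CFLAGS="-I/tmp/inc_demo/libs/ctrl/include -I/tmp/inc_demo/libs/store/include"

# After: headers are gathered in one place; a single flag covers them all.
mv /tmp/inc_demo/libs/ctrl/include/xenctrl.h \
   /tmp/inc_demo/libs/store/include/xenstore.h /tmp/inc_demo/include/
NEW_CFLAGS="-I/tmp/inc_demo/include"
echo "$NEW_CFLAGS"
```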

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 .gitignore                                    |  5 ++--
 stubdom/mini-os.mk                            |  2 +-
 tools/Rules.mk                                |  5 ++--
 tools/include/Makefile                        |  6 ++++
 tools/{libs/vchan => }/include/libxenvchan.h  |  0
 tools/{libs/light => }/include/libxl.h        |  0
 tools/{libs/light => }/include/libxl_event.h  |  0
 tools/{libs/light => }/include/libxl_json.h   |  0
 tools/{libs/light => }/include/libxl_utils.h  |  0
 tools/{libs/light => }/include/libxl_uuid.h   |  0
 tools/{libs/util => }/include/libxlutil.h     |  0
 tools/{libs/call => }/include/xencall.h       |  0
 tools/{libs/ctrl => }/include/xenctrl.h       |  0
 .../{libs/ctrl => }/include/xenctrl_compat.h  |  0
 .../devicemodel => }/include/xendevicemodel.h |  0
 tools/{libs/evtchn => }/include/xenevtchn.h   |  0
 .../include/xenforeignmemory.h                |  0
 tools/{libs/gnttab => }/include/xengnttab.h   |  0
 tools/{libs/guest => }/include/xenguest.h     |  0
 tools/{libs/hypfs => }/include/xenhypfs.h     |  0
 tools/{libs/stat => }/include/xenstat.h       |  0
 .../compat => include/xenstore-compat}/xs.h   |  0
 .../xenstore-compat}/xs_lib.h                 |  0
 tools/{libs/store => }/include/xenstore.h     |  0
 tools/{xenstore => include}/xenstore_lib.h    |  0
 .../{libs/toolcore => }/include/xentoolcore.h |  0
 .../include/xentoolcore_internal.h            |  0
 tools/{libs/toollog => }/include/xentoollog.h |  0
 tools/libs/call/Makefile                      |  3 --
 tools/libs/ctrl/Makefile                      |  3 --
 tools/libs/devicemodel/Makefile               |  3 --
 tools/libs/evtchn/Makefile                    |  2 --
 tools/libs/foreignmemory/Makefile             |  3 --
 tools/libs/gnttab/Makefile                    |  3 --
 tools/libs/guest/Makefile                     |  3 --
 tools/libs/hypfs/Makefile                     |  3 --
 tools/libs/libs.mk                            | 10 ++-----
 tools/libs/light/Makefile                     | 28 ++++++++-----------
 tools/libs/stat/Makefile                      |  2 --
 tools/libs/store/Makefile                     | 11 +++-----
 tools/libs/toolcore/Makefile                  |  9 +++---
 tools/libs/toollog/Makefile                   |  2 --
 tools/libs/util/Makefile                      |  3 --
 tools/libs/vchan/Makefile                     |  3 --
 tools/ocaml/libs/xentoollog/Makefile          |  2 +-
 tools/ocaml/libs/xentoollog/genlevels.py      |  2 +-
 46 files changed, 36 insertions(+), 77 deletions(-)
 rename tools/{libs/vchan => }/include/libxenvchan.h (100%)
 rename tools/{libs/light => }/include/libxl.h (100%)
 rename tools/{libs/light => }/include/libxl_event.h (100%)
 rename tools/{libs/light => }/include/libxl_json.h (100%)
 rename tools/{libs/light => }/include/libxl_utils.h (100%)
 rename tools/{libs/light => }/include/libxl_uuid.h (100%)
 rename tools/{libs/util => }/include/libxlutil.h (100%)
 rename tools/{libs/call => }/include/xencall.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl_compat.h (100%)
 rename tools/{libs/devicemodel => }/include/xendevicemodel.h (100%)
 rename tools/{libs/evtchn => }/include/xenevtchn.h (100%)
 rename tools/{libs/foreignmemory => }/include/xenforeignmemory.h (100%)
 rename tools/{libs/gnttab => }/include/xengnttab.h (100%)
 rename tools/{libs/guest => }/include/xenguest.h (100%)
 rename tools/{libs/hypfs => }/include/xenhypfs.h (100%)
 rename tools/{libs/stat => }/include/xenstat.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs_lib.h (100%)
 rename tools/{libs/store => }/include/xenstore.h (100%)
 rename tools/{xenstore => include}/xenstore_lib.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore_internal.h (100%)
 rename tools/{libs/toollog => }/include/xentoollog.h (100%)

diff --git a/.gitignore b/.gitignore
index f6865c9cd8..b346a2abf6 100644
--- a/.gitignore
+++ b/.gitignore
@@ -143,7 +143,6 @@ tools/libs/light/test_timedereg
 tools/libs/light/test_fdderegrace
 tools/libs/light/tmp.*
 tools/libs/light/xenlight.pc
-tools/libs/light/include/_*.h
 tools/libs/stat/_paths.h
 tools/libs/stat/headers.chk
 tools/libs/stat/libxenstat.map
@@ -153,7 +152,6 @@ tools/libs/store/list.h
 tools/libs/store/utils.h
 tools/libs/store/xenstore.pc
 tools/libs/store/xs_lib.c
-tools/libs/store/include/xenstore_lib.h
 tools/libs/util/*.pc
 tools/libs/util/_paths.h
 tools/libs/util/libxlu_cfg_y.output
@@ -231,7 +229,8 @@ tools/hotplug/Linux/xendomains
 tools/hotplug/NetBSD/rc.d/xencommons
 tools/hotplug/NetBSD/rc.d/xendriverdomain
 tools/include/acpi
-tools/include/*.h
+tools/include/_libxl*.h
+tools/include/_xentoolcore_list.h
 tools/include/xen/*
 tools/include/xen-xsm/*
 tools/include/xen-foreign/*.(c|h|size)
diff --git a/stubdom/mini-os.mk b/stubdom/mini-os.mk
index 420e9a8771..7e4968e026 100644
--- a/stubdom/mini-os.mk
+++ b/stubdom/mini-os.mk
@@ -5,7 +5,7 @@
 # XEN_ROOT
 # MINIOS_TARGET_ARCH
 
-XENSTORE_CPPFLAGS = -isystem $(XEN_ROOT)/tools/libs/store/include
+XENSTORE_CPPFLAGS = -isystem $(XEN_ROOT)/tools/include
 TOOLCORE_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toolcore
 TOOLLOG_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/toollog
 EVTCHN_PATH = $(XEN_ROOT)/stubdom/libs-$(MINIOS_TARGET_ARCH)/evtchn
diff --git a/tools/Rules.mk b/tools/Rules.mk
index f3e0078927..f61da81f4a 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -87,7 +87,7 @@ endif
 define LIB_defs
  FILENAME_$(1) ?= xen$(1)
  XEN_libxen$(1) = $$(XEN_ROOT)/tools/libs/$(1)
- CFLAGS_libxen$(1) = -I$$(XEN_libxen$(1))/include $$(CFLAGS_xeninclude)
+ CFLAGS_libxen$(1) = $$(CFLAGS_xeninclude)
  SHDEPS_libxen$(1) = $$(foreach use,$$(USELIBS_$(1)),$$(SHLIB_libxen$$(use)))
  LDLIBS_libxen$(1) = $$(SHDEPS_libxen$(1)) $$(XEN_libxen$(1))/lib$$(FILENAME_$(1))$$(libextension)
  SHLIB_libxen$(1) = $$(SHDEPS_libxen$(1)) -Wl,-rpath-link=$$(XEN_libxen$(1))
@@ -97,8 +97,7 @@ $(foreach lib,$(LIBS_LIBS),$(eval $(call LIB_defs,$(lib))))
 
 # code which compiles against libxenctrl get __XEN_TOOLS__ and
 # therefore sees the unstable hypercall interfaces.
-CFLAGS_libxenctrl += $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) -D__XEN_TOOLS__
-CFLAGS_libxenguest += $(CFLAGS_libxenevtchn) $(CFLAGS_libxenforeignmemory)
+CFLAGS_libxenctrl += -D__XEN_TOOLS__
 
 ifeq ($(CONFIG_Linux),y)
 LDLIBS_libxenstore += -ldl
diff --git a/tools/include/Makefile b/tools/include/Makefile
index 4d4313b60d..4d4ec5f974 100644
--- a/tools/include/Makefile
+++ b/tools/include/Makefile
@@ -1,6 +1,12 @@
 XEN_ROOT = $(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
+# Caution: some tools/libs/*/Makefile files generate header files directly in
+# tools/include, and they do the [un]install actions for those, too.
+# In case other headers need to be built in tools/include, this should be
+# taken into account, i.e. there should be no rules added here for generating
+# any tools/include/*.h files.
+
 # Relative to $(XEN_ROOT)/xen/xsm/flask
 FLASK_H_DEPEND := policy/initial_sids
 
diff --git a/tools/libs/vchan/include/libxenvchan.h b/tools/include/libxenvchan.h
similarity index 100%
rename from tools/libs/vchan/include/libxenvchan.h
rename to tools/include/libxenvchan.h
diff --git a/tools/libs/light/include/libxl.h b/tools/include/libxl.h
similarity index 100%
rename from tools/libs/light/include/libxl.h
rename to tools/include/libxl.h
diff --git a/tools/libs/light/include/libxl_event.h b/tools/include/libxl_event.h
similarity index 100%
rename from tools/libs/light/include/libxl_event.h
rename to tools/include/libxl_event.h
diff --git a/tools/libs/light/include/libxl_json.h b/tools/include/libxl_json.h
similarity index 100%
rename from tools/libs/light/include/libxl_json.h
rename to tools/include/libxl_json.h
diff --git a/tools/libs/light/include/libxl_utils.h b/tools/include/libxl_utils.h
similarity index 100%
rename from tools/libs/light/include/libxl_utils.h
rename to tools/include/libxl_utils.h
diff --git a/tools/libs/light/include/libxl_uuid.h b/tools/include/libxl_uuid.h
similarity index 100%
rename from tools/libs/light/include/libxl_uuid.h
rename to tools/include/libxl_uuid.h
diff --git a/tools/libs/util/include/libxlutil.h b/tools/include/libxlutil.h
similarity index 100%
rename from tools/libs/util/include/libxlutil.h
rename to tools/include/libxlutil.h
diff --git a/tools/libs/call/include/xencall.h b/tools/include/xencall.h
similarity index 100%
rename from tools/libs/call/include/xencall.h
rename to tools/include/xencall.h
diff --git a/tools/libs/ctrl/include/xenctrl.h b/tools/include/xenctrl.h
similarity index 100%
rename from tools/libs/ctrl/include/xenctrl.h
rename to tools/include/xenctrl.h
diff --git a/tools/libs/ctrl/include/xenctrl_compat.h b/tools/include/xenctrl_compat.h
similarity index 100%
rename from tools/libs/ctrl/include/xenctrl_compat.h
rename to tools/include/xenctrl_compat.h
diff --git a/tools/libs/devicemodel/include/xendevicemodel.h b/tools/include/xendevicemodel.h
similarity index 100%
rename from tools/libs/devicemodel/include/xendevicemodel.h
rename to tools/include/xendevicemodel.h
diff --git a/tools/libs/evtchn/include/xenevtchn.h b/tools/include/xenevtchn.h
similarity index 100%
rename from tools/libs/evtchn/include/xenevtchn.h
rename to tools/include/xenevtchn.h
diff --git a/tools/libs/foreignmemory/include/xenforeignmemory.h b/tools/include/xenforeignmemory.h
similarity index 100%
rename from tools/libs/foreignmemory/include/xenforeignmemory.h
rename to tools/include/xenforeignmemory.h
diff --git a/tools/libs/gnttab/include/xengnttab.h b/tools/include/xengnttab.h
similarity index 100%
rename from tools/libs/gnttab/include/xengnttab.h
rename to tools/include/xengnttab.h
diff --git a/tools/libs/guest/include/xenguest.h b/tools/include/xenguest.h
similarity index 100%
rename from tools/libs/guest/include/xenguest.h
rename to tools/include/xenguest.h
diff --git a/tools/libs/hypfs/include/xenhypfs.h b/tools/include/xenhypfs.h
similarity index 100%
rename from tools/libs/hypfs/include/xenhypfs.h
rename to tools/include/xenhypfs.h
diff --git a/tools/libs/stat/include/xenstat.h b/tools/include/xenstat.h
similarity index 100%
rename from tools/libs/stat/include/xenstat.h
rename to tools/include/xenstat.h
diff --git a/tools/libs/store/include/compat/xs.h b/tools/include/xenstore-compat/xs.h
similarity index 100%
rename from tools/libs/store/include/compat/xs.h
rename to tools/include/xenstore-compat/xs.h
diff --git a/tools/libs/store/include/compat/xs_lib.h b/tools/include/xenstore-compat/xs_lib.h
similarity index 100%
rename from tools/libs/store/include/compat/xs_lib.h
rename to tools/include/xenstore-compat/xs_lib.h
diff --git a/tools/libs/store/include/xenstore.h b/tools/include/xenstore.h
similarity index 100%
rename from tools/libs/store/include/xenstore.h
rename to tools/include/xenstore.h
diff --git a/tools/xenstore/xenstore_lib.h b/tools/include/xenstore_lib.h
similarity index 100%
rename from tools/xenstore/xenstore_lib.h
rename to tools/include/xenstore_lib.h
diff --git a/tools/libs/toolcore/include/xentoolcore.h b/tools/include/xentoolcore.h
similarity index 100%
rename from tools/libs/toolcore/include/xentoolcore.h
rename to tools/include/xentoolcore.h
diff --git a/tools/libs/toolcore/include/xentoolcore_internal.h b/tools/include/xentoolcore_internal.h
similarity index 100%
rename from tools/libs/toolcore/include/xentoolcore_internal.h
rename to tools/include/xentoolcore_internal.h
diff --git a/tools/libs/toollog/include/xentoollog.h b/tools/include/xentoollog.h
similarity index 100%
rename from tools/libs/toollog/include/xentoollog.h
rename to tools/include/xentoollog.h
diff --git a/tools/libs/call/Makefile b/tools/libs/call/Makefile
index 81c7478efd..4ed201b3b3 100644
--- a/tools/libs/call/Makefile
+++ b/tools/libs/call/Makefile
@@ -12,6 +12,3 @@ SRCS-$(CONFIG_NetBSD)  += netbsd.c
 SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxencall)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/ctrl/Makefile b/tools/libs/ctrl/Makefile
index 0071226d2a..4185dc3f22 100644
--- a/tools/libs/ctrl/Makefile
+++ b/tools/libs/ctrl/Makefile
@@ -62,9 +62,6 @@ $(eval $(genpath-target))
 
 $(LIB_OBJS) $(PIC_OBJS): _paths.h
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenctrl)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 clean: cleanlocal
 
 .PHONY: cleanlocal
diff --git a/tools/libs/devicemodel/Makefile b/tools/libs/devicemodel/Makefile
index 42417958f2..b67fc0fac1 100644
--- a/tools/libs/devicemodel/Makefile
+++ b/tools/libs/devicemodel/Makefile
@@ -12,6 +12,3 @@ SRCS-$(CONFIG_NetBSD)  += compat.c
 SRCS-$(CONFIG_MiniOS)  += compat.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxendevicemodel)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/evtchn/Makefile b/tools/libs/evtchn/Makefile
index aec76641e8..ad01a17b3d 100644
--- a/tools/libs/evtchn/Makefile
+++ b/tools/libs/evtchn/Makefile
@@ -12,5 +12,3 @@ SRCS-$(CONFIG_NetBSD)  += netbsd.c
 SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenevtchn)/include
diff --git a/tools/libs/foreignmemory/Makefile b/tools/libs/foreignmemory/Makefile
index cf444d3c1a..13850f7988 100644
--- a/tools/libs/foreignmemory/Makefile
+++ b/tools/libs/foreignmemory/Makefile
@@ -12,6 +12,3 @@ SRCS-$(CONFIG_NetBSD)  += compat.c netbsd.c
 SRCS-$(CONFIG_MiniOS)  += minios.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenforeignmemory)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
index d8d4d55e27..d86c49d243 100644
--- a/tools/libs/gnttab/Makefile
+++ b/tools/libs/gnttab/Makefile
@@ -14,6 +14,3 @@ SRCS-$(CONFIG_SunOS)   += gnttab_unimp.c gntshr_unimp.c
 SRCS-$(CONFIG_NetBSD)  += gnttab_unimp.c gntshr_unimp.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxengnttab)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/guest/Makefile b/tools/libs/guest/Makefile
index f24732fbcd..5b4ad313cc 100644
--- a/tools/libs/guest/Makefile
+++ b/tools/libs/guest/Makefile
@@ -113,9 +113,6 @@ xc_private.h: _paths.h
 
 $(LIB_OBJS) $(PIC_OBJS): $(LINK_FILES)
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenctrl)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 .PHONY: cleanlocal
 cleanlocal:
 	rm -f libxenguest.map
diff --git a/tools/libs/hypfs/Makefile b/tools/libs/hypfs/Makefile
index 668d68853f..39feca87e8 100644
--- a/tools/libs/hypfs/Makefile
+++ b/tools/libs/hypfs/Makefile
@@ -9,6 +9,3 @@ APPEND_LDFLAGS += -lz
 SRCS-y                 += core.c
 
 include ../libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenhypfs)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 325b7b7cea..959ff91a56 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -47,10 +47,10 @@ endif
 PKG_CONFIG_LOCAL := $(PKG_CONFIG_DIR)/$(PKG_CONFIG)
 
 LIBHEADER ?= $(LIB_FILE_NAME).h
-LIBHEADERS = $(foreach h, $(LIBHEADER), include/$(h))
-LIBHEADERSGLOB = $(foreach h, $(LIBHEADER), $(XEN_ROOT)/tools/include/$(h))
+LIBHEADERS = $(foreach h, $(LIBHEADER), $(XEN_INCLUDE)/$(h))
 
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_PREFIX = $(XEN_ROOT)
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_INCLUDE)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 
 .PHONY: all
@@ -74,14 +74,11 @@ else
 .PHONY: headers.chk
 endif
 
-headers.chk: $(LIBHEADERSGLOB) $(AUTOINCS)
+headers.chk: $(AUTOINCS)
 
 libxen$(LIBNAME).map:
 	echo 'VERS_$(MAJOR).$(MINOR) { global: *; };' >$@
 
-$(LIBHEADERSGLOB): $(LIBHEADERS)
-	for i in $(realpath $(LIBHEADERS)); do ln -sf $$i $(XEN_ROOT)/tools/include; done
-
 lib$(LIB_FILE_NAME).a: $(LIB_OBJS)
 	$(AR) rc $@ $^
 
@@ -123,7 +120,6 @@ clean:
 	rm -f lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR) lib$(LIB_FILE_NAME).so.$(MAJOR)
 	rm -f headers.chk
 	rm -f $(PKG_CONFIG)
-	rm -f $(LIBHEADERSGLOB)
 	rm -f _paths.h
 
 .PHONY: distclean
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index f58a3214e5..3424fdb61b 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -152,7 +152,7 @@ LIBXL_TEST_OBJS += $(foreach t, $(LIBXL_TESTS_INSIDE),libxl_test_$t.opic)
 TEST_PROG_OBJS += $(foreach t, $(LIBXL_TESTS_PROGS),test_$t.o) test_common.o
 TEST_PROGS += $(foreach t, $(LIBXL_TESTS_PROGS),test_$t)
 
-AUTOINCS = _libxl_list.h _paths.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
+AUTOINCS = $(XEN_INCLUDE)/_libxl_list.h _paths.h _libxl_save_msgs_callout.h _libxl_save_msgs_helper.h
 AUTOSRCS = _libxl_save_msgs_callout.c _libxl_save_msgs_helper.c
 
 CLIENTS = testidl libxl-save-helper
@@ -165,9 +165,6 @@ NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(CURDIR)
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 LDUSELIBS-y += $(PTYFUNCS_LIBS)
 LDUSELIBS-$(CONFIG_LIBNL) += $(LIBNL3_LIBS)
 LDUSELIBS-$(CONFIG_Linux) += -luuid
@@ -185,7 +182,7 @@ libxl_x86_acpi.o libxl_x86_acpi.opic: CFLAGS += -I$(XEN_ROOT)/tools
 $(SAVE_HELPER_OBJS): CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenevtchn) $(CFLAGS_libxenguest)
 
 testidl.o: CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenlight)
-testidl.c: libxl_types.idl gentest.py include/libxl.h $(AUTOINCS)
+testidl.c: libxl_types.idl gentest.py $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
 	$(PYTHON) gentest.py libxl_types.idl testidl.c.new
 	mv testidl.c.new testidl.c
 
@@ -200,15 +197,15 @@ libxl.api-ok: check-libxl-api-rules _libxl.api-for-check
 	$(PERL) $^
 	touch $@
 
-_%.api-for-check: include/%.h $(AUTOINCS)
-	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_$*.o) -c -E $< $(APPEND_CFLAGS) \
+_libxl.api-for-check: $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
+	$(CC) $(CPPFLAGS) $(CFLAGS) $(CFLAGS_libxl.o) -c -E $< $(APPEND_CFLAGS) \
 		-DLIBXL_EXTERNAL_CALLERS_ONLY=LIBXL_EXTERNAL_CALLERS_ONLY \
 		>$@.new
 	mv -f $@.new $@
 
-_libxl_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
-	$(PERL) $^ --prefix=libxl >$@.new
-	$(call move-if-changed,$@.new,$@)
+$(XEN_INCLUDE)/_libxl_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
+	$(PERL) $^ --prefix=libxl >$(notdir $@).new
+	$(call move-if-changed,$(notdir $@).new,$@)
 
 _libxl_save_msgs_helper.c _libxl_save_msgs_callout.c \
 _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
@@ -216,13 +213,13 @@ _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
 	$(PERL) -w $< $@ >$@.new
 	$(call move-if-changed,$@.new,$@)
 
-include/libxl.h: _libxl_types.h _libxl_list.h
-include/libxl_json.h: _libxl_types_json.h
+$(XEN_INCLUDE)/libxl.h: $(XEN_INCLUDE)/_libxl_types.h $(XEN_INCLUDE)/_libxl_list.h
+$(XEN_INCLUDE)/libxl_json.h: $(XEN_INCLUDE)/_libxl_types_json.h
 libxl_internal.h: _libxl_types_internal.h _libxl_types_private.h _libxl_types_internal_private.h _paths.h
 libxl_internal_json.h: _libxl_types_internal_json.h
 xl.h: _paths.h
 
-$(LIB_OBJS) $(PIC_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): include/libxl.h
+$(LIB_OBJS) $(PIC_OBJS) $(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS) $(SAVE_HELPER_OBJS): $(XEN_INCLUDE)/libxl.h
 $(LIB_OBJS) $(PIC_OBJS) $(LIBXL_TEST_OBJS): libxl_internal.h
 
 _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_type%.idl gentypes.py idl.py
@@ -234,8 +231,8 @@ _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_
 	$(call move-if-changed,__libxl_type$(stem)_json.h,_libxl_type$(stem)_json.h)
 	$(call move-if-changed,__libxl_type$(stem).c,_libxl_type$(stem).c)
 
-include/_%.h: _%.h
-	cp $< $@
+$(XEN_INCLUDE)/_%.h: _%.h
+	$(call move-if-changed,_$*.h,$(XEN_INCLUDE)/_$*.h)
 
 libxenlight_test.so: $(PIC_OBJS) $(LIBXL_TEST_OBJS)
 	$(CC) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenlight.so.$(MAJOR) $(SHLIB_LDFLAGS) -o $@ $^ $(LDUSELIBS) $(APPEND_LDFLAGS)
@@ -271,7 +268,6 @@ cleanlocal:
 	$(RM) -f testidl.c.new testidl.c *.api-ok
 	$(RM) -f $(TEST_PROGS)
 	$(RM) -rf __pycache__
-	$(RM) -f include/_*.h
 	$(RM) -f libxenlight.map
 	$(RM) -f $(AUTOSRCS) $(AUTOINCS)
 	$(MAKE) -C $(ACPI_PATH) ACPI_BUILD_DIR=$(CURDIR) clean
diff --git a/tools/libs/stat/Makefile b/tools/libs/stat/Makefile
index 5463f5f7ca..8353e96946 100644
--- a/tools/libs/stat/Makefile
+++ b/tools/libs/stat/Makefile
@@ -30,8 +30,6 @@ APPEND_LDFLAGS += $(LDLIBS-y)
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstat)/include
-
 $(LIB_OBJS): _paths.h
 
 PYLIB=bindings/swig/python/_xenstat.so
diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
index 4da502646e..930e763de9 100644
--- a/tools/libs/store/Makefile
+++ b/tools/libs/store/Makefile
@@ -21,12 +21,12 @@ CFLAGS += $(CFLAGS_libxentoolcore)
 CFLAGS += -DXEN_LIB_STORED="\"$(XEN_LIB_STORED)\""
 CFLAGS += -DXEN_RUN_STORED="\"$(XEN_RUN_STORED)\""
 
-LINK_FILES = xs_lib.c include/xenstore_lib.h list.h utils.h
+LINK_FILES = xs_lib.c list.h utils.h
 
 $(LIB_OBJS): $(LINK_FILES)
 
 $(LINK_FILES):
-	ln -sf $(XEN_ROOT)/tools/xenstore/$(notdir $@) $@
+	ln -sf $(XEN_ROOT)/tools/xenstore/$@ $@
 
 xs.opic: CFLAGS += -DUSE_PTHREAD
 ifeq ($(CONFIG_Linux),y)
@@ -35,9 +35,6 @@ else
 PKG_CONFIG_REMOVE += -ldl
 endif
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenstore)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 .PHONY: install
 install: install-headers
 
@@ -45,8 +42,8 @@ install: install-headers
 install-headers:
 	$(INSTALL_DIR) $(DESTDIR)$(includedir)
 	$(INSTALL_DIR) $(DESTDIR)$(includedir)/xenstore-compat
-	$(INSTALL_DATA) include/compat/xs.h $(DESTDIR)$(includedir)/xenstore-compat/xs.h
-	$(INSTALL_DATA) include/compat/xs_lib.h $(DESTDIR)$(includedir)/xenstore-compat/xs_lib.h
+	$(INSTALL_DATA) $(XEN_INCLUDE)/xenstore-compat/xs.h $(DESTDIR)$(includedir)/xenstore-compat/xs.h
+	$(INSTALL_DATA) $(XEN_INCLUDE)/xenstore-compat/xs_lib.h $(DESTDIR)$(includedir)/xenstore-compat/xs_lib.h
 	ln -sf xenstore-compat/xs.h  $(DESTDIR)$(includedir)/xs.h
 	ln -sf xenstore-compat/xs_lib.h $(DESTDIR)$(includedir)/xs_lib.h
 
diff --git a/tools/libs/toolcore/Makefile b/tools/libs/toolcore/Makefile
index 5819bbc8ee..1cf30733c9 100644
--- a/tools/libs/toolcore/Makefile
+++ b/tools/libs/toolcore/Makefile
@@ -3,18 +3,17 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR	= 1
 MINOR	= 0
-AUTOINCS := include/_xentoolcore_list.h
+AUTOINCS := $(XEN_INCLUDE)/_xentoolcore_list.h
 
 SRCS-y	+= handlereg.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
 PKG_CONFIG_DESC := Central support for Xen Hypervisor userland libraries
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxentoolcore)/include
 
 $(LIB_OBJS): $(AUTOINCS)
 $(PIC_OBJS): $(AUTOINCS)
 
-include/_xentoolcore_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
-	$(PERL) $^ --prefix=xentoolcore >$@.new
-	$(call move-if-changed,$@.new,$@)
+$(XEN_INCLUDE)/_xentoolcore_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
+	$(PERL) $^ --prefix=xentoolcore >$(notdir $@).new
+	$(call move-if-changed,$(notdir $@).new,$@)
diff --git a/tools/libs/toollog/Makefile b/tools/libs/toollog/Makefile
index 3f986835d6..dce1b2de85 100644
--- a/tools/libs/toollog/Makefile
+++ b/tools/libs/toollog/Makefile
@@ -8,5 +8,3 @@ SRCS-y	+= xtl_core.c
 SRCS-y	+= xtl_logger_stdio.c
 
 include $(XEN_ROOT)/tools/libs/libs.mk
-
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxentoollog)/include
diff --git a/tools/libs/util/Makefile b/tools/libs/util/Makefile
index 0c9db8027d..b739360be7 100644
--- a/tools/libs/util/Makefile
+++ b/tools/libs/util/Makefile
@@ -39,9 +39,6 @@ NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenutil)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 $(LIB_OBJS) $(PIC_OBJS): $(AUTOINCS) _paths.h
 
 %.c %.h:: %.y
diff --git a/tools/libs/vchan/Makefile b/tools/libs/vchan/Makefile
index 5e18d5b196..83a45d2817 100644
--- a/tools/libs/vchan/Makefile
+++ b/tools/libs/vchan/Makefile
@@ -12,9 +12,6 @@ NO_HEADERS_CHK := y
 
 include $(XEN_ROOT)/tools/libs/libs.mk
 
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_libxenvchan)/include
-$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
-
 clean: cleanlocal
 
 .PHONY: cleanlocal
diff --git a/tools/ocaml/libs/xentoollog/Makefile b/tools/ocaml/libs/xentoollog/Makefile
index 8ae0a784fd..593f9e9e9d 100644
--- a/tools/ocaml/libs/xentoollog/Makefile
+++ b/tools/ocaml/libs/xentoollog/Makefile
@@ -49,7 +49,7 @@ xentoollog.mli: xentoollog.mli.in _xtl_levels.mli.in
 
 libs: $(LIBS)
 
-_xtl_levels.ml.in _xtl_levels.mli.in _xtl_levels.inc: genlevels.py $(XEN_ROOT)/tools/libs/toollog/include/xentoollog.h
+_xtl_levels.ml.in _xtl_levels.mli.in _xtl_levels.inc: genlevels.py $(XEN_INCLUDE)/xentoollog.h
 	$(PYTHON) genlevels.py _xtl_levels.mli.in _xtl_levels.ml.in _xtl_levels.inc
 
 .PHONY: install
diff --git a/tools/ocaml/libs/xentoollog/genlevels.py b/tools/ocaml/libs/xentoollog/genlevels.py
index f9cf853e26..11a623e459 100755
--- a/tools/ocaml/libs/xentoollog/genlevels.py
+++ b/tools/ocaml/libs/xentoollog/genlevels.py
@@ -6,7 +6,7 @@ import sys
 from functools import reduce
 
 def read_levels():
-	f = open('../../../libs/toollog/include/xentoollog.h', 'r')
+	f = open('../../../include/xentoollog.h', 'r')
 
 	levels = []
 	record = False
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 14:19:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 14:19:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11013.29229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxuu-0000mq-Bf; Fri, 23 Oct 2020 14:19:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11013.29229; Fri, 23 Oct 2020 14:19:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxuu-0000mj-8i; Fri, 23 Oct 2020 14:19:40 +0000
Received: by outflank-mailman (input) for mailman id 11013;
 Fri, 23 Oct 2020 14:19:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVxus-0000me-QT
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:19:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23f3a7d9-8e8d-420a-836c-c5e9c7fbb96d;
 Fri, 23 Oct 2020 14:19:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DAC18AEC1;
 Fri, 23 Oct 2020 14:19:36 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kVxus-0000me-QT
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:19:38 +0000
X-Inumbo-ID: 23f3a7d9-8e8d-420a-836c-c5e9c7fbb96d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 23f3a7d9-8e8d-420a-836c-c5e9c7fbb96d;
	Fri, 23 Oct 2020 14:19:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603462777;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=v8LE5jBqvYJCEqYjzSRXOOIfWubVtZci2ezHFGLPSgY=;
	b=d2ElYo6IMD4dkqPo2o6XdYQAzsWukLNam6/iuLRuDdKKjNTEbmPJV6i8aynURa+Zk9n0Uk
	rGFnz4i71zWELmacKHwwkvJDRJMQK5rjuscfyEYzTL/nNFQMwHIuLaemE+ysSlF1BsdiN4
	Btde5klG7xB0uvM6vZYQrGLBxaTEdu0=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id DAC18AEC1;
	Fri, 23 Oct 2020 14:19:36 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>
Subject: [PATCH v2 0/3] tools: avoid creating symbolic links during make
Date: Fri, 23 Oct 2020 16:19:31 +0200
Message-Id: <20201023141934.20062-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The rework of the Xen library build introduced the creation of some
additional symbolic links during the build process.

This series undoes that by moving all official Xen library headers
to tools/include, and by using include paths and the vpath directive
when access to private headers of another directory is needed.
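The combined effect of the two mechanisms can be sketched as follows (a
minimal, illustrative Makefile fragment, not the actual build code; the
variable names mirror those used in tools/Rules.mk, but the library chosen
is just an example):

```make
# With all public headers living in tools/include, a single common
# include path is enough for every library; the per-library
# -I$(XEN_libxen<lib>)/include entries become unnecessary.
XEN_INCLUDE      := $(XEN_ROOT)/tools/include
CFLAGS_xeninclude := -I$(XEN_INCLUDE)

# Example: a consumer of libxenevtchn now only needs the common path.
CFLAGS_libxenevtchn := $(CFLAGS_xeninclude)
```

Private headers that remain in another directory are then reached via an
explicit include path plus a vpath directive for any shared sources, so no
symlinks have to be created at build time.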

Changes in V2:
- added comment to tools/include/Makefile (Ian Jackson)

Juergen Gross (3):
  tools/libs: move official headers to common directory
  tools/libs/guest: don't use symbolic links for xenctrl headers
  tools/libs/store: don't use symbolic links for external files

 .gitignore                                    |  5 ++--
 stubdom/mini-os.mk                            |  2 +-
 tools/Rules.mk                                |  5 ++--
 tools/include/Makefile                        |  6 ++++
 tools/{libs/vchan => }/include/libxenvchan.h  |  0
 tools/{libs/light => }/include/libxl.h        |  0
 tools/{libs/light => }/include/libxl_event.h  |  0
 tools/{libs/light => }/include/libxl_json.h   |  0
 tools/{libs/light => }/include/libxl_utils.h  |  0
 tools/{libs/light => }/include/libxl_uuid.h   |  0
 tools/{libs/util => }/include/libxlutil.h     |  0
 tools/{libs/call => }/include/xencall.h       |  0
 tools/{libs/ctrl => }/include/xenctrl.h       |  0
 .../{libs/ctrl => }/include/xenctrl_compat.h  |  0
 .../devicemodel => }/include/xendevicemodel.h |  0
 tools/{libs/evtchn => }/include/xenevtchn.h   |  0
 .../include/xenforeignmemory.h                |  0
 tools/{libs/gnttab => }/include/xengnttab.h   |  0
 tools/{libs/guest => }/include/xenguest.h     |  0
 tools/{libs/hypfs => }/include/xenhypfs.h     |  0
 tools/{libs/stat => }/include/xenstat.h       |  0
 .../compat => include/xenstore-compat}/xs.h   |  0
 .../xenstore-compat}/xs_lib.h                 |  0
 tools/{libs/store => }/include/xenstore.h     |  0
 tools/{xenstore => include}/xenstore_lib.h    |  0
 .../{libs/toolcore => }/include/xentoolcore.h |  0
 .../include/xentoolcore_internal.h            |  0
 tools/{libs/toollog => }/include/xentoollog.h |  0
 tools/libs/call/Makefile                      |  3 --
 tools/libs/ctrl/Makefile                      |  3 --
 tools/libs/devicemodel/Makefile               |  3 --
 tools/libs/evtchn/Makefile                    |  2 --
 tools/libs/foreignmemory/Makefile             |  3 --
 tools/libs/gnttab/Makefile                    |  3 --
 tools/libs/guest/Makefile                     | 12 ++------
 tools/libs/hypfs/Makefile                     |  3 --
 tools/libs/libs.mk                            | 10 ++-----
 tools/libs/light/Makefile                     | 28 ++++++++-----------
 tools/libs/stat/Makefile                      |  2 --
 tools/libs/store/Makefile                     | 15 +++-------
 tools/libs/toolcore/Makefile                  |  9 +++---
 tools/libs/toollog/Makefile                   |  2 --
 tools/libs/util/Makefile                      |  3 --
 tools/libs/vchan/Makefile                     |  3 --
 tools/ocaml/libs/xentoollog/Makefile          |  2 +-
 tools/ocaml/libs/xentoollog/genlevels.py      |  2 +-
 46 files changed, 38 insertions(+), 88 deletions(-)
 rename tools/{libs/vchan => }/include/libxenvchan.h (100%)
 rename tools/{libs/light => }/include/libxl.h (100%)
 rename tools/{libs/light => }/include/libxl_event.h (100%)
 rename tools/{libs/light => }/include/libxl_json.h (100%)
 rename tools/{libs/light => }/include/libxl_utils.h (100%)
 rename tools/{libs/light => }/include/libxl_uuid.h (100%)
 rename tools/{libs/util => }/include/libxlutil.h (100%)
 rename tools/{libs/call => }/include/xencall.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl.h (100%)
 rename tools/{libs/ctrl => }/include/xenctrl_compat.h (100%)
 rename tools/{libs/devicemodel => }/include/xendevicemodel.h (100%)
 rename tools/{libs/evtchn => }/include/xenevtchn.h (100%)
 rename tools/{libs/foreignmemory => }/include/xenforeignmemory.h (100%)
 rename tools/{libs/gnttab => }/include/xengnttab.h (100%)
 rename tools/{libs/guest => }/include/xenguest.h (100%)
 rename tools/{libs/hypfs => }/include/xenhypfs.h (100%)
 rename tools/{libs/stat => }/include/xenstat.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs.h (100%)
 rename tools/{libs/store/include/compat => include/xenstore-compat}/xs_lib.h (100%)
 rename tools/{libs/store => }/include/xenstore.h (100%)
 rename tools/{xenstore => include}/xenstore_lib.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore.h (100%)
 rename tools/{libs/toolcore => }/include/xentoolcore_internal.h (100%)
 rename tools/{libs/toollog => }/include/xentoollog.h (100%)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 14:19:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 14:19:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11016.29265 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxv4-0000ud-9h; Fri, 23 Oct 2020 14:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11016.29265; Fri, 23 Oct 2020 14:19:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVxv4-0000uW-6Y; Fri, 23 Oct 2020 14:19:50 +0000
Received: by outflank-mailman (input) for mailman id 11016;
 Fri, 23 Oct 2020 14:19:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kVxv2-0000me-Lx
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:19:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1913b86f-e79a-4782-aa59-63cfb60638e9;
 Fri, 23 Oct 2020 14:19:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7F4C2B2B0;
 Fri, 23 Oct 2020 14:19:37 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=H00l=D6=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kVxv2-0000me-Lx
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:19:48 +0000
X-Inumbo-ID: 1913b86f-e79a-4782-aa59-63cfb60638e9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603462777;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bpbKlAusw8M3iIDqGNsPUH+GZlGUA08Dg8QDzb3c4FQ=;
	b=PRNm5DbdDC2YY7W6D25UhZWCQG09mMtqjfr2R1uLjAIkh8IVVobCZjtNLxlTbw8nMYdvw4
	xJ4p25F/Yr/Uzw04O5xvFdEy1JbGUV5WvOCQUcqfcH1H8hyEvVLiPyQoN4fsjVC08MoU2S
	tpUEaAD0+SixDAuNJsYuo8LSK4mcvKk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>
Subject: [PATCH v2 3/3] tools/libs/store: don't use symbolic links for external files
Date: Fri, 23 Oct 2020 16:19:34 +0200
Message-Id: <20201023141934.20062-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201023141934.20062-1-jgross@suse.com>
References: <20201023141934.20062-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of using symbolic links to include files from xenstored, use
the vpath directive and an include path.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Christian Lindig <christian.lindig@citrix.com>
Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 tools/libs/store/Makefile | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
index 930e763de9..bc89b9cd70 100644
--- a/tools/libs/store/Makefile
+++ b/tools/libs/store/Makefile
@@ -21,12 +21,8 @@ CFLAGS += $(CFLAGS_libxentoolcore)
 CFLAGS += -DXEN_LIB_STORED="\"$(XEN_LIB_STORED)\""
 CFLAGS += -DXEN_RUN_STORED="\"$(XEN_RUN_STORED)\""
 
-LINK_FILES = xs_lib.c list.h utils.h
-
-$(LIB_OBJS): $(LINK_FILES)
-
-$(LINK_FILES):
-	ln -sf $(XEN_ROOT)/tools/xenstore/$@ $@
+vpath xs_lib.c $(XEN_ROOT)/tools/xenstore
+CFLAGS += -I $(XEN_ROOT)/tools/xenstore
 
 xs.opic: CFLAGS += -DUSE_PTHREAD
 ifeq ($(CONFIG_Linux),y)
-- 
2.26.2
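
For readers unfamiliar with GNU make's `vpath` directive, the technique the patch switches to can be sketched as a minimal standalone fragment (the directory names below are hypothetical, not the actual Xen tree layout):

```make
# vpath tells make where to search for prerequisites matching the
# pattern, so sources living outside this directory build without any
# symlinks being created in the build tree.
vpath %.c ../external/src

# The compiler still has to find the external headers on its own:
CFLAGS += -I ../external/src

# make locates helper.c via the vpath search; $< expands to the
# resolved path ../external/src/helper.c.
helper.o: helper.c
	$(CC) $(CFLAGS) -c $< -o $@
```

The patch uses the file-specific form (`vpath xs_lib.c $(XEN_ROOT)/tools/xenstore`), which restricts the search to that one file; unlike the old `ln -sf` rule, nothing is generated in the source directory, so there is no stale link to clean up.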



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 14:28:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 14:28:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11029.29277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVy37-0002Bd-6w; Fri, 23 Oct 2020 14:28:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11029.29277; Fri, 23 Oct 2020 14:28:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVy37-0002BW-2w; Fri, 23 Oct 2020 14:28:09 +0000
Received: by outflank-mailman (input) for mailman id 11029;
 Fri, 23 Oct 2020 14:28:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SP8M=D6=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kVy35-0002BR-CW
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:28:07 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.82]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aea0c86f-7828-475e-99ee-edcdfdce41c3;
 Fri, 23 Oct 2020 14:28:04 +0000 (UTC)
Received: from AM0PR03CA0075.eurprd03.prod.outlook.com (2603:10a6:208:69::16)
 by DB7PR08MB4217.eurprd08.prod.outlook.com (2603:10a6:10:7d::27) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Fri, 23 Oct
 2020 14:28:02 +0000
Received: from AM5EUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:69:cafe::6b) by AM0PR03CA0075.outlook.office365.com
 (2603:10a6:208:69::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 23 Oct 2020 14:28:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT057.mail.protection.outlook.com (10.152.17.44) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Fri, 23 Oct 2020 14:28:02 +0000
Received: ("Tessian outbound c189680f801b:v64");
 Fri, 23 Oct 2020 14:28:01 +0000
Received: from 9d30bc132b10.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 ECE2E7B6-E366-48D1-86B3-E0CCFAF129CA.1; 
 Fri, 23 Oct 2020 14:27:56 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9d30bc132b10.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 23 Oct 2020 14:27:56 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB4322.eurprd08.prod.outlook.com (2603:10a6:208:148::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Fri, 23 Oct
 2020 14:27:54 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Fri, 23 Oct 2020
 14:27:54 +0000
X-Inumbo-ID: aea0c86f-7828-475e-99ee-edcdfdce41c3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vm9mTcOrrvUkLftjAZ0Ecc6q0i/efpX7Re7SPqS6ZFw=;
 b=sCJOz2AJh7tFjENTkJzWE9m8IVdp2csvjDrwLGCMeobIr9mGGTNr/qRpSWd13apt/qmOPjx2RsuJO4L0xIqzr/EPwBfbWDxYkzv3qbpHf3yYuUpuS6jmcWPpW+V1ZbpV3P7H50i30iXTRV6InmGYscvoL2BiNikRtQ9NvgZwg1M=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 95dcc42c71afbb43
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eQnfDxitdb1NmqiLIAqWan3L5rLfXTiTMr8lgd10kYtl/eSk2t8BpkR53gkFa3Ug48FVdQY1qpFEpfdOu1DOfW3mpkrMX94gq5/tIVVVecElDXL0fbbcBNrzRnPkujEVREJxMYHq2ZOX5jb1fQSlycdFjEZpCf9N9iOdklqRYen0+KthKu8/YzK+Q6bCXvBc/7X1CEY42NW0S+obGLi9+CIDzVv3WLW9zafrIGmHIwXtQxVjySoQRTGOQo/qrzCDdwCfCGCGPYNGTiD1AlDO4oqaqy4uwc824sW45TnupbAPpWRmByLDG8V8IN8wuFoC53waKj1JS2Yb3lj3//TGIw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vm9mTcOrrvUkLftjAZ0Ecc6q0i/efpX7Re7SPqS6ZFw=;
 b=m2snCqoMsgKggrG4u7Vsbo/JUneq7422mYyZy/8qzbHFGzz4y7kx3WGM8zqVU0M37WCfqlNqm1wcvrocUqILSyrwrSVrK4NTua9rPl3zhLowxW5gmH4bjM6oUzIC0VHPEGN828N/hFZMhmZ60CibqqB9MQJWP31BXA9/SHEpTQvlohutKzKOgVrUxmKu6Sp0sySeMfarswbH7z6b1G1RX/0n+BSQJMJFnfLQmW6lG8QoPajrn146HzC/WliOAWPtWsw6x5O89lmqvFp/MCqOT1xQ9VtzwkZ0vwGXPJ9CmwpE8M9/1TbvbfJSyOGJrMNqYTq3wCgUxyXlm1PSLDwKxA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamgw0AAgAEzsQCAAWIugIABA+CAgADBXICAABfBAIAAGI6A
Date: Fri, 23 Oct 2020 14:27:54 +0000
Message-ID: <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
In-Reply-To: <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ed7a999a-4689-488e-96ab-08d8775fd93a
x-ms-traffictypediagnostic: AM0PR08MB4322:|DB7PR08MB4217:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB4217C089031383681F8117D1FC1A0@DB7PR08MB4217.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 yl6yeMhhbpT5rqmoZgVnP/vXCG1FYuZHmtTjjQ/05gHKLZLoKvfGLFfrhbpKID47TpN0RpgDIKd72gCPsaC62WfjqVB42x+dzTNHoy8kX7UwDWJ5TWre0b+bfzTs8Qm1FF/mkKA2UpdK5+K9KzU5VF6ekfTuBaeEp6eL2pG/d6UjEqL9c3bI6FX3E3w1II+VW3qP0ccomrEYK+nT4ehCGfWbXV982pQ8ePB5/ac3nSAMYnuTz9XbC8DqGJkCbh7rqaspr+g8l1KlarE5hIz55Mc8vYfKhNaDiLvOJBUqXfXv/LSJVbgP19gS/+wAs+rfCYc1jfKYB9nM7jyt5HNoyA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(396003)(136003)(346002)(376002)(2616005)(36756003)(8676002)(54906003)(86362001)(8936002)(4326008)(2906002)(26005)(316002)(6506007)(6916009)(53546011)(186003)(33656002)(5660300002)(64756008)(76116006)(478600001)(6512007)(71200400001)(66446008)(66946007)(83380400001)(66476007)(66556008)(6486002)(91956017);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 lNFhFvFqf3Bd9uZQO2NguR9nZ6bgiegq9vSpvVGH/DorY10dKD5a58/XED+CfbTlhawV2Wd/5VNVhfxj8+Lb1Xb2+QwaC3HpZONNZcFPqnqXqbHul+1eGfLKbhwUbdNn+Lt9+jWSWalX0bJNJH3Xd6HwSCs+WDbMMNpycfivzMvXBkjd3PXYhmd+2fTt7AHIaDoWBxHBAey/EnAPyTqg7agBzY0YlfxlPjAc2GtGM2FV3/jSKjRqg6qoZNSvN4BvVwvX0sriY8qC7Jt1yX0vRF4sXrtdqMNUr7ChmjIyknWoI9QvDyXuI/jNrveKF3ZRAnL5xCtkEkoe9UF0v9uuGXrtk1+eIN0JEKzcHpf9GhrxZkNhGqX3I+2y445qvMuliFl/5JK7xTgMw9E5SE996pjU8NZwiSwXfOMR8rfVOdPCxdaew8zyQi7iQMPPhaVmSv26YP3vNeKAL5HX1xW4z6K/2ZFAg+mo3j9vkKewoZdvpnvM1A/7RTfr3aG0/+56Ja/iVSa7VTEC/T8nauVQ8vU+jaH9BPR9PgofCnGgOOgAVffzWvAjMWHnSaO3OUxSKLKXGXQcBeBjKYyRWHzWqcMZaV71JwiYB4bgeY0wHWIjHUzBk37ohZFNfRLvg5LoJa3KIxrdo9BzOiQgbYwozQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <1A1FFAA1CE5D894696315E8CEFF931E1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4322
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6548902c-6b5e-481c-4439-08d8775fd4a9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Z0NMmMIiAPztH2hCYrE5Zw6jOxiOrvPobLq5cuc2TPxiYVTF1q7qkiOzFGcenT7UCnxx3ZMgbIzIuuMgZbfI5FbBpgUvtWGBSGhX4TdSCY0m5tZOb0dxQjuO1F3DvAN9smQb5FEsG2PZrPXnEaxb6AK9TMF5h3iqklvVBgf3hpj5DhtuGwyrRV/RdRXgjbhYKO5HFcUJoN88HxQVQ23V+f/mlOYrT2y6a1EVtFhbVILnd1y3CBP6vFk6Nr3UmoOgf8tFB/R+thu2rdpzjBpSPKyGscNR/mLOzL17L+x5Ij4e4V3KJSEV+0vrrfc+zbF3WhMoUYOrKTPF9PI32Ggxt4sjY6/ZdIHVu00CTcNVOFiR7hC8SWX9a0XPUkYJ/Da9q1KWYwE63r0UsWFdNZsbmw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(39850400004)(396003)(346002)(376002)(46966005)(316002)(6862004)(186003)(54906003)(2906002)(478600001)(6506007)(33656002)(36906005)(26005)(53546011)(2616005)(336012)(83380400001)(6512007)(8936002)(47076004)(81166007)(36756003)(70206006)(107886003)(5660300002)(356005)(70586007)(8676002)(6486002)(4326008)(82310400003)(82740400003)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 23 Oct 2020 14:28:02.2445
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ed7a999a-4689-488e-96ab-08d8775fd93a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB4217

Hello Julien,

> On 23 Oct 2020, at 2:00 pm, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 23/10/2020 12:35, Rahul Singh wrote:
>> Hello,
>>> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> 
>>> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>> the Linux SMMUv3 driver.
>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>    that supports both Stage-1 and Stage-2 translations.
>>>>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>>>>    capability to share the page tables with the CPU.
>>>>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>>>>    and priority queue IRQ handling.
>>>>>> 
>>>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>>>> they are done).
>>>>>> 
>>>>>> Do you know why Linux is using thread. Is it because of long running
>>>>>> operations?
>>>>> 
>>>>> Yes you are right because of long running operations Linux is using the
>>>>> threaded IRQs.
>>>>> 
>>>>> SMMUv3 reports fault/events bases on memory-based circular buffer queues not
>>>>> based on the register. As per my understanding, it is time-consuming to
>>>>> process the memory based queues in interrupt context because of that Linux
>>>>> is using threaded IRQ to process the faults/events from SMMU.
>>>>> 
>>>>> I didn’t find any other solution in XEN in place of tasklet to defer the
>>>>> work, that’s why I used tasklet in XEN in replacement of threaded IRQs. If
>>>>> we do all work in interrupt context we will make XEN less responsive.
>>>> 
>>>> So we need to make sure that Xen continue to receives interrupts, but we also
>>>> need to make sure that a vCPU bound to the pCPU is also responsive.
>>>> 
>>>>> 
>>>>> If you know another solution in XEN that will be used to defer the work in
>>>>> the interrupt please let me know I will try to use that.
>>>> 
>>>> One of my work colleague encountered a similar problem recently. He had a long
>>>> running tasklet and wanted to be broken down in smaller chunk.
>>>> 
>>>> We decided to use a timer to reschedule the taslket in the future. This allows
>>>> the scheduler to run other loads (e.g. vCPU) for some time.
>>>> 
>>>> This is pretty hackish but I couldn't find a better solution as tasklet have
>>>> high priority.
>>>> 
>>>> Maybe the other will have a better idea.
>>> 
>>> Julien's suggestion is a good one.
>>> 
>>> But I think tasklets can be configured to be called from the idle_loop,
>>> in which case they are not run in interrupt context?
>>> 
>>  Yes you are right tasklet will be scheduled from the idle_loop that is not interrupt conext.
> 
> This depends on your tasklet. Some will run from the softirq context which is usually (for Arm) on the return of an exception.
> 

Thanks for the info. I will check and will get better understanding of the tasklet how it will run in XEN.

>>> 
>>>>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>>>>>    access functions based on atomic operations implemented in Linux.
>>>>>> 
>>>>>> Can you provide more details?
>>>>> 
>>>>> I tried to port the latest version of the SMMUv3 code than I observed that
>>>>> in order to port that code I have to also port atomic operation implemented
>>>>> in Linux to XEN. As latest Linux code uses atomic operation to process the
>>>>> command queues (atomic_cond_read_relaxed(),atomic_long_cond_read_relaxed() ,
>>>>> atomic_fetch_andnot_relaxed()) .
>>>> 
>>>> Thank you for the explanation. I think it would be best to import the atomic
>>>> helpers and use the latest code.
>>>> 
>>>> This will ensure that we don't re-introduce bugs and also buy us some time
>>>> before the Linux and Xen driver diverge again too much.
>>>> 
>>>> Stefano, what do you think?
>>> 
>>> I think you are right.
>> Yes, I agree with you to have XEN code in sync with Linux code that's why I started with to port the Linux atomic operations to XEN  then I realised that it is not straightforward to port atomic operations and it requires lots of effort and testing. Therefore I decided to port the code before the atomic operation is introduced in Linux.
> 
> Hmmm... I would not have expected a lot of effort required to add the 3 atomics operations above. Are you trying to also port the LSE support at the same time?

There are other atomic operations used in the SMMUv3 code apart from the 3 atomic operation I mention. I just mention 3 operation as an example. I tried to port at that time but when I start porting I realised that one atomic operation depend on another one so I decided not to proceed further.

> 
> Cheers,
> 
> -- 
> Julien Grall


Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 14:52:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 14:52:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11050.29293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVyQQ-0004ul-9l; Fri, 23 Oct 2020 14:52:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11050.29293; Fri, 23 Oct 2020 14:52:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVyQQ-0004ue-5C; Fri, 23 Oct 2020 14:52:14 +0000
Received: by outflank-mailman (input) for mailman id 11050;
 Fri, 23 Oct 2020 14:52:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UuJs=D6=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kVyQP-0004uZ-D1
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:52:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34bf1b99-e4e5-444e-a1ba-39416efcbd17;
 Fri, 23 Oct 2020 14:52:12 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kVyQO-0005mn-Fq
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:52:12 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kVyQO-0003rK-Cg
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:52:12 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kVyQL-00083u-4R; Fri, 23 Oct 2020 15:52:09 +0100
X-Inumbo-ID: 34bf1b99-e4e5-444e-a1ba-39416efcbd17
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=wN6vjdYJBuYzKtmGQANpBjSnJ+IdsjQMiN9RUJbYgvY=; b=cPdli9rji9Hldqb4OQ2TMA+gPs
	nn1I+mdbj/IiupP78aL664lgmjvh/corLI/sgaD+ML2wAOVVD+Dc5/Krxi+xGivFx3zh2SaexmRIN
	slzqLM4nTblaHpsZGBuT9qhhrE/HYNxMNIenica2CL7LwVcuK+9JcHLG1zQVMwjgLp0c=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24466.60952.899550.965639@mariner.uk.xensource.com>
Date: Fri, 23 Oct 2020 15:52:08 +0100
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org,
    Andrew Cooper <andrew.cooper3@citrix.com>,
    George Dunlap <george.dunlap@citrix.com>,
    Jan Beulich <jbeulich@suse.com>,
    Julien Grall <julien@xen.org>,
    Stefano Stabellini <sstabellini@kernel.org>,
    Wei Liu <wl@xen.org>,
    Samuel Thibault <samuel.thibault@ens-lyon.org>,
    Christian Lindig <christian.lindig@citrix.com>,
    David Scott <dave@recoil.org>,
    Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH v2 1/3] tools/libs: move official headers to common directory
In-Reply-To: <20201023141934.20062-2-jgross@suse.com>
References: <20201023141934.20062-1-jgross@suse.com>
	<20201023141934.20062-2-jgross@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Juergen Gross writes ("[PATCH v2 1/3] tools/libs: move official headers to common directory"):
> Instead of each library having an own include directory move the
> official headers to tools/include instead. This will drop the need to
> link those headers to tools/include and there is no need any longer
> to have library-specific include paths when building Xen.
> 
> While at it remove setting of the unused variable
> PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Acked-by: Christian Lindig <christian.lindig@citrix.com>
> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks, all three

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

and pushed.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 14:59:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 14:59:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11060.29320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVyX0-0005L5-6X; Fri, 23 Oct 2020 14:59:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11060.29320; Fri, 23 Oct 2020 14:59:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVyX0-0005Ky-3M; Fri, 23 Oct 2020 14:59:02 +0000
Received: by outflank-mailman (input) for mailman id 11060;
 Fri, 23 Oct 2020 14:59:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVyWy-0005Ks-Li
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:59:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7de0acfa-d57a-42be-b941-4f12bc17ca2b;
 Fri, 23 Oct 2020 14:58:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A1383AD09;
 Fri, 23 Oct 2020 14:58:58 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kVyWy-0005Ks-Li
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 14:59:00 +0000
X-Inumbo-ID: 7de0acfa-d57a-42be-b941-4f12bc17ca2b
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7de0acfa-d57a-42be-b941-4f12bc17ca2b;
	Fri, 23 Oct 2020 14:58:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603465138;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NgGmob6u60vj5rQwir3hB1HfE+wHt+CUgGlG6PpiMdc=;
	b=XbYygUT2op50uGIyEsLVohn03xBnQrBM21+rdBEthqWurbP8CTTgpx8exe+QZOu4lWvDwK
	5W8FvYpBarrFKGGZzVPBO49JlPfktL3hy032V5YGKKgPVWI87LIRQUKsij79ISFhP2GSF2
	viCRTqV6w2HztrvKD2uodjcSTjYGzi0=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A1383AD09;
	Fri, 23 Oct 2020 14:58:58 +0000 (UTC)
Subject: Re: [PATCH v2 09/11] x86/vpt: switch interrupt injection model
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-10-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <63ed7745-a37f-5df2-6eaa-b0ed4a0a30e3@suse.com>
Date: Fri, 23 Oct 2020 16:59:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-10-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.09.2020 12:41, Roger Pau Monne wrote:
> Currently vPT relies on timers being assigned to a vCPU, and checks on
> every return to HVM guest whether an interrupt from a vPT timer
> assigned to that vCPU is currently being injected.
> 
> This model doesn't work properly since the interrupt destination vCPU
> of a vPT timer can be different from the vCPU where the timer is
> currently assigned, in which case the timer would get stuck because it
> never sees the interrupt as being injected.
> 
> Knowing when a vPT interrupt is injected is relevant for the guest
> timer modes where missed vPT interrupts are not discarded and instead
> are accumulated and injected when possible.
> 
> This change aims to modify the logic described above, so that vPT
> doesn't need to check on every return to HVM guest if a vPT interrupt
> is being injected. In order to achieve this the vPT code is modified
> to make use of the new EOI callbacks, so that virtual timers can
> detect when an interrupt has been serviced by the guest by waiting for
> the EOI callback to execute.
> 
> This model also simplifies some of the logic, as when executing the
> timer EOI callback Xen can try to inject another interrupt if the
> timer has interrupts pending for delivery.
> 
> Note that timers are still bound to a vCPU for the time being; this
> relation, however, no longer limits the interrupt destination, and
> will be removed by further patches.
> 
> This model has been tested with Windows 7 guests without showing any
> timer delay, even when the guest was limited to very little CPU
> capacity and pending virtual timer interrupts accumulated.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v1:
>  - New in this version.
> ---
> Sorry, this is a big change, but I'm having issues splitting it into
> smaller pieces as the functionality needs to be changed in one go, or
> else timers would be broken.
> 
> If this approach seems sensible I can try to split it up.

If it can't sensibly be split, so be it, I would say. And yes, the
approach does look sensible to me, supported by ...

> ---
>  xen/arch/x86/hvm/svm/intr.c   |   3 -
>  xen/arch/x86/hvm/vmx/intr.c   |  59 ------
>  xen/arch/x86/hvm/vpt.c        | 326 ++++++++++++++--------------------
>  xen/include/asm-x86/hvm/vpt.h |   5 +-
>  4 files changed, 135 insertions(+), 258 deletions(-)

... this diffstat. Good work!
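To double-check my understanding of the new model, here is a simplified,
self-contained sketch of the EOI-driven injection loop as I read it
(hypothetical code, not the actual vPT implementation - no locking, no
vCPU handling, and inject_interrupt() is a stand-in):

```c
/*
 * Simplified model of the EOI-driven injection scheme described in the
 * patch.  Names mirror the patch, but this is NOT the actual Xen code:
 * locking, vCPU binding and the real injection path are all elided.
 */
#include <stdbool.h>

struct periodic_time {
    unsigned int pending_intr_nr; /* interrupts accumulated while masked */
    bool masked;                  /* is the vector currently masked? */
    unsigned int injected;        /* successful injections (model only) */
};

/* Stand-in for the real injection path; fails if the vector is masked. */
static bool inject_interrupt(struct periodic_time *pt)
{
    if ( pt->masked )
        return false;
    pt->injected++;
    return true;
}

/* Timer fired: inject now if possible, otherwise accumulate. */
static void timer_fired(struct periodic_time *pt)
{
    pt->pending_intr_nr++;
    if ( pt->pending_intr_nr == 1 && inject_interrupt(pt) )
        pt->pending_intr_nr--;
}

/*
 * Guest EOIed the vector: if more interrupts are pending, inject the
 * next one right away instead of re-checking on every VM entry.
 */
static void eoi_callback(struct periodic_time *pt)
{
    if ( pt->pending_intr_nr && inject_interrupt(pt) )
        pt->pending_intr_nr--;
}
```

I.e. missed ticks accumulate while the vector is masked and drain one per
EOI once it is unmasked, without any per-entry scan of a timer list.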

Just a couple of nits, but before giving this my ack I may need to
go through it a 2nd time.

> +/*
> + * The same callback is shared between LAPIC and PIC/IO-APIC based timers, as
> + * we ignore the first parameter that's different between them.
> + */
> +static void eoi_callback(unsigned int unused, void *data)
>  {
> -    struct list_head *head = &v->arch.hvm.tm_list;
> -    struct periodic_time *pt, *temp, *earliest_pt;
> -    uint64_t max_lag;
> -    int irq, pt_vector = -1;
> -    bool level;
> +    struct periodic_time *pt = data;
> +    struct vcpu *v;
> +    time_cb *cb = NULL;
> +    void *cb_priv;
>  
> -    pt_vcpu_lock(v);
> +    pt_lock(pt);
>  
> -    earliest_pt = NULL;
> -    max_lag = -1ULL;
> -    list_for_each_entry_safe ( pt, temp, head, list )
> +    pt_irq_fired(pt->vcpu, pt);
> +    if ( pt->pending_intr_nr )
>      {
> -        if ( pt->pending_intr_nr )
> +        if ( inject_interrupt(pt) )
> +        {
> +            pt->pending_intr_nr--;
> +            cb = pt->cb;
> +            cb_priv = pt->priv;
> +            v = pt->vcpu;
> +        }
> +        else
>          {
> -            /* RTC code takes care of disabling the timer itself. */
> -            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) &&
> -                 /* Level interrupts should be asserted even if masked. */
> -                 !pt->level )
> -            {
> -                /* suspend timer emulation */
> +            /* Masked. */
> +            if ( pt->on_list )
>                  list_del(&pt->list);
> -                pt->on_list = 0;
> -            }
> -            else
> -            {
> -                if ( (pt->last_plt_gtime + pt->period) < max_lag )
> -                {
> -                    max_lag = pt->last_plt_gtime + pt->period;
> -                    earliest_pt = pt;
> -                }
> -            }
> +            pt->on_list = false;
>          }
>      }
>  
> -    if ( earliest_pt == NULL )
> -    {
> -        pt_vcpu_unlock(v);
> -        return -1;
> -    }
> +    pt_unlock(pt);
>  
> -    earliest_pt->irq_issued = 1;
> -    irq = earliest_pt->irq;
> -    level = earliest_pt->level;
> +    if ( cb != NULL )
> +        cb(v, cb_priv);

Nit: Like done elsewhere, omit the " != NULL"?

> +    /* Update time when an interrupt is injected. */
> +    if ( mode_is(v->domain, one_missed_tick_pending) ||
> +         mode_is(v->domain, no_missed_ticks_pending) )
> +        pt->last_plt_gtime = hvm_get_guest_time(v);
> +    else
> +        pt->last_plt_gtime += pt->period;
>  
> -    pt_vcpu_unlock(v);
> +    if ( mode_is(v->domain, delay_for_missed_ticks) &&

This looks to be possible to move into the "else" above, but on the
whole maybe everything together would best be handled by switch()?
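Roughly what I have in mind - a hypothetical, self-contained sketch only
(the enum, struct and function names are stand-ins, not the actual vPT
code, and the vCPU side of delay_for_missed_ticks is elided):

```c
/*
 * Hypothetical sketch of folding the guest-time updates into a single
 * switch() over the timer mode; NOT the actual Xen vPT code.
 */
#include <stdint.h>

enum pt_mode {
    PTM_one_missed_tick_pending,
    PTM_no_missed_ticks_pending,
    PTM_delay_for_missed_ticks,
};

struct pt_state {
    uint64_t last_plt_gtime;
    uint64_t period;
};

/* guest_time stands in for hvm_get_guest_time(v). */
static uint64_t update_gtime(struct pt_state *pt, enum pt_mode mode,
                             uint64_t guest_time)
{
    switch ( mode )
    {
    case PTM_one_missed_tick_pending:
    case PTM_no_missed_ticks_pending:
        pt->last_plt_gtime = guest_time;
        break;

    case PTM_delay_for_missed_ticks:
        pt->last_plt_gtime += pt->period;
        /*
         * delay_for_missed_ticks additionally pushes the vCPU's guest
         * time forward if it lags behind the timer (elided here).
         */
        break;
    }

    return pt->last_plt_gtime;
}
```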

> @@ -543,6 +443,24 @@ void create_periodic_time(
>      pt->cb = cb;
>      pt->priv = data;
>  
> +    switch ( pt->source )
> +    {
> +        int rc;
> +
> +    case PTSRC_isa:
> +        irq = hvm_isa_irq_to_gsi(irq);
> +        /* fallthrough */
> +    case PTSRC_ioapic:
> +        pt->eoi_cb.callback = eoi_callback;
> +        pt->eoi_cb.data = pt;
> +        rc = hvm_gsi_register_callback(v->domain, irq, &pt->eoi_cb);
> +        if ( rc )
> +            gdprintk(XENLOG_WARNING,
> +                     "unable to register callback for timer GSI %u: %d\n",
> +                     irq, rc);

If this triggers, would it be helpful to also log pt->source?
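I.e. something along these lines - a sketch only, with gdprintk()
swapped for snprintf() so the fragment is self-contained, and the exact
wording hypothetical:

```c
/*
 * Stand-in for the suggested tweak: include pt->source in the warning.
 * gdprintk() is replaced by snprintf() here purely so this compiles
 * outside Xen; the message wording is illustrative, not authoritative.
 */
#include <stdio.h>
#include <string.h> /* only needed by the accompanying checks */

static int format_warning(char *buf, size_t len, unsigned int source,
                          unsigned int irq, int rc)
{
    return snprintf(buf, len,
                    "unable to register callback for timer GSI %u (source %u): %d\n",
                    irq, source, rc);
}
```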

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:07:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:07:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11072.29333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVyek-0006Np-5U; Fri, 23 Oct 2020 15:07:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11072.29333; Fri, 23 Oct 2020 15:07:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVyek-0006Ni-1h; Fri, 23 Oct 2020 15:07:02 +0000
Received: by outflank-mailman (input) for mailman id 11072;
 Fri, 23 Oct 2020 15:07:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVyei-0006Nd-Oi
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:07:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab015e83-faa7-485d-ac9c-4744e7e0cc80;
 Fri, 23 Oct 2020 15:06:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CDE5EAC83;
 Fri, 23 Oct 2020 15:06:58 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kVyei-0006Nd-Oi
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:07:00 +0000
X-Inumbo-ID: ab015e83-faa7-485d-ac9c-4744e7e0cc80
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ab015e83-faa7-485d-ac9c-4744e7e0cc80;
	Fri, 23 Oct 2020 15:06:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603465618;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WJfdBU/v3a9AiZbddIIyfpXTvlnrLNU7ifSIrJmNkPY=;
	b=Gb8CSFTYYralmCouIQBs3BT6akLW/9yN78GgF8/InK3qvXsU0fHyryxBE0muu9XmnqZnVn
	DsONvoH1Wd2dT7vnGeMgHJkz4kwHuC5Kzk63YrxFUEu5ro7NmEKQZ0q7dbuJBwHekudAmG
	/p+DEyL7Y5ezaafBdJqshAXwNSaEVrw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id CDE5EAC83;
	Fri, 23 Oct 2020 15:06:58 +0000 (UTC)
Subject: Re: [PATCH v2 1/3] tools/libs: move official headers to common
 directory
To: Ian Jackson <iwj@xenproject.org>, Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Bertrand Marquis <bertrand.marquis@arm.com>
References: <20201023141934.20062-1-jgross@suse.com>
 <20201023141934.20062-2-jgross@suse.com>
 <24466.60952.899550.965639@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c2e11d90-70c3-0a00-2e56-83191e4d7d0e@suse.com>
Date: Fri, 23 Oct 2020 17:07:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <24466.60952.899550.965639@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.10.2020 16:52, Ian Jackson wrote:
> Juergen Gross writes ("[PATCH v2 1/3] tools/libs: move official headers to common directory"):
>> Instead of each library having its own include directory, move the
>> official headers to tools/include. This drops the need to link those
>> headers into tools/include, and there is no longer any need for
>> library-specific include paths when building Xen.
>>
>> While at it remove setting of the unused variable
>> PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Acked-by: Christian Lindig <christian.lindig@citrix.com>
>> Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Thanks, all three
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> and pushed.

While you're at it, Ian, could you also take a look at
"[PATCH 2/2] tools/libs: fix uninstall rule for header files"
(patch 1 there now obviously is obsolete)?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:19:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:19:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11077.29349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVyqR-0007UP-6a; Fri, 23 Oct 2020 15:19:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11077.29349; Fri, 23 Oct 2020 15:19:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVyqR-0007UI-3S; Fri, 23 Oct 2020 15:19:07 +0000
Received: by outflank-mailman (input) for mailman id 11077;
 Fri, 23 Oct 2020 15:19:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVyqQ-0007UD-IE
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:19:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0200c312-1c6f-45dd-a5b8-87e6b3d3ff4d;
 Fri, 23 Oct 2020 15:19:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVyqN-0006P2-Ri; Fri, 23 Oct 2020 15:19:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVyqN-00068L-GT; Fri, 23 Oct 2020 15:19:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kVyqQ-0007UD-IE
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:19:06 +0000
X-Inumbo-ID: 0200c312-1c6f-45dd-a5b8-87e6b3d3ff4d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0200c312-1c6f-45dd-a5b8-87e6b3d3ff4d;
	Fri, 23 Oct 2020 15:19:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=mFHRwWJrpdUpMhPQ0g1QtGVM9bmtQzilbEwf6bnBV80=; b=EJKS5CTRpMYK5//51+qmiNgUGB
	0dZcapUL7JKj/KrTl6GF7zE0z9m4bjDof76u1KocLByEt5dBWN4BH3BWKA+cm1mSbu3zTV1N0rwsG
	0wuBB1+B7bq7Pk2AS5jmlVWScRpLerzIshhHfXeWvyKhKXVigupcNxwWWZe6fNkKqAzY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVyqN-0006P2-Ri; Fri, 23 Oct 2020 15:19:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVyqN-00068L-GT; Fri, 23 Oct 2020 15:19:03 +0000
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich
 <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
Date: Fri, 23 Oct 2020 16:19:00 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 23/10/2020 15:27, Rahul Singh wrote:
> Hello Julien,
> 
>> On 23 Oct 2020, at 2:00 pm, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 23/10/2020 12:35, Rahul Singh wrote:
>>> Hello,
>>>> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>
>>>> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>> Major differences from the Linux driver are as follows:
>>>>>>>> 1. Only Stage-2 translation is supported, unlike the Linux driver,
>>>>>>>>     which supports both Stage-1 and Stage-2 translations.
>>>>>>>> 2. Use the P2M page tables instead of creating separate ones, as
>>>>>>>>     SMMUv3 has the capability to share the page tables with the CPU.
>>>>>>>> 3. Tasklets are used in place of Linux's threaded IRQs for event queue
>>>>>>>>     and priority queue IRQ handling.
>>>>>>>
>>>>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>>>>> they are done).
>>>>>>>
>>>>>>> Do you know why Linux is using threads? Is it because of long-running
>>>>>>> operations?
>>>>>>
>>>>>> Yes, you are right: Linux is using threaded IRQs because of
>>>>>> long-running operations.
>>>>>>
>>>>>> SMMUv3 reports faults/events via memory-based circular buffer queues, not
>>>>>> via registers. As per my understanding, it is time-consuming to process
>>>>>> the memory-based queues in interrupt context, which is why Linux uses a
>>>>>> threaded IRQ to process the faults/events from the SMMU.
>>>>>>
>>>>>> I didn’t find any other mechanism in Xen to defer the work, which is why
>>>>>> I used a tasklet in Xen in place of threaded IRQs. If we do all the work
>>>>>> in interrupt context we will make Xen less responsive.
>>>>>
>>>>> So we need to make sure that Xen continues to receive interrupts, but we
>>>>> also need to make sure that a vCPU bound to the pCPU stays responsive.
>>>>>
>>>>>>
>>>>>> If you know of another way in Xen to defer work out of interrupt context,
>>>>>> please let me know and I will try to use it.
>>>>>
>>>>> One of my colleagues encountered a similar problem recently: he had a
>>>>> long-running tasklet and wanted it broken down into smaller chunks.
>>>>>
>>>>> We decided to use a timer to reschedule the tasklet in the future. This
>>>>> allows the scheduler to run other loads (e.g. vCPUs) for some time.
>>>>>
>>>>> This is pretty hackish, but I couldn't find a better solution, as
>>>>> tasklets have high priority.
>>>>>
>>>>> Maybe others will have a better idea.
>>>>
>>>> Julien's suggestion is a good one.
>>>>
>>>> But I think tasklets can be configured to be called from the idle_loop,
>>>> in which case they are not run in interrupt context?
>>>>
>>>   Yes, you are right: the tasklet will be scheduled from the idle_loop, which is not interrupt context.
>>
>> This depends on your tasklet. Some will run from softirq context, which (for Arm) is usually on the return from an exception.
>>
> 
> Thanks for the info. I will check and get a better understanding of how tasklets run in Xen.
> 
>>>>
>>>>>>>> 4. The latest version of the Linux SMMUv3 code implements the command
>>>>>>>>     queue access functions based on atomic operations implemented in Linux.
>>>>>>>
>>>>>>> Can you provide more details?
>>>>>>
>>>>>> I tried to port the latest version of the SMMUv3 code, and observed that
>>>>>> in order to port that code I would also have to port the atomic
>>>>>> operations implemented in Linux to Xen, as the latest Linux code uses
>>>>>> atomic operations to process the command queues (atomic_cond_read_relaxed(),
>>>>>> atomic_long_cond_read_relaxed(), atomic_fetch_andnot_relaxed()).
>>>>>
>>>>> Thank you for the explanation. I think it would be best to import the atomic
>>>>> helpers and use the latest code.
>>>>>
>>>>> This will ensure that we don't re-introduce bugs and also buy us some time
>>>>> before the Linux and Xen driver diverge again too much.
>>>>>
>>>>> Stefano, what do you think?
>>>>
>>>> I think you are right.
>>> Yes, I agree that the Xen code should be in sync with the Linux code; that's why I started by porting the Linux atomic operations to Xen. I then realised that porting the atomic operations is not straightforward and requires a lot of effort and testing, so I decided to port the code from before the atomic operations were introduced in Linux.
>>
>> Hmmm... I would not have expected a lot of effort to be required to add the 3 atomic operations above. Are you also trying to port the LSE support at the same time?
> 
> There are other atomic operations used in the SMMUv3 code apart from the 3 I mentioned; I only mentioned those 3 as examples.

Ok. Do you have a list you could share?
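For other readers following along: the observable semantics of one of the
helpers mentioned (atomic_cond_read_relaxed()) can be approximated with
C11 atomics. This is a hedged sketch of the behaviour only, not Linux's
implementation, which layers in cpu_relax()/WFE-based waiting on arm64:

```c
/*
 * Rough C11 approximation of atomic_cond_read_relaxed(): spin with
 * relaxed loads until a condition on the value holds, then return the
 * value that satisfied it.  Illustrative only - Linux's real helper is
 * built on smp_cond_load_relaxed() with architecture-specific waiting.
 */
#include <stdatomic.h>

static unsigned int cond_read_relaxed(atomic_uint *v,
                                      int (*cond)(unsigned int))
{
    unsigned int val;

    for ( ;; )
    {
        val = atomic_load_explicit(v, memory_order_relaxed);
        if ( cond(val) )
            return val;
        /* cpu_relax() / WFE would go here in a real implementation. */
    }
}

/* Example predicate for demonstration. */
static int is_nonzero(unsigned int x)
{
    return x != 0;
}
```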

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:34:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11086.29364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVz5E-000108-Kx; Fri, 23 Oct 2020 15:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11086.29364; Fri, 23 Oct 2020 15:34:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVz5E-000101-Hy; Fri, 23 Oct 2020 15:34:24 +0000
Received: by outflank-mailman (input) for mailman id 11086;
 Fri, 23 Oct 2020 15:34:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVz5D-0000zw-E3
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:34:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 773f0138-09e0-42a3-9c7c-9da0025a6e9d;
 Fri, 23 Oct 2020 15:34:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 99051AF2D;
 Fri, 23 Oct 2020 15:34:21 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kVz5D-0000zw-E3
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:34:23 +0000
X-Inumbo-ID: 773f0138-09e0-42a3-9c7c-9da0025a6e9d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 773f0138-09e0-42a3-9c7c-9da0025a6e9d;
	Fri, 23 Oct 2020 15:34:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603467261;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VnS3/Hfvrb1x6FATDdrnpzKKnHSYkuOy25LKm1xXJRA=;
	b=XQm8qtFycOmtQ0XPyyraijA3418RFhPyVnA6B5BaV52VMnxh0sdXQEDhpZvH7H0oKMnMef
	HkD2xA2ZXtNpYuHm5bBKuD1F2p3DBTII7MXBMzNx57lpe8m9gDFbSvw/1jLWSS0NzDi8Gz
	g30MmS58rI7EjiVPgzI6Qb72D1xdfnQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 99051AF2D;
	Fri, 23 Oct 2020 15:34:21 +0000 (UTC)
Subject: Re: [PATCH v2 10/11] x86/vpt: remove vPT timers per-vCPU lists
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-11-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <201eec52-4886-a8f5-3f56-b24ce066b17d@suse.com>
Date: Fri, 23 Oct 2020 17:34:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20200930104108.35969-11-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.09.2020 12:41, Roger Pau Monne wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1964,7 +1964,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>      vpmu_switch_from(prev);
>      np2m_schedule(NP2M_SCHEDLE_OUT);
>  
> -    if ( is_hvm_domain(prevd) && !list_empty(&prev->arch.hvm.tm_list) )
> +    if ( is_hvm_domain(prevd) )
>          pt_save_timer(prev);

While most of the function goes away, pt_freeze_time() will now get
called in cases where it previously wasn't - is this benign?

> @@ -195,50 +182,20 @@ static void pt_thaw_time(struct vcpu *v)
>  
>  void pt_save_timer(struct vcpu *v)
>  {
> -    struct list_head *head = &v->arch.hvm.tm_list;
> -    struct periodic_time *pt;
> -
> -    if ( v->pause_flags & VPF_blocked )
> -        return;
> -
> -    pt_vcpu_lock(v);
> -
> -    list_for_each_entry ( pt, head, list )
> -        if ( !pt->do_not_freeze )
> -            stop_timer(&pt->timer);
>  
>      pt_freeze_time(v);
> -
> -    pt_vcpu_unlock(v);
>  }
>  
>  void pt_restore_timer(struct vcpu *v)
>  {
> -    struct list_head *head = &v->arch.hvm.tm_list;
> -    struct periodic_time *pt;
> -
> -    pt_vcpu_lock(v);
> -
> -    list_for_each_entry ( pt, head, list )
> -        if ( pt->pending_intr_nr == 0 )
> -            set_timer(&pt->timer, pt->scheduled);
> -
>      pt_thaw_time(v);
> -
> -    pt_vcpu_unlock(v);
>  }

In both functions the single remaining callee isn't used anywhere
else, so I guess the extra wrapper layer could be removed.

> @@ -402,8 +339,7 @@ void create_periodic_time(
>      write_lock(&v->domain->arch.hvm.pl_time->pt_migrate);
>  
>      pt->pending_intr_nr = 0;
> -    pt->do_not_freeze = 0;
> -    pt->irq_issued = 0;
> +    pt->masked = false;

I agree here, but ...

> @@ -479,10 +412,8 @@ void destroy_periodic_time(struct periodic_time *pt)
>          return;
>  
>      pt_lock(pt);
> -    if ( pt->on_list )
> -        list_del(&pt->list);
> -    pt->on_list = 0;
>      pt->pending_intr_nr = 0;
> +    pt->masked = false;

... why not "true" here, at the very least for pt_active()'s sake?

> --- a/xen/include/asm-x86/hvm/vpt.h
> +++ b/xen/include/asm-x86/hvm/vpt.h
> @@ -31,13 +31,10 @@
>  typedef void time_cb(struct vcpu *v, void *opaque);
>  
>  struct periodic_time {
> -    struct list_head list;
> -    bool on_list;
>      bool one_shot;
> -    bool do_not_freeze;
> -    bool irq_issued;
>      bool warned_timeout_too_short;
>      bool level;
> +    bool masked;

"masked" aiui doesn't say anything about the present state of a
timer, but about its state the last time an interrupt injection was
attempted. If this is right, a name change ("last_seen_masked" is
somewhat longish) might be helpful; at the very least I'd like to
ask for a comment to this effect.

> @@ -158,7 +153,7 @@ void pt_adjust_global_vcpu_target(struct vcpu *v);
>  void pt_may_unmask_irq(struct domain *d, struct periodic_time *vlapic_pt);
>  
>  /* Is given periodic timer active? */
> -#define pt_active(pt) ((pt)->on_list || (pt)->pending_intr_nr)
> +#define pt_active(pt) !(pt)->masked

This wants parentheses around it. And why does the right side of the
|| go away?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11090.29389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCi-0001ub-N3; Fri, 23 Oct 2020 15:42:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11090.29389; Fri, 23 Oct 2020 15:42:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCi-0001uT-JT; Fri, 23 Oct 2020 15:42:08 +0000
Received: by outflank-mailman (input) for mailman id 11090;
 Fri, 23 Oct 2020 15:42:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzCg-0001tF-NH
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b82c153b-1e68-4902-b1fc-9280c714a7cb;
 Fri, 23 Oct 2020 15:42:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCd-0006qO-Oh; Fri, 23 Oct 2020 15:42:03 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCd-0007wb-E2; Fri, 23 Oct 2020 15:42:03 +0000
X-Inumbo-ID: b82c153b-1e68-4902-b1fc-9280c714a7cb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=5sAqH1p7E9o/e3huPMn1JHSLfgrcyUzmCSK/8SVUy1U=; b=nwHVoFWswujMk2ZQTQyCNN4bd
	NuokyUNJAPDY8glIgM1ehxPo2tRO6ZlkbldJpeTiD+3Oe6kqdQ2h86fHpkARJneWsZu6P0NxBmUfW
	Kz63m2Irep3sVriPLjeObGK7tS0nCs3OVNUNbEWBqcVno3un9BX+8nRmToKvsqx5eN6pY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	masami.hiramatsu@linaro.org,
	ehem+xen@m5p.com,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Rahul Singh <rahul.singh@arm.com>
Subject: [PATCH v2 1/7] xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()
Date: Fri, 23 Oct 2020 16:41:50 +0100
Message-Id: <20201023154156.6593-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201023154156.6593-1-julien@xen.org>
References: <20201023154156.6593-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic,
while __acpi_map_table() is meant to be arch-specific.

Currently, the former still contain x86-specific code.

To avoid this rather strange split, the generic helpers are reworked so
they are arch-agnostic. This requires the introduction of a new helper,
__acpi_unmap_table(), that will undo any mapping done by
__acpi_map_table().

Currently, the arch helper for unmapping is basically a no-op: it only
reports whether the mapping was arch-specific. But this will change
in the future.

Note that the x86 version of acpi_os_map_memory() was already able to
map the low 1MB region, hence no new code needs to be added.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>

---
    Changes in v2:
        - Constify ptr in __acpi_unmap_table()
        - Coding style fixes
        - Fix build on arm64
        - Use PAGE_OFFSET() rather than open-coding it
        - Add Rahul's tested-by and reviewed-by
---
 xen/arch/arm/acpi/lib.c | 12 ++++++++++++
 xen/arch/x86/acpi/lib.c | 18 ++++++++++++++++++
 xen/drivers/acpi/osl.c  | 34 ++++++++++++++++++----------------
 xen/include/xen/acpi.h  |  1 +
 4 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
index 4fc6e17322c1..fcc186b03399 100644
--- a/xen/arch/arm/acpi/lib.c
+++ b/xen/arch/arm/acpi/lib.c
@@ -30,6 +30,10 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
     unsigned long base, offset, mapped_size;
     int idx;
 
+    /* No arch specific implementation after early boot */
+    if ( system_state >= SYS_STATE_boot )
+        return NULL;
+
     offset = phys & (PAGE_SIZE - 1);
     mapped_size = PAGE_SIZE - offset;
     set_fixmap(FIXMAP_ACPI_BEGIN, maddr_to_mfn(phys), PAGE_HYPERVISOR);
@@ -49,6 +53,14 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
     return ((char *) base + offset);
 }
 
+bool __acpi_unmap_table(const void *ptr, unsigned long size)
+{
+    vaddr_t vaddr = (vaddr_t)ptr;
+
+    return ((vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) &&
+            (vaddr < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE)));
+}
+
 /* True to indicate PSCI 0.2+ is implemented */
 bool __init acpi_psci_present(void)
 {
diff --git a/xen/arch/x86/acpi/lib.c b/xen/arch/x86/acpi/lib.c
index 265b9ad81905..a22414a05c13 100644
--- a/xen/arch/x86/acpi/lib.c
+++ b/xen/arch/x86/acpi/lib.c
@@ -46,6 +46,10 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
 	if ((phys + size) <= (1 * 1024 * 1024))
 		return __va(phys);
 
+	/* No further arch specific implementation after early boot */
+	if (system_state >= SYS_STATE_boot)
+		return NULL;
+
 	offset = phys & (PAGE_SIZE - 1);
 	mapped_size = PAGE_SIZE - offset;
 	set_fixmap(FIX_ACPI_END, phys);
@@ -66,6 +70,20 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
 	return ((char *) base + offset);
 }
 
+bool __acpi_unmap_table(const void *ptr, unsigned long size)
+{
+	unsigned long vaddr = (unsigned long)ptr;
+
+	if ((vaddr >= DIRECTMAP_VIRT_START) &&
+	    (vaddr < DIRECTMAP_VIRT_END)) {
+		ASSERT(!((__pa(ptr) + size - 1) >> 20));
+		return true;
+	}
+
+	return ((vaddr >= __fix_to_virt(FIX_ACPI_END)) &&
+		(vaddr < (__fix_to_virt(FIX_ACPI_BEGIN) + PAGE_SIZE)));
+}
+
 unsigned int acpi_get_processor_id(unsigned int cpu)
 {
 	unsigned int acpiid, apicid;
diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
index 4c8bb7839eda..389505f78666 100644
--- a/xen/drivers/acpi/osl.c
+++ b/xen/drivers/acpi/osl.c
@@ -92,27 +92,29 @@ acpi_physical_address __init acpi_os_get_root_pointer(void)
 void __iomem *
 acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 {
-	if (system_state >= SYS_STATE_boot) {
-		mfn_t mfn = _mfn(PFN_DOWN(phys));
-		unsigned int offs = phys & (PAGE_SIZE - 1);
-
-		/* The low first Mb is always mapped on x86. */
-		if (IS_ENABLED(CONFIG_X86) && !((phys + size - 1) >> 20))
-			return __va(phys);
-		return __vmap(&mfn, PFN_UP(offs + size), 1, 1,
-			      ACPI_MAP_MEM_ATTR, VMAP_DEFAULT) + offs;
-	}
-	return __acpi_map_table(phys, size);
+	void *ptr;
+	mfn_t mfn = _mfn(PFN_DOWN(phys));
+	unsigned int offs = PAGE_OFFSET(phys);
+
+	/* Try the arch specific implementation first */
+	ptr = __acpi_map_table(phys, size);
+	if (ptr)
+		return ptr;
+
+	/* No common implementation for early boot map */
+	if (unlikely(system_state < SYS_STATE_boot))
+		return NULL;
+
+	ptr = __vmap(&mfn, PFN_UP(offs + size), 1, 1,
+		     ACPI_MAP_MEM_ATTR, VMAP_DEFAULT);
+
+	return !ptr ? NULL : (ptr + offs);
 }
 
 void acpi_os_unmap_memory(void __iomem * virt, acpi_size size)
 {
-	if (IS_ENABLED(CONFIG_X86) &&
-	    (unsigned long)virt >= DIRECTMAP_VIRT_START &&
-	    (unsigned long)virt < DIRECTMAP_VIRT_END) {
-		ASSERT(!((__pa(virt) + size - 1) >> 20));
+	if (__acpi_unmap_table(virt, size))
 		return;
-	}
 
 	if (system_state >= SYS_STATE_boot)
 		vunmap((void *)((unsigned long)virt & PAGE_MASK));
diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
index c945ab05c864..21d5e9feb5ae 100644
--- a/xen/include/xen/acpi.h
+++ b/xen/include/xen/acpi.h
@@ -68,6 +68,7 @@ typedef int (*acpi_table_entry_handler) (struct acpi_subtable_header *header, co
 
 unsigned int acpi_get_processor_id (unsigned int cpu);
 char * __acpi_map_table (paddr_t phys_addr, unsigned long size);
+bool __acpi_unmap_table(const void *ptr, unsigned long size);
 int acpi_boot_init (void);
 int acpi_boot_table_init (void);
 int acpi_numa_init (void);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11089.29376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCf-0001sU-Du; Fri, 23 Oct 2020 15:42:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11089.29376; Fri, 23 Oct 2020 15:42:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCf-0001sN-Ay; Fri, 23 Oct 2020 15:42:05 +0000
Received: by outflank-mailman (input) for mailman id 11089;
 Fri, 23 Oct 2020 15:42:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzCe-0001sI-6d
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b243efb0-0e17-4060-a9f9-46ebb4bb49bf;
 Fri, 23 Oct 2020 15:42:02 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCb-0006pf-05; Fri, 23 Oct 2020 15:42:01 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCa-0007wb-IC; Fri, 23 Oct 2020 15:42:00 +0000
X-Inumbo-ID: b243efb0-0e17-4060-a9f9-46ebb4bb49bf
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=rvzJmItCyl6RWcuoLLfHErwnWZgEzsudKHjxWh09svw=; b=1XAf5qSP6bnbMqCivCyd+AYGTh
	GvvS0Wjvasb6yfv6tjDfHrVtZ49ku67ATw1da+Er2CfAJENYGKD/3qD4jkocKs31IChBopLv8+Jh0
	OUezGqKtYRsQKm1QfrM3XCk+ouRarFUZeCZYhESBSZbc9DHISg6P3chmg7YzBAXKHkCQ=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	masami.hiramatsu@linaro.org,
	ehem+xen@m5p.com,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v2 0/7] xen/arm: Unbreak ACPI
Date: Fri, 23 Oct 2020 16:41:49 +0100
Message-Id: <20201023154156.6593-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

Xen on ARM has been broken for quite a while on ACPI systems. This
series aims to fix it.

This series also introduces support for ACPI 5.1. This allows Xen to
boot on QEMU.

I have only build tested the x86 side so far.

Cheers,

Julien Grall (7):
  xen/acpi: Rework acpi_os_map_memory() and acpi_os_unmap_memory()
  xen/arm: acpi: The fixmap area should always be cleared during
    failure/unmap
  xen/arm: Check if the platform is not using ACPI before initializing
    Dom0less
  xen/arm: Introduce fw_unreserved_regions() and use it
  xen/arm: acpi: add BAD_MADT_GICC_ENTRY() macro
  xen/arm: gic-v2: acpi: Use the correct length for the GICC structure
  xen/arm: acpi: Allow Xen to boot with ACPI 5.1

 xen/arch/arm/acpi/boot.c    |  6 +--
 xen/arch/arm/acpi/lib.c     | 79 ++++++++++++++++++++++++++++++-------
 xen/arch/arm/gic-v2.c       |  5 ++-
 xen/arch/arm/gic-v3.c       |  2 +-
 xen/arch/arm/kernel.c       |  2 +-
 xen/arch/arm/setup.c        | 25 +++++++++---
 xen/arch/x86/acpi/lib.c     | 18 +++++++++
 xen/drivers/acpi/osl.c      | 34 ++++++++--------
 xen/include/asm-arm/acpi.h  |  8 ++++
 xen/include/asm-arm/setup.h |  3 +-
 xen/include/xen/acpi.h      |  1 +
 11 files changed, 139 insertions(+), 44 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11091.29400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCk-0001wn-68; Fri, 23 Oct 2020 15:42:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11091.29400; Fri, 23 Oct 2020 15:42:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCk-0001wf-29; Fri, 23 Oct 2020 15:42:10 +0000
Received: by outflank-mailman (input) for mailman id 11091;
 Fri, 23 Oct 2020 15:42:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzCj-0001sI-5x
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f66bf689-5611-4936-a33c-af2393b9d86d;
 Fri, 23 Oct 2020 15:42:08 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCh-0006qd-V6; Fri, 23 Oct 2020 15:42:07 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCh-0007wb-Jh; Fri, 23 Oct 2020 15:42:07 +0000
X-Inumbo-ID: f66bf689-5611-4936-a33c-af2393b9d86d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=e/uuoKFzFJnK8H8UoaneqmSTwLDi27UJbrlnGwgsDCk=; b=qUupofu5KDc3BkRqa8wjSrCme
	VGN/JetlsmJm/XyhwoFx3FKBjMMF4CVaS7qMbX+igDtyJ1/Qtx/smGiIlYso7dVL/ZVnF0FeWaKtk
	hC2+bdOst5Pe6OJRkZu6bcee/6YS/6fwI9wel6b7V32mHs3CUEeY41z1L+nnb4qT0WUfc=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	masami.hiramatsu@linaro.org,
	ehem+xen@m5p.com,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Rahul Singh <rahul.singh@arm.com>
Subject: [PATCH v2 3/7] xen/arm: Check if the platform is not using ACPI before initializing Dom0less
Date: Fri, 23 Oct 2020 16:41:52 +0100
Message-Id: <20201023154156.6593-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201023154156.6593-1-julien@xen.org>
References: <20201023154156.6593-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Dom0less requires a device-tree. However, since commit 6e3e77120378
"xen/arm: setup: Relocate the Device-Tree later on in the boot", the
device-tree will not get unflattened when using ACPI.

This will lead to a crash during boot.

Given the complexity of setting up dom0less with ACPI (for instance,
how to assign devices?), we should skip any code related to Dom0less
when using ACPI.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Rahul Singh <rahul.singh@arm.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

---
    Changes in v2:
        - Add Rahul's tested-by and reviewed-by
        - Add Stefano's reviewed-by
---
 xen/arch/arm/setup.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index f16b33fa87a2..35e5bee04efa 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -987,7 +987,8 @@ void __init start_xen(unsigned long boot_phys_offset,
 
     system_state = SYS_STATE_active;
 
-    create_domUs();
+    if ( acpi_disabled )
+        create_domUs();
 
     domain_unpause_by_systemcontroller(dom0);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11092.29413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCo-00020v-HM; Fri, 23 Oct 2020 15:42:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11092.29413; Fri, 23 Oct 2020 15:42:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCo-00020m-Dh; Fri, 23 Oct 2020 15:42:14 +0000
Received: by outflank-mailman (input) for mailman id 11092;
 Fri, 23 Oct 2020 15:42:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzCn-000207-Fv
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb530848-315a-4f24-8da9-04ff76f25cbb;
 Fri, 23 Oct 2020 15:42:12 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCl-0006r5-V8; Fri, 23 Oct 2020 15:42:11 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCl-0007wb-L8; Fri, 23 Oct 2020 15:42:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kVzCn-000207-Fv
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:13 +0000
X-Inumbo-ID: eb530848-315a-4f24-8da9-04ff76f25cbb
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id eb530848-315a-4f24-8da9-04ff76f25cbb;
	Fri, 23 Oct 2020 15:42:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=3b+zUC4rUGoRV/bU3U7XTe8hxIwKqKy/A0anE7Xt22g=; b=Ex15/X/yHkm5VcqouUFLOTVy2
	GjAr5mN3s7s1NO5KlyhbJPi3JBiSWWOGhwHextGMcqmtQ3f5SmArE0zeJnS0LygZo9+oSD2M3jQcb
	ECX6CjrkpWg9hBAsPW4DNfNfbmt2qPrknD9dwVZLJ+XUafyOc6C3Q0WD8H8oy2zuwJvZo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVzCl-0006r5-V8; Fri, 23 Oct 2020 15:42:11 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVzCl-0007wb-L8; Fri, 23 Oct 2020 15:42:11 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	masami.hiramatsu@linaro.org,
	ehem+xen@m5p.com,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v2 5/7] xen/arm: acpi: add BAD_MADT_GICC_ENTRY() macro
Date: Fri, 23 Oct 2020 16:41:54 +0100
Message-Id: <20201023154156.6593-6-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201023154156.6593-1-julien@xen.org>
References: <20201023154156.6593-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

Imported from Linux commit b6cfb277378ef831c0fa84bcff5049307294adc6:

    The BAD_MADT_ENTRY() macro is designed to work for all of the subtables
    of the MADT.  In the ACPI 5.1 version of the spec, the struct for the
    GICC subtable (struct acpi_madt_generic_interrupt) is 76 bytes long; in
    ACPI 6.0, the struct is 80 bytes long.  But, there is only one definition
    in ACPICA for this struct -- and that is the 6.0 version.  Hence, when
    BAD_MADT_ENTRY() compares the struct size to the length in the GICC
    subtable, it fails if 5.1 structs are in use, and there are systems in
    the wild that have them.

    This patch adds the BAD_MADT_GICC_ENTRY() that checks the GICC subtable
    only, accounting for the difference in specification versions that are
    possible.  The BAD_MADT_ENTRY() will continue to work as is for all other
    MADT subtables.

    This code is being added to an arm64 header file since that is currently
    the only architecture using the GICC subtable of the MADT.  As a GIC is
    specific to ARM, it is also unlikely the subtable will be used elsewhere.

    Fixes: aeb823bbacc2 ("ACPICA: ACPI 6.0: Add changes for FADT table.")
    Signed-off-by: Al Stone <al.stone@linaro.org>
    Acked-by: Will Deacon <will.deacon@arm.com>
    Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
    [catalin.marinas@arm.com: extra brackets around macro arguments]
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Changes in v2:
    - Patch added

We may want to consider also importing:

commit 9eb1c92b47c73249465d388eaa394fe436a3b489
Author: Jeremy Linton <jeremy.linton@arm.com>
Date:   Tue Nov 27 17:59:12 2018 +0000

    arm64: acpi: Prepare for longer MADTs

    The BAD_MADT_GICC_ENTRY check is a little too strict because
    it rejects MADT entries that don't match the currently known
    lengths. We should remove this restriction to avoid problems
    if the table length changes. Future code which might depend on
    additional fields should be written to validate those fields
    before using them, rather than trying to globally check
    known MADT version lengths.

    Link: https://lkml.kernel.org/r/20181012192937.3819951-1-jeremy.linton@arm.com
    Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
    [lorenzo.pieralisi@arm.com: added MADT macro comments]
    Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Acked-by: Sudeep Holla <sudeep.holla@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Al Stone <ahs3@redhat.com>
    Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
    Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 xen/include/asm-arm/acpi.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/include/asm-arm/acpi.h b/xen/include/asm-arm/acpi.h
index 50340281a917..b52ae2d6ef72 100644
--- a/xen/include/asm-arm/acpi.h
+++ b/xen/include/asm-arm/acpi.h
@@ -54,6 +54,14 @@ void acpi_smp_init_cpus(void);
  */
 paddr_t acpi_get_table_offset(struct membank tbl_add[], EFI_MEM_RES index);
 
+/* Macros for consistency checks of the GICC subtable of MADT */
+#define ACPI_MADT_GICC_LENGTH	\
+    (acpi_gbl_FADT.header.revision < 6 ? 76 : 80)
+
+#define BAD_MADT_GICC_ENTRY(entry, end)						\
+    (!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) ||	\
+     (entry)->header.length != ACPI_MADT_GICC_LENGTH)
+
 #ifdef CONFIG_ACPI
 extern bool acpi_disabled;
 /* Basic configuration for ACPI */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11093.29425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCp-00023I-T8; Fri, 23 Oct 2020 15:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11093.29425; Fri, 23 Oct 2020 15:42:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCp-000236-Ot; Fri, 23 Oct 2020 15:42:15 +0000
Received: by outflank-mailman (input) for mailman id 11093;
 Fri, 23 Oct 2020 15:42:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzCo-0001sI-11
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db9169e1-10ff-4b6d-9589-f36f9e8684c5;
 Fri, 23 Oct 2020 15:42:10 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCj-0006qo-SC; Fri, 23 Oct 2020 15:42:09 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCj-0007wb-IJ; Fri, 23 Oct 2020 15:42:09 +0000
X-Inumbo-ID: db9169e1-10ff-4b6d-9589-f36f9e8684c5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=xtMpaYLn6G5WzirrAmp6ZgVW0jgej4JPXWVQTvLzkCo=; b=4IiEUZqsbsyOdS75K5/AVt7sW
	d4Q6IqmGbH8LriIHX2DeilsKgbwamOXU4sh/VLRB1onAOqLIG51TLQlaCaz1f29JfuclM1gPzSIET
	+lkNoBi4N/VJzh8IUCPeqRBxvAd7T+LT461M1c/Jg/f4EOgc4iawB2NUP3e2aYskdojLI=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	masami.hiramatsu@linaro.org,
	ehem+xen@m5p.com,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 4/7] xen/arm: Introduce fw_unreserved_regions() and use it
Date: Fri, 23 Oct 2020 16:41:53 +0100
Message-Id: <20201023154156.6593-5-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201023154156.6593-1-julien@xen.org>
References: <20201023154156.6593-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Since commit 6e3e77120378 "xen/arm: setup: Relocate the Device-Tree
later on in the boot", the device-tree will not be kept mapped when
using ACPI.

However, a few places call dt_unreserved_regions(), which expects a
valid DT. On ACPI systems this will lead to a crash.

As the DT should not be used for ACPI (other than for detecting the
modules), a new function fw_unreserved_regions() is introduced.

It behaves the same way on DT systems. On ACPI systems, it unreserves
the whole region.

Take the opportunity to clarify that bootinfo.reserved_mem is only used
when booting using Device-Tree.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Is there any region we should exclude on ACPI?

    Changes in v2:
        - Add a comment on top of bootinfo.reserved_mem.
---
 xen/arch/arm/kernel.c       |  2 +-
 xen/arch/arm/setup.c        | 22 +++++++++++++++++-----
 xen/include/asm-arm/setup.h |  3 ++-
 3 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
index 032923853f2c..ab78689ed2a6 100644
--- a/xen/arch/arm/kernel.c
+++ b/xen/arch/arm/kernel.c
@@ -307,7 +307,7 @@ static __init int kernel_decompress(struct bootmodule *mod)
      * Free the original kernel, update the pointers to the
      * decompressed kernel
      */
-    dt_unreserved_regions(addr, addr + size, init_domheap_pages, 0);
+    fw_unreserved_regions(addr, addr + size, init_domheap_pages, 0);
 
     return 0;
 }
diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 35e5bee04efa..7fcff9af2a7e 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -196,8 +196,9 @@ static void __init processor_id(void)
     processor_setup();
 }
 
-void __init dt_unreserved_regions(paddr_t s, paddr_t e,
-                                  void (*cb)(paddr_t, paddr_t), int first)
+static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
+                                         void (*cb)(paddr_t, paddr_t),
+                                         int first)
 {
     int i, nr = fdt_num_mem_rsv(device_tree_flattened);
 
@@ -244,6 +245,15 @@ void __init dt_unreserved_regions(paddr_t s, paddr_t e,
     cb(s, e);
 }
 
+void __init fw_unreserved_regions(paddr_t s, paddr_t e,
+                                  void (*cb)(paddr_t, paddr_t), int first)
+{
+    if ( acpi_disabled )
+        dt_unreserved_regions(s, e, cb, first);
+    else
+        cb(s, e);
+}
+
 struct bootmodule __init *add_boot_module(bootmodule_kind kind,
                                           paddr_t start, paddr_t size,
                                           bool domU)
@@ -405,7 +417,7 @@ void __init discard_initial_modules(void)
              !mfn_valid(maddr_to_mfn(e)) )
             continue;
 
-        dt_unreserved_regions(s, e, init_domheap_pages, 0);
+        fw_unreserved_regions(s, e, init_domheap_pages, 0);
     }
 
     mi->nr_mods = 0;
@@ -712,7 +724,7 @@ static void __init setup_mm(void)
                 n = mfn_to_maddr(mfn_add(xenheap_mfn_start, xenheap_pages));
             }
 
-            dt_unreserved_regions(s, e, init_boot_pages, 0);
+            fw_unreserved_regions(s, e, init_boot_pages, 0);
 
             s = n;
         }
@@ -765,7 +777,7 @@ static void __init setup_mm(void)
             if ( e > bank_end )
                 e = bank_end;
 
-            dt_unreserved_regions(s, e, init_boot_pages, 0);
+            fw_unreserved_regions(s, e, init_boot_pages, 0);
             s = n;
         }
     }
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 2f8f24e286ed..28bf622aa196 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -67,6 +67,7 @@ struct bootcmdlines {
 
 struct bootinfo {
     struct meminfo mem;
+    /* The reserved regions are only used when booting using Device-Tree */
     struct meminfo reserved_mem;
     struct bootmodules modules;
     struct bootcmdlines cmdlines;
@@ -96,7 +97,7 @@ int construct_dom0(struct domain *d);
 void create_domUs(void);
 
 void discard_initial_modules(void);
-void dt_unreserved_regions(paddr_t s, paddr_t e,
+void fw_unreserved_regions(paddr_t s, paddr_t e,
                            void (*cb)(paddr_t, paddr_t), int first);
 
 size_t boot_fdt_info(const void *fdt, paddr_t paddr);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11094.29437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCr-00025r-7e; Fri, 23 Oct 2020 15:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11094.29437; Fri, 23 Oct 2020 15:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCr-00025g-2t; Fri, 23 Oct 2020 15:42:17 +0000
Received: by outflank-mailman (input) for mailman id 11094;
 Fri, 23 Oct 2020 15:42:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzCq-000207-Gj
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bfc2abb2-916e-40ab-af75-851762a3b815;
 Fri, 23 Oct 2020 15:42:14 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCo-0006rJ-1R; Fri, 23 Oct 2020 15:42:14 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCn-0007wb-Nx; Fri, 23 Oct 2020 15:42:14 +0000
X-Inumbo-ID: bfc2abb2-916e-40ab-af75-851762a3b815
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=0dMZlXEOpL//B8eU1LrZwd4fsJbSpHy+FuAFJFAQtlc=; b=o9ZBsOfjpCE2qodp6iwnSik+u
	/G+MLIopOzwOuzu2Lmno4MWQQrO/cZynUbVEs63UAVkSCM3fyYYmzXokXvilQ/c7IGOwK5U1wAekT
	Dw5d1ThSe8xhB6wIYvxxmtDzs58TvYrwgfq3h64UMsEF8rZ+hLoV91zXPlh9d7iNpHS7k=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	masami.hiramatsu@linaro.org,
	ehem+xen@m5p.com,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v2 6/7] xen/arm: gic-v2: acpi: Use the correct length for the GICC structure
Date: Fri, 23 Oct 2020 16:41:55 +0100
Message-Id: <20201023154156.6593-7-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201023154156.6593-1-julien@xen.org>
References: <20201023154156.6593-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

The length of the GICC structure in the MADT ACPI table differs between
version 5.1 and 6.0, although there are no other relevant differences.

Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
overcome this issue.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Patch added
---
 xen/arch/arm/acpi/boot.c | 2 +-
 xen/arch/arm/gic-v2.c    | 5 +++--
 xen/arch/arm/gic-v3.c    | 2 +-
 3 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 30e4bd1bc5a7..55c3e5cbc834 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -131,7 +131,7 @@ acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     acpi_table_print_madt_entry(header);
diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
index 0f747538dbcd..0e5f23201974 100644
--- a/xen/arch/arm/gic-v2.c
+++ b/xen/arch/arm/gic-v2.c
@@ -1136,7 +1136,8 @@ static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
 
     host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
                              header);
-    size = sizeof(struct acpi_madt_generic_interrupt);
+
+    size = ACPI_MADT_GICC_LENGTH;
     /* Add Generic Interrupt */
     for ( i = 0; i < d->max_vcpus; i++ )
     {
@@ -1165,7 +1166,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     /* Read from APIC table and fill up the GIC variables */
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index 0f6cbf6224e9..ce202402c0ed 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1558,7 +1558,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
     struct acpi_madt_generic_interrupt *processor =
                container_of(header, struct acpi_madt_generic_interrupt, header);
 
-    if ( BAD_MADT_ENTRY(processor, end) )
+    if ( BAD_MADT_GICC_ENTRY(processor, end) )
         return -EINVAL;
 
     /* Read from APIC table and fill up the GIC variables */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11095.29448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCu-0002BA-Gp; Fri, 23 Oct 2020 15:42:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11095.29448; Fri, 23 Oct 2020 15:42:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCu-0002B0-DW; Fri, 23 Oct 2020 15:42:20 +0000
Received: by outflank-mailman (input) for mailman id 11095;
 Fri, 23 Oct 2020 15:42:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzCt-0001sI-10
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cead6ae4-bffe-432e-bbd1-a6f115e21901;
 Fri, 23 Oct 2020 15:42:11 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCf-0006qX-Qv; Fri, 23 Oct 2020 15:42:05 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCf-0007wb-Gq; Fri, 23 Oct 2020 15:42:05 +0000
X-Inumbo-ID: cead6ae4-bffe-432e-bbd1-a6f115e21901
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=HcZbTU8t/M6Ruiq9fikm3iT8lCruTeZsHeQARUWgYFE=; b=P3FpaibtAFbVLFIK8KEi1gfn0
	RcJXK5ewmuL4+2tSJdS5DM8VCfs+VJw6wpYnxV81k1ZDcundBuos/SFQiEThSIljnvdY2me8xoo8j
	YY7YZqqQGYWkAWdKpubEnvW/lq9g3PxizhH8Gu+vu9owG5F2nrgGh6FKjsI0Y1eWk67lA=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	masami.hiramatsu@linaro.org,
	ehem+xen@m5p.com,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Xu <xuwei5@hisilicon.com>
Subject: [PATCH v2 2/7] xen/arm: acpi: The fixmap area should always be cleared during failure/unmap
Date: Fri, 23 Oct 2020 16:41:51 +0100
Message-Id: <20201023154156.6593-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201023154156.6593-1-julien@xen.org>
References: <20201023154156.6593-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Commit 022387ee1ad3 "xen/arm: mm: Don't open-code Xen PT update in
{set, clear}_fixmap()" enforced that each set_fixmap() should be
paired with a clear_fixmap(). Any failure to follow the model would
result in a platform crash.

Unfortunately, the use of fixmap in the ACPI code was overlooked as it
is calling set_fixmap() but not clear_fixmap().

The function __acpi_os_map_table() is reworked so that:
    - We know, before mapping, whether the fixmap region is big
    enough for the mapping.
    - It will fail if the fixmap is already in use. This is not a
    change of behavior but clarifies the current expectation, to avoid
    hitting a BUG().

The function __acpi_os_unmap_table() will now call clear_fixmap().

Reported-by: Wei Xu <xuwei5@hisilicon.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---

The discussion on the original thread [1] suggested also zapping it on
x86. This is technically not necessary today, so it is left alone for
now.

I looked at making the fixmap code common, but the indices are inverted
between Arm and x86.

    Changes in v2:
        - Clarify the commit message
        - Fix the size computation in __acpi_unmap_table()

[1] https://lore.kernel.org/xen-devel/5E26C935.9080107@hisilicon.com/
---
 xen/arch/arm/acpi/lib.c | 73 +++++++++++++++++++++++++++++++----------
 1 file changed, 56 insertions(+), 17 deletions(-)

diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
index fcc186b03399..b755620e67b5 100644
--- a/xen/arch/arm/acpi/lib.c
+++ b/xen/arch/arm/acpi/lib.c
@@ -25,40 +25,79 @@
 #include <xen/init.h>
 #include <xen/mm.h>
 
+static bool fixmap_inuse;
+
 char *__acpi_map_table(paddr_t phys, unsigned long size)
 {
-    unsigned long base, offset, mapped_size;
-    int idx;
+    unsigned long base, offset;
+    mfn_t mfn;
+    unsigned int idx;
 
     /* No arch specific implementation after early boot */
     if ( system_state >= SYS_STATE_boot )
         return NULL;
 
     offset = phys & (PAGE_SIZE - 1);
-    mapped_size = PAGE_SIZE - offset;
-    set_fixmap(FIXMAP_ACPI_BEGIN, maddr_to_mfn(phys), PAGE_HYPERVISOR);
-    base = FIXMAP_ADDR(FIXMAP_ACPI_BEGIN);
+    base = FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) + offset;
+
+    /* Check the fixmap is big enough to map the region */
+    if ( (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - base) < size )
+        return NULL;
+
+    /* With the fixmap, we can only map one region at a time */
+    if ( fixmap_inuse )
+        return NULL;
 
-    /* Most cases can be covered by the below. */
+    fixmap_inuse = true;
+
+    size += offset;
+    mfn = maddr_to_mfn(phys);
     idx = FIXMAP_ACPI_BEGIN;
-    while ( mapped_size < size )
-    {
-        if ( ++idx > FIXMAP_ACPI_END )
-            return NULL;    /* cannot handle this */
-        phys += PAGE_SIZE;
-        set_fixmap(idx, maddr_to_mfn(phys), PAGE_HYPERVISOR);
-        mapped_size += PAGE_SIZE;
-    }
 
-    return ((char *) base + offset);
+    do {
+        set_fixmap(idx, mfn, PAGE_HYPERVISOR);
+        size -= min(size, (unsigned long)PAGE_SIZE);
+        mfn = mfn_add(mfn, 1);
+        idx++;
+    } while ( size > 0 );
+
+    return (char *)base;
 }
 
 bool __acpi_unmap_table(const void *ptr, unsigned long size)
 {
     vaddr_t vaddr = (vaddr_t)ptr;
+    unsigned int idx;
+
+    /* We are only handling fixmap addresses in the arch code */
+    if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
+         (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END)) )
+        return false;
+
+    /*
+     * __acpi_map_table() will always return a pointer in the first page
+     * for the ACPI fixmap region. The caller is expected to free with
+     * the same address.
+     */
+    ASSERT((vaddr & PAGE_MASK) == FIXMAP_ADDR(FIXMAP_ACPI_BEGIN));
+
+    /* The region allocated fits in the ACPI fixmap region. */
+    ASSERT(size < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - vaddr));
+    ASSERT(fixmap_inuse);
+
+    fixmap_inuse = false;
+
+    size += vaddr - FIXMAP_ADDR(FIXMAP_ACPI_BEGIN);
+    idx = FIXMAP_ACPI_BEGIN;
+
+    do
+    {
+        clear_fixmap(idx);
+        size -= min(size, (unsigned long)PAGE_SIZE);
+        idx++;
+    } while ( size > 0 );
 
-    return ((vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) &&
-            (vaddr < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE)));
+    return true;
 }
 
 /* True to indicate PSCI 0.2+ is implemented */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11096.29461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCx-0002GC-5H; Fri, 23 Oct 2020 15:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11096.29461; Fri, 23 Oct 2020 15:42:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzCx-0002G3-0B; Fri, 23 Oct 2020 15:42:23 +0000
Received: by outflank-mailman (input) for mailman id 11096;
 Fri, 23 Oct 2020 15:42:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzCv-000207-H1
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3237385-ff05-47ad-a248-91e89864f56d;
 Fri, 23 Oct 2020 15:42:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCq-0006rW-47; Fri, 23 Oct 2020 15:42:16 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzCp-0007wb-Qi; Fri, 23 Oct 2020 15:42:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kVzCv-000207-H1
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:21 +0000
X-Inumbo-ID: b3237385-ff05-47ad-a248-91e89864f56d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b3237385-ff05-47ad-a248-91e89864f56d;
	Fri, 23 Oct 2020 15:42:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=9Jhhh200nUTGbCy8jyEQ9Z0MRCAcS0zUrLABG2/n+kQ=; b=eU6k8QiNQIfc2naCg2wGciaPZ
	fP1QY7kRegzgMuXtFrMcdCK5j9lxQdhdIer7ua0FYk7Svf07Z5VnSqKPyt83G+8nqUd2mIJfh61nw
	RdUQVmZd9hfqyEns2XOKDe+9eEgPW3OqR1ZsALtkkvrH1WwSKqH46Xxqu2aCmerLEyJv0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVzCq-0006rW-47; Fri, 23 Oct 2020 15:42:16 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235] helo=ufe34d9ed68d054.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVzCp-0007wb-Qi; Fri, 23 Oct 2020 15:42:16 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: alex.bennee@linaro.org,
	masami.hiramatsu@linaro.org,
	ehem+xen@m5p.com,
	bertrand.marquis@arm.com,
	andre.przywara@arm.com,
	Rahul.Singh@arm.com,
	Julien Grall <julien.grall@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v2 7/7] xen/arm: acpi: Allow Xen to boot with ACPI 5.1
Date: Fri, 23 Oct 2020 16:41:56 +0100
Message-Id: <20201023154156.6593-8-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201023154156.6593-1-julien@xen.org>
References: <20201023154156.6593-1-julien@xen.org>

From: Julien Grall <julien.grall@arm.com>

At the moment Xen requires the FADT ACPI table to be at least version
6.0, apparently because of some reliance on other ACPI v6.0 features.

But this is overzealous: Xen now works fine with ACPI v5.1.

Let's relax the version check for the FADT table to allow QEMU to
run the hypervisor with ACPI.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Patch added
---
 xen/arch/arm/acpi/boot.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
index 55c3e5cbc834..7ea2990cb82c 100644
--- a/xen/arch/arm/acpi/boot.c
+++ b/xen/arch/arm/acpi/boot.c
@@ -181,8 +181,8 @@ static int __init acpi_parse_fadt(struct acpi_table_header *table)
      * we only deal with ACPI 6.0 or newer revision to get GIC and SMP
      * boot protocol configuration data, or we will disable ACPI.
      */
-    if ( table->revision > 6
-         || (table->revision == 6 && fadt->minor_revision >= 0) )
+    if ( table->revision > 5
+         || (table->revision == 5 && fadt->minor_revision >= 1) )
         return 0;
 
     printk("Unsupported FADT revision %d.%d, should be 6.0+, will disable ACPI\n",
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:42:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:42:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11104.29473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzDG-0002aU-Hq; Fri, 23 Oct 2020 15:42:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11104.29473; Fri, 23 Oct 2020 15:42:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzDG-0002aM-E5; Fri, 23 Oct 2020 15:42:42 +0000
Received: by outflank-mailman (input) for mailman id 11104;
 Fri, 23 Oct 2020 15:42:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVzDF-0002WD-55
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:42:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8540496d-860c-429e-96fa-8c2ceb2bc626;
 Fri, 23 Oct 2020 15:42:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4400DAF2B;
 Fri, 23 Oct 2020 15:42:39 +0000 (UTC)
X-Inumbo-ID: 8540496d-860c-429e-96fa-8c2ceb2bc626
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603467759;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HvO0lS572aXuQyWV7KeUQz/6uyXq3Y+9QrhDtWKaEFw=;
	b=On8fdivDL45GQ3MXIeD3S2COb2X+aKdcUGS2nTXS4GxYpY10gQOuk1n2NAPxZ4iZXx53SD
	osW/+XVkgxjgEvw2Z0PwDJdIBMX60pFUbgrlfYsYf5JOPxnBGJEAE7ipBWkCtaLgGRyGB2
	JMF8bKA5Rc7daOsZDEgLV4t0nrI0qS0=
Subject: Re: [PATCH v2 11/11] x86/vpt: introduce a per-vPT lock
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <20200930104108.35969-1-roger.pau@citrix.com>
 <20200930104108.35969-12-roger.pau@citrix.com>
 <20200930133048.GV19254@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ae9d0ba6-3aed-7ce6-bea8-a42b60af7137@suse.com>
Date: Fri, 23 Oct 2020 17:42:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20200930133048.GV19254@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.09.2020 15:30, Roger Pau Monné wrote:
> On Wed, Sep 30, 2020 at 12:41:08PM +0200, Roger Pau Monne wrote:
>> Introduce a per virtual timer lock that replaces the existing per-vCPU
>> and per-domain vPT locks. Since virtual timers are no longer assigned
>> or migrated between vCPUs the locking can be simplified to an
>> in-structure spinlock that protects all the fields.
>>
>> This requires introducing a helper to initialize the spinlock, and
>> that could be used to initialize other virtual timer fields in the
>> future.
>>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Just realized I had the following uncommitted chunk that should have
> been part of this patch, nothing critical but the tm_lock can now be
> removed.

And then
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:46:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:46:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11123.29485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzGa-00037l-1z; Fri, 23 Oct 2020 15:46:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11123.29485; Fri, 23 Oct 2020 15:46:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzGZ-00037e-VC; Fri, 23 Oct 2020 15:46:07 +0000
Received: by outflank-mailman (input) for mailman id 11123;
 Fri, 23 Oct 2020 15:46:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzGY-00037Z-NH
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:46:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7001696-143d-49f6-b10a-9c4fc0701a5f;
 Fri, 23 Oct 2020 15:46:04 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzGS-0006x1-1x; Fri, 23 Oct 2020 15:46:00 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzGR-0008Qn-S0; Fri, 23 Oct 2020 15:46:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kVzGY-00037Z-NH
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:46:06 +0000
X-Inumbo-ID: c7001696-143d-49f6-b10a-9c4fc0701a5f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c7001696-143d-49f6-b10a-9c4fc0701a5f;
	Fri, 23 Oct 2020 15:46:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=GvqaUzrigzIbVtYNvyY97YRR3jek2DTCCG92ttcjFIA=; b=COOtbH5sd1SNbg+g6mTYtWtsem
	yMQxIrCDxSQ9nzrhyKeshIWNlOzyDUxOzXwvStetl99b2b8c7NRyB3AbjcDk0cdBo1vnhaGgquj7n
	yJPGQuOzscJLbqKNU5mtRKirAjCG9bSQVH3iYr1mZ2iy50ltxlH2+18xwRiRHFbM130g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVzGS-0006x1-1x; Fri, 23 Oct 2020 15:46:00 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVzGR-0008Qn-S0; Fri, 23 Oct 2020 15:46:00 +0000
Subject: Re: [PATCH v4 2/4] xen: Introduce HAS_M2P config and use to protect
 mfn_to_gmfn call
From: Julien Grall <julien@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20200921180214.4842-1-julien@xen.org>
 <20200921180214.4842-3-julien@xen.org>
 <a2e1773d-cb01-fa02-334a-a642f9316b57@citrix.com>
 <d80519d8-6699-7beb-9192-0e87623b0b62@xen.org>
 <bc50c5cd-d239-60a4-0a66-790717de5815@citrix.com>
 <ff006b75-73d2-ae21-1811-fbd5c9c244c7@xen.org>
Message-ID: <32d4f762-a61d-bfdd-c4a8-38e5edef1aa8@xen.org>
Date: Fri, 23 Oct 2020 16:45:57 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <ff006b75-73d2-ae21-1811-fbd5c9c244c7@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 26/09/2020 14:00, Julien Grall wrote:
> Hi Andrew,
> 
> On 22/09/2020 19:56, Andrew Cooper wrote:
>> On 22/09/2020 19:20, Julien Grall wrote:
>>>>> +
>>>>>    #endif /* __ASM_DOMAIN_H__ */
>>>>>      /*
>>>>> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
>>>>> index 5c5e55ebcb76..7564df5e8374 100644
>>>>> --- a/xen/include/public/domctl.h
>>>>> +++ b/xen/include/public/domctl.h
>>>>> @@ -136,6 +136,12 @@ struct xen_domctl_getdomaininfo {
>>>>>        uint64_aligned_t outstanding_pages;
>>>>>        uint64_aligned_t shr_pages;
>>>>>        uint64_aligned_t paged_pages;
>>>>> +#define XEN_INVALID_SHARED_INFO_FRAME (~(uint64_t)0)
>>>>
>>>> We've already got INVALID_GFN as a constant used in the interface.  
>>>> Lets
>>>> not proliferate more.
>>>
>>> This was my original approach (see [1]) but this was reworked because:
>>>     1) INVALID_GFN is not technically defined in the ABI. So the
>>> toolstack has to hardcode the value in the check.
>>>     2) The value is different between 32-bit and 64-bit Arm as
>>> INVALID_GFN is defined as an unsigned long.
>>>
>>> So providing a new define is the right way to go.
>>
>> There is nothing special about this field.  It should not have a
>> dedicated constant, when a general one is the appropriate one to use.
>>
>> libxl already has LIBXL_INVALID_GFN, which is already used.
> 
> Right, but that implies it cannot be used by libxc, as this would be a 
> layer violation.
> 
>>
>> If this isn't good enough, them the right thing to do is put a proper
>> INVALID_GFN in the tools interface.
> 
> That would be nice, but I can see some issues on x86 given that we don't 
> consistently define a GFN in the interface as a 64-bit value.
> 
> So would you still be happy to consider introducing XEN_INVALID_GFN in 
> the interface with some caveats?

Gentle ping. @Andrew, are you happy with this approach?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 15:47:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 15:47:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11126.29497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzHz-0003FU-DV; Fri, 23 Oct 2020 15:47:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11126.29497; Fri, 23 Oct 2020 15:47:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzHz-0003FN-9b; Fri, 23 Oct 2020 15:47:35 +0000
Received: by outflank-mailman (input) for mailman id 11126;
 Fri, 23 Oct 2020 15:47:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kVzHy-0003FI-Ap
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:47:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 899383f0-e24f-4cb4-a8ce-17f557e89553;
 Fri, 23 Oct 2020 15:47:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 719A2AD49;
 Fri, 23 Oct 2020 15:47:32 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=gNxR=D6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kVzHy-0003FI-Ap
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 15:47:34 +0000
X-Inumbo-ID: 899383f0-e24f-4cb4-a8ce-17f557e89553
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 899383f0-e24f-4cb4-a8ce-17f557e89553;
	Fri, 23 Oct 2020 15:47:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603468052;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZMQG8HMDTnF0Z6C9utrK1ZfxnzVLy957Ni7BC+3SY4A=;
	b=hYjzQH9TDvEWmhrw29C6hoQ1tW/zErOkV59h3+vJblSSwAtshznyefE70SOQDBFyG4/c3E
	xQWBvV31sSvpkm2NdhYdsrbBsqORqzv/X8qBzRyIOFiLv29RRVmk1grr1So9U7+LN2dGJF
	nW1+wfV2NzPRVgOJKmYCjEk/veNEgSQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 719A2AD49;
	Fri, 23 Oct 2020 15:47:32 +0000 (UTC)
Subject: Re: [PATCH v2 1/7] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Rahul.Singh@arm.com, Julien Grall
 <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20201023154156.6593-1-julien@xen.org>
 <20201023154156.6593-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0fe86a2b-290e-ebd0-ab1a-d6318bef66fe@suse.com>
Date: Fri, 23 Oct 2020 17:47:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201023154156.6593-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 23.10.2020 17:41, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
> while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
> 
> Currently, the former are still containing x86 specific code.
> 
> To avoid this rather strange split, the generic helpers are reworked so
> they are arch-agnostic. This requires the introduction of a new helper
> __acpi_os_unmap_memory() that will undo any mapping done by
> __acpi_os_map_memory().
> 
> Currently, the arch-helper for unmap is basically a no-op so it only
> returns whether the mapping was arch specific. But this will change
> in the future.
> 
> Note that the x86 version of acpi_os_map_memory() was already able to
> map the 1MB region. Hence there is no addition of new code.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
> Tested-by: Rahul Singh <rahul.singh@arm.com>

Non-Arm parts
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:05:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:05:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11139.29517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzYc-0005n9-VO; Fri, 23 Oct 2020 16:04:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11139.29517; Fri, 23 Oct 2020 16:04:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzYc-0005n1-SC; Fri, 23 Oct 2020 16:04:46 +0000
Received: by outflank-mailman (input) for mailman id 11139;
 Fri, 23 Oct 2020 16:04:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVzYb-0005mw-1P
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:04:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c174df08-0dc6-4639-9cc8-0bfc89dcbbf1;
 Fri, 23 Oct 2020 16:04:44 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzYX-0007uQ-Ew; Fri, 23 Oct 2020 16:04:41 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVzYX-0001Qo-5W; Fri, 23 Oct 2020 16:04:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kVzYb-0005mw-1P
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:04:45 +0000
X-Inumbo-ID: c174df08-0dc6-4639-9cc8-0bfc89dcbbf1
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c174df08-0dc6-4639-9cc8-0bfc89dcbbf1;
	Fri, 23 Oct 2020 16:04:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=cAFotQCaqJldMMQoBx1Mucz8jlPrhF5zZ2dGZIKFe1I=; b=3Lg7JQT3MvAUsmKEZ3ZdWkLDV3
	M9hrE6Zz52Py1ikLT1wHTeQh4t1YDc7iovqAQM8+bS56Kg7t4Fkd/MocqxvkSA0GbwVhynQAvC0U1
	aSBeB9027e6aa8uyqtowOp+lJyTwj7tRuDYZM9JtpKwQVoNHZsdOGkfYeEimDH9OgSfM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVzYX-0007uQ-Ew; Fri, 23 Oct 2020 16:04:41 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVzYX-0001Qo-5W; Fri, 23 Oct 2020 16:04:41 +0000
Subject: Re: [PATCH] PCI: drop dead pci_lock_*pdev() declarations
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cb644565-92c9-8dbe-8c36-54e8b6b722ad@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <744bb4fb-3e2d-652d-ef55-d42a404851b4@xen.org>
Date: Fri, 23 Oct 2020 17:04:39 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <cb644565-92c9-8dbe-8c36-54e8b6b722ad@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 23/10/2020 09:02, Jan Beulich wrote:
> They have no definitions, and hence users, anywhere.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> 
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -155,9 +155,6 @@ bool_t pci_device_detect(u16 seg, u8 bus
>   int scan_pci_devices(void);
>   enum pdev_type pdev_type(u16 seg, u8 bus, u8 devfn);
>   int find_upstream_bridge(u16 seg, u8 *bus, u8 *devfn, u8 *secbus);
> -struct pci_dev *pci_lock_pdev(int seg, int bus, int devfn);
> -struct pci_dev *pci_lock_domain_pdev(
> -    struct domain *, int seg, int bus, int devfn);
>   
>   void setup_hwdom_pci_devices(struct domain *,
>                               int (*)(u8 devfn, struct pci_dev *));
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:14:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:14:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11147.29533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVziJ-0006rJ-TH; Fri, 23 Oct 2020 16:14:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11147.29533; Fri, 23 Oct 2020 16:14:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVziJ-0006rC-Pr; Fri, 23 Oct 2020 16:14:47 +0000
Received: by outflank-mailman (input) for mailman id 11147;
 Fri, 23 Oct 2020 16:14:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kVziI-0006r7-NJ
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:14:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 73090fc0-4388-4a80-afc9-419641a5526a;
 Fri, 23 Oct 2020 16:14:45 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVziG-00086e-9R; Fri, 23 Oct 2020 16:14:44 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kVziG-0002Lt-0g; Fri, 23 Oct 2020 16:14:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nEE3=D6=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kVziI-0006r7-NJ
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:14:46 +0000
X-Inumbo-ID: 73090fc0-4388-4a80-afc9-419641a5526a
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 73090fc0-4388-4a80-afc9-419641a5526a;
	Fri, 23 Oct 2020 16:14:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=FovwwkTpbjUCmAUNjvltY+9UukJzlORO7DJLQNXPO4c=; b=HEFNcpswtJITyFIG5DpV9SqMmP
	BQuYAAha66CXTbQtnqYOFSS/GatldkhRhN+nR/Zsf8CKTCquOil3dwtNo/yIRijgut5T5+9OEtCvt
	YdQjQA4C6cLsjKf7xi8hHPzq7oADfUIk1N1DYWbDhxdbhHIRpgJeygnrgz6i6jJYY7RI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVziG-00086e-9R; Fri, 23 Oct 2020 16:14:44 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kVziG-0002Lt-0g; Fri, 23 Oct 2020 16:14:44 +0000
Subject: Re: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
To: Jason Andryuk <jandryuk@gmail.com>, Ian Jackson <ian.jackson@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20200525025506.225959-1-jandryuk@gmail.com>
 <24269.8360.504075.118119@mariner.uk.xensource.com>
 <CAKf6xpuqTdSc-qnfHu=yyEo6V45QLiSP6j=XsgEudoO4ojFaJw@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bb8435bf-de66-ae74-0ccb-57c1a0710b18@xen.org>
Date: Fri, 23 Oct 2020 17:14:41 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpuqTdSc-qnfHu=yyEo6V45QLiSP6j=XsgEudoO4ojFaJw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jason,

On 20/10/2020 14:27, Jason Andryuk wrote:
> On Tue, May 26, 2020 at 10:13 AM Ian Jackson <ian.jackson@citrix.com> wrote:
>>
>> Jason Andryuk writes ("[PATCH] SUPPORT: Add linux device model stubdom to Toolstack"):
>>> Add qemu-xen linux device model stubdomain to the Toolstack section as a
>>> Tech Preview.
>>
>> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Hi, this never got applied.  It should go to staging and 4.14.

Sorry this fell through the cracks. I have committed it with the 
existing Acks.

Regarding 4.14, this would need to go through a backport request.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:14:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:14:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11148.29545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVziR-0006ti-4z; Fri, 23 Oct 2020 16:14:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11148.29545; Fri, 23 Oct 2020 16:14:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVziR-0006tb-1n; Fri, 23 Oct 2020 16:14:55 +0000
Received: by outflank-mailman (input) for mailman id 11148;
 Fri, 23 Oct 2020 16:14:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5an6=D6=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kVziP-0006tE-Nb
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:14:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d8e22ab-e6c9-40b2-8a2d-ca541c165d7c;
 Fri, 23 Oct 2020 16:14:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVziO-00086m-GP
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:14:52 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kVziO-0002NH-FT
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:14:52 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kVziM-0008Eb-Mc; Fri, 23 Oct 2020 17:14:50 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=5an6=D6=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kVziP-0006tE-Nb
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:14:53 +0000
X-Inumbo-ID: 2d8e22ab-e6c9-40b2-8a2d-ca541c165d7c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2d8e22ab-e6c9-40b2-8a2d-ca541c165d7c;
	Fri, 23 Oct 2020 16:14:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=bypnxhjVJ8hrVMcfMf2eie599ceBF4c9S0o8XImtOP8=; b=CNMrdGwN7QU51wM8mbKr5MB+Dt
	BmON7cpvCNnRPx7mC1vI8NYOp0Th1sQomSET6+WarLDKdmzGjku4OFAhxngYTxje4pXPvNJc2KOSy
	+4kNySUcyd1hyjTlS9v+RrR/UMy/v43mf2wPi2BQ6LO/JE/JDJLuFOIk5gn9xt11bvNI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVziO-00086m-GP
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:14:52 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVziO-0002NH-FT
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:14:52 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kVziM-0008Eb-Mc; Fri, 23 Oct 2020 17:14:50 +0100
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH] host reuse fixes: Properly clear out old static tasks from history
Date: Fri, 23 Oct 2020 17:14:44 +0100
Message-Id: <20201023161444.2133-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The algorithm for clearing out old lifecycle entries was wrong: it
would delete all entries for non-live tasks.

In practice this would properly remove all the old entries for
non-static tasks, since owned tasks typically don't release things
until the task ends (and it becomes non-live).  And it wouldn't remove
more than it should do unless some now-not-live task had an allocation
overlapping with us, which is not supposed to be possible if we are
doing a host wipe.  But it would not remove static tasks ever, since
they are always live.

Change to a completely different algorithm:

 * Check that only we (ie, $ttaskid) have (any shares of) this host
   allocated.  There's a function resource_check_allocated_core which
   already does this and since we're conceptually part of Executive
   it is proper for us to call it.  This is just a sanity check.

 * Delete all lifecycle entries predating the first entry made by
   us.  (We could just delete all entries other than ours, but in
   theory maybe some future code could result in a situation where
   someone else could have had another share briefly at some point.)

This removes old junk from the "Tasks that could have affected"
section in reports.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/JobDB/Executive.pm | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/Osstest/JobDB/Executive.pm b/Osstest/JobDB/Executive.pm
index 1dcf55ff..097c8d75 100644
--- a/Osstest/JobDB/Executive.pm
+++ b/Osstest/JobDB/Executive.pm
@@ -515,15 +515,19 @@ sub jobdb_host_update_lifecycle_info ($$$) { #method
 
     if ($mode eq 'wiped') {
 	db_retry($flight, [qw(running)], $dbh_tests,[], sub {
-            $dbh_tests->do(<<END, {}, $hostname);
-                DELETE FROM host_lifecycle h
-                      WHERE hostname=?
-                        AND NOT EXISTS(
-                SELECT 1
-		  FROM tasks t
-		 WHERE t.live
-		   AND t.taskid = h.taskid
-                );
+            my $cshare = Osstest::Executive::resource_check_allocated_core(
+                "host",$hostname);
+            die "others have this host allocated when we have just wiped it! "
+	      .Dumper($cshare)
+	      if $cshare->{Others};
+	    $dbh_tests->do(<<END, {}, $hostname, $hostname, $ttaskid);
+                DELETE FROM host_lifecycle
+		      WHERE hostname=?
+			AND lcseq < (
+			       SELECT min(lcseq) 
+				FROM host_lifecycle
+			       WHERE hostname=? and taskid=?
+			    )
 END
         });
 	logm("host lifecycle: $hostname: wiped, cleared out old info");
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11159.29573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqf-00083B-EP; Fri, 23 Oct 2020 16:23:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11159.29573; Fri, 23 Oct 2020 16:23:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqf-000832-Aa; Fri, 23 Oct 2020 16:23:25 +0000
Received: by outflank-mailman (input) for mailman id 11159;
 Fri, 23 Oct 2020 16:23:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqd-00081j-TW
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e91dac0-7484-4bc2-9cda-ec2c9295f15a;
 Fri, 23 Oct 2020 16:23:23 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqb-0008J2-Us; Fri, 23 Oct 2020 16:23:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqb-000376-NJ; Fri, 23 Oct 2020 16:23:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVzqd-00081j-TW
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:23 +0000
X-Inumbo-ID: 0e91dac0-7484-4bc2-9cda-ec2c9295f15a
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0e91dac0-7484-4bc2-9cda-ec2c9295f15a;
	Fri, 23 Oct 2020 16:23:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=kyqm8nP4Buhsk6mGmHQHc+zCmy1DFqv8sR89/X9OQ0M=; b=N2imVYaF3AB50X40UqM0wt/DO
	HWQIc3/cUxNyg3kfuwHs6BsZh5bGinlZ7ysYPmJh6dYQSfjsdfJ7znSdn4vlkU/lA+iFBYpC3mAGq
	1/XORhty1j3c92iWiO3Ho9il21i+ONQqJ/0qSbZsG0qsdmHHLqBvan0MTnrpprNmWQT4c=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqb-0008J2-Us; Fri, 23 Oct 2020 16:23:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqb-000376-NJ; Fri, 23 Oct 2020 16:23:21 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 02/25] libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices
Date: Fri, 23 Oct 2020 16:22:51 +0000
Message-Id: <20201023162314.2235-3-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Remove the open-coded definition of libxl_device_pci_list().

NOTE: Using the macro also defines libxl_device_pci_list_free() so a prototype
      for it is added. Subsequent patches will make use of it.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/include/libxl.h        |  7 +++++++
 tools/libs/light/libxl_pci.c | 27 ++-------------------------
 2 files changed, 9 insertions(+), 25 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index fbe4c81ba5..ee52d3cf7e 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -452,6 +452,12 @@
 #define LIBXL_HAVE_CONFIG_PCIS 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_LIST_FREE indicates that the
+ * libxl_device_pci_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2321,6 +2327,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
 
 libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid,
                                         int *num);
+void libxl_device_pci_list_free(libxl_device_pci* list, int num);
 
 /*
  * Turns the current process into a backend device service daemon
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 2ff1c64a31..515e74fe5a 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -2393,31 +2393,6 @@ static int libxl__device_pci_get_num(libxl__gc *gc, const char *be_path,
     return rc;
 }
 
-libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num)
-{
-    GC_INIT(ctx);
-    char *be_path;
-    unsigned int n, i;
-    libxl_device_pci *pcis = NULL;
-
-    *num = 0;
-
-    be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
-                                                LIBXL__DEVICE_KIND_PCI);
-    if (libxl__device_pci_get_num(gc, be_path, &n))
-        goto out;
-
-    pcis = calloc(n, sizeof(libxl_device_pci));
-
-    for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
-
-    *num = n;
-out:
-    GC_FREE;
-    return pcis;
-}
-
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                    libxl__multidev *multidev)
 {
@@ -2492,6 +2467,8 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
     return COMPARE_PCI(d1, d2);
 }
 
+LIBXL_DEFINE_DEVICE_LIST(pci)
+
 #define libxl__device_pci_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11158.29561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqe-00081v-3J; Fri, 23 Oct 2020 16:23:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11158.29561; Fri, 23 Oct 2020 16:23:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqd-00081k-Un; Fri, 23 Oct 2020 16:23:23 +0000
Received: by outflank-mailman (input) for mailman id 11158;
 Fri, 23 Oct 2020 16:23:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqc-00081e-DL
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc477e0e-6fda-4b50-a9de-9e1b26c444c1;
 Fri, 23 Oct 2020 16:23:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqa-0008Is-4H; Fri, 23 Oct 2020 16:23:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqZ-000376-P8; Fri, 23 Oct 2020 16:23:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVzqc-00081e-DL
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:22 +0000
X-Inumbo-ID: dc477e0e-6fda-4b50-a9de-9e1b26c444c1
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id dc477e0e-6fda-4b50-a9de-9e1b26c444c1;
	Fri, 23 Oct 2020 16:23:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=paZPJP/oEG4bKcO58FfOeXtzFY1cQuLkf+AoEVb8yz0=; b=lQAizG/RYyjlJLRghBHH11uGzI
	RDI2HOkkUyN4LrVLyossHeD0s6gIhrXAuutTUV4bxLWw/3xENWQ7FfA+T1v9kBumL5x6JlLHvFWgw
	uaFBi5CwgYi8SjxnjjzkkzURw5vj7vMvQ8iTDFYFLgGkCpugh+vS/Ue90U62dumNWFm8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqa-0008Is-4H; Fri, 23 Oct 2020 16:23:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqZ-000376-P8; Fri, 23 Oct 2020 16:23:19 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 00/25] xl / libxl: named PCI pass-through devices
Date: Fri, 23 Oct 2020 16:22:49 +0000
Message-Id: <20201023162314.2235-1-paul@xen.org>
X-Mailer: git-send-email 2.11.0

From: Paul Durrant <pdurrant@amazon.com>

This series adds support for naming devices added to the assignable list
and then using a name (instead of a BDF) for convenience when attaching
a device to a domain.

The first 15 patches are cleanup. The remaining 10 modify documentation
and add the new functionality.

Paul Durrant (25):
  xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
  libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices
  libxl: use LIBXL_DEFINE_DEVICE_LIST for nic devices
  libxl: s/domainid/domid/g in libxl_pci.c
  libxl: s/detatched/detached in libxl_pci.c
  libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
  libxl: stop using aodev->device_config in libxl__device_pci_add()...
  libxl: generalise 'driver_path' xenstore access functions in
    libxl_pci.c
  libxl: remove unnecessary check from libxl__device_pci_add()
  libxl: remove get_all_assigned_devices() from libxl_pci.c
  libxl: make sure callers of libxl_device_pci_list() free the list
    after use
  libxl: add libxl_device_pci_assignable_list_free()...
  libxl: use COMPARE_PCI() macro is_pci_in_array()...
  libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
  libxl: Make sure devices added by pci-attach are reflected in the
    config
  docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
    manpage...
  docs/man: improve documentation of PCI_SPEC_STRING...
  docs/man: fix xl(1) documentation for 'pci' operations
  libxl: introduce 'libxl_pci_bdf' in the idl...
  libxlu: introduce xlu_pci_parse_spec_string()
  libxl: modify
    libxl_device_pci_assignable_add/remove/list/list_free()...
  docs/man: modify xl(1) in preparation for naming of assignable devices
  xl / libxl: support naming of assignable devices
  docs/man: modify xl-pci-configuration(5) to add 'name' field to
    PCI_SPEC_STRING
  xl / libxl: support 'xl pci-attach/detach' by name

 docs/man/xl-pci-configuration.5.pod  |  218 +++++++
 docs/man/xl.1.pod.in                 |   39 +-
 docs/man/xl.cfg.5.pod.in             |   68 +--
 tools/golang/xenlight/helpers.gen.go |   77 ++-
 tools/golang/xenlight/types.gen.go   |    8 +-
 tools/include/libxl.h                |   67 ++-
 tools/include/libxlutil.h            |    8 +-
 tools/libs/light/libxl_create.c      |    6 +-
 tools/libs/light/libxl_dm.c          |   18 +-
 tools/libs/light/libxl_internal.h    |   53 +-
 tools/libs/light/libxl_nic.c         |   19 +-
 tools/libs/light/libxl_pci.c         | 1072 ++++++++++++++++++----------------
 tools/libs/light/libxl_types.idl     |   19 +-
 tools/libs/util/libxlu_pci.c         |  359 ++++++------
 tools/ocaml/libs/xl/xenlight_stubs.c |   19 +-
 tools/xl/xl_cmdtable.c               |   16 +-
 tools/xl/xl_parse.c                  |   30 +-
 tools/xl/xl_pci.c                    |  164 +++---
 tools/xl/xl_sxp.c                    |   12 +-
 19 files changed, 1337 insertions(+), 935 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod
---
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11160.29585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqi-00085k-NZ; Fri, 23 Oct 2020 16:23:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11160.29585; Fri, 23 Oct 2020 16:23:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqi-00085c-KP; Fri, 23 Oct 2020 16:23:28 +0000
Received: by outflank-mailman (input) for mailman id 11160;
 Fri, 23 Oct 2020 16:23:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqh-00081e-Bx
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf5bee94-46ef-4101-91c2-c55571be3951;
 Fri, 23 Oct 2020 16:23:25 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqe-0008JN-CR; Fri, 23 Oct 2020 16:23:24 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqe-000376-4u; Fri, 23 Oct 2020 16:23:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVzqh-00081e-Bx
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:27 +0000
X-Inumbo-ID: cf5bee94-46ef-4101-91c2-c55571be3951
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cf5bee94-46ef-4101-91c2-c55571be3951;
	Fri, 23 Oct 2020 16:23:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=GJYb7sAQkLDokuXQ2JDjweeTgQOvSRxXD4pmq8a7R7o=; b=Mtpt2GuDcvRfR2zgRagk4wS1w
	/s0dAZ7L5AaFJRUpNbdEAT0qh1lO1iyiUguU6j1tox8ogTlAnt92iKbKzgxhnfi3RwGnQ5q1d2Sd+
	CWcJCaFbeQOPTmkG4FhRsAFPQhBM7sFbbtJ9Gi7z9+2LRE/+d9EIzvx7mJEpeXAnWAed8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqe-0008JN-CR; Fri, 23 Oct 2020 16:23:24 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqe-000376-4u; Fri, 23 Oct 2020 16:23:24 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 05/25] libxl: s/detatched/detached in libxl_pci.c
Date: Fri, 23 Oct 2020 16:22:54 +0000
Message-Id: <20201023162314.2235-6-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Simple spelling correction; purely cosmetic.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 58242b5b94..3936d60a14 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1860,7 +1860,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
 static void pci_remove_timeout(libxl__egc *egc,
     libxl__ev_time *ev, const struct timeval *requested_abs, int rc);
-static void pci_remove_detatched(libxl__egc *egc,
+static void pci_remove_detached(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 static void pci_remove_stubdom_done(libxl__egc *egc,
     libxl__ao_device *aodev);
@@ -1973,7 +1973,7 @@ skip1:
 skip_irq:
     rc = 0;
 out_fail:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1997,7 +1997,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del(libxl__egc *egc,
@@ -2023,7 +2023,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
@@ -2046,7 +2046,7 @@ static void pci_remove_qmp_device_del_cb(libxl__egc *egc,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
@@ -2062,7 +2062,7 @@ static void pci_remove_qmp_retry_timer_cb(libxl__egc *egc, libxl__ev_time *ev,
     return;
 
 out:
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
 static void pci_remove_qmp_query_cb(libxl__egc *egc,
@@ -2122,7 +2122,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     }
 
 out:
-    pci_remove_detatched(egc, prs, rc); /* must be last */
+    pci_remove_detached(egc, prs, rc); /* must be last */
 }
 
 static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
@@ -2141,12 +2141,12 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
      * error */
-    pci_remove_detatched(egc, prs, rc);
+    pci_remove_detached(egc, prs, rc);
 }
 
-static void pci_remove_detatched(libxl__egc *egc,
-                                 pci_remove_state *prs,
-                                 int rc)
+static void pci_remove_detached(libxl__egc *egc,
+                                pci_remove_state *prs,
+                                int rc)
 {
     STATE_AO_GC(prs->aodev->ao);
     int stubdomid = 0;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11161.29597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqk-000885-0Z; Fri, 23 Oct 2020 16:23:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11161.29597; Fri, 23 Oct 2020 16:23:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqj-00087y-Tb; Fri, 23 Oct 2020 16:23:29 +0000
Received: by outflank-mailman (input) for mailman id 11161;
 Fri, 23 Oct 2020 16:23:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqi-00081j-Rz
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79d71e5c-7d8c-4c80-abe7-635d4a86d4f5;
 Fri, 23 Oct 2020 16:23:23 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqc-0008J8-Po; Fri, 23 Oct 2020 16:23:22 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqc-000376-HB; Fri, 23 Oct 2020 16:23:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVzqi-00081j-Rz
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:28 +0000
X-Inumbo-ID: 79d71e5c-7d8c-4c80-abe7-635d4a86d4f5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 79d71e5c-7d8c-4c80-abe7-635d4a86d4f5;
	Fri, 23 Oct 2020 16:23:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=jx3PjnZjVUkMl0nTJ/ptt3J2EPZY9Ep+JNq2r5DzMzg=; b=3S9cR+SJl4FReoaGiBqUu4J0t
	pVMo50e3/3gUicBPd4w81CCLEVHsqVSIJauT2TLpg9nDfm/otfSMWU9oYPsVbd50aKyobs1fNHMnE
	g3O+YXXLXQGpysTGjgRPuYFO35oLbcxXnt+ouxc6LwBub93TlsMI5BK+MerK0WG/Mbhvs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqc-0008J8-Po; Fri, 23 Oct 2020 16:23:22 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqc-000376-HB; Fri, 23 Oct 2020 16:23:22 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 03/25] libxl: use LIBXL_DEFINE_DEVICE_LIST for nic devices
Date: Fri, 23 Oct 2020 16:22:52 +0000
Message-Id: <20201023162314.2235-4-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Remove open-coded definitions of libxl_device_nic_list() and
libxl_device_nic_list_free().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>

This patch is slightly tangential. I just happened to notice the inefficiency
while looking at code for various device types.
---
 tools/libs/light/libxl_nic.c | 19 +------------------
 1 file changed, 1 insertion(+), 18 deletions(-)

diff --git a/tools/libs/light/libxl_nic.c b/tools/libs/light/libxl_nic.c
index 0e5d120ae9..a44058f929 100644
--- a/tools/libs/light/libxl_nic.c
+++ b/tools/libs/light/libxl_nic.c
@@ -403,24 +403,6 @@ static int libxl__nic_from_xenstore(libxl__gc *gc, const char *libxl_path,
     return rc;
 }
 
-libxl_device_nic *libxl_device_nic_list(libxl_ctx *ctx, uint32_t domid, int *num)
-{
-    libxl_device_nic *r;
-
-    GC_INIT(ctx);
-
-    r = libxl__device_list(gc, &libxl__nic_devtype, domid, num);
-
-    GC_FREE;
-
-    return r;
-}
-
-void libxl_device_nic_list_free(libxl_device_nic* list, int num)
-{
-    libxl__device_list_free(&libxl__nic_devtype, list, num);
-}
-
 int libxl_device_nic_getinfo(libxl_ctx *ctx, uint32_t domid,
                               const libxl_device_nic *nic,
                               libxl_nicinfo *nicinfo)
@@ -527,6 +509,7 @@ LIBXL_DEFINE_DEVID_TO_DEVICE(nic)
 LIBXL_DEFINE_DEVICE_ADD(nic)
 LIBXL_DEFINE_DEVICES_ADD(nic)
 LIBXL_DEFINE_DEVICE_REMOVE(nic)
+LIBXL_DEFINE_DEVICE_LIST(nic)
 
 DEFINE_DEVICE_TYPE_STRUCT(nic, VIF,
     .update_config = libxl_device_nic_update_config,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11162.29609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqo-0008DD-A9; Fri, 23 Oct 2020 16:23:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11162.29609; Fri, 23 Oct 2020 16:23:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqo-0008D4-6l; Fri, 23 Oct 2020 16:23:34 +0000
Received: by outflank-mailman (input) for mailman id 11162;
 Fri, 23 Oct 2020 16:23:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqm-00081e-C8
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d74bd44-e25a-4dd0-a026-178d0dc88266;
 Fri, 23 Oct 2020 16:23:26 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqf-0008JU-6J; Fri, 23 Oct 2020 16:23:25 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqe-000376-V3; Fri, 23 Oct 2020 16:23:25 +0000
X-Inumbo-ID: 5d74bd44-e25a-4dd0-a026-178d0dc88266
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=d5hhK/pKE8zCFXVC7B3RNGDmz0Oe2e31kzCzkBdDeBM=; b=BR9viEbkG0CJzaf6Fobq/85Yi
	QIo8B/2Mvcg7lWD+3Jf4532XLAir3N9SagRDNWRaDo3lk8vgmbIPuIJGS40SmeKVCSnR6wAtn7+1s
	gf7mAs1QW3Xo31BOFQVI1QatbHKmbicpV67Ws6KA4Ad6SuZQijq0hsPJ2C7FdYXl6x8mQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 06/25] libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
Date: Fri, 23 Oct 2020 16:22:55 +0000
Message-Id: <20201023162314.2235-7-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Both 'domid' and 'pci' are available in 'pci_remove_state', so there is no
need to also pass them as separate arguments.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 3936d60a14..97889fda49 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1867,14 +1867,14 @@ static void pci_remove_stubdom_done(libxl__egc *egc,
 static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
-static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pci, int force,
-                          pci_remove_state *prs)
+static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
     libxl_device_pci *assigned;
+    uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
+    libxl_device_pci *pci = prs->pci;
     int rc, num;
 
     assigned = libxl_device_pci_list(ctx, domid, &num);
@@ -2269,7 +2269,6 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_domid domid = prs->domid;
     libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
@@ -2287,7 +2286,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
             } else {
                 pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pci, prs->force, prs);
+            do_pci_remove(egc, prs);
             return;
         }
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11163.29621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqp-0008G8-NM; Fri, 23 Oct 2020 16:23:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11163.29621; Fri, 23 Oct 2020 16:23:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqp-0008Fx-JB; Fri, 23 Oct 2020 16:23:35 +0000
Received: by outflank-mailman (input) for mailman id 11163;
 Fri, 23 Oct 2020 16:23:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqn-00081j-S5
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:33 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0477d06b-3336-4a5f-a8a5-3a15c8eb6455;
 Fri, 23 Oct 2020 16:23:24 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqd-0008JG-Iu; Fri, 23 Oct 2020 16:23:23 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqd-000376-B2; Fri, 23 Oct 2020 16:23:23 +0000
X-Inumbo-ID: 0477d06b-3336-4a5f-a8a5-3a15c8eb6455
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=Uuq1zUW4SAXAEEW1C7Mx/Yysa/gBiK1oEft0ZGe17wc=; b=fpFK1KTwSmcH68N2XsSoIj34a
	T0iSmGw9/zEGiO63+ojlY18FPhklxXidcw/oty9m+CDCoD8l/nMg+cahmy19zlJwv8gIHqIdtyDnL
	TKlzN5bk8gpkvHJYZftJc7wFr1qDyMnowoYQtDVCkJwaNfQQAf8quea5dQCAZh/xhgLkE=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 04/25] libxl: s/domainid/domid/g in libxl_pci.c
Date: Fri, 23 Oct 2020 16:22:53 +0000
Message-Id: <20201023162314.2235-5-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

It's pointless having two stack variables to hold exactly the same value.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 43 ++++++++++++++++++++-----------------------
 1 file changed, 20 insertions(+), 23 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 515e74fe5a..58242b5b94 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1326,8 +1326,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     int irq, i;
     int r;
     uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
-    uint32_t domainid = domid;
-    bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
+    bool isstubdom = libxl_is_stubdom(ctx, domid, &domid);
 
     /* Convenience aliases */
     bool starting = pas->starting;
@@ -1349,7 +1348,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     irq = 0;
 
     if (f == NULL) {
-        LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
+        LOGED(ERROR, domid, "Couldn't open %s", sysfs_path);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1361,7 +1360,7 @@ static void pci_add_dm_done(libxl__egc *egc,
             if (flags & PCI_BAR_IO) {
                 r = xc_domain_ioport_permission(ctx->xch, domid, start, size, 1);
                 if (r < 0) {
-                    LOGED(ERROR, domainid,
+                    LOGED(ERROR, domid,
                           "xc_domain_ioport_permission 0x%llx/0x%llx (error %d)",
                           start, size, r);
                     fclose(f);
@@ -1372,7 +1371,7 @@ static void pci_add_dm_done(libxl__egc *egc,
                 r = xc_domain_iomem_permission(ctx->xch, domid, start>>XC_PAGE_SHIFT,
                                                 (size+(XC_PAGE_SIZE-1))>>XC_PAGE_SHIFT, 1);
                 if (r < 0) {
-                    LOGED(ERROR, domainid,
+                    LOGED(ERROR, domid,
                           "xc_domain_iomem_permission 0x%llx/0x%llx (error %d)",
                           start, size, r);
                     fclose(f);
@@ -1387,13 +1386,13 @@ static void pci_add_dm_done(libxl__egc *egc,
                                 pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
-        LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
+        LOGED(ERROR, domid, "Couldn't open %s", sysfs_path);
         goto out_no_irq;
     }
     if ((fscanf(f, "%u", &irq) == 1) && irq) {
         r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
         if (r < 0) {
-            LOGED(ERROR, domainid, "xc_physdev_map_pirq irq=%d (error=%d)",
+            LOGED(ERROR, domid, "xc_physdev_map_pirq irq=%d (error=%d)",
                   irq, r);
             fclose(f);
             rc = ERROR_FAIL;
@@ -1401,7 +1400,7 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
         r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
         if (r < 0) {
-            LOGED(ERROR, domainid,
+            LOGED(ERROR, domid,
                   "xc_domain_irq_permission irq=%d (error=%d)", irq, r);
             fclose(f);
             rc = ERROR_FAIL;
@@ -1414,7 +1413,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
                              pci) < 0 ) {
-            LOGD(ERROR, domainid, "Setting permissive for device");
+            LOGD(ERROR, domid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
         }
@@ -1425,13 +1424,13 @@ out_no_irq:
         if (pci->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
             flag &= ~XEN_DOMCTL_DEV_RDM_RELAXED;
         } else if (pci->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
-            LOGED(ERROR, domainid, "unknown rdm check flag.");
+            LOGED(ERROR, domid, "unknown rdm check flag.");
             rc = ERROR_FAIL;
             goto out;
         }
         r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
-            LOGED(ERROR, domainid, "xc_assign_device failed");
+            LOGED(ERROR, domid, "xc_assign_device failed");
             rc = ERROR_FAIL;
             goto out;
         }
@@ -1877,7 +1876,6 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     libxl_device_pci *assigned;
     libxl_domain_type type = libxl__domain_type(gc, domid);
     int rc, num;
-    uint32_t domainid = domid;
 
     assigned = libxl_device_pci_list(ctx, domid, &num);
     if (assigned == NULL) {
@@ -1889,7 +1887,7 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     rc = ERROR_INVAL;
     if ( !is_pci_in_array(assigned, num, pci->domain,
                           pci->bus, pci->dev, pci->func) ) {
-        LOGD(ERROR, domainid, "PCI device not attached to this domain");
+        LOGD(ERROR, domid, "PCI device not attached to this domain");
         goto out_fail;
     }
 
@@ -1925,7 +1923,7 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
         int i;
 
         if (f == NULL) {
-            LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
+            LOGED(ERROR, domid, "Couldn't open %s", sysfs_path);
             goto skip1;
         }
         for (i = 0; i < PROC_PCI_NUM_RESOURCES; i++) {
@@ -1936,7 +1934,7 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
                 if (flags & PCI_BAR_IO) {
                     rc = xc_domain_ioport_permission(ctx->xch, domid, start, size, 0);
                     if (rc < 0)
-                        LOGED(ERROR, domainid,
+                        LOGED(ERROR, domid,
                               "xc_domain_ioport_permission error 0x%x/0x%x",
                               start,
                               size);
@@ -1944,7 +1942,7 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
                     rc = xc_domain_iomem_permission(ctx->xch, domid, start>>XC_PAGE_SHIFT,
                                                     (size+(XC_PAGE_SIZE-1))>>XC_PAGE_SHIFT, 0);
                     if (rc < 0)
-                        LOGED(ERROR, domainid,
+                        LOGED(ERROR, domid,
                               "xc_domain_iomem_permission error 0x%x/0x%x",
                               start,
                               size);
@@ -1957,17 +1955,17 @@ skip1:
                                pci->bus, pci->dev, pci->func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
-            LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
+            LOGED(ERROR, domid, "Couldn't open %s", sysfs_path);
             goto skip_irq;
         }
         if ((fscanf(f, "%u", &irq) == 1) && irq) {
             rc = xc_physdev_unmap_pirq(ctx->xch, domid, irq);
             if (rc < 0) {
-                LOGED(ERROR, domainid, "xc_physdev_unmap_pirq irq=%d", irq);
+                LOGED(ERROR, domid, "xc_physdev_unmap_pirq irq=%d", irq);
             }
             rc = xc_domain_irq_permission(ctx->xch, domid, irq, 0);
             if (rc < 0) {
-                LOGED(ERROR, domainid, "xc_domain_irq_permission irq=%d", irq);
+                LOGED(ERROR, domid, "xc_domain_irq_permission irq=%d", irq);
             }
         }
         fclose(f);
@@ -2152,12 +2150,11 @@ static void pci_remove_detatched(libxl__egc *egc,
 {
     STATE_AO_GC(prs->aodev->ao);
     int stubdomid = 0;
-    uint32_t domainid = prs->domid;
+    uint32_t domid = prs->domid;
     bool isstubdom;
 
     /* Convenience aliases */
     libxl_device_pci *const pci = prs->pci;
-    libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
     libxl__ev_qmp_dispose(gc, &prs->qmp);
@@ -2167,7 +2164,7 @@ static void pci_remove_detatched(libxl__egc *egc,
     if (rc && !prs->force)
         goto out;
 
-    isstubdom = libxl_is_stubdom(CTX, domid, &domainid);
+    isstubdom = libxl_is_stubdom(CTX, domid, &domid);
 
     /* don't do multiple resets while some functions are still passed through */
     if ((pci->vdevfn & 0x7) == 0) {
@@ -2177,7 +2174,7 @@ static void pci_remove_detatched(libxl__egc *egc,
     if (!isstubdom) {
         rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
-            LOGED(ERROR, domainid, "xc_deassign_device failed");
+            LOGED(ERROR, domid, "xc_deassign_device failed");
     }
 
     stubdomid = libxl_get_stubdom_id(CTX, domid);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11164.29633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqt-0008MW-D7; Fri, 23 Oct 2020 16:23:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11164.29633; Fri, 23 Oct 2020 16:23:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqt-0008M9-5T; Fri, 23 Oct 2020 16:23:39 +0000
Received: by outflank-mailman (input) for mailman id 11164;
 Fri, 23 Oct 2020 16:23:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqr-00081e-CW
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21eda335-bcb2-431d-8b54-8363e40956a4;
 Fri, 23 Oct 2020 16:23:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqb-0008Iw-64; Fri, 23 Oct 2020 16:23:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqa-000376-MR; Fri, 23 Oct 2020 16:23:21 +0000
X-Inumbo-ID: 21eda335-bcb2-431d-8b54-8363e40956a4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=2Epbe5jjBp/+RA6U7S2vptokK57Dpce2kJ38YAOJuqY=; b=AdOIB0mG+bpDznfUEDEzTL6BL
	gaezcrVA48p/TZuC6W2jAHliQEDFbwJei9i6MeARq9vrLiYOYIw1joNQCeNpe61DghMTwW5gWAOgw
	5mWLqGZAmBqT5oFw3SI80pQ3VauWzGSGI5WshuurDrJWRHxtA6ZlBpmbLvdr/uoDAD6jI=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 01/25] xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
Date: Fri, 23 Oct 2020 16:22:50 +0000
Message-Id: <20201023162314.2235-2-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

The seemingly arbitrary mixture of 'pci' and 'pcidev' in libxl_pci.c is
confusing and also compromises the use of some macros shared with other device
types. Indeed, it seems that DEFINE_DEVICE_TYPE_STRUCT_X exists solely because
of this duality.

This patch purges 'pcidev' from the libxl code, allowing uses of
DEFINE_DEVICE_TYPE_STRUCT_X to be replaced with DEFINE_DEVICE_TYPE_STRUCT and
hence allowing removal of the former.

For consistency the xl and libs/util code is also modified, though in that
case the change is purely cosmetic.

NOTE: Some of the more gross formatting errors (such as lack of spaces after
      keywords) that came into context have been fixed in libxl_pci.c.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h             |  17 +-
 tools/libs/light/libxl_create.c   |   6 +-
 tools/libs/light/libxl_dm.c       |  18 +-
 tools/libs/light/libxl_internal.h |  45 ++-
 tools/libs/light/libxl_pci.c      | 582 +++++++++++++++++++-------------------
 tools/libs/light/libxl_types.idl  |   2 +-
 tools/libs/util/libxlu_pci.c      |  36 +--
 tools/xl/xl_parse.c               |  28 +-
 tools/xl/xl_pci.c                 |  68 ++---
 tools/xl/xl_sxp.c                 |  12 +-
 10 files changed, 409 insertions(+), 405 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 1ea5b4f446..fbe4c81ba5 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -445,6 +445,13 @@
 #define LIBXL_HAVE_DISK_SAFE_REMOVE 1
 
 /*
+ * LIBXL_HAVE_CONFIG_PCIS indicates that the 'pcidevs' and 'num_pcidevs'
+ * fields in libxl_domain_config have been renamed to 'pcis' and 'num_pcis'
+ * respectively.
+ */
+#define LIBXL_HAVE_CONFIG_PCIS 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2300,15 +2307,15 @@ int libxl_device_pvcallsif_destroy(libxl_ctx *ctx, uint32_t domid,
 
 /* PCI Passthrough */
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
                          LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
                             LIBXL_EXTERNAL_CALLERS_ONLY;
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
                              LIBXL_EXTERNAL_CALLERS_ONLY;
 
@@ -2352,8 +2359,8 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 
 /* CPUID handling */
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519..1f5052c520 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -1100,7 +1100,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
         goto error_out;
     }
 
-    bool need_pt = d_config->num_pcidevs || d_config->num_dtdevs;
+    bool need_pt = d_config->num_pcis || d_config->num_dtdevs;
     if (c_info->passthrough == LIBXL_PASSTHROUGH_DEFAULT) {
         c_info->passthrough = need_pt
             ? LIBXL_PASSTHROUGH_ENABLED : LIBXL_PASSTHROUGH_DISABLED;
@@ -1141,7 +1141,7 @@ int libxl__domain_config_setdefault(libxl__gc *gc,
      * assignment when PoD is enabled.
      */
     if (d_config->c_info.type != LIBXL_DOMAIN_TYPE_PV &&
-        d_config->num_pcidevs && pod_enabled) {
+        d_config->num_pcis && pod_enabled) {
         ret = ERROR_INVAL;
         LOGD(ERROR, domid,
              "PCI device assignment for HVM guest failed due to PoD enabled");
@@ -1817,7 +1817,7 @@ const libxl__device_type *device_type_tbl[] = {
     &libxl__vtpm_devtype,
     &libxl__usbctrl_devtype,
     &libxl__usbdev_devtype,
-    &libxl__pcidev_devtype,
+    &libxl__pci_devtype,
     &libxl__dtdev_devtype,
     &libxl__vdispl_devtype,
     &libxl__vsnd_devtype,
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index d1ff35dda3..f147a733c8 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -442,7 +442,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
 
     /* Might not expose rdm. */
     if (strategy == LIBXL_RDM_RESERVE_STRATEGY_IGNORE &&
-        !d_config->num_pcidevs)
+        !d_config->num_pcis)
         return 0;
 
     /* Query all RDM entries in this platform */
@@ -469,13 +469,13 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     }
 
     /* Query RDM entries per-device */
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcidevs[i].domain;
-        bus = d_config->pcidevs[i].bus;
-        devfn = PCI_DEVFN(d_config->pcidevs[i].dev,
-                          d_config->pcidevs[i].func);
+        seg = d_config->pcis[i].domain;
+        bus = d_config->pcis[i].bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].dev,
+                          d_config->pcis[i].func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
@@ -488,7 +488,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
         assert(xrdm);
 
         rc = libxl__device_pci_setdefault(gc, DOMID_INVALID,
-                                          &d_config->pcidevs[i], false);
+                                          &d_config->pcis[i], false);
         if (rc)
             goto out;
 
@@ -516,7 +516,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                      * global policy in this case.
                      */
                     d_config->rdms[j].policy
-                        = d_config->pcidevs[i].rdm_policy;
+                        = d_config->pcis[i].rdm_policy;
                     new = false;
                     break;
                 }
@@ -526,7 +526,7 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
                 add_rdm_entry(gc, d_config,
                               pfn_to_paddr(xrdm[n].start_pfn),
                               pfn_to_paddr(xrdm[n].nr_pages),
-                              d_config->pcidevs[i].rdm_policy);
+                              d_config->pcis[i].rdm_policy);
         }
     }
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index e26cda9b50..3e70ff639b 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -1709,7 +1709,7 @@ _hidden int libxl__pci_topology_init(libxl__gc *gc,
 /* from libxl_pci */
 
 _hidden void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                                   libxl_device_pci *pcidev, bool starting,
+                                   libxl_device_pci *pci, bool starting,
                                    libxl__ao_device *aodev);
 _hidden void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                            libxl__multidev *);
@@ -3945,30 +3945,27 @@ struct libxl__device_type {
     device_set_xenstore_config_fn_t set_xenstore_config;
 };
 
-#define DEFINE_DEVICE_TYPE_STRUCT_X(name, sname, kind, ...)                    \
-    const libxl__device_type libxl__ ## name ## _devtype = {                   \
-        .type          = LIBXL__DEVICE_KIND_ ## kind,                       \
-        .ptr_offset    = offsetof(libxl_domain_config, name ## s),             \
-        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),     \
-        .dev_elem_size = sizeof(libxl_device_ ## sname),                       \
-        .add           = libxl__add_ ## name ## s,                             \
-        .set_default   = (device_set_default_fn_t)                             \
-                         libxl__device_ ## sname ## _setdefault,               \
-        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name,   \
-        .init          = (device_init_fn_t)libxl_device_ ## sname ## _init,    \
-        .copy          = (device_copy_fn_t)libxl_device_ ## sname ## _copy,    \
-        .dispose       = (device_dispose_fn_t)                                 \
-                         libxl_device_ ## sname ## _dispose,                   \
-        .compare       = (device_compare_fn_t)                                 \
-                         libxl_device_ ## sname ## _compare,                   \
-        .update_devid  = (device_update_devid_fn_t)                            \
-                         libxl__device_ ## sname ## _update_devid,             \
-        __VA_ARGS__                                                            \
+#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                           \
+    const libxl__device_type libxl__ ## name ## _devtype = {                 \
+        .type          = LIBXL__DEVICE_KIND_ ## kind,                        \
+        .ptr_offset    = offsetof(libxl_domain_config, name ## s),           \
+        .num_offset    = offsetof(libxl_domain_config, num_ ## name ## s),   \
+        .dev_elem_size = sizeof(libxl_device_ ## name),                      \
+        .add           = libxl__add_ ## name ## s,                           \
+        .set_default   = (device_set_default_fn_t)                           \
+                         libxl__device_ ## name ## _setdefault,              \
+        .to_device     = (device_to_device_fn_t)libxl__device_from_ ## name, \
+        .init          = (device_init_fn_t)libxl_device_ ## name ## _init,   \
+        .copy          = (device_copy_fn_t)libxl_device_ ## name ## _copy,   \
+        .dispose       = (device_dispose_fn_t)                               \
+                         libxl_device_ ## name ## _dispose,                  \
+        .compare       = (device_compare_fn_t)                               \
+                         libxl_device_ ## name ## _compare,                  \
+        .update_devid  = (device_update_devid_fn_t)                          \
+                         libxl__device_ ## name ## _update_devid,            \
+        __VA_ARGS__                                                          \
     }
 
-#define DEFINE_DEVICE_TYPE_STRUCT(name, kind, ...)                             \
-    DEFINE_DEVICE_TYPE_STRUCT_X(name, name, kind, __VA_ARGS__)
-
 static inline void **libxl__device_type_get_ptr(
     const libxl__device_type *dt, const libxl_domain_config *d_config)
 {
@@ -3995,7 +3992,7 @@ extern const libxl__device_type libxl__nic_devtype;
 extern const libxl__device_type libxl__vtpm_devtype;
 extern const libxl__device_type libxl__usbctrl_devtype;
 extern const libxl__device_type libxl__usbdev_devtype;
-extern const libxl__device_type libxl__pcidev_devtype;
+extern const libxl__device_type libxl__pci_devtype;
 extern const libxl__device_type libxl__vdispl_devtype;
 extern const libxl__device_type libxl__p9_devtype;
 extern const libxl__device_type libxl__pvcallsif_devtype;
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index bc5843b137..2ff1c64a31 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,51 +25,51 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pcidev_encode_bdf(libxl_device_pci *pcidev)
+static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pcidev->domain << 16;
-    value |= (pcidev->bus & 0xff) << 8;
-    value |= (pcidev->dev & 0x1f) << 3;
-    value |= (pcidev->func & 0x7);
+    value = pci->domain << 16;
+    value |= (pci->bus & 0xff) << 8;
+    value |= (pci->dev & 0x1f) << 3;
+    value |= (pci->func & 0x7);
 
     return value;
 }
 
-static void pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                            unsigned int bus, unsigned int dev,
+                            unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
 }
 
 static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             flexarray_t *back,
                                             int num,
-                                            const libxl_device_pci *pcidev)
+                                            const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func));
-    if (pcidev->vdevfn)
-        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pcidev->vdevfn));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    if (pci->vdevfn)
+        flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
               GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pcidev->msitranslate, pcidev->power_mgmt,
-                             pcidev->permissive));
+                             pci->msitranslate, pci->power_mgmt,
+                             pci->permissive));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
-static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
-                                      const libxl_device_pci *pcidev,
-                                      libxl__device *device)
+static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
+                                   const libxl_device_pci *pci,
+                                   libxl__device *device)
 {
     device->backend_devid = 0;
     device->backend_domid = 0;
@@ -80,7 +80,7 @@ static void libxl__device_from_pcidev(libxl__gc *gc, uint32_t domid,
 }
 
 static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pcidev,
+                                     const libxl_device_pci *pci,
                                      int num)
 {
     flexarray_t *front = NULL;
@@ -94,15 +94,15 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
     LOGD(DEBUG, domid, "Creating pci backend");
 
     /* add pci device */
-    libxl__device_from_pcidev(gc, domid, pcidev, &device);
+    libxl__device_from_pci(gc, domid, pci, &device);
 
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
     flexarray_append_pair(back, "online", "1");
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pcidev++)
-        libxl_create_pci_backend_device(gc, back, i, pcidev);
+    for (i = 0; i < num; i++, pci++)
+        libxl_create_pci_backend_device(gc, back, i, pci);
 
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
@@ -116,7 +116,7 @@ static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                           uint32_t domid,
-                                          const libxl_device_pci *pcidev,
+                                          const libxl_device_pci *pci,
                                           bool starting)
 {
     flexarray_t *back;
@@ -136,7 +136,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
     if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pcidev, 1);
+        return libxl__create_pci_backend(gc, domid, pci, 1);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -151,7 +151,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
 
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
     num = atoi(num_devs);
-    libxl_create_pci_backend_device(gc, back, num, pcidev);
+    libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
     if (!starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
@@ -170,8 +170,8 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
-        device_add_domain_config(gc, &d_config, &libxl__pcidev_devtype,
-                                 pcidev);
+        device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
+                                 pci);
 
         rc = libxl__dm_check_start(gc, &d_config, domid);
         if (rc) goto out;
@@ -201,7 +201,7 @@ out:
     return rc;
 }
 
-static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pcidev)
+static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *be_path, *num_devs_path, *num_devs, *xsdev, *tmp, *tmppath;
@@ -231,8 +231,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pcidev->domain && bus == pcidev->bus &&
-            pcidev->dev == dev && pcidev->func == func) {
+        if (domain == pci->domain && bus == pci->bus &&
+            pci->dev == dev && pci->func == func) {
             break;
         }
     }
@@ -350,7 +350,7 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
                     *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
                     if (*list == NULL)
                         return ERROR_NOMEM;
-                    pcidev_struct_fill(*list + *num, dom, bus, dev, func, 0);
+                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
                     (*num)++;
                 }
             }
@@ -361,8 +361,8 @@ static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int
     return 0;
 }
 
-static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
-                       int dom, int bus, int dev, int func)
+static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
+                           int dom, int bus, int dev, int func)
 {
     int i;
 
@@ -383,7 +383,7 @@ static int is_pcidev_in_array(libxl_device_pci *assigned, int num_assigned,
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
 static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pcidev)
+                           libxl_device_pci *pci)
 {
     int rc, fd;
     char *buf;
@@ -394,8 +394,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pcidev->domain, pcidev->bus,
-                    pcidev->dev, pcidev->func);
+    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
+                    pci->dev, pci->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -411,7 +411,7 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcidevs = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new, *assigned;
     struct dirent *de;
     DIR *dir;
     int r, num_assigned;
@@ -436,40 +436,40 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pcidev_in_array(assigned, num_assigned, dom, bus, dev, func))
+        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
             continue;
 
-        new = realloc(pcidevs, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcidevs = new;
-        new = pcidevs + *num;
+        pcis = new;
+        new = pcis + *num;
 
         memset(new, 0, sizeof(*new));
-        pcidev_struct_fill(new, dom, bus, dev, func, 0);
+        pci_struct_fill(new, dom, bus, dev, func, 0);
         (*num)++;
     }
 
     closedir(dir);
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func);
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -483,7 +483,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pcidev) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -495,11 +495,11 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pcidev,
     return 0;
 }
 
-static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -507,7 +507,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -515,18 +515,18 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_vendor;
 }
 
-static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
+static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus, pci->dev, pci->func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -534,7 +534,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -542,25 +542,25 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pcidev)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         return 0xffff;
     }
 
     return pci_device_device;
 }
 
-static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
+static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+                     pci->domain, pci->bus, pci->dev, pci->func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -569,7 +569,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pcidev,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         ret = ERROR_FAIL;
     }
 
@@ -588,16 +588,16 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
     uint16_t pt_vendor, pt_device;
     unsigned long class;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
-        pt_vendor = sysfs_dev_get_vendor(gc, pcidev);
-        pt_device = sysfs_dev_get_device(gc, pcidev);
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
+        libxl_device_pci *pci = &d_config->pcis[i];
+        pt_vendor = sysfs_dev_get_vendor(gc, pci);
+        pt_device = sysfs_dev_get_device(gc, pci);
 
         if (pt_vendor == 0xffff || pt_device == 0xffff ||
             pt_vendor != 0x8086)
             continue;
 
-        if (sysfs_dev_get_class(gc, pcidev, &class))
+        if (sysfs_dev_get_class(gc, pci, &class))
             continue;
         if (class == 0x030000)
             return true;
@@ -621,8 +621,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pcidev's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
+/* Scan through /sys/.../pciback/slots looking for pci's BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
 {
     FILE *f;
     int rc = 0;
@@ -635,11 +635,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pcidev)
         return ERROR_FAIL;
     }
 
-    while(fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if(dom == pcidev->domain
-           && bus == pcidev->bus
-           && dev == pcidev->dev
-           && func == pcidev->func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
+        if (dom == pci->domain
+            && bus == pci->bus
+            && dev == pci->dev
+            && func == pci->func) {
             rc = 1;
             goto out;
         }
@@ -649,7 +649,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
 {
     char * spath;
     int rc;
@@ -665,8 +665,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pcidev->domain, pcidev->bus,
-                      pcidev->dev, pcidev->func);
+                      pci->domain, pci->bus,
+                      pci->dev, pci->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -677,40 +677,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pcidev)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
 {
     int rc;
 
-    if ( (rc=pciback_dev_has_slot(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcidev) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pcidev, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pcidev) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -721,49 +721,49 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pcidev)
 #define PCIBACK_INFO_PATH "/libxl/pciback"
 
 static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             char *driver_path)
 {
     char *path;
 
     path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pcidev->domain,
-                     pcidev->bus,
-                     pcidev->dev,
-                     pcidev->func);
+                     pci->domain,
+                     pci->bus,
+                     pci->dev,
+                     pci->func);
     if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
         LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
     }
 }
 
 static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     return libxl__xs_read(gc, XBT_NULL,
                           GCSPRINTF(
                            PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pcidev->domain,
-                           pcidev->bus,
-                           pcidev->dev,
-                           pcidev->func));
+                           pci->domain,
+                           pci->bus,
+                           pci->dev,
+                           pci->func));
 }
 
 static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pcidev)
+                                              libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL,
           GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pcidev->domain,
-                    pcidev->bus,
-                    pcidev->dev,
-                    pcidev->func) );
+                    pci->domain,
+                    pci->bus,
+                    pci->dev,
+                    pci->func) );
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -773,10 +773,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pcidev->domain;
-    bus = pcidev->bus;
-    dev = pcidev->dev;
-    func = pcidev->func;
+    dom = pci->domain;
+    bus = pci->bus;
+    dev = pci->dev;
+    func = pci->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -786,7 +786,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pcidev);
+    rc = pciback_dev_is_assigned(gc, pci);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -796,7 +796,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pcidev, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -805,9 +805,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pcidev, driver_path);
+            pci_assignable_driver_path_write(gc, pci, driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pcidev)) != NULL ) {
+                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -815,10 +815,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pcidev);
+        pci_assignable_driver_path_remove(gc, pci);
     }
 
-    if ( pciback_dev_assign(gc, pcidev) ) {
+    if ( pciback_dev_assign(gc, pci) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -829,7 +829,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -840,7 +840,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pcidev,
+                                               libxl_device_pci *pci,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -848,24 +848,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pcidev_encode_bdf(pcidev));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcidev->domain, pcidev->bus,
-            pcidev->dev, pcidev->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
+            pci->dev, pci->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc=pciback_dev_is_assigned(gc, pcidev)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pcidev);
+        pciback_dev_unassign(gc, pci);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pcidev);
+    driver_path = pci_assignable_driver_path_read(gc, pci);
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -873,12 +873,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pcidev) < 0 ) {
+                                 pci) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pcidev);
+            pci_assignable_driver_path_remove(gc, pci);
         }
     } else {
         if ( rebind ) {
@@ -890,26 +890,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pcidev, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
 
     GC_FREE;
     return rc;
@@ -920,7 +920,7 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pcidev,
  * driver. It also initialises a bit-mask of which function numbers are present
  * on that device.
 */
-static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsigned int *func_mask)
+static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigned int *func_mask)
 {
     struct dirent *de;
     DIR *dir;
@@ -940,11 +940,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pcidev, unsi
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pcidev->domain != dom )
+        if ( pci->domain != dom )
             continue;
-        if ( pcidev->bus != bus )
+        if ( pci->bus != bus )
             continue;
-        if ( pcidev->dev != dev )
+        if ( pci->dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -979,7 +979,7 @@ static int pci_ins_check(libxl__gc *gc, uint32_t domid, const char *state, void
 }
 
 static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
-                                 libxl_device_pci *pcidev)
+                                 libxl_device_pci *pci)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     int rc = 0;
@@ -991,15 +991,15 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    if (pcidev->vdevfn) {
+    if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pcidev->domain, pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->vdevfn, pcidev->msitranslate,
-                         pcidev->power_mgmt);
+                         pci->domain, pci->bus, pci->dev,
+                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pcidev->domain,  pcidev->bus, pcidev->dev,
-                         pcidev->func, pcidev->msitranslate, pcidev->power_mgmt);
+                         pci->domain,  pci->bus, pci->dev,
+                         pci->func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1010,7 +1010,7 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     if ( rc < 0 )
         LOGD(ERROR, domid, "qemu refused to add device: %s", vdevfn);
-    else if ( sscanf(vdevfn, "0x%x", &pcidev->vdevfn) != 1 ) {
+    else if ( sscanf(vdevfn, "0x%x", &pci->vdevfn) != 1 ) {
         LOGD(ERROR, domid, "wrong format for the vdevfn: '%s'", vdevfn);
         rc = -1;
     }
@@ -1054,7 +1054,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     int pci_domid;
 } pci_add_state;
 
@@ -1072,7 +1072,7 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pcidev,
+                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1082,7 +1082,7 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pcidev = pcidev;
+    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1128,7 +1128,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1136,7 +1136,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_add_xenstore(gc, domid, pcidev);
+    rc = qemu_pci_add_xenstore(gc, domid, pci);
 out:
     pci_add_dm_done(egc, pas, rc); /* must be last */
 }
@@ -1149,7 +1149,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1160,14 +1160,14 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
-    if (pcidev->vdevfn) {
+                           "%04x:%02x:%02x.%01x", pci->domain,
+                           pci->bus, pci->dev, pci->func);
+    if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
-                               PCI_SLOT(pcidev->vdevfn),
-                               PCI_FUNC(pcidev->vdevfn));
+                               PCI_SLOT(pci->vdevfn),
+                               PCI_FUNC(pci->vdevfn));
     }
     /*
      * Version of QEMU prior to the XSA-131 fix did not support
@@ -1179,7 +1179,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
      * set the permissive flag if it is true. Users of older QEMU
      * have no reason to set the flag so this is ok.
      */
-    if (pcidev->permissive)
+    if (pci->permissive)
         libxl__qmp_param_add_bool(gc, &args, "permissive", true);
 
     qmp->ao = pas->aodev->ao;
@@ -1230,7 +1230,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1251,7 +1251,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1283,7 +1283,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
              }
              dev_func = libxl__json_object_get_integer(o);
 
-             pcidev->vdevfn = PCI_DEVFN(dev_slot, dev_func);
+             pci->vdevfn = PCI_DEVFN(dev_slot, dev_func);
 
              rc = 0;
              goto out;
@@ -1331,7 +1331,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pcidev = pas->pcidev;
+    libxl_device_pci *pci = pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1342,8 +1342,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                           pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1383,8 +1383,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                                pcidev->bus, pcidev->dev, pcidev->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                                pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1411,9 +1411,9 @@ static void pci_add_dm_done(libxl__egc *egc,
     fclose(f);
 
     /* Don't restrict writes to the PCI config space from this VM */
-    if (pcidev->permissive) {
+    if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pcidev) < 0 ) {
+                             pci) < 0 ) {
             LOGD(ERROR, domainid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1422,14 +1422,14 @@ static void pci_add_dm_done(libxl__egc *egc,
 
 out_no_irq:
     if (!isstubdom) {
-        if (pcidev->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
+        if (pci->rdm_policy == LIBXL_RDM_RESERVE_POLICY_STRICT) {
             flag &= ~XEN_DOMCTL_DEV_RDM_RELAXED;
-        } else if (pcidev->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
+        } else if (pci->rdm_policy != LIBXL_RDM_RESERVE_POLICY_RELAXED) {
             LOGED(ERROR, domainid, "unknown rdm check flag.");
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domainid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1438,7 +1438,7 @@ out_no_irq:
     }
 
     if (!starting && !libxl_get_stubdom_id(CTX, domid))
-        rc = libxl__device_pci_add_xenstore(gc, domid, pcidev, starting);
+        rc = libxl__device_pci_add_xenstore(gc, domid, pci, starting);
     else
         rc = 0;
 out:
@@ -1493,7 +1493,7 @@ int libxl__device_pci_setdefault(libxl__gc *gc, uint32_t domid,
 }
 
 int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
-                         libxl_device_pci *pcidev,
+                         libxl_device_pci *pci,
                          const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -1504,24 +1504,24 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_ADD;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_add(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_add(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
-static int libxl_pcidev_assignable(libxl_ctx *ctx, libxl_device_pci *pcidev)
+static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
     for (i = 0; i < num; i++) {
-        if (pcidevs[i].domain == pcidev->domain &&
-            pcidevs[i].bus == pcidev->bus &&
-            pcidevs[i].dev == pcidev->dev &&
-            pcidevs[i].func == pcidev->func)
+        if (pcis[i].domain == pci->domain &&
+            pcis[i].bus == pci->bus &&
+            pcis[i].dev == pci->dev &&
+            pcis[i].func == pci->func)
             break;
     }
-    free(pcidevs);
+    free(pcis);
     return i != num;
 }
 
@@ -1535,7 +1535,7 @@ static void device_pci_add_done(libxl__egc *egc,
     pci_add_state *, int rc);
 
 void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
-                           libxl_device_pci *pcidev, bool starting,
+                           libxl_device_pci *pci, bool starting,
                            libxl__ao_device *aodev)
 {
     STATE_AO_GC(aodev->ao);
@@ -1545,9 +1545,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pcidev to be used by callbacks */
-    aodev->device_config = pcidev;
-    aodev->device_type = &libxl__pcidev_devtype;
+    /* Store *pci to be used by callbacks */
+    aodev->device_config = pci;
+    aodev->device_type = &libxl__pci_devtype;
 
     GCNEW(pas);
     pas->aodev = aodev;
@@ -1556,29 +1556,29 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+                 pci->domain, pci->bus, pci->dev, pci->func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
         }
     }
 
-    rc = libxl__device_pci_setdefault(gc, domid, pcidev, !starting);
+    rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pcidev->seize && !pciback_dev_is_assigned(gc, pcidev)) {
-        rc = libxl__device_pci_assignable_add(gc, pcidev, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
+        rc = libxl__device_pci_assignable_add(gc, pci, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pcidev_assignable(ctx, pcidev)) {
+    if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+             pci->domain, pci->bus, pci->dev, pci->func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1589,25 +1589,25 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
              "cannot determine if device is assigned, refusing to continue");
         goto out;
     }
-    if ( is_pcidev_in_array(assigned, num_assigned, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
+                         pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domid, "PCI device already attached to a domain");
         rc = ERROR_FAIL;
         goto out;
     }
 
-    libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pcidev_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
         return;
     }
 
@@ -1664,42 +1664,42 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     /* Convenience aliases */
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) goto out;
 
-    orig_vdev = pcidev->vdevfn & ~7U;
+    orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( !(pcidev->vdevfn >> 3) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( !(pci->vdevfn >> 3) ) {
             LOGD(ERROR, domid, "Must specify a v-slot for multi-function devices");
             rc = ERROR_INVAL;
             goto out;
         }
-        if ( pci_multifunction_check(gc, pcidev, &pfunc_mask) ) {
+        if ( pci_multifunction_check(gc, pci, &pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= pfunc_mask;
+        pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pcidev->func);
+        pfunc_mask = (1 << pci->func);
     }
 
-    for(rc = 0, i = 7; i >= 0; --i) {
+    for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
                 /* if not passing through multiple devices in a block make
                  * sure that virtual function number 0 is always used otherwise
                  * guest won't see the device
                  */
-                pcidev->vdevfn = orig_vdev;
+                pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pcidev, pas); /* must be last */
+            do_pci_add(egc, domid, pci, pas); /* must be last */
             return;
         }
     }
@@ -1715,13 +1715,13 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pcidev = aodev->device_config;
+    libxl_device_pci *pci = aodev->device_config;
 
     if (rc) {
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func,
+             pci->domain, pci->bus, pci->dev, pci->func,
              rc);
     }
     aodev->rc = rc;
@@ -1733,16 +1733,16 @@ typedef struct {
     libxl__ao_device *outer_aodev;
     libxl_domain_config *d_config;
     libxl_domid domid;
-} add_pcidevs_state;
+} add_pcis_state;
 
-static void add_pcidevs_done(libxl__egc *, libxl__multidev *, int rc);
+static void add_pcis_done(libxl__egc *, libxl__multidev *, int rc);
 
-static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
-                               libxl_domain_config *d_config,
-                               libxl__multidev *multidev)
+static void libxl__add_pcis(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
+                            libxl_domain_config *d_config,
+                            libxl__multidev *multidev)
 {
     AO_GC;
-    add_pcidevs_state *apds;
+    add_pcis_state *apds;
     int i;
 
     /* We need to start a new multidev in order to be able to execute
@@ -1752,23 +1752,23 @@ static void libxl__add_pcidevs(libxl__egc *egc, libxl__ao *ao, uint32_t domid,
     apds->outer_aodev = libxl__multidev_prepare(multidev);
     apds->d_config = d_config;
     apds->domid = domid;
-    apds->multidev.callback = add_pcidevs_done;
+    apds->multidev.callback = add_pcis_done;
     libxl__multidev_begin(ao, &apds->multidev);
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         libxl__ao_device *aodev = libxl__multidev_prepare(&apds->multidev);
-        libxl__device_pci_add(egc, domid, &d_config->pcidevs[i],
+        libxl__device_pci_add(egc, domid, &d_config->pcis[i],
                               true, aodev);
     }
 
     libxl__multidev_prepared(egc, &apds->multidev, 0);
 }
 
-static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
+static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
                              int rc)
 {
     EGC_GC;
-    add_pcidevs_state *apds = CONTAINER_OF(multidev, *apds, multidev);
+    add_pcis_state *apds = CONTAINER_OF(multidev, *apds, multidev);
 
     /* Convenience aliases */
     libxl_domain_config *d_config = apds->d_config;
@@ -1777,9 +1777,9 @@ static void add_pcidevs_done(libxl__egc *egc, libxl__multidev *multidev,
 
     if (rc) goto out;
 
-    if (d_config->num_pcidevs > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcidevs,
-            d_config->num_pcidevs);
+    if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
+        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
+                                       d_config->num_pcis);
         if (rc < 0) {
             LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
             goto out;
@@ -1792,7 +1792,7 @@ out:
 }
 
 static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
-                                    libxl_device_pci *pcidev, int force)
+                                    libxl_device_pci *pci, int force)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *state;
@@ -1804,12 +1804,12 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pcidev->domain,
-                     pcidev->bus, pcidev->dev, pcidev->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
+                     pci->bus, pci->dev, pci->func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
-    if ( !force && (pcidev->vdevfn & 0x7) == 0 ) {
+    if ( !force && (pci->vdevfn & 0x7) == 0 ) {
         libxl__qemu_traditional_cmd(gc, domid, "pci-rem");
         if (libxl__wait_for_device_model_deprecated(gc, domid, "pci-removed",
                                          NULL, NULL, NULL) < 0) {
@@ -1830,7 +1830,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pcidev;
+    libxl_device_pci *pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1844,7 +1844,7 @@ typedef struct pci_remove_state {
 } pci_remove_state;
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
-    uint32_t domid, libxl_device_pci *pcidev, bool force,
+    uint32_t domid, libxl_device_pci *pci, bool force,
     libxl__ao_device *aodev);
 static void device_pci_remove_common_next(libxl__egc *egc,
     pci_remove_state *prs, int rc);
@@ -1869,7 +1869,7 @@ static void pci_remove_done(libxl__egc *egc,
     pci_remove_state *prs, int rc);
 
 static void do_pci_remove(libxl__egc *egc, uint32_t domid,
-                          libxl_device_pci *pcidev, int force,
+                          libxl_device_pci *pci, int force,
                           pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
@@ -1887,8 +1887,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     libxl__ptr_add(gc, assigned);
 
     rc = ERROR_INVAL;
-    if ( !is_pcidev_in_array(assigned, num, pcidev->domain,
-                      pcidev->bus, pcidev->dev, pcidev->func) ) {
+    if ( !is_pci_in_array(assigned, num, pci->domain,
+                          pci->bus, pci->dev, pci->func) ) {
         LOGD(ERROR, domainid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1917,8 +1917,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pcidev->domain,
-                                     pcidev->bus, pcidev->dev, pcidev->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
+                                     pci->bus, pci->dev, pci->func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1953,8 +1953,8 @@ static void do_pci_remove(libxl__egc *egc, uint32_t domid,
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pcidev->domain,
-                               pcidev->bus, pcidev->dev, pcidev->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
+                               pci->bus, pci->dev, pci->func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domainid, "Couldn't open %s", sysfs_path);
@@ -1988,7 +1988,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1996,7 +1996,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
     if (rc)
         goto out;
 
-    rc = qemu_pci_remove_xenstore(gc, domid, pcidev, prs->force);
+    rc = qemu_pci_remove_xenstore(gc, domid, pci, prs->force);
 
 out:
     pci_remove_detatched(egc, prs, rc);
@@ -2010,7 +2010,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2018,7 +2018,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pcidev->bus, pcidev->dev, pcidev->func);
+                           pci->bus, pci->dev, pci->func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2080,14 +2080,14 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     if (rc) goto out;
 
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pcidev->bus, pcidev->dev, pcidev->func);
+                         pci->bus, pci->dev, pci->func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2135,10 +2135,10 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pcidev->bus, pcidev->dev, pcidev->func);
+         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2156,7 +2156,7 @@ static void pci_remove_detatched(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl_domid domid = prs->domid;
 
     /* Cleaning QMP states ASAP */
@@ -2170,30 +2170,30 @@ static void pci_remove_detatched(libxl__egc *egc,
     isstubdom = libxl_is_stubdom(CTX, domid, &domainid);
 
     /* don't do multiple resets while some functions are still passed through */
-    if ( (pcidev->vdevfn & 0x7) == 0 ) {
-        libxl__device_pci_reset(gc, pcidev->domain, pcidev->bus, pcidev->dev, pcidev->func);
+    if ((pci->vdevfn & 0x7) == 0) {
+        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pcidev_encode_bdf(pcidev));
+        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domainid, "xc_deassign_device failed");
     }
 
     stubdomid = libxl_get_stubdom_id(CTX, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pcidev_s;
+        libxl_device_pci *pci_s;
         libxl__ao_device *const stubdom_aodev = &prs->stubdom_aodev;
 
-        GCNEW(pcidev_s);
-        libxl_device_pci_init(pcidev_s);
-        libxl_device_pci_copy(CTX, pcidev_s, pcidev);
+        GCNEW(pci_s);
+        libxl_device_pci_init(pci_s);
+        libxl_device_pci_copy(CTX, pci_s, pci);
 
         libxl__prepare_ao_device(ao, stubdom_aodev);
         stubdom_aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
         stubdom_aodev->callback = pci_remove_stubdom_done;
         stubdom_aodev->update_json = prs->aodev->update_json;
-        libxl__device_pci_remove_common(egc, stubdomid, pcidev_s,
+        libxl__device_pci_remove_common(egc, stubdomid, pci_s,
                                         prs->force, stubdom_aodev);
         return;
     }
@@ -2219,14 +2219,14 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pcidev);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
 
 static void libxl__device_pci_remove_common(libxl__egc *egc,
                                             uint32_t domid,
-                                            libxl_device_pci *pcidev,
+                                            libxl_device_pci *pci,
                                             bool force,
                                             libxl__ao_device *aodev)
 {
@@ -2237,7 +2237,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pcidev = pcidev;
+    prs->pci = pci;
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2247,16 +2247,16 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
-    prs->orig_vdev = pcidev->vdevfn & ~7U;
+    prs->orig_vdev = pci->vdevfn & ~7U;
 
-    if ( pcidev->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
-        if ( pci_multifunction_check(gc, pcidev, &prs->pfunc_mask) ) {
+    if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
+        if ( pci_multifunction_check(gc, pci, &prs->pfunc_mask) ) {
             rc = ERROR_FAIL;
             goto out;
         }
-        pcidev->vfunc_mask &= prs->pfunc_mask;
-    }else{
-        prs->pfunc_mask = (1 << pcidev->func);
+        pci->vfunc_mask &= prs->pfunc_mask;
+    } else {
+        prs->pfunc_mask = (1 << pci->func);
     }
 
     rc = 0;
@@ -2273,7 +2273,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pcidev = prs->pcidev;
+    libxl_device_pci *const pci = prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2284,13 +2284,13 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         const int i = prs->next_func;
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
-            if ( pcidev->vfunc_mask == pfunc_mask ) {
-                pcidev->func = i;
-                pcidev->vdevfn = orig_vdev | i;
-            }else{
-                pcidev->vdevfn = orig_vdev;
+            if ( pci->vfunc_mask == pfunc_mask ) {
+                pci->func = i;
+                pci->vdevfn = orig_vdev | i;
+            } else {
+                pci->vdevfn = orig_vdev;
             }
-            do_pci_remove(egc, domid, pcidev, prs->force, prs);
+            do_pci_remove(egc, domid, pci, prs->force, prs);
             return;
         }
     }
@@ -2306,7 +2306,7 @@ out:
 }
 
 int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
-                            libxl_device_pci *pcidev,
+                            libxl_device_pci *pci,
                             const libxl_asyncop_how *ao_how)
 
 {
@@ -2318,12 +2318,12 @@ int libxl_device_pci_remove(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, false, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, false, aodev);
     return AO_INPROGRESS;
 }
 
 int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
-                             libxl_device_pci *pcidev,
+                             libxl_device_pci *pci,
                              const libxl_asyncop_how *ao_how)
 {
     AO_CREATE(ctx, domid, ao_how);
@@ -2334,7 +2334,7 @@ int libxl_device_pci_destroy(libxl_ctx *ctx, uint32_t domid,
     aodev->action = LIBXL__DEVICE_ACTION_REMOVE;
     aodev->callback = device_addrm_aocomplete;
     aodev->update_json = true;
-    libxl__device_pci_remove_common(egc, domid, pcidev, true, aodev);
+    libxl__device_pci_remove_common(egc, domid, pci, true, aodev);
     return AO_INPROGRESS;
 }
 
@@ -2353,7 +2353,7 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
     if (s)
         vdevfn = strtol(s, (char **) NULL, 16);
 
-    pcidev_struct_fill(pci, domain, bus, dev, func, vdevfn);
+    pci_struct_fill(pci, domain, bus, dev, func, vdevfn);
 
     s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/opts-%d", be_path, nr));
     if (s) {
@@ -2398,7 +2398,7 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     GC_INIT(ctx);
     char *be_path;
     unsigned int n, i;
-    libxl_device_pci *pcidevs = NULL;
+    libxl_device_pci *pcis = NULL;
 
     *num = 0;
 
@@ -2407,28 +2407,28 @@ libxl_device_pci *libxl_device_pci_list(libxl_ctx *ctx, uint32_t domid, int *num
     if (libxl__device_pci_get_num(gc, be_path, &n))
         goto out;
 
-    pcidevs = calloc(n, sizeof(libxl_device_pci));
+    pcis = calloc(n, sizeof(libxl_device_pci));
 
     for (i = 0; i < n; i++)
-        libxl__device_pci_from_xs_be(gc, be_path, i, pcidevs + i);
+        libxl__device_pci_from_xs_be(gc, be_path, i, pcis + i);
 
     *num = n;
 out:
     GC_FREE;
-    return pcidevs;
+    return pcis;
 }
 
 void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
                                    libxl__multidev *multidev)
 {
     STATE_AO_GC(multidev->ao);
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(CTX, domid, &num);
-    if ( pcidevs == NULL )
+    pcis = libxl_device_pci_list(CTX, domid, &num);
+    if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcidevs);
+    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2436,7 +2436,7 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
          * devices by the time we even get here!
          */
         libxl__ao_device *aodev = libxl__multidev_prepare(multidev);
-        libxl__device_pci_remove_common(egc, domid, pcidevs + i, true,
+        libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
 }
@@ -2449,13 +2449,13 @@ int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
     if (!libxl_defbool_val(d_config->b_info.u.hvm.gfx_passthru))
         return 0;
 
-    for (i = 0 ; i < d_config->num_pcidevs ; i++) {
+    for (i = 0 ; i < d_config->num_pcis ; i++) {
         uint64_t vga_iomem_start = 0xa0000 >> XC_PAGE_SHIFT;
         uint32_t stubdom_domid;
-        libxl_device_pci *pcidev = &d_config->pcidevs[i];
+        libxl_device_pci *pci = &d_config->pcis[i];
         unsigned long pci_device_class;
 
-        if (sysfs_dev_get_class(gc, pcidev, &pci_device_class))
+        if (sysfs_dev_get_class(gc, pci, &pci_device_class))
             continue;
         if (pci_device_class != 0x030000) /* VGA class */
             continue;
@@ -2494,7 +2494,7 @@ static int libxl_device_pci_compare(const libxl_device_pci *d1,
 
 #define libxl__device_pci_update_devid NULL
 
-DEFINE_DEVICE_TYPE_STRUCT_X(pcidev, pci, PCI,
+DEFINE_DEVICE_TYPE_STRUCT(pci, PCI,
     .get_num = libxl__device_pci_get_num,
     .from_xenstore = libxl__device_pci_from_xs_be,
 );
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f399..20f8dd7cfa 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -940,7 +940,7 @@ libxl_domain_config = Struct("domain_config", [
 
     ("disks", Array(libxl_device_disk, "num_disks")),
     ("nics", Array(libxl_device_nic, "num_nics")),
-    ("pcidevs", Array(libxl_device_pci, "num_pcidevs")),
+    ("pcis", Array(libxl_device_pci, "num_pcis")),
     ("rdms", Array(libxl_device_rdm, "num_rdms")),
     ("dtdevs", Array(libxl_device_dtdev, "num_dtdevs")),
     ("vfbs", Array(libxl_device_vfb, "num_vfbs")),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 12fc0b3a7f..1d38fffce3 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -23,15 +23,15 @@ static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
     return 0;
 }
 
-static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
-                               unsigned int bus, unsigned int dev,
-                               unsigned int func, unsigned int vdevfn)
+static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
+                           unsigned int bus, unsigned int dev,
+                           unsigned int func, unsigned int vdevfn)
 {
-    pcidev->domain = domain;
-    pcidev->bus = bus;
-    pcidev->dev = dev;
-    pcidev->func = func;
-    pcidev->vdevfn = vdevfn;
+    pci->domain = domain;
+    pci->bus = bus;
+    pci->dev = dev;
+    pci->func = func;
+    pci->vdevfn = vdevfn;
     return 0;
 }
 
@@ -47,7 +47,7 @@ static int pcidev_struct_fill(libxl_device_pci *pcidev, unsigned int domain,
 #define STATE_RDM_STRATEGY      10
 #define STATE_RESERVE_POLICY    11
 #define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str)
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
 {
     unsigned state = STATE_DOMAIN;
     unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
@@ -110,11 +110,11 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 }
                 *ptr = '\0';
                 if ( !strcmp(tok, "*") ) {
-                    pcidev->vfunc_mask = LIBXL_PCI_FUNC_ALL;
+                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
                 }else{
                     if ( hex_convert(tok, &func, 0x7) )
                         goto parse_error;
-                    pcidev->vfunc_mask = (1 << 0);
+                    pci->vfunc_mask = (1 << 0);
                 }
                 tok = ptr + 1;
             }
@@ -141,18 +141,18 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
                 state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
                 *ptr = '\0';
                 if ( !strcmp(optkey, "msitranslate") ) {
-                    pcidev->msitranslate = atoi(tok);
+                    pci->msitranslate = atoi(tok);
                 }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pcidev->power_mgmt = atoi(tok);
+                    pci->power_mgmt = atoi(tok);
                 }else if ( !strcmp(optkey, "permissive") ) {
-                    pcidev->permissive = atoi(tok);
+                    pci->permissive = atoi(tok);
                 }else if ( !strcmp(optkey, "seize") ) {
-                    pcidev->seize = atoi(tok);
+                    pci->seize = atoi(tok);
                 } else if (!strcmp(optkey, "rdm_policy")) {
                     if (!strcmp(tok, "strict")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
                     } else if (!strcmp(tok, "relaxed")) {
-                        pcidev->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
+                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
                     } else {
                         XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
                                           " policy: 'strict' or 'relaxed'.",
@@ -175,7 +175,7 @@ int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str
     assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
 
     /* Just a pretty way to fill in the values */
-    pcidev_struct_fill(pcidev, dom, bus, dev, func, vslot << 3);
+    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
 
     free(buf2);
 
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c..0765780d9f 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1470,24 +1470,24 @@ void parse_config_data(const char *config_source,
     }
 
     if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
-        d_config->num_pcidevs = 0;
-        d_config->pcidevs = NULL;
+        d_config->num_pcis = 0;
+        d_config->pcis = NULL;
         for(i = 0; (buf = xlu_cfg_get_listitem (pcis, i)) != NULL; i++) {
-            libxl_device_pci *pcidev;
-
-            pcidev = ARRAY_EXTEND_INIT_NODEVID(d_config->pcidevs,
-                                               d_config->num_pcidevs,
-                                               libxl_device_pci_init);
-            pcidev->msitranslate = pci_msitranslate;
-            pcidev->power_mgmt = pci_power_mgmt;
-            pcidev->permissive = pci_permissive;
-            pcidev->seize = pci_seize;
+            libxl_device_pci *pci;
+
+            pci = ARRAY_EXTEND_INIT_NODEVID(d_config->pcis,
+                                            d_config->num_pcis,
+                                            libxl_device_pci_init);
+            pci->msitranslate = pci_msitranslate;
+            pci->power_mgmt = pci_power_mgmt;
+            pci->permissive = pci_permissive;
+            pci->seize = pci_seize;
             /*
              * Like other pci option, the per-device policy always follows
              * the global policy by default.
              */
-            pcidev->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pcidev, buf);
+            pci->rdm_policy = b_info->u.hvm.rdm.policy;
+            e = xlu_pci_parse_bdf(config, pci, buf);
             if (e) {
                 fprintf(stderr,
                         "unable to parse PCI BDF `%s' for passthrough\n",
@@ -1495,7 +1495,7 @@ void parse_config_data(const char *config_source,
                 exit(-e);
             }
         }
-        if (d_config->num_pcidevs && c_info->type == LIBXL_DOMAIN_TYPE_PV)
+        if (d_config->num_pcis && c_info->type == LIBXL_DOMAIN_TYPE_PV)
             libxl_defbool_set(&b_info->u.pv.e820_host, true);
     }
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 58345bdae2..34fcf5a4fa 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -24,20 +24,20 @@
 
 static void pcilist(uint32_t domid)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_list(ctx, domid, &num);
-    if (pcidevs == NULL)
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (pcis == NULL)
         return;
     printf("Vdev Device\n");
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
-               (pcidevs[i].vdevfn >> 3) & 0x1f, pcidevs[i].vdevfn & 0x7,
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pcilist(int argc, char **argv)
@@ -57,28 +57,28 @@ int main_pcilist(int argc, char **argv)
 
 static int pcidetach(uint32_t domid, const char *bdf, int force)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
     if (force) {
-        if (libxl_device_pci_destroy(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_destroy(ctx, domid, &pci, 0))
             r = 1;
     } else {
-        if (libxl_device_pci_remove(ctx, domid, &pcidev, 0))
+        if (libxl_device_pci_remove(ctx, domid, &pci, 0))
             r = 1;
     }
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -108,24 +108,24 @@ int main_pcidetach(int argc, char **argv)
 
 static int pciattach(uint32_t domid, const char *bdf, const char *vs)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_add(ctx, domid, &pcidev, 0))
+    if (libxl_device_pci_add(ctx, domid, &pci, 0))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -155,19 +155,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcidevs;
+    libxl_device_pci *pcis;
     int num, i;
 
-    pcidevs = libxl_device_pci_assignable_list(ctx, &num);
+    pcis = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcidevs == NULL )
+    if ( pcis == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcidevs[i].domain, pcidevs[i].bus, pcidevs[i].dev, pcidevs[i].func);
-        libxl_device_pci_dispose(&pcidevs[i]);
+               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcidevs);
+    free(pcis);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -184,24 +184,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
@@ -226,24 +226,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pcidev;
+    libxl_device_pci pci;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pcidev);
+    libxl_device_pci_init(&pci);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcidev, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcidev, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pcidev);
+    libxl_device_pci_dispose(&pci);
     xlu_cfg_destroy(config);
 
     return r;
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index 359a001570..b03e348ffb 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -190,16 +190,16 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t)\n");
     }
 
-    for (i = 0; i < d_config->num_pcidevs; i++) {
+    for (i = 0; i < d_config->num_pcis; i++) {
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcidevs[i].domain, d_config->pcidevs[i].bus,
-               d_config->pcidevs[i].dev, d_config->pcidevs[i].func,
-               d_config->pcidevs[i].vdevfn);
+               d_config->pcis[i].domain, d_config->pcis[i].bus,
+               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
-               d_config->pcidevs[i].msitranslate,
-               d_config->pcidevs[i].power_mgmt);
+               d_config->pcis[i].msitranslate,
+               d_config->pcis[i].power_mgmt);
         fprintf(fh, "\t\t)\n");
         fprintf(fh, "\t)\n");
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11165.29644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqu-0008P1-DB; Fri, 23 Oct 2020 16:23:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11165.29644; Fri, 23 Oct 2020 16:23:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqu-0008Ok-32; Fri, 23 Oct 2020 16:23:40 +0000
Received: by outflank-mailman (input) for mailman id 11165;
 Fri, 23 Oct 2020 16:23:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqs-00081j-SC
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa8e3bea-270a-4ca5-afec-b1e30f05f168;
 Fri, 23 Oct 2020 16:23:27 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqg-0008Jb-0D; Fri, 23 Oct 2020 16:23:26 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqf-000376-Ov; Fri, 23 Oct 2020 16:23:25 +0000
X-Inumbo-ID: aa8e3bea-270a-4ca5-afec-b1e30f05f168
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=ORjVh3eyNwRmvhpvuOMksACO9sNjoM6DFI5Mu3lxT+8=; b=jksCA09h/pKmnh61JI897CrQH
	j+q+5lCplucUTLHgXEa1w6zqNc8qgoZMNAQMRV82qkkjnmGyV3R9uLsQH5dt4SAuLECFD/qKEuCGm
	TyBuVraoreUmzOvQsDM4RWiugqXUZilSrdtWxAygRvJrsOUV0VNaws/R5UUhwDYY7kL+s=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 07/25] libxl: stop using aodev->device_config in libxl__device_pci_add()...
Date: Fri, 23 Oct 2020 16:22:56 +0000
Message-Id: <20201023162314.2235-8-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to hold a pointer to the device.

There is already a 'pci' field in 'pci_add_state' so simply use that from
the start. This also allows the 'pci' (#3) argument to be dropped from
do_pci_add().

NOTE: This patch also changes the type of the 'pci_domid' field in
      'pci_add_state' from 'int' to 'libxl_domid' which is more appropriate
      given what the field is used for.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 97889fda49..b8d8cc6a69 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1055,7 +1055,7 @@ typedef struct pci_add_state {
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
     libxl_device_pci *pci;
-    int pci_domid;
+    libxl_domid pci_domid;
 } pci_add_state;
 
 static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
@@ -1072,7 +1072,6 @@ static void pci_add_dm_done(libxl__egc *,
 
 static void do_pci_add(libxl__egc *egc,
                        libxl_domid domid,
-                       libxl_device_pci *pci,
                        pci_add_state *pas)
 {
     STATE_AO_GC(pas->aodev->ao);
@@ -1082,7 +1081,6 @@ static void do_pci_add(libxl__egc *egc,
     /* init pci_add_state */
     libxl__xswait_init(&pas->xswait);
     libxl__ev_qmp_init(&pas->qmp);
-    pas->pci = pci;
     pas->pci_domid = domid;
     libxl__ev_time_init(&pas->timeout);
 
@@ -1544,13 +1542,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     int stubdomid = 0;
     pci_add_state *pas;
 
-    /* Store *pci to be used by callbacks */
-    aodev->device_config = pci;
-    aodev->device_type = &libxl__pci_devtype;
-
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
+    pas->pci = pci;
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1604,9 +1599,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         GCNEW(pci_s);
         libxl_device_pci_init(pci_s);
         libxl_device_pci_copy(CTX, pci_s, pci);
+        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
-        do_pci_add(egc, stubdomid, pci_s, pas); /* must be last */
+        do_pci_add(egc, stubdomid, pas); /* must be last */
         return;
     }
 
@@ -1661,9 +1657,8 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
     int i;
 
     /* Convenience aliases */
-    libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) goto out;
 
@@ -1698,7 +1693,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
                 pci->vdevfn = orig_vdev;
             }
             pas->callback = device_pci_add_done;
-            do_pci_add(egc, domid, pci, pas); /* must be last */
+            do_pci_add(egc, domid, pas); /* must be last */
             return;
         }
     }
@@ -1714,7 +1709,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = aodev->device_config;
+    libxl_device_pci *pci = pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11166.29657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqx-0008WM-R6; Fri, 23 Oct 2020 16:23:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11166.29657; Fri, 23 Oct 2020 16:23:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzqx-0008WB-M7; Fri, 23 Oct 2020 16:23:43 +0000
Received: by outflank-mailman (input) for mailman id 11166;
 Fri, 23 Oct 2020 16:23:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzqw-00081e-Ci
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67bb7164-12ad-405d-bbf8-688efe3dd4bd;
 Fri, 23 Oct 2020 16:23:27 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqg-0008Jg-Q9; Fri, 23 Oct 2020 16:23:26 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqg-000376-In; Fri, 23 Oct 2020 16:23:26 +0000
X-Inumbo-ID: 67bb7164-12ad-405d-bbf8-688efe3dd4bd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=QJzbQ3Np4pSqJz9X+hJwu3xWTD3Y5IwSiZezmKwHDvM=; b=v3MNx6rAv6sCfXHLzOAXuI4BS
	jVqc+IqLPnZDvdlkmuUlIOFVQu5CSYKu2H0CqZ4m45XCv9FdsAv92CSoCMjz4zu5b92AzZIxMK6ZU
	LsFNkG9q+u1nM1sTsyoec5xoYz885bfhcz265Mr3Np7oaEeZY+DX/ny5g55DyuYGsLKxs=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 08/25] libxl: generalise 'driver_path' xenstore access functions in libxl_pci.c
Date: Fri, 23 Oct 2020 16:22:57 +0000
Message-Id: <20201023162314.2235-9-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

For the purposes of re-binding a device to its previous driver
libxl__device_pci_assignable_add() writes the driver path into xenstore.
This path is then read back in libxl__device_pci_assignable_remove().

The functions that support writing to and reading from xenstore are
currently dedicated to this purpose, and hence the node name 'driver_path'
is hard-coded. This patch generalizes these utility functions and passes
'driver_path' as an argument. Subsequent patches will invoke them to
access other nodes.

NOTE: Because the functions will have a broader use (other than storing a
      driver path in lieu of pciback), the base xenstore path is also
      changed from '/libxl/pciback' to '/libxl/pci'.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 66 +++++++++++++++++++++-----------------------
 1 file changed, 32 insertions(+), 34 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index b8d8cc6a69..f74203100d 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -718,48 +718,46 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCIBACK_INFO_PATH "/libxl/pciback"
+#define PCI_INFO_PATH "/libxl/pci"
 
-static void pci_assignable_driver_path_write(libxl__gc *gc,
-                                            libxl_device_pci *pci,
-                                            char *driver_path)
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    char *path;
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
 
-    path = GCSPRINTF(PCIBACK_INFO_PATH"/"PCI_BDF_XSPATH"/driver_path",
-                     pci->domain,
-                     pci->bus,
-                     pci->dev,
-                     pci->func);
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", driver_path) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", driver_path, path);
+    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
+        LOGE(WARN, "Write of %s to node %s failed.", val, path);
     }
 }
 
-static char * pci_assignable_driver_path_read(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
 {
-    return libxl__xs_read(gc, XBT_NULL,
-                          GCSPRINTF(
-                           PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH "/driver_path",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func));
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_assignable_driver_path_remove(libxl__gc *gc,
-                                              libxl_device_pci *pci)
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
 {
+    char *path = pci_info_xs_path(gc, pci, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL,
-          GCSPRINTF(PCIBACK_INFO_PATH "/" PCI_BDF_XSPATH,
-                    pci->domain,
-                    pci->bus,
-                    pci->dev,
-                    pci->func) );
+    xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
@@ -805,9 +803,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_assignable_driver_path_write(gc, pci, driver_path);
+            pci_info_xs_write(gc, pci, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_assignable_driver_path_read(gc, pci)) != NULL ) {
+                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -815,7 +813,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_assignable_driver_path_remove(gc, pci);
+        pci_info_xs_remove(gc, pci, "driver_path");
     }
 
     if ( pciback_dev_assign(gc, pci) ) {
@@ -865,7 +863,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     }
 
     /* Rebind if necessary */
-    driver_path = pci_assignable_driver_path_read(gc, pci);
+    driver_path = pci_info_xs_read(gc, pci, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -878,7 +876,7 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
                 return -1;
             }
 
-            pci_assignable_driver_path_remove(gc, pci);
+            pci_info_xs_remove(gc, pci, "driver_path");
         }
     } else {
         if ( rebind ) {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:23:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:23:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11170.29669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzr2-0000DE-An; Fri, 23 Oct 2020 16:23:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11170.29669; Fri, 23 Oct 2020 16:23:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzr2-0000Cv-5Y; Fri, 23 Oct 2020 16:23:48 +0000
Received: by outflank-mailman (input) for mailman id 11170;
 Fri, 23 Oct 2020 16:23:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzr1-00081e-D3
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:23:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45b5f1c7-5366-42f6-873d-8647cb949167;
 Fri, 23 Oct 2020 16:23:28 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqh-0008Jp-K8; Fri, 23 Oct 2020 16:23:27 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqh-000376-Ce; Fri, 23 Oct 2020 16:23:27 +0000
X-Inumbo-ID: 45b5f1c7-5366-42f6-873d-8647cb949167
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=DQUDwHY5sR5Fv3fHlALRKj3/AMe6ExqKLpToY1+yr+I=; b=4XEB77K0rvyyg7sYB7vvSU002
	m5vvIMhMYWkEkxgA3OUrYmsAz5r7IvJn5KOC5wnt+15IN9KZVH+eGmW0RFrMUOr3Z4HFw0LJvje0t
	S0LixHDPrZLZuD6eOnsjpucaEGxejpQY/CCgvFhBd6RM6W7DhjM8sl0BT9wy+XKiuTTcQ=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 09/25] libxl: remove unnecessary check from libxl__device_pci_add()
Date: Fri, 23 Oct 2020 16:22:58 +0000
Message-Id: <20201023162314.2235-10-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

The code currently checks explicitly whether the device is already assigned,
but this is actually unnecessary as assigned devices do not form part of
the list returned by libxl_device_pci_assignable_list() and hence the
libxl_pci_assignable() test would have already failed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index f74203100d..0be1b21185 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1535,8 +1535,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 {
     STATE_AO_GC(aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
-    int num_assigned, rc;
+    int rc;
     int stubdomid = 0;
     pci_add_state *pas;
 
@@ -1575,19 +1574,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
-    rc = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if ( rc ) {
-        LOGD(ERROR, domid,
-             "cannot determine if device is assigned, refusing to continue");
-        goto out;
-    }
-    if ( is_pci_in_array(assigned, num_assigned, pci->domain,
-                         pci->bus, pci->dev, pci->func) ) {
-        LOGD(ERROR, domid, "PCI device already attached to a domain");
-        rc = ERROR_FAIL;
-        goto out;
-    }
-
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:25:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:25:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11193.29681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzsW-0000mD-Pa; Fri, 23 Oct 2020 16:25:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11193.29681; Fri, 23 Oct 2020 16:25:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzsW-0000m5-LQ; Fri, 23 Oct 2020 16:25:20 +0000
Received: by outflank-mailman (input) for mailman id 11193;
 Fri, 23 Oct 2020 16:25:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3H45=D6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kVzsU-0000lt-Ss
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:25:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a79096c1-eee4-47b9-aa28-7bcc11430077;
 Fri, 23 Oct 2020 16:25:18 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E26E42192A;
 Fri, 23 Oct 2020 16:25:16 +0000 (UTC)
X-Inumbo-ID: a79096c1-eee4-47b9-aa28-7bcc11430077
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603470317;
	bh=PAR2NkJcyGWUrf2r1igze2t3rx5eT74HdkfAQ2uSaTs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DZt87/4/pJZT/AwsmR3amWr6W0YM+4uZfglT+lyMoe3tU0+gcmPBE15p9anqNawx1
	 dfpsTk0jv0vviF8lg162HUpV0xinaBHSnqeZifPp52zhQmz4tgP4ZQu4HMHZKCb2L0
	 XIFo2uRIAeT1Q2zxbo6kYLKh71iv+amD1KYZO1ek=
Date: Fri, 23 Oct 2020 09:25:15 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Rahul Singh <Rahul.Singh@arm.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
In-Reply-To: <d4be5451-809b-c793-8b6a-9e9bace0a52e@xen.org>
Message-ID: <alpine.DEB.2.21.2010230920060.12247@sstabellini-ThinkPad-T480s>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com> <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org> <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com> <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s> <d4be5451-809b-c793-8b6a-9e9bace0a52e@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1071560962-1603470205=:12247"
Content-ID: <alpine.DEB.2.21.2010230923450.12247@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1071560962-1603470205=:12247
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2010230923451.12247@sstabellini-ThinkPad-T480s>

On Fri, 23 Oct 2020, Julien Grall wrote:
> On 23/10/2020 01:02, Stefano Stabellini wrote:
> > On Thu, 22 Oct 2020, Julien Grall wrote:
> > > > > On 20/10/2020 16:25, Rahul Singh wrote:
> > > > > > Add support for ARM architected SMMUv3 implementations. It is based
> > > > > > on
> > > > > > the Linux SMMUv3 driver.
> > > > > > Major differences from the Linux driver are as follows:
> > > > > > 1. Only Stage-2 translation is supported as compared to the Linux
> > > > > > driver
> > > > > >      that supports both Stage-1 and Stage-2 translations.
> > > > > > 2. Use P2M  page table instead of creating one as SMMUv3 has the
> > > > > >      capability to share the page tables with the CPU.
> > > > > > 3. Tasklets are used in place of Linux's threaded IRQs for event
> > > > > > queue
> > > > > >      and priority queue IRQ handling.
> > > > > 
> > > > > Tasklets are not a replacement for threaded IRQs. In particular, they
> > > > > will
> > > > > have priority over anything else (IOW nothing will run on the pCPU
> > > > > until
> > > > > they are done).
> > > > > 
> > > > > Do you know why Linux is using threads? Is it because of long-running
> > > > > operations?
> > > > 
> > > > Yes, you are right: Linux is using threaded IRQs because of long-running
> > > > operations.
> > > > 
> > > > SMMUv3 reports faults/events via memory-based circular buffer queues
> > > > rather than registers. As per my understanding, it is time-consuming to
> > > > process the memory-based queues in interrupt context, which is why Linux
> > > > uses threaded IRQs to process the faults/events from the SMMU.
> > > > 
> > > > I didn’t find any other solution in XEN besides tasklets to defer the
> > > > work, which is why I used tasklets in XEN in place of threaded IRQs. If
> > > > we do all the work in interrupt context we will make XEN less responsive.
> > > 
> > > So we need to make sure that Xen continues to receive interrupts, but we
> > > also need to make sure that a vCPU bound to the pCPU remains responsive.
> > > 
> > > > 
> > > > If you know of another solution in XEN that can be used to defer work in
> > > > interrupt context, please let me know and I will try to use it.
> > > 
> > > One of my work colleagues encountered a similar problem recently. He had a
> > > long-running tasklet and wanted it broken down into smaller chunks.
> > > 
> > > We decided to use a timer to reschedule the tasklet in the future. This
> > > allows the scheduler to run other loads (e.g. vCPUs) for some time.
> > > 
> > > This is pretty hackish but I couldn't find a better solution as tasklets
> > > have high priority.
> > > 
> > > Maybe the others will have a better idea.
> > 
> > Julien's suggestion is a good one.
> > 
> > But I think tasklets can be configured to be called from the idle_loop,
> > in which case they are not run in interrupt context?
> 
> Tasklets can either run from the IDLE loop or from a softirq context.
> 
> When running from a softirq context, it may happen on return from receiving an
> interrupt. However, interrupts will always be enabled.
> 
> So I am not sure what concern you are trying to raise here.

Not raising any concerns :-)

I thought one of the previous statements in this thread implied that
tasklets are run in interrupt context -- I just wanted to go into
details on that point as it is relevant.
--8323329-1071560962-1603470205=:12247--


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11200.29698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztY-0000vt-De; Fri, 23 Oct 2020 16:26:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11200.29698; Fri, 23 Oct 2020 16:26:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztY-0000vi-7Z; Fri, 23 Oct 2020 16:26:24 +0000
Received: by outflank-mailman (input) for mailman id 11200;
 Fri, 23 Oct 2020 16:26:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztW-0000v3-JG
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27f1b87b-7c40-4f94-ba01-9684c636cdd7;
 Fri, 23 Oct 2020 16:26:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008NS-LH; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqm-000376-LC; Fri, 23 Oct 2020 16:23:32 +0000
X-Inumbo-ID: 27f1b87b-7c40-4f94-ba01-9684c636cdd7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=+xRHgG/V1V6jdrFunRi9DByLVqbO+8Hy1Wezh0jszpM=; b=NkS5oRBEuIIoyTRowqey4tbzc
	k7H9ApuLG8+2rlTdwtjzRw9hxVTW+OnH4/Pg22zcQk2rwzYu6fsYNHCztfy2E6OtGo+OKH/KBtXKS
	MKaIjOp5uCu8CKbLbUfSaaiUYgrFO1ijieApbV9Tch6gld8WZWZBkRvQwg40HFvYzyRO4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 15/25] libxl: Make sure devices added by pci-attach are reflected in the config
Date: Fri, 23 Oct 2020 16:23:04 +0000
Message-Id: <20201023162314.2235-16-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently libxl__device_pci_add_xenstore() is broken in that it does not
update the domain's configuration for the first device added (which causes
creation of the overall backend area in xenstore). This can be easily observed
by running 'xl list -l' after adding a single device: the device will be
missing.

This patch fixes the problem and adds a DEBUG log line to allow easy
verification that the domain configuration is being modified.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 68 +++++++++++++++++++++++---------------------
 1 file changed, 35 insertions(+), 33 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index c5d73133eb..45685ebec2 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -79,39 +79,18 @@ static void libxl__device_from_pci(libxl__gc *gc, uint32_t domid,
     device->kind = LIBXL__DEVICE_KIND_PCI;
 }
 
-static int libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
-                                     const libxl_device_pci *pci,
-                                     int num)
+static void libxl__create_pci_backend(libxl__gc *gc, uint32_t domid,
+                                      flexarray_t *front, flexarray_t *back)
 {
-    flexarray_t *front = NULL;
-    flexarray_t *back = NULL;
-    libxl__device device;
-    int i;
-
-    front = flexarray_make(gc, 16, 1);
-    back = flexarray_make(gc, 16, 1);
-
     LOGD(DEBUG, domid, "Creating pci backend");
 
-    /* add pci device */
-    libxl__device_from_pci(gc, domid, pci, &device);
-
     flexarray_append_pair(back, "frontend-id", GCSPRINTF("%d", domid));
-    flexarray_append_pair(back, "online", "1");
+    flexarray_append_pair(back, "online", GCSPRINTF("%d", 1));
     flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateInitialising));
     flexarray_append_pair(back, "domain", libxl__domid_to_name(gc, domid));
 
-    for (i = 0; i < num; i++, pci++)
-        libxl_create_pci_backend_device(gc, back, i, pci);
-
-    flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num));
     flexarray_append_pair(front, "backend-id", GCSPRINTF("%d", 0));
     flexarray_append_pair(front, "state", GCSPRINTF("%d", XenbusStateInitialising));
-
-    return libxl__device_generic_add(gc, XBT_NULL, &device,
-                                     libxl__xs_kvs_of_flexarray(gc, back),
-                                     libxl__xs_kvs_of_flexarray(gc, front),
-                                     NULL);
 }
 
 static int libxl__device_pci_add_xenstore(libxl__gc *gc,
@@ -119,7 +98,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
                                           const libxl_device_pci *pci,
                                           bool starting)
 {
-    flexarray_t *back;
+    flexarray_t *front, *back;
     char *num_devs, *be_path;
     int num = 0;
     xs_transaction_t t = XBT_NULL;
@@ -127,16 +106,22 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
     libxl_domain_config d_config;
     libxl__flock *lock = NULL;
     bool is_stubdomain = libxl_is_stubdom(CTX, domid, NULL);
+    libxl__device device;
+
+    libxl__device_from_pci(gc, domid, pci, &device);
 
     /* Stubdomain doesn't have own config. */
     if (!is_stubdomain)
         libxl_domain_config_init(&d_config);
 
+    front = flexarray_make(gc, 16, 1);
+    back = flexarray_make(gc, 16, 1);
+
     be_path = libxl__domain_device_backend_path(gc, 0, domid, 0,
                                                 LIBXL__DEVICE_KIND_PCI);
     num_devs = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/num_devs", be_path));
     if (!num_devs)
-        return libxl__create_pci_backend(gc, domid, pci, 1);
+        libxl__create_pci_backend(gc, domid, front, back);
 
     libxl_domain_type domtype = libxl__domain_type(gc, domid);
     if (domtype == LIBXL_DOMAIN_TYPE_INVALID)
@@ -147,13 +132,11 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
             return ERROR_FAIL;
     }
 
-    back = flexarray_make(gc, 16, 1);
-
     LOGD(DEBUG, domid, "Adding new pci device to xenstore");
-    num = atoi(num_devs);
+    num = num_devs ? atoi(num_devs) : 0;
     libxl_create_pci_backend_device(gc, back, num, pci);
     flexarray_append_pair(back, "num_devs", GCSPRINTF("%d", num + 1));
-    if (!starting)
+    if (num && !starting)
         flexarray_append_pair(back, "state", GCSPRINTF("%d", XenbusStateReconfiguring));
 
     /*
@@ -170,6 +153,7 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
         rc = libxl__get_domain_configuration(gc, domid, &d_config);
         if (rc) goto out;
 
+        LOGD(DEBUG, domid, "Adding new pci device to config");
         device_add_domain_config(gc, &d_config, &libxl__pci_devtype,
                                  pci);
 
@@ -186,7 +170,10 @@ static int libxl__device_pci_add_xenstore(libxl__gc *gc,
             if (rc) goto out;
         }
 
-        libxl__xs_writev(gc, t, be_path, libxl__xs_kvs_of_flexarray(gc, back));
+        libxl__device_generic_add(gc, t, &device,
+                                  libxl__xs_kvs_of_flexarray(gc, back),
+                                  libxl__xs_kvs_of_flexarray(gc, front),
+                                  NULL);
 
         rc = libxl__xs_transaction_commit(gc, &t);
         if (!rc) break;
@@ -1711,8 +1698,23 @@ static void add_pcis_done(libxl__egc *egc, libxl__multidev *multidev,
     if (rc) goto out;
 
     if (d_config->num_pcis > 0 && !libxl_get_stubdom_id(CTX, domid)) {
-        rc = libxl__create_pci_backend(gc, domid, d_config->pcis,
-                                       d_config->num_pcis);
+        flexarray_t *front, *back;
+        unsigned int i;
+        libxl__device device;
+
+        libxl__device_from_pci(gc, domid, &d_config->pcis[0], &device);
+
+        front = flexarray_make(gc, 16, 1);
+        back = flexarray_make(gc, 16, 1);
+
+        libxl__create_pci_backend(gc, domid, front, back);
+        for (i = 0; i < d_config->num_pcis; i++)
+            libxl_create_pci_backend_device(gc, back, i, &d_config->pcis[i]);
+
+        rc = libxl__device_generic_add(gc, XBT_NULL, &device,
+                                       libxl__xs_kvs_of_flexarray(gc, back),
+                                       libxl__xs_kvs_of_flexarray(gc, front),
+                                       NULL);
         if (rc < 0) {
             LOGD(ERROR, domid, "libxl_create_pci_backend failed: %d", rc);
             goto out;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11199.29693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztY-0000vQ-3Z; Fri, 23 Oct 2020 16:26:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11199.29693; Fri, 23 Oct 2020 16:26:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztX-0000vJ-W8; Fri, 23 Oct 2020 16:26:23 +0000
Received: by outflank-mailman (input) for mailman id 11199;
 Fri, 23 Oct 2020 16:26:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztW-0000v2-HW
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72fc22ac-891f-40a4-86ba-576ef68e6381;
 Fri, 23 Oct 2020 16:26:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008NU-Mf; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqv-000376-Fq; Fri, 23 Oct 2020 16:23:41 +0000
X-Inumbo-ID: 72fc22ac-891f-40a4-86ba-576ef68e6381
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=z+TBph7KfTXS8lKGFLMVlUTRo1VQXbbt6hL6Ha7NBXk=; b=tT2ajUVtGFDsgz52kOM43g3eY
	0D0fZHUPN6CUpJgKqD7RFc01dSyQKnlo3Bgx79oQ0D+1OVNet5uw3aPCIWVDhAOHc+d2dQAb5uUbO
	BFvszb4tWpOjnr6NcjSUm65EtKkJ1MoSkYqNxSDmZvW7u06HHdHaFS+rQTSc2uh0VbM60=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 24/25] docs/man: modify xl-pci-configuration(5) to add 'name' field to PCI_SPEC_STRING
Date: Fri, 23 Oct 2020 16:23:13 +0000
Message-Id: <20201023162314.2235-25-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Since assignable devices can be named, a subsequent patch will support use
of a PCI_SPEC_STRING containing a 'name' parameter instead of a 'bdf'. In
this case the name will be used to look up the 'bdf' in the list of assignable
(or assigned) devices.
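
The lookup described above can be sketched roughly as follows (a hypothetical Python illustration only; the real logic lives in xl/libxl, and the table and function names here are invented):

```python
# Hypothetical sketch of resolving a PCI_SPEC_STRING that carries a
# 'name' parameter instead of a 'bdf'.  The table below stands in for
# the list of assignable devices recorded by the toolstack.
ASSIGNABLE = {"mynic": "0000:06:00.0"}  # name -> BDF (invented example)

def resolve_spec(spec):
    """Return the BDF for a spec string, honouring 'name=' if present."""
    params = dict(kv.split("=", 1) for kv in spec.split(",") if "=" in kv)
    if "name" in params:
        if "bdf" in params:
            raise ValueError("'bdf' and 'name' are mutually exclusive")
        return ASSIGNABLE[params["name"]]   # look the BDF up by name
    # Otherwise take an explicit 'bdf=' or the positional first field.
    return params.get("bdf", spec.split(",")[0])

print(resolve_spec("name=mynic,permissive=1"))  # 0000:06:00.0
```
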

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 4dd73bc498..db3360307c 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -51,7 +51,7 @@ is not specified, or if it is specified with an empty value (whether
 positionally or explicitly).
 
 B<NOTE>: In context of B<xl pci-detach> (see L<xl(1)>), parameters other than
-B<bdf> will be ignored.
+B<bdf> or B<name> will be ignored.
 
 =head1 Positional Parameters
 
@@ -70,7 +70,11 @@ B<*> to indicate all functions of a multi-function device.
 
 =item Default Value
 
-None. This parameter is mandatory as it identifies the device.
+None. This parameter is mandatory in its positional form. As a non-positional
+parameter it is also mandatory unless a B<name> parameter is present, in
+which case B<bdf> must not be present since the B<name> will be used to find
+the B<bdf> in the list of assignable devices. See L<xl(1)> for more information
+on naming assignable devices.
 
 =back
 
@@ -194,4 +198,21 @@ B<NOTE>: This overrides the global B<rdm> option.
 
 =back
 
+=item B<name>=I<STRING>
+
+=over 4
+
+=item Description
+
+This is the name given when the B<BDF> was made assignable. See L<xl(1)> for
+more information on naming assignable devices.
+
+=item Default Value
+
+None. This parameter must not be present if a B<bdf> parameter is present.
+If a B<bdf> parameter is not present then B<name> is mandatory as it is
+required to look up the B<BDF> in the list of assignable devices.
+
+=back
+
 =back
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11201.29717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztc-00010u-Qf; Fri, 23 Oct 2020 16:26:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11201.29717; Fri, 23 Oct 2020 16:26:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztc-00010m-N8; Fri, 23 Oct 2020 16:26:28 +0000
Received: by outflank-mailman (input) for mailman id 11201;
 Fri, 23 Oct 2020 16:26:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztb-0000v2-GB
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 897b0fc5-8dbf-4bc1-a5d7-e01dd3117c9b;
 Fri, 23 Oct 2020 16:26:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008NQ-Iu; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqp-000376-2n; Fri, 23 Oct 2020 16:23:35 +0000
X-Inumbo-ID: 897b0fc5-8dbf-4bc1-a5d7-e01dd3117c9b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=um7JEb+Wnwt5fiP5c5wstzbmPuQS6pAhcCxDSP4apgk=; b=h5SsgST+sCAoFHA0FZRabee9q
	wdqGgTts8Q9LxErYlEfoXtNMvcoMJhvNZ4RqeFk5rNDVjjVyjcTwXGc5dXCmRcc98y0a9vjvMksym
	7Zg1ekeRnIeRPdDR8jUG672LLSFypFD7c+SRhcNBHhmJY9ZUp5Il4+ifEae8unb1nexQ4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 18/25] docs/man: fix xl(1) documentation for 'pci' operations
Date: Fri, 23 Oct 2020 16:23:07 +0000
Message-Id: <20201023162314.2235-19-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently the documentation completely fails to mention the existence of
PCI_SPEC_STRING. This patch tidies things up, specifically clarifying that
'pci-assignable-add/remove' take <BDF> arguments whereas 'pci-attach/detach'
take <PCI_SPEC_STRING> arguments (which will be enforced in a subsequent
patch).
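
The distinction between the two argument types can be illustrated with a rough sketch (hypothetical Python, not xl's actual parser): a <BDF> is just the device address, while a <PCI_SPEC_STRING> is a BDF optionally followed by @VSLOT and KEY=VALUE parameters.

```python
import re

# Rough sketch only: distinguish a bare BDF from a full PCI_SPEC_STRING.
# [DDDD:]BB:DD.F, where the function F may be '*' for all functions.
BDF_RE = re.compile(
    r"^([0-9a-fA-F]{4}:)?[0-9a-fA-F]{2}:[0-9a-fA-F]{2}\.([0-7]|\*)$")

def is_bdf(s):
    """True for a bare BDF, as accepted by pci-assignable-add/remove."""
    return bool(BDF_RE.match(s))

def is_pci_spec_string(s):
    """True for a PCI_SPEC_STRING, as accepted by pci-attach/detach."""
    head = s.split(",")[0].split("@")[0]  # strip KEY=VALUEs and @VSLOT
    return is_bdf(head)
```
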

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 5f7d3a7134..373a52839d 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1597,14 +1597,18 @@ List virtual network interfaces for a domain.
 
 =item B<pci-assignable-list>
 
-List all the assignable PCI devices.
+List all the B<BDF> of assignable PCI devices. See
+L<xl-pci-configuration(5)> for more information.
+
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
 =item B<pci-assignable-add> I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF assignable to guests.
+Make the device at B<BDF> assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
 first be unbound, and the original driver stored so that it can be
@@ -1620,8 +1624,10 @@ being used.
 
 =item B<pci-assignable-remove> [I<-r>] I<BDF>
 
-Make the device at PCI Bus/Device/Function BDF not assignable to
-guests.  This will at least unbind the device from pciback, and
+Make the device at B<BDF> not assignable to guests. See
+L<xl-pci-configuration(5)> for more information.
+
+This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
 option is specified, it will also attempt to re-bind the device to its
 original driver, making it usable by Domain 0 again.  If the device is
@@ -1637,15 +1643,15 @@ As always, this should only be done if you trust the guest, or are
 confident that the particular device you're re-assigning to dom0 will
 cancel all in-flight DMA on FLR.
 
-=item B<pci-attach> I<domain-id> I<BDF>
+=item B<pci-attach> I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-plug a new pass-through pci device to the specified domain.
-B<BDF> is the PCI Bus/Device/Function of the physical device to pass-through.
+Hot-plug a new pass-through pci device to the specified domain. See
+L<xl-pci-configuration(5)> for more information.
 
-=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<BDF>
+=item B<pci-detach> [I<OPTIONS>] I<domain-id> I<PCI_SPEC_STRING>
 
-Hot-unplug a previously assigned pci device from a domain. B<BDF> is the PCI
-Bus/Device/Function of the physical device to be removed from the guest domain.
+Hot-unplug a pci device that was previously passed through to a domain. See
+L<xl-pci-configuration(5)> for more information.
 
 B<OPTIONS>
 
@@ -1660,7 +1666,7 @@ even without guest domain's collaboration.
 
 =item B<pci-list> I<domain-id>
 
-List pass-through pci devices for a domain.
+List the B<BDF> of pci devices passed through to a domain.
 
 =back
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11202.29724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztd-00011q-8W; Fri, 23 Oct 2020 16:26:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11202.29724; Fri, 23 Oct 2020 16:26:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztd-00011Z-1K; Fri, 23 Oct 2020 16:26:29 +0000
Received: by outflank-mailman (input) for mailman id 11202;
 Fri, 23 Oct 2020 16:26:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztb-0000v3-HW
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7828a2f3-b8a2-4ece-86ed-5eb63f846dcb;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztV-0008Nq-0h; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqn-000376-F3; Fri, 23 Oct 2020 16:23:33 +0000
X-Inumbo-ID: 7828a2f3-b8a2-4ece-86ed-5eb63f846dcb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=GMhuBQvWwKt0oiSVX55rbWxIi0temtvls6NwYa6kuYM=; b=3uTiqlaer9LfnCLBMURaVX2tb
	ErERqysULJnoGU2hRi47Z3HIAItwBFoBv12UhCxP12KxZAqc6+xD0TX0Jt4QIaOWHHSUxmwovLGqi
	Pi73o3sX7oiGdAZYCdxWSyO6oNajpb/3ytdWtzivelnhXfg2EB63GgZ4A5Zb8V5ZG0DkE=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 16/25] docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg manpage...
Date: Fri, 23 Oct 2020 16:23:05 +0000
Message-Id: <20201023162314.2235-17-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and put it into a new xl-pci-configuration(5) manpage, akin to the
xl-network-configuration(5) and xl-disk-configuration(5) manpages.

This patch moves the content of the section verbatim. A subsequent patch
will improve the documentation, once it is in its new location.
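
As a rough illustration of the format being moved, B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,...>, a minimal parser might look like this (hypothetical Python sketch; the field names follow the manpage, everything else is invented, and xl's real parser also validates each field):

```python
def parse_pci_spec(spec):
    """Split '[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,...' into its parts."""
    head, *opts = spec.split(",")
    bdf, _, vslot = head.partition("@")
    parts = bdf.split(":")
    # lspci omits a zero domain, and so does the spec string.
    domain = parts[0] if len(parts) == 3 else "0000"
    bus, devfn = parts[-2], parts[-1]
    dev, _, func = devfn.partition(".")
    result = {"domain": domain, "bus": bus, "dev": dev, "func": func,
              "vslot": vslot or None}
    result.update(kv.split("=", 1) for kv in opts)  # permissive=..., etc.
    return result

print(parse_pci_spec("06:00.0@6,permissive=1,rdm_policy=relaxed"))
```
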

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 78 +++++++++++++++++++++++++++++++++++++
 docs/man/xl.cfg.5.pod.in            | 68 +-------------------------------
 2 files changed, 79 insertions(+), 67 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
new file mode 100644
index 0000000000..72a27bd95d
--- /dev/null
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -0,0 +1,78 @@
+=encoding utf8
+
+=head1 NAME
+
+xl-pci-configuration - XL PCI Configuration Syntax
+
+=head1 SYNTAX
+
+This document specifies the format for B<PCI_SPEC_STRING> which is used by
+the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+
+Each B<PCI_SPEC_STRING> has the form of
+B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+
+=over 4
+
+=item B<[DDDD:]BB:DD.F>
+
+Identifies the PCI device from the host perspective in the domain
+(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
+the same scheme as used in the output of B<lspci(1)> for the device in
+question.
+
+Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
+is zero and it is optional here also. You may specify the function
+(B<F>) as B<*> to indicate all functions.
+
+=item B<@VSLOT>
+
+Specifies the virtual slot where the guest will see this
+device. This is equivalent to the B<DD> which the guest sees. In a
+guest B<DDDD> and B<BB> are C<0000:00>.
+
+=item B<permissive=BOOLEAN>
+
+By default pciback only allows PV guests to write "known safe" values
+into PCI configuration space, likewise QEMU (both qemu-xen and
+qemu-xen-traditional) imposes the same constraint on HVM guests.
+However, many devices require writes to other areas of the configuration space
+in order to operate properly.  This option tells the backend (pciback or QEMU)
+to allow all writes to the PCI configuration space of this device by this
+domain.
+
+B<This option should be enabled with caution:> it gives the guest much
+more control over the device, which may have security or stability
+implications.  It is recommended to only enable this option for
+trusted VMs under administrator's control.
+
+=item B<msitranslate=BOOLEAN>
+
+Specifies that MSI-INTx translation should be turned on for the PCI
+device. When enabled, MSI-INTx translation will always enable MSI on
+the PCI device regardless of whether the guest uses INTx or MSI. Some
+device drivers, such as NVIDIA's, detect an inconsistency and do not
+function when this option is enabled. Therefore the default is false (0).
+
+=item B<seize=BOOLEAN>
+
+Tells B<xl> to automatically attempt to re-assign a device to
+pciback if it is not already assigned.
+
+B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+system device, such as a network or a disk controller being used by
+dom0 without confirmation.  Please use with care.
+
+=item B<power_mgmt=BOOLEAN>
+
+B<(HVM only)> Specifies that the VM should be able to program the
+D0-D3hot power management states for the PCI device. The default is false (0).
+
+=item B<rdm_policy=STRING>
+
+B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
+option but just specific to a given device. The default is "relaxed".
+
+Note: this would override global B<rdm> option.
+
+=back
diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..b00644e852 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -1101,73 +1101,7 @@ option is valid only when the B<controller> option is specified.
 =item B<pci=[ "PCI_SPEC_STRING", "PCI_SPEC_STRING", ...]>
 
 Specifies the host PCI devices to passthrough to this guest.
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
-
-=over 4
-
-=item B<[DDDD:]BB:DD.F>
-
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
-
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
-
-=item B<@VSLOT>
-
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
-
-=item B<permissive=BOOLEAN>
-
-By default pciback only allows PV guests to write "known safe" values
-into PCI configuration space, likewise QEMU (both qemu-xen and
-qemu-xen-traditional) imposes the same constraint on HVM guests.
-However, many devices require writes to other areas of the configuration space
-in order to operate properly.  This option tells the backend (pciback or QEMU)
-to allow all writes to the PCI configuration space of this device by this
-domain.
-
-B<This option should be enabled with caution:> it gives the guest much
-more control over the device, which may have security or stability
-implications.  It is recommended to only enable this option for
-trusted VMs under administrator's control.
-
-=item B<msitranslate=BOOLEAN>
-
-Specifies that MSI-INTx translation should be turned on for the PCI
-device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
-function when this option is enabled. Therefore the default is false (0).
-
-=item B<seize=BOOLEAN>
-
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
-
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
-system device, such as a network or a disk controller being used by
-dom0 without confirmation.  Please use with care.
-
-=item B<power_mgmt=BOOLEAN>
-
-B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
-
-=item B<rdm_policy=STRING>
-
-B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
-
-Note: this would override global B<rdm> option.
-
-=back
+See L<xl-pci-configuration(5)> for more details.
 
 =item B<pci_permissive=BOOLEAN>
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11203.29741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzth-00018G-KZ; Fri, 23 Oct 2020 16:26:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11203.29741; Fri, 23 Oct 2020 16:26:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzth-000188-Fo; Fri, 23 Oct 2020 16:26:33 +0000
Received: by outflank-mailman (input) for mailman id 11203;
 Fri, 23 Oct 2020 16:26:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztg-0000v2-GL
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1916f930-e166-42b9-ad5b-8280fe52630d;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztV-0008Ny-3l; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqt-000376-1A; Fri, 23 Oct 2020 16:23:39 +0000
X-Inumbo-ID: 1916f930-e166-42b9-ad5b-8280fe52630d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=VqeeOXCaFQ2Ft+b5+Gf0Q+8kwLK5tou/HCVHc7Td96s=; b=rUt3n1yHArGCF4WPFnW3j/Wd2
	PHFR/Bih3WQ25luaHKeb1kOE57eN3s0+MXVfRqdyzVRYIVafwQNusxMdY9Fq0b74S+dwfbHDow+Xl
	KOLEqCZdyWUE7SL9ZM2qhfwRjGiGeLbgzPtYcbxbOZXi5xxIH7DkOZ6XsrxF1l8LpuiF4=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 22/25] docs/man: modify xl(1) in preparation for naming of assignable devices
Date: Fri, 23 Oct 2020 16:23:11 +0000
Message-Id: <20201023162314.2235-23-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

A subsequent patch will introduce code to allow a name to be specified to
'xl pci-assignable-add' such that the assignable device may be referred to
by that name in subsequent operations.
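
The intended bookkeeping might be modelled by the following sketch (hypothetical Python; the name table, function names, and the 'gpu' example are invented, not xl's implementation):

```python
# Hypothetical model of the assignable-device table kept by the
# toolstack, mapping each assignable BDF to an optional name.
assignable = {}  # BDF -> NAME or None

def assignable_add(bdf, name=None):
    """Models 'xl pci-assignable-add [-n NAME] BDF'."""
    assignable[bdf] = name

def assignable_remove(bdf_or_name):
    """Models 'xl pci-assignable-remove': accepts either BDF or NAME."""
    for bdf, name in list(assignable.items()):
        if bdf_or_name in (bdf, name):
            del assignable[bdf]
            return bdf
    raise KeyError(bdf_or_name)

assignable_add("0000:06:00.0", name="gpu")
assert assignable_remove("gpu") == "0000:06:00.0"
```
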

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl.1.pod.in | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 373a52839d..a45b423d0f 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -1595,19 +1595,23 @@ List virtual network interfaces for a domain.
 
 =over 4
 
-=item B<pci-assignable-list>
+=item B<pci-assignable-list> [I<-n>]
 
 List all the B<BDF> of assignable PCI devices. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+specified then any name supplied when the device was made assignable
+will also be displayed.
 
 These are devices in the system which are configured to be
 available for passthrough and are bound to a suitable PCI
 backend driver in domain 0 rather than a real driver.
 
-=item B<pci-assignable-add> I<BDF>
+=item B<pci-assignable-add> [I<-n NAME>] I<BDF>
 
 Make the device at B<BDF> assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+L<xl-pci-configuration(5)> for more information. If the -n option is
+supplied then the assignable device entry will be named with the
+given B<NAME>.
 
 This will bind the device to the pciback driver and assign it to the
 "quarantine domain".  If it is already bound to a driver, it will
@@ -1622,10 +1626,11 @@ not to do this on a device critical to domain 0's operation, such as
 storage controllers, network interfaces, or GPUs that are currently
 being used.
 
-=item B<pci-assignable-remove> [I<-r>] I<BDF>
+=item B<pci-assignable-remove> [I<-r>] I<BDF>|I<NAME>
 
-Make the device at B<BDF> not assignable to guests. See
-L<xl-pci-configuration(5)> for more information.
+Make a device non-assignable to guests. The device may be identified
+either by its B<BDF> or the B<NAME> supplied when the device was made
+assignable. See L<xl-pci-configuration(5)> for more information.
 
 This will at least unbind the device from pciback, and
 re-assign it from the "quarantine domain" back to domain 0.  If the -r
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11204.29747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzti-00019L-3K; Fri, 23 Oct 2020 16:26:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11204.29747; Fri, 23 Oct 2020 16:26:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzth-000197-RI; Fri, 23 Oct 2020 16:26:33 +0000
Received: by outflank-mailman (input) for mailman id 11204;
 Fri, 23 Oct 2020 16:26:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztg-0000v3-Hx
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b97d7312-5e9d-4a2e-87ca-00498ffff7bc;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztV-0008O0-53; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqo-000376-8v; Fri, 23 Oct 2020 16:23:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVztg-0000v3-Hx
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:32 +0000
X-Inumbo-ID: b97d7312-5e9d-4a2e-87ca-00498ffff7bc
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b97d7312-5e9d-4a2e-87ca-00498ffff7bc;
	Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=z6HgaaJxE43aYUfbefthFXj4bV6TUshSIIqyN3QunIw=; b=Zz6BTj1cwZdakGQDRxN1PhsFE
	SHOWDggs1srSx+88vukhHOKibqNIrfxjy+A5ZtkyXQWRYZdhPJPBh3L0D7qDjEYwbXTY4AFZ+tXdu
	J9tevDgsgVKi2BzjsnW76OojdsJufMOcS0v3bZHOFDeia8RzNfHZc4gVV/U8iSSKewcEo=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 17/25] docs/man: improve documentation of PCI_SPEC_STRING...
Date: Fri, 23 Oct 2020 16:23:06 +0000
Message-Id: <20201023162314.2235-18-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and prepare for adding support for non-positional parsing of 'bdf' and
'vslot' in a subsequent patch.

Also document 'BDF' as a first-class parameter type and fix the documentation
to state that the default value of 'rdm_policy' is actually 'strict', not
'relaxed', as can be seen in libxl__device_pci_setdefault().
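
The documented B<BDF> form ([DDDD:]BB:SS.F, with the domain optional and
defaulting to zero) can be illustrated with a short stand-alone sketch. This
is illustrative C only, not libxl/libxlu code; the function name and error
convention are invented for the example, but the range checks match the field
widths a PCI BDF actually has:

```c
#include <assert.h>
#include <stdio.h>

/*
 * Illustrative only (not libxl code): parse a BDF of the documented
 * form [DDDD:]BB:SS.F, defaulting the domain to 0 when it is omitted.
 */
static int parse_bdf_example(const char *str, unsigned int *dom,
                             unsigned int *bus, unsigned int *dev,
                             unsigned int *func)
{
    /* Try the four-component form first, then fall back to three. */
    if (sscanf(str, "%x:%x:%x.%x", dom, bus, dev, func) != 4) {
        *dom = 0;
        if (sscanf(str, "%x:%x.%x", bus, dev, func) != 3)
            return -1;
    }

    /* domain: 16 bits, bus: 8 bits, device: 5 bits, function: 3 bits */
    if (*dom > 0xffff || *bus > 0xff || *dev > 0x1f || *func > 7)
        return -1;

    return 0;
}
```

With this sketch, "36:00.0" and "0000:36:00.0" parse to the same device, as
the documentation describes.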

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 docs/man/xl-pci-configuration.5.pod | 177 ++++++++++++++++++++++++++++++------
 1 file changed, 148 insertions(+), 29 deletions(-)

diff --git a/docs/man/xl-pci-configuration.5.pod b/docs/man/xl-pci-configuration.5.pod
index 72a27bd95d..4dd73bc498 100644
--- a/docs/man/xl-pci-configuration.5.pod
+++ b/docs/man/xl-pci-configuration.5.pod
@@ -6,32 +6,105 @@ xl-pci-configuration - XL PCI Configuration Syntax
 
 =head1 SYNTAX
 
-This document specifies the format for B<PCI_SPEC_STRING> which is used by
-the L<xl.cfg(5)> pci configuration option, and related L<xl(1)> commands.
+This document specifies the format for B<BDF> and B<PCI_SPEC_STRING> which are
+used by the L<xl.cfg(5)> pci configuration option, and related L<xl(1)>
+commands.
 
-Each B<PCI_SPEC_STRING> has the form of
-B<[DDDD:]BB:DD.F[@VSLOT],KEY=VALUE,KEY=VALUE,...> where:
+A B<BDF> has the following form:
+
+    [DDDD:]BB:SS.F
+
+B<DDDD> is the domain number, B<BB> is the bus number, B<SS> is the device (or
+slot) number, and B<F> is the function number. This is the same scheme as
+used in the output of L<lspci(1)> for the device in question. By default
+L<lspci(1)> will omit the domain (B<DDDD>) if it is zero and hence a zero
+value for domain may also be omitted when specifying a B<BDF>.
+
+Each B<PCI_SPEC_STRING> has one of the following forms:
+
+=over 4
+
+    [<bdf>[@<vslot>],][<key>=<value>,]*
+    [<key>=<value>,]*
+
+=back
+
+For example, these strings are equivalent:
 
 =over 4
 
-=item B<[DDDD:]BB:DD.F>
+    36:00.0@20,seize=1
+    36:00.0,vslot=20,seize=1
+    bdf=36:00.0,vslot=20,seize=1
 
-Identifies the PCI device from the host perspective in the domain
-(B<DDDD>), Bus (B<BB>), Device (B<DD>) and Function (B<F>) syntax. This is
-the same scheme as used in the output of B<lspci(1)> for the device in
-question.
+=back
+
+More formally, the string is a series of comma-separated keyword/value
+pairs, flags and positional parameters.  Parameters which are not bare
+keywords and which do not contain "=" symbols are assigned to the
+positional parameters, in the order specified below.  The positional
+parameters may also be specified by name.
+
+Each parameter may be specified at most once, either as a positional
+parameter or a named parameter.  Default values apply if the parameter
+is not specified, or if it is specified with an empty value (whether
+positionally or explicitly).
+
+B<NOTE>: In the context of B<xl pci-detach> (see L<xl(1)>), parameters other than
+B<bdf> will be ignored.
+
+=head1 Positional Parameters
+
+=over 4
+
+=item B<bdf>=I<BDF>
+
+=over 4
 
-Note: by default B<lspci(1)> will omit the domain (B<DDDD>) if it
-is zero and it is optional here also. You may specify the function
-(B<F>) as B<*> to indicate all functions.
+=item Description
 
-=item B<@VSLOT>
+This identifies the PCI device from the host perspective.
 
-Specifies the virtual slot where the guest will see this
-device. This is equivalent to the B<DD> which the guest sees. In a
-guest B<DDDD> and B<BB> are C<0000:00>.
+In the context of a B<PCI_SPEC_STRING> you may specify the function (B<F>) as
+B<*> to indicate all functions of a multi-function device.
 
-=item B<permissive=BOOLEAN>
+=item Default Value
+
+None. This parameter is mandatory as it identifies the device.
+
+=back
+
+=item B<vslot>=I<NUMBER>
+
+=over 4
+
+=item Description
+
+Specifies the virtual slot (device) number where the guest will see this
+device. For example, running L<lspci(1)> in a Linux guest where B<vslot>
+was specified as C<8> would identify the device as C<00:08.0>. Virtual domain
+and bus numbers are always 0.
+
+B<NOTE:> This parameter is always parsed as a hexadecimal value.
+
+=item Default Value
+
+None. This parameter is not mandatory. An available B<vslot> will be selected
+if this parameter is not specified.
+
+=back
+
+=back
+
+=head1 Other Parameters and Flags
+
+=over 4
+
+=item B<permissive>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 By default pciback only allows PV guests to write "known safe" values
 into PCI configuration space, likewise QEMU (both qemu-xen and
@@ -46,33 +119,79 @@ more control over the device, which may have security or stability
 implications.  It is recommended to only enable this option for
 trusted VMs under administrator's control.
 
-=item B<msitranslate=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<msitranslate>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 Specifies that MSI-INTx translation should be turned on for the PCI
 device. When enabled, MSI-INTx translation will always enable MSI on
-the PCI device regardless of whether the guest uses INTx or MSI. Some
-device drivers, such as NVIDIA's, detect an inconsistency and do not
+the PCI device regardless of whether the guest uses INTx or MSI.
+
+=item Default Value
+
+Some device drivers, such as NVIDIA's, detect an inconsistency and do not
 function when this option is enabled. Therefore the default is false (0).
 
-=item B<seize=BOOLEAN>
+=back
+
+=item B<seize>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
-Tells B<xl> to automatically attempt to re-assign a device to
-pciback if it is not already assigned.
+Tells L<xl(1)> to automatically attempt to make the device assignable to
+guests if that has not already been done by the B<pci-assignable-add>
+command.
 
-B<WARNING:> If you set this option, B<xl> will gladly re-assign a critical
+B<WARNING:> If you set this option, L<xl(1)> will gladly re-assign a critical
 system device, such as a network or a disk controller being used by
 dom0 without confirmation.  Please use with care.
 
-=item B<power_mgmt=BOOLEAN>
+=item Default Value
+
+0
+
+=back
+
+=item B<power_mgmt>=I<BOOLEAN>
+
+=over 4
+
+=item Description
 
 B<(HVM only)> Specifies that the VM should be able to program the
-D0-D3hot power management states for the PCI device. The default is false (0).
+D0-D3hot power management states for the PCI device.
+
+=item Default Value
+
+0
 
-=item B<rdm_policy=STRING>
+=back
+
+=item B<rdm_policy>=I<STRING>
+
+=over 4
+
+=item Description
 
 B<(HVM/x86 only)> This is the same as the policy setting inside the B<rdm>
-option but just specific to a given device. The default is "relaxed".
+option in L<xl.cfg(5)> but just specific to a given device.
 
-Note: this would override global B<rdm> option.
+B<NOTE>: This overrides the global B<rdm> option.
+
+=item Default Value
+
+"strict"
+
+=back
 
 =back
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11205.29765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztn-0001Hu-G9; Fri, 23 Oct 2020 16:26:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11205.29765; Fri, 23 Oct 2020 16:26:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztn-0001Hj-Ci; Fri, 23 Oct 2020 16:26:39 +0000
Received: by outflank-mailman (input) for mailman id 11205;
 Fri, 23 Oct 2020 16:26:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztl-0000v2-GZ
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4cf76f40-3f3d-4c0d-98b8-2222d69b0ccd;
 Fri, 23 Oct 2020 16:26:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008NY-Pa; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzql-000376-1D; Fri, 23 Oct 2020 16:23:31 +0000
X-Inumbo-ID: 4cf76f40-3f3d-4c0d-98b8-2222d69b0ccd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=fKvidaGIqKTgBLgc0mSSzScxrN0OQ+wp9Q7aYF6wsy4=; b=dxCpiIUo7fG8UlymJDufun+bR
	i+OASKtJG3Ot94E9zEZz/u7qMWtKwxWMQ6e5rZ9QBj9uuT3yyyzIQjBfryn2CQVKbbV1YAzZSPuNs
	90iWxHAWRMjJpsniUYRG6RvuV6Eb2ANGIbMjvNAP1Wnup2hq9BcLR0FWPZ3hDIhwhUXjo=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 13/25] libxl: use COMPARE_PCI() macro in is_pci_in_array()...
Date: Fri, 23 Oct 2020 16:23:02 +0000
Message-Id: <20201023162314.2235-14-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... rather than an open-coded equivalent.

This patch tidies up the is_pci_in_array() function, making it take a single
'libxl_device_pci' argument rather than separate domain, bus, device and
function arguments. The already-available COMPARE_PCI() macro can then be
used, and the function is also modified to return 'bool' rather than 'int'.

The patch also modifies libxl_pci_assignable() to use is_pci_in_array() rather
than a separate open-coded equivalent, and also modifies it to return a
'bool' rather than an 'int'.

NOTE: The COMPARE_PCI() macro is also fixed to include the 'domain' in its
      comparison, which should always have been the case.
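
The comparison semantics can be sketched outside libxl with an invented
stand-in struct (illustrative only; 'pci_addr' and the test values are not
libxl names). The macro mirrors the fixed COMPARE_PCI(), including 'domain',
and the loop has the same shape as the reworked is_pci_in_array():

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for libxl_device_pci's addressing fields (illustrative only). */
struct pci_addr {
    unsigned int domain, bus, dev, func;
};

/* Mirrors the fixed COMPARE_PCI(): 'domain' is now part of the comparison. */
#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
                           (a)->bus == (b)->bus &&       \
                           (a)->dev == (b)->dev &&       \
                           (a)->func == (b)->func)

/*
 * Same shape as the reworked is_pci_in_array(): break out on a match and
 * report whether the loop terminated early.
 */
static bool is_pci_in_array(const struct pci_addr *pcis, int num,
                            const struct pci_addr *pci)
{
    int i;

    for (i = 0; i < num; i++) {
        if (COMPARE_PCI(pci, &pcis[i]))
            break;
    }

    return i < num;
}
```

Note that a device differing only in 'domain' no longer matches, which is
exactly the behavioural change the NOTE above describes.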

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_internal.h |  7 ++++---
 tools/libs/light/libxl_pci.c      | 38 +++++++++++++-------------------------
 2 files changed, 17 insertions(+), 28 deletions(-)

diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 3e70ff639b..80d7988622 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4744,9 +4744,10 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->func == (b)->func &&    \
-                           (a)->bus == (b)->bus &&      \
-                           (a)->dev == (b)->dev)
+#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+                           (a)->bus == (b)->bus &&       \
+                           (a)->dev == (b)->dev &&       \
+                           (a)->func == (b)->func)
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index e858509609..2e8e1c50f1 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -317,24 +317,17 @@ retry_transaction2:
     return 0;
 }
 
-static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
-                           int dom, int bus, int dev, int func)
+static int is_pci_in_array(libxl_device_pci *pcis, int num,
+                           libxl_device_pci *pci)
 {
     int i;
 
-    for(i = 0; i < num_assigned; i++) {
-        if ( assigned[i].domain != dom )
-            continue;
-        if ( assigned[i].bus != bus )
-            continue;
-        if ( assigned[i].dev != dev )
-            continue;
-        if ( assigned[i].func != func )
-            continue;
-        return 1;
+    for (i = 0; i < num; i++) {
+        if (COMPARE_PCI(pci, &pcis[i]))
+            break;
     }
 
-    return 0;
+    return i < num;
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
@@ -1467,21 +1460,17 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
 {
     libxl_device_pci *pcis;
-    int num, i;
+    int num;
+    bool assignable;
 
     pcis = libxl_device_pci_assignable_list(ctx, &num);
-    for (i = 0; i < num; i++) {
-        if (pcis[i].domain == pci->domain &&
-            pcis[i].bus == pci->bus &&
-            pcis[i].dev == pci->dev &&
-            pcis[i].func == pci->func)
-            break;
-    }
+    assignable = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_assignable_list_free(pcis, num);
-    return i != num;
+
+    return assignable;
 }
 
 static void device_pci_add_stubdom_wait(libxl__egc *egc,
@@ -1829,8 +1818,7 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         goto out_fail;
     }
 
-    attached = is_pci_in_array(pcis, num, pci->domain,
-                               pci->bus, pci->dev, pci->func);
+    attached = is_pci_in_array(pcis, num, pci);
     libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11206.29771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztn-0001Ii-Te; Fri, 23 Oct 2020 16:26:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11206.29771; Fri, 23 Oct 2020 16:26:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztn-0001II-Lp; Fri, 23 Oct 2020 16:26:39 +0000
Received: by outflank-mailman (input) for mailman id 11206;
 Fri, 23 Oct 2020 16:26:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztl-0000v3-I5
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d198a79-1aba-45cd-874b-371bd5d50ab7;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztV-0008OE-9E; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqr-000376-3d; Fri, 23 Oct 2020 16:23:37 +0000
X-Inumbo-ID: 0d198a79-1aba-45cd-874b-371bd5d50ab7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=FS2lsRq/N5rr0lREQKsZJxqwgY3o2rJbmA8WgHHgsnw=; b=2+w1u1cHA7vnNQpXElGGIByv+
	yYiZwwKzcVmCuheAhZgrcCY3UVXQDufQD5+4a0ijPnDc6KGr8N2EksnWk9343oAPveZQAVsLxgIJ9
	5eqVu3WDTzy/FM2dCHnb9BqqiBY0R6wAGm/HIE/tV+Lp8SGIs982y4dHnK+mAohz/r55A=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 20/25] libxlu: introduce xlu_pci_parse_spec_string()
Date: Fri, 23 Oct 2020 16:23:09 +0000
Message-Id: <20201023162314.2235-21-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch largely re-writes the code to parse a PCI_SPEC_STRING and makes
the newly introduced function its entry point. The new parser also deals with
'bdf' and 'vslot' as non-positional parameters, as per the documentation in
xl-pci-configuration(5).

The existing xlu_pci_parse_bdf() function remains, but now strictly parses
BDF values. Some existing callers of xlu_pci_parse_bdf() are
modified to call xlu_pci_parse_spec_string() as per the documentation in xl(1).
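
The non-positional handling rests on splitting the string into 'key=value'
pairs. A minimal stand-alone sketch of that tokenizing step, simplified from
the shape of the new parser (the function name is invented for the example
and error handling is reduced to a single failure code):

```c
#define _GNU_SOURCE

#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Illustrative only: extract one 'key=value' pair from a comma-separated
 * list. On success, *keyp and *valp are heap copies the caller frees, and
 * *endp points at the next pair (past any trailing comma).
 */
static int parse_key_val_example(char **keyp, char **valp,
                                 const char *str, const char **endp)
{
    const char *eq = strchr(str, '=');
    const char *end;

    if (!eq)
        return -1;

    end = strchr(eq + 1, ',');
    if (!end)
        end = eq + 1 + strlen(eq + 1);

    *keyp = strndup(str, eq - str);
    *valp = strndup(eq + 1, end - (eq + 1));

    /* Step past the trailing comma, if any, ready for the next pair. */
    *endp = (*end == ',') ? end + 1 : end;

    return 0;
}
```

Repeatedly calling this with the updated cursor walks a string such as
"vslot=8,seize=1" pair by pair, which is how named parameters like 'bdf'
and 'vslot' can appear in any order.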

NOTE: Usage text in xl_cmdtable.c and error messages are also modified
      appropriately.

Fixes: d25cc3ec93eb ("libxl: workaround gcc 10.2 maybe-uninitialized warning")
Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxlutil.h    |   8 +-
 tools/libs/util/libxlu_pci.c | 354 +++++++++++++++++++++++--------------------
 tools/xl/xl_cmdtable.c       |   4 +-
 tools/xl/xl_parse.c          |   4 +-
 tools/xl/xl_pci.c            |  37 +++--
 5 files changed, 220 insertions(+), 187 deletions(-)

diff --git a/tools/include/libxlutil.h b/tools/include/libxlutil.h
index 92e35c5462..cdd6aab4f8 100644
--- a/tools/include/libxlutil.h
+++ b/tools/include/libxlutil.h
@@ -109,9 +109,15 @@ int xlu_disk_parse(XLU_Config *cfg, int nspecs, const char *const *specs,
    */
 
 /*
+ * PCI BDF
+ */
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str);
+
+/*
  * PCI specification parsing
  */
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pcidev, const char *str);
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pci,
+                              const char *str);
 
 /*
  * RDM parsing
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 5c107f2642..a8b6ce5427 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -1,5 +1,7 @@
 #define _GNU_SOURCE
 
+#include <ctype.h>
+
 #include "libxlu_internal.h"
 #include "libxlu_disk_l.h"
 #include "libxlu_disk_i.h"
@@ -9,185 +11,213 @@
 #define XLU__PCI_ERR(_c, _x, _a...) \
     if((_c) && (_c)->report) fprintf((_c)->report, _x, ##_a)
 
-static int hex_convert(const char *str, unsigned int *val, unsigned int mask)
+static int parse_bdf(libxl_pci_bdf *bdfp, uint32_t *vfunc_maskp,
+                     const char *str, const char **endp)
 {
-    unsigned long ret;
-    char *end;
-
-    ret = strtoul(str, &end, 16);
-    if ( end == str || *end != '\0' )
-        return -1;
-    if ( ret & ~mask )
-        return -1;
-    *val = (unsigned int)ret & mask;
+    const char *ptr = str;
+    unsigned int colons = 0;
+    unsigned int domain, bus, dev, func;
+    int n;
+
+    /* Count occurrences of ':' to determine presence/absence of the 'domain' */
+    while (isxdigit(*ptr) || *ptr == ':') {
+        if (*ptr == ':')
+            colons++;
+        ptr++;
+    }
+
+    ptr = str;
+    switch (colons) {
+    case 1:
+        domain = 0;
+        if (sscanf(ptr, "%x:%x.%n", &bus, &dev, &n) != 2)
+            return ERROR_INVAL;
+        break;
+    case 2:
+        if (sscanf(ptr, "%x:%x:%x.%n", &domain, &bus, &dev, &n) != 3)
+            return ERROR_INVAL;
+        break;
+    default:
+        return ERROR_INVAL;
+    }
+
+    if (domain > 0xffff || bus > 0xff || dev > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+    if (*ptr == '*') {
+        if (!vfunc_maskp)
+            return ERROR_INVAL;
+        *vfunc_maskp = LIBXL_PCI_FUNC_ALL;
+        func = 0;
+        ptr++;
+    } else {
+        if (sscanf(ptr, "%x%n", &func, &n) != 1)
+            return ERROR_INVAL;
+        if (func > 7)
+            return ERROR_INVAL;
+        if (vfunc_maskp)
+            *vfunc_maskp = 1;
+        ptr += n;
+    }
+
+    bdfp->domain = domain;
+    bdfp->bus = bus;
+    bdfp->dev = dev;
+    bdfp->func = func;
+
+    if (endp)
+        *endp = ptr;
+
     return 0;
 }
 
-static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
-                           unsigned int bus, unsigned int dev,
-                           unsigned int func, unsigned int vdevfn)
+static int parse_vslot(uint32_t *vdevfnp, const char *str, const char **endp)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
-    pci->vdevfn = vdevfn;
+    const char *ptr = str;
+    unsigned int val;
+    int n;
+
+    if (sscanf(ptr, "%x%n", &val, &n) != 1)
+        return ERROR_INVAL;
+
+    if (val > 0x1f)
+        return ERROR_INVAL;
+
+    ptr += n;
+
+    *vdevfnp = val << 3;
+
+    if (endp)
+        *endp = ptr;
+
     return 0;
 }
 
-#define STATE_DOMAIN    0
-#define STATE_BUS       1
-#define STATE_DEV       2
-#define STATE_FUNC      3
-#define STATE_VSLOT     4
-#define STATE_OPTIONS_K 6
-#define STATE_OPTIONS_V 7
-#define STATE_TERMINAL  8
-#define STATE_TYPE      9
-#define STATE_RDM_STRATEGY      10
-#define STATE_RESERVE_POLICY    11
-#define INVALID         0xffffffff
-int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_device_pci *pci, const char *str)
+static int parse_key_val(char **keyp, char **valp, const char *str,
+                         const char **endp)
 {
-    unsigned state = STATE_DOMAIN;
-    unsigned dom = INVALID, bus = INVALID, dev = INVALID, func = INVALID, vslot = 0;
-    char *buf2, *tok, *ptr, *end, *optkey = NULL;
+    const char *ptr = str;
+    char *key, *val;
+
+    while (*ptr != '=' && *ptr != '\0')
+        ptr++;
 
-    if ( NULL == (buf2 = ptr = strdup(str)) )
+    if (*ptr == '\0')
+        return ERROR_INVAL;
+
+    key = strndup(str, ptr - str);
+    if (!key)
         return ERROR_NOMEM;
 
-    for(tok = ptr, end = ptr + strlen(ptr) + 1; ptr < end; ptr++) {
-        switch(state) {
-        case STATE_DOMAIN:
-            if ( *ptr == ':' ) {
-                state = STATE_BUS;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dom, 0xffff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_BUS:
-            if ( *ptr == ':' ) {
-                state = STATE_DEV;
-                *ptr = '\0';
-                if ( hex_convert(tok, &bus, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }else if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( dom & ~0xff )
-                    goto parse_error;
-                bus = dom;
-                dom = 0;
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_DEV:
-            if ( *ptr == '.' ) {
-                state = STATE_FUNC;
-                *ptr = '\0';
-                if ( hex_convert(tok, &dev, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_FUNC:
-            if ( *ptr == '\0' || *ptr == '@' || *ptr == ',' ) {
-                switch( *ptr ) {
-                case '\0':
-                    state = STATE_TERMINAL;
-                    break;
-                case '@':
-                    state = STATE_VSLOT;
-                    break;
-                case ',':
-                    state = STATE_OPTIONS_K;
-                    break;
-                }
-                *ptr = '\0';
-                if ( !strcmp(tok, "*") ) {
-                    pci->vfunc_mask = LIBXL_PCI_FUNC_ALL;
-                }else{
-                    if ( hex_convert(tok, &func, 0x7) )
-                        goto parse_error;
-                    pci->vfunc_mask = (1 << 0);
-                }
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_VSLOT:
-            if ( *ptr == '\0' || *ptr == ',' ) {
-                state = ( *ptr == ',' ) ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( hex_convert(tok, &vslot, 0xff) )
-                    goto parse_error;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_K:
-            if ( *ptr == '=' ) {
-                state = STATE_OPTIONS_V;
-                *ptr = '\0';
-                optkey = tok;
-                tok = ptr + 1;
-            }
-            break;
-        case STATE_OPTIONS_V:
-            if ( *ptr == ',' || *ptr == '\0' ) {
-                state = (*ptr == ',') ? STATE_OPTIONS_K : STATE_TERMINAL;
-                *ptr = '\0';
-                if ( !strcmp(optkey, "msitranslate") ) {
-                    pci->msitranslate = atoi(tok);
-                }else if ( !strcmp(optkey, "power_mgmt") ) {
-                    pci->power_mgmt = atoi(tok);
-                }else if ( !strcmp(optkey, "permissive") ) {
-                    pci->permissive = atoi(tok);
-                }else if ( !strcmp(optkey, "seize") ) {
-                    pci->seize = atoi(tok);
-                } else if (!strcmp(optkey, "rdm_policy")) {
-                    if (!strcmp(tok, "strict")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                    } else if (!strcmp(tok, "relaxed")) {
-                        pci->rdm_policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                    } else {
-                        XLU__PCI_ERR(cfg, "%s is not an valid PCI RDM property"
-                                          " policy: 'strict' or 'relaxed'.",
-                                     tok);
-                        goto parse_error;
-                    }
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown PCI BDF option: %s", optkey);
-                }
-                tok = ptr + 1;
-            }
-        default:
-            break;
+    str = ++ptr; /* skip '=' */
+    while (*ptr != ',' && *ptr != '\0')
+        ptr++;
+
+    val = strndup(str, ptr - str);
+    if (!val) {
+        free(key);
+        return ERROR_NOMEM;
+    }
+
+    if (*ptr == ',')
+        ptr++;
+
+    *keyp = key;
+    *valp = val;
+    *endp = ptr;
+
+    return 0;
+}
+
+static int parse_rdm_policy(XLU_Config *cfg, libxl_rdm_reserve_policy *policy,
+                            const char *str)
+{
+    int ret = libxl_rdm_reserve_policy_from_string(str, policy);
+
+    if (ret)
+        XLU__PCI_ERR(cfg, "Unknown RDM policy: %s", str);
+
+    return ret;
+}
+
+int xlu_pci_parse_bdf(XLU_Config *cfg, libxl_pci_bdf *bdf, const char *str)
+{
+    return parse_bdf(bdf, NULL, str, NULL);
+}
+
+int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
+                              const char *str)
+{
+    const char *ptr = str;
+    bool bdf_present = false;
+    int ret;
+
+    /* Attempt to parse 'bdf' as positional parameter */
+    ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, ptr, &ptr);
+    if (!ret) {
+        bdf_present = true;
+
+        /* Check whether 'vslot' is present */
+        if (*ptr == '@') {
+            ret = parse_vslot(&pcidev->vdevfn, ++ptr, &ptr);
+            if (ret)
+                return ret;
         }
+        if (*ptr == ',')
+            ptr++;
+        else if (*ptr != '\0')
+            return ERROR_INVAL;
     }
 
-    if ( tok != ptr || state != STATE_TERMINAL )
-        goto parse_error;
+    /* Parse the rest as 'key=val' pairs */
+    while (*ptr != '\0') {
+        char *key, *val;
 
-    assert(dom != INVALID && bus != INVALID && dev != INVALID && func != INVALID);
+        ret = parse_key_val(&key, &val, ptr, &ptr);
+        if (ret)
+            return ret;
 
-    /* Just a pretty way to fill in the values */
-    pci_struct_fill(pci, dom, bus, dev, func, vslot << 3);
+        if (!strcmp(key, "bdf")) {
+            ret = parse_bdf(&pcidev->bdf, &pcidev->vfunc_mask, val, NULL);
+            bdf_present = !ret;
+        } else if (!strcmp(key, "vslot")) {
+            ret = parse_vslot(&pcidev->vdevfn, val, NULL);
+        } else if (!strcmp(key, "permissive")) {
+            pcidev->permissive = atoi(val);
+        } else if (!strcmp(key, "msitranslate")) {
+            pcidev->msitranslate = atoi(val);
+        } else if (!strcmp(key, "seize")) {
+            pcidev->seize = atoi(val);
+        } else if (!strcmp(key, "power_mgmt")) {
+            pcidev->power_mgmt = atoi(val);
+        } else if (!strcmp(key, "rdm_policy")) {
+            ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else {
+            XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
+            ret = ERROR_INVAL;
+        }
 
-    free(buf2);
+        free(key);
+        free(val);
 
-    return 0;
+        if (ret)
+            return ret;
+    }
 
-parse_error:
-    free(buf2);
-    return ERROR_INVAL;
+    if (!bdf_present)
+        return ERROR_INVAL;
+
+    return 0;
 }
 
 int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 {
+#define STATE_TYPE           0
+#define STATE_RDM_STRATEGY   1
+#define STATE_RESERVE_POLICY 2
+#define STATE_TERMINAL       3
+
     unsigned state = STATE_TYPE;
     char *buf2, *tok, *ptr, *end;
 
@@ -227,15 +257,8 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
             if (*ptr == ',' || *ptr == '\0') {
                 state = *ptr == ',' ? STATE_TYPE : STATE_TERMINAL;
                 *ptr = '\0';
-                if (!strcmp(tok, "strict")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_STRICT;
-                } else if (!strcmp(tok, "relaxed")) {
-                    rdm->policy = LIBXL_RDM_RESERVE_POLICY_RELAXED;
-                } else {
-                    XLU__PCI_ERR(cfg, "Unknown RDM property policy value: %s",
-                                 tok);
+                if (parse_rdm_policy(cfg, &rdm->policy, tok))
                     goto parse_error;
-                }
                 tok = ptr + 1;
             }
         default:
@@ -253,6 +276,11 @@ int xlu_rdm_parse(XLU_Config *cfg, libxl_rdm_reserve *rdm, const char *str)
 parse_error:
     free(buf2);
     return ERROR_INVAL;
+
+#undef STATE_TYPE
+#undef STATE_RDM_STRATEGY
+#undef STATE_RESERVE_POLICY
+#undef STATE_TERMINAL
 }
 
 /*
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 7da6c1b927..2ee0c49673 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -90,12 +90,12 @@ struct cmd_spec cmd_table[] = {
     { "pci-attach",
       &main_pciattach, 0, 1,
       "Insert a new pass-through pci device",
-      "<Domain> <BDF> [Virtual Slot]",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-detach",
       &main_pcidetach, 0, 1,
       "Remove a domain's pass-through pci device",
-      "<Domain> <BDF>",
+      "<Domain> <PCI_SPEC_STRING>",
     },
     { "pci-list",
       &main_pcilist, 0, 0,
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 0765780d9f..6a4703e745 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1487,10 +1487,10 @@ void parse_config_data(const char *config_source,
              * the global policy by default.
              */
             pci->rdm_policy = b_info->u.hvm.rdm.policy;
-            e = xlu_pci_parse_bdf(config, pci, buf);
+            e = xlu_pci_parse_spec_string(config, pci, buf);
             if (e) {
                 fprintf(stderr,
-                        "unable to parse PCI BDF `%s' for passthrough\n",
+                        "unable to parse PCI_SPEC_STRING `%s' for passthrough\n",
                         buf);
                 exit(-e);
             }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index b6dc7c2840..9c24496cb2 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -55,7 +55,7 @@ int main_pcilist(int argc, char **argv)
     return 0;
 }
 
-static int pcidetach(uint32_t domid, const char *bdf, int force)
+static int pcidetach(uint32_t domid, const char *spec_string, int force)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -66,8 +66,9 @@ static int pcidetach(uint32_t domid, const char *bdf, int force)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-detach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-detach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
     if (force) {
@@ -89,7 +90,7 @@ int main_pcidetach(int argc, char **argv)
     uint32_t domid;
     int opt;
     int force = 0;
-    const char *bdf = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "f", NULL, "pci-detach", 2) {
     case 'f':
@@ -98,15 +99,15 @@ int main_pcidetach(int argc, char **argv)
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
+    spec_string = argv[optind + 1];
 
-    if (pcidetach(domid, bdf, force))
+    if (pcidetach(domid, spec_string, force))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciattach(uint32_t domid, const char *bdf, const char *vs)
+static int pciattach(uint32_t domid, const char *spec_string)
 {
     libxl_device_pci pci;
     XLU_Config *config;
@@ -117,8 +118,9 @@ static int pciattach(uint32_t domid, const char *bdf, const char *vs)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_inig"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-attach: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_spec_string(config, &pci, spec_string)) {
+        fprintf(stderr, "pci-attach: malformed PCI_SPEC_STRING \"%s\"\n",
+                spec_string);
         exit(2);
     }
 
@@ -135,19 +137,16 @@ int main_pciattach(int argc, char **argv)
 {
     uint32_t domid;
     int opt;
-    const char *bdf = NULL, *vs = NULL;
+    const char *spec_string = NULL;
 
     SWITCH_FOREACH_OPT(opt, "", NULL, "pci-attach", 2) {
         /* No options */
     }
 
     domid = find_domain(argv[optind]);
-    bdf = argv[optind + 1];
-
-    if (optind + 1 < argc)
-        vs = argv[optind + 2];
+    spec_string = argv[optind + 1];
 
-    if (pciattach(domid, bdf, vs))
+    if (pciattach(domid, spec_string))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
@@ -193,8 +192,8 @@ static int pciassignable_add(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-add: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
@@ -235,8 +234,8 @@ static int pciassignable_remove(const char *bdf, int rebind)
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF specification \"%s\"\n", bdf);
+    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11207.29788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzts-0001RZ-GY; Fri, 23 Oct 2020 16:26:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11207.29788; Fri, 23 Oct 2020 16:26:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzts-0001RQ-Cg; Fri, 23 Oct 2020 16:26:44 +0000
Received: by outflank-mailman (input) for mailman id 11207;
 Fri, 23 Oct 2020 16:26:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztq-0000v2-Ga
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc70f8b5-957c-4ba3-b316-955b4dff14c3;
 Fri, 23 Oct 2020 16:26:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008Ne-SQ; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqi-000376-6X; Fri, 23 Oct 2020 16:23:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVztq-0000v2-Ga
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:42 +0000
X-Inumbo-ID: cc70f8b5-957c-4ba3-b316-955b4dff14c3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id cc70f8b5-957c-4ba3-b316-955b4dff14c3;
	Fri, 23 Oct 2020 16:26:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=M7KyfdRowcfz1Dkb4pOwz/I/UqTxNMMmQHabJ9z5yGE=; b=xJIIzblZXaqib5gGfqsFtCb8j
	7ciGpN8ViuR2DuGP1RRs8bEgfobxQ/q9m908lKlHUEYZY3DzbVEz4nhvqgVElLENk5fm7A9/+TLTB
	4Q+/whXZtSVhTIe+4/Pe3V+opXc6pdyN1DJ7YAmZW0C03NW00gSTseKrB3MiLNnSiBamU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVztU-0008Ne-SQ; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqi-000376-6X; Fri, 23 Oct 2020 16:23:28 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 10/25] libxl: remove get_all_assigned_devices() from libxl_pci.c
Date: Fri, 23 Oct 2020 16:22:59 +0000
Message-Id: <20201023162314.2235-11-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Use of this function is a very inefficient way to check whether a device
has already been assigned.

This patch adds code that saves the domain id in xenstore at the point of
assignment, and removes it again when the device is de-assigned (or the
domain is destroyed). It is then straightforward to check whether a device
has been assigned by checking whether it has a saved domain id.

NOTE: To facilitate the xenstore check it is necessary to move the
      pci_info_xs_read() earlier in libxl_pci.c. To keep related functions
      together, the rest of the pci_info_xs_XXX() functions are moved too.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 149 ++++++++++++++++---------------------------
 1 file changed, 55 insertions(+), 94 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 0be1b21185..879b1b24a0 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -317,50 +317,6 @@ retry_transaction2:
     return 0;
 }
 
-static int get_all_assigned_devices(libxl__gc *gc, libxl_device_pci **list, int *num)
-{
-    char **domlist;
-    unsigned int nd = 0, i;
-
-    *list = NULL;
-    *num = 0;
-
-    domlist = libxl__xs_directory(gc, XBT_NULL, "/local/domain", &nd);
-    for(i = 0; i < nd; i++) {
-        char *path, *num_devs;
-
-        path = GCSPRINTF("/local/domain/0/backend/%s/%s/0/num_devs",
-                         libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                         domlist[i]);
-        num_devs = libxl__xs_read(gc, XBT_NULL, path);
-        if ( num_devs ) {
-            int ndev = atoi(num_devs), j;
-            char *devpath, *bdf;
-
-            for(j = 0; j < ndev; j++) {
-                devpath = GCSPRINTF("/local/domain/0/backend/%s/%s/0/dev-%u",
-                                    libxl__device_kind_to_string(LIBXL__DEVICE_KIND_PCI),
-                                    domlist[i], j);
-                bdf = libxl__xs_read(gc, XBT_NULL, devpath);
-                if ( bdf ) {
-                    unsigned dom, bus, dev, func;
-                    if ( sscanf(bdf, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
-                        continue;
-
-                    *list = realloc(*list, sizeof(libxl_device_pci) * ((*num) + 1));
-                    if (*list == NULL)
-                        return ERROR_NOMEM;
-                    pci_struct_fill(*list + *num, dom, bus, dev, func, 0);
-                    (*num)++;
-                }
-            }
-        }
-    }
-    libxl__ptr_add(gc, *list);
-
-    return 0;
-}
-
 static int is_pci_in_array(libxl_device_pci *assigned, int num_assigned,
                            int dom, int bus, int dev, int func)
 {
@@ -408,19 +364,58 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
     return 0;
 }
 
+#define PCI_INFO_PATH "/libxl/pci"
+
+static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    return node ?
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
+                  pci->domain, pci->bus, pci->dev, pci->func,
+                  node) :
+        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
+                  pci->domain, pci->bus, pci->dev, pci->func);
+}
+
+
+static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node, const char *val)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
+
+    if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
+
+    return rc;
+}
+
+static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+                              const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+
+    return libxl__xs_read(gc, XBT_NULL, path);
+}
+
+static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+                               const char *node)
+{
+    char *path = pci_info_xs_path(gc, pci, node);
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    /* Remove the xenstore entry */
+    xs_rm(ctx->xsh, XBT_NULL, path);
+}
+
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new, *assigned;
+    libxl_device_pci *pcis = NULL, *new;
     struct dirent *de;
     DIR *dir;
-    int r, num_assigned;
 
     *num = 0;
 
-    r = get_all_assigned_devices(gc, &assigned, &num_assigned);
-    if (r) goto out;
-
     dir = opendir(SYSFS_PCIBACK_DRIVER);
     if (NULL == dir) {
         if (errno == ENOENT) {
@@ -436,9 +431,6 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        if (is_pci_in_array(assigned, num_assigned, dom, bus, dev, func))
-            continue;
-
         new = realloc(pcis, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
@@ -448,6 +440,10 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 
         memset(new, 0, sizeof(*new));
         pci_struct_fill(new, dom, bus, dev, func, 0);
+
+        if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
+            continue;
+
         (*num)++;
     }
 
@@ -718,48 +714,6 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
     return 0;
 }
 
-#define PCI_INFO_PATH "/libxl/pci"
-
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    return node ?
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
-                  node) :
-        GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
-}
-
-
-static void pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node, const char *val)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    if ( libxl__xs_printf(gc, XBT_NULL, path, "%s", val) < 0 ) {
-        LOGE(WARN, "Write of %s to node %s failed.", val, path);
-    }
-}
-
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
-                              const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-
-    return libxl__xs_read(gc, XBT_NULL, path);
-}
-
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
-                               const char *node)
-{
-    char *path = pci_info_xs_path(gc, pci, node);
-    libxl_ctx *ctx = libxl__gc_owner(gc);
-
-    /* Remove the xenstore entry */
-    xs_rm(ctx->xsh, XBT_NULL, path);
-}
-
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_device_pci *pci,
                                             int rebind)
@@ -1574,6 +1528,9 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         goto out;
     }
 
+    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    if (rc) goto out;
+
     libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
@@ -1701,6 +1658,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->domain, pci->bus, pci->dev, pci->func,
              rc);
+        pci_info_xs_remove(gc, pci, "domid");
     }
     aodev->rc = rc;
     aodev->callback(egc, aodev);
@@ -2276,6 +2234,9 @@ out:
     libxl__xswait_stop(gc, &prs->xswait);
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
+
+    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11208.29794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztt-0001Sb-2g; Fri, 23 Oct 2020 16:26:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11208.29794; Fri, 23 Oct 2020 16:26:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzts-0001SF-O1; Fri, 23 Oct 2020 16:26:44 +0000
Received: by outflank-mailman (input) for mailman id 11208;
 Fri, 23 Oct 2020 16:26:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztq-0000v3-IC
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:42 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cfcfee88-6d13-4832-b2d6-32347057bf74;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008Na-R0; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqq-000376-6U; Fri, 23 Oct 2020 16:23:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVztq-0000v3-IC
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:42 +0000
X-Inumbo-ID: cfcfee88-6d13-4832-b2d6-32347057bf74
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cfcfee88-6d13-4832-b2d6-32347057bf74;
	Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=B5YV8ybDELJCpp3SDKwlX7entpb3B/zsUKYNaPdCHGs=; b=XTq1cg8rSTL6bX0C1eumTX540
	pSSOgzfpjL5rjN8lZEzBEmkEOpGBCfEg8RvbgR1yqqdS3w1fJ0Hta6EU1qDCk1PD1gMjO7VsJWq8D
	tliu43kx57mLYpwrojdfJ+mUDa2htuvPNnpSwIW/B2RE7Oe8iGus5L+tC7eVDm1COEV30=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVztU-0008Na-R0; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqq-000376-6U; Fri, 23 Oct 2020 16:23:36 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 19/25] libxl: introduce 'libxl_pci_bdf' in the idl...
Date: Fri, 23 Oct 2020 16:23:08 +0000
Message-Id: <20201023162314.2235-20-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... and use in 'libxl_device_pci'

This patch is preparatory work for restricting the type passed to functions
that only require BDF information, rather than passing a 'libxl_device_pci'
structure which is only partially filled. In this patch only the minimal
mechanical changes necessary to deal with the structural changes are made.
Subsequent patches will adjust the code to make better use of the new type.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/golang/xenlight/helpers.gen.go |  77 ++++++++++++------
 tools/golang/xenlight/types.gen.go   |   8 +-
 tools/include/libxl.h                |   6 ++
 tools/libs/light/libxl_dm.c          |   8 +-
 tools/libs/light/libxl_internal.h    |   3 +-
 tools/libs/light/libxl_pci.c         | 148 +++++++++++++++++------------------
 tools/libs/light/libxl_types.idl     |  16 ++--
 tools/libs/util/libxlu_pci.c         |   8 +-
 tools/xl/xl_pci.c                    |   6 +-
 tools/xl/xl_sxp.c                    |   4 +-
 10 files changed, 167 insertions(+), 117 deletions(-)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index c8605994e7..b7230f693c 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1999,6 +1999,41 @@ xc.colo_checkpoint_port = C.CString(x.ColoCheckpointPort)}
  return nil
  }
 
+// NewPciBdf returns an instance of PciBdf initialized with defaults.
+func NewPciBdf() (*PciBdf, error) {
+var (
+x PciBdf
+xc C.libxl_pci_bdf)
+
+C.libxl_pci_bdf_init(&xc)
+defer C.libxl_pci_bdf_dispose(&xc)
+
+if err := x.fromC(&xc); err != nil {
+return nil, err }
+
+return &x, nil}
+
+func (x *PciBdf) fromC(xc *C.libxl_pci_bdf) error {
+ x.Func = byte(xc._func)
+x.Dev = byte(xc.dev)
+x.Bus = byte(xc.bus)
+x.Domain = int(xc.domain)
+
+ return nil}
+
+func (x *PciBdf) toC(xc *C.libxl_pci_bdf) (err error){defer func(){
+if err != nil{
+C.libxl_pci_bdf_dispose(xc)}
+}()
+
+xc._func = C.uint8_t(x.Func)
+xc.dev = C.uint8_t(x.Dev)
+xc.bus = C.uint8_t(x.Bus)
+xc.domain = C.int(x.Domain)
+
+ return nil
+ }
+
 // NewDevicePci returns an instance of DevicePci initialized with defaults.
 func NewDevicePci() (*DevicePci, error) {
 var (
@@ -2014,10 +2049,9 @@ return nil, err }
 return &x, nil}
 
 func (x *DevicePci) fromC(xc *C.libxl_device_pci) error {
- x.Func = byte(xc._func)
-x.Dev = byte(xc.dev)
-x.Bus = byte(xc.bus)
-x.Domain = int(xc.domain)
+ if err := x.Bdf.fromC(&xc.bdf);err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 x.Vdevfn = uint32(xc.vdevfn)
 x.VfuncMask = uint32(xc.vfunc_mask)
 x.Msitranslate = bool(xc.msitranslate)
@@ -2033,10 +2067,9 @@ if err != nil{
 C.libxl_device_pci_dispose(xc)}
 }()
 
-xc._func = C.uint8_t(x.Func)
-xc.dev = C.uint8_t(x.Dev)
-xc.bus = C.uint8_t(x.Bus)
-xc.domain = C.int(x.Domain)
+if err := x.Bdf.toC(&xc.bdf); err != nil {
+return fmt.Errorf("converting field Bdf: %v", err)
+}
 xc.vdevfn = C.uint32_t(x.Vdevfn)
 xc.vfunc_mask = C.uint32_t(x.VfuncMask)
 xc.msitranslate = C.bool(x.Msitranslate)
@@ -2766,13 +2799,13 @@ if err := x.Nics[i].fromC(&v); err != nil {
 return fmt.Errorf("converting field Nics: %v", err) }
 }
 }
-x.Pcidevs = nil
-if n := int(xc.num_pcidevs); n > 0 {
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:n:n]
-x.Pcidevs = make([]DevicePci, n)
-for i, v := range cPcidevs {
-if err := x.Pcidevs[i].fromC(&v); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err) }
+x.Pcis = nil
+if n := int(xc.num_pcis); n > 0 {
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:n:n]
+x.Pcis = make([]DevicePci, n)
+for i, v := range cPcis {
+if err := x.Pcis[i].fromC(&v); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err) }
 }
 }
 x.Rdms = nil
@@ -2922,13 +2955,13 @@ return fmt.Errorf("converting field Nics: %v", err)
 }
 }
 }
-if numPcidevs := len(x.Pcidevs); numPcidevs > 0 {
-xc.pcidevs = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcidevs)*C.sizeof_libxl_device_pci))
-xc.num_pcidevs = C.int(numPcidevs)
-cPcidevs := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:numPcidevs:numPcidevs]
-for i,v := range x.Pcidevs {
-if err := v.toC(&cPcidevs[i]); err != nil {
-return fmt.Errorf("converting field Pcidevs: %v", err)
+if numPcis := len(x.Pcis); numPcis > 0 {
+xc.pcis = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcis)*C.sizeof_libxl_device_pci))
+xc.num_pcis = C.int(numPcis)
+cPcis := (*[1<<28]C.libxl_device_pci)(unsafe.Pointer(xc.pcis))[:numPcis:numPcis]
+for i,v := range x.Pcis {
+if err := v.toC(&cPcis[i]); err != nil {
+return fmt.Errorf("converting field Pcis: %v", err)
 }
 }
 }
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index b4c5df0f2c..bc62ae8ce9 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -707,11 +707,15 @@ ColoCheckpointHost string
 ColoCheckpointPort string
 }
 
-type DevicePci struct {
+type PciBdf struct {
 Func byte
 Dev byte
 Bus byte
 Domain int
+}
+
+type DevicePci struct {
+Bdf PciBdf
 Vdevfn uint32
 VfuncMask uint32
 Msitranslate bool
@@ -896,7 +900,7 @@ CInfo DomainCreateInfo
 BInfo DomainBuildInfo
 Disks []DeviceDisk
 Nics []DeviceNic
-Pcidevs []DevicePci
+Pcis []DevicePci
 Rdms []DeviceRdm
 Dtdevs []DeviceDtdev
 Vfbs []DeviceVfb
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 8225809d94..5edacccbd1 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -464,6 +464,12 @@
 #define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
 
 /*
+ * LIBXL_HAVE_PCI_BDF indicates that the 'libxl_pci_bdf' type is defined
+ * and is embedded in the 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_BDF 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index f147a733c8..e7f36a1742 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -472,10 +472,10 @@ int libxl__domain_device_construct_rdm(libxl__gc *gc,
     for (i = 0; i < d_config->num_pcis; i++) {
         unsigned int n, nr_entries;
 
-        seg = d_config->pcis[i].domain;
-        bus = d_config->pcis[i].bus;
-        devfn = PCI_DEVFN(d_config->pcis[i].dev,
-                          d_config->pcis[i].func);
+        seg = d_config->pcis[i].bdf.domain;
+        bus = d_config->pcis[i].bdf.bus;
+        devfn = PCI_DEVFN(d_config->pcis[i].bdf.dev,
+                          d_config->pcis[i].bdf.func);
         nr_entries = 0;
         rc = libxl__xc_device_get_rdm(gc, 0,
                                       seg, bus, devfn, &nr_entries, &xrdm);
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 80d7988622..e1cb8404ab 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -4744,10 +4744,11 @@ void libxl__xcinfo2xlinfo(libxl_ctx *ctx,
  * devices have same identifier. */
 #define COMPARE_DEVID(a, b) ((a)->devid == (b)->devid)
 #define COMPARE_DISK(a, b) (!strcmp((a)->vdev, (b)->vdev))
-#define COMPARE_PCI(a, b) ((a)->domain == (b)->domain && \
+#define COMPARE_BDF(a, b) ((a)->domain == (b)->domain && \
                            (a)->bus == (b)->bus &&       \
                            (a)->dev == (b)->dev &&       \
                            (a)->func == (b)->func)
+#define COMPARE_PCI(a, b) COMPARE_BDF(&((a)->bdf), &((b)->bdf))
 #define COMPARE_USB(a, b) ((a)->ctrl == (b)->ctrl && \
                            (a)->port == (b)->port)
 #define COMPARE_USBCTRL(a, b) ((a)->devid == (b)->devid)
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 45685ebec2..fec77dd270 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -29,10 +29,10 @@ static unsigned int pci_encode_bdf(libxl_device_pci *pci)
 {
     unsigned int value;
 
-    value = pci->domain << 16;
-    value |= (pci->bus & 0xff) << 8;
-    value |= (pci->dev & 0x1f) << 3;
-    value |= (pci->func & 0x7);
+    value = pci->bdf.domain << 16;
+    value |= (pci->bdf.bus & 0xff) << 8;
+    value |= (pci->bdf.dev & 0x1f) << 3;
+    value |= (pci->bdf.func & 0x7);
 
     return value;
 }
@@ -41,10 +41,10 @@ static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
 }
 
@@ -54,9 +54,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             const libxl_device_pci *pci)
 {
     flexarray_append(back, GCSPRINTF("key-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
-    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->domain, pci->bus, pci->dev, pci->func));
+    flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     if (pci->vdevfn)
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
@@ -218,8 +218,8 @@ static int libxl__device_pci_remove_xenstore(libxl__gc *gc, uint32_t domid, libx
         unsigned int domain = 0, bus = 0, dev = 0, func = 0;
         xsdev = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/dev-%d", be_path, i));
         sscanf(xsdev, PCI_BDF, &domain, &bus, &dev, &func);
-        if (domain == pci->domain && bus == pci->bus &&
-            pci->dev == dev && pci->func == func) {
+        if (domain == pci->bdf.domain && bus == pci->bdf.bus &&
+            pci->bdf.dev == dev && pci->bdf.func == func) {
             break;
         }
     }
@@ -330,8 +330,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->domain, pci->bus,
-                    pci->dev, pci->func);
+    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+                    pci->bdf.dev, pci->bdf.func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -351,10 +351,10 @@ static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->domain, pci->bus, pci->dev, pci->func,
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->domain, pci->bus, pci->dev, pci->func);
+                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 }
 
 
@@ -452,10 +452,10 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->domain,
-                           pci->bus,
-                           pci->dev,
-                           pci->func);
+                           pci->bdf.domain,
+                           pci->bdf.bus,
+                           pci->bdf.dev,
+                           pci->bdf.func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -485,7 +485,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_vendor_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/vendor",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_vendor;
 
@@ -493,7 +493,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have vendor attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_vendor);
@@ -501,7 +501,7 @@ static uint16_t sysfs_dev_get_vendor(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read vendor of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -512,7 +512,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
 {
     char *pci_device_device_path =
             GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/device",
-                      pci->domain, pci->bus, pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     uint16_t read_items;
     uint16_t pci_device_device;
 
@@ -520,7 +520,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have device attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
     read_items = fscanf(f, "0x%hx\n", &pci_device_device);
@@ -528,7 +528,7 @@ static uint16_t sysfs_dev_get_device(libxl__gc *gc, libxl_device_pci *pci)
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read device of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         return 0xffff;
     }
 
@@ -539,14 +539,14 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
                                unsigned long *class)
 {
     char *pci_device_class_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/class",
-                     pci->domain, pci->bus, pci->dev, pci->func);
+                     pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     int read_items, ret = 0;
 
     FILE *f = fopen(pci_device_class_path, "r");
     if (!f) {
         LOGE(ERROR,
              "pci device "PCI_BDF" does not have class attribute",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
         goto out;
     }
@@ -555,7 +555,7 @@ static int sysfs_dev_get_class(libxl__gc *gc, libxl_device_pci *pci,
     if (read_items != 1) {
         LOGE(ERROR,
              "cannot read class of pci device "PCI_BDF,
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         ret = ERROR_FAIL;
     }
 
@@ -622,10 +622,10 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if (dom == pci->domain
-            && bus == pci->bus
-            && dev == pci->dev
-            && func == pci->func) {
+        if (dom == pci->bdf.domain
+            && bus == pci->bdf.bus
+            && dev == pci->bdf.dev
+            && func == pci->bdf.func) {
             rc = 1;
             goto out;
         }
@@ -651,8 +651,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->domain, pci->bus,
-                      pci->dev, pci->func);
+                      pci->bdf.domain, pci->bdf.bus,
+                      pci->bdf.dev, pci->bdf.func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -715,10 +715,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->domain;
-    bus = pci->bus;
-    dev = pci->dev;
-    func = pci->func;
+    dom = pci->bdf.domain;
+    bus = pci->bdf.bus;
+    dev = pci->bdf.dev;
+    func = pci->bdf.func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -792,8 +792,8 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     /* De-quarantine */
     rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->domain, pci->bus,
-            pci->dev, pci->func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
+            pci->bdf.dev, pci->bdf.func);
         return ERROR_FAIL;
     }
 
@@ -882,11 +882,11 @@ static int pci_multifunction_check(libxl__gc *gc, libxl_device_pci *pci, unsigne
 
         if ( sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4 )
             continue;
-        if ( pci->domain != dom )
+        if ( pci->bdf.domain != dom )
             continue;
-        if ( pci->bus != bus )
+        if ( pci->bdf.bus != bus )
             continue;
-        if ( pci->dev != dev )
+        if ( pci->bdf.dev != dev )
             continue;
 
         path = GCSPRINTF("%s/" PCI_BDF, SYSFS_PCIBACK_DRIVER, dom, bus, dev, func);
@@ -935,13 +935,13 @@ static int qemu_pci_add_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
     if (pci->vdevfn) {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF_VDEVFN","PCI_OPTIONS,
-                         pci->domain, pci->bus, pci->dev,
-                         pci->func, pci->vdevfn, pci->msitranslate,
+                         pci->bdf.domain, pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->vdevfn, pci->msitranslate,
                          pci->power_mgmt);
     } else {
         libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF","PCI_OPTIONS,
-                         pci->domain,  pci->bus, pci->dev,
-                         pci->func, pci->msitranslate, pci->power_mgmt);
+                         pci->bdf.domain,  pci->bdf.bus, pci->bdf.dev,
+                         pci->bdf.func, pci->msitranslate, pci->power_mgmt);
     }
 
     libxl__qemu_traditional_cmd(gc, domid, "pci-ins");
@@ -1100,10 +1100,10 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
     libxl__qmp_param_add_string(gc, &args, "driver",
                                 "xen-pci-passthrough");
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     QMP_PARAMETERS_SPRINTF(&args, "hostaddr",
-                           "%04x:%02x:%02x.%01x", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+                           "%04x:%02x:%02x.%01x", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     if (pci->vdevfn) {
         QMP_PARAMETERS_SPRINTF(&args, "addr", "%x.%x",
                                PCI_SLOT(pci->vdevfn),
@@ -1191,7 +1191,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
      */
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     for (i = 0; (bus = libxl__json_array_get(response, i)); i++) {
         devices = libxl__json_map_get("devices", bus, JSON_ARRAY);
@@ -1281,8 +1281,8 @@ static void pci_add_dm_done(libxl__egc *egc,
     if (isstubdom)
         starting = false;
 
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                           pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     start = end = flags = size = 0;
     irq = 0;
@@ -1322,8 +1322,8 @@ static void pci_add_dm_done(libxl__egc *egc,
         }
     }
     fclose(f);
-    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                                pci->bus, pci->dev, pci->func);
+    sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                                pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     f = fopen(sysfs_path, "r");
     if (f == NULL) {
         LOGED(ERROR, domid, "Couldn't open %s", sysfs_path);
@@ -1494,7 +1494,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
-                 pci->domain, pci->bus, pci->dev, pci->func,
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
                  errno == EOPNOTSUPP ? "cannot be assigned - no IOMMU"
                  : "already assigned to a different guest");
             goto out;
@@ -1512,7 +1512,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     if (!libxl_pci_assignable(ctx, pci)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
-             pci->domain, pci->bus, pci->dev, pci->func);
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
@@ -1520,7 +1520,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
-    libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+    libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
@@ -1601,13 +1601,13 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
         pci->vfunc_mask &= pfunc_mask;
         /* so now vfunc_mask == pfunc_mask */
     }else{
-        pfunc_mask = (1 << pci->func);
+        pfunc_mask = (1 << pci->bdf.func);
     }
 
     for (rc = 0, i = 7; i >= 0; --i) {
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 /* if not passing through multiple devices in a block make
@@ -1639,7 +1639,7 @@ static void device_pci_add_done(libxl__egc *egc,
         LOGD(ERROR, domid,
              "libxl__device_pci_add  failed for "
              "PCI device %x:%x:%x.%x (rc %d)",
-             pci->domain, pci->bus, pci->dev, pci->func,
+             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
@@ -1739,8 +1739,8 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/state");
     state = libxl__xs_read(gc, XBT_NULL, path);
     path = DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, "/parameter");
-    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->domain,
-                     pci->bus, pci->dev, pci->func);
+    libxl__xs_printf(gc, XBT_NULL, path, PCI_BDF, pci->bdf.domain,
+                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* Remove all functions at once atomically by only signalling
      * device-model for function 0 */
@@ -1853,8 +1853,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
     } else {
         assert(type == LIBXL_DOMAIN_TYPE_PV);
 
-        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->domain,
-                                     pci->bus, pci->dev, pci->func);
+        char *sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/resource", pci->bdf.domain,
+                                     pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         FILE *f = fopen(sysfs_path, "r");
         unsigned int start = 0, end = 0, flags = 0, size = 0;
         int irq = 0;
@@ -1889,8 +1889,8 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
         }
         fclose(f);
 skip1:
-        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
-                               pci->bus, pci->dev, pci->func);
+        sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->bdf.domain,
+                               pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         f = fopen(sysfs_path, "r");
         if (f == NULL) {
             LOGED(ERROR, domid, "Couldn't open %s", sysfs_path);
@@ -1954,7 +1954,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     if (rc) goto out;
 
     QMP_PARAMETERS_SPRINTF(&args, "id", PCI_PT_QDEV_ID,
-                           pci->bus, pci->dev, pci->func);
+                           pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     prs->qmp.callback = pci_remove_qmp_device_del_cb;
     rc = libxl__ev_qmp_send(egc, &prs->qmp, "device_del", args);
     if (rc) goto out;
@@ -2023,7 +2023,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
     libxl__ev_qmp_dispose(gc, qmp);
 
     asked_id = GCSPRINTF(PCI_PT_QDEV_ID,
-                         pci->bus, pci->dev, pci->func);
+                         pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* query-pci response:
      * [{ 'devices': [ 'qdev_id': 'str', ...  ], ... }]
@@ -2074,7 +2074,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
-         PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
+         PCI_PT_QDEV_ID, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
 
     /* If we timed out, we might still want to keep destroying the device
      * (when force==true), so let the next function decide what to do on
@@ -2106,7 +2106,7 @@ static void pci_remove_detached(libxl__egc *egc,
 
     /* don't do multiple resets while some functions are still passed through */
     if ((pci->vdevfn & 0x7) == 0) {
-        libxl__device_pci_reset(gc, pci->domain, pci->bus, pci->dev, pci->func);
+        libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
     }
 
     if (!isstubdom) {
@@ -2194,7 +2194,7 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
         }
         pci->vfunc_mask &= prs->pfunc_mask;
     } else {
-        prs->pfunc_mask = (1 << pci->func);
+        prs->pfunc_mask = (1 << pci->bdf.func);
     }
 
     rc = 0;
@@ -2222,7 +2222,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
         prs->next_func--;
         if ( (1 << i) & pfunc_mask ) {
             if ( pci->vfunc_mask == pfunc_mask ) {
-                pci->func = i;
+                pci->bdf.func = i;
                 pci->vdevfn = orig_vdev | i;
             } else {
                 pci->vdevfn = orig_vdev;
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 20f8dd7cfa..2c441142fb 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -769,18 +769,22 @@ libxl_device_nic = Struct("device_nic", [
     ("colo_checkpoint_port", string)
     ])
 
+libxl_pci_bdf = Struct("pci_bdf", [
+    ("func", uint8),
+    ("dev", uint8),
+    ("bus", uint8),
+    ("domain", integer),
+    ])
+
 libxl_device_pci = Struct("device_pci", [
-    ("func",      uint8),
-    ("dev",       uint8),
-    ("bus",       uint8),
-    ("domain",    integer),
-    ("vdevfn",    uint32),
+    ("bdf", libxl_pci_bdf),
+    ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
     ("power_mgmt", bool),
     ("permissive", bool),
     ("seize", bool),
-    ("rdm_policy",      libxl_rdm_reserve_policy),
+    ("rdm_policy", libxl_rdm_reserve_policy),
     ])
 
 libxl_device_rdm = Struct("device_rdm", [
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index 1d38fffce3..5c107f2642 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -27,10 +27,10 @@ static int pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                            unsigned int bus, unsigned int dev,
                            unsigned int func, unsigned int vdevfn)
 {
-    pci->domain = domain;
-    pci->bus = bus;
-    pci->dev = dev;
-    pci->func = func;
+    pci->bdf.domain = domain;
+    pci->bdf.bus = bus;
+    pci->bdf.dev = dev;
+    pci->bdf.func = func;
     pci->vdevfn = vdevfn;
     return 0;
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index f71498cbb5..b6dc7c2840 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -34,7 +34,8 @@ static void pcilist(uint32_t domid)
     for (i = 0; i < num; i++) {
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_list_free(pcis, num);
 }
@@ -163,7 +164,8 @@ static void pciassignable_list(void)
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
+               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
+               pcis[i].bdf.func);
     }
     libxl_device_pci_assignable_list_free(pcis, num);
 }
diff --git a/tools/xl/xl_sxp.c b/tools/xl/xl_sxp.c
index b03e348ffb..95180b60df 100644
--- a/tools/xl/xl_sxp.c
+++ b/tools/xl/xl_sxp.c
@@ -194,8 +194,8 @@ void printf_info_sexp(int domid, libxl_domain_config *d_config, FILE *fh)
         fprintf(fh, "\t(device\n");
         fprintf(fh, "\t\t(pci\n");
         fprintf(fh, "\t\t\t(pci dev %04x:%02x:%02x.%01x@%02x)\n",
-               d_config->pcis[i].domain, d_config->pcis[i].bus,
-               d_config->pcis[i].dev, d_config->pcis[i].func,
+               d_config->pcis[i].bdf.domain, d_config->pcis[i].bdf.bus,
+               d_config->pcis[i].bdf.dev, d_config->pcis[i].bdf.func,
                d_config->pcis[i].vdevfn);
         fprintf(fh, "\t\t\t(opts msitranslate %d power_mgmt %d)\n",
                d_config->pcis[i].msitranslate,
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11209.29813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztx-0001d1-G3; Fri, 23 Oct 2020 16:26:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11209.29813; Fri, 23 Oct 2020 16:26:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztx-0001cp-9a; Fri, 23 Oct 2020 16:26:49 +0000
Received: by outflank-mailman (input) for mailman id 11209;
 Fri, 23 Oct 2020 16:26:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztv-0000v2-Gn
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15e9921f-64bf-430c-8b85-f3d2e864f4a5;
 Fri, 23 Oct 2020 16:26:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008NW-O7; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqj-000376-3f; Fri, 23 Oct 2020 16:23:29 +0000
X-Inumbo-ID: 15e9921f-64bf-430c-8b85-f3d2e864f4a5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=GUr1/jWBqMoyqKCH+//ockHXZ9S72D3qfXA2SoYLOJs=; b=fltJbSQ7mhgz40Z9DOfjMZ85W
	RQsoLGbyaE54ZzwCBoeIfnI26GSASuInU+DU1+RIw8Fq86Zk0/st1PW6IutuwNYPhoE0Qyna653lt
	9BnFkpvVp81qDYSrtGwOPnYRuSIAhkE6fKLA5zDg4j7Aba4ljUmoBBPTvr/BoftPHlZgE=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 11/25] libxl: make sure callers of libxl_device_pci_list() free the list after use
Date: Fri, 23 Oct 2020 16:23:00 +0000
Message-Id: <20201023162314.2235-12-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

A previous patch introduced libxl_device_pci_list_free() which should be used
by callers of libxl_device_pci_list() to properly dispose of the exported
'libxl_device_pci' types and free the memory holding them. Whilst all
current callers do ensure the memory is freed, only the code in xl's
pcilist() function actually calls libxl_device_pci_dispose(). As it stands
this laxity does not lead to any memory leaks, but the simple addition of
e.g. a 'string' into the idl definition of 'libxl_device_pci' would lead
to leaks.

This patch makes sure all callers of libxl_device_pci_list() can call
libxl_device_pci_list_free() by keeping copies of 'libxl_device_pci'
structures inline in 'pci_add_state' and 'pci_remove_state' (and also making
sure these are properly disposed at the end of the operations) rather
than keeping pointers to the structures returned by libxl_device_pci_list().

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/libs/light/libxl_pci.c | 68 ++++++++++++++++++++++++--------------------
 tools/xl/xl_pci.c            |  3 +-
 2 files changed, 38 insertions(+), 33 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 879b1b24a0..3162facb37 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1006,7 +1006,7 @@ typedef struct pci_add_state {
     libxl__xswait_state xswait;
     libxl__ev_qmp qmp;
     libxl__ev_time timeout;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     libxl_domid pci_domid;
 } pci_add_state;
 
@@ -1078,7 +1078,7 @@ static void pci_add_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1099,7 +1099,7 @@ static void pci_add_qmp_device_add(libxl__egc *egc, pci_add_state *pas)
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     libxl__ev_qmp *const qmp = &pas->qmp;
 
     rc = libxl__ev_time_register_rel(ao, &pas->timeout,
@@ -1180,7 +1180,7 @@ static void pci_add_qmp_query_pci_cb(libxl__egc *egc,
     int dev_slot, dev_func;
 
     /* Convenience aliases */
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1280,7 +1280,7 @@ static void pci_add_dm_done(libxl__egc *egc,
 
     /* Convenience aliases */
     bool starting = pas->starting;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
     bool hvm = libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM;
 
     libxl__ev_qmp_dispose(gc, &pas->qmp);
@@ -1496,7 +1496,10 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     GCNEW(pas);
     pas->aodev = aodev;
     pas->domid = domid;
-    pas->pci = pci;
+
+    libxl_device_pci_copy(CTX, &pas->pci, pci);
+    pci = &pas->pci;
+
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
@@ -1535,12 +1538,6 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
 
     stubdomid = libxl_get_stubdom_id(ctx, domid);
     if (stubdomid != 0) {
-        libxl_device_pci *pci_s;
-
-        GCNEW(pci_s);
-        libxl_device_pci_init(pci_s);
-        libxl_device_pci_copy(CTX, pci_s, pci);
-        pas->pci = pci_s;
         pas->callback = device_pci_add_stubdom_wait;
 
         do_pci_add(egc, stubdomid, pas); /* must be last */
@@ -1599,7 +1596,7 @@ static void device_pci_add_stubdom_done(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) goto out;
 
@@ -1650,7 +1647,7 @@ static void device_pci_add_done(libxl__egc *egc,
     EGC_GC;
     libxl__ao_device *aodev = pas->aodev;
     libxl_domid domid = pas->domid;
-    libxl_device_pci *pci = pas->pci;
+    libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
         LOGD(ERROR, domid,
@@ -1660,6 +1657,7 @@ static void device_pci_add_done(libxl__egc *egc,
              rc);
         pci_info_xs_remove(gc, pci, "domid");
     }
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -1766,7 +1764,7 @@ static int qemu_pci_remove_xenstore(libxl__gc *gc, uint32_t domid,
 typedef struct pci_remove_state {
     libxl__ao_device *aodev;
     libxl_domid domid;
-    libxl_device_pci *pci;
+    libxl_device_pci pci;
     bool force;
     bool hvm;
     unsigned int orig_vdev;
@@ -1808,22 +1806,25 @@ static void do_pci_remove(libxl__egc *egc, pci_remove_state *prs)
 {
     STATE_AO_GC(prs->aodev->ao);
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    libxl_device_pci *assigned;
+    libxl_device_pci *pcis;
+    bool attached;
     uint32_t domid = prs->domid;
     libxl_domain_type type = libxl__domain_type(gc, domid);
-    libxl_device_pci *pci = prs->pci;
+    libxl_device_pci *pci = &prs->pci;
     int rc, num;
 
-    assigned = libxl_device_pci_list(ctx, domid, &num);
-    if (assigned == NULL) {
+    pcis = libxl_device_pci_list(ctx, domid, &num);
+    if (!pcis) {
         rc = ERROR_FAIL;
         goto out_fail;
     }
-    libxl__ptr_add(gc, assigned);
+
+    attached = is_pci_in_array(pcis, num, pci->domain,
+                               pci->bus, pci->dev, pci->func);
+    libxl_device_pci_list_free(pcis, num);
 
     rc = ERROR_INVAL;
-    if ( !is_pci_in_array(assigned, num, pci->domain,
-                          pci->bus, pci->dev, pci->func) ) {
+    if (!attached) {
         LOGD(ERROR, domid, "PCI device not attached to this domain");
         goto out_fail;
     }
@@ -1923,7 +1924,7 @@ static void pci_remove_qemu_trad_watch_state_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl_domid domid = prs->domid;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = check_qemu_running(gc, domid, xswa, rc, state);
     if (rc == ERROR_NOT_READY)
@@ -1945,7 +1946,7 @@ static void pci_remove_qmp_device_del(libxl__egc *egc,
     int rc;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     rc = libxl__ev_time_register_rel(ao, &prs->timeout,
                                      pci_remove_timeout,
@@ -2015,7 +2016,7 @@ static void pci_remove_qmp_query_cb(libxl__egc *egc,
 
     /* Convenience aliases */
     libxl__ao *const ao = prs->aodev->ao;
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     if (rc) goto out;
 
@@ -2070,7 +2071,7 @@ static void pci_remove_timeout(libxl__egc *egc, libxl__ev_time *ev,
     pci_remove_state *prs = CONTAINER_OF(ev, *prs, timeout);
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     LOGD(WARN, prs->domid, "timed out waiting for DM to remove "
          PCI_PT_QDEV_ID, pci->bus, pci->dev, pci->func);
@@ -2091,7 +2092,7 @@ static void pci_remove_detached(libxl__egc *egc,
     bool isstubdom;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
 
     /* Cleaning QMP states ASAP */
     libxl__ev_qmp_dispose(gc, &prs->qmp);
@@ -2153,7 +2154,7 @@ static void pci_remove_done(libxl__egc *egc,
 
     if (rc) goto out;
 
-    libxl__device_pci_remove_xenstore(gc, prs->domid, prs->pci);
+    libxl__device_pci_remove_xenstore(gc, prs->domid, &prs->pci);
 out:
     device_pci_remove_common_next(egc, prs, rc);
 }
@@ -2171,7 +2172,10 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     GCNEW(prs);
     prs->aodev = aodev;
     prs->domid = domid;
-    prs->pci = pci;
+
+    libxl_device_pci_copy(CTX, &prs->pci, pci);
+    pci = &prs->pci;
+
     prs->force = force;
     libxl__xswait_init(&prs->xswait);
     libxl__ev_qmp_init(&prs->qmp);
@@ -2206,7 +2210,7 @@ static void device_pci_remove_common_next(libxl__egc *egc,
     EGC_GC;
 
     /* Convenience aliases */
-    libxl_device_pci *const pci = prs->pci;
+    libxl_device_pci *const pci = &prs->pci;
     libxl__ao_device *const aodev = prs->aodev;
     const unsigned int pfunc_mask = prs->pfunc_mask;
     const unsigned int orig_vdev = prs->orig_vdev;
@@ -2237,6 +2241,7 @@ out:
 
     if (!rc) pci_info_xs_remove(gc, pci, "domid");
 
+    libxl_device_pci_dispose(pci);
     aodev->rc = rc;
     aodev->callback(egc, aodev);
 }
@@ -2339,7 +2344,6 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
     pcis = libxl_device_pci_list(CTX, domid, &num);
     if ( pcis == NULL )
         return;
-    libxl__ptr_add(gc, pcis);
 
     for (i = 0; i < num; i++) {
         /* Force remove on shutdown since, on HVM, qemu will not always
@@ -2350,6 +2354,8 @@ void libxl__device_pci_destroy_all(libxl__egc *egc, uint32_t domid,
         libxl__device_pci_remove_common(egc, domid, pcis + i, true,
                                         aodev);
     }
+
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int libxl__grant_vga_iomem_permission(libxl__gc *gc, const uint32_t domid,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 34fcf5a4fa..7c0f102ac7 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -35,9 +35,8 @@ static void pcilist(uint32_t domid)
         printf("%02x.%01x %04x:%02x:%02x.%01x\n",
                (pcis[i].vdevfn >> 3) & 0x1f, pcis[i].vdevfn & 0x7,
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_list_free(pcis, num);
 }
 
 int main_pcilist(int argc, char **argv)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11210.29819 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzty-0001ex-CP; Fri, 23 Oct 2020 16:26:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11210.29819; Fri, 23 Oct 2020 16:26:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVztx-0001eU-UM; Fri, 23 Oct 2020 16:26:49 +0000
Received: by outflank-mailman (input) for mailman id 11210;
 Fri, 23 Oct 2020 16:26:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVztv-0000v3-IC
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 528ee80c-e585-44e9-9d9c-44a3707f41eb;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztV-0008O8-7t; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzql-000376-RK; Fri, 23 Oct 2020 16:23:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVztv-0000v3-IC
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:47 +0000
X-Inumbo-ID: 528ee80c-e585-44e9-9d9c-44a3707f41eb
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 528ee80c-e585-44e9-9d9c-44a3707f41eb;
	Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=uoyjLt+gARtd7y51vlGjeKLiscOL3AjdBsOhpLXNLIw=; b=Eo22PrB86p+nJTk8ojvcFZIRc
	U/EWYsurGfPCK+QdMPAdcL8rtyEP0vqJo5RF2acfgCYWlkQVxDoOcfhkUqyN8m+fbnJPdutyhPxID
	PG1+iiIUBf6pAfdgUWmJjIWIj3PBZPjkQ0W0p8nMzBNyFB/yQBkQ/ErH7Y3EP1+DSCFJY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVztV-0008O8-7t; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzql-000376-RK; Fri, 23 Oct 2020 16:23:32 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 14/25] libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
Date: Fri, 23 Oct 2020 16:23:03 +0000
Message-Id: <20201023162314.2235-15-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Other parameters, such as 'msitranslate' and 'permissive', are dealt with
but 'rdm_policy' appears to have been completely missed.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
---
 tools/libs/light/libxl_pci.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 2e8e1c50f1..c5d73133eb 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -61,9 +61,9 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
         flexarray_append_pair(back, GCSPRINTF("vdevfn-%d", num), GCSPRINTF("%x", pci->vdevfn));
     flexarray_append(back, GCSPRINTF("opts-%d", num));
     flexarray_append(back,
-              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d",
-                             pci->msitranslate, pci->power_mgmt,
-                             pci->permissive));
+              GCSPRINTF("msitranslate=%d,power_mgmt=%d,permissive=%d,rdm_policy=%s",
+                        pci->msitranslate, pci->power_mgmt,
+                        pci->permissive, libxl_rdm_reserve_policy_to_string(pci->rdm_policy)));
     flexarray_append_pair(back, GCSPRINTF("state-%d", num), GCSPRINTF("%d", XenbusStateInitialising));
 }
 
@@ -2310,6 +2310,9 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
             } else if (!strcmp(p, "permissive")) {
                 p = strtok_r(NULL, ",=", &saveptr);
                 pci->permissive = atoi(p);
+            } else if (!strcmp(p, "rdm_policy")) {
+                p = strtok_r(NULL, ",=", &saveptr);
+                libxl_rdm_reserve_policy_from_string(p, &pci->rdm_policy);
             }
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11214.29837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzu2-0001nl-0v; Fri, 23 Oct 2020 16:26:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11214.29837; Fri, 23 Oct 2020 16:26:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzu1-0001nX-Pw; Fri, 23 Oct 2020 16:26:53 +0000
Received: by outflank-mailman (input) for mailman id 11214;
 Fri, 23 Oct 2020 16:26:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzu0-0000v2-Gx
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f5ada0d-630a-4352-a9a4-a759950906c0;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008Nk-Tw; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqw-000376-Cy; Fri, 23 Oct 2020 16:23:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVzu0-0000v2-Gx
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:52 +0000
X-Inumbo-ID: 1f5ada0d-630a-4352-a9a4-a759950906c0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1f5ada0d-630a-4352-a9a4-a759950906c0;
	Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=xc8Rt4FNyf81W81G3gNl5AS6+YoOb+nL0S+vPdlRodM=; b=jpkOw9P+gXjavbrsUKHlCAFP8
	RWwQDAW5Af6QjdsjUWpRZ5pyHuTHZPSbO2dYpetDWaHddBykEDl06tpDFXYMM/YqYGm1IO6Pxp7Sx
	fnLTMK21Sla90Bv9zkpn2SuDmq6IccqRrzdKxidRr82l/OCUVdhXDNO4JmsgULxHCozjk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVztU-0008Nk-Tw; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqw-000376-Cy; Fri, 23 Oct 2020 16:23:42 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 25/25] xl / libxl: support 'xl pci-attach/detach' by name
Date: Fri, 23 Oct 2020 16:23:14 +0000
Message-Id: <20201023162314.2235-26-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch adds a 'name' field to the IDL for 'libxl_device_pci', and
libxlu_pci_parse_spec_string() is modified to parse the new 'name'
parameter of PCI_SPEC_STRING, as detailed in the updated documentation
in xl-pci-configuration(5).

If the 'name' field is non-NULL then both libxl_device_pci_add() and
libxl_device_pci_remove() will use it to look up the device BDF in
the list of assignable devices.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h            |  6 ++++
 tools/libs/light/libxl_pci.c     | 67 +++++++++++++++++++++++++++++++++++++---
 tools/libs/light/libxl_types.idl |  1 +
 tools/libs/util/libxlu_pci.c     |  7 ++++-
 tools/xl/xl_pci.c                |  1 +
 5 files changed, 76 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 4025d3a3d4..5b55a20155 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -485,6 +485,12 @@
 #define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_NAME indicates that the 'name' field of
+ * libxl_device_pci is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_NAME 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 0f7d655aff..e5d54732c3 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -60,6 +60,10 @@ static void libxl_create_pci_backend_device(libxl__gc *gc,
                                             int num,
                                             const libxl_device_pci *pci)
 {
+    if (pci->name) {
+        flexarray_append(back, GCSPRINTF("name-%d", num));
+        flexarray_append(back, GCSPRINTF("%s", pci->name));
+    }
     flexarray_append(back, GCSPRINTF("key-%d", num));
     flexarray_append(back, GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func));
     flexarray_append(back, GCSPRINTF("dev-%d", num));
@@ -252,6 +256,7 @@ retry_transaction:
 
 retry_transaction2:
     t = xs_transaction_start(ctx->xsh);
+    xs_rm(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/state-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/key-%d", be_path, i));
     xs_rm(ctx->xsh, t, GCSPRINTF("%s/dev-%d", be_path, i));
@@ -290,6 +295,12 @@ retry_transaction2:
             xs_write(ctx->xsh, t, GCSPRINTF("%s/vdevfn-%d", be_path, j - 1), tmp, strlen(tmp));
             xs_rm(ctx->xsh, t, tmppath);
         }
+        tmppath = GCSPRINTF("%s/name-%d", be_path, j);
+        tmp = libxl__xs_read(gc, t, tmppath);
+        if (tmp) {
+            xs_write(ctx->xsh, t, GCSPRINTF("%s/name-%d", be_path, j - 1), tmp, strlen(tmp));
+            xs_rm(ctx->xsh, t, tmppath);
+        }
     }
     if (!xs_transaction_end(ctx->xsh, t, 0))
         if (errno == EAGAIN)
@@ -1586,6 +1597,23 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->starting = starting;
     pas->callback = device_pci_add_stubdom_done;
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &pci->bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
         rc = xc_test_assign_device(ctx->xch, domid,
                                    pci_encode_bdf(&pci->bdf));
@@ -1734,11 +1762,19 @@ static void device_pci_add_done(libxl__egc *egc,
     libxl_device_pci *pci = &pas->pci;
 
     if (rc) {
-        LOGD(ERROR, domid,
-             "libxl__device_pci_add  failed for "
-             "PCI device %x:%x:%x.%x (rc %d)",
-             pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
-             rc);
+        if (pci->name) {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device '%s' (rc %d)",
+                 pci->name,
+                 rc);
+        } else {
+            LOGD(ERROR, domid,
+                 "libxl__device_pci_add failed for "
+                 "PCI device %x:%x:%x.%x (rc %d)",
+                 pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                 rc);
+        }
         pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
@@ -2284,6 +2320,23 @@ static void libxl__device_pci_remove_common(libxl__egc *egc,
     libxl__ev_time_init(&prs->timeout);
     libxl__ev_time_init(&prs->retry_timer);
 
+    if (pci->name) {
+        libxl_pci_bdf *pcibdf =
+            libxl_device_pci_assignable_name2bdf(CTX, pci->name);
+
+        if (!pcibdf) {
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        LOGD(DETAIL, domid, "'%s' -> %04x:%02x:%02x.%u", pci->name,
+             pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
+
+        libxl_pci_bdf_copy(CTX, &prs->pci.bdf, pcibdf);
+        libxl_pci_bdf_dispose(pcibdf);
+        free(pcibdf);
+    }
+
     prs->orig_vdev = pci->vdevfn & ~7U;
 
     if ( pci->vfunc_mask == LIBXL_PCI_FUNC_ALL ) {
@@ -2418,6 +2471,10 @@ static int libxl__device_pci_from_xs_be(libxl__gc *gc,
         } while ((p = strtok_r(NULL, ",=", &saveptr)) != NULL);
     }
 
+    s = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("%s/name-%d", be_path, nr));
+    if (s)
+        pci->name = strdup(s);
+
     return 0;
 }
 
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 2c441142fb..44bad36f1c 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -778,6 +778,7 @@ libxl_pci_bdf = Struct("pci_bdf", [
 
 libxl_device_pci = Struct("device_pci", [
     ("bdf", libxl_pci_bdf),
+    ("name", string),
     ("vdevfn", uint32),
     ("vfunc_mask", uint32),
     ("msitranslate", bool),
diff --git a/tools/libs/util/libxlu_pci.c b/tools/libs/util/libxlu_pci.c
index a8b6ce5427..543a1f80e9 100644
--- a/tools/libs/util/libxlu_pci.c
+++ b/tools/libs/util/libxlu_pci.c
@@ -151,6 +151,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
 {
     const char *ptr = str;
     bool bdf_present = false;
+    bool name_present = false;
     int ret;
 
     /* Attempt to parse 'bdf' as positional parameter */
@@ -193,6 +194,10 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             pcidev->power_mgmt = atoi(val);
         } else if (!strcmp(key, "rdm_policy")) {
             ret = parse_rdm_policy(cfg, &pcidev->rdm_policy, val);
+        } else if (!strcmp(key, "name")) {
+            name_present = true;
+            pcidev->name = strdup(val);
+            if (!pcidev->name) ret = ERROR_NOMEM;
         } else {
             XLU__PCI_ERR(cfg, "Unknown PCI_SPEC_STRING option: %s", key);
             ret = ERROR_INVAL;
@@ -205,7 +210,7 @@ int xlu_pci_parse_spec_string(XLU_Config *cfg, libxl_device_pci *pcidev,
             return ret;
     }
 
-    if (!bdf_present)
+    if (!(bdf_present ^ name_present))
         return ERROR_INVAL;
 
     return 0;
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index f1b58b3976..1f89fed6db 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -71,6 +71,7 @@ static int pcidetach(uint32_t domid, const char *spec_string, int force)
                 spec_string);
         exit(2);
     }
+
     if (force) {
         if (libxl_device_pci_destroy(ctx, domid, &pci, 0))
             r = 1;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11215.29843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzu2-0001pG-L0; Fri, 23 Oct 2020 16:26:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11215.29843; Fri, 23 Oct 2020 16:26:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzu2-0001op-92; Fri, 23 Oct 2020 16:26:54 +0000
Received: by outflank-mailman (input) for mailman id 11215;
 Fri, 23 Oct 2020 16:26:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzu0-0000v3-IR
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d654f5d-4e2a-4f1a-88ec-68c7ff500dd8;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztU-0008No-VR; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqs-000376-7I; Fri, 23 Oct 2020 16:23:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVzu0-0000v3-IR
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:52 +0000
X-Inumbo-ID: 7d654f5d-4e2a-4f1a-88ec-68c7ff500dd8
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7d654f5d-4e2a-4f1a-88ec-68c7ff500dd8;
	Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=OvopI12UbRrhQAhqN7hvDR3hBLZ7P1RnY9jfoW3jfa8=; b=vi73Ud6zHzB7KkLW/mCTovsm/
	d8gb0T8FrD/00Tem41RHymTesctebKGvTjQ7GjOgbnKfY56yvK0lLRBmKtVavlRrRU3QdXZxVOs0H
	hWzRBBndV3AyTGzmRwI0ObRWfFm5hCmhOIhrbZHoFLC9ziIEbxMdbUxs9ZOIFPCUcYt0g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVztU-0008No-VR; Fri, 23 Oct 2020 16:26:20 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqs-000376-7I; Fri, 23 Oct 2020 16:23:38 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 21/25] libxl: modify libxl_device_pci_assignable_add/remove/list/list_free()...
Date: Fri, 23 Oct 2020 16:23:10 +0000
Message-Id: <20201023162314.2235-22-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to use 'libxl_pci_bdf' rather than 'libxl_device_pci'.

This patch modifies the API and callers accordingly. It also modifies
several internal functions in libxl_pci.c that support the API to also use
'libxl_pci_bdf'.

NOTE: The OCaml bindings are adjusted to accommodate the interface change.
      It should therefore not affect compatibility with OCaml-based
      utilities.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  15 ++-
 tools/libs/light/libxl_pci.c         | 215 +++++++++++++++++++----------------
 tools/ocaml/libs/xl/xenlight_stubs.c |  15 ++-
 tools/xl/xl_pci.c                    |  32 +++---
 4 files changed, 157 insertions(+), 120 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5edacccbd1..5703fdf367 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -470,6 +470,13 @@
 #define LIBXL_HAVE_PCI_BDF 1
 
 /*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_BDF indicates that the
+ * libxl_device_pci_assignable_add/remove/list/list_free() functions all
+ * use the 'libxl_pci_bdf' type rather than 'libxl_device_pci' type.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2378,10 +2385,10 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index fec77dd270..5104f31448 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -25,26 +25,33 @@
 #define PCI_BDF_XSPATH         "%04x-%02x-%02x-%01x"
 #define PCI_PT_QDEV_ID         "pci-pt-%02x_%02x.%01x"
 
-static unsigned int pci_encode_bdf(libxl_device_pci *pci)
+static unsigned int pci_encode_bdf(libxl_pci_bdf *pcibdf)
 {
     unsigned int value;
 
-    value = pci->bdf.domain << 16;
-    value |= (pci->bdf.bus & 0xff) << 8;
-    value |= (pci->bdf.dev & 0x1f) << 3;
-    value |= (pci->bdf.func & 0x7);
+    value = pcibdf->domain << 16;
+    value |= (pcibdf->bus & 0xff) << 8;
+    value |= (pcibdf->dev & 0x1f) << 3;
+    value |= (pcibdf->func & 0x7);
 
     return value;
 }
 
+static void pcibdf_struct_fill(libxl_pci_bdf *pcibdf, unsigned int domain,
+                               unsigned int bus, unsigned int dev,
+                               unsigned int func)
+{
+    pcibdf->domain = domain;
+    pcibdf->bus = bus;
+    pcibdf->dev = dev;
+    pcibdf->func = func;
+}
+
 static void pci_struct_fill(libxl_device_pci *pci, unsigned int domain,
                             unsigned int bus, unsigned int dev,
                             unsigned int func, unsigned int vdevfn)
 {
-    pci->bdf.domain = domain;
-    pci->bdf.bus = bus;
-    pci->bdf.dev = dev;
-    pci->bdf.func = func;
+    pcibdf_struct_fill(&pci->bdf, domain, bus, dev, func);
     pci->vdevfn = vdevfn;
 }
 
@@ -318,8 +325,8 @@ static int is_pci_in_array(libxl_device_pci *pcis, int num,
 }
 
 /* Write the standard BDF into the sysfs path given by sysfs_path. */
-static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
-                           libxl_device_pci *pci)
+static int sysfs_write_bdf(libxl__gc *gc, const char *sysfs_path,
+                           libxl_pci_bdf *pcibdf)
 {
     int rc, fd;
     char *buf;
@@ -330,8 +337,8 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
         return ERROR_FAIL;
     }
 
-    buf = GCSPRINTF(PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-                    pci->bdf.dev, pci->bdf.func);
+    buf = GCSPRINTF(PCI_BDF, pcibdf->domain, pcibdf->bus,
+                    pcibdf->dev, pcibdf->func);
     rc = write(fd, buf, strlen(buf));
     /* Annoying to have two if's, but we need the errno */
     if (rc < 0)
@@ -346,22 +353,22 @@ static int sysfs_write_bdf(libxl__gc *gc, const char * sysfs_path,
 
 #define PCI_INFO_PATH "/libxl/pci"
 
-static char *pci_info_xs_path(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_path(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
     return node ?
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH"/%s",
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
                   node) :
         GCSPRINTF(PCI_INFO_PATH"/"PCI_BDF_XSPATH,
-                  pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
+                  pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func);
 }
 
 
-static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
+static int pci_info_xs_write(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node, const char *val)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     int rc = libxl__xs_printf(gc, XBT_NULL, path, "%s", val);
 
     if (rc) LOGE(WARN, "Write of %s to node %s failed.", val, path);
@@ -369,28 +376,28 @@ static int pci_info_xs_write(libxl__gc *gc, libxl_device_pci *pci,
     return rc;
 }
 
-static char *pci_info_xs_read(libxl__gc *gc, libxl_device_pci *pci,
+static char *pci_info_xs_read(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                               const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
 
     return libxl__xs_read(gc, XBT_NULL, path);
 }
 
-static void pci_info_xs_remove(libxl__gc *gc, libxl_device_pci *pci,
+static void pci_info_xs_remove(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                                const char *node)
 {
-    char *path = pci_info_xs_path(gc, pci, node);
+    char *path = pci_info_xs_path(gc, pcibdf, node);
     libxl_ctx *ctx = libxl__gc_owner(gc);
 
     /* Remove the xenstore entry */
     xs_rm(ctx->xsh, XBT_NULL, path);
 }
 
-libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
+libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
 {
     GC_INIT(ctx);
-    libxl_device_pci *pcis = NULL, *new;
+    libxl_pci_bdf *pcibdfs = NULL, *new;
     struct dirent *de;
     DIR *dir;
 
@@ -411,15 +418,15 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         if (sscanf(de->d_name, PCI_BDF, &dom, &bus, &dev, &func) != 4)
             continue;
 
-        new = realloc(pcis, ((*num) + 1) * sizeof(*new));
+        new = realloc(pcibdfs, ((*num) + 1) * sizeof(*new));
         if (NULL == new)
             continue;
 
-        pcis = new;
-        new = pcis + *num;
+        pcibdfs = new;
+        new = pcibdfs + *num;
 
-        libxl_device_pci_init(new);
-        pci_struct_fill(new, dom, bus, dev, func, 0);
+        libxl_pci_bdf_init(new);
+        pcibdf_struct_fill(new, dom, bus, dev, func);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
             continue;
@@ -430,32 +437,32 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
     closedir(dir);
 out:
     GC_FREE;
-    return pcis;
+    return pcibdfs;
 }
 
-void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num)
 {
     int i;
 
     for (i = 0; i < num; i++)
-        libxl_device_pci_dispose(&list[i]);
+        libxl_pci_bdf_dispose(&list[i]);
 
     free(list);
 }
 
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
-static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
+static int sysfs_dev_unbind(libxl__gc *gc, libxl_pci_bdf *pcibdf,
                             char **driver_path)
 {
     char * spath, *dp = NULL;
     struct stat st;
 
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/driver",
-                           pci->bdf.domain,
-                           pci->bdf.bus,
-                           pci->bdf.dev,
-                           pci->bdf.func);
+                           pcibdf->domain,
+                           pcibdf->bus,
+                           pcibdf->dev,
+                           pcibdf->func);
     if ( !lstat(spath, &st) ) {
         /* Find the canonical path to the driver. */
         dp = libxl__zalloc(gc, PATH_MAX);
@@ -469,7 +476,7 @@ static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
 
         /* Unbind from the old driver */
         spath = GCSPRINTF("%s/unbind", dp);
-        if ( sysfs_write_bdf(gc, spath, pci) < 0 ) {
+        if ( sysfs_write_bdf(gc, spath, pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't unbind device");
             return -1;
         }
@@ -607,8 +614,8 @@ bool libxl__is_igd_vga_passthru(libxl__gc *gc,
  * already exist.
  */
 
-/* Scan through /sys/.../pciback/slots looking for pci's BDF */
-static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
+/* Scan through /sys/.../pciback/slots looking for BDF */
+static int pciback_dev_has_slot(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     FILE *f;
     int rc = 0;
@@ -621,11 +628,11 @@ static int pciback_dev_has_slot(libxl__gc *gc, libxl_device_pci *pci)
         return ERROR_FAIL;
     }
 
-    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func)==4) {
-        if (dom == pci->bdf.domain
-            && bus == pci->bdf.bus
-            && dev == pci->bdf.dev
-            && func == pci->bdf.func) {
+    while (fscanf(f, "%x:%x:%x.%d\n", &dom, &bus, &dev, &func) == 4) {
+        if (dom == pcibdf->domain
+           && bus == pcibdf->bus
+           && dev == pcibdf->dev
+           && func == pcibdf->func) {
             rc = 1;
             goto out;
         }
@@ -635,7 +642,7 @@ out:
     return rc;
 }
 
-static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_is_assigned(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     char * spath;
     int rc;
@@ -651,8 +658,8 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     }
 
     spath = GCSPRINTF(SYSFS_PCIBACK_DRIVER"/"PCI_BDF,
-                      pci->bdf.domain, pci->bdf.bus,
-                      pci->bdf.dev, pci->bdf.func);
+                      pcibdf->domain, pcibdf->bus,
+                      pcibdf->dev, pcibdf->func);
     rc = lstat(spath, &st);
 
     if( rc == 0 )
@@ -663,40 +670,40 @@ static int pciback_dev_is_assigned(libxl__gc *gc, libxl_device_pci *pci)
     return -1;
 }
 
-static int pciback_dev_assign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_assign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     int rc;
 
-    if ( (rc = pciback_dev_has_slot(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_has_slot(gc, pcibdf)) < 0 ) {
         LOGE(ERROR, "Error checking for pciback slot");
         return ERROR_FAIL;
     } else if (rc == 0) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/new_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't bind device to pciback!");
             return ERROR_FAIL;
         }
     }
 
-    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pci) < 0 ) {
+    if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/bind", pcibdf) < 0 ) {
         LOGE(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
     return 0;
 }
 
-static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
+static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 {
     /* Remove from pciback */
-    if ( sysfs_dev_unbind(gc, pci, NULL) < 0 ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, NULL) < 0 ) {
         LOG(ERROR, "Couldn't unbind device!");
         return ERROR_FAIL;
     }
 
     /* Remove slot if necessary */
-    if ( pciback_dev_has_slot(gc, pci) > 0 ) {
+    if ( pciback_dev_has_slot(gc, pcibdf) > 0 ) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/remove_slot",
-                             pci) < 0 ) {
+                             pcibdf) < 0 ) {
             LOGE(ERROR, "Couldn't remove pciback slot");
             return ERROR_FAIL;
         }
@@ -705,7 +712,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_device_pci *pci)
 }
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
-                                            libxl_device_pci *pci,
+                                            libxl_pci_bdf *pcibdf,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -715,10 +722,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     struct stat st;
 
     /* Local copy for convenience */
-    dom = pci->bdf.domain;
-    bus = pci->bdf.bus;
-    dev = pci->bdf.dev;
-    func = pci->bdf.func;
+    dom = pcibdf->domain;
+    bus = pcibdf->bus;
+    dev = pcibdf->dev;
+    func = pcibdf->func;
 
     /* See if the device exists */
     spath = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF, dom, bus, dev, func);
@@ -728,7 +735,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if it's already assigned to pciback */
-    rc = pciback_dev_is_assigned(gc, pci);
+    rc = pciback_dev_is_assigned(gc, pcibdf);
     if ( rc < 0 ) {
         return ERROR_FAIL;
     }
@@ -738,7 +745,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
-    if ( sysfs_dev_unbind(gc, pci, &driver_path ) ) {
+    if ( sysfs_dev_unbind(gc, pcibdf, &driver_path ) ) {
         LOG(ERROR, "Couldn't unbind "PCI_BDF" from driver",
             dom, bus, dev, func);
         return ERROR_FAIL;
@@ -747,9 +754,9 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     /* Store driver_path for rebinding to dom0 */
     if ( rebind ) {
         if ( driver_path ) {
-            pci_info_xs_write(gc, pci, "driver_path", driver_path);
+            pci_info_xs_write(gc, pcibdf, "driver_path", driver_path);
         } else if ( (driver_path =
-                     pci_info_xs_read(gc, pci, "driver_path")) != NULL ) {
+                     pci_info_xs_read(gc, pcibdf, "driver_path")) != NULL ) {
             LOG(INFO, PCI_BDF" not bound to a driver, will be rebound to %s",
                 dom, bus, dev, func, driver_path);
         } else {
@@ -757,10 +764,10 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
                 dom, bus, dev, func);
         }
     } else {
-        pci_info_xs_remove(gc, pci, "driver_path");
+        pci_info_xs_remove(gc, pcibdf, "driver_path");
     }
 
-    if ( pciback_dev_assign(gc, pci) ) {
+    if ( pciback_dev_assign(gc, pcibdf) ) {
         LOG(ERROR, "Couldn't bind device to pciback!");
         return ERROR_FAIL;
     }
@@ -771,7 +778,7 @@ quarantine:
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
      * unnecessarily denied.
      */
-    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci),
+    rc = xc_assign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf),
                           XEN_DOMCTL_DEV_RDM_RELAXED);
     if ( rc < 0 ) {
         LOG(ERROR, "failed to quarantine "PCI_BDF, dom, bus, dev, func);
@@ -782,7 +789,7 @@ quarantine:
 }
 
 static int libxl__device_pci_assignable_remove(libxl__gc *gc,
-                                               libxl_device_pci *pci,
+                                               libxl_pci_bdf *pcibdf,
                                                int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -790,24 +797,24 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     char *driver_path;
 
     /* De-quarantine */
-    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pci));
+    rc = xc_deassign_device(ctx->xch, DOMID_IO, pci_encode_bdf(pcibdf));
     if ( rc < 0 ) {
-        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pci->bdf.domain, pci->bdf.bus,
-            pci->bdf.dev, pci->bdf.func);
+        LOG(ERROR, "failed to de-quarantine "PCI_BDF, pcibdf->domain,
+            pcibdf->bus, pcibdf->dev, pcibdf->func);
         return ERROR_FAIL;
     }
 
     /* Unbind from pciback */
-    if ( (rc = pciback_dev_is_assigned(gc, pci)) < 0 ) {
+    if ( (rc = pciback_dev_is_assigned(gc, pcibdf)) < 0 ) {
         return ERROR_FAIL;
     } else if ( rc ) {
-        pciback_dev_unassign(gc, pci);
+        pciback_dev_unassign(gc, pcibdf);
     } else {
         LOG(WARN, "Not bound to pciback");
     }
 
     /* Rebind if necessary */
-    driver_path = pci_info_xs_read(gc, pci, "driver_path");
+    driver_path = pci_info_xs_read(gc, pcibdf, "driver_path");
 
     if ( driver_path ) {
         if ( rebind ) {
@@ -815,12 +822,12 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
 
             if ( sysfs_write_bdf(gc,
                                  GCSPRINTF("%s/bind", driver_path),
-                                 pci) < 0 ) {
+                                 pcibdf) < 0 ) {
                 LOGE(ERROR, "Couldn't bind device to %s", driver_path);
                 return -1;
             }
 
-            pci_info_xs_remove(gc, pci, "driver_path");
+            pci_info_xs_remove(gc, pcibdf, "driver_path");
         }
     } else {
         if ( rebind ) {
@@ -832,26 +839,26 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
     return 0;
 }
 
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                     int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
 }
 
 
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci,
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
                                        int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_remove(gc, pci, rebind);
+    rc = libxl__device_pci_assignable_remove(gc, pcibdf, rebind);
 
     GC_FREE;
     return rc;
@@ -1352,7 +1359,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
-                             pci) < 0 ) {
+                             &pci->bdf) < 0 ) {
             LOGD(ERROR, domid, "Setting permissive for device");
             rc = ERROR_FAIL;
             goto out;
@@ -1368,7 +1375,8 @@ out_no_irq:
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(pci), flag);
+        r = xc_assign_device(ctx->xch, domid, pci_encode_bdf(&pci->bdf),
+                             flag);
         if (r < 0 && (hvm || errno != ENOSYS)) {
             LOGED(ERROR, domid, "xc_assign_device failed");
             rc = ERROR_FAIL;
@@ -1447,15 +1455,28 @@ int libxl_device_pci_add(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
-static bool libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
+static int is_bdf_in_array(libxl_pci_bdf *pcibdfs, int num,
+                           libxl_pci_bdf *pcibdf)
 {
-    libxl_device_pci *pcis;
+    int i;
+
+    for(i = 0; i < num; i++) {
+        if (COMPARE_BDF(pcibdf, &pcibdfs[i]))
+            break;
+    }
+
+    return i < num;
+}
+
+static bool is_bdf_assignable(libxl_ctx *ctx, libxl_pci_bdf *pcibdf)
+{
+    libxl_pci_bdf *pcibdfs;
     int num;
     bool assignable;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
-    assignable = is_pci_in_array(pcis, num, pci);
-    libxl_device_pci_assignable_list_free(pcis, num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
+    assignable = is_bdf_in_array(pcibdfs, num, pcibdf);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 
     return assignable;
 }
@@ -1490,7 +1511,8 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     pas->callback = device_pci_add_stubdom_done;
 
     if (libxl__domain_type(gc, domid) == LIBXL_DOMAIN_TYPE_HVM) {
-        rc = xc_test_assign_device(ctx->xch, domid, pci_encode_bdf(pci));
+        rc = xc_test_assign_device(ctx->xch, domid,
+                                   pci_encode_bdf(&pci->bdf));
         if (rc) {
             LOGD(ERROR, domid,
                  "PCI device %04x:%02x:%02x.%u %s?",
@@ -1504,20 +1526,20 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     rc = libxl__device_pci_setdefault(gc, domid, pci, !starting);
     if (rc) goto out;
 
-    if (pci->seize && !pciback_dev_is_assigned(gc, pci)) {
-        rc = libxl__device_pci_assignable_add(gc, pci, 1);
+    if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
         if ( rc )
             goto out;
     }
 
-    if (!libxl_pci_assignable(ctx, pci)) {
+    if (!is_bdf_assignable(ctx, &pci->bdf)) {
         LOGD(ERROR, domid, "PCI device %x:%x:%x.%x is not assignable",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
         rc = ERROR_FAIL;
         goto out;
     }
 
-    rc = pci_info_xs_write(gc, pci, "domid", GCSPRINTF("%u", domid));
+    rc = pci_info_xs_write(gc, &pci->bdf, "domid", GCSPRINTF("%u", domid));
     if (rc) goto out;
 
     libxl__device_pci_reset(gc, pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func);
@@ -1641,7 +1663,7 @@ static void device_pci_add_done(libxl__egc *egc,
              "PCI device %x:%x:%x.%x (rc %d)",
              pci->bdf.domain, pci->bdf.bus, pci->bdf.dev, pci->bdf.func,
              rc);
-        pci_info_xs_remove(gc, pci, "domid");
+        pci_info_xs_remove(gc, &pci->bdf, "domid");
     }
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
@@ -2110,7 +2132,8 @@ static void pci_remove_detached(libxl__egc *egc,
     }
 
     if (!isstubdom) {
-        rc = xc_deassign_device(CTX->xch, domid, pci_encode_bdf(pci));
+        rc = xc_deassign_device(CTX->xch, domid,
+                                pci_encode_bdf(&pci->bdf));
         if (rc < 0 && (prs->hvm || errno != ENOSYS))
             LOGED(ERROR, domid, "xc_deassign_device failed");
     }
@@ -2239,7 +2262,7 @@ out:
     libxl__ev_time_deregister(gc, &prs->timeout);
     libxl__ev_time_deregister(gc, &prs->retry_timer);
 
-    if (!rc) pci_info_xs_remove(gc, pci, "domid");
+    if (!rc) pci_info_xs_remove(gc, &pci->bdf, "domid");
 
     libxl_device_pci_dispose(pci);
     aodev->rc = rc;
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 352a00134d..2388f23869 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,7 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -861,7 +861,7 @@ value stub_xl_device_pci_assignable_remove(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_remove(CTX, &c_info, c_rebind);
+	ret = libxl_device_pci_assignable_remove(CTX, &c_info.bdf, c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
@@ -876,7 +876,7 @@ value stub_xl_device_pci_assignable_list(value ctx)
 {
 	CAMLparam1(ctx);
 	CAMLlocal2(list, temp);
-	libxl_device_pci *c_list;
+	libxl_pci_bdf *c_list;
 	int i, nb;
 	uint32_t c_domid;
 
@@ -889,11 +889,18 @@ value stub_xl_device_pci_assignable_list(value ctx)
 
 	list = temp = Val_emptylist;
 	for (i = 0; i < nb; i++) {
+		libxl_device_pci pci;
+
+		libxl_device_pci_init(&pci);
+		libxl_pci_bdf_copy(CTX, &pci.bdf, &c_list[i]);
+
 		list = caml_alloc_small(2, Tag_cons);
 		Field(list, 0) = Val_int(0);
 		Field(list, 1) = temp;
 		temp = list;
-		Store_field(list, 0, Val_device_pci(&c_list[i]));
+		Store_field(list, 0, Val_device_pci(&pci));
+
+		libxl_device_pci_dispose(&pci);
 	}
 	libxl_device_pci_assignable_list_free(c_list, nb);
 
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 9c24496cb2..37708b4eb1 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -154,19 +154,19 @@ int main_pciattach(int argc, char **argv)
 
 static void pciassignable_list(void)
 {
-    libxl_device_pci *pcis;
+    libxl_pci_bdf *pcibdfs;
     int num, i;
 
-    pcis = libxl_device_pci_assignable_list(ctx, &num);
+    pcibdfs = libxl_device_pci_assignable_list(ctx, &num);
 
-    if ( pcis == NULL )
+    if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
-               pcis[i].bdf.domain, pcis[i].bdf.bus, pcis[i].bdf.dev,
-               pcis[i].bdf.func);
+               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
+               pcibdfs[i].func);
     }
-    libxl_device_pci_assignable_list_free(pcis, num);
+    libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
@@ -183,24 +183,24 @@ int main_pciassignable_list(int argc, char **argv)
 
 static int pciassignable_add(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-add: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -225,24 +225,24 @@ int main_pciassignable_add(int argc, char **argv)
 
 static int pciassignable_remove(const char *bdf, int rebind)
 {
-    libxl_device_pci pci;
+    libxl_pci_bdf pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_device_pci_init(&pci);
+    libxl_pci_bdf_init(&pcibdf);
 
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pci.bdf, bdf)) {
+    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
         fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pci, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
         r = 1;
 
-    libxl_device_pci_dispose(&pci);
+    libxl_pci_bdf_dispose(&pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:26:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:26:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11219.29862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzu7-0001zr-1n; Fri, 23 Oct 2020 16:26:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11219.29862; Fri, 23 Oct 2020 16:26:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzu6-0001zO-Mc; Fri, 23 Oct 2020 16:26:58 +0000
Received: by outflank-mailman (input) for mailman id 11219;
 Fri, 23 Oct 2020 16:26:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzu5-0000v2-HE
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:26:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf6a7257-063a-40c2-9d1c-081ae0f7b8be;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztV-0008O2-6X; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqk-000376-7K; Fri, 23 Oct 2020 16:23:30 +0000
X-Inumbo-ID: bf6a7257-063a-40c2-9d1c-081ae0f7b8be
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=TwtfBZ1TBMcZx0WwXud6JOyhwZrUvxUUMk2BtEGrrXI=; b=PHGk+IBka0LiWVfU4vT9v9Myv
	iiBrJ0Dt5HDFp9njxUh4fYZt+K+LX+9nJslKhWFaBCjuQWNAx3P5kuxj7ghRjuYGF3/QRxfykH5z6
	0YFMvzcH/zj8+AdJBYuAh1XdGGR+U0MLRrew4cPVX56ejNbYmX6a7LEeaatWmU9iNPebU=;
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 12/25] libxl: add libxl_device_pci_assignable_list_free()...
Date: Fri, 23 Oct 2020 16:23:01 +0000
Message-Id: <20201023162314.2235-13-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

... to be used by callers of libxl_device_pci_assignable_list().

Currently there is no API for callers of libxl_device_pci_assignable_list()
to free the list. The xl function pciassignable_list() calls
libxl_device_pci_dispose() on each element of the returned list, but
libxl_pci_assignable() in libxl_pci.c does not. Neither does the implementation
of libxl_device_pci_assignable_list() call libxl_device_pci_init().

This patch adds the new API function, ensures it is used everywhere, and
modifies libxl_device_pci_assignable_list() to initialize list entries
rather than just zeroing them.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                |  7 +++++++
 tools/libs/light/libxl_pci.c         | 14 ++++++++++++--
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +--
 tools/xl/xl_pci.c                    |  3 +--
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ee52d3cf7e..8225809d94 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -458,6 +458,12 @@
 #define LIBXL_HAVE_DEVICE_PCI_LIST_FREE 1
 
 /*
+ * LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE indicates that the
+ * libxl_device_pci_assignable_list_free() function is defined.
+ */
+#define LIBXL_HAVE_DEVICE_PCI_ASSIGNABLE_LIST_FREE 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2369,6 +2375,7 @@ int libxl_device_events_handler(libxl_ctx *ctx,
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_device_pci *pci, int rebind);
 libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num);
 
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 3162facb37..e858509609 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -438,7 +438,7 @@ libxl_device_pci *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num)
         pcis = new;
         new = pcis + *num;
 
-        memset(new, 0, sizeof(*new));
+        libxl_device_pci_init(new);
         pci_struct_fill(new, dom, bus, dev, func, 0);
 
         if (pci_info_xs_read(gc, new, "domid")) /* already assigned */
@@ -453,6 +453,16 @@ out:
     return pcis;
 }
 
+void libxl_device_pci_assignable_list_free(libxl_device_pci *list, int num)
+{
+    int i;
+
+    for (i = 0; i < num; i++)
+        libxl_device_pci_dispose(&list[i]);
+
+    free(list);
+}
+
 /* Unbind device from its current driver, if any.  If driver_path is non-NULL,
  * store the path to the original driver in it. */
 static int sysfs_dev_unbind(libxl__gc *gc, libxl_device_pci *pci,
@@ -1470,7 +1480,7 @@ static int libxl_pci_assignable(libxl_ctx *ctx, libxl_device_pci *pci)
             pcis[i].func == pci->func)
             break;
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
     return i != num;
 }
 
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 1181971da4..352a00134d 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -894,9 +894,8 @@ value stub_xl_device_pci_assignable_list(value ctx)
 		Field(list, 1) = temp;
 		temp = list;
 		Store_field(list, 0, Val_device_pci(&c_list[i]));
-		libxl_device_pci_dispose(&c_list[i]);
 	}
-	free(c_list);
+	libxl_device_pci_assignable_list_free(c_list, nb);
 
 	CAMLreturn(list);
 }
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 7c0f102ac7..f71498cbb5 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -164,9 +164,8 @@ static void pciassignable_list(void)
     for (i = 0; i < num; i++) {
         printf("%04x:%02x:%02x.%01x\n",
                pcis[i].domain, pcis[i].bus, pcis[i].dev, pcis[i].func);
-        libxl_device_pci_dispose(&pcis[i]);
     }
-    free(pcis);
+    libxl_device_pci_assignable_list_free(pcis, num);
 }
 
 int main_pciassignable_list(int argc, char **argv)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:27:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:27:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11223.29873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzuB-00028K-A6; Fri, 23 Oct 2020 16:27:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11223.29873; Fri, 23 Oct 2020 16:27:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kVzuB-000282-6D; Fri, 23 Oct 2020 16:27:03 +0000
Received: by outflank-mailman (input) for mailman id 11223;
 Fri, 23 Oct 2020 16:27:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1kVzuA-0000v2-HC
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:27:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8512d213-ebb3-411b-ac50-ba2ab9e66ddb;
 Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVztV-0008Nu-2M; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com
 ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1kVzqu-000376-4o; Fri, 23 Oct 2020 16:23:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=X6LH=D6=xen.org=paul@srs-us1.protection.inumbo.net>)
	id 1kVzuA-0000v2-HC
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:27:02 +0000
X-Inumbo-ID: 8512d213-ebb3-411b-ac50-ba2ab9e66ddb
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 8512d213-ebb3-411b-ac50-ba2ab9e66ddb;
	Fri, 23 Oct 2020 16:26:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=ILx3JMJce2ioNh3ZqL79hk3Ma28xIlwlj5jXju/NDSo=; b=yYuFGUw7nDAR/fYCQJIol4etH
	oHdy36tjOiw6boekQOWzfcXV79tqDBEnc3o8XexFeDJude7uCfQAfcCjZqrYLLQ+ViyjOzUoEIvz0
	ZGO5erBs7lSrgDN4iAGqfMDdAJZHJZKkKS/LVeUxjMnhWLE9E+4QtpmDzysYyJpMSLUe0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVztV-0008Nu-2M; Fri, 23 Oct 2020 16:26:21 +0000
Received: from ec2-18-200-132-236.eu-west-1.compute.amazonaws.com ([18.200.132.236] helo=ip-10-0-185-232.eu-west-1.compute.internal)
	by xenbits.xenproject.org with esmtpsa (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <paul@xen.org>)
	id 1kVzqu-000376-4o; Fri, 23 Oct 2020 16:23:40 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant <pdurrant@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH 23/25] xl / libxl: support naming of assignable devices
Date: Fri, 23 Oct 2020 16:23:12 +0000
Message-Id: <20201023162314.2235-24-paul@xen.org>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
References: <20201023162314.2235-1-paul@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

This patch modifies libxl_device_pci_assignable_add() to take an optional
'name' argument, which (if supplied) is saved into xenstore and can hence be
used to refer to the now-assignable BDF in subsequent operations. To
facilitate this, a new libxl_device_pci_assignable_name2bdf() function is
added.

The xl code is modified to allow a name to be specified in the
'pci-assignable-add' operation and also allow an option to be specified to
'pci-assignable-list' requesting that names be displayed. The latter is
facilitated by a new libxl_device_pci_assignable_bdf2name() function. Finally,
xl 'pci-assignable-remove' is modified so that either a name or BDF can be
supplied. The supplied 'identifier' is first assumed to be a name, but if
libxl_device_pci_assignable_name2bdf() fails to find a matching BDF the
identifier itself will be parsed as a BDF. Names may only include printable
characters and may not include whitespace.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Wei Liu <wl@xen.org>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/include/libxl.h                | 19 +++++++-
 tools/libs/light/libxl_pci.c         | 86 +++++++++++++++++++++++++++++++++---
 tools/ocaml/libs/xl/xenlight_stubs.c |  3 +-
 tools/xl/xl_cmdtable.c               | 12 +++--
 tools/xl/xl_pci.c                    | 84 ++++++++++++++++++++++++-----------
 5 files changed, 166 insertions(+), 38 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index 5703fdf367..4025d3a3d4 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -477,6 +477,14 @@
 #define LIBXL_HAVE_PCI_ASSIGNABLE_BDF 1
 
 /*
+ * LIBXL_HAVE_PCI_ASSIGNABLE_NAME indicates that the
+ * libxl_device_pci_assignable_add() function takes a 'name' argument
+ * and that the libxl_device_pci_assignable_name2bdf() and
+ * libxl_device_pci_assignable_bdf2name() functions are defined.
+ */
+#define LIBXL_HAVE_PCI_ASSIGNABLE_NAME 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
@@ -2385,11 +2393,18 @@ int libxl_device_events_handler(libxl_ctx *ctx,
  * added or is not bound, the functions will emit a warning but return
  * SUCCESS.
  */
-int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
-int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf, int rebind);
+int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                    const char *name, int rebind);
+int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
+                                       int rebind);
 libxl_pci_bdf *libxl_device_pci_assignable_list(libxl_ctx *ctx, int *num);
 void libxl_device_pci_assignable_list_free(libxl_pci_bdf *list, int num);
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name);
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf);
+
 /* CPUID handling */
 int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str);
 int libxl_cpuid_parse_config_xend(libxl_cpuid_policy_list *cpuid,
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 5104f31448..0f7d655aff 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -713,6 +713,7 @@ static int pciback_dev_unassign(libxl__gc *gc, libxl_pci_bdf *pcibdf)
 
 static int libxl__device_pci_assignable_add(libxl__gc *gc,
                                             libxl_pci_bdf *pcibdf,
+                                            const char *name,
                                             int rebind)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
@@ -721,6 +722,23 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     int rc;
     struct stat st;
 
+    /* Sanitise any name that was passed */
+    if (name) {
+        unsigned int i, n = strlen(name);
+
+        if (n > 64) { /* Reasonable upper bound on name length */
+            LOG(ERROR, "Name too long");
+            return ERROR_FAIL;
+        }
+
+        for (i = 0; i < n; i++) {
+            if (!isgraph(name[i])) {
+                LOG(ERROR, "Names may only include printable characters");
+                return ERROR_FAIL;
+            }
+        }
+    }
+
     /* Local copy for convenience */
     dom = pcibdf->domain;
     bus = pcibdf->bus;
@@ -741,7 +759,7 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
     }
     if ( rc ) {
         LOG(WARN, PCI_BDF" already assigned to pciback", dom, bus, dev, func);
-        goto quarantine;
+        goto name;
     }
 
     /* Check to see if there's already a driver that we need to unbind from */
@@ -772,7 +790,12 @@ static int libxl__device_pci_assignable_add(libxl__gc *gc,
         return ERROR_FAIL;
     }
 
-quarantine:
+name:
+    if (name)
+        pci_info_xs_write(gc, pcibdf, "name", name);
+    else
+        pci_info_xs_remove(gc, pcibdf, "name");
+
     /*
      * DOMID_IO is just a sentinel domain, without any actual mappings,
      * so always pass XEN_DOMCTL_DEV_RDM_RELAXED to avoid assignment being
@@ -836,16 +859,18 @@ static int libxl__device_pci_assignable_remove(libxl__gc *gc,
         }
     }
 
+    pci_info_xs_remove(gc, pcibdf, "name");
+
     return 0;
 }
 
 int libxl_device_pci_assignable_add(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
-                                    int rebind)
+                                    const char *name, int rebind)
 {
     GC_INIT(ctx);
     int rc;
 
-    rc = libxl__device_pci_assignable_add(gc, pcibdf, rebind);
+    rc = libxl__device_pci_assignable_add(gc, pcibdf, name, rebind);
 
     GC_FREE;
     return rc;
@@ -864,6 +889,57 @@ int libxl_device_pci_assignable_remove(libxl_ctx *ctx, libxl_pci_bdf *pcibdf,
     return rc;
 }
 
+libxl_pci_bdf *libxl_device_pci_assignable_name2bdf(libxl_ctx *ctx,
+                                                    const char *name)
+{
+    GC_INIT(ctx);
+    char **bdfs;
+    libxl_pci_bdf *pcibdf;
+    unsigned int i, n;
+
+    bdfs = libxl__xs_directory(gc, XBT_NULL, PCI_INFO_PATH, &n);
+    if (!n)
+        goto out;
+
+    pcibdf = calloc(1, sizeof(*pcibdf));
+    if (!pcibdf)
+        goto out;
+
+    for (i = 0; i < n; i++) {
+        unsigned dom, bus, dev, func;
+        const char *tmp;
+
+        if (sscanf(bdfs[i], PCI_BDF_XSPATH, &dom, &bus, &dev, &func) != 4)
+            continue;
+
+        pcibdf_struct_fill(pcibdf, dom, bus, dev, func);
+
+        tmp = pci_info_xs_read(gc, pcibdf, "name");
+        if (tmp && !strcmp(tmp, name))
+            goto out;
+    }
+
+    free(pcibdf);
+    pcibdf = NULL;
+
+out:
+    GC_FREE;
+    return pcibdf;
+}
+
+char *libxl_device_pci_assignable_bdf2name(libxl_ctx *ctx,
+                                           libxl_pci_bdf *pcibdf)
+{
+    GC_INIT(ctx);
+    char *name = NULL, *tmp = pci_info_xs_read(gc, pcibdf, "name");
+
+    if (tmp)
+        name = strdup(tmp);
+
+    GC_FREE;
+    return name;
+}
+
 /*
  * This function checks that all functions of a device are bound to pciback
  * driver. It also initialises a bit-mask of which function numbers are present
@@ -1527,7 +1603,7 @@ void libxl__device_pci_add(libxl__egc *egc, uint32_t domid,
     if (rc) goto out;
 
     if (pci->seize && !pciback_dev_is_assigned(gc, &pci->bdf)) {
-        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, 1);
+        rc = libxl__device_pci_assignable_add(gc, &pci->bdf, NULL, 1);
         if ( rc )
             goto out;
     }
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
index 2388f23869..96bb4655e0 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -840,7 +840,8 @@ value stub_xl_device_pci_assignable_add(value ctx, value info, value rebind)
 	device_pci_val(CTX, &c_info, info);
 
 	caml_enter_blocking_section();
-	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, c_rebind);
+	ret = libxl_device_pci_assignable_add(CTX, &c_info.bdf, NULL,
+					      c_rebind);
 	caml_leave_blocking_section();
 
 	libxl_device_pci_dispose(&c_info);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 2ee0c49673..9e9aa448e2 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -105,21 +105,25 @@ struct cmd_spec cmd_table[] = {
     { "pci-assignable-add",
       &main_pciassignable_add, 0, 1,
       "Make a device assignable for pci-passthru",
-      "<BDF>",
+      "[options] <BDF>",
+      "-n NAME, --name=NAME    Name the assignable device.\n"
       "-h                      Print this help.\n"
     },
     { "pci-assignable-remove",
       &main_pciassignable_remove, 0, 1,
       "Remove a device from being assignable",
-      "[options] <BDF>",
+      "[options] <BDF>|NAME",
       "-h                      Print this help.\n"
       "-r                      Attempt to re-assign the device to the\n"
-      "                        original driver"
+      "                        original driver."
     },
     { "pci-assignable-list",
       &main_pciassignable_list, 0, 0,
       "List all the assignable pci devices",
-      "",
+      "[options]",
+      "-h                      Print this help.\n"
+      "-n, --show-names        Display assignable device names where\n"
+      "                        supplied.\n"
     },
     { "pause",
       &main_pause, 0, 1,
diff --git a/tools/xl/xl_pci.c b/tools/xl/xl_pci.c
index 37708b4eb1..f1b58b3976 100644
--- a/tools/xl/xl_pci.c
+++ b/tools/xl/xl_pci.c
@@ -152,7 +152,7 @@ int main_pciattach(int argc, char **argv)
     return EXIT_SUCCESS;
 }
 
-static void pciassignable_list(void)
+static void pciassignable_list(bool show_names)
 {
     libxl_pci_bdf *pcibdfs;
     int num, i;
@@ -162,9 +162,15 @@ static void pciassignable_list(void)
     if ( pcibdfs == NULL )
         return;
     for (i = 0; i < num; i++) {
-        printf("%04x:%02x:%02x.%01x\n",
-               pcibdfs[i].domain, pcibdfs[i].bus, pcibdfs[i].dev,
-               pcibdfs[i].func);
+        libxl_pci_bdf *pcibdf = &pcibdfs[i];
+        char *name = show_names ?
+            libxl_device_pci_assignable_bdf2name(ctx, pcibdf) : NULL;
+
+        printf("%04x:%02x:%02x.%01x %s\n",
+               pcibdf->domain, pcibdf->bus, pcibdf->dev, pcibdf->func,
+               name ?: "");
+
+        free(name);
     }
     libxl_device_pci_assignable_list_free(pcibdfs, num);
 }
@@ -172,16 +178,23 @@ static void pciassignable_list(void)
 int main_pciassignable_list(int argc, char **argv)
 {
     int opt;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-list", 0) {
-        /* No options */
+    static struct option opts[] = {
+        {"show-names", 0, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    bool show_names = false;
+
+    SWITCH_FOREACH_OPT(opt, "n", opts, "pci-assignable-list", 0) {
+    case 'n':
+        show_names = true;
+        break;
     }
 
-    pciassignable_list();
+    pciassignable_list(show_names);
     return 0;
 }
 
-static int pciassignable_add(const char *bdf, int rebind)
+static int pciassignable_add(const char *bdf, const char *name, int rebind)
 {
     libxl_pci_bdf pcibdf;
     XLU_Config *config;
@@ -197,7 +210,7 @@ static int pciassignable_add(const char *bdf, int rebind)
         exit(2);
     }
 
-    if (libxl_device_pci_assignable_add(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_add(ctx, &pcibdf, name, rebind))
         r = 1;
 
     libxl_pci_bdf_dispose(&pcibdf);
@@ -210,39 +223,58 @@ int main_pciassignable_add(int argc, char **argv)
 {
     int opt;
     const char *bdf = NULL;
-
-    SWITCH_FOREACH_OPT(opt, "", NULL, "pci-assignable-add", 1) {
-        /* No options */
+    static struct option opts[] = {
+        {"name", 1, 0, 'n'},
+        COMMON_LONG_OPTS
+    };
+    const char *name = NULL;
+
+    SWITCH_FOREACH_OPT(opt, "n:", opts, "pci-assignable-add", 0) {
+    case 'n':
+        name = optarg;
+        break;
     }
 
     bdf = argv[optind];
 
-    if (pciassignable_add(bdf, 1))
+    if (pciassignable_add(bdf, name, 1))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
 }
 
-static int pciassignable_remove(const char *bdf, int rebind)
+static int pciassignable_remove(const char *ident, int rebind)
 {
-    libxl_pci_bdf pcibdf;
+    libxl_pci_bdf *pcibdf;
     XLU_Config *config;
     int r = 0;
 
-    libxl_pci_bdf_init(&pcibdf);
-
     config = xlu_cfg_init(stderr, "command line");
     if (!config) { perror("xlu_cfg_init"); exit(-1); }
 
-    if (xlu_pci_parse_bdf(config, &pcibdf, bdf)) {
-        fprintf(stderr, "pci-assignable-remove: malformed BDF \"%s\"\n", bdf);
-        exit(2);
+    pcibdf = libxl_device_pci_assignable_name2bdf(ctx, ident);
+    if (!pcibdf) {
+        pcibdf = calloc(1, sizeof(*pcibdf));
+
+        if (!pcibdf) {
+            fprintf(stderr,
+                    "pci-assignable-remove: failed to allocate memory\n");
+            exit(2);
+        }
+
+        libxl_pci_bdf_init(pcibdf);
+        if (xlu_pci_parse_bdf(config, pcibdf, ident)) {
+            fprintf(stderr,
+                    "pci-assignable-remove: malformed BDF '%s'\n", ident);
+            exit(2);
+        }
     }
 
-    if (libxl_device_pci_assignable_remove(ctx, &pcibdf, rebind))
+    if (libxl_device_pci_assignable_remove(ctx, pcibdf, rebind))
         r = 1;
 
-    libxl_pci_bdf_dispose(&pcibdf);
+    libxl_pci_bdf_dispose(pcibdf);
+    free(pcibdf);
     xlu_cfg_destroy(config);
 
     return r;
@@ -251,7 +283,7 @@ static int pciassignable_remove(const char *bdf, int rebind)
 int main_pciassignable_remove(int argc, char **argv)
 {
     int opt;
-    const char *bdf = NULL;
+    const char *ident = NULL;
     int rebind = 0;
 
     SWITCH_FOREACH_OPT(opt, "r", NULL, "pci-assignable-remove", 1) {
@@ -260,9 +292,9 @@ int main_pciassignable_remove(int argc, char **argv)
         break;
     }
 
-    bdf = argv[optind];
+    ident = argv[optind];
 
-    if (pciassignable_remove(bdf, rebind))
+    if (pciassignable_remove(ident, rebind))
         return EXIT_FAILURE;
 
     return EXIT_SUCCESS;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 16:58:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 16:58:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11279.29893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW0O0-0005no-VV; Fri, 23 Oct 2020 16:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11279.29893; Fri, 23 Oct 2020 16:57:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW0O0-0005nh-RM; Fri, 23 Oct 2020 16:57:52 +0000
Received: by outflank-mailman (input) for mailman id 11279;
 Fri, 23 Oct 2020 16:57:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3H45=D6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW0Nz-0005nW-Q1
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:57:51 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f96c451-6ded-46c5-96ed-a52c0cedb92f;
 Fri, 23 Oct 2020 16:57:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 10F0B206BE;
 Fri, 23 Oct 2020 16:57:50 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3H45=D6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kW0Nz-0005nW-Q1
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 16:57:51 +0000
X-Inumbo-ID: 8f96c451-6ded-46c5-96ed-a52c0cedb92f
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8f96c451-6ded-46c5-96ed-a52c0cedb92f;
	Fri, 23 Oct 2020 16:57:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 10F0B206BE;
	Fri, 23 Oct 2020 16:57:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603472270;
	bh=55TAokNzigc1BthbDU5lS/ZjrUxZPxLZrR7sApqcjMo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=IydVByRQ0vPzagjArWQYUvVH8XMylhiaXZFwIYpsA9F6j5S+F7lTmHqMJuYwqZ+R7
	 ScqYnkIDHhAw4X76A7TTlzdmuugG1UaGo24hM49Wgz0gERkSU2C7yz92TgQCmUsOPV
	 yon930UNOTd5Pv1d9v+hNZQWhqsf1m/QPY70UkrY=
Date: Fri, 23 Oct 2020 09:57:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
In-Reply-To: <b4ec906d-ebb6-add9-1bc0-39ab8d588026@xen.org>
Message-ID: <alpine.DEB.2.21.2010230944090.12247@sstabellini-ThinkPad-T480s>
References: <20201022014310.GA70872@mattapan.m5p.com> <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org> <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s> <b4ec906d-ebb6-add9-1bc0-39ab8d588026@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 23 Oct 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 22/10/2020 22:17, Stefano Stabellini wrote:
> > On Thu, 22 Oct 2020, Julien Grall wrote:
> > > On 22/10/2020 02:43, Elliott Mitchell wrote:
> > > > Linux requires UEFI support to be enabled on ARM64 devices.  While many
> > > > ARM64 devices lack ACPI, the writing seems to be on the wall of
> > > > UEFI/ACPI
> > > > potentially taking over.  Some common devices may need ACPI table
> > > > support.
> > > > 
> > > > Presently I think it is worth removing the dependancy on CONFIG_EXPERT.
> > > 
> > > The idea behind EXPERT is to gate any feature that is not considered to be
> > > stable/complete enough to be used in production.
> > 
> > Yes, and from that point of view I don't think we want to remove EXPERT
> > from ACPI yet. However, the idea of hiding things behind EXPERT works
> > very well for new esoteric features, something like memory introspection
> > or memory overcommit.
> 
> Memaccess is not very new ;).
> 
> > It does not work well for things that are actually
> > required to boot on the platform.
> 
> I am not sure what the problem is. It is easy to select EXPERT from the
> menuconfig, and it also hints to the user that the feature may not fully work.
> 
> > 
> > Typically ACPI systems don't come with device tree at all (RPi4 being an
> > exception), so users don't really have much of a choice in the matter.
> 
> And they typically have IOMMUs.
> 
> > 
> >  From that point of view, it would be better to remove EXPERT from ACPI,
> > maybe even build ACPI by default, *but* to add a warning at boot saying
> > something like:
> > 
> > "ACPI support is experimental. Boot using Device Tree if you can."
> > 
> > 
> > That would better convey the risks of using ACPI, while at the same time
> > making it a bit easier for users to boot on their ACPI-only platforms.
> 
> Right, I agree that this makes it easier for users to boot Xen on ACPI-only
> platforms. However, based on the above, it is easy enough for a developer to
> rebuild Xen with ACPI and EXPERT enabled.
> 
> So what sort of users are you targeting?

Somebody trying Xen for the first time: they might know how to build it,
but not that ACPI is disabled by default, nor that they need to enable
EXPERT in order to get the ACPI option in the menu. It is easy to do once
you know the option is there; otherwise one might not know where to look
in the menu.


> I am sort of okay with removing EXPERT. 

OK. This would help (even without building it by default) because as you
go and look at the menu the first time, you'll find ACPI among the
options right away.


> But I still think building ACPI by default
> is still wrong because our default .config is meant to be (security)
> supported. I don't think ACPI can earn this qualification today.

Certainly we don't want to imply ACPI is security supported. I was
looking at SUPPORT.md and it only says:

"""
EXPERT and DEBUG Kconfig options are not security supported. Other
Kconfig options are supported, if the related features are marked as
supported in this document.
"""

So technically we could enable ACPI in the build by default, as ACPI for
ARM is marked as experimental. However, I can see that it is not a
great idea to enable an unsupported option by default in the Kconfig, so
from that point of view it might be best to leave ACPI disabled by
default. That is probably the best compromise at this time.



> In order to remove EXPERT, there are a few things that need to be done (or
> checked):
>     1) SUPPORT.MD has a statement about ACPI on Arm

### Host ACPI (via Domain 0)

    Status, x86 PV: Supported
    Status, ARM: Experimental



>     2) DT is favored over ACPI if the two firmware tables are present.

Good idea. xen/arch/arm/acpi/boot.c:acpi_boot_table_init has:

    /*
     * Enable ACPI instead of device tree unless
     * - ACPI has been disabled explicitly (acpi=off), or
     * - the device tree is not empty (it has more than just a /chosen node)
     *   and ACPI has not been force enabled (acpi=force)
     */
    if ( param_acpi_off )
        goto disable;
    if ( !param_acpi_force &&
         device_tree_for_each_node(device_tree_flattened, 0,
                                   dt_scan_depth1_nodes, NULL) )
        goto disable;

We should be fine.


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 17:07:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 17:07:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11290.29904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW0XQ-0006to-0m; Fri, 23 Oct 2020 17:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11290.29904; Fri, 23 Oct 2020 17:07:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW0XP-0006th-U6; Fri, 23 Oct 2020 17:07:35 +0000
Received: by outflank-mailman (input) for mailman id 11290;
 Fri, 23 Oct 2020 17:07:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3H45=D6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW0XP-0006tc-73
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 17:07:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11bd47f6-078b-4fab-8ea8-9540902e1f68;
 Fri, 23 Oct 2020 17:07:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4CCBD2087D;
 Fri, 23 Oct 2020 17:07:33 +0000 (UTC)
X-Inumbo-ID: 11bd47f6-078b-4fab-8ea8-9540902e1f68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603472853;
	bh=VJWSCbydwWntfptwMJaoHDDXL2wUeScfu8+vf3MpODo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CU1gdFjydIM8guFmBjVEVmIjRfVmd0EvO94eNtigdvemOEkLiRmixiH9N5kyUL/Cs
	 3XEDnpPfBbx2tBWxZe70uu9uH/h342WZp0JK3bkQ8FOKpuzAXU3Th6P+ImtG1MZEVu
	 OvV794Zy3TA14GfVxgY4E7DK5fNp6wY31wKDfYH4=
Date: Fri, 23 Oct 2020 10:07:32 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    xen-devel@lists.xenproject.org, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: ACPI: Remove EXPERT dependency, default on for
 ARM64
In-Reply-To: <20201023033506.GC83870@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2010231002300.12247@sstabellini-ThinkPad-T480s>
References: <20201022014310.GA70872@mattapan.m5p.com> <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org> <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s> <20201023033506.GC83870@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 22 Oct 2020, Elliott Mitchell wrote:
> Linux requires UEFI support to be enabled on ARM64 devices.  While many
> ARM64 devices lack ACPI, the writing seems to be on the wall of UEFI/ACPI
> potentially taking over.  Some common devices may require ACPI table
> support to boot.

Let's not make guesses on the direction of the industry in a commit
message :-)

The following would suffice:

Some common ARM64 devices require ACPI to boot (no device tree is
available).


> For devices which can boot in either mode, continue defaulting to
> device-tree.  Add warnings about using ACPI advising users of present
> situation.
> 
> Signed-off-by: Elliott Mitchell <ehem+xen@m5p.com>
> ---
> Okay, hopefully this is okay.  Warning in Kconfig, warning on boot.
> Perhaps "default y if ARM_64" is redundant, yet if someone tries to make
> it possible to boot aarch32 on an ACPI machine...
> 
> I also want a date in the message.  Theory is this won't be there
> forever, so a date is essential.
> ---
>  xen/arch/arm/Kconfig     | 7 ++++++-
>  xen/arch/arm/acpi/boot.c | 9 +++++++++
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index 2777388265..29624d03fa 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -32,13 +32,18 @@ menu "Architecture Features"
>  source "arch/Kconfig"
>  
>  config ACPI
> -	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
> +	bool "ACPI (Advanced Configuration and Power Interface) Support"
>  	depends on ARM_64
> +	default y if ARM_64

I am not so sure about the "default y", because the option is not
technically "supported"; it is probably best to drop the default line.
Otherwise we end up enabling an "unsupported" Kconfig option by default,
which is not great.


>  	---help---
>  
>  	  Advanced Configuration and Power Interface (ACPI) support for Xen is
>  	  an alternative to device tree on ARM64.
>  
> +	  Note this is presently EXPERIMENTAL.  If a given device has both
> +	  device-tree and ACPI support, it is presently (October 2020)
> +	  recommended to boot using the device-tree.

Please remove the date from the message. We'll update as needed in the
future. The following works:

 Note this is presently EXPERIMENTAL.  If a given device has both
 device-tree and ACPI support, it is recommended to boot using the
 device-tree.
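With both review comments applied (no EXPERT gate, no "default y", no date
in the help text), the option would look roughly like this (a sketch; the
exact wording is the patch author's call):

```kconfig
config ACPI
	bool "ACPI (Advanced Configuration and Power Interface) Support"
	depends on ARM_64
	---help---

	  Advanced Configuration and Power Interface (ACPI) support for Xen is
	  an alternative to device tree on ARM64.

	  Note this is presently EXPERIMENTAL.  If a given device has both
	  device-tree and ACPI support, it is recommended to boot using the
	  device-tree.
```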


>  config GICV3
>  	bool "GICv3 driver"
>  	depends on ARM_64 && !NEW_VGIC
> diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> index 30e4bd1bc5..c0e8f85325 100644
> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -254,6 +254,15 @@ int __init acpi_boot_table_init(void)
>                                     dt_scan_depth1_nodes, NULL) )
>          goto disable;
>  
> +    printk("\n"
> +"*************************************************************************\n"
> +"*    WARNING WARNING WARNING WARNING WARNING WARNING WARNING WARNING    *\n"
> +"*                                                                       *\n"
> +"* Xen-ARM ACPI support is EXPERIMENTAL.  It is presently (October 2020) *\n"
> +"* recommended you boot your system in device-tree mode if you can.      *\n"
> +"*************************************************************************\n"
> +            "\n");

Please use warning_add and remove the date from the message.


>      /*
>       * ACPI is disabled at this point. Enable it in order to parse
>       * the ACPI tables.
> -- 
> 2.20.1
> 
> 
> -- 
> (\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
>  \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
>   \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
> 8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 18:25:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 18:25:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11294.29917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW1kA-0006Fr-PA; Fri, 23 Oct 2020 18:24:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11294.29917; Fri, 23 Oct 2020 18:24:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW1kA-0006Fk-Lm; Fri, 23 Oct 2020 18:24:50 +0000
Received: by outflank-mailman (input) for mailman id 11294;
 Fri, 23 Oct 2020 18:24:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kW1k9-0006Ff-U4
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 18:24:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 28337b82-08da-4e3c-bf01-76ca91106875;
 Fri, 23 Oct 2020 18:24:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW1k4-0002Vo-AH; Fri, 23 Oct 2020 18:24:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW1k3-0006z6-W2; Fri, 23 Oct 2020 18:24:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kW1k3-00079j-VY; Fri, 23 Oct 2020 18:24:43 +0000
X-Inumbo-ID: 28337b82-08da-4e3c-bf01-76ca91106875
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=q4TJ9okq/DUvTLr5oXlcds1WNLLFu5girGhW4O2g/p8=; b=VNbnzXJPca75j1gBgTS533D4IZ
	9oMit9hMObmRzFDvJ8SKmah9rZxFDbQbk/sR4eLZqxNBSLRwj9e8+cYzGE/crMCt6GrukD3Cylu1i
	Oe3k3WntgUKEfY00Coe1f/HTR99JyZXTS+zUbR8S57eFDIVFaOT90SX5mnHNqcKi9T1M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156122-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156122: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 18:24:43 +0000

flight 156122 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156122/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   64 days
Failing since        152659  2020-08-21 14:07:39 Z   63 days  124 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 19:12:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 19:12:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11300.29932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW2Ti-0002W5-GE; Fri, 23 Oct 2020 19:11:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11300.29932; Fri, 23 Oct 2020 19:11:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW2Ti-0002Vy-D9; Fri, 23 Oct 2020 19:11:54 +0000
Received: by outflank-mailman (input) for mailman id 11300;
 Fri, 23 Oct 2020 19:11:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kW2Tg-0002Vt-UK
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 19:11:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba2479ae-5700-4483-827d-115c31862282;
 Fri, 23 Oct 2020 19:11:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW2Td-0003So-UE; Fri, 23 Oct 2020 19:11:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW2Td-0000Y7-IZ; Fri, 23 Oct 2020 19:11:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kW2Td-0001JK-I4; Fri, 23 Oct 2020 19:11:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kW2Tg-0002Vt-UK
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 19:11:52 +0000
X-Inumbo-ID: ba2479ae-5700-4483-827d-115c31862282
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ba2479ae-5700-4483-827d-115c31862282;
	Fri, 23 Oct 2020 19:11:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p4NiSBxSgX5SSQAh0FsHR5Bw0Jdkg1LcSZ0BSK3dYyc=; b=P+4bgwQH29te/JLfpol2dUK1IV
	tpFKXR2ZqmlzB7b1oP7m3zyrk6pPbo2pdaRFl7AfEVCQp9GklUKvUpoc65a8McCc9hdA4wU6yPEWO
	qzK5AsgaWVwR1MoU3yNN2eMPOrCwihKg3gUctoSwakc9H7F7HZHGrECZqQAMepVxmb7Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW2Td-0003So-UE; Fri, 23 Oct 2020 19:11:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW2Td-0000Y7-IZ; Fri, 23 Oct 2020 19:11:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW2Td-0001JK-I4; Fri, 23 Oct 2020 19:11:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156129-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156129: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 19:11:49 +0000

flight 156129 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156129/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    0 days
Failing since        156120  2020-10-23 14:01:24 Z    0 days    2 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has lead to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers use an include path instead.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having an own include directory move the
    official headers to tools/include instead. This will drop the need to
    link those headers to tools/include and there is no need any longer
    to have library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV-support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later do it the other way round. This enables to probe for the
    domain type supported by the xenstore-stubdom and to support both, pv
    and pvh type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add a possibility to do logging in init-xenstore-domain: use -v[...]
    for selecting the log-level as in xl, log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing, remove him from MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 21:01:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 21:01:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11314.29973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW4BF-0004kN-Lt; Fri, 23 Oct 2020 21:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11314.29973; Fri, 23 Oct 2020 21:00:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW4BF-0004kG-J3; Fri, 23 Oct 2020 21:00:57 +0000
Received: by outflank-mailman (input) for mailman id 11314;
 Fri, 23 Oct 2020 21:00:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kW4BE-0004kB-I4
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 21:00:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45aa85a2-04d9-49a5-a333-1108ef1d40d3;
 Fri, 23 Oct 2020 21:00:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW4BC-0005kA-9A; Fri, 23 Oct 2020 21:00:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW4BB-0006Ie-W3; Fri, 23 Oct 2020 21:00:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kW4BB-0006sk-V3; Fri, 23 Oct 2020 21:00:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kW4BE-0004kB-I4
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 21:00:56 +0000
X-Inumbo-ID: 45aa85a2-04d9-49a5-a333-1108ef1d40d3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 45aa85a2-04d9-49a5-a333-1108ef1d40d3;
	Fri, 23 Oct 2020 21:00:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qlrWE3FmFtrcC/EqaSKRUAlRL6mmO+2EIFrsFIJgW00=; b=ARhctkODdgbeATVzZGxWKBNKWD
	zb8mFeqVA9fupzj2FcIStorrxEQuOhXG6EnnGaODCfewgE/nIKMvE9l3n0iAe5e5Y7bmJxS3KzRkh
	21UPNniS8C44MWQq1m0fwqR6R0iBR/+HnEXcXZbHhBig+ZJVtur4wco5nLCSh4yV73Js=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW4BC-0005kA-9A; Fri, 23 Oct 2020 21:00:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW4BB-0006Ie-W3; Fri, 23 Oct 2020 21:00:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW4BB-0006sk-V3; Fri, 23 Oct 2020 21:00:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156133-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156133: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 21:00:53 +0000

flight 156133 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156133/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    0 days
Failing since        156120  2020-10-23 14:01:24 Z    0 days    3 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links to access the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it, remove the setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore stubdom and to
    support both PV and PVH stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 21:25:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 21:25:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11320.29995 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW4YV-0006wo-OR; Fri, 23 Oct 2020 21:24:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11320.29995; Fri, 23 Oct 2020 21:24:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW4YV-0006wh-K6; Fri, 23 Oct 2020 21:24:59 +0000
Received: by outflank-mailman (input) for mailman id 11320;
 Fri, 23 Oct 2020 21:24:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yA4w=D6=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kW4YU-0006wc-Gq
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 21:24:58 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23e5d1e5-4c1c-4676-87cd-242166302ddc;
 Fri, 23 Oct 2020 21:24:57 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09NLOWNS092051
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 23 Oct 2020 17:24:38 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09NLOVdm092050;
 Fri, 23 Oct 2020 14:24:31 -0700 (PDT) (envelope-from ehem)
Date: Fri, 23 Oct 2020 14:24:31 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
        masami.hiramatsu@linaro.org, bertrand.marquis@arm.com,
        andre.przywara@arm.com, Rahul.Singh@arm.com,
        Julien Grall <jgrall@amazon.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Wei Liu <wl@xen.org>, Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH v2 0/7] xen/arm: Unbreak ACPI
Message-ID: <20201023212431.GB90171@mattapan.m5p.com>
References: <20201023154156.6593-1-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201023154156.6593-1-julien@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 23, 2020 at 04:41:49PM +0100, Julien Grall wrote:
> Xen on ARM has been broken for quite a while on ACPI systems. This
> series aims to fix it.
> 
> This series also introduced support for ACPI 5.1. This allows Xen to
> boot on QEMU.
> 
> I have only build tested the x86 side so far.

On a Tianocore-utilizing Raspberry Pi 4B, this series allows successful
boot (some other distinct issues remain).  As such, for the series on an
ARM device:

Tested-by: Elliott Mitchell <ehem+xen@m5p.com>


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Oct 23 22:22:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 22:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11328.30019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW5Rl-00049d-5m; Fri, 23 Oct 2020 22:22:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11328.30019; Fri, 23 Oct 2020 22:22:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW5Rl-00049W-2Z; Fri, 23 Oct 2020 22:22:05 +0000
Received: by outflank-mailman (input) for mailman id 11328;
 Fri, 23 Oct 2020 22:22:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kW5Rj-00049R-JQ
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 22:22:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79a148a8-8d01-4a0d-b6fa-3b2c5a3dc861;
 Fri, 23 Oct 2020 22:22:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW5Rf-0007N2-MP; Fri, 23 Oct 2020 22:21:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW5Rf-0001eX-E6; Fri, 23 Oct 2020 22:21:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kW5Rf-0002Xy-DZ; Fri, 23 Oct 2020 22:21:59 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LYVjP/uT8rgEC/QCVwr/SiVX/UmyEN6gT5Z6Xhlrl98=; b=BhaExmfRKxlt1Vm3srndjUrPVY
	ccazgXN0d3oMvSXt5m14XHNn9y/U7HP1um/OQev2Zh397eVxsBqFAFl/IxBuIErH2huo3V2hEt/H/
	W2SZQNPyomAfnS011nEbfI95F88Khj5TwRXfLzJ+PU5wj6yVjkhsXAT4rRWrfMOzoboA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156130-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156130: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 22:21:59 +0000

flight 156130 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156130/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   64 days
Failing since        152659  2020-08-21 14:07:39 Z   63 days  125 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 23:10:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 23:10:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11339.30048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW6C4-0008Ma-7x; Fri, 23 Oct 2020 23:09:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11339.30048; Fri, 23 Oct 2020 23:09:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW6C4-0008MT-4v; Fri, 23 Oct 2020 23:09:56 +0000
Received: by outflank-mailman (input) for mailman id 11339;
 Fri, 23 Oct 2020 23:09:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kW6C2-0008Ls-Tv
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 23:09:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6086b6d-64b6-438c-b494-2418aa59dfd7;
 Fri, 23 Oct 2020 23:09:48 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW6Bv-0008Lb-Jm; Fri, 23 Oct 2020 23:09:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW6Bv-00044x-Cc; Fri, 23 Oct 2020 23:09:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kW6Bv-0003YZ-CB; Fri, 23 Oct 2020 23:09:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kW6C2-0008Ls-Tv
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 23:09:54 +0000
X-Inumbo-ID: b6086b6d-64b6-438c-b494-2418aa59dfd7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b6086b6d-64b6-438c-b494-2418aa59dfd7;
	Fri, 23 Oct 2020 23:09:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J1p9hwDpqkPsCdtcLN7q0y1/ZwDGJ7L0qIvQAv4tAuQ=; b=KNGEqziFIZ3satfsWz+crf3GK3
	wNnciXgK7ScO8VywhoDJjaXf2CY9LyLON/bLtm2WO0NWlO/AU89dzNvtmX97Nz4QJfIh6nMc4P4Kh
	7u8w6kMC1Mk50wMjijB8r1yQFdhFymaJeXjEwsDTjS8hfLQfZUhVoZEdGsPq67y//5iM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW6Bv-0008Lb-Jm; Fri, 23 Oct 2020 23:09:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW6Bv-00044x-Cc; Fri, 23 Oct 2020 23:09:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW6Bv-0003YZ-CB; Fri, 23 Oct 2020 23:09:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156140-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156140: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 23:09:47 +0000

flight 156140 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156140/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    0 days
Failing since        156120  2020-10-23 14:01:24 Z    0 days    4 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing; remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 23:30:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 23:30:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11344.30063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW6Vt-0002ei-Vl; Fri, 23 Oct 2020 23:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11344.30063; Fri, 23 Oct 2020 23:30:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW6Vt-0002eb-Sk; Fri, 23 Oct 2020 23:30:25 +0000
Received: by outflank-mailman (input) for mailman id 11344;
 Fri, 23 Oct 2020 23:30:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kW6Vt-0002dP-AP
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 23:30:25 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b273019-0b0e-43cc-bcc9-ea54cae818bc;
 Fri, 23 Oct 2020 23:30:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW6Vl-0000Lp-Lu; Fri, 23 Oct 2020 23:30:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW6Vl-0005QG-Fe; Fri, 23 Oct 2020 23:30:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kW6Vl-0006na-FA; Fri, 23 Oct 2020 23:30:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vJnI=D6=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kW6Vt-0002dP-AP
	for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 23:30:25 +0000
X-Inumbo-ID: 6b273019-0b0e-43cc-bcc9-ea54cae818bc
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6b273019-0b0e-43cc-bcc9-ea54cae818bc;
	Fri, 23 Oct 2020 23:30:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=LUEt7Uk1l8w9nctICwSU5hc42imC30s2EDXHQ1Aihho=; b=ipoVLTxxTnbbtUnNzDdZvKpG9l
	rmKEEMX2bpavBO5YzMN3sIUi0HcUPYmII27ucsxugU6mcLzvoK6FEF0R4ts+6rcWKG/lW/IcQLGmy
	aJ4N92ACDxqrjHO0PapW4v7lWrm4xdm9CbuPYmMKV+viJNdmiOuZq59vmqK6MA+V/jBM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW6Vl-0000Lp-Lu; Fri, 23 Oct 2020 23:30:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW6Vl-0005QG-Fe; Fri, 23 Oct 2020 23:30:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW6Vl-0006na-FA; Fri, 23 Oct 2020 23:30:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-amd64
Message-Id: <E1kW6Vl-0006na-FA@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 23 Oct 2020 23:30:17 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-amd64
testid xen-build

Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4664034cdc720a52913bc26358240bb9d3798527
  Bug not present: 154137dfdba334348887baf0be9693c407f7cef3
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156144/


  commit 4664034cdc720a52913bc26358240bb9d3798527
  Author: Juergen Gross <jgross@suse.com>
  Date:   Mon Oct 19 17:27:54 2020 +0200
  
      tools/libs: move official headers to common directory
      
      Instead of each library having an own include directory move the
      official headers to tools/include instead. This will drop the need to
      link those headers to tools/include and there is no need any longer
      to have library-specific include paths when building Xen.
      
      While at it remove setting of the unused variable
      PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Acked-by: Christian Lindig <christian.lindig@citrix.com>
      Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
      Acked-by: Ian Jackson <iwj@xenproject.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-amd64.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build --summary-out=tmp/156144.bisection-summary --basis-template=156117 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-amd64 xen-build
Searching for failure / basis pass:
 156140 fail [host=himrod1] / 156120 [host=himrod2] 156117 [host=himrod2] 156108 ok.
Failure / basis pass flights: 156140 / 156108
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Basis pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 861f0c110976fa8879b7bf63d9478b6be83d4ab6
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#861f0c110976fa8879b7bf63d9478b6be83d4ab6-4ddd6499d999a7d08cabfda5b0262e473dd5beed
Loaded 5001 nodes in revision graph
Searching for test results:
 156108 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 861f0c110976fa8879b7bf63d9478b6be83d4ab6
 156117 [host=himrod2]
 156120 [host=himrod2]
 156129 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156131 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 861f0c110976fa8879b7bf63d9478b6be83d4ab6
 156132 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156134 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156135 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 154137dfdba334348887baf0be9693c407f7cef3
 156133 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156137 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 588756db020e73e6f5e4407bbf78fbd53f15b731
 156138 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 4664034cdc720a52913bc26358240bb9d3798527
 156139 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 154137dfdba334348887baf0be9693c407f7cef3
 156141 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 4664034cdc720a52913bc26358240bb9d3798527
 156142 pass 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 154137dfdba334348887baf0be9693c407f7cef3
 156140 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156144 fail 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 4664034cdc720a52913bc26358240bb9d3798527
Searching for interesting versions
 Result found: flight 156108 (pass), for basis pass
 For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 154137dfdba334348887baf0be9693c407f7cef3, results HASH(0x559bab0e2bb8) HASH(0x559bab0f1840) HASH(0x559bab0f7e80)
 For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c, results HASH(0x559bab0e82d0)
 For basis failure, parent search stopping at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 861f0c110976fa8879b7bf63d9478b6be83d4ab6, results HASH(0x559bab0e0d30) HASH(0x559bab0e2738)
 Result found: flight 156129 (fail), for basis failure (at ancestor ~484)
 Repro found: flight 156131 (pass), for basis pass
 Repro found: flight 156132 (fail), for basis failure
 0 revisions at 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ea6d3cd1ed79d824e605a70c3626bc437c386260 154137dfdba334348887baf0be9693c407f7cef3
No revisions left to test, checking graph state.
 Result found: flight 156135 (pass), for last pass
 Result found: flight 156138 (fail), for first failure
 Repro found: flight 156139 (pass), for last pass
 Repro found: flight 156141 (fail), for first failure
 Repro found: flight 156142 (pass), for last pass
 Repro found: flight 156144 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  4664034cdc720a52913bc26358240bb9d3798527
  Bug not present: 154137dfdba334348887baf0be9693c407f7cef3
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156144/


  commit 4664034cdc720a52913bc26358240bb9d3798527
  Author: Juergen Gross <jgross@suse.com>
  Date:   Mon Oct 19 17:27:54 2020 +0200
  
      tools/libs: move official headers to common directory
      
      Instead of each library having an own include directory move the
      official headers to tools/include instead. This will drop the need to
      link those headers to tools/include and there is no need any longer
      to have library-specific include paths when building Xen.
      
      While at it remove setting of the unused variable
      PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Acked-by: Christian Lindig <christian.lindig@citrix.com>
      Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
      Acked-by: Ian Jackson <iwj@xenproject.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-amd64.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
156144: tolerable ALL FAIL

flight 156144 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156144/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64                   6 xen-build               fail baseline untested


jobs:
 build-amd64                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri Oct 23 23:36:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 23:36:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11348.30079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW6bX-0002yp-MB; Fri, 23 Oct 2020 23:36:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11348.30079; Fri, 23 Oct 2020 23:36:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW6bX-0002yi-Ie; Fri, 23 Oct 2020 23:36:15 +0000
Received: by outflank-mailman (input) for mailman id 11348;
 Fri, 23 Oct 2020 23:36:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fepJ=D6=nask.pl=michall@srs-us1.protection.inumbo.net>)
 id 1kW6bW-0002yd-Dm
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 23:36:14 +0000
Received: from mx.nask.net.pl (unknown [195.187.55.89])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ef37d5a8-c4db-4c2f-987b-8f3cd60cca37;
 Fri, 23 Oct 2020 23:36:11 +0000 (UTC)
X-Inumbo-ID: ef37d5a8-c4db-4c2f-987b-8f3cd60cca37
X-Virus-Scanned: amavisd-new at 
Date: Sat, 24 Oct 2020 01:36:09 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: =?utf-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
Message-ID: <1398275796.814046.1603496169810.JavaMail.zimbra@nask.pl>
In-Reply-To: <a80f05ac-bd18-563e-12f7-1a0f9f0d4f6b@suse.com>
References: <157653679.6164.1603407559737.JavaMail.zimbra@nask.pl> <a80f05ac-bd18-563e-12f7-1a0f9f0d4f6b@suse.com>
Subject: Re: BUG: credit=sched2 machine hang when using DRAKVUF
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [195.187.238.14]
X-Mailer: Zimbra 9.0.0_GA_3969 (ZimbraWebClient - GC86 (Win)/9.0.0_GA_3969)
Thread-Topic: credit=sched2 machine hang when using DRAKVUF
Thread-Index: ehjKdI2aZ/NIQ0mvEFcfZJ714jsIjw==

----- On 23 Oct 2020, at 6:47, Jürgen Groß jgross@suse.com wrote:

> On 23.10.20 00:59, Michał Leszczyński wrote:
>> Hello,
>> 
>> when using DRAKVUF against a Windows 7 x64 DomU, the whole machine hangs
>> after a few minutes.
>> 
>> The chance for a hang seems to be correlated with the number of PCPUs; in
>> this case we have 14 PCPUs and the hang is very easily reproducible, while
>> on other machines with 2-4 PCPUs it's very rare (but still occurring
>> sometimes). The issue is observed with the default sched=credit2 and is
>> no longer reproducible once sched=credit is set.
> 
> Interesting. Can you please share some more information?
> 
> Which Xen version are you using?

RELEASE-4.14

> 
> Is there any additional information in the dom0 log which could be
> related to the hang (earlier WARN() splats, Oopses, Xen related
> messages, hardware failure messages, ...)?

I will try to find something out next week and will come back to you.

> 
> Can you please try to get backtraces of all cpus at the time of the
> hang?
> 
> It would help to know which cpu was the target of the call of
> smp_call_function_single(), so a disassembly of that function would
> be needed to find that information from the dumped registers.
> 
> I'm asking because I've seen a similar problem recently and I was
> suspecting a fifo event channel issue rather than the Xen scheduler,
> but your data suggests it could be the scheduler after all (if it is
> the same issue, of course).
> 
> 
> Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 23 23:59:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 23 Oct 2020 23:59:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11352.30090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW6y6-00054R-Iy; Fri, 23 Oct 2020 23:59:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11352.30090; Fri, 23 Oct 2020 23:59:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW6y6-00054K-Fy; Fri, 23 Oct 2020 23:59:34 +0000
Received: by outflank-mailman (input) for mailman id 11352;
 Fri, 23 Oct 2020 23:59:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3H45=D6=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW6y4-00054F-N5
 for xen-devel@lists.xenproject.org; Fri, 23 Oct 2020 23:59:32 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 160cfefc-cd1f-4082-aa50-e9853c344ade;
 Fri, 23 Oct 2020 23:59:32 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E4E9E207FF;
 Fri, 23 Oct 2020 23:59:30 +0000 (UTC)
X-Inumbo-ID: 160cfefc-cd1f-4082-aa50-e9853c344ade
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603497571;
	bh=y3kGqPmVV53FhWuLmgb7cUA1bbmAXvKK3QPz8nZavXE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=BN3Z/HVkAPHohBekWTeqa4kylNQi8CzAdDiQ6sQNr9BGIVe/ShzvCaOtyji0sbWa+
	 5q44vgKiPL5WhmtqVL8rvXN/oKYcU7RJgRfUi+9/p34HioadKYOmCD/mXxQareLPeQ
	 Dh4yEqXN0fNM2E/sVPYr/NNzEUNogQmugZ/zYUiI=
Date: Fri, 23 Oct 2020 16:59:30 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    roman@zededa.com, xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
In-Reply-To: <20201023211941.GA90171@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
References: <20201011051933.GA77136@mattapan.m5p.com> <alpine.DEB.2.21.2010121138480.10386@sstabellini-ThinkPad-T480s> <20201012215751.GB89158@mattapan.m5p.com> <c38d78bd-c011-404b-5f59-d10cd7d7f006@xen.org> <20201016003024.GA13290@mattapan.m5p.com>
 <23885c28-dee5-4e9a-dc43-6ccf19a94df6@xen.org> <20201022021655.GA74011@mattapan.m5p.com> <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s> <20201023005629.GA83870@mattapan.m5p.com> <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

+ xen-devel

On Fri, 23 Oct 2020, Elliott Mitchell wrote:
> On Thu, Oct 22, 2020 at 06:02:46PM -0700, Stefano Stabellini wrote:
> > On Thu, 22 Oct 2020, Elliott Mitchell wrote:
> > > On Thu, Oct 22, 2020 at 04:27:23PM -0700, Stefano Stabellini wrote:
> > > > On Wed, 21 Oct 2020, Elliott Mitchell wrote:
> > > > > Due to experimenting with "proper" console on serial port, I ended up
> > > > > getting output.  Apparently domain 0 was panicing when trying to setup
> > > > > xen-blkback due to the swiotlb code being unable to allocate a bounce
> > > > > buffer.
> > > > > 
> > > > > Stefano, what is the status of swiotlb in the 5.8 kernel series?
> > > > 
> > > > The swiotlb fixes for RPi4 are not in 5.8. Linux 5.9 has just been
> > > > released, and it should come with everything you need.
> > > 
> > > I had 13 patches applied to Debian's 5.8 kernel source.  Two of the
> > > batch I had against 5.6 had gotten into mainline.  No issues were visible
> > > during normal operation.
> > > 
> > > Problem showed up when trying to start a domain.  By using Xen's console
> > > device I managed to get the messages:
> > > 
> > > xen-blkback: backend/vbd/3/51712: using 2 queues, protocol 1 (arm-abi) persistent grants
> > > Kernel panic - not syncing: Can not allocate SWIOTLB buffer earlier and can't now provide you with the DMA bounce buffer
> > > 
> > > Worth noting that by the time I was starting this domain, the device
> > > had an uptime of more than an hour.  There could be a problem of swiotlb
> > > needing the ability to claim DMA-viable pages after they've been in use
> > > for other purposes.
> >  
> > I'll have a look
> 
> Finally came up with one detail of a change I'd made in the right time
> frame to trigger this issue.  As such I can now control this behavior and
> get it to occur or not.
> 
> I have some suspicion my planned workload approach differs from others.
> 
> During the runs where I was able to successfully boot a child domain,
> domain 0 had been allocated 512MB of memory.  During the unsuccessful run
> where the above message occurred, domain 0 had been allocated 2GB of
> memory.  This appears to reliably control the occurrence of this bug.

This is what is going on. kernel/dma/swiotlb.c:swiotlb_init gets called
and tries to allocate a buffer for the swiotlb. It does so by calling

  memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);

In your case, the allocation must fail, no_iotlb_memory is set, and I
expect you see this warning among the boot messages:

  Cannot allocate buffer

Later during initialization, swiotlb-xen comes in
(drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and, given that io_tlb_start
is != 0, it thinks the memory is ready to use when actually it is not.

When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
and since no_iotlb_memory is set the kernel panics.


The reason you only see it with a 2GB dom0 is that swiotlb_init is only
called when:

  max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))

see arch/arm64/mm/init.c:mem_init. So when dom0 is 512MB, swiotlb_init is
not called at all; swiotlb-xen does the allocation itself with
memblock_alloc, and it succeeds.

Note that I tried to repro the issue here at my end but it works for me
with device tree. So the swiotlb_init memory allocation failure probably
only shows on ACPI, maybe because ACPI is reserving too much low memory.

In any case, I think the issue might be "fixed" by this patch:



diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c19379fabd20..84e15e7d3929 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -231,6 +231,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	io_tlb_index = 0;
+	no_iotlb_memory = false;
 
 	if (verbose)
 		swiotlb_print_info();
@@ -263,8 +264,11 @@ swiotlb_init(int verbose)
 		return;
 
 	if (io_tlb_start)
+	{
 		memblock_free_early(io_tlb_start,
 				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+		io_tlb_start = 0;
+	}
 	pr_warn("Cannot allocate buffer");
 	no_iotlb_memory = true;
 }
@@ -362,6 +366,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	io_tlb_index = 0;
+	no_iotlb_memory = false;
 
 	swiotlb_print_info();


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 00:07:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 00:07:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11357.30103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW75G-0006hq-9J; Sat, 24 Oct 2020 00:06:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11357.30103; Sat, 24 Oct 2020 00:06:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW75G-0006hj-6F; Sat, 24 Oct 2020 00:06:58 +0000
Received: by outflank-mailman (input) for mailman id 11357;
 Sat, 24 Oct 2020 00:06:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKCc=D7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW75E-0006he-2M
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 00:06:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d36654a4-6a19-47de-bc0c-508d8f75f280;
 Sat, 24 Oct 2020 00:06:55 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A42F0214F1;
 Sat, 24 Oct 2020 00:06:53 +0000 (UTC)
X-Inumbo-ID: d36654a4-6a19-47de-bc0c-508d8f75f280
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603498014;
	bh=MShSA168VqfvbW0FxSMkG4ksQrHfEZI2MwoSwHVBmYg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gRZOhW9OXX3XpMgm7ZJ0CeizffkhxpTG5TJuwQ//cZvwQU8TZCcx6y7jD/jMHFzKN
	 YTtDiLoLYCviEEn7Kk5Smr9c0gjsuNH33N63VfWmDKQI71mTePeDbPzuka/ThhzL/I
	 FVM6wbavy2l2+MXaDELCr3eWuURD/9aTzjJ60l7w=
Date: Fri, 23 Oct 2020 17:06:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Rahul Singh <Rahul.Singh@arm.com>, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Rahul Singh <rahul.singh@arm.com>
Subject: Re: [PATCH v2 1/7] xen/acpi: Rework acpi_os_map_memory() and
 acpi_os_unmap_memory()
In-Reply-To: <20201023154156.6593-2-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2010231706410.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-2-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 23 Oct 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The functions acpi_os_{un,}map_memory() are meant to be arch-agnostic
> while the __acpi_os_{un,}map_memory() are meant to be arch-specific.
> 
> Currently, the former are still containing x86 specific code.
> 
> To avoid this rather strange split, the generic helpers are reworked so
> they are arch-agnostic. This requires the introduction of a new helper
> __acpi_os_unmap_memory() that will undo any mapping done by
> __acpi_os_map_memory().
> 
> Currently, the arch-helper for unmap is basically a no-op so it only
> returns whether the mapping was arch specific. But this will change
> in the future.
> 
> Note that the x86 version of acpi_os_map_memory() was already able to
> map the 1MB region. Hence there is no addition of new code.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Rahul Singh <rahul.singh@arm.com>
> Tested-by: Rahul Singh <rahul.singh@arm.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Constify ptr in __acpi_unmap_table()
>         - Coding style fixes
>         - Fix build on arm64
>         - Use PAGE_OFFSET() rather than open-coding it
>         - Add Rahul's tested-by and reviewed-by
> ---
>  xen/arch/arm/acpi/lib.c | 12 ++++++++++++
>  xen/arch/x86/acpi/lib.c | 18 ++++++++++++++++++
>  xen/drivers/acpi/osl.c  | 34 ++++++++++++++++++----------------
>  xen/include/xen/acpi.h  |  1 +
>  4 files changed, 49 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
> index 4fc6e17322c1..fcc186b03399 100644
> --- a/xen/arch/arm/acpi/lib.c
> +++ b/xen/arch/arm/acpi/lib.c
> @@ -30,6 +30,10 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>      unsigned long base, offset, mapped_size;
>      int idx;
>  
> +    /* No arch specific implementation after early boot */
> +    if ( system_state >= SYS_STATE_boot )
> +        return NULL;
> +
>      offset = phys & (PAGE_SIZE - 1);
>      mapped_size = PAGE_SIZE - offset;
>      set_fixmap(FIXMAP_ACPI_BEGIN, maddr_to_mfn(phys), PAGE_HYPERVISOR);
> @@ -49,6 +53,14 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>      return ((char *) base + offset);
>  }
>  
> +bool __acpi_unmap_table(const void *ptr, unsigned long size)
> +{
> +    vaddr_t vaddr = (vaddr_t)ptr;
> +
> +    return ((vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) &&
> +            (vaddr < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE)));
> +}
> +
>  /* True to indicate PSCI 0.2+ is implemented */
>  bool __init acpi_psci_present(void)
>  {
> diff --git a/xen/arch/x86/acpi/lib.c b/xen/arch/x86/acpi/lib.c
> index 265b9ad81905..a22414a05c13 100644
> --- a/xen/arch/x86/acpi/lib.c
> +++ b/xen/arch/x86/acpi/lib.c
> @@ -46,6 +46,10 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>  	if ((phys + size) <= (1 * 1024 * 1024))
>  		return __va(phys);
>  
> +	/* No further arch specific implementation after early boot */
> +	if (system_state >= SYS_STATE_boot)
> +		return NULL;
> +
>  	offset = phys & (PAGE_SIZE - 1);
>  	mapped_size = PAGE_SIZE - offset;
>  	set_fixmap(FIX_ACPI_END, phys);
> @@ -66,6 +70,20 @@ char *__acpi_map_table(paddr_t phys, unsigned long size)
>  	return ((char *) base + offset);
>  }
>  
> +bool __acpi_unmap_table(const void *ptr, unsigned long size)
> +{
> +	unsigned long vaddr = (unsigned long)ptr;
> +
> +	if ((vaddr >= DIRECTMAP_VIRT_START) &&
> +	    (vaddr < DIRECTMAP_VIRT_END)) {
> +		ASSERT(!((__pa(ptr) + size - 1) >> 20));
> +		return true;
> +	}
> +
> +	return ((vaddr >= __fix_to_virt(FIX_ACPI_END)) &&
> +		(vaddr < (__fix_to_virt(FIX_ACPI_BEGIN) + PAGE_SIZE)));
> +}
> +
>  unsigned int acpi_get_processor_id(unsigned int cpu)
>  {
>  	unsigned int acpiid, apicid;
> diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> index 4c8bb7839eda..389505f78666 100644
> --- a/xen/drivers/acpi/osl.c
> +++ b/xen/drivers/acpi/osl.c
> @@ -92,27 +92,29 @@ acpi_physical_address __init acpi_os_get_root_pointer(void)
>  void __iomem *
>  acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
>  {
> -	if (system_state >= SYS_STATE_boot) {
> -		mfn_t mfn = _mfn(PFN_DOWN(phys));
> -		unsigned int offs = phys & (PAGE_SIZE - 1);
> -
> -		/* The low first Mb is always mapped on x86. */
> -		if (IS_ENABLED(CONFIG_X86) && !((phys + size - 1) >> 20))
> -			return __va(phys);
> -		return __vmap(&mfn, PFN_UP(offs + size), 1, 1,
> -			      ACPI_MAP_MEM_ATTR, VMAP_DEFAULT) + offs;
> -	}
> -	return __acpi_map_table(phys, size);
> +	void *ptr;
> +	mfn_t mfn = _mfn(PFN_DOWN(phys));
> +	unsigned int offs = PAGE_OFFSET(phys);
> +
> +	/* Try the arch specific implementation first */
> +	ptr = __acpi_map_table(phys, size);
> +	if (ptr)
> +		return ptr;
> +
> +	/* No common implementation for early boot map */
> +	if (unlikely(system_state < SYS_STATE_boot))
> +		return NULL;
> +
> +	ptr = __vmap(&mfn, PFN_UP(offs + size), 1, 1,
> +		     ACPI_MAP_MEM_ATTR, VMAP_DEFAULT);
> +
> +	return !ptr ? NULL : (ptr + offs);
>  }
>  
>  void acpi_os_unmap_memory(void __iomem * virt, acpi_size size)
>  {
> -	if (IS_ENABLED(CONFIG_X86) &&
> -	    (unsigned long)virt >= DIRECTMAP_VIRT_START &&
> -	    (unsigned long)virt < DIRECTMAP_VIRT_END) {
> -		ASSERT(!((__pa(virt) + size - 1) >> 20));
> +	if (__acpi_unmap_table(virt, size))
>  		return;
> -	}
>  
>  	if (system_state >= SYS_STATE_boot)
>  		vunmap((void *)((unsigned long)virt & PAGE_MASK));
> diff --git a/xen/include/xen/acpi.h b/xen/include/xen/acpi.h
> index c945ab05c864..21d5e9feb5ae 100644
> --- a/xen/include/xen/acpi.h
> +++ b/xen/include/xen/acpi.h
> @@ -68,6 +68,7 @@ typedef int (*acpi_table_entry_handler) (struct acpi_subtable_header *header, co
>  
>  unsigned int acpi_get_processor_id (unsigned int cpu);
>  char * __acpi_map_table (paddr_t phys_addr, unsigned long size);
> +bool __acpi_unmap_table(const void *ptr, unsigned long size);
>  int acpi_boot_init (void);
>  int acpi_boot_table_init (void);
>  int acpi_numa_init (void);
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 00:16:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 00:16:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11361.30114 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7ED-0007lK-61; Sat, 24 Oct 2020 00:16:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11361.30114; Sat, 24 Oct 2020 00:16:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7ED-0007lD-31; Sat, 24 Oct 2020 00:16:13 +0000
Received: by outflank-mailman (input) for mailman id 11361;
 Sat, 24 Oct 2020 00:16:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKCc=D7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW7EB-0007l8-Pw
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 00:16:11 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8af179f4-5ae2-411a-bb63-7a5fadf00df5;
 Sat, 24 Oct 2020 00:16:11 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DB8B42225E;
 Sat, 24 Oct 2020 00:16:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603498570;
	bh=m4P5pbLIFRz1X41Hu460Num+nUlO0E7tg5/DYZmhy8Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=SO/Nn9u2Hy66aKZhNOT1wryxULvrEG3h6H7J36LKueQze74vDrWDuo1LK6vCGCh4a
	 gceZumpiuXri1MCSXyW3PNTp2eAP3Wy1mQe8CKcx91a6cxel9VFjGycq40C9PKV5EH
	 3RLb1NO50qxo4h+ksUzJIHxA59F6Bq9k7tj8aEuU=
Date: Fri, 23 Oct 2020 17:16:09 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Rahul.Singh@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Wei Xu <xuwei5@hisilicon.com>
Subject: Re: [PATCH v2 2/7] xen/arm: acpi: The fixmap area should always be
 cleared during failure/unmap
In-Reply-To: <20201023154156.6593-3-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2010231714510.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-3-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 23 Oct 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Commit 022387ee1ad3 "xen/arm: mm: Don't open-code Xen PT update in
> {set, clear}_fixmap()" enforced that each set_fixmap() should be
> paired with a clear_fixmap(). Any failure to follow the model would
> result in a platform crash.
> 
> Unfortunately, the use of fixmap in the ACPI code was overlooked as it
> is calling set_fixmap() but not clear_fixmap().
> 
> The function __acpi_map_table() is reworked so:
>     - We know before the mapping whether the fixmap region is big
>     enough for the mapping.
>     - It will fail if the fixmap is already in use. This is not a
>     change of behavior but a clarification of the current expectation
>     to avoid hitting a BUG().
> 
> The function __acpi_unmap_table() will now call clear_fixmap().
> 
> Reported-by: Wei Xu <xuwei5@hisilicon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
> 
> The discussion on the original thread [1] suggested to also zap it on
> x86. This is technically not necessary today, so it is left alone for
> now.
> 
> I looked at making the fixmap code common but the indices are inverted
> between Arm and x86.
> 
>     Changes in v2:
>         - Clarify the commit message
>         - Fix the size computation in __acpi_unmap_table()
> 
> [1] https://lore.kernel.org/xen-devel/5E26C935.9080107@hisilicon.com/
> ---
>  xen/arch/arm/acpi/lib.c | 73 +++++++++++++++++++++++++++++++----------
>  1 file changed, 56 insertions(+), 17 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/lib.c b/xen/arch/arm/acpi/lib.c
> index fcc186b03399..b755620e67b5 100644
> --- a/xen/arch/arm/acpi/lib.c
> +++ b/xen/arch/arm/acpi/lib.c
> @@ -25,40 +25,79 @@
>  #include <xen/init.h>
>  #include <xen/mm.h>
>  
> +static bool fixmap_inuse;
> +
>  char *__acpi_map_table(paddr_t phys, unsigned long size)
>  {
> -    unsigned long base, offset, mapped_size;
> -    int idx;
> +    unsigned long base, offset;
> +    mfn_t mfn;
> +    unsigned int idx;
>  
>      /* No arch specific implementation after early boot */
>      if ( system_state >= SYS_STATE_boot )
>          return NULL;
>  
>      offset = phys & (PAGE_SIZE - 1);
> -    mapped_size = PAGE_SIZE - offset;
> -    set_fixmap(FIXMAP_ACPI_BEGIN, maddr_to_mfn(phys), PAGE_HYPERVISOR);
> -    base = FIXMAP_ADDR(FIXMAP_ACPI_BEGIN);
> +    base = FIXMAP_ADDR(FIXMAP_ACPI_BEGIN) + offset;
> +
> +    /* Check the fixmap is big enough to map the region */
> +    if ( (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - base) < size )
> +        return NULL;
> +
> +    /* With the fixmap, we can only map one region at a time */
> +    if ( fixmap_inuse )
> +        return NULL;
>  
> -    /* Most cases can be covered by the below. */
> +    fixmap_inuse = true;
> +
> +    size += offset;
> +    mfn = maddr_to_mfn(phys);
>      idx = FIXMAP_ACPI_BEGIN;
> -    while ( mapped_size < size )
> -    {
> -        if ( ++idx > FIXMAP_ACPI_END )
> -            return NULL;    /* cannot handle this */
> -        phys += PAGE_SIZE;
> -        set_fixmap(idx, maddr_to_mfn(phys), PAGE_HYPERVISOR);
> -        mapped_size += PAGE_SIZE;
> -    }
>  
> -    return ((char *) base + offset);
> +    do {
> +        set_fixmap(idx, mfn, PAGE_HYPERVISOR);
> +        size -= min(size, (unsigned long)PAGE_SIZE);
> +        mfn = mfn_add(mfn, 1);
> +        idx++;
> +    } while ( size > 0 );
> +
> +    return (char *)base;
>  }
>  
>  bool __acpi_unmap_table(const void *ptr, unsigned long size)
>  {
>      vaddr_t vaddr = (vaddr_t)ptr;
> +    unsigned int idx;
> +
> +    /* We are only handling fixmap addresses in the arch code */
> +    if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
> +         (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END)) )

Is it missing "+ PAGE_SIZE"?

   if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
        (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) )


> +        return false;
> +
> +    /*
> +     * __acpi_map_table() will always return a pointer in the first page
> +     * of the ACPI fixmap region. The caller is expected to unmap
> +     * using the same address.
> +     */
> +    ASSERT((vaddr & PAGE_MASK) == FIXMAP_ADDR(FIXMAP_ACPI_BEGIN));
> +
> +    /* The allocated region fits in the ACPI fixmap region. */
> +    ASSERT(size < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE - vaddr));
> +    ASSERT(fixmap_inuse);
> +
> +    fixmap_inuse = false;
> +
> +    size += vaddr - FIXMAP_ADDR(FIXMAP_ACPI_BEGIN);
> +    idx = FIXMAP_ACPI_BEGIN;
> +
> +    do
> +    {
> +        clear_fixmap(idx);
> +        size -= min(size, (unsigned long)PAGE_SIZE);
> +        idx++;
> +    } while ( size > 0 );
>  
> -    return ((vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) &&
> -            (vaddr < (FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE)));
> +    return true;
>  }
>  
>  /* True to indicate PSCI 0.2+ is implemented */



From xen-devel-bounces@lists.xenproject.org Sat Oct 24 00:17:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 00:17:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11366.30127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7Fs-0007td-I5; Sat, 24 Oct 2020 00:17:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11366.30127; Sat, 24 Oct 2020 00:17:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7Fs-0007tW-Ev; Sat, 24 Oct 2020 00:17:56 +0000
Received: by outflank-mailman (input) for mailman id 11366;
 Sat, 24 Oct 2020 00:17:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKCc=D7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW7Fr-0007tR-4H
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 00:17:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9fedbe38-7150-45cb-92f6-55cd505ca8a1;
 Sat, 24 Oct 2020 00:17:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C987020936;
 Sat, 24 Oct 2020 00:17:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603498673;
	bh=9D5HnOGhWjHnq7RKFQPNreQollNGQIC38+3dRFokapw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=h+aOrtnMNu2dxAMND5417GLFFWQkrOb6o3Sw8+CmzGlP3TOTLG5TouhWsB1Fu8Vt/
	 uVRh9e+ewqm1/8MyAsvqUFM4nvJT7653B4KMsfunu1LOfGdwO6ya4UKmZDt8rQJx/s
	 5dXW+MqQKDVgfFxtej0SpzgvLfV37j12rND7N61s=
Date: Fri, 23 Oct 2020 17:17:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Rahul.Singh@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 4/7] xen/arm: Introduce fw_unreserved_regions() and
 use it
In-Reply-To: <20201023154156.6593-5-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2010231717420.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-5-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 23 Oct 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Since commit 6e3e77120378 "xen/arm: setup: Relocate the Device-Tree
> later on in the boot", the device-tree will not be kept mapped when
> using ACPI.
> 
> However, a few places are calling dt_unreserved_regions() which expects
> a valid DT. This will lead to a crash.
> 
> As the DT should not be used for ACPI (other than for detecting the
> modules), a new function fw_unreserved_regions() is introduced.
> 
> It will behave the same way on DT systems. On ACPI systems, it will
> unreserve the whole region.
> 
> Take the opportunity to clarify that bootinfo.reserved_mem is only used
> when booting using Device-Tree.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> Is there any region we should exclude on ACPI?
> 
>     Changes in v2:
>         - Add a comment on top of bootinfo.reserved_mem.
> ---
>  xen/arch/arm/kernel.c       |  2 +-
>  xen/arch/arm/setup.c        | 22 +++++++++++++++++-----
>  xen/include/asm-arm/setup.h |  3 ++-
>  3 files changed, 20 insertions(+), 7 deletions(-)
> 
> diff --git a/xen/arch/arm/kernel.c b/xen/arch/arm/kernel.c
> index 032923853f2c..ab78689ed2a6 100644
> --- a/xen/arch/arm/kernel.c
> +++ b/xen/arch/arm/kernel.c
> @@ -307,7 +307,7 @@ static __init int kernel_decompress(struct bootmodule *mod)
>       * Free the original kernel, update the pointers to the
>       * decompressed kernel
>       */
> -    dt_unreserved_regions(addr, addr + size, init_domheap_pages, 0);
> +    fw_unreserved_regions(addr, addr + size, init_domheap_pages, 0);
>  
>      return 0;
>  }
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 35e5bee04efa..7fcff9af2a7e 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -196,8 +196,9 @@ static void __init processor_id(void)
>      processor_setup();
>  }
>  
> -void __init dt_unreserved_regions(paddr_t s, paddr_t e,
> -                                  void (*cb)(paddr_t, paddr_t), int first)
> +static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
> +                                         void (*cb)(paddr_t, paddr_t),
> +                                         int first)
>  {
>      int i, nr = fdt_num_mem_rsv(device_tree_flattened);
>  
> @@ -244,6 +245,17 @@ void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>      cb(s, e);
>  }
>  
> +void __init fw_unreserved_regions(paddr_t s, paddr_t e,
> +                                  void (*cb)(paddr_t, paddr_t), int first)
> +{
> +    if ( acpi_disabled )
> +        dt_unreserved_regions(s, e, cb, first);
> +    else
> +        cb(s, e);
> +}
> +
> +
> +
>  struct bootmodule __init *add_boot_module(bootmodule_kind kind,
>                                            paddr_t start, paddr_t size,
>                                            bool domU)
> @@ -405,7 +417,7 @@ void __init discard_initial_modules(void)
>               !mfn_valid(maddr_to_mfn(e)) )
>              continue;
>  
> -        dt_unreserved_regions(s, e, init_domheap_pages, 0);
> +        fw_unreserved_regions(s, e, init_domheap_pages, 0);
>      }
>  
>      mi->nr_mods = 0;
> @@ -712,7 +724,7 @@ static void __init setup_mm(void)
>                  n = mfn_to_maddr(mfn_add(xenheap_mfn_start, xenheap_pages));
>              }
>  
> -            dt_unreserved_regions(s, e, init_boot_pages, 0);
> +            fw_unreserved_regions(s, e, init_boot_pages, 0);
>  
>              s = n;
>          }
> @@ -765,7 +777,7 @@ static void __init setup_mm(void)
>              if ( e > bank_end )
>                  e = bank_end;
>  
> -            dt_unreserved_regions(s, e, init_boot_pages, 0);
> +            fw_unreserved_regions(s, e, init_boot_pages, 0);
>              s = n;
>          }
>      }
> diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
> index 2f8f24e286ed..28bf622aa196 100644
> --- a/xen/include/asm-arm/setup.h
> +++ b/xen/include/asm-arm/setup.h
> @@ -67,6 +67,7 @@ struct bootcmdlines {
>  
>  struct bootinfo {
>      struct meminfo mem;
> +    /* The reserved regions are only used when booting using Device-Tree */
>      struct meminfo reserved_mem;
>      struct bootmodules modules;
>      struct bootcmdlines cmdlines;
> @@ -96,7 +97,7 @@ int construct_dom0(struct domain *d);
>  void create_domUs(void);
>  
>  void discard_initial_modules(void);
> -void dt_unreserved_regions(paddr_t s, paddr_t e,
> +void fw_unreserved_regions(paddr_t s, paddr_t e,
>                             void (*cb)(paddr_t, paddr_t), int first);
>  
>  size_t boot_fdt_info(const void *fdt, paddr_t paddr);
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 00:32:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 00:32:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11374.30157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7UH-0001Ks-C3; Sat, 24 Oct 2020 00:32:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11374.30157; Sat, 24 Oct 2020 00:32:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7UH-0001Kk-8m; Sat, 24 Oct 2020 00:32:49 +0000
Received: by outflank-mailman (input) for mailman id 11374;
 Sat, 24 Oct 2020 00:32:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKCc=D7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW7UG-0001KO-03
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 00:32:48 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 765bd07a-8b9b-4e33-ac31-6fddcf273180;
 Sat, 24 Oct 2020 00:32:47 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4216C2225E;
 Sat, 24 Oct 2020 00:32:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603499566;
	bh=TIVqHxRpQk4djQm/INICWBnFauo1ZEtb+0TiyGbJgik=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=1DtS0om7Fjgw7qU6EXN1M6hEPHUMWrm3dSmYl1W3AtKna7pxFBpE4JoyReJshQu0C
	 VuSU2GDwlcToCeV3WU6/DEz+VoHTYh1NR4Pi6VrzePqHuxhFPTEnAzshUmI9F7d5DY
	 QVY6HDMBh5KfRO0zhCDrDk2AaC9e2RwxDbY/IDFs=
Date: Fri, 23 Oct 2020 17:32:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Rahul.Singh@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v2 6/7] xen/arm: gic-v2: acpi: Use the correct length
 for the GICC structure
In-Reply-To: <20201023154156.6593-7-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2010231731010.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-7-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 23 Oct 2020, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> The length of the GICC structure in the MADT ACPI table differs between
> versions 5.1 and 6.0, although there are no other relevant differences.
> 
> Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
> overcome this issue.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/acpi/boot.c | 2 +-
>  xen/arch/arm/gic-v2.c    | 5 +++--
>  xen/arch/arm/gic-v3.c    | 2 +-
>  3 files changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> index 30e4bd1bc5a7..55c3e5cbc834 100644
> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -131,7 +131,7 @@ acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
>      struct acpi_madt_generic_interrupt *processor =
>                 container_of(header, struct acpi_madt_generic_interrupt, header);
>  
> -    if ( BAD_MADT_ENTRY(processor, end) )
> +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
>          return -EINVAL;
>  
>      acpi_table_print_madt_entry(header);
> diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> index 0f747538dbcd..0e5f23201974 100644
> --- a/xen/arch/arm/gic-v2.c
> +++ b/xen/arch/arm/gic-v2.c
> @@ -1136,7 +1136,8 @@ static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
>  
>      host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
>                               header);
> -    size = sizeof(struct acpi_madt_generic_interrupt);
> +
> +    size = ACPI_MADT_GICC_LENGTH;
>      /* Add Generic Interrupt */
>      for ( i = 0; i < d->max_vcpus; i++ )
>      {
> @@ -1165,7 +1166,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
>      struct acpi_madt_generic_interrupt *processor =
>                 container_of(header, struct acpi_madt_generic_interrupt, header);
>  
> -    if ( BAD_MADT_ENTRY(processor, end) )
> +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
>          return -EINVAL;
>  
>      /* Read from APIC table and fill up the GIC variables */
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index 0f6cbf6224e9..ce202402c0ed 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -1558,7 +1558,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
>      struct acpi_madt_generic_interrupt *processor =
>                 container_of(header, struct acpi_madt_generic_interrupt, header);
>  
> -    if ( BAD_MADT_ENTRY(processor, end) )
> +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
>          return -EINVAL;
>  
>      /* Read from APIC table and fill up the GIC variables */
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 00:32:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 00:32:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11373.30144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7UA-0001Ix-Ul; Sat, 24 Oct 2020 00:32:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11373.30144; Sat, 24 Oct 2020 00:32:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7UA-0001Iq-Rh; Sat, 24 Oct 2020 00:32:42 +0000
Received: by outflank-mailman (input) for mailman id 11373;
 Sat, 24 Oct 2020 00:32:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKCc=D7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW7U9-0001Il-CZ
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 00:32:41 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2ccc09d-1543-4118-8d7c-2366991adf9f;
 Sat, 24 Oct 2020 00:32:40 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 58E3B207F7;
 Sat, 24 Oct 2020 00:32:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603499559;
	bh=MpfKU2gOrSuRSvwuser9eJzzlEebpWAo2bL9BylnXf0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Y7vXXYjDLyvj3+g3hHpLsBRHb0Ug8L9LZo/us/GNavRUQrma4CxYMYQDeGWP52Mjn
	 Fzt1D46S5PQvNwPySZ6EdcihZCXys7GXcwsfpWDzGJla9lTVbtI4HRrMvukttJGGJN
	 /w77LzitDtY7VDOEFPFHdAZSjNrUBQVcoAG81uXY=
Date: Fri, 23 Oct 2020 17:32:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Rahul.Singh@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v2 5/7] xen/arm: acpi: add BAD_MADT_GICC_ENTRY() macro
In-Reply-To: <20201023154156.6593-6-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2010231719520.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-6-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 23 Oct 2020, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Imported from Linux commit b6cfb277378ef831c0fa84bcff5049307294adc6:
> 
>     The BAD_MADT_ENTRY() macro is designed to work for all of the subtables
>     of the MADT.  In the ACPI 5.1 version of the spec, the struct for the
>     GICC subtable (struct acpi_madt_generic_interrupt) is 76 bytes long; in
>     ACPI 6.0, the struct is 80 bytes long.  But, there is only one definition
>     in ACPICA for this struct -- and that is the 6.0 version.  Hence, when
>     BAD_MADT_ENTRY() compares the struct size to the length in the GICC
>     subtable, it fails if 5.1 structs are in use, and there are systems in
>     the wild that have them.
> 
>     This patch adds the BAD_MADT_GICC_ENTRY() that checks the GICC subtable
>     only, accounting for the difference in specification versions that are
>     possible.  The BAD_MADT_ENTRY() will continue to work as is for all other
>     MADT subtables.
> 
>     This code is being added to an arm64 header file since that is currently
>     the only architecture using the GICC subtable of the MADT.  As a GIC is
>     specific to ARM, it is also unlikely the subtable will be used elsewhere.
> 
>     Fixes: aeb823bbacc2 ("ACPICA: ACPI 6.0: Add changes for FADT table.")
>     Signed-off-by: Al Stone <al.stone@linaro.org>
>     Acked-by: Will Deacon <will.deacon@arm.com>
>     Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
>     [catalin.marinas@arm.com: extra brackets around macro arguments]
>     Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> 
> Changes in v2:
>     - Patch added
> 
> We may want to consider also importing:
> 
> commit 9eb1c92b47c73249465d388eaa394fe436a3b489
> Author: Jeremy Linton <jeremy.linton@arm.com>
> Date:   Tue Nov 27 17:59:12 2018 +0000

Sure


>     arm64: acpi: Prepare for longer MADTs
> 
>     The BAD_MADT_GICC_ENTRY check is a little too strict because
>     it rejects MADT entries that don't match the currently known
>     lengths. We should remove this restriction to avoid problems
>     if the table length changes. Future code which might depend on
>     additional fields should be written to validate those fields
>     before using them, rather than trying to globally check
>     known MADT version lengths.
> 
>     Link: https://lkml.kernel.org/r/20181012192937.3819951-1-jeremy.linton@arm.com
>     Signed-off-by: Jeremy Linton <jeremy.linton@arm.com>
>     [lorenzo.pieralisi@arm.com: added MADT macro comments]
>     Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
>     Acked-by: Sudeep Holla <sudeep.holla@arm.com>
>     Cc: Will Deacon <will.deacon@arm.com>
>     Cc: Catalin Marinas <catalin.marinas@arm.com>
>     Cc: Al Stone <ahs3@redhat.com>
>     Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
>     Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  xen/include/asm-arm/acpi.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/xen/include/asm-arm/acpi.h b/xen/include/asm-arm/acpi.h
> index 50340281a917..b52ae2d6ef72 100644
> --- a/xen/include/asm-arm/acpi.h
> +++ b/xen/include/asm-arm/acpi.h
> @@ -54,6 +54,14 @@ void acpi_smp_init_cpus(void);
>   */
>  paddr_t acpi_get_table_offset(struct membank tbl_add[], EFI_MEM_RES index);
>  
> +/* Macros for consistency checks of the GICC subtable of MADT */
> +#define ACPI_MADT_GICC_LENGTH	\
> +    (acpi_gbl_FADT.header.revision < 6 ? 76 : 80)
> +
> +#define BAD_MADT_GICC_ENTRY(entry, end)						\
> +    (!(entry) || (unsigned long)(entry) + sizeof(*(entry)) > (end) ||	\
> +     (entry)->header.length != ACPI_MADT_GICC_LENGTH)
> +
>  #ifdef CONFIG_ACPI
>  extern bool acpi_disabled;
>  /* Basic configuration for ACPI */
> -- 
> 2.17.1
> 
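[Archive note] The length check imported above can be illustrated with a minimal, self-contained sketch. The structs below are hypothetical trimmed-down stand-ins, not the real ACPICA definitions; only the shape of the logic matches the macro:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, trimmed-down stand-ins for the ACPI subtable layout. */
struct subtable_header {
    uint8_t type;
    uint8_t length;
};

struct madt_gicc {
    struct subtable_header header;
    /* ... remaining GICC fields elided for the sketch ... */
};

/* ACPI 5.1 defines the GICC subtable as 76 bytes; ACPI 6.0 as 80.
 * This mirrors ACPI_MADT_GICC_LENGTH keyed off the FADT revision. */
static uint8_t gicc_length(unsigned int fadt_revision)
{
    return fadt_revision < 6 ? 76 : 80;
}

/* Returns nonzero when the entry must be rejected, mirroring the macro:
 * NULL entry, entry overrunning the table end, or a subtable length that
 * does not match the spec revision in use. */
static int bad_madt_gicc_entry(const struct madt_gicc *entry,
                               unsigned long end,
                               unsigned int fadt_revision)
{
    return !entry ||
           (unsigned long)entry + sizeof(*entry) > end ||
           entry->header.length != gicc_length(fadt_revision);
}
```

The point of the per-revision length is visible here: a 76-byte GICC entry is valid on a 5.1 system but would be rejected by a fixed sizeof-based check compiled against the 6.0 struct.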


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 00:33:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 00:33:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11378.30169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7Ul-0001cv-Lz; Sat, 24 Oct 2020 00:33:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11378.30169; Sat, 24 Oct 2020 00:33:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7Ul-0001cn-If; Sat, 24 Oct 2020 00:33:19 +0000
Received: by outflank-mailman (input) for mailman id 11378;
 Sat, 24 Oct 2020 00:33:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKCc=D7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW7Uk-0001Um-IK
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 00:33:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3bf7024-3da8-42e5-9a84-c11b758c484f;
 Sat, 24 Oct 2020 00:33:13 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E7E03207F7;
 Sat, 24 Oct 2020 00:33:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603499592;
	bh=WV+e6y8JnoR4Clbfvz79BylQiPklji9goIMBmamCRZk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=zp3laeFWpKE4qFSIwpoCjQ46GfaXSpnv7sFhDPnT4ueNhaSex+PKwIu5/9HeK3Ywx
	 X9Hmd0sAkL7Rxre0c7OzYov2664do534oKafsOUrjrM2WxDNVp5Um8Rt6ApvbI7AYP
	 YCyJKrVUPGUy5DlBsu8+d61TJdHnHeFix7OwT0NE=
Date: Fri, 23 Oct 2020 17:33:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Rahul.Singh@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v2 7/7] xen/arm: acpi: Allow Xen to boot with ACPI 5.1
In-Reply-To: <20201023154156.6593-8-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2010231732150.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-8-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 23 Oct 2020, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> At the moment Xen requires the FADT ACPI table to be at least version
> 6.0, apparently because of some reliance on other ACPI v6.0 features.
> 
> But this is actually overzealous, and Xen now works fine with ACPI v5.1.
> 
> Let's relax the version check for the FADT table to allow QEMU to
> run the hypervisor with ACPI.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Patch added
> ---
>  xen/arch/arm/acpi/boot.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> index 55c3e5cbc834..7ea2990cb82c 100644
> --- a/xen/arch/arm/acpi/boot.c
> +++ b/xen/arch/arm/acpi/boot.c
> @@ -181,8 +181,8 @@ static int __init acpi_parse_fadt(struct acpi_table_header *table)
>       * we only deal with ACPI 6.0 or newer revision to get GIC and SMP
>       * boot protocol configuration data, or we will disable ACPI.
>       */
> -    if ( table->revision > 6
> -         || (table->revision == 6 && fadt->minor_revision >= 0) )
> +    if ( table->revision > 5
> +         || (table->revision == 5 && fadt->minor_revision >= 1) )
>          return 0;
>  
>      printk("Unsupported FADT revision %d.%d, should be 6.0+, will disable ACPI\n",
> -- 
> 2.17.1
> 
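[Archive note] The relaxed revision check in the hunk above can be modelled in isolation. The struct below is a hypothetical miniature, not the real FADT definition; it only captures the major/minor comparison:

```c
/* Hypothetical miniature of the relaxed check: accept FADT revision 5.1
 * or anything newer, matching the patched condition in acpi_parse_fadt(). */
struct fadt_rev {
    int major;   /* corresponds to table->revision */
    int minor;   /* corresponds to fadt->minor_revision */
};

static int fadt_supported(struct fadt_rev r)
{
    return r.major > 5 || (r.major == 5 && r.minor >= 1);
}
```

Compared with the old condition (`revision > 6 || (revision == 6 && minor >= 0)`), the only behavioural change is that 5.1 through 5.x now pass, which is what lets QEMU's ACPI 5.1 tables boot.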


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 00:45:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 00:45:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11390.30193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7gs-0002pA-V6; Sat, 24 Oct 2020 00:45:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11390.30193; Sat, 24 Oct 2020 00:45:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW7gs-0002p3-RJ; Sat, 24 Oct 2020 00:45:50 +0000
Received: by outflank-mailman (input) for mailman id 11390;
 Sat, 24 Oct 2020 00:45:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BKCc=D7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kW7gr-0002oy-Dc
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 00:45:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9bad57da-951a-40ec-a1f9-30c668982c6c;
 Sat, 24 Oct 2020 00:45:48 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4081F2087D;
 Sat, 24 Oct 2020 00:45:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603500347;
	bh=XyS8hRmjqJFKbI66U4BCw5giO+6os719277xBozoTBk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WRRIbYP1ZWEhzQ8yP/+qW1Te+eYPbQIjC4poBKwsj27DyJ7yccRsbBRg0WN1bg9BJ
	 Q1gzjnszAE2dId8KKtXRhqj5h1Bp67o8YmfVA3yptIbjvRrOXmbT7U2J2Up/6k2nS6
	 fZ8mPy70lbXiyL7c9Xc5x+IIXRQBQsEwjGoP+Rlk=
Date: Fri, 23 Oct 2020 17:45:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org, 
    alex.bennee@linaro.org, masami.hiramatsu@linaro.org, ehem+xen@m5p.com, 
    bertrand.marquis@arm.com, andre.przywara@arm.com, Rahul.Singh@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH v2 6/7] xen/arm: gic-v2: acpi: Use the correct length
 for the GICC structure
In-Reply-To: <alpine.DEB.2.21.2010231731010.12247@sstabellini-ThinkPad-T480s>
Message-ID: <alpine.DEB.2.21.2010231735000.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-7-julien@xen.org> <alpine.DEB.2.21.2010231731010.12247@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 23 Oct 2020, Stefano Stabellini wrote:
> On Fri, 23 Oct 2020, Julien Grall wrote:
> > From: Julien Grall <julien.grall@arm.com>
> > 
> > The length of the GICC structure in the MADT ACPI table differs between
> > version 5.1 and 6.0, although there are no other relevant differences.
> > 
> > Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
> > overcome this issue.
> > 
> > Signed-off-by: Julien Grall <julien.grall@arm.com>
> > Signed-off-by: Andre Przywara <andre.przywara@arm.com>
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Actually it looks like we need to do the substitution in a couple of other places:

- xen/arch/arm/gic-v3.c:gicv3_make_hwdom_madt
- xen/arch/arm/gic-v3.c:gic_acpi_get_madt_cpu_num
- xen/arch/arm/gic.c:gic_get_hwdom_madt_size

> >     Changes in v2:
> >         - Patch added
> > ---
> >  xen/arch/arm/acpi/boot.c | 2 +-
> >  xen/arch/arm/gic-v2.c    | 5 +++--
> >  xen/arch/arm/gic-v3.c    | 2 +-
> >  3 files changed, 5 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/arch/arm/acpi/boot.c b/xen/arch/arm/acpi/boot.c
> > index 30e4bd1bc5a7..55c3e5cbc834 100644
> > --- a/xen/arch/arm/acpi/boot.c
> > +++ b/xen/arch/arm/acpi/boot.c
> > @@ -131,7 +131,7 @@ acpi_parse_gic_cpu_interface(struct acpi_subtable_header *header,
> >      struct acpi_madt_generic_interrupt *processor =
> >                 container_of(header, struct acpi_madt_generic_interrupt, header);
> >  
> > -    if ( BAD_MADT_ENTRY(processor, end) )
> > +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
> >          return -EINVAL;
> >  
> >      acpi_table_print_madt_entry(header);
> > diff --git a/xen/arch/arm/gic-v2.c b/xen/arch/arm/gic-v2.c
> > index 0f747538dbcd..0e5f23201974 100644
> > --- a/xen/arch/arm/gic-v2.c
> > +++ b/xen/arch/arm/gic-v2.c
> > @@ -1136,7 +1136,8 @@ static int gicv2_make_hwdom_madt(const struct domain *d, u32 offset)
> >  
> >      host_gicc = container_of(header, struct acpi_madt_generic_interrupt,
> >                               header);
> > -    size = sizeof(struct acpi_madt_generic_interrupt);
> > +
> > +    size = ACPI_MADT_GICC_LENGTH;
> >      /* Add Generic Interrupt */
> >      for ( i = 0; i < d->max_vcpus; i++ )
> >      {
> > @@ -1165,7 +1166,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
> >      struct acpi_madt_generic_interrupt *processor =
> >                 container_of(header, struct acpi_madt_generic_interrupt, header);
> >  
> > -    if ( BAD_MADT_ENTRY(processor, end) )
> > +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
> >          return -EINVAL;
> >  
> >      /* Read from APIC table and fill up the GIC variables */
> > diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> > index 0f6cbf6224e9..ce202402c0ed 100644
> > --- a/xen/arch/arm/gic-v3.c
> > +++ b/xen/arch/arm/gic-v3.c
> > @@ -1558,7 +1558,7 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
> >      struct acpi_madt_generic_interrupt *processor =
> >                 container_of(header, struct acpi_madt_generic_interrupt, header);
> >  
> > -    if ( BAD_MADT_ENTRY(processor, end) )
> > +    if ( BAD_MADT_GICC_ENTRY(processor, end) )
> >          return -EINVAL;
> >  
> >      /* Read from APIC table and fill up the GIC variables */
> > -- 
> > 2.17.1
> > 
> 


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 01:57:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 01:57:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11405.30241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW8nf-0005tf-Ff; Sat, 24 Oct 2020 01:56:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11405.30241; Sat, 24 Oct 2020 01:56:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW8nf-0005tY-CE; Sat, 24 Oct 2020 01:56:55 +0000
Received: by outflank-mailman (input) for mailman id 11405;
 Sat, 24 Oct 2020 01:56:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Iak=D7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kW8nd-0005tT-S1
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 01:56:53 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de66172b-b1eb-497c-862e-ed9e9a4dce1a;
 Sat, 24 Oct 2020 01:56:50 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09O1udLq093473
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 23 Oct 2020 21:56:45 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09O1udR5093472;
 Fri, 23 Oct 2020 18:56:39 -0700 (PDT) (envelope-from ehem)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+Iak=D7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
	id 1kW8nd-0005tT-S1
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 01:56:53 +0000
X-Inumbo-ID: de66172b-b1eb-497c-862e-ed9e9a4dce1a
Received: from mailhost.m5p.com (unknown [74.104.188.4])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id de66172b-b1eb-497c-862e-ed9e9a4dce1a;
	Sat, 24 Oct 2020 01:56:50 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
	by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09O1udLq093473
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
	Fri, 23 Oct 2020 21:56:45 -0400 (EDT)
	(envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
	by m5p.com (8.15.2/8.15.2/Submit) id 09O1udR5093472;
	Fri, 23 Oct 2020 18:56:39 -0700 (PDT)
	(envelope-from ehem)
Date: Fri, 23 Oct 2020 18:56:38 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201024015638.GC90171@mattapan.m5p.com>
References: <20201012215751.GB89158@mattapan.m5p.com>
 <c38d78bd-c011-404b-5f59-d10cd7d7f006@xen.org>
 <20201016003024.GA13290@mattapan.m5p.com>
 <23885c28-dee5-4e9a-dc43-6ccf19a94df6@xen.org>
 <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 23, 2020 at 04:59:30PM -0700, Stefano Stabellini wrote:
> This is what is going on. kernel/dma/swiotlb.c:swiotlb_init gets called
> and tries to allocate a buffer for the swiotlb. It does so by calling
> 
>   memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
> 
> In your case, the allocation must fail, no_iotlb_memory is set, and I
> expect you see this warning among the boot messages:
> 
>   Cannot allocate buffer
> 
> Later during initialization swiotlb-xen comes in
> (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
> is != 0 it thinks the memory is ready to use when actually it is not.
> 
> When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
> and since no_iotlb_memory is set the kernel panics.
> 
> 
> The reason why you are only seeing it with a 2G dom0 is because
> swiotlb_init is only called when:
> 
>   max_pfn > PFN_DOWN(arm64_dma_phys_limit ? : arm64_dma32_phys_limit))
> 
> see arch/arm64/mm/init.c:mem_init. So when dom0 is 512MB swiotlb_init is
> not called at all. swiotlb-xen does the allocation itself with
> memblock_alloc and it succeeds.
> 
> Note that I tried to repro the issue here at my end but it works for me
> with device tree. So the swiotlb_init memory allocation failure probably
> only shows on ACPI, maybe because ACPI is reserving too much low memory.
> 
> In any case, I think the issue might be "fixed" by this patch:
> 
> 
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index c19379fabd20..84e15e7d3929 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -231,6 +231,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
>  	}
>  	io_tlb_index = 0;
> +	no_iotlb_memory = false;
>  
>  	if (verbose)
>  		swiotlb_print_info();
> @@ -263,8 +264,11 @@ swiotlb_init(int verbose)
>  		return;
>  
>  	if (io_tlb_start)
> +	{
>  		memblock_free_early(io_tlb_start,
>  				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
> +		io_tlb_start = 0;
> +	}
>  	pr_warn("Cannot allocate buffer");
>  	no_iotlb_memory = true;
>  }
> @@ -362,6 +366,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
>  	}
>  	io_tlb_index = 0;
> +	no_iotlb_memory = false;
>  
>  	swiotlb_print_info();

This does indeed appear to take care of the domain 0 panic.
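[Archive note] Stefano's diagnosis above reduces to a state-consistency bug between two components. The sketch below is a hypothetical, minimal model: the variable names mirror the kernel globals, but all allocation logic is stubbed out to show only the pre-fix vs. post-fix behaviour:

```c
#include <stdbool.h>

/* Hypothetical minimal model of the state shared between
 * kernel/dma/swiotlb.c and drivers/xen/swiotlb-xen.c. */
static unsigned long io_tlb_start;
static bool no_iotlb_memory;

/* Models swiotlb_init() when memblock_alloc_low() fails.  Without the
 * fix, io_tlb_start is left nonzero; the fix clears it alongside
 * setting no_iotlb_memory. */
static void swiotlb_init_alloc_failed(bool with_fix)
{
    io_tlb_start = 0x1000;   /* leftover value from the failed setup path */
    no_iotlb_memory = true;
    if (with_fix)
        io_tlb_start = 0;
}

/* Models xen_swiotlb_init(): it treats a nonzero io_tlb_start as "buffer
 * already allocated" and skips its own memblock_alloc(). */
static bool xen_swiotlb_thinks_ready(void)
{
    return io_tlb_start != 0;
}
```

Pre-fix, `xen_swiotlb_thinks_ready()` returns true while `no_iotlb_memory` is also true, so the first real mapping hits the panic in swiotlb_tbl_map_single; post-fix, swiotlb-xen sees io_tlb_start == 0 and performs its own allocation.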

The last remaining issue is the framebuffer; once that is resolved, this
project will have reached usability. My impression was that the framebuffer
was an issue for device tree as well.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Oct 24 02:19:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 02:19:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11413.30264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW99m-0008Nk-FF; Sat, 24 Oct 2020 02:19:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11413.30264; Sat, 24 Oct 2020 02:19:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW99m-0008Nd-CB; Sat, 24 Oct 2020 02:19:46 +0000
Received: by outflank-mailman (input) for mailman id 11413;
 Sat, 24 Oct 2020 02:19:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kW99k-0008NY-TK
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 02:19:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b839f84a-4e9d-453c-8f83-079d4757bbae;
 Sat, 24 Oct 2020 02:19:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW99i-0000mg-IK; Sat, 24 Oct 2020 02:19:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW99i-00073L-CO; Sat, 24 Oct 2020 02:19:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kW99i-0005L3-Bt; Sat, 24 Oct 2020 02:19:42 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=08F5JvJaGa/kjx9+xPBNwfC9csS/k+vDIKZ/DawCMvE=; b=3ntJ03xDohrSV3tGOwGXsYRRpN
	WrdLLOJQ6k71wH0Syfh+o9keffdftbDnBh3Xr+M+huJfL6LEoK3APgUlVn1tplrRPbn7Il7OQE7mn
	DqBVc/Ok5/e9cSVXQ+E+7qCHxYT9YAm0q4EBvL5xKm5xOcsS6k6x3A4WT7AhSwiH5kOg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156146-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156146: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 02:19:42 +0000

flight 156146 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156146/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    0 days
Failing since        156120  2020-10-23 14:01:24 Z    0 days    5 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility to log in init-xenstore-domain: use -v[...]
    to select the log level as in xl, and log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 02:26:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 02:26:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11418.30280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW9G2-0000yB-Bm; Sat, 24 Oct 2020 02:26:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11418.30280; Sat, 24 Oct 2020 02:26:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW9G2-0000y4-8k; Sat, 24 Oct 2020 02:26:14 +0000
Received: by outflank-mailman (input) for mailman id 11418;
 Sat, 24 Oct 2020 02:26:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kW9G1-0000xz-3Q
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 02:26:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78132c8d-d367-469d-93f5-ca39bee2ce18;
 Sat, 24 Oct 2020 02:26:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW9Fw-0000uc-CK; Sat, 24 Oct 2020 02:26:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kW9Fw-0007Tm-57; Sat, 24 Oct 2020 02:26:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kW9Fw-0002L5-4e; Sat, 24 Oct 2020 02:26:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kW9G1-0000xz-3Q
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 02:26:13 +0000
X-Inumbo-ID: 78132c8d-d367-469d-93f5-ca39bee2ce18
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 78132c8d-d367-469d-93f5-ca39bee2ce18;
	Sat, 24 Oct 2020 02:26:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=tUr/Gn3BddzEHn61fZmNIYWGL9MA7VdTEWEG1hkp9Xw=; b=YaW8w5+6EWICxfBSYcDkwhuapI
	KzJk8ZRaynIBpjGebacgCosh4elTMjgcQRR/xQCjgp6vhmdw33M4h7m4jdDUUgeJ24QQjm7Gvt87l
	WeNSAHCs2FyV/r+NGm6H1+OtBLBk0eSBvMK5h80/SiaaaMJstT3/lINXGHfk8E+1Mc7E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW9Fw-0000uc-CK; Sat, 24 Oct 2020 02:26:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW9Fw-0007Tm-57; Sat, 24 Oct 2020 02:26:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kW9Fw-0002L5-4e; Sat, 24 Oct 2020 02:26:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-arm64-xsm
Message-Id: <E1kW9Fw-0002L5-4e@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 02:26:08 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-arm64-xsm
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  f89955449c5a47ff688e91873bbce4c3670ed9fe
  Bug not present: 56c1aca6a2bc013f45e7af2fa88605a693402770
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156157/


  commit f89955449c5a47ff688e91873bbce4c3670ed9fe
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Oct 23 15:53:10 2020 +0200
  
      tools/init-xenstore-domain: support xenstore pvh stubdom
      
      Instead of creating the xenstore-stubdom domain first and parsing the
      kernel later, do it the other way round. This makes it possible to
      probe for the domain type supported by the xenstore-stubdom and to
      support both PV and PVH type stubdoms.
      
      Try to parse the stubdom image first for PV support; if this fails,
      use HVM. Then create the domain with the appropriate type selected.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Acked-by: Wei Liu <wl@xen.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build --summary-out=tmp/156157.bisection-summary --basis-template=156117 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-arm64-xsm xen-build
Searching for failure / basis pass:
 156146 fail [host=laxton0] / 156120 [host=rochester1] 156117 [host=rochester0] 156108 [host=rochester0] 156047 [host=rochester0] 156029 ok.
Failure / basis pass flights: 156146 / 156029
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Basis pass ea6d3cd1ed79d824e605a70c3626bc437c386260 0514a3a25fb9ebff5d75cc8f00a9229385300858
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#0514a3a25fb9ebff5d75cc8f00a9229385300858-4ddd6499d999a7d08cabfda5b0262e473dd5beed
Loaded 5001 nodes in revision graph
Searching for test results:
 156029 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 0514a3a25fb9ebff5d75cc8f00a9229385300858
 156047 [host=rochester0]
 156108 [host=rochester0]
 156117 [host=rochester0]
 156120 [host=rochester1]
 156129 [host=rochester0]
 156133 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156140 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156145 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 0514a3a25fb9ebff5d75cc8f00a9229385300858
 156147 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156148 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 710f62cc826bb8c7ead99f9d6b6b269e39ff3e98
 156149 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 f89955449c5a47ff688e91873bbce4c3670ed9fe
 156150 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
 156151 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 70cf8e9acada638f68c1c597d7580500d9f21c91
 156152 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770
 156153 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 f89955449c5a47ff688e91873bbce4c3670ed9fe
 156154 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770
 156155 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 f89955449c5a47ff688e91873bbce4c3670ed9fe
 156156 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770
 156146 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156157 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 f89955449c5a47ff688e91873bbce4c3670ed9fe
Searching for interesting versions
 Result found: flight 156029 (pass), for basis pass
 For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770, results HASH(0x55a6ef49eed0) HASH(0x55a6ef4adaf8) HASH(0x55a6ef4b1688) For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 70cf8e9acada638f68c1c597d7580500d9f21c91, results HASH(0x55a6ef49b040) For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625, results HASH(0x55a6ef4a4408) For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 710f62cc826bb8c7ead99f9d6b6b269e39ff3e98, results HASH(0x55a6ef499338) For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 0514a3a25fb9ebff5d75cc8f00a9229385300858, results HASH(0x55a6ef492878) HASH(0x55a6ef49f4d0) Result found: flight 156133 (fail), for basis failure (at ancestor ~484)
 Repro found: flight 156145 (pass), for basis pass
 Repro found: flight 156146 (fail), for basis failure
 0 revisions at ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770
No revisions left to test, checking graph state.
 Result found: flight 156152 (pass), for last pass
 Result found: flight 156153 (fail), for first failure
 Repro found: flight 156154 (pass), for last pass
 Repro found: flight 156155 (fail), for first failure
 Repro found: flight 156156 (pass), for last pass
 Repro found: flight 156157 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  f89955449c5a47ff688e91873bbce4c3670ed9fe
  Bug not present: 56c1aca6a2bc013f45e7af2fa88605a693402770
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156157/


  commit f89955449c5a47ff688e91873bbce4c3670ed9fe
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Oct 23 15:53:10 2020 +0200
  
      tools/init-xenstore-domain: support xenstore pvh stubdom
      
      Instead of creating the xenstore-stubdom domain first and parsing the
      kernel later, do it the other way round. This makes it possible to
      probe for the domain type supported by the xenstore-stubdom and to
      support both PV and PVH type stubdoms.
      
      Try to parse the stubdom image first for PV support; if this fails,
      use HVM. Then create the domain with the appropriate type selected.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Acked-by: Wei Liu <wl@xen.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
156157: tolerable ALL FAIL

flight 156157 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156157/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64-xsm               6 xen-build               fail baseline untested


jobs:
 build-arm64-xsm                                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat Oct 24 03:13:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 03:13:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11424.30301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW9zY-0005oQ-56; Sat, 24 Oct 2020 03:13:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11424.30301; Sat, 24 Oct 2020 03:13:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kW9zY-0005nn-1h; Sat, 24 Oct 2020 03:13:16 +0000
Received: by outflank-mailman (input) for mailman id 11424;
 Sat, 24 Oct 2020 03:13:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Iak=D7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kW9zW-0005iR-Rx
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 03:13:14 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ecc9bd65-dfd7-4df5-9f1b-88b8e303f069;
 Sat, 24 Oct 2020 03:13:13 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09O3D3ZS095285
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 23 Oct 2020 23:13:09 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09O3D3H6095284;
 Fri, 23 Oct 2020 20:13:03 -0700 (PDT) (envelope-from ehem)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+Iak=D7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
	id 1kW9zW-0005iR-Rx
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 03:13:14 +0000
X-Inumbo-ID: ecc9bd65-dfd7-4df5-9f1b-88b8e303f069
Received: from mailhost.m5p.com (unknown [74.104.188.4])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ecc9bd65-dfd7-4df5-9f1b-88b8e303f069;
	Sat, 24 Oct 2020 03:13:13 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
	by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09O3D3ZS095285
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
	Fri, 23 Oct 2020 23:13:09 -0400 (EDT)
	(envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
	by m5p.com (8.15.2/8.15.2/Submit) id 09O3D3H6095284;
	Fri, 23 Oct 2020 20:13:03 -0700 (PDT)
	(envelope-from ehem)
Date: Fri, 23 Oct 2020 20:13:03 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201024031303.GD90171@mattapan.m5p.com>
References: <20201012215751.GB89158@mattapan.m5p.com>
 <c38d78bd-c011-404b-5f59-d10cd7d7f006@xen.org>
 <20201016003024.GA13290@mattapan.m5p.com>
 <23885c28-dee5-4e9a-dc43-6ccf19a94df6@xen.org>
 <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 23, 2020 at 04:59:30PM -0700, Stefano Stabellini wrote:
> This is what is going on. kernel/dma/swiotlb.c:swiotlb_init gets called
> and tries to allocate a buffer for the swiotlb. It does so by calling
> 
>   memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
> 
> In your case, the allocation must fail, no_iotlb_memory is set, and I
> expect you see this warning among the boot messages:
> 
>   Cannot allocate buffer
> 
> Later during initialization swiotlb-xen comes in
> (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
> is != 0 it thinks the memory is ready to use when actually it is not.

I forgot to respond to this portion even though I'd noticed it on my
first read.  I haven't ever observed that message.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Oct 24 03:32:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 03:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11429.30316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWAHk-0007h0-RW; Sat, 24 Oct 2020 03:32:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11429.30316; Sat, 24 Oct 2020 03:32:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWAHk-0007gt-Nq; Sat, 24 Oct 2020 03:32:04 +0000
Received: by outflank-mailman (input) for mailman id 11429;
 Sat, 24 Oct 2020 03:32:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWAHk-0007gF-9m
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 03:32:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ce9a952-5151-4779-b5e1-36446c5a9cd0;
 Sat, 24 Oct 2020 03:31:58 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWAHd-0002FT-JJ; Sat, 24 Oct 2020 03:31:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWAHd-0003Ek-8X; Sat, 24 Oct 2020 03:31:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWAHd-0000He-7z; Sat, 24 Oct 2020 03:31:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWAHk-0007gF-9m
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 03:32:04 +0000
X-Inumbo-ID: 1ce9a952-5151-4779-b5e1-36446c5a9cd0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 1ce9a952-5151-4779-b5e1-36446c5a9cd0;
	Sat, 24 Oct 2020 03:31:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dhtCUPiettNwiXtoylL3td/6QoeXoRQzRj2FEKOyTrc=; b=Bb4xr51kkMGxkKCGjXxchf1Sk8
	5B6vNrjKFdEqSONV/S50E2DraenBQdCJL8wClIAB6XSigPtKh/1aHXV2AWchw7R1y1E/mRkBEX501
	Lj0Cj5AqAe0CoPNQJHXaBZzxMBwc4pTVtBp6/iupSdvKF9HPV5URblXsMfR1Oa1dsNl4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWAHd-0002FT-JJ; Sat, 24 Oct 2020 03:31:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWAHd-0003Ek-8X; Sat, 24 Oct 2020 03:31:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWAHd-0000He-7z; Sat, 24 Oct 2020 03:31:57 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156143: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf:host-reuse/final/host:broken:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 03:31:57 +0000

flight 156143 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156143/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-armhf                   4 host-reuse/final/host broken blocked in 152631
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   64 days
Failing since        152659  2020-08-21 14:07:39 Z   63 days  126 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    1 day    11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step build-armhf host-reuse/final/host

Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 03:49:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 03:49:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11434.30328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWAYp-0000bP-HI; Sat, 24 Oct 2020 03:49:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11434.30328; Sat, 24 Oct 2020 03:49:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWAYp-0000bI-EC; Sat, 24 Oct 2020 03:49:43 +0000
Received: by outflank-mailman (input) for mailman id 11434;
 Sat, 24 Oct 2020 03:49:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWAYn-0000bC-QS
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 03:49:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18906d2e-691f-4ad3-b6d1-580f3afae178;
 Sat, 24 Oct 2020 03:49:39 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWAYk-0002bv-QK; Sat, 24 Oct 2020 03:49:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWAYk-00043D-FI; Sat, 24 Oct 2020 03:49:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWAYk-0005mx-En; Sat, 24 Oct 2020 03:49:38 +0000
X-Inumbo-ID: 18906d2e-691f-4ad3-b6d1-580f3afae178
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rKzU3P+t9WchZ54xiyns4F8pn0BKwkE6EqMK1aPTBjE=; b=vMdYhyTcWRUJfUTFC7UM2vhmse
	Zh5YUa9vaVuo8Z74XdIRR5FAcjLuGUmUUwvZU5V4y0hA6mJ47MbyXC3HTyY5LqzwyhM79avMTcwJS
	yN8dbKF6lRNvafUIoex3SGQHgS8FIRgGUXO8aQX4E/Xf+qQZQ9wd6UIEQz+cwpLZPAsw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156127-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156127: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f9893351acaecf0a414baf9942b48d5bb5c688c6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 03:49:38 +0000

flight 156127 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156127/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2 10 host-ping-check-xen fail in 156116 pass in 156127
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen fail in 156116 pass in 156127
 test-arm64-arm64-examine      8 reboot           fail in 156116 pass in 156127
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen        fail pass in 156116

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 156116 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f9893351acaecf0a414baf9942b48d5bb5c688c6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   84 days
Failing since        152366  2020-08-01 20:49:34 Z   83 days  140 attempts
Testing same since   156116  2020-10-23 06:36:51 Z    0 days    2 attempts

------------------------------------------------------------
3287 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 608329 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 04:35:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 04:35:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11441.30349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWBGd-0005Sq-Tu; Sat, 24 Oct 2020 04:34:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11441.30349; Sat, 24 Oct 2020 04:34:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWBGd-0005Sj-Qw; Sat, 24 Oct 2020 04:34:59 +0000
Received: by outflank-mailman (input) for mailman id 11441;
 Sat, 24 Oct 2020 04:34:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Iak=D7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kWBGc-0005Se-Nu
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 04:34:58 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b8c1b87-12b5-4c96-b5b6-a22715a1dad3;
 Sat, 24 Oct 2020 04:34:57 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09O4YlGZ097256
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sat, 24 Oct 2020 00:34:53 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09O4Yk1e097255;
 Fri, 23 Oct 2020 21:34:46 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: 4b8c1b87-12b5-4c96-b5b6-a22715a1dad3
Date: Fri, 23 Oct 2020 21:34:46 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201024043446.GA97167@mattapan.m5p.com>
References: <20201012215751.GB89158@mattapan.m5p.com>
 <c38d78bd-c011-404b-5f59-d10cd7d7f006@xen.org>
 <20201016003024.GA13290@mattapan.m5p.com>
 <23885c28-dee5-4e9a-dc43-6ccf19a94df6@xen.org>
 <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 23, 2020 at 04:59:30PM -0700, Stefano Stabellini wrote:
> On Fri, 23 Oct 2020, Elliott Mitchell wrote:
> > Finally came up with one detail of a change I'd made in the right time
> > frame to trigger this issue.  As such I can now control this behavior and
> > get it to occur or not.
> > 
> > I have some suspicion my planned workload approach differs from others.
> > 
> > During the runs where I was able to successfully boot a child domain,
> > domain 0 had been allocated 512MB of memory.  During the unsuccessful run
> > where the above message occurred, domain 0 had been allocated 2GB of
> > memory.  This appears to reliably control the occurrence of this bug.

> In your case, the allocation must fail, no_iotlb_memory is set, and I
> expect you see this warning among the boot messages:
> 
>   Cannot allocate buffer
> 
> Later during initialization swiotlb-xen comes in
> (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
> is != 0 it thinks the memory is ready to use when actually it is not.

I then looked at more copies of `dmesg` logs and discovered I did have one
where that message occurred.  After experimenting a bit more, a matching
pattern emerged.

dom0_mem=1023M
=> "software IO TLB: mapped [mem 0x2c000000-0x30000000] (64MB)"

dom0_mem=1024M
=> "software IO TLB: Cannot allocate buffer"

That looks suspicious.  Really suspicious.  I don't know how many bugs
are involved, nor where they are, but here is more data for you.

One possibility is that TianoCore marks a smaller region of memory as
DMA-capable, though this is speculation.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Oct 24 05:02:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 05:02:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11446.30360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWBh0-0000K1-WF; Sat, 24 Oct 2020 05:02:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11446.30360; Sat, 24 Oct 2020 05:02:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWBh0-0000Ju-TE; Sat, 24 Oct 2020 05:02:14 +0000
Received: by outflank-mailman (input) for mailman id 11446;
 Sat, 24 Oct 2020 05:02:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWBgy-0000Jp-RR
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 05:02:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2104490-5934-4fb3-b701-10ddb4a6b3d2;
 Sat, 24 Oct 2020 05:02:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWBgw-0004ZQ-Go; Sat, 24 Oct 2020 05:02:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWBgw-0007G9-86; Sat, 24 Oct 2020 05:02:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWBgw-0003yl-7Y; Sat, 24 Oct 2020 05:02:10 +0000
X-Inumbo-ID: d2104490-5934-4fb3-b701-10ddb4a6b3d2
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2v5+vcQbu56/FoWc89Ti0+ftqYsMjH3eNfUX+QsXp9Y=; b=qJpsYu1sHDn+pu9y0M8pTCMUWa
	21EuHrSwOKzvqohA6427mLPZbWbOVtSqb7LAGngtgb/lB3n26WAtDOlFLWJx4Xlbk4U3QdIZfB8qv
	0Dw2asBUEza9I5+1VllfZM0LDv/MjxCQqKEoZ+/savoQLZPYY0fNlow5cFKv4FkZsiKU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156159-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156159: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 05:02:10 +0000

flight 156159 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156159/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    0 days
Failing since        156120  2020-10-23 14:01:24 Z    0 days    6 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links to access the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This removes the need to link
    those headers into tools/include, and library-specific include paths
    are no longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the ability to log in init-xenstore-domain: use -v[...] to
    select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 05:36:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 05:36:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11451.30376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWCDa-0003XZ-NJ; Sat, 24 Oct 2020 05:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11451.30376; Sat, 24 Oct 2020 05:35:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWCDa-0003XS-KF; Sat, 24 Oct 2020 05:35:54 +0000
Received: by outflank-mailman (input) for mailman id 11451;
 Sat, 24 Oct 2020 05:35:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+Iak=D7=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kWCDZ-0003XN-QT
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 05:35:53 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b8561f3-1386-4cf9-8494-0455bc26722f;
 Sat, 24 Oct 2020 05:35:52 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09O5ZfwU097466
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Sat, 24 Oct 2020 01:35:47 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09O5Zeol097465;
 Fri, 23 Oct 2020 22:35:40 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: 4b8561f3-1386-4cf9-8494-0455bc26722f
Date: Fri, 23 Oct 2020 22:35:40 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201024053540.GA97417@mattapan.m5p.com>
References: <20201012215751.GB89158@mattapan.m5p.com>
 <c38d78bd-c011-404b-5f59-d10cd7d7f006@xen.org>
 <20201016003024.GA13290@mattapan.m5p.com>
 <23885c28-dee5-4e9a-dc43-6ccf19a94df6@xen.org>
 <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Fri, Oct 23, 2020 at 04:59:30PM -0700, Stefano Stabellini wrote:
> Note that I tried to repro the issue here at my end but it works for me
> with device tree. So the swiotlb_init memory allocation failure probably
> only shows on ACPI, maybe because ACPI is reserving too much low memory.

Found it.  Take a look at 437b0aa06a014ce174e24c0d3530b3e9ab19b18b

 PLATFORM_START(rpi4, "Raspberry Pi 4")
     .compatible     = rpi4_dt_compat,
     .blacklist_dev  = rpi4_blacklist_dev,
+    .dma_bitsize    = 30,
 PLATFORM_END

This entry is matched against a *device-tree*.  ACPI has a distinct
means of specifying a limited DMA width; the quirk above never fires,
because it assumes a *device-tree*.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat Oct 24 05:38:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 05:38:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11455.30388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWCFk-0003hR-4C; Sat, 24 Oct 2020 05:38:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11455.30388; Sat, 24 Oct 2020 05:38:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWCFk-0003hK-13; Sat, 24 Oct 2020 05:38:08 +0000
Received: by outflank-mailman (input) for mailman id 11455;
 Sat, 24 Oct 2020 05:38:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWCFi-0003hE-HW
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 05:38:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63ceb179-0afc-4f8a-8e5b-ff9a7cee6d56;
 Sat, 24 Oct 2020 05:38:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWCFb-0005Hj-TE; Sat, 24 Oct 2020 05:37:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWCFb-0001Aj-I4; Sat, 24 Oct 2020 05:37:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWCFb-0008Pb-HZ; Sat, 24 Oct 2020 05:37:59 +0000
X-Inumbo-ID: 63ceb179-0afc-4f8a-8e5b-ff9a7cee6d56
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KldK/eBtGUHJU60O8zTheJQhxeEb/xcR0VB3/pz5Ifg=; b=2MAwrrqObBR8qTpruJTwIyiZV+
	S0Yy7myIJ+2jSG+HTW963gKI5e/OLj1JyjGSVMOWU3miCZqaLMi1R64/iZBhEns2XhJNHuA8uCDBq
	JRLDV8q0ghE4o2mXlQUn1Zk6k0hY8G9yauGQLLDnJABWN9tawMyMsehgsF12f6P958aw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156160-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156160: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 05:37:59 +0000

flight 156160 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156160/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   64 days
Failing since        152659  2020-08-21 14:07:39 Z   63 days  127 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    1 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 07:28:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 07:28:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11466.30413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWDyP-00063L-Cl; Sat, 24 Oct 2020 07:28:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11466.30413; Sat, 24 Oct 2020 07:28:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWDyP-00063E-9O; Sat, 24 Oct 2020 07:28:21 +0000
Received: by outflank-mailman (input) for mailman id 11466;
 Sat, 24 Oct 2020 07:28:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWDyN-000639-CD
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 07:28:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ae71b36-2f5b-475d-bf30-ba1dc2694280;
 Sat, 24 Oct 2020 07:28:16 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWDyK-0007ZR-0F; Sat, 24 Oct 2020 07:28:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWDyJ-0007yk-OD; Sat, 24 Oct 2020 07:28:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWDyJ-0005TK-Ni; Sat, 24 Oct 2020 07:28:15 +0000
X-Inumbo-ID: 6ae71b36-2f5b-475d-bf30-ba1dc2694280
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g3jAtbKuzEt0H64Oa0gI2Mttnv5VnssGr17WwSVpKUg=; b=nu+oQ4ukbp4KGyvGglv6+6Hcas
	0N5wSYZmgp4bzImGrsVmpuj6lwYJ2xkjGv+q+sAQOYpdp7bIfVvRUxkxRel6Q567PbdPPUN0YMOa9
	0bXZAJ3NmtmBaiWyxPRE8/kAVkfcoAiCv72saYgevpKI89T1/VLpxTkdpJs/KdWpwEUo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156164-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156164: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 07:28:15 +0000

flight 156164 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156164/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    0 days
Failing since        156120  2020-10-23 14:01:24 Z    0 days    7 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both pv and pvh type stubdoms.
    
    Try to parse the stubdom image first for PV support; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing; remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 07:48:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 07:48:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11472.30431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWEI2-00086C-3l; Sat, 24 Oct 2020 07:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11472.30431; Sat, 24 Oct 2020 07:48:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWEI2-000865-0g; Sat, 24 Oct 2020 07:48:38 +0000
Received: by outflank-mailman (input) for mailman id 11472;
 Sat, 24 Oct 2020 07:48:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWEI1-00084n-7X
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 07:48:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29965a3e-983f-4f61-9175-0934165dbd88;
 Sat, 24 Oct 2020 07:48:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWEHt-0007yO-DM; Sat, 24 Oct 2020 07:48:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWEHt-00006Y-3A; Sat, 24 Oct 2020 07:48:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWEHt-0000wH-2f; Sat, 24 Oct 2020 07:48:29 +0000
X-Inumbo-ID: 29965a3e-983f-4f61-9175-0934165dbd88
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=THmhYrtow6JB7kJbtZNCGwGQ9z3Db2JgU/RP21p/9hk=; b=S/GvW2pcu85opY1CSbNUqfexmU
	lYpwHqPCP4EUKTsLlk/XElc+rN8XHLHfrW90fJZ3TcTjHWwfIU/PfsGRbc0fJG3w/I/X6s5gI4jF3
	r+hEMvHiSosDd1mkXTSBVL+sAVV+P8XfvpI4T2qZvuJJ56u6HPTumtyr4ZRxEKTF+nEs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156136-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156136: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
X-Osstest-Versions-That:
    xen=0dfddb2116e3757f77a691a3fe335173088d69dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 07:48:29 +0000

flight 156136 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156136/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156013
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156013
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156013
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156013
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156013
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156013
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156013
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156013
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156013
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156013
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156013
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c
baseline version:
 xen                  0dfddb2116e3757f77a691a3fe335173088d69dc

Last test of basis   156013  2020-10-20 04:30:46 Z    4 days
Failing since        156027  2020-10-20 12:37:35 Z    3 days    7 attempts
Testing same since   156119  2020-10-23 12:37:24 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0dfddb2116..6ca70821b5  6ca70821b59849ad97c3fadc47e63c1a4af1a78c -> master


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 07:54:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 07:54:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11479.30447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWENr-0000gl-Vs; Sat, 24 Oct 2020 07:54:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11479.30447; Sat, 24 Oct 2020 07:54:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWENr-0000ge-So; Sat, 24 Oct 2020 07:54:39 +0000
Received: by outflank-mailman (input) for mailman id 11479;
 Sat, 24 Oct 2020 07:54:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWENq-0000fS-Be
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 07:54:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4bce43a-f91a-4684-8592-d3d623c03d65;
 Sat, 24 Oct 2020 07:54:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWENj-00085c-45; Sat, 24 Oct 2020 07:54:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWENi-0000Hf-Rp; Sat, 24 Oct 2020 07:54:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWENi-0004wI-RL; Sat, 24 Oct 2020 07:54:30 +0000
X-Inumbo-ID: e4bce43a-f91a-4684-8592-d3d623c03d65
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sVaoDMZGlEyeH/4/DQiNFOyk9eYz5AlG3YAEVIUxhL4=; b=XwYeOAdU/XdwtdCdHwNrtF8feT
	L/+IDT9Vnx6Jy0GOAX1hyMkRapgTgCZIc2grIhJo7NP4BVS6DCIpnAGCZ3gxm/Oeem8RppBj1X0uq
	RW5CKlEH2TDzcLsReg7YhVdBTLXzeZmxjLIobO503T1Mt5WMiSg4djfXmKb/Txqltj/E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156163-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156163: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=04b1c2d1e2e12abcca22380827edaa058399f4fa
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 07:54:30 +0000

flight 156163 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156163/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              04b1c2d1e2e12abcca22380827edaa058399f4fa
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  106 days
Failing since        151818  2020-07-11 04:18:52 Z  105 days  100 attempts
Testing same since   156163  2020-10-24 04:19:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23141 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 08:20:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 08:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11517.30560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWEmu-0004Wu-KI; Sat, 24 Oct 2020 08:20:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11517.30560; Sat, 24 Oct 2020 08:20:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWEmu-0004Wn-GV; Sat, 24 Oct 2020 08:20:32 +0000
Received: by outflank-mailman (input) for mailman id 11517;
 Sat, 24 Oct 2020 08:20:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWEmt-0004Wi-OU
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 08:20:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 429f8d76-f4fd-4850-816f-71f855e91c80;
 Sat, 24 Oct 2020 08:20:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWEmq-0000n6-0f; Sat, 24 Oct 2020 08:20:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWEmp-00011L-QS; Sat, 24 Oct 2020 08:20:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWEmp-00034A-Q0; Sat, 24 Oct 2020 08:20:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWEmt-0004Wi-OU
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 08:20:31 +0000
X-Inumbo-ID: 429f8d76-f4fd-4850-816f-71f855e91c80
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 429f8d76-f4fd-4850-816f-71f855e91c80;
	Sat, 24 Oct 2020 08:20:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zxFA9CjDsqYyJY1OyGysYU+3MjyrKd79xrMra6rhuuk=; b=dhnnfSBPm9gBTPqiHpJMWPRGQQ
	elL6MOIV2o83DdMA22+q8y0vICOsDuxMMfTKKU4gzoOL+pWKhCew7stnzAOzrYwK4B/EDOranHuIx
	USNN5/1EOxgO+Zs+txE7LNwzWv3Bazg8k6mWc546qVcinHvjDsFdj/kp7hWi4C/MypQA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWEmq-0000n6-0f; Sat, 24 Oct 2020 08:20:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWEmp-00011L-QS; Sat, 24 Oct 2020 08:20:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWEmp-00034A-Q0; Sat, 24 Oct 2020 08:20:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156165-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156165: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 08:20:27 +0000

flight 156165 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156165/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   64 days
Failing since        152659  2020-08-21 14:07:39 Z   63 days  128 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    1 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 09:08:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 09:08:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11524.30579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWFWO-0000JI-Go; Sat, 24 Oct 2020 09:07:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11524.30579; Sat, 24 Oct 2020 09:07:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWFWO-0000JB-Dg; Sat, 24 Oct 2020 09:07:32 +0000
Received: by outflank-mailman (input) for mailman id 11524;
 Sat, 24 Oct 2020 09:07:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWFWN-0000IZ-Lw
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 09:07:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fb045fd8-6fd5-449c-82e8-919d2fda591f;
 Sat, 24 Oct 2020 09:07:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWFWE-0001kQ-Ju; Sat, 24 Oct 2020 09:07:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWFWE-0002Pj-DA; Sat, 24 Oct 2020 09:07:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWFWE-00033D-Cb; Sat, 24 Oct 2020 09:07:22 +0000
X-Inumbo-ID: fb045fd8-6fd5-449c-82e8-919d2fda591f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bT9kazlwuaJ1qCVewBEsSJGh4K1Mf6CPOvCVdy0pKfQ=; b=b9psNLRDNK6gUFArlj8nmFOrta
	xSUbRSifC9Yet1nLXHxow1EaYXoEbMaIt9Utbx70o4xkQFNABxZOItQiBqS3DIxB5KYFiAcyVU7ue
	ThFkT+MsAxOGy15IcUnp+yPaz1lUr5hsAjdV9ZqKj6JUyMXRRwCwYfrg7K6vJYLDY59c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156168-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156168: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 09:07:22 +0000

flight 156168 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156168/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    0 days    8 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
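
    [Editorial note: the vpath-plus-include-path approach described in this
    commit message can be sketched roughly as below. The directory name and
    file names are hypothetical, for illustration only; the actual rules in
    tools/libs/store differ.]

    ```make
    # Hypothetical layout: sources live in a sibling xenstored directory.
    XENSTORED_DIR := ../xenstored

    # Let make locate out-of-tree .c files instead of symlinking them in.
    vpath %.c $(XENSTORED_DIR)

    # Headers are found via an include path rather than linked copies.
    CFLAGS += -I$(XENSTORED_DIR)

    # talloc.c is resolved through vpath in $(XENSTORED_DIR).
    libxenstore.so: talloc.o
    	$(CC) -shared -o $@ $^
    ```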

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl; log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 09:40:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 09:40:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11531.30601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWG2P-0003yy-BE; Sat, 24 Oct 2020 09:40:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11531.30601; Sat, 24 Oct 2020 09:40:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWG2P-0003yr-6t; Sat, 24 Oct 2020 09:40:37 +0000
Received: by outflank-mailman (input) for mailman id 11531;
 Sat, 24 Oct 2020 09:40:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWG2N-0003y8-6F
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 09:40:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3f4e39b-74b6-446b-b973-20cf095be282;
 Sat, 24 Oct 2020 09:40:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWG2J-0002RF-Bj; Sat, 24 Oct 2020 09:40:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWG2J-0003hK-0C; Sat, 24 Oct 2020 09:40:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWG2I-0007ZJ-Vw; Sat, 24 Oct 2020 09:40:30 +0000
X-Inumbo-ID: d3f4e39b-74b6-446b-b973-20cf095be282
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mX0ejg/rfhR/2wZg3DXfE9ohTDW0wvbqW9yl5nu+2Yc=; b=wlyHf5JMgOgi/buC+0M7znmKjV
	QAYhbqulj5v0XvLhSXSIfzTDZL/H8QAYL8oyIKN+BDe5IGW4EAvonZ7qBiTPosewrKf9CKmbFsdC4
	j5nc1OvXfL3McJpDOv4d5ImQ9eTAFdtLJCF4NVaKSsyGeOZ4pquEKLzAJ2vyRdTWvBbE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156170-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156170: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 09:40:30 +0000

flight 156170 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156170/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   63 days  129 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    1 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 11:13:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 11:13:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11543.30623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWHTz-0003Uy-IF; Sat, 24 Oct 2020 11:13:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11543.30623; Sat, 24 Oct 2020 11:13:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWHTz-0003Ur-FC; Sat, 24 Oct 2020 11:13:11 +0000
Received: by outflank-mailman (input) for mailman id 11543;
 Sat, 24 Oct 2020 11:13:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWHTy-0003UD-58
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 11:13:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bbf8e4bd-ad71-49f3-8711-bc3326c3f3fe;
 Sat, 24 Oct 2020 11:13:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWHTo-0004OZ-6G; Sat, 24 Oct 2020 11:13:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWHTn-0008D5-TI; Sat, 24 Oct 2020 11:12:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWHTn-0001o4-Sn; Sat, 24 Oct 2020 11:12:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWHTy-0003UD-58
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 11:13:10 +0000
X-Inumbo-ID: bbf8e4bd-ad71-49f3-8711-bc3326c3f3fe
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id bbf8e4bd-ad71-49f3-8711-bc3326c3f3fe;
	Sat, 24 Oct 2020 11:13:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RuM0hALqdMDT/lexJRP6OIhkiDDlx4a+m8CliTOxZIY=; b=2ARePKv5SuoQbhN+fzknBFT7lA
	yWGRrh/LhgBGzOwTFLoOa+0qQSu+y/A+4aiBVbkk+gqNJIsVjlAM7stc5Z+alxcuaDctEfLVZqrlN
	Go8jolTDAmWi5QzgjS1a677UK6l6frVBCvoElksVoed2LgsYbb9LdrSK+l8t4KiVjAEE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWHTo-0004OZ-6G; Sat, 24 Oct 2020 11:13:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWHTn-0008D5-TI; Sat, 24 Oct 2020 11:12:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWHTn-0001o4-Sn; Sat, 24 Oct 2020 11:12:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156172-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156172: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 11:12:59 +0000

flight 156172 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156172/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   63 days  130 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    1 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 12:24:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 12:24:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11578.30642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWIar-0001Dc-50; Sat, 24 Oct 2020 12:24:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11578.30642; Sat, 24 Oct 2020 12:24:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWIar-0001DV-1i; Sat, 24 Oct 2020 12:24:21 +0000
Received: by outflank-mailman (input) for mailman id 11578;
 Sat, 24 Oct 2020 12:24:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWIaq-0001DQ-5D
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 12:24:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 167c67d2-8b42-4730-92b5-25071cd02145;
 Sat, 24 Oct 2020 12:24:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWIan-0005oT-KK; Sat, 24 Oct 2020 12:24:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWIan-0004Vu-DG; Sat, 24 Oct 2020 12:24:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWIan-0000u2-Ck; Sat, 24 Oct 2020 12:24:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWIaq-0001DQ-5D
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 12:24:20 +0000
X-Inumbo-ID: 167c67d2-8b42-4730-92b5-25071cd02145
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 167c67d2-8b42-4730-92b5-25071cd02145;
	Sat, 24 Oct 2020 12:24:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=x/5vWzihVQIVGZcUTXg2jOvSIySQVzalgqtOT3spQ68=; b=XgTc0vHERr6KiUG+lG2cSDHGfN
	DpoHhhcdMdKbLZC/3fxw1H7DyE80GQYQGK1EIQ0dtXmWavU6tB4/0/r58zHZ9hY20h7efyR/DxdfZ
	6U+qEpJ6EQc7mU8cfqkk84Pp4j2MqujoYuJfsETm2LHhv/7UuFoWtpNdaqskg64xXOik=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWIan-0005oT-KK; Sat, 24 Oct 2020 12:24:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWIan-0004Vu-DG; Sat, 24 Oct 2020 12:24:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWIan-0000u2-Ck; Sat, 24 Oct 2020 12:24:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156171-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156171: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 12:24:17 +0000

flight 156171 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156171/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    0 days    9 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and library-specific include paths are no
    longer needed when building Xen.

    While at it, remove the setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.

    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility to do logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 12:46:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 12:46:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11583.30660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWIwd-00032U-2A; Sat, 24 Oct 2020 12:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11583.30660; Sat, 24 Oct 2020 12:46:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWIwc-00032N-VP; Sat, 24 Oct 2020 12:46:50 +0000
Received: by outflank-mailman (input) for mailman id 11583;
 Sat, 24 Oct 2020 12:46:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWIwc-00031j-1A
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 12:46:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0bfbf7e-dca9-4bd3-8ee6-ee97ac7e810f;
 Sat, 24 Oct 2020 12:46:40 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWIwS-0006FD-Dp; Sat, 24 Oct 2020 12:46:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWIwS-0005jj-4x; Sat, 24 Oct 2020 12:46:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWIwS-0006Mz-4S; Sat, 24 Oct 2020 12:46:40 +0000
X-Inumbo-ID: f0bfbf7e-dca9-4bd3-8ee6-ee97ac7e810f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KO/1HH1ig4iwMtmfj2qVVy/HgWjrjqhYyGlF29/tZkc=; b=L5CNVXrwkiinO4uiLTgs8X172b
	SAUXlqfN1JeCYV/bPYO/0cVng+iflbSUysH2ulTYQgPjnzvAhjHtWzEkJmxVi9ezvjacq/ilwfbL4
	TLzDOHT02uwLDIYwnGqGalZ8edJY716OO+HtJX/fRTqN7xPFwlQ9jaIUgJQ7b6SFI4xY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156174-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156174: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 12:46:40 +0000

flight 156174 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156174/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   63 days  131 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    1 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 13:32:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 13:32:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11601.30672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWJej-0007KG-NY; Sat, 24 Oct 2020 13:32:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11601.30672; Sat, 24 Oct 2020 13:32:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWJej-0007K9-KM; Sat, 24 Oct 2020 13:32:25 +0000
Received: by outflank-mailman (input) for mailman id 11601;
 Sat, 24 Oct 2020 13:32:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6JnP=D7=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1kWJeh-0007K4-Jw
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 13:32:23 +0000
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58d53182-45c0-4f18-80e3-c84e566bd040;
 Sat, 24 Oct 2020 13:32:22 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id h7so5969933wre.4
 for <xen-devel@lists.xenproject.org>; Sat, 24 Oct 2020 06:32:22 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id f17sm10409284wme.22.2020.10.24.06.32.20
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sat, 24 Oct 2020 06:32:20 -0700 (PDT)
X-Inumbo-ID: 58d53182-45c0-4f18-80e3-c84e566bd040
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
         :mime-version:content-transfer-encoding:content-language
         :thread-index;
        bh=ykW/kGCRrp4MSwPvvIUsmz5dNalBmjO72gPf1orWJtg=;
        b=uizVRQwX822I3OdG3EBrJ3o0k7koJx8ER48dNaRlYobuGCWhL46xDmaklkkhhE2nXq
         7f7w4RRPJGmTschZsmlVP8Y5o9k1ibSAvHTpWlD+hnDBnADZgXiBfxfudAk5DirAj/IC
         PW8u0eTPlMAdTAYp2xTrdmzHPUheuKsLDd6MibPI7/yyQP3if5lhtsI98hGV1m70fiKD
         JqD7On7NMbojZR9uVQsvIHkQdvVkNMudaWyaTi7UbMMieu2W5Ex32+PGygz63LkE6Q+4
         mOS0mAuhQ+x+SLqOnij9HcF9MMWetEL9vcj8Xy4BOVEUNQhhJLmIy2z7ZZX45aoL5jK5
         6Nbg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
         :subject:date:message-id:mime-version:content-transfer-encoding
         :content-language:thread-index;
        bh=ykW/kGCRrp4MSwPvvIUsmz5dNalBmjO72gPf1orWJtg=;
        b=InfbidLmYq5avKJDDzOcd9BXB1nUxpsQYAj+RxHjumOIYny6ijxVFlaZ6Byfc7axOs
         CfI8hVZQMMYVrAPMTT0Yb4+IPzJ7mN0idbwsEaOOG7JGkXY2ohFlwfcaOw1QT2Mg/yGM
         GmtOQ4TBzyhi5AGUq/ZONAayyibh3tcG8eK2jaJIIysIn9Z0Yr4elqj0ewOi9S8qJWKx
         Q096Ii7V45nXdJ9T33FNKhFNTAOgx0JcRMoctUOHiQVmTG7TLshQcKg8UTcR31cQDspy
         YMdBGuUEiVdQN3srGLmlaFQrlXHw0ZGm0j8C3vJjo7l5IFnCMOSY3HUVWuMQskjdSEtL
         h1tA==
X-Gm-Message-State: AOAM531+22g3FMWZHjvlHN0v7enZUkrIaftKqeP6Qj31v6wvvZ6AB1h0
	ULTeP0czvKttEPUVIzzAS4M=
X-Google-Smtp-Source: ABdhPJwe0vog+Ce8D2mRs5a9B/885zA6cZFORLWAyXkLQOIoAqoOzwgl59/QOlo2Nh3Ibi1XgjawTg==
X-Received: by 2002:adf:dd8f:: with SMTP id x15mr8207754wrl.124.1603546341501;
        Sat, 24 Oct 2020 06:32:21 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
Reply-To: <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	<xen-devel@lists.xenproject.org>
Cc: "'Andrew Cooper'" <andrew.cooper3@citrix.com>
References: <7b8ad528-b0bd-4d93-f08b-42b5af376561@suse.com>
In-Reply-To: <7b8ad528-b0bd-4d93-f08b-42b5af376561@suse.com>
Subject: RE: [PATCH] AMD/IOMMU: correct shattering of super pages
Date: Sat, 24 Oct 2020 14:32:19 +0100
Message-ID: <003201d6aa0a$19828f70$4c87ae50$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQLsqfUdq1A5Y4ay/vzEavqcRMpipad5WK4w

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 20 October 2020 14:54
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Paul Durrant <paul@xen.org>
> Subject: [PATCH] AMD/IOMMU: correct shattering of super pages
> 
> Fill the new page table _before_ installing into a live page table
> hierarchy, as installing a blank page first risks I/O faults on
> sub-ranges of the original super page which aren't part of the range
> for which mappings are being updated.
> 
> While at it also do away with mapping and unmapping the same fresh
> intermediate page table page once per entry to be written.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Afaict this corrects presently dead code: I don't think there are ways
> for super pages to be created in the first place, i.e. none could ever
> need shattering.

I believe you are correct, yes. It's certainly an improvement though, so...

Reviewed-by: Paul Durrant <paul@xen.org>
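The ordering the patch describes (fully populate the replacement page table, then install it into the live hierarchy) can be sketched in miniature. This is illustrative C only; the names and structures below are hypothetical stand-ins, not the actual Xen/AMD IOMMU code:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified model of shattering a super-page mapping.
 * ENTRIES, pte_t and shatter() are illustrative names only. */

#define ENTRIES 4          /* entries in the (tiny) intermediate table */
#define PRESENT 1u

typedef struct {
    uint64_t addr;         /* frame the entry maps */
    unsigned int flags;    /* PRESENT etc. */
} pte_t;

/* Installing a blank table first and filling it afterwards would open a
 * window in which in-flight DMA to the untouched sub-ranges of the old
 * super page faults.  So: fully populate the new intermediate table
 * first, then point the live hierarchy at it. */
static void shatter(pte_t **live_slot, pte_t *new_table,
                    uint64_t base, uint64_t step)
{
    for (size_t i = 0; i < ENTRIES; i++) {
        new_table[i].addr = base + i * step;
        new_table[i].flags = PRESENT;
    }
    /* Only now make the new table visible to the hardware walker. */
    *live_slot = new_table;
}
```

This also shows why the patch's second change is cheap: the fresh table is written through one mapping in a single loop, rather than being mapped and unmapped once per entry.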



From xen-devel-bounces@lists.xenproject.org Sat Oct 24 14:13:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 14:13:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11607.30687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWKIE-0002Rw-0F; Sat, 24 Oct 2020 14:13:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11607.30687; Sat, 24 Oct 2020 14:13:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWKID-0002Ro-SL; Sat, 24 Oct 2020 14:13:13 +0000
Received: by outflank-mailman (input) for mailman id 11607;
 Sat, 24 Oct 2020 14:13:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWKID-0002Rj-2K
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 14:13:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7361ddac-fcd4-44ca-ad3c-2f83294e5508;
 Sat, 24 Oct 2020 14:13:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWKIA-00086t-H1; Sat, 24 Oct 2020 14:13:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWKIA-00014H-8r; Sat, 24 Oct 2020 14:13:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWKIA-0007QR-8N; Sat, 24 Oct 2020 14:13:10 +0000
X-Inumbo-ID: 7361ddac-fcd4-44ca-ad3c-2f83294e5508
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mlluYHp8nsTwGmyBqSObLEuTc+6DdoSHXquyN/FLbqE=; b=YzTyxuvGh1uQCn3fC+qQdj812m
	q8uN3sPilfsR9yFk/3N2km1KSTRxEg8WgJEyeTzofayCFWukw8ZmCgli3rBnGT4shvsoSieRwlrnN
	pjYJFpyaKqUYXK7tatADR5LlYqvEEAEymI4XifKV8kdc80QneuOvgsl9CHH+Xqdmk9y0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156175-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156175: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 14:13:10 +0000

flight 156175 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156175/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   10 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
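The probe order described above (try a PV parse of the image first, fall back to HVM on failure, then create the domain with the detected type) can be sketched as follows. This is a hypothetical simplified sketch; the function names are illustrative, not the real init-xenstore-domain code:

```c
/* Illustrative probe-then-create ordering.  parse_as_pv() stands in for
 * the real image parser, which is invoked before domain creation. */

enum domain_type { DOM_PV, DOM_HVM };

/* Stand-in for the real PV image parser: returns 0 on success. */
static int parse_as_pv(int image_is_pv)
{
    return image_is_pv ? 0 : -1;
}

static enum domain_type probe_domain_type(int image_is_pv)
{
    if (parse_as_pv(image_is_pv) == 0)
        return DOM_PV;   /* image supports PV */
    return DOM_HVM;      /* otherwise fall back to an HVM (PVH) domain */
}
```

The point of reversing the original order is simply that the parse result is available before the domain is created, so the creation call can be given the correct type up front.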

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the ability to log in init-xenstore-domain: use -v[...] to select
    the log level as in xl; log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 15:20:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 15:20:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11613.30705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWLKZ-0007gD-UA; Sat, 24 Oct 2020 15:19:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11613.30705; Sat, 24 Oct 2020 15:19:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWLKZ-0007g6-QF; Sat, 24 Oct 2020 15:19:43 +0000
Received: by outflank-mailman (input) for mailman id 11613;
 Sat, 24 Oct 2020 15:19:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWLKZ-0007fP-30
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 15:19:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ed2cc02-8cf2-4a68-bc1d-a712e2a7aa2c;
 Sat, 24 Oct 2020 15:19:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWLKP-00010m-VT; Sat, 24 Oct 2020 15:19:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWLKP-0003ZT-LG; Sat, 24 Oct 2020 15:19:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWLKP-0007Kr-Kj; Sat, 24 Oct 2020 15:19:33 +0000
X-Inumbo-ID: 3ed2cc02-8cf2-4a68-bc1d-a712e2a7aa2c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w52anOi1ZoTyHqfDDjbf3UmDmtyNTmxmEtXq8Vz/9b4=; b=Wz6NIS3PuV1F65GwuilrPCdr8R
	Avn90hZsWX1XaPwzKKyG+fjsJDJYoe1CiDtnrn8c5H09C7ihhWaoTdAYrKaysxiqzAVecabm8RbvV
	5xvWRJXNwAD2mMHp9+bWn+kNDfVyWnpHv9IfPLgJU0EMXXkUlWXuESVLdSXlOTC7lnCA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156176-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156176: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 15:19:33 +0000

flight 156176 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156176/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  132 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   17 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 16:07:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 16:07:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11618.30716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWM4D-00040E-Ov; Sat, 24 Oct 2020 16:06:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11618.30716; Sat, 24 Oct 2020 16:06:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWM4D-000407-M3; Sat, 24 Oct 2020 16:06:53 +0000
Received: by outflank-mailman (input) for mailman id 11618;
 Sat, 24 Oct 2020 16:06:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWM4B-000402-RF
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 16:06:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f824c74-1d8b-483b-a0b1-b9995e30841f;
 Sat, 24 Oct 2020 16:06:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWM49-0002Vo-6y; Sat, 24 Oct 2020 16:06:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWM48-0004y9-Um; Sat, 24 Oct 2020 16:06:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWM48-0006yM-UK; Sat, 24 Oct 2020 16:06:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWM4B-000402-RF
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 16:06:51 +0000
X-Inumbo-ID: 5f824c74-1d8b-483b-a0b1-b9995e30841f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5f824c74-1d8b-483b-a0b1-b9995e30841f;
	Sat, 24 Oct 2020 16:06:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=m17z8Q5kENlosrUJWsg2fOHnDeFXl9Ssxb0i9T9kZfs=; b=Avd0Lsmro5qMwP1woprAoTIsvm
	ESyRY674M9upWPy0wCHJAgInpMEIe8MBUFnWHTqzAj063VTHcUaWG6hiYY4iGk4p/ltj9rkfNddO/
	IB+k7Oktj7EZKzI810DdNY6BOiBMEyz+7UM5EooB54hZPxtkE1UtykiuUm2+bYO+8RAU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWM49-0002Vo-6y; Sat, 24 Oct 2020 16:06:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWM48-0004y9-Um; Sat, 24 Oct 2020 16:06:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWM48-0006yM-UK; Sat, 24 Oct 2020 16:06:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156161-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156161: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:build-i386-pvops:kernel-build:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f11901ed723d1351843771c3a84b03a253bbf8b2
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 16:06:48 +0000

flight 156161 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156161/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 build-i386-pvops              6 kernel-build             fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f11901ed723d1351843771c3a84b03a253bbf8b2
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   84 days
Failing since        152366  2020-08-01 20:49:34 Z   83 days  141 attempts
Testing same since   156161  2020-10-24 03:52:55 Z    0 days    1 attempts

------------------------------------------------------------
3331 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 620223 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 16:25:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 16:25:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11625.30741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWMML-0005nA-Dq; Sat, 24 Oct 2020 16:25:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11625.30741; Sat, 24 Oct 2020 16:25:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWMML-0005n2-9n; Sat, 24 Oct 2020 16:25:37 +0000
Received: by outflank-mailman (input) for mailman id 11625;
 Sat, 24 Oct 2020 16:25:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWMMK-0005mU-QL
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 16:25:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02e8e106-e223-4f1d-961f-589caf52f18e;
 Sat, 24 Oct 2020 16:25:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWMMD-0002si-51; Sat, 24 Oct 2020 16:25:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWMMC-0005R8-SK; Sat, 24 Oct 2020 16:25:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWMMC-0000RG-Ro; Sat, 24 Oct 2020 16:25:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWMMK-0005mU-QL
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 16:25:36 +0000
X-Inumbo-ID: 02e8e106-e223-4f1d-961f-589caf52f18e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 02e8e106-e223-4f1d-961f-589caf52f18e;
	Sat, 24 Oct 2020 16:25:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AlHLiTpBZfROCh/NmMGofDvxLUtooZm2Le0LBJ76w0w=; b=k/1/5EO3hhfSH6kLwWrrotQu5N
	E1M5D3e8hkM1GjUaOiiw2gvaknFrsamNayDZ+8NwlYILTnBU/zHzwrjtDAGw/vne6F7T9KSylyb2X
	Ae3Fx1XXSwe/KaSWy2POEap+pDQ9Sg9vc3oxv57N3LHqTeIiC0AEfoFuwoiva9d9W4uE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWMMD-0002si-51; Sat, 24 Oct 2020 16:25:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWMMC-0005R8-SK; Sat, 24 Oct 2020 16:25:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWMMC-0000RG-Ro; Sat, 24 Oct 2020 16:25:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156167-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156167: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 16:25:28 +0000

flight 156167 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156167/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156136
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156136
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156136
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156136
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156136
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156136
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156136
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156136
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156136
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156136
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156136
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156167  2020-10-24 07:50:16 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Oct 24 16:56:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 16:56:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11629.30756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWMpv-0008Qi-2Q; Sat, 24 Oct 2020 16:56:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11629.30756; Sat, 24 Oct 2020 16:56:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWMpu-0008Qb-Ux; Sat, 24 Oct 2020 16:56:10 +0000
Received: by outflank-mailman (input) for mailman id 11629;
 Sat, 24 Oct 2020 16:56:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWMpt-0008Pw-In
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 16:56:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f641dd03-57da-4b5b-8b1a-772947b4370a;
 Sat, 24 Oct 2020 16:56:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWMpk-0003U2-6X; Sat, 24 Oct 2020 16:56:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWMpj-000661-V4; Sat, 24 Oct 2020 16:56:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWMpj-00069N-Ub; Sat, 24 Oct 2020 16:55:59 +0000
X-Inumbo-ID: f641dd03-57da-4b5b-8b1a-772947b4370a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NtKdGFFl2OLfndJ17F/8Gbuz3qVdlbIaSi+J5h4KGcI=; b=z363L2XUSb+d4rnnpptH9xfqNy
	erGgEVawK1niVaY55dHMhVB6MYpPeWDKMqEEwt5KjobczjkwC2/STGIEpklUgfRSluaPxaLwgS15x
	Ppf+dF2GMnFhzKbAADMXwgi1SG8ivONVLK9Y/sh3/ovSVIBe3+Qx9AMkXbbGx+C6j1AY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156178-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156178: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 16:55:59 +0000

flight 156178 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156178/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   11 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    0 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
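
    [Editorial sketch, not part of the original report: the commit above
    replaces symbolic links with make's vpath directive plus an include
    path. The following is a minimal illustrative demo of that technique,
    not the actual Xen makefiles; all paths and names are hypothetical.]

    ```shell
    # Demonstrate vpath + -I instead of symlinking sources from a sibling
    # directory. "xenstored" stands in for the external source directory,
    # "libstore" for the library being built.
    set -e
    demo="$(mktemp -d)"
    mkdir -p "$demo/xenstored" "$demo/libstore"

    # Source and header live only in the sibling directory; no symlinks.
    printf 'int util_answer(void) { return 42; }\n' > "$demo/xenstored/util.c"
    printf 'int util_answer(void);\n' > "$demo/xenstored/util.h"
    printf '#include "util.h"\nint main(void) { return util_answer() == 42 ? 0 : 1; }\n' \
        > "$demo/libstore/main.c"

    # vpath tells make where to search for %.c prerequisites; the -I flag
    # in CFLAGS covers the headers for the implicit compile rule.
    printf 'vpath %%.c ../xenstored\nCFLAGS += -I../xenstored\ndemo: main.o util.o\n\t$(CC) $(CFLAGS) -o $@ $^\n' \
        > "$demo/libstore/Makefile"

    make -C "$demo/libstore" demo
    "$demo/libstore/demo" && echo "built via vpath, no symlinks"
    ```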

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 17:12:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 17:12:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11631.30768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWN5Y-0001jn-FL; Sat, 24 Oct 2020 17:12:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11631.30768; Sat, 24 Oct 2020 17:12:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWN5Y-0001jg-BU; Sat, 24 Oct 2020 17:12:20 +0000
Received: by outflank-mailman (input) for mailman id 11631;
 Sat, 24 Oct 2020 17:12:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWN5W-0001jb-Sh
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 17:12:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4e91426-cb4f-4841-a623-d694f08af39b;
 Sat, 24 Oct 2020 17:12:06 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWN5K-0003pF-7E; Sat, 24 Oct 2020 17:12:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWN5J-0006RF-Ta; Sat, 24 Oct 2020 17:12:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWN5J-0000HV-T3; Sat, 24 Oct 2020 17:12:05 +0000
X-Inumbo-ID: d4e91426-cb4f-4841-a623-d694f08af39b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dD6ZgymSA68hkGby3ftCLEj7ngJRvfsXwiY2ST20dEY=; b=TuGJfaypPysD3uMZH4u0Aqe3N/
	UBB7DV+to1BDpT5pHvk/6BkM9eUnghW7m+3FuSqRdwZPpnPZ/fih8t4VszvwxPp8Yhp9vhGoGOdtx
	DPvLAxSwy585WtzezKMP6YEZs+WGcBHuuaeJKsV11Qz0eCd4I7lS1obPKJehh/Z9V6hk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156179-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156179: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 17:12:05 +0000

flight 156179 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156179/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  133 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   18 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 18:35:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 18:35:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11641.30791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWONe-0000Il-MH; Sat, 24 Oct 2020 18:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11641.30791; Sat, 24 Oct 2020 18:35:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWONe-0000Id-Io; Sat, 24 Oct 2020 18:35:06 +0000
Received: by outflank-mailman (input) for mailman id 11641;
 Sat, 24 Oct 2020 18:35:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWONd-0000I4-Iv
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 18:35:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 94e4a7e6-cc7f-4009-8ebd-c1bbc5dea4b9;
 Sat, 24 Oct 2020 18:34:58 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWONW-0005Zn-32; Sat, 24 Oct 2020 18:34:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWONV-0001Q5-RD; Sat, 24 Oct 2020 18:34:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWONV-0008V1-Qg; Sat, 24 Oct 2020 18:34:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWONd-0000I4-Iv
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 18:35:05 +0000
X-Inumbo-ID: 94e4a7e6-cc7f-4009-8ebd-c1bbc5dea4b9
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 94e4a7e6-cc7f-4009-8ebd-c1bbc5dea4b9;
	Sat, 24 Oct 2020 18:34:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jUCslzTGe0E9HdYG0zt7sL+Blpw/3KYlm9i+IAn8lUU=; b=DTXfjC0QeUBR/dF92Jb9naFcpi
	Bfj7tOHKF6m6r4+LLNnYMeCVQz+9L18HetvbADMn7a4NiWDZT6N7MsOYKL2p7UBR/3qcCT6TpSAt1
	wpF/aNk43RZsrdl/peft7iCdHmtZ/WIv2GMgmvgyqDu6dTaj4M9rs4eCyGAuLE/QTvG0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWONW-0005Zn-32; Sat, 24 Oct 2020 18:34:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWONV-0001Q5-RD; Sat, 24 Oct 2020 18:34:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWONV-0008V1-Qg; Sat, 24 Oct 2020 18:34:57 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156182-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156182: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 18:34:57 +0000

flight 156182 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156182/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   12 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has lead to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers use an include path instead.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having an own include directory move the
    official headers to tools/include instead. This will drop the need to
    link those headers to tools/include and there is no need any longer
    to have library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV-support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later do it the other way round. This enables to probe for the
    domain type supported by the xenstore-stubdom and to support both, pv
    and pvh type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add a possibility to do logging in init-xenstore-domain: use -v[...]
    for selecting the log-level as in xl, log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing, remove him from MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 18:59:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 18:59:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11644.30807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWOl5-0002AI-OO; Sat, 24 Oct 2020 18:59:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11644.30807; Sat, 24 Oct 2020 18:59:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWOl5-0002AB-Ku; Sat, 24 Oct 2020 18:59:19 +0000
Received: by outflank-mailman (input) for mailman id 11644;
 Sat, 24 Oct 2020 18:59:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWOl4-00029X-4V
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 18:59:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 85705ba7-833a-4dbc-aeb8-66312bb732c3;
 Sat, 24 Oct 2020 18:59:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWOku-00063T-Gk; Sat, 24 Oct 2020 18:59:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWOku-00035p-7l; Sat, 24 Oct 2020 18:59:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWOku-0007t7-7I; Sat, 24 Oct 2020 18:59:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWOl4-00029X-4V
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 18:59:18 +0000
X-Inumbo-ID: 85705ba7-833a-4dbc-aeb8-66312bb732c3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 85705ba7-833a-4dbc-aeb8-66312bb732c3;
	Sat, 24 Oct 2020 18:59:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EAbD7R8jghIfWg6OqIFFDgdgSDbflHbhBA0CSSP9uwE=; b=ZWBB126V/XMvvvZsjo3vPg1Kcr
	IzzRe0C+jxHwp8UGlyukiYFyNTuJzncdWo2jnLtDJjfPUW1l0k1mZnCDXhJghCaeqjlNLLB1/+B+J
	ZPisQJMBTGWlvK8JcIKiUM94UOZFGun9YEDQlFJgT1pk7UiEz71uEPKz9HY7ALrsZ18I=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWOku-00063T-Gk; Sat, 24 Oct 2020 18:59:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWOku-00035p-7l; Sat, 24 Oct 2020 18:59:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWOku-0007t7-7I; Sat, 24 Oct 2020 18:59:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156183-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156183: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 18:59:08 +0000

flight 156183 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156183/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  134 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   19 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 20:39:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 20:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11651.30825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWQJQ-0002J1-1D; Sat, 24 Oct 2020 20:38:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11651.30825; Sat, 24 Oct 2020 20:38:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWQJP-0002Iu-Tb; Sat, 24 Oct 2020 20:38:51 +0000
Received: by outflank-mailman (input) for mailman id 11651;
 Sat, 24 Oct 2020 20:38:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=viiv=D7=ravnborg.org=sam@srs-us1.protection.inumbo.net>)
 id 1kWQJO-0002Ip-FH
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 20:38:50 +0000
Received: from asavdk4.altibox.net (unknown [109.247.116.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4fb5659f-5f21-473a-8239-c39c0f82e84f;
 Sat, 24 Oct 2020 20:38:48 +0000 (UTC)
Received: from ravnborg.org (unknown [188.228.123.71])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by asavdk4.altibox.net (Postfix) with ESMTPS id A32E380548;
 Sat, 24 Oct 2020 22:38:39 +0200 (CEST)
X-Inumbo-ID: 4fb5659f-5f21-473a-8239-c39c0f82e84f
Date: Sat, 24 Oct 2020 22:38:38 +0200
From: Sam Ravnborg <sam@ravnborg.org>
To: Thomas Zimmermann <tzimmermann@suse.de>
Cc: maarten.lankhorst@linux.intel.com, mripard@kernel.org, airlied@linux.ie,
	daniel@ffwll.ch, alexander.deucher@amd.com,
	christian.koenig@amd.com, kraxel@redhat.com, l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk, christian.gmeiner@gmail.com,
	inki.dae@samsung.com, jy0922.shim@samsung.com,
	sw0312.kim@samsung.com, kyungmin.park@samsung.com, kgene@kernel.org,
	krzk@kernel.org, yuq825@gmail.com, bskeggs@redhat.com,
	robh@kernel.org, tomeu.vizoso@collabora.com, steven.price@arm.com,
	alyssa.rosenzweig@collabora.com, hjc@rock-chips.com,
	heiko@sntech.de, hdegoede@redhat.com, sean@poorly.run,
	eric@anholt.net, oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com, sumit.semwal@linaro.org,
	emil.velikov@collabora.com, luben.tuikov@amd.com, apaneers@amd.com,
	linus.walleij@linaro.org, melissa.srw@gmail.com,
	chris@chris-wilson.co.uk, miaoqinglang@huawei.com,
	dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org, lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org, spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <20201024203838.GB93644@ravnborg.org>
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-11-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201020122046.31167-11-tzimmermann@suse.de>
X-CMAE-Score: 0
X-CMAE-Analysis: v=2.3 cv=fu7ymmwf c=1 sm=1 tr=0
	a=S6zTFyMACwkrwXSdXUNehg==:117 a=S6zTFyMACwkrwXSdXUNehg==:17
	a=kj9zAlcOel0A:10 a=7gkXJVJtAAAA:8 a=Ogo0Iva9nwArjqrsVXIA:9
	a=x50s5EHYZIou7SfH:21 a=IqrjcI2zc_8-tyTa:21 a=CjuIK1q_8ugA:10
	a=qfUslh1TxfEA:10 a=E9Po1WZjFZOl8hwRPBS3:22

Hi Thomas.

On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote:
> At least sparc64 requires I/O-specific access to framebuffers. This
> patch updates the fbdev console accordingly.
> 
> For drivers with direct access to the framebuffer memory, the callback
> functions in struct fb_ops test for the type of memory and call the
> respective fb_sys_ or fb_cfb_ functions. Read and write operations are
> implemented internally by DRM's fbdev helper.
> 
> For drivers that employ a shadow buffer, fbdev's blit function retrieves
> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
> interfaces to access the buffer.
> 
> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
> I/O memory and avoid a HW exception. With the introduction of struct
> dma_buf_map, this is no longer required. The patch removes the respective
> code from both bochs and fbdev.
> 
> v5:
> 	* implement fb_read/fb_write internally (Daniel, Sam)
> v4:
> 	* move dma_buf_map changes into separate patch (Daniel)
> 	* TODO list: comment on fbdev updates (Daniel)
> 
> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
> Tested-by: Sam Ravnborg <sam@ravnborg.org>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>

But see a few comments below on naming for you to consider.

	Sam

> ---
>  Documentation/gpu/todo.rst        |  19 ++-
>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>  drivers/gpu/drm/drm_fb_helper.c   | 227 ++++++++++++++++++++++++++++--
>  include/drm/drm_mode_config.h     |  12 --
>  4 files changed, 230 insertions(+), 29 deletions(-)
> 
> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
> index 7e6fc3c04add..638b7f704339 100644
> --- a/Documentation/gpu/todo.rst
> +++ b/Documentation/gpu/todo.rst
> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
>  ------------------------------------------------
>  
>  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
> -expects the framebuffer in system memory (or system-like memory).
> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
> +expected the framebuffer in system memory or system-like memory. By employing
> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
> +as well.
>  
>  Contact: Maintainer of the driver you plan to convert
>  
>  Level: Intermediate
>  
> +Reimplement functions in drm_fbdev_fb_ops without fbdev
> +-------------------------------------------------------
> +
> +A number of callback functions in drm_fbdev_fb_ops could benefit from
> +being rewritten without dependencies on the fbdev module. Some of the
> +helpers could further benefit from using struct dma_buf_map instead of
> +raw pointers.
> +
> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
> +
> +Level: Advanced
> +
> +
>  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
>  -----------------------------------------------------------------
>  
> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
> index 13d0d04c4457..853081d186d5 100644
> --- a/drivers/gpu/drm/bochs/bochs_kms.c
> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>  	bochs->dev->mode_config.preferred_depth = 24;
>  	bochs->dev->mode_config.prefer_shadow = 0;
>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
> -	bochs->dev->mode_config.fbdev_use_iomem = true;
>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>  
>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
> index 6212cd7cde1d..1d3180841778 100644
> --- a/drivers/gpu/drm/drm_fb_helper.c
> +++ b/drivers/gpu/drm/drm_fb_helper.c
> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>  }
>  
>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
> -					  struct drm_clip_rect *clip)
> +					  struct drm_clip_rect *clip,
> +					  struct dma_buf_map *dst)
>  {
>  	struct drm_framebuffer *fb = fb_helper->fb;
>  	unsigned int cpp = fb->format->cpp[0];
>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>  	void *src = fb_helper->fbdev->screen_buffer + offset;
> -	void *dst = fb_helper->buffer->map.vaddr + offset;
>  	size_t len = (clip->x2 - clip->x1) * cpp;
>  	unsigned int y;
>  
> -	for (y = clip->y1; y < clip->y2; y++) {
> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
> -			memcpy(dst, src, len);
> -		else
> -			memcpy_toio((void __iomem *)dst, src, len);
> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>  
> +	for (y = clip->y1; y < clip->y2; y++) {
> +		dma_buf_map_memcpy_to(dst, src, len);
> +		dma_buf_map_incr(dst, fb->pitches[0]);
>  		src += fb->pitches[0];
> -		dst += fb->pitches[0];
>  	}
>  }
>  
> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
>  			if (ret)
>  				return;
> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>  		}
> +
>  		if (helper->fb->funcs->dirty)
>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>  						 &clip_copy, 1);
> @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>  		return -ENODEV;
>  }
>  
> +static bool drm_fbdev_use_iomem(struct fb_info *info)
> +{
> +	struct drm_fb_helper *fb_helper = info->par;
> +	struct drm_client_buffer *buffer = fb_helper->buffer;
> +
> +	return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem;
> +}
> +
> +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, 
> +				   loff_t pos)
The naming here confused me - a name like
fb_read_iomem() would have helped me more.
With the current naming I have to remember that the screen_base member is
the iomem pointer.

> +{
> +	const char __iomem *src = info->screen_base + pos;
> +	size_t alloc_size = min(count, PAGE_SIZE);
> +	ssize_t ret = 0;
> +	char *tmp;
> +
> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!tmp)
> +		return -ENOMEM;
> +

I looked around and could not find other places where
we copy from iomem to kernel memory to user memory in chunks of PAGE_SIZE.

> +	while (count) {
> +		size_t c = min(count, alloc_size);
> +
> +		memcpy_fromio(tmp, src, c);
> +		if (copy_to_user(buf, tmp, c)) {
> +			ret = -EFAULT;
> +			break;
> +		}
> +
> +		src += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(tmp);
> +
> +	return ret;
> +}
> +
> +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count,
> +				     loff_t pos)
And fb_read_sysmem() here.

> +{
> +	const char *src = info->screen_buffer + pos;
> +
> +	if (copy_to_user(buf, src, count))
> +		return -EFAULT;
> +
> +	return count;
> +}
> +
> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
> +				 size_t count, loff_t *ppos)
> +{
> +	loff_t pos = *ppos;
> +	size_t total_size;
> +	ssize_t ret;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	if (info->screen_size)
> +		total_size = info->screen_size;
> +	else
> +		total_size = info->fix.smem_len;
> +
> +	if (pos >= total_size)
> +		return 0;
> +	if (count >= total_size)
> +		count = total_size;
> +	if (total_size - count < pos)
> +		count = total_size - pos;
> +
> +	if (drm_fbdev_use_iomem(info))
> +		ret = fb_read_screen_base(info, buf, count, pos);
> +	else
> +		ret = fb_read_screen_buffer(info, buf, count, pos);
> +
> +	if (ret > 0)
> +		*ppos = ret;
> +
> +	return ret;
> +}
> +
> +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count,
> +				    loff_t pos)

fb_write_iomem()

> +{
> +	char __iomem *dst = info->screen_base + pos;
> +	size_t alloc_size = min(count, PAGE_SIZE);
> +	ssize_t ret = 0;
> +	u8 *tmp;
> +
> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
> +	if (!tmp)
> +		return -ENOMEM;
> +
> +	while (count) {
> +		size_t c = min(count, alloc_size);
> +
> +		if (copy_from_user(tmp, buf, c)) {
> +			ret = -EFAULT;
> +			break;
> +		}
> +		memcpy_toio(dst, tmp, c);
> +
> +		dst += c;
> +		buf += c;
> +		ret += c;
> +		count -= c;
> +	}
> +
> +	kfree(tmp);
> +
> +	return ret;
> +}
> +
> +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count,
> +				      loff_t pos)
fb_write_sysmem()

> +{
> +	char *dst = info->screen_buffer + pos;
> +
> +	if (copy_from_user(dst, buf, count))
> +		return -EFAULT;
> +
> +	return count;
> +}
> +
> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
> +				  size_t count, loff_t *ppos)
> +{
> +	loff_t pos = *ppos;
> +	size_t total_size;
> +	ssize_t ret;
> +	int err;
> +
> +	if (info->state != FBINFO_STATE_RUNNING)
> +		return -EPERM;
> +
> +	if (info->screen_size)
> +		total_size = info->screen_size;
> +	else
> +		total_size = info->fix.smem_len;
> +
> +	if (pos > total_size)
> +		return -EFBIG;
> +	if (count > total_size) {
> +		err = -EFBIG;
> +		count = total_size;
> +	}
> +	if (total_size - count < pos) {
> +		if (!err)
> +			err = -ENOSPC;
> +		count = total_size - pos;
> +	}
> +
> +	/*
> +	 * Copy to framebuffer even if we already logged an error. Emulates
> +	 * the behavior of the original fbdev implementation.
> +	 */
> +	if (drm_fbdev_use_iomem(info))
> +		ret = fb_write_screen_base(info, buf, count, pos);
> +	else
> +		ret = fb_write_screen_buffer(info, buf, count, pos);
> +
> +	if (ret > 0)
> +		*ppos = ret;
> +
> +	if (err)
> +		return err;
> +
> +	return ret;
> +}
> +
> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
> +				  const struct fb_fillrect *rect)
> +{
> +	if (drm_fbdev_use_iomem(info))
> +		drm_fb_helper_cfb_fillrect(info, rect);
> +	else
> +		drm_fb_helper_sys_fillrect(info, rect);
> +}
> +
> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
> +				  const struct fb_copyarea *area)
> +{
> +	if (drm_fbdev_use_iomem(info))
> +		drm_fb_helper_cfb_copyarea(info, area);
> +	else
> +		drm_fb_helper_sys_copyarea(info, area);
> +}
> +
> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
> +				   const struct fb_image *image)
> +{
> +	if (drm_fbdev_use_iomem(info))
> +		drm_fb_helper_cfb_imageblit(info, image);
> +	else
> +		drm_fb_helper_sys_imageblit(info, image);
> +}
> +
>  static const struct fb_ops drm_fbdev_fb_ops = {
>  	.owner		= THIS_MODULE,
>  	DRM_FB_HELPER_DEFAULT_OPS,
> @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>  	.fb_release	= drm_fbdev_fb_release,
>  	.fb_destroy	= drm_fbdev_fb_destroy,
>  	.fb_mmap	= drm_fbdev_fb_mmap,
> -	.fb_read	= drm_fb_helper_sys_read,
> -	.fb_write	= drm_fb_helper_sys_write,
> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
> +	.fb_read	= drm_fbdev_fb_read,
> +	.fb_write	= drm_fbdev_fb_write,
> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
>  };
>  
>  static struct fb_deferred_io drm_fbdev_defio = {
> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
> index 5ffbb4ed5b35..ab424ddd7665 100644
> --- a/include/drm/drm_mode_config.h
> +++ b/include/drm/drm_mode_config.h
> @@ -877,18 +877,6 @@ struct drm_mode_config {
>  	 */
>  	bool prefer_shadow_fbdev;
>  
> -	/**
> -	 * @fbdev_use_iomem:
> -	 *
> -	 * Set to true if framebuffer reside in iomem.
> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
> -	 *
> -	 * FIXME: This should be replaced with a per-mapping is_iomem
> -	 * flag (like ttm does), and then used everywhere in fbdev code.
> -	 */
> -	bool fbdev_use_iomem;
> -
>  	/**
>  	 * @quirk_addfb_prefer_xbgr_30bpp:
>  	 *
> -- 
> 2.28.0


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 20:46:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 20:46:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11654.30837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWQQK-0003CA-PO; Sat, 24 Oct 2020 20:46:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11654.30837; Sat, 24 Oct 2020 20:46:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWQQK-0003C3-MH; Sat, 24 Oct 2020 20:46:00 +0000
Received: by outflank-mailman (input) for mailman id 11654;
 Sat, 24 Oct 2020 20:45:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWQQJ-0003Bx-Dw
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 20:45:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ecdb7b7-a3e0-465e-818f-bb3707ef7a52;
 Sat, 24 Oct 2020 20:45:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWQQH-0008I9-3J; Sat, 24 Oct 2020 20:45:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWQQG-0006OK-Rl; Sat, 24 Oct 2020 20:45:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWQQG-0000DS-RF; Sat, 24 Oct 2020 20:45:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWQQJ-0003Bx-Dw
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 20:45:59 +0000
X-Inumbo-ID: 9ecdb7b7-a3e0-465e-818f-bb3707ef7a52
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9ecdb7b7-a3e0-465e-818f-bb3707ef7a52;
	Sat, 24 Oct 2020 20:45:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7pIUhSGbVIc/FAuj20kZXmuk4f6LjJ9KJyGyycy0vqc=; b=ZBz0PvjxxbQh/Dqj0gdE9w0TnV
	upodEL7F6Mv02xwoiAvS6AsddN2BZvPkvVfaWMg95OhwOAp71Atd3Cf8GVn3i+LuhFQpjUCK6W5B/
	7Okq2dOzGbhKC34Q6GdQquYlt7pV47rZDiMVIbFFPHc1zfqj8NIVAAx+/TRrxJrs3gS8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWQQH-0008I9-3J; Sat, 24 Oct 2020 20:45:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWQQG-0006OK-Rl; Sat, 24 Oct 2020 20:45:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWQQG-0000DS-RF; Sat, 24 Oct 2020 20:45:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156185-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156185: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 20:45:56 +0000

flight 156185 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156185/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   13 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and library-specific include paths are no
    longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both pv and pvh type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl; log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 21:57:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 21:57:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11665.30863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWRXS-0000o4-6h; Sat, 24 Oct 2020 21:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11665.30863; Sat, 24 Oct 2020 21:57:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWRXS-0000nx-3o; Sat, 24 Oct 2020 21:57:26 +0000
Received: by outflank-mailman (input) for mailman id 11665;
 Sat, 24 Oct 2020 21:57:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWRXQ-0000ns-E7
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 21:57:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09dd577c-ac57-4845-bc39-4765f79b090c;
 Sat, 24 Oct 2020 21:57:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWRXN-0001IQ-Ud; Sat, 24 Oct 2020 21:57:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWRXN-00007N-JL; Sat, 24 Oct 2020 21:57:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWRXN-0007Qu-Iq; Sat, 24 Oct 2020 21:57:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWRXQ-0000ns-E7
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 21:57:24 +0000
X-Inumbo-ID: 09dd577c-ac57-4845-bc39-4765f79b090c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 09dd577c-ac57-4845-bc39-4765f79b090c;
	Sat, 24 Oct 2020 21:57:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qWyTvDTc4sr9nCJrKUO529F1Y5A0EzlOKGK1rkcQuPU=; b=nY+ee18sppyHBeklhhCdKOBCIx
	ZiUO2Zf/RcmEI5zZ43Rmjmbwo2bm5QZIOQGZdxnuJGuD68a7Um+NWutqie7+VsW3YCcPCBWpGNysb
	WMQuHCAIIVuaVW6Es59SSZkF5QWEVbCgJobKSDUjzMm24i/lyXEr9LkbAmzDBJp7W3C8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWRXN-0001IQ-Ud; Sat, 24 Oct 2020 21:57:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWRXN-00007N-JL; Sat, 24 Oct 2020 21:57:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWRXN-0007Qu-Iq; Sat, 24 Oct 2020 21:57:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156188-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156188: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 21:57:21 +0000

flight 156188 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156188/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   14 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and library-specific include paths are no
    longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both pv and pvh type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl; log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 22:04:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 22:04:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11672.30881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWRdx-0001ly-3X; Sat, 24 Oct 2020 22:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11672.30881; Sat, 24 Oct 2020 22:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWRdx-0001lr-0Q; Sat, 24 Oct 2020 22:04:09 +0000
Received: by outflank-mailman (input) for mailman id 11672;
 Sat, 24 Oct 2020 22:04:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWRdv-0001lD-Qd
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 22:04:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 32a0e8bb-528a-4ac8-84fd-cb3a9a9a06e0;
 Sat, 24 Oct 2020 22:03:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWRdk-0001U5-Vt; Sat, 24 Oct 2020 22:03:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWRdk-0000Fb-LO; Sat, 24 Oct 2020 22:03:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWRdk-0008I5-Kt; Sat, 24 Oct 2020 22:03:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWRdv-0001lD-Qd
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 22:04:07 +0000
X-Inumbo-ID: 32a0e8bb-528a-4ac8-84fd-cb3a9a9a06e0
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 32a0e8bb-528a-4ac8-84fd-cb3a9a9a06e0;
	Sat, 24 Oct 2020 22:03:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=USKbN2MTBilf1RY7xwYbWrzFTqQfVnf29s2Dm6FFohg=; b=Oc6gYji7Jcs1GxnD+auoeQyH9F
	1zYFpyOqnqY4cvbo9yl+8cSWZ8wnjVT+c8I1jpNEx/WItl6LGm4JGYykH/ZKTVnyih+JMbanIa+yI
	oukLFR2c7DRDx3Bw0ShpRHk8waC17faYiUDHVvLXzqlWO3VpbwOHRbjjZjeV3qnTQbXo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWRdk-0001U5-Vt; Sat, 24 Oct 2020 22:03:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWRdk-0000Fb-LO; Sat, 24 Oct 2020 22:03:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWRdk-0008I5-Kt; Sat, 24 Oct 2020 22:03:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156186-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156186: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 22:03:56 +0000

flight 156186 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156186/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  135 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   20 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 23:20:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 23:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11681.30898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWSpR-0008Or-TC; Sat, 24 Oct 2020 23:20:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11681.30898; Sat, 24 Oct 2020 23:20:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWSpR-0008Ok-Pq; Sat, 24 Oct 2020 23:20:05 +0000
Received: by outflank-mailman (input) for mailman id 11681;
 Sat, 24 Oct 2020 23:20:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWSpR-0008HQ-3G
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 23:20:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d53028ec-7ed5-4075-82e2-5a62500b161c;
 Sat, 24 Oct 2020 23:19:58 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWSpK-0002yh-BC; Sat, 24 Oct 2020 23:19:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWSpK-0001yD-3w; Sat, 24 Oct 2020 23:19:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWSpK-0006GM-3P; Sat, 24 Oct 2020 23:19:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWSpR-0008HQ-3G
	for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 23:20:05 +0000
X-Inumbo-ID: d53028ec-7ed5-4075-82e2-5a62500b161c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d53028ec-7ed5-4075-82e2-5a62500b161c;
	Sat, 24 Oct 2020 23:19:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QK7EVXZf0ojlgDD/wAXHO7qrglOH16TziNhdfcbD7bg=; b=r2v6C2nYGibYv1BpULi2vseWNd
	Q+y1owpiv5T/9mNPzEfnpiMf4XToqQlgOEaRYzXSB4/Ll9rW8SsZqowMluXZq/H8JIhdJzvw7Z0+n
	RI2B28MKRl68/FCuaMdlsyj2nX5qGPpPwXDZ2tyApWa2+4eS/u49ImBJTIMCGXiM/RBg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156190-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156190: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 23:19:58 +0000

flight 156190 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156190/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   15 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include and removes the need for library-specific
    include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility to do logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing; remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 24 23:40:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 24 Oct 2020 23:40:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11686.30917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWT8u-0001sE-MA; Sat, 24 Oct 2020 23:40:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11686.30917; Sat, 24 Oct 2020 23:40:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWT8u-0001s7-IG; Sat, 24 Oct 2020 23:40:12 +0000
Received: by outflank-mailman (input) for mailman id 11686;
 Sat, 24 Oct 2020 23:40:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IA9X=D7=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWT8s-0001ij-UL
 for xen-devel@lists.xenproject.org; Sat, 24 Oct 2020 23:40:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb49865b-4243-43ab-9e1e-b90027c42880;
 Sat, 24 Oct 2020 23:40:02 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWT8k-0003NK-4p; Sat, 24 Oct 2020 23:40:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWT8j-0002P7-QA; Sat, 24 Oct 2020 23:40:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWT8j-0001WS-Pi; Sat, 24 Oct 2020 23:40:01 +0000
X-Inumbo-ID: fb49865b-4243-43ab-9e1e-b90027c42880
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OCu3OLmMWvD4CXIoKdnSLo/DVO7+SzQ+0QnG/i+p930=; b=sCV9UY1MRfONFDDdHCWy0z2Js0
	2KphD5tXVIaJ+XDYPp2EIIupr1o7zEY1dkNBCbGlD6jnZzGbtTJTrF2rW5iC83Q3oPjN5JzqMHEcB
	++TUzcmtJA7GsUkY35ae2OFVve7o+HaL3DgdZFWY4xxdUdlKpqM8lfmR7+blK5du2RRQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156191-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156191: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 24 Oct 2020 23:40:01 +0000

flight 156191 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156191/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  136 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   21 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 01:59:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 01:59:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11700.30938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWVJZ-0001IH-DN; Sun, 25 Oct 2020 01:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11700.30938; Sun, 25 Oct 2020 01:59:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWVJZ-0001I8-6W; Sun, 25 Oct 2020 01:59:21 +0000
Received: by outflank-mailman (input) for mailman id 11700;
 Sun, 25 Oct 2020 01:59:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWVJX-0001HW-5F
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 01:59:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea69d591-fa01-4b63-8976-ffd195278d3d;
 Sun, 25 Oct 2020 01:59:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWVJM-0002V1-BU; Sun, 25 Oct 2020 01:59:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWVJM-0005QS-0S; Sun, 25 Oct 2020 01:59:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWVJL-0001IX-Vn; Sun, 25 Oct 2020 01:59:07 +0000
X-Inumbo-ID: ea69d591-fa01-4b63-8976-ffd195278d3d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qJCAUBVFK1P6Q5NgzI3iu6+ejaHxqtchqg9pY2IFME4=; b=qsHEcV0EoLuRWLqDE0kOAJvgg+
	7ikYbqkgAGaZf2er9sCfHd4Jyp6dDPNLM0/N4SKDMkXiWCaEWVrnh3hCdN8Rb26urLAYPzzL2O0LZ
	hfQnL2llrN1AoqP3ib/9FgBAmJ0RZv99+SwEi0ariFhkI9bmAXFC3AQ2bHSOFrk0zzoY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156193-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156193: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 01:59:07 +0000

flight 156193 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156193/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   16 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   15 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
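
    [Editor's note: the vpath technique named in this commit message can be
    sketched as a minimal Makefile fragment. The directory and file names
    below are illustrative, not the actual xenstored layout.]

    ```make
    # Look for .c sources in the sibling xenstored directory instead of
    # symlinking them into the current one (path is illustrative).
    vpath %.c ../xenstored

    # Make the external headers visible without copying or linking them.
    CFLAGS += -I../xenstored

    OBJS = xs_lib.o

    all: $(OBJS)

    %.o: %.c
		$(CC) $(CFLAGS) -c -o $@ $<
    ```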

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This removes the need to link those
    headers into tools/include, and library-specific include paths are no
    longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
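
    [Editor's note: the probe-then-create flow described above can be
    illustrated with a small C sketch. The function name try_parse_pv is a
    hypothetical stand-in, not the real libxenguest API; it always fails
    here to demonstrate the fallback path.]

    ```c
    #include <stdio.h>

    /* Hypothetical stand-in: attempt to parse the stubdom image as a PV
     * kernel; returns 0 on success, nonzero on failure. Always fails in
     * this sketch, so the PVH fallback is taken. */
    static int try_parse_pv(const char *image)
    {
        (void)image;
        return -1;
    }

    int main(void)
    {
        const char *image = "xenstore-stubdom.gz";
        const char *type;

        /* Probe the image for PV support first; fall back to PVH when
         * parsing as PV fails, then create the domain with that type. */
        if (try_parse_pv(image) == 0)
            type = "pv";
        else
            type = "pvh";

        printf("creating xenstore stubdom as %s\n", type);
        return 0;
    }
    ```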

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the ability to log in init-xenstore-domain: use -v[...] to select
    the log level, as in xl; log output goes to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 02:25:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 02:25:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11703.30952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWVib-0004By-Gg; Sun, 25 Oct 2020 02:25:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11703.30952; Sun, 25 Oct 2020 02:25:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWVib-0004Br-DV; Sun, 25 Oct 2020 02:25:13 +0000
Received: by outflank-mailman (input) for mailman id 11703;
 Sun, 25 Oct 2020 02:25:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWVia-0004BD-K2
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 02:25:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39fa44cb-de26-4812-8953-93c6e27feb16;
 Sun, 25 Oct 2020 02:25:04 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWViR-0003T5-TV; Sun, 25 Oct 2020 02:25:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWViR-0005zw-Jz; Sun, 25 Oct 2020 02:25:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWViR-0006vX-JS; Sun, 25 Oct 2020 02:25:03 +0000
X-Inumbo-ID: 39fa44cb-de26-4812-8953-93c6e27feb16
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ka3Mvw0Qk3jtCxJbhbRqFI2n5CSPOwJcln/0wxYP9qs=; b=eY5vuMpYIOWYfZSBT+XTjym9cG
	Gtsu5BEY7vTG/tIlFKf2UBSKRoUm8XJHdpofetmc+nUquvDm3dXp+pp/JCLeHvyZ4pdeEpkPHuzNJ
	H2HmCIVVsllZYqjokjML98G6pqOhWNFXoNshdg/+5iSX+ZNWzMyUO181wqJcRqsPKeKY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156194-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156194: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 02:25:03 +0000

flight 156194 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156194/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  137 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   22 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 03:12:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 03:12:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11709.30968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWWSE-0008Qq-DI; Sun, 25 Oct 2020 03:12:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11709.30968; Sun, 25 Oct 2020 03:12:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWWSE-0008Qj-9X; Sun, 25 Oct 2020 03:12:22 +0000
Received: by outflank-mailman (input) for mailman id 11709;
 Sun, 25 Oct 2020 03:12:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWWSC-0008QB-QQ
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 03:12:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 777f5bd6-3ece-439e-aabe-ff85cce897ac;
 Sun, 25 Oct 2020 03:12:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWWS1-0004QR-En; Sun, 25 Oct 2020 03:12:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWWS1-0007SA-4n; Sun, 25 Oct 2020 03:12:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWWS1-000782-4D; Sun, 25 Oct 2020 03:12:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWWSC-0008QB-QQ
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 03:12:20 +0000
X-Inumbo-ID: 777f5bd6-3ece-439e-aabe-ff85cce897ac
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 777f5bd6-3ece-439e-aabe-ff85cce897ac;
	Sun, 25 Oct 2020 03:12:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rZ5vutD/XyNTwEAuW8qN0KdexlJaMuorh9r7Qmc3Xmo=; b=Zd4TZ/i4WfWmCkFkSv/Av/eLrr
	ET5SPvrHZ68FVSFxdHc4f56WCmAPBOqTwKPBO1XmYLQy6aQT5RQKPmSSgyMNOTzc80KBrYhxoz5sC
	J78Q7Y2NX4AsQUxjIKTMm9gtvNzHUV1TUYej8n6RFlHVa0Ps+IxMMFl0ddzUsKDnt7aI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156197-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156197: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 03:12:09 +0000

flight 156197 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156197/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   17 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   16 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

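The probe-then-fall-back ordering described in the commit message above can be sketched as follows. `parse_pv_kernel` and `fake_pv_parser` are hypothetical stand-ins, shown only to illustrate the control flow, not the actual libxenguest/libxenctrl API:

```python
# Sketch of "try to parse the stubdom image for PV support first; if this
# fails, use HVM". parse_pv_kernel is a hypothetical stand-in for the
# real kernel-parsing call used by init-xenstore-domain.

def choose_domain_type(image, parse_pv_kernel):
    """Return "pv" if the image parses as a PV kernel, else "hvm"."""
    try:
        parse_pv_kernel(image)  # probe: raises if not a valid PV kernel
        return "pv"
    except ValueError:
        return "hvm"            # fall back to the HVM/PVH domain type

def fake_pv_parser(image):
    # Toy parser: accept only images with a made-up "PVMAGIC" prefix.
    if not image.startswith(b"PVMAGIC"):
        raise ValueError("not a PV kernel")
```

The point of the reordering is that the probe runs before domain creation, so the domain can be created with the type the image actually supports.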
commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl; log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing; remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 05:02:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 05:02:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11717.30985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYAG-0001PM-9f; Sun, 25 Oct 2020 05:01:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11717.30985; Sun, 25 Oct 2020 05:01:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYAG-0001PF-6f; Sun, 25 Oct 2020 05:01:56 +0000
Received: by outflank-mailman (input) for mailman id 11717;
 Sun, 25 Oct 2020 05:01:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWYAF-0001PA-08
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 05:01:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 23009b20-c762-4e44-932f-a8ecd66d8822;
 Sun, 25 Oct 2020 05:01:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYAB-0007A6-9M; Sun, 25 Oct 2020 05:01:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYAB-0004Cl-1L; Sun, 25 Oct 2020 05:01:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYAB-0005bl-0r; Sun, 25 Oct 2020 05:01:51 +0000
X-Inumbo-ID: 23009b20-c762-4e44-932f-a8ecd66d8822
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=g/8oJCPZ7vGmkvzaWcKMjCkFwWOlAVdDMlLZp0kaAvc=; b=DrOVkwnl6ttrVLA/90tr2NydEk
	d1LpYOtMGNnvCfJ2Ub6LvRSDl/Zq9W591J31yWYBDGRf4r6SJWWZ7r7F3PMhVU4pvYUYe+y8x96In
	P0sBe6owOCjdLDJGyFfQ6rpltV+tphXiU1AgFQfUUZTE4wiQ66AP+BRdHillvYn5QZOg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-armhf
Message-Id: <E1kWYAB-0005bl-0r@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 05:01:51 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-armhf
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  f89955449c5a47ff688e91873bbce4c3670ed9fe
  Bug not present: 56c1aca6a2bc013f45e7af2fa88605a693402770
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156199/


  commit f89955449c5a47ff688e91873bbce4c3670ed9fe
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Oct 23 15:53:10 2020 +0200
  
      tools/init-xenstore-domain: support xenstore pvh stubdom
      
      Instead of creating the xenstore-stubdom domain first and parsing
      the kernel later, do it the other way round. This makes it possible
      to probe for the domain type supported by the xenstore-stubdom and
      to support both PV and PVH type stubdoms.
      
      Try to parse the stubdom image for PV support first; if this fails,
      use HVM. Then create the domain with the appropriate type selected.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Acked-by: Wei Liu <wl@xen.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-armhf.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-armhf.xen-build --summary-out=tmp/156199.bisection-summary --basis-template=156117 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-armhf xen-build
Searching for failure / basis pass:
 156197 fail [host=cubietruck-gleizes] / 156120 [host=cubietruck-metzinger] 156117 ok.
Failure / basis pass flights: 156197 / 156117
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Basis pass ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen.git#ea6d3cd1ed79d824e605a70c3626bc437c386260-ea6d3cd1ed79d824e605a70c3626bc437c386260 git://xenbits.xen.org/xen.git#6ca70821b59849ad97c3fadc47e63c1a4af1a78c-4ddd6499d999a7d08cabfda5b0262e473dd5beed
Loaded 5001 nodes in revision graph
Searching for test results:
 156117 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156120 [host=cubietruck-metzinger]
 156129 [host=cubietruck-braque]
 156133 [host=cubietruck-braque]
 156140 [host=cubietruck-braque]
 156146 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156158 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c
 156159 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156162 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156164 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156166 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 154137dfdba334348887baf0be9693c407f7cef3
 156168 [host=cubietruck-metzinger]
 156169 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 70cf8e9acada638f68c1c597d7580500d9f21c91
 156171 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156173 [host=cubietruck-metzinger]
 156175 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156177 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770
 156178 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156180 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 f89955449c5a47ff688e91873bbce4c3670ed9fe
 156182 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156184 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770
 156185 [host=cubietruck-metzinger]
 156187 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 f89955449c5a47ff688e91873bbce4c3670ed9fe
 156188 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156189 [host=cubietruck-metzinger]
 156190 [host=cubietruck-metzinger]
 156192 pass ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770
 156193 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156197 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 4ddd6499d999a7d08cabfda5b0262e473dd5beed
 156195 [host=cubietruck-metzinger]
 156199 fail ea6d3cd1ed79d824e605a70c3626bc437c386260 f89955449c5a47ff688e91873bbce4c3670ed9fe
Searching for interesting versions
 Result found: flight 156117 (pass), for basis pass
 For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770, results HASH(0x557199d3cef8) HASH(0x557199d471e8) HASH(0x557199d30a20)
 For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 70cf8e9acada638f68c1c597d7580500d9f21c91, results HASH(0x557199d365b8)
 For basis failure, parent search stopping at ea6d3cd1ed79d824e605a70c3626bc437c386260 6ca70821b59849ad97c3fadc47e63c1a4af1a78c, results HASH(0x557199d233b0) HASH(0x557199d2f318)
 Result found: flight 156146 (fail), for basis failure (at ancestor ~484)
 Repro found: flight 156158 (pass), for basis pass
 Repro found: flight 156159 (fail), for basis failure
 0 revisions at ea6d3cd1ed79d824e605a70c3626bc437c386260 56c1aca6a2bc013f45e7af2fa88605a693402770
No revisions left to test, checking graph state.
 Result found: flight 156177 (pass), for last pass
 Result found: flight 156180 (fail), for first failure
 Repro found: flight 156184 (pass), for last pass
 Repro found: flight 156187 (fail), for first failure
 Repro found: flight 156192 (pass), for last pass
 Repro found: flight 156199 (fail), for first failure

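The search logged above amounts to locating the first failing revision between a known-pass basis and a known-fail tip. A minimal sketch of that idea, using short forms of the revision hashes from this report and a toy pass/fail oracle (the real cs-bisection-step walks a multi-tree revision graph and re-tests to confirm repros, not a flat list):

```python
# Binary-search a linear revision history for the first failing revision,
# given a pass/fail oracle. Toy model of what the bisector computes.

def bisect_first_failure(revisions, passes):
    """Return the first revision r for which passes(r) is False.

    Assumes passes(revisions[0]) is True, passes(revisions[-1]) is False,
    and there is a single pass->fail transition in between.
    """
    lo, hi = 0, len(revisions) - 1  # invariant: lo passes, hi fails
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if passes(revisions[mid]):
            lo = mid
        else:
            hi = mid
    return revisions[hi]

# Short forms of hashes from this report; everything from f899554 on fails.
revs = ["56c1aca", "f899554", "154137d", "4ddd649"]
broken = {"f899554", "154137d", "4ddd649"}
```

On this toy history the search converges on f899554, matching the "Bug introduced" revision reported here, with its parent 56c1aca as the last pass.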
*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  f89955449c5a47ff688e91873bbce4c3670ed9fe
  Bug not present: 56c1aca6a2bc013f45e7af2fa88605a693402770
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/156199/


  commit f89955449c5a47ff688e91873bbce4c3670ed9fe
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri Oct 23 15:53:10 2020 +0200
  
      tools/init-xenstore-domain: support xenstore pvh stubdom
      
      Instead of creating the xenstore-stubdom domain first and parsing
      the kernel later, do it the other way round. This makes it possible
      to probe for the domain type supported by the xenstore-stubdom and
      to support both PV and PVH type stubdoms.
      
      Try to parse the stubdom image for PV support first; if this fails,
      use HVM. Then create the domain with the appropriate type selected.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Acked-by: Wei Liu <wl@xen.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-armhf.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
156199: tolerable ALL FAIL

flight 156199 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/156199/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-armhf                   6 xen-build               fail baseline untested


jobs:
 build-armhf                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun Oct 25 05:23:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 05:23:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11720.31001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYVB-0003Dk-6R; Sun, 25 Oct 2020 05:23:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11720.31001; Sun, 25 Oct 2020 05:23:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYVB-0003Dd-33; Sun, 25 Oct 2020 05:23:33 +0000
Received: by outflank-mailman (input) for mailman id 11720;
 Sun, 25 Oct 2020 05:23:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWYVA-0003DY-5F
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 05:23:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c471832d-395f-44e5-be5e-864de88b4933;
 Sun, 25 Oct 2020 05:23:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYV1-0007bL-GZ; Sun, 25 Oct 2020 05:23:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYV1-0006iF-6N; Sun, 25 Oct 2020 05:23:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYV1-0005YP-5t; Sun, 25 Oct 2020 05:23:23 +0000
X-Inumbo-ID: c471832d-395f-44e5-be5e-864de88b4933
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=t3fSjr7/0phP8pLbcF8vtJJAKyCtn6zsNP6xFCO4rPY=; b=5paZZyb1dLNziAMCEzUW60v2nW
	3KEMdNmYqZEZ7aVw+G+fyXTlhTRklzMOY7EyqipcxBnqma8Wu4zbE2dNASuKjWabfIR1Kq1X3L3xJ
	aSrBRUB62Q5cOmF0l89wdAHgIZRBUKwndG+nrnVCqizeVtpB7sH7IqOh+X3kq1ggO8/U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156198-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156198: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 05:23:23 +0000

flight 156198 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156198/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  138 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   23 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 05:29:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 05:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11726.31019 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYbF-0003SY-31; Sun, 25 Oct 2020 05:29:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11726.31019; Sun, 25 Oct 2020 05:29:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYbE-0003SR-WC; Sun, 25 Oct 2020 05:29:49 +0000
Received: by outflank-mailman (input) for mailman id 11726;
 Sun, 25 Oct 2020 05:29:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWYbD-0003Rs-CL
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 05:29:47 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d88be77-d4ae-4a59-9786-62c9f5c5975f;
 Sun, 25 Oct 2020 05:29:37 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYb3-0007iW-4r; Sun, 25 Oct 2020 05:29:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYb2-0007ES-Qf; Sun, 25 Oct 2020 05:29:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYb2-00013a-Q9; Sun, 25 Oct 2020 05:29:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWYbD-0003Rs-CL
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 05:29:47 +0000
X-Inumbo-ID: 7d88be77-d4ae-4a59-9786-62c9f5c5975f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7d88be77-d4ae-4a59-9786-62c9f5c5975f;
	Sun, 25 Oct 2020 05:29:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Gpm1Y7peMjCOWIOsplixePX/fbfqXgsxhL66lmSXydU=; b=A/Glg0hTU3DjER3jivGVgzNkuO
	t7Fk5gGkWwEeI1HJj33TRuIj4aa7nOMdkytYYvT0QxkVeN4CqtREhgdULrGFUGBnXHQ05VJC6YwB0
	OBz+Sa7qm/d6uvYPNzGDp+HGJf2X6IoCG+e3+kpF5Pn0IIlHrHELLUXuowGbCGjGetRQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWYb3-0007iW-4r; Sun, 25 Oct 2020 05:29:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWYb2-0007ES-Qf; Sun, 25 Oct 2020 05:29:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWYb2-00013a-Q9; Sun, 25 Oct 2020 05:29:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156181-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156181: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:build-i386-pvops:kernel-build:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f11901ed723d1351843771c3a84b03a253bbf8b2
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 05:29:36 +0000

flight 156181 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156181/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 build-i386-pvops              6 kernel-build             fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl           8 xen-boot         fail in 156161 pass in 156181
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 156161
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156161
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 156161

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1 11 leak-check/basis(11) fail in 156161 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 156161 blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 156161 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f11901ed723d1351843771c3a84b03a253bbf8b2
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   85 days
Failing since        152366  2020-08-01 20:49:34 Z   84 days  142 attempts
Testing same since   156161  2020-10-24 03:52:55 Z    1 days    2 attempts

------------------------------------------------------------
3331 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 620223 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 05:45:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 05:45:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11731.31031 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYql-0005AX-HA; Sun, 25 Oct 2020 05:45:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11731.31031; Sun, 25 Oct 2020 05:45:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYql-0005AQ-DK; Sun, 25 Oct 2020 05:45:51 +0000
Received: by outflank-mailman (input) for mailman id 11731;
 Sun, 25 Oct 2020 05:45:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hByA=EA=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWYqk-0005AL-PV
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 05:45:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3acac7c-3b65-4df6-aacf-248b69bc3407;
 Sun, 25 Oct 2020 05:45:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6BA42ABDE;
 Sun, 25 Oct 2020 05:45:47 +0000 (UTC)
X-Inumbo-ID: e3acac7c-3b65-4df6-aacf-248b69bc3407
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603604747;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=YafWDY0sGbBr2Keuon4d1t6Xh6u+lrf5ml2czJnQEh0=;
	b=mUqGjhLHRuoTL5/ToNHHrf5e1cpMlHVytMvei9IstZeNKJJViAGhbrnBHMUxUIq9n6kKSr
	0ZBj3tKFnMZkmr0QhW1hMDxdRzuOQ15cdF1t3IXaRMnrEQu/Xnu3gvmlxqS6i38g5NlMPR
	RAp5euLzWKIwz1XGtNde2ygONy52+5I=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/helpers: fix Arm build by excluding init-xenstore-domain
Date: Sun, 25 Oct 2020 06:45:46 +0100
Message-Id: <20201025054546.4960-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The support for PVH xenstore-stubdom has broken the Arm build.

Xenstore stubdom isn't supported on Arm, so there is no need to build
the init-xenstore-domain helper.

Build the helper on x86 only.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/helpers/Makefile | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/tools/helpers/Makefile b/tools/helpers/Makefile
index f759528322..1bcc97ea8a 100644
--- a/tools/helpers/Makefile
+++ b/tools/helpers/Makefile
@@ -7,8 +7,10 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 PROGS += xen-init-dom0
 ifeq ($(CONFIG_Linux),y)
+ifeq ($(CONFIG_X86),y)
 PROGS += init-xenstore-domain
 endif
+endif
 
 XEN_INIT_DOM0_OBJS = xen-init-dom0.o init-dom-json.o
 $(XEN_INIT_DOM0_OBJS): CFLAGS += $(CFLAGS_libxentoollog)
@@ -37,17 +39,11 @@ init-xenstore-domain: $(INIT_XENSTORE_DOMAIN_OBJS)
 .PHONY: install
 install: all
 	$(INSTALL_DIR) $(DESTDIR)$(LIBEXEC_BIN)
-	$(INSTALL_PROG) xen-init-dom0 $(DESTDIR)$(LIBEXEC_BIN)
-ifeq ($(CONFIG_Linux),y)
-	$(INSTALL_PROG) init-xenstore-domain $(DESTDIR)$(LIBEXEC_BIN)
-endif
+	for i in $(PROGS); do $(INSTALL_PROG) $$i $(DESTDIR)$(LIBEXEC_BIN); done
 
 .PHONY: uninstall
 uninstall:
-ifeq ($(CONFIG_Linux),y)
-	rm -f $(DESTDIR)$(LIBEXEC_BIN)/init-xenstore-domain
-endif
-	rm -f $(DESTDIR)$(LIBEXEC_BIN)/xen-init-dom0
+	for i in $(PROGS); do rm -f $(DESTDIR)$(LIBEXEC_BIN)/$$i; done
 
 .PHONY: clean
 clean:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sun Oct 25 05:50:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 05:50:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11735.31045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYvR-00062y-56; Sun, 25 Oct 2020 05:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11735.31045; Sun, 25 Oct 2020 05:50:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWYvR-00062r-1z; Sun, 25 Oct 2020 05:50:41 +0000
Received: by outflank-mailman (input) for mailman id 11735;
 Sun, 25 Oct 2020 05:50:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWYvP-00062B-UB
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 05:50:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d0a97ec-1ace-42a7-987f-9075f02f5fdf;
 Sun, 25 Oct 2020 05:50:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYvI-00088T-IF; Sun, 25 Oct 2020 05:50:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYvI-00088B-9u; Sun, 25 Oct 2020 05:50:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWYvI-0008M9-9N; Sun, 25 Oct 2020 05:50:32 +0000
X-Inumbo-ID: 3d0a97ec-1ace-42a7-987f-9075f02f5fdf
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cD+s0/x2mckvzVsHY77qO7H2BWEsJOIVt3m8Ie4cyGI=; b=ehJ1QcaPwkfZgKBNodv2kvCZQ4
	PlwSEkS9PB4ALR+OAv+VqRiP6XwSTv9fyS1VhhVBfs+1s+OAARNFZNEecbUoiYyfi0IZwvsA34lty
	M6cdJCtjmIJsgYLUqo5Ib8dmOqoMrAgpfqGz0TyQO0WwVwjCLT+2wOHDwqkEs0rpvZXY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156200-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156200: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 05:50:32 +0000

flight 156200 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156200/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   18 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   17 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This will drop the need to link
    those headers to tools/include, and there will no longer be any need
    for library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.

    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing; remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 07:04:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 07:04:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11740.31060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWa4b-0003mc-Lf; Sun, 25 Oct 2020 07:04:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11740.31060; Sun, 25 Oct 2020 07:04:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWa4b-0003mV-IP; Sun, 25 Oct 2020 07:04:13 +0000
Received: by outflank-mailman (input) for mailman id 11740;
 Sun, 25 Oct 2020 07:04:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWa4Z-0003lW-Uf
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 07:04:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c51e47d-19c9-45d4-aa37-5bcbea20a5d8;
 Sun, 25 Oct 2020 07:04:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWa4S-0001I4-Kn; Sun, 25 Oct 2020 07:04:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWa4S-0003GQ-9d; Sun, 25 Oct 2020 07:04:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWa4S-00039a-99; Sun, 25 Oct 2020 07:04:04 +0000
X-Inumbo-ID: 1c51e47d-19c9-45d4-aa37-5bcbea20a5d8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XAX6tjjiaakwkbc2Hld/ReM+tOw6tvvU8MRdRmpFl5Y=; b=T1QZ9SgTbSGWD2Zh3SVTSEsduK
	CWMCqPFScGZTNxcd6ldjM3F2Dbh9DmLIcdYco26KxQhueC9I/Q2G2/xSoty4VLgMx4dq7nwc2n74Z
	nOv0Bsin02rOjaNnTz0W11D6ppNB9TytasZY4bFgJ7nm8gAe0f27XSozO8fUfmSFJRac=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156201-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156201: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=04b1c2d1e2e12abcca22380827edaa058399f4fa
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 07:04:04 +0000

flight 156201 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156201/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              04b1c2d1e2e12abcca22380827edaa058399f4fa
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  107 days
Failing since        151818  2020-07-11 04:18:52 Z  106 days  101 attempts
Testing same since   156163  2020-10-24 04:19:13 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23141 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 07:06:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 07:06:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11741.31073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWa6i-0003ts-2A; Sun, 25 Oct 2020 07:06:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11741.31073; Sun, 25 Oct 2020 07:06:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWa6h-0003tl-VV; Sun, 25 Oct 2020 07:06:23 +0000
Received: by outflank-mailman (input) for mailman id 11741;
 Sun, 25 Oct 2020 07:06:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWa6g-0003tf-Om
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 07:06:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 717f3fbe-8892-4bc3-9977-4d5c74afa2a4;
 Sun, 25 Oct 2020 07:06:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWa6e-0001K4-Gf; Sun, 25 Oct 2020 07:06:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWa6e-0003MV-8Z; Sun, 25 Oct 2020 07:06:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWa6e-0006fb-81; Sun, 25 Oct 2020 07:06:20 +0000
X-Inumbo-ID: 717f3fbe-8892-4bc3-9977-4d5c74afa2a4
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lLaIK9radMlZY2yXatDOOyNaDuXKCRujnGYlNZyp72E=; b=qqcbJqBn/jxrIwrZg94kmV4rMo
	myrYbeWLcTZFaxCwXPaLG9BlQKAceCVpwBGMtTaLZxWPK5iGl9amUsfDRbPvnIF87kejxwYYlWeMQ
	4yzSCisC8lr+LUX5KIU8BUEmewdxO8qYSTdjfknGWzThMEpsRKAjLayo8jiigraUc1DI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156204-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156204: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 07:06:20 +0000

flight 156204 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156204/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   19 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   18 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This will drop the need to link
    those headers to tools/include, and there will no longer be any need
    for library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.

    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 07:33:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 07:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11748.31087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWaWY-0006UV-7a; Sun, 25 Oct 2020 07:33:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11748.31087; Sun, 25 Oct 2020 07:33:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWaWY-0006UO-4d; Sun, 25 Oct 2020 07:33:06 +0000
Received: by outflank-mailman (input) for mailman id 11748;
 Sun, 25 Oct 2020 07:33:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWaWW-0006UJ-L1
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 07:33:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d1aa55ab-f10f-4dda-989a-1ad75face95e;
 Sun, 25 Oct 2020 07:32:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWaWQ-0001r5-Pk; Sun, 25 Oct 2020 07:32:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWaWQ-0004Of-HZ; Sun, 25 Oct 2020 07:32:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWaWQ-0007JX-H5; Sun, 25 Oct 2020 07:32:58 +0000
X-Inumbo-ID: d1aa55ab-f10f-4dda-989a-1ad75face95e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2j1A9XUtC8uWmgRhZr7HWks/Gtym/6uGEaIbMImOIDs=; b=uGILilslv/L1vDC6v+fZdCQIsH
	9v+BgaUR0zRDsanJpGmqDciaMdH5Im9nU8clacpfC3LUHzo2QEau5kUqkiyc2eG+jVA9VSyn+2z3c
	XAEZQjEHm+kjmpbLiLmY53DOPKZJX7+LJnbBBbz/iXqrGlxB+0VirakVD4QlmfVQDqqU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156203-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156203: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 07:32:58 +0000

flight 156203 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156203/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  139 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   24 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 08:47:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 08:47:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11766.31102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWbgK-0004cv-9i; Sun, 25 Oct 2020 08:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11766.31102; Sun, 25 Oct 2020 08:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWbgK-0004co-6a; Sun, 25 Oct 2020 08:47:16 +0000
Received: by outflank-mailman (input) for mailman id 11766;
 Sun, 25 Oct 2020 08:47:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWbgJ-0004ch-2B
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 08:47:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id afb13c05-12bd-416a-992f-8fccdb18cccd;
 Sun, 25 Oct 2020 08:47:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWbgA-0003sM-P4; Sun, 25 Oct 2020 08:47:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWbgA-0000IR-Eu; Sun, 25 Oct 2020 08:47:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWbgA-0003gE-EQ; Sun, 25 Oct 2020 08:47:06 +0000
X-Inumbo-ID: afb13c05-12bd-416a-992f-8fccdb18cccd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BNsl+MIIKwpvuNglbsf4z+WJKy8FY4fYppf14fRwrQc=; b=vvYuJoHn2XidEL5EtG6BeHa0Yg
	AQUX1IxDQAOVQG6Pag3MyimJHHGGPqa+SGPGsWeDGjfGw4zzgErZ8bQCRzNrDGTdU8WD0ZRmFaoyq
	NzGcuO/8dBeEm6Wr8naUvisOHckd40dtXs/CmTyed+P8l7EusT6N3J3YtjbDwVkO4GO4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156205-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156205: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 08:47:06 +0000

flight 156205 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156205/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   65 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  140 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   25 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 08:54:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 08:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11773.31118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWbmo-0005Yy-5y; Sun, 25 Oct 2020 08:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11773.31118; Sun, 25 Oct 2020 08:53:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWbmo-0005Yr-2t; Sun, 25 Oct 2020 08:53:58 +0000
Received: by outflank-mailman (input) for mailman id 11773;
 Sun, 25 Oct 2020 08:53:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWbmm-0005Ym-8j
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 08:53:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7c5e10f-7f3a-47ab-ac84-84eb4a808054;
 Sun, 25 Oct 2020 08:53:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWbmj-000415-Ug; Sun, 25 Oct 2020 08:53:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWbmj-0000Wp-Lh; Sun, 25 Oct 2020 08:53:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWbmj-0006hG-LF; Sun, 25 Oct 2020 08:53:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWbmm-0005Ym-8j
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 08:53:56 +0000
X-Inumbo-ID: a7c5e10f-7f3a-47ab-ac84-84eb4a808054
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a7c5e10f-7f3a-47ab-ac84-84eb4a808054;
	Sun, 25 Oct 2020 08:53:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xxpkuZfiBXLTPy/v5Y4PF13M4pJvfr01UelrFLLIeM8=; b=N0EOZPxbIK529+SvvOIfeKfaR8
	gAioNuKGr9cKPvFCr1I+w68bxvxxKc9tVULp2Vh+6JuLwT/E3w7HPjL2ftgOCCDUjQ1gMbeXg+Igz
	+qRGWC/HUxxLHoXkTpW6acBvNwO0A1+s8lpC6k5Pytp9V2IG15OGVrAEMOqaifkLE3yI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWbmj-000415-Ug; Sun, 25 Oct 2020 08:53:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWbmj-0000Wp-Lh; Sun, 25 Oct 2020 08:53:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWbmj-0006hG-LF; Sun, 25 Oct 2020 08:53:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156206-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156206: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 08:53:53 +0000

flight 156206 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156206/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    1 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   20 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   19 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the ability to log in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing; remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 09:27:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 09:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11779.31133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWcJI-0008Em-Qt; Sun, 25 Oct 2020 09:27:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11779.31133; Sun, 25 Oct 2020 09:27:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWcJI-0008Ef-M4; Sun, 25 Oct 2020 09:27:32 +0000
Received: by outflank-mailman (input) for mailman id 11779;
 Sun, 25 Oct 2020 09:27:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWcJI-0008Ea-1T
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 09:27:32 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 207aa75d-38e5-4d91-b131-a9dfc510e07e;
 Sun, 25 Oct 2020 09:27:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWcJF-0004hq-B4; Sun, 25 Oct 2020 09:27:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWcJE-0001uY-V3; Sun, 25 Oct 2020 09:27:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWcJE-0004OE-UX; Sun, 25 Oct 2020 09:27:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWcJI-0008Ea-1T
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 09:27:32 +0000
X-Inumbo-ID: 207aa75d-38e5-4d91-b131-a9dfc510e07e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 207aa75d-38e5-4d91-b131-a9dfc510e07e;
	Sun, 25 Oct 2020 09:27:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nTuvXYmI7IopR8NWUYG4bqv2ssNtZvwU1u8WOYu6Lp4=; b=o+vP+eDxDwKGmxjRc2/nAVbLyi
	0VWAlAMSRH0xxAPSB+nVuYN9NbC5vco9VTHLQo5aM5/eVFyVo6Nte9Rp68pYnmetJ/GQPmCyUsRoF
	xm/L9YiZom2bEix/n5Ntwt5JCn8WNn2dAIqeQCBIKbjFSTN4hHMAJN3jGxnUYqgAazv8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWcJF-0004hq-B4; Sun, 25 Oct 2020 09:27:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWcJE-0001uY-V3; Sun, 25 Oct 2020 09:27:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWcJE-0004OE-UX; Sun, 25 Oct 2020 09:27:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156196-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156196: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 09:27:28 +0000

flight 156196 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156196/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156167
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156167
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156167
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156167
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156167
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156167
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156167
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156167
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156167
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156167
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156167
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156196  2020-10-25 01:51:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Oct 25 09:50:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 09:50:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11792.31150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWcfb-0002K4-SB; Sun, 25 Oct 2020 09:50:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11792.31150; Sun, 25 Oct 2020 09:50:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWcfb-0002Jx-Or; Sun, 25 Oct 2020 09:50:35 +0000
Received: by outflank-mailman (input) for mailman id 11792;
 Sun, 25 Oct 2020 09:50:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWcfa-0002JP-S9
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 09:50:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0d787a2-9dcc-4f36-bda3-1279767484ba;
 Sun, 25 Oct 2020 09:50:28 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWcfU-0005Ak-74; Sun, 25 Oct 2020 09:50:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWcfT-0002es-VB; Sun, 25 Oct 2020 09:50:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWcfT-0000CX-Uf; Sun, 25 Oct 2020 09:50:27 +0000
X-Inumbo-ID: c0d787a2-9dcc-4f36-bda3-1279767484ba
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4ezHm1osa7cS3bnpu7U5iF7FGeieEOppDJmGECkT2lo=; b=K/agsyFzLtu5t+Q4CHOVYhjZZQ
	VrwicDvCuYJQ6JWGlbt+jpoglzzO8T3AJZ2SCvZprCQIR6Ks4QyTFr31fFmpGTzbqJe+8u9yxXgOW
	NGAS68Ufc1RSrRPF0h+2QlyG8ZkmTYf2QErkCpqgY0uelOFLC6slWxplVoBn2rciwt8Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156209-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 156209: all pass - PUSHED
X-Osstest-Versions-This:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
X-Osstest-Versions-That:
    xen=3b49791e4cc2f38dd84bf331b75217adaef636e3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 09:50:27 +0000

flight 156209 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156209/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c
baseline version:
 xen                  3b49791e4cc2f38dd84bf331b75217adaef636e3

Last test of basis   156067  2020-10-21 09:19:38 Z    4 days
Testing same since   156209  2020-10-25 09:18:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   3b49791e4c..6ca70821b5  6ca70821b59849ad97c3fadc47e63c1a4af1a78c -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 10:11:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 10:11:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11801.31162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWczv-0004C8-LY; Sun, 25 Oct 2020 10:11:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11801.31162; Sun, 25 Oct 2020 10:11:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWczv-0004C1-IJ; Sun, 25 Oct 2020 10:11:35 +0000
Received: by outflank-mailman (input) for mailman id 11801;
 Sun, 25 Oct 2020 10:11:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hByA=EA=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWczu-0004Bw-5U
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 10:11:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 29e5e7ec-1e6b-4c76-ad3c-9c99d5d5a695;
 Sun, 25 Oct 2020 10:11:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 116C6B919;
 Sun, 25 Oct 2020 10:11:32 +0000 (UTC)
X-Inumbo-ID: 29e5e7ec-1e6b-4c76-ad3c-9c99d5d5a695
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603620692;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=raNB0yYuAFJJIBaE4EtGmLT9bOORIVRZjUCtlK/P0+o=;
	b=GnGdjSc9xiqh7P1KQxF+XPTCoKY2UUQthboVD3tGA3Y3LfsN1ldW9P6hWtsiJsTkNderJW
	DXnVS665P4jX0xi21Z5XJyb1EU5uIgyWPDgz7nt/WrE8IPFIkeM5Mn2/Saf5JlbAqoFwpf
	sz02Cna1LaTL1m1KRYqEzpo2TdYU4NY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/libs: let build depend on official headers
Date: Sun, 25 Oct 2020 11:11:29 +0100
Message-Id: <20201025101129.19685-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The build target of a library should depend on the official headers
of that library, too, as those might be required for building other
tools.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/libs.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/libs.mk b/tools/libs/libs.mk
index 959ff91a56..b0e785b380 100644
--- a/tools/libs/libs.mk
+++ b/tools/libs/libs.mk
@@ -57,7 +57,7 @@ $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 all: build
 
 .PHONY: build
-build: libs libxen$(LIBNAME).map
+build: libs libxen$(LIBNAME).map $(LIBHEADERS)
 
 .PHONY: libs
 libs: headers.chk $(LIB) $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
-- 
2.26.2
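
[Editor's note: a minimal sketch of the dependency the patch adds. The
names (LIBHEADERS content, libxenfoo, the copy rule) are illustrative,
not taken from the Xen tree; the point is that once the public headers
are prerequisites of "build", they are guaranteed to exist before any
other tool that includes them is compiled.]

```make
# Hypothetical fragment, assuming public headers are listed in LIBHEADERS
# and installed into $(XEN_INCLUDE) by a generic copy rule.
LIBHEADERS := $(XEN_INCLUDE)/xenfoo.h   # illustrative header name

.PHONY: build
build: libs libxenfoo.map $(LIBHEADERS)   # headers now built with "build"

# Generic rule that puts a public header in place.
$(XEN_INCLUDE)/%.h: include/%.h
	cp -f $< $@
```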



From xen-devel-bounces@lists.xenproject.org Sun Oct 25 10:12:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 10:12:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11804.31175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWd0h-0004I4-0E; Sun, 25 Oct 2020 10:12:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11804.31175; Sun, 25 Oct 2020 10:12:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWd0g-0004Hx-SX; Sun, 25 Oct 2020 10:12:22 +0000
Received: by outflank-mailman (input) for mailman id 11804;
 Sun, 25 Oct 2020 10:12:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hByA=EA=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWd0f-0004Hr-KB
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 10:12:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93c5f21b-db38-4d7d-9f4f-3b91664639d7;
 Sun, 25 Oct 2020 10:12:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 257C9B917;
 Sun, 25 Oct 2020 10:12:20 +0000 (UTC)
X-Inumbo-ID: 93c5f21b-db38-4d7d-9f4f-3b91664639d7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603620740;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=IvOcZw1DeMnPaGyM6RGdEv/N6xQF81rZ8GfMXEFkmCk=;
	b=RMRWxFYhe2KakQribLWQ9hVXdmKZT+CYXWfMLsgGuneSZH0VABpZHvtkCDgWatTn0SzvzY
	AiftEXnkyQGGhsvlsN514ZOtKWvInAz0xhZezsdN5smEaXzYWXBe+6JigI4zYvfdSPQlx4
	Mr+YWF4l7J3Ip+xFGqZdFwhTWgv9yRY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools/libs/light: fix race in Makefile
Date: Sun, 25 Oct 2020 11:12:18 +0100
Message-Id: <20201025101218.20478-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The header $(XEN_INCLUDE)/_libxl_list.h matches two different rules, which
can result in build breakage. Fix that.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/light/Makefile | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 3424fdb61b..370537ed38 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -203,9 +203,9 @@ _libxl.api-for-check: $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
 		>$@.new
 	mv -f $@.new $@
 
-$(XEN_INCLUDE)/_libxl_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
-	$(PERL) $^ --prefix=libxl >$(notdir $@).new
-	$(call move-if-changed,$(notdir $@).new,$@)
+_libxl_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
+	$(PERL) $^ --prefix=libxl >$@.new
+	$(call move-if-changed,$@.new,$@)
 
 _libxl_save_msgs_helper.c _libxl_save_msgs_callout.c \
 _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
-- 
2.26.2
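
[Editor's note: a sketch of the rule clash the patch fixes. The generic
copy rule shown is an assumption about the surrounding makefiles, not a
quote from the Xen tree; it illustrates how one path can match two
recipes, so a parallel build (-j) may run both against the same file.]

```make
# Before the patch: an explicit rule generates the header directly in
# $(XEN_INCLUDE)...
$(XEN_INCLUDE)/_libxl_list.h: seddery bsd-sys-queue.h
	$(PERL) $^ --prefix=libxl >$@.new
	$(call move-if-changed,$@.new,$@)

# ...while a generic install rule (hypothetical shape) also matches the
# same path, giving the target two competing recipes:
$(XEN_INCLUDE)/%.h: %.h
	cp -f $< $@
```

The fix generates _libxl_list.h in the local build directory instead, so
only the generic rule ever writes to $(XEN_INCLUDE).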



From xen-devel-bounces@lists.xenproject.org Sun Oct 25 10:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 10:29:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11811.31189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWdHN-0005Oj-Hi; Sun, 25 Oct 2020 10:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11811.31189; Sun, 25 Oct 2020 10:29:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWdHN-0005Oc-EL; Sun, 25 Oct 2020 10:29:37 +0000
Received: by outflank-mailman (input) for mailman id 11811;
 Sun, 25 Oct 2020 10:29:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWdHM-0005Ny-SL
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 10:29:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc824fdb-d04d-440f-a6ff-d54c5f0a58ea;
 Sun, 25 Oct 2020 10:29:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWdHC-000641-KM; Sun, 25 Oct 2020 10:29:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWdHC-0004Ni-CY; Sun, 25 Oct 2020 10:29:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWdHC-0003dh-C3; Sun, 25 Oct 2020 10:29:26 +0000
X-Inumbo-ID: dc824fdb-d04d-440f-a6ff-d54c5f0a58ea
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AlRlupDnsrRNcpUVO6szns9Evgv9dPKDSPLxk5+xHUo=; b=qdtDTWYDF9KS/62yEx0XSC6iiY
	guZJ2kDx3Lpii9RuLAdDgxnIP0h0v0rD5dIrYI9JDLYfkoi0t9ul/3K40/nzEMsh5sahTd1FNzBRV
	c4GvLy8k59v+EMOoAoOwyBjJYUsN7cNdk6NWuo3BI2i49aN9qW704HBgtTNe92cuVl+c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156208-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156208: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 10:29:26 +0000

flight 156208 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156208/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  141 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   26 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 10:39:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 10:39:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11815.31202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWdQr-0006Mn-LS; Sun, 25 Oct 2020 10:39:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11815.31202; Sun, 25 Oct 2020 10:39:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWdQr-0006Mg-HT; Sun, 25 Oct 2020 10:39:25 +0000
Received: by outflank-mailman (input) for mailman id 11815;
 Sun, 25 Oct 2020 10:39:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWdQp-0006Mb-Ok
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 10:39:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f34d2d3-b8e5-4f1e-81bb-313d11da9b5c;
 Sun, 25 Oct 2020 10:39:21 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWdQn-0006GY-Dw; Sun, 25 Oct 2020 10:39:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWdQn-0004m3-5R; Sun, 25 Oct 2020 10:39:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWdQn-00046G-4u; Sun, 25 Oct 2020 10:39:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWdQp-0006Mb-Ok
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 10:39:24 +0000
X-Inumbo-ID: 9f34d2d3-b8e5-4f1e-81bb-313d11da9b5c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9f34d2d3-b8e5-4f1e-81bb-313d11da9b5c;
	Sun, 25 Oct 2020 10:39:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YN9H5mCNitUsmYFAYVEs+s1bjIZO+4i6GSn2XP/JztA=; b=WVZwicoTFv4Eep/IvQhBI7ceQD
	22t4OaoowhADRofOenveHw28vj1OjiOEkSbMVb7yCGvVOW5GUyMGGfQiYCOdl8WveVTGb4DJNDDQF
	Gttpq16oZT7D9w184FDa9LOoeM5DXCKSQLC3Zkcp9z2VQD6X4J+GcLUwJdsF2GKuqAzA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWdQn-0006GY-Dw; Sun, 25 Oct 2020 10:39:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWdQn-0004m3-5R; Sun, 25 Oct 2020 10:39:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWdQn-00046G-4u; Sun, 25 Oct 2020 10:39:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156207-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156207: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 10:39:21 +0000

flight 156207 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156207/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   21 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   20 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links to access the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and library-specific include paths are no
    longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV-support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing, so remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 10:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 10:41:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11819.31217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWdT7-0007AX-2z; Sun, 25 Oct 2020 10:41:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11819.31217; Sun, 25 Oct 2020 10:41:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWdT6-0007AQ-Vm; Sun, 25 Oct 2020 10:41:44 +0000
Received: by outflank-mailman (input) for mailman id 11819;
 Sun, 25 Oct 2020 10:41:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hByA=EA=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWdT5-0007AJ-Fd
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 10:41:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ec3ea25-aabe-4e76-a36d-8bb467c7242d;
 Sun, 25 Oct 2020 10:41:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D6114AF87;
 Sun, 25 Oct 2020 10:41:41 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=hByA=EA=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kWdT5-0007AJ-Fd
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 10:41:43 +0000
X-Inumbo-ID: 0ec3ea25-aabe-4e76-a36d-8bb467c7242d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0ec3ea25-aabe-4e76-a36d-8bb467c7242d;
	Sun, 25 Oct 2020 10:41:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603622502;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=K1lMPXuT/iwQ4NEr8qzlsyuvWWhJIYQ55CHdPb6wS9s=;
	b=p4UGfDC3B5qBo1pfMZNoT5+88WfO7eQ4J6ln/6UwH8nxlhHWEcqep4eBH9e+BUQCcYvAwo
	5cLogFJbYXNGBVmP1y191OapTItiMoNmdVbR1yC21Twn0/FMtfWUBXGlNeIdA6r77rY/vL
	1kKrDf1ABKGr4kRDFdrpkP2GkCPHGbk=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id D6114AF87;
	Sun, 25 Oct 2020 10:41:41 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.10-rc1c
Date: Sun, 25 Oct 2020 11:41:41 +0100
Message-Id: <20201025104141.4698-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1c-tag

xen: branch for v5.10-rc1c

It contains:

- a series for the Xen pv block drivers adding module parameters for
  better control of resource usage

- a cleanup series for the Xen event driver


Thanks.

Juergen

 Documentation/ABI/testing/sysfs-driver-xen-blkback |  9 +++
 .../ABI/testing/sysfs-driver-xen-blkfront          | 11 ++-
 Documentation/admin-guide/kernel-parameters.txt    |  7 ++
 arch/x86/xen/smp.c                                 | 19 +++--
 arch/x86/xen/xen-ops.h                             |  2 +
 drivers/block/xen-blkback/xenbus.c                 | 22 +++--
 drivers/block/xen-blkfront.c                       | 20 +++--
 drivers/xen/events/events_2l.c                     |  7 +-
 drivers/xen/events/events_base.c                   | 94 +++++++++++++++-------
 drivers/xen/events/events_fifo.c                   |  9 ++-
 drivers/xen/events/events_internal.h               | 70 +++-------------
 include/xen/events.h                               |  8 --
 12 files changed, 152 insertions(+), 126 deletions(-)

Juergen Gross (5):
      xen: remove no longer used functions
      xen/events: make struct irq_info private to events_base.c
      xen/events: only register debug interrupt for 2-level events
      xen/events: unmask a fifo event channel only if it was masked
      Documentation: add xen.fifo_events kernel parameter description

SeongJae Park (3):
      xen-blkback: add a parameter for disabling of persistent grants
      xen-blkfront: add a parameter for disabling of persistent grants
      xen-blkfront: Apply changed parameter name to the document


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 12:37:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 12:37:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11866.31229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWfGr-0008Ao-PI; Sun, 25 Oct 2020 12:37:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11866.31229; Sun, 25 Oct 2020 12:37:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWfGr-0008Ah-LE; Sun, 25 Oct 2020 12:37:13 +0000
Received: by outflank-mailman (input) for mailman id 11866;
 Sun, 25 Oct 2020 12:37:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWfGr-0008AY-4o
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 12:37:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd5028d5-6af7-44a5-ad86-f6f8eaddbd3f;
 Sun, 25 Oct 2020 12:37:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWfGo-0000AB-1q; Sun, 25 Oct 2020 12:37:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWfGn-0007uI-Qs; Sun, 25 Oct 2020 12:37:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWfGn-0000uV-QN; Sun, 25 Oct 2020 12:37:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWfGr-0008AY-4o
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 12:37:13 +0000
X-Inumbo-ID: dd5028d5-6af7-44a5-ad86-f6f8eaddbd3f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dd5028d5-6af7-44a5-ad86-f6f8eaddbd3f;
	Sun, 25 Oct 2020 12:37:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NbHe0JRRaSGHWwnb3jguwQjWGX2io7RJTgqlZAlcrQ8=; b=Wd0LgawV86OfVZzNFpHgadrvWB
	sa4ut4RuABqQ2khLQcGJiJfEQTB0jpFzYpLP3EDEr/su0OpO6yK27cuFtaDCTSFIsQUrBj1LuB/b7
	prXrk7hXcBYtCTvUzJrhkN2jVNkX1Yzswm6SYEHKCNGrh60k0Rh5DyOSOzDOXN9MEMoc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWfGo-0000AB-1q; Sun, 25 Oct 2020 12:37:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWfGn-0007uI-Qs; Sun, 25 Oct 2020 12:37:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWfGn-0000uV-QN; Sun, 25 Oct 2020 12:37:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156211-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156211: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 12:37:09 +0000

flight 156211 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156211/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   22 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   21 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links to access the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and library-specific include paths are no
    longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV-support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing, so remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 13:04:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 13:04:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11877.31249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWfhI-0002Op-7E; Sun, 25 Oct 2020 13:04:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11877.31249; Sun, 25 Oct 2020 13:04:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWfhI-0002Oi-3R; Sun, 25 Oct 2020 13:04:32 +0000
Received: by outflank-mailman (input) for mailman id 11877;
 Sun, 25 Oct 2020 13:04:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWfhG-0002Od-Sw
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 13:04:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c77e62fe-f97c-429b-9f66-175271d8808a;
 Sun, 25 Oct 2020 13:04:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWfhA-0000lb-W0; Sun, 25 Oct 2020 13:04:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWfhA-0008VW-Oh; Sun, 25 Oct 2020 13:04:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWfhA-0002jr-OC; Sun, 25 Oct 2020 13:04:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWfhG-0002Od-Sw
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 13:04:31 +0000
X-Inumbo-ID: c77e62fe-f97c-429b-9f66-175271d8808a
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c77e62fe-f97c-429b-9f66-175271d8808a;
	Sun, 25 Oct 2020 13:04:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fwTgZJSsHsVFYYiRtB5KfuKicdvQdgBkTgQ4iNJSZDk=; b=Oe9FzDi0FGB1FUot1Tp+wl5xwZ
	kbYfLdQhZUkhr0J1lTT1tAGpJly5U0Ky4SiQ0v/qiG0WPsZpY/Ec6C3A3jHETDkFzBv7QonGIvCXe
	BgSGfR1cEyg8NEExybtMgEcRZKlxRm/DBC3JEzJ65JfrJRbE1W9O6xs0CJbAQOlf0d0s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156210-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156210: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 13:04:24 +0000

flight 156210 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156210/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   64 days  142 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   27 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 13:52:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 13:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11886.31264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWgR1-0006cz-08; Sun, 25 Oct 2020 13:51:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11886.31264; Sun, 25 Oct 2020 13:51:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWgR0-0006cs-TM; Sun, 25 Oct 2020 13:51:46 +0000
Received: by outflank-mailman (input) for mailman id 11886;
 Sun, 25 Oct 2020 13:51:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWgQy-0006cn-Pl
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 13:51:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 561d56a7-dfbd-467f-b0cc-1983d304a57d;
 Sun, 25 Oct 2020 13:51:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWgQw-0001fI-Hn; Sun, 25 Oct 2020 13:51:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWgQw-000166-99; Sun, 25 Oct 2020 13:51:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWgQw-00028h-8d; Sun, 25 Oct 2020 13:51:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWgQy-0006cn-Pl
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 13:51:44 +0000
X-Inumbo-ID: 561d56a7-dfbd-467f-b0cc-1983d304a57d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 561d56a7-dfbd-467f-b0cc-1983d304a57d;
	Sun, 25 Oct 2020 13:51:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=79AT0qxnZoool6a3al37T1p6K4Y9/aF9SjYJ2idh8UU=; b=dJAIZ0lAhcT/TbP7P8ZfQUFB3B
	LrPbX5wmwqTw7VfPkZQdIKmXAajA1LMIfXdB8SxBazjc96qO2NkI0CteEl6HNOB+rcTbyEhxbJ4W2
	eRalMWCojqPCQFes8cnoiyc1IiZiU6QtIjC1kkJsiay4rx/DR6IMTEC1S2orCtZ1yw+E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWgQw-0001fI-Hn; Sun, 25 Oct 2020 13:51:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWgQw-000166-99; Sun, 25 Oct 2020 13:51:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWgQw-00028h-8d; Sun, 25 Oct 2020 13:51:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156212-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156212: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 13:51:42 +0000

flight 156212 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156212/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    1 days   23 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   22 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
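A minimal sketch of the technique the commit above describes, i.e. replacing symlinked sources with `vpath` plus an include path (all file and directory names here are hypothetical; the real layout of tools/libs/store differs):

```make
# Pick up xenstored sources via vpath instead of symlinking them in.
XENSTORED_DIR := ../../xenstored        # hypothetical relative path

# Tell make where to find the .c files named in OBJS.
vpath %.c $(XENSTORED_DIR)

# Let the compiler find their headers via -I rather than local links.
CFLAGS += -I$(XENSTORED_DIR)/include

OBJS := tdb.o utils.o                   # files that used to be symlinks

libstore.a: $(OBJS)
	$(AR) rcs $@ $^
```

With this, `make` resolves `tdb.c` and `utils.c` in `$(XENSTORED_DIR)` at build time, so no symlinks need to exist in the library directory.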

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links to access the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it, remove the setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
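The probe-then-create ordering the commit above describes can be sketched roughly as follows. This is illustrative Python, not the real C code: `parse_pv_image`, `choose_domain_type`, and `create_domain` are hypothetical stand-ins (the real tool uses libxenguest's image-parsing routines):

```python
def parse_pv_image(image: bytes) -> bool:
    """Return True if the image looks like a PV kernel (toy check).

    The real code inspects ELF notes; here we just look for a marker.
    """
    return image.startswith(b"PV")

def choose_domain_type(image: bytes) -> str:
    # Try PV first; fall back to PVH on parse failure, mirroring
    # "parse for PV support, if this fails use HVM".
    return "pv" if parse_pv_image(image) else "pvh"

def create_domain(image: bytes) -> str:
    # Probe the image *before* creating the domain, so the domain can
    # be created with the appropriate type from the start.
    dom_type = choose_domain_type(image)
    # ... domain creation with the selected type would happen here ...
    return dom_type
```

The key point is only the ordering: the image is parsed before any domain exists, so the type decision never has to be retrofitted onto an already-created domain.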

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add logging support to init-xenstore-domain: use -v[...] to select
    the log level as in xl, and log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 14:18:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 14:18:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11896.31278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWgqP-00009q-5v; Sun, 25 Oct 2020 14:18:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11896.31278; Sun, 25 Oct 2020 14:18:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWgqP-00009j-2p; Sun, 25 Oct 2020 14:18:01 +0000
Received: by outflank-mailman (input) for mailman id 11896;
 Sun, 25 Oct 2020 14:17:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWgqN-00009c-QW
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 14:17:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1f2c8ad0-be0e-4995-835c-4573285c6363;
 Sun, 25 Oct 2020 14:17:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWgqK-0002Ho-1R; Sun, 25 Oct 2020 14:17:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWgqJ-0001ee-OC; Sun, 25 Oct 2020 14:17:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWgqJ-000845-Nf; Sun, 25 Oct 2020 14:17:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWgqN-00009c-QW
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 14:17:59 +0000
X-Inumbo-ID: 1f2c8ad0-be0e-4995-835c-4573285c6363
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1f2c8ad0-be0e-4995-835c-4573285c6363;
	Sun, 25 Oct 2020 14:17:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=C7t7ZR6J7Wu4xlTERWT2amY6TYS2TVlh+XvS6faWu0s=; b=rjltcyEGzftCiFPqdFWSVMySw5
	YyfGYfYeH3Mlxagm7NGj8q47OkT23Y1ZCoTJLcFaXH0O2U6vyF+jCaHAhKDOaaA6tTd6ptzICc8HB
	MFwvrX351mz4ay9VERrTi7vmTbpW4RKjtbRxRsrOwiaMDs28RnFQdmY+5bvLaNeRkjus=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWgqK-0002Ho-1R; Sun, 25 Oct 2020 14:17:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWgqJ-0001ee-OC; Sun, 25 Oct 2020 14:17:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWgqJ-000845-Nf; Sun, 25 Oct 2020 14:17:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156213-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156213: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 14:17:55 +0000

flight 156213 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156213/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  143 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    2 days   28 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 15:05:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 15:05:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11913.31297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWhaM-0004RD-3u; Sun, 25 Oct 2020 15:05:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11913.31297; Sun, 25 Oct 2020 15:05:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWhaM-0004R6-0h; Sun, 25 Oct 2020 15:05:30 +0000
Received: by outflank-mailman (input) for mailman id 11913;
 Sun, 25 Oct 2020 15:05:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWhaK-0004QY-5W
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 15:05:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb8eff14-088a-4f0f-8869-cdc1694f54db;
 Sun, 25 Oct 2020 15:05:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhaB-0003E5-SZ; Sun, 25 Oct 2020 15:05:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhaB-0002gz-KX; Sun, 25 Oct 2020 15:05:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhaB-0003Q7-K3; Sun, 25 Oct 2020 15:05:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWhaK-0004QY-5W
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 15:05:28 +0000
X-Inumbo-ID: cb8eff14-088a-4f0f-8869-cdc1694f54db
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id cb8eff14-088a-4f0f-8869-cdc1694f54db;
	Sun, 25 Oct 2020 15:05:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=REvEdBalZWGXWoAhV4X0FEUQe00vdLJsuFV8bgNfnRg=; b=MowBsXR3/GLropN+Ai0jCZMO9f
	fSL8Wdd93SOab589X50X+Jlc9RlXZt2N/V932/14RaRQ08pIRhGCZnf+H0blw47vCPfEkH9pLxqC4
	94uD7NYHG9oLwbhPpHJhIdGYX4kdz1bA1u3ImmJV+MYBYUYVyIQ/tRzhLrDSrhmRE5E4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWhaB-0003E5-SZ; Sun, 25 Oct 2020 15:05:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWhaB-0002gz-KX; Sun, 25 Oct 2020 15:05:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWhaB-0003Q7-K3; Sun, 25 Oct 2020 15:05:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156214-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156214: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 15:05:19 +0000

flight 156214 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156214/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   24 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   23 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need to have
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both pv and pvh type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl; log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing; remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 15:13:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 15:13:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11918.31312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWhhj-0005Lt-Vb; Sun, 25 Oct 2020 15:13:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11918.31312; Sun, 25 Oct 2020 15:13:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWhhj-0005Lm-SY; Sun, 25 Oct 2020 15:13:07 +0000
Received: by outflank-mailman (input) for mailman id 11918;
 Sun, 25 Oct 2020 15:13:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWhhi-0005L5-Tr
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 15:13:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c170703-4589-434d-9eed-b2765b64a477;
 Sun, 25 Oct 2020 15:12:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhhb-0003Nr-E8; Sun, 25 Oct 2020 15:12:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhhb-0002qm-5W; Sun, 25 Oct 2020 15:12:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhhb-0002mo-4z; Sun, 25 Oct 2020 15:12:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWhhi-0005L5-Tr
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 15:13:06 +0000
X-Inumbo-ID: 5c170703-4589-434d-9eed-b2765b64a477
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5c170703-4589-434d-9eed-b2765b64a477;
	Sun, 25 Oct 2020 15:12:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Bdkvd+P5wShcXEsW+qs1Bk1GZilKLt6MhAr7C8ZiCG8=; b=nAFn1AqQ1ECh59PluKFMbmyBrk
	SCYSTmJXunjaZnVlfR1K/kO9pNkALpiKIA0tjImbk61LdAzmawyW3eiLYuTJR/B8Py2C5q6YyLzEi
	uRuabN8jmeb297e9f1+VZlxXe1k3Z+ry+hhTM4qdbrQr1hgZW4c55WbXh3mc1yeXg/xs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156202-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156202: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d76913908102044f14381df865bb74df17a538cb
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 15:12:59 +0000

flight 156202 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156202/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d76913908102044f14381df865bb74df17a538cb
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   85 days
Failing since        152366  2020-08-01 20:49:34 Z   84 days  143 attempts
Testing same since   156202  2020-10-25 05:32:39 Z    0 days    1 attempts

------------------------------------------------------------
3370 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 639988 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 15:31:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 15:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11924.31324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWhze-00076Y-Md; Sun, 25 Oct 2020 15:31:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11924.31324; Sun, 25 Oct 2020 15:31:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWhze-00076R-Ja; Sun, 25 Oct 2020 15:31:38 +0000
Received: by outflank-mailman (input) for mailman id 11924;
 Sun, 25 Oct 2020 15:31:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWhzd-00076M-1w
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 15:31:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c48805a-6106-41d8-ad90-508b91dd8cc3;
 Sun, 25 Oct 2020 15:31:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhzV-0003js-OE; Sun, 25 Oct 2020 15:31:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhzV-0003Gg-Cz; Sun, 25 Oct 2020 15:31:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWhzV-0006iU-CU; Sun, 25 Oct 2020 15:31:29 +0000
X-Inumbo-ID: 1c48805a-6106-41d8-ad90-508b91dd8cc3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g6PcTHNEclwmZDgk/uNSAl2KP0hTL06EMbz03g3Wh3c=; b=rwg/2R2GW3ZENH8SkgP9wnWPaf
	S25MWIKE8RHKMIo2FYOU+RkunbQPVXPRDDweTfdCy8PvNtOdvGRCePbvqWF7v4nsEmS3HCaaWtZIY
	kd+qrV1D707lRhzu/yjed+KDRFEI9XI2IonSvNrkTRXkjJz6fWcXPwVriWQCgu8Nxk5w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156215-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156215: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 15:31:29 +0000

flight 156215 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156215/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  144 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   29 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 16:20:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 16:20:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11940.31342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWikD-0002o7-Ky; Sun, 25 Oct 2020 16:19:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11940.31342; Sun, 25 Oct 2020 16:19:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWikD-0002o0-G8; Sun, 25 Oct 2020 16:19:45 +0000
Received: by outflank-mailman (input) for mailman id 11940;
 Sun, 25 Oct 2020 16:19:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWikD-0002nL-1i
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 16:19:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5afa73c-4d88-42a4-9e9b-6217702199d9;
 Sun, 25 Oct 2020 16:19:36 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWik3-0005F6-Uv; Sun, 25 Oct 2020 16:19:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWik3-0004JJ-J9; Sun, 25 Oct 2020 16:19:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWik3-0008H2-Ie; Sun, 25 Oct 2020 16:19:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWikD-0002nL-1i
	for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 16:19:45 +0000
X-Inumbo-ID: a5afa73c-4d88-42a4-9e9b-6217702199d9
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a5afa73c-4d88-42a4-9e9b-6217702199d9;
	Sun, 25 Oct 2020 16:19:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f8jKHb7peuz8nlTfk+twtmVR4mWUnmoVVARFP3L+Ox4=; b=f4feCfujO3OVmYfwDidLAc+F/i
	Qb+BpKTqxGKQkJAPoYX0FgezhlDl2LZOwz0VDqkoSGfsJtdwNniGxtr7R6oXPsT0kPHOpOD5deHYl
	UqDW8vFXVvK/vOGh5VScsdiA5W6u2ACrRL4F5LU7vFtQs5FPu1xGqqM0SCccOYbw8p+c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWik3-0005F6-Uv; Sun, 25 Oct 2020 16:19:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWik3-0004JJ-J9; Sun, 25 Oct 2020 16:19:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWik3-0008H2-Ie; Sun, 25 Oct 2020 16:19:35 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156216-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156216: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 16:19:35 +0000

flight 156216 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156216/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  145 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   30 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 16:59:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 16:59:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11948.31357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWjMa-0006Gd-To; Sun, 25 Oct 2020 16:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11948.31357; Sun, 25 Oct 2020 16:59:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWjMa-0006GW-Qc; Sun, 25 Oct 2020 16:59:24 +0000
Received: by outflank-mailman (input) for mailman id 11948;
 Sun, 25 Oct 2020 16:59:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWjMZ-0006FK-UC
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 16:59:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2503b139-e743-4d9c-b45c-0e505a927c73;
 Sun, 25 Oct 2020 16:59:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWjMS-00061J-M9; Sun, 25 Oct 2020 16:59:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWjMS-0005Ax-E6; Sun, 25 Oct 2020 16:59:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWjMS-0005TU-Dc; Sun, 25 Oct 2020 16:59:16 +0000
X-Inumbo-ID: 2503b139-e743-4d9c-b45c-0e505a927c73
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lFfAUHz9Ne7pU/iBp+MDzv32c3Pe4SKLL7+R2pEhikg=; b=iDX8SPJ5BE4FrD9KgwLpH6Pz+9
	/a16lx7QG71eYO+CN+PK28uJ1CX3BD2NTcVpJ2CqwRmHgrAbNl7Iza48m9vCbuQZsPaRqfTiKFiMh
	unLOQrviP8wuZgqNIhVRLpUmGCDE2QYQJ9L34SEy+izphtZQhSSPWwaHqtJBz5MniXko=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156218-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156218: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 16:59:16 +0000

flight 156218 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156218/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   25 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    1 days   24 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and library-specific include paths are no
    longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add logging support to init-xenstore-domain: use -v[...] to select
    the log level as in xl, and log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing, so remove him from the
    MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 18:06:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 18:06:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11958.31371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWkP1-0003mw-Rz; Sun, 25 Oct 2020 18:05:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11958.31371; Sun, 25 Oct 2020 18:05:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWkP1-0003mp-Ov; Sun, 25 Oct 2020 18:05:59 +0000
Received: by outflank-mailman (input) for mailman id 11958;
 Sun, 25 Oct 2020 18:05:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWkP0-0003mB-Cw
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 18:05:58 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id abfdb9ce-6aad-4349-b9f8-fcae87ce6bc8;
 Sun, 25 Oct 2020 18:05:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWkOq-0007Qr-UU; Sun, 25 Oct 2020 18:05:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWkOq-0007KW-Gv; Sun, 25 Oct 2020 18:05:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWkOq-0001m9-GQ; Sun, 25 Oct 2020 18:05:48 +0000
X-Inumbo-ID: abfdb9ce-6aad-4349-b9f8-fcae87ce6bc8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LQA4rLE6F9OV0mAgnpwSyay8st99zd1jy+q63HedlxQ=; b=FgPT356hQSg2mGpb+eBiKr42hw
	3lW6Aencnip8afWDD4ZMuC1QOwU2VXUGdOCKbk1QSR/s1cOLM0hMLJEldZHfSVHEphSXyhdHO1Ldz
	R/5QFiuue3f2ingwNmsgEgK2q63fFI8ymdw/6YMvLZB89dyo7NldePf8AOJ72Ipb8pzs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156219-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156219: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 18:05:48 +0000

flight 156219 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156219/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  146 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   31 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 18:35:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 18:35:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11963.31384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWkrV-0006Oz-D1; Sun, 25 Oct 2020 18:35:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11963.31384; Sun, 25 Oct 2020 18:35:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWkrV-0006Os-9z; Sun, 25 Oct 2020 18:35:25 +0000
Received: by outflank-mailman (input) for mailman id 11963;
 Sun, 25 Oct 2020 18:35:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+mPq=EA=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1kWkrU-0006On-A3
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 18:35:24 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3ccfd54-0321-451b-955a-669e03e53ef4;
 Sun, 25 Oct 2020 18:35:23 +0000 (UTC)
Subject: Re: [GIT PULL] xen: branch for v5.10-rc1c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603650922;
	bh=1Yw38DTAm5hqdpgggMPY7fzxqYKarUJDpGy5M517d3Y=;
	h=From:In-Reply-To:References:Date:To:Cc:From;
	b=Ghr5L7DXNdFpn0F7DiGFM9GcF46YlKMx8DXuqJhxTw83qEHTjKaxov/IfHeMVJLYj
	 A4qzHMUY6trW9sZSs17PjT+scdpvo5QHErMVoSFIJ29uZsArLsVLf0G0dJ7kSNKE26
	 oOOiq3QrwMoGmfqx3Z5lqMZo4IThXOrOGclRakxw=
From: pr-tracker-bot@kernel.org
In-Reply-To: <20201025104141.4698-1-jgross@suse.com>
References: <20201025104141.4698-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20201025104141.4698-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1c-tag
X-PR-Tracked-Commit-Id: 1a89c1dc9520b908e7894652ee2b19db9de37b64
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: bd6aabc7ca39dd28a27fe1ec99e36e941cfb8192
Message-Id: <160365092281.20889.10785325670814123122.pr-tracker-bot@kernel.org>
Date: Sun, 25 Oct 2020 18:35:22 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Sun, 25 Oct 2020 11:41:41 +0100:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.10b-rc1c-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/bd6aabc7ca39dd28a27fe1ec99e36e941cfb8192

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 18:39:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 18:39:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11967.31396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWkvr-0006bc-1b; Sun, 25 Oct 2020 18:39:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11967.31396; Sun, 25 Oct 2020 18:39:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWkvq-0006bV-T5; Sun, 25 Oct 2020 18:39:54 +0000
Received: by outflank-mailman (input) for mailman id 11967;
 Sun, 25 Oct 2020 18:39:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWkvp-0006bO-LL
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 18:39:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ffc53c42-9dff-4103-b006-4039f9fb6e1e;
 Sun, 25 Oct 2020 18:39:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWkvn-00086h-5Z; Sun, 25 Oct 2020 18:39:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWkvm-0000Gv-U5; Sun, 25 Oct 2020 18:39:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWkvm-0003qb-RS; Sun, 25 Oct 2020 18:39:50 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Kc/Wr5fRIVYK6hv6QghYZHQCveM9+WKx7PNvkad4R+8=; b=VpXp4lg+435KJPvpa0bmkgd8I7
	+u+wFxUHBHRJ2YiWkktoGQlPib3p+PuyROrKwqg/rzkPbQJl3Os0S1mvSrewaUpD7Xqg6x3jWlFKq
	zPyxGd6nJ35aIZNIgSyT/SsekLhXX9DRplNSXOSWQj6pO1eBXl5hiolCioebALC6GvL0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156220-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156220: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 18:39:50 +0000

flight 156220 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156220/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   26 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   25 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has lead to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers use an include path instead.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having an own include directory move the
    official headers to tools/include instead. This will drop the need to
    link those headers to tools/include and there is no need any longer
    to have library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV-support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later do it the other way round. This enables to probe for the
    domain type supported by the xenstore-stubdom and to support both, pv
    and pvh type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add a possibility to do logging in init-xenstore-domain: use -v[...]
    for selecting the log-level as in xl, log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing, remove him from MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 21:12:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 21:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11974.31411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWnJG-00039T-7Q; Sun, 25 Oct 2020 21:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11974.31411; Sun, 25 Oct 2020 21:12:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWnJG-00039M-4S; Sun, 25 Oct 2020 21:12:14 +0000
Received: by outflank-mailman (input) for mailman id 11974;
 Sun, 25 Oct 2020 21:12:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWnJE-00039H-DN
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 21:12:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 403949e3-4ee4-435e-9a12-c465947084d5;
 Sun, 25 Oct 2020 21:12:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWnJA-0002ow-UC; Sun, 25 Oct 2020 21:12:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWnJA-0006Rn-L8; Sun, 25 Oct 2020 21:12:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWnJA-0004nB-Kd; Sun, 25 Oct 2020 21:12:08 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zB+hOxSTrw6O80B9aT+/D6bYkInXRtBv3n4lFyy5aYc=; b=bIbmMOh/KqLgLK6dakdBJqgrMH
	7YPrDrBZ4msNEqz0+W/dlrsdZYop6JJ4JOglMcSvZlB/MVP+hJmOUaU8D42iLKgrtd7hP3juNFdXU
	/rJQKwS+gnUgOOyJAWB+TczYi9hgLL9kxQD9hLfT5BAkAt/TlUeHvMztXIV+7NxjxsUc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156222-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156222: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 21:12:08 +0000

flight 156222 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156222/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   27 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   26 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include and removes the need for library-specific
    include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility to do logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl; log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 21:38:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 21:38:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11979.31429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWniT-00053S-CU; Sun, 25 Oct 2020 21:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11979.31429; Sun, 25 Oct 2020 21:38:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWniT-00053L-96; Sun, 25 Oct 2020 21:38:17 +0000
Received: by outflank-mailman (input) for mailman id 11979;
 Sun, 25 Oct 2020 21:38:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWniR-00052h-50
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 21:38:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a6707d6-2df4-4ea6-b6be-ee8ba69a54fb;
 Sun, 25 Oct 2020 21:38:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWniH-0003LX-AZ; Sun, 25 Oct 2020 21:38:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWniH-00072V-0y; Sun, 25 Oct 2020 21:38:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWniH-0007B3-0O; Sun, 25 Oct 2020 21:38:05 +0000
X-Inumbo-ID: 8a6707d6-2df4-4ea6-b6be-ee8ba69a54fb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7H9tmehqEwfCtpDqWi/iRjfNvof+3cyWyfhD9WMtfo8=; b=c4jwgeEA/Athfz1nGwPagd6bRz
	L2VQsIDaJIBHa7NENp5gdlHaBOyBiIdZp7t7ybRLVAwfCMOaKtzJeg+vNfe4ux2WXBBQO7dBTltZS
	FxM+X2bC4hI+Bsgav3LNUsmYbCtyC0CeKXx7ehTv0yzjW6HxQvSXQusxBGKOKZnAToWU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156221-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156221: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 21:38:05 +0000

flight 156221 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156221/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  147 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   32 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 23:14:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 23:14:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11986.31444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWpCr-00050S-MM; Sun, 25 Oct 2020 23:13:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11986.31444; Sun, 25 Oct 2020 23:13:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWpCr-00050L-IQ; Sun, 25 Oct 2020 23:13:45 +0000
Received: by outflank-mailman (input) for mailman id 11986;
 Sun, 25 Oct 2020 23:13:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWpCq-0004zk-RW
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 23:13:44 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05b96b88-5de9-4c5e-b6e4-d0440f827993;
 Sun, 25 Oct 2020 23:13:37 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWpCj-0005Hd-2J; Sun, 25 Oct 2020 23:13:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWpCi-0000lp-QT; Sun, 25 Oct 2020 23:13:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWpCi-0004KN-Q0; Sun, 25 Oct 2020 23:13:36 +0000
X-Inumbo-ID: 05b96b88-5de9-4c5e-b6e4-d0440f827993
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fjPL5KF8HgqsQZFZDARU9HNmPdGlfhpb5uCpqtTK/Vc=; b=aNxd3NL7vEujPBD2Qx6dntKU/6
	aI9FaDzyEZLfm2dz6CdgZQt+Y5HhW9268fk2Ct7yJX0IwgMY6NPepzE6Ge7qeoq2d20Roxb4/Su2y
	vZj94C+CEvsCZ/hCpooPjme/wueICEiqziSh3YYn1ThOiXdpwAXxSNB6wwMZewTn6lrw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156217-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156217: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d76913908102044f14381df865bb74df17a538cb
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 23:13:36 +0000

flight 156217 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156217/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm 10 host-ping-check-xen fail in 156202 REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu  fail in 156202 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1 10 host-ping-check-xen fail in 156202 pass in 156217
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156202
 test-arm64-arm64-examine      8 reboot                     fail pass in 156202
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 156202
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 156202

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d76913908102044f14381df865bb74df17a538cb
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   86 days
Failing since        152366  2020-08-01 20:49:34 Z   85 days  144 attempts
Testing same since   156202  2020-10-25 05:32:39 Z    0 days    2 attempts

------------------------------------------------------------
3370 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 639988 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Oct 25 23:56:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 25 Oct 2020 23:56:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.11991.31458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWpri-0008SB-11; Sun, 25 Oct 2020 23:55:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 11991.31458; Sun, 25 Oct 2020 23:55:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWprh-0008S4-To; Sun, 25 Oct 2020 23:55:57 +0000
Received: by outflank-mailman (input) for mailman id 11991;
 Sun, 25 Oct 2020 23:55:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ryJk=EA=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWprg-0008RQ-M1
 for xen-devel@lists.xenproject.org; Sun, 25 Oct 2020 23:55:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60c2644a-b70b-4976-9743-8f52b60f3bb8;
 Sun, 25 Oct 2020 23:55:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWprZ-00065e-8z; Sun, 25 Oct 2020 23:55:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWprY-0001hH-VZ; Sun, 25 Oct 2020 23:55:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWprY-0004Od-V3; Sun, 25 Oct 2020 23:55:48 +0000
X-Inumbo-ID: 60c2644a-b70b-4976-9743-8f52b60f3bb8
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 60c2644a-b70b-4976-9743-8f52b60f3bb8;
	Sun, 25 Oct 2020 23:55:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qU5x3p2lvjI5db2w3zQuAVTjruelESmYFdWi2L0qHfE=; b=Eax1+9CB+9yHcq7BGHjNTNwJ04
	SIe9foAaN2pg8LN3nBfmjbYRqXT2Ov/ij8lIhTiRGLeNXt9Vxz7HgArW3LGxhtUnXeshhigtJrLRI
	pGHwOsbDbgt/TBuSyHvy7us9jdYJkgUxtg1lLsO4xOJtPrbXGuqp+hnlcapA3QJRg9H0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156223-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156223: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 25 Oct 2020 23:55:48 +0000

flight 156223 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156223/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   28 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   27 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl; log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 00:22:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 00:22:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12011.31474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWqHG-0003Cd-FI; Mon, 26 Oct 2020 00:22:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12011.31474; Mon, 26 Oct 2020 00:22:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWqHG-0003CW-CJ; Mon, 26 Oct 2020 00:22:22 +0000
Received: by outflank-mailman (input) for mailman id 12011;
 Mon, 26 Oct 2020 00:22:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWqHE-0003Bs-SV
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 00:22:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3bef94cc-69fb-4237-8ac2-0e24cd1ac32b;
 Mon, 26 Oct 2020 00:22:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWqGz-0007Ex-6b; Mon, 26 Oct 2020 00:22:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWqGy-0002R8-Pw; Mon, 26 Oct 2020 00:22:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWqGy-0006Wj-PQ; Mon, 26 Oct 2020 00:22:04 +0000
X-Inumbo-ID: 3bef94cc-69fb-4237-8ac2-0e24cd1ac32b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 3bef94cc-69fb-4237-8ac2-0e24cd1ac32b;
	Mon, 26 Oct 2020 00:22:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=x+tOZ06bAC3fh50au9it2CzQFZYaiE8YYf3anw+uCuI=; b=u/lSiTIBV7kggJpmlj+xGBHoTF
	37F2y7ww3GD8BqR4jIo5G7XM/e+Y737yZ0AEkT7EH5iJuE7LsmicXHNKG8ONNV0pZo2AKl9Hlmlmt
	VherehAhbjIlDTr4c4zg1YyRzHqsCK8ubR+mFiIXAWM/ANca+Ot4XxB6Zc42GfDCnXTI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156224-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156224: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 00:22:04 +0000

flight 156224 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156224/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  148 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   33 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 01:10:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 01:10:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12017.31488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWr1R-0003JX-3r; Mon, 26 Oct 2020 01:10:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12017.31488; Mon, 26 Oct 2020 01:10:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWr1R-0003Il-0I; Mon, 26 Oct 2020 01:10:05 +0000
Received: by outflank-mailman (input) for mailman id 12017;
 Mon, 26 Oct 2020 01:10:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWr1P-0002v0-KV
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 01:10:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f60f20ec-464b-408c-bdd7-b347b64935f8;
 Mon, 26 Oct 2020 01:09:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWr1I-0003zY-4Y; Mon, 26 Oct 2020 01:09:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWr1H-0003tB-T0; Mon, 26 Oct 2020 01:09:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWr1H-00086X-SX; Mon, 26 Oct 2020 01:09:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWr1P-0002v0-KV
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 01:10:03 +0000
X-Inumbo-ID: f60f20ec-464b-408c-bdd7-b347b64935f8
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f60f20ec-464b-408c-bdd7-b347b64935f8;
	Mon, 26 Oct 2020 01:09:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PCyEViCGqLPziBxTEKTdfUPbSoJgctDeghhGxcbWlEY=; b=fX8Egccg8QjvLDULcyEe1ZQZGD
	ZVRg2DsVIylrwsQvfLrwZlgFr64vNjVDR4ln+DTUwfnGWMFMgpR+GEyi/MsVCelWNoGr1BmC9BFpj
	W5imUmTDrEztxCNzolGn3zCqozRuTeQrTfAYdcwazVIsCNkf5DVTZXLmUpwCqc1UWIA4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWr1I-0003zY-4Y; Mon, 26 Oct 2020 01:09:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWr1H-0003tB-T0; Mon, 26 Oct 2020 01:09:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWr1H-00086X-SX; Mon, 26 Oct 2020 01:09:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156226-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156226: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 01:09:55 +0000

flight 156226 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156226/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   29 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   28 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This removes the need to
    link those headers into tools/include, and library-specific include
    paths are no longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV-support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add logging support to init-xenstore-domain: use -v[...] to select
    the log level, as in xl; log output goes to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 02:59:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 02:59:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12021.31504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWsij-00042u-27; Mon, 26 Oct 2020 02:58:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12021.31504; Mon, 26 Oct 2020 02:58:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWsii-00042n-Up; Mon, 26 Oct 2020 02:58:52 +0000
Received: by outflank-mailman (input) for mailman id 12021;
 Mon, 26 Oct 2020 02:58:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWsih-00042F-2C
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 02:58:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00ec304c-d964-42b4-a535-a8bb4bc43685;
 Mon, 26 Oct 2020 02:58:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWsiX-0006cd-VR; Mon, 26 Oct 2020 02:58:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWsiX-0000kK-L0; Mon, 26 Oct 2020 02:58:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWsiX-0001PS-KG; Mon, 26 Oct 2020 02:58:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWsih-00042F-2C
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 02:58:51 +0000
X-Inumbo-ID: 00ec304c-d964-42b4-a535-a8bb4bc43685
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 00ec304c-d964-42b4-a535-a8bb4bc43685;
	Mon, 26 Oct 2020 02:58:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PZL9BP3QivH66/2YiutN9fm311fWv2AbFpuwusI79YE=; b=HDbpfO7Qd+y47ewOnu52HCc6Dz
	pwBIYp7uUoRvBzYVKVNobyYvehFHu0BW6WMs7NhZht9g2Ai/SC3143uzodj5/Iz5Q+f19gdLn55CG
	BccFOWFVMsM4NSBXLi2ckUphnyvpAX8QzngEjvpfTIweC4lAEgHBDefoHXsgkuZ1rU4k=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWsiX-0006cd-VR; Mon, 26 Oct 2020 02:58:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWsiX-0000kK-L0; Mon, 26 Oct 2020 02:58:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWsiX-0001PS-KG; Mon, 26 Oct 2020 02:58:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156229-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156229: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 02:58:41 +0000

flight 156229 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156229/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   30 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   29 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
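
The vpath approach described in this commit can be sketched as a Makefile
fragment. This is hypothetical: the directory names, variables, and object
lists below are illustrative only, not the actual Xen build system variables.

```make
# Hypothetical sketch: pick up xenstored sources via vpath instead of
# symlinking them into the library's own directory.
XENSTORED_DIR := ../../xenstored

# Tell make where to search for .c files not present locally.
vpath %.c $(XENSTORED_DIR)

# An include path replaces symlinked headers.
CFLAGS += -I$(XENSTORED_DIR)/include

# xs_lib.c is resolved in $(XENSTORED_DIR) through the vpath directive,
# so no symbolic link to the source file is required.
OBJS += xs_lib.o
```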

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links to access the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and library-specific include paths are
    no longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add logging support to init-xenstore-domain: use -v[...] to select
    the log level, as in xl; log output goes to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 03:04:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 03:04:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12024.31516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWsnw-0005OR-LZ; Mon, 26 Oct 2020 03:04:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12024.31516; Mon, 26 Oct 2020 03:04:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWsnw-0005OK-IV; Mon, 26 Oct 2020 03:04:16 +0000
Received: by outflank-mailman (input) for mailman id 12024;
 Mon, 26 Oct 2020 03:04:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWsnu-0005OF-MF
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 03:04:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7414b8f9-c1a8-4300-ad56-1184d650c2e7;
 Mon, 26 Oct 2020 03:04:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWsnp-0007EP-Q9; Mon, 26 Oct 2020 03:04:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWsnp-0000vK-Ck; Mon, 26 Oct 2020 03:04:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWsnp-0004g8-C8; Mon, 26 Oct 2020 03:04:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWsnu-0005OF-MF
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 03:04:14 +0000
X-Inumbo-ID: 7414b8f9-c1a8-4300-ad56-1184d650c2e7
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7414b8f9-c1a8-4300-ad56-1184d650c2e7;
	Mon, 26 Oct 2020 03:04:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fjt/NhBiWpwexxUfXQ/BtLxm5TNvqdZJyKGKOiSFdaY=; b=qafMmKds/rRrFidvLekFhNxIK0
	Mfc9iWIYMjynBzwnOftZ0wSQjSv0Tuw6LFrSvn9EH7/yoWlfTviklKowTzLsHh00eTo/hwQ2KqggC
	NYLgiPJI7HBTXJ9trKYClSZka4vlFBo2KJG7wQiSVThwvT+66nXoC8fhxD3FFO/Sf7EY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWsnp-0007EP-Q9; Mon, 26 Oct 2020 03:04:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWsnp-0000vK-Ck; Mon, 26 Oct 2020 03:04:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWsnp-0004g8-C8; Mon, 26 Oct 2020 03:04:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156227-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156227: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 03:04:09 +0000

flight 156227 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156227/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  149 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   34 attempts

------------------------------------------------------------
People who touched revisions under test:
    Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 04:29:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 04:29:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12031.31531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWu8T-0003ts-SN; Mon, 26 Oct 2020 04:29:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12031.31531; Mon, 26 Oct 2020 04:29:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWu8T-0003tl-Oo; Mon, 26 Oct 2020 04:29:33 +0000
Received: by outflank-mailman (input) for mailman id 12031;
 Mon, 26 Oct 2020 04:29:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWu8S-0003tg-Tv
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 04:29:33 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bd54d7f-fa88-4b95-88eb-20e3ef097feb;
 Mon, 26 Oct 2020 04:29:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWu8O-0000gy-Rm; Mon, 26 Oct 2020 04:29:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWu8O-0004h1-FX; Mon, 26 Oct 2020 04:29:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWu8O-00025x-F1; Mon, 26 Oct 2020 04:29:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWu8S-0003tg-Tv
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 04:29:33 +0000
X-Inumbo-ID: 3bd54d7f-fa88-4b95-88eb-20e3ef097feb
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3bd54d7f-fa88-4b95-88eb-20e3ef097feb;
	Mon, 26 Oct 2020 04:29:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=U4GEIyvCBRZ/rIOuhHE42yvqnZ1WMaexAGsLNw9VUp0=; b=jYiiegEIOZUUVUFYI/Q27A6gsO
	L0EIWtR+JN9y+Uxyd4/wZ6VljKq/cfPfo3ZcZ+lmmRdPJ/b2EEFF3L44ULvqJ6mnUwDVNKlWm6wcD
	hmTnx0Q6VCQ1RpRGuXJI89+AHPW7L3ScXS1ihffmr/SSSg6kKf776HBtWKCxNZcSd7Iw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWu8O-0000gy-Rm; Mon, 26 Oct 2020 04:29:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWu8O-0004h1-FX; Mon, 26 Oct 2020 04:29:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWu8O-00025x-F1; Mon, 26 Oct 2020 04:29:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156231-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156231: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 04:29:28 +0000

flight 156231 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156231/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   66 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  150 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   35 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 04:44:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 04:44:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12038.31549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWuMQ-0005cn-B1; Mon, 26 Oct 2020 04:43:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12038.31549; Mon, 26 Oct 2020 04:43:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWuMQ-0005cg-7x; Mon, 26 Oct 2020 04:43:58 +0000
Received: by outflank-mailman (input) for mailman id 12038;
 Mon, 26 Oct 2020 04:43:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWuMO-0005c8-Lf
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 04:43:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5dbc3397-a000-425c-bbd4-6aac7ae3d06e;
 Mon, 26 Oct 2020 04:43:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWuMH-0000yo-2C; Mon, 26 Oct 2020 04:43:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWuMG-0005j9-Pm; Mon, 26 Oct 2020 04:43:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWuMG-0008VI-PH; Mon, 26 Oct 2020 04:43:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWuMO-0005c8-Lf
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 04:43:56 +0000
X-Inumbo-ID: 5dbc3397-a000-425c-bbd4-6aac7ae3d06e
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5dbc3397-a000-425c-bbd4-6aac7ae3d06e;
	Mon, 26 Oct 2020 04:43:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uz9S05+6mB6cU45bVYSo6aR+Npl2cb470tfviIWMbXE=; b=BhRlrgxDY2BF+eEEVYrTc/80Ks
	fi7tb4CWwteWhNTG//ooxXYIgNQem07k84Fj2fcbZF//dUrNkSAAqdMN2kzuY+cM5mqICeDzbG3wW
	PBkAyw0nRG67ltU11U9dScociIhTcpU5lefYHCka6wjAMWTRjPSQloqlEPYLHDsgDgTw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWuMH-0000yo-2C; Mon, 26 Oct 2020 04:43:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWuMG-0005j9-Pm; Mon, 26 Oct 2020 04:43:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWuMG-0008VI-PH; Mon, 26 Oct 2020 04:43:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156230-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156230: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 04:43:48 +0000

flight 156230 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156230/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   31 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   30 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
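
    The vpath approach described in this commit can be sketched as a
    minimal Makefile fragment; the ../xenstored path and utils.c file
    name here are assumptions for illustration, not the actual tree
    layout:

    ```make
    # Locate xenstored sources via vpath instead of symlinking them here.
    XENSTORED_DIR := ../xenstored        # assumed location, for illustration

    CFLAGS += -I$(XENSTORED_DIR)         # headers resolved via include path
    vpath %.c $(XENSTORED_DIR)           # make finds utils.c in that directory

    utils.o: utils.c                     # no local symlink to utils.c needed
    	$(CC) $(CFLAGS) -c -o $@ $<
    ```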

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both pv and pvh type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
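
    The probe-then-create ordering this commit describes can be sketched
    as below. probe_pv() and the file name are hypothetical stand-ins for
    the real image-parsing logic, not libxenctrl APIs; in this sketch the
    probe simply reports failure, as it would for a PVH-only image.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { DOMAIN_PV, DOMAIN_PVH } domain_type;

    /* Hypothetical probe: pretend the image does not parse as PV. */
    static bool probe_pv(const char *image)
    {
        (void)image;
        return false;
    }

    /* Parse the stubdom image for PV support first; fall back to PVH. */
    static domain_type select_type(const char *image)
    {
        return probe_pv(image) ? DOMAIN_PV : DOMAIN_PVH;
    }

    int main(void)
    {
        domain_type t = select_type("xenstore-stubdom.gz");
        printf("%s\n", t == DOMAIN_PV ? "pv" : "pvh");  /* prints "pvh" */
        return 0;
    }
    ```

    The domain is then created with the type selected by the probe,
    rather than being created before the image is understood.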

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add a possibility to do logging in init-xenstore-domain: use -v[...]
    for selecting the log-level as in xl, log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing, remove him from MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 06:06:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 06:06:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12044.31563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWvdy-0004V9-Eg; Mon, 26 Oct 2020 06:06:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12044.31563; Mon, 26 Oct 2020 06:06:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWvdy-0004V2-Bh; Mon, 26 Oct 2020 06:06:10 +0000
Received: by outflank-mailman (input) for mailman id 12044;
 Mon, 26 Oct 2020 06:06:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWvdx-0004UU-Ly
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 06:06:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b88ba2c-7a47-43a6-b47a-50702f231869;
 Mon, 26 Oct 2020 06:06:03 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWvdq-0002zK-LR; Mon, 26 Oct 2020 06:06:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWvdq-0001SV-DG; Mon, 26 Oct 2020 06:06:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWvdq-00046B-Ch; Mon, 26 Oct 2020 06:06:02 +0000
X-Inumbo-ID: 3b88ba2c-7a47-43a6-b47a-50702f231869
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iCU5AZp+VPXONVIfJKMEYJMYH71qqqs2mg4BjuPeOhU=; b=V5IHYrnZnUonTKyfVTC+XFs0IA
	WQZGt6UAbBBvTyjTODj5mLjU1dRXlKR9CgBijUzJa69Zf3VwRcUdtD0eMyf+uRsuMwHvz365JxddW
	a7PvRPccOxQrjir2aEXELpcYo//AvpqLYOpCHsStxmLyPu2SxgP3W82/vg0MB4NpajHU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156235-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156235: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 06:06:02 +0000

flight 156235 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156235/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   32 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   31 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has lead to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers use an include path instead.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having an own include directory move the
    official headers to tools/include instead. This will drop the need to
    link those headers to tools/include and there is no need any longer
    to have library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV-support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later do it the other way round. This enables to probe for the
    domain type supported by the xenstore-stubdom and to support both, pv
    and pvh type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add a possibility to do logging in init-xenstore-domain: use -v[...]
    for selecting the log-level as in xl, log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing, remove him from MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 07:50:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 07:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12049.31576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWxGZ-0005Es-7l; Mon, 26 Oct 2020 07:50:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12049.31576; Mon, 26 Oct 2020 07:50:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWxGZ-0005El-2o; Mon, 26 Oct 2020 07:50:07 +0000
Received: by outflank-mailman (input) for mailman id 12049;
 Mon, 26 Oct 2020 07:50:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=83VV=EB=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kWxGY-0005B6-4x
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 07:50:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07514789-9323-478a-82d4-0e786aace9a9;
 Mon, 26 Oct 2020 07:50:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9151FAEE0;
 Mon, 26 Oct 2020 07:50:02 +0000 (UTC)
X-Inumbo-ID: 07514789-9323-478a-82d4-0e786aace9a9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: Sam Ravnborg <sam@ravnborg.org>
Cc: luben.tuikov@amd.com, airlied@linux.ie, nouveau@lists.freedesktop.org,
 dri-devel@lists.freedesktop.org, chris@chris-wilson.co.uk,
 melissa.srw@gmail.com, ray.huang@amd.com, kraxel@redhat.com,
 emil.velikov@collabora.com, linux-samsung-soc@vger.kernel.org,
 jy0922.shim@samsung.com, lima@lists.freedesktop.org,
 oleksandr_andrushchenko@epam.com, krzk@kernel.org, steven.price@arm.com,
 linux-rockchip@lists.infradead.org, kgene@kernel.org,
 alyssa.rosenzweig@collabora.com, linux+etnaviv@armlinux.org.uk,
 spice-devel@lists.freedesktop.org, bskeggs@redhat.com,
 etnaviv@lists.freedesktop.org, hdegoede@redhat.com,
 xen-devel@lists.xenproject.org, virtualization@lists.linux-foundation.org,
 sean@poorly.run, apaneers@amd.com, linux-arm-kernel@lists.infradead.org,
 linaro-mm-sig@lists.linaro.org, amd-gfx@lists.freedesktop.org,
 tomeu.vizoso@collabora.com, sw0312.kim@samsung.com, hjc@rock-chips.com,
 kyungmin.park@samsung.com, miaoqinglang@huawei.com, yuq825@gmail.com,
 alexander.deucher@amd.com, linux-media@vger.kernel.org,
 christian.koenig@amd.com
References: <20201020122046.31167-1-tzimmermann@suse.de>
 <20201020122046.31167-11-tzimmermann@suse.de>
 <20201024203838.GB93644@ravnborg.org>
From: Thomas Zimmermann <tzimmermann@suse.de>
Subject: Re: [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O
 memory
Message-ID: <f97902e4-cf8e-17c6-5a6e-b11da41101c5@suse.de>
Date: Mon, 26 Oct 2020 08:50:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.3
MIME-Version: 1.0
In-Reply-To: <20201024203838.GB93644@ravnborg.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi

Am 24.10.20 um 22:38 schrieb Sam Ravnborg:
> Hi Thomas.
> 
> On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote:
>> At least sparc64 requires I/O-specific access to framebuffers. This
>> patch updates the fbdev console accordingly.
>>
>> For drivers with direct access to the framebuffer memory, the callback
>> functions in struct fb_ops test for the type of memory and call the rsp
>> fb_sys_ or fb_cfb_ functions. Read and write operations are implemented
>> internally by DRM's fbdev helper.
>>
>> For drivers that employ a shadow buffer, fbdev's blit function retrieves
>> the framebuffer address as struct dma_buf_map, and uses dma_buf_map
>> interfaces to access the buffer.
>>
>> The bochs driver on sparc64 uses a workaround to flag the framebuffer as
>> I/O memory and avoid a HW exception. With the introduction of struct
>> dma_buf_map, this is not required any longer. The patch removes the rsp
>> code from both, bochs and fbdev.
>>
>> v5:
>> 	* implement fb_read/fb_write internally (Daniel, Sam)
>> v4:
>> 	* move dma_buf_map changes into separate patch (Daniel)
>> 	* TODO list: comment on fbdev updates (Daniel)
>>
>> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
>> Tested-by: Sam Ravnborg <sam@ravnborg.org>
> Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
> 
> But see a few comments below on naming for you to consider.
> 
> 	Sam
> 
>> ---
>>  Documentation/gpu/todo.rst        |  19 ++-
>>  drivers/gpu/drm/bochs/bochs_kms.c |   1 -
>>  drivers/gpu/drm/drm_fb_helper.c   | 227 ++++++++++++++++++++++++++++--
>>  include/drm/drm_mode_config.h     |  12 --
>>  4 files changed, 230 insertions(+), 29 deletions(-)
>>
>> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
>> index 7e6fc3c04add..638b7f704339 100644
>> --- a/Documentation/gpu/todo.rst
>> +++ b/Documentation/gpu/todo.rst
>> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
>>  ------------------------------------------------
>>  
>>  Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
>> -atomic modesetting and GEM vmap support. Current generic fbdev emulation
>> -expects the framebuffer in system memory (or system-like memory).
>> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
>> +expected the framebuffer in system memory or system-like memory. By employing
>> +struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
>> +as well.
>>  
>>  Contact: Maintainer of the driver you plan to convert
>>  
>>  Level: Intermediate
>>  
>> +Reimplement functions in drm_fbdev_fb_ops without fbdev
>> +-------------------------------------------------------
>> +
>> +A number of callback functions in drm_fbdev_fb_ops could benefit from
>> +being rewritten without dependencies on the fbdev module. Some of the
>> +helpers could further benefit from using struct dma_buf_map instead of
>> +raw pointers.
>> +
>> +Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
>> +
>> +Level: Advanced
>> +
>> +
>>  drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
>>  -----------------------------------------------------------------
>>  
>> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
>> index 13d0d04c4457..853081d186d5 100644
>> --- a/drivers/gpu/drm/bochs/bochs_kms.c
>> +++ b/drivers/gpu/drm/bochs/bochs_kms.c
>> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
>>  	bochs->dev->mode_config.preferred_depth = 24;
>>  	bochs->dev->mode_config.prefer_shadow = 0;
>>  	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
>> -	bochs->dev->mode_config.fbdev_use_iomem = true;
>>  	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
>>  
>>  	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
>> index 6212cd7cde1d..1d3180841778 100644
>> --- a/drivers/gpu/drm/drm_fb_helper.c
>> +++ b/drivers/gpu/drm/drm_fb_helper.c
>> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
>>  }
>>  
>>  static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
>> -					  struct drm_clip_rect *clip)
>> +					  struct drm_clip_rect *clip,
>> +					  struct dma_buf_map *dst)
>>  {
>>  	struct drm_framebuffer *fb = fb_helper->fb;
>>  	unsigned int cpp = fb->format->cpp[0];
>>  	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
>>  	void *src = fb_helper->fbdev->screen_buffer + offset;
>> -	void *dst = fb_helper->buffer->map.vaddr + offset;
>>  	size_t len = (clip->x2 - clip->x1) * cpp;
>>  	unsigned int y;
>>  
>> -	for (y = clip->y1; y < clip->y2; y++) {
>> -		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
>> -			memcpy(dst, src, len);
>> -		else
>> -			memcpy_toio((void __iomem *)dst, src, len);
>> +	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
>>  
>> +	for (y = clip->y1; y < clip->y2; y++) {
>> +		dma_buf_map_memcpy_to(dst, src, len);
>> +		dma_buf_map_incr(dst, fb->pitches[0]);
>>  		src += fb->pitches[0];
>> -		dst += fb->pitches[0];
>>  	}
>>  }
>>  
>> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
>>  			ret = drm_client_buffer_vmap(helper->buffer, &map);
>>  			if (ret)
>>  				return;
>> -			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
>> +			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
>>  		}
>> +
>>  		if (helper->fb->funcs->dirty)
>>  			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
>>  						 &clip_copy, 1);
>> @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
>>  		return -ENODEV;
>>  }
>>  
>> +static bool drm_fbdev_use_iomem(struct fb_info *info)
>> +{
>> +	struct drm_fb_helper *fb_helper = info->par;
>> +	struct drm_client_buffer *buffer = fb_helper->buffer;
>> +
>> +	return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem;
>> +}
>> +
>> +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, 
>> +				   loff_t pos)
> The naming here confused me - a name like:
> fb_read_iomem() would have helped me more.
> With the current naming I shall remember that the screen_base member is
> the iomem pointer.

Yeah, true. In terms of naming, I was undecided. I was thinking about
adopting a naming similar to what you describe, but OTOH we don't use
sysmem anywhere in the code. I thought about adopting fbdev's convention
of using _sys_ and _cfb_. But that would only make sense in the local context.

> 
>> +{
>> +	const char __iomem *src = info->screen_base + pos;
>> +	size_t alloc_size = min(count, PAGE_SIZE);
>> +	ssize_t ret = 0;
>> +	char *tmp;
>> +
>> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
>> +	if (!tmp)
>> +		return -ENOMEM;
>> +
> 
> I looked around and could not find other places where
> we copy from iomem to mem to usermem in chunks of PAGE_SIZE.

I took this pattern from fbdev's original implementation. I think it's
done to work nicely with kmalloc.
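
The chunked bounce-buffer pattern discussed above can be sketched in
plain userspace C. This is only an illustration of the loop's shape:
memcpy() stands in for the kernel's memcpy_fromio() and copy_to_user(),
CHUNK stands in for PAGE_SIZE, and bounce_copy() is a made-up name, not
a function from the patch.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

#define CHUNK 4096 /* stand-in for PAGE_SIZE */

/* Copy `count` bytes from `src` to `dst` through a bounce buffer of at
 * most CHUNK bytes, mirroring the loop in fb_read_screen_base(). The
 * bounded allocation keeps the temporary small enough that kmalloc()
 * can satisfy it without falling back to higher-order pages. */
static ssize_t bounce_copy(char *dst, const char *src, size_t count)
{
	size_t alloc_size = count < CHUNK ? count : CHUNK;
	ssize_t ret = 0;
	char *tmp = malloc(alloc_size);

	if (!tmp)
		return -1; /* -ENOMEM in the kernel */

	while (count) {
		size_t c = count < alloc_size ? count : alloc_size;

		memcpy(tmp, src, c); /* memcpy_fromio() in the kernel */
		memcpy(dst, tmp, c); /* copy_to_user() in the kernel */

		src += c;
		dst += c;
		ret += c;
		count -= c;
	}

	free(tmp);
	return ret;
}
```

The same loop with the two copies swapped gives the write path
(copy_from_user() followed by memcpy_toio()).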

Best regards
Thomas

> 
>> +	while (count) {
>> +		size_t c = min(count, alloc_size);
>> +
>> +		memcpy_fromio(tmp, src, c);
>> +		if (copy_to_user(buf, tmp, c)) {
>> +			ret = -EFAULT;
>> +			break;
>> +		}
>> +
>> +		src += c;
>> +		buf += c;
>> +		ret += c;
>> +		count -= c;
>> +	}
>> +
>> +	kfree(tmp);
>> +
>> +	return ret;
>> +}
>> +
>> +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count,
>> +				     loff_t pos)
> And fb_read_sysmem() here.
> 
>> +{
>> +	const char *src = info->screen_buffer + pos;
>> +
>> +	if (copy_to_user(buf, src, count))
>> +		return -EFAULT;
>> +
>> +	return count;
>> +}
>> +
>> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
>> +				 size_t count, loff_t *ppos)
>> +{
>> +	loff_t pos = *ppos;
>> +	size_t total_size;
>> +	ssize_t ret;
>> +
>> +	if (info->state != FBINFO_STATE_RUNNING)
>> +		return -EPERM;
>> +
>> +	if (info->screen_size)
>> +		total_size = info->screen_size;
>> +	else
>> +		total_size = info->fix.smem_len;
>> +
>> +	if (pos >= total_size)
>> +		return 0;
>> +	if (count >= total_size)
>> +		count = total_size;
>> +	if (total_size - count < pos)
>> +		count = total_size - pos;
>> +
>> +	if (drm_fbdev_use_iomem(info))
>> +		ret = fb_read_screen_base(info, buf, count, pos);
>> +	else
>> +		ret = fb_read_screen_buffer(info, buf, count, pos);
>> +
>> +	if (ret > 0)
>> +		*ppos += ret;
>> +
>> +	return ret;
>> +}
>> +
>> +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count,
>> +				    loff_t pos)
> 
> fb_write_iomem()
> 
>> +{
>> +	char __iomem *dst = info->screen_base + pos;
>> +	size_t alloc_size = min(count, PAGE_SIZE);
>> +	ssize_t ret = 0;
>> +	u8 *tmp;
>> +
>> +	tmp = kmalloc(alloc_size, GFP_KERNEL);
>> +	if (!tmp)
>> +		return -ENOMEM;
>> +
>> +	while (count) {
>> +		size_t c = min(count, alloc_size);
>> +
>> +		if (copy_from_user(tmp, buf, c)) {
>> +			ret = -EFAULT;
>> +			break;
>> +		}
>> +		memcpy_toio(dst, tmp, c);
>> +
>> +		dst += c;
>> +		buf += c;
>> +		ret += c;
>> +		count -= c;
>> +	}
>> +
>> +	kfree(tmp);
>> +
>> +	return ret;
>> +}
>> +
>> +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count,
>> +				      loff_t pos)
> fb_write_sysmem()
> 
>> +{
>> +	char *dst = info->screen_buffer + pos;
>> +
>> +	if (copy_from_user(dst, buf, count))
>> +		return -EFAULT;
>> +
>> +	return count;
>> +}
>> +
>> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
>> +				  size_t count, loff_t *ppos)
>> +{
>> +	loff_t pos = *ppos;
>> +	size_t total_size;
>> +	ssize_t ret;
>> +	int err = 0;
>> +
>> +	if (info->state != FBINFO_STATE_RUNNING)
>> +		return -EPERM;
>> +
>> +	if (info->screen_size)
>> +		total_size = info->screen_size;
>> +	else
>> +		total_size = info->fix.smem_len;
>> +
>> +	if (pos > total_size)
>> +		return -EFBIG;
>> +	if (count > total_size) {
>> +		err = -EFBIG;
>> +		count = total_size;
>> +	}
>> +	if (total_size - count < pos) {
>> +		if (!err)
>> +			err = -ENOSPC;
>> +		count = total_size - pos;
>> +	}
>> +
>> +	/*
>> +	 * Copy to framebuffer even if we already logged an error. Emulates
>> +	 * the behavior of the original fbdev implementation.
>> +	 */
>> +	if (drm_fbdev_use_iomem(info))
>> +		ret = fb_write_screen_base(info, buf, count, pos);
>> +	else
>> +		ret = fb_write_screen_buffer(info, buf, count, pos);
>> +
>> +	if (ret > 0)
>> +		*ppos += ret;
>> +
>> +	if (err)
>> +		return err;
>> +
>> +	return ret;
>> +}
>> +
>> +static void drm_fbdev_fb_fillrect(struct fb_info *info,
>> +				  const struct fb_fillrect *rect)
>> +{
>> +	if (drm_fbdev_use_iomem(info))
>> +		drm_fb_helper_cfb_fillrect(info, rect);
>> +	else
>> +		drm_fb_helper_sys_fillrect(info, rect);
>> +}
>> +
>> +static void drm_fbdev_fb_copyarea(struct fb_info *info,
>> +				  const struct fb_copyarea *area)
>> +{
>> +	if (drm_fbdev_use_iomem(info))
>> +		drm_fb_helper_cfb_copyarea(info, area);
>> +	else
>> +		drm_fb_helper_sys_copyarea(info, area);
>> +}
>> +
>> +static void drm_fbdev_fb_imageblit(struct fb_info *info,
>> +				   const struct fb_image *image)
>> +{
>> +	if (drm_fbdev_use_iomem(info))
>> +		drm_fb_helper_cfb_imageblit(info, image);
>> +	else
>> +		drm_fb_helper_sys_imageblit(info, image);
>> +}
>> +
>>  static const struct fb_ops drm_fbdev_fb_ops = {
>>  	.owner		= THIS_MODULE,
>>  	DRM_FB_HELPER_DEFAULT_OPS,
>> @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
>>  	.fb_release	= drm_fbdev_fb_release,
>>  	.fb_destroy	= drm_fbdev_fb_destroy,
>>  	.fb_mmap	= drm_fbdev_fb_mmap,
>> -	.fb_read	= drm_fb_helper_sys_read,
>> -	.fb_write	= drm_fb_helper_sys_write,
>> -	.fb_fillrect	= drm_fb_helper_sys_fillrect,
>> -	.fb_copyarea	= drm_fb_helper_sys_copyarea,
>> -	.fb_imageblit	= drm_fb_helper_sys_imageblit,
>> +	.fb_read	= drm_fbdev_fb_read,
>> +	.fb_write	= drm_fbdev_fb_write,
>> +	.fb_fillrect	= drm_fbdev_fb_fillrect,
>> +	.fb_copyarea	= drm_fbdev_fb_copyarea,
>> +	.fb_imageblit	= drm_fbdev_fb_imageblit,
>>  };
>>  
>>  static struct fb_deferred_io drm_fbdev_defio = {
>> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
>> index 5ffbb4ed5b35..ab424ddd7665 100644
>> --- a/include/drm/drm_mode_config.h
>> +++ b/include/drm/drm_mode_config.h
>> @@ -877,18 +877,6 @@ struct drm_mode_config {
>>  	 */
>>  	bool prefer_shadow_fbdev;
>>  
>> -	/**
>> -	 * @fbdev_use_iomem:
>> -	 *
>> -	 * Set to true if framebuffer reside in iomem.
>> -	 * When set to true memcpy_toio() is used when copying the framebuffer in
>> -	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
>> -	 *
>> -	 * FIXME: This should be replaced with a per-mapping is_iomem
>> -	 * flag (like ttm does), and then used everywhere in fbdev code.
>> -	 */
>> -	bool fbdev_use_iomem;
>> -
>>  	/**
>>  	 * @quirk_addfb_prefer_xbgr_30bpp:
>>  	 *
>> -- 
>> 2.28.0
> _______________________________________________
> dri-devel mailing list
> dri-devel@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel
> 

-- 
Thomas Zimmermann
Graphics Driver Developer
SUSE Software Solutions Germany GmbH
Maxfeldstr. 5, 90409 Nürnberg, Germany
(HRB 36809, AG Nürnberg)
Geschäftsführer: Felix Imendörffer


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 07:50:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 07:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12050.31587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWxGr-0005Ml-I1; Mon, 26 Oct 2020 07:50:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12050.31587; Mon, 26 Oct 2020 07:50:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWxGr-0005Me-F4; Mon, 26 Oct 2020 07:50:25 +0000
Received: by outflank-mailman (input) for mailman id 12050;
 Mon, 26 Oct 2020 07:50:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWxGp-0005M6-P0
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 07:50:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc09724b-e7c9-49d7-9b5b-2cdd71e90f8a;
 Mon, 26 Oct 2020 07:50:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWxGo-00056w-8p; Mon, 26 Oct 2020 07:50:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWxGo-0006ZZ-1E; Mon, 26 Oct 2020 07:50:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWxGo-0004il-0l; Mon, 26 Oct 2020 07:50:22 +0000
X-Inumbo-ID: cc09724b-e7c9-49d7-9b5b-2cdd71e90f8a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SC0Rn1oTGHohGhgn32vVWwXkawvxAQKYrZbm34pfDME=; b=Hr/cmNl/Pma0OAbI7mrXU/rxuY
	3I/Y6ZnckzblVtaU9Y89aEMODs3BKo+uFEc5QjeZ88L37lLL3QcNn+OkRoYbZ71SM6mFElN8Mxa+7
	0s/OnDUPL7OagfwvVqXfEunopUcaAnQscuPcKAxRi2CreLLpqf9cTUwdGauFbt5fG3tc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156236-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156236: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 07:50:22 +0000

flight 156236 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156236/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    2 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   33 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   32 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add the qemu-xen linux device model stubdomain to the Toolstack section
    as a Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
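
The vpath approach this commit describes can be illustrated with a small Makefile fragment. Directory and file names here are illustrative, not the actual xenstored layout:

```make
# Hypothetical fragment: build sources that live in another directory
# without symlinking them into the current one.
XENSTORED_DIR := ../xenstored

# tell make where to look for .c files it cannot find locally
vpath %.c $(XENSTORED_DIR)

# an include path replaces the linked headers
CFLAGS += -I$(XENSTORED_DIR)

OBJS := xenstored_core.o

%.o: %.c
	$(CC) $(CFLAGS) -c -o $@ $<
```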

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers to tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both pv and pvh type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:11:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:11:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12070.31603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyWc-0004N8-W0; Mon, 26 Oct 2020 09:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12070.31603; Mon, 26 Oct 2020 09:10:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyWc-0004N1-Sw; Mon, 26 Oct 2020 09:10:46 +0000
Received: by outflank-mailman (input) for mailman id 12070;
 Mon, 26 Oct 2020 09:10:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mzuJ=EB=citrix.com=christian.lindig@srs-us1.protection.inumbo.net>)
 id 1kWyWb-0004Mw-Az
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:10:45 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98f4234d-5ca6-400f-b652-5e3911cac795;
 Mon, 26 Oct 2020 09:10:43 +0000 (UTC)
X-Inumbo-ID: 98f4234d-5ca6-400f-b652-5e3911cac795
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603703443;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=HevSUfVx9XCXw/IdnyXHaf9pc4s6R+rAi24K/Zi/YGc=;
  b=iNslXG1gEV+kUlpreTv+lMiUM4yT/kuwEMaCgyd4bcwqAnoq/unnkwav
   veTZyUdosj83++Fd6vVfALN4u3F6fpTX5K1kEanWkDrq9Ty28WmyW2ozX
   Thbw4RKD/MUxFYJkRHOYgUAingFNASCPotcent6JSbcqRshlTr1W+FkCC
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: None
X-MesageID: 30097165
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,417,1596513600"; 
   d="scan'208";a="30097165"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q4TX4h+JTKpQhsbQiTn+ukW7g3bIu1hwRVXW9YGm/5VdZeTSZeNB0x4wlpp1PpBCasSzaNeNB2KEalXIjLfoNhw/JYpsrIJ9ID5HvUsG1aesUv29j7+uLGSRs9MkvwuJWuwLdTTQzHBBYwiitiP3RklhJU1vCMu6i15cxLmsWGlBCCtDO6/XCXKqwdQB37tVU8mYqPaBsEaPd33pLeudb6ZIvNGGxbgNluKlJBh3WFrPTLZajvmG1UJbwD8g+ue9U9JVEmOXbmGNG05iyVL3I3k2bZ1UXFjoWMyXUo6IwtP0rZSHA9C9x/pw5PfDSlfErzSfxoGnq+K7CznTqdG3VQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=c/atUr40ri/lRe6qjuIsDEbw5SZZA1YqUtnpWhxjBcY=;
 b=L2e0gDfPY/ki4ghwWZkfSApjB7Lj9o5hhEqQK7ullEANTk5UDPXOH4WKVLjX2R9SE2IKm6wQq9CeEFRtLAFuz8XAsGUS3cNxSH/XNcnTzX23XAwSJBk1GxkEfx1ffo83eHAxrgdDveRI/vqpbI8Fiyo7qSHM0eE0uTUFggbgZXG306EvMNQie0F8rp+ZWPbEZevLMVESSdy2CI0sDAeNHLZEYF6Xaw0M4SQ3pC04nBzFclpXyymQuWKOjupPXEN/fI/OxKYuNLPa+WSqseHIywT6pjVy9RLC+qutkIph8CZjm8PkvmVIrqGZUzjsXywIXf/dmz4a4ARku81zWQa6TA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=c/atUr40ri/lRe6qjuIsDEbw5SZZA1YqUtnpWhxjBcY=;
 b=j3dKAA0GTh3Jtx6GUCkdZ8LhbhqWour1QPFfUR/hZsTKsft3X8sNzoyf8w7LfzLjiMNSpSvME16Uaa0O0BAz5NedXEgCYAVIY2wHk/WkYBl501hHC3iBkmYYAt0LVr8+6FGpnsYHeGF4eBtkkcJZxD5F/PPPc0BgpPSBEBH6+Pc=
From: Christian Lindig <christian.lindig@citrix.com>
To: Paul Durrant <paul@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Paul Durrant <pdurrant@amazon.com>, Anthony Perard
	<anthony.perard@citrix.com>, David Scott <dave@recoil.org>, George Dunlap
	<George.Dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Nick Rosbrook
	<rosbrookn@ainfosec.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 00/25] xl / libxl: named PCI pass-through devices
Thread-Topic: [PATCH 00/25] xl / libxl: named PCI pass-through devices
Thread-Index: AQHWqVjZZk7XivGQQ0yHJO5ww7Ae8ampnK5M
Date: Mon, 26 Oct 2020 09:10:39 +0000
Message-ID: <DS7PR03MB5655470EA654923540585C9EF6190@DS7PR03MB5655.namprd03.prod.outlook.com>
References: <20201023162314.2235-1-paul@xen.org>
In-Reply-To: <20201023162314.2235-1-paul@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d5e1fb6d-48d9-47db-16c8-08d8798f01e1
x-ms-traffictypediagnostic: DM6PR03MB5051:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <DM6PR03MB5051BB6BCD1CA6361090D79FF6190@DM6PR03MB5051.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:1091;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5655.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d5e1fb6d-48d9-47db-16c8-08d8798f01e1
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 Oct 2020 09:10:39.0543
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: cjpqLgsau7cLyfgTNqQ/fJ9YQhKMs8YpPUbN1tNGkB2qU9i5I3R4gfCbsLnVKBcctpmkFNNRBp7q/CMJUPCXdj9tPOZ+CriUINYYhinqal0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5051
X-OriginatorOrg: citrix.com

> NOTE: The OCaml bindings are adjusted to contain the interface change. It
> should therefore not affect compatibility with OCaml-based utilities.

Acked-by: Christian Lindig <christian.lindig@citrix.com>

________________________________________
From: Paul Durrant <paul@xen.org>
Sent: 23 October 2020 17:22
To: xen-devel@lists.xenproject.org
Cc: Paul Durrant; Anthony Perard; Christian Lindig; David Scott; George Dunlap; Ian Jackson; Nick Rosbrook; Wei Liu
Subject: [PATCH 00/25] xl / libxl: named PCI pass-through devices

From: Paul Durrant <pdurrant@amazon.com>

This series adds support for naming devices added to the assignable list
and then using a name (instead of a BDF) for convenience when attaching
a device to a domain.

The first 15 patches are cleanup. The remaining 10 modify documentation
and add the new functionality.

Paul Durrant (25):
  xl / libxl: s/pcidev/pci and remove DEFINE_DEVICE_TYPE_STRUCT_X
  libxl: use LIBXL_DEFINE_DEVICE_LIST for pci devices
  libxl: use LIBXL_DEFINE_DEVICE_LIST for nic devices
  libxl: s/domainid/domid/g in libxl_pci.c
  libxl: s/detatched/detached in libxl_pci.c
  libxl: remove extraneous arguments to do_pci_remove() in libxl_pci.c
  libxl: stop using aodev->device_config in libxl__device_pci_add()...
  libxl: generalise 'driver_path' xenstore access functions in
    libxl_pci.c
  libxl: remove unnecessary check from libxl__device_pci_add()
  libxl: remove get_all_assigned_devices() from libxl_pci.c
  libxl: make sure callers of libxl_device_pci_list() free the list
    after use
  libxl: add libxl_device_pci_assignable_list_free()...
  libxl: use COMPARE_PCI() macro is_pci_in_array()...
  libxl: add/recover 'rdm_policy' to/from PCI backend in xenstore
  libxl: Make sure devices added by pci-attach are reflected in the
    config
  docs/man: extract documentation of PCI_SPEC_STRING from the xl.cfg
    manpage...
  docs/man: improve documentation of PCI_SPEC_STRING...
  docs/man: fix xl(1) documentation for 'pci' operations
  libxl: introduce 'libxl_pci_bdf' in the idl...
  libxlu: introduce xlu_pci_parse_spec_string()
  libxl: modify
    libxl_device_pci_assignable_add/remove/list/list_free()...
  docs/man: modify xl(1) in preparation for naming of assignable devices
  xl / libxl: support naming of assignable devices
  docs/man: modify xl-pci-configuration(5) to add 'name' field to
    PCI_SPEC_STRING
  xl / libxl: support 'xl pci-attach/detach' by name

 docs/man/xl-pci-configuration.5.pod  |  218 +++++++
 docs/man/xl.1.pod.in                 |   39 +-
 docs/man/xl.cfg.5.pod.in             |   68 +--
 tools/golang/xenlight/helpers.gen.go |   77 ++-
 tools/golang/xenlight/types.gen.go   |    8 +-
 tools/include/libxl.h                |   67 ++-
 tools/include/libxlutil.h            |    8 +-
 tools/libs/light/libxl_create.c      |    6 +-
 tools/libs/light/libxl_dm.c          |   18 +-
 tools/libs/light/libxl_internal.h    |   53 +-
 tools/libs/light/libxl_nic.c         |   19 +-
 tools/libs/light/libxl_pci.c         | 1072 ++++++++++++++++++--------------
 tools/libs/light/libxl_types.idl     |   19 +-
 tools/libs/util/libxlu_pci.c         |  359 ++++++------
 tools/ocaml/libs/xl/xenlight_stubs.c |   19 +-
 tools/xl/xl_cmdtable.c               |   16 +-
 tools/xl/xl_parse.c                  |   30 +-
 tools/xl/xl_pci.c                    |  164 +++---
 tools/xl/xl_sxp.c                    |   12 +-
 19 files changed, 1337 insertions(+), 935 deletions(-)
 create mode 100644 docs/man/xl-pci-configuration.5.pod
---
Cc: Anthony PERARD <anthony.perard@citrix.com>
Cc: Christian Lindig <christian.lindig@citrix.com>
Cc: David Scott <dave@recoil.org>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>
Cc: Wei Liu <wl@xen.org>
--
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:11:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:11:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12073.31614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyXc-0004T2-BB; Mon, 26 Oct 2020 09:11:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12073.31614; Mon, 26 Oct 2020 09:11:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyXc-0004Sv-7e; Mon, 26 Oct 2020 09:11:48 +0000
Received: by outflank-mailman (input) for mailman id 12073;
 Mon, 26 Oct 2020 09:11:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kWyXa-0004Sn-Sy
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:11:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0d9ed7e-e2c2-4e14-aa95-a78e2244b1a0;
 Mon, 26 Oct 2020 09:11:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 13BB0B234;
 Mon, 26 Oct 2020 09:11:44 +0000 (UTC)
X-Inumbo-ID: d0d9ed7e-e2c2-4e14-aa95-a78e2244b1a0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703504;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=x5lSeHpuNg45MJDTE8esKB5bWoOT9jIc+k+R1E5RMUk=;
	b=gmdnXMuY9RJPDrTsN9n7/sKHuqUZiHbm/l6FeuekDdCQjyEgG0OibTGgRS7vLPGvGUA7rK
	iZlUgcqgqD2ag3LHVQ+Y2patGh926k2DXVJ+5bZReCU4fHFsDfhGuDA+LNRBzpdEexWBOG
	m7ObjOgBb4AwZl97W7XBAahFP/sx9SA=
Subject: Re: [xen-unstable test] 156196: tolerable FAIL
To: Ian Jackson <iwj@xenproject.org>
References: <osstest-156196-mainreport@xen.org>
Cc: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <481f0092-68c0-35f1-f038-80c4620cc21b@suse.com>
Date: Mon, 26 Oct 2020 10:11:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <osstest-156196-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.10.2020 10:27, osstest service owner wrote:
> flight 156196 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/156196/
> 
> Failures :-/ but no regressions.

This and the prior two flights have shown no issues at all with
the test-amd64-amd64-xl-qemu*-debianhvm-i386-xsm tests. I'm
beginning to wonder whether the failures previously observed
here, as well as on 4.14 and 4.13, were in fact "glitches"
caused by something outside of the software under test.

Jan

> Tests which did not succeed, but are not blocking:
>  test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156167
>  test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156167
>  test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156167
>  test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156167
>  test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156167
>  test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156167
>  test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156167
>  test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156167
>  test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156167
>  test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156167
>  test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156167
>  test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
>  test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
>  test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
>  test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
>  test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
>  test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
>  test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
>  test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
>  test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
>  test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
>  test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
>  test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
>  test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
>  test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
>  test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
>  test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
>  test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
>  test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
>  test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
>  test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
>  test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
>  test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
>  test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
>  test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
>  test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
>  test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
>  test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
>  test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
>  test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
>  test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
>  test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
>  test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
>  test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
>  test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
>  test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
>  test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
>  test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
>  test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
>  test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
>  test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
> 
> version targeted for testing:
>  xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c
> baseline version:
>  xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c
> 
> Last test of basis   156196  2020-10-25 01:51:25 Z    0 days
> Testing same since                          (not found)         0 attempts
> 
> jobs:
>  build-amd64-xsm                                              pass    
>  build-arm64-xsm                                              pass    
>  build-i386-xsm                                               pass    
>  build-amd64-xtf                                              pass    
>  build-amd64                                                  pass    
>  build-arm64                                                  pass    
>  build-armhf                                                  pass    
>  build-i386                                                   pass    
>  build-amd64-libvirt                                          pass    
>  build-arm64-libvirt                                          pass    
>  build-armhf-libvirt                                          pass    
>  build-i386-libvirt                                           pass    
>  build-amd64-prev                                             pass    
>  build-i386-prev                                              pass    
>  build-amd64-pvops                                            pass    
>  build-arm64-pvops                                            pass    
>  build-armhf-pvops                                            pass    
>  build-i386-pvops                                             pass    
>  test-xtf-amd64-amd64-1                                       pass    
>  test-xtf-amd64-amd64-2                                       pass    
>  test-xtf-amd64-amd64-3                                       pass    
>  test-xtf-amd64-amd64-4                                       pass    
>  test-xtf-amd64-amd64-5                                       pass    
>  test-amd64-amd64-xl                                          pass    
>  test-amd64-coresched-amd64-xl                                pass    
>  test-arm64-arm64-xl                                          pass    
>  test-armhf-armhf-xl                                          pass    
>  test-amd64-i386-xl                                           pass    
>  test-amd64-coresched-i386-xl                                 pass    
>  test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
>  test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
>  test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
>  test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
>  test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
>  test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
>  test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
>  test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
>  test-amd64-amd64-libvirt-xsm                                 pass    
>  test-arm64-arm64-libvirt-xsm                                 pass    
>  test-amd64-i386-libvirt-xsm                                  pass    
>  test-amd64-amd64-xl-xsm                                      pass    
>  test-arm64-arm64-xl-xsm                                      pass    
>  test-amd64-i386-xl-xsm                                       pass    
>  test-amd64-amd64-qemuu-nested-amd                            fail    
>  test-amd64-amd64-xl-pvhv2-amd                                pass    
>  test-amd64-i386-qemut-rhel6hvm-amd                           pass    
>  test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
>  test-amd64-amd64-dom0pvh-xl-amd                              pass    
>  test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
>  test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
>  test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
>  test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
>  test-amd64-i386-freebsd10-amd64                              pass    
>  test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
>  test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
>  test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
>  test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
>  test-amd64-amd64-xl-qemut-win7-amd64                         fail    
>  test-amd64-i386-xl-qemut-win7-amd64                          fail    
>  test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
>  test-amd64-i386-xl-qemuu-win7-amd64                          fail    
>  test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
>  test-amd64-i386-xl-qemut-ws16-amd64                          fail    
>  test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
>  test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
>  test-armhf-armhf-xl-arndale                                  pass    
>  test-amd64-amd64-xl-credit1                                  pass    
>  test-arm64-arm64-xl-credit1                                  pass    
>  test-armhf-armhf-xl-credit1                                  pass    
>  test-amd64-amd64-xl-credit2                                  pass    
>  test-arm64-arm64-xl-credit2                                  pass    
>  test-armhf-armhf-xl-credit2                                  pass    
>  test-armhf-armhf-xl-cubietruck                               pass    
>  test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
>  test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
>  test-amd64-amd64-examine                                     pass    
>  test-arm64-arm64-examine                                     pass    
>  test-armhf-armhf-examine                                     pass    
>  test-amd64-i386-examine                                      pass    
>  test-amd64-i386-freebsd10-i386                               pass    
>  test-amd64-amd64-qemuu-nested-intel                          pass    
>  test-amd64-amd64-xl-pvhv2-intel                              pass    
>  test-amd64-i386-qemut-rhel6hvm-intel                         pass    
>  test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
>  test-amd64-amd64-dom0pvh-xl-intel                            pass    
>  test-amd64-amd64-libvirt                                     pass    
>  test-armhf-armhf-libvirt                                     pass    
>  test-amd64-i386-libvirt                                      pass    
>  test-amd64-amd64-livepatch                                   pass    
>  test-amd64-i386-livepatch                                    pass    
>  test-amd64-amd64-migrupgrade                                 pass    
>  test-amd64-i386-migrupgrade                                  pass    
>  test-amd64-amd64-xl-multivcpu                                pass    
>  test-armhf-armhf-xl-multivcpu                                pass    
>  test-amd64-amd64-pair                                        pass    
>  test-amd64-i386-pair                                         pass    
>  test-amd64-amd64-libvirt-pair                                pass    
>  test-amd64-i386-libvirt-pair                                 pass    
>  test-amd64-amd64-amd64-pvgrub                                pass    
>  test-amd64-amd64-i386-pvgrub                                 pass    
>  test-amd64-amd64-xl-pvshim                                   pass    
>  test-amd64-i386-xl-pvshim                                    fail    
>  test-amd64-amd64-pygrub                                      pass    
>  test-amd64-amd64-xl-qcow2                                    pass    
>  test-armhf-armhf-libvirt-raw                                 pass    
>  test-amd64-i386-xl-raw                                       pass    
>  test-amd64-amd64-xl-rtds                                     pass    
>  test-armhf-armhf-xl-rtds                                     pass    
>  test-arm64-arm64-xl-seattle                                  pass    
>  test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
>  test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
>  test-amd64-amd64-xl-shadow                                   pass    
>  test-amd64-i386-xl-shadow                                    pass    
>  test-arm64-arm64-xl-thunderx                                 pass    
>  test-amd64-amd64-libvirt-vhd                                 pass    
>  test-armhf-armhf-xl-vhd                                      pass    
> 
> 
> ------------------------------------------------------------
> sg-report-flight on osstest.test-lab.xenproject.org
> logs: /home/logs/logs
> images: /home/logs/images
> 
> Logs, config files, etc. are available at
>     http://logs.test-lab.xenproject.org/osstest/logs
> 
> Explanation of these reports, and of osstest in general, is at
>     http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
>     http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
> 
> Test harness code can be found at
>     http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
> 
> 
> Published tested tree is already up to date.
> 
> 



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12077.31626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZ7-0004fI-SQ; Mon, 26 Oct 2020 09:13:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12077.31626; Mon, 26 Oct 2020 09:13:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZ7-0004fB-PW; Mon, 26 Oct 2020 09:13:21 +0000
Received: by outflank-mailman (input) for mailman id 12077;
 Mon, 26 Oct 2020 09:13:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZ6-0004ev-F0
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7ee1994-9d37-4c0e-b1db-66ba1b788599;
 Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E2543AD09;
 Mon, 26 Oct 2020 09:13:18 +0000 (UTC)
X-Inumbo-ID: a7ee1994-9d37-4c0e-b1db-66ba1b788599
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703599;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=WPHkwaEWfDmd0odQcx9LNDu44X1lNTuYJeY875Uf7Qk=;
	b=Qgn2ng78n2EdKBNLg80HBa9KEs1RO+4k9e8NvY8ZVXNbtnGYkPP0B1sHTudpLgvMh4e6xo
	oyGeGxRJSBiL8XfeAbZZG7stq98Cw77nY8+X6VQ9Rf06HwKGWOf04p1ou4kiPHf/7Nf/7C
	zeOECLuVR7Obc314stQ8LuXm9R2WLJU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 00/12] xen: support per-cpupool scheduling granularity
Date: Mon, 26 Oct 2020 10:13:04 +0100
Message-Id: <20201026091316.25680-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Support scheduling granularity per cpupool. Setting the granularity is
done via hypfs, which needed to gain dynamic entries for that purpose.

Apart from the additional hypfs functionality, the main change for
cpupools is support for moving a domain to a cpupool with a different
granularity, as this requires modifying the scheduling unit/vcpu
relationship.

I have tried to keep the hypfs modifications rather generic, in order
to be able to use the same infrastructure in other cases, too (e.g.
for per-domain entries).

The complete series has been tested by creating cpupools with different
granularities and moving busy and idle domains between those.
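As a toy illustration of what per-cpupool scheduling granularity means
(all names below are invented for the example and are not Xen's actual
API), the number of scheduling units a pool's scheduler sees is derived
from that pool's own granularity rather than from a single global value:

```c
#include <assert.h>

/* Toy model only; Xen's real structures live in
 * xen/common/sched/private.h and use different names and types. */
struct toy_cpupool {
    unsigned int ncpus;      /* cpus assigned to this pool */
    unsigned int sched_gran; /* cpus per unit: 1 = cpu, 2+ = core/socket */
};

/* Number of scheduling units the pool's scheduler operates on. */
static unsigned int toy_sched_units(const struct toy_cpupool *c)
{
    return (c->ncpus + c->sched_gran - 1) / c->sched_gran;
}
```

With per-pool granularity, a pool of 8 cpus at granularity 1 exposes 8
schedulable units while a pool of 8 cpus at granularity 2 exposes only
4, which is why moving a domain between such pools has to rebuild the
scheduling unit/vcpu relationship.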

Juergen Gross (12):
  xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
  xen/cpupool: add missing bits for per-cpupool scheduling granularity
  xen/sched: support moving a domain between cpupools with different
    granularity
  xen/sched: sort included headers in cpupool.c
  docs: fix hypfs path documentation
  xen/hypfs: move per-node function pointers into a dedicated struct
  xen/hypfs: pass real failure reason up from hypfs_get_entry()
  xen/hypfs: support dynamic hypfs nodes
  xen/hypfs: add support for id-based dynamic directories
  xen/hypfs: add cpupool directories
  xen/hypfs: add scheduling granularity entry to cpupool entries
  xen/cpupool: make per-cpupool sched-gran hypfs node writable

 docs/misc/hypfs-paths.pandoc |  18 ++-
 xen/common/hypfs.c           | 233 +++++++++++++++++++++++++++--------
 xen/common/sched/core.c      | 122 +++++++++++++-----
 xen/common/sched/cpupool.c   | 213 +++++++++++++++++++++++++++++---
 xen/common/sched/private.h   |   1 +
 xen/include/xen/hypfs.h      | 106 +++++++++++-----
 xen/include/xen/param.h      |  15 +--
 7 files changed, 567 insertions(+), 141 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12078.31634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZ8-0004fk-8N; Mon, 26 Oct 2020 09:13:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12078.31634; Mon, 26 Oct 2020 09:13:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZ8-0004fa-1L; Mon, 26 Oct 2020 09:13:22 +0000
Received: by outflank-mailman (input) for mailman id 12078;
 Mon, 26 Oct 2020 09:13:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZ6-0004f1-Sm
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dafdec3e-1b45-494f-b5d9-81e097f3cb5a;
 Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1018CB234;
 Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
X-Inumbo-ID: dafdec3e-1b45-494f-b5d9-81e097f3cb5a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703599;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=S+BqtDReQpO+TXL58A4bLpNMJ+nOUR/20v/rJyuhuj4=;
	b=jIQEYVwFJQcO6tgFsP5y643ojuRqFb0JEVZyJ8pBXnPRDkcdWxnb7+NoRXv84suKMhSrBV
	3vDphIgKzxy01drPzNsfPJGIqunAyg2YYodaHLQep5/3+kytO/1QcnD2qqkxeu32O/V7Wd
	dOpMtsCMVoo/eFfH4gkcMaRo26xLKY0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 01/12] xen/cpupool: add cpu to sched_res_mask when removing it from cpupool
Date: Mon, 26 Oct 2020 10:13:05 +0100
Message-Id: <20201026091316.25680-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When a cpu is removed from a cpupool and added to the free cpus, it
should be added to sched_res_mask, too.

The related removal from sched_res_mask in case of core scheduling
is already done in schedule_cpu_add().

As long as all cpupools share the same scheduling granularity,
nothing goes wrong with the missing addition, but this will change
when per-cpupool granularity is fully supported.
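The bookkeeping involved can be sketched with a toy bitmask model
(sched_res_mask is really a cpumask_t in Xen; the helper names and the
merge rule below are invented for illustration): when a cpu joins a
pool with granularity > 1 its scheduling resource may be merged into
another cpu's, and on removal back to the free cpus, which always use
granularity 1, its bit has to be set again:

```c
#include <assert.h>
#include <stdint.h>

/* Toy model: one bit per cpu that owns its own scheduling resource. */
static uint64_t toy_res_mask;

/* Adding a cpu to a pool: in this toy rule, only the first cpu of
 * each unit keeps its own scheduling resource. */
static void toy_cpu_add(unsigned int cpu, unsigned int gran)
{
    if (cpu % gran != 0)
        toy_res_mask &= ~(UINT64_C(1) << cpu); /* resource merged away */
}

/* Removing a cpu back to the free cpus (granularity 1): the cpu must
 * get its own resource bit back, which is the fix described above. */
static void toy_cpu_rm(unsigned int cpu)
{
    toy_res_mask |= UINT64_C(1) << cpu;
}
```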

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index ed973e90ec..f8c81592af 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -3189,6 +3189,7 @@ int schedule_cpu_rm(unsigned int cpu)
             /* Adjust cpu masks of resources (old and new). */
             cpumask_clear_cpu(cpu_iter, sr->cpus);
             cpumask_set_cpu(cpu_iter, sr_new[idx]->cpus);
+            cpumask_set_cpu(cpu_iter, &sched_res_mask);
 
             /* Init timer. */
             init_timer(&sr_new[idx]->s_timer, s_timer_fn, NULL, cpu_iter);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12080.31651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZC-0004jy-F9; Mon, 26 Oct 2020 09:13:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12080.31651; Mon, 26 Oct 2020 09:13:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZC-0004jo-BE; Mon, 26 Oct 2020 09:13:26 +0000
Received: by outflank-mailman (input) for mailman id 12080;
 Mon, 26 Oct 2020 09:13:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZB-0004ev-DY
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3e88123-e98f-46cc-89ef-ca162c14a811;
 Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 32ED1B236;
 Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
X-Inumbo-ID: c3e88123-e98f-46cc-89ef-ca162c14a811
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703599;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ecPsgfMBalsxI2bSrpf4c/uqpvI8X4flzNfflD+HujM=;
	b=umOd6p2DKuwXUe/IOnnsMi4KI8Shwj6jZGazqTU+jtSkfGgbu2eK/oraBqzdQ+iamG2/EM
	z38ad7nUJrEgu4aXBWIOlZvGXkEWoHU3hoa/GBd3icnLPsCPx1+zu+jLPDjjSlk9Cq1HgB
	Uo86j4D3UmTLZ+gIHgqoSziSH2CFf7U=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH 02/12] xen/cpupool: add missing bits for per-cpupool scheduling granularity
Date: Mon, 26 Oct 2020 10:13:06 +0100
Message-Id: <20201026091316.25680-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Even with the scheduling granularity stored in struct cpupool, a few
bits are still missing for cpupools to be able to have different
granularities (apart from the missing interface for setting the
individual granularities): the number of cpus in a scheduling unit is
always taken from the global sched_granularity variable.

So store the value in struct cpupool and use that instead of
sched_granularity.
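The accessor pattern can be sketched as follows (toy names; the real
function is cpupool_get_granularity() in xen/common/sched/cpupool.c,
and the struct layout here is invented): a NULL pool denotes the free
cpus, which are always scheduled with granularity 1, while any real
pool carries its own value:

```c
#include <assert.h>
#include <stddef.h>

/* Toy sketch of the per-pool granularity accessor. */
struct toy_cpupool {
    unsigned int sched_gran; /* cpus per scheduling unit */
};

static unsigned int toy_get_granularity(const struct toy_cpupool *c)
{
    /* NULL means "free cpus", which always use granularity 1. */
    return c ? c->sched_gran : 1;
}
```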

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/cpupool.c | 3 ++-
 xen/common/sched/private.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 7ea641ca26..6429c8f7b5 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -151,7 +151,7 @@ static void __init cpupool_gran_init(void)
 
 unsigned int cpupool_get_granularity(const struct cpupool *c)
 {
-    return c ? sched_granularity : 1;
+    return c ? c->sched_gran : 1;
 }
 
 static void free_cpupool_struct(struct cpupool *c)
@@ -289,6 +289,7 @@ static struct cpupool *cpupool_create(
     }
     c->sched->cpupool = c;
     c->gran = opt_sched_granularity;
+    c->sched_gran = sched_granularity;
 
     *q = c;
 
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index df50976eb2..685992cab9 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -514,6 +514,7 @@ struct cpupool
     struct scheduler *sched;
     atomic_t         refcnt;
     enum sched_gran  gran;
+    unsigned int     sched_gran;     /* Number of cpus per sched-item. */
 };
 
 static inline cpumask_t *cpupool_domain_master_cpumask(const struct domain *d)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12081.31663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZD-0004m9-PF; Mon, 26 Oct 2020 09:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12081.31663; Mon, 26 Oct 2020 09:13:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZD-0004m0-Kf; Mon, 26 Oct 2020 09:13:27 +0000
Received: by outflank-mailman (input) for mailman id 12081;
 Mon, 26 Oct 2020 09:13:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZB-0004f1-RF
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f24d1136-c11a-4f8f-880b-54df60084fad;
 Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 568CEB23A;
 Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kWyZB-0004f1-RF
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:25 +0000
X-Inumbo-ID: f24d1136-c11a-4f8f-880b-54df60084fad
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f24d1136-c11a-4f8f-880b-54df60084fad;
	Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703599;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=J3RFdnzX1db04Vx257W+zNC99T8L5VMF+AvHyH9dORI=;
	b=gsGfazlpVNzFEF+sijQLLwzV18BWbOs2htBVfb1s5xf9e0p9HeCzJSCnxPrBoRJfAzhBkh
	Y2W1r0HcpHmuBq84uXoKqvZDROz7KRlVhyLEFKs1xujahxHwov4Hpc4gpT/3fJyKyiYgpd
	F2IIHCiinkwz+Nw3LQ594Wf+Xv2o96c=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 568CEB23A;
	Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 03/12] xen/sched: support moving a domain between cpupools with different granularity
Date: Mon, 26 Oct 2020 10:13:07 +0100
Message-Id: <20201026091316.25680-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When moving a domain between cpupools with different scheduling
granularities, the domain's sched_units need to be adjusted.

Do that by allocating new sched_units and throwing away the old ones
in sched_move_domain().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c | 121 ++++++++++++++++++++++++++++++----------
 1 file changed, 90 insertions(+), 31 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index f8c81592af..8f1db88593 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -613,17 +613,45 @@ static void sched_move_irqs(const struct sched_unit *unit)
         vcpu_move_irqs(v);
 }
 
+/*
+ * Move a domain from one cpupool to another.
+ *
+ * Moving is denied for a domain with any vcpu having temporary affinity
+ * settings. Hard and soft affinities will be reset.
+ *
+ * In order to support cpupools with different scheduling granularities, all
+ * scheduling units are replaced by new ones.
+ *
+ * The complete move is done in the following steps:
+ * - check prerequisites (no vcpu with temporary affinities)
+ * - allocate all new data structures (scheduler specific domain data, unit
+ *   memory, scheduler specific unit data)
+ * - pause domain
+ * - temporarily move all (old) units to the same scheduling resource (this
+ *   makes the final resource assignment easier in case the new cpupool has
+ *   a larger granularity than the old one, as the scheduling locks for all
+ *   vcpus must be held for that operation)
+ * - remove old units from scheduling
+ * - set new cpupool and scheduler domain data pointers in struct domain
+ * - switch all vcpus to new units, still assigned to the old scheduling
+ *   resource
+ * - migrate all new units to scheduling resources of the new cpupool
+ * - unpause the domain
+ * - free the old memory (scheduler specific domain data, unit memory,
+ *   scheduler specific unit data)
+ */
 int sched_move_domain(struct domain *d, struct cpupool *c)
 {
     struct vcpu *v;
-    struct sched_unit *unit;
+    struct sched_unit *unit, *old_unit;
+    struct sched_unit *new_units = NULL, *old_units;
+    struct sched_unit **unit_ptr = &new_units;
     unsigned int new_p, unit_idx;
-    void **unit_priv;
     void *domdata;
-    void *unitdata;
-    struct scheduler *old_ops;
+    struct scheduler *old_ops = dom_scheduler(d);
     void *old_domdata;
     unsigned int gran = cpupool_get_granularity(c);
+    unsigned int n_units = DIV_ROUND_UP(d->max_vcpus, gran);
     int ret = 0;
 
     for_each_vcpu ( d, v )
@@ -641,53 +669,78 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
         goto out;
     }
 
-    unit_priv = xzalloc_array(void *, DIV_ROUND_UP(d->max_vcpus, gran));
-    if ( unit_priv == NULL )
+    for ( unit_idx = 0; unit_idx < n_units; unit_idx++ )
     {
-        sched_free_domdata(c->sched, domdata);
-        ret = -ENOMEM;
-        goto out;
-    }
+        unit = sched_alloc_unit_mem();
+        if ( unit )
+        {
+            /* Initialize unit for sched_alloc_udata() to work. */
+            unit->domain = d;
+            unit->unit_id = unit_idx * gran;
+            unit->vcpu_list = d->vcpu[unit->unit_id];
+            unit->priv = sched_alloc_udata(c->sched, unit, domdata);
+            *unit_ptr = unit;
+        }
 
-    unit_idx = 0;
-    for_each_sched_unit ( d, unit )
-    {
-        unit_priv[unit_idx] = sched_alloc_udata(c->sched, unit, domdata);
-        if ( unit_priv[unit_idx] == NULL )
+        if ( !unit || !unit->priv )
         {
-            for ( unit_idx = 0; unit_priv[unit_idx]; unit_idx++ )
-                sched_free_udata(c->sched, unit_priv[unit_idx]);
-            xfree(unit_priv);
-            sched_free_domdata(c->sched, domdata);
+            old_units = new_units;
+            old_domdata = domdata;
             ret = -ENOMEM;
-            goto out;
+            goto out_free;
         }
-        unit_idx++;
+
+        unit_ptr = &unit->next_in_list;
     }
 
     domain_pause(d);
 
-    old_ops = dom_scheduler(d);
     old_domdata = d->sched_priv;
 
+    new_p = cpumask_first(d->cpupool->cpu_valid);
     for_each_sched_unit ( d, unit )
     {
+        spinlock_t *lock;
+
+        /*
+         * Temporarily move all units to same processor to make locking
+         * easier when moving the new units to the new processors.
+         */
+        lock = unit_schedule_lock_irq(unit);
+        sched_set_res(unit, get_sched_res(new_p));
+        spin_unlock_irq(lock);
+
         sched_remove_unit(old_ops, unit);
     }
 
+    old_units = d->sched_unit_list;
+
     d->cpupool = c;
     d->sched_priv = domdata;
 
+    unit = new_units;
+    for_each_vcpu ( d, v )
+    {
+        old_unit = v->sched_unit;
+        if ( unit->unit_id + gran == v->vcpu_id )
+            unit = unit->next_in_list;
+
+        unit->state_entry_time = old_unit->state_entry_time;
+        unit->runstate_cnt[v->runstate.state]++;
+        /* Temporarily use old resource assignment */
+        unit->res = get_sched_res(new_p);
+
+        v->sched_unit = unit;
+    }
+
+    d->sched_unit_list = new_units;
+
     new_p = cpumask_first(c->cpu_valid);
-    unit_idx = 0;
     for_each_sched_unit ( d, unit )
     {
         spinlock_t *lock;
         unsigned int unit_p = new_p;
 
-        unitdata = unit->priv;
-        unit->priv = unit_priv[unit_idx];
-
         for_each_sched_unit_vcpu ( unit, v )
         {
             migrate_timer(&v->periodic_timer, new_p);
@@ -713,8 +766,6 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
         sched_insert_unit(c->sched, unit);
 
-        sched_free_udata(old_ops, unitdata);
-
         unit_idx++;
     }
 
@@ -722,11 +773,19 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
 
     domain_unpause(d);
 
-    sched_free_domdata(old_ops, old_domdata);
+ out_free:
+    for ( unit = old_units; unit; )
+    {
+        if ( unit->priv )
+            sched_free_udata(c->sched, unit->priv);
+        old_unit = unit;
+        unit = unit->next_in_list;
+        xfree(old_unit);
+    }
 
-    xfree(unit_priv);
+    sched_free_domdata(old_ops, old_domdata);
 
-out:
+ out:
     rcu_read_unlock(&sched_res_rculock);
 
     return ret;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12082.31675 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZH-0004qm-6W; Mon, 26 Oct 2020 09:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12082.31675; Mon, 26 Oct 2020 09:13:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZH-0004qd-0L; Mon, 26 Oct 2020 09:13:31 +0000
Received: by outflank-mailman (input) for mailman id 12082;
 Mon, 26 Oct 2020 09:13:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZG-0004ev-Dq
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 465e31a0-6598-4ac1-9d0f-563313d478fe;
 Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 77B4CB23B;
 Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kWyZG-0004ev-Dq
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:30 +0000
X-Inumbo-ID: 465e31a0-6598-4ac1-9d0f-563313d478fe
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 465e31a0-6598-4ac1-9d0f-563313d478fe;
	Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703599;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XbeoD7GvpOh4XVGvOVBLgVXAc4wxwYUa82TSh2+/HXM=;
	b=LNO+6NWhImdPet4PPhCO14zZyZJ1jDYmYTHxHlsEVoSK+QGZDlMXEve6KJroGdheLj5ERn
	WrJ4SJgnuN7YlGZ/hXxTuT/10gRKyxo5QzxYhV1XFXre87ESC0n8Jz61LddgzbFlSuaVVW
	D/gkfdp8syZHWsoqLwvre3IG5Msnsbc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 77B4CB23B;
	Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH 04/12] xen/sched: sort included headers in cpupool.c
Date: Mon, 26 Oct 2020 10:13:08 +0100
Message-Id: <20201026091316.25680-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Common style is to include header files in alphabetical order. Sort the
#include statements in cpupool.c accordingly.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/cpupool.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 6429c8f7b5..84f326ea63 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -11,15 +11,15 @@
  * (C) 2009, Juergen Gross, Fujitsu Technology Solutions
  */
 
-#include <xen/lib.h>
-#include <xen/init.h>
+#include <xen/cpu.h>
 #include <xen/cpumask.h>
+#include <xen/init.h>
+#include <xen/keyhandler.h>
+#include <xen/lib.h>
 #include <xen/param.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
 #include <xen/warning.h>
-#include <xen/keyhandler.h>
-#include <xen/cpu.h>
 
 #include "private.h"
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12083.31687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZI-0004th-GP; Mon, 26 Oct 2020 09:13:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12083.31687; Mon, 26 Oct 2020 09:13:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZI-0004tU-Az; Mon, 26 Oct 2020 09:13:32 +0000
Received: by outflank-mailman (input) for mailman id 12083;
 Mon, 26 Oct 2020 09:13:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZG-0004f1-RQ
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b99fa3f3-aecd-408e-99f0-5f4902718482;
 Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EA73B241;
 Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kWyZG-0004f1-RQ
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:30 +0000
X-Inumbo-ID: b99fa3f3-aecd-408e-99f0-5f4902718482
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b99fa3f3-aecd-408e-99f0-5f4902718482;
	Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703600;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZSq8aaykDP1qqXLMQtSSJphelHz62lSdHw2BdhisVQc=;
	b=u5w8r5XbLREuAI2PEahXh4vzIz3GETulgMztS2x23i1OOAkvALwTqoDwwSbJDENI3vb83/
	hBXe3YC6srkKz0vdI9BE74zZtZaZGtmXYBAHCbjy8vY1r1x9dX46rpHHiZpOb6XLHFJDc1
	PvBrHsDxHjQ71tzmGAvznsbtbzZamgQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 2EA73B241;
	Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 07/12] xen/hypfs: pass real failure reason up from hypfs_get_entry()
Date: Mon, 26 Oct 2020 10:13:11 +0100
Message-Id: <20201026091316.25680-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Instead of treating all errors from hypfs_get_entry() as ENOENT, pass
up the real error value via ERR_PTR().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index e655e8cfc7..97260bd4a3 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -182,7 +182,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
     while ( again )
     {
         if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
-            return NULL;
+            return ERR_PTR(-ENOENT);
 
         if ( !*path )
             return &dir->e;
@@ -201,7 +201,7 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
                                                      struct hypfs_entry_dir, e);
 
             if ( cmp < 0 )
-                return NULL;
+                return ERR_PTR(-ENOENT);
             if ( !cmp && strlen(entry->name) == name_len )
             {
                 if ( !*end )
@@ -216,13 +216,13 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
         }
     }
 
-    return NULL;
+    return ERR_PTR(-ENOENT);
 }
 
 static struct hypfs_entry *hypfs_get_entry(const char *path)
 {
     if ( path[0] != '/' )
-        return NULL;
+        return ERR_PTR(-EINVAL);
 
     return hypfs_get_entry_rel(&hypfs_root, path + 1);
 }
@@ -446,9 +446,9 @@ long do_hypfs_op(unsigned int cmd,
         goto out;
 
     entry = hypfs_get_entry(path);
-    if ( !entry )
+    if ( IS_ERR(entry) )
     {
-        ret = -ENOENT;
+        ret = PTR_ERR(entry);
         goto out;
     }
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12084.31699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZM-00050v-V8; Mon, 26 Oct 2020 09:13:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12084.31699; Mon, 26 Oct 2020 09:13:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZM-00050h-O7; Mon, 26 Oct 2020 09:13:36 +0000
Received: by outflank-mailman (input) for mailman id 12084;
 Mon, 26 Oct 2020 09:13:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZL-0004ev-E3
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c6bdc73-fd31-4aae-958c-0c85ffd05956;
 Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B2FD5B23C;
 Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kWyZL-0004ev-E3
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:35 +0000
X-Inumbo-ID: 8c6bdc73-fd31-4aae-958c-0c85ffd05956
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8c6bdc73-fd31-4aae-958c-0c85ffd05956;
	Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703599;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IOhsYgKG+zQ0PfhzYSXXErTxqo2b3iMd5G4A+1tHm1s=;
	b=SpVmo6ufiSpibMZaNeNPngD2dihhkRE5Y/jy0wfvxNgfqb3B4S7F7HVZD2TjiVem0An7tY
	wE8BG9NNd7AOjSplwENeez+FbXlVhif42kh8c+S1C4RQfLRkCYCLwO0C6B4E/TmsBqbvnu
	dsRR37datpPzckgAyLMacVbatSZlO58=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B2FD5B23C;
	Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 05/12] docs: fix hypfs path documentation
Date: Mon, 26 Oct 2020 10:13:09 +0100
Message-Id: <20201026091316.25680-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The /params/* entry is missing a writable tag.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index dddb592bc5..6c7b2f7ee3 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -179,7 +179,7 @@ The minor version of Xen.
 
 A directory of runtime parameters.
 
-#### /params/*
+#### /params/* [w]
 
 The individual parameters. The description of the different parameters can be
 found in `docs/misc/xen-command-line.pandoc`.
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12085.31708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZN-00053i-To; Mon, 26 Oct 2020 09:13:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12085.31708; Mon, 26 Oct 2020 09:13:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZN-00053L-PE; Mon, 26 Oct 2020 09:13:37 +0000
Received: by outflank-mailman (input) for mailman id 12085;
 Mon, 26 Oct 2020 09:13:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZL-0004f1-Rl
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07e6da9e-aa3a-4f34-8f06-da9f0e4f8e2e;
 Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1AA1B245;
 Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kWyZL-0004f1-Rl
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:35 +0000
X-Inumbo-ID: 07e6da9e-aa3a-4f34-8f06-da9f0e4f8e2e
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 07e6da9e-aa3a-4f34-8f06-da9f0e4f8e2e;
	Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703601;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=M2krL48gwKtn5+NJtv6lNCEGODluTgIO1YbuF6efH7k=;
	b=L+h43uW4CetUkMAM1vDXaII+0gwnfcoYb/izkWwW0fDF4ogdzW7BHD4v/gdgvaBmOMzzRq
	zfpnAMZrb5Y2uk3vtPGOo8On3aJYFUMAKtnv26Q3xIL0C6IfdKlkiFVO8pixTBAvyPQa1p
	/irvmsgO06EfxSxhvotv8XmbPFgqF84=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id E1AA1B245;
	Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 10/12] xen/hypfs: add cpupool directories
Date: Mon, 26 Oct 2020 10:13:14 +0100
Message-Id: <20201026091316.25680-11-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add /cpupool/<cpupool-id> directories to hypfs. These directories are
completely dynamic, so the related hypfs access functions need to be
implemented.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc |  9 +++++
 xen/common/sched/cpupool.c   | 78 ++++++++++++++++++++++++++++++++++++
 2 files changed, 87 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 6c7b2f7ee3..aaca1cdf92 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -175,6 +175,15 @@ The major version of Xen.
 
 The minor version of Xen.
 
+#### /cpupool/
+
+A directory of all current cpupools.
+
+#### /cpupool/*/
+
+The individual cpupools. Each entry is a directory with the name being the
+cpupool-id (e.g. /cpupool/0/).
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 84f326ea63..8612ee5cf6 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -13,6 +13,8 @@
 
 #include <xen/cpu.h>
 #include <xen/cpumask.h>
+#include <xen/guest_access.h>
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
@@ -992,6 +994,78 @@ static struct notifier_block cpu_nfb = {
     .notifier_call = cpu_callback
 };
 
+#ifdef CONFIG_HYPFS
+static HYPFS_DIR_INIT(cpupool_pooldir, "id");
+
+static int cpupool_dir_read(const struct hypfs_entry *entry,
+                            XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    int ret = 0;
+    struct cpupool **q;
+
+    spin_lock(&cpupool_lock);
+
+    for_each_cpupool(q)
+    {
+        ret = hypfs_read_dyndir_id_entry(&cpupool_pooldir, (*q)->cpupool_id,
+                                         !(*q)->next, &uaddr);
+        if ( ret )
+            break;
+    }
+
+    spin_unlock(&cpupool_lock);
+
+    return ret;
+}
+
+static unsigned int cpupool_dir_getsize(const struct hypfs_entry *entry)
+{
+    struct cpupool **q;
+    unsigned int size = 0;
+
+    spin_lock(&cpupool_lock);
+
+    for_each_cpupool(q)
+        size += HYPFS_DIRENTRY_SIZE(snprintf(NULL, 0, "%d", (*q)->cpupool_id));
+
+    spin_unlock(&cpupool_lock);
+
+    return size;
+}
+
+static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
+                                                 const char *name,
+                                                 unsigned int name_len)
+{
+    unsigned long id;
+    const char *end;
+    struct cpupool *cpupool;
+
+    id = simple_strtoul(name, &end, 10);
+    if ( id > INT_MAX || end != name + name_len )
+        return ERR_PTR(-ENOENT);
+
+    spin_lock(&cpupool_lock);
+
+    cpupool = __cpupool_find_by_id(id, true);
+
+    spin_unlock(&cpupool_lock);
+
+    if ( !cpupool )
+        return ERR_PTR(-ENOENT);
+
+    return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
+}
+
+static struct hypfs_funcs cpupool_dir_funcs = {
+    .read = cpupool_dir_read,
+    .getsize = cpupool_dir_getsize,
+    .findentry = cpupool_dir_findentry,
+};
+
+static HYPFS_VARDIR_INIT(cpupool_dir, "cpupool", &cpupool_dir_funcs);
+#endif
+
 static int __init cpupool_init(void)
 {
     unsigned int cpu;
@@ -999,6 +1073,10 @@ static int __init cpupool_init(void)
 
     cpupool_gran_init();
 
+#ifdef CONFIG_HYPFS
+    hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+#endif
+
     cpupool0 = cpupool_create(0, 0, &err);
     BUG_ON(cpupool0 == NULL);
     cpupool_put(cpupool0);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12087.31723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZR-0005AW-AS; Mon, 26 Oct 2020 09:13:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12087.31723; Mon, 26 Oct 2020 09:13:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZR-0005AF-5F; Mon, 26 Oct 2020 09:13:41 +0000
Received: by outflank-mailman (input) for mailman id 12087;
 Mon, 26 Oct 2020 09:13:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZQ-0004ev-E8
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e0e474d-5af9-43b8-8c80-c4d9f6ee50d1;
 Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB53EB23E;
 Mon, 26 Oct 2020 09:13:19 +0000 (UTC)
X-Inumbo-ID: 4e0e474d-5af9-43b8-8c80-c4d9f6ee50d1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703600;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=G8a1iKL8UN8o1Ty1vmdQY0A5FOvExIvgwVG3SwZ4QNw=;
	b=qMnllShSIDuFGZF9rkN+h4h10kPIw8LwFXaswJu96zqzAA4PQ7umLDOvK1PREY9jwJ4Uez
	YG7kQ6xkfkDJ/Wun3SX1PQZac7sNxALkwYcqZK2E8aOopkxRkpSzYo/MPoL8trbYsvJeDj
	L1C8PgFcFLOUk7zAR2wF+C3kkKSAP+4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 06/12] xen/hypfs: move per-node function pointers into a dedicated struct
Date: Mon, 26 Oct 2020 10:13:10 +0100
Message-Id: <20201026091316.25680-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the function pointers currently stored in each hypfs node into a
dedicated structure in order to save some space in each node. The saving
will grow as additional callbacks are added in the future.

Provide some standard function vectors.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs.c      | 25 +++++++++++++++---
 xen/include/xen/hypfs.h | 57 +++++++++++++++++++++++++----------------
 xen/include/xen/param.h | 15 ++++-------
 3 files changed, 62 insertions(+), 35 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 8e932b5cf9..e655e8cfc7 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -24,6 +24,25 @@ CHECK_hypfs_dirlistentry;
     (DIRENTRY_NAME_OFF +        \
      ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
 
+struct hypfs_funcs hypfs_dir_funcs = {
+    .read = hypfs_read_dir,
+};
+struct hypfs_funcs hypfs_leaf_ro_funcs = {
+    .read = hypfs_read_leaf,
+};
+struct hypfs_funcs hypfs_leaf_wr_funcs = {
+    .read = hypfs_read_leaf,
+    .write = hypfs_write_leaf,
+};
+struct hypfs_funcs hypfs_bool_wr_funcs = {
+    .read = hypfs_read_leaf,
+    .write = hypfs_write_bool,
+};
+struct hypfs_funcs hypfs_custom_wr_funcs = {
+    .read = hypfs_read_leaf,
+    .write = hypfs_write_custom,
+};
+
 static DEFINE_RWLOCK(hypfs_lock);
 enum hypfs_lock_state {
     hypfs_unlocked,
@@ -284,7 +303,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
 
     guest_handle_add_offset(uaddr, sizeof(e));
 
-    ret = entry->read(entry, uaddr);
+    ret = entry->funcs->read(entry, uaddr);
 
  out:
     return ret;
@@ -387,14 +406,14 @@ static int hypfs_write(struct hypfs_entry *entry,
 {
     struct hypfs_entry_leaf *l;
 
-    if ( !entry->write )
+    if ( !entry->funcs->write )
         return -EACCES;
 
     ASSERT(entry->max_size);
 
     l = container_of(entry, struct hypfs_entry_leaf, e);
 
-    return entry->write(l, uaddr, ulen);
+    return entry->funcs->write(l, uaddr, ulen);
 }
 
 long do_hypfs_op(unsigned int cmd,
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 5ad99cb558..77916ebb58 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -7,6 +7,20 @@
 #include <public/hypfs.h>
 
 struct hypfs_entry_leaf;
+struct hypfs_entry;
+
+struct hypfs_funcs {
+    int (*read)(const struct hypfs_entry *entry,
+                XEN_GUEST_HANDLE_PARAM(void) uaddr);
+    int (*write)(struct hypfs_entry_leaf *leaf,
+                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+};
+
+extern struct hypfs_funcs hypfs_dir_funcs;
+extern struct hypfs_funcs hypfs_leaf_ro_funcs;
+extern struct hypfs_funcs hypfs_leaf_wr_funcs;
+extern struct hypfs_funcs hypfs_bool_wr_funcs;
+extern struct hypfs_funcs hypfs_custom_wr_funcs;
 
 struct hypfs_entry {
     unsigned short type;
@@ -15,10 +29,7 @@ struct hypfs_entry {
     unsigned int max_size;
     const char *name;
     struct list_head list;
-    int (*read)(const struct hypfs_entry *entry,
-                XEN_GUEST_HANDLE_PARAM(void) uaddr);
-    int (*write)(struct hypfs_entry_leaf *leaf,
-                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+    struct hypfs_funcs *funcs;
 };
 
 struct hypfs_entry_leaf {
@@ -42,7 +53,7 @@ struct hypfs_entry_dir {
         .e.size = 0,                              \
         .e.max_size = 0,                          \
         .e.list = LIST_HEAD_INIT(var.e.list),     \
-        .e.read = hypfs_read_dir,                 \
+        .e.funcs = &hypfs_dir_funcs,              \
         .dirlist = LIST_HEAD_INIT(var.dirlist),   \
     }
 
@@ -52,7 +63,7 @@ struct hypfs_entry_dir {
         .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
         .e.name = (nam),                          \
         .e.max_size = (msz),                      \
-        .e.read = hypfs_read_leaf,                \
+        .e.funcs = &hypfs_leaf_ro_funcs,          \
     }
 
 /* Content and size need to be set via hypfs_string_set_reference(). */
@@ -72,35 +83,37 @@ static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
     leaf->e.size = strlen(str) + 1;
 }
 
-#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
-    struct hypfs_entry_leaf __read_mostly var = {        \
-        .e.type = (typ),                                 \
-        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
-        .e.name = (nam),                                 \
-        .e.size = sizeof(contvar),                       \
-        .e.max_size = (wr) ? sizeof(contvar) : 0,        \
-        .e.read = hypfs_read_leaf,                       \
-        .e.write = (wr),                                 \
-        .u.content = &(contvar),                         \
+#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, fn, wr) \
+    struct hypfs_entry_leaf __read_mostly var = {            \
+        .e.type = (typ),                                     \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,                   \
+        .e.name = (nam),                                     \
+        .e.size = sizeof(contvar),                           \
+        .e.max_size = (wr) ? sizeof(contvar) : 0,            \
+        .e.funcs = (fn),                                     \
+        .u.content = &(contvar),                             \
     }
 
 #define HYPFS_UINT_INIT(var, nam, contvar)                       \
-    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, NULL)
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \
+                         &hypfs_leaf_ro_funcs, 0)
 #define HYPFS_UINT_INIT_WRITABLE(var, nam, contvar)              \
     HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \
-                         hypfs_write_leaf)
+                         &hypfs_leaf_wr_funcs, 1)
 
 #define HYPFS_INT_INIT(var, nam, contvar)                        \
-    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, NULL)
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar,  \
+                         &hypfs_leaf_ro_funcs, 0)
 #define HYPFS_INT_INIT_WRITABLE(var, nam, contvar)               \
     HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, \
-                         hypfs_write_leaf)
+                         &hypfs_leaf_wr_funcs, 1)
 
 #define HYPFS_BOOL_INIT(var, nam, contvar)                       \
-    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, NULL)
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
+                         &hypfs_leaf_ro_funcs, 0)
 #define HYPFS_BOOL_INIT_WRITABLE(var, nam, contvar)              \
     HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
-                         hypfs_write_bool)
+                         &hypfs_bool_wr_funcs, 1)
 
 extern struct hypfs_entry_dir hypfs_root;
 
diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h
index d0409d3a0e..1b2c7db954 100644
--- a/xen/include/xen/param.h
+++ b/xen/include/xen/param.h
@@ -116,8 +116,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
         { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_custom, \
+          .hypfs.e.funcs = &hypfs_custom_wr_funcs, \
           .init_leaf = (initfunc), \
           .func = (variable) }
 #define boolean_runtime_only_param(nam, variable) \
@@ -127,8 +126,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
           .hypfs.e.max_size = sizeof(variable), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_bool, \
+          .hypfs.e.funcs = &hypfs_bool_wr_funcs, \
           .hypfs.u.content = &(variable) }
 #define integer_runtime_only_param(nam, variable) \
     __paramfs __parfs_##variable = \
@@ -137,8 +135,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
           .hypfs.e.max_size = sizeof(variable), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \
           .hypfs.u.content = &(variable) }
 #define size_runtime_only_param(nam, variable) \
     __paramfs __parfs_##variable = \
@@ -147,8 +144,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
           .hypfs.e.max_size = sizeof(variable), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \
           .hypfs.u.content = &(variable) }
 #define string_runtime_only_param(nam, variable) \
     __paramfs __parfs_##variable = \
@@ -157,8 +153,7 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.name = (nam), \
           .hypfs.e.size = 0, \
           .hypfs.e.max_size = sizeof(variable), \
-          .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.e.funcs = &hypfs_leaf_wr_funcs, \
           .hypfs.u.content = &(variable) }
 
 #else
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12088.31735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZS-0005E8-TP; Mon, 26 Oct 2020 09:13:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12088.31735; Mon, 26 Oct 2020 09:13:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZS-0005Do-LM; Mon, 26 Oct 2020 09:13:42 +0000
Received: by outflank-mailman (input) for mailman id 12088;
 Mon, 26 Oct 2020 09:13:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZQ-0004f1-Rm
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2d1a173-a065-4271-967a-ad287a5065d5;
 Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A1D3EB244;
 Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
X-Inumbo-ID: c2d1a173-a065-4271-967a-ad287a5065d5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703600;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kfzDywvYxNprz4jIH7nIZnqi8wD+M3kSqllQAfQb1kA=;
	b=PEGW9POG9sdRUTUTQE3XHqjCbqIUKV3+9Tmk2NH7J/akbaoZTKno/CnHxk+5DeQmQm+qNU
	rXKR5EVpKbB6ztfuZ+PH660DZ0P69kcIlZPwcm9PdXlnmIhQW6sXUDX+ga2ek6tvAEVuXd
	bfUqkxKWfJH8zLNC5Icsn8wlm9N3X70=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 09/12] xen/hypfs: add support for id-based dynamic directories
Date: Mon, 26 Oct 2020 10:13:13 +0100
Message-Id: <20201026091316.25680-10-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add some helpers to hypfs.c to support dynamic directories with a
numerical id as their name.

Each dynamic directory is based on a template specified by the caller,
allowing specific access functions to be used and providing a predefined
set of entries in the directory.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs.c      | 76 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/hypfs.h | 14 ++++++++
 2 files changed, 90 insertions(+)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 4c226a06b4..12be2f6d16 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -257,6 +257,82 @@ unsigned int hypfs_getsize(const struct hypfs_entry *entry)
     return entry->size;
 }
 
+int hypfs_read_dyndir_id_entry(struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr)
+{
+    struct xen_hypfs_dirlistentry direntry;
+    char name[12];
+    unsigned int e_namelen, e_len;
+
+    e_namelen = snprintf(name, sizeof(name), "%u", id);
+    e_len = HYPFS_DIRENTRY_SIZE(e_namelen);
+    direntry.e.pad = 0;
+    direntry.e.type = template->e.type;
+    direntry.e.encoding = template->e.encoding;
+    direntry.e.content_len = template->e.funcs->getsize(&template->e);
+    direntry.e.max_write_len = template->e.max_size;
+    direntry.off_next = is_last ? 0 : e_len;
+    if ( copy_to_guest(*uaddr, &direntry, 1) )
+        return -EFAULT;
+    if ( copy_to_guest_offset(*uaddr, HYPFS_DIRENTRY_NAME_OFF, name,
+                              e_namelen + 1) )
+        return -EFAULT;
+
+    guest_handle_add_offset(*uaddr, e_len);
+
+    return 0;
+}
+
+static struct hypfs_entry *hypfs_dyndir_findentry(struct hypfs_entry_dir *dir,
+                                                  const char *name,
+                                                  unsigned int name_len)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+    if ( !data )
+        return ERR_PTR(-ENOENT);
+
+    /* Use template with original findentry function. */
+    return data->template->e.funcs->findentry(data->template, name, name_len);
+}
+
+static int hypfs_read_dyndir(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_get_dyndata();
+    if ( !data )
+        return -ENOENT;
+
+    /* Use template with original read function. */
+    return data->template->e.funcs->read(&data->template->e, uaddr);
+}
+
+struct hypfs_entry *hypfs_gen_dyndir_entry_id(struct hypfs_entry_dir *template,
+                                              unsigned int id)
+{
+    struct hypfs_dyndir_id *data;
+
+    data = hypfs_alloc_dyndata(sizeof(*data), alignof(*data));
+    if ( !data )
+        return ERR_PTR(-ENOMEM);
+
+    data->template = template;
+    data->id = id;
+    snprintf(data->name, sizeof(data->name), "%u", id);
+    data->dir = *template;
+    data->dir.e.name = data->name;
+    data->dir.e.funcs = &data->funcs;
+    data->funcs = *template->e.funcs;
+    data->funcs.findentry = hypfs_dyndir_findentry;
+    data->funcs.read = hypfs_read_dyndir;
+
+    return &data->dir.e;
+}
+
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index c8999b5381..adfb522496 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -50,6 +50,15 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
+struct hypfs_dyndir_id {
+    struct hypfs_entry_dir dir;       /* Modified copy of template. */
+    struct hypfs_funcs funcs;         /* Dynamic functions. */
+    struct hypfs_entry_dir *template; /* Template used. */
+    char name[12];                    /* Name of hypfs entry. */
+
+    unsigned int id;                  /* Numerical id. */
+};
+
 #define HYPFS_DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
 #define HYPFS_DIRENTRY_SIZE(name_len) \
     (HYPFS_DIRENTRY_NAME_OFF +        \
@@ -150,6 +159,11 @@ struct hypfs_entry *hypfs_dir_findentry(struct hypfs_entry_dir *dir,
                                         unsigned int name_len);
 void *hypfs_alloc_dyndata(unsigned long size, unsigned long align);
 void *hypfs_get_dyndata(void);
+int hypfs_read_dyndir_id_entry(struct hypfs_entry_dir *template,
+                               unsigned int id, bool is_last,
+                               XEN_GUEST_HANDLE_PARAM(void) *uaddr);
+struct hypfs_entry *hypfs_gen_dyndir_entry_id(struct hypfs_entry_dir *template,
+                                              unsigned int id);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12090.31747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZX-0005MI-4R; Mon, 26 Oct 2020 09:13:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12090.31747; Mon, 26 Oct 2020 09:13:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZX-0005M6-09; Mon, 26 Oct 2020 09:13:47 +0000
Received: by outflank-mailman (input) for mailman id 12090;
 Mon, 26 Oct 2020 09:13:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZV-0004ev-EQ
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c71bdaa5-202f-4353-9cbf-15d9e856f476;
 Mon, 26 Oct 2020 09:13:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 64294B234;
 Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
X-Inumbo-ID: c71bdaa5-202f-4353-9cbf-15d9e856f476
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703601;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zwN6kMRipkhMSsm6aORIIrosl9Cg2bYxBav4xp706Is=;
	b=KUWQ3HmOrCVw2Woxn7ZqwEd2ZJS4dD1x3r4iRWRx7iaQ6h/Wa0CMtME9CsHhcO0Tkzsjb9
	3ZnecGwimgvPBhBJhdeeyFwcAxOj/9AQ/G8ZWX5Gz5vaaD86LdI02gbRJswM5CnNAMIczy
	puqEIMPjqKD3mmyoDSGU7EIsbL/aXwo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 12/12] xen/cpupool: make per-cpupool sched-gran hypfs node writable
Date: Mon, 26 Oct 2020 10:13:16 +0100
Message-Id: <20201026091316.25680-13-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make /cpupool/<id>/sched-gran in hypfs writable. This enables selecting
the scheduling granularity per cpupool.

Writing this node is allowed only while no cpu is assigned to the
cpupool. The allowed values are "cpu", "core" and "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc |  5 ++-
 xen/common/sched/cpupool.c   | 75 +++++++++++++++++++++++++++++++-----
 2 files changed, 69 insertions(+), 11 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index f1ce24d7fe..e86f7d0dbe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,10 +184,13 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
-#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket") [w]
 
 The scheduling granularity of a cpupool.
 
+Writing a value is allowed only for cpupools with no cpu assigned and only if
+the architecture supports different scheduling granularities.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 8674ac0fdd..d0c61fb720 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -78,7 +78,7 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
-static int __init sched_select_granularity(const char *str)
+static int sched_gran_get(const char *str, enum sched_gran *mode)
 {
     unsigned int i;
 
@@ -86,36 +86,43 @@ static int __init sched_select_granularity(const char *str)
     {
         if ( strcmp(sg_name[i].name, str) == 0 )
         {
-            opt_sched_granularity = sg_name[i].mode;
+            *mode = sg_name[i].mode;
             return 0;
         }
     }
 
     return -EINVAL;
 }
+
+static int __init sched_select_granularity(const char *str)
+{
+    return sched_gran_get(str, &opt_sched_granularity);
+}
 custom_param("sched-gran", sched_select_granularity);
+#else
+static int sched_gran_get(const char *str, enum sched_gran *mode)
+{
+    return -EINVAL;
+}
 #endif
 
-static unsigned int __init cpupool_check_granularity(void)
+static unsigned int cpupool_check_granularity(enum sched_gran mode)
 {
     unsigned int cpu;
     unsigned int siblings, gran = 0;
 
-    if ( opt_sched_granularity == SCHED_GRAN_cpu )
+    if ( mode == SCHED_GRAN_cpu )
         return 1;
 
     for_each_online_cpu ( cpu )
     {
-        siblings = cpumask_weight(sched_get_opt_cpumask(opt_sched_granularity,
-                                                        cpu));
+        siblings = cpumask_weight(sched_get_opt_cpumask(mode, cpu));
         if ( gran == 0 )
             gran = siblings;
         else if ( gran != siblings )
             return 0;
     }
 
-    sched_disable_smt_switching = true;
-
     return gran;
 }
 
@@ -127,7 +134,7 @@ static void __init cpupool_gran_init(void)
 
     while ( gran == 0 )
     {
-        gran = cpupool_check_granularity();
+        gran = cpupool_check_granularity(opt_sched_granularity);
 
         if ( gran == 0 )
         {
@@ -153,6 +160,9 @@ static void __init cpupool_gran_init(void)
     if ( fallback )
         warning_add(fallback);
 
+    if ( opt_sched_granularity != SCHED_GRAN_cpu )
+        sched_disable_smt_switching = true;
+
     sched_granularity = gran;
     sched_gran_print(opt_sched_granularity, sched_granularity);
 }
@@ -1088,13 +1098,58 @@ static int cpupool_gran_read(const struct hypfs_entry *entry,
     return copy_to_guest(uaddr, name, strlen(name) + 1) ? -EFAULT : 0;
 }
 
+static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
+                              XEN_GUEST_HANDLE_PARAM(void) uaddr,
+                              unsigned int ulen)
+{
+    const struct hypfs_dyndir_id *data;
+    struct cpupool *cpupool;
+    enum sched_gran gran;
+    unsigned int sched_gran;
+    char name[SCHED_GRAN_NAME_LEN];
+    int ret = 0;
+
+    if ( ulen > SCHED_GRAN_NAME_LEN )
+        return -ENOSPC;
+
+    if ( copy_from_guest(name, uaddr, ulen) )
+        return -EFAULT;
+
+    sched_gran = sched_gran_get(name, &gran) ? 0
+                                             : cpupool_check_granularity(gran);
+    if ( memchr(name, 0, ulen) != (name + ulen - 1) || sched_gran == 0 )
+        return -EINVAL;
+
+    data = hypfs_get_dyndata();
+    if ( !data )
+        return -ENOENT;
+
+    spin_lock(&cpupool_lock);
+
+    cpupool = __cpupool_find_by_id(data->id, true);
+    if ( !cpupool )
+        ret = -ENOENT;
+    else if ( !cpumask_empty(cpupool->cpu_valid) )
+        ret = -EBUSY;
+    else
+    {
+        cpupool->gran = gran;
+        cpupool->sched_gran = sched_gran;
+    }
+
+    spin_unlock(&cpupool_lock);
+
+    return ret;
+}
+
 static struct hypfs_funcs cpupool_gran_funcs = {
     .read = cpupool_gran_read,
+    .write = cpupool_gran_write,
     .getsize = hypfs_getsize,
 };
 
 static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
-                          0, &cpupool_gran_funcs);
+                          SCHED_GRAN_NAME_LEN, &cpupool_gran_funcs);
 static char granstr[SCHED_GRAN_NAME_LEN] = {
     [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
     [SCHED_GRAN_NAME_LEN - 1] = 0
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12091.31755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZX-0005NX-U7; Mon, 26 Oct 2020 09:13:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12091.31755; Mon, 26 Oct 2020 09:13:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZX-0005N9-DC; Mon, 26 Oct 2020 09:13:47 +0000
Received: by outflank-mailman (input) for mailman id 12091;
 Mon, 26 Oct 2020 09:13:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZV-0004f1-S3
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d8c24bb7-8e2d-4583-acb4-5a9f8c6fa148;
 Mon, 26 Oct 2020 09:13:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 29ED6B248;
 Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
X-Inumbo-ID: d8c24bb7-8e2d-4583-acb4-5a9f8c6fa148
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703601;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BtuXFg+GprfOQoW7qcf8PMrXYl8U5hOCU9BdXI7w0yo=;
	b=X6sFx1rS9WEJvX5kbsaQAt19W7YP4+Bqr2R8LLtyKOUIrvJq/MPQG1mnjkxkKFXfAhCpQe
	ova2WKr80t7vA5to7plnDvHfrbuK8FuCN0noIsDDYfUeD96tFt7Ko5fv3sMkS9tXs6v6xx
	TLgu8mcIfDHuNxe5UH8ufStdahvCuYc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Dario Faggioli <dfaggioli@suse.com>
Subject: [PATCH 11/12] xen/hypfs: add scheduling granularity entry to cpupool entries
Date: Mon, 26 Oct 2020 10:13:15 +0100
Message-Id: <20201026091316.25680-12-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a "sched-gran" entry to the per-cpupool hypfs directories.

For now make this entry read-only and let it contain one of the
strings "cpu", "core" or "socket".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc |  4 +++
 xen/common/sched/cpupool.c   | 51 +++++++++++++++++++++++++++++++++---
 2 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index aaca1cdf92..f1ce24d7fe 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -184,6 +184,10 @@ A directory of all current cpupools.
 The individual cpupools. Each entry is a directory with the name being the
 cpupool-id (e.g. /cpupool/0/).
 
+#### /cpupool/*/sched-gran = ("cpu" | "core" | "socket")
+
+The scheduling granularity of a cpupool.
+
 #### /params/
 
 A directory of runtime parameters.
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 8612ee5cf6..8674ac0fdd 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -42,9 +42,10 @@ static DEFINE_SPINLOCK(cpupool_lock);
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
 static unsigned int __read_mostly sched_granularity = 1;
 
+#define SCHED_GRAN_NAME_LEN  8
 struct sched_gran_name {
     enum sched_gran mode;
-    char name[8];
+    char name[SCHED_GRAN_NAME_LEN];
 };
 
 static const struct sched_gran_name sg_name[] = {
@@ -53,7 +54,7 @@ static const struct sched_gran_name sg_name[] = {
     {SCHED_GRAN_socket, "socket"},
 };
 
-static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+static const char *sched_gran_get_name(enum sched_gran mode)
 {
     const char *name = "";
     unsigned int i;
@@ -67,8 +68,13 @@ static void sched_gran_print(enum sched_gran mode, unsigned int gran)
         }
     }
 
+    return name;
+}
+
+static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+{
     printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
-           name, gran, gran == 1 ? "" : "s");
+           sched_gran_get_name(mode), gran, gran == 1 ? "" : "s");
 }
 
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
@@ -1057,6 +1063,43 @@ static struct hypfs_entry *cpupool_dir_findentry(struct hypfs_entry_dir *dir,
     return hypfs_gen_dyndir_entry_id(&cpupool_pooldir, id);
 }
 
+static int cpupool_gran_read(const struct hypfs_entry *entry,
+                             XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_dyndir_id *data;
+    struct cpupool *cpupool;
+    const char *name = "";
+
+    data = hypfs_get_dyndata();
+    if ( !data )
+        return -ENOENT;
+
+    spin_lock(&cpupool_lock);
+
+    cpupool = __cpupool_find_by_id(data->id, true);
+    if ( cpupool )
+        name = sched_gran_get_name(cpupool->gran);
+
+    spin_unlock(&cpupool_lock);
+
+    if ( !cpupool )
+        return -ENOENT;
+
+    return copy_to_guest(uaddr, name, strlen(name) + 1) ? -EFAULT : 0;
+}
+
+static struct hypfs_funcs cpupool_gran_funcs = {
+    .read = cpupool_gran_read,
+    .getsize = hypfs_getsize,
+};
+
+static HYPFS_VARSIZE_INIT(cpupool_gran, XEN_HYPFS_TYPE_STRING, "sched-gran",
+                          0, &cpupool_gran_funcs);
+static char granstr[SCHED_GRAN_NAME_LEN] = {
+    [0 ... SCHED_GRAN_NAME_LEN - 2] = '?',
+    [SCHED_GRAN_NAME_LEN - 1] = 0
+};
+
 static struct hypfs_funcs cpupool_dir_funcs = {
     .read = cpupool_dir_read,
     .getsize = cpupool_dir_getsize,
@@ -1075,6 +1118,8 @@ static int __init cpupool_init(void)
 
 #ifdef CONFIG_HYPFS
     hypfs_add_dir(&hypfs_root, &cpupool_dir, true);
+    hypfs_string_set_reference(&cpupool_gran, granstr);
+    hypfs_add_leaf(&cpupool_pooldir, &cpupool_gran, true);
 #endif
 
     cpupool0 = cpupool_create(0, 0, &err);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:13:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:13:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12094.31771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZc-0005YX-CE; Mon, 26 Oct 2020 09:13:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12094.31771; Mon, 26 Oct 2020 09:13:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyZc-0005YM-7e; Mon, 26 Oct 2020 09:13:52 +0000
Received: by outflank-mailman (input) for mailman id 12094;
 Mon, 26 Oct 2020 09:13:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWyZa-0004f1-SN
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:13:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f9b3da22-e431-4730-8a0f-bb74a709d485;
 Mon, 26 Oct 2020 09:13:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6BF8CB243;
 Mon, 26 Oct 2020 09:13:20 +0000 (UTC)
X-Inumbo-ID: f9b3da22-e431-4730-8a0f-bb74a709d485
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603703600;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wzVfNJp5CakgtQPXKcxiHB74zrzQZ4Nkcj6JZDwDa6k=;
	b=NL/En86Zerx/XqAPLdxbToE9TgYiblPWaYJYlvTQW/RutFLZdUPGjUNGDbRYDww7CLqc7r
	X2yqGr+HsQeFQdmmqvJebYLtfoEo0/6ClP/pPX9+Irr9DTLuodyTR9hf7mFBCBOhSVbTNl
	ZZq/IZykawDZMb56HpOMdFrv1W4coS0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 08/12] xen/hypfs: support dynamic hypfs nodes
Date: Mon, 26 Oct 2020 10:13:12 +0100
Message-Id: <20201026091316.25680-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201026091316.25680-1-jgross@suse.com>
References: <20201026091316.25680-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a getsize() function pointer to struct hypfs_funcs in order to
support dynamically filled entries without the need to take the hypfs
lock each time the contents are generated.

For directories add a findentry callback to the vector and modify
hypfs_get_entry_rel() to use it.

Add a HYPFS_VARDIR_INIT() macro for initializing such a directory
statically, taking a struct hypfs_funcs pointer as an additional
parameter on top of those of HYPFS_DIR_INIT().

Modify HYPFS_VARSIZE_INIT() to take the function vector pointer as an
additional parameter, as this will be needed for dynamic entries.

Move DIRENTRY_SIZE() macro to hypfs.h as it will be needed by the read
function of a directory with dynamically generated entries.

To let the generic hypfs code continue to work on normal struct
hypfs_entry entities even for dynamic nodes, add some infrastructure
for allocating a working area for the current hypfs request, used to
store the information needed for traversing the tree. This area is
anchored in a per-cpu pointer and can be retrieved at any level of
the dynamic entries. It is freed automatically when the hypfs lock
is dropped.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs.c      | 124 +++++++++++++++++++++++++---------------
 xen/include/xen/hypfs.h |  39 +++++++++----
 2 files changed, 108 insertions(+), 55 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 97260bd4a3..4c226a06b4 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -19,28 +19,29 @@
 CHECK_hypfs_dirlistentry;
 #endif
 
-#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
-#define DIRENTRY_SIZE(name_len) \
-    (DIRENTRY_NAME_OFF +        \
-     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
-
 struct hypfs_funcs hypfs_dir_funcs = {
     .read = hypfs_read_dir,
+    .getsize = hypfs_getsize,
+    .findentry = hypfs_dir_findentry,
 };
 struct hypfs_funcs hypfs_leaf_ro_funcs = {
     .read = hypfs_read_leaf,
+    .getsize = hypfs_getsize,
 };
 struct hypfs_funcs hypfs_leaf_wr_funcs = {
     .read = hypfs_read_leaf,
     .write = hypfs_write_leaf,
+    .getsize = hypfs_getsize,
 };
 struct hypfs_funcs hypfs_bool_wr_funcs = {
     .read = hypfs_read_leaf,
     .write = hypfs_write_bool,
+    .getsize = hypfs_getsize,
 };
 struct hypfs_funcs hypfs_custom_wr_funcs = {
     .read = hypfs_read_leaf,
     .write = hypfs_write_custom,
+    .getsize = hypfs_getsize,
 };
 
 static DEFINE_RWLOCK(hypfs_lock);
@@ -50,6 +51,7 @@ enum hypfs_lock_state {
     hypfs_write_locked
 };
 static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
+static DEFINE_PER_CPU(void *, hypfs_dyndata);
 
 HYPFS_DIR_INIT(hypfs_root, "");
 
@@ -71,9 +73,12 @@ static void hypfs_write_lock(void)
 
 static void hypfs_unlock(void)
 {
-    enum hypfs_lock_state locked = this_cpu(hypfs_locked);
+    unsigned int cpu = smp_processor_id();
+    enum hypfs_lock_state locked = per_cpu(hypfs_locked, cpu);
+
+    XFREE(per_cpu(hypfs_dyndata, cpu));
 
-    this_cpu(hypfs_locked) = hypfs_unlocked;
+    per_cpu(hypfs_locked, cpu) = hypfs_unlocked;
 
     switch ( locked )
     {
@@ -88,6 +93,23 @@ static void hypfs_unlock(void)
     }
 }
 
+void *hypfs_alloc_dyndata(unsigned long size, unsigned long align)
+{
+    unsigned int cpu = smp_processor_id();
+
+    ASSERT(per_cpu(hypfs_locked, cpu) != hypfs_unlocked);
+    ASSERT(per_cpu(hypfs_dyndata, cpu) == NULL);
+
+    per_cpu(hypfs_dyndata, cpu) = _xzalloc(size, align);
+
+    return per_cpu(hypfs_dyndata, cpu);
+}
+
+void *hypfs_get_dyndata(void)
+{
+    return this_cpu(hypfs_dyndata);
+}
+
 static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
 {
     int ret = -ENOENT;
@@ -122,7 +144,7 @@ static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
     {
         unsigned int sz = strlen(new->name);
 
-        parent->e.size += DIRENTRY_SIZE(sz);
+        parent->e.size += HYPFS_DIRENTRY_SIZE(sz);
     }
 
     hypfs_unlock();
@@ -171,15 +193,34 @@ static int hypfs_get_path_user(char *buf,
     return 0;
 }
 
+struct hypfs_entry *hypfs_dir_findentry(struct hypfs_entry_dir *dir,
+                                        const char *name,
+                                        unsigned int name_len)
+{
+    struct hypfs_entry *entry;
+
+    list_for_each_entry ( entry, &dir->dirlist, list )
+    {
+        int cmp = strncmp(name, entry->name, name_len);
+
+        if ( cmp < 0 )
+            return ERR_PTR(-ENOENT);
+
+        if ( !cmp && strlen(entry->name) == name_len )
+            return entry;
+    }
+
+    return ERR_PTR(-ENOENT);
+}
+
 static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
                                                const char *path)
 {
     const char *end;
     struct hypfs_entry *entry;
     unsigned int name_len;
-    bool again = true;
 
-    while ( again )
+    for ( ;; )
     {
         if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
             return ERR_PTR(-ENOENT);
@@ -192,28 +233,12 @@ static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
             end = strchr(path, '\0');
         name_len = end - path;
 
-        again = false;
+        entry = dir->e.funcs->findentry(dir, path, name_len);
+        if ( IS_ERR(entry) || !*end )
+            return entry;
 
-        list_for_each_entry ( entry, &dir->dirlist, list )
-        {
-            int cmp = strncmp(path, entry->name, name_len);
-            struct hypfs_entry_dir *d = container_of(entry,
-                                                     struct hypfs_entry_dir, e);
-
-            if ( cmp < 0 )
-                return ERR_PTR(-ENOENT);
-            if ( !cmp && strlen(entry->name) == name_len )
-            {
-                if ( !*end )
-                    return entry;
-
-                again = true;
-                dir = d;
-                path = end + 1;
-
-                break;
-            }
-        }
+        path = end + 1;
+        dir = container_of(entry, struct hypfs_entry_dir, e);
     }
 
     return ERR_PTR(-ENOENT);
@@ -227,12 +252,17 @@ static struct hypfs_entry *hypfs_get_entry(const char *path)
     return hypfs_get_entry_rel(&hypfs_root, path + 1);
 }
 
+unsigned int hypfs_getsize(const struct hypfs_entry *entry)
+{
+    return entry->size;
+}
+
 int hypfs_read_dir(const struct hypfs_entry *entry,
                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
 {
     const struct hypfs_entry_dir *d;
     const struct hypfs_entry *e;
-    unsigned int size = entry->size;
+    unsigned int size = entry->funcs->getsize(entry);
 
     ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
 
@@ -242,18 +272,18 @@ int hypfs_read_dir(const struct hypfs_entry *entry,
     {
         struct xen_hypfs_dirlistentry direntry;
         unsigned int e_namelen = strlen(e->name);
-        unsigned int e_len = DIRENTRY_SIZE(e_namelen);
+        unsigned int e_len = HYPFS_DIRENTRY_SIZE(e_namelen);
 
         direntry.e.pad = 0;
         direntry.e.type = e->type;
         direntry.e.encoding = e->encoding;
-        direntry.e.content_len = e->size;
+        direntry.e.content_len = e->funcs->getsize(e);
         direntry.e.max_write_len = e->max_size;
         direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
         if ( copy_to_guest(uaddr, &direntry, 1) )
             return -EFAULT;
 
-        if ( copy_to_guest_offset(uaddr, DIRENTRY_NAME_OFF,
+        if ( copy_to_guest_offset(uaddr, HYPFS_DIRENTRY_NAME_OFF,
                                   e->name, e_namelen + 1) )
             return -EFAULT;
 
@@ -275,22 +305,25 @@ int hypfs_read_leaf(const struct hypfs_entry *entry,
 
     l = container_of(entry, const struct hypfs_entry_leaf, e);
 
-    return copy_to_guest(uaddr, l->u.content, entry->size) ? -EFAULT: 0;
+    return copy_to_guest(uaddr, l->u.content, entry->funcs->getsize(entry)) ?
+                                              -EFAULT : 0;
 }
 
 static int hypfs_read(const struct hypfs_entry *entry,
                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
 {
     struct xen_hypfs_direntry e;
+    unsigned int size;
     long ret = -EINVAL;
 
     if ( ulen < sizeof(e) )
         goto out;
 
+    size = entry->funcs->getsize(entry);
     e.pad = 0;
     e.type = entry->type;
     e.encoding = entry->encoding;
-    e.content_len = entry->size;
+    e.content_len = size;
     e.max_write_len = entry->max_size;
 
     ret = -EFAULT;
@@ -298,7 +331,7 @@ static int hypfs_read(const struct hypfs_entry *entry,
         goto out;
 
     ret = -ENOBUFS;
-    if ( ulen < entry->size + sizeof(e) )
+    if ( ulen < size + sizeof(e) )
         goto out;
 
     guest_handle_add_offset(uaddr, sizeof(e));
@@ -314,14 +347,15 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
 {
     char *buf;
     int ret;
+    struct hypfs_entry *e = &leaf->e;
 
     ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
 
-    if ( ulen > leaf->e.max_size )
+    if ( ulen > e->max_size )
         return -ENOSPC;
 
-    if ( leaf->e.type != XEN_HYPFS_TYPE_STRING &&
-         leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size )
+    if ( e->type != XEN_HYPFS_TYPE_STRING &&
+         e->type != XEN_HYPFS_TYPE_BLOB && ulen != e->funcs->getsize(e) )
         return -EDOM;
 
     buf = xmalloc_array(char, ulen);
@@ -333,14 +367,14 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
         goto out;
 
     ret = -EINVAL;
-    if ( leaf->e.type == XEN_HYPFS_TYPE_STRING &&
-         leaf->e.encoding == XEN_HYPFS_ENC_PLAIN &&
+    if ( e->type == XEN_HYPFS_TYPE_STRING &&
+         e->encoding == XEN_HYPFS_ENC_PLAIN &&
          memchr(buf, 0, ulen) != (buf + ulen - 1) )
         goto out;
 
     ret = 0;
     memcpy(leaf->u.write_ptr, buf, ulen);
-    leaf->e.size = ulen;
+    e->size = ulen;
 
  out:
     xfree(buf);
@@ -354,7 +388,7 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
 
     ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
     ASSERT(leaf->e.type == XEN_HYPFS_TYPE_BOOL &&
-           leaf->e.size == sizeof(bool) &&
+           leaf->e.funcs->getsize(&leaf->e) == sizeof(bool) &&
            leaf->e.max_size == sizeof(bool) );
 
     if ( ulen != leaf->e.max_size )
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 77916ebb58..c8999b5381 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -2,11 +2,13 @@
 #define __XEN_HYPFS_H__
 
 #ifdef CONFIG_HYPFS
+#include <xen/lib.h>
 #include <xen/list.h>
 #include <xen/string.h>
 #include <public/hypfs.h>
 
 struct hypfs_entry_leaf;
+struct hypfs_entry_dir;
 struct hypfs_entry;
 
 struct hypfs_funcs {
@@ -14,6 +16,9 @@ struct hypfs_funcs {
                 XEN_GUEST_HANDLE_PARAM(void) uaddr);
     int (*write)(struct hypfs_entry_leaf *leaf,
                  XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+    unsigned int (*getsize)(const struct hypfs_entry *entry);
+    struct hypfs_entry *(*findentry)(struct hypfs_entry_dir *dir,
+                                     const char *name, unsigned int name_len);
 };
 
 extern struct hypfs_funcs hypfs_dir_funcs;
@@ -45,7 +50,12 @@ struct hypfs_entry_dir {
     struct list_head dirlist;
 };
 
-#define HYPFS_DIR_INIT(var, nam)                  \
+#define HYPFS_DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
+#define HYPFS_DIRENTRY_SIZE(name_len) \
+    (HYPFS_DIRENTRY_NAME_OFF +        \
+     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
+
+#define HYPFS_VARDIR_INIT(var, nam, fn)           \
     struct hypfs_entry_dir __read_mostly var = {  \
         .e.type = XEN_HYPFS_TYPE_DIR,             \
         .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
@@ -53,22 +63,25 @@ struct hypfs_entry_dir {
         .e.size = 0,                              \
         .e.max_size = 0,                          \
         .e.list = LIST_HEAD_INIT(var.e.list),     \
-        .e.funcs = &hypfs_dir_funcs,              \
+        .e.funcs = (fn),                          \
         .dirlist = LIST_HEAD_INIT(var.dirlist),   \
     }
 
-#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
-    struct hypfs_entry_leaf __read_mostly var = { \
-        .e.type = (typ),                          \
-        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
-        .e.name = (nam),                          \
-        .e.max_size = (msz),                      \
-        .e.funcs = &hypfs_leaf_ro_funcs,          \
+#define HYPFS_DIR_INIT(var, nam)                  \
+    HYPFS_VARDIR_INIT(var, nam, &hypfs_dir_funcs)
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz, fn) \
+    struct hypfs_entry_leaf __read_mostly var = {  \
+        .e.type = (typ),                           \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,         \
+        .e.name = (nam),                           \
+        .e.max_size = (msz),                       \
+        .e.funcs = (fn),                           \
     }
 
 /* Content and size need to be set via hypfs_string_set_reference(). */
 #define HYPFS_STRING_INIT(var, nam)               \
-    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0, &hypfs_leaf_ro_funcs)
 
 /*
  * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
@@ -131,6 +144,12 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+unsigned int hypfs_getsize(const struct hypfs_entry *entry);
+struct hypfs_entry *hypfs_dir_findentry(struct hypfs_entry_dir *dir,
+                                        const char *name,
+                                        unsigned int name_len);
+void *hypfs_alloc_dyndata(unsigned long size, unsigned long align);
+void *hypfs_get_dyndata(void);
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:16:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:16:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12125.31783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWycY-0006Ec-VR; Mon, 26 Oct 2020 09:16:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12125.31783; Mon, 26 Oct 2020 09:16:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWycY-0006EV-SR; Mon, 26 Oct 2020 09:16:54 +0000
Received: by outflank-mailman (input) for mailman id 12125;
 Mon, 26 Oct 2020 09:16:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWycX-0006EQ-F6
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:16:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa4d873c-c7d4-48e8-81bd-2aec74265b51;
 Mon, 26 Oct 2020 09:16:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWycU-0007SW-01; Mon, 26 Oct 2020 09:16:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWycT-0002VG-Nv; Mon, 26 Oct 2020 09:16:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWycT-0001WY-NU; Mon, 26 Oct 2020 09:16:49 +0000
X-Inumbo-ID: aa4d873c-c7d4-48e8-81bd-2aec74265b51
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LHVGQkeZMShKVXyJ+zz75hHsfXBF+MjXVc8Auk8SprY=; b=XMyzgXFtUI7mhwzK26OfbvCu0M
	TBHWU8jjYKmO7OIKzM9JUBYH0+eUtoq6UeKHWyVUHp7eXhnWRj4L3SfEGF7d7Dur3X5E2JFTttUqt
	GkvhG7lL1I0f+gZn13O7gKUA1HBaKoMQnuble4b7VAtF4YlzYSmPKkMQxdPN5IH2VTmw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156237-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156237: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 09:16:49 +0000

flight 156237 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156237/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    3 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   34 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   33 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV-support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add a possibility to do logging in init-xenstore-domain: use -v[...]
    for selecting the log-level as in xl, log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    The mails for Yang Hongyang are bouncing, remove him from MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:19:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:19:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12130.31798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyfS-0006Qn-Ip; Mon, 26 Oct 2020 09:19:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12130.31798; Mon, 26 Oct 2020 09:19:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyfS-0006Qg-Fi; Mon, 26 Oct 2020 09:19:54 +0000
Received: by outflank-mailman (input) for mailman id 12130;
 Mon, 26 Oct 2020 09:19:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6Eey=EB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kWyfR-0006Qa-0Q
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:19:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ecb3a476-f323-44eb-a5b0-6d249f5651f9;
 Mon, 26 Oct 2020 09:19:51 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kWyfP-0007Wa-Gv; Mon, 26 Oct 2020 09:19:51 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kWyfP-0003V7-8L; Mon, 26 Oct 2020 09:19:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6Eey=EB=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kWyfR-0006Qa-0Q
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:19:53 +0000
X-Inumbo-ID: ecb3a476-f323-44eb-a5b0-6d249f5651f9
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ecb3a476-f323-44eb-a5b0-6d249f5651f9;
	Mon, 26 Oct 2020 09:19:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=TwqRLlzS+yxeNwh1I7UD8QFuBcRJ1HFGwuFy1gl9RIM=; b=onBis+YKjjNCmETM122acl/iET
	tnYgrOT9FLL/F0YkjiMkz/KnpMdI3UfZRgHGkrpKTv0KUBdrhENx6QVdwYKvLLWp8vK2zTQ/VtQ5z
	hdvPJgWUBRVvtLmOZmU0WzUky/BQlQqrnFX9h7fKxvgMunUqz/YP6yRY5ld5d92ECREM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kWyfP-0007Wa-Gv; Mon, 26 Oct 2020 09:19:51 +0000
Received: from gw1.octic.net ([81.187.162.82] helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kWyfP-0003V7-8L; Mon, 26 Oct 2020 09:19:51 +0000
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20201022014310.GA70872@mattapan.m5p.com>
 <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org>
 <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s>
 <b4ec906d-ebb6-add9-1bc0-39ab8d588026@xen.org>
 <alpine.DEB.2.21.2010230944090.12247@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <bf3b65d2-2642-f1f6-39f1-2f88433e9901@xen.org>
Date: Mon, 26 Oct 2020 09:19:49 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010230944090.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 23/10/2020 17:57, Stefano Stabellini wrote:
> On Fri, 23 Oct 2020, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 22/10/2020 22:17, Stefano Stabellini wrote:
>>> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>> On 22/10/2020 02:43, Elliott Mitchell wrote:
>>>>> Linux requires UEFI support to be enabled on ARM64 devices.  While many
>>>>> ARM64 devices lack ACPI, the writing seems to be on the wall of
>>>>> UEFI/ACPI
>>>>> potentially taking over.  Some common devices may need ACPI table
>>>>> support.
>>>>>
>>>>> Presently I think it is worth removing the dependancy on CONFIG_EXPERT.
>>>>
>>>> The idea behind EXPERT is to gate any feature that is not considered to be
>>>> stable/complete enough to be used in production.
>>>
>>> Yes, and from that point of view I don't think we want to remove EXPERT
>>> from ACPI yet. However, the idea of hiding things behind EXPERT works
>>> very well for new esoteric features, something like memory introspection
>>> or memory overcommit.
>>
>> Memaccess is not very new ;).
>>
>>> It does not work well for things that are actually
>>> required to boot on the platform.
>>
>> I am not sure where the problem is. It is easy to select EXPERT from the
>> menuconfig. It also hints to the user that the feature may not fully work.
>>
>>>
>>> Typically ACPI systems don't come with device tree at all (RPi4 being an
>>> exception), so users don't really have much of a choice in the matter.
>>
>> And they typically have IOMMUs.
>>
>>>
>>>   From that point of view, it would be better to remove EXPERT from ACPI,
>>> maybe even build ACPI by default, *but* to add a warning at boot saying
>>> something like:
>>>
>>> "ACPI support is experimental. Boot using Device Tree if you can."
>>>
>>>
>>> That would better convey the risks of using ACPI, while at the same time
>>> making it a bit easier for users to boot on their ACPI-only platforms.
>>
>> Right, I agree that this makes it easier for users to boot Xen on ACPI-only
>> platforms. However, based on the above, it is easy enough for a developer to
>> rebuild Xen with ACPI and EXPERT enabled.
>>
>> So what sort of users are you targeting?
> 
> Somebody trying Xen for the first time, they might know how to build it
> but they might not know that ACPI is not available by default, and they
> might not know that they need to enable EXPERT in order to get the ACPI
> option in the menu. It is easy to do once you know it is there,
> otherwise one might not know where to look in the menu.

Right, EXPERT can now be enabled using Kconfig. So it is not very 
different from an option Foo that has been hidden because its 
dependency Bar has not been selected.

It should be easy enough (and if it is not, we should fix it) to figure 
out the dependency when searching for the option via menuconfig.
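
As a concrete illustration (Foo and Bar here are the placeholder names 
from the paragraph above, not real Xen options), the gating being 
discussed is just an ordinary Kconfig dependency; menuconfig's '/' 
search on the hidden option reports the unmet dependency:

```kconfig
# Hypothetical sketch; FOO/BAR are illustrative, not actual entries
# from xen/arch/arm/Kconfig.
config BAR
	bool "Enable Bar (e.g. EXPERT)"

config FOO
	bool "Foo support (not mature enough for production)"
	depends on BAR
	help
	  Invisible in menuconfig until BAR is selected; searching
	  for FOO with '/' shows a line like "Depends on: BAR [=n]",
	  which tells the user what to enable first.
```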

> 
> 
>> I am sort of okay to remove EXPERT.
> 
> OK. This would help (even without building it by default) because as you
> go and look at the menu the first time, you'll find ACPI among the
> options right away.

To be honest, this step is probably the easiest part of the full 
process of getting Xen built and booted on Arm.

I briefly looked at Elliott's v2, and I can't help thinking that we are 
trying to re-invent EXPERT for ACPI because we think the feature is 
*more* important than any other feature gated by EXPERT.

In fact, all the features behind EXPERT are important. But they have 
been gated by EXPERT because they are not mature enough.

We already moved EXPERT from a command line option to a menuconfig 
option, so it should be easy enough to enable it now. If that is still 
not the case, then we should improve it.

But I don't think ACPI is mature enough to deserve different treatment. 
It would be more useful to first get to the stage where ACPI works 
without any crash or issue.

> 
> 
>> But I still think building ACPI by default
>> is still wrong because our default .config is meant to be (security)
>> supported. I don't think ACPI can earn this qualification today.
> 
> Certainly we don't want to imply ACPI is security supported. I was
> looking at SUPPORT.md and it is only says:
> 
> """
> EXPERT and DEBUG Kconfig options are not security supported. Other
> Kconfig options are supported, if the related features are marked as
> supported in this document.
> """
> 
> So technically we could enable ACPI in the build by default as ACPI for
> ARM is marked as experimental. However, I can see that it is not a
> great idea to enable by default an unsupported option in the kconfig, so
> from that point of view it might be best to leave ACPI disabled by
> default. Probably the best compromise at this time.

 From my understanding, the goal of EXPERT was to gate exactly such 
features. With your suggestion, it is not clear to me what the 
difference is between "experimental" and an option gated by "EXPERT".

Do you mind clarifying?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:31:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:31:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12141.31809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyqV-00085Y-Ji; Mon, 26 Oct 2020 09:31:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12141.31809; Mon, 26 Oct 2020 09:31:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyqV-00085R-GP; Mon, 26 Oct 2020 09:31:19 +0000
Received: by outflank-mailman (input) for mailman id 12141;
 Mon, 26 Oct 2020 09:31:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWyqU-00085M-10
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:31:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 08c7d4de-0787-4f43-82c0-979cfea668b3;
 Mon, 26 Oct 2020 09:31:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWyqN-0007nT-3i; Mon, 26 Oct 2020 09:31:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWyqM-0002qV-OU; Mon, 26 Oct 2020 09:31:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWyqM-0005Vu-O2; Mon, 26 Oct 2020 09:31:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWyqU-00085M-10
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:31:18 +0000
X-Inumbo-ID: 08c7d4de-0787-4f43-82c0-979cfea668b3
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 08c7d4de-0787-4f43-82c0-979cfea668b3;
	Mon, 26 Oct 2020 09:31:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pg3EQY6zNyUxAc7wD0TFmIU/uRenucI/bSHgOmfkTbc=; b=SF3sxSS1wlTOeOJpd2imXodlFJ
	m47egVnKO2Ogldpk3aU8xylzab83ctWA+BRtymlQIfHY/cqjam90vf8nrInoVnWE3hjxz4Q3r1z3y
	fqMvYIdu1XnxqyzNZJEyC33EtkldqMJ2zwQyBTOV+WmdLsVs2qZ4SlDcYynq/bITHkS4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWyqN-0007nT-3i; Mon, 26 Oct 2020 09:31:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWyqM-0002qV-OU; Mon, 26 Oct 2020 09:31:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWyqM-0005Vu-O2; Mon, 26 Oct 2020 09:31:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156233-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156233: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=04b1c2d1e2e12abcca22380827edaa058399f4fa
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 09:31:10 +0000

flight 156233 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156233/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              04b1c2d1e2e12abcca22380827edaa058399f4fa
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  108 days
Failing since        151818  2020-07-11 04:18:52 Z  107 days  102 attempts
Testing same since   156163  2020-10-24 04:19:13 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23141 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:34:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12147.31825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWytb-0008Im-3A; Mon, 26 Oct 2020 09:34:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12147.31825; Mon, 26 Oct 2020 09:34:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWytb-0008If-06; Mon, 26 Oct 2020 09:34:31 +0000
Received: by outflank-mailman (input) for mailman id 12147;
 Mon, 26 Oct 2020 09:34:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kWytZ-0008Ia-Jz
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:34:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 48c73774-1a46-4968-84fd-03c051c3e5cb;
 Mon, 26 Oct 2020 09:34:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 90DFAAB0E;
 Mon, 26 Oct 2020 09:34:27 +0000 (UTC)
X-Inumbo-ID: 48c73774-1a46-4968-84fd-03c051c3e5cb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603704867;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=80S3QVUpXto2jMoZj7laiS6mLH338j6LMMxVTB3tEWY=;
	b=a3s0PF20BdBro8iQP6OHAVOcYZYDqfqltZx5LWXYgdWhwKje0qa9C+rl5WOEp1cp7I4qll
	YqFIGCirvRXPV6OaC9Bu+1C/VmvNtsCQdoUb3uRvDXT25q70FAw7ctApCfBhnUiZmm4E7I
	kuUoxIpTwdY4PzzZw/uFY5Xw7mlDdXE=
Subject: Re: [PATCH] tools/libs/light: fix race in Makefile
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20201025101218.20478-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <38097e3b-27c3-deaf-8556-6b48677c54a4@suse.com>
Date: Mon, 26 Oct 2020 10:34:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201025101218.20478-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.10.2020 11:12, Juergen Gross wrote:
> The header $(INCLUDE)/_libxl_list.h matches two different rules, which
> can result in build breakage. Fix that.

While I don't doubt that you observed a race, I'm not sure this
explanation is accurate, and hence I'm also not sure the change is
going to address it: As I understand it, the two rules you refer to
are the one you change and

$(XEN_INCLUDE)/_%.h: _%.h
	$(call move-if-changed,_$*.h,$(XEN_INCLUDE)/_$*.h)

But a pattern rule doesn't come into play when a specific rule for
a file exists.
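
To illustrate (a minimal sketch with hypothetical file names, not taken
from the Xen tree): per GNU make's documented precedence, when a target
has an explicit rule with a recipe, a matching pattern rule is never
consulted for that target.

```make
# Hypothetical demo of rule precedence: 'out/_demo.h' has both an
# explicit rule (with a recipe) and a matching pattern rule.  GNU make
# always runs the explicit recipe; the pattern rule is considered only
# for targets that have no explicit rule of their own.
out/_demo.h:
	@mkdir -p out
	@echo '/* built by the explicit rule */' > $@

out/_%.h: _%.h
	@cp $< $@
```

By the same logic, the specific $(XEN_INCLUDE)/_libxl_list.h rule should
shadow the $(XEN_INCLUDE)/_%.h pattern rule rather than race with it.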

What I don't understand here is why this two-step moving around of
headers is used: Instead of the above pattern rule, couldn't the rule
that generates _libxl_type%.h, _libxl_type%_json.h,
_libxl_type%_private.h, and _libxl_type%.c put the relevant header
files right into their designated place? That would allow the
pattern rule to go away, although I'd then still be unclear about
the specific race you observed.
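
(As an aside, for readers unfamiliar with the idiom: move-if-changed is
a compare-and-rename helper. The sketch below shows the usual shape of
such a macro; it is an approximation, not a quote of the tree's actual
definition.)

```make
# Approximate sketch of a compare-and-rename macro like move-if-changed:
# replace the destination only when the content actually differs, so
# targets depending on it are not rebuilt after a no-op regeneration.
define move-if-changed
	if ! cmp -s $(1) $(2); then mv -f $(1) $(2); else rm -f $(1); fi
endef
```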

Jan

> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  tools/libs/light/Makefile | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
> index 3424fdb61b..370537ed38 100644
> --- a/tools/libs/light/Makefile
> +++ b/tools/libs/light/Makefile
> @@ -203,9 +203,9 @@ _libxl.api-for-check: $(XEN_INCLUDE)/libxl.h $(AUTOINCS)
>  		>$@.new
>  	mv -f $@.new $@
>  
> -$(XEN_INCLUDE)/_libxl_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
> -	$(PERL) $^ --prefix=libxl >$(notdir $@).new
> -	$(call move-if-changed,$(notdir $@).new,$@)
> +_libxl_list.h: $(XEN_INCLUDE)/xen-external/bsd-sys-queue-h-seddery $(XEN_INCLUDE)/xen-external/bsd-sys-queue.h
> +	$(PERL) $^ --prefix=libxl >$@.new
> +	$(call move-if-changed,$@.new,$@)
>  
>  _libxl_save_msgs_helper.c _libxl_save_msgs_callout.c \
>  _libxl_save_msgs_helper.h _libxl_save_msgs_callout.h: \
> 



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:37:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:37:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12151.31836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyvv-0008Rj-HA; Mon, 26 Oct 2020 09:36:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12151.31836; Mon, 26 Oct 2020 09:36:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyvv-0008Rc-Dr; Mon, 26 Oct 2020 09:36:55 +0000
Received: by outflank-mailman (input) for mailman id 12151;
 Mon, 26 Oct 2020 09:36:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kWyvu-0008RX-4o
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:36:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13a3cdc9-1b90-463d-a3d6-235550d9b2bc;
 Mon, 26 Oct 2020 09:36:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 69DE1B1D2;
 Mon, 26 Oct 2020 09:36:52 +0000 (UTC)
X-Inumbo-ID: 13a3cdc9-1b90-463d-a3d6-235550d9b2bc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603705012;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+p5pwnxxDkubn0ApxX3PGLtDsgBeYgDFORRxRpObx/U=;
	b=hPpbBpHdz2jfhRBaX8oZvHNasUuQD+scibtUTaby1oAG9POpUautAg7q+FGi4oCZLMB3F/
	aj7JWIaw1ImMZVN6/7qZy1qbzBylpgfB+OkzhjyRJqZRQi1UExz3zidMnKRjnXQTlQ/G3s
	0mqusl1tzHa1b5gRAh4lMBIMNmp37Mo=
Subject: Re: [PATCH 05/12] docs: fix hypfs path documentation
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-6-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f4f34ff5-4ab8-14da-1da0-9bb0e70b80d7@suse.com>
Date: Mon, 26 Oct 2020 10:36:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <20201026091316.25680-6-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:13, Juergen Gross wrote:
> The /params/* entry is missing a writable tag.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:39:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:39:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12156.31851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyyr-0000CB-1c; Mon, 26 Oct 2020 09:39:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12156.31851; Mon, 26 Oct 2020 09:39:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyyq-0000C4-Ui; Mon, 26 Oct 2020 09:39:56 +0000
Received: by outflank-mailman (input) for mailman id 12156;
 Mon, 26 Oct 2020 09:39:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWyyq-0000BE-5Q
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:39:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id afece48e-82b9-4a71-9bee-8054d14504ff;
 Mon, 26 Oct 2020 09:39:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWyyi-0007zM-St; Mon, 26 Oct 2020 09:39:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWyyi-00034H-Hw; Mon, 26 Oct 2020 09:39:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWyyi-0004wf-HR; Mon, 26 Oct 2020 09:39:48 +0000
X-Inumbo-ID: afece48e-82b9-4a71-9bee-8054d14504ff
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zt4VhIFL0obV4DTR+SRZvJQBzC+hnDTlPDAldvp3nNo=; b=YsfNY+OuDszTg7BHnqdj8aNDOX
	aAZG8NmSuu8OIFtMa+bZudtNHiOgNsjax1MbwEyWF/RGqg638NHW9nDIZf0fISSndGObV+1wFm3N0
	xeaWEwF/kW7XNwyOl8rKWHz95iz2O6/b6Rwo4Y+84STYdVHR0V3P3ZCZ0ULEiTn7vXhw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156225-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156225: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3650b228f83adda7e5ee532e2b90429c03f7b9ec
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 09:39:48 +0000

flight 156225 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156225/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate       fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3650b228f83adda7e5ee532e2b90429c03f7b9ec
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   86 days
Failing since        152366  2020-08-01 20:49:34 Z   85 days  145 attempts
Testing same since   156225  2020-10-25 23:39:51 Z    0 days    1 attempts

------------------------------------------------------------
3376 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 640968 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:40:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:40:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12159.31864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyzZ-0000zR-FA; Mon, 26 Oct 2020 09:40:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12159.31864; Mon, 26 Oct 2020 09:40:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWyzZ-0000zK-Bj; Mon, 26 Oct 2020 09:40:41 +0000
Received: by outflank-mailman (input) for mailman id 12159;
 Mon, 26 Oct 2020 09:40:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kWyzX-0000zE-Ml
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:40:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 299c6dd4-addb-4657-863a-6b547d50e98d;
 Mon, 26 Oct 2020 09:40:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 21495B1D2;
 Mon, 26 Oct 2020 09:40:38 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kWyzX-0000zE-Ml
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:40:39 +0000
X-Inumbo-ID: 299c6dd4-addb-4657-863a-6b547d50e98d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 299c6dd4-addb-4657-863a-6b547d50e98d;
	Mon, 26 Oct 2020 09:40:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603705238;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=vzPGygT7cshvhUpTVJ/ACGyKYSuDqb77C/dObjVqwaA=;
	b=XlX/Eh6DoxyBjl38+mhgjN0d64EYJOF9+Ri047jXSUdDYBRPr+Yw89MB2JkLfFXYhnaRzc
	nAt1c5mhFMNMHwucH8wJ32q5SwiDoSQ+1/bg7WqSvvm0VT3wlvdsFiwfRf199aTW5gCg2y
	dysGJHCEVOzjXK29sLL+TKAyCf+1tHs=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 21495B1D2;
	Mon, 26 Oct 2020 09:40:38 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3] x86/pv: inject #UD for entirely missing SYSCALL callbacks
Message-ID: <0e76675b-c549-128e-449f-0c7a4df64f11@suse.com>
Date: Mon, 26 Oct 2020 10:40:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

If no 64-bit SYSCALL callback is registered, the guest is crashed when
64-bit userspace executes a SYSCALL instruction, which is a userspace
=> kernel DoS.  The same applies to 32-bit userspace when no 32-bit
SYSCALL callback is registered either.

This has been the case ever since the introduction of 64-bit PV
support, and it behaves unlike all other SYSCALL/SYSENTER callbacks in
Xen, which yield #GP/#UD in userspace before the callback is
registered and are therefore safe by default.

This change does constitute a change in the PV ABI, for the corner case
of a PV guest kernel not registering a 64-bit callback (which has to be
considered a de facto requirement of the unwritten PV ABI, given that
there is no PV equivalent of EFER.SCE).

It brings the behaviour in line with PV32 SYSCALL/SYSENTER and PV64
SYSENTER (safe by default, until explicitly enabled).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <JBeulich@suse.com>
---
v3:
 * Split this change off of "x86/pv: Inject #UD for missing SYSCALL
   callbacks", to allow the uncontroversial part of that change to go
   in. Add conditional "rex64" for UREGS_rip adjustment. (Is branching
   over just the REX prefix too much trickery even for an
   unlikely-to-be-taken code path?)

v2:
 * Drop unnecessary instruction suffixes
 * Don't truncate #UD entrypoint to 32 bits

--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -33,11 +33,27 @@ ENTRY(switch_to_kernel)
         cmoveq VCPU_syscall32_addr(%rbx),%rax
         testq %rax,%rax
         cmovzq VCPU_syscall_addr(%rbx),%rax
-        movq  %rax,TRAPBOUNCE_eip(%rdx)
         /* TB_flags = VGCF_syscall_disables_events ? TBF_INTERRUPT : 0 */
         btl   $_VGCF_syscall_disables_events,VCPU_guest_context_flags(%rbx)
         setc  %cl
         leal  (,%rcx,TBF_INTERRUPT),%ecx
+
+        test  %rax, %rax
+UNLIKELY_START(z, syscall_no_callback) /* TB_eip == 0 => #UD */
+        mov   VCPU_trap_ctxt(%rbx), %rdi
+        movl  $X86_EXC_UD, UREGS_entry_vector(%rsp)
+        cmpw  $FLAT_USER_CS32, UREGS_cs(%rsp)
+        je    0f
+        rex64                           # subl => subq
+0:
+        subl  $2, UREGS_rip(%rsp)
+        mov   X86_EXC_UD * TRAPINFO_sizeof + TRAPINFO_eip(%rdi), %rax
+        testb $4, X86_EXC_UD * TRAPINFO_sizeof + TRAPINFO_flags(%rdi)
+        setnz %cl
+        lea   TBF_EXCEPTION(, %rcx, TBF_INTERRUPT), %ecx
+UNLIKELY_END(syscall_no_callback)
+
+        movq  %rax, TRAPBOUNCE_eip(%rdx)
         movb  %cl,TRAPBOUNCE_flags(%rdx)
         call  create_bounce_frame
         andl  $~X86_EFLAGS_DF,UREGS_eflags(%rsp)
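
The dispatch logic above can be sketched in C. This is a minimal model
for illustration only: the function name select_syscall_target, the
struct bounce layout, and the TBF_* values are hypothetical stand-ins,
not Xen's actual definitions; the authoritative logic is the assembly
in the patch.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TBF_EXCEPTION 2u  /* assumed flag values, for illustration */
#define TBF_INTERRUPT 1u

struct bounce {
    uint64_t eip;
    unsigned int flags;
};

/* Model of the callback selection: prefer the mode-specific callback,
 * with 32-bit falling back to the 64-bit one (the cmoveq/cmovzq pair
 * in the assembly).  If neither is registered, point the bounce frame
 * at the guest's #UD handler instead of crashing the guest, and step
 * the saved rip back over the 2-byte SYSCALL opcode (0F 05) so the
 * guest kernel sees the fault on the SYSCALL instruction itself. */
static struct bounce select_syscall_target(uint64_t cb64, uint64_t cb32,
                                           bool compat, uint64_t ud_handler,
                                           uint64_t *rip)
{
    struct bounce tb = { .eip = 0, .flags = 0 };
    uint64_t target = (compat && cb32) ? cb32 : cb64;

    if (!target) {
        /* No callback at all => inject #UD.  (The real code truncates
         * the rip adjustment to 32 bits for compat userspace, hence
         * the conditional rex64 in the patch; elided here.) */
        *rip -= 2;
        tb.eip = ud_handler;
        tb.flags = TBF_EXCEPTION;
        return tb;
    }

    tb.eip = target;
    return tb;
}
```

The rip adjustment matters because SYSCALL saves the address of the
*following* instruction, so without the subtraction the injected #UD
would appear to come from the instruction after the SYSCALL.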


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:42:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:42:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12163.31876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWz0t-00018q-PO; Mon, 26 Oct 2020 09:42:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12163.31876; Mon, 26 Oct 2020 09:42:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWz0t-00018j-M0; Mon, 26 Oct 2020 09:42:03 +0000
Received: by outflank-mailman (input) for mailman id 12163;
 Mon, 26 Oct 2020 09:42:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kWz0s-00018b-4O
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:42:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ca7d3ef-67d4-48db-8c4a-0914eacad7f4;
 Mon, 26 Oct 2020 09:42:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7D3A4B2AB;
 Mon, 26 Oct 2020 09:42:00 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kWz0s-00018b-4O
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:42:02 +0000
X-Inumbo-ID: 0ca7d3ef-67d4-48db-8c4a-0914eacad7f4
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0ca7d3ef-67d4-48db-8c4a-0914eacad7f4;
	Mon, 26 Oct 2020 09:42:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603705320;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VMdzQKAaM6bFeqE8l0wwYexuiOYIIiXaap9PzHsjpZI=;
	b=Ghf88ZZaq3D4f5tm/nMdXAv6D5ZrtXAVZ9QU4b69lViBjanBG45nkJn+GqEEOnfHG7RQ7j
	6ZzfDdgrK22ASWD0dd5oSX11g/1rA5slvdKYLrx+K6X+58QuLl6dii9FbBUU2s6KhnNOE6
	ubnleRScZXgYIPOIsaQnSk8TE67foMU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 7D3A4B2AB;
	Mon, 26 Oct 2020 09:42:00 +0000 (UTC)
Subject: Re: [PATCH v3] x86/pv: inject #UD for entirely missing SYSCALL
 callbacks
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <0e76675b-c549-128e-449f-0c7a4df64f11@suse.com>
Message-ID: <ce9e309e-cd16-2fef-188d-63d3866e7f1a@suse.com>
Date: Mon, 26 Oct 2020 10:41:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <0e76675b-c549-128e-449f-0c7a4df64f11@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:40, Jan Beulich wrote:

And of course this should have

From: Andrew Cooper <andrew.cooper3@citrix.com>

right here, sorry.

Jan

> If no 64-bit SYSCALL callback is registered, the guest is crashed
> when 64-bit userspace executes a SYSCALL instruction, which is a
> userspace => kernel DoS.  The same applies to 32-bit userspace when
> no 32-bit SYSCALL callback is registered either.
> 
> This has been the case ever since the introduction of 64-bit PV
> support, and it behaves unlike all other SYSCALL/SYSENTER callbacks
> in Xen, which yield #GP/#UD in userspace before the callback is
> registered and are therefore safe by default.
> 
> This change does constitute a change in the PV ABI, for the corner
> case of a PV guest kernel not registering a 64-bit callback (which
> has to be considered a de facto requirement of the unwritten PV ABI,
> given that there is no PV equivalent of EFER.SCE).
> 
> It brings the behaviour in line with PV32 SYSCALL/SYSENTER and PV64
> SYSENTER (safe by default, until explicitly enabled).
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <JBeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:46:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:46:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12170.31887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWz4p-0001L0-AO; Mon, 26 Oct 2020 09:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12170.31887; Mon, 26 Oct 2020 09:46:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWz4p-0001Kt-70; Mon, 26 Oct 2020 09:46:07 +0000
Received: by outflank-mailman (input) for mailman id 12170;
 Mon, 26 Oct 2020 09:46:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kWz4o-0001Ko-F2
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:46:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6300eb3b-689b-4b80-8b78-ca747ce71604;
 Mon, 26 Oct 2020 09:46:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6F851AC48;
 Mon, 26 Oct 2020 09:46:04 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kWz4o-0001Ko-F2
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:46:06 +0000
X-Inumbo-ID: 6300eb3b-689b-4b80-8b78-ca747ce71604
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 6300eb3b-689b-4b80-8b78-ca747ce71604;
	Mon, 26 Oct 2020 09:46:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603705564;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wW5S7trTnyLZSMpzApW79jGaAeCctoYfsCL53iLSAIM=;
	b=F6GiHJb2RIalO4VQoJroK1f28C9NuXh2gOgdK1x+3aAEbaBZ/LxUJpcliRC8YBP36Dc4NP
	A8luzLdeaeoJlhH7SEZgB47/SdJ+NcKmpRbnCABidlbtIj6Ip1jIe84tyGkFom+/fVfiuo
	ecqcFAwH5GZXVPNpAEIcMrl8FT/E2wE=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 6F851AC48;
	Mon, 26 Oct 2020 09:46:04 +0000 (UTC)
Subject: Re: [PATCH] tools/libs/light: fix race in Makefile
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20201025101218.20478-1-jgross@suse.com>
 <38097e3b-27c3-deaf-8556-6b48677c54a4@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <7aafa515-b549-6d2f-32fb-991ba5a12934@suse.com>
Date: Mon, 26 Oct 2020 10:46:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <38097e3b-27c3-deaf-8556-6b48677c54a4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.20 10:34, Jan Beulich wrote:
> On 25.10.2020 11:12, Juergen Gross wrote:
>> The header $(XEN_INCLUDE)/_libxl_list.h matches two different rules,
>> which can result in build breakage. Fix that.
> 
> While I don't doubt that you observed a race, I'm not sure this is
> true, and hence I'm also not sure the change is going to address it:
> Aiui the two rules you talk about are the one you change and
> 
> $(XEN_INCLUDE)/_%.h: _%.h
> 	$(call move-if-changed,_$*.h,$(XEN_INCLUDE)/_$*.h)
> 
> But a pattern rule doesn't come into play when a specific rule for
> a file exists.

Hmm, true. I didn't see the race, but spotted the suspected ambiguity
just by chance.

> 
> What I don't understand here is why this two-step moving around of
> headers is used: Instead of the above pattern rule, can't the rule
> to generate _libxl_type%.h, _libxl_type%_json.h,
> _libxl_type%_private.h, and _libxl_type%.c put the relevant header
> files right into their designated place? This would allow the
> pattern rule to go away, albeit I'd then still be unclear about
> the specific race you did observe.

This would require replacing the pattern rules used to generate the
files with per-file rules, as e.g. _libxl_types_json.h and
_libxl_types_internal_json.h match the same pattern but need to end
up in different directories.

In the end I think this patch can just be dropped.

Sorry for the noise,

Juergen


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:46:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:46:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12172.31903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWz57-0001QL-KC; Mon, 26 Oct 2020 09:46:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12172.31903; Mon, 26 Oct 2020 09:46:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWz57-0001QE-Gi; Mon, 26 Oct 2020 09:46:25 +0000
Received: by outflank-mailman (input) for mailman id 12172;
 Mon, 26 Oct 2020 09:46:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWz56-0001Ou-Dy
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:46:24 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 623cd873-d9f4-4d80-9ace-69f806647549;
 Mon, 26 Oct 2020 09:46:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWz50-00087f-5O; Mon, 26 Oct 2020 09:46:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWz4z-0003DN-T6; Mon, 26 Oct 2020 09:46:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWz4z-0007Kz-SY; Mon, 26 Oct 2020 09:46:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWz56-0001Ou-Dy
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:46:24 +0000
X-Inumbo-ID: 623cd873-d9f4-4d80-9ace-69f806647549
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 623cd873-d9f4-4d80-9ace-69f806647549;
	Mon, 26 Oct 2020 09:46:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/gIA32XdbAKB4NRFhS72e94jNYiYhLF6gN6orWS8RN4=; b=quwqQZVM8aAMTfvdf+GJ3Zo86g
	oikE50eOuJh3US8Lbm/Y+8WDSSWSuP1NPQizreTFbkaCKnC8w6iNTGszOu+y3Mfq4ewGBlnZuU2Ju
	8Zh4gqMBT+CYJd10Wp9dGuFZDm/pBT2P1jPzPZZEgM5glnxHPh87YHoBYvTyAAFbQxps=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWz50-00087f-5O; Mon, 26 Oct 2020 09:46:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWz4z-0003DN-T6; Mon, 26 Oct 2020 09:46:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWz4z-0007Kz-SY; Mon, 26 Oct 2020 09:46:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156232-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156232: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b70c4fdcde83689d8cd1e5e2faf598d0087934a3
X-Osstest-Versions-That:
    ovmf=264eccb5dfc345c2e004883f00e62959f818fafd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 09:46:17 +0000

flight 156232 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156232/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b70c4fdcde83689d8cd1e5e2faf598d0087934a3
baseline version:
 ovmf                 264eccb5dfc345c2e004883f00e62959f818fafd

Last test of basis   156102  2020-10-22 17:10:41 Z    3 days
Testing same since   156232  2020-10-26 03:10:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bob Feng <bob.c.feng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   264eccb5df..b70c4fdcde  b70c4fdcde83689d8cd1e5e2faf598d0087934a3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 09:56:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 09:56:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12181.31915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWzEl-0002P7-Jl; Mon, 26 Oct 2020 09:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12181.31915; Mon, 26 Oct 2020 09:56:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWzEl-0002P0-Fl; Mon, 26 Oct 2020 09:56:23 +0000
Received: by outflank-mailman (input) for mailman id 12181;
 Mon, 26 Oct 2020 09:56:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWzEk-0002Ov-7E
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:56:22 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 87b1c860-11a5-456c-b884-13f997b5462d;
 Mon, 26 Oct 2020 09:56:17 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWzEe-0008KM-Oh; Mon, 26 Oct 2020 09:56:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWzEe-0003Qq-Gb; Mon, 26 Oct 2020 09:56:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWzEe-0002Q4-G5; Mon, 26 Oct 2020 09:56:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kWzEk-0002Ov-7E
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 09:56:22 +0000
X-Inumbo-ID: 87b1c860-11a5-456c-b884-13f997b5462d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 87b1c860-11a5-456c-b884-13f997b5462d;
	Mon, 26 Oct 2020 09:56:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sPj/jM2mRniINx4OcDJw1DRZeZMgprvMXo/SD0URd7Q=; b=b/IbgrL0pUIfnsoR8JUvJcw0l3
	ONIF1PWlu7xIJKk7xyvekyafRsT6pKKmwHHQipwdO8h4tCyaopqdhB/Bc9g33vNSmq+Faw08rNvHe
	ppqByYr9lFAIxkpr789SxiPIG7Q42n91QKWNc7G71MYWGZoo4aTt0m2pLjF1qwrmMTjk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWzEe-0008KM-Oh; Mon, 26 Oct 2020 09:56:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWzEe-0003Qq-Gb; Mon, 26 Oct 2020 09:56:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kWzEe-0002Q4-G5; Mon, 26 Oct 2020 09:56:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156234-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156234: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 09:56:16 +0000

flight 156234 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156234/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   67 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  151 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   36 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 10:07:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 10:07:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12189.31930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWzP2-0003SQ-Oh; Mon, 26 Oct 2020 10:07:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12189.31930; Mon, 26 Oct 2020 10:07:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWzP2-0003SJ-Lb; Mon, 26 Oct 2020 10:07:00 +0000
Received: by outflank-mailman (input) for mailman id 12189;
 Mon, 26 Oct 2020 10:06:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kWzP1-0003SE-Tr
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 10:06:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0af6a5fa-0d93-4edb-bc15-92a6b71e7943;
 Mon, 26 Oct 2020 10:06:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 25CC8ADF2;
 Mon, 26 Oct 2020 10:06:58 +0000 (UTC)
X-Inumbo-ID: 0af6a5fa-0d93-4edb-bc15-92a6b71e7943
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603706818;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sb7P6Y6PEdt/la5dIDYuwZ123VhFpNwmyepjEl7xqj4=;
	b=XdKGZrdKMvgCEuPJ6GiXANUYZSJsRSzHcWo60cmohLC9hGiLYfSzrYxZD0TBRlaDa362Gd
	NqobxntQ4KV3nuYv/xpTrC7MIHOFgBKacpEz6S0aWSiVBZsgUavVudakLzR9aRz9qkoq8j
	P7BcF4pEmbeUKpyszni9mreGUhxn8VU=
Subject: Re: [PATCH] tools/libs/light: fix race in Makefile
To: Jürgen Groß <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20201025101218.20478-1-jgross@suse.com>
 <38097e3b-27c3-deaf-8556-6b48677c54a4@suse.com>
 <7aafa515-b549-6d2f-32fb-991ba5a12934@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <74fdb7b6-2727-c325-2af5-3039b84f2332@suse.com>
Date: Mon, 26 Oct 2020 11:06:57 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.1
MIME-Version: 1.0
In-Reply-To: <7aafa515-b549-6d2f-32fb-991ba5a12934@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 26.10.2020 10:46, Jürgen Groß wrote:
> On 26.10.20 10:34, Jan Beulich wrote:
>> What I don't understand here is why this two-step moving around of
>> headers is used: instead of the above pattern rule, can't the rule
>> that generates _libxl_type%.h, _libxl_type%_json.h,
>> _libxl_type%_private.h, and _libxl_type%.c put the relevant header
>> files right into their designated place? That would allow the
>> pattern rule to go away, though I'd still be unclear about the
>> specific race you observed.
> 
> This would require replacing the pattern rules used to generate the
> files with per-file rules, as e.g. _libxl_types_json.h and
> _libxl_types_internal_json.h match the same pattern but need to end
> up in different directories.
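The constraint described above can be sketched as follows (the GEN
variable, directory names, and prerequisites are hypothetical, not the
actual libxl Makefile; recipe lines must be tab-indented):

```make
# Both _libxl_types_json.h (stem "s") and _libxl_types_internal_json.h
# (stem "s_internal") match the single pattern below, so one pattern
# rule cannot route them to different directories:
#
#   _libxl_type%_json.h: libxl_type%.idl
#   	$(GEN) $< $@
#
# Per-file rules, by contrast, can place each generated header
# directly in its designated directory:
include/_libxl_types_json.h: libxl_types.idl
	$(GEN) $< $@

private/_libxl_types_internal_json.h: libxl_types_internal.idl
	$(GEN) $< $@
```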

Ah, right - I didn't pay attention to the *_internal*.h needs.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 10:17:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 10:17:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12196.31942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWzYd-0004P0-Oo; Mon, 26 Oct 2020 10:16:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12196.31942; Mon, 26 Oct 2020 10:16:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWzYd-0004Ot-Ko; Mon, 26 Oct 2020 10:16:55 +0000
Received: by outflank-mailman (input) for mailman id 12196;
 Mon, 26 Oct 2020 10:16:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kWzYc-0004Oo-1E
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 10:16:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 333b403e-387f-4708-8fc2-81ae08efd101;
 Mon, 26 Oct 2020 10:16:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWzYZ-0000Q8-9T; Mon, 26 Oct 2020 10:16:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kWzYY-0003ty-VY; Mon, 26 Oct 2020 10:16:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kWzYY-0008Va-V4; Mon, 26 Oct 2020 10:16:50 +0000
X-Inumbo-ID: 333b403e-387f-4708-8fc2-81ae08efd101
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EbYnqd/Sj4CyTV4zummuLkgtPjLqjiyanlLOgn/I/ro=; b=UUUGUZw/qeX8lVXiWys8U9M+9/
	m+Y1I3IvTavrns3BB1p1iKZpQSlpLSg81yom9mLjN9x9aaY+BWWan92pE5BzQaOUx3iFm6rfcL+WI
	M9dNKal2IIMsDyQysmGAPYvUzDlHm8OzfpHYpxLgP2Tof+SIROeEbyDtayLcOdtnajSI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156228-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156228: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 10:16:50 +0000

flight 156228 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156228/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 156196

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156196
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156196
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156196
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156196
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156196
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156196
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156196
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156196
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156196
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156196
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156196
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156228  2020-10-26 01:51:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 10:43:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 10:43:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12208.31966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWzyC-00073O-4E; Mon, 26 Oct 2020 10:43:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12208.31966; Mon, 26 Oct 2020 10:43:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kWzyC-00073H-1H; Mon, 26 Oct 2020 10:43:20 +0000
Received: by outflank-mailman (input) for mailman id 12208;
 Mon, 26 Oct 2020 10:43:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0AxF=EB=gmail.com=dunlapg@srs-us1.protection.inumbo.net>)
 id 1kWzy9-00073C-Uz
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 10:43:18 +0000
Received: from mail-ed1-x542.google.com (unknown [2a00:1450:4864:20::542])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 812f991b-620c-43dc-bbc8-4f0b037e5333;
 Mon, 26 Oct 2020 10:43:16 +0000 (UTC)
Received: by mail-ed1-x542.google.com with SMTP id t20so8693990edr.11
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 03:43:16 -0700 (PDT)
X-Inumbo-ID: 812f991b-620c-43dc-bbc8-4f0b037e5333
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=umich.edu; s=google-2016-06-03;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=GcXznKRACrGiaPsHMISkyH4VxsBYAdOZpQikx+ceUek=;
        b=Pwy0BP9osbbaeoprrRW3gjZE/Z2I3hXZ5nRJ3hCzCyOhQnwj7KfaFUHKOjHgs0HcMW
         xb+w6NNW8QDlAjOlVlT1AOclqe8dy4BvmH3jvxMln9PNSsbqi32pewQjACAtK3nsKxFO
         W1YMPxoo58Q5UXcCwAXvyiJCKcQC+vezk73eYQvzCvx2AvIKDNEoqpkB9muZeKWG1GkW
         bQmo38K5m11UwTV21xUFWZi3k76PbrJ67EeyPGDWPXlDO3Z7UHcmYD3dA4oe+joAfolF
         haupZsDw5Yg17X+qesV/yKhXIzipWnxiT2lMbFGspRLDxR+5y31/+dIIJUA5wwRIKazN
         oOWg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=GcXznKRACrGiaPsHMISkyH4VxsBYAdOZpQikx+ceUek=;
        b=OQR3qsDsn5Ty76LsYof0HEElmSgJ0fqq0TnlNck1Sr/p3+7b0aIuCDZL9TVpJltMOl
         Rx8uBJRo0IfTnKVLrba1nQ+OkfV9i804lg/5xjyQMfXAmyogWc9UORFkFeBpoCt1zkKl
         1oimNE+MxqqPfY8S/nAn8FXaufiy88RS9RfJHvMPCU+W6R3aGQakJ+0UTCwK7Mt3ci2e
         LMgis1EUwHfhiaXQ57OkKKKo6WapdMzWbfyiBP9URquslx2E3QI2qi7xvAduZ0cKGP27
         mCcbHPI8ZJJCyU5AzQ7TOYIyKifDuoUDFhapUDdE6b+MrA6MB3Ite5NlJ2vbHBWzjsob
         tznA==
X-Gm-Message-State: AOAM530I1W/bkfNOkABYoZgCoNWc2lKeCl+wbhH0LcnABn1xBIOwniZR
	VxmKuYMbntrErgshLfMUDTrJoxou8YG6COze6UY=
X-Google-Smtp-Source: ABdhPJxKx1FFgSO/qBC+JY1UHW/GhXxKoGrdoN/L4YFYVS6ASeReHimJLQay6YoAZIAWgkXTF8vFFb4ZE2+ujQhXrrg=
X-Received: by 2002:a50:8125:: with SMTP id 34mr15683908edc.39.1603708996142;
 Mon, 26 Oct 2020 03:43:16 -0700 (PDT)
MIME-Version: 1.0
References: <158524252335.30595.3422322089286433323.stgit@Palanthas>
In-Reply-To: <158524252335.30595.3422322089286433323.stgit@Palanthas>
From: George Dunlap <dunlapg@umich.edu>
Date: Mon, 26 Oct 2020 10:43:04 +0000
Message-ID: <CAFLBxZaPNsxoazbB=e1sN7A=gzvr2rpAj7qdA73TtcRpPqUkLw@mail.gmail.com>
Subject: Re: [Xen-devel] [PATCH] xen: credit2: document that min_rqd is valid
 and ok to use
To: Dario Faggioli <dfaggioli@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jürgen Groß <jgross@suse.com>, 
	Jan Beulich <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>
Content-Type: multipart/alternative; boundary="00000000000009ce5705b290995f"

--00000000000009ce5705b290995f
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Mar 26, 2020 at 5:09 PM Dario Faggioli <dfaggioli@suse.com> wrote:
>
> The code is a bit involved, and it is not easy to tell that min_rqd,
> inside csched2_res_pick(), actually points to a runqueue when it is
> dereferenced.
>
> Add a comment and an ASSERT() for that.
>
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> ---
> Cc: Jürgen Groß <jgross@suse.com>
> ---
>  xen/common/sched/credit2.c |    7 +++++++
>  1 file changed, 7 insertions(+)
>
> diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
> index c7241944a8..9da51e624b 100644
> --- a/xen/common/sched/credit2.c
> +++ b/xen/common/sched/credit2.c
> @@ -2387,6 +2387,13 @@ csched2_res_pick(const struct scheduler *ops, const struct sched_unit *unit)
>          goto out_up;
>      }
>
> +    /*
> +     * If we're here, min_rqd must be valid. In fact, either we picked a
> +     * runqueue in the "list_for_each" (as min_avgload is initialized to
> +     * MAX_LOAD) or we just did that (in the "else" branch) above.
> +     */


Sorry it's taken so long to get back to you on this.

The problem with this is that there are actually *three* alternate clauses
above:

1. (has_soft && min_s_rqd)
2. min_rqd
3. <none of the above>

It's obvious that if we hit #2 or #3, min_rqd will be set.  But it's not
immediately obvious why the condition in #1 guarantees that min_rqd will
be set.

Is it because if we get to the point in the above loop where min_s_rqd is
set, then min_rqd will always be set if it hasn't been set already?  Or to
put it a different way -- the only way for min_rqd *not* to be set is if it
always bailed before min_s_rqd was set?

 -George
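[Editor's note: the three alternatives discussed above are easier to see in a stripped-down model. The sketch below is not the actual csched2_res_pick() code — the runq struct, the pick() helper and the MAX_LOAD constant are invented for illustration — only the shape of the clauses follows the thread.]

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the three-way choice discussed above.  Not Xen code: the
 * struct and helper names are invented; only the shape of the clauses
 * follows the real csched2_res_pick(). */
#define MAX_LOAD 0x7fffffff

struct runq { int avgload; bool soft_ok; };

static struct runq *pick(struct runq *rqs, size_t n, bool has_soft)
{
    struct runq *min_rqd = NULL, *min_s_rqd = NULL;
    int min_avgload = MAX_LOAD, min_s_avgload = MAX_LOAD;

    for ( size_t i = 0; i < n; i++ )
    {
        /* Least-loaded runqueue overall; min_avgload starts at MAX_LOAD,
         * so any real load below it makes min_rqd non-NULL. */
        if ( rqs[i].avgload < min_avgload )
        {
            min_avgload = rqs[i].avgload;
            min_rqd = &rqs[i];
        }
        /* Least-loaded runqueue that also honours soft affinity.  Note it
         * can only be set by an iteration that has already had the chance
         * to set min_rqd. */
        if ( has_soft && rqs[i].soft_ok && rqs[i].avgload < min_s_avgload )
        {
            min_s_avgload = rqs[i].avgload;
            min_s_rqd = &rqs[i];
        }
    }

    if ( has_soft && min_s_rqd )   /* clause #1 */
        return min_s_rqd;
    else if ( min_rqd )            /* clause #2 */
        return min_rqd;
    return NULL;                   /* clause #3: nothing to pick from */
}
```

In this toy version, at least, the answer to the question above is visible directly: min_s_rqd can only become non-NULL in a loop iteration that has already had the opportunity to set min_rqd, so clause #1 firing implies min_rqd is set too.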

--00000000000009ce5705b290995f--


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 11:04:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 11:04:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12232.31996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX0IB-0000ch-7C; Mon, 26 Oct 2020 11:03:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12232.31996; Mon, 26 Oct 2020 11:03:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX0IB-0000ca-49; Mon, 26 Oct 2020 11:03:59 +0000
Received: by outflank-mailman (input) for mailman id 12232;
 Mon, 26 Oct 2020 11:03:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Yhc=EB=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kX0I9-0000cV-Cy
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 11:03:57 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.44]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63733462-244e-4225-9cce-d36b7d8855fd;
 Mon, 26 Oct 2020 11:03:55 +0000 (UTC)
Received: from AM5PR0202CA0019.eurprd02.prod.outlook.com
 (2603:10a6:203:69::29) by AM0PR08MB3425.eurprd08.prod.outlook.com
 (2603:10a6:208:db::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Mon, 26 Oct
 2020 11:03:51 +0000
Received: from AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:69:cafe::dd) by AM5PR0202CA0019.outlook.office365.com
 (2603:10a6:203:69::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Mon, 26 Oct 2020 11:03:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT014.mail.protection.outlook.com (10.152.16.130) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Mon, 26 Oct 2020 11:03:51 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Mon, 26 Oct 2020 11:03:51 +0000
Received: from 094e56be3056.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1C653325-3BAE-474D-954E-A635C5CECE6F.1; 
 Mon, 26 Oct 2020 11:03:13 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 094e56be3056.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 26 Oct 2020 11:03:13 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB4179.eurprd08.prod.outlook.com (2603:10a6:208:12b::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Mon, 26 Oct
 2020 11:03:02 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Mon, 26 Oct 2020
 11:03:02 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b6gHBiLAEBapWfu7nOf/sBDSoCp+v6xj7jdAjy1f9/k=;
 b=8aOw2Q7ZCFERkr3AY0KzBHQ6DSj++RufO0ewLgay5WFlXnVUNAp9i+0fnrxRy1MgRKTmH32HXPM9KtaX/mn3xuJLO6/RA0vf/eW5EEsSsxbvurWSYEMEugDF1NDLWBfbcv7syI9jN6MngTUwmFa114blkOjeZ3gCfXDYvKSGtFc=
Received: from AM5PR0202CA0019.eurprd02.prod.outlook.com
 (2603:10a6:203:69::29) by AM0PR08MB3425.eurprd08.prod.outlook.com
 (2603:10a6:208:db::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Mon, 26 Oct
 2020 11:03:51 +0000
Received: from AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:69:cafe::dd) by AM5PR0202CA0019.outlook.office365.com
 (2603:10a6:203:69::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Mon, 26 Oct 2020 11:03:51 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT014.mail.protection.outlook.com (10.152.16.130) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Mon, 26 Oct 2020 11:03:51 +0000
Received: ("Tessian outbound 68da730eaaba:v64"); Mon, 26 Oct 2020 11:03:51 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: ad426e3378c2b8d0
X-CR-MTA-TID: 64aa7808
Received: from 094e56be3056.2
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 1C653325-3BAE-474D-954E-A635C5CECE6F.1;
	Mon, 26 Oct 2020 11:03:13 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 094e56be3056.2
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Mon, 26 Oct 2020 11:03:13 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JV7HvWrJ5LXFPuXgBFi9gO0ONM76IsGuk86EOOV1XHr1/7B2urFeh/n3YA8xvMJYh4ltLNVAMxflajJDSBx9WF0rhk2oPKjfUKIdpXFEpx1sTwl9gbNWY8I7danBF3Jou2EkxD1ulKckcRDrWTAybhpx5tY52y8rurYLZJMNwZCBUx84sgEp04uHLnzeMT8FFCbLWysiSpIMAVuCNAMFjvcGLV1uHjMO4MrPs2k+10eBOolSQBPNMjBc1OMKh2SK3aSU+cKvOPzMJabNCpzDOCyadmN16iMgY2fsCi2+845WmK3FJA9urLnMJfLwD4e/fZHzRRdv/Lqk1Sk0f+feuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b6gHBiLAEBapWfu7nOf/sBDSoCp+v6xj7jdAjy1f9/k=;
 b=BLON6442q/qFddQJG/oIWQ/jtUesApNSNUWEePtSa8YFT+t7Sl4NX4md7Cc8Q3q/sL+G/tZKS0SjEudh52Wujv0qaVA265+D2xU5XC/g+rs584vqWXJNQYeMxif72d4lbkJLmtxX6u276YbwLAXCLuISQ25WAUg6KLyj2nNtqW+Yn3C0BjOEL9bGIM3+jmlqrmJ9FGNZrL/NkDl8c112afy3ugCV2oL3aVAP0OF67pOCIy4lq9SCIr12SkPWARdFrY3I2MPEVtW1b2+KN4i5TPQhBpJcpyoDefyYZkNNNgET/vFUKIVpQRVQMwXtH0GhRHWAX+cVK/5tR/9xHDra/Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b6gHBiLAEBapWfu7nOf/sBDSoCp+v6xj7jdAjy1f9/k=;
 b=8aOw2Q7ZCFERkr3AY0KzBHQ6DSj++RufO0ewLgay5WFlXnVUNAp9i+0fnrxRy1MgRKTmH32HXPM9KtaX/mn3xuJLO6/RA0vf/eW5EEsSsxbvurWSYEMEugDF1NDLWBfbcv7syI9jN6MngTUwmFa114blkOjeZ3gCfXDYvKSGtFc=
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB4179.eurprd08.prod.outlook.com (2603:10a6:208:12b::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Mon, 26 Oct
 2020 11:03:02 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Mon, 26 Oct 2020
 11:03:02 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamgw0AAgAEzsQCAAWIugIABA+CAgADBXICAABfBAIAAGI6AgAAOSACABG94AA==
Date: Mon, 26 Oct 2020 11:03:02 +0000
Message-ID: <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
In-Reply-To: <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: ecdb2a65-e4bb-45a9-49d8-08d8799ed26a
x-ms-traffictypediagnostic: AM0PR08MB4179:|AM0PR08MB3425:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3425B005E05B5CBEC16C2205FC190@AM0PR08MB3425.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Wct1RX+xtBNtxPsw4/ZJGhfhM/dlE0OhaaZl2W5LRZlAKbKAN5Do1AjPlU2+Ktsybl/zk3BPwoXTLklY5frKE9XV5KIecK/GIxH7gL4puR+8KYcpSA1TpYdr0SUhL/1lIR1yJpR3oVj8Q8kz4NOWMC4R1AFH+SqlpcjQ2eZEZRttwarZCXR6g6tWPDpzbN8MTU5tdtqrH2bXhh2xgJdfbzFBHjyajqY8YLUVom0QYDBQqUIAbc1cxWywr/fbUvgJO9FR0qoTbeKPmNcKQAQp93B6Oix38hwO1nsdB5aEy2aGepVoncIEQi0DLZnhzcybgGvbq6w7dJDe/O9oEyGvAg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(376002)(366004)(396003)(39860400002)(8936002)(6506007)(26005)(186003)(66556008)(66476007)(4326008)(66946007)(478600001)(83380400001)(71200400001)(53546011)(5660300002)(55236004)(6486002)(33656002)(2616005)(8676002)(316002)(66446008)(54906003)(76116006)(2906002)(36756003)(86362001)(6916009)(6512007)(64756008)(91956017);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 pxHmoqlF11edzwXGa9CMOc49c5Qj9euK6Ga9PkbzbFoukbI/cH12Bo2ZgaMlpwTjphWaPnu84ZDKzZqVYGLwLwPByBgTzwuBgRTqv6YuMt446m60Mf9eK/dX+Q4Y1WQaBkDPz+2f7H383FGmMgr4djtxqRqtK9sY69Fwds/Fhs5ziHE6vcsk3a2ULm+vAW1sRfkOcgGIWxnWQppM0b/6OTFnPvQv4jJ7t/mtVTaGn1zqXw1XgFd6X2GYN9tAAhe26lpL89UaFHPABqufp3WwCeGfKPjdwhR7QbNYMJ/tHZ1BoTLtvl44X8EUfo45CHNZKbBSX7Cyr8g9KfnlpaClBDeGv4SiLF93LxgKXoS52hDr5+k1pkcYADio4qjp46aL1pqUz7DO8Da8Jg4hzWYOMxqQrXHyJfSRKMrDCZl/P0fVL1siw//qlYFRi8KNr62c4cBlHlQ0AgNJQ69EyiQMzrGaQBTV39AFLlb3T5Sqxl335rP1SzCIUMUZpR18H1NLH9rRlh+IIBKGgjI76BBhc/QDyaVnBLQY/ESHNe6p6tESKDlJbMtFL2NeE/UkHbilxxVCEPoMTjND7PKveAcNar86U9hxaKTI4UmuTgMPckkeTKUa7GRUxDZh3FyuY44cRj47eGpPhzCesWgkqWNhMQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <AB3E0BD265A2E24DBBEB60B488362047@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4179
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a6b81dc7-223d-46c9-32b6-08d8799eb52c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	YPn1P0wVfcBJriuQRnaOAJ2nR263yMb4mtBWw55cLhJUlixDpPUDv+l5QdS7uJ600CjU5vPW0Panz8FeOw6kltwyppZOMRtQ8hNKwWihK5n72ZfqhNTVWb/J/nPoyKXMZw3uKfONp+4kLTiN/7QbYaFbu436F6wRy2y2WvydsOI/J/5x1TKNw+ikSKlkL+x59Sl1tUeJfailoz8BojhTlVUy7ZB8+6k9HtFccUMd71MyJk1D5L/0LWXiGm1AR3TWBkaY9YLExpCyuivTNJ+BOrD42fwADR5wwfN/+z5jxvmxwNuVzwpjFkHLED4gDlzvYyYsf+wQbIiX5s3M9cNqIN9UKS+cApVITj+v3g+QEdh4VU7d/ht+ln4NKhPBvXYATCbkM24OeC17MJPLOGucrQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(39860400002)(346002)(396003)(136003)(46966005)(70206006)(316002)(5660300002)(6486002)(4326008)(2906002)(33656002)(36906005)(2616005)(70586007)(86362001)(54906003)(6512007)(83380400001)(6862004)(82740400003)(6506007)(82310400003)(107886003)(53546011)(47076004)(36756003)(356005)(8676002)(81166007)(186003)(26005)(336012)(478600001)(55236004)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Oct 2020 11:03:51.4060
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ecdb2a65-e4bb-45a9-49d8-08d8799ed26a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3425

Hello Julien,

> On 23 Oct 2020, at 4:19 pm, Julien Grall <julien@xen.org> wrote:
> 
> 
> On 23/10/2020 15:27, Rahul Singh wrote:
>> Hello Julien,
>>> On 23 Oct 2020, at 2:00 pm, Julien Grall <julien@xen.org> wrote:
>>> 
>>> 
>>> 
>>> On 23/10/2020 12:35, Rahul Singh wrote:
>>>> Hello,
>>>>> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>> 
>>>>> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>>>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>>>    that supports both Stage-1 and Stage-2 translations.
>>>>>>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>>>>>>    capability to share the page tables with the CPU.
>>>>>>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>>>>>>    and priority queue IRQ handling.
>>>>>>>> 
>>>>>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>>>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>>>>>> they are done).
>>>>>>>> 
>>>>>>>> Do you know why Linux is using thread. Is it because of long running
>>>>>>>> operations?
>>>>>>> 
>>>>>>> Yes you are right because of long running operations Linux is using the
>>>>>>> threaded IRQs.
>>>>>>> 
>>>>>>> SMMUv3 reports fault/events bases on memory-based circular buffer queues not
>>>>>>> based on the register. As per my understanding, it is time-consuming to
>>>>>>> process the memory based queues in interrupt context because of that Linux
>>>>>>> is using threaded IRQ to process the faults/events from SMMU.
>>>>>>> 
>>>>>>> I didn’t find any other solution in XEN in place of tasklet to defer the
>>>>>>> work, that’s why I used tasklet in XEN in replacement of threaded IRQs. If
>>>>>>> we do all work in interrupt context we will make XEN less responsive.
>>>>>> 
>>>>>> So we need to make sure that Xen continue to receives interrupts, but we also
>>>>>> need to make sure that a vCPU bound to the pCPU is also responsive.
>>>>>> 
>>>>>>> 
>>>>>>> If you know another solution in XEN that will be used to defer the work in
>>>>>>> the interrupt please let me know I will try to use that.
>>>>>> 
>>>>>> One of my work colleague encountered a similar problem recently. He had a long
>>>>>> running tasklet and wanted to be broken down in smaller chunk.
>>>>>> 
>>>>>> We decided to use a timer to reschedule the taslket in the future. This allows
>>>>>> the scheduler to run other loads (e.g. vCPU) for some time.
>>>>>> 
>>>>>> This is pretty hackish but I couldn't find a better solution as tasklet have
>>>>>> high priority.
>>>>>> 
>>>>>> Maybe the other will have a better idea.
>>>>> 
>>>>> Julien's suggestion is a good one.
>>>>> 
>>>>> But I think tasklets can be configured to be called from the idle_loop,
>>>>> in which case they are not run in interrupt context?
>>>>> 
>>>>  Yes you are right tasklet will be scheduled from the idle_loop that is not interrupt conext.
>>> 
>>> This depends on your tasklet. Some will run from the softirq context which is usually (for Arm) on the return of an exception.
>>> 
>> Thanks for the info. I will check and will get better understanding of the tasklet how it will run in XEN.
>>>>> 
>>>>>>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>>>>>>>    access functions based on atomic operations implemented in Linux.
>>>>>>>> 
>>>>>>>> Can you provide more details?
>>>>>>> 
>>>>>>> I tried to port the latest version of the SMMUv3 code than I observed that
>>>>>>> in order to port that code I have to also port atomic operation implemented
>>>>>>> in Linux to XEN. As latest Linux code uses atomic operation to process the
>>>>>>> command queues (atomic_cond_read_relaxed(),atomic_long_cond_read_relaxed() ,
>>>>>>> atomic_fetch_andnot_relaxed()) .
>>>>>> 
>>>>>> Thank you for the explanation. I think it would be best to import the atomic
>>>>>> helpers and use the latest code.
>>>>>> 
>>>>>> This will ensure that we don't re-introduce bugs and also buy us some time
>>>>>> before the Linux and Xen driver diverge again too much.
>>>>>> 
>>>>>> Stefano, what do you think?
>>>>> 
>>>>> I think you are right.
>>>> Yes, I agree with you to have XEN code in sync with Linux code that's why I started with to port the Linux atomic operations to XEN  then I realised that it is not straightforward to port atomic operations and it requires lots of effort and testing. Therefore I decided to port the code before the atomic operation is introduced in Linux.
>>> 
>>> Hmmm... I would not have expected a lot of effort required to add the 3 atomics operations above. Are you trying to also port the LSE support at the same time?
>> There are other atomic operations used in the SMMUv3 code apart from the 3 atomic operation I mention. I just mention 3 operation as an example. 
> 
> Ok. Do you have a list you could share?
> 

Yes. Please find the list that we have to port to the XEN in order to merge the latest SMMUv3 code. 

If we start to port the below list we might have to port another atomic operation based on which below atomic operations are implemented. I have not spent time on how these atomic operations are implemented in detail but as per my understanding, it required an effort to port them to XEN and required a lot of testing.

1. atomic_set_release
2. atomic_fetch_andnot_relaxed
3. atomic_cond_read_relaxed
4. atomic_long_cond_read_relaxed
5. atomic_long_xor
6. atomic_set_release
7. atomic_cmpxchg_relaxed might be we can use atomic_cmpxchg that is implemented in XEN need to check.
8. atomic_dec_return_release
9. atomic_fetch_inc_relaxed

> Cheers,
> 
> -- 
> Julien Grall

Regards,
Rahul
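[Editor's note: the semantics of some of the Linux helpers named in this thread — atomic_set_release(), atomic_fetch_andnot_relaxed(), atomic_cond_read_relaxed() — can be approximated with standard C11 <stdatomic.h> operations. The sketch below only pins down what the helpers mean; it is neither the Linux implementation nor what a Xen port would look like.]

```c
#include <stdatomic.h>

/* Rough C11 equivalents of three of the Linux atomic helpers named in
 * this thread; illustrative only, not the Linux or Xen implementation. */

/* atomic_set_release(): a plain store with release ordering. */
static void set_release(atomic_int *v, int i)
{
    atomic_store_explicit(v, i, memory_order_release);
}

/* atomic_fetch_andnot_relaxed(): v &= ~mask, returning the old value. */
static int fetch_andnot_relaxed(atomic_int *v, int mask)
{
    return atomic_fetch_and_explicit(v, ~mask, memory_order_relaxed);
}

/* atomic_cond_read_relaxed(): spin with relaxed loads until the value
 * satisfies the condition, returning that value.  A real implementation
 * would avoid a raw busy loop (e.g. WFE on Arm). */
static int cond_read_relaxed(atomic_int *v, int (*cond)(int))
{
    int val;
    while ( !cond(val = atomic_load_explicit(v, memory_order_relaxed)) )
        ;
    return val;
}
```

The porting effort discussed above is less about these one-line definitions than about the infrastructure behind them in Linux (the atomic_long_* layer, instrumentation wrappers, and the LL/SC vs LSE variants on Arm), which is what makes a faithful import into Xen non-trivial.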


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 11:05:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 11:05:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12241.32009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX0Jy-0000k7-Kl; Mon, 26 Oct 2020 11:05:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12241.32009; Mon, 26 Oct 2020 11:05:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX0Jy-0000k0-G7; Mon, 26 Oct 2020 11:05:50 +0000
Received: by outflank-mailman (input) for mailman id 12241;
 Mon, 26 Oct 2020 11:05:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kDWn=EB=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kX0Jx-0000jv-AE
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 11:05:49 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6953cc64-8436-4160-a82c-1956ce69b3f6;
 Mon, 26 Oct 2020 11:05:48 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id x7so11944476wrl.3
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 04:05:48 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id v11sm19343600wml.26.2020.10.26.04.05.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 26 Oct 2020 04:05:47 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=lRJzK5QYRzzd/LbtNDNUjOHentsTogmyjw6KPb6kXeg=;
        b=WAa1f06dvRl5i2XbE7YpNM9UXT5MmDWQwmsooi564Rst6Wr6oYPXzUtqMuKLVgqHwv
         Yv9dmYMvcwZFW0C3y1P59umOAftz/ccjkRSW4LrkXdGwaeUqMQGkDM0d3ZpQ6PasIqIM
         eYN5TZ/oeY0PmcfUaPszukPNwu4PRqxAgO99g3yPdC2Yd0Rk1+Gkl6W8XhhUz9Ukg5m0
         1coXfxSC/AyUT+1KzK/no7Nn4eNn6XvEKVT9YmYP54IBQmztimxStqG6515lH/aOI+Sp
         XyuNxcvOhmPDgSzYfWhwI89K+MYa7DM2DiPPRhtdymmgOmbFn6WV+Boya0+egqCWziHS
         0MQQ==
X-Gm-Message-State: AOAM5325F0/GgDR95kLTkvGRBUcXcVqa80//Y9RkqDW56mn/20x4n7sO
	4Xa84wd5q+SvGNr90BZDyq0=
X-Google-Smtp-Source: ABdhPJxyNcf3LNRZ8U75CEnea5iZcfWsajZHThZ4iw253DZhawKCgd07ynS8NZqvg+JrnmaqTyEmnw==
X-Received: by 2002:adf:8541:: with SMTP id 59mr17269179wrh.61.1603710347889;
        Mon, 26 Oct 2020 04:05:47 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id v11sm19343600wml.26.2020.10.26.04.05.47
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 26 Oct 2020 04:05:47 -0700 (PDT)
Date: Mon, 26 Oct 2020 11:05:45 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/helpers: fix Arm build by excluding
 init-xenstore-domain
Message-ID: <20201026110545.uvd2x6j3pjac7hkl@liuwe-devbox-debian-v2>
References: <20201025054546.4960-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201025054546.4960-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Sun, Oct 25, 2020 at 06:45:46AM +0100, Juergen Gross wrote:
> The support for PVH xenstore-stubdom has broken the Arm build.
> 
> Xenstore stubdom isn't supported on Arm, so there is no need to build
> the init-xenstore-domain helper.
> 
> Build the helper on x86 only.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>

I have applied this patch to unblock osstest.


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 11:43:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 11:43:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12263.32028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX0uR-0004DY-Ax; Mon, 26 Oct 2020 11:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12263.32028; Mon, 26 Oct 2020 11:43:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX0uR-0004DR-7T; Mon, 26 Oct 2020 11:43:31 +0000
Received: by outflank-mailman (input) for mailman id 12263;
 Mon, 26 Oct 2020 11:43:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kX0uP-0004Cn-Dd
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 11:43:29 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d72a3cc1-d582-4574-9abc-3a21a6f20abf;
 Mon, 26 Oct 2020 11:43:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX0uG-0002B2-C6; Mon, 26 Oct 2020 11:43:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX0uG-0000LH-18; Mon, 26 Oct 2020 11:43:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kX0uG-0004dl-0g; Mon, 26 Oct 2020 11:43:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kX0uP-0004Cn-Dd
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 11:43:29 +0000
X-Inumbo-ID: d72a3cc1-d582-4574-9abc-3a21a6f20abf
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d72a3cc1-d582-4574-9abc-3a21a6f20abf;
	Mon, 26 Oct 2020 11:43:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hZpcNJCkIsNG8x8jMh13/9TJ4bE9DgqDGxIU+/6bDjA=; b=jmksF++zv9hXWrSSa6T7a05Ahd
	B44So7U/3s2B6DkRrFKYio8Bv5smRLIAdiDOAtdItBXEjqEN91nZnGyp4Oi42zYHK8C/g9WtxRXov
	KroSMUVNxcivhhjTJGLBC/P8vrW4M1dVlL1MBuy2RHJtONDFmaOvd0mnb7LqL9xL4T8E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX0uG-0002B2-C6; Mon, 26 Oct 2020 11:43:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX0uG-0000LH-18; Mon, 26 Oct 2020 11:43:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX0uG-0004dl-0g; Mon, 26 Oct 2020 11:43:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156239-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156239: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=4ddd6499d999a7d08cabfda5b0262e473dd5beed
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 11:43:20 +0000

flight 156239 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156239/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  4ddd6499d999a7d08cabfda5b0262e473dd5beed
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    3 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   35 attempts
Testing same since   156129  2020-10-23 18:01:24 Z    2 days   34 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, like is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>
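
    The vpath approach described above can be sketched as a hypothetical
    Makefile fragment (the paths and variable names are illustrative, not
    the actual Xen build rules):

    ```make
    # Instead of symlinking xenstored sources into the build directory,
    # let make locate them via a vpath search path and point the compiler
    # at the headers with an include path.
    vpath %.c $(XEN_ROOT)/tools/xenstored
    CFLAGS += -I$(XEN_ROOT)/tools/xenstored
    ```

    With this, make resolves any %.c prerequisite it cannot find in the
    current directory by searching the vpath directories, so no symlinks
    are needed.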

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and library-specific include paths are no
    longer needed when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image first for PV support, if this fails use
    HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, and log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 12:11:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 12:11:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12272.32040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX1Lf-0006pE-Qb; Mon, 26 Oct 2020 12:11:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12272.32040; Mon, 26 Oct 2020 12:11:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX1Lf-0006p7-NK; Mon, 26 Oct 2020 12:11:39 +0000
Received: by outflank-mailman (input) for mailman id 12272;
 Mon, 26 Oct 2020 12:11:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9RWh=EB=arm.com=ash.wilding@srs-us1.protection.inumbo.net>)
 id 1kX1Le-0006p1-Qz
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 12:11:38 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.50]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 182fb024-048d-4da1-86e4-24b37402d229;
 Mon, 26 Oct 2020 12:11:35 +0000 (UTC)
Received: from AM5PR1001CA0015.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:206:2::28)
 by DB6PR0801MB1781.eurprd08.prod.outlook.com (2603:10a6:4:3c::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Mon, 26 Oct
 2020 12:11:33 +0000
Received: from AM5EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:2:cafe::2d) by AM5PR1001CA0015.outlook.office365.com
 (2603:10a6:206:2::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Mon, 26 Oct 2020 12:11:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT062.mail.protection.outlook.com (10.152.17.120) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Mon, 26 Oct 2020 12:11:33 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64");
 Mon, 26 Oct 2020 12:11:32 +0000
Received: from 36b14b03cd5b.3
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 261000BE-B563-4EAD-A940-FD4F4E867D8A.1; 
 Mon, 26 Oct 2020 12:10:55 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 36b14b03cd5b.3
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 26 Oct 2020 12:10:55 +0000
Received: from DBBPR08MB4428.eurprd08.prod.outlook.com (2603:10a6:10:ce::21)
 by DBBPR08MB4839.eurprd08.prod.outlook.com (2603:10a6:10:da::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Mon, 26 Oct
 2020 12:10:51 +0000
Received: from DBBPR08MB4428.eurprd08.prod.outlook.com
 ([fe80::59b6:8468:7c17:b23a]) by DBBPR08MB4428.eurprd08.prod.outlook.com
 ([fe80::59b6:8468:7c17:b23a%3]) with mapi id 15.20.3499.018; Mon, 26 Oct 2020
 12:10:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=9RWh=EB=arm.com=ash.wilding@srs-us1.protection.inumbo.net>)
	id 1kX1Le-0006p1-Qz
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 12:11:38 +0000
X-Inumbo-ID: 182fb024-048d-4da1-86e4-24b37402d229
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown [40.107.22.50])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 182fb024-048d-4da1-86e4-24b37402d229;
	Mon, 26 Oct 2020 12:11:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wzH2AXF6wZ2geie3ElaMTpTH+sb1YH5ncsADnPXMFeY=;
 b=0tO/cmCQEJYQ+eFPm/DrOwQR6/mhHR/+kxfTehslWMpt1Uk0pcyLnNDrVBZCkVgM8YzWMESNcUhzeNSYccSDy++wlh2q5x1bIAHHh+WkseQfMS5oNnM7wKz1RDp83ja9HZTObqxdhlyV7mhUvZ0ARZvl97IYF4n/r+CgsFiTUXQ=
Received: from AM5PR1001CA0015.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:206:2::28)
 by DB6PR0801MB1781.eurprd08.prod.outlook.com (2603:10a6:4:3c::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Mon, 26 Oct
 2020 12:11:33 +0000
Received: from AM5EUR03FT062.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:2:cafe::2d) by AM5PR1001CA0015.outlook.office365.com
 (2603:10a6:206:2::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21 via Frontend
 Transport; Mon, 26 Oct 2020 12:11:33 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT062.mail.protection.outlook.com (10.152.17.120) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Mon, 26 Oct 2020 12:11:33 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64"); Mon, 26 Oct 2020 12:11:32 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: a94cd087977e158e
X-CR-MTA-TID: 64aa7808
Received: from 36b14b03cd5b.3
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 261000BE-B563-4EAD-A940-FD4F4E867D8A.1;
	Mon, 26 Oct 2020 12:10:55 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 36b14b03cd5b.3
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Mon, 26 Oct 2020 12:10:55 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XJz0xg2HoyAgIGzILf/3/3YCm/J9PMXQrgfKuIOoSYlkwEzaCD0TX6oxOH6iIPUCQ6rw+Lp/xmd9uYpEUNN4cBLCbwjg4OxseBmoO5hJJlaKwYxoEuAwEb+TBiY9ahnVAEurTWO4P1LDcWXa6rRjyDYb0AHEurCxbvues6fJiYvjfpaF4jUGEP2Isk+v1HN0g/WmGPoW630MBp1D8LDToiBZn48KZdgAy9flkocUPcjBKEJqVmAePLiGlEGtE8kiChG9+xlxFVA4DbsbkikTHcFiY84pBsbQnBYfmAMPObkWzQS9nTZJ5PQwMifayNhzrYucmeAY/Y8B3Zx3kCkGcA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wzH2AXF6wZ2geie3ElaMTpTH+sb1YH5ncsADnPXMFeY=;
 b=Toa9Pxig8cT/G/uqM0Xwrv6uXFwmUDXiXx8aqxL7ieXw3NyZ8c/RWDt2jZG0YautAv8rfZp+C2lTL1i7+JXSsQmgqTKU2j7DqtA7tbPjJxv93Q3NC6nc7arbMNBn+UzsWtmgsPM/Iu6qTnUKcnGZgdCKABdw7zv7t2lmr1hwMrlCKMnx8DKrc8jH9yOwlggbIqDFKU+nK9l+pGOmq0kllLCCtJ9UeLjXLaPASM4ai3/tbrWH8ojQBmhqThJM+1g0rYRWPpeUstlQd3cNEYQn1DOvdj1sD0Q0Q+LsNYNcxWvG84dQMIvBJzFpOdk0p1JACLaIAR+swddzl0BQeN7cKw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wzH2AXF6wZ2geie3ElaMTpTH+sb1YH5ncsADnPXMFeY=;
 b=0tO/cmCQEJYQ+eFPm/DrOwQR6/mhHR/+kxfTehslWMpt1Uk0pcyLnNDrVBZCkVgM8YzWMESNcUhzeNSYccSDy++wlh2q5x1bIAHHh+WkseQfMS5oNnM7wKz1RDp83ja9HZTObqxdhlyV7mhUvZ0ARZvl97IYF4n/r+CgsFiTUXQ=
Received: from DBBPR08MB4428.eurprd08.prod.outlook.com (2603:10a6:10:ce::21)
 by DBBPR08MB4839.eurprd08.prod.outlook.com (2603:10a6:10:da::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Mon, 26 Oct
 2020 12:10:51 +0000
Received: from DBBPR08MB4428.eurprd08.prod.outlook.com
 ([fe80::59b6:8468:7c17:b23a]) by DBBPR08MB4428.eurprd08.prod.outlook.com
 ([fe80::59b6:8468:7c17:b23a%3]) with mapi id 15.20.3499.018; Mon, 26 Oct 2020
 12:10:51 +0000
From: Ash Wilding <Ash.Wilding@arm.com>
To: Rahul Singh <Rahul.Singh@arm.com>, Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWpwMtpf2reu1cBEaQzJxkZyPH86mh66MAgAFiLYCAAQPggIAAwV4AgAAXwACAABiPAIAADkcAgARvegCAABLxAA==
Date: Mon, 26 Oct 2020 12:10:51 +0000
Message-ID: <BF2E5EF7-575B-4A8F-BC00-3F2B73754886@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
In-Reply-To: <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-GB
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Microsoft-MacOutlook/16.42.20101102
Authentication-Results-Original: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c09b70d9-fece-4685-7e36-08d879a84767
x-ms-traffictypediagnostic: DBBPR08MB4839:|DB6PR0801MB1781:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB1781610A645539A2A28D2B67E9190@DB6PR0801MB1781.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 FSEdRFTpu/HwG37CPoi6V+OrMIcdYe8FrA3T3uW3JE8d27O78KtXNfBiXxglROVrvGcSwNFX0usT2wvro3kwwacdyae0kLDpn7OFVRQn7r4GAGiC9r6vt+qa8zMFy2bZb4OgqzFtyO3oeNyx67kJLFHY7iqR/K3rIkcGZZyUjbOBP2lnU0JjleT6zlAz/Ld9vj6wjCDRMo987ac7NCnrc1Wo+bgl7W1UyvATqLlgpvUCARNT621ZdYJvr7Czw7QUgD7wWcbijthTl4MaEqT6pY6MulF89M5Ajyjo7ggltQYl2FkmpTBddgWvhIGMqc008JNFL0j1GPWwRlxis+erByOeEAg2CZe+n0gTvj1tBsM3Kb3OCLvY+rrE353Ub6EbIvrpTrbB+j93fRYe6tifMw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DBBPR08MB4428.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(366004)(136003)(346002)(376002)(316002)(8676002)(110136005)(54906003)(66556008)(2616005)(966005)(6486002)(76116006)(186003)(8936002)(33656002)(66946007)(71200400001)(478600001)(91956017)(36756003)(64756008)(66476007)(26005)(86362001)(2906002)(66446008)(6506007)(4326008)(6512007)(5660300002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 d825Mk0SjCof3rLHSy1sp0DDdLqkt51SL57eE+jBXs1dO4M5lJn3vGNHGXfRtjmLP1emTjq2h2JaTwYQhwur6h2MgNjsPc5fhZtwzc+7WefxG4aN4wJVo5+cZEgPauzWOkhnmgtjQXh99vADFuKgWJUgNbPjgMpZbZVfB8eBbGLvkhiNSEXwEbQFO8wOwIXXuoQWFehD0NVvnLMVcSc7OfA95jzd1fadIYd/yNuNki7eIbvxLZHxCv7hCP63cQDSv9ismbfuCMotk7wjPgNFCdJM0gEQ4CLJ0PTSMPmuVXadd6LNWW7dsb9eB+Z80XwfiYYjUSUfb40eN7BG7dYL03CUbfWpYXuprFJ6e2uSEUaLR4B9YEETa9xqoEKbicSO7yV8+i0tEdazcZZ5i1b3ojuI6NfNB8yz71AOqG6hTZlYrE6QOYkEnJXDzgdg/tv+Gyvyv3wDE+8tHOh6sEud6dM1qQrQNytil3snGyJbQrNrwSvfwW9bEgYcNNKrkQYp77OOI19kHS6SEeFBceK1w3u8R6SIoVkfvBjECuEG4MBshv95uR+yOi5gvx0oAiI6Z/ZnnhXg9E9LfMKDiWidQPBmNy6Fn9EEi05+R+Rd0A2OAR8+FeoaRzBQMmbJs2ao81a/rptLK6ORNCWO4bJr0w==
Content-Type: text/plain; charset="utf-8"
Content-ID: <9FF34A7A152DB34BB60C0122779D2AF7@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4839
Original-Authentication-Results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e1c3074a-035b-4866-28f5-08d879a82e8c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2FzVtluKJD9RbLHGI3nKaR76c/cs9PZZRGhQMJObPmb2U7iEJhy1tENler8Pmr8k+q8ptEjmO5ZSorLjklO/CksKdX4XX5Eq1cGlzkJOuHCHUcY64CJLmS4SQSF4X6tfHAemXU6Z5u/MdB936mCWvLfMNyZz/ZuuEvKeIDVb1wcrXpEQDNTjIqHZWAoYXILZF8oHCDAWq3nbo9CKdS7iIhIIggL1AB3ghnlJeK2Y4OJJdm+J9FgiJEf9hlSPf5qSs2FFaCdImkQbTfPx1iNYHEfscrFOvo/hQGJivigAY/GVSjZfiiaGijHGZ8qxoQmNhFaLenHX0IcABKGNLM3P93m1cvfXfek0SD8KEaiUZbgfcx2i5EO5qmOtwvWii77u5AN5jtNzdpHeluXFdTnA78MaGPS5XiohccZSTsSdUueJfKJxoznZgRUxgpE43NQxgON/IsyUAiFYxih1rSgnhtc5oOs/FIK71x3pKELzrOE=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(346002)(396003)(376002)(39860400002)(46966005)(82310400003)(8676002)(82740400003)(8936002)(47076004)(6512007)(6486002)(356005)(2906002)(26005)(966005)(4326008)(107886003)(86362001)(478600001)(6506007)(81166007)(186003)(70586007)(2616005)(33656002)(110136005)(5660300002)(54906003)(36906005)(336012)(36756003)(316002)(70206006);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Oct 2020 12:11:33.1627
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c09b70d9-fece-4685-7e36-08d879a84767
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT062.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1781

Hi,

> 1. atomic_set_release
> 2. atomic_fetch_andnot_relaxed
> 3. atomic_cond_read_relaxed
> 4. atomic_long_cond_read_relaxed
> 5. atomic_long_xor
> 6. atomic_set_release
> 7. atomic_cmpxchg_relaxed might be we can use atomic_cmpxchg that is
>    implemented in XEN need to check.
> 8. atomic_dec_return_release
> 9. atomic_fetch_inc_relaxed


If we're going to pull in Linux's implementations of the above atomics
helpers for SMMUv3, and given the majority of SMMUv3 systems are v8.1+
with LSE, perhaps this would be a good time to drop the current
atomic.h in Xen completely and pull in both Linux's LL/SC and LSE
helpers, then use a new Kconfig to toggle between them?

Back in 5d45ecabe3 [1] Jan mentioned we probably want to avoid relying
on gcc atomics helpers as we can't switch between LL/SC and LSE
atomics. With the above we'd be able to drop the reference to gcc's
built-in __sync_fetch_and_add() in xen/include/asm-arm/system.h by
making arch_fetch_and_add() pull in the explicit implementation of the
helper.

Thoughts?

Thanks,
Ash.

[1] https://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5d45ecabe3


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:09:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:09:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12302.32052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2Ee-0002sN-V9; Mon, 26 Oct 2020 13:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12302.32052; Mon, 26 Oct 2020 13:08:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2Ee-0002sG-Ry; Mon, 26 Oct 2020 13:08:28 +0000
Received: by outflank-mailman (input) for mailman id 12302;
 Mon, 26 Oct 2020 13:08:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kX2Ed-0002sB-J4
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:08:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0a8ec570-e1de-441a-9b92-48f0d6ad45e8;
 Mon, 26 Oct 2020 13:08:22 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX2EX-0003xP-Mq; Mon, 26 Oct 2020 13:08:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX2EX-0002ph-CG; Mon, 26 Oct 2020 13:08:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kX2EX-0005jN-Bj; Mon, 26 Oct 2020 13:08:21 +0000
X-Inumbo-ID: 0a8ec570-e1de-441a-9b92-48f0d6ad45e8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8K8aUhgD2OZ1ce00dx6zdpuLfrfrWCCFQNU3F68fngc=; b=3efXRF48RkyG3MNeTpWODIXab+
	CBWi4WpcsqMd3JZjhqLZdnGzWy2EKFJTZDPDJVtrr+3EcZIr5lJTjQw9V+JC8A108uVVpa8XLWAVI
	0JyYrMLyTdhGc/2St4yE7b9rnpHo01FjWhmvvJOb/IV0ZYSYhzO/ILAnew2qEhB57iJg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156240-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156240: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):starved:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):starved:nonblocking
    qemu-mainline:build-armhf:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 13:08:21 +0000

flight 156240 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156240/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               starved  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               starved  n/a
 build-armhf-libvirt           1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               starved  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               starved  n/a
 build-armhf                   2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                4c5b97bfd0dd54dc27717ae8d1cd10e14eef1430
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   67 days
Failing since        152659  2020-08-21 14:07:39 Z   65 days  152 attempts
Testing same since   156094  2020-10-22 15:08:36 Z    3 days   37 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  starved 
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          starved 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          starved 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  starved 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  starved 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  starved 
 test-armhf-armhf-xl-cubietruck                               starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     starved 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                starved 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 starved 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     starved 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 49957 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:27:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:27:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12311.32070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2X9-0004dH-R0; Mon, 26 Oct 2020 13:27:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12311.32070; Mon, 26 Oct 2020 13:27:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2X9-0004dA-Mt; Mon, 26 Oct 2020 13:27:35 +0000
Received: by outflank-mailman (input) for mailman id 12311;
 Mon, 26 Oct 2020 13:27:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kX2X8-0004cZ-N5
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:27:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a2c51a6-d350-48c6-8409-ee9184975ff5;
 Mon, 26 Oct 2020 13:27:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX2X1-0004Jo-20; Mon, 26 Oct 2020 13:27:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX2X0-0003cj-Qg; Mon, 26 Oct 2020 13:27:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kX2X0-00062T-QB; Mon, 26 Oct 2020 13:27:26 +0000
X-Inumbo-ID: 5a2c51a6-d350-48c6-8409-ee9184975ff5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=E5qUQxpcoFMytN1bXMVSRr1BqhvTxlqHN0+UilhCoy8=; b=svzsWaWvt6QRBFTiF3imQLQISh
	G4Zf4OuWGdQFhptVesrAJ08qazGYelEYnjraeX8NDcP/vxPTA7JKYnjOn7IyhkBLFNrJbivNWUYdG
	DvjC8Rg4BzHSoOn1/hCHmEZEUnN0qUbenvd6ZFaULQZ5wPphmIcM93/6c4Sm5cEsJxy4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156241-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156241: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=92abe1481c1181b95c7f91846bd1d77f37ee5c5e
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 13:27:26 +0000

flight 156241 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156241/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  92abe1481c1181b95c7f91846bd1d77f37ee5c5e
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    3 days
Failing since        156120  2020-10-23 14:01:24 Z    2 days   36 attempts
Testing same since   156241  2020-10-26 12:02:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 92abe1481c1181b95c7f91846bd1d77f37ee5c5e
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 25 06:45:46 2020 +0100

    tools/helpers: fix Arm build by excluding init-xenstore-domain
    
    The support for PVH xenstore-stubdom has broken the Arm build.
    
    Xenstore stubdom isn't supported on Arm, so there is no need to build
    the init-xenstore-domain helper.
    
    Build the helper on x86 only.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>
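
A stand-alone sketch of the probing order this log message describes (illustrative only; try_parse_pv() and select_domain_type() are hypothetical names, not the actual init-xenstore-domain helpers):

```c
#include <stdbool.h>
#include <stddef.h>

/* Sketch of "parse first, create later": probe the stubdom image for PV
 * support before creating the domain, falling back to PVH when the PV
 * parse fails.  The stub below only mimics the decision; the real code
 * parses the kernel image itself. */

enum domain_type { DOMAIN_PV, DOMAIN_PVH };

/* Hypothetical stand-in for the PV image parser: here, any non-NULL
 * image is treated as PV-capable. */
static bool try_parse_pv(const void *image)
{
    return image != NULL;
}

/* Select the domain type before domain creation, so the domain can be
 * created with the type the stubdom actually supports. */
static enum domain_type select_domain_type(const void *image)
{
    return try_parse_pv(image) ? DOMAIN_PV : DOMAIN_PVH;
}
```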

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the possibility of logging in init-xenstore-domain: use -v[...]
    to select the log level as in xl, and log to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:31:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12314.32081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2bD-0005TX-C9; Mon, 26 Oct 2020 13:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12314.32081; Mon, 26 Oct 2020 13:31:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2bD-0005TQ-8r; Mon, 26 Oct 2020 13:31:47 +0000
Received: by outflank-mailman (input) for mailman id 12314;
 Mon, 26 Oct 2020 13:31:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6Eey=EB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kX2bC-0005TL-3r
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:31:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24977410-667a-45d5-a7f5-07c358d0e611;
 Mon, 26 Oct 2020 13:31:45 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kX2bB-0004PW-1m; Mon, 26 Oct 2020 13:31:45 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kX2bA-0005DO-Qs; Mon, 26 Oct 2020 13:31:44 +0000
X-Inumbo-ID: 24977410-667a-45d5-a7f5-07c358d0e611
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=COH07By8MlJXLmXlds6S5U1Em2gVSNaKhuITL1stUJc=; b=M08Fs8OscJP1U4byw5mjknMsbz
	or7OcQxCp59keMhStF4aMGrG+aE9nNQcwuYRvErF8P+Ua4M3IlMDEcy/jPKr0aIKb3qqlxE3JMzEq
	7ZHAt/zgZu37aMVw5olD4n9KTbRIzpcYlpf0GRcpXZyzqsKJQcSLy3Buh2IR1dVeYa6Q=;
Subject: Re: Xen on RP4
To: Elliott Mitchell <ehem+xen@m5p.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: roman@zededa.com, xen-devel@lists.xenproject.org
References: <20201012215751.GB89158@mattapan.m5p.com>
 <c38d78bd-c011-404b-5f59-d10cd7d7f006@xen.org>
 <20201016003024.GA13290@mattapan.m5p.com>
 <23885c28-dee5-4e9a-dc43-6ccf19a94df6@xen.org>
 <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
Date: Mon, 26 Oct 2020 13:31:42 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201024053540.GA97417@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

On 24/10/2020 06:35, Elliott Mitchell wrote:
> On Fri, Oct 23, 2020 at 04:59:30PM -0700, Stefano Stabellini wrote:
>> Note that I tried to repro the issue here at my end but it works for me
>> with device tree. So the swiotlb_init memory allocation failure probably
>> only shows on ACPI, maybe because ACPI is reserving too much low memory.
> 
> Found it.  Take a look at 437b0aa06a014ce174e24c0d3530b3e9ab19b18b
> 
>   PLATFORM_START(rpi4, "Raspberry Pi 4")
>       .compatible     = rpi4_dt_compat,
>       .blacklist_dev  = rpi4_blacklist_dev,
> +    .dma_bitsize    = 30,
>   PLATFORM_END
> 
> Where this is used to match against a *device-tree*.

Right. When we introduced ACPI in Xen, we made the assumption there 
would be no need for per-platform workarounds.

> ACPI has a distinct
> means of specifying a limited DMA-width; the above fails, because it
> assumes a *device-tree*.

Do you know if it would be possible to infer the DMA width from the 
ACPI static tables?

If not, is there a way to uniquely identify the platform?
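
(A stand-alone sketch of the matching being discussed, with simplified,
hypothetical names -- the real descriptors are built with Xen's
PLATFORM_START/PLATFORM_END macros -- showing why a quirk keyed on
device-tree "compatible" strings never fires on an ACPI boot:)

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical, simplified model of Xen's per-platform quirk tables.
 * Matching keys off the device-tree "compatible" strings, so a quirk
 * such as .dma_bitsize = 30 is never applied on an ACPI boot, where no
 * device tree is consulted. */

struct platform_desc {
    const char *const *compatible; /* NULL-terminated DT compatible list */
    unsigned int dma_bitsize;      /* 0 = no DMA address-width quirk */
};

static const char *const rpi4_compat[] = { "brcm,bcm2711", NULL };

static const struct platform_desc platforms[] = {
    { rpi4_compat, 30 },           /* Raspberry Pi 4: 30-bit DMA limit */
    { NULL, 0 },                   /* sentinel */
};

/* Returns the matching quirk table, or NULL.  dt_compatible is NULL on
 * an ACPI boot -- which is exactly why the quirk is missed there. */
static const struct platform_desc *
platform_lookup(const char *dt_compatible)
{
    if (dt_compatible == NULL)     /* ACPI boot: no DT to match against */
        return NULL;

    for (const struct platform_desc *p = platforms; p->compatible; p++)
        for (const char *const *c = p->compatible; *c; c++)
            if (strcmp(*c, dt_compatible) == 0)
                return p;

    return NULL;
}
```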


Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:35:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12318.32094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2eO-0005ez-Rx; Mon, 26 Oct 2020 13:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12318.32094; Mon, 26 Oct 2020 13:35:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2eO-0005es-OR; Mon, 26 Oct 2020 13:35:04 +0000
Received: by outflank-mailman (input) for mailman id 12318;
 Mon, 26 Oct 2020 13:35:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kX2eN-0005em-60
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:35:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1e74d0d-5a5e-4627-98c8-83a66328c2b4;
 Mon, 26 Oct 2020 13:35:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 22E18AD83;
 Mon, 26 Oct 2020 13:35:01 +0000 (UTC)
X-Inumbo-ID: c1e74d0d-5a5e-4627-98c8-83a66328c2b4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603719301;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2uDrOTb0Os/dYcoOTbQJ3/XBQHfsOHXatctiWkXpN9w=;
	b=oSGxm1L+3SyqyqHuoRb6wzTCk4Vz2EwfSDYUKJav5wLxoyLmLJ+oxnvND079Yv3/lMm2/4
	dPnombvhzXvosOCysJ02C/upxLJq99Em4BFVwSOIecVhdJzWuNLL9G/KA/pReUhCAwKzYb
	fJQuZKlofsweFhuIGvXA6s8nr62qXcg=
Subject: Re: [xen-unstable-smoke test] 156241: regressions - trouble:
 blocked/fail
To: osstest service owner <osstest-admin@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <osstest-156241-mainreport@xen.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8528afe9-4225-1942-9f7c-54ec50379345@suse.com>
Date: Mon, 26 Oct 2020 14:35:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <osstest-156241-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.20 14:27, osstest service owner wrote:
> flight 156241 xen-unstable-smoke real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/156241/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>   build-amd64                   6 xen-build                fail REGR. vs. 156117
>   build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
>   build-armhf                   6 xen-build                fail REGR. vs. 156117

I'm pretty sure these failures will be fixed by my patch

"tools/libs: let build depend on official headers"


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:37:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:37:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12323.32106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2gx-0005nm-9r; Mon, 26 Oct 2020 13:37:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12323.32106; Mon, 26 Oct 2020 13:37:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2gx-0005nf-6H; Mon, 26 Oct 2020 13:37:43 +0000
Received: by outflank-mailman (input) for mailman id 12323;
 Mon, 26 Oct 2020 13:37:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oIL/=EB=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kX2gv-0005na-Ta
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:37:42 +0000
Received: from sender4-of-o53.zoho.com (unknown [136.143.188.53])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14e936e6-75fa-4637-9c88-82992d06765f;
 Mon, 26 Oct 2020 13:37:40 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1603719455384180.41777328470607;
 Mon, 26 Oct 2020 06:37:35 -0700 (PDT)
X-Inumbo-ID: 14e936e6-75fa-4637-9c88-82992d06765f
ARC-Seal: i=1; a=rsa-sha256; t=1603719457; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=LUIAeD6RP8AmRGuH0R1yMML5Os89erB7WSpNFqoYqdHaojYnlERI3bgAc4Q1Ta7a1gMebKJaDdMxYW9plCvQg2yeBQOycSM+7sln1uW3yllAgPvifWk9nQ2/EKYUtBd2qZ/BQGHPb9F2h3sawJ9Uh1KxHIJLrX6o8R3aJwUPm58=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603719457; h=Content-Type:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=SOM/fbpgIIU2RcvSQdpk47HFfpuibk2bDq/06D70Fa4=; 
	b=mwz3672jgAftYmbbSYT9sqqtEv2PUkMvCKEi/v1LuWcNslA+GOkX+oCjik1tVMBn0NR5sns7Q1XmONBxuNCnfvLW+8+c6/RawLJ95VJbYjJMDHB+GE4mK5BnlxlahRnCiAfhb6dYrFSwL12mTPGWaHyY3+7OUnI9bjEfLYFIXkE=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603719457;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=From:To:Subject:Cc:Message-ID:Date:MIME-Version:Content-Type;
	bh=SOM/fbpgIIU2RcvSQdpk47HFfpuibk2bDq/06D70Fa4=;
	b=OiZTWdmdhQeIFUBDpWPCsRcOrcKhzukP+Qb8PygPUw5UWKO/hEr2Soobu/uKEhhg
	0vPUUHKoFtedA+TXRt/S5x5VFaC+XwOP368wnoT7gpEWQCmmSVc9HIBu1nGZ21felB2
	qnAWbcdSHRPufMcRCH/UDIEXbfrq7cyaptLH8TaQ=
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Recent upgrade of 4.13 -> 4.14 issue
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Message-ID: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
Date: Mon, 26 Oct 2020 14:37:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="sm3uTXYd50zO7mXrabSd8WUl5TMONZpX9"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--sm3uTXYd50zO7mXrabSd8WUl5TMONZpX9
Content-Type: multipart/mixed; boundary="BdKjHxGSDDIoiItuAaMS0Zeu8A2aaIxqN";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Message-ID: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
Subject: Recent upgrade of 4.13 -> 4.14 issue

--BdKjHxGSDDIoiItuAaMS0Zeu8A2aaIxqN
Content-Type: multipart/mixed;
 boundary="------------355E3F64B4519B719BD94414"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------355E3F64B4519B719BD94414
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Hi all,

I'm experiencing a problem with an HP ProLiant DL360p Gen8 after a recent upgrade of 4.13 -> 4.14: dom0 and the entire system become unstable and freeze at some point.

I've managed to capture three pieces of logs (the last one after a reboot and just before a total freeze) by connecting to the serial console with the loglvl options and also redirecting the Linux kernel output to the serial console:

```
[ 2150.954883] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 2150.954934] rcu: 	7-...0: (3 GPs behind) idle=842/1/0x4000000000000000 softirq=64670/64671 fqs=14673
[ 2150.954962] 	(detected by 12, t=60002 jiffies, g=236593, q=126)
[ 2150.954984] Sending NMI from CPU 12 to CPUs 7:
[ 2160.968519] rcu: rcu_sched kthread starved for 10008 jiffies! g236593 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=9
[ 2160.968568] rcu: RCU grace-period kthread stack dump:
[ 2160.968586] rcu_sched       R  running task        0    10      2 0x80004000
[ 2160.968612] Call Trace:
[ 2160.968634]  ? xen_hypercall_xen_version+0xa/0x20
[ 2160.968658]  ? xen_force_evtchn_callback+0x9/0x10
[ 2160.968918]  ? check_events+0x12/0x20
[ 2160.968946]  ? schedule+0x39/0xa0
[ 2160.968964]  ? schedule_timeout+0x96/0x150
[ 2160.968981]  ? __next_timer_interrupt+0xd0/0xd0
[ 2160.969002]  ? rcu_gp_fqs_loop+0xea/0x2a0
[ 2160.969019]  ? rcu_gp_kthread+0xb5/0x140
[ 2160.969035]  ? rcu_gp_init+0x470/0x470
[ 2160.969052]  ? kthread+0x115/0x140
[ 2160.969067]  ? __kthread_bind_mask+0x60/0x60
[ 2160.969085]  ? ret_from_fork+0x35/0x40
```

and also

```
[ 2495.945931] CPU: 14 PID: 24181 Comm: Xorg Not tainted 5.4.72-2.qubes.x86_64 #1
[ 2495.945954] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/24/2019
[ 2495.945984] RIP: e030:smp_call_function_many+0x20a/0x270
[ 2495.946004] Code: 8a 00 3b 05 4c b5 69 01 89 c7 0f 83 89 fe ff ff 48 63 c7 49 8b 17 48 03 14 c5 80 f9 3d 82 8b 42 18 a8 01 74 09 f3 90 8b 42 18 <a8> 01 75 f7 eb c9 48 c7 c2 a0 6f 82 82 4c 89 f6 89 df e8 bf b2 8a
[ 2495.946051] RSP: e02b:ffffc90003aa7cf0 EFLAGS: 00000202
[ 2495.946068] RAX: 0000000000000003 RBX: 0000000000000010 RCX: 0000000000000007
[ 2495.946090] RDX: ffff8882413ef640 RSI: 0000000000000010 RDI: 0000000000000007
[ 2495.946113] RBP: ffffffff81082fc0 R08: 0000000000000007 R09: 0000000000000000
[ 2495.946134] R10: 0000000000000000 R11: ffffffff8265b6a8 R12: 0000000000000000
[ 2495.946156] R13: 0000000000000001 R14: 0000000000029ac0 R15: ffff8882415a9b00
[ 2495.946211] FS:  00007a0d51a91a40(0000) GS:ffff888241580000(0000) knlGS:0000000000000000
[ 2495.946235] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2495.946253] CR2: 000070abab15a000 CR3: 00000001e18ee000 CR4: 0000000000040660
[ 2495.946290] Call Trace:
[ 2495.946314]  ? do_kernel_range_flush+0x50/0x50
[ 2495.946334]  on_each_cpu+0x28/0x50
[ 2495.946354]  decrease_reservation+0x22f/0x310
[ 2495.946377]  alloc_xenballooned_pages+0xeb/0x120
[ 2495.946396]  ? __kmalloc+0x183/0x260
[ 2495.946413]  gnttab_alloc_pages+0x11/0x40
[ 2495.946434]  gntdev_alloc_map+0x12f/0x250 [xen_gntdev]
[ 2495.946454]  gntdev_ioctl_map_grant_ref+0x73/0x1d0 [xen_gntdev]
[ 2495.946479]  do_vfs_ioctl+0x2fb/0x490
[ 2495.946500]  ? syscall_trace_enter+0x1d1/0x2c0
[ 2495.946551]  ksys_ioctl+0x5e/0x90
[ 2495.946567]  __x64_sys_ioctl+0x16/0x20
[ 2495.946583]  do_syscall_64+0x5b/0xf0
[ 2495.946601]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 2495.946620] RIP: 0033:0x7a0d51f763bb
[ 2495.946727] Code: 0f 1e fa 48 8b 05 dd aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ad aa 0c 00 f7 d8 64 89 01 48
[ 2495.946804] RSP: 002b:00007fffc48d5058 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[ 2495.946863] RAX: ffffffffffffffda RBX: 0000000000001000 RCX: 00007a0d51f763bb
[ 2495.946885] RDX: 00007fffc48d5060 RSI: 0000000000184700 RDI: 0000000000000009
[ 2495.946939] RBP: 00007fffc48d5100 R08: 00007fffc48d512c R09: 00007a0d51a30010
[ 2495.946998] R10: 0000000000000000 R11: 0000000000000206 R12: 00007fffc48d5060
[ 2495.947020] R13: 0000000000000001 R14: 0000000000000009 R15: 0000000000000001
[ 2510.964667] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 2510.964716] rcu: 	7-...0: (3 GPs behind) idle=842/1/0x4000000000000000 softirq=64670/64671 fqs=96182
[ 2510.964744] 	(detected by 12, t=420012 jiffies, g=236593, q=11404)
[ 2510.964769] Sending NMI from CPU 12 to CPUs 7:
[ 2523.945643] watchdog: BUG: soft lockup - CPU#14 stuck for 22s! [Xorg:24181]
[ 2523.945686] Modules linked in: snd_seq_dummy snd_hrtimer snd_seq snd_seq_device snd_timer snd soundcore br_netfilter xt_physdev xen_netback loop bridge stp llc rfkill ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter intel_rapl_msr iTCO_wdt ipmi_ssif iTCO_vendor_support intel_rapl_common sb_edac rapl raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq pcspkr joydev hpilo lpc_ich hpwdt ioatdma dca tg3 r8169 ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter xenfs ip_tables dm_thin_pool dm_persistent_data libcrc32c dm_bio_prison dm_crypt uas usb_storage uhci_hcd crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel mgag200 drm_kms_helper serio_raw drm_vram_helper ttm drm ata_generic pata_acpi i2c_algo_bit ehci_pci ehci_hcd xhci_pci xhci_hcd hpsa scsi_transport_sas xen_privcmd xen_pciback xen_blkback xen_gntalloc xen_gntdev xen_evtchn uinput pkcs8_key_parser
[ 2523.945934] CPU: 14 PID: 24181 Comm: Xorg Tainted: G             L    5.4.72-2.qubes.x86_64 #1
[ 2523.945960] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/24/2019
[ 2523.945989] RIP: e030:smp_call_function_many+0x20a/0x270
[ 2523.946010] Code: 8a 00 3b 05 4c b5 69 01 89 c7 0f 83 89 fe ff ff 48 63 c7 49 8b 17 48 03 14 c5 80 f9 3d 82 8b 42 18 a8 01 74 09 f3 90 8b 42 18 <a8> 01 75 f7 eb c9 48 c7 c2 a0 6f 82 82 4c 89 f6 89 df e8 bf b2 8a
[ 2523.946057] RSP: e02b:ffffc90003aa7cf0 EFLAGS: 00000202
[ 2523.946075] RAX: 0000000000000003 RBX: 0000000000000010 RCX: 0000000000000007
[ 2523.946097] RDX: ffff8882413ef640 RSI: 0000000000000010 RDI: 0000000000000007
[ 2523.946119] RBP: ffffffff81082fc0 R08: 0000000000000007 R09: 0000000000000000
[ 2523.946162] R10: 0000000000000000 R11: ffffffff8265b6a8 R12: 0000000000000000
[ 2523.946184] R13: 0000000000000001 R14: 0000000000029ac0 R15: ffff8882415a9b00
[ 2523.946233] FS:  00007a0d51a91a40(0000) GS:ffff888241580000(0000) knlGS:0000000000000000
[ 2523.946256] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2523.946275] CR2: 000070abab15a000 CR3: 00000001e18ee000 CR4: 0000000000040660
[ 2523.946313] Call Trace:
[ 2523.946336]  ? do_kernel_range_flush+0x50/0x50
[ 2523.946356]  on_each_cpu+0x28/0x50
[ 2523.946376]  decrease_reservation+0x22f/0x310
[ 2523.946397]  alloc_xenballooned_pages+0xeb/0x120
[ 2523.946418]  ? __kmalloc+0x183/0x260
[ 2523.946434]  gnttab_alloc_pages+0x11/0x40
[ 2523.946457]  gntdev_alloc_map+0x12f/0x250 [xen_gntdev]
[ 2523.946478]  gntdev_ioctl_map_grant_ref+0x73/0x1d0 [xen_gntdev]
[ 2523.946502]  do_vfs_ioctl+0x2fb/0x490
[ 2523.946559]  ? syscall_trace_enter+0x1d1/0x2c0
[ 2523.946578]  ksys_ioctl+0x5e/0x90
[ 2523.946594]  __x64_sys_ioctl+0x16/0x20
[ 2523.946610]  do_syscall_64+0x5b/0xf0
[ 2523.946713]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 2523.946738] RIP: 0033:0x7a0d51f763bb
[ 2523.946782] Code: 0f 1e fa 48 8b 05 dd aa 0c 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ad aa 0c 00 f7 d8 64 89 01 48
[ 2523.946867] RSP: 002b:00007fffc48d5058 EFLAGS: 00000206 ORIG_RAX: 0000000000000010
[ 2523.946927] RAX: ffffffffffffffda RBX: 0000000000001000 RCX: 00007a0d51f763bb
[ 2523.946950] RDX: 00007fffc48d5060 RSI: 0000000000184700 RDI: 0000000000000009
[ 2523.946972] RBP: 00007fffc48d5100 R08: 00007fffc48d512c R09: 00007a0d51a30010
[ 2523.947029] R10: 0000000000000000 R11: 0000000000000206 R12: 00007fffc48d5060
[ 2523.947051] R13: 0000000000000001 R14: 0000000000000009 R15: 0000000000000001
```

and finally

```
[ 1597.971380] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 1597.971409] rcu: 	0-...0: (5 ticks this GP) idle=cd2/0/0x1 softirq=59121/59122 fqs=14998
[ 1597.971420] rcu: 	7-...0: (2 ticks this GP) idle=e9e/1/0x4000000000000000 softirq=49519/49519 fqs=14998
[ 1597.971431] 	(detected by 11, t=60002 jiffies, g=234321, q=83)
[ 1597.971441] Sending NMI from CPU 11 to CPUs 0:
[ 1597.972452] NMI backtrace for cpu 0
[ 1597.972453] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.4.72-2.qubes.x86_64 #1
[ 1597.972453] Hardware name: HP ProLiant DL360p Gen8, BIOS P71 05/24/2019
[ 1597.972454] RIP: e030:xen_hypercall_sched_op+0xa/0x20
[ 1597.972454] Code: 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
[ 1597.972455] RSP: e02b:ffffc90001403d88 EFLAGS: 00000002
[ 1597.972456] RAX: 0000000000000000 RBX: ffff888241215f80 RCX: ffffffff810013aa
[ 1597.972456] RDX: ffff88823c070428 RSI: ffffc90001403da8 RDI: 0000000000000003
[ 1597.972456] RBP: ffff8882365d8bf0 R08: ffffffff8265b6a0 R09: 0000000000000000
[ 1597.972457] R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000049
[ 1597.972457] R13: 0000000000000100 R14: ffff8882422b6640 R15: 0000000000000001
[ 1597.972458] FS:  0000729633b7d700(0000) GS:ffff888241200000(0000) knlGS:0000000000000000
[ 1597.972458] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 1597.972458] CR2: 000077f6508c47c0 CR3: 000000020d006000 CR4: 0000000000040660
[ 1597.972459] Call Trace:
[ 1597.972459]  <IRQ>
[ 1597.972459]  ? xen_poll_irq+0x73/0xa0
[ 1597.972459]  ? xen_qlock_wait+0x7b/0x80
[ 1597.972460]  ? pv_wait_head_or_lock+0x85/0xb0
[ 1597.972460]  ? __pv_queued_spin_lock_slowpath+0xb5/0x1b0
[ 1597.972460]  ? _raw_spin_lock_irqsave+0x32/0x40
[ 1597.972461]  ? bfq_finish_requeue_request+0xb5/0x120
[ 1597.972461]  ? blk_mq_free_request+0x3a/0xf0
[ 1597.972461]  ? scsi_end_request+0x95/0x140
[ 1597.972461]  ? scsi_io_completion+0x74/0x190
[ 1597.972462]  ? blk_done_softirq+0xea/0x180
[ 1597.972462]  ? __do_softirq+0xd9/0x2c8
[ 1597.972462]  ? irq_exit+0xcf/0x110
[ 1597.972462]  ? xen_evtchn_do_upcall+0x2c/0x40
[ 1597.972463]  ? xen_do_hypervisor_callback+0x29/0x40
[ 1597.972463]  </IRQ>
[ 1597.972463]  ? xen_hypercall_sched_op+0xa/0x20
[ 1597.972464]  ? xen_hypercall_sched_op+0xa/0x20
[ 1597.972464]  ? xen_safe_halt+0xc/0x20
[ 1597.972464]  ? default_idle+0x1a/0x140
[ 1597.972465]  ? cpuidle_idle_call+0x139/0x190
[ 1597.972465]  ? do_idle+0x73/0xd0
[ 1597.972465]  ? cpu_startup_entry+0x19/0x20
[ 1597.972466]  ? start_kernel+0x68a/0x6bf
[ 1597.972466]  ? xen_start_kernel+0x6a2/0x6c1
[ 1597.972470] Sending NMI from CPU 11 to CPUs 7:
[ 1607.976873] rcu: rcu_sched kthread starved for 10007 jiffies! g234321 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=11
[ 1607.976924] rcu: RCU grace-period kthread stack dump:
[ 1607.976942] rcu_sched       I    0    10      2 0x80004000
[ 1607.976972] Call Trace:
[ 1607.976999]  __schedule+0x217/0x570
[ 1607.977020]  ? schedule+0x39/0xa0
[ 1607.977038]  ? schedule_timeout+0x96/0x150
[ 1607.977056]  ? __next_timer_interrupt+0xd0/0xd0
[ 1607.977079]  ? rcu_gp_fqs_loop+0xea/0x2a0
[ 1607.977096]  ? rcu_gp_kthread+0xb5/0x140
[ 1607.977112]  ? rcu_gp_init+0x470/0x470
[ 1607.977130]  ? kthread+0x115/0x140
[ 1607.977145]  ? __kthread_bind_mask+0x60/0x60
[ 1607.977164]  ? ret_from_fork+0x35/0x40
```

I've tried increasing memory, setting the maximum number of dom0 vCPUs to 1 or 4, pinning vCPUs, and multiple 5.4 kernels too... no luck.

I've also observed that some VMs (never the same ones; PVH or HVM) randomly fail to start because they get stuck at boot time with a stack trace analogous to the first piece of log provided above. Those VMs are impossible to destroy, and the problem then "propagates" through dom0 as in the two latest pieces of logs.

If anyone has any idea of what's going on, that would be very much appreciated. Thank you.

Frédéric

--------------355E3F64B4519B719BD94414
Content-Type: application/pgp-keys;
 name="OpenPGP_0x484010B5CDC576E2.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0x484010B5CDC576E2.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0D=
y2r
UVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8vtovi=
97s
WIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIvltoBrYnEl=
D1P
yp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpXgYzTN/kq8sxBM=
h2O
rQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PLw5koqPs/6JEIVI+t0=
pyg
+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZNbYRXzkI91HCt40X2bTb2=
jTK
gvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpaA/GPaJ5DjzV0q9mkYkGDLYI3J=
/J+
s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVirEVBum723MFH4DxhTrOoWgta2nyRHO=
oi0
z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvtLbYnlSt3v32nfUXh12aQPwU/LCGIzq4oF=
NVr
Np3aWPnSajLPpQARAQABzTxGcsOpZMOpcmljIFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpY=
y5w
aWVycmV0QHF1YmVzLW9zLm9yZz7CwXgEEwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWA=
gMB
Ah4BAheAAAoJEEhAELXNxXbiPLkQAI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZ=
Jnl
+Fb5HBgthU9lBdMqNySg+s8yekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ=
9uI
TprfEqX7V2OLbrYW94qwR8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04=
Uek
t5I2Zc8iRDF9kneINiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoD=
RC/
bsAow7cBudj+lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR=
9ze
2O5Yh+/BunrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2=
VHT
fLmAOt+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1cox=
FUw
eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBkob=
1Ep
fW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKbxM/Ny=
xHr
mNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCNHe8XNVqBp=
lV0
yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOmqpbSLbra8NP8M=
u5F
ZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8V+9+dpLEU75+mpHU7=
GlE
CfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M++ZmWfEV5nCP2qvzeYDGA=
L6B
bWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr5aNCaNpv/i4gLO1IScdfDwm6g=
dfB
2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hgYlDWdbImhNL/Z7iL3eayH7T9qAVNU=
587
MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpAH+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYH=
wuc
3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYDyhxVGbeWp820cb0s1f689XCXqFYAzTfCit+Ee=
boY
ORN5CGioXzS+z0S9IhPbdUuvqs7xvC248bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9a=
Pe/
Zw4PTKWvbJlcFioofLwTQE1XvWomFPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMA=
AoJ
EEhAELXNxXbilSkP/2NcazvUDGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrP=
X8h
jITWD9ZA2bbZZ+J+a39vyY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9x=
K8m
WXwhn90SHNadEf28ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q=
8aa
+G8iAkNJcb+Wx5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Ny=
f2M
KKbWRJntjy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJ=
Us/
geoCUBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1=
KjH
uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex1=
C+w
3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PXpm5Pw=
4st
VEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQQMhGv8Dnb=
AdO
OOnumAXWq0+wl5uP
=3DRWX1
-----END PGP PUBLIC KEY BLOCK-----

--------------355E3F64B4519B719BD94414--

--BdKjHxGSDDIoiItuAaMS0Zeu8A2aaIxqN--

--sm3uTXYd50zO7mXrabSd8WUl5TMONZpX9
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAl+W0RwACgkQSEAQtc3F
duKP2BAAuYxkjd+Qj0yZeylhZtcDTjFVaa5/5yisq3wz3ECYcy2tLGgpsvalU9ja
2FfBGXyf575GkW10gk20G5GhtxRDT0Y/bjUx39t7LDMXWGOD94WTr9F/GLFcXc7+
HCSki/3fiVv2sFAUxOBg4ATo0hWT/jzCLJKlpXEVP8mV2hz68nkozyJ5XdiFdyOj
/uEjHRk4wLUi2yK++kmvMYygDxs/0VWEHTZO76IyzJf5s+BLDdWFeI84KqMRL7YL
FAlSaTEUVnbes4jOmsYLV1DPJkpuz5PnX2v4gC40B0ndiPduyMGSZHLeds1rp9y4
FRfruIVGdUA4ffjLncRpMmulu0FvFdjEQNTEf7b6OD7kZ90oGlCvV2ps6VIuYych
yuw/5kyo3pCQvGP+SuBF5WC1a5AZ6JfKaQha1k8cVWBUs1KOvaGYoaUFETGx0w6a
/o2xrEcB0+RrtLFsmozGJ26RqQHdojuZwbuQHV86fyCOoHg38Vh2n14Xu+TdRTF5
ReZjnudn1ecyE0wczIzSORF1mSjv6DffOtQh2ij9CmOLJNd1lPKwHFefLJ0ATkE9
Gxxh4akWRXL4eBjPpakab7Xj8TxMqyzKiJL6Uz9D1vJhFQemcfo6PyxgYG0548Rs
lUwhDGFPwM0SIPO9aT7QZHy/hKSKh2bFew6BQWf59CS0UrSDf3k=
=DJLH
-----END PGP SIGNATURE-----

--sm3uTXYd50zO7mXrabSd8WUl5TMONZpX9--


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:37:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:37:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12324.32118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2h7-0005rI-N9; Mon, 26 Oct 2020 13:37:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12324.32118; Mon, 26 Oct 2020 13:37:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2h7-0005rA-Ik; Mon, 26 Oct 2020 13:37:53 +0000
Received: by outflank-mailman (input) for mailman id 12324;
 Mon, 26 Oct 2020 13:37:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZY59=EB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kX2h6-0005ql-73
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:37:52 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 470bdee7-414d-41b0-8e39-299697ea26c5;
 Mon, 26 Oct 2020 13:37:51 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id v6so12077984lfa.13
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 06:37:51 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ZY59=EB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kX2h6-0005ql-73
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:37:52 +0000
X-Inumbo-ID: 470bdee7-414d-41b0-8e39-299697ea26c5
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 470bdee7-414d-41b0-8e39-299697ea26c5;
	Mon, 26 Oct 2020 13:37:51 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id v6so12077984lfa.13
        for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 06:37:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=jmxjB0lodzUkO8ogrxbdM7GY0PHknpA4mlDFBnfQsJA=;
        b=L8wsYGj47pIuJAgoKXL1immpbRX/bQzfuRytbjLVJWVlHwqQb905FwdOfnyA6oHllg
         +ptyS7q7EhwBXe0si8/X8KZ5sT9ZgGh+u8Ga8CGaFlF+4thSXMgFMcCscQVofAFYeRvz
         hTYeR/UWd9ZLijmff9LDOHROBbJgFB1oFLtdv87SG3thzCqrzNG+NrmTmk7owOhB0q2C
         Au71SKOerwD/CSltb5Vu4zyRS60rglfTAUdPyP46Ie3pel8A+bbyzAlk+SdJkYvcEWkl
         iYQKg7F0R2l4TUiesO+01hg/jP4o9ckk3YrHn5pW7442rx/x5sMgQUSPZiBCXe6buNt2
         WG2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=jmxjB0lodzUkO8ogrxbdM7GY0PHknpA4mlDFBnfQsJA=;
        b=p6Mkn/nX6dEVxQwOMlf/Jht/GUDeXGM7Y2AkudKrslEkedpIvqJ2yNuGmMr7wYM7Y7
         4wP4f6cJ8F9TzefHQEoYTQ1zKH910xtK4GWWxMv9kk6LE97Tegr3g3J3EC0nbH5Hq6Mp
         fwFHvIf/3SHCAJwkT2D7Q/jfapQbjbUjL7I31v1HiMDleYBNJkxPZUNVUhZ6nHgszosd
         xdUGuHktf5QoIZt3OnsE6LAQ8zdmJokMVFqnHikZBAQwL2WDEW+sXNdeslt3A8oiFe+H
         BCdWDSnXy9yLWe2O78iEpxjV6hM6VeK9D5TmhFZGTU8ZXUwI5G8MOCNBayI+zGBVhdkT
         /Gxg==
X-Gm-Message-State: AOAM530Boxat78+Uws+ZExDFAqDaEHJTzZKmhX+r4Ms8b0mkgk/aOjU8
	Nc6pfKXgdMYbzjvNXVAfSVRcqGGEBxVk8zb/6Ag=
X-Google-Smtp-Source: ABdhPJza02LxVp0DyRuucexgpsHIrJE1SFyPvYKMyJHMKgEB64mtFMeujsXXuToePQ70ronIrfwtd2VPDtZRubgfOXo=
X-Received: by 2002:a19:7f4a:: with SMTP id a71mr5491778lfd.202.1603719470119;
 Mon, 26 Oct 2020 06:37:50 -0700 (PDT)
MIME-Version: 1.0
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
 <f8f5f354-aa8d-4bd0-9c0e-ef37702e80c5@citrix.com> <48816c69ab2551a34c57a87392bb7f08ca6482ee.camel@xen.org>
In-Reply-To: <48816c69ab2551a34c57a87392bb7f08ca6482ee.camel@xen.org>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 26 Oct 2020 09:37:38 -0400
Message-ID: <CAKf6xpt0Kpi2ST4gfPnLrqUHE+3hHkRYpQAHPjp2vW=cHpqPAA@mail.gmail.com>
Subject: Re: XSM and the idle domain
To: Hongyan Xia <hx242@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, 
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Daniel Smith <dpsmith@apertussolutions.com>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 22, 2020 at 1:01 PM Hongyan Xia <hx242@xen.org> wrote:
>
> On Thu, 2020-10-22 at 13:51 +0100, Andrew Cooper wrote:
> > On 21/10/2020 15:34, Hongyan Xia wrote:
> > > The first question came up during ongoing work in LiveUpdate. After
> > > an
> > > LU, the next Xen needs to restore all domains. To do that, some
> > > hypercalls need to be issued from the idle domain context and
> > > apparently XSM does not like it.
> >
> > There is no such thing as issuing hypercalls from the idle domain
> > (context or otherwise), because the idle domain does not have enough
> > associated guest state for anything to make the requisite
> > SYSCALL/INT80/VMCALL/VMMCALL invocation.
> >
> > I presume from this comment that what you mean is that you're calling
> > the plain hypercall functions, context checks and everything, from
> > the
> > idle context?
>
> Yep, the restore code just calls the hypercall functions from idle
> context.
>
> > If so, this is buggy for more reasons than just XSM objecting to its
> > calling context, and that XSM is merely the first thing to explode.
> > Therefore, I don't think modifications to XSM are applicable to
> > solving
> > the problem.
> >
> > (Of course, this is all speculation because there's no concrete
> > implementation to look at.)
>
> Another explosion is the inability to create hypercall preemption,
> which for now is disabled when the calling context is the idle domain.
> Apart from XSM and preemption, the LU prototype works fine. We only
> reuse a limited number of hypercall functions and are not trying to be
> able to call all possible hypercalls from idle.

I wonder if for domain_create, it wouldn't be better to move
xsm_domain_create() out to the domctl (hypercall entry) and check it
there.  That would side-step xsm in domain_create.  Flask would need
to be modified for that.  I've an untested patch doing the
rearranging, which I'll send as a follow up.

What other hypercalls are you having issues with?  Those could also be
refactored so the hypercall entry checks permissions, and the actual
work is done in a directly callable function.

> Having a dedicated domLU just like domB (or reusing domB) sounds like a
> viable option. If the overhead can be made low enough then we won't
> need to work around XSM and hypercall preemption.
>
> Although the question was whether XSM should interact with the idle
> domain. With a good design LU should be able to sidestep this though.

Circling back to the main topic, is the idle domain Xen, or is it
distinct?  It runs in the context of Xen, so Xen isn't really in a
place to enforce policy on itself.  Hongyan, as you said earlier,
applying XSM is more of a debugging feature at that point than a
security feature.  And as Jan pointed out, you can have problems if
XSM prevents the hypervisor from performing an action it doesn't
expect to fail.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:47:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:47:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12334.32138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2q2-0006uH-Jk; Mon, 26 Oct 2020 13:47:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12334.32138; Mon, 26 Oct 2020 13:47:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2q2-0006uA-GT; Mon, 26 Oct 2020 13:47:06 +0000
Received: by outflank-mailman (input) for mailman id 12334;
 Mon, 26 Oct 2020 13:47:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZY59=EB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kX2q1-0006u5-Dc
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:47:05 +0000
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7492d56-ab58-428c-a0b8-707d243889e2;
 Mon, 26 Oct 2020 13:47:04 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id m14so646035qtc.12
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 06:47:04 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:1145:a885:8e8f:3f60])
 by smtp.gmail.com with ESMTPSA id
 o14sm6882324qto.16.2020.10.26.06.47.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 26 Oct 2020 06:47:02 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ZY59=EB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kX2q1-0006u5-Dc
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:47:05 +0000
X-Inumbo-ID: c7492d56-ab58-428c-a0b8-707d243889e2
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c7492d56-ab58-428c-a0b8-707d243889e2;
	Mon, 26 Oct 2020 13:47:04 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id m14so646035qtc.12
        for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 06:47:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=f9KRZ9bVQua0OwnsqOAio9K9V42DH2ndT61btswgFso=;
        b=C7VNL0Xq9AnuQnEflUVOeIxUOWx/9f6ZH/R2bkQsdLXe+gWIt8ZTMWXuyhLeNnpeLR
         +Jud2CQH4NzxtHFGKthI9cXw4AxSUsV03N0hAKLWirrc5aoF4Leguy/aMSjAzClc3hCy
         8xenUSwRIxAJYHEqCe/q+Fr9PL2oaGVakTk6oF1rKxKPr+/onfJ5iN/o4/jkywGQ1AjK
         Hrb2W7RUjC4IUO/cmRwKymxDsbHRx5Ni9GqS4lnJ7IJpMMeMwtEm+faRJnQV7DaUIIdo
         IP0UjNPZTk1VA6N1MHuRGC1PB/LG3M2/3e3fUZ93cc6vusFeFPSpuxHiRYYb3APF4m4D
         Awcg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=f9KRZ9bVQua0OwnsqOAio9K9V42DH2ndT61btswgFso=;
        b=MrvKHsQAEsvdkON/nAuLLcb4KmRG13lGPVUNywyvRP8ch+O4iLnLbx+IglL7TbvRqF
         Gm+O8+MWG1NqElOmtEGInmp5EThH/r6U5ur1ID7s4QfUTj/p/C3aGVQzMak7qyzt+LrC
         yNYFIQAqAJuU/IJFhOub6GCOowLnAAxHwNWWmitVpfPVJ/wYIo5dc++0ayWV6Rc7f8HK
         7FcaF6220PY2UU5+e8mwYVeiOr3KJRFG+B0tw/fdDQlpdctTEz2X/MpBV9rNFTvezW0t
         cMJH/pYgmz0rndPQL/J4h4MBl09MA16H5+lO+H78kEVzL6bIPyuGTO85BJqTNxbQHkus
         0mZA==
X-Gm-Message-State: AOAM531kPlTvdEiHO+LRw/ylArVzoc04Vx4eIjLuivDlLDNUjD1JmciQ
	IktjO0HIHudxfWzT9wf5UhobPJDXJfc=
X-Google-Smtp-Source: ABdhPJys3ry2LTW3BPWVb06qQqHt1emPhwO34Z8wtkGbI+CgBitwneTq4c9hiDja2ZIAxUWyVgsY0Q==
X-Received: by 2002:aed:3325:: with SMTP id u34mr16888054qtd.263.1603720023188;
        Mon, 26 Oct 2020 06:47:03 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:1145:a885:8e8f:3f60])
        by smtp.gmail.com with ESMTPSA id o14sm6882324qto.16.2020.10.26.06.47.01
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 26 Oct 2020 06:47:02 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org,
	hx242@xen.org
Cc: dpsmith@apertussolutions.com,
	Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [RFC PATCH] xsm: Re-work domain_create and domain_alloc_security
Date: Mon, 26 Oct 2020 09:46:51 -0400
Message-Id: <20201026134651.8162-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <CAKf6xpt0Kpi2ST4gfPnLrqUHE+3hHkRYpQAHPjp2vW=cHpqPAA@mail.gmail.com>
References: <CAKf6xpt0Kpi2ST4gfPnLrqUHE+3hHkRYpQAHPjp2vW=cHpqPAA@mail.gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Untested!

This only really matters for flask, but all of xsm is updated.

flask_domain_create() and flask_domain_alloc_security() are a strange
pair.

flask_domain_create() serves double duty.  It both assigns the sid and
self_sid values and checks whether the calling domain has permission to
create the target domain.  It also has special casing for handling dom0.
Meanwhile, flask_domain_alloc_security() assigns some special sids, but
waits for others to be assigned in flask_domain_create().  This split
seems to have come about so that the structures are allocated before
flask_domain_create() is called.  It also means flask_domain_create()
is called in the middle of domain_create().

Re-arrange the two calls.  Let flask_domain_create() just check whether
current has permission to create a domain with the given ssidref.  It
can then be moved out to do_domctl and gate entry into domain_create().
This avoids doing partial domain creation before the permission check.

Have flask_domain_alloc_security() take an ssidref argument.  The
ssidref was already permission-checked earlier, so it can just be
assigned.  The self_sid can then be calculated here as well, rather
than in flask_domain_create().

The dom0 special casing is moved into flask_domain_alloc_security().
Maybe this should just be a fall-through for the dom0-already-created
case, as this code may no longer be needed.
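As a sanity check on the intended semantics, the sid-selection logic
described above can be modeled in isolation.  This is a hypothetical,
self-contained sketch: the SID constants, the DOMID_IO value, and the
alloc_sid() helper are stand-ins for illustration, not Xen's real
definitions.

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for Xen's SECINITSID_* constants and DOMID_IO (hypothetical
 * values, for illustration only). */
enum { SID_DOMIO = 1, SID_DOM0 = 2, SID_UNLABELED = 3 };
#define DOMID_IO 0x7FF1

/* Models the reworked flask_domain_alloc_security(): the ssidref was
 * already permission-checked in do_domctl, so it is assigned directly.
 * Only the first domid-0 creation (dom0 itself) gets the dom0 SID. */
static uint32_t alloc_sid(uint16_t domid, uint32_t ssidref)
{
    static int dom0_created;

    switch ( domid )
    {
    case DOMID_IO:
        return SID_DOMIO;
    case 0:
        if ( !dom0_created )
        {
            dom0_created = 1;
            return SID_DOM0;
        }
        return SID_UNLABELED;
    default:
        return ssidref;
    }
}
```

Under this model, a second domain created with domid 0 falls back to the
unlabeled SID, which is exactly the behaviour the paragraph above leaves
open for discussion.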

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/common/domain.c     |  6 ++----
 xen/common/domctl.c     |  4 ++++
 xen/include/xsm/dummy.h |  6 +++---
 xen/include/xsm/xsm.h   | 12 +++++------
 xen/xsm/flask/hooks.c   | 48 ++++++++++++++++-------------------------
 5 files changed, 34 insertions(+), 42 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index f748806a45..6b1f5ed59d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -407,7 +407,8 @@ struct domain *domain_create(domid_t domid,
 
     lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid);
 
-    if ( (err = xsm_alloc_security_domain(d)) != 0 )
+    if ( (err = xsm_alloc_security_domain(d, config ? config->ssidref :
+                                                      0)) != 0 )
         goto fail;
 
     atomic_set(&d->refcnt, 1);
@@ -470,9 +471,6 @@ struct domain *domain_create(domid_t domid,
         if ( !d->iomem_caps || !d->irq_caps )
             goto fail;
 
-        if ( (err = xsm_domain_create(XSM_HOOK, d, config->ssidref)) != 0 )
-            goto fail;
-
         d->controller_pause_count = 1;
         atomic_inc(&d->pause_count);
 
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index af044e2eda..ffdc1a41cd 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -406,6 +406,10 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         domid_t        dom;
         static domid_t rover = 0;
 
+        ret = xsm_domain_create(XSM_HOOK, op->u.createdomain.ssidref);
+        if ( ret )
+            break;
+
         dom = op->domain;
         if ( (dom > 0) && (dom < DOMID_FIRST_RESERVED) )
         {
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 7ae3c40eb5..29c4ca9951 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -104,10 +104,10 @@ static XSM_INLINE void xsm_security_domaininfo(struct domain *d,
     return;
 }
 
-static XSM_INLINE int xsm_domain_create(XSM_DEFAULT_ARG struct domain *d, u32 ssidref)
+static XSM_INLINE int xsm_domain_create(XSM_DEFAULT_ARG u32 ssidref)
 {
     XSM_ASSERT_ACTION(XSM_HOOK);
-    return xsm_default_action(action, current->domain, d);
+    return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_getdomaininfo(XSM_DEFAULT_ARG struct domain *d)
@@ -163,7 +163,7 @@ static XSM_INLINE int xsm_readconsole(XSM_DEFAULT_ARG uint32_t clear)
     return xsm_default_action(action, current->domain, NULL);
 }
 
-static XSM_INLINE int xsm_alloc_security_domain(struct domain *d)
+static XSM_INLINE int xsm_alloc_security_domain(struct domain *d, uint32_t ssidref)
 {
     return 0;
 }
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 358ec13ba8..c1d2ef5832 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -46,7 +46,7 @@ typedef enum xsm_default xsm_default_t;
 struct xsm_operations {
     void (*security_domaininfo) (struct domain *d,
                                         struct xen_domctl_getdomaininfo *info);
-    int (*domain_create) (struct domain *d, u32 ssidref);
+    int (*domain_create) (u32 ssidref);
     int (*getdomaininfo) (struct domain *d);
     int (*domctl_scheduler_op) (struct domain *d, int op);
     int (*sysctl_scheduler_op) (int op);
@@ -71,7 +71,7 @@ struct xsm_operations {
     int (*grant_copy) (struct domain *d1, struct domain *d2);
     int (*grant_query_size) (struct domain *d1, struct domain *d2);
 
-    int (*alloc_security_domain) (struct domain *d);
+    int (*alloc_security_domain) (struct domain *d, uint32_t ssidref);
     void (*free_security_domain) (struct domain *d);
     int (*alloc_security_evtchn) (struct evtchn *chn);
     void (*free_security_evtchn) (struct evtchn *chn);
@@ -202,9 +202,9 @@ static inline void xsm_security_domaininfo (struct domain *d,
     xsm_ops->security_domaininfo(d, info);
 }
 
-static inline int xsm_domain_create (xsm_default_t def, struct domain *d, u32 ssidref)
+static inline int xsm_domain_create (xsm_default_t def, u32 ssidref)
 {
-    return xsm_ops->domain_create(d, ssidref);
+    return xsm_ops->domain_create(ssidref);
 }
 
 static inline int xsm_getdomaininfo (xsm_default_t def, struct domain *d)
@@ -305,9 +305,9 @@ static inline int xsm_grant_query_size (xsm_default_t def, struct domain *d1, st
     return xsm_ops->grant_query_size(d1, d2);
 }
 
-static inline int xsm_alloc_security_domain (struct domain *d)
+static inline int xsm_alloc_security_domain (struct domain *d, uint32_t ssidref)
 {
-    return xsm_ops->alloc_security_domain(d);
+    return xsm_ops->alloc_security_domain(d, ssidref);
 }
 
 static inline void xsm_free_security_domain (struct domain *d)
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index de050cc9fe..719fe90f22 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -156,9 +156,11 @@ static int avc_unknown_permission(const char *name, int id)
     return rc;
 }
 
-static int flask_domain_alloc_security(struct domain *d)
+static int flask_domain_alloc_security(struct domain *d, u32 ssidref)
 {
     struct domain_security_struct *dsec;
+    static int dom0_created = 0;
+    int rc;
 
     dsec = xzalloc(struct domain_security_struct);
     if ( !dsec )
@@ -175,14 +177,24 @@ static int flask_domain_alloc_security(struct domain *d)
     case DOMID_IO:
         dsec->sid = SECINITSID_DOMIO;
         break;
+    case 0:
+        if ( !dom0_created ) {
+            dsec->sid = SECINITSID_DOM0;
+            dom0_created = 1;
+        } else {
+            dsec->sid = SECINITSID_UNLABELED;
+        }
+        break;
     default:
-        dsec->sid = SECINITSID_UNLABELED;
+        dsec->sid = ssidref;
     }
 
     dsec->self_sid = dsec->sid;
     d->ssid = dsec;
 
-    return 0;
+    rc = security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN,
+                                 &dsec->self_sid);
+
+    return rc;
 }
 
 static void flask_domain_free_security(struct domain *d)
@@ -507,32 +519,10 @@ static void flask_security_domaininfo(struct domain *d,
     info->ssidref = domain_sid(d);
 }
 
-static int flask_domain_create(struct domain *d, u32 ssidref)
+static int flask_domain_create(u32 ssidref)
 {
-    int rc;
-    struct domain_security_struct *dsec = d->ssid;
-    static int dom0_created = 0;
-
-    if ( is_idle_domain(current->domain) && !dom0_created )
-    {
-        dsec->sid = SECINITSID_DOM0;
-        dom0_created = 1;
-    }
-    else
-    {
-        rc = avc_current_has_perm(ssidref, SECCLASS_DOMAIN,
-                          DOMAIN__CREATE, NULL);
-        if ( rc )
-            return rc;
-
-        dsec->sid = ssidref;
-    }
-    dsec->self_sid = dsec->sid;
-
-    rc = security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN,
-                                 &dsec->self_sid);
-
-    return rc;
+    return avc_current_has_perm(ssidref, SECCLASS_DOMAIN, DOMAIN__CREATE,
+                                NULL);
 }
 
 static int flask_getdomaininfo(struct domain *d)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:51:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:51:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12338.32150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2u0-0007kx-6P; Mon, 26 Oct 2020 13:51:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12338.32150; Mon, 26 Oct 2020 13:51:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2u0-0007kq-1f; Mon, 26 Oct 2020 13:51:12 +0000
Received: by outflank-mailman (input) for mailman id 12338;
 Mon, 26 Oct 2020 13:51:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dsar=EB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kX2tx-0007kk-UA
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:51:10 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e895d521-8cdd-46f6-ad31-ddcdb911d708;
 Mon, 26 Oct 2020 13:51:08 +0000 (UTC)
X-Inumbo-ID: e895d521-8cdd-46f6-ad31-ddcdb911d708
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603720268;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Kige2qC+WVChCsv8oUthNVYbRyRDIDCpAo5fVpsVfGU=;
  b=RqTtGMZA1Fxq00PUkHlt7TRoM21mOB7pbO8X65dZdAbdgo1c2E23TCMW
   XOu4/DTxp8PDzNYn9aEQp869wNvxQIsbMgU406mJUIq4bfGQgWgRtxQqq
   kDi4XdFQtALITKByDfEpkBgEvDenChi6FZEm8zXgY0FnhNqEC9a/sfgKt
   4=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: qNdgpFi9PYMhsprorFVrqIiX9PvX/m7CbJkn/iv/QpiWqC0vNd+NguKcQHoZKTCVqg+ch4F2bA
 bCkh1U6PjZYKnUN4KgHYMsDncR6UIThnDcWTCuNI6Ll8PTqCRI5/pQBS0ODaQD+uSmv4EtsHNa
 LFGnP5H37dnS51EVCO0kA4xy27CeRNfd1Txe5c1HUcHKhUK+HMg9sbioGMkWVwk+YUgEEkvGPf
 39sSELdfS+ANib7WUZaTcn5ku2jv0Lm5u8bKoNMyqRVMnKBe64PmPBNqcd8ta+w1uLRel/6bla
 6Bk=
X-SBRS: None
X-MesageID: 30026597
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,419,1596513600"; 
   d="scan'208";a="30026597"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/svm: Merge hsa and host_vmcb to reduce memory overhead
Date: Mon, 26 Oct 2020 13:50:43 +0000
Message-ID: <20201026135043.15560-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The format of the Host State Area is, and has always been, a VMCB.  It is
explicitly safe to put the host VMSAVE data in it.

This removes a 4k memory allocation per pCPU.
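In miniature, the change means each pCPU keeps a single 4k page whose
address doubles as the value programmed into MSR_K8_VM_HSAVE_PA and as
Xen's host VMCB mapping.  A hedged sketch, with aligned_alloc standing
in for Xen's domheap page allocator (nothing below is real Xen code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Per-pCPU state: previously two 4k pages (hsa and host_vmcb), now a
 * single page serving both roles, since the HSA's layout is a VMCB. */
struct pcpu_state {
    void *host_vmcb;
};

/* Allocate the one page and return the value that would be programmed
 * into the VM_HSAVE_PA MSR for this pCPU. */
static uintptr_t cpu_up_prepare(struct pcpu_state *s)
{
    s->host_vmcb = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
    return (uintptr_t)s->host_vmcb;
}
```

The single allocation is what replaces the separate hsa page freed and
allocated in svm_cpu_dead()/svm_cpu_up_prepare() in the diff below.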

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hvm/svm/svm.c | 27 ++++-----------------------
 1 file changed, 4 insertions(+), 23 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index cfea5b5523..9ec9ad0646 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -72,11 +72,10 @@ static void svm_update_guest_efer(struct vcpu *);
 static struct hvm_function_table svm_function_table;
 
 /*
- * Physical addresses of the Host State Area (for hardware) and vmcb (for Xen)
- * which contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state when in
- * guest vcpu context.
+ * Host State Area.  This area is used by the processor in non-root mode, and
+ * contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state required to
+ * leave guest vcpu context.
  */
-static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, hsa);
 static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, host_vmcb);
 #ifdef CONFIG_PV
 static DEFINE_PER_CPU(struct vmcb_struct *, host_vmcb_va);
@@ -1436,15 +1435,8 @@ static bool svm_event_pending(const struct vcpu *v)
 
 static void svm_cpu_dead(unsigned int cpu)
 {
-    paddr_t *this_hsa = &per_cpu(hsa, cpu);
     paddr_t *this_vmcb = &per_cpu(host_vmcb, cpu);
 
-    if ( *this_hsa )
-    {
-        free_domheap_page(maddr_to_page(*this_hsa));
-        *this_hsa = 0;
-    }
-
 #ifdef CONFIG_PV
     if ( per_cpu(host_vmcb_va, cpu) )
     {
@@ -1462,7 +1454,6 @@ static void svm_cpu_dead(unsigned int cpu)
 
 static int svm_cpu_up_prepare(unsigned int cpu)
 {
-    paddr_t *this_hsa = &per_cpu(hsa, cpu);
     paddr_t *this_vmcb = &per_cpu(host_vmcb, cpu);
     nodeid_t node = cpu_to_node(cpu);
     unsigned int memflags = 0;
@@ -1471,16 +1462,6 @@ static int svm_cpu_up_prepare(unsigned int cpu)
     if ( node != NUMA_NO_NODE )
         memflags = MEMF_node(node);
 
-    if ( !*this_hsa )
-    {
-        pg = alloc_domheap_page(NULL, memflags);
-        if ( !pg )
-            goto err;
-
-        clear_domain_page(page_to_mfn(pg));
-        *this_hsa = page_to_maddr(pg);
-    }
-
     if ( !*this_vmcb )
     {
         pg = alloc_domheap_page(NULL, memflags);
@@ -1597,7 +1578,7 @@ static int _svm_cpu_up(bool bsp)
     write_efer(read_efer() | EFER_SVME);
 
     /* Initialize the HSA for this core. */
-    wrmsrl(MSR_K8_VM_HSAVE_PA, per_cpu(hsa, cpu));
+    wrmsrl(MSR_K8_VM_HSAVE_PA, per_cpu(host_vmcb, cpu));
 
     /* check for erratum 383 */
     svm_init_erratum_383(c);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 13:55:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 13:55:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12342.32162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2y4-0007wY-N1; Mon, 26 Oct 2020 13:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12342.32162; Mon, 26 Oct 2020 13:55:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX2y4-0007wR-Jk; Mon, 26 Oct 2020 13:55:24 +0000
Received: by outflank-mailman (input) for mailman id 12342;
 Mon, 26 Oct 2020 13:55:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dsar=EB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kX2y3-0007wM-74
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 13:55:23 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8782f2f-cccf-490b-be74-30fc249871dd;
 Mon, 26 Oct 2020 13:55:21 +0000 (UTC)
X-Inumbo-ID: f8782f2f-cccf-490b-be74-30fc249871dd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603720521;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to;
  bh=wA4Bb9MdIDNPRKyJ32yU/qbfeRXOGFjmhf9hghEdhgs=;
  b=WfY4nImOp84H1iNGL1tXRIztyrIBp48g2730eK5eiHrPYDbjV0gQvGuj
   UJTUYr2EtGI4fqqvaEX2G8WEJ+prX4rUIIG1UrpkGB243XtbvO6Eay97a
   7VEEw9uWhzbyE17FbgjG0WH34FMyMedI7tzW1uOOmld8ny2+aa57er/LX
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: qpxJKNKJ347VBdUy9zJ3ZSghHRMxR/BTnjGjJLi7Kj4meswgFDkA2RguwqDnthxAPsRkifHSOt
 B9+3wWteUI1jUPRj6tT+TN8YZGs5qWOD5d2/B/TDS4cL5GZRk5HTjil8ZcRvMbH8woSwBjKmG2
 Xmdc4M2wCuGL8K/rYA25ZSkykZa3oBCZc/ak//MwjDEiuOsey7Xk6h/egsmkTZXK3slnjHBw59
 Ctmi3lp902fwYn6YfkD24YcYTZl/4UqJAar9oN4t6M89zdm3hvuiJ4U8qcqMAtLG8E/ijMD2JA
 1rg=
X-SBRS: None
X-MesageID: 29783135
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,419,1596513600"; 
   d="asc'?scan'208";a="29783135"
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
	<marmarek@invisiblethingslab.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
Date: Mon, 26 Oct 2020 13:54:38 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature";
	boundary="t9lahTrSAdTYQgHuSPbDHYIyNAZ3SGZJH"
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

--t9lahTrSAdTYQgHuSPbDHYIyNAZ3SGZJH
Content-Type: multipart/mixed; boundary="80iG2zyIyucGufSbPuRTmviQwt5RJC8dy"

--80iG2zyIyucGufSbPuRTmviQwt5RJC8dy
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB

On 26/10/2020 13:37, Frédéric Pierret wrote:
> Hi all,
>
> I'm experiencing a problem with an HP ProLiant DL360p Gen8 after a
> recent upgrade of 4.13 -> 4.14. dom0 and the entire system become
> unstable and freeze at some point.
>
> I've managed to capture three log excerpts (the last one after a
> reboot, just before the total freeze) by connecting to the serial
> console with loglvl options and also redirecting the Linux kernel
> output to the serial console:
>
> ```
> [ 2150.954883] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
> [ 2150.954934] rcu:     7-...0: (3 GPs behind)
> idle=842/1/0x4000000000000000 softirq=64670/64671 fqs=14673
> [ 2150.954962]     (detected by 12, t=60002 jiffies, g=236593, q=126)
> [ 2150.954984] Sending NMI from CPU 12 to CPUs 7:
> [ 2160.968519] rcu: rcu_sched kthread starved for 10008 jiffies!
> g236593 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=9
> [ 2160.968568] rcu: RCU grace-period kthread stack dump:
> [ 2160.968586] rcu_sched       R  running task        0    10      2
> 0x80004000
> [ 2160.968612] Call Trace:
> [ 2160.968634]  ? xen_hypercall_xen_version+0xa/0x20
> [ 2160.968658]  ? xen_force_evtchn_callback+0x9/0x10
> [ 2160.968918]  ? check_events+0x12/0x20
> [ 2160.968946]  ? schedule+0x39/0xa0
> [ 2160.968964]  ? schedule_timeout+0x96/0x150
> [ 2160.968981]  ? __next_timer_interrupt+0xd0/0xd0
> [ 2160.969002]  ? rcu_gp_fqs_loop+0xea/0x2a0
> [ 2160.969019]  ? rcu_gp_kthread+0xb5/0x140
> [ 2160.969035]  ? rcu_gp_init+0x470/0x470
> [ 2160.969052]  ? kthread+0x115/0x140
> [ 2160.969067]  ? __kthread_bind_mask+0x60/0x60
> [ 2160.969085]  ? ret_from_fork+0x35/0x40
> ```
>
> and also
>
> ```
> [ 2495.945931] CPU: 14 PID: 24181 Comm: Xorg Not tainted
> 5.4.72-2.qubes.x86_64 #1
> [ 2495.945954] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
> 05/24/2019
> [ 2495.945984] RIP: e030:smp_call_function_many+0x20a/0x270
> [ 2495.946004] Code: 8a 00 3b 05 4c b5 69 01 89 c7 0f 83 89 fe ff ff
> 48 63 c7 49 8b 17 48 03 14 c5 80 f9 3d 82 8b 42 18 a8 01 74 09 f3 90
> 8b 42 18 <a8> 01 75 f7 eb c9 48 c7 c2 a0 6f 82 82 4c 89 f6 89 df e8 bf b2
> 8a
> [ 2495.946051] RSP: e02b:ffffc90003aa7cf0 EFLAGS: 00000202
> [ 2495.946068] RAX: 0000000000000003 RBX: 0000000000000010 RCX:
> 0000000000000007
> [ 2495.946090] RDX: ffff8882413ef640 RSI: 0000000000000010 RDI:
> 0000000000000007
> [ 2495.946113] RBP: ffffffff81082fc0 R08: 0000000000000007 R09:
> 0000000000000000
> [ 2495.946134] R10: 0000000000000000 R11: ffffffff8265b6a8 R12:
> 0000000000000000
> [ 2495.946156] R13: 0000000000000001 R14: 0000000000029ac0 R15:
> ffff8882415a9b00
> [ 2495.946211] FS:  00007a0d51a91a40(0000) GS:ffff888241580000(0000)
> knlGS:0000000000000000
> [ 2495.946235] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 2495.946253] CR2: 000070abab15a000 CR3: 00000001e18ee000 CR4:
> 0000000000040660
> [ 2495.946290] Call Trace:
> [ 2495.946314]  ? do_kernel_range_flush+0x50/0x50
> [ 2495.946334]  on_each_cpu+0x28/0x50
> [ 2495.946354]  decrease_reservation+0x22f/0x310
> [ 2495.946377]  alloc_xenballooned_pages+0xeb/0x120
> [ 2495.946396]  ? __kmalloc+0x183/0x260
> [ 2495.946413]  gnttab_alloc_pages+0x11/0x40
> [ 2495.946434]  gntdev_alloc_map+0x12f/0x250 [xen_gntdev]
> [ 2495.946454]  gntdev_ioctl_map_grant_ref+0x73/0x1d0 [xen_gntdev]
> [ 2495.946479]  do_vfs_ioctl+0x2fb/0x490
> [ 2495.946500]  ? syscall_trace_enter+0x1d1/0x2c0
> [ 2495.946551]  ksys_ioctl+0x5e/0x90
> [ 2495.946567]  __x64_sys_ioctl+0x16/0x20
> [ 2495.946583]  do_syscall_64+0x5b/0xf0
> [ 2495.946601]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [ 2495.946620] RIP: 0033:0x7a0d51f763bb
> [ 2495.946727] Code: 0f 1e fa 48 8b 05 dd aa 0c 00 64 c7 00 26 00 00
> 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00
> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ad aa 0c 00 f7 d8 64 89 01
> 48
> [ 2495.946804] RSP: 002b:00007fffc48d5058 EFLAGS: 00000206 ORIG_RAX:
> 0000000000000010
> [ 2495.946863] RAX: ffffffffffffffda RBX: 0000000000001000 RCX:
> 00007a0d51f763bb
> [ 2495.946885] RDX: 00007fffc48d5060 RSI: 0000000000184700 RDI:
> 0000000000000009
> [ 2495.946939] RBP: 00007fffc48d5100 R08: 00007fffc48d512c R09:
> 00007a0d51a30010
> [ 2495.946998] R10: 0000000000000000 R11: 0000000000000206 R12:
> 00007fffc48d5060
> [ 2495.947020] R13: 0000000000000001 R14: 0000000000000009 R15:
> 0000000000000001
> [ 2510.964667] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
> [ 2510.964716] rcu:     7-...0: (3 GPs behind)
> idle=842/1/0x4000000000000000 softirq=64670/64671 fqs=96182
> [ 2510.964744]     (detected by 12, t=420012 jiffies, g=236593, q=11404)
> [ 2510.964769] Sending NMI from CPU 12 to CPUs 7:
> [ 2523.945643] watchdog: BUG: soft lockup - CPU#14 stuck for 22s!
> [Xorg:24181]
> [ 2523.945686] Modules linked in: snd_seq_dummy snd_hrtimer snd_seq
> snd_seq_device snd_timer snd soundcore br_netfilter xt_physdev
> xen_netback loop bridge stp llc rfkill ebtable_filter ebtables
> ip6table_filter ip
> 6_tables iptable_filter intel_rapl_msr iTCO_wdt ipmi_ssif
> iTCO_vendor_support intel_rapl_common sb_edac rapl raid456
> async_raid6_recov async_memcpy async_pq async_xor async_tx xor
> raid6_pq pcspkr joydev hpilo lpc
> _ich hpwdt ioatdma dca tg3 r8169 ipmi_si ipmi_devintf ipmi_msghandler
> acpi_power_meter xenfs ip_tables dm_thin_pool dm_persistent_data
> libcrc32c dm_bio_prison dm_crypt uas usb_storage uhci_hcd
> crct10dif_pclmul cr
> c32_pclmul crc32c_intel ghash_clmulni_intel mgag200 drm_kms_helper
> serio_raw drm_vram_helper ttm drm ata_generic pata_acpi i2c_algo_bit
> ehci_pci ehci_hcd xhci_pci xhci_hcd hpsa scsi_transport_sas
> xen_privcmd xen_
> pciback xen_blkback xen_gntalloc xen_gntdev xen_evtchn uinput
> pkcs8_key_parser
> [ 2523.945934] CPU: 14 PID: 24181 Comm: Xorg Tainted: G
> L    5.4.72-2.qubes.x86_64 #1
> [ 2523.945960] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
> 05/24/2019
> [ 2523.945989] RIP: e030:smp_call_function_many+0x20a/0x270
> [ 2523.946010] Code: 8a 00 3b 05 4c b5 69 01 89 c7 0f 83 89 fe ff ff
> 48 63 c7 49 8b 17 48 03 14 c5 80 f9 3d 82 8b 42 18 a8 01 74 09 f3 90
> 8b 42 18 <a8> 01 75 f7 eb c9 48 c7 c2 a0 6f 82 82 4c 89 f6 89 df e8 bf b2
> 8a
> [ 2523.946057] RSP: e02b:ffffc90003aa7cf0 EFLAGS: 00000202
> [ 2523.946075] RAX: 0000000000000003 RBX: 0000000000000010 RCX:
> 0000000000000007
> [ 2523.946097] RDX: ffff8882413ef640 RSI: 0000000000000010 RDI:
> 0000000000000007
> [ 2523.946119] RBP: ffffffff81082fc0 R08: 0000000000000007 R09:
> 0000000000000000
> [ 2523.946162] R10: 0000000000000000 R11: ffffffff8265b6a8 R12:
> 0000000000000000
> [ 2523.946184] R13: 0000000000000001 R14: 0000000000029ac0 R15:
> ffff8882415a9b00
> [ 2523.946233] FS:  00007a0d51a91a40(0000) GS:ffff888241580000(0000)
> knlGS:0000000000000000
> [ 2523.946256] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 2523.946275] CR2: 000070abab15a000 CR3: 00000001e18ee000 CR4:
> 0000000000040660
> [ 2523.946313] Call Trace:
> [ 2523.946336]  ? do_kernel_range_flush+0x50/0x50
> [ 2523.946356]  on_each_cpu+0x28/0x50
> [ 2523.946376]  decrease_reservation+0x22f/0x310
> [ 2523.946397]  alloc_xenballooned_pages+0xeb/0x120
> [ 2523.946418]  ? __kmalloc+0x183/0x260
> [ 2523.946434]  gnttab_alloc_pages+0x11/0x40
> [ 2523.946457]  gntdev_alloc_map+0x12f/0x250 [xen_gntdev]
> [ 2523.946478]  gntdev_ioctl_map_grant_ref+0x73/0x1d0 [xen_gntdev]
> [ 2523.946502]  do_vfs_ioctl+0x2fb/0x490
> [ 2523.946559]  ? syscall_trace_enter+0x1d1/0x2c0
> [ 2523.946578]  ksys_ioctl+0x5e/0x90
> [ 2523.946594]  __x64_sys_ioctl+0x16/0x20
> [ 2523.946610]  do_syscall_64+0x5b/0xf0
> [ 2523.946713]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [ 2523.946738] RIP: 0033:0x7a0d51f763bb
> [ 2523.946782] Code: 0f 1e fa 48 8b 05 dd aa 0c 00 64 c7 00 26 00 00
> 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00
> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ad aa 0c 00 f7 d8 64 89 01
> 48
> [ 2523.946867] RSP: 002b:00007fffc48d5058 EFLAGS: 00000206 ORIG_RAX:
> 0000000000000010
> [ 2523.946927] RAX: ffffffffffffffda RBX: 0000000000001000 RCX:
> 00007a0d51f763bb
> [ 2523.946950] RDX: 00007fffc48d5060 RSI: 0000000000184700 RDI:
> 0000000000000009
> [ 2523.946972] RBP: 00007fffc48d5100 R08: 00007fffc48d512c R09:
> 00007a0d51a30010
> [ 2523.947029] R10: 0000000000000000 R11: 0000000000000206 R12:
> 00007fffc48d5060
> [ 2523.947051] R13: 0000000000000001 R14: 0000000000000009 R15:
> 0000000000000001
> ```
>
> and finally
>
> ```
> [ 1597.971380] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
> [ 1597.971409] rcu:=C2=A0=C2=A0=C2=A0=C2=A0 0-...0: (5 ticks this GP) i=
dle=3Dcd2/0/0x1
> softirq=3D59121/59122 fqs=3D14998
> [ 1597.971420] rcu:=C2=A0=C2=A0=C2=A0=C2=A0 7-...0: (2 ticks this GP)
> idle=3De9e/1/0x4000000000000000 softirq=3D49519/49519 fqs=3D14998
> [ 1597.971431]=C2=A0=C2=A0=C2=A0=C2=A0 (detected by 11, t=3D60002 jiffi=
es, g=3D234321, q=3D83)
> [ 1597.971441] Sending NMI from CPU 11 to CPUs 0:
> [ 1597.972452] NMI backtrace for cpu 0
> [ 1597.972453] CPU: 0 PID: 0 Comm: swapper/0 Not tainted
> 5.4.72-2.qubes.x86_64 #1
> [ 1597.972453] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
> 05/24/2019
> [ 1597.972454] RIP: e030:xen_hypercall_sched_op+0xa/0x20
> [ 1597.972454] Code: 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc
> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00
> 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
> cc
> [ 1597.972455] RSP: e02b:ffffc90001403d88 EFLAGS: 00000002
> [ 1597.972456] RAX: 0000000000000000 RBX: ffff888241215f80 RCX:
> ffffffff810013aa
> [ 1597.972456] RDX: ffff88823c070428 RSI: ffffc90001403da8 RDI:
> 0000000000000003
> [ 1597.972456] RBP: ffff8882365d8bf0 R08: ffffffff8265b6a0 R09:
> 0000000000000000
> [ 1597.972457] R10: 0000000000000000 R11: 0000000000000202 R12:
> 0000000000000049
> [ 1597.972457] R13: 0000000000000100 R14: ffff8882422b6640 R15:
> 0000000000000001
> [ 1597.972458] FS:  0000729633b7d700(0000) GS:ffff888241200000(0000)
> knlGS:0000000000000000
> [ 1597.972458] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
> [ 1597.972458] CR2: 000077f6508c47c0 CR3: 000000020d006000 CR4:
> 0000000000040660
> [ 1597.972459] Call Trace:
> [ 1597.972459]  <IRQ>
> [ 1597.972459]  ? xen_poll_irq+0x73/0xa0
> [ 1597.972459]  ? xen_qlock_wait+0x7b/0x80
> [ 1597.972460]  ? pv_wait_head_or_lock+0x85/0xb0
> [ 1597.972460]  ? __pv_queued_spin_lock_slowpath+0xb5/0x1b0
> [ 1597.972460]  ? _raw_spin_lock_irqsave+0x32/0x40
> [ 1597.972461]  ? bfq_finish_requeue_request+0xb5/0x120
> [ 1597.972461]  ? blk_mq_free_request+0x3a/0xf0
> [ 1597.972461]  ? scsi_end_request+0x95/0x140
> [ 1597.972461]  ? scsi_io_completion+0x74/0x190
> [ 1597.972462]  ? blk_done_softirq+0xea/0x180
> [ 1597.972462]  ? __do_softirq+0xd9/0x2c8
> [ 1597.972462]  ? irq_exit+0xcf/0x110
> [ 1597.972462]  ? xen_evtchn_do_upcall+0x2c/0x40
> [ 1597.972463]  ? xen_do_hypervisor_callback+0x29/0x40
> [ 1597.972463]  </IRQ>
> [ 1597.972463]  ? xen_hypercall_sched_op+0xa/0x20
> [ 1597.972464]  ? xen_hypercall_sched_op+0xa/0x20
> [ 1597.972464]  ? xen_safe_halt+0xc/0x20
> [ 1597.972464]  ? default_idle+0x1a/0x140
> [ 1597.972465]  ? cpuidle_idle_call+0x139/0x190
> [ 1597.972465]  ? do_idle+0x73/0xd0
> [ 1597.972465]  ? cpu_startup_entry+0x19/0x20
> [ 1597.972466]  ? start_kernel+0x68a/0x6bf
> [ 1597.972466]  ? xen_start_kernel+0x6a2/0x6c1
> [ 1597.972470] Sending NMI from CPU 11 to CPUs 7:
> [ 1607.976873] rcu: rcu_sched kthread starved for 10007 jiffies!
> g234321 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=11
> [ 1607.976924] rcu: RCU grace-period kthread stack dump:
> rcu_sched       I    0    10      2 0x80004000
> [ 1607.976972] Call Trace:
> [ 1607.976999]  __schedule+0x217/0x570
> [ 1607.977020]  ? schedule+0x39/0xa0
> [ 1607.977038]  ? schedule_timeout+0x96/0x150
> [ 1607.977056]  ? __next_timer_interrupt+0xd0/0xd0
> [ 1607.977079]  ? rcu_gp_fqs_loop+0xea/0x2a0
> [ 1607.977096]  ? rcu_gp_kthread+0xb5/0x140
> [ 1607.977112]  ? rcu_gp_init+0x470/0x470
> [ 1607.977130]  ? kthread+0x115/0x140
> [ 1607.977145]  ? __kthread_bind_mask+0x60/0x60
> [ 1607.977164]  ? ret_from_fork+0x35/0x40
> ```
>
> I've tried increasing memory, setting the maximum of dom0 vcpus to 1 or
> 4, pinning vcpus, and multiple 5.4 kernels too... no luck.
>
> I've also observed that some (never the same) VMs (PVH or HVM) randomly
> fail to start because they get stuck at boot time with a stack trace
> analogous to the first piece of log provided above. Those VMs are
> impossible to destroy, and the problem then "propagates" through dom0
> as shown in the two latest pieces of log.
>
> If anyone has any idea of what's going on, it would be very much
> appreciated. Thank you.

Does booting Xen with `sched=credit` make a difference?

~Andrew


--80iG2zyIyucGufSbPuRTmviQwt5RJC8dy--

--t9lahTrSAdTYQgHuSPbDHYIyNAZ3SGZJH
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEzzVJW36m9w6nfSF2ZcP5BqXXn6AFAl+W1SUACgkQZcP5BqXX
n6DGTw/5AZ4OhkG469Z9v2QwZ2bD7OVw25TesyJSwW26M5RQ7gHyB1/pMNwSun2q
l7rhnsVxwioYDwOElnKW5KHufTisYOh6lSJ9z0QLq6TQkdKVc58GS4XsKoNFP8n6
mLkmgH8FtFxnbOUKpUd8cTyOnRrN47KxicNIvLrAb57dorkaQ0Q9EdEvG1Y37AwE
f8m2BGwNXMzDzrinYFqtjnSmeglmGNcpIFskGHikRw84Dcx90PY++o1bj1N7KOUi
Hbtxnt7rrBVkhJOQ5rfQzz+s3exa/rgW2BIr8HH79uUNNv7fyWfPiMd6iXFhJzCu
1rddbh56PjL1JS4zhnmkU/E8wUyM/Pu8TjuBBWnQGPVIcZWXNXR8hgHINuy7U2Z3
vQft/jILIrM0xE9Uz3bj4Wag5S7B+FT9IZD0I/L7WaTHSeHQO6oP+vlKbmLReM3/
r5DmHcQQC8qNSfbvBDYwD8so5vNKPgBuu1xJiN1LoblJsFoZxSRWypYDHZ9/Vvv4
fRvO8uEHwvMBhaJMlmvaEzzNSTc3f9GumjHVuhzmoBJRzV/5bDFpyQw0NRVU/VpM
/NObA54dDMMAS/pV2N52YmQRdwBBjFciOLIARe6ocQzZkOY7RjD9Q8C4UC/Lkjs7
wsOGKmth6UDFh6GNM5mUXeS2FQjmk9m1dgblnnI7ou0I81KG7rU=
=sh0S
-----END PGP SIGNATURE-----

--t9lahTrSAdTYQgHuSPbDHYIyNAZ3SGZJH--


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 14:30:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 14:30:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12365.32192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX3Vl-0003BX-VV; Mon, 26 Oct 2020 14:30:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12365.32192; Mon, 26 Oct 2020 14:30:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX3Vl-0003BQ-SQ; Mon, 26 Oct 2020 14:30:13 +0000
Received: by outflank-mailman (input) for mailman id 12365;
 Mon, 26 Oct 2020 14:30:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l58c=EB=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kX3Vk-0003BL-Cy
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:30:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01c0bbc5-7514-45db-9fbc-561b41c9faff;
 Mon, 26 Oct 2020 14:30:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7E817AD07;
 Mon, 26 Oct 2020 14:30:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603722608;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=t/VGkCCxs83EI4lKDv0uCIN0kWYhHx890tFrGpk0/t8=;
	b=NLnarj/BgJNDmTj67NOba9gwP4HlTJ6TNiq8dLDW6iW1en+oMyK3Xu0DMYXfWXdWjFw+lA
	l+HxVp9Sa6GFS+b6z/f6Fcngl1dqz5L0FFmuN2hpmpooL8b//DEV0LE+6pX9RTbF9pTxSX
	Oxbkgu+LXi7qbqIUAA7/qvz7TnZdZIg=
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <George.Dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
Date: Mon, 26 Oct 2020 15:30:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 26.10.20 14:54, Andrew Cooper wrote:
> On 26/10/2020 13:37, Frédéric Pierret wrote:
>> Hi all,
>>
>> I'm experiencing problem with a HP ProLiant DL360p Gen8 and recent
>> upgrade of 4.13 -> 4.14. dom0 and the entire system becomes unstable
>> and freeze at some point.
>>
>> I've managed to get three pieces of logs (last one after a reboot and
>> just before total freeze) by connecting to the serial console with
>> loglvl options and also redirecting linux kernel output to the serial
>> console:
>>
>> ```
>> [ 2150.954883] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
>> [ 2150.954934] rcu:     7-...0: (3 GPs behind)
>> idle=842/1/0x4000000000000000 softirq=64670/64671 fqs=14673
>> [ 2150.954962]     (detected by 12, t=60002 jiffies, g=236593, q=126)
>> [ 2150.954984] Sending NMI from CPU 12 to CPUs 7:
>> [ 2160.968519] rcu: rcu_sched kthread starved for 10008 jiffies!
>> g236593 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=9
>> [ 2160.968568] rcu: RCU grace-period kthread stack dump:
>> [ 2160.968586] rcu_sched       R  running task        0    10      2
>> 0x80004000
>> [ 2160.968612] Call Trace:
>> [ 2160.968634]  ? xen_hypercall_xen_version+0xa/0x20
>> [ 2160.968658]  ? xen_force_evtchn_callback+0x9/0x10
>> [ 2160.968918]  ? check_events+0x12/0x20
>> [ 2160.968946]  ? schedule+0x39/0xa0
>> [ 2160.968964]  ? schedule_timeout+0x96/0x150
>> [ 2160.968981]  ? __next_timer_interrupt+0xd0/0xd0
>> [ 2160.969002]  ? rcu_gp_fqs_loop+0xea/0x2a0
>> [ 2160.969019]  ? rcu_gp_kthread+0xb5/0x140
>> [ 2160.969035]  ? rcu_gp_init+0x470/0x470
>> [ 2160.969052]  ? kthread+0x115/0x140
>> [ 2160.969067]  ? __kthread_bind_mask+0x60/0x60
>> [ 2160.969085]  ? ret_from_fork+0x35/0x40
>> ```
>>
>> and also
>>
>> ```
>> [ 2495.945931] CPU: 14 PID: 24181 Comm: Xorg Not tainted
>> 5.4.72-2.qubes.x86_64 #1
>> [ 2495.945954] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
>> 05/24/2019
>> [ 2495.945984] RIP: e030:smp_call_function_many+0x20a/0x270
>> [ 2495.946004] Code: 8a 00 3b 05 4c b5 69 01 89 c7 0f 83 89 fe ff ff
>> 48 63 c7 49 8b 17 48 03 14 c5 80 f9 3d 82 8b 42 18 a8 01 74 09 f3 90
>> 8b 42 18 <a8> 01 75 f7 eb c9 48 c7 c2 a0 6f 82 82 4c 89 f6 89 df e8 bf b2
>> 8a
>> [ 2495.946051] RSP: e02b:ffffc90003aa7cf0 EFLAGS: 00000202
>> [ 2495.946068] RAX: 0000000000000003 RBX: 0000000000000010 RCX:
>> 0000000000000007
>> [ 2495.946090] RDX: ffff8882413ef640 RSI: 0000000000000010 RDI:
>> 0000000000000007
>> [ 2495.946113] RBP: ffffffff81082fc0 R08: 0000000000000007 R09:
>> 0000000000000000
>> [ 2495.946134] R10: 0000000000000000 R11: ffffffff8265b6a8 R12:
>> 0000000000000000
>> [ 2495.946156] R13: 0000000000000001 R14: 0000000000029ac0 R15:
>> ffff8882415a9b00
>> [ 2495.946211] FS:  00007a0d51a91a40(0000) GS:ffff888241580000(0000)
>> knlGS:0000000000000000
>> [ 2495.946235] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 2495.946253] CR2: 000070abab15a000 CR3: 00000001e18ee000 CR4:
>> 0000000000040660
>> [ 2495.946290] Call Trace:
>> [ 2495.946314]  ? do_kernel_range_flush+0x50/0x50
>> [ 2495.946334]  on_each_cpu+0x28/0x50
>> [ 2495.946354]  decrease_reservation+0x22f/0x310
>> [ 2495.946377]  alloc_xenballooned_pages+0xeb/0x120
>> [ 2495.946396]  ? __kmalloc+0x183/0x260
>> [ 2495.946413]  gnttab_alloc_pages+0x11/0x40
>> [ 2495.946434]  gntdev_alloc_map+0x12f/0x250 [xen_gntdev]
>> [ 2495.946454]  gntdev_ioctl_map_grant_ref+0x73/0x1d0 [xen_gntdev]
>> [ 2495.946479]  do_vfs_ioctl+0x2fb/0x490
>> [ 2495.946500]  ? syscall_trace_enter+0x1d1/0x2c0
>> [ 2495.946551]  ksys_ioctl+0x5e/0x90
>> [ 2495.946567]  __x64_sys_ioctl+0x16/0x20
>> [ 2495.946583]  do_syscall_64+0x5b/0xf0
>> [ 2495.946601]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>> [ 2495.946620] RIP: 0033:0x7a0d51f763bb
>> [ 2495.946727] Code: 0f 1e fa 48 8b 05 dd aa 0c 00 64 c7 00 26 00 00
>> 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00
>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ad aa 0c 00 f7 d8 64 89 01
>> 48
>> [ 2495.946804] RSP: 002b:00007fffc48d5058 EFLAGS: 00000206 ORIG_RAX:
>> 0000000000000010
>> [ 2495.946863] RAX: ffffffffffffffda RBX: 0000000000001000 RCX:
>> 00007a0d51f763bb
>> [ 2495.946885] RDX: 00007fffc48d5060 RSI: 0000000000184700 RDI:
>> 0000000000000009
>> [ 2495.946939] RBP: 00007fffc48d5100 R08: 00007fffc48d512c R09:
>> 00007a0d51a30010
>> [ 2495.946998] R10: 0000000000000000 R11: 0000000000000206 R12:
>> 00007fffc48d5060
>> [ 2495.947020] R13: 0000000000000001 R14: 0000000000000009 R15:
>> 0000000000000001
>> [ 2510.964667] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
>> [ 2510.964716] rcu:     7-...0: (3 GPs behind)
>> idle=842/1/0x4000000000000000 softirq=64670/64671 fqs=96182
>> [ 2510.964744]     (detected by 12, t=420012 jiffies, g=236593, q=11404)
>> [ 2510.964769] Sending NMI from CPU 12 to CPUs 7:
>> [ 2523.945643] watchdog: BUG: soft lockup - CPU#14 stuck for 22s!
>> [Xorg:24181]
>> [ 2523.945686] Modules linked in: snd_seq_dummy snd_hrtimer snd_seq
>> snd_seq_device snd_timer snd soundcore br_netfilter xt_physdev
>> xen_netback loop bridge stp llc rfkill ebtable_filter ebtables
>> ip6table_filter ip
>> 6_tables iptable_filter intel_rapl_msr iTCO_wdt ipmi_ssif
>> iTCO_vendor_support intel_rapl_common sb_edac rapl raid456
>> async_raid6_recov async_memcpy async_pq async_xor async_tx xor
>> raid6_pq pcspkr joydev hpilo lpc
>> _ich hpwdt ioatdma dca tg3 r8169 ipmi_si ipmi_devintf ipmi_msghandler
>> acpi_power_meter xenfs ip_tables dm_thin_pool dm_persistent_data
>> libcrc32c dm_bio_prison dm_crypt uas usb_storage uhci_hcd
>> crct10dif_pclmul cr
>> c32_pclmul crc32c_intel ghash_clmulni_intel mgag200 drm_kms_helper
>> serio_raw drm_vram_helper ttm drm ata_generic pata_acpi i2c_algo_bit
>> ehci_pci ehci_hcd xhci_pci xhci_hcd hpsa scsi_transport_sas
>> xen_privcmd xen_
>> pciback xen_blkback xen_gntalloc xen_gntdev xen_evtchn uinput
>> pkcs8_key_parser
>> [ 2523.945934] CPU: 14 PID: 24181 Comm: Xorg Tainted: G
>> L    5.4.72-2.qubes.x86_64 #1
>> [ 2523.945960] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
>> 05/24/2019
>> [ 2523.945989] RIP: e030:smp_call_function_many+0x20a/0x270
>> [ 2523.946010] Code: 8a 00 3b 05 4c b5 69 01 89 c7 0f 83 89 fe ff ff
>> 48 63 c7 49 8b 17 48 03 14 c5 80 f9 3d 82 8b 42 18 a8 01 74 09 f3 90
>> 8b 42 18 <a8> 01 75 f7 eb c9 48 c7 c2 a0 6f 82 82 4c 89 f6 89 df e8 bf b2
>> 8a
>> [ 2523.946057] RSP: e02b:ffffc90003aa7cf0 EFLAGS: 00000202
>> [ 2523.946075] RAX: 0000000000000003 RBX: 0000000000000010 RCX:
>> 0000000000000007
>> [ 2523.946097] RDX: ffff8882413ef640 RSI: 0000000000000010 RDI:
>> 0000000000000007
>> [ 2523.946119] RBP: ffffffff81082fc0 R08: 0000000000000007 R09:
>> 0000000000000000
>> [ 2523.946162] R10: 0000000000000000 R11: ffffffff8265b6a8 R12:
>> 0000000000000000
>> [ 2523.946184] R13: 0000000000000001 R14: 0000000000029ac0 R15:
>> ffff8882415a9b00
>> [ 2523.946233] FS:  00007a0d51a91a40(0000) GS:ffff888241580000(0000)
>> knlGS:0000000000000000
>> [ 2523.946256] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 2523.946275] CR2: 000070abab15a000 CR3: 00000001e18ee000 CR4:
>> 0000000000040660
>> [ 2523.946313] Call Trace:
>> [ 2523.946336]  ? do_kernel_range_flush+0x50/0x50
>> [ 2523.946356]  on_each_cpu+0x28/0x50
>> [ 2523.946376]  decrease_reservation+0x22f/0x310
>> [ 2523.946397]  alloc_xenballooned_pages+0xeb/0x120
>> [ 2523.946418]  ? __kmalloc+0x183/0x260
>> [ 2523.946434]  gnttab_alloc_pages+0x11/0x40
>> [ 2523.946457]  gntdev_alloc_map+0x12f/0x250 [xen_gntdev]
>> [ 2523.946478]  gntdev_ioctl_map_grant_ref+0x73/0x1d0 [xen_gntdev]
>> [ 2523.946502]  do_vfs_ioctl+0x2fb/0x490
>> [ 2523.946559]  ? syscall_trace_enter+0x1d1/0x2c0
>> [ 2523.946578]  ksys_ioctl+0x5e/0x90
>> [ 2523.946594]  __x64_sys_ioctl+0x16/0x20
>> [ 2523.946610]  do_syscall_64+0x5b/0xf0
>> [ 2523.946713]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>> [ 2523.946738] RIP: 0033:0x7a0d51f763bb
>> [ 2523.946782] Code: 0f 1e fa 48 8b 05 dd aa 0c 00 64 c7 00 26 00 00
>> 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00
>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ad aa 0c 00 f7 d8 64 89 01
>> 48
>> [ 2523.946867] RSP: 002b:00007fffc48d5058 EFLAGS: 00000206 ORIG_RAX:
>> 0000000000000010
>> [ 2523.946927] RAX: ffffffffffffffda RBX: 0000000000001000 RCX:
>> 00007a0d51f763bb
>> [ 2523.946950] RDX: 00007fffc48d5060 RSI: 0000000000184700 RDI:
>> 0000000000000009
>> [ 2523.946972] RBP: 00007fffc48d5100 R08: 00007fffc48d512c R09:
>> 00007a0d51a30010
>> [ 2523.947029] R10: 0000000000000000 R11: 0000000000000206 R12:
>> 00007fffc48d5060
>> [ 2523.947051] R13: 0000000000000001 R14: 0000000000000009 R15:
>> 0000000000000001
>> ```
>>
>> and finally
>>
>> ```
>> [ 1597.971380] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
>> [ 1597.971409] rcu:     0-...0: (5 ticks this GP) idle=cd2/0/0x1
>> softirq=59121/59122 fqs=14998
>> [ 1597.971420] rcu:     7-...0: (2 ticks this GP)
>> idle=e9e/1/0x4000000000000000 softirq=49519/49519 fqs=14998
>> [ 1597.971431]     (detected by 11, t=60002 jiffies, g=234321, q=83)
>> [ 1597.971441] Sending NMI from CPU 11 to CPUs 0:
>> [ 1597.972452] NMI backtrace for cpu 0
>> [ 1597.972453] CPU: 0 PID: 0 Comm: swapper/0 Not tainted
>> 5.4.72-2.qubes.x86_64 #1
>> [ 1597.972453] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
>> 05/24/2019
>> [ 1597.972454] RIP: e030:xen_hypercall_sched_op+0xa/0x20
>> [ 1597.972454] Code: 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc
>> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00
>> 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>> cc
>> [ 1597.972455] RSP: e02b:ffffc90001403d88 EFLAGS: 00000002
>> [ 1597.972456] RAX: 0000000000000000 RBX: ffff888241215f80 RCX:
>> ffffffff810013aa
>> [ 1597.972456] RDX: ffff88823c070428 RSI: ffffc90001403da8 RDI:
>> 0000000000000003
>> [ 1597.972456] RBP: ffff8882365d8bf0 R08: ffffffff8265b6a0 R09:
>> 0000000000000000
>> [ 1597.972457] R10: 0000000000000000 R11: 0000000000000202 R12:
>> 0000000000000049
>> [ 1597.972457] R13: 0000000000000100 R14: ffff8882422b6640 R15:
>> 0000000000000001
>> [ 1597.972458] FS:  0000729633b7d700(0000) GS:ffff888241200000(0000)
>> knlGS:0000000000000000
>> [ 1597.972458] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 1597.972458] CR2: 000077f6508c47c0 CR3: 000000020d006000 CR4:
>> 0000000000040660
>> [ 1597.972459] Call Trace:
>> [ 1597.972459]  <IRQ>
>> [ 1597.972459]  ? xen_poll_irq+0x73/0xa0
>> [ 1597.972459]  ? xen_qlock_wait+0x7b/0x80
>> [ 1597.972460]  ? pv_wait_head_or_lock+0x85/0xb0
>> [ 1597.972460]  ? __pv_queued_spin_lock_slowpath+0xb5/0x1b0
>> [ 1597.972460]  ? _raw_spin_lock_irqsave+0x32/0x40
>> [ 1597.972461]  ? bfq_finish_requeue_request+0xb5/0x120
>> [ 1597.972461]  ? blk_mq_free_request+0x3a/0xf0
>> [ 1597.972461]  ? scsi_end_request+0x95/0x140
>> [ 1597.972461]  ? scsi_io_completion+0x74/0x190
>> [ 1597.972462]  ? blk_done_softirq+0xea/0x180
>> [ 1597.972462]  ? __do_softirq+0xd9/0x2c8
>> [ 1597.972462]  ? irq_exit+0xcf/0x110
>> [ 1597.972462]  ? xen_evtchn_do_upcall+0x2c/0x40
>> [ 1597.972463]  ? xen_do_hypervisor_callback+0x29/0x40
>> [ 1597.972463]  </IRQ>
>> [ 1597.972463]  ? xen_hypercall_sched_op+0xa/0x20
>> [ 1597.972464]  ? xen_hypercall_sched_op+0xa/0x20
>> [ 1597.972464]  ? xen_safe_halt+0xc/0x20
>> [ 1597.972464]  ? default_idle+0x1a/0x140
>> [ 1597.972465]  ? cpuidle_idle_call+0x139/0x190
>> [ 1597.972465]  ? do_idle+0x73/0xd0
>> [ 1597.972465]  ? cpu_startup_entry+0x19/0x20
>> [ 1597.972466]  ? start_kernel+0x68a/0x6bf
>> [ 1597.972466]  ? xen_start_kernel+0x6a2/0x6c1
>> [ 1597.972470] Sending NMI from CPU 11 to CPUs 7:
>> [ 1607.976873] rcu: rcu_sched kthread starved for 10007 jiffies!
>> g234321 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=11
>> [ 1607.976924] rcu: RCU grace-period kthread stack dump:
>> [ 1607.976942] rcu_sched       I    0    10      2 0x80004000
>> [ 1607.976972] Call Trace:
>> [ 1607.976999]  __schedule+0x217/0x570
>> [ 1607.977020]  ? schedule+0x39/0xa0
>> [ 1607.977038]  ? schedule_timeout+0x96/0x150
>> [ 1607.977056]  ? __next_timer_interrupt+0xd0/0xd0
>> [ 1607.977079]  ? rcu_gp_fqs_loop+0xea/0x2a0
>> [ 1607.977096]  ? rcu_gp_kthread+0xb5/0x140
>> [ 1607.977112]  ? rcu_gp_init+0x470/0x470
>> [ 1607.977130]  ? kthread+0x115/0x140
>> [ 1607.977145]  ? __kthread_bind_mask+0x60/0x60
>> [ 1607.977164]  ? ret_from_fork+0x35/0x40
>> ```
>>
>> I've tried increasing memory, setting the maximum of dom0 vcpus to 1 or
>> 4, pinning vcpus, and multiple 5.4 kernels too... no luck.
>>
>> I've also observed that some (never the same) VMs (PVH or HVM) randomly
>> fail to start because they get stuck at boot time with a stack trace
>> analogous to the first piece of log provided above. Those VMs are
>> impossible to destroy, and the problem then "propagates" through dom0
>> as shown in the two latest pieces of log.
>>
>> If anyone has any idea of what's going on, it would be very much
>> appreciated. Thank you.
> 
> Does booting Xen with `sched=credit` make a difference?

Hmm, I think I have spotted a problem in credit2 which could explain the
hang:

csched2_unit_wake() will NOT put the sched unit on a runqueue if it has
CSFLAG_scheduled set. That flag is cleared only in
csched2_context_saved().

So if a vcpu (and its unit, of course) blocks, and no vcpu other than
the idle vcpu has since been active on its physical cpu, then
csched2_context_saved() is never called. As long as there is no need to
run another vcpu on that physical cpu, the woken vcpu can never become
active again, in theory for eternity.
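
To illustrate, here is a minimal C model of the interplay described
above. The function names mirror the real credit2 code, but the struct
and function bodies are simplified stand-ins, not the actual Xen source:

```c
#define CSFLAG_scheduled 0x1UL   /* unit still "owns" its physical cpu */

struct unit {
    unsigned long flags;
    int on_runqueue;
};

/* Wake path: declines to queue the unit while CSFLAG_scheduled is set,
 * relying on a later csched2_context_saved() to finish the job... */
static void csched2_unit_wake(struct unit *u)
{
    if (u->flags & CSFLAG_scheduled)
        return;
    u->on_runqueue = 1;
}

/* ...but this only happens when another vcpu's context is switched in
 * on that pcpu; if only the idle vcpu runs there, it is never called
 * and the flag is never cleared. */
static void csched2_context_saved(struct unit *u)
{
    u->flags &= ~CSFLAG_scheduled;
}
```

With only the idle vcpu running on that pcpu, the second function is
never reached, so a wakeup arriving while the flag is still set is
silently dropped.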

Thoughts?


Juergen



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 14:48:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 14:48:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12369.32204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX3nS-0004H3-HG; Mon, 26 Oct 2020 14:48:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12369.32204; Mon, 26 Oct 2020 14:48:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX3nS-0004Gw-DZ; Mon, 26 Oct 2020 14:48:30 +0000
Received: by outflank-mailman (input) for mailman id 12369;
 Mon, 26 Oct 2020 14:48:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kDWn=EB=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kX3nQ-0004Gr-Kf
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:48:28 +0000
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ea70aac-8eef-4277-9185-7b9e47d3cd1e;
 Mon, 26 Oct 2020 14:48:27 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id e2so12777175wme.1
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 07:48:27 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id o3sm24827771wru.15.2020.10.26.07.48.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 26 Oct 2020 07:48:26 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=+P4MMbjykqdf+A1BMUNnxsawJgGT9/QAIBzSHXc/WtQ=;
        b=P5947ODpcnIKsxw5ykIiONCipdkaa0kgiZ6Xiv5AKb1pTKivRmghJjLffYiw8Y4p2Y
         aOyYR0kq+W3UosP3wBniooM0mSG/3+aNY+ZBOiIo8IdFDX3KYbmnF/LdEFJekhDJs3j6
         mSHxFWjb0ue2q2FFAD61yk774n2F5FL+sYr2CF/N7yTpeIT0TElQa6DOnqFOnOMqmhVj
         glHfB3oO7dKAE9zU1I6r0v5aLo8OUyiCNJDLtovXxSHXTfzWVu2CmMgoeHohyFjxjtAg
         hcwl7OC+O+46/gZ/dS2pxY/xVnsVxf1peVZ9BEstWhIV38pFUONxaolT48iLHlmUjW4u
         NHag==
X-Gm-Message-State: AOAM531kQJm7f618mCAnplazSsi3CEBYcR8TWVyLifxyBB5ifX5Xbvdz
	/mQBnv8IJb/JUDRuON2thmzGAIJFfbo=
X-Google-Smtp-Source: ABdhPJzKlOpQX/+yh4CZgPEaIdhjcXqYtpmUtM1yHLTsNV9G08H74+UKHxtBuT57URwg9ZevaesxaQ==
X-Received: by 2002:a1c:f002:: with SMTP id a2mr17471023wmb.129.1603723707113;
        Mon, 26 Oct 2020 07:48:27 -0700 (PDT)
Date: Mon, 26 Oct 2020 14:48:24 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH] tools/libs: let build depend on official headers
Message-ID: <20201026144824.6dglorqqhabishpt@liuwe-devbox-debian-v2>
References: <20201025101129.19685-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201025101129.19685-1-jgross@suse.com>
User-Agent: NeoMutt/20180716

On Sun, Oct 25, 2020 at 11:11:29AM +0100, Juergen Gross wrote:
> The build target of a library should depend on the official headers
> of that library, too, as those might be required for building other
> tools.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 14:49:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 14:49:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12372.32216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX3oX-0004Nc-R7; Mon, 26 Oct 2020 14:49:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12372.32216; Mon, 26 Oct 2020 14:49:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX3oX-0004NV-Nh; Mon, 26 Oct 2020 14:49:37 +0000
Received: by outflank-mailman (input) for mailman id 12372;
 Mon, 26 Oct 2020 14:49:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kDWn=EB=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kX3oW-0004NM-CV
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:49:36 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 947a35a7-d3ba-4566-8a2f-8079694e7140;
 Mon, 26 Oct 2020 14:49:35 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id h5so12909753wrv.7
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 07:49:34 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c18sm22280009wrq.5.2020.10.26.07.49.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 26 Oct 2020 07:49:33 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to:user-agent;
        bh=x/kvi4dGENrB3A6TGaDyypQDm6iRWN/t/maDZTvbmPc=;
        b=OaqLcDD+95YGk5VJ/zCgDEGFT4hWZDtnd5tKj4xVWGGKIlv+wGzAtZ5UZQv+eKZIqR
         +nVIw8qpgoQnxXHjWs6VUICcm47MYj2uHzB0ez8RCxV+Is3YjHttOPwuDIos5ISpGuZy
         LJWa6CDdt7rci8rv5+EX9JE/yAIl4iPJoVqLtzcvQ98/GJdmuMNXDhlE32qwPCkOCh0/
         R6H3zQKm+gApq3u6RIcTQpPBymUeq47JyeE4gdJNlvUZGnDDyN1PvE4EBTbrK8p/d71q
         NQamc2RcLkfXN5qs4rk/TtBmGLuy4vZAHCDNqfnQtvqUcKWPVy1JolB+QBo9cSu2r8Ih
         DWCg==
X-Gm-Message-State: AOAM5309te1lanFnmlgTBUpERIsiNQ+f68qQtlHYZwGxWhvPs0sCHwxS
	vjYNyM/oOeahU14MPbrnnO4=
X-Google-Smtp-Source: ABdhPJw8zbFdbIWfe33eQyaLtKeddr3v1dWgxf8PsVKy/afKJsKkeRRD7jD8iRel3SEF9chhobs2qw==
X-Received: by 2002:a05:6000:10c6:: with SMTP id b6mr18757084wrx.10.1603723773711;
        Mon, 26 Oct 2020 07:49:33 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id c18sm22280009wrq.5.2020.10.26.07.49.33
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 26 Oct 2020 07:49:33 -0700 (PDT)
Date: Mon, 26 Oct 2020 14:49:31 +0000
From: Wei Liu <wl@xen.org>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: osstest service owner <osstest-admin@xenproject.org>,
	xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [xen-unstable-smoke test] 156241: regressions - trouble:
 blocked/fail
Message-ID: <20201026144931.cvrc72esryhcnwxy@liuwe-devbox-debian-v2>
References: <osstest-156241-mainreport@xen.org>
 <8528afe9-4225-1942-9f7c-54ec50379345@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8528afe9-4225-1942-9f7c-54ec50379345@suse.com>
User-Agent: NeoMutt/20180716

On Mon, Oct 26, 2020 at 02:35:00PM +0100, Jürgen Groß wrote:
> On 26.10.20 14:27, osstest service owner wrote:
> > flight 156241 xen-unstable-smoke real [real]
> > http://logs.test-lab.xenproject.org/osstest/logs/156241/
> > 
> > Regressions :-(
> > 
> > Tests which did not succeed and are blocking,
> > including tests which could not be run:
> >   build-amd64                   6 xen-build                fail REGR. vs. 156117
> >   build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
> >   build-armhf                   6 xen-build                fail REGR. vs. 156117
> 
> I'm pretty sure these failures will be fixed by my patch
> 
> "tools/libs: let build depend on official headers"
> 

I've applied that patch. Let's see how it goes.

Wei.


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 14:52:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 14:52:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12377.32228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX3rf-0005DA-AG; Mon, 26 Oct 2020 14:52:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12377.32228; Mon, 26 Oct 2020 14:52:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX3rf-0005D3-7N; Mon, 26 Oct 2020 14:52:51 +0000
Received: by outflank-mailman (input) for mailman id 12377;
 Mon, 26 Oct 2020 14:52:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cOiY=EB=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
 id 1kX3rd-0005Cu-R9
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:52:49 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3e9d222-4b93-4956-936a-306462ed313d;
 Mon, 26 Oct 2020 14:52:48 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kX3rc-0006AX-Hp
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:52:48 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1kX3rc-000431-Fv
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:52:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1kX3rZ-0008BG-91; Mon, 26 Oct 2020 14:52:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=cOiY=EB=xenproject.org=iwj@srs-us1.protection.inumbo.net>)
	id 1kX3rd-0005Cu-R9
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:52:49 +0000
X-Inumbo-ID: a3e9d222-4b93-4956-936a-306462ed313d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a3e9d222-4b93-4956-936a-306462ed313d;
	Mon, 26 Oct 2020 14:52:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=J4t3I45qo0MSJRdxlzUiS2RCquAvBHxQiIf0rb90sLg=; b=FYbVfupl8uBi97NgkR99qB8aby
	WkTOazNURWGuR3U/eIudK1E1jec3QA5GEJkUFKA01eCxWu2mtdna8B0erG8+KBsCGuEbmdiDsXFkM
	P+wpZuYifnJwMWr87KEK4VoyNg3A9Ft/7x8wlNM6jBdIUMQGuA2EGiPeifU2g+oh1JZc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <iwj@xenproject.org>)
	id 1kX3rc-0006AX-Hp
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:52:48 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <iwj@xenproject.org>)
	id 1kX3rc-000431-Fv
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 14:52:48 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
	(envelope-from <iwj@xenproject.org>)
	id 1kX3rZ-0008BG-91; Mon, 26 Oct 2020 14:52:45 +0000
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24470.58045.89332.837032@mariner.uk.xensource.com>
Date: Mon, 26 Oct 2020 14:52:45 +0000
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [OSSTEST PATCH] ts-xen-build-prep: Install ninja
In-Reply-To: <20201020093549.270000-1-anthony.perard@citrix.com>
References: <20201020093549.270000-1-anthony.perard@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Anthony PERARD writes ("[OSSTEST PATCH] ts-xen-build-prep: Install ninja"):
> QEMU upstream now requires ninja to build. (Probably since QEMU commit
> 09e93326e448 ("build: replace ninjatool with ninja"))
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

and pushed, thanks.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 15:41:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 15:41:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12386.32246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4cr-00015O-3n; Mon, 26 Oct 2020 15:41:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12386.32246; Mon, 26 Oct 2020 15:41:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4cr-00015H-0n; Mon, 26 Oct 2020 15:41:37 +0000
Received: by outflank-mailman (input) for mailman id 12386;
 Mon, 26 Oct 2020 15:41:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kX4cp-00014j-S4
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 15:41:35 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57d3b5c1-7e37-42bc-a951-708c79b435f2;
 Mon, 26 Oct 2020 15:41:31 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX4ck-0007AJ-LV; Mon, 26 Oct 2020 15:41:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX4ck-0006fq-Da; Mon, 26 Oct 2020 15:41:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kX4ck-0000Z0-D5; Mon, 26 Oct 2020 15:41:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kX4cp-00014j-S4
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 15:41:35 +0000
X-Inumbo-ID: 57d3b5c1-7e37-42bc-a951-708c79b435f2
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 57d3b5c1-7e37-42bc-a951-708c79b435f2;
	Mon, 26 Oct 2020 15:41:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XKX9I5+5sJAKz4+fIe65dvMrou+WmTvzVVrWqV13UH8=; b=my+/9WjkUsWgAes/4ETVA70AN6
	1JX+lZ7Xw+xrQeBXLEQe67B0wBPTPRXFr+hzUcIibDlWYKehjspA3ZdmKFdDy9pDAV23j1tRlrFY0
	mTwKCDQhkTEvFcq37PbWPTmT7CR8pBjqr0kdV2SU9YhjyVgFJZBhSQ0UMW7+Y+ryoHLU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX4ck-0007AJ-LV; Mon, 26 Oct 2020 15:41:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX4ck-0006fq-Da; Mon, 26 Oct 2020 15:41:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX4ck-0000Z0-D5; Mon, 26 Oct 2020 15:41:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156243-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156243: regressions - trouble: blocked/fail
X-Osstest-Failures:
    xen-unstable-smoke:build-amd64:xen-build:fail:regression
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=20cd1be5e29577ba4c6c952cc86dfd7cfbd841b3
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 15:41:30 +0000

flight 156243 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156243/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 156117
 build-arm64-xsm               6 xen-build                fail REGR. vs. 156117
 build-armhf                   6 xen-build                fail REGR. vs. 156117

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  20cd1be5e29577ba4c6c952cc86dfd7cfbd841b3
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    3 days
Failing since        156120  2020-10-23 14:01:24 Z    3 days   37 attempts
Testing same since   156243  2020-10-26 14:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 20cd1be5e29577ba4c6c952cc86dfd7cfbd841b3
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Oct 26 14:39:42 2020 +0100

    PCI: drop dead pci_lock_*pdev() declarations
    
    They have no definitions, and hence no users, anywhere.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 2a758376f9e2bd277b6067952517a301da87dc86
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon Oct 26 14:38:35 2020 +0100

    AMD/IOMMU: correct shattering of super pages
    
    Fill the new page table _before_ installing into a live page table
    hierarchy, as installing a blank page first risks I/O faults on
    sub-ranges of the original super page which aren't part of the range
    for which mappings are being updated.
    
    While at it also do away with mapping and unmapping the same fresh
    intermediate page table page once per entry to be written.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit 92abe1481c1181b95c7f91846bd1d77f37ee5c5e
Author: Juergen Gross <jgross@suse.com>
Date:   Sun Oct 25 06:45:46 2020 +0100

    tools/helpers: fix Arm build by excluding init-xenstore-domain
    
    The support for PVH xenstore-stubdom has broken the Arm build.
    
    Xenstore stubdom isn't supported on Arm, so there is no need to build
    the init-xenstore-domain helper.
    
    Build the helper on x86 only.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4ddd6499d999a7d08cabfda5b0262e473dd5beed
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Sun May 24 22:55:06 2020 -0400

    SUPPORT: Add linux device model stubdom to Toolstack
    
    Add qemu-xen linux device model stubdomain to the Toolstack section as a
    Tech Preview.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 06f0598b41f23c9e4cf7d8c5a05b282de92f3a35
Author: Jan Beulich <jbeulich@suse.com>
Date:   Fri Oct 23 18:03:18 2020 +0200

    x86emul: fix PINSRW and adjust other {,V}PINSR*
    
    The use of simd_packed_int together with no further update to op_bytes
    has led to wrong signaling of #GP(0) for PINSRW without a 16-byte
    aligned memory operand. Use simd_none instead and override it after
    general decoding with simd_other, as is done for the B/D/Q siblings.
    
    While benign, for consistency also use DstImplicit instead of DstReg
    in x86_decode_twobyte().
    
    PINSR{B,D,Q} also had a stray (redundant) get_fpu() invocation, which
    gets dropped.
    
    For further consistency also
    - use src.bytes instead of op_bytes in relevant memcpy() invocations,
    - avoid the pointless updating of op_bytes (all we care about later is
      that the value be less than 16).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9af5e2b31b4e6f3892b4614ecd0a619af5d64d7e
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/store: don't use symbolic links for external files
    
    Instead of using symbolic links to include files from xenstored, use
    the vpath directive and an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 588756db020e73e6f5e4407bbf78fbd53f15b731
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs/guest: don't use symbolic links for xenctrl headers
    
    Instead of using symbolic links for accessing the xenctrl private
    headers, use an include path.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 4664034cdc720a52913bc26358240bb9d3798527
Author: Juergen Gross <jgross@suse.com>
Date:   Mon Oct 19 17:27:54 2020 +0200

    tools/libs: move official headers to common directory
    
    Instead of each library having its own include directory, move the
    official headers to tools/include. This drops the need to link those
    headers into tools/include, and there is no longer any need for
    library-specific include paths when building Xen.
    
    While at it remove setting of the unused variable
    PKG_CONFIG_CFLAGS_LOCAL in libs/*/Makefile.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Christian Lindig <christian.lindig@citrix.com>
    Tested-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Acked-by: Ian Jackson <iwj@xenproject.org>

commit 154137dfdba334348887baf0be9693c407f7cef3
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Oct 7 08:50:03 2020 +0200

    stubdom: add xenstore pvh stubdom
    
    Add a PVH xenstore stubdom in order to support a Xenstore stubdom on
    a hypervisor built without PV support.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Acked-by: Wei Liu <wl@xen.org>

commit f89955449c5a47ff688e91873bbce4c3670ed9fe
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:10 2020 +0200

    tools/init-xenstore-domain: support xenstore pvh stubdom
    
    Instead of creating the xenstore-stubdom domain first and parsing the
    kernel later, do it the other way round. This makes it possible to
    probe for the domain type supported by the xenstore-stubdom and to
    support both PV and PVH type stubdoms.
    
    Try to parse the stubdom image for PV support first; if this fails,
    use HVM. Then create the domain with the appropriate type selected.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 56c1aca6a2bc013f45e7af2fa88605a693402770
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Oct 23 15:53:09 2020 +0200

    tools/init-xenstore-domain: add logging
    
    Add the ability to log in init-xenstore-domain: use -v[...] to
    select the log level as in xl, logging to stderr.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 70cf8e9acada638f68c1c597d7580500d9f21c91
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:44 2020 +0200

    maintainers: remove unreachable remus maintainer
    
    Mails to Yang Hongyang are bouncing; remove him from the MAINTAINERS
    file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 032a96e5ef38f96eccfebbf8a0dbd83dc7beb625
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Sep 9 13:59:43 2020 +0200

    maintainers: fix libxl paths
    
    Fix the paths of libxl in the MAINTAINERS file.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)
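[Editorial note] The AMD/IOMMU fix in commit 2a758376 above rests on a general ordering rule: populate a replacement page table completely before publishing it into the live hierarchy. A minimal sketch of that pattern follows; all names, the entry layout, and the flag handling are illustrative, not Xen's real IOMMU code.

```c
#include <stdint.h>

#define ENTRIES 512            /* entries per table (illustrative) */

typedef struct { uint64_t entry[ENTRIES]; } pagetable;

/* Shatter a super-page: build the lower-level table first, install it
 * last.  Installing a blank table first would open a window in which
 * lookups of untouched sub-ranges of the old super page fault -- the
 * exact hazard the commit message describes. */
static void shatter_superpage(uint64_t *live_slot, pagetable *fresh,
                              uint64_t base_pfn, uint64_t flags)
{
    /* Step 1: fill every entry of the fresh table. */
    for (unsigned int i = 0; i < ENTRIES; i++)
        fresh->entry[i] = ((base_pfn + i) << 12) | flags;

    /* Step 2: only now publish it into the live hierarchy. */
    *live_slot = (uint64_t)(uintptr_t)fresh | flags;
}
```

In real code step 2 would be a single atomic store followed by whatever flushing the IOMMU requires; the sketch only captures the fill-before-install ordering.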


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 15:47:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 15:47:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12391.32259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4ip-0001J7-Um; Mon, 26 Oct 2020 15:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12391.32259; Mon, 26 Oct 2020 15:47:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4ip-0001J0-Rj; Mon, 26 Oct 2020 15:47:47 +0000
Received: by outflank-mailman (input) for mailman id 12391;
 Mon, 26 Oct 2020 15:47:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kX4in-0001Iv-Uv
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 15:47:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7dbbc6a3-e7b4-4705-8724-5ce441cfc786;
 Mon, 26 Oct 2020 15:47:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45B4BACAA;
 Mon, 26 Oct 2020 15:47:44 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kX4in-0001Iv-Uv
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 15:47:45 +0000
X-Inumbo-ID: 7dbbc6a3-e7b4-4705-8724-5ce441cfc786
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7dbbc6a3-e7b4-4705-8724-5ce441cfc786;
	Mon, 26 Oct 2020 15:47:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603727264;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Uh+v4GctdV0LWTJJQA59Ypv4Eglj9jnoaPqzqi+vSj8=;
	b=JOiGwSNmHbMveCsobWJwSRAqtNrallSIqluUWR0uzNYILIfPxALmOkx3vGkr147XccW/6g
	x9Aqim03e5WvPHqHu3RNTJlA6iQWAl/UkTA65rh/C6hnPXJo6mpsAKrnfdHQfzTXPk3ZGy
	t3yyOmvPj3h5JR/HsTklxttGmnXHzf8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 45B4BACAA;
	Mon, 26 Oct 2020 15:47:44 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: don't open-code l<N>e_to_mfn()
Message-ID: <2c72239a-46e9-4822-fa4b-5a0155c5c5e5@suse.com>
Date: Mon, 26 Oct 2020 16:47:43 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -779,9 +779,9 @@ pod_retry_l3:
         }
         if ( flags & _PAGE_PSE )
         {
-            mfn = _mfn(l3e_get_pfn(*l3e) +
-                       l2_table_offset(addr) * L1_PAGETABLE_ENTRIES +
-                       l1_table_offset(addr));
+            mfn = mfn_add(l3e_get_mfn(*l3e),
+                          l2_table_offset(addr) * L1_PAGETABLE_ENTRIES +
+                          l1_table_offset(addr));
             *t = p2m_recalc_type(recalc || _needs_recalc(flags),
                                  p2m_flags_to_type(flags), p2m, gfn);
             unmap_domain_page(l3e);
@@ -820,7 +820,7 @@ pod_retry_l2:
     }
     if ( flags & _PAGE_PSE )
     {
-        mfn = _mfn(l2e_get_pfn(*l2e) + l1_table_offset(addr));
+        mfn = mfn_add(l2e_get_mfn(*l2e), l1_table_offset(addr));
         *t = p2m_recalc_type(recalc || _needs_recalc(flags),
                              p2m_flags_to_type(flags), p2m, gfn);
         unmap_domain_page(l2e);
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -291,7 +291,7 @@ void copy_page_sse2(void *, const void *
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
+#define vmap_to_mfn(va)     l1e_get_mfn(*virt_to_xen_l1e((unsigned long)(va)))
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
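[Editorial note] The patch above swaps open-coded `_mfn(l<N>e_get_pfn(...) + offset)` for `mfn_add(l<N>e_get_mfn(...), offset)`. For readers unfamiliar with Xen's typesafe frame-number wrappers, here is a simplified sketch of the idea; it is reduced from, not identical to, Xen's actual definitions.

```c
#include <stdint.h>

/* Wrapping the raw machine frame number in a struct makes MFNs a
 * distinct type, so the compiler rejects accidental mixing of MFNs,
 * GFNs and plain integers -- the reason mfn_add() is preferred over
 * open-coded pfn arithmetic. */
typedef struct { uint64_t m; } mfn_t;

static inline mfn_t    _mfn(uint64_t m)  { return (mfn_t){ m }; }
static inline uint64_t mfn_x(mfn_t mfn)  { return mfn.m; }

/* Offset an MFN without ever exposing the raw integer. */
static inline mfn_t mfn_add(mfn_t mfn, uint64_t offset)
{
    return _mfn(mfn_x(mfn) + offset);
}
```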


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 15:54:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 15:54:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12395.32271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4os-0002DZ-LF; Mon, 26 Oct 2020 15:54:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12395.32271; Mon, 26 Oct 2020 15:54:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4os-0002DS-I5; Mon, 26 Oct 2020 15:54:02 +0000
Received: by outflank-mailman (input) for mailman id 12395;
 Mon, 26 Oct 2020 15:54:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kX4or-0002DK-3X
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 15:54:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dbd63557-167b-4689-8293-7fbe2b755cf8;
 Mon, 26 Oct 2020 15:54:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47282ACAA;
 Mon, 26 Oct 2020 15:53:59 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kX4or-0002DK-3X
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 15:54:01 +0000
X-Inumbo-ID: dbd63557-167b-4689-8293-7fbe2b755cf8
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id dbd63557-167b-4689-8293-7fbe2b755cf8;
	Mon, 26 Oct 2020 15:54:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603727639;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=8lM7HOM92Lc5vba5OG73OTC9FwXE+Z/RKzE2Ig5AFIo=;
	b=ZqOC5aI8NEK9Js/Rr0HQVMI6HVMXR+swCIMEo3WoWRT2QnzczU8FTTjzR9EMUYsMXrWOCT
	BeTsNiNy99vK/ZLY9Qx0gJYEFAyYwZUHAIpxj/tFWBALCWBaTLpL8204T4QnIRo31kj0Gq
	Po3s4Q3YI6OjDJHGC/eLUpzwWSQiNZc=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 47282ACAA;
	Mon, 26 Oct 2020 15:53:59 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: don't open-code vmap_to_mfn()
Message-ID: <d1aeb904-6ae1-2ca8-220a-920f2be5416d@suse.com>
Date: Mon, 26 Oct 2020 16:53:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -333,21 +333,14 @@ void unmap_domain_page_global(const void
 mfn_t domain_page_map_to_mfn(const void *ptr)
 {
     unsigned long va = (unsigned long)ptr;
-    const l1_pgentry_t *pl1e;
 
     if ( va >= DIRECTMAP_VIRT_START )
         return _mfn(virt_to_mfn(ptr));
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
-    {
-        pl1e = virt_to_xen_l1e(va);
-        BUG_ON(!pl1e);
-    }
-    else
-    {
-        ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
-        pl1e = &__linear_l1_table[l1_linear_offset(va)];
-    }
+        return vmap_to_mfn(va);
 
-    return l1e_get_mfn(*pl1e);
+    ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
+
+    return l1e_get_mfn(__linear_l1_table[l1_linear_offset(va)]);
 }
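For context on what the patch simplifies: domain_page_map_to_mfn() dispatches on which virtual-address region the pointer falls in. That three-way dispatch can be modelled as a small standalone sketch. The layout constants below are hypothetical placeholders chosen only so the ranges are ordered and non-overlapping; they are NOT Xen's real layout values (those live in Xen's x86 configuration headers and vary by build):

```c
#include <assert.h>

/* Hypothetical, non-overlapping layout constants, for illustration
 * only; these are NOT Xen's real virtual-address layout. */
#define VMAP_VIRT_START      0xffff820000000000UL
#define VMAP_VIRT_END        0xffff828000000000UL
#define MAPCACHE_VIRT_START  0xffff828000000000UL
#define MAPCACHE_VIRT_END    0xffff829000000000UL
#define DIRECTMAP_VIRT_START 0xffff830000000000UL

enum region { REGION_DIRECTMAP, REGION_VMAP, REGION_MAPCACHE };

/* Mirrors the dispatch order in domain_page_map_to_mfn(): the direct
 * map is checked first, then the vmap range; anything else is asserted
 * to be a mapcache address. */
static enum region classify(unsigned long va)
{
    if ( va >= DIRECTMAP_VIRT_START )
        return REGION_DIRECTMAP;

    if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
        return REGION_VMAP;

    assert(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);

    return REGION_MAPCACHE;
}
```

The point of the patch is that the middle case already has a dedicated helper, vmap_to_mfn(), so the open-coded L1 page-table walk can be dropped for that branch.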


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 15:59:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 15:59:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12401.32283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4uS-0002Q2-C7; Mon, 26 Oct 2020 15:59:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12401.32283; Mon, 26 Oct 2020 15:59:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4uS-0002Pv-8r; Mon, 26 Oct 2020 15:59:48 +0000
Received: by outflank-mailman (input) for mailman id 12401;
 Mon, 26 Oct 2020 15:59:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kDWn=EB=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kX4uR-0002Pq-AZ
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 15:59:47 +0000
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a231f8f2-9a30-4314-9927-513498c579b0;
 Mon, 26 Oct 2020 15:59:45 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id j7so13196048wrt.9
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 08:59:45 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a3sm24261537wrh.94.2020.10.26.08.59.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 26 Oct 2020 08:59:44 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kDWn=EB=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kX4uR-0002Pq-AZ
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 15:59:47 +0000
X-Inumbo-ID: a231f8f2-9a30-4314-9927-513498c579b0
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a231f8f2-9a30-4314-9927-513498c579b0;
	Mon, 26 Oct 2020 15:59:45 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id j7so13196048wrt.9
        for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 08:59:45 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=KLQHujeknDHRw2OOFJwGw/YxyPHbgCfirQBM51Dl8uQ=;
        b=FIFP48+EfXofAB+5gtp7/Uz9HsycISUvD2Pv/jnRfzvg0hIu13k1B1KDRHGVCfViln
         gQV8ZWDAIgGip5KjcPL8uFPBdYBIEMM1CIOKzOYkM1zs7rxR7MMggapuZUDc8H8eoqNs
         FClSM6P12QoNMe6Vi+pjYi4C/QVonHo9e9uWVO+0SC8WGxKIMQdu8RGTBT01EkcSjbnN
         ucSy08M8KMjTWd2/CHTsdszw4n8WI60IBFl2e4da0bibTcDIMKVFJFCgrEw9JCConBT1
         XJu0/0iYb5Y+MnZTE4DGzwZcSQlawFLCeYiguIJ1uynQJimIG5o031iIIJmxJXnYtR3e
         UmPQ==
X-Gm-Message-State: AOAM531KuuaGhnslteCvYj1AH2FeZkfVeU+Ftu63osB59MBMig+YBNeU
	EfRjIAyEbjfkvCQxptZ7p/A=
X-Google-Smtp-Source: ABdhPJzidQ6TCBptpVg7zevTuXmVzifC9efFXiyZ15Qa1nhrhKIH9sqxy4JESEqoyVzq0+ELKrhXhw==
X-Received: by 2002:adf:cd87:: with SMTP id q7mr18420442wrj.169.1603727984575;
        Mon, 26 Oct 2020 08:59:44 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id a3sm24261537wrh.94.2020.10.26.08.59.43
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 26 Oct 2020 08:59:44 -0700 (PDT)
Date: Mon, 26 Oct 2020 15:59:42 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH] x86: don't open-code vmap_to_mfn()
Message-ID: <20201026155942.d3f2vxrspyexye2v@liuwe-devbox-debian-v2>
References: <d1aeb904-6ae1-2ca8-220a-920f2be5416d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d1aeb904-6ae1-2ca8-220a-920f2be5416d@suse.com>
User-Agent: NeoMutt/20180716

On Mon, Oct 26, 2020 at 04:53:58PM +0100, Jan Beulich wrote:
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:00:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:00:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12402.32295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4v4-0003iN-LS; Mon, 26 Oct 2020 16:00:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12402.32295; Mon, 26 Oct 2020 16:00:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4v4-0003iG-Hb; Mon, 26 Oct 2020 16:00:26 +0000
Received: by outflank-mailman (input) for mailman id 12402;
 Mon, 26 Oct 2020 16:00:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kDWn=EB=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1kX4v3-0003i9-B5
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:00:25 +0000
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 715b8c0e-c71d-45b7-ad28-33a4f958c1d6;
 Mon, 26 Oct 2020 16:00:24 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id x7so13270900wrl.3
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 09:00:24 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id l11sm22384427wro.89.2020.10.26.09.00.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 26 Oct 2020 09:00:17 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=kDWn=EB=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
	id 1kX4v3-0003i9-B5
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:00:25 +0000
X-Inumbo-ID: 715b8c0e-c71d-45b7-ad28-33a4f958c1d6
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 715b8c0e-c71d-45b7-ad28-33a4f958c1d6;
	Mon, 26 Oct 2020 16:00:24 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id x7so13270900wrl.3
        for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 09:00:24 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=ouk3eT1qAU6Yio8PImrLhRW689YzO30DOS3jAJ6J78c=;
        b=gnIBRmrXnlufffhvfwb7QiWkzD9wuXoUuGHm5uhROYB19mafSQNYXTqW2LzLFYTlnP
         GGcHpSo3e4uWoJ5+iSLE2MnK1qSV2sNcE+U4ML5T3HQpxv7TvlNjALZHyBUhAjySH1cQ
         RJ3XI42kwUJi7YJGJ+t9xI3pebcF+RY7lMN3AXKTx3lZpiVqCsj1v/ek3x3gK5cyomFC
         0W1G6/5pCrLDZEiF5hAhJXC1pMaRub31YJDrpOPh75I8zZyLHPSw/pt2dHBThSCPu/tG
         lU/gCKK+o1VNFZYIs/Atx+ywSe9jI6dmZo8oyXFyDCgRv4clow6WNqaeVW2dEdbhX7hf
         syEw==
X-Gm-Message-State: AOAM532betHmNFfFOM5vOmG01/qS2OaaHOq4WazUo07f5Hy3chdKcXHz
	Ht0Q687OfWdf5Olj74vDehU=
X-Google-Smtp-Source: ABdhPJzh/kueoRKN97FQmG7HEQuJjEWtR6wCmayPYYvQV5nOME9V93Vd9kcyRrkZr3K1RHR5OioI4g==
X-Received: by 2002:a05:6000:18d:: with SMTP id p13mr18511512wrx.248.1603728018071;
        Mon, 26 Oct 2020 09:00:18 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
        by smtp.gmail.com with ESMTPSA id l11sm22384427wro.89.2020.10.26.09.00.17
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Mon, 26 Oct 2020 09:00:17 -0700 (PDT)
Date: Mon, 26 Oct 2020 16:00:16 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH] x86: don't open-code l<N>e_to_mfn()
Message-ID: <20201026160016.o5ovilcqvsku5s2d@liuwe-devbox-debian-v2>
References: <2c72239a-46e9-4822-fa4b-5a0155c5c5e5@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2c72239a-46e9-4822-fa4b-5a0155c5c5e5@suse.com>
User-Agent: NeoMutt/20180716

On Mon, Oct 26, 2020 at 04:47:43PM +0100, Jan Beulich wrote:
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 

Reviewed-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:03:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:03:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12407.32307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4y2-0003ur-3m; Mon, 26 Oct 2020 16:03:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12407.32307; Mon, 26 Oct 2020 16:03:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX4y2-0003uk-0r; Mon, 26 Oct 2020 16:03:30 +0000
Received: by outflank-mailman (input) for mailman id 12407;
 Mon, 26 Oct 2020 16:03:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/GsP=EB=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kX4y0-0003uf-LX
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:03:28 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7c09a792-74c5-4dc5-86d0-3e7a2ffafaee;
 Mon, 26 Oct 2020 16:03:27 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09QG3GtK021006
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 26 Oct 2020 12:03:22 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09QG3GJD021005;
 Mon, 26 Oct 2020 09:03:16 -0700 (PDT) (envelope-from ehem)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=/GsP=EB=m5p.com=ehem@srs-us1.protection.inumbo.net>)
	id 1kX4y0-0003uf-LX
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:03:28 +0000
X-Inumbo-ID: 7c09a792-74c5-4dc5-86d0-3e7a2ffafaee
Received: from mailhost.m5p.com (unknown [74.104.188.4])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7c09a792-74c5-4dc5-86d0-3e7a2ffafaee;
	Mon, 26 Oct 2020 16:03:27 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
	by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09QG3GtK021006
	(version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
	Mon, 26 Oct 2020 12:03:22 -0400 (EDT)
	(envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
	by m5p.com (8.15.2/8.15.2/Submit) id 09QG3GJD021005;
	Mon, 26 Oct 2020 09:03:16 -0700 (PDT)
	(envelope-from ehem)
Date: Mon, 26 Oct 2020 09:03:16 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201026160316.GA20589@mattapan.m5p.com>
References: <20201016003024.GA13290@mattapan.m5p.com>
 <23885c28-dee5-4e9a-dc43-6ccf19a94df6@xen.org>
 <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com>
 <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
> On 24/10/2020 06:35, Elliott Mitchell wrote:
> > ACPI has a distinct
> > means of specifying a limited DMA-width; the above fails, because it
> > assumes a *device-tree*.
> 
> Do you know if it would be possible to infer from the ACPI static table 
> the DMA-width?

Yes, it is possible.  Since I don't know much about ACPI tables, though,
I'm not sure what the C code would look like (the first problem being
which documentation I should be looking at).

A handy bit of information is in the RP4 Tianocore ACPI table source:
https://github.com/tianocore/edk2-platforms/blob/d492639638eee331ac3389e6cf53ea266c3c84b3/Platform/RaspberryPi/AcpiTables/Dsdt.asl

      Name (_DMA, ResourceTemplate() {
        //
        // Only the first GB is available.
        // Bus 0xC0000000 -> CPU 0x00000000.
        //
        QWordMemory (ResourceConsumer,
          ,
          MinFixed,
          MaxFixed,
          NonCacheable,
          ReadWrite,
          0x0,
          0x00000000C0000000, // MIN
          0x00000000FFFFFFFF, // MAX
          0xFFFFFFFF40000000, // TRA
          0x0000000040000000, // LEN
          ,
          ,
          )
      })

There should be some corresponding code in the Linux 5.9 kernel.  From
the look of that template, it might even be possible to specify a memory
range which doesn't start at address 0.
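As a sanity check of the numbers in the DSDT fragment above: in an ACPI
QWordMemory descriptor the CPU address is the bus address plus the
translation offset (TRA), taken modulo 2^64.  A sketch using the
constants copied from the template (the helper name is mine, not an
ACPI or Xen API):

```c
#include <assert.h>
#include <stdint.h>

/* Constants copied from the _DMA ResourceTemplate above. */
#define DMA_MIN 0x00000000C0000000ULL  /* lowest bus address */
#define DMA_MAX 0x00000000FFFFFFFFULL  /* highest bus address */
#define DMA_TRA 0xFFFFFFFF40000000ULL  /* bus-to-CPU translation */
#define DMA_LEN 0x0000000040000000ULL  /* window length: 1 GiB */

/* CPU address = bus address + TRA; uint64_t wraparound supplies the
 * modulo-2^64 arithmetic the descriptor relies on. */
static uint64_t bus_to_cpu(uint64_t bus)
{
    return bus + DMA_TRA;
}
```

With these values bus_to_cpu(DMA_MIN) is 0 and bus_to_cpu(DMA_MAX) is
0x3FFFFFFF, i.e. bus 0xC0000000-0xFFFFFFFF maps onto CPU 0-0x3FFFFFFF,
exactly the first gigabyte the comment in the template promises; the
length is also consistent, since DMA_MAX - DMA_MIN + 1 == DMA_LEN.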


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:08:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:08:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12416.32326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX52I-000470-PK; Mon, 26 Oct 2020 16:07:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12416.32326; Mon, 26 Oct 2020 16:07:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX52I-00046t-MN; Mon, 26 Oct 2020 16:07:54 +0000
Received: by outflank-mailman (input) for mailman id 12416;
 Mon, 26 Oct 2020 16:07:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kX52I-000466-4B
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:07:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a3a4d07-edd8-4a26-b7c1-c659b97ff9cf;
 Mon, 26 Oct 2020 16:07:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX527-0008Gk-TN; Mon, 26 Oct 2020 16:07:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX527-0007Wa-KH; Mon, 26 Oct 2020 16:07:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kX527-0008Iv-FR; Mon, 26 Oct 2020 16:07:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kX52I-000466-4B
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:07:54 +0000
X-Inumbo-ID: 4a3a4d07-edd8-4a26-b7c1-c659b97ff9cf
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4a3a4d07-edd8-4a26-b7c1-c659b97ff9cf;
	Mon, 26 Oct 2020 16:07:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YiHMaCw2pZiQQAca+tMhOlXYAd7lxZkQhRTg36HGiCo=; b=ZcbVyXUFg4AntxkxNIsbWCdlLo
	Dke9/g6Q5C2yE1veAYOQiVT72IFw3QPtb5K8fADx9ObxR0Hv8MRmyRvOWSFpmIu2wuy0Ggl06Ynu6
	FzENTAv8Inx8InH7hO32Jju9DwLwrs/nz9vo1qeYr5Bjl+6iVJQWm4z4bfjC38UDRNRw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX527-0008Gk-TN; Mon, 26 Oct 2020 16:07:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX527-0007Wa-KH; Mon, 26 Oct 2020 16:07:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX527-0008Iv-FR; Mon, 26 Oct 2020 16:07:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156242-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156242: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=288a1cc6345ed0b04e0dc887905ebeef17141608
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 16:07:43 +0000

flight 156242 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156242/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                288a1cc6345ed0b04e0dc887905ebeef17141608
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   67 days
Failing since        152659  2020-08-21 14:07:39 Z   66 days  153 attempts
Testing same since   156242  2020-10-26 13:36:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 50810 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:11:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:11:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12420.32338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX565-0004yF-Fv; Mon, 26 Oct 2020 16:11:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12420.32338; Mon, 26 Oct 2020 16:11:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX565-0004y8-BU; Mon, 26 Oct 2020 16:11:49 +0000
Received: by outflank-mailman (input) for mailman id 12420;
 Mon, 26 Oct 2020 16:11:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oIL/=EB=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kX563-0004y3-Ln
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:11:47 +0000
Received: from sender4-of-o53.zoho.com (unknown [136.143.188.53])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c6f7b50-f216-4b29-aced-42d85205b3c7;
 Mon, 26 Oct 2020 16:11:45 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1603728701553173.61104816778584;
 Mon, 26 Oct 2020 09:11:41 -0700 (PDT)
X-Inumbo-ID: 2c6f7b50-f216-4b29-aced-42d85205b3c7
ARC-Seal: i=1; a=rsa-sha256; t=1603728703; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=PfUqU81Ozw3NOMisjcztGk6CPyfpJi5/uvGZaBLUYxcm/VUNsLesND01I9MF+EloVoUaO7uRM0Ouglr4Vfl4ZuTJ+qkxtb9sX0wexYGqzSsA5zIfK/jFP5lgk06O89FXraKRZvmVwlvue0zBX3UuiVL2gujWoyE583m/Q7cBWIc=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603728703; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=QzcxLYnBTJOTzOyF4yMHIQVE1VyRuO6pHAPmsfNnAZg=; 
	b=kltwQGaNiaJLR4WOpW3ep+SdjbvVS7RiMXSLlK6jhoOIFUCrVF8YrqOs9aHFXU60OayLRAEZoNosyKVDIDOCXVj2974XyaQ5IFAPcMQCzV9Br1U9ouyvf6U+1oel6Q2h66m33nLjPAsMRfim0wNvC+1yzpUznWdWP9SPQYVkLsw=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603728703;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=QzcxLYnBTJOTzOyF4yMHIQVE1VyRuO6pHAPmsfNnAZg=;
	b=k4wA5s7HI/62lSU76Hqv3rQ5Bkk35fbneLvnSZ6Y6B6Rq8WOtQ/BBEw9Ct13iMTW
	jmNV0Pnil9sYpTUiGb7qpPFITgcNEefQ/MGyOFseRejAv0dTxiLgp7mw6cGx7lzt8rO
	8RuJ+t32lv1buwVbqr+VknzvwL7Dn3Vdhf5GnXJc=
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Message-ID: <f6d179ac-eccc-99e1-ee0f-ea0d7f5ed335@qubes-os.org>
Date: Mon, 26 Oct 2020 17:11:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="dZPpcpFpD7r2InzMDJ3nb6NE8eA9WCDqb"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--dZPpcpFpD7r2InzMDJ3nb6NE8eA9WCDqb
Content-Type: multipart/mixed; boundary="oUvIPs7eEQPkCUMkmISYwqaGkevYKHQ0z";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Message-ID: <f6d179ac-eccc-99e1-ee0f-ea0d7f5ed335@qubes-os.org>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
In-Reply-To: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>

--oUvIPs7eEQPkCUMkmISYwqaGkevYKHQ0z
Content-Type: multipart/mixed;
 boundary="------------017959A3B2100110F445AD20"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------017959A3B2100110F445AD20
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable



Le 10/26/20 =C3=A0 2:54 PM, Andrew Cooper a =C3=A9crit=C2=A0:
> On 26/10/2020 13:37, Fr=C3=A9d=C3=A9ric Pierret wrote:
>> Hi all,
>>
>> I'm experiencing problem with a HP ProLiant DL360p Gen8 and recent
>> upgrade of 4.13 -> 4.14. dom0 and the entire system becomes unstable
>> and freeze at some point.
>>
>> I've managed to get three pieces of logs (last one after a reboot and
>> just before total freeze) by connecting to the serial console with
>> loglvl options and also redirecting linux kernel output to the serial
>> console:
>>
>> ```
>> [ 2150.954883] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
>> [ 2150.954934] rcu:=C2=A0=C2=A0=C2=A0=C2=A0 7-...0: (3 GPs behind)
>> idle=3D842/1/0x4000000000000000 softirq=3D64670/64671 fqs=3D14673
>> [ 2150.954962]=C2=A0=C2=A0=C2=A0=C2=A0 (detected by 12, t=3D60002 jiff=
ies, g=3D236593, q=3D126)
>> [ 2150.954984] Sending NMI from CPU 12 to CPUs 7:
>> [ 2160.968519] rcu: rcu_sched kthread starved for 10008 jiffies!
>> g236593 f0x0 RCU_GP_DOING_FQS(6) ->state=3D0x0 ->cpu=3D9
>> [ 2160.968568] rcu: RCU grace-period kthread stack dump:
>> [ 2160.968586] rcu_sched=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 R=C2=A0 r=
unning task=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 0=C2=A0=C2=A0=C2=A0=
 10=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 2
>> 0x80004000
>> [ 2160.968612] Call Trace:
>> [ 2160.968634]=C2=A0 ? xen_hypercall_xen_version+0xa/0x20
>> [ 2160.968658]=C2=A0 ? xen_force_evtchn_callback+0x9/0x10
>> [ 2160.968918]=C2=A0 ? check_events+0x12/0x20
>> [ 2160.968946]=C2=A0 ? schedule+0x39/0xa0
>> [ 2160.968964]=C2=A0 ? schedule_timeout+0x96/0x150
>> [ 2160.968981]=C2=A0 ? __next_timer_interrupt+0xd0/0xd0
>> [ 2160.969002]=C2=A0 ? rcu_gp_fqs_loop+0xea/0x2a0
>> [ 2160.969019]=C2=A0 ? rcu_gp_kthread+0xb5/0x140
>> [ 2160.969035]=C2=A0 ? rcu_gp_init+0x470/0x470
>> [ 2160.969052]=C2=A0 ? kthread+0x115/0x140
>> [ 2160.969067]=C2=A0 ? __kthread_bind_mask+0x60/0x60
>> [ 2160.969085]=C2=A0 ? ret_from_fork+0x35/0x40
>> ```
>>
>> and also
>>
>> ```
>> [ 2495.945931] CPU: 14 PID: 24181 Comm: Xorg Not tainted
>> 5.4.72-2.qubes.x86_64 #1
>> [ 2495.945954] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
>> 05/24/2019
>> [ 2495.945984] RIP: e030:smp_call_function_many+0x20a/0x270
>> [ 2495.946004] Code: 8a 00 3b 05 4c b5 69 01 89 c7 0f 83 89 fe ff ff
>> 48 63 c7 49 8b 17 48 03 14 c5 80 f9 3d 82 8b 42 18 a8 01 74 09 f3 90
>> 8b 42 18 <a8> 01 75 f7 eb c9 48 c7 c2 a0 6f 82 82 4c 89 f6 89 df e8 bf b2
>> 8a
>> [ 2495.946051] RSP: e02b:ffffc90003aa7cf0 EFLAGS: 00000202
>> [ 2495.946068] RAX: 0000000000000003 RBX: 0000000000000010 RCX:
>> 0000000000000007
>> [ 2495.946090] RDX: ffff8882413ef640 RSI: 0000000000000010 RDI:
>> 0000000000000007
>> [ 2495.946113] RBP: ffffffff81082fc0 R08: 0000000000000007 R09:
>> 0000000000000000
>> [ 2495.946134] R10: 0000000000000000 R11: ffffffff8265b6a8 R12:
>> 0000000000000000
>> [ 2495.946156] R13: 0000000000000001 R14: 0000000000029ac0 R15:
>> ffff8882415a9b00
>> [ 2495.946211] FS:  00007a0d51a91a40(0000) GS:ffff888241580000(0000)
>> knlGS:0000000000000000
>> [ 2495.946235] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 2495.946253] CR2: 000070abab15a000 CR3: 00000001e18ee000 CR4:
>> 0000000000040660
>> [ 2495.946290] Call Trace:
>> [ 2495.946314]  ? do_kernel_range_flush+0x50/0x50
>> [ 2495.946334]  on_each_cpu+0x28/0x50
>> [ 2495.946354]  decrease_reservation+0x22f/0x310
>> [ 2495.946377]  alloc_xenballooned_pages+0xeb/0x120
>> [ 2495.946396]  ? __kmalloc+0x183/0x260
>> [ 2495.946413]  gnttab_alloc_pages+0x11/0x40
>> [ 2495.946434]  gntdev_alloc_map+0x12f/0x250 [xen_gntdev]
>> [ 2495.946454]  gntdev_ioctl_map_grant_ref+0x73/0x1d0 [xen_gntdev]
>> [ 2495.946479]  do_vfs_ioctl+0x2fb/0x490
>> [ 2495.946500]  ? syscall_trace_enter+0x1d1/0x2c0
>> [ 2495.946551]  ksys_ioctl+0x5e/0x90
>> [ 2495.946567]  __x64_sys_ioctl+0x16/0x20
>> [ 2495.946583]  do_syscall_64+0x5b/0xf0
>> [ 2495.946601]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>> [ 2495.946620] RIP: 0033:0x7a0d51f763bb
>> [ 2495.946727] Code: 0f 1e fa 48 8b 05 dd aa 0c 00 64 c7 00 26 00 00
>> 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00
>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ad aa 0c 00 f7 d8 64 89 01
>> 48
>> [ 2495.946804] RSP: 002b:00007fffc48d5058 EFLAGS: 00000206 ORIG_RAX:
>> 0000000000000010
>> [ 2495.946863] RAX: ffffffffffffffda RBX: 0000000000001000 RCX:
>> 00007a0d51f763bb
>> [ 2495.946885] RDX: 00007fffc48d5060 RSI: 0000000000184700 RDI:
>> 0000000000000009
>> [ 2495.946939] RBP: 00007fffc48d5100 R08: 00007fffc48d512c R09:
>> 00007a0d51a30010
>> [ 2495.946998] R10: 0000000000000000 R11: 0000000000000206 R12:
>> 00007fffc48d5060
>> [ 2495.947020] R13: 0000000000000001 R14: 0000000000000009 R15:
>> 0000000000000001
>> [ 2510.964667] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
>> [ 2510.964716] rcu:     7-...0: (3 GPs behind)
>> idle=842/1/0x4000000000000000 softirq=64670/64671 fqs=96182
>> [ 2510.964744]      (detected by 12, t=420012 jiffies, g=236593, q=11404)
>> [ 2510.964769] Sending NMI from CPU 12 to CPUs 7:
>> [ 2523.945643] watchdog: BUG: soft lockup - CPU#14 stuck for 22s!
>> [Xorg:24181]
>> [ 2523.945686] Modules linked in: snd_seq_dummy snd_hrtimer snd_seq
>> snd_seq_device snd_timer snd soundcore br_netfilter xt_physdev
>> xen_netback loop bridge stp llc rfkill ebtable_filter ebtables
>> ip6table_filter ip6_tables iptable_filter intel_rapl_msr iTCO_wdt
>> ipmi_ssif iTCO_vendor_support intel_rapl_common sb_edac rapl raid456
>> async_raid6_recov async_memcpy async_pq async_xor async_tx xor
>> raid6_pq pcspkr joydev hpilo lpc_ich hpwdt ioatdma dca tg3 r8169
>> ipmi_si ipmi_devintf ipmi_msghandler acpi_power_meter xenfs ip_tables
>> dm_thin_pool dm_persistent_data libcrc32c dm_bio_prison dm_crypt uas
>> usb_storage uhci_hcd crct10dif_pclmul crc32_pclmul crc32c_intel
>> ghash_clmulni_intel mgag200 drm_kms_helper serio_raw drm_vram_helper
>> ttm drm ata_generic pata_acpi i2c_algo_bit ehci_pci ehci_hcd xhci_pci
>> xhci_hcd hpsa scsi_transport_sas xen_privcmd xen_pciback xen_blkback
>> xen_gntalloc xen_gntdev xen_evtchn uinput pkcs8_key_parser
>> [ 2523.945934] CPU: 14 PID: 24181 Comm: Xorg Tainted: G
>> L    5.4.72-2.qubes.x86_64 #1
>> [ 2523.945960] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
>> 05/24/2019
>> [ 2523.945989] RIP: e030:smp_call_function_many+0x20a/0x270
>> [ 2523.946010] Code: 8a 00 3b 05 4c b5 69 01 89 c7 0f 83 89 fe ff ff
>> 48 63 c7 49 8b 17 48 03 14 c5 80 f9 3d 82 8b 42 18 a8 01 74 09 f3 90
>> 8b 42 18 <a8> 01 75 f7 eb c9 48 c7 c2 a0 6f 82 82 4c 89 f6 89 df e8 bf b2
>> 8a
>> [ 2523.946057] RSP: e02b:ffffc90003aa7cf0 EFLAGS: 00000202
>> [ 2523.946075] RAX: 0000000000000003 RBX: 0000000000000010 RCX:
>> 0000000000000007
>> [ 2523.946097] RDX: ffff8882413ef640 RSI: 0000000000000010 RDI:
>> 0000000000000007
>> [ 2523.946119] RBP: ffffffff81082fc0 R08: 0000000000000007 R09:
>> 0000000000000000
>> [ 2523.946162] R10: 0000000000000000 R11: ffffffff8265b6a8 R12:
>> 0000000000000000
>> [ 2523.946184] R13: 0000000000000001 R14: 0000000000029ac0 R15:
>> ffff8882415a9b00
>> [ 2523.946233] FS:  00007a0d51a91a40(0000) GS:ffff888241580000(0000)
>> knlGS:0000000000000000
>> [ 2523.946256] CS:  e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 2523.946275] CR2: 000070abab15a000 CR3: 00000001e18ee000 CR4:
>> 0000000000040660
>> [ 2523.946313] Call Trace:
>> [ 2523.946336]  ? do_kernel_range_flush+0x50/0x50
>> [ 2523.946356]  on_each_cpu+0x28/0x50
>> [ 2523.946376]  decrease_reservation+0x22f/0x310
>> [ 2523.946397]  alloc_xenballooned_pages+0xeb/0x120
>> [ 2523.946418]  ? __kmalloc+0x183/0x260
>> [ 2523.946434]  gnttab_alloc_pages+0x11/0x40
>> [ 2523.946457]  gntdev_alloc_map+0x12f/0x250 [xen_gntdev]
>> [ 2523.946478]  gntdev_ioctl_map_grant_ref+0x73/0x1d0 [xen_gntdev]
>> [ 2523.946502]  do_vfs_ioctl+0x2fb/0x490
>> [ 2523.946559]  ? syscall_trace_enter+0x1d1/0x2c0
>> [ 2523.946578]  ksys_ioctl+0x5e/0x90
>> [ 2523.946594]  __x64_sys_ioctl+0x16/0x20
>> [ 2523.946610]  do_syscall_64+0x5b/0xf0
>> [ 2523.946713]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
>> [ 2523.946738] RIP: 0033:0x7a0d51f763bb
>> [ 2523.946782] Code: 0f 1e fa 48 8b 05 dd aa 0c 00 64 c7 00 26 00 00
>> 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00
>> 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d ad aa 0c 00 f7 d8 64 89 01
>> 48
>> [ 2523.946867] RSP: 002b:00007fffc48d5058 EFLAGS: 00000206 ORIG_RAX:
>> 0000000000000010
>> [ 2523.946927] RAX: ffffffffffffffda RBX: 0000000000001000 RCX:
>> 00007a0d51f763bb
>> [ 2523.946950] RDX: 00007fffc48d5060 RSI: 0000000000184700 RDI:
>> 0000000000000009
>> [ 2523.946972] RBP: 00007fffc48d5100 R08: 00007fffc48d512c R09:
>> 00007a0d51a30010
>> [ 2523.947029] R10: 0000000000000000 R11: 0000000000000206 R12:
>> 00007fffc48d5060
>> [ 2523.947051] R13: 0000000000000001 R14: 0000000000000009 R15:
>> 0000000000000001
>> ```
>>
>> and finally
>>
>> ```
>> [ 1597.971380] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
>> [ 1597.971409] rcu:     0-...0: (5 ticks this GP) idle=cd2/0/0x1
>> softirq=59121/59122 fqs=14998
>> [ 1597.971420] rcu:     7-...0: (2 ticks this GP)
>> idle=e9e/1/0x4000000000000000 softirq=49519/49519 fqs=14998
>> [ 1597.971431]      (detected by 11, t=60002 jiffies, g=234321, q=83)
>> [ 1597.971441] Sending NMI from CPU 11 to CPUs 0:
>> [ 1597.972452] NMI backtrace for cpu 0
>> [ 1597.972453] CPU: 0 PID: 0 Comm: swapper/0 Not tainted
>> 5.4.72-2.qubes.x86_64 #1
>> [ 1597.972453] Hardware name: HP ProLiant DL360p Gen8, BIOS P71
>> 05/24/2019
>> [ 1597.972454] RIP: e030:xen_hypercall_sched_op+0xa/0x20
>> [ 1597.972454] Code: 51 41 53 b8 1c 00 00 00 0f 05 41 5b 59 c3 cc cc
>> cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 51 41 53 b8 1d 00 00
>> 00 0f 05 <41> 5b 59 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc
>> cc
>> [ 1597.972455] RSP: e02b:ffffc90001403d88 EFLAGS: 00000002
>> [ 1597.972456] RAX: 0000000000000000 RBX: ffff888241215f80 RCX:
>> ffffffff810013aa
>> [ 1597.972456] RDX: ffff88823c070428 RSI: ffffc90001403da8 RDI:
>> 0000000000000003
>> [ 1597.972456] RBP: ffff8882365d8bf0 R08: ffffffff8265b6a0 R09:
>> 0000000000000000
>> [ 1597.972457] R10: 0000000000000000 R11: 0000000000000202 R12:
>> 0000000000000049
>> [ 1597.972457] R13: 0000000000000100 R14: ffff8882422b6640 R15:
>> 0000000000000001
>> [ 1597.972458] FS:  0000729633b7d700(0000) GS:ffff888241200000(0000)
>> knlGS:0000000000000000
>> [ 1597.972458] CS:  10000e030 DS: 0000 ES: 0000 CR0: 0000000080050033
>> [ 1597.972458] CR2: 000077f6508c47c0 CR3: 000000020d006000 CR4:
>> 0000000000040660
>> [ 1597.972459] Call Trace:
>> [ 1597.972459]  <IRQ>
>> [ 1597.972459]  ? xen_poll_irq+0x73/0xa0
>> [ 1597.972459]  ? xen_qlock_wait+0x7b/0x80
>> [ 1597.972460]  ? pv_wait_head_or_lock+0x85/0xb0
>> [ 1597.972460]  ? __pv_queued_spin_lock_slowpath+0xb5/0x1b0
>> [ 1597.972460]  ? _raw_spin_lock_irqsave+0x32/0x40
>> [ 1597.972461]  ? bfq_finish_requeue_request+0xb5/0x120
>> [ 1597.972461]  ? blk_mq_free_request+0x3a/0xf0
>> [ 1597.972461]  ? scsi_end_request+0x95/0x140
>> [ 1597.972461]  ? scsi_io_completion+0x74/0x190
>> [ 1597.972462]  ? blk_done_softirq+0xea/0x180
>> [ 1597.972462]  ? __do_softirq+0xd9/0x2c8
>> [ 1597.972462]  ? irq_exit+0xcf/0x110
>> [ 1597.972462]  ? xen_evtchn_do_upcall+0x2c/0x40
>> [ 1597.972463]  ? xen_do_hypervisor_callback+0x29/0x40
>> [ 1597.972463]  </IRQ>
>> [ 1597.972463]  ? xen_hypercall_sched_op+0xa/0x20
>> [ 1597.972464]  ? xen_hypercall_sched_op+0xa/0x20
>> [ 1597.972464]  ? xen_safe_halt+0xc/0x20
>> [ 1597.972464]  ? default_idle+0x1a/0x140
>> [ 1597.972465]  ? cpuidle_idle_call+0x139/0x190
>> [ 1597.972465]  ? do_idle+0x73/0xd0
>> [ 1597.972465]  ? cpu_startup_entry+0x19/0x20
>> [ 1597.972466]  ? start_kernel+0x68a/0x6bf
>> [ 1597.972466]  ? xen_start_kernel+0x6a2/0x6c1
>> [ 1597.972470] Sending NMI from CPU 11 to CPUs 7:
>> [ 1607.976873] rcu: rcu_sched kthread starved for 10007 jiffies!
>> g234321 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=11
>> [ 1607.976924] rcu: RCU grace-period kthread stack dump:
>> [ 1607.976942] rcu_sched       I    0    10      2 0x80004000
>> [ 1607.976972] Call Trace:
>> [ 1607.976999]  __schedule+0x217/0x570
>> [ 1607.977020]  ? schedule+0x39/0xa0
>> [ 1607.977038]  ? schedule_timeout+0x96/0x150
>> [ 1607.977056]  ? __next_timer_interrupt+0xd0/0xd0
>> [ 1607.977079]  ? rcu_gp_fqs_loop+0xea/0x2a0
>> [ 1607.977096]  ? rcu_gp_kthread+0xb5/0x140
>> [ 1607.977112]  ? rcu_gp_init+0x470/0x470
>> [ 1607.977130]  ? kthread+0x115/0x140
>> [ 1607.977145]  ? __kthread_bind_mask+0x60/0x60
>> [ 1607.977164]  ? ret_from_fork+0x35/0x40
>> ```
>>
>> I've tried to increase memory, set the maximum of dom0 vcpus to 1 or
>> 4, pin vcpus, and multiple 5.4 kernels too... no luck.
>>
>> I've also observed that some (never the same) VMs (PVH or HVM) randomly
>> fail to start, getting stuck at boot time with a stack trace analogous
>> to the first piece of log provided above. Those VMs are impossible to
>> destroy, and then the problem "propagates" through dom0 with the two
>> latest pieces of log.
>>
>> If anyone has any idea of what's going on, that would be very much
>> appreciated. Thank you.
>
> Does booting Xen with `sched=credit` make a difference?
>
> ~Andrew
>

Thank you Andrew. Since your mail I've been testing this on production
and it's clearly more stable than this morning. I won't say yet that
it's solved, because yesterday I had a few hours of stability too, but
it's clearly encouraging: this morning it was just hell every 15/30
minutes.

If that helps, the server has two physical processors and 8 cores each.

Best,
Frédéric



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:21:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:21:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12427.32354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5F4-0005vm-Dx; Mon, 26 Oct 2020 16:21:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12427.32354; Mon, 26 Oct 2020 16:21:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5F4-0005vf-B1; Mon, 26 Oct 2020 16:21:06 +0000
Received: by outflank-mailman (input) for mailman id 12427;
 Mon, 26 Oct 2020 16:21:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ht1Y=EB=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kX5F2-0005va-RK
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:21:04 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 7c77dcec-9fc8-4869-807b-b29d31f37240;
 Mon, 26 Oct 2020 16:21:03 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id A82F01042;
 Mon, 26 Oct 2020 09:21:02 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6D63D3F719;
 Mon, 26 Oct 2020 09:21:01 -0700 (PDT)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/3] xen/arm: Warn user on cpu errata 832075
Date: Mon, 26 Oct 2020 16:20:36 +0000
Message-Id: <cover.1603728729.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1

This series is a v2, following the discussion in [1], introducing
several changes to properly handle the warning for Arm CPU erratum
832075:
- use printk_once instead of warning_add
- introduce a taint type "Unsecure"

The last patch adds the warning and flags affected A57 cores as
unsupported.

[1] https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg00896.html

Bertrand Marquis (3):
  xen/arm: use printk_once for errata warning prints
  xen: Add an unsecure Taint type
  xen/arm: Warn user on cpu errata 832075

 SUPPORT.md               |  1 +
 xen/arch/arm/cpuerrata.c | 23 +++++++++++++++--------
 xen/common/kernel.c      |  4 +++-
 xen/include/xen/lib.h    |  1 +
 4 files changed, 20 insertions(+), 9 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:22:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12431.32376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5GI-00063f-9I; Mon, 26 Oct 2020 16:22:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12431.32376; Mon, 26 Oct 2020 16:22:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5GI-00063N-2j; Mon, 26 Oct 2020 16:22:22 +0000
Received: by outflank-mailman (input) for mailman id 12431;
 Mon, 26 Oct 2020 16:22:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ht1Y=EB=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kX5GG-00062S-R0
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:22:20 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 66440a05-ce01-4d1f-b9fd-bf81bda01c1e;
 Mon, 26 Oct 2020 16:22:20 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id DD4311042;
 Mon, 26 Oct 2020 09:22:19 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C2C2F3F719;
 Mon, 26 Oct 2020 09:22:18 -0700 (PDT)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/3] xen: Add an unsecure Taint type
Date: Mon, 26 Oct 2020 16:21:32 +0000
Message-Id: <d7e82b374cb7c83d6e18774e23bc4d970c4e8b53.1603728729.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
In-Reply-To: <cover.1603728729.git.bertrand.marquis@arm.com>
References: <cover.1603728729.git.bertrand.marquis@arm.com>

Define a new Unsecure taint type, used to signal a system tainted due to
an unsecure configuration or a hardware feature/erratum.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/common/kernel.c   | 4 +++-
 xen/include/xen/lib.h | 1 +
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index c3a943f077..7a345ae45e 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -326,6 +326,7 @@ unsigned int tainted;
  *  'E' - An error (e.g. a machine check exceptions) has been injected.
  *  'H' - HVM forced emulation prefix is permitted.
  *  'M' - Machine had a machine check experience.
+ *  'U' - Platform is unsecure (usually due to an errata on the platform).
  *
  *      The string is overwritten by the next call to print_taint().
  */
@@ -333,7 +334,8 @@ char *print_tainted(char *str)
 {
     if ( tainted )
     {
-        snprintf(str, TAINT_STRING_MAX_LEN, "Tainted: %c%c%c%c",
+        snprintf(str, TAINT_STRING_MAX_LEN, "Tainted: %c%c%c%c%c",
+                 tainted & TAINT_MACHINE_UNSECURE ? 'U' : ' ',
                  tainted & TAINT_MACHINE_CHECK ? 'M' : ' ',
                  tainted & TAINT_SYNC_CONSOLE ? 'C' : ' ',
                  tainted & TAINT_ERROR_INJECT ? 'E' : ' ',
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 1983bd6b86..a9679c913d 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -193,6 +193,7 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c);
 #define TAINT_MACHINE_CHECK             (1u << 1)
 #define TAINT_ERROR_INJECT              (1u << 2)
 #define TAINT_HVM_FEP                   (1u << 3)
+#define TAINT_MACHINE_UNSECURE          (1u << 4)
 extern unsigned int tainted;
 #define TAINT_STRING_MAX_LEN            20
 extern char *print_tainted(char *str);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:22:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12430.32366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5GH-00062f-P0; Mon, 26 Oct 2020 16:22:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12430.32366; Mon, 26 Oct 2020 16:22:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5GH-00062Y-M0; Mon, 26 Oct 2020 16:22:21 +0000
Received: by outflank-mailman (input) for mailman id 12430;
 Mon, 26 Oct 2020 16:22:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ht1Y=EB=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kX5GF-00062L-PW
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:22:19 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 99e5f00d-760e-4ee2-b636-b0d79a51e480;
 Mon, 26 Oct 2020 16:22:18 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8E33C1042;
 Mon, 26 Oct 2020 09:22:18 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DE9023F719;
 Mon, 26 Oct 2020 09:22:17 -0700 (PDT)
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 1/3] xen/arm: use printk_once for errata warning prints
Date: Mon, 26 Oct 2020 16:21:31 +0000
Message-Id: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1603728729.git.bertrand.marquis@arm.com>
References: <cover.1603728729.git.bertrand.marquis@arm.com>

Replace the use of warning_add with printk_once, using a **** prefix
and suffix, for errata-related warnings.

This removes the need for the ASSERT, which was not sufficient to
protect this print against wrong usage.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/cpuerrata.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 0c63dfa779..0430069a84 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -157,7 +157,6 @@ extern char __smccc_workaround_1_smc_start[], __smccc_workaround_1_smc_end[];
 static int enable_smccc_arch_workaround_1(void *data)
 {
     struct arm_smccc_res res;
-    static bool warned = false;
     const struct arm_cpu_capabilities *entry = data;
 
     /*
@@ -182,13 +181,8 @@ static int enable_smccc_arch_workaround_1(void *data)
                                      "call ARM_SMCCC_ARCH_WORKAROUND_1");
 
 warn:
-    if ( !warned )
-    {
-        ASSERT(system_state < SYS_STATE_active);
-        warning_add("No support for ARM_SMCCC_ARCH_WORKAROUND_1.\n"
-                    "Please update your firmware.\n");
-        warned = true;
-    }
+    printk_once("**** No support for ARM_SMCCC_ARCH_WORKAROUND_1. ****\n"
+                "**** Please update your firmware.                ****\n");
 
     return 0;
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:22:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:22:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12432.32389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5GM-00067e-Gh; Mon, 26 Oct 2020 16:22:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12432.32389; Mon, 26 Oct 2020 16:22:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5GM-00067V-DK; Mon, 26 Oct 2020 16:22:26 +0000
Received: by outflank-mailman (input) for mailman id 12432;
 Mon, 26 Oct 2020 16:22:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ht1Y=EB=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kX5GK-00062L-OP
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:22:24 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c1702353-f4d9-4f28-b0c4-315701b71e1f;
 Mon, 26 Oct 2020 16:22:21 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 52B34106F;
 Mon, 26 Oct 2020 09:22:21 -0700 (PDT)
Received: from e109506-lin.cambridge.arm.com (e109506-lin.cambridge.arm.com
 [10.1.198.23])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1DFC23F719;
 Mon, 26 Oct 2020 09:22:20 -0700 (PDT)
X-Inumbo-ID: c1702353-f4d9-4f28-b0c4-315701b71e1f
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH v2 3/3] xen/arm: Warn user on CPU erratum 832075
Date: Mon, 26 Oct 2020 16:21:33 +0000
Message-Id: <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
In-Reply-To: <cover.1603728729.git.bertrand.marquis@arm.com>
References: <cover.1603728729.git.bertrand.marquis@arm.com>

When a Cortex A57 processor is affected by CPU erratum 832075, a guest
not implementing the workaround for it could deadlock the system.
Add a warning during boot informing the user that only trusted guests
should be executed on the system.
An equivalent warning is already given to the user by KVM on cores
affected by this erratum.

Also taint the hypervisor as insecure when this erratum applies, and
mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md.

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 SUPPORT.md               |  1 +
 xen/arch/arm/cpuerrata.c | 13 +++++++++++++
 2 files changed, 14 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index 5fbe5fc444..f7a3b046b0 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -38,6 +38,7 @@ supported in this document.
 ### ARM v8
 
     Status: Supported
+    Status, Cortex A57 r0p0 - r1p2, not security supported (Errata 832075)
 
 ## Host hardware support
 
diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 0430069a84..b35e8cd0b9 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -503,6 +503,19 @@ void check_local_cpu_errata(void)
 void __init enable_errata_workarounds(void)
 {
     enable_cpu_capabilities(arm_errata);
+
+#ifdef CONFIG_ARM64_ERRATUM_832075
+    if ( cpus_have_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) )
+    {
+        printk_once("**** This CPU is affected by erratum 832075.    ****\n"
+                    "**** Guests without CPU erratum workarounds     ****\n"
+                    "**** can deadlock the system!                   ****\n"
+                    "**** Only trusted guests should be used.        ****\n");
+
+        /* Taint the machine as being insecure */
+        add_taint(TAINT_MACHINE_UNSECURE);
+    }
+#endif
 }
 
 static int cpu_errata_callback(struct notifier_block *nfb,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:23:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:23:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12444.32402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5Hf-0006R8-Uk; Mon, 26 Oct 2020 16:23:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12444.32402; Mon, 26 Oct 2020 16:23:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5Hf-0006R1-RY; Mon, 26 Oct 2020 16:23:47 +0000
Received: by outflank-mailman (input) for mailman id 12444;
 Mon, 26 Oct 2020 16:23:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yqiy=EB=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1kX5He-0006Qv-Gd
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:23:46 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f177b7a1-3c15-4869-8fbd-25ac42984de9;
 Mon, 26 Oct 2020 16:23:44 +0000 (UTC)
Received: from mail.zoho.com by mx.zohomail.com
 with SMTP id 1603729413449887.4027185662565;
 Mon, 26 Oct 2020 09:23:33 -0700 (PDT)
X-Inumbo-ID: f177b7a1-3c15-4869-8fbd-25ac42984de9
ARC-Seal: i=1; a=rsa-sha256; t=1603729421; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=JL9uapzJPF+OtqxwwwiZOZ3/a/lNcWytF4x26u1xSl+zV6aq6p4C80dn3ZilmwHKLi0TUTO4twv0Nj8G1Pha6XtSXy00dQzx1jPgX79hccuSFCklCBV3vG0hySmu7L8zP9DvIA8qn0kGhNytaUkPtQiTq6wRkR7mXmdWymp2WCs=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603729421; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=ho2sIgtCQ2Ox4DRk31SJ+r+Sx8tQf2Y/xJoFYp1xVps=; 
	b=NpJxF2wvZkVew/WFWt6/Djg2NbkRXKpNRCGSZ7hyNnBaCBzDwfGynzRN7ohSYshZ02Nm3KCytMikWr4+aJuHlbqqqW8E4qA21xUn3yb5h2/gvhxSj40EavtLIQKC0n9XqmEJMLFjfpKJwq2Q79gFtvFVD/zQb1+cSJ6xN0HuWDs=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603729421;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Date:From:To:Cc:Message-ID:In-Reply-To:References:Subject:MIME-Version:Content-Type:Content-Transfer-Encoding;
	bh=ho2sIgtCQ2Ox4DRk31SJ+r+Sx8tQf2Y/xJoFYp1xVps=;
	b=A1zLwnjNfydEpHfcPcaXYadOLhl3G52HD7pff0YTGy8NkmaKwMp/sMvrq1qodh7o
	UuGqjuP9Y95T3lo25K6AgImwgd6srjDUi1+IigvZev1zbI140Omk0hBucHbbGH/kIiH
	D25p/noSh3iLvb2GNci2zjim0j4nlRQ9uBCCpEJA=
Received: from mail.zoho.com by mx.zohomail.com
	with SMTP id 1603729413449887.4027185662565; Mon, 26 Oct 2020 09:23:33 -0700 (PDT)
Date: Mon, 26 Oct 2020 12:23:33 -0400
From: Daniel Smith <dpsmith@apertussolutions.com>
To: "Jason Andryuk" <jandryuk@gmail.com>
Cc: "xen-devel" <xen-devel@lists.xenproject.org>, "hx242" <hx242@xen.org>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>,
	"Jan Beulich" <jbeulich@suse.com>,
	"Daniel De Graaf" <dgdegra@tycho.nsa.gov>
Message-ID: <17565b8d546.eaf68ba048834.6199377730744210517@apertussolutions.com>
In-Reply-To: <20201026134651.8162-1-jandryuk@gmail.com>
References: <CAKf6xpt0Kpi2ST4gfPnLrqUHE+3hHkRYpQAHPjp2vW=cHpqPAA@mail.gmail.com> <20201026134651.8162-1-jandryuk@gmail.com>
Subject: Re: [RFC PATCH] xsm: Re-work domain_create and
 domain_alloc_security
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Importance: Medium
User-Agent: Zoho Mail
X-Mailer: Zoho Mail

---- On Mon, 26 Oct 2020 09:46:51 -0400 Jason Andryuk <jandryuk@gmail.com> wrote ----

 > Untested! 
 >  
 > This only really matters for flask, but all of xsm is updated. 
 >  
 > flask_domain_create() and flask_domain_alloc_security() are a strange 
 > pair. 
 >  
 > flask_domain_create() serves double duty.  It both assigns sid and 
 > self_sid values and checks if the calling domain has permission to 
 > create the target domain.  It also has special casing for handling dom0. 
 > Meanwhile flask_domain_alloc_security() assigns some special sids, but 
 > waits for others to be assigned in flask_domain_create.  This split 
 > seems to have come about so that the structures are allocated before 
 > calling flask_domain_create().  It also means flask_domain_create is 
 > called in the middle of domain_create. 
 >  
 > Re-arrange the two calls.  Let flask_domain_create just check if current 
 > has permission to create ssidref.  Then it can be moved out to do_domctl 
 > and gate entry into domain_create.  This avoids doing partial domain 
 > creation before the permission check. 
 >  
 > Have flask_domain_alloc_security() take a ssidref argument.  The ssidref 
 > was already permission checked earlier, so it can just be assigned. 
 > Then the self_sid can be calculated here as well rather than in 
 > flask_domain_create(). 
 >  
 > The dom0 special casing is moved into flask_domain_alloc_security(). 
 > Maybe this should be just a fall-through for the dom0 already created 
 > case.  This code may not be needed any longer. 
 >  
 > Signed-off-by: Jason Andryuk <jandryuk@gmail.com> 
 > --- 
 >  xen/common/domain.c     |  6 ++---- 
 >  xen/common/domctl.c     |  4 ++++ 
 >  xen/include/xsm/dummy.h |  6 +++--- 
 >  xen/include/xsm/xsm.h   | 12 +++++------ 
 >  xen/xsm/flask/hooks.c   | 48 ++++++++++++++++------------------------- 
 >  5 files changed, 34 insertions(+), 42 deletions(-) 
 >  
 > diff --git a/xen/common/domain.c b/xen/common/domain.c 
 > index f748806a45..6b1f5ed59d 100644 
 > --- a/xen/common/domain.c 
 > +++ b/xen/common/domain.c 
 > @@ -407,7 +407,8 @@ struct domain *domain_create(domid_t domid, 
 >  
 >  lock_profile_register_struct(LOCKPROF_TYPE_PERDOM, d, domid); 
 >  
 > -    if ( (err = xsm_alloc_security_domain(d)) != 0 ) 
 > +    if ( (err = xsm_alloc_security_domain(d, config ? config->ssidref : 
 > +                                                      0)) != 0 ) 
 >  goto fail; 
 >  
 >  atomic_set(&d->refcnt, 1); 
 > @@ -470,9 +471,6 @@ struct domain *domain_create(domid_t domid, 
 >  if ( !d->iomem_caps || !d->irq_caps ) 
 >  goto fail; 
 >  
 > -        if ( (err = xsm_domain_create(XSM_HOOK, d, config->ssidref)) != 0 ) 
 > -            goto fail; 
 > - 
 >  d->controller_pause_count = 1; 
 >  atomic_inc(&d->pause_count); 
 >  
 > diff --git a/xen/common/domctl.c b/xen/common/domctl.c 
 > index af044e2eda..ffdc1a41cd 100644 
 > --- a/xen/common/domctl.c 
 > +++ b/xen/common/domctl.c 
 > @@ -406,6 +406,10 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl) 
 >  domid_t        dom; 
 >  static domid_t rover = 0; 
 >  
 > +        ret = xsm_domain_create(XSM_HOOK, op->u.createdomain.ssidref); 
 > +        if (ret) 
 > +            break; 
 > + 
 >  dom = op->domain; 
 >  if ( (dom > 0) && (dom < DOMID_FIRST_RESERVED) ) 
 >  { 
 > diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h 
 > index 7ae3c40eb5..29c4ca9951 100644 
 > --- a/xen/include/xsm/dummy.h 
 > +++ b/xen/include/xsm/dummy.h 
 > @@ -104,10 +104,10 @@ static XSM_INLINE void xsm_security_domaininfo(struct domain *d, 
 >  return; 
 >  } 
 >  
 > -static XSM_INLINE int xsm_domain_create(XSM_DEFAULT_ARG struct domain *d, u32 ssidref) 
 > +static XSM_INLINE int xsm_domain_create(XSM_DEFAULT_ARG u32 ssidref) 
 >  { 
 >  XSM_ASSERT_ACTION(XSM_HOOK); 
 > -    return xsm_default_action(action, current->domain, d); 
 > +    return xsm_default_action(action, current->domain, NULL); 
 >  } 
 >  
 >  static XSM_INLINE int xsm_getdomaininfo(XSM_DEFAULT_ARG struct domain *d) 
 > @@ -163,7 +163,7 @@ static XSM_INLINE int xsm_readconsole(XSM_DEFAULT_ARG uint32_t clear) 
 >  return xsm_default_action(action, current->domain, NULL); 
 >  } 
 >  
 > -static XSM_INLINE int xsm_alloc_security_domain(struct domain *d) 
 > +static XSM_INLINE int xsm_alloc_security_domain(struct domain *d, uint32_t ssidref) 
 >  { 
 >  return 0; 
 >  } 
 > diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h 
 > index 358ec13ba8..c1d2ef5832 100644 
 > --- a/xen/include/xsm/xsm.h 
 > +++ b/xen/include/xsm/xsm.h 
 > @@ -46,7 +46,7 @@ typedef enum xsm_default xsm_default_t; 
 >  struct xsm_operations { 
 >  void (*security_domaininfo) (struct domain *d, 
 >  struct xen_domctl_getdomaininfo *info); 
 > -    int (*domain_create) (struct domain *d, u32 ssidref); 
 > +    int (*domain_create) (u32 ssidref); 
 >  int (*getdomaininfo) (struct domain *d); 
 >  int (*domctl_scheduler_op) (struct domain *d, int op); 
 >  int (*sysctl_scheduler_op) (int op); 
 > @@ -71,7 +71,7 @@ struct xsm_operations { 
 >  int (*grant_copy) (struct domain *d1, struct domain *d2); 
 >  int (*grant_query_size) (struct domain *d1, struct domain *d2); 
 >  
 > -    int (*alloc_security_domain) (struct domain *d); 
 > +    int (*alloc_security_domain) (struct domain *d, uint32_t ssidref); 
 >  void (*free_security_domain) (struct domain *d); 
 >  int (*alloc_security_evtchn) (struct evtchn *chn); 
 >  void (*free_security_evtchn) (struct evtchn *chn); 
 > @@ -202,9 +202,9 @@ static inline void xsm_security_domaininfo (struct domain *d, 
 >  xsm_ops->security_domaininfo(d, info); 
 >  } 
 >  
 > -static inline int xsm_domain_create (xsm_default_t def, struct domain *d, u32 ssidref) 
 > +static inline int xsm_domain_create (xsm_default_t def, u32 ssidref) 
 >  { 
 > -    return xsm_ops->domain_create(d, ssidref); 
 > +    return xsm_ops->domain_create(ssidref); 
 >  } 
 >  
 >  static inline int xsm_getdomaininfo (xsm_default_t def, struct domain *d) 
 > @@ -305,9 +305,9 @@ static inline int xsm_grant_query_size (xsm_default_t def, struct domain *d1, st 
 >  return xsm_ops->grant_query_size(d1, d2); 
 >  } 
 >  
 > -static inline int xsm_alloc_security_domain (struct domain *d) 
 > +static inline int xsm_alloc_security_domain (struct domain *d, uint32_t ssidref) 
 >  { 
 > -    return xsm_ops->alloc_security_domain(d); 
 > +    return xsm_ops->alloc_security_domain(d, ssidref); 
 >  } 
 >  
 >  static inline void xsm_free_security_domain (struct domain *d) 
 > diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c 
 > index de050cc9fe..719fe90f22 100644 
 > --- a/xen/xsm/flask/hooks.c 
 > +++ b/xen/xsm/flask/hooks.c 
 > @@ -156,9 +156,11 @@ static int avc_unknown_permission(const char *name, int id) 
 >  return rc; 
 >  } 
 >  
 > -static int flask_domain_alloc_security(struct domain *d) 
 > +static int flask_domain_alloc_security(struct domain *d, u32 ssidref) 
 >  { 
 >  struct domain_security_struct *dsec; 
 > +    static int dom0_created = 0; 
 > +    int rc; 
 >  
 >  dsec = xzalloc(struct domain_security_struct); 
 >  if ( !dsec ) 
 > @@ -175,14 +177,24 @@ static int flask_domain_alloc_security(struct domain *d) 
 >  case DOMID_IO: 
 >  dsec->sid = SECINITSID_DOMIO; 
 >  break; 
 > +    case 0: 
 > +        if ( !dom0_created ) { 
 > +            dsec->sid = SECINITSID_DOM0; 
 > +            dom0_created = 1; 
 > +        } else { 
 > +            dsec->sid = SECINITSID_UNLABELED; 
 > +        } 

While the handling of this case is not wrong, I have to wonder if there is a better way to handle the dom0 creation case.

 > +        break; 
 >  default: 
 > -        dsec->sid = SECINITSID_UNLABELED; 
 > +        dsec->sid = ssidref; 
 >  } 
 >  
 >  dsec->self_sid = dsec->sid; 
 > -    d->ssid = dsec; 

I don't think you meant to delete that; without it, domains will have no ssid assigned to them.

 > -    return 0; 
 > +    rc = security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN, 
 > +                                 &dsec->self_sid); 
 > + 
 > +    return rc; 
 >  } 
 >  
 >  static void flask_domain_free_security(struct domain *d) 
 > @@ -507,32 +519,10 @@ static void flask_security_domaininfo(struct domain *d, 
 >  info->ssidref = domain_sid(d); 
 >  } 
 >  
 > -static int flask_domain_create(struct domain *d, u32 ssidref) 
 > +static int flask_domain_create(u32 ssidref) 
 >  { 
 > -    int rc; 
 > -    struct domain_security_struct *dsec = d->ssid; 
 > -    static int dom0_created = 0; 
 > - 
 > -    if ( is_idle_domain(current->domain) && !dom0_created ) 
 > -    { 
 > -        dsec->sid = SECINITSID_DOM0; 
 > -        dom0_created = 1; 
 > -    } 
 > -    else 
 > -    { 
 > -        rc = avc_current_has_perm(ssidref, SECCLASS_DOMAIN, 
 > -                          DOMAIN__CREATE, NULL); 
 > -        if ( rc ) 
 > -            return rc; 
 > - 
 > -        dsec->sid = ssidref; 
 > -    } 
 > -    dsec->self_sid = dsec->sid; 
 > - 
 > -    rc = security_transition_sid(dsec->sid, dsec->sid, SECCLASS_DOMAIN, 
 > -                                 &dsec->self_sid); 
 > - 
 > -    return rc; 
 > +    return avc_current_has_perm(ssidref, SECCLASS_DOMAIN, DOMAIN__CREATE, 
 > +                                NULL); 
 >  } 
 >  
 >  static int flask_getdomaininfo(struct domain *d) 
 > -- 
 > 2.26.2 
 >  
 

V/r,
Daniel P. Smith
Apertus Solutions, LLC





From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:31:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:31:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12451.32414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5Oo-0007Lj-Oj; Mon, 26 Oct 2020 16:31:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12451.32414; Mon, 26 Oct 2020 16:31:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5Oo-0007Lc-L5; Mon, 26 Oct 2020 16:31:10 +0000
Received: by outflank-mailman (input) for mailman id 12451;
 Mon, 26 Oct 2020 16:31:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3TsF=EB=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kX5On-0007LW-FR
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:31:09 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [51.163.158.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e28c21c-8246-4a4f-b11f-08762a654786;
 Mon, 26 Oct 2020 16:31:05 +0000 (UTC)
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2175.outbound.protection.outlook.com [104.47.17.175])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-9-ogrNgqPRPp-M8HVy-1nUkQ-1; Mon, 26 Oct 2020 17:31:02 +0100
Received: from AM0PR04MB5826.eurprd04.prod.outlook.com (2603:10a6:208:134::22)
 by AM8PR04MB7266.eurprd04.prod.outlook.com (2603:10a6:20b:1d6::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Mon, 26 Oct
 2020 16:31:01 +0000
Received: from AM0PR04MB5826.eurprd04.prod.outlook.com
 ([fe80::db0:41c3:aa05:d082]) by AM0PR04MB5826.eurprd04.prod.outlook.com
 ([fe80::db0:41c3:aa05:d082%6]) with mapi id 15.20.3499.018; Mon, 26 Oct 2020
 16:31:01 +0000
X-Inumbo-ID: 0e28c21c-8246-4a4f-b11f-08762a654786
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1603729864;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=aB/GevxpENHG7YFJITbPgqvDU4lP4uZ9P5yFjCmyIns=;
	b=Yi9VxD/7TkIxJLgNcefJSAqovwA0+QwszBXKGnqPpeWkQkp94HCFpyO/+6N1fZGtGdwCqM
	NRMQv/tY1LmOk/TRbqiEkis42kE2JaeNoPiIgO3QMtAieRoEwnOn8q7Nnwy2VZyqtvPE4p
	VKksNJPEfWWBmWIU+G4h9Ec8vy5gtVc=
X-MC-Unique: ogrNgqPRPp-M8HVy-1nUkQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NQCfcUp4U/H9cPg9+Ij41KFMbNdMqdDj6vJIX0iNgdrUYcmvy2EA39i1qhdyjOH53Eh823JYaotV9d+zNttsYETZH52TISqe+YnzoRQM7dsMBd8VewfIP4IaBe+B6bQ3ZP+wUq+2rtNDeXbBll96YA2hlrqCfJ32SNNuZ9+0bffo6lLvr/qEylP5faL0Vq9D0idadJlatOBeFr/dEObTuCHKIe4Mll8I2o7Dsa33pzNQJ/biC7bp8jT2sQCAoTCfAw2Fw3m24ZMtJT21bFQziZyUBuR73Pj78sVSW9Ak3/1nASXE8frG2YDoKghRnb+r5ljLm9oPVbtxpwxmw11bPw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aB/GevxpENHG7YFJITbPgqvDU4lP4uZ9P5yFjCmyIns=;
 b=K+oM/aS0eBVfDYpbwBdUzVGeNmNXzPOdxYpi5hMvziF+01dl8q9zGveL3zTnSshsAZvDUBaqB2qMVe4NWeQeUFEWlZ0XMd3mbuiP8Vf/qrEKKxlPfzzZLqymvwcWjKnSOHHIidwwWpX1wLC6xTm8z4jitL9arI165XIrC69WjM6/ZuoclB7t6vD6PDiFiSv3SRuwPK4DXRJxOBjDJoMYgvSPjxP7u+ah2uAfwK+fii++UpuFri2MTKkmHzetzIlSHCdv6jYagEVzCXqcXeeUEmcwhFTWfHvflNNmzWNR5KzdJWDwZN38/AA529B/cnTseXiYKGCvxLL/PFHb7HiRlg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <JGross@suse.com>, "George.Dunlap@citrix.com"
	<George.Dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
	"frederic.pierret@qubes-os.org" <frederic.pierret@qubes-os.org>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Thread-Topic: Recent upgrade of 4.13 -> 4.14 issue
Thread-Index: AQHWq6SD/uyafOnyXU2hV/eab0QUF6mqE3EA
Date: Mon, 26 Oct 2020 16:31:01 +0000
Message-ID: <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
	 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
	 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
In-Reply-To: <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
user-agent: Evolution 3.38.1 (by Flathub.org) 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
x-originating-ip: [89.186.78.87]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 56fdfeec-232a-4cc0-e23d-08d879cc86b8
x-ms-traffictypediagnostic: AM8PR04MB7266:
x-ld-processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs:
 <AM8PR04MB7266666EA4132257392C7789C5190@AM8PR04MB7266.eurprd04.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 LTcKB0Rxbuvcl6Hg8UWre/zW64pOWpXbucte3GoPpI09xt77it75REEEq84fceQPujnwW6WE7ifovNh3nIhjo3tBNMvVUp29RyEm4bfOuFvpoeNTF5guz6K+BYugP/07Rn8XFD1EZWPombAWbsldZEog6LPWpsOqEApP32x59gHARRnivKfQOzZFEmhc/dXBkI4t4smRtJ31EiWVvpsn1RyjNtk8zjF1uGdbPBk4AQozT7WR0Yl1LxRClMi9IPdQdDB/uGmG3BV8XFpKmbRSqj3sXLsnvNK3wT60isXFjt4hN180bfNRuOCbzzr+GswtpfsdRdLQw5CVuE5NV6bVS67z/TseeW/iIgzlxPU561xFlE1+lP2zjN2h3omwuEcNtJy71OWLe625irp3t/twuQ==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR04MB5826.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(376002)(346002)(396003)(39860400002)(136003)(366004)(6486002)(86362001)(76116006)(8936002)(4326008)(186003)(6506007)(4001150100001)(66574015)(5660300002)(99936003)(66946007)(8676002)(2906002)(66616009)(2616005)(66476007)(66556008)(6512007)(36756003)(83380400001)(478600001)(110136005)(53546011)(54906003)(26005)(64756008)(71200400001)(966005)(66446008)(316002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 ieJtbYPaeJrekP9LROFuz47m0OTaNHmWtmaZ8VyZdYPFNfEq5fc1KlOwRB4e9leGczDzBK1e1YoXHoPkGiR4OF8Fl1HkQKRdvrJt5e738gn3A+SlxoPn5u6D+YqOmMutBF0ubPqWmMu20jaszEqziys9QzMiPGiGMi3AF6dsEZMWSEg50fGRLtjf7KNQFTDGePN99N9kLdIFeMDF951dnm1fBScA9sQ54FkPNOTl/SBxmCLc98o3FImPzR8TdsQYWwDThBPAB4vhYALXgBRLeJWay4IycDAbI72CztEccojdIS6tHySO4+toSx3NP250jME3LjHC8tGKMmx/3EprU5LEQCurcKjEkcAUFuU/+fb450mR20A1z0HI0S3HTHyFwKbQFNKwxR8ihv92p30atmE2+MhFWewz778yVqYmtM7qAKVBTOcDAi01OqUG5gOarqRkG4JgQuv7FSh9pgmAfgqHc690lPHex5fRB5kP7tue0uBslmM4edIBIqyplnuoTLDsxlx15LHZvLD2S8pgiqCGpQV5xsxOLg3fXyGpWc+Vepcde00TMiXnmaDqFPVHa8Wmzrp5iCENdlYm0dPQ7IQLV7WpjcNI+kJdktkmoBSXTAw/NA4eYFjvjGGee4XSq/xFXLaa2eNdDqeILMo1GA==
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-xUngC0FMeVHxvue983WA"
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR04MB5826.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 56fdfeec-232a-4cc0-e23d-08d879cc86b8
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 Oct 2020 16:31:01.2538
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 7WaSwT2Y2CMrpjrDQhfLJrJf6G0gtivB4b+PPdVgv5l24PV2FxSwQoL7tWg+vpuqRDc9RNqRzNvP52L6fJtezQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR04MB7266

--=-xUngC0FMeVHxvue983WA
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-10-26 at 15:30 +0100, Jürgen Groß wrote:
> On 26.10.20 14:54, Andrew Cooper wrote:
> > On 26/10/2020 13:37, Frédéric Pierret wrote:
> > >
> > > If anyone would have any idea of what's going on, that would be
> > > very appreciated. Thank you.
> >
> > Does booting Xen with `sched=credit` make a difference?
>
> Hmm, I think I have spotted a problem in credit2 which could explain
> the hang:
>
> csched2_unit_wake() will NOT put the sched unit on a runqueue in case
> it has CSFLAG_scheduled set. This bit will be reset only in
> csched2_context_saved().
>
Exactly, it does not put it back there. However, if it finds a vCPU
with the CSFLAG_scheduled flag set, it should set the
CSFLAG_delayed_runq_add flag.

Unless curr_on_cpu(cpu)==unit or unit_on_runq(svc)==true... which
should not be the case. Or were you saying that we actually are in one
of these situations?

In fact...

> So in case a vcpu (and its unit, of course) is blocked and there has
> been no other vcpu active on its physical cpu but the idle vcpu,
> there
> will be no call of csched2_context_saved(). This will block the vcpu
> to become active in theory for eternity, in case there is no need to
> run another vcpu on the physical cpu.
>
...maybe I am not seeing the exact situation and sequence of events
you're thinking of. What I see is this: [*]

- vCPU V is running, i.e., CSFLAG_scheduled is set
- vCPU V blocks
- we enter schedule()
  - schedule calls do_schedule() --> csched2_schedule()
    - we pick idle, so CSFLAG_delayed_runq_add is set for V
  - schedule calls sched_context_switch()
    - sched_context_switch() calls context_switch()
      - context_switch() calls sched_context_switched()
        - sched_context_switched() calls:
          - vcpu_context_saved()
          - unit_context_saved()
            - unit_context_saved() calls sched_context_saved() -->
                                          csched2_context_saved()
              - csched2_context_saved():
                - clears CSFLAG_scheduled
                - checks (and clears) CSFLAG_delayed_runq_add

[*] this assumes granularity 1, i.e., no core-scheduling and no
    rendezvous. Or was core-scheduling actually enabled?

And if CSFLAG_delayed_runq_add is set **and** the vCPU is runnable, the
task is added back to the runqueue.

So, even if we don't do the actual context switch (i.e., we don't call
__context_switch()), if the next vCPU that we pick when vCPU V blocks
is the idle one, it looks to me that we do get to call
csched2_context_saved().

And it also looks to me that, when we get to that, if the vCPU is
runnable, even if it still has CSFLAG_scheduled set, we do put it back
on the runqueue.

And if the vCPU blocked, but csched2_unit_wake() ran while
CSFLAG_scheduled was still set, that indeed should mean that the vCPU
itself will be runnable again when we get to csched2_context_saved().

Or did you have something completely different in mind, and I'm missing
it?


Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-xUngC0FMeVHxvue983WA
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+W+bYACgkQFkJ4iaW4
c+5/ww/9G663B2cz0ktfvdS1ZrrcrzSmanFSDq/VlS7NV5kvXeafGQMVLAKZS0Mi
2QWeW6JO1+CxN6rPoweS1HXSgBCEMhDtz2Q6itl6XpjUkz14NmeJOTr01wZ43tUD
UvHM8sCK/+7EGROXNB/aWwPHXCFSJME38F8NUaufjortD/J9Dtb1FNy0timYj82G
tz/OUdbU3ypU9ucStS1xaSFtcu0YiWNgVIQ9IRfNOjVZSTFNPubFRtkkSdPSw4Zi
DgH9Oh+Xm0xsx5nt8GY7tVF+2pczZRpxihi7NagzejzrSbO/ftaWm9iWOPfDKine
etQf9qzASKYeE+egiVXww3DDWY60w0GSHy+F09izcilHekCibMu1EMPL1mjdsQY2
GICqC34zNT9GVlE2dbP4qNHNeQKmt0yReM2EWvu2YThawIb1KxkCCwlv/84O4voj
uQYiHxBCuksvuS8zi+0Z1vYva071S6T6OgDPhQp5D9J076ZxQ+6Ugs+FlIheBQei
aVLL9fCyYTCNX+ts3DQ6YscOfNux737cw42gdPtm8pc/UXbrMA1VNu1qBOg2T3xv
L/+k3od6ISRUGJnICknvqkJWgHP/w2R2vv0TQX/9Lc//BEzqQm+gM4IvczxffsnS
wEBzBiteNyyTftpw0kVPRd+wEsOIcTqhq+JsY0vAPY100g3G+2s=
=uTft
-----END PGP SIGNATURE-----

--=-xUngC0FMeVHxvue983WA--



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:40:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:40:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12455.32426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5XL-0007cV-Ka; Mon, 26 Oct 2020 16:39:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12455.32426; Mon, 26 Oct 2020 16:39:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5XL-0007cO-HD; Mon, 26 Oct 2020 16:39:59 +0000
Received: by outflank-mailman (input) for mailman id 12455;
 Mon, 26 Oct 2020 16:39:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZY59=EB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kX5XK-0007cJ-NU
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:39:58 +0000
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fa74700-e853-459b-b9c4-629e31d20fa4;
 Mon, 26 Oct 2020 16:39:57 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id d24so10900796ljg.10
 for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 09:39:57 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ZY59=EB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
	id 1kX5XK-0007cJ-NU
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:39:58 +0000
X-Inumbo-ID: 1fa74700-e853-459b-b9c4-629e31d20fa4
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1fa74700-e853-459b-b9c4-629e31d20fa4;
	Mon, 26 Oct 2020 16:39:57 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id d24so10900796ljg.10
        for <xen-devel@lists.xenproject.org>; Mon, 26 Oct 2020 09:39:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=/ba3eye2nqN8R+jvaoMR7ipQ8F80RQhtFXVvQeivpDY=;
        b=ps0PSrOqcLnGdRrDek4lPX/T9vvQMqbZHsAHfldbr/fu8UeMiUijsvohIo0uA0sWKc
         HcPETn2yA1ox1CSf8OFnEJCw/+PGKj/IChhsb8Qp6mU9erBTqrpi0oN6+yI4XLgFr+vn
         wyXJPzD88YAzYVQc3m0YhcOHVObRBbRSgdqVDiEeNoEizBZfenddWBP4gBeVk9HOU3Mr
         WSUPIZWgR4OJQq/38SzQdts3kEWvpziP7Dkzx+zSWco8/juiBz0nYMasOEvGp93kzCIv
         odS3gDCLVqjzjBfnY9/VJjjOB2dgoOORTnsfUWBqMt+LUa0Ny0IMyImLviBNxzDgIM+g
         eMGA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=/ba3eye2nqN8R+jvaoMR7ipQ8F80RQhtFXVvQeivpDY=;
        b=Z1gmH2rW+qxE+sQbsoAqo30JvXnkvXR7RGJg3fXSDRY6cCjW1KQXZ66F39L0jqRph9
         DHWjX0Kpbs8WZhVgjRwtCB5naqyhwCMAlkDzd+BtuIcyT9ApBPhfnvjAbrVzYT17FAPs
         7RHVZyb4lccXwXvaWddu1ZMpBd5sSQ4KNVPLHbgoc+QetND8GMbC2Suq9ryDT1d3Igim
         zDtwQaI53SkhvPU3q4f2jo2invfvJNGKHo9CX16Umu2Hs+Vw8IZVp4WqFijjYtXMpYki
         hQKpvIIFlV+vTsR1eGs8iS+vg0/uu304V8TR3nnu9oYzvnGEJNMmmxDw6gOodRCOLZJC
         yU0g==
X-Gm-Message-State: AOAM531s4rQjB1rxcbtXSWlDMB36IrM9igPoiEvhmN4ZP7K25pmE4Iw1
	8fOhP4/icG/bfzwrqsyUKEL5+eygZ12EA0Q+gu8=
X-Google-Smtp-Source: ABdhPJx2nuqBQQV4qbIIS7PsS+gPnxb0ieBxk7rT5PJVs+O3pjFY4uXoO/vVlPHHcSloMLrH2hF8aPuWaN/6Mw6lFbs=
X-Received: by 2002:a2e:b0c7:: with SMTP id g7mr5945690ljl.433.1603730396567;
 Mon, 26 Oct 2020 09:39:56 -0700 (PDT)
MIME-Version: 1.0
References: <CAKf6xpt0Kpi2ST4gfPnLrqUHE+3hHkRYpQAHPjp2vW=cHpqPAA@mail.gmail.com>
 <20201026134651.8162-1-jandryuk@gmail.com> <17565b8d546.eaf68ba048834.6199377730744210517@apertussolutions.com>
In-Reply-To: <17565b8d546.eaf68ba048834.6199377730744210517@apertussolutions.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 26 Oct 2020 12:39:44 -0400
Message-ID: <CAKf6xpvXuo8bS5VT1gSbx=57tWdtGEnLKZcJGjRo2QOoKREMsw@mail.gmail.com>
Subject: Re: [RFC PATCH] xsm: Re-work domain_create and domain_alloc_security
To: Daniel Smith <dpsmith@apertussolutions.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, hx242 <hx242@xen.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Content-Type: text/plain; charset="UTF-8"

On Mon, Oct 26, 2020 at 12:23 PM Daniel Smith
<dpsmith@apertussolutions.com> wrote:
>
> ---- On Mon, 26 Oct 2020 09:46:51 -0400 Jason Andryuk <jandryuk@gmail.com> wrote ----
>
>  > Untested!
>  >
>  > This only really matters for flask, but all of xsm is updated.
>  >
>  > flask_domain_create() and flask_domain_alloc_security() are a strange
>  > pair.
>  >
>  > flask_domain_create() serves double duty.  It both assigns sid and
>  > self_sid values and checks if the calling domain has permission to
>  > create the target domain.  It also has special casing for handling dom0.
>  > Meanwhile flask_domain_alloc_security() assigns some special sids, but
>  > waits for others to be assigned in flask_domain_create.  This split
>  > seems to have come about so that the structures are allocated before
>  > calling flask_domain_create().  It also means flask_domain_create is
>  > called in the middle of domain_create.
>  >
>  > Re-arrange the two calls.  Let flask_domain_create just check if current
>  > has permission to create ssidref.  Then it can be moved out to do_domctl
>  > and gate entry into domain_create.  This avoids doing partial domain
>  > creation before the permission check.
>  >
>  > Have flask_domain_alloc_security() take a ssidref argument.  The ssidref
>  > was already permission checked earlier, so it can just be assigned.
>  > Then the self_sid can be calculated here as well rather than in
>  > flask_domain_create().
>  >
>  > The dom0 special casing is moved into flask_domain_alloc_security().
>  > Maybe this should be just a fall-through for the dom0 already created
>  > case.  This code may not be needed any longer.
>  >
>  > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
>  > ---

<snip>

>  > -static int flask_domain_alloc_security(struct domain *d)
>  > +static int flask_domain_alloc_security(struct domain *d, u32 ssidref)
>  >  {
>  >      struct domain_security_struct *dsec;
>  > +    static int dom0_created = 0;
>  > +    int rc;
>  >
>  >      dsec = xzalloc(struct domain_security_struct);
>  >      if ( !dsec )
>  > @@ -175,14 +177,24 @@ static int flask_domain_alloc_security(struct domain *d)
>  >      case DOMID_IO:
>  >          dsec->sid = SECINITSID_DOMIO;
>  >          break;
>  > +    case 0:
>  > +        if ( !dom0_created ) {
>  > +            dsec->sid = SECINITSID_DOM0;
>  > +            dom0_created = 1;
>  > +        } else {
>  > +            dsec->sid = SECINITSID_UNLABELED;
>  > +        }
>
> While the handling of this case is not wrong, I have to wonder if there is a better way to handle the dom0 creation case.

dom0_cfg.ssidref could be set to SECINITSID_DOM0.  I guess that would
need some xsm_ssid_dom0 wrapper.  Then maybe this logic can go away
and the default case used.

pv-shim doesn't necessarily use domid 0, so this may be broken there.
dom0_cfg.ssidref would fix that, I think.  But I'm not familiar with
pv-shim.

>  > +        break;
>  >      default:
>  > -        dsec->sid = SECINITSID_UNLABELED;
>  > +        dsec->sid = ssidref;
>  >      }
>  >
>  >      dsec->self_sid = dsec->sid;
>  > -    d->ssid = dsec;
>
> I don't think you meant to delete that; without it, domains will have no ssid assigned to them.

Yes, this should be retained.

Thanks for looking.

-Jason


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:43:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:43:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12459.32438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5aC-0008Qj-3T; Mon, 26 Oct 2020 16:42:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12459.32438; Mon, 26 Oct 2020 16:42:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5aC-0008Qc-0N; Mon, 26 Oct 2020 16:42:56 +0000
Received: by outflank-mailman (input) for mailman id 12459;
 Mon, 26 Oct 2020 16:42:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wDZ8=EB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kX5aA-0008QW-1L
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:42:54 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.162])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4fbb5e75-b3de-4330-a118-c32a467ee0b2;
 Mon, 26 Oct 2020 16:42:53 +0000 (UTC)
Received: from aepfle.de by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
 with ESMTPSA id R05874w9QGgi0nh
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 26 Oct 2020 17:42:44 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=wDZ8=EB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kX5aA-0008QW-1L
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:42:54 +0000
X-Inumbo-ID: 4fbb5e75-b3de-4330-a118-c32a467ee0b2
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.162])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4fbb5e75-b3de-4330-a118-c32a467ee0b2;
	Mon, 26 Oct 2020 16:42:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603730572;
	s=strato-dkim-0002; d=aepfle.de;
	h=In-Reply-To:References:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=9XWjZUFzV7jHF8vOdUmxwfJm+gEsibSLLwGolDofJks=;
	b=be12kLkFMAoMXc1XC1BPXLm3nNMDRm79Th73F4PL35rEmPMGsJpYHCqd+Cpud/Hp2S
	dSjDJmq8zkpqwrxijVTTY5gWb8YYcfcLY5NR+G05DA6n+qwW+OsGYcST04sDFufIp9Y2
	mkkc3lHZtRLySXutP0dhapvjgSPGXuZzo7Y7b2weayLUHn/0+pAdjzXiPoGfaOkgcrct
	oIXI3LpyCS6tkERCqjdF8viVPx6Vc9YFAyUn7tIXz0XLe6FlB84sS8Xvo4KIb+2guJeW
	8glsRb+6Qh3o4DkciOvrYhkBTNzEkDaC5BATRFgl9cd2EruPAhYDDFFh6eg2wDac/evo
	S6qg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from aepfle.de
	by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
	with ESMTPSA id R05874w9QGgi0nh
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client did not present a certificate);
	Mon, 26 Oct 2020 17:42:44 +0100 (CET)
Date: Mon, 26 Oct 2020 17:42:39 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Anthony PERARD <anthony.perard@citrix.com>,
	=?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
	Wei Liu <wl@xen.org>
Subject: Re: [Xen-devel] [XEN PATCH for-4.13 v7 01/11] libxl: Offer API
 versions 0x040700 and 0x040800
Message-ID: <20201026164239.GA27498@aepfle.de>
References: <20191023130013.32382-1-ian.jackson@eu.citrix.com>
 <20191023130013.32382-2-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="IJpNTDwzlM2Ie8A6"
Content-Disposition: inline
In-Reply-To: <20191023130013.32382-2-ian.jackson@eu.citrix.com>


--IJpNTDwzlM2Ie8A6
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline

On Wed, Oct 23, Ian Jackson wrote:

> 0x040700 was introduced in 304400459ef0 (aka 4.7.0-rc1~481)
> 0x040800 was introduced in 57f8b13c7240 (aka 4.8.0-rc1~437)

> Anyway, in the meantime, we should fix it.  Backporting this is
> probably a good idea: it won't change the behaviour for existing
> callers but it will avoid errors for some older correct uses.

> +    LIBXL_API_VERSION != 0x040700 && LIBXL_API_VERSION != 0x040800 && \


Why was this never backported to staging-4.12 and older?
Please backport it to assist libvirt.

Olaf

--IJpNTDwzlM2Ie8A6
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+W/HoACgkQ86SN7mm1
DoDE+Q//ax8kFL5cwKFA/vjvEd0wkNIWx302HyvX9vblP0EmpVN/7YujOTWwcjsh
+UN2nZe0x5OuVH0R66j/IiRYGXUlt8TFLfiaOd0UT9G+yYgzA4rGWWWdgYPWmuAz
/4PH8RmQ+U1qA1bN5rdtxKnQ/LqBmfY7xgW6hJxmHGi2h7q0Rwheuo7NvkqpY2Al
d7+Px8THGw9Ny///Rpq3PPHs3GuZG7VWiCRI5zjHQ6zDny3z7KKW96UFrjJT4/Hh
t1hnxoZr0yQEAJPckA2oLIj9Jv/MbIYrEMIxGHGDBWbTVnHAGJh6A3gB9R07gjqG
Qwz317yr65arja/MffPSxaBLDmjxzD44BVEi/PbdmvbOlFzvCtv0U9Yzg59wAJ3K
jcmEAEcIop+0FECEPHbVZbNXzgopFDqo4fdh7/6giSWQsa7aRCIsfRypxyqKJMEa
uISme/qtMZ5YEFaGbJY/QxcFaDs5R1SAvS8COgmAombrssOEJVNBSAOVknDhYAJJ
O0EoJJJotQRVIuOAQ0Y1an+gHDtB4exjV8qL7CyoGir7pUCD0+u6F7j71pWT8lX9
Hpou/fSceC9MgrWDfRJ8nq4sOQY4oqcnnWiYrp7NR2ZN/aUFkwiNoxIQEhQxLnnD
uYR55pNFXQlzxO0F1RIwknJIUIEm5Cys81bWkp55Ocn87N43lw8=
=oIf3
-----END PGP SIGNATURE-----

--IJpNTDwzlM2Ie8A6--


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:44:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:44:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12462.32449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5bF-00007n-EZ; Mon, 26 Oct 2020 16:44:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12462.32449; Mon, 26 Oct 2020 16:44:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5bF-00007g-BJ; Mon, 26 Oct 2020 16:44:01 +0000
Received: by outflank-mailman (input) for mailman id 12462;
 Mon, 26 Oct 2020 16:43:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dsar=EB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kX5bD-00007S-4z
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:43:59 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 287bca69-e455-4e61-aa85-b08440266629;
 Mon, 26 Oct 2020 16:43:57 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Dsar=EB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kX5bD-00007S-4z
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:43:59 +0000
X-Inumbo-ID: 287bca69-e455-4e61-aa85-b08440266629
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 287bca69-e455-4e61-aa85-b08440266629;
	Mon, 26 Oct 2020 16:43:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603730637;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=W5tYi8nQdGAwAPRlBIrOXvRnCSGAJRP7MM8VE7dlYhY=;
  b=ElM30uPoEqak7tFyzc8kg2ebYMdYBLAgwu/QoDPHI6prDB+x1xeswZuO
   svsdvlEbehb/bl2+6NvuFVV8BMPajGgjWW64ttjH19iaFnv0LGebfsyHM
   SjqSArLLSmnHOhuuFfMNoirQtVtIQxRAFHEGAXthA5gZrw+mmUaC46VHQ
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: gqHDFX6I39qpUyjzj1CRe3TqTEz+Co0fBvaH3vhjWVf9+hNa+mtu+ALwhwm46kxfVhI1mOAWJG
 8LjnYC6iN+nYmPRTK06Mw7Fi078k/fQ7zqdNqi8mEavDAiZPCb8+bKFjr5ZLuVXJub1Dg7aSyk
 gkiTZoEvROefBHGrcomQNOBOpkUfhpfE0KoHEwdtfHo94Heq7K0Kfh528H/CH8K5quIk+MxDGW
 7w6GGizzldrd1V+YTZxlf0i39jw0dPihGelJjlZQ9N+jxKg1nYHUeB7taUzUsq3NzmngcC5j3X
 xxY=
X-SBRS: None
X-MesageID: 29769153
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,420,1596513600"; 
   d="scan'208";a="29769153"
Subject: Re: XSM and the idle domain
To: Jason Andryuk <jandryuk@gmail.com>, Hongyan Xia <hx242@xen.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"Daniel Smith" <dpsmith@apertussolutions.com>
References: <bfd645cf42ef7786183be15c222ad04beed362c0.camel@xen.org>
 <f8f5f354-aa8d-4bd0-9c0e-ef37702e80c5@citrix.com>
 <48816c69ab2551a34c57a87392bb7f08ca6482ee.camel@xen.org>
 <CAKf6xpt0Kpi2ST4gfPnLrqUHE+3hHkRYpQAHPjp2vW=cHpqPAA@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f1d16da4-b08f-6819-4883-affa5788c49c@citrix.com>
Date: Mon, 26 Oct 2020 16:43:49 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpt0Kpi2ST4gfPnLrqUHE+3hHkRYpQAHPjp2vW=cHpqPAA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 26/10/2020 13:37, Jason Andryuk wrote:
> On Thu, Oct 22, 2020 at 1:01 PM Hongyan Xia <hx242@xen.org> wrote:
>> On Thu, 2020-10-22 at 13:51 +0100, Andrew Cooper wrote:
>>> On 21/10/2020 15:34, Hongyan Xia wrote:
>>>> The first question came up during ongoing work in LiveUpdate. After
>>>> an
>>>> LU, the next Xen needs to restore all domains. To do that, some
>>>> hypercalls need to be issued from the idle domain context and
>>>> apparently XSM does not like it.
>>> There is no such thing as issuing hypercalls from the idle domain
>>> (context or otherwise), because the idle domain does not have enough
>>> associated guest state for anything to make the requisite
>>> SYSCALL/INT80/VMCALL/VMMCALL invocation.
>>>
>>> I presume from this comment that what you mean is that you're calling
>>> the plain hypercall functions, context checks and everything, from
>>> the
>>> idle context?
>> Yep, the restore code just calls the hypercall functions from idle
>> context.
>>
>>> If so, this is buggy for more reasons than just XSM objecting to its
>>> calling context, and that XSM is merely the first thing to explode.
>>> Therefore, I don't think modifications to XSM are applicable to
>>> solving
>>> the problem.
>>>
>>> (Of course, this is all speculation because there's no concrete
>>> implementation to look at.)
>> Another explosion is the inability to create hypercall preemption,
>> which for now is disabled when the calling context is the idle domain.
>> Apart from XSM and preemption, the LU prototype works fine. We only
>> reuse a limited number of hypercall functions and are not trying to be
>> able to call all possible hypercalls from idle.
> I wonder if for domain_create, it wouldn't be better to move
> xsm_domain_create() out to the domctl (hypercall entry) and check it
> there.  That would side-step xsm in domain_create.  Flask would need
> to be modified for that.  I've an untested patch doing the
> rearranging, which I'll send as a follow up.
>
> What other hypercalls are you having issues with?  Those could also be
> refactored so the hypercall entry checks permissions, and the actual
> work is done in a directly callable function.
>
>> Having a dedicated domLU just like domB (or reusing domB) sounds like a
>> viable option. If the overhead can be made low enough then we won't
>> need to work around XSM and hypercall preemption.
>>
>> Although the question was whether XSM should interact with the idle
>> domain. With a good design LU should be able to sidestep this though.
> Circling back to the main topic, is the idle domain Xen, or is it
> distinct?

It "is" Xen, IMO.

> It runs in the context of Xen, so Xen isn't really in a
> place to enforce policy on itself.  Hongyan, as you said earlier,
> applying XSM is more of a debugging feature at that point than a
> security feature.  And as Jan pointed out, you can have problems if
> XSM prevents the hypervisor from performing an action it doesn't
> expect to fail.

We have several system DOMIDs: SELF, IO, XEN, COW, INVALID and IDLE.

SELF is a magic constant expected to be used in most hypercalls on
oneself, to simplify callers.  INVALID is also a magic constant.

The others all have a struct domain allocated for them, and are
concrete objects as far as Xen is concerned.  IO/XEN/COW all exist for
the purpose of fitting into the memory/device ownership models, while
IDLE exists for the purpose of encapsulating the idle vcpus in the
scheduling model.

None of them have any kind of outside-Xen state associated with them. 
"scheduling" an idle vCPU runs the idle loop, but it is all code within
the hypervisor.

The problem here is that idle context is also used in certain "normal"
cases in Xen (startup, shutdown, possibly also for softirq/tasklet
context), all of which we (currently) expect not to be making hypercalls
from.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 16:48:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 16:48:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12468.32461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5fh-0000M1-12; Mon, 26 Oct 2020 16:48:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12468.32461; Mon, 26 Oct 2020 16:48:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX5fg-0000Lu-Tj; Mon, 26 Oct 2020 16:48:36 +0000
Received: by outflank-mailman (input) for mailman id 12468;
 Mon, 26 Oct 2020 16:48:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kX5fg-0000Lp-ER
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:48:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0cf104f7-502b-42c6-8a10-5ca2ef7cb90d;
 Mon, 26 Oct 2020 16:48:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5ED92ACF5;
 Mon, 26 Oct 2020 16:48:30 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8IcS=EB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kX5fg-0000Lp-ER
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 16:48:36 +0000
X-Inumbo-ID: 0cf104f7-502b-42c6-8a10-5ca2ef7cb90d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0cf104f7-502b-42c6-8a10-5ca2ef7cb90d;
	Mon, 26 Oct 2020 16:48:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603730910;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vqRv94muRlAq/r4s6zwGCYMbRRkKaqrtyn2uegmbDqQ=;
	b=M/oGQJ0UoJJRAF3lryoXxPwZX90Ymaiqml2Td/RaE+dGgu3YlrZbbTk1KlFzvrkwk9RLs/
	c3OoVnTi6j9hkraqXIPfiO7I62Ta7Q1z/UDBgbUo4fo6jeilIj+sisC/u9OkoGDI74iKKg
	mMJy0TzTJk12f7FX/Lv8B4ambPiZQPU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 5ED92ACF5;
	Mon, 26 Oct 2020 16:48:30 +0000 (UTC)
Subject: Re: [Xen-devel] [XEN PATCH for-4.13 v7 01/11] libxl: Offer API
 versions 0x040700 and 0x040800
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Anthony PERARD
 <anthony.perard@citrix.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>
References: <20191023130013.32382-1-ian.jackson@eu.citrix.com>
 <20191023130013.32382-2-ian.jackson@eu.citrix.com>
 <20201026164239.GA27498@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <be484d8b-c026-b22b-fea2-81839731f5a4@suse.com>
Date: Mon, 26 Oct 2020 17:48:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201026164239.GA27498@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 17:42, Olaf Hering wrote:
> On Wed, Oct 23, Ian Jackson wrote:
> 
>> 0x040700 was introduced in 304400459ef0 (aka 4.7.0-rc1~481)
>> 0x040800 was introduced in 57f8b13c7240 (aka 4.8.0-rc1~437)
> 
>> Anyway, in the meantime, we should fix it.  Backporting this is
>> probably a good idea: it won't change the behaviour for existing
>> callers but it will avoid errors for some older correct uses.
> 
>> +    LIBXL_API_VERSION != 0x040700 && LIBXL_API_VERSION != 0x040800 && \
> 
> 
> Why was this never backported to staging-4.12 and older?
> Please backport it to assist libvirt.

I'm afraid the request comes too late for 4.12 (branch now
closed for its final stable release to be cut) and older
(already in security-only mode).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:18:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:18:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12474.32474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68O-0002zn-E5; Mon, 26 Oct 2020 17:18:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12474.32474; Mon, 26 Oct 2020 17:18:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68O-0002zg-Ag; Mon, 26 Oct 2020 17:18:16 +0000
Received: by outflank-mailman (input) for mailman id 12474;
 Mon, 26 Oct 2020 17:18:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Yhc=EB=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kX68M-0002zb-VF
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:18:14 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id dfc9d755-f9ee-44c1-adf9-49016fcff8c9;
 Mon, 26 Oct 2020 17:18:13 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CF93B139F;
 Mon, 26 Oct 2020 10:18:12 -0700 (PDT)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 491973F719;
 Mon, 26 Oct 2020 10:18:11 -0700 (PDT)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v1 0/4] xen/arm: Make PCI passthrough code non-x86 specific
Date: Mon, 26 Oct 2020 17:17:50 +0000
Message-Id: <cover.1603731279.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1

This patch series is preparatory work to make PCI passthrough code non-x86
specific.

Rahul Singh (4):
  xen/ns16550: solve compilation error on ARM with CONFIG_HAS_PCI
    enabled.
  xen/pci: Introduce new CONFIG_HAS_PCI_ATS flag for PCI ATS
    functionality.
  xen/pci: Move x86 specific code to x86 directory.
  xen/pci: solve compilation error when memory paging is not enabled.

 xen/arch/x86/Kconfig                     |  1 +
 xen/drivers/char/Kconfig                 |  7 ++
 xen/drivers/char/ns16550.c               | 32 ++++----
 xen/drivers/passthrough/ats.h            | 24 ++++++
 xen/drivers/passthrough/pci.c            | 79 +------------------
 xen/drivers/passthrough/vtd/x86/Makefile |  2 +-
 xen/drivers/passthrough/x86/Makefile     |  3 +-
 xen/drivers/passthrough/x86/iommu.c      |  7 ++
 xen/drivers/passthrough/x86/pci.c        | 97 ++++++++++++++++++++++++
 xen/drivers/pci/Kconfig                  |  3 +
 xen/include/xen/pci.h                    |  2 +
 11 files changed, 164 insertions(+), 93 deletions(-)
 create mode 100644 xen/drivers/passthrough/x86/pci.c

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:18:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:18:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12475.32486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68c-00033C-N3; Mon, 26 Oct 2020 17:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12475.32486; Mon, 26 Oct 2020 17:18:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68c-000334-JJ; Mon, 26 Oct 2020 17:18:30 +0000
Received: by outflank-mailman (input) for mailman id 12475;
 Mon, 26 Oct 2020 17:18:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Yhc=EB=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kX68a-00032c-Qx
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:18:28 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 6fd7745e-367b-44aa-beec-076783c922f6;
 Mon, 26 Oct 2020 17:18:27 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 33967139F;
 Mon, 26 Oct 2020 10:18:27 -0700 (PDT)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 011B33F719;
 Mon, 26 Oct 2020 10:18:25 -0700 (PDT)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with CONFIG_HAS_PCI enabled.
Date: Mon, 26 Oct 2020 17:17:51 +0000
Message-Id: <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1603731279.git.rahul.singh@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>

ARM platforms do not support ns16550 PCI. When CONFIG_HAS_PCI is
enabled for ARM, a compilation error is observed.

Fix the compilation error by introducing a new Kconfig option,
CONFIG_HAS_NS16550_PCI, to enable ns16550 PCI support on x86
platforms only.

No functional change.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/char/Kconfig   |  7 +++++++
 xen/drivers/char/ns16550.c | 32 ++++++++++++++++----------------
 2 files changed, 23 insertions(+), 16 deletions(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..8887e86afe 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -4,6 +4,13 @@ config HAS_NS16550
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
 
+config HAS_NS16550_PCI
+	bool "NS16550 UART PCI support" if X86
+	default y
+	depends on X86 && HAS_NS16550 && HAS_PCI
+	help
+	  This selects the 16550-series UART PCI support. For most systems, say Y.
+
 config HAS_CADENCE_UART
 	bool "Xilinx Cadence UART driver"
 	default y
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index d8b52eb813..bd1c2af956 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -16,7 +16,7 @@
 #include <xen/timer.h>
 #include <xen/serial.h>
 #include <xen/iocap.h>
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
 #include <xen/pci_ids.h>
@@ -54,7 +54,7 @@ enum serial_param_type {
     reg_shift,
     reg_width,
     stop_bits,
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     bridge_bdf,
     device,
     port_bdf,
@@ -83,7 +83,7 @@ static struct ns16550 {
     unsigned int timeout_ms;
     bool_t intr_works;
     bool_t dw_usr_bsy;
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     /* PCI card parameters. */
     bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
     bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
@@ -117,14 +117,14 @@ static const struct serial_param_var __initconst sp_vars[] = {
     {"reg-shift", reg_shift},
     {"reg-width", reg_width},
     {"stop-bits", stop_bits},
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     {"bridge", bridge_bdf},
     {"dev", device},
     {"port", port_bdf},
 #endif
 };
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 struct ns16550_config {
     u16 vendor_id;
     u16 dev_id;
@@ -620,7 +620,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
 
 static void pci_serial_early_init(struct ns16550 *uart)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
         return;
 
@@ -719,7 +719,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
 
 static void __init ns16550_init_irq(struct serial_port *port)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     struct ns16550 *uart = port->uart;
 
     if ( uart->msi )
@@ -761,7 +761,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
     uart->timeout_ms = max_t(
         unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( uart->bar || uart->ps_bdf_enable )
     {
         if ( uart->param && uart->param->mmio &&
@@ -841,7 +841,7 @@ static void ns16550_suspend(struct serial_port *port)
 
     stop_timer(&uart->timer);
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( uart->bar )
        uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
                                   uart->ps_bdf[2]), PCI_COMMAND);
@@ -850,7 +850,7 @@ static void ns16550_suspend(struct serial_port *port)
 
 static void _ns16550_resume(struct serial_port *port)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     struct ns16550 *uart = port->uart;
 
     if ( uart->bar )
@@ -1013,7 +1013,7 @@ static int __init check_existence(struct ns16550 *uart)
     return 1; /* Everything is MMIO */
 #endif
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     pci_serial_early_init(uart);
 #endif
 
@@ -1044,7 +1044,7 @@ static int __init check_existence(struct ns16550 *uart)
     return (status == 0x90);
 }
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
 static int __init
 pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
 {
@@ -1305,7 +1305,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
 
     if ( *conf == ',' && *++conf != ',' )
     {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         if ( strncmp(conf, "pci", 3) == 0 )
         {
             if ( pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com) )
@@ -1327,7 +1327,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
 
     if ( *conf == ',' && *++conf != ',' )
     {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         if ( strncmp(conf, "msi", 3) == 0 )
         {
             conf += 3;
@@ -1339,7 +1339,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
             uart->irq = simple_strtol(conf, &conf, 10);
     }
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
     if ( *conf == ',' && *++conf != ',' )
     {
         conf = parse_pci(conf, NULL, &uart->ps_bdf[0],
@@ -1419,7 +1419,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
             uart->reg_width = simple_strtoul(param_value, NULL, 0);
             break;
 
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_NS16550_PCI
         case bridge_bdf:
             if ( !parse_pci(param_value, NULL, &uart->ps_bdf[0],
                             &uart->ps_bdf[1], &uart->ps_bdf[2]) )
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:18:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:18:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12477.32497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68l-00037u-5r; Mon, 26 Oct 2020 17:18:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12477.32497; Mon, 26 Oct 2020 17:18:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68l-00037n-2D; Mon, 26 Oct 2020 17:18:39 +0000
Received: by outflank-mailman (input) for mailman id 12477;
 Mon, 26 Oct 2020 17:18:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Yhc=EB=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kX68k-00032c-Lt
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:18:38 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 0ae8421b-87d0-4fbf-a03d-17c905b0a8ef;
 Mon, 26 Oct 2020 17:18:34 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4B487139F;
 Mon, 26 Oct 2020 10:18:34 -0700 (PDT)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BCC1E3F719;
 Mon, 26 Oct 2020 10:18:32 -0700 (PDT)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Kevin Tian <kevin.tian@intel.com>
Subject: [PATCH v1 2/4] xen/pci: Introduce new CONFIG_HAS_PCI_ATS flag for PCI ATS functionality.
Date: Mon, 26 Oct 2020 17:17:52 +0000
Message-Id: <1bb971bed3f5a56b0691fbcfd0346ae721ba049f.1603731279.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1603731279.git.rahul.singh@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>

PCI ATS functionality is not enabled or tested on the ARM
architecture, but it is enabled for x86 and referenced in the common
passthrough/pci.c code.

Therefore, introduce a new flag to enable the ATS functionality for
x86 only, avoiding build issues on the ARM architecture.

No functional change.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/x86/Kconfig                     |  1 +
 xen/drivers/passthrough/ats.h            | 24 ++++++++++++++++++++++++
 xen/drivers/passthrough/vtd/x86/Makefile |  2 +-
 xen/drivers/passthrough/x86/Makefile     |  2 +-
 xen/drivers/pci/Kconfig                  |  3 +++
 5 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 24868aa6ad..31906e9c97 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -20,6 +20,7 @@ config X86
 	select HAS_NS16550
 	select HAS_PASSTHROUGH
 	select HAS_PCI
+	select HAS_PCI_ATS
 	select HAS_PDX
 	select HAS_SCHED_GRANULARITY
 	select HAS_UBSAN
diff --git a/xen/drivers/passthrough/ats.h b/xen/drivers/passthrough/ats.h
index 22ae209b37..a0af07b287 100644
--- a/xen/drivers/passthrough/ats.h
+++ b/xen/drivers/passthrough/ats.h
@@ -17,6 +17,8 @@
 
 #include <xen/pci_regs.h>
 
+#ifdef CONFIG_HAS_PCI_ATS
+
 #define ATS_REG_CAP    4
 #define ATS_REG_CTL    6
 #define ATS_QUEUE_DEPTH_MASK     0x1f
@@ -48,5 +50,27 @@ static inline int pci_ats_device(int seg, int bus, int devfn)
     return pci_find_ext_capability(seg, bus, devfn, PCI_EXT_CAP_ID_ATS);
 }
 
+#else
+
+static inline int enable_ats_device(struct pci_dev *pdev,
+                                    struct list_head *ats_list)
+{
+    return -ENODEV;
+}
+
+static inline void disable_ats_device(struct pci_dev *pdev) { }
+
+static inline int pci_ats_enabled(int seg, int bus, int devfn)
+{
+    return -ENODEV;
+}
+
+static inline int pci_ats_device(int seg, int bus, int devfn)
+{
+    return -ENODEV;
+}
+
+#endif /* CONFIG_HAS_PCI_ATS */
+
 #endif /* _ATS_H_ */
 
diff --git a/xen/drivers/passthrough/vtd/x86/Makefile b/xen/drivers/passthrough/vtd/x86/Makefile
index 4ef00a4c5b..60f79fe263 100644
--- a/xen/drivers/passthrough/vtd/x86/Makefile
+++ b/xen/drivers/passthrough/vtd/x86/Makefile
@@ -1,3 +1,3 @@
-obj-y += ats.o
+obj-$(CONFIG_HAS_PCI_ATS) += ats.o
 obj-$(CONFIG_HVM) += hvm.o
 obj-y += vtd.o
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index a70cf9460d..05e6f51f25 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1,2 +1,2 @@
-obj-y += ats.o
+obj-$(CONFIG_HAS_PCI_ATS) += ats.o
 obj-y += iommu.o
diff --git a/xen/drivers/pci/Kconfig b/xen/drivers/pci/Kconfig
index 7da03fa13b..1588d4a91e 100644
--- a/xen/drivers/pci/Kconfig
+++ b/xen/drivers/pci/Kconfig
@@ -1,3 +1,6 @@
 
 config HAS_PCI
 	bool
+
+config HAS_PCI_ATS
+	bool
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:18:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:18:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12479.32510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68q-0003CC-EB; Mon, 26 Oct 2020 17:18:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12479.32510; Mon, 26 Oct 2020 17:18:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68q-0003C2-AU; Mon, 26 Oct 2020 17:18:44 +0000
Received: by outflank-mailman (input) for mailman id 12479;
 Mon, 26 Oct 2020 17:18:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Yhc=EB=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kX68p-00032c-M8
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:18:43 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ced3ab88-a6e7-48e9-bba0-d7e449761653;
 Mon, 26 Oct 2020 17:18:38 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id AD89D139F;
 Mon, 26 Oct 2020 10:18:37 -0700 (PDT)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 60E553F719;
 Mon, 26 Oct 2020 10:18:36 -0700 (PDT)
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86 directory.
Date: Mon, 26 Oct 2020 17:17:53 +0000
Message-Id: <70029e8904170c4f19d9f521847050cd00c6e39d.1603731279.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1603731279.git.rahul.singh@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>

The passthrough/pci.c file is common to all architectures, but it
contains x86-specific code.

Move the x86-specific code to the x86 directory to avoid compilation
errors on other architectures.

No functional change.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/pci.c        | 75 +--------------------
 xen/drivers/passthrough/x86/Makefile |  1 +
 xen/drivers/passthrough/x86/iommu.c  |  7 ++
 xen/drivers/passthrough/x86/pci.c    | 97 ++++++++++++++++++++++++++++
 xen/include/xen/pci.h                |  2 +
 5 files changed, 108 insertions(+), 74 deletions(-)
 create mode 100644 xen/drivers/passthrough/x86/pci.c

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 2a3bce1462..c6fbb7172c 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -24,7 +24,6 @@
 #include <xen/irq.h>
 #include <xen/param.h>
 #include <xen/vm_event.h>
-#include <asm/hvm/irq.h>
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/event.h>
@@ -847,71 +846,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     return ret;
 }
 
-static int pci_clean_dpci_irq(struct domain *d,
-                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
-{
-    struct dev_intx_gsi_link *digl, *tmp;
-
-    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
-
-    if ( pt_irq_need_timer(pirq_dpci->flags) )
-        kill_timer(&pirq_dpci->timer);
-
-    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
-    {
-        list_del(&digl->list);
-        xfree(digl);
-    }
-
-    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
-
-    if ( !pt_pirq_softirq_active(pirq_dpci) )
-        return 0;
-
-    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
-
-    return -ERESTART;
-}
-
-static int pci_clean_dpci_irqs(struct domain *d)
-{
-    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
-
-    if ( !is_iommu_enabled(d) )
-        return 0;
-
-    if ( !is_hvm_domain(d) )
-        return 0;
-
-    spin_lock(&d->event_lock);
-    hvm_irq_dpci = domain_get_irq_dpci(d);
-    if ( hvm_irq_dpci != NULL )
-    {
-        int ret = 0;
-
-        if ( hvm_irq_dpci->pending_pirq_dpci )
-        {
-            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
-                 ret = -ERESTART;
-            else
-                 hvm_irq_dpci->pending_pirq_dpci = NULL;
-        }
-
-        if ( !ret )
-            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
-        if ( ret )
-        {
-            spin_unlock(&d->event_lock);
-            return ret;
-        }
-
-        hvm_domain_irq(d)->dpci = NULL;
-        free_hvm_irq_dpci(hvm_irq_dpci);
-    }
-    spin_unlock(&d->event_lock);
-    return 0;
-}
-
 /* Caller should hold the pcidevs_lock */
 static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
                            uint8_t devfn)
@@ -971,7 +905,7 @@ int pci_release_devices(struct domain *d)
     int ret;
 
     pcidevs_lock();
-    ret = pci_clean_dpci_irqs(d);
+    ret = arch_pci_release_devices(d);
     if ( ret )
     {
         pcidevs_unlock();
@@ -1375,13 +1309,6 @@ static int __init setup_dump_pcidevs(void)
 }
 __initcall(setup_dump_pcidevs);
 
-int iommu_update_ire_from_msi(
-    struct msi_desc *msi_desc, struct msi_msg *msg)
-{
-    return iommu_intremap
-           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
-}
-
 static int iommu_add_device(struct pci_dev *pdev)
 {
     const struct domain_iommu *hd;
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index 05e6f51f25..642f673ed2 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1,2 +1,3 @@
 obj-$(CONFIG_HAS_PCI_ATS) += ats.o
 obj-y += iommu.o
+obj-y += pci.o
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index f17b1820f4..875e67b53b 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -308,6 +308,13 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
     return pg;
 }
 
+int iommu_update_ire_from_msi(
+    struct msi_desc *msi_desc, struct msi_msg *msg)
+{
+    return iommu_intremap
+           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/passthrough/x86/pci.c b/xen/drivers/passthrough/x86/pci.c
new file mode 100644
index 0000000000..443712aa22
--- /dev/null
+++ b/xen/drivers/passthrough/x86/pci.c
@@ -0,0 +1,97 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/param.h>
+#include <xen/sched.h>
+#include <xen/pci.h>
+#include <xen/pci_regs.h>
+
+static int pci_clean_dpci_irq(struct domain *d,
+                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
+{
+    struct dev_intx_gsi_link *digl, *tmp;
+
+    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
+
+    if ( pt_irq_need_timer(pirq_dpci->flags) )
+        kill_timer(&pirq_dpci->timer);
+
+    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
+    {
+        list_del(&digl->list);
+        xfree(digl);
+    }
+
+    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
+
+    if ( !pt_pirq_softirq_active(pirq_dpci) )
+        return 0;
+
+    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
+
+    return -ERESTART;
+}
+
+static int pci_clean_dpci_irqs(struct domain *d)
+{
+    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
+
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    if ( !is_hvm_domain(d) )
+        return 0;
+
+    spin_lock(&d->event_lock);
+    hvm_irq_dpci = domain_get_irq_dpci(d);
+    if ( hvm_irq_dpci != NULL )
+    {
+        int ret = 0;
+
+        if ( hvm_irq_dpci->pending_pirq_dpci )
+        {
+            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
+                 ret = -ERESTART;
+            else
+                 hvm_irq_dpci->pending_pirq_dpci = NULL;
+        }
+
+        if ( !ret )
+            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
+        if ( ret )
+        {
+            spin_unlock(&d->event_lock);
+            return ret;
+        }
+
+        hvm_domain_irq(d)->dpci = NULL;
+        free_hvm_irq_dpci(hvm_irq_dpci);
+    }
+    spin_unlock(&d->event_lock);
+    return 0;
+}
+
+int arch_pci_release_devices(struct domain *d)
+{
+    return pci_clean_dpci_irqs(d);
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 2bc4aaf453..13ae7cc2a5 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -212,4 +212,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
 void msixtbl_pt_unregister(struct domain *, struct pirq *);
 void msixtbl_pt_cleanup(struct domain *d);
 
+int arch_pci_release_devices(struct domain *d);
+
 #endif /* __XEN_PCI_H__ */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:18:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:18:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12480.32522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68v-0003Gz-Ms; Mon, 26 Oct 2020 17:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12480.32522; Mon, 26 Oct 2020 17:18:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX68v-0003Gr-JE; Mon, 26 Oct 2020 17:18:49 +0000
Received: by outflank-mailman (input) for mailman id 12480;
 Mon, 26 Oct 2020 17:18:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0Yhc=EB=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kX68u-00032c-MT
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:18:48 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id bc3f38fb-0759-475f-96b7-c29d22c77b32;
 Mon, 26 Oct 2020 17:18:40 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0CA6213A1;
 Mon, 26 Oct 2020 10:18:40 -0700 (PDT)
Received: from scm-wfh-server-rahsin01.stack04.eu02.mi.arm.com (unknown
 [10.58.246.76])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 5D0DA3F719;
 Mon, 26 Oct 2020 10:18:39 -0700 (PDT)
X-Inumbo-ID: bc3f38fb-0759-475f-96b7-c29d22c77b32
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v1 4/4] xen/pci: solve compilation error when memory paging is not enabled.
Date: Mon, 26 Oct 2020 17:17:54 +0000
Message-Id: <dc85bb73ca4b6ab8b4a2370f2db7700445fbc5f8.1603731279.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1603731279.git.rahul.singh@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>

The d->vm_event_paging structure is defined under CONFIG_HAS_MEM_PAGING in
sched.h, but is referenced directly in passthrough/pci.c.

If CONFIG_HAS_MEM_PAGING is not enabled for an architecture, the compiler
throws an error.

No functional change.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/pci.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c6fbb7172c..3125c23e87 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1419,13 +1419,15 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     if ( !is_iommu_enabled(d) )
         return 0;
 
-    /* Prevent device assign if mem paging or mem sharing have been 
+#if defined(CONFIG_HAS_MEM_PAGING) || defined(CONFIG_MEM_SHARING)
+    /* Prevent device assign if mem paging or mem sharing have been
      * enabled for this domain */
     if ( d != dom_io &&
          unlikely(mem_sharing_enabled(d) ||
                   vm_event_check_ring(d->vm_event_paging) ||
                   p2m_get_hostp2m(d)->global_logdirty) )
         return -EXDEV;
+#endif
 
     /* device_assigned() should already have cleared the device for assignment */
     ASSERT(pcidevs_locked());
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:25:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:25:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12499.32546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Fi-0004Nj-P0; Mon, 26 Oct 2020 17:25:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12499.32546; Mon, 26 Oct 2020 17:25:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Fi-0004Nc-KQ; Mon, 26 Oct 2020 17:25:50 +0000
Received: by outflank-mailman (input) for mailman id 12499;
 Mon, 26 Oct 2020 17:25:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dsar=EB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kX6Fh-0004NA-MW
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:25:49 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59c0ae4d-2f47-4978-a159-0144a4da89a4;
 Mon, 26 Oct 2020 17:25:47 +0000 (UTC)
X-Inumbo-ID: 59c0ae4d-2f47-4978-a159-0144a4da89a4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603733147;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=pP9bPZd+ynQusPsRE1lR+uMStanSY8oC7KNbBWD1tII=;
  b=Uh4dyZYiCXoJg+b8OAn2ERcRImgIT943j+YsjVwht5EEnUGWA1DYN5Fq
   ORm4OGBqorTF+cD6gQicvosBbz3IeRwhKj8HrYqA+CCa2TCdPe5SKzGj8
   KGVEleTIC3wz2v/Wiv9v30sx2DT/DqNS4cfGRvF3ookQPRYzeWtaVR7c2
   M=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: RUsuTg5n0SgumsZ62WO1pyNzSxgWv6VSNF0gBpmjjULpYbEvVn0fhZLozwiYoFH6OzFQjnPU+q
 c8PW/1DYgS8JEo08Q80+Ihob7HfZv9aJ12MqIIACSav2MbERYXFyUHaMCZXVwowpFxQtqcLGMn
 yqKiT0eC/NVTl3V48LnvuzFZZbWSKa0QUG4YY5ifplRZpNtop2o+p/oxehzxuxyoUx6ww41xmJ
 0NZJbiBrPGfgyuP2EpMW9OZv3puioYDSuWRdH//cVaaKVEcGH3KwdQ38q9HrF33JbZZG1zgnZH
 tKE=
X-SBRS: None
X-MesageID: 30138949
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,420,1596513600"; 
   d="scan'208";a="30138949"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH 1/3] x86/ucode: Break out compare_revisions() from existing infrastructure
Date: Mon, 26 Oct 2020 17:25:17 +0000
Message-ID: <20201026172519.17881-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201026172519.17881-1-andrew.cooper3@citrix.com>
References: <20201026172519.17881-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Drop some unnecessarily verbose pr_debug()s on the AMD side.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 xen/arch/x86/cpu/microcode/amd.c   | 22 +++++++++++-----------
 xen/arch/x86/cpu/microcode/intel.c | 15 ++++++++++++---
 2 files changed, 23 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index e80f7cd3e4..7d2f57c4cb 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -168,6 +168,15 @@ static bool check_final_patch_levels(const struct cpu_signature *sig)
     return false;
 }
 
+static enum microcode_match_result compare_revisions(
+    uint32_t old_rev, uint32_t new_rev)
+{
+    if ( new_rev > old_rev )
+        return NEW_UCODE;
+
+    return OLD_UCODE;
+}
+
 static enum microcode_match_result microcode_fits(
     const struct microcode_patch *patch)
 {
@@ -178,16 +187,7 @@ static enum microcode_match_result microcode_fits(
          equiv.id  != patch->processor_rev_id )
         return MIS_UCODE;
 
-    if ( patch->patch_id <= sig->rev )
-    {
-        pr_debug("microcode: patch is already at required level or greater.\n");
-        return OLD_UCODE;
-    }
-
-    pr_debug("microcode: CPU%d found a matching microcode update with version %#x (current=%#x)\n",
-             cpu, patch->patch_id, sig->rev);
-
-    return NEW_UCODE;
+    return compare_revisions(sig->rev, patch->patch_id);
 }
 
 static enum microcode_match_result compare_header(
@@ -196,7 +196,7 @@ static enum microcode_match_result compare_header(
     if ( new->processor_rev_id != old->processor_rev_id )
         return MIS_UCODE;
 
-    return new->patch_id > old->patch_id ? NEW_UCODE : OLD_UCODE;
+    return compare_revisions(old->patch_id, new->patch_id);
 }
 
 static enum microcode_match_result compare_patch(
diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index 72c07fcd1d..e1ccb5d232 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -222,6 +222,15 @@ static int microcode_sanity_check(const struct microcode_patch *patch)
     return 0;
 }
 
+static enum microcode_match_result compare_revisions(
+    uint32_t old_rev, uint32_t new_rev)
+{
+    if ( new_rev > old_rev )
+        return NEW_UCODE;
+
+    return OLD_UCODE;
+}
+
 /* Check an update against the CPU signature and current update revision */
 static enum microcode_match_result microcode_update_match(
     const struct microcode_patch *mc)
@@ -245,7 +254,7 @@ static enum microcode_match_result microcode_update_match(
     return MIS_UCODE;
 
  found:
-    return mc->rev > cpu_sig->rev ? NEW_UCODE : OLD_UCODE;
+    return compare_revisions(cpu_sig->rev, mc->rev);
 }
 
 static enum microcode_match_result compare_patch(
@@ -258,7 +267,7 @@ static enum microcode_match_result compare_patch(
     ASSERT(microcode_update_match(old) != MIS_UCODE);
     ASSERT(microcode_update_match(new) != MIS_UCODE);
 
-    return new->rev > old->rev ? NEW_UCODE : OLD_UCODE;
+    return compare_revisions(old->rev, new->rev);
 }
 
 static int apply_microcode(const struct microcode_patch *patch)
@@ -332,7 +341,7 @@ static struct microcode_patch *cpu_request_microcode(const void *buf,
          * one with higher revision.
          */
         if ( (microcode_update_match(mc) != MIS_UCODE) &&
-             (!saved || (mc->rev > saved->rev)) )
+             (!saved || compare_revisions(saved->rev, mc->rev) == NEW_UCODE) )
             saved = mc;
 
         buf  += blob_size;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:25:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:25:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12498.32534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Ff-0004M9-FP; Mon, 26 Oct 2020 17:25:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12498.32534; Mon, 26 Oct 2020 17:25:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Ff-0004M2-CU; Mon, 26 Oct 2020 17:25:47 +0000
Received: by outflank-mailman (input) for mailman id 12498;
 Mon, 26 Oct 2020 17:25:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dsar=EB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kX6Fd-0004Lx-Pv
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:25:45 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 609cec39-727e-4727-a084-f8d167bc9837;
 Mon, 26 Oct 2020 17:25:44 +0000 (UTC)
X-Inumbo-ID: 609cec39-727e-4727-a084-f8d167bc9837
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603733144;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=BNGySlpj4rUM3cAeZCyEmyQZGXtupDnrUOOIgEyp9+g=;
  b=baC84Xktvre9C8SO77o78oFWABTCVT5eE5UgSQ0oN/afc4i3XZQd9Sxx
   nn95UP/9bAKGTxnPSDMO/+sTwtQvzuZGHdAO/0xOHO+YYeeeeyqiYEYug
   lhaFPvolhvG1SOm248MhJ0uYbOxHGztKJ5SwnoV7Girpl6Mcdr5d+harq
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 2ewJXry9Xx/diuVR5Sl+G9+bNuOdIRrkYAUVLYtOB+A98fcAiCVBn6iAUCyqsAlzFhUNM7UU6J
 cd+jMA2rXVVJm0GAwVBKgA7t1m49de5AkVHXcyZZweQ5CGSdHSIVCFBMhyzbqD1gxKbtRg9kCW
 VouadEYINwmcVIRFGNFvy3PDv2iFLeaeW3bYggXu7oRK8E0MWwt5M5VneK/6UJ//d5LsXy9BZ2
 8R3h5F3epuSkjpj2FkpW1GYDwLYP8jwqLpuVCPPr9VBK19NdHtQQVTfPP8pWaZjIoI0ki7wyQ6
 Suw=
X-SBRS: None
X-MesageID: 29805557
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,420,1596513600"; 
   d="scan'208";a="29805557"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH 0/3] x86/ucode: Fixes and improvements to ucode revision handling
Date: Mon, 26 Oct 2020 17:25:16 +0000
Message-ID: <20201026172519.17881-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Patch 2 fixes a bug with the Intel revision handling, which is causing
problems on IceLake SDPs.

Patch 3 adds ucode=allow-same to allow for sensible testing of the late
microcode loading path.

Andrew Cooper (3):
  x86/ucode: Break out compare_revisions() from existing infrastructure
  x86/ucode/intel: Fix handling of microcode revision
  x86/ucode: Introduce ucode=allow-same for testing purposes

 docs/misc/xen-command-line.pandoc    |  7 ++++++-
 xen/arch/x86/cpu/microcode/amd.c     | 25 ++++++++++++++-----------
 xen/arch/x86/cpu/microcode/core.c    |  4 ++++
 xen/arch/x86/cpu/microcode/intel.c   | 31 +++++++++++++++++++++++++++----
 xen/arch/x86/cpu/microcode/private.h |  2 ++
 5 files changed, 53 insertions(+), 16 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:25:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:25:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12500.32558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Fn-0004Qt-1d; Mon, 26 Oct 2020 17:25:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12500.32558; Mon, 26 Oct 2020 17:25:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Fm-0004Ql-TS; Mon, 26 Oct 2020 17:25:54 +0000
Received: by outflank-mailman (input) for mailman id 12500;
 Mon, 26 Oct 2020 17:25:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dsar=EB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kX6Fm-0004NA-8Z
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:25:54 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 392e3e4d-20cc-4b4b-9c39-092efdd340f3;
 Mon, 26 Oct 2020 17:25:48 +0000 (UTC)
X-Inumbo-ID: 392e3e4d-20cc-4b4b-9c39-092efdd340f3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603733148;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=N4NLiVkpafLkQts7MJNqLznQ4pqesNjOindufP348ms=;
  b=RnPdwy9AUT3oMlWbDGsL5uhgMgtRPT6DpYUGHaropY2CMf45K0Nw+hNE
   NTZYgQO2ErwzMvRtkzVW+8Q+jZ3cl0yr4u3NjzgwsiX9k3mo9BqNVtwOa
   lIRR9szV+INyR3QZxLT4XdelX+yFPyjFFMAPiItd3bfl0IEDVB9a856sq
   Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: J99NVkgohU6w6HUzF9IbgBNlOWf42KScqzdOM8z/aI5qBup5FpJUuPGBNhU1t3/xRr6hft6CKk
 b19A6S88I6fttt4FHelpS8ers3deLh7BYL144+Up3wDy410uT1yP1NyX8JbkaIJxOm2HVTdyw2
 /Kxtp5OjDt1qM3OccRqE2xWszu6ObMx9neDN141JPTbN7YiENKXCHUQD5ghg+wjK19eX0W3+hP
 DOfUvbLVEzwqDwzcjWj4Ec/GlsB4m9zOs8LbN+JlFo7yuhXbnYlP7Xdf5h7xsYsmMkj+rn1YDd
 rjc=
X-SBRS: None
X-MesageID: 30048907
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,420,1596513600"; 
   d="scan'208";a="30048907"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH 2/3] x86/ucode/intel: Fix handling of microcode revision
Date: Mon, 26 Oct 2020 17:25:18 +0000
Message-ID: <20201026172519.17881-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201026172519.17881-1-andrew.cooper3@citrix.com>
References: <20201026172519.17881-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

For Intel microcodes, the revision field is signed (as documented in the SDM)
and negative revisions are used for pre-production/test microcode (not
documented publicly anywhere I can spot).

Adjust the revision checking to match the algorithm presented here:

  https://software.intel.com/security-software-guidance/best-practices/microcode-update-guidance

This treats pre-production microcode as always applicable, while also giving
production microcode higher precedence than pre-production.  It is expected
that anyone using pre-production microcode knows what they are doing.

This is necessary to load production microcode on an SDP with pre-production
microcode embedded in firmware.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>

"signed" is somewhat of a problem.  The actual numbers only make sense as
sign-magnitude, and not as two's complement, which is why the rule is "any
debug microcode goes".

The actual upgrade/downgrade rules are quite complicated, but in general, any
change is permitted so long as the Security Version Number (embedded inside
the patch) doesn't go backwards.

---
 xen/arch/x86/cpu/microcode/intel.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index e1ccb5d232..5fa2821cdb 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -33,7 +33,7 @@
 
 struct microcode_patch {
     uint32_t hdrver;
-    uint32_t rev;
+    int32_t rev;
     uint16_t year;
     uint8_t  day;
     uint8_t  month;
@@ -222,12 +222,23 @@ static int microcode_sanity_check(const struct microcode_patch *patch)
     return 0;
 }
 
+/*
+ * Production microcode has a positive revision.  Pre-production microcode has
+ * a negative revision.
+ */
 static enum microcode_match_result compare_revisions(
-    uint32_t old_rev, uint32_t new_rev)
+    int32_t old_rev, int32_t new_rev)
 {
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
+    /*
+     * Treat pre-production as always applicable - anyone using pre-production
+     * microcode knows what they are doing, and can keep any resulting pieces.
+     */
+    if ( new_rev < 0 )
+        return NEW_UCODE;
+
     return OLD_UCODE;
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:25:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:25:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12501.32570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Fr-0004Vs-F3; Mon, 26 Oct 2020 17:25:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12501.32570; Mon, 26 Oct 2020 17:25:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Fr-0004Vh-BI; Mon, 26 Oct 2020 17:25:59 +0000
Received: by outflank-mailman (input) for mailman id 12501;
 Mon, 26 Oct 2020 17:25:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dsar=EB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kX6Fp-0004UI-R8
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:25:57 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba839629-3739-46e5-9838-5a6827041efd;
 Mon, 26 Oct 2020 17:25:56 +0000 (UTC)
X-Inumbo-ID: ba839629-3739-46e5-9838-5a6827041efd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603733156;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=nO7Qr7V6r7ZHR8K0sMdn1GJ/DwtEeYhxjW4WWT4T3NU=;
  b=heZv7/3TLNmlMc2Pveatv6cAyO7ECJrenlRTBs0UuXo00kNAnQ6HJebA
   8iGaitDKMBCehtrJ10xvcTaRDsxXpLtbJCvH8OMo3xPr9r0bB5/+sLJyd
   FpY/0+zD23gKyQV9dKRUFg1HVJTRn0ewRkOgHwTIDtK02NSgdkRlOx0m0
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: lGVp5LQRwa7XhBUO7HSZMpI7+PjUyiPEgfU6FBq5AXzG3dTwVyujYJUgr0FBZTd/dBKNFBeJ2b
 NSe4YHuqS6BvwKY4aoXBiBuSaSNQtccJ/EAEHMpg9OQdxS1XYF6Pa3/PIoS/cuF2IiV7ugkLLU
 jkdVev3TaAlU6sPZpfGVXXZ8SsFiq35j3srlovjqsbJfPOuDA2KaiyZEPuf6XdiHSbOVykd3SZ
 yePPIoxUAr8I6m+Mu2u/YfuLZ/Q3Gj5J8DVbxpLJfkWkhhFa4eSH/imZ32/GnyQMGvA11aUSmb
 JSg=
X-SBRS: None
X-MesageID: 30138966
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,420,1596513600"; 
   d="scan'208";a="30138966"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>, Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: [PATCH 3/3] x86/ucode: Introduce ucode=allow-same for testing purposes
Date: Mon, 26 Oct 2020 17:25:19 +0000
Message-ID: <20201026172519.17881-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201026172519.17881-1-andrew.cooper3@citrix.com>
References: <20201026172519.17881-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Many CPUs will actually reload microcode when offered the same version as
currently loaded.  This allows for easy testing of the late microcode loading
path.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Juergen Gross <jgross@suse.com>
CC: Igor Druzhinin <igor.druzhinin@citrix.com>

I was hoping to make this a runtime parameter, but I honestly can't figure out
how the new HYPFS-only infrastructure is supposed to work.
---
 docs/misc/xen-command-line.pandoc    | 7 ++++++-
 xen/arch/x86/cpu/microcode/amd.c     | 3 +++
 xen/arch/x86/cpu/microcode/core.c    | 4 ++++
 xen/arch/x86/cpu/microcode/intel.c   | 3 +++
 xen/arch/x86/cpu/microcode/private.h | 2 ++
 5 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 4ae9391fcd..97b1cc67a4 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2216,7 +2216,7 @@ logic applies:
    active by default.
 
 ### ucode
-> `= List of [ <integer> | scan=<bool>, nmi=<bool> ]`
+> `= List of [ <integer> | scan=<bool>, nmi=<bool>, allow-same=<bool> ]`
 
     Applicability: x86
     Default: `nmi`
@@ -2248,6 +2248,11 @@ precedence over `scan`.
 stop_machine context. In NMI handler, even NMIs are blocked, which is
 considered safer. The default value is `true`.
 
+`allow-same` alters the default acceptance policy for new microcode to permit
+trying to reload the same version.  Many CPUs will actually reload microcode
+of the same version, which allows for easy testing of the late microcode
+loading path.
+
 ### unrestricted_guest (Intel)
 > `= <boolean>`
 
diff --git a/xen/arch/x86/cpu/microcode/amd.c b/xen/arch/x86/cpu/microcode/amd.c
index 7d2f57c4cb..5255028af7 100644
--- a/xen/arch/x86/cpu/microcode/amd.c
+++ b/xen/arch/x86/cpu/microcode/amd.c
@@ -174,6 +174,9 @@ static enum microcode_match_result compare_revisions(
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
+    if ( opt_ucode_allow_same && new_rev == old_rev )
+        return NEW_UCODE;
+
     return OLD_UCODE;
 }
 
diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index 18ebc07b13..ac3ceb567c 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -95,6 +95,8 @@ static bool_t __initdata ucode_scan;
 /* By default, ucode loading is done in NMI handler */
 static bool ucode_in_nmi = true;
 
+bool __read_mostly opt_ucode_allow_same;
+
 /* Protected by microcode_mutex */
 static struct microcode_patch *microcode_cache;
 
@@ -121,6 +123,8 @@ static int __init parse_ucode(const char *s)
 
         if ( (val = parse_boolean("nmi", s, ss)) >= 0 )
             ucode_in_nmi = val;
+        else if ( (val = parse_boolean("allow-same", s, ss)) >= 0 )
+            opt_ucode_allow_same = val;
         else if ( !ucode_mod_forced ) /* Not forced by EFI */
         {
             if ( (val = parse_boolean("scan", s, ss)) >= 0 )
diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index 5fa2821cdb..f6d01490e0 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -232,6 +232,9 @@ static enum microcode_match_result compare_revisions(
     if ( new_rev > old_rev )
         return NEW_UCODE;
 
+    if ( opt_ucode_allow_same && new_rev == old_rev )
+        return NEW_UCODE;
+
     /*
      * Treat pre-production as always applicable - anyone using pre-production
      * microcode knows what they are doing, and can keep any resulting pieces.
diff --git a/xen/arch/x86/cpu/microcode/private.h b/xen/arch/x86/cpu/microcode/private.h
index 9a15cdc879..c085a10268 100644
--- a/xen/arch/x86/cpu/microcode/private.h
+++ b/xen/arch/x86/cpu/microcode/private.h
@@ -3,6 +3,8 @@
 
 #include <asm/microcode.h>
 
+extern bool opt_ucode_allow_same;
+
 enum microcode_match_result {
     OLD_UCODE, /* signature matched, but revision id is older or equal */
     NEW_UCODE, /* signature matched, but revision id is newer */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:45:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:45:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12516.32582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Xz-0006Td-3j; Mon, 26 Oct 2020 17:44:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12516.32582; Mon, 26 Oct 2020 17:44:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6Xz-0006TW-0R; Mon, 26 Oct 2020 17:44:43 +0000
Received: by outflank-mailman (input) for mailman id 12516;
 Mon, 26 Oct 2020 17:44:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3TsF=EB=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kX6Xx-0006TP-1w
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:44:41 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [51.163.158.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7a05408-0de9-44ab-8bdb-e68065950964;
 Mon, 26 Oct 2020 17:44:40 +0000 (UTC)
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2057.outbound.protection.outlook.com [104.47.8.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-31-KzpoajMePXqRQchpX_6GvQ-1; Mon, 26 Oct 2020 18:44:36 +0100
Received: from AM0PR04MB5826.eurprd04.prod.outlook.com (2603:10a6:208:134::22)
 by AM0PR04MB5010.eurprd04.prod.outlook.com (2603:10a6:208:c3::30)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Mon, 26 Oct
 2020 17:44:33 +0000
Received: from AM0PR04MB5826.eurprd04.prod.outlook.com
 ([fe80::db0:41c3:aa05:d082]) by AM0PR04MB5826.eurprd04.prod.outlook.com
 ([fe80::db0:41c3:aa05:d082%6]) with mapi id 15.20.3499.018; Mon, 26 Oct 2020
 17:44:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3TsF=EB=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
	id 1kX6Xx-0006TP-1w
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:44:41 +0000
X-Inumbo-ID: a7a05408-0de9-44ab-8bdb-e68065950964
Received: from de-smtp-delivery-102.mimecast.com (unknown [51.163.158.102])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a7a05408-0de9-44ab-8bdb-e68065950964;
	Mon, 26 Oct 2020 17:44:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1603734278;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=s+dcP0fDsH7fHeSWMU+ap3zr7bH622FyjHlpD2W3gSI=;
	b=ZzWkxp/t8yzNmtH9s2MNdyKLEyqD7PBE18AketkiIjnAAI7xbCXhxWdoYiopvTpuJMeR47
	N6oP2zfqYQb8KjTEhCUEY5n77eMFtj1aXT9Z/v5G9lcZ1PIZ/MqQkaOfW8Q3UWfu9JmigH
	gELm1BVee/wnJIz5kyLSY9IJR/jgTN4=
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 (mail-am5eur03lp2057.outbound.protection.outlook.com [104.47.8.57]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-31-KzpoajMePXqRQchpX_6GvQ-1; Mon, 26 Oct 2020 18:44:36 +0100
X-MC-Unique: KzpoajMePXqRQchpX_6GvQ-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A2aNUeYsGKkOy5fpABeZP48YPp5kjq061clD/DFK9BT7aTmDug2Xg5lPbB4aGBRiUpBPg8s4TpJxHUI3/0V2H+WFxnk40CShD+MEu/z0RIPb/7rwTtNzr0yj7qojluKp5wuX9xG9peXP5xEjT3PiIjtonzXN2Gd/B9jWv7dcXvuF+PCJr+r7C/fP9muc5WqBI1C6B0365fsY568zqX+oBLsgf+P4a2g/cHWCPk8yAhWaPRlYQByxiT4kBpoSlavuf/0RSPhZps3n+05NlScQRq3qpubehNab12eVt/tBK5kQX+uN3KfL3kkfncXtvYFeSqm+IIoMq/rFpfv3HBt+4Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=s+dcP0fDsH7fHeSWMU+ap3zr7bH622FyjHlpD2W3gSI=;
 b=EtlMy0e383W75vSzQIt3zDvFBEn1KYgaisj096OfMxKolWHAaCnTpqHd90eRLNG4yRU67Q2d/EBd8h4Ek5GxUehR6aZvll1l899IyzSMpqBNIl87exHID9QNDG5A/Nmiw+TqD0ZelQR1KCG0O/GXPPW/lrqHQc30oYj7t6Nmgnl040KtnqCMBKTOQSASTul1uOs/8RB8jKDoV0+FoCekw/jIbtwjr/+LmHq3hCzkqugr1Fk+7CAdsDYBCXTWP4rnZiBgHwK5Aejl2WEQxPWRtN5L6aqvNiapaJ+Iyj4jnRYlMn8lK2WgG/YUk7FMrfN5WzdfYJRzfqzSyvc3xYxtqQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Received: from AM0PR04MB5826.eurprd04.prod.outlook.com (2603:10a6:208:134::22)
 by AM0PR04MB5010.eurprd04.prod.outlook.com (2603:10a6:208:c3::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Mon, 26 Oct
 2020 17:44:33 +0000
Received: from AM0PR04MB5826.eurprd04.prod.outlook.com
 ([fe80::db0:41c3:aa05:d082]) by AM0PR04MB5826.eurprd04.prod.outlook.com
 ([fe80::db0:41c3:aa05:d082%6]) with mapi id 15.20.3499.018; Mon, 26 Oct 2020
 17:44:33 +0000
From: Dario Faggioli <dfaggioli@suse.com>
To: "dunlapg@umich.edu" <dunlapg@umich.edu>
CC: Juergen Gross <JGross@suse.com>, Jan Beulich <JBeulich@suse.com>,
	"george.dunlap@citrix.com" <george.dunlap@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [PATCH] xen: credit2: document that min_rqd is valid
 and ok to use
Thread-Topic: [Xen-devel] [PATCH] xen: credit2: document that min_rqd is valid
 and ok to use
Thread-Index: AQHWq4TS/h5FA3rgjEyszqb6NOGihKmqKEQA
Date: Mon, 26 Oct 2020 17:44:33 +0000
Message-ID: <815860112d49d09cec5d70dfe75b9a659ec061ca.camel@suse.com>
References: <158524252335.30595.3422322089286433323.stgit@Palanthas>
	 <CAFLBxZaPNsxoazbB=e1sN7A=gzvr2rpAj7qdA73TtcRpPqUkLw@mail.gmail.com>
In-Reply-To:
 <CAFLBxZaPNsxoazbB=e1sN7A=gzvr2rpAj7qdA73TtcRpPqUkLw@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
user-agent: Evolution 3.38.1 (by Flathub.org) 
authentication-results: umich.edu; dkim=none (message not signed)
 header.d=none;umich.edu; dmarc=none action=none header.from=suse.com;
x-originating-ip: [89.186.78.87]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f7e16f8d-f0ca-43a0-c8b1-08d879d6ccc5
x-ms-traffictypediagnostic: AM0PR04MB5010:
x-ld-processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs:
 <AM0PR04MB50105713137F6C77C182DC2DC5190@AM0PR04MB5010.eurprd04.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7219;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 dl0cnC0VBRvm20WlRuoHtEGW3olMRqkW5lQM5aPYC7H72cg4p5qWn31dUfo9hxlsRdzv5F/AGf02G0e2iA7EsweSJ5b/lO/CPUcObmiSMy8g3/rILAdy+bNZaaaoSonl85Bahb4TaU4vF0Dx9XT2CwrRX+FFcPp0PhVt4rthxtTi+VMdMPn5Nw2ZCfDkC7YthvUt6/3o30OpNF2JmdspqhAT3WMIgVIAFSPtgFxHApnnSiOlBQBtwvXqYawZS631suwnEe4b+UeKtcL8a5FZxHoKBJBGSckMBimHtqVLby/ZJS65D7elP3nxcy8Ux9rbrdNMWDE0DATnzPkOVg2E3kNiq0+JZVc6fh1q8DUT8PTOyIMixI9a8m7TFGf1lywIMh5sFyctkdQAcZSXIJWCzw==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR04MB5826.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(346002)(366004)(136003)(376002)(396003)(36756003)(26005)(6916009)(76116006)(86362001)(4001150100001)(2906002)(54906003)(186003)(478600001)(8936002)(5660300002)(966005)(2616005)(66616009)(66476007)(64756008)(83380400001)(99936003)(6486002)(66446008)(4326008)(66556008)(8676002)(71200400001)(6512007)(66946007)(53546011)(316002)(6506007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 s/injg2lhCURwqSxQz2uQOUl4XpctC9zuuq1vGLhG/sCNMvMwwD9F2JkLyRi0pFkxKgghb1nkyvu7L9RPpy4L7UvjFNzhnrjTrwQi5XuEaU5y+QqDmYtb3oCmepKbTFpe4Ln2Sdn0cuGrX0ewlCMbpnpxFcPIeIoTYbrHEF/foI+JzVKyB/6103aI+mqISS4hRfHHuufL2m9Hm84MeoqWQTSzklUsxP9MV3aQsKYbSvqOwk4K1DCKfhG7BFNZQHzSPLmqa9CwDb+eRVPYlZnim1ykFun1dLSKBJN97xGF+bdRAxQurAyJLy6JAz3e5njaq+pV7Jr6gfLPrPcC4Y/ubaIC1hqjitI9s8lAo4wzcdmgLiedpqx50b3kjAL/GVepZ5/yIVXha+vsRJUImC/v4/NSbZ5YRqu7wy/BNnYqB7Ahy5+KP0oMdOK9PTPbASI9N48K9pzERsZ+OgaC1aJqTkIX2YTXHpNTAEn+VlOj6dW9xQjCQ0Z1xYkaaRsLG0Cv4tmWO+UGod0wejDS71jACeroXyDiHOOprjArU/w9+oQJxfj+ltJofoTy67XajuolTQSdXclP5ckPWf0F4sZToOq9t6pPIyfMiv9LwQ6B30qfOBzpst8JW9hPq674+j60mxhn/A6Zg8pfiRIHl2pcg==
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-VbeZjh9/9BT2JlfMdhj/"
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR04MB5826.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f7e16f8d-f0ca-43a0-c8b1-08d879d6ccc5
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 Oct 2020 17:44:33.7731
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 0ufGS4swowTsoMteklvZWRD6Bb/SPJWYhV/4YcHvDVbC7seLr7nrtMqi6tzKYKhbqRZBDNEFAuDU65rRV1wIzA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB5010

--=-VbeZjh9/9BT2JlfMdhj/
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-10-26 at 10:43 +0000, George Dunlap wrote:
> On Thu, Mar 26, 2020 at 5:09 PM Dario Faggioli <dfaggioli@suse.com>
> wrote:
> > diff --git a/xen/common/sched/credit2.c
> > b/xen/common/sched/credit2.c
> > index c7241944a8..9da51e624b 100644
> > --- a/xen/common/sched/credit2.c
> > +++ b/xen/common/sched/credit2.c
> > @@ -2387,6 +2387,13 @@ csched2_res_pick(const struct scheduler
> > *ops, const struct sched_unit *unit)
> >          goto out_up;
> >      }
> >
> > +    /*
> > +     * If we're here, min_rqd must be valid. In fact, either we
> > picked a
> > +     * runqueue in the "list_for_each" (as min_avgload is
> > initialized to
> > +     * MAX_LOAD) or we just did that (in the "else" branch) above.
> > +     */
>
>
> Sorry it's taken so long to get back to you on this.
>
> The problem with this is that there are actually *three* alternate
> clauses above:
>
> 1. (has_soft && min_s_rqd)
> 2. min_rqd
> 3. <none of the above>
>
Yes, indeed.

However, one of the three is "if ( min_rqd )", and I think it is clear
that in that case (which would be 2 in the list above) min_rqd is
valid.

Therefore, this part of the comment "In fact, either we picked a
runqueue in the "list_for_each" (as min_avgload is initialized to
MAX_LOAD)", was referring to 1.

And this other part "or we just did that (in the "else" branch) above",
was referring to 3.

> It's obvious that if we hit #2 or #3, that min_rqd will be set.  But
> it's not immediately obvious why the condition in #1 guarantees that
> min_rqd will be set.
>
That's what I tried to explain with this: "we picked a runqueue in the
"list_for_each" (as min_avgload is initialized to MAX_LOAD)"

> Is it because if we get to the point in the above loop where
> min_s_rqd is set, then min_rqd will always be set if it hasn't been
> set already?  Or to put it a different way -- the only way for
> min_rqd *not* to be set is if it always bailed before min_s_rqd was
> set?
>
The point is really that the "list_for_each" loop scans all the
runqueues. If we do at least one step of the loop, min_rqd is ok,
because min_avgload is initialized to MAX_LOAD, and hence we have done
at least one assignment of min_rqd = rqd (in the body of the very last
if of the loop itself).

min_s_rqd may or may not have been set to point to any runqueue. But if
it is valid, it means we have done at least one step of the loop, and
hence min_rqd is valid too.
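The invariant can be sketched with a tiny stand-alone model (hypothetical names and structure, not the actual credit2 code, which lives in csched2_res_pick() in xen/common/sched/credit2.c):

```c
#include <assert.h>
#include <float.h>
#include <stddef.h>

/*
 * Simplified model of the runqueue scan.  The point being argued:
 * because min_avgload starts at MAX_LOAD, the very first iteration of
 * the loop satisfies "avgload < min_avgload" (for any finite load) and
 * assigns min_rqd.  So min_rqd can only be NULL if the loop body never
 * ran at all.
 */
#define MAX_LOAD DBL_MAX

struct rqd {
    double avgload;
};

static struct rqd *pick_min_rqd(struct rqd *rqds, int nr)
{
    struct rqd *min_rqd = NULL;
    double min_avgload = MAX_LOAD;

    for ( int i = 0; i < nr; i++ )
    {
        /* First iteration always takes this branch for finite loads. */
        if ( rqds[i].avgload < min_avgload )
        {
            min_avgload = rqds[i].avgload;
            min_rqd = &rqds[i];
        }
    }

    return min_rqd; /* NULL only if nr == 0 */
}
```

The same reasoning carries over to min_s_rqd: it can only have been set from inside the loop, so a valid min_s_rqd implies at least one iteration ran, which in turn implies min_rqd was assigned.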

Does that make sense? :-)

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-VbeZjh9/9BT2JlfMdhj/
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+XBMoACgkQFkJ4iaW4
c+6nYw//ZL0XdoWgUouXWjsi3xKYACHJSyvf+GOe1D7+axLgvv5iOgNH6mySas3e
/q1lnQONe+kSl8kSM/yBpCcJRR2PwhPaX+/AFscA2CIc2zAXCnZjVwzV8AGGlWu9
vi5uwFkMuD7bVgZviU2p/Qd83sAzrs/Ej4PGuYo1caBJOPv2/UdreA7Nyk9ydlad
XXoAWSB/VZvcHfZKHLV8zwlmw6vC3lGuen07Kl1ehdsVG2p5fT4RZlOlm+l9B7rA
Wl600iPEAF79yzHjbQYjm/0AgYKaArYylDCFa2HhklJ+HreIizUfvo2mZ2DPUfBo
cjNaqYyNoUr8mC2wsdYrdCETGH3+HpF75qdC9E4kRw+aPPE2tVJxb0AcoXkKs8Fa
0IK7wm9bnfwvdP1cwQEKn15IrE7wWA6V+6WujRndG5YmYZxFk4cDqzK2KcXbCH7Y
I3rVlLDBle7q8wW6mjj8Vyghz88Vsx0k6pJl2CjH1JCcDW+YcTJWuPg5GVtA5kM7
OMyBJf+q4IJxHark6P8mL5cmfs+WuWiVGEmVypjQaFdGu+I3czX1Kdr+Ja/+l5je
YBW/Uboti63832O1mawDR8ZPaieImBZoU7tu635UKirNJH0jQ5Rc06+nBYVKQJol
Yd9otYdp7IA7NzHAC/hmdrcjF0KXoUsy3Yy9L3SYX+8N120DLKI=
=CZ09
-----END PGP SIGNATURE-----

--=-VbeZjh9/9BT2JlfMdhj/--



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 17:54:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 17:54:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12523.32597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6hR-0007R0-39; Mon, 26 Oct 2020 17:54:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12523.32597; Mon, 26 Oct 2020 17:54:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6hR-0007Qt-0E; Mon, 26 Oct 2020 17:54:29 +0000
Received: by outflank-mailman (input) for mailman id 12523;
 Mon, 26 Oct 2020 17:54:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3TsF=EB=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kX6hQ-0007Qo-4c
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:54:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a15c5f22-ea17-4a82-a962-564f7853fae2;
 Mon, 26 Oct 2020 17:54:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0B1A3AF86;
 Mon, 26 Oct 2020 17:54:25 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=3TsF=EB=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
	id 1kX6hQ-0007Qo-4c
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 17:54:28 +0000
X-Inumbo-ID: a15c5f22-ea17-4a82-a962-564f7853fae2
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a15c5f22-ea17-4a82-a962-564f7853fae2;
	Mon, 26 Oct 2020 17:54:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603734865;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Si4DlmvMHXJaQ0sr20k69+hyq1qX3Q6SZo+emlwosWc=;
	b=m+0Nn43aSUIih4OKaQSpUcT9hzEWZgmfJv2EpNYUkBnyURwL9KatFDHF5XwKkrj6VIEbjV
	6pHxaCRUt+WNm3kuizq6W62nooXc6T6siHOH//IRS56z8YhIirYYxAt7jAtJu+3BCcby8U
	RMuHZ3s2ZGw2Pd33pXLNcVQPOMEdkRY=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0B1A3AF86;
	Mon, 26 Oct 2020 17:54:25 +0000 (UTC)
Message-ID: <6da1d0b4b573997ff24a3b5597e764d0dba7597d.camel@suse.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?Fr=E9d=E9ric?= Pierret <frederic.pierret@qubes-os.org>, 
 Andrew Cooper <andrew.cooper3@citrix.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Cc: Marek =?ISO-8859-1?Q?Marczykowski-G=F3recki?=
	 <marmarek@invisiblethingslab.com>
Date: Mon, 26 Oct 2020 18:54:23 +0100
In-Reply-To: <f6d179ac-eccc-99e1-ee0f-ea0d7f5ed335@qubes-os.org>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
	 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
	 <f6d179ac-eccc-99e1-ee0f-ea0d7f5ed335@qubes-os.org>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-P2cS9z0+kmEVv02Wx7Du"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-P2cS9z0+kmEVv02Wx7Du
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-10-26 at 17:11 +0100, Frédéric Pierret wrote:
> On 10/26/20 2:54 PM, Andrew Cooper wrote:
> > > If anyone would have any idea of what's going on, that would be
> > > very
> > > appreciated. Thank you.
> >
> > Does booting Xen with `sched=credit` make a difference?
> >
> > ~Andrew
>
> Thank you Andrew. Since your mail I'm currently testing this on
> production and it's clearly more stable than this morning. I will not
> say yet it's solved, because yesterday I had a few hours of
> stability too. But clearly, it's encouraging, because this morning it
> was just hell every 15/30 minutes.
>
Ok, yes, let us know if the credit scheduler does not suffer from
the issue.

I'm curious about another thing, though. You mentioned, in your
previous email (and in the subject :-)), that this is a 4.13 -> 4.14
issue for you?

Does that mean that the problem was not there on 4.13?

I'm asking because Credit2 was already the default scheduler in 4.13.
So, unless you were configuring things differently, you were already
using it there.

If that is the case, it would suggest that something that changed
between 4.13 and 4.14 is the cause.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-P2cS9z0+kmEVv02Wx7Du
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+XDU8ACgkQFkJ4iaW4
c+6VZhAAy798V6/hEZDTa0kmdU2imGrd6bqBQwJ0T3fm/kcL09xYpRS6u9+GRwOb
hhAnM+c3J/OrgpGiCFx3nHGdyazKCX7vaNWjAsfQB6uGgnnGHpJTHj6mVZ4MEgda
Auj87jcKQ61UJM2ycMeSz3Z2Ynt1p09cRoMTMoX5b96cuuZMT3PFiXv7LMF0BRBb
AC2WexSSV1TRTmAQenSVO1ftEmeMv4Z1ZjVPBPgllT+Qxhg5Yq7i+WqPS4MmFRHA
xLbNDqVOG4FhD4zJw6LuGy0+oZ1vyH3bDNQRzu+uO7ue+eryQcB26mfWG/u+Oro7
WYWoqEyxrffAPaLYt3wdHqG1NOq1e6uMVmbLaYygZRPB54HkNxpFscaPh0JkdcVL
zxJaNh5s2U5e41hCfT6hABeQt8EIitE+oRpifgLywzouo7TPnvnp2bb3j8GdC5Lw
EdsfAXf7jfNU/TuCpmWPB+2qWp9i1DZMVLT2XtrmGE6qhZh5V1Fky+tVuUohgZnK
x4TDL3yPHT6u4/y82gyoUY89raJxiYVyDeNCKH1RE8/3CgG4ggRz7UeuXor1QOYj
tfNSPWz9nAkxUjKPXPb2Bmo+/lwmFpLH4DX+mpa+vNXbBVLHTJpe4WBdLinmo2k3
LgkQaDjjo94DQyVAY1ZyYMRKSTn0mZq9TTL7yvhUIrrJsnm/wn4=
=54mQ
-----END PGP SIGNATURE-----

--=-P2cS9z0+kmEVv02Wx7Du--



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 18:07:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 18:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12532.32613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6ts-0008VD-Ab; Mon, 26 Oct 2020 18:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12532.32613; Mon, 26 Oct 2020 18:07:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX6ts-0008V6-7d; Mon, 26 Oct 2020 18:07:20 +0000
Received: by outflank-mailman (input) for mailman id 12532;
 Mon, 26 Oct 2020 18:07:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kX6tq-0008V1-Ra
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 18:07:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0add8ff-ced1-4b4e-9663-ccd7a08b583b;
 Mon, 26 Oct 2020 18:07:16 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX6tn-0002Qf-T2; Mon, 26 Oct 2020 18:07:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX6tn-00077N-Iy; Mon, 26 Oct 2020 18:07:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kX6tn-0003uU-IR; Mon, 26 Oct 2020 18:07:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kX6tq-0008V1-Ra
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 18:07:18 +0000
X-Inumbo-ID: f0add8ff-ced1-4b4e-9663-ccd7a08b583b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f0add8ff-ced1-4b4e-9663-ccd7a08b583b;
	Mon, 26 Oct 2020 18:07:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xe/DaTlVPkFSIAOLbmyEy8TPNvvnKcy8GjT16XqsnF4=; b=WGjA/fz+FpMm4XwZk7ApVxOMCs
	Amd/+eBPgdfZe8UE/7T65aCrmdNq+0uGjjLy0rN5yFxjiaUyioDYEPRgI1uC3VhJQU+YzIa2K2014
	t1hfAlE3Ebc5S94JHDZriu49mevHdYinmEI8GlvoLloZ2jmGkhEzzKoSuLsZp6TS7Adc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX6tn-0002Qf-T2; Mon, 26 Oct 2020 18:07:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX6tn-00077N-Iy; Mon, 26 Oct 2020 18:07:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX6tn-0003uU-IR; Mon, 26 Oct 2020 18:07:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156238-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156238: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
    linux-linus:test-arm64-arm64-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-seattle:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=3650b228f83adda7e5ee532e2b90429c03f7b9ec
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 18:07:15 +0000

flight 156238 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156238/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm 10 host-ping-check-xen fail in 156225 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl           8 xen-boot         fail in 156225 pass in 156238
 test-arm64-arm64-xl-credit1   8 xen-boot         fail in 156225 pass in 156238
 test-amd64-amd64-xl-rtds   18 guest-localmigrate fail in 156225 pass in 156238
 test-arm64-arm64-examine      8 reboot           fail in 156225 pass in 156238
 test-amd64-amd64-examine    4 memdisk-try-append fail in 156225 pass in 156238
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10 fail in 156225 pass in 156238
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156225

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-seattle  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 156225 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3650b228f83adda7e5ee532e2b90429c03f7b9ec
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   86 days
Failing since        152366  2020-08-01 20:49:34 Z   85 days  146 attempts
Testing same since   156225  2020-10-25 23:39:51 Z    0 days    2 attempts

------------------------------------------------------------
3376 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 640968 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 18:27:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 18:27:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12538.32629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7Co-0001rW-7L; Mon, 26 Oct 2020 18:26:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12538.32629; Mon, 26 Oct 2020 18:26:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7Co-0001rP-29; Mon, 26 Oct 2020 18:26:54 +0000
Received: by outflank-mailman (input) for mailman id 12538;
 Mon, 26 Oct 2020 18:26:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kX7Cl-0001rK-U0
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 18:26:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf3951eb-d0a1-4247-8cfc-11f6b2977988;
 Mon, 26 Oct 2020 18:26:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX7Cj-0002pa-Ou; Mon, 26 Oct 2020 18:26:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX7Cj-0007u1-F5; Mon, 26 Oct 2020 18:26:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kX7Cj-0001Yk-EY; Mon, 26 Oct 2020 18:26:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kX7Cl-0001rK-U0
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 18:26:51 +0000
X-Inumbo-ID: cf3951eb-d0a1-4247-8cfc-11f6b2977988
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id cf3951eb-d0a1-4247-8cfc-11f6b2977988;
	Mon, 26 Oct 2020 18:26:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1gE7/dhsBVfChUkPY21HhW3wphWny/eSKOmSIQeQi+Y=; b=6FPU+ItoK6J4fDjaDW4G7IwKJ1
	TCYTpUJ/HA2dkjJMukyUeZycsHWKImm24d6Xl8ozr/Ny1633gn1vEiVUNH/rOGwVKYleoWpcy8YHt
	nZg4mNCZIRMy1jiZmFDqLgBK/n5An1OhKxgbYiGVhT0rzT9BxFE76ctTeZn+F8SL5oek=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX7Cj-0002pa-Ou; Mon, 26 Oct 2020 18:26:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX7Cj-0007u1-F5; Mon, 26 Oct 2020 18:26:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX7Cj-0001Yk-EY; Mon, 26 Oct 2020 18:26:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156245-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156245: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=964781c6f162893677c50a779b7d562a299727ba
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 18:26:49 +0000

flight 156245 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156245/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  964781c6f162893677c50a779b7d562a299727ba
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156117  2020-10-23 09:01:23 Z    3 days
Failing since        156120  2020-10-23 14:01:24 Z    3 days   38 attempts
Testing same since   156245  2020-10-26 16:01:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6ca70821b5..964781c6f1  964781c6f162893677c50a779b7d562a299727ba -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 18:42:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 18:42:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12546.32643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7Rh-0003ZG-Jy; Mon, 26 Oct 2020 18:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12546.32643; Mon, 26 Oct 2020 18:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7Rh-0003Z9-Gy; Mon, 26 Oct 2020 18:42:17 +0000
Received: by outflank-mailman (input) for mailman id 12546;
 Mon, 26 Oct 2020 18:42:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kX7Rg-0003Z4-Fl
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 18:42:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6fe302f-521a-4fa2-ade5-d6b8abe2d1af;
 Mon, 26 Oct 2020 18:42:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX7Rb-00039M-8Z; Mon, 26 Oct 2020 18:42:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kX7Ra-0000E3-OC; Mon, 26 Oct 2020 18:42:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kX7Ra-0001Nc-Nd; Mon, 26 Oct 2020 18:42:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kX7Rg-0003Z4-Fl
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 18:42:16 +0000
X-Inumbo-ID: f6fe302f-521a-4fa2-ade5-d6b8abe2d1af
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f6fe302f-521a-4fa2-ade5-d6b8abe2d1af;
	Mon, 26 Oct 2020 18:42:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=C1rdJF7ahTpgKL2Ee0bSZ0sQQP5D5cbiCDWaQv5/D4M=; b=goloPANBbXONGqHzHIyPnJ69Aq
	Ddd8QlpP4yN9/87sPqDn0FkfwzVo5F/XDYd10Jb09izxx6NQO1vIz2nxGW/nDvbefuxPcorFbbl7K
	0mKkuNsmeFuu+fTV9MRsokdtwN2rIx8MYC4QtFXjGFZQNG+vMPWWpo38hkJFv8q1LiSQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX7Rb-00039M-8Z; Mon, 26 Oct 2020 18:42:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX7Ra-0000E3-OC; Mon, 26 Oct 2020 18:42:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kX7Ra-0001Nc-Nd; Mon, 26 Oct 2020 18:42:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156246-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156246: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=a46e72710566eea0f90f9c673a0f02da0064acce
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 18:42:10 +0000

flight 156246 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156246/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                a46e72710566eea0f90f9c673a0f02da0064acce
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   67 days
Failing since        152659  2020-08-21 14:07:39 Z   66 days  154 attempts
Testing same since   156246  2020-10-26 16:37:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 51459 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 18:44:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 18:44:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12551.32659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7Ts-0003kX-6L; Mon, 26 Oct 2020 18:44:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12551.32659; Mon, 26 Oct 2020 18:44:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7Ts-0003kQ-3B; Mon, 26 Oct 2020 18:44:32 +0000
Received: by outflank-mailman (input) for mailman id 12551;
 Mon, 26 Oct 2020 18:44:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6Eey=EB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kX7Tq-0003kL-9z
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 18:44:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24418615-9ad3-465d-8d85-6cf4678a606e;
 Mon, 26 Oct 2020 18:44:29 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kX7To-0003D8-QX; Mon, 26 Oct 2020 18:44:28 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kX7To-0000gJ-K4; Mon, 26 Oct 2020 18:44:28 +0000
X-Inumbo-ID: 24418615-9ad3-465d-8d85-6cf4678a606e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=XBYOMVI6KLwqG7Lz5TsppMwW/9efEZo7+jiuHBf6LRM=; b=K8sVIpL3kW1x2SriEEkM6p/WM8
	yp6olEBGKiwORB3prOpoy5hn8bB3voVQvadkUoDY2bcbVKysEkEy/Q2ftQgw8LCh69QNRrbpwZtRV
	0mK8WvIb7xemum+nmggyTZ5LRXF5/tMsDoOXoGZjYDQeHXnNg+znCtefA4+vKfZh/NyM=;
Subject: Re: Xen on RP4
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, roman@zededa.com,
 xen-devel@lists.xenproject.org
References: <20201016003024.GA13290@mattapan.m5p.com>
 <23885c28-dee5-4e9a-dc43-6ccf19a94df6@xen.org>
 <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com>
 <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
 <20201026160316.GA20589@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
Date: Mon, 26 Oct 2020 18:44:27 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201026160316.GA20589@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

On 26/10/2020 16:03, Elliott Mitchell wrote:
> On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
>> On 24/10/2020 06:35, Elliott Mitchell wrote:
>>> ACPI has a distinct
>>> means of specifying a limited DMA-width; the above fails, because it
>>> assumes a *device-tree*.
>>
>> Do you know if it would be possible to infer from the ACPI static table
>> the DMA-width?
> 
> Yes, and it is.  Due to not knowing much about ACPI tables I don't know
> what the C code would look like though (problem is which documentation
> should I be looking at first?).

What you provided below is an excerpt of the DSDT. AFAIK, DSDT content 
is written in AML. So far, the shortest implementation of an AML parser 
I have seen is around 5000 lines (see [1]). It might be possible to 
strip some of the code, although I think this will still probably be 
too big for a single workaround.

What I meant by "static table" is a table that looks like a structure 
and can be parsed in a few lines. If we can't find one containing the 
DMA window, then the next best solution is to find a way to identify 
the platform.

I don't know enough about ACPI to know whether this solution is 
possible. A good starting point would probably be the ACPI spec [2].

> 
> Handy bit of information is in the RP4 Tianocore table source:
> https://github.com/tianocore/edk2-platforms/blob/d492639638eee331ac3389e6cf53ea266c3c84b3/Platform/RaspberryPi/AcpiTables/Dsdt.asl
> 
>        Name (_DMA, ResourceTemplate() {
>          //
>          // Only the first GB is available.
>          // Bus 0xC0000000 -> CPU 0x00000000.
>          //
>          QWordMemory (ResourceConsumer,
>            ,
>            MinFixed,
>            MaxFixed,
>            NonCacheable,
>            ReadWrite,
>            0x0,
>            0x00000000C0000000, // MIN
>            0x00000000FFFFFFFF, // MAX
>            0xFFFFFFFF40000000, // TRA
>            0x0000000040000000, // LEN
>            ,
>            ,
>            )
>        })
> 
> There should be some corresponding code in the Linux 5.9 kernels.  From
> the look of that, it might even be possible to specify a memory range
> which didn't start at address 0.
> 
> 

Cheers,

[1] https://github.com/openbsd/src/blob/master/sys/dev/acpi/dsdt.c
[2] https://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 19:05:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 19:05:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12556.32671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7ny-0005Ym-TD; Mon, 26 Oct 2020 19:05:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12556.32671; Mon, 26 Oct 2020 19:05:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7ny-0005Yf-PB; Mon, 26 Oct 2020 19:05:18 +0000
Received: by outflank-mailman (input) for mailman id 12556;
 Mon, 26 Oct 2020 19:05:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6Eey=EB=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kX7nx-0005Ya-H3
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 19:05:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b9b8d5d-b9bb-478a-a681-3493e922afd5;
 Mon, 26 Oct 2020 19:05:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kX7nu-0003fx-RA; Mon, 26 Oct 2020 19:05:14 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kX7nu-00024x-EH; Mon, 26 Oct 2020 19:05:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=6Eey=EB=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kX7nx-0005Ya-H3
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 19:05:17 +0000
X-Inumbo-ID: 9b9b8d5d-b9bb-478a-a681-3493e922afd5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9b9b8d5d-b9bb-478a-a681-3493e922afd5;
	Mon, 26 Oct 2020 19:05:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=MD5w1Mwew/Mf6nmoUUblqsberfJCG0aBF4mUDoPN/c8=; b=eXwfShyITaz7LMd8bJTWohVoyQ
	CQmR+9V1rvqlhNrbTvxKsph7NQdX5jiLrTAzvkX4YIP6WvSa9FpEkf/x6KGztKSCup+nmiBCxyMKx
	mi4sy4pl9Eq7dZkdRK4VJhtitTjsBnlA6wRLz/bAmTLyb5rp+7S/8YvxvheanrZA2nuA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kX7nu-0003fx-RA; Mon, 26 Oct 2020 19:05:14 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239] helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kX7nu-00024x-EH; Mon, 26 Oct 2020 19:05:14 +0000
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Ash Wilding <Ash.Wilding@arm.com>, Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich
 <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <BF2E5EF7-575B-4A8F-BC00-3F2B73754886@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9cf9f8d3-b699-de3c-781f-f7ad1b498899@xen.org>
Date: Mon, 26 Oct 2020 19:05:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <BF2E5EF7-575B-4A8F-BC00-3F2B73754886@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 26/10/2020 12:10, Ash Wilding wrote:
> Hi,

Hi Ash,

>> 1. atomic_set_release
>> 2. atomic_fetch_andnot_relaxed
>> 3. atomic_cond_read_relaxed
>> 4. atomic_long_cond_read_relaxed
>> 5. atomic_long_xor
>> 6. atomic_set_release
>> 7. atomic_cmpxchg_relaxed - maybe we can use the atomic_cmpxchg that
>>     is already implemented in Xen; need to check.
>> 8. atomic_dec_return_release
>> 9. atomic_fetch_inc_relaxed
> 
> 
> If we're going to pull in Linux's implementations of the above atomics
> helpers for SMMUv3, and given the majority of SMMUv3 systems are v8.1+
> with LSE, perhaps this would be a good time to drop the current
> atomic.h in Xen completely and pull in both Linux's LL/SC and LSE
> helpers,

When I originally replied to the thread, I thought about suggesting 
importing the LSE helpers, but I felt it was too much to ask in order 
to merge the SMMUv3 code.

However, I would love to have support for LSE in Xen, as this would 
help close a security issue with LL/SC that is not yet fully addressed 
(see XSA-295 [2]).

Would Arm be willing to add support for LSE before merging the SMMUv3 
driver?

As an alternative, it might also be possible to provide a "dumb" 
implementation of all the helpers, even if they are stricter than 
necessary in their memory-ordering requirements.

> then use a new Kconfig to toggle between them?

I would prefer to follow the same approach as Linux and allow Xen to 
select at boot time which implementation to use. This would enable 
distros to provide a single binary that boots on all Armv8 platforms 
and still allow Xen to select the best set of instructions.

Xen already provides a framework to switch between two sets of 
instructions at boot. It was borrowed from Linux, so I don't expect a 
big hurdle in getting this supported.

> 
> Back in 5d45ecabe3 [1] Jan mentioned we probably want to avoid relying
> on gcc atomics helpers as we can't switch between LL/SC and LSE
> atomics. 

I asked Jan to add this line to the commit message :). My concern was 
that even if we provided a runtime switch (or a sanity check for 
XSA-295), the GCC helpers would not be able to take advantage of it 
(their code is not written by the Xen community).

Cheers,

> [1] https://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5d45ecabe3

[2] https://xenbits.xen.org/xsa/advisory-295.html

> 
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 19:10:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 19:10:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12559.32683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7tM-0006Q4-I9; Mon, 26 Oct 2020 19:10:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12559.32683; Mon, 26 Oct 2020 19:10:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX7tM-0006Px-EO; Mon, 26 Oct 2020 19:10:52 +0000
Received: by outflank-mailman (input) for mailman id 12559;
 Mon, 26 Oct 2020 19:10:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oIL/=EB=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kX7tK-0006Ps-Oe
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 19:10:50 +0000
Received: from sender4-of-o53.zoho.com (unknown [136.143.188.53])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6b10a91-9c34-4d0e-948f-cfcfb83ba409;
 Mon, 26 Oct 2020 19:10:49 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1603739443797662.4635810614253;
 Mon, 26 Oct 2020 12:10:43 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=oIL/=EB=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
	id 1kX7tK-0006Ps-Oe
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 19:10:50 +0000
X-Inumbo-ID: c6b10a91-9c34-4d0e-948f-cfcfb83ba409
Received: from sender4-of-o53.zoho.com (unknown [136.143.188.53])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id c6b10a91-9c34-4d0e-948f-cfcfb83ba409;
	Mon, 26 Oct 2020 19:10:49 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1603739448; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=GDCeM5L6YvedesYSiQfvGBgHopYSJ6bojLWlnkx/GuDgt3IdUHhXgKnyCSqF3A6bY+Ba6GLUnsxL8rYXkOTfeMdvh2xgHPfsWNSmAwSo58/ckgC+bp5v/tiKxZtwL8pRwItwb7SOLY9KMkqICcKlnjW+T8P/J0oZfjcMLxhTJvc=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603739448; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=GnzzDDnsedpzJAW4j4XsruJLatQ217ifyjkLgNGOKSc=; 
	b=kwoX9itmZs0E54F3xzFiia4dYPqLSmC1X+XpXPo4DoGhahS0wLxafQEjfVYxx1E+4n7KSUpXLJkuN22Fpno/z7cayDlQlC3r6wlVuznzZRoFqLDoIn0iG0siWwkR1I3cGaKDBCv/mm1bvh9SDlDMjLtyKoJU0yKiitg3SyzROd8=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603739448;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=To:Cc:References:From:Subject:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=GnzzDDnsedpzJAW4j4XsruJLatQ217ifyjkLgNGOKSc=;
	b=Zg+31PWmjvsjkOScQFqkHT7iHpq662yZ6gT41GDhhxaTFLbn09rzbv82IgmcT9mS
	uRUfetq18HkiBQkKxVycGoL2oSExjIeweCC/O4RJQq6CfSkDyeLsMLA+YizjhOwZFjY
	B1486JfWcZQuyEQlBwcXKey4ZmA1Rsbt/F6+f8yA=
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by mx.zohomail.com
	with SMTPS id 1603739443797662.4635810614253; Mon, 26 Oct 2020 12:10:43 -0700 (PDT)
To: Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <f6d179ac-eccc-99e1-ee0f-ea0d7f5ed335@qubes-os.org>
 <6da1d0b4b573997ff24a3b5597e764d0dba7597d.camel@suse.com>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Message-ID: <3137ae40-66a0-2f5f-4cb1-36082e3da982@qubes-os.org>
Date: Mon, 26 Oct 2020 20:10:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <6da1d0b4b573997ff24a3b5597e764d0dba7597d.camel@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="m0Gnag2sETvA4DdpQO6Y2I0A9WC1lGM9Y"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--m0Gnag2sETvA4DdpQO6Y2I0A9WC1lGM9Y
Content-Type: multipart/mixed; boundary="eRGi8IqtYCqTIB1h7uMBjrzeOTWj2mlAb";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Dario Faggioli <dfaggioli@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Message-ID: <3137ae40-66a0-2f5f-4cb1-36082e3da982@qubes-os.org>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <f6d179ac-eccc-99e1-ee0f-ea0d7f5ed335@qubes-os.org>
 <6da1d0b4b573997ff24a3b5597e764d0dba7597d.camel@suse.com>
In-Reply-To: <6da1d0b4b573997ff24a3b5597e764d0dba7597d.camel@suse.com>

--eRGi8IqtYCqTIB1h7uMBjrzeOTWj2mlAb
Content-Type: multipart/mixed;
 boundary="------------AE6B7013617E2D5FE752B446"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------AE6B7013617E2D5FE752B446
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 10/26/20 6:54 PM, Dario Faggioli wrote:
> On Mon, 2020-10-26 at 17:11 +0100, Frédéric Pierret wrote:
>> On 10/26/20 2:54 PM, Andrew Cooper wrote:
>>>> If anyone would have any idea of what's going on, that would be
>>>> very
>>>> appreciated. Thank you.
>>>
>>> Does booting Xen with `sched=credit` make a difference?
>>>
>>> ~Andrew
>>
>> Thank you Andrew. Since your mail I've been testing this in
>> production and it's clearly more stable than this morning. I won't
>> yet say it's solved, because yesterday I had a few hours of
>> stability too. But clearly it's encouraging, because this morning it
>> was just hell every 15-30 minutes.
>>
> Ok, yes, let us know if the credit scheduler seems to not suffer from
> the issue.
> 

Yes, unfortunately. I had a few hours of stability but it just ended up with:

```
[15883.967829] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[15883.967868] rcu: 	12-...0: (75 ticks this GP) idle=5c6/1/0x4000000000000000 softirq=139356/139357 fqs=14879
[15883.967884] 	(detected by 0, t=60002 jiffies, g=460221, q=89)
[15883.967901] Sending NMI from CPU 0 to CPUs 12:
[15893.970590] rcu: rcu_sched kthread starved for 9994 jiffies! g460221 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=9
[15893.970622] rcu: RCU grace-period kthread stack dump:
[15893.970631] rcu_sched       R  running task        0    10      2 0x80004008
[15893.970645] Call Trace:
[15893.970658]  ? xen_hypercall_xen_version+0xa/0x20
[15893.970670]  ? xen_force_evtchn_callback+0x9/0x10
[15893.970679]  ? check_events+0x12/0x20
[15893.970687]  ? xen_restore_fl_direct+0x1f/0x20
[15893.970697]  ? _raw_spin_unlock_irqrestore+0x14/0x20
[15893.970708]  ? force_qs_rnp+0x6f/0x170
[15893.970715]  ? rcu_nocb_unlock_irqrestore+0x30/0x30
[15893.970724]  ? rcu_gp_fqs_loop+0x234/0x2a0
[15893.970732]  ? rcu_gp_kthread+0xb5/0x140
[15893.970740]  ? rcu_gp_init+0x470/0x470
[15893.970748]  ? kthread+0x115/0x140
[15893.970756]  ? __kthread_bind_mask+0x60/0x60
[15893.970764]  ? ret_from_fork+0x35/0x40
[16063.972793] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[16063.972825] rcu: 	12-...0: (75 ticks this GP) idle=5c6/1/0x4000000000000000 softirq=139356/139357 fqs=57364
[16063.972840] 	(detected by 5, t=240007 jiffies, g=460221, q=6439)
[16063.972855] Sending NMI from CPU 5 to CPUs 12:
[16243.977769] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[16243.977802] rcu: 	12-...0: (75 ticks this GP) idle=5c6/1/0x4000000000000000 softirq=139356/139357 fqs=99504
[16243.977817] 	(detected by 11, t=420012 jiffies, g=460221, q=6710)
[16243.977830] Sending NMI from CPU 11 to CPUs 12:
[16253.980496] rcu: rcu_sched kthread starved for 10001 jiffies! g460221 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=9
[16253.980528] rcu: RCU grace-period kthread stack dump:
[16253.980537] rcu_sched       R  running task        0    10      2 0x80004008
[16253.980550] Call Trace:
[16253.980563]  ? xen_hypercall_xen_version+0xa/0x20
[16253.980575]  ? xen_force_evtchn_callback+0x9/0x10
[16253.980584]  ? check_events+0x12/0x20
[16253.980592]  ? xen_restore_fl_direct+0x1f/0x20
[16253.980602]  ? _raw_spin_unlock_irqrestore+0x14/0x20
[16253.980613]  ? force_qs_rnp+0x6f/0x170
[16253.980620]  ? rcu_nocb_unlock_irqrestore+0x30/0x30
[16253.980629]  ? rcu_gp_fqs_loop+0x234/0x2a0
[16253.980637]  ? rcu_gp_kthread+0xb5/0x140
[16253.980645]  ? rcu_gp_init+0x470/0x470
[16253.980653]  ? kthread+0x115/0x140
[16253.980661]  ? __kthread_bind_mask+0x60/0x60
[16253.980669]  ? ret_from_fork+0x35/0x40
[16423.982735] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[16423.982789] rcu: 	12-...0: (75 ticks this GP) idle=5c6/1/0x4000000000000000 softirq=139356/139357 fqs=139435
[16423.982820] 	(detected by 10, t=600017 jiffies, g=460221, q=7354)
[16423.982842] Sending NMI from CPU 10 to CPUs 12:
[16433.984844] rcu: rcu_sched kthread starved for 10001 jiffies! g460221 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=3
[16433.984875] rcu: RCU grace-period kthread stack dump:
[16433.984885] rcu_sched       R  running task        0    10      2 0x80004000
[16433.984897] Call Trace:
[16433.984910]  ? xen_hypercall_xen_version+0xa/0x20
[16433.984922]  ? xen_force_evtchn_callback+0x9/0x10
[16433.984931]  ? check_events+0x12/0x20
[16433.984939]  ? xen_restore_fl_direct+0x1f/0x20
[16433.984949]  ? _raw_spin_unlock_irqrestore+0x14/0x20
[16433.984960]  ? force_qs_rnp+0x6f/0x170
[16433.984967]  ? rcu_nocb_unlock_irqrestore+0x30/0x30
[16433.984976]  ? rcu_gp_fqs_loop+0x234/0x2a0
[16433.984984]  ? rcu_gp_kthread+0xb5/0x140
[16433.984992]  ? rcu_gp_init+0x470/0x470
[16433.985000]  ? kthread+0x115/0x140
[16433.985007]  ? __kthread_bind_mask+0x60/0x60
[16433.985015]  ? ret_from_fork+0x35/0x40
[16603.987677] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[16603.987710] rcu: 	12-...0: (75 ticks this GP) idle=5c6/1/0x4000000000000000 softirq=139356/139357 fqs=179313
[16603.987725] 	(detected by 0, t=780022 jiffies, g=460221, q=7869)
[16603.987740] Sending NMI from CPU 0 to CPUs 12:
[16783.992658] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[16783.992710] rcu: 	12-...0: (75 ticks this GP) idle=5c6/1/0x4000000000000000 softirq=139356/139357 fqs=219106
[16783.992741] 	(detected by 13, t=960027 jiffies, g=460221, q=8300)
[16783.992768] Sending NMI from CPU 13 to CPUs 12:
[16793.995873] rcu: rcu_sched kthread starved for 10000 jiffies! g460221 f0x0 RCU_GP_DOING_FQS(6) ->state=0x0 ->cpu=4
[16793.995906] rcu: RCU grace-period kthread stack dump:
[16793.995915] rcu_sched       R  running task        0    10      2 0x80004000
[16793.995930] Call Trace:
[16793.995948]  ? xen_hypercall_xen_version+0xa/0x20
[16793.995963]  ? xen_force_evtchn_callback+0x9/0x10
[16793.995972]  ? check_events+0x12/0x20
[16793.995979]  ? xen_restore_fl_direct+0x1f/0x20
[16793.995992]  ? _raw_spin_unlock_irqrestore+0x14/0x20
[16793.996004]  ? force_qs_rnp+0x6f/0x170
[16793.996012]  ? rcu_nocb_unlock_irqrestore+0x30/0x30
[16793.996021]  ? rcu_gp_fqs_loop+0x234/0x2a0
[16793.996029]  ? rcu_gp_kthread+0xb5/0x140
[16793.996037]  ? rcu_gp_init+0x470/0x470
[16793.996046]  ? kthread+0x115/0x140
[16793.996054]  ? __kthread_bind_mask+0x60/0x60
[16793.996062]  ? ret_from_fork+0x35/0x40
```

> I'm curious about another thing, though. You mentioned, in your
> previous email (and in the subject :-)) that this is a 4.13 -> 4.14
> issue for you?

This is indeed happening since I updated Xen from 4.13 to 4.14; 4.13 
was totally stable for me. The server was running for months without 
any issue.
> Does that mean that the problem was not there on 4.13?
> 
> I'm asking because Credit2 was already the default scheduler in 4.13.
> 
> So, unless you were configuring things differently, you were already
> using it there.

Normally, there is a new custom patch for S3 resume from Marek (in CC), 
and he would be much better placed than me to describe the very 
specific changes with respect to 4.13.

> If this is the case, it would hint at the fact that something that
> changed between .13 and .14 could be the cause.
> 
> Regards
> 

Thank you again for your help.

--------------AE6B7013617E2D5FE752B446
Content-Type: application/pgp-keys;
 name="OpenPGP_0x484010B5CDC576E2.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0x484010B5CDC576E2.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0D=
y2r
UVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8vtovi=
97s
WIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIvltoBrYnEl=
D1P
yp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpXgYzTN/kq8sxBM=
h2O
rQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PLw5koqPs/6JEIVI+t0=
pyg
+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZNbYRXzkI91HCt40X2bTb2=
jTK
gvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpaA/GPaJ5DjzV0q9mkYkGDLYI3J=
/J+
s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVirEVBum723MFH4DxhTrOoWgta2nyRHO=
oi0
z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvtLbYnlSt3v32nfUXh12aQPwU/LCGIzq4oF=
NVr
Np3aWPnSajLPpQARAQABzTxGcsOpZMOpcmljIFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpY=
y5w
aWVycmV0QHF1YmVzLW9zLm9yZz7CwXgEEwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWA=
gMB
Ah4BAheAAAoJEEhAELXNxXbiPLkQAI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZ=
Jnl
+Fb5HBgthU9lBdMqNySg+s8yekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ=
9uI
TprfEqX7V2OLbrYW94qwR8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04=
Uek
t5I2Zc8iRDF9kneINiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoD=
RC/
bsAow7cBudj+lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR=
9ze
2O5Yh+/BunrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2=
VHT
fLmAOt+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1cox=
FUw
eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBkob=
1Ep
fW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKbxM/Ny=
xHr
mNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCNHe8XNVqBp=
lV0
yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOmqpbSLbra8NP8M=
u5F
ZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8V+9+dpLEU75+mpHU7=
GlE
CfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M++ZmWfEV5nCP2qvzeYDGA=
L6B
bWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr5aNCaNpv/i4gLO1IScdfDwm6g=
dfB
2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hgYlDWdbImhNL/Z7iL3eayH7T9qAVNU=
587
MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpAH+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYH=
wuc
3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYDyhxVGbeWp820cb0s1f689XCXqFYAzTfCit+Ee=
boY
ORN5CGioXzS+z0S9IhPbdUuvqs7xvC248bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9a=
Pe/
Zw4PTKWvbJlcFioofLwTQE1XvWomFPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMA=
AoJ
EEhAELXNxXbilSkP/2NcazvUDGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrP=
X8h
jITWD9ZA2bbZZ+J+a39vyY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9x=
K8m
WXwhn90SHNadEf28ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q=
8aa
+G8iAkNJcb+Wx5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Ny=
f2M
KKbWRJntjy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJ=
Us/
geoCUBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1=
KjH
uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex1=
C+w
3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PXpm5Pw=
4st
VEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQQMhGv8Dnb=
AdO
OOnumAXWq0+wl5uP
=3DRWX1
-----END PGP PUBLIC KEY BLOCK-----

--------------AE6B7013617E2D5FE752B446--

--eRGi8IqtYCqTIB1h7uMBjrzeOTWj2mlAb--

--m0Gnag2sETvA4DdpQO6Y2I0A9WC1lGM9Y
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAl+XHzAACgkQSEAQtc3F
duJElA//RffoOh8RaBMCkrsTw7YWCMuL3rO1cXBbV89Gojllfl033Gk1Kbo6dTBU
KYQvM2S3wNJ3TCmUjTD6kBp5o8W7/AXkvhXKItS3fVYx6Yx5WX8iTMDrwTyiBtDa
pKDkObIf+WZlhWYKLPaP+N9sIu82IygNuFb6Xy6vsingz6AS8z4/cjnAPxHF3FQZ
Bp0O8jVCE5kyzQK3NTZJeBs4dB7kgZq8mdls9PWuQBaEHDvyuzfNjJm7ea0wUki+
9LaPKkqnawV9E+LhosSjXMoi2fFsszqC+qu+W2F919GoWFR8lL6WT/ey7Sb/1yHq
g6iBnoRTLCFN0osqsooMItAj/WCeymYSVr27yiHVH4Gp0TwD7zFSodXoi2BXiVWm
HHyV4dccRjCEM3hg89pUpQOWMBs+YRuZ2Zh152P0xpZA2yRuWaYdIzajgZ6UANg3
U/G7Rr6iJS0nNoPG55xwEJ2QPIRp8n0LYIPOZgCVrNlXadIdBuqwcow4rIyjGqmG
y2PMD0slLbq3ZjvPB5Am+ZBiOUnaIJKHHOBM/Mpd8+FuNE/WuYJrq0KjqVAV7D2H
o+YvW1k6HWK1rysVfhcChPF9FC7kvBq/71Q94zaeAUeIbWmYzf3j5bCZPR1Rnuxq
GHvJrKehNN8FIbmdWGONBafD1ZlvhBzPELFKg1Yr7C+cSbrZ7dE=
=DOOh
-----END PGP SIGNATURE-----

--m0Gnag2sETvA4DdpQO6Y2I0A9WC1lGM9Y--


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 20:42:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 20:42:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12564.32695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX9Jn-0005fd-BT; Mon, 26 Oct 2020 20:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12564.32695; Mon, 26 Oct 2020 20:42:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kX9Jn-0005fW-7u; Mon, 26 Oct 2020 20:42:15 +0000
Received: by outflank-mailman (input) for mailman id 12564;
 Mon, 26 Oct 2020 20:42:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wDZ8=EB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kX9Jl-0005fR-0M
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 20:42:13 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00a13b8b-3c22-4d36-84e6-ee97feeda9da;
 Mon, 26 Oct 2020 20:42:11 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
 with ESMTPSA id R05874w9QKg01Si
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 26 Oct 2020 21:42:00 +0100 (CET)
X-Inumbo-ID: 00a13b8b-3c22-4d36-84e6-ee97feeda9da
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603744930;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=EwcirB4XvBjykSpJRWnvQgUIQP98XMUaOqvA5ypkIuM=;
	b=pDMRBAkW7MbVxcNYRTEH6va2dRIXbEdahduEnVza/bHHljKxaU8bHQ42BdpvMp3Sak
	5zR47f6i2tNCWQRI/N0OO990u5j7/e0OxWilSmUZ3w4s0/oFRNT8C6r4M0bMk3ROM+Ch
	QcYQRPSpqdxwk68zqd9okBlQPgmqurP0QqaWW8JFwZyMs6z5KJoJsCT1tKmNId1NYjIi
	Gg9SUrEFZ2F4ptzooC+aZnK6/XDCNd1A+zumRBzZXj+txD0pjsxbtuPr5Di/GIDB5lH4
	F/x4vNLf2bMwgsyt4nNAIFDq1H2lHtCJ8tUgbyzObVH/ZmP1mdTjpSt2DpA/MZbUloUw
	Iy1Q==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Jan Beulich <jbeulich@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] libacpi: use temporary files for generated files
Date: Mon, 26 Oct 2020 21:41:51 +0100
Message-Id: <20201026204151.23459-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Use a temporary file, and move it into place once done.
The same pattern already exists for other dependencies.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libacpi/Makefile | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tools/libacpi/Makefile b/tools/libacpi/Makefile
index c17f3924cc..2cc4cc585b 100644
--- a/tools/libacpi/Makefile
+++ b/tools/libacpi/Makefile
@@ -43,7 +43,8 @@ all: $(C_SRC) $(H_SRC)
 
 $(H_SRC): $(ACPI_BUILD_DIR)/%.h: %.asl iasl
 	iasl -vs -p $(ACPI_BUILD_DIR)/$*.$(TMP_SUFFIX) -tc $<
-	sed -e 's/AmlCode/$*/g' -e 's/_aml_code//g' $(ACPI_BUILD_DIR)/$*.hex >$@
+	sed -e 's/AmlCode/$*/g' -e 's/_aml_code//g' $(ACPI_BUILD_DIR)/$*.hex >$@.$(TMP_SUFFIX)
+	mv -f $@.$(TMP_SUFFIX) $@
 	rm -f $(addprefix $(ACPI_BUILD_DIR)/, $*.aml $*.hex)
  
 $(MK_DSDT): mk_dsdt.c
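The write-to-temp-then-rename idiom the patch adds can be sketched in a few lines of shell; the file names below are illustrative, not taken from the Makefile:

```shell
# Generate the output under a temporary name, then rename it into place.
# rename(2) is atomic on the same filesystem, so a consumer of the file
# never observes a partially written generated header.
out=generated.h
printf '/* generated */' > "$out.tmp"
mv -f "$out.tmp" "$out"
cat "$out"
```

If the generating command dies halfway through, only the `.tmp` file is left behind and the real target is never corrupted, which is exactly why make recipes use this pattern for generated sources.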


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 21:32:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 21:32:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12568.32706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXA6X-0001XI-5D; Mon, 26 Oct 2020 21:32:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12568.32706; Mon, 26 Oct 2020 21:32:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXA6X-0001XB-25; Mon, 26 Oct 2020 21:32:37 +0000
Received: by outflank-mailman (input) for mailman id 12568;
 Mon, 26 Oct 2020 21:32:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/GsP=EB=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kXA6W-0001X6-1o
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 21:32:36 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb390f28-432c-4117-8f83-1a60479a3cf5;
 Mon, 26 Oct 2020 21:32:34 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09QLWP13023597
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Mon, 26 Oct 2020 17:32:30 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09QLWOUX023596;
 Mon, 26 Oct 2020 14:32:24 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: cb390f28-432c-4117-8f83-1a60479a3cf5
Date: Mon, 26 Oct 2020 14:32:24 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201026213224.GB20589@mattapan.m5p.com>
References: <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com>
 <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
 <20201026160316.GA20589@mattapan.m5p.com>
 <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Oct 26, 2020 at 06:44:27PM +0000, Julien Grall wrote:
> Hi Elliott,
> 
> On 26/10/2020 16:03, Elliott Mitchell wrote:
> > On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
> >> On 24/10/2020 06:35, Elliott Mitchell wrote:
> >>> ACPI has a distinct
> >>> means of specifying a limited DMA-width; the above fails, because it
> >>> assumes a *device-tree*.
> >>
> >> Do you know if it would be possible to infer from the ACPI static table
> >> the DMA-width?
> > 
> > Yes, and it is.  Due to not knowing much about ACPI tables I don't know
> > what the C code would look like though (problem is which documentation
> > should I be looking at first?).
> 
> What you provided below is an excerpt of the DSDT. AFAIK, DSDT content 
> is written in AML. So far the shortest implementation I have seen for 
> the AML parser is around 5000 lines (see [1]). It might be possible to
> strip some of the code, although I think this will still probably be
> too big for a single workaround.
> 
> What I meant by "static table" is a table that looks like a structure 
> and can be parsed in a few lines. If we can't find one containing the
> DMA window, then the next best solution is to find a way to identify
> the platform.
> 
> I don't know enough ACPI to know if this solution is possible. A good 
> starter would probably be the ACPI spec [2].

Be assured, you likely know more about ACPI than I do.  :-)

A crucial point, though, is that the mentions of dealing with DMA on the
Raspberry Pi 4B using ACPI pointed at that "_DMA" string.  What is there
is Good Enough(tm) to make a 5.8 Linux kernel successfully operate
using ACPI.

Looking at the 5.8 source, _DMA is apparently an ACPI method.  That
almost looks straightforward enough for me to tackle for Xen...
The good news is it looks like only a single DMA window needs supporting...
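For context on the static-table/AML distinction: per the ACPI spec, every
table starts with a fixed 36-byte header whose first four bytes are the
signature and the next four the little-endian length, so a "static" table
can be probed with a couple of fixed-offset reads, while a _DMA method
lives as AML inside the DSDT and needs a full interpreter.  A tiny sketch
(the file and its contents are fabricated for illustration):

```shell
# Build a fake table header: 4-byte signature "DSDT" followed by a
# 4-byte little-endian length field (36 = 0x24, octal 044, the size of
# the bare header with no payload).
printf 'DSDT\044\000\000\000' > fake_table.bin
# A static-table probe only needs fixed offsets: the signature is the
# first four bytes.
sig=$(head -c 4 fake_table.bin)
echo "signature: $sig"
```

On a running Linux system the real tables are exported under
/sys/firmware/acpi/tables/, which is a convenient way to eyeball what the
RPi4 firmware actually provides.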


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Mon Oct 26 23:41:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 23:41:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12579.32727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXC7I-00043b-IA; Mon, 26 Oct 2020 23:41:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12579.32727; Mon, 26 Oct 2020 23:41:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXC7I-00043U-Ey; Mon, 26 Oct 2020 23:41:32 +0000
Received: by outflank-mailman (input) for mailman id 12579;
 Mon, 26 Oct 2020 23:41:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WRSk=EB=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXC7G-00043L-Pl
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 23:41:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6efed2e-edc1-4209-b5cf-8ebbc59833be;
 Mon, 26 Oct 2020 23:41:27 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXC7C-0000sr-PK; Mon, 26 Oct 2020 23:41:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXC7C-0007k5-Ap; Mon, 26 Oct 2020 23:41:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXC7C-0003KY-AJ; Mon, 26 Oct 2020 23:41:26 +0000
X-Inumbo-ID: c6efed2e-edc1-4209-b5cf-8ebbc59833be
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FeXKcH8tKJwM3rvg3qWgaHBfGL0df1m0U9Mu7EMA/ZQ=; b=vWBZxwMNXiS280K/l2oatzRq3A
	3QvUSKSb6F2uvJXy9bXc/G1bmjzm3x5tqhuEWIgGA36AlyI913RSoWtt48CYAm4pSg2+QWShh9QBU
	4/tv+WEXtFxafG7nxjw9nxvjupVJ3etjSNpcrQ/0KvjY7X0AvOcEoI2HLEaZ00uCrgnE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156249-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156249: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=a95e0396c805735c491a049b01de6f5a713fb91b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 26 Oct 2020 23:41:26 +0000

flight 156249 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156249/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                a95e0396c805735c491a049b01de6f5a713fb91b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   67 days
Failing since        152659  2020-08-21 14:07:39 Z   66 days  155 attempts
Testing same since   156249  2020-10-26 19:06:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 51765 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Oct 26 23:50:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 23:50:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12584.32742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXCFu-00050u-N3; Mon, 26 Oct 2020 23:50:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12584.32742; Mon, 26 Oct 2020 23:50:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXCFu-00050n-Ht; Mon, 26 Oct 2020 23:50:26 +0000
Received: by outflank-mailman (input) for mailman id 12584;
 Mon, 26 Oct 2020 23:50:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lBFa=EB=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1kXCFt-00050i-1d
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 23:50:25 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d372090-6b0f-4f73-86a4-14570f3c3254;
 Mon, 26 Oct 2020 23:50:23 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 16B1B2075B;
 Mon, 26 Oct 2020 23:50:22 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lBFa=EB=kernel.org=sashal@srs-us1.protection.inumbo.net>)
	id 1kXCFt-00050i-1d
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 23:50:25 +0000
X-Inumbo-ID: 5d372090-6b0f-4f73-86a4-14570f3c3254
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5d372090-6b0f-4f73-86a4-14570f3c3254;
	Mon, 26 Oct 2020 23:50:23 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net [73.47.72.35])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 16B1B2075B;
	Mon, 26 Oct 2020 23:50:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603756222;
	bh=j8+aveAqrJQEVzbm2fQkpi44naYHrixzixT0DdUmo5A=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=uhJHajN05XdyKQNP3gZpGIRZiFVOsNjDBVOIJHE6npc+ADEGGTWD4twcXlr//Gzw+
	 63SpAPUMffktbH4QiOwm90Tizcrla1eYVBT7x1XbQcoWH3bWSmu1JLCuDZarx7Gqrr
	 P4ToM9QncBvJWQbimTSHHt60lvgcP6vyoqa1XUMM=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Marek Szyprowski <m.szyprowski@samsung.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 5.9 062/147] xen: gntdev: fix common struct sg_table related issues
Date: Mon, 26 Oct 2020 19:47:40 -0400
Message-Id: <20201026234905.1022767-62-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201026234905.1022767-1-sashal@kernel.org>
References: <20201026234905.1022767-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Marek Szyprowski <m.szyprowski@samsung.com>

[ Upstream commit d1749eb1ab85e04e58c29e58900e3abebbdd6e82 ]

Documentation/DMA-API-HOWTO.txt states that dma_map_sg() returns the
number of entries created in the DMA address space. However, the
subsequent calls to dma_sync_sg_for_{device,cpu}() and dma_unmap_sg()
must be made with the original number of entries that was passed to
dma_map_sg().

struct sg_table is a common structure used for describing a non-contiguous
memory buffer, used commonly in the DRM and graphics subsystems. It
consists of a scatterlist with memory pages and DMA addresses (sgl entry),
as well as the number of scatterlist entries: CPU pages (orig_nents entry)
and DMA mapped pages (nents entry).

It turned out to be a common mistake to misuse the nents and orig_nents
entries, calling the DMA-mapping functions with the wrong number of
entries or ignoring the number of mapped entries returned by
dma_map_sg().

To avoid such issues, let's use the common dma-mapping wrappers that
operate directly on struct sg_table objects, and use the scatterlist
page iterators where possible. This almost always hides references to
the nents and orig_nents entries, making the code robust, easier to
follow, and copy/paste safe.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/xen/gntdev-dmabuf.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index b1b6eebafd5de..4c13cbc99896a 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -247,10 +247,9 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 
 		if (sgt) {
 			if (gntdev_dmabuf_attach->dir != DMA_NONE)
-				dma_unmap_sg_attrs(attach->dev, sgt->sgl,
-						   sgt->nents,
-						   gntdev_dmabuf_attach->dir,
-						   DMA_ATTR_SKIP_CPU_SYNC);
+				dma_unmap_sgtable(attach->dev, sgt,
+						  gntdev_dmabuf_attach->dir,
+						  DMA_ATTR_SKIP_CPU_SYNC);
 			sg_free_table(sgt);
 		}
 
@@ -288,8 +287,8 @@ dmabuf_exp_ops_map_dma_buf(struct dma_buf_attachment *attach,
 	sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages,
 				  gntdev_dmabuf->nr_pages);
 	if (!IS_ERR(sgt)) {
-		if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-				      DMA_ATTR_SKIP_CPU_SYNC)) {
+		if (dma_map_sgtable(attach->dev, sgt, dir,
+				    DMA_ATTR_SKIP_CPU_SYNC)) {
 			sg_free_table(sgt);
 			kfree(sgt);
 			sgt = ERR_PTR(-ENOMEM);
@@ -633,7 +632,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
 
 	/* Now convert sgt to array of pages and check for page validity. */
 	i = 0;
-	for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
+	for_each_sgtable_page(sgt, &sg_iter, 0) {
 		struct page *page = sg_page_iter_page(&sg_iter);
 		/*
 		 * Check if page is valid: this can happen if we are given
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Oct 26 23:53:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 26 Oct 2020 23:53:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12588.32753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXCIf-0005Ba-3K; Mon, 26 Oct 2020 23:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12588.32753; Mon, 26 Oct 2020 23:53:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXCIf-0005BT-0A; Mon, 26 Oct 2020 23:53:17 +0000
Received: by outflank-mailman (input) for mailman id 12588;
 Mon, 26 Oct 2020 23:53:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lBFa=EB=kernel.org=sashal@srs-us1.protection.inumbo.net>)
 id 1kXCId-0005BL-DT
 for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 23:53:15 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cdd4db8-6b51-46da-a296-c15c1e9f3f43;
 Mon, 26 Oct 2020 23:53:13 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net
 [73.47.72.35])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D86B722202;
 Mon, 26 Oct 2020 23:53:11 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=lBFa=EB=kernel.org=sashal@srs-us1.protection.inumbo.net>)
	id 1kXCId-0005BL-DT
	for xen-devel@lists.xenproject.org; Mon, 26 Oct 2020 23:53:15 +0000
X-Inumbo-ID: 8cdd4db8-6b51-46da-a296-c15c1e9f3f43
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8cdd4db8-6b51-46da-a296-c15c1e9f3f43;
	Mon, 26 Oct 2020 23:53:13 +0000 (UTC)
Received: from sasha-vm.mshome.net (c-73-47-72-35.hsd1.nh.comcast.net [73.47.72.35])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id D86B722202;
	Mon, 26 Oct 2020 23:53:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603756392;
	bh=j8+aveAqrJQEVzbm2fQkpi44naYHrixzixT0DdUmo5A=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=hMDjDuYtIUSPBB9B21/4bfkTHLyW6e8q+6pzDlVwQZtGtKTsoUYGPD0YYYZmihC5U
	 l4AXo+sqP6cf37YCIqdwqF0VwUmATsw0NAqs9E51pAneZNAFjYXnlryUjmpDmHsqU6
	 ZP6YauC5AYL556Q9tQ3qmDbCROVimmHwqRP0ncT0=
From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org,
	stable@vger.kernel.org
Cc: Marek Szyprowski <m.szyprowski@samsung.com>,
	Juergen Gross <jgross@suse.com>,
	Sasha Levin <sashal@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: [PATCH AUTOSEL 5.8 054/132] xen: gntdev: fix common struct sg_table related issues
Date: Mon, 26 Oct 2020 19:50:46 -0400
Message-Id: <20201026235205.1023962-54-sashal@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20201026235205.1023962-1-sashal@kernel.org>
References: <20201026235205.1023962-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit

From: Marek Szyprowski <m.szyprowski@samsung.com>

[ Upstream commit d1749eb1ab85e04e58c29e58900e3abebbdd6e82 ]

Documentation/DMA-API-HOWTO.txt states that dma_map_sg() returns the
number of entries created in the DMA address space. However, the
subsequent calls to dma_sync_sg_for_{device,cpu}() and dma_unmap_sg()
must be made with the original number of entries that was passed to
dma_map_sg().

struct sg_table is a common structure used for describing a non-contiguous
memory buffer, used commonly in the DRM and graphics subsystems. It
consists of a scatterlist with memory pages and DMA addresses (sgl entry),
as well as the number of scatterlist entries: CPU pages (orig_nents entry)
and DMA mapped pages (nents entry).

It turned out to be a common mistake to misuse the nents and orig_nents
entries, calling the DMA-mapping functions with the wrong number of
entries or ignoring the number of mapped entries returned by
dma_map_sg().

To avoid such issues, let's use the common dma-mapping wrappers that
operate directly on struct sg_table objects, and use the scatterlist
page iterators where possible. This almost always hides references to
the nents and orig_nents entries, making the code robust, easier to
follow, and copy/paste safe.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/xen/gntdev-dmabuf.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index b1b6eebafd5de..4c13cbc99896a 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -247,10 +247,9 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 
 		if (sgt) {
 			if (gntdev_dmabuf_attach->dir != DMA_NONE)
-				dma_unmap_sg_attrs(attach->dev, sgt->sgl,
-						   sgt->nents,
-						   gntdev_dmabuf_attach->dir,
-						   DMA_ATTR_SKIP_CPU_SYNC);
+				dma_unmap_sgtable(attach->dev, sgt,
+						  gntdev_dmabuf_attach->dir,
+						  DMA_ATTR_SKIP_CPU_SYNC);
 			sg_free_table(sgt);
 		}
 
@@ -288,8 +287,8 @@ dmabuf_exp_ops_map_dma_buf(struct dma_buf_attachment *attach,
 	sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages,
 				  gntdev_dmabuf->nr_pages);
 	if (!IS_ERR(sgt)) {
-		if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-				      DMA_ATTR_SKIP_CPU_SYNC)) {
+		if (dma_map_sgtable(attach->dev, sgt, dir,
+				    DMA_ATTR_SKIP_CPU_SYNC)) {
 			sg_free_table(sgt);
 			kfree(sgt);
 			sgt = ERR_PTR(-ENOMEM);
@@ -633,7 +632,7 @@ dmabuf_imp_to_refs(struct gntdev_dmabuf_priv *priv, struct device *dev,
 
 	/* Now convert sgt to array of pages and check for page validity. */
 	i = 0;
-	for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
+	for_each_sgtable_page(sgt, &sg_iter, 0) {
 		struct page *page = sg_page_iter_page(&sg_iter);
 		/*
 		 * Check if page is valid: this can happen if we are given
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 00:02:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 00:02:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12593.32766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXCRO-0006ht-S2; Tue, 27 Oct 2020 00:02:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12593.32766; Tue, 27 Oct 2020 00:02:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXCRO-0006hm-P0; Tue, 27 Oct 2020 00:02:18 +0000
Received: by outflank-mailman (input) for mailman id 12593;
 Tue, 27 Oct 2020 00:02:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXCRM-0006hh-Tm
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 00:02:16 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f55fbabc-cc91-45b8-9c03-fbc316abb7b5;
 Tue, 27 Oct 2020 00:02:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 53D6520773;
 Tue, 27 Oct 2020 00:02:15 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kXCRM-0006hh-Tm
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 00:02:16 +0000
X-Inumbo-ID: f55fbabc-cc91-45b8-9c03-fbc316abb7b5
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f55fbabc-cc91-45b8-9c03-fbc316abb7b5;
	Tue, 27 Oct 2020 00:02:16 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 53D6520773;
	Tue, 27 Oct 2020 00:02:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603756935;
	bh=S2fWz7i/YT4lvIbM2qM9onXOHifLeu3fj+WHAM6/Uic=;
	h=Date:From:To:cc:Subject:From;
	b=L1iKctRVl+HbjZOuleP6fj+u871oq3QJwoicqf8T9s77ZZfViR4y4lgH0KRvluXXE
	 7HFEZzg/saCMcBDD3TkhqW7EZrHaEYEnFTIS3Cl1IUMxL3LX1kr+N2tNI8mguV1DC9
	 Mu+SdZmJhL00T7eivPsMxiectmupJ6g6BRr5e3Xo=
Date: Mon, 26 Oct 2020 17:02:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org, 
    konrad.wilk@oracle.com, hch@lst.de
Subject: [PATCH] fix swiotlb panic on Xen
Message-ID: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

kernel/dma/swiotlb.c:swiotlb_init gets called first and tries to
allocate a buffer for the swiotlb. It does so by calling

  memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);

If the allocation fails, no_iotlb_memory is set.

Later during initialization swiotlb-xen comes in
(drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
is != 0, it thinks the memory is ready to use when actually it is not.

When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
and since no_iotlb_memory is set the kernel panics.

Instead, if swiotlb-xen.c:xen_swiotlb_init knew the swiotlb hadn't been
initialized, it would do the initialization itself, which might still
succeed.


Fix the panic by setting io_tlb_start to 0 on swiotlb initialization
failure, and also by setting no_iotlb_memory to false on swiotlb
initialization success.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>


diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c19379fabd20..9924214df60a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -231,6 +231,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	io_tlb_index = 0;
+	no_iotlb_memory = false;
 
 	if (verbose)
 		swiotlb_print_info();
@@ -262,9 +263,11 @@ swiotlb_init(int verbose)
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
 		return;
 
-	if (io_tlb_start)
+	if (io_tlb_start) {
 		memblock_free_early(io_tlb_start,
 				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+		io_tlb_start = 0;
+	}
 	pr_warn("Cannot allocate buffer");
 	no_iotlb_memory = true;
 }
@@ -362,6 +365,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	io_tlb_index = 0;
+	no_iotlb_memory = false;
 
 	swiotlb_print_info();
 


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 00:52:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 00:52:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12601.32779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXDDL-0002Xe-TK; Tue, 27 Oct 2020 00:51:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12601.32779; Tue, 27 Oct 2020 00:51:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXDDL-0002XX-NM; Tue, 27 Oct 2020 00:51:51 +0000
Received: by outflank-mailman (input) for mailman id 12601;
 Tue, 27 Oct 2020 00:51:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXDDK-0002XR-Ph
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 00:51:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 81e7d00f-58fe-478a-b75b-2255c36dca56;
 Tue, 27 Oct 2020 00:51:49 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 8653F2076A;
 Tue, 27 Oct 2020 00:51:48 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kXDDK-0002XR-Ph
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 00:51:50 +0000
X-Inumbo-ID: 81e7d00f-58fe-478a-b75b-2255c36dca56
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 81e7d00f-58fe-478a-b75b-2255c36dca56;
	Tue, 27 Oct 2020 00:51:49 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 8653F2076A;
	Tue, 27 Oct 2020 00:51:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603759908;
	bh=2cj05f2qTvOdPYmIDkLVhVs8OKzu4GUW+BcubKJUduA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=f1fA7PqGQ3NdYMwC5e8TAOxJ7FpQVVe1rU2D+7uIpd9pEhmTxJD4O/Ch+cs6cWdL0
	 61FFqjKm1JdFOoFrNQ0IIld13jc+I1R93GZj1V36bX+7s2OqGRW3zN0V0kIiGbbEIK
	 xkfWJ7+SoeGKKEk5+sVfVUur67O9CQxfdz2yW3UA=
Date: Mon, 26 Oct 2020 17:51:47 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: Remove EXPERT dependancy
In-Reply-To: <bf3b65d2-2642-f1f6-39f1-2f88433e9901@xen.org>
Message-ID: <alpine.DEB.2.21.2010261726350.12247@sstabellini-ThinkPad-T480s>
References: <20201022014310.GA70872@mattapan.m5p.com> <7bf92deb-b1ba-31b2-0357-2639cd2a1bca@xen.org> <alpine.DEB.2.21.2010221403570.12247@sstabellini-ThinkPad-T480s> <b4ec906d-ebb6-add9-1bc0-39ab8d588026@xen.org> <alpine.DEB.2.21.2010230944090.12247@sstabellini-ThinkPad-T480s>
 <bf3b65d2-2642-f1f6-39f1-2f88433e9901@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1630026899-1603759908=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1630026899-1603759908=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 26 Oct 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 23/10/2020 17:57, Stefano Stabellini wrote:
> > On Fri, 23 Oct 2020, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 22/10/2020 22:17, Stefano Stabellini wrote:
> > > > On Thu, 22 Oct 2020, Julien Grall wrote:
> > > > > On 22/10/2020 02:43, Elliott Mitchell wrote:
> > > > > > Linux requires UEFI support to be enabled on ARM64 devices.  While
> > > > > > many
> > > > > > ARM64 devices lack ACPI, the writing seems to be on the wall of
> > > > > > UEFI/ACPI
> > > > > > potentially taking over.  Some common devices may need ACPI table
> > > > > > support.
> > > > > > 
> > > > > > Presently I think it is worth removing the dependency on
> > > > > > CONFIG_EXPERT.
> > > > > 
> > > > > The idea behind EXPERT is to gate any feature that is not considered
> > > > > to be
> > > > > stable/complete enough to be used in production.
> > > > 
> > > > Yes, and from that point of view I don't think we want to remove EXPERT
> > > > from ACPI yet. However, the idea of hiding things behind EXPERT works
> > > > very well for new esoteric features, something like memory introspection
> > > > or memory overcommit.
> > > 
> > > Memaccess is not very new ;).
> > > 
> > > > It does not work well for things that are actually
> > > > required to boot on the platform.
> > > 
> > > I am not sure where the problem is. It is easy to select EXPERT from the
> > > menuconfig. It also hints the user that the feature may not fully work.
> > > 
> > > > 
> > > > Typically ACPI systems don't come with device tree at all (RPi4 being an
> > > > exception), so users don't really have much of a choice in the matter.
> > > 
> > > And they typically have IOMMUs.
> > > 
> > > > 
> > > >   From that point of view, it would be better to remove EXPERT from
> > > > ACPI,
> > > > maybe even build ACPI by default, *but* to add a warning at boot saying
> > > > something like:
> > > > 
> > > > "ACPI support is experimental. Boot using Device Tree if you can."
> > > > 
> > > > 
> > > > That would better convey the risks of using ACPI, while at the same time
> > > > making it a bit easier for users to boot on their ACPI-only platforms.
> > > 
> > > Right, I agree that this makes it easier for users to boot Xen on ACPI-only
> > > platforms. However, based on the above, it is easy enough for a developer to
> > > rebuild Xen with ACPI and EXPERT enabled.
> > > 
> > > So what sort of users are you targeting?
> > 
> > Somebody trying Xen for the first time might know how to build it,
> > but they might not know that ACPI is not available by default, and they
> > might not know that they need to enable EXPERT in order to get the ACPI
> > option in the menu. It is easy to do once you know it is there,
> > otherwise one might not know where to look in the menu.
> 
> Right, EXPERT can now be enabled using Kconfig. So it is not very different
> from an option Foo that has been hidden because the dependency Bar has not
> been selected.
> 
> It should be easy enough (if it is not we should fix it) to figure out the
> dependency when searching the option via menuconfig.

I do `make menuconfig` and there is no ACPI option. How do I even know
that ACPI has a kconfig option to enable? I'd assume that ACPI is always
enabled in the kconfig unless told otherwise.

But let's say I know that I need to look for ACPI. I use the search
function, and it tells me:

  Symbol: ACPI [=n]
  Type  : bool
  Prompt: ACPI (Advanced Configuration and Power Interface) Support
    Location:
  (1) -> Architecture Features
    Defined at arch/arm/Kconfig:34
    Depends on: ARM_64 [=y]
 
I go and look under "Architecture Features" as told, but it is not there.
How would I know that I need to enable "Configure standard Xen features
(expert users)" to get that option?
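
[Editorial note: for context, a sketch of why the search output above never
mentions EXPERT. This is an illustrative Kconfig fragment, not the literal
xen/arch/arm/Kconfig text.]

```kconfig
# An "if EXPERT" attached to the prompt gates only the prompt's
# visibility, not the symbol itself, so the search output lists
# ARM_64 as the sole dependency and never mentions EXPERT --
# which is exactly the discoverability problem described above.
config ACPI
	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
	depends on ARM_64
```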

 
> > > I am sort of okay to remove EXPERT.
> > 
> > OK. This would help (even without building it by default) because as you
> > go and look at the menu the first time, you'll find ACPI among the
> > options right away.
> 
> To be honest, this step is probably the easiest in the full process to get Xen
> built and booted on Arm.
> 
> I briefly looked at Elliott's v2, and I can't help thinking that we are trying
> to re-invent EXPERT for ACPI because we think the feature is *more* important
> than any other feature gated by EXPERT.
> 
> In fact, all the features behind EXPERT are important. But they have been
> gated by EXPERT because they are not mature enough.

It is not so much a matter of "importance" as a matter of being required
for booting. I don't think that EXPERT is really a good tool for gating
things that are required for booting. If we had something else (not
ACPI) that is required for booting and marked as EXPERT, I'd say the
same thing. The only other thing that might qualify is ITS support.


> We already moved EXPERT from a command line option to a menuconfig option. So
> it should be easy enough to enable it now. If it is still not the case, then
> we should improve it.
> 
> But I don't think ACPI is mature enough to deserve a different treatment.

I fully agree ACPI is not mature.


> It would be more useful to get to the stage where ACPI can work
> without any crash/issue first. 

Yes indeed. I don't have any stake in this, given that none of my
systems have ACPI, so I'd better shut up maybe :-)


> > > But I still think building ACPI by default
> > > is still wrong because our default .config is meant to be (security)
> > > supported. I don't think ACPI can earn this qualification today.
> > 
> > Certainly we don't want to imply ACPI is security supported. I was
> > looking at SUPPORT.md and it only says:
> > 
> > """
> > EXPERT and DEBUG Kconfig options are not security supported. Other
> > Kconfig options are supported, if the related features are marked as
> > supported in this document.
> > """
> > 
> > So technically we could enable ACPI in the build by default as ACPI for
> > ARM is marked as experimental. However, I can see that it is not a
> > great idea to enable by default an unsupported option in the kconfig, so
> > from that point of view it might be best to leave ACPI disabled by
> > default. Probably the best compromise at this time.
> From my understanding, the goal of EXPERT was to gate such features. With your
> suggestion, it is not clear to me what the difference is between "experimental"
> and an option gated by "EXPERT".
> 
> Do you mind clarifying?

Ah! That's a good question actually. Is the expectation that
"experimental" in SUPPORT.md and EXPERT in Kconfig are pretty much the
same thing? I didn't think so. Let's have a look. SUPPORT.md says:

### KCONFIG Expert

    Status: Experimental


And EXPERT says:

config EXPERT
	bool "Configure standard Xen features (expert users)"
	help
	  This option allows certain base Xen options and settings
	  to be disabled or tweaked. This is for specialized environments
	  which can tolerate a "non-standard" Xen.
	  Only use this if you really know what you are doing.
	  Xen binaries built with this option enabled are not security
	  supported.
	

It seems that they are not the same: EXPERT is a *subset* of
Experimental, covering certain Xen options "for specialized environments"
and "expert users", which I think makes sense. ACPI is a good example of
something "experimental" but not for specialized environments.

--8323329-1630026899-1603759908=:12247--


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 01:29:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 01:29:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12605.32790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXDnt-0001RV-Qr; Tue, 27 Oct 2020 01:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12605.32790; Tue, 27 Oct 2020 01:29:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXDnt-0001RO-NB; Tue, 27 Oct 2020 01:29:37 +0000
Received: by outflank-mailman (input) for mailman id 12605;
 Tue, 27 Oct 2020 01:29:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXDns-0001RJ-BK
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 01:29:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 788acf31-d022-463f-8eb5-2537fb6a629b;
 Tue, 27 Oct 2020 01:29:33 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXDnp-0007vt-6S; Tue, 27 Oct 2020 01:29:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXDno-0006gl-RG; Tue, 27 Oct 2020 01:29:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXDno-0003U9-Qo; Tue, 27 Oct 2020 01:29:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kXDns-0001RJ-BK
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 01:29:36 +0000
X-Inumbo-ID: 788acf31-d022-463f-8eb5-2537fb6a629b
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 788acf31-d022-463f-8eb5-2537fb6a629b;
	Tue, 27 Oct 2020 01:29:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zx4OlaSH36X6bzgpiBXd3aAXLHXkeiTZDFVI7S11FOo=; b=O1oTANxVhpgI7CGXMjXaAZlqqU
	u6LB3IqqlKgXrjoBXALChH86Wgu3taOoa8CO2+9cANYVfNNrEC6HsYQHOQ+8s9fCt/NeeIBRW9RFZ
	NtsRFY5kHb5HhFK3V63Jf879IpRSdRPHb66jQukc65SbbsCKMV7AspO3p3yyFyRNUnrE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXDnp-0007vt-6S; Tue, 27 Oct 2020 01:29:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXDno-0006gl-RG; Tue, 27 Oct 2020 01:29:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXDno-0003U9-Qo; Tue, 27 Oct 2020 01:29:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156247-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156247: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=41ba50b0572e90ed3d24fe4def54567e9050bc47
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 01:29:32 +0000

flight 156247 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156247/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                41ba50b0572e90ed3d24fe4def54567e9050bc47
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   87 days
Failing since        152366  2020-08-01 20:49:34 Z   86 days  147 attempts
Testing same since   156247  2020-10-26 18:11:18 Z    0 days    1 attempts

------------------------------------------------------------
3376 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 641057 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 05:59:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 05:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12620.32805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXI0N-0007zf-NB; Tue, 27 Oct 2020 05:58:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12620.32805; Tue, 27 Oct 2020 05:58:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXI0N-0007zX-Hm; Tue, 27 Oct 2020 05:58:47 +0000
Received: by outflank-mailman (input) for mailman id 12620;
 Tue, 27 Oct 2020 05:58:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kKAC=EC=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kXI0M-0007zS-A4
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 05:58:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 94cbc760-0627-46ae-9e4c-de2465115f4c;
 Tue, 27 Oct 2020 05:58:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 20968AB98;
 Tue, 27 Oct 2020 05:58:44 +0000 (UTC)
X-Inumbo-ID: 94cbc760-0627-46ae-9e4c-de2465115f4c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603778324;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=j5wHNAKxk4kKcEKC8IMknOrGGu6PJzjfvJ6egTYEWMI=;
	b=b37/132RkMUAy2umlhaO1dbIkR2ned0X3sF1bOY3b9l2JPTShUF7SCkIhT7a3zlflR1eZu
	CvYpAkoPOhsfZMBWxLrXRZVQVVX/azhw0RFv7+CSDPYmAC6due3EcexootjZZqxwwjMLbh
	QWcZq/jV42N4Bs4EA0qp0MNapUeoHaw=
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: Dario Faggioli <dfaggioli@suse.com>,
 "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 "frederic.pierret@qubes-os.org" <frederic.pierret@qubes-os.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
Date: Tue, 27 Oct 2020 06:58:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 26.10.20 17:31, Dario Faggioli wrote:
> On Mon, 2020-10-26 at 15:30 +0100, Jürgen Groß wrote:
>> On 26.10.20 14:54, Andrew Cooper wrote:
>>> On 26/10/2020 13:37, Frédéric Pierret wrote:
>>>>
>>>> If anyone would have any idea of what's going on, that would be
>>>> very appreciated. Thank you.
>>>
>>> Does booting Xen with `sched=credit` make a difference?
>>
>> Hmm, I think I have spotted a problem in credit2 which could explain
>> the hang:
>>
>> csched2_unit_wake() will NOT put the sched unit on a runqueue in case
>> it has CSFLAG_scheduled set. This bit will be reset only in
>> csched2_context_saved().
>>
> Exactly, it does not put it back there. However, if it finds a vCPU
> with the CSFLAG_scheduled flag set, it should set the
> CSFLAG_delayed_runq_add flag.
> 
> Unless curr_on_cpu(cpu)==unit or unit_on_runq(svc)==true... which
> should not be the case. Or were you saying that we actually are in one
> of these situations?
> 
> In fact...
> 
>> So in case a vcpu (and its unit, of course) is blocked and there has
>> been no other vcpu active on its physical cpu but the idle vcpu, there
>> will be no call of csched2_context_saved(). This will prevent the vcpu
>> from becoming active, in theory for eternity, in case there is no need
>> to run another vcpu on the physical cpu.
>>
> ...maybe I am not seeing the exact situation and sequence of events
> you're thinking of. What I see is this: [*]
> 
> - vCPU V is running, i.e., CSFLAG_scheduled is set
> - vCPU V blocks
> - we enter schedule()
>    - schedule calls do_schedule() --> csched2_schedule()
>      - we pick idle, so CSFLAG_delayed_runq_add is set for V
>    - schedule calls sched_context_switch()
>      - sched_context_switch() calls context_switch()
>        - context_switch() calls sched_context_switched()
>          - sched_context_switched() calls:
>            - vcpu_context_saved()
>            - unit_context_saved()
>              - unit_context_saved() calls sched_context_saved() -->
>                                            csched2_context_saved()
>                - csched2_context_saved():
>                  - clears CSFLAG_scheduled
>                  - checks (and clears) CSFLAG_delayed_runq_add
> 
> [*] this assumes granularity 1, i.e., no core-scheduling and no
>      rendezvous. Or was core-scheduling actually enabled?
> 
> And if CSFLAG_delayed_runq_add is set **and** the vCPU is runnable, the
> task is added back to the runqueue.
> 
> So, even if we don't do the actual context switch (i.e., we don't call
> __context_switch()), if the next vCPU that we pick when vCPU V blocks
> is the idle one, it looks to me that we do get to call
> csched2_context_saved().
> 
> And it also looks to me that, when we get to that, if the vCPU is
> runnable, even if it has the CSFLAG_scheduled still set, we do put it
> back to the runqueue.
> 
> And if the vCPU blocked, but csched2_unit_wake() ran while
> CSFLAG_scheduled was still set, it indeed should mean that the vCPU
> itself will be runnable again when we get to csched2_context_saved().
> 
> Or did you have something completely different in mind, and I'm missing
> it?

No, I think you are right. I mixed that up with __context_switch() not
being called.

Sorry for the noise,


Juergen
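[Editor's note: the wake/context-saved handshake discussed in this thread can be modelled in a few lines of C. This is an illustrative sketch, not Xen code: the flag and function names mirror those used in the discussion of Xen's credit2 scheduler, but the struct fields (runnable, on_runq) and the single-unit, lock-free logic are simplifications, assuming scheduling granularity 1 as in Dario's walkthrough.]

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the credit2 flags discussed above. */
#define CSFLAG_scheduled        (1u << 0)  /* unit is (still) on a CPU   */
#define CSFLAG_delayed_runq_add (1u << 1)  /* enqueue in context_saved() */

struct csched2_unit {
    unsigned int flags;
    bool runnable;   /* unit has work to do again       */
    bool on_runq;    /* unit is queued on a runqueue    */
};

/* Wake path: if the unit is still being switched away from its CPU
 * (CSFLAG_scheduled set), do NOT enqueue now -- defer the insertion. */
static void csched2_unit_wake(struct csched2_unit *u)
{
    u->runnable = true;
    if (u->on_runq)
        return;
    if (u->flags & CSFLAG_scheduled)
        u->flags |= CSFLAG_delayed_runq_add;  /* delayed insertion */
    else
        u->on_runq = true;                    /* enqueue immediately */
}

/* Runs once the old context is saved -- and, as noted in the thread,
 * this is reached even when the CPU picked idle and __context_switch()
 * itself never ran. */
static void csched2_context_saved(struct csched2_unit *u)
{
    u->flags &= ~CSFLAG_scheduled;
    if ((u->flags & CSFLAG_delayed_runq_add) && u->runnable) {
        u->flags &= ~CSFLAG_delayed_runq_add;
        u->on_runq = true;                    /* the deferred enqueue */
    }
}
```

Stepping through the scenario from the thread: a unit blocks, wakes again while CSFLAG_scheduled is still set, and so is only put back on the runqueue once csched2_context_saved() runs, confirming there is no window in which a runnable unit is lost.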



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 06:35:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 06:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12639.32869 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXIZd-0003GE-8S; Tue, 27 Oct 2020 06:35:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12639.32869; Tue, 27 Oct 2020 06:35:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXIZd-0003G7-5R; Tue, 27 Oct 2020 06:35:13 +0000
Received: by outflank-mailman (input) for mailman id 12639;
 Tue, 27 Oct 2020 06:35:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXIZb-0003Fx-99
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 06:35:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11990b9a-a7df-48af-a319-c65265b5944a;
 Tue, 27 Oct 2020 06:35:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXIZY-0006hP-RQ; Tue, 27 Oct 2020 06:35:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXIZY-00041X-Jq; Tue, 27 Oct 2020 06:35:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXIZY-0008Tf-J6; Tue, 27 Oct 2020 06:35:08 +0000
X-Inumbo-ID: 11990b9a-a7df-48af-a319-c65265b5944a
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=arOWrGy2QDMLhf3P5sYofgkClCbapv9Fep+sW6ikp1s=; b=G/nGBUlesZ3CK0HCKf35XDWz6v
	GA4z6OmJ/UcQPZkLOcYdwe3zLTFt7XfwvEcMnnrcE57UIIvUyNwZLirHYusenrOsrxjcPxfY7oSGf
	Qj5QBOEcKjRj3G4kRcPLBEO7LjdPnv3BiWEVpXWqwBNz2LQg8rI82bJ0yB6al69ZBEyI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156248-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156248: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=964781c6f162893677c50a779b7d562a299727ba
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 06:35:08 +0000

flight 156248 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156248/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 156228
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156228
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156228
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156228
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156228
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156228
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156228
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156228
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156228
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156228
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156228
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156228
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  964781c6f162893677c50a779b7d562a299727ba
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156228  2020-10-26 01:51:27 Z    1 days
Testing same since   156248  2020-10-26 18:36:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6ca70821b5..964781c6f1  964781c6f162893677c50a779b7d562a299727ba -> master


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 06:36:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 06:36:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12644.32888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXIac-0003Oa-MR; Tue, 27 Oct 2020 06:36:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12644.32888; Tue, 27 Oct 2020 06:36:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXIac-0003OT-Iu; Tue, 27 Oct 2020 06:36:14 +0000
Received: by outflank-mailman (input) for mailman id 12644;
 Tue, 27 Oct 2020 06:36:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXIab-0003OJ-AS
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 06:36:13 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26a925d9-bc3b-4570-82ed-4a2c7b6cb816;
 Tue, 27 Oct 2020 06:36:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXIaV-0006ie-Ja; Tue, 27 Oct 2020 06:36:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXIaV-00043j-Bs; Tue, 27 Oct 2020 06:36:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXIaV-0001JO-BN; Tue, 27 Oct 2020 06:36:07 +0000
X-Inumbo-ID: 26a925d9-bc3b-4570-82ed-4a2c7b6cb816
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UvunzE/jcaIe77rJiWgGfoXusCiGkGGu0ZMFvfRIp2o=; b=s09v1+f3tG8xeZWIV03a50yiYJ
	iOIkf3JPKdlV0U29JbrVT021Iq55Ja1Gxflgeml8S7ZbB7RmqF+CSabMW0wVIpJigH/Bs2M/Eu+cM
	0faCIJVNrWdAlHe6JT14Inek6b+/ruzf56M1E3504aRgHpq3XSmwnYJtQ0tSpuwA4jZg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156253-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156253: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=ea7af657f14dac0794e15775c6f9b9dde3df73b5
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 06:36:07 +0000

flight 156253 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156253/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              ea7af657f14dac0794e15775c6f9b9dde3df73b5
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  109 days
Failing since        151818  2020-07-11 04:18:52 Z  108 days  103 attempts
Testing same since   156253  2020-10-27 04:20:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23296 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 07:37:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 07:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12655.32903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXJY0-0000Eh-KY; Tue, 27 Oct 2020 07:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12655.32903; Tue, 27 Oct 2020 07:37:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXJY0-0000Ea-FJ; Tue, 27 Oct 2020 07:37:36 +0000
Received: by outflank-mailman (input) for mailman id 12655;
 Tue, 27 Oct 2020 07:37:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y/3m=EC=epam.com=prvs=8569373f95=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kXJXz-0000EV-3h
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 07:37:35 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92e0906f-5cb4-4d78-9ef4-558e3bf935ab;
 Tue, 27 Oct 2020 07:37:33 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09R7YQ6F005396; Tue, 27 Oct 2020 07:37:32 GMT
Received: from eur02-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur02lp2056.outbound.protection.outlook.com [104.47.6.56])
 by mx0a-0039f301.pphosted.com with ESMTP id 34dth23vd4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 27 Oct 2020 07:37:32 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2195.eurprd03.prod.outlook.com (2603:10a6:200:4f::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Tue, 27 Oct
 2020 07:37:29 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3477.029; Tue, 27 Oct 2020
 07:37:29 +0000
X-Inumbo-ID: 92e0906f-5cb4-4d78-9ef4-558e3bf935ab
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LJC3k9WliAr73zaa4gdqjWKL07166ohLBj/H5SauDhb8F2LJtFDmmrLOOF9uBm/EqqnUs3eQ+/u5G+XbK19rtU6D4N3sF27eCt+VGRHYveymgNRYw+fHNHAHJK+lCVU/px5d35VC9BxADJwj6wFNNLru+i3/sEdHSStp8o4aJDtVi9ZoyE2Bzv42lKFrrQQt5jyT+Si5m3mzeChGhyZv3QjupnMv9EVWoVJLSFUVbh5wcAD8RwCL1k18b9VnxRgIApp27SN0hM3kleUEa4D8y8oI520RlTkbfSirI8Rpfd7VgjQ80XxGmZ1MS3MrTL2tU/bI4zPcsMfDbPg8TUQ/xw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6ma5D9kE2vB48ZRnF982ee4/AEojb/VS+Y+GsInweVY=;
 b=FBKTLOy++lcQ2abp5Evw88TeGdZQNXWkH4Uxvs3rdS5tnsdDbgTx9Mdwfbuzzsm4no1t3VsGu8WIqNYGzVe6HHicxDP/ayDl12ao5mo1CnTfl09Sfgy/B88sICat72pQkiBZsVS7nEsmtOXy7bi1pL/WgRAmQu3W7Zxmrlt51wTfqMbCioNFbPBX3osb46EGBpdtpArKONLTWIL6JufXiDA0biY/b8EdOlAc/aW7SfJGCa4zVGVB8ubvSuCFyyqyHeLbrSnXRz9REIy3UJb+CkBFLjEekig4rQd9I3QMm8QJMPTSF3m54JqGIFAZ3dlj+A46W+MY0WtWLNAEII3ysw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6ma5D9kE2vB48ZRnF982ee4/AEojb/VS+Y+GsInweVY=;
 b=ToeAISEXYi3kujZkd035rHkDNoKE6FY4jPqHxzC1a9DG9BSEzvqaaMtZ5CFpEywWaBTLv2MIUuLlXI/dTeg5CYWWdNINh1jfDPvU7h4Bw2W1YS6LwnIzdLQYAawVrZLtoV+IS4mJBpZGY65DXOMmk3n9YOwqengGYdDqGvbqzSQaG+w5Tlu6c5aXhvOg9+T5CR+1w9OjgS4F1EN/2Ct8dr3KNe2+Qtqqqn/OWH0gj0IHrAjUdpvZaHqUglTq4O4Pi3O3xTaYkrGyn6wn0Nwp8RN50+K43NZehhhzomfCJM5UVVgRyxZsEh7nM+Gd/uyclN+LuiwIvJNjXEe5Qw/1gA==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Dan Carpenter <dan.carpenter@oracle.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [bug report] ALSA: xen-front: Use Xen common shared buffer
 implementation
Thread-Topic: [bug report] ALSA: xen-front: Use Xen common shared buffer
 implementation
Thread-Index: AQHWp5gCAaSbdhOl50Ol8t4K8ZvWE6mrGNoA
Date: Tue, 27 Oct 2020 07:37:29 +0000
Message-ID: <df832bf9-8caa-0be6-f3bf-17c593d1fb1a@epam.com>
References: <20201021105023.GA957589@mwanda>
In-Reply-To: <20201021105023.GA957589@mwanda>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: oracle.com; dkim=none (message not signed)
 header.d=none;oracle.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 1ca7944d-3869-4fdb-57ac-08d87a4b2887
x-ms-traffictypediagnostic: AM4PR0301MB2195:
x-microsoft-antispam-prvs: 
 <AM4PR0301MB2195EDC4BFA8A239B200AB5BE7160@AM4PR0301MB2195.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 Q4SqmmpOprbbEFsj64SD6xrBX51/4G6yOnj1K7v0akju+81qjcnoMoTB3Pvm0zZVWnMYwXMDGa7lZngvRnhegulKZqPuxRyeP3wRjbPA9YxAkKKk7Ujjsprba1YQ/301fOANcIkEziTdTsUxIB9E0zh5JU47rZNEUFr4MdAfeA3mKupoWeDfXXdCD9UwWLy+4tMBC52MwlK4yUJGFYFAsBbjXZUyq8yItMF8jB0/ZSuXNB+yixpoB0lFWSUtQETpmNm+nqHtAn9dpk/Q982AZYGWpzmjTAQgY4l60FmZh2mz1DN2+AfjUT2hnUNuCONt+Nx5k5y9CVylx+b4vZH3qQ==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(366004)(396003)(39860400002)(346002)(31686004)(31696002)(2616005)(2906002)(8936002)(6916009)(83380400001)(76116006)(26005)(71200400001)(86362001)(478600001)(66556008)(66446008)(36756003)(6506007)(66946007)(6486002)(64756008)(66476007)(8676002)(186003)(4326008)(316002)(6512007)(53546011)(5660300002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 t7dd0ReOxXHaWvYazm1W/+oBAOwETkoMuKxauWW+fEh0ECxqqELw2gi5/KQPEaAT77AjUCpZYVWeOqAwL6K5Z0H4DmQ8FdEq/a6L7gKgATKpYJgUJPcr5NFhhfbePhQ82g0Q5weuEkz4I31pwVPWMjnlaWavE84lDgYTi4kc2JOsD7Zf+aoV2lPZFWFwlMJ1l2FnDntHeJSFznP6TRBecBsPX7/PJw1OI77roqAgBh1Y5lMtI8qfprW898W9IGvl2om0Zz709bt2NE4bkgAk4+4UFZJuMc9hlWCYrZCi61gQhptowuRvLaUEW2S7yXQbHVNN5T4X9Ht56qhOrgGBYtFZZVDkqpb8greWaTCSq8OLxvm7oMMstlm/FOuDDBIOBbgVQoiVoSS8YIIL15O9nKjxFtvoWpvR40wwVbAgCBA0OY8dNC2ERfNic9DgvkIy0xDeDEriDY1Qu3JF970P2NFgECI0coUA2bqkCMOi/xba4SiRwdlhu140F8eA6iZ8Sn9pb4jkTi2igKffAWA3xBbybaLCWTP+1zC2TzrMGqjwPgOSstQdysr8tsxBOCdLDQKPXtNLzPNPEV9kslgdXCx6qNLLmxN+0Jtr92oYW84iLFgKAUdRmzdEZxy7zUBq0S8F58kLn3p3QRlHnMUnqg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <9C8C16C32BC12D43946ACDCE2E7715AE@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1ca7944d-3869-4fdb-57ac-08d87a4b2887
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Oct 2020 07:37:29.2516
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ElH/bzRdy3Ho2yDu9T/CXFTMHOit+Vmd0PnuevHYjh37xZESI6AIWwq7mbe7iBM5f5tzBJxp0mI1T7SwRK8BFyL2ixbEjgz8ysA39ms1gGWCiyLMMcsKcA9aDtV7abZt
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2195
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.235,18.0.737
 definitions=2020-10-27_03:2020-10-26,2020-10-27 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0
 mlxlogscore=790 bulkscore=0 adultscore=0 clxscore=1011 spamscore=0
 priorityscore=1501 lowpriorityscore=0 impostorscore=0 mlxscore=0
 phishscore=0 suspectscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2010270049

Hello, Dan!

On 10/21/20 1:50 PM, Dan Carpenter wrote:
> Hello Oleksandr Andrushchenko,
>
> The patch 58f9d806d16a: "ALSA: xen-front: Use Xen common shared
> buffer implementation" from Nov 30, 2018, leads to the following
> static checker warning:
>
>      sound/xen/xen_snd_front_alsa.c:495 alsa_hw_params()
>      warn: 'stream->shbuf.directory' double freed
>      sound/xen/xen_snd_front_alsa.c:495 alsa_hw_params()
>      warn: 'stream->shbuf.grefs' double freed
>
> sound/xen/xen_snd_front_alsa.c
>     461  static int alsa_hw_params(struct snd_pcm_substream *substream,
>     462                            struct snd_pcm_hw_params *params)
>     463  {
>     464          struct xen_snd_front_pcm_stream_info *stream = stream_get(substream);
>     465          struct xen_snd_front_info *front_info = stream->front_info;
>     466          struct xen_front_pgdir_shbuf_cfg buf_cfg;
>     467          int ret;
>     468
>     469          /*
>     470           * This callback may be called multiple times,
>     471           * so free the previously allocated shared buffer if any.
>     472           */
>     473          stream_free(stream);
>                  ^^^^^^^^^^^^^^^^^^^
> This is freed here.
>
>     474          ret = shbuf_setup_backstore(stream, params_buffer_bytes(params));
>     475          if (ret < 0)
>     476                  goto fail;
>                          ^^^^^^^^^^
> This leads to some double frees.  Probably more double frees than Smatch
> is detecting.
>
>     477
>     478          memset(&buf_cfg, 0, sizeof(buf_cfg));
>     479          buf_cfg.xb_dev = front_info->xb_dev;
>     480          buf_cfg.pgdir = &stream->shbuf;
>     481          buf_cfg.num_pages = stream->num_pages;
>     482          buf_cfg.pages = stream->pages;
>     483
>     484          ret = xen_front_pgdir_shbuf_alloc(&buf_cfg);
>                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> This is where "stream->shbuf.directory" is re-allocated on the success
> path.
>
>     485          if (ret < 0)
>     486                  goto fail;
>     487
>     488          ret = xen_front_pgdir_shbuf_map(&stream->shbuf);
>     489          if (ret < 0)
>     490                  goto fail;
>     491
>     492          return 0;
>     493
>     494  fail:
>     495          stream_free(stream);
>                  ^^^^^^^^^^^^^^^^^^^
> Double free.
>
>     496          dev_err(&front_info->xb_dev->dev,
>     497                  "Failed to allocate buffers for stream with index %d\n",
>     498                  stream->index);
>     499          return ret;
>     500  }
>
> regards,
> dan carpenter

Thank you for reporting this,

I'll try to look at it closely and prepare a patch.

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 07:59:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 07:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12668.32915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXJse-00024T-Bd; Tue, 27 Oct 2020 07:58:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12668.32915; Tue, 27 Oct 2020 07:58:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXJse-00024M-8b; Tue, 27 Oct 2020 07:58:56 +0000
Received: by outflank-mailman (input) for mailman id 12668;
 Tue, 27 Oct 2020 07:58:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XCir=EC=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1kXJsc-00024H-Sx
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 07:58:54 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f4eff0c-fda4-4723-a3cd-d46d127a364c;
 Tue, 27 Oct 2020 07:58:53 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 463BD67373; Tue, 27 Oct 2020 08:58:51 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=XCir=EC=lst.de=hch@srs-us1.protection.inumbo.net>)
	id 1kXJsc-00024H-Sx
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 07:58:54 +0000
X-Inumbo-ID: 7f4eff0c-fda4-4723-a3cd-d46d127a364c
Received: from verein.lst.de (unknown [213.95.11.211])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7f4eff0c-fda4-4723-a3cd-d46d127a364c;
	Tue, 27 Oct 2020 07:58:53 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
	id 463BD67373; Tue, 27 Oct 2020 08:58:51 +0100 (CET)
Date: Tue, 27 Oct 2020 08:58:51 +0100
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org, konrad.wilk@oracle.com, hch@lst.de
Subject: Re: [PATCH] fix swiotlb panic on Xen
Message-ID: <20201027075851.GD22487@lst.de>
References: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.17 (2007-11-01)

Looks good for now:

Reviewed-by: Christoph Hellwig <hch@lst.de>

But we really need to clean up the mess with all these magic variables
eventually.


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 09:03:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 09:03:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12696.32930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXKsu-00007O-Hc; Tue, 27 Oct 2020 09:03:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12696.32930; Tue, 27 Oct 2020 09:03:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXKsu-00007H-ES; Tue, 27 Oct 2020 09:03:16 +0000
Received: by outflank-mailman (input) for mailman id 12696;
 Tue, 27 Oct 2020 09:03:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXKst-00006W-O0
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:03:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 82fb8dbc-563b-439e-ab88-951916e08052;
 Tue, 27 Oct 2020 09:03:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXKsm-0001yN-QZ; Tue, 27 Oct 2020 09:03:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXKsm-0002dl-KQ; Tue, 27 Oct 2020 09:03:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXKsm-0002DV-Jw; Tue, 27 Oct 2020 09:03:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kXKst-00006W-O0
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:03:15 +0000
X-Inumbo-ID: 82fb8dbc-563b-439e-ab88-951916e08052
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 82fb8dbc-563b-439e-ab88-951916e08052;
	Tue, 27 Oct 2020 09:03:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6iEUxXMgBY1wBCGIXqjxqIuJErsln6MceQK2vDQKUuY=; b=oZknsAsM3FBcX/9/MmigGWeY2o
	2amXyFsOZzck05PiY7uxZhc9WGe3E0z7ArrB7gqGpmtco9UubNowmw+rxShcKXT2u29ChmW0fR216
	vlh1yfSnc23+qimWcPPT+pKCNNprpNVO74EUR+ZAR3x0QbuTLTiWG1LqaiLIzVXSjyHI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXKsm-0001yN-QZ; Tue, 27 Oct 2020 09:03:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXKsm-0002dl-KQ; Tue, 27 Oct 2020 09:03:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXKsm-0002DV-Jw; Tue, 27 Oct 2020 09:03:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156252-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156252: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=a3212009d95bbcba7d08076aba2eee51eb1f8e7c
X-Osstest-Versions-That:
    ovmf=b70c4fdcde83689d8cd1e5e2faf598d0087934a3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 09:03:08 +0000

flight 156252 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156252/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a3212009d95bbcba7d08076aba2eee51eb1f8e7c
baseline version:
 ovmf                 b70c4fdcde83689d8cd1e5e2faf598d0087934a3

Last test of basis   156232  2020-10-26 03:10:04 Z    1 days
Testing same since   156252  2020-10-27 01:41:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Heng Luo <heng.luo@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Luo, Heng <heng.luo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b70c4fdcde..a3212009d9  a3212009d95bbcba7d08076aba2eee51eb1f8e7c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 09:22:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 09:22:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12702.32942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXLBo-0001st-TW; Tue, 27 Oct 2020 09:22:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12702.32942; Tue, 27 Oct 2020 09:22:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXLBo-0001sm-Py; Tue, 27 Oct 2020 09:22:48 +0000
Received: by outflank-mailman (input) for mailman id 12702;
 Tue, 27 Oct 2020 09:22:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Hbf0=EC=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kXLBn-0001sh-Fg
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:22:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c83cbdf-7c92-4b99-ba6d-83db1a613517;
 Tue, 27 Oct 2020 09:22:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3AD25ABAE;
 Tue, 27 Oct 2020 09:22:44 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Hbf0=EC=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
	id 1kXLBn-0001sh-Fg
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:22:47 +0000
X-Inumbo-ID: 5c83cbdf-7c92-4b99-ba6d-83db1a613517
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 5c83cbdf-7c92-4b99-ba6d-83db1a613517;
	Tue, 27 Oct 2020 09:22:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603790564;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=5VXmT7OXS7Grv2amMbTiDpvu7FDd2Az8R8wtzflu0aQ=;
	b=LZ2WcOIuRMOuKL44nnU2lB9YRItAeUiekLoMA22hSSxma92dDHZ5x2fIWoK/rGco+bDcNK
	ARePcIg5qz3UOo+UCX03VjlOIYpXiROQg5YQGq4PeUHMZcL2sDt6gtfte/xjVapAYJMvU5
	yqZsfUbDJu529z3CuuwpPlTcAicOFwY=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 3AD25ABAE;
	Tue, 27 Oct 2020 09:22:44 +0000 (UTC)
Message-ID: <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, 
	"George.Dunlap@citrix.com"
	 <George.Dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>, 
	"frederic.pierret@qubes-os.org"
	 <frederic.pierret@qubes-os.org>, "andrew.cooper3@citrix.com"
	 <andrew.cooper3@citrix.com>
Date: Tue, 27 Oct 2020 10:22:42 +0100
In-Reply-To: <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
	 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
	 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
	 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
	 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-3aE5EEhsEb5z1wV5vZe8"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-3aE5EEhsEb5z1wV5vZe8
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Tue, 2020-10-27 at 06:58 +0100, Jürgen Groß wrote:
> On 26.10.20 17:31, Dario Faggioli wrote:
> > 
> > Or did you have something completely different in mind, and I'm
> > missing
> > it?
> 
> No, I think you are right. I mixed that up with __context_switch()
> not
> being called.
> 
Right.

> Sorry for the noise,
> 
Sure, no problem.

In fact, this issue is apparently scheduler-independent. It did seem
to be related to the other report we have, "BUG: credit=sched2
machine hang when using DRAKVUF", but there it looks like it is
scheduler-dependent.

Might it be something that lies somewhere else, but that Credit2 is
triggering faster/more easily? (Just thinking out loud...)

For Frederic, what happens is that dom0 hangs, right? So are you able
to poke at Xen with some debug keys (like 'r' for the scheduler's
status, and the ones for the domain's vCPUs)?

If yes, it may be useful to see the output.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)
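[Editorial note: the debug-key poking suggested above can be driven from a still-responsive dom0 with the xl toolstack; the key output lands in the hypervisor console ring. A sketch, assuming a working Xen dom0 shell (on a full hang, the same keys can be sent over the serial console instead):]

```shell
# Ask Xen to dump scheduler state ('r' = run queues) and
# domain/vCPU state ('q'), then read the hypervisor console ring.
xl debug-keys r
xl debug-keys q
xl dmesg | tail -n 100
```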

--=-3aE5EEhsEb5z1wV5vZe8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+X5uIACgkQFkJ4iaW4
c+7TGBAA0mdnksj8Pa4CjR789Vn+qTbNO+8CRgX3eZpZaBK63DVqpn6v8TZ1sVCC
le8IOj4ql+qmquzRtbFycuw6BCihZR54PlhWpgIr2usmEi1h203K0QsJy/xkBzGi
f+mOnZW+Xr81INZLpzCW5UvpkHFTpfri2vfxWdYHiGx5MEps38u4Slg50ShvB+bh
mQfTLC29EYtMDrmbRsg8avA9IW7prUD+OiaV6HHcnqSC9Od/2zdzJzUl8MjTYLi/
br7Arx8F8r2s0LC2FqniD5osfnUajV76mr8vI3yZz5RpqESm5dSzFhEwpS+8OS5x
Ko4RqFoXcPtjexdKLedrNyNHtCfg4qTKclu2u7BRQLPDYHwnyldWUfT0zIf5DrTC
w7Nq/FUdjrSuGQuqVfW5iwVuzw6nTo26Sv6PAiLpiMB+RnFwXadPMSqnWNoNsOdI
pZAvPXIrlVfUuCwgOjMW7DLEsqyiyZRL2TIXkNWhnNslthYCL9rd7gZ8kPuk3Xq5
+YI37H8h+FrkFBTbAcF+ija0boFNOnK4+xw8QOZUIIxiB8n8saGYNDonDJPynehv
WUFRJFjw9moBP7nXIyR5Fs/XTb8gkhNqE/jls8iVXKnOh53iUqWCwB2BUKx0+Jj0
gXX5UJNlcoDFNj3OK0y+kYhm6HGxjSw+ZYaXQJQaPSZDq1bQg1I=
=T1jQ
-----END PGP SIGNATURE-----

--=-3aE5EEhsEb5z1wV5vZe8--



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 09:31:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 09:31:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12710.32956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXLKJ-0002pi-ST; Tue, 27 Oct 2020 09:31:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12710.32956; Tue, 27 Oct 2020 09:31:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXLKJ-0002pb-PO; Tue, 27 Oct 2020 09:31:35 +0000
Received: by outflank-mailman (input) for mailman id 12710;
 Tue, 27 Oct 2020 09:31:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXLKI-0002ot-8f
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:31:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3377537e-7cbd-483f-b4e6-d875e136e568;
 Tue, 27 Oct 2020 09:31:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXLK9-0002Xi-UX; Tue, 27 Oct 2020 09:31:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXLK9-0003lm-HP; Tue, 27 Oct 2020 09:31:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXLK9-0000jZ-Gt; Tue, 27 Oct 2020 09:31:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kXLKI-0002ot-8f
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:31:34 +0000
X-Inumbo-ID: 3377537e-7cbd-483f-b4e6-d875e136e568
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3377537e-7cbd-483f-b4e6-d875e136e568;
	Tue, 27 Oct 2020 09:31:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oY6lrCGqb8Xji/G3ou+9DBEV54YoYVIVrXgHiq2XtcY=; b=W6fA7lENDRSCeGcFtQPWdVGmQ+
	xeM5xzjg6NMXYIuOs/dBYnX24IG8gKu8ElAPAUP3z7ifOc67HAHN7Ke69dJLqpEUK+4ra7txP2N2Z
	JxICebGzeEXUQXBGiNbfgvgQp6DdythH+MoI9+tTHTTPz3So7ezZwiuxG2OfbZyTibjk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXLK9-0002Xi-UX; Tue, 27 Oct 2020 09:31:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXLK9-0003lm-HP; Tue, 27 Oct 2020 09:31:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXLK9-0000jZ-Gt; Tue, 27 Oct 2020 09:31:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156250-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156250: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1dc887329a10903940501b43e8c0cc67af7c06d5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 09:31:25 +0000

flight 156250 qemu-mainline real [real]
flight 156256 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156250/
http://logs.test-lab.xenproject.org/osstest/logs/156256/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1dc887329a10903940501b43e8c0cc67af7c06d5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   68 days
Failing since        152659  2020-08-21 14:07:39 Z   66 days  156 attempts
Testing same since   156250  2020-10-27 00:09:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
    Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 51873 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 09:46:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 09:46:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12720.32972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXLYJ-0003tj-Bm; Tue, 27 Oct 2020 09:46:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12720.32972; Tue, 27 Oct 2020 09:46:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXLYJ-0003tc-8t; Tue, 27 Oct 2020 09:46:03 +0000
Received: by outflank-mailman (input) for mailman id 12720;
 Tue, 27 Oct 2020 09:46:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vej7=EC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXLYG-0003tX-OS
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:46:01 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.23])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8cf5222-6f65-40b3-9a8a-8f69c8c20c1d;
 Tue, 27 Oct 2020 09:45:57 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
 with ESMTPSA id R05874w9R9jt3Ji
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Tue, 27 Oct 2020 10:45:55 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vej7=EC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kXLYG-0003tX-OS
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:46:01 +0000
X-Inumbo-ID: b8cf5222-6f65-40b3-9a8a-8f69c8c20c1d
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.23])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b8cf5222-6f65-40b3-9a8a-8f69c8c20c1d;
	Tue, 27 Oct 2020 09:45:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603791956;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-ID:Subject:To:From:Date:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=+rHiVQMTt3NwVmMjrLIIZ1zBDbkzJm5NmfYzYF1RJ84=;
	b=b07vKBICZeep5QWgqtjTBVTeEE5lFh0EvINkrNinuqZdoBipr2RDJv6ljjaT/88vo3
	A0Zfkde+8z5u6KhLekMq+I0PF58MXa9uGC03UgNRCn+usvGgfO8Zd1oYgBn236G975Ai
	qyl8ocVEg+7nLbdx079pgmD71CTAGuYpWCmoo5lQ3MhObiogxfKFCweuCxHKejKxzyVR
	ewwx2aXsSf6I9l9CKrS3euYsAxcFLACti6W0juVYHNHIuSi/Ko17/L6GEKrzxyHcYJst
	/3ZOwZKeOroNBPurrss0KkJdTSGz1B17RwaPWiSa+EVFihAMg7cWfOccPxyRMAaPZ0V2
	nyqQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
	with ESMTPSA id R05874w9R9jt3Ji
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client did not present a certificate)
	for <xen-devel@lists.xenproject.org>;
	Tue, 27 Oct 2020 10:45:55 +0100 (CET)
Date: Tue, 27 Oct 2020 10:45:48 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: flawed Makefile dependencies in libacpi
Message-ID: <20201027104548.5823c554.olaf@aepfle.de>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/ERL4GRi0oNJ/V0HkyQYZzqP"; protocol="application/pgp-signature"

--Sig_/ERL4GRi0oNJ/V0HkyQYZzqP
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Every once in a while build.c fails to compile because ssdt_s3.h does not exist. The 'sed' command which creates the file appears a few lines further down in the build log.

I'm not familiar with make. I wonder if "build.o" should depend on "$(H_SRC)" instead of the expanded list of generated headers. Some sort of dependency on the individual files seems to exist, otherwise make would probably complain.

Olaf

--Sig_/ERL4GRi0oNJ/V0HkyQYZzqP
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+X7E0ACgkQ86SN7mm1
DoAFjxAAljUa5v4li8vp0YrLoz6nA7TMqdt9DUMtBvz0sg/WJ9MSPkTr8KDVe4Wc
MVM5I71PnKJ9CUjV4sy1CUi4J6dOgp/mGBDdhmpuRz/mAD9OjQbnmHkAeb/V+eI9
hKBsk//0ZceDPaH3UsOcLrZDji6tBubTd4kuUJAT7ir8OenXfXRnOSfap/ECiN7T
hT7YD5u+80eJ6nXJ+ql6m2Y2BUxwNeEEJcVx3klQcSks/O7PCnftP1orR9mgtxj5
szSfzzbwOK6SdOyvrRCnFfAGPHzRZ9u4jpK3DgnRzX0RqEcLjtporn25Ca8qKOOH
cATKxccLjPiNOJDcpD4sKw/fTEU/90UTIf8IEYYGEubqjLoKbgnBKFp7xdqbGHCd
SvIbD8GKyvoxpPzNaGCepkGZJOE/6OE91zHzrgCZuD+U6ZZ/6fSLcblx8+jN0M8f
cYuI+b1bXxT9mPCDb8DHqd1sdtNnK5tUCzVUBhRnEZf05zEb4XvMJ/ffBRZRDa5A
UEM1FMJiiX4JtGIY2MpPaefRghorxxX/iogO+LOoJasr2okoBd9tznm9dWNZNZbR
Uv4kb3s2l8ymTX5PNop6o+fw5g5Mo8LcBl/RENlIzu86bUkdJCpS+2+gGMvc7FCq
b4Sfj3XQh3o7X+FWNcz+V/FJj5Q7ixZUh2QVkwkqhzMagbwHLjo=
=0M+Z
-----END PGP SIGNATURE-----

--Sig_/ERL4GRi0oNJ/V0HkyQYZzqP--


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 09:59:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 09:59:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12730.32989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXLl8-0004wv-La; Tue, 27 Oct 2020 09:59:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12730.32989; Tue, 27 Oct 2020 09:59:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXLl8-0004wo-IL; Tue, 27 Oct 2020 09:59:18 +0000
Received: by outflank-mailman (input) for mailman id 12730;
 Tue, 27 Oct 2020 09:59:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y/3m=EC=epam.com=prvs=8569373f95=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kXLl7-0004wj-BW
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:59:17 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21ba9d9a-b6a2-4eb5-9b64-3a507a042dbf;
 Tue, 27 Oct 2020 09:59:16 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09R9twpK009217; Tue, 27 Oct 2020 09:59:10 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 by mx0a-0039f301.pphosted.com with ESMTP id 34ddh2dbu7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 27 Oct 2020 09:59:10 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6323.eurprd03.prod.outlook.com (2603:10a6:20b:159::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19; Tue, 27 Oct
 2020 09:59:06 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3477.029; Tue, 27 Oct 2020
 09:59:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=y/3m=EC=epam.com=prvs=8569373f95=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
	id 1kXLl7-0004wj-BW
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 09:59:17 +0000
X-Inumbo-ID: 21ba9d9a-b6a2-4eb5-9b64-3a507a042dbf
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 21ba9d9a-b6a2-4eb5-9b64-3a507a042dbf;
	Tue, 27 Oct 2020 09:59:16 +0000 (UTC)
Received: from pps.filterd (m0174677.ppops.net [127.0.0.1])
	by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 09R9twpK009217;
	Tue, 27 Oct 2020 09:59:10 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
	by mx0a-0039f301.pphosted.com with ESMTP id 34ddh2dbu7-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Tue, 27 Oct 2020 09:59:10 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XSjCH0CLlx0PRnyH0FkE5cz4kHpseMcJDYgwwhk3I0hF2pb31LDgcvmLk66iajH35DXifPOpozvAPgRM9EVwi74HsGNYmiSYmGrlEDHglrO+3xWLXBXr+ucO0upernZIq4pbSetnA2LRz+O756+Y2NlwyTbMnxAGy1u4hIb4A8qAPlowKZbD9DymtV3nL4KVFw0eIRC5KmYxmAaCO8SwqrqnuKtCcozHXarEbgVnEBhmMB9qqGmovF/aovnfcjPfjSLFutdg3moNqyWHiOSiM1X3IgX7uorXkLmld+9+y2MS9W3dAIqHed5VIspRK5I7w3fI7Qg7k/x8CRIkrY9ZXg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wfepSJ8iKy4SzAe8qRCLxQ8t3vxjdNyD8vio/vkrtAY=;
 b=l09II9CO3l71ISRxvitKrUXa0nt61P+3loN/9OytgvtCm6tag5YD3VcDvKac4XDxjlhxjgonMnEqSxAYx1Yr11eYBBrOxSgSseR6iBvv1r8k4t3cqQv6D3QXNrLXVbUFkoGha2cEVbPjQFSfDVmYxkUaCDIDbWNPe1xjGRKt4wd6c+dqlkI1ImjCIlUKdDdNguEbw58/VgVbYSEOoW08FgMHyUpjr3J5QpXZjXKVebF/tiN1Y8FsrGQ5R4PsU+1CWRB8SmlFjkqgMM9ps2pbcyILh5/uRLV2HGpZKqoO6p0YvY1kdBELJyHyj05pM+aHGFFHTl8ls3wX5iIylHpEkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wfepSJ8iKy4SzAe8qRCLxQ8t3vxjdNyD8vio/vkrtAY=;
 b=C+rhn56zw3cmD+L2BBYG6zbmhIV0xvTSQudUawV/opHOyNheBwzTHQKm+Q2XhlLvm74VXmEDKKKfafkh7ogqM7JwNIVUoZYtuueVuW3BsOwfabzTogghZBZVJnDY63pv3l66MBpu2eL/Sn8nSvTEReUL945Bqb+yBbc2kpuKjhiiySiwRCoeJJ2hQDn3qM//wYi6idrxlNFiLfRhBNgqzkfnNxRkm2wattTRgLhlrcVu2rfhGg1/2mh46FA/ddSa+0E3A45jk/9fSRfG6U/0bLsoBVIVuehQ99Z04gnUggHyAaePODFqEZUagGrvBlhMnba7pSWn2yY6STe41FG11w==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6323.eurprd03.prod.outlook.com (2603:10a6:20b:159::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19; Tue, 27 Oct
 2020 09:59:06 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3477.029; Tue, 27 Oct 2020
 09:59:06 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
        "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>,
        "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>,
        Julien Grall <julien@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        "wl@xen.org" <wl@xen.org>,
        =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
        "paul@xen.org"
	<paul@xen.org>, Artem Mygaiev <Artem_Mygaiev@epam.com>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>,
        xen-devel <xen-devel@lists.xenproject.org>,
        Rahul Singh <Rahul.Singh@arm.com>
Subject: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
Thread-Topic: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
Thread-Index: AQHWrEfNmyZZUV4zgEum5lDBP9dPxg==
Date: Tue, 27 Oct 2020 09:59:05 +0000
Message-ID: <ad25a5ba-f44c-4e88-f3b0-e0baa5efc5f6@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ffa43bb0-23dd-41e6-7605-08d87a5ef10c
x-ms-traffictypediagnostic: AM0PR03MB6323:
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <B1EE3E7AA0A8F049B3F471279A3639BB@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ffa43bb0-23dd-41e6-7605-08d87a5ef10c
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Oct 2020 09:59:05.4142
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6323

Hello, all!

While working on PCI passthrough on ARM (a partial RFC was published by ARM
earlier this year) I tried to implement some related changes in the toolstack.
One of the obstacles for ARM is the PCI backend related code: ARM is going to
fully emulate an ECAM host bridge in Xen, so no PCI backend/frontend pair is
going to be used.

If my understanding is correct, the functionality which is implemented by
pciback and the toolstack and which is relevant/needed for ARM is:

 1. pciback is used as a database for assignable PCI devices, e.g. xl
    pci-assignable-{add|remove|list} manipulates that list. So, whenever the
    toolstack needs to know which PCI devices can be passed through, it reads
    that from the relevant sysfs entries of pciback.

 2. pciback is used to hold the unbound PCI devices, e.g. when passing through a
    PCI device it needs to be unbound from the relevant device driver and bound
    to pciback (strictly speaking it is not required that the device is bound to
    pciback, but pciback is again used as a database of the passed-through PCI
    devices, so we can re-bind the devices back to their original drivers when
    the guest domain shuts down).

 3. The toolstack depends on Domain-0 for discovering the PCI device resources
    which are then permitted for the guest domain, e.g. MMIO ranges and IRQs
    are read from sysfs.

 4. The toolstack is responsible for resetting PCI devices being passed through,
    via sysfs/reset of the Domain-0 PCI bus subsystem.

 5. The toolstack is responsible for ensuring that devices are passed through
    with all relevant functions, e.g. for multifunction devices all the
    functions are passed to a domain and no partial passthrough is done.

 6. The toolstack cares about SR-IOV devices (am I correct here?).


I have implemented a really dirty POC for that which I would need to clean up
before showing, but before that I would like to get some feedback and advice on
how to proceed with the above. I suggest we:

 1. Move all pciback related code (which seems to become x86-only code) into a
    dedicated file, something like tools/libxl/libxl_pci_x86.c.

 2. Make the functionality now provided by pciback architecture-dependent, so
    tools/libxl/libxl_pci.c delegates actual assignable device list handling to
    that arch code and uses some sort of "ops", e.g.
    arch->ops.get_all_assignable, arch->ops.add_assignable etc. (This could also
    be done with "#ifdef CONFIG_PCIBACK", but that does not seem elegant.)
    Introduce tools/libxl/libxl_pci_arm.c to provide the ARM implementation.

 3. ARM only: as we do not have pciback on ARM we need some storage for the
    assignable device list: move that into Xen by extending struct pci_dev with
    "bool assigned" and providing sysctls for manipulating it, e.g.
    XEN_SYSCTL_pci_device_{set|get}_assigned and
    XEN_SYSCTL_pci_device_enum_assigned (to enumerate/get the list of
    assigned/not-assigned PCI devices). Could this also be interesting for x86?
    At the moment it seems that x86 does rely on pciback presence, so this
    change might not be interesting for the x86 world, but it may allow
    stripping pciback functionality a bit and making the code common to both
    ARM and x86.

 4. ARM only: it is not clear how to handle re-binding of the PCI driver on
    guest shutdown: we need to store the sysfs path of the original driver the
    device was bound to. Do we also want to store that in struct pci_dev?

 5. An alternative route for 3-4 could be to store that data in XenStore, e.g.
    MMIOs, IRQ, bind sysfs path etc. This would require more code on the Xen
    side to access XenStore, and it won't work if MMIOs/IRQs are passed via
    device tree/ACPI tables by the bootloaders.


Another big question is with respect to Domain-0 and PCI bus sysfs use. The
existing code for querying PCI device resources/IRQs and resetting those via
sysfs of Domain-0 is more than OK if Domain-0 is present and owns the PCI
hardware. But there are at least two cases when this is not going to work on
ARM: Dom0less setups, and setups where a hardware domain owns the PCI devices.

In our case we have a dedicated guest which is a sort of hardware domain (driver
domain DomD) which owns all the hardware of the platform, so we are interested
in implementing something that fits our design as well: with a DomD/hardware
domain it is not possible to access the relevant PCI bus sysfs entries from
Domain-0, as those live in DomD/hwdom. This is also true for Dom0less setups,
as there is no entity that can provide the same.

For that reason, in my POC I have introduced the following: I extended struct
pci_dev to hold an array of the PCI device's MMIO ranges and its IRQ, and:

 1. Provide an internal API for accessing the array of MMIO ranges and the IRQ.
    This can be used in both Dom0less and Domain-0 setups to manipulate the
    relevant data. The actual data can be read from a device tree/ACPI tables
    if enumeration is done by the bootloaders.

 2. For the Domain-0/DomD setup, add PHYSDEVOP_pci_device_set_resources so
    Domain-0 can set the relevant resources in Xen while enumerating PCI
    devices. This requires a change to the Linux kernel driver to work (I can
    provide more details if needed).

 3. For resetting devices we may want to provide that functionality on the Xen
    side as well, via introducing PHYSDEVOP_pci_device_reset.


I can probably implement an RFC series with all the above if we agree on the
approach. Comments are more than welcome.

Thank you,
Oleksandr


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:16:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:16:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12740.33005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXM1P-0006mr-75; Tue, 27 Oct 2020 10:16:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12740.33005; Tue, 27 Oct 2020 10:16:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXM1P-0006mk-3h; Tue, 27 Oct 2020 10:16:07 +0000
Received: by outflank-mailman (input) for mailman id 12740;
 Tue, 27 Oct 2020 10:16:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXM1O-0006mf-7q
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:16:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 126356f5-4a59-4741-9e19-b6145b081e87;
 Tue, 27 Oct 2020 10:16:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 90BEFAC3F;
 Tue, 27 Oct 2020 10:16:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603793764;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uio1N1/bRRMGiCixJ50d+oeJQt91viHk8fB+jmT1j2A=;
	b=f0iykRrB50vPYNALhXVWYcbICEgM5Ged8dFO2K6snCSVLxB/L0+FvwxD3WrSLSFbtxps+m
	XolQ6U5YYh+T8r2wH34fH6/hRmbBhBvcpb0cl/MlzxS/InMK0yHlL9I56/gDP9ykWAcDXp
	/Hh6o9T0oPTGK3K2HEUEBUPNWNyfVfU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 90BEFAC3F;
	Tue, 27 Oct 2020 10:16:04 +0000 (UTC)
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
To: Olaf Hering <olaf@aepfle.de>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201026204151.23459-1-olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
Date: Tue, 27 Oct 2020 11:16:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201026204151.23459-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 21:41, Olaf Hering wrote:
> Use a temporary file, and move it into place once done.
> The same pattern exists already for other dependencies.

This pattern is used when a rule consists of multiple commands
having their output appended to one another's. This isn't the
case here, so I'm afraid I'd like it to be made more obvious
what the gain / improvement is. That is - I don't mind the
change, if indeed it is for a good reason.

Jan

> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> ---
>  tools/libacpi/Makefile | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/libacpi/Makefile b/tools/libacpi/Makefile
> index c17f3924cc..2cc4cc585b 100644
> --- a/tools/libacpi/Makefile
> +++ b/tools/libacpi/Makefile
> @@ -43,7 +43,8 @@ all: $(C_SRC) $(H_SRC)
>  
>  $(H_SRC): $(ACPI_BUILD_DIR)/%.h: %.asl iasl
>  	iasl -vs -p $(ACPI_BUILD_DIR)/$*.$(TMP_SUFFIX) -tc $<
> -	sed -e 's/AmlCode/$*/g' -e 's/_aml_code//g' $(ACPI_BUILD_DIR)/$*.hex >$@
> +	sed -e 's/AmlCode/$*/g' -e 's/_aml_code//g' $(ACPI_BUILD_DIR)/$*.hex >$@.$(TMP_SUFFIX)
> +	mv -f $@.$(TMP_SUFFIX) $@
>  	rm -f $(addprefix $(ACPI_BUILD_DIR)/, $*.aml $*.hex)
>   
>  $(MK_DSDT): mk_dsdt.c
> 



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:18:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:18:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12744.33016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXM3h-0006xG-KO; Tue, 27 Oct 2020 10:18:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12744.33016; Tue, 27 Oct 2020 10:18:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXM3h-0006x9-HG; Tue, 27 Oct 2020 10:18:29 +0000
Received: by outflank-mailman (input) for mailman id 12744;
 Tue, 27 Oct 2020 10:18:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXM3g-0006wQ-9q
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:18:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db142a89-cd33-4480-962a-ee65fbfe4c82;
 Tue, 27 Oct 2020 10:18:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 13D63AD21;
 Tue, 27 Oct 2020 10:18:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603793906;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FBDlXNcbk66/HGikRnlnbPAO0NKwpzbMr56EI5mizNE=;
	b=nlewcj8gpXe9mhik9uvAKoKINZjaDGhMSTTKAaoEE0cci6/kUfP0bYx0Uc1EtnOdxuqgbR
	JAR/gjwbmUe4wMd1BuM3yhHOzB3fXt4b2boMMYEUAzSVe1wG/OeIbmZNM3v7Jo4a6l29fy
	QTqMl56iwijPPGqime2Kw9A0wVmOQjY=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 13D63AD21;
	Tue, 27 Oct 2020 10:18:26 +0000 (UTC)
Subject: Re: flawed Makefile dependencies in libacpi
To: Olaf Hering <olaf@aepfle.de>
References: <20201027104548.5823c554.olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d6e3a7b0-129e-9b47-8802-b71eb8642519@suse.com>
Date: Tue, 27 Oct 2020 11:18:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201027104548.5823c554.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.10.2020 10:45, Olaf Hering wrote:
> Every once in a while build.c fails to compile because ssdt_s3.h does not exist. The 'sed' command which creates the file appears a few lines down in the build log.
> 
> I'm not familiar with make. I wonder if "build.o" should depend on "$(H_SRC)" instead of the expanded list of generated headers.

Oh, yes, it definitely should. Will you make a patch?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:25:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:25:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12748.33029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMAK-0007qA-CN; Tue, 27 Oct 2020 10:25:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12748.33029; Tue, 27 Oct 2020 10:25:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMAK-0007q3-96; Tue, 27 Oct 2020 10:25:20 +0000
Received: by outflank-mailman (input) for mailman id 12748;
 Tue, 27 Oct 2020 10:25:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXMAJ-0007py-Ju
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:25:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 703f0479-bf37-4c36-86df-a4b94ad8a704;
 Tue, 27 Oct 2020 10:25:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 16659AD21;
 Tue, 27 Oct 2020 10:25:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603794318;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YZzsh69HGbVNf5I827All+M37rdPv9kdetVvOsKnw+A=;
	b=apvZNe2IRzcM80CJ+NJPnD2TmRofB608NoulqEe/5xoOjQ8Vx8EICe19CUqrrvSNBzWShC
	YndUvhhoUJnGyoTStZYPpwRnVzecPavCDf5R3ub8rKlejZopzmajNjUWSNRfy0YbZ6w1ru
	rmzXEc4zI32+GjkeL3XgIQsdFQVyPOg=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 16659AD21;
	Tue, 27 Oct 2020 10:25:18 +0000 (UTC)
Subject: Re: flawed Makefile dependencies in libacpi
From: Jan Beulich <jbeulich@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20201027104548.5823c554.olaf@aepfle.de>
 <d6e3a7b0-129e-9b47-8802-b71eb8642519@suse.com>
Message-ID: <da99d577-ce8d-8fcf-c157-5b91ee895097@suse.com>
Date: Tue, 27 Oct 2020 11:25:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <d6e3a7b0-129e-9b47-8802-b71eb8642519@suse.com>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.10.2020 11:18, Jan Beulich wrote:
> On 27.10.2020 10:45, Olaf Hering wrote:
>> Every once in a while build.c fails to compile because ssdt_s3.h does not exist. The 'sed' command which creates the file appears a few lines down in the build log.
>>
>> I'm not familiar with make. I wonder if "build.o" should depend on "$(H_SRC)" instead of the expanded list of generated headers.
> 
> Oh, yes, it definitely should. Will you make a patch?

Actually, it should only do so if this line were useful for anything.
But it's dead. Instead, the consumers of libacpi/ may need the
respective dependencies added. It is then relevant in which context
you see the failure - firmware/hvmloader/ or libs/light/?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:27:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:27:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12753.33041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMCJ-0007zM-St; Tue, 27 Oct 2020 10:27:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12753.33041; Tue, 27 Oct 2020 10:27:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMCJ-0007zF-Ph; Tue, 27 Oct 2020 10:27:23 +0000
Received: by outflank-mailman (input) for mailman id 12753;
 Tue, 27 Oct 2020 10:27:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vej7=EC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXMCI-0007z8-Dd
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:27:22 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75188af9-4a41-479e-be96-a27bac5ba22d;
 Tue, 27 Oct 2020 10:27:20 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
 with ESMTPSA id R05874w9RARI3Zk
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 27 Oct 2020 11:27:18 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603794439;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=nCQn5s7gnsJrUywKf494kqgGObHPVAzUlbs91jKYoKI=;
	b=YW310YFAeVaM92JS8DjNW8mouZL0B1vAgES4KdVHwpUPgWfaJpB4mvhQQSILD3DucA
	C0I05yfu5YfBtMKb5Shp5PBfxhR6NVi0q2WOLgPi5dzUxUwl8alC32T84S1RNOVVO7Ir
	GfuIjzudGsj5Uz8mVhVhoo6+JIKlhA8OWSvkHvYsXiHQeRvcwDdTa1zaUJU1tOAjVbJA
	c/5uc8DQ77cv1b9f0QgRl3V3+dvc9i2i1YTRKZHUW9c6R5bCcKfYuBhc69rt9Y6qfnd7
	9MgzxYuvPrdn8JEjwquE9obQdkLIwQvtpq2GRV5cHBBo5YLGArRlKg6JtwbQjcAxTUwe
	BdVg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
	with ESMTPSA id R05874w9RARI3Zk
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client did not present a certificate);
	Tue, 27 Oct 2020 11:27:18 +0100 (CET)
Date: Tue, 27 Oct 2020 11:27:03 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
Message-ID: <20201027112703.24d55a50.olaf@aepfle.de>
In-Reply-To: <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
References: <20201026204151.23459-1-olaf@aepfle.de>
	<68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/SxvxvkGtKYBEm1jNYi.ExHB"; protocol="application/pgp-signature"

--Sig_/SxvxvkGtKYBEm1jNYi.ExHB
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Am Tue, 27 Oct 2020 11:16:04 +0100
schrieb Jan Beulich <jbeulich@suse.com>:

> This pattern is used when a rule consists of multiple commands
> having their output appended to one another's.

My understanding is: a rule is satisfied as soon as the file exists; in this
case, that is once the invoked shell opens the file for writing. If this is
the case, the change should be applied.

Olaf

--Sig_/SxvxvkGtKYBEm1jNYi.ExHB
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+X9fcACgkQ86SN7mm1
DoBnwA/+P1nCOYXsIQg7ROhnP9jmb+hENWAJFRKHUiC3bwFlc08idfA86C+AXmPe
9e2OQsFyGL5dyu9maHkv/y6DbQxMjEO0h4F0c19vN/p4SgluUMNfqF4M0nEscbbx
vtatf5uyTzx/kipv8P2RXaFw7nX7dN6EKLsOcrXHAbQwLC3DL9So2iHpWU/e9RBP
OQAYVPt9404PLFxIrFMTOpF0wnrzvmIhc1+cehUuqUM3ZsGf/xnDhB59lH45aPIN
jiHy9ul2gLP2KgiEU1I9B6YVCpkyVXDQLGs8x/RXXmt2EUr/ao8mcu5SX9g1tLvP
6xit03mLCtxHc6wizY85kqbeiaYDxv5aOYnaXauGEW4Hg9myVupE7F2R4n5+NxdC
Kcy2Otbz0woKlgP0MtOuHDD5uGvRSO2IppmMJooORlbNWSEWfJLk62blz6dvnKWK
Cb0xICH/IwH6Gy+nbuDScZpJRcnw7a9Q7cBG9e+a9obCISCDlG7oVxkTogDuHvNj
0+81WryhhEOo9ORURoPlP3ol1kE/mMpRY/+BKCGb2+VIHYEjbA7Zts5KvJqv4Yag
UfkC6DNMwmZEGVXp+3Rm9pnztAP7Q01xLgyhPD/gXXygO/rLiWQlc5isYdddWggv
vZuULJwri0R94QJyIfiWPxh9DBnvAwC5GrWUnChS0DBrfSsaNZo=
=9uxu
-----END PGP SIGNATURE-----

--Sig_/SxvxvkGtKYBEm1jNYi.ExHB--


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:30:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:30:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12758.33053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMFT-0000PZ-C4; Tue, 27 Oct 2020 10:30:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12758.33053; Tue, 27 Oct 2020 10:30:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMFT-0000PS-92; Tue, 27 Oct 2020 10:30:39 +0000
Received: by outflank-mailman (input) for mailman id 12758;
 Tue, 27 Oct 2020 10:30:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vej7=EC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXMFR-0000PM-GD
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:30:37 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.54])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 893a64f6-c3c1-4f1d-810b-5a74f4ae42d0;
 Tue, 27 Oct 2020 10:30:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
 with ESMTPSA id R05874w9RAUY3b6
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 27 Oct 2020 11:30:34 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vej7=EC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kXMFR-0000PM-GD
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:30:37 +0000
X-Inumbo-ID: 893a64f6-c3c1-4f1d-810b-5a74f4ae42d0
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.54])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 893a64f6-c3c1-4f1d-810b-5a74f4ae42d0;
	Tue, 27 Oct 2020 10:30:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603794634;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=fxoKuNTxv9y4hpE5nRksZZWZ43+oumRhQ3184OpepgQ=;
	b=BvlBpleoBJEfdjJuFRuOv0k6Syyv3WbvJh6XTVVNldLThIwEX7rThE6YzIxYE+agWh
	xKeiT2hUDliT7TnULMUi3FxVwHgxV+wz82hIIY9yUhn5GX5y2fJ/TMAuyVcnQLADl6wC
	o+zv7qakKgF7jKvwjXjczOFFarH1rPL6xrh4Vo19Uzb8ViBSgXczyvudIHTa8v9Nx3Eh
	2ER8Glh8eU4F55HQWl0GM+mbxx2IUnQFcT3q5jI1jzWQ7YjZ64P5zm0eBsL2Vsb7E/ge
	nu4FgBsm3XON2mOynndh6bz1yfBXTn8zcqj7joGPsaBeROWQitqNjngOLZK8O+XHcIPG
	n6aA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
	with ESMTPSA id R05874w9RAUY3b6
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client did not present a certificate);
	Tue, 27 Oct 2020 11:30:34 +0100 (CET)
Date: Tue, 27 Oct 2020 11:30:24 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: flawed Makefile dependencies in libacpi
Message-ID: <20201027113024.36948380.olaf@aepfle.de>
In-Reply-To: <da99d577-ce8d-8fcf-c157-5b91ee895097@suse.com>
References: <20201027104548.5823c554.olaf@aepfle.de>
	<d6e3a7b0-129e-9b47-8802-b71eb8642519@suse.com>
	<da99d577-ce8d-8fcf-c157-5b91ee895097@suse.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/3GDVV37_kR=8HRgDTUmTbvi"; protocol="application/pgp-signature"

--Sig_/3GDVV37_kR=8HRgDTUmTbvi
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 27 Oct 2020 11:25:17 +0100, Jan Beulich <jbeulich@suse.com> wrote:

> In this context it then is relevant in which context you see the failure - firmware/hvmloader/ or libs/light/ ?

I do not have the log anymore to check this detail.

Olaf

--Sig_/3GDVV37_kR=8HRgDTUmTbvi
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+X9sEACgkQ86SN7mm1
DoDB8g//f/gY9NKfEv01Ui5VFcmTaxgpmIXbrRPSCAmtS3YEpOOYuHTMCsubcm/2
6YAyp4RMVhPu5dZBYr8GBu0WpkkaUZda9Ow0GFJtkgOIgnoE6mOE5G43fcXXTQ67
Zlm0CzSF5lgN9xgHj4vUuelaNAnnoLgEyH15N21BLoNU+dO+UyqyMnbajp90EIiM
DDDw+SLQPldCAVoAkAe8TjklgzTcV4ymfWkTVzgxqIzhFDJZj1IIDnZrij4wTGXJ
46D2p8ERHgvwzGlpueDshZ25Vc8Hr69mJAMS7kDwKNol/r7+a1ftCyAMKSI0O1pR
mamjVMKuSbVN//n+G9XIAcBThgUVyGJnlimoDQp7fTamn5EB85wcHF3D++hsNLXO
jBEug7F0huzpzN0xluHGw3swX6qzTc1rH5S0xWzB7KgSDenSJcxg3Onowe6/HKtP
udewETTNGj6xSkeAVM91Lwx16PMyTk0JvFF1FA7QRTCshKDx0LFtmlzXuUJtt6O7
j9hmRGHZ9w2k/7t5Lpid1reU51x2NX6erTIt+0o4Ud1FO3ITZJSITCwSWX2lg3+T
OCEBIcsPT3SqWWjhpUm31e6Ri+tw7XtiXMVA0rWzV2Ss8nx//HcYa3BqNBPARj1F
jf36+3qC1c9cGfy58W3JDBCGbidKZcdifMFwzN4mfHNRBzfRtdI=
=1Tic
-----END PGP SIGNATURE-----

--Sig_/3GDVV37_kR=8HRgDTUmTbvi--


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:37:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:37:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12763.33065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMM2-0000dH-4H; Tue, 27 Oct 2020 10:37:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12763.33065; Tue, 27 Oct 2020 10:37:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMM2-0000dA-1B; Tue, 27 Oct 2020 10:37:26 +0000
Received: by outflank-mailman (input) for mailman id 12763;
 Tue, 27 Oct 2020 10:37:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXMM0-0000d5-KX
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:37:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4bf5aad6-f7d6-4150-9813-d0ff97f97a7f;
 Tue, 27 Oct 2020 10:37:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 836CCAD21;
 Tue, 27 Oct 2020 10:37:22 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kXMM0-0000d5-KX
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:37:24 +0000
X-Inumbo-ID: 4bf5aad6-f7d6-4150-9813-d0ff97f97a7f
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4bf5aad6-f7d6-4150-9813-d0ff97f97a7f;
	Tue, 27 Oct 2020 10:37:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603795042;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8r7G2jFVLnZt3ajabHoTOfSi+r6FG3WhwJCqBm5PLhg=;
	b=g5Qk+En3qInviCcwj2jxI/9C2SLlkDVTXpdlOxBX3RB+C2s1w4PLRi1RS0ed1M0fD7Vf+Y
	dntrp+hqDL21lxChSmSnUKXsjRthYpq2+nDmXU5JMf9+65YQ8uKLFZQNugZvz1vnW1V5t/
	uMKDWhHsRcUDtTw4tjq+ybvOGM+EgTw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 836CCAD21;
	Tue, 27 Oct 2020 10:37:22 +0000 (UTC)
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
To: Olaf Hering <olaf@aepfle.de>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201026204151.23459-1-olaf@aepfle.de>
 <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
 <20201027112703.24d55a50.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
Date: Tue, 27 Oct 2020 11:37:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201027112703.24d55a50.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.10.2020 11:27, Olaf Hering wrote:
> On Tue, 27 Oct 2020 11:16:04 +0100, Jan Beulich <jbeulich@suse.com> wrote:
> 
>> This pattern is used when a rule consists of multiple commands
>> having their output appended to one another's.
> 
> My understanding is: a rule is satisfied as soon as the file exists.

No - once make has found that a rule's commands need running, it will
run the full set and only check again afterwards. If there were a
racing parallel make, things would be different, but such a race would
need addressing elsewhere: no two possibly racing threads of the make
process ought to be creating the same files, so the creation of the
files needs synchronizing in such a case.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:38:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:38:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12766.33076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMMi-0000lA-E2; Tue, 27 Oct 2020 10:38:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12766.33076; Tue, 27 Oct 2020 10:38:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMMi-0000l3-Aq; Tue, 27 Oct 2020 10:38:08 +0000
Received: by outflank-mailman (input) for mailman id 12766;
 Tue, 27 Oct 2020 10:38:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXMMh-0000kx-6x
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:38:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3352c78-5fcf-459e-92e2-de3a83449ef1;
 Tue, 27 Oct 2020 10:38:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 91D3FACBA;
 Tue, 27 Oct 2020 10:38:04 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kXMMh-0000kx-6x
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:38:07 +0000
X-Inumbo-ID: f3352c78-5fcf-459e-92e2-de3a83449ef1
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f3352c78-5fcf-459e-92e2-de3a83449ef1;
	Tue, 27 Oct 2020 10:38:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603795084;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=So6bAdwK3hurOT3c3zOnY3wChVhg335l5tUGEor9+ag=;
	b=NcaMyDlgZbSM3XtWgwn7qqUx1ThnCpF9B1ecr+hIZSAZnvRdbpUFrg1nMDgJk0datdw9mt
	+xOc8cXpVn+i2bSOd76O7oyu4Vp1a+kXScse9kmS/GaazRfQxvkZR+o9wnmu+nHeEVXmSR
	bkUZj4X0dh6+5Ru5rtQLQ0gSZiJnaXQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 91D3FACBA;
	Tue, 27 Oct 2020 10:38:04 +0000 (UTC)
Subject: Re: flawed Makefile dependencies in libacpi
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20201027104548.5823c554.olaf@aepfle.de>
 <d6e3a7b0-129e-9b47-8802-b71eb8642519@suse.com>
 <da99d577-ce8d-8fcf-c157-5b91ee895097@suse.com>
 <20201027113024.36948380.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0729e4ae-c513-9a1e-ab02-cf051cee6845@suse.com>
Date: Tue, 27 Oct 2020 11:38:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201027113024.36948380.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.10.2020 11:30, Olaf Hering wrote:
> On Tue, 27 Oct 2020 11:25:17 +0100, Jan Beulich <jbeulich@suse.com> wrote:
> 
>> In this context it then is relevant in which context you see the failure - firmware/hvmloader/ or libs/light/ ?
> 
> I do not have the log anymore to check this detail.

From looking at both makefiles, it's only libxl which is affected.
I guess I'll make a patch then.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:42:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12772.33088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMQt-0001bj-0J; Tue, 27 Oct 2020 10:42:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12772.33088; Tue, 27 Oct 2020 10:42:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMQs-0001bc-Tc; Tue, 27 Oct 2020 10:42:26 +0000
Received: by outflank-mailman (input) for mailman id 12772;
 Tue, 27 Oct 2020 10:42:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vej7=EC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXMQr-0001bX-LR
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:42:25 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.54])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9702663-dd25-4b81-8dfa-62095dbfc9df;
 Tue, 27 Oct 2020 10:42:24 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
 with ESMTPSA id R05874w9RAgM3fx
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 27 Oct 2020 11:42:22 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=vej7=EC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kXMQr-0001bX-LR
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:42:25 +0000
X-Inumbo-ID: e9702663-dd25-4b81-8dfa-62095dbfc9df
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.54])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id e9702663-dd25-4b81-8dfa-62095dbfc9df;
	Tue, 27 Oct 2020 10:42:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603795343;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=qF5Ylhn24R/t4ae1fOOGTbYfc4h8/HVKWIyjTvOpC4E=;
	b=tv9g/xucdz/9PvDHMDOcuLSM/NnARw96s4w8J3CJQeMo0BtlV5mvpcIrWa3fDkAaC2
	qEuW9t9QXePCDvFklGOYr2KTDY1Xdh6hFvrMKRug+kQEI6auiHHmUCTlcTeerdl94j9A
	x0QEMG+p9hlY96TkECEg/aqYjx6KiYrlQRZkdGqZ+usdtw79NeKrSk5RogVZ30fAjr+D
	3NGV+sgS/DcLGWrC+4Z9VvrmoqcM7XyX8dHj3KWHB96SNLsdAjc52Cs0ll54g8fU8YgG
	TNghxjF+VSicf0O3JCr7/DrTMv2hJvtcFcQX8hu03dapgQqSDupEh3glFbg6sy5Xax/y
	38bQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.2.2 DYNA|AUTH)
	with ESMTPSA id R05874w9RAgM3fx
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client did not present a certificate);
	Tue, 27 Oct 2020 11:42:22 +0100 (CET)
Date: Tue, 27 Oct 2020 11:42:14 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: flawed Makefile dependencies in libacpi
Message-ID: <20201027114214.22b36cb0.olaf@aepfle.de>
In-Reply-To: <0729e4ae-c513-9a1e-ab02-cf051cee6845@suse.com>
References: <20201027104548.5823c554.olaf@aepfle.de>
	<d6e3a7b0-129e-9b47-8802-b71eb8642519@suse.com>
	<da99d577-ce8d-8fcf-c157-5b91ee895097@suse.com>
	<20201027113024.36948380.olaf@aepfle.de>
	<0729e4ae-c513-9a1e-ab02-cf051cee6845@suse.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/dGw3DT2Npa8EK+yqn6gy.pE"; protocol="application/pgp-signature"

--Sig_/dGw3DT2Npa8EK+yqn6gy.pE
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 27 Oct 2020 11:38:04 +0100, Jan Beulich <jbeulich@suse.com> wrote:

> I guess I'll make a patch then.

Thanks. I briefly looked at the code and it is not obvious how it would need to look.

Olaf

--Sig_/dGw3DT2Npa8EK+yqn6gy.pE
Content-Type: application/pgp-signature
Content-Description: OpenPGP digital signature

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+X+YYACgkQ86SN7mm1
DoCFphAAla82wzwMA9rZxIfvfYvMoOJqsIcJWqUuRnBSHlW6K3a5z3K4yb7rxced
4tbTLF7afZLPshaEf2FX+H71BFoupKNxSZhwlcCHuPnBlBp7gnzNQe/hF3hBVxiF
aw8q+vvG6WbXlGb/HgnRer7O47WmBZ3M97gn+20POUs3ztHSTVrpGNYTqrfTGEca
iURiVZCzH4d4RQqgA44sTSmrpR/hyh9csSRq5Zh1xQtdAglsZRP+5PkGVbaHt0T9
9hgGgsvsTlDpizf5DCnTYP1sMCihnRL1eQjtWtE+AfRPaLyq0T+FiHYwhVXC0ayY
aa+ZnzrLu6Dv6NG0ziTChKssy981S9JOeRXEhRu88D+AvP08or+SQkOKjLBzP0fn
ftc0ofef1ASeTjA6Mqtvdbky/eVroRjY8mBBNKyzcIcZ7FCKSQiB9qUsChD/cTtO
RSGW83b8HWG/OABj4a+NRRkqlwv3d4cG6H6noWXN7lEsU/yc5Nf60RdlWEAWZQ3I
3FqWKk2dKh+T/G1rjNgC5AMB6vgdJiwYtSwOrZeYM/vs+91yNmgP0ckjOtndoBlC
0wl6yOz/YfmJ8kHugSMbZRhcNzzOs5FbTEbyjmyj8tgD/OiSrrvLlIpi7NO7j6Xa
tfs9QrAoyuBBvSTc6PqsMOOBq2dMbgjbOO5Q5zfcWclBgJw/QF8=
=kCmi
-----END PGP SIGNATURE-----

--Sig_/dGw3DT2Npa8EK+yqn6gy.pE--


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 10:57:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 10:57:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12779.33101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMfa-0002ep-Cb; Tue, 27 Oct 2020 10:57:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12779.33101; Tue, 27 Oct 2020 10:57:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMfa-0002ei-9R; Tue, 27 Oct 2020 10:57:38 +0000
Received: by outflank-mailman (input) for mailman id 12779;
 Tue, 27 Oct 2020 10:57:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCGY=EC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXMfY-0002eZ-GM
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:57:36 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad703685-fd15-43d8-98a1-faeea2ccb4bf;
 Tue, 27 Oct 2020 10:57:35 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=OCGY=EC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kXMfY-0002eZ-GM
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 10:57:36 +0000
X-Inumbo-ID: ad703685-fd15-43d8-98a1-faeea2ccb4bf
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ad703685-fd15-43d8-98a1-faeea2ccb4bf;
	Tue, 27 Oct 2020 10:57:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603796255;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=FlhFwdK4CvQ766qK2/X/WXSTzHrQlYTWB1SqfcIzsFA=;
  b=PBSgF0e9GTS0yFBEMiOw2/iVZ/DtLvKg6QgLM9uC89fZPTE7YYRBAZuC
   Fl8PJ9Xz5oRrlQT2g5pzvl2bgAN5Nh4DtqsDphCZ4cI0tcytN2xWCKLm5
   1ModNbiKfsFwjQ9LfsPbsRdi/VhmsGJHGDSudJLFSRKAvbZWRijb3Sdmy
   4=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: fqykTXJDfFdy1dgPWqYRxY0BAob9QP7nut7E7cT8mH7f48OlczC+2StQMxe2igiBbU1WA5eoeY
 NwGJsN9Le30nhhcMdKjj6BnR4P5nD+DAljg6INpKpImDPX/f8skfhlT1MVUwejY88JT1TE2fWd
 k8EUuhAOlXEmOpx4OaS+ZR/UaCs3/UL72F5x79ZRfg4O1kUR03nQihmi0nZ6f5YwEVq50OsDxt
 mlF6CsvFXOofQD6Dq0qn/YPZLkCR/6KMxe3oVsSdYowTNUtErpIjblNNHbqcC+uIp25uUNyqtf
 qYQ=
X-SBRS: None
X-MesageID: 29858074
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,423,1596513600"; 
   d="scan'208";a="29858074"
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
To: Jan Beulich <jbeulich@suse.com>, Olaf Hering <olaf@aepfle.de>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <20201026204151.23459-1-olaf@aepfle.de>
 <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
 <20201027112703.24d55a50.olaf@aepfle.de>
 <bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <24025dd2-2c61-7e92-a9b1-2433eea2e909@citrix.com>
Date: Tue, 27 Oct 2020 10:57:12 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 27/10/2020 10:37, Jan Beulich wrote:
> On 27.10.2020 11:27, Olaf Hering wrote:
>> On Tue, 27 Oct 2020 11:16:04 +0100, Jan Beulich <jbeulich@suse.com> wrote:
>>
>>> This pattern is used when a rule consists of multiple commands
>>> having their output appended to one another's.
>> My understanding is: a rule is satisfied as soon as the file exists.
> No - once make has found that a rule's commands need running, it'll
> run the full set and only check again afterwards.

It stops at the first command which fails.

Olaf is correct, but the problem here is an incremental build issue, not
a parallel build issue.

Intermediate files must not use the name of the final target, or a failure
followed by a re-build will use the (bogus) intermediate state rather than
rebuilding it.
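The temporary-file approach from the patch's subject line can be sketched as a hypothetical shell fragment (the real libacpi generators and file names differ; everything here is invented for illustration): write into a .tmp file and rename only on success, so a failed or interrupted recipe never leaves a half-written file under the target's own name for a later incremental build to trust.

```shell
#!/bin/sh
# Hypothetical sketch of the temporary-file pattern: the generated file
# appears under its final name only once every generation step has succeeded.
set -e
OUT=generated.h                 # invented name, stands in for a generated file
rm -f "$OUT" "$OUT.tmp"

printf '/* part 1 */\n'  > "$OUT.tmp"
printf '/* part 2 */\n' >> "$OUT.tmp"
# had either step failed, set -e stops here, leaving only the .tmp behind
mv "$OUT.tmp" "$OUT"            # publish the complete result in one step
```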

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 11:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 11:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12783.33113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMiB-0003WB-Q4; Tue, 27 Oct 2020 11:00:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12783.33113; Tue, 27 Oct 2020 11:00:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMiB-0003W4-N2; Tue, 27 Oct 2020 11:00:19 +0000
Received: by outflank-mailman (input) for mailman id 12783;
 Tue, 27 Oct 2020 11:00:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z16P=EC=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kXMiA-0003Vz-2A
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 11:00:18 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3cf6879f-c274-406e-b102-6c89e1415c50;
 Tue, 27 Oct 2020 11:00:16 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Z16P=EC=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
	id 1kXMiA-0003Vz-2A
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 11:00:18 +0000
X-Inumbo-ID: 3cf6879f-c274-406e-b102-6c89e1415c50
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3cf6879f-c274-406e-b102-6c89e1415c50;
	Tue, 27 Oct 2020 11:00:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603796416;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=bnzilUGUttNZba4BEWffCGoa6x7FSjyqhCYjTIOednw=;
  b=c8AeIn40fP4wh8isxdSahruQiAbggDSHzk47ITRnCIFdDQtY74KU6kFN
   juXg7u7Dw5gxFU04am/2Ackmm/RyGLINO4Oe2r04e3ujbA8w37jzu05F7
   LdjOeG01bSxKscwaui/PPzXxrQ9U72czVZc6gANY96cw8VU3lFYwmn7lo
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: fgYDooBJu1m9oJNlNgvljkvzOf4GAW6lL6MsihnZ7zJ0L8k4VKcEaNvwVoJI8SFAIPlyKw48h2
 exzprB6oSJrRUQSCqpzsNDLg1IB5HmCzO4lFep0xGVIWCCta+p4y2aUFQa7kzqTW/qJwHLF7hN
 lTG6S9iGLDGAzgRtu9jDOEDfNJZwKL1N1uKLWMBpsN0ALzHihxfPSoC80rYHBPAjW1v/sjsJK3
 WniYXv8nrszKiIkIrxA9i2MMOZsVsLk9uwUnrHcVptqtJgbOXamDStEc8HIQgZMralpuRO6uOW
 a9g=
X-SBRS: None
X-MesageID: 29824937
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,423,1596513600"; 
   d="scan'208";a="29824937"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, "Julien
 Grall" <julien@xen.org>
Subject: [PATCH] {x86,arm}/mm.c: Make populate_pt_range __init
Date: Tue, 27 Oct 2020 10:58:39 +0000
Message-ID: <20201027105839.129217-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

It's only called from another __init function.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
---
 xen/arch/arm/mm.c | 2 +-
 xen/arch/x86/mm.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 9c4b26bf07..dbd9f3fe4c 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1298,7 +1298,7 @@ int map_pages_to_xen(unsigned long virt,
     return xen_pt_update(virt, mfn, nr_mfns, flags);
 }
 
-int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
+int __init populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 {
     return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index b2f35b3e7d..1f7ddf318b 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5529,7 +5529,7 @@ int map_pages_to_xen(
     return rc;
 }
 
-int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
+int __init populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 {
     return map_pages_to_xen(virt, INVALID_MFN, nr_mfns, MAP_SMALL_PAGES);
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 11:00:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 11:00:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12784.33125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMiH-0003Yc-2J; Tue, 27 Oct 2020 11:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12784.33125; Tue, 27 Oct 2020 11:00:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMiG-0003YT-V6; Tue, 27 Oct 2020 11:00:24 +0000
Received: by outflank-mailman (input) for mailman id 12784;
 Tue, 27 Oct 2020 11:00:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCGY=EC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXMiE-0003Vz-V3
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 11:00:22 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e903df16-8414-4bfe-bdc8-51904a180445;
 Tue, 27 Oct 2020 11:00:21 +0000 (UTC)
X-Inumbo-ID: e903df16-8414-4bfe-bdc8-51904a180445
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603796421;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Wo3apLLJMSy0WeWORmBpKmV4LoQ9ADlHHgJPDqtSl7I=;
  b=KlGWEGUY8vlXHK15PFpHJH6HTPDPRF+F9yL6qs7ixXVXZjSMT8EBuTbp
   cgyuhbHrYEfpcTuAnscX0eIpsM6s03ahmcA3qsWBiGgzngjoJQPNACsvR
   mcU4OH1jkozbZmA4WDb5VaeSaLwQbATUgrk4HSoBd2lYAyvk0BwwNnG4s
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: bZplUjrN01QUtIH+uNoSOhd+eAgMQRPxEge4qEDqJU3M8Q4Cx6tSJyYBlYeUy8b0WxmCBP9Q+Y
 1uTx0ZWaK+aZ8xARgsU0XQ43PI75/zemEAdjWkk5AlAQi1W1cYzkgGMX1SZAeHe8dNNLOT0C1c
 VTywyVApl53tKgQLas5j6rkBRGQ97cbGAfl7zrfRUfz5gmGK+uOdDYXcrHAcjyJ2v3N0p4fBrz
 FQtJouiLGBS/l1YaNwvXywri5qkl0/aWqYvY9nuwgIw9tiEK60qUDpgcP+B+0efmU/9O6EVAWG
 bq0=
X-SBRS: None
X-MesageID: 29824965
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,423,1596513600"; 
   d="scan'208";a="29824965"
Subject: Re: [PATCH] {x86,arm}/mm.c: Make populate_pt_range __init
To: George Dunlap <george.dunlap@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <20201027105839.129217-1-george.dunlap@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a31f9bd0-8b2d-368c-7dac-b589840e9a34@citrix.com>
Date: Tue, 27 Oct 2020 11:00:15 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201027105839.129217-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 27/10/2020 10:58, George Dunlap wrote:
> It's only called from another __init function.
>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 11:01:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 11:01:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12789.33137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMj7-0003kX-IC; Tue, 27 Oct 2020 11:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12789.33137; Tue, 27 Oct 2020 11:01:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMj7-0003kQ-Ee; Tue, 27 Oct 2020 11:01:17 +0000
Received: by outflank-mailman (input) for mailman id 12789;
 Tue, 27 Oct 2020 11:01:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RClw=EC=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXMj6-0003kK-OS
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 11:01:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96ad4033-117e-4c46-b5b5-3a91aa9d1b90;
 Tue, 27 Oct 2020 11:01:14 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXMj3-0004XD-3q; Tue, 27 Oct 2020 11:01:13 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXMj2-0004qn-SV; Tue, 27 Oct 2020 11:01:13 +0000
X-Inumbo-ID: 96ad4033-117e-4c46-b5b5-3a91aa9d1b90
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=cdmmRHozHqzlbBzx38N1UiVxEXtQxsUpLic57nL5yAE=; b=kgpD7tAlL9Pk/qs9v1pe/4eX+D
	Olyf9lg0RuQo4tYVKzwhoNtFfM56F2Hwror5YX3agdZ05cxcuUH/AVApM1rgG5QXsasRaJYBp3O+m
	kn5BZniYD9kH/T5bBfmaEpUUrDc3lCRkjNTRmv3pdmDV5NsKw6C+2W2a3AXRPIQFHr3M=;
Subject: Re: [PATCH] {x86,arm}/mm.c: Make populate_pt_range __init
To: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20201027105839.129217-1-george.dunlap@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9c0e6973-5c4a-e505-37f4-dd60f17bc40a@xen.org>
Date: Tue, 27 Oct 2020 11:01:11 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201027105839.129217-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi George,

On 27/10/2020 10:58, George Dunlap wrote:
> It's only called from another __init function.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Roger Pau Monne <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> ---
>   xen/arch/arm/mm.c | 2 +-
>   xen/arch/x86/mm.c | 2 +-
>   2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 9c4b26bf07..dbd9f3fe4c 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1298,7 +1298,7 @@ int map_pages_to_xen(unsigned long virt,
>       return xen_pt_update(virt, mfn, nr_mfns, flags);
>   }
>   
> -int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
> +int __init populate_pt_range(unsigned long virt, unsigned long nr_mfns)
>   {
>       return xen_pt_update(virt, INVALID_MFN, nr_mfns, _PAGE_POPULATE);
>   }
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index b2f35b3e7d..1f7ddf318b 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5529,7 +5529,7 @@ int map_pages_to_xen(
>       return rc;
>   }
>   
> -int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
> +int __init populate_pt_range(unsigned long virt, unsigned long nr_mfns)
>   {
>       return map_pages_to_xen(virt, INVALID_MFN, nr_mfns, MAP_SMALL_PAGES);
>   }
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 11:07:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 11:07:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12795.33148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMof-0003zo-5m; Tue, 27 Oct 2020 11:07:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12795.33148; Tue, 27 Oct 2020 11:07:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMof-0003zh-2b; Tue, 27 Oct 2020 11:07:01 +0000
Received: by outflank-mailman (input) for mailman id 12795;
 Tue, 27 Oct 2020 11:06:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXMod-0003zc-Lu
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 11:06:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a73f9991-0bcc-47ee-9c75-ebaf74384797;
 Tue, 27 Oct 2020 11:06:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93E2BACA1;
 Tue, 27 Oct 2020 11:06:56 +0000 (UTC)
X-Inumbo-ID: a73f9991-0bcc-47ee-9c75-ebaf74384797
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603796816;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=O0vZw6E2rOMF4o+87GRNkEoLA4lfUr9gQhXHGUHdeYI=;
	b=Ji6bb8JUusQWqBIx/eSXOimBnC7S20LYf2EokjdqUzuQ3OYBtpT7QgX6DSGWGvfF0HdAkC
	bfvqPvE+LIz+oxtIVakZqzGeTxnGSTNJOthYQjwv1/tEOum5o/e0YerRsaa0evfYB3M7Vd
	kZ4sYg0BrtxCDzC2XFq4fvyuW1WA3pU=
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Olaf Hering <olaf@aepfle.de>
References: <20201026204151.23459-1-olaf@aepfle.de>
 <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
 <20201027112703.24d55a50.olaf@aepfle.de>
 <bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
 <24025dd2-2c61-7e92-a9b1-2433eea2e909@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3880bcbd-9281-10a5-7de5-f73bcf74557a@suse.com>
Date: Tue, 27 Oct 2020 12:06:56 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <24025dd2-2c61-7e92-a9b1-2433eea2e909@citrix.com>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.10.2020 11:57, Andrew Cooper wrote:
> On 27/10/2020 10:37, Jan Beulich wrote:
>> On 27.10.2020 11:27, Olaf Hering wrote:
>>> Am Tue, 27 Oct 2020 11:16:04 +0100
>>> schrieb Jan Beulich <jbeulich@suse.com>:
>>>
>>>> This pattern is used when a rule consists of multiple commands
>>>> having their output appended to one another's.
>>> My understanding is: a rule is satisfied as soon as the file exists.
>> No - once make has found that a rule's commands need running, it'll
>> run the full set and only check again afterwards.
> 
> It stops at the first command which fails.
> 
> Olaf is correct, but the problem here is an incremental build issue, not
> a parallel build issue.
> 
> Intermediate files must not use the name of the target, or a failure and
> re-build will use the (bogus) intermediate state rather than rebuilding it.

But there's no intermediate file here - the file gets created in one
go. Furthermore, doesn't make delete the target file(s) when a rule
fails? (One may not want to rely on this, and hence may indeed want
multi-part rules to update intermediate files with different names.)

Jan
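
The temporary-file pattern under discussion can be sketched as follows.
This is a minimal sketch with hypothetical tool and file names, not the
actual libacpi rules; note that GNU make only deletes a failed rule's
target when .DELETE_ON_ERROR is in effect (by default it deletes the
target only on interrupt):

```make
# Sketch only (hypothetical names).  Writing to a temporary name and
# renaming at the end means a failed or interrupted recipe can never
# leave a half-written acpi_table.h behind for a later incremental
# build to mistake for being up to date.
.DELETE_ON_ERROR:        # additionally, remove the target on recipe error

acpi_table.h: mk_table input.dat
	./mk_table input.dat > $@.tmp
	mv $@.tmp $@
```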


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 11:12:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 11:12:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12798.33160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMtx-0004s5-SU; Tue, 27 Oct 2020 11:12:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12798.33160; Tue, 27 Oct 2020 11:12:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXMtx-0004ry-PH; Tue, 27 Oct 2020 11:12:29 +0000
Received: by outflank-mailman (input) for mailman id 12798;
 Tue, 27 Oct 2020 11:07:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) id 1kXMpL-00046I-L3
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 11:07:43 +0000
Received: from mail-ed1-x543.google.com (unknown [2a00:1450:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2ef190e-2cbf-4c37-b705-1e0ce4afbe85;
 Tue, 27 Oct 2020 11:07:42 +0000 (UTC)
Received: by mail-ed1-x543.google.com with SMTP id 33so953289edq.13
 for <xen-devel@lists.xenproject.org>; Tue, 27 Oct 2020 04:07:42 -0700 (PDT)
Received: from C02ZJ1BNLVDN.emea.arm.com (52-67-201-31.ftth.glasoperator.nl.
 [31.201.67.52])
 by smtp.gmail.com with ESMTPSA id h8sm735126eds.51.2020.10.27.04.07.41
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 27 Oct 2020 04:07:41 -0700 (PDT)
X-Inumbo-ID: d2ef190e-2cbf-4c37-b705-1e0ce4afbe85
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=QhOFNmdP+x17aFRPk5F31RIidLvDON2O5YUEDajIylk=;
        b=Or1c3Yo+7Q0waEUXKiaP75uP7sdPNmRMvgO/O14pbHx0CiQ2i6ktSORNDhV/+fYKHA
         XajEjcSKB66aLLdOICupx/IKk/XbAd2ovNf65UzbwtMHAaw2Wkjtsb41FhbpuiqjMi5Q
         z3x8ZujPak0nIrYG2McCGWCD0Kl11lWwnGa2y51RGBqHsnFUauu7KTpzrJEI7TXqu0nW
         aSbI2Y0uPvhLQMxSJnsozrzDTlxovB9koHl+qxGdmAUQk3bU/2NO/Toc4jQ7ldSlaznA
         mv7W1jndoGArtuSE7tDv4b/bgrYwhywz++mheEWeapysBVa3nPqaJkKDLjZp3t1UDkyA
         kvOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=QhOFNmdP+x17aFRPk5F31RIidLvDON2O5YUEDajIylk=;
        b=PLHqk74f9/9FKacFv9i0sdrQHwOOY5kRlVGMjaD2B/IFcpB9cVlA30HMxkyW4PQXIB
         ClPHkt2JA8q5OKd4+lhhJJLCTo1H7myHYefndLJmQAcdYb0v7LaAPXw/vSZLidvnYViU
         5wDb4vaOPWhUM97pVNne6PL7sBVgottkhK2noWBfL3xm2udxoucGP9RCe7lBqADpHea2
         931ngECoUInACKLai00wXzy5aw7uRFKaNUNfNh77KM3WTixGIQ9mdpfZoQrISJjF0mAb
         bZ7iX20LeXtLJtRieQ50Vve7/OrRv7KeJEQvVD4cuHODMedXzkHJRex7BQU0canvWY74
         YIow==
X-Gm-Message-State: AOAM530/4borXjP7/G/JnVFS5Z6HumdDQVvpDges6BwfpF19fb66/yLh
	u+euCl4FNzPvXQXHFduMPOI=
X-Google-Smtp-Source: ABdhPJxv5B6i49ryFRqBgOtlU6TEizhzggiZggFruVog7eU/9F39FpxkWSTkRPTquDqsFQnPBTDnSQ==
X-Received: by 2002:aa7:c7cf:: with SMTP id o15mr1543834eds.15.1603796861867;
        Tue, 27 Oct 2020 04:07:41 -0700 (PDT)
From: Ash Wilding <ash.j.wilding@gmail.com>
X-Google-Original-From: Ash Wilding
To: julien@xen.org
Cc: Ash.Wilding@arm.com,
	Bertrand.Marquis@arm.com,
	Rahul.Singh@arm.com,
	Volodymyr_Babchuk@epam.com,
	jbeulich@suse.com,
	paul@xen.org,
	sstabellini@kernel.org,
	xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Date: Tue, 27 Oct 2020 12:07:40 +0100
Message-Id: <20201027110740.79646-1-ash.j.wilding@gmail.com>
X-Mailer: git-send-email 2.24.3 (Apple Git-128)
In-Reply-To: <9cf9f8d3-b699-de3c-781f-f7ad1b498899@xen.org>
References: <9cf9f8d3-b699-de3c-781f-f7ad1b498899@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi Julien,


> Would Arm be willing to add support for LSE before merging the
> SMMUv3?

(( Taking my Arm hat off for a second and speaking independently... ))

I've been toying with doing this in my own personal time, but I'm
unsure how long it would take (I'm unable to commit much time to it
right now). I'll let the Arm folks speak for themselves as to whether
they're able and willing to do it.

If not, I don't necessarily think pulling in Linux's LL/SC and LSE
atomics helpers should block merging the SMMUv3 driver if everything
else is OK after review; we could use Rahul's versions in the driver
for now and then merge Linux's helpers later.


> I would prefer to follow the same approach as Linux and allow Xen to 
> select at boot time which implementations to use. This would enable 
> distro to provide a single binary that boot on all Armv8 and still
> allow Xen to select the best set of instructions.

Yep good idea, agreed.

Note that while Linux uses the alternatives framework for LL/SC vs LSE,
it's still controlled by CONFIG_ARM64_LSE_ATOMICS, see [3]. This would
give us the best of both worlds - distros can build with the Kconfig =y
to enable a single image with runtime detection, while expert users can
still build a custom image to force use of LL/SC on v8.1+ systems should
they wish.


> I asked Jan to add this line in the commit message :). My concern was 
> that even if we provided a runtime switch (or sanity check for
> XSA-295), the GCC helpers would not be able to take advantage (the
> code is not written by Xen community).

Ahh yes, makes sense - all the more reason for us to get explicit
implementations into Xen sooner rather than later :-)


Thanks,
Ash.

> [1] https://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5d45ecabe3
> [2] https://xenbits.xen.org/xsa/advisory-295.html
[3] https://elixir.bootlin.com/linux/latest/source/arch/arm64/include/asm/lse.h#L7


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 11:40:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 11:40:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12809.33177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXNL8-0007XH-3T; Tue, 27 Oct 2020 11:40:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12809.33177; Tue, 27 Oct 2020 11:40:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXNL8-0007XA-04; Tue, 27 Oct 2020 11:40:34 +0000
Received: by outflank-mailman (input) for mailman id 12809;
 Tue, 27 Oct 2020 11:40:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXNL5-0007X3-RP
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 11:40:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 27acd177-d817-4b45-8b04-a63c1cdbf551;
 Tue, 27 Oct 2020 11:40:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 70BD8B21E;
 Tue, 27 Oct 2020 11:40:29 +0000 (UTC)
X-Inumbo-ID: 27acd177-d817-4b45-8b04-a63c1cdbf551
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603798829;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=ExArnadxcx3JAKnobevenpT6dgaB39uPFzazX6DJsIE=;
	b=sl8eD0k+0NI9L6smI9aauvdi0j6+ihgEa6nUfZ+V5bmq1S0k0bUsbEBjfFhQW96cBL3zWb
	k5ZOIpd4sb435A25dttiHI5lCUI5ZCcg+iGI5yEK8SLfbiwNrN1EX6TAyJ2T2trRIVKXiO
	5oDIcTr4PcK+FtRytl0t7rzBKNKT9X8=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony Perard <anthony.perard@citrix.com>, Olaf Hering <olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] libxl: fix libacpi dependency
Message-ID: <bd68e8f4-ce57-7798-f6d2-53e85319b8d4@suse.com>
Date: Tue, 27 Oct 2020 12:40:29 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

$(DSDT_FILES-y) depends on the recursive make having run in libacpi/,
such that the file(s) themselves were generated before compilation gets
attempted. The same, however, is also necessary for generated headers,
before source files including them can be compiled.

The dependency specified in libacpi's Makefile, otoh, is entirely
pointless nowadays - no compilation happens there anymore (except for
tools involved in building the generated files). Together with it, the
rule generating acpi.a can also go away.

Reported-by: Olaf Hering <olaf@aepfle.de>
Fixes: 14c0d328da2b ("libxl/acpi: Build ACPI tables for HVMlite guests")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Arguably we might also use $(ACPI_OBJS) instead of specifying just the
one object file we know has respective #include directives.

--- a/tools/libacpi/Makefile
+++ b/tools/libacpi/Makefile
@@ -89,11 +89,6 @@ iasl:
 	@echo 
 	@exit 1
 
-build.o: ssdt_s3.h ssdt_s4.h ssdt_pm.h ssdt_tpm.h ssdt_laptop_slate.h
-
-acpi.a: $(OBJS)
-	$(AR) rc $@ $(OBJS)
-
 clean:
 	rm -f $(C_SRC) $(H_SRC) $(MK_DSDT) $(C_SRC:=.$(TMP_SUFFIX))
 	rm -f $(patsubst %.c,%.hex,$(C_SRC)) $(patsubst %.c,%.aml,$(C_SRC)) $(patsubst %.c,%.asl,$(C_SRC))
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -32,7 +32,7 @@ ACPI_PATH  = $(XEN_ROOT)/tools/libacpi
 DSDT_FILES-$(CONFIG_X86) = dsdt_pvh.c
 ACPI_OBJS  = $(patsubst %.c,%.o,$(DSDT_FILES-y)) build.o static_tables.o
 ACPI_PIC_OBJS = $(patsubst %.o,%.opic,$(ACPI_OBJS))
-$(DSDT_FILES-y): acpi
+$(DSDT_FILES-y) build.o: acpi
 vpath build.c $(ACPI_PATH)/
 vpath static_tables.c $(ACPI_PATH)/
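
The general pattern being fixed - an object built from a source file
that #includes a generated header must depend on the generation step -
can be sketched standalone (hypothetical recipe; ssdt_s3.h is one of
the headers named in the removed build.o rule):

```make
# Minimal sketch, not the actual libacpi/libxl rules: without the
# ssdt_s3.h prerequisite on build.o, a parallel or incremental build
# may try to compile build.c before the generated header exists.
ssdt_s3.h: mk_ssdt
	./mk_ssdt > $@

build.o: build.c ssdt_s3.h
	$(CC) -c -o $@ build.c
```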
 


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 11:41:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 11:41:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12813.33189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXNLw-0007cg-Dp; Tue, 27 Oct 2020 11:41:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12813.33189; Tue, 27 Oct 2020 11:41:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXNLw-0007cZ-AN; Tue, 27 Oct 2020 11:41:24 +0000
Received: by outflank-mailman (input) for mailman id 12813;
 Tue, 27 Oct 2020 11:41:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXNLv-0007cU-Hq
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 11:41:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db6113ff-81c2-40a7-b3a2-a5dc99b1fb6f;
 Tue, 27 Oct 2020 11:41:20 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXNLs-0005Mo-3L; Tue, 27 Oct 2020 11:41:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXNLr-0003Xf-Op; Tue, 27 Oct 2020 11:41:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXNLr-0001UY-OI; Tue, 27 Oct 2020 11:41:19 +0000
X-Inumbo-ID: db6113ff-81c2-40a7-b3a2-a5dc99b1fb6f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e4VOOLRP1p6VFECEtFRVTzddHBlAeri2qW3GBJlCFS4=; b=LVpjTQz3X2A2mNNP5yNHKYt0C4
	+eWTt8WpWepl3BLUZUPd1KuiRpZLxp2i++TVTg/vH6YQocxmGPgYGG/Y7vyxxvQUeZQK7TOfK1VyT
	8SbITS1JXwMwMeJh+nDAVt9ewSQzsq+/hG0/7Na5ZXfrEPwzllWKAgeKb7JLUtnFTxOs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156251-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156251: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4525c8781ec0701ce824e8bd379ae1b129e26568
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 11:41:19 +0000

flight 156251 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156251/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl          10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-xsm      11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4525c8781ec0701ce824e8bd379ae1b129e26568
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   87 days
Failing since        152366  2020-08-01 20:49:34 Z   86 days  148 attempts
Testing same since   156251  2020-10-27 01:41:28 Z    0 days    1 attempts

------------------------------------------------------------
3376 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 641103 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 12:39:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 12:39:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12828.33208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXOGB-0003r9-E8; Tue, 27 Oct 2020 12:39:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12828.33208; Tue, 27 Oct 2020 12:39:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXOGB-0003r2-BB; Tue, 27 Oct 2020 12:39:31 +0000
Received: by outflank-mailman (input) for mailman id 12828;
 Tue, 27 Oct 2020 12:39:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXOGA-0003qx-OA
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 12:39:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e05b2688-5292-4713-868d-a87def622c3e;
 Tue, 27 Oct 2020 12:39:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXOG8-0006b1-La; Tue, 27 Oct 2020 12:39:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXOG8-0006za-AW; Tue, 27 Oct 2020 12:39:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXOG8-0007MH-A2; Tue, 27 Oct 2020 12:39:28 +0000
X-Inumbo-ID: e05b2688-5292-4713-868d-a87def622c3e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=F97QQ/4W98o5Kki4hQtw1TWEijiTNxYA4iCdrHSHno4=; b=A4aj/fkDYEIBw8UxE3XJlXZ5s6
	ePR5/wru7lflGy7qoX7HTMKtPLnshFW49k+UYyVOaSBEdAVqlxNswRxAW5AhFTQm69xWmYeAQLMxX
	xTzK+1Lgbx7JwlqOMDP98q+B3/cLhv1cb/QdJ+0BMmD2SIPZMaCbfHmmitr2DowjRKhU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156255-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156255: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=eb520b93d279e901a593c57e30649fb08f4290c5
X-Osstest-Versions-That:
    ovmf=a3212009d95bbcba7d08076aba2eee51eb1f8e7c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 12:39:28 +0000

flight 156255 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156255/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 eb520b93d279e901a593c57e30649fb08f4290c5
baseline version:
 ovmf                 a3212009d95bbcba7d08076aba2eee51eb1f8e7c

Last test of basis   156252  2020-10-27 01:41:47 Z    0 days
Testing same since   156255  2020-10-27 09:04:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Cosmo Lai <cosmo.lai@intel.com>
  Yunhua Feng <fengyunhua@byosoft.com.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a3212009d9..eb520b93d2  eb520b93d279e901a593c57e30649fb08f4290c5 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 12:55:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 12:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12833.33222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXOVa-0005aa-Qb; Tue, 27 Oct 2020 12:55:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12833.33222; Tue, 27 Oct 2020 12:55:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXOVa-0005aT-Mk; Tue, 27 Oct 2020 12:55:26 +0000
Received: by outflank-mailman (input) for mailman id 12833;
 Tue, 27 Oct 2020 12:55:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6MOI=EC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kXOVZ-0005aN-6O
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 12:55:25 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c019ee49-a27b-4477-85d6-67db177819c8;
 Tue, 27 Oct 2020 12:55:23 +0000 (UTC)
X-Inumbo-ID: c019ee49-a27b-4477-85d6-67db177819c8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603803323;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=jvMo3zUlhHZ29FEKKWNDQj4STOsiBHNcqKK1MMK43MI=;
  b=L72JS1c85Zh8o/wx2hIobHjMjpjU4hguhGAu1kXEB7milt0AGK3SXJvT
   I6Fbs2mpM+OqFnUcMjdU47B9QZvIUPw+3WNEv4A+QnfgCxAVpmfyumMtD
   P6V/NdepBHyQNsG9obTxv9SvCMoeAxhZD4jh2yPRlvXRiSmLfmJoXWIF/
   M=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 32gkcY/iNEG3GZUbD/HLoFeKkw4FHN9Sbu6kEQCjR83MsBU7zyjm2xgYA/c3agVklHCNJ2eGH6
 NSl79jTFppVsPxa2PBh0DQCnwpirlyW2+K4zcebGHKwrRa73egMzoY+cbOdaGYj1Fld1jDX7vn
 VeY3vxzO373bVhOVQ5qrGuLtLPeuBcxU6hZvRl3VhjYcGQtDN04rSQDuqB4s9YtHUwwd3SHwzB
 Fd5TFwIAmwaZb/xFFesbLvb9GFTQdpHB4p3iNM9SgD4VrrRHfP8hDQZ0axFPqoka9ZfM8OkGsh
 V+A=
X-SBRS: None
X-MesageID: 29915959
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,423,1596513600"; 
   d="scan'208";a="29915959"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XUrGLgvKryT1EAxN0dfP9qnFLQvKNhCamcHFgpbydXIBdqKT1YDvfmfMXpMi2HMIoHnX6WnRHytGUjZ2wdViVnTWPq91QR5VwvkNqfVyqIVDIZplEaBNvRWsB+4HD8EKq2/iJjrLWRV0ITeWPEntXnLFUDGkQ4AvR0UwxvW3YWu75eXLxcPB/44b8MTgehQJo/3xyqSRIObn7BTcnPDR5O9aHIcb+yA9GsDLrW8Q+4uBj9wLqijKiYMZ02/sVo8jLlY2EZLeRgUyt0fa0oskNrRg6B8tez7iEIH5BRFNd5TGy84LS9WyF79SD3VLjp4+uHqRx9LQg8wcOYp9ebIliA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8SQer9U+hp0ru/DBwTtroRhttF7NK6NWubEMpGsj8BA=;
 b=IfdjjMCDO2CBLz2ovB5E1GH0XxjDja0jptdGquj2tPdhK5605SyZLjTWCiSmRbBF4D1cqD/heQ3CULXhXF/Ektu2TccD9cLwz5k+djFhUsxL9T5HHFomRycYlHBYhvTCZxT+JmPlA2WnHg+vxWvb8l881QVUEQDhHAHFJuuwquzV9LUqAqYCVu6hdQMz7HLuZu6OQabPonelp4GcCRVxNQj1iZLh8Uk4ivr+SA6bWzWR64MMZ5o0DulShOTkRkeDx+b9Uf8ObcBt9d08m3h4o3ITnl2tl1pfN35EBc1Y1DXiMsUQ48fME9zHH0XYproU2SkqX119vvDE4duCf+hOFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8SQer9U+hp0ru/DBwTtroRhttF7NK6NWubEMpGsj8BA=;
 b=dFMMc0rGPTayUQ3SeACooSqk7RvZb2ijxLGbGxe50frVg61ZITKFvCDadRYl+IKzdrPZ3KvmsAs8e8ja+XhZl0JYFIDwei75jpoLxBOF9uozYkwhzeUIxhIN+rhWKTf3d4xnGadOAvpsXt4Ju3f+MQXz6/kW+ilTXOaTVKPmxRg=
Date: Tue, 27 Oct 2020 13:55:14 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>, "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, "wl@xen.org" <wl@xen.org>, "paul@xen.org"
	<paul@xen.org>, Artem Mygaiev <Artem_Mygaiev@epam.com>, Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>, xen-devel <xen-devel@lists.xenproject.org>,
	Rahul Singh <Rahul.Singh@arm.com>
Subject: Re: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
Message-ID: <20201027125104.axv26kdqljqsvufn@Air-de-Roger>
References: <ad25a5ba-f44c-4e88-f3b0-e0baa5efc5f6@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ad25a5ba-f44c-4e88-f3b0-e0baa5efc5f6@epam.com>
X-ClientProxiedBy: LO2P265CA0402.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:f::30) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 153f476a-2f98-431c-f715-08d87a778ef0
X-MS-TrafficTypeDiagnostic: DM6PR03MB4842:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4842BD5CACF77895298560038F160@DM6PR03MB4842.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 8WkXJGOS2TMoDt7rBjLBVe7lIK+almkvnu12kKCl4zd8nJpfDJWR3W9dqTtVXGk9Rxk0c+10BfyN+nuqB5O7ZoJdoAkEYYRbWqqfpRY73ijUbJq9WCGKIdb+FNijgsvsalJO/l3J1aU4tM9ZvzNlb1+bs/T15aFmhwFpHqOuVFCCfehcHzVXvSz0mGsdSqoH7MZjoErYG8M6LZ4OJfKIkjfQFP/aNYO3YLvJAjBmSkif4CBPm+vzm/PQ/OWkzOqAH0I8q0SyxsKsiwDB+T2RTjhpzRcOJJKPUMs3+GwHStb2Q0JoRenUzndlQSrdbrlQp+gPglyx5FFymPtq6IC3Dg==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(376002)(366004)(136003)(346002)(39860400002)(396003)(316002)(956004)(66476007)(54906003)(86362001)(26005)(66556008)(6916009)(33716001)(2906002)(16526019)(66946007)(9686003)(4326008)(6496006)(5660300002)(186003)(6666004)(1076003)(8936002)(85182001)(7416002)(8676002)(6486002)(478600001)(83380400001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: TMc9ccQ8iUuqx7owJxlBzDZHcd5sDJD/2vRZrUHEG3pHNDkXcbvlK7x1E5w5KtvkE9MK/ZvQhK8XZNtpuScAEELxW3vYoZtgE+stLhAsafdrJaYotWZBWio3NquxK+jlq4MEqMPtSbq3YSukxGeEfvAEbD7PUEQOMuFKfCXbmBHR5PVGshq68mKoR7rG1LKE6Lz5ztJF4G9qpoyYk0AYZs+Z7o9O1nwUjjq3Ilfp9OyGuPi7c2cONcOpbBM+/wyKliG4kf7yom/Sk4mkv1fwPXqWPR2/ZRUCJYGagujztukAn3wSL7/7YJkNiI8GaRa21llWbA8+pRoN85lTxTjEVQeZI65EsR/HOH9+qYcMonfW5hvEaLCJGLKPliUJNkulkS+tsmdtgMlZv2qaQjykN40uZn9Cs+/yBJHIl5dYu9IzcdXO/sb7KGP0fC0zZFN0RSWvPDcezG6GWGAEdRnmlrkaFWWT8gnJ2Ce9tr0kk6S36/Z6hKXTC9ucmYJdVN33nQx1JC+jZ1haju7kFxvYMZZOaiFLIEL5I8fc6+NQ9DwYKqa2eZTAefhljLC2ztCeu7yyns+TviYZDaih3+NtxCmshqmIjgLGiIxocSwzfOMpX0Jt2pFRv1dobc1GFw2LcLmVkR5g5AuJh/2zoG/vwA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 153f476a-2f98-431c-f715-08d87a778ef0
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Oct 2020 12:55:19.2759
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tPmdH4dz0GmC6A4APdaOTr+hEJZW3QFlfFhTNqJbLCBVCQidVufwB/36JtYQBTYXYcQTV335PdYAuI9THUJg1w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4842
X-OriginatorOrg: citrix.com

On Tue, Oct 27, 2020 at 09:59:05AM +0000, Oleksandr Andrushchenko wrote:
> Hello, all!
> 
> While working on PCI passthrough on ARM (a partial RFC was published by ARM
> earlier this year) I tried to implement some related changes in the toolstack.
> One of the obstacles for ARM is the presence of PCI backend related code: ARM is
> going to fully emulate an ECAM host bridge in Xen, so no PCI backend/frontend
> pair is going to be used.
> 
> If my understanding is correct, the functionality which is implemented by the
>      pciback and toolstack and which is relevant/needed for ARM is:
> 
>   1. pciback is used as a database for assignable PCI devices, e.g. xl
>      pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>      toolstack needs to know which PCI devices can be passed through it reads
>      that from the relevant sysfs entries of the pciback.
> 
>   2. pciback is used to hold the unbound PCI devices, e.g. when passing through a
>      PCI device it needs to be unbound from the relevant device driver and bound
>      to pciback (strictly speaking it is not required that the device is bound to
>      pciback, but pciback is again used as a database of the passed through PCI
>      devices, so we can re-bind the devices back to their original drivers when
>      guest domain shuts down)
> 
>   3. toolstack depends on Domain-0 for discovering PCI device resources which are
>      then permitted for the guest domain, e.g. MMIO ranges and IRQs are read from
>      sysfs
> 
>   4. toolstack is responsible for resetting PCI devices being passed through via
>      sysfs/reset of the Domain-0’s PCI bus subsystem
> 
>   5. toolstack is responsible for ensuring devices are passed with all relevant
>      functions, e.g. for multifunction devices all the functions are passed to
>      a domain and no partial passthrough is done
> 
>   6. toolstack cares about SR-IOV devices (am I correct here?)

I'm not sure I fully understand what this means. Toolstack cares about
SR-IOV as it cares about other PCI devices, but the SR-IOV
functionality is managed by the (dom0) kernel.

> 
> 
> I have implemented a really dirty POC for that which I would need to clean up
> before showing, but before that I would like to get some feedback and advice on
> how to proceed with the above. I suggest we:
> 
>   1. Move all pciback related code (which seems to become x86 code only) into a
>      dedicated file, something like tools/libxl/libxl_pci_x86.c
> 
>   2. Make the functionality now provided by pciback architecture dependent, so
>      tools/libxl/libxl_pci.c delegates actual assignable device list handling to
>      that arch code and uses some sort of “ops”, e.g.
>      arch->ops.get_all_assignable, arch->ops.add_assignable etc. (This can also
>      be done with “#ifdef CONFIG_PCIBACK”, but that seems inelegant). Introduce
>      tools/libxl/libxl_pci_arm.c to provide ARM implementation.

To be fair this is arch and OS dependent, since it's currently based
on sysfs which is Linux specific. So it should really be
libxl_pci_linux_x86.c or similar.

> 
>   3. ARM only: As we do not have pciback on ARM we need to have some storage for
>      assignable device list: move that into Xen by extending struct pci_dev with
>      “bool assigned” and providing sysctls for manipulating that, e.g.
>      XEN_SYSCTL_pci_device_{set|get}_assigned,
>      XEN_SYSCTL_pci_device_enum_assigned (to enumerate/get the list of
>      assigned/not-assigned PCI devices). Can this also be interesting for x86? At
>      the moment it seems that x86 does rely on pciback presence, so probably this
>      change might not be interesting for x86 world, but may allow stripping
>      pciback functionality a bit and making the code common to both ARM and x86.

How are you going to perform the device reset then? Will you assign
the device to dom0 after removing it from the guest so that dom0 can
perform the reset? You will need to use logic currently present in
pciback to do so IIRC.

It doesn't seem like a bad approach, but there are more consequences
than just how assignable devices are listed.

Also Xen doesn't currently know about IOMMU groups, so Xen would have
to gain this knowledge in order to know the minimal set of PCI devices
that can be assigned to a guest.

> 
>   4. ARM only: It is not clear how to handle re-binding of the PCI driver on
>      guest shutdown: we need to store the sysfs path of the original driver the
>      device was bound to. Do we also want to store that in struct pci_dev?

I'm not sure I follow you here. On shutdown the device would be
handed back to Xen?

We most certainly don't want to store a sysfs path (Linux-private
information) inside a Xen-specific struct (pci_dev).

>   5. An alternative route for 3-4 could be to store that data in XenStore, e.g.
>      MMIOs, IRQ, bind sysfs path etc. This would require more code on Xen side to
>      access XenStore and won’t work if MMIOs/IRQs are passed via device tree/ACPI
>      tables by the bootloaders.

As above, I think I need more context to understand what and why you
need to save such information.

> 
> Another big question is with respect to Domain-0 and PCI bus sysfs use. The
> existing code for querying PCI device resources/IRQs and resetting those via
> sysfs of Domain-0 is more than OK if Domain-0 is present and owns PCI HW. But,
> there are at least two cases when this is not going to work on ARM: Dom0less
> setups and when there is a hardware domain owning PCI devices.
> 
> In our case we have a dedicated guest which is a sort of hardware domain (driver
> domain DomD) which owns all the hardware of the platform, so we are interested
> in implementing something that fits our design as well: DomD/hardware domain
> makes it not possible to access the relevant PCI bus sysfs entries from Domain-0
> as those live in DomD/hwdom. This is also true for Dom0less setups as there is
> no entity that can provide the same.

You need some kind of channel to transfer this information from the
hardware domain to the toolstack domain. Some kind of protocol over
libvchan might be an option.

> For that reason in my POC I have introduced the following: extended struct
> pci_dev to hold an array of PCI device’s MMIO ranges and IRQ:
> 
>   1. Provide internal API for accessing the array of MMIO ranges and IRQ. This
>      can be used in both Dom0less and Domain-0 setups to manipulate the relevant
>      data. The actual data can be read from a device tree/ACPI tables if
>      enumeration is done by bootloaders.

I would be against storing this data inside of Xen if Xen doesn't have
to make any use of it. Does Xen need to know the MMIO ranges and IRQs
to perform its task?

If not, then there's no reason to store those in Xen. The hypervisor
is not the right place to implement a database-like mechanism for PCI
devices.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:35:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:35:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12846.33235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXP8H-0000fR-Rv; Tue, 27 Oct 2020 13:35:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12846.33235; Tue, 27 Oct 2020 13:35:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXP8H-0000fK-N9; Tue, 27 Oct 2020 13:35:25 +0000
Received: by outflank-mailman (input) for mailman id 12846;
 Tue, 27 Oct 2020 13:35:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o/8F=EC=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1kXP8G-0000fF-Pf
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:35:24 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88a35dc3-605a-48e0-a214-8394d436ff8a;
 Tue, 27 Oct 2020 13:35:23 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09RDUV5F054249;
 Tue, 27 Oct 2020 13:35:20 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2120.oracle.com with ESMTP id 34dgm3yjgm-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 27 Oct 2020 13:35:20 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09RDVI8m168151;
 Tue, 27 Oct 2020 13:35:19 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 34cx6vygrw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 27 Oct 2020 13:35:19 +0000
Received: from abhmp0015.oracle.com (abhmp0015.oracle.com [141.146.116.21])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 09RDZI5Y012414;
 Tue, 27 Oct 2020 13:35:18 GMT
Received: from char.us.oracle.com (/10.152.32.25)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 27 Oct 2020 06:35:18 -0700
Received: by char.us.oracle.com (Postfix, from userid 1000)
 id 8FA456A0121; Tue, 27 Oct 2020 09:37:01 -0400 (EDT)
X-Inumbo-ID: 88a35dc3-605a-48e0-a214-8394d436ff8a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 in-reply-to; s=corp-2020-01-29;
 bh=6SwjgoK6NmHrI7CpsdIL3yfEJog1phV7AEfu8seZA/M=;
 b=kRXDExu9qqJPjZrehu24qzSbTC0ClcfPsU+E1LCNqwZPgXaAuNivtu7tqhMfiZ6q0lpe
 Bc1rJ0kUziITVuLTbsP+xc0QqqexEpxCiNpw4wC3MvpUMWaUVfEDuHdObkmMq/Z/v6tH
 X6h2+gop8GjYD+39av1lHRI9HqyrzDcYyIEZ5VDgzqp4QFliovcuwBR6IXhG+0YeYv6n
 zZ4tglD5BG/aTAmPoqTvmoggu26V0I/2ZQQVXMhKytv7dCCx8onZQvV3Z5ohpk4C1U1/
 TQXQoHQHMUjEUi7pbhhlMvU0zdedoahfmbvsem7LF20QVv3dtc1iiQTSPRaKivdbHjOp Fg== 
Date: Tue, 27 Oct 2020 09:37:01 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
        xen-devel@lists.xenproject.org, hch@lst.de
Subject: Re: [PATCH] fix swiotlb panic on Xen
Message-ID: <20201027133701.GB6077@char.us.oracle.com>
References: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9786 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 phishscore=0 spamscore=0
 bulkscore=0 malwarescore=0 mlxlogscore=999 mlxscore=0 suspectscore=2
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010270085
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9786 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 impostorscore=0
 adultscore=0 bulkscore=0 spamscore=0 phishscore=0 mlxlogscore=999
 suspectscore=2 clxscore=1011 mlxscore=0 malwarescore=0 priorityscore=1501
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010270085

On Mon, Oct 26, 2020 at 05:02:14PM -0700, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> kernel/dma/swiotlb.c:swiotlb_init gets called first and tries to
> allocate a buffer for the swiotlb. It does so by calling
> 
>   memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
> 
> If the allocation fails, no_iotlb_memory is set.
> 
> 
> Later during initialization swiotlb-xen comes in
> (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
> is != 0, it thinks the memory is ready to use when actually it is not.
> 
> When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
> and since no_iotlb_memory is set the kernel panics.
> 
> Instead, if swiotlb-xen.c:xen_swiotlb_init knew the swiotlb hadn't been
> initialized, it would do the initialization itself, which might still
> succeed.
> 
> 
> Fix the panic by setting io_tlb_start to 0 on swiotlb initialization
> failure, and also by setting no_iotlb_memory to false on swiotlb
> initialization success.

Should this have a Fixes: flag?

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index c19379fabd20..9924214df60a 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -231,6 +231,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
>  	}
>  	io_tlb_index = 0;
> +	no_iotlb_memory = false;
>  
>  	if (verbose)
>  		swiotlb_print_info();
> @@ -262,9 +263,11 @@ swiotlb_init(int verbose)
>  	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
>  		return;
>  
> -	if (io_tlb_start)
> +	if (io_tlb_start) {
>  		memblock_free_early(io_tlb_start,
>  				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
> +		io_tlb_start = 0;
> +	}
>  	pr_warn("Cannot allocate buffer");
>  	no_iotlb_memory = true;
>  }
> @@ -362,6 +365,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
>  	}
>  	io_tlb_index = 0;
> +	no_iotlb_memory = false;
>  
>  	swiotlb_print_info();
>  


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:44:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:44:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12854.33247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPGh-0001bE-Nj; Tue, 27 Oct 2020 13:44:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12854.33247; Tue, 27 Oct 2020 13:44:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPGh-0001b7-JP; Tue, 27 Oct 2020 13:44:07 +0000
Received: by outflank-mailman (input) for mailman id 12854;
 Tue, 27 Oct 2020 13:44:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kXPGg-0001b2-Az
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8df44da4-c4c6-4015-899c-7111f422603c;
 Tue, 27 Oct 2020 13:44:03 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGd-0007vu-NW
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:03 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGd-00023F-MO
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:03 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kXPGb-0002Uo-Rf; Tue, 27 Oct 2020 13:44:01 +0000
X-Inumbo-ID: 8df44da4-c4c6-4015-899c-7111f422603c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=W8kgjbVEV7/nGoVLukrqwqdL8zpPCmjDYWvWoU+BKAk=; b=bdReNmeSoCnBKRfDwUoTsyN34u
	RRGfoqiebkV4ULig7/g08T7mo5/wbAB6juGozae7rwBpjdgbmvIszTHjaq5hAzeOyZ7dZXkfYzb0q
	65elxdgA3K8NqnF+afNDJS9xYP1GmHAQ95++onQbt5FoKQr1+GX8bnWfmTEQqYFVUWGQ=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 1/7] README: Fix a typo
Date: Tue, 27 Oct 2020 13:43:48 +0000
Message-Id: <20201027134354.25561-2-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201027134354.25561-1-iwj@xenproject.org>
References: <20201027134354.25561-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 README | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README b/README
index 1703e076..ef6c4e60 100644
--- a/README
+++ b/README
@@ -655,7 +655,7 @@ HostProp_<host>_PowerILOM
         unsupported     Fails whenever a power operation is needed
 
         msw [--apc6] <pdu> <port-name-regexp|port-num>
-             Control and APC masterswitch via SNMP.  The SNMP
+             Control an APC masterswitch via SNMP.  The SNMP
              community is `private'.  See the `pdu-msw' script.
 
         ipmi <mgmt> [<user> [<pass> [<ipmitool options...>]]]
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:44:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:44:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12855.33259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPGn-0001d4-0J; Tue, 27 Oct 2020 13:44:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12855.33259; Tue, 27 Oct 2020 13:44:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPGm-0001cx-SS; Tue, 27 Oct 2020 13:44:12 +0000
Received: by outflank-mailman (input) for mailman id 12855;
 Tue, 27 Oct 2020 13:44:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kXPGl-0001b2-6z
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09414b3a-96fd-4658-97f5-58c34c5343a6;
 Tue, 27 Oct 2020 13:44:03 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGd-0007vr-Kn
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:03 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGd-000234-Ib
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:03 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kXPGb-0002Uo-K4; Tue, 27 Oct 2020 13:44:01 +0000
X-Inumbo-ID: 09414b3a-96fd-4658-97f5-58c34c5343a6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=BgQgKZzhbqKo+V2i8b0wx7lVtbj5VfAzFi4byRnMZ84=; b=TJ9614qkxEA6MdlJYojOU8Q/PX
	BVOHQS8xNlSeTDKmzF9pmnLzDVTFFrjtoJtt6bRJGg+n8Jtw46Kq2wH4uvBx6Y6WqDbgC6wU6sC8S
	hhwi6PGVAgy6dDCPsi70ummYDqeDVJZgepFqJ+3aeKQiM6kW4DDdHctJlKPjWZ2myM8k=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 0/7] Prepare for ServerTech PDUs
Date: Tue, 27 Oct 2020 13:43:47 +0000
Message-Id: <20201027134354.25561-1-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We have taken delivery of two Servertech PDUs which do
(near-)zero-crossing switching.  We hope these will not suffer from
the relays welding shut like our APC PDUs do.

We will control these PDUs via SNMP, as we do for the APC PDUs, but
each PDU manufacturer uses their own SNMP range, so adjustments to the
code and configuration are needed.

These new arrangements have been tested successfully in a mockup
environment.

Ian Jackson (7):
  README: Fix a typo
  pdu-snmp: Rename from pdu-msw
  pdu-snmp: Centralise base OIDs
  pdu-snmp: Refactor model handling
  pdu-snmp: Support ServerTech PDUs "Pro 1/2" aka "Sentry4"
  PDU::snmp, PDU::msw: Rename from msw to snmp
  pdu-snmp: Fix sleeping

 Osstest/PDU/msw.pm  | 14 +-------------
 Osstest/PDU/snmp.pm | 39 +++++++++++++++++++++++++++++++++++++++
 README              |  9 ++++++---
 pdu-msw => pdu-snmp | 45 +++++++++++++++++++++++++++++++++++----------
 4 files changed, 81 insertions(+), 26 deletions(-)
 create mode 100644 Osstest/PDU/snmp.pm
 rename pdu-msw => pdu-snmp (78%)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:44:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12856.33271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPGr-0001gR-E6; Tue, 27 Oct 2020 13:44:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12856.33271; Tue, 27 Oct 2020 13:44:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPGr-0001gG-9w; Tue, 27 Oct 2020 13:44:17 +0000
Received: by outflank-mailman (input) for mailman id 12856;
 Tue, 27 Oct 2020 13:44:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kXPGq-0001b2-78
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca71bba4-5000-4fcb-9d25-00ed036ea09f;
 Tue, 27 Oct 2020 13:44:04 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGe-0007w0-7T
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGe-00024D-5h
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kXPGc-0002Uo-Eu; Tue, 27 Oct 2020 13:44:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kXPGq-0001b2-78
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:16 +0000
X-Inumbo-ID: ca71bba4-5000-4fcb-9d25-00ed036ea09f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ca71bba4-5000-4fcb-9d25-00ed036ea09f;
	Tue, 27 Oct 2020 13:44:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=juiCgnMY9/QgNhWg1mZv5w6KyG10Ju4wvLuPHHdxwak=; b=cfmBjS2VvCrPXT2Enw1FWAmqvT
	+e13+JUKzFNeQ7nHxEEzB4eNuVRR8UJNo70aN+Eb7Hnpjqlp6KZ34HUA1tdHOYrsO9AbQMLpAiy85
	+LT8/tLo2aelV1/pvS+qUY3siqYvkVbzKFZyHjhviEhlv5ANufpOWqQivOU37qULxlCA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGe-0007w0-7T
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGe-00024D-5h
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGc-0002Uo-Eu; Tue, 27 Oct 2020 13:44:02 +0000
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 3/7] pdu-snmp: Centralise base OIDs
Date: Tue, 27 Oct 2020 13:43:50 +0000
Message-Id: <20201027134354.25561-4-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201027134354.25561-1-iwj@xenproject.org>
References: <20201027134354.25561-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Do not hardcoode .3 and .4 in the main logic.

No functional change.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-snmp | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/pdu-snmp b/pdu-snmp
index 581a60b0..a4918f53 100755
--- a/pdu-snmp
+++ b/pdu-snmp
@@ -27,8 +27,11 @@ use Net::SNMP;
 use Data::Dumper;
 
 my $community= 'private';
+
 my $baseoid= '.1.3.6.1.4.1.318.1.1.4.4.2.1';
 my $baseoid_write= "$baseoid.3";
+my $baseoid_name= "$baseoid.4";
+my $baseoid_read= "$baseoid.3";
 
 while (@ARGV && $ARGV[0] =~ m/^-/) {
     $_ = shift @ARGV;
@@ -52,7 +55,7 @@ die "SNMP error $error " unless defined $session;
 
 sub getname ($) {
     my ($port) = @_;
-    my $oid= "$baseoid.4.$port";
+    my $oid= "$baseoid_name.$port";
     my $res= $session->get_request($oid);
     if ($res) {
         my $name= $res->{$oid};
@@ -96,7 +99,7 @@ if ($outlet =~ m/^\d+$/) {
     ($useport,$usename)= @{ $found[0] };
 }
 
-my $read_oid= "$baseoid.3.$useport";
+my $read_oid= "$baseoid_read.$useport";
 my $write_oid= "$baseoid_write.$useport";
 
 my @map= (undef, qw(
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:44:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:44:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12857.33283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPGw-0001ky-MR; Tue, 27 Oct 2020 13:44:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12857.33283; Tue, 27 Oct 2020 13:44:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPGw-0001kn-JI; Tue, 27 Oct 2020 13:44:22 +0000
Received: by outflank-mailman (input) for mailman id 12857;
 Tue, 27 Oct 2020 13:44:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kXPGv-0001b2-7P
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a14440a3-c76a-4308-aae1-ceb979216295;
 Tue, 27 Oct 2020 13:44:04 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGd-0007vx-UJ
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:03 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGd-00023g-TN
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:03 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kXPGc-0002Uo-7J; Tue, 27 Oct 2020 13:44:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kXPGv-0001b2-7P
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:21 +0000
X-Inumbo-ID: a14440a3-c76a-4308-aae1-ceb979216295
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a14440a3-c76a-4308-aae1-ceb979216295;
	Tue, 27 Oct 2020 13:44:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=xllzJu2rVf2X8UnmOfyzoIVs8umEVd2ts2o3jPuGX7A=; b=QERZ262VuOMUg98KAxp3jEQxUj
	OcoMDfabWuGwEjNd/YgRs/CfWLM8/rKXq6mRNfOSyJu3PW2z8p1uNTZTZFRb3LBsJ0VB6w67d6IeN
	u4h7mudX1C74P9dWyiYR7M/U5DXd6cP92v/zxswYvUuwdxvxbVLMG63H7inddR5jhbXg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGd-0007vx-UJ
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:03 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGd-00023g-TN
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:03 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGc-0002Uo-7J; Tue, 27 Oct 2020 13:44:02 +0000
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 2/7] pdu-snmp: Rename from pdu-msw
Date: Tue, 27 Oct 2020 13:43:49 +0000
Message-Id: <20201027134354.25561-3-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201027134354.25561-1-iwj@xenproject.org>
References: <20201027134354.25561-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We are going to make this script control PDUs other than APC ones.

No overall functional change for internal callers.  Anyone out-of-tree
using this script will need to change the name of the program they run.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/PDU/msw.pm  | 2 +-
 README              | 2 +-
 pdu-msw => pdu-snmp | 6 +++---
 3 files changed, 5 insertions(+), 5 deletions(-)
 rename pdu-msw => pdu-snmp (95%)

diff --git a/Osstest/PDU/msw.pm b/Osstest/PDU/msw.pm
index 19d9f56b..614216d4 100644
--- a/Osstest/PDU/msw.pm
+++ b/Osstest/PDU/msw.pm
@@ -33,7 +33,7 @@ sub new {
 sub pdu_power_state {
     my ($mo, $on) = @_;
     my $onoff= $on ? "on" : "off";
-    system_checked("./pdu-msw @{ $mo->{Args} } $onoff");
+    system_checked("./pdu-snmp @{ $mo->{Args} } $onoff");
 }
 
 1;
diff --git a/README b/README
index ef6c4e60..70f8ae73 100644
--- a/README
+++ b/README
@@ -656,7 +656,7 @@ HostProp_<host>_PowerILOM
 
         msw [--apc6] <pdu> <port-name-regexp|port-num>
              Control an APC masterswitch via SNMP.  The SNMP
-             community is `private'.  See the `pdu-msw' script.
+             community is `private'.  See the `pdu-snmp' script.
 
         ipmi <mgmt> [<user> [<pass> [<ipmitool options...>]]]
              Use IPMI by (by running ipmitool).  <mgmt> is the name or
diff --git a/pdu-msw b/pdu-snmp
similarity index 95%
rename from pdu-msw
rename to pdu-snmp
index c57f9f7c..581a60b0 100755
--- a/pdu-msw
+++ b/pdu-snmp
@@ -19,7 +19,7 @@
 
 my $usagemsg= <<END;
 usage:
-  pdu-msw SWITCH-DNS-NAME PORT-NAME-REGEXP|PORT [[delayed-]on|off|0|1|reboot]
+  pdu-snmp SWITCH-DNS-NAME PORT-NAME-REGEXP|PORT [[delayed-]on|off|0|1|reboot]
 END
 
 use strict qw(refs vars);
@@ -89,7 +89,7 @@ if ($outlet =~ m/^\d+$/) {
                    ($t->[2] ? '*' : ''),
                    $t->[0], $t->[1]);
         }
-        die "pdu-msw $dnsname: ".
+        die "pdu-snmp $dnsname: ".
             (@found ? "multiple ports match" : "no ports match").
             "\n";
     }
@@ -119,7 +119,7 @@ sub get () {
 
 sub show () {
     my $mean = get();
-    printf "pdu-msw $dnsname: #%s \"%s\" = %s\n", $useport, $usename, $mean;
+    printf "pdu-snmp $dnsname: #%s \"%s\" = %s\n", $useport, $usename, $mean;
     return $mean;
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:44:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:44:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12858.33295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPH1-0001q0-Vw; Tue, 27 Oct 2020 13:44:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12858.33295; Tue, 27 Oct 2020 13:44:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPH1-0001pt-Rm; Tue, 27 Oct 2020 13:44:27 +0000
Received: by outflank-mailman (input) for mailman id 12858;
 Tue, 27 Oct 2020 13:44:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kXPH0-0001b2-7W
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:26 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 711c59bb-aa35-4855-b496-b03771642e22;
 Tue, 27 Oct 2020 13:44:04 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGe-0007w6-JZ
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGe-00024r-If
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kXPGc-0002Uo-Td; Tue, 27 Oct 2020 13:44:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kXPH0-0001b2-7W
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:26 +0000
X-Inumbo-ID: 711c59bb-aa35-4855-b496-b03771642e22
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 711c59bb-aa35-4855-b496-b03771642e22;
	Tue, 27 Oct 2020 13:44:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=ht5aMArUMsbRZegTrAAwD7lR0CIO4rForYrTexZof/g=; b=6No575w7BLbLgwxOa+OluzfAhV
	eSOnAue4Va2681FC/CaJB3Z3jKGahvAu4JKegrHsj7GkzaGMVK2F6OVM3PPaeYF/GcOTz7zlSTvwR
	xVHz3Y9wyCMxKVmCCVfFfXzGJg2bsGjtaiyz7rq8UTE9qlfJAl6hV7qNiuPsVGn5Gx6I=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGe-0007w6-JZ
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGe-00024r-If
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGc-0002Uo-Td; Tue, 27 Oct 2020 13:44:02 +0000
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 5/7] pdu-snmp: Support ServerTech PDUs "Pro 1/2" aka "Sentry4"
Date: Tue, 27 Oct 2020 13:43:52 +0000
Message-Id: <20201027134354.25561-6-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201027134354.25561-1-iwj@xenproject.org>
References: <20201027134354.25561-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Values from Sentry4.mib, from
  https://www.servertech.com/support/sentry-mib-oid-tree-downloads

Useful runes used when developing and testing, with "Sentry.mib" from
the Servertech zipfile renamed to "mibs/Sentry4-MIB":
  snmpwalk -On -m Sentry4-MIB -M +:mibs/ -Ci -v 2c -c private pdu1 iso.3.6.1.4.1.1718.4
  snmpwalk -m Sentry4-MIB -M +:mibs/ -Ci -v 2c -c private pdu1 iso.3.6.1.4.1.1718.4
  snmptranslate -Td -m Sentry4-MIB -M +:mibs/ Sentry4-MIB::st4OutletControlAction.1.1.2

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-snmp | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/pdu-snmp b/pdu-snmp
index 74244145..61380766 100755
--- a/pdu-snmp
+++ b/pdu-snmp
@@ -44,6 +44,13 @@ sub model_apc6 () {
     $baseoid_write= '.1.3.6.1.4.1.318.1.1.12.3.3.1.1.4';
 }
 
+sub model_sentry4 () {
+    $baseoid = ".1.3.6.1.4.1.1718.4.1.8";
+    $baseoid_name = "$baseoid.2.1.3.1.1"; # st4OutletName.1.1
+    $baseoid_read = "$baseoid.3.1.1.1.1"; # st4OutletState.1.1
+    $baseoid_write= "$baseoid.5.1.2.1.1"; # st4OutletControlAction.1.1
+}
+
 my $model_name = 'msw';
 
 while (@ARGV && $ARGV[0] =~ m/^-/) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:44:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:44:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12859.33307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPH7-0001vk-9a; Tue, 27 Oct 2020 13:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12859.33307; Tue, 27 Oct 2020 13:44:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPH7-0001vc-62; Tue, 27 Oct 2020 13:44:33 +0000
Received: by outflank-mailman (input) for mailman id 12859;
 Tue, 27 Oct 2020 13:44:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kXPH5-0001b2-7V
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:31 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f23b889a-f9db-4df0-bd39-42713a532765;
 Tue, 27 Oct 2020 13:44:04 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGe-0007w3-Ee
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGe-00024b-CL
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kXPGc-0002Uo-MM; Tue, 27 Oct 2020 13:44:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kXPH5-0001b2-7V
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:31 +0000
X-Inumbo-ID: f23b889a-f9db-4df0-bd39-42713a532765
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f23b889a-f9db-4df0-bd39-42713a532765;
	Tue, 27 Oct 2020 13:44:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=0tVI4eo+1zS+zccZupV965ruxpbEgNilUw/RkAp3VIk=; b=En0amqcxDJ3ef96nNC1ptH1/zK
	DB0rGug4qsKF16JkFyfKhngx78Cuxc41vBRIfbC162XkQrjjxcSf4IpA3DNXg6Jo2MZYZMgDNf4iW
	Vzd4OsS/fTggZ3EFUAJuZBOkeisplk9C4E7vKbgo1t2kQ/GOMG/9Nfr2wqF+5o5he2Yg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGe-0007w3-Ee
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGe-00024b-CL
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGc-0002Uo-MM; Tue, 27 Oct 2020 13:44:02 +0000
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 4/7] pdu-snmp: Refactor model handling
Date: Tue, 27 Oct 2020 13:43:51 +0000
Message-Id: <20201027134354.25561-5-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201027134354.25561-1-iwj@xenproject.org>
References: <20201027134354.25561-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This makes it easier to see waht is going on and to add new model(s).

No functional change.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-snmp | 27 +++++++++++++++++++++------
 1 file changed, 21 insertions(+), 6 deletions(-)

diff --git a/pdu-snmp b/pdu-snmp
index a4918f53..74244145 100755
--- a/pdu-snmp
+++ b/pdu-snmp
@@ -28,15 +28,28 @@ use Data::Dumper;
 
 my $community= 'private';
 
-my $baseoid= '.1.3.6.1.4.1.318.1.1.4.4.2.1';
-my $baseoid_write= "$baseoid.3";
-my $baseoid_name= "$baseoid.4";
-my $baseoid_read= "$baseoid.3";
+our ($baseoid, $baseoid_write, $baseoid_name, $baseoid_read);
+
+sub model_msw () {
+    # APC MasterSwitch
+    $baseoid= '.1.3.6.1.4.1.318.1.1.4.4.2.1';
+    $baseoid_name= "$baseoid.4";
+    $baseoid_read= "$baseoid.3";
+    $baseoid_write= "$baseoid.3";
+}
+
+sub model_apc6 () {
+    # APC MasterSwitch protocol version 6 (?)
+    model_msw();
+    $baseoid_write= '.1.3.6.1.4.1.318.1.1.12.3.3.1.1.4';
+}
+
+my $model_name = 'msw';
 
 while (@ARGV && $ARGV[0] =~ m/^-/) {
     $_ = shift @ARGV;
-    if (m/^--apc6$/) {
-	$baseoid_write= '.1.3.6.1.4.1.318.1.1.12.3.3.1.1.4';
+    if (m/^--(\w+)$/ && ${*::}{"model_$1"}) {
+	$model_name= $1;
     } else {
 	die "$_ ?";
     }
@@ -44,6 +57,8 @@ while (@ARGV && $ARGV[0] =~ m/^-/) {
 
 if (@ARGV<2 || @ARGV>3 || $ARGV[0] =~ m/^-/) { die "bad usage\n$usagemsg"; }
 
+${*::}{"model_$model_name"}->();
+
 our ($max_retries) = 16; # timeout = 0.05 * max_retries^2
 our ($dnsname,$outlet,$action) = @ARGV;
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:44:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:44:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12861.33319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPHB-00021D-MV; Tue, 27 Oct 2020 13:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12861.33319; Tue, 27 Oct 2020 13:44:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPHB-000212-HI; Tue, 27 Oct 2020 13:44:37 +0000
Received: by outflank-mailman (input) for mailman id 12861;
 Tue, 27 Oct 2020 13:44:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kXPHA-0001b2-7X
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:36 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id af43575d-5cd1-4af6-bb2d-3a86862fcf7c;
 Tue, 27 Oct 2020 13:44:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGf-0007wE-45
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:05 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGf-00025a-2Y
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:05 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kXPGd-0002Uo-CI; Tue, 27 Oct 2020 13:44:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
	id 1kXPHA-0001b2-7X
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:36 +0000
X-Inumbo-ID: af43575d-5cd1-4af6-bb2d-3a86862fcf7c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id af43575d-5cd1-4af6-bb2d-3a86862fcf7c;
	Tue, 27 Oct 2020 13:44:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=DE/eoGAmQvD+pJdKoW0UhbHdlhrfKgbDNu4rZs20/uA=; b=KlFM8E9wKq0F2iUIWL+fguCwQj
	cD64qTWKnBO0zJ//jOJarmRAjQpJ/L8nlLnmUZkONYdseez/s+svFoBQ1Bkw8bczaqc4wiCLyr2bS
	X8NcKnn9hZUr/9NPkFBEQDNx2LhuF/KEka11c7GUOLkIRZI8u1NZgL4Vaj1yGRWBHp9s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGf-0007wE-45
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:05 +0000
Received: from iwj (helo=mynotebook.example.org)
	by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGf-00025a-2Y
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:05 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
	by mariner.uk.xensource.com with esmtp (Exim 4.89)
	(envelope-from <ijackson@chiark.greenend.org.uk>)
	id 1kXPGd-0002Uo-CI; Tue, 27 Oct 2020 13:44:03 +0000
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 7/7] pdu-snmp: Fix sleeping
Date: Tue, 27 Oct 2020 13:43:54 +0000
Message-Id: <20201027134354.25561-8-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201027134354.25561-1-iwj@xenproject.org>
References: <20201027134354.25561-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

sleep takes only an integer.  We have to use select to sleep for
fractions of a second.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 pdu-snmp | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pdu-snmp b/pdu-snmp
index 61380766..79d22e1f 100755
--- a/pdu-snmp
+++ b/pdu-snmp
@@ -172,7 +172,7 @@ if (!defined $action) {
     my $retries = 0;
     for (;;) {
 	set($valset);
-	sleep $retries * 0.1;
+	select undef,undef,undef, $retries * 0.1;
 	print "now: "; my $got = show();
 	if ($got eq $map[$valset]) { last; }
 	if ($map[$valset] !~ m{^(?:off|on)$}) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 13:44:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 13:44:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12863.33331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPHH-00027f-2C; Tue, 27 Oct 2020 13:44:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12863.33331; Tue, 27 Oct 2020 13:44:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPHG-00027W-TS; Tue, 27 Oct 2020 13:44:42 +0000
Received: by outflank-mailman (input) for mailman id 12863;
 Tue, 27 Oct 2020 13:44:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=txzK=EC=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1kXPHF-0001b2-7f
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:41 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ba1eb07-fff5-4634-a0eb-c9e6eb2c3882;
 Tue, 27 Oct 2020 13:44:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGe-0007w9-TI
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from iwj (helo=mynotebook.example.org)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <ijackson@chiark.greenend.org.uk>) id 1kXPGe-00025I-RV
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 13:44:04 +0000
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by mariner.uk.xensource.com with esmtp (Exim 4.89)
 (envelope-from <ijackson@chiark.greenend.org.uk>)
 id 1kXPGd-0002Uo-4S; Tue, 27 Oct 2020 13:44:03 +0000
X-Inumbo-ID: 2ba1eb07-fff5-4634-a0eb-c9e6eb2c3882
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=wq7mzu+ToDMU3LijsJ/aE5ctMgAPIjBlbDtSDRbuAHo=; b=KFP2JGznwmC3nlWKz5vo4yJ1xs
	6OHQE5SgihgX6Hoqr5tqEy/obBhH63LYdRCir5GTNPytlIkCZBBP7/Qewo4NfqlBJG0H/VvenCK2A
	da5yPGEz31JU6hu9nMFwqZqL7rCQW0je8S55Va7sUVMEocySGzQX0xqFnsnrdT3e8RPA=;
From: Ian Jackson <iwj@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>
Subject: [OSSTEST PATCH 6/7] PDU::snmp, PDU::msw: Rename from msw to snmp
Date: Tue, 27 Oct 2020 13:43:53 +0000
Message-Id: <20201027134354.25561-7-iwj@xenproject.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20201027134354.25561-1-iwj@xenproject.org>
References: <20201027134354.25561-1-iwj@xenproject.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Retain the old name for compatibility with existing configuration.

No change other than to messages.

Signed-off-by: Ian Jackson <iwj@xenproject.org>
---
 Osstest/PDU/msw.pm  | 14 +-------------
 Osstest/PDU/snmp.pm | 39 +++++++++++++++++++++++++++++++++++++++
 README              |  9 ++++++---
 3 files changed, 46 insertions(+), 16 deletions(-)
 create mode 100644 Osstest/PDU/snmp.pm

diff --git a/Osstest/PDU/msw.pm b/Osstest/PDU/msw.pm
index 614216d4..099ef778 100644
--- a/Osstest/PDU/msw.pm
+++ b/Osstest/PDU/msw.pm
@@ -22,18 +22,6 @@ use warnings;
 
 use Osstest;
 
-use parent qw(Osstest::PDU::unsupported);
-
-sub new {
-    my ($class, $ho, $methname, @args) = @_;
-
-    return bless { Args => \@args }, $class;
-}
-
-sub pdu_power_state {
-    my ($mo, $on) = @_;
-    my $onoff= $on ? "on" : "off";
-    system_checked("./pdu-snmp @{ $mo->{Args} } $onoff");
-}
+use parent qw(Osstest::PDU::snmp);
 
 1;
diff --git a/Osstest/PDU/snmp.pm b/Osstest/PDU/snmp.pm
new file mode 100644
index 00000000..dca60df7
--- /dev/null
+++ b/Osstest/PDU/snmp.pm
@@ -0,0 +1,39 @@
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2009-2013 Citrix Inc.
+# 
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+# 
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+# 
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+
+package Osstest::PDU::snmp;
+
+use strict;
+use warnings;
+
+use Osstest;
+
+use parent qw(Osstest::PDU::unsupported);
+
+sub new {
+    my ($class, $ho, $methname, @args) = @_;
+
+    return bless { Args => \@args }, $class;
+}
+
+sub pdu_power_state {
+    my ($mo, $on) = @_;
+    my $onoff= $on ? "on" : "off";
+    system_checked("./pdu-snmp @{ $mo->{Args} } $onoff");
+}
+
+1;
diff --git a/README b/README
index 70f8ae73..33c4d2cc 100644
--- a/README
+++ b/README
@@ -654,9 +654,9 @@ HostProp_<host>_PowerILOM
         manual          Asks the user on the controlling terminal
         unsupported     Fails whenever a power operation is needed
 
-        msw [--apc6] <pdu> <port-name-regexp|port-num>
-             Control an APC masterswitch via SNMP.  The SNMP
-             community is `private'.  See the `pdu-snmp' script.
+        snmp --<model> <pdu> <port-name-regexp|port-num>
+             Control a PDU via SNMP.  The SNMP community is `private'.
+             See the `pdu-snmp' script for supported model names.
 
         ipmi <mgmt> [<user> [<pass> [<ipmitool options...>]]]
              Use IPMI by (by running ipmitool).  <mgmt> is the name or
@@ -667,6 +667,9 @@ HostProp_<host>_PowerILOM
              Does nothing if `on|off|both' is inapplicable, and has
              less error checking and less defaulting than ipmi.
 
+        msw ....
+             Deprecated alias for snmp.
+
         Supported specially are:
 
         <delay>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 14:11:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 14:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12894.33355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPhE-000558-KC; Tue, 27 Oct 2020 14:11:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12894.33355; Tue, 27 Oct 2020 14:11:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPhE-000551-GL; Tue, 27 Oct 2020 14:11:32 +0000
Received: by outflank-mailman (input) for mailman id 12894;
 Tue, 27 Oct 2020 14:11:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCGY=EC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXPhD-00053N-In
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:11:31 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24476384-a21d-491e-94b1-b4ca6fb47bf9;
 Tue, 27 Oct 2020 14:11:26 +0000 (UTC)
X-Inumbo-ID: 24476384-a21d-491e-94b1-b4ca6fb47bf9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603807886;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=569KM2uHTxkxXUk0wQS7+jSoZw4XOaNwLw4aKbluc9Q=;
  b=foyHneP8l7UvBClRox4rU4K7L0y+JXEHa+93fhIGjUYKW8zQb8D4TzKD
   hxFzE8KoWW4/wmu4Kti34uakVq+gWULmjVLnBlv4bqH4l844vcC2PSmUG
   my6n2axE4aMQwe32eUTeO+jD2USuSNObDgYbKMlVsRGMcv2udzyRXSF1M
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Y8PQGjjVLkgnsWJYE/hJLFKUz4ieiFNndYKnktFXYdCh6bRgaJtQzJtkmKMvO0dhyzd9F5++Kj
 Xjg2q7CoLK1lfPfo9gpuS/Osx0/q3Knt1GnyIBMRGs5Jq5qQs+iR8TfZtzaQ4wruwbM8DBC3Z0
 8rzv98DFLeo2g8ylDqPH3kR5cFpsQ1Q5NSkjSguGYvT3YGm0ZfAZoGj1SJ4u1mk2h5scMM5Ris
 5n/rw3FFC7rqAc+2FzP9BYs9a/062a2NAeIcH8Y0j4e0ZUQi1vzxM+AROOPwoaFHsCdaPgQOf6
 s74=
X-SBRS: None
X-MesageID: 30949886
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,424,1596513600"; 
   d="scan'208";a="30949886"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/2] x86/pv: Flush TLB in response to paging structure changes
Date: Tue, 27 Oct 2020 14:10:37 +0000
Message-ID: <20201027141037.27357-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201027141037.27357-1-andrew.cooper3@citrix.com>
References: <20201027141037.27357-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
is safe from Xen's point of view (as the update only affects guest mappings),
and the guest is required to flush (if necessary) after making updates.

However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
writeable pagetables, etc.) is an implementation detail outside of the
API/ABI.

Changes in the paging structure require invalidations in the linear pagetable
range so that subsequent accesses through the linear pagetables see non-stale
mappings.  Xen must provide suitable flushing to prevent intermixed guest
actions from accidentally accessing/modifying the wrong pagetable.

For all L2 and higher modifications, flush the TLB.  PV guests cannot create
L2 or higher entries with the Global bit set, so no mappings established in
the linear range can be global.  (This could in principle be an order 39 flush
starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)

Express the necessary flushes as a set of booleans which accumulate across the
operation.  Comment the flushing logic extensively.

This is XSA-286.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v3:
 * Rework from scratch vs v2.
---
 xen/arch/x86/mm.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 59 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 38168189aa..a3704ef648 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3891,7 +3891,8 @@ long do_mmu_update(
     struct vcpu *curr = current, *v = curr;
     struct domain *d = v->domain, *pt_owner = d, *pg_owner;
     mfn_t map_mfn = INVALID_MFN, mfn;
-    bool sync_guest = false;
+    bool flush_linear_pt = false, flush_root_pt_local = false,
+        flush_root_pt_others = false;
     uint32_t xsm_needed = 0;
     uint32_t xsm_checked = 0;
     int rc = put_old_guest_table(curr);
@@ -4041,6 +4042,8 @@ long do_mmu_update(
                         break;
                     rc = mod_l2_entry(va, l2e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
+                    if ( !rc )
+                        flush_linear_pt = true;
                     break;
 
                 case PGT_l3_page_table:
@@ -4048,6 +4051,8 @@ long do_mmu_update(
                         break;
                     rc = mod_l3_entry(va, l3e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
+                    if ( !rc )
+                        flush_linear_pt = true;
                     break;
 
                 case PGT_l4_page_table:
@@ -4055,6 +4060,8 @@ long do_mmu_update(
                         break;
                     rc = mod_l4_entry(va, l4e_from_intpte(req.val), mfn,
                                       cmd == MMU_PT_UPDATE_PRESERVE_AD, v);
+                    if ( !rc )
+                        flush_linear_pt = true;
                     if ( !rc && pt_owner->arch.pv.xpti )
                     {
                         bool local_in_use = false;
@@ -4063,7 +4070,7 @@ long do_mmu_update(
                                     mfn) )
                         {
                             local_in_use = true;
-                            get_cpu_info()->root_pgt_changed = true;
+                            flush_root_pt_local = true;
                         }
 
                         /*
@@ -4075,7 +4082,7 @@ long do_mmu_update(
                              (1 + !!(page->u.inuse.type_info & PGT_pinned) +
                               mfn_eq(pagetable_get_mfn(curr->arch.guest_table_user),
                                      mfn) + local_in_use) )
-                            sync_guest = true;
+                            flush_root_pt_others = true;
                     }
                     break;
 
@@ -4177,19 +4184,61 @@ long do_mmu_update(
     if ( va )
         unmap_domain_page(va);
 
-    if ( sync_guest )
+    /*
+     * Perform required TLB maintenance.
+     *
+     * This logic currently depends on flush_linear_pt being a superset of the
+     * flush_root_pt_* conditions.
+     *
+     * pt_owner may not be current->domain.  This may occur during
+     * construction of 32bit PV guests, or debugging of PV guests.  The
+     * behaviour cannot be correct with the domain unpaused.  We therefore
+     * expect pt_owner->dirty_cpumask to be empty, but it is a waste of
+     * effort to explicitly check for and exclude this corner case.
+     *
+     * flush_linear_pt requires a FLUSH_TLB to all dirty CPUs.  The flush must
+     * be performed now to maintain correct behaviour across a multicall.
+     * i.e. we cannot relax FLUSH_TLB to FLUSH_ROOT_PGTBL, given that the
+     * former is a side effect of the latter, because the resync (which is in
+     * the return-to-guest path) happens too late.
+     *
+     * flush_root_pt_* requires FLUSH_ROOT_PGTBL on the local CPU
+     * (implies pt_owner == current->domain and current->processor set in
+     * pt_owner->dirty_cpumask), and/or all *other* dirty CPUs as there are
+     * references we can't account for locally.
+     */
+    if ( flush_linear_pt /* || flush_root_pt_local || flush_root_pt_others */ )
     {
+        unsigned int cpu = smp_processor_id();
+        cpumask_t *mask = pt_owner->dirty_cpumask;
+
         /*
-         * Force other vCPU-s of the affected guest to pick up L4 entry
-         * changes (if any).
+         * Always handle local flushing separately (if applicable), to
+         * separate the flush invocations appropriately for scope of the two
+         * flush_root_pt_* variables.
          */
-        unsigned int cpu = smp_processor_id();
-        cpumask_t *mask = per_cpu(scratch_cpumask, cpu);
+        if ( likely(cpumask_test_cpu(cpu, mask)) )
+        {
+            mask = per_cpu(scratch_cpumask, cpu);
 
-        cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
+            cpumask_copy(mask, pt_owner->dirty_cpumask);
+            __cpumask_clear_cpu(cpu, mask);
+
+            flush_local(FLUSH_TLB |
+                        (flush_root_pt_local ? FLUSH_ROOT_PGTBL : 0));
+        }
+        else
+            /* Sanity check.  flush_root_pt_local implies local cpu is dirty. */
+            ASSERT(!flush_root_pt_local);
+
+        /* Flush the remote dirty CPUs.  Does not include the local CPU. */
         if ( !cpumask_empty(mask) )
-            flush_mask(mask, FLUSH_ROOT_PGTBL);
+            flush_mask(mask, FLUSH_TLB |
+                       (flush_root_pt_others ? FLUSH_ROOT_PGTBL : 0));
     }
+    else
+        /* Sanity check.  flush_root_pt_* implies flush_linear_pt. */
+        ASSERT(!flush_root_pt_local && !flush_root_pt_others);
 
     perfc_add(num_page_updates, i);
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 14:11:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 14:11:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12893.33343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPhA-00053Z-6u; Tue, 27 Oct 2020 14:11:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12893.33343; Tue, 27 Oct 2020 14:11:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPhA-00053S-40; Tue, 27 Oct 2020 14:11:28 +0000
Received: by outflank-mailman (input) for mailman id 12893;
 Tue, 27 Oct 2020 14:11:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCGY=EC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXPh8-00053N-Po
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:11:26 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b7ba5b8-0cc2-4c44-9249-3eecfb49092d;
 Tue, 27 Oct 2020 14:11:25 +0000 (UTC)
X-Inumbo-ID: 4b7ba5b8-0cc2-4c44-9249-3eecfb49092d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603807885;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=j/2Q2d4NLk5j8Y791XgkMADfxqAFxYy385J6xuXYUOw=;
  b=M4suCh9dq3aBwLOke6796Ynu9v14vrEdH+qWDBJ1iB+9Nl+aNu+QlFqP
   2aZIPnNAEK6MSovbJMZHrVVNuZLlaDPoDbE3GxxvVqazbd3dSPXVZz3NB
   /3GGV+z8AkZiZDifek5IdCZG0kAyIXqaPFv3QNApsKuiY2Mt5qW9bT4gC
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: i3Xm9+72s+9SQAOfx/iEGcWF1xIlG8CV0kUKezAnXuNKS58307q0HDGBAIRxOYiSR6oj2TAX7Z
 yyxP6oPGngkVI/RSyARHYtmLw5SK3Z2ab8/WkrGSGJQDx3UJbZiVToh6oRsBIMjoP+5v4V1NrN
 QU9LWJSvCkwGshznU1Hi8muDhcOu2Wb/mJnVWSXZExzZYXyDFAayF+U+tRi7ROpfV8+f64FcLo
 xQ02QPUbzInVvX/dx6tG+ds9ETC8jtChebXeBGHoaCW9O/n1r89xzuU1wA9eyp1hL7BAvamhxW
 4oY=
X-SBRS: None
X-MesageID: 30949880
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,424,1596513600"; 
   d="scan'208";a="30949880"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/2] x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
Date: Tue, 27 Oct 2020 14:10:36 +0000
Message-ID: <20201027141037.27357-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20201027141037.27357-1-andrew.cooper3@citrix.com>
References: <20201027141037.27357-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
to the L4 path in do_mmu_update().

However, this was unnecessary.

The L4 resync will pick up any new mappings created by the L4 change.  Any
changes to existing mappings are the guest's responsibility to flush, and if
one is needed, an MMUEXT_OP hypercall will follow.

This is (not really) XSA-286 (but necessary to simplify the logic).

Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

v3:
 * New
---
 xen/arch/x86/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index b2f35b3e7d..38168189aa 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4188,7 +4188,7 @@ long do_mmu_update(
 
         cpumask_andnot(mask, pt_owner->dirty_cpumask, cpumask_of(cpu));
         if ( !cpumask_empty(mask) )
-            flush_mask(mask, FLUSH_TLB_GLOBAL | FLUSH_ROOT_PGTBL);
+            flush_mask(mask, FLUSH_ROOT_PGTBL);
     }
 
     perfc_add(num_page_updates, i);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 14:12:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 14:12:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12898.33367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPhl-0005El-Us; Tue, 27 Oct 2020 14:12:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12898.33367; Tue, 27 Oct 2020 14:12:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPhl-0005Ee-QK; Tue, 27 Oct 2020 14:12:05 +0000
Received: by outflank-mailman (input) for mailman id 12898;
 Tue, 27 Oct 2020 14:12:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCGY=EC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXPhl-0005EU-42
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:12:05 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59c09a57-d156-4a7a-a13c-46552d1189e0;
 Tue, 27 Oct 2020 14:12:03 +0000 (UTC)
X-Inumbo-ID: 59c09a57-d156-4a7a-a13c-46552d1189e0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603807923;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=aa/4vWnGY2t+XfGcpzdER28PO2XLmOedn3YHehVXRLo=;
  b=WnLJiyelaAFlMG5zIUJj/ySROi5HZB8MJJe3dwqecWJzzHEdQyYVmJeK
   qliwqVPAteBdeOgtkc+o0HblhhmXM+3vKog6bGLphu51RLAoreNyRIIb3
   bnTDa9FXkylPftTeHI251QRLXsRKxvtsJ4Yb+7Yvpot+w/gXvGteilIPX
   g=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: v/+EfdVrloK5LeCEzi/Fv1PLsbcad73XHtmN4+2W+ispExyB9W5lmOK/e9gwkbVYN0GkAajM+l
 Wa8d54ST17aNpIswZLmcAQbVARFvwT6o6ROdZhHm3523vEiAUPEim8t7QxO7QA1G1eeksmYt5u
 7JpNKWXKeh7rJrUJQx6tX8m2pyB4cWwOCCJB1U4flrvrUqqLUw1pqoRD+LltDwNmmkNgsbqpZv
 FUEWgLE4Jrs+L8xoEWA95NLkALP2n0Kr0lYElvdvhIyhRr1QFrnrNRJkuUuLCsCRI7cY+SnIzq
 008=
X-SBRS: None
X-MesageID: 29875501
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,424,1596513600"; 
   d="scan'208";a="29875501"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH v3 0/2] x86/pv: Alternative XSA-286 fix
Date: Tue, 27 Oct 2020 14:10:35 +0000
Message-ID: <20201027141037.27357-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Alternative XSA-286 fix, with far less of a performance hit.

v3:
 * Split into two patches, to try and make the TLB flushing in do_mmu_update()
   a tractable problem to solve.

Andrew Cooper (2):
  x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
  x86/pv: Flush TLB in response to paging structure changes

 xen/arch/x86/mm.c | 69 +++++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 59 insertions(+), 10 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 14:12:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 14:12:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12902.33379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPic-0005PO-9O; Tue, 27 Oct 2020 14:12:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12902.33379; Tue, 27 Oct 2020 14:12:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPic-0005PH-4S; Tue, 27 Oct 2020 14:12:58 +0000
Received: by outflank-mailman (input) for mailman id 12902;
 Tue, 27 Oct 2020 14:12:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=bP4d=EC=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1kXPia-0005Ng-NF
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:12:56 +0000
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d17d290-c339-455e-ab5a-8dac409db236;
 Tue, 27 Oct 2020 14:12:55 +0000 (UTC)
Received: from nodbug.lucina.net (93-137-71-201.adsl.net.t-com.hr
 [93.137.71.201])
 by smtp.lucina.net (Postfix) with ESMTPSA id 4ECDC122804;
 Tue, 27 Oct 2020 15:12:54 +0100 (CET)
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 5939A2683D52; Tue, 27 Oct 2020 15:12:53 +0100 (CET)
X-Inumbo-ID: 4d17d290-c339-455e-ab5a-8dac409db236
Received: from smtp.lucina.net (unknown [62.176.169.44])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4d17d290-c339-455e-ab5a-8dac409db236;
	Tue, 27 Oct 2020 14:12:55 +0000 (UTC)
Received: from nodbug.lucina.net (93-137-71-201.adsl.net.t-com.hr [93.137.71.201])
	by smtp.lucina.net (Postfix) with ESMTPSA id 4ECDC122804;
	Tue, 27 Oct 2020 15:12:54 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
	s=dkim-201811; t=1603807974;
	bh=h2OzsMInKjif+yY3DceG3feD9iaBujgGV9ZwqWPl4jw=;
	h=Date:From:To:Cc:Subject:From;
	b=TvSzT6I5LPOSShABikOcf1ng7ddfFojtXjZOuHBIObybR5blM90Blu3SVDy/SlQIj
	 O8otzo1Xh+cL6zpO/31kx5ZAOAce/0gac1H4/AgBC06ZARSezcKMm03YaU72jBQ9DJ
	 q15gt4ygSiuCK95TZriJLNPV6/6kZTQssh3/rk1IymeAK2wZjLPTBkxMs+cCAPRUwK
	 cyQlI/FkW0cfgqWTafIw4q4/oJnO9HUAKeHW/F+fUFjGBHwRTCYfCPIlaFSwO/uwrd
	 4D3x7q1RcyythveykijyiDYhA4zWOA/iE2O/Pig7JJh2POz+fh4wyAEaGMQs2Bun3+
	 MbskA8aOJ4QvA==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
	id 5939A2683D52; Tue, 27 Oct 2020 15:12:53 +0100 (CET)
Date: Tue, 27 Oct 2020 15:12:53 +0100
From: Martin Lucina <martin@lucina.net>
To: mirageos-devel@lists.xenproject.org
Cc: xen-devel@lists.xenproject.org
Subject: [ANN] MirageOS 3.9.0 released
Message-ID: <20201027141253.GA14637@nodbug.lucina.net>
Mail-Followup-To: Martin Lucina <martin@lucina.net>,
	mirageos-devel@lists.xenproject.org, xen-devel@lists.xenproject.org
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
User-Agent: Mutt/1.10.1 (2018-07-13)

Dear all,

We are pleased to announce the release of MirageOS 3.9.0.

Our last release announcement was for [MirageOS
3.6.0](https://mirage.io/blog/announcing-mirage-36-release), so this
announcement also covers the changes made in the 3.7.x and 3.8.x series.

New features:

- The Xen backend has been [re-written from
  scratch](https://github.com/mirage/mirage/issues/1159) to be based on
  Solo5, and now supports PVHv2 on Xen 4.10 or higher, and QubesOS 4.0.
- As part of this re-write, the existing Mini-OS based implementation has
  been retired, and all non-UNIX backends now use a unified OCaml runtime
  based on `ocaml-freestanding`.
- OCaml runtime settings settable via the `OCAMLRUNPARAM` environment
  variable are now exposed as unikernel boot parameters. For details, refer
  to [#1180](https://github.com/mirage/mirage/pull/1180).

Security posture improvements:

- With the move to a unified Solo5 and ocaml-freestanding base, MirageOS
  unikernels on Xen gain several notable improvements to their overall
  security posture, such as stack-smashing protection (SSP) for all C code,
  W^X, and malloc heap canaries. For details, refer to the mirage-xen 6.0.0
  release
  [announcement](https://github.com/mirage/mirage-xen/releases/tag/v6.0.0).

API breaking changes:

- Several Xen-specific APIs have been removed or replaced; unikernels using
  these may need to be updated. For details, refer to the mirage-xen 6.0.0
  release
  [announcement](https://github.com/mirage/mirage-xen/releases/tag/v6.0.0).

Other notable changes:

- `Mirage_runtime` provides event loop enter and exit hook registration
  ([#1010](https://github.com/mirage/mirage/pull/1010)).
- All MirageOS backends now behave similarly on a successful exit of the
  unikernel: they call `exit` with the return value 0, thus `at_exit`
  handlers are now executed
  ([#1011](https://github.com/mirage/mirage/pull/1011)).
- The unix backend's toplevel exception handler has been removed; all
  backends now behave identically with respect to exceptions.
- Please note that the `Mirage_net.listen` function still installs an
  exception handler, which will be removed in a future release. The
  out-of-memory exception is no longer caught by `Mirage_net.listen`
  ([#1036](https://github.com/mirage/mirage/issues/1036)).
- To reduce the number of OPAM packages, the `mirage-*-lwt` packages are
  now deprecated. `Mirage_net` (and others) now use `Lwt.t` directly, and
  their `buffer` type is `Cstruct.t`
  ([#1004](https://github.com/mirage/mirage/issues/1004)).
- OPAM files generated by `mirage configure` now include opam build and
  installation instructions, and also a URL to the Git `origin`
  ([#1022](https://github.com/mirage/mirage/pull/1022)).

Known issues:

- `mirage configure` fails if the unikernel is under version control and no
  `origin` remote is present
  ([#1188](https://github.com/mirage/mirage/issues/1188)).
- The Xen backend has issues with event delivery if built with an Alpine
  Linux GCC toolchain. As a workaround, please use a Fedora or Debian-based
  toolchain.

Acknowledgements:

- Thanks to Roger Pau Monné, Andrew Cooper and other core Xen developers
  for help with understanding the specifics of how Xen PVHv2 works, and how
  to write an implementation from scratch.
- Thanks to Marek Marczykowski-Górecki for help with the QubesOS specifics,
  and for forward-porting some missing parts of PVHv2 to the QubesOS
  version of Xen.
- Thanks to @palainp on GitHub for help with testing on QubesOS.


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 14:20:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 14:20:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12912.33396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPpW-0006Hg-3E; Tue, 27 Oct 2020 14:20:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12912.33396; Tue, 27 Oct 2020 14:20:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXPpV-0006HZ-VN; Tue, 27 Oct 2020 14:20:05 +0000
Received: by outflank-mailman (input) for mailman id 12912;
 Tue, 27 Oct 2020 14:20:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x4TT=EC=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kXPpU-0005wa-Fn
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:20:04 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.3.43]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b6f493e-1bdd-48e9-9f5b-724c84097515;
 Tue, 27 Oct 2020 14:20:02 +0000 (UTC)
Received: from AM6P193CA0138.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:85::43)
 by AM8PR08MB5650.eurprd08.prod.outlook.com (2603:10a6:20b:1d3::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Tue, 27 Oct
 2020 14:20:00 +0000
Received: from AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:85:cafe::db) by AM6P193CA0138.outlook.office365.com
 (2603:10a6:209:85::43) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.22 via Frontend
 Transport; Tue, 27 Oct 2020 14:20:00 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT004.mail.protection.outlook.com (10.152.16.163) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Tue, 27 Oct 2020 14:20:00 +0000
Received: ("Tessian outbound a64c3afb6fc9:v64");
 Tue, 27 Oct 2020 14:19:59 +0000
Received: from 1f14f6613bd2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3862FDE7-600E-4667-BB80-097084A9868D.1; 
 Tue, 27 Oct 2020 14:19:54 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1f14f6613bd2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 27 Oct 2020 14:19:54 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5321.eurprd08.prod.outlook.com (2603:10a6:10:11c::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.28; Tue, 27 Oct
 2020 14:19:52 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Tue, 27 Oct 2020
 14:19:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=x4TT=EC=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kXPpU-0005wa-Fn
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:20:04 +0000
X-Inumbo-ID: 0b6f493e-1bdd-48e9-9f5b-724c84097515
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown [40.107.3.43])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0b6f493e-1bdd-48e9-9f5b-724c84097515;
	Tue, 27 Oct 2020 14:20:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kI554YTTYiUUDfRBH9gI3S5wDzOkzAMhqDe2vEedAbE=;
 b=1YxomQOdkxUdiikPBKBF8VFwQeUnbEAFqDp8vK18bQA76ZETu+kgkX97fg6LqEwBUzDKOiuROSIYLolg4ZF+jSz9YiGf0czTbnlLtPDl5JajLGxBRM27k7KzCccHvUH6lKr7+/doKziJRFsPpfxTg9pm2eLW3J9GH/xZYB2Sri4=
Received: from AM6P193CA0138.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:85::43)
 by AM8PR08MB5650.eurprd08.prod.outlook.com (2603:10a6:20b:1d3::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Tue, 27 Oct
 2020 14:20:00 +0000
Received: from AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:85:cafe::db) by AM6P193CA0138.outlook.office365.com
 (2603:10a6:209:85::43) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.22 via Frontend
 Transport; Tue, 27 Oct 2020 14:20:00 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT004.mail.protection.outlook.com (10.152.16.163) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Tue, 27 Oct 2020 14:20:00 +0000
Received: ("Tessian outbound a64c3afb6fc9:v64"); Tue, 27 Oct 2020 14:19:59 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3f5ead35d8ad106e
X-CR-MTA-TID: 64aa7808
Received: from 1f14f6613bd2.2
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 3862FDE7-600E-4667-BB80-097084A9868D.1;
	Tue, 27 Oct 2020 14:19:54 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1f14f6613bd2.2
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Tue, 27 Oct 2020 14:19:54 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZrQIn0PNxSLRGAM0PyLD2VQfLjmprijJnEzcoCPXJ9mmRI12yhynz4kGjkGx4q/GwT/QbjeBBORUF9o9mjpW8hG5JeFxriaqsVFB+ipOqCPOJ7q+xLQ9uJCwQOkTla6TIOMbYocUnwTeanQNqWG5suNr1lTfL4PX0EDYQZOSipircxCNCEKkvvv3mYY8r+qEVXydifvuUntWW0krc065wE3Xg8PUbHlK507cSyAMB0BvjAWTpPpaMA5gJQTqjuwadZvg8UCTtI/Jk6i46bH5jFR5pkTx9WPseunVrXA3+h5FXSstOd6m05ojs9CCqx/ucouUxLdFBobYT1Jv/btNVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kI554YTTYiUUDfRBH9gI3S5wDzOkzAMhqDe2vEedAbE=;
 b=XzcVp/KHwp5UmL2IjiNkqaJWOeM0Um0bcCQ9izIXi53sj8+uBwQlreSpaGr+OqcxqfONXwe+GnsUxjOIDSoN1/l261S0r+nmhMqE1Vdx5MLLsTMwbutmSmVcELVVEsGZKjlZnj6jfsioI8Aq5IC12s1X8ztYkZifOdSRM0bF+PtTTJSqeJkxH0bLDVPwZGTqhVao9n4s1MiDMZrvMd8oHOOyfgo/uqPEj6tW5EqSojKc1uCSZZ+Xs5yxVoIuaVokGWbaZ3d9PTxQ2rOrV5ufqHM8DkY2xELM2cg1Bc1m67In331uwBoCJpLIggUZ/jQQyYzxPUB3f+WDtEyhept3nw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kI554YTTYiUUDfRBH9gI3S5wDzOkzAMhqDe2vEedAbE=;
 b=1YxomQOdkxUdiikPBKBF8VFwQeUnbEAFqDp8vK18bQA76ZETu+kgkX97fg6LqEwBUzDKOiuROSIYLolg4ZF+jSz9YiGf0czTbnlLtPDl5JajLGxBRM27k7KzCccHvUH6lKr7+/doKziJRFsPpfxTg9pm2eLW3J9GH/xZYB2Sri4=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB8PR08MB5321.eurprd08.prod.outlook.com (2603:10a6:10:11c::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.28; Tue, 27 Oct
 2020 14:19:52 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Tue, 27 Oct 2020
 14:19:52 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Ash Wilding <Ash.Wilding@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvyGh0jWzCHpk6w/U7XN+08x6mgw0AAgAEzsQCAAWItgIABA+GAgADBXQCAABfAAIAAGI8AgAAORwCABG97AIAAEvKAgABzxQCAAUKcAA==
Date: Tue, 27 Oct 2020 14:19:52 +0000
Message-ID: <F573C8D3-3473-43CD-BA98-8D59E0188AF8@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <BF2E5EF7-575B-4A8F-BC00-3F2B73754886@arm.com>
 <9cf9f8d3-b699-de3c-781f-f7ad1b498899@xen.org>
In-Reply-To: <9cf9f8d3-b699-de3c-781f-f7ad1b498899@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 432d17f4-1977-4ce9-ddc8-08d87a836398
x-ms-traffictypediagnostic: DB8PR08MB5321:|AM8PR08MB5650:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB5650780A91B170D48F8B44AC9D160@AM8PR08MB5650.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 gNCLqO/9/DRmt8hiKNMDqhzI8KDxqGmKGsNb1h2C/8mwjRMqhbdg5MOlZzrByx608H8ydn9uumYzixNW8ZaQFRYTJQPopNyvXv5eePeExp15v8gU97e/l+bol25FyuitkL9qf8wo/s8b96dYvigNQQ7IB72njppDROC8VFcAXoHyjKxHD2tIKUKVCJicjwR2SUSSoEFg+QFA6ZB/Ran6DKTTDVmgW6DYJTg5aASAJxiQ2L76VsAa0FV24yA5jQGPV57kJZKk3L0xZwj/EwvKGogSxeAkwMVxJM5YWZie0n2P6HscMTEXpV0Q649HqhAWGklH6t+nstcunNJQBLG0b3t2i1RKsSRdQ/KUeXc3xOeo7txJqjwgLcqSVnqZCHgwhqKz0iNcxwTXRKN5qi1LCg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(39860400002)(136003)(346002)(366004)(396003)(64756008)(2616005)(6486002)(2906002)(33656002)(966005)(66446008)(66556008)(36756003)(86362001)(186003)(26005)(5660300002)(478600001)(54906003)(66476007)(6512007)(66946007)(8936002)(4326008)(83380400001)(71200400001)(76116006)(91956017)(316002)(6506007)(8676002)(6916009)(53546011);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 5hW6DsC3u38UIY94Lm6J0mVrTQOZUAvxwrK3NF+bFQfzykXK3heJiHst1oYBagXzi3lFrI0A0jyT6zbWAJQGJPt/nWuDpoD+R3QmQ9xNQctyfYtrpfiJBM4XB7cKnRgiE18a4UdxnswyR9TgLQ0vEix4GBGk0iPiE+c95d2Bje0Sn/w9b7jQUhRCGniUCqE3fF8LZztjD8XwWzh6W0YMh6yw+ZLN0ytD1sRvSu86F8JVIbOudZTSowjMdXQ9rVANu5JF3dYE4GC0/ELMlRfWSbjSLFa332W2ATSmKYCd38RvLbLXFy0nZzNWI6M4+IR0/J7zB7djOndZv6dCw13lyKKbSKmUP5hV0qFua31DeJJ9UbJl9oA1kn43Qo2EocbY9Mkuuj8NakaLmOjoXW+zdn2GwHC8H2K6ZvzZ23Hu2tBLk+Y6QrGT8m5EE5GA0omngmlfSEpIqZGNiORRvTQAzYmnZ8PpPSisVtWIUioSuqQNUhc7i98jwi13I0eSgAhYlrj2b+lb4Lv4L9G6KSA4ghrG/I+N73UHlMpU9V+kV7FvT4tuCP5IIJX12yCrnqkNWArNoZ0hJF556zQUdwmQ2QWU9h0reI+P7BnWjekAM07H2OlJg+aj1D/BxD3WOQlseV2M80UlGlnhx6BDpp0GGQ==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <293DEC749F29B64F8837A31010A48212@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5321
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c488c4de-5d01-4256-8e25-08d87a835f15
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	JKs6GzYtmjNCqpzN0D8M3aXmG7XhGz+ym6Fnb2TfqmdvgZGZ+hxu+P1VP8rnw7AebIHiHgIZT+p0zjKWfUgDL+JA3ZYRMju5oWlI99xDQS50bfMO2SZ+d+IWzUyGbNHMh8BG2a6bMZUtcCCdFtw8QZaEQGHCj6UFBpZvnjh3XalDVKN0SJJ6hNy3/x1ijXHcJsh/ypwYtuD9lPi/u++6wQ7PVy8vSzR5H2FwAIKEd1wK1+K1yK2eXssln06gMRXKmoWP8bSFkZv01jq5KQXcHIQxc98z2FucGfw5OEq1R6a38FaYmDWcnPXAJJ8sXoqhf9AFOOIpQsk0EVoXJn6KgL6caFGdmthmB3vocP5yl0zD79ELi5G50mNdvQESACJWqVjMMvV1l+7G08/RTMjgdiOvM82nMPpyEWkOeQ5BBAIejmWvb/CaWXcz7pSleq03F1j4IoLB+7bPksy+MNG/Co0wLYRL3W9fuKkyXMczxDw=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(39860400002)(376002)(346002)(46966005)(53546011)(6862004)(82310400003)(36756003)(4326008)(26005)(186003)(36906005)(2906002)(70586007)(966005)(81166007)(336012)(70206006)(6486002)(83380400001)(33656002)(8676002)(8936002)(316002)(6512007)(2616005)(47076004)(478600001)(356005)(82740400003)(5660300002)(54906003)(6506007)(107886003)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Oct 2020 14:20:00.2653
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 432d17f4-1977-4ce9-ddc8-08d87a836398
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5650

Hi Julien,

> On 26 Oct 2020, at 19:05, Julien Grall <julien@xen.org> wrote:
>=20
> On 26/10/2020 12:10, Ash Wilding wrote:
>> Hi,
>=20
> Hi Ash,
>=20
>>> 1. atomic_set_release
>>> 2. atomic_fetch_andnot_relaxed
>>> 3. atomic_cond_read_relaxed
>>> 4. atomic_long_cond_read_relaxed
>>> 5. atomic_long_xor
>>> 6. atomic_set_release
>>> 7. atomic_cmpxchg_relaxed - possibly we can use the atomic_cmpxchg that
>>>    is already implemented in Xen; needs checking.
>>> 8. atomic_dec_return_release
>>> 9. atomic_fetch_inc_relaxed
>> If we're going to pull in Linux's implementations of the above atomics
>> helpers for SMMUv3, and given the majority of SMMUv3 systems are v8.1+
>> with LSE, perhaps this would be a good time to drop the current
>> atomic.h in Xen completely and pull in both Linux's LL/SC and LSE
>> helpers,
>
> When I originally replied to the thread, I thought about suggesting importing LSE, but I felt it was too much to ask in order to merge the SMMUv3 code.
>
> However, I would love to have support for LSE in Xen, as this would solve another not yet fully closed security issue with LL/SC (see XSA-295 [2]).
>
> Would Arm be willing to add support for LSE before merging the SMMUv3?

We are willing to work on LSE, but I cannot commit myself or my team to starting work on this before the end of the year.

So I think it would be good to integrate this version of SMMUv3; we can then update it to the latest Linux one once LSE has been added.

I think it makes sense in the meantime to discuss how this should be designed, but it might be a good idea to start a dedicated discussion thread for that.

Cheers
Bertrand

>
> As an alternative, it might also be possible to provide "dumb" implementations for all the helpers, even if they are stricter than necessary for the memory ordering requirements.
>
>> then use a new Kconfig to toggle between them?
>
> I would prefer to follow the same approach as Linux and allow Xen to select at boot time which implementations to use. This would enable distros to provide a single binary that boots on all Armv8 and still allows Xen to select the best set of instructions.
>
> Xen already provides a framework to switch between two sets of instructions at boot. This was borrowed from Linux, so I don't expect a big hurdle to get this supported.
>
>> Back in 5d45ecabe3 [1] Jan mentioned we probably want to avoid relying
>> on gcc atomics helpers as we can't switch between LL/SC and LSE
>> atomics.
>
> I asked Jan to add this line in the commit message :). My concern was that even if we provided a runtime switch (or a sanity check for XSA-295), the GCC helpers would not be able to take advantage of it (the code is not written by the Xen community).
>
>> [1] https://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=5d45ecabe3
>
> [2] https://xenbits.xen.org/xsa/advisory-295.html
>
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 14:37:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 14:37:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12938.33407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXQ6H-0007U8-MC; Tue, 27 Oct 2020 14:37:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12938.33407; Tue, 27 Oct 2020 14:37:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXQ6H-0007U1-JI; Tue, 27 Oct 2020 14:37:25 +0000
Received: by outflank-mailman (input) for mailman id 12938;
 Tue, 27 Oct 2020 14:37:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXQ6G-0007Tw-KI
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:37:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 554116c3-c0a9-4381-b52a-0701f32b06b0;
 Tue, 27 Oct 2020 14:37:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB272AC1F;
 Tue, 27 Oct 2020 14:37:22 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kXQ6G-0007Tw-KI
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:37:24 +0000
X-Inumbo-ID: 554116c3-c0a9-4381-b52a-0701f32b06b0
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 554116c3-c0a9-4381-b52a-0701f32b06b0;
	Tue, 27 Oct 2020 14:37:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603809442;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=H+HPVp15S5DBifX1LYbjZZIZkMe4aOpuXoimUh/hw5U=;
	b=Wqf/uOzHGVLsxVxZ6eiYeC7xhGGT3VzxCTb6rYNxtXpIQsKCssCGk2oLUYr/n07yBRuM+f
	JFid8LVutUaLcJvVO+HT7Bml89gvC7xczhNsJRQIc3vrqeW4jBep0MBdLilsMAJHbhDRz0
	/sIOjUj5g6m6NnDbnE695P/hmZwqJn8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id AB272AC1F;
	Tue, 27 Oct 2020 14:37:22 +0000 (UTC)
Subject: Re: [PATCH v3 1/2] x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update()
 for XPTI
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201027141037.27357-1-andrew.cooper3@citrix.com>
 <20201027141037.27357-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <589e9ed8-3b54-4225-1740-c5b0e3e667c7@suse.com>
Date: Tue, 27 Oct 2020 15:37:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201027141037.27357-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.10.2020 15:10, Andrew Cooper wrote:
> c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
> use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
> to the L4 path in do_mmu_update().
> 
> However, this was unnecessary.
> 
> The L4 resync will pick up any new mappings created by the L4 change.  Any
> changes to existing mappings are the guest's responsibility to flush, and if
> one is needed, an MMUEXT_OP hypercall will follow.
> 
> This is (not really) XSA-286 (but necessary to simplify the logic).
> 
> Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")

While - with yet another round of consideration - I agree, I
would have thought justifying the correctness of the change
by d543fa409358 ("xen/x86: disable global pages for domains with
XPTI active") would have been easier. Perhaps at least worth
mentioning?

I agree that this reasoning is necessary to warrant the Fixes:
tag, and hence to indicate backporting of this change is going
to be fine all the way through to (the backports of) this
commit.

The primary complication with the justification you use is with
the commit you refer to saying "so that inside Xen we'll also
pick up such changes" in its description. I assume your reasoning
wrt this is that any hypercall started on another vCPU before
the mmu-update here completes can't rely on Xen observing the
new entries, which is what I believe I viewed differently at the
time. Whereas any one started after the sync/flush will obviously
observe the new entries (because of the very sync). My main
problem with this reasoning has been that it "feels" as if there
might be a way by which a guest could arrange a hypercall which
ought to observe the new entries but, because of the sync
happening only on the exit paths, doesn't. Quite likely such
"arranging", if possible at all, would need to be called "out of
spec".

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 14:37:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 14:37:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12940.33420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXQ6l-0007Zr-W8; Tue, 27 Oct 2020 14:37:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12940.33420; Tue, 27 Oct 2020 14:37:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXQ6l-0007Zk-Sp; Tue, 27 Oct 2020 14:37:55 +0000
Received: by outflank-mailman (input) for mailman id 12940;
 Tue, 27 Oct 2020 14:37:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RClw=EC=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXQ6k-0007ZY-9i
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:37:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 29737194-3685-4ef2-a65f-9a9b6808bb6a;
 Tue, 27 Oct 2020 14:37:53 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXQ6h-0000kW-GQ; Tue, 27 Oct 2020 14:37:51 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXQ6h-0006Bk-8a; Tue, 27 Oct 2020 14:37:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=dA2U/WBpIiNLs6m7784iLFfwRN40j+t9fnVMroKcBrs=; b=3EhmVNgVbNQGQVWDgSPCjbm9xF
	xs3ptOyf+6LvfOzr/P84rJqhYvZuAL/0ZdKHSszH88Fs0hJDZ2XMTZRK/YoNHTTLEZrzjUU9dDTXt
	8dqrbDO7ELX4eHPrpzFOHu4NbX5lH5yn/V1+7F0N9cXibuWB9B10cr444VG+bU5ZN2vI=;
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Ash Wilding <Ash.Wilding@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <BF2E5EF7-575B-4A8F-BC00-3F2B73754886@arm.com>
 <9cf9f8d3-b699-de3c-781f-f7ad1b498899@xen.org>
 <F573C8D3-3473-43CD-BA98-8D59E0188AF8@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <13baac40-8b10-0def-4e44-0d8f655fcaf1@xen.org>
Date: Tue, 27 Oct 2020 14:37:49 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <F573C8D3-3473-43CD-BA98-8D59E0188AF8@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 27/10/2020 14:19, Bertrand Marquis wrote:
> Hi Julien,

Hi Bertrand,

> 
>> On 26 Oct 2020, at 19:05, Julien Grall <julien@xen.org> wrote:
>>
>> On 26/10/2020 12:10, Ash Wilding wrote:
>>> Hi,
>>
>> Hi Ash,
>>
>>>> 1. atomic_set_release
>>>> 2. atomic_fetch_andnot_relaxed
>>>> 3. atomic_cond_read_relaxed
>>>> 4. atomic_long_cond_read_relaxed
>>>> 5. atomic_long_xor
>>>> 6. atomic_set_release
>>>> 7. atomic_cmpxchg_relaxed - maybe we can use the atomic_cmpxchg that is
>>>>     already implemented in Xen; need to check.
>>>> 8. atomic_dec_return_release
>>>> 9. atomic_fetch_inc_relaxed
>>> If we're going to pull in Linux's implementations of the above atomics
>>> helpers for SMMUv3, and given the majority of SMMUv3 systems are v8.1+
>>> with LSE, perhaps this would be a good time to drop the current
>>> atomic.h in Xen completely and pull in both Linux's LL/SC and LSE
>>> helpers,
>>
>> When I originally replied to the thread, I thought about suggesting importing LSE. But I felt it was too much to ask in order to merge the SMMUv3 code.
>>
>> However, I would love to have support for LSE in Xen as this would help solve a not yet fully closed security issue with LL/SC (see XSA-295 [2]).
>>
>> Would Arm be willing to add support for LSE before merging the SMMUv3?
> 
> We are willing to work on LSE but I cannot commit that my team or I will start working on this subject before the end of the year.

That's fine. There are other ways we can bridge the gap between Xen and 
Linux in order to get the latest version (see more below).

> 
> So I think it would be good to integrate this version of SMMUv3 and we can then update it to the latest Linux one once LSE has been added.

As I pointed out in my first e-mail on this thread, I am quite concerned 
that we are going to (re-)introduce bugs that have been fixed in Linux. 
Have you investigated whether that could happen?

However, I think we can manage to get the latest version without 
requiring LSE. It should be possible to provide dumb versions of most 
of the helpers.
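
For illustration, such a "dumb" (non-LSE) fallback can be built on a plain 
compare-exchange retry loop. The sketch below uses C11 stdatomic so it is 
self-contained; in Xen it would sit on top of the existing atomic_cmpxchg() 
helper instead, so treat the names and details as illustrative only:

```c
#include <stdatomic.h>

/*
 * Hedged sketch: a fallback atomic_fetch_andnot_relaxed() built on a
 * compare-exchange retry loop.  C11 stdatomic stands in for Xen's own
 * atomic_cmpxchg() here purely to keep the example self-contained.
 */
static inline int atomic_fetch_andnot_relaxed_fallback(atomic_int *v, int mask)
{
    int old = atomic_load_explicit(v, memory_order_relaxed);

    /* Retry until no other CPU updated *v between the load and the CAS. */
    while ( !atomic_compare_exchange_weak_explicit(v, &old, old & ~mask,
                                                   memory_order_relaxed,
                                                   memory_order_relaxed) )
        ; /* On failure, 'old' is refreshed with the current value. */

    return old; /* Value observed before clearing the mask bits. */
}
```

Note that such retry loops are exactly the LL/SC-style pattern that LSE's 
single-instruction atomics would replace, which is why the fairness concern 
from XSA-295 applies to them.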

> 
> I think it makes sense in the meantime to discuss how this should be designed but it might be a good idea to make a specific discussion thread for that.

Makes sense. Can you start a new thread?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 14:46:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 14:46:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12948.33432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXQFF-00005y-TM; Tue, 27 Oct 2020 14:46:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12948.33432; Tue, 27 Oct 2020 14:46:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXQFF-00005r-PS; Tue, 27 Oct 2020 14:46:41 +0000
Received: by outflank-mailman (input) for mailman id 12948;
 Tue, 27 Oct 2020 14:46:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXQFE-00005m-IM
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 14:46:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1628e3d-cefe-47c8-8b30-5c28c5dbc2df;
 Tue, 27 Oct 2020 14:46:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CB152AC12;
 Tue, 27 Oct 2020 14:46:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603809998;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=y9EcKYoS6cmVmpnnZMHPyeShlA8v+Qt0BsL8pYm3jpY=;
	b=CDIt1Tl7RcMZ4X208cYP2htGM5RJRGFz2FktCT22qopdHyq9Oatzjynif2WI3ErPz+WhMT
	dgR116FQGp/ddfOQ6B6AkVBJ0gAzbu8YVKiS/Sl6zkXu8ZVNnAKtpjzmx0qWy8Hve4qXjL
	GHpnZ6g2pCYEW9RHjw8JN1vGLpkc7vw=
Subject: Re: [PATCH v3 2/2] x86/pv: Flush TLB in response to paging structure
 changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201027141037.27357-1-andrew.cooper3@citrix.com>
 <20201027141037.27357-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0cd065e3-47ae-8a98-92ed-f67a575949b8@suse.com>
Date: Tue, 27 Oct 2020 15:46:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201027141037.27357-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.10.2020 15:10, Andrew Cooper wrote:
> With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
> is safe from Xen's point of view (as the update only affects guest mappings),
> and the guest is required to flush (if necessary) after making updates.
> 
> However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
> writeable pagetables, etc.) is an implementation detail outside of the
> API/ABI.
> 
> Changes in the paging structure require invalidations in the linear pagetable
> range for subsequent accesses into the linear pagetables to access non-stale
> mappings.  Xen must provide suitable flushing to prevent intermixed guest
> actions from accidentally accessing/modifying the wrong pagetable.
> 
> For all L2 and higher modifications, flush the TLB.  PV guests cannot create
> L2 or higher entries with the Global bit set, so no mappings established in
> the linear range can be global.

Perhaps "..., so no guest controlled mappings ..."?
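
As background on why the linear range is TLB-relevant at all: with a 
recursive ("linear") top-level slot, pagetable entries themselves become 
addressable at computable virtual addresses, so those mappings live in the 
TLB like any other. A minimal sketch of the address computation follows; 
the slot number 256 is illustrative, not Xen's actual linear slot:

```c
#include <stdint.h>

/*
 * Hedged sketch of x86-64 linear (recursive) pagetables.  With top-level
 * slot SELF pointing back at the top-level table itself, the L1 entry
 * covering a given virtual address sits at a computable virtual address.
 * SELF = 256 is an illustrative value only.
 */
#define SELF 256ULL

static uint64_t l1e_linear_va(uint64_t va)
{
    /*
     * One walk level is consumed by the self slot, so every index of
     * 'va' shifts down one level; the low 3 bits index the 8-byte
     * entries of the L1 table, hence clearing them.
     */
    uint64_t addr = (SELF << 39) | ((va >> 9) & 0x7ffffffff8ULL);

    /* Canonicalise: sign-extend from bit 47. */
    if ( addr & (1ULL << 47) )
        addr |= 0xffffULL << 48;

    return addr;
}
```

Because a stale TLB entry anywhere in this window lets a subsequent access 
walk an old paging structure, any L2-or-higher modification must be 
followed by a flush, as the patch argues.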

> @@ -4177,19 +4184,61 @@ long do_mmu_update(
>      if ( va )
>          unmap_domain_page(va);
>  
> -    if ( sync_guest )
> +    /*
> +     * Perform required TLB maintenance.
> +     *
> +     * This logic currently depend on flush_linear_pt being a superset of the
> +     * flush_root_pt_* conditions.
> +     *
> +     * pt_owner may not be current->domain.  This may occur during
> +     * construction of 32bit PV guests, or debugging of PV guests.  The
> +     * behaviour cannot be correct with domain unpaused.  We therefore expect
> +     * pt_owner->dirty_cpumask to be empty, but it is a waste of effort to
> +     * explicitly check for an exclude this corner case.

Nit: s/ an / and /

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 15:24:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 15:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12957.33443 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXQph-0003YY-Of; Tue, 27 Oct 2020 15:24:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12957.33443; Tue, 27 Oct 2020 15:24:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXQph-0003YR-Lk; Tue, 27 Oct 2020 15:24:21 +0000
Received: by outflank-mailman (input) for mailman id 12957;
 Tue, 27 Oct 2020 15:24:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXQpf-0003YM-QY
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 15:24:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 772fbcc8-0320-4c46-b379-da3ec11fb5ad;
 Tue, 27 Oct 2020 15:24:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B57B7ACF3;
 Tue, 27 Oct 2020 15:24:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603812256;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=w8683Ro2k05l4yvE9DeZmnIto4ZGZzqKcEzAUrqe+O4=;
	b=tvB1HMWEg+PsBjR6EgLEVO0+WyUP6MPzu+3vThxFFxEOxc8M8X+91neQOUWB3NT7/C8Tti
	hWbszOMm+2E7wzDU7BoAEQQpSxtDwVBnkTKux1yOhJq/hNMey28RVy8PbXJG/QvZi+4Kee
	6ldtC9L4+9jIpGyWUdJEtTVot0Hv5Ec=
Subject: Re: [PATCH] x86/svm: Merge hsa and host_vmcb to reduce memory
 overhead
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201026135043.15560-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ec123127-786a-02e9-07dd-351f30b6a5b3@suse.com>
Date: Tue, 27 Oct 2020 16:24:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201026135043.15560-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 14:50, Andrew Cooper wrote:
> The format of the Host State Area is, and has always been, a VMCB.  It is
> explicitly safe to put the host VMSAVE data in.

Nit: The PM calls this "Host Save Area" or "Host State Save Area"
afaics.

I recall us discussing this option in the past, and not pursuing
it right away because it had not been made explicit at the time.
Where in the doc has this been made explicit? The main
uncertainty (without any explicit statement) on my part would be
the risk of VMSAVE writing (for performance reasons) e.g. full
cache lines, i.e. more than exactly the bits holding the state
to be saved, without first bringing old contents in from memory.

Of course, with the VMSAVE gone from svm_ctxt_switch_to() the
only one left is in _svm_cpu_up(), i.e. long before the first
VM exit, but then the same consideration could apply the other
way around (VM exit writing full cache lines, assuming the
other parts of the HSA are unused).

And then of course, if in fact this isn't spelled out anywhere,
there's the forward compatibility question.

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -72,11 +72,10 @@ static void svm_update_guest_efer(struct vcpu *);
>  static struct hvm_function_table svm_function_table;
>  
>  /*
> - * Physical addresses of the Host State Area (for hardware) and vmcb (for Xen)
> - * which contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state when in
> - * guest vcpu context.
> + * Host State Area.  This area is used by the processor in non-root mode, and
> + * contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state required to
> + * leave guest vcpu context.
>   */
> -static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, hsa);
>  static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, host_vmcb);

The comment now applies to host_vmcb, so making the dual purpose
more obvious would imo be helpful. This would in particular mean
not starting with (only) "Host State Area" (unless here you really
mean to use a term different from the PM, to express the superset,
but then one less easy to mix up with Host Save Area would likely
be better), and not solely referring to non-root mode. Albeit
maybe you mean root mode by saying "contains Xen's"? If so,
perhaps "..., and also contains"?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 15:53:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 15:53:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12963.33456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXRHM-00069W-2i; Tue, 27 Oct 2020 15:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12963.33456; Tue, 27 Oct 2020 15:52:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXRHL-00069P-Vk; Tue, 27 Oct 2020 15:52:55 +0000
Received: by outflank-mailman (input) for mailman id 12963;
 Tue, 27 Oct 2020 15:52:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y/3m=EC=epam.com=prvs=8569373f95=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kXRHL-00069K-10
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 15:52:55 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9586153-8ea9-4817-a6e9-7627099ee4fd;
 Tue, 27 Oct 2020 15:52:53 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09RFnpif024163; Tue, 27 Oct 2020 15:52:46 GMT
Received: from eur04-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur04lp2059.outbound.protection.outlook.com [104.47.14.59])
 by mx0b-0039f301.pphosted.com with ESMTP id 34dmbx68h6-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 27 Oct 2020 15:52:46 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5700.eurprd03.prod.outlook.com (2603:10a6:208:16f::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Tue, 27 Oct
 2020 15:52:42 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3477.029; Tue, 27 Oct 2020
 15:52:42 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dvSiumjoBJjPUyl96pA4106ckW/Y6kVwHdr03vQeCgc382u2pZSoZwCC2z/0vAgDeRvpJlbTx51VW0W1pzlmimNruAFWOnQVh7mineuFarONWwdMzpdYxghSW1QNOHsE9QI629nTmB6ZedjeGydUoFLepldZaX+p0SXvYY97wyhSSnApXBGRSkpBrru55j9x8jMFiee9iTKOLd8Gm31SCqb/2uvmFyc9svfpFyf8nEXDe0Uqni2JAKeq9M8enr0lQtK31+MUC5zcdFzqMS3ix3od4tTTBGYRHXkkcFRyE/TYxJpxemcq7iKKouxN4n+6cAsqJ/2EqbMh7PGA2MIHiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ypum0ZP6pul9yjowOAW9sosm/3UXA3wT5wkWTOQsbTE=;
 b=d1eQUqUKbkkYgsaoTJYSKzKPuPw6FxBLq24suY+lg0mU7MQNvvgOg3BdGHnoCwNqmLx2wYTyrL6b6EZc3NRXNXM2S+yGGBMRd7Z26Gz+O4sReez6/YVTp5j31mgNEMFdz5uy8eMp8nWhNzJGx4RAYv5Aid5hKpjBn4SDgx8Xat6/onhgGDUbq7bbXeCzaTn4pqGvHDpf29q9zESJkcm+45akefhb2BoGrEkH4HSVmPSxRH9cQHsawpwMI3lzlT1rWbH7x+r+t0TjKJMSNAqR0Wui15zEfBQaByymUBHBfTj/aH7JHeY9St++XxFuJf2A813/DpSSw9zyGzCFw1IY8w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ypum0ZP6pul9yjowOAW9sosm/3UXA3wT5wkWTOQsbTE=;
 b=t0gEVKG/0Qr6EeK5P2GgCgGBdvYqqCXDoUlCBWO7Swa8+vWh4fyebgpCQj3kEYT9P2YqJm91fpJ8RbPJNZnWY7UQqhwR+LKE/3k2NgE9HEVIDeBaabRO5shgZZlDdAFLUK+SR4fybuwwpgGRm45ylf93RZmKLQ5m/Ee2WZBlSb+2N/XN8dAWiCm2iQFVxMQh/KPp2p6pfo2oWyHErEsmTsP9YLXlxgbbGEiCehK6S6kMZgFjSbA5erUMUCDl8P+RVURyQebkPQk7BuOJ7pB0/KroXVT071r2p5YtQ8TdSl+eadsAWF65e6c7tWA5TYUpuG0AHbBj4XQ2EsRuBy3nEQ==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5700.eurprd03.prod.outlook.com (2603:10a6:208:16f::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Tue, 27 Oct
 2020 15:52:42 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3477.029; Tue, 27 Oct 2020
 15:52:42 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>,
        "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>,
        "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>,
        Julien Grall <julien@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        "wl@xen.org" <wl@xen.org>, "paul@xen.org"
	<paul@xen.org>,
        Artem Mygaiev <Artem_Mygaiev@epam.com>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>,
        xen-devel <xen-devel@lists.xenproject.org>,
        Rahul Singh <Rahul.Singh@arm.com>
Subject: Re: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
Thread-Topic: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
Thread-Index: AQHWrEfNmyZZUV4zgEum5lDBP9dPxqmraEMAgAAxlIA=
Date: Tue, 27 Oct 2020 15:52:42 +0000
Message-ID: <ac342c70-8fbb-023d-00b3-4a1bc1d389a7@epam.com>
References: <ad25a5ba-f44c-4e88-f3b0-e0baa5efc5f6@epam.com>
 <20201027125104.axv26kdqljqsvufn@Air-de-Roger>
In-Reply-To: <20201027125104.axv26kdqljqsvufn@Air-de-Roger>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: dd9da055-fdff-452e-7222-08d87a905718
x-ms-traffictypediagnostic: AM0PR03MB5700:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM0PR03MB57003FDBD8584C6CF4A8A9CDE7160@AM0PR03MB5700.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 MVOFD7LpiErbrZiE0KCIFyfhH8ARNMluCtveaap0sEDNgI+4nfxy6u+dQP2037msFegN3dA9ymcIUEChoxlhw1QVvxrHTnG2ynkhaAFo+aThJ/8X1G/lHUW+R1YhRvDNZtWLPMWK4KpyX4V9y5wOsfjtuM60hOmlJGHFdFBT6Ycia3ljPiolWmx99cr96dSIdzHXvXyFtagfgpgCCNCFGPUTA1eOhHdkMnpCmuQoFSF95dFXC+qRuKJwxXoCtZqvPgC0fAgvD7mhZjLPrJE7okj/SfKchNmsuQWofXJgKP5qqdREQedxx/dsk10YdK8129dOM15a4nVrFubGmtYhlw==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(366004)(396003)(39860400002)(5660300002)(76116006)(66946007)(66476007)(2906002)(36756003)(66446008)(7416002)(66556008)(64756008)(2616005)(478600001)(8936002)(53546011)(71200400001)(54906003)(6512007)(86362001)(6486002)(6506007)(6916009)(8676002)(83380400001)(316002)(31696002)(31686004)(4326008)(26005)(186003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 J6EQ4qh4FpE2CpkjHAbhktfJyw8Hl3ZCWZrsz750RFCLLw0pxrlEZZRkQCaEhALTK2yL6Ur+jYFMMKWIdm3YKeVq+h9ediwXFCIuXVQTXUszDHID0un/sB4OYX7bGa9gU1k2QTJVt1E4lsHaoecD19AYMqmISIP3b4EICOGjo8y7LHGXQCPB7KUVEVhD9A1d8aI5lfTaECF6GZppjGP77h9spWD28DXZhYLjbp9toPI7m99uRgdV5rvUEd4trTSysRATOHR3ldr4K5M5wTBDsbzYo+iKNgHfGg/PWGBBMbfnpbOf+dLPjyyVJwwuMdXqSS7CFePoUnOnP8XE6YEbr6FVgvl+l5iLGcpEep/T7P9XOtFkqG3hcI4H+v/r5Z07FVXjitJSpbdxmmtCJ3Tkee+z+fgbuxiBnuct6JApB6DTj380bvhZtyVuxaJjf4+826mqjsJBdn/HrmTQX+E+202ZxbhJmUw9XVRJKHWakKaimOnbNg45663YfU+f7wXChAzcqd7o/6gaMc6aho4LVrkhdzyjHPqwaz6EIp62Z3wiIz9tUyPVl0cQYId6ugd95YHX9p+mZ5zL6yA6rbEDTo1wmU6mS8LWb+cp8JQA8CBXxvSh3DOhacqozGIibIifl4+FQ6ZpPrN48ngfbWJU8A==
Content-Type: text/plain; charset="utf-8"
Content-ID: <51389A2462735146BE4B4CDAFEB521A7@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dd9da055-fdff-452e-7222-08d87a905718
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Oct 2020 15:52:42.6939
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ydadhrt917nhZIaS5hOdi+pCA1dsAfSkz0plyIm6Uw0Hx6x0Xx0s3g2PDzrVhqPAxrwxind8sfUMmty3D819hGNCwWOSmU3s6JB2HSz+LlbIwVodSgvRvxeFeOxKaKuK
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5700
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-27_08:2020-10-26,2020-10-27 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 malwarescore=0
 mlxscore=0 lowpriorityscore=0 impostorscore=0 adultscore=0 mlxlogscore=999
 spamscore=0 phishscore=0 bulkscore=0 clxscore=1015 priorityscore=1501
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010270098

Hello, Roger!

On 10/27/20 2:55 PM, Roger Pau Monné wrote:
> On Tue, Oct 27, 2020 at 09:59:05AM +0000, Oleksandr Andrushchenko wrote:
>> Hello, all!
>>
>> While working on PCI passthrough on ARM (partial RFC was published by ARM
>> earlier this year) I tried to implement some related changes in the toolstack.
>> One of the obstacles for ARM is PCI backend’s related code presence: ARM is
>> going to fully emulate an ECAM host bridge in Xen, so no PCI backend/frontend
>> pair is going to be used.
>>
>> If my understanding correct the functionality which is implemented by the
>> pciback and toolstack and which is relevant/needed for ARM:
>>
>>    1. pciback is used as a database for assignable PCI devices, e.g. xl
>>       pci-assignable-{add|remove|list} manipulates that list. So, whenever the
>>       toolstack needs to know which PCI devices can be passed through it reads
>>       that from the relevant sysfs entries of the pciback.
>>
>>    2. pciback is used to hold the unbound PCI devices, e.g. when passing through a
>>       PCI device it needs to be unbound from the relevant device driver and bound
>>       to pciback (strictly speaking it is not required that the device is bound to
>>       pciback, but pciback is again used as a database of the passed through PCI
>>       devices, so we can re-bind the devices back to their original drivers when
>>       guest domain shuts down)
>>
>>    3. toolstack depends on Domain-0 for discovering PCI device resources which are
>>       then permitted for the guest domain, e.g MMIO ranges, IRQs. are read from
>>       the sysfs
>>
>>    4. toolstack is responsible for resetting PCI devices being passed through via
>>       sysfs/reset of the Domain-0’s PCI bus subsystem
>>
>>    5. toolstack is responsible for the devices are passed with all relevant
>>       functions, e.g. so for multifunction devices all the functions are passed to
>>       a domain and no partial passthrough is done
>>
>>    6. toolstack cares about SR-IOV devices (am I correct here?)
> I'm not sure I fully understand what this means. Toolstack cares about
> SR-IOV as it cares about other PCI devices, but the SR-IOV
> functionality is managed by the (dom0) kernel.
Yes, you are right. Please ignore #6
>
>>
>> I have implemented a really dirty POC for that which I would need to clean up
>> before showing, but before that I would like to get some feedback and advice on
>> how to proceed with the above. I suggest we:
>>
>>    1. Move all pciback related code (which seems to become x86 code only) into a
>>       dedicated file, something like tools/libxl/libxl_pci_x86.c
>>
>>    2. Make the functionality now provided by pciback architecture dependent, so
>>       tools/libxl/libxl_pci.c delegates actual assignable device list handling to
>>       that arch code and uses some sort of “ops”, e.g.
>>       arch->ops.get_all_assignable, arch->ops.add_assignable etc. (This can also
>>       be done with “#ifdef CONFIG_PCIBACK”, but seems to be not cute). Introduce
>>       tools/libxl/libxl_pci_arm.c to provide ARM implementation.
> To be fair this is arch and OS dependent, since it's currently based
> on sysfs which is Linux specific. So it should really be
> libxl_pci_linux_x86.c or similar.
This is true, but do we really have any other implementation yet?
>
>>    3. ARM only: As we do not have pciback on ARM we need to have some storage for
>>       assignable device list:
bW92ZSB0aGF0IGludG8gWGVuIGJ5IGV4dGVuZGluZyBzdHJ1Y3QgcGNpX2RldiB3aXRoDQo+PiAg
IMKgwqDCoCDigJxib29sIGFzc2lnbmVk4oCdIGFuZCBwcm92aWRpbmcgc3lzY3RscyBmb3IgbWFu
aXB1bGF0aW5nIHRoYXQsIGUuZy4NCj4+ICAgwqDCoMKgIFhFTl9TWVNDVExfcGNpX2RldmljZV97
c2V0fGdldH1fYXNzaWduZWQsDQo+PiAgIMKgwqDCoCBYRU5fU1lTQ1RMX3BjaV9kZXZpY2VfZW51
bV9hc3NpZ25lZCAodG8gZW51bWVyYXRlL2dldCB0aGUgbGlzdCBvZg0KPj4gICDCoMKgwqAgYXNz
aWduZWQvbm90LWFzc2lnbmVkIFBDSSBkZXZpY2VzKS4gQ2FuIHRoaXMgYWxzbyBiZSBpbnRlcmVz
dGluZyBmb3IgeDg2PyBBdA0KPj4gICDCoMKgwqAgdGhlIG1vbWVudCBpdCBzZWVtcyB0aGF0IHg4
NiBkb2VzIHJlbHkgb24gcGNpYmFjayBwcmVzZW5jZSwgc28gcHJvYmFibHkgdGhpcw0KPj4gICDC
oMKgwqAgY2hhbmdlIG1pZ2h0IG5vdCBiZSBpbnRlcmVzdGluZyBmb3IgeDg2IHdvcmxkLCBidXQg
bWF5IGFsbG93IHN0cmlwcGluZw0KPj4gICDCoMKgwqAgcGNpYmFjayBmdW5jdGlvbmFsaXR5IGEg
Yml0IGFuZCBtYWtpbmcgdGhlIGNvZGUgY29tbW9uIHRvIGJvdGggQVJNIGFuZCB4ODYuDQo+IEhv
dyBhcmUgeW91IGdvaW5nIHRvIHBlcmZvcm0gdGhlIGRldmljZSByZXNldCB0aGVuPyBXaWxsIHlv
dSBhc3NpZ24NCj4gdGhlIGRldmljZSB0byBkb20wIGFmdGVyIHJlbW92aW5nIGl0IGZyb20gdGhl
IGd1ZXN0IHNvIHRoYXQgZG9tMCBjYW4NCj4gcGVyZm9ybSB0aGUgcmVzZXQ/IFlvdSB3aWxsIG5l
ZWQgdG8gdXNlIGxvZ2ljIGN1cnJlbnRseSBwcmVzZW50IGluDQo+IHBjaWJhY2sgdG8gZG8gc28g
SUlSQy4NCj4NCj4gSXQgZG9lc24ndCBzZWVtIGxpa2UgYSBiYWQgYXBwcm9hY2gsIGJ1dCB0aGVy
ZSBhcmUgbW9yZSBjb25zZXF1ZW5jZXMNCj4gdGhhbiBqdXN0IGhvdyBhc3NpZ25hYmxlIGRldmlj
ZXMgYXJlIGxpc3RlZC4NCj4NCj4gQWxzbyBYZW4gZG9lc24ndCBjdXJyZW50bHkga25vdyBhYm91
dCBJT01NVSBncm91cHMsIHNvIFhlbiB3b3VsZCBoYXZlDQo+IHRvIGdhaW4gdGhpcyBrbm93bGVk
Z2UgaW4gb3JkZXIgdG8ga25vdyB0aGUgbWluaW1hbCBzZXQgb2YgUENJIGRldmljZXMNCj4gdGhh
dCBjYW4gYmUgYXNzaWduZWQgdG8gYSBndWVzdC4NCkdvb2QgcG9pbnQsIEknbGwgY2hlY2sgdGhl
IHJlbGV2YW50IHJlc2V0IGNvZGUuIFRoYW5rcw0KPg0KPj4gICDCoDQuIEFSTSBvbmx5OiBJdCBp
cyBub3QgY2xlYXIgaG93IHRvIGhhbmRsZSByZS1iaW5kaW5nIG9mIHRoZSBQQ0kgZHJpdmVyIG9u
DQo+PiAgIMKgwqDCoCBndWVzdCBzaHV0ZG93bjogd2UgbmVlZCB0byBzdG9yZSB0aGUgc3lzZnMg
cGF0aCBvZiB0aGUgb3JpZ2luYWwgZHJpdmVyIHRoZQ0KPj4gICDCoMKgwqAgZGV2aWNlIHdhcyBi
b3VuZCB0by4gRG8gd2UgYWxzbyB3YW50IHRvIHN0b3JlIHRoYXQgaW4gc3RydWN0IHBjaV9kZXY/
DQo+IEknbSBub3Qgc3VyZSBJIGZvbGxvdyB5b3UgaGVyZS4gT24gc2h1dGRvd24gdGhlIGRldmlj
ZSB3b3VsZCBiZQ0KPiBoYW5kbGVkIGJhY2sgdG8gWGVuPw0KDQpDdXJyZW50bHkgaXQgaXMgYm91
bmQgYmFjayB0byB0aGUgZHJpdmVyIHdoaWNoIHdlIHNlaXplZCB0aGUgZGV2aWNlIGZyb20gKGlm
IGFueSkuDQoNClNvLCBwcm9iYWJseSB0aGUgc2FtZSBsb2dpYyBzaG91bGQgcmVtYWluPw0KDQo+
DQo+IE1vc3QgY2VydGFpbmx5IHdlIGRvbid0IHdhbnQgdG8gc3RvcmUgYSBzeXNmcyAoTGludXgg
cHJpdmF0ZQ0KPiBpbmZvcm1hdGlvbikgaW5zaWRlIG9mIGEgWGVuIHNwZWNpZmljIHN0cnVjdCAo
cGNpX2RldikuDQpZZWFwLCB0aGlzIGlzIHNvbWV0aGluZyBJIGRvbid0IGxpa2UgYXMgd2VsbA0K
Pg0KPj4gICDCoDUuIEFuIGFsdGVybmF0aXZlIHJvdXRlIGZvciAzLTQgY291bGQgYmUgdG8gc3Rv
cmUgdGhhdCBkYXRhIGluIFhlblN0b3JlLCBlLmcuDQo+PiAgIMKgwqDCoCBNTUlPcywgSVJRLCBi
aW5kIHN5c2ZzIHBhdGggZXRjLiBUaGlzIHdvdWxkIHJlcXVpcmUgbW9yZSBjb2RlIG9uIFhlbiBz
aWRlIHRvDQo+PiAgIMKgwqDCoCBhY2Nlc3MgWGVuU3RvcmUgYW5kIHdvbuKAmXQgd29yayBpZiBN
TUlPcy9JUlFzIGFyZSBwYXNzZWQgdmlhIGRldmljZSB0cmVlL0FDUEkNCj4+ICAgwqDCoMKgIHRh
YmxlcyBieSB0aGUgYm9vdGxvYWRlcnMuDQo+IEFzIGFib3ZlLCBJIHRoaW5rIEkgbmVlZCBtb3Jl
IGNvbnRleHQgdG8gdW5kZXJzdGFuZCB3aGF0IGFuZCB3aHkgeW91DQo+IG5lZWQgdG8gc2F2ZSBz
dWNoIGluZm9ybWF0aW9uLg0KDQpXZWxsLCB3aXRoIHBjaWJhY2sgYWJzZW5jZSB3ZSBsb29zZSBh
ICJkYXRhYmFzZSIgd2hpY2ggaG9sZHMgYWxsIHRoZSBrbm93bGVkZ2UNCg0KYWJvdXQgd2hpY2gg
ZGV2aWNlcyBhcmUgYXNzaWduZWQsIGJvdW5kIGV0Yy4gU28sIFhlblN0b3JlICpjb3VsZCogYmUg
dXNlZCBhIHN1Y2gNCg0KYSBkYXRhYmFzZSBmb3IgdXMuIEJ1dCB0aGlzIGxvb2tzIG5vdCBlbGVn
YW50Lg0KDQo+DQo+PiBBbm90aGVyIGJpZyBxdWVzdGlvbiBpcyB3aXRoIHJlc3BlY3QgdG8gRG9t
YWluLTAgYW5kIFBDSSBidXMgc3lzZnMgdXNlLiBUaGUNCj4+IGV4aXN0aW5nIGNvZGUgZm9yIHF1
ZXJ5aW5nIFBDSSBkZXZpY2UgcmVzb3VyY2VzL0lSUXMgYW5kIHJlc2V0dGluZyB0aG9zZSB2aWEN
Cj4+IHN5c2ZzIG9mIERvbWFpbi0wIGlzIG1vcmUgdGhhbiBPSyBpZiBEb21haW4tMCBpcyBwcmVz
ZW50IGFuZCBvd25zIFBDSSBIVy4gQnV0LA0KPj4gdGhlcmUgYXJlIGF0IGxlYXN0IHR3byBjYXNl
cyB3aGVuIHRoaXMgaXMgbm90IGdvaW5nIHRvIHdvcmsgb24gQVJNOiBEb20wbGVzcw0KPj4gc2V0
dXBzIGFuZCB3aGVuIHRoZXJlIGlzIGEgaGFyZHdhcmUgZG9tYWluIG93bmluZyBQQ0kgZGV2aWNl
cy4NCj4+DQo+PiBJbiBvdXIgY2FzZSB3ZSBoYXZlIGEgZGVkaWNhdGVkIGd1ZXN0IHdoaWNoIGlz
IGEgc29ydCBvZiBoYXJkd2FyZSBkb21haW4gKGRyaXZlcg0KPj4gZG9tYWluIERvbUQpIHdoaWNo
IG93bnMgYWxsIHRoZSBoYXJkd2FyZSBvZiB0aGUgcGxhdGZvcm0sIHNvIHdlIGFyZSBpbnRlcmVz
dGVkDQo+PiBpbiBpbXBsZW1lbnRpbmcgc29tZXRoaW5nIHRoYXQgZml0cyBvdXIgZGVzaWduIGFz
IHdlbGw6IERvbUQvaGFyZHdhcmUgZG9tYWluDQo+PiBtYWtlcyBpdCBub3QgcG9zc2libGUgdG8g
YWNjZXNzIHRoZSByZWxldmFudCBQQ0kgYnVzIHN5c2ZzIGVudHJpZXMgZnJvbSBEb21haW4tMA0K
Pj4gYXMgdGhvc2UgbGl2ZSBpbiBEb21EL2h3ZG9tLiBUaGlzIGlzIGFsc28gdHJ1ZSBmb3IgRG9t
MGxlc3Mgc2V0dXBzIGFzIHRoZXJlIGlzDQo+PiBubyBlbnRpdHkgdGhhdCBjYW4gcHJvdmlkZSB0
aGUgc2FtZS4NCj4gWW91IG5lZWQgc29tZSBraW5kIG9mIGNoYW5uZWwgdG8gdHJhbnNmZXIgdGhp
cyBpbmZvcm1hdGlvbiBmcm9tIHRoZQ0KPiBoYXJkd2FyZSBkb21haW4gdG8gdGhlIHRvb2xzdGFj
ayBkb21haW4uIFNvbWUga2luZCBvZiBwcm90b2NvbCBvdmVyDQo+IGxpYnZjaGFuIG1pZ2h0IGJl
IGFuIG9wdGlvbi4NClllcywgdGhpcyB3YXkgaXQgd2lsbCBhbGwgYmUgaGFuZGxlZCB3aXRob3V0
IHdvcmthcm91bmRzDQo+DQo+PiBGb3IgdGhhdCByZWFzb24gaW4gbXkgUE9DIEkgaGF2ZSBpbnRy
b2R1Y2VkIHRoZSBmb2xsb3dpbmc6IGV4dGVuZGVkIHN0cnVjdA0KPj4gcGNpX2RldiB0byBob2xk
IGFuIGFycmF5IG9mIFBDSSBkZXZpY2XigJlzIE1NSU8gcmFuZ2VzIGFuZCBJUlE6DQo+Pg0KPj4g
ICDCoDEuIFByb3ZpZGUgaW50ZXJuYWwgQVBJIGZvciBhY2Nlc3NpbmcgdGhlIGFycmF5IG9mIE1N
SU8gcmFuZ2VzIGFuZCBJUlEuIFRoaXMNCj4+ICAgwqDCoMKgIGNhbiBiZSB1c2VkIGluIGJvdGgg
RG9tMGxlc3MgYW5kIERvbWFpbi0wIHNldHVwcyB0byBtYW5pcHVsYXRlIHRoZSByZWxldmFudA0K
Pj4gICDCoMKgwqAgZGF0YS4gVGhlIGFjdHVhbCBkYXRhIGNhbiBiZSByZWFkIGZyb20gYSBkZXZp
Y2UgdHJlZS9BQ1BJIHRhYmxlcyBpZg0KPj4gICDCoMKgwqAgZW51bWVyYXRpb24gaXMgZG9uZSBi
eSBib290bG9hZGVycy4NCj4gSSB3b3VsZCBiZSBhZ2FpbnN0IHN0b3JpbmcgdGhpcyBkYXRhIGlu
c2lkZSBvZiBYZW4gaWYgWGVuIGRvZXNuJ3QgaGF2ZQ0KPiB0byBtYWtlIGFueSB1c2Ugb2YgaXQu
IERvZXMgWGVuIG5lZWQgdG8ga25vdyB0aGUgTU1JTyByYW5nZXMgYW5kIElSUXMNCj4gdG8gcGVy
Zm9ybSBpdCdzIHRhc2s/DQo+DQo+IElmIG5vdCwgdGhlbiB0aGVyZSdzIG5vIHJlYXNvbiB0byBz
dG9yZSB0aG9zZSBpbiBYZW4uIFRoZSBoeXBlcnZpc29yDQo+IGlzIG5vdCB0aGUgcmlnaHQgcGxh
Y2UgdG8gaW1wbGVtZW50IGEgZGF0YWJhc2UgbGlrZSBtZWNoYW5pc20gZm9yIFBDSQ0KPiBkZXZp
Y2VzLg0KDQpXZSBoYXZlIGRpc2N1c3NlZCBhbGwgdGhlIGFib3ZlIHdpdGggUm9nZXIgb24gSVJD
ICh0aGFuayB5b3UgUm9nZXIpLA0KDQpzbyBJJ2xsIHByZXBhcmUgYW4gUkZDIGZvciBBUk0gUENJ
IHBhc3N0aHJvdWdoIGNvbmZpZ3VyYXRpb24gYW5kIHNlbmQgaXQgQVNBUC4NCg0KPg0KPiBSb2dl
ci4NCg0KVGhhbmsgeW91LA0KDQpPbGVrc2FuZHINCg==
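[Editor's note: the bookkeeping proposed in points 3-4 above can be sketched as follows. This is a hypothetical Python model of the semantics, not Xen code; the names `PciDevEntry`, `AssignableDb`, `assigned`, and `original_driver` are illustrative stand-ins for the suggested "bool assigned" field in struct pci_dev, the XEN_SYSCTL_pci_device_{set|get}_assigned / XEN_SYSCTL_pci_device_enum_assigned sysctls, and the driver re-bind record.]

```python
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class PciDevEntry:
    """Per-device state mirroring the proposed struct pci_dev extension."""
    sbdf: str                            # segment:bus:device.function, e.g. "0000:01:00.0"
    assigned: bool = False               # the proposed "bool assigned" field
    original_driver: Optional[str] = None  # driver to re-bind on guest shutdown


class AssignableDb:
    """Stands in for the 'database' role pciback plays on x86."""

    def __init__(self) -> None:
        self._devs: Dict[str, PciDevEntry] = {}

    def add(self, sbdf: str, driver: Optional[str] = None) -> None:
        # Device becomes known to the database; remember which driver we
        # seized it from so it can be re-bound later.
        self._devs[sbdf] = PciDevEntry(sbdf, original_driver=driver)

    def set_assigned(self, sbdf: str, assigned: bool) -> None:
        # Models XEN_SYSCTL_pci_device_set_assigned.
        self._devs[sbdf].assigned = assigned

    def get_assigned(self, sbdf: str) -> bool:
        # Models XEN_SYSCTL_pci_device_get_assigned.
        return self._devs[sbdf].assigned

    def enum_assigned(self, assigned: bool = True) -> List[str]:
        # Models XEN_SYSCTL_pci_device_enum_assigned: enumerate devices by state.
        return [d.sbdf for d in self._devs.values() if d.assigned == assigned]

    def driver_to_rebind(self, sbdf: str) -> Optional[str]:
        # The re-bind question of point 4: where should the original driver live?
        return self._devs[sbdf].original_driver
```

For instance, assigning a device and later re-binding it on guest shutdown would run through `add` / `set_assigned` / `driver_to_rebind` in that order.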


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 16:47:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 16:47:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12971.33468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXS7i-0002aE-Dn; Tue, 27 Oct 2020 16:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12971.33468; Tue, 27 Oct 2020 16:47:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXS7i-0002a7-Ah; Tue, 27 Oct 2020 16:47:02 +0000
Received: by outflank-mailman (input) for mailman id 12971;
 Tue, 27 Oct 2020 16:47:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zOid=EC=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kXS7g-0002a2-Vb
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 16:47:01 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75b9b1a1-092d-46c7-898a-64f76140e3e7;
 Tue, 27 Oct 2020 16:46:59 +0000 (UTC)
Received: from mail.zoho.com by mx.zohomail.com
 with SMTP id 1603817213048288.2248765640369;
 Tue, 27 Oct 2020 09:46:53 -0700 (PDT)
ARC-Seal: i=1; a=rsa-sha256; t=1603817215; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=U5cjdclwu0P5TdEv3W4TBPGw3FeqHJchlqnyY+A+LPJD86cHb6oFETDKoejn6ynuiBb5kqwGXkkj49EKWEE/6cjV4XCaL3/TFoKUn/9bP+AnzMeTFmxXVRQCA1nGpnywkU829K5kUZ1wyPE2UZ4BF3b3owJah2VNE1bzYDZOouo=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603817215; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=ZGnuTaMNeroUiaMV+En9CbWZ/kq0ioLsTMCpdOf8/js=; 
	b=KfXCpxn/pZuSVUyFOhnRw/7RLGT7UZG+vFEcxVJnaUuj2yEbpsDCRdxG3YYq+OddGR1vGOgmDu9EF4MHJL5lzkOW3YNzs4qpPCu/tNTDR97BU84muqFRbDzWpswZLJd/D4IAqAKuozR54/U97srIlUVxpJnV1E1zcayQZsBiuFs=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603817215;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=Date:From:To:Cc:Message-Id:In-Reply-To:References:Subject:MIME-Version:Content-Type;
	bh=ZGnuTaMNeroUiaMV+En9CbWZ/kq0ioLsTMCpdOf8/js=;
	b=EdmoyNjUo4ovsCEfdbAhmKF8B5S31OztI7DgcsIkVBV/7QcEsJrFxCx+8Pskz2Q1
	+cU7rnhFbvjDp+NeYf3pjToaDocF7yf3CunrCy12KHbfTuMgQStqmxfo4hjPOVSH+xk
	vUaSIDaWpxl50kD64V3AJgoP0t4IoUcocprvdXus=
Date: Tue, 27 Oct 2020 17:46:53 +0100
From: =?UTF-8?Q?Fr=C3=A9d=C3=A9ric_Pierret?= <frederic.pierret@qubes-os.org>
To: "Dario Faggioli" <dfaggioli@suse.com>
Cc: =?UTF-8?Q?=22J=C3=BCrgen_Gro=C3=9F=22?= <jgross@suse.com>,
	"George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Message-Id: <1756af48c73.12516f93769803.5250908833716445433@qubes-os.org>
In-Reply-To: <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
	 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
	 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
	 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
	 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com> <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
MIME-Version: 1.0
Content-Type: multipart/alternative; 
	boundary="----=_Part_224604_740662196.1603817213043"
Importance: Medium
User-Agent: Zoho Mail
X-Mailer: Zoho Mail

---- On Tue, 27 Oct 2020 10:22:42 +0100 Dario Faggioli <dfaggioli@suse.com> wrote ----

On Tue, 2020-10-27 at 06:58 +0100, Jürgen Groß wrote:
> On 26.10.20 17:31, Dario Faggioli wrote:
> >
> > Or did you have something completely different in mind, and I'm
> > missing
> > it?
>
> No, I think you are right. I mixed that up with __context_switch()
> not
> being called.
>
Right.

> Sorry for the noise,
>
Sure, no problem.

In fact, this issue is apparently scheduler independent. It indeed
seemed to be related to the other report we have, "BUG: credit=sched2
machine hang when using DRAKVUF", but there it looks like it is
scheduler-dependent.

Might it be something that lies somewhere else, but Credit2 is
triggering it faster/easier? (Just thinking out loud...)

For Frederic, what happens is that dom0 hangs, right? So you're able to
poke at Xen with some debug-keys (like 'r' for the scheduler's status,
and the ones for the domain's vCPUs)?

If yes, it may be useful to see the output.

Regards
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


First of all, sorry for the possible duplicates. I had network issues due to subsequent freezes (...) while writing to you, and Marek has not received my previous mails, so here is the info.

To answer your question Dario, yes, dom0 hangs totally and VMs too. In the case of `sched=credit`, I've succeeded in obtaining the output of the 'r' debug-key on the serial console:

```
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=72810702614697
(XEN) Online Cpus: 0-15
(XEN) Cpupool 0:
(XEN) Cpus: 0-15
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN)     ncpus              = 16
(XEN)     master             = 0
(XEN)     credit             = 4800
(XEN)     credit balance     = 608
(XEN)     weight             = 12256
(XEN)     runq_sort          = 996335
(XEN)     default-weight     = 256
(XEN)     tslice             = 30ms
(XEN)     ratelimit          = 1000us
(XEN)     credits per msec   = 10
(XEN)     ticks per tslice   = 3
(XEN)     migration delay    = 0us
(XEN) idlers: 00000000,00003c99
(XEN) active units:
(XEN)       1: [0.1] pri=-1 flags=0 cpu=6 credit=214 [w=2000,cap=0]
(XEN)       2: [0.4] pri=-1 flags=0 cpu=8 credit=115 [w=2000,cap=0]
(XEN)       3: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
(XEN)       4: [0.11] pri=-1 flags=0 cpu=1 credit=-55 [w=2000,cap=0]
(XEN)       5: [0.6] pri=-2 flags=0 cpu=15 credit=-177 [w=2000,cap=0]
(XEN)       6: [0.7] pri=-1 flags=0 cpu=2 credit=50 [w=2000,cap=0]
(XEN)       7: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
(XEN) CPUs info:
(XEN) CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
(XEN) CPU[00] nr_run=0, sort=996334, sibling={0}, core={0-7}
(XEN) CPU[01] current=d0v11, curr=d0v11, prev=NULL
(XEN) CPU[01] nr_run=1, sort=996335, sibling={1}, core={0-7}
(XEN)     run: [0.11] pri=-1 flags=0 cpu=1 credit=-55 [w=2000,cap=0]
(XEN)       1: [32767.1] pri=-64 flags=0 cpu=1
(XEN) CPU[02] current=d0v7, curr=d0v7, prev=NULL
(XEN) CPU[02] nr_run=1, sort=996335, sibling={2}, core={0-7}
(XEN)     run: [0.7] pri=-1 flags=0 cpu=2 credit=50 [w=2000,cap=0]
(XEN)       1: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03] current=d[IDLE]v3, curr=d[IDLE]v3, prev=NULL
(XEN) CPU[03] nr_run=0, sort=996329, sibling={3}, core={0-7}
(XEN) CPU[04] current=d[IDLE]v4, curr=d[IDLE]v4, prev=NULL
(XEN) CPU[04] nr_run=0, sort=996325, sibling={4}, core={0-7}
(XEN) CPU[05] current=d0v5, curr=d0v5, prev=NULL
(XEN) CPU[05] nr_run=1, sort=996334, sibling={5}, core={0-7}
(XEN)     run: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
(XEN)       1: [32767.5] pri=-64 flags=0 cpu=5
(XEN) CPU[06] current=d0v1, curr=d0v1, prev=NULL
(XEN) CPU[06] nr_run=1, sort=996334, sibling={6}, core={0-7}
(XEN)     run: [0.1] pri=-1 flags=0 cpu=6 credit=214 [w=2000,cap=0]
(XEN)       1: [32767.6] pri=-64 flags=0 cpu=6
(XEN) CPU[07] current=d[IDLE]v7, curr=d[IDLE]v7, prev=NULL
(XEN) CPU[07] nr_run=0, sort=996303, sibling={7}, core={0-7}
(XEN) CPU[08] current=d[IDLE]v8, curr=d[IDLE]v8, prev=NULL
(XEN) CPU[08] nr_run=2, sort=996335, sibling={8}, core={8-15}
(XEN)       1: [0.4] pri=-1 flags=0 cpu=8 credit=115 [w=2000,cap=0]
(XEN) CPU[09] current=d19v1, curr=d19v1, prev=NULL
(XEN) CPU[09] nr_run=1, sort=996335, sibling={9}, core={8-15}
(XEN)     run: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
(XEN)       1: [32767.9] pri=-64 flags=0 cpu=9
(XEN) CPU[10] current=d[IDLE]v10, curr=d[IDLE]v10, prev=NULL
(XEN) CPU[10] nr_run=0, sort=996334, sibling={10}, core={8-15}
(XEN) CPU[11] current=d[IDLE]v11, curr=d[IDLE]v11, prev=NULL
(XEN) CPU[11] nr_run=0, sort=996331, sibling={11}, core={8-15}
(XEN) CPU[12] current=d[IDLE]v12, curr=d[IDLE]v12, prev=NULL
(XEN) CPU[12] nr_run=0, sort=996333, sibling={12}, core={8-15}
(XEN) CPU[13] current=d[IDLE]v13, curr=d[IDLE]v13, prev=NULL
(XEN) CPU[13] nr_run=0, sort=996334, sibling={13}, core={8-15}
(XEN) CPU[14] current=d0v14, curr=d0v14, prev=NULL
(XEN) CPU[14] nr_run=1, sort=990383, sibling={14}, core={8-15}
(XEN)     run: [0.14] pri=0 flags=0 cpu=14 credit=-514 [w=2000,cap=0]
(XEN)       1: [32767.14] pri=-64 flags=0 cpu=14
(XEN) CPU[15] current=d0v6, curr=d0v6, prev=NULL
(XEN) CPU[15] nr_run=1, sort=996335, sibling={15}, core={8-15}
(XEN)     run: [0.6] pri=-2 flags=0 cpu=15 credit=-177 [w=2000,cap=0]
(XEN)       1: [32767.15] pri=-64 flags=0 cpu=15
```

I attempted to get '*' but that blocked my serial console; at least I was not able to interact with it a few minutes later. I'll try to get other info too. I've also uploaded the piece of this huge '*' dump here: https://gist.github.com/fepitre/36923fbc08cc2fd8bdb59b81e73a6c2e

Right after, I've restarted with the default value of 'sched' (credit2) and just a few minutes later I obtained:

'r': https://gist.github.com/fepitre/78541f555902275d906d627de2420571

'q': https://gist.github.com/fepitre/0ddf6b5e8fdb3152d24337d83fdc345e

'I': https://gist.github.com/fepitre/50c68233d08ad1e495edf7e0e146838b

Tell me if I can provide any other info from the serial console.

Regards,

Frédéric
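[Editor's note: a dump in the format above can be summarized mechanically. The helper below is my own sketch, not an existing Xen tool; it pulls the `current=` vCPU off each `CPU[NN]` line of an 'r' debug-key dump, which makes it quick to see which pCPUs are idle versus stuck running a vCPU.]

```python
import re
from typing import Dict, List

# One match per "(XEN) CPU[NN] current=<vcpu>, ..." line; the nr_run lines
# have no "current=" and are skipped automatically.
CPU_RE = re.compile(r"CPU\[(\d+)\]\s+current=(\S+?),")


def current_vcpus(dump: str) -> Dict[int, str]:
    """Map pCPU number -> the vCPU named on its 'current=' line."""
    return {int(m.group(1)): m.group(2) for m in CPU_RE.finditer(dump)}


def idle_cpus(dump: str) -> List[int]:
    """pCPUs whose current vCPU belongs to the idle domain (d[IDLE])."""
    return [cpu for cpu, v in sorted(current_vcpus(dump).items())
            if v.startswith("d[IDLE]")]
```

Run against the dump above, this shows e.g. CPU 9 pinned on d19v1 while CPUs 0, 3, 4, 7, 8 and 10-13 sit idle, which is the pattern one would want to correlate across the credit and credit2 runs.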
03,&nbsp;sibling=3D{7},&nbsp;core=3D{0-7} <br></div><div>(XEN)&nbsp;CPU[08]=
&nbsp;current=3Dd[IDLE]v8,&nbsp;curr=3Dd[IDLE]v8,&nbsp;prev=3DNULL <br></di=
v><div>(XEN)&nbsp;CPU[08]&nbsp;nr_run=3D2,&nbsp;sort=3D996335,&nbsp;sibling=
=3D{8},&nbsp;core=3D{8-15} <br></div><div>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;1:&nbsp;[0.4]&nbsp;pri=3D-1&nbsp;flags=3D0&nbsp;cpu=3D8&nbsp;=
credit=3D115&nbsp;[w=3D2000,cap=3D0] <br></div><div>(XEN)&nbsp;CPU[09]&nbsp=
;current=3Dd19v1,&nbsp;curr=3Dd19v1,&nbsp;prev=3DNULL <br></div><div>(XEN)&=
nbsp;CPU[09]&nbsp;nr_run=3D1,&nbsp;sort=3D996335,&nbsp;sibling=3D{9},&nbsp;=
core=3D{8-15} <br></div><div>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;run:&nbsp;[=
19.1]&nbsp;pri=3D-2&nbsp;flags=3D0&nbsp;cpu=3D9&nbsp;credit=3D-241&nbsp;[w=
=3D256,cap=3D0] <br></div><div>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nb=
sp;1:&nbsp;[32767.9]&nbsp;pri=3D-64&nbsp;flags=3D0&nbsp;cpu=3D9 <br></div><=
div>(XEN)&nbsp;CPU[10]&nbsp;current=3Dd[IDLE]v10,&nbsp;curr=3Dd[IDLE]v10,&n=
bsp;prev=3DNULL <br></div><div>(XEN)&nbsp;CPU[10]&nbsp;nr_run=3D0,&nbsp;sor=
t=3D996334,&nbsp;sibling=3D{10},&nbsp;core=3D{8-15} <br></div><div>(XEN)&nb=
sp;CPU[11]&nbsp;current=3Dd[IDLE]v11,&nbsp;curr=3Dd[IDLE]v11,&nbsp;prev=3DN=
ULL <br></div><div>(XEN)&nbsp;CPU[11]&nbsp;nr_run=3D0,&nbsp;sort=3D996331,&=
nbsp;sibling=3D{11},&nbsp;core=3D{8-15} <br></div><div>(XEN)&nbsp;CPU[12]&n=
bsp;current=3Dd[IDLE]v12,&nbsp;curr=3Dd[IDLE]v12,&nbsp;prev=3DNULL <br></di=
v><div>(XEN)&nbsp;CPU[12]&nbsp;nr_run=3D0,&nbsp;sort=3D996333,&nbsp;sibling=
=3D{12},&nbsp;core=3D{8-15} <br></div><div>(XEN)&nbsp;CPU[13]&nbsp;current=
=3Dd[IDLE]v13,&nbsp;curr=3Dd[IDLE]v13,&nbsp;prev=3DNULL <br></div><div>(XEN=
)&nbsp;CPU[13]&nbsp;nr_run=3D0,&nbsp;sort=3D996334,&nbsp;sibling=3D{13},&nb=
sp;core=3D{8-15} <br></div><div>(XEN)&nbsp;CPU[14]&nbsp;current=3Dd0v14,&nb=
sp;curr=3Dd0v14,&nbsp;prev=3DNULL <br></div><div>(XEN)&nbsp;CPU[14]&nbsp;nr=
_run=3D1,&nbsp;sort=3D990383,&nbsp;sibling=3D{14},&nbsp;core=3D{8-15} <br><=
/div><div>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;run:&nbsp;[0.14]&nbsp;pri=3D0&=
nbsp;flags=3D0&nbsp;cpu=3D14&nbsp;credit=3D-514&nbsp;[w=3D2000,cap=3D0] <br=
></div><div>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;1:&nbsp;[32767.1=
4]&nbsp;pri=3D-64&nbsp;flags=3D0&nbsp;cpu=3D14 <br></div><div>(XEN)&nbsp;CP=
U[15]&nbsp;current=3Dd0v6,&nbsp;curr=3Dd0v6,&nbsp;prev=3DNULL <br></div><di=
v>(XEN)&nbsp;CPU[15]&nbsp;nr_run=3D1,&nbsp;sort=3D996335,&nbsp;sibling=3D{1=
5},&nbsp;core=3D{8-15} <br></div><div>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;ru=
n:&nbsp;[0.6]&nbsp;pri=3D-2&nbsp;flags=3D0&nbsp;cpu=3D15&nbsp;credit=3D-177=
&nbsp;[w=3D2000,cap=3D0] <br></div><div>(XEN)&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;=
&nbsp;&nbsp;1:&nbsp;[32767.15]&nbsp;pri=3D-64&nbsp;flags=3D0&nbsp;cpu=3D15 =
<br></div><div>``` <br></div><div> <br></div><div>I&nbsp;attempt&nbsp;to&nb=
sp;get&nbsp;'*'&nbsp;but&nbsp;that&nbsp;blocked&nbsp;my&nbsp;serial&nbsp;co=
nsole,&nbsp;at&nbsp;least&nbsp;I&nbsp;was&nbsp;not&nbsp;able&nbsp;to&nbsp;i=
nteract&nbsp;with&nbsp;it&nbsp;few&nbsp;minutes&nbsp;later.&nbsp;I'll&nbsp;=
try&nbsp;to&nbsp;get&nbsp;other&nbsp;info&nbsp;too.&nbsp;I've&nbsp;also&nbs=
p;uploaded&nbsp;the&nbsp;piece&nbsp;of&nbsp;this&nbsp;huge&nbsp;'*'&nbsp;du=
mp&nbsp;here:&nbsp;<a class=3D"moz-txt-link-freetext" href=3D"https://gist.=
github.com/fepitre/36923fbc08cc2fd8bdb59b81e73a6c2e" target=3D"_blank">http=
s://gist.github.com/fepitre/36923fbc08cc2fd8bdb59b81e73a6c2e</a> <br></div>=
<div> <br></div><div>Right&nbsp;after,&nbsp;I've&nbsp;restarted&nbsp;with&n=
bsp;the&nbsp;default&nbsp;value&nbsp;of&nbsp;'sched'&nbsp;(credit2)&nbsp;an=
d&nbsp;just&nbsp;few&nbsp;minutes&nbsp;later&nbsp;I&nbsp;obtained: <br></di=
v><div>'r':&nbsp;<a class=3D"moz-txt-link-freetext" href=3D"https://gist.gi=
thub.com/fepitre/78541f555902275d906d627de2420571" target=3D"_blank">https:=
//gist.github.com/fepitre/78541f555902275d906d627de2420571</a> <br></div><d=
iv>'q':&nbsp;<a class=3D"moz-txt-link-freetext" href=3D"https://gist.github=
.com/fepitre/0ddf6b5e8fdb3152d24337d83fdc345e" target=3D"_blank">https://gi=
st.github.com/fepitre/0ddf6b5e8fdb3152d24337d83fdc345e</a> <br></div><div>'=
I':&nbsp;<a class=3D"moz-txt-link-freetext" href=3D"https://gist.github.com=
/fepitre/50c68233d08ad1e495edf7e0e146838b" target=3D"_blank">https://gist.g=
ithub.com/fepitre/50c68233d08ad1e495edf7e0e146838b</a> <br></div><div> <br>=
</div><div>Tell&nbsp;me&nbsp;if&nbsp;I&nbsp;can&nbsp;provide&nbsp;any&nbsp;=
other&nbsp;info&nbsp;from&nbsp;serial&nbsp;console. <br></div><div> <br></d=
iv><div>Regards, <br></div><div>Fr=C3=A9d=C3=A9ric<br></div></div><br></bod=
y></html>
------=_Part_224604_740662196.1603817213043--



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 17:18:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 17:18:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.12989.33479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXSbx-0005G6-SJ; Tue, 27 Oct 2020 17:18:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 12989.33479; Tue, 27 Oct 2020 17:18:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXSbx-0005Fz-PQ; Tue, 27 Oct 2020 17:18:17 +0000
Received: by outflank-mailman (input) for mailman id 12989;
 Tue, 27 Oct 2020 17:18:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aH5n=EC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXSbw-0005Fu-H7
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 17:18:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51f4dc21-8b7e-4ec8-9acb-5d6d3afc6ec8;
 Tue, 27 Oct 2020 17:18:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E6424AF3B;
 Tue, 27 Oct 2020 17:18:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603819095;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fZloFDfr7m7sdxGHJxewwdMeBuQdWLuvBKOZl5RMfns=;
	b=EYWrard2J9CuB9FZobmCmLT5bKgsNWAZ7exvrGAgx6aQFriOtOTQ8tKnpLzl13CeR9TGn9
	afF3Hfgl3ti/reNLoIaBoDZnyRF3RDhCh6o18ehZKgg9kiwGsYVn1CEtL9XUuJJbBdyHaW
	ZCLkM/XDoOE65C+FR3+O+rLx7HaYWsE=
Subject: Re: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, "wl@xen.org" <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Artem Mygaiev <Artem_Mygaiev@epam.com>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh
 <Rahul.Singh@arm.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <ad25a5ba-f44c-4e88-f3b0-e0baa5efc5f6@epam.com>
 <20201027125104.axv26kdqljqsvufn@Air-de-Roger>
 <ac342c70-8fbb-023d-00b3-4a1bc1d389a7@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7f98534d-39fd-cbcb-13dd-bbbd994251f0@suse.com>
Date: Tue, 27 Oct 2020 18:18:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <ac342c70-8fbb-023d-00b3-4a1bc1d389a7@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.10.2020 16:52, Oleksandr Andrushchenko wrote:
> On 10/27/20 2:55 PM, Roger Pau Monné wrote:
>> On Tue, Oct 27, 2020 at 09:59:05AM +0000, Oleksandr Andrushchenko wrote:
>>>    5. An alternative route for 3-4 could be to store that data in XenStore, e.g.
>>>       MMIOs, IRQ, bind sysfs path etc. This would require more code on Xen side to
>>>       access XenStore and won’t work if MMIOs/IRQs are passed via device tree/ACPI
>>>       tables by the bootloaders.
>> As above, I think I need more context to understand what and why you
>> need to save such information.
> 
> Well, with pciback absence we lose a "database" which holds all the knowledge
> 
> about which devices are assigned, bound etc.

What hasn't become clear to me (sorry if I've overlooked it) is
why some form of pciback is not an option on Arm. Where it would
need to run in your split hardware-domain / Dom0 setup (if I got
that right in the first place) would be a secondary question.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 17:41:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 17:41:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13000.33491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXSyQ-0007kc-UG; Tue, 27 Oct 2020 17:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13000.33491; Tue, 27 Oct 2020 17:41:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXSyQ-0007kV-Qi; Tue, 27 Oct 2020 17:41:30 +0000
Received: by outflank-mailman (input) for mailman id 13000;
 Tue, 27 Oct 2020 17:41:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vej7=EC=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXSyP-0007kP-DJ
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 17:41:29 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b7a67d3-fba9-4632-96f4-d737a761ccfd;
 Tue, 27 Oct 2020 17:41:27 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.3 DYNA|AUTH)
 with ESMTPSA id D03373w9RHfQ1cx
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 27 Oct 2020 18:41:26 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603820486;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-ID:Subject:To:From:Date:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=dWOUItY+RY70D0NE+mnLyN9YKkUEgUs8fxinAALuBwo=;
	b=tnNxfJsGYtWWembJ8P4jxCw351zppgZsh+p1YAE4yFxsdImsoNghgN2Y0W3hU14DsR
	V2a+JlcEXmF14JW5NR36jD4Um7IEunaekudXFgE16AJFydHKvxXm217cMeji4jGD7jVw
	I8pBJPY406mTnSCaFe4UetrVXHPaSNd5aP/PmUFvvt2H4+OfE1/NLEUmVwvLp90d3D9S
	dAJlIBdxhVDlmYFijmmiOZ4FEm3kAYLG6rd2NYkc9BuAL7QI8TnLOnQ3lfToRWoL6LwW
	hsNNM0tuJbjOHimq5w03tAMpWBvxAaSvXnLBfg3OAvaIFQTvEwQAxifulgQDC2nu7rm8
	Vrpg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Date: Tue, 27 Oct 2020 18:41:19 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org
Subject: inconsistent pfn type checking in process_page_data
Message-ID: <20201027184119.1d3f701e.olaf@aepfle.de>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/IUCnAdpwlNNSU68hTJ7Tgwj"; protocol="application/pgp-signature"

--Sig_/IUCnAdpwlNNSU68hTJ7Tgwj
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Andrew,

with commit 93c2ff78adcadbe0f8bda57eeb30b1414c966324 a new function process_page_data was added. While filling the mfns array for xenforeignmemory_map, the individual pfn types[] are checked in a different way than the checking of the result of the mapping attempt.

Is there a special reason why the first loop uses the various TAB values, and the other loop checks XTAB/BROKEN/XALLOC? The sending side also uses the latter.


Olaf

--Sig_/IUCnAdpwlNNSU68hTJ7Tgwj
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+YW78ACgkQ86SN7mm1
DoCJbxAAojX1wTd2kFu33GQqzlHl3MmJswVFzS2TlpZHqm6fjQ8wZ10qZp+a7jTW
ZpuEoJGsFzHPGLWfQ9PRuQlGXGYdov5ZmM2M8OVyBWTYi0BwseP6EpDfnIidbLpP
BF9g0Q6H7pckvHsoru1+1X68iah00HvovwZAxVuLtQQbufAQ0Ov/qAgM225LSw9x
7sCHBBUihyrwN18PsLRd0ByqIJwkYryX5q1UObkCrpDzs8XiVwusZSYeXmIZDIf9
qXQxgnyPonZU69sqz+ym4z5d3NdJnVDPJ/eAi5EnLjZbh09258kpy+tEVyXsSiwP
EczSwXMIrTi4nv+vE8hr4wecGHf11Sj+b2dEb4imU7ijQmTdiazfIxASjRW/5w9b
ephGwYrvBZaOYOtzsokLFa1c7zvvQnnLqe/UndAm1sBoOZuTGMG/rOHuIrmZhM7d
+cmYVGcwi3VUAPVcNC8VHLHzKxCwDwBDDhBuTY8cPAbUBvN1rmjHbLHC4AmDhdZh
m2gwXWB8Wk5iGEfm8nuZ7d1Hdl4oKyHstMNyWtWfRnJBGeiLEru1tvpKMByTIQhl
tsP3InZ1cxbarfmZ6RqJ5KDzCe+ZVSVMoENBO9T8mDialSSeCCw4DBmhYAmBA8f1
K0xBE0uaJH6gY9Z9mtMz9gWdkt5HT0ff85RdfwWi6/IywckYt/w=
=SMAb
-----END PGP SIGNATURE-----

--Sig_/IUCnAdpwlNNSU68hTJ7Tgwj--


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 17:43:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 17:43:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13003.33503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXT05-0007u6-9A; Tue, 27 Oct 2020 17:43:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13003.33503; Tue, 27 Oct 2020 17:43:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXT05-0007ty-68; Tue, 27 Oct 2020 17:43:13 +0000
Received: by outflank-mailman (input) for mailman id 13003;
 Tue, 27 Oct 2020 17:43:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXT03-0007tt-Vv
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 17:43:12 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a5fbf3f4-e7cb-4bd6-b796-07ce57863753;
 Tue, 27 Oct 2020 17:43:11 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E2B912054F;
 Tue, 27 Oct 2020 17:43:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603820590;
	bh=7PAIrARo+U5BjdgOi0yQlsQrBkmZ+IVZWY4TopAO5S0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=MMM+0N60ODKjqcP7/UwhGGndw5EYDfe/PwXyJI+6tNbzrhUuGH+SoWyHcSgzjOqNz
	 fovF3qvrYsGblmTK/PtbuY+/wlQATssJpiclYPtzUGafdoOhjwqfgsoQBeFL/zjs2R
	 wQhW8o38W5vMUl+wL+KE9PxS4CW7FEyLrSSwW2SI=
Date: Tue, 27 Oct 2020 10:43:09 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, 
    xen-devel@lists.xenproject.org, hch@lst.de
Subject: Re: [PATCH] fix swiotlb panic on Xen
In-Reply-To: <20201027133701.GB6077@char.us.oracle.com>
Message-ID: <alpine.DEB.2.21.2010271041490.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s> <20201027133701.GB6077@char.us.oracle.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 27 Oct 2020, Konrad Rzeszutek Wilk wrote:
> On Mon, Oct 26, 2020 at 05:02:14PM -0700, Stefano Stabellini wrote:
> > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > 
> > kernel/dma/swiotlb.c:swiotlb_init gets called first and tries to
> > allocate a buffer for the swiotlb. It does so by calling
> > 
> >   memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
> > 
> > If the allocation must fail, no_iotlb_memory is set.
> > 
> > 
> > Later during initialization swiotlb-xen comes in
> > (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
> > is != 0, it thinks the memory is ready to use when actually it is not.
> > 
> > When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
> > and since no_iotlb_memory is set the kernel panics.
> > 
> > Instead, if swiotlb-xen.c:xen_swiotlb_init knew the swiotlb hadn't been
> > initialized, it would do the initialization itself, which might still
> > succeed.
> > 
> > 
> > Fix the panic by setting io_tlb_start to 0 on swiotlb initialization
> > failure, and also by setting no_iotlb_memory to false on swiotlb
> > initialization success.
> 
> Should this have a Fixes: flag?

That would be

Fixes: ac2cbab21f31 ("x86: Don't panic if can not alloc buffer for swiotlb")



> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > 
> > 
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index c19379fabd20..9924214df60a 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -231,6 +231,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> >  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
> >  	}
> >  	io_tlb_index = 0;
> > +	no_iotlb_memory = false;
> >  
> >  	if (verbose)
> >  		swiotlb_print_info();
> > @@ -262,9 +263,11 @@ swiotlb_init(int verbose)
> >  	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
> >  		return;
> >  
> > -	if (io_tlb_start)
> > +	if (io_tlb_start) {
> >  		memblock_free_early(io_tlb_start,
> >  				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
> > +		io_tlb_start = 0;
> > +	}
> >  	pr_warn("Cannot allocate buffer");
> >  	no_iotlb_memory = true;
> >  }
> > @@ -362,6 +365,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
> >  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
> >  	}
> >  	io_tlb_index = 0;
> > +	no_iotlb_memory = false;
> >  
> >  	swiotlb_print_info();
> >  
> 


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 17:45:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 17:45:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13010.33524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXT2h-00086X-QQ; Tue, 27 Oct 2020 17:45:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13010.33524; Tue, 27 Oct 2020 17:45:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXT2h-00086Q-N5; Tue, 27 Oct 2020 17:45:55 +0000
Received: by outflank-mailman (input) for mailman id 13010;
 Tue, 27 Oct 2020 17:45:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y/3m=EC=epam.com=prvs=8569373f95=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kXT2g-00086L-Tl
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 17:45:55 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63d278ad-0552-4652-b109-cd6e2f16e7fd;
 Tue, 27 Oct 2020 17:45:52 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09RHjgIH026592; Tue, 27 Oct 2020 17:45:46 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2108.outbound.protection.outlook.com [104.47.18.108])
 by mx0b-0039f301.pphosted.com with ESMTP id 34e54t3g8p-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 27 Oct 2020 17:45:46 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM9PR03MB6769.eurprd03.prod.outlook.com (2603:10a6:20b:284::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Tue, 27 Oct
 2020 17:45:44 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%8]) with mapi id 15.20.3477.029; Tue, 27 Oct 2020
 17:45:44 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FpNNHHQG/GlLyivcHn2uUx5U1NBgqlmw36XjSD4XPaGeDizIvcHmb+6n2HHHiTPxVOZXys6hom9+yeyfyl9/3nrtsu++GKHuK/tTuwBwBrKtJ//ZgSt0sltDymluSsREJq1uI2hTKMlfQZvWfKxTRmr8XesyjpMdfTHhGgtMQfcGM0WB5Kugbz6aRvU/GBR35fpcdK3LIiRUqJCpui8A4IaeMU6Tq8vHmzoMbrj6KvF5bSxBEa8VZCUAxxwgGAh8dF2QiabQK2cPUDrWIIa94v6HhFfEPVUo5JwyjNLc/3PZ64u9EnJalMl1MBxdzbILUICCDytHD10pF9eyGgbvbw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QxMtAn0oVQ8AIZRTVRlBeBKcXCog6gx3h8ndcnk02q0=;
 b=d+wgvZZ475zOBXTJ8Mx1XBBQjIrYfvrULco5ntlZr82YCaybkyI46zJ5S81R/aXhnjjqWHq74Q8G2KPutsPb4BGb7RsgloLEVEn1ZN3SIYv4USGMzCSzVgN3O664BcJXKyj2cGEGmqPm2FSyJ8TSSq+HMT4QUlzHovyAfEI01IpRRY881tkmqnxIBUV2QuII/Xm9QIEbaxKhInmNQgrrxf5UbzMj83YVc7Ki0iY9y5IinIFDbwGaeyReo/CxxmlT8Jj2A5RBwiP1p+3rOBDuZa4EuQ4SbaC1DGeQRJjfa9OU2LGRaA9I/SzrdFmAAeRqCVeYLAwaLYyrObjxsL7Wbw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QxMtAn0oVQ8AIZRTVRlBeBKcXCog6gx3h8ndcnk02q0=;
 b=54NKx9K+xVydgGkNwyQyZE5FpK4+SazerDxtz3zptoDHiuNiKaX4XNNBwg6gbbxE2GJRVgYgiOWbCqJII3r/sSM9tLWwK0p4gC0GDotdi4ex11P/K5trE6RAl5ZK8QGKKvSzhAwwWi6bOc58kOgQfO+ryvrY0eLyoHFHAbwHo2eIz0wrvAbtFuYxftxXPANznfz/MEVIF6dAanPflR+bKlZoJ8knddiRSnrPOZP6vA/lyrC3l4l4SjZxHwjeOMsRf3iSyWW1nyaBgcO6cHlm9QVKMTKilptSm+0YdIboPygeD4Y7ZKxhs51Hh8dvFEYNK4npWT+XIxn5KIyZxWph0A==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>,
        "andrew.cooper3@citrix.com"
	<andrew.cooper3@citrix.com>,
        "george.dunlap@citrix.com"
	<george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        "wl@xen.org"
	<wl@xen.org>, "paul@xen.org" <paul@xen.org>,
        Artem Mygaiev
	<Artem_Mygaiev@epam.com>,
        Oleksandr Tyshchenko
	<Oleksandr_Tyshchenko@epam.com>,
        xen-devel <xen-devel@lists.xenproject.org>,
        Rahul Singh <Rahul.Singh@arm.com>,
        Roger Pau Monné
	<roger.pau@citrix.com>
Subject: Re: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
Thread-Topic: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
Thread-Index: AQHWrEfNmyZZUV4zgEum5lDBP9dPxqmraEMAgAAxlICAABfnAIAAB66A
Date: Tue, 27 Oct 2020 17:45:44 +0000
Message-ID: <1e8b43e5-7d44-a061-f60a-00f75eb19b8b@epam.com>
References: <ad25a5ba-f44c-4e88-f3b0-e0baa5efc5f6@epam.com>
 <20201027125104.axv26kdqljqsvufn@Air-de-Roger>
 <ac342c70-8fbb-023d-00b3-4a1bc1d389a7@epam.com>
 <7f98534d-39fd-cbcb-13dd-bbbd994251f0@suse.com>
In-Reply-To: <7f98534d-39fd-cbcb-13dd-bbbd994251f0@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 531fa3a9-6f30-4def-71a6-08d87aa0216d
x-ms-traffictypediagnostic: AM9PR03MB6769:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM9PR03MB6769A8B46F495E981264226FE7160@AM9PR03MB6769.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 fiMnY5MVas6HamH5Dd15gokfS/digVvZjXTi17DxN9NU+4ZAz3oHKurTuWxzTEvlQwIwdJYwxpouRAKMCTVlJ6+YbAfkLb0w6UcgTzIYwWZ5LSdpluhIzL42emsvPHE5Cx4lY+3/pYbRvJ8ucvwImkebI9cFViZVwk/fEvg7C2JlDxze++UyvakniyifPoNCEMUE+1GGREzWixXhFr1kNdbr4lJQfOAlRnpWsGEirpWlBXDUE1q176kaNWvl9ULxNjX+Izg/HpQEi2ycdcrSdg4oiUbMgW9YqTteY/iKlvDUIjP6dE03ii9Ecv2oSK0XYgR0rT6RdqSKSxFI4ijVhg==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(346002)(376002)(366004)(136003)(86362001)(8936002)(4326008)(7416002)(36756003)(6512007)(478600001)(2906002)(83380400001)(31696002)(53546011)(6506007)(71200400001)(66446008)(64756008)(66556008)(91956017)(186003)(5660300002)(2616005)(66476007)(6486002)(8676002)(26005)(66946007)(76116006)(6916009)(316002)(31686004)(54906003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 y2Lne5/j+lWGGznvKszfM5BbZLRcTB9JTf4p8iMk2evh2smX/paefTFfT7tA5YoWi0lACqP39LSlWg/N4TqFP2uLbYTQY3jpXD+0xIFUdhJb43nl3DtnatP8HJIs4fAAwIOzTJoB4I/Zuzgldm/pK0Uvy5VTljRJd9E/zQkn6Ggl7NslnL69BzhS9Ro0iKPgMMJsHAEsoTVcP42qfyuTeGturaViRkfeGYxre76yuBIySr2qyPwQSNe3clCl1xOahVtIJtfMmKFAAyia4DKt9f90FLIcbATYhLhxnDPjUL9hxr2/QjkO99Pzf4QlaJlVBRn0zYCSct1+PLC8vxNnpAEiBPqmJrfk0FkiraGn37vVJMCgCQbyKGhs56l5LnuF5M9paX/ksYiZxQCEpgwXGU2jkvR6MUxNhuxr56+O9bcQDovjJOHarTW9flmqNgVWWSdPc8FDAoxPAAt4RObuf2wtzy4AIeKN8eYK8zpNgdAvLHzlZhNtcJCrO9oTlvw85bjkkTtZsTkXCOFXD89FS/ZwM6xZQGrONaj3T2DcofivjcO5kZk3fWbUjGgXZUl+CiAnoQfE+DqJdt1EUdNYRy3UUw4Sf7vdbaH3zUoBW2X2TCg0caHytt4Xv1/gZVRAe/mKLg06fE8E7969wfQFIQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <2A32E5BEBD11A14FB55F700B1E604478@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 531fa3a9-6f30-4def-71a6-08d87aa0216d
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Oct 2020 17:45:44.6404
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: EgGk9QSfUp8W+ttftuZ7QtNAwyiYloHz0GMggaLbDhmdx7n88Ce1saDWgr1RNY16nFiu0VHS6nY9lSJlePYoSNezabqcpw815oY2YbpqvGVBDXaBH5HlDAzDgfK4LogX
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB6769
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-27_10:2020-10-26,2020-10-27 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 spamscore=0
 bulkscore=0 clxscore=1015 priorityscore=1501 phishscore=0 mlxscore=0
 lowpriorityscore=0 adultscore=0 malwarescore=0 mlxlogscore=999
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010270106

On 10/27/20 7:18 PM, Jan Beulich wrote:
> On 27.10.2020 16:52, Oleksandr Andrushchenko wrote:
>> On 10/27/20 2:55 PM, Roger Pau Monné wrote:
>>> On Tue, Oct 27, 2020 at 09:59:05AM +0000, Oleksandr Andrushchenko wrote:
>>>>    5. An alternative route for 3-4 could be to store that data in XenStore, e.g.
>>>>       MMIOs, IRQ, bind sysfs path etc. This would require more code on Xen side to
>>>>       access XenStore and won't work if MMIOs/IRQs are passed via device tree/ACPI
>>>>       tables by the bootloaders.
>>> As above, I think I need more context to understand what and why you
>>> need to save such information.
>> Well, with pciback absent we lose a "database" which holds all the
>> knowledge about which devices are assigned, bound etc.
> What hasn't become clear to me (sorry if I've overlooked it) is
> why some form of pciback is not an option on Arm.

Yes, it is probably possible to run pciback even without running pcifront
instances in the guests and to only use the functionality needed by the
toolstack. We could even have it as is, without modifications: given that
pcifront won't run, the parts of pciback related to PCI config space, MSI
etc. would simply not be used, while still being present in the driver.
We can try that (pciback is x86-only in the kernel at the moment).

> Where it would
> need to run in your split hardware-domain / Dom0 setup (if I got
> that right in the first place) would be a secondary question.

This actually becomes a problem if we think about hwdom != Dom0: the Dom0
toolstack wants to read the PCI bus sysfs and it also wants to access
pciback's sysfs entries. So, for Dom0's toolstack to read sysfs in this
scenario, we need a bridge between Dom0 and the hwdom giving access to both
the PCI subsystem and pciback's sysfs. This could be implemented as a
backend-frontend pair with a ring and an event channel, as PV drivers do.
This approach will of course require the toolstack to work in two modes:
local sysfs/pciback access and remote access.

In the remote access model the toolstack will need to create a connection
to the hwdom each time it runs and requires sysfs data, which should be
acceptable. It could even always use the remote model, including when it
runs locally, so that the toolstack's code supports a single model for all
the use-cases.

(I have never checked whether it is possible to run both backend and
frontend in the same VM, though.)

> Jan

Thank you,

Oleksandr
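
A minimal sketch of such a backend-frontend pair, modelled loosely on the
existing PV ring protocols: all names, sizes and fields below (sysfsif_*,
the 64-byte buffers, the in-struct ring) are purely illustrative, and a
real protocol would live in a public io/ header and use grant-mapped shared
memory, memory barriers on the producer/consumer indices, and an event
channel for notification.

```c
/* Illustrative sketch only -- not an actual Xen protocol. A shared ring
 * carrying sysfs read requests from the Dom0 toolstack (frontend) to the
 * hardware domain (backend), which owns the real PCI/pciback sysfs. */
#include <stdint.h>
#include <string.h>

#define SYSFSIF_RING_SIZE 8          /* power of two, as PV rings require */

struct sysfsif_request {
    uint32_t id;                     /* echoed back in the response */
    char     path[64];               /* e.g. a pciback sysfs node */
};

struct sysfsif_response {
    uint32_t id;                     /* copied from the request */
    int32_t  status;                 /* 0 on success, -errno otherwise */
    char     data[64];               /* node contents (truncated) */
};

struct sysfsif_ring {
    uint32_t req_prod, req_cons;     /* free-running indices */
    uint32_t rsp_prod, rsp_cons;
    struct sysfsif_request  req[SYSFSIF_RING_SIZE];
    struct sysfsif_response rsp[SYSFSIF_RING_SIZE];
};

/* Frontend (toolstack) queues a request; real code would then notify the
 * backend through an event channel. */
static void sysfsif_put_request(struct sysfsif_ring *r, uint32_t id,
                                const char *path)
{
    struct sysfsif_request *req = &r->req[r->req_prod++ % SYSFSIF_RING_SIZE];

    req->id = id;
    strncpy(req->path, path, sizeof(req->path) - 1);
    req->path[sizeof(req->path) - 1] = '\0';
}

/* Backend (hwdom) consumes one request and produces the matching response;
 * here the sysfs read itself is stubbed out as 'contents'. */
static void sysfsif_serve_one(struct sysfsif_ring *r, const char *contents)
{
    struct sysfsif_request  *req = &r->req[r->req_cons++ % SYSFSIF_RING_SIZE];
    struct sysfsif_response *rsp = &r->rsp[r->rsp_prod++ % SYSFSIF_RING_SIZE];

    rsp->id = req->id;
    rsp->status = 0;
    strncpy(rsp->data, contents, sizeof(rsp->data) - 1);
    rsp->data[sizeof(rsp->data) - 1] = '\0';
}
```

The point of the sketch is only the shape of the exchange: one request and
one matched response per sysfs access, exactly the pattern the toolstack
would repeat each time it runs and needs remote data.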


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 17:57:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 17:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13016.33536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXTDk-0000dk-W9; Tue, 27 Oct 2020 17:57:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13016.33536; Tue, 27 Oct 2020 17:57:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXTDk-0000dd-Ro; Tue, 27 Oct 2020 17:57:20 +0000
Received: by outflank-mailman (input) for mailman id 13016;
 Tue, 27 Oct 2020 17:57:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x4TT=EC=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kXTDj-0000dY-DF
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 17:57:19 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.89]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd0dd83c-ed96-4c5a-8bff-6fee2d90b3e2;
 Tue, 27 Oct 2020 17:57:17 +0000 (UTC)
Received: from MRXP264CA0038.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:14::26)
 by VE1PR08MB5821.eurprd08.prod.outlook.com (2603:10a6:800:1b2::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Tue, 27 Oct
 2020 17:57:14 +0000
Received: from VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:14:cafe::6f) by MRXP264CA0038.outlook.office365.com
 (2603:10a6:500:14::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Tue, 27 Oct 2020 17:57:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT011.mail.protection.outlook.com (10.152.18.134) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.18 via Frontend Transport; Tue, 27 Oct 2020 17:57:14 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Tue, 27 Oct 2020 17:57:12 +0000
Received: from bfabdf19ea4b.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0712E98F-97FF-44C4-B625-9EAB07C6B910.1; 
 Tue, 27 Oct 2020 17:57:05 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bfabdf19ea4b.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 27 Oct 2020 17:57:05 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1654.eurprd08.prod.outlook.com (2603:10a6:4:3a::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Tue, 27 Oct
 2020 17:57:03 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Tue, 27 Oct 2020
 17:57:03 +0000
X-Inumbo-ID: fd0dd83c-ed96-4c5a-8bff-6fee2d90b3e2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DqWwbo+0I5dCDOVwwtBJrTZ2XYC/jsTBsx7WLk6ov+4=;
 b=mLy9myRFz0EYx5yhCvdKIIHbyUjVAU9GeOuA42WcM7vp4HVgwwMyMOiP5dgHrdk/G9DojPKN2/OH3lkvNFDoD1o0ZW7rWAKbPE1WF4tVOJol7QXdBf/m76OGQJQEjKvvYyvHLOS/QN4nLcoFaAflWDTuPMgJcw9x4Zw6lwdzlLc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: e2869ff47589fdbd
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XfyUj8WzktkOiKbz7kxkX4CUgYMJZ9DIY/ctqdnMffaivQDovhzLc2/yC/tofuAxsWPhut35xVuGv6Ee0HRvluYKaE8qESu+IeF8n7AOzy+GBnSImqLnR+s+DZ5k0VP6Y7ozoRir5s9gd8ap96KJuNGpC7kMoqT4bP+Y8HM6UG9nMbsBwnEVyOS+dSaWDb3cz3cUi/LFAc26pj5qpe9otcTihEsSpeRQP1zJEI9q2BcoMcennAnE8mbS6biTPYPdDan6e+0ekVmk1cI2eaKExVu7C2xcG4DYE+mqKdgtfTuKxfQL+7wMIj+PuqSXCzszfyJZ5fY304HuDfX4SO/LLg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DqWwbo+0I5dCDOVwwtBJrTZ2XYC/jsTBsx7WLk6ov+4=;
 b=Zoj6z8wux9j0JbrW3QxEUikQKWq8tjDM3BYn36FWGcdvBlCjZ4TuT/q91FW0AEPTZy3Z3DpONxGLv6zI7bEAJR8Lct/VQn9gKdrqqcVgEcll0HJmY6Oweua/yyiLV3HOF7TTgHtYOy63uyC8P8mgNIO5FEcQ6V45zwAaHO7aMRwuv3ur1bqD7zmZQltvgTj6Y43VWrxrfTmx00scuXL8qw9al6V5VhFUQJCZpAIuQEKsGf3K6GX7bljl7r3C6uydUt9hg8DHImLx7URUuYMR1Td7pXEfkEhSeukSoH4xvWZlj/DtOr82vZwunIFqWlE99XuWEI6257drzw6YijP+BA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Ash Wilding <Ash.Wilding@arm.com>, Rahul Singh <Rahul.Singh@arm.com>,
	Stefano Stabellini <sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvyGh0jWzCHpk6w/U7XN+08x6mgw0AAgAEzsQCAAWItgIABA+GAgADBXQCAABfAAIAAGI8AgAAORwCABG97AIAAEvKAgABzxQCAAUKcAIAABQSAgAA3qQA=
Date: Tue, 27 Oct 2020 17:57:02 +0000
Message-ID: <15D396A4-707C-44C2-B6F0-87BDB42171C4@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <BF2E5EF7-575B-4A8F-BC00-3F2B73754886@arm.com>
 <9cf9f8d3-b699-de3c-781f-f7ad1b498899@xen.org>
 <F573C8D3-3473-43CD-BA98-8D59E0188AF8@arm.com>
 <13baac40-8b10-0def-4e44-0d8f655fcaf1@xen.org>
In-Reply-To: <13baac40-8b10-0def-4e44-0d8f655fcaf1@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d2edb1bc-7c4e-47d1-e6d9-08d87aa1bc85
x-ms-traffictypediagnostic: DB6PR0801MB1654:|VE1PR08MB5821:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VE1PR08MB582133756C4D1DB2917B8E759D160@VE1PR08MB5821.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 V76SM3TRk37lRkzezuHvzZDpvgPjJK5Cy68yQ1ap5x5fC/FXv8+w433RJem/FikIhTsJ8jzw1xrYvIDbBK5qDN5HjXBheKguDrnpeU12jkykoJs9cdci93GRADwRy81x0LS4LeTI2ENKjN9bLTrB4Zf7LTUxrOiaussecUfHgZPqNMCiKJAdWbZjOiZz94SA7DlS3QGvC4Zor3XbalTxMgiL/+8k5tGGcxrGwanVA+vBr9lJmT6kHI+rjYtAbDTGxc4Wn09KyCPxCT6mHdbnf6H9CK2Ca1wA1ZFeerHLM41eBtnQIK0A1QYHeQwEYeIoYxPw0Fh/Ni93EzXmz5JcEQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(376002)(346002)(366004)(136003)(39860400002)(478600001)(83380400001)(66946007)(6916009)(4326008)(26005)(36756003)(316002)(2906002)(186003)(66476007)(76116006)(66556008)(64756008)(91956017)(6506007)(53546011)(66446008)(86362001)(5660300002)(6512007)(2616005)(33656002)(6486002)(71200400001)(8676002)(54906003)(8936002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 frmrzOZGLXEzDCxey3GlPL/QIXRmaMq+sXdxNKklB7WbG211MJ+MFRiDce59umgKNuBmP5rxfrX4Q1SfX6OIcWtycwECL9t3QDxaddhWuK3cXr7+/bTU0NgdR4OPrJhatx45AbA7Rc52hN3MIBSWFN3+LXOOPYAtfmvOf1G2vyenG22Ic1riIg6slTRLkRgfs6D3cxEcLuK+c7EhF757ClzjPrH7csfas29XM9jpaD0yG2T14E15Q+xLdMAgGKrfx3oTvTeImlQ4FM3Hh1UAaaGYI/CBGw6y3urxC7fgsltMAyATs6NPrBB6kn44+4qaKB/6wjzuscqUFPXFYNZ7+2o/VB9/aX1QmDQLfaD386waprEOvWbxFdMLXTjXETkFmYFqH0rPNINirZB2rFzST2kbwwm8CerDQ/eHiWI9gpdBpk/mwZrR5XzeLFT2QxPz0kLgsYIReUmdD7zUl7S2rsbS4Z3K9wd64WQ3ySiWZuCljAunJ01cYusnbG99xmkHP77aXtdcmaZWuNQoB0QSVhIpWpn6rTkWSII0AwRh0Pma50hGtO7RjO8AYgB1y1oDMS/9yPHq+RFcq8tycMidn0rT+b7f+BGlOiIUWwmoWVXWOuohA6Ds1xqCfMnonWW2b5pl/0eSmo4qJSJWnmNXew==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <B7556D11E69D184EB103F98D2E904F4A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1654
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	16aabdf8-b1bc-446c-de32-08d87aa1b5ca
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7KBnhaw+EMCV7hIxm55EKm9PablEQicWaHL0luYsF5ErswnVavhiR7hC/U8cuhRhM2RmpHr0G+1dzHpFmdTWKeRu8ld0vmTFzGBFatrAQ5I087gRR3bkeQIxnNImYAsXnf48tjNS0+mG8bv0N6tkFGXhgAJbJxxBSE7h2ixejMeLcsBA94xQyxPyuKVA73RLDs4C0AUn4aGW0ib0+ExOOpZJt654KMm9lWrBLr0t8H3FxMwsPjYD2X5t8LscBQh3YnIKNXoyAfPZjLjoFgOGEX74cXfw00jYrXAtJRA855/2lKD5N7jslvvCQ9GvaFjQIECsR7Sbm4xO0gDC+RlBVT3LLImkn3QdshZUvnkS/6FDh1fhUQCe1H7pR2QFFotnY1Mij3IB+Qs33VjKzja/+A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(136003)(39850400004)(46966005)(6862004)(81166007)(4326008)(36906005)(186003)(107886003)(36756003)(356005)(54906003)(83380400001)(86362001)(33656002)(8676002)(478600001)(70586007)(70206006)(6512007)(6506007)(47076004)(336012)(6486002)(316002)(82310400003)(2616005)(53546011)(26005)(8936002)(2906002)(82740400003)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Oct 2020 17:57:14.2121
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d2edb1bc-7c4e-47d1-e6d9-08d87aa1bc85
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT011.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5821

Hi Julien,

> On 27 Oct 2020, at 14:37, Julien Grall <julien@xen.org> wrote:
>
> On 27/10/2020 14:19, Bertrand Marquis wrote:
>> Hi Julien,
>
> Hi Bertrand,
>
>>> On 26 Oct 2020, at 19:05, Julien Grall <julien@xen.org> wrote:
>>>
>>> On 26/10/2020 12:10, Ash Wilding wrote:
>>>> Hi,
>>>
>>> Hi Ash,
>>>
>>>>> 1. atomic_set_release
>>>>> 2. atomic_fetch_andnot_relaxed
>>>>> 3. atomic_cond_read_relaxed
>>>>> 4. atomic_long_cond_read_relaxed
>>>>> 5. atomic_long_xor
>>>>> 6. atomic_set_release
>>>>> 7. atomic_cmpxchg_relaxed: we may be able to use the atomic_cmpxchg
>>>>>    already implemented in Xen; this needs checking.
>>>>> 8. atomic_dec_return_release
>>>>> 9. atomic_fetch_inc_relaxed
>>>> If we're going to pull in Linux's implementations of the above atomics
>>>> helpers for SMMUv3, and given the majority of SMMUv3 systems are v8.1+
>>>> with LSE, perhaps this would be a good time to drop the current
>>>> atomic.h in Xen completely and pull in both Linux's LL/SC and LSE
>>>> helpers,
>>>
>>> When I originally answered to the thread, I thought about suggesting
>>> importing LSE. But I felt it was too much to ask in order to merge the
>>> SMMUv3 code.
>>>
>>> However, I would love to have support for LSE in Xen, as this would
>>> solve another not yet fully closed security issue with LL/SC (see
>>> XSA-295 [2]).
>>>
>>> Would Arm be willing to add support for LSE before merging the SMMUv3?
>> We are willing to work on LSE, but I cannot commit my team or myself to
>> start working on this subject before the end of the year.
>
> That's fine. There are other ways we can bridge the gap between Xen and
> Linux in order to get the latest version (see more below).
>
>> So I think it would be good to integrate this version of SMMUv3; we can
>> then update it to the latest Linux one once LSE has been added.
>
> As I pointed out in my first e-mail on this thread, I am quite concerned
> that we are going to (re-)introduce bugs that have been fixed in Linux.
> Did you investigate that this is not going to happen?

We will take the action to check the changes made to the driver in Linux
since the version we are based on, to make sure no critical fixes are
missing from our code.

> However, I think we can manage to get the latest version without
> requiring LSE. It should be possible to provide dumb versions of most of
> the helpers.

Even if this is done, there would still be a big piece of work afterwards:
switching to the newer code from Linux, then testing and reviewing it.

Updating the driver after those dumb versions are added would still be
possible.

Cheers
Bertrand

>> I think it makes sense in the meantime to discuss how this should be
>> designed, but it might be a good idea to start a specific discussion
>> thread for that.
>
> Makes sense. Can you start a new thread?
>
> Cheers,
>
> --
> Julien Grall
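
For one of the listed helpers, a "dumb" (neither LL/SC- nor LSE-specific)
fallback of the kind mentioned above could be sketched as a plain
compare-and-swap loop. This is an illustration only, not actual Xen or
Linux code; C11 atomics stand in here for Xen's own atomic_t primitives.

```c
/* Sketch: generic fallback for atomic_fetch_andnot_relaxed(), built on a
 * CAS loop instead of LL/SC or LSE instructions. */
#include <stdatomic.h>

static inline int atomic_fetch_andnot_relaxed(atomic_int *v, int mask)
{
    int old = atomic_load_explicit(v, memory_order_relaxed);

    /* Retry until no other CPU raced with us; on failure the weak CAS
     * reloads the currently stored value into 'old'. */
    while (!atomic_compare_exchange_weak_explicit(
               v, &old, old & ~mask,
               memory_order_relaxed, memory_order_relaxed))
        ;

    /* fetch_* convention: return the value observed before the update. */
    return old;
}
```

The same CAS-loop pattern covers most of the fetch_*/cmpxchg entries in
the list; the cond_read helpers would instead need a polling loop, and the
*_release/*_acquire variants stronger memory ordering than relaxed.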



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 18:14:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 18:14:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13033.33551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXTUO-0002U0-Jn; Tue, 27 Oct 2020 18:14:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13033.33551; Tue, 27 Oct 2020 18:14:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXTUO-0002Tt-FA; Tue, 27 Oct 2020 18:14:32 +0000
Received: by outflank-mailman (input) for mailman id 13033;
 Tue, 27 Oct 2020 18:14:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXTUM-0002TL-Hr
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 18:14:30 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab3f7d16-15ac-47dd-9bee-d61a4de51e86;
 Tue, 27 Oct 2020 18:14:23 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXTUF-0005lE-1v; Tue, 27 Oct 2020 18:14:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXTUE-0006Gk-Ng; Tue, 27 Oct 2020 18:14:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXTUE-00041h-NB; Tue, 27 Oct 2020 18:14:22 +0000
X-Inumbo-ID: ab3f7d16-15ac-47dd-9bee-d61a4de51e86
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iJ45EXYQ/zMoG2vWOpR1ovMYSkUW6TIB57B4LIuFryA=; b=ZH+59JiW7Fw6C4VPovrPlxS/Iw
	kHMwrM3SKn8GckZvhwscoQYtyN6OwCdY9Ff3tGL0bLZSexgemFDPaFOccjRRXQSIWC9fGoa+LTbWL
	w+4W7I8nynFXdw3cRtWpk1xpzmy3nziN4AKi3NNblztbftfj2cAw7J4Q3ZE8gYZayxiE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156254-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156254: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-amd64-pygrub:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=964781c6f162893677c50a779b7d562a299727ba
X-Osstest-Versions-That:
    xen=964781c6f162893677c50a779b7d562a299727ba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 18:14:22 +0000

flight 156254 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156254/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 156248
 test-amd64-amd64-pygrub      21 guest-start/debian.repeat  fail pass in 156248

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 156248
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156248
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156248
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156248
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156248
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156248
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156248
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156248
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156248
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156248
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156248
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156248
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  964781c6f162893677c50a779b7d562a299727ba
baseline version:
 xen                  964781c6f162893677c50a779b7d562a299727ba

Last test of basis   156254  2020-10-27 06:38:30 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      fail    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 19:02:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 19:02:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13061.33603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXUEv-0006xh-Qc; Tue, 27 Oct 2020 19:02:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13061.33603; Tue, 27 Oct 2020 19:02:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXUEv-0006xa-MD; Tue, 27 Oct 2020 19:02:37 +0000
Received: by outflank-mailman (input) for mailman id 13061;
 Tue, 27 Oct 2020 19:02:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zOid=EC=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kXUEu-0006xS-HZ
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 19:02:36 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33b64eeb-d449-4697-a1bd-b3de12fa2a07;
 Tue, 27 Oct 2020 19:02:35 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1603813375063569.4061733314264;
 Tue, 27 Oct 2020 08:42:55 -0700 (PDT)
X-Inumbo-ID: 33b64eeb-d449-4697-a1bd-b3de12fa2a07
ARC-Seal: i=1; a=rsa-sha256; t=1603824973; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=GNopAgwdzENJHsG41YS11o0ebFEbgiiLCDhxYWWvWHzvVKu69PrUW7GxOUhFxL5tKUqFV4sBefxLnp6UeVGfImI6tcS42tyvyHYIhAu81FGuf1GTcyZws8xLCWckgavqZcRORtseJu5v6imGJtDf5z0QWIV2cSidzKSD/gaaxvg=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603824973; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=KB1GGuP7nPe19SpJ+/wqtnpUKBz9kLLmZQoSyqn60p8=; 
	b=Oh+wvXhniC6gmuZI/OJ6I/fv5HIfkfoL4818GCP+kfX4ZRlFO1guY3u2TaiIdSUT+xvIpwlpBAuEGDKcQDBMFWmpOPVty8pIf+n6A+IqXciQXpzttCVGK2Wfv46DO0OAG/xXLumSr8loKePKVaoyCpFVfK0IKYgoI85TR54EQAc=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603824973;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=To:Cc:References:From:Subject:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=KB1GGuP7nPe19SpJ+/wqtnpUKBz9kLLmZQoSyqn60p8=;
	b=SucvbrXbMrnDUQOqn/GiLA4Nl/gOhUMZ+8+4Jv8PBvbIww6g3Yxp8CPZN/NVWUi4
	jvPriJCedtg2N/vcgQshpnEm+2bsVoliYooSoXUZr+LkmKwl5TOt/DQ+QLhMKkSdGw/
	QRk7UW5A4JKb0emBqml3RdgQHRV/vENSz0Dag0aQ=
To: Dario Faggioli <dfaggioli@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Message-ID: <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
Date: Tue, 27 Oct 2020 16:42:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="xR62Yw6Zhwo7qE7jF8KUz2WRyV4MP303C"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--xR62Yw6Zhwo7qE7jF8KUz2WRyV4MP303C
Content-Type: multipart/mixed; boundary="kGSL22qBXdph9VBTHEPBJp1yL5qfMENtT";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Dario Faggioli <dfaggioli@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Message-ID: <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
In-Reply-To: <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>

--kGSL22qBXdph9VBTHEPBJp1yL5qfMENtT
Content-Type: multipart/mixed;
 boundary="------------584CD6C833828720D14F2CFC"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------584CD6C833828720D14F2CFC
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 10/27/20 at 10:22 AM, Dario Faggioli wrote:
> On Tue, 2020-10-27 at 06:58 +0100, Jürgen Groß wrote:
>> On 26.10.20 17:31, Dario Faggioli wrote:
>>>
>>> Or did you have something completely different in mind, and I'm
>>> missing it?
>>
>> No, I think you are right. I mixed that up with __context_switch()
>> not being called.
>>
> Right.
>
>> Sorry for the noise,
>>
> Sure, no problem.
>
> In fact, this issue is apparently scheduler-independent. It indeed
> seemed to be related to the other report we have, "BUG: credit=sched2
> machine hang when using DRAKVUF", but there it looks like it is
> scheduler-dependent.
>
> Might it be something that lies somewhere else, but that Credit2
> triggers faster/more easily? (Just thinking out loud...)
>
> For Frederic, what happens is that dom0 hangs, right? So you're able
> to poke at Xen with some debug keys (like 'r' for the scheduler's
> status, and the ones for the domain's vCPUs)?
>
> If yes, it may be useful to see the output.
>
> Regards
>

I have some new info with respect to your request. Yes, dom0 hangs, and I can still interact with the serial console. I've succeeded in obtaining the output of the 'r' debug-key:

```
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=72810702614697
(XEN) Online Cpus: 0-15
(XEN) Cpupool 0:
(XEN) Cpus: 0-15
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN) 	ncpus              = 16
(XEN) 	master             = 0
(XEN) 	credit             = 4800
(XEN) 	credit balance     = 608
(XEN) 	weight             = 12256
(XEN) 	runq_sort          = 996335
(XEN) 	default-weight     = 256
(XEN) 	tslice             = 30ms
(XEN) 	ratelimit          = 1000us
(XEN) 	credits per msec   = 10
(XEN) 	ticks per tslice   = 3
(XEN) 	migration delay    = 0us
(XEN) idlers: 00000000,00003c99
(XEN) active units:
(XEN) 	  1: [0.1] pri=-1 flags=0 cpu=6 credit=214 [w=2000,cap=0]
(XEN) 	  2: [0.4] pri=-1 flags=0 cpu=8 credit=115 [w=2000,cap=0]
(XEN) 	  3: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
(XEN) 	  4: [0.11] pri=-1 flags=0 cpu=1 credit=-55 [w=2000,cap=0]
(XEN) 	  5: [0.6] pri=-2 flags=0 cpu=15 credit=-177 [w=2000,cap=0]
(XEN) 	  6: [0.7] pri=-1 flags=0 cpu=2 credit=50 [w=2000,cap=0]
(XEN) 	  7: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
(XEN) CPUs info:
(XEN) CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
(XEN) CPU[00] nr_run=0, sort=996334, sibling={0}, core={0-7}
(XEN) CPU[01] current=d0v11, curr=d0v11, prev=NULL
(XEN) CPU[01] nr_run=1, sort=996335, sibling={1}, core={0-7}
(XEN) 	run: [0.11] pri=-1 flags=0 cpu=1 credit=-55 [w=2000,cap=0]
(XEN) 	  1: [32767.1] pri=-64 flags=0 cpu=1
(XEN) CPU[02] current=d0v7, curr=d0v7, prev=NULL
(XEN) CPU[02] nr_run=1, sort=996335, sibling={2}, core={0-7}
(XEN) 	run: [0.7] pri=-1 flags=0 cpu=2 credit=50 [w=2000,cap=0]
(XEN) 	  1: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03] current=d[IDLE]v3, curr=d[IDLE]v3, prev=NULL
(XEN) CPU[03] nr_run=0, sort=996329, sibling={3}, core={0-7}
(XEN) CPU[04] current=d[IDLE]v4, curr=d[IDLE]v4, prev=NULL
(XEN) CPU[04] nr_run=0, sort=996325, sibling={4}, core={0-7}
(XEN) CPU[05] current=d0v5, curr=d0v5, prev=NULL
(XEN) CPU[05] nr_run=1, sort=996334, sibling={5}, core={0-7}
(XEN) 	run: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
(XEN) 	  1: [32767.5] pri=-64 flags=0 cpu=5
(XEN) CPU[06] current=d0v1, curr=d0v1, prev=NULL
(XEN) CPU[06] nr_run=1, sort=996334, sibling={6}, core={0-7}
(XEN) 	run: [0.1] pri=-1 flags=0 cpu=6 credit=214 [w=2000,cap=0]
(XEN) 	  1: [32767.6] pri=-64 flags=0 cpu=6
(XEN) CPU[07] current=d[IDLE]v7, curr=d[IDLE]v7, prev=NULL
(XEN) CPU[07] nr_run=0, sort=996303, sibling={7}, core={0-7}
(XEN) CPU[08] current=d[IDLE]v8, curr=d[IDLE]v8, prev=NULL
(XEN) CPU[08] nr_run=2, sort=996335, sibling={8}, core={8-15}
(XEN) 	  1: [0.4] pri=-1 flags=0 cpu=8 credit=115 [w=2000,cap=0]
(XEN) CPU[09] current=d19v1, curr=d19v1, prev=NULL
(XEN) CPU[09] nr_run=1, sort=996335, sibling={9}, core={8-15}
(XEN) 	run: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
(XEN) 	  1: [32767.9] pri=-64 flags=0 cpu=9
(XEN) CPU[10] current=d[IDLE]v10, curr=d[IDLE]v10, prev=NULL
(XEN) CPU[10] nr_run=0, sort=996334, sibling={10}, core={8-15}
(XEN) CPU[11] current=d[IDLE]v11, curr=d[IDLE]v11, prev=NULL
(XEN) CPU[11] nr_run=0, sort=996331, sibling={11}, core={8-15}
(XEN) CPU[12] current=d[IDLE]v12, curr=d[IDLE]v12, prev=NULL
(XEN) CPU[12] nr_run=0, sort=996333, sibling={12}, core={8-15}
(XEN) CPU[13] current=d[IDLE]v13, curr=d[IDLE]v13, prev=NULL
(XEN) CPU[13] nr_run=0, sort=996334, sibling={13}, core={8-15}
(XEN) CPU[14] current=d0v14, curr=d0v14, prev=NULL
(XEN) CPU[14] nr_run=1, sort=990383, sibling={14}, core={8-15}
(XEN) 	run: [0.14] pri=0 flags=0 cpu=14 credit=-514 [w=2000,cap=0]
(XEN) 	  1: [32767.14] pri=-64 flags=0 cpu=14
(XEN) CPU[15] current=d0v6, curr=d0v6, prev=NULL
(XEN) CPU[15] nr_run=1, sort=996335, sibling={15}, core={8-15}
(XEN) 	run: [0.6] pri=-2 flags=0 cpu=15 credit=-177 [w=2000,cap=0]
(XEN) 	  1: [32767.15] pri=-64 flags=0 cpu=15
```

I attempted to get '*' but that blocked my serial console; at least, I was not able to interact with it a few minutes later. I'll try to get other info too. I've also uploaded a piece of this huge '*' dump here: https://gist.github.com/fepitre/36923fbc08cc2fd8bdb59b81e73a6c2e

Regards



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 19:14:11 2020
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
From: Frédéric Pierret <frederic.pierret@qubes-os.org>
To: Dario Faggioli <dfaggioli@suse.com>, Jürgen Groß
 <jgross@suse.com>, "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
Message-ID: <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
Date: Tue, 27 Oct 2020 17:06:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>




On 10/27/20 at 4:42 PM, Frédéric Pierret wrote:
> 
> 
> On 10/27/20 at 10:22 AM, Dario Faggioli wrote:
>> On Tue, 2020-10-27 at 06:58 +0100, Jürgen Groß wrote:
>>> On 26.10.20 17:31, Dario Faggioli wrote:
>>>>
>>>> Or did you have something completely different in mind, and I'm
>>>> missing
>>>> it?
>>>
>>> No, I think you are right. I mixed that up with __context_switch()
>>> not
>>> being called.
>>>
>> Right.
>>
>>> Sorry for the noise,
>>>
>> Sure, no problem.
>>
>> In fact, this issue is apparently scheduler independent. It indeed
>> seemed to be related to the other report we have, "BUG: credit=sched2
>> machine hang when using DRAKVUF", but there it looks like it is
>> scheduler-dependent.
>>
>> Might it be something that lies somewhere else, but Credit2 is
>> triggering it faster/easier? (Just thinking out loud...)
>>
>> For Frederic, what happens is that dom0 hangs, right? So you're able to
>> poke at Xen with some debug keys (like 'r' for the scheduler's status,
>> and the ones for the domain's vCPUs)?
>>
>> If yes, it may be useful to see the output.
>>
>> Regards
>>
> 
> I have some new info with respect to your request. Yes, dom0 hangs and I can interact with the serial console. I've succeeded in obtaining the output of the 'r' debug key:
> 
> ```
> (XEN) sched_smt_power_savings: disabled
> (XEN) NOW=72810702614697
> (XEN) Online Cpus: 0-15
> (XEN) Cpupool 0:
> (XEN) Cpus: 0-15
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) Scheduler: SMP Credit Scheduler (credit)
> (XEN) info:
> (XEN)     ncpus              = 16
> (XEN)     master             = 0
> (XEN)     credit             = 4800
> (XEN)     credit balance     = 608
> (XEN)     weight             = 12256
> (XEN)     runq_sort          = 996335
> (XEN)     default-weight     = 256
> (XEN)     tslice             = 30ms
> (XEN)     ratelimit          = 1000us
> (XEN)     credits per msec   = 10
> (XEN)     ticks per tslice   = 3
> (XEN)     migration delay    = 0us
> (XEN) idlers: 00000000,00003c99
> (XEN) active units:
> (XEN)       1: [0.1] pri=-1 flags=0 cpu=6 credit=214 [w=2000,cap=0]
> (XEN)       2: [0.4] pri=-1 flags=0 cpu=8 credit=115 [w=2000,cap=0]
> (XEN)       3: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
> (XEN)       4: [0.11] pri=-1 flags=0 cpu=1 credit=-55 [w=2000,cap=0]
> (XEN)       5: [0.6] pri=-2 flags=0 cpu=15 credit=-177 [w=2000,cap=0]
> (XEN)       6: [0.7] pri=-1 flags=0 cpu=2 credit=50 [w=2000,cap=0]
> (XEN)       7: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
> (XEN) CPUs info:
> (XEN) CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
> (XEN) CPU[00] nr_run=0, sort=996334, sibling={0}, core={0-7}
> (XEN) CPU[01] current=d0v11, curr=d0v11, prev=NULL
> (XEN) CPU[01] nr_run=1, sort=996335, sibling={1}, core={0-7}
> (XEN)     run: [0.11] pri=-1 flags=0 cpu=1 credit=-55 [w=2000,cap=0]
> (XEN)       1: [32767.1] pri=-64 flags=0 cpu=1
> (XEN) CPU[02] current=d0v7, curr=d0v7, prev=NULL
> (XEN) CPU[02] nr_run=1, sort=996335, sibling={2}, core={0-7}
> (XEN)     run: [0.7] pri=-1 flags=0 cpu=2 credit=50 [w=2000,cap=0]
> (XEN)       1: [32767.2] pri=-64 flags=0 cpu=2
> (XEN) CPU[03] current=d[IDLE]v3, curr=d[IDLE]v3, prev=NULL
> (XEN) CPU[03] nr_run=0, sort=996329, sibling={3}, core={0-7}
> (XEN) CPU[04] current=d[IDLE]v4, curr=d[IDLE]v4, prev=NULL
> (XEN) CPU[04] nr_run=0, sort=996325, sibling={4}, core={0-7}
> (XEN) CPU[05] current=d0v5, curr=d0v5, prev=NULL
> (XEN) CPU[05] nr_run=1, sort=996334, sibling={5}, core={0-7}
> (XEN)     run: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
> (XEN)       1: [32767.5] pri=-64 flags=0 cpu=5
> (XEN) CPU[06] current=d0v1, curr=d0v1, prev=NULL
> (XEN) CPU[06] nr_run=1, sort=996334, sibling={6}, core={0-7}
> (XEN)     run: [0.1] pri=-1 flags=0 cpu=6 credit=214 [w=2000,cap=0]
> (XEN)       1: [32767.6] pri=-64 flags=0 cpu=6
> (XEN) CPU[07] current=d[IDLE]v7, curr=d[IDLE]v7, prev=NULL
> (XEN) CPU[07] nr_run=0, sort=996303, sibling={7}, core={0-7}
> (XEN) CPU[08] current=d[IDLE]v8, curr=d[IDLE]v8, prev=NULL
> (XEN) CPU[08] nr_run=2, sort=996335, sibling={8}, core={8-15}
> (XEN)       1: [0.4] pri=-1 flags=0 cpu=8 credit=115 [w=2000,cap=0]
> (XEN) CPU[09] current=d19v1, curr=d19v1, prev=NULL
> (XEN) CPU[09] nr_run=1, sort=996335, sibling={9}, core={8-15}
> (XEN)     run: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
> (XEN)       1: [32767.9] pri=-64 flags=0 cpu=9
> (XEN) CPU[10] current=d[IDLE]v10, curr=d[IDLE]v10, prev=NULL
> (XEN) CPU[10] nr_run=0, sort=996334, sibling={10}, core={8-15}
> (XEN) CPU[11] current=d[IDLE]v11, curr=d[IDLE]v11, prev=NULL
> (XEN) CPU[11] nr_run=0, sort=996331, sibling={11}, core={8-15}
> (XEN) CPU[12] current=d[IDLE]v12, curr=d[IDLE]v12, prev=NULL
> (XEN) CPU[12] nr_run=0, sort=996333, sibling={12}, core={8-15}
> (XEN) CPU[13] current=d[IDLE]v13, curr=d[IDLE]v13, prev=NULL
> (XEN) CPU[13] nr_run=0, sort=996334, sibling={13}, core={8-15}
> (XEN) CPU[14] current=d0v14, curr=d0v14, prev=NULL
> (XEN) CPU[14] nr_run=1, sort=990383, sibling={14}, core={8-15}
> (XEN)     run: [0.14] pri=0 flags=0 cpu=14 credit=-514 [w=2000,cap=0]
> (XEN)       1: [32767.14] pri=-64 flags=0 cpu=14
> (XEN) CPU[15] current=d0v6, curr=d0v6, prev=NULL
> (XEN) CPU[15] nr_run=1, sort=996335, sibling={15}, core={8-15}
> (XEN)     run: [0.6] pri=-2 flags=0 cpu=15 credit=-177 [w=2000,cap=0]
> (XEN)       1: [32767.15] pri=-64 flags=0 cpu=15
> ```
> 
> I attempted to get '*' but that blocked my serial console; at least, I was not able to interact with it a few minutes later. I'll try to get other info too. I've also uploaded a piece of this huge '*' dump here: https://gist.github.com/fepitre/36923fbc08cc2fd8bdb59b81e73a6c2e
> 
> Regards

Ok, the server froze just a few minutes after my mail, and I now have:
'r': https://gist.github.com/fepitre/78541f555902275d906d627de2420571
'q': https://gist.github.com/fepitre/0ddf6b5e8fdb3152d24337d83fdc345e
'I': https://gist.github.com/fepitre/50c68233d08ad1e495edf7e0e146838b

Tell me if I can provide any other info from the serial console.

Regards,
Frédéric
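As an aside, the per-vCPU lines in the 'r' dump above can be scanned mechanically for vCPUs that have exhausted their credit (negative `credit=` values, like d0v14 and d19v1 here). A minimal sketch, assuming only the credit-scheduler line format shown in the dump; `overdrawn_units` is a hypothetical helper, not part of Xen:

```python
import re

# Matches credit-scheduler unit lines from the 'r' debug-key dump, e.g.:
#   (XEN)     run: [0.14] pri=0 flags=0 cpu=14 credit=-514 [w=2000,cap=0]
# Idle-domain entries (e.g. [32767.14]) carry no credit= field and are skipped.
UNIT_RE = re.compile(
    r"\[(?P<dom>\d+)\.(?P<vcpu>\d+)\] pri=(?P<pri>-?\d+) "
    r"flags=\d+ cpu=(?P<cpu>\d+)(?: credit=(?P<credit>-?\d+))?"
)

def overdrawn_units(dump):
    """Return (domid, vcpu, cpu, credit) tuples for vCPUs with negative credit."""
    out = []
    for line in dump.splitlines():
        m = UNIT_RE.search(line)
        if m and m.group("credit") is not None and int(m.group("credit")) < 0:
            out.append((int(m.group("dom")), int(m.group("vcpu")),
                        int(m.group("cpu")), int(m.group("credit"))))
    return out

sample = """\
(XEN)     run: [0.14] pri=0 flags=0 cpu=14 credit=-514 [w=2000,cap=0]
(XEN)     run: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
(XEN)     run: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
(XEN)       1: [32767.9] pri=-64 flags=0 cpu=9
"""
print(overdrawn_units(sample))  # [(0, 14, 14, -514), (19, 1, 9, -241)]
```

Run against the full gist dumps, this picks out the same over-credit vCPUs visible by eye above.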



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 19:19:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 19:19:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13072.33627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXUVA-0008Ad-LJ; Tue, 27 Oct 2020 19:19:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13072.33627; Tue, 27 Oct 2020 19:19:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXUVA-0008AW-Hj; Tue, 27 Oct 2020 19:19:24 +0000
Received: by outflank-mailman (input) for mailman id 13072;
 Tue, 27 Oct 2020 19:19:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zOid=EC=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kXUV9-0008AR-3B
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 19:19:23 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63acae75-877d-4efd-97a8-1f44600a7bac;
 Tue, 27 Oct 2020 19:19:20 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1603816117181773.358838387671;
 Tue, 27 Oct 2020 09:28:37 -0700 (PDT)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zOid=EC=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
	id 1kXUV9-0008AR-3B
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 19:19:23 +0000
X-Inumbo-ID: 63acae75-877d-4efd-97a8-1f44600a7bac
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 63acae75-877d-4efd-97a8-1f44600a7bac;
	Tue, 27 Oct 2020 19:19:20 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1603825717; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=I+bObYXksXPPkENKHp5GRUY1gLnldkvJeAoOXacdVnqCG37DcMGKKkAPmv7XA2BINa1bla0hEgKHXcICGIbHUiHJdnYCp5MlOylJXcUoKFqDvKgIccsUswSu23BydLoFxzG4Ek0vVQHAnGp1BIXTvrlRDjoy6l1Z4GMILdf5Tos=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1603825717; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=drxeSLG6Gfp0Frj1rNMIEkfARikmVV9OlVxXZB4J4PQ=; 
	b=DokHVnk0HXQ7l+HDHg6mM0pgENF3334RfqwKHC7bJvbzm6ELdqJwBB4ZHLtLuhQrIPxNqUf1HJ1a0HJTIqR0xV0Fz+RLdGZ75tvJ6TubRO8/uXhwy7VKRDNtRf7XLI1Yi1uEDbYoSTs1k/+dnrTyOm2hWneS7Qp2b3TcrY2dzFg=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1603825717;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=To:Cc:References:From:Subject:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=drxeSLG6Gfp0Frj1rNMIEkfARikmVV9OlVxXZB4J4PQ=;
	b=Bm2ZZ1XnDj5O5QyiQHf5TqwSorTnrSqo7iySE4WvRFzk4lfxBQnaU3vfF5m3PE8E
	Eqi3/A6UkZny+MaxCuaycrHzrYdNuCRkKftEEIRgM87SnvsD5eA1saG7mfCy5wKqHA/
	CfSbVHdHUVtSf6Y809k/xNdnzL9WZkN41Tggyi5Y=
To: Dario Faggioli <dfaggioli@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Autocrypt: addr=frederic.pierret@qubes-os.org; keydata=
 xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0
 Dy2rUVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8
 vtovi97sWIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIv
 ltoBrYnElD1Pyp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpX
 gYzTN/kq8sxBMh2OrQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PL
 w5koqPs/6JEIVI+t0pyg+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZ
 NbYRXzkI91HCt40X2bTb2jTKgvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpa
 A/GPaJ5DjzV0q9mkYkGDLYI3J/J+s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVir
 EVBum723MFH4DxhTrOoWgta2nyRHOoi0z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvt
 LbYnlSt3v32nfUXh12aQPwU/LCGIzq4oFNVrNp3aWPnSajLPpQARAQABzTxGcsOpZMOpcmlj
 IFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpYy5waWVycmV0QHF1YmVzLW9zLm9yZz7CwXgE
 EwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEEhAELXNxXbiPLkQ
 AI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZJnl+Fb5HBgthU9lBdMqNySg+s8y
 ekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ9uITprfEqX7V2OLbrYW94qw
 R8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04Uekt5I2Zc8iRDF9kneI
 NiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoDRC/bsAow7cBudj+
 lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR9ze2O5Yh+/B
 unrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2VHTfLmA
 Ot+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1coxFUw
 eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBko
 b1EpfW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKb
 xM/NyxHrmNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCN
 He8XNVqBplV0yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOm
 qpbSLbra8NP8Mu5FZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8
 V+9+dpLEU75+mpHU7GlECfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M
 ++ZmWfEV5nCP2qvzeYDGAL6BbWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr
 5aNCaNpv/i4gLO1IScdfDwm6gdfB2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hg
 YlDWdbImhNL/Z7iL3eayH7T9qAVNU587MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpA
 H+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYHwuc3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYD
 yhxVGbeWp820cb0s1f689XCXqFYAzTfCit+EeboYORN5CGioXzS+z0S9IhPbdUuvqs7xvC24
 8bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9aPe/Zw4PTKWvbJlcFioofLwTQE1XvWom
 FPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMAAoJEEhAELXNxXbilSkP/2NcazvU
 DGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrPX8hjITWD9ZA2bbZZ+J+a39v
 yY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9xK8mWXwhn90SHNadEf28
 ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q8aa+G8iAkNJcb+W
 x5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Nyf2MKKbWRJnt
 jy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJUs/geoC
 UBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1KjH
 uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex
 1C+w3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PX
 pm5Pw4stVEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQ
 QMhGv8DnbAdOOOnumAXWq0+wl5uP
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Message-ID: <26391834-23bc-5f4c-1110-44036e5eec79@qubes-os.org>
Date: Tue, 27 Oct 2020 17:28:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Xzf3UU1wKo5qhuk06o60TW6UTKhDHNr4I"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Xzf3UU1wKo5qhuk06o60TW6UTKhDHNr4I
Content-Type: multipart/mixed; boundary="X2OPUdNtlN9DfKMj3NwMDcjdeaXbwOqcs";
 protected-headers="v1"
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
To: Dario Faggioli <dfaggioli@suse.com>, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>, "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Message-ID: <26391834-23bc-5f4c-1110-44036e5eec79@qubes-os.org>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
In-Reply-To: <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>

--X2OPUdNtlN9DfKMj3NwMDcjdeaXbwOqcs
Content-Type: multipart/mixed;
 boundary="------------E2315F6D784B7E28DE9D958E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E2315F6D784B7E28DE9D958E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 10/27/20 at 10:22 AM, Dario Faggioli wrote:
> On Tue, 2020-10-27 at 06:58 +0100, Jürgen Groß wrote:
>> On 26.10.20 17:31, Dario Faggioli wrote:
>>>
>>> Or did you have something completely different in mind, and I'm
>>> missing
>>> it?
>>
>> No, I think you are right. I mixed that up with __context_switch()
>> not
>> being called.
>>
> Right.
>
>> Sorry for the noise,
>>
> Sure, no problem.
>
> In fact, this issue is apparently scheduler-independent. It indeed
> seemed to be related to the other report we have, "BUG: credit=sched2
> machine hang when using DRAKVUF", but there it looks like it is
> scheduler-dependent.
>
> Might it be something that lies somewhere else, but Credit2 is
> triggering it faster/easier? (Just thinking out loud...)
>
> For Frederic, what happens is that dom0 hangs, right? So you're able to
> poke at Xen with some debug keys (like 'r' for the scheduler's status,
> and the ones for the domain's vCPUs)?
>
> If yes, it may be useful to see the output.
>
> Regards
>

First of all, sorry for the possible duplicates. I had network issues due to subsequent freezes (...) while writing to you, and Marek has not received my previous mails, so here is the info.


To answer your question, Dario: yes, dom0 hangs totally, and the VMs do too. In the case of `sched=credit`, I managed to obtain the output of the 'r' debug key on the serial console:
```
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=72810702614697
(XEN) Online Cpus: 0-15
(XEN) Cpupool 0:
(XEN) Cpus: 0-15
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Scheduler: SMP Credit Scheduler (credit)
(XEN) info:
(XEN)   ncpus              = 16
(XEN)   master             = 0
(XEN)   credit             = 4800
(XEN)   credit balance     = 608
(XEN)   weight             = 12256
(XEN)   runq_sort          = 996335
(XEN)   default-weight     = 256
(XEN)   tslice             = 30ms
(XEN)   ratelimit          = 1000us
(XEN)   credits per msec   = 10
(XEN)   ticks per tslice   = 3
(XEN)   migration delay    = 0us
(XEN) idlers: 00000000,00003c99
(XEN) active units:
(XEN)     1: [0.1] pri=-1 flags=0 cpu=6 credit=214 [w=2000,cap=0]
(XEN)     2: [0.4] pri=-1 flags=0 cpu=8 credit=115 [w=2000,cap=0]
(XEN)     3: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
(XEN)     4: [0.11] pri=-1 flags=0 cpu=1 credit=-55 [w=2000,cap=0]
(XEN)     5: [0.6] pri=-2 flags=0 cpu=15 credit=-177 [w=2000,cap=0]
(XEN)     6: [0.7] pri=-1 flags=0 cpu=2 credit=50 [w=2000,cap=0]
(XEN)     7: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
(XEN) CPUs info:
(XEN) CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
(XEN) CPU[00] nr_run=0, sort=996334, sibling={0}, core={0-7}
(XEN) CPU[01] current=d0v11, curr=d0v11, prev=NULL
(XEN) CPU[01] nr_run=1, sort=996335, sibling={1}, core={0-7}
(XEN)   run: [0.11] pri=-1 flags=0 cpu=1 credit=-55 [w=2000,cap=0]
(XEN)     1: [32767.1] pri=-64 flags=0 cpu=1
(XEN) CPU[02] current=d0v7, curr=d0v7, prev=NULL
(XEN) CPU[02] nr_run=1, sort=996335, sibling={2}, core={0-7}
(XEN)   run: [0.7] pri=-1 flags=0 cpu=2 credit=50 [w=2000,cap=0]
(XEN)     1: [32767.2] pri=-64 flags=0 cpu=2
(XEN) CPU[03] current=d[IDLE]v3, curr=d[IDLE]v3, prev=NULL
(XEN) CPU[03] nr_run=0, sort=996329, sibling={3}, core={0-7}
(XEN) CPU[04] current=d[IDLE]v4, curr=d[IDLE]v4, prev=NULL
(XEN) CPU[04] nr_run=0, sort=996325, sibling={4}, core={0-7}
(XEN) CPU[05] current=d0v5, curr=d0v5, prev=NULL
(XEN) CPU[05] nr_run=1, sort=996334, sibling={5}, core={0-7}
(XEN)   run: [0.5] pri=-1 flags=0 cpu=5 credit=239 [w=2000,cap=0]
(XEN)     1: [32767.5] pri=-64 flags=0 cpu=5
(XEN) CPU[06] current=d0v1, curr=d0v1, prev=NULL
(XEN) CPU[06] nr_run=1, sort=996334, sibling={6}, core={0-7}
(XEN)   run: [0.1] pri=-1 flags=0 cpu=6 credit=214 [w=2000,cap=0]
(XEN)     1: [32767.6] pri=-64 flags=0 cpu=6
(XEN) CPU[07] current=d[IDLE]v7, curr=d[IDLE]v7, prev=NULL
(XEN) CPU[07] nr_run=0, sort=996303, sibling={7}, core={0-7}
(XEN) CPU[08] current=d[IDLE]v8, curr=d[IDLE]v8, prev=NULL
(XEN) CPU[08] nr_run=2, sort=996335, sibling={8}, core={8-15}
(XEN)     1: [0.4] pri=-1 flags=0 cpu=8 credit=115 [w=2000,cap=0]
(XEN) CPU[09] current=d19v1, curr=d19v1, prev=NULL
(XEN) CPU[09] nr_run=1, sort=996335, sibling={9}, core={8-15}
(XEN)   run: [19.1] pri=-2 flags=0 cpu=9 credit=-241 [w=256,cap=0]
(XEN)     1: [32767.9] pri=-64 flags=0 cpu=9
(XEN) CPU[10] current=d[IDLE]v10, curr=d[IDLE]v10, prev=NULL
(XEN) CPU[10] nr_run=0, sort=996334, sibling={10}, core={8-15}
(XEN) CPU[11] current=d[IDLE]v11, curr=d[IDLE]v11, prev=NULL
(XEN) CPU[11] nr_run=0, sort=996331, sibling={11}, core={8-15}
(XEN) CPU[12] current=d[IDLE]v12, curr=d[IDLE]v12, prev=NULL
(XEN) CPU[12] nr_run=0, sort=996333, sibling={12}, core={8-15}
(XEN) CPU[13] current=d[IDLE]v13, curr=d[IDLE]v13, prev=NULL
(XEN) CPU[13] nr_run=0, sort=996334, sibling={13}, core={8-15}
(XEN) CPU[14] current=d0v14, curr=d0v14, prev=NULL
(XEN) CPU[14] nr_run=1, sort=990383, sibling={14}, core={8-15}
(XEN)   run: [0.14] pri=0 flags=0 cpu=14 credit=-514 [w=2000,cap=0]
(XEN)     1: [32767.14] pri=-64 flags=0 cpu=14
(XEN) CPU[15] current=d0v6, curr=d0v6, prev=NULL
(XEN) CPU[15] nr_run=1, sort=996335, sibling={15}, core={8-15}
(XEN)   run: [0.6] pri=-2 flags=0 cpu=15 credit=-177 [w=2000,cap=0]
(XEN)     1: [32767.15] pri=-64 flags=0 cpu=15
```

I attempted to get '*', but that blocked my serial console; at least, I was not able to interact with it a few minutes later. I'll try to get other info too. I've also uploaded a piece of this huge '*' dump here: https://gist.github.com/fepitre/36923fbc08cc2fd8bdb59b81e73a6c2e

Right after, I restarted with the default value of 'sched' (credit2), and just a few minutes later I obtained:
'r': https://gist.github.com/fepitre/78541f555902275d906d627de2420571
'q': https://gist.github.com/fepitre/0ddf6b5e8fdb3152d24337d83fdc345e
'I': https://gist.github.com/fepitre/50c68233d08ad1e495edf7e0e146838b

Tell me if I can provide any other info from the serial console.

Regards,
Frédéric



--------------E2315F6D784B7E28DE9D958E
Content-Type: application/pgp-keys;
 name="OpenPGP_0x484010B5CDC576E2.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0x484010B5CDC576E2.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsFNBFwkq3EBEADcfyaOkeuf+g96S1ieq05tJ8vTGsQrNXQ5RDE7ffagL0+EpfIP3x73x5Q0D=
y2r
UVQ+oN1DHcueNL70RtNs9BFnoW0KZnskbT4nEJ9wQCQa22lQaIk9kCNVddh2HJKljtd8vtovi=
97s
WIjtzxx5Qwc2md0DY9AHhNC4KqKIW3tSPC17UsI8fASoNAHItYtyn2bO67p8pCIvltoBrYnEl=
D1P
yp5IGWiD2/YD325iPl2+qHVkUSWmb92hRRU19Rg+Uds8bVHqhz4cOqIE7jpXgYzTN/kq8sxBM=
h2O
rQ/bSxLaccaNApIVSZVSAasVJfdscNDL9fjkHERK/AiSTleHrsgLf4PLw5koqPs/6JEIVI+t0=
pyg
+Pa8uwFoeYTPrLSlw0f7bXSmlVfv8g7M7RWmk3T5QIpeHA0j3lEZNbYRXzkI91HCt40X2bTb2=
jTK
gvB9jQjEarpk6euvGs2Ig/U4MlUy3pG5Ehd2Ebn8Rz31JXpaA/GPaJ5DjzV0q9mkYkGDLYI3J=
/J+
s2u0Kr0VswLaIN3WJn7kKEDwfc4s2kaAYfblE/p0zVirEVBum723MFH4DxhTrOoWgta2nyRHO=
oi0
z0EVhYA+D86mFPWKb9roWvtnmFlssggGmqbJEMvtLbYnlSt3v32nfUXh12aQPwU/LCGIzq4oF=
NVr
Np3aWPnSajLPpQARAQABzTxGcsOpZMOpcmljIFBpZXJyZXQgKGZlcGl0cmUpIDxmcmVkZXJpY=
y5w
aWVycmV0QHF1YmVzLW9zLm9yZz7CwXgEEwECACIFAlwkq3ECGwMGCwkIBwMCBhUIAgkKCwQWA=
gMB
Ah4BAheAAAoJEEhAELXNxXbiPLkQAI6kEDyLl0TpvRDOanuD5YkVHLEYVuG62CJNwMjFoFRgZ=
Jnl
+Fb5HBgthU9lBdMqNySg+s8yekM9KRlUHKYjwAsyjPIjRtca4bH3V11/waKpvPBgPsC75CxSZ=
9uI
TprfEqX7V2OLbrYW94qwR8jX+n/wlEGG3pbfXG7FTnjxQWM0E0aSvO0Yb5EkjiJ7cwEiqvL04=
Uek
t5I2Zc8iRDF9kneINiNhzRtvrR1UN6KtiZNSk2NsLOptrUQ/1AU5jwH4mnQQymtYDsWddlRoD=
RC/
bsAow7cBudj+lekM3cNRZOazKZx5UPnN8nqvD7FqeAcZBVyrHZ4hcWqABaJEPv6CCHRiLQnGR=
9ze
2O5Yh+/BunrOJdjdsib1ZECH9GtIcj4mmPAN84NO4r8a6Sn9jsXkd2Wj2N5wNrZMPslhfiaW2=
VHT
fLmAOt+wRwLRsFfqLykF8hMlNXXE4frxotwa6+PTd48Ws9H9aalSs0lebsG0623b4mBjy1cox=
FUw
eclPInXsPEdu/Yu2r7xrgGouXH8KgDhqlqq60UaA5n/0XhIeZ8tBTYs+1B5/C9TjvNAUsBkob=
1Ep
fW3J4Gq14GqwK+eodOTL5t2f2PWN/IQyop/j0FMgVU5/PUS0pciz5ybyIJBLhbsJBvKbxM/Ny=
xHr
mNwGEknpoeq+XT8rEJ+/Ag8Wnjl0zsFNBFwkq3EBEADAPJdyFy4KeYpuGATWwWCNHe8XNVqBp=
lV0
yVlT5pSiCyA3UK34JlGX9YJOj/FlMZGgh61vbiK+piRjm/lyb128wpMjnoOmqpbSLbra8NP8M=
u5F
ZMcv8OxrSIr/RHq2heFg1j11QOMGwe6vPC918qpzmiaYj2qpKY/RYsG8V+9+dpLEU75+mpHU7=
GlE
CfPmHYbnsismL/4+xH+8BG56yg0UFbfrNYonIQFSn5k/w6i7jt7M++ZmWfEV5nCP2qvzeYDGA=
L6B
bWVOjuDhrKsAIKnomCyy+MjcVP955PVdN2+OlPJng07oKtQr5aNCaNpv/i4gLO1IScdfDwm6g=
dfB
2Zg/7jTJrKw0kWPFl9rHfN7dLAR28u3uT8Rhicjdd7hgYlDWdbImhNL/Z7iL3eayH7T9qAVNU=
587
MhWvIREyE1gj22cs0e1m6qMFpbFYG0709N2UwlpAH+Pd35bTi9q2o1pH91xBYH6QvvrwsuVYH=
wuc
3xXLRVRXWXY8xvNFSlY1LB8A46JOtV/ZodYDyhxVGbeWp820cb0s1f689XCXqFYAzTfCit+Ee=
boY
ORN5CGioXzS+z0S9IhPbdUuvqs7xvC248bM7nm84YdgVM7HWybOtpRpWpycwGs73IvbxyLE9a=
Pe/
Zw4PTKWvbJlcFioofLwTQE1XvWomFPD9LLrBl5NUjQARAQABwsFfBBgBAgAJBQJcJKtxAhsMA=
AoJ
EEhAELXNxXbilSkP/2NcazvUDGyQLm7tFp4HNqSQfFJ3+chzxfOOdNtdWE+RFetyx9R8DBGrP=
X8h
jITWD9ZA2bbZZ+J+a39vyY7bNZkCGbWzPGK//O1cInL4Ecmj7Xm8DXjk3E2Xzv1YrZk/GBz9x=
K8m
WXwhn90SHNadEf28ghMXcmUJSqT+KTxQQjUVaEtQDdzQnYQKh/dHxs760QSAnXkWr0YVYxk8q=
8aa
+G8iAkNJcb+Wx5gWEw4ft3HpKMRq74OQvWayy0fXpTlusdnvZs0VVMeRpCW6iCt9UmsbfG6Ny=
f2M
KKbWRJntjy8mjJiFjiJ2j9s4yNIookRv8IfocULuhnx5FWsvIzX2Vwcd7G5objnY1DlCNQrhJ=
Us/
geoCUBjBJp7sfbHakWfTKxZjFsuCXT1dCEN7JXX6ABOshzDTwB0kq7Bq/EkOzPDQGfOPoX2h1=
KjH
uvGWw5cBe8WLnEuhIyf/DWfMS1LbjFB4JlMUEcood5xvE4owpfZog+0a9gpBS6cg9bMgRUex1=
C+w
3fudJdPQwIRAjJgac0jTT6uDY8re9RhBDv83PRSM7AzxqEFvDj8K46dg1XvJcKs7K5PXpm5Pw=
4st
VEAxIks5uR62wxygImkdvgjQRzJe4JWwAniBWsZG+cNYj6xcItqkupIb4PeOWgNQQMhGv8Dnb=
AdO
OOnumAXWq0+wl5uP
=3DRWX1
-----END PGP PUBLIC KEY BLOCK-----

--------------E2315F6D784B7E28DE9D958E--

--X2OPUdNtlN9DfKMj3NwMDcjdeaXbwOqcs--

--Xzf3UU1wKo5qhuk06o60TW6UTKhDHNr4I
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEn6ZLkvlecGvyjiymSEAQtc3FduIFAl+YSrIACgkQSEAQtc3F
duK/MBAAu8TfGJNJGeXnVAkrBK+1Sem2WuLPgMaNusDIkUi1hlKwFw60UGfD+lHI
4l9jO3Os4jnU0XPuDs06t55HkfjZ19Wi2njZ31ihfTySLKSFyfywYzxH7LuCMDJz
gxD6gYDN6L/ZFZjdHjqsi7G+lwI91Nu+2BwfMkPagcIhxGJiqrZtE534eO8GA0/J
LSf1GzHdRrbHzuSx/yNjSc53ky2hfWDSwIfRpIHS1QrEorqNtEI8g+9OHfiQVBZk
BiVWNx10/mQv5XKj2JxmRSOjFVt2vafA2zewcxuSnp4xQuCdt4U+Ico31btaQ25Z
xZgtLX7VxrcFMrT7Wavq2+Cp5a0yBu1o97vtpsTWj5XLCFq+T+5M3SLEZFDS4emw
hudiV2VigkWGE8gGqn8cQ6phBlrytq4LLOF4kWSV2GS+x374mIXZJTE1M0ie3o19
YLo6+HYQviY02fU8TFnZumTiqKgegaAl319kM9eu1eUqs+vkJ332P8JwxuOADr7/
8K7sO4+L9WhikrJg357WNm/BmohBLseB9mY20udE4L7iuwu/dD4sc4MN/k2w9PHt
iV6H7DB36lqz1Ubwi+hr/6tX39bHeE6CtjNnVVt1M/0Q626CwNGlnPO+Qqr5Yo4d
/jzDa+OFjzWm3COjNf+kIt1y5zaeTK/8RvRAmF0UT+ccXFLKZUE=
=MUE1
-----END PGP SIGNATURE-----

--Xzf3UU1wKo5qhuk06o60TW6UTKhDHNr4I--


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 19:25:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 19:25:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13075.33638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXUbQ-0000dC-F0; Tue, 27 Oct 2020 19:25:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13075.33638; Tue, 27 Oct 2020 19:25:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXUbQ-0000d5-C6; Tue, 27 Oct 2020 19:25:52 +0000
Received: by outflank-mailman (input) for mailman id 13075;
 Tue, 27 Oct 2020 19:25:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o/8F=EC=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1kXUbP-0000d0-Go
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 19:25:51 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 482ff22e-0870-4cb3-a176-cbcf0aa7908c;
 Tue, 27 Oct 2020 19:25:49 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09RJP21A164536;
 Tue, 27 Oct 2020 19:25:45 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 34cc7kuye1-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Tue, 27 Oct 2020 19:25:45 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 09RJPKpI106279;
 Tue, 27 Oct 2020 19:25:44 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 34cx6wc87t-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 27 Oct 2020 19:25:44 +0000
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 09RJPgg2007991;
 Tue, 27 Oct 2020 19:25:42 GMT
Received: from char.us.oracle.com (/10.152.32.25)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 27 Oct 2020 12:25:42 -0700
Received: by char.us.oracle.com (Postfix, from userid 1000)
 id 373186A0121; Tue, 27 Oct 2020 15:27:27 -0400 (EDT)
X-Inumbo-ID: 482ff22e-0870-4cb3-a176-cbcf0aa7908c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 in-reply-to; s=corp-2020-01-29;
 bh=WWBVSyjATfqcuJQT+Yh6z4CCuAVHsOU2pwiB2KA9sbQ=;
 b=oNpuOKowZ1S862d6HWXEpBF6XqOjSjkj7XINwdr3XhrVVUdwCe0i0DOho17D9z8Sg05U
 acLF8fHnANx0ayghZdiDrEd+f1qcwfMFFWVCuNDgx2etJMa992mU03XWv+iLd5SHHkJg
 dgLwmAaLRANLtQiU/G7FIKEYp5NrNLWuAjFVPq5hQgbdYNZIQvwVgVtDLB2VrP8l89xY
 ADl0KBMePGoYDy7bLoY7+A5Odim6oCuVb72T1DXy+oCRUA9aG5yJuKHNw/uJ7jox/Di9
 I81V/kb0jh3z4LbGrig8E9z7KewsPBuORax86jP+PrHvghqqOXKeppF15dYGuZqqmQk5 AA== 
Date: Tue, 27 Oct 2020 15:27:27 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Elliott Mitchell <ehem+undef@m5p.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
        iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
        xen-devel@lists.xenproject.org, hch@lst.de
Subject: Re: [PATCH] fix swiotlb panic on Xen
Message-ID: <20201027192726.GA13396@char.us.oracle.com>
References: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s>
 <20201027175114.GA32110@mattapan.m5p.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20201027175114.GA32110@mattapan.m5p.com>
User-Agent: Mutt/1.9.1 (2017-09-22)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9787 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 phishscore=0 spamscore=0
 bulkscore=0 malwarescore=0 mlxlogscore=999 mlxscore=0 suspectscore=2
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2009150000
 definitions=main-2010270112
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9787 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 lowpriorityscore=0 adultscore=0
 malwarescore=0 spamscore=0 clxscore=1011 mlxscore=0 suspectscore=2
 priorityscore=1501 impostorscore=0 bulkscore=0 phishscore=0
 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010270112

> As the person who first found this and then confirmed this fixes a bug:
> 
> Tested-by: Elliott Mitchell <ehem+xen@m5p.com>

Thank you!!

I changed the title and added the various tags and will put it in
linux-next later this week.

>From a1eb2768bf5954d25aa0f0136b38f0aa5d92d984 Mon Sep 17 00:00:00 2001
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
Date: Mon, 26 Oct 2020 17:02:14 -0700
Subject: [PATCH] swiotlb: fix "x86: Don't panic if can not alloc buffer for
 swiotlb"

kernel/dma/swiotlb.c:swiotlb_init gets called first and tries to
allocate a buffer for the swiotlb. It does so by calling

  memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);

If the allocation fails, no_iotlb_memory is set.

Later during initialization swiotlb-xen comes in
(drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
is != 0, it thinks the memory is ready to use when actually it is not.

When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
and since no_iotlb_memory is set the kernel panics.

Instead, if swiotlb-xen.c:xen_swiotlb_init knew the swiotlb hadn't been
initialized, it would do the initialization itself, which might still
succeed.

Fix the panic by setting io_tlb_start to 0 on swiotlb initialization
failure, and also by setting no_iotlb_memory to false on swiotlb
initialization success.

Fixes: ac2cbab21f31 ("x86: Don't panic if can not alloc buffer for swiotlb")

Reported-by: Elliott Mitchell <ehem+xen@m5p.com>
Tested-by: Elliott Mitchell <ehem+xen@m5p.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
CC: stable@vger.kernel.org
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
---
 kernel/dma/swiotlb.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 465a567678d9..e08cac39c0ba 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -229,6 +229,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	io_tlb_index = 0;
+	no_iotlb_memory = false;
 
 	if (verbose)
 		swiotlb_print_info();
@@ -260,9 +261,11 @@ swiotlb_init(int verbose)
 	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
 		return;
 
-	if (io_tlb_start)
+	if (io_tlb_start) {
 		memblock_free_early(io_tlb_start,
 				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
+		io_tlb_start = 0;
+	}
 	pr_warn("Cannot allocate buffer");
 	no_iotlb_memory = true;
 }
@@ -360,6 +363,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
 	}
 	io_tlb_index = 0;
+	no_iotlb_memory = false;
 
 	swiotlb_print_info();
 
-- 
2.13.6



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 19:31:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 19:31:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13085.33650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXUgV-0001V5-3m; Tue, 27 Oct 2020 19:31:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13085.33650; Tue, 27 Oct 2020 19:31:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXUgV-0001Uy-0g; Tue, 27 Oct 2020 19:31:07 +0000
Received: by outflank-mailman (input) for mailman id 13085;
 Tue, 27 Oct 2020 19:31:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCGY=EC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXUgT-0001Ut-FJ
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 19:31:05 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 228fdc32-52d5-4ed5-bd28-1f4d7d8ff50f;
 Tue, 27 Oct 2020 19:31:03 +0000 (UTC)
X-Inumbo-ID: 228fdc32-52d5-4ed5-bd28-1f4d7d8ff50f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603827063;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=Zl/xKhDItIScI+KG1AgREzYTPuS7GYYGS4lHAOwb8/c=;
  b=a2G65Kix71UeTzmKzrytPAuV2xS7CKCf7h20u7xLQRhtMhsDBnP7cqn/
   isdAyeWFJkR7C8QYUHyyTzZiEVNMczDLPlHwvt/LglsN2Ew/QG9/vFkGG
   lN4nQoEXaE2Et31mFOKWEyOGwizsoePUrrKmE1gc81X9gz5VEMDR/zOF3
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: nnxbZeJ9Zx4r/23DcmJ8Wo+jqjyuTLwQO7ko2iUYKdlQMGeNB4sG5Cuo3oH2/waJGBOLSsHQgf
 teioAt5wx/H9MHhx2J3IvJ8haASQ+or5eS4lWREvsH4n4Z03Chg6CzU7yi0ZfWEHbja2nlU7Uq
 TVgkXUdZLArMAOOVU0qFLIPw0R+sNRdHwFs48RYTTS16BGjPx7ePxx5Hv2UyTECXm831Ae0Smc
 UYH86IWcdiTMygBcG6NPHqm/6VxIJHrmTunWTmQHcGPX3HA3WsPDZ2CsUjg45acz4jsSrCtZVd
 FUQ=
X-SBRS: None
X-MesageID: 30991591
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,424,1596513600"; 
   d="scan'208";a="30991591"
Subject: Re: [PATCH] x86/svm: Merge hsa and host_vmcb to reduce memory
 overhead
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201026135043.15560-1-andrew.cooper3@citrix.com>
 <ec123127-786a-02e9-07dd-351f30b6a5b3@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6acb623c-27bd-2d2d-c7c3-52c9ff1a1bf5@citrix.com>
Date: Tue, 27 Oct 2020 19:30:45 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <ec123127-786a-02e9-07dd-351f30b6a5b3@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 27/10/2020 15:24, Jan Beulich wrote:
> On 26.10.2020 14:50, Andrew Cooper wrote:
>> The format of the Host State Area is, and has always been, a VMCB.  It is
>> explicitly safe to put the host VMSAVE data in.
> Nit: The PM calls this "Host Save Area" or "Host State Save Area"
> afaics.
>
> I recall us discussing this option in the past, and not right
> away pursuing it because of it not having been explicit at the
> time. What place in the doc has made this explicit?

Sadly still not yet, but the pestering has happened.

> The main
> uncertainty (without any explicit statement) on my part would be
> the risk of VMSAVE writing (for performance reasons) e.g. full
> cache lines, i.e. more than exactly the bits holding the state
> to be saved, without first bringing old contents in from memory.

SEV-ES now requires the hypervisor to program the desired exit state in
the VMCB, due to differences in how the VMRUN instruction works.  See
Vol 3 15.35.8.  (And yes - this does contradict the earlier statement
that the hypervisor must not write directly into the host state area.)

I have had it confirmed by AMD that it is safe to use in this fashion,
but if you want more evidence, KVM has had this behaviour on AMD for its
entire lifetime.

>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -72,11 +72,10 @@ static void svm_update_guest_efer(struct vcpu *);
>>  static struct hvm_function_table svm_function_table;
>>  
>>  /*
>> - * Physical addresses of the Host State Area (for hardware) and vmcb (for Xen)
>> - * which contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state when in
>> - * guest vcpu context.
>> + * Host State Area.  This area is used by the processor in non-root mode, and
>> + * contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state required to
>> + * leave guest vcpu context.
>>   */
>> -static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, hsa);
>>  static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, host_vmcb);
> The comment now applies to host_vmcb, so making the dual purpose
> more obvious would imo be helpful.

But it isn't dual purpose.  It *is* host state, both the half which
VMRUN deals with, and the half which VMLOAD/SAVE deals with (separately,
to optimise VMRUN).

I suppose technically it gets a little complicated with whose state
fs/gs/ldtr/gsbase actually is, given the PV VMLOAD-optimised path, but
the state relevant to Xen's security is tr/syscall/sysenter, which will
remain correct from the start of day.
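For illustration only (this is not Xen's actual code - the names
host_state and pages_needed are hypothetical), the point of the merge
can be sketched in userspace C: one page-aligned allocation per CPU
serves as both the Host Save Area handed to hardware and the area Xen
targets with VMSAVE, halving the per-CPU footprint:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

/* Hypothetical stand-in for the per-CPU state: a single 4KiB page
 * playing both roles (HSA for VMRUN, "host VMCB" for VMSAVE/VMLOAD). */
struct host_state {
    void *page;
};

/* Before the merge: two pages per CPU (hsa + host_vmcb); after: one. */
static size_t pages_needed(unsigned int nr_cpus, int merged)
{
    return (size_t)nr_cpus * (merged ? 1 : 2);
}

/* Allocate the shared page; hardware requires page alignment. */
static int host_state_init(struct host_state *hs)
{
    hs->page = aligned_alloc(PAGE_SIZE, PAGE_SIZE);
    return hs->page ? 0 : -1;
}
```

In real Xen the page's physical address would be programmed into the
VM_HSAVE_PA MSR once per CPU; the sketch only demonstrates the memory
accounting and alignment, not the MSR plumbing.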

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 21:24:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 21:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13097.33665 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXWRQ-0002TT-G5; Tue, 27 Oct 2020 21:23:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13097.33665; Tue, 27 Oct 2020 21:23:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXWRQ-0002TM-DB; Tue, 27 Oct 2020 21:23:40 +0000
Received: by outflank-mailman (input) for mailman id 13097;
 Tue, 27 Oct 2020 21:23:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXWRO-0002Si-Iy
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 21:23:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2eb6752-b7ea-4e1e-a30f-468985f67a5d;
 Tue, 27 Oct 2020 21:23:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXWRF-0001Gs-U9; Tue, 27 Oct 2020 21:23:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXWRE-0000uc-RE; Tue, 27 Oct 2020 21:23:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXWRE-0001XJ-Pq; Tue, 27 Oct 2020 21:23:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kXWRO-0002Si-Iy
	for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 21:23:38 +0000
X-Inumbo-ID: a2eb6752-b7ea-4e1e-a30f-468985f67a5d
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a2eb6752-b7ea-4e1e-a30f-468985f67a5d;
	Tue, 27 Oct 2020 21:23:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=TziAbsw0lwRNQMLIqO+SV86DlUdux5ZUGX9W3/q9c1U=; b=prOh/jk8A7kP38vZvPrv9KJD2u
	Tdf+S/B5YwtzhSS6t4CMQbwZYNBfTESzzSFE+SruLuecwVEm7HYJgprujwBmk9t8AkXRa9o7voXIE
	l3AnYS1MuCF005iYq0hLEH54MnK6JZClFASNiU+FGV0vz5kgyZ4HZOf5F0iZEiWoVWHk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXWRF-0001Gs-U9; Tue, 27 Oct 2020 21:23:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXWRE-0000uc-RE; Tue, 27 Oct 2020 21:23:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXWRE-0001XJ-Pq; Tue, 27 Oct 2020 21:23:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156257-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156257: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=1dc887329a10903940501b43e8c0cc67af7c06d5
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 21:23:28 +0000

flight 156257 qemu-mainline real [real]
flight 156266 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156257/
http://logs.test-lab.xenproject.org/osstest/logs/156266/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1dc887329a10903940501b43e8c0cc67af7c06d5
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   68 days
Failing since        152659  2020-08-21 14:07:39 Z   67 days  157 attempts
Testing same since   156250  2020-10-27 00:09:00 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 51873 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 21:54:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 21:54:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13106.33678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXWun-000567-Vv; Tue, 27 Oct 2020 21:54:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13106.33678; Tue, 27 Oct 2020 21:54:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXWun-000560-So; Tue, 27 Oct 2020 21:54:01 +0000
Received: by outflank-mailman (input) for mailman id 13106;
 Tue, 27 Oct 2020 21:54:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXWun-00055v-2q
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 21:54:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31a66316-1548-49c9-9f3c-2e37b361648d;
 Tue, 27 Oct 2020 21:54:00 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7224220759;
 Tue, 27 Oct 2020 21:53:59 +0000 (UTC)
X-Inumbo-ID: 31a66316-1548-49c9-9f3c-2e37b361648d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603835639;
	bh=hLuBGmvknLJruJYQUNZGkX4EVqyzQJamlMv7F6lFoho=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=vOuhydjw9vlxiSuYbstg3MW2oGhlejZ8GriRzNfLn4RdkjVwbV2wc643qtmaqkPBj
	 fxNpyuRws07osa8Ikr3w7OWn6YN3ZIzLEsVSiqxn0xhx/2Ll93BusC3uRUEkjfDNRi
	 Ln94Z4m4p0cL6QbfhDQL50xIrZkIz0HWmJIOUVkM=
Date: Tue, 27 Oct 2020 14:53:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
cc: Elliott Mitchell <ehem+undef@m5p.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org, 
    xen-devel@lists.xenproject.org, hch@lst.de
Subject: Re: [PATCH] fix swiotlb panic on Xen
In-Reply-To: <20201027192726.GA13396@char.us.oracle.com>
Message-ID: <alpine.DEB.2.21.2010271453480.12247@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s> <20201027175114.GA32110@mattapan.m5p.com> <20201027192726.GA13396@char.us.oracle.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 27 Oct 2020, Konrad Rzeszutek Wilk wrote:
> > As the person who first found this and then confirmed this fixes a bug:
> > 
> > Tested-by: Elliott Mitchell <ehem+xen@m5p.com>
> 
> Thank you!!
> 
> I changed the title and added the various tags and will put it in
> linux-next later this week.

Looks fine, thank you.


> >From a1eb2768bf5954d25aa0f0136b38f0aa5d92d984 Mon Sep 17 00:00:00 2001
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Date: Mon, 26 Oct 2020 17:02:14 -0700
> Subject: [PATCH] swiotlb: fix "x86: Don't panic if can not alloc buffer for
>  swiotlb"
> 
> kernel/dma/swiotlb.c:swiotlb_init gets called first and tries to
> allocate a buffer for the swiotlb. It does so by calling
> 
>   memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
> 
> If the allocation fails, no_iotlb_memory is set.
> 
> Later during initialization, swiotlb-xen comes in
> (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and, given that io_tlb_start
> is != 0, it assumes the memory is ready to use when actually it is not.
> 
> When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
> and since no_iotlb_memory is set the kernel panics.
> 
> Instead, if swiotlb-xen.c:xen_swiotlb_init knew the swiotlb hadn't been
> initialized, it would do the initialization itself, which might still
> succeed.
> 
> Fix the panic by setting io_tlb_start to 0 on swiotlb initialization
> failure, and also by setting no_iotlb_memory to false on swiotlb
> initialization success.
> 
> Fixes: ac2cbab21f31 ("x86: Don't panic if can not alloc buffer for swiotlb")
> 
> Reported-by: Elliott Mitchell <ehem+xen@m5p.com>
> Tested-by: Elliott Mitchell <ehem+xen@m5p.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> Reviewed-by: Christoph Hellwig <hch@lst.de>
> CC: stable@vger.kernel.org
> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> ---
>  kernel/dma/swiotlb.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 465a567678d9..e08cac39c0ba 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -229,6 +229,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
>  	}
>  	io_tlb_index = 0;
> +	no_iotlb_memory = false;
>  
>  	if (verbose)
>  		swiotlb_print_info();
> @@ -260,9 +261,11 @@ swiotlb_init(int verbose)
>  	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
>  		return;
>  
> -	if (io_tlb_start)
> +	if (io_tlb_start) {
>  		memblock_free_early(io_tlb_start,
>  				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
> +		io_tlb_start = 0;
> +	}
>  	pr_warn("Cannot allocate buffer");
>  	no_iotlb_memory = true;
>  }
> @@ -360,6 +363,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
>  	}
>  	io_tlb_index = 0;
> +	no_iotlb_memory = false;
>  
>  	swiotlb_print_info();
>  
> -- 
> 2.13.6
> 


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 21:55:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 21:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13110.33690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXWvl-0005Cq-Ah; Tue, 27 Oct 2020 21:55:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13110.33690; Tue, 27 Oct 2020 21:55:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXWvl-0005Cj-6r; Tue, 27 Oct 2020 21:55:01 +0000
Received: by outflank-mailman (input) for mailman id 13110;
 Tue, 27 Oct 2020 21:54:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WJOX=EC=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXWvj-0005Cc-Nd
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 21:54:59 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 088a8202-fd07-413f-9447-7ce9f517a46e;
 Tue, 27 Oct 2020 21:54:57 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXWvg-0001uL-SR; Tue, 27 Oct 2020 21:54:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXWvg-0002r9-L1; Tue, 27 Oct 2020 21:54:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXWvg-0007UT-KX; Tue, 27 Oct 2020 21:54:56 +0000
X-Inumbo-ID: 088a8202-fd07-413f-9447-7ce9f517a46e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lQyI3TvkIEJE9pDOox74WtGFh8q5uBHW7TID/wEMyB4=; b=EP5Tpx/ImY5lyT8iH7u48sKkWh
	kcRn7jhuG6L0L1iJbTTI1IAR4+xHtLxSahL92XB2x6BhhfPl31jrhThhkGB7YiLfW22OJhn2z/exM
	iYNq40jZucNnCt0edCjy5rmIQUB67qRaDcFh/ObrgnkpsNdE5a5WJTBkFLU+qGMlEBQo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156260-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156260: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=16a20963b3209788f2c0d3a3eebb7d92f03f5883
X-Osstest-Versions-That:
    xen=964781c6f162893677c50a779b7d562a299727ba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 27 Oct 2020 21:54:56 +0000

flight 156260 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156260/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  16a20963b3209788f2c0d3a3eebb7d92f03f5883
baseline version:
 xen                  964781c6f162893677c50a779b7d562a299727ba

Last test of basis   156245  2020-10-26 16:01:22 Z    1 days
Testing same since   156260  2020-10-27 18:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   964781c6f1..16a20963b3  16a20963b3209788f2c0d3a3eebb7d92f03f5883 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 22:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 22:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13123.33705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXXg6-00014I-Ul; Tue, 27 Oct 2020 22:42:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13123.33705; Tue, 27 Oct 2020 22:42:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXXg6-00014B-RA; Tue, 27 Oct 2020 22:42:54 +0000
Received: by outflank-mailman (input) for mailman id 13123;
 Tue, 27 Oct 2020 22:42:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXXg5-000146-H3
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 22:42:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c97c9b4f-c1c7-4621-b03e-5470c1b7e0f8;
 Tue, 27 Oct 2020 22:42:52 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id ACB462070E;
 Tue, 27 Oct 2020 22:42:51 +0000 (UTC)
X-Inumbo-ID: c97c9b4f-c1c7-4621-b03e-5470c1b7e0f8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603838571;
	bh=DOYHKDzCBaOnpf3jMoYPK5uiy0h3DwiEWrPZhy4OGS4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=IJabmM6lZap9DQGI8TB5u3T5Aj4TLOqLOnQjHthU2dNBmvjwYWyWef5koU9FS4Mrm
	 lNqTBx+gd6Vt6+jyJmfYRenBmf8/bjPIidf3aorFugoZx/DhrHF7zK72gjRt/07ZLd
	 cMvIGv7b4bKmd7wQXe8NqgDxf0eb0MoQ6DyvYDxM=
Date: Tue, 27 Oct 2020 15:42:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 1/3] xen/arm: use printk_once for errata warning
 prints
In-Reply-To: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2010271532200.12247@sstabellini-ThinkPad-T480s>
References: <cover.1603728729.git.bertrand.marquis@arm.com> <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 26 Oct 2020, Bertrand Marquis wrote:
> Replace the use of warning_add with printk_once, using a **** prefix and
> suffix, for errata-related warnings.
> 
> This removes the need for the ASSERT, which is not robust enough to
> protect this print against wrong usage.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/arm/cpuerrata.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 0c63dfa779..0430069a84 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -157,7 +157,6 @@ extern char __smccc_workaround_1_smc_start[], __smccc_workaround_1_smc_end[];
>  static int enable_smccc_arch_workaround_1(void *data)
>  {
>      struct arm_smccc_res res;
> -    static bool warned = false;
>      const struct arm_cpu_capabilities *entry = data;
>  
>      /*
> @@ -182,13 +181,8 @@ static int enable_smccc_arch_workaround_1(void *data)
>                                       "call ARM_SMCCC_ARCH_WORKAROUND_1");
>  
>  warn:
> -    if ( !warned )
> -    {
> -        ASSERT(system_state < SYS_STATE_active);
> -        warning_add("No support for ARM_SMCCC_ARCH_WORKAROUND_1.\n"
> -                    "Please update your firmware.\n");
> -        warned = true;
> -    }
> +    printk_once("**** No support for ARM_SMCCC_ARCH_WORKAROUND_1. ****\n"
> +                "**** Please update your firmware.                ****\n");
>  
>      return 0;
>  }
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 22:44:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 22:44:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13133.33716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXXhC-0001CR-8z; Tue, 27 Oct 2020 22:44:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13133.33716; Tue, 27 Oct 2020 22:44:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXXhC-0001CK-5b; Tue, 27 Oct 2020 22:44:02 +0000
Received: by outflank-mailman (input) for mailman id 13133;
 Tue, 27 Oct 2020 22:44:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXXhA-0001CB-8v
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 22:44:00 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5008af22-287b-4090-a77d-b7afe8327c97;
 Tue, 27 Oct 2020 22:43:59 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 5867C2070E;
 Tue, 27 Oct 2020 22:43:58 +0000 (UTC)
X-Inumbo-ID: 5008af22-287b-4090-a77d-b7afe8327c97
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603838638;
	bh=s2EsWcdamgqbRhMPT220EeF+aiisPXqfZRmvzzvpkZc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=t3izSP5/mRHliBCpcwOaSnnSUNGhNzyYvXZs6ZOHruDLfIP+VoO2AQiKkpJWwoN1a
	 PEoXcQazy2dYh71B8+ZM98jqdx93YeE3nqwl91rh4gdCjvfNQsVFsmset9ca5CWRZj
	 M/qSe+ASM7ahJDgDFgPWmMm9PEQUxRbfX2vACB0M=
Date: Tue, 27 Oct 2020 15:43:57 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/3] xen: Add an unsecure Taint type
In-Reply-To: <d7e82b374cb7c83d6e18774e23bc4d970c4e8b53.1603728729.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2010271538080.12247@sstabellini-ThinkPad-T480s>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com> <d7e82b374cb7c83d6e18774e23bc4d970c4e8b53.1603728729.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 26 Oct 2020, Bertrand Marquis wrote:
> Define a new Unsecure taint type, used to signal a system tainted
> due to an insecure configuration or a hardware feature/erratum.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/common/kernel.c   | 4 +++-
>  xen/include/xen/lib.h | 1 +
>  2 files changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/common/kernel.c b/xen/common/kernel.c
> index c3a943f077..7a345ae45e 100644
> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -326,6 +326,7 @@ unsigned int tainted;
>   *  'E' - An error (e.g. a machine check exceptions) has been injected.
>   *  'H' - HVM forced emulation prefix is permitted.
>   *  'M' - Machine had a machine check experience.
> + *  'U' - Platform is unsecure (usually due to an errata on the platform).
>   *
>   *      The string is overwritten by the next call to print_taint().
>   */
> @@ -333,7 +334,8 @@ char *print_tainted(char *str)
>  {
>      if ( tainted )
>      {
> -        snprintf(str, TAINT_STRING_MAX_LEN, "Tainted: %c%c%c%c",
> +        snprintf(str, TAINT_STRING_MAX_LEN, "Tainted: %c%c%c%c%c",
> +                 tainted & TAINT_MACHINE_UNSECURE ? 'U' : ' ',
>                   tainted & TAINT_MACHINE_CHECK ? 'M' : ' ',
>                   tainted & TAINT_SYNC_CONSOLE ? 'C' : ' ',
>                   tainted & TAINT_ERROR_INJECT ? 'E' : ' ',
> diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
> index 1983bd6b86..a9679c913d 100644
> --- a/xen/include/xen/lib.h
> +++ b/xen/include/xen/lib.h
> @@ -193,6 +193,7 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c);
>  #define TAINT_MACHINE_CHECK             (1u << 1)
>  #define TAINT_ERROR_INJECT              (1u << 2)
>  #define TAINT_HVM_FEP                   (1u << 3)
> +#define TAINT_MACHINE_UNSECURE          (1u << 4)
>  extern unsigned int tainted;
>  #define TAINT_STRING_MAX_LEN            20
>  extern char *print_tainted(char *str);
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 22:44:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 22:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13134.33729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXXhR-0001HJ-Gs; Tue, 27 Oct 2020 22:44:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13134.33729; Tue, 27 Oct 2020 22:44:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXXhR-0001HC-Dd; Tue, 27 Oct 2020 22:44:17 +0000
Received: by outflank-mailman (input) for mailman id 13134;
 Tue, 27 Oct 2020 22:44:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXXhP-0001Gk-A6
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 22:44:15 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7cd50051-5bf8-4a75-9d8d-22c5a7558a9f;
 Tue, 27 Oct 2020 22:44:14 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B8C0A2070E;
 Tue, 27 Oct 2020 22:44:13 +0000 (UTC)
X-Inumbo-ID: 7cd50051-5bf8-4a75-9d8d-22c5a7558a9f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603838654;
	bh=rv/hDW3fGoRvDjcOou9Oo677MhsJNQXyeWnFU4rkdy0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=YXcoYVhHg8cALr2h6Ft9QtCM/feZOTIpU7xdDndfuQMzQkgOhJgptFaWbE334RDMQ
	 +d+CLGmOK0Y9D8iX2Yj21Fk5W3IVeYm/InKiRLIxulBrCpmA+ffgH4fKgnfz8wIhMO
	 iJiTjJ8R79FtAN+Gya9zqB8/Qekz8bKcgZAKIBU0=
Date: Tue, 27 Oct 2020 15:44:13 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <bertrand.marquis@arm.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
In-Reply-To: <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
Message-ID: <alpine.DEB.2.21.2010271540110.12247@sstabellini-ThinkPad-T480s>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com> <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 26 Oct 2020, Bertrand Marquis wrote:
> When a Cortex A57 processor is affected by CPU erratum 832075, a guest
> not implementing the workaround for it could deadlock the system.
> Add a warning during boot informing the user that only trusted guests
> should be executed on the system.
> An equivalent warning is already given to the user by KVM on cores
> affected by this erratum.
> 
> Also taint the hypervisor as unsecure when this erratum applies, and
> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md.
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>  SUPPORT.md               |  1 +
>  xen/arch/arm/cpuerrata.c | 13 +++++++++++++
>  2 files changed, 14 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index 5fbe5fc444..f7a3b046b0 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -38,6 +38,7 @@ supported in this document.
>  ### ARM v8
>  
>      Status: Supported
> +    Status, Cortex A57 r0p0 - r1p2, not security supported (Errata 832075)
>  
>  ## Host hardware support
>  
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 0430069a84..b35e8cd0b9 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -503,6 +503,19 @@ void check_local_cpu_errata(void)
>  void __init enable_errata_workarounds(void)
>  {
>      enable_cpu_capabilities(arm_errata);
> +
> +#ifdef CONFIG_ARM64_ERRATUM_832075
> +    if ( cpus_have_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) )
> +    {
> +        printk_once("**** This CPU is affected by the errata 832075. ****\n"
> +                    "**** Guests without CPU erratum workarounds     ****\n"
> +                    "**** can deadlock the system!                   ****\n"
> +                    "**** Only trusted guests should be used.        ****\n");

These can be on 2 lines, no need to be on 4 lines.
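
E.g. something along these lines (wording illustrative only; the fputs
stand-in is just so the sketch builds outside Xen, where it would be the
string passed to printk_once):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Illustrative 2-line wording only; in cpuerrata.c this is the literal
 * passed to printk_once(). */
static const char erratum_832075_warning[] =
    "**** CPU erratum 832075: guests without the workaround can deadlock the system! ****\n"
    "**** Only trusted guests should be run on this machine.                         ****\n";

static void warn_erratum_832075(void)
{
    /* In Xen: printk_once(erratum_832075_warning); */
    fputs(erratum_832075_warning, stdout);
}
```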


I know that Julien wrote about printing the warning from
enable_errata_workarounds but to me it looks more natural if we did it
from the .enable function specific to ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE.

That said, I don't feel strongly about it, I am fine either way. Julien,
do you have a preference?


Other than that, it is fine.


> +        /* Taint the machine as being insecure */
> +        add_taint(TAINT_MACHINE_UNSECURE);
> +    }
> +#endif



From xen-devel-bounces@lists.xenproject.org Tue Oct 27 23:19:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 23:19:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13149.33741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXYF1-00045L-9s; Tue, 27 Oct 2020 23:18:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13149.33741; Tue, 27 Oct 2020 23:18:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXYF1-00045E-6T; Tue, 27 Oct 2020 23:18:59 +0000
Received: by outflank-mailman (input) for mailman id 13149;
 Tue, 27 Oct 2020 23:18:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OCGY=EC=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXYEz-000459-Tc
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 23:18:58 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 904df3fc-12db-4189-8424-63a110050b68;
 Tue, 27 Oct 2020 23:18:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603840736;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to;
  bh=IZ9FHP7UM9ir5vN2+AeRAWVYMbaqA/QaczprssiUnIc=;
  b=eyuhy0Zd97UDrpZ7EOXuJha2FLUEwBYis8RPlGZyd3CE9abipUKLweY2
   J31OYHuA3oPQ6kypepPL1HlX0LyCM9d30640XmyK2J5l/YT6eYVF63SxP
   XqcqHCJ4Smsi2j93UGQDm64WXjUX0NK8+Q6BIyHjJj3LuHUE+MXaQrJdH
   w=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: VnByQLNSlIX7iD4ifp24aPZTiaBZXb++wlCSUecghkT5Hdd25WUekYvZve6UplKRQleaiTQtgd
 WkrGun2nbcO3AwraMFnC1X30E7J8x7f5RaJKoTgfdSlmkQpmbgSBNscY8eIDGePLotzuK//ry1
 L5DelmyXWZixd3tRpj0RZ/+4uARrbr8l6gCwI+f4O21o0HdFrv+CYyVGIA+S+e9u4+C/wFuOAJ
 Hd1YCS/ZbBPfGVL/e8pa9DFPAsbsfR3bylLRs+4CHRGfc+XDkoDzeBpAK/QrGkozEApoZDMhYz
 p/w=
X-SBRS: None
X-MesageID: 29977274
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,424,1596513600"; 
   d="asc'?scan'208";a="29977274"
Subject: Re: inconsistent pfn type checking in process_page_data
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
References: <20201027184119.1d3f701e.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6c595f1b-72cf-4f9e-5bad-eb0ebde45630@citrix.com>
Date: Tue, 27 Oct 2020 23:18:47 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201027184119.1d3f701e.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature";
	boundary="PmNP58m8765AuC1cLqKa14r0h9Udv4XtV"
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

--PmNP58m8765AuC1cLqKa14r0h9Udv4XtV
Content-Type: multipart/mixed; boundary="Pt76t2PbaCJH0N46DjzEpjw4D9tx0Jp59"

--Pt76t2PbaCJH0N46DjzEpjw4D9tx0Jp59
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB

On 27/10/2020 17:41, Olaf Hering wrote:
> Andrew,
>
> with commit 93c2ff78adcadbe0f8bda57eeb30b1414c966324 a new function
> process_page_data was added. While filling the mfns array for
> xenforeignmemory_map, the individual pfn types[] are checked in a
> different way than the checking of the result of the mapping attempt.
>
> Is there a special reason why the first loop uses the various TAB
> values, and the other loop checks XTAB/BROKEN/XALLOC? The sending side
> also uses the latter.

Hmm.  (Wow, that's more than 5 years old now)

IIRC, that particular piece of logic took an age to get correct.  It's
quite possible it ended up being a merge of code which David and I wrote
at slightly different times.

Fundamentally, there are 4 bits of page type which encode NOTAB
(writeable), L{1..4}, Pinned L{1..4}, XTAB (invalid), XALLOC
(allocate-only - still not sure what the purpose is.  There is no
producer of this type that I'm aware of) and BROKEN.

By my count, that is 12 legal types, so asymmetric type=>"is there data"
conversions are probably a bad thing.

I suspect we probably want an is_known_page_type() predicate on the
source side to sanity check data from Xen, and on the destination side
in handle_page_data() to sanity check data in the stream, and we
probably want a page_type_has_data() predicate to use in
write_batch()/process_page_data() to ensure that the handling is
consistent.

Do you fancy doing a patch?  If not I can put this on my TODO list, but
no ETA on when I'd get to it.

~Andrew


--Pt76t2PbaCJH0N46DjzEpjw4D9tx0Jp59--

--PmNP58m8765AuC1cLqKa14r0h9Udv4XtV
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEzzVJW36m9w6nfSF2ZcP5BqXXn6AFAl+YqtsACgkQZcP5BqXX
n6BkrRAAg1pM7SuBZRyjw6qrXtkBnjmCHVa6eI9/BShJvYOoZwKINWlmrTagFX6M
q/7tqiU+XMMO1XtQ77XrEWUfLdnagxZSj0LcVXo2j9AOILcEfhVjm06kTr/y1Tie
s8tDBOfTdn+y0kV4ujlCm5sOfBngWEDOtYNXqhTeNZSdbmeeNXLue4ojwLDWP+Vf
4JK34YO0O4q4cGCclIpubPz0clO5ZzOauyBSno2OKrDvkF2C3NuXyvCkIz69oJ3z
+LyngRA38nrGXz3pVDZ9fqvVOvnPLIIwh8oyLE2f56084Z00xuHRpYTE69lF/d3B
frCkPEzOrlQVVepLIlEs0PavRbtvLiV0igzg8IubbBqKWL0c8Zyna4byAR1ZhWkL
A/Ob7K8RfjoBYDnVCAm5qjQpI4NMU8Sc6chevBqeJqNG9lfKRlhISPRBrncG4jy2
n0ggAAdrl6Tnnz2o3W+HiTbKz9500Li2xHfhtOyJiRuDvo3tNh8l24i6B52f12iO
QL6DuQBsmSW7geWl7nBQ9YD1rfkbMQUBRCdnmkAgIF3C/hEJXi2+PpFyqQVZUGqR
oCsNviC3kvxGhhSuZ0KlKisIZySMpVtk8K8UdFMmpje80PbAovAem+PnVVCoJB52
AlFe/yx5ffGOSN0ZidQiI2ac5nQAJSQoONAP3vv41G9t9JuC3e0=
=coZJ
-----END PGP SIGNATURE-----

--PmNP58m8765AuC1cLqKa14r0h9Udv4XtV--


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 23:32:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 23:32:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13171.33753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXYS5-0005kR-LF; Tue, 27 Oct 2020 23:32:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13171.33753; Tue, 27 Oct 2020 23:32:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXYS5-0005kK-HX; Tue, 27 Oct 2020 23:32:29 +0000
Received: by outflank-mailman (input) for mailman id 13171;
 Tue, 27 Oct 2020 23:32:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXYS4-0005kF-1w
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 23:32:28 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa1f5950-341c-46d5-b464-d499d84a232c;
 Tue, 27 Oct 2020 23:32:27 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3ED6520747;
 Tue, 27 Oct 2020 23:32:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603841546;
	bh=qtb3dL32MVZs65s3PVnJxFbnlJZc/NeI8cYzE9m4lzM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sliiHvdbnq0sWM9OagtFdWu8tsuwhwLB2cz/qmpmVR3ZjBwHGU7hhPHBqvUDGxr0+
	 YfgSA/LAYbXbCY/KxBHmUkA5QrhPeRXdU44gMPaT4l71rbbKFFAjJz9aq5NM30gY6m
	 4JBZhiLMhx35Hjuu/Mx6QYcZQ1sge8vobnHbj6RQ=
Date: Tue, 27 Oct 2020 16:32:24 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
In-Reply-To: <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
References: <cover.1603731279.git.rahul.singh@arm.com> <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 26 Oct 2020, Rahul Singh wrote:
> ARM platforms does not support ns16550 PCI support. When CONFIG_HAS_PCI
                ^ do

> is enabled for ARM a compilation error is observed.
> 
> Fixed compilation error after introducing new kconfig option
> CONFIG_HAS_NS16550_PCI for x86 platforms to support ns16550 PCI.
>
> No functional change.

Written like that it would seem that ARM platforms do not support
NS16550 on the PCI bus, but actually, it would be theoretically possible
to have an NS16550 card slotted in a PCI bus on ARM, right?

The problem is the current limitation in terms of Xen internal
plumbing, such as the lack of MSI support. Is that correct?

If so, I'd update the commit message to reflect the situation a bit
better.


> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/drivers/char/Kconfig   |  7 +++++++
>  xen/drivers/char/ns16550.c | 32 ++++++++++++++++----------------
>  2 files changed, 23 insertions(+), 16 deletions(-)
> 
> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
> index b572305657..8887e86afe 100644
> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -4,6 +4,13 @@ config HAS_NS16550
>  	help
>  	  This selects the 16550-series UART support. For most systems, say Y.
>  
> +config HAS_NS16550_PCI
> +	bool "NS16550 UART PCI support" if X86
> +	default y
> +	depends on X86 && HAS_NS16550 && HAS_PCI
> +	help
> +	  This selects the 16550-series UART PCI support. For most systems, say Y.

I think that this should be a silent option:
if HAS_NS16550 && HAS_PCI && X86 -> automatically enable
otherwise -> automatically disable

No need to show it to the user.
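
I.e. something like this (untested):

```
config HAS_NS16550_PCI
	def_bool y
	depends on X86 && HAS_NS16550 && HAS_PCI
```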

The rest of course is fine.


>  config HAS_CADENCE_UART
>  	bool "Xilinx Cadence UART driver"
>  	default y
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index d8b52eb813..bd1c2af956 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -16,7 +16,7 @@
>  #include <xen/timer.h>
>  #include <xen/serial.h>
>  #include <xen/iocap.h>
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/pci_ids.h>
> @@ -54,7 +54,7 @@ enum serial_param_type {
>      reg_shift,
>      reg_width,
>      stop_bits,
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      bridge_bdf,
>      device,
>      port_bdf,
> @@ -83,7 +83,7 @@ static struct ns16550 {
>      unsigned int timeout_ms;
>      bool_t intr_works;
>      bool_t dw_usr_bsy;
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      /* PCI card parameters. */
>      bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
>      bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
> @@ -117,14 +117,14 @@ static const struct serial_param_var __initconst sp_vars[] = {
>      {"reg-shift", reg_shift},
>      {"reg-width", reg_width},
>      {"stop-bits", stop_bits},
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      {"bridge", bridge_bdf},
>      {"dev", device},
>      {"port", port_bdf},
>  #endif
>  };
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>  struct ns16550_config {
>      u16 vendor_id;
>      u16 dev_id;
> @@ -620,7 +620,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
>  
>  static void pci_serial_early_init(struct ns16550 *uart)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
>          return;
>  
> @@ -719,7 +719,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
>  
>  static void __init ns16550_init_irq(struct serial_port *port)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      struct ns16550 *uart = port->uart;
>  
>      if ( uart->msi )
> @@ -761,7 +761,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
>      uart->timeout_ms = max_t(
>          unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      if ( uart->bar || uart->ps_bdf_enable )
>      {
>          if ( uart->param && uart->param->mmio &&
> @@ -841,7 +841,7 @@ static void ns16550_suspend(struct serial_port *port)
>  
>      stop_timer(&uart->timer);
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      if ( uart->bar )
>         uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
>                                    uart->ps_bdf[2]), PCI_COMMAND);
> @@ -850,7 +850,7 @@ static void ns16550_suspend(struct serial_port *port)
>  
>  static void _ns16550_resume(struct serial_port *port)
>  {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      struct ns16550 *uart = port->uart;
>  
>      if ( uart->bar )
> @@ -1013,7 +1013,7 @@ static int __init check_existence(struct ns16550 *uart)
>      return 1; /* Everything is MMIO */
>  #endif
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      pci_serial_early_init(uart);
>  #endif
>  
> @@ -1044,7 +1044,7 @@ static int __init check_existence(struct ns16550 *uart)
>      return (status == 0x90);
>  }
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>  static int __init
>  pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>  {
> @@ -1305,7 +1305,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>  
>      if ( *conf == ',' && *++conf != ',' )
>      {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>          if ( strncmp(conf, "pci", 3) == 0 )
>          {
>              if ( pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com) )
> @@ -1327,7 +1327,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>  
>      if ( *conf == ',' && *++conf != ',' )
>      {
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>          if ( strncmp(conf, "msi", 3) == 0 )
>          {
>              conf += 3;
> @@ -1339,7 +1339,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>              uart->irq = simple_strtol(conf, &conf, 10);
>      }
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>      if ( *conf == ',' && *++conf != ',' )
>      {
>          conf = parse_pci(conf, NULL, &uart->ps_bdf[0],
> @@ -1419,7 +1419,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
>              uart->reg_width = simple_strtoul(param_value, NULL, 0);
>              break;
>  
> -#ifdef CONFIG_HAS_PCI
> +#ifdef CONFIG_HAS_NS16550_PCI
>          case bridge_bdf:
>              if ( !parse_pci(param_value, NULL, &uart->ps_bdf[0],
>                              &uart->ps_bdf[1], &uart->ps_bdf[2]) )
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Oct 27 23:56:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 27 Oct 2020 23:56:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13190.33765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXYom-0007bi-Ip; Tue, 27 Oct 2020 23:55:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13190.33765; Tue, 27 Oct 2020 23:55:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXYom-0007bb-Fq; Tue, 27 Oct 2020 23:55:56 +0000
Received: by outflank-mailman (input) for mailman id 13190;
 Tue, 27 Oct 2020 23:55:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=C1L6=EC=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXYol-0007bW-Dj
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 23:55:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d84f8bc1-5e67-4e09-84c5-1a75d37e38a3;
 Tue, 27 Oct 2020 23:55:54 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 2E24721D91;
 Tue, 27 Oct 2020 23:55:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603842953;
	bh=SXf5Z5Usu2gveD/0mUvQaH6q3rziOWLpQl68mdCRxwE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=1c/HZ1ozl7Lkp9FCUsjJtiJUVRFEYiOUbqpFhFRTOQlxQTnkAWVpccHkpSVsqxBoU
	 rIglBm+ZE/+RjHbjR/jZzypwUhKxsldFiSNNnY2n/mT52Qm/dZFkdaa/ltO1mmujau
	 92KUBSkT0kLsq65z79XEaPDWKxpIz73HD59b4BG8=
Date: Tue, 27 Oct 2020 16:55:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, 
    Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>, 
    Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v1 2/4] xen/pci: Introduce new CONFIG_HAS_PCI_ATS flag
 for PCI ATS functionality.
In-Reply-To: <1bb971bed3f5a56b0691fbcfd0346ae721ba049f.1603731279.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2010271640280.12247@sstabellini-ThinkPad-T480s>
References: <cover.1603731279.git.rahul.singh@arm.com> <1bb971bed3f5a56b0691fbcfd0346ae721ba049f.1603731279.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 26 Oct 2020, Rahul Singh wrote:
> PCI ATS functionality is not enabled and tested for ARM architecture
> but it is enabled for x86 and referenced in common passthrough/pci.c
> code.
> 
> Therefore introducing the new flag to enable the ATS functionality for
> x86 only to avoid issues for ARM architecture.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/arch/x86/Kconfig                     |  1 +
>  xen/drivers/passthrough/ats.h            | 24 ++++++++++++++++++++++++
>  xen/drivers/passthrough/vtd/x86/Makefile |  2 +-
>  xen/drivers/passthrough/x86/Makefile     |  2 +-
>  xen/drivers/pci/Kconfig                  |  3 +++
>  5 files changed, 30 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> index 24868aa6ad..31906e9c97 100644
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -20,6 +20,7 @@ config X86
>  	select HAS_NS16550
>  	select HAS_PASSTHROUGH
>  	select HAS_PCI
> +	select HAS_PCI_ATS
>  	select HAS_PDX
>  	select HAS_SCHED_GRANULARITY
>  	select HAS_UBSAN
> diff --git a/xen/drivers/passthrough/ats.h b/xen/drivers/passthrough/ats.h
> index 22ae209b37..a0af07b287 100644
> --- a/xen/drivers/passthrough/ats.h
> +++ b/xen/drivers/passthrough/ats.h
> @@ -17,6 +17,8 @@
>  
>  #include <xen/pci_regs.h>
>  
> +#ifdef CONFIG_HAS_PCI_ATS
> +
>  #define ATS_REG_CAP    4
>  #define ATS_REG_CTL    6
>  #define ATS_QUEUE_DEPTH_MASK     0x1f
> @@ -48,5 +50,27 @@ static inline int pci_ats_device(int seg, int bus, int devfn)
>      return pci_find_ext_capability(seg, bus, devfn, PCI_EXT_CAP_ID_ATS);
>  }
>  
> +#else
> +
> +static inline int enable_ats_device(struct pci_dev *pdev,
> +                                    struct list_head *ats_list)
> +{
> +    return -ENODEV;
> +}
> +
> +static inline void disable_ats_device(struct pci_dev *pdev) { }
> +
> +static inline int pci_ats_enabled(int seg, int bus, int devfn)
> +{
> +    return -ENODEV;
> +}

pci_ats_enabled returns 0 when ATS is not enabled


> +static inline int pci_ats_device(int seg, int bus, int devfn)
> +{
> +    return -ENODEV;

also returns 0 when ATS is not enabled


> +}
> +
> +#endif /* CONFIG_HAS_PCI_ATS */
> +
>  #endif /* _ATS_H_ */
>  
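
To be explicit, I'd expect the !CONFIG_HAS_PCI_ATS stubs for the two
query helpers to look more like this (sketch):

```c
#include <assert.h>

/* When CONFIG_HAS_PCI_ATS is off, the query helpers should report
 * "no ATS" the same way the real implementations do when the
 * capability is absent, i.e. return 0, not -ENODEV. */
static inline int pci_ats_enabled(int seg, int bus, int devfn)
{
    (void)seg; (void)bus; (void)devfn;
    return 0;
}

static inline int pci_ats_device(int seg, int bus, int devfn)
{
    (void)seg; (void)bus; (void)devfn;
    return 0;
}
```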
> diff --git a/xen/drivers/passthrough/vtd/x86/Makefile b/xen/drivers/passthrough/vtd/x86/Makefile
> index 4ef00a4c5b..60f79fe263 100644
> --- a/xen/drivers/passthrough/vtd/x86/Makefile
> +++ b/xen/drivers/passthrough/vtd/x86/Makefile
> @@ -1,3 +1,3 @@
> -obj-y += ats.o
> +obj-$(CONFIG_HAS_PCI_ATS) += ats.o
>  obj-$(CONFIG_HVM) += hvm.o
>  obj-y += vtd.o
> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
> index a70cf9460d..05e6f51f25 100644
> --- a/xen/drivers/passthrough/x86/Makefile
> +++ b/xen/drivers/passthrough/x86/Makefile
> @@ -1,2 +1,2 @@
> -obj-y += ats.o
> +obj-$(CONFIG_HAS_PCI_ATS) += ats.o
>  obj-y += iommu.o
> diff --git a/xen/drivers/pci/Kconfig b/xen/drivers/pci/Kconfig
> index 7da03fa13b..1588d4a91e 100644
> --- a/xen/drivers/pci/Kconfig
> +++ b/xen/drivers/pci/Kconfig
> @@ -1,3 +1,6 @@
>  
>  config HAS_PCI
>  	bool
> +
> +config HAS_PCI_ATS
> +	bool
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 00:04:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 00:04:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13210.33776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXYwj-0000j7-1p; Wed, 28 Oct 2020 00:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13210.33776; Wed, 28 Oct 2020 00:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXYwi-0000j0-VH; Wed, 28 Oct 2020 00:04:08 +0000
Received: by outflank-mailman (input) for mailman id 13210;
 Wed, 28 Oct 2020 00:04:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ghuf=ED=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXYwh-0000iv-6o
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 00:04:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb000be8-4024-40ee-820e-b2bdd260ad63;
 Wed, 28 Oct 2020 00:04:06 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3DF8C2223C;
 Wed, 28 Oct 2020 00:04:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603843445;
	bh=7XeYwh4xtpo99whUtqEGN7GeyloO1dcUjaYCORgAGLo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=tgTgnVIAnnWQHjbpco1BWPCqPUKnYDW+tdSDQ5y0Xx5kiShfzn/Hir8Gp25hz2Pkk
	 vGP6AhqANvuO9wXgFhrMxPfth+lUdizJbdJw/nhp9JWMKrErnzUqNDJXDjjS8HIhJs
	 tjFr4l1TiTloHrvlAfoIeSUPx+i0lF1yp7zqUMAU=
Date: Tue, 27 Oct 2020 17:04:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v1 0/4] xen/arm: Make PCI passthrough code non-x86
 specific
In-Reply-To: <cover.1603731279.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2010271702020.12247@sstabellini-ThinkPad-T480s>
References: <cover.1603731279.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 26 Oct 2020, Rahul Singh wrote:
> This patch series is preparatory work to make PCI passthrough code non-x86
> specific.

Do you have a simple patch I could use to test the compilation on ARM?
Even just a hack would help me review the series.

Right now I can only test that the compilation on x86 is unbroken.
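
For context, a cross-build of the hypervisor for Arm is typically attempted
with something like the following (the cross-toolchain prefix is an
assumption and depends on the build host):

```shell
# Cross-compile only the hypervisor tree for arm64/armhf.
# aarch64-linux-gnu- / arm-linux-gnueabihf- are the usual Debian/Ubuntu
# toolchain prefixes; substitute whatever cross-compiler is installed.
make -C xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
make -C xen XEN_TARGET_ARCH=arm32 CROSS_COMPILE=arm-linux-gnueabihf-
```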

Thanks!

- Stefano


> Rahul Singh (4):
>   xen/ns16550: solve compilation error on ARM with CONFIG_HAS_PCI
>     enabled.
>   xen/pci: Introduce new CONFIG_HAS_PCI_ATS flag for PCI ATS
>     functionality.
>   xen/pci: Move x86 specific code to x86 directory.
>   xen/pci: solve compilation error when memory paging is not enabled.
> 
>  xen/arch/x86/Kconfig                     |  1 +
>  xen/drivers/char/Kconfig                 |  7 ++
>  xen/drivers/char/ns16550.c               | 32 ++++----
>  xen/drivers/passthrough/ats.h            | 24 ++++++
>  xen/drivers/passthrough/pci.c            | 79 +------------------
>  xen/drivers/passthrough/vtd/x86/Makefile |  2 +-
>  xen/drivers/passthrough/x86/Makefile     |  3 +-
>  xen/drivers/passthrough/x86/iommu.c      |  7 ++
>  xen/drivers/passthrough/x86/pci.c        | 97 ++++++++++++++++++++++++
>  xen/drivers/pci/Kconfig                  |  3 +
>  xen/include/xen/pci.h                    |  2 +
>  11 files changed, 164 insertions(+), 93 deletions(-)
>  create mode 100644 xen/drivers/passthrough/x86/pci.c
> 
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 00:15:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 00:15:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13230.33789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXZ80-0001hL-4n; Wed, 28 Oct 2020 00:15:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13230.33789; Wed, 28 Oct 2020 00:15:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXZ80-0001hE-1T; Wed, 28 Oct 2020 00:15:48 +0000
Received: by outflank-mailman (input) for mailman id 13230;
 Wed, 28 Oct 2020 00:15:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ghuf=ED=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXZ7z-0001h9-1L
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 00:15:47 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 08941584-b26b-4b1f-a9eb-396ac791c599;
 Wed, 28 Oct 2020 00:15:46 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 045B62223C;
 Wed, 28 Oct 2020 00:15:44 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ghuf=ED=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kXZ7z-0001h9-1L
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 00:15:47 +0000
X-Inumbo-ID: 08941584-b26b-4b1f-a9eb-396ac791c599
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 08941584-b26b-4b1f-a9eb-396ac791c599;
	Wed, 28 Oct 2020 00:15:46 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 045B62223C;
	Wed, 28 Oct 2020 00:15:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603844145;
	bh=0b1ob6ERADyc9hSxZPEjS6jZR4GRpxV4gccE/wpme/Q=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=N8ne3+AVJwTuQEXJVsffqTBtm8ZQqamfDLwheWF23yAKAEPz2XX7GNLPWFmwfkdWz
	 NAM9cPia9hYI7M1a4zT0cLaFgeYKrOpkQQ+QtrpExxuwJIWZbaGEW2W5fOhHr+mbzB
	 2zxlS8YkHQ+g3YmBhQ3XvXpGlQDMmB0SZt3C0o80=
Date: Tue, 27 Oct 2020 17:15:44 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Rahul Singh <rahul.singh@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86
 directory.
In-Reply-To: <70029e8904170c4f19d9f521847050cd00c6e39d.1603731279.git.rahul.singh@arm.com>
Message-ID: <alpine.DEB.2.21.2010271657030.12247@sstabellini-ThinkPad-T480s>
References: <cover.1603731279.git.rahul.singh@arm.com> <70029e8904170c4f19d9f521847050cd00c6e39d.1603731279.git.rahul.singh@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 26 Oct 2020, Rahul Singh wrote:
> passthrough/pci.c file is common for all architecture, but there is x86
> sepcific code in this file.
  ^ specific

> Move x86 specific code to the x86 directory to avoid compilation error
> for other architecture.
> 
> No functional change.
> 
> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
> ---
>  xen/drivers/passthrough/pci.c        | 75 +--------------------
>  xen/drivers/passthrough/x86/Makefile |  1 +
>  xen/drivers/passthrough/x86/iommu.c  |  7 ++
>  xen/drivers/passthrough/x86/pci.c    | 97 ++++++++++++++++++++++++++++
>  xen/include/xen/pci.h                |  2 +
>  5 files changed, 108 insertions(+), 74 deletions(-)
>  create mode 100644 xen/drivers/passthrough/x86/pci.c
> 
> diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
> index 2a3bce1462..c6fbb7172c 100644
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -24,7 +24,6 @@
>  #include <xen/irq.h>
>  #include <xen/param.h>
>  #include <xen/vm_event.h>
> -#include <asm/hvm/irq.h>
>  #include <xen/delay.h>
>  #include <xen/keyhandler.h>
>  #include <xen/event.h>

sched.h is no longer needed among the #includes


> @@ -847,71 +846,6 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
>      return ret;
>  }
>  
> -static int pci_clean_dpci_irq(struct domain *d,
> -                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> -{
> -    struct dev_intx_gsi_link *digl, *tmp;
> -
> -    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> -
> -    if ( pt_irq_need_timer(pirq_dpci->flags) )
> -        kill_timer(&pirq_dpci->timer);
> -
> -    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> -    {
> -        list_del(&digl->list);
> -        xfree(digl);
> -    }
> -
> -    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> -
> -    if ( !pt_pirq_softirq_active(pirq_dpci) )
> -        return 0;
> -
> -    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> -
> -    return -ERESTART;
> -}
> -
> -static int pci_clean_dpci_irqs(struct domain *d)
> -{
> -    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> -
> -    if ( !is_iommu_enabled(d) )
> -        return 0;
> -
> -    if ( !is_hvm_domain(d) )
> -        return 0;
> -
> -    spin_lock(&d->event_lock);
> -    hvm_irq_dpci = domain_get_irq_dpci(d);
> -    if ( hvm_irq_dpci != NULL )
> -    {
> -        int ret = 0;
> -
> -        if ( hvm_irq_dpci->pending_pirq_dpci )
> -        {
> -            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> -                 ret = -ERESTART;
> -            else
> -                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> -        }
> -
> -        if ( !ret )
> -            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> -        if ( ret )
> -        {
> -            spin_unlock(&d->event_lock);
> -            return ret;
> -        }
> -
> -        hvm_domain_irq(d)->dpci = NULL;
> -        free_hvm_irq_dpci(hvm_irq_dpci);
> -    }
> -    spin_unlock(&d->event_lock);
> -    return 0;
> -}
> -
>  /* Caller should hold the pcidevs_lock */
>  static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
>                             uint8_t devfn)
> @@ -971,7 +905,7 @@ int pci_release_devices(struct domain *d)
>      int ret;
>  
>      pcidevs_lock();
> -    ret = pci_clean_dpci_irqs(d);
> +    ret = arch_pci_release_devices(d);
>      if ( ret )
>      {
>          pcidevs_unlock();
> @@ -1375,13 +1309,6 @@ static int __init setup_dump_pcidevs(void)
>  }
>  __initcall(setup_dump_pcidevs);
>  
> -int iommu_update_ire_from_msi(
> -    struct msi_desc *msi_desc, struct msi_msg *msg)
> -{
> -    return iommu_intremap
> -           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> -}
> -
>  static int iommu_add_device(struct pci_dev *pdev)
>  {
>      const struct domain_iommu *hd;
> diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
> index 05e6f51f25..642f673ed2 100644
> --- a/xen/drivers/passthrough/x86/Makefile
> +++ b/xen/drivers/passthrough/x86/Makefile
> @@ -1,2 +1,3 @@
>  obj-$(CONFIG_HAS_PCI_ATS) += ats.o
>  obj-y += iommu.o
> +obj-y += pci.o
> diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
> index f17b1820f4..875e67b53b 100644
> --- a/xen/drivers/passthrough/x86/iommu.c
> +++ b/xen/drivers/passthrough/x86/iommu.c
> @@ -308,6 +308,13 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>      return pg;
>  }
>  
> +int iommu_update_ire_from_msi(
> +    struct msi_desc *msi_desc, struct msi_msg *msg)
> +{
> +    return iommu_intremap
> +           ? iommu_call(&iommu_ops, update_ire_from_msi, msi_desc, msg) : 0;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/drivers/passthrough/x86/pci.c b/xen/drivers/passthrough/x86/pci.c
> new file mode 100644
> index 0000000000..443712aa22
> --- /dev/null
> +++ b/xen/drivers/passthrough/x86/pci.c
> @@ -0,0 +1,97 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/param.h>

not needed


> +#include <xen/sched.h>
> +#include <xen/pci.h>
> +#include <xen/pci_regs.h>

not needed


> +static int pci_clean_dpci_irq(struct domain *d,
> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> +{
> +    struct dev_intx_gsi_link *digl, *tmp;
> +
> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> +
> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
> +        kill_timer(&pirq_dpci->timer);
> +
> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> +    {
> +        list_del(&digl->list);
> +        xfree(digl);
> +    }
> +
> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> +
> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
> +        return 0;
> +
> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> +
> +    return -ERESTART;
> +}
> +
> +static int pci_clean_dpci_irqs(struct domain *d)
> +{
> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> +
> +    if ( !is_iommu_enabled(d) )
> +        return 0;
> +
> +    if ( !is_hvm_domain(d) )
> +        return 0;
> +
> +    spin_lock(&d->event_lock);
> +    hvm_irq_dpci = domain_get_irq_dpci(d);
> +    if ( hvm_irq_dpci != NULL )
> +    {
> +        int ret = 0;
> +
> +        if ( hvm_irq_dpci->pending_pirq_dpci )
> +        {
> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> +                 ret = -ERESTART;
> +            else
> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> +        }
> +
> +        if ( !ret )
> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> +        if ( ret )
> +        {
> +            spin_unlock(&d->event_lock);
> +            return ret;
> +        }
> +
> +        hvm_domain_irq(d)->dpci = NULL;
> +        free_hvm_irq_dpci(hvm_irq_dpci);
> +    }
> +    spin_unlock(&d->event_lock);
> +    return 0;
> +}
> +
> +int arch_pci_release_devices(struct domain *d)
> +{
> +    return pci_clean_dpci_irqs(d);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
> index 2bc4aaf453..13ae7cc2a5 100644
> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -212,4 +212,6 @@ int msixtbl_pt_register(struct domain *, struct pirq *, uint64_t gtable);
>  void msixtbl_pt_unregister(struct domain *, struct pirq *);
>  void msixtbl_pt_cleanup(struct domain *d);
>  
> +int arch_pci_release_devices(struct domain *d);
> +
>  #endif /* __XEN_PCI_H__ */



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 01:01:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 01:01:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13253.33801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXZq1-0001Mn-Jv; Wed, 28 Oct 2020 01:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13253.33801; Wed, 28 Oct 2020 01:01:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXZq1-0001M5-GE; Wed, 28 Oct 2020 01:01:17 +0000
Received: by outflank-mailman (input) for mailman id 13253;
 Wed, 28 Oct 2020 01:01:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u6N/=ED=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXZpz-0000bY-Pq
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 01:01:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2829c48d-bc64-4070-9eed-5324c8b58412;
 Wed, 28 Oct 2020 01:01:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXZpw-0008Vn-34; Wed, 28 Oct 2020 01:01:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXZpv-0004G1-Q5; Wed, 28 Oct 2020 01:01:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXZpv-0006lg-Ow; Wed, 28 Oct 2020 01:01:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u6N/=ED=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kXZpz-0000bY-Pq
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 01:01:15 +0000
X-Inumbo-ID: 2829c48d-bc64-4070-9eed-5324c8b58412
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2829c48d-bc64-4070-9eed-5324c8b58412;
	Wed, 28 Oct 2020 01:01:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gcLSsSK6ICnKElinZIjgW16k58AtEfCJrOvFnbw+1I4=; b=TeRM9BaQrp2E7t/yCBJ0QGyXfI
	GHTaRB/Dqg6rboOA39q5EmpWDv3u++tEeVei93yKJ7j9cOoNM3UZmBQOO/aITLFS6emnRROIxVE87
	JXh0msiwYzs2Td9tvYzuy/E8Ud6pNIh8yqnrSk9sucQ183eog+xFkCt26RDKNF8AvgKc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXZpw-0008Vn-34; Wed, 28 Oct 2020 01:01:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXZpv-0004G1-Q5; Wed, 28 Oct 2020 01:01:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kXZpv-0006lg-Ow; Wed, 28 Oct 2020 01:01:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156258-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156258: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4525c8781ec0701ce824e8bd379ae1b129e26568
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 28 Oct 2020 01:01:11 +0000

flight 156258 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156258/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl     10 host-ping-check-xen fail in 156251 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm  8 xen-boot         fail in 156251 pass in 156258
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen        fail pass in 156251
 test-arm64-arm64-xl           8 xen-boot                   fail pass in 156251
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 156251

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2 11 leak-check/basis(11) fail in 156251 blocked in 152332
 test-arm64-arm64-xl-xsm 11 leak-check/basis(11) fail in 156251 blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4525c8781ec0701ce824e8bd379ae1b129e26568
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   88 days
Failing since        152366  2020-08-01 20:49:34 Z   87 days  149 attempts
Testing same since   156251  2020-10-27 01:41:28 Z    0 days    2 attempts

------------------------------------------------------------
3376 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 641103 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 01:54:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 01:54:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13275.33815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXafe-0006Nx-Lh; Wed, 28 Oct 2020 01:54:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13275.33815; Wed, 28 Oct 2020 01:54:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXafe-0006Nq-I9; Wed, 28 Oct 2020 01:54:38 +0000
Received: by outflank-mailman (input) for mailman id 13275;
 Wed, 28 Oct 2020 01:54:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aCqx=ED=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kXafc-0006Nl-O1
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 01:54:36 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d564ce3e-4fa9-44c0-909b-f2a0ca8ec0b4;
 Wed, 28 Oct 2020 01:54:35 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09S1sOxZ034410
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 27 Oct 2020 21:54:30 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09S1sN2l034409;
 Tue, 27 Oct 2020 18:54:23 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: d564ce3e-4fa9-44c0-909b-f2a0ca8ec0b4
Date: Tue, 27 Oct 2020 18:54:23 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201028015423.GA33407@mattapan.m5p.com>
References: <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com>
 <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
 <20201026160316.GA20589@mattapan.m5p.com>
 <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Oct 26, 2020 at 06:44:27PM +0000, Julien Grall wrote:
> On 26/10/2020 16:03, Elliott Mitchell wrote:
> > On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
> >> On 24/10/2020 06:35, Elliott Mitchell wrote:
> >>> ACPI has a distinct
> >>> means of specifying a limited DMA-width; the above fails, because it
> >>> assumes a *device-tree*.
> >>
> >> Do you know if it would be possible to infer from the ACPI static table
> >> the DMA-width?
> > 
> > Yes, and it is.  Due to not knowing much about ACPI tables I don't know
> > what the C code would look like though (problem is which documentation
> > should I be looking at first?).
> 
> What you provided below is an excerpt of the DSDT. AFAIK, DSDT content 
> is written in AML. So far the shortest implementation I have seen for 
> an AML parser is around 5000 lines (see [1]). It might be possible to 
> strip some of the code, although I think this will still probably be 
> too big for a single workaround.
> 
> What I meant by "static table" is a table that looks like a structure 
> and can be parsed in a few lines. If we can't find one containing the 
> DMA window, then the next best solution is to find a way to identify 
> the platform.
> 
> I don't know enough ACPI to know if this solution is possible. A good 
> starter would probably be the ACPI spec [2].

Okay, that is worse than I had thought (okay, ACPI is impressively
complex for something nominally firmware-level).

There are strings in the present Tianocore implementation for the
Raspberry Pi 4B which could be targeted.  Notably, the boot output
listing the tables includes "RPIFDN", "RPIFDN RPI" and "RPIFDN RPI4"
(I'm unsure how kosher these are, as this was neither implemented nor
blessed by the Raspberry Pi Foundation).

I strongly dislike this approach, as it soon turns the Xen project into
a database of hardware.  This is already occurring with
xen/arch/arm/platforms and I would love to do something about it.  On
that thought, how about utilizing Xen's command-line for this purpose?

Have a procedure whereby, during installation/updates, DMA limitation
information is retrieved from the running OS, and on the following boot
Xen will apply the appropriate setup.  By its nature, Domain 0 will have
the information needed; it just becomes an issue of how hard that is to
retrieve...


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Oct 28 02:04:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 02:04:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13280.33828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXaoz-0007hA-J8; Wed, 28 Oct 2020 02:04:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13280.33828; Wed, 28 Oct 2020 02:04:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXaoz-0007h3-FP; Wed, 28 Oct 2020 02:04:17 +0000
Received: by outflank-mailman (input) for mailman id 13280;
 Wed, 28 Oct 2020 02:04:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Dss=ED=nask.pl=michall@srs-us1.protection.inumbo.net>)
 id 1kXaoy-0007gy-T2
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 02:04:16 +0000
Received: from mx.nask.net.pl (unknown [195.187.55.89])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e70213dc-1873-4526-9f07-ac349ce562f7;
 Wed, 28 Oct 2020 02:04:14 +0000 (UTC)
X-Inumbo-ID: e70213dc-1873-4526-9f07-ac349ce562f7
X-Virus-Scanned: amavisd-new at 
Date: Wed, 28 Oct 2020 03:04:12 +0100 (CET)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: =?utf-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
Message-ID: <1747162107.4472424.1603850652584.JavaMail.zimbra@nask.pl>
In-Reply-To: <a80f05ac-bd18-563e-12f7-1a0f9f0d4f6b@suse.com>
References: <157653679.6164.1603407559737.JavaMail.zimbra@nask.pl> <a80f05ac-bd18-563e-12f7-1a0f9f0d4f6b@suse.com>
Subject: Re: BUG: credit=sched2 machine hang when using DRAKVUF
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [195.187.238.14]
X-Mailer: Zimbra 9.0.0_GA_3969 (ZimbraWebClient - GC86 (Win)/9.0.0_GA_3969)
Thread-Topic: credit=sched2 machine hang when using DRAKVUF
Thread-Index: BnNcHQLsrjmQr0BXl+REe3376mCu2Q==

----- Oct 23, 2020, at 6:47, Jürgen Groß jgross@suse.com wrote:

> On 23.10.20 00:59, Michał Leszczyński wrote:
>> Hello,
>>
>> when using DRAKVUF against a Windows 7 x64 DomU, the whole machine
>> hangs after a few minutes.
>>
>> The chance of a hang seems to be correlated with the number of PCPUs;
>> in this case we have 14 PCPUs and the hang is very easily
>> reproducible, while on other machines with 2-4 PCPUs it's very rare
>> (but still occurs sometimes). The issue is observed with the default
>> sched=credit2 and is no longer reproducible once sched=credit is set.
>
> Interesting. Can you please share some more information?
>
> Which Xen version are you using?
>
> Is there any additional information in the dom0 log which could be
> related to the hang (earlier WARN() splats, Oopses, Xen related
> messages, hardware failure messages, ...)?
>
> Can you please try to get backtraces of all cpus at the time of the
> hang?
>
> It would help to know which cpu was the target of the call of
> smp_call_function_single(), so a disassembly of that function would
> be needed to find that information from the dumped registers.
>
> I'm asking because I've seen a similar problem recently and I was
> suspecting a fifo event channel issue rather than the Xen scheduler,
> but your data suggests it could be the scheduler after all (if it is
> the same issue, of course).
>
>
> Juergen


As I've said before, I'm using RELEASE-4.14.0; this is a Dell PowerEdge
R640 with 14 PCPUs.

I have the following additional pieces of log (enclosed below). As you
can see, the issue is that particular vCPUs of Dom0 are not scheduled
for a long time, which really decreases the stability of the host
system.

Hope this helps somehow.



Best regards,
Michał Leszczyński
CERT Polska

---

[  313.730969] rcu: INFO: rcu_sched self-detected stall on CPU
[  313.731154] rcu:     5-....: (5249 ticks this GP) idle=c6e/1/0x4000000000000002 softirq=4625/4625 fqs=2624
[  313.731474] rcu:      (t=5250 jiffies g=10309 q=220)
[  338.968676] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  346.963959] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [xenconsoled:2747]
(XEN) *** Serial input to Xen (type 'CTRL-a' three times to switch input)
(XEN) sched_smt_power_savings: disabled
(XEN) NOW=384307105230
(XEN) Online Cpus: 0,2,4,6,8,10,12,14,16,18,20,22,24,26
(XEN) Cpupool 0:
(XEN) Cpus: 0,2,4,6,8,10,12,14,16,18,20,22,24,26
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Active queues: 2
(XEN)   default-weight     = 256
(XEN) Runqueue 0:
(XEN)   ncpus              = 7
(XEN)   cpus               = 0,2,4,6,8,10,12
(XEN)   max_weight         = 256
(XEN)   pick_bias          = 10
(XEN)   instload           = 3
(XEN)   aveload            = 805194 (~307%)
(XEN)   idlers: 00000000,00000000,00000000,00000000,00000000,00000000,00001145
(XEN)   tickled: 00000000,00000000,00000000,00000000,00000000,00000000,00000000
(XEN)   fully idle cores: 00000000,00000000,00000000,00000000,00000000,00000000,00001145
(XEN) Runqueue 1:
(XEN)   ncpus              = 7
(XEN)   cpus               = 14,16,18,20,22,24,26
(XEN)   max_weight         = 256
(XEN)   pick_bias          = 22
(XEN)   instload           = 0
(XEN)   aveload            = 51211 (~19%)
(XEN)   idlers: 00000000,00000000,00000000,00000000,00000000,00000000,05454000
(XEN)   tickled: 00000000,00000000,00000000,00000000,00000000,00000000,00000000
(XEN)   fully idle cores: 00000000,00000000,00000000,00000000,00000000,00000000,05454000
(XEN) Domain info:
(XEN)   Domain: 0 w 256 c 0 v 14
(XEN)     1: [0.0] flags=20 cpu=0 credit=-10000000 [w=256] load=4594 (~1%)
(XEN)     2: [0.1] flags=20 cpu=2 credit=9134904 [w=256] load=262144 (~100%)
(XEN)     3: [0.2] flags=22 cpu=4 credit=-10000000 [w=256] load=262144 (~100%)
(XEN)     4: [0.3] flags=20 cpu=6 credit=-10000000 [w=256] load=4299 (~1%)
(XEN)     5: [0.4] flags=20 cpu=8 credit=-10000000 [w=256] load=4537 (~1%)
(XEN)     6: [0.5] flags=22 cpu=10 credit=-10000000 [w=256] load=262144 (~100%)
(XEN)     7: [0.6] flags=20 cpu=12 credit=-10000000 [w=256] load=5158 (~1%)
(XEN)     8: [0.7] flags=20 cpu=14 credit=10053352 [w=256] load=5150 (~1%)
(XEN)     9: [0.8] flags=20 cpu=16 credit=10200526 [w=256] load=5155 (~1%)
(XEN)    10: [0.9] flags=20 cpu=18 credit=10207025 [w=256] load=4939 (~1%)
(XEN)    11: [0.10] flags=20 cpu=20 credit=10131199 [w=256] load=5753 (~2%)
(XEN)    12: [0.11] flags=20 cpu=22 credit=8103663 [w=256] load=22544 (~8%)
(XEN)    13: [0.12] flags=20 cpu=24 credit=10213151 [w=256] load=4905 (~1%)
(XEN)    14: [0.13] flags=20 cpu=26 credit=10235821 [w=256] load=4858 (~1%)
(XEN)   Domain: 29 w 256 c 0 v 4
(XEN)    15: [29.0] flags=0 cpu=16 credit=10500000 [w=256] load=0 (~0%)
(XEN)    16: [29.1] flags=0 cpu=26 credit=10500000 [w=256] load=0 (~0%)
(XEN)    17: [29.2] flags=0 cpu=10 credit=8294046 [w=256] load=0 (~0%)
(XEN)    18: [29.3] flags=0 cpu=12 credit=9009727 [w=256] load=0 (~0%)
(XEN)   Domain: 30 w 256 c 0 v 4
(XEN)    19: [30.0] flags=0 cpu=26 credit=10500000 [w=256] load=0 (~0%)
(XEN)    20: [30.1] flags=0 cpu=16 credit=10500000 [w=256] load=0 (~0%)
(XEN)    21: [30.2] flags=0 cpu=18 credit=10500000 [w=256] load=0 (~0%)
(XEN)    22: [30.3] flags=0 cpu=22 credit=10500000 [w=256] load=0 (~0%)
(XEN) Runqueue 0:
(XEN) CPU[00] runq=0, sibling={0}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[02] runq=0, sibling={2}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[04] runq=0, sibling={4}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN)   run: [0.2] flags=22 cpu=4 credit=-10000000 [w=256] load=262144 (~100%)
(XEN) CPU[06] runq=0, sibling={6}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[08] runq=0, sibling={8}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[10] runq=0, sibling={10}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN)   run: [0.5] flags=22 cpu=10 credit=-10000000 [w=256] load=262144 (~100%)
(XEN) CPU[12] runq=0, sibling={12}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) RUNQ:
(XEN)     0: [0.1] flags=20 cpu=2 credit=9134904 [w=256] load=262144 (~100%)
(XEN) Runqueue 1:
(XEN) CPU[14] runq=1, sibling={14}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[16] runq=1, sibling={16}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[18] runq=1, sibling={18}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[20] runq=1, sibling={20}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[22] runq=1, sibling={22}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[24] runq=1, sibling={24}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) CPU[26] runq=1, sibling={26}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
(XEN) RUNQ:
(XEN) CPUs info:
(XEN) CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
(XEN) CPU[02] current=d[IDLE]v2, curr=d[IDLE]v2, prev=NULL
(XEN) CPU[04] current=d0v2, curr=d0v2, prev=NULL
(XEN) CPU[06] current=d[IDLE]v6, curr=d[IDLE]v6, prev=NULL
(XEN) CPU[08] current=d[IDLE]v8, curr=d[IDLE]v8, prev=NULL
(XEN) CPU[10] current=d0v5, curr=d0v5, prev=NULL
(XEN) CPU[12] current=d[IDLE]v12, curr=d[IDLE]v12, prev=NULL
(XEN) CPU[14] current=d[IDLE]v14, curr=d[IDLE]v14, prev=NULL
(XEN) CPU[16] current=d[IDLE]v16, curr=d[IDLE]v16, prev=NULL
(XEN) CPU[18] current=d[IDLE]v18, curr=d[IDLE]v18, prev=NULL
(XEN) CPU[20] current=d[IDLE]v20, curr=d[IDLE]v20, prev=NULL
(XEN) CPU[22] current=d[IDLE]v22, curr=d[IDLE]v22, prev=NULL
(XEN) CPU[24] current=d[IDLE]v24, curr=d[IDLE]v24, prev=NULL
(XEN) CPU[26] current=d[IDLE]v26, curr=d[IDLE]v26, prev=NULL
[  366.966158] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  374.961437] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [xenconsoled:2747]
[  376.737277] rcu: INFO: rcu_sched self-detected stall on CPU
[  376.737457] rcu:     5-....: (21002 ticks this GP) idle=c6e/1/0x4000000000000002 softirq=4625/4630 fqs=10491
[  376.737773] rcu:      (t=21003 jiffies g=10309 q=4631)
[  402.958905] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [xenconsoled:2747]
[  402.962904] watchdog: BUG: soft lockup - CPU#5 stuck for 23s! [sshd:5991]
[  413.434099] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 2-... 5-... } 5657 jiffies s: 57 root: 0x24/.
[  413.434470] rcu: blocking rcu_node structures:
[  430.956362] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  430.960361] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  439.743562] rcu: INFO: rcu_sched self-detected stall on CPU
[  439.743741] rcu:     5-....: (36755 ticks this GP) idle=c6e/1/0x4000000000000002 softirq=4625/4630 fqs=18363
[  439.744060] rcu:      (t=36756 jiffies g=10309 q=8837)
[  458.953810] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  466.957079] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  476.916310] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 2-... 5-... } 21529 jiffies s: 57 root: 0x24/.
[  476.916712] rcu: blocking rcu_node structures:
[  486.951250] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  491.251030] INFO: task sshd:5993 blocked for more than 120 seconds.
[  491.251249]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  491.251535] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  491.251891] INFO: task sshd:5995 blocked for more than 120 seconds.
[  491.252078]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  491.252321] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  491.252657] INFO: task qemu-system-i38:6011 blocked for more than 120 seconds.
[  491.252924]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  491.253167] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  491.253512] INFO: task sshd:6056 blocked for more than 120 seconds.
[  491.253703]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  491.253947] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  494.954517] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  502.749802] rcu: INFO: rcu_sched self-detected stall on CPU
[  502.749981] rcu:     5-....: (52508 ticks this GP) idle=c6e/1/0x4000000000000002 softirq=4625/4630 fqs=26225
[  502.750307] rcu:      (t=52509 jiffies g=10309 q=12321)
[  514.948683] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  526.951580] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  540.398469] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 2-... 5-... } 37401 jiffies s: 57 root: 0x24/.
[  540.398843] rcu: blocking rcu_node structures:
[  542.946109] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  554.949003] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  565.756021] rcu: INFO: rcu_sched self-detected stall on CPU
[  565.756203] rcu:     5-....: (68261 ticks this GP) idle=c6e/1/0x4000000000000002 softirq=4625/4630 fqs=34096
[  565.756521] rcu:      (t=68262 jiffies g=10309 q=14838)
[  570.943529] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  590.945683] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  598.940945] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  603.880631] rcu: INFO: rcu_sched detected expedited stalls on CPUs/tasks: { 2-... 5-... } 53273 jiffies s: 57 root: 0x24/.
[  603.881006] rcu: blocking rcu_node structures:
[  612.071898] INFO: task ntpd:2726 blocked for more than 120 seconds.
[  612.072120]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  612.072409] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  612.072806] INFO: task sshd:5993 blocked for more than 120 seconds.
[  612.073016]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  612.073258] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  612.073605] INFO: task sshd:5995 blocked for more than 120 seconds.
[  612.085380]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  612.097439] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  612.109691] INFO: task qemu-system-i38:6011 blocked for more than 120 seconds.
[  612.121877]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  612.134123] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  612.146501] INFO: task sshd:6056 blocked for more than 120 seconds.
[  612.158836]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  612.171134] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  612.183643] INFO: task qemu-system-i38:6105 blocked for more than 120 seconds.
[  612.196174]       Tainted: P           OEL    4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
[  612.208899] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  618.943102] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
[  626.938354] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  628.762185] rcu: INFO: rcu_sched self-detected stall on CPU
[  628.774574] rcu:     5-....: (84011 ticks this GP) idle=c6e/1/0x4000000000000002 softirq=4625/4630 fqs=41964
[  628.787208] rcu:      (t=84015 jiffies g=10309 q=21278)
[  654.935761] watchdog: BUG: soft lockup - CPU#2 stuck for 22s! [xenconsoled:2747]
[  654.939869] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 05:13:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 05:13:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13014.33842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXdlN-0007BP-LZ; Wed, 28 Oct 2020 05:12:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13014.33842; Wed, 28 Oct 2020 05:12:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXdlN-0007BI-IG; Wed, 28 Oct 2020 05:12:45 +0000
Received: by outflank-mailman (input) for mailman id 13014;
 Tue, 27 Oct 2020 17:51:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fYxI=EC=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kXT89-0000XI-P2
 for xen-devel@lists.xenproject.org; Tue, 27 Oct 2020 17:51:33 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bb59a78-17b7-4cac-a2d4-47d9888b874c;
 Tue, 27 Oct 2020 17:51:32 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09RHpF0L032137
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Tue, 27 Oct 2020 13:51:20 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09RHpEQr032136;
 Tue, 27 Oct 2020 10:51:14 -0700 (PDT) (envelope-from ehem)
X-Inumbo-ID: 6bb59a78-17b7-4cac-a2d4-47d9888b874c
Date: Tue, 27 Oct 2020 10:51:14 -0700
From: Elliott Mitchell <ehem+undef@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
        xen-devel@lists.xenproject.org, konrad.wilk@oracle.com, hch@lst.de
Subject: Re: [PATCH] fix swiotlb panic on Xen
Message-ID: <20201027175114.GA32110@mattapan.m5p.com>
References: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010261653320.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Mon, Oct 26, 2020 at 05:02:14PM -0700, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> kernel/dma/swiotlb.c:swiotlb_init gets called first and tries to
> allocate a buffer for the swiotlb. It does so by calling
> 
>   memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
> 
> If the allocation fails, no_iotlb_memory is set.
> 
> 
> Later during initialization swiotlb-xen comes in
> (drivers/xen/swiotlb-xen.c:xen_swiotlb_init) and given that io_tlb_start
> is != 0, it thinks the memory is ready to use when actually it is not.
> 
> When the swiotlb is actually needed, swiotlb_tbl_map_single gets called
> and since no_iotlb_memory is set the kernel panics.
> 
> Instead, if swiotlb-xen.c:xen_swiotlb_init knew the swiotlb hadn't been
> initialized, it would do the initialization itself, which might still
> succeed.
> 
> 
> Fix the panic by setting io_tlb_start to 0 on swiotlb initialization
> failure, and also by setting no_iotlb_memory to false on swiotlb
> initialization success.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index c19379fabd20..9924214df60a 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -231,6 +231,7 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
>  	}
>  	io_tlb_index = 0;
> +	no_iotlb_memory = false;
>  
>  	if (verbose)
>  		swiotlb_print_info();
> @@ -262,9 +263,11 @@ swiotlb_init(int verbose)
>  	if (vstart && !swiotlb_init_with_tbl(vstart, io_tlb_nslabs, verbose))
>  		return;
>  
> -	if (io_tlb_start)
> +	if (io_tlb_start) {
>  		memblock_free_early(io_tlb_start,
>  				    PAGE_ALIGN(io_tlb_nslabs << IO_TLB_SHIFT));
> +		io_tlb_start = 0;
> +	}
>  	pr_warn("Cannot allocate buffer");
>  	no_iotlb_memory = true;
>  }
> @@ -362,6 +365,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>  		io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;
>  	}
>  	io_tlb_index = 0;
> +	no_iotlb_memory = false;
>  
>  	swiotlb_print_info();
>  

As the person who first found this and then confirmed this fixes a bug:

Tested-by: Elliott Mitchell <ehem+xen@m5p.com>


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Wed Oct 28 07:18:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 07:18:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13313.33855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXfiw-0000g3-Cy; Wed, 28 Oct 2020 07:18:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13313.33855; Wed, 28 Oct 2020 07:18:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXfiw-0000fw-9S; Wed, 28 Oct 2020 07:18:22 +0000
Received: by outflank-mailman (input) for mailman id 13313;
 Wed, 28 Oct 2020 07:18:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXfiu-0000fq-IK
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 07:18:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 877d40d1-cfd5-4246-aec3-c16ea735c606;
 Wed, 28 Oct 2020 07:18:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 661EEAAF1;
 Wed, 28 Oct 2020 07:18:18 +0000 (UTC)
X-Inumbo-ID: 877d40d1-cfd5-4246-aec3-c16ea735c606
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603869498;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=m0AFQQztnsrrudDOoXvM1TAP6LLQFo9xY91hTb49O3k=;
	b=mTiNhDiuvuh/b/qZFLwC9IsKLkNcb0FriwalSddy5sSar89XzkqDgMZGhMmtcQWH/+AJyL
	9Nog2WQ4YnFkFM+Am4SL2pLyd4Ec/VenhM0onq4SJC+EpkPzMF5BGSGGnEybobh+m0JmBT
	CWpt5J0CgYlgOK9T97h4CE31acvGX/E=
Subject: Re: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
To: Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <rahul.singh@arm.com>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <95b72e09-246b-dcbe-6254-86b3e25081c6@suse.com>
Date: Wed, 28 Oct 2020 08:18:14 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 28.10.2020 00:32, Stefano Stabellini wrote:
> On Mon, 26 Oct 2020, Rahul Singh wrote:
>> --- a/xen/drivers/char/Kconfig
>> +++ b/xen/drivers/char/Kconfig
>> @@ -4,6 +4,13 @@ config HAS_NS16550
>>  	help
>>  	  This selects the 16550-series UART support. For most systems, say Y.
>>  
>> +config HAS_NS16550_PCI
>> +	bool "NS16550 UART PCI support" if X86
>> +	default y
>> +	depends on X86 && HAS_NS16550 && HAS_PCI
>> +	help
>> +	  This selects the 16550-series UART PCI support. For most systems, say Y.
> 
> I think that this should be a silent option:
> if HAS_NS16550 && HAS_PCI && X86 -> automatically enable
> otherwise -> automatically disable
> 
> No need to show it to the user.

I agree in principle, but I don't see why an X86 dependency gets
added here. HAS_PCI really should be all that's needed.

Jan
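
For reference, a silent (promptless) Kconfig option of the kind being discussed could look roughly like the fragment below. With no prompt string the option is never shown to the user and simply follows its dependencies; the X86 dependency is exactly the open question above, so it is left out here as Jan suggests:

```
config HAS_NS16550_PCI
	bool
	default y
	depends on HAS_NS16550 && HAS_PCI
```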


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 07:23:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 07:23:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13317.33867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXfnR-0001Wc-VH; Wed, 28 Oct 2020 07:23:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13317.33867; Wed, 28 Oct 2020 07:23:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXfnR-0001WV-Rn; Wed, 28 Oct 2020 07:23:01 +0000
Received: by outflank-mailman (input) for mailman id 13317;
 Wed, 28 Oct 2020 07:23:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXfnQ-0001WO-79
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 07:23:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd554b9d-b447-4104-afac-ff0f72e5bc4d;
 Wed, 28 Oct 2020 07:22:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A3D87AC91;
 Wed, 28 Oct 2020 07:22:58 +0000 (UTC)
X-Inumbo-ID: cd554b9d-b447-4104-afac-ff0f72e5bc4d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603869778;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rk5iwzZRpz+eecA97DrbPMASTKj+GayBmzsxxFyB5iU=;
	b=ZCq5phEy8zzPDWKtdM90WEudGRmICWe8wGrrFXYeQdWBhJMNPldPoHW8ggsqPt7NbyJ6OV
	g9RcWo1ww/Z1IZyBkc+6DUD5JLMGYdX6OoveK2f37X52ldXE/bQaGbo3l/RvVnAyqkmHqx
	+IV86z6I3JlGYfyNgOalHK2XspqUOVA=
Subject: Re: [PATCH v1 2/4] xen/pci: Introduce new CONFIG_HAS_PCI_ATS flag for
 PCI ATS functionality.
To: Stefano Stabellini <sstabellini@kernel.org>,
 Rahul Singh <rahul.singh@arm.com>
Cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Paul Durrant <paul@xen.org>, Kevin Tian <kevin.tian@intel.com>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <1bb971bed3f5a56b0691fbcfd0346ae721ba049f.1603731279.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2010271640280.12247@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f2930a1d-ec05-6c24-c012-76a307f97deb@suse.com>
Date: Wed, 28 Oct 2020 08:22:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010271640280.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 28.10.2020 00:55, Stefano Stabellini wrote:
> On Mon, 26 Oct 2020, Rahul Singh wrote:
>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -20,6 +20,7 @@ config X86
>>  	select HAS_NS16550
>>  	select HAS_PASSTHROUGH
>>  	select HAS_PCI
>> +	select HAS_PCI_ATS

From an abstract perspective - is there a way to have a PCI
subsystem where it is impossible for an ATS-capable device to
be present? I think this shouldn't be a HAS_* option, but an
ordinary user-selectable one. We may then even want to consider
having it off by default on x86 (the command line option
defaults to off as well, after all).

>> @@ -48,5 +50,27 @@ static inline int pci_ats_device(int seg, int bus, int devfn)
>>      return pci_find_ext_capability(seg, bus, devfn, PCI_EXT_CAP_ID_ATS);
>>  }
>>  
>> +#else
>> +
>> +static inline int enable_ats_device(struct pci_dev *pdev,
>> +                                    struct list_head *ats_list)
>> +{
>> +    return -ENODEV;
>> +}
>> +
>> +static inline void disable_ats_device(struct pci_dev *pdev) { }
>> +
>> +static inline int pci_ats_enabled(int seg, int bus, int devfn)
>> +{
>> +    return -ENODEV;
>> +}
> 
> pci_ats_enabled returns 0 when ATS is not enabled
> 
> 
>> +static inline int pci_ats_device(int seg, int bus, int devfn)
>> +{
>> +    return -ENODEV;
> 
> also returns 0 when ATS is not enabled

And really both ought to be returning bool.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 07:25:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 07:25:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13326.33879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXfpo-0001g7-Cx; Wed, 28 Oct 2020 07:25:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13326.33879; Wed, 28 Oct 2020 07:25:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXfpo-0001g0-A5; Wed, 28 Oct 2020 07:25:28 +0000
Received: by outflank-mailman (input) for mailman id 13326;
 Wed, 28 Oct 2020 07:25:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXfpn-0001fv-Ap
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 07:25:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c0f4f07-aeb0-452c-a061-10b968526896;
 Wed, 28 Oct 2020 07:25:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D274FB1B5;
 Wed, 28 Oct 2020 07:25:25 +0000 (UTC)
X-Inumbo-ID: 2c0f4f07-aeb0-452c-a061-10b968526896
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603869925;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yJoszYaqabhvfmhXSdo1KsONfLselggym6Z5LR8lf6w=;
	b=lx6EeuwB6MjXRUplt1WETSTrAvpUM9YH09YAKdZ6XHh8dfs7kPAaJPGm/UNsGILFp8JtHD
	sB4nrsw1tvFZqgpbpaS9ZNKBYCTp3V0My4pqKn97Q/+KPfmikz6U85kjRmgP3t/Co+lMYf
	Vm9wxe9+9KOAtDkd9sRUnpRGfUMMWjw=
Subject: Re: [PATCH v1 2/4] xen/pci: Introduce new CONFIG_HAS_PCI_ATS flag for
 PCI ATS functionality.
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
References: <cover.1603731279.git.rahul.singh@arm.com>
 <1bb971bed3f5a56b0691fbcfd0346ae721ba049f.1603731279.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2cef4162-80bb-8689-6109-7fb964a3d24a@suse.com>
Date: Wed, 28 Oct 2020 08:25:26 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1bb971bed3f5a56b0691fbcfd0346ae721ba049f.1603731279.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 18:17, Rahul Singh wrote:
> @@ -48,5 +50,27 @@ static inline int pci_ats_device(int seg, int bus, int devfn)
>      return pci_find_ext_capability(seg, bus, devfn, PCI_EXT_CAP_ID_ATS);
>  }
>  
> +#else
> +
> +static inline int enable_ats_device(struct pci_dev *pdev,
> +                                    struct list_head *ats_list)
> +{
> +    return -ENODEV;

In addition to what Stefano has said, I don't think this is an
appropriate return value here either - -EOPNOTSUPP would seem
quite a bit closer to reality.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 07:45:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 07:45:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13331.33891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXg9Y-0003RC-4e; Wed, 28 Oct 2020 07:45:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13331.33891; Wed, 28 Oct 2020 07:45:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXg9Y-0003R5-0h; Wed, 28 Oct 2020 07:45:52 +0000
Received: by outflank-mailman (input) for mailman id 13331;
 Wed, 28 Oct 2020 07:45:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXg9W-0003R0-S9
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 07:45:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3caef230-27a6-405f-adbe-60eb8dcc4987;
 Wed, 28 Oct 2020 07:45:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A059AACDB;
 Wed, 28 Oct 2020 07:45:48 +0000 (UTC)
X-Inumbo-ID: 3caef230-27a6-405f-adbe-60eb8dcc4987
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603871148;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+Wpz5kToeasQpm0ZCbD9yjn9dEBHaiD7pOorXoT+ulA=;
	b=QQq3/IbjlZ+dN8WfbF4jvMc6EmLI4P50iwmcH20Gdm7Zc9fwJ/brdhO1GGUPpfoPxSgOWr
	jZ3WtG+UGm3uXUNI+21KZtnQpXb7zsBQIMuoiiOUUGbDCcHkWdmUFm1Y6H515OYnRzoCkC
	T/92iYdB1x8QBYScT6NOWriNKBdG10g=
Subject: Re: BUG: credit=sched2 machine hang when using DRAKVUF
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
Cc: xen-devel@lists.xenproject.org, =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?=
 <jgross@suse.com>
References: <157653679.6164.1603407559737.JavaMail.zimbra@nask.pl>
 <a80f05ac-bd18-563e-12f7-1a0f9f0d4f6b@suse.com>
 <1747162107.4472424.1603850652584.JavaMail.zimbra@nask.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <66f4b628-970c-9990-118a-572f971d6ed2@suse.com>
Date: Wed, 28 Oct 2020 08:45:49 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1747162107.4472424.1603850652584.JavaMail.zimbra@nask.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 28.10.2020 03:04, Michał Leszczyński wrote:
> As I've said before, I'm using RELEASE-4.14.0, this is DELL PowerEdge R640 with 14 PCPUs.

I.e. you haven't tried the tip of the 4.14 stable branch?

> I have the following additional pieces of log (enclosed below). As you could see, the issue is about particular vCPUs of Dom0 not being scheduled for a long time, which really decreases stability of the host system.

I have to admit that the log makes me wonder whether this isn't a
Dom0 internal issue:

> [  338.968676] watchdog: BUG: soft lockup - CPU#5 stuck for 22s! [sshd:5991]
> [  346.963959] watchdog: BUG: soft lockup - CPU#2 stuck for 23s! [xenconsoled:2747]

For these two vCPU-s we see ...

> (XEN) Domain info:
> (XEN)   Domain: 0 w 256 c 0 v 14
> (XEN)     1: [0.0] flags=20 cpu=0 credit=-10000000 [w=256] load=4594 (~1%)
> (XEN)     2: [0.1] flags=20 cpu=2 credit=9134904 [w=256] load=262144 (~100%)
> (XEN)     3: [0.2] flags=22 cpu=4 credit=-10000000 [w=256] load=262144 (~100%)
> (XEN)     4: [0.3] flags=20 cpu=6 credit=-10000000 [w=256] load=4299 (~1%)
> (XEN)     5: [0.4] flags=20 cpu=8 credit=-10000000 [w=256] load=4537 (~1%)
> (XEN)     6: [0.5] flags=22 cpu=10 credit=-10000000 [w=256] load=262144 (~100%)

... that both are fully loaded and ...

> (XEN) Runqueue 0:
> (XEN) CPU[00] runq=0, sibling={0}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) CPU[02] runq=0, sibling={2}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) CPU[04] runq=0, sibling={4}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN)   run: [0.2] flags=22 cpu=4 credit=-10000000 [w=256] load=262144 (~100%)
> (XEN) CPU[06] runq=0, sibling={6}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) CPU[08] runq=0, sibling={8}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) CPU[10] runq=0, sibling={10}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN)   run: [0.5] flags=22 cpu=10 credit=-10000000 [w=256] load=262144 (~100%)

... they're actively running, confirmed another time ...

> (XEN) RUNQ:
> (XEN) CPUs info:
> (XEN) CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
> (XEN) CPU[02] current=d[IDLE]v2, curr=d[IDLE]v2, prev=NULL
> (XEN) CPU[04] current=d0v2, curr=d0v2, prev=NULL
> (XEN) CPU[06] current=d[IDLE]v6, curr=d[IDLE]v6, prev=NULL
> (XEN) CPU[08] current=d[IDLE]v8, curr=d[IDLE]v8, prev=NULL
> (XEN) CPU[10] current=d0v5, curr=d0v5, prev=NULL

... here. Hence an additional question is what exactly they're doing.
'0' and possibly 'd' debug key output may shed some light on it, but
to interpret that output the exact kernel and hypervisor binaries
would need to be known / available.

Furthermore to tell dead lock from live lock, more than one invocation
of any of the involved debug keys is often helpful.

Jan
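
The repeated-invocation idea can be sketched as a shell fragment. On a real host the two dumps would come from the debug keys Jan names, captured via `xl debug-keys` followed by `xl dmesg`; the synthetic files below only stand in for those dumps so the comparison step is concrete:

```shell
# Sketch: tell a dead lock (state identical across dumps) from a live
# lock (state still moving) by capturing the same debug-key output twice,
# a few seconds apart, and diffing the results.
# Real-host version:  xl debug-keys r; xl dmesg > dump1.txt
printf 'run: [0.2] credit=-10000000\n' > dump1.txt
sleep 1    # on a real host, wait long enough for state to move
printf 'run: [0.2] credit=-9000000\n' > dump2.txt
if diff -q dump1.txt dump2.txt >/dev/null; then
    echo "state unchanged: likely dead lock"
else
    echo "state changed: likely live lock"
fi
```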


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 07:54:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 07:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13336.33902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgI3-0004Mm-0T; Wed, 28 Oct 2020 07:54:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13336.33902; Wed, 28 Oct 2020 07:54:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgI2-0004Mf-Td; Wed, 28 Oct 2020 07:54:38 +0000
Received: by outflank-mailman (input) for mailman id 13336;
 Wed, 28 Oct 2020 07:54:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXgI1-0004Ma-3Y
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 07:54:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c85aec63-8c13-41b3-97cb-d387e01ad98b;
 Wed, 28 Oct 2020 07:54:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C0FC1ADEC;
 Wed, 28 Oct 2020 07:54:34 +0000 (UTC)
X-Inumbo-ID: c85aec63-8c13-41b3-97cb-d387e01ad98b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603871674;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=KpjxqE25mEO15/L9CJ6X7OaXXiqO5bjF9MbVgEdP6Es=;
	b=vHXw7wBbeJbpOCSdrJZt25UUz3pf3mGv5md+4YnD0AGjsfElDn7mZ+h+Zvrq7cdFXY2ezf
	2G03Y17j5hbUF0xRv78X5RF2vyV65xRcNQSOdF/eb/nthQITApG0nbjFKL5BpJP3nYahf2
	6LqxylqEyYk2egjlXUdGCb5XO+y7Xx8=
Subject: Re: ARM/PCI passthrough: libxl_pci, sysfs and pciback questions
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, "wl@xen.org" <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Artem Mygaiev <Artem_Mygaiev@epam.com>,
 Oleksandr Tyshchenko <Oleksandr_Tyshchenko@epam.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Rahul Singh
 <Rahul.Singh@arm.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <ad25a5ba-f44c-4e88-f3b0-e0baa5efc5f6@epam.com>
 <20201027125104.axv26kdqljqsvufn@Air-de-Roger>
 <ac342c70-8fbb-023d-00b3-4a1bc1d389a7@epam.com>
 <7f98534d-39fd-cbcb-13dd-bbbd994251f0@suse.com>
 <1e8b43e5-7d44-a061-f60a-00f75eb19b8b@epam.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <727b84e8-5018-6c5b-552c-e4d204daf6c2@suse.com>
Date: Wed, 28 Oct 2020 08:54:35 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1e8b43e5-7d44-a061-f60a-00f75eb19b8b@epam.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.10.2020 18:45, Oleksandr Andrushchenko wrote:
> On 10/27/20 7:18 PM, Jan Beulich wrote:
>> On 27.10.2020 16:52, Oleksandr Andrushchenko wrote:
>>> On 10/27/20 2:55 PM, Roger Pau Monné wrote:
>>>> On Tue, Oct 27, 2020 at 09:59:05AM +0000, Oleksandr Andrushchenko wrote:
>>>>>     5. An alternative route for 3-4 could be to store that data in XenStore, e.g.
>>>>>        MMIOs, IRQ, bind sysfs path etc. This would require more code on Xen side to
>>>>>        access XenStore and won’t work if MMIOs/IRQs are passed via device tree/ACPI
>>>>>        tables by the bootloaders.
>>>> As above, I think I need more context to understand what and why you
>>>> need to save such information.
>>> Well, in pciback's absence we lose a "database" which holds all the knowledge
>>> about which devices are assigned, bound, etc.
>> What hasn't become clear to me (sorry if I've overlooked it) is
>> why some form of pciback is not an option on Arm.
> Yes, it is probably possible to run pciback even without running
> pcifront instances in guests, and to use only that functionality
> which is needed for the toolstack. We could even have it as is, without
> modifications: given that pcifront won't run, the parts of pciback
> related to PCI config space, MSI etc. simply won't be used, while still
> being present in the driver. We can try that (pciback is x86-only
> in the kernel).
> 
>> Where it would
>> need to run in your split hardware-domain / Dom0 setup (if I got
>> that right in the first place) would be a secondary question.
> 
> This actually becomes a problem if we think about hwdom != Dom0:
> Dom0's toolstack wants to read the PCI bus sysfs, and it also wants to access
> pciback's sysfs entries. So, for Dom0's toolstack to read sysfs in this scenario,
> we need a bridge between Dom0 and the hwdom to access both the PCI
> subsystem and pciback's sysfs. This could be implemented as a back-front pair
> with a ring and event channel, as PV drivers do. This approach will of course
> require the toolstack to work in two modes: local sysfs/pciback and remote.
> In the remote access model the toolstack will need to create a connection to
> the hwdom each time it runs and needs sysfs data, which should be acceptable.

That's the price to pay for disaggregation, I think. So yes to the
outline in general, but I'd like such an abstraction to not talk in
terms of "sysfs" or in fact anything that's OS specific on either
side. Whether it indeed needs a full new pair of front/back drivers
is a different question.
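Such an OS-agnostic query interface, sketched purely for illustration (none of these names exist in libxl, and the transport object is a stand-in for the proposed ring/event-channel pair), might look like:

```python
from abc import ABC, abstractmethod

class PciInfoBackend(ABC):
    """Hypothetical toolstack-side abstraction: callers ask for the
    information they need, never for "sysfs" or anything else that is
    OS specific on either side."""

    @abstractmethod
    def assignable_devices(self):
        """Return BDF strings of devices that may be passed through."""

class LocalPciInfo(PciInfoBackend):
    """Toolstack and PCI subsystem in the same domain: a real
    implementation would consult the local OS's device database."""
    def __init__(self, devices):
        self._devices = devices          # stand-in for a local lookup
    def assignable_devices(self):
        return list(self._devices)

class RemotePciInfo(PciInfoBackend):
    """hwdom != Dom0: a real implementation would forward each query to
    the hardware domain over a shared ring, signalled by an event
    channel, much as PV driver pairs do."""
    def __init__(self, transport):
        self._transport = transport      # hypothetical connection object
    def assignable_devices(self):
        return self._transport.request("assignable_devices")
```

Always going through the remote implementation, even when running locally, would then indeed leave the toolstack with a single code path.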

> It may also be possible to have the toolstack always use the remote model, even
> when it runs locally, which would let the toolstack's code support a single model
> for all the use-cases.

That's certainly one possible way of doing the necessary abstraction,
I agree.

> (I never thought about whether it is possible to run both backend and frontend in the same VM, though.)

Why would it not be? Other back/front pairs certainly can.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 08:06:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 08:06:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13348.33915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgTO-0005tq-Dd; Wed, 28 Oct 2020 08:06:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13348.33915; Wed, 28 Oct 2020 08:06:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgTO-0005tj-Ae; Wed, 28 Oct 2020 08:06:22 +0000
Received: by outflank-mailman (input) for mailman id 13348;
 Wed, 28 Oct 2020 08:06:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXgTN-0005te-4K
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:06:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f40e969-a113-4afe-a8db-7a296aecc6d8;
 Wed, 28 Oct 2020 08:06:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0F1D5AC8B;
 Wed, 28 Oct 2020 08:06:19 +0000 (UTC)
X-Inumbo-ID: 3f40e969-a113-4afe-a8db-7a296aecc6d8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603872379;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HMtT7KXdpk1APyRvlE2ER2cCchJyToR0mxmgX9rkDSY=;
	b=BUFi2akJ5ygMmX4CLyOao/LpEZ1BnlF+k3FP3mig1CBkPSY21mqCwInfR07ZeGcHMSAqtu
	Z5sG3WJtNgGhevAbM6H9s23haU0rKPPkbpEy0+rW9ScBBFwkgvRs/BAqo3O40nvn/c0HR+
	R9nORU1ajyxBuyO8LEPVNhIG7wwQYMk=
Subject: Re: [PATCH] x86/svm: Merge hsa and host_vmcb to reduce memory
 overhead
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201026135043.15560-1-andrew.cooper3@citrix.com>
 <ec123127-786a-02e9-07dd-351f30b6a5b3@suse.com>
 <6acb623c-27bd-2d2d-c7c3-52c9ff1a1bf5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <731eceae-ab67-8176-2576-549426780316@suse.com>
Date: Wed, 28 Oct 2020 09:06:19 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <6acb623c-27bd-2d2d-c7c3-52c9ff1a1bf5@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.10.2020 20:30, Andrew Cooper wrote:
> On 27/10/2020 15:24, Jan Beulich wrote:
>> On 26.10.2020 14:50, Andrew Cooper wrote:
>>> The format of the Host State Area is, and has always been, a VMCB.  It is
>>> explicitly safe to put the host VMSAVE data in.
>> Nit: The PM calls this "Host Save Area" or "Host State Save Area"
>> afaics.
>>
>> I recall us discussing this option in the past, and not right
>> away pursuing it because of it not having been explicit at the
>> time. What place in the doc has made this explicit?
> 
> Sadly still not yet, but the pestering has happened.
> 
>> The main
>> uncertainty (without any explicit statement) on my part would be
>> the risk of VMSAVE writing (for performance reasons) e.g. full
>> cache lines, i.e. more than exactly the bits holding the state
>> to be saved, without first bringing old contents in from memory.
> 
> SEV-ES now requires the hypervisor to program desired exit state in
> the VMCB, due to differences in how the VMRUN instruction works.  See
> Vol3 15.35.8.  (And yes - this does contradict the earlier statement
> that the hypervisor must not write directly into the host state area.)
> 
> I have had it confirmed by AMD that it is safe to use in this fashion,
> but if you want more evidence, KVM has had this behaviour on AMD for its
> entire lifetime.

Ah, interesting.

>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -72,11 +72,10 @@ static void svm_update_guest_efer(struct vcpu *);
>>>  static struct hvm_function_table svm_function_table;
>>>  
>>>  /*
>>> - * Physical addresses of the Host State Area (for hardware) and vmcb (for Xen)
>>> - * which contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state when in
>>> - * guest vcpu context.
>>> + * Host State Area.  This area is used by the processor in non-root mode, and
>>> + * contains Xen's fs/gs/tr/ldtr and GSBASE/STAR/SYSENTER state required to
>>> + * leave guest vcpu context.
>>>   */
>>> -static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, hsa);
>>>  static DEFINE_PER_CPU_READ_MOSTLY(paddr_t, host_vmcb);
>> The comment now applies to host_vmcb, so making the dual purpose
>> more obvious would imo be helpful.
> 
> But it isn't dual purpose.  It *is* host state, both the half which
> VMRUN deals with, and the half which VMLOAD/SAVE deals with (separately,
> to optimise VMRUN).

It is host state, yes, and if you spelled it "host state area"
(and perhaps even omitted "area") it would look less dual-purpose,
because the use of capitals (to me at least) suggests you refer to the
HSA (as used e.g. in the MSR name). The dual purpose really is
that (a) the address gets put in the respective MSR and (b) the
thing also gets directly accessed as a VMCB, just like the comment
originally said. Yes, in the end it's cumulative host state.

Is there any reason why you can't mostly keep the original comment,
merely starting with singular "Physical address of ..."? (Of course
I'd then still prefer "Host State Area" to be changed to either of
the two terms actually used by the PM.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 08:17:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 08:17:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13356.33930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgeF-0006sn-NC; Wed, 28 Oct 2020 08:17:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13356.33930; Wed, 28 Oct 2020 08:17:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgeF-0006sg-Jq; Wed, 28 Oct 2020 08:17:35 +0000
Received: by outflank-mailman (input) for mailman id 13356;
 Wed, 28 Oct 2020 08:17:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u6N/=ED=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXgeE-0006s5-2y
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:17:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0615ba2-1103-4d64-9103-891c8fb9940d;
 Wed, 28 Oct 2020 08:17:25 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXge4-00041d-Oh; Wed, 28 Oct 2020 08:17:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXge4-0000As-Ez; Wed, 28 Oct 2020 08:17:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXge4-0004S1-E2; Wed, 28 Oct 2020 08:17:24 +0000
X-Inumbo-ID: c0615ba2-1103-4d64-9103-891c8fb9940d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nMt8gWczwQY9W3TKxinp0YpzNgJaPIws1nX3AGbyYL0=; b=bAwG3KZa5sbh4WkF4Wxdz+6Fx4
	LSRJUFQwpKp7r6kccNi9/a/9UO8kicC+RFNPUgRnNeeq0VkEmGubpNpKB/kYCYfkDsHIeta0nQlrO
	PvqNWqO1gTHa5y6w6aVEw6GD4bSgcfV4QyekxgE0HRNl9mKPkZ82xNXyHptFJfDH9II8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156261-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 156261: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=78d903e95efc5b0166b393d289a687c64016e8ef
X-Osstest-Versions-That:
    xen=71da63bbb83af8c8c537f3731dda7dc2d2fd31ac
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 28 Oct 2020 08:17:24 +0000

flight 156261 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156261/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-thunderx  3 hosts-allocate               fail  like 156033
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156033
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156033
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156033
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156033
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156033
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156033
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156033
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156033
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156033
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156033
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156033
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156033
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156033
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  78d903e95efc5b0166b393d289a687c64016e8ef
baseline version:
 xen                  71da63bbb83af8c8c537f3731dda7dc2d2fd31ac

Last test of basis   156033  2020-10-20 13:35:44 Z    7 days
Testing same since   156261  2020-10-27 18:36:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   71da63bbb8..78d903e95e  78d903e95efc5b0166b393d289a687c64016e8ef -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 08:21:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 08:21:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13360.33941 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXghr-0007jG-7R; Wed, 28 Oct 2020 08:21:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13360.33941; Wed, 28 Oct 2020 08:21:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXghr-0007j9-46; Wed, 28 Oct 2020 08:21:19 +0000
Received: by outflank-mailman (input) for mailman id 13360;
 Wed, 28 Oct 2020 08:21:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXghp-0007j4-Mz
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:21:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c991c82c-92c3-464e-b5c1-54adc5770001;
 Wed, 28 Oct 2020 08:21:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A3CFBB91A;
 Wed, 28 Oct 2020 08:21:15 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kXghp-0007j4-Mz
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:21:17 +0000
X-Inumbo-ID: c991c82c-92c3-464e-b5c1-54adc5770001
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id c991c82c-92c3-464e-b5c1-54adc5770001;
	Wed, 28 Oct 2020 08:21:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603873276;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+zq4UqT3xUB0tKR32SS8TSAnP6cpSu6XeY5h3ZH7QX8=;
	b=uZTHiAp4hwTIXkyC/Lq80Qs4tmHg/0tz/n/e2qpuUBw6P02MSYguAlQPzEfrh0bFJAar1w
	U8j2yqlqTKZaEexFuIBefXtJCyQ3iUOWZVKOAfirfH+b7O1Ma3wcDIVfqQnPF6gsasepK/
	6ocMlRcqutTEhNOCKYtlpKa6BraPgns=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A3CFBB91A;
	Wed, 28 Oct 2020 08:21:15 +0000 (UTC)
Subject: Re: [PATCH 1/3] x86/ucode: Break out compare_revisions() from
 existing infrastructure
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Igor Druzhinin <igor.druzhinin@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201026172519.17881-1-andrew.cooper3@citrix.com>
 <20201026172519.17881-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a8c50574-ce8e-44ad-fccd-9b8f2b3288c1@suse.com>
Date: Wed, 28 Oct 2020 09:21:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201026172519.17881-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 18:25, Andrew Cooper wrote:
> Drop some unnecessarily verbose pr_debug()'s on the AMD side.

To be honest I'm not entirely convinced of this part of the change:
For one, pr_debug() expands to nothing by default. And then you
delete 2/3 of all pr_debug() instances, calling into question why
there's a pr_debug() in the first place.

> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Only after having looked at the subsequent patches to understand
how this is going to be useful:
Acked-by: Jan Beulich <jbeulich@suse.com>
However, ...

> --- a/xen/arch/x86/cpu/microcode/amd.c
> +++ b/xen/arch/x86/cpu/microcode/amd.c
> @@ -168,6 +168,15 @@ static bool check_final_patch_levels(const struct cpu_signature *sig)
>      return false;
>  }
>  
> +static enum microcode_match_result compare_revisions(
> +    uint32_t old_rev, uint32_t new_rev)

... this (and the respective Intel code) is another good example
where, by our present guidelines, fixed-width types aren't
appropriate to use. "unsigned int" (and in the later patch plain
"int" or "signed int") will fulfill the purpose, and hence ought
to be preferred.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 08:32:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 08:32:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13366.33961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgsl-0000Ix-90; Wed, 28 Oct 2020 08:32:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13366.33961; Wed, 28 Oct 2020 08:32:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgsl-0000Iq-5u; Wed, 28 Oct 2020 08:32:35 +0000
Received: by outflank-mailman (input) for mailman id 13366;
 Wed, 28 Oct 2020 08:32:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXgsj-0000Ik-St
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:32:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54fde817-2474-4790-a7c7-cfa08597fbae;
 Wed, 28 Oct 2020 08:32:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 27DCBACB6;
 Wed, 28 Oct 2020 08:32:32 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kXgsj-0000Ik-St
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:32:33 +0000
X-Inumbo-ID: 54fde817-2474-4790-a7c7-cfa08597fbae
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 54fde817-2474-4790-a7c7-cfa08597fbae;
	Wed, 28 Oct 2020 08:32:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603873952;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uWrDiNYGdRPB69jbZDbzFlNN0RuveZ1O13jg+5Xfmvk=;
	b=ZYy87uV7yukTwu5sbsBN3oD4i9cYECAC8/KfFLv+ldvudg/bF4FfOziCzJOtd4sCFUKdp4
	xFxAANB0SLAW6XhPtjh7xNXQ2xcmCO8EOgXGcUMH09rrJbVkzHAl95Mv5ASjSvmn+XlhpD
	ig4YLtNEv2giez+/X0RjJmKZb9em/Ow=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 27DCBACB6;
	Wed, 28 Oct 2020 08:32:32 +0000 (UTC)
Subject: Re: [PATCH 2/3] x86/ucode/intel: Fix handling of microcode revision
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Igor Druzhinin <igor.druzhinin@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201026172519.17881-1-andrew.cooper3@citrix.com>
 <20201026172519.17881-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <157061d4-9ed4-9252-17eb-0eb642f755ec@suse.com>
Date: Wed, 28 Oct 2020 09:32:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201026172519.17881-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 18:25, Andrew Cooper wrote:
> For Intel microcodes, the revision field is signed (as documented in the SDM)
> and negative revisions are used for pre-production/test microcode (not
> documented publicly anywhere I can spot).
> 
> Adjust the revision checking to match the algorithm presented here:
> 
>   https://software.intel.com/security-software-guidance/best-practices/microcode-update-guidance
> 
> This treats pre-production microcode as always applicable, but also production
> microcode having higher precident than pre-production.  It is expected that

Nit: "precedence" I guess?

> anyone using pre-production microcode knows what they are doing.
> 
> This is necessary to load production microcode on an SDP with pre-production
> microcode embedded in firmware.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 08:35:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 08:35:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13370.33973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgv9-0000T9-Mn; Wed, 28 Oct 2020 08:35:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13370.33973; Wed, 28 Oct 2020 08:35:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgv9-0000T2-Jn; Wed, 28 Oct 2020 08:35:03 +0000
Received: by outflank-mailman (input) for mailman id 13370;
 Wed, 28 Oct 2020 08:35:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXgv8-0000Sx-Ui
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:35:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 74f374e3-f120-4981-9fda-4c90a254eda0;
 Wed, 28 Oct 2020 08:35:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 68D2BACB6;
 Wed, 28 Oct 2020 08:35:01 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kXgv8-0000Sx-Ui
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:35:02 +0000
X-Inumbo-ID: 74f374e3-f120-4981-9fda-4c90a254eda0
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 74f374e3-f120-4981-9fda-4c90a254eda0;
	Wed, 28 Oct 2020 08:35:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603874101;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rYNFhhYWGWRe40YrMvFyoHGRdvEqDiP65gRjd5q3Skg=;
	b=B7iQOhra9VrbTMACrUglGugUlD6YEAlYqhk7s2JPmAN/1Jv1XYzARxCIu3OaNOA7ptbUTK
	qyPDPEQcvp13/gBEtGA1BGVAdPyzLhxH97/N6gI2f3WCUYQsqRD3NM9hUVdM3Lc8tK54N3
	inFCi/Mmgv1l+3mFZFWSKMQjGltphKw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 68D2BACB6;
	Wed, 28 Oct 2020 08:35:01 +0000 (UTC)
Subject: Re: [PATCH 3/3] x86/ucode: Introduce ucode=allow-same for testing
 purposes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Juergen Gross <jgross@suse.com>,
 Igor Druzhinin <igor.druzhinin@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20201026172519.17881-1-andrew.cooper3@citrix.com>
 <20201026172519.17881-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <67f2df64-68e2-23e6-172f-9824eb8f7e14@suse.com>
Date: Wed, 28 Oct 2020 09:35:02 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201026172519.17881-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 18:25, Andrew Cooper wrote:
> Many CPUs will actually reload microcode when offered the same version as
> currently loaded.  This allows for easy testing of the late microcode loading
> path.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one nit:

> @@ -2248,6 +2248,11 @@ precedence over `scan`.
>  stop_machine context. In NMI handler, even NMIs are blocked, which is
>  considered safer. The default value is `true`.
>  
> +'allow-same' alters the default acceptance policy for new microcode to permit
> +trying to reload the same version.  Many CPUs will actually reload microcode
> +of the same version, and this allows for easily testing of the late microcode
> +loading path.

Either "easy" or drop "of"?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 08:36:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 08:36:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13374.33986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgwn-0000cX-7S; Wed, 28 Oct 2020 08:36:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13374.33986; Wed, 28 Oct 2020 08:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXgwn-0000cQ-4P; Wed, 28 Oct 2020 08:36:45 +0000
Received: by outflank-mailman (input) for mailman id 13374;
 Wed, 28 Oct 2020 08:36:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aZyU=ED=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kXgwl-0000cL-HY
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:36:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f61a7d05-237b-4401-8aae-95e8f3e3d227;
 Wed, 28 Oct 2020 08:36:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B3F8BACB6;
 Wed, 28 Oct 2020 08:36:41 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=aZyU=ED=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kXgwl-0000cL-HY
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:36:43 +0000
X-Inumbo-ID: f61a7d05-237b-4401-8aae-95e8f3e3d227
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id f61a7d05-237b-4401-8aae-95e8f3e3d227;
	Wed, 28 Oct 2020 08:36:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603874201;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jLMEeVZNG/TEx5W5KQGMjELEYzd5LmrPPQ+A8SlDNh8=;
	b=uFaAZ97EieKwShiVlzUskCn3WKBGlJUgULa5C5sBccIwEhoFGIQGdCuHol0ErvVZNh2JLl
	TJWlGWg5VNBssmfLJFT/mahgboyWIQ+5OxrZBLtSNb55y63m6ZicLeijDuOk8Ap6dQeiGJ
	bgRDPuUAdnp2Jn671jZgCh9Y33lzIz0=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B3F8BACB6;
	Wed, 28 Oct 2020 08:36:41 +0000 (UTC)
Subject: Re: [PATCH 3/3] x86/ucode: Introduce ucode=allow-same for testing
 purposes
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Igor Druzhinin <igor.druzhinin@citrix.com>
References: <20201026172519.17881-1-andrew.cooper3@citrix.com>
 <20201026172519.17881-4-andrew.cooper3@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8033edf2-f77f-e454-63cf-6fb1f2c9e08c@suse.com>
Date: Wed, 28 Oct 2020 09:36:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <20201026172519.17881-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 26.10.20 18:25, Andrew Cooper wrote:
> Many CPUs will actually reload microcode when offered the same version as
> currently loaded.  This allows for easy testing of the late microcode loading
> path.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> CC: Juergen Gross <jgross@suse.com>
> CC: Igor Druzhinin <igor.druzhinin@citrix.com>
> 
> I was hoping to make this a runtime parameter, but I honestly can't figure out
> how the new HYPFS-only infrastructure is supposed to work.

For your use case, have a look at xen/arch/x86/hvm/vmx/vmcs.c to see how
the "ept" runtime parameter is handled.

This is a similar case where one sub-option can be modified at runtime.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 08:44:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 08:44:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13381.33998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXh3m-0001Xg-0m; Wed, 28 Oct 2020 08:43:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13381.33998; Wed, 28 Oct 2020 08:43:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXh3l-0001XZ-TU; Wed, 28 Oct 2020 08:43:57 +0000
Received: by outflank-mailman (input) for mailman id 13381;
 Wed, 28 Oct 2020 08:43:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zOlM=ED=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kXh3k-0001XU-1W
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:43:56 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0c::62b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30c9fc65-bf9d-49b8-b13c-9c6f30e7dff1;
 Wed, 28 Oct 2020 08:43:54 +0000 (UTC)
Received: from AM6PR05CA0021.eurprd05.prod.outlook.com (2603:10a6:20b:2e::34)
 by HE1PR0802MB2506.eurprd08.prod.outlook.com (2603:10a6:3:d6::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.26; Wed, 28 Oct
 2020 08:43:51 +0000
Received: from AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::74) by AM6PR05CA0021.outlook.office365.com
 (2603:10a6:20b:2e::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Wed, 28 Oct 2020 08:43:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT037.mail.protection.outlook.com (10.152.17.241) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 08:43:51 +0000
Received: ("Tessian outbound d5e343850048:v64");
 Wed, 28 Oct 2020 08:43:51 +0000
Received: from d7169c8f5c8b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5EFC4820-1EDD-48BE-870C-37BCA71AC8AF.1; 
 Wed, 28 Oct 2020 08:43:45 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d7169c8f5c8b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 28 Oct 2020 08:43:45 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1909.eurprd08.prod.outlook.com (2603:10a6:4:72::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 28 Oct
 2020 08:43:43 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 08:43:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=zOlM=ED=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
	id 1kXh3k-0001XU-1W
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:43:56 +0000
X-Inumbo-ID: 30c9fc65-bf9d-49b8-b13c-9c6f30e7dff1
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown [2a01:111:f400:fe0c::62b])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 30c9fc65-bf9d-49b8-b13c-9c6f30e7dff1;
	Wed, 28 Oct 2020 08:43:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AYyRvUTEbAdnFlwhD1kOW7kYHlzlk8gTl6DKu3cM4bg=;
 b=zfVh9QjviY5n0Qo5Kflb1ddwGmRS/0D1Nnvfb/SFSJl45XTDbLQB+nOErHdHDq2AjH1YVmDSSFV+w+yDXj/Tiaj+0tnY7MYfyvZXWt/9i6FZLJ28RWvQhWjS7QAsEo9jbwS1eOwPixH4hIaucjDvK8UzZIp9rcB93JEFa3pxGE4=
Received: from AM6PR05CA0021.eurprd05.prod.outlook.com (2603:10a6:20b:2e::34)
 by HE1PR0802MB2506.eurprd08.prod.outlook.com (2603:10a6:3:d6::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.26; Wed, 28 Oct
 2020 08:43:51 +0000
Received: from AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::74) by AM6PR05CA0021.outlook.office365.com
 (2603:10a6:20b:2e::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Wed, 28 Oct 2020 08:43:51 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT037.mail.protection.outlook.com (10.152.17.241) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 08:43:51 +0000
Received: ("Tessian outbound d5e343850048:v64"); Wed, 28 Oct 2020 08:43:51 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0308d5cf17e5374e
X-CR-MTA-TID: 64aa7808
Received: from d7169c8f5c8b.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 5EFC4820-1EDD-48BE-870C-37BCA71AC8AF.1;
	Wed, 28 Oct 2020 08:43:45 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d7169c8f5c8b.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 28 Oct 2020 08:43:45 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Pm5sNEMkOb+5UDOaJk1MFcPLLO63zqz6FFVODvXbVT8+5/phZ9j0FNJVj6i3c2AK/GGW0d6thh4sBithgNMp0Tj9BsMuh4h9XuhOutJQIJ24cWQ8MWJuxtV4WZT4Fq23qjK3Pu+sPo/IB3gI7QjWDebj7qGHB+pyyota0bHfrE2XstFn5uJgXmStcVPj9AqNIP1jtXiQA7dk8ZTFp8rx1PohAUb9R9p8gqQ8eQJxoKwr/HdHHjPt8FYeOWjFPo5ju3ajnSQyox4NgjgWeuwQrLJljlzRr6fFBLLvG4Jjn0LfS8bJYcnoX7V/2PdjiLegbCSEyOYq3y0aSiL4WLYOtQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AYyRvUTEbAdnFlwhD1kOW7kYHlzlk8gTl6DKu3cM4bg=;
 b=FhIIQegiOTDw3MhrUOkfaCQ2fv0PBpe7xWY6Yeb0ICw4VI9aaKHFyxXAFA62KvfK0WA9KuwbN7bi8mzw77/CqiOkDQIUpn7naBrhpXTovW13SRp56nqq64QVR3uxi4m8gPi6stonjJvvraNlZ6GJAa7e1GPwA1JCXWmXvAoaCsIJyLrFnxlknG3beiszrj2LmsQPoNDi4pBY7CX0EJPopWZc6UBxzU4OjwZuexJ03IifhQBShkzZQSFR/OabcBoSxv+3sq91gsFYX9+PlpxU3IDE8/4Qt7ZK5IUE2c7e4mV60K0crMTgzJ+JCXxa+lfdp+x9Gd8bEEwTNvijRzaRYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1909.eurprd08.prod.outlook.com (2603:10a6:4:72::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 28 Oct
 2020 08:43:43 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 08:43:43 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Index: AQHWq7RKM3lRt4XRXUWYGPT9/eCkOKmsDfmAgACnfwA=
Date: Wed, 28 Oct 2020 08:43:43 +0000
Message-ID: <759F39C4-F834-4BFC-B897-714612AEACD8@arm.com>
References:
 <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
 <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2010271540110.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010271540110.12247@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f80a642a-30f0-4d8f-dd4d-08d87b1d9856
x-ms-traffictypediagnostic: DB6PR0801MB1909:|HE1PR0802MB2506:
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB25065A6A36CAF4ABF402B51F9D170@HE1PR0802MB2506.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:1002;OLM:1002;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 oo681h1bqXb8SCM1LTmA3fuUgVlyvUhoCN7fqpn0TIKZ8FIsJCfn6qDlBKflsOzZCDhZJHDjJdpqtQC/kjM0aYkY1BJ1pnWK5lWmzj2ptu1YyGguzMJ8xsQ5Wuw+W9SF0MiZ7PbZhFWvLB9vdm5gBKDzOWIaQmM2sHBMkWwoO0PY5ftEo2WAkoGjAGftsS/8ELr+0zHm0B7CQmQ3/oG4kPEklphaF3twXl7EBnfGU+4xXhlcbu4V404y4C3OrMDpo1e+NCvAv4xXKjPt7rw+m73M4eqkCqZzAK6vM7QaleZQTP/RZqo2fhH9Dj1yfPktxSQ6iz177hlTTORrUNfXPg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(396003)(136003)(39860400002)(376002)(6916009)(478600001)(53546011)(66946007)(2616005)(5660300002)(54906003)(2906002)(66446008)(76116006)(316002)(66476007)(26005)(91956017)(6506007)(66556008)(64756008)(186003)(83380400001)(8676002)(33656002)(6486002)(86362001)(6512007)(8936002)(4326008)(71200400001)(36756003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 FmxuJvUcAa0ixn3eRaMD973HCkIXN3B/4o7qNRioNN7rleLUixZhPzQf0+BRb5lwwp+aZanoogLbzrhg9fejBbyXTTOyMmjyWtwkI5FMAfKSAZFHJQBn3iVrnpYgEWQSrRs5fR9jhg+R+BhDBVV4IVKBVjevN5Jw/GRMeC56ARcB/gtDtQsHIMZ032JxaQW/6IXNOx2oqEklCLXC0eLcM1s9Pa3HyoHeTZzbrlyLRuG1BN/lsN8sLiHndxzkrKYCDZxd+WiTN4JsU5qos/qIWc/yIi6jt6sUoxf8leahsW7193+01eeGbqistm+FB96QLE+ec1lrNiNpg9rxCt2ud+VOpmuq0iUNtceCjn/aMxVTQjleVhU6lJcqLrldV9MSO1bA3G23OJWTY7SySfO9qZAvfeyeVhoCsqUhh9laXBSjY5ingYYjQsdBubKylMdrsC0h5CABdo+JjCmvOMSKKcBtNdZ2H/eddclOR5hzCxtw8RpjTIUWEha8gajN/cwPggcskkjFsiD51D3pDzh/QXyGnwlcGcniZ+o7lWdkCUlANQMQYI7iOPDycN0ovbJjs9Z6WoFfkaGuofPvbw/TM/qXzl+bAc1NoSIqMEUfH+cbkxBrebhcbVWBMQ/8yxgJwMVIwku9GpIZRB6hNyujWg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <9CE107575A7F7E4BA56EA60574CC4410@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1909
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	151911f4-5ba4-41a2-068b-08d87b1d93b1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	zVztjICZx6/LUReo40fdzXBLE4kW9jH4neiw3Te2ZdaC2JB4C5HE8mWTPgy3lGAQ01B0irCnz9J+5IoJchdkVXHwA57W9vZFw1CqlIU1SQhf0gt7Y/kCEFa+BT8As6cLaNV9qKtY+QUsoZHz4CysmtsGPEOZdts6/Td6x8etwjmVkksSAp3W7hdzoN9Vq/Cfn8sK5KYvmM/oHKuyCWQqS7V+5CzNEhaXlzwhvKvsEbrmQyZiz2pNTPJ6moSgPF7xW8MU5wt8fnrlQv6BC5EXgmB1RNGg66nuX6/78I8g2Xa0AyxJtubPCQarUSScX1oQsl4vf0cNCsH+YX6cWn5ygdqolOGDs0SuC1RZG9gRA00rspzOGWKhcAy4jcgeMLfZa37HPqaPbXFA1kBwan35OA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(376002)(39850400004)(346002)(136003)(46966005)(53546011)(8936002)(83380400001)(356005)(81166007)(8676002)(47076004)(86362001)(70586007)(6486002)(36756003)(70206006)(107886003)(5660300002)(6512007)(54906003)(2616005)(2906002)(186003)(316002)(26005)(4326008)(6862004)(33656002)(82740400003)(6506007)(36906005)(478600001)(82310400003)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Oct 2020 08:43:51.2323
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f80a642a-30f0-4d8f-dd4d-08d87b1d9856
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2506



> On 27 Oct 2020, at 22:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Mon, 26 Oct 2020, Bertrand Marquis wrote:
>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>> not implementing the workaround for it could deadlock the system.
>> Add a warning during boot informing the user that only trusted guests
>> should be executed on the system.
>> An equivalent warning is already given to the user by KVM on cores
>> affected by this errata.
>>
>> Also taint the hypervisor as unsecure when this errata applies and
>> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
>>=20
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> SUPPORT.md               |  1 +
>> xen/arch/arm/cpuerrata.c | 13 +++++++++++++
>> 2 files changed, 14 insertions(+)
>>
>> diff --git a/SUPPORT.md b/SUPPORT.md
>> index 5fbe5fc444..f7a3b046b0 100644
>> --- a/SUPPORT.md
>> +++ b/SUPPORT.md
>> @@ -38,6 +38,7 @@ supported in this document.
>> ### ARM v8
>>
>>     Status: Supported
>> +    Status, Cortex A57 r0p0 - r1p2, not security supported (Errata 832075)
>>
>> ## Host hardware support
>>=20
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 0430069a84..b35e8cd0b9 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -503,6 +503,19 @@ void check_local_cpu_errata(void)
>> void __init enable_errata_workarounds(void)
>> {
>>     enable_cpu_capabilities(arm_errata);
>> +
>> +#ifdef CONFIG_ARM64_ERRATUM_832075
>> +    if ( cpus_have_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) )
>> +    {
>> +        printk_once("**** This CPU is affected by the errata 832075. ****\n"
>> +                    "**** Guests without CPU erratum workarounds     ****\n"
>> +                    "**** can deadlock the system!                   ****\n"
>> +                    "**** Only trusted guests should be used.        ****\n");
>
> These can be on 2 lines, no need to be on 4 lines.

I can fix that in a v3.

>
>
> I know that Julien wrote about printing the warning from
> enable_errata_workarounds but to me it looks more natural if we did it
> from the .enable function specific to ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE.

I have no preference either here, but I kind of like this way because if we had more warnings they would all be in the same place.

I will wait for Julien's answer on this before sending a v3 of this patch.

Cheers
Bertrand

>
> That said, I don't feel strongly about it, I am fine either way. Julien,
> do you have a preference?
>
>
> Other than that, it is fine.
>
>
>> +        /* Taint the machine as being insecure */
>> +        add_taint(TAINT_MACHINE_UNSECURE);
>> +    }
>> +#endif



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 08:50:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 08:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13398.34009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhA2-0002Qr-N5; Wed, 28 Oct 2020 08:50:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13398.34009; Wed, 28 Oct 2020 08:50:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhA2-0002Qk-K1; Wed, 28 Oct 2020 08:50:26 +0000
Received: by outflank-mailman (input) for mailman id 13398;
 Wed, 28 Oct 2020 08:50:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Uak6=ED=linaro.org=masami.hiramatsu@srs-us1.protection.inumbo.net>)
 id 1kXhA0-0002Qf-UN
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 08:50:25 +0000
Received: from mail-yb1-xb44.google.com (unknown [2607:f8b0:4864:20::b44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fafe5503-2d6e-47e9-a3d3-7948b86efe9a;
 Wed, 28 Oct 2020 08:50:23 +0000 (UTC)
Received: by mail-yb1-xb44.google.com with SMTP id o70so3635155ybc.1
 for <xen-devel@lists.xenproject.org>; Wed, 28 Oct 2020 01:50:23 -0700 (PDT)
X-Inumbo-ID: fafe5503-2d6e-47e9-a3d3-7948b86efe9a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:from:date:message-id:subject:to:cc;
        bh=ieqmOtTH2tzg7ChcRoz+nnMCwbn+J69zwhqoR1NXgx8=;
        b=HEK9ynanFA0kNKqLUsZHE+BO4/W4VZzOxLG885P/yQQb0u2+NlCDqmOhHL+j49yKFX
         r19CwtJHC97e69lSKLY3yiDdXt6MOVTeDxZg8YojdwMnIjyHUnZMJCqXR7+f+8wDuv0Z
         ndAjpBB51TslhK6VCLhQNog5+9bs8gYr/wExlrYyk2n832r//GzHnsh0fN9wckLCIzGy
         2gWkrKaBHYrux50YpM7w+PuHHZ6i6sm+GocIU9fDEpdaFG6vCp5BH5gNVCvrTK79jMoL
         MxYBTewvvOdd/dNBUtTveP9bfMuMd3IcqruSvFm1FR3w2kKiSrKS7Xzs0Qn1M0a0agrb
         Vw7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
        bh=ieqmOtTH2tzg7ChcRoz+nnMCwbn+J69zwhqoR1NXgx8=;
        b=WEraeBCsQIqd8uqT5jXBM2sGnd8kOvEAJqQGKaydr54hVk0zPmcocNK72LPdofXVJt
         W3N4cmm0tnT6kUtXuCKRoEzSH1wz6ygWF8Bgkwp/3Veul84G1u2qyGkNrfuYGOVZSH7w
         payMujiSk1iaq+tYFoqBfwKilkDDtCWfusfV2J3OrmT+YnGuung9Evw0M0T4ua1bQXr4
         T13CEL3eUWZZXbwzLbyu/1oSad5fiyqb5kwttDZoxoPFrlVdy12h2YVnZ1nC4J8SqwSN
         F2DXOcOx1vXW1WqchPpqiA3xeRsVPrv+OszouMUhl48nNFnKOgBBAQ3qWefH9MHvzex3
         TSLA==
X-Gm-Message-State: AOAM532rWWjAlkQctu4R/lCOoDr/nkFbGlhJyszi1M5cJyCqpAD7ScqZ
	XqD9uXu6Vrg7MexXOc+d/L8BRIuPj7IecPN1sYts6hLxT1DCKA==
X-Google-Smtp-Source: ABdhPJzVZ41WwXZPYxB8sSVTXxCADIVn/HVtM/c4O+tnQlp9oOuE0HxQcvb2Qy/6HOy2THqmzRqRgvpgkPKDNes6eeg=
X-Received: by 2002:a25:578a:: with SMTP id l132mr8300639ybb.200.1603875022926;
 Wed, 28 Oct 2020 01:50:22 -0700 (PDT)
MIME-Version: 1.0
From: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Date: Wed, 28 Oct 2020 17:50:12 +0900
Message-ID: <CAA93ih1bgSCNb9X8-NzGJfhFjRH5W5L2wAG0PHfQoUL4qHkZVA@mail.gmail.com>
Subject: [bug report] xen/arm64: singlestep doesn't work on Dom0
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>, 
	Masami Hiramatsu <mhiramat@kernel.org>, Jassi Brar <jaswinder.singh@linaro.org>, 
	Stefano Stabellini <stefano.stabellini@linaro.org>
Content-Type: text/plain; charset="UTF-8"

Hello,

When I tested the kprobes on Dom0 kernel, it caused a kernel panic.

Here is an example. To clarify that the bug is in the single-stepping path
(the software-step exception), I added a printk() where kprobes sets up
single-stepping.

root@develbox:~# cd /sys/kernel/debug/tracing/
root@develbox:/sys/kernel/debug/tracing# echo p vfs_read > kprobe_events
root@develbox:/sys/kernel/debug/tracing# echo 1 > events/kprobes/enable
root@develbox:/sys/kernel/debug/tracing# [  112.282480] kprobes: singlestep ool insn at ffff800011785000    <--- This shows the single-stepping buffer (trampoline)
[  112.288077] ------------[ cut here ]------------
[  112.292745] kernel BUG at arch/arm64/kernel/traps.c:406!
[  112.298129] Internal error: Oops - BUG: 0 [#1] SMP
[  112.302987] Modules linked in: fuse bridge stp llc binfmt_misc
nls_ascii nls_cp437 vfat fat ahci libahci libata hid_generic udlfb
scsi_mod aes_ce_blk crypto_simd evdev cryptd aes_ce_cipher usbhid
ghash_ce realtek gf128mul hid netsec sha2_ce mdio_devres i2c_algo_bit
of_mdio sha256_arm64 fb_ddc sha1_ce fixed_phy gpio_keys leds_gpio
libphy bpf_preload ip_tables x_tables autofs4 xhci_pci
xhci_pci_renesas xhci_hcd usbcore gpio_mb86s7x
[  112.341097] CPU: 13 PID: 1045 Comm: bash Not tainted 5.10.0-rc1+ #44
[  112.347515] Hardware name: Socionext Developer Box (DT)
[  112.352813] pstate: 00000085 (nzcv daIf -PAN -UAO -TCO BTYPE=--)
[  112.358897] pc : do_undefinstr+0x354/0x378
[  112.363053] lr : do_undefinstr+0x270/0x378
[  112.367218] sp : ffff8000122fbc50
[  112.370603] x29: ffff8000122fbc50 x28: ffff00084bc9e080
[  112.375985] x27: 0000000000000000 x26: 0000000000000000
[  112.381366] x25: 0000000000000000 x24: 0000000000000000
[  112.386748] x23: 0000000080000085 x22: ffff800011785004
[  112.392129] x21: ffff8000122fbe00 x20: ffff8000122fbcc0
[  112.397511] x19: ffff800011249988 x18: 0000000000000000
[  112.402892] x17: 0000000000000000 x16: 0000000000000000
[  112.408274] x15: 0000000000000000 x14: 0000000000000000
[  112.413655] x13: 0000000000000000 x12: 0000000000000000
[  112.419037] x11: 0000000000000000 x10: 0000000000000000
[  112.424426] x9 : ffff800010314614 x8 : 0000000000000000
[  112.429801] x7 : 0000000000000000 x6 : ffff8000122fbca8
[  112.435189] x5 : 0000000000000000 x4 : ffff800011400110
[  112.440564] x3 : 00000000d5300000 x2 : ffff800011255f78
[  112.445946] x1 : ffff800011400110 x0 : 0000000080000085
[  112.451328] Call trace:
[  112.453848]  do_undefinstr+0x354/0x378
[  112.457669]  el1_sync_handler+0xa8/0x138
[  112.461658]  el1_sync+0x7c/0x100
[  112.464958]  0xffff800011785004     /// <- Undefined instruction error happens on the next instruction after the single-stepping buffer.
[  112.468172]  __arm64_sys_read+0x24/0x30
[  112.472078]  el0_svc_common.constprop.3+0x94/0x178
[  112.476936]  do_el0_svc+0x2c/0x98
[  112.480321]  el0_sync_handler+0x118/0x168
[  112.484407]  el0_sync+0x158/0x180
[  112.487789] Code: d2801400 17ffffbe a9025bf5 f9001bf7 (d4210000)
[  112.493951] ---[ end trace 3564a3bf75d1618c ]---

So it seems that the Linux kernel could not catch the software-step exception.

I confirmed that the same kernel does not cause this error without Xen. My
guess is that Xen is not correctly setting the debug registers when the CPU
enters EL1.
(Or is Xen supposed to handle debug exceptions itself and forward them to
the EL1 OS? I'm not sure how this was designed.)

Thank you,

-- 
Masami Hiramatsu


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:20:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:20:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13416.34022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhdE-00053w-0d; Wed, 28 Oct 2020 09:20:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13416.34022; Wed, 28 Oct 2020 09:20:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhdD-00053p-U3; Wed, 28 Oct 2020 09:20:35 +0000
Received: by outflank-mailman (input) for mailman id 13416;
 Wed, 28 Oct 2020 09:20:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXhdD-00053k-Gh
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:20:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 87249370-37f1-4a88-a3c4-122a90584149;
 Wed, 28 Oct 2020 09:20:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5A1FFAE55;
 Wed, 28 Oct 2020 09:20:33 +0000 (UTC)
X-Inumbo-ID: 87249370-37f1-4a88-a3c4-122a90584149
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603876833;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=lYdSBZ4GqDWpjsgCkyHXRTUKdyPlmdwYdDzfWbdn6vg=;
	b=rFAfa3UcQLC2ivGNOd3yK47deNhW+ZTM5tpdAdWtFGH/QMlCHq6fpSWwoWe6onLqAUrcMO
	nq9VylGNKl7mVXs6cjcurTZYhBb3KnRjobuQxsri5t3EnudUv5MpKhYh8UbmStHUK1sffN
	lXHPKGscCk1fan/zX996JhvpdrMripI=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/5] x86/p2m: hook adjustments
Message-ID: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Date: Wed, 28 Oct 2020 10:20:33 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This started out with me getting confused by the two write_p2m_entry()
hooks we have - there really ought to be no more than one, or, if two
were absolutely needed, they imo ought to have distinct names. Other
adjustment opportunities (which I hope are improvements) were found
while getting rid of that one unnecessary layer of indirect calls.

1: p2m: paging_write_p2m_entry() is a private function
2: p2m: collapse the two ->write_p2m_entry() hooks
3: p2m: suppress audit_p2m hook when possible
4: HAP: move nested-P2M flush calculations out of locked region
5: [RFC] p2m: split write_p2m_entry() hook

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:22:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:22:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13419.34034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXheh-0005BP-CC; Wed, 28 Oct 2020 09:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13419.34034; Wed, 28 Oct 2020 09:22:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXheh-0005BI-99; Wed, 28 Oct 2020 09:22:07 +0000
Received: by outflank-mailman (input) for mailman id 13419;
 Wed, 28 Oct 2020 09:22:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXhef-0005BD-Qc
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:22:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c45dc1a0-7852-4b33-b0f6-489983d15d97;
 Wed, 28 Oct 2020 09:22:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EC929AE55;
 Wed, 28 Oct 2020 09:22:03 +0000 (UTC)
X-Inumbo-ID: c45dc1a0-7852-4b33-b0f6-489983d15d97
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603876924;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yu6nsT1gjKKWROl7+xNzleE21oVo7PP0tppH9efwgOc=;
	b=q2y7pH7q4P+xg0XBXJX+RfzzUHGlmVIs65+CrckFJFVZukH7KW5gyhUwq/oK1GKLCYE6p7
	+hbU/347P9GfihhNXl3XUT/t9m4ULw0mpRUMao4GmKDnuaP2lNPRusTXJ1ba/r9a8o83PF
	lw6t+nBhpak8ZNmlxf8ZcefaTPOd93k=
Subject: [PATCH 1/5] x86/p2m: paging_write_p2m_entry() is a private function
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Message-ID: <1fab241b-3969-9ce5-2388-bcdbe3be6079@suse.com>
Date: Wed, 28 Oct 2020 10:22:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

As it gets installed by p2m_pt_init(), it doesn't need to live in
paging.c. The function working in terms of l1_pgentry_t even further
indicates its non-paging-generic nature. Move it and drop its
paging_ prefix, not adding any new one now that it's static.

This then also makes more obvious that in the EPT case we wouldn't
risk mistakenly calling through the NULL hook pointer.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -108,6 +108,31 @@ static unsigned long p2m_type_to_flags(c
     }
 }
 
+/*
+ * Atomically write a P2M entry and update the paging-assistance state
+ * appropriately.
+ * Arguments: the domain in question, the GFN whose mapping is being updated,
+ * a pointer to the entry to be written, the MFN in which the entry resides,
+ * the new contents of the entry, and the level in the p2m tree at which
+ * we are writing.
+ */
+static int write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
+                           l1_pgentry_t *p, l1_pgentry_t new,
+                           unsigned int level)
+{
+    struct domain *d = p2m->domain;
+    struct vcpu *v = current;
+    int rc = 0;
+
+    if ( v->domain != d )
+        v = d->vcpu ? d->vcpu[0] : NULL;
+    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) )
+        rc = paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
+    else
+        safe_write_pte(p, new);
+
+    return rc;
+}
 
 // Find the next level's P2M entry, checking for out-of-range gfn's...
 // Returns NULL on error.
@@ -594,7 +619,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
         entry_content.l1 = l3e_content.l3;
 
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
-        /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
     }
@@ -631,7 +656,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
 
         /* level 1 entry */
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
-        /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
     }
@@ -666,7 +691,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
         entry_content.l1 = l2e_content.l2;
 
         rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
-        /* NB: paging_write_p2m_entry() handles tlb flushes properly */
+        /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
     }
@@ -1107,7 +1132,7 @@ void p2m_pt_init(struct p2m_domain *p2m)
     p2m->recalc = do_recalc;
     p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
-    p2m->write_p2m_entry = paging_write_p2m_entry;
+    p2m->write_p2m_entry = write_p2m_entry;
 #if P2M_AUDIT
     p2m->audit_p2m = p2m_pt_audit_p2m;
 #else
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -941,27 +941,7 @@ void paging_update_nestedmode(struct vcp
         v->arch.paging.nestedmode = NULL;
     hvm_asid_flush_vcpu(v);
 }
-#endif
 
-int paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                           l1_pgentry_t *p, l1_pgentry_t new,
-                           unsigned int level)
-{
-    struct domain *d = p2m->domain;
-    struct vcpu *v = current;
-    int rc = 0;
-
-    if ( v->domain != d )
-        v = d->vcpu ? d->vcpu[0] : NULL;
-    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v) != NULL) )
-        rc = paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
-    else
-        safe_write_pte(p, new);
-
-    return rc;
-}
-
-#ifdef CONFIG_HVM
 int __init paging_set_allocation(struct domain *d, unsigned int pages,
                                  bool *preempted)
 {
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -371,18 +371,6 @@ static inline void safe_write_pte(l1_pge
     *p = new;
 }
 
-/* Atomically write a P2M entry and update the paging-assistance state 
- * appropriately. 
- * Arguments: the domain in question, the GFN whose mapping is being updated, 
- * a pointer to the entry to be written, the MFN in which the entry resides, 
- * the new contents of the entry, and the level in the p2m tree at which 
- * we are writing. */
-struct p2m_domain;
-
-int paging_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                           l1_pgentry_t *p, l1_pgentry_t new,
-                           unsigned int level);
-
 /*
  * Called from the guest to indicate that the a process is being
  * torn down and its pagetables will soon be discarded.



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:23:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:23:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13423.34045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhfZ-0005KO-OE; Wed, 28 Oct 2020 09:23:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13423.34045; Wed, 28 Oct 2020 09:23:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhfZ-0005KH-LI; Wed, 28 Oct 2020 09:23:01 +0000
Received: by outflank-mailman (input) for mailman id 13423;
 Wed, 28 Oct 2020 09:23:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXhfX-0005K5-V3
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:22:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 751d2f3c-6ebd-40cb-87eb-2a65ecf7f8db;
 Wed, 28 Oct 2020 09:22:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D7F21AF2D;
 Wed, 28 Oct 2020 09:22:57 +0000 (UTC)
X-Inumbo-ID: 751d2f3c-6ebd-40cb-87eb-2a65ecf7f8db
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603876977;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NCgW+bACNDPGH3cgrnwMxwZu4xm7gdBOFvqRHOsKIW4=;
	b=bJnA87lKFKTAaHIFeVGKAu6cHQ29VmLbF3rgVk1h6reUbJInqMPWBVEKkPKxc2HHGr4pih
	8UZyOR5Td/DsyFgGiIzAf3VH0RmLArqgLQHtF7tcxReUagI1IhdcdldHOMCYlYhwEznxBH
	B27B690Rej5Ulr5g/yXoHjOMh6Btu5c=
Subject: [PATCH 2/5] x86/p2m: collapse the two ->write_p2m_entry() hooks
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Message-ID: <b26981d1-7a1a-2387-0640-574bdf11ceff@suse.com>
Date: Wed, 28 Oct 2020 10:22:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Both HAP and shadow code set this hook in their struct paging_mode
instances to the same function regardless of mode, hence there's no
point in having the hook there. Nor does the hook need moving
elsewhere - we can use struct p2m_domain's hook directly. This merely
requires (from a strictly formal point of view; in practice it may not
even be needed) making sure we don't end up using safe_write_pte() for
nested P2Ms.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
As with the possibly unnecessary p2m_is_nestedp2m() check, I'm not
really sure the paging_get_hostmode() check there is still needed
either. But I didn't want to alter more aspects than necessary here.

Of course with the p2m_is_nestedp2m() check there and with all three of
{hap,nestedp2m,shadow}_write_p2m_entry() now globally accessible, it's
certainly an option to do away with the indirect call there altogether.
In fact we may even be able to go further and fold the three functions:
They're relatively similar, and this would "seamlessly" address the
apparent bug of nestedp2m_write_p2m_entry() not making use of
p2m_entry_modify().

--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -823,6 +823,11 @@ hap_write_p2m_entry(struct p2m_domain *p
     return 0;
 }
 
+void hap_p2m_init(struct p2m_domain *p2m)
+{
+    p2m->write_p2m_entry = hap_write_p2m_entry;
+}
+
 static unsigned long hap_gva_to_gfn_real_mode(
     struct vcpu *v, struct p2m_domain *p2m, unsigned long gva, uint32_t *pfec)
 {
@@ -846,7 +851,6 @@ static const struct paging_mode hap_pagi
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_real_mode,
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
-    .write_p2m_entry        = hap_write_p2m_entry,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 1
 };
@@ -858,7 +862,6 @@ static const struct paging_mode hap_pagi
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_2_levels,
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
-    .write_p2m_entry        = hap_write_p2m_entry,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 2
 };
@@ -870,7 +873,6 @@ static const struct paging_mode hap_pagi
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_3_levels,
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
-    .write_p2m_entry        = hap_write_p2m_entry,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 3
 };
@@ -882,7 +884,6 @@ static const struct paging_mode hap_pagi
     .p2m_ga_to_gfn          = hap_p2m_ga_to_gfn_4_levels,
     .update_cr3             = hap_update_cr3,
     .update_paging_modes    = hap_update_paging_modes,
-    .write_p2m_entry        = hap_write_p2m_entry,
     .flush_tlb              = flush_tlb,
     .guest_levels           = 4
 };
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -126,8 +126,9 @@ static int write_p2m_entry(struct p2m_do
 
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
-    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) )
-        rc = paging_get_hostmode(v)->write_p2m_entry(p2m, gfn, p, new, level);
+    if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
+         p2m_is_nestedp2m(p2m) )
+        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
     else
         safe_write_pte(p, new);
 
@@ -209,7 +210,7 @@ p2m_next_level(struct p2m_domain *p2m, v
 
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
 
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
         if ( rc )
             goto error;
     }
@@ -251,7 +252,7 @@ p2m_next_level(struct p2m_domain *p2m, v
         {
             new_entry = l1e_from_pfn(pfn | (i << ((level - 1) * PAGETABLE_ORDER)),
                                      flags);
-            rc = p2m->write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
+            rc = write_p2m_entry(p2m, gfn, l1_entry + i, new_entry, level);
             if ( rc )
             {
                 unmap_domain_page(l1_entry);
@@ -262,8 +263,7 @@ p2m_next_level(struct p2m_domain *p2m, v
         unmap_domain_page(l1_entry);
 
         new_entry = l1e_from_mfn(mfn, P2M_BASE_FLAGS | _PAGE_RW);
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, new_entry,
-                                  level + 1);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, new_entry, level + 1);
         if ( rc )
             goto error;
     }
@@ -335,7 +335,7 @@ static int p2m_pt_set_recalc_range(struc
             if ( (l1e_get_flags(e) & _PAGE_PRESENT) && !needs_recalc(l1, e) )
             {
                 set_recalc(l1, e);
-                err = p2m->write_p2m_entry(p2m, first_gfn, pent, e, level);
+                err = write_p2m_entry(p2m, first_gfn, pent, e, level);
                 if ( err )
                 {
                     ASSERT_UNREACHABLE();
@@ -412,8 +412,8 @@ static int do_recalc(struct p2m_domain *
                      !needs_recalc(l1, ent) )
                 {
                     set_recalc(l1, ent);
-                    err = p2m->write_p2m_entry(p2m, gfn - remainder, &ptab[i],
-                                               ent, level);
+                    err = write_p2m_entry(p2m, gfn - remainder, &ptab[i], ent,
+                                          level);
                     if ( err )
                     {
                         ASSERT_UNREACHABLE();
@@ -426,7 +426,7 @@ static int do_recalc(struct p2m_domain *
             if ( !err )
             {
                 clear_recalc(l1, e);
-                err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+                err = write_p2m_entry(p2m, gfn, pent, e, level + 1);
                 ASSERT(!err);
 
                 recalc_done = true;
@@ -474,7 +474,7 @@ static int do_recalc(struct p2m_domain *
         }
         else
             clear_recalc(l1, e);
-        err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
+        err = write_p2m_entry(p2m, gfn, pent, e, level + 1);
         ASSERT(!err);
 
         recalc_done = true;
@@ -618,7 +618,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
             : l3e_empty();
         entry_content.l1 = l3e_content.l3;
 
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 3);
         /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
@@ -655,7 +655,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
             entry_content = l1e_empty();
 
         /* level 1 entry */
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 1);
         /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
@@ -690,7 +690,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m,
             : l2e_empty();
         entry_content.l1 = l2e_content.l2;
 
-        rc = p2m->write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
+        rc = write_p2m_entry(p2m, gfn, p2m_entry, entry_content, 2);
         /* NB: write_p2m_entry() handles tlb flushes properly */
         if ( rc )
             goto out;
@@ -914,7 +914,7 @@ static void p2m_pt_change_entry_type_glo
             int rc;
 
             set_recalc(l1, e);
-            rc = p2m->write_p2m_entry(p2m, gfn, &tab[i], e, 4);
+            rc = write_p2m_entry(p2m, gfn, &tab[i], e, 4);
             if ( rc )
             {
                 ASSERT_UNREACHABLE();
@@ -1132,7 +1132,13 @@ void p2m_pt_init(struct p2m_domain *p2m)
     p2m->recalc = do_recalc;
     p2m->change_entry_type_global = p2m_pt_change_entry_type_global;
     p2m->change_entry_type_range = p2m_pt_change_entry_type_range;
-    p2m->write_p2m_entry = write_p2m_entry;
+
+    /* Still too early to use paging_mode_hap(). */
+    if ( hap_enabled(p2m->domain) )
+        hap_p2m_init(p2m);
+    else if ( IS_ENABLED(CONFIG_SHADOW_PAGING) )
+        shadow_p2m_init(p2m);
+
 #if P2M_AUDIT
     p2m->audit_p2m = p2m_pt_audit_p2m;
 #else
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3137,7 +3137,7 @@ static void sh_unshadow_for_p2m_change(s
     }
 }
 
-int
+static int
 shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
                        l1_pgentry_t *p, l1_pgentry_t new,
                        unsigned int level)
@@ -3183,6 +3183,11 @@ shadow_write_p2m_entry(struct p2m_domain
     return 0;
 }
 
+void shadow_p2m_init(struct p2m_domain *p2m)
+{
+    p2m->write_p2m_entry = shadow_write_p2m_entry;
+}
+
 /**************************************************************************/
 /* Log-dirty mode support */
 
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -4792,7 +4792,6 @@ const struct paging_mode sh_paging_mode
     .gva_to_gfn                    = sh_gva_to_gfn,
     .update_cr3                    = sh_update_cr3,
     .update_paging_modes           = shadow_update_paging_modes,
-    .write_p2m_entry               = shadow_write_p2m_entry,
     .flush_tlb                     = shadow_flush_tlb,
     .guest_levels                  = GUEST_PAGING_LEVELS,
     .shadow.detach_old_tables      = sh_detach_old_tables,
--- a/xen/arch/x86/mm/shadow/none.c
+++ b/xen/arch/x86/mm/shadow/none.c
@@ -60,21 +60,12 @@ static void _update_paging_modes(struct
     ASSERT_UNREACHABLE();
 }
 
-static int _write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                            l1_pgentry_t *p, l1_pgentry_t new,
-                            unsigned int level)
-{
-    ASSERT_UNREACHABLE();
-    return -EOPNOTSUPP;
-}
-
 static const struct paging_mode sh_paging_none = {
     .page_fault                    = _page_fault,
     .invlpg                        = _invlpg,
     .gva_to_gfn                    = _gva_to_gfn,
     .update_cr3                    = _update_cr3,
     .update_paging_modes           = _update_paging_modes,
-    .write_p2m_entry               = _write_p2m_entry,
 };
 
 void shadow_vcpu_init(struct vcpu *v)
--- a/xen/arch/x86/mm/shadow/private.h
+++ b/xen/arch/x86/mm/shadow/private.h
@@ -390,11 +390,6 @@ static inline int sh_remove_write_access
 }
 #endif
 
-/* Functions that atomically write PT/P2M entries and update state */
-int shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                           l1_pgentry_t *p, l1_pgentry_t new,
-                           unsigned int level);
-
 /* Functions that atomically write PV guest PT entries */
 void sh_write_guest_entry(struct vcpu *v, intpte_t *p, intpte_t new,
                           mfn_t gmfn);
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -836,6 +836,9 @@ void p2m_flush_nestedp2m(struct domain *
 /* Flushes the np2m specified by np2m_base (if it exists) */
 void np2m_flush_base(struct vcpu *v, unsigned long np2m_base);
 
+void hap_p2m_init(struct p2m_domain *p2m);
+void shadow_p2m_init(struct p2m_domain *p2m);
+
 int nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
     l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
 
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -141,10 +141,6 @@ struct paging_mode {
     void          (*update_cr3            )(struct vcpu *v, int do_locking,
                                             bool noflush);
     void          (*update_paging_modes   )(struct vcpu *v);
-    int           (*write_p2m_entry       )(struct p2m_domain *p2m,
-                                            unsigned long gfn,
-                                            l1_pgentry_t *p, l1_pgentry_t new,
-                                            unsigned int level);
     bool          (*flush_tlb             )(bool (*flush_vcpu)(void *ctxt,
                                                                struct vcpu *v),
                                             void *ctxt);



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:23:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:23:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13426.34058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhgI-0005SF-2J; Wed, 28 Oct 2020 09:23:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13426.34058; Wed, 28 Oct 2020 09:23:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhgH-0005S7-Uo; Wed, 28 Oct 2020 09:23:45 +0000
Received: by outflank-mailman (input) for mailman id 13426;
 Wed, 28 Oct 2020 09:23:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXhgG-0005Ry-Jg
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:23:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 51916575-9bb7-4837-a3e0-4485b7de8df7;
 Wed, 28 Oct 2020 09:23:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D7D9AE55;
 Wed, 28 Oct 2020 09:23:41 +0000 (UTC)
X-Inumbo-ID: 51916575-9bb7-4837-a3e0-4485b7de8df7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603877021;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ldwy/42iIo0sCTIc18oLAOIhkEM9oMjLPSiF7ItDO6A=;
	b=d1ADUAQyyFUI5XPvjZrMvxa7fbW5nx6uYZ2x76+G4OxQVF6a/8D3953Der42YgiufWgwaq
	pU0gZg+yCkDQoGRT/GV7jDbhARuDfrvBelfA2iO2aEMUrv31G0NowcZvsRrb5+c6yx2bvQ
	j77ku8ZK5XWVCUFY5OpYz89d0hzj+co=
Subject: [PATCH 3/5] x86/p2m: suppress audit_p2m hook when possible
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Message-ID: <722cf75e-da6a-49c5-472a-898796c9030e@suse.com>
Date: Wed, 28 Oct 2020 10:23:42 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When P2M_AUDIT is false, the audit_p2m hook is unused, so instead of
having a dangling NULL pointer sit there, omit the field altogether.

Instead of adding "#if P2M_AUDIT && defined(CONFIG_HVM)" in even more
places, fold the latter part right into the definition of P2M_AUDIT.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1012,7 +1012,7 @@ long arch_do_domctl(
         break;
 #endif
 
-#if P2M_AUDIT && defined(CONFIG_HVM)
+#if P2M_AUDIT
     case XEN_DOMCTL_audit_p2m:
         if ( d == currd )
             ret = -EPERM;
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1260,7 +1260,9 @@ int ept_p2m_init(struct p2m_domain *p2m)
     p2m->change_entry_type_global = ept_change_entry_type_global;
     p2m->change_entry_type_range = ept_change_entry_type_range;
     p2m->memory_type_changed = ept_memory_type_changed;
+#if P2M_AUDIT
     p2m->audit_p2m = NULL;
+#endif
     p2m->tlb_flush = ept_tlb_flush;
 
     /* Set the memory type used when accessing EPT paging structures. */
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -971,8 +971,8 @@ static int p2m_pt_change_entry_type_rang
     return err;
 }
 
-#if P2M_AUDIT && defined(CONFIG_HVM)
-long p2m_pt_audit_p2m(struct p2m_domain *p2m)
+#if P2M_AUDIT
+static long p2m_pt_audit_p2m(struct p2m_domain *p2m)
 {
     unsigned long entry_count = 0, pmbad = 0;
     unsigned long mfn, gfn, m2pfn;
@@ -1120,8 +1120,6 @@ long p2m_pt_audit_p2m(struct p2m_domain
 
     return pmbad;
 }
-#else
-# define p2m_pt_audit_p2m NULL
 #endif /* P2M_AUDIT */
 
 /* Set up the p2m function pointers for pagetable format */
@@ -1141,8 +1139,6 @@ void p2m_pt_init(struct p2m_domain *p2m)
 
 #if P2M_AUDIT
     p2m->audit_p2m = p2m_pt_audit_p2m;
-#else
-    p2m->audit_p2m = NULL;
 #endif
 }
 
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2435,7 +2435,7 @@ int p2m_altp2m_propagate_change(struct d
 
 /*** Audit ***/
 
-#if P2M_AUDIT && defined(CONFIG_HVM)
+#if P2M_AUDIT
 void audit_p2m(struct domain *d,
                uint64_t *orphans,
                 uint64_t *m2p_bad,
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -31,6 +31,14 @@
 #include <asm/mem_sharing.h>
 #include <asm/page.h>    /* for pagetable_t */
 
+/* Debugging and auditing of the P2M code? */
+#ifndef NDEBUG
+#define P2M_AUDIT     defined(CONFIG_HVM)
+#else
+#define P2M_AUDIT     0
+#endif
+#define P2M_DEBUGGING 0
+
 extern bool_t opt_hap_1gb, opt_hap_2mb;
 
 /*
@@ -268,7 +276,9 @@ struct p2m_domain {
     int                (*write_p2m_entry)(struct p2m_domain *p2m,
                                           unsigned long gfn, l1_pgentry_t *p,
                                           l1_pgentry_t new, unsigned int level);
+#if P2M_AUDIT
     long               (*audit_p2m)(struct p2m_domain *p2m);
+#endif
 
     /*
      * P2M updates may require TLBs to be flushed (invalidated).
@@ -758,14 +768,6 @@ extern void p2m_pt_init(struct p2m_domai
 void *map_domain_gfn(struct p2m_domain *p2m, gfn_t gfn, mfn_t *mfn,
                      p2m_query_t q, uint32_t *pfec);
 
-/* Debugging and auditing of the P2M code? */
-#ifndef NDEBUG
-#define P2M_AUDIT     1
-#else
-#define P2M_AUDIT     0
-#endif
-#define P2M_DEBUGGING 0
-
 #if P2M_AUDIT
 extern void audit_p2m(struct domain *d,
                       uint64_t *orphans,



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:24:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:24:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13431.34069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhgk-0005Z6-BI; Wed, 28 Oct 2020 09:24:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13431.34069; Wed, 28 Oct 2020 09:24:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhgk-0005Yz-8I; Wed, 28 Oct 2020 09:24:14 +0000
Received: by outflank-mailman (input) for mailman id 13431;
 Wed, 28 Oct 2020 09:24:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXhgj-0005Yr-IG
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:24:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca21e3f6-7e57-4e2c-be35-2f85bffc0e51;
 Wed, 28 Oct 2020 09:24:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 14286ACAC;
 Wed, 28 Oct 2020 09:24:12 +0000 (UTC)
X-Inumbo-ID: ca21e3f6-7e57-4e2c-be35-2f85bffc0e51
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603877052;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JtLLY79rVz1/b+gt5Ib9A6igOOIy5NEZdkH5oVbCQnk=;
	b=VJI6vy9R90Vs2E3wVekuCxGJTGtQbMA4OcA0MkysEL29mIv9ddl22Dg2FTzRuu9F7+yyZf
	YZXwFWeCwK6agSPEID1Bc/CPdV4b0T3p8fvmZN/JucbKeOGA2IhHk3RlzULuld3IebIm4u
	DDcocneMpPBPVgjW78gYTUevI12lO4Q=
Subject: [PATCH 4/5] x86/HAP: move nested-P2M flush calculations out of locked
 region
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Message-ID: <551dc0a2-f5ef-a646-26eb-8a67ae428745@suse.com>
Date: Wed, 28 Oct 2020 10:24:12 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

By latching the old MFN into a local variable, these calculations come
to depend on nothing but local variables. Hence the point in time at
which they are performed no longer matters, and they can be moved out
of the locked region.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -780,7 +780,7 @@ hap_write_p2m_entry(struct p2m_domain *p
 {
     struct domain *d = p2m->domain;
     uint32_t old_flags;
-    bool_t flush_nestedp2m = 0;
+    mfn_t omfn;
     int rc;
 
     /* We know always use the host p2m here, regardless if the vcpu
@@ -790,21 +790,11 @@ hap_write_p2m_entry(struct p2m_domain *p
 
     paging_lock(d);
     old_flags = l1e_get_flags(*p);
-
-    if ( nestedhvm_enabled(d) && (old_flags & _PAGE_PRESENT) 
-         && !p2m_get_hostp2m(d)->defer_nested_flush ) {
-        /* We are replacing a valid entry so we need to flush nested p2ms,
-         * unless the only change is an increase in access rights. */
-        mfn_t omfn = l1e_get_mfn(*p);
-        mfn_t nmfn = l1e_get_mfn(new);
-
-        flush_nestedp2m = !(mfn_eq(omfn, nmfn)
-            && perms_strictly_increased(old_flags, l1e_get_flags(new)) );
-    }
+    omfn = l1e_get_mfn(*p);
 
     rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
                           p2m_flags_to_type(old_flags), l1e_get_mfn(new),
-                          l1e_get_mfn(*p), level);
+                          omfn, level);
     if ( rc )
     {
         paging_unlock(d);
@@ -817,7 +807,14 @@ hap_write_p2m_entry(struct p2m_domain *p
 
     paging_unlock(d);
 
-    if ( flush_nestedp2m )
+    if ( nestedhvm_enabled(d) && (old_flags & _PAGE_PRESENT) &&
+         !p2m_get_hostp2m(d)->defer_nested_flush &&
+         /*
+          * We are replacing a valid entry so we need to flush nested p2ms,
+          * unless the only change is an increase in access rights.
+          */
+         (!mfn_eq(omfn, l1e_get_mfn(new)) ||
+          !perms_strictly_increased(old_flags, l1e_get_flags(new))) )
         p2m_flush_nestedp2m(d);
 
     return 0;



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:24:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:24:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13435.34082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhhQ-0005gz-Mb; Wed, 28 Oct 2020 09:24:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13435.34082; Wed, 28 Oct 2020 09:24:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhhQ-0005gs-JF; Wed, 28 Oct 2020 09:24:56 +0000
Received: by outflank-mailman (input) for mailman id 13435;
 Wed, 28 Oct 2020 09:24:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXhhP-0005gl-UQ
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:24:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7387b699-f4a1-4898-bf58-bea01d086430;
 Wed, 28 Oct 2020 09:24:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D76F5AD1E;
 Wed, 28 Oct 2020 09:24:52 +0000 (UTC)
X-Inumbo-ID: 7387b699-f4a1-4898-bf58-bea01d086430
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603877092;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BDnAoknm+d841ow9/Slk5qnPpHUKxVRIWzU0TPE9W4Q=;
	b=IaRkl9wNnRkAYR4tHYkztpaya1S+G4WqI1ofPVXEweLaJHLizWLBP6alUy/hXF9q8YaIW4
	1bVoFRXzw9WjWgnDGPmRESha8NKmfKLb/wfvunYDHj5k26YTos9zGLIif3KmGJPbbsBvrr
	FDqdTRFs5ZRo3AYDeZRPdQbC713JjNQ=
Subject: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Tim Deegan <tim@xen.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Message-ID: <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
Date: Wed, 28 Oct 2020 10:24:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Large parts of the present handlers are identical; in fact
nestedp2m_write_p2m_entry() lacks a call to p2m_entry_modify(). Move
the common parts right into write_p2m_entry(), splitting the hook into
a "pre" one (needed just by shadow code) and a "post" one.

Of the common parts moved, I think the p2m_flush_nestedp2m() call is,
at least from an abstract perspective, also applicable in the shadow
case. Hence it doesn't get a third hook put in place.

The initial comment that was in hap_write_p2m_entry() gets dropped: Its
placement was bogus, and looking back at the commit introducing it
(dd6de3ab9985 "Implement Nested-on-Nested") I can't see what use of a
p2m it was meant to be associated with either.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
RFC: This is effectively the alternative to the suggestion in an earlier
     patch that we might do away with the hook altogether. Of course a
     hybrid approach would also be possible, by using direct calls here
     instead of splitting the hook into two.
---
I'm unsure whether p2m_init_nestedp2m() zapping the "pre" hook is
actually correct, or whether previously the sh_unshadow_for_p2m_change()
invocation was wrongly skipped.

--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -774,55 +774,18 @@ static void hap_update_paging_modes(stru
     put_gfn(d, cr3_gfn);
 }
 
-static int
-hap_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn, l1_pgentry_t *p,
-                    l1_pgentry_t new, unsigned int level)
+static void
+hap_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
 {
     struct domain *d = p2m->domain;
-    uint32_t old_flags;
-    mfn_t omfn;
-    int rc;
 
-    /* We know always use the host p2m here, regardless if the vcpu
-     * is in host or guest mode. The vcpu can be in guest mode by
-     * a hypercall which passes a domain and chooses mostly the first
-     * vcpu. */
-
-    paging_lock(d);
-    old_flags = l1e_get_flags(*p);
-    omfn = l1e_get_mfn(*p);
-
-    rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
-                          p2m_flags_to_type(old_flags), l1e_get_mfn(new),
-                          omfn, level);
-    if ( rc )
-    {
-        paging_unlock(d);
-        return rc;
-    }
-
-    safe_write_pte(p, new);
-    if ( old_flags & _PAGE_PRESENT )
+    if ( oflags & _PAGE_PRESENT )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
-
-    paging_unlock(d);
-
-    if ( nestedhvm_enabled(d) && (old_flags & _PAGE_PRESENT) &&
-         !p2m_get_hostp2m(d)->defer_nested_flush &&
-         /*
-          * We are replacing a valid entry so we need to flush nested p2ms,
-          * unless the only change is an increase in access rights.
-          */
-         (!mfn_eq(omfn, l1e_get_mfn(new)) ||
-          !perms_strictly_increased(old_flags, l1e_get_flags(new))) )
-        p2m_flush_nestedp2m(d);
-
-    return 0;
 }
 
 void hap_p2m_init(struct p2m_domain *p2m)
 {
-    p2m->write_p2m_entry = hap_write_p2m_entry;
+    p2m->write_p2m_entry_post = hap_write_p2m_entry_post;
 }
 
 static unsigned long hap_gva_to_gfn_real_mode(
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -71,24 +71,11 @@
 /*        NESTED VIRT P2M FUNCTIONS         */
 /********************************************/
 
-int
-nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-    l1_pgentry_t *p, l1_pgentry_t new, unsigned int level)
+void
+nestedp2m_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
 {
-    struct domain *d = p2m->domain;
-    uint32_t old_flags;
-
-    paging_lock(d);
-
-    old_flags = l1e_get_flags(*p);
-    safe_write_pte(p, new);
-
-    if (old_flags & _PAGE_PRESENT)
-        guest_flush_tlb_mask(d, p2m->dirty_cpumask);
-
-    paging_unlock(d);
-
-    return 0;
+    if ( oflags & _PAGE_PRESENT )
+        guest_flush_tlb_mask(p2m->domain, p2m->dirty_cpumask);
 }
 
 /********************************************/
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -122,17 +122,55 @@ static int write_p2m_entry(struct p2m_do
 {
     struct domain *d = p2m->domain;
     struct vcpu *v = current;
-    int rc = 0;
 
     if ( v->domain != d )
         v = d->vcpu ? d->vcpu[0] : NULL;
     if ( likely(v && paging_mode_enabled(d) && paging_get_hostmode(v)) ||
          p2m_is_nestedp2m(p2m) )
-        rc = p2m->write_p2m_entry(p2m, gfn, p, new, level);
+    {
+        unsigned int oflags;
+        mfn_t omfn;
+        int rc;
+
+        paging_lock(d);
+
+        if ( p2m->write_p2m_entry_pre )
+            p2m->write_p2m_entry_pre(d, gfn, p, new, level);
+
+        oflags = l1e_get_flags(*p);
+        omfn = l1e_get_mfn(*p);
+
+        rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
+                              p2m_flags_to_type(oflags), l1e_get_mfn(new),
+                              omfn, level);
+        if ( rc )
+        {
+            paging_unlock(d);
+            return rc;
+        }
+
+        safe_write_pte(p, new);
+
+        if ( p2m->write_p2m_entry_post )
+            p2m->write_p2m_entry_post(p2m, oflags);
+
+        paging_unlock(d);
+
+        if ( nestedhvm_enabled(d) && !p2m_is_nestedp2m(p2m) &&
+             (oflags & _PAGE_PRESENT) &&
+             !p2m_get_hostp2m(d)->defer_nested_flush &&
+             /*
+              * We are replacing a valid entry so we need to flush nested p2ms,
+              * unless the only change is an increase in access rights.
+              */
+             (!mfn_eq(omfn, l1e_get_mfn(new)) ||
+              !perms_strictly_increased(oflags, l1e_get_flags(new))) )
+            p2m_flush_nestedp2m(d);
+    }
     else
         safe_write_pte(p, new);
 
-    return rc;
+    return 0;
 }
 
 // Find the next level's P2M entry, checking for out-of-range gfn's...
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -198,7 +198,8 @@ static int p2m_init_nestedp2m(struct dom
             return -ENOMEM;
         }
         p2m->p2m_class = p2m_nested;
-        p2m->write_p2m_entry = nestedp2m_write_p2m_entry;
+        p2m->write_p2m_entry_pre = NULL;
+        p2m->write_p2m_entry_post = nestedp2m_write_p2m_entry_post;
         list_add(&p2m->np2m_list, &p2m_get_hostp2m(d)->np2m_list);
     }
 
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3137,34 +3137,22 @@ static void sh_unshadow_for_p2m_change(s
     }
 }
 
-static int
-shadow_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-                       l1_pgentry_t *p, l1_pgentry_t new,
-                       unsigned int level)
+static void
+sh_write_p2m_entry_pre(struct domain *d, unsigned long gfn, l1_pgentry_t *p,
+                       l1_pgentry_t new, unsigned int level)
 {
-    struct domain *d = p2m->domain;
-    int rc;
-
-    paging_lock(d);
-
     /* If there are any shadows, update them.  But if shadow_teardown()
      * has already been called then it's not safe to try. */
     if ( likely(d->arch.paging.shadow.total_pages != 0) )
          sh_unshadow_for_p2m_change(d, gfn, p, new, level);
-
-    rc = p2m_entry_modify(p2m, p2m_flags_to_type(l1e_get_flags(new)),
-                          p2m_flags_to_type(l1e_get_flags(*p)),
-                          l1e_get_mfn(new), l1e_get_mfn(*p), level);
-    if ( rc )
-    {
-        paging_unlock(d);
-        return rc;
-    }
-
-    /* Update the entry with new content */
-    safe_write_pte(p, new);
+}
 
 #if (SHADOW_OPTIMIZATIONS & SHOPT_FAST_FAULT_PATH)
+static void
+sh_write_p2m_entry_post(struct p2m_domain *p2m, unsigned int oflags)
+{
+    struct domain *d = p2m->domain;
+
     /* If we're doing FAST_FAULT_PATH, then shadow mode may have
        cached the fact that this is an mmio region in the shadow
        page tables.  Blow the tables away to remove the cache.
@@ -3176,16 +3164,15 @@ shadow_write_p2m_entry(struct p2m_domain
         shadow_blow_tables(d);
         d->arch.paging.shadow.has_fast_mmio_entries = false;
     }
-#endif
-
-    paging_unlock(d);
-
-    return 0;
 }
+#else
+# define sh_write_p2m_entry_post NULL
+#endif
 
 void shadow_p2m_init(struct p2m_domain *p2m)
 {
-    p2m->write_p2m_entry = shadow_write_p2m_entry;
+    p2m->write_p2m_entry_pre  = sh_write_p2m_entry_pre;
+    p2m->write_p2m_entry_post = sh_write_p2m_entry_post;
 }
 
 /**************************************************************************/
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -272,10 +272,13 @@ struct p2m_domain {
                                                   unsigned long first_gfn,
                                                   unsigned long last_gfn);
     void               (*memory_type_changed)(struct p2m_domain *p2m);
-    
-    int                (*write_p2m_entry)(struct p2m_domain *p2m,
-                                          unsigned long gfn, l1_pgentry_t *p,
-                                          l1_pgentry_t new, unsigned int level);
+    void               (*write_p2m_entry_pre)(struct domain *d,
+                                              unsigned long gfn,
+                                              l1_pgentry_t *p,
+                                              l1_pgentry_t new,
+                                              unsigned int level);
+    void               (*write_p2m_entry_post)(struct p2m_domain *p2m,
+                                               unsigned int oflags);
 #if P2M_AUDIT
     long               (*audit_p2m)(struct p2m_domain *p2m);
 #endif
@@ -472,7 +475,7 @@ void __put_gfn(struct p2m_domain *p2m, u
  *
  * This is also used in the shadow code whenever the paging lock is
  * held -- in those cases, the caller is protected against concurrent
- * p2m updates by the fact that shadow_write_p2m_entry() also takes
+ * p2m updates by the fact that write_p2m_entry() also takes
  * the paging lock.
  *
  * Note that an unlocked accessor only makes sense for a "query" lookup.
@@ -841,8 +844,8 @@ void np2m_flush_base(struct vcpu *v, uns
 void hap_p2m_init(struct p2m_domain *p2m);
 void shadow_p2m_init(struct p2m_domain *p2m);
 
-int nestedp2m_write_p2m_entry(struct p2m_domain *p2m, unsigned long gfn,
-    l1_pgentry_t *p, l1_pgentry_t new, unsigned int level);
+void nestedp2m_write_p2m_entry_post(struct p2m_domain *p2m,
+                                    unsigned int oflags);
 
 /*
  * Alternate p2m: shadow p2m tables used for alternate memory views



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:39:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13444.34094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhv6-0006lz-3d; Wed, 28 Oct 2020 09:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13444.34094; Wed, 28 Oct 2020 09:39:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhv6-0006ls-08; Wed, 28 Oct 2020 09:39:04 +0000
Received: by outflank-mailman (input) for mailman id 13444;
 Wed, 28 Oct 2020 09:39:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u6N/=ED=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXhv4-0006ln-Oq
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:39:02 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fde86503-6033-435c-8cdb-2d4025d92ffd;
 Wed, 28 Oct 2020 09:38:59 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXhv1-0005is-HI; Wed, 28 Oct 2020 09:38:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXhv1-0006WQ-3L; Wed, 28 Oct 2020 09:38:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXhv1-0004UQ-2K; Wed, 28 Oct 2020 09:38:59 +0000
X-Inumbo-ID: fde86503-6033-435c-8cdb-2d4025d92ffd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EYxvpVH2I8YA2HyPpQr/R10udsfD9bAM7APXHvwkWGc=; b=qSF91LuwmnDXPhqTzLrzJMe4nG
	UIALLWex2f+71LpM1i0x2aisUkBqEvndeEAuxAAQWxqdZJ7kHn4aCpZo/lhXhX0Kd2VdXERd9JzNk
	wuvrIFAVnphEf33Ln9KMDigw76YJ8U4EMg9jiitK5gGonJ6Uj5/ZzNITZi1rKZbMbGsw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156262-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 156262: regressions - FAIL
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-localmigrate/x10:fail:regression
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e274c8bdc12eb596e55233040e8b49da27150f31
X-Osstest-Versions-That:
    xen=63199dfd3a0418f1677c6ccd7fe05b123af4610a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 28 Oct 2020 09:38:59 +0000

flight 156262 xen-4.11-testing real [real]
flight 156275 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156262/
http://logs.test-lab.xenproject.org/osstest/logs/156275/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 18 guest-localmigrate/x10 fail REGR. vs. 156034

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156034
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156034
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156034
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156034
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156034
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156034
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156034
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156034
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156034
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156034
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156034
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156034
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156034
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  e274c8bdc12eb596e55233040e8b49da27150f31
baseline version:
 xen                  63199dfd3a0418f1677c6ccd7fe05b123af4610a

Last test of basis   156034  2020-10-20 13:35:54 Z    7 days
Testing same since   156262  2020-10-27 18:36:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit e274c8bdc12eb596e55233040e8b49da27150f31
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 19 15:51:22 2020 +0100

    x86/pv: Flush TLB in response to paging structure changes
    
    With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
    is safe from Xen's point of view (as the update only affects guest mappings),
    and the guest is required to flush (if necessary) after making updates.
    
    However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
    writeable pagetables, etc.) is an implementation detail outside of the
    API/ABI.
    
    Changes in the paging structure require invalidations in the linear pagetable
    range for subsequent accesses into the linear pagetables to access non-stale
    mappings.  Xen must provide suitable flushing to prevent intermixed guest
    actions from accidentally accessing/modifying the wrong pagetable.
    
    For all L2 and higher modifications, flush the TLB.  PV guests cannot create
    L2 or higher entries with the Global bit set, so no mappings established in
    the linear range can be global.  (This could in principle be an order 39 flush
    starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
    
    Express the necessary flushes as a set of booleans which accumulate across the
    operation.  Comment the flushing logic extensively.
    
    This is XSA-286.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 16a20963b3209788f2c0d3a3eebb7d92f03f5883)

commit 1d021db3c8712d25e25f078833baa160c90f260f
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 22 11:28:58 2020 +0100

    x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
    
    c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
    use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
    to the L4 path in do_mmu_update().
    
    However, this was unnecessary.
    
    It is the guest's responsibility to perform appropriate TLB flushing if the L4
    modification altered an established mapping in a flush-relevant way.  In this
    case, an MMUEXT_OP hypercall will follow.  The case which Xen needs to cover
    is when new mappings are created, and the resync on the exit-to-guest path
    covers this correctly.
    
    There is a corner case with multiple vCPUs in hypercalls at the same time,
    which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
    behaviour.
    
    Architecturally, established TLB entries can continue to be used until the
    broadcast flush has completed.  Therefore, even with concurrent hypercalls,
    the guest cannot depend on older mappings not being used until an MMUEXT_OP
    hypercall completes.  Xen's implementation of guest-initiated flushes will
    take correct effect on top of an in-progress hypercall, picking up new mapping
    setting before the other vCPU's MMUEXT_OP completes.
    
    Note: The correctness of this change is not impacted by whether XPTI uses
    global mappings or not.  Correctness there depends on the behaviour of Xen on
    the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
    
    This is (not really) XSA-286 (but necessary to simplify the logic).
    
    Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 055e1c3a3d95b1e753148369fbc4ba48782dd602)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:43:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13449.34111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhzR-0007eT-ND; Wed, 28 Oct 2020 09:43:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13449.34111; Wed, 28 Oct 2020 09:43:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXhzR-0007eM-K7; Wed, 28 Oct 2020 09:43:33 +0000
Received: by outflank-mailman (input) for mailman id 13449;
 Wed, 28 Oct 2020 09:43:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LslW=ED=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kXhzQ-0007eH-59
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:43:32 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52c65e87-5bb4-482c-b5f2-e3752ee3c0fb;
 Wed, 28 Oct 2020 09:43:30 +0000 (UTC)
X-Inumbo-ID: 52c65e87-5bb4-482c-b5f2-e3752ee3c0fb
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603878210;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=O4RYG2lbV+VmbSfi+1QhZafhH9q9sQLvL4b+ZywryoM=;
  b=N1nd5cSZxfQC8d3f+tdiZivB+hU6pK57SGdF0jcvlDsrArtbQwU1T+ja
   u5Le3FAZuiuwiJ63o9Ow3z0jXE+dNsdOiw4OdKCCLuWEn/z+/+ZjF8xq1
   9j6u+I43w0nVmgXcmbjbU53/BFXnPzE44YRuyCiHx4hxk8LEt0essCUYo
   I=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ijaz0sh9MrHa1LWvW8HjU0wA0PLbsrczsBrm0tcB6NxxNXomZpfp3Yl3QeszbzCGavcYX15X8I
 gtrieAueYReX6lxbDJAMu7SU2eh+Zf2CQf5YkLxzF9N2XaRVMw0/i09c7Ru0l73e5C01RfPgId
 Mvjt4SJnh1S1WyAFIUTKON1p0MY9QHzjcRcZ+kWB/Ppf/qO0jgHGdw9dYVE+ITDoiso47vUmgx
 0IYW4+IItAO/HG52b8byZwLMXB7cb5KM+ss8Ib91O46rADymd7s+F0KF7Qpz0ME5B5wCADrbRI
 WU0=
X-SBRS: None
X-MesageID: 30004068
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,426,1596513600"; 
   d="scan'208";a="30004068"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h6Wr2vAiE78cZ0ODJ+kE+O0iwwQJEN+C/SPDwB5bQzwVc/nvvIf6VwF8vY8wu3NSslsHvD8GvgzysS9RBgJo912MRhBYBi2EMQUK9yXrfX2OwuiCDWLgL6uvfP/kpE+0ed3XBeY3XxuWoFvoj8c3QfR9anX90ooAUTjb4PtTAc3WiD1skC2ID9jslibJhPZ2rfV982aafzsK4B1Le+O06XhHL8Ra7ASjBd3RIDmlFWHLCYIH6fMu2TbjbrzdGQgGQ31n5I8Yh1EeUUTB3SATAZyCigx4TCM22L4Cv2EkH/RGlrONGVmwIqfb4grw/owrkn/0udtpt65tqlWb0jP4xw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=O4RYG2lbV+VmbSfi+1QhZafhH9q9sQLvL4b+ZywryoM=;
 b=L31DMdX8701CB6zQN5g2zgdW0CJPyFp8RDvALoYPsJb+LYeZ1GzvDXfvnAT8HVINQFOKl4+HHn9Lv7YYD1WdgYLLNs3Ay6j9OvF2WF05xqRbpjqBGEcilGmdxJ/7zFx3gBEWI01We445iM0BpFQ3sPe09Rf5Z2ExT5pBFt56qeOkjgGtWCGks5lxJyPDPXVaWV/1FbvmKjkUt5or1as9DxTY+15KIb2qS+Y7vCRgRvAp6Qrkbhivtoq/1cZsWqPVLAmNIe++b9KO8maNRueaWWJtavcVw3+gDg0k9hOf5QSBGdBCiuva60mwauXGS7PvYOUvnzMat4xIpWSIFlgbSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=O4RYG2lbV+VmbSfi+1QhZafhH9q9sQLvL4b+ZywryoM=;
 b=K2zHOVAjBqRFJas/lIVdt3npIa2bDqgTZQ2zIhWnnUXg3tjMxf9BMD0iaLUAqlbbCXxihseNGlqS8hV6awurqO9eByQyMyZsUK8Hp6H3tlEB7VDwdvn/zmzh6zwwZxKYclye5kIqDS6CO54a7ZMivnozx8lYxYaFHCT3Tj6Wwmw=
From: George Dunlap <George.Dunlap@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Index: AQHWq7QzrIonuj7b9kK48A0ZtEUXrqmsDfmAgACngICAABCpgA==
Date: Wed, 28 Oct 2020 09:43:21 +0000
Message-ID: <80E82B52-EE04-440D-8AC7-8B0E7E833FE4@citrix.com>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
 <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2010271540110.12247@sstabellini-ThinkPad-T480s>
 <759F39C4-F834-4BFC-B897-714612AEACD8@arm.com>
In-Reply-To: <759F39C4-F834-4BFC-B897-714612AEACD8@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 6b86ce64-8338-4aa5-27de-08d87b25e857
x-ms-traffictypediagnostic: BYAPR03MB4520:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BYAPR03MB4520CED6497A0118E9BE064A99170@BYAPR03MB4520.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:5797;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: eKMok+O5Ui+MfS3wa3NJ2aEOqW1VNjNNnQBZLv0OEBHRLg/na2xES4ybbNxkB0XEB4MXZ37dGYLivBcZ+ayIeFiBvN6c2/NdjNpxVYntqDkr6Oh7eTFYZwZTUFqZEZJB9drXMBx8ZtFjloqRXoPDAO6CbfYWSFIUf5JVBnw5TGCsa9Dm3dq2KUA+oS2iDmEzTV7c8kfjASdM7UnK2hx0HHLSqbDccUeIi4aUX3wGVgf0aygv5X9LhEXpU4p8do+XD44jUGshZSOAnKHLm2ie4H7HLuxdjTaSlai9mJd+rT45qojJXsE73bFoy2wFnGngp14K8rIorueFBbvm2EYMaQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(376002)(136003)(366004)(39860400002)(8936002)(26005)(5660300002)(186003)(66946007)(2906002)(6486002)(36756003)(54906003)(91956017)(76116006)(8676002)(2616005)(66476007)(66556008)(66446008)(316002)(64756008)(4326008)(83380400001)(6506007)(53546011)(55236004)(6916009)(478600001)(71200400001)(86362001)(33656002)(6512007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: VeM4SpYHmyV+TxkMoOiVSk+ch0Zvx0FFiuSgw7FYLVVntvkM6N9QtnqUzPeVXe9vTAVVCMMGkyjdzme3A7mss083fl+XC0+AfI9otoK4zNy28eIjRQjhPoT95ulyfJiOMAS2El8sqlO/iQcJGmzrqcEEZZSIxQSDCb0MRN13zUZFIiFUHMn1g862mSOV1JDODJjF5vBv/5xuHnRMxEy4wHRTkhNIk4Yu8rtzNn8adfwzeb9KvdoKMsykV+iK1iRUrmvowf9JYtqHjcmVRL/vgva9SHxmamDKx/tyz+CqiKbW4uK+xyezNOV2janZxMXXORG+E+6FGCGyQLuzz488RaPsEQfTG6Y7xS1UdnDIFWyfAUVucIKlnQjzMZlbVgu2W5DxeHMXCOMEnDKiTco2G7shYpm32Fe0mLZDjjf32RCA5JqCLCIBHOTTih3bKaimDLopzmmCTas93sgSLqnq+9xohwLxy+0vLN8Fe5ODxGBGwBhWBUwOYQZTaGOzhlow5EOIc8yWHkG9SP8FO/Vnn0o/tGCEn5dZwDDiz/L/AezfAGdWkXDv2b48kOm33k7IqVkQXzSGmWw5VH5Uk+ravFFdPvDUUz9PbcRycHvlXCUM6xeq0uHLJEx9KWIi+HlWzWRZWj7DYOsb8bhuWgZr2g==
Content-Type: text/plain; charset="utf-8"
Content-ID: <5C9D075C27FD454B84E40189C56FB110@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b86ce64-8338-4aa5-27de-08d87b25e857
X-MS-Exchange-CrossTenant-originalarrivaltime: 28 Oct 2020 09:43:21.4118
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: t3AmvsEu7LBnHrKwAoASA6U1ccq0I3d+gDWJ52X5RPgcoL9C0gsL7lwxPyglTEs6yW0wO6YSxNOWX8zOb3fkajUtpoALcitmmLKPX5BrmdM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4520
X-OriginatorOrg: citrix.com



> On Oct 28, 2020, at 8:43 AM, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
> 
> 
> 
>> On 27 Oct 2020, at 22:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> 
>> On Mon, 26 Oct 2020, Bertrand Marquis wrote:
>>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>>> not implementing the workaround for it could deadlock the system.
>>> Add a warning during boot informing the user that only trusted guests
>>> should be executed on the system.
>>> An equivalent warning is already given to the user by KVM on cores
>>> affected by this errata.
>>> 
>>> Also taint the hypervisor as unsecure when this errata applies and
>>> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
>>> 
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> SUPPORT.md               |  1 +
>>> xen/arch/arm/cpuerrata.c | 13 +++++++++++++
>>> 2 files changed, 14 insertions(+)
>>> 
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index 5fbe5fc444..f7a3b046b0 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -38,6 +38,7 @@ supported in this document.
>>> ### ARM v8
>>> 
>>>    Status: Supported
>>> +    Status, Cortex A57 r0p0 - r1p2, not security supported (Errata 832075)

I think this should be:

8<—

    Status, Cortex A57 r0p0-r1p1: Supported, not security supported

For the Cortex A57 r0p0 - r1p1, see Errata 832075.

—>8

 -George



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:53:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:53:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13455.34124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXi8k-0000B8-RF; Wed, 28 Oct 2020 09:53:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13455.34124; Wed, 28 Oct 2020 09:53:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXi8k-0000B0-NR; Wed, 28 Oct 2020 09:53:10 +0000
Received: by outflank-mailman (input) for mailman id 13455;
 Wed, 28 Oct 2020 09:53:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u6N/=ED=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXi8j-0000Av-9M
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:53:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01eba60a-eab9-496c-83b0-0dd29d334e1b;
 Wed, 28 Oct 2020 09:53:07 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXi8h-00064B-2e; Wed, 28 Oct 2020 09:53:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXi8g-0007Or-Rw; Wed, 28 Oct 2020 09:53:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXi8g-00076Z-RQ; Wed, 28 Oct 2020 09:53:06 +0000
X-Inumbo-ID: 01eba60a-eab9-496c-83b0-0dd29d334e1b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9/lxplSNEsFghK2s+esbIMjaxVO+Q8RtjMHSYIgekA0=; b=DWfoAlQ43CvUOCS78IaTPOn3OZ
	Yn3Y18RK3GeWUGuUPZn65SY4GT608B3VP7kgEYHyb/NkCEMb6EYkyqCojCWdOW5me1mv99xznBsdB
	q+NmaA8+98v90w/0fi4nmO9RcOGer6zaLbmk6TuBYTMN+nV45s5eRIEHI/8wv6v9hVuo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156274-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 156274: all pass - PUSHED
X-Osstest-Versions-This:
    xen=16a20963b3209788f2c0d3a3eebb7d92f03f5883
X-Osstest-Versions-That:
    xen=6ca70821b59849ad97c3fadc47e63c1a4af1a78c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 28 Oct 2020 09:53:06 +0000

flight 156274 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156274/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  16a20963b3209788f2c0d3a3eebb7d92f03f5883
baseline version:
 xen                  6ca70821b59849ad97c3fadc47e63c1a4af1a78c

Last test of basis   156209  2020-10-25 09:18:26 Z    3 days
Testing same since   156274  2020-10-28 09:20:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Christian Lindig <christian.lindig@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Ian Jackson <iwj@xenproject.org>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    




Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6ca70821b5..16a20963b3  16a20963b3209788f2c0d3a3eebb7d92f03f5883 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 09:57:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 09:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13460.34139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXiCy-0000M9-D8; Wed, 28 Oct 2020 09:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13460.34139; Wed, 28 Oct 2020 09:57:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXiCy-0000M2-9e; Wed, 28 Oct 2020 09:57:32 +0000
Received: by outflank-mailman (input) for mailman id 13460;
 Wed, 28 Oct 2020 09:57:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zOlM=ED=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kXiCx-0000Lx-5e
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 09:57:31 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.73]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b12adf0-e10d-4fa6-8cdd-612d78a23aca;
 Wed, 28 Oct 2020 09:57:29 +0000 (UTC)
Received: from AM7PR04CA0019.eurprd04.prod.outlook.com (2603:10a6:20b:110::29)
 by AM0PR08MB4178.eurprd08.prod.outlook.com (2603:10a6:208:133::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.27; Wed, 28 Oct
 2020 09:57:27 +0000
Received: from AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:110:cafe::a0) by AM7PR04CA0019.outlook.office365.com
 (2603:10a6:20b:110::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Wed, 28 Oct 2020 09:57:27 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT007.mail.protection.outlook.com (10.152.16.145) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 09:57:26 +0000
Received: ("Tessian outbound 7c188528bfe0:v64");
 Wed, 28 Oct 2020 09:57:26 +0000
Received: from 13da28b2a5bc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 19F5A326-F0B6-4255-AEF3-8C0C07130426.1; 
 Wed, 28 Oct 2020 09:56:49 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 13da28b2a5bc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 28 Oct 2020 09:56:49 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3260.eurprd08.prod.outlook.com (2603:10a6:5:21::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.28; Wed, 28 Oct
 2020 09:56:45 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 09:56:45 +0000
X-Inumbo-ID: 2b12adf0-e10d-4fa6-8cdd-612d78a23aca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NQy8RBFw2JBcBUXvdwM4B3SFU81pp2y+IG4/qPH+uOU=;
 b=K3thq3oFJoPiKFU+B+wP1tjlbld1UPlQ4fyYcX9auYaaFUKAneb4Bjsf+siLbFKm/wkvjS11y/8qF1W+Y24gjGeLy1G1d8xVqNgQB08mRoie8efM6G/BzzsMDq9Ht8NqQ6hl5DSuYjUO1KZXQZY2tmVAX/dCe6NRSe5RQIgL3q4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 892a95f7c33f02b4
X-CR-MTA-TID: 64aa7808
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 28 Oct 2020 09:56:49 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X0qmTShFtRd0YTFv1O/quisql4IgNUlMgqRucPvO0TUueebusQJAGAeGxzAWeKWlwHAC6AD/EVmLZjluvvMQWcIGnDBSuqNQplfPQWFUNix+KP4ofUCVtm/eusOYL9yQI7E8+lRnhfkWDCrTQ4Q4JA23bbl/mK7CSMhWPuiixvZLep34cBidFwObbX8noexoos5x20iQEQiD+hFxTg/yYMseNpfdFjpxXSxZYMKKv9KEu+ttksfNJvwhULucX//dM2BdWtjhkHzPDbj+ni8jgeRTRbJGUOdU2Wjqpb7CXfCpddkfzxIuFdx7/4K3peUSKQk7UrMHJlwQIqvqApSwWA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NQy8RBFw2JBcBUXvdwM4B3SFU81pp2y+IG4/qPH+uOU=;
 b=kB20Ar0r5DNQNnV2jvmQW41el3l5FyEQxXVXt/AQ4KYRc063pQU+Au5DHzA7Chkq8oYiQQScOrX4BKxRtHKMTrIxd1b8TpFs0SSjVNHLyUwwNWmg9T30eIaexO+SGbxiYXHBiqhse0LkbiDf/p45o3ss9FfK6wV6BhBWFTlUvnO64I+UwW4s3biJoDX9oRU/8U865YQpgppBk5fw+4TzJoHODgq+3B1OtXCena4rhs8+s5W+D0ZyYZr3C4s3a7bCU3Vq6uh0+T9rFjz0IP/wpeo7i1b6tqsSioknoGm28WhI8FNHi+Oj1zGrmLGcfaTmCfMyFqGS2B96DKZ5ecy0XA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NQy8RBFw2JBcBUXvdwM4B3SFU81pp2y+IG4/qPH+uOU=;
 b=K3thq3oFJoPiKFU+B+wP1tjlbld1UPlQ4fyYcX9auYaaFUKAneb4Bjsf+siLbFKm/wkvjS11y/8qF1W+Y24gjGeLy1G1d8xVqNgQB08mRoie8efM6G/BzzsMDq9Ht8NqQ6hl5DSuYjUO1KZXQZY2tmVAX/dCe6NRSe5RQIgL3q4=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3260.eurprd08.prod.outlook.com (2603:10a6:5:21::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.28; Wed, 28 Oct
 2020 09:56:45 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 09:56:45 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: George Dunlap <george.dunlap@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Index: AQHWq7RKM3lRt4XRXUWYGPT9/eCkOKmsDfmAgACnfwCAABCqgIAAA70A
Date: Wed, 28 Oct 2020 09:56:45 +0000
Message-ID: <F216109E-83E1-418E-8802-249003415BA2@arm.com>
References:
 <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
 <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2010271540110.12247@sstabellini-ThinkPad-T480s>
 <759F39C4-F834-4BFC-B897-714612AEACD8@arm.com>
 <80E82B52-EE04-440D-8AC7-8B0E7E833FE4@citrix.com>
In-Reply-To: <80E82B52-EE04-440D-8AC7-8B0E7E833FE4@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: f84e5679-388b-4dca-4e9c-08d87b27e035
x-ms-traffictypediagnostic: DB7PR08MB3260:|AM0PR08MB4178:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB417842D51D5FF7724C1F30409D170@AM0PR08MB4178.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6790;OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 oWCVw9KOoPxPrnAKfb8JdNMT14TJ2d74ZXmDsPaw1Vn0jFiaHI+62nB1UYzbg6wr5htG8QWpmW1waTaTkuAl8VDvVSypoblAd68Rxs/0LKLt/r8t+Jg8Kxziv2H6VIm4Fhmaqo+ufBjLyBPinHsS7BqPqm6HJvyBU5QCtnwgopMilckEXnd8UePVVOAYz/GOSEVi/whCmuBrQzQK6ycbuq/DoobNruEroOxzqCcg+Cl3xe38e/89Mg29fh/C1KPyyALxn+/i47C3iZeVI/ezJzctkz/cDwLzecATvnSQMZOkXPsKjZo7BOxo9GRLJ7Kcr5IcdZmlr2DHWOTRR42TWw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(136003)(376002)(396003)(39860400002)(346002)(83380400001)(5660300002)(8936002)(6506007)(478600001)(53546011)(4326008)(36756003)(54906003)(186003)(2616005)(86362001)(33656002)(66476007)(66556008)(66946007)(71200400001)(91956017)(6512007)(26005)(76116006)(64756008)(66446008)(6916009)(316002)(6486002)(2906002)(8676002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 kC5Svj2a8AbLVKLL4ojgMdykOLpUHwkefB7OAIaqPfucc91K0oIpooMXAibXoQgboPgSdlmvMm64t8yhQTRtvw6k+a9rigkaRPVJC0miNyXHJd0OWlAvX/7jpAwMA9JwyYmxrRxh7H14656lWPQ8UUgMAo1n1jZ0UBn+972K5PCbek/YjL4Sop0tsUoX0gpr40n4u/uRdFrjN46uPiyrjIk0RgRU1Q6n2M0BGsaYLer/at1gG+02flS/tyX0TiqA6K3yRbtpVpRArV1WEF4y7N9YeUbRH1p+XDoS0CFl3B9qDYy6AKUuuH79FdassXr3kQtbwlgYBekXFM2CsLSsPCn8kFORCBsSELFaCK7JsnSLysKJkJs0TakN2wmSyI6rMNbwGRf/iDwPrd/PYFlrgBzgw17BE7kHMupDMzO6ilD5DZbzPeXOLA56oDK77LVfB+gN+PjLtpc9ahRzA5tHefoT8RlW8ozX3iNV2i4aJXMXBlwwvOlDFwy9cJCAPBcDOeMMfwSxPwXFZvLoNz5+sUaYBVe2B0+SnE3bZzepX6sFIAipz/W3kNRKydneGjaqfJUlp//swZzX7wVV840N2paP7XponZwEkqBYM/El8f0tW9qZLfb03QuhDCvCewpdUrNYfFiuSbgJzqWV2+HXLg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <490CBF1E5A65BA4A806A217C1C58BB87@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3260
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a49d7326-072a-4bf4-072e-08d87b27c7be
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Q8MRThr9gTmqq4ECTUsWdnIjNq/8WdlB57K4dyj8H2tHvs8aV3l1IkzsjiQfbfT4rlsuQJcxviqmXR5yY+eKrhK4Y4RnsD+h1TLLQh91GZZyIUQs2PZqtAWi+sOboILxVW9dHDeCEWxwUX0nNSh2SoiA+gR8ygDmBaTvpW3o05mK7n3my7QDFK1aORHOI3rFIsBaVlrexVktymackFdUb7p8Zh8M2/PA4yTYhkcZKvclMX34Dk0J81ui2sXjQP8dfh5g7e6a/0lcwVc72/3rhiuS5KcO1THvQDFxDtxlGIPNyc97W8sdFzGgajJFUKx9NzrOxXuiMgvkWfTPg9XvK1Eer8OZpRqzk55qVxhF06XZLDwcpp3qIoAIGNS0vJRHa/invn29KCbn1tko9oePRg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(376002)(39860400002)(46966005)(6486002)(4326008)(478600001)(26005)(53546011)(33656002)(6506007)(5660300002)(82310400003)(186003)(336012)(2616005)(6862004)(107886003)(86362001)(356005)(36756003)(82740400003)(47076004)(36906005)(83380400001)(8676002)(316002)(54906003)(81166007)(70586007)(70206006)(8936002)(2906002)(6512007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Oct 2020 09:57:26.7762
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f84e5679-388b-4dca-4e9c-08d87b27e035
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT007.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4178

Hi George,

> On 28 Oct 2020, at 09:43, George Dunlap <george.dunlap@citrix.com> wrote:
> 
> 
> 
>> On Oct 28, 2020, at 8:43 AM, Bertrand Marquis <Bertrand.Marquis@arm.com> wrote:
>> 
>> 
>> 
>>> On 27 Oct 2020, at 22:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> 
>>> On Mon, 26 Oct 2020, Bertrand Marquis wrote:
>>>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>>>> not implementing the workaround for it could deadlock the system.
>>>> Add a warning during boot informing the user that only trusted guests
>>>> should be executed on the system.
>>>> An equivalent warning is already given to the user by KVM on cores
>>>> affected by this errata.
>>>> 
>>>> Also taint the hypervisor as unsecure when this errata applies and
>>>> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
>>>> 
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>> ---
>>>> SUPPORT.md               |  1 +
>>>> xen/arch/arm/cpuerrata.c | 13 +++++++++++++
>>>> 2 files changed, 14 insertions(+)
>>>> 
>>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>>> index 5fbe5fc444..f7a3b046b0 100644
>>>> --- a/SUPPORT.md
>>>> +++ b/SUPPORT.md
>>>> @@ -38,6 +38,7 @@ supported in this document.
>>>> ### ARM v8
>>>> 
>>>>   Status: Supported
>>>> +    Status, Cortex A57 r0p0 - r1p2, not security supported (Errata 832075)
> 
> I think this should be:
> 
> 8<—
> 
>     Status, Cortex A57 r0p0-r1p1: Supported, not security supported
> 
> For the Cortex A57 r0p0 - r1p1, see Errata 832075.
> 
> —>8
> 

Ok I will fix that.

Thanks for the review

Cheers
Bertrand

> -George
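The warning discussed in this patch is keyed to the affected revision range, which comes from decoding MIDR_EL1. A standalone sketch of that check follows; the macro and function names here are invented for illustration and are not the ones used in xen/arch/arm/cpuerrata.c:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* MIDR_EL1 field layout per the Arm ARM:
 * [31:24] Implementer, [23:20] Variant (rN), [15:4] PartNum, [3:0] Revision (pN). */
#define MIDR_VARIANT(m)   (((m) >> 20) & 0xf)
#define MIDR_PARTNUM(m)   (((m) >> 4) & 0xfff)
#define MIDR_REVISION(m)  ((m) & 0xf)

#define PARTNUM_CORTEX_A57 0xd07

/* Erratum 832075 affects Cortex-A57 r0p0 up to and including r1p2. */
static bool a57_has_erratum_832075(uint32_t midr)
{
    unsigned int variant = MIDR_VARIANT(midr);
    unsigned int revision = MIDR_REVISION(midr);

    if (MIDR_PARTNUM(midr) != PARTNUM_CORTEX_A57)
        return false;

    return variant < 1 || (variant == 1 && revision <= 2);
}
```

Given this predicate, the boot code would print the "only trusted guests" warning and taint the hypervisor whenever it returns true for the booting CPU.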


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 10:28:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 10:28:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13485.34150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXigl-00035M-WA; Wed, 28 Oct 2020 10:28:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13485.34150; Wed, 28 Oct 2020 10:28:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXigl-00035F-TE; Wed, 28 Oct 2020 10:28:19 +0000
Received: by outflank-mailman (input) for mailman id 13485;
 Wed, 28 Oct 2020 10:28:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LDgn=ED=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXigj-00035A-U0
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 10:28:18 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.20])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 940f04aa-0f76-462e-b7cb-0a490b3c20f9;
 Wed, 28 Oct 2020 10:28:15 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.3 DYNA|AUTH)
 with ESMTPSA id D03373w9SASE413
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 28 Oct 2020 11:28:14 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=LDgn=ED=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kXigj-00035A-U0
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 10:28:18 +0000
X-Inumbo-ID: 940f04aa-0f76-462e-b7cb-0a490b3c20f9
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.20])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 940f04aa-0f76-462e-b7cb-0a490b3c20f9;
	Wed, 28 Oct 2020 10:28:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603880895;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=F8L2CeB5oyokna2/67oN32iBHrH7X44E8SkvSap2Qc4=;
	b=BZxRiZEg4sChYbus6nL6RX3XWywWugKt1llRJ1H5FzfeUVW80gdsMh1F+/9ZFmir3b
	rh5Bgc4uWFAIgydkQ51mO2uz1AWybBEUhws8vZqXZ9fLO3V7ODB7vK7Fs3k3/RujdKYd
	RiVaGCN3JtgSf1bP5ogVVCZS1NZUr0yjkiyBZOJ5PBF6PdRdn3eXLOYm1qVxZqkzprAU
	hU4Nt0i3RT/jnGZGuwWFECaUPJt4LZa6Ip07z8qHedd4es+HXnNpZHfU81wt+JGf/gPx
	NGdWkaigtM5uB/V00XKLnc1pbRa1P3rjIE+fFZNw+S2UuRGrTxKjA0bSGEIGUhsXp0Eb
	DBQA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.2.3 DYNA|AUTH)
	with ESMTPSA id D03373w9SASE413
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client did not present a certificate);
	Wed, 28 Oct 2020 11:28:14 +0100 (CET)
Date: Wed, 28 Oct 2020 11:28:05 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: inconsistent pfn type checking in process_page_data
Message-ID: <20201028112805.34ae0c5d.olaf@aepfle.de>
In-Reply-To: <6c595f1b-72cf-4f9e-5bad-eb0ebde45630@citrix.com>
References: <20201027184119.1d3f701e.olaf@aepfle.de>
	<6c595f1b-72cf-4f9e-5bad-eb0ebde45630@citrix.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/3kgZeESTg/d5bWGlhjx7R2O"; protocol="application/pgp-signature"

--Sig_/3kgZeESTg/d5bWGlhjx7R2O
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Tue, 27 Oct 2020 23:18:47 +0000,
Andrew Cooper <andrew.cooper3@citrix.com> wrote:

> I suspect we probably want an is_known_page_type() predicate on the
> source side to sanity check data from Xen, and on the destination side
> in handle_page_data() to sanity check data in the stream, and we
> probably want a page_type_has_data() predicate to use in
> write_batch()/process_page_data() to ensure that the handling is consistent.

From what I have seen, two (or three) helpers for sender and receiver will be needed:
is_known_page_type()
is_data_page_type()
maybe is_ptbl_page_type() for normalise/localise.

I will prepare a batch of changes in the coming days.


Olaf
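For illustration, the proposed helper split could be shaped roughly like the following self-contained sketch; the page-type constants below are made-up stand-ins for the real XEN_DOMCTL_PFINFO_* values carried in the migration stream:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical page types, standing in for the real stream values. */
typedef enum {
    PAGE_TYPE_NORMAL,   /* ordinary RAM page: carries data in the stream */
    PAGE_TYPE_L1TAB,    /* page-table page: carries data, needs normalise */
    PAGE_TYPE_XTAB,     /* invalidated page: no data in the stream */
    PAGE_TYPE_BROKEN,   /* broken page: no data in the stream */
} page_type_t;

/* Sender and receiver accept only types both sides can handle. */
static bool is_known_page_type(page_type_t t)
{
    return t == PAGE_TYPE_NORMAL || t == PAGE_TYPE_L1TAB ||
           t == PAGE_TYPE_XTAB   || t == PAGE_TYPE_BROKEN;
}

/* Types whose records carry page contents. */
static bool is_data_page_type(page_type_t t)
{
    return t == PAGE_TYPE_NORMAL || t == PAGE_TYPE_L1TAB;
}

/* Page-table pages that need normalise/localise of their entries. */
static bool is_ptbl_page_type(page_type_t t)
{
    return t == PAGE_TYPE_L1TAB;
}
```

With such predicates, write_batch() and process_page_data() can share one consistent decision about which records carry data, instead of each open-coding the type checks.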

--Sig_/3kgZeESTg/d5bWGlhjx7R2O
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+ZR7UACgkQ86SN7mm1
DoCjqQ/9HGZvCkeE3UGjzeTGkO0GQV1SUh7PzZnQop3eFSnqh4j6dvx0qRt9VRka
q42+IcerDy2LoY7FTU4JP1vYSBWa3/NMzG88pgYdw28TWPmUXeMdR6xlDssvxeGV
9Mui0gqe4KAmzg5FurkH2Lai6a/e8CXGWIR3wjhbp8kkylDEzKGmj/M7kgd3S8Ru
UqeQxoTLqumx0I+3YuEyA08mWqwurRsKNtHz2TF2BwGfQMl/FGaPo0GPUX71NS+c
pbk2sg5+NWFoujAZ99ZNdM9FVpDNo1CZeCzMG7Ef29sDytGxkPUrrGR6cG6OCJYb
/BdjGD7cNFDrC3CJZ9w87A0QEJws2w1Z+W8GrUZXMBcqWnc0Kl4yszu4n9UTNSQ5
WpLHEIlMPCqKJM38NWLn7iGb1wgzIpyxgenuIR6gdtKyrl4YiDU+mwkJtaKuRjun
5kE44zuHZ/nsgkOcHeuzCTkXicEIcO3Q2oh/mTjVkp0DgKW3oe8+pQmfSDEJ2K5a
OxfLbVrUAMkX3KZ/tQ66S+qgl7v5wPr5FbFOoYmySwHu8M/lYeW+RTmrwOP1WrL2
/HtpryfAk6+KAX5mGyWaMF94GFeq8zpTJ3ANtMsXKsmg2mpPhytnalSNkH6wct3r
p2PrtaMW1FSOAHXfC1kmkQj6nUoqoqyUYGMPdCxDwbJycX1h4AQ=
=UQj5
-----END PGP SIGNATURE-----

--Sig_/3kgZeESTg/d5bWGlhjx7R2O--


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 10:39:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 10:39:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13503.34162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXirc-00043f-Vw; Wed, 28 Oct 2020 10:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13503.34162; Wed, 28 Oct 2020 10:39:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXirc-00043Y-Sx; Wed, 28 Oct 2020 10:39:32 +0000
Received: by outflank-mailman (input) for mailman id 13503;
 Wed, 28 Oct 2020 10:39:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7zZT=ED=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kXirb-00043T-Gk
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 10:39:31 +0000
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.66]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7c68d8b-b53c-4231-aa4d-3fc93ce6a709;
 Wed, 28 Oct 2020 10:39:28 +0000 (UTC)
Received: from AM6PR08CA0028.eurprd08.prod.outlook.com (2603:10a6:20b:c0::16)
 by DB8PR08MB5338.eurprd08.prod.outlook.com (2603:10a6:10:111::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Wed, 28 Oct
 2020 10:39:22 +0000
Received: from VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:c0:cafe::73) by AM6PR08CA0028.outlook.office365.com
 (2603:10a6:20b:c0::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Wed, 28 Oct 2020 10:39:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT044.mail.protection.outlook.com (10.152.19.106) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 10:39:21 +0000
Received: ("Tessian outbound c189680f801b:v64");
 Wed, 28 Oct 2020 10:39:21 +0000
Received: from a014cbc6987a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 09DA498F-1D35-4C61-9DE5-4F39688986DA.1; 
 Wed, 28 Oct 2020 10:38:44 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a014cbc6987a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 28 Oct 2020 10:38:44 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM4PR0802MB2131.eurprd08.prod.outlook.com (2603:10a6:200:5c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 28 Oct
 2020 10:38:42 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 10:38:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=7zZT=ED=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
	id 1kXirb-00043T-Gk
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 10:39:31 +0000
X-Inumbo-ID: a7c68d8b-b53c-4231-aa4d-3fc93ce6a709
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown [40.107.2.66])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a7c68d8b-b53c-4231-aa4d-3fc93ce6a709;
	Wed, 28 Oct 2020 10:39:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LcCdlXMvph6wqbVohNV4iRafpnaYadc/lEWrqMb/Hgw=;
 b=RDL9HdV0VaRh5d6ZWzGjMFfsMf7frSSdzvet0oJykdeZ2JinipaYxu86OyQlqGNguFZAhhym/tlYDJHjQaTUhDzoVi5UbUgFEZD4DBW1Q+EYDef2r8O/mkwf5wcTJjGyrpfy02kFOwQz8tpxjdzFhT3Ldqykjo6DPhPcNU8AujM=
Received: from AM6PR08CA0028.eurprd08.prod.outlook.com (2603:10a6:20b:c0::16)
 by DB8PR08MB5338.eurprd08.prod.outlook.com (2603:10a6:10:111::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Wed, 28 Oct
 2020 10:39:22 +0000
Received: from VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:c0:cafe::73) by AM6PR08CA0028.outlook.office365.com
 (2603:10a6:20b:c0::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Wed, 28 Oct 2020 10:39:22 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT044.mail.protection.outlook.com (10.152.19.106) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 10:39:21 +0000
Received: ("Tessian outbound c189680f801b:v64"); Wed, 28 Oct 2020 10:39:21 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: ccaa350e2fb60b1d
X-CR-MTA-TID: 64aa7808
Received: from a014cbc6987a.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 09DA498F-1D35-4C61-9DE5-4F39688986DA.1;
	Wed, 28 Oct 2020 10:38:44 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a014cbc6987a.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 28 Oct 2020 10:38:44 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iQJys9oEMHoM9stshN4uC0T/06g1GnFqw8HrguyCryg5eVb32OBN4voD5qhYwTqwdAy4GH7wnpl+XnQWlXhqU8BuTLiLB1Nb8DTA3evrj/2LLnzOBJ1YUjLe4nAXJ69wX3p0qSo5xwptkpeKeMbFPqBcOZIEp0u1nm55S2FZ7YwkW7jn0VR/w923qw0sWuCgCohxZlAaCx5qVCsXcTUrXnKXSG0yglF/vNfONlK700HPBgXVLLMig08BkZ6rZ1E9MfHpfQ3Ls20I9aHkmU3jZivBCZll+vn3m9gQozJfGMcXxZr0pfoID7/g6Kc7K9l279DvSgqQU6+jkeoeWrtYMw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LcCdlXMvph6wqbVohNV4iRafpnaYadc/lEWrqMb/Hgw=;
 b=mIH7J7B6iKuRXBR/Dvm4zIBP29V40DazqAxl7czqDYY6wL/fChsn/jvf26/aNDgDETlRy85dy6odHnNTpZohQCjppF3IB/kpJpuxXSSfwHIm4i9N1t6AMbwybGF9SBPNIB2VGj7razZz7ZzAccRORJb/iy1Lx1tbuZoyf9bvXo1/m0wgdoe3cnPIHtXPvBTP9OuX1Kf/jW+QxGzzUMNCO9Vsh4edNOpZl7eHtiWDjE4Y7wq+8ugMFFSJbYKJPsc3DuIn8yc3kZZQl9rqvsWBxqo8RC9rvHESuKOYlJN7BbkS37fqnQK5jIY2sJrabXqyCpPVUStySH5OZst/v8r21w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LcCdlXMvph6wqbVohNV4iRafpnaYadc/lEWrqMb/Hgw=;
 b=RDL9HdV0VaRh5d6ZWzGjMFfsMf7frSSdzvet0oJykdeZ2JinipaYxu86OyQlqGNguFZAhhym/tlYDJHjQaTUhDzoVi5UbUgFEZD4DBW1Q+EYDef2r8O/mkwf5wcTJjGyrpfy02kFOwQz8tpxjdzFhT3Ldqykjo6DPhPcNU8AujM=
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM4PR0802MB2131.eurprd08.prod.outlook.com (2603:10a6:200:5c::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 28 Oct
 2020 10:38:42 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 10:38:42 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
Thread-Topic: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
Thread-Index: AQHWq7waQtOOhLtUwkWCA6LhLv3lTamsG2AAgAC6JoA=
Date: Wed, 28 Oct 2020 10:38:41 +0000
Message-ID: <84A4BE33-5886-469C-87DE-7F73208E3ADE@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 2b84b869-fa8e-4e2d-6e04-08d87b2dbb4d
x-ms-traffictypediagnostic: AM4PR0802MB2131:|DB8PR08MB5338:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB5338F2A5C036E20DC0F200B7FC170@DB8PR08MB5338.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 j8Ku/Dw6K15SmFESinGISaYR7lqc89Rqc/lGfFuYMpeHHeH5mErB2AMOWvDFzLwoLV8ek5Xd2RqI4auibrlGk6Eg3H5nv8RLEDOrTm46LycJsidK3g3YZDojk6ukuZTKDjv1ndUL8ZMVJMByliQ+1IcKh6DKrGb5Ye0bQ+S/D9OrIgIVI87wRcsKfAnvb0+WWQMVPpxDkkxPSsQte1f/BNx1b7iRGcjQXUuC658bnXZjxyOyOal1ByPoR/K2NLK8b6Myt5zdfZa31LzklgiIuhoNLe2RiMtS2VyHGZM0zGWgHCHUzIjj01e4PutFmpXua4FTFLG3uTEheCnGwTtHRA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(396003)(39860400002)(136003)(376002)(346002)(66476007)(83380400001)(6512007)(86362001)(8936002)(478600001)(316002)(33656002)(55236004)(6506007)(53546011)(54906003)(2616005)(5660300002)(2906002)(26005)(8676002)(76116006)(71200400001)(36756003)(64756008)(6916009)(91956017)(4326008)(186003)(66446008)(66556008)(66946007)(6486002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 orhsE//ACXqHFp5m3LVjVORM44FWGu5mwZNZ/OrYDJU+sO/YhwKS4HcPa7pkI3nZeJZpqulW1ZCWmdIW30MD0Pukg/iEAPoCElKy5ma56D77yfYEz/vFZjQcMFq5t74vg5wxcGoJdweEaowMAMydia8uYAUnysT83qh7Fl88sBTcKxBPfAWRKUGO+ptjIC7RrTo6LwCEqN3wBYxsVCjeQmos7sJHdkoKl4Y2ZVgXav98C9IZl4hiXi3ciDhLTfbZOPa5drDDmZxctiPL2XJPR8nJEZuzDlub1yK9zF9CHwtcTkbH29Y7Ekjf3HGwxMbwju2HFQMkl6cMTNS3tf0DhE7qvmwXqrQaZOiMbJ31G7P34F4Azdl/SulcrKuHTzpD6sTwKx4QZjhoK2ytpg8r3krIiG45nLIArt76XnohiSmJ9pN44Xymj4lMaApthi+nMg4oqhQFhz+1iF5zSlx/zb6ZPAy4jqyTFVClxH/mrRrZ/QVGYmYX6N2S6D97koeuLs2rFDU1eSjxSJ0vly4391U+jO1i/A7tjNX8FW2nFXwB70RNp8kelSnJEGusHhtMWMZsxGpSxKgOnq/tSUPU8MviDOPWhOua4NJIyG2bTQ+vnJcbPbX2BWrvRGd/DtQgKaVW2RtCwvqEMLYUIprQMQ==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <7CEDBA91850A7F44BD3EC12709C74E72@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0802MB2131
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a358bb6d-b788-41b4-161f-08d87b2da3d6
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Oct 2020 10:39:21.7654
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b84b869-fa8e-4e2d-6e04-08d87b2dbb4d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT044.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5338

Hello Stefano,

> On 27 Oct 2020, at 11:32 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Mon, 26 Oct 2020, Rahul Singh wrote:
>> ARM platforms does not support ns16550 PCI support. When CONFIG_HAS_PCI
>                ^ do

OK, I will fix that in the next version.
>
>> is enabled for ARM a compilation error is observed.
>>
>> Fixed compilation error after introducing new kconfig option
>> CONFIG_HAS_NS16550_PCI for x86 platforms to support ns16550 PCI.
>>
>> No functional change.
>
> Written like that it would seem that ARM platforms do not support
> NS16550 on the PCI bus, but actually, it would be theoretically possible
> to have an NS16550 card slotted in a PCI bus on ARM, right?
>
> The problem is the current limitation in terms of Xen internal
> plumbing, such as lack of MSI support. Is that correct?
>
> If so, I'd update the commit message to reflect the situation a bit
> better.
>

Yes, you are right: on ARM platforms PCI is not fully supported, and
because of that I decided to disable it for ARM. Once we have full
support for PCI devices we can enable it for ARM as well. I will modify
the commit message to reflect this.

>
>> Signed-off-by: Rahul Singh <rahul.singh@arm.com>
>> ---
>> xen/drivers/char/Kconfig   |  7 +++++++
>> xen/drivers/char/ns16550.c | 32 ++++++++++++++++----------------
>> 2 files changed, 23 insertions(+), 16 deletions(-)
>>
>> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
>> index b572305657..8887e86afe 100644
>> --- a/xen/drivers/char/Kconfig
>> +++ b/xen/drivers/char/Kconfig
>> @@ -4,6 +4,13 @@ config HAS_NS16550
>> 	help
>> 	  This selects the 16550-series UART support. For most systems, say Y.
>>
>> +config HAS_NS16550_PCI
>> +	bool "NS16550 UART PCI support" if X86
>> +	default y
>> +	depends on X86 && HAS_NS16550 && HAS_PCI
>> +	help
>> +	  This selects the 16550-series UART PCI support. For most systems, say Y.
>
> I think that this should be a silent option:
> if HAS_NS16550 && HAS_PCI && X86 -> automatically enable
> otherwise -> automatically disable
>
> No need to show it to the user.
>
> The rest of course is fine.

OK, I will fix that. I will do the same as is done for HAS_NS16550:
x86:  silent option, depending on HAS_NS16550 && HAS_PCI
ARM: the user can decide to enable or disable it via menuconfig,
depending on HAS_NS16550 && HAS_PCI.
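A sketch of the arrangement being described (hypothetical, not the actual v2 patch; prompt text and help wording are illustrative) would use a conditional prompt, so the question is only offered on ARM while x86 gets the option silently whenever its dependencies are met:

```kconfig
config HAS_NS16550_PCI
	bool "NS16550 UART PCI support" if ARM
	default y
	depends on HAS_NS16550 && HAS_PCI
	help
	  This selects the 16550-series UART PCI support.
```

With `if ARM` on the prompt, x86 configs never see the question and simply get `y` when HAS_NS16550 and HAS_PCI are both enabled.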

>
>
>> config HAS_CADENCE_UART
>> 	bool "Xilinx Cadence UART driver"
>> 	default y
>> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
>> index d8b52eb813..bd1c2af956 100644
>> --- a/xen/drivers/char/ns16550.c
>> +++ b/xen/drivers/char/ns16550.c
>> @@ -16,7 +16,7 @@
>> #include <xen/timer.h>
>> #include <xen/serial.h>
>> #include <xen/iocap.h>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>> #include <xen/pci.h>
>> #include <xen/pci_regs.h>
>> #include <xen/pci_ids.h>
>> @@ -54,7 +54,7 @@ enum serial_param_type {
>>     reg_shift,
>>     reg_width,
>>     stop_bits,
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     bridge_bdf,
>>     device,
>>     port_bdf,
>> @@ -83,7 +83,7 @@ static struct ns16550 {
>>     unsigned int timeout_ms;
>>     bool_t intr_works;
>>     bool_t dw_usr_bsy;
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     /* PCI card parameters. */
>>     bool_t pb_bdf_enable;   /* if =1, pb-bdf effective, port behind bridge */
>>     bool_t ps_bdf_enable;   /* if =1, ps_bdf effective, port on pci card */
>> @@ -117,14 +117,14 @@ static const struct serial_param_var __initconst sp_vars[] = {
>>     {"reg-shift", reg_shift},
>>     {"reg-width", reg_width},
>>     {"stop-bits", stop_bits},
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     {"bridge", bridge_bdf},
>>     {"dev", device},
>>     {"port", port_bdf},
>> #endif
>> };
>>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>> struct ns16550_config {
>>     u16 vendor_id;
>>     u16 dev_id;
>> @@ -620,7 +620,7 @@ static int ns16550_getc(struct serial_port *port, char *pc)
>>
>> static void pci_serial_early_init(struct ns16550 *uart)
>> {
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     if ( !uart->ps_bdf_enable || uart->io_base >= 0x10000 )
>>         return;
>>
>> @@ -719,7 +719,7 @@ static void __init ns16550_init_preirq(struct serial_port *port)
>>
>> static void __init ns16550_init_irq(struct serial_port *port)
>> {
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     struct ns16550 *uart = port->uart;
>>
>>     if ( uart->msi )
>> @@ -761,7 +761,7 @@ static void __init ns16550_init_postirq(struct serial_port *port)
>>     uart->timeout_ms = max_t(
>>         unsigned int, 1, (bits * uart->fifo_size * 1000) / uart->baud);
>>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     if ( uart->bar || uart->ps_bdf_enable )
>>     {
>>         if ( uart->param && uart->param->mmio &&
>> @@ -841,7 +841,7 @@ static void ns16550_suspend(struct serial_port *port)
>>
>>     stop_timer(&uart->timer);
>>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     if ( uart->bar )
>>        uart->cr = pci_conf_read16(PCI_SBDF(0, uart->ps_bdf[0], uart->ps_bdf[1],
>>                                   uart->ps_bdf[2]), PCI_COMMAND);
>> @@ -850,7 +850,7 @@ static void ns16550_suspend(struct serial_port *port)
>>=20
>> static void _ns16550_resume(struct serial_port *port)
>> {
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     struct ns16550 *uart = port->uart;
>>
>>     if ( uart->bar )
>> @@ -1013,7 +1013,7 @@ static int __init check_existence(struct ns16550 *uart)
>>     return 1; /* Everything is MMIO */
>> #endif
>>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     pci_serial_early_init(uart);
>> #endif
>>
>> @@ -1044,7 +1044,7 @@ static int __init check_existence(struct ns16550 *uart)
>>     return (status == 0x90);
>> }
>>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>> static int __init
>> pci_uart_config(struct ns16550 *uart, bool_t skip_amt, unsigned int idx)
>> {
>> @@ -1305,7 +1305,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>>
>>     if ( *conf == ',' && *++conf != ',' )
>>     {
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>         if ( strncmp(conf, "pci", 3) == 0 )
>>         {
>>             if ( pci_uart_config(uart, 1/* skip AMT */, uart - ns16550_com) )
>> @@ -1327,7 +1327,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>>
>>     if ( *conf == ',' && *++conf != ',' )
>>     {
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>         if ( strncmp(conf, "msi", 3) == 0 )
>>         {
>>             conf += 3;
>> @@ -1339,7 +1339,7 @@ static bool __init parse_positional(struct ns16550 *uart, char **str)
>>             uart->irq = simple_strtol(conf, &conf, 10);
>>     }
>>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>     if ( *conf == ',' && *++conf != ',' )
>>     {
>>         conf = parse_pci(conf, NULL, &uart->ps_bdf[0],
>> @@ -1419,7 +1419,7 @@ static bool __init parse_namevalue_pairs(char *str, struct ns16550 *uart)
>>             uart->reg_width = simple_strtoul(param_value, NULL, 0);
>>             break;
>>
>> -#ifdef CONFIG_HAS_PCI
>> +#ifdef CONFIG_HAS_NS16550_PCI
>>         case bridge_bdf:
>>             if ( !parse_pci(param_value, NULL, &uart->ps_bdf[0],
>>                             &uart->ps_bdf[1], &uart->ps_bdf[2]) )
>> --
>> 2.17.1

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 10:40:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 10:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13511.34175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXisL-0004qL-Cf; Wed, 28 Oct 2020 10:40:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13511.34175; Wed, 28 Oct 2020 10:40:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXisL-0004qE-9G; Wed, 28 Oct 2020 10:40:17 +0000
Received: by outflank-mailman (input) for mailman id 13511;
 Wed, 28 Oct 2020 10:40:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dNQY=ED=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXisJ-0004q7-MI
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 10:40:15 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5475cf1a-d720-46a5-83f3-168c66c8200d;
 Wed, 28 Oct 2020 10:40:14 +0000 (UTC)
X-Inumbo-ID: 5475cf1a-d720-46a5-83f3-168c66c8200d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603881614;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to;
  bh=wVuX0JfPsx41AzSgKSRFihW5gCEIrOulUBagCzQ2rDk=;
  b=O6mwHhOWrM4+Ccb6IEnDG37dfrQyP3B2bIhLfZzc2mA+gohrMi/j+BIe
   duBvMOmGng4LhAc2fYY0A1jWnP31mM+TZA9mnF8h1CgcPeRxCrrsVInHO
   sdNY+t1nRzaPmGH3amr8fsvvh4jBHnr756G+kb4RSQvAvlGJn5qW5bcRh
   U=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: None
X-MesageID: 30289764
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,426,1596513600"; 
   d="asc'?scan'208";a="30289764"
Subject: Re: inconsistent pfn type checking in process_page_data
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>
References: <20201027184119.1d3f701e.olaf@aepfle.de>
 <6c595f1b-72cf-4f9e-5bad-eb0ebde45630@citrix.com>
 <20201028112805.34ae0c5d.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9eab5e65-ccaf-2f10-dea3-d91b4a6be402@citrix.com>
Date: Wed, 28 Oct 2020 10:39:55 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201028112805.34ae0c5d.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature";
	boundary="B1ne1FBauodjAqfrduvnYZaE92uAP4vh8"
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

--B1ne1FBauodjAqfrduvnYZaE92uAP4vh8
Content-Type: multipart/mixed; boundary="eimSeQqAIBTTEQBGOU5FWPdjtF0TNciFX"

--eimSeQqAIBTTEQBGOU5FWPdjtF0TNciFX
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB

On 28/10/2020 10:28, Olaf Hering wrote:
> Am Tue, 27 Oct 2020 23:18:47 +0000
> schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
>
>> I suspect we probably want an is_known_page_type() predicate on the
>> source side to sanity check data from Xen, and on the destination side
>> in handle_page_data() to sanity check data in the stream, and we
>> probably want a page_type_has_data() predicate to use in
>> write_batch()/process_page_data() to ensure that the handling is consistent.
> From what I have seen, two (or three) helpers for sender and receiver will be needed:
> is_known_page_type()
> is_data_page_type()
> maybe is_ptbl_page_type() for normalise/localise.

Ah yes.  That too.

I would however recommend against the name is_data_page_type(), because
then the predicate is ambiguous with "== NOTAB".

page_type_has_stream_data()?  It's a bit of a mouthful, but the purpose
is very clear.
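For illustration, a minimal sketch of such a predicate (hypothetical code, not taken from any actual patch; the XEN_DOMCTL_PFINFO_* values are reproduced here from memory of Xen's public domctl.h and should be checked against the real header):

```c
#include <stdbool.h>
#include <stdint.h>

/* Page-type field values as carried in the migration stream
 * (assumed from xen/include/public/domctl.h; only those needed here). */
#define XEN_DOMCTL_PFINFO_XTAB      (0xdU << 28) /* invalid page */
#define XEN_DOMCTL_PFINFO_XALLOC    (0xeU << 28) /* allocate-only page */
#define XEN_DOMCTL_PFINFO_BROKEN    (0xfU << 28) /* broken page */
#define XEN_DOMCTL_PFINFO_LTAB_MASK (0xfU << 28)

/* True if a record of this page type carries page contents in the
 * stream.  Invalid, allocate-only and broken pages have no data;
 * everything else (NOTAB, the pagetable types, ...) does. */
bool page_type_has_stream_data(uint32_t type)
{
    switch ( type & XEN_DOMCTL_PFINFO_LTAB_MASK )
    {
    case XEN_DOMCTL_PFINFO_XTAB:
    case XEN_DOMCTL_PFINFO_XALLOC:
    case XEN_DOMCTL_PFINFO_BROKEN:
        return false;
    default:
        return true;
    }
}
```

Both sender and receiver could then share this one predicate instead of open-coding the type lists in write_batch() and process_page_data().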

~Andrew


--eimSeQqAIBTTEQBGOU5FWPdjtF0TNciFX--

--B1ne1FBauodjAqfrduvnYZaE92uAP4vh8
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEzzVJW36m9w6nfSF2ZcP5BqXXn6AFAl+ZSnsACgkQZcP5BqXX
n6A8vBAAkUtSKB968OED2GEST7xxLYBm6ext/2HWLWeAtkS22TBwWdUFiGBlK88Y
L+SsQ37M06FIg04SRTibHrhHqQl4RhjlED4bsoP0T1zQPSlugc3T3sTNHldcdisF
2Ixk6q2J0+BDPrQMkR1F2prGhVCBpd6KMucJNrTAzxRp83irwXS0gqRXxGQZK/sX
dDaV223Ut3/v4Q4FC3ig2zU5O5AhZrh1DmOg2NK7Kv6nPDgo5HMLlJkI4DpWAJFS
tHVpbkCdAPeaYdbNaxyFtzkuboz4NicCcRKAT+xZOoG1lRSIC79XOQ52ulZCgNBP
UYSeV9z9kQ/vPM5ILiXKSTNd0eTuFlGRml+DDpyjv7FzaeKC/T0RGSrppLFH2ZNY
qm5lzrcDfw4UrBwankytBwMQIPe88wkOW+f9Xgg+NkNtft+EfxhRxc7oui0QVBuD
5djiScLvuFHSZZCc1I3EvSI5S1c1DPyHF/gs9EqldzLYDEZOV1dWDP8qqY8XaaZH
lQ0gKPo25pr3V/rV3YP0m0ZaapdXt9c91bBFdLIRNZY539tiNVNUBYWYqvWYJk9d
ddgd9b9F4umlA1/Yc90Zcp3vvtGiaPKUi24mg3M8SGIc/AUf5m3kWApgbfdq3QpC
OskFP/uzbJwzeqK+IQwZn1hWu9gfYW+s3L+rlQl89EXhv5T6/6c=
=IFFA
-----END PGP SIGNATURE-----

--B1ne1FBauodjAqfrduvnYZaE92uAP4vh8--


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 10:43:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 10:43:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13521.34187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXivi-00054I-U5; Wed, 28 Oct 2020 10:43:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13521.34187; Wed, 28 Oct 2020 10:43:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXivi-00054B-Qa; Wed, 28 Oct 2020 10:43:46 +0000
Received: by outflank-mailman (input) for mailman id 13521;
 Wed, 28 Oct 2020 10:43:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7zZT=ED=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kXivh-000546-CC
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 10:43:45 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.84]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4bd61bbe-094d-47fe-b4f1-2adbe0506b5d;
 Wed, 28 Oct 2020 10:43:41 +0000 (UTC)
Received: from AM6P193CA0104.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:88::45)
 by DB6PR0801MB1896.eurprd08.prod.outlook.com (2603:10a6:4:75::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 28 Oct
 2020 10:43:39 +0000
Received: from AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:88:cafe::9e) by AM6P193CA0104.outlook.office365.com
 (2603:10a6:209:88::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.24 via Frontend
 Transport; Wed, 28 Oct 2020 10:43:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT046.mail.protection.outlook.com (10.152.16.164) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 10:43:38 +0000
Received: ("Tessian outbound c579d876a324:v64");
 Wed, 28 Oct 2020 10:43:38 +0000
Received: from 8a2f2841c1e9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BB750250-8116-48BF-B724-4806D14E1944.1; 
 Wed, 28 Oct 2020 10:43:30 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8a2f2841c1e9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 28 Oct 2020 10:43:30 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM4PR08MB2803.eurprd08.prod.outlook.com (2603:10a6:205:5::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3455.21; Wed, 28 Oct
 2020 10:41:57 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 10:41:57 +0000
X-Inumbo-ID: 4bd61bbe-094d-47fe-b4f1-2adbe0506b5d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hs7F69O9QoS00esJRc3gJv0VF6Rg8v0f/788uBjM3OY=;
 b=suyoFX60CpXSYBvrZimOMQ5bVFzyxTibPXOjnM600k/zbdWaU/C8iYlZtEls9W+BIQJ+Tcj8fo5trFHSHt62AfEhSv610qCC+C0yT4+zBAiVyaZxj7uxDZ7B2VFRS6Lj/lG+UExhqfp7jmU1a9QUBnAGIZdGsOgrB5k2nqMgPfE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: acc32d9821f80553
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EDqCdut410/4+XCKQmLaX/NdR7ggr7+/6nyR6JhIOPOESOUt+ixy4a6A3kPQe5F7nSaaLB0fV0YVi5qpVVoSanwgxDsUYeL1VDxSzkzZ7jwqkW7UAqYiyg5Gu4j8AepucgdHOx3l7Op+YB0l5szWDlLEyCYVmRqcGXDCJhDIrXV0ZNjQQu62iUmo3GB7haYm2c8s7GUFsdasbIsdc+wZiIWRXMnOeqksQ7Iz+TR3Fpk8hLK3v0gQ1zA7Ufdj1Ula5sE4z0Iq7YdG+yi+QF0ofXNDS4xKpTTIbLYB6U7Jt3WGGr6ARPTlqIEWT6cdiA9Ih8LnbPgnbl7F0+3Q8t2lCg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=hs7F69O9QoS00esJRc3gJv0VF6Rg8v0f/788uBjM3OY=;
 b=MHIa2mHRkXWPfJQWeN1yMkQ/rSQmWDYsYmnCFpVpZBYoxf8w43tJ+z7zvUXrsTP7cd2hwUT6B4vHBQFxQV+HdKVsFl1fnKpHjPWYuDGf1qJvE2iMq7A0xlfjwMAluOp++gYp8oi2m92HAY7rQfLI9CEpyQduz5WZgZNtbM6gCmCuuLlNN6onanINofYnUCJd72Hlf9EU7gsH17SCW1qfiW+lmEBQvMLQmc/HhTTfqnQ7nEz65wMKj9NcEj/LyyvSROVov+ZhOpitObvTaULZ8MmD2jXVPbpfR1hSroIvEmeL7/evBtiqW2plCpZcocxwCDBrD60uJhdZutogvUBz5g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
Thread-Topic: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
Thread-Index: AQHWq7waQtOOhLtUwkWCA6LhLv3lTamsG2AAgACCJwCAADjrgA==
Date: Wed, 28 Oct 2020 10:41:57 +0000
Message-ID: <53C69BD9-716C-4ECF-8287-75C435C561CE@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
 <95b72e09-246b-dcbe-6254-86b3e25081c6@suse.com>
In-Reply-To: <95b72e09-246b-dcbe-6254-86b3e25081c6@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 4af4e108-e5a3-48c2-5d75-08d87b2e5484
x-ms-traffictypediagnostic: AM4PR08MB2803:|DB6PR0801MB1896:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB18965FD008816D2E22E1492BFC170@DB6PR0801MB1896.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <8DEB15AEC5641241A529044A4096E841@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2803
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e4be5d01-ff8c-43a8-ebca-08d87b2e1833
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Oct 2020 10:43:38.8978
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4af4e108-e5a3-48c2-5d75-08d87b2e5484
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1896

Hello Jan,

> On 28 Oct 2020, at 7:18 am, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 28.10.2020 00:32, Stefano Stabellini wrote:
>> On Mon, 26 Oct 2020, Rahul Singh wrote:
>>> --- a/xen/drivers/char/Kconfig
>>> +++ b/xen/drivers/char/Kconfig
>>> @@ -4,6 +4,13 @@ config HAS_NS16550
>>> 	help
>>> 	  This selects the 16550-series UART support. For most systems, say Y.
>>>
>>> +config HAS_NS16550_PCI
>>> +	bool "NS16550 UART PCI support" if X86
>>> +	default y
>>> +	depends on X86 && HAS_NS16550 && HAS_PCI
>>> +	help
>>> +	  This selects the 16550-series UART PCI support. For most systems, say Y.
>>
>> I think that this should be a silent option:
>> if HAS_NS16550 && HAS_PCI && X86 -> automatically enable
>> otherwise -> automatically disable
>>
>> No need to show it to the user.
>
> I agree in principle, but I don't see why an X86 dependency gets
> added here. HAS_PCI really should be all that's needed.
>

Yes, you are right. I will remove the X86 dependency and make
HAS_NS16550_PCI depend on "HAS_NS16550 && HAS_PCI".
> Jan

Regards,
Rahul
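The option Jan is asking for would look something like this (a sketch only; the exact form is up to the actual follow-up patch):

```kconfig
config HAS_NS16550_PCI
	def_bool y
	depends on HAS_NS16550 && HAS_PCI
```

That is, a silent option with no prompt and no X86 dependency, enabled exactly when both prerequisites are.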


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 10:54:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 10:54:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13536.34199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXj6L-00064J-VU; Wed, 28 Oct 2020 10:54:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13536.34199; Wed, 28 Oct 2020 10:54:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXj6L-00064C-SI; Wed, 28 Oct 2020 10:54:45 +0000
Received: by outflank-mailman (input) for mailman id 13536;
 Wed, 28 Oct 2020 10:54:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LDgn=ED=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXj6J-000647-Gl
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 10:54:44 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.23])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b618b0db-3d87-4272-97c6-873b8b3db9ed;
 Wed, 28 Oct 2020 10:54:41 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.3 DYNA|AUTH)
 with ESMTPSA id D03373w9SAse49x
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 28 Oct 2020 11:54:40 +0100 (CET)
X-Inumbo-ID: b618b0db-3d87-4272-97c6-873b8b3db9ed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603882480;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=F5cVw0ub4nYD+6IzGEx8WGBH6p+u5cDBGyKNxKTF1lU=;
	b=AEUN0IXAvVCBPr74waEBjlH4YrpsJtNJQLc1avD4YOqdUc5SWeTMfEsR1bosLewMOV
	PARleDFNPFlKRvzWPH+yzO3ZH3+8Q/eo3bvscx4vyVVQnsacrMNuM1sHPr6gVgns9ZIH
	ZTV+1Cbnbr6SIovIUYeeO/t06SBPP6OQuhFTVIShCUXafdbLtzzEwZLa5k5MU41ygicv
	yqfzi6odfE9xb58DggzozhT4ljfUK91d/vm4/5VEadQ4tO/6sPxCvoiroVxMw4dIRhh4
	xodWxT4Hhes2SsGVtkGJQ1w12VbnbGjIfqbaGIjRve/nA4Xv/O/+RpWf2WqbweJM5YiH
	LoAg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Date: Wed, 28 Oct 2020 11:54:33 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: inconsistent pfn type checking in process_page_data
Message-ID: <20201028115433.4ebd168e.olaf@aepfle.de>
In-Reply-To: <9eab5e65-ccaf-2f10-dea3-d91b4a6be402@citrix.com>
References: <20201027184119.1d3f701e.olaf@aepfle.de>
	<6c595f1b-72cf-4f9e-5bad-eb0ebde45630@citrix.com>
	<20201028112805.34ae0c5d.olaf@aepfle.de>
	<9eab5e65-ccaf-2f10-dea3-d91b4a6be402@citrix.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/b6sFZ1kdsbbnR30yg.Vrgnh"; protocol="application/pgp-signature"

--Sig_/b6sFZ1kdsbbnR30yg.Vrgnh
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Wed, 28 Oct 2020 10:39:55 +0000,
Andrew Cooper <andrew.cooper3@citrix.com> wrote:

> I would however recommend against the name is_data_page_type() because
> then the predicate is ambiguous with "== NOTAB".

Why is it ambiguous? It is not obvious to me.

page_type_has_stream_data() is certainly fine as a name.


Olaf

--Sig_/b6sFZ1kdsbbnR30yg.Vrgnh
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+ZTekACgkQ86SN7mm1
DoCNoRAAkYe0qOiHHRuJnfW7RsfU4F5+tg+ROk0pyXD0bGbfsv8ngAz17aCGYmze
DLqIDjC/nc4ZY9oougxQm6noYZwuAa443lWq7TQMrf5PlUSbQe3SEjAaOtiU3ulm
TQKnLm8O30HPzYnIrm/jvm5P9OZEv/Mu+YmQYoO0srvydi/NYcEYMZnn4y1ngglp
KCGy26PtBsTLz6fkSIe2g3FEdB/eHuGalpvj9zJp/lj5AKeYeE6Y56mHy2rkIioT
4/24EraqIcstJt4K2MxCSNiSWRzdArimToGIEx9p4B0+jlgvZI+TaFW2yKWtYOOj
FALaicDFYd/sREx/ICbS5dG9xolNJfWftuSawg1cmmYfWLqd3xw2+LmKm36DG7TW
+eJ9jvFJEybMAKkRdcG/6nrhROTHJtLIpI8TiBl+mcMwdM9YuN44IU+uyWN1Mxiq
EYSZ+yvgMFOyZFm8zbn9oSBwgnwbzV20SbTtOgR1/XWd39tUMhNQ4fTkPltBMYXr
hEbsAbyJ2NSAhlLCKwLc+ovTeCEZirWtfYrfn/psdngvAp92ESY1LDKKkwwM2tcD
VDgQHSFfnfib/UZw6KS6au4hM9hD8YlzpCzqsGbqq0iBhwdevz0sQdFg6KZ2yqRd
rjUCjLI7RYKdUFyjuTkJFzoTEh2N5fXjp/zZlyU/cca37Cut4YU=
=cIaJ
-----END PGP SIGNATURE-----

--Sig_/b6sFZ1kdsbbnR30yg.Vrgnh--


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 11:22:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 11:22:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13559.34214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXjWo-0000Ey-6w; Wed, 28 Oct 2020 11:22:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13559.34214; Wed, 28 Oct 2020 11:22:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXjWo-0000Eq-3i; Wed, 28 Oct 2020 11:22:06 +0000
Received: by outflank-mailman (input) for mailman id 13559;
 Wed, 28 Oct 2020 11:22:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u6N/=ED=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXjWn-0000E7-I1
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 11:22:05 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebee9be3-dad7-4e0b-aa34-d60738d1f687;
 Wed, 28 Oct 2020 11:21:58 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXjWf-0007xw-Jw; Wed, 28 Oct 2020 11:21:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXjWf-0003D1-Al; Wed, 28 Oct 2020 11:21:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXjWf-0008JU-A7; Wed, 28 Oct 2020 11:21:57 +0000
X-Inumbo-ID: ebee9be3-dad7-4e0b-aa34-d60738d1f687
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Iheht75fgkxlIAChmSAtjxamzJodbCPHVEyznDO9rxs=; b=JscIUmnABTsxNdJLfUunXExysW
	5afcCWoTSC114K7ALdi1T+jYgTN+rhH5LMEXCDmvp28S0/nqlLXLOYa5ca0etbOZ+GGcBKBsLaK4d
	NrZBRbMY+ap076ZeDoeCF/sstjVmjX72KYX/hF/aqDVYqW8FFXAAnNUI6WsDEbDfALig=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156263-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156263: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:regression
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4100d463dbdd95d85fabe387dd5676bed75f65f7
X-Osstest-Versions-That:
    xen=0108b011e133915a8ebd33636811d8c141b6e9f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 28 Oct 2020 11:21:57 +0000

flight 156263 xen-4.12-testing real [real]
flight 156278 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156263/
http://logs.test-lab.xenproject.org/osstest/logs/156278/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2      fail REGR. vs. 156035

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156035
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156035
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156035
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156035
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4100d463dbdd95d85fabe387dd5676bed75f65f7
baseline version:
 xen                  0108b011e133915a8ebd33636811d8c141b6e9f3

Last test of basis   156035  2020-10-20 13:36:02 Z    7 days
Testing same since   156263  2020-10-27 18:36:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4100d463dbdd95d85fabe387dd5676bed75f65f7
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 19 15:51:22 2020 +0100

    x86/pv: Flush TLB in response to paging structure changes
    
    With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
    is safe from Xen's point of view (as the update only affects guest mappings),
    and the guest is required to flush (if necessary) after making updates.
    
    However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
    writeable pagetables, etc.) is an implementation detail outside of the
    API/ABI.
    
    Changes in the paging structure require invalidations in the linear pagetable
    range for subsequent accesses into the linear pagetables to access non-stale
    mappings.  Xen must provide suitable flushing to prevent intermixed guest
    actions from accidentally accessing/modifying the wrong pagetable.
    
    For all L2 and higher modifications, flush the TLB.  PV guests cannot create
    L2 or higher entries with the Global bit set, so no mappings established in
    the linear range can be global.  (This could in principle be an order 39 flush
    starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
    
    Express the necessary flushes as a set of booleans which accumulate across the
    operation.  Comment the flushing logic extensively.
    
    This is XSA-286.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 16a20963b3209788f2c0d3a3eebb7d92f03f5883)

commit b1d6f37aa5aa9f3fc5a269b9dd21b7feb7444be0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 22 11:28:58 2020 +0100

    x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
    
    c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
    use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
    to the L4 path in do_mmu_update().
    
    However, this was unnecessary.
    
    It is the guest's responsibility to perform appropriate TLB flushing if the L4
    modification altered an established mapping in a flush-relevant way.  In this
    case, an MMUEXT_OP hypercall will follow.  The case which Xen needs to cover
    is when new mappings are created, and the resync on the exit-to-guest path
    covers this correctly.
    
    There is a corner case with multiple vCPUs in hypercalls at the same time,
    which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
    behaviour.
    
    Architecturally, established TLB entries can continue to be used until the
    broadcast flush has completed.  Therefore, even with concurrent hypercalls,
    the guest cannot depend on older mappings not being used until an MMUEXT_OP
    hypercall completes.  Xen's implementation of guest-initiated flushes will
    take correct effect on top of an in-progress hypercall, picking up new mapping
    settings before the other vCPU's MMUEXT_OP completes.
    
    Note: The correctness of this change is not impacted by whether XPTI uses
    global mappings or not.  Correctness there depends on the behaviour of Xen on
    the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
    
    This is (not really) XSA-286 (but necessary to simplify the logic).
    
    Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 055e1c3a3d95b1e753148369fbc4ba48782dd602)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 11:33:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 11:33:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13565.34229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXjh0-0001Dl-B3; Wed, 28 Oct 2020 11:32:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13565.34229; Wed, 28 Oct 2020 11:32:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXjh0-0001De-7p; Wed, 28 Oct 2020 11:32:38 +0000
Received: by outflank-mailman (input) for mailman id 13565;
 Wed, 28 Oct 2020 11:32:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HSML=ED=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXjgz-0001DW-9G
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 11:32:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17b7a1e1-20c6-44a1-a28a-fbe40cd03ace;
 Wed, 28 Oct 2020 11:32:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXjgw-0008CQ-TW; Wed, 28 Oct 2020 11:32:34 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXjgw-0004J2-JB; Wed, 28 Oct 2020 11:32:34 +0000
X-Inumbo-ID: 17b7a1e1-20c6-44a1-a28a-fbe40cd03ace
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=8TqnBbQSf5YysIeqnRqr4I+OKNKS396o2cuIBGvRYDE=; b=NEs9lfuTSDF+tZmdfW4KfpAhX9
	qMC7rv7HKT0CuS1XPgYOMkP2DWQuvbTt6i/9dwNjG5zN5SwSDw4eY/l1305Ump5VEjZzNsR+FR19Y
	xUPx0Dqc2ZKBSbI3m6l+HgqMaP2JiqpoZsLASK3Og+8gUVx7GYSUoOZqAsspqYzbtWoA=;
Subject: Re: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
To: Rahul Singh <Rahul.Singh@arm.com>, Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
 <95b72e09-246b-dcbe-6254-86b3e25081c6@suse.com>
 <53C69BD9-716C-4ECF-8287-75C435C561CE@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <749813e0-0b04-0ecf-5dc6-96cfe53c786b@xen.org>
Date: Wed, 28 Oct 2020 11:32:32 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <53C69BD9-716C-4ECF-8287-75C435C561CE@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 28/10/2020 10:41, Rahul Singh wrote:
>> On 28 Oct 2020, at 7:18 am, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 28.10.2020 00:32, Stefano Stabellini wrote:
>>> On Mon, 26 Oct 2020, Rahul Singh wrote:
>>>> --- a/xen/drivers/char/Kconfig
>>>> +++ b/xen/drivers/char/Kconfig
>>>> @@ -4,6 +4,13 @@ config HAS_NS16550
>>>> 	help
>>>> 	  This selects the 16550-series UART support. For most systems, say Y.
>>>>
>>>> +config HAS_NS16550_PCI
>>>> +	bool "NS16550 UART PCI support" if X86
>>>> +	default y
>>>> +	depends on X86 && HAS_NS16550 && HAS_PCI
>>>> +	help
>>>> +	  This selects the 16550-series UART PCI support. For most systems, say Y.
>>>
>>> I think that this should be a silent option:
>>> if HAS_NS16550 && HAS_PCI && X86 -> automatically enable
>>> otherwise -> automatically disable
>>>
>>> No need to show it to the user.
>>
>> I agree in principle, but I don't see why an X86 dependency gets
>> added here. HAS_PCI really should be all that's needed.
>>
> 
> Yes, you are right. I will remove the X86 dependency and make HAS_NS16550_PCI depend on "HAS_NS16550 && HAS_PCI".
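
As an illustrative sketch only (not the actual follow-up patch), the silent
option being discussed, with the X86 dependency dropped as suggested, might
look something like:

```kconfig
config HAS_NS16550_PCI
	bool
	depends on HAS_NS16550 && HAS_PCI
	default y
```

With no prompt string, Kconfig never asks the user about the option; it is
simply derived from HAS_NS16550 and HAS_PCI.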

There is quite a bit of work needed to make the PCI part of the 
implementation build on Arm, because the code refers to x86 functions.

While in theory an NS16550 PCI card could be used on Arm, there is only 
a slim chance of actual users. So I am not convinced the effort is 
worth it here.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 11:52:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 11:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13571.34241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXjzp-0002yK-0J; Wed, 28 Oct 2020 11:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13571.34241; Wed, 28 Oct 2020 11:52:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXjzo-0002yD-TR; Wed, 28 Oct 2020 11:52:04 +0000
Received: by outflank-mailman (input) for mailman id 13571;
 Wed, 28 Oct 2020 11:52:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXjzn-0002y7-Hk
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 11:52:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9936a26f-734a-4594-90c6-d643d55aebff;
 Wed, 28 Oct 2020 11:52:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9B008AC97;
 Wed, 28 Oct 2020 11:52:01 +0000 (UTC)
X-Inumbo-ID: 9936a26f-734a-4594-90c6-d643d55aebff
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603885921;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iuwYJfNBzBhzAi3pieRVMFr8JOzItk4j7lHBl/TIFt8=;
	b=q3ypEUzjndjDgCBPG82CESf82vsxNElJdsssdcsyc83wPDxfpU83gSciog72DHeO6Z4mHN
	Yul1ZcQ0lSqrbm6jfyS80Cee6gyvmZ+k/6Jwj5CRwv758ReFhOqH1iqNWQ8bRZEq6kdVUv
	S/wgNwxsZrHGGE4pSO6WzjM4kC2t/Ak=
Subject: Re: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86 directory.
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1603731279.git.rahul.singh@arm.com>
 <70029e8904170c4f19d9f521847050cd00c6e39d.1603731279.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <301405a2-9ec1-847d-6f61-1067a225a3a9@suse.com>
Date: Wed, 28 Oct 2020 12:51:53 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <70029e8904170c4f19d9f521847050cd00c6e39d.1603731279.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 18:17, Rahul Singh wrote:
> passthrough/pci.c file is common for all architectures, but there is x86
> specific code in this file.

The code you move doesn't look to be x86 specific in the sense that
it makes no sense on other architectures, but just because certain
pieces are missing on Arm. With this I question whether it is really
appropriate to move this code. I do realize that in similar earlier
cases my questioning was mostly ignored ...

> --- /dev/null
> +++ b/xen/drivers/passthrough/x86/pci.c
> @@ -0,0 +1,97 @@
> +/*
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/param.h>
> +#include <xen/sched.h>
> +#include <xen/pci.h>
> +#include <xen/pci_regs.h>
> +
> +static int pci_clean_dpci_irq(struct domain *d,
> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
> +{
> +    struct dev_intx_gsi_link *digl, *tmp;
> +
> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
> +
> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
> +        kill_timer(&pirq_dpci->timer);
> +
> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
> +    {
> +        list_del(&digl->list);
> +        xfree(digl);
> +    }
> +
> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
> +
> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
> +        return 0;
> +
> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
> +
> +    return -ERESTART;
> +}
> +
> +static int pci_clean_dpci_irqs(struct domain *d)
> +{
> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
> +
> +    if ( !is_iommu_enabled(d) )
> +        return 0;
> +
> +    if ( !is_hvm_domain(d) )
> +        return 0;
> +
> +    spin_lock(&d->event_lock);
> +    hvm_irq_dpci = domain_get_irq_dpci(d);
> +    if ( hvm_irq_dpci != NULL )
> +    {
> +        int ret = 0;
> +
> +        if ( hvm_irq_dpci->pending_pirq_dpci )
> +        {
> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
> +                 ret = -ERESTART;
> +            else
> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
> +        }
> +
> +        if ( !ret )
> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
> +        if ( ret )
> +        {
> +            spin_unlock(&d->event_lock);
> +            return ret;
> +        }
> +
> +        hvm_domain_irq(d)->dpci = NULL;
> +        free_hvm_irq_dpci(hvm_irq_dpci);
> +    }
> +    spin_unlock(&d->event_lock);
> +    return 0;

While moving it, please add the missing blank line before the final
return statement of the function.

> +}
> +
> +int arch_pci_release_devices(struct domain *d)
> +{
> +    return pci_clean_dpci_irqs(d);
> +}

Why the extra function layer?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 11:56:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 11:56:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13575.34252 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXk4C-0003AV-Gh; Wed, 28 Oct 2020 11:56:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13575.34252; Wed, 28 Oct 2020 11:56:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXk4C-0003AO-De; Wed, 28 Oct 2020 11:56:36 +0000
Received: by outflank-mailman (input) for mailman id 13575;
 Wed, 28 Oct 2020 11:56:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXk4A-0003AJ-A4
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 11:56:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33d7a09a-d013-412c-800a-b2e549e85790;
 Wed, 28 Oct 2020 11:56:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C289EB036;
 Wed, 28 Oct 2020 11:56:32 +0000 (UTC)
X-Inumbo-ID: 33d7a09a-d013-412c-800a-b2e549e85790
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603886192;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FvcQaZ+Y1Ei0/m74T7rBNWRFLD/qF9LXTrfinx3+oi8=;
	b=qiJR77FjoRv8c6F3YzSmtUpbWYfCOz9gK7PgAvNIdQlDRiZyouXtO4Y1OilEehr7IsEjSo
	YkhkQVne5rdoT+49iJfwmVheJyQqo7H3My6Zp4L+xbI/QQ9HgMjM7qs1nuXWMuNnnC3+rW
	uv872bgu9WJqH1oCtZzCpOyrlAMoL8I=
Subject: Re: [PATCH v1 4/4] xen/pci: solve compilation error when memory
 paging is not enabled.
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1603731279.git.rahul.singh@arm.com>
 <dc85bb73ca4b6ab8b4a2370f2db7700445fbc5f8.1603731279.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b345b0d4-8045-1d5d-b3c9-498311cfb1ac@suse.com>
Date: Wed, 28 Oct 2020 12:56:32 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <dc85bb73ca4b6ab8b4a2370f2db7700445fbc5f8.1603731279.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 18:17, Rahul Singh wrote:
> --- a/xen/drivers/passthrough/pci.c
> +++ b/xen/drivers/passthrough/pci.c
> @@ -1419,13 +1419,15 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>      if ( !is_iommu_enabled(d) )
>          return 0;
>  
> -    /* Prevent device assign if mem paging or mem sharing have been 
> +#if defined(CONFIG_HAS_MEM_PAGING) || defined(CONFIG_MEM_SHARING)
> +    /* Prevent device assign if mem paging or mem sharing have been
>       * enabled for this domain */
>      if ( d != dom_io &&
>           unlikely(mem_sharing_enabled(d) ||
>                    vm_event_check_ring(d->vm_event_paging) ||
>                    p2m_get_hostp2m(d)->global_logdirty) )
>          return -EXDEV;
> +#endif

Besides this also disabling the mem-sharing and log-dirty related
logic, I don't think the change is correct: each item being
checked needs disabling individually, depending on its associated
CONFIG_* option. For this, perhaps you want to introduce something
like mem_paging_enabled(d), to avoid the need for an #ifdef here?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 11:58:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 11:58:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13579.34264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXk6G-0003KD-Ss; Wed, 28 Oct 2020 11:58:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13579.34264; Wed, 28 Oct 2020 11:58:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXk6G-0003K6-Q1; Wed, 28 Oct 2020 11:58:44 +0000
Received: by outflank-mailman (input) for mailman id 13579;
 Wed, 28 Oct 2020 11:58:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HSML=ED=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXk6F-0003K1-JP
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 11:58:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d453875-118b-460e-ab2e-4369c2ed84a3;
 Wed, 28 Oct 2020 11:58:42 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXk6C-0000IW-O4; Wed, 28 Oct 2020 11:58:40 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXk6C-0006qH-GQ; Wed, 28 Oct 2020 11:58:40 +0000
X-Inumbo-ID: 8d453875-118b-460e-ab2e-4369c2ed84a3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=7xJfBN7KF962Co3er/XSt9PfLt+of1SSQII+FdJ8PD8=; b=2szqhHfRTEF/LfzHVDz+B4o3Df
	zf8Z3KQCwCa7hjKHilAGPXnDh1cf23x8LtMaYlz90U4kOpZAm3nGpb0CLWoJHIrz0xgDQXENjxIWb
	pGuY7lFnzmghzo5GUeXPfWWnV3BKgGffn/m7WPCPDnwhF9rJudSOeXNGBbyUmoNMUOSI=;
Subject: Re: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86 directory.
To: Jan Beulich <jbeulich@suse.com>, Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1603731279.git.rahul.singh@arm.com>
 <70029e8904170c4f19d9f521847050cd00c6e39d.1603731279.git.rahul.singh@arm.com>
 <301405a2-9ec1-847d-6f61-1067a225a3a9@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a85906e2-6930-7f8a-417b-966a87d6c133@xen.org>
Date: Wed, 28 Oct 2020 11:58:38 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <301405a2-9ec1-847d-6f61-1067a225a3a9@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 28/10/2020 11:51, Jan Beulich wrote:
> On 26.10.2020 18:17, Rahul Singh wrote:
>> passthrough/pci.c file is common for all architectures, but there is x86
>> specific code in this file.
> 
> The code you move doesn't look to be x86 specific in the sense that
> it makes no sense on other architectures, but just because certain
> pieces are missing on Arm. With this I question whether it is really
> appropriate to move this code. I do realize that in similar earlier
> cases my questioning was mostly ignored ...

There are no plans to support PIRQ on Arm. All the interrupts 
will be properly sent to the guest using a virtual interrupt controller.

Regarding the code itself, there are still a few bits that are x86 
specific (see struct dev_intx_gsi_link). So I think the right action 
for now is to move the code to an x86 directory.

This can be adjusted in the future if another architecture 
requires PIRQ.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 12:03:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 12:03:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13585.34277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXkAc-0004FY-NM; Wed, 28 Oct 2020 12:03:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13585.34277; Wed, 28 Oct 2020 12:03:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXkAc-0004FR-JU; Wed, 28 Oct 2020 12:03:14 +0000
Received: by outflank-mailman (input) for mailman id 13585;
 Wed, 28 Oct 2020 12:03:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i5Ob=ED=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kXkAa-0004FM-Od
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 12:03:12 +0000
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3801603f-0f13-4ba0-ad66-75a313cbdc68;
 Wed, 28 Oct 2020 12:03:11 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id l8so2841557wmg.3
 for <xen-devel@lists.xenproject.org>; Wed, 28 Oct 2020 05:03:11 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id m9sm6035713wmc.31.2020.10.28.05.03.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 28 Oct 2020 05:03:09 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id A93581FF7E;
 Wed, 28 Oct 2020 12:03:08 +0000 (GMT)
X-Inumbo-ID: 3801603f-0f13-4ba0-ad66-75a313cbdc68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:in-reply-to:date
         :message-id:mime-version:content-transfer-encoding;
        bh=QeWLWGEtkhvw5bvqcQIIMI9QsC8LiRwtu9MHgUBup78=;
        b=wN3gADAKEPYpx5bjjx+cwXJ7o01OjhCipiWA+mAQ4+QhH7tMsLVNKzM03BVcLztEhK
         I7q5hAbmp4U+YPgMPSBlIdkj/VVgKLZxjnoZDerk2QztHwa2qu35c+4jXi4f/2sZmlM0
         AVci8HOUb7NyP1I49w6S0fP153AZsVRVzcxbJ1GTSmNIo2DImxFXmL/LUAyYg6WM0DUJ
         Kqtf1RXu+d/5DLTJtvQtVAV4aypoPsk6+nbZFNk08+a/AJamdEMYH8A2qtSdGRAVYUan
         UyU1vFCl7RYNVoAIPb+iOn5Srk0pP7Fa3E5j22QCmLPN8t5RB4EwSMGqkts58iHfD/tF
         Sh0w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject
         :in-reply-to:date:message-id:mime-version:content-transfer-encoding;
        bh=QeWLWGEtkhvw5bvqcQIIMI9QsC8LiRwtu9MHgUBup78=;
        b=gwMXs3rUqgCAwLRHJygBk2jcQc+Ynd/aEhakL3UEMgpk9WS/NqX05rxKlkSRHqi5oh
         d2skUfi3Q2Db4nQQW76lgGoyUU2dPjansproDxuJfqeEpvYWstoSw/RJmmvxzH17ISTo
         JySBD/dI+qM/O0VrUAlBbAY7Lu5l90q58wApyA7uBQwzg3j7x2xHRXPJmeB0XoMpfVe6
         stc8YX8OrDO44HxE1LherO7pU1oTyd6siZ9IlQzicE1f0mdp5kA8Pft37QDMcdQgPQSd
         fwC8Ot3uzp1n7kmXfPAb8XJP19sEZ7+Mjaep6M9ukTcmE+acrWdJjFTQH0VRbF+n0sis
         XgdQ==
X-Gm-Message-State: AOAM530JbR1mRXqNVI4qPmEXEx7+EB+kyVuzcBl1cRht1wj483zV3SiP
	J0Tle0xjExAs4I8WAEq8zl07JA==
X-Google-Smtp-Source: ABdhPJykh6a/PwFNMgxjcMkg6AzM2vsOOxyM4IGxXsdGAz94vshaM0qBDUw9+2cUZYwYDNnY5dWPxQ==
X-Received: by 2002:a05:600c:216:: with SMTP id 22mr7450906wmi.149.1603886590759;
        Wed, 28 Oct 2020 05:03:10 -0700 (PDT)
References: <CAA93ih1bgSCNb9X8-NzGJfhFjRH5W5L2wAG0PHfQoUL4qHkZVA@mail.gmail.com>
User-agent: mu4e 1.5.6; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: xen-devel@lists.xenproject.org, Masami Hiramatsu <mhiramat@kernel.org>,
 Jassi Brar <jaswinder.singh@linaro.org>, Stefano Stabellini
 <stefano.stabellini@linaro.org>
Subject: Re: [bug report] xen/arm64: singlestep doesn't work on Dom0
In-reply-to: <CAA93ih1bgSCNb9X8-NzGJfhFjRH5W5L2wAG0PHfQoUL4qHkZVA@mail.gmail.com>
Date: Wed, 28 Oct 2020 12:03:08 +0000
Message-ID: <87imaumubn.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Masami Hiramatsu <masami.hiramatsu@linaro.org> writes:

> Hello,
>
> When I tested the kprobes on Dom0 kernel, it caused a kernel panic.
>
> Here is an example, to clarify the bug is in the single-stepping
> (software-step exception), I added a printk() where the kprobes setup
> single-stepping.
>
> root@develbox:~# cd /sys/kernel/debug/tracing/
> root@develbox:/sys/kernel/debug/tracing# echo p vfs_read > kprobe_events
> root@develbox:/sys/kernel/debug/tracing# echo 1 > events/kprobes/enable
> root@develbox:/sys/kernel/debug/tracing# [  112.282480] kprobes:
> singlestep ool insn at ffff800011785000    <--- This shows single
> stepping buffer (trampoline)
> [  112.288077] ------------[ cut here ]------------
> [  112.292745] kernel BUG at arch/arm64/kernel/traps.c:406!
> [  112.298129] Internal error: Oops - BUG: 0 [#1] SMP
> [  112.302987] Modules linked in: fuse bridge stp llc binfmt_misc
> nls_ascii nls_cp437 vfat fat ahci libahci libata hid_generic udlfb
> scsi_mod aes_ce_blk crypto_simd evdev cryptd aes_ce_cipher usbhid
> ghash_ce realtek gf128mul hid netsec sha2_ce mdio_devres i2c_algo_bit
> of_mdio sha256_arm64 fb_ddc sha1_ce fixed_phy gpio_keys leds_gpio
> libphy bpf_preload ip_tables x_tables autofs4 xhci_pci
> xhci_pci_renesas xhci_hcd usbcore gpio_mb86s7x
> [  112.341097] CPU: 13 PID: 1045 Comm: bash Not tainted 5.10.0-rc1+ #44
> [  112.347515] Hardware name: Socionext Developer Box (DT)
> [  112.352813] pstate: 00000085 (nzcv daIf -PAN -UAO -TCO BTYPE=--)
> [  112.358897] pc : do_undefinstr+0x354/0x378
> [  112.363053] lr : do_undefinstr+0x270/0x378
> [  112.367218] sp : ffff8000122fbc50
> [  112.370603] x29: ffff8000122fbc50 x28: ffff00084bc9e080
> [  112.375985] x27: 0000000000000000 x26: 0000000000000000
> [  112.381366] x25: 0000000000000000 x24: 0000000000000000
> [  112.386748] x23: 0000000080000085 x22: ffff800011785004
> [  112.392129] x21: ffff8000122fbe00 x20: ffff8000122fbcc0
> [  112.397511] x19: ffff800011249988 x18: 0000000000000000
> [  112.402892] x17: 0000000000000000 x16: 0000000000000000
> [  112.408274] x15: 0000000000000000 x14: 0000000000000000
> [  112.413655] x13: 0000000000000000 x12: 0000000000000000
> [  112.419037] x11: 0000000000000000 x10: 0000000000000000
> [  112.424426] x9 : ffff800010314614 x8 : 0000000000000000
> [  112.429801] x7 : 0000000000000000 x6 : ffff8000122fbca8
> [  112.435189] x5 : 0000000000000000 x4 : ffff800011400110
> [  112.440564] x3 : 00000000d5300000 x2 : ffff800011255f78
> [  112.445946] x1 : ffff800011400110 x0 : 0000000080000085
> [  112.451328] Call trace:
> [  112.453848]  do_undefinstr+0x354/0x378
> [  112.457669]  el1_sync_handler+0xa8/0x138
> [  112.461658]  el1_sync+0x7c/0x100
> [  112.464958]  0xffff800011785004     /// <- Undefined instruction
> error happens on the next instruction of single stepping buffer.
> [  112.468172]  __arm64_sys_read+0x24/0x30
> [  112.472078]  el0_svc_common.constprop.3+0x94/0x178
> [  112.476936]  do_el0_svc+0x2c/0x98
> [  112.480321]  el0_sync_handler+0x118/0x168
> [  112.484407]  el0_sync+0x158/0x180
> [  112.487789] Code: d2801400 17ffffbe a9025bf5 f9001bf7 (d4210000)
> [  112.493951] ---[ end trace 3564a3bf75d1618c ]---
>
> So, this seems that the Linux kernel couldn't catch the software-step exception.
>
> I confirmed the same kernel doesn't cause this error without Xen. I
> guess Xen is not correctly setting the debug registers when the cpu
> goes EL1.

Having worked on the arm64 KVM debug logic, I have some familiarity with
how it works under KVM.

> (Or, would we handle debug exceptions in Xen and transfer it to EL1
> OS? I'm not sure how it was designed)

Xen looks as though it should be trapping a chunk of debug accesses:

    /* Trap Debug and Performance Monitor accesses */
    WRITE_SYSREG(HDCR_TDRA|HDCR_TDOSA|HDCR_TDA|HDCR_TPM|HDCR_TPMCR,
                 MDCR_EL2);

but it doesn't set HDCR_TDE, so debug exceptions won't be re-routed to
EL2, which should be the place dealing with them. Also, I can't see
where the debug registers are saved/restored. In KVM we maintain a
shadow copy of the guest debug register state while guest debugging is
in effect, as any breakpoints you want to trigger need to be copied
across.

I also can't see where CPSR_SS or DBG_MDSCR_SS is set in the mdscr.
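
For reference, the difference between the trap configuration Xen writes
and a KVM-style one that routes debug exceptions to EL2 can be sketched
as a couple of C helpers. The bit positions are assumptions taken from
the usual Armv8 MDCR_EL2 layout (Xen's real HDCR_* defines live in its
arch headers, so verify against those):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* MDCR_EL2 trap bits, assuming the standard Armv8 layout. */
#define HDCR_TPMCR (1u << 5)  /* Trap PMCR accesses */
#define HDCR_TPM   (1u << 6)  /* Trap performance monitor accesses */
#define HDCR_TDE   (1u << 8)  /* Route debug exceptions to EL2 */
#define HDCR_TDA   (1u << 9)  /* Trap debug register accesses */
#define HDCR_TDOSA (1u << 10) /* Trap debug OS-related register accesses */
#define HDCR_TDRA  (1u << 11) /* Trap debug ROM address register accesses */

/* The value quoted above: register accesses trap to EL2, but debug
 * exceptions (e.g. software step) are still taken at EL1. */
static inline uint32_t xen_current_mdcr(void)
{
    return HDCR_TDRA | HDCR_TDOSA | HDCR_TDA | HDCR_TPM | HDCR_TPMCR;
}

/* A KVM-style value would additionally set TDE while guest debugging
 * is in effect, so EL2 sees the debug exception and can deal with it. */
static inline uint32_t kvm_style_mdcr(void)
{
    return xen_current_mdcr() | HDCR_TDE;
}

static inline bool routes_debug_to_el2(uint32_t mdcr)
{
    return (mdcr & HDCR_TDE) != 0;
}
```

The only difference between the two values is the TDE bit, which is
exactly the bit missing from the WRITE_SYSREG() above.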

FWIW most of the logic in KVM is contained in:

  arch/arm64/kvm/debug.c

with a smattering of trap handling and context switching elsewhere in
the code.

>
> Thank you,


-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 13:37:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 13:37:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13604.34297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXldo-0003V9-J4; Wed, 28 Oct 2020 13:37:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13604.34297; Wed, 28 Oct 2020 13:37:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXldo-0003V2-G8; Wed, 28 Oct 2020 13:37:28 +0000
Received: by outflank-mailman (input) for mailman id 13604;
 Wed, 28 Oct 2020 13:37:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yaoe=ED=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kXldn-0003Ux-F8
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 13:37:27 +0000
Received: from mail-lj1-x236.google.com (unknown [2a00:1450:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 017a2eb0-f926-4a18-b5ee-9b58adfed154;
 Wed, 28 Oct 2020 13:37:26 +0000 (UTC)
Received: by mail-lj1-x236.google.com with SMTP id d24so6322643ljg.10
 for <xen-devel@lists.xenproject.org>; Wed, 28 Oct 2020 06:37:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=rw0OYPBFNZd2tEP7bWM6jD1sR09uy1QGHtTVG6KW0hw=;
        b=rcWTg/Snfdn5n+xpyhmynduNv1INw26BugYY+VC5STB5LZ+lEyNl6fwtQKioPJ1D1V
         yzjemPVpHiVgTdMjyNR6Ec/oXI0osHaNGBvL5Yub1/GCaSiJg//FjCG7VTL+9EI4/tLn
         gR2DTrsVA9uqsa1rGG5fq1HpxYkmQPCgWxxMTZpfR6dsca0slIS70j1HVasgapNw2WWA
         9yVxl/Pem5aWaWfY0FhAAdsixViP916FwKFZnz1hJAl9X8RtH/aKIvyBkcjv2VZgIu+b
         w2SdF5TRWzl8ZjmttybZ56jVbiDd/mo8P8fiPgNwo+YGe9lKAwnERyXsRjxnuK4ccZnO
         EMYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=rw0OYPBFNZd2tEP7bWM6jD1sR09uy1QGHtTVG6KW0hw=;
        b=XLmV4TTDcSzuPOrLCeyBr9I7VO73j74mU0qNPwPSBTLc4dbyZYneC2tsp6z37Xy4q3
         oLhitecCexOL2Yt6wdS7iQqYDJqkDAMhv27r38tqiF49Jbo6Ns+i1a+p3NMxLy5AW3ln
         KcwmvN2AADVyJBNDn9qnuRDy52qRZWpZKAW07mXjMlmf7ete66jHtF2h6ai7j9zW2Ieq
         wDJQX4zdSnOWOFQQ4Jy45V6WLncD24SqDM1zI4bWNBeRJXk1nA7Ky3g7OvR9fMnjPCr8
         PKhAq/FygTlqloZ9NK22uW3KpHJhpO6xQA8ocC/7X45PPF2y3ZwWgml5QWYOH7hYy3RZ
         M6VQ==
X-Gm-Message-State: AOAM531I+WCk3onqpMfDWDX3n9jrMYm8w3JkE4giFSYLKRKo0nkP0dY0
	Sa07qw57yJAdDqoeHwh5sdH2EJqzusierqhstvM=
X-Google-Smtp-Source: ABdhPJyEYWQunCUOsoarp2GTbfrOB+KD3P0u+eRm6s0cZVyYnpzWMs6RfFLPDpVzOhboCnjHOflaGfCHpSlvUKoCprg=
X-Received: by 2002:a2e:b0c7:: with SMTP id g7mr3095654ljl.433.1603892245345;
 Wed, 28 Oct 2020 06:37:25 -0700 (PDT)
MIME-Version: 1.0
References: <osstest-156257-mainreport@xen.org>
In-Reply-To: <osstest-156257-mainreport@xen.org>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 28 Oct 2020 09:37:13 -0400
Message-ID: <CAKf6xpss8KpGOvZrKiTPz63bhBVbjxRTYWdHEkzUo2q1KEMjhg@mail.gmail.com>
Subject: Re: [qemu-mainline test] 156257: regressions - FAIL
To: osstest service owner <osstest-admin@xenproject.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, Oct 27, 2020 at 5:23 PM osstest service owner
<osstest-admin@xenproject.org> wrote:
>
> flight 156257 qemu-mainline real [real]
> flight 156266 qemu-mainline real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/156257/
> http://logs.test-lab.xenproject.org/osstest/logs/156266/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631

QEMU doesn't start with "qemu-system-i386: -xen-domid 1: Option not
supported for this target"

This happens if CONFIG_XEN isn't set.

QEMU is built with:
                  host CPU: aarch64
           host endianness: little
               target list: i386-softmmu

commit 8a19980e3fc4 "configure: move accelerator logic to meson"
introduced this logic:
+accelerator_targets = { 'CONFIG_KVM': kvm_targets }
+if cpu in ['x86', 'x86_64']
+  accelerator_targets += {
+    'CONFIG_HAX': ['i386-softmmu', 'x86_64-softmmu'],
+    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
+    'CONFIG_HVF': ['x86_64-softmmu'],
+    'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
+  }
+endif

I guess something like this would fix it:
if cpu in ['aarch64', 'arm']
  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'], }
endif

I don't have an arm setup to test this.

>  test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
>  test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
>  test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631

2020-10-27 14:08:29 Z lvdisplay output says device is still open:
/dev/fiano1-vg/test-amd64-amd64-xl-qcow2_debian.buster.guest.osstest-disk:fiano1-vg:3:1:-1:1:20504576:2503:-1:0:-1:253:2

It's unclear to me why the disk is still in use.  Looks like QEMU exited.

-Jason


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 14:09:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 14:09:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13613.34313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXm8S-0006Di-4l; Wed, 28 Oct 2020 14:09:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13613.34313; Wed, 28 Oct 2020 14:09:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXm8S-0006Db-1W; Wed, 28 Oct 2020 14:09:08 +0000
Received: by outflank-mailman (input) for mailman id 13613;
 Wed, 28 Oct 2020 14:09:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HSML=ED=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXm8Q-0006DW-NJ
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:09:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d69c228b-27eb-41ed-85e8-c5f9cc3aa9bd;
 Wed, 28 Oct 2020 14:09:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXm8P-00030x-B2; Wed, 28 Oct 2020 14:09:05 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXm8P-0001GP-2p; Wed, 28 Oct 2020 14:09:05 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=WmejIZ3aIODOnPm4o9heNu2wuduJJ+feC8ECK1PimeU=; b=tWej2mjjv9qE7sUfwVjweWophS
	Pi0Tc0Jw5/Js/GJHvLr1nYYluc3/vU9DW4i7johLBeibE95yDqpRw0k1asATjDqBoIs/MpMXwy7vP
	GZYsC++1CW9jnPwZUqssB9XVJ/KeE3hdsIWEnGpdMS6coj9J+uxIbRNX/4SAnR3vbfNQ=;
Subject: Re: [bug report] xen/arm64: singlestep doesn't work on Dom0
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>,
 xen-devel@lists.xenproject.org
Cc: Alex Bennée <alex.bennee@linaro.org>,
 Masami Hiramatsu <mhiramat@kernel.org>,
 Jassi Brar <jaswinder.singh@linaro.org>,
 Stefano Stabellini <stefano.stabellini@linaro.org>
References: <CAA93ih1bgSCNb9X8-NzGJfhFjRH5W5L2wAG0PHfQoUL4qHkZVA@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <83c70bb7-b179-7e8b-f94b-3d0bd2a84f54@xen.org>
Date: Wed, 28 Oct 2020 14:09:03 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CAA93ih1bgSCNb9X8-NzGJfhFjRH5W5L2wAG0PHfQoUL4qHkZVA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 28/10/2020 08:50, Masami Hiramatsu wrote:
> Hello,

Hi,

> When I tested the kprobes on Dom0 kernel, it caused a kernel panic.
> 
> Here is an example, to clarify the bug is in the single-stepping
> (software-step exception), I added a printk() where the kprobes setup
> single-stepping.
> 
> root@develbox:~# cd /sys/kernel/debug/tracing/
> root@develbox:/sys/kernel/debug/tracing# echo p vfs_read > kprobe_events
> root@develbox:/sys/kernel/debug/tracing# echo 1 > events/kprobes/enable
> root@develbox:/sys/kernel/debug/tracing# [  112.282480] kprobes:
> singlestep ool insn at ffff800011785000    <--- This shows single
> stepping buffer (trampoline)
> [  112.288077] ------------[ cut here ]------------
> [  112.292745] kernel BUG at arch/arm64/kernel/traps.c:406!
> [  112.298129] Internal error: Oops - BUG: 0 [#1] SMP
> [  112.302987] Modules linked in: fuse bridge stp llc binfmt_misc
> nls_ascii nls_cp437 vfat fat ahci libahci libata hid_generic udlfb
> scsi_mod aes_ce_blk crypto_simd evdev cryptd aes_ce_cipher usbhid
> ghash_ce realtek gf128mul hid netsec sha2_ce mdio_devres i2c_algo_bit
> of_mdio sha256_arm64 fb_ddc sha1_ce fixed_phy gpio_keys leds_gpio
> libphy bpf_preload ip_tables x_tables autofs4 xhci_pci
> xhci_pci_renesas xhci_hcd usbcore gpio_mb86s7x
> [  112.341097] CPU: 13 PID: 1045 Comm: bash Not tainted 5.10.0-rc1+ #44
> [  112.347515] Hardware name: Socionext Developer Box (DT)
> [  112.352813] pstate: 00000085 (nzcv daIf -PAN -UAO -TCO BTYPE=--)
> [  112.358897] pc : do_undefinstr+0x354/0x378
> [  112.363053] lr : do_undefinstr+0x270/0x378
> [  112.367218] sp : ffff8000122fbc50
> [  112.370603] x29: ffff8000122fbc50 x28: ffff00084bc9e080
> [  112.375985] x27: 0000000000000000 x26: 0000000000000000
> [  112.381366] x25: 0000000000000000 x24: 0000000000000000
> [  112.386748] x23: 0000000080000085 x22: ffff800011785004
> [  112.392129] x21: ffff8000122fbe00 x20: ffff8000122fbcc0
> [  112.397511] x19: ffff800011249988 x18: 0000000000000000
> [  112.402892] x17: 0000000000000000 x16: 0000000000000000
> [  112.408274] x15: 0000000000000000 x14: 0000000000000000
> [  112.413655] x13: 0000000000000000 x12: 0000000000000000
> [  112.419037] x11: 0000000000000000 x10: 0000000000000000
> [  112.424426] x9 : ffff800010314614 x8 : 0000000000000000
> [  112.429801] x7 : 0000000000000000 x6 : ffff8000122fbca8
> [  112.435189] x5 : 0000000000000000 x4 : ffff800011400110
> [  112.440564] x3 : 00000000d5300000 x2 : ffff800011255f78
> [  112.445946] x1 : ffff800011400110 x0 : 0000000080000085
> [  112.451328] Call trace:
> [  112.453848]  do_undefinstr+0x354/0x378
> [  112.457669]  el1_sync_handler+0xa8/0x138
> [  112.461658]  el1_sync+0x7c/0x100
> [  112.464958]  0xffff800011785004     /// <- Undefined instruction
> error happens on the next instruction of single stepping buffer.
> [  112.468172]  __arm64_sys_read+0x24/0x30
> [  112.472078]  el0_svc_common.constprop.3+0x94/0x178
> [  112.476936]  do_el0_svc+0x2c/0x98
> [  112.480321]  el0_sync_handler+0x118/0x168
> [  112.484407]  el0_sync+0x158/0x180
> [  112.487789] Code: d2801400 17ffffbe a9025bf5 f9001bf7 (d4210000)
> [  112.493951] ---[ end trace 3564a3bf75d1618c ]---
> 
> So, this seems that the Linux kernel couldn't catch the software-step exception.
> 
> I confirmed the same kernel doesn't cause this error without Xen. I
> guess Xen is not correctly setting the debug registers when the cpu
> goes EL1.
> (Or, would we handle debug exceptions in Xen and transfer it to EL1
> OS? I'm not sure how it was designed)
Debug registers (including single-step) are not supported by Xen today. I 
vaguely remember that they are quite tricky to implement.

That said, it would be nice to have them supported!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 14:30:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 14:30:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13628.34329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmTM-0000Hh-4p; Wed, 28 Oct 2020 14:30:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13628.34329; Wed, 28 Oct 2020 14:30:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmTM-0000Ha-1g; Wed, 28 Oct 2020 14:30:44 +0000
Received: by outflank-mailman (input) for mailman id 13628;
 Wed, 28 Oct 2020 14:30:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LDgn=ED=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXmTK-0000HV-C2
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:30:42 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.54])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4896d5e2-6755-43e8-a01e-1cbf095fdd52;
 Wed, 28 Oct 2020 14:30:41 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.3 DYNA|AUTH)
 with ESMTPSA id D03373w9SEUW5Me
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 28 Oct 2020 15:30:32 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603895440;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=yv4/vNzbLjhhKaj2O8jZKB9GBNc7GvmmLKVV7dABzFU=;
	b=hHcOgn21sjPwj6e1+yxAm/1ynKgXRt+Ye9rVBiUkWEJsGyeo+Y0SHmTh2Ov8xPlaS7
	AsfrvgveOZXl23zPC8iGCUHG6UJ7NfYic0xQWFA4yWbAyi6W0c1AGcvS63BEuNWf6Y+D
	Gl8+iq0qxiViUpLVeWGbrXI/2O2OmZlFHfE78nY2MojgvE3TbIHagWCeqglUxQyrxwA8
	FehRYHWpncmmvXNDL7aOKR1ycnU0rc5I45EO5O+V2oLr0mBNn4t2eMDm1szLUxePku5x
	kohnon24XXphrDDQrDe3jNhqRlMmeG2sv3RdK6633C72tAU9BpaE+Tf4VNSg3JuTq5Zq
	76dA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 3/3] tools: unify type checking for data pfns in migration stream
Date: Wed, 28 Oct 2020 15:30:24 +0100
Message-Id: <20201028143024.26833-4-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201028143024.26833-1-olaf@aepfle.de>
References: <20201028143024.26833-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a helper which decides if a given pfn type has data
for the migration stream.

No change in behavior intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  | 17 ++++++++++++++++
 tools/libs/guest/xg_sr_restore.c | 34 +++++---------------------------
 tools/libs/guest/xg_sr_save.c    | 14 ++-----------
 3 files changed, 24 insertions(+), 41 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index cc3ad1c394..70e328e951 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -455,6 +455,23 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 /* Handle a STATIC_DATA_END record. */
 int handle_static_data_end(struct xc_sr_context *ctx);
 
+static inline bool page_type_has_stream_data(uint32_t type)
+{
+    bool ret;
+
+    switch (type)
+    {
+    case XEN_DOMCTL_PFINFO_XTAB:
+    case XEN_DOMCTL_PFINFO_XALLOC:
+    case XEN_DOMCTL_PFINFO_BROKEN:
+        ret = false;
+        break;
+    default:
+        ret = true;
+        break;
+    }
+    return ret;
+}
 #endif
 /*
  * Local variables:
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index f1c3169229..0332ae9f32 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -152,9 +152,8 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 
     for ( i = 0; i < count; ++i )
     {
-        if ( (!types || (types &&
-                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
-                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
+        if ( (!types ||
+              (types && page_type_has_stream_data(types[i]) == true)) &&
              !pfn_is_populated(ctx, original_pfns[i]) )
         {
             rc = pfn_set_populated(ctx, original_pfns[i]);
@@ -233,25 +232,8 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
     {
         ctx->restore.ops.set_page_type(ctx, pfns[i], types[i]);
 
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_NOTAB:
-
-        case XEN_DOMCTL_PFINFO_L1TAB:
-        case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L2TAB:
-        case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L3TAB:
-        case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L4TAB:
-        case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
+        if ( page_type_has_stream_data(types[i]) == true )
             mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
-            break;
-        }
     }
 
     /* Nothing to do? */
@@ -271,14 +253,8 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
 
     for ( i = 0, j = 0; i < count; ++i )
     {
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_XTAB:
-        case XEN_DOMCTL_PFINFO_BROKEN:
-        case XEN_DOMCTL_PFINFO_XALLOC:
-            /* No page data to deal with. */
+        if ( page_type_has_stream_data(types[i]) == false )
             continue;
-        }
 
         if ( map_errs[j] )
         {
@@ -413,7 +389,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
             goto err;
         }
 
-        if ( type < XEN_DOMCTL_PFINFO_BROKEN )
+        if ( page_type_has_stream_data(type) == true )
             /* NOTAB and all L1 through L4 tables (including pinned) should
              * have a page worth of data in the record. */
             pages_of_data++;
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 044d0ae3aa..0546d3d9e6 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -153,13 +153,8 @@ static int write_batch(struct xc_sr_context *ctx)
             goto err;
         }
 
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_BROKEN:
-        case XEN_DOMCTL_PFINFO_XALLOC:
-        case XEN_DOMCTL_PFINFO_XTAB:
+        if ( page_type_has_stream_data(types[i]) == false )
             continue;
-        }
 
         mfns[nr_pages++] = mfns[i];
     }
@@ -177,13 +172,8 @@ static int write_batch(struct xc_sr_context *ctx)
 
         for ( i = 0, p = 0; i < nr_pfns; ++i )
         {
-            switch ( types[i] )
-            {
-            case XEN_DOMCTL_PFINFO_BROKEN:
-            case XEN_DOMCTL_PFINFO_XALLOC:
-            case XEN_DOMCTL_PFINFO_XTAB:
+            if ( page_type_has_stream_data(types[i]) == false )
                 continue;
-            }
 
             if ( errors[p] )
             {
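
To illustrate the behavior this series consolidates, here is a
standalone sketch of the new helper together with the page types it
classifies. The XEN_DOMCTL_PFINFO_* values are re-stated here purely
for the example and are assumptions; verify them against the public
domctl header before relying on them:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* PFINFO type encodings, assumed from xen/include/public/domctl.h. */
#define XEN_DOMCTL_PFINFO_LTAB_SHIFT 28
#define XEN_DOMCTL_PFINFO_NOTAB  (0x0U << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_L1TAB  (0x1U << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_BROKEN (0xdU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_XALLOC (0xeU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_XTAB   (0xfU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)

/* The helper from the patch: XTAB (unpopulated), XALLOC (allocate-only)
 * and BROKEN pages carry no page data in the migration stream; every
 * other type (NOTAB, L1-L4 tables, pinned or not) does. */
static inline bool page_type_has_stream_data(uint32_t type)
{
    switch (type)
    {
    case XEN_DOMCTL_PFINFO_XTAB:
    case XEN_DOMCTL_PFINFO_XALLOC:
    case XEN_DOMCTL_PFINFO_BROKEN:
        return false;
    default:
        return true;
    }
}
```

The save and restore paths above then reduce to a single call per pfn
instead of three open-coded switch statements.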


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 14:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 14:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13630.34353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmTU-0000M0-Rh; Wed, 28 Oct 2020 14:30:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13630.34353; Wed, 28 Oct 2020 14:30:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmTU-0000Lr-OH; Wed, 28 Oct 2020 14:30:52 +0000
Received: by outflank-mailman (input) for mailman id 13630;
 Wed, 28 Oct 2020 14:30:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LDgn=ED=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXmTT-0000IU-Tx
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:30:51 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.166])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7528e28e-3a0b-40cf-848b-882835f0625c;
 Wed, 28 Oct 2020 14:30:42 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.3 DYNA|AUTH)
 with ESMTPSA id D03373w9SEUV5Md
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 28 Oct 2020 15:30:31 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603895441;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=nOFtdNXlk5HjYyxO1mkGMNqJ+Z1za/MxmXOScJ0HbPg=;
	b=Vjq9hrSCC0J4X6bcR9uJ0f2dDFFfDSEpA6wPy5IKsY10nhsk25L1NpaCFj4xWmj6HY
	Qtx7RbNQ/keVcUl6kOpelrdPipY4wZlBYCFvAl0RwtATWzb9+gb7M+W8ZAXVn3gMC0Vg
	Wj1VXNOJDiNCEJvOR5EAoneEWONksVZzRilTBJ84XM+9FianWfQ8VUulmckugmF8mEIO
	MMKl9KYWrIO6tW9/DqLhXlgSnfyUehj65B98gmdETtPB/932l2HEsKr/Na3ndtV9Bj65
	+bBCtTnQnFCpZV/hLZeTHo0zcf5DjNQZAFDMAexgaD95bCg2oO6eLRcmSNh7b5386QgO
	aKHg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 2/3] tools: use xc_is_known_page_type
Date: Wed, 28 Oct 2020 15:30:23 +0100
Message-Id: <20201028143024.26833-3-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201028143024.26833-1-olaf@aepfle.de>
References: <20201028143024.26833-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Verify the pfn type on the sending side, and also verify the incoming batch of pfns on the restore side.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_restore.c | 3 +--
 tools/libs/guest/xg_sr_save.c    | 6 ++++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index b57a787519..f1c3169229 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -406,8 +406,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         }
 
         type = (pages->pfn[i] & PAGE_DATA_TYPE_MASK) >> 32;
-        if ( ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) >= 5) &&
-             ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) <= 8) )
+        if ( !xc_is_known_page_type(type) )
         {
             ERROR("Invalid type %#"PRIx32" for pfn %#"PRIpfn" (index %u)",
                   type, pfn, i);
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 2ba7c3200c..044d0ae3aa 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -147,6 +147,12 @@ static int write_batch(struct xc_sr_context *ctx)
 
     for ( i = 0; i < nr_pfns; ++i )
     {
+        if ( !xc_is_known_page_type(types[i]) )
+        {
+            ERROR("Wrong type %#"PRIpfn" for pfn %#"PRIpfn, types[i], mfns[i]);
+            goto err;
+        }
+
         switch ( types[i] )
         {
         case XEN_DOMCTL_PFINFO_BROKEN:


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 14:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 14:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13629.34341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmTQ-0000Iy-DE; Wed, 28 Oct 2020 14:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13629.34341; Wed, 28 Oct 2020 14:30:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmTQ-0000Ir-9z; Wed, 28 Oct 2020 14:30:48 +0000
Received: by outflank-mailman (input) for mailman id 13629;
 Wed, 28 Oct 2020 14:30:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LDgn=ED=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXmTP-0000IU-13
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:30:47 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.219])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37c6a7fb-b1bb-49f9-8be0-4fe999580628;
 Wed, 28 Oct 2020 14:30:41 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.3 DYNA|AUTH)
 with ESMTPSA id D03373w9SEUV5Mc
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 28 Oct 2020 15:30:31 +0100 (CET)
X-Inumbo-ID: 37c6a7fb-b1bb-49f9-8be0-4fe999580628
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603895440;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=8EOjWvQk239Wp/aWNifZAKC+Sjf+9ZihUrmLiKkJAAc=;
	b=ivPRFXnGxrQjFfk6asYHhqTZi26eH2FBmOVi7PrYd1KIteAhN6/KIJBxRXViC2h5Dw
	o07Z89ZXmk8scebTQCcXQaTZ/5iw+7mOSM6Zr2kRisOAA6FgkKe0igexBDJyh57znJzz
	p7wbt/6dM+liiFk7I+fMWz68q19DD4xfbmj/JAhUfTczZLFBsQQY9QsRlvFEaMM6wXFK
	PiSKi8bT8GfAbVPJRBqBDGOE4KlcxVr9h4Gnz/FruHYMCt9eOIlXL6fiqQ8HzBI+6pa5
	2vVCK3gYmogcaZQ7oZoyLwnqA5fe3E7l2Shhq85bNy2wo++7O89IEzAxu8YM4+0MHsh7
	Katg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 1/3] tools: add xc_is_known_page_type to libxenctrl
Date: Wed, 28 Oct 2020 15:30:22 +0100
Message-Id: <20201028143024.26833-2-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201028143024.26833-1-olaf@aepfle.de>
References: <20201028143024.26833-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Users of xc_get_pfn_type_batch may want to sanity-check the data
returned by Xen. Add a simple helper for this purpose.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/ctrl/xc_private.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 5d2c7274fb..afb08aafe1 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -421,6 +421,39 @@ void *xc_map_foreign_ranges(xc_interface *xch, uint32_t dom,
 int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom,
                           unsigned int num, xen_pfn_t *);
 
+/* Sanity check for types returned by Xen */
+static inline bool xc_is_known_page_type(xen_pfn_t type)
+{
+    bool ret;
+
+    switch (type)
+    {
+    case XEN_DOMCTL_PFINFO_NOTAB:
+
+    case XEN_DOMCTL_PFINFO_L1TAB:
+    case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L2TAB:
+    case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L3TAB:
+    case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L4TAB:
+    case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_XTAB:
+    case XEN_DOMCTL_PFINFO_XALLOC:
+    case XEN_DOMCTL_PFINFO_BROKEN:
+        ret = true;
+        break;
+    default:
+        ret = false;
+        break;
+    }
+    return ret;
+}
+
 void bitmap_64_to_byte(uint8_t *bp, const uint64_t *lp, int nbits);
 void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 14:32:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 14:32:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13618.34364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmUy-0000ct-8u; Wed, 28 Oct 2020 14:32:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13618.34364; Wed, 28 Oct 2020 14:32:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmUy-0000cm-5a; Wed, 28 Oct 2020 14:32:24 +0000
Received: by outflank-mailman (input) for mailman id 13618;
 Wed, 28 Oct 2020 14:23:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hOAa=ED=kernel.org=mchehab@srs-us1.protection.inumbo.net>)
 id 1kXmMV-0007te-S9
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:23:39 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5d30928-597a-4f13-97b0-774a60d20f7d;
 Wed, 28 Oct 2020 14:23:38 +0000 (UTC)
Received: from mail.kernel.org (ip5f5ad5b2.dynamic.kabel-deutschland.de
 [95.90.213.178])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9E0EB247CE;
 Wed, 28 Oct 2020 14:23:36 +0000 (UTC)
Received: from mchehab by mail.kernel.org with local (Exim 4.94)
 (envelope-from <mchehab@kernel.org>)
 id 1kXmMP-003hlf-2e; Wed, 28 Oct 2020 15:23:33 +0100
X-Inumbo-ID: f5d30928-597a-4f13-97b0-774a60d20f7d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603895017;
	bh=BciyExrejnTWlcFJoNuwR0ZZFbaqUgHrgchsLWHRRrs=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=xg8iLTHpAt+sKNFzz8w5Z7VRWae9UblGrR3hkSRHgoIF1X0lJiqotn3SXkGcfcYBI
	 1ueJjqcTIyD8YnVj6x5JurDk+94FS+xSFS0yEYwnrt6/F8B795mHSHHPjj2pf0/CCl
	 ynoZ3s0NkABw+gJeSX5qpoZ5YQ0NJtoaQnyLQ7VM=
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>,
	"Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
	"Jason A. Donenfeld" <Jason@zx2c4.com>,
	=?UTF-8?q?Javier=20Gonz=C3=A1lez?= <javier@javigon.com>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	"Mauro Carvalho Chehab" <mchehab+huawei@kernel.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Alexandre Belloni <alexandre.belloni@bootlin.com>,
	Alexandre Torgue <alexandre.torgue@st.com>,
	Andrew Donnellan <ajd@linux.ibm.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Baolin Wang <baolin.wang7@gmail.com>,
	Benson Leung <bleung@chromium.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Bruno Meneguele <bmeneg@redhat.com>,
	Chunyan Zhang <zhang.lyra@gmail.com>,
	Dan Murphy <dmurphy@ti.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Enric Balletbo i Serra <enric.balletbo@collabora.com>,
	Fabrice Gasnier <fabrice.gasnier@st.com>,
	Felipe Balbi <balbi@kernel.org>,
	Frederic Barrat <fbarrat@linux.ibm.com>,
	Guenter Roeck <groeck@chromium.org>,
	Hanjun Guo <guohanjun@huawei.com>,
	Heikki Krogerus <heikki.krogerus@linux.intel.com>,
	Jens Axboe <axboe@kernel.dk>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	Jonathan Cameron <jic23@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Konstantin Khlebnikov <koct9i@gmail.com>,
	Kranthi Kuntala <kranthi.kuntala@intel.com>,
	Lakshmi Ramasubramanian <nramas@linux.microsoft.com>,
	Lars-Peter Clausen <lars@metafoo.de>,
	Len Brown <lenb@kernel.org>,
	Leonid Maksymchuk <leonmaxx@gmail.com>,
	Ludovic Desroches <ludovic.desroches@microchip.com>,
	Mario Limonciello <mario.limonciello@dell.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Mimi Zohar <zohar@linux.ibm.com>,
	Nayna Jain <nayna@linux.ibm.com>,
	Nicolas Ferre <nicolas.ferre@microchip.com>,
	Niklas Cassel <niklas.cassel@wdc.com>,
	Oleh Kravchenko <oleg@kaa.org.ua>,
	Orson Zhai <orsonzhai@gmail.com>,
	Pavel Machek <pavel@ucw.cz>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Peter Meerwald-Stadler <pmeerw@pmeerw.net>,
	Peter Rosin <peda@axentia.se>,
	Petr Mladek <pmladek@suse.com>,
	Philippe Bergheaud <felix@linux.ibm.com>,
	Richard Cochran <richardcochran@gmail.com>,
	Sebastian Reichel <sre@kernel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vineela Tummalapalli <vineela.tummalapalli@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	linux-acpi@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-iio@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-pm@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-usb@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH 20/33] docs: ABI: testing: make the files compatible with ReST output
Date: Wed, 28 Oct 2020 15:23:18 +0100
Message-Id: <4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <cover.1603893146.git.mchehab+huawei@kernel.org>
References: <cover.1603893146.git.mchehab+huawei@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: Mauro Carvalho Chehab <mchehab@kernel.org>

From: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>

Some of the files there don't parse well with Sphinx.

Fix them.
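The recurring conversion in this patch turns free-form "name - description" lists into ReST "simple tables", whose column boundaries are defined by the rows of `=` signs. A minimal illustrative fragment of that convention (not taken from any of the files touched here; `foo` and `bar` are made-up attribute names):

```rst
=========  ==============================
attribute  description
=========  ==============================
foo        what the hypothetical foo does
bar        what the hypothetical bar does
=========  ==============================
```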

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
---
 .../ABI/testing/configfs-spear-pcie-gadget    |  36 +--
 Documentation/ABI/testing/configfs-usb-gadget |  83 +++---
 .../ABI/testing/configfs-usb-gadget-hid       |  10 +-
 .../ABI/testing/configfs-usb-gadget-rndis     |  16 +-
 .../ABI/testing/configfs-usb-gadget-uac1      |  18 +-
 .../ABI/testing/configfs-usb-gadget-uvc       | 220 +++++++++-------
 Documentation/ABI/testing/debugfs-ec          |  11 +-
 Documentation/ABI/testing/debugfs-pktcdvd     |  11 +-
 Documentation/ABI/testing/dev-kmsg            |  27 +-
 Documentation/ABI/testing/evm                 |  17 +-
 Documentation/ABI/testing/ima_policy          |  30 ++-
 Documentation/ABI/testing/procfs-diskstats    |  40 +--
 Documentation/ABI/testing/sysfs-block         |  38 +--
 Documentation/ABI/testing/sysfs-block-device  |   2 +
 Documentation/ABI/testing/sysfs-bus-acpi      |  18 +-
 .../sysfs-bus-event_source-devices-format     |   3 +-
 .../ABI/testing/sysfs-bus-i2c-devices-pca954x |  27 +-
 Documentation/ABI/testing/sysfs-bus-iio       |  11 +
 .../sysfs-bus-iio-adc-envelope-detector       |   5 +-
 .../ABI/testing/sysfs-bus-iio-cros-ec         |   2 +-
 .../ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32 |   8 +-
 .../ABI/testing/sysfs-bus-iio-lptimer-stm32   |  29 ++-
 .../sysfs-bus-iio-magnetometer-hmc5843        |  19 +-
 .../sysfs-bus-iio-temperature-max31856        |  19 +-
 .../ABI/testing/sysfs-bus-iio-timer-stm32     | 137 ++++++----
 .../testing/sysfs-bus-intel_th-devices-msc    |   4 +
 Documentation/ABI/testing/sysfs-bus-nfit      |   2 +-
 .../testing/sysfs-bus-pci-devices-aer_stats   | 119 +++++----
 Documentation/ABI/testing/sysfs-bus-rapidio   |  23 +-
 .../ABI/testing/sysfs-bus-thunderbolt         |  40 +--
 Documentation/ABI/testing/sysfs-bus-usb       |  30 ++-
 .../testing/sysfs-bus-usb-devices-usbsevseg   |   7 +-
 Documentation/ABI/testing/sysfs-bus-vfio-mdev |  10 +-
 Documentation/ABI/testing/sysfs-class-cxl     |  15 +-
 Documentation/ABI/testing/sysfs-class-led     |   2 +-
 .../testing/sysfs-class-led-driver-el15203000 | 229 ++++++++---------
 .../ABI/testing/sysfs-class-led-driver-sc27xx |   4 +-
 Documentation/ABI/testing/sysfs-class-mic     |  52 ++--
 Documentation/ABI/testing/sysfs-class-ocxl    |   3 +
 Documentation/ABI/testing/sysfs-class-power   |  73 +++++-
 .../ABI/testing/sysfs-class-power-twl4030     |  33 +--
 Documentation/ABI/testing/sysfs-class-rc      |  30 ++-
 .../ABI/testing/sysfs-class-scsi_host         |   7 +-
 Documentation/ABI/testing/sysfs-class-typec   |  12 +-
 .../testing/sysfs-devices-platform-ACPI-TAD   |   4 +
 .../ABI/testing/sysfs-devices-platform-docg3  |  10 +-
 .../sysfs-devices-platform-sh_mobile_lcdc_fb  |   8 +-
 .../ABI/testing/sysfs-devices-system-cpu      |  99 +++++---
 .../ABI/testing/sysfs-devices-system-ibm-rtl  |   6 +-
 .../testing/sysfs-driver-bd9571mwv-regulator  |   4 +
 Documentation/ABI/testing/sysfs-driver-genwqe |  11 +-
 .../testing/sysfs-driver-hid-logitech-lg4ff   |  18 +-
 .../ABI/testing/sysfs-driver-hid-wiimote      |  11 +-
 .../ABI/testing/sysfs-driver-samsung-laptop   |  13 +-
 .../ABI/testing/sysfs-driver-toshiba_acpi     |  26 ++
 .../ABI/testing/sysfs-driver-toshiba_haps     |   2 +
 Documentation/ABI/testing/sysfs-driver-wacom  |   4 +-
 Documentation/ABI/testing/sysfs-firmware-acpi | 237 +++++++++---------
 .../ABI/testing/sysfs-firmware-dmi-entries    |  50 ++--
 Documentation/ABI/testing/sysfs-firmware-gsmi |   2 +-
 .../ABI/testing/sysfs-firmware-memmap         |  16 +-
 Documentation/ABI/testing/sysfs-fs-ext4       |   4 +-
 .../ABI/testing/sysfs-hypervisor-xen          |  13 +-
 .../ABI/testing/sysfs-kernel-boot_params      |  23 +-
 .../ABI/testing/sysfs-kernel-mm-hugepages     |  12 +-
 .../ABI/testing/sysfs-platform-asus-laptop    |  21 +-
 .../ABI/testing/sysfs-platform-asus-wmi       |   1 +
 Documentation/ABI/testing/sysfs-platform-at91 |  10 +-
 .../ABI/testing/sysfs-platform-eeepc-laptop   |  14 +-
 .../ABI/testing/sysfs-platform-ideapad-laptop |   9 +-
 .../sysfs-platform-intel-wmi-thunderbolt      |   1 +
 .../ABI/testing/sysfs-platform-sst-atom       |  13 +-
 .../ABI/testing/sysfs-platform-usbip-vudc     |  11 +-
 Documentation/ABI/testing/sysfs-ptp           |   2 +-
 Documentation/ABI/testing/sysfs-uevent        |  10 +-
 75 files changed, 1328 insertions(+), 869 deletions(-)

diff --git a/Documentation/ABI/testing/configfs-spear-pcie-gadget b/Documentation/ABI/testing/configfs-spear-pcie-gadget
index 840c324ef34d..cf877bd341df 100644
--- a/Documentation/ABI/testing/configfs-spear-pcie-gadget
+++ b/Documentation/ABI/testing/configfs-spear-pcie-gadget
@@ -10,22 +10,24 @@ Description:
 	This interfaces can be used to show spear's PCIe device capability.
 
 	Nodes are only visible when configfs is mounted. To mount configfs
-	in /config directory use:
-	# mount -t configfs none /config/
+	in /config directory use::
 
-	For nth PCIe Device Controller
-	/config/pcie-gadget.n/
-		link ... used to enable ltssm and read its status.
-		int_type ...used to configure and read type of supported
-			interrupt
-		no_of_msi ... used to configure number of MSI vector needed and
+	  # mount -t configfs none /config/
+
+	For nth PCIe Device Controller /config/pcie-gadget.n/:
+
+	=============== ======================================================
+	link		used to enable ltssm and read its status.
+	int_type	used to configure and read type of supported interrupt
+	no_of_msi	used to configure number of MSI vector needed and
 			to read no of MSI granted.
-		inta ... write 1 to assert INTA and 0 to de-assert.
-		send_msi ... write MSI vector to be sent.
-		vendor_id ... used to write and read vendor id (hex)
-		device_id ... used to write and read device id (hex)
-		bar0_size ... used to write and read bar0_size
-		bar0_address ... used to write and read bar0 mapped area in hex.
-		bar0_rw_offset ... used to write and read offset of bar0 where
-			bar0_data will be written or read.
-		bar0_data ... used to write and read data at bar0_rw_offset.
+	inta		write 1 to assert INTA and 0 to de-assert.
+	send_msi	write MSI vector to be sent.
+	vendor_id	used to write and read vendor id (hex)
+	device_id	used to write and read device id (hex)
+	bar0_size	used to write and read bar0_size
+	bar0_address	used to write and read bar0 mapped area in hex.
+	bar0_rw_offset	used to write and read offset of bar0 where bar0_data
+			will be written or read.
+	bar0_data	used to write and read data at bar0_rw_offset.
+	=============== ======================================================
diff --git a/Documentation/ABI/testing/configfs-usb-gadget b/Documentation/ABI/testing/configfs-usb-gadget
index 4594cc2435e8..dc351e9af80a 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget
+++ b/Documentation/ABI/testing/configfs-usb-gadget
@@ -12,22 +12,24 @@ Description:
 
 		The attributes of a gadget:
 
-		UDC		- bind a gadget to UDC/unbind a gadget;
-				write UDC's name found in /sys/class/udc/*
-				to bind a gadget, empty string "" to unbind.
+		================  ============================================
+		UDC		  bind a gadget to UDC/unbind a gadget;
+				  write UDC's name found in /sys/class/udc/*
+				  to bind a gadget, empty string "" to unbind.
 
-		max_speed	- maximum speed the driver supports. Valid
-				names are super-speed-plus, super-speed,
-				high-speed, full-speed, and low-speed.
+		max_speed	  maximum speed the driver supports. Valid
+				  names are super-speed-plus, super-speed,
+				  high-speed, full-speed, and low-speed.
 
-		bDeviceClass	- USB device class code
-		bDeviceSubClass	- USB device subclass code
-		bDeviceProtocol	- USB device protocol code
-		bMaxPacketSize0	- maximum endpoint 0 packet size
-		bcdDevice	- bcd device release number
-		bcdUSB		- bcd USB specification version number
-		idProduct	- product ID
-		idVendor	- vendor ID
+		bDeviceClass	  USB device class code
+		bDeviceSubClass	  USB device subclass code
+		bDeviceProtocol	  USB device protocol code
+		bMaxPacketSize0	  maximum endpoint 0 packet size
+		bcdDevice	  bcd device release number
+		bcdUSB		  bcd USB specification version number
+		idProduct	  product ID
+		idVendor	  vendor ID
+		================  ============================================
 
 What:		/config/usb-gadget/gadget/configs
 Date:		Jun 2013
@@ -41,8 +43,10 @@ KernelVersion:	3.11
 Description:
 		The attributes of a configuration:
 
-		bmAttributes	- configuration characteristics
-		MaxPower	- maximum power consumption from the bus
+		================  ======================================
+		bmAttributes	  configuration characteristics
+		MaxPower	  maximum power consumption from the bus
+		================  ======================================
 
 What:		/config/usb-gadget/gadget/configs/config/strings
 Date:		Jun 2013
@@ -57,7 +61,9 @@ KernelVersion:	3.11
 Description:
 		The attributes:
 
-		configuration	- configuration description
+		================  =========================
+		configuration	  configuration description
+		================  =========================
 
 
 What:		/config/usb-gadget/gadget/functions
@@ -76,8 +82,10 @@ Description:
 
 		The attributes:
 
-		compatible_id		- 8-byte string for "Compatible ID"
-		sub_compatible_id	- 8-byte string for "Sub Compatible ID"
+		=================	=====================================
+		compatible_id		8-byte string for "Compatible ID"
+		sub_compatible_id	8-byte string for "Sub Compatible ID"
+		=================	=====================================
 
 What:		/config/usb-gadget/gadget/functions/<func>.<inst>/interface.<n>/<property>
 Date:		May 2014
@@ -89,16 +97,19 @@ Description:
 
 		The attributes:
 
-		type		- value 1..7 for interpreting the data
-				1: unicode string
-				2: unicode string with environment variable
-				3: binary
-				4: little-endian 32-bit
-				5: big-endian 32-bit
-				6: unicode string with a symbolic link
-				7: multiple unicode strings
-		data		- blob of data to be interpreted depending on
+		=====		===============================================
+		type		value 1..7 for interpreting the data
+
+				- 1: unicode string
+				- 2: unicode string with environment variable
+				- 3: binary
+				- 4: little-endian 32-bit
+				- 5: big-endian 32-bit
+				- 6: unicode string with a symbolic link
+				- 7: multiple unicode strings
+		data		blob of data to be interpreted depending on
 				type
+		=====		===============================================
 
 What:		/config/usb-gadget/gadget/strings
 Date:		Jun 2013
@@ -113,9 +124,11 @@ KernelVersion:	3.11
 Description:
 		The attributes:
 
-		serialnumber	- gadget's serial number (string)
-		product		- gadget's product description
-		manufacturer	- gadget's manufacturer description
+		============	=================================
+		serialnumber	gadget's serial number (string)
+		product		gadget's product description
+		manufacturer	gadget's manufacturer description
+		============	=================================
 
 What:		/config/usb-gadget/gadget/os_desc
 Date:		May 2014
@@ -123,8 +136,10 @@ KernelVersion:	3.16
 Description:
 		This group contains "OS String" extension handling attributes.
 
-		use		- flag turning "OS Desctiptors" support on/off
-		b_vendor_code	- one-byte value used for custom per-device and
+		=============	===============================================
+		use		flag turning "OS Descriptors" support on/off
+		b_vendor_code	one-byte value used for custom per-device and
 				per-interface requests
-		qw_sign		- an identifier to be reported as "OS String"
+		qw_sign		an identifier to be reported as "OS String"
 				proper
+		=============	===============================================
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-hid b/Documentation/ABI/testing/configfs-usb-gadget-hid
index f12e00e6baa3..748705c4cb58 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-hid
+++ b/Documentation/ABI/testing/configfs-usb-gadget-hid
@@ -4,8 +4,10 @@ KernelVersion:	3.19
 Description:
 		The attributes:
 
-		protocol	- HID protocol to use
-		report_desc	- blob corresponding to HID report descriptors
+		=============	============================================
+		protocol	HID protocol to use
+		report_desc	blob corresponding to HID report descriptors
 				except the data passed through /dev/hidg<N>
-		report_length	- HID report length
-		subclass	- HID device subclass to use
+		report_length	HID report length
+		subclass	HID device subclass to use
+		=============	============================================
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-rndis b/Documentation/ABI/testing/configfs-usb-gadget-rndis
index 137399095d74..9416eda7fe93 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-rndis
+++ b/Documentation/ABI/testing/configfs-usb-gadget-rndis
@@ -4,14 +4,16 @@ KernelVersion:	3.11
 Description:
 		The attributes:
 
-		ifname		- network device interface name associated with
+		=========	=============================================
+		ifname		network device interface name associated with
 				this function instance
-		qmult		- queue length multiplier for high and
+		qmult		queue length multiplier for high and
 				super speed
-		host_addr	- MAC address of host's end of this
+		host_addr	MAC address of host's end of this
 				Ethernet over USB link
-		dev_addr	- MAC address of device's end of this
+		dev_addr	MAC address of device's end of this
 				Ethernet over USB link
-		class		- USB interface class, default is 02 (hex)
-		subclass	- USB interface subclass, default is 06 (hex)
-		protocol	- USB interface protocol, default is 00 (hex)
+		class		USB interface class, default is 02 (hex)
+		subclass	USB interface subclass, default is 06 (hex)
+		protocol	USB interface protocol, default is 00 (hex)
+		=========	=============================================
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-uac1 b/Documentation/ABI/testing/configfs-usb-gadget-uac1
index abfe447c848f..dc23fd776943 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-uac1
+++ b/Documentation/ABI/testing/configfs-usb-gadget-uac1
@@ -4,11 +4,13 @@ KernelVersion:	4.14
 Description:
 		The attributes:
 
-		c_chmask - capture channel mask
-		c_srate - capture sampling rate
-		c_ssize - capture sample size (bytes)
-		p_chmask - playback channel mask
-		p_srate - playback sampling rate
-		p_ssize - playback sample size (bytes)
-		req_number - the number of pre-allocated request
-			for both capture and playback
+		==========	===================================
+		c_chmask	capture channel mask
+		c_srate		capture sampling rate
+		c_ssize		capture sample size (bytes)
+		p_chmask	playback channel mask
+		p_srate		playback sampling rate
+		p_ssize		playback sample size (bytes)
+		req_number	the number of pre-allocated request
+				for both capture and playback
+		==========	===================================
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-uvc b/Documentation/ABI/testing/configfs-usb-gadget-uvc
index 809765bd9573..cee81b0347bb 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-uvc
+++ b/Documentation/ABI/testing/configfs-usb-gadget-uvc
@@ -3,9 +3,11 @@ Date:		Dec 2014
 KernelVersion:	4.0
 Description:	UVC function directory
 
-		streaming_maxburst	- 0..15 (ss only)
-		streaming_maxpacket	- 1..1023 (fs), 1..3072 (hs/ss)
-		streaming_interval	- 1..16
+		===================	=============================
+		streaming_maxburst	0..15 (ss only)
+		streaming_maxpacket	1..1023 (fs), 1..3072 (hs/ss)
+		streaming_interval	1..16
+		===================	=============================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control
 Date:		Dec 2014
@@ -13,8 +15,11 @@ KernelVersion:	4.0
 Description:	Control descriptors
 
 		All attributes read only:
-		bInterfaceNumber	- USB interface number for this
-					  streaming interface
+
+		================	=============================
+		bInterfaceNumber	USB interface number for this
+					streaming interface
+		================	=============================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control/class
 Date:		Dec 2014
@@ -47,13 +52,16 @@ KernelVersion:	4.0
 Description:	Default output terminal descriptors
 
 		All attributes read only:
-		iTerminal	- index of string descriptor
-		bSourceID 	- id of the terminal to which this terminal
+
+		==============	=============================================
+		iTerminal	index of string descriptor
+		bSourceID	id of the terminal to which this terminal
 				is connected
-		bAssocTerminal	- id of the input terminal to which this output
+		bAssocTerminal	id of the input terminal to which this output
 				terminal is associated
-		wTerminalType	- terminal type
-		bTerminalID	- a non-zero id of this terminal
+		wTerminalType	terminal type
+		bTerminalID	a non-zero id of this terminal
+		==============	=============================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control/terminal/camera
 Date:		Dec 2014
@@ -66,16 +74,19 @@ KernelVersion:	4.0
 Description:	Default camera terminal descriptors
 
 		All attributes read only:
-		bmControls		- bitmap specifying which controls are
-					supported for the video stream
-		wOcularFocalLength	- the value of Locular
-		wObjectiveFocalLengthMax- the value of Lmin
-		wObjectiveFocalLengthMin- the value of Lmax
-		iTerminal		- index of string descriptor
-		bAssocTerminal		- id of the output terminal to which
-					this terminal is connected
-		wTerminalType		- terminal type
-		bTerminalID		- a non-zero id of this terminal
+
+		========================  ====================================
+		bmControls		  bitmap specifying which controls are
+					  supported for the video stream
+		wOcularFocalLength	  the value of Locular
+		wObjectiveFocalLengthMax  the value of Lmax
+		wObjectiveFocalLengthMin  the value of Lmin
+		iTerminal		  index of string descriptor
+		bAssocTerminal		  id of the output terminal to which
+					  this terminal is connected
+		wTerminalType		  terminal type
+		bTerminalID		  a non-zero id of this terminal
+		========================  ====================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control/processing
 Date:		Dec 2014
@@ -88,13 +99,16 @@ KernelVersion:	4.0
 Description:	Default processing unit descriptors
 
 		All attributes read only:
-		iProcessing	- index of string descriptor
-		bmControls	- bitmap specifying which controls are
+
+		===============	========================================
+		iProcessing	index of string descriptor
+		bmControls	bitmap specifying which controls are
 				supported for the video stream
-		wMaxMultiplier	- maximum digital magnification x100
-		bSourceID	- id of the terminal to which this unit is
+		wMaxMultiplier	maximum digital magnification x100
+		bSourceID	id of the terminal to which this unit is
 				connected
-		bUnitID		- a non-zero id of this unit
+		bUnitID		a non-zero id of this unit
+		===============	========================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control/header
 Date:		Dec 2014
@@ -114,8 +128,11 @@ KernelVersion:	4.0
 Description:	Streaming descriptors
 
 		All attributes read only:
-		bInterfaceNumber	- USB interface number for this
-					  streaming interface
+
+		================	=============================
+		bInterfaceNumber	USB interface number for this
+					streaming interface
+		================	=============================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/class
 Date:		Dec 2014
@@ -148,13 +165,16 @@ KernelVersion:	4.0
 Description:	Default color matching descriptors
 
 		All attributes read only:
-		bMatrixCoefficients	- matrix used to compute luma and
-					chroma values from the color primaries
-		bTransferCharacteristics- optoelectronic transfer
-					characteristic of the source picutre,
-					also called the gamma function
-		bColorPrimaries		- color primaries and the reference
-					white
+
+		========================  ======================================
+		bMatrixCoefficients	  matrix used to compute luma and
+					  chroma values from the color primaries
+		bTransferCharacteristics  optoelectronic transfer
+					  characteristic of the source picture,
+					  also called the gamma function
+		bColorPrimaries		  color primaries and the reference
+					  white
+		========================  ======================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/mjpeg
 Date:		Dec 2014
@@ -168,47 +188,52 @@ Description:	Specific MJPEG format descriptors
 
 		All attributes read only,
 		except bmaControls and bDefaultFrameIndex:
-		bFormatIndex		- unique id for this format descriptor;
+
+		===================	=====================================
+		bFormatIndex		unique id for this format descriptor;
 					only defined after parent header is
 					linked into the streaming class;
 					read-only
-		bmaControls		- this format's data for bmaControls in
+		bmaControls		this format's data for bmaControls in
 					the streaming header
-		bmInterfaceFlags	- specifies interlace information,
+		bmInterfaceFlags	specifies interlace information,
 					read-only
-		bAspectRatioY		- the X dimension of the picture aspect
+		bAspectRatioY		the Y dimension of the picture aspect
 					ratio, read-only
-		bAspectRatioX		- the Y dimension of the picture aspect
+		bAspectRatioX		the X dimension of the picture aspect
 					ratio, read-only
-		bmFlags			- characteristics of this format,
+		bmFlags			characteristics of this format,
 					read-only
-		bDefaultFrameIndex	- optimum frame index for this stream
+		bDefaultFrameIndex	optimum frame index for this stream
+		===================	=====================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/mjpeg/name/name
 Date:		Dec 2014
 KernelVersion:	4.0
 Description:	Specific MJPEG frame descriptors
 
-		bFrameIndex		- unique id for this framedescriptor;
-					only defined after parent format is
-					linked into the streaming header;
-					read-only
-		dwFrameInterval		- indicates how frame interval can be
-					programmed; a number of values
-					separated by newline can be specified
-		dwDefaultFrameInterval	- the frame interval the device would
-					like to use as default
-		dwMaxVideoFrameBufferSize- the maximum number of bytes the
-					compressor will produce for a video
-					frame or still image
-		dwMaxBitRate		- the maximum bit rate at the shortest
-					frame interval in bps
-		dwMinBitRate		- the minimum bit rate at the longest
-					frame interval in bps
-		wHeight			- height of decoded bitmap frame in px
-		wWidth			- width of decoded bitmam frame in px
-		bmCapabilities		- still image support, fixed frame-rate
-					support
+		=========================  =====================================
+		bFrameIndex		   unique id for this frame descriptor;
+					   only defined after parent format is
+					   linked into the streaming header;
+					   read-only
+		dwFrameInterval		   indicates how frame interval can be
+					   programmed; a number of values
+					   separated by newline can be specified
+		dwDefaultFrameInterval	   the frame interval the device would
+					   like to use as default
+		dwMaxVideoFrameBufferSize  the maximum number of bytes the
+					   compressor will produce for a video
+					   frame or still image
+		dwMaxBitRate		   the maximum bit rate at the shortest
+					   frame interval in bps
+		dwMinBitRate		   the minimum bit rate at the longest
+					   frame interval in bps
+		wHeight			   height of decoded bitmap frame in px
+		wWidth			   width of decoded bitmap frame in px
+		bmCapabilities		   still image support, fixed frame-rate
+					   support
+		=========================  =====================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/uncompressed
 Date:		Dec 2014
@@ -220,50 +245,54 @@ Date:		Dec 2014
 KernelVersion:	4.0
 Description:	Specific uncompressed format descriptors
 
-		bFormatIndex		- unique id for this format descriptor;
+		==================	=======================================
+		bFormatIndex		unique id for this format descriptor;
 					only defined after parent header is
 					linked into the streaming class;
 					read-only
-		bmaControls		- this format's data for bmaControls in
+		bmaControls		this format's data for bmaControls in
 					the streaming header
-		bmInterfaceFlags	- specifies interlace information,
+		bmInterfaceFlags	specifies interlace information,
 					read-only
-		bAspectRatioY		- the X dimension of the picture aspect
+		bAspectRatioY		the Y dimension of the picture aspect
 					ratio, read-only
-		bAspectRatioX		- the Y dimension of the picture aspect
+		bAspectRatioX		the X dimension of the picture aspect
 					ratio, read-only
-		bDefaultFrameIndex	- optimum frame index for this stream
-		bBitsPerPixel		- number of bits per pixel used to
+		bDefaultFrameIndex	optimum frame index for this stream
+		bBitsPerPixel		number of bits per pixel used to
 					specify color in the decoded video
 					frame
-		guidFormat		- globally unique id used to identify
+		guidFormat		globally unique id used to identify
 					stream-encoding format
+		==================	=======================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/uncompressed/name/name
 Date:		Dec 2014
 KernelVersion:	4.0
 Description:	Specific uncompressed frame descriptors
 
-		bFrameIndex		- unique id for this framedescriptor;
-					only defined after parent format is
-					linked into the streaming header;
-					read-only
-		dwFrameInterval		- indicates how frame interval can be
-					programmed; a number of values
-					separated by newline can be specified
-		dwDefaultFrameInterval	- the frame interval the device would
-					like to use as default
-		dwMaxVideoFrameBufferSize- the maximum number of bytes the
-					compressor will produce for a video
-					frame or still image
-		dwMaxBitRate		- the maximum bit rate at the shortest
-					frame interval in bps
-		dwMinBitRate		- the minimum bit rate at the longest
-					frame interval in bps
-		wHeight			- height of decoded bitmap frame in px
-		wWidth			- width of decoded bitmam frame in px
-		bmCapabilities		- still image support, fixed frame-rate
-					support
+		=========================  =====================================
+		bFrameIndex		   unique id for this frame descriptor;
+					   only defined after parent format is
+					   linked into the streaming header;
+					   read-only
+		dwFrameInterval		   indicates how frame interval can be
+					   programmed; a number of values
+					   separated by newline can be specified
+		dwDefaultFrameInterval	   the frame interval the device would
+					   like to use as default
+		dwMaxVideoFrameBufferSize  the maximum number of bytes the
+					   compressor will produce for a video
+					   frame or still image
+		dwMaxBitRate		   the maximum bit rate at the shortest
+					   frame interval in bps
+		dwMinBitRate		   the minimum bit rate at the longest
+					   frame interval in bps
+		wHeight			   height of decoded bitmap frame in px
+		wWidth			   width of decoded bitmap frame in px
+		bmCapabilities		   still image support, fixed frame-rate
+					   support
+		=========================  =====================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/header
 Date:		Dec 2014
@@ -276,17 +305,20 @@ KernelVersion:	4.0
 Description:	Specific streaming header descriptors
 
 		All attributes read only:
-		bTriggerUsage		- how the host software will respond to
+
+		====================	=====================================
+		bTriggerUsage		how the host software will respond to
 					a hardware trigger interrupt event
-		bTriggerSupport		- flag specifying if hardware
+		bTriggerSupport		flag specifying if hardware
 					triggering is supported
-		bStillCaptureMethod	- method of still image caputre
+		bStillCaptureMethod	method of still image capture
 					supported
-		bTerminalLink		- id of the output terminal to which
+		bTerminalLink		id of the output terminal to which
 					the video endpoint of this interface
 					is connected
-		bmInfo			- capabilities of this video streaming
+		bmInfo			capabilities of this video streaming
 					interface
+		====================	=====================================
 
 What:		/sys/class/udc/udc.name/device/gadget/video4linux/video.name/function_name
 Date:		May 2018
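As a quick sanity check when setting the writable streaming attributes above, the documented ranges can be encoded directly. A sketch (the helper functions are illustrative, not part of the ABI):

```python
def valid_streaming_maxpacket(value: int, speed: str) -> bool:
    """Check a streaming_maxpacket value against the documented
    ranges: 1..1023 (fs), 1..3072 (hs/ss)."""
    upper = {"fs": 1023, "hs": 3072, "ss": 3072}[speed]
    return 1 <= value <= upper

def valid_streaming_maxburst(value: int) -> bool:
    """streaming_maxburst is documented as 0..15 (ss only)."""
    return 0 <= value <= 15
```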
diff --git a/Documentation/ABI/testing/debugfs-ec b/Documentation/ABI/testing/debugfs-ec
index 6546115a94da..ab6099daa8f5 100644
--- a/Documentation/ABI/testing/debugfs-ec
+++ b/Documentation/ABI/testing/debugfs-ec
@@ -6,7 +6,7 @@ Description:
 General information like which GPE is assigned to the EC and whether
 the global lock should get used.
 Knowing the EC GPE one can watch the amount of HW events related to
-the EC here (XY -> GPE number from /sys/kernel/debug/ec/*/gpe):
+the EC here (XY -> GPE number from `/sys/kernel/debug/ec/*/gpe`):
 /sys/firmware/acpi/interrupts/gpeXY
 
 The io file is binary and a userspace tool located here:
@@ -14,7 +14,8 @@ ftp://ftp.suse.com/pub/people/trenn/sources/ec/
 should get used to read out the 256 Embedded Controller registers
 or writing to them.
 
-CAUTION: Do not write to the Embedded Controller if you don't know
-what you are doing! Rebooting afterwards also is a good idea.
-This can influence the way your machine is cooled and fans may
-not get switched on again after you did a wrong write.
+CAUTION:
+  Do not write to the Embedded Controller if you don't know
+  what you are doing! Rebooting afterwards is also a good idea.
+  This can influence the way your machine is cooled and fans may
+  not get switched on again after you did a wrong write.
diff --git a/Documentation/ABI/testing/debugfs-pktcdvd b/Documentation/ABI/testing/debugfs-pktcdvd
index cf11736acb76..787907d70462 100644
--- a/Documentation/ABI/testing/debugfs-pktcdvd
+++ b/Documentation/ABI/testing/debugfs-pktcdvd
@@ -4,16 +4,15 @@ KernelVersion:  2.6.20
 Contact:        Thomas Maier <balagi@justmail.de>
 Description:
 
-debugfs interface
------------------
-
 The pktcdvd module (packet writing driver) creates
 these files in debugfs:
 
 /sys/kernel/debug/pktcdvd/pktcdvd[0-7]/
+
+    ====            ====== ====================================
     info            (0444) Lots of driver statistics and infos.
+    ====            ====== ====================================
 
-Example:
--------
+Example::
 
-cat /sys/kernel/debug/pktcdvd/pktcdvd0/info
+    cat /sys/kernel/debug/pktcdvd/pktcdvd0/info
diff --git a/Documentation/ABI/testing/dev-kmsg b/Documentation/ABI/testing/dev-kmsg
index 3c0bb76e3417..a377b6c093c9 100644
--- a/Documentation/ABI/testing/dev-kmsg
+++ b/Documentation/ABI/testing/dev-kmsg
@@ -6,6 +6,7 @@ Description:	The /dev/kmsg character device node provides userspace access
 		to the kernel's printk buffer.
 
 		Injecting messages:
+
 		Every write() to the opened device node places a log entry in
 		the kernel's printk buffer.
 
@@ -21,6 +22,7 @@ Description:	The /dev/kmsg character device node provides userspace access
 		the messages can always be reliably determined.
 
 		Accessing the buffer:
+
 		Every read() from the opened device node receives one record
 		of the kernel's printk buffer.
 
@@ -48,6 +50,7 @@ Description:	The /dev/kmsg character device node provides userspace access
 		if needed, without limiting the interface to a single reader.
 
 		The device supports seek with the following parameters:
+
 		SEEK_SET, 0
 		  seek to the first entry in the buffer
 		SEEK_END, 0
@@ -87,18 +90,22 @@ Description:	The /dev/kmsg character device node provides userspace access
 		readable context of the message, for reliable processing in
 		userspace.
 
-		Example:
-		7,160,424069,-;pci_root PNP0A03:00: host bridge window [io  0x0000-0x0cf7] (ignored)
-		 SUBSYSTEM=acpi
-		 DEVICE=+acpi:PNP0A03:00
-		6,339,5140900,-;NET: Registered protocol family 10
-		30,340,5690716,-;udevd[80]: starting version 181
+		Example::
+
+		  7,160,424069,-;pci_root PNP0A03:00: host bridge window [io  0x0000-0x0cf7] (ignored)
+		   SUBSYSTEM=acpi
+		   DEVICE=+acpi:PNP0A03:00
+		  6,339,5140900,-;NET: Registered protocol family 10
+		  30,340,5690716,-;udevd[80]: starting version 181
 
 		The DEVICE= key uniquely identifies devices the following way:
-		  b12:8        - block dev_t
-		  c127:3       - char dev_t
-		  n8           - netdev ifindex
-		  +sound:card0 - subsystem:devname
+
+		  ============  =================
+		  b12:8         block dev_t
+		  c127:3        char dev_t
+		  n8            netdev ifindex
+		  +sound:card0  subsystem:devname
+		  ============  =================
 
 		The flags field carries '-' by default. A 'c' indicates a
 		fragment of a line. Note, that these hints about continuation
diff --git a/Documentation/ABI/testing/evm b/Documentation/ABI/testing/evm
index 201d10319fa1..3c477ba48a31 100644
--- a/Documentation/ABI/testing/evm
+++ b/Documentation/ABI/testing/evm
@@ -17,26 +17,33 @@ Description:
 		echoing a value to <securityfs>/evm made up of the
 		following bits:
 
+		===	  ==================================================
 		Bit	  Effect
+		===	  ==================================================
 		0	  Enable HMAC validation and creation
 		1	  Enable digital signature validation
 		2	  Permit modification of EVM-protected metadata at
 			  runtime. Not supported if HMAC validation and
 			  creation is enabled.
 		31	  Disable further runtime modification of EVM policy
+		===	  ==================================================
 
-		For example:
+		For example::
 
-		echo 1 ><securityfs>/evm
+		  echo 1 ><securityfs>/evm
 
 		will enable HMAC validation and creation
 
-		echo 0x80000003 ><securityfs>/evm
+		::
+
+		  echo 0x80000003 ><securityfs>/evm
 
 		will enable HMAC and digital signature validation and
 		HMAC creation and disable all further modification of policy.
 
-		echo 0x80000006 ><securityfs>/evm
+		::
+
+		  echo 0x80000006 ><securityfs>/evm
 
 		will enable digital signature validation, permit
 		modification of EVM-protected metadata and
@@ -65,7 +72,7 @@ Description:
 		Shows the set of extended attributes used to calculate or
 		validate the EVM signature, and allows additional attributes
 		to be added at runtime. Any signatures generated after
-		additional attributes are added (and on files posessing those
+		additional attributes are added (and on files possessing those
 		additional attributes) will only be valid if the same
 		additional attributes are configured on system boot. Writing
 		a single period (.) will lock the xattr list from any further
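The example values written to <securityfs>/evm are just combinations of the bits in the table. A sketch of the arithmetic (the constant names are mine, chosen for readability):

```python
EVM_HMAC_VALIDATION = 1 << 0   # enable HMAC validation and creation
EVM_SIG_VALIDATION  = 1 << 1   # enable digital signature validation
EVM_METADATA_WRITES = 1 << 2   # permit modifying EVM-protected metadata
EVM_POLICY_LOCKED   = 1 << 31  # disable further runtime modification

# The two composite values from the examples above:
print(hex(EVM_HMAC_VALIDATION | EVM_SIG_VALIDATION | EVM_POLICY_LOCKED))
print(hex(EVM_SIG_VALIDATION | EVM_METADATA_WRITES | EVM_POLICY_LOCKED))
```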
diff --git a/Documentation/ABI/testing/ima_policy b/Documentation/ABI/testing/ima_policy
index cd572912c593..e35263f97fc1 100644
--- a/Documentation/ABI/testing/ima_policy
+++ b/Documentation/ABI/testing/ima_policy
@@ -15,19 +15,22 @@ Description:
 		IMA appraisal, if configured, uses these file measurements
 		for local measurement appraisal.
 
-		rule format: action [condition ...]
+		::
 
-		action: measure | dont_measure | appraise | dont_appraise |
-			audit | hash | dont_hash
-		condition:= base | lsm  [option]
+		  rule format: action [condition ...]
+
+		  action: measure | dont_measure | appraise | dont_appraise |
+			  audit | hash | dont_hash
+		  condition:= base | lsm  [option]
 			base:	[[func=] [mask=] [fsmagic=] [fsuuid=] [uid=]
 				[euid=] [fowner=] [fsname=]]
 			lsm:	[[subj_user=] [subj_role=] [subj_type=]
 				 [obj_user=] [obj_role=] [obj_type=]]
 			option:	[[appraise_type=]] [template=] [permit_directio]
 				[appraise_flag=] [keyrings=]
-		base: 	func:= [BPRM_CHECK][MMAP_CHECK][CREDS_CHECK][FILE_CHECK][MODULE_CHECK]
-				[FIRMWARE_CHECK]
+		  base:
+			func:= [BPRM_CHECK][MMAP_CHECK][CREDS_CHECK][FILE_CHECK][MODULE_CHECK]
+			        [FIRMWARE_CHECK]
 				[KEXEC_KERNEL_CHECK] [KEXEC_INITRAMFS_CHECK]
 				[KEXEC_CMDLINE] [KEY_CHECK]
 			mask:= [[^]MAY_READ] [[^]MAY_WRITE] [[^]MAY_APPEND]
@@ -37,8 +40,9 @@ Description:
 			uid:= decimal value
 			euid:= decimal value
 			fowner:= decimal value
-		lsm:  	are LSM specific
-		option:	appraise_type:= [imasig] [imasig|modsig]
+		  lsm:  are LSM specific
+		  option:
+			appraise_type:= [imasig] [imasig|modsig]
 			appraise_flag:= [check_blacklist]
 			Currently, blacklist check is only for files signed with appended
 			signature.
@@ -49,7 +53,7 @@ Description:
 			(eg, ima-ng). Only valid when action is "measure".
 			pcr:= decimal value
 
-		default policy:
+		  default policy:
 			# PROC_SUPER_MAGIC
 			dont_measure fsmagic=0x9fa0
 			dont_appraise fsmagic=0x9fa0
@@ -97,7 +101,8 @@ Description:
 
 		Examples of LSM specific definitions:
 
-		SELinux:
+		SELinux::
+
 			dont_measure obj_type=var_log_t
 			dont_appraise obj_type=var_log_t
 			dont_measure obj_type=auditd_log_t
@@ -105,10 +110,11 @@ Description:
 			measure subj_user=system_u func=FILE_CHECK mask=MAY_READ
 			measure subj_role=system_r func=FILE_CHECK mask=MAY_READ
 
-		Smack:
+		Smack::
+
 			measure subj_user=_ func=FILE_CHECK mask=MAY_READ
 
-		Example of measure rules using alternate PCRs:
+		Example of measure rules using alternate PCRs::
 
 			measure func=KEXEC_KERNEL_CHECK pcr=4
 			measure func=KEXEC_INITRAMFS_CHECK pcr=5
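Policy rules following the "action [condition ...]" grammar above are plain strings; a small sketch that assembles one (purely illustrative helper, not a kernel API):

```python
ACTIONS = {"measure", "dont_measure", "appraise", "dont_appraise",
           "audit", "hash", "dont_hash"}

def make_rule(action: str, **conditions) -> str:
    """Compose 'action [condition ...]' in the policy-file syntax."""
    if action not in ACTIONS:
        raise ValueError(f"unknown action: {action}")
    parts = [action] + [f"{key}={value}" for key, value in conditions.items()]
    return " ".join(parts)

# e.g. the alternate-PCR measure rule shown above:
print(make_rule("measure", func="KEXEC_KERNEL_CHECK", pcr=4))
```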
diff --git a/Documentation/ABI/testing/procfs-diskstats b/Documentation/ABI/testing/procfs-diskstats
index 70dcaf2481f4..df5a3a8c1edf 100644
--- a/Documentation/ABI/testing/procfs-diskstats
+++ b/Documentation/ABI/testing/procfs-diskstats
@@ -6,28 +6,32 @@ Description:
 		of block devices. Each line contains the following 14
 		fields:
 
-		 1 - major number
-		 2 - minor mumber
-		 3 - device name
-		 4 - reads completed successfully
-		 5 - reads merged
-		 6 - sectors read
-		 7 - time spent reading (ms)
-		 8 - writes completed
-		 9 - writes merged
-		10 - sectors written
-		11 - time spent writing (ms)
-		12 - I/Os currently in progress
-		13 - time spent doing I/Os (ms)
-		14 - weighted time spent doing I/Os (ms)
+		==  ===================================
+		 1  major number
+		 2  minor number
+		 3  device name
+		 4  reads completed successfully
+		 5  reads merged
+		 6  sectors read
+		 7  time spent reading (ms)
+		 8  writes completed
+		 9  writes merged
+		10  sectors written
+		11  time spent writing (ms)
+		12  I/Os currently in progress
+		13  time spent doing I/Os (ms)
+		14  weighted time spent doing I/Os (ms)
+		==  ===================================
 
 		Kernel 4.18+ appends four more fields for discard
 		tracking putting the total at 18:
 
-		15 - discards completed successfully
-		16 - discards merged
-		17 - sectors discarded
-		18 - time spent discarding
+		==  ===================================
+		15  discards completed successfully
+		16  discards merged
+		17  sectors discarded
+		18  time spent discarding
+		==  ===================================
 
 		Kernel 5.5+ appends two more fields for flush requests:
 
diff --git a/Documentation/ABI/testing/sysfs-block b/Documentation/ABI/testing/sysfs-block
index 2322eb748b38..e34cdeeeb9d4 100644
--- a/Documentation/ABI/testing/sysfs-block
+++ b/Documentation/ABI/testing/sysfs-block
@@ -4,23 +4,27 @@ Contact:	Jerome Marchand <jmarchan@redhat.com>
 Description:
 		The /sys/block/<disk>/stat files displays the I/O
 		statistics of disk <disk>. They contain 11 fields:
-		 1 - reads completed successfully
-		 2 - reads merged
-		 3 - sectors read
-		 4 - time spent reading (ms)
-		 5 - writes completed
-		 6 - writes merged
-		 7 - sectors written
-		 8 - time spent writing (ms)
-		 9 - I/Os currently in progress
-		10 - time spent doing I/Os (ms)
-		11 - weighted time spent doing I/Os (ms)
-		12 - discards completed
-		13 - discards merged
-		14 - sectors discarded
-		15 - time spent discarding (ms)
-		16 - flush requests completed
-		17 - time spent flushing (ms)
+
+		==  ==============================================
+		 1  reads completed successfully
+		 2  reads merged
+		 3  sectors read
+		 4  time spent reading (ms)
+		 5  writes completed
+		 6  writes merged
+		 7  sectors written
+		 8  time spent writing (ms)
+		 9  I/Os currently in progress
+		10  time spent doing I/Os (ms)
+		11  weighted time spent doing I/Os (ms)
+		12  discards completed
+		13  discards merged
+		14  sectors discarded
+		15  time spent discarding (ms)
+		16  flush requests completed
+		17  time spent flushing (ms)
+		==  ==============================================
+
 		For more details refer Documentation/admin-guide/iostats.rst
 
 
diff --git a/Documentation/ABI/testing/sysfs-block-device b/Documentation/ABI/testing/sysfs-block-device
index 17f2bc7dd261..aa0fb500e3c9 100644
--- a/Documentation/ABI/testing/sysfs-block-device
+++ b/Documentation/ABI/testing/sysfs-block-device
@@ -8,11 +8,13 @@ Description:
 
 		It has the following valid values:
 
+		==	========================================================
 		0	OFF - the LED is not activated on activity
 		1	BLINK_ON - the LED blinks on every 10ms when activity is
 			detected.
 		2	BLINK_OFF - the LED is on when idle, and blinks off
 			every 10ms when activity is detected.
+		==	========================================================
 
 		Note that the user must turn sw_activity OFF it they wish to
 		control the activity LED via the em_message file.
diff --git a/Documentation/ABI/testing/sysfs-bus-acpi b/Documentation/ABI/testing/sysfs-bus-acpi
index e7898cfe5fb1..c78603497b97 100644
--- a/Documentation/ABI/testing/sysfs-bus-acpi
+++ b/Documentation/ABI/testing/sysfs-bus-acpi
@@ -67,14 +67,16 @@ Description:
 		The return value is a decimal integer representing the device's
 		status bitmap:
 
-		Bit [0] –  Set if the device is present.
-		Bit [1] –  Set if the device is enabled and decoding its
-		           resources.
-		Bit [2] –  Set if the device should be shown in the UI.
-		Bit [3] –  Set if the device is functioning properly (cleared if
-		           device failed its diagnostics).
-		Bit [4] –  Set if the battery is present.
-		Bits [31:5] –  Reserved (must be cleared)
+		===========  ==================================================
+		Bit [0]      Set if the device is present.
+		Bit [1]      Set if the device is enabled and decoding its
+		             resources.
+		Bit [2]      Set if the device should be shown in the UI.
+		Bit [3]      Set if the device is functioning properly (cleared
+			     if device failed its diagnostics).
+		Bit [4]      Set if the battery is present.
+		Bits [31:5]  Reserved (must be cleared)
+		===========  ==================================================
 
 		If bit [0] is clear, then bit 1 must also be clear (a device
 		that is not present cannot be enabled).
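A sketch of decoding the status bitmap in userspace, including the not-present-implies-not-enabled rule stated above (the flag names are mine):

```python
def decode_status(status: int) -> dict:
    """Decode the ACPI device status bitmap described above."""
    flags = {
        "present":         bool(status & (1 << 0)),
        "enabled":         bool(status & (1 << 1)),
        "show_in_ui":      bool(status & (1 << 2)),
        "functioning":     bool(status & (1 << 3)),
        "battery_present": bool(status & (1 << 4)),
    }
    # Consistency rule from the text: a device that is not present
    # cannot be enabled.
    if flags["enabled"] and not flags["present"]:
        raise ValueError("invalid status: enabled but not present")
    return flags

status = decode_status(0x0F)  # a fully functional device, no battery
```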
diff --git a/Documentation/ABI/testing/sysfs-bus-event_source-devices-format b/Documentation/ABI/testing/sysfs-bus-event_source-devices-format
index 5bb793ec926c..df7ccc1b2fba 100644
--- a/Documentation/ABI/testing/sysfs-bus-event_source-devices-format
+++ b/Documentation/ABI/testing/sysfs-bus-event_source-devices-format
@@ -10,7 +10,8 @@ Description:
 		name/value pairs.
 
 		Userspace must be prepared for the possibility that attributes
-		define overlapping bit ranges. For example:
+		define overlapping bit ranges. For example::
+
 			attr1 = 'config:0-23'
 			attr2 = 'config:0-7'
 			attr3 = 'config:12-35'
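Since userspace must cope with overlapping bit ranges like these, a sketch of parsing the name/value syntax and extracting a range from a raw config value (helper names are mine):

```python
def parse_bits(spec: str):
    """Parse e.g. 'config:0-23' (or a single bit, 'config:5')
    into (field, lo, hi)."""
    field, _, bits = spec.partition(":")
    lo, _, hi = bits.partition("-")
    return field, int(lo), int(hi) if hi else int(lo)

def extract_bits(value: int, spec: str) -> int:
    """Pull an attribute's bit range out of a raw config value."""
    _, lo, hi = parse_bits(spec)
    width = hi - lo + 1
    return (value >> lo) & ((1 << width) - 1)

# Overlapping attributes, as in the example above:
attr1, attr2 = "config:0-23", "config:0-7"
```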
diff --git a/Documentation/ABI/testing/sysfs-bus-i2c-devices-pca954x b/Documentation/ABI/testing/sysfs-bus-i2c-devices-pca954x
index 0b0de8cd0d13..b6c69eb80ca4 100644
--- a/Documentation/ABI/testing/sysfs-bus-i2c-devices-pca954x
+++ b/Documentation/ABI/testing/sysfs-bus-i2c-devices-pca954x
@@ -6,15 +6,18 @@ Description:
 		Value that exists only for mux devices that can be
 		written to control the behaviour of the multiplexer on
 		idle. Possible values:
-		-2 - disconnect on idle, i.e. deselect the last used
-		     channel, which is useful when there is a device
-		     with an address that conflicts with another
-		     device on another mux on the same parent bus.
-		-1 - leave the mux as-is, which is the most optimal
-		     setting in terms of I2C operations and is the
-		     default mode.
-		0..<nchans> - set the mux to a predetermined channel,
-		     which is useful if there is one channel that is
-		     used almost always, and you want to reduce the
-		     latency for normal operations after rare
-		     transactions on other channels
+
+		===========  ===============================================
+		-2	     disconnect on idle, i.e. deselect the last used
+			     channel, which is useful when there is a device
+			     with an address that conflicts with another
+			     device on another mux on the same parent bus.
+		-1	     leave the mux as-is, which is the optimal
+			     setting in terms of I2C operations and is the
+			     default mode.
+		0..<nchans>  set the mux to a predetermined channel,
+			     which is useful if there is one channel that is
+			     used almost always, and you want to reduce the
+			     latency for normal operations after rare
+			     transactions on other channels
+		===========  ===============================================
diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
index a9d51810a3ba..e3df71987eff 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio
+++ b/Documentation/ABI/testing/sysfs-bus-iio
@@ -65,6 +65,7 @@ Contact:	linux-iio@vger.kernel.org
 Description:
 		When the internal sampling clock can only take a specific set of
 		frequencies, we can specify the available values with:
+
 		- a small discrete set of values like "0 2 4 6 8"
 		- a range with minimum, step and maximum frequencies like
 		  "[min step max]"
@@ -1627,17 +1628,21 @@ Description:
 		Mounting matrix for IIO sensors. This is a rotation matrix which
 		informs userspace about sensor chip's placement relative to the
 		main hardware it is mounted on.
+
 		Main hardware placement is defined according to the local
 		reference frame related to the physical quantity the sensor
 		measures.
+
 		Given that the rotation matrix is defined in a board specific
 		way (platform data and / or device-tree), the main hardware
 		reference frame definition is left to the implementor's choice
 		(see below for a magnetometer example).
+
 		Applications should apply this rotation matrix to samples so
 		that when main hardware reference frame is aligned onto local
 		reference frame, then sensor chip reference frame is also
 		perfectly aligned with it.
+
 		Matrix is a 3x3 unitary matrix and typically looks like
 		[0, 1, 0; 1, 0, 0; 0, 0, -1]. Identity matrix
 		[1, 0, 0; 0, 1, 0; 0, 0, 1] means sensor chip and main hardware
@@ -1646,8 +1651,10 @@ Description:
 		For example, a mounting matrix for a magnetometer sensor informs
 		userspace about sensor chip's ORIENTATION relative to the main
 		hardware.
+
 		More specifically, main hardware orientation is defined with
 		respect to the LOCAL EARTH GEOMAGNETIC REFERENCE FRAME where :
+
 		* Y is in the ground plane and positive towards magnetic North ;
 		* X is in the ground plane, perpendicular to the North axis and
 		  positive towards the East ;
@@ -1656,13 +1663,16 @@ Description:
 		An implementor might consider that for a hand-held device, a
 		'natural' orientation would be 'front facing camera at the top'.
 		The main hardware reference frame could then be described as :
+
 		* Y is in the plane of the screen and is positive towards the
 		  top of the screen ;
 		* X is in the plane of the screen, perpendicular to Y axis, and
 		  positive towards the right hand side of the screen ;
 		* Z is perpendicular to the screen plane and positive out of the
 		  screen.
+
 		Another example for a quadrotor UAV might be :
+
 		* Y is in the plane of the propellers and positive towards the
 		  front-view camera;
 		* X is in the plane of the propellers, perpendicular to Y axis,
@@ -1704,6 +1714,7 @@ Description:
 		This interface is deprecated; please use the Counter subsystem.
 
 		A list of possible counting directions which are:
+
 		- "up"	: counter device is increasing.
 		- "down": counter device is decreasing.
 
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector b/Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector
index 2071f9bcfaa5..1c2a07f7a75e 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector
+++ b/Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector
@@ -5,7 +5,8 @@ Contact:	Peter Rosin <peda@axentia.se>
 Description:
 		The DAC is used to find the peak level of an alternating
 		voltage input signal by a binary search using the output
-		of a comparator wired to an interrupt pin. Like so:
+		of a comparator wired to an interrupt pin. Like so::
+
 		                           _
 		                          | \
 		     input +------>-------|+ \
@@ -19,10 +20,12 @@ Description:
 		            |    irq|------<-------'
 		            |       |
 		            '-------'
+
 		The boolean invert attribute (0/1) should be set when the
 		input signal is centered around the maximum value of the
 		dac instead of zero. The envelope detector will search
 		from below in this case and will also invert the result.
+
 		The edge/level of the interrupt is also switched to its
 		opposite value.
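The binary search described above can be modeled in a few lines. This is an illustrative sketch only; `compare(level)` stands in for the comparator-plus-interrupt feedback and is not a real driver API:

```python
# Hypothetical model of the envelope detector's binary search.
# compare(level) is assumed to report whether the input peak still
# exceeds the trial DAC level.

def find_peak(compare, bits=8):
    """Binary-search the highest DAC level the input still exceeds."""
    level = 0
    for bit in reversed(range(bits)):
        trial = level | (1 << bit)
        if compare(trial):  # comparator fired: input peak >= trial level
            level = trial
    return level

peak = 137  # pretend peak of the alternating input, in DAC counts
print(find_peak(lambda level: peak >= level))  # 137
```

The `invert` attribute described above corresponds to running the same search from below with the comparator sense flipped.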
 
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-cros-ec b/Documentation/ABI/testing/sysfs-bus-iio-cros-ec
index 6158f831c761..adf24c40126f 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-cros-ec
+++ b/Documentation/ABI/testing/sysfs-bus-iio-cros-ec
@@ -4,7 +4,7 @@ KernelVersion:	4.7
 Contact:	linux-iio@vger.kernel.org
 Description:
 		Writing '1' will perform a FOC (Fast Online Calibration). The
-                corresponding calibration offsets can be read from *_calibbias
+                corresponding calibration offsets can be read from `*_calibbias`
                 entries.
 
 What:		/sys/bus/iio/devices/iio:deviceX/location
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32
index 0e66ae9b0071..91439d6d60b5 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32
+++ b/Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32
@@ -3,14 +3,20 @@ KernelVersion:	4.14
 Contact:	arnaud.pouliquen@st.com
 Description:
 		For audio purpose only.
+
 		Used by audio driver to set/get the spi input frequency.
+
 		This is mandatory if DFSDM is slave on SPI bus, to
 		provide information on the SPI clock frequency during runtime.
 		Notice that the SPI frequency should be a multiple of the
 		sample frequency to ensure the precision.
-		if DFSDM input is SPI master
+
+		If DFSDM input is SPI master:
+
 			Reading returns the SPI clkout frequency;
 			writing returns an error.
+
 		If DFSDM input is SPI Slave:
+
 			Reading returns the value previously set.
 			Writing value before starting conversions.
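The multiple-of-sample-frequency constraint noted above amounts to a simple divisibility check. The helper and frequencies below are made-up illustrations, not values from the driver:

```python
# Illustrative check of the constraint stated above: the SPI clock
# should be an integer multiple of the sample frequency.

def spi_clock_ok(spi_hz, sample_hz):
    """Return True when spi_hz is an integer multiple of sample_hz."""
    return spi_hz % sample_hz == 0

print(spi_clock_ok(2_048_000, 16_000))  # True  (128x oversampling)
print(spi_clock_ok(2_000_000, 48_000))  # False (not an integer multiple)
```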
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
index ad2cc63e4bf8..73498ff666bd 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
+++ b/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
@@ -17,9 +17,11 @@ KernelVersion:	4.13
 Contact:	fabrice.gasnier@st.com
 Description:
 		Configure the device counter quadrature modes:
+
 		- non-quadrature:
 			Encoder IN1 input serves as the count input (up
 			direction).
+
 		- quadrature:
 			Encoder IN1 and IN2 inputs are mixed to get direction
 			and count.
@@ -35,23 +37,26 @@ KernelVersion:	4.13
 Contact:	fabrice.gasnier@st.com
 Description:
 		Configure the device encoder/counter active edge:
+
 		- rising-edge
 		- falling-edge
 		- both-edges
 
 		In non-quadrature mode, device counts up on active edge.
+
 		In quadrature mode, encoder counting scenarios are as follows:
-		----------------------------------------------------------------
+
+		+---------+----------+--------------------+--------------------+
 		| Active  | Level on |      IN1 signal    |     IN2 signal     |
-		| edge    | opposite |------------------------------------------
+		| edge    | opposite +----------+---------+----------+---------+
 		|         | signal   |  Rising  | Falling |  Rising  | Falling |
-		----------------------------------------------------------------
-		| Rising  | High ->  |   Down   |    -    |    Up    |    -    |
-		| edge    | Low  ->  |    Up    |    -    |   Down   |    -    |
-		----------------------------------------------------------------
-		| Falling | High ->  |    -     |    Up   |    -     |   Down  |
-		| edge    | Low  ->  |    -     |   Down  |    -     |    Up   |
-		----------------------------------------------------------------
-		| Both    | High ->  |   Down   |    Up   |    Up    |   Down  |
-		| edges   | Low  ->  |    Up    |   Down  |   Down   |    Up   |
-		----------------------------------------------------------------
+		+---------+----------+----------+---------+----------+---------+
+		| Rising  | High ->  |   Down   |    -    |   Up     |    -    |
+		| edge    | Low  ->  |   Up     |    -    |   Down   |    -    |
+		+---------+----------+----------+---------+----------+---------+
+		| Falling | High ->  |    -     |   Up    |    -     |   Down  |
+		| edge    | Low  ->  |    -     |   Down  |    -     |   Up    |
+		+---------+----------+----------+---------+----------+---------+
+		| Both    | High ->  |   Down   |   Up    |   Up     |   Down  |
+		| edges   | Low  ->  |   Up     |   Down  |   Down   |   Up    |
+		+---------+----------+----------+---------+----------+---------+
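The "rising edge" row of the table above can be sketched as a small decode helper. The function and argument names here are illustrative only, not part of the ABI:

```python
# Hypothetical decode of the "rising edge" row of the quadrature table:
# the level of the opposite signal at the moment of the edge selects
# the counting direction.

def step(edge_signal, opposite_level):
    """Count delta for a rising edge in quadrature, rising-edge mode."""
    if edge_signal == "IN1":
        # IN1 rising: IN2 high -> Down, IN2 low -> Up
        return -1 if opposite_level else +1
    # IN2 rising: IN1 high -> Up, IN1 low -> Down
    return +1 if opposite_level else -1

count = 0
for edge, level in [("IN1", 0), ("IN2", 1), ("IN1", 0)]:
    count += step(edge, level)
print(count)  # 3
```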
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-magnetometer-hmc5843 b/Documentation/ABI/testing/sysfs-bus-iio-magnetometer-hmc5843
index 6275e9f56e6c..13f099ef6a95 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-magnetometer-hmc5843
+++ b/Documentation/ABI/testing/sysfs-bus-iio-magnetometer-hmc5843
@@ -5,11 +5,16 @@ Contact:        linux-iio@vger.kernel.org
 Description:
                 Current configuration and available configurations
 		for the bias current.
-		normal		- Normal measurement configurations (default)
-		positivebias	- Positive bias configuration
-		negativebias	- Negative bias configuration
-		disabled	- Only available on HMC5983. Disables magnetic
+
+		============	  ============================================
+		normal		  Normal measurement configurations (default)
+		positivebias	  Positive bias configuration
+		negativebias	  Negative bias configuration
+		disabled	  Only available on HMC5983. Disables magnetic
 				  sensor and enables temperature sensor.
-		Note: The effect of this configuration may vary
-		according to the device. For exact documentation
-		check the device's datasheet.
+		============	  ============================================
+
+		Note:
+		  The effect of this configuration may vary
+		  according to the device. For exact documentation
+		  check the device's datasheet.
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-temperature-max31856 b/Documentation/ABI/testing/sysfs-bus-iio-temperature-max31856
index 3b3509a3ef2f..e5ef6d8e5da1 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-temperature-max31856
+++ b/Documentation/ABI/testing/sysfs-bus-iio-temperature-max31856
@@ -5,9 +5,12 @@ Description:
 		Open-circuit fault. The detection of open-circuit faults,
 		such as those caused by broken thermocouple wires.
 		Reading returns either '1' or '0'.
-		'1' = An open circuit such as broken thermocouple wires
-		      has been detected.
-		'0' = No open circuit or broken thermocouple wires are detected
+
+		===  =======================================================
+		'1'  An open circuit such as broken thermocouple wires
+		     has been detected.
+		'0'  No open circuit or broken thermocouple wires are detected
+		===  =======================================================
 
 What:		/sys/bus/iio/devices/iio:deviceX/fault_ovuv
 KernelVersion:	5.1
@@ -18,7 +21,11 @@ Description:
 		cables by integrated MOSFETs at the T+ and T- inputs, and the
 		BIAS output. These MOSFETs turn off when the input voltage is
 		negative or greater than VDD.
+
 		Reading returns either '1' or '0'.
-		'1' = The input voltage is negative or greater than VDD.
-		'0' = The input voltage is positive and less than VDD (normal
-		state).
+
+		===  =======================================================
+		'1'  The input voltage is negative or greater than VDD.
+		'0'  The input voltage is positive and less than VDD (normal
+		     state).
+		===  =======================================================
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
index b7259234ad70..a10a4de3e5fe 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
+++ b/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
@@ -3,67 +3,85 @@ KernelVersion:	4.11
 Contact:	benjamin.gaignard@st.com
 Description:
 		Reading returns the list of possible master modes, which are:
-		- "reset"     :	The UG bit from the TIMx_EGR register is
+
+		- "reset"
+				The UG bit from the TIMx_EGR register is
 				used as trigger output (TRGO).
-		- "enable"    : The Counter Enable signal CNT_EN is used
+		- "enable"
+				The Counter Enable signal CNT_EN is used
 				as trigger output.
-		- "update"    : The update event is selected as trigger output.
+		- "update"
+				The update event is selected as trigger output.
 				For instance a master timer can then be used
 				as a prescaler for a slave timer.
-		- "compare_pulse" : The trigger output send a positive pulse
-				    when the CC1IF flag is to be set.
-		- "OC1REF"    : OC1REF signal is used as trigger output.
-		- "OC2REF"    : OC2REF signal is used as trigger output.
-		- "OC3REF"    : OC3REF signal is used as trigger output.
-		- "OC4REF"    : OC4REF signal is used as trigger output.
+		- "compare_pulse"
+				The trigger output sends a positive pulse
+				when the CC1IF flag is to be set.
+		- "OC1REF"
+				OC1REF signal is used as trigger output.
+		- "OC2REF"
+				OC2REF signal is used as trigger output.
+		- "OC3REF"
+				OC3REF signal is used as trigger output.
+		- "OC4REF"
+				OC4REF signal is used as trigger output.
+
 		Additional modes (on TRGO2 only):
-		- "OC5REF"    : OC5REF signal is used as trigger output.
-		- "OC6REF"    : OC6REF signal is used as trigger output.
+
+		- "OC5REF"
+				OC5REF signal is used as trigger output.
+		- "OC6REF"
+				OC6REF signal is used as trigger output.
 		- "compare_pulse_OC4REF":
-		  OC4REF rising or falling edges generate pulses.
+				OC4REF rising or falling edges generate pulses.
 		- "compare_pulse_OC6REF":
-		  OC6REF rising or falling edges generate pulses.
+				OC6REF rising or falling edges generate pulses.
 		- "compare_pulse_OC4REF_r_or_OC6REF_r":
-		  OC4REF or OC6REF rising edges generate pulses.
+				OC4REF or OC6REF rising edges generate pulses.
 		- "compare_pulse_OC4REF_r_or_OC6REF_f":
-		  OC4REF rising or OC6REF falling edges generate pulses.
+				OC4REF rising or OC6REF falling edges generate
+				pulses.
 		- "compare_pulse_OC5REF_r_or_OC6REF_r":
-		  OC5REF or OC6REF rising edges generate pulses.
+				OC5REF or OC6REF rising edges generate pulses.
 		- "compare_pulse_OC5REF_r_or_OC6REF_f":
-		  OC5REF rising or OC6REF falling edges generate pulses.
+				OC5REF rising or OC6REF falling edges generate
+				pulses.
 
-		+-----------+   +-------------+            +---------+
-		| Prescaler +-> | Counter     |        +-> | Master  | TRGO(2)
-		+-----------+   +--+--------+-+        |-> | Control +-->
-		                   |        |          ||  +---------+
-		                +--v--------+-+ OCxREF ||  +---------+
-		                | Chx compare +----------> | Output  | ChX
-		                +-----------+-+         |  | Control +-->
-		                      .     |           |  +---------+
-		                      .     |           |    .
-		                +-----------v-+ OC6REF  |    .
-		                | Ch6 compare +---------+>
-		                +-------------+
+		::
 
-		Example with: "compare_pulse_OC4REF_r_or_OC6REF_r":
+		  +-----------+   +-------------+            +---------+
+		  | Prescaler +-> | Counter     |        +-> | Master  | TRGO(2)
+		  +-----------+   +--+--------+-+        |-> | Control +-->
+		                     |        |          ||  +---------+
+		                  +--v--------+-+ OCxREF ||  +---------+
+		                  | Chx compare +----------> | Output  | ChX
+		                  +-----------+-+         |  | Control +-->
+		                        .     |           |  +---------+
+		                        .     |           |    .
+		                  +-----------v-+ OC6REF  |    .
+		                  | Ch6 compare +---------+>
+		                  +-------------+
 
-		                X
-		              X   X
-		            X .   . X
-		          X   .   .   X
-		        X     .   .     X
-		count X .     .   .     . X
-		        .     .   .     .
-		        .     .   .     .
-		        +---------------+
-		OC4REF  |     .   .     |
-		      +-+     .   .     +-+
-		        .     +---+     .
-		OC6REF  .     |   |     .
-		      +-------+   +-------+
-		        +-+   +-+
-		TRGO2   | |   | |
-		      +-+ +---+ +---------+
+		Example with: "compare_pulse_OC4REF_r_or_OC6REF_r"::
+
+		                  X
+		                X   X
+		              X .   . X
+		            X   .   .   X
+		          X     .   .     X
+		  count X .     .   .     . X
+		          .     .   .     .
+		          .     .   .     .
+		          +---------------+
+		  OC4REF  |     .   .     |
+		        +-+     .   .     +-+
+		          .     +---+     .
+		  OC6REF  .     |   |     .
+		        +-------+   +-------+
+		          +-+   +-+
+		  TRGO2   | |   | |
+		        +-+ +---+ +---------+
 
 What:		/sys/bus/iio/devices/triggerX/master_mode
 KernelVersion:	4.11
@@ -91,6 +109,30 @@ Description:
 		When counting down, the counter starts from the preset value
 		and fires an event when reaching 0.
 
+What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
+KernelVersion:	4.12
+Contact:	benjamin.gaignard@st.com
+Description:
+		Reading returns the list of possible quadrature modes.
+
+What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
+KernelVersion:	4.12
+Contact:	benjamin.gaignard@st.com
+Description:
+		Configure the device counter quadrature modes:
+
+		channel_A:
+			Encoder A input serves as the count input and B as
+			the UP/DOWN direction control input.
+
+		channel_B:
+			Encoder B input serves as the count input and A as
+			the UP/DOWN direction control input.
+
+		quadrature:
+			Encoder A and B inputs are mixed to get direction
+			and count with a scale of 0.25.
+
 What:		/sys/bus/iio/devices/iio:deviceX/in_count_enable_mode_available
 KernelVersion:	4.12
 Contact:	benjamin.gaignard@st.com
@@ -104,6 +146,7 @@ Description:
 		Configure the device counter enable modes; in all cases the
 		counting direction is set by the in_count0_count_direction
 		attribute and the counter is clocked by the internal clock.
+
 		always:
 			Counter is always ON.
 
diff --git a/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc b/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
index 7fd2601c2831..a74252e580a5 100644
--- a/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
+++ b/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
@@ -9,11 +9,13 @@ Date:		June 2015
 KernelVersion:	4.3
 Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
 Description:	(RW) Configure MSC operating mode:
+
 		  - "single", for contiguous buffer mode (high-order alloc);
 		  - "multi", for multiblock mode;
 		  - "ExI", for DCI handler mode;
 		  - "debug", for debug mode;
 		  - any of the currently loaded buffer sinks.
+
 		If operating mode changes, existing buffer is deallocated,
 		provided there are no active users and tracing is not enabled,
 		otherwise the write will fail.
@@ -23,10 +25,12 @@ Date:		June 2015
 KernelVersion:	4.3
 Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
 Description:	(RW) Configure MSC buffer size for "single" or "multi" modes.
+
 		In single mode, this is a single number of pages, which has to
 		be a power of 2. In multiblock mode, this is a comma-separated
 		list of numbers of pages for each window to be allocated. The
 		number of windows is not limited.
+
 		Writing to this file deallocates existing buffer (provided
 		there are no active users and tracing is not enabled) and then
 		allocates a new one.
diff --git a/Documentation/ABI/testing/sysfs-bus-nfit b/Documentation/ABI/testing/sysfs-bus-nfit
index e4f76e7eab93..63ef0b9ecce7 100644
--- a/Documentation/ABI/testing/sysfs-bus-nfit
+++ b/Documentation/ABI/testing/sysfs-bus-nfit
@@ -1,4 +1,4 @@
-For all of the nmem device attributes under nfit/*, see the 'NVDIMM Firmware
+For all of the nmem device attributes under ``nfit/*``, see the 'NVDIMM Firmware
 Interface Table (NFIT)' section in the ACPI specification
 (http://www.uefi.org/specifications) for more details.
 
diff --git a/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats b/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
index 3c9a8c4a25eb..860db53037a5 100644
--- a/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
+++ b/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
@@ -1,6 +1,6 @@
-==========================
 PCIe Device AER statistics
-==========================
+--------------------------
+
 These attributes show up under all the devices that are AER capable. These
 statistical counters indicate the errors "as seen/reported by the device".
 Note that this may mean that if an endpoint is causing problems, the AER
@@ -17,19 +17,18 @@ Description:	List of correctable errors seen and reported by this
 		PCI device using ERR_COR. Note that since multiple errors may
 		be reported using a single ERR_COR message,
 		TOTAL_ERR_COR at the end of the file may not match the actual
-		total of all the errors in the file. Sample output:
--------------------------------------------------------------------------
-localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_correctable
-Receiver Error 2
-Bad TLP 0
-Bad DLLP 0
-RELAY_NUM Rollover 0
-Replay Timer Timeout 0
-Advisory Non-Fatal 0
-Corrected Internal Error 0
-Header Log Overflow 0
-TOTAL_ERR_COR 2
--------------------------------------------------------------------------
+		total of all the errors in the file. Sample output::
+
+		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_correctable
+		    Receiver Error 2
+		    Bad TLP 0
+		    Bad DLLP 0
+		    RELAY_NUM Rollover 0
+		    Replay Timer Timeout 0
+		    Advisory Non-Fatal 0
+		    Corrected Internal Error 0
+		    Header Log Overflow 0
+		    TOTAL_ERR_COR 2
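A userspace consumer of the `aer_dev_*` files could split the name/counter pairs shown above as sketched here; `parse_aer_stats` is a hypothetical helper, not an existing tool:

```python
# Sketch of parsing the 'Error Name <count>' lines of the sample output
# above into a name -> int mapping.

def parse_aer_stats(text):
    """Split each line at its last space: the tail is the counter."""
    stats = {}
    for line in text.strip().splitlines():
        name, _, count = line.rpartition(" ")
        stats[name] = int(count)
    return stats

sample = "Receiver Error 2\nBad TLP 0\nTOTAL_ERR_COR 2"
print(parse_aer_stats(sample)["TOTAL_ERR_COR"])  # 2
```

Splitting at the last space keeps multi-word error names such as "Replay Timer Timeout" intact.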
 
 What:		/sys/bus/pci/devices/<dev>/aer_dev_fatal
 Date:		July 2018
@@ -39,28 +38,27 @@ Description:	List of uncorrectable fatal errors seen and reported by this
 		PCI device using ERR_FATAL. Note that since multiple errors may
 		be reported using a single ERR_FATAL message,
 		TOTAL_ERR_FATAL at the end of the file may not match the actual
-		total of all the errors in the file. Sample output:
--------------------------------------------------------------------------
-localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_fatal
-Undefined 0
-Data Link Protocol 0
-Surprise Down Error 0
-Poisoned TLP 0
-Flow Control Protocol 0
-Completion Timeout 0
-Completer Abort 0
-Unexpected Completion 0
-Receiver Overflow 0
-Malformed TLP 0
-ECRC 0
-Unsupported Request 0
-ACS Violation 0
-Uncorrectable Internal Error 0
-MC Blocked TLP 0
-AtomicOp Egress Blocked 0
-TLP Prefix Blocked Error 0
-TOTAL_ERR_FATAL 0
--------------------------------------------------------------------------
+		total of all the errors in the file. Sample output::
+
+		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_fatal
+		    Undefined 0
+		    Data Link Protocol 0
+		    Surprise Down Error 0
+		    Poisoned TLP 0
+		    Flow Control Protocol 0
+		    Completion Timeout 0
+		    Completer Abort 0
+		    Unexpected Completion 0
+		    Receiver Overflow 0
+		    Malformed TLP 0
+		    ECRC 0
+		    Unsupported Request 0
+		    ACS Violation 0
+		    Uncorrectable Internal Error 0
+		    MC Blocked TLP 0
+		    AtomicOp Egress Blocked 0
+		    TLP Prefix Blocked Error 0
+		    TOTAL_ERR_FATAL 0
 
 What:		/sys/bus/pci/devices/<dev>/aer_dev_nonfatal
 Date:		July 2018
@@ -70,32 +68,31 @@ Description:	List of uncorrectable nonfatal errors seen and reported by this
 		PCI device using ERR_NONFATAL. Note that since multiple errors
 		may be reported using a single ERR_FATAL message,
 		TOTAL_ERR_NONFATAL at the end of the file may not match the
-		actual total of all the errors in the file. Sample output:
--------------------------------------------------------------------------
-localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_nonfatal
-Undefined 0
-Data Link Protocol 0
-Surprise Down Error 0
-Poisoned TLP 0
-Flow Control Protocol 0
-Completion Timeout 0
-Completer Abort 0
-Unexpected Completion 0
-Receiver Overflow 0
-Malformed TLP 0
-ECRC 0
-Unsupported Request 0
-ACS Violation 0
-Uncorrectable Internal Error 0
-MC Blocked TLP 0
-AtomicOp Egress Blocked 0
-TLP Prefix Blocked Error 0
-TOTAL_ERR_NONFATAL 0
--------------------------------------------------------------------------
+		actual total of all the errors in the file. Sample output::
+
+		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_nonfatal
+		    Undefined 0
+		    Data Link Protocol 0
+		    Surprise Down Error 0
+		    Poisoned TLP 0
+		    Flow Control Protocol 0
+		    Completion Timeout 0
+		    Completer Abort 0
+		    Unexpected Completion 0
+		    Receiver Overflow 0
+		    Malformed TLP 0
+		    ECRC 0
+		    Unsupported Request 0
+		    ACS Violation 0
+		    Uncorrectable Internal Error 0
+		    MC Blocked TLP 0
+		    AtomicOp Egress Blocked 0
+		    TLP Prefix Blocked Error 0
+		    TOTAL_ERR_NONFATAL 0
 
-============================
 PCIe Rootport AER statistics
-============================
+----------------------------
+
 These attributes show up only under the rootports (or root complex event
 collectors) that are AER capable. These indicate the number of error messages as
 "reported to" the rootport. Please note that the rootports also transmit
diff --git a/Documentation/ABI/testing/sysfs-bus-rapidio b/Documentation/ABI/testing/sysfs-bus-rapidio
index 13208b27dd87..634ea207a50a 100644
--- a/Documentation/ABI/testing/sysfs-bus-rapidio
+++ b/Documentation/ABI/testing/sysfs-bus-rapidio
@@ -4,24 +4,27 @@ Description:
 		an individual subdirectory with the following name format of
 		device_name "nn:d:iiii", where:
 
-		nn   - two-digit hexadecimal ID of RapidIO network where the
+		====   ========================================================
+		nn     two-digit hexadecimal ID of RapidIO network where the
 		       device resides
-		d    - device type: 'e' - for endpoint or 's' - for switch
-		iiii - four-digit device destID for endpoints, or switchID for
+		d      device type: 'e' - for endpoint or 's' - for switch
+		iiii   four-digit device destID for endpoints, or switchID for
 		       switches
+		====   ========================================================
 
 		For example, below is a list of device directories that
 		represents a typical RapidIO network with one switch, one host,
 		and two agent endpoints, as it is seen by the enumerating host
-		(with destID = 1):
+		(with destID = 1)::
 
-		/sys/bus/rapidio/devices/00:e:0000
-		/sys/bus/rapidio/devices/00:e:0002
-		/sys/bus/rapidio/devices/00:s:0001
+		  /sys/bus/rapidio/devices/00:e:0000
+		  /sys/bus/rapidio/devices/00:e:0002
+		  /sys/bus/rapidio/devices/00:s:0001
 
-		NOTE: An enumerating or discovering endpoint does not create a
-		sysfs entry for itself, this is why an endpoint with destID=1 is
-		not shown in the list.
+		NOTE:
+		  An enumerating or discovering endpoint does not create a
+		  sysfs entry for itself; this is why an endpoint with destID=1
+		  is not shown in the list.
 
 Attributes Common for All RapidIO Devices
 -----------------------------------------
diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index dd565c378b40..171127294674 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -37,16 +37,18 @@ Contact:	thunderbolt-software@lists.01.org
 Description:	This attribute holds current Thunderbolt security level
 		set by the system BIOS. Possible values are:
 
-		none: All devices are automatically authorized
-		user: Devices are only authorized based on writing
-		      appropriate value to the authorized attribute
-		secure: Require devices that support secure connect at
-			minimum. User needs to authorize each device.
-		dponly: Automatically tunnel Display port (and USB). No
-			PCIe tunnels are created.
-		usbonly: Automatically tunnel USB controller of the
+		=======  ==================================================
+		none     All devices are automatically authorized
+		user     Devices are only authorized based on writing
+		         appropriate value to the authorized attribute
+		secure   Require devices that support secure connect at
+			 minimum. User needs to authorize each device.
+		dponly   Automatically tunnel Display port (and USB). No
+			 PCIe tunnels are created.
+		usbonly  Automatically tunnel USB controller of the
 			 connected Thunderbolt dock (and Display Port). All
 			 PCIe links downstream of the dock are removed.
+		=======  ==================================================
 
 What: /sys/bus/thunderbolt/devices/.../authorized
 Date:		Sep 2017
@@ -61,17 +63,23 @@ Description:	This attribute is used to authorize Thunderbolt devices
 		yet authorized.
 
 		Possible values are supported:
-		1: The device will be authorized and connected
+
+		==  ===========================================
+		1   The device will be authorized and connected
+		==  ===========================================
 
 		When key attribute contains 32 byte hex string the possible
 		values are:
-		1: The 32 byte hex string is added to the device NVM and
-		   the device is authorized.
-		2: Send a challenge based on the 32 byte hex string. If the
-		   challenge response from device is valid, the device is
-		   authorized. In case of failure errno will be ENOKEY if
-		   the device did not contain a key at all, and
-		   EKEYREJECTED if the challenge response did not match.
+
+		==  ========================================================
+		1   The 32 byte hex string is added to the device NVM and
+		    the device is authorized.
+		2   Send a challenge based on the 32 byte hex string. If the
+		    challenge response from device is valid, the device is
+		    authorized. In case of failure errno will be ENOKEY if
+		    the device did not contain a key at all, and
+		    EKEYREJECTED if the challenge response did not match.
+		==  ========================================================
 
 What: /sys/bus/thunderbolt/devices/.../boot
 Date:		Jun 2018
diff --git a/Documentation/ABI/testing/sysfs-bus-usb b/Documentation/ABI/testing/sysfs-bus-usb
index 614d216dff1d..e449b8374f6a 100644
--- a/Documentation/ABI/testing/sysfs-bus-usb
+++ b/Documentation/ABI/testing/sysfs-bus-usb
@@ -72,24 +72,27 @@ Description:
 		table at compile time. The format for the device ID is:
 		idVendor idProduct bInterfaceClass RefIdVendor RefIdProduct
 		The vendor ID and device ID fields are required, the
-		rest is optional. The Ref* tuple can be used to tell the
+		rest is optional. The `Ref*` tuple can be used to tell the
 		driver to use the same driver_data for the new device as
 		it is used for the reference device.
 		Upon successfully adding an ID, the driver will probe
-		for the device and attempt to bind to it.  For example:
-		# echo "8086 10f5" > /sys/bus/usb/drivers/foo/new_id
+		for the device and attempt to bind to it.  For example::
+
+		  # echo "8086 10f5" > /sys/bus/usb/drivers/foo/new_id
 
 		Here add a new device (0458:7045) using driver_data from
-		an already supported device (0458:704c):
-		# echo "0458 7045 0 0458 704c" > /sys/bus/usb/drivers/foo/new_id
+		an already supported device (0458:704c)::
+
+		  # echo "0458 7045 0 0458 704c" > /sys/bus/usb/drivers/foo/new_id
 
 		Reading from this file will list all dynamically added
 		device IDs in the same format, with one entry per
-		line. For example:
-		# cat /sys/bus/usb/drivers/foo/new_id
-		8086 10f5
-		dead beef 06
-		f00d cafe
+		line. For example::
+
+		  # cat /sys/bus/usb/drivers/foo/new_id
+		  8086 10f5
+		  dead beef 06
+		  f00d cafe
 
 		The list will be truncated at PAGE_SIZE bytes due to
 		sysfs restrictions.
@@ -209,6 +212,7 @@ Description:
 		advance, and behaves well according to the specification.
 		This attribute is a bit-field that controls the behavior of
 		a specific port:
+
 		 - Bit 0 of this field selects the "old" enumeration scheme,
 		   as it is considerably faster (it only causes one USB reset
 		   instead of 2).
@@ -233,10 +237,10 @@ Description:
 		poll() for monitoring changes to this value in user space.
 
 		Any time this value changes the corresponding hub device will send a
-		udev event with the following attributes:
+		udev event with the following attributes::
 
-		OVER_CURRENT_PORT=/sys/bus/usb/devices/.../(hub interface)/portX
-		OVER_CURRENT_COUNT=[current value of this sysfs attribute]
+		  OVER_CURRENT_PORT=/sys/bus/usb/devices/.../(hub interface)/portX
+		  OVER_CURRENT_COUNT=[current value of this sysfs attribute]
 
 What:		/sys/bus/usb/devices/.../(hub interface)/portX/usb3_lpm_permit
 Date:		November 2015
diff --git a/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg b/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
index 9ade80f81f96..2f86e4223bfc 100644
--- a/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
+++ b/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
@@ -12,8 +12,11 @@ KernelVersion:	2.6.26
 Contact:	Harrison Metzger <harrisonmetz@gmail.com>
 Description:	Controls the device's display mode.
 		For a 6 character display the values are
+
 			MSB 0x06; LSB 0x3F, and
+
 		for an 8 character display the values are
+
 			MSB 0x08; LSB 0xFF.
 
 What:		/sys/bus/usb/.../textmode
@@ -37,7 +40,7 @@ KernelVersion:	2.6.26
 Contact:	Harrison Metzger <harrisonmetz@gmail.com>
 Description:	Controls the decimal places on the device.
 		To set the nth decimal place, give this field
-		the value of 10 ** n. Assume this field has
+		the value of ``10 ** n``. Assume this field has
 		the value k and has 1 or more decimal places set,
 		to set the mth place (where m is not already set),
-		change this fields value to k + 10 ** m.
+		change this field's value to ``k + 10 ** m``.
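The decimal-place arithmetic above (each set place n contributes 10 ** n to the field value) can be illustrated with a tiny helper; the function name is hypothetical:

```python
# Illustration of the arithmetic above: each set decimal place n
# contributes 10 ** n, so adding a new place m means adding 10 ** m.

def add_decimal_place(k, m):
    """Set the m-th decimal place in field value k (m must not be set)."""
    assert (k // 10 ** m) % 10 == 0, "place m is already set"
    return k + 10 ** m

print(add_decimal_place(0, 2))    # 100 -> only place 2 set
print(add_decimal_place(100, 0))  # 101 -> places 2 and 0 set
```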
diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
index 452dbe39270e..59fc804265db 100644
--- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
+++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
@@ -28,8 +28,9 @@ Description:
 		Writing UUID to this file will create mediated device of
 		type <type-id> for parent device <device>. This is a
 		write-only file.
-		For example:
-		# echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001" >	\
+		For example::
+
+		  # echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001" >	\
 		       /sys/devices/foo/mdev_supported_types/foo-1/create
 
 What:           /sys/.../mdev_supported_types/<type-id>/devices/
@@ -107,5 +108,6 @@ Description:
 		Writing '1' to this file destroys the mediated device. The
 		vendor driver can fail the remove() callback if that device
 		is active and the vendor driver doesn't support hot unplug.
-		Example:
-		# echo 1 > /sys/bus/mdev/devices/<UUID>/remove
+		Example::
+
+		  # echo 1 > /sys/bus/mdev/devices/<UUID>/remove
diff --git a/Documentation/ABI/testing/sysfs-class-cxl b/Documentation/ABI/testing/sysfs-class-cxl
index 7970e3713e70..a6f51a104c44 100644
--- a/Documentation/ABI/testing/sysfs-class-cxl
+++ b/Documentation/ABI/testing/sysfs-class-cxl
@@ -72,11 +72,16 @@ Description:    read/write
                 when performing the START_WORK ioctl. Only applicable when
                 running under hashed page table mmu.
                 Possible values:
-                        none: No prefaulting (default)
-                        work_element_descriptor: Treat the work element
-                                 descriptor as an effective address and
-                                 prefault what it points to.
-                        all: all segments process calling START_WORK maps.
+
+		=======================  ======================================
+		none			 No prefaulting (default)
+		work_element_descriptor  Treat the work element
+					 descriptor as an effective address and
+					 prefault what it points to.
+		all			 All segments the process calling
+					 START_WORK maps.
+		=======================  ======================================
+
 Users:		https://github.com/ibm-capi/libcxl
 
 What:           /sys/class/cxl/<afu>/reset
diff --git a/Documentation/ABI/testing/sysfs-class-led b/Documentation/ABI/testing/sysfs-class-led
index 5f67f7ab277b..65e040978f73 100644
--- a/Documentation/ABI/testing/sysfs-class-led
+++ b/Documentation/ABI/testing/sysfs-class-led
@@ -50,7 +50,7 @@ Description:
 		You can change triggers in a similar manner to the way an IO
 		scheduler is chosen. Trigger specific parameters can appear in
 		/sys/class/leds/<led> once a given trigger is selected. For
-		their documentation see sysfs-class-led-trigger-*.
+		their documentation see `sysfs-class-led-trigger-*`.
 
 What:		/sys/class/leds/<led>/inverted
 Date:		January 2011
diff --git a/Documentation/ABI/testing/sysfs-class-led-driver-el15203000 b/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
index f520ece9b64c..69befe947d7e 100644
--- a/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
+++ b/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
@@ -6,127 +6,132 @@ Description:
 		The LEDs board supports only predefined patterns by firmware
 		for specific LEDs.
 
-		Breathing mode for Screen frame light tube:
-		"0 4000 1 4000"
+		Breathing mode for Screen frame light tube::
 
-		    ^
-		    |
-		Max-|     ---
-		    |    /   \
-		    |   /     \
-		    |  /       \     /
-		    | /         \   /
-		Min-|-           ---
-		    |
-		    0------4------8--> time (sec)
+		    "0 4000 1 4000"
 
-		Cascade mode for Pipe LED:
-		"1 800 2 800 4 800 8 800 16 800"
+			^
+			|
+		    Max-|     ---
+			|    /   \
+			|   /     \
+			|  /       \     /
+			| /         \   /
+		    Min-|-           ---
+			|
+			0------4------8--> time (sec)
 
-		      ^
-		      |
-		0 On -|----+                   +----+                   +---
-		      |    |                   |    |                   |
-		  Off-|    +-------------------+    +-------------------+
-		      |
-		1 On -|    +----+                   +----+
-		      |    |    |                   |    |
-		  Off |----+    +-------------------+    +------------------
-		      |
-		2 On -|         +----+                   +----+
-		      |         |    |                   |    |
-		  Off-|---------+    +-------------------+    +-------------
-		      |
-		3 On -|              +----+                   +----+
-		      |              |    |                   |    |
-		  Off-|--------------+    +-------------------+    +--------
-		      |
-		4 On -|                   +----+                   +----+
-		      |                   |    |                   |    |
-		  Off-|-------------------+    +-------------------+    +---
-		      |
-		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+		Cascade mode for Pipe LED::
 
-		Inverted cascade mode for Pipe LED:
-		"30 800 29 800 27 800 23 800 15 800"
+		    "1 800 2 800 4 800 8 800 16 800"
 
-		      ^
-		      |
-		0 On -|    +-------------------+    +-------------------+
-		      |    |                   |    |                   |
-		  Off-|----+                   +----+                   +---
-		      |
-		1 On -|----+    +-------------------+    +------------------
-		      |    |    |                   |    |
-		  Off |    +----+                   +----+
-		      |
-		2 On -|---------+    +-------------------+    +-------------
-		      |         |    |                   |    |
-		  Off-|         +----+                   +----+
-		      |
-		3 On -|--------------+    +-------------------+    +--------
-		      |              |    |                   |    |
-		  Off-|              +----+                   +----+
-		      |
-		4 On -|-------------------+    +-------------------+    +---
-		      |                   |    |                   |    |
-		  Off-|                   +----+                   +----+
-		      |
-		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+			^
+			|
+		  0 On -|----+                   +----+                   +---
+			|    |                   |    |                   |
+		    Off-|    +-------------------+    +-------------------+
+			|
+		  1 On -|    +----+                   +----+
+			|    |    |                   |    |
+		    Off |----+    +-------------------+    +------------------
+			|
+		  2 On -|         +----+                   +----+
+			|         |    |                   |    |
+		    Off-|---------+    +-------------------+    +-------------
+			|
+		  3 On -|              +----+                   +----+
+			|              |    |                   |    |
+		    Off-|--------------+    +-------------------+    +--------
+			|
+		  4 On -|                   +----+                   +----+
+			|                   |    |                   |    |
+		    Off-|-------------------+    +-------------------+    +---
+			|
+			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
 
-		Bounce mode for Pipe LED:
-		"1 800 2 800 4 800 8 800 16 800 16 800 8 800 4 800 2 800 1 800"
+		Inverted cascade mode for Pipe LED::
 
-		      ^
-		      |
-		0 On -|----+                                       +--------
-		      |    |                                       |
-		  Off-|    +---------------------------------------+
-		      |
-		1 On -|    +----+                             +----+
-		      |    |    |                             |    |
-		  Off |----+    +-----------------------------+    +--------
-		      |
-		2 On -|         +----+                   +----+
-		      |         |    |                   |    |
-		  Off-|---------+    +-------------------+    +-------------
-		      |
-		3 On -|              +----+         +----+
-		      |              |    |         |    |
-		  Off-|--------------+    +---------+    +------------------
-		      |
-		4 On -|                   +---------+
-		      |                   |         |
-		  Off-|-------------------+         +-----------------------
-		      |
-		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+		    "30 800 29 800 27 800 23 800 15 800"
 
-		Inverted bounce mode for Pipe LED:
-		"30 800 29 800 27 800 23 800 15 800 15 800 23 800 27 800 29 800 30 800"
+			^
+			|
+		  0 On -|    +-------------------+    +-------------------+
+			|    |                   |    |                   |
+		    Off-|----+                   +----+                   +---
+			|
+		  1 On -|----+    +-------------------+    +------------------
+			|    |    |                   |    |
+		    Off |    +----+                   +----+
+			|
+		  2 On -|---------+    +-------------------+    +-------------
+			|         |    |                   |    |
+		    Off-|         +----+                   +----+
+			|
+		  3 On -|--------------+    +-------------------+    +--------
+			|              |    |                   |    |
+		    Off-|              +----+                   +----+
+			|
+		  4 On -|-------------------+    +-------------------+    +---
+			|                   |    |                   |    |
+		    Off-|                   +----+                   +----+
+			|
+			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
 
-		      ^
-		      |
-		0 On -|    +---------------------------------------+
-		      |    |                                       |
-		  Off-|----+                                       +--------
-		      |
-		1 On -|----+    +-----------------------------+    +--------
-		      |    |    |                             |    |
-		  Off |    +----+                             +----+
-		      |
-		2 On -|---------+    +-------------------+    +-------------
-		      |         |    |                   |    |
-		  Off-|         +----+                   +----+
-		      |
-		3 On -|--------------+    +---------+    +------------------
-		      |              |    |         |    |
-		  Off-|              +----+         +----+
-		      |
-		4 On -|-------------------+         +-----------------------
-		      |                   |         |
-		  Off-|                   +---------+
-		      |
-		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+		Bounce mode for Pipe LED::
+
+		    "1 800 2 800 4 800 8 800 16 800 16 800 8 800 4 800 2 800 1 800"
+
+			^
+			|
+		  0 On -|----+                                       +--------
+			|    |                                       |
+		    Off-|    +---------------------------------------+
+			|
+		  1 On -|    +----+                             +----+
+			|    |    |                             |    |
+		    Off |----+    +-----------------------------+    +--------
+			|
+		  2 On -|         +----+                   +----+
+			|         |    |                   |    |
+		    Off-|---------+    +-------------------+    +-------------
+			|
+		  3 On -|              +----+         +----+
+			|              |    |         |    |
+		    Off-|--------------+    +---------+    +------------------
+			|
+		  4 On -|                   +---------+
+			|                   |         |
+		    Off-|-------------------+         +-----------------------
+			|
+			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+
+		Inverted bounce mode for Pipe LED::
+
+		    "30 800 29 800 27 800 23 800 15 800 15 800 23 800 27 800 29 800 30 800"
+
+			^
+			|
+		  0 On -|    +---------------------------------------+
+			|    |                                       |
+		    Off-|----+                                       +--------
+			|
+		  1 On -|----+    +-----------------------------+    +--------
+			|    |    |                             |    |
+		    Off |    +----+                             +----+
+			|
+		  2 On -|---------+    +-------------------+    +-------------
+			|         |    |                   |    |
+		    Off-|         +----+                   +----+
+			|
+		  3 On -|--------------+    +---------+    +------------------
+			|              |    |         |    |
+		    Off-|              +----+         +----+
+			|
+		  4 On -|-------------------+         +-----------------------
+			|                   |         |
+		    Off-|                   +---------+
+			|
+			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
 
 What:		/sys/class/leds/<led>/repeat
 Date:		September 2019
diff --git a/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx b/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
index 45b1e605d355..215482379580 100644
--- a/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
+++ b/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
@@ -12,8 +12,8 @@ Description:
 		format, we should set brightness as 0 for rise stage, fall
 		stage and low stage.
 
-		Min stage duration: 125 ms
-		Max stage duration: 31875 ms
+		- Min stage duration: 125 ms
+		- Max stage duration: 31875 ms
 
 		Since the stage duration step is 125 ms, the duration should be
 		a multiple of 125, like 125ms, 250ms, 375ms, 500ms ... 31875ms.
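Since durations must land on the 125 ms step within the 125 - 31875 ms range described above, a requested time can be rounded down with integer arithmetic. A sketch, not code from the driver:

```shell
# Round a requested duration (ms) down to the sc27xx 125 ms step,
# clamped to the documented minimum and maximum stage durations.
STEP=125
MIN=125
MAX=31875

round_duration() {
    d=$(( $1 / STEP * STEP ))          # round down to a multiple of 125
    [ "$d" -lt "$MIN" ] && d=$MIN      # clamp to the minimum
    [ "$d" -gt "$MAX" ] && d=$MAX      # clamp to the maximum
    echo "$d"
}

round_duration 1000   # 1000 (already a multiple of 125)
round_duration 990    # 875
```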
diff --git a/Documentation/ABI/testing/sysfs-class-mic b/Documentation/ABI/testing/sysfs-class-mic
index 6ef682603179..bd0e780c3760 100644
--- a/Documentation/ABI/testing/sysfs-class-mic
+++ b/Documentation/ABI/testing/sysfs-class-mic
@@ -41,24 +41,33 @@ Description:
 		When read, this entry provides the current state of an Intel
 		MIC device in the context of the card OS. Possible values that
 		will be read are:
-		"ready" - The MIC device is ready to boot the card OS. On
-		reading this entry after an OSPM resume, a "boot" has to be
-		written to this entry if the card was previously shutdown
-		during OSPM suspend.
-		"booting" - The MIC device has initiated booting a card OS.
-		"online" - The MIC device has completed boot and is online
-		"shutting_down" - The card OS is shutting down.
-		"resetting" - A reset has been initiated for the MIC device
-		"reset_failed" - The MIC device has failed to reset.
+
+
+		===============  ===============================================
+		"ready"		 The MIC device is ready to boot the card OS.
+				 On reading this entry after an OSPM resume,
+				 a "boot" has to be written to this entry if
+				 the card was previously shutdown during OSPM
+				 suspend.
+		"booting"	 The MIC device has initiated booting a card OS.
+		"online"	 The MIC device has completed boot and is online.
+		"shutting_down"	 The card OS is shutting down.
+		"resetting"	 A reset has been initiated for the MIC device.
+		"reset_failed"	 The MIC device has failed to reset.
+		===============  ===============================================
 
 		When written, this sysfs entry triggers different state change
 		operations depending upon the current state of the card OS.
 		Acceptable values are:
-		"boot" - Boot the card OS image specified by the combination
-			 of firmware, ramdisk, cmdline and bootmode
-			sysfs entries.
-		"reset" - Initiates device reset.
-		"shutdown" - Initiates card OS shutdown.
+
+
+		==========  ===================================================
+		"boot"      Boot the card OS image specified by the combination
+			    of firmware, ramdisk, cmdline and bootmode
+			    sysfs entries.
+		"reset"     Initiates device reset.
+		"shutdown"  Initiates card OS shutdown.
+		==========  ===================================================
 
 What:		/sys/class/mic/mic(x)/shutdown_status
 Date:		October 2013
@@ -69,12 +78,15 @@ Description:
 		OS can shutdown because of various reasons. When read, this
 		entry provides the status on why the card OS was shutdown.
 		Possible values are:
-		"nop" -  shutdown status is not applicable, when the card OS is
-			"online"
-		"crashed" - Shutdown because of a HW or SW crash.
-		"halted" - Shutdown because of a halt command.
-		"poweroff" - Shutdown because of a poweroff command.
-		"restart" - Shutdown because of a restart command.
+
+		==========  ===================================================
+		"nop"       Shutdown status is not applicable while the
+			    card OS is "online".
+		"crashed"   Shutdown because of a HW or SW crash.
+		"halted"    Shutdown because of a halt command.
+		"poweroff"  Shutdown because of a poweroff command.
+		"restart"   Shutdown because of a restart command.
+		==========  ===================================================
 
 What:		/sys/class/mic/mic(x)/cmdline
 Date:		October 2013
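The state machine described above (read `state`, write "boot"/"reset"/"shutdown") can be driven from shell. A guarded sketch; `mic0` is an assumed device name, and nothing is written unless the state file actually exists:

```shell
# Boot a MIC card only when the card OS reports it is ready.
STATE=/sys/class/mic/mic0/state   # mic0 is an assumed device name

if [ -w "$STATE" ]; then
    [ "$(cat "$STATE")" = "ready" ] && echo boot > "$STATE"
    cat "$STATE"   # expect "booting", then later "online"
else
    echo "no MIC device at $STATE"
fi
```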
diff --git a/Documentation/ABI/testing/sysfs-class-ocxl b/Documentation/ABI/testing/sysfs-class-ocxl
index ae1276efa45a..bf33f4fda58f 100644
--- a/Documentation/ABI/testing/sysfs-class-ocxl
+++ b/Documentation/ABI/testing/sysfs-class-ocxl
@@ -11,8 +11,11 @@ Contact:	linuxppc-dev@lists.ozlabs.org
 Description:	read only
 		Number of contexts for the AFU, in the format <n>/<max>
 		where:
+
+			====	===============================================
 			n:	number of currently active contexts, for debug
 			max:	maximum number of contexts supported by the AFU
+			====	===============================================
 
 What:		/sys/class/ocxl/<afu name>/pp_mmio_size
 Date:		January 2018
diff --git a/Documentation/ABI/testing/sysfs-class-power b/Documentation/ABI/testing/sysfs-class-power
index dbccb2fcd7ce..d4319a04c302 100644
--- a/Documentation/ABI/testing/sysfs-class-power
+++ b/Documentation/ABI/testing/sysfs-class-power
@@ -1,4 +1,4 @@
-===== General Properties =====
+**General Properties**
 
 What:		/sys/class/power_supply/<supply_name>/manufacturer
 Date:		May 2007
@@ -72,6 +72,7 @@ Description:
 		critically low).
 
 		Access: Read, Write
+
 		Valid values: 0 - 100 (percent)
 
 What:		/sys/class/power_supply/<supply_name>/capacity_error_margin
@@ -96,7 +97,9 @@ Description:
 		Coarse representation of battery capacity.
 
 		Access: Read
-		Valid values: "Unknown", "Critical", "Low", "Normal", "High",
+
+		Valid values:
+			      "Unknown", "Critical", "Low", "Normal", "High",
 			      "Full"
 
 What:		/sys/class/power_supply/<supply_name>/current_avg
@@ -139,6 +142,7 @@ Description:
 		throttling for thermal cooling or improving battery health.
 
 		Access: Read, Write
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/charge_control_limit_max
@@ -148,6 +152,7 @@ Description:
 		Maximum legal value for the charge_control_limit property.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/charge_control_start_threshold
@@ -168,6 +173,7 @@ Description:
 		stop.
 
 		Access: Read, Write
+
 		Valid values: 0 - 100 (percent)
 
 What:		/sys/class/power_supply/<supply_name>/charge_type
@@ -183,7 +189,9 @@ Description:
 		different algorithm.
 
 		Access: Read, Write
-		Valid values: "Unknown", "N/A", "Trickle", "Fast", "Standard",
+
+		Valid values:
+			      "Unknown", "N/A", "Trickle", "Fast", "Standard",
 			      "Adaptive", "Custom"
 
 What:		/sys/class/power_supply/<supply_name>/charge_term_current
@@ -194,6 +202,7 @@ Description:
 		when the battery is considered full and charging should end.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/health
@@ -204,7 +213,9 @@ Description:
 		functionality.
 
 		Access: Read
-		Valid values: "Unknown", "Good", "Overheat", "Dead",
+
+		Valid values:
+			      "Unknown", "Good", "Overheat", "Dead",
 			      "Over voltage", "Unspecified failure", "Cold",
 			      "Watchdog timer expire", "Safety timer expire",
 			      "Over current", "Calibration required", "Warm",
@@ -218,6 +229,7 @@ Description:
 		for a battery charge cycle.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/present
@@ -227,9 +239,13 @@ Description:
 		Reports whether a battery is present or not in the system.
 
 		Access: Read
+
 		Valid values:
+
+			== =======
 			0: Absent
 			1: Present
+			== =======
 
 What:		/sys/class/power_supply/<supply_name>/status
 Date:		May 2007
@@ -240,7 +256,9 @@ Description:
 		used to enable/disable charging to the battery.
 
 		Access: Read, Write
-		Valid values: "Unknown", "Charging", "Discharging",
+
+		Valid values:
+			      "Unknown", "Charging", "Discharging",
 			      "Not charging", "Full"
 
 What:		/sys/class/power_supply/<supply_name>/technology
@@ -250,7 +268,9 @@ Description:
 		Describes the battery technology supported by the supply.
 
 		Access: Read
-		Valid values: "Unknown", "NiMH", "Li-ion", "Li-poly", "LiFe",
+
+		Valid values:
+			      "Unknown", "NiMH", "Li-ion", "Li-poly", "LiFe",
 			      "NiCd", "LiMn"
 
 What:		/sys/class/power_supply/<supply_name>/temp
@@ -260,6 +280,7 @@ Description:
 		Reports the current TBAT battery temperature reading.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_alert_max
@@ -274,6 +295,7 @@ Description:
 		critically high, and charging has stopped).
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_alert_min
@@ -289,6 +311,7 @@ Description:
 		remedy the situation).
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_max
@@ -299,6 +322,7 @@ Description:
 		charging.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_min
@@ -309,6 +333,7 @@ Description:
 		charging.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/voltage_avg,
@@ -320,6 +345,7 @@ Description:
 		which they average readings to smooth out the reported value.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What:		/sys/class/power_supply/<supply_name>/voltage_max,
@@ -330,6 +356,7 @@ Description:
 		during charging.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What:		/sys/class/power_supply/<supply_name>/voltage_min,
@@ -340,6 +367,7 @@ Description:
 		during discharging.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What:		/sys/class/power_supply/<supply_name>/voltage_now,
@@ -350,9 +378,10 @@ Description:
 		This value is not averaged/smoothed.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
-===== USB Properties =====
+**USB Properties**
 
 What: 		/sys/class/power_supply/<supply_name>/current_avg
 Date:		May 2007
@@ -363,6 +392,7 @@ Description:
 		average readings to smooth out the reported value.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 
@@ -373,6 +403,7 @@ Description:
 		Reports the maximum IBUS current the supply can support.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 What: 		/sys/class/power_supply/<supply_name>/current_now
@@ -385,6 +416,7 @@ Description:
 		within the reported min/max range.
 
 		Access: Read, Write
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/input_current_limit
@@ -399,6 +431,7 @@ Description:
 		solved using power limit use input_current_limit.
 
 		Access: Read, Write
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/input_voltage_limit
@@ -441,10 +474,14 @@ Description:
 		USB supply so voltage and current can be controlled).
 
 		Access: Read, Write
+
 		Valid values:
+
+			== ==================================================
 			0: Offline
 			1: Online Fixed - Fixed Voltage Supply
 			2: Online Programmable - Programmable Voltage Supply
+			== ==================================================
 
 What:		/sys/class/power_supply/<supply_name>/temp
 Date:		May 2007
@@ -455,6 +492,7 @@ Description:
 		TJUNC temperature of an IC)
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_alert_max
@@ -470,6 +508,7 @@ Description:
 		remedy the situation).
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_alert_min
@@ -485,6 +524,7 @@ Description:
 		accordingly to remedy the situation).
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_max
@@ -494,6 +534,7 @@ Description:
 		Reports the maximum allowed supply temperature for operation.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_min
@@ -503,6 +544,7 @@ Description:
 		Reports the minimum allowed supply temperature for operation.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What: 		/sys/class/power_supply/<supply_name>/usb_type
@@ -514,7 +556,9 @@ Description:
 		is attached.
 
 		Access: Read-Only
-		Valid values: "Unknown", "SDP", "DCP", "CDP", "ACA", "C", "PD",
+
+		Valid values:
+			      "Unknown", "SDP", "DCP", "CDP", "ACA", "C", "PD",
 			      "PD_DRP", "PD_PPS", "BrickID"
 
 What: 		/sys/class/power_supply/<supply_name>/voltage_max
@@ -524,6 +568,7 @@ Description:
 		Reports the maximum VBUS voltage the supply can support.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What: 		/sys/class/power_supply/<supply_name>/voltage_min
@@ -533,6 +578,7 @@ Description:
 		Reports the minimum VBUS voltage the supply can support.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What: 		/sys/class/power_supply/<supply_name>/voltage_now
@@ -545,9 +591,10 @@ Description:
 		within the reported min/max range.
 
 		Access: Read, Write
+
 		Valid values: Represented in microvolts
 
-===== Device Specific Properties =====
+**Device Specific Properties**
 
 What:		/sys/class/power/ds2760-battery.*/charge_now
 Date:		May 2010
@@ -581,6 +628,7 @@ Description:
 		will drop to 0 A) and will trigger interrupt.
 
 		Valid values:
+
 		- 5, 6 or 7 (hours),
 		- 0: disabled.
 
@@ -595,6 +643,7 @@ Description:
 		will drop to 0 A) and will trigger interrupt.
 
 		Valid values:
+
 		- 4 - 16 (hours), step by 2 (rounded down)
 		- 0: disabled.
 
@@ -609,6 +658,7 @@ Description:
 		interrupt and start top-off charging mode.
 
 		Valid values:
+
 		- 100000 - 200000 (microamps), step by 25000 (rounded down)
 		- 200000 - 350000 (microamps), step by 50000 (rounded down)
 		- 0: disabled.
@@ -624,6 +674,7 @@ Description:
 		will drop to 0 A) and will trigger interrupt.
 
 		Valid values:
+
 		- 0 - 70 (minutes), step by 10 (rounded down)
 
 What:		/sys/class/power_supply/bq24257-charger/ovp_voltage
@@ -637,6 +688,7 @@ Description:
 		device datasheet for details.
 
 		Valid values:
+
 		- 6000000, 6500000, 7000000, 8000000, 9000000, 9500000, 10000000,
 		  10500000 (all uV)
 
@@ -652,6 +704,7 @@ Description:
 		lower than the set value. See device datasheet for details.
 
 		Valid values:
+
 		- 4200000, 4280000, 4360000, 4440000, 4520000, 4600000, 4680000,
 		  4760000 (all uV)
 
@@ -666,6 +719,7 @@ Description:
 		the charger operates normally. See device datasheet for details.
 
 		Valid values:
+
 		- 1: enabled
 		- 0: disabled
 
@@ -681,6 +735,7 @@ Description:
 		from the system. See device datasheet for details.
 
 		Valid values:
+
 		- 1: enabled
 		- 0: disabled
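The properties in this file are plain text files with the units noted above (microamps, microvolts, tenths of a degree Celsius). A small read helper, as a sketch; `BAT0` is a hypothetical supply name:

```shell
# Read one power_supply property, printing "n/a" when it is absent.
psy_read() {   # usage: psy_read <supply_name> <property>
    f="/sys/class/power_supply/$1/$2"
    if [ -r "$f" ]; then cat "$f"; else echo "n/a"; fi
}

# Unit conversion per the entries above: voltage_* is in microvolts.
uv=12600000
echo "$(( uv / 1000000 )) V"   # 12 V

psy_read BAT0 capacity         # BAT0 is a hypothetical supply name
```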
 
diff --git a/Documentation/ABI/testing/sysfs-class-power-twl4030 b/Documentation/ABI/testing/sysfs-class-power-twl4030
index b4fd32d210c5..7ac36dba87bc 100644
--- a/Documentation/ABI/testing/sysfs-class-power-twl4030
+++ b/Documentation/ABI/testing/sysfs-class-power-twl4030
@@ -4,18 +4,20 @@ Description:
 	Writing to this can disable charging.
 
 	Possible values are:
-		"auto" - draw power as appropriate for detected
-			 power source and battery status.
-		"off"  - do not draw any power.
-		"continuous"
-		       - activate mode described as "linear" in
-		         TWL data sheets.  This uses whatever
-			 current is available and doesn't switch off
-			 when voltage drops.
 
-			 This is useful for unstable power sources
-			 such as bicycle dynamo, but care should
-			 be taken that battery is not over-charged.
+		=============	===========================================
+		"auto" 		draw power as appropriate for detected
+				power source and battery status.
+		"off"  		do not draw any power.
+		"continuous"	activate mode described as "linear" in
+				TWL data sheets.  This uses whatever
+				current is available and doesn't switch off
+				when voltage drops.
+
+				This is useful for unstable power sources
+				such as bicycle dynamo, but care should
+				be taken that battery is not over-charged.
+		=============	===========================================
 
 What: /sys/class/power_supply/twl4030_ac/mode
 Description:
@@ -23,6 +25,9 @@ Description:
 	Writing to this can disable charging.
 
 	Possible values are:
-		"auto" - draw power as appropriate for detected
-			 power source and battery status.
-		"off"  - do not draw any power.
+
+		======	===========================================
+		"auto"	draw power as appropriate for detected
+			power source and battery status.
+		"off"	do not draw any power.
+		======	===========================================
diff --git a/Documentation/ABI/testing/sysfs-class-rc b/Documentation/ABI/testing/sysfs-class-rc
index 6c0d6c8cb911..9c8ff7910858 100644
--- a/Documentation/ABI/testing/sysfs-class-rc
+++ b/Documentation/ABI/testing/sysfs-class-rc
@@ -21,15 +21,22 @@ KernelVersion:	2.6.36
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Reading this file returns a list of available protocols,
-		something like:
+		something like::
+
 		    "rc5 [rc6] nec jvc [sony]"
+
 		Enabled protocols are shown in [] brackets.
+
 		Writing "+proto" will add a protocol to the list of enabled
 		protocols.
+
 		Writing "-proto" will remove a protocol from the list of enabled
 		protocols.
+
 		Writing "proto" will enable only "proto".
+
 		Writing "none" will disable all protocols.
+
 		Write fails with EINVAL if an invalid protocol combination or
 		unknown protocol name is used.
 
@@ -39,11 +46,13 @@ KernelVersion:	3.15
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Sets the scancode filter expected value.
+
 		Use in combination with /sys/class/rc/rcN/filter_mask to set the
 		expected value of the bits set in the filter mask.
 		If the hardware supports it then scancodes which do not match
 		the filter will be ignored. Otherwise the write will fail with
 		an error.
+
 		This value may be reset to 0 if the current protocol is altered.
 
 What:		/sys/class/rc/rcN/filter_mask
@@ -56,9 +65,11 @@ Description:
 		of the scancode which should be compared against the expected
 		value. A value of 0 disables the filter to allow all valid
 		scancodes to be processed.
+
 		If the hardware supports it then scancodes which do not match
 		the filter will be ignored. Otherwise the write will fail with
 		an error.
+
 		This value may be reset to 0 if the current protocol is altered.
 
 What:		/sys/class/rc/rcN/wakeup_protocols
@@ -67,15 +78,22 @@ KernelVersion:	4.11
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Reading this file returns a list of available protocols to use
-		for the wakeup filter, something like:
+		for the wakeup filter, something like::
+
 		    "rc-5 nec nec-x rc-6-0 rc-6-6a-24 [rc-6-6a-32] rc-6-mce"
+
 		Note that protocol variants are listed, so "nec", "sony",
 		"rc-5", "rc-6" have their different bit length encodings
 		listed if available.
+
 		The enabled wakeup protocol is shown in [] brackets.
+
 		Only one protocol can be selected at a time.
+
 		Writing "proto" will use "proto" for wakeup events.
+
 		Writing "none" will disable wakeup.
+
 		Write fails with EINVAL if an invalid protocol combination or
 		unknown protocol name is used, or if wakeup is not supported by
 		the hardware.
@@ -86,13 +104,17 @@ KernelVersion:	3.15
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Sets the scancode wakeup filter expected value.
+
 		Use in combination with /sys/class/rc/rcN/wakeup_filter_mask to
 		set the expected value of the bits set in the wakeup filter mask
 		to trigger a system wake event.
+
 		If the hardware supports it and wakeup_filter_mask is not 0 then
 		scancodes which match the filter will wake the system from e.g.
 		suspend to RAM or power off.
+
 		Otherwise the write will fail with an error.
+
 		This value may be reset to 0 if the wakeup protocol is altered.
 
 What:		/sys/class/rc/rcN/wakeup_filter_mask
@@ -101,11 +123,15 @@ KernelVersion:	3.15
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Sets the scancode wakeup filter mask of bits to compare.
+
 		Use in combination with /sys/class/rc/rcN/wakeup_filter to set
 		the bits of the scancode which should be compared against the
 		expected value to trigger a system wake event.
+
 		If the hardware supports it and wakeup_filter_mask is not 0 then
 		scancodes which match the filter will wake the system from e.g.
 		suspend to RAM or power off.
+
 		Otherwise the write will fail with an error.
+
 		This value may be reset to 0 if the wakeup protocol is altered.
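As a sketch of how user space might consume the wakeup_protocols listing documented above: the enabled protocol is the entry shown in [] brackets, which can be pulled out with a small amount of text processing. The protocol string below is the illustrative sample from the ABI text, not read from a real /sys/class/rc device.

```shell
# Extract the enabled wakeup protocol (the bracketed entry) from a
# wakeup_protocols-style listing. Sample string only; a real consumer
# would read /sys/class/rc/rcN/wakeup_protocols instead.
protocols="rc-5 nec nec-x rc-6-0 rc-6-6a-24 [rc-6-6a-32] rc-6-mce"
enabled=$(printf '%s\n' "$protocols" | sed -n 's/.*\[\([^]]*\)\].*/\1/p')
echo "$enabled"
```

Writing a bare protocol name (or "none") back to the same file then selects or disables the wakeup protocol, as the Description states.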
diff --git a/Documentation/ABI/testing/sysfs-class-scsi_host b/Documentation/ABI/testing/sysfs-class-scsi_host
index bafc59fd7b69..7c98d8f43c45 100644
--- a/Documentation/ABI/testing/sysfs-class-scsi_host
+++ b/Documentation/ABI/testing/sysfs-class-scsi_host
@@ -56,8 +56,9 @@ Description:
 		management) on top, which makes it match the Windows IRST (Intel
 		Rapid Storage Technology) driver settings. This setting is also
 		close to min_power, except that:
+
 		a) It does not use host-initiated slumber mode, but it does
-		allow device-initiated slumber
+		   allow device-initiated slumber
 		b) It does not enable low power device sleep mode (DevSlp).
 
 What:		/sys/class/scsi_host/hostX/em_message
@@ -70,8 +71,8 @@ Description:
 		protocol, writes and reads correspond to the LED message format
 		as defined in the AHCI spec.
 
-		The user must turn sw_activity (under /sys/block/*/device/) OFF
-		it they wish to control the activity LED via the em_message
+		The user must turn sw_activity (under `/sys/block/*/device/`)
+		OFF if they wish to control the activity LED via the em_message
 		file.
 
 		em_message_type: (RO) Displays the current enclosure management
diff --git a/Documentation/ABI/testing/sysfs-class-typec b/Documentation/ABI/testing/sysfs-class-typec
index b834671522d6..b7794e02ad20 100644
--- a/Documentation/ABI/testing/sysfs-class-typec
+++ b/Documentation/ABI/testing/sysfs-class-typec
@@ -40,10 +40,13 @@ Description:
 		attribute will not return until the operation has finished.
 
 		Valid values:
-		- source (The port will behave as source only DFP port)
-		- sink (The port will behave as sink only UFP port)
-		- dual (The port will behave as dual-role-data and
+
+		======  ==============================================
+		source  (The port will behave as source only DFP port)
+		sink    (The port will behave as sink only UFP port)
+		dual    (The port will behave as dual-role-data and
 			dual-role-power port)
+		======  ==============================================
 
 What:		/sys/class/typec/<port>/vconn_source
 Date:		April 2017
@@ -59,6 +62,7 @@ Description:
 		generates uevent KOBJ_CHANGE.
 
 		Valid values:
+
 		- "no" when the port is not the VCONN Source
 		- "yes" when the port is the VCONN Source
 
@@ -72,6 +76,7 @@ Description:
 		power operation mode should show "usb_power_delivery".
 
 		Valid values:
+
 		- default
 		- 1.5A
 		- 3.0A
@@ -191,6 +196,7 @@ Date:		April 2017
 Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
 Description:
 		Shows type of the plug on the cable:
+
 		- type-a - Standard A
 		- type-b - Standard B
 		- type-c
diff --git a/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD b/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
index 7e43cdce9a52..f7b360a61b21 100644
--- a/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
+++ b/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
@@ -7,6 +7,7 @@ Description:
 		(RO) Hexadecimal bitmask of the TAD attributes are reported by
 		the platform firmware (see ACPI 6.2, section 9.18.2):
 
+		======= ======================================================
 		BIT(0): AC wakeup implemented if set
 		BIT(1): DC wakeup implemented if set
 		BIT(2): Get/set real time features implemented if set
@@ -16,6 +17,7 @@ Description:
 		BIT(6): The AC timer wakes up from S5 if set
 		BIT(7): The DC timer wakes up from S4 if set
 		BIT(8): The DC timer wakes up from S5 if set
+		======= ======================================================
 
 		The other bits are reserved.
 
@@ -62,9 +64,11 @@ Description:
 		timer status with the following meaning of bits (see ACPI 6.2,
 		Section 9.18.5):
 
+		======= ======================================================
 		Bit(0): The timer has expired if set.
 		Bit(1): The timer has woken up the system from a sleep state
 		        (S3 or S4/S5 if supported) if set.
+		======= ======================================================
 
 		The other bits are reserved.
 
diff --git a/Documentation/ABI/testing/sysfs-devices-platform-docg3 b/Documentation/ABI/testing/sysfs-devices-platform-docg3
index 8aa36716882f..378c42694bfb 100644
--- a/Documentation/ABI/testing/sysfs-devices-platform-docg3
+++ b/Documentation/ABI/testing/sysfs-devices-platform-docg3
@@ -9,8 +9,10 @@ Description:
 		The protection has information embedded whether it blocks reads,
 		writes or both.
 		The result is:
-		0 -> the DPS is not keylocked
-		1 -> the DPS is keylocked
+
+		- 0 -> the DPS is not keylocked
+		- 1 -> the DPS is keylocked
+
 Users:		None identified so far.
 
 What:		/sys/devices/platform/docg3/f[0-3]_dps[01]_protection_key
@@ -27,8 +29,12 @@ Description:
 		Entering the correct value toggle the lock, and can be observed
 		through f[0-3]_dps[01]_is_keylocked.
 		Possible values are:
+
 			- 8 bytes
+
 		Typical values are:
+
 			- "00000000"
 			- "12345678"
+
 Users:		None identified so far.
diff --git a/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb b/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
index 2107082426da..e45ac2e865d5 100644
--- a/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
+++ b/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
@@ -17,10 +17,10 @@ Description:
 		to overlay planes.
 
 		Selects the composition mode for the overlay. Possible values
-		are
+		are:
 
-		0 - Alpha Blending
-		1 - ROP3
+		- 0 - Alpha Blending
+		- 1 - ROP3
 
 What:		/sys/devices/platform/sh_mobile_lcdc_fb.[0-3]/graphics/fb[0-9]/ovl_position
 Date:		May 2012
@@ -30,7 +30,7 @@ Description:
 		to overlay planes.
 
 		Stores the x,y overlay position on the display in pixels. The
-		position format is `[0-9]+,[0-9]+'.
+		position format is `[0-9]+,[0-9]+`.
 
 What:		/sys/devices/platform/sh_mobile_lcdc_fb.[0-3]/graphics/fb[0-9]/ovl_rop3
 Date:		May 2012
diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index b555df825447..274c337ec6a9 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -151,23 +151,28 @@ Description:
 		The processor idle states which are available for use have the
 		following attributes:
 
-		name: (RO) Name of the idle state (string).
+		======== ==== =================================================
+		name:	 (RO) Name of the idle state (string).
 
 		latency: (RO) The latency to exit out of this idle state (in
-		microseconds).
+			      microseconds).
 
-		power: (RO) The power consumed while in this idle state (in
-		milliwatts).
+		power:   (RO) The power consumed while in this idle state (in
+			      milliwatts).
 
-		time: (RO) The total time spent in this idle state (in microseconds).
+		time:    (RO) The total time spent in this idle state
+			      (in microseconds).
 
-		usage: (RO) Number of times this state was entered (a count).
+		usage:	 (RO) Number of times this state was entered (a count).
 
-		above: (RO) Number of times this state was entered, but the
-		       observed CPU idle duration was too short for it (a count).
+		above:	 (RO) Number of times this state was entered, but the
+			      observed CPU idle duration was too short for it
+			      (a count).
 
-		below: (RO) Number of times this state was entered, but the
-		       observed CPU idle duration was too long for it (a count).
+		below: 	 (RO) Number of times this state was entered, but the
+			      observed CPU idle duration was too long for it
+			      (a count).
+		======== ==== =================================================
 
 What:		/sys/devices/system/cpu/cpuX/cpuidle/stateN/desc
 Date:		February 2008
@@ -290,6 +295,7 @@ Description:	Processor frequency boosting control
 		This switch controls the boost setting for the whole system.
 		Boosting allows the CPU and the firmware to run at a frequency
 		beyound it's nominal limit.
+
 		More details can be found in
 		Documentation/admin-guide/pm/cpufreq.rst
 
@@ -337,43 +343,57 @@ Contact:	Sudeep Holla <sudeep.holla@arm.com>
 Description:	Parameters for the CPU cache attributes
 
 		allocation_policy:
-			- WriteAllocate: allocate a memory location to a cache line
-					 on a cache miss because of a write
-			- ReadAllocate: allocate a memory location to a cache line
+			- WriteAllocate:
+					allocate a memory location to a cache line
+					on a cache miss because of a write
+			- ReadAllocate:
+					allocate a memory location to a cache line
 					on a cache miss because of a read
-			- ReadWriteAllocate: both writeallocate and readallocate
+			- ReadWriteAllocate:
+					both writeallocate and readallocate
 
-		attributes: LEGACY used only on IA64 and is same as write_policy
+		attributes:
+			    LEGACY used only on IA64 and is same as write_policy
 
-		coherency_line_size: the minimum amount of data in bytes that gets
+		coherency_line_size:
+				     the minimum amount of data in bytes that gets
 				     transferred from memory to cache
 
-		level: the cache hierarchy in the multi-level cache configuration
+		level:
+			the cache hierarchy in the multi-level cache configuration
 
-		number_of_sets: total number of sets in the cache, a set is a
+		number_of_sets:
+				total number of sets in the cache, a set is a
 				collection of cache lines with the same cache index
 
-		physical_line_partition: number of physical cache line per cache tag
+		physical_line_partition:
+				number of physical cache line per cache tag
 
-		shared_cpu_list: the list of logical cpus sharing the cache
+		shared_cpu_list:
+				the list of logical cpus sharing the cache
 
-		shared_cpu_map: logical cpu mask containing the list of cpus sharing
+		shared_cpu_map:
+				logical cpu mask containing the list of cpus sharing
 				the cache
 
-		size: the total cache size in kB
+		size:
+			the total cache size in kB
 
 		type:
 			- Instruction: cache that only holds instructions
 			- Data: cache that only caches data
 			- Unified: cache that holds both data and instructions
 
-		ways_of_associativity: degree of freedom in placing a particular block
-					of memory in the cache
+		ways_of_associativity:
+			degree of freedom in placing a particular block
+			of memory in the cache
 
 		write_policy:
-			- WriteThrough: data is written to both the cache line
+			- WriteThrough:
+					data is written to both the cache line
 					and to the block in the lower-level memory
-			- WriteBack: data is written only to the cache line and
+			- WriteBack:
+				     data is written only to the cache line and
 				     the modified cache line is written to main
 				     memory only when it is replaced
 
@@ -414,30 +434,30 @@ Description:	POWERNV CPUFreq driver's frequency throttle stats directory and
 		throttle attributes exported in the 'throttle_stats' directory:
 
 		- turbo_stat : This file gives the total number of times the max
-		frequency is throttled to lower frequency in turbo (at and above
-		nominal frequency) range of frequencies.
+		  frequency is throttled to lower frequency in turbo (at and above
+		  nominal frequency) range of frequencies.
 
 		- sub_turbo_stat : This file gives the total number of times the
-		max frequency is throttled to lower frequency in sub-turbo(below
-		nominal frequency) range of frequencies.
+		  max frequency is throttled to lower frequency in sub-turbo(below
+		  nominal frequency) range of frequencies.
 
 		- unthrottle : This file gives the total number of times the max
-		frequency is unthrottled after being throttled.
+		  frequency is unthrottled after being throttled.
 
 		- powercap : This file gives the total number of times the max
-		frequency is throttled due to 'Power Capping'.
+		  frequency is throttled due to 'Power Capping'.
 
 		- overtemp : This file gives the total number of times the max
-		frequency is throttled due to 'CPU Over Temperature'.
+		  frequency is throttled due to 'CPU Over Temperature'.
 
 		- supply_fault : This file gives the total number of times the
-		max frequency is throttled due to 'Power Supply Failure'.
+		  max frequency is throttled due to 'Power Supply Failure'.
 
 		- overcurrent : This file gives the total number of times the
-		max frequency is throttled due to 'Overcurrent'.
+		  max frequency is throttled due to 'Overcurrent'.
 
 		- occ_reset : This file gives the total number of times the max
-		frequency is throttled due to 'OCC Reset'.
+		  frequency is throttled due to 'OCC Reset'.
 
 		The sysfs attributes representing different throttle reasons like
 		powercap, overtemp, supply_fault, overcurrent and occ_reset map to
@@ -469,8 +489,9 @@ What:		/sys/devices/system/cpu/cpuX/regs/
 Date:		June 2016
 Contact:	Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>
 Description:	AArch64 CPU registers
+
 		'identification' directory exposes the CPU ID registers for
-		 identifying model and revision of the CPU.
+		identifying model and revision of the CPU.
 
 What:		/sys/devices/system/cpu/cpu#/cpu_capacity
 Date:		December 2016
@@ -497,9 +518,11 @@ Description:	Information about CPU vulnerabilities
 		vulnerabilities. The output of those files reflects the
 		state of the CPUs in the system. Possible output values:
 
+		================  ==============================================
 		"Not affected"	  CPU is not affected by the vulnerability
 		"Vulnerable"	  CPU is affected and no mitigation in effect
 		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
+		================  ==============================================
 
 		See also: Documentation/admin-guide/hw-vuln/index.rst
 
@@ -515,12 +538,14 @@ Description:	Control Symetric Multi Threading (SMT)
 		control: Read/write interface to control SMT. Possible
 			 values:
 
+			 ================ =========================================
 			 "on"		  SMT is enabled
 			 "off"		  SMT is disabled
 			 "forceoff"	  SMT is force disabled. Cannot be changed.
 			 "notsupported"   SMT is not supported by the CPU
 			 "notimplemented" SMT runtime toggling is not
 					  implemented for the architecture
+			 ================ =========================================
 
 			 If control status is "forceoff" or "notsupported" writes
 			 are rejected.
diff --git a/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl b/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
index 470def06ab0a..1a8ee26e92ae 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
+++ b/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
@@ -5,8 +5,10 @@ Contact:        Vernon Mauery <vernux@us.ibm.com>
 Description:    The state file allows a means by which to change in and
                 out of Premium Real-Time Mode (PRTM), as well as the
                 ability to query the current state.
-                    0 => PRTM off
-                    1 => PRTM enabled
+
+                    - 0 => PRTM off
+                    - 1 => PRTM enabled
+
 Users:          The ibm-prtm userspace daemon uses this interface.
 
 
diff --git a/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator b/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
index 4d63a7904b94..42214b4ff14a 100644
--- a/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
+++ b/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
@@ -6,11 +6,13 @@ Description:	Read/write the current state of DDR Backup Mode, which controls
 		if DDR power rails will be kept powered during system suspend.
 		("on"/"1" = enabled, "off"/"0" = disabled).
 		Two types of power switches (or control signals) can be used:
+
 		  A. With a momentary power switch (or pulse signal), DDR
 		     Backup Mode is enabled by default when available, as the
 		     PMIC will be configured only during system suspend.
 		  B. With a toggle power switch (or level signal), the
 		     following steps must be followed exactly:
+
 		       1. Configure PMIC for backup mode, to change the role of
 			  the accessory power switch from a power switch to a
 			  wake-up switch,
@@ -20,8 +22,10 @@ Description:	Read/write the current state of DDR Backup Mode, which controls
 		       3. Suspend system,
 		       4. Switch accessory power switch on, to resume the
 			  system.
+
 		     DDR Backup Mode must be explicitly enabled by the user,
 		     to invoke step 1.
+
 		See also Documentation/devicetree/bindings/mfd/bd9571mwv.txt.
 Users:		User space applications for embedded boards equipped with a
 		BD9571MWV PMIC.
diff --git a/Documentation/ABI/testing/sysfs-driver-genwqe b/Documentation/ABI/testing/sysfs-driver-genwqe
index 64ac6d567c4b..69d855dc4c47 100644
--- a/Documentation/ABI/testing/sysfs-driver-genwqe
+++ b/Documentation/ABI/testing/sysfs-driver-genwqe
@@ -29,8 +29,12 @@ What:           /sys/class/genwqe/genwqe<n>_card/reload_bitstream
 Date:           May 2014
 Contact:        klebers@linux.vnet.ibm.com
 Description:    Interface to trigger a PCIe card reset to reload the bitstream.
+
+		::
+
                   sudo sh -c 'echo 1 > \
                     /sys/class/genwqe/genwqe0_card/reload_bitstream'
+
                 If successfully, the card will come back with the bitstream set
                 on 'next_bitstream'.
 
@@ -64,8 +68,11 @@ Description:    Base clock frequency of the card.
 What:           /sys/class/genwqe/genwqe<n>_card/device/sriov_numvfs
 Date:           Oct 2013
 Contact:        haver@linux.vnet.ibm.com
-Description:    Enable VFs (1..15):
+Description:    Enable VFs (1..15)::
+
                   sudo sh -c 'echo 15 > \
                     /sys/bus/pci/devices/0000\:1b\:00.0/sriov_numvfs'
-                Disable VFs:
+
+                Disable VFs::
+
                   Write a 0 into the same sysfs entry.
diff --git a/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff b/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
index 305dffd229a8..de07be314efc 100644
--- a/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
+++ b/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
@@ -12,7 +12,9 @@ KernelVersion:	4.1
 Contact:	Michal Malý <madcatxster@devoid-pointer.net>
 Description:	Displays a set of alternate modes supported by a wheel. Each
 		mode is listed as follows:
+
 		  Tag: Mode Name
+
 		Currently active mode is marked with an asterisk. List also
 		contains an abstract item "native" which always denotes the
 		native mode of the wheel. Echoing the mode tag switches the
@@ -24,24 +26,30 @@ Description:	Displays a set of alternate modes supported by a wheel. Each
 		This entry is not created for devices that have only one mode.
 
 		Currently supported mode switches:
-		Driving Force Pro:
+
+		Driving Force Pro::
+
 		  DF-EX --> DFP
 
-		G25:
+		G25::
+
 		  DF-EX --> DFP --> G25
 
-		G27:
+		G27::
+
 		  DF-EX <*> DFP <-> G25 <-> G27
 		  DF-EX <*--------> G25 <-> G27
 		  DF-EX <*----------------> G27
 
-		G29:
+		G29::
+
 		  DF-EX <*> DFP <-> G25 <-> G27 <-> G29
 		  DF-EX <*--------> G25 <-> G27 <-> G29
 		  DF-EX <*----------------> G27 <-> G29
 		  DF-EX <*------------------------> G29
 
-		DFGT:
+		DFGT::
+
 		  DF-EX <*> DFP <-> DFGT
 		  DF-EX <*--------> DFGT
 
diff --git a/Documentation/ABI/testing/sysfs-driver-hid-wiimote b/Documentation/ABI/testing/sysfs-driver-hid-wiimote
index 39dfa5cb1cc5..cd7b82a5c27d 100644
--- a/Documentation/ABI/testing/sysfs-driver-hid-wiimote
+++ b/Documentation/ABI/testing/sysfs-driver-hid-wiimote
@@ -39,9 +39,13 @@ Description:	While a device is initialized by the wiimote driver, we perform
 		Other strings for each device-type are available and may be
 		added if new device-specific detections are added.
 		Currently supported are:
-			gen10: First Wii Remote generation
-			gen20: Second Wii Remote Plus generation (builtin MP)
+
+			============= =======================================
+			gen10:        First Wii Remote generation
+			gen20:        Second Wii Remote Plus generation
+				      (builtin MP)
 			balanceboard: Wii Balance Board
+			============= =======================================
 
 What:		/sys/bus/hid/drivers/wiimote/<dev>/bboard_calib
 Date:		May 2013
@@ -54,6 +58,7 @@ Description:	This attribute is only provided if the device was detected as a
 		First, 0kg values for all 4 sensors are written, followed by the
 		17kg values for all 4 sensors and last the 34kg values for all 4
 		sensors.
+
 		Calibration data is already applied by the kernel to all input
 		values but may be used by user-space to perform other
 		transformations.
@@ -68,9 +73,11 @@ Description:	This attribute is only provided if the device was detected as a
 		is prefixed with a +/-. Each value is a signed 16bit number.
 		Data is encoded as decimal numbers and specifies the offsets of
 		the analog sticks of the pro-controller.
+
 		Calibration data is already applied by the kernel to all input
 		values but may be used by user-space to perform other
 		transformations.
+
 		Calibration data is detected by the kernel during device setup.
 		You can write "scan\n" into this file to re-trigger calibration.
 		You can also write data directly in the form "x1:y1 x2:y2" to
diff --git a/Documentation/ABI/testing/sysfs-driver-samsung-laptop b/Documentation/ABI/testing/sysfs-driver-samsung-laptop
index 34d3a3359cf4..28c9c040de5d 100644
--- a/Documentation/ABI/testing/sysfs-driver-samsung-laptop
+++ b/Documentation/ABI/testing/sysfs-driver-samsung-laptop
@@ -9,10 +9,12 @@ Description:	Some Samsung laptops have different "performance levels"
 		their fans quiet at all costs.  Reading from this file
 		will show the current performance level.  Writing to the
 		file can change this value.
+
 			Valid options:
-				"silent"
-				"normal"
-				"overclock"
+				- "silent"
+				- "normal"
+				- "overclock"
+
 		Note that not all laptops support all of these options.
 		Specifically, not all support the "overclock" option,
 		and it's still unknown if this value even changes
@@ -25,8 +27,9 @@ Contact:	Corentin Chary <corentin.chary@gmail.com>
 Description:	Max battery charge level can be modified, battery cycle
 		life can be extended by reducing the max battery charge
 		level.
-		0 means normal battery mode (100% charge)
-		1 means battery life extender mode (80% charge)
+
+		- 0 means normal battery mode (100% charge)
+		- 1 means battery life extender mode (80% charge)
 
 What:		/sys/devices/platform/samsung/usb_charge
 Date:		December 1, 2011
diff --git a/Documentation/ABI/testing/sysfs-driver-toshiba_acpi b/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
index f34221b52b14..e5a438d84e1f 100644
--- a/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
+++ b/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
@@ -4,10 +4,12 @@ KernelVersion:	3.15
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the keyboard backlight operation mode, valid
 		values are:
+
 			* 0x1  -> FN-Z
 			* 0x2  -> AUTO (also called TIMER)
 			* 0x8  -> ON
 			* 0x10 -> OFF
+
 		Note that from kernel 3.16 onwards this file accepts all listed
 		parameters, kernel 3.15 only accepts the first two (FN-Z and
 		AUTO).
@@ -41,8 +43,10 @@ KernelVersion:	3.15
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This files controls the status of the touchpad and pointing
 		stick (if available), valid values are:
+
 			* 0 -> OFF
 			* 1 -> ON
+
 Users:		KToshiba
 
 What:		/sys/devices/LNXSYSTM:00/LNXSYBUS:00/TOS{1900,620{0,7,8}}:00/available_kbd_modes
@@ -51,10 +55,12 @@ KernelVersion:	3.16
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file shows the supported keyboard backlight modes
 		the system supports, which can be:
+
 			* 0x1  -> FN-Z
 			* 0x2  -> AUTO (also called TIMER)
 			* 0x8  -> ON
 			* 0x10 -> OFF
+
 		Note that not all keyboard types support the listed modes.
 		See the entry named "available_kbd_modes"
 Users:		KToshiba
@@ -65,6 +71,7 @@ KernelVersion:	3.16
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file shows the current keyboard backlight type,
 		which can be:
+
 			* 1 -> Type 1, supporting modes FN-Z and AUTO
 			* 2 -> Type 2, supporting modes TIMER, ON and OFF
 Users:		KToshiba
@@ -75,10 +82,12 @@ KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the USB Sleep & Charge charging mode, which
 		can be:
+
 			* 0 -> Disabled		(0x00)
 			* 1 -> Alternate	(0x09)
 			* 2 -> Auto		(0x21)
 			* 3 -> Typical		(0x11)
+
 		Note that from kernel 4.1 onwards this file accepts all listed
 		values, kernel 4.0 only supports the first three.
 		Note that this feature only works when connected to power, if
@@ -93,8 +102,10 @@ Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the USB Sleep Functions under battery, and
 		set the level at which point they will be disabled, accepted
 		values can be:
+
 			* 0	-> Disabled
 			* 1-100	-> Battery level to disable sleep functions
+
 		Currently it prints two values, the first one indicates if the
 		feature is enabled or disabled, while the second one shows the
 		current battery level set.
@@ -107,8 +118,10 @@ Date:		January 23, 2015
 KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the USB Rapid Charge state, which can be:
+
 			* 0 -> Disabled
 			* 1 -> Enabled
+
 		Note that toggling this value requires a reboot for changes to
 		take effect.
 Users:		KToshiba
@@ -118,8 +131,10 @@ Date:		January 23, 2015
 KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the Sleep & Music state, which values can be:
+
 			* 0 -> Disabled
 			* 1 -> Enabled
+
 		Note that this feature only works when connected to power, if
 		you want to use it under battery, see the entry named
 		"sleep_functions_on_battery"
@@ -138,6 +153,7 @@ KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the state of the internal fan, valid
 		values are:
+
 			* 0 -> OFF
 			* 1 -> ON
 
@@ -147,8 +163,10 @@ KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the Special Functions (hotkeys) operation
 		mode, valid values are:
+
 			* 0 -> Normal Operation
 			* 1 -> Special Functions
+
 		In the "Normal Operation" mode, the F{1-12} keys are as usual
 		and the hotkeys are accessed via FN-F{1-12}.
 		In the "Special Functions" mode, the F{1-12} keys trigger the
@@ -163,8 +181,10 @@ KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls whether the laptop should turn ON whenever
 		the LID is opened, valid values are:
+
 			* 0 -> Disabled
 			* 1 -> Enabled
+
 		Note that toggling this value requires a reboot for changes to
 		take effect.
 Users:		KToshiba
@@ -174,8 +194,10 @@ Date:		February 12, 2015
 KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the USB 3 functionality, valid values are:
+
 			* 0 -> Disabled (Acts as a regular USB 2)
 			* 1 -> Enabled (Full USB 3 functionality)
+
 		Note that toggling this value requires a reboot for changes to
 		take effect.
 Users:		KToshiba
@@ -188,10 +210,14 @@ Description:	This file controls the Cooling Method feature.
 		Reading this file prints two values, the first is the actual cooling method
 		and the second is the maximum cooling method supported.
 		When the maximum cooling method is ONE, valid values are:
+
 			* 0 -> Maximum Performance
 			* 1 -> Battery Optimized
+
 		When the maximum cooling method is TWO, valid values are:
+
 			* 0 -> Maximum Performance
 			* 1 -> Performance
 			* 2 -> Battery Optimized
+
 Users:		KToshiba
diff --git a/Documentation/ABI/testing/sysfs-driver-toshiba_haps b/Documentation/ABI/testing/sysfs-driver-toshiba_haps
index a662370b4dbf..c938690ce10d 100644
--- a/Documentation/ABI/testing/sysfs-driver-toshiba_haps
+++ b/Documentation/ABI/testing/sysfs-driver-toshiba_haps
@@ -4,10 +4,12 @@ KernelVersion:	3.17
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the built-in accelerometer protection level,
 		valid values are:
+
 			* 0 -> Disabled
 			* 1 -> Low
 			* 2 -> Medium
 			* 3 -> High
+
 		The default potection value is set to 2 (Medium).
 Users:		KToshiba
 
diff --git a/Documentation/ABI/testing/sysfs-driver-wacom b/Documentation/ABI/testing/sysfs-driver-wacom
index afc48fc163b5..16acaa5712ec 100644
--- a/Documentation/ABI/testing/sysfs-driver-wacom
+++ b/Documentation/ABI/testing/sysfs-driver-wacom
@@ -79,7 +79,9 @@ Description:
 		When the Wacom Intuos 4 is connected over Bluetooth, the
 		image has to contain 256 bytes (64x32 px 1 bit colour).
 		The format is also scrambled, like in the USB mode, and it can
-		be summarized by converting 76543210 into GECA6420.
+		be summarized by converting::
+
+					    76543210 into GECA6420.
 					    HGFEDCBA      HFDB7531
 
 What:		/sys/bus/hid/devices/<bus>:<vid>:<pid>.<n>/wacom_remote/unpair_remote
diff --git a/Documentation/ABI/testing/sysfs-firmware-acpi b/Documentation/ABI/testing/sysfs-firmware-acpi
index 613f42a9d5cd..e4afc2538210 100644
--- a/Documentation/ABI/testing/sysfs-firmware-acpi
+++ b/Documentation/ABI/testing/sysfs-firmware-acpi
@@ -12,11 +12,14 @@ Description:
 		image: The image bitmap. Currently a 32-bit BMP.
 		status: 1 if the image is valid, 0 if firmware invalidated it.
 		type: 0 indicates image is in BMP format.
+
+		======== ===================================================
 		version: The version of the BGRT. Currently 1.
 		xoffset: The number of pixels between the left of the screen
 			 and the left edge of the image.
 		yoffset: The number of pixels between the top of the screen
 			 and the top edge of the image.
+		======== ===================================================
 
 What:		/sys/firmware/acpi/hotplug/
 Date:		February 2013
@@ -33,12 +36,14 @@ Description:
 		The following setting is available to user space for each
 		hotplug profile:
 
+		======== =======================================================
 		enabled: If set, the ACPI core will handle notifications of
-			hotplug events associated with the given class of
-			devices and will allow those devices to be ejected with
-			the help of the _EJ0 control method.  Unsetting it
-			effectively disables hotplug for the correspoinding
-			class of devices.
+			 hotplug events associated with the given class of
+			 devices and will allow those devices to be ejected with
+			 the help of the _EJ0 control method.  Unsetting it
+			 effectively disables hotplug for the corresponding
+			 class of devices.
+		======== =======================================================
 
 		The value of the above attribute is an integer number: 1 (set)
 		or 0 (unset).  Attempts to write any other values to it will
@@ -71,86 +76,90 @@ Description:
 		To figure out where all the SCI's are coming from,
 		/sys/firmware/acpi/interrupts contains a file listing
 		every possible source, and the count of how many
-		times it has triggered.
-
-		$ cd /sys/firmware/acpi/interrupts
-		$ grep . *
-		error:	     0
-		ff_gbl_lock:	   0   enable
-		ff_pmtimer:	  0  invalid
-		ff_pwr_btn:	  0   enable
-		ff_rt_clk:	 2  disable
-		ff_slp_btn:	  0  invalid
-		gpe00:	     0	invalid
-		gpe01:	     0	 enable
-		gpe02:	   108	 enable
-		gpe03:	     0	invalid
-		gpe04:	     0	invalid
-		gpe05:	     0	invalid
-		gpe06:	     0	 enable
-		gpe07:	     0	 enable
-		gpe08:	     0	invalid
-		gpe09:	     0	invalid
-		gpe0A:	     0	invalid
-		gpe0B:	     0	invalid
-		gpe0C:	     0	invalid
-		gpe0D:	     0	invalid
-		gpe0E:	     0	invalid
-		gpe0F:	     0	invalid
-		gpe10:	     0	invalid
-		gpe11:	     0	invalid
-		gpe12:	     0	invalid
-		gpe13:	     0	invalid
-		gpe14:	     0	invalid
-		gpe15:	     0	invalid
-		gpe16:	     0	invalid
-		gpe17:	  1084	 enable
-		gpe18:	     0	 enable
-		gpe19:	     0	invalid
-		gpe1A:	     0	invalid
-		gpe1B:	     0	invalid
-		gpe1C:	     0	invalid
-		gpe1D:	     0	invalid
-		gpe1E:	     0	invalid
-		gpe1F:	     0	invalid
-		gpe_all:    1192
-		sci:	1194
-		sci_not:     0	
-
-		sci - The number of times the ACPI SCI
-		has been called and claimed an interrupt.
-
-		sci_not - The number of times the ACPI SCI
-		has been called and NOT claimed an interrupt.
-
-		gpe_all - count of SCI caused by GPEs.
-
-		gpeXX - count for individual GPE source
-
-		ff_gbl_lock - Global Lock
-
-		ff_pmtimer - PM Timer
-
-		ff_pwr_btn - Power Button
-
-		ff_rt_clk - Real Time Clock
-
-		ff_slp_btn - Sleep Button
-
-		error - an interrupt that can't be accounted for above.
-
-		invalid: it's either a GPE or a Fixed Event that
-			doesn't have an event handler.
-
-		disable: the GPE/Fixed Event is valid but disabled.
-
-		enable: the GPE/Fixed Event is valid and enabled.
-
-		Root has permission to clear any of these counters.  Eg.
-		# echo 0 > gpe11
-
-		All counters can be cleared by clearing the total "sci":
-		# echo 0 > sci
+		times it has triggered::
+
+		  $ cd /sys/firmware/acpi/interrupts
+		  $ grep . *
+		  error:	     0
+		  ff_gbl_lock:	   0   enable
+		  ff_pmtimer:	  0  invalid
+		  ff_pwr_btn:	  0   enable
+		  ff_rt_clk:	 2  disable
+		  ff_slp_btn:	  0  invalid
+		  gpe00:	     0	invalid
+		  gpe01:	     0	 enable
+		  gpe02:	   108	 enable
+		  gpe03:	     0	invalid
+		  gpe04:	     0	invalid
+		  gpe05:	     0	invalid
+		  gpe06:	     0	 enable
+		  gpe07:	     0	 enable
+		  gpe08:	     0	invalid
+		  gpe09:	     0	invalid
+		  gpe0A:	     0	invalid
+		  gpe0B:	     0	invalid
+		  gpe0C:	     0	invalid
+		  gpe0D:	     0	invalid
+		  gpe0E:	     0	invalid
+		  gpe0F:	     0	invalid
+		  gpe10:	     0	invalid
+		  gpe11:	     0	invalid
+		  gpe12:	     0	invalid
+		  gpe13:	     0	invalid
+		  gpe14:	     0	invalid
+		  gpe15:	     0	invalid
+		  gpe16:	     0	invalid
+		  gpe17:	  1084	 enable
+		  gpe18:	     0	 enable
+		  gpe19:	     0	invalid
+		  gpe1A:	     0	invalid
+		  gpe1B:	     0	invalid
+		  gpe1C:	     0	invalid
+		  gpe1D:	     0	invalid
+		  gpe1E:	     0	invalid
+		  gpe1F:	     0	invalid
+		  gpe_all:    1192
+		  sci:	1194
+		  sci_not:     0
+
+		===========  ==================================================
+		sci	     The number of times the ACPI SCI
+			     has been called and claimed an interrupt.
+
+		sci_not	     The number of times the ACPI SCI
+			     has been called and NOT claimed an interrupt.
+
+		gpe_all	     count of SCI caused by GPEs.
+
+		gpeXX	     count for individual GPE source
+
+		ff_gbl_lock  Global Lock
+
+		ff_pmtimer   PM Timer
+
+		ff_pwr_btn   Power Button
+
+		ff_rt_clk    Real Time Clock
+
+		ff_slp_btn   Sleep Button
+
+		error	     an interrupt that can't be accounted for above.
+
+		invalid      it's either a GPE or a Fixed Event that
+			     doesn't have an event handler.
+
+		disable	     the GPE/Fixed Event is valid but disabled.
+
+		enable       the GPE/Fixed Event is valid and enabled.
+		===========  ==================================================
+
+		Root has permission to clear any of these counters.  E.g.::
+
+		  # echo 0 > gpe11
+
+		All counters can be cleared by clearing the total "sci"::
+
+		  # echo 0 > sci
 
 		None of these counters has an effect on the function
 		of the system, they are simply statistics.
@@ -165,32 +174,34 @@ Description:
 
 		Let's take power button fixed event for example, please kill acpid
 		and other user space applications so that the machine won't shutdown
-		when pressing the power button.
-		# cat ff_pwr_btn
-		0	enabled
-		# press the power button for 3 times;
-		# cat ff_pwr_btn
-		3	enabled
-		# echo disable > ff_pwr_btn
-		# cat ff_pwr_btn
-		3	disabled
-		# press the power button for 3 times;
-		# cat ff_pwr_btn
-		3	disabled
-		# echo enable > ff_pwr_btn
-		# cat ff_pwr_btn
-		4	enabled
-		/*
-		 * this is because the status bit is set even if the enable bit is cleared,
-		 * and it triggers an ACPI fixed event when the enable bit is set again
-		 */
-		# press the power button for 3 times;
-		# cat ff_pwr_btn
-		7	enabled
-		# echo disable > ff_pwr_btn
-		# press the power button for 3 times;
-		# echo clear > ff_pwr_btn	/* clear the status bit */
-		# echo disable > ff_pwr_btn
-		# cat ff_pwr_btn
-		7	enabled
+		when pressing the power button::
+
+		  # cat ff_pwr_btn
+		  0	enabled
+		  # press the power button for 3 times;
+		  # cat ff_pwr_btn
+		  3	enabled
+		  # echo disable > ff_pwr_btn
+		  # cat ff_pwr_btn
+		  3	disabled
+		  # press the power button for 3 times;
+		  # cat ff_pwr_btn
+		  3	disabled
+		  # echo enable > ff_pwr_btn
+		  # cat ff_pwr_btn
+		  4	enabled
+		  /*
+		   * this is because the status bit is set even if the enable
+		   * bit is cleared, and it triggers an ACPI fixed event when
+		   * the enable bit is set again
+		   */
+		  # press the power button for 3 times;
+		  # cat ff_pwr_btn
+		  7	enabled
+		  # echo disable > ff_pwr_btn
+		  # press the power button for 3 times;
+		  # echo clear > ff_pwr_btn	/* clear the status bit */
+		  # echo disable > ff_pwr_btn
+		  # cat ff_pwr_btn
+		  7	enabled
 
diff --git a/Documentation/ABI/testing/sysfs-firmware-dmi-entries b/Documentation/ABI/testing/sysfs-firmware-dmi-entries
index 210ad44b95a5..fe0289c87768 100644
--- a/Documentation/ABI/testing/sysfs-firmware-dmi-entries
+++ b/Documentation/ABI/testing/sysfs-firmware-dmi-entries
@@ -33,7 +33,7 @@ Description:
 		doesn't matter), they will be represented in sysfs as
 		entries "T-0" through "T-(N-1)":
 
-		Example entry directories:
+		Example entry directories::
 
 			/sys/firmware/dmi/entries/17-0
 			/sys/firmware/dmi/entries/17-1
@@ -50,61 +50,65 @@ Description:
 		Each DMI entry in sysfs has the common header values
 		exported as attributes:
 
-		handle	: The 16bit 'handle' that is assigned to this
+		========  =================================================
+		handle	  The 16-bit 'handle' that is assigned to this
 			  entry by the firmware.  This handle may be
 			  referred to by other entries.
-		length	: The length of the entry, as presented in the
+		length	  The length of the entry, as presented in the
 			  entry itself.  Note that this is _not the
 			  total count of bytes associated with the
-			  entry_.  This value represents the length of
+			  entry.  This value represents the length of
 			  the "formatted" portion of the entry.  This
 			  "formatted" region is sometimes followed by
 			  the "unformatted" region composed of nul
 			  terminated strings, with termination signalled
 			  by a two nul characters in series.
-		raw	: The raw bytes of the entry. This includes the
+		raw	  The raw bytes of the entry. This includes the
 			  "formatted" portion of the entry, the
 			  "unformatted" strings portion of the entry,
 			  and the two terminating nul characters.
-		type	: The type of the entry.  This value is the same
+		type	  The type of the entry.  This value is the same
 			  as found in the directory name.  It indicates
 			  how the rest of the entry should be interpreted.
-		instance: The instance ordinal of the entry for the
+		instance  The instance ordinal of the entry for the
 			  given type.  This value is the same as found
 			  in the parent directory name.
-		position: The ordinal position (zero-based) of the entry
+		position  The ordinal position (zero-based) of the entry
 			  within the entirety of the DMI entry table.
+		========  =================================================
 
-		=== Entry Specialization ===
+		**Entry Specialization**
 
 		Some entry types may have other information available in
 		sysfs.  Not all types are specialized.
 
-		--- Type 15 - System Event Log ---
+		**Type 15 - System Event Log**
 
 		This entry allows the firmware to export a log of
 		events the system has taken.  This information is
 		typically backed by nvram, but the implementation
 		details are abstracted by this table.  This entry's data
-		is exported in the directory:
+		is exported in the directory::
 
-		/sys/firmware/dmi/entries/15-0/system_event_log
+		  /sys/firmware/dmi/entries/15-0/system_event_log
 
 		and has the following attributes (documented in the
 		SMBIOS / DMI specification under "System Event Log (Type 15)":
 
-		area_length
-		header_start_offset
-		data_start_offset
-		access_method
-		status
-		change_token
-		access_method_address
-		header_format
-		per_log_type_descriptor_length
-		type_descriptors_supported_count
+		- area_length
+		- header_start_offset
+		- data_start_offset
+		- access_method
+		- status
+		- change_token
+		- access_method_address
+		- header_format
+		- per_log_type_descriptor_length
+		- type_descriptors_supported_count
 
 		As well, the kernel exports the binary attribute:
 
-		raw_event_log	: The raw binary bits of the event log
+		=============	  ====================================
+		raw_event_log	  The raw binary bits of the event log
 				  as described by the DMI entry.
+		=============	  ====================================
diff --git a/Documentation/ABI/testing/sysfs-firmware-gsmi b/Documentation/ABI/testing/sysfs-firmware-gsmi
index 0faa0aaf4b6a..7a558354c1ee 100644
--- a/Documentation/ABI/testing/sysfs-firmware-gsmi
+++ b/Documentation/ABI/testing/sysfs-firmware-gsmi
@@ -20,7 +20,7 @@ Description:
 
 			This directory has the same layout (and
 			underlying implementation as /sys/firmware/efi/vars.
-			See Documentation/ABI/*/sysfs-firmware-efi-vars
+			See `Documentation/ABI/*/sysfs-firmware-efi-vars`
 			for more information on how to interact with
 			this structure.
 
diff --git a/Documentation/ABI/testing/sysfs-firmware-memmap b/Documentation/ABI/testing/sysfs-firmware-memmap
index eca0d65087dc..1f6f4d3a32c0 100644
--- a/Documentation/ABI/testing/sysfs-firmware-memmap
+++ b/Documentation/ABI/testing/sysfs-firmware-memmap
@@ -20,7 +20,7 @@ Description:
 		the raw memory map to userspace.
 
 		The structure is as follows: Under /sys/firmware/memmap there
-		are subdirectories with the number of the entry as their name:
+		are subdirectories with the number of the entry as their name::
 
 			/sys/firmware/memmap/0
 			/sys/firmware/memmap/1
@@ -34,14 +34,16 @@ Description:
 
 		Each directory contains three files:
 
-		start	: The start address (as hexadecimal number with the
+		========  =====================================================
+		start	  The start address (as hexadecimal number with the
 			  '0x' prefix).
-		end	: The end address, inclusive (regardless whether the
+		end	  The end address, inclusive (regardless whether the
 			  firmware provides inclusive or exclusive ranges).
-		type	: Type of the entry as string. See below for a list of
+		type	  Type of the entry as string. See below for a list of
 			  valid types.
+		========  =====================================================
 
-		So, for example:
+		So, for example::
 
 			/sys/firmware/memmap/0/start
 			/sys/firmware/memmap/0/end
@@ -57,9 +59,8 @@ Description:
 		  - reserved
 
 		Following shell snippet can be used to display that memory
-		map in a human-readable format:
+		map in a human-readable format::
 
-		-------------------- 8< ----------------------------------------
 		  #!/bin/bash
 		  cd /sys/firmware/memmap
 		  for dir in * ; do
@@ -68,4 +69,3 @@ Description:
 		      type=$(cat $dir/type)
 		      printf "%016x-%016x (%s)\n" $start $[ $end +1] "$type"
 		  done
-		-------------------- >8 ----------------------------------------
diff --git a/Documentation/ABI/testing/sysfs-fs-ext4 b/Documentation/ABI/testing/sysfs-fs-ext4
index 78604db56279..99e3d92f8299 100644
--- a/Documentation/ABI/testing/sysfs-fs-ext4
+++ b/Documentation/ABI/testing/sysfs-fs-ext4
@@ -45,8 +45,8 @@ Description:
 		parameter will have their blocks allocated out of a
 		block group specific preallocation pool, so that small
 		files are packed closely together.  Each large file
-		 will have its blocks allocated out of its own unique
-		 preallocation pool.
+		will have its blocks allocated out of its own unique
+		preallocation pool.
 
 What:		/sys/fs/ext4/<disk>/inode_readahead_blks
 Date:		March 2008
diff --git a/Documentation/ABI/testing/sysfs-hypervisor-xen b/Documentation/ABI/testing/sysfs-hypervisor-xen
index 53b7b2ea7515..4dbe0c49b393 100644
--- a/Documentation/ABI/testing/sysfs-hypervisor-xen
+++ b/Documentation/ABI/testing/sysfs-hypervisor-xen
@@ -15,14 +15,17 @@ KernelVersion:	4.3
 Contact:	Boris Ostrovsky <boris.ostrovsky@oracle.com>
 Description:	If running under Xen:
 		Describes mode that Xen's performance-monitoring unit (PMU)
-		uses. Accepted values are
-			"off"  -- PMU is disabled
-			"self" -- The guest can profile itself
-			"hv"   -- The guest can profile itself and, if it is
+		uses. Accepted values are:
+
+			======    ============================================
+			"off"     PMU is disabled
+			"self"    The guest can profile itself
+			"hv"      The guest can profile itself and, if it is
 				  privileged (e.g. dom0), the hypervisor
-			"all" --  The guest can profile itself, the hypervisor
+			"all"     The guest can profile itself, the hypervisor
 				  and all other guests. Only available to
 				  privileged guests.
+			======    ============================================
 
 What:           /sys/hypervisor/pmu/pmu_features
 Date:           August 2015
diff --git a/Documentation/ABI/testing/sysfs-kernel-boot_params b/Documentation/ABI/testing/sysfs-kernel-boot_params
index eca38ce2852d..7f9bda453c4d 100644
--- a/Documentation/ABI/testing/sysfs-kernel-boot_params
+++ b/Documentation/ABI/testing/sysfs-kernel-boot_params
@@ -23,16 +23,17 @@ Description:	The /sys/kernel/boot_params directory contains two
 		representation of setup_data type. "data" file is the binary
 		representation of setup_data payload.
 
-		The whole boot_params directory structure is like below:
-		/sys/kernel/boot_params
-		|__ data
-		|__ setup_data
-		|   |__ 0
-		|   |   |__ data
-		|   |   |__ type
-		|   |__ 1
-		|       |__ data
-		|       |__ type
-		|__ version
+		The whole boot_params directory structure is like below::
+
+		  /sys/kernel/boot_params
+		  |__ data
+		  |__ setup_data
+		  |   |__ 0
+		  |   |   |__ data
+		  |   |   |__ type
+		  |   |__ 1
+		  |       |__ data
+		  |       |__ type
+		  |__ version
 
 Users:		Kexec
diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-hugepages b/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
index fdaa2162fae1..294387e2c7fb 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
@@ -7,9 +7,11 @@ Description:
 		of the hugepages supported by the kernel/CPU combination.
 
 		Under these directories are a number of files:
-			nr_hugepages
-			nr_overcommit_hugepages
-			free_hugepages
-			surplus_hugepages
-			resv_hugepages
+
+			- nr_hugepages
+			- nr_overcommit_hugepages
+			- free_hugepages
+			- surplus_hugepages
+			- resv_hugepages
+
 		See Documentation/admin-guide/mm/hugetlbpage.rst for details.
diff --git a/Documentation/ABI/testing/sysfs-platform-asus-laptop b/Documentation/ABI/testing/sysfs-platform-asus-laptop
index 8b0e8205a6a2..c78d358dbdbe 100644
--- a/Documentation/ABI/testing/sysfs-platform-asus-laptop
+++ b/Documentation/ABI/testing/sysfs-platform-asus-laptop
@@ -4,13 +4,16 @@ KernelVersion:	2.6.20
 Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		This file allows display switching. The value
-		is composed by 4 bits and defined as follow:
-		4321
-		|||`- LCD
-		||`-- CRT
-		|`--- TV
-		`---- DVI
-		Ex: - 0 (0000b) means no display
+		is composed of 4 bits and defined as follows::
+
+		  4321
+		  |||`- LCD
+		  ||`-- CRT
+		  |`--- TV
+		  `---- DVI
+
+		Ex:
+		    - 0 (0000b) means no display
 		    - 3 (0011b) CRT+LCD.
 
 What:		/sys/devices/platform/asus_laptop/gps
@@ -28,8 +31,10 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		Some models like the W1N have a LED display that can be
 		used to display several items of information.
-		To control the LED display, use the following :
+		To control the LED display, use the following::
+
 		    echo 0x0T000DDD > /sys/devices/platform/asus_laptop/
+
 		where T control the 3 letters display, and DDD the 3 digits display.
 		The DDD table can be found in Documentation/admin-guide/laptops/asus-laptop.rst
 
diff --git a/Documentation/ABI/testing/sysfs-platform-asus-wmi b/Documentation/ABI/testing/sysfs-platform-asus-wmi
index 1efac0ddb417..04885738cf15 100644
--- a/Documentation/ABI/testing/sysfs-platform-asus-wmi
+++ b/Documentation/ABI/testing/sysfs-platform-asus-wmi
@@ -5,6 +5,7 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		Change CPU clock configuration (write-only).
 		There are three available clock configuration:
+
 		    * 0 -> Super Performance Mode
 		    * 1 -> High Performance Mode
 		    * 2 -> Power Saving Mode
diff --git a/Documentation/ABI/testing/sysfs-platform-at91 b/Documentation/ABI/testing/sysfs-platform-at91
index 4cc6a865ae66..b146be74b8e0 100644
--- a/Documentation/ABI/testing/sysfs-platform-at91
+++ b/Documentation/ABI/testing/sysfs-platform-at91
@@ -18,8 +18,10 @@ Description:
 		In order to use an extended can_id add the
 		CAN_EFF_FLAG (0x80000000U) to the can_id. Example:
 
-		- standard id 0x7ff:
-		echo 0x7ff      > /sys/class/net/can0/mb0_id
+		- standard id 0x7ff::
 
-		- extended id 0x1fffffff:
-		echo 0x9fffffff > /sys/class/net/can0/mb0_id
+		    echo 0x7ff      > /sys/class/net/can0/mb0_id
+
+		- extended id 0x1fffffff::
+
+		    echo 0x9fffffff > /sys/class/net/can0/mb0_id
diff --git a/Documentation/ABI/testing/sysfs-platform-eeepc-laptop b/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
index 5b026c69587a..70dbe0733cf6 100644
--- a/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
+++ b/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
@@ -4,9 +4,11 @@ KernelVersion:	2.6.26
 Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		This file allows display switching.
+
 		- 1 = LCD
 		- 2 = CRT
 		- 3 = LCD+CRT
+
 		If you run X11, you should use xrandr instead.
 
 What:		/sys/devices/platform/eeepc/camera
@@ -30,16 +32,20 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		Change CPU clock configuration.
 		On the Eee PC 1000H there are three available clock configuration:
+
 		    * 0 -> Super Performance Mode
 		    * 1 -> High Performance Mode
 		    * 2 -> Power Saving Mode
+
 		On Eee PC 701 there is only 2 available clock configurations.
 		Available configuration are listed in available_cpufv file.
 		Reading this file will show the raw hexadecimal value which
-		is defined as follow:
-		| 8 bit | 8 bit |
-		    |       `---- Current mode
-		    `------------ Availables modes
+		is defined as follows::
+
+		  | 8 bit | 8 bit |
+		      |       `---- Current mode
+		      `------------ Available modes
+
 		For example, 0x301 means: mode 1 selected, 3 available modes.
 
 What:		/sys/devices/platform/eeepc/available_cpufv
diff --git a/Documentation/ABI/testing/sysfs-platform-ideapad-laptop b/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
index 1b31be3f996a..fd2ac02bc5bd 100644
--- a/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
+++ b/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
@@ -12,6 +12,7 @@ Contact:	"Maxim Mikityanskiy <maxtram95@gmail.com>"
 Description:
 		Change fan mode
 		There are four available modes:
+
 			* 0 -> Super Silent Mode
 			* 1 -> Standard Mode
 			* 2 -> Dust Cleaning
@@ -32,9 +33,11 @@ KernelVersion:	4.18
 Contact:	"Oleg Keri <ezhi99@gmail.com>"
 Description:
 		Control fn-lock mode.
+
 			* 1 -> Switched On
 			* 0 -> Switched Off
 
-		For example:
-		# echo "0" >	\
-		/sys/bus/pci/devices/0000:00:1f.0/PNP0C09:00/VPC2004:00/fn_lock
+		For example::
+
+		  # echo "0" >	\
+		  /sys/bus/pci/devices/0000:00:1f.0/PNP0C09:00/VPC2004:00/fn_lock
diff --git a/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt b/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
index 8af65059d519..e19144fd5d86 100644
--- a/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
@@ -7,5 +7,6 @@ Description:
 		Thunderbolt controllers to turn on or off when no
 		devices are connected (write-only)
 		There are two available states:
+
 		    * 0 -> Force power disabled
 		    * 1 -> Force power enabled
diff --git a/Documentation/ABI/testing/sysfs-platform-sst-atom b/Documentation/ABI/testing/sysfs-platform-sst-atom
index 0d07c0395660..d5f6e21f0e42 100644
--- a/Documentation/ABI/testing/sysfs-platform-sst-atom
+++ b/Documentation/ABI/testing/sysfs-platform-sst-atom
@@ -5,13 +5,22 @@ Contact:	"Sebastien Guiriec" <sebastien.guiriec@intel.com>
 Description:
 		LPE Firmware version for SST driver on all atom
 		plaforms (BYT/CHT/Merrifield/BSW).
-		If the FW has never been loaded it will display:
+		If the FW has never been loaded it will display::
+
 			"FW not yet loaded"
-		If FW has been loaded it will display:
+
+		If FW has been loaded it will display::
+
 			"v01.aa.bb.cc"
+
 		aa: Major version is reflecting SoC version:
+
+			=== =============
 			0d: BYT FW
 			0b: BSW FW
 			07: Merrifield FW
+			=== =============
+
 		bb: Minor version
+
 		cc: Build version
diff --git a/Documentation/ABI/testing/sysfs-platform-usbip-vudc b/Documentation/ABI/testing/sysfs-platform-usbip-vudc
index 81fcfb454913..53622d3ba27c 100644
--- a/Documentation/ABI/testing/sysfs-platform-usbip-vudc
+++ b/Documentation/ABI/testing/sysfs-platform-usbip-vudc
@@ -16,10 +16,13 @@ Contact:	Krzysztof Opasiak <k.opasiak@samsung.com>
 Description:
 		Current status of the device.
 		Allowed values:
-		1 - Device is available and can be exported
-		2 - Device is currently exported
-		3 - Fatal error occurred during communication
-		  with peer
+
+		==  ==========================================
+		1   Device is available and can be exported
+		2   Device is currently exported
+		3   Fatal error occurred during communication
+		    with peer
+		==  ==========================================
 
 What:		/sys/devices/platform/usbip-vudc.%d/usbip_sockfd
 Date:		April 2016
diff --git a/Documentation/ABI/testing/sysfs-ptp b/Documentation/ABI/testing/sysfs-ptp
index a17f817a9309..2363ad810ddb 100644
--- a/Documentation/ABI/testing/sysfs-ptp
+++ b/Documentation/ABI/testing/sysfs-ptp
@@ -69,7 +69,7 @@ Description:
 		pin offered by the PTP hardware clock. The file name
 		is the hardware dependent pin name. Reading from this
 		file produces two numbers, the assigned function (see
-		the PTP_PF_ enumeration values in linux/ptp_clock.h)
+		the `PTP_PF_` enumeration values in linux/ptp_clock.h)
 		and the channel number. The function and channel
 		assignment may be changed by two writing numbers into
 		the file.
diff --git a/Documentation/ABI/testing/sysfs-uevent b/Documentation/ABI/testing/sysfs-uevent
index aa39f8d7bcdf..d0893dad3f38 100644
--- a/Documentation/ABI/testing/sysfs-uevent
+++ b/Documentation/ABI/testing/sysfs-uevent
@@ -19,7 +19,8 @@ Description:
                 a transaction identifier so it's possible to use the same UUID
                 value for one or more synthetic uevents in which case we
                 logically group these uevents together for any userspace
-                listeners. The UUID value appears in uevent as
+                listeners. The UUID value appears in uevent as:
+
                 "SYNTH_UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" environment
                 variable.
 
@@ -30,18 +31,19 @@ Description:
                 It's possible to define zero or more pairs - each pair is then
                 delimited by a space character ' '. Each pair appears in
                 synthetic uevent as "SYNTH_ARG_KEY=VALUE". That means the KEY
-                name gains "SYNTH_ARG_" prefix to avoid possible collisions
+                name gains `SYNTH_ARG_` prefix to avoid possible collisions
                 with existing variables.
 
-                Example of valid sequence written to the uevent file:
+                Example of valid sequence written to the uevent file::
 
                     add fe4d7c9d-b8c6-4a70-9ef1-3d8a58d18eed A=1 B=abc
 
-                This generates synthetic uevent including these variables:
+                This generates synthetic uevent including these variables::
 
                     ACTION=add
                     SYNTH_ARG_A=1
                     SYNTH_ARG_B=abc
                     SYNTH_UUID=fe4d7c9d-b8c6-4a70-9ef1-3d8a58d18eed
+
 Users:
                 udev, userspace tools generating synthetic uevents
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 14:32:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 14:32:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13619.34371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmUy-0000dU-MO; Wed, 28 Oct 2020 14:32:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13619.34371; Wed, 28 Oct 2020 14:32:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmUy-0000dF-Fq; Wed, 28 Oct 2020 14:32:24 +0000
Received: by outflank-mailman (input) for mailman id 13619;
 Wed, 28 Oct 2020 14:23:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hOAa=ED=kernel.org=mchehab@srs-us1.protection.inumbo.net>)
 id 1kXmMW-0007tj-Sn
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:23:40 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69cc9ade-8a6a-470a-8e82-53e12e89e1ca;
 Wed, 28 Oct 2020 14:23:36 +0000 (UTC)
Received: from mail.kernel.org (ip5f5ad5b2.dynamic.kabel-deutschland.de
 [95.90.213.178])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 61E9B247B5;
 Wed, 28 Oct 2020 14:23:35 +0000 (UTC)
Received: from mchehab by mail.kernel.org with local (Exim 4.94)
 (envelope-from <mchehab@kernel.org>)
 id 1kXmMO-003hla-Vj; Wed, 28 Oct 2020 15:23:33 +0100
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=hOAa=ED=kernel.org=mchehab@srs-us1.protection.inumbo.net>)
	id 1kXmMW-0007tj-Sn
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:23:40 +0000
X-Inumbo-ID: 69cc9ade-8a6a-470a-8e82-53e12e89e1ca
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 69cc9ade-8a6a-470a-8e82-53e12e89e1ca;
	Wed, 28 Oct 2020 14:23:36 +0000 (UTC)
Received: from mail.kernel.org (ip5f5ad5b2.dynamic.kabel-deutschland.de [95.90.213.178])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 61E9B247B5;
	Wed, 28 Oct 2020 14:23:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603895015;
	bh=AY3eByGhquEHF8ak94O7AQNwf8v9YM/HQQbwxMSoOGQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=GGSCA6mw+dV85UWdZoH33VM61wSREr4hucibHA5ymcPoBiiaxDJw80lsQsv5ycNQL
	 Z180NLQJp2zLdtNrQs+B106zxs9nDK3LHD51CAqQLXj2oZIeuozGmp3+YWaIoCKdJG
	 krAuUvJVl90KgKDqhyCYoEVrIlQoQ+mr1Em3GIU8=
Received: from mchehab by mail.kernel.org with local (Exim 4.94)
	(envelope-from <mchehab@kernel.org>)
	id 1kXmMO-003hla-Vj; Wed, 28 Oct 2020 15:23:33 +0100
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Mauro Carvalho Chehab" <mchehab+huawei@kernel.org>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Thompson <daniel.thompson@linaro.org>,
	Jarkko Sakkinen <jarkko@kernel.org>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Jerry Snitselaar <jsnitsel@redhat.com>,
	Jingoo Han <jingoohan1@gmail.com>,
	Johannes Berg <johannes@sipsolutions.net>,
	Juergen Gross <jgross@suse.com>,
	Lee Jones <lee.jones@linaro.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Mimi Zohar <zohar@linux.ibm.com>,
	Paul Mackerras <paulus@samba.org>,
	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org,
	linux-wireless@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH 19/33] docs: ABI: stable: make files ReST compatible
Date: Wed, 28 Oct 2020 15:23:17 +0100
Message-Id: <7186f5e2fa335088d41835e1cff776322219b8d2.1603893146.git.mchehab+huawei@kernel.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <cover.1603893146.git.mchehab+huawei@kernel.org>
References: <cover.1603893146.git.mchehab+huawei@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: Mauro Carvalho Chehab <mchehab@kernel.org>

From: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>

Several entries in the stable ABI files won't parse if we pass
them directly to the ReST output.

Adjust them, in order to allow adding their contents as-is to
the stable ABI book.

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
---
 Documentation/ABI/stable/firewire-cdev        |  4 +
 Documentation/ABI/stable/sysfs-acpi-pmprofile | 22 +++--
 Documentation/ABI/stable/sysfs-bus-firewire   |  3 +
 Documentation/ABI/stable/sysfs-bus-nvmem      | 19 ++--
 Documentation/ABI/stable/sysfs-bus-usb        |  6 +-
 .../ABI/stable/sysfs-class-backlight          |  1 +
 .../ABI/stable/sysfs-class-infiniband         | 93 +++++++++++++------
 Documentation/ABI/stable/sysfs-class-rfkill   | 13 ++-
 Documentation/ABI/stable/sysfs-class-tpm      | 90 +++++++++---------
 Documentation/ABI/stable/sysfs-devices        |  5 +-
 Documentation/ABI/stable/sysfs-driver-ib_srp  |  1 +
 .../ABI/stable/sysfs-firmware-efi-vars        |  4 +
 .../ABI/stable/sysfs-firmware-opal-dump       |  5 +
 .../ABI/stable/sysfs-firmware-opal-elog       |  2 +
 Documentation/ABI/stable/sysfs-hypervisor-xen |  3 +
 Documentation/ABI/stable/vdso                 |  5 +-
 16 files changed, 176 insertions(+), 100 deletions(-)

diff --git a/Documentation/ABI/stable/firewire-cdev b/Documentation/ABI/stable/firewire-cdev
index f72ed653878a..c9e8ff026154 100644
--- a/Documentation/ABI/stable/firewire-cdev
+++ b/Documentation/ABI/stable/firewire-cdev
@@ -14,12 +14,14 @@ Description:
 		Each /dev/fw* is associated with one IEEE 1394 node, which can
 		be remote or local nodes.  Operations on a /dev/fw* file have
 		different scope:
+
 		  - The 1394 node which is associated with the file:
 			  - Asynchronous request transmission
 			  - Get the Configuration ROM
 			  - Query node ID
 			  - Query maximum speed of the path between this node
 			    and local node
+
 		  - The 1394 bus (i.e. "card") to which the node is attached to:
 			  - Isochronous stream transmission and reception
 			  - Asynchronous stream transmission and reception
@@ -31,6 +33,7 @@ Description:
 			    manager
 			  - Query cycle time
 			  - Bus reset initiation, bus reset event reception
+
 		  - All 1394 buses:
 			  - Allocation of IEEE 1212 address ranges on the local
 			    link layers, reception of inbound requests to such
@@ -43,6 +46,7 @@ Description:
 		userland implement different access permission models, some
 		operations are restricted to /dev/fw* files that are associated
 		with a local node:
+
 			  - Addition of descriptors or directories to the local
 			    nodes' Configuration ROM
 			  - PHY packet transmission and reception
diff --git a/Documentation/ABI/stable/sysfs-acpi-pmprofile b/Documentation/ABI/stable/sysfs-acpi-pmprofile
index 964c7a8afb26..fd97d22b677f 100644
--- a/Documentation/ABI/stable/sysfs-acpi-pmprofile
+++ b/Documentation/ABI/stable/sysfs-acpi-pmprofile
@@ -6,17 +6,21 @@ Description: 	The ACPI pm_profile sysfs interface exports the platform
 		power management (and performance) requirement expectations
 		as provided by BIOS. The integer value is directly passed as
 		retrieved from the FADT ACPI table.
-Values:         For possible values see ACPI specification:
+
+Values:	        For possible values see ACPI specification:
 		5.2.9 Fixed ACPI Description Table (FADT)
 		Field: Preferred_PM_Profile
 
 		Currently these values are defined by spec:
-		0 Unspecified
-		1 Desktop
-		2 Mobile
-		3 Workstation
-		4 Enterprise Server
-		5 SOHO Server
-		6 Appliance PC
-		7 Performance Server
+
+		== =================
+		0  Unspecified
+		1  Desktop
+		2  Mobile
+		3  Workstation
+		4  Enterprise Server
+		5  SOHO Server
+		6  Appliance PC
+		7  Performance Server
 		>7 Reserved
+		== =================
diff --git a/Documentation/ABI/stable/sysfs-bus-firewire b/Documentation/ABI/stable/sysfs-bus-firewire
index 41e5a0cd1e3e..9ac9eddb82ef 100644
--- a/Documentation/ABI/stable/sysfs-bus-firewire
+++ b/Documentation/ABI/stable/sysfs-bus-firewire
@@ -47,6 +47,7 @@ Description:
 		IEEE 1394 node device attribute.
 		Read-only and immutable.
 Values:		1: The sysfs entry represents a local node (a controller card).
+
 		0: The sysfs entry represents a remote node.
 
 
@@ -125,7 +126,9 @@ Description:
 		Read-only attribute, immutable during the target's lifetime.
 		Format, as exposed by firewire-sbp2 since 2.6.22, May 2007:
 		Colon-separated hexadecimal string representations of
+
 			u64 EUI-64 : u24 directory_ID : u16 LUN
+
 		without 0x prefixes, without whitespace.  The former sbp2 driver
 		(removed in 2.6.37 after being superseded by firewire-sbp2) used
 		a somewhat shorter format which was not as close to SAM.
diff --git a/Documentation/ABI/stable/sysfs-bus-nvmem b/Documentation/ABI/stable/sysfs-bus-nvmem
index 9ffba8576f7b..c399323f37de 100644
--- a/Documentation/ABI/stable/sysfs-bus-nvmem
+++ b/Documentation/ABI/stable/sysfs-bus-nvmem
@@ -9,13 +9,14 @@ Description:
 		Note: This file is only present if CONFIG_NVMEM_SYSFS
 		is enabled
 
-		ex:
-		hexdump /sys/bus/nvmem/devices/qfprom0/nvmem
+		ex::
 
-		0000000 0000 0000 0000 0000 0000 0000 0000 0000
-		*
-		00000a0 db10 2240 0000 e000 0c00 0c00 0000 0c00
-		0000000 0000 0000 0000 0000 0000 0000 0000 0000
-		...
-		*
-		0001000
+		  hexdump /sys/bus/nvmem/devices/qfprom0/nvmem
+
+		  0000000 0000 0000 0000 0000 0000 0000 0000 0000
+		  *
+		  00000a0 db10 2240 0000 e000 0c00 0c00 0000 0c00
+		  0000000 0000 0000 0000 0000 0000 0000 0000 0000
+		  ...
+		  *
+		  0001000
diff --git a/Documentation/ABI/stable/sysfs-bus-usb b/Documentation/ABI/stable/sysfs-bus-usb
index b832eeff9999..cad4bc232520 100644
--- a/Documentation/ABI/stable/sysfs-bus-usb
+++ b/Documentation/ABI/stable/sysfs-bus-usb
@@ -50,8 +50,10 @@ Description:
 
 		Tools can use this file and the connected_duration file to
 		compute the percentage of time that a device has been active.
-		For example,
-		echo $((100 * `cat active_duration` / `cat connected_duration`))
+		For example::
+
+		  echo $((100 * `cat active_duration` / `cat connected_duration`))
+
 		will give an integer percentage.  Note that this does not
 		account for counter wrap.
 Users:
diff --git a/Documentation/ABI/stable/sysfs-class-backlight b/Documentation/ABI/stable/sysfs-class-backlight
index 70302f370e7e..023fb52645f8 100644
--- a/Documentation/ABI/stable/sysfs-class-backlight
+++ b/Documentation/ABI/stable/sysfs-class-backlight
@@ -4,6 +4,7 @@ KernelVersion:	2.6.12
 Contact:	Richard Purdie <rpurdie@rpsys.net>
 Description:
 		Control BACKLIGHT power, values are FB_BLANK_* from fb.h
+
 		 - FB_BLANK_UNBLANK (0)   : power on.
 		 - FB_BLANK_POWERDOWN (4) : power off
 Users:		HAL
diff --git a/Documentation/ABI/stable/sysfs-class-infiniband b/Documentation/ABI/stable/sysfs-class-infiniband
index 87b11f91b425..348c4ac803ad 100644
--- a/Documentation/ABI/stable/sysfs-class-infiniband
+++ b/Documentation/ABI/stable/sysfs-class-infiniband
@@ -8,12 +8,14 @@ Date:		Apr, 2005
 KernelVersion:	v2.6.12
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ===========================================
 		node_type:	(RO) Node type (CA, RNIC, usNIC, usNIC UDP,
 				switch or router)
 
 		node_guid:	(RO) Node GUID
 
 		sys_image_guid:	(RO) System image GUID
+		=============== ===========================================
 
 
 What:		/sys/class/infiniband/<device>/node_desc
@@ -47,6 +49,7 @@ KernelVersion:	v2.6.12
 Contact:	linux-rdma@vger.kernel.org
 Description:
 
+		=============== ===============================================
 		lid:		(RO) Port LID
 
 		rate:		(RO) Port data rate (active width * active
@@ -66,8 +69,9 @@ Description:
 
 		cap_mask:	(RO) Port capability mask. 2 bits here are
 				settable- IsCommunicationManagementSupported
-				(set when CM module is loaded) and IsSM (set via
-				open of issmN file).
+				(set when CM module is loaded) and IsSM (set
+				via open of issmN file).
+		=============== ===============================================
 
 
 What:		/sys/class/infiniband/<device>/ports/<port-num>/link_layer
@@ -103,8 +107,7 @@ Date:		Apr, 2005
 KernelVersion:	v2.6.12
 Contact:	linux-rdma@vger.kernel.org
 Description:
-		Errors info:
-		-----------
+		**Errors info**:
 
 		symbol_error: (RO) Total number of minor link errors detected on
 		one or more physical lanes.
@@ -142,8 +145,7 @@ Description:
 		intervention. It can also indicate hardware issues or extremely
 		poor link signal integrity
 
-		Data info:
-		---------
+		**Data info**:
 
 		port_xmit_data: (RO) Total number of data octets, divided by 4
 		(lanes), transmitted on all VLs. This is 64 bit counter
@@ -176,8 +178,7 @@ Description:
 		transmitted on all VLs from the port. This may include multicast
 		packets with errors.
 
-		Misc info:
-		---------
+		**Misc info**:
 
 		port_xmit_discards: (RO) Total number of outbound packets
 		discarded by the port because the port is down or congested.
@@ -244,9 +245,11 @@ Description:
 		two umad devices and two issm devices, while a switch will have
 		one device of each type (for switch port 0).
 
+		======= =====================================
 		ibdev:	(RO) Show Infiniband (IB) device name
 
 		port:	(RO) Display port number
+		======= =====================================
 
 
 What:		/sys/class/infiniband_mad/abi_version
@@ -264,10 +267,12 @@ Date:		Sept, 2005
 KernelVersion:	v2.6.14
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ===========================================
 		ibdev:		(RO) Display Infiniband (IB) device name
 
 		abi_version:	(RO) Show ABI version of IB device specific
 				interfaces.
+		=============== ===========================================
 
 
 What:		/sys/class/infiniband_verbs/abi_version
@@ -289,12 +294,14 @@ Date:		Apr, 2005
 KernelVersion:	v2.6.12
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ================================================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Host Channel Adapter type: MT23108, MT25208
 				(MT23108 compat mode), MT25208 or MT25204
 
 		board_id:	(RO) Manufacturing board ID
+		=============== ================================================
 
 
 sysfs interface for Mellanox ConnectX HCA IB driver (mlx4)
@@ -307,11 +314,13 @@ Date:		Sep, 2007
 KernelVersion:	v2.6.24
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ===============================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Host channel adapter type
 
 		board_id:	(RO) Manufacturing board ID
+		=============== ===============================
 
 
 What:		/sys/class/infiniband/mlx4_X/iov/ports/<port-num>/gids/<n>
@@ -337,6 +346,7 @@ Description:
 		example, ports/1/pkeys/10 contains the value at index 10 in port
 		1's P_Key table.
 
+		======================= ==========================================
 		gids/<n>:		(RO) The physical port gids n = 0..127
 
 		admin_guids/<n>:	(RW) Allows examining or changing the
@@ -365,6 +375,7 @@ Description:
 					guest, whenever it uses its pkey index
 					1, will actually be using the real pkey
 					index 10.
+		======================= ==========================================
 
 
 What:		/sys/class/infiniband/mlx4_X/iov/<pci-slot-num>/ports/<m>/smi_enabled
@@ -376,12 +387,14 @@ Description:
 		Enabling QP0 on VFs for selected VF/port. By default, no VFs are
 		enabled for QP0 operation.
 
-		smi_enabled:	(RO) Indicates whether smi is currently enabled
-				for the indicated VF/port
+		================= ==== ===========================================
+		smi_enabled:	  (RO) Indicates whether smi is currently enabled
+				       for the indicated VF/port
 
-		enable_smi_admin:(RW) Used by the admin to request that smi
-				capability be enabled or disabled for the
-				indicated VF/port. 0 = disable, 1 = enable.
+		enable_smi_admin: (RW) Used by the admin to request that smi
+				       capability be enabled or disabled for the
+				       indicated VF/port. 0 = disable, 1 = enable.
+		================= ==== ===========================================
 
 		The requested enablement will occur at the next reset of the VF
 		(e.g. driver restart on the VM which owns the VF).
@@ -398,6 +411,7 @@ KernelVersion:	v2.6.35
 Contact:	linux-rdma@vger.kernel.org
 Description:
 
+		=============== =============================================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Driver short name. Should normally match
@@ -406,6 +420,7 @@ Description:
 
 		board_id:	(RO) Manufacturing board id. (Vendor + device
 				information)
+		=============== =============================================
 
 
 sysfs interface for Intel IB driver qib
@@ -426,6 +441,7 @@ Date:		May, 2010
 KernelVersion:	v2.6.35
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ======================================================
 		version:	(RO) Display version information of installed software
 				and drivers.
 
@@ -452,6 +468,7 @@ Description:
 		chip_reset:	(WO) Reset the chip if possible by writing
 				"reset" to this file. Only allowed if no user
 				contexts are open that use chip resources.
+		=============== ======================================================
 
 
 What:		/sys/class/infiniband/qibX/ports/N/sl2vl/[0-15]
@@ -471,14 +488,16 @@ Contact:	linux-rdma@vger.kernel.org
 Description:
 		Per-port congestion control. Both are binary attributes.
 
-		cc_table_bin:	(RO) Congestion control table size followed by
+		=============== ================================================
+		cc_table_bin	(RO) Congestion control table size followed by
 				table entries.
 
-		cc_settings_bin:(RO) Congestion settings: port control, control
+		cc_settings_bin (RO) Congestion settings: port control, control
 				map and an array of 16 entries for the
 				congestion entries - increase, timer, event log
 				trigger threshold and the minimum injection rate
 				delay.
+		=============== ================================================
 
 What:		/sys/class/infiniband/qibX/ports/N/linkstate/loopback
 What:		/sys/class/infiniband/qibX/ports/N/linkstate/led_override
@@ -491,6 +510,7 @@ Contact:	linux-rdma@vger.kernel.org
 Description:
 		[to be documented]
 
+		=============== ===============================================
 		loopback:	(WO)
 		led_override:	(WO)
 		hrtbt_enable:	(RW)
@@ -501,6 +521,7 @@ Description:
 				errors. Possible states are- "Initted",
 				"Present", "IB_link_up", "IB_configured" or
 				"Fatal_Hardware_Error".
+		=============== ===============================================
 
 What:		/sys/class/infiniband/qibX/ports/N/diag_counters/rc_resends
 What:		/sys/class/infiniband/qibX/ports/N/diag_counters/seq_naks
@@ -549,6 +570,7 @@ Contact:	Christian Benvenuti <benve@cisco.com>,
 		linux-rdma@vger.kernel.org
 Description:
 
+		=============== ===============================================
 		board_id:	(RO) Manufacturing board id
 
 		config:		(RO) Report the configuration for this PF
@@ -561,6 +583,7 @@ Description:
 
 		iface:		(RO) Shows which network interface this usNIC
 				entry is associated to (visible with ifconfig).
+		=============== ===============================================
 
 What:		/sys/class/infiniband/usnic_X/qpn/summary
 What:		/sys/class/infiniband/usnic_X/qpn/context
@@ -605,6 +628,7 @@ Date:		May, 2016
 KernelVersion:	v4.6
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== =============================================
 		hw_rev:		(RO) Hardware revision number
 
 		board_id:	(RO) Manufacturing board id
@@ -623,6 +647,7 @@ Description:
 				available.
 
 		tempsense:	(RO) Thermal sense information
+		=============== =============================================
 
 
 What:		/sys/class/infiniband/hfi1_X/ports/N/CCMgtA/cc_settings_bin
@@ -634,19 +659,21 @@ Contact:	linux-rdma@vger.kernel.org
 Description:
 		Per-port congestion control.
 
-		cc_table_bin:	(RO) CCA tables used by PSM2 Congestion control
+		=============== ================================================
+		cc_table_bin	(RO) CCA tables used by PSM2 Congestion control
 				table size followed by table entries. Binary
 				attribute.
 
-		cc_settings_bin:(RO) Congestion settings: port control, control
+		cc_settings_bin (RO) Congestion settings: port control, control
 				map and an array of 16 entries for the
 				congestion entries - increase, timer, event log
 				trigger threshold and the minimum injection rate
 				delay. Binary attribute.
 
-		cc_prescan:	(RW) enable prescanning for faster BECN
+		cc_prescan	(RW) enable prescanning for faster BECN
 				response. Write "on" to enable and "off" to
 				disable.
+		=============== ================================================
 
 What:		/sys/class/infiniband/hfi1_X/ports/N/sc2vl/[0-31]
 What:		/sys/class/infiniband/hfi1_X/ports/N/sl2sc/[0-31]
@@ -655,11 +682,13 @@ Date:		May, 2016
 KernelVersion:	v4.6
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ===================================================
 		sc2vl/:		(RO) 32 files (0 - 31) used to translate sl->vl
 
 		sl2sc/:		(RO) 32 files (0 - 31) used to translate sl->sc
 
 		vl2mtu/:	(RO) 16 files (0 - 15) used to determine MTU for vl
+		=============== ===================================================
 
 
 What:		/sys/class/infiniband/hfi1_X/sdma_N/cpu_list
@@ -670,26 +699,28 @@ Contact:	linux-rdma@vger.kernel.org
 Description:
 		sdma<N>/ contains one directory per sdma engine (0 - 15)
 
+		=============== ==============================================
 		cpu_list:	(RW) List of cpus for user-process to sdma
 				engine assignment.
 
 		vl:		(RO) Displays the virtual lane (vl) the sdma
 				engine maps to.
+		=============== ==============================================
 
 		This interface gives the user control on the affinity settings
 		for the device. As an example, to set an sdma engine irq
 		affinity and thread affinity of a user processes to use the
 		sdma engine, which is "near" in terms of NUMA configuration, or
-		physical cpu location, the user will do:
+		physical cpu location, the user will do::
 
-		echo "3" > /proc/irq/<N>/smp_affinity_list
-		echo "4-7" > /sys/devices/.../sdma3/cpu_list
-		cat /sys/devices/.../sdma3/vl
-		0
-		echo "8" > /proc/irq/<M>/smp_affinity_list
-		echo "9-12" > /sys/devices/.../sdma4/cpu_list
-		cat /sys/devices/.../sdma4/vl
-		1
+		  echo "3" > /proc/irq/<N>/smp_affinity_list
+		  echo "4-7" > /sys/devices/.../sdma3/cpu_list
+		  cat /sys/devices/.../sdma3/vl
+		  0
+		  echo "8" > /proc/irq/<M>/smp_affinity_list
+		  echo "9-12" > /sys/devices/.../sdma4/cpu_list
+		  cat /sys/devices/.../sdma4/vl
+		  1
 
 		to make sure that when a process runs on cpus 4,5,6, or 7, and
 		uses vl=0, then sdma engine 3 is selected by the driver, and
@@ -711,11 +742,13 @@ Date:		Jan, 2016
 KernelVersion:	v4.10
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ==== ========================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Show HCA type (I40IW)
 
 		board_id:	(RO) I40IW board ID
+		=============== ==== ========================
 
 
 sysfs interface for QLogic qedr NIC Driver
@@ -728,9 +761,11 @@ KernelVersion:	v4.10
 Contact:	linux-rdma@vger.kernel.org
 Description:
 
+		=============== ==== ========================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Display HCA type
+		=============== ==== ========================
 
 
 sysfs interface for VMware Paravirtual RDMA driver
@@ -744,11 +779,13 @@ KernelVersion:	v4.10
 Contact:	linux-rdma@vger.kernel.org
 Description:
 
+		=============== ==== =====================================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Host channel adapter type
 
 		board_id:	(RO) Display PVRDMA manufacturing board ID
+		=============== ==== =====================================
 
 
 sysfs interface for Broadcom NetXtreme-E RoCE driver
@@ -760,6 +797,8 @@ Date:		Feb, 2017
 KernelVersion:	v4.11
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ==== =========================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Host channel adapter type
+		=============== ==== =========================
diff --git a/Documentation/ABI/stable/sysfs-class-rfkill b/Documentation/ABI/stable/sysfs-class-rfkill
index 5b154f922643..037979f7dc4b 100644
--- a/Documentation/ABI/stable/sysfs-class-rfkill
+++ b/Documentation/ABI/stable/sysfs-class-rfkill
@@ -2,7 +2,7 @@ rfkill - radio frequency (RF) connector kill switch support
 
 For details to this subsystem look at Documentation/driver-api/rfkill.rst.
 
-For the deprecated /sys/class/rfkill/*/claim knobs of this interface look in
+For the deprecated ``/sys/class/rfkill/*/claim`` knobs of this interface look in
 Documentation/ABI/removed/sysfs-class-rfkill.
 
 What: 		/sys/class/rfkill
@@ -36,9 +36,10 @@ KernelVersion	v2.6.22
 Contact:	linux-wireless@vger.kernel.org
 Description: 	Whether the soft blocked state is initialised from non-volatile
 		storage at startup.
-Values: 	A numeric value.
-		0: false
-		1: true
+Values: 	A numeric value:
+
+		- 0: false
+		- 1: true
 
 
 What:		/sys/class/rfkill/rfkill[0-9]+/state
@@ -54,6 +55,7 @@ Description: 	Current state of the transmitter.
 		through this interface. There will likely be another attempt to
 		remove it in the future.
 Values: 	A numeric value.
+
 		0: RFKILL_STATE_SOFT_BLOCKED
 			transmitter is turned off by software
 		1: RFKILL_STATE_UNBLOCKED
@@ -69,6 +71,7 @@ KernelVersion	v2.6.34
 Contact:	linux-wireless@vger.kernel.org
 Description: 	Current hardblock state. This file is read only.
 Values: 	A numeric value.
+
 		0: inactive
 			The transmitter is (potentially) active.
 		1: active
@@ -82,7 +85,9 @@ KernelVersion	v2.6.34
 Contact:	linux-wireless@vger.kernel.org
 Description:	Current softblock state. This file is read and write.
 Values: 	A numeric value.
+
 		0: inactive
 			The transmitter is (potentially) active.
+
 		1: active
 			The transmitter is turned off by software.
diff --git a/Documentation/ABI/stable/sysfs-class-tpm b/Documentation/ABI/stable/sysfs-class-tpm
index 58e94e7d55be..ec464cf7861a 100644
--- a/Documentation/ABI/stable/sysfs-class-tpm
+++ b/Documentation/ABI/stable/sysfs-class-tpm
@@ -32,11 +32,11 @@ KernelVersion:	2.6.12
 Contact:	linux-integrity@vger.kernel.org
 Description:	The "caps" property contains TPM manufacturer and version info.
 
-		Example output:
+		Example output::
 
-		Manufacturer: 0x53544d20
-		TCG version: 1.2
-		Firmware version: 8.16
+		  Manufacturer: 0x53544d20
+		  TCG version: 1.2
+		  Firmware version: 8.16
 
 		Manufacturer is a hex dump of the 4 byte manufacturer info
 		space in a TPM. TCG version shows the TCG TPM spec level that
@@ -54,9 +54,9 @@ Description:	The "durations" property shows the 3 vendor-specific values
 		any longer than necessary before starting to poll for a
 		result.
 
-		Example output:
+		Example output::
 
-		3015000 4508000 180995000 [original]
+		  3015000 4508000 180995000 [original]
 
 		Here the short, medium and long durations are displayed in
 		usecs. "[original]" indicates that the values are displayed
@@ -92,14 +92,14 @@ Description:	The "pcrs" property will dump the current value of all Platform
 		values may be constantly changing, the output is only valid
 		for a snapshot in time.
 
-		Example output:
+		Example output::
 
-		PCR-00: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		PCR-01: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		PCR-02: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		PCR-03: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		PCR-04: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		...
+		  PCR-00: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  PCR-01: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  PCR-02: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  PCR-03: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  PCR-04: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  ...
 
 		The number of PCRs and hex bytes needed to represent a PCR
 		value will vary depending on TPM chip version. For TPM 1.1 and
@@ -119,44 +119,44 @@ Description:	The "pubek" property will return the TPM's public endorsement
 		ated at TPM manufacture time and exists for the life of the
 		chip.
 
-		Example output:
+		Example output::
 
-		Algorithm: 00 00 00 01
-		Encscheme: 00 03
-		Sigscheme: 00 01
-		Parameters: 00 00 08 00 00 00 00 02 00 00 00 00
-		Modulus length: 256
-		Modulus:
-		B4 76 41 82 C9 20 2C 10 18 40 BC 8B E5 44 4C 6C
-		3A B2 92 0C A4 9B 2A 83 EB 5C 12 85 04 48 A0 B6
-		1E E4 81 84 CE B2 F2 45 1C F0 85 99 61 02 4D EB
-		86 C4 F7 F3 29 60 52 93 6B B2 E5 AB 8B A9 09 E3
-		D7 0E 7D CA 41 BF 43 07 65 86 3C 8C 13 7A D0 8B
-		82 5E 96 0B F8 1F 5F 34 06 DA A2 52 C1 A9 D5 26
-		0F F4 04 4B D9 3F 2D F2 AC 2F 74 64 1F 8B CD 3E
-		1E 30 38 6C 70 63 69 AB E2 50 DF 49 05 2E E1 8D
-		6F 78 44 DA 57 43 69 EE 76 6C 38 8A E9 8E A3 F0
-		A7 1F 3C A8 D0 12 15 3E CA 0E BD FA 24 CD 33 C6
-		47 AE A4 18 83 8E 22 39 75 93 86 E6 FD 66 48 B6
-		10 AD 94 14 65 F9 6A 17 78 BD 16 53 84 30 BF 70
-		E0 DC 65 FD 3C C6 B0 1E BF B9 C1 B5 6C EF B1 3A
-		F8 28 05 83 62 26 11 DC B4 6B 5A 97 FF 32 26 B6
-		F7 02 71 CF 15 AE 16 DD D1 C1 8E A8 CF 9B 50 7B
-		C3 91 FF 44 1E CF 7C 39 FE 17 77 21 20 BD CE 9B
+		  Algorithm: 00 00 00 01
+		  Encscheme: 00 03
+		  Sigscheme: 00 01
+		  Parameters: 00 00 08 00 00 00 00 02 00 00 00 00
+		  Modulus length: 256
+		  Modulus:
+		  B4 76 41 82 C9 20 2C 10 18 40 BC 8B E5 44 4C 6C
+		  3A B2 92 0C A4 9B 2A 83 EB 5C 12 85 04 48 A0 B6
+		  1E E4 81 84 CE B2 F2 45 1C F0 85 99 61 02 4D EB
+		  86 C4 F7 F3 29 60 52 93 6B B2 E5 AB 8B A9 09 E3
+		  D7 0E 7D CA 41 BF 43 07 65 86 3C 8C 13 7A D0 8B
+		  82 5E 96 0B F8 1F 5F 34 06 DA A2 52 C1 A9 D5 26
+		  0F F4 04 4B D9 3F 2D F2 AC 2F 74 64 1F 8B CD 3E
+		  1E 30 38 6C 70 63 69 AB E2 50 DF 49 05 2E E1 8D
+		  6F 78 44 DA 57 43 69 EE 76 6C 38 8A E9 8E A3 F0
+		  A7 1F 3C A8 D0 12 15 3E CA 0E BD FA 24 CD 33 C6
+		  47 AE A4 18 83 8E 22 39 75 93 86 E6 FD 66 48 B6
+		  10 AD 94 14 65 F9 6A 17 78 BD 16 53 84 30 BF 70
+		  E0 DC 65 FD 3C C6 B0 1E BF B9 C1 B5 6C EF B1 3A
+		  F8 28 05 83 62 26 11 DC B4 6B 5A 97 FF 32 26 B6
+		  F7 02 71 CF 15 AE 16 DD D1 C1 8E A8 CF 9B 50 7B
+		  C3 91 FF 44 1E CF 7C 39 FE 17 77 21 20 BD CE 9B
 
-		Possible values:
+		Possible values::
 
-		Algorithm:	TPM_ALG_RSA			(1)
-		Encscheme:	TPM_ES_RSAESPKCSv15		(2)
+		  Algorithm:	TPM_ALG_RSA			(1)
+		  Encscheme:	TPM_ES_RSAESPKCSv15		(2)
 				TPM_ES_RSAESOAEP_SHA1_MGF1	(3)
-		Sigscheme:	TPM_SS_NONE			(1)
-		Parameters, a byte string of 3 u32 values:
+		  Sigscheme:	TPM_SS_NONE			(1)
+		  Parameters, a byte string of 3 u32 values:
 			Key Length (bits):	00 00 08 00	(2048)
 			Num primes:		00 00 00 02	(2)
 			Exponent Size:		00 00 00 00	(0 means the
 								 default exp)
-		Modulus Length: 256 (bytes)
-		Modulus:	The 256 byte Endorsement Key modulus
+		  Modulus Length: 256 (bytes)
+		  Modulus:	The 256 byte Endorsement Key modulus
 
 What:		/sys/class/tpm/tpmX/device/temp_deactivated
 Date:		April 2006
@@ -176,9 +176,9 @@ Description:	The "timeouts" property shows the 4 vendor-specific values
 		timeouts is defined by the TPM interface spec that the chip
 		conforms to.
 
-		Example output:
+		Example output::
 
-		750000 750000 750000 750000 [original]
+		  750000 750000 750000 750000 [original]
 
 		The four timeout values are shown in usecs, with a trailing
 		"[original]" or "[adjusted]" depending on whether the values
diff --git a/Documentation/ABI/stable/sysfs-devices b/Documentation/ABI/stable/sysfs-devices
index 4404bd9b96c1..42bf1eab5677 100644
--- a/Documentation/ABI/stable/sysfs-devices
+++ b/Documentation/ABI/stable/sysfs-devices
@@ -1,5 +1,6 @@
-# Note: This documents additional properties of any device beyond what
-# is documented in Documentation/admin-guide/sysfs-rules.rst
+Note:
+  This documents additional properties of any device beyond what
+  is documented in Documentation/admin-guide/sysfs-rules.rst
 
 What:		/sys/devices/*/of_node
 Date:		February 2015
diff --git a/Documentation/ABI/stable/sysfs-driver-ib_srp b/Documentation/ABI/stable/sysfs-driver-ib_srp
index 84972a57caae..bada15a329f7 100644
--- a/Documentation/ABI/stable/sysfs-driver-ib_srp
+++ b/Documentation/ABI/stable/sysfs-driver-ib_srp
@@ -6,6 +6,7 @@ Description:	Interface for making ib_srp connect to a new target.
 		One can request ib_srp to connect to a new target by writing
 		a comma-separated list of login parameters to this sysfs
 		attribute. The supported parameters are:
+
 		* id_ext, a 16-digit hexadecimal number specifying the eight
 		  byte identifier extension in the 16-byte SRP target port
 		  identifier. The target port identifier is sent by ib_srp
diff --git a/Documentation/ABI/stable/sysfs-firmware-efi-vars b/Documentation/ABI/stable/sysfs-firmware-efi-vars
index 5def20b9019e..46ccd233e359 100644
--- a/Documentation/ABI/stable/sysfs-firmware-efi-vars
+++ b/Documentation/ABI/stable/sysfs-firmware-efi-vars
@@ -17,6 +17,7 @@ Description:
 		directory has a name of the form "<key>-<vendor guid>"
 		and contains the following files:
 
+		=============== ========================================
 		attributes:	A read-only text file enumerating the
 				EFI variable flags.  Potential values
 				include:
@@ -59,12 +60,14 @@ Description:
 
 		size:		As ASCII representation of the size of
 				the variable's value.
+		=============== ========================================
 
 
 		In addition, two other magic binary files are provided
 		in the top-level directory and are used for adding and
 		removing variables:
 
+		=============== ========================================
 		new_var:	Takes a "struct efi_variable" and
 				instructs the EFI firmware to create a
 				new variable.
@@ -73,3 +76,4 @@ Description:
 				instructs the EFI firmware to remove any
 				variable that has a matching vendor GUID
 				and variable key name.
+		=============== ========================================
diff --git a/Documentation/ABI/stable/sysfs-firmware-opal-dump b/Documentation/ABI/stable/sysfs-firmware-opal-dump
index 32fe7f5c4880..1f74f45327ba 100644
--- a/Documentation/ABI/stable/sysfs-firmware-opal-dump
+++ b/Documentation/ABI/stable/sysfs-firmware-opal-dump
@@ -7,6 +7,7 @@ Description:
 
 		This is only for the powerpc/powernv platform.
 
+		=============== ===============================================
 		initiate_dump:	When '1' is written to it,
 				we will initiate a dump.
 				Read this file for supported commands.
@@ -19,8 +20,11 @@ Description:
 				and ID of the dump, use the id and type files.
 				Do not rely on any particular size of dump
 				type or dump id.
+		=============== ===============================================
 
 		Each dump has the following files:
+
+		=============== ===============================================
 		id:		An ASCII representation of the dump ID
 				in hex (e.g. '0x01')
 		type:		An ASCII representation of the type of
@@ -39,3 +43,4 @@ Description:
 				inaccessible.
 				Reading this file will get a list of
 				supported actions.
+		=============== ===============================================
diff --git a/Documentation/ABI/stable/sysfs-firmware-opal-elog b/Documentation/ABI/stable/sysfs-firmware-opal-elog
index 2536434d49d0..7c8a61a2d005 100644
--- a/Documentation/ABI/stable/sysfs-firmware-opal-elog
+++ b/Documentation/ABI/stable/sysfs-firmware-opal-elog
@@ -38,6 +38,7 @@ Description:
 		For each log entry (directory), there are the following
 		files:
 
+		==============  ================================================
 		id:		An ASCII representation of the ID of the
 				error log, in hex - e.g. "0x01".
 
@@ -58,3 +59,4 @@ Description:
 				entry will be removed from sysfs.
 				Reading this file will list the supported
 				operations (currently just acknowledge).
+		==============  ================================================
diff --git a/Documentation/ABI/stable/sysfs-hypervisor-xen b/Documentation/ABI/stable/sysfs-hypervisor-xen
index 3cf5cdfcd9a8..748593c64568 100644
--- a/Documentation/ABI/stable/sysfs-hypervisor-xen
+++ b/Documentation/ABI/stable/sysfs-hypervisor-xen
@@ -33,6 +33,8 @@ Description:	If running under Xen:
 		Space separated list of supported guest system types. Each type
 		is in the format: <class>-<major>.<minor>-<arch>
 		With:
+
+			======== ============================================
 			<class>: "xen" -- x86: paravirtualized, arm: standard
 				 "hvm" -- x86 only: fully virtualized
 			<major>: major guest interface version
@@ -43,6 +45,7 @@ Description:	If running under Xen:
 				 "x86_64": 64 bit x86 guest
 				 "armv7l": 32 bit arm guest
 				 "aarch64": 64 bit arm guest
+			======== ============================================
 
 What:		/sys/hypervisor/properties/changeset
 Date:		March 2009
diff --git a/Documentation/ABI/stable/vdso b/Documentation/ABI/stable/vdso
index 55406ec8a35a..73ed1240a5c0 100644
--- a/Documentation/ABI/stable/vdso
+++ b/Documentation/ABI/stable/vdso
@@ -23,6 +23,7 @@ Unless otherwise noted, the set of symbols with any given version and the
 ABI of those symbols is considered stable.  It may vary across architectures,
 though.
 
-(As of this writing, this ABI documentation as been confirmed for x86_64.
+Note:
+ As of this writing, this ABI documentation has been confirmed for x86_64.
  The maintainers of the other vDSO-using architectures should confirm
- that it is correct for their architecture.)
+ that it is correct for their architecture.
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 14:42:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 14:42:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13652.34389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmeK-0001l3-1H; Wed, 28 Oct 2020 14:42:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13652.34389; Wed, 28 Oct 2020 14:42:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmeJ-0001kw-UG; Wed, 28 Oct 2020 14:42:03 +0000
Received: by outflank-mailman (input) for mailman id 13652;
 Wed, 28 Oct 2020 14:42:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LDgn=ED=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kXmeI-0001kr-DZ
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:42:02 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.50])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5913529-7f87-422a-ba3a-c01e20b04d8c;
 Wed, 28 Oct 2020 14:42:01 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.2.3 DYNA|AUTH)
 with ESMTPSA id D03373w9SEft5Qr
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 28 Oct 2020 15:41:55 +0100 (CET)
X-Inumbo-ID: b5913529-7f87-422a-ba3a-c01e20b04d8c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603896120;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=XGMXWyKz1a6XbgxmdFwWHpBc+hSLcSTn76PjFu99oLk=;
	b=YD6L6I30OHrZJhi1mREDCnJ3RJN/XGRs+SKb6x17Z9PRu8vWG3BXyvIDPyCh7hIypP
	KcX9m3ylr4BSqZcwVXh3jeDtBDNq9g2UBdbHXVOtfS08CG8/rEHFN2p5B/aot1xtUxfh
	wN8qdHUX1Hb8xnVvE/jYr/tyijrRbSew0sIHhurKE5Ry40eodybHSSnvOsc/mIH2EkJ2
	3PL6vIn2Yt/5MrzQdJij2S07V08ybUIkX/AS1cNeEKC+eRo9H6G2tPMlhHcDiC9x4FD7
	3HhczWeC92vQsAAeCu8h8vEeQ4YrR4A2F/rZb17oLWtTcZZ/FrXuBEusgmLXZiqQ3LV7
	BL8Q==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools: add readv_exact to xenctrl
Date: Wed, 28 Oct 2020 15:41:51 +0100
Message-Id: <20201028144151.2766-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Read a batch of iovecs.

In the common case of short reads, finish the individual iovs with read_exact.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---

Users will follow, probably next month.

 tools/libs/ctrl/xc_private.c | 54 +++++++++++++++++++++++++++++++++++-
 tools/libs/ctrl/xc_private.h |  1 +
 2 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index d94f846686..a86ed213a5 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -659,8 +659,22 @@ int write_exact(int fd, const void *data, size_t size)
 
 #if defined(__MINIOS__)
 /*
- * MiniOS's libc doesn't know about writev(). Implement it as multiple write()s.
+ * MiniOS's libc doesn't know about readv()/writev(). Implement them as multiple read()s/write()s.
  */
+int readv_exact(int fd, const struct iovec *iov, int iovcnt)
+{
+    int rc, i;
+
+    for ( i = 0; i < iovcnt; ++i )
+    {
+        rc = read_exact(fd, iov[i].iov_base, iov[i].iov_len);
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
+}
+
 int writev_exact(int fd, const struct iovec *iov, int iovcnt)
 {
     int rc, i;
@@ -675,6 +689,44 @@ int writev_exact(int fd, const struct iovec *iov, int iovcnt)
     return 0;
 }
 #else
+int readv_exact(int fd, const struct iovec *iov, int iovcnt)
+{
+    int rc = 0, idx = 0;
+    ssize_t len;
+
+    while ( idx < iovcnt )
+    {
+        len = readv(fd, &iov[idx], min(iovcnt - idx, IOV_MAX));
+        if ( len == -1 && errno == EINTR )
+            continue;
+        if ( len <= 0 )
+        {
+            rc = -1;
+            goto out;
+        }
+        while ( len > 0 && idx < iovcnt )
+        {
+            if ( len >= iov[idx].iov_len )
+            {
+                len -= iov[idx].iov_len;
+            }
+            else
+            {
+                void *p = iov[idx].iov_base + len;
+                size_t l = iov[idx].iov_len - len;
+
+                rc = read_exact(fd, p, l);
+                if ( rc )
+                    goto out;
+                len = 0;
+            }
+            idx++;
+        }
+    }
+out:
+    return rc;
+}
+
 int writev_exact(int fd, const struct iovec *iov, int iovcnt)
 {
     struct iovec *local_iov = NULL;
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index f0b5f83ac8..5d2c7274fb 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -441,6 +441,7 @@ int xc_flush_mmu_updates(xc_interface *xch, struct xc_mmu *mmu);
 
 /* Return 0 on success; -1 on error setting errno. */
 int read_exact(int fd, void *data, size_t size); /* EOF => -1, errno=0 */
+int readv_exact(int fd, const struct iovec *iov, int iovcnt);
 int write_exact(int fd, const void *data, size_t size);
 int writev_exact(int fd, const struct iovec *iov, int iovcnt);
 


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 14:49:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 14:49:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13659.34400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmlM-00021K-PR; Wed, 28 Oct 2020 14:49:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13659.34400; Wed, 28 Oct 2020 14:49:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXmlM-00021D-MY; Wed, 28 Oct 2020 14:49:20 +0000
Received: by outflank-mailman (input) for mailman id 13659;
 Wed, 28 Oct 2020 14:49:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HSML=ED=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXmlL-000218-Gd
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 14:49:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2da1031-33b0-4293-b27d-cca03d11408c;
 Wed, 28 Oct 2020 14:49:18 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXmlJ-0003pu-Gh; Wed, 28 Oct 2020 14:49:17 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXmlJ-0005DA-8y; Wed, 28 Oct 2020 14:49:17 +0000
X-Inumbo-ID: c2da1031-33b0-4293-b27d-cca03d11408c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=geuIc3HYqWdr3oeekfddYO573JieDsY3CqsARBmN6rk=; b=cjJrLhbpqbtWffi1pBlMnPhEbt
	3abymfb/l14/JZZoSut8aS4sG/3IOxgUimXwthPRqzwnN2RkmcFJXbo93oKlqFKW19Ip+gKvb2Ea1
	OF0uUUKH0m0m9ESAUYoidCT1Ra7VzsUoUMctjB0fKJOSZPBtkL6kXFhjKlal4fOr8aIY=;
Subject: Re: [qemu-mainline test] 156257: regressions - FAIL
To: Jason Andryuk <jandryuk@gmail.com>,
 osstest service owner <osstest-admin@xenproject.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>
References: <osstest-156257-mainreport@xen.org>
 <CAKf6xpss8KpGOvZrKiTPz63bhBVbjxRTYWdHEkzUo2q1KEMjhg@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <70d87480-6fcf-9fe0-34c0-30bd711406a4@xen.org>
Date: Wed, 28 Oct 2020 14:49:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpss8KpGOvZrKiTPz63bhBVbjxRTYWdHEkzUo2q1KEMjhg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

(+ Anthony and Stefano,)

Hi Jason,

On 28/10/2020 13:37, Jason Andryuk wrote:
> On Tue, Oct 27, 2020 at 5:23 PM osstest service owner
> <osstest-admin@xenproject.org> wrote:
>>
>> flight 156257 qemu-mainline real [real]
>> flight 156266 qemu-mainline real-retest [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/156257/
>> http://logs.test-lab.xenproject.org/osstest/logs/156266/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>   test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
> 
> QEMU doesn't start with "qemu-system-i386: -xen-domid 1: Option not
> supported for this target"
> 
> This happens if CONFIG_XEN isn't set.
> 
> QEMU is built with:
>                    host CPU: aarch64
>             host endianness: little
>                 target list: i386-softmmu
> 
> commit 8a19980e3fc4 "configure: move accelerator logic to meson"
> introduced this logic:
> +accelerator_targets = { 'CONFIG_KVM': kvm_targets }
> +if cpu in ['x86', 'x86_64']
> +  accelerator_targets += {
> +    'CONFIG_HAX': ['i386-softmmu', 'x86_64-softmmu'],
> +    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
> +    'CONFIG_HVF': ['x86_64-softmmu'],
> +    'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
> +  }
> +endif

I always wondered when this would come to bite us :). I am surprised it 
took so long.

> 
> I guess something like this would fix it:
> if cpu in ['aarch64', 'arm']
>    accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'], }
> endif

Per the logic above, I think this is correct. @Stefano, @Anthony, can you
have a look?
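Combining the hunk above with Jason's suggestion, the resulting accelerator map in QEMU's meson.build would read roughly like this (a sketch assembled from the snippets quoted above, untested):

```meson
accelerator_targets = { 'CONFIG_KVM': kvm_targets }
if cpu in ['x86', 'x86_64']
  accelerator_targets += {
    'CONFIG_HAX': ['i386-softmmu', 'x86_64-softmmu'],
    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
    'CONFIG_HVF': ['x86_64-softmmu'],
    'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
  }
endif
if cpu in ['aarch64', 'arm']
  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
endif
```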

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 15:14:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 15:14:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13664.34413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXn9a-0004ZR-Se; Wed, 28 Oct 2020 15:14:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13664.34413; Wed, 28 Oct 2020 15:14:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXn9a-0004ZK-Ov; Wed, 28 Oct 2020 15:14:22 +0000
Received: by outflank-mailman (input) for mailman id 13664;
 Wed, 28 Oct 2020 15:14:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7zZT=ED=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kXn9Y-0004ZE-KV
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 15:14:20 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.70]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75a59c8f-594e-4bc2-924b-1234a60bcaad;
 Wed, 28 Oct 2020 15:14:18 +0000 (UTC)
Received: from AM5PR0601CA0030.eurprd06.prod.outlook.com
 (2603:10a6:203:68::16) by VI1PR08MB4320.eurprd08.prod.outlook.com
 (2603:10a6:803:100::10) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.28; Wed, 28 Oct
 2020 15:14:16 +0000
Received: from AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:68:cafe::14) by AM5PR0601CA0030.outlook.office365.com
 (2603:10a6:203:68::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Wed, 28 Oct 2020 15:14:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT033.mail.protection.outlook.com (10.152.16.99) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 15:14:16 +0000
Received: ("Tessian outbound ba2270a55485:v64");
 Wed, 28 Oct 2020 15:14:16 +0000
Received: from f5d719fceabe.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 80E4BB33-84B8-45E2-AC2D-F88B89FA43EB.1; 
 Wed, 28 Oct 2020 15:13:47 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f5d719fceabe.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 28 Oct 2020 15:13:47 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM9PR08MB6178.eurprd08.prod.outlook.com (2603:10a6:20b:2db::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Wed, 28 Oct
 2020 15:13:46 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 15:13:46 +0000
X-Inumbo-ID: 75a59c8f-594e-4bc2-924b-1234a60bcaad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JN7GV8+obqNoRFpwaosnR0yQ0WMiGYzgwL+NGN0JVCE=;
 b=jfpQFrlTsd6eQU1C1UH6N+vIwnx9b/KeQGav6BNmxMt9YtKA40b/aWlxBeSUchReWvNIiJdMwnNO9YU4+6RDmg/3bYD7FLZ8EQrXbqqiXPKXgb+C0Z6g5MdQ1S5hDj+dU2cOlnKltsjGSciVsSpePgUi+7RtBm9WLa0bRyLCNxY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 95b49ef55c60175e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MMS5/WWg6swfl0pzkmRA4F6qBFIDiopfBZEDXM0bkKg7aNLqJcd+oD5KalFIl9lqCidXJzXXnHJdYGfQ89FGyr0zQsBJ+QLPYkqCGztixp/cXZBy3o/Jmmcl42XykEDy74Ku93YD5Q/F0hZHwMsK3jdHEB4pP6MoQwL3Qs003gFdTDb8vcU6tC+RJ3hJVI+tNsZLsss+AQurm1K20McB5jDj8zTT+SKXyXPF8usH2fHKniq0Y5AsRyoy7Kh99s6GABhQO0ADi4qpdFc25DdEpSb6HX8h/mbBJncEGrJ9DIv3tPdmjcxpVeTr8WCxEEC7/OHRLTiH1xM37mq/s0YGCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JN7GV8+obqNoRFpwaosnR0yQ0WMiGYzgwL+NGN0JVCE=;
 b=YC2leGcRVcGeBL20RN29xRaqHCXSASgqUzWbPvhRT1fc7QEXNPSz07VYMmhdi+eR1INtW96SExvREg5Do7yuxWDSSiG4vuELLiclC37B+v6Du4rWRWoOKhc+0ijl7ASbFjoz0yEYyXOZcfcnC42B0bNLxk8kKHgp4r2GAuxjoeOI1ZXf44lHc7+en3+lmC8OaTvveRd3l244Gn7hpXDKyeck48635xc21Vv9lRbFiVnGFpjB9MqD6GlZOFxN6uI0M87IHm5J3Fp960Srhzl92jFnfWIw5NeYsDOvNUcR5fWSUIjYy5Lf12wrZCnjnIMBs1Yw839YoUCk+S/ip/4y8A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v1 4/4] xen/pci: solve compilation error when memory
 paging is not enabled.
Thread-Topic: [PATCH v1 4/4] xen/pci: solve compilation error when memory
 paging is not enabled.
Thread-Index: AQHWq7wn1E1TPrUt3EyifJVqFrVmIams60gAgAA3GIA=
Date: Wed, 28 Oct 2020 15:13:46 +0000
Message-ID: <14328157-D9C5-428E-BD1C-F4A841359185@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <dc85bb73ca4b6ab8b4a2370f2db7700445fbc5f8.1603731279.git.rahul.singh@arm.com>
 <b345b0d4-8045-1d5d-b3c9-498311cfb1ac@suse.com>
In-Reply-To: <b345b0d4-8045-1d5d-b3c9-498311cfb1ac@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 4c1b9d8c-ecf3-46b7-3236-08d87b5422d2
x-ms-traffictypediagnostic: AM9PR08MB6178:|VI1PR08MB4320:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB4320847F2546320F0224DC68FC170@VI1PR08MB4320.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4941;OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 HD5EYNo6TvH4kZXLrQjIsF3lcpr3wi3Fkqp+I7HRvjZqzaXdrhb/T5piSHKlxXBnFlHaT0knS1BuHi89dtk04A7cnVs8dJgWbaDv7bqP/YlSBU9dvYOq9/9udioerP1llY+83Z6vUzTAyj6O9z6WLS71R/6mLbGk6VdNrCYR+dCOnPIFmK1DAAUrESWBXiijJSAoJwjluuyAk8pqUWUKOHiUVp3CNudF5UzaWRSoNRIjKnL4zgzDTIhk06DFIy5Xx41r3fykaNnnatZI/m0Gkb+Pq7Ds1WE9TFLKTHWX4yPgYdKkF8xpchMVLcyfe0iONX7P5Wopo+xZVdu37CzGRQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(396003)(376002)(39860400002)(136003)(71200400001)(36756003)(4326008)(2616005)(86362001)(26005)(6486002)(316002)(6512007)(54906003)(186003)(478600001)(55236004)(33656002)(53546011)(66556008)(8676002)(6506007)(8936002)(66446008)(66476007)(5660300002)(6916009)(2906002)(64756008)(91956017)(66946007)(76116006);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 a8rMqPLB/XS3R7j6HiT9LNRvYNKtTheEiNo/38s1klGErGeZIU2CxDvpdCeDAFku8kqskSGvQJ+LDO5HW/3T7JymK4blFfYWOQX+Ylq7EMsvPn2A/T+/YnSOt6rummaaQ1e+xI9KrZSRDn+xt6zv7/A5bgDKwWddq7mcOfNsd+yP+knM12UdJ7lXwsXNJMYoZ9+qDUB7bmqjo3MpWZLxzNvmFncnyeBBd9ybptuwPMxROoryhozcF6Ur/a0nK1mrgxfKQp8SEo3hrNKPpgbY0v094eT8qMI57K9yY0zRCNr4b+NGBdY7DjyX2hqlESMoEwBObYQYR+46+7rzkRsJ21yqmE0UvNiivjEQ+9UTpFwF0aS3KHYDeVlCIXJA58Wb2eBqe4WIrz3yBHQt4hS7iRPZlvxqX/GpqlrEfBqDv1LI+Q8JTcidKFXYCcAxhTy37/1MAXU7y0WBcnqSPzbGH4cQ+vLP3goqmSF9O3PkZvtxttMVN7PFpDiH6MEGrbHWNfBHJ+IDsV+8MDVWFfnRGB8Ar2E2RpsHn4nS4DnBCQ2wh/EBgJA72NCditn4W+iCGlJpI+RrEXdGGgn1hR4wR7t/0fnlWD0oY8WC7W2inMwJ5NP/txthPYvWg68UQIyCGMjjU7ySvSbhGXpmuZrpNg==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <32115F39AB08614CB9FE1176B0C912C2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6178
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	235d86f0-6542-48f4-e050-08d87b541111
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	/j9fAzUN5rzbjo1CqNpFqj566ed8kyCEArZ9XxT7kSBdyGQRszhiRJVAs+Ut4m0Mitgntjc95T50GZUJdhuapfeH+DB716hzEwROxAbR8N9VcPQYYQpivzPEQIBIsrytpMlq3bgNr6RijdlGJnG8IXRVy7IpL+g3ks0iEBTuh8Ye8uLmMkEEgNdAbfDlPxNp7NSgmKK8KkeXCipWC7hKl029RpvwOJ1MNISZ7J+/aAb8ndeA2JiAxENijKawgmwxEpHsoP/jkdVZWUfbiKKPioWhKe5JhECPhCM4phwgDfkJR63BkZTJjVxklekVdQ4SkJ2dKQTD2RZTVXK3A9kBkFZnAoYSVCCMW/m6x1X02PwGKtlUg0RVpbGYsue+ZJlJ0pEBk5jiGtZbSYeg5J51gw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(39850400004)(136003)(396003)(46966005)(53546011)(36756003)(356005)(2906002)(478600001)(47076004)(81166007)(55236004)(86362001)(5660300002)(82740400003)(54906003)(6506007)(33656002)(8676002)(70206006)(26005)(36906005)(82310400003)(186003)(6486002)(2616005)(336012)(8936002)(6512007)(316002)(6862004)(70586007)(4326008);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Oct 2020 15:14:16.4075
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4c1b9d8c-ecf3-46b7-3236-08d87b5422d2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4320

Hello Jan,

> On 28 Oct 2020, at 11:56 am, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 26.10.2020 18:17, Rahul Singh wrote:
>> --- a/xen/drivers/passthrough/pci.c
>> +++ b/xen/drivers/passthrough/pci.c
>> @@ -1419,13 +1419,15 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>>     if ( !is_iommu_enabled(d) )
>>         return 0;
>> 
>> -    /* Prevent device assign if mem paging or mem sharing have been
>> +#if defined(CONFIG_HAS_MEM_PAGING) || defined(CONFIG_MEM_SHARING)
>> +    /* Prevent device assign if mem paging or mem sharing have been
>>      * enabled for this domain */
>>     if ( d != dom_io &&
>>          unlikely(mem_sharing_enabled(d) ||
>>                   vm_event_check_ring(d->vm_event_paging) ||
>>                   p2m_get_hostp2m(d)->global_logdirty) )
>>         return -EXDEV;
>> +#endif
> 
> Besides this also disabling mem-sharing and log-dirty related
> logic, I don't think the change is correct: Each item being
> checked needs individually disabling depending on its associated
> CONFIG_*. For this, perhaps you want to introduce something
> like mem_paging_enabled(d), to avoid the need for #ifdef here?
> 

OK, I will fix that in the next version.

> Jan

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 15:21:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 15:21:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13680.34425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnGg-0005Sx-MH; Wed, 28 Oct 2020 15:21:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13680.34425; Wed, 28 Oct 2020 15:21:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnGg-0005Sq-IR; Wed, 28 Oct 2020 15:21:42 +0000
Received: by outflank-mailman (input) for mailman id 13680;
 Wed, 28 Oct 2020 15:21:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7zZT=ED=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kXnGf-0005Sl-Ff
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 15:21:41 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.51]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6674565-d246-4722-97ab-ca85d6711646;
 Wed, 28 Oct 2020 15:21:38 +0000 (UTC)
Received: from MR2P264CA0020.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:1::32) by
 HE1PR0802MB2617.eurprd08.prod.outlook.com (2603:10a6:3:e0::22) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3477.25; Wed, 28 Oct 2020 15:21:35 +0000
Received: from VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:1:cafe::f9) by MR2P264CA0020.outlook.office365.com
 (2603:10a6:500:1::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Wed, 28 Oct 2020 15:21:35 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT012.mail.protection.outlook.com (10.152.18.211) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 15:21:35 +0000
Received: ("Tessian outbound a64c3afb6fc9:v64");
 Wed, 28 Oct 2020 15:21:34 +0000
Received: from 731b9158dda2.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E02E6678-96F1-4A3F-A684-E544125856F4.1; 
 Wed, 28 Oct 2020 15:21:01 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 731b9158dda2.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 28 Oct 2020 15:21:01 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB5027.eurprd08.prod.outlook.com (2603:10a6:208:15c::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Wed, 28 Oct
 2020 15:20:59 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 15:20:58 +0000
X-Inumbo-ID: f6674565-d246-4722-97ab-ca85d6711646
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EziYhGqbg+zTATj1oJDLW1nY9c62Yt5ufgVzeSdXRKE=;
 b=a3qE9G3eUGzfte3LxjH01vtjFPKEy0S+Cp9CwDVeHgm8pmBgHrOQ7Hg+CtI2wxnVw+yQ6Ul6c/tQoMqhl4SpNhaawDpKB8z8Oa86bQMFqfEko9fqtIYrTzlM/GImGbWnmsRxMe7Y8iZf67s4STZ/vLAr8wfckLwkhHxgjL9+NF4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: b3202e61668085bc
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OAncU0MiK5u3dHViUWKF+U3c1caZapZK0OAZfpG61zTVnQdGvmfJ0W2ez21nf/4jIWxo5FlsUiW0VivbnNsl37HL7KcyfWyG/ex/EYOMY851C5ZCWyhVRDRJYSlsidBQnbj9qHHGBmKXX099p6iFcmVERN8E1MQaPYpEEP2+zD5fdAiYx5G1s9kPKo6p6RoFwUgDXJHNYFAekAhNpPvB3lcRb09dAaMRHcxwOA+R62WsaWh5WTKWTmsH6e45YOXTSOfc5OfEXM8X9zBsrnEm/SbXl/uNlh4Ed8LqqSfoeQvyKHOkXR3GIQbbUVGn88RT1d0ND3lI3zZ3sm604eK+LA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EziYhGqbg+zTATj1oJDLW1nY9c62Yt5ufgVzeSdXRKE=;
 b=VZ1xc80edmzoEKBNOaX43EOIZRDwSKIDijVX4u6Sga8dwD21JlQa2WUQRbZevhDeh2rkP66UXYhpTrBjkOCkkczESOUbJHue9FKuPA1KovaDwKrSRD+ZT1iDLJax56pv7Q7Q7GeJZ9lJ2V6xRewMW7KBC/KRS5PPNN+WGM77rrfvjipwNL9Eg5tLjZy8X/hwKzXVkVUirz8eDnkkDR4wvQ8sonIJRq/p0DaQEoip885kZ4OybTbyZ6MvLzQjStiQX8SI6rS8lC9xLGgZXAdVrileGx09oc0LXkCVwOP2yVlWiLQZker2d0otcU9xUnh92XRArLTMh8K+6E2ko5MW1g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86 directory.
Thread-Topic: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86 directory.
Thread-Index: AQHWq7x5RAJ4utEmJEq01xijjJpnmqms6fuAgAA6aoA=
Date: Wed, 28 Oct 2020 15:20:58 +0000
Message-ID: <4F4C34EE-7035-4931-9C65-13CCFEF52D18@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <70029e8904170c4f19d9f521847050cd00c6e39d.1603731279.git.rahul.singh@arm.com>
 <301405a2-9ec1-847d-6f61-1067a225a3a9@suse.com>
In-Reply-To: <301405a2-9ec1-847d-6f61-1067a225a3a9@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: fb2a2dfb-a0a4-4d8a-318c-08d87b552861
x-ms-traffictypediagnostic: AM0PR08MB5027:|HE1PR0802MB2617:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<HE1PR0802MB2617D28CD5081926070F2D25FC170@HE1PR0802MB2617.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 KIq4aA2LZWxMgz6fkxuCV8gQBJM41MuLa90kAjmvos+PNqxSOPAz6P2uC/zkbAB+UMjzwz+9Fkx92jBFlnTROgMy4gMJNqndenhrBGBSNtJcIvC95mzLzf0Cqn/FvLg/k1ZgjyJEuwUbZE88RWgeGPIgXU6SPs1IkkbucjmUlDotq1kuKnuK2/RYTYgavbE2VR9CmR7L39sCizi23fnx2bIqPSIDPFaW28ddLkopoFHrMx3VNwRAc8K7h1WpZHa8OSsyedhndNnUeBllnbl1gPvGPvmJThptlUH0fkF+n82wZMCXOIVhIKVOZYd4+JHiVyUXAVxgGqM0yOhCAgK2LekETsh4FCWqhjsNWt49hYQXEE9t0/GNsAVObtrJquC0sMPxF66nsCD3X9ZSwsGIhM516PjsRuZtuX0gtX0dmClBbSNT+IOA1XDWEW5GOmQ7
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(366004)(39860400002)(376002)(346002)(66476007)(91956017)(8676002)(76116006)(86362001)(66446008)(478600001)(6916009)(66946007)(64756008)(6506007)(186003)(53546011)(55236004)(66556008)(26005)(33656002)(2616005)(71200400001)(83380400001)(6512007)(4326008)(54906003)(36756003)(6486002)(8936002)(316002)(5660300002)(2906002)(2004002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 o+74WlQ3siLfLpCqMr8VuRomk5BiYpCTXKOAUfqWdRl7qINXl6STRhN0gwwHq3QA+bu8ZYxRc7+qQocS1+SHodr7UuFI/OgjTntaeJoOSl+Zzb0u/7UhjPwZANf7GHfMfWxriRn6A/KjaLL8HkHvifPvTfNBptUe8PVsoXSh4WgZ2KM/BU1hlryDxOWdn8EiFtSzzbjOPckh+BlG8r5wDuJF87LYCCAriWLlL7qkJLz1oKGB3Cf0dBUg+0rOnqrb6fP6nS9wDSPru5MbTSYHvyeV+nL2vLg1PqpE/Mk+fbf7SNGIY8RqNUS9kFlSM8epD38ZWN3FNYLFj3O+IBTnQsSeBMS+7ueXt/PlBKNYl6KwFi1G+UDXoBPAz9F62XOTkT9HB3RX7Udc2kcYkNz8QOiO+uekfSJ1Cpjaz5PWgvD/oYbOGaTkp08MCFcVN2uJvERYVcbeMRX5LBwH8xXsqco03XVoh9etjV+v4HoQU4q6rTU9ra/v9fZuuR99TfSjtPQZ5HP86DBeVHMKyMRwNqJc4iD0D5DNTNS+/c++WIksvaP4ef9ohbLGZShbaOTo1Vz46Zsp4D8+Vp1e0wyEi2zlhOIvoSoQDvGPJSuhKmUMTuntm63PffdvVZ9+9vgB/kxptx729MhnTEVdlr/mNA==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <EAA674ECFE45A84CA1066B2603ECD0C6@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5027
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	21e454a0-9a51-44dc-4577-08d87b5512be
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SOgou1CTJqALojQ/VeVc2DM0J8MtxQgcLxzg9J5qd96hA7yTP/0JY+PU6B8TJLYx1y02uZqjcJPwwqKOzjegmaxYthRT/B9bUtU8TYvlI7qba5cL4zbtDe3PdKljqF1A5F3wux8zwk4m1rLLJKitoJY+67mHH7ElbvxOSuspXOdaWzje2CMxFQVQW4eL5AWKZTpPKOCd7GreQ1F3bbiX/NxAw1QxZDfPaaXKtTE+bAQwJ3acW80Iz9KR65YSCiobGzQBA5EHw9DZXOB5HElmGAL0N0DLjo4zFZ0RlOIrZDjUNnhy/PlHdD1ONRV5OfsKkff6PPa6Rnpa0OV5J0jNSxPcnd/mRdAajWJuRgUZaYtKPi2q9FogOVaP0b6O7fBMFdZ6ZfI3TDxYzDSXYS+YQAIpyCK0DxIjeNmwZUihzHuCAoSeEfpixUw5vbIv6MOfhDD8TKR0MaEOyErcnupstJyZfBa2A0r7+ogivRlkdVkx/HBE0/6erlDz4BsSzUMR
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(396003)(39850400004)(46966005)(6862004)(2616005)(82310400003)(33656002)(2906002)(54906003)(4326008)(70206006)(70586007)(86362001)(5660300002)(6486002)(336012)(6512007)(82740400003)(186003)(26005)(47076004)(316002)(83380400001)(8936002)(8676002)(81166007)(55236004)(53546011)(6506007)(356005)(478600001)(36906005)(36756003)(2004002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Oct 2020 15:21:35.1780
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fb2a2dfb-a0a4-4d8a-318c-08d87b552861
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0802MB2617

Hello Jan,

> On 28 Oct 2020, at 11:51 am, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 26.10.2020 18:17, Rahul Singh wrote:
>> The passthrough/pci.c file is common to all architectures, but there is
>> x86-specific code in this file.
>
> The code you move doesn't look to be x86-specific in the sense that
> it makes no sense on other architectures; rather, certain pieces are
> merely missing on Arm. With this I question whether it is really
> appropriate to move this code. I do realize that in similar earlier
> cases my questioning was mostly ignored ...
>
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/x86/pci.c
>> @@ -0,0 +1,97 @@
>> +/*
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/param.h>
>> +#include <xen/sched.h>
>> +#include <xen/pci.h>
>> +#include <xen/pci_regs.h>
>> +
>> +static int pci_clean_dpci_irq(struct domain *d,
>> +                              struct hvm_pirq_dpci *pirq_dpci, void *arg)
>> +{
>> +    struct dev_intx_gsi_link *digl, *tmp;
>> +
>> +    pirq_guest_unbind(d, dpci_pirq(pirq_dpci));
>> +
>> +    if ( pt_irq_need_timer(pirq_dpci->flags) )
>> +        kill_timer(&pirq_dpci->timer);
>> +
>> +    list_for_each_entry_safe ( digl, tmp, &pirq_dpci->digl_list, list )
>> +    {
>> +        list_del(&digl->list);
>> +        xfree(digl);
>> +    }
>> +
>> +    radix_tree_delete(&d->pirq_tree, dpci_pirq(pirq_dpci)->pirq);
>> +
>> +    if ( !pt_pirq_softirq_active(pirq_dpci) )
>> +        return 0;
>> +
>> +    domain_get_irq_dpci(d)->pending_pirq_dpci = pirq_dpci;
>> +
>> +    return -ERESTART;
>> +}
>> +
>> +static int pci_clean_dpci_irqs(struct domain *d)
>> +{
>> +    struct hvm_irq_dpci *hvm_irq_dpci = NULL;
>> +
>> +    if ( !is_iommu_enabled(d) )
>> +        return 0;
>> +
>> +    if ( !is_hvm_domain(d) )
>> +        return 0;
>> +
>> +    spin_lock(&d->event_lock);
>> +    hvm_irq_dpci = domain_get_irq_dpci(d);
>> +    if ( hvm_irq_dpci != NULL )
>> +    {
>> +        int ret = 0;
>> +
>> +        if ( hvm_irq_dpci->pending_pirq_dpci )
>> +        {
>> +            if ( pt_pirq_softirq_active(hvm_irq_dpci->pending_pirq_dpci) )
>> +                 ret = -ERESTART;
>> +            else
>> +                 hvm_irq_dpci->pending_pirq_dpci = NULL;
>> +        }
>> +
>> +        if ( !ret )
>> +            ret = pt_pirq_iterate(d, pci_clean_dpci_irq, NULL);
>> +        if ( ret )
>> +        {
>> +            spin_unlock(&d->event_lock);
>> +            return ret;
>> +        }
>> +
>> +        hvm_domain_irq(d)->dpci = NULL;
>> +        free_hvm_irq_dpci(hvm_irq_dpci);
>> +    }
>> +    spin_unlock(&d->event_lock);
>> +    return 0;
>
> While moving please add the missing blank line before the main
> return statement of the function.

Ok, I will fix that in the next version.
>
>> +}
>> +
>> +int arch_pci_release_devices(struct domain *d)
>> +{
>> +    return pci_clean_dpci_irqs(d);
>> +}
>
> Why the extra function layer?

Is it ok if I rename the function pci_clean_dpci_irqs() to arch_pci_clean_pirqs()?

>
> Jan
>

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 15:37:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 15:37:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13709.34437 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnVj-0006YB-8V; Wed, 28 Oct 2020 15:37:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13709.34437; Wed, 28 Oct 2020 15:37:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnVj-0006Y4-4h; Wed, 28 Oct 2020 15:37:15 +0000
Received: by outflank-mailman (input) for mailman id 13709;
 Wed, 28 Oct 2020 15:37:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXnVh-0006XK-Ok
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 15:37:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d6a7d713-23b2-45f3-9d49-2ebdd3068012;
 Wed, 28 Oct 2020 15:37:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E0E8BADF5;
 Wed, 28 Oct 2020 15:37:11 +0000 (UTC)
X-Inumbo-ID: d6a7d713-23b2-45f3-9d49-2ebdd3068012
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603899432;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Lgefu7BR1euqQczOJFI6qiOrCVTqTjJPUkUuvPBsnHY=;
	b=Myo5jgkCI1f+9XbsYHvKMeS3Ot6h0iyk5loqJEpJqsojarUDpSQyVE57uV04Z8gccFac6W
	tV8xNXgEZ4HVeu4JSA58//HFhyMcCPDJkt8uyvHVFdvwyN5xRcasIHstOPlMyidcBYTQX7
	d/ah5/dUHAQOi0KF5RIrGmEESd6aAks=
Subject: Re: [PATCH v1 3/4] xen/pci: Move x86 specific code to x86 directory.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <70029e8904170c4f19d9f521847050cd00c6e39d.1603731279.git.rahul.singh@arm.com>
 <301405a2-9ec1-847d-6f61-1067a225a3a9@suse.com>
 <4F4C34EE-7035-4931-9C65-13CCFEF52D18@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0a1b3e93-2f24-d377-0753-313b3773efd8@suse.com>
Date: Wed, 28 Oct 2020 16:37:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4F4C34EE-7035-4931-9C65-13CCFEF52D18@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 28.10.2020 16:20, Rahul Singh wrote:
>> On 28 Oct 2020, at 11:51 am, Jan Beulich <jbeulich@suse.com> wrote:
>> On 26.10.2020 18:17, Rahul Singh wrote:
>>> +int arch_pci_release_devices(struct domain *d)
>>> +{
>>> +    return pci_clean_dpci_irqs(d);
>>> +}
>>
>> Why the extra function layer?
> 
> Is it ok if I rename the function pci_clean_dpci_irqs() to arch_pci_clean_pirqs()?

I see no problem with you doing so.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 15:43:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 15:43:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13714.34449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnb7-0007QT-Ti; Wed, 28 Oct 2020 15:42:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13714.34449; Wed, 28 Oct 2020 15:42:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnb7-0007QM-Pk; Wed, 28 Oct 2020 15:42:49 +0000
Received: by outflank-mailman (input) for mailman id 13714;
 Wed, 28 Oct 2020 15:42:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXnb5-0007QH-VL
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 15:42:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ddcbab9d-85b1-4c18-8a39-29bf2a172c4c;
 Wed, 28 Oct 2020 15:42:47 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4136FADCC;
 Wed, 28 Oct 2020 15:42:46 +0000 (UTC)
X-Inumbo-ID: ddcbab9d-85b1-4c18-8a39-29bf2a172c4c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603899766;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=m+osU/Ke31c9xIOQYci1sX4iSdMIFcxvQe6MCm+EGgI=;
	b=lTJoXbgMtrDo87BTgiuq8CwRqC9wH+HjpjWWGtoLD9mRVbfbOYF/vWX2eTGm13ZcQI1xqr
	9kdggn7thjRK+E06fu4wUpIQStRyieIpxMG25Aczr0JHXS8kifSPGuZ0DmNfiKmYowVb2b
	7NeDg6zmPELCC9/4y1V0iWnnDiMM5zk=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/shadow: correct GFN use by sh_unshadow_for_p2m_change()
Message-ID: <888b8f2b-4368-6179-25c5-dc3eb6acbf3d@suse.com>
Date: Wed, 28 Oct 2020 16:42:45 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Luckily sh_remove_all_mappings()'s use of the parameter is limited to
generation of log messages. Nevertheless we'd better pass correct GFNs
around:
- the incoming GFN, when replacing a large page, may not be large page
  aligned,
- incrementing by page-size-scaled values can't be right.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3115,6 +3115,8 @@ static void sh_unshadow_for_p2m_change(s
                  && mfn_valid(nmfn) )
                 npte = map_domain_page(nmfn);
 
+            gfn &= ~(L1_PAGETABLE_ENTRIES - 1);
+
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
             {
                 if ( !npte
@@ -3123,8 +3125,7 @@ static void sh_unshadow_for_p2m_change(s
                 {
                     /* This GFN->MFN mapping has gone away */
                     sh_remove_all_shadows_and_parents(d, omfn);
-                    if ( sh_remove_all_mappings(d, omfn,
-                                                _gfn(gfn + (i << PAGE_SHIFT))) )
+                    if ( sh_remove_all_mappings(d, omfn, _gfn(gfn + i)) )
                         cpumask_or(&flushmask, &flushmask, d->dirty_cpumask);
                 }
                 omfn = mfn_add(omfn, 1);


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 15:47:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 15:47:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13720.34461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnfQ-0007cu-FZ; Wed, 28 Oct 2020 15:47:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13720.34461; Wed, 28 Oct 2020 15:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnfQ-0007cn-Bo; Wed, 28 Oct 2020 15:47:16 +0000
Received: by outflank-mailman (input) for mailman id 13720;
 Wed, 28 Oct 2020 15:47:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7zZT=ED=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kXnfP-0007c2-0j
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 15:47:15 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.86]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fef1665c-0f4a-4d58-b0e9-e6d5cbf33373;
 Wed, 28 Oct 2020 15:47:12 +0000 (UTC)
Received: from MR2P264CA0102.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:33::18)
 by PA4PR08MB6125.eurprd08.prod.outlook.com (2603:10a6:102:e1::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.20; Wed, 28 Oct
 2020 15:47:10 +0000
Received: from VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:33:cafe::5c) by MR2P264CA0102.outlook.office365.com
 (2603:10a6:500:33::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Wed, 28 Oct 2020 15:47:10 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT054.mail.protection.outlook.com (10.152.19.64) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 15:47:10 +0000
Received: ("Tessian outbound ba2270a55485:v64");
 Wed, 28 Oct 2020 15:47:10 +0000
Received: from 9f720d99b88f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 35AA86EF-810A-4123-98AD-A8F072817690.1; 
 Wed, 28 Oct 2020 15:47:04 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9f720d99b88f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 28 Oct 2020 15:47:04 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB4051.eurprd08.prod.outlook.com (2603:10a6:208:125::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 28 Oct
 2020 15:47:03 +0000
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5]) by AM0PR08MB3490.eurprd08.prod.outlook.com
 ([fe80::49fa:5525:9ab4:edd5%7]) with mapi id 15.20.3477.028; Wed, 28 Oct 2020
 15:47:03 +0000
X-Inumbo-ID: fef1665c-0f4a-4d58-b0e9-e6d5cbf33373
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=py2ONcKrlKEu5txZ1tzonQ1wYAZkRDAFdKmbS88XHA8=;
 b=h4h1HU3zOIfeEhvrbvxma/VwSx7T+N11bVQjOyoQD/G04ccNcAdBHTXuqcAFvVSOGUh/9LxnqfIle78/kmKA338ZHa1PkpHEnje7siFAzUqY7PFjy3EQNEnG0UWerU9Lg0Ncqof01CE6hy/+QyEzKo3EZ+Q2iELXNmCV1Yv/Xe4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT054.mail.protection.outlook.com (10.152.19.64) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Wed, 28 Oct 2020 15:47:10 +0000
Received: ("Tessian outbound ba2270a55485:v64"); Wed, 28 Oct 2020 15:47:10 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 47206cb18c4b53a7
X-CR-MTA-TID: 64aa7808
Received: from 9f720d99b88f.1
	by 64aa7808-outbound-1.mta.getcheckrecipient.com id 35AA86EF-810A-4123-98AD-A8F072817690.1;
	Wed, 28 Oct 2020 15:47:04 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
    by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9f720d99b88f.1
    (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
    Wed, 28 Oct 2020 15:47:04 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RY0eEEHRGL3ma0HcjlUy3nDjUnjjnv7sVjgfcOUagKR7P1t5aGCEicQESw2u1dXsOtRYEF1BB/BgfHpsh5JgrfJJcP/kFpH4s3AzemdDAkc5wOQ471TTHL/PcvFuhKCBKuPl92z/K5O8Xt0cPiF6fFLFUtwlogy8oQeyp1FmsJOl0E+YnOWlRl9JJR7X2RrbRuXFo1ygssO62wlnwaBo45InSNWHl+bhSWg6z9ZfI4PXgK6JkFCz4JAqhpj5wPdHTyItz0PmleSVDORV71hfQd2sFSqFzNCKPVkfVN3reBRbW3JBgIXkj/34UD8g8StNtnRMCsNnFuKyYVwAm/8gjQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=py2ONcKrlKEu5txZ1tzonQ1wYAZkRDAFdKmbS88XHA8=;
 b=RMFAik49+zP6mZqeNSGH7vNH0BZ+rIhzcs/1NkVxlmTAGpDSiQ/vPyo4Isz6yRcyVbXtYEh0dyC4Cg6I5qkI01AxqJt4jp3SibuWUiV65dlmXppR6j3gxA12J/Q6I4UMcTr9FXG2rix1E0QzRqCCCYQP6sMtRdrQRwHm306nDHGaWIgPxZO7LIvYC0GMQCAXy3hiHPfHgaohhxQV4jrrCZMCo3OU3Y1R52vqtgQ48E8ny6EjldkzyhQe6vIVem96QoYXxlY24PvCOHrFhOVOHjUhRz8k1DilDFh1WU5vFJhBaGWSTsSmw4lowV5i0ivfX9hz3ewnCMRonjairx/Tow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from AM0PR08MB3490.eurprd08.prod.outlook.com (2603:10a6:208:e4::28)
 by AM0PR08MB4051.eurprd08.prod.outlook.com (2603:10a6:208:125::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Wed, 28 Oct
 2020 15:47:03 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
Thread-Topic: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
Thread-Index: AQHWq7waQtOOhLtUwkWCA6LhLv3lTamsG2AAgACCJwCAADjrgIAADiIAgABHGoA=
Date: Wed, 28 Oct 2020 15:47:03 +0000
Message-ID: <752F9365-EB91-46AC-A2AD-060A630D8404@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
 <95b72e09-246b-dcbe-6254-86b3e25081c6@suse.com>
 <53C69BD9-716C-4ECF-8287-75C435C561CE@arm.com>
 <749813e0-0b04-0ecf-5dc6-96cfe53c786b@xen.org>
In-Reply-To: <749813e0-0b04-0ecf-5dc6-96cfe53c786b@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 5a8aaacd-c248-423e-8f7e-08d87b58bb86
x-ms-traffictypediagnostic: AM0PR08MB4051:|PA4PR08MB6125:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<PA4PR08MB6125CA0807E32A64F07A075FFC170@PA4PR08MB6125.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 n/zEu25UQOEGX6bc3ltMf8Ia3/sID7MGNmFNQHxJW0HDsD0QcV1fUWfXDoZEEvdE5osQmJR1SoCIOPm87ZcXNdKGRonoCp5NhzHVBq4zqM4Rci2G6azngrKC0E26UgogVxCsKxqsq08Yzih8mgJWUBXe/rNqI2wQfK4j85+xqwBD+Mhw4oJrzubSzWZfL6povjPsLN8TcGi2s3w3VsuGs5jkQpESm3TjkWlAk7dQ70RYQOCPeoaYQivYinWXvY+uLMEDebQoeX4STP7blCtukokBUCUb6ppGv/wtUFmdvTqcji3801q+zlkpSEM0TdCYEhSq1+hMtfAZrbVwLgXPeA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR08MB3490.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(376002)(136003)(366004)(39860400002)(53546011)(6506007)(8676002)(8936002)(4326008)(54906003)(55236004)(71200400001)(33656002)(66946007)(66556008)(2616005)(66476007)(66446008)(76116006)(64756008)(91956017)(2906002)(478600001)(5660300002)(186003)(316002)(86362001)(26005)(6512007)(6486002)(6916009)(36756003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 B+i0iLeBk3vGVLztjDe9XDhYm9gCKsZXshoSmtgwWFv2tG0aozgE+OIFFuW65uAsp76M6StTrZS3shLyWt8RzZMe4E/aINTXuh8Nz+zhYcwcJgfsyGzMdbHpjWpT/Hy2cjnyN/LhUVzK3JyUqIiqFPdmYmtvjC32YYyy4te7dyAZIpJsmcP+PhkT4yCtxOPLo7JrtIbWrJhDrECjE4ykpd1KjUPjuR59SIBMmjB++1nuv5b7jF1YLjiovs9GzvN3PtTg0qHdC2JclV+yPmwXHDaKHKQpJehh6AFzp2G16nUbtgO3fbgs58DTY2jpI1Ls4+31L8vdjbqp/tVM2qECkL8KblGzIhmGiDy7VBWdYedtKQZPRyxlDwfTA3JIFlP5WLXql4kpMT3obrAF/KUktira5IKueeM0MwkXY7gdWgRxdwr6Vp/ud5kigYXaz3s83Eb2WGkZr1Ugimly2f913dysyD1TqedZOswoFSQ0B4rIi4ColfOrH3TaWBZbyBT0oVy2A/dwjhh3/VyURRpHtmc0m7T5cWVG2rUcMgERqvJLH7TX5i9wayDhPb0PWkYTt42KYJWKDVRpHaeEFrJMjRPcf+rjznYdhVlzV7LP24u1DarpqLYXvIbKJGrZ6Y5NrY7X43gWtr7fmPg/ZYDJdQ==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <CB8D54BC02FB6246B7FC2902FDE70BCE@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4051
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e3eaa156-4ac3-4cbc-d3e6-08d87b58b77a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	WF2czqo0UZUvAZ7zm29Pgqvv8SJnoU3Ls7jhNibLMhsoo8KrN0y3AVW8FoB4aM/29Fzd+9hSljZR6Pfyst1jOK/oiC8C//QEaOor98ZqsX5TGZT3Tsd0VegnrliOIc0MqYxs3/HbbOzZ+XjH2hxuthXrYi/RTNOACqSl6PRwjr69C5AfcW9C9ufHMrEHpg5DwTiir314fiUh8NMXKE9gvt8GWfLoI/QIq/tPueMagKgiDSrcmXm5C3i1kqHzvCA7bTIRioAeWqJJY23bLiFz/UqxWAtiO8075KAmxPKuOncZgQ8YplTXqIKQTnyfE9u78anjPdVUUyuP0A5NCPwOHL2LmmAFYvCcf2t8Jfkea1H28khiE/tlnfAgiUasuhvJla/YIsjl6oc+pK6kqmXELA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(396003)(39850400004)(136003)(46966005)(86362001)(4326008)(6862004)(8936002)(478600001)(2906002)(5660300002)(33656002)(356005)(8676002)(82310400003)(47076004)(54906003)(81166007)(36906005)(316002)(6486002)(6512007)(82740400003)(70206006)(26005)(186003)(6506007)(336012)(55236004)(2616005)(36756003)(53546011)(70586007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Oct 2020 15:47:10.5002
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a8aaacd-c248-423e-8f7e-08d87b58bb86
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6125

Hello Julien,

> On 28 Oct 2020, at 11:32 am, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 28/10/2020 10:41, Rahul Singh wrote:
>>> On 28 Oct 2020, at 7:18 am, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 28.10.2020 00:32, Stefano Stabellini wrote:
>>>> On Mon, 26 Oct 2020, Rahul Singh wrote:
>>>>> --- a/xen/drivers/char/Kconfig
>>>>> +++ b/xen/drivers/char/Kconfig
>>>>> @@ -4,6 +4,13 @@ config HAS_NS16550
>>>>> 	help
>>>>> 	  This selects the 16550-series UART support. For most systems, say Y.
>>>>> 
>>>>> +config HAS_NS16550_PCI
>>>>> +	bool "NS16550 UART PCI support" if X86
>>>>> +	default y
>>>>> +	depends on X86 && HAS_NS16550 && HAS_PCI
>>>>> +	help
>>>>> +	  This selects the 16550-series UART PCI support. For most systems, say Y.
>>>> 
>>>> I think that this should be a silent option:
>>>> if HAS_NS16550 && HAS_PCI && X86 -> automatically enable
>>>> otherwise -> automatically disable
>>>> 
>>>> No need to show it to the user.
>>> 
>>> I agree in principle, but I don't see why an X86 dependency gets
>>> added here. HAS_PCI really should be all that's needed.
>>> 
>> Yes, you are right. I will remove the X86 dependency and make HAS_NS16550_PCI depend on "HAS_NS16550 && HAS_PCI".
> 
> There is quite a bit of work needed to make the PCI part of the implementation build on Arm, because the code refers to x86 functions.
> 
> While in theory an NS16550 PCI card could be used on Arm, there is only a slim chance of actual users. So I am not convinced the effort is worth it here.
> 
> Cheers,

OK. I will enable HAS_NS16550_PCI by default on X86 only. Once we have proper support for NS16550 PCI on ARM, we can enable it there as well.
I will modify as follows:

config HAS_NS16550_PCI
    bool
    default y
    depends on X86 && HAS_NS16550 && HAS_PCI
    help
      This selects the 16550-series UART PCI support. For most systems, say Y.


> 
> -- 
> Julien Grall

Thanks,
Rahul



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 15:49:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 15:49:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13733.34476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnhL-0007nk-T4; Wed, 28 Oct 2020 15:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13733.34476; Wed, 28 Oct 2020 15:49:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnhL-0007nd-Pu; Wed, 28 Oct 2020 15:49:15 +0000
Received: by outflank-mailman (input) for mailman id 13733;
 Wed, 28 Oct 2020 15:49:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dk2S=ED=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kXnhK-0007mt-F8
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 15:49:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 347b0444-ce59-4e92-8a2e-5bb9515c174e;
 Wed, 28 Oct 2020 15:49:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 93847AD71;
 Wed, 28 Oct 2020 15:49:12 +0000 (UTC)
X-Inumbo-ID: 347b0444-ce59-4e92-8a2e-5bb9515c174e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603900152;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nFcUCOP/g/UZYCuUBOE6MBVdn1yuJPxtFuzE8jr9Rpw=;
	b=LOKUqf9iDH/0SYrcRnjK2kkdAhowptCiOhzWrBrP298V1KFAR+JnjrpWXge//daq4n3Jha
	nbe3ws03zJuOkVplJytBkme7G41zYSInswxFrFzJ7T7i1cP32sXrpXCYDjwOg1OfWMWiCg
	VTtF816ZG2xqok1+MJpMVc5GLAIwzoI=
Subject: Re: [PATCH v1 1/4] xen/ns16550: solve compilation error on ARM with
 CONFIG_HAS_PCI enabled.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Julien Grall <julien@xen.org>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <d1df24d48508c0c01c0b1130ed22f2b4585d04db.1603731279.git.rahul.singh@arm.com>
 <alpine.DEB.2.21.2010271621480.12247@sstabellini-ThinkPad-T480s>
 <95b72e09-246b-dcbe-6254-86b3e25081c6@suse.com>
 <53C69BD9-716C-4ECF-8287-75C435C561CE@arm.com>
 <749813e0-0b04-0ecf-5dc6-96cfe53c786b@xen.org>
 <752F9365-EB91-46AC-A2AD-060A630D8404@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2b2fd97b-2d82-8af2-8e20-4a623b62447d@suse.com>
Date: Wed, 28 Oct 2020 16:49:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <752F9365-EB91-46AC-A2AD-060A630D8404@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 28.10.2020 16:47, Rahul Singh wrote:
> OK. I will enable HAS_NS16550_PCI by default on X86 only. Once we have proper support for NS16550 PCI on ARM, we can enable it there as well.
> I will modify as follows:
> 
> config HAS_NS16550_PCI
>     bool
>     default y

def_bool y

Jan
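
[For reference, folding Jan's `def_bool` suggestion into the stanza Rahul proposed gives a silent option of roughly the following shape; this is a sketch only, and the final form is up to the patch author:]

```kconfig
# Not user-visible: there is no prompt string, so the value is decided
# entirely by the default and the dependencies, and kconfig never asks
# the user about it.
config HAS_NS16550_PCI
	def_bool y
	depends on X86 && HAS_NS16550 && HAS_PCI
```

[`def_bool y` is shorthand for `bool` plus `default y`; since the option carries no prompt, the `help` text serves no user-facing purpose and can be dropped as well.]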


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 15:49:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 15:49:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13734.34488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnhO-0007p4-5c; Wed, 28 Oct 2020 15:49:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13734.34488; Wed, 28 Oct 2020 15:49:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnhO-0007ov-24; Wed, 28 Oct 2020 15:49:18 +0000
Received: by outflank-mailman (input) for mailman id 13734;
 Wed, 28 Oct 2020 15:49:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u6N/=ED=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXnhM-0007mO-FN
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 15:49:16 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0345f43e-f232-4774-8be8-224733256bd1;
 Wed, 28 Oct 2020 15:49:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXnhE-00054e-T9; Wed, 28 Oct 2020 15:49:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXnhE-0003zz-JG; Wed, 28 Oct 2020 15:49:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXnhE-0003x9-Gd; Wed, 28 Oct 2020 15:49:08 +0000
X-Inumbo-ID: 0345f43e-f232-4774-8be8-224733256bd1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=whyrXehIQsTHVCr6ne/ARc75bgKpsMJNB2o2ccGNP0A=; b=ZzQC4VBqepY6YqnPq8Ylbj8vuG
	eFJ4RfBjcIVbUGi+G5a2A9POfa39Dq3bBpcctaqNFGMO0ElfPodvfARrLnxt2B/h8x80tjZimruPp
	edQgyOwTlbuFHINs20EYIQLJp/ItxQZTxSLypi7XP9bCU9v4sJaxRTJghc13VbnxCWCk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156265-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 156265: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=28b78171271dbbce88bbd4cb2de3d828a51fb169
X-Osstest-Versions-That:
    xen=dc38c1103cfdc643860e10c1b9e925dac83332dc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 28 Oct 2020 15:49:08 +0000

flight 156265 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156265/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 156054

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156054
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156054
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156054
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156054
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156054
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156054
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156054
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156054
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156054
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156054
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156054
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  28b78171271dbbce88bbd4cb2de3d828a51fb169
baseline version:
 xen                  dc38c1103cfdc643860e10c1b9e925dac83332dc

Last test of basis   156054  2020-10-21 01:36:14 Z    7 days
Testing same since   156265  2020-10-27 18:37:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   dc38c1103c..28b7817127  28b78171271dbbce88bbd4cb2de3d828a51fb169 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 15:57:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 15:57:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13754.34508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnp3-0000SV-7O; Wed, 28 Oct 2020 15:57:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13754.34508; Wed, 28 Oct 2020 15:57:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXnp3-0000SO-4S; Wed, 28 Oct 2020 15:57:13 +0000
Received: by outflank-mailman (input) for mailman id 13754;
 Wed, 28 Oct 2020 15:57:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=61hT=ED=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kXnp1-0000SJ-N8
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 15:57:11 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fe21a166-a0ac-47b0-9ecb-1e632dd5374e;
 Wed, 28 Oct 2020 15:57:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603900630;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=zDU992cp5Ht454n786TTTtaGj5ZbiRftTeUs1bN4v+g=;
  b=a7vzjditLZ/WTRFp8KVfDoMP6Ja0jikuESlAGVtQbHjGVdVOENeuOFJU
   hRLSOKhadkt8NrIA2W8ZnkJPSBei5tPZEfkGcxp05oYFxv+Y4AM4jXEp/
   HmPCWzaE9XXr87GBo7tjvXOAMyigG81fkiku94gEStVAS9XKQeylv0LrK
   0=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: oBq1SNoZ9u8Ky4rpMZ7lCGTuYwvzjZ4XAFarFF+kRX0zegxUsMUEZxK4f7vsnRvY8Cntlf7DmV
 3T60L4MajibiqTIPCEdqtab4XcURprScAIq26w91qgCzwTxZxqXb+UdHgOXzJHa9gkqlOy7P4g
 i4Lf3ZoqqerI6NqQWZ/ZrmsaYGhESIDXw7PzD6PP/7mclj7M1RUqugQiXCa7RVRbJvKlxII8lx
 2xt6O1f/u4FTJ4Sde4KCcAJ6PFhr+PMv9zWvQlBeQibZ/aI0mXMrifgIKMT7GWx8vr5sidq7E8
 VSU=
X-SBRS: None
X-MesageID: 31069753
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,427,1596513600"; 
   d="scan'208";a="31069753"
Date: Wed, 28 Oct 2020 15:57:06 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: Jason Andryuk <jandryuk@gmail.com>, osstest service owner
	<osstest-admin@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [qemu-mainline test] 156257: regressions - FAIL
Message-ID: <20201028155706.GE2214@perard.uk.xensource.com>
References: <osstest-156257-mainreport@xen.org>
 <CAKf6xpss8KpGOvZrKiTPz63bhBVbjxRTYWdHEkzUo2q1KEMjhg@mail.gmail.com>
 <70d87480-6fcf-9fe0-34c0-30bd711406a4@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <70d87480-6fcf-9fe0-34c0-30bd711406a4@xen.org>

On Wed, Oct 28, 2020 at 02:49:15PM +0000, Julien Grall wrote:
> (+ Anthony and Stefano,)
> 
> Hi Jason,
> 
> On 28/10/2020 13:37, Jason Andryuk wrote:
> > On Tue, Oct 27, 2020 at 5:23 PM osstest service owner
> > <osstest-admin@xenproject.org> wrote:
> > > 
> > > flight 156257 qemu-mainline real [real]
> > > flight 156266 qemu-mainline real-retest [real]
> > > http://logs.test-lab.xenproject.org/osstest/logs/156257/
> > > http://logs.test-lab.xenproject.org/osstest/logs/156266/
> > > 
> > > Regressions :-(
> > > 
> > > Tests which did not succeed and are blocking,
> > > including tests which could not be run:
> > >   test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
> > 
> > QEMU doesn't start with "qemu-system-i386: -xen-domid 1: Option not
> > supported for this target"
> > 
> > This happens if CONFIG_XEN isn't set.
> > 
> > QEMU is built with:
> >                    host CPU: aarch64
> >             host endianness: little
> >                 target list: i386-softmmu
> > 
> > commit 8a19980e3fc4 "configure: move accelerator logic to meson"
> > introduced this logic:
> > +accelerator_targets = { 'CONFIG_KVM': kvm_targets }
> > +if cpu in ['x86', 'x86_64']
> > +  accelerator_targets += {
> > +    'CONFIG_HAX': ['i386-softmmu', 'x86_64-softmmu'],
> > +    'CONFIG_XEN': ['i386-softmmu', 'x86_64-softmmu'],
> > +    'CONFIG_HVF': ['x86_64-softmmu'],
> > +    'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
> > +  }
> > +endif
> 
> I always wondered when this would come to bite us :). I am surprised it took
> so long.
> 
> > 
> > I guess something like this would fix it:
> > if cpu in ['aarch64', 'arm']
> >    accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'], }
> > endif
> 
> Per the logic above, I think this is correct. @Stefano, @Anthony, can you
> have a look?

Yes, that would probably do the trick, and it restricts the change to x86
and Arm hosts rather than any host with Xen support. I think this would
need a comment explaining why we enable Xen for the i386 target on Arm
(or at least a comment noting that we use the i386 target on Arm).
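
Roughly something like this, say (the comment wording below is purely
illustrative, not a concrete proposal):

    elif cpu in ['arm', 'aarch64']
      # Xen on Arm hosts is driven through the i386-softmmu target
      # (there is no Xen-enabled aarch64-softmmu yet), hence the
      # i386-softmmu entry here.
      accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
    endif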

Thanks for the investigation.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 17:44:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 17:44:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13806.34520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXpUc-0001pO-64; Wed, 28 Oct 2020 17:44:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13806.34520; Wed, 28 Oct 2020 17:44:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXpUc-0001pH-2u; Wed, 28 Oct 2020 17:44:14 +0000
Received: by outflank-mailman (input) for mailman id 13806;
 Wed, 28 Oct 2020 17:44:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i5Ob=ED=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kXpUa-0001pC-Ur
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 17:44:13 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10680334-cc58-45a8-af22-9be9db772c0d;
 Wed, 28 Oct 2020 17:44:11 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id m13so2361383wrj.7
 for <xen-devel@lists.xenproject.org>; Wed, 28 Oct 2020 10:44:11 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id f13sm438798wrp.12.2020.10.28.10.44.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 28 Oct 2020 10:44:09 -0700 (PDT)
Received: from zen.lan (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 5827D1FF7E;
 Wed, 28 Oct 2020 17:44:08 +0000 (GMT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=YAl0tecVsURCuewdLjQ48RaCrF96ohlsENSIUfpsU+w=;
        b=zeafB7Z4qJnIB81bJuKqrWyMO9ds5lDP45CKMEw/pVonwKm9jJoDyipd2H3Vc+mIDd
         2TKaS8P/pMN1ta1Letyd3LaSmJzn9UmePXvDdcM9gP7wyfaCO0OoNAqNi7MhEYY2hvlE
         wn2dugs4oD/TNrUzhgBHo79XxSxR+3Fme57x5yWtFr3WksdfLEeu3SK5r6xrVaUnA+Ij
         rXw2stqIYJj813rwwTXgDnkGdUyC2/vQFSSecLN2D6JQwTuhLJDvU7/T0PwbBnDpaz5x
         FdHjmjsQkKB3lBqpPZJz4p3gQXTawl7x3qiYhlwGLR031uQS5i3qtFFCh8LpzPDPpwoy
         JPMA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=YAl0tecVsURCuewdLjQ48RaCrF96ohlsENSIUfpsU+w=;
        b=PVeu1RrS5wsW6RsRE//yy/wabhRCm9xHXkCXSilM/pxVVspNw2uJJrtImkmFaVkeHh
         AbMakfFctuwdA732ltM6VKzVJQmPoqz3AfHrJNLaWJhb1sWRK4pjEsYBmYq445kFFcqX
         a6As7FEr1tQVLZ7YocbNYTH1o4vlCgdiBFXMbdiuRvVmgy2Mc1+Ifh59LFtl8M83h/OC
         a59uQRckqpC6xEI3S6VbWmiL5btBks3qk91AEqiYuv6tbCRAyizxdnT3zPNOVaCEfpFk
         kzJVlxoqkclVWUJRkGfLd8fRTALS1zSv3qR9wT9Sr6c+vc80c05IV1zBI7afSc8s2JWH
         FS1A==
X-Gm-Message-State: AOAM532EwLrGjRjih1PS0N58ZN603iFN5Gsp68oJiW4SiBN9cu0UOzQo
	jQ1rHfvXWcaWCwDh8HwzDC3h8Q==
X-Google-Smtp-Source: ABdhPJxtamYIDVXm4aGyKgQ9cjhesv/FKnN4U2mpSB4roy/h9H0baZIpnVLxCyBEEchu2j6Kvww+UA==
X-Received: by 2002:a5d:4051:: with SMTP id w17mr531965wrp.24.1603907050586;
        Wed, 28 Oct 2020 10:44:10 -0700 (PDT)
From: =?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
To: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Masami Hiramatsu <masami.hiramatsu@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH] meson.build: fix building of Xen support for aarch64
Date: Wed, 28 Oct 2020 17:44:06 +0000
Message-Id: <20201028174406.23424-1-alex.bennee@linaro.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Xen is supported on aarch64, although, weirdly, via the i386-softmmu
model. Checking based on the host CPU alone meant we never enabled Xen
support there. It would be nice to enable CONFIG_XEN for aarch64-softmmu
to make this less weird, but that will require further build surgery.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Fixes: 8a19980e3f ("configure: move accelerator logic to meson")
---
 meson.build | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/meson.build b/meson.build
index 835424999d..f1fcbfed4c 100644
--- a/meson.build
+++ b/meson.build
@@ -81,6 +81,8 @@ if cpu in ['x86', 'x86_64']
     'CONFIG_HVF': ['x86_64-softmmu'],
     'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
   }
+elif cpu in [ 'arm', 'aarch64' ]
+  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
 endif
 
 ##################
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 17:44:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 17:44:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13807.34531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXpUz-0001u0-E3; Wed, 28 Oct 2020 17:44:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13807.34531; Wed, 28 Oct 2020 17:44:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXpUz-0001tt-B4; Wed, 28 Oct 2020 17:44:37 +0000
Received: by outflank-mailman (input) for mailman id 13807;
 Wed, 28 Oct 2020 17:44:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m2Gk=ED=gmail.com=richardcochran@srs-us1.protection.inumbo.net>)
 id 1kXpUy-0001tm-Fq
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 17:44:36 +0000
Received: from mail-pf1-x444.google.com (unknown [2607:f8b0:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c361b3e-38cf-401a-978e-43b2fa062533;
 Wed, 28 Oct 2020 17:44:35 +0000 (UTC)
Received: by mail-pf1-x444.google.com with SMTP id c20so46886pfr.8
 for <xen-devel@lists.xenproject.org>; Wed, 28 Oct 2020 10:44:35 -0700 (PDT)
Received: from hoboy.vegasvil.org (c-73-241-114-122.hsd1.ca.comcast.net.
 [73.241.114.122])
 by smtp.gmail.com with ESMTPSA id d26sm224049pfo.82.2020.10.28.10.44.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 28 Oct 2020 10:44:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to:user-agent;
        bh=GeMX+k14VhxeG76k1/qSlgb4+cXieu72K4e08pwdt1o=;
        b=InpLHLruF700Q395/JUSDPFi5ct3Q0NFGhIrDU/UmIhGdDThKH8GKD+cw6T2N6QNcD
         PUvF0K87ShFDKaAjJCfvpeMCAaEiJ8eHb269GNLlCVUSVFBQarvkQDFEeK2v65rPe7+J
         tC/1aFfwnYWg9j8u/6Ud/Yyen1uhouxurM0nLvzz3MYFdzPpthJUWusVU6lftQoy21uA
         qiW9X1ZTMFe3cniRtPQ+TCgotIme378DGRcyAXvYYZgBHvV1ur7fGjNBhSILa0vbkqSM
         tRUkeNha3rVBlOMkNme6QRGiAF7X/US6S2FlZ9tPWrsV95ioL0sZyvBU99MP8xfC1Ema
         woPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=GeMX+k14VhxeG76k1/qSlgb4+cXieu72K4e08pwdt1o=;
        b=sn+m+5GF+Yh/tHLzNoBwmOz8s+oaTjBF77eOMWtLfh3AR/3n+WPoAp3/raCdcfQo6F
         CiWOKjQg0IOLh+CRGkZg6yXFcXpoYtQJGxaoHP4pI/zNjnMj7b2gtIA6s5PK4lP3gO9u
         6Ov2BS0dlSKw8uA/bQggOrz2L/AdOC6HvbyzDGzS+bgUbN1ZdQmk6myPuwSVgPNJCvWd
         YQ3kfKxz+KI6j2nCzqj5MfL92y/9AVnWawDt+KGQbzG3YTPYPx5+Muj18Am7wSY5fEoh
         xGCSn0ONkv6TvDmUTPR0zu4xxxxdmYp4Rqzd5UiziKnFFqR+e2BXN7xB7flRmOz0wynL
         A3Sg==
X-Gm-Message-State: AOAM532o2T7mDknJZn1jVZn5aVcjF7KdGoRQtkEqSpXhiIC2DNh3YPX7
	/X5lREjh54SrAGnLy6M+ZIQ=
X-Google-Smtp-Source: ABdhPJw+P7PZ1ANDYILo79Hsg7TsK8M0hrdd6+wFNAfXxYnBAlE0Wr2vQTH+qAkmHlbh1+XaD9f7GA==
X-Received: by 2002:a63:78c3:: with SMTP id t186mr490857pgc.12.1603907075050;
        Wed, 28 Oct 2020 10:44:35 -0700 (PDT)
Date: Wed, 28 Oct 2020 10:44:27 -0700
From: Richard Cochran <richardcochran@gmail.com>
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Cc: Linux Doc Mailing List <linux-doc@vger.kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>,
	"Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
	"Jason A. Donenfeld" <Jason@zx2c4.com>,
	Javier =?iso-8859-1?Q?Gonz=E1lez?= <javier@javigon.com>,
	Jonathan Corbet <corbet@lwn.net>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Alexandre Belloni <alexandre.belloni@bootlin.com>,
	Alexandre Torgue <alexandre.torgue@st.com>,
	Andrew Donnellan <ajd@linux.ibm.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Baolin Wang <baolin.wang7@gmail.com>,
	Benson Leung <bleung@chromium.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Bruno Meneguele <bmeneg@redhat.com>,
	Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy <dmurphy@ti.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Enric Balletbo i Serra <enric.balletbo@collabora.com>,
	Fabrice Gasnier <fabrice.gasnier@st.com>,
	Felipe Balbi <balbi@kernel.org>,
	Frederic Barrat <fbarrat@linux.ibm.com>,
	Guenter Roeck <groeck@chromium.org>,
	Hanjun Guo <guohanjun@huawei.com>,
	Heikki Krogerus <heikki.krogerus@linux.intel.com>,
	Jens Axboe <axboe@kernel.dk>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	Jonathan Cameron <jic23@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Konstantin Khlebnikov <koct9i@gmail.com>,
	Kranthi Kuntala <kranthi.kuntala@intel.com>,
	Lakshmi Ramasubramanian <nramas@linux.microsoft.com>,
	Lars-Peter Clausen <lars@metafoo.de>, Len Brown <lenb@kernel.org>,
	Leonid Maksymchuk <leonmaxx@gmail.com>,
	Ludovic Desroches <ludovic.desroches@microchip.com>,
	Mario Limonciello <mario.limonciello@dell.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain <nayna@linux.ibm.com>,
	Nicolas Ferre <nicolas.ferre@microchip.com>,
	Niklas Cassel <niklas.cassel@wdc.com>,
	Oleh Kravchenko <oleg@kaa.org.ua>, Orson Zhai <orsonzhai@gmail.com>,
	Pavel Machek <pavel@ucw.cz>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Peter Meerwald-Stadler <pmeerw@pmeerw.net>,
	Peter Rosin <peda@axentia.se>, Petr Mladek <pmladek@suse.com>,
	Philippe Bergheaud <felix@linux.ibm.com>,
	Sebastian Reichel <sre@kernel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Vineela Tummalapalli <vineela.tummalapalli@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>, linux-acpi@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-iio@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-pm@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
	linux-usb@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 20/33] docs: ABI: testing: make the files compatible with
 ReST output
Message-ID: <20201028174427.GE9364@hoboy.vegasvil.org>
References: <cover.1603893146.git.mchehab+huawei@kernel.org>
 <4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed, Oct 28, 2020 at 03:23:18PM +0100, Mauro Carvalho Chehab wrote:

> diff --git a/Documentation/ABI/testing/sysfs-uevent b/Documentation/ABI/testing/sysfs-uevent
> index aa39f8d7bcdf..d0893dad3f38 100644
> --- a/Documentation/ABI/testing/sysfs-uevent
> +++ b/Documentation/ABI/testing/sysfs-uevent
> @@ -19,7 +19,8 @@ Description:
>                  a transaction identifier so it's possible to use the same UUID
>                  value for one or more synthetic uevents in which case we
>                  logically group these uevents together for any userspace
> -                listeners. The UUID value appears in uevent as
> +                listeners. The UUID value appears in uevent as:

I know almost nothing about Sphinx, but why have one colon here ^^^ and ...

> +
>                  "SYNTH_UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" environment
>                  variable.
>  
> @@ -30,18 +31,19 @@ Description:
>                  It's possible to define zero or more pairs - each pair is then
>                  delimited by a space character ' '. Each pair appears in
>                  synthetic uevent as "SYNTH_ARG_KEY=VALUE". That means the KEY
> -                name gains "SYNTH_ARG_" prefix to avoid possible collisions
> +                name gains `SYNTH_ARG_` prefix to avoid possible collisions
>                  with existing variables.
>  
> -                Example of valid sequence written to the uevent file:
> +                Example of valid sequence written to the uevent file::

... two here?

Thanks,
Richard


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 18:13:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 18:13:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13813.34544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXpxH-0004dH-R0; Wed, 28 Oct 2020 18:13:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13813.34544; Wed, 28 Oct 2020 18:13:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXpxH-0004dA-Nt; Wed, 28 Oct 2020 18:13:51 +0000
Received: by outflank-mailman (input) for mailman id 13813;
 Wed, 28 Oct 2020 18:13:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=61hT=ED=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kXpxG-0004d5-5H
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 18:13:50 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16e3adbb-f6b6-41cd-860c-2dec95519e15;
 Wed, 28 Oct 2020 18:13:49 +0000 (UTC)
X-Inumbo-ID: 16e3adbb-f6b6-41cd-860c-2dec95519e15
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603908829;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=iJRY66IfuqDwktqFQFJqHM15N+oXk8Kse5TIrgqQc/w=;
  b=ejuqCs2C8Ws0WOawEQLOX+bfGA6c8x1eHrxoLVdzPSM4QrWV6Oybfwr/
   c9ts+75jXC4wOF+cUgLZYe9JFR890NIjknv0+QqxmlkhwKIOTB74lck/9
   mv+0UJpKAKxuPu74uNKClmKHEhdccviQVRTCJ1DzLSb+hl7LdvLrQCm60
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: EP5JZDbke5FPw731Eg1l8W+oCE7VrAssW9K+/7chfRPmLOd6M1U4OdmFfdebedk/DUYFQVMtT4
 LAt+FolsY0LRCBLFqTu+koHCGnw2hFZXPd0cvoJ4sDS/Wj23onJqojKAs4OR1MD3I5pHczOTxl
 v3FhKcXatoe13erkiJP2Li9lFK7KR3vTQz2Apkb61zQ1TWGS96iE88qynTGJ5d2X1nAyMQRIWu
 sbNmvr7wXrbvMwU7Ce6kPJp/T4oSjYgCURfhvFyQEgowBYHi0vHIGxRzpfPtcH99lLVmHnAYaq
 VN4=
X-SBRS: None
X-MesageID: 30004285
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,427,1596513600"; 
   d="scan'208";a="30004285"
Date: Wed, 28 Oct 2020 18:13:44 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, <xen-devel@lists.xenproject.org>,
	Olaf Hering <olaf@aepfle.de>
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
Message-ID: <20201028181344.GA273696@perard.uk.xensource.com>
References: <20201026204151.23459-1-olaf@aepfle.de>
 <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
 <20201027112703.24d55a50.olaf@aepfle.de>
 <bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
 <24025dd2-2c61-7e92-a9b1-2433eea2e909@citrix.com>
 <3880bcbd-9281-10a5-7de5-f73bcf74557a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <3880bcbd-9281-10a5-7de5-f73bcf74557a@suse.com>

On Tue, Oct 27, 2020 at 12:06:56PM +0100, Jan Beulich wrote:
> On 27.10.2020 11:57, Andrew Cooper wrote:
> > On 27/10/2020 10:37, Jan Beulich wrote:
> >> On 27.10.2020 11:27, Olaf Hering wrote:
> >>> Am Tue, 27 Oct 2020 11:16:04 +0100
> >>> schrieb Jan Beulich <jbeulich@suse.com>:
> >>>
> >>>> This pattern is used when a rule consists of multiple commands
> >>>> having their output appended to one another's.
> >>> My understanding is: a rule is satisfied as soon as the file exists.
> >> No - once make has found that a rule's commands need running, it'll
> >> run the full set and only check again afterwards.
> > 
> > It stops at the first command which fails.
> > 
> > Olaf is correct, but the problem here is an incremental build issue, not
> > a parallel build issue.
> > 
> > Intermediate files must not use the name of the target, or a failure and
> > re-build will use the (bogus) intermediate state rather than rebuilding it.
> 
> But there's no intermediate file here - the file gets created in one
> go. Furthermore, doesn't make delete the target file(s) when a rule
> fails? (One may not want to rely on this, and hence indeed have
> multi-part rules update intermediate files of different names.)

What if something went badly enough that sed didn't manage to write the
whole file and make never had a chance to remove the bogus file?

Surely, this patch is a strict improvement to the build system.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 18:36:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 18:36:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13816.34556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXqJ6-0006Pw-LU; Wed, 28 Oct 2020 18:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13816.34556; Wed, 28 Oct 2020 18:36:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXqJ6-0006Pp-Hh; Wed, 28 Oct 2020 18:36:24 +0000
Received: by outflank-mailman (input) for mailman id 13816;
 Wed, 28 Oct 2020 18:36:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HSML=ED=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXqJ5-0006Pk-3R
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 18:36:23 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 689dee8f-9d11-46cc-ba57-53faf97fa844;
 Wed, 28 Oct 2020 18:36:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXqIz-0000bu-7d; Wed, 28 Oct 2020 18:36:17 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXqIy-0001QX-W2; Wed, 28 Oct 2020 18:36:17 +0000
X-Inumbo-ID: 689dee8f-9d11-46cc-ba57-53faf97fa844
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=LUrcTcFN0QRq9BA2TyUj84Y4q85VI4gMkfX2e7gxxMA=; b=1dj6Es7iWMVrolp+U2F30eNoTA
	q5K8hhtCUCiMZaDgST+aO/+NLRNRns9Ra8//qgrKuJpbdhMIEl5tRQzRuSWuEt+7TN6sYya5txe63
	qEejtOrXWsotuH1T8QyRcgSPvOt5oDFTn2IAIuOSqTn4or621dJjrO9o+LGaGGC/y8ik=;
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
 <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
 <alpine.DEB.2.21.2010271540110.12247@sstabellini-ThinkPad-T480s>
 <759F39C4-F834-4BFC-B897-714612AEACD8@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <50410a64-634a-1445-e16f-468b7d943f9b@xen.org>
Date: Wed, 28 Oct 2020 18:36:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <759F39C4-F834-4BFC-B897-714612AEACD8@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 28/10/2020 08:43, Bertrand Marquis wrote:
> 
> 
>> On 27 Oct 2020, at 22:44, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> On Mon, 26 Oct 2020, Bertrand Marquis wrote:
>>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>>> not implementing the workaround for it could deadlock the system.
>>> Add a warning during boot informing the user that only trusted guests
>>> should be executed on the system.
>>> An equivalent warning is already given to the user by KVM on cores
>>> affected by this errata.
>>>
>>> Also taint the hypervisor as unsecure when this errata applies and
>>> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
>>>
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>> SUPPORT.md               |  1 +
>>> xen/arch/arm/cpuerrata.c | 13 +++++++++++++
>>> 2 files changed, 14 insertions(+)
>>>
>>> diff --git a/SUPPORT.md b/SUPPORT.md
>>> index 5fbe5fc444..f7a3b046b0 100644
>>> --- a/SUPPORT.md
>>> +++ b/SUPPORT.md
>>> @@ -38,6 +38,7 @@ supported in this document.
>>> ### ARM v8
>>>
>>>      Status: Supported
>>> +    Status, Cortex A57 r0p0 - r1p2, not security supported (Errata 832075)
>>>
>>> ## Host hardware support
>>>
>>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>>> index 0430069a84..b35e8cd0b9 100644
>>> --- a/xen/arch/arm/cpuerrata.c
>>> +++ b/xen/arch/arm/cpuerrata.c
>>> @@ -503,6 +503,19 @@ void check_local_cpu_errata(void)
>>> void __init enable_errata_workarounds(void)
>>> {
>>>      enable_cpu_capabilities(arm_errata);
>>> +
>>> +#ifdef CONFIG_ARM64_ERRATUM_832075
>>> +    if ( cpus_have_cap(ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE) )
>>> +    {
>>> +        printk_once("**** This CPU is affected by the errata 832075. ****\n"
>>> +                    "**** Guests without CPU erratum workarounds     ****\n"
>>> +                    "**** can deadlock the system!                   ****\n"
>>> +                    "**** Only trusted guests should be used.        ****\n");
>>
>> These can be on 2 lines, no need to be on 4 lines.
> 
> I can fix that in a v3.
> 
>>
>>
>> I know that Julien wrote about printing the warning from
>> enable_errata_workarounds but to me it looks more natural if we did it
>> from the .enable function specific to ARM64_WORKAROUND_DEVICE_LOAD_ACQUIRE.
> 
> I have no preference here either, but I kind of like it this way because if we had
> more warnings they would all be at the same place.

So I had this placement in mind because the previous version was using 
warning_add() (it can't be called from a non-init helper). As we are using 
printk_once() now, I don't really have a preference.

So I would stick with what you wrote.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 18:39:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 18:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13820.34567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXqLz-0006ai-2y; Wed, 28 Oct 2020 18:39:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13820.34567; Wed, 28 Oct 2020 18:39:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXqLz-0006ab-04; Wed, 28 Oct 2020 18:39:23 +0000
Received: by outflank-mailman (input) for mailman id 13820;
 Wed, 28 Oct 2020 18:39:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HSML=ED=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXqLx-0006aW-VH
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 18:39:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0afda49f-1c62-46d1-b86e-99e3ff4b169f;
 Wed, 28 Oct 2020 18:39:20 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXqLs-0000gN-Q5; Wed, 28 Oct 2020 18:39:16 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXqLs-0001nJ-Ih; Wed, 28 Oct 2020 18:39:16 +0000
X-Inumbo-ID: 0afda49f-1c62-46d1-b86e-99e3ff4b169f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=lml6EC611nVUIyq5DR/FBiHXXR9rNBG5/vkviIFPhF0=; b=OhEg9PUL6p7GJ//NKN2df3rLx3
	Ju/DtVBar0DM7JswbdUQvntDnnGBLZ0aPGOwRpXUr4/e7xAcPKDk0Wl64cFhy3SZ/h0iWW8MKtt7G
	62auUSitfZjuGcyyn11O6qpcAL6qBR5yPNUDwpbOvHMe3XSF3ZGVcI9ujFVlmKR/ZlgM=;
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
To: Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
 <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c6790d34-2893-78c4-d49f-7ef4acfceb96@xen.org>
Date: Wed, 28 Oct 2020 18:39:14 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 26/10/2020 16:21, Bertrand Marquis wrote:
> When a Cortex A57 processor is affected by CPU errata 832075, a guest
> not implementing the workaround for it could deadlock the system.
> Add a warning during boot informing the user that only trusted guests
> should be executed on the system.
> An equivalent warning is already given to the user by KVM on cores
> affected by this errata.
> 
> Also taint the hypervisor as unsecure when this errata applies and
> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

If you don't need to resend the series, then I would be happy to fix the 
typo pointed out by George on commit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:12:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:12:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13826.34580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXqrj-0001Xm-Rg; Wed, 28 Oct 2020 19:12:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13826.34580; Wed, 28 Oct 2020 19:12:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXqrj-0001Xf-NY; Wed, 28 Oct 2020 19:12:11 +0000
Received: by outflank-mailman (input) for mailman id 13826;
 Wed, 28 Oct 2020 19:12:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HSML=ED=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kXqri-0001Xa-7w
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:12:10 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab32d74f-daaf-42e1-889c-2a580036df52;
 Wed, 28 Oct 2020 19:12:08 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXqrf-0001Le-Fc; Wed, 28 Oct 2020 19:12:07 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kXqrf-0004JU-3o; Wed, 28 Oct 2020 19:12:07 +0000
X-Inumbo-ID: ab32d74f-daaf-42e1-889c-2a580036df52
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Mq3kpiFjZWGfYPCI7erEYQQ7uke5ZgawXGto721oxjw=; b=5V/74rfXOkbme76oT8iYcFGJH/
	Btxb5uvi8pXux0FTSWdeH/OtDjEBsf5XlGbObrpeCuVcl8he3Z5QSSZ9K1iTzRROcvPxv6oecTfQD
	BMQNPUuGl7Ptla3I2+tPAD+JAbL3bkYKo+xqNT4fDmabbteRht2OhtP4sauhHHRRwgmo=;
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich
 <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bc697327-2750-9a78-042d-d45690d27928@xen.org>
Date: Wed, 28 Oct 2020 19:12:05 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 26/10/2020 11:03, Rahul Singh wrote:
> Hello Julien,

Hi Rahul,

>> On 23 Oct 2020, at 4:19 pm, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 23/10/2020 15:27, Rahul Singh wrote:
>>> Hello Julien,
>>>> On 23 Oct 2020, at 2:00 pm, Julien Grall <julien@xen.org> wrote:
>>>>
>>>>
>>>>
>>>> On 23/10/2020 12:35, Rahul Singh wrote:
>>>>> Hello,
>>>>>> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>
>>>>>> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>>>>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>>>>     that supports both Stage-1 and Stage-2 translations.
>>>>>>>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>>>>>>>     capability to share the page tables with the CPU.
>>>>>>>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>>>>>>>     and priority queue IRQ handling.
>>>>>>>>>
>>>>>>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>>>>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>>>>>>> they are done).
>>>>>>>>>
>>>>>>>>> Do you know why Linux is using threads? Is it because of long-running
>>>>>>>>> operations?
>>>>>>>>
>>>>>>>> Yes, you are right: because of long-running operations, Linux is using
>>>>>>>> threaded IRQs.
>>>>>>>>
>>>>>>>> SMMUv3 reports faults/events based on memory-based circular buffer queues,
>>>>>>>> not on registers. As per my understanding, it is time-consuming to process
>>>>>>>> the memory-based queues in interrupt context; because of that, Linux is
>>>>>>>> using threaded IRQs to process the faults/events from the SMMU.
>>>>>>>>
>>>>>>>> I didn’t find any other mechanism in XEN to defer the work in place of
>>>>>>>> tasklets; that’s why I used tasklets in XEN as a replacement for threaded
>>>>>>>> IRQs. If we do all the work in interrupt context we will make XEN less
>>>>>>>> responsive.
>>>>>>>
>>>>>>> So we need to make sure that Xen continue to receives interrupts, but we also
>>>>>>> need to make sure that a vCPU bound to the pCPU is also responsive.
>>>>>>>
>>>>>>>>
>>>>>>>> If you know of another solution in XEN that can be used to defer the work
>>>>>>>> from the interrupt handler, please let me know and I will try to use it.
>>>>>>>
>>>>>>> One of my work colleagues encountered a similar problem recently. He had a long
>>>>>>> running tasklet and wanted it broken down into smaller chunks.
>>>>>>>
>>>>>>> We decided to use a timer to reschedule the tasklet in the future. This allows
>>>>>>> the scheduler to run other loads (e.g. vCPU) for some time.
>>>>>>>
>>>>>>> This is pretty hackish but I couldn't find a better solution as tasklets have
>>>>>>> high priority.
>>>>>>>
>>>>>>> Maybe the others will have a better idea.
>>>>>>
>>>>>> Julien's suggestion is a good one.
>>>>>>
>>>>>> But I think tasklets can be configured to be called from the idle_loop,
>>>>>> in which case they are not run in interrupt context?
>>>>>>
>>>>>   Yes, you are right, the tasklet will be scheduled from the idle_loop, which is not interrupt context.
>>>>
>>>> This depends on your tasklet. Some will run from the softirq context which is usually (for Arm) on the return of an exception.
>>>>
>>> Thanks for the info. I will check and get a better understanding of how tasklets run in XEN.
>>>>>>
>>>>>>>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>>>>>>>>     access functions based on atomic operations implemented in Linux.
>>>>>>>>>
>>>>>>>>> Can you provide more details?
>>>>>>>>
>>>>>>>> I tried to port the latest version of the SMMUv3 code, then I observed that
>>>>>>>> in order to port that code I would also have to port the atomic operations
>>>>>>>> implemented in Linux to XEN, as the latest Linux code uses atomic operations
>>>>>>>> to process the command queues (atomic_cond_read_relaxed(),
>>>>>>>> atomic_long_cond_read_relaxed(), atomic_fetch_andnot_relaxed()).
>>>>>>>
>>>>>>> Thank you for the explanation. I think it would be best to import the atomic
>>>>>>> helpers and use the latest code.
>>>>>>>
>>>>>>> This will ensure that we don't re-introduce bugs and also buy us some time
>>>>>>> before the Linux and Xen driver diverge again too much.
>>>>>>>
>>>>>>> Stefano, what do you think?
>>>>>>
>>>>>> I think you are right.
>>>>> Yes, I agree with you that the XEN code should be in sync with the Linux code; that's why I started by porting the Linux atomic operations to XEN, then I realised that porting the atomic operations is not straightforward and requires a lot of effort and testing. Therefore I decided to port the code from before the atomic operations were introduced in Linux.
>>>>
>>>> Hmmm... I would not have expected a lot of effort to be required to add the 3 atomic operations above. Are you trying to also port the LSE support at the same time?
>>> There are other atomic operations used in the SMMUv3 code apart from the 3 atomic operations I mentioned. I just mentioned 3 operations as an example.
>>
>> Ok. Do you have a list you could share?
>>
> 
> Yes. Please find the list that we have to port to the XEN in order to merge the latest SMMUv3 code.

Thanks!

> 
> If we start to port the list below, we might have to port further atomic operations on which the ones below are implemented. I have not spent time on how these atomic operations are implemented in detail, but as per my understanding it will require effort to port them to XEN and a lot of testing.

To begin with, I think it is fine to implement them with a stronger 
memory barrier than necessary or in a less efficient way. This can be 
refined afterwards.

Those helpers can directly be defined in the SMMUv3 drivers so we know 
they are not for general purpose :).

> 
> 1. atomic_set_release

This could be implemented as:

smp_mb();
atomic_set();

> 2. atomic_fetch_andnot_relaxed

This would need to be imported.

> 3. atomic_cond_read_relaxed

This would need to be imported. The simplest version seems to be the 
generic version provided by include/asm-generic/barrier.h (see 
smp_cond_load_relaxed).

> 4. atomic_long_cond_read_relaxed
> 5. atomic_long_xor

These two would require the implementation of atomic64. Volodymyr also 
needed a version; I offered my help, however I haven't found enough time 
to do it yet :(.

For Arm64, it would be possible to do a copy/paste of the existing 
helpers and replace anything related to a 32-bit register with a 64-bit one.

For Arm32, they are a bit more complex because you now need to work with 
2 registers.

However, for your purpose, you would be using atomic_long_t, so the 
Arm64 implementation should be sufficient.

> 6. atomic_set_release

Same as 1.

> 7. atomic_cmpxchg_relaxed - maybe we can use atomic_cmpxchg, which is implemented in XEN; need to check.

atomic_cmpxchg() is strongly ordered, so it would be fine to use it to 
implement the helper, although it is less efficient :).

> 8. atomic_dec_return_release

Xen implements a stronger version, atomic_dec_return(). You can re-use 
it here.

> 9. atomic_fetch_inc_relaxed

This would need to be imported.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13838.34608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEK-0003Pz-0q; Wed, 28 Oct 2020 19:35:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13838.34608; Wed, 28 Oct 2020 19:35:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEJ-0003Ps-Th; Wed, 28 Oct 2020 19:35:31 +0000
Received: by outflank-mailman (input) for mailman id 13838;
 Wed, 28 Oct 2020 19:35:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEI-0003Ov-NW
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e62bffd7-7be4-471e-9d06-9d7bbbb6014f;
 Wed, 28 Oct 2020 19:35:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EE52FB911;
 Wed, 28 Oct 2020 19:35:26 +0000 (UTC)
X-Inumbo-ID: e62bffd7-7be4-471e-9d06-9d7bbbb6014f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v6 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function
Date: Wed, 28 Oct 2020 20:35:12 +0100
Message-Id: <20201028193521.2489-2-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The parameters map and is_iomem always have the same values. Remove them
to prepare the function for conversion to struct dma_buf_map.

v4:
	* don't check for !kmap->virtual; will always be false

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++--------------
 1 file changed, 4 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index 9da823eb0edd..f445b84c43c4 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -379,32 +379,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
-				      bool map, bool *is_iomem)
+static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
 {
 	int ret;
 	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	bool is_iomem;
 
 	if (gbo->kmap_use_count > 0)
 		goto out;
 
-	if (kmap->virtual || !map)
-		goto out;
-
 	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
 	if (ret)
 		return ERR_PTR(ret);
 
 out:
-	if (!kmap->virtual) {
-		if (is_iomem)
-			*is_iomem = false;
-		return NULL; /* not mapped; don't increment ref */
-	}
 	++gbo->kmap_use_count;
-	if (is_iomem)
-		return ttm_kmap_obj_virtual(kmap, is_iomem);
-	return kmap->virtual;
+	return ttm_kmap_obj_virtual(kmap, &is_iomem);
 }
 
 static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
@@ -449,7 +439,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo, true, NULL);
+	base = drm_gem_vram_kmap_locked(gbo);
 	if (IS_ERR(base)) {
 		ret = PTR_ERR(base);
 		goto err_drm_gem_vram_unpin_locked;
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13839.34620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEN-0003S6-AE; Wed, 28 Oct 2020 19:35:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13839.34620; Wed, 28 Oct 2020 19:35:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEN-0003Rw-62; Wed, 28 Oct 2020 19:35:35 +0000
Received: by outflank-mailman (input) for mailman id 13839;
 Wed, 28 Oct 2020 19:35:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEM-0003Op-HF
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a82aa3c7-7635-4cd2-9372-e2b875948661;
 Wed, 28 Oct 2020 19:35:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BDAC6B916;
 Wed, 28 Oct 2020 19:35:27 +0000 (UTC)
X-Inumbo-ID: a82aa3c7-7635-4cd2-9372-e2b875948661
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v6 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
Date: Wed, 28 Oct 2020 20:35:16 +0100
Message-Id: <20201028193521.2489-6-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel
address space. The mapping's address is returned as struct dma_buf_map.
Each function is a simplified version of TTM's existing kmap code. Both
functions respect the memory's location and/or writecombine flags.

On top of TTM's functions, the GEM TTM helpers gain drm_gem_ttm_{vmap,vunmap}(),
two helpers that convert a GEM object into the TTM BO and forward the call
to TTM's vmap/vunmap. These helpers can be dropped into the respective GEM
object callbacks.

v5:
	* use size_t for storing mapping size (Christian)
	* ignore premapped memory areas correctly in ttm_bo_vunmap()
	* rebase onto latest TTM interfaces (Christian)
	* remove BUG() from ttm_bo_vmap() (Christian)
v4:
	* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel,
	  Christian)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++
 drivers/gpu/drm/ttm/ttm_bo_util.c    | 72 ++++++++++++++++++++++++++++
 include/drm/drm_gem_ttm_helper.h     |  6 +++
 include/drm/ttm/ttm_bo_api.h         | 28 +++++++++++
 include/linux/dma-buf-map.h          | 20 ++++++++
 5 files changed, 164 insertions(+)

diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c
index 0e4fb9ba43ad..db4c14d78a30 100644
--- a/drivers/gpu/drm/drm_gem_ttm_helper.c
+++ b/drivers/gpu/drm/drm_gem_ttm_helper.c
@@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 }
 EXPORT_SYMBOL(drm_gem_ttm_print_info);
 
+/**
+ * drm_gem_ttm_vmap() - vmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: [out] returns the dma-buf mapping.
+ *
+ * Maps a GEM object with ttm_bo_vmap(). This function can be used as
+ * &drm_gem_object_funcs.vmap callback.
+ *
+ * Returns:
+ * 0 on success, or a negative errno code otherwise.
+ */
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+		     struct dma_buf_map *map)
+{
+	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+	return ttm_bo_vmap(bo, map);
+
+}
+EXPORT_SYMBOL(drm_gem_ttm_vmap);
+
+/**
+ * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object
+ * @gem: GEM object.
+ * @map: dma-buf mapping.
+ *
+ * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as
+ * &drm_gem_object_funcs.vunmap callback.
+ */
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+			struct dma_buf_map *map)
+{
+	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem);
+
+	ttm_bo_vunmap(bo, map);
+}
+EXPORT_SYMBOL(drm_gem_ttm_vunmap);
+
 /**
  * drm_gem_ttm_mmap() - mmap &ttm_buffer_object
  * @gem: GEM object.
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index ecb54415d1ca..7ccb2295cac1 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -32,6 +32,7 @@
 #include <drm/ttm/ttm_bo_driver.h>
 #include <drm/ttm/ttm_placement.h>
 #include <drm/drm_vma_manager.h>
+#include <linux/dma-buf-map.h>
 #include <linux/io.h>
 #include <linux/highmem.h>
 #include <linux/wait.h>
@@ -471,6 +472,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map)
 }
 EXPORT_SYMBOL(ttm_bo_kunmap);
 
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+	struct ttm_resource *mem = &bo->mem;
+	int ret;
+
+	ret = ttm_mem_io_reserve(bo->bdev, mem);
+	if (ret)
+		return ret;
+
+	if (mem->bus.is_iomem) {
+		void __iomem *vaddr_iomem;
+		size_t size = bo->num_pages << PAGE_SHIFT;
+
+		if (mem->bus.addr)
+			vaddr_iomem = (void __iomem *)mem->bus.addr;
+		else if (mem->bus.caching == ttm_write_combined)
+			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
+		else
+			vaddr_iomem = ioremap(mem->bus.offset, size);
+
+		if (!vaddr_iomem)
+			return -ENOMEM;
+
+		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
+
+	} else {
+		struct ttm_operation_ctx ctx = {
+			.interruptible = false,
+			.no_wait_gpu = false
+		};
+		struct ttm_tt *ttm = bo->ttm;
+		pgprot_t prot;
+		void *vaddr;
+
+		ret = ttm_tt_populate(bo->bdev, ttm, &ctx);
+		if (ret)
+			return ret;
+
+		/*
+		 * We need to use vmap to get the desired page protection
+		 * or to make the buffer object look contiguous.
+		 */
+		prot = ttm_io_prot(bo, mem, PAGE_KERNEL);
+		vaddr = vmap(ttm->pages, bo->num_pages, 0, prot);
+		if (!vaddr)
+			return -ENOMEM;
+
+		dma_buf_map_set_vaddr(map, vaddr);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(ttm_bo_vmap);
+
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
+{
+	struct ttm_resource *mem = &bo->mem;
+
+	if (dma_buf_map_is_null(map))
+		return;
+
+	if (!map->is_iomem)
+		vunmap(map->vaddr);
+	else if (!mem->bus.addr)
+		iounmap(map->vaddr_iomem);
+	dma_buf_map_clear(map);
+
+	ttm_mem_io_free(bo->bdev, &bo->mem);
+}
+EXPORT_SYMBOL(ttm_bo_vunmap);
+
 static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo,
 				 bool dst_use_tt)
 {
diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h
index 118cef76f84f..7c6d874910b8 100644
--- a/include/drm/drm_gem_ttm_helper.h
+++ b/include/drm/drm_gem_ttm_helper.h
@@ -10,11 +10,17 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+struct dma_buf_map;
+
 #define drm_gem_ttm_of_gem(gem_obj) \
 	container_of(gem_obj, struct ttm_buffer_object, base)
 
 void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent,
 			    const struct drm_gem_object *gem);
+int drm_gem_ttm_vmap(struct drm_gem_object *gem,
+		     struct dma_buf_map *map);
+void drm_gem_ttm_vunmap(struct drm_gem_object *gem,
+			struct dma_buf_map *map);
 int drm_gem_ttm_mmap(struct drm_gem_object *gem,
 		     struct vm_area_struct *vma);
 
diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h
index 37102e45e496..2c59a785374c 100644
--- a/include/drm/ttm/ttm_bo_api.h
+++ b/include/drm/ttm/ttm_bo_api.h
@@ -48,6 +48,8 @@ struct ttm_bo_global;
 
 struct ttm_bo_device;
 
+struct dma_buf_map;
+
 struct drm_mm_node;
 
 struct ttm_placement;
@@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
  */
 void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map);
 
+/**
+ * ttm_bo_vmap
+ *
+ * @bo: The buffer object.
+ * @map: pointer to a struct dma_buf_map representing the map.
+ *
+ * Sets up a kernel virtual mapping, using ioremap or vmap to the
+ * data in the buffer object. The parameter @map returns the virtual
+ * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap().
+ *
+ * Returns
+ * -ENOMEM: Out of memory.
+ * -EINVAL: Invalid range.
+ */
+int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
+/**
+ * ttm_bo_vunmap
+ *
+ * @bo: The buffer object.
+ * @map: Object describing the map to unmap.
+ *
+ * Unmaps a kernel map set up by ttm_bo_vmap().
+ */
+void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);
+
 /**
  * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object.
  *
diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index fd1aba545fdf..2e8bbecb5091 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -45,6 +45,12 @@
  *
  *	dma_buf_map_set_vaddr(&map. 0xdeadbeaf);
  *
+ * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem().
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
 	map->is_iomem = false;
 }
 
+/**
+ * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory
+ * @map:		The dma-buf mapping structure
+ * @vaddr_iomem:	An I/O-memory address
+ *
+ * Sets the address and the I/O-memory flag.
+ */
+static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map,
+					       void __iomem *vaddr_iomem)
+{
+	map->vaddr_iomem = vaddr_iomem;
+	map->is_iomem = true;
+}
+
 /**
  * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality
  * @lhs:	The dma-buf mapping structure
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13840.34632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEO-0003UW-Oh; Wed, 28 Oct 2020 19:35:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13840.34632; Wed, 28 Oct 2020 19:35:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEO-0003UI-L7; Wed, 28 Oct 2020 19:35:36 +0000
Received: by outflank-mailman (input) for mailman id 13840;
 Wed, 28 Oct 2020 19:35:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEN-0003Ov-LL
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd7479bd-7b44-42f1-8b8d-53cc7ae4025f;
 Wed, 28 Oct 2020 19:35:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1DC7DB913;
 Wed, 28 Oct 2020 19:35:27 +0000 (UTC)
X-Inumbo-ID: bd7479bd-7b44-42f1-8b8d-53cc7ae4025f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v6 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
Date: Wed, 28 Oct 2020 20:35:15 +0100
Message-Id: <20201028193521.2489-5-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove
them before changing the interface to use struct dma_buf_map. As a side
effect of removing drm_gem_prime_vmap(), the error code changes from
ENOMEM to EOPNOTSUPP.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------
 drivers/gpu/drm/exynos/exynos_drm_gem.h |  2 --
 2 files changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
index e7a6eb96f692..13a35623ac04 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
@@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = {
 static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = {
 	.free = exynos_drm_gem_free_object,
 	.get_sg_table = exynos_drm_gem_prime_get_sg_table,
-	.vmap = exynos_drm_gem_prime_vmap,
-	.vunmap	= exynos_drm_gem_prime_vunmap,
 	.vm_ops = &exynos_drm_gem_vm_ops,
 };
 
@@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 	return &exynos_gem->base;
 }
 
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	return NULL;
-}
-
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* Nothing to do */
-}
-
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma)
 {
diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
index 74e926abeff0..a23272fb96fb 100644
--- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
+++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
@@ -107,8 +107,6 @@ struct drm_gem_object *
 exynos_drm_gem_prime_import_sg_table(struct drm_device *dev,
 				     struct dma_buf_attachment *attach,
 				     struct sg_table *sgt);
-void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj);
-void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj,
 			      struct vm_area_struct *vma);
 
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13837.34596 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEI-0003P3-Nn; Wed, 28 Oct 2020 19:35:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13837.34596; Wed, 28 Oct 2020 19:35:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEI-0003Ow-Kp; Wed, 28 Oct 2020 19:35:30 +0000
Received: by outflank-mailman (input) for mailman id 13837;
 Wed, 28 Oct 2020 19:35:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEH-0003Op-NA
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id beff49a3-1948-440e-bf4b-7f1b754e902a;
 Wed, 28 Oct 2020 19:35:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E95D4B90E;
 Wed, 28 Oct 2020 19:35:26 +0000 (UTC)
X-Inumbo-ID: beff49a3-1948-440e-bf4b-7f1b754e902a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v6 00/10] Support GEM object mappings from I/O memory
Date: Wed, 28 Oct 2020 20:35:11 +0100
Message-Id: <20201028193521.2489-1-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

DRM's fbdev console uses regular load and store operations to update
framebuffer memory. The bochs driver on sparc64 requires the use of
I/O-specific load and store operations. We have a workaround, but need
a long-term solution to the problem.

This patchset changes GEM's vmap/vunmap interfaces to forward pointers
of type struct dma_buf_map and updates the generic fbdev emulation to
use them correctly. This enables I/O-memory operations on all framebuffers
that require and support them.

Patches #1 to #4 prepare VRAM helpers and drivers.

Next is the update of the GEM vmap functions. Patch #5 adds vmap and vunmap
helpers that are usable with TTM-based GEM drivers, and patch #6 updates GEM's
vmap/vunmap callbacks to forward instances of type struct dma_buf_map. While
the patch touches many files throughout the DRM modules, the applied changes
are mostly trivial interface fixes. Several TTM-based GEM drivers now use
the new vmap code. Patch #7 updates GEM's internal vmap/vunmap functions to
forward struct dma_buf_map.

With struct dma_buf_map propagated through the layers, patches #8 to #10
convert DRM clients and generic fbdev emulation to use it. Updating the
fbdev framebuffer will select the correct functions, either for system or
I/O memory.

v6:
	* don't call page_to_phys() on fbdev framebuffers in I/O memory;
	  warn instead (Daniel)
v5:
	* rebase onto latest TTM changes (Christian)
	* support TTM premapped memory correctly (Christian)
	* implement fb_read/fb_write internally (Sam, Daniel)
	* cleanups
v4:
	* provide TTM vmap/vunmap plus GEM helpers and convert drivers
	  over (Christian, Daniel)
	* remove several empty functions
	* more TODOs and documentation (Daniel)
v3:
	* recreate the whole patchset on top of struct dma_buf_map
v2:
	* RFC patchset

Thomas Zimmermann (10):
  drm/vram-helper: Remove invariant parameters from internal kmap
    function
  drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
  drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
  drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}()
  drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers
  drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM
    backends
  drm/gem: Update internal GEM vmap/vunmap interfaces to use struct
    dma_buf_map
  drm/gem: Store client buffer mappings as struct dma_buf_map
  dma-buf-map: Add memcpy and pointer-increment interfaces
  drm/fb_helper: Support framebuffers in I/O memory

 Documentation/gpu/todo.rst                  |  37 ++-
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 ---
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
 drivers/gpu/drm/ast/ast_cursor.c            |  27 +-
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/bochs/bochs_kms.c           |   1 -
 drivers/gpu/drm/drm_client.c                |  38 +--
 drivers/gpu/drm/drm_fb_helper.c             | 257 ++++++++++++++++++--
 drivers/gpu/drm/drm_gem.c                   |  29 ++-
 drivers/gpu/drm/drm_gem_cma_helper.c        |  27 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 ++--
 drivers/gpu/drm/drm_gem_ttm_helper.c        |  38 +++
 drivers/gpu/drm/drm_gem_vram_helper.c       | 117 ++++-----
 drivers/gpu/drm/drm_internal.h              |   5 +-
 drivers/gpu/drm/drm_prime.c                 |  14 +-
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   3 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       |   1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |  12 +-
 drivers/gpu/drm/exynos/exynos_drm_gem.c     |  12 -
 drivers/gpu/drm/exynos/exynos_drm_gem.h     |   2 -
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
 drivers/gpu/drm/nouveau/Kconfig             |   1 +
 drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
 drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 --
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +-
 drivers/gpu/drm/qxl/qxl_display.c           |  11 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |  14 +-
 drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  31 ++-
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +-
 drivers/gpu/drm/radeon/radeon.h             |   1 -
 drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  20 --
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 +-
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/ttm/ttm_bo_util.c           |  72 ++++++
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   7 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 +-
 drivers/gpu/drm/vkms/vkms_plane.c           |  15 +-
 drivers/gpu/drm/vkms/vkms_writeback.c       |  22 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 +-
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_client.h                    |   7 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   3 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_ttm_helper.h            |   6 +
 include/drm/drm_gem_vram_helper.h           |  14 +-
 include/drm/drm_mode_config.h               |  12 -
 include/drm/ttm/ttm_bo_api.h                |  28 +++
 include/linux/dma-buf-map.h                 |  93 ++++++-
 64 files changed, 859 insertions(+), 438 deletions(-)

--
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13841.34644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrET-0003aJ-4S; Wed, 28 Oct 2020 19:35:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13841.34644; Wed, 28 Oct 2020 19:35:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrET-0003aA-0b; Wed, 28 Oct 2020 19:35:41 +0000
Received: by outflank-mailman (input) for mailman id 13841;
 Wed, 28 Oct 2020 19:35:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrER-0003Op-HP
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f309d61-3868-4453-9b06-73d64d7204cd;
 Wed, 28 Oct 2020 19:35:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 981D5B91E;
 Wed, 28 Oct 2020 19:35:30 +0000 (UTC)
X-Inumbo-ID: 5f309d61-3868-4453-9b06-73d64d7204cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v6 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces
Date: Wed, 28 Oct 2020 20:35:20 +0100
Message-Id: <20201028193521.2489-10-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To do framebuffer updates, one needs a memcpy from system memory and a
pointer-increment function. Add both interfaces with documentation.

v5:
	* include <linux/string.h> to build on sparc64 (Sam)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 include/linux/dma-buf-map.h | 73 ++++++++++++++++++++++++++++++++-----
 1 file changed, 63 insertions(+), 10 deletions(-)

diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 2e8bbecb5091..583a3a1f9447 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -7,6 +7,7 @@
 #define __DMA_BUF_MAP_H__
 
 #include <linux/io.h>
+#include <linux/string.h>
 
 /**
  * DOC: overview
@@ -32,6 +33,14 @@
  * accessing the buffer. Use the returned instance and the helper functions
  * to access the buffer's memory in the correct way.
  *
+ * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
+ * actually independent from the dma-buf infrastructure. When sharing buffers
+ * among devices, drivers have to know the location of the memory to access
+ * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
+ * solves this problem for dma-buf and its users. If other drivers or
+ * sub-systems require similar functionality, the type could be generalized
+ * and moved to a more prominent header file.
+ *
  * Open-coding access to :c:type:`struct dma_buf_map <dma_buf_map>` is
 * considered bad style. Rather than accessing its fields directly, use one
  * of the provided helper functions, or implement your own. For example,
@@ -51,6 +60,14 @@
  *
 *	dma_buf_map_set_vaddr_iomem(&map, 0xdeadbeaf);
  *
+ * Instances of struct dma_buf_map do not have to be cleaned up, but
+ * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+ * always refer to system memory.
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_clear(&map);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -73,17 +90,19 @@
  *	if (dma_buf_map_is_equal(&sys_map, &io_map))
  *		// always false
  *
- * Instances of struct dma_buf_map do not have to be cleaned up, but
- * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
- * always refer to system memory.
+ * An initialized instance of struct dma_buf_map can be used to access or manipulate
+ * the buffer memory. Depending on the location of the memory, the provided
+ * helpers will pick the correct operations. Data can be copied into the memory
+ * with dma_buf_map_memcpy_to(). The address can be manipulated with
+ * dma_buf_map_incr().
  *
- * The type :c:type:`struct dma_buf_map <dma_buf_map>` and its helpers are
- * actually independent from the dma-buf infrastructure. When sharing buffers
- * among devices, drivers have to know the location of the memory to access
- * the buffers in a safe way. :c:type:`struct dma_buf_map <dma_buf_map>`
- * solves this problem for dma-buf and its users. If other drivers or
- * sub-systems require similar functionality, the type could be generalized
- * and moved to a more prominent header file.
+ * .. code-block:: c
+ *
+ *	const void *src = ...; // source buffer
+ *	size_t len = ...; // length of src
+ *
+ *	dma_buf_map_memcpy_to(&map, src, len);
+ *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
  */
 
 /**
@@ -210,4 +229,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
 	}
 }
 
+/**
+ * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
+ * @dst:	The dma-buf mapping structure
+ * @src:	The source buffer
+ * @len:	The number of bytes in src
+ *
+ * Copies data into a dma-buf mapping. The source buffer is in system
+ * memory. Depending on the buffer's location, the helper picks the correct
+ * method of accessing the memory.
+ */
+static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
+{
+	if (dst->is_iomem)
+		memcpy_toio(dst->vaddr_iomem, src, len);
+	else
+		memcpy(dst->vaddr, src, len);
+}
+
+/**
+ * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
+ * @map:	The dma-buf mapping structure
+ * @incr:	The number of bytes to increment
+ *
+ * Increments the address stored in a dma-buf mapping. Depending on the
+ * buffer's location, the helper updates the correct pointer.
+ */
+static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
+{
+	if (map->is_iomem)
+		map->vaddr_iomem += incr;
+	else
+		map->vaddr += incr;
+}
+
 #endif /* __DMA_BUF_MAP_H__ */
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13842.34650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrET-0003bR-KZ; Wed, 28 Oct 2020 19:35:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13842.34650; Wed, 28 Oct 2020 19:35:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrET-0003aw-Cb; Wed, 28 Oct 2020 19:35:41 +0000
Received: by outflank-mailman (input) for mailman id 13842;
 Wed, 28 Oct 2020 19:35:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrES-0003Ov-LS
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7301e54b-edb5-49cb-98b4-5ac00c178eef;
 Wed, 28 Oct 2020 19:35:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EA5EBB90F;
 Wed, 28 Oct 2020 19:35:26 +0000 (UTC)
X-Inumbo-ID: 7301e54b-edb5-49cb-98b4-5ac00c178eef
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v6 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap()
Date: Wed, 28 Oct 2020 20:35:14 +0100
Message-Id: <20201028193521.2489-4-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function etnaviv_gem_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       | 1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem.c       | 1 -
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 -----
 3 files changed, 7 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 914f0867ff71..9682c26d89bb 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
 void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
index 67d9a2b9ea6a..bbd235473645 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
@@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = {
 	.unpin = etnaviv_gem_prime_unpin,
 	.get_sg_table = etnaviv_gem_prime_get_sg_table,
 	.vmap = etnaviv_gem_prime_vmap,
-	.vunmap = etnaviv_gem_prime_vunmap,
 	.vm_ops = &vm_ops,
 };
 
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index 135fbff6fecf..a6d9932a32ae 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
 	return etnaviv_gem_vmap(obj);
 }
 
-void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* TODO msm_gem_vunmap() */
-}
-
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma)
 {
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13843.34668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEY-0003jG-0p; Wed, 28 Oct 2020 19:35:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13843.34668; Wed, 28 Oct 2020 19:35:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEX-0003iw-R7; Wed, 28 Oct 2020 19:35:45 +0000
Received: by outflank-mailman (input) for mailman id 13843;
 Wed, 28 Oct 2020 19:35:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEW-0003Op-Hf
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef729ea9-bd2b-48c9-829f-e0f75a96d4ec;
 Wed, 28 Oct 2020 19:35:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 81B3AB918;
 Wed, 28 Oct 2020 19:35:28 +0000 (UTC)
X-Inumbo-ID: ef729ea9-bd2b-48c9-829f-e0f75a96d4ec
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v6 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends
Date: Wed, 28 Oct 2020 20:35:17 +0100
Message-Id: <20201028193521.2489-7-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This patch replaces the raw pointers in the GEM object functions'
vmap/vunmap interfaces with instances of struct dma_buf_map. GEM backends are
converted as well. For most of them, this simply changes the returned type.

TTM-based drivers now return information about the location of the memory,
either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap()
et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of
implementing their own vmap callbacks.

v5:
	* update vkms after switch to shmem
v4:
	* use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian)
	* fix a trailing { in drm_gem_vmap()
	* remove several empty functions instead of converting them (Daniel)
	* comment uses of raw pointers with a TODO (Daniel)
	* TODO list: convert more helpers to use struct dma_buf_map

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Acked-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 Documentation/gpu/todo.rst                  |  18 ++++
 drivers/gpu/drm/Kconfig                     |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c |  36 -------
 drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h |   2 -
 drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c     |   5 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_object.h  |   1 -
 drivers/gpu/drm/ast/ast_cursor.c            |  27 +++--
 drivers/gpu/drm/ast/ast_drv.h               |   7 +-
 drivers/gpu/drm/drm_gem.c                   |  23 +++--
 drivers/gpu/drm/drm_gem_cma_helper.c        |  10 +-
 drivers/gpu/drm/drm_gem_shmem_helper.c      |  48 +++++----
 drivers/gpu/drm/drm_gem_vram_helper.c       | 107 ++++++++++----------
 drivers/gpu/drm/etnaviv/etnaviv_drv.h       |   2 +-
 drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c |   9 +-
 drivers/gpu/drm/lima/lima_gem.c             |   6 +-
 drivers/gpu/drm/lima/lima_sched.c           |  11 +-
 drivers/gpu/drm/mgag200/mgag200_mode.c      |  10 +-
 drivers/gpu/drm/nouveau/Kconfig             |   1 +
 drivers/gpu/drm/nouveau/nouveau_bo.h        |   2 -
 drivers/gpu/drm/nouveau/nouveau_gem.c       |   6 +-
 drivers/gpu/drm/nouveau/nouveau_gem.h       |   2 -
 drivers/gpu/drm/nouveau/nouveau_prime.c     |  20 ----
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c |  14 +--
 drivers/gpu/drm/qxl/qxl_display.c           |  11 +-
 drivers/gpu/drm/qxl/qxl_draw.c              |  14 ++-
 drivers/gpu/drm/qxl/qxl_drv.h               |  11 +-
 drivers/gpu/drm/qxl/qxl_object.c            |  31 +++---
 drivers/gpu/drm/qxl/qxl_object.h            |   2 +-
 drivers/gpu/drm/qxl/qxl_prime.c             |  12 +--
 drivers/gpu/drm/radeon/radeon.h             |   1 -
 drivers/gpu/drm/radeon/radeon_gem.c         |   7 +-
 drivers/gpu/drm/radeon/radeon_prime.c       |  20 ----
 drivers/gpu/drm/rockchip/rockchip_drm_gem.c |  22 ++--
 drivers/gpu/drm/rockchip/rockchip_drm_gem.h |   4 +-
 drivers/gpu/drm/tiny/cirrus.c               |  10 +-
 drivers/gpu/drm/tiny/gm12u320.c             |  10 +-
 drivers/gpu/drm/udl/udl_modeset.c           |   8 +-
 drivers/gpu/drm/vboxvideo/vbox_mode.c       |  11 +-
 drivers/gpu/drm/vc4/vc4_bo.c                |   6 +-
 drivers/gpu/drm/vc4/vc4_drv.h               |   2 +-
 drivers/gpu/drm/vgem/vgem_drv.c             |  16 ++-
 drivers/gpu/drm/vkms/vkms_plane.c           |  15 ++-
 drivers/gpu/drm/vkms/vkms_writeback.c       |  22 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.c     |  18 ++--
 drivers/gpu/drm/xen/xen_drm_front_gem.h     |   6 +-
 include/drm/drm_gem.h                       |   5 +-
 include/drm/drm_gem_cma_helper.h            |   2 +-
 include/drm/drm_gem_shmem_helper.h          |   4 +-
 include/drm/drm_gem_vram_helper.h           |  14 +--
 49 files changed, 345 insertions(+), 308 deletions(-)

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 700637e25ecd..7e6fc3c04add 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -446,6 +446,24 @@ Contact: Ville Syrjälä, Daniel Vetter
 
 Level: Intermediate
 
+Use struct dma_buf_map throughout codebase
+------------------------------------------
+
+Pointers to shared device memory are stored in struct dma_buf_map. Each
+instance knows whether it refers to system or I/O memory. Most of the DRM-wide
+interfaces have been converted to use struct dma_buf_map, but implementations
+often still use raw pointers.
+
+The task is to use struct dma_buf_map where it makes sense.
+
+* Memory managers should use struct dma_buf_map for dma-buf-imported buffers.
+* TTM might benefit from using struct dma_buf_map internally.
+* Framebuffer copying and blitting helpers should operate on struct dma_buf_map.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Christian König, Daniel Vetter
+
+Level: Intermediate
+
 
 Core refactorings
 =================
diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
index 32257189e09b..e479b04e955e 100644
--- a/drivers/gpu/drm/Kconfig
+++ b/drivers/gpu/drm/Kconfig
@@ -239,6 +239,7 @@ config DRM_RADEON
 	select FW_LOADER
         select DRM_KMS_HELPER
         select DRM_TTM
+	select DRM_TTM_HELPER
 	select POWER_SUPPLY
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
@@ -259,6 +260,7 @@ config DRM_AMDGPU
 	select DRM_KMS_HELPER
 	select DRM_SCHED
 	select DRM_TTM
+	select DRM_TTM_HELPER
 	select POWER_SUPPLY
 	select HWMON
 	select BACKLIGHT_CLASS_DEVICE
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
index 5b465ab774d1..e5919efca870 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
@@ -41,42 +41,6 @@
 #include <linux/dma-fence-array.h>
 #include <linux/pci-p2pdma.h>
 
-/**
- * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation
- * @obj: GEM BO
- *
- * Sets up an in-kernel virtual mapping of the BO's memory.
- *
- * Returns:
- * The virtual address of the mapping or an error pointer.
- */
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
-			  &bo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return bo->dma_buf_vmap.virtual;
-}
-
-/**
- * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation
- * @obj: GEM BO
- * @vaddr: Virtual address (unused)
- *
- * Tears down the in-kernel virtual mapping of the BO's memory.
- */
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj);
-
-	ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
 /**
  * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation
  * @obj: GEM BO
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
index 2c5c84a06bb9..39b5b9616fd8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h
@@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev,
 					    struct dma_buf *dma_buf);
 bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev,
 				      struct amdgpu_bo *bo);
-void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj);
-void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 int amdgpu_gem_prime_mmap(struct drm_gem_object *obj,
 			  struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
index be08a63ef58c..576659827e74 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
@@ -33,6 +33,7 @@
 
 #include <drm/amdgpu_drm.h>
 #include <drm/drm_debugfs.h>
+#include <drm/drm_gem_ttm_helper.h>
 
 #include "amdgpu.h"
 #include "amdgpu_display.h"
@@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = {
 	.open = amdgpu_gem_object_open,
 	.close = amdgpu_gem_object_close,
 	.export = amdgpu_gem_prime_export,
-	.vmap = amdgpu_gem_prime_vmap,
-	.vunmap = amdgpu_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 /*
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
index 132e5f955180..01296ef0d673 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
@@ -100,7 +100,6 @@ struct amdgpu_bo {
 	struct amdgpu_bo		*parent;
 	struct amdgpu_bo		*shadow;
 
-	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	struct amdgpu_mn		*mn;
 
 
diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c
index e0f4613918ad..742d43a7edf4 100644
--- a/drivers/gpu/drm/ast/ast_cursor.c
+++ b/drivers/gpu/drm/ast/ast_cursor.c
@@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast)
 
 	for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) {
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast)
 	struct drm_device *dev = &ast->base;
 	size_t size, i;
 	struct drm_gem_vram_object *gbo;
-	void __iomem *vaddr;
+	struct dma_buf_map map;
 	int ret;
 
 	size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE);
@@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast)
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
-		vaddr = drm_gem_vram_vmap(gbo);
-		if (IS_ERR(vaddr)) {
-			ret = PTR_ERR(vaddr);
+		ret = drm_gem_vram_vmap(gbo, &map);
+		if (ret) {
 			drm_gem_vram_unpin(gbo);
 			drm_gem_vram_put(gbo);
 			goto err_drm_gem_vram_put;
 		}
 
 		ast->cursor.gbo[i] = gbo;
-		ast->cursor.vaddr[i] = vaddr;
+		ast->cursor.map[i] = map;
 	}
 
 	return drmm_add_action_or_reset(dev, ast_cursor_release, NULL);
@@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast)
 	while (i) {
 		--i;
 		gbo = ast->cursor.gbo[i];
-		drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]);
+		drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]);
 		drm_gem_vram_unpin(gbo);
 		drm_gem_vram_put(gbo);
 	}
@@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 {
 	struct drm_device *dev = &ast->base;
 	struct drm_gem_vram_object *gbo;
+	struct dma_buf_map map;
 	int ret;
 	void *src;
 	void __iomem *dst;
@@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb)
 	ret = drm_gem_vram_pin(gbo, 0);
 	if (ret)
 		return ret;
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
-		ret = PTR_ERR(src);
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret)
 		goto err_drm_gem_vram_unpin;
-	}
+	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem;
 
 	/* do data transfer to cursor BO */
 	update_cursor_image(dst, src, fb->width, fb->height);
 
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 	drm_gem_vram_unpin(gbo);
 
 	return 0;
@@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y,
 	u8 __iomem *sig;
 	u8 jreg;
 
-	dst = ast->cursor.vaddr[ast->cursor.next_index];
+	dst = ast->cursor.map[ast->cursor.next_index].vaddr;
 
 	sig = dst + AST_HWC_SIZE;
 	writel(x, sig + AST_HWC_SIGNATURE_X);
diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
index 467049ca8430..f963141dd851 100644
--- a/drivers/gpu/drm/ast/ast_drv.h
+++ b/drivers/gpu/drm/ast/ast_drv.h
@@ -28,10 +28,11 @@
 #ifndef __AST_DRV_H__
 #define __AST_DRV_H__
 
-#include <linux/types.h>
-#include <linux/io.h>
+#include <linux/dma-buf-map.h>
 #include <linux/i2c.h>
 #include <linux/i2c-algo-bit.h>
+#include <linux/io.h>
+#include <linux/types.h>
 
 #include <drm/drm_connector.h>
 #include <drm/drm_crtc.h>
@@ -131,7 +132,7 @@ struct ast_private {
 
 	struct {
 		struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM];
-		void __iomem *vaddr[AST_DEFAULT_HWC_NUM];
+		struct dma_buf_map map[AST_DEFAULT_HWC_NUM];
 		unsigned int next_index;
 	} cursor;
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 1da67d34e55d..a89ad4570e3c 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -36,6 +36,7 @@
 #include <linux/pagemap.h>
 #include <linux/shmem_fs.h>
 #include <linux/dma-buf.h>
+#include <linux/dma-buf-map.h>
 #include <linux/mem_encrypt.h>
 #include <linux/pagevec.h>
 
@@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 
 void *drm_gem_vmap(struct drm_gem_object *obj)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
-	if (obj->funcs->vmap)
-		vaddr = obj->funcs->vmap(obj);
-	else
-		vaddr = ERR_PTR(-EOPNOTSUPP);
+	if (!obj->funcs->vmap)
+		return ERR_PTR(-EOPNOTSUPP);
 
-	if (!vaddr)
-		vaddr = ERR_PTR(-ENOMEM);
+	ret = obj->funcs->vmap(obj, &map);
+	if (ret)
+		return ERR_PTR(ret);
+	else if (dma_buf_map_is_null(&map))
+		return ERR_PTR(-ENOMEM);
 
-	return vaddr;
+	return map.vaddr;
 }
 
 void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
 {
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
+
 	if (!vaddr)
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, vaddr);
+		obj->funcs->vunmap(obj, &map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index d527485ea0b7..b57e3e9222f0 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual
  *     address space
  * @obj: GEM object
+ * @map: Returns the kernel virtual address of the CMA GEM object's backing
+ *       store.
  *
  * This function maps a buffer exported via DRM PRIME into the kernel's
  * virtual address space. Since the CMA buffers are already mapped into the
@@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap);
  * driver's &drm_gem_object_funcs.vmap callback.
  *
  * Returns:
- * The kernel virtual address of the CMA GEM object's backing store.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj);
 
-	return cma_obj->vaddr;
+	dma_buf_map_set_vaddr(map, cma_obj->vaddr);
+
+	return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
index fb11df7aced5..5553f58f68f3 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL(drm_gem_shmem_unpin);
 
-static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map;
 	int ret = 0;
 
-	if (shmem->vmap_use_count++ > 0)
-		return shmem->vaddr;
+	if (shmem->vmap_use_count++ > 0) {
+		dma_buf_map_set_vaddr(map, shmem->vaddr);
+		return 0;
+	}
 
 	if (obj->import_attach) {
-		ret = dma_buf_vmap(obj->import_attach->dmabuf, &map);
-		if (!ret)
-			shmem->vaddr = map.vaddr;
+		ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
+		if (!ret) {
+			if (WARN_ON(map->is_iomem)) {
+				ret = -EIO;
+				goto err_put_pages;
+			}
+			shmem->vaddr = map->vaddr;
+		}
 	} else {
 		pgprot_t prot = PAGE_KERNEL;
 
@@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 				    VM_MAP, prot);
 		if (!shmem->vaddr)
 			ret = -ENOMEM;
+		else
+			dma_buf_map_set_vaddr(map, shmem->vaddr);
 	}
 
 	if (ret) {
@@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 		goto err_put_pages;
 	}
 
-	return shmem->vaddr;
+	return 0;
 
 err_put_pages:
 	if (!obj->import_attach)
@@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
 err_zero_use:
 	shmem->vmap_use_count = 0;
 
-	return ERR_PTR(ret);
+	return ret;
 }
 
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
+ *       store.
  *
  * This function makes sure that a contiguous kernel virtual address mapping
  * exists for the buffer backing the shmem GEM object.
@@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem)
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj)
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
-	void *vaddr;
 	int ret;
 
 	ret = mutex_lock_interruptible(&shmem->vmap_lock);
 	if (ret)
-		return ERR_PTR(ret);
-	vaddr = drm_gem_shmem_vmap_locked(shmem);
+		return ret;
+	ret = drm_gem_shmem_vmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 
-	return vaddr;
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_shmem_vmap);
 
-static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem,
+					struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = &shmem->base;
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr);
 
 	if (WARN_ON_ONCE(!shmem->vmap_use_count))
 		return;
@@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 		return;
 
 	if (obj->import_attach)
-		dma_buf_vunmap(obj->import_attach->dmabuf, &map);
+		dma_buf_vunmap(obj->import_attach->dmabuf, map);
 	else
 		vunmap(shmem->vaddr);
 
@@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object
  * @shmem: shmem GEM object
+ * @map: Kernel virtual address where the SHMEM GEM object was mapped
  *
  * This function cleans up a kernel virtual address mapping acquired by
  * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to
@@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem)
  * also be called by drivers directly, in which case it will hide the
  * differences between dma-buf imported and natively allocated objects.
  */
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
 
 	mutex_lock(&shmem->vmap_lock);
-	drm_gem_shmem_vunmap_locked(shmem);
+	drm_gem_shmem_vunmap_locked(shmem, map);
 	mutex_unlock(&shmem->vmap_lock);
 }
 EXPORT_SYMBOL(drm_gem_shmem_vunmap);
diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c
index f445b84c43c4..4d99dd50e763 100644
--- a/drivers/gpu/drm/drm_gem_vram_helper.c
+++ b/drivers/gpu/drm/drm_gem_vram_helper.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 
 #include <drm/drm_debugfs.h>
@@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo)
 	 * up; only release the GEM object.
 	 */
 
-	WARN_ON(gbo->kmap_use_count);
-	WARN_ON(gbo->kmap.virtual);
+	WARN_ON(gbo->vmap_use_count);
+	WARN_ON(dma_buf_map_is_set(&gbo->map));
 
 	drm_gem_object_release(&gbo->bo.base);
 }
@@ -379,29 +380,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo)
 }
 EXPORT_SYMBOL(drm_gem_vram_unpin);
 
-static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo)
+static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo,
+				    struct dma_buf_map *map)
 {
 	int ret;
-	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
-	bool is_iomem;
 
-	if (gbo->kmap_use_count > 0)
+	if (gbo->vmap_use_count > 0)
 		goto out;
 
-	ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap);
+	ret = ttm_bo_vmap(&gbo->bo, &gbo->map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 out:
-	++gbo->kmap_use_count;
-	return ttm_kmap_obj_virtual(kmap, &is_iomem);
+	++gbo->vmap_use_count;
+	*map = gbo->map;
+
+	return 0;
 }
 
-static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
+static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo,
+				       struct dma_buf_map *map)
 {
-	if (WARN_ON_ONCE(!gbo->kmap_use_count))
+	struct drm_device *dev = gbo->bo.base.dev;
+
+	if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count))
 		return;
-	if (--gbo->kmap_use_count > 0)
+
+	if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map)))
+		return; /* BUG: map not mapped from this BO */
+
+	if (--gbo->vmap_use_count > 0)
 		return;
 
 	/*
@@ -415,7 +424,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
 /**
  * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address
  *                       space
- * @gbo:	The GEM VRAM object to map
+ * @gbo: The GEM VRAM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * The vmap function pins a GEM VRAM object to its current location, either
  * system or video memory, and maps its buffer into kernel address space.
@@ -424,48 +435,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo)
  * unmap and unpin the GEM VRAM object.
  *
  * Returns:
- * The buffer's virtual address on success, or
- * an ERR_PTR()-encoded error code otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo)
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
-	void *base;
 
 	ret = ttm_bo_reserve(&gbo->bo, true, false, NULL);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
 	ret = drm_gem_vram_pin_locked(gbo, 0);
 	if (ret)
 		goto err_ttm_bo_unreserve;
-	base = drm_gem_vram_kmap_locked(gbo);
-	if (IS_ERR(base)) {
-		ret = PTR_ERR(base);
+	ret = drm_gem_vram_kmap_locked(gbo, map);
+	if (ret)
 		goto err_drm_gem_vram_unpin_locked;
-	}
 
 	ttm_bo_unreserve(&gbo->bo);
 
-	return base;
+	return 0;
 
 err_drm_gem_vram_unpin_locked:
 	drm_gem_vram_unpin_locked(gbo);
 err_ttm_bo_unreserve:
 	ttm_bo_unreserve(&gbo->bo);
-	return ERR_PTR(ret);
+	return ret;
 }
 EXPORT_SYMBOL(drm_gem_vram_vmap);
 
 /**
  * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object
- * @gbo:	The GEM VRAM object to unmap
- * @vaddr:	The mapping's base address as returned by drm_gem_vram_vmap()
+ * @gbo: The GEM VRAM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  *
  * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See
  * the documentation for drm_gem_vram_vmap() for more information.
  */
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map)
 {
 	int ret;
 
@@ -473,7 +480,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr)
 	if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret))
 		return;
 
-	drm_gem_vram_kunmap_locked(gbo);
+	drm_gem_vram_kunmap_locked(gbo, map);
 	drm_gem_vram_unpin_locked(gbo);
 
 	ttm_bo_unreserve(&gbo->bo);
@@ -564,15 +571,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo,
 					       bool evict,
 					       struct ttm_resource *new_mem)
 {
-	struct ttm_bo_kmap_obj *kmap = &gbo->kmap;
+	struct ttm_buffer_object *bo = &gbo->bo;
+	struct drm_device *dev = bo->base.dev;
 
-	if (WARN_ON_ONCE(gbo->kmap_use_count))
+	if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count))
 		return;
 
-	if (!kmap->virtual)
-		return;
-	ttm_bo_kunmap(kmap);
-	kmap->virtual = NULL;
+	ttm_bo_vunmap(bo, &gbo->map);
 }
 
 static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo,
@@ -838,37 +843,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem)
 }
 
 /**
- * drm_gem_vram_object_vmap() - \
-	Implements &struct drm_gem_object_funcs.vmap
- * @gem:	The GEM object to map
+ * drm_gem_vram_object_vmap() -
+ *	Implements &struct drm_gem_object_funcs.vmap
+ * @gem: The GEM object to map
+ * @map: Returns the kernel virtual address of the VRAM GEM object's backing
+ *       store.
  *
  * Returns:
- * The buffers virtual address on success, or
- * NULL otherwise.
+ * 0 on success, or a negative error code otherwise.
  */
-static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem)
+static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
-	void *base;
 
-	base = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(base))
-		return NULL;
-	return base;
+	return drm_gem_vram_vmap(gbo, map);
 }
 
 /**
- * drm_gem_vram_object_vunmap() - \
-	Implements &struct drm_gem_object_funcs.vunmap
- * @gem:	The GEM object to unmap
- * @vaddr:	The mapping's base address
+ * drm_gem_vram_object_vunmap() -
+ *	Implements &struct drm_gem_object_funcs.vunmap
+ * @gem: The GEM object to unmap
+ * @map: Kernel virtual address where the VRAM GEM object was mapped
  */
-static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem,
-				       void *vaddr)
+static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map)
 {
 	struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem);
 
-	drm_gem_vram_vunmap(gbo, vaddr);
+	drm_gem_vram_vunmap(gbo, map);
 }
 
 /*
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
index 9682c26d89bb..f5be627e1de0 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h
@@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
 int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset);
 struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj);
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj);
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev,
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
index a6d9932a32ae..bc2543dd987d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c
@@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages);
 }
 
-void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj)
+int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	return etnaviv_gem_vmap(obj);
+	void *vaddr = etnaviv_gem_vmap(obj);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 int etnaviv_gem_prime_mmap(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 11223fe348df..832e5280a6ed 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj)
 	return drm_gem_shmem_pin(obj);
 }
 
-static void *lima_gem_vmap(struct drm_gem_object *obj)
+static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct lima_bo *bo = to_lima_bo(obj);
 
 	if (bo->heap_size)
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 
-	return drm_gem_shmem_vmap(obj);
+	return drm_gem_shmem_vmap(obj, map);
 }
 
 static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c
index dc6df9e9a40d..a070a85f8f36 100644
--- a/drivers/gpu/drm/lima/lima_sched.c
+++ b/drivers/gpu/drm/lima/lima_sched.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /* Copyright 2017-2019 Qiang Yu <yuq825@gmail.com> */
 
+#include <linux/dma-buf-map.h>
 #include <linux/kthread.h>
 #include <linux/slab.h>
 #include <linux/vmalloc.h>
@@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 	struct lima_dump_chunk_buffer *buffer_chunk;
 	u32 size, task_size, mem_size;
 	int i;
+	struct dma_buf_map map;
+	int ret;
 
 	mutex_lock(&dev->error_task_list_lock);
 
@@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task)
 		} else {
 			buffer_chunk->size = lima_bo_size(bo);
 
-			data = drm_gem_shmem_vmap(&bo->base.base);
-			if (IS_ERR_OR_NULL(data)) {
+			ret = drm_gem_shmem_vmap(&bo->base.base, &map);
+			if (ret) {
 				kvfree(et);
 				goto out;
 			}
 
-			memcpy(buffer_chunk + 1, data, buffer_chunk->size);
+			memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size);
 
-			drm_gem_shmem_vunmap(&bo->base.base, data);
+			drm_gem_shmem_vunmap(&bo->base.base, &map);
 		}
 
 		buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size;
diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
index 38672f9e5c4f..8ef76769b97f 100644
--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
+++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_atomic_state_helper.h>
@@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb,
 		      struct drm_rect *clip)
 {
 	struct drm_device *dev = &mdev->base;
+	struct dma_buf_map map;
 	void *vmap;
+	int ret;
 
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (drm_WARN_ON(dev, !vmap))
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (drm_WARN_ON(dev, ret))
 		return; /* BUG: SHMEM BO should always be vmapped */
+	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip);
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 
 	/* Always scanout image at VRAM offset 0 */
 	mgag200_set_startadd(mdev, (u32)0);
diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
index 5dec1e5694b7..9436310d0854 100644
--- a/drivers/gpu/drm/nouveau/Kconfig
+++ b/drivers/gpu/drm/nouveau/Kconfig
@@ -6,6 +6,7 @@ config DRM_NOUVEAU
 	select FW_LOADER
 	select DRM_KMS_HELPER
 	select DRM_TTM
+	select DRM_TTM_HELPER
 	select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT
 	select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT
 	select X86_PLATFORM_DEVICES if ACPI && X86
diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h
index 641ef6298a0e..6045b85a762a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_bo.h
+++ b/drivers/gpu/drm/nouveau/nouveau_bo.h
@@ -39,8 +39,6 @@ struct nouveau_bo {
 	unsigned mode;
 
 	struct nouveau_drm_tile *tile;
-
-	struct ttm_bo_kmap_obj dma_buf_vmap;
 };
 
 static inline struct nouveau_bo *
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
index 9a421c3949de..f942b526b0a5 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
@@ -24,6 +24,8 @@
  *
  */
 
+#include <drm/drm_gem_ttm_helper.h>
+
 #include "nouveau_drv.h"
 #include "nouveau_dma.h"
 #include "nouveau_fence.h"
@@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = {
 	.pin = nouveau_gem_prime_pin,
 	.unpin = nouveau_gem_prime_unpin,
 	.get_sg_table = nouveau_gem_prime_get_sg_table,
-	.vmap = nouveau_gem_prime_vmap,
-	.vunmap = nouveau_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 int
diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h
index b35c180322e2..3b919c7c931c 100644
--- a/drivers/gpu/drm/nouveau/nouveau_gem.h
+++ b/drivers/gpu/drm/nouveau/nouveau_gem.h
@@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *);
 extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *);
 extern struct drm_gem_object *nouveau_gem_prime_import_sg_table(
 	struct drm_device *, struct dma_buf_attachment *, struct sg_table *);
-extern void *nouveau_gem_prime_vmap(struct drm_gem_object *);
-extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *);
 
 #endif
diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c
index a8264aebf3d4..2f16b5249283 100644
--- a/drivers/gpu/drm/nouveau/nouveau_prime.c
+++ b/drivers/gpu/drm/nouveau/nouveau_prime.c
@@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages);
 }
 
-void *nouveau_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages,
-			  &nvbo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return nvbo->dma_buf_vmap.virtual;
-}
-
-void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct nouveau_bo *nvbo = nouveau_gem_object(obj);
-
-	ttm_bo_kunmap(&nvbo->dma_buf_vmap);
-}
-
 struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev,
 							 struct dma_buf_attachment *attach,
 							 struct sg_table *sg)
diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
index fdbc8d949135..5ab03d605f57 100644
--- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
+++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c
@@ -5,6 +5,7 @@
 #include <drm/drm_gem_shmem_helper.h>
 #include <drm/panfrost_drm.h>
 #include <linux/completion.h>
+#include <linux/dma-buf-map.h>
 #include <linux/iopoll.h>
 #include <linux/pm_runtime.h>
 #include <linux/slab.h>
@@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map;
 	struct drm_gem_shmem_object *bo;
 	u32 cfg, as;
 	int ret;
@@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 		goto err_close_bo;
 	}
 
-	perfcnt->buf = drm_gem_shmem_vmap(&bo->base);
-	if (IS_ERR(perfcnt->buf)) {
-		ret = PTR_ERR(perfcnt->buf);
+	ret = drm_gem_shmem_vmap(&bo->base, &map);
+	if (ret)
 		goto err_put_mapping;
-	}
+	perfcnt->buf = map.vaddr;
 
 	/*
 	 * Invalidate the cache and clear the counters to start from a fresh
@@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev,
 	return 0;
 
 err_vunmap:
-	drm_gem_shmem_vunmap(&bo->base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&bo->base, &map);
 err_put_mapping:
 	panfrost_gem_mapping_put(perfcnt->mapping);
 err_close_bo:
@@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 {
 	struct panfrost_file_priv *user = file_priv->driver_priv;
 	struct panfrost_perfcnt *perfcnt = pfdev->perfcnt;
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf);
 
 	if (user != perfcnt->user)
 		return -EINVAL;
@@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev,
 		  GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF));
 
 	perfcnt->user = NULL;
-	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf);
+	drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map);
 	perfcnt->buf = NULL;
 	panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv);
 	panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu);
diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
index 45fd76e04bdc..e165fa9b2089 100644
--- a/drivers/gpu/drm/qxl/qxl_display.c
+++ b/drivers/gpu/drm/qxl/qxl_display.c
@@ -25,6 +25,7 @@
 
 #include <linux/crc32.h>
 #include <linux/delay.h>
+#include <linux/dma-buf-map.h>
 
 #include <drm/drm_drv.h>
 #include <drm/drm_atomic.h>
@@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 	struct drm_gem_object *obj;
 	struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL;
 	int ret;
+	struct dma_buf_map user_map;
+	struct dma_buf_map cursor_map;
 	void *user_ptr;
 	int size = 64*64*4;
 
@@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		user_bo = gem_to_qxl_bo(obj);
 
 		/* pinning is done in the prepare/cleanup framevbuffer */
-		ret = qxl_bo_kmap(user_bo, &user_ptr);
+		ret = qxl_bo_kmap(user_bo, &user_map);
 		if (ret)
 			goto out_free_release;
+		user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 		ret = qxl_alloc_bo_reserved(qdev, release,
 					    sizeof(struct qxl_cursor) + size,
@@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
 		if (ret)
 			goto out_unpin;
 
-		ret = qxl_bo_kmap(cursor_bo, (void **)&cursor);
+		ret = qxl_bo_kmap(cursor_bo, &cursor_map);
 		if (ret)
 			goto out_backoff;
 
@@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 {
 	int ret;
 	struct drm_gem_object *gobj;
+	struct dma_buf_map map;
 	int monitors_config_size = sizeof(struct qxl_monitors_config) +
 		qxl_num_crtc * sizeof(struct qxl_head);
 
@@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev)
 	if (ret)
 		return ret;
 
-	qxl_bo_kmap(qdev->monitors_config_bo, NULL);
+	qxl_bo_kmap(qdev->monitors_config_bo, &map);
 
 	qdev->monitors_config = qdev->monitors_config_bo->kptr;
 	qdev->ram_header->monitors_config =
diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
index 3599db096973..7b7acb910780 100644
--- a/drivers/gpu/drm/qxl/qxl_draw.c
+++ b/drivers/gpu/drm/qxl/qxl_draw.c
@@ -20,6 +20,8 @@
  * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
  */
 
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_fourcc.h>
 
 #include "qxl_drv.h"
@@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev,
 					      unsigned int num_clips,
 					      struct qxl_bo *clips_bo)
 {
+	struct dma_buf_map map;
 	struct qxl_clip_rects *dev_clips;
 	int ret;
 
-	ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips);
-	if (ret) {
+	ret = qxl_bo_kmap(clips_bo, &map);
+	if (ret)
 		return NULL;
-	}
+	dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */
+
 	dev_clips->num_rects = num_clips;
 	dev_clips->chunk.next_chunk = 0;
 	dev_clips->chunk.prev_chunk = 0;
@@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	int stride = fb->pitches[0];
 	/* depth is not actually interesting, we don't mask with it */
 	int depth = fb->format->cpp[0] * 8;
+	struct dma_buf_map surface_map;
 	uint8_t *surface_base;
 	struct qxl_release *release;
 	struct qxl_bo *clips_bo;
@@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
 	if (ret)
 		goto out_release_backoff;
 
-	ret = qxl_bo_kmap(bo, (void **)&surface_base);
+	ret = qxl_bo_kmap(bo, &surface_map);
 	if (ret)
 		goto out_release_backoff;
+	surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	ret = qxl_image_init(qdev, release, dimage, surface_base,
 			     left - dumb_shadow_offset,
diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..eb437fea5d9e 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -30,6 +30,7 @@
  * Definitions taken from spice-protocol, plus kernel driver specific bits.
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/dma-fence.h>
 #include <linux/firmware.h>
 #include <linux/platform_device.h>
@@ -50,6 +51,8 @@
 
 #include "qxl_dev.h"
 
+struct dma_buf_map;
+
 #define DRIVER_AUTHOR		"Dave Airlie"
 
 #define DRIVER_NAME		"qxl"
@@ -79,7 +82,7 @@ struct qxl_bo {
 	/* Protected by tbo.reserved */
 	struct ttm_place		placements[3];
 	struct ttm_placement		placement;
-	struct ttm_bo_kmap_obj		kmap;
+	struct dma_buf_map		map;
 	void				*kptr;
 	unsigned int                    map_count;
 	int                             type;
@@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv);
 void qxl_gem_object_close(struct drm_gem_object *obj,
 			  struct drm_file *file_priv);
 void qxl_bo_force_delete(struct qxl_device *qdev);
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
 
 /* qxl_dumb.c */
 int qxl_mode_dumb_create(struct drm_file *file_priv,
@@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj);
 struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	struct drm_device *dev, struct dma_buf_attachment *attach,
 	struct sg_table *sgt);
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj);
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map);
 int qxl_gem_prime_mmap(struct drm_gem_object *obj,
 				struct vm_area_struct *vma);
 
diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c
index 547d46c14d56..ceebc5881f68 100644
--- a/drivers/gpu/drm/qxl/qxl_object.c
+++ b/drivers/gpu/drm/qxl/qxl_object.c
@@ -23,10 +23,12 @@
  *          Alon Levy
  */
 
+#include <linux/dma-buf-map.h>
+#include <linux/io-mapping.h>
+
 #include "qxl_drv.h"
 #include "qxl_object.h"
 
-#include <linux/io-mapping.h>
 static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo)
 {
 	struct qxl_bo *bo;
@@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev,
 	return 0;
 }
 
-int qxl_bo_kmap(struct qxl_bo *bo, void **ptr)
+int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map)
 {
-	bool is_iomem;
 	int r;
 
 	if (bo->kptr) {
-		if (ptr)
-			*ptr = bo->kptr;
 		bo->map_count++;
-		return 0;
+		goto out;
 	}
-	r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap);
+	r = ttm_bo_vmap(&bo->tbo, &bo->map);
 	if (r)
 		return r;
-	bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem);
-	if (ptr)
-		*ptr = bo->kptr;
 	bo->map_count = 1;
+
+	/* TODO: Remove kptr in favor of map everywhere. */
+	if (bo->map.is_iomem)
+		bo->kptr = (void *)bo->map.vaddr_iomem;
+	else
+		bo->kptr = bo->map.vaddr;
+
+out:
+	*map = bo->map;
 	return 0;
 }
 
@@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 	void *rptr;
 	int ret;
 	struct io_mapping *map;
+	struct dma_buf_map bo_map;
 
 	if (bo->tbo.mem.mem_type == TTM_PL_VRAM)
 		map = qdev->vram_mapping;
@@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev,
 		return rptr;
 	}
 
-	ret = qxl_bo_kmap(bo, &rptr);
+	ret = qxl_bo_kmap(bo, &bo_map);
 	if (ret)
 		return NULL;
+	rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	rptr += page_offset * PAGE_SIZE;
 	return rptr;
@@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo)
 	if (bo->map_count > 0)
 		return;
 	bo->kptr = NULL;
-	ttm_bo_kunmap(&bo->kmap);
+	ttm_bo_vunmap(&bo->tbo, &bo->map);
 }
 
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev,
diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h
index 09a5c818324d..ebf24c9d2bf2 100644
--- a/drivers/gpu/drm/qxl/qxl_object.h
+++ b/drivers/gpu/drm/qxl/qxl_object.h
@@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev,
 			 bool kernel, bool pinned, u32 domain,
 			 struct qxl_surface *surf,
 			 struct qxl_bo **bo_ptr);
-extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr);
+extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map);
 extern void qxl_bo_kunmap(struct qxl_bo *bo);
 void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset);
 void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map);
diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c
index 7d3816fca5a8..4aa949799446 100644
--- a/drivers/gpu/drm/qxl/qxl_prime.c
+++ b/drivers/gpu/drm/qxl/qxl_prime.c
@@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table(
 	return ERR_PTR(-ENOSYS);
 }
 
-void *qxl_gem_prime_vmap(struct drm_gem_object *obj)
+int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
-	void *ptr;
 	int ret;
 
-	ret = qxl_bo_kmap(bo, &ptr);
+	ret = qxl_bo_kmap(bo, map);
 	if (ret < 0)
-		return ERR_PTR(ret);
+		return ret;
 
-	return ptr;
+	return 0;
 }
 
-void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void qxl_gem_prime_vunmap(struct drm_gem_object *obj,
+			  struct dma_buf_map *map)
 {
 	struct qxl_bo *bo = gem_to_qxl_bo(obj);
 
diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h
index 5d54bccebd4d..44cb5ee6fc20 100644
--- a/drivers/gpu/drm/radeon/radeon.h
+++ b/drivers/gpu/drm/radeon/radeon.h
@@ -509,7 +509,6 @@ struct radeon_bo {
 	/* Constant after initialization */
 	struct radeon_device		*rdev;
 
-	struct ttm_bo_kmap_obj		dma_buf_vmap;
 	pid_t				pid;
 
 #ifdef CONFIG_MMU_NOTIFIER
diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
index 0ccd7213e41f..d2876ce3bc9e 100644
--- a/drivers/gpu/drm/radeon/radeon_gem.c
+++ b/drivers/gpu/drm/radeon/radeon_gem.c
@@ -31,6 +31,7 @@
 #include <drm/drm_debugfs.h>
 #include <drm/drm_device.h>
 #include <drm/drm_file.h>
+#include <drm/drm_gem_ttm_helper.h>
 #include <drm/radeon_drm.h>
 
 #include "radeon.h"
@@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj,
 struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj);
 int radeon_gem_prime_pin(struct drm_gem_object *obj);
 void radeon_gem_prime_unpin(struct drm_gem_object *obj);
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj);
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 static const struct drm_gem_object_funcs radeon_gem_object_funcs;
 
@@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = {
 	.pin = radeon_gem_prime_pin,
 	.unpin = radeon_gem_prime_unpin,
 	.get_sg_table = radeon_gem_prime_get_sg_table,
-	.vmap = radeon_gem_prime_vmap,
-	.vunmap = radeon_gem_prime_vunmap,
+	.vmap = drm_gem_ttm_vmap,
+	.vunmap = drm_gem_ttm_vunmap,
 };
 
 /*
diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c
index b9de0e51c0be..088d39a51c0d 100644
--- a/drivers/gpu/drm/radeon/radeon_prime.c
+++ b/drivers/gpu/drm/radeon/radeon_prime.c
@@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages);
 }
 
-void *radeon_gem_prime_vmap(struct drm_gem_object *obj)
-{
-	struct radeon_bo *bo = gem_to_radeon_bo(obj);
-	int ret;
-
-	ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages,
-			  &bo->dma_buf_vmap);
-	if (ret)
-		return ERR_PTR(ret);
-
-	return bo->dma_buf_vmap.virtual;
-}
-
-void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	struct radeon_bo *bo = gem_to_radeon_bo(obj);
-
-	ttm_bo_kunmap(&bo->dma_buf_vmap);
-}
-
 struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev,
 							struct dma_buf_attachment *attach,
 							struct sg_table *sg)
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
index 7d5ebb10323b..7971f57436dd 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
@@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm,
 	return ERR_PTR(ret);
 }
 
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj)
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
-	if (rk_obj->pages)
-		return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
-			    pgprot_writecombine(PAGE_KERNEL));
+	if (rk_obj->pages) {
+		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+				  pgprot_writecombine(PAGE_KERNEL));
+		if (!vaddr)
+			return -ENOMEM;
+		dma_buf_map_set_vaddr(map, vaddr);
+		return 0;
+	}
 
 	if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING)
-		return NULL;
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, rk_obj->kvaddr);
 
-	return rk_obj->kvaddr;
+	return 0;
 }
 
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
 	if (rk_obj->pages) {
-		vunmap(vaddr);
+		vunmap(map->vaddr);
 		return;
 	}
 
diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
index 7ffc541bea07..5a70a56cd406 100644
--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
+++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h
@@ -31,8 +31,8 @@ struct drm_gem_object *
 rockchip_gem_prime_import_sg_table(struct drm_device *dev,
 				   struct dma_buf_attachment *attach,
 				   struct sg_table *sg);
-void *rockchip_gem_prime_vmap(struct drm_gem_object *obj);
-void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
+int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm driver mmap file operations */
 int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma);
diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c
index 744a8e337e41..c02e35ed6e76 100644
--- a/drivers/gpu/drm/tiny/cirrus.c
+++ b/drivers/gpu/drm/tiny/cirrus.c
@@ -17,6 +17,7 @@
  */
 
 #include <linux/console.h>
+#include <linux/dma-buf-map.h>
 #include <linux/module.h>
 #include <linux/pci.h>
 
@@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 			       struct drm_rect *rect)
 {
 	struct cirrus_device *cirrus = to_cirrus(fb->dev);
+	struct dma_buf_map map;
 	void *vmap;
 	int idx, ret;
 
@@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	if (!drm_dev_enter(&cirrus->dev, &idx))
 		goto out;
 
-	ret = -ENOMEM;
-	vmap = drm_gem_shmem_vmap(fb->obj[0]);
-	if (!vmap)
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret)
 		goto out_dev_exit;
+	vmap = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	if (cirrus->cpp == fb->format->cpp[0])
 		drm_fb_memcpy_dstclip(cirrus->vram,
@@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb,
 	else
 		WARN_ON_ONCE("cpp mismatch");
 
-	drm_gem_shmem_vunmap(fb->obj[0], vmap);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 	ret = 0;
 
 out_dev_exit:
diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c
index cc397671f689..12a890cea6e9 100644
--- a/drivers/gpu/drm/tiny/gm12u320.c
+++ b/drivers/gpu/drm/tiny/gm12u320.c
@@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 {
 	int block, dst_offset, len, remain, ret, x1, x2, y1, y2;
 	struct drm_framebuffer *fb;
+	struct dma_buf_map map;
 	void *vaddr;
 	u8 *src;
 
@@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 	y1 = gm12u320->fb_update.rect.y1;
 	y2 = gm12u320->fb_update.rect.y2;
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
-		GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
+		GM12U320_ERR("failed to vmap fb: %d\n", ret);
 		goto put_fb;
 	}
+	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	if (fb->obj[0]->import_attach) {
 		ret = dma_buf_begin_cpu_access(
@@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320)
 			GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret);
 	}
 vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 put_fb:
 	drm_framebuffer_put(fb);
 	gm12u320->fb_update.fb = NULL;
diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c
index fef43f4e3bac..42eeba1dfdbf 100644
--- a/drivers/gpu/drm/udl/udl_modeset.c
+++ b/drivers/gpu/drm/udl/udl_modeset.c
@@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	struct urb *urb;
 	struct drm_rect clip;
 	int log_bpp;
+	struct dma_buf_map map;
 	void *vaddr;
 
 	ret = udl_log_cpp(fb->format->cpp[0]);
@@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 			return ret;
 	}
 
-	vaddr = drm_gem_shmem_vmap(fb->obj[0]);
-	if (IS_ERR(vaddr)) {
+	ret = drm_gem_shmem_vmap(fb->obj[0], &map);
+	if (ret) {
 		DRM_ERROR("failed to vmap fb\n");
 		goto out_dma_buf_end_cpu_access;
 	}
+	vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	urb = udl_get_urb(dev);
 	if (!urb)
@@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y,
 	ret = 0;
 
 out_drm_gem_shmem_vunmap:
-	drm_gem_shmem_vunmap(fb->obj[0], vaddr);
+	drm_gem_shmem_vunmap(fb->obj[0], &map);
 out_dma_buf_end_cpu_access:
 	if (import_attach) {
 		tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf,
diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c
index 931c55126148..f268fb258c83 100644
--- a/drivers/gpu/drm/vboxvideo/vbox_mode.c
+++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c
@@ -9,6 +9,8 @@
  *          Michael Thayer <michael.thayer@oracle.com,
  *          Hans de Goede <hdegoede@redhat.com>
  */
+
+#include <linux/dma-buf-map.h>
 #include <linux/export.h>
 
 #include <drm/drm_atomic.h>
@@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	u32 height = plane->state->crtc_h;
 	size_t data_size, mask_size;
 	u32 flags;
+	struct dma_buf_map map;
+	int ret;
 	u8 *src;
 
 	/*
@@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 
 	vbox_crtc->cursor_enabled = true;
 
-	src = drm_gem_vram_vmap(gbo);
-	if (IS_ERR(src)) {
+	ret = drm_gem_vram_vmap(gbo, &map);
+	if (ret) {
 		/*
 		 * BUG: we should have pinned the BO in prepare_fb().
 		 */
@@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 		DRM_WARN("Could not map cursor bo, skipping update\n");
 		return;
 	}
+	src = map.vaddr; /* TODO: Use mapping abstraction properly */
 
 	/*
 	 * The mask must be calculated based on the alpha
@@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane,
 	data_size = width * height * 4 + mask_size;
 
 	copy_cursor_image(src, vbox->cursor_data, width, height, mask_size);
-	drm_gem_vram_vunmap(gbo, src);
+	drm_gem_vram_vunmap(gbo, &map);
 
 	flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE |
 		VBOX_MOUSE_POINTER_ALPHA;
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index 557f0d1e6437..f290a9a942dc 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
 	return drm_gem_cma_prime_mmap(obj, vma);
 }
 
-void *vc4_prime_vmap(struct drm_gem_object *obj)
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct vc4_bo *bo = to_vc4_bo(obj);
 
 	if (bo->validated_shader) {
 		DRM_DEBUG("mmaping of shader BOs not allowed.\n");
-		return ERR_PTR(-EINVAL);
+		return -EINVAL;
 	}
 
-	return drm_gem_cma_prime_vmap(obj);
+	return drm_gem_cma_prime_vmap(obj, map);
 }
 
 struct drm_gem_object *
diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
index cc79b1aaa878..904f2c36c963 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.h
+++ b/drivers/gpu/drm/vc4/vc4_drv.h
@@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma);
 struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev,
 						 struct dma_buf_attachment *attach,
 						 struct sg_table *sgt);
-void *vc4_prime_vmap(struct drm_gem_object *obj);
+int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 int vc4_bo_cache_init(struct drm_device *dev);
 void vc4_bo_cache_destroy(struct drm_device *dev);
 int vc4_bo_inc_usecnt(struct vc4_bo *bo);
diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
index fa54a6d1403d..b2aa26e1e4a2 100644
--- a/drivers/gpu/drm/vgem/vgem_drv.c
+++ b/drivers/gpu/drm/vgem/vgem_drv.c
@@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev,
 	return &obj->base;
 }
 
-static void *vgem_prime_vmap(struct drm_gem_object *obj)
+static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 	long n_pages = obj->size >> PAGE_SHIFT;
 	struct page **pages;
+	void *vaddr;
 
 	pages = vgem_pin_pages(bo);
 	if (IS_ERR(pages))
-		return NULL;
+		return PTR_ERR(pages);
+
+	vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
 
-	return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL));
+	return 0;
 }
 
-static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
+static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
 	struct drm_vgem_gem_object *bo = to_vgem_bo(obj);
 
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 	vgem_unpin_pages(bo);
 }
 
diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c
index 9890137bcb8d..0824327cc860 100644
--- a/drivers/gpu/drm/vkms/vkms_plane.c
+++ b/drivers/gpu/drm/vkms/vkms_plane.c
@@ -1,5 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0+
 
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>
@@ -146,15 +148,16 @@ static int vkms_prepare_fb(struct drm_plane *plane,
 			   struct drm_plane_state *state)
 {
 	struct drm_gem_object *gem_obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (!state->fb)
 		return 0;
 
 	gem_obj = drm_gem_fb_get_obj(state->fb, 0);
-	vaddr = drm_gem_shmem_vmap(gem_obj);
-	if (IS_ERR(vaddr))
-		DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr));
+	ret = drm_gem_shmem_vmap(gem_obj, &map);
+	if (ret)
+		DRM_ERROR("vmap failed: %d\n", ret);
 
 	return drm_gem_fb_prepare_fb(plane, state);
 }
@@ -164,13 +167,15 @@ static void vkms_cleanup_fb(struct drm_plane *plane,
 {
 	struct drm_gem_object *gem_obj;
 	struct drm_gem_shmem_object *shmem_obj;
+	struct dma_buf_map map;
 
 	if (!old_state->fb)
 		return;
 
 	gem_obj = drm_gem_fb_get_obj(old_state->fb, 0);
 	shmem_obj = to_drm_gem_shmem_obj(drm_gem_fb_get_obj(old_state->fb, 0));
-	drm_gem_shmem_vunmap(gem_obj, shmem_obj->vaddr);
+	dma_buf_map_set_vaddr(&map, shmem_obj->vaddr);
+	drm_gem_shmem_vunmap(gem_obj, &map);
 }
 
 static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = {
diff --git a/drivers/gpu/drm/vkms/vkms_writeback.c b/drivers/gpu/drm/vkms/vkms_writeback.c
index 26b903926872..67f80ab1e85f 100644
--- a/drivers/gpu/drm/vkms/vkms_writeback.c
+++ b/drivers/gpu/drm/vkms/vkms_writeback.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0+
 
-#include "vkms_drv.h"
+#include <linux/dma-buf-map.h>
+
 #include <drm/drm_fourcc.h>
 #include <drm/drm_writeback.h>
 #include <drm/drm_probe_helper.h>
@@ -8,6 +9,8 @@
 #include <drm/drm_gem_framebuffer_helper.h>
 #include <drm/drm_gem_shmem_helper.h>
 
+#include "vkms_drv.h"
+
 static const u32 vkms_wb_formats[] = {
 	DRM_FORMAT_XRGB8888,
 };
@@ -65,19 +68,20 @@ static int vkms_wb_prepare_job(struct drm_writeback_connector *wb_connector,
 			       struct drm_writeback_job *job)
 {
 	struct drm_gem_object *gem_obj;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (!job->fb)
 		return 0;
 
 	gem_obj = drm_gem_fb_get_obj(job->fb, 0);
-	vaddr = drm_gem_shmem_vmap(gem_obj);
-	if (IS_ERR(vaddr)) {
-		DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr));
-		return PTR_ERR(vaddr);
+	ret = drm_gem_shmem_vmap(gem_obj, &map);
+	if (ret) {
+		DRM_ERROR("vmap failed: %d\n", ret);
+		return ret;
 	}
 
-	job->priv = vaddr;
+	job->priv = map.vaddr;
 
 	return 0;
 }
@@ -87,12 +91,14 @@ static void vkms_wb_cleanup_job(struct drm_writeback_connector *connector,
 {
 	struct drm_gem_object *gem_obj;
 	struct vkms_device *vkmsdev;
+	struct dma_buf_map map;
 
 	if (!job->fb)
 		return;
 
 	gem_obj = drm_gem_fb_get_obj(job->fb, 0);
-	drm_gem_shmem_vunmap(gem_obj, job->priv);
+	dma_buf_map_set_vaddr(&map, job->priv);
+	drm_gem_shmem_vunmap(gem_obj, &map);
 
 	vkmsdev = drm_device_to_vkms_device(gem_obj->dev);
 	vkms_set_composer(&vkmsdev->output, false);
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index 4f34ef34ba60..74db5a840bed 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma)
 	return gem_mmap_obj(xen_obj, vma);
 }
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj)
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map)
 {
 	struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj);
+	void *vaddr;
 
 	if (!xen_obj->pages)
-		return NULL;
+		return -ENOMEM;
 
 	/* Please see comment in gem_mmap_obj on mapping and attributes. */
-	return vmap(xen_obj->pages, xen_obj->num_pages,
-		    VM_MAP, PAGE_KERNEL);
+	vaddr = vmap(xen_obj->pages, xen_obj->num_pages,
+		     VM_MAP, PAGE_KERNEL);
+	if (!vaddr)
+		return -ENOMEM;
+	dma_buf_map_set_vaddr(map, vaddr);
+
+	return 0;
 }
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr)
+				    struct dma_buf_map *map)
 {
-	vunmap(vaddr);
+	vunmap(map->vaddr);
 }
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h
index a39675fa31b2..a4e67d0a149c 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.h
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h
@@ -12,6 +12,7 @@
 #define __XEN_DRM_FRONT_GEM_H
 
 struct dma_buf_attachment;
+struct dma_buf_map;
 struct drm_device;
 struct drm_gem_object;
 struct file;
@@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj);
 
 int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma);
 
-void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj);
+int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj,
+				 struct dma_buf_map *map);
 
 void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj,
-				    void *vaddr);
+				    struct dma_buf_map *map);
 
 int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj,
 				 struct vm_area_struct *vma);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index c38dd35da00b..5e6daa1c982f 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -39,6 +39,7 @@
 
 #include <drm/drm_vma_manager.h>
 
+struct dma_buf_map;
 struct drm_gem_object;
 
 /**
@@ -138,7 +139,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void *(*vmap)(struct drm_gem_object *obj);
+	int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @vunmap:
@@ -148,7 +149,7 @@ struct drm_gem_object_funcs {
 	 *
 	 * This callback is optional.
 	 */
-	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
 
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
 #include <drm/ttm/ttm_bo_api.h>
 #include <drm/ttm/ttm_bo_driver.h>
 
+#include <linux/dma-buf-map.h>
 #include <linux/kernel.h> /* for container_of() */
 
 struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
 
 /**
  * struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem:	GEM object
  * @bo:		TTM buffer object
- * @kmap:	Mapping information for @bo
+ * @map:	Mapping information for @bo
  * @placement:	TTM placement information. Supported placements are \
 	%TTM_PL_VRAM and %TTM_PL_SYSTEM
  * @placements:	TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
  */
 struct drm_gem_vram_object {
 	struct ttm_buffer_object bo;
-	struct ttm_bo_kmap_obj kmap;
+	struct dma_buf_map map;
 
 	/**
-	 * @kmap_use_count:
+	 * @vmap_use_count:
 	 *
 	 * Reference count on the virtual address.
 	 * The address are un-mapped when the count reaches zero.
 	 */
-	unsigned int kmap_use_count;
+	unsigned int vmap_use_count;
 
 	/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
 	struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
 
 int drm_gem_vram_fill_create_dumb(struct drm_file *file,
 				  struct drm_device *dev,
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13844.34678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEZ-0003m6-1Q; Wed, 28 Oct 2020 19:35:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13844.34678; Wed, 28 Oct 2020 19:35:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEY-0003lP-QS; Wed, 28 Oct 2020 19:35:46 +0000
Received: by outflank-mailman (input) for mailman id 13844;
 Wed, 28 Oct 2020 19:35:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEX-0003Ov-LU
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10abae3a-2afb-49de-98b4-3347149e2aff;
 Wed, 28 Oct 2020 19:35:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EDFB5B910;
 Wed, 28 Oct 2020 19:35:26 +0000 (UTC)
X-Inumbo-ID: 10abae3a-2afb-49de-98b4-3347149e2aff
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 10abae3a-2afb-49de-98b4-3347149e2aff;
	Wed, 28 Oct 2020 19:35:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EDFB5B910;
	Wed, 28 Oct 2020 19:35:26 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>
Subject: [PATCH v6 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap()
Date: Wed, 28 Oct 2020 20:35:13 +0100
Message-Id: <20201028193521.2489-3-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function drm_gem_cma_prime_vunmap() is empty. Remove it before
changing the interface to use struct dma_buf_map.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Christian König <christian.koenig@amd.com>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_gem_cma_helper.c | 17 -----------------
 drivers/gpu/drm/vc4/vc4_bo.c         |  1 -
 include/drm/drm_gem_cma_helper.h     |  1 -
 3 files changed, 19 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
index 2165633c9b9e..d527485ea0b7 100644
--- a/drivers/gpu/drm/drm_gem_cma_helper.c
+++ b/drivers/gpu/drm/drm_gem_cma_helper.c
@@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj)
 }
 EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap);
 
-/**
- * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual
- *     address space
- * @obj: GEM object
- * @vaddr: kernel virtual address where the CMA GEM object was mapped
- *
- * This function removes a buffer exported via DRM PRIME from the kernel's
- * virtual address space. This is a no-op because CMA buffers cannot be
- * unmapped from kernel space. Drivers using the CMA helpers should set this
- * as their &drm_gem_object_funcs.vunmap callback.
- */
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr)
-{
-	/* Nothing to do */
-}
-EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap);
-
 static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = {
 	.free = drm_gem_cma_free_object,
 	.print_info = drm_gem_cma_print_info,
diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
index f432278173cd..557f0d1e6437 100644
--- a/drivers/gpu/drm/vc4/vc4_bo.c
+++ b/drivers/gpu/drm/vc4/vc4_bo.c
@@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = {
 	.export = vc4_prime_export,
 	.get_sg_table = drm_gem_cma_prime_get_sg_table,
 	.vmap = vc4_prime_vmap,
-	.vunmap = drm_gem_cma_prime_vunmap,
 	.vm_ops = &vc4_vm_ops,
 };
 
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index 2bfa2502607a..a064b0d1c480 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
 void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
-void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13846.34692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEe-0003w3-Eg; Wed, 28 Oct 2020 19:35:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13846.34692; Wed, 28 Oct 2020 19:35:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEe-0003vv-89; Wed, 28 Oct 2020 19:35:52 +0000
Received: by outflank-mailman (input) for mailman id 13846;
 Wed, 28 Oct 2020 19:35:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEc-0003Ov-Lj
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d1eadd6-f54b-4416-a4a6-84c042d89551;
 Wed, 28 Oct 2020 19:35:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3DA22B919;
 Wed, 28 Oct 2020 19:35:29 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kXrEc-0003Ov-Lj
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:50 +0000
X-Inumbo-ID: 7d1eadd6-f54b-4416-a4a6-84c042d89551
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7d1eadd6-f54b-4416-a4a6-84c042d89551;
	Wed, 28 Oct 2020 19:35:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 3DA22B919;
	Wed, 28 Oct 2020 19:35:29 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v6 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map
Date: Wed, 28 Oct 2020 20:35:18 +0100
Message-Id: <20201028193521.2489-8-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

GEM's vmap and vunmap interfaces now wrap memory pointers in struct
dma_buf_map.
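
As a rough illustration of what the wrapped interface passes around, here is a simplified, user-space stand-in for struct dma_buf_map and a few of its helpers. This is not the real definition from <linux/dma-buf-map.h>; the `__iomem` annotation on the I/O pointer is dropped and only the helpers referenced in this patch are modeled.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for the kernel's struct dma_buf_map; the real
 * struct lives in <linux/dma-buf-map.h> and annotates vaddr_iomem
 * with __iomem. For illustration only. */
struct dma_buf_map {
	union {
		void *vaddr;       /* mapping in system memory */
		void *vaddr_iomem; /* mapping in I/O memory */
	};
	bool is_iomem;
};

static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr)
{
	map->vaddr = vaddr;
	map->is_iomem = false;
}

static inline bool dma_buf_map_is_null(const struct dma_buf_map *map)
{
	return map->vaddr == NULL;
}

static inline void dma_buf_map_clear(struct dma_buf_map *map)
{
	map->vaddr = NULL;
	map->is_iomem = false;
}
```

With this shape, drm_gem_vmap() can fill in the caller's map and report errors by return code, and drm_gem_vunmap() can clear the map after unmapping, as the hunks below do.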

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_client.c   | 18 +++++++++++-------
 drivers/gpu/drm/drm_gem.c      | 26 +++++++++++++-------------
 drivers/gpu/drm/drm_internal.h |  5 +++--
 drivers/gpu/drm/drm_prime.c    | 14 ++++----------
 4 files changed, 31 insertions(+), 32 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 495f47d23d87..ac0082bed966 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -3,6 +3,7 @@
  * Copyright 2018 Noralf Trønnes
  */
 
+#include <linux/dma-buf-map.h>
 #include <linux/list.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
@@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
  */
 void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 {
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	if (buffer->vaddr)
 		return buffer->vaddr;
@@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	vaddr = drm_gem_vmap(buffer->gem);
-	if (IS_ERR(vaddr))
-		return vaddr;
+	ret = drm_gem_vmap(buffer->gem, &map);
+	if (ret)
+		return ERR_PTR(ret);
 
-	buffer->vaddr = vaddr;
+	buffer->vaddr = map.vaddr;
 
-	return vaddr;
+	return map.vaddr;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+
+	drm_gem_vunmap(buffer->gem, &map);
 	buffer->vaddr = NULL;
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index a89ad4570e3c..4d5fff4bd821 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj)
 		obj->funcs->unpin(obj);
 }
 
-void *drm_gem_vmap(struct drm_gem_object *obj)
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map;
 	int ret;
 
 	if (!obj->funcs->vmap)
-		return ERR_PTR(-EOPNOTSUPP);
+		return -EOPNOTSUPP;
 
-	ret = obj->funcs->vmap(obj, &map);
+	ret = obj->funcs->vmap(obj, map);
 	if (ret)
-		return ERR_PTR(ret);
-	else if (dma_buf_map_is_null(&map))
-		return ERR_PTR(-ENOMEM);
+		return ret;
+	else if (dma_buf_map_is_null(map))
+		return -ENOMEM;
 
-	return map.vaddr;
+	return 0;
 }
 
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);
-
-	if (!vaddr)
+	if (dma_buf_map_is_null(map))
 		return;
 
 	if (obj->funcs->vunmap)
-		obj->funcs->vunmap(obj, &map);
+		obj->funcs->vunmap(obj, map);
+
+	/* Always set the mapping to NULL. Callers may rely on this. */
+	dma_buf_map_clear(map);
 }
 
 /**
diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h
index 2bdac3557765..81d386b5b92a 100644
--- a/drivers/gpu/drm/drm_internal.h
+++ b/drivers/gpu/drm/drm_internal.h
@@ -33,6 +33,7 @@
 
 struct dentry;
 struct dma_buf;
+struct dma_buf_map;
 struct drm_connector;
 struct drm_crtc;
 struct drm_framebuffer;
@@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent,
 
 int drm_gem_pin(struct drm_gem_object *obj);
 void drm_gem_unpin(struct drm_gem_object *obj);
-void *drm_gem_vmap(struct drm_gem_object *obj);
-void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 /* drm_debugfs.c drm_debugfs_crc.c */
 #if defined(CONFIG_DEBUG_FS)
diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
index 89e2a2496734..cb8fbeeb731b 100644
--- a/drivers/gpu/drm/drm_prime.c
+++ b/drivers/gpu/drm/drm_prime.c
@@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf);
  *
  * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap
  * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling.
+ * The kernel virtual address is returned in map.
  *
- * Returns the kernel virtual address or NULL on failure.
+ * Returns 0 on success or a negative errno code otherwise.
  */
 int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
-	void *vaddr;
 
-	vaddr = drm_gem_vmap(obj);
-	if (IS_ERR(vaddr))
-		return PTR_ERR(vaddr);
-
-	dma_buf_map_set_vaddr(map, vaddr);
-
-	return 0;
+	return drm_gem_vmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vmap);
 
@@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map)
 {
 	struct drm_gem_object *obj = dma_buf->priv;
 
-	drm_gem_vunmap(obj, map->vaddr);
+	drm_gem_vunmap(obj, map);
 }
 EXPORT_SYMBOL(drm_gem_dmabuf_vunmap);
 
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:35:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:35:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13851.34704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEj-00044L-Lx; Wed, 28 Oct 2020 19:35:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13851.34704; Wed, 28 Oct 2020 19:35:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEj-000449-I3; Wed, 28 Oct 2020 19:35:57 +0000
Received: by outflank-mailman (input) for mailman id 13851;
 Wed, 28 Oct 2020 19:35:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEh-0003Ov-Lr
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a82e1208-235d-4c49-94d0-8ff2055b3c2d;
 Wed, 28 Oct 2020 19:35:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F08CAB91B;
 Wed, 28 Oct 2020 19:35:29 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kXrEh-0003Ov-Lr
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:35:55 +0000
X-Inumbo-ID: a82e1208-235d-4c49-94d0-8ff2055b3c2d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a82e1208-235d-4c49-94d0-8ff2055b3c2d;
	Wed, 28 Oct 2020 19:35:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id F08CAB91B;
	Wed, 28 Oct 2020 19:35:29 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v6 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map
Date: Wed, 28 Oct 2020 20:35:19 +0100
Message-Id: <20201028193521.2489-9-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Kernel DRM clients now store their framebuffer address in an instance
of struct dma_buf_map. Depending on the buffer's location, the address
refers to system or I/O memory.

Callers of drm_client_buffer_vmap() receive a copy of the mapping in
the supplied argument. The copy can be accessed and modified through
the dma_buf_map interfaces.
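
The copy semantics can be sketched in miniature. The structures below are simplified models, not the kernel definitions; the point is only that the caller's copy is independent of the mapping stored in the client buffer, so the caller may advance it during blit and draw operations.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal models of the kernel structures; names and layout are
 * simplified for illustration. */
struct dma_buf_map {
	void *vaddr;
	bool is_iomem;
};

struct drm_client_buffer {
	struct dma_buf_map map; /* internal mapping owned by the buffer */
};

/* Sketch of the new calling convention: the buffer's internal mapping
 * is copied into the caller-supplied argument. Modifying the copy does
 * not disturb the stored mapping, which vunmap continues to use. */
static int client_buffer_vmap(struct drm_client_buffer *buffer,
			      struct dma_buf_map *map_copy)
{
	*map_copy = buffer->map;
	return 0;
}
```

This is why, unlike other vmap interfaces, the returned value is not needed for the client's vunmap call.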

v6:
	* don't call page_to_phys() on framebuffers in I/O memory;
	  warn instead (Daniel)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 drivers/gpu/drm/drm_client.c    | 34 +++++++++++++++++++--------------
 drivers/gpu/drm/drm_fb_helper.c | 32 ++++++++++++++++++++-----------
 include/drm/drm_client.h        |  7 ++++---
 3 files changed, 45 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index ac0082bed966..fe573acf1067 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
 	struct drm_device *dev = buffer->client->dev;
 
-	drm_gem_vunmap(buffer->gem, buffer->vaddr);
+	drm_gem_vunmap(buffer->gem, &buffer->map);
 
 	if (buffer->gem)
 		drm_gem_object_put(buffer->gem);
@@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u
 /**
  * drm_client_buffer_vmap - Map DRM client buffer into address space
  * @buffer: DRM client buffer
+ * @map_copy: Returns the mapped memory's address
  *
  * This function maps a client buffer into kernel address space. If the
- * buffer is already mapped, it returns the mapping's address.
+ * buffer is already mapped, it returns the existing mapping's address.
  *
  * Client buffer mappings are not ref'counted. Each call to
  * drm_client_buffer_vmap() should be followed by a call to
  * drm_client_buffer_vunmap(); or the client buffer should be mapped
  * throughout its lifetime.
  *
+ * The returned address is a copy of the internal value. In contrast to
+ * other vmap interfaces, you don't need it for the client's vunmap
+ * function. So you can modify it at will during blit and draw operations.
+ *
  * Returns:
- *	The mapped memory's address
+ *	0 on success, or a negative errno code otherwise.
  */
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
+int
+drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy)
 {
-	struct dma_buf_map map;
+	struct dma_buf_map *map = &buffer->map;
 	int ret;
 
-	if (buffer->vaddr)
-		return buffer->vaddr;
+	if (dma_buf_map_is_set(map))
+		goto out;
 
 	/*
 	 * FIXME: The dependency on GEM here isn't required, we could
@@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer)
 	 * fd_install step out of the driver backend hooks, to make that
 	 * final step optional for internal users.
 	 */
-	ret = drm_gem_vmap(buffer->gem, &map);
+	ret = drm_gem_vmap(buffer->gem, map);
 	if (ret)
-		return ERR_PTR(ret);
+		return ret;
 
-	buffer->vaddr = map.vaddr;
+out:
+	*map_copy = *map;
 
-	return map.vaddr;
+	return 0;
 }
 EXPORT_SYMBOL(drm_client_buffer_vmap);
 
@@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap);
  */
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer)
 {
-	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr);
+	struct dma_buf_map *map = &buffer->map;
 
-	drm_gem_vunmap(buffer->gem, &map);
-	buffer->vaddr = NULL;
+	drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
 
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index c2f72bb6afb1..6ce0b9119ef2 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->vaddr + offset;
+	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
@@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 	struct drm_clip_rect *clip = &helper->dirty_clip;
 	struct drm_clip_rect clip_copy;
 	unsigned long flags;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	spin_lock_irqsave(&helper->dirty_lock, flags);
 	clip_copy = *clip;
@@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 
 		/* Generic fbdev uses a shadow buffer */
 		if (helper->buffer) {
-			vaddr = drm_client_buffer_vmap(helper->buffer);
-			if (IS_ERR(vaddr))
+			ret = drm_client_buffer_vmap(helper->buffer, &map);
+			if (ret)
 				return;
 			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
 		}
@@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 	struct drm_framebuffer *fb;
 	struct fb_info *fbi;
 	u32 format;
-	void *vaddr;
+	struct dma_buf_map map;
+	int ret;
 
 	drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
 		    sizes->surface_width, sizes->surface_height,
@@ -2096,14 +2098,22 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper,
 		fb_deferred_io_init(fbi);
 	} else {
 		/* buffer is mapped for HW framebuffer */
-		vaddr = drm_client_buffer_vmap(fb_helper->buffer);
-		if (IS_ERR(vaddr))
-			return PTR_ERR(vaddr);
+		ret = drm_client_buffer_vmap(fb_helper->buffer, &map);
+		if (ret)
+			return ret;
+		if (map.is_iomem)
+			fbi->screen_base = map.vaddr_iomem;
+		else
+			fbi->screen_buffer = map.vaddr;
 
-		fbi->screen_buffer = vaddr;
-		/* Shamelessly leak the physical address to user-space */
+		/*
+		 * Shamelessly leak the physical address to user-space. As
+		 * page_to_phys() is undefined for I/O memory, warn in this
+		 * case.
+		 */
 #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
-		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
+		if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0 &&
+		    !drm_WARN_ON_ONCE(dev, map.is_iomem))
 			fbi->fix.smem_start =
 				page_to_phys(virt_to_page(fbi->screen_buffer));
 #endif
diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h
index 7aaea665bfc2..f07f2fb02e75 100644
--- a/include/drm/drm_client.h
+++ b/include/drm/drm_client.h
@@ -3,6 +3,7 @@
 #ifndef _DRM_CLIENT_H_
 #define _DRM_CLIENT_H_
 
+#include <linux/dma-buf-map.h>
 #include <linux/lockdep.h>
 #include <linux/mutex.h>
 #include <linux/types.h>
@@ -141,9 +142,9 @@ struct drm_client_buffer {
 	struct drm_gem_object *gem;
 
 	/**
-	 * @vaddr: Virtual address for the buffer
+	 * @map: Virtual address for the buffer
 	 */
-	void *vaddr;
+	struct dma_buf_map map;
 
 	/**
 	 * @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);
 
 int drm_client_modeset_create(struct drm_client_dev *client);
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 19:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 19:36:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13854.34716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEo-0004BC-2b; Wed, 28 Oct 2020 19:36:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13854.34716; Wed, 28 Oct 2020 19:36:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrEn-0004B3-Tj; Wed, 28 Oct 2020 19:36:01 +0000
Received: by outflank-mailman (input) for mailman id 13854;
 Wed, 28 Oct 2020 19:36:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
 id 1kXrEm-0003Ov-M0
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:36:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9eceeee8-e238-4f09-8561-0c569856f370;
 Wed, 28 Oct 2020 19:35:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5A806B920;
 Wed, 28 Oct 2020 19:35:31 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ICLB=ED=suse.de=tzimmermann@srs-us1.protection.inumbo.net>)
	id 1kXrEm-0003Ov-M0
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 19:36:00 +0000
X-Inumbo-ID: 9eceeee8-e238-4f09-8561-0c569856f370
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 9eceeee8-e238-4f09-8561-0c569856f370;
	Wed, 28 Oct 2020 19:35:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 5A806B920;
	Wed, 28 Oct 2020 19:35:31 +0000 (UTC)
From: Thomas Zimmermann <tzimmermann@suse.de>
To: maarten.lankhorst@linux.intel.com,
	mripard@kernel.org,
	airlied@linux.ie,
	daniel@ffwll.ch,
	sam@ravnborg.org,
	alexander.deucher@amd.com,
	christian.koenig@amd.com,
	kraxel@redhat.com,
	l.stach@pengutronix.de,
	linux+etnaviv@armlinux.org.uk,
	christian.gmeiner@gmail.com,
	inki.dae@samsung.com,
	jy0922.shim@samsung.com,
	sw0312.kim@samsung.com,
	kyungmin.park@samsung.com,
	kgene@kernel.org,
	krzk@kernel.org,
	yuq825@gmail.com,
	bskeggs@redhat.com,
	robh@kernel.org,
	tomeu.vizoso@collabora.com,
	steven.price@arm.com,
	alyssa.rosenzweig@collabora.com,
	hjc@rock-chips.com,
	heiko@sntech.de,
	hdegoede@redhat.com,
	sean@poorly.run,
	eric@anholt.net,
	oleksandr_andrushchenko@epam.com,
	ray.huang@amd.com,
	sumit.semwal@linaro.org,
	emil.velikov@collabora.com,
	luben.tuikov@amd.com,
	apaneers@amd.com,
	linus.walleij@linaro.org,
	melissa.srw@gmail.com,
	chris@chris-wilson.co.uk,
	miaoqinglang@huawei.com
Cc: dri-devel@lists.freedesktop.org,
	amd-gfx@lists.freedesktop.org,
	virtualization@lists.linux-foundation.org,
	etnaviv@lists.freedesktop.org,
	linux-arm-kernel@lists.infradead.org,
	linux-samsung-soc@vger.kernel.org,
	lima@lists.freedesktop.org,
	nouveau@lists.freedesktop.org,
	spice-devel@lists.freedesktop.org,
	linux-rockchip@lists.infradead.org,
	xen-devel@lists.xenproject.org,
	linux-media@vger.kernel.org,
	linaro-mm-sig@lists.linaro.org,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Daniel Vetter <daniel.vetter@ffwll.ch>
Subject: [PATCH v6 10/10] drm/fb_helper: Support framebuffers in I/O memory
Date: Wed, 28 Oct 2020 20:35:21 +0100
Message-Id: <20201028193521.2489-11-tzimmermann@suse.de>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

At least sparc64 requires I/O-specific access to framebuffers. This
patch updates the fbdev console accordingly.

For drivers with direct access to the framebuffer memory, the callback
functions in struct fb_ops test for the type of memory and call the
respective fb_sys_ or fb_cfb_ functions. Read and write operations are
implemented internally by DRM's fbdev helper.

For drivers that employ a shadow buffer, fbdev's blit function retrieves
the framebuffer address as a struct dma_buf_map and uses the dma_buf_map
interfaces to access the buffer.

The bochs driver on sparc64 uses a workaround to flag the framebuffer as
I/O memory and avoid a HW exception. With the introduction of struct
dma_buf_map, this is not required any longer. The patch removes the
respective code from both bochs and fbdev.
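As an illustration of the shadow-buffer blit described above, here is a
minimal userspace model (editorial sketch, not the kernel code): the
destination "dma_buf_map" is reduced to a plain system-memory pointer,
whereas the real struct can also describe __iomem mappings, in which case
dma_buf_map_memcpy_to() falls back to memcpy_toio() internally.

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Userspace model of the blit loop in drm_fb_helper_dirty_blit_real(). */

struct clip_rect { int x1, y1, x2, y2; };

static void blit_clip(uint8_t *dst, const uint8_t *src,
		      size_t pitch, size_t cpp,
		      const struct clip_rect *clip)
{
	/* byte offset of the clip rectangle's first pixel */
	size_t offset = clip->y1 * pitch + clip->x1 * cpp;
	size_t len = (clip->x2 - clip->x1) * cpp;
	int y;

	src += offset;
	dst += offset;			/* models dma_buf_map_incr(dst, offset) */
	for (y = clip->y1; y < clip->y2; y++) {
		memcpy(dst, src, len);	/* models dma_buf_map_memcpy_to() */
		src += pitch;
		dst += pitch;		/* models dma_buf_map_incr(dst, pitch) */
	}
}
```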

v5:
	* implement fb_read/fb_write internally (Daniel, Sam)
v4:
	* move dma_buf_map changes into separate patch (Daniel)
	* TODO list: comment on fbdev updates (Daniel)

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Sam Ravnborg <sam@ravnborg.org>
Tested-by: Sam Ravnborg <sam@ravnborg.org>
---
 Documentation/gpu/todo.rst        |  19 ++-
 drivers/gpu/drm/bochs/bochs_kms.c |   1 -
 drivers/gpu/drm/drm_fb_helper.c   | 227 ++++++++++++++++++++++++++++--
 include/drm/drm_mode_config.h     |  12 --
 4 files changed, 230 insertions(+), 29 deletions(-)
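The I/O-memory read path added below (fb_read_screen_base()) bounces data
through a temporary kernel buffer, since user pages cannot be the direct
target of memcpy_fromio(). A hedged userspace model of that chunked loop,
with plain memcpy standing in for both memcpy_fromio() and copy_to_user():

```c
#include <stdlib.h>
#include <string.h>

#define CHUNK 4096	/* stands in for PAGE_SIZE */

/* Userspace model of the bounce-buffer loop in fb_read_screen_base(). */
static long bounced_read(char *dst, const char *src, size_t count)
{
	size_t alloc_size = count < CHUNK ? count : CHUNK;
	char *tmp = malloc(alloc_size);
	long ret = 0;

	if (!tmp)
		return -1;	/* -ENOMEM in the kernel */

	while (count) {
		size_t c = count < alloc_size ? count : alloc_size;

		memcpy(tmp, src, c);	/* models memcpy_fromio() */
		memcpy(dst, tmp, c);	/* models copy_to_user() */
		src += c;
		dst += c;
		ret += c;
		count -= c;
	}
	free(tmp);
	return ret;
}
```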

diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst
index 7e6fc3c04add..638b7f704339 100644
--- a/Documentation/gpu/todo.rst
+++ b/Documentation/gpu/todo.rst
@@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup()
 ------------------------------------------------
 
 Most drivers can use drm_fbdev_generic_setup(). Driver have to implement
-atomic modesetting and GEM vmap support. Current generic fbdev emulation
-expects the framebuffer in system memory (or system-like memory).
+atomic modesetting and GEM vmap support. Historically, generic fbdev emulation
+expected the framebuffer in system memory or system-like memory. By employing
+struct dma_buf_map, drivers with framebuffers in I/O memory can be supported
+as well.
 
 Contact: Maintainer of the driver you plan to convert
 
 Level: Intermediate
 
+Reimplement functions in drm_fbdev_fb_ops without fbdev
+-------------------------------------------------------
+
+A number of callback functions in drm_fbdev_fb_ops could benefit from
+being rewritten without dependencies on the fbdev module. Some of the
+helpers could further benefit from using struct dma_buf_map instead of
+raw pointers.
+
+Contact: Thomas Zimmermann <tzimmermann@suse.de>, Daniel Vetter
+
+Level: Advanced
+
+
 drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup
 -----------------------------------------------------------------
 
diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c
index 13d0d04c4457..853081d186d5 100644
--- a/drivers/gpu/drm/bochs/bochs_kms.c
+++ b/drivers/gpu/drm/bochs/bochs_kms.c
@@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs)
 	bochs->dev->mode_config.preferred_depth = 24;
 	bochs->dev->mode_config.prefer_shadow = 0;
 	bochs->dev->mode_config.prefer_shadow_fbdev = 1;
-	bochs->dev->mode_config.fbdev_use_iomem = true;
 	bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true;
 
 	bochs->dev->mode_config.funcs = &bochs_mode_funcs;
diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
index 6ce0b9119ef2..714ce3bd6221 100644
--- a/drivers/gpu/drm/drm_fb_helper.c
+++ b/drivers/gpu/drm/drm_fb_helper.c
@@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work)
 }
 
 static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper,
-					  struct drm_clip_rect *clip)
+					  struct drm_clip_rect *clip,
+					  struct dma_buf_map *dst)
 {
 	struct drm_framebuffer *fb = fb_helper->fb;
 	unsigned int cpp = fb->format->cpp[0];
 	size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp;
 	void *src = fb_helper->fbdev->screen_buffer + offset;
-	void *dst = fb_helper->buffer->map.vaddr + offset;
 	size_t len = (clip->x2 - clip->x1) * cpp;
 	unsigned int y;
 
-	for (y = clip->y1; y < clip->y2; y++) {
-		if (!fb_helper->dev->mode_config.fbdev_use_iomem)
-			memcpy(dst, src, len);
-		else
-			memcpy_toio((void __iomem *)dst, src, len);
+	dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */
 
+	for (y = clip->y1; y < clip->y2; y++) {
+		dma_buf_map_memcpy_to(dst, src, len);
+		dma_buf_map_incr(dst, fb->pitches[0]);
 		src += fb->pitches[0];
-		dst += fb->pitches[0];
 	}
 }
 
@@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work)
 			ret = drm_client_buffer_vmap(helper->buffer, &map);
 			if (ret)
 				return;
-			drm_fb_helper_dirty_blit_real(helper, &clip_copy);
+			drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map);
 		}
+
 		if (helper->fb->funcs->dirty)
 			helper->fb->funcs->dirty(helper->fb, NULL, 0, 0,
 						 &clip_copy, 1);
@@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
 		return -ENODEV;
 }
 
+static bool drm_fbdev_use_iomem(struct fb_info *info)
+{
+	struct drm_fb_helper *fb_helper = info->par;
+	struct drm_client_buffer *buffer = fb_helper->buffer;
+
+	return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem;
+}
+
+static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count,
+				   loff_t pos)
+{
+	const char __iomem *src = info->screen_base + pos;
+	size_t alloc_size = min(count, PAGE_SIZE);
+	ssize_t ret = 0;
+	char *tmp;
+
+	tmp = kmalloc(alloc_size, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	while (count) {
+		size_t c = min(count, alloc_size);
+
+		memcpy_fromio(tmp, src, c);
+		if (copy_to_user(buf, tmp, c)) {
+			ret = -EFAULT;
+			break;
+		}
+
+		src += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(tmp);
+
+	return ret;
+}
+
+static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count,
+				     loff_t pos)
+{
+	const char *src = info->screen_buffer + pos;
+
+	if (copy_to_user(buf, src, count))
+		return -EFAULT;
+
+	return count;
+}
+
+static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf,
+				 size_t count, loff_t *ppos)
+{
+	loff_t pos = *ppos;
+	size_t total_size;
+	ssize_t ret;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	if (info->screen_size)
+		total_size = info->screen_size;
+	else
+		total_size = info->fix.smem_len;
+
+	if (pos >= total_size)
+		return 0;
+	if (count >= total_size)
+		count = total_size;
+	if (total_size - count < pos)
+		count = total_size - pos;
+
+	if (drm_fbdev_use_iomem(info))
+		ret = fb_read_screen_base(info, buf, count, pos);
+	else
+		ret = fb_read_screen_buffer(info, buf, count, pos);
+
+	if (ret > 0)
+		*ppos += ret;
+
+	return ret;
+}
+
+static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count,
+				    loff_t pos)
+{
+	char __iomem *dst = info->screen_base + pos;
+	size_t alloc_size = min(count, PAGE_SIZE);
+	ssize_t ret = 0;
+	u8 *tmp;
+
+	tmp = kmalloc(alloc_size, GFP_KERNEL);
+	if (!tmp)
+		return -ENOMEM;
+
+	while (count) {
+		size_t c = min(count, alloc_size);
+
+		if (copy_from_user(tmp, buf, c)) {
+			ret = -EFAULT;
+			break;
+		}
+		memcpy_toio(dst, tmp, c);
+
+		dst += c;
+		buf += c;
+		ret += c;
+		count -= c;
+	}
+
+	kfree(tmp);
+
+	return ret;
+}
+
+static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count,
+				      loff_t pos)
+{
+	char *dst = info->screen_buffer + pos;
+
+	if (copy_from_user(dst, buf, count))
+		return -EFAULT;
+
+	return count;
+}
+
+static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf,
+				  size_t count, loff_t *ppos)
+{
+	loff_t pos = *ppos;
+	size_t total_size;
+	ssize_t ret;
+	int err = 0;
+
+	if (info->state != FBINFO_STATE_RUNNING)
+		return -EPERM;
+
+	if (info->screen_size)
+		total_size = info->screen_size;
+	else
+		total_size = info->fix.smem_len;
+
+	if (pos > total_size)
+		return -EFBIG;
+	if (count > total_size) {
+		err = -EFBIG;
+		count = total_size;
+	}
+	if (total_size - count < pos) {
+		if (!err)
+			err = -ENOSPC;
+		count = total_size - pos;
+	}
+
+	/*
+	 * Copy to framebuffer even if we already logged an error. Emulates
+	 * the behavior of the original fbdev implementation.
+	 */
+	if (drm_fbdev_use_iomem(info))
+		ret = fb_write_screen_base(info, buf, count, pos);
+	else
+		ret = fb_write_screen_buffer(info, buf, count, pos);
+
+	if (ret > 0)
+		*ppos += ret;
+
+	if (err)
+		return err;
+
+	return ret;
+}
+
+static void drm_fbdev_fb_fillrect(struct fb_info *info,
+				  const struct fb_fillrect *rect)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_fillrect(info, rect);
+	else
+		drm_fb_helper_sys_fillrect(info, rect);
+}
+
+static void drm_fbdev_fb_copyarea(struct fb_info *info,
+				  const struct fb_copyarea *area)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_copyarea(info, area);
+	else
+		drm_fb_helper_sys_copyarea(info, area);
+}
+
+static void drm_fbdev_fb_imageblit(struct fb_info *info,
+				   const struct fb_image *image)
+{
+	if (drm_fbdev_use_iomem(info))
+		drm_fb_helper_cfb_imageblit(info, image);
+	else
+		drm_fb_helper_sys_imageblit(info, image);
+}
+
 static const struct fb_ops drm_fbdev_fb_ops = {
 	.owner		= THIS_MODULE,
 	DRM_FB_HELPER_DEFAULT_OPS,
@@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = {
 	.fb_release	= drm_fbdev_fb_release,
 	.fb_destroy	= drm_fbdev_fb_destroy,
 	.fb_mmap	= drm_fbdev_fb_mmap,
-	.fb_read	= drm_fb_helper_sys_read,
-	.fb_write	= drm_fb_helper_sys_write,
-	.fb_fillrect	= drm_fb_helper_sys_fillrect,
-	.fb_copyarea	= drm_fb_helper_sys_copyarea,
-	.fb_imageblit	= drm_fb_helper_sys_imageblit,
+	.fb_read	= drm_fbdev_fb_read,
+	.fb_write	= drm_fbdev_fb_write,
+	.fb_fillrect	= drm_fbdev_fb_fillrect,
+	.fb_copyarea	= drm_fbdev_fb_copyarea,
+	.fb_imageblit	= drm_fbdev_fb_imageblit,
 };
 
 static struct fb_deferred_io drm_fbdev_defio = {
diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h
index 5ffbb4ed5b35..ab424ddd7665 100644
--- a/include/drm/drm_mode_config.h
+++ b/include/drm/drm_mode_config.h
@@ -877,18 +877,6 @@ struct drm_mode_config {
 	 */
 	bool prefer_shadow_fbdev;
 
-	/**
-	 * @fbdev_use_iomem:
-	 *
-	 * Set to true if framebuffer reside in iomem.
-	 * When set to true memcpy_toio() is used when copying the framebuffer in
-	 * drm_fb_helper.drm_fb_helper_dirty_blit_real().
-	 *
-	 * FIXME: This should be replaced with a per-mapping is_iomem
-	 * flag (like ttm does), and then used everywhere in fbdev code.
-	 */
-	bool fbdev_use_iomem;
-
 	/**
 	 * @quirk_addfb_prefer_xbgr_30bpp:
 	 *
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed Oct 28 20:11:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 20:11:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13896.34735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrmc-000848-7L; Wed, 28 Oct 2020 20:10:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13896.34735; Wed, 28 Oct 2020 20:10:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXrmc-000841-4O; Wed, 28 Oct 2020 20:10:58 +0000
Received: by outflank-mailman (input) for mailman id 13896;
 Wed, 28 Oct 2020 20:10:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ghuf=ED=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXrma-00083w-UK
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 20:10:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44efe205-5865-4c96-8ed8-2ebea7efccaa;
 Wed, 28 Oct 2020 20:10:56 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0786F247CC;
 Wed, 28 Oct 2020 20:10:54 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ghuf=ED=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kXrma-00083w-UK
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 20:10:56 +0000
X-Inumbo-ID: 44efe205-5865-4c96-8ed8-2ebea7efccaa
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 44efe205-5865-4c96-8ed8-2ebea7efccaa;
	Wed, 28 Oct 2020 20:10:56 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 0786F247CC;
	Wed, 28 Oct 2020 20:10:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603915855;
	bh=gEN5nLYxypA++AlL8jcsXUvpF47jM/kkqC0G/Ib/eYE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=syDTRoXvG+SY2bzAv9PSRfmS26YRgFQ36iZmFMk3gPzR4vYR2BPiZQrNDRVaqooy1
	 gQJMGyHUP34aUTmTBUxIi7i7qpCKM4+KI3HesEw4dWTzYkGpHhC/JXvqAv460WFr0Z
	 jgf4gRkFYJWDyWjJFGVBm8A4KyU1gJqt5WAv+eQE=
Date: Wed, 28 Oct 2020 13:10:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Bertrand Marquis <bertrand.marquis@arm.com>, 
    xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
In-Reply-To: <c6790d34-2893-78c4-d49f-7ef4acfceb96@xen.org>
Message-ID: <alpine.DEB.2.21.2010281309540.12247@sstabellini-ThinkPad-T480s>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com> <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com> <c6790d34-2893-78c4-d49f-7ef4acfceb96@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 28 Oct 2020, Julien Grall wrote:
> Hi Bertrand,
> 
> On 26/10/2020 16:21, Bertrand Marquis wrote:
> > When a Cortex A57 processor is affected by CPU errata 832075, a guest
> > not implementing the workaround for it could deadlock the system.
> > Add a warning during boot informing the user that only trusted guests
> > should be executed on the system.
> > An equivalent warning is already given to the user by KVM on cores
> > affected by this errata.
> > 
> > Also taint the hypervisor as unsecure when this errata applies and
> > mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
> > 
> > Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>
> 
> If you don't need to resend the series, then I would be happy to fix the typo
> pointed out by George on commit.

That's OK for me. While you are at it, could you also condense the 4
lines of the warning into 2 lines on commit as well?

Thanks,

Stefano


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 21:18:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 21:18:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13905.34748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXspF-0004qV-3r; Wed, 28 Oct 2020 21:17:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13905.34748; Wed, 28 Oct 2020 21:17:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXspF-0004qO-0Y; Wed, 28 Oct 2020 21:17:45 +0000
Received: by outflank-mailman (input) for mailman id 13905;
 Wed, 28 Oct 2020 21:17:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ghuf=ED=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXspE-0004qJ-9I
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 21:17:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 636638b4-d0ec-4fa4-9650-eb8a36ebcf21;
 Wed, 28 Oct 2020 21:17:43 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1C4002475E;
 Wed, 28 Oct 2020 21:17:42 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=ghuf=ED=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kXspE-0004qJ-9I
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 21:17:44 +0000
X-Inumbo-ID: 636638b4-d0ec-4fa4-9650-eb8a36ebcf21
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 636638b4-d0ec-4fa4-9650-eb8a36ebcf21;
	Wed, 28 Oct 2020 21:17:43 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 1C4002475E;
	Wed, 28 Oct 2020 21:17:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603919862;
	bh=Ph8XcSSlJE5u0Y736qvOrXbypgMNew09pxLwuxA4xHw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=0y3+ShvRVvEGTexjrR0J3KR3tP6CIxCP+fR75LVZNzBedro+iprfmQvtYaVKThdgQ
	 pH8OqGZJiMQgwmiLwd7UxI+41Qkjlhi22JpEhxk/tY03BBTPFGIOuHpkqJ6Ur/Jmzp
	 Z3MmXYsi6RxL3kQK6u7g/vFq6+2ypmaW9N7Rh3Pg=
Date: Wed, 28 Oct 2020 14:17:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>
cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] meson.build: fix building of Xen support for aarch64
In-Reply-To: <20201028174406.23424-1-alex.bennee@linaro.org>
Message-ID: <alpine.DEB.2.21.2010281406080.12247@sstabellini-ThinkPad-T480s>
References: <20201028174406.23424-1-alex.bennee@linaro.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-857674193-1603919862=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-857674193-1603919862=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 28 Oct 2020, Alex Bennée wrote:
> Xen is supported on aarch64 although weirdly using the i386-softmmu
> model. Checking based on the host CPU meant we never enabled Xen
> support. It would be nice to enable CONFIG_XEN for aarch64-softmmu to
> make it not seem weird but that will require further build surgery.
> 
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Fixes: 8a19980e3f ("configure: move accelerator logic to meson")
> ---
>  meson.build | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/meson.build b/meson.build
> index 835424999d..f1fcbfed4c 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -81,6 +81,8 @@ if cpu in ['x86', 'x86_64']
>      'CONFIG_HVF': ['x86_64-softmmu'],
>      'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
>    }
> +elif cpu in [ 'arm', 'aarch64' ]
> +  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
>  endif

This looks very reasonable -- the patch makes sense.


However I have two questions, mostly for my own understanding. I tried
to repro the aarch64 build problem but it works at my end, even without
this patch. I wonder why. I suspect it works thanks to these lines in
meson.build:

  if not get_option('xen').disabled() and 'CONFIG_XEN_BACKEND' in config_host
    accelerators += 'CONFIG_XEN'
    have_xen_pci_passthrough = not get_option('xen_pci_passthrough').disabled() and targetos == 'linux'
  else
    have_xen_pci_passthrough = false
  endif

But I am not entirely sure who is adding CONFIG_XEN_BACKEND to
config_host.


The other question is: does it make sense to print the value of
CONFIG_XEN as part of the summary? Something like:

diff --git a/meson.build b/meson.build
index 47e32e1fcb..c6e7832dc9 100644
--- a/meson.build
+++ b/meson.build
@@ -2070,6 +2070,7 @@ summary_info += {'KVM support':       config_all.has_key('CONFIG_KVM')}
 summary_info += {'HAX support':       config_all.has_key('CONFIG_HAX')}
 summary_info += {'HVF support':       config_all.has_key('CONFIG_HVF')}
 summary_info += {'WHPX support':      config_all.has_key('CONFIG_WHPX')}
+summary_info += {'XEN support':      config_all.has_key('CONFIG_XEN')}
 summary_info += {'TCG support':       config_all.has_key('CONFIG_TCG')}
 if config_all.has_key('CONFIG_TCG')
   summary_info += {'TCG debug enabled': config_host.has_key('CONFIG_DEBUG_TCG')}


But I realize there is already:

summary_info += {'xen support':       config_host.has_key('CONFIG_XEN_BACKEND')}

so it would be a bit of a duplicate.
--8323329-857674193-1603919862=:12247--


From xen-devel-bounces@lists.xenproject.org Wed Oct 28 21:31:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 28 Oct 2020 21:31:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13911.34760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXt2p-0006Wt-GC; Wed, 28 Oct 2020 21:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13911.34760; Wed, 28 Oct 2020 21:31:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXt2p-0006Wm-Bm; Wed, 28 Oct 2020 21:31:47 +0000
Received: by outflank-mailman (input) for mailman id 13911;
 Wed, 28 Oct 2020 21:31:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dNQY=ED=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kXt2n-0006Wh-E4
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 21:31:45 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d4f690e-340c-4788-bf34-781eb0f3043f;
 Wed, 28 Oct 2020 21:31:44 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=dNQY=ED=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
	id 1kXt2n-0006Wh-E4
	for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 21:31:45 +0000
X-Inumbo-ID: 9d4f690e-340c-4788-bf34-781eb0f3043f
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9d4f690e-340c-4788-bf34-781eb0f3043f;
	Wed, 28 Oct 2020 21:31:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603920703;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=QBnCna8eiqqIPimvDphfSIhfLU6HR9J6WeP6SsXBf/8=;
  b=eqgKbgA/aeH9gtKby9sjF+g+YVYWg22UDxFMKNqQcz3fD73i/8XRfll6
   OL41LG1rtKaHpwi4jU51ErEhiW40N0jqqDNc4WtgaR85nipb2eUJ1EU2q
   xATQAXvFt/QE+Y3RxyGZA9lcnr7elUu9q8a93JnMyOmJ6eIrlFkep7sn3
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: rntssEikuZwN3+9nk0j2xv3y/RbMhIv7b3LnGgSSZjJxkGg26dM0/lnzoppw2dHC8c7R1Zc2cc
 x/9A75bH1n20cooh7ZG1rXe71GiTlatBkVi8J+qMbXJENXAFoF1w/jBJWS0OjElKORCPyIfUsZ
 KrbZ0E5j5LttLC5vXeHVBVlD47dFE6S4kW+8llXnXUEO2cjgjNZstck5c8Us55iASDgvEvByCT
 Xm+nWelSZsSSpUO5L6SfcBRE7tQBiAqseBVR8+5R9gelh67Tx6q6ZB5d143IUfkN0E4/iZ0gn6
 M7M=
X-SBRS: None
X-MesageID: 30065052
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,428,1596513600"; 
   d="scan'208";a="30065052"
Subject: Re: [PATCH v3] x86/pv: inject #UD for entirely missing SYSCALL
 callbacks
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <0e76675b-c549-128e-449f-0c7a4df64f11@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0ac0f006-c529-2437-4286-62158c2c491b@citrix.com>
Date: Wed, 28 Oct 2020 21:31:33 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <0e76675b-c549-128e-449f-0c7a4df64f11@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 26/10/2020 09:40, Jan Beulich wrote:
> In the case that no 64-bit SYSCALL callback is registered, the guest
> will be crashed when 64-bit userspace executes a SYSCALL instruction,
> which would be a userspace => kernel DoS.  Similarly for 32-bit
> userspace when no 32-bit SYSCALL callback was registered either.
>
> This has been the case ever since the introduction of 64bit PV support,
> but behaves unlike all other SYSCALL/SYSENTER callbacks in Xen, which
> yield #GP/#UD in userspace before the callback is registered, and are
> therefore safe by default.
>
> This change does constitute a change in the PV ABI, for the corner case
> of a PV guest kernel not registering a 64-bit callback (which has to be
> considered a defacto requirement of the unwritten PV ABI, considering
> there is no PV equivalent of EFER.SCE).
>
> It brings the behaviour in line with PV32 SYSCALL/SYSENTER, and PV64
> SYSENTER (safe by default, until explicitly enabled).
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <JBeulich@suse.com>
> ---
> v3:
>  * Split this change off of "x86/pv: Inject #UD for missing SYSCALL
>    callbacks", to allow the uncontroversial part of that change to go
>    in. Add conditional "rex64" for UREGS_rip adjustment. (Is branching
>    over just the REX prefix too much trickery even for an unlikely to be
>    taken code path?)

I find this submission confusing, seeing as my v3 is already suitably
acked and ready to commit.  I just haven't yet had enough free time to
do so.

> v2:
>  * Drop unnecessary instruction suffixes
>  * Don't truncate #UD entrypoint to 32 bits
>
> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -33,11 +33,27 @@ ENTRY(switch_to_kernel)
>          cmoveq VCPU_syscall32_addr(%rbx),%rax
>          testq %rax,%rax
>          cmovzq VCPU_syscall_addr(%rbx),%rax
> -        movq  %rax,TRAPBOUNCE_eip(%rdx)
>          /* TB_flags = VGCF_syscall_disables_events ? TBF_INTERRUPT : 0 */
>          btl   $_VGCF_syscall_disables_events,VCPU_guest_context_flags(%rbx)
>          setc  %cl
>          leal  (,%rcx,TBF_INTERRUPT),%ecx
> +
> +        test  %rax, %rax
> +UNLIKELY_START(z, syscall_no_callback) /* TB_eip == 0 => #UD */
> +        mov   VCPU_trap_ctxt(%rbx), %rdi
> +        movl  $X86_EXC_UD, UREGS_entry_vector(%rsp)
> +        cmpw  $FLAT_USER_CS32, UREGS_cs(%rsp)
> +        je    0f
> +        rex64                           # subl => subq
> +0:
> +        subl  $2, UREGS_rip(%rsp)

There was deliberately not a 32bit sub here (see below).

As to the construct, I'm having a hard time deciding whether this is an
excellent idea, or far too clever for its own good.

Some basic perf testing shows that there is a visible difference in
execution time of these two paths, which means there is some
optimisation being missed in the frontend for the 32bit case.  However,
the difference is tiny in the grand scheme of things (about 0.4%
difference in aggregate time to execute a loop of this pattern with a
fixed number of iterations.)

However, the 32bit case isn't actually interesting here.  A guest can't
execute a SYSCALL instruction on/across the 4G->0 boundary because the
M2P is mapped NX up to the 4G boundary, so we can never reach this point
with %eip < 2.

Therefore, the 64bit-only form is the appropriate one to use, which
solves any question of cleverness, or potential decode stalls it causes.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 00:37:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 00:37:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13945.34795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXvwC-0005b0-SE; Thu, 29 Oct 2020 00:37:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13945.34795; Thu, 29 Oct 2020 00:37:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXvwC-0005at-PI; Thu, 29 Oct 2020 00:37:08 +0000
Received: by outflank-mailman (input) for mailman id 13945;
 Thu, 29 Oct 2020 00:37:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kXvwB-0005ao-0v
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 00:37:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 966e97ca-0bed-4ea2-b6bf-fdae3292d422;
 Thu, 29 Oct 2020 00:37:05 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 531BE20790;
 Thu, 29 Oct 2020 00:37:03 +0000 (UTC)

DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603931824;
	bh=3bwGDs8HGM41Ct8ot2Jb1S53HzcX5PtZz45jl85Gooo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jEJxpCMNa7H3g+F4u32OyRxqoe5FTJ2TdUHrN82DOsa/dsGzZcMB55Jl2TXVuxX8k
	 lzJYHvjtn0jSSZDxt/OheixiQmXpe7fGqXBnari+b3im4NA+oEloO7ZTwGCi6GIxwo
	 qIAfjHEjiPv5AEFI6i2WQVNE965Ikzby2jeDgIYw=
Date: Wed, 28 Oct 2020 17:37:02 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    roman@zededa.com, xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
In-Reply-To: <20201028015423.GA33407@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s>
References: <20201022021655.GA74011@mattapan.m5p.com> <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s> <20201023005629.GA83870@mattapan.m5p.com> <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s> <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s> <20201024053540.GA97417@mattapan.m5p.com> <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org> <20201026160316.GA20589@mattapan.m5p.com> <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
 <20201028015423.GA33407@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 27 Oct 2020, Elliott Mitchell wrote:
> On Mon, Oct 26, 2020 at 06:44:27PM +0000, Julien Grall wrote:
> > On 26/10/2020 16:03, Elliott Mitchell wrote:
> > > On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
> > >> On 24/10/2020 06:35, Elliott Mitchell wrote:
> > >>> ACPI has a distinct
> > >>> means of specifying a limited DMA-width; the above fails, because it
> > >>> assumes a *device-tree*.
> > >>
> > >> Do you know if it would be possible to infer from the ACPI static table
> > >> the DMA-width?
> > > 
> > > Yes, and it is.  Due to not knowing much about ACPI tables I don't know
> > > what the C code would look like though (problem is which documentation
> > > should I be looking at first?).
> > 
> > What you provided below is an excerpt of the DSDT. AFAIK, DSDT content 
> > is written in AML. So far the shortest implementation I have seen for 
> > the AML parser is around 5000 lines (see [1]). It might be possible to 
> > strip some of the code, although I think this will still probably be 
> > too big for a single workaround.
> > 
> > What I meant by "static table" is a table that looks like a structure 
> > and can be parsed in a few lines. If we can't find one containing the 
> > DMA window, then the next best solution is to find a way to identify 
> > the platform.
> > 
> > I don't know enough ACPI to know if this solution is possible. A good 
> > starter would probably be the ACPI spec [2].
> 
> Okay, that is worse than I had thought (okay, ACPI is impressively
> complex for something nominally firmware-level).
>
> There are strings in the present Tianocore implementation for Raspberry
> PI 4B which could be targeted.  Notably, the boot output listing the
> tables includes "RPIFDN", "RPIFDN RPI" and "RPIFDN RPI4" (I'm unsure
> how kosher these are, as this wasn't implemented nor blessed by the
> Raspberry PI Foundation).
> 
> I strongly dislike this approach as it soon turns the Xen project into a
> database of hardware.  This is already occurring with
> xen/arch/arm/platforms and I would love to do something about this.  On
> that thought, how about utilizing Xen's command-line for this purpose?

I don't think that a command line option is a good idea: it basically
punts the task of platform detection to users. Also, it means that
users will necessarily be forced to edit the u-boot script or grub
configuration file to boot.

Note that even if we introduced a new command line option, we wouldn't take
away the need for xen/arch/arm/platforms anyway.

It would be far better for Xen to autodetect the platform if we can.


> Have a procedure whereby, during installation/updates, DMA limitation
> information is retrieved from the running OS, and on the following boot
> Xen will apply the appropriate setup.  By its nature, Domain 0 will have
> the information needed; it just becomes an issue of how hard that is to
> retrieve...

Historically that is what we used to do for many things related to ACPI,
but unfortunately it leads to a pretty bad architecture where Xen
depends on Dom0 for booting rather than the other way around. (Dom0
should be the one requiring Xen for booting, given that Xen is higher
privilege and boots first.)


I think the best compromise is still to use an ACPI string to detect the
platform. For instance, would it be possible to use the OEMID fields in
RSDT, XSDT, FADT?  Possibly even a combination of them?
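If that route were taken, the check itself could be as small as a header-field compare. A minimal sketch, assuming a hypothetical is_rpi4() helper and only the leading fields of the standard ACPI table header; the OEMID strings are the ones Elliott quoted above and are illustrative only:

```c
#include <string.h>

/* Leading fields of the standard ACPI table header (per the ACPI spec);
 * only the fields needed for the OEMID check are spelled out here. */
struct acpi_table_header {
    char signature[4];
    unsigned int length;
    unsigned char revision;
    unsigned char checksum;
    char oem_id[6];        /* e.g. "RPIFDN" in the RPi4 Tianocore port */
    char oem_table_id[8];  /* e.g. "RPI4" (space-padded)               */
};

/* Hypothetical platform check: match the OEMID fields of e.g. the FADT. */
static int is_rpi4(const struct acpi_table_header *h)
{
    return memcmp(h->oem_id, "RPIFDN", 6) == 0 &&
           memcmp(h->oem_table_id, "RPI4", 4) == 0;
}
```

This only needs the static table headers, not an AML interpreter, which is the attraction of the OEMID approach.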

Another option might be to get the platform name from UEFI somehow. 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 01:25:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 01:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13954.34807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXwgY-0005yg-80; Thu, 29 Oct 2020 01:25:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13954.34807; Thu, 29 Oct 2020 01:25:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXwgY-0005yZ-4p; Thu, 29 Oct 2020 01:25:02 +0000
Received: by outflank-mailman (input) for mailman id 13954;
 Thu, 29 Oct 2020 01:25:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPBj=EE=linaro.org=masami.hiramatsu@srs-us1.protection.inumbo.net>)
 id 1kXwgW-0005yU-Bk
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 01:25:00 +0000
Received: from mail-yb1-xb44.google.com (unknown [2607:f8b0:4864:20::b44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 802c85e4-cb22-43fe-bb26-7fc1970de970;
 Thu, 29 Oct 2020 01:24:59 +0000 (UTC)
Received: by mail-yb1-xb44.google.com with SMTP id c129so806562yba.8
 for <xen-devel@lists.xenproject.org>; Wed, 28 Oct 2020 18:24:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=iqr6bSZbDIQ+xxqArcK/RJzLYSJYYDFEaIBFrTmTMA8=;
        b=uoQr1EMA0MpG0SFBA7p+4TUd2tItAZrNja56ZafkkeLGmz2/8Z66iIpq8WJNsQZknW
         pYdapNPbtkHxwmP1ZG4cELKEzD+xQuWpbclBeWjkH2b92WQAfwUV9OZ7uT5UTICPHabc
         4R15TEtkLbewUIleQddL7n0mn2kjdD0Zp30iwSffV/qmXRMbxOihohS0RMQzW4XZkM3f
         tQppSvyw+Pmejq8/HcbEFJZLo5gB62HUBEivGRMqDOo22Q6l5fx1COxgaHTsg66NLXfZ
         js8l/xsmMc6MR1ZfZONZzlXgMlMoUgMtdnT/0jTvpgJ4MdGQ5oQzBxCfu7v3pGvg5hKR
         qxgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=iqr6bSZbDIQ+xxqArcK/RJzLYSJYYDFEaIBFrTmTMA8=;
        b=ft8jI5c3oQbdrhEcd39RMAxuM3M/dEQDVCy7wKjYoyKPHx4aVvWrjriRpgqkcAUp4e
         eVmqIh2Xi6T1eLlrKjktpZnLLR8k9IGU7msRHM8wNXriyKubCXeGpVpnQ9f/FK9teEJu
         Mc211LS7d1NttfBRCs3Ma2D3UBOM1yXro5Te7hi3iUX3iOc5tsCP6I6uRsPTeoz0mm6A
         qlQuWnAW2jG1tvyO69gv5o638+P5+asChLUua/HdREfvHQByohYdxvKYBpp6B5qBXqMC
         QjWEIogJIExf/X60DQYRmM/mHIUhrQMIPSki5ZXf1FJyckCKSDZtQMv01813qkBRKPhs
         lHSQ==
X-Gm-Message-State: AOAM530ZV+iHXB+r7B9EnzD8Bax2y1tghswP0tlQxzhGd1r8hjHLU7RN
	vbTwUqGwrLYiJboRXrjW4TDYcB+ege/fV9F4/F+Frg==
X-Google-Smtp-Source: ABdhPJzvxyOp/9MQYQSYvvleLDxwqG+JUM4HIQMlnTcrDBZTttHHO92Ty0cBongdY30CJpouw3UhO0GIH5dBh4pK3aE=
X-Received: by 2002:a25:578a:: with SMTP id l132mr2569457ybb.200.1603934698904;
 Wed, 28 Oct 2020 18:24:58 -0700 (PDT)
MIME-Version: 1.0
References: <CAA93ih1bgSCNb9X8-NzGJfhFjRH5W5L2wAG0PHfQoUL4qHkZVA@mail.gmail.com>
 <87imaumubn.fsf@linaro.org>
In-Reply-To: <87imaumubn.fsf@linaro.org>
From: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Date: Thu, 29 Oct 2020 10:24:48 +0900
Message-ID: <CAA93ih2M3-Lr78C+GN346m=Ox7fbgNOBJ4JSNJFH-3YmyRYiWg@mail.gmail.com>
Subject: Re: [bug report] xen/arm64: singlestep doesn't work on Dom0
To: =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>
Cc: xen-devel@lists.xenproject.org, Masami Hiramatsu <mhiramat@kernel.org>, 
	Jassi Brar <jaswinder.singh@linaro.org>, 
	Stefano Stabellini <stefano.stabellini@linaro.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Alex,

On Wed, 28 Oct 2020 at 21:03, Alex Bennée <alex.bennee@linaro.org> wrote:
>
>
> Masami Hiramatsu <masami.hiramatsu@linaro.org> writes:
>
> > Hello,
> >
> > When I tested the kprobes on Dom0 kernel, it caused a kernel panic.
> >
> > Here is an example, to clarify the bug is in the single-stepping
> > (software-step exception), I added a printk() where the kprobes setup
> > single-stepping.
> >
> > root@develbox:~# cd /sys/kernel/debug/tracing/
> > root@develbox:/sys/kernel/debug/tracing# echo p vfs_read > kprobe_events
> > root@develbox:/sys/kernel/debug/tracing# echo 1 > events/kprobes/enable
> > root@develbox:/sys/kernel/debug/tracing# [  112.282480] kprobes:
> > singlestep ool insn at ffff800011785000    <--- This shows single
> > stepping buffer (trampoline)
> > [  112.288077] ------------[ cut here ]------------
> > [  112.292745] kernel BUG at arch/arm64/kernel/traps.c:406!
> > [  112.298129] Internal error: Oops - BUG: 0 [#1] SMP
> > [  112.302987] Modules linked in: fuse bridge stp llc binfmt_misc
> > nls_ascii nls_cp437 vfat fat ahci libahci libata hid_generic udlfb
> > scsi_mod aes_ce_blk crypto_simd evdev cryptd aes_ce_cipher usbhid
> > ghash_ce realtek gf128mul hid netsec sha2_ce mdio_devres i2c_algo_bit
> > of_mdio sha256_arm64 fb_ddc sha1_ce fixed_phy gpio_keys leds_gpio
> > libphy bpf_preload ip_tables x_tables autofs4 xhci_pci
> > xhci_pci_renesas xhci_hcd usbcore gpio_mb86s7x
> > [  112.341097] CPU: 13 PID: 1045 Comm: bash Not tainted 5.10.0-rc1+ #44
> > [  112.347515] Hardware name: Socionext Developer Box (DT)
> > [  112.352813] pstate: 00000085 (nzcv daIf -PAN -UAO -TCO BTYPE=--)
> > [  112.358897] pc : do_undefinstr+0x354/0x378
> > [  112.363053] lr : do_undefinstr+0x270/0x378
> > [  112.367218] sp : ffff8000122fbc50
> > [  112.370603] x29: ffff8000122fbc50 x28: ffff00084bc9e080
> > [  112.375985] x27: 0000000000000000 x26: 0000000000000000
> > [  112.381366] x25: 0000000000000000 x24: 0000000000000000
> > [  112.386748] x23: 0000000080000085 x22: ffff800011785004
> > [  112.392129] x21: ffff8000122fbe00 x20: ffff8000122fbcc0
> > [  112.397511] x19: ffff800011249988 x18: 0000000000000000
> > [  112.402892] x17: 0000000000000000 x16: 0000000000000000
> > [  112.408274] x15: 0000000000000000 x14: 0000000000000000
> > [  112.413655] x13: 0000000000000000 x12: 0000000000000000
> > [  112.419037] x11: 0000000000000000 x10: 0000000000000000
> > [  112.424426] x9 : ffff800010314614 x8 : 0000000000000000
> > [  112.429801] x7 : 0000000000000000 x6 : ffff8000122fbca8
> > [  112.435189] x5 : 0000000000000000 x4 : ffff800011400110
> > [  112.440564] x3 : 00000000d5300000 x2 : ffff800011255f78
> > [  112.445946] x1 : ffff800011400110 x0 : 0000000080000085
> > [  112.451328] Call trace:
> > [  112.453848]  do_undefinstr+0x354/0x378
> > [  112.457669]  el1_sync_handler+0xa8/0x138
> > [  112.461658]  el1_sync+0x7c/0x100
> > [  112.464958]  0xffff800011785004     /// <- Undefined instruction
> > error happens on the next instruction of the single-stepping buffer.
> > [  112.468172]  __arm64_sys_read+0x24/0x30
> > [  112.472078]  el0_svc_common.constprop.3+0x94/0x178
> > [  112.476936]  do_el0_svc+0x2c/0x98
> > [  112.480321]  el0_sync_handler+0x118/0x168
> > [  112.484407]  el0_sync+0x158/0x180
> > [  112.487789] Code: d2801400 17ffffbe a9025bf5 f9001bf7 (d4210000)
> > [  112.493951] ---[ end trace 3564a3bf75d1618c ]---
> >
> > So it seems that the Linux kernel couldn't catch the software-step exception.
> >
> > I confirmed the same kernel doesn't cause this error without Xen. I
> > guess Xen is not correctly setting the debug registers when the CPU
> > enters EL1.
>
> Having worked on the arm64 KVM debug logic I have a little familiarity
> with how it works on KVM.
>
> > (Or, would we handle debug exceptions in Xen and transfer it to EL1
> > OS? I'm not sure how it was designed)
>
> Xen looks as though it should be trapping a chunk of debug accesses:
>
>     /* Trap Debug and Performance Monitor accesses */
>     WRITE_SYSREG(HDCR_TDRA|HDCR_TDOSA|HDCR_TDA|HDCR_TPM|HDCR_TPMCR,
>                  MDCR_EL2);
>
> but it doesn't set HDCR_TDE so it won't re-route debug exceptions to EL2
> which should be the place that is dealing with them.

Would we need to re-route the debug exception to EL2? I thought it was
OK to send it to EL1 (Dom0 kernel).
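For concreteness, the quoted MDCR_EL2 fragment can be modeled with the architectural bit values; the sketch below shows what adding HDCR_TDE to the mask would look like. The bit positions are from the Arm ARM; the helper functions are hypothetical, not Xen code.

```c
#include <stdint.h>

/* MDCR_EL2 (aka HDCR) trap bits, per the Arm ARM. */
#define HDCR_TDRA  (1u << 11)  /* Trap debug ROM address register access */
#define HDCR_TDOSA (1u << 10)  /* Trap debug OS save/restore access      */
#define HDCR_TDA   (1u << 9)   /* Trap other debug register access       */
#define HDCR_TDE   (1u << 8)   /* Route debug exceptions to EL2          */
#define HDCR_TPM   (1u << 6)   /* Trap performance monitor access        */
#define HDCR_TPMCR (1u << 5)   /* Trap PMCR access                       */

/* The mask Xen writes today, per the quoted fragment. */
static uint32_t mdcr_current(void)
{
    return HDCR_TDRA | HDCR_TDOSA | HDCR_TDA | HDCR_TPM | HDCR_TPMCR;
}

/* The same mask with debug exceptions re-routed to EL2. */
static uint32_t mdcr_route_to_el2(void)
{
    return mdcr_current() | HDCR_TDE;
}
```

Whether routing to EL2 is actually wanted, versus delivering the software-step exception straight to EL1 as suggested above, is exactly the open question in this thread.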

> Also I can't see
> where the debug registers are saved/restored. In KVM we maintain a
> shadow copy of the guest debug register state while guest debugging is
> in effect as any breakpoints you want to trigger need to be copied
> across.

Hmm, so KVM will trap accesses to the debug registers in EL1?


> I also can't see where CPSR_SS or DBG_MDSCR_SS is set in the mdscr.
>
> FWIW most of the logic in KVM is contained in:
>
>   arch/arm64/kvm/debug.c
>
> with some smatterings of handling the traps and context swapping
> elsewhere in the code.

Thank you,

-- 
Masami Hiramatsu


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 02:12:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 02:12:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13960.34820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXxPi-00026B-Tg; Thu, 29 Oct 2020 02:11:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13960.34820; Thu, 29 Oct 2020 02:11:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXxPi-000264-Py; Thu, 29 Oct 2020 02:11:42 +0000
Received: by outflank-mailman (input) for mailman id 13960;
 Thu, 29 Oct 2020 02:11:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPBj=EE=linaro.org=masami.hiramatsu@srs-us1.protection.inumbo.net>)
 id 1kXxPh-00025z-Qh
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 02:11:41 +0000
Received: from mail-yb1-xb44.google.com (unknown [2607:f8b0:4864:20::b44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac91b00e-6eff-444d-a3f4-29f1296db876;
 Thu, 29 Oct 2020 02:11:40 +0000 (UTC)
Received: by mail-yb1-xb44.google.com with SMTP id i186so869623ybc.11
 for <xen-devel@lists.xenproject.org>; Wed, 28 Oct 2020 19:11:40 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=lc088nZoBzO5fGaktrxcU2f5uhW2KCUmmkhovFmnDRE=;
        b=p/PHugc93k5KhasO70GRjI+ime6/EhUDb4p+zZUF4bm+4tFbJ4QnzopkeTWJlmYDLS
         QS+DlhJP+2ssbjo4MKmTnhhyXTsr6mcpN+UyHbMZ+8GG39WoRdtQzyCJrV8ti13X4m+H
         iLtjHbkfv3LO6yZVix/+eKeLk7QvuWMjB7mRLnWbDajnmbnkinFWR9SyU79dz3TSxtmI
         3cF4ig9aBrunJF3J0e6zGIKMQ77/4heuw5DD1wN+YZHowY5XUf5bre76wS1nhuyOqLre
         JtRYReN+ctoxgm4WkwUa24llNcHO0AN6OkUFh/9i+qaWpsJW/8t67SVaoKiV+1051qi2
         s8nA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=lc088nZoBzO5fGaktrxcU2f5uhW2KCUmmkhovFmnDRE=;
        b=hOUBIq96EAXD7/CsXtX22ePX1g0X20D78QcsoxszYS8b7/4vPaVv/lOW7Dp+tpeTZ7
         OoiJZ7wOyL1kdVxvuWaFJbslpBkObmMY5i/UKgjQaO/jyH4qy6Xspwn6z1Op+mPcEuNl
         jgszlYcVsIcErBVuQ01EriDl9fCEv0YqMsCOfeXv05tjF5kBXcOMpeVz2Gtczq67V5pD
         aLJTWuYukQAfzFo02wQ8lxtyDFUpeJjeLzNjpz6YZeCPgS9W6svnmVewENo4PdnOyTMS
         LGSkPefKeZ5tNacsjaV12pXG+0JnSpwhWSx1i9tez27M4YU+VyEjum+8NCEIQ371MKVK
         jhyA==
X-Gm-Message-State: AOAM530WwOJ0E9KOP4tlTEvq8Av2dUnD2+2uYdAhe/r0k3dSr/MwsWtr
	gV3kv+MR0JQIuoeKbZRDgFAf2rlrnM2qBi/S3pVu3g==
X-Google-Smtp-Source: ABdhPJyA1ae5C1hgDl3PfWOToRM8A2j6q1B+Kp7Vbvjm/+coKjcmIlEUox3dKA5ObPMxt4zaoGwpxJ4CyeD7uhW7vuo=
X-Received: by 2002:a25:2d6:: with SMTP id 205mr2857987ybc.233.1603937500385;
 Wed, 28 Oct 2020 19:11:40 -0700 (PDT)
MIME-Version: 1.0
References: <20201028174406.23424-1-alex.bennee@linaro.org>
In-Reply-To: <20201028174406.23424-1-alex.bennee@linaro.org>
From: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Date: Thu, 29 Oct 2020 11:11:29 +0900
Message-ID: <CAA93ih1z4dr0EMUg0G2WHXuzcK1ghET-8RJ_UuuFeWbToSnz3A@mail.gmail.com>
Subject: Re: [PATCH] meson.build: fix building of Xen support for aarch64
To: =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, 
	Stefano Stabellini <sstabellini@kernel.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Alex,

On Thu, 29 Oct 2020 at 02:44, Alex Bennée <alex.bennee@linaro.org> wrote:
>
> Xen is supported on aarch64 although weirdly using the i386-softmmu
> model. Checking based on the host CPU meant we never enabled Xen
> support. It would be nice to enable CONFIG_XEN for aarch64-softmmu to
> make it not seem weird but that will require further build surgery.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Fixes: 8a19980e3f ("configure: move accelerator logic to meson")

Thanks for the fix; I confirmed that CONFIG_XEN=1 is set in the arm64 build.

Reviewed-by: Masami Hiramatsu <masami.hiramatsu@linaro.org>

Thank you,


> ---
>  meson.build | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/meson.build b/meson.build
> index 835424999d..f1fcbfed4c 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -81,6 +81,8 @@ if cpu in ['x86', 'x86_64']
>      'CONFIG_HVF': ['x86_64-softmmu'],
>      'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
>    }
> +elif cpu in [ 'arm', 'aarch64' ]
> +  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
>  endif
>
>  ##################
> --
> 2.20.1
>


--
Masami Hiramatsu


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 03:16:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 03:16:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13967.34832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXyPq-0007GL-T3; Thu, 29 Oct 2020 03:15:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13967.34832; Thu, 29 Oct 2020 03:15:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kXyPq-0007GE-Q1; Thu, 29 Oct 2020 03:15:54 +0000
Received: by outflank-mailman (input) for mailman id 13967;
 Thu, 29 Oct 2020 03:15:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kXyPp-0007G9-J9
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 03:15:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f6bc46c-404e-4710-ab67-e79b233a1820;
 Thu, 29 Oct 2020 03:15:47 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXyPj-000840-3o; Thu, 29 Oct 2020 03:15:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kXyPi-0007cY-Kv; Thu, 29 Oct 2020 03:15:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kXyPi-0007ir-JV; Thu, 29 Oct 2020 03:15:46 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T/44/7zdD58F90HUUhRzMM+Uzrmen2Sb09zSyvSRz0Y=; b=1VtRxPhwFNNZqBMZFRye8W4Vba
	nU4eKeP5vAK25by+Vr2XcsvxGQgaxaApk9efknapkhke5MbXKeP8uMBg/5uG5ppsEpIpxVrwNt8Wv
	0iWB0/FG3Z+jpMEqOiVxbitBxQnKbQ4r0dZO1V3I8gTUR0DLc2dcsU4M4GjH05ddOQZg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156264-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 156264: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10bb63c203f42d931fa1fa7dbbae7ce1765cecf2
X-Osstest-Versions-That:
    xen=7b1e587f25c2dda38236e48aae81729798f10663
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 03:15:46 +0000

flight 156264 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156264/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 156063

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156063
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156063
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156063
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156063
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156063
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156063
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 156063
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156063
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156063
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156063
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156063
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156063
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  10bb63c203f42d931fa1fa7dbbae7ce1765cecf2
baseline version:
 xen                  7b1e587f25c2dda38236e48aae81729798f10663

Last test of basis   156063  2020-10-21 06:01:03 Z    7 days
Testing same since   156264  2020-10-27 18:37:55 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7b1e587f25..10bb63c203  10bb63c203f42d931fa1fa7dbbae7ce1765cecf2 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 05:05:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 05:05:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13983.34858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY07J-0000Bb-1g; Thu, 29 Oct 2020 05:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13983.34858; Thu, 29 Oct 2020 05:04:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY07I-0000BM-Tj; Thu, 29 Oct 2020 05:04:52 +0000
Received: by outflank-mailman (input) for mailman id 13983;
 Thu, 29 Oct 2020 05:04:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kY07H-0000Ag-9U
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 05:04:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f8a6411-812d-403c-b59c-870e6d6bf810;
 Thu, 29 Oct 2020 05:04:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY077-0002Kr-T6; Thu, 29 Oct 2020 05:04:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY077-0006Ch-GW; Thu, 29 Oct 2020 05:04:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kY077-00089K-Fm; Thu, 29 Oct 2020 05:04:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kY07H-0000Ag-9U
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 05:04:51 +0000
X-Inumbo-ID: 4f8a6411-812d-403c-b59c-870e6d6bf810
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 4f8a6411-812d-403c-b59c-870e6d6bf810;
	Thu, 29 Oct 2020 05:04:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qPpn9bumYdpDEwxCkn5FFNIQh5tgfzywDJzVzuw149A=; b=nbRTP/aN97VPAYBv5b6qzycJ+B
	1Ax1kTXBD1Zv0PRLNA640E9mcKXWVHLmhV3d1RVFz2o0n7u46YNwne8upTXT9Fkjk+WvMlUS2z8M6
	QRHXwtfZKeNgMI6hMrjdnLWP+ZuQRXUKNKJKd842drpZBp6VZEQF7QWX68D0hx8mzeoQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY077-0002Kr-T6; Thu, 29 Oct 2020 05:04:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY077-0006Ch-GW; Thu, 29 Oct 2020 05:04:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY077-00089K-Fm; Thu, 29 Oct 2020 05:04:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156267-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156267: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=725ca3313a5b9cbef89eaa1c728567684f37990a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 05:04:41 +0000

flight 156267 qemu-mainline real [real]
flight 156286 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156267/
http://logs.test-lab.xenproject.org/osstest/logs/156286/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                725ca3313a5b9cbef89eaa1c728567684f37990a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   69 days
Failing since        152659  2020-08-21 14:07:39 Z   68 days  158 attempts
Testing same since   156267  2020-10-27 21:37:56 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
    Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frank Chang <frank.chang@sifive.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 52710 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 05:11:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 05:11:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13909.34870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY0DY-00014u-S3; Thu, 29 Oct 2020 05:11:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13909.34870; Thu, 29 Oct 2020 05:11:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY0DY-00014n-Os; Thu, 29 Oct 2020 05:11:20 +0000
Received: by outflank-mailman (input) for mailman id 13909;
 Wed, 28 Oct 2020 21:24:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oQrq=ED=kernel.org=arnd@srs-us1.protection.inumbo.net>)
 id 1kXsvm-0005kI-Bo
 for xen-devel@lists.xenproject.org; Wed, 28 Oct 2020 21:24:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9af3f642-ca11-4ed2-a90c-9bbbc905eaf1;
 Wed, 28 Oct 2020 21:24:29 +0000 (UTC)
Received: from localhost.localdomain
 (HSI-KBW-46-223-126-90.hsi.kabel-badenwuerttemberg.de [46.223.126.90])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 06BEB2483B;
 Wed, 28 Oct 2020 21:24:23 +0000 (UTC)
X-Inumbo-ID: 9af3f642-ca11-4ed2-a90c-9bbbc905eaf1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603920268;
	bh=74adyFANoxpq7wwxgmpMM8aBwaDT4mxr2cAgEihSj9Y=;
	h=From:To:Cc:Subject:Date:From;
	b=MvP2qo5b3UYqtfUKUeJsXhDvEKbg6KvcUe2ioGBxUOnMUqK+rjO34oR0n+DZTzvAr
	 eYPMX/c7ffNKn1EtNa89mNNH4FodNS1wdrSZ5tR7vIRpA7I9hqvX5vYihUon6OcUcw
	 DDjF1DjtQS1EZT43SQgk2OJfMP/zjCnOK4DfzDM8=
From: Arnd Bergmann <arnd@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>,
	Joerg Roedel <joro@8bytes.org>,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	kvm@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Date: Wed, 28 Oct 2020 22:20:54 +0100
Message-Id: <20201028212417.3715575-1-arnd@kernel.org>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

There are hundreds of warnings in a W=2 build about a local
variable shadowing the global 'apic' definition:

arch/x86/kvm/lapic.h:149:65: warning: declaration of 'apic' shadows a global declaration [-Wshadow]

Avoid this by renaming the global 'apic' variable to the more descriptive
'x86_system_apic'. It was originally called 'genapic', but both that
and the current 'apic' seem to be a little overly generic for a global
variable.

Fixes: c48f14966cc4 ("KVM: inline kvm_apic_present() and kvm_lapic_enabled()")
Fixes: c8d46cf06dc2 ("x86: rename 'genapic' to 'apic'")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
v2: rename the global instead of the local variable in the header

This is only tested in an allmodconfig build, after fixing up a
few mistakes in the original search & replace. It's likely that I
missed a few others, but this version should be sufficient to
decide whether this is a good idea in the first place, and whether
there is a better choice for the new name.
---
 arch/x86/hyperv/hv_apic.c             | 24 ++++++++++++------------
 arch/x86/hyperv/hv_spinlock.c         |  4 ++--
 arch/x86/include/asm/apic.h           | 18 +++++++++---------
 arch/x86/kernel/acpi/boot.c           |  2 +-
 arch/x86/kernel/apic/apic.c           | 18 +++++++++---------
 arch/x86/kernel/apic/apic_flat_64.c   |  8 ++++----
 arch/x86/kernel/apic/apic_numachip.c  |  4 ++--
 arch/x86/kernel/apic/bigsmp_32.c      |  2 +-
 arch/x86/kernel/apic/hw_nmi.c         |  2 +-
 arch/x86/kernel/apic/io_apic.c        | 19 ++++++++++---------
 arch/x86/kernel/apic/ipi.c            | 22 +++++++++++-----------
 arch/x86/kernel/apic/msi.c            |  2 +-
 arch/x86/kernel/apic/probe_32.c       | 20 ++++++++++----------
 arch/x86/kernel/apic/probe_64.c       | 12 ++++++------
 arch/x86/kernel/apic/vector.c         |  8 ++++----
 arch/x86/kernel/apic/x2apic_cluster.c |  3 ++-
 arch/x86/kernel/apic/x2apic_phys.c    |  2 +-
 arch/x86/kernel/apic/x2apic_uv_x.c    |  2 +-
 arch/x86/kernel/cpu/common.c          | 14 ++++++++------
 arch/x86/kernel/cpu/mce/inject.c      |  4 ++--
 arch/x86/kernel/cpu/topology.c        |  8 ++++----
 arch/x86/kernel/irq_work.c            |  2 +-
 arch/x86/kernel/kvm.c                 |  6 +++---
 arch/x86/kernel/nmi_selftest.c        |  2 +-
 arch/x86/kernel/smpboot.c             | 20 +++++++++++---------
 arch/x86/kernel/vsmp_64.c             |  2 +-
 arch/x86/kvm/vmx/vmx.c                |  2 +-
 arch/x86/mm/srat.c                    |  2 +-
 arch/x86/platform/uv/uv_irq.c         |  4 ++--
 arch/x86/platform/uv/uv_nmi.c         |  2 +-
 arch/x86/xen/apic.c                   |  8 ++++----
 drivers/iommu/amd/iommu.c             | 10 ++++++----
 drivers/iommu/intel/irq_remapping.c   |  4 ++--
 33 files changed, 135 insertions(+), 127 deletions(-)

diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
index 284e73661a18..33e2dc78ca11 100644
--- a/arch/x86/hyperv/hv_apic.c
+++ b/arch/x86/hyperv/hv_apic.c
@@ -259,14 +259,14 @@ void __init hv_apic_init(void)
 		/*
 		 * Set the IPI entry points.
 		 */
-		orig_apic = *apic;
-
-		apic->send_IPI = hv_send_ipi;
-		apic->send_IPI_mask = hv_send_ipi_mask;
-		apic->send_IPI_mask_allbutself = hv_send_ipi_mask_allbutself;
-		apic->send_IPI_allbutself = hv_send_ipi_allbutself;
-		apic->send_IPI_all = hv_send_ipi_all;
-		apic->send_IPI_self = hv_send_ipi_self;
+		orig_apic = *x86_system_apic;
+
+		x86_system_apic->send_IPI = hv_send_ipi;
+		x86_system_apic->send_IPI_mask = hv_send_ipi_mask;
+		x86_system_apic->send_IPI_mask_allbutself = hv_send_ipi_mask_allbutself;
+		x86_system_apic->send_IPI_allbutself = hv_send_ipi_allbutself;
+		x86_system_apic->send_IPI_all = hv_send_ipi_all;
+		x86_system_apic->send_IPI_self = hv_send_ipi_self;
 	}
 
 	if (ms_hyperv.hints & HV_X64_APIC_ACCESS_RECOMMENDED) {
@@ -285,10 +285,10 @@ void __init hv_apic_init(void)
 		 */
 		apic_set_eoi_write(hv_apic_eoi_write);
 		if (!x2apic_enabled()) {
-			apic->read      = hv_apic_read;
-			apic->write     = hv_apic_write;
-			apic->icr_write = hv_apic_icr_write;
-			apic->icr_read  = hv_apic_icr_read;
+			x86_system_apic->read      = hv_apic_read;
+			x86_system_apic->write     = hv_apic_write;
+			x86_system_apic->icr_write = hv_apic_icr_write;
+			x86_system_apic->icr_read  = hv_apic_icr_read;
 		}
 	}
 }
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index f3270c1fc48c..03c8ac56b430 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -20,7 +20,7 @@ static bool __initdata hv_pvspin = true;
 
 static void hv_qlock_kick(int cpu)
 {
-	apic->send_IPI(cpu, X86_PLATFORM_IPI_VECTOR);
+	x86_system_apic->send_IPI(cpu, X86_PLATFORM_IPI_VECTOR);
 }
 
 static void hv_qlock_wait(u8 *byte, u8 val)
@@ -64,7 +64,7 @@ PV_CALLEE_SAVE_REGS_THUNK(hv_vcpu_is_preempted);
 
 void __init hv_init_spinlocks(void)
 {
-	if (!hv_pvspin || !apic ||
+	if (!hv_pvspin || !x86_system_apic ||
 	    !(ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) ||
 	    !(ms_hyperv.features & HV_MSR_GUEST_IDLE_AVAILABLE)) {
 		pr_info("PV spinlocks disabled\n");
diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
index 4e3099d9ae62..43a30083af45 100644
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -361,7 +361,7 @@ struct apic {
  * always just one such driver in use - the kernel decides via an
  * early probing process which one it picks - and then sticks to it):
  */
-extern struct apic *apic;
+extern struct apic *x86_system_apic;
 
 /*
  * APIC drivers are probed based on how they are listed in the .apicdrivers
@@ -395,37 +395,37 @@ extern int lapic_can_unplug_cpu(void);
 
 static inline u32 apic_read(u32 reg)
 {
-	return apic->read(reg);
+	return x86_system_apic->read(reg);
 }
 
 static inline void apic_write(u32 reg, u32 val)
 {
-	apic->write(reg, val);
+	x86_system_apic->write(reg, val);
 }
 
 static inline void apic_eoi(void)
 {
-	apic->eoi_write(APIC_EOI, APIC_EOI_ACK);
+	x86_system_apic->eoi_write(APIC_EOI, APIC_EOI_ACK);
 }
 
 static inline u64 apic_icr_read(void)
 {
-	return apic->icr_read();
+	return x86_system_apic->icr_read();
 }
 
 static inline void apic_icr_write(u32 low, u32 high)
 {
-	apic->icr_write(low, high);
+	x86_system_apic->icr_write(low, high);
 }
 
 static inline void apic_wait_icr_idle(void)
 {
-	apic->wait_icr_idle();
+	x86_system_apic->wait_icr_idle();
 }
 
 static inline u32 safe_apic_wait_icr_idle(void)
 {
-	return apic->safe_wait_icr_idle();
+	return x86_system_apic->safe_wait_icr_idle();
 }
 
 extern void __init apic_set_eoi_write(void (*eoi_write)(u32 reg, u32 v));
@@ -494,7 +494,7 @@ static inline unsigned int read_apic_id(void)
 {
 	unsigned int reg = apic_read(APIC_ID);
 
-	return apic->get_apic_id(reg);
+	return x86_system_apic->get_apic_id(reg);
 }
 
 extern int default_apic_id_valid(u32 apicid);
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 7bdc0239a943..5e522cbc88dc 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -211,7 +211,7 @@ acpi_parse_x2apic(union acpi_subtable_headers *header, const unsigned long end)
 	 * to not preallocating memory for all NR_CPUS
 	 * when we use CPU hotplug.
 	 */
-	if (!apic->apic_id_valid(apic_id)) {
+	if (!x86_system_apic->apic_id_valid(apic_id)) {
 		if (enabled)
 			pr_warn(PREFIX "x2apic entry ignored\n");
 		return 0;
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index b3eef1d5c903..df9310b3cf0d 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -241,7 +241,7 @@ static int modern_apic(void)
 static void __init apic_disable(void)
 {
 	pr_info("APIC: switched to apic NOOP\n");
-	apic = &apic_noop;
+	x86_system_apic = &apic_noop;
 }
 
 void native_apic_wait_icr_idle(void)
@@ -519,7 +519,7 @@ static int lapic_timer_set_oneshot(struct clock_event_device *evt)
 static void lapic_timer_broadcast(const struct cpumask *mask)
 {
 #ifdef CONFIG_SMP
-	apic->send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
+	x86_system_apic->send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
 #endif
 }
 
@@ -1444,7 +1444,7 @@ static void lapic_setup_esr(void)
 		return;
 	}
 
-	if (apic->disable_esr) {
+	if (x86_system_apic->disable_esr) {
 		/*
 		 * Something untraceable is creating bad interrupts on
 		 * secondary quads ... for the moment, just leave the
@@ -1570,7 +1570,7 @@ static void setup_local_APIC(void)
 
 #ifdef CONFIG_X86_32
 	/* Pound the ESR really hard over the head with a big hammer - mbligh */
-	if (lapic_is_integrated() && apic->disable_esr) {
+	if (lapic_is_integrated() && x86_system_apic->disable_esr) {
 		apic_write(APIC_ESR, 0);
 		apic_write(APIC_ESR, 0);
 		apic_write(APIC_ESR, 0);
@@ -1581,17 +1581,17 @@ static void setup_local_APIC(void)
 	 * Double-check whether this APIC is really registered.
 	 * This is meaningless in clustered apic mode, so we skip it.
 	 */
-	BUG_ON(!apic->apic_id_registered());
+	BUG_ON(!x86_system_apic->apic_id_registered());
 
 	/*
 	 * Intel recommends to set DFR, LDR and TPR before enabling
 	 * an APIC.  See e.g. "AP-388 82489DX User's Manual" (Intel
 	 * document number 292116).  So here it goes...
 	 */
-	apic->init_apic_ldr();
+	x86_system_apic->init_apic_ldr();
 
 #ifdef CONFIG_X86_32
-	if (apic->dest_logical) {
+	if (x86_system_apic->dest_logical) {
 		int logical_apicid, ldr_apicid;
 
 		/*
@@ -2463,7 +2463,7 @@ int generic_processor_info(int apicid, int version)
 #endif
 #ifdef CONFIG_X86_32
 	early_per_cpu(x86_cpu_to_logical_apicid, cpu) =
-		apic->x86_32_early_logical_apicid(cpu);
+		x86_system_apic->x86_32_early_logical_apicid(cpu);
 #endif
 	set_cpu_possible(cpu, true);
 	physid_set(apicid, phys_cpu_present_map);
@@ -2499,7 +2499,7 @@ void __init apic_set_eoi_write(void (*eoi_write)(u32 reg, u32 v))
 static void __init apic_bsp_up_setup(void)
 {
 #ifdef CONFIG_X86_64
-	apic_write(APIC_ID, apic->set_apic_id(boot_cpu_physical_apicid));
+	apic_write(APIC_ID, x86_system_apic->set_apic_id(boot_cpu_physical_apicid));
 #else
 	/*
 	 * Hack: In case of kdump, after a crash, kernel might be booting
diff --git a/arch/x86/kernel/apic/apic_flat_64.c b/arch/x86/kernel/apic/apic_flat_64.c
index 7862b152a052..f8b38e712f7d 100644
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -20,8 +20,8 @@
 static struct apic apic_physflat;
 static struct apic apic_flat;
 
-struct apic *apic __ro_after_init = &apic_flat;
-EXPORT_SYMBOL_GPL(apic);
+struct apic *x86_system_apic __ro_after_init = &apic_flat;
+EXPORT_SYMBOL_GPL(x86_system_apic);
 
 static int flat_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 {
@@ -53,7 +53,7 @@ static void _flat_send_IPI_mask(unsigned long mask, int vector)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__default_send_IPI_dest_field(mask, vector, apic->dest_logical);
+	__default_send_IPI_dest_field(mask, vector, x86_system_apic->dest_logical);
 	local_irq_restore(flags);
 }
 
@@ -191,7 +191,7 @@ static void physflat_init_apic_ldr(void)
 
 static int physflat_probe(void)
 {
-	if (apic == &apic_physflat || num_possible_cpus() > 8 ||
+	if (x86_system_apic == &apic_physflat || num_possible_cpus() > 8 ||
 	    jailhouse_paravirt())
 		return 1;
 
diff --git a/arch/x86/kernel/apic/apic_numachip.c b/arch/x86/kernel/apic/apic_numachip.c
index 35edd57f064a..496af13cfe49 100644
--- a/arch/x86/kernel/apic/apic_numachip.c
+++ b/arch/x86/kernel/apic/apic_numachip.c
@@ -159,12 +159,12 @@ static void numachip_send_IPI_self(int vector)
 
 static int __init numachip1_probe(void)
 {
-	return apic == &apic_numachip1;
+	return x86_system_apic == &apic_numachip1;
 }
 
 static int __init numachip2_probe(void)
 {
-	return apic == &apic_numachip2;
+	return x86_system_apic == &apic_numachip2;
 }
 
 static void fixup_cpu_id(struct cpuinfo_x86 *c, int node)
diff --git a/arch/x86/kernel/apic/bigsmp_32.c b/arch/x86/kernel/apic/bigsmp_32.c
index 98d015a4405a..87e2a1a80964 100644
--- a/arch/x86/kernel/apic/bigsmp_32.c
+++ b/arch/x86/kernel/apic/bigsmp_32.c
@@ -176,7 +176,7 @@ void __init generic_bigsmp_probe(void)
 	if (!probe_bigsmp())
 		return;
 
-	apic = &apic_bigsmp;
+	x86_system_apic = &apic_bigsmp;
 
 	for_each_possible_cpu(cpu) {
 		if (early_per_cpu(x86_cpu_to_logical_apicid,
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 34a992e275ef..ca2f74d52653 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -31,7 +31,7 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh)
 #ifdef arch_trigger_cpumask_backtrace
 static void nmi_raise_cpu_backtrace(cpumask_t *mask)
 {
-	apic->send_IPI_mask(mask, NMI_VECTOR);
+	x86_system_apic->send_IPI_mask(mask, NMI_VECTOR);
 }
 
 void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 7b3c7e0d4a09..0ec96c77ae63 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1457,7 +1457,7 @@ void __init setup_ioapic_ids_from_mpc_nocheck(void)
 	 * This is broken; anything with a real cpu count has to
 	 * circumvent this idiocy regardless.
 	 */
-	apic->ioapic_phys_id_map(&phys_cpu_present_map, &phys_id_present_map);
+	x86_system_apic->ioapic_phys_id_map(&phys_cpu_present_map, &phys_id_present_map);
 
 	/*
 	 * Set the IOAPIC ID to the value stored in the MPC table.
@@ -1483,7 +1483,7 @@ void __init setup_ioapic_ids_from_mpc_nocheck(void)
 		 * system must have a unique ID or we get lots of nice
 		 * 'stuck on smp_invalidate_needed IPI wait' messages.
 		 */
-		if (apic->check_apicid_used(&phys_id_present_map,
+		if (x86_system_apic->check_apicid_used(&phys_id_present_map,
 					    mpc_ioapic_id(ioapic_idx))) {
 			printk(KERN_ERR "BIOS bug, IO-APIC#%d ID %d is already used!...\n",
 				ioapic_idx, mpc_ioapic_id(ioapic_idx));
@@ -1498,7 +1498,7 @@ void __init setup_ioapic_ids_from_mpc_nocheck(void)
 			ioapics[ioapic_idx].mp_config.apicid = i;
 		} else {
 			physid_mask_t tmp;
-			apic->apicid_to_cpu_present(mpc_ioapic_id(ioapic_idx),
+			x86_system_apic->apicid_to_cpu_present(mpc_ioapic_id(ioapic_idx),
 						    &tmp);
 			apic_printk(APIC_VERBOSE, "Setting %d in the "
 					"phys_id_present_map\n",
@@ -2462,7 +2462,8 @@ static int io_apic_get_unique_id(int ioapic, int apic_id)
 	 */
 
 	if (physids_empty(apic_id_map))
-		apic->ioapic_phys_id_map(&phys_cpu_present_map, &apic_id_map);
+		x86_system_apic->ioapic_phys_id_map(&phys_cpu_present_map,
+						    &apic_id_map);
 
 	raw_spin_lock_irqsave(&ioapic_lock, flags);
 	reg_00.raw = io_apic_read(ioapic, 0);
@@ -2478,10 +2479,10 @@ static int io_apic_get_unique_id(int ioapic, int apic_id)
 	 * Every APIC in a system must have a unique ID or we get lots of nice
 	 * 'stuck on smp_invalidate_needed IPI wait' messages.
 	 */
-	if (apic->check_apicid_used(&apic_id_map, apic_id)) {
+	if (x86_system_apic->check_apicid_used(&apic_id_map, apic_id)) {
 
 		for (i = 0; i < get_physical_broadcast(); i++) {
-			if (!apic->check_apicid_used(&apic_id_map, i))
+			if (!x86_system_apic->check_apicid_used(&apic_id_map, i))
 				break;
 		}
 
@@ -2494,7 +2495,7 @@ static int io_apic_get_unique_id(int ioapic, int apic_id)
 		apic_id = i;
 	}
 
-	apic->apicid_to_cpu_present(apic_id, &tmp);
+	x86_system_apic->apicid_to_cpu_present(apic_id, &tmp);
 	physids_or(apic_id_map, apic_id_map, tmp);
 
 	if (reg_00.bits.ID != apic_id) {
@@ -2948,8 +2949,8 @@ static void mp_setup_entry(struct irq_cfg *cfg, struct mp_chip_data *data,
 			   struct IO_APIC_route_entry *entry)
 {
 	memset(entry, 0, sizeof(*entry));
-	entry->delivery_mode = apic->irq_delivery_mode;
-	entry->dest_mode     = apic->irq_dest_mode;
+	entry->delivery_mode = x86_system_apic->irq_delivery_mode;
+	entry->dest_mode     = x86_system_apic->irq_dest_mode;
 	entry->dest	     = cfg->dest_apicid;
 	entry->vector	     = cfg->vector;
 	entry->trigger	     = data->trigger;
diff --git a/arch/x86/kernel/apic/ipi.c b/arch/x86/kernel/apic/ipi.c
index 387154e39e08..96e942c3a4a6 100644
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -52,9 +52,9 @@ void apic_send_IPI_allbutself(unsigned int vector)
 		return;
 
 	if (static_branch_likely(&apic_use_ipi_shorthand))
-		apic->send_IPI_allbutself(vector);
+		x86_system_apic->send_IPI_allbutself(vector);
 	else
-		apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
+		x86_system_apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
 }
 
 /*
@@ -68,12 +68,12 @@ void native_smp_send_reschedule(int cpu)
 		WARN(1, "sched: Unexpected reschedule of offline CPU#%d!\n", cpu);
 		return;
 	}
-	apic->send_IPI(cpu, RESCHEDULE_VECTOR);
+	x86_system_apic->send_IPI(cpu, RESCHEDULE_VECTOR);
 }
 
 void native_send_call_func_single_ipi(int cpu)
 {
-	apic->send_IPI(cpu, CALL_FUNCTION_SINGLE_VECTOR);
+	x86_system_apic->send_IPI(cpu, CALL_FUNCTION_SINGLE_VECTOR);
 }
 
 void native_send_call_func_ipi(const struct cpumask *mask)
@@ -85,14 +85,14 @@ void native_send_call_func_ipi(const struct cpumask *mask)
 			goto sendmask;
 
 		if (cpumask_test_cpu(cpu, mask))
-			apic->send_IPI_all(CALL_FUNCTION_VECTOR);
+			x86_system_apic->send_IPI_all(CALL_FUNCTION_VECTOR);
 		else if (num_online_cpus() > 1)
-			apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
+			x86_system_apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
 		return;
 	}
 
 sendmask:
-	apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
+	x86_system_apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
 }
 
 #endif /* CONFIG_SMP */
@@ -224,7 +224,7 @@ void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask,
  */
 void default_send_IPI_single(int cpu, int vector)
 {
-	apic->send_IPI_mask(cpumask_of(cpu), vector);
+	x86_system_apic->send_IPI_mask(cpumask_of(cpu), vector);
 }
 
 void default_send_IPI_allbutself(int vector)
@@ -260,7 +260,7 @@ void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
 	for_each_cpu(query_cpu, mask)
 		__default_send_IPI_dest_field(
 			early_per_cpu(x86_cpu_to_logical_apicid, query_cpu),
-			vector, apic->dest_logical);
+			vector, x86_system_apic->dest_logical);
 	local_irq_restore(flags);
 }
 
@@ -279,7 +279,7 @@ void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask,
 			continue;
 		__default_send_IPI_dest_field(
 			early_per_cpu(x86_cpu_to_logical_apicid, query_cpu),
-			vector, apic->dest_logical);
+			vector, x86_system_apic->dest_logical);
 		}
 	local_irq_restore(flags);
 }
@@ -297,7 +297,7 @@ void default_send_IPI_mask_logical(const struct cpumask *cpumask, int vector)
 
 	local_irq_save(flags);
 	WARN_ON(mask & ~cpumask_bits(cpu_online_mask)[0]);
-	__default_send_IPI_dest_field(mask, vector, apic->dest_logical);
+	__default_send_IPI_dest_field(mask, vector, x86_system_apic->dest_logical);
 	local_irq_restore(flags);
 }
 
diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
index 6313f0a05db7..5ccbd288841a 100644
--- a/arch/x86/kernel/apic/msi.c
+++ b/arch/x86/kernel/apic/msi.c
@@ -32,7 +32,7 @@ static void __irq_msi_compose_msg(struct irq_cfg *cfg, struct msi_msg *msg)
 
 	msg->address_lo =
 		MSI_ADDR_BASE_LO |
-		((apic->irq_dest_mode == 0) ?
+		((x86_system_apic->irq_dest_mode == 0) ?
 			MSI_ADDR_DEST_MODE_PHYSICAL :
 			MSI_ADDR_DEST_MODE_LOGICAL) |
 		MSI_ADDR_REDIRECTION_CPU |
diff --git a/arch/x86/kernel/apic/probe_32.c b/arch/x86/kernel/apic/probe_32.c
index 67b6f7c049ec..c007d4c130ab 100644
--- a/arch/x86/kernel/apic/probe_32.c
+++ b/arch/x86/kernel/apic/probe_32.c
@@ -113,8 +113,8 @@ static struct apic apic_default __ro_after_init = {
 
 apic_driver(apic_default);
 
-struct apic *apic __ro_after_init = &apic_default;
-EXPORT_SYMBOL_GPL(apic);
+struct apic *x86_system_apic __ro_after_init = &apic_default;
+EXPORT_SYMBOL_GPL(x86_system_apic);
 
 static int cmdline_apic __initdata;
 static int __init parse_apic(char *arg)
@@ -126,7 +126,7 @@ static int __init parse_apic(char *arg)
 
 	for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
 		if (!strcmp((*drv)->name, arg)) {
-			apic = *drv;
+			x86_system_apic = *drv;
 			cmdline_apic = 1;
 			return 0;
 		}
@@ -164,12 +164,12 @@ void __init default_setup_apic_routing(void)
 	 * - we find more than 8 CPUs in acpi LAPIC listing with xAPIC support
 	 */
 
-	if (!cmdline_apic && apic == &apic_default)
+	if (!cmdline_apic && x86_system_apic == &apic_default)
 		generic_bigsmp_probe();
 #endif
 
-	if (apic->setup_apic_routing)
-		apic->setup_apic_routing();
+	if (x86_system_apic->setup_apic_routing)
+		x86_system_apic->setup_apic_routing();
 }
 
 void __init generic_apic_probe(void)
@@ -179,7 +179,7 @@ void __init generic_apic_probe(void)
 
 		for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
 			if ((*drv)->probe()) {
-				apic = *drv;
+				x86_system_apic = *drv;
 				break;
 			}
 		}
@@ -187,7 +187,7 @@ void __init generic_apic_probe(void)
 		if (drv == __apicdrivers_end)
 			panic("Didn't find an APIC driver");
 	}
-	printk(KERN_INFO "Using APIC driver %s\n", apic->name);
+	printk(KERN_INFO "Using APIC driver %s\n", x86_system_apic->name);
 }
 
 /* This function can switch the APIC even after the initial ->probe() */
@@ -202,9 +202,9 @@ int __init default_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 			continue;
 
 		if (!cmdline_apic) {
-			apic = *drv;
+			x86_system_apic = *drv;
 			printk(KERN_INFO "Switched to APIC driver `%s'.\n",
-			       apic->name);
+			       x86_system_apic->name);
 		}
 		return 1;
 	}
diff --git a/arch/x86/kernel/apic/probe_64.c b/arch/x86/kernel/apic/probe_64.c
index c46720f185c0..280c1e755cef 100644
--- a/arch/x86/kernel/apic/probe_64.c
+++ b/arch/x86/kernel/apic/probe_64.c
@@ -24,10 +24,10 @@ void __init default_setup_apic_routing(void)
 
 	for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
 		if ((*drv)->probe && (*drv)->probe()) {
-			if (apic != *drv) {
-				apic = *drv;
+			if (x86_system_apic != *drv) {
+				x86_system_apic = *drv;
 				pr_info("Switched APIC routing to %s.\n",
-					apic->name);
+					x86_system_apic->name);
 			}
 			break;
 		}
@@ -40,10 +40,10 @@ int __init default_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 
 	for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
 		if ((*drv)->acpi_madt_oem_check(oem_id, oem_table_id)) {
-			if (apic != *drv) {
-				apic = *drv;
+			if (x86_system_apic != *drv) {
+				x86_system_apic = *drv;
 				pr_info("Setting APIC routing to %s.\n",
-					apic->name);
+					x86_system_apic->name);
 			}
 			return 1;
 		}
diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index 1eac53632786..f6c6cbb592e9 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -122,7 +122,7 @@ static void apic_update_irq_cfg(struct irq_data *irqd, unsigned int vector,
 	lockdep_assert_held(&vector_lock);
 
 	apicd->hw_irq_cfg.vector = vector;
-	apicd->hw_irq_cfg.dest_apicid = apic->calc_dest_apicid(cpu);
+	apicd->hw_irq_cfg.dest_apicid = x86_system_apic->calc_dest_apicid(cpu);
 	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));
 	trace_vector_config(irqd->irq, vector, cpu,
 			    apicd->hw_irq_cfg.dest_apicid);
@@ -800,7 +800,7 @@ static int apic_retrigger_irq(struct irq_data *irqd)
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&vector_lock, flags);
-	apic->send_IPI(apicd->cpu, apicd->vector);
+	x86_system_apic->send_IPI(apicd->cpu, apicd->vector);
 	raw_spin_unlock_irqrestore(&vector_lock, flags);
 
 	return 1;
@@ -876,7 +876,7 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_irq_move_cleanup)
 		 */
 		irr = apic_read(APIC_IRR + (vector / 32 * 0x10));
 		if (irr & (1U << (vector % 32))) {
-			apic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
+			x86_system_apic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
 			continue;
 		}
 		free_moved_vector(apicd);
@@ -894,7 +894,7 @@ static void __send_cleanup_vector(struct apic_chip_data *apicd)
 	cpu = apicd->prev_cpu;
 	if (cpu_online(cpu)) {
 		hlist_add_head(&apicd->clist, per_cpu_ptr(&cleanup_list, cpu));
-		apic->send_IPI(cpu, IRQ_MOVE_CLEANUP_VECTOR);
+		x86_system_apic->send_IPI(cpu, IRQ_MOVE_CLEANUP_VECTOR);
 	} else {
 		apicd->prev_vector = 0;
 	}
diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
index b0889c48a2ac..459e5b78edd5 100644
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -61,7 +61,8 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
 		if (!dest)
 			continue;
 
-		__x2apic_send_IPI_dest(dest, vector, apic->dest_logical);
+		__x2apic_send_IPI_dest(dest, vector,
+				       x86_system_apic->dest_logical);
 		/* Remove cluster CPUs from tmpmask */
 		cpumask_andnot(tmpmsk, tmpmsk, &cmsk->mask);
 	}
diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c
index bc9693841353..d4f8fa6cc67c 100644
--- a/arch/x86/kernel/apic/x2apic_phys.c
+++ b/arch/x86/kernel/apic/x2apic_phys.c
@@ -92,7 +92,7 @@ static int x2apic_phys_probe(void)
 	if (x2apic_mode && (x2apic_phys || x2apic_fadt_phys()))
 		return 1;
 
-	return apic == &apic_x2apic_phys;
+	return x86_system_apic == &apic_x2apic_phys;
 }
 
 /* Common x2apic functions, also used by x2apic_cluster */
diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
index 714233cee0b5..9038db84d567 100644
--- a/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -796,7 +796,7 @@ static void uv_send_IPI_self(int vector)
 
 static int uv_probe(void)
 {
-	return apic == &apic_x2apic_uv_x;
+	return x86_system_apic == &apic_x2apic_uv_x;
 }
 
 static struct apic apic_x2apic_uv_x __ro_after_init = {
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad8480c464..178817fda45e 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -780,7 +780,8 @@ void detect_ht(struct cpuinfo_x86 *c)
 		return;
 
 	index_msb = get_count_order(smp_num_siblings);
-	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb);
+	c->phys_proc_id = x86_system_apic->phys_pkg_id(c->initial_apicid,
+						       index_msb);
 
 	smp_num_siblings = smp_num_siblings / c->x86_max_cores;
 
@@ -788,8 +789,9 @@ void detect_ht(struct cpuinfo_x86 *c)
 
 	core_bits = get_count_order(c->x86_max_cores);
 
-	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid, index_msb) &
-				       ((1 << core_bits) - 1);
+	c->cpu_core_id = x86_system_apic->phys_pkg_id(c->initial_apicid,
+						      index_msb) &
+					       ((1 << core_bits) - 1);
 #endif
 }
 
@@ -1442,7 +1444,7 @@ static void generic_identify(struct cpuinfo_x86 *c)
 		c->initial_apicid = (cpuid_ebx(1) >> 24) & 0xFF;
 #ifdef CONFIG_X86_32
 # ifdef CONFIG_SMP
-		c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+		c->apicid = x86_system_apic->phys_pkg_id(c->initial_apicid, 0);
 # else
 		c->apicid = c->initial_apicid;
 # endif
@@ -1481,7 +1483,7 @@ static void validate_apic_and_package_id(struct cpuinfo_x86 *c)
 #ifdef CONFIG_SMP
 	unsigned int apicid, cpu = smp_processor_id();
 
-	apicid = apic->cpu_present_to_apicid(cpu);
+	apicid = x86_system_apic->cpu_present_to_apicid(cpu);
 
 	if (apicid != c->apicid) {
 		pr_err(FW_BUG "CPU%u: APIC id mismatch. Firmware: %x APIC: %x\n",
@@ -1535,7 +1537,7 @@ static void identify_cpu(struct cpuinfo_x86 *c)
 	apply_forced_caps(c);
 
 #ifdef CONFIG_X86_64
-	c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+	c->apicid = x86_system_apic->phys_pkg_id(c->initial_apicid, 0);
 #endif
 
 	/*
diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
index 3a44346f2276..c39ed41fa74a 100644
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -252,8 +252,8 @@ static void __maybe_unused raise_mce(struct mce *m)
 					mce_irq_ipi, NULL, 0);
 				preempt_enable();
 			} else if (m->inject_flags & MCJ_NMI_BROADCAST)
-				apic->send_IPI_mask(mce_inject_cpumask,
-						NMI_VECTOR);
+				x86_system_apic->send_IPI_mask(mce_inject_cpumask,
+							       NMI_VECTOR);
 		}
 		start = jiffies;
 		while (!cpumask_empty(mce_inject_cpumask)) {
diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
index d3a0791bc052..b4b0e1e77a61 100644
--- a/arch/x86/kernel/cpu/topology.c
+++ b/arch/x86/kernel/cpu/topology.c
@@ -137,16 +137,16 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
 	die_select_mask = (~(-1 << die_plus_mask_width)) >>
 				core_plus_mask_width;
 
-	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid,
+	c->cpu_core_id = x86_system_apic->phys_pkg_id(c->initial_apicid,
 				ht_mask_width) & core_select_mask;
-	c->cpu_die_id = apic->phys_pkg_id(c->initial_apicid,
+	c->cpu_die_id = x86_system_apic->phys_pkg_id(c->initial_apicid,
 				core_plus_mask_width) & die_select_mask;
-	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
+	c->phys_proc_id = x86_system_apic->phys_pkg_id(c->initial_apicid,
 				die_plus_mask_width);
 	/*
 	 * Reinit the apicid, now that we have extended initial_apicid.
 	 */
-	c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+	c->apicid = x86_system_apic->phys_pkg_id(c->initial_apicid, 0);
 
 	c->x86_max_cores = (core_level_siblings / smp_num_siblings);
 	__max_die_per_package = (die_level_siblings / core_level_siblings);
diff --git a/arch/x86/kernel/irq_work.c b/arch/x86/kernel/irq_work.c
index 890d4778cd35..7ae01529aa1d 100644
--- a/arch/x86/kernel/irq_work.c
+++ b/arch/x86/kernel/irq_work.c
@@ -28,7 +28,7 @@ void arch_irq_work_raise(void)
 	if (!arch_irq_work_has_interrupt())
 		return;
 
-	apic->send_IPI_self(IRQ_WORK_VECTOR);
+	x86_system_apic->send_IPI_self(IRQ_WORK_VECTOR);
 	apic_wait_icr_idle();
 }
 #endif
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 7f57ede3cb8e..a587ba6e844d 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -325,7 +325,7 @@ static notrace void kvm_guest_apic_eoi_write(u32 reg, u32 val)
 	 */
 	if (__test_and_clear_bit(KVM_PV_EOI_BIT, this_cpu_ptr(&kvm_apic_eoi)))
 		return;
-	apic->native_eoi_write(APIC_EOI, APIC_EOI_ACK);
+	x86_system_apic->native_eoi_write(APIC_EOI, APIC_EOI_ACK);
 }
 
 static void kvm_guest_cpu_init(void)
@@ -554,8 +554,8 @@ static void kvm_send_ipi_mask_allbutself(const struct cpumask *mask, int vector)
  */
 static void kvm_setup_pv_ipi(void)
 {
-	apic->send_IPI_mask = kvm_send_ipi_mask;
-	apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
+	x86_system_apic->send_IPI_mask = kvm_send_ipi_mask;
+	x86_system_apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
 	pr_info("setup PV IPIs\n");
 }
 
diff --git a/arch/x86/kernel/nmi_selftest.c b/arch/x86/kernel/nmi_selftest.c
index a1a96df3dff1..ee0f6dc7a2c9 100644
--- a/arch/x86/kernel/nmi_selftest.c
+++ b/arch/x86/kernel/nmi_selftest.c
@@ -75,7 +75,7 @@ static void __init test_nmi_ipi(struct cpumask *mask)
 	/* sync above data before sending NMI */
 	wmb();
 
-	apic->send_IPI_mask(mask, NMI_VECTOR);
+	x86_system_apic->send_IPI_mask(mask, NMI_VECTOR);
 
 	/* Don't wait longer than a second */
 	timeout = USEC_PER_SEC;
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 9a94934fae5f..51f2800949a9 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -756,7 +756,7 @@ wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip)
 	/* Target chip */
 	/* Boot on the stack */
 	/* Kick the second */
-	apic_icr_write(APIC_DM_NMI | apic->dest_logical, apicid);
+	apic_icr_write(APIC_DM_NMI | x86_system_apic->dest_logical, apicid);
 
 	pr_debug("Waiting for send to finish...\n");
 	send_status = safe_apic_wait_icr_idle();
@@ -983,7 +983,7 @@ wakeup_cpu_via_init_nmi(int cpu, unsigned long start_ip, int apicid,
 	if (!boot_error) {
 		enable_start_cpu0 = 1;
 		*cpu0_nmi_registered = 1;
-		if (apic->dest_logical == APIC_DEST_LOGICAL)
+		if (x86_system_apic->dest_logical == APIC_DEST_LOGICAL)
 			id = cpu0_logical_apicid;
 		else
 			id = apicid;
@@ -1080,8 +1080,9 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
 	 * Otherwise,
 	 * - Use an INIT boot APIC message for APs or NMI for BSP.
 	 */
-	if (apic->wakeup_secondary_cpu)
-		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
+	if (x86_system_apic->wakeup_secondary_cpu)
+		boot_error = x86_system_apic->wakeup_secondary_cpu(apicid,
+								   start_ip);
 	else
 		boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
 						     cpu0_nmi_registered);
@@ -1132,7 +1133,7 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
 
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
-	int apicid = apic->cpu_present_to_apicid(cpu);
+	int apicid = x86_system_apic->cpu_present_to_apicid(cpu);
 	int cpu0_nmi_registered = 0;
 	unsigned long flags;
 	int err, ret = 0;
@@ -1143,7 +1144,7 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 
 	if (apicid == BAD_APICID ||
 	    !physid_isset(apicid, phys_cpu_present_map) ||
-	    !apic->apic_id_valid(apicid)) {
+	    !x86_system_apic->apic_id_valid(apicid)) {
 		pr_err("%s: bad cpu %d\n", __func__, cpu);
 		return -EINVAL;
 	}
@@ -1280,7 +1281,7 @@ static void __init smp_sanity_check(void)
 	 * Should not be necessary because the MP table should list the boot
 	 * CPU too, but we do it for the sake of robustness anyway.
 	 */
-	if (!apic->check_phys_apicid_present(boot_cpu_physical_apicid)) {
+	if (!x86_system_apic->check_phys_apicid_present(boot_cpu_physical_apicid)) {
 		pr_notice("weird, boot CPU (#%d) not listed by the BIOS\n",
 			  boot_cpu_physical_apicid);
 		physid_set(hard_smp_processor_id(), phys_cpu_present_map);
@@ -1467,8 +1468,9 @@ __init void prefill_possible_map(void)
 			pr_warn("Boot CPU (id %d) not listed by BIOS\n", cpu);
 
 			/* Make sure boot cpu is enumerated */
-			if (apic->cpu_present_to_apicid(0) == BAD_APICID &&
-			    apic->apic_id_valid(apicid))
+			if (x86_system_apic->cpu_present_to_apicid(0) ==
+					BAD_APICID &&
+			    x86_system_apic->apic_id_valid(apicid))
 				generic_processor_info(apicid, boot_cpu_apic_version);
 		}
 
diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel/vsmp_64.c
index 796cfaa46bfa..8928932fdf8f 100644
--- a/arch/x86/kernel/vsmp_64.c
+++ b/arch/x86/kernel/vsmp_64.c
@@ -135,7 +135,7 @@ static int apicid_phys_pkg_id(int initial_apic_id, int index_msb)
 static void vsmp_apic_post_init(void)
 {
 	/* need to update phys_pkg_id */
-	apic->phys_pkg_id = apicid_phys_pkg_id;
+	x86_system_apic->phys_pkg_id = apicid_phys_pkg_id;
 }
 
 void __init vsmp_init(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d14c94d0aff1..be76843a6946 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3943,7 +3943,7 @@ static inline bool kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
 		 * which has no effect is safe here.
 		 */
 
-		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
+		x86_system_apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
 		return true;
 	}
 #endif
diff --git a/arch/x86/mm/srat.c b/arch/x86/mm/srat.c
index dac07e4f5834..be3958dbb2af 100644
--- a/arch/x86/mm/srat.c
+++ b/arch/x86/mm/srat.c
@@ -40,7 +40,7 @@ acpi_numa_x2apic_affinity_init(struct acpi_srat_x2apic_cpu_affinity *pa)
 		return;
 	pxm = pa->proximity_domain;
 	apic_id = pa->apic_id;
-	if (!apic->apic_id_valid(apic_id)) {
+	if (!x86_system_apic->apic_id_valid(apic_id)) {
 		printk(KERN_INFO "SRAT: PXM %u -> X2APIC 0x%04x ignored\n",
 			 pxm, apic_id);
 		return;
diff --git a/arch/x86/platform/uv/uv_irq.c b/arch/x86/platform/uv/uv_irq.c
index 18ca2261cc9a..2543148067b2 100644
--- a/arch/x86/platform/uv/uv_irq.c
+++ b/arch/x86/platform/uv/uv_irq.c
@@ -35,8 +35,8 @@ static void uv_program_mmr(struct irq_cfg *cfg, struct uv_irq_2_mmr_pnode *info)
 	mmr_value = 0;
 	entry = (struct uv_IO_APIC_route_entry *)&mmr_value;
 	entry->vector		= cfg->vector;
-	entry->delivery_mode	= apic->irq_delivery_mode;
-	entry->dest_mode	= apic->irq_dest_mode;
+	entry->delivery_mode	= x86_system_apic->irq_delivery_mode;
+	entry->dest_mode	= x86_system_apic->irq_dest_mode;
 	entry->polarity		= 0;
 	entry->trigger		= 0;
 	entry->mask		= 0;
diff --git a/arch/x86/platform/uv/uv_nmi.c b/arch/x86/platform/uv/uv_nmi.c
index eafc530c8767..afecd7e8de41 100644
--- a/arch/x86/platform/uv/uv_nmi.c
+++ b/arch/x86/platform/uv/uv_nmi.c
@@ -597,7 +597,7 @@ static void uv_nmi_nr_cpus_ping(void)
 	for_each_cpu(cpu, uv_nmi_cpu_mask)
 		uv_cpu_nmi_per(cpu).pinging = 1;
 
-	apic->send_IPI_mask(uv_nmi_cpu_mask, APIC_DM_NMI);
+	x86_system_apic->send_IPI_mask(uv_nmi_cpu_mask, APIC_DM_NMI);
 }
 
 /* Clean up flags for CPU's that ignored both NMI and ping */
diff --git a/arch/x86/xen/apic.c b/arch/x86/xen/apic.c
index e82fd1910dae..cb34b47e82c8 100644
--- a/arch/x86/xen/apic.c
+++ b/arch/x86/xen/apic.c
@@ -191,12 +191,12 @@ static struct apic xen_pv_apic = {
 
 static void __init xen_apic_check(void)
 {
-	if (apic == &xen_pv_apic)
+	if (x86_system_apic == &xen_pv_apic)
 		return;
 
-	pr_info("Switched APIC routing from %s to %s.\n", apic->name,
+	pr_info("Switched APIC routing from %s to %s.\n", x86_system_apic->name,
 		xen_pv_apic.name);
-	apic = &xen_pv_apic;
+	x86_system_apic = &xen_pv_apic;
 }
 void __init xen_init_apic(void)
 {
@@ -204,7 +204,7 @@ void __init xen_init_apic(void)
 	/* On PV guests the APIC CPUID bit is disabled so none of the
 	 * routines end up executing. */
 	if (!xen_initial_domain())
-		apic = &xen_pv_apic;
+		x86_system_apic = &xen_pv_apic;
 
 	x86_platform.apic_post_init = xen_apic_check;
 }
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index b9cf59443843..01a62ebb435d 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -3671,8 +3671,10 @@ static void irq_remapping_prepare_irte(struct amd_ir_data *data,
 
 	data->irq_2_irte.devid = devid;
 	data->irq_2_irte.index = index + sub_handle;
-	iommu->irte_ops->prepare(data->entry, apic->irq_delivery_mode,
-				 apic->irq_dest_mode, irq_cfg->vector,
+	iommu->irte_ops->prepare(data->entry,
+				 x86_system_apic->irq_delivery_mode,
+				 x86_system_apic->irq_dest_mode,
+				 irq_cfg->vector,
 				 irq_cfg->dest_apicid, devid);
 
 	switch (info->type) {
@@ -3943,8 +3945,8 @@ int amd_iommu_deactivate_guest_mode(void *data)
 	entry->hi.val = 0;
 
 	entry->lo.fields_remap.valid       = valid;
-	entry->lo.fields_remap.dm          = apic->irq_dest_mode;
-	entry->lo.fields_remap.int_type    = apic->irq_delivery_mode;
+	entry->lo.fields_remap.dm          = x86_system_apic->irq_dest_mode;
+	entry->lo.fields_remap.int_type    = x86_system_apic->irq_delivery_mode;
 	entry->hi.fields.vector            = cfg->vector;
 	entry->lo.fields_remap.destination =
 				APICID_TO_IRTE_DEST_LO(cfg->dest_apicid);
diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
index 0cfce1d3b7bb..21b70534b011 100644
--- a/drivers/iommu/intel/irq_remapping.c
+++ b/drivers/iommu/intel/irq_remapping.c
@@ -1113,7 +1113,7 @@ static void prepare_irte(struct irte *irte, int vector, unsigned int dest)
 	memset(irte, 0, sizeof(*irte));
 
 	irte->present = 1;
-	irte->dst_mode = apic->irq_dest_mode;
+	irte->dst_mode = x86_system_apic->irq_dest_mode;
 	/*
 	 * Trigger mode in the IRTE will always be edge, and for IO-APIC, the
 	 * actual level or edge trigger will be setup in the IO-APIC
@@ -1122,7 +1122,7 @@ static void prepare_irte(struct irte *irte, int vector, unsigned int dest)
 	 * irq migration in the presence of interrupt-remapping.
 	*/
 	irte->trigger_mode = 0;
-	irte->dlvry_mode = apic->irq_delivery_mode;
+	irte->dlvry_mode = x86_system_apic->irq_delivery_mode;
 	irte->vector = vector;
 	irte->dest_id = IRTE_DEST(dest);
 	irte->redir_hint = 1;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 05:20:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 05:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.13995.34885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY0MD-00022G-VI; Thu, 29 Oct 2020 05:20:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 13995.34885; Thu, 29 Oct 2020 05:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY0MD-000229-SF; Thu, 29 Oct 2020 05:20:17 +0000
Received: by outflank-mailman (input) for mailman id 13995;
 Thu, 29 Oct 2020 05:20:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FUbw=EE=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kY0MC-000224-MS
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 05:20:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3a6a871-87ce-4e23-90b9-39757f738e18;
 Thu, 29 Oct 2020 05:20:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 633F0AC24;
 Thu, 29 Oct 2020 05:20:13 +0000 (UTC)
X-Inumbo-ID: e3a6a871-87ce-4e23-90b9-39757f738e18
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603948813;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UqG24I0rIDcFpF42E+Xl4MYf85CfDEjpNOAxMWy7yb4=;
	b=POFoYiJsKF3XKRRcPa3mj6IftjlKH8yQUsF+2tuosfMxzNWkx9YN5SrzJP0EPin/vRFxcR
	K+YKk4vgrSICKAX2rW02LMFwBFnyKioIU21JOYH4JIKXrxBCAYZVkgzlkeGXESiSKEkdR6
	c7Vlhs9FlCXFZvs8GBN8ymtVFt638XA=
Subject: Re: Xen on RP4
To: Stefano Stabellini <sstabellini@kernel.org>,
 Elliott Mitchell <ehem+xen@m5p.com>
Cc: Julien Grall <julien@xen.org>, roman@zededa.com,
 xen-devel@lists.xenproject.org
References: <20201022021655.GA74011@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s>
 <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com>
 <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
 <20201026160316.GA20589@mattapan.m5p.com>
 <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
 <20201028015423.GA33407@mattapan.m5p.com>
 <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e885b2a9-f6ea-e224-b906-125936cfe550@suse.com>
Date: Thu, 29 Oct 2020 06:20:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.10.20 01:37, Stefano Stabellini wrote:
> On Tue, 27 Oct 2020, Elliott Mitchell wrote:
>> On Mon, Oct 26, 2020 at 06:44:27PM +0000, Julien Grall wrote:
>>> On 26/10/2020 16:03, Elliott Mitchell wrote:
>>>> On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
>>>>> On 24/10/2020 06:35, Elliott Mitchell wrote:
>>>>>> ACPI has a distinct
>>>>>> means of specifying a limited DMA-width; the above fails, because it
>>>>>> assumes a *device-tree*.
>>>>>
>>>>> Do you know if it would be possible to infer from the ACPI static table
>>>>> the DMA-width?
>>>>
>>>> Yes, and it is.  Due to not knowing much about ACPI tables I don't know
>>>> what the C code would look like though (problem is which documentation
>>>> should I be looking at first?).
>>>
>>> What you provided below is an excerpt of the DSDT. AFAIK, DSDT content
>>> is written in AML. So far the shortest implementation I have seen for
>>> the AML parser is around 5000 lines (see [1]). It might be possible to
>>> strip some of the code, although I think this will still probably be
>>> too big for a single workaround.
>>>
>>> What I meant by "static table" is a table that looks like a structure
>>> and can be parsed in a few lines. If we can't find one containing the
>>> DMA window, then the next best solution is to find a way to identify
>>> the platform.
>>>
>>> I don't know enough ACPI to know if this solution is possible. A good
>>> starter would probably be the ACPI spec [2].
>>
>> Okay, that is worse than I had thought (okay, ACPI is impressively
>> complex for something nominally firmware-level).
>>
>> There are strings in the present Tianocore implementation for Raspberry
>> PI 4B which could be targeted.  Notably included in the output during
>> boot listing the tables, "RPIFDN", "RPIFDN RPI" and "RPIFDN RPI4" (I'm
>> unsure how kosher these are as this wasn't implemented nor blessed by the
>> Raspberry PI Foundation).
>>
>> I strongly dislike this approach as it soon turns the Xen project into
>> a database of hardware.  This is already occurring with
>> xen/arch/arm/platforms and I would love to do something about this.  On
>> that thought, how about utilizing Xen's command-line for this purpose?
> 
> I don't think that a command line option is a good idea: basically it is
> punting the task of platform detection to users. Also, it means that
> users will necessarily be forced to edit the u-boot script or grub
> configuration file to boot.
> 
> Note that even if we introduced a new command line, we wouldn't take
> away the need for xen/arch/arm/platforms anyway.
> 
> It would be far better for Xen to autodetect the platform if we can.
> 
> 
>> Have a procedure where, during installation/updates, the DMA limitation
>> information is retrieved from the running OS, and on the following boot
>> Xen will apply the appropriate setup.  By its nature, Domain 0 will have
>> the information needed; it just becomes an issue of how hard that is to
>> retrieve...
> 
> Historically that is what we used to do for many things related to ACPI,
> but unfortunately it leads to a pretty bad architecture where Xen
> depends on Dom0 for booting rather than the other way around. (Dom0
> should be the one requiring Xen for booting, given that Xen is higher
> privilege and boots first.)
> 
> 
> I think the best compromise is still to use an ACPI string to detect the
> platform. For instance, would it be possible to use the OEMID fields in
> RSDT, XSDT, FADT?  Possibly even a combination of them?
> 
> Another option might be to get the platform name from UEFI somehow.
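For what it's worth, an OEMID-based platform check could be as small as the sketch below. This is only an illustration under assumptions: the struct follows the common ACPI system description table header layout, the "RPIFDN" OEMID is taken from the Tianocore boot output quoted earlier (unverified against real firmware), and the helper names and the 30-bit width (the Pi 4's 1 GiB DMA window) are hypothetical, not existing Xen code.

```c
#include <stdint.h>
#include <string.h>

/* Leading bytes of every ACPI system description table (ACPI 6.x, 5.2.6). */
struct acpi_table_header {
    char     signature[4];
    uint32_t length;
    uint8_t  revision;
    uint8_t  checksum;
    char     oem_id[6];        /* NOT NUL-terminated */
    char     oem_table_id[8];  /* NOT NUL-terminated */
    uint32_t oem_revision;
    char     asl_compiler_id[4];
    uint32_t asl_compiler_revision;
};

/*
 * Hypothetical quirk: treat firmware whose OEMID is "RPIFDN" as a
 * Raspberry Pi 4.  oem_id is exactly 6 bytes, so compare all of them.
 */
static inline int header_is_rpi4(const struct acpi_table_header *h)
{
    return memcmp(h->oem_id, "RPIFDN", 6) == 0;
}

/* Clamp the DMA width on the Pi 4 (1 GiB window), otherwise no limit. */
static inline unsigned int platform_dma_bitsize(const struct acpi_table_header *fadt)
{
    return header_is_rpi4(fadt) ? 30 : 64;
}
```

The same check could be repeated against the RSDT/XSDT headers if a single table's OEMID turns out to be unreliable.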

What about having a small domain that boots first, parses the ACPI
tables, and uses that information for booting dom0?

That domain would be part of the Xen build, and the hypervisor wouldn't
need to gain all the ACPI AML logic.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 07:04:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 07:04:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14003.34897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY1zE-0002Ga-Pa; Thu, 29 Oct 2020 07:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14003.34897; Thu, 29 Oct 2020 07:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY1zE-0002GT-Lk; Thu, 29 Oct 2020 07:04:40 +0000
Received: by outflank-mailman (input) for mailman id 14003;
 Thu, 29 Oct 2020 07:04:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2HK/=EE=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kY1zC-0002Fx-EC
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 07:04:38 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 329ecce4-b11d-4bfa-bbc3-1eaa03fa51f2;
 Thu, 29 Oct 2020 07:04:37 +0000 (UTC)
Received: from mail-wr1-f70.google.com (mail-wr1-f70.google.com
 [209.85.221.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-347-tTl3mqOmMOuk9PomABB5QQ-1; Thu, 29 Oct 2020 03:04:35 -0400
Received: by mail-wr1-f70.google.com with SMTP id q15so875770wrw.8
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 00:04:35 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e?
 ([2001:b07:6468:f312:5e2c:eb9a:a8b6:fd3e])
 by smtp.gmail.com with ESMTPSA id 3sm2825527wmd.19.2020.10.29.00.04.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 29 Oct 2020 00:04:33 -0700 (PDT)
X-Inumbo-ID: 329ecce4-b11d-4bfa-bbc3-1eaa03fa51f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1603955077;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=k+MySIa0GO5sAsedbOJklvkP1JxXrpHRlnL4dRpsf/4=;
	b=iTVoSTQ2NbxX4x11OjpDifcwXh3AsZtz6+oh5Ja5vUOoAK68O27u/E5SSxfUK4zFpggGIL
	+PSRLDCC5dYbjYzOqHWaQIQBy2yTFGqs1LhdhV9BWpnfNf7SLyXYIsO+3iadojFlFckni5
	8MluJPFb80lVt+N1OBRVvpGoncLca/k=
X-MC-Unique: tTl3mqOmMOuk9PomABB5QQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=k+MySIa0GO5sAsedbOJklvkP1JxXrpHRlnL4dRpsf/4=;
        b=qFIwbQBD4STqNt8sJAXeFXfFMgxF3XfJITYtRdIJ6QxUFZmCz3yEj9YxuMKWC0DVNX
         K14cWkysvI37uT+0hICd2QcpL4B75XkWdXovoCCT8++qUtRiDTJn11eaUOAeDyrHUf54
         qbdyt0r6X0Zgz58ghOmND6OBtt+PGMu1WQF0uo+SDVo/DtcOlVSztHY37deEH5zCfkF6
         tobKHZ2Vtfo8ez8CUYhjeGTemjwGGzowdXb52Fe4CT4P7mhFFxVRTk3BwZ8pth0LZO35
         a8NQU0VSzXWx27Prq26Yivt0VzdwJAQqsXSMc1GnL++2w2v9jPTUFkpIfemdX1Sxa/g+
         K8DA==
X-Gm-Message-State: AOAM531xoZu0yIXtacm7F5XEGa7Erft70u4jZCBTX6yolA9cIromCjVS
	y2kzysrCiBjJgpsze3EflRrgbqUA/F+CzAma8y8h8xy5IQQuddAyz1PZ3DiwbMMqOF2fsI5z0VS
	No+WojCdMwyBUVptngxVOPR2Z6V8=
X-Received: by 2002:adf:e4ca:: with SMTP id v10mr3780261wrm.53.1603955073915;
        Thu, 29 Oct 2020 00:04:33 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyjtd7rX2GzPMeGPldc12GMpi5saPCqNeQJ2ZCa70evuRQIB96hYYOHQ+JlKqMs5entrVHldQ==
X-Received: by 2002:adf:e4ca:: with SMTP id v10mr3780234wrm.53.1603955073729;
        Thu, 29 Oct 2020 00:04:33 -0700 (PDT)
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
To: Arnd Bergmann <arnd@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>,
 Stephen Hemminger <sthemmin@microsoft.com>, "H. Peter Anvin"
 <hpa@zytor.com>, "Rafael J. Wysocki" <rjw@rjwysocki.net>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
 linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org,
 kvm@vger.kernel.org, platform-driver-x86@vger.kernel.org,
 xen-devel@lists.xenproject.org, iommu@lists.linux-foundation.org
References: <20201028212417.3715575-1-arnd@kernel.org>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <ea34f1d3-ed54-a2de-79d9-5cc8decc0ab3@redhat.com>
Date: Thu, 29 Oct 2020 08:04:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201028212417.3715575-1-arnd@kernel.org>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 28/10/20 22:20, Arnd Bergmann wrote:
> Avoid this by renaming the global 'apic' variable to the more descriptive
> 'x86_system_apic'. It was originally called 'genapic', but both that
> and the current 'apic' seem to be a little overly generic for a global
> variable.

The 'apic' affects only the current CPU, so one of 'x86_local_apic',
'x86_lapic' or 'x86_apic' is probably preferable.

I don't have huge objections to renaming 'apic' variables and arguments
in KVM to 'lapic'.  I do agree with Sean however that it's going to
break again very soon.

Paolo



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 07:21:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 07:21:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14008.34908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY2FP-0003yn-7A; Thu, 29 Oct 2020 07:21:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14008.34908; Thu, 29 Oct 2020 07:21:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY2FP-0003yg-4F; Thu, 29 Oct 2020 07:21:23 +0000
Received: by outflank-mailman (input) for mailman id 14008;
 Thu, 29 Oct 2020 07:21:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oRhV=EE=kernel.org=mchehab+huawei@srs-us1.protection.inumbo.net>)
 id 1kY2FN-0003yb-ND
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 07:21:21 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 582b6af4-ea4d-425a-b8e8-efadd28ee87d;
 Thu, 29 Oct 2020 07:21:20 +0000 (UTC)
Received: from coco.lan (ip5f5ad5de.dynamic.kabel-deutschland.de
 [95.90.213.222])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1D568206A1;
 Thu, 29 Oct 2020 07:21:03 +0000 (UTC)
X-Inumbo-ID: 582b6af4-ea4d-425a-b8e8-efadd28ee87d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603956079;
	bh=ydKKLjnIGAQnlNnBFT/eI5WDiCpUdvgjw4RkqNPBoTs=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=AXef/AhAlPh1hi9YsdIj4o1ixVYaGEAZM4H9T1mWHDD5hcqq9AHBGqQYLJm4zvkzm
	 vDRNtmfz4vMaZ1nCDFiLdWyz0Wl7IBqlNL9mj+lYOyRgbN8ZBRkoI4V/HMDyMBFkDS
	 A+Vrm0psmwUp0NppXp1qjXj8jB9uy8mTt+24KpoY=
Date: Thu, 29 Oct 2020 08:21:00 +0100
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Richard Cochran <richardcochran@gmail.com>
Cc: Linux Doc Mailing List <linux-doc@vger.kernel.org>, Greg Kroah-Hartman
 <gregkh@linuxfoundation.org>, Mauro Carvalho Chehab
 <mchehab+samsung@kernel.org>, "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>, Javier =?UTF-8?B?R29uesOhbGV6?=
 <javier@javigon.com>, Jonathan Corbet <corbet@lwn.net>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "Rafael J. Wysocki"
 <rjw@rjwysocki.net>, Alexander Shishkin
 <alexander.shishkin@linux.intel.com>, Alexandre Belloni
 <alexandre.belloni@bootlin.com>, Alexandre Torgue
 <alexandre.torgue@st.com>, Andrew Donnellan <ajd@linux.ibm.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Baolin Wang
 <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Bruno Meneguele
 <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy
 <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>, Enric Balletbo i
 Serra <enric.balletbo@collabora.com>, Fabrice Gasnier
 <fabrice.gasnier@st.com>, Felipe Balbi <balbi@kernel.org>, Frederic Barrat
 <fbarrat@linux.ibm.com>, Guenter Roeck <groeck@chromium.org>, Hanjun Guo
 <guohanjun@huawei.com>, Heikki Krogerus <heikki.krogerus@linux.intel.com>,
 Jens Axboe <axboe@kernel.dk>, Johannes Thumshirn
 <johannes.thumshirn@wdc.com>, Jonathan Cameron <jic23@kernel.org>, Juergen
 Gross <jgross@suse.com>, Konstantin Khlebnikov <koct9i@gmail.com>, Kranthi
 Kuntala <kranthi.kuntala@intel.com>, Lakshmi Ramasubramanian
 <nramas@linux.microsoft.com>, Lars-Peter Clausen <lars@metafoo.de>, Len
 Brown <lenb@kernel.org>, Leonid Maksymchuk <leonmaxx@gmail.com>, Ludovic
 Desroches <ludovic.desroches@microchip.com>, Mario Limonciello
 <mario.limonciello@dell.com>, Maxime Coquelin <mcoquelin.stm32@gmail.com>,
 Michael Ellerman <mpe@ellerman.id.au>, Mika Westerberg
 <mika.westerberg@linux.intel.com>, Mike Kravetz <mike.kravetz@oracle.com>,
 Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain <nayna@linux.ibm.com>, Nicolas
 Ferre <nicolas.ferre@microchip.com>, Niklas Cassel <niklas.cassel@wdc.com>,
 Oleh Kravchenko <oleg@kaa.org.ua>, Orson Zhai <orsonzhai@gmail.com>, Pavel
 Machek <pavel@ucw.cz>, Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
 Peter Meerwald-Stadler <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>,
 Petr Mladek <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>,
 Sebastian Reichel <sre@kernel.org>, Sergey Senozhatsky
 <sergey.senozhatsky@gmail.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Thomas
 Gleixner <tglx@linutronix.de>, Vineela Tummalapalli
 <vineela.tummalapalli@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH 20/33] docs: ABI: testing: make the files compatible
 with ReST output
Message-ID: <20201029082100.4820072c@coco.lan>
In-Reply-To: <20201028174427.GE9364@hoboy.vegasvil.org>
References: <cover.1603893146.git.mchehab+huawei@kernel.org>
	<4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
	<20201028174427.GE9364@hoboy.vegasvil.org>
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

Hi Richard,

Em Wed, 28 Oct 2020 10:44:27 -0700
Richard Cochran <richardcochran@gmail.com> escreveu:

> On Wed, Oct 28, 2020 at 03:23:18PM +0100, Mauro Carvalho Chehab wrote:
> 
> > diff --git a/Documentation/ABI/testing/sysfs-uevent b/Documentation/ABI/testing/sysfs-uevent
> > index aa39f8d7bcdf..d0893dad3f38 100644
> > --- a/Documentation/ABI/testing/sysfs-uevent
> > +++ b/Documentation/ABI/testing/sysfs-uevent
> > @@ -19,7 +19,8 @@ Description:
> >                  a transaction identifier so it's possible to use the same UUID
> >                  value for one or more synthetic uevents in which case we
> >                  logically group these uevents together for any userspace
> > -                listeners. The UUID value appears in uevent as
> > +                listeners. The UUID value appears in uevent as:  
> 
> I know almost nothing about Sphinx, but why have one colon here ^^^ and ...

Good point. After re-reading the text, this ":" doesn't belong here.

> 
> > +
> >                  "SYNTH_UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" environment
> >                  variable.
> >  
> > @@ -30,18 +31,19 @@ Description:
> >                  It's possible to define zero or more pairs - each pair is then
> >                  delimited by a space character ' '. Each pair appears in
> >                  synthetic uevent as "SYNTH_ARG_KEY=VALUE". That means the KEY
> > -                name gains "SYNTH_ARG_" prefix to avoid possible collisions
> > +                name gains `SYNTH_ARG_` prefix to avoid possible collisions
> >                  with existing variables.
> >  
> > -                Example of valid sequence written to the uevent file:
> > +                Example of valid sequence written to the uevent file::  
> 
> ... two here?

The main issue that this patch wants to solve is here:

                This generates synthetic uevent including these variables::

                    ACTION=add
                    SYNTH_ARG_A=1
                    SYNTH_ARG_B=abc
                    SYNTH_UUID=fe4d7c9d-b8c6-4a70-9ef1-3d8a58d18eed

In Sphinx, consecutive lines with the same indentation belong to the same
paragraph. So, without "::", the above will be displayed on a single line,
which is undesired.

Using "::" tells Sphinx to display the block as-is. It will also place it
into a box (colored for HTML output) and use a monospaced font.

The change at the "uevent file:" line was done just for coherency
purposes.

Yet, after re-reading the text, there are other things that are not
coherent. So, I guess the enclosed patch will work better for sysfs-uevent.

Thanks,
Mauro

docs: ABI: sysfs-uevent: make it compatible with ReST output

- Replace " by ``, in order to use monospaced fonts;
- mark literal blocks as such.

Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

diff --git a/Documentation/ABI/testing/sysfs-uevent b/Documentation/ABI/testing/sysfs-uevent
index aa39f8d7bcdf..0b6227706b35 100644
--- a/Documentation/ABI/testing/sysfs-uevent
+++ b/Documentation/ABI/testing/sysfs-uevent
@@ -6,42 +6,46 @@ Description:
                 Enable passing additional variables for synthetic uevents that
                 are generated by writing /sys/.../uevent file.
 
-                Recognized extended format is ACTION [UUID [KEY=VALUE ...].
+                Recognized extended format is::
 
-                The ACTION is compulsory - it is the name of the uevent action
-                ("add", "change", "remove"). There is no change compared to
-                previous functionality here. The rest of the extended format
-                is optional.
+			ACTION [UUID [KEY=VALUE ...]
+
+                The ACTION is compulsory - it is the name of the uevent
+                action (``add``, ``change``, ``remove``). There is no change
+                compared to previous functionality here. The rest of the
+                extended format is optional.
 
                 You need to pass UUID first before any KEY=VALUE pairs.
-                The UUID must be in "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
+                The UUID must be in ``xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx``
                 format where 'x' is a hex digit. The UUID is considered to be
                 a transaction identifier so it's possible to use the same UUID
                 value for one or more synthetic uevents in which case we
                 logically group these uevents together for any userspace
                 listeners. The UUID value appears in uevent as
-                "SYNTH_UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" environment
+                ``SYNTH_UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`` environment
                 variable.
 
                 If UUID is not passed in, the generated synthetic uevent gains
-                "SYNTH_UUID=0" environment variable automatically.
+                ``SYNTH_UUID=0`` environment variable automatically.
 
                 The KEY=VALUE pairs can contain alphanumeric characters only.
+
                 It's possible to define zero or more pairs - each pair is then
                 delimited by a space character ' '. Each pair appears in
-                synthetic uevent as "SYNTH_ARG_KEY=VALUE". That means the KEY
-                name gains "SYNTH_ARG_" prefix to avoid possible collisions
+                synthetic uevent as ``SYNTH_ARG_KEY=VALUE``. That means the KEY
+                name gains ``SYNTH_ARG_`` prefix to avoid possible collisions
                 with existing variables.
 
-                Example of valid sequence written to the uevent file:
+                Example of valid sequence written to the uevent file::
 
                     add fe4d7c9d-b8c6-4a70-9ef1-3d8a58d18eed A=1 B=abc
 
-                This generates synthetic uevent including these variables:
+                This generates synthetic uevent including these variables::
 
                     ACTION=add
                     SYNTH_ARG_A=1
                     SYNTH_ARG_B=abc
                     SYNTH_UUID=fe4d7c9d-b8c6-4a70-9ef1-3d8a58d18eed
+
 Users:
                 udev, userspace tools generating synthetic uevents


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 07:42:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 07:42:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14014.34921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY2ZS-0005lK-VC; Thu, 29 Oct 2020 07:42:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14014.34921; Thu, 29 Oct 2020 07:42:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY2ZS-0005lD-SB; Thu, 29 Oct 2020 07:42:06 +0000
Received: by outflank-mailman (input) for mailman id 14014;
 Thu, 29 Oct 2020 07:42:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPBj=EE=linaro.org=masami.hiramatsu@srs-us1.protection.inumbo.net>)
 id 1kY2ZR-0005l8-6k
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 07:42:05 +0000
Received: from mail-yb1-xb42.google.com (unknown [2607:f8b0:4864:20::b42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93e04e1b-0b63-4c69-a9eb-9430104ac375;
 Thu, 29 Oct 2020 07:42:03 +0000 (UTC)
Received: by mail-yb1-xb42.google.com with SMTP id f140so1404668ybg.3
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 00:42:03 -0700 (PDT)
X-Inumbo-ID: 93e04e1b-0b63-4c69-a9eb-9430104ac375
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=bK3hEiwetQ245+GAznGI+zFTudBFUfkyUZ3Y5dKuhEI=;
        b=tl8jXvvz9fPM1ATkFASLjlG46nW+pgP3ROKZMskFlakZitAYoBaRxFSEzmSe5lBvEK
         jlUhKMETB5OKySm/f3tODwzj6oZbeuk8wZQCTdSiOmFaIoNB7pCpB1NwSCvrIRpuzbUw
         LpgCr1GZO/VSHGPpszLvtrBOTya2FA8fk5r1kpE5jmrnw5VfUcBinkfJ+jG+K42W4jn8
         W0mblOYRFw6fI1/Hfv66TSQyRYcHyniAP2nqsY9DR+fWSjdCKG43NVsKItE7q0khKGF+
         hJRrAVL4sogoRI1iYLts+hwyewdpqDuJvkCVnDwjDSRKTaZEd5gpJTIYgBmg0wfpO/sl
         cs0w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=bK3hEiwetQ245+GAznGI+zFTudBFUfkyUZ3Y5dKuhEI=;
        b=JldLtWLMP0LceRSIOCAfRmSgsdaGlLfsGbIW+8n25EAQGrjCRr3cq6+Zd9yKJ+/bLV
         PCWTx4JLctIZHLWpERWi3SqCLtXtxE/B07hmjjxWJC+c0W9KNAeWiUEZEYrSO68Zur04
         6pSjpG74MhMaTTA7ti152fy6IQHxoD6uiBGmmL6m86f/HltFnHrmqcYzg9Pcqys3uzdJ
         SbmRdKUlzYM/wld2zrCMzBrjoGGbzr2kaBkp45c4nDPcSeqErDwQCSjiRBNZrZDz4+xG
         Rr9QaZj56DhzJe+90iZv3JtjhLXqWfcQ6TGegIquWm2iOlspL5h+JEvwdNRCdl97Lqo9
         t96w==
X-Gm-Message-State: AOAM531q9Fefpmq3Bqi6pgpes/KwM7dzI/F6jH10u/+s53J60XweqNb6
	s28Y8U/K8FuYORQhQ8551kx2bTJRVf0YkwD5TgqMew==
X-Google-Smtp-Source: ABdhPJyzVwFwPQeYhR6uYzQEflpMYRJ1HZ9uj5gOSoO8sIwuPV9u2qOSpjnz22iyIXnXDiW64DrHSrYEcEvRs4uRWuo=
X-Received: by 2002:a25:4216:: with SMTP id p22mr4451294yba.234.1603957323237;
 Thu, 29 Oct 2020 00:42:03 -0700 (PDT)
MIME-Version: 1.0
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
In-Reply-To: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
From: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Date: Thu, 29 Oct 2020 16:41:52 +0900
Message-ID: <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: xen-devel@lists.xenproject.org, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Julien Grall <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tim Deegan <tim@xen.org>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Oleksandr,

I would like to try this on my arm64 board.

According to your comments in the patch, I made this config file.
# cat debian.conf
name = "debian"
type = "pvh"
vcpus = 8
memory = 512
kernel = "/opt/agl/vmlinuz-5.9.0-1-arm64"
ramdisk = "/opt/agl/initrd.img-5.9.0-1-arm64"
cmdline= "console=hvc0 earlyprintk=xen root=/dev/xvda1 rw"
disk = [ '/opt/agl/debian.qcow2,qcow2,hda' ]
vif = [ 'mac=00:16:3E:74:3d:76,bridge=xenbr0' ]
virtio = 1
vdisk = [ 'backend=Dom0, disks=ro:/dev/sda1' ]

And tried to boot a DomU, but I got below error.

# xl create -c debian.conf
Parsing config from debian.conf
libxl: error: libxl_create.c:1863:domcreate_attach_devices: Domain
1:unable to add virtio_disk devices
libxl: error: libxl_domain.c:1218:destroy_domid_pci_done: Domain
1:xc_domain_pause failed
libxl: error: libxl_dom.c:39:libxl__domain_type: unable to get domain
type for domid=1
libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
1:Unable to destroy guest
libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
1:Destruction of domain failed


Could you tell me how can I test it?

Thank you,

2020年10月16日(金) 1:46 Oleksandr Tyshchenko <olekstysh@gmail.com>:
>
> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> Hello all.
>
> The purpose of this patch series is to add IOREQ/DM support to Xen on Arm=
.
> You can find an initial discussion at [1] and RFC/V1 series at [2]/[3].
> Xen on Arm requires some implementation to forward guest MMIO access to a=
 device
> model in order to implement virtio-mmio backend or even mediator outside =
of hypervisor.
> As Xen on x86 already contains required support this series tries to make=
 it common
> and introduce Arm specific bits plus some new functionality. Patch series=
 is based on
> Julien's PoC "xen/arm: Add support for Guest IO forwarding to a device em=
ulator".
> Besides splitting existing IOREQ/DM support and introducing Arm side, the=
 series
> also includes virtio-mmio related changes (last 2 patches for toolstack)
> for the reviewers to be able to see how the whole picture could look like=
.
>
> According to the initial discussion there are a few open questions/concer=
ns
> regarding security, performance in VirtIO solution:
> 1. virtio-mmio vs virtio-pci, SPI vs MSI, different use-cases require dif=
ferent
>    transport...
> 2. virtio backend is able to access all guest memory, some kind of protec=
tion
>    is needed: 'virtio-iommu in Xen' vs 'pre-shared-memory & memcpys in gu=
est'
> 3. interface between toolstack and 'out-of-qemu' virtio backend, avoid us=
ing
>    Xenstore in virtio backend if possible.
> 4. a lot of 'foreing mapping' could lead to the memory exhaustion, Julien
>    has some idea regarding that.
>
> Looks like all of them are valid and worth considering, but the first thi=
ng
> which we need on Arm is a mechanism to forward guest IO to a device emula=
tor,
> so let's focus on it in the first place.
>
> ***
>
> There are a lot of changes since the RFC series: almost all TODOs were
> resolved on Arm, the Arm code was improved and hardened, and the common
> IOREQ/DM code became really arch-agnostic (without HVM-isms), but one TODO
> still remains, which is "PIO handling" on Arm.
> The "PIO handling" TODO is expected to be left unaddressed for the current
> series. It is not a big issue for now, while Xen doesn't have support for
> vPCI on Arm. On Arm64, PIOs are only used for PCI IO BARs, and we would
> probably want to expose them to the emulator as PIO accesses to make a DM
> completely arch-agnostic. So "PIO handling" should be implemented when we
> add support for vPCI.
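The PIO/MMIO split mentioned above comes down to the request type the hypervisor tags each forwarded access with. A hypothetical dispatch in a device model might look like this; the type values loosely follow IOREQ_TYPE_* from the public headers, while the helper and its return codes are invented for illustration:

```c
#include <assert.h>

/* Illustrative request types; loosely modeled on IOREQ_TYPE_PIO and
 * IOREQ_TYPE_COPY from xen/include/public/hvm/ioreq.h. */
enum fake_req_type {
    FAKE_REQ_PIO  = 0,   /* port I/O: only PCI IO BARs would need it on Arm */
    FAKE_REQ_COPY = 1,   /* memory-mapped I/O */
};

/* Returns 1 when the request kind is handled, 0 for the PIO case that is
 * deferred until vPCI support, and -1 for anything unknown. */
static int dispatch_req(int type)
{
    switch (type) {
    case FAKE_REQ_COPY:
        return 1;        /* MMIO path: what this series wires up on Arm */
    case FAKE_REQ_PIO:
        return 0;        /* PIO path: the remaining TODO */
    default:
        return -1;
    }
}
```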
>
> I left the interface untouched in the following patch
> "xen/dm: Introduce xendevicemodel_set_irq_level DM op"
> since there is still an open discussion about what interface to use and
> what information to pass to the hypervisor.
>
> I also decided to drop the following patch:
> "[RFC PATCH V1 07/12] A collection of tweaks to be able to run emulator in
> driver domain"
> as I got advice to write our own policy using FLASK, which would cover our
> use case (with the emulator in a driver domain), rather than tweak Xen.
>
> There are two patches on review that this series depends on (each involved
> patch in this series contains this note as well):
> 1. https://patchwork.kernel.org/patch/11816689
> 2. https://patchwork.kernel.org/patch/11803383
>
> Please note that the IOREQ feature is disabled by default within this
> series.
>
> ***
>
> Patch series [4] was rebased on the recent "staging" branch
> (8a62dee x86/vLAPIC: don't leak regs page from vlapic_init() upon error)
> and tested on a Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with a
> virtio-mmio disk backend (we will share it later) running in a driver
> domain, and an unmodified Linux guest running on the existing virtio-blk
> driver (frontend). No issues were observed. Guest domain 'reboot/destroy'
> use-cases work properly. On x86 the patch series was only build-tested.
>
> Please note that the build test passed for the following modes:
> 1. x86: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y (default)
> 2. x86: #CONFIG_HVM is not set / #CONFIG_IOREQ_SERVER is not set
> 3. Arm64: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
> 4. Arm64: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
> 5. Arm32: CONFIG_HVM=y / CONFIG_IOREQ_SERVER=y
> 6. Arm32: CONFIG_HVM=y / #CONFIG_IOREQ_SERVER is not set  (default)
>
> ***
>
> Any feedback/help would be highly appreciated.
>
> [1] https://lists.xenproject.org/archives/html/xen-devel/2020-07/msg00825.html
> [2] https://lists.xenproject.org/archives/html/xen-devel/2020-08/msg00071.html
> [3] https://lists.xenproject.org/archives/html/xen-devel/2020-09/msg00732.html
> [4] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml3
>
> Julien Grall (5):
>   xen/dm: Make x86's DM feature common
>   xen/mm: Make x86's XENMEM_resource_ioreq_server handling common
>   arm/ioreq: Introduce arch specific bits for IOREQ/DM features
>   xen/dm: Introduce xendevicemodel_set_irq_level DM op
>   libxl: Introduce basic virtio-mmio support on Arm
>
> Oleksandr Tyshchenko (18):
>   x86/ioreq: Prepare IOREQ feature for making it common
>   xen/ioreq: Make x86's IOREQ feature common
>   xen/ioreq: Make x86's hvm_ioreq_needs_completion() common
>   xen/ioreq: Provide alias for the handle_mmio()
>   xen/ioreq: Make x86's hvm_mmio_first(last)_byte() common
>   xen/ioreq: Make x86's hvm_ioreq_(page/vcpu/server) structs common
>   xen/ioreq: Move x86's ioreq_gfn(server) to struct domain
>   xen/ioreq: Introduce ioreq_params to abstract accesses to
>     arch.hvm.params
>   xen/ioreq: Move x86's io_completion/io_req fields to struct vcpu
>   xen/ioreq: Remove "hvm" prefixes from involved function names
>   xen/ioreq: Use guest_cmpxchg64() instead of cmpxchg()
>   xen/arm: Stick around in leave_hypervisor_to_guest until I/O has
>     completed
>   xen/mm: Handle properly reference in set_foreign_p2m_entry() on Arm
>   xen/ioreq: Introduce domain_has_ioreq_server()
>   xen/arm: io: Abstract sign-extension
>   xen/ioreq: Make x86's send_invalidate_req() common
>   xen/arm: Add mapcache invalidation handling
>   [RFC] libxl: Add support for virtio-disk configuration
>
>  MAINTAINERS                                     |    8 +-
>  tools/libs/devicemodel/core.c                   |   18 +
>  tools/libs/devicemodel/include/xendevicemodel.h |    4 +
>  tools/libs/devicemodel/libxendevicemodel.map    |    1 +
>  tools/libs/light/Makefile                       |    1 +
>  tools/libs/light/libxl_arm.c                    |   94 +-
>  tools/libs/light/libxl_create.c                 |    1 +
>  tools/libs/light/libxl_internal.h               |    1 +
>  tools/libs/light/libxl_types.idl                |   16 +
>  tools/libs/light/libxl_types_internal.idl       |    1 +
>  tools/libs/light/libxl_virtio_disk.c            |  109 ++
>  tools/xl/Makefile                               |    2 +-
>  tools/xl/xl.h                                   |    3 +
>  tools/xl/xl_cmdtable.c                          |   15 +
>  tools/xl/xl_parse.c                             |  116 ++
>  tools/xl/xl_virtio_disk.c                       |   46 +
>  xen/arch/arm/Makefile                           |    2 +
>  xen/arch/arm/dm.c                               |   89 ++
>  xen/arch/arm/domain.c                           |    9 +
>  xen/arch/arm/hvm.c                              |    4 +
>  xen/arch/arm/io.c                               |   29 +-
>  xen/arch/arm/ioreq.c                            |  126 ++
>  xen/arch/arm/p2m.c                              |   29 +
>  xen/arch/arm/traps.c                            |   58 +-
>  xen/arch/x86/Kconfig                            |    1 +
>  xen/arch/x86/hvm/Makefile                       |    1 -
>  xen/arch/x86/hvm/dm.c                           |  291 +----
>  xen/arch/x86/hvm/emulate.c                      |   60 +-
>  xen/arch/x86/hvm/hvm.c                          |   24 +-
>  xen/arch/x86/hvm/hypercall.c                    |    9 +-
>  xen/arch/x86/hvm/intercept.c                    |    5 +-
>  xen/arch/x86/hvm/io.c                           |   26 +-
>  xen/arch/x86/hvm/ioreq.c                        | 1533 -----------------------
>  xen/arch/x86/hvm/stdvga.c                       |   10 +-
>  xen/arch/x86/hvm/svm/nestedsvm.c                |    2 +-
>  xen/arch/x86/hvm/vmx/realmode.c                 |    6 +-
>  xen/arch/x86/hvm/vmx/vvmx.c                     |    2 +-
>  xen/arch/x86/mm.c                               |   46 +-
>  xen/arch/x86/mm/p2m.c                           |   13 +-
>  xen/arch/x86/mm/shadow/common.c                 |    2 +-
>  xen/common/Kconfig                              |    3 +
>  xen/common/Makefile                             |    2 +
>  xen/common/dm.c                                 |  292 +++++
>  xen/common/ioreq.c                              | 1443 +++++++++++++++++++++
>  xen/common/memory.c                             |   50 +-
>  xen/include/asm-arm/domain.h                    |    5 +
>  xen/include/asm-arm/hvm/ioreq.h                 |  109 ++
>  xen/include/asm-arm/mm.h                        |    8 -
>  xen/include/asm-arm/mmio.h                      |    1 +
>  xen/include/asm-arm/p2m.h                       |   19 +-
>  xen/include/asm-arm/paging.h                    |    4 +
>  xen/include/asm-arm/traps.h                     |   24 +
>  xen/include/asm-x86/hvm/domain.h                |   50 +-
>  xen/include/asm-x86/hvm/emulate.h               |    2 +-
>  xen/include/asm-x86/hvm/io.h                    |   17 -
>  xen/include/asm-x86/hvm/ioreq.h                 |  198 ++-
>  xen/include/asm-x86/hvm/vcpu.h                  |   18 -
>  xen/include/asm-x86/mm.h                        |    4 -
>  xen/include/asm-x86/p2m.h                       |   20 +-
>  xen/include/public/arch-arm.h                   |    5 +
>  xen/include/public/hvm/dm_op.h                  |   16 +
>  xen/include/xen/dm.h                            |   44 +
>  xen/include/xen/ioreq.h                         |  143 +++
>  xen/include/xen/p2m-common.h                    |    4 +
>  xen/include/xen/sched.h                         |   37 +
>  xen/include/xsm/dummy.h                         |    4 +-
>  xen/include/xsm/xsm.h                           |    6 +-
>  xen/xsm/dummy.c                                 |    2 +-
>  xen/xsm/flask/hooks.c                           |    5 +-
>  69 files changed, 3223 insertions(+), 2125 deletions(-)
>  create mode 100644 tools/libs/light/libxl_virtio_disk.c
>  create mode 100644 tools/xl/xl_virtio_disk.c
>  create mode 100644 xen/arch/arm/dm.c
>  create mode 100644 xen/arch/arm/ioreq.c
>  delete mode 100644 xen/arch/x86/hvm/ioreq.c
>  create mode 100644 xen/common/dm.c
>  create mode 100644 xen/common/ioreq.c
>  create mode 100644 xen/include/asm-arm/hvm/ioreq.h
>  create mode 100644 xen/include/xen/dm.h
>  create mode 100644 xen/include/xen/ioreq.h
>
> --
> 2.7.4
>
>


-- 
Masami Hiramatsu


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 08:04:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 08:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14030.34933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY2vG-0008AI-Bm; Thu, 29 Oct 2020 08:04:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14030.34933; Thu, 29 Oct 2020 08:04:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY2vG-0008AB-8f; Thu, 29 Oct 2020 08:04:38 +0000
Received: by outflank-mailman (input) for mailman id 14030;
 Thu, 29 Oct 2020 08:04:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kY2vF-0008A6-3J
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 08:04:37 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ed3484e-7051-49d9-a94e-d26dfcf0a80c;
 Thu, 29 Oct 2020 08:04:35 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY2vC-0006k5-Rs; Thu, 29 Oct 2020 08:04:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY2vC-0007WO-Cn; Thu, 29 Oct 2020 08:04:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kY2vC-0003cl-CH; Thu, 29 Oct 2020 08:04:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kY2vF-0008A6-3J
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 08:04:37 +0000
X-Inumbo-ID: 6ed3484e-7051-49d9-a94e-d26dfcf0a80c
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 6ed3484e-7051-49d9-a94e-d26dfcf0a80c;
	Thu, 29 Oct 2020 08:04:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lgUEjwRJiqik4FKILeiREJEZiWDdZXH+h6uwY8HWdU4=; b=Cd2o215lm5rtkGWJxhZnilBiEf
	JzLfA8kjYMSdYvYyVJSWY0qxMHw8qpF7u3+7sIse124i5akf9oQd9xpsjuaWkoo3vJ1+YqsXzyJQc
	iFzVD7e2wbOX06pd7IT2HaM7IPODKgUWOlLumOXDrRr686YOHgXUPbLPGiBFFGdAioa4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY2vC-0006k5-Rs; Thu, 29 Oct 2020 08:04:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY2vC-0007WO-Cn; Thu, 29 Oct 2020 08:04:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY2vC-0003cl-CH; Thu, 29 Oct 2020 08:04:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156273-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156273: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=1f807631f402210d036ec4803e7adfefa222f786
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 08:04:34 +0000

flight 156273 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156273/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              1f807631f402210d036ec4803e7adfefa222f786
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  111 days
Failing since        151818  2020-07-11 04:18:52 Z  110 days  104 attempts
Testing same since   156273  2020-10-28 04:19:15 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23384 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 08:14:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 08:14:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14037.34948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY34I-0000h0-9r; Thu, 29 Oct 2020 08:13:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14037.34948; Thu, 29 Oct 2020 08:13:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY34I-0000gt-6V; Thu, 29 Oct 2020 08:13:58 +0000
Received: by outflank-mailman (input) for mailman id 14037;
 Thu, 29 Oct 2020 08:13:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kY34H-0000go-5R
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 08:13:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af3018fb-808e-4ad3-84f6-5fd24d514530;
 Thu, 29 Oct 2020 08:13:54 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY34E-0006vz-2d; Thu, 29 Oct 2020 08:13:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY34D-000858-R1; Thu, 29 Oct 2020 08:13:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kY34D-0007LQ-QX; Thu, 29 Oct 2020 08:13:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kY34H-0000go-5R
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 08:13:57 +0000
X-Inumbo-ID: af3018fb-808e-4ad3-84f6-5fd24d514530
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id af3018fb-808e-4ad3-84f6-5fd24d514530;
	Thu, 29 Oct 2020 08:13:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y95frfKEaMFoCRTCNxHXyf5iWTTqyUa2PdzULvvwATs=; b=gCD5QZiLqzgUEh98ISHJsBraOT
	n5zxNDx1tomjx/UxIzn/D/KvP9oENyyj+/IBkwppNZFZNgPB2VA/IxgVmrhVPhhdT+ZL0j0FhRKv4
	KM6rODsW5E7LU0eYrl3aeP0Bt87xW3SvChhJyMdnLvay88fewFNhdUnflePvIAgbqW7Q=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY34E-0006vz-2d; Thu, 29 Oct 2020 08:13:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY34D-000858-R1; Thu, 29 Oct 2020 08:13:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY34D-0007LQ-QX; Thu, 29 Oct 2020 08:13:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156268-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156268: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-vhd:leak-check/check:fail:regression
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=16a20963b3209788f2c0d3a3eebb7d92f03f5883
X-Osstest-Versions-That:
    xen=964781c6f162893677c50a779b7d562a299727ba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 08:13:53 +0000

flight 156268 xen-unstable real [real]
flight 156289 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156268/
http://logs.test-lab.xenproject.org/osstest/logs/156289/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      20 leak-check/check         fail REGR. vs. 156254

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 156254

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 156254
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156254
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156254
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156254
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156254
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156254
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156254
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156254
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156254
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156254
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156254
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156254
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  16a20963b3209788f2c0d3a3eebb7d92f03f5883
baseline version:
 xen                  964781c6f162893677c50a779b7d562a299727ba

Last test of basis   156254  2020-10-27 06:38:30 Z    2 days
Testing same since   156268  2020-10-27 22:07:28 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 16a20963b3209788f2c0d3a3eebb7d92f03f5883
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 19 15:51:22 2020 +0100

    x86/pv: Flush TLB in response to paging structure changes
    
    With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
    is safe from Xen's point of view (as the update only affects guest mappings),
    and the guest is required to flush (if necessary) after making updates.
    
    However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
    writeable pagetables, etc.) is an implementation detail outside of the
    API/ABI.
    
    Changes in the paging structure require invalidations in the linear pagetable
    range for subsequent accesses into the linear pagetables to access non-stale
    mappings.  Xen must provide suitable flushing to prevent intermixed guest
    actions from accidentally accessing/modifying the wrong pagetable.
    
    For all L2 and higher modifications, flush the TLB.  PV guests cannot create
    L2 or higher entries with the Global bit set, so no mappings established in
    the linear range can be global.  (This could in principle be an order 39 flush
    starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
    
    Express the necessary flushes as a set of booleans which accumulate across the
    operation.  Comment the flushing logic extensively.
    
    This is XSA-286.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 055e1c3a3d95b1e753148369fbc4ba48782dd602
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 22 11:28:58 2020 +0100

    x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
    
    c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
    use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
    to the L4 path in do_mmu_update().
    
    However, this was unnecessary.
    
    It is the guest's responsibility to perform appropriate TLB flushing if the L4
    modification altered an established mapping in a flush-relevant way.  In this
    case, an MMUEXT_OP hypercall will follow.  The case which Xen needs to cover
    is when new mappings are created, and the resync on the exit-to-guest path
    covers this correctly.
    
    There is a corner case with multiple vCPUs in hypercalls at the same time,
    which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
    behaviour.
    
    Architecturally, established TLB entries can continue to be used until the
    broadcast flush has completed.  Therefore, even with concurrent hypercalls,
    the guest cannot depend on older mappings not being used until an MMUEXT_OP
    hypercall completes.  Xen's implementation of guest-initiated flushes will
    take correct effect on top of an in-progress hypercall, picking up the new
    mapping settings before the other vCPU's MMUEXT_OP completes.
    
    Note: The correctness of this change is not impacted by whether XPTI uses
    global mappings or not.  Correctness there depends on the behaviour of Xen on
    the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
    
    This is (not really) XSA-286 (but necessary to simplify the logic).
    
    Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 08:23:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 08:23:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14043.34963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3Cw-0001cM-FS; Thu, 29 Oct 2020 08:22:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14043.34963; Thu, 29 Oct 2020 08:22:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3Cw-0001cF-BN; Thu, 29 Oct 2020 08:22:54 +0000
Received: by outflank-mailman (input) for mailman id 14043;
 Thu, 29 Oct 2020 08:22:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kY3Ct-0001c4-Tb
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 08:22:52 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.163])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f826689-95c3-4dbc-b818-2c2cd8cb82e6;
 Thu, 29 Oct 2020 08:22:47 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9T8Mj0gu
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Thu, 29 Oct 2020 09:22:45 +0100 (CET)
X-Inumbo-ID: 7f826689-95c3-4dbc-b818-2c2cd8cb82e6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603959767;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-ID:Subject:To:From:Date:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=SsPnoBuedsUZMZkX0b+HjM5/9AJ8nr4vHORk5xZ/Vmc=;
	b=Edaryc6dcJsMx9O/8WAkyBuQ5pWtdH5ujzWFW/ha89KPdWqk2Bg0LBYDg/JdtfbDkg
	roSL85hbwn4XhNDaEFct8nk4OniADbwjuh/+ahcCcw+pwTrdNJamGsQUT8yL1DXpbK87
	ZOScsKoSj1DCm3aNYRoJ3Cur2PNV/LvWScUcEpG3Nsie975lg7BZ9TJRHBmJ//M9xSQP
	DBRwmhvG6DowWfG61ZtUocHyzG5SmgWKKlO5+mvruMvQpkiflcFELeZkgC1iqepTCNjU
	BWkriYYJcz/mSYiL5T2ysjdVQOgeAYyLuN1U13F7fY5NRSktlsIbDFK/RsGZ8L91py38
	UhAg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Date: Thu, 29 Oct 2020 09:22:37 +0100
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: call traces in xl dmesg during boot
Message-ID: <20201029092237.50b8a6f6.olaf@aepfle.de>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/U3gfAfjj8OadWqGafY.KWMx"; protocol="application/pgp-signature"

--Sig_/U3gfAfjj8OadWqGafY.KWMx
Content-Type: multipart/mixed; boundary="MP_/JrfABqdNyhV/ar7fCf_MSe6"

--MP_/JrfABqdNyhV/ar7fCf_MSe6
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
Content-Disposition: inline

During boot of xen.git#staging, Xen seems to think something pressed debug
keys very early, which shows various call traces in 'xl dmesg'. A reboot
may "fix" it, and no traces are printed.

Any idea what may cause this behavior? I do not see it on a slightly
smaller box.

Olaf

--MP_/JrfABqdNyhV/ar7fCf_MSe6
Content-Type: application/x-xz
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=xl-info-4.15.20201027T17391.txt.xz

/Td6WFoAAATm1rRGAgAhARwAAAAQz1jM4H9GDwddADQbys4c/Q8dgRMi8ktoAJV2cN3K9C3mgWk0
LRT8DxR18x30u5eLvwN/uxEZso/H4d9eS/86yBdtzH1XJ++p7K9XOTpqvLmXGEXbyflv1Y9ZNQ6x
J1A1thStKIDLpJDDGrbLCqq0Z0smUDvHHz5WYI9cWxbkZeEy4WIcjxLBU3OJXYscYh+M0tuL8UvX
5mCr3mrZPLV5ceRMXH0eaXL55dLk7pD9DcWf37pRpyzuaOI0GigHY+P/P75o2EXrKNChjOmvl4VE
WiOBW3GA97WFeobmVoyTKVA81bx2B9VesW6Ln1mwhV1y0kpoGGWBd2eglJrctViiuRDA/EU97XA5
hQfmly5DD61KaXClosllRUqmBJowFKrrTjRyTM2oLyAHMGpg5ajcNq0F6b0EM88r7rg+qT972JfR
InEAWzBvCpoCc7amkuqHNC0yFtbQ/XDIWL4CbOv60cnaTrXQkpyPYSp1UR/iWKNMcgY5GZVoHM+o
tFPLL9AwpeGxGxg1wlvcNei6Ue8czDavap0HRJeRUAolCT/yb4YERUmcfl91Z0B+xn6y7NqY/4uP
zyZoWIQtI1VHfw0EC9isjAiUAaTAEGCUOZEN1lIrsSF89roAbpH80t0js4J3lsNIB0LQUNHEk9Oy
kde2/IVSrHRYQ5q9F5QjGkTBjc3PvfHwYR3WsL76I4HM+hvaz866MnBb78SJT0jDY5UI5mtDH8h1
Sde01Hg04p7I5xkDvzH5BUMit2UOP+HirMiF1uYs8nGMYwZJ+stqZQHM7MQ0uxp/nFpwCo4As7FD
OsbHsdwczDRD70w4vTdzBFH/YbaFSckTLy5UzvhwBr5Jo2MLWnR2ggobKp4e1KHYQXRt5SJdzllw
t61dToQEYD+XDhqQGAOi29LTLFBP3dAiCKRbo/fR+LeMR2YX4XG546PXmvk+QM5jo1RDaznU04MK
whpJHVsrS5IQLlAl5GEnHvqTaGUafSQqxEAzbmKH/jOhoWJlyVW5DpYEx2Fh41V3Q+Peij4Eh6B6
AHxcV4Q+TrGoLTX3zp5lNxy5NyVUWQi/Rum4DoY2I5rwrXN9jFpbNhO9m8ULku4BLTm6cGJakZcG
KmjVMP2+bZsoNhNRJ3ivPkvmyfQZTedgvAOG+x58ldU4hR93sy6dqee5C6T0TM4gsN9GMVg+veuK
IfW1eqvmOFXCsz21sziXvC3lYMnBi6PZxedIlviI5IC7neYXMHW6q119ygUHncRkZykrQG1ihcCt
SUWpV8U+/qpp0eY9eEHOAzDv9/wSS/ykg6ZlintLbHQqoE2fuVW/T+mHxD4paFrEkcQ1DBwopRGS
4v6br0NnumA6CN7Zpdlc03hpDerIdPrsMq88K52dgMw6KLqj7N9OhQagI86/EGWb05gjS4uLuien
Zfm076NH+12pW+VgNAaOtajJciAPJWs3WbD61vPG5C7hcN4POkExMGy56wKTth3/qxJ1H+pf0bNS
sjQhJu2vSCMPdWUMwerVUhtXw4gm2aVUQXt8WTnoZ991ZRgw/TnvwjndyqK0mQWwV0y3e8uPAV+m
kEv6FLYUeGaICrD1cGIAbVgDC24/qSl1sF4iOBik2Cew4kej4CpUM+A6At/WB4+UPK2hXMWM+ide
4QAIPVGp0TIAJfSowS/0Id6VX7doGHsn5Pz6aWADGp5NXCkQoyALeBTXxf8dUtKcJpeYq6XwUpSe
Up0qnrKwhzLs6AyRO/HbJRlSmttEVf30lgQiox5RK3IBj6S1+N4i1leRYrzjKyL6WUc8X8/3rfjw
NCA8wO8WpZHvTC/XVU/O/j9663NGjSZ+2MrNfY50CY/oYk84tMNKeMKeJhRhfBVihHnwmQTZssY0
BafCoNB0a0rPAiWIC1G56wkf05FGnczrffaZDFlpIgloVpgiI7Xj46rcl5Q4a2Ps7G+tZ/MI6P3V
vh3mRndV6h5vqP4PVXhTr3gA1Xx+ERGcMLiw9Y+hmht4RU9A7tPx9NAcibxppXE2LWrPc0aDFL8I
dG8iZTHH2yRMUToFjRSBcdveKiRQY19yto2g+X3WHsa6NNun8oU36gSJbvtIkhe+naBNSj8/mV8j
+eqo0+9QfaNZc/ZCY69UnrU5Q0PIeu+2usg61ufdZEgTc2/30t+yoNs+Fs3v5hHkhgWbvbfRCnUN
1nbpJrkf+Os4m+h7Fu7Snv6HpUFLBvqaOH8r2o5zoZ47nnXIYgGKBaq214KIel7I/bIX5YsU3RSI
u9InhSTRg2UXZDQrZMqgMPpWQod/uomrYAl5WF7HpNdDGN7NpVmJtwGukOGXkrCMqakrQ0O9BQb9
hPTMvQiF1RDbK6DtdVjFOOjcQsPGh5vt+MxSD1yQopOMUDXm09OkfgQyQ34T/QrhqWfg3pbPAbj6
GgCaJVfNFd6f6HNP8roxRv8t54mVbO7lgB5yizJBpIRtw5kkifEEZFufBBmwDU6DR8twhxle89SC
hMo2Ycpnn49HrGa6qnvqG44PdtLCs+iYuAAf4xK5CWOg0wyDfv0YFvJJtUH23Eb1ZbFKenhdP+2d
s/YPNDeOKNVVYeR5EC9TUaGOsuPSa1DoC7IzKrUgzMu+gwXPs1zwPkJywidXX4oezpjwoOOq8dJW
c6Dg9/PKJZSkv081mkLJxr6KnC1LnuneNB8+kHxoFJEdecZBTPgYkUtnvrHXqXSmX1tGWn5Zs4wr
Tpbi5uMJON+zxFNG/3Dg1OyHwx3e8T/0vt+88JVIpcMK0rGhas/tOlmKZrGyowVDRbHy1Q8FQTJg
/cMiWjzgb6L3k5ZugrEMO1R5Pbx7ktjPpxnwTH8qCl3wG+X7XNwEdYgOrEvUDptOD/4WORx3UBeE
90X88OBEtjHTIA5wzS856JJHnlDAcHUkEinJMeVyyk/NqzRTKkl9Z5T+0fxaAnD0CiD7LzkTPood
WUQv/tAGW9q/Kx+FfHnIxsnM9CXhnPuvcwsscU20zAZTRHW0mCDLMssVoAfXBiTKHoDMocqu/Azg
xwMhUGyoZdt+RnrCK7G4GcQ03V8/kCiLSyNOMAD2JSAEqfm5c3C1SFsyneVyBXiJpRVC6dV8loVW
oEGBLG7Z68h2w4cJJFHCF3U4wey2loX1WnmpsdbevWPndGESefZFk2bIJ6Ma810U0j5yQM1CqjTJ
kCVdh891StFoxN7s9T2P8qJcgWneff0FmXMCXbCHCN2KPGBeAwl3O5kttD4q6dm3i2XzfMqDd7Q6
HMmQMsv2SsuzkV/TrmYdu8AqdKCXx2HNrb3GDkgTaT68hJSYQm/1jFPSQzJghU4acFAnn6Byw0M5
5VjERSc6BoGgEshbDDX1Kg5XEGHXmgdhqUpqECfFMUpernJaxjzk6t5PSlzi+a0Nfiw/YXW33L6P
aNIPCVa53OZGocpg19LBVUGZ8KJUFxmKDpzvLtx4q+w3mWD18pk22gdw1EJCLIRiuj3WnWFgzSe1
7Bh4lUgKCpnDItvII9xy9LNtvObRsU98DYByE6pc8NG5I3k3PJPxfqEQxnTsD/dwW8IRpgDRbiWH
ciQtBkeeZQ/hxmo/S3UasaHpTuboIFQ0k/eHL6aIHkn6O+AjqWEouQyBWAxS1I573Gec2q7YnAPv
CYDEXLWm5x92rAMuzCTcTT5ZXekfI4Q6sgS6hMfi85kLpDl55Up2Tqrt4j+e7Xjk6aHfRS/dRDDm
4l7XICiAaMeTTaDqFrJOOobTB5MJXv1oVInUdNFl2ViOaDCp2P1YtFQFStN0jef/Bojmsh+qoeGa
0P09KSZvNbC1KuLKZPFr+HC/3FPSlYFKW+QWQ82qB1nuBIcacqb2/XzKzfHSoROpQMX+she1QrkZ
JnewmkV3Gbin5J5Gw12wn6TMhAj3zNR6n1gtUl6eygmXW0MWyj2APJHgAxvQF9Fw0D30UPofJZRG
rmTj/oPBiVmyjlOV3JMMRUCNDYFCegVCHttoD17xNu3Nh7/rajQIcsu7N6j99cJ6OnaFYzlAO2aW
56hALUDWwDHaMjFE5+bGUM/KJbJgACwWifWycauWVQsXNC6E7bTa8cUzWaqz+B7/y2O4Y0fyszjZ
+8nLXiOdIiNBjdNVo21GDaAULiyj2mrlBrBfQpW19MeQ8lNdQU/OpgokI43vZRoBkH9y2CtTplsE
Sr0DegLKBibAMeVkMDGVvLl1/FA6SneshVI6ZVXf2swrTUFfiZRyRKKh7j+HT+ao76q4N8ouZe+I
Jtj3oIUDcVj3JX8eSZx/0OGwl76v1jCUwyV2gxyBdRGbSMRfGurX4zjArI5sRF4tivEiQNx01PM4
1XdVLpF6n1+ty/mlnSY0flskRPeRyTGGv8FgAaWapv87dmUCGz7yaMjW97FgRSRufu3EXX/rGh71
sPmK1Oob86eBJhFNQqGxyH8Y3bUWwkhohBQ3qlqDi1Ajx/IpmuuHkRCGUASvuGf/43oKIz9S65sB
g1Hdz8DAaPX/ls7aQEviv9nuGnzkdJ/AhBPFTS4DSyKch86dVzewEtfDUGtsXors4zZmW0APqQZB
WuRzct+CPyg2Mj4bES+s/2xetP64vUQ2txn/9YuEymJPErwSKFaRDbCC9ixoT35veHuEOpNX5R69
mYrZT1X+fJHcG9ol0WwB+DCdoKHwdwF4y9D3l6Nccl816hrUTNp8JM/WtSTJ4zHXyK2GK3nxm2Rp
BI2822g+gEs1VF9gxIicxUOY1PeLDIjqnT3OpFaJXHJIyVIg8o+FO4vmgjrNOpTWih0YWXD4wX9o
ITC1G013NF4Rrec3NrmBns+n2V5w56guSQeifquSOkxrFfaOuPdBPSVEGfyiLVFF8Ozv6oFz0jgr
86PBTqlwzFdm35gRe7YFBhChM7KV1ZBX2DmJXTSdp24zFP+kvoTQCpdwPbEKDOF4HCpiS1Yul0gl
n/+kVS1bYgdeq0SVortxDkR3A5hhuRBj+ctcD3JYYX1oFHcQ1y+M8ruhon4Oy/vXav+O3BaGpfe5
mDZAGybXHyBiwMLGePNVoNkalKdSfE7cK1bfVnt5sk5Tccx56fzpKp3Og4ErVkhBvRjekVr3UwTO
I8XDJQeiJBYt4/iyuTZVGvXL39YE+bPgTzJUYhPmA1nLPAYDAYGAc6aaAjUIxUIYfAK3NrEZb2FQ
hwEAAPBAGVxYNbaCAAGjHsf+AQBaJ9bOscRn+wIAAAAABFla

--MP_/JrfABqdNyhV/ar7fCf_MSe6
Content-Type: application/x-xz
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=xl-dmesg-4.15.20201027T17391.txt.xz

/Td6WFoAAATm1rRGAgAhARwAAAAQz1jM57rc7/5dABrhlGBhPEExXzQ2584LNRUpTBiw5F058u9z
y/qE5aNGRZruSuy2XubDvlFJhUrikCKCTeiJIrOkZZVANe+lq2dqjyx0HEoxjdTB3MFVM/Eh9yuH
lrKbhG1hw/KgSOuP3bMdlmZQq/6dKvrpStFZxI6YgcjjHwlc8RymXVbZb2S0G7TJ54TV+g/KtFAg
VsbUJDueqapH3VT2k71ZzX7yaotUHu+r5JShg+2Xq/uinyoyQCXYSIbj67nmrX0RdIZg/cGjvIdf
xXtpjgYxdr/f9nAwDBRck6b9c3aVPRDLgXkP/3AQxPFHAON5Xaev0YpA4bMCJiV5s5YLbu4rgC2w
feSNX2Qdr9K7QHQp+LbPQM4Ll2qWa5GJyN+ke5+CYUHxwuHyiJ2YqoV/gD9JW68v9XQY5v98IHYE
Uk1bzb7QtRWBskWkUDBaTYuk+OJ3Sel64/soVbxKMDrV53gYviofAgZ3iU21zLe8kjM0+KegqmMZ
VbnpgcbjttuOs2pSohC84VD+ZwQ4GsHG6zddZqvTyAeAz8aR5mILoikbU8PKIjGOLQE5Ec0kRPs9
l/2432mzupE36xQJPlJFYKYV3oJuqKc6SKK6AB9gt5h6qeRpmr5ZuSLOEBOhgGYPg7m4V96akG65
6VCoVAclMpX5j4X4iTsCCbYFg32qetkWTXPrd2OYKKQEbIla0iCOqWg8lgoV13D61QZRL9pnyvmW
/xrt07zuzrILsR2Hu7Dn1sfqHCthSqcNJn1mT1Pt2OiRgL6rl3Rnh80U/d4lnxnjfqIN2lZ+T8XS
Tbfg0maejuCboTD8UrLFstuje6zengjQDbOLHizEzqDyr6hNBtJobtBp7V94ZMi/EIoQSZwfRM6l
argjwd7JxPq07keSb6oSs73TrZfCDTHB3QL5CJw5l18B48fUZzrs71j/OkHd85DNlXIwXvW9nAHB
3piUf5lL7Adon1YC9m9svBYsrTkgvmhzo3iKvdmX8gy8bQ8a5uYeknjRsslY3Zn8nnupjw1wiSd8
yJvFz46tqLK1XzztDMJYGWZ0VGtNo6LhRsPN9VIt3HmlhKHAkGg6cQz9O/gXc0VqkRENyxOEdkeb
7taloLCuXLfivoEYy6iVFZFxaUCS42NsGU4rkJ6wdFtvceLV64YoBLa2BuE+ZonN2xoxpWo7aRjv
9ggHeo1Oub2EkqUULtEIOMNmlpzuW4h5G9msyeavy1KR7yqfOAPYSemqjYGhEHGOPViEpMnlRsov
rS6gZDPggqbDGvyGj0cy4Hiajo+VoHg+KZCrPOogRHxD/K6RxDNOnKbARHL6hk6a7soA175EUTJA
Tkz8bdbpLLbncq8uL8Tcm67klTzbbCEUoJ1zHRltRv3JpPUCcD430238vMhxqeNuDldYNnYWc1jm
YT+Zve0x1bXlpNwQT5c4tgy4jd7S7g897aZZcNqw6oshWL02u9NyQ0mqEd2j3EIum/P/iqoF7kKW
YOk0IxmkYpSySwATstP6EhFWazyxJtNblHICL/tXs8wIQTeWwZqAHanrtCJQFtFIpM2KM46cd+QD
+8SY7zUoTxPwDDUedmPm9ZaWedJSCIGPYwm/bIGLikFsJq/o8af1Mdft56rRw33M9fdlGWoVpjGb
2j+FuchOZWScIk7dlxA6hnx/d1aaAT1Csir/3tIdB+BEbgPWfdCJqBxU3h5C4W6r38WvLLDmKl7w
HNyrCLS2B/L0tAjaFV+atnIVg/l65XtAP4EdHp6xFQJbXaK3fpETDT81okj+Da5CB7kOWMq/q+lJ
8uHa37M+jLLU8nUBqFgEsNhi7sWVVr9Z33MCV+9j1CiB2CJOn//Oq4oCdJNVsnmesRStsciRsR+a
5b75iVqm1/XUvy8gc/L/pY/DhzamjJR42XuJAN8j8fHcuWtHGRyix4Eumw0eLKY1RKEBhWyjYoFp
ddrEzuE3DqvjVJOHsH25jXFrmQ6L7fYvFalATCbUYcbfArwHrhcEGWi6zmQDT7XIh9Rxhvpn5NXE
bQ5dcGMmhcmVgf92atRLEcXFOCUWb0qhG4nABwMt8mauErsZVzfYpHuYZ3Ac2FSkRSvNfjlrgYqm
pyov11j1veOV+XWzyhigBQK3BdwM7Ywgj0soAKhl5cxIzAs7rwQ4jUt7RlY6zKkUcNZtjf3FLzgF
cw/AzPCZjTZPPPu4BL8VDUQjAliW18LB9l7OuprE1vZc3dB7zEpzxQPHmwN1D6HwZFUb5wHWdo0k
bZmW1gZ5cR4kwmp/Wod9abh2DTjAaiG1JZsYZZ4R20Ba4HPYaqqby9gU+m3l/YH0FJclZmzKmnfW
+2dzUvBVNAHrZycTF3fUU+w7FQ9Vdkph6itQsLipWWqPXlH9qFSVKNXvRfvpcS7cSkCrzKrLxpD3
nbh1WjNOnZ3/UlLZXcjvrqieriObRZ8YSe/TOEWxdf9l9+oE0g8BygQiGNdOp38pJ78FwLXp3IDU
ka+LjAQ1A3qAK8z4iE6LkiZsBcsoC5TOZH/3zDrYiJS7fse0hLtnBvJcKGDqzsKwdHjYlA81t+dU
1qm5ATnmShBFfM6Cbzi54knV9cXzcuueUuSTf13WbuuzDDV9Dc+GjqjVHL+1CN2Qn0vsNKJqgBWg
s5ghI+QCcy4UPjeyFhAw05uNUkDutzj2AI7d8rpUbb6eBFGU4NreH+N37q4KIl7T8iECdnzIqYwv
FIlg/WX0f24Ht7zMes3Mb1/XgiNM0WO7nQ2tHDrM0aKAokYxbjzeGBLIBIAEg5iFobhUmB+0oYBB
Q+q7xtuzKjgqc7LxXbqaRvi62ptx93H0WPGQAHZpHWc0+ZBSD14pq9X18wNJMkcvupkI2iB58HNF
Zx9/59fv6mg/zNvR0yNY3hkt6g+9brklY5eAp0BQgGFiQi2L7stsReoFrwLbHaxUDwAE3Z9cyvZk
CxWszsGoKwfhlRKxRUIwFUVkt5tIRRVGLgP61Yka4SYtF12GeiAzLOHpv/hrB7Mq31A0nbQceor4
ru+LOvtIiZRfoBCSwMH0cr6lSGZ9sa2ktFiEoMARMsFvKQ342SR9TfK/AP8MZGEPmxNsOK/rbZ96
gknO3F0e+XDEvL/T1WIDLUAPkPtmGiyGwtcrtSNiQkblgH7oC8wJtDWMUQaK+rYnXT4uk0hSVIaL
nRsHQN3suFvMfiImGx9Dj1ZRuMmBJ3iGcZ+uknDvR6DSlfvzHmgXpJgA/Ua+vKZswftNzAZFalek
gQtShH1qUlKPWnaABmorL6vyOHseY0hoaSsR91FBkotXf3gEl7MNUKJW4n5Yq0iOyO8y85eQvDso
oXp2iKUxaDRQaqsg5920sIeLacTBtiENpDWRqDVByKUZz8Y0OQDEjzNgIiD57IPOCW5SrKYLBcq7
3VulcJTcalKA4DyZdeOsNnl0YMTutlx7qSJkCeltQRyXWD5GZO+tm9NwLtYTlLqDQ5IBfFfqZnaU
Dq3WgZN/icidGNMkkUn/8E2BQ+XVJ/DfzqsgWTzDmGtYHsIlsA758yFHiSnp7ftl5K/yYQ7+Przq
mkR7h1LIASXO5ycHHp7yA+3S/rjRNfctJCtzxGStBDzYqLhMzhW9XHNRNovROeEQkb6BlS0dEVzL
+AtDZExpAuQvOU19m51bsglYdyPqvh6fCdpxpdBaGLzIXQa5lai08NpLql4h+kuBy8pcu6wBphst
4cWVK6OrDIq2CJdYI1ocWRUft8lKpUvSqQnayqKjqer31VSz4qEt1tkaMSbzpwiCK0sCOOC3cwck
aqS1E9EkRcbRxB5nLDq6y5DfQM7wS0xgB8ut65lTxPr8QkoBgaoDT59JHohrSFMfdHwzG2+TOLy7
738Ojssmt0T9HHvd0CExh8+kigmDWPIGPRpeSgbihOFwQvp7rclaS1FOw0k3KyCaiMbvwJD9gukl
wYa2x2vI4v3Txdx9rz4gy2YPJbngsNz+fE4zRAURM5V86HPEbJmelu1KPHeAiPQ9ZwzQMHXZQXbv
eVuCUmgNAPRc4Cr79p48t0JDiZgFbLAPzXXbdwGMqiQ8yi/KOh8qADJJ2sqJBmu9TzaneyB7JZZ8
IJ3MwrU2ceZvUCTWAmYWx6hYPdOR9tCGlvAl8pe6IBf/SY79lh8L+d4kHyE9OJAg3Th3nfpDNI1s
lHvQnFJ5H58qQHoG9YZHKPxcKYspW0Apo+a+KzskonpvGama+NfoXz7INeRdnVSR6I1EDy8r4myG
lYeHISexod4qgZ0/BqT0vUYx5hD1LSsm7tarxDEpRxqFs2/zP1fDigT3gIRwmuXTfAEFS5kEtIGl
GwhCuMD/e86muC00VA1q+vuucXvE4CoEnOb4TiG3+/tf+iJqa0qVWjZf2m6IQ1NMCCtlqLR93WPO
7yfU05CyDBTCxEiBTdUfAPxXEMfeu7Uiu0ub36WKjaQSXHwmfvvSXAMZqwra4jEc5xqB5vhk1gDb
7rYVML0Z8vaf0UDmJPJRXsPPn42SnJvhjx+uEtdPJzcnGpFG7Cn54MWLBpb/7EVafKpUgJuA+C8w
QLKCcht3+9n3M5M6+TsB5wH7kLeKFkBcQ53WFPuE8FaCfuNPW1Ld8YsE90lk1fiWI6f30QrOTAOc
EJ9dhjMYwYHXBD8owLmf+BDSqJBvrU7VKJuP5IApacClAHC5vs5pmI0WNoAu2awzlG/tTRzRfKWL
hiChU8vCZQGqPRrnweSEDilNQHBKn0uQEpk8w2LiPXrTxkbKEOIQG0HLbGfY3C/4iXvoKJtnWVpZ
x7I2IdySZUIJM09A1KsHkGC1T1vBFbF0Vn4RfqkyjF2RynKyl9Su0QtFeeFZwiSt+iDCmDxUVkgF
gWlTw15n68xqflhri3SA5tYg6qxMamabKA4nY5dqUmT8m0ub5hjq/vcmCQOfp1hUGrjUKxz9sRaL
UmRR1bI54T6hydEGuAbkv02KIM3O2L1vdNk1DHyu9Rmaop5jI/QWKXjC+Pjj7aEqizAjCUh3C9+k
zUhJiB1WZ/oyWFAC3D4OiQl75WUvXNbAIGq32d4Ec1wPQBLL9LCLXq5GVgjGCt3wLnpI3vHGg72y
cmOkQSDNJVl2ygkfmuU2aos3spgYRuBTV1j+0x6Z9zceFlcOd/TS4kjNcq4OLvts9DYOquEhpbEn
UrNQhp5lZvRVzMhLxArEzwkaLJjHPLDjnFlbrqjGv2OlPVfXbGw8XdMehg+9Dgf/Mn0mjFZmdxX9
uMeInyM7I/ENNBXL27BzaaE90DapfWki/lj4MMBHPdGUCyqdiK/olcokgwX9gNfEchu+f4/LKVzv
GmrEMRMI8zl6v4DGtCSi68YTG7BcgqKblv3Gbfr8PheHyDu1r0ex5AoRx61cIQotqYtBAm8m5b4X
KCCjOe18/3/ufxz4WMv2cJ/TEBKdqLXq/pUXhkFOe7BbSm1cJap25XZ0Y/Qgam/LR3e/fvHiOWCq
ZevHg0Zrz6HGf+p3eptkgTTu2jF2AjHu/epek0OFxyW01ScuOeP4lFTuGO8OPssnpnBTNDJlU3Rw
H4EbvH4arQbozTd+qPspQ2l21Z5V6jMd3OrbdCZcBbrweZNaL2PWW5JCcxxN0Y11nAWVixuaO0YD
/FwXrC2LtIvUEOLLNaAB9gaPAkFq6w5ZHgrVncndsgrVzHrNDJ2Fzxl27TqW1oihXRvg0XzMo9xR
1TQHpjzA9R2HEb1DsZYg4HIiFuegFOlDQTT4+MW6T6DTZxNt9/asviFa5N8z5BXFH+5PQeSElNEx
hXh1x4rEcx1mgbenoQqwwJ9PwCRi3esw5qY2r/+oKodZQ0Eef/7vED2Nj90OU+sOijti/91Yv9cf
NQvYCwf93M0XHhu2tRIwQP6iezqUZZth4wwGUhGWHic8TezCiBEp6v7JuvTj241EfwAkwOdzxct9
U/Kg3GWI7CShMFGF45UBMbNCrnK/1qAN7+c+3bgXCQa/RZtlDfBxTTqkqc8d0V+H3mzbEWSY4nk8
/WfkCN0V88BPAdcnTvzV/lsMxdyTly/hOuVT6OmdlGCQJOpAuPh/fXlZCPRjkVwT2Dw3VyQe/Cs4
m2jQDOfOG5+COy+v2bnXxusLm+8fbn3EEMCCQj34eo29ToP777Dv+pSN2nREHwE+dJ6AOnOag7p9
zyE0iB1jhZubGf1Hr+Z3GUR9e+HrCw4VK8VanDUm4G50IvqmtRaPKmskQlU1N1kf+sjijS0JUWCt
W9+KzCOfjnMQxcRDK0KG7dY6cNriwWJv7m2UgqesLW4gK5yWmtpIqAFiLkhxup8+V8yLa4tbGLz6
SeRO/ejuUzQCr7E4Zradcmk7BuRlI4NpgrM04SXTzvKObscvxdQt7GkPSLUwA79d5HoeyxNq2IZv
jlhpF9sLN0gYenFtK3bZG8KuAGz3H0QeVGB2/2HRiJ1QFgQ6TH+OpPtkuWw0hH3bZAcHRHzfsh0U
IV1HQN+9cYP69wK9pkLpqfZa2nLVrz7ajMrkQR3gbeaqQDszvDwAEz82+eaVnOri3HuQLJxeiMQd
f8fwj+TCzKEfF+7zDZxOGVs/b4lZXLlRS3f4tGTGak6ZmpZ3eqS+TclbgErf8tBny35lXm9lYz/y
Hk3bXRZPMP6WdlIV91oCFpeE7PsgYOt175Af9Wb8yf0uYpj5lrnS3YEt3zrcS4eNnubtMrhui6JR
YhivT0bpu4cGTvliyqhIiEv7UouvWtV1IiQe1arq6hJ9Gm3+CHYc/SZUStekiwrSOfpVgbsQTSxV
f1iIoZBSkHmUzT/6F1SMUmOqpRU41CqkmKrhoj8MoEcoctOEAe6n6vwPwT7d3CT3Fg+RqIxzxFT7
mf4OCjADq53c99w+xrHQubnRgG6AY2TezHdXLOQBNggn2E+gcrEHdVUQJMoPKf/qrqt2F66ieqQi
4GuIuDY17PKp1lXNB2cURnz6t8kExqlE6vuCzNdrDmOfF+HLrcs8qJty4ygvYeNQ17dLx4bIGEOc
Zq2wRZcyvec7QrASincwmlrwNJn3HC1GEs0GLLY6El1I732Upxptb2mslqVmNjk26oOIv9ZqRMfo
4FZMc42G+dc+5GWmQ0Z2D1SXyeBvoBdJYm+FaJIt9LP/wd4HhP2uO781BL2X33FEwrspkPyhlwGC
7Yq74KM3raKQ3/dwiOREGSPJCUr5x9D4Gr6e0iPVfTjSHIIkpTbQxV6dsa+7/RqqY/HaPUySUclL
3lC/v/jxygDBB7fZAFf35w4Dn0iXW1yJl3JqlLUgYygSx+HZB9HHlF8UdurcBpTwmWnDOPjgh/EB
FkL5xJFP9YV//D+gAS5Gay0F5USH7vm7YSU2GcgzYGqKRv8jDrjuMA1Bt5B2wacuxZgRqm3IXsbZ
WZFzygmh1In6OWQrL7uVDk87wtKhf7t9SwEwN6fA4xHhbw0cpCa6DcpGS/rPW/aKnhRgd2nfyAC8
4Aync5HfS7j2NQMi7494VPzqE1xocxZkR23IyEQOHzDZiIc7cvptArefl/VrSIpJSSvvVCykDYMG
ARc1r4KGGheQfHB9SbvLbxpQpf44lVTjTMU3axclDAMXJff99YFon76/mzQJ8wsun1arE67KkN/F
51xJDyYk1S/ryLntCNrBzPwbdQ7uaAG9gUNv+a6JgBIQU7y1LYbHNcmitq2BnX2EMMqYemYw7Qcu
x34Sgctwoi/ctNse/6njeNg8CZfTEvAkm6uSDRDb45mA2LOdZwcMewcYU0cgPdoSwLvIjt2cLDQT
BzMNbBU4yWheIGtR35c8HhkusjnQAxS1nHLu0OAO+VOiQpO9C4vBiM94jdbTuBAaikpnM/FUTdMV
nIo1c1QJXe1pwMqZDuQ1eXyFe6jyMqreWv94S53eUWzxLVe3dZk9ePclCWqstIL3IwxHol8RJciX
ZGaeuR68QpyPcbG7yqW9/tWQ6bdjzV1dkUoOG1gflSssRukNe9fwCSEe2M+WiR7QKhexXRTcvPtj
r/b+pAR2Z0eFbJ8QT8lHBhYhhYqcfdjjLw57deOiQSKDh2jqEV4HD/T7kSovZ+lpszPnXgeV35iI
ez7Y7BZw+Z5pw9COsBPK54AgHZ8CDb/zYKOSN9YYOpX4zCYqHn6N1a3gEhDgP8fZ0YsPXpOaeMR8
LzdAjkM0OwCFGEBgIzq9+bWxZgoSrqruNBHshLTUa/yTXV8fA/+pXMbfDzNH+QmKaN9+gusBTP+V
AXnXzzf24ZTBRTTSjIzRgi/N6jkD2H+pdGcdKfkO7I1rxNuf001aAP2ww5fAnTthUQ9Qu34twI3l
kNb8RpPlEdukFa6BeB2J8QOx2iELLB4opp05WUNmEEK404Lgy/u11lIEZGqZNUFa0yzyhdGJmQ5c
dbhil+GUbECc1TQ2nfKF/oyIOqaTFHwOlEHQbSWvkQbOW1dNgoYNWeogsFo2WhDiynxmEtOvmU5j
tsVuq7pUL5HlzqBstBy2+8utIJhxlJpVRSU07oHO4u6WQ/lzgZ7GBeSPEZeYehnd2pO6rnVZBdmK
V1Hel9UpDvRFz/9Gpk3NRV7IV5X1ngBQ2zj9qiaYq+8KvNL+kpUbcV0nZZtYoTvLb/cphOlLvmvG
BY6SqQnLhk5KrVk7dEpIb3pLxPJIoR8tqPmoEnq6ZM4HmX0S5j8QV5biQrp65em/zntbQ5Ertexv
eolXxWt5yqLKWAfgtQF54T4awqqy7LLWfKq6sP8vcQGUdS8fIntLDBv+i4kaBfQrqxLbiuqh0HW4
vMDxxV3RAy6dYv2IRyMR2NeTTaaUvRJInAOOtS/w4cqyzsmFZSnbDqhiSpoHNWEud2UB8IvKwNTt
Z3OEgZWdPG8T24Ql2v02NIxJBpD4XYbKVvS0DLym1T4cL9qpeWS70hDr+G9EuoiANBTBHPWlqVSV
x/B8Uo2KeoGvmI0CEU9KTnfO5GaeeF7TN7I+5zSvvoS4yyXtQ/Q90XunrdRK1065I6K/kR7Wcv4S
vXi6CokPhtMrQBmFiwGQB7KXQ9ACQk6KDSxv3ovoC1YlbaLy980fEM8Evl+TT01mNW2ED+2aJzcp
nsWQ0G4lqc/FBZPRaBaiGYT8YMBzZrSKRbKVme/iTpY+tJAt2w/RtrEgyelOgo6LA4IX8FnbV92h
NsYthI5h9XP6f3hGGgwOldg84dXLOAnePXGP8RP4FOEEw4AQdpQWmK7fLv8fTdjRWf6kcMmxmRce
MLClKnzs7Jws7p2OyBlU+2Upo6KaACaoUPxBv4MX8zm0JYQYHvmL4kj/X0roQfJFVXd5uhtdePEn
zgLgmK0Lvd4f7DmdY+0gTfaQGVVyiOZp7YVPT8jKBY0sWUmnsZULJDH6/I2jqRG6itFipLVkA/dF
1Mr07RnFxFdCueLbubcMt2JkS1FL+i+ByAG4KULFcW++QQ8ZoNNZjVpHDWCg438PAv4L5F/9RKf6
3qoaXxRg0gJFsu47sfGmUn+V6F9tLkD5WtUZXFzvGp5McQzcu7cV3FNvux/R5oC9SrCy+Naan3my
vpUJPvFum4LeKINx+iJxp6Vild3aQy9yJF4HELtfTxskjmn9Vz1BVrNZYW7RK6/fRKlmeMAgipeu
bplhUbPDZ6iqs+9dK3wJtrEImMtvHl9nso/qfjQwuGdn64aBwtuYJQvjFQZ27KNQtBaPqyQZW3CR
A4pm63oFBVZqTxlzluYTLfqqCbvbhBx9X784KQ5UKQlEkaKRAFOv8YBnfIbJEhUSZKdiT6V9Gj2y
TM0FZ5KvL6xh/b6m0zYbCEpv2gRijjrpHUwBKO5Gm6hyrN9+7niuoj1SMGUtf3gyTUx7m6uErJKd
6Tie4ZSVVxcu4TxSgmyoyx6MMNfC3DE3BgUfBd9XRWVlhxpm8eGGmHMUy+y3DRXpao9YFUNhDpy5
6J/snxwGQovnbzCxjNMWskL1R9dvb5Dm8UjffEEdwp9Fs3Uq6t1H+7Y5hvIr1HEtSXITbc7TdOfC
4yaMICKA4RKkSoXyGZfROy5HBlXaAkBUGAVwzlyiVssHlNK5HDwNRQfHFqGG4wzOJLXEjtyPx0r5
wQEkIQ/AullfWxnoYKsLpAI1h+E4Zr3M7+BQwae68BugwTq7X8JjD7th9AlESov8o0yKXNyzvhyx
SUhAtOWsdTATFHXtSQAD7U9XQZIXrLYHRhszZdU4pekrG999Bj54DAKmH2vGN0bk4UWitTj9IS0K
J0iK2HY50Fn2C6/FErfGD/mCutMTjm6+4NdT9IMEWhCdbGDEHWlweaLRdZmeb+mGshypi4tQKbEj
zg9GC82nWfxoGo1GLLvZ9bjjOpP7PeKlpuUQoOHTXWJlQNfBBjmNfXrTcd6ioI10IoQjKl3r7WVS
tzbr+J91HvCwu0tVSZs9+6wCqODyFAyk6lnO/mGML172mi+/1d+eY8LOpolNA82NjcTFk1DFf5TE
pSygaDLODjUoptOgKAbqyuTZruuUp6Ncun+/BeaztQkXc68EABhsEg9sXcjqit6cBlrRxi4HOcH1
AqtLpZ9TUiQyn4uLtWHjqgGrY99LUk/5cJE0HRCo4sltdOgNDoNte2FWndyYJmhP5ZQDeGNyMZ/8
rSLA1+ETfsp5oPUk9OTC1nPZpniTex/8YQ+qxzbGxXo13AwJkYUCV6Vp+jJS/Eh4/EYH6HuAbN0s
+yYONNtIwWZC+QAwHWtBcNsV9Ba/0p0O537TleWd+7W2JNppWG3g+k7VcBy/6sV/qOXp6FvaUVbi
APf/ZU70y+GmxcrY1fzOJhcyMqWjqsV9okMyZKk7SNkfVNTJHIXfg84E0II/VZUuAYUirSFwImFT
3+nIPvOnQO2VtIibBlku7ce79AehdJyD9MYq2o9ACwe/RbhDZEeK+WJJydZxoEnKOwocglDyT09u
I5NpnqiOPK3CWhBMtpU5N+fm+LoU/qg7ArRybJQhI/qg67mCjkmV5VIVPMH9Y+GOulHM8eBZDpmU
3mOvVUHkk9yYacaE24OeJ17+qBAo07cAxAOaf/iv3AdWTnJnBUdUiNogxiRYOMwOqyL2G8geULvj
ww0cQwleGph3mKPGwdVs9fF0R7sKj1D3/WZqPPgenyLhVD6V4NBPtU+np9xINNO3t2gwFjNnFKu7
Q8fKKoqSloLcrJJhk1uJe+kSS3OTS19NKzQ35Hku6H5erAQrO5jdO74+puGBTdzoX4M3PF8VWC/i
y73eNl2XTLNYAZj9b/+K/0M8Al0B/nSb1VSubrlQYdSAwx5lpgpObZ+NX6OWM/SDvmTdlHqBmn2V
UkNg35Amyj++udXFeh5NuMPw3KShP8Yg5cC3f0fmFSFzIfDeZndhWChZ2eeUjNS7YYVqCMaaZDQu
UG97cLLP4R+4ljsAyOYXZR5IxEVzFPa0zDjfC6GqQBIVH16mjk3QBP9dEVouDTcgw+kDVu07iAuj
ydWnFQSZYZ97KyuVfryJ37jVn2kTzvszfwlqhGfvlK7qjTYoDqUAOVLmOrsdDwXTZD9mBT2tJcgD
DtL75wJ8dt54rBvJJauIXTuonOfTDFiy5lgya426w4K+MipiJvTOw1mSzuNVPo7vDjeWwNolHuT9
LfidaHfZRlfeqVLB4ClQhEhQGKYQcjK0Lrx1v0VQgRQaxO8iimk1WzHNvPC/35IaGzd5kgd+h2Sk
F8KwKPe/Szu9JoAYJ0ijhOEnEK/CqH22hfDVVFRnSy+L2bg453f5vX1OOYfTxnaUQyB4gZrzvMJi
BN/woSMf/OOzMeqJDvDyOlVs9wX4+wkHTbzd4RRjh6Iviib6auKqdg3AC927onzZCUHTSe4FkFUi
TvrAAoW8fAN4DNVfuUJ6sxc0e0Q5kLfbmpXc6c8PBiPKlQpllGbO7mdV88kMs5JeJGeateZu/d9H
Mfb4gt/sHitGK2lduH3vR1MiATfJ2m6cAPSDMIo0LdbUe/NDUJoma202spoi69ecqleLNEUejlKa
wl352bn3qEWL8uu0zicqpM1vD1CdGGA+PY+T9JW0qjHRMSom97GYSyZw7yu6Phum4ZS19keJOpV0
Ncd4bDvLfyzJFt85eRBBwxDkdHrBX+zERmyc0jJnNb2DdUYPoAJObwO8KiOjqNN9oiUe/+jzhiUu
aIAnkqlYihFY9g2n3bNI0UHn5tYgJxcSNLD8B4O0WstNWPFSz0+RZDUCVjgksIfKUs/HuAYssHCC
guxxSjWCcIrVogdPQYolAvRT+FB45FiY+oMYABh14CSR9JsQlzLsYcspHW0SEI3PP4X5f19H9ehS
i0Gr6VhgZeX2iXHXkZoxecMmR+UJUhLCzN/drBCnTisO0/5JUk7aVyTSNpJCugMmMCCEUpZ2o382
TNlwx2vIL8VQp7DF17H+0bhBO6l/FFroYNEX7zhGh2ZeFwShwknGxzqwvvhExGzBJ4r7sITQWGyT
0C//CY6wTiQ7xrm5/rCx5n79z8GlBCbEJ6Iz5BUUY3D9nMhqkn65S3sbSFmTzktSsrLiS2CgiNLU
xICKGfT7iLXVWDQsEJc7fiZx4xJw4IA40UwtLsd8LzHaAWdWeZvFBmgclFnMSYQ1EypjTaj7jgDo
d2qjmKMg9vP5G0Aq73TAoNDscV1HktN1/r7IK5R29EwyYemjh3wRAFGLz/4HvTMoOGw/+UC3XOtr
mZeIKM2LT/2+/H8GRguts/6gc4Ek4YA78Xm0CeOooTKj21otKMneRRe5L6QQYPK5dz3n8k1Wa0XF
OgX7Fn9dIN/uFHF1pT7zSk1BZz+8nox5A2YOiqZGHIWLFcY50xVmZE4zaMAsRpEMTEhyP7I8jYIJ
GCDEomtzkmgPbNybQS73htIqY6BoEaknVzcngN9Ek9HCL6TZIN+e/kfFS2K/u0T3g0CeafCr84yh
vP0qLLvgiayqhFqnGQKl7yGkJKSxrpfQdg2AvNEgQbXcx0Rsth+QbMyYvsriHlu3X+jX/5lkylfB
nvqpNkbUvt7F+IXMgVYT758bumXlFcMZO89hEk/yhn7+3kRVxN7EqH1ZYEvLHpQHqEslX6dWhSpf
ZAY4xYDEfVaBXbjTcJqNRWdwa+lSGtAGA62mLMiswiztdLeN+YrW0tUi0WJXP/yHldZaQ7QAC8Nl
CZjEKwKfF/BIEeSXCRoymCVmvfyGnSncYDKeb64HIN4L5je8xjsjttYvQSunQugSUWZ7d+nuwBQ/
ftf3kdbeppZ5FN4HnRF6XZWsKwsYlfO6vzZRX42haQ6xu1Jyrw/lieUH/tmQs8pIAHb5czKEVd0g
8ARSqm6QiDiIvcY/oN/ZHmVjvDD7pAjwR+UDKfyZoiP07UQ/3EwX3f0Oia3IW960ePw7L3cmTzwH
00RAb2TqP/CDVjIpOXmkNmIiU9dIOuOREc4aKyalPxmbzQ/hfHvbtX8KCC/xBX4khOHSS5Ce3SKe
PljH+ZWh46YnEUMdSymWGY01hSewLVCyLU3YTFezxGsaDp8WvaTlxN939FjRMNQo44erOeTDWGBk
FSgrHijM8q5C+fxyFrg62W7c7zajqJr3aDt91xFh9x/Mux/tnIP1ykNQS8ec4DEvwb8a/MxRLN8v
i7lIE9JslETNKtCnFVYgxW0EYp52eW60PgMOe09x+zFsbULOUWcGkqe8VMS8+Q3IEyhg/1+GwXMS
a8UIXD7qHPJkFmM8hbt4cQzZ0szz9ltBQSPPNRqowviZyXxwndrzU153EGonPLt1Sv5YTeEUXHbJ
sKAD6pMAffLqHoDvK8gAB8e6+Y8+OoomQdvgIcClEwDw7TGXc+k+9z3+SPGav8upB0ZVfE8Olo8I
unffdpLTgZl/tUw3NndLQ2IhtVO7f2OXPdO85lxW4+o8697ZsJ8MGoK2w8CppQhztgYIURIU29iZ
9jgH0V5AZtyjB0uN3fYOazIo2WoijWx3gMiI0IJZQMxASUF9TkDXjCa4N51panVsjZ0zE8Q0fzZk
U9hJP1BhN+BLaCsBzIyuDJmYip0GbHCTT0bCjeLXDRps9/FBGtdo2XN7TmtiqkVqZ0EnHthVYWiE
/Gt9CU2uI9L1xvhMc1AmD1x4+Cm0Qt14X/xdiN0zvb5kwiKVClzIZnIWlNMgN1y+Q5RKPGpILlx0
eBzIndxEW698o3nE3KvZ2X4jHlMDXjRes1uFNfMjhhis0oTmVseajLcVFAhGuNBG4+R1UTfCgmQX
QJKd50S+xOOgqX5YgCrYIH1DLt5ltjiC1OziCi53Ih4WE0rk3nuem8BNDGicWqCTqV2ssw8HwvdI
w7cPpAFWaeqoyappsWVV5sAf//Z2Yv0WBPRVmhI1lCHxqHAqDAYAm8dHWD/WYUGONGfOsvzYJw+h
tam76gvVzXlaRUZGjcyjLzFR1lHFIk3fH4s6SoJ6e+37pcxDyMBE0/Puvr1o7luhGFDvAACafjgQ
v6gJHj3V9f/GidofCqjlt0H9Txh+0zsXQzrdbSzYJ8938fuiOWyktvjWJoONOEYFnJgdmGxbdXvs
scAVFESjK9NcT/trtu3cFfn8gumBklGsRAx4blGKnQwVRto5I+tL9Oo8I8qZsM1ZA5AJrSolUZHV
ImX1OafOecnfN7GP9MCj7DeU9C+3KDEKuDS7ehGwjsvF0lCibpAUMG7HYUFJYTHgUyjvoUS94Z11
C0SMMpVSiOQimVFjo1JLMNdwZm9iygQDMfPl9rgseMYXVuXzTgp5U1PjbsKH0kkYMdzEUGr8ZydQ
keBUY+ShVdmHQnzHz3n8YUfRvoNn8PGMoSFcHfjSMsa1mgLWIen2BuXeFXo0u0lvwPE8ANG0LEX3
Pdw2WP4KbzJY3NAXTWcxDyh35/JynJzK6vhmH8k50LHl1cwftBt6Kc5aQ81rLgrzQFAreAzfvN67
qXY1dKHSrKwjIpUO/8Q949pdfinqRpMJn210iEgrat/9GiKAVbwnUYUDAhUlsADFuEdkNXfSloLT
MGv2KN1gCQgvBwKjWC7CSCMStvxAFidAM7hBmmeGb0loUZr5rnRNarMRYnbtRx9u75lR3AZ5Rpmb
NpKMGLJaGrgXqYJY9TVDKcvrfvGPNi8/HV+Ouvc0JaslJHXMRdOjbf65qdmq9x5dpHaIyOPi/qdl
LqPk1MhDwYH3GsbUZkQ8Cee0TwCqUq/mMTli7F/Ve1M8dZB67Ljswe6SWYdf41Y/MPRJnwGdzjWU
4SsRZxtCoGSLOQGkjpZcnXvxMWaGX1/bV66a6NVd1cfW5zE6VRLl9hR1IJr1vPjRLOdQ2K93sx0+
GiB13PHgjxATMcYTnj+4AtxMhcx5gm5GNZGzUqriOuv6D+SjQcKakQTQDjYzuakqceFpOEoBIeM/
rli66s9yUa49H4shBAD3YIyZor+lWqbISzWG/POSRPqvQVayRNHA1ADTCWpXpKkjtyvNRdbA3gs+
tnzjFyPmNVtx+z+Sxyn2rgkdktQ7YL4asuov+ft54+rU8GokjnB9Xhm56MZ8rpOqdc1E6KUBsgoq
frs8ko/VyPsDEvxzGOe4ZFKGI1gjsetF5W4OvoPA5/iS37CZqYFCmD4/BPI9PwdtIdzxjd7eijuw
IQkwfOnnkSGYL39Ep2geuELlAmGVtMbFmSlz+dLGGuaCzsy6lyfi9QX3rzMmm8jWh4lkU2GdUKgu
uluAVOxU9CK+4R4tyntP2Xg+ciVAbYeLQ7CXdDcFISOdgY05vEY4JrcMYB17jD5j9vkGSUoOSYr9
u77nR8iH2F/PnTvTTkKRCz8+s+HCTwpetaAj6Ki2KifxE72+lS+eEbOBNrCAEbJfMKdo++ni+WjF
QTc1mtITZNY5ZOcXvfymBF4LL+Dm0RGyvc4emkUdXYkGSA1Av81ulujzbW7hUWrn7zWm49iskd8K
qgvWETmYeescSRIsKf37UbtV3y/Nng59hpbhzsZEvZ2Mh/2nPiMn+PRCUk3sCgwdmVIaybUu9qn/
hcr4IAaMiHd5MUXOsf8FQXeh2ZZFGkc6owjv96eLkKEYxlDT7e6p4nohppby/IE8BJhGFEm2L2UR
FgbQeTghJLUvJzuVdadR2h1PSlDwUFzVACCagSPmUDVeh6EPfHVwYDOtlRQQL01Xny2ANeXElMy/
AhGgw0bD8pGKID9z27MYl3Dh63heA9tPfi1I15inM3/MXY5ZLozhaTGc1LYNzitegug8G7SGetn5
VXUm179dU+RyS6kC/au9fCaF8k1d0rW7dy3h02r/bwIEcz6z2bqMmPuxtaPzwD1czzqUtoY9faP2
SOXTRi3vKzsn1UfPfqB46irxtxS4HWeAgAkzNlkTfcIGdYMNXlb/ITBf2Xl09MUdAw3/E75VUy/0
Ubx+MB2CzPALoL9G6x+3VZzrDxnpzjgpY2D+E52F002FQWkLZpxPTPpGML6272wL9mVRZdKX8qDT
zreWH5Qnz1xF1Aa/FQICjvGsn68U7SLpO6RYsfK8XPESKSxdk8UGWxuJUZa7cBi5CuGln9ndGl5V
rookFd1Lq9ya88BNj8EsEUBOv5zydqTtAmi2LYso0ldlkyoQro4aU6IDwR8A8awIlww2ZZ8SoeCa
3raFr32Q2mec6NVx7NyOMPNUxVYrlfHS2t9agtmcBR6aaBySwnLiCAxSzoF75t7Uv6KVfnlE+8wB
eybDb2PVBz2zfBAv3Qy0JpqyYRkS02ftIAjN3l3OQUyk+muqliXZ2PKR0K2KOsbUl3jtTYDJ1OVb
K8VQnBuW0Vx0FtNB247hGX2tgoH1w07vDQmjUYD2J7RR9xBRMK17SEBCe0tbV0B0guj/z3CzIrwM
XKaKbkm7lhggUvCVw/M2UVKveWulV7diXoWR/UaARqE9871Ak5CZvwTsqj2jn0qye7wOkM/dnlsU
huMi/crFIur2WeofPWS4n2zk6bT/VKNfyCBs77AhbR9EmjInhoYAXaAlZR1y4+Msy9jug2xBsZHD
kC+zWWy9GcIpUGjHDYv7+0QubgzUhAn4Ieg3TDH/TE6Ki3ph2Cs4VNYyetLp8pFVtrJNEb1bRnGQ
SO0nMkmWBDy+wZYEvSllUbtcp6ZnPRUdraGxcPdQM+j+btiLo+/Exno3al1Ac7zeN5CxBfCSYCX4
JffPHt+QkVhutiTFa6fLjCxK0It2c5/10Ay5gS0GXC8VYd9eIMdkwZOUXer9aRZLdG527HpSy1+L
Ybt754dtwb5OuGff2gev0HyqtLJG0PfTbPPbNnoyvred2GMebpf69KkB+UEMwPhTTj8f5jgKjOrf
I3A0WPofvJxy87z3KAxoZoU0a4Gfqpd9bwcDv/94dnBPn1ZmOnRKyB5ShA6AKDoGKZEFQgPGbD0a
6SeYSPA6wItrGi2+Z+mb3LmDhmCFVvEprakQWHtEtxrqZipNP5TnTzDp82cB3Ed1i12UhsOCi5pd
vF/661e376WhFRr8rW2JdUoQMPatJlkp3UrAOa/UEiF+ZMrasmEUNmkAQ/ORi5xcd8bOJ9p4XX4k
KAyLEdngOkuFsEyf4XqsfmpVgyxrYxOtIO4B+p7k9xxjVOfgO24zsITh7yF+IZ27Oe25pHS6sJXp
x+FRT9npfzSDKlQZIHkKUsta0HMdeEuJ7FySlmcdx77E0vEPFoPyuPhtLHumVh8lEOAjbvW2v7rx
b29a+IBufrkJdJsUqQ1T0ZDH5dWg4Z5KtBSll/VSbqfGN/pnEjqIihCuR0UH+pn72dSY+wv9f5A/
y2KI23j8WoGNJM6nJ79/g5kySN3B27xZtEwA4Ae2IrwvsQZG4EYJzlYMDrY6HaVZF8SrmqGoxxVH
0ni0mYR+hRuol7s0ReBXRBMnFKMMEUgN75OVBvw7fDhCu/mTeIbMlWyHAk4QGkP4tFr2F2cTaK5t
kfaS9Scz7xWV8x3TSHsxjcYPf9WgIuq7DpXfGzAMldoS5d0GfNqXUZaZvAQL5vZ9NG5KHDFQbXY2
sFHShkuAV4h7l55GTUxGAMNsZoBXYZczd04E/q6Dl3Sv7T+ACFBdBV+f5XiXkrJTYg10qWpVudk8
JUNtd2S7F5DwAldk+IgNTKyp3KxAPlR42YfxY3o4zV38EJkNBjV0NW252+fewHLWvA/oGx/Ssezn
Sj+4kKpXxWPgu8+WYbo0mmSJiKHQ6+JYo7aKi2QKEu0j/P2SdKYnLi3Gu1JHQL0Fy/pVtOgJhSRs
Fdlu9FsExSoDHrXPYkFlUk5/0pqqqhLJ/pVT9SSMz5HbenJ2yths9ijTEQhMghPQQ0G1xE1OAm6p
VTazxTncko+Aoq5lnxAyIJZVrKu64wEKJsBZyPyONCB2PU+Bxn7owzmX2RKBo1DirCil2teaZZRS
EI/bOw5tDdNMXulSgExqBDln8a8FArGPwIe7FpweLufHn9wrfgyNXBYU017ByQ92qBu8yDfBiSY8
BxBPWbyYy+ETm/SuDDrzPdeHXwOahagL/LOk1/461EmjElEKe9tUlzBAALmQo+cvx9hv4JiX1oID
dpXytroA4TeCkzLJOwUOeYOcdj4jIlMUL8CVSYQ8/DeWeAOgjVeyst4wTLW+ismTEMGpe3S0DZss
moIFQe71APK5Ndungo+T8agmO2gYeUS36QUFT5mBJ8YycuLeKpMD73ThgE4F0ECrqK7zmnL91DeE
i72qeggFOQP/O8suJpprUJyGMRGKpHX0UgHA9+tN0o3JsRyysP+aYeQgnCgpVPIBALPzJ9aJOXzF
GEX57IjLxRdM90FD+7QVn9fWkTk6grXEj2qRTiQ+0SPsqBLgfMEEVZqPz86kF4HIPNcGuQC7WUYn
+GobX2zL3/dSH8BtRL6f/Tn4knqCnC+9Z2bYxP8gxGwmzTPIxpgX3TPv6VAcC0xa2h80sQ+3tsde
CZd3cRzmgjwlxipNcNXc2QYsmLp38H5QR+xeBYZC/pIsW4QhakpAkexIl2iIBPsSNao8IpEzA57H
tnEjUNYjUN+e8Mbg3rriA8X5R0BBRQjBmFnx16PFqjEhzE4frFS1KvkulUwkMjPgkw2qSJNSTmAE
VrYi3gS2z3nmUdpSzCMfmaVq99UrexS0Y/vw0MPbyCLofe4kl77fngj+6ldxkiuY6RgsJT39QSBn
KtaRpMN0c48cBp5C0imujnVwYsdkC4gII/7bgziGf2b25/owZBO1bb90gqfsSqC520jomoSGtTaC
80mCQvThV9J7rEV16gUtzb2XxpsH5JSUUN5AKC9y8JAxcu+5oo/z+Hz+a3NjmKw0m5MJxxI6Alpw
RRny+bt2jZ99I7fBPZT2JLocGaMfO1yhBCih2UzJqtvSshenDaFwcffVas9K6EUeu1wUWjzu+h90
C0p5Eiq2MIXNeMJJwPBX/LDzKEkWC+LbGuylcDm5BPMaq1p4WMPL1isBW1C+Hi197gOHQucwrFqJ
D4XGfOro27ew2vYAY/e4Gjly7DHAYFfSoDK6YNkzWFnFi5LQv6qzlX49ULmPPE09TjzEHImM8rTd
OoRu6FKj1YtFg5jle+8Rqqjb8V43MdeG7eR3/DT1y853x3Mn76+ucGMNN+5cyQpnSxKW5T73Furd
T+yDzxGqjx1ALDxyfbUvo01hflfBecWvwToHhaNR9b08WJrEDnJbVel18oUgUo1f5tw/kO+ScPTX
t2oapGCjMyiV3H2Bfi9RbE3/G6zzXahZc6ZJPIKxNsIEsZyQXWdZgDZK2LijbS1352vmy3tz/Rp7
sP7H7UO5NLBL/FveFtlRn+lYA1vQZXvAU5aGU5io7W8aMwn8sxCRUFGSyFAqE60xZwa4LN/8msKp
qcjbxqZF1KB5wu+JLuHHim+kTNL0o74jC3JiDFadc7HQXsdNRZ+OjqlFQq3nmClw8m+SLgPl/dI4
YRnSS8LNvr21Lkb4UXR4+KQXlv8K0tymazJIB65OLALL3WVVSiOezcJUvmeY0dBJWdeGHLR0aVT9
BOm2JhUrL8muDOsr+7WUS/srws4pGElF3tdvhkxnMVqitEmz6W/9vb1Iqnewdx/tf9Ci94mAirH3
F1n/tR3Mi0USwxyyJTvwNhCLg7DKDtIXg9WR+B/E4tEEJNTm2f4a0ZywqL6S7R0uO8J62kmJslne
GOBS7F93egdtlFye4ZCBq/Cqm94D/frKJfN12pb4r4wiP28xT5R9T+OtaEunDrqilmrVz3P+DoGR
z2IijUiw9PX4r2v8j84wzQ5IB0db4n0+xbdmLgIp3TntkMvnvaxB+RvABKTbDVtD1aFt6CXQMr8v
ppXAurYRJ4JOXE2oPY+WbsBQR2jv3uJBEhJKLdzHFZYWgNXUOCyJcLAumyOAIntC1XUq3ro9tpXy
DuIYjCrNeBCwDVGaSwOxLOR5oUAMh0bOawrSrIy6UpAOiI0+wSwApMjYy7lot16utje1S73rifpe
a3B0V2iz3uGTxaQDHfSqHTUG1xi0v1DC7K7lT9Qym6Nhf4HX6vDnj673U957RqsRY79jDdCcHL+P
qB8kSISSPN2BSWao/jT2xMr8emvOatZEuf0/Z2UE3JqiVfzgU4V52NXiNblQdn9BuRbaeyJosuRu
KdXXnbJsmRn8/9bhkpHC1A3daPa0EIt6kVlcKkRaHcjZeK1nkxdrjgJLFJ9p2eVz2embwj6tYqhF
ncywquwOe4gqSBzg/8852jK5kXnlJogEFg8zJa8QT+mK0loxbpupsNM1408egPmwBoLfJblVtFnt
29DjoGKphT+fNZkA/83S2O806qDbezHw5+PfKtNm1H+PWn+r6Xkhtvqv1YqhEt3vmhRSNK++VY0V
e3zZnjfkRrUHE0/NLehgeCmvsZX3EwwlOERHm0AoZyRRf19KurhZAya/ch1LGTn1QCtTWv428S30
0FUzHYjEtMiWpTQ0cYeQCH1Oqmxba0IvfRYgV/HVphpC35LkElFhpEfWlk0Ptzzl2OPutr1I9DQ+
51dIr09BMPj4zOzxZ2sYUxjdW5kK2A3xl8jc+AuJDRN2O5es0yc7uz4fYW9U1wfJ9Lnvn03gyMzh
JgEt9ePSSM2RcIBQ8/4LMemHopKhQrOdAantxKTl2nmTmu/TFJQxy3t/m7O8vOv6vMm7LYZgPK4O
DetsrSMd8WlbV7tJAIxldHN9lqCDGthzFera8/FDg4F4MdiUoMOR2DIRAuJowl8Pno/bZohqd0pZ
JV5/iayCvkneUQm+ZSs1RtiP/2/Zds3rixfbtxCUP974crOmdgZzta/tOGJfHNqQPmVY07wCtgNG
vr1UM3zZMZiESU0EdWfet/paoViAO1b74qmuOM1oTS3cwAIDU2VD7ez5IQefPZGv9fzhEIaBhQ84
fD6EpAtMH5pL9u189VDB1e9TseFOHaCzxDHOYKL1PeJZF5e0JvnPBJA38SNVwfXh3zUCHkGrOV92
wvz5P85C0CJnZ0MHaCA0m0eo0ttJLj42dXg4kMM1HupfLa2xpIdhGjLnCViGS6vXENFXw0cONwIZ
ZccRl90SzY/lmVQGIAgFWVkAAwzOegaQvoOBPnln1e8s6Skznk6BY5X7x8x0Om/GxzlQ/UkxKw0Z
8C2RzD9o5ke47ikeyE+o/J/HjtOwibb/9P52jrPlrweNni7dHEdbDqRc3JA/TLB8Rzjoc2TeN9SR
x8FnF+Wmn4ugfWhh+dwi5WxpWE7Wrgp+iP+NXQThA41dK1Prtd2JBKVUxxZ3YCBeojk3XwrB7S3p
TtK5WL4PV1HNarY2aLW0S0eXaY1ym0DsQG1b+xeWawzi1J4/0Tw2+qWDiKOsfwf7gYZaYTEde6Im
mJl+NfAMa/0VxP5Bk+ruQ31ftBcYZUwrFtmVcUq6vd4gqmVwku6SmMbiBl3UgmWpQc0zsUjtBKVN
AzgCPKDJEKL01nCJwHDdxpWy87I23gsXnxe0VAvywomHkKBSjs9P682E2XrROgL9OR/kew6QsaQ2
UpRA6UV7KOA3j3tNXWeJcbm5uc6Bn7l7N5j+hRmA9r6OnNpz7rzGa9n1gIG+fBqYfQHs4pdt5TaD
INXri8DNEOMwZ8iXcDTHVupffszMJu2RyMRtjQENcccZDgZlXcw75s77V9xgxYPUWTcDzmwgnh/U
SzJeuhUoDxlVACK/WgpNosVWyKtKHXaQBpzaShDjsk+UkyohMh90gT5th3WyUNM+dx+0X4a5xrS2
eMZFLcbGFJ7DcY/PynQ8jjGGFuyIW0d3k2SVZx3blvzkbP1LJ9SsDnW7Lub0f0nfnDJfIls1YZ4A
1GQ5D9y8l2HiTxyxZD1UsKG2A29lLeGN3AO44LV1xRp2MFo6IR8jdQwwGTJcADzHNuCUdCylGG1N
G5UbNhChSFpdGkTDahhM96qbF2kkIggsr9aMTWoiVFeHWmBlYu06ENkhzvoldFimNdX+1/J+Wa3F
/XNoqmk9F/Xc1ov+E8FIhBvh3w40xmBxZxwUcOdsOiQ5SsM+udr7V5n0hZPDxPrF4iguqbg6Ph3u
ExC5gZfaB8MtuTR76MtIHA9iEKZznWVF7IZtnKToiE7a7f5GJ0wpD1wERbVYM9daoeft5MUM7Mfn
cg5VTUIuK+XqD4yZwPp9I7x49f5m3sXUZTHb7XAudEEYpe9b2Q/6uqSP9yQWersPLmoBFNZ3GF+T
yZG66BTJqYarNd6A8dXQSkaovoJIn2zVGSpq6vrURFwvtlxiXN3Wp8PT++CncBAs+t4smpFHiy05
UicWFkLQp9UdgfiNPLFljfybwv1GCYApCw4G/+qxcQPX/tayuX6dC2mTXyb0ERmc8JUOXrRtWW9l
DE51PTGoRQ4iHz0PuDgMIhMcCplhe99lAyp+GGgLeZlWJ5gyXB+3qPNaOfTCN7ILGs9HStVNjPMI
g9MrRv5Zz/a8Kg5R+u9051a5OGY+qynxq8sw7O0RzMNxHsFP4QYH5pXO76jBhw63s6F2nh3+jwNP
K2dKb99aq/By88ZqVVVbqXFDaHQkCof5xPiPeZVK2QjLRzDrgybkP2S3jk7q+y0ALbhuuxjimv5Z
LZShZNDpbUArQ5Cg1yHsvOKycSt31oFyBWfpPA4JarqMjvh+mtt2UqWCIlq/RVxZPA5bjRoivkJH
+z0/hQeAwB0dlm5sKoV98DFYC9lp2VvNdsYx0pVn6oDEWBUgrWUvYREPiInr5faCXu56VELPkxPg
FnqybnEAzNf5e8l692yqe54umSbhHiRM0k7H59tpSouo1aL90hLnyjEzHRD/rUXmIJQcwgZX4sm9
+vmqzBKCFFXj3i2n2P5A0L11JCOB35/HnyTQ5bv+eJk0NULjGbeyolw6TfIDy3Zz8euv4/MT6Pgw
J/A7eTpdB024PCm4zzmPnHpelL3QdfyZwnA6SkrqccaBlgMYYKPb58Y5shJsIIJmwb8J18XHXKM6
gZAHHyseAz6PWijzZ24WDj9rSGegrFVdI34x6GvF5hS1THX4QGAJeiLiEoNctw1qbAauuKEhTqwn
+qWztoO5frcJEWBslRweIUAVmcrPvqlAL4qVEy9R3jtBrvyNTDVoWbh/BiD+uYLen8hEVMhGY5zL
W0QU1z+f8vG4q3vqhqwatc+dxTOvL4OOFmxZIyad7TFsQByHpOLy9iGpDklysEcp4Yeg9CSdPL4R
FVLY2vTuZ5WHleLLf0+NmNipyBpWLM0t4N3d5YPF4S8bHF7Y7N0JhC7lpM3s/nPlOS9Pax35oyvh
TnNxjxCBHwAYb4xmm0uDvgM1ESbEXmTdIePOtjAl04V0vjmK0D30kpwbWZz+lLkkPoVXsv2O6kih
cvuyNdf2YdJ7+5EDOukI3ztdwVLYjZEkhxc1boQR+79UtUtkYj4oEOjMgQUOdI3xCxZOttSIOaWP
CyIf4No6a5704NO+jIxvaaYUosx8yDW0MlKxL9zgN5X2FsP+yEi4Zp17L011ltGeiNwg1nkvGzUn
OK6KcvadpmJfW5eE6RohM0X3Eox3E75+9rPpDcUUTXQWPH7ICup+Tchl1madHDJGw+8F3qVwUyus
e1I3LGDmr3n57cG/DGOIzvLzNNl2yKyi7wLR+EBHV9hpu3iK9B5Ja/+qS6eb30/lLsc4uZnHWxaU
znXJNsem9JalnATzgAZSInGJR3qBBP9i5xw58E+yr3SXqumrUz+Bw/PJiLR571QN2eWBv47+DG68
89Vtkn0Fjfw8DiTUKsg09gNlvdztriydHQGOLSJPFLAvI6i/fa9Nhp4GPYCBfmrTa1R1ASKDd3aT
+8oCGGhImApjeHmwbRBKMoxbLIWnejbebRoPuvXBlrVTmP9lCiOgToUEffhxCaN332tqC4aW434C
OYzhrkQLdUWx3xV5GPI2BgzJJqiRrHc4o0dzPxA+VnPncSEILVom8EFykn5brCVsc5gVO6/y56p5
gpjT/b+5AlrseH9uKBGzSJFcQ+q2ZY4YcbzRdaL18kfakd2ax2XnRDsKQMahxDqDaFURZHOt7b46
xRdyCD7v7saSkZyf/9vq27VTQYpeLPAqNX2+hWkgckLkjZEyOGUrk2HD2uT2798BxcxTtg2hUN3M
LqErKHZ2+cdRGRbz+csdpvdIeFsBp+M6S8MTeLBqNukE+1US+XFANyZbKwTf6n1iiXd8iUZorsCC
6DsVljLxamph4a1HKIase6AevxhhtIHrVSoVgKFTFQ4mXUBYQNVaixrqIM8ZSZNFZtmh9hBa7RHN
//o/SMXdebkjfPG/4zkLadaFMLYYr9/GM+j09jYjogojSXZ7ZCNjIpslNMSDbhbyuEGMGaSH0xPm
pUOfBgCFSNcdKTF5cAiXfpOxPlEBz1ReyNSGsbld2yYgpSKCPl8qZa8USXfot6T7ooH+JjpJSAVb
PwuBVU4eA4Kld9+nYmZe7lSpoILyfdGcPZMLRdkxnf/lwjP1J85dANiwxy1+/jHnAQKwmR24jWm+
ss/LbSkAExkEb+PFjGLD62iy8fTOguDWyDHzF6IvTO7Mw1LnZhuX/iz8x7al4XAn3s9Qv0OqIELe
3HmviHcnXx8TZwHe8+nO6PRRiCBQ+V1AvTnnFfSeo3GXasCUGIOQ56O/Yt/qjEaCdguydEcdQmTM
x9zjacDiP3OYqYeT/zcYBoRRMfFUjGGvmIPI8FEeV8taJA7nqxrD88Xis/2sjj20GCpzQzKXDDee
5cOJKrk5MzW+NUtGmleiIOO+bofURhz2DQ4qqqbMq3e4gTN3cy90LwNbyEznZTlQLr5FIejDCNAQ
obny3bGtoXdGdYPGD/dJfjIHqZhUhzT1CXpDQH7IcBnT0T/6reawhoqpdg5vFAou+zmQSaRl/Lgh
nvQCCeyWD0x9g1MuZl1TKdNrd1NgOuiHtVJliy9MHpLCGDNTJZBZxc/mfbXKsUUtL8t7vojZQLVY
ouGKRisDyn3wn/sA0JlvgD7CVaT65qqMkJaGPgZxcXXqy+M4LglEbU880IbIo6NUqrDIpu1WqS8L
e/adFSL5k68B6lHr1wvCiLQw56FZhcJe/KxhHnA4GrmVc9Wq2r9U/w2s/sc2atbu8ty34HxOdzJn
fF09KmuZXAwI36GKDOTSFYlfmaQCYVxdHz0VJgtacHdyzn12DiYNNfDX4c4tsRPLh4wNfQSk5xrS
M1UyG1vk7HYo1mtZC7+Yu9QfUd4Y/K+MRit6lWdTDaPkAPoAkdkhJb2ublDRxbsgWE0c4ypPMuBf
X7kF2ez8yrWsjiXS3SUKebkE9o9DLuxHK8FLbLnqti0D2jjdpCQKvKgVVnjGBZJEzI6c6Wpkesmd
JxS1s5M8XG0H1/s2gda57zFw9yfS+lp+K5YEOf+sZ13DTGNKD8NUwcAPt6t2yctbTxyRDzs36r/D
SPYHmofLsiw62OqC1PHFkWajqZX6W71HHGniXafSW/BdSo46pFiwREpew8Mi2Ok7WgkPyDkDi1tx
HqzNWbW6LgLFfi3Pxj7R8letcdGDwLzaDQMtHxu7BiDvqJao6yezOBMwRGVQ05ElNMYzvrIiY0Tb
eHDCy6eEfqWYwO4R+rlD4Ao9c31xfTi3SvezgdGVUiUwFb08Ja8tN0lQNRYK5bv0H97yiGTusHLQ
eB4w15mrLXfUkJg+3Oqn5kaB9SIWgnxupBxnv3V31XLYzCmAhhHOhgpPTihcFdTCvYwTIK+nDLZb
O6Tmlsewe0iWUz7RR/qQiwCxsbTtqA4SP7YK6JZx3ZhMMGifRM5cX2IBoDPMJCzgX5yXUBhOzFNw
iZ6BOCpJIonCLxz7yip3orgXCA23hiXHuXWpGfb++LNr5SrQ5qrwniU1YFdymHUOvEuZx6XtURy0
ymKfIgl1Q6yX1PELvgWiVakVAlZCbj7vcSXgo8lv2M21lM5+M9Sj5IhqZlm4q8qpQRzSQfHuuqm3
4IogCv+UswksGXnFrnMyNunUrOmPp8EaElxGHDRTMXa6P4HjjfjBuKx3vAg79kDqhzDW3I6LHXmc
7EbnR3GhRSZKOg3YVrWTFH2GR/WLROQz0LkpvBTk+Sbt2k/aQDGSnf6PIDm5elB18jhMLZyA4/MP
cEy+OHVP54AolwhRPZrUeG6NNv1/djThLzhrP4OlSbbHoix+ukqy9vFSkZ6+75ozDY0t3/h+DgQR
UHIDuBrQOOCNcXYc80tdVFV7AOdE9++q0SWXQmQlBMQcZj1to4pvERkHxPX8xsV/nKUwjbfKMsjp
O+0OHcE4jKz2U044zR9v283+s6ffCCiGNGB/PVqoAyj0FWdOaF5xTfpeGC2xD7MtUTyicvzq7cLy
OEcJfvctDrzRrSoy19Uk/+AMJdBN/Dumlq6Wy+l5wPBSP4CbUY9Sfo1TeM6y02QroyVuFl6KvFmE
9awmEASdZp2bUXSQhRZsleRqp5XEljXLxUi/LwC0cz7KlzBXwT9VytLHiyUKriZYjh25v/QFLK4n
j9Cw+M2wa2fTBy7Ze/vvIlcbbm7Rl43EcJXtb8sqqOTZCt7Vi9caQ75eb8VjNzw0V//UsfzDRoiz
CiIFJr/o6AFZV18jdIVCPA1Oj7qFtVg3VD/LcWRcV1d9+88W9iQIdMRibW+puEYUftNIG1hbBMHo
z/6WZJXcBS7XqBzfozHiAOgA6i8SG2Uyt9QMBC1M9oREobycgNxA4YA30xXlQTT+Jc2vkFh6WEl+
HiDpeo/3SmCxIwu5AtkOlwqcNvL49Sdio/ESsW+McSP39y8z96Tj0/V+KenVWMh/yw23wUQJA4nN
PgCstERFwuYscTTVSrYYODpeDajAdmVPdQ6rHtvX8v6Cn1Rp/+0jQ8ZrRISaEm3zk0tMBZto6BdP
Rif77IuMzQyB76JFwXEuB9oZsSwKGPr+cNORhXrsTOOJYFv8hIl333zkKOJ+tx3rJdTmibd0T1PM
RverUhOUCzrtVOpFRtsvuy6j3tbdbiGTIFfEdlP+BnO/UfF+896hGoQ/UCQQ0AtPsDSO+mnUIwQs
DC3sns6KKZ1EdKaOaecGaLlnoUlPKLCaoQec0UZXDwFhZSrjA8d3zz6tBZq8YLf/Xo7GCGnDdrAE
O15nvmoiOnFpV9KkBwxBWeRaChyenVt2gVffTefHQK+vCKwMN9Ei938wvox4CnkwV9LTokcvPIhh
WkRQkZh4SGfod+DUJkZWlYKvYLa05bVunyQrWJZLMvewrfDSO/sYis6rGu/FRyDQ/NAjEEMwU1kZ
UAgKehFCcRVPq+Wj3II9GxLRCeeC+Fb42fKEAd5zJcwsp7GHgcqcKKfar9fHk3jwL7pvhJwefuoo
eEUdVwUmuMDVtoUMjb6W9f/xrZG9fR3wghJ05eOSJtGmodJOqZoO//J8xH9hkr1su3utU5MgPk9J
0lU0lrsN9+ZyLcYO11w+5hC8p2+7vS8x7fFpHgLoZNgpYBe+e+Iv/5awipGTLxbPHi1Ighmtie4l
vmMpODxJXDpQwP1o9AQMsYmXTVVMPJ31+Dw2CRg/sKcG6aJUNxuu5UAxx0L7sfl2Eo7h4Op2lbHj
YZy9NbAUgqEjg8LI5C0jevCfP103/buU18UPiunoRAl5IZ6GKx+KTuCgJZgLtK0lli84yo7kp7Jq
zHGzOVGj38piiGsF8UpGhvkEoQ+Fx8SWBAvHKDmvQVwNVK7PrYYS+ZNkvrnLapnBrDdqQxelg32H
yPJ3ZpOnft51PK47LEw/THkaeL3+7onmiOgMvWbCX67nRvSzrXg/4/P2VrLMos+kLHKz+Vph52F8
4SMtpQIt1fvVFEgWzP10MdCwruFCjarHNgTemyCHE70Bz4PcfCTY7vSt4Xr8lN9955wVHRuBb++O
cAJtj12mfUh53a46/rLoG5xNyA5w7e5WZNZ+I5Zh3IXmh1B+gykvevgDv2kv1lvrYxndF54OIGLV
Fu1eTzujMDaIBIWkY9g8YZF2qT1vqV757Gpqm7QxBwI0+6/NbLqF2XeaceE5rHPLMGF4NwRTPqRj
D88CsCjMzDYA+36gRywBUfG/q2DmWJgCSAyQn3M6zNH3p081HINsZ+8gByy3pDY3Jyb92XNlFn6s
LXGzy/U3yHM8q6w4c4sw/9CYql/HeW8VFcfKzWn1b2WdDe00Ahhq0HFe/s+hqmWAJOABN/aFT1bR
JXBgDv28C2TQWe778W2gXZNVhBstG5JvgsyFXQake2+/Z5PTaTpSZqRG9aGJmb+9V4N2DRu/dmb/
/ZqAq1EdW9EsQwOQ0xnXN6J/4F42WJMPm9783OaiTK/pVAFMR5O/BqQVI9lvsGLzF1MMUZLnSBTY
oUpEwYzmPPmAe+VucEs0mzvpgX4WU23zH8nzTQNtu+QeM9AjprmsRrrdhstOZomjaCDT88J8MRks
f3aWZjaMaQWNaRq3QKb48x2GLYLjVf9QvigDaOjODzGmc5p5MWZSP8zwdG3WsPwb5n9IENI3QEDv
MG0YXqmBTl788modBtxfz+QO/ubN63jEn8eOaeOlCH8TUGg9LazdbckH7nw8ElIT5gttQvKjcsfj
Rz+GgCuCL+unPTO3lc+zDXEH8nKdwjcAX/q/A5GMu9Od+HC/B2kwO/Wg7k4DzYmNoAXLGpSFvgrP
KL+WAkyDmF8A/CxXxU6BQBG2LC+cFgHahx2nrl2s3lXY8LwEIU4XZbyEv/qFQTKNQ7wL5sRcvI1n
h5ZvHggTv96xS5+iQtJLaTSCKrqcvMbxS9q9i8DO0oNR1ovGwm0l0bvwqiq+lEDMo9SnpthsTVg7
dAdseE5eQnz4fkydCz3s9oopx+VyIr1rG6KQVwWCmwGu1kuPitVaUYzBzaJoHRgFs2PZTMGVrfo+
w/Z523YzkHayyzdksjwto5yHglY2U4eia5WiSgq1cpWL6gsNOl7S7zC8/RFTsWcy2Tc8lXdJqN1t
[base64-encoded attachment payload omitted]
BeuHTjVoZcOr/a0u2PYV7ASsVuWX5WkICPiZd2fkPXcnXlDZXWEbc/bTwldZwUlJkhPyLur/k8d4
Cb68rMKCNG7K1XeG8UVxTShiqE8YrLd5XC2VehxYpR36GvPsf/jeqZ2m/Q4lE0eZWErCuzHRM1cq
QF9/K6z+MsBd2seZqtX90nue7lcxJ0o0YkxU1iJXEun+7B616eeRTETPvZbF0hYbgQkjRjuQlsgB
7nbjcHX3eLaPgfVEpfdAgWxPWZOv1tOcDzyuhx3tQlqmX+dglR/Q4L/1Nvo4+IJNANgJvQwSh5KL
bNxHeaaM0jhNpjwVupyMZU2i4gwZ66yO1BBOVshp6KSv66XDqoELqaJ/T6K/bG1eB41ntv1WsOxg
7lQ1xktzPypz6Hq6Af9Z45T9V2NO5Y9bCUi91n2oDmH3y+WJ4t73HdVW8Zl47sx6IXDo6Bx4NBSw
T4Cy/HzzysKWpdaGBPAi2Kc42IIHdZqtCfOX05FWJXKIRQMOLIQ2C7KgsCgPCzqE5gK1uCaIp4Ig
48FdDwq4yNz6AvUG8AbCjk8RWnS/7ktZjwZNfqF8ec9MvgLb3F0qXQYaH2zC9so/CqsUQPSkWu0L
Ylov9vTilsjEfAJNsaewq/zD7TZpfiot+S0QDMxyhiYr+XU+hvqc6krUgw2tjIRWb/gnAG8YPiF3
tcXgzGJNIpRv0cuNPmyssTPHY61/J3ngEXgsotQflMN1JN2NoUS45u6JX7nzPa1ky7a3to5vV71y
b+k+J84FymmwYTRP9bQyexARVZ4DsXgpp48IJxjYCfb9dfeuylH5yoE+3UBWMGLHxKGar/0Q+Z+E
BL6v29DV3l7LwTU8cvtH/CchLXPpJXnICuH8baATFvtRI0IWk7W7NxMc3Cq4sErHOYZViGzipOxF
Ix/zGaTG0STy0GfR8/HJNn+Q5GPi46UEH8AUu3HqJ3Fao6KLuYgo7lQIcuRro2x7CnN2FZ55K/UJ
QOoPbQiDPG4aysckU1uF+iGEO/51CiIz3JOljd/JJuQV5fwcFf0n94oqHzD25WXPMQ17si/Hz1TB
3h0uMNeSCBWjx/AYhgOS+IssmPVhkZX6DEM8ffwWlS15sgX8bn/y+Q3ta0dCa4lWlQ9JkA22OGTx
MCgBcoHtA2f/Xr+UB4AQTnS4exlF/28WjQkpRutpbAXHWSwovH6AfkLXBaiwLciRzaLysbikkZ+E
LO9DyfjNIhgsiyeoT5c3ydHDbLz98y2XqLNuc9I3bDCHQkn93VIBhN3/zpId5yNSUGoD816CQU9u
Sy4Hv6LN7CIAeXQR3WetH7Y1PrrW7meOTcKWeX/gefMcFAbYfSDx+VBCHu/uTSpli2ScmUgUhyUc
kqbjA7ImWhZv6vpsGljyQavhP/nk2+WEfji/zB70eCc9xuiGytIdB0wwnumD7WRGXEzAuDQdFboc
3Bk/x7/JwruQZbNtsC/jfoGPDLUMT/SQJn+++bfuHc6vpRXsa4nRfXSJ5GGtNe3Of7pWnrf0k6le
XV25SmvvTncRXvhinm6foY0UlLKa80Zyj8STsJx50bF4PtJuOtj0uvBt9fCWEsphwv5x8qL1UVVm
Uu+Y9mQmC+d7+HKf7MdwmlTutZjm6buo8V8KAwp+si8WZkJAB4TT/5PptT3W5juZuMqyMNSJROro
ukvxeuS4Adztz9b2Zs/TvYTmzEgFt7nkUxbpw9eoRUovvcl+KYj1lMqFTvalK/rc7oGFhvSqJdAt
GdxRUr7NdEya5oeKPlP4BepApuBRa7TFIizXqQFIX4lt9a1EnsznmLPWvQGkJaI5Ozk6+12z2IK+
u9Al2LDMdRF3KJIEcfmDDdA7byubOj9oAseCTP1GVYX6QROfxwse37LBOroLcVLQG4a+NfaJhW6c
/BmPm58I0oezluyZ7c0Q52+c8HlQjdPAsb43Vmwm54IIcmbUYoorZztoMgOA6wr0hOWR+uqQvLwn
i++pmXLoYh3QjxRsKB8jS/Vp7dc49A52sbj+eJCMTEGcsQyzacMDXcgRNIlrleMEcWe8+dly+nwF
RBIxuaNNBr9G7D9jqllPJBuKOAstnAndhIGessBvRZ3bQHKsuuqw8NB5Z8U10oBPbDERZdtSk3SR
OyocqFmMQQDaOzrnFs4sc84DpG1bWTZKX5nAU3DdmYq5wIzm55e+MVzxNcCYGJ0s2g0XFm3+64od
tHg7vi9PkPdCkW8tB1zdXsVCzJzgdxrfKjgbmzmgvuTtv7fxMkj9RyzHI/zrpyJmN3LGa88bU5g8
1UawaYpXRpm4ZbaaLPbJKUY93U9s+J7P5aESwRVGgER3shIYCE6GbabFFiB/fbxzehZc/Rccv+nx
SnJUMUACt97nVl9jrg89hTvh/sRm90j928G8a7cYUH5KPCl8W6SR8y2QOg2mrL7q8lrkl4s2EXvV
fs2oG7FXiEgZx4mVcVIQ9ZhP0qXx6EfvjFbix5plhqvOo3zp1vhM2tzY2OyfcrsMRd47kleqRzHc
z9fsjRNJKCsjmrymFWntN9yHpWsCrMvKgVMtMnbqlOqEE8DE2zdI8tFYP8imOKvFiw9rfCvTT8Wx
FlE77hXB2FIvGFYZg4vKz7xeFgD498XgWv6a7CRFJ9ln8k9qrPEHkhKgC6BxOaqO7uZvVCsFC1GU
/rWhC2CZwA0RFWlzYDHGiyLF7fdk0OVlihx+q0fxaPtsreecc0uW28xm/y8gDvn4AC15/pArKgKQ
ZbHAwiSejmT5pcLpUxEikS2OimO+nJpiR+itTZQVUqi67UCaRfhBmSB8sDrn0UfqYEJcUPRQZz2F
WZG7iBXe+renmZUbOCuMti8cSTa/an0Qh0Mq9Iax9q8FPlW9d9e8jdsLZE9gHrlJAf/lnTS3ZODN
G5liA6PUc/3NrWhX9HrvnUtDcaYJU6gBaRcM9tA31IdxumQqU8eC5/wkNuG6uP5//4aXGZXP3+XX
vokqNF6x0tR+RsvFjxiuK7v1WpSLM1IivYWaCZHBLaXXqaEVDrmdtI/BFfLV7SayX7/KsCmRvjiL
bn7yPmE1q8W+dcfHAzpxygUCmytHstCPnRdLMfYcnELk5/T9cSUWwA/yZTMXfYVMQYERh+T6lK2i
h9CkeH7xhr8cw1Y+1kQEPDMGxxmdLu3H+7urhnFROeNotw+Ga9l3sVR5HYYysnCIVe8M1cMNiSbh
nGlOC71ZGEibn1szDcpImpT9UqLJt3LtfUM7WjPHmOF0uF76Gsbz4ZPV7i8p3h4Qzq5VwlyHCqSh
YxWMh1w20RqdnECSOd94HivZkFjlDqI3RFFzVbSXAR37FPOaoi8M/zWpYab23EqPvLtI9VjsZ/r9
uHOI34zHlMGPULlCV3cfPWzw02Y2WwXv9PwYjkhIi404nXzsdyfaMxl2BdSdQiDIgIB2jOM8VVwN
yYV3LLWm9hBYW0bSh3noE5Fgzjlena+1B10cojd8iZvK27DIl5Pz7V3qR2zMjyJNlZWjdCvP332k
Tr3Z8QgFQOKY/HX0wPKjfpaVpT2XS2n/65fRzFeuXWeyX6W7WOT/P2jKdt4Tkn1bdU351krxQZfC
nV9WMD2LDI5MCJp1lL0tJ9wqiFfn8zAH/zKsT64wSDiQ6DFOJJ1fv6IyyObZxR3STlpCh9mEjSnQ
jrOuN8VPTxO8JAMeAps4lfHSe7XzW6cIb7SUMFjRISGSRwRwiGrs/t+HOpJBDzcu63G7I9JKEhq9
8FsLUvABoXvjQgKdbWb8CXpBd0oUR2gfsFVVHX/6I9uFjbxJhTbUy+XwpTAszQB/4NVwNKBQeFFV
iSb991uwqnvO57QeuPNQBteOAbAhTikLzgRrgoR+sX3ee2ZHIL4inFbfoUe8p7L7dKYcbQIsAyU4
QNCniE7lc2U83OD5dDUUxlH/2yAHpysstKmf1PP1hSIChYL9XNE3L55blbiA431Hg+0IUC8rM8CI
igwW5EE4dWyhpC0ecOKuLIyGBpZq+NRFsvvWAspzFXwWvtv3naXpqmZ+QvnZ9kEfqUIsN+MK0Xme
7nqFrbh+6h5h3kBYOqNW44BW8i4Visi5EvlZFMis9kxdzEiE+793PPZ5BFCohOahuHceKgN3hmfw
yzYOSQiCqW5niyUdgKYjGBYtEdp5pRH+SOCpS4POc2sWqhXe/XmMTTJ1NZNr0Hipx2nIG976I4Py
/pTNuwxQNAB8lTqzoQvWn8jok7FV0XDvt1zo6kyFvl1/Y/ubugkg3eOG943/f7x4ITG7w8gbfhmw
CUoG2HzgGXhH+p+iH2RN3EI0FSeyk1JK/o2kKiptxDiHHMr4wh9hoEi/k7xHOcbV6u/qMqbPbErU
+2608tA8Dn4hjfwr+0xnCSi6hme1PE76/gcfBIMV/FlKzES3nYcg5fN8Bhb6TaTjPDYKJu3/3jmF
PfprJPpu/EeTyZUSoMBJ0rJ1Wm1EL0uvpEFMGoY1G8PsAURW+7Gu9KczMHO3bf3t9/VZNXAISjsK
TwbjYDvdFgE6LgYHTuUpoGijEcCunorgd5gh5WpWxhKwO+YskV1X0A1iGm8hOS5qtUB2w+rk+hhc
G25bHOXoII4N9ngNuiM7eLQu6gf7c49QmgKvUdcsEcvy7pAptNXnJ0ieQzZTW0pRw6P1SnALtulm
W9KwKIJhDnu7gDbUz+hEkvb+aUcjgb+NIiC2YpFK+URsHX9N81kx/MMddQHgtC/nBYN0EUJzWJh/
IkADxXk+oLp+l2AJyvv/hazvixGPWhrqS3Cj0CkJwxRg3RmdS0XcnCMAHN9c9i+4p3hWTOTNL4W6
F7V5DifZm8m2qOOgswXjFjjA91b28xNBM6PKaHUcK4IU4MBC9bSPHmKPfo3uAJZzQhaKB6mtkLYi
Br8goBvPgLpBX4qowzopOzGb8PZPi+CvszLuUxb/JMw+4MipL2Tgzj9RDYUH930sxx++UjlDeGsa
gAsyZqSy7bWzJRXUasasoS5yvG018jKPPM3IR5ucIaKC1zMPpqYdPNSSkq3H75U+HYz5ASgZcCSP
WTMSVMx66oxrhV2T0Tm/+SfgV37dGeZSyvmy5rDn36K0KdT2zwf/qS560XQG270sG5d2CZMX0pyB
HJAuvK2N+NK8JwzgNoZwotRjStqeoC9GIBpCoIJXMm5BriivF2mEE/WxgzB4OfnzxE3UHBF1eE6O
Xa6ORc3iV+qv44GxhXygq/2cv9qvRlyKLhuh9VfixjiUu7QKa0lUrXKLPVUhz5E9Z1QZqu+FvHd8
of7Kx8+qXfwVAuWBwwlrd305a6lYiYCSeGK3mk2pskETX76rr7+AIC0ZYsSUhMPKv2OQEkxRMzHu
uqTBa8Z5nriT9P+DcnudyRdr8m8FQDfuVPpR2PA101ZffV7CaIrb9kiM+d5r1Zw9Hw80i8uMtIQ3
0pLPsfBZIUy1tm8n65nOFSsKfdHlAOzs3w3CR5d4M3bC58gpYQxIlElM1K8BdYp0DRdU1kQ8C7Y2
6XxacZG1rcp3BITp8lUrGF/IRBV/SnZez7pApg1K5hE3CVZeEeMpdkWIaxMrXUVvFZmm0GmcY0jW
aPnC886asYP+kRdYG/ZhcU4Lzo2j2EyKuT+hOSzQIDFxMDA8ZkBGmLNfG6/dIo3kvs3AYff7rId0
gPw29r9C252nTOQs3liudFEkeilxPfGHZ87jc6PPBgRZ5jbLiOViWqsW0JQJ4zofo89CVOT0yaTH
c7ALIyuNJDOdZVfHVY11gzKoLljWFto+mP87pFrlnbuioBp7pHtwN6+Marm76MoAgVsty1iGI+Qq
5ewX/vsPaAFJjVFS/vX7MtDc09hRnbf/O+PRJr1XJMHEmP7PNK371ksNFo5JvklzSl+nJ5Lk1k7R
IGkujTRJLgsmCzr65BKTHfx1dQ7UtZmD6BMhNRXulfBsG8QISmw9Tmk2wuJzM18VLW4/2hK8Sg0/
Wf9wsuUGj+a3D4Vx6PylUTRF2U7heds7q23l+QrrFEqzTX9to3LnPgvdgmyaocA1LujfIJ1+B7Kr
3y2PpMccWSCUmGVobUrPmHaGVwCkJWU8LTnQWKN+KxlIlTZzd13rVcHe8TDy7Hw5ZBmtD/R4u43e
iT1eTQjZxwhhPQeDPreJ4PwUEv1cscQa2x56jHrwSHJGAFi77EqeSPBVgiYYABz5cWjkWMVqZfU4
3OUoTAZ3y3EKyDfnybEgD4qP8dnbMt+sA03EpL40oMYo2pRmlIamxncRAFMHaPLuAmP5G6Y8A+1a
AL6Z/8NasJLBRG/vJ8Wyexb/0FfFw8WTh2fkpS0xJ+GE3PgUEAvs90ajyxVXK+zIeRrZJeWJ/aW8
HmEqRWMqnwH13li3tv2n0e7JjLUJYDEl6pqjti25QL0vFnQR7Wb9/qP7pkJQegqGQiXwuiviChRf
RN73lPs8ZF+NbKlSWyY/5CtJH3cGi3/W4lx6m1cWad23ocZrLHw3xWUlC3pgd2S1r3dsY6X5uUW1
s8i2PCje4J4FgNqfupOD+C8RdxD/JPaugfxHAILQXd/a0+vZQWiQzM3x+PvQyuQmRR+2b9ommpU4
7T100FEjeYPRFFrBqclR4/TnLP+u6lMPuzJqi6hi8egf9WMv0l12sW2pFWAuyKgddm/98nKU22Jn
KELnK72/UkvXmMZA1Dhcv4lphNI521+zEHWuGXPSQ02c+Qmh7yz7+JuUZsftTn9WHcyWrWFZ9FQR
sjjctfi8UbmqBRTieOh2QPAmWVbXl62rtdaA7R2wm+AikvbNiEjsl5zSVo+YmRlsGnsRwMY4Nouj
oYNVG9FfibGWERRKMZCWvWvT1O9yYB15erxmdNWDsYU71xUpVkyC6oLKi2tUKnuCdiIb4Tm8ioyA
lfOMN2+SXUdZkMTHxJnYVQ8TF7/JwsWM84YMXov+mZU1b3icgT2Y/JXL2qVd3BwfPU7064uJb9YK
fPbcObIUIU16oNhGZIHRuytimq2MYJAqx8LpuI3AJSVieJyTSfS6SIkwivsdSsKapNC1gt6UpepX
po0fSmVtNN9aO5Riq9S1gXIqa/C4jkGbHBEUo5u9G5nqm/dhqQPwz9UmMFyBQileo+6O9EZmiuay
gQVp0oMpVHzURewj+BFRmfEPmlh74LKMTY2G5VyKvZe7bULw5pv6SGGrhoINdYVxUI3C2KUrAjN0
Vbg9m6iOWTaqOlRNa0I+/xLbMsOTbh2EQzVkx8e4mu523kHRuh42lA2S9kOqOH21cBlNDd9HCJcT
daNjMUfY2shZHSId3Buan5kszSXN8k1um5sio32LgRxUbq+e9G5EIYtNj/pz0Dv+zS5f12T9J1b0
mMhZM/WtJrxBKvjL9Pfh8Vsus6X3SSW2rFA0ESqm9F0+CgR4vLEFiuoVaYRkSAIuupha9qXKxbDH
Urzz1QXg5ArFTPik4ydc+BlZDHaNbypzTOqQEDA2Sio56TwaVaB8fqBdieWsxMl830cXCxLKm8eA
H+6fWMY7B9anxq6YQV8C9RTVZJ9YyLA5uKnobMavAViW4ZV2ywwCz4OIVKzbuph5W7wcTE+RxwrT
ciOK1lhKgqa5sbzFtK9aQ5cNiQyQ9J5LNEYdt2/zAAIas/GLt/vKgE/jtvxJWS6m0VBnKKQmI8BP
O0wRfBLkX0d/cCaUpDnWZPXZLblwo6fUvbaLbqdzerkjwrZlLN6sLWbe57M5fBajgIFRw25VNQyu
khSo9Bn56SgW/Mi66rrRF2z4VC5ylg1TxZzLrDqtBe8DzUVOTgJ2ox13/sPZB9I+iZwIkZSYAviJ
65u/MVLaa4ii+Fvs2cEymsuj2kECnay9fCMpc81qvsRb7bqt3bgm+Dc1Vr/ga1AaVVP0LTjsgzTm
GdeDHBzCB8PLpnyREFanvDUHZIPhw2REyP/QEU7XDmTuTed9n5vKxJAuaxyMHIJhOeY8jG8mzwP1
5QUhCtylxmIKln8o2Yvz7U6VRS+d2kWDUqaVBRxJDqZ9g+rn0uqJ0EGgMq+9ZmIP045qbaRu86N5
IyAmoLrP/vlnckJNMLjlRf6mEOVZJ/9eVcD92mVMz0U8JjdqZIFVzc3oWxWKFr4AzS9v3c3plpeJ
vFykk0Ipo1gWqq90O4aErWb0yVrZb/CI/31ZEwwBTM756VLY/nRmOgrn5kw5RLfG5smScl8UE62R
oLDhV6jICAKw9lDk5ueGX4e/1cGUrfO+71P0bmzSbrxsJQ9p+k2cgylao30LkzuXj2SaVjVbIK2I
kokNJ/Zs2VAWzDmCJ8FJ0RJhhVM4j/TmMbii3GeidoYyXyD6rk0/HbYDwjb057r5Wdht08vfsu8R
DYDd8pE7vmn3GxA6HtSZ4EdJ02KVy2epzcxnLCd2Lc5MQ43pXAvkOzxyCOVvD3PDo7e80NCyjrxI
9PWNRbqo6gs+NVZDE++CmCS7KqeiG3YvjcgZXQwvBfsFdaF50YU719WhuNtDXwzsksKggAEKiN0V
CTYhTF+bsuPHCYZej1qorQY2MQAnLpa+C1eDT2QQ3pGuVpMw6IEy52b8ewKpybslyFf6koOkc74/
XVhlxdO9obTiDjUg8WB01up6I+pByADZmckTdk8u+W5lfJVBp5NuyZ0UY2pXX5YSFB7JibQmGbKg
KBfO/BAfcRD0wLqgB24Mft7j0y9UnQKbphbrU7Pz6h9+x2tDIJ/xEjV6Z6XXupP6ADnbmLYdd4X+
RvNx9qQS+u07xDpD5p+l0sYgaiq3q76NNQrOWfQ83q3V7MMdUgghCNyAfZTlm1YLgyYasXCdZYph
JzVdts7GtUh1IZx8P8Rywh0moJy36dOsZ71ueC+/uf/AOeXErVkBa5l5SGE9oqp5eWFp+KghsVyU
Q0t6By4CJ08n7tBg1GAicGwe0BUVSftVxms+E4BvquSEIenPcV4IJ3FgreyrEEYXL6PfmQfAK2hL
OOxX9QK7NjTa/Vc7taHgzyujKFRHA+I11Ch6dGYb/mOxVRfYGb4+hTseN9jsodFigtvtGAZi+x75
aP0PSPTUkkaGfohmX+lrA+eABDqg+LBmiwbPBLuNXlVC72lXmUThWYrSPDptYmL6GA2etIWesa90
7NR3ox2J0vQlHgTujOBDPKmOF21wzMzNdGe9h0ScakozxV57k/1v9o2XbsgPoRCUoCxTfqSPIDhX
ONs8EeXL0jLTb6EPmDeSj5N0suuwrw/rB7S0ooEULypfwisN5kpkkiPjmusMtLfFS6GXgZSFZl9O
+AJpFARMpkNaxSsIADej8tPbHzHhzwERtQGdpxgfj1tktNoIBFqcbTsbkyVvOEZ8uJ+L0FyGBTXD
bsU8aE0N50j0U6diLwKA+HCu6EDEYJrKkd2E4XaglO0IAKSBpzeMNU/bAIrFDJnRuqJSuuUdJTYN
ib/eSFUUAhcLt/C4K5YvptWf2gcCtH/wWCT8WTM68XdGhLXcvB6y5ABNcFOlD52IVqkgRnxJr9gb
9aGVOb26OxLWEIpXmjvDSCWqcgfcvcJSFHx4soW75nkbd4h43EWMZoCVY09oK3pV7/DdrBv1fHI6
Bj7opXEQYBr9/ItuGHMQUWMIyMMTtvPt6c1HviXl40tfO0O2D+/9Vl0sGoZOf9diCz2uHShhAW+L
XoNZsRDhZ4439d1L+yWDt/DpTNH0Pq9niT6oLAmGPiTidIrGr7c4cM7XpBSif42jdtVqlsgXmJjx
LvABm+bOh4i0vHxO76gMCcSsiDAGf/4tyMv64KUgM2VRogfrC75KPQP3eyqrYcgu64At4SQaPaBz
9PEGOsxlNUeI65zHLGur4LDYBL8SmumyXV4DmMu4sbnYZSx+LFCQK0xEOcQOtgUC8MW79HyVMd0c
PRfVXJhBX8LRPXGCYgfxDKnsOrFdy7oh7rSnzPLaKRSjStnl9JBUJIjN+A/sOE+72PX7MF+ILKyh
xZ9tSrpAArfzA1Lq0sSHf7caTkcWd7lnCKVwNQ7EHNdC4FapldDxeVmnITCCfmV7Z2oBMK93QC+M
I/KAFPRzhGJ8Ti0Gsd3Dfy9uF+11k3eK2xp18Pc0VO98iou71m592+70v9U6Wf4QQx8CokVNvS9q
ZbuF46KB1qHjhBAON9a/CK6jLkOpN+qs4SWYWQXOoGrBgoKT6MUNjdQIdNTzGZRy43P35cQVvM8b
EUTFNz4ZSrcB0m6w01T90LGOSSTw5asrgdUOhBigELMy1GHcXH7JMI0T0zd75WpRHblF+DCUHnUG
p64AsG+X3wDV9Cu25meBtqn4IdFKWDwA7QT67SpsWmQkpcb7bxc8hkTT6K3kzEb0XZXXOIt4YbEx
+GPxTiXhO5mF+PTCfx+GIp9BUwP76R8BWlm/hmy94A6iffYU6rl5b1VhDLZKnDGDIHxL8ZppKgGM
HY2iBYrCi59NhKhJkDGu/cmK51ji7DHjxKNo5t/79RvGSss5t5VhGN6xtWJ2avyIKQ1oJjXrpWwq
+GnaA96P+1fGi/ccQUirCFpZ8pK/XzWFzwP/z4xXHzEJbfxAsZjdQmSSFvlksfpQOWlM9nfdGJRz
+9SNtvqU5GEhDsmUMH4aXCfbqAAAsuvl0XqVfqpXjYUomjxg+u3bxHVwHVSe0Wv4A7js7nGBZ8qw
Gy1j+jRT3v0GIXFEAWntAwvxEUP8NoK3kUKVHfqpGyOfBLT74y5XqWjg2XlNkcRJ/BIEgieEznBc
RRHJQVaC49cW/aNYk0Wjb8c14DaXVS4dN3eW0TlRdLFGdh1UqspaXl4s8xWQMKe4c7N9oTSs9zKG
YIwCUmw2PSYKeBNRWs+7Rvykb+hdJIlctU5twN+VfSRLN0hmY0xsBYTJyYx5VRAKqmhiwxSZPz+E
nJiLzmI8+bk/rCmq+QLzTvhKfUBz1/6QJ7lBsZ2Pjo6zqoDxJyBE4fQgmEDYjGIT5J5kqiL5t2KW
UP5dwNRkzPeT0NZ2U83b5upbAGqUBQyekq2QPfszpe3q8mP50iTK+jO8R7C+GKWx1pM+hulg5fgS
BNAZhP+A87S5YZlkJp8LbBZgpUr1X/LmRoRgf6t9MGknBRAlu5pHfGtqhh8FVbCG3Fpx48JpO9zQ
9anrKdwMK9WWW5WM8j8sEsHIL23lV4BhdoSep9duSQKfcihEZmeNhnO+D7tUr4i6+Dj30YrYkiLw
+Ww3L+mTYrf3GvW4HWyvowWdRE4QvABMjkywHuBStZxVHVcF0B2dL4MiKa3GbYlsCOvHb9Dhvp5A
a0Txx2xXLA4/4KJlSmqykEfeMlWmxcIDh7svRSE+v4TiTkriW3g1T2HbV3kyNmF3iDRKjqWjwpc6
0LNCMytGw025oxsgn0p7T+6X42qtpnGWbAKe1Ph8/MnohS5yxDLaEnH4dzr70R5ZdRYCMBWCRH2U
41t6aqsgh5n+YkmD+Fv7YfpMG96RR4oipEb7E6DMTZ0i6uG7c+Jdcy0VJwIsx7eA9D1B8oCk0OoA
Z+PD1LoQRLOh4nufsmMovJJYFklToz6NVxHRpdN8NirQa5RyN047vDqx1ZIhEBtvc4SdRif+j/lg
ybFDSerXcDukCq/+az5KeutJ1ivT6JsTDqaMU2HbteR22NOO8Rc3gCRsCESyHAisJJfj3vH3TPOk
8nleWb40ODdwkdOR0LLCF8CyYNiE8dxjq/jGGcV3u+StxBj40s9LTRrAdWV0EtJ3XbwXyGfI2RuM
1IF/5hRNEdhm4EMUx3HHypayizTYE34Xx1UtbYxRCtnebg8Imsjsx4ztJ38GQA8pBufvtZ+L61Xi
W6ZfxiHlp8vgrKBoDmfPtVFYDQxINx3euXBPfU/P0Qne6qJBdDb/PhSCBGEQKHyX1hvaP5sp4OUr
+VJ6GHmTw3TEHCliTvdAv0AgLfx3AGOZqZCjNNRR5BwNnX5iR7dtYckrLYWbPyJf95+3xnmKSfxB
6qOHoCJIdjUSGBW5vdNZLoJXwdQ5gQS6W4KhLY91RrYgsFFkUHkDY/I7lnIccUkNO2LOjOMI8nxl
R6++pH7cA+HRTreygkODx8aobkqmh3akNkTKHJVh2LKyDymqTko7bbHy4lsCD18v8bM6a/V01BXu
1TYyPAvwnT3aGK1s+Pf/Fjuoo9kIGsy/dhwJZJpaW8wqXA4oCorJ5C3HBUrXAN99RNdnoL8imOfT
XGbjwLMsLfMf6JtZPVVFs+0nOx3tsT4+zzwpXxsGBHTPXs+3aNP+8i9PthLnobgUWC40FmnVt22I
mxKrSb5OIqPmvba/rqEpaXAT0oLimrEU79yfinNVZw9MSIu18VYSZ0HV/fB5rMNkm2ylDgGVOp9+
usFGA7sJj5NcT16ECyb/Ky2jOiyWvSgZxGG2S+HXv3juXpTaqKIe84Jp6Q5O3/2p8Vpgro8qvE1h
LCOrGS2lyh8f5KSsLgHwaofLM7UWnZ3D8Q2KBHHcSfb+XoDI4yIEXXxbWEYi2WNVdCmI3pnVuvb6
yJPuar6pnaRB+gTylCqpvAhiThBH/9WxRnbLrIj4eKTmTBl+wnIi+RcjHaq8mTAZz5MuFBe5Y8QW
g/O4paptcZvHlooAsGqHFZSpz35oWbEXbIANgT3cfUZmuK7+DhmVfsdOYpMsgTvQOhdYF4SxDASB
DFx0De8n5me5ixjIgCvSNTTbr/mYRNqr2MwudbYb25MaQRR3N9P7yLIyYYPywRlpT9XDFy0wshjl
LcmgJIt12/GnRYH9e8/0mTKC+VH6FZF6IpSlHQ5uRbHbsCA9TYEe2bcFDVIW4ryfiOkpJYnw4A7H
xBWVt9gdCsBSHDKe6BPRaxeAhJ0UbsF1GTwqhwut0/rxFHFxKBSCZ2ZNhScOOu2rZ0t3DSKbvqht
H+2XVhZX1JNKnoBOYlKRjyJoWm/bxEztkrDhsx/o+Dl9yC9E4K2GVuFZCDaMrUSzQZPMG44gjS9G
h3BIU9lKbRe0sdf6LYkSKqRDvI8MRtk+t5EnegVtMuc1ACe432UM4uArLMIw3L7LjLYqg0ozG985
35cpfy1d/92SGLvLMfbawY8xo1Z7u4Cnu3kBFilVh5F5Xu9oLeofjAKwxKWE01nVmgFrwfgJ+/Lf
qUQGRudpiJ0YJnlqHyfBdUvIfbEk5qkY2Hxq0HlZPu3jkA+b5NgVqx6b8DPC74JCYv8cqk5NNJgR
QazSD4QgW9U8ii81zRrD6rEVy1WMWdaI/yF9B+tT3M4rK/X0uLCSXf9kJCruxj++qH/VTtNf/PNM
DuutE+zsQXID3f+qbvgYqqLdCwe/pdzlJi4WMBU9+pUUWZnJEW3SeFMJQ8HV3eOhtKrbCFZBhG6L
Pg977rw7+kVh/tHI/CpCAAjheo+8oW0dvFJ26lSvvHqi0zQQI4D4FZC2NFxu/MdR2pID24q6AhwG
QzI+rgEeWHyJqTQs7Q3x8+7lAJQTMREgCAOpa1kWbtXUy465mfl29RF7g5mpvG9TuRxzZQ2Oz9/B
wCVJM9BNUMrqlcPZfPeLO/fll3lLHhPSOM86Ps7JQBq4n6TndchIWI2wBbsYBOrDdQCY0a10V0El
yg2FOxI6hZtM8vavb3iuYlEhriCOhOMUV93Af8xideRQE7Ru8xsy0ZakBbvUDdq7iN/TOlkQrVWZ
09q4Xds0Xncjl9PXTTzlOVzVHzmfnE81neRxhIxf3c/wGbj36N1AWk5M5LMXeWX2Txhnv/3MHEt/
JzUmBI6u4S1kYcBhVd3PJk6/SGhFsV/Sr1uNocNgRAGinLA0jXzLYTh2Zay7xqW9A7eoD2EsxsEX
w6aKj/47lCNggBLzMq/Dh4J62OMCJLSOArbdWEm1VvZTdCSUUfQ0JVWoy0YMfPcc4U0us7T5TiwN
V75UM9PebJBd8BWDEaKc2GqstbpGHVCa8aTPiLEHFJLL4GxQl1HuJ+Lar5Tjcj+6T7wCnKBRBer+
vGlTrC0LL6v/Q9RY8nYYNDLsrJ/+usPPVO/M90I1xGC5/6ANNbvGtqrvIvIqicoFqU9jAgDdDh0Y
f8SXhd+CkIi4CTWnoOiRD8rE4ZF2/tCzRWpKRGvgPXUeVdQNVEfQm72Hfc+fd9ihjSH+tKMMpVl8
E8jFhcyP1UZdu4gc8qUQMTuL7MWhgnBCLV7dPy0mJh0nM8jPyfY34YWHgJ7vSie7mi8nPDYfCCQd
f+95inxaNlGKLWIUzGlLszTUt4YDfXOEd4Xf4Nlhf/Jznda7duEWEtLxNi+R75MXuceI8d+XoWZF
uL6p+lApnpwuSJRwnslDL7gRyWW5RgUZdG5YXK8zvfpv3gdjWQXQ3sl4Eq+uPlYOqdf2SaLOv8Vm
nLrx23CFcS2ARpMAqCcduKrxDcDfYmyhxJ5j+XuDwRTeMPUdxEx7VDUqNe3490q0Tot5i/3TQUf+
YNoYwtqfgTXDd+Za+Y7NqtOY9G3VEgGJ2FZZkfB/48mnZCKG9aOB9Wrd3iuOkQNI/UisTWURIDoS
mDBVtPYTDkvimShgBJiLD/GM4/4D11uzNORkkgKYgN1qz+cUbuBkhaZKq6xgdijqprDnQCtSPzK2
t/asIQAvOTZVQQMZm2C5StD46dmdjjYmtDrvyWrZ1PLJjXaHMRhSVSpMwoR6ahfmwJ/6jVxFFJ1f
frzicLxHPDnPtUJp9cT1O+ESTYkPZkIQhNo+P0YlOX03CROz5elEfRpo+Xsp3ndv8H7wQSdsq8Th
fXJAcfzGxPmF2dU5W4f0NvIg0jYBR5hqdb/BLbv+kFRczOHDJzLtSpOsbGDcE7Xpw7+dsqGp2gll
0ILwLPOI9ojXj0Omg07Qh3OGgexTQEuhshYFlJO26kj/z6tzF9w6dKE1+yt3Gz8u9nsrvWmKqNF1
1zoYenfUig4ak0v4lw7RmjwtV0p3QUSr5/FlhK+A990SIQSD4Cx8FBtQWZLxMCs6R5ltBdz65j1+
urRjD3fLd/cIIzjXWwNZ333gow2fnpjarxrzSAPYDopRsCRWpTuVXVf5Kd0Rw8swJZ86XUTfps40
XRta1aA2KxGCRQLZCZHsGmbgr9LRGHAnwyRwHASvaHdcFbwKMf63lWdm5J8xzr1FvfVth8b6+mgv
qPOS0aRS4Mt9BLqyjViGPZyysPK5LbpoYiuKZPHYr8hyhxTH0U09mzQFKJFYkCzNBL2SskYviVYA
B4dwjjkWOzemJiwjJlIeL1VW38IOMlygPkKGDNATwUZXvq8pMV2xD02/iKplIgdIbo0boaiIQQ/n
F4LfLgaonW+Y2fFe7GiucU2E7ko1Lufs1YKsG8PBE4Q4DG2KewNDtB0ws3Ur0XXZsBo8G88CE5/O
WwaHVbUa1qqb4H3qETg00tknCso+ABbdrkOc/yoB/O6oAfOejTxsYpAaD5FpSctJGVWy8N5L+d4V
1X4loJkwLXTlimXiAysdPSP8C+qiqKc0lyQJsC7GgTE5qi+LbvqVLEADbgW6ku/f+GwV1wak4BA+
AsI3A246SKBMnnTQXDPNjyqF15UXJHWK1K6/LHb8PRLNa1H7Ml6foukMVDs6bNKk/X8ai2mC343o
8EgVnbyrw+rHdlLbMKTaTKPN0l4S/kQ3b+S/lAWcYSdn7RTNH9BJxHv5SkUcKA3jRoeYliNJ96ea
uaj1DMjOP7+t2FFvkGbPbMdm21e3MJ9w81zZp4tYp5WwuYPysjRl6eQdzJgdLzm30RmMvB+Xk8p/
xsJdq7v/Eda1VDIsv41aUdYRfpqvWdAAiNC8oRE/lwBuBxfwEa2gjsVgmU9m2PWdEYbxTxVcnZH7
zjYcK2XbDwwg0OfNqWvZINR56R97OKIjck6pf9IJ3sMx9JQiHEurI14VYGxBulB9hzrIwlFc+04S
olqrQh0FpCiD1r6slgCCSp07ef/QxaJ4G6BPhxNBgahEvfzyzsTToWLnufmpFE7rW4Gyg8bHsjbS
qd43lF9fXq8prxFtpVfS8uIPB4lhzEz+D91WWGKI9UXX4ilpmeH3vPQeiMv6F2elkIRWYuuOsRAe
jvs1TcthPhrfI8tbWVMeJoRHh5OrGi25fTwbVxa9uoMw4SjLsUZzfgy75pnHiGRObB7+5LZWPR7Y
u1XbH1jg6ZoUxDXLonV5hLn1gm5aMn/W3QruNcgBF8fVXuWsDcAJMAuoW7daJ4KctJcBd5yWvJ7i
EvYi0ab+FpD7zVFsvmdZm+ngw+GlcSwGM3qdvo0VcLFnc7qSrnpPzglyR3LdtvQcdKodCTsRkad/
W6ZEaKXk8V99Z68Dl/QvlF9POMe9c4osWETPslxnHgd2VSyV2tfuYxlSq0bf3cbx81s7EXooklnK
FZkxWIfQ4U8SrY56JPk0yZq/qs2rZdGOFRQJy6+vZQWng1+X3WsG0Nd7oiHjcpWpS6OvGuOHL83N
yo8ODxKp5BXsDYsOIWorpOnaO9+vrUV6MM9hT3RpUyr8nx8Q1EaYuUACaM4dt5dWM0SVOUJHIi9l
4JHD1PyJw1GBybifxwx9HER8MznVHY0XfpMJ9NMbehXjPV6XPHNBTPzqwNwOkAM0JiUC1uvVbAYV
1mQeF4HCgL0NrQgNUDw3+2f7jT27aBo8Yzc+IY4A7DDD0vBuWHOLTq3NozRZ1SwkcjfZgHJJF8KG
Q2HdhvWyDNIZLNydr1irXjNxDx/c8uUml0d2Vc8h0w1mVLGalibJPAcUvVjURJcB2NYiarvYV8wK
S1J6pt4kSasx30pl1dYRNMzo7zLlxrAG7XcIOf3OdqWG7GJ2sLTBmvN3mQnJYZkTdQP09C+afCfN
mDAnyDLXl/gq5v5xhIWfYdRZPNA99RYErNYqWsLFRV41Ts4Zhb7Sudr/zI5g9l9iTTYozA74ihFa
nyqCK3/nlg4a2FeAkTZv5M27GhXwHCgrvBmwCKcaIptOCu+Tfdikb4jB/eGgeY67r/+feOBjru6A
aIuqzib7THVJ1RNJSsMBqIKSif+a68fvdnI0olB1NSUd3bw7GA4eJFg2KstKloZbbOkj7fugw3aL
ksO3cPbykMq/U5N0wviCvdt1dENpYP8PylRUIV6yeu3NGpTywF2wNM3685w+CbOaDvhY43NLXnvO
3aFSGaG9TePwPKZKqGOOisXj7An91TnSw0jK3skhvnsy0IvcUhz8csv+f1YHS9ceXKqpPiBCGhJy
PFljf8JK13Lplzi6PvaQXR93Si9mqDzT5K38tOiyt5pnoeqOsJ0NR8ieHxJK2GerqMcLKva8Yr3U
qU1LRZtxEVuEMdoInDO0dLmyPD/Ys89wmmANlEmikhA5sexCT7hQ5NF6/8vqFoPzy2wdceqRfmuO
DiMMNsqeJwC/d2VkhHBHJ6ogX2m9y4X8eHeIFD/+auU3fLCSqlg+xfWtWlPAT7faS5OVNZqzGReP
e4qrIVZbl6LWC2y/R0jiFd04iYBFP6OwRPqAsHB/jfFfvC9QnRDbIFwyDUOXTW507eZTBLJgGOxm
Qva7Wons76++1pNVeJjZpgWintgk1iauKn8pizownc8x5PUAM9wH7g4k0q43f2UDxuOEWK2jwUyE
UvUEB0ajO4dQeduPuDhLU4J5GtXiU1AwVg2zv3uJJ1VxmOu0RdC45l7ZVAqFP81/B1+4Ei74bc1u
WRRIjLhD8lBkuSI8A4tWjFqzoaG3X4bgj6+fmRDP/i9k16gkheEQBenkzv64WJHsSMbSpx6EP9DL
Qic2gZRwmzVW+Ilnv4ifdtGSuJYOTibWS2GyCQ6TXRZoIyoA2IZdkv4KtFu8sySL025f6qQfkiit
n4uXk7plRKtH3iFy/HrRyE5B9hxXxTyT6MHoAg610DN3M2gayfifpWoyPYaD7/s9v1PwzbGfGwcd
Y/R1gF3mSyKVDmg13+TV9O7djwBf8r9hv6O2IWJ2a9XLQNcF2NUqu8UZHtcD264ZpjCO7xRlKDmD
AoT5EzR1ZvoQcFtd+GkVdu2Fgi+4WSivFxJKtnIARE7T8kze8f0Q1073pV3g5xkEiZdHXxI2ibBk
0s41BrlLdckfv2uC32oQY64BFsrglFlfttL0YrB4k/b1ZpMTwq/xfyXuNkMT1O0YxDruXjjvzt+b
gaouvH8ppxJch+yYk8my15s0IVIS/4l1dlNWXgRAHjF4oAZhJXQyxenQ5vGauinl8dbHYQnM+rAP
e8HR5+prnwgRq84Ws9u2PtdVRhRBIx/RiiuKIlAO1tC1pSlLI+SaFna+vyrpD30dpt5qUzUE5VC4
Zxh86ktspkUi9Gd2vDfwGBRq34JBswc0J2nEnP33TMIpTZB3m6uO0J+InlOp6gXK6eFXokDdRc0i
BfVwCC/w1Z9Zd+oymv6KNkNDkV7to0ePj7xgk5LouI2NPpW7eixXHGxynWNkiJsPVolzM44EhY/V
ELao5ZOx0yfyw+eCkctzmrraVHRZgT7wXi9/HX3OARCzddYq6hbmgyHeMczQunERQ10ex6ERHhm3
rgMVkq0UFRsmizCTKvUbookstx8Y369dL1IhB9b3dIR1O/aqi/f0NdVhYuEK7gMO7nbr3dH/qFC/
srhlTsdznHOyCelIoTboP9RofEX1/yR50EcZpOP1c0otpzMAU5U8P2NU/vDrVFLotLhr6seU/nld
apCnfrODyJM3A13HfOA2+CFQQSEyQ3E5jEbtoQIFxlWo71KPv3mB5IqemK2XzMBi26oFw35Jo7qB
GcdvVsCpgNCf26lU/c/5LcJ9Z77A5Y8y7V1oRmLZiMf+FeAUE8faNm0DXm3joZT5UKm1V6p3wXOU
HJeepCHIqi5Yl3gLpTpPlaNGeJt5A7+kizEfi4ysIDdqgYIKTXf5OjrOz2VbtXQkoUViZpPfHlKJ
84JjXCWpgNHPfzDw7G2zsBbQWUazlLn83hjM2t769Ce+1uC+Udkq2U/swj/IbVuCvdyCNyqzFtlz
8SWzVdQe4WSQewtQMVW4qm4zRA8bsdf/Q8+62vwsWZOq8NVIDEv80yPMEtCPuI42oU5fcAth8Ujy
X4dNy7+RWMKpqB7V1gwR0j+hWa3gMdm2IbXIV30kuNJf/khZMFas1GJl1zM9hk0QI/fUaKKD4grp
RZoszPgiRr7ki5O1WzcJPiAaFjVJSKsT9tK0B8wFLIfeZl9Aa6qPxI4RpzKsNaUqiqSnZtW2JAPA
GGGHBvUewGnyPCm6RWD//4CRKfq3MnmPCh1sPBDQ3HLan6h2YP65r8W7Io9WNClvSrFSP8qiO+M3
bo8C6lkkU4j3Ev+0U7UmnrWHn4uhJj5pxhqYIfeTQFC6tZ+BwVKs8j8IsYkvbL1kKs/ffHUxKqT+
cHtdgPYhn7AZd/YbQrtoAKOQRFSVOjSlIWkwhXU0hP3qegb+oonLud5EL0VZzZp64ca02cJyoUZY
01YkiXwcrDsVUIFGwcr7rLmFXMB3AGIvK7fsIbwBqZLWGsZAc4rRVVpN+XeVvm1f29Qv/gVLl/mF
vS+hmzevap683ayu8uAi7e8ASfK86wxnQINo55zLIzKF6nNeIC+WtUkUh87SomiIxI2uWpb942gh
vQyaTEigNPKpMpggmfC3NzYD9Kf8lhGg0YacqTrIS1eo+zfC3XihOCil9x3eFdyWOxVO3fOg+S3/
VLI7wSnC9rEi3tkK9kCkelSkJHNhz8vu/xiwPDrkmlLQ6S6BmbmBk5qg05j6Nz7LH+ncnqn+Thyp
Lu41MD+yQ3OsEljEyDeQtUYB7vE8QkKnnAQLof5jPOGiqYWpcTraOkdBRNz8K847zOEMK+KjY+Dq
XdMKxEIljnre+DhbJhyZzc2rFqMFU5auf0RUc7oG3cunRoSCwyA/YdAFl2Lp5ycDkc13YlB0nh9g
WQP5G5GwM9zUvrhnATPp2Y/GMBC57eJXhu6qbbVJtRDv5qg17fTfpicBGrnS4vBN3/tTGPkSSWm/
ujSKdXMFamK8nj9fVDloxs6cxgn//WH6vwJ79y3EujhmgdvFCbUuS5lcyrJK4zuY2w9eY6l/Xj2u
0y9uw2Fonhyj66caFMmS9BvmcUKZ+6yFQEwMlHZHbrRojLLtgk48iCT9E30PJ+UZpm/lmazaE0S2
1hQ6WfpCHvsBv0bvKEcDJzYuw1uVKRZZbz11QcTAUQCd4kULIa5l+51oLJ31ir/OGTi5nPdkYyWp
N25U2QJxsv+eGftb4BsSzRXLNXK8wsbd1kfYwpXheN+B9FarCNxKoRjLC/JMFMbN+qux7QgIRUax
Ey8gvCceG23bdcFYCubTQVbF6Q++o9bmaEQO0Dg8CcXJHpBO3MgyT72k/8ddxUNT2+ARGoTuVsxP
9TQt9Nk4uwzOdIwrhvQub2LvvPAktZsyd16oGMa1alQArFkvNadr5IppGOXutm9Gse70YqCKOf22
4MyYpWF4Lu2QqCMsN9+7x0Hidx0niSA4C1yuuHRC5MNYg8JnyIxFb89J5Fiq9twvWfNIX/nZgg3j
bD/gnyMpKasw3x1kflOWbXGsyuJ0cucgEVIEVoaGQtKBZEHSw11MxM53aVlU8IUSrTMC8XY8rkXo
UhtmTi4H00jx1OGyCDzJb/+y1+T5Fkaa+oKr1D+s4scMKadOo0rRBxVFh95z0O9pOttugXWMNKEQ
FXCFzD+r4xk4/I8FZhxGSSGapZvivriLnddV7xcaKhYaQX6zDedXmIxp8MkCDWvevFNp4Krx3RW8
mRMoXNFFZsCzAWq8rd6xp6UuJ4RY+HwGISQpBbz/dCp6pL9n//9tSXYh/0Nu4vLZ0gBLnBL2/FBY
bTUQPucOg05ik13hW4hMFHKdvDDw0ONzAsXpjOwntASxQmj18Z2XUH7EpcuC/ifcmc22U+ZjHdwt
PRxFJhGecCch8D6p65Eatsg7/HW/+ouJ4PoeqPIq3ynFZ/7iPK8eR6t3Nmff/INKp1/xX82dsOu+
jr0WHp3b/2jkq8cKt+q9WJEZdXZXNrDO7u8ihqFO9NdMaQx5hijNJaEL1xR4HE5LIvfN1svM+8v6
xn0vfUk7SaldwNOYSPYqjtA4FCtEjCHEJ6AkJkabrYC8z9EZddfGeelUAfBWIsQBPWEEBhYJlL75
ae1LEepthsOs9Huu1G4k+CDAG8192jVeDAY0LxHvoaj2uhf+knY+l/xeFe+X/Yf2EgagiOpF1yc+
lkTQ1QtFt0zquYo55076Ecs78vvOwhvIbKIymIPO4UWHuh0rc5qHAOLFJ4WLFRbXBtIqrfWIZI+m
7v8xGPxi+7kVtIhf3bIj2TH7Q7Sc7hV96CcgnCxBeERyozkQv1O4Urg1jzEnzzqlPa73n0PLgAFM
6c5Lg4mclXvRosqU54zj09F1c4SPJ6xXzKiYvcVTLJc7mSQfvq94ULTCE8qS/ghZrhGiQX7aihcU
9H6cEmP81ZC2MfzWevu8AM46jrUkCzwc0x15QQLY+ZAMIK64RNf+Cqf2Q4TsSWZtrM19drHMnPid
y7zIgPhr/LUVpSachPqUItR0Z/MMgix3h3A39kDleXEMjuW4mOUkN1EAGTevqWOv/c+D8pNPgZX+
PW9tWl6ioB7kd+EpHCBpRGczRN8VbJ5c+SpAUBOVhCcefI6ilqaSUl5gU6yCdk8kGIDOfm7uWayI
pbLH/+6SIGoH+SqiY4UUwww4IYcwebnFLuyxRfz6O311a/18hWKnlRQfm9cVeN+sK2GOe5p5rg+i
2K4YJmAmEBeVFqcU0aVodzS6mYt/mjM41eP6iKbVsWLpQBrfyeApgZUyUm+Zt+EhWMNvMxhILOkA
QBUzhiyhIZ4NmHPwDcP8WbASOrYFWM2adBQDBo5U+q/x8HPXVQIzAHP6aKtdW+ZKXYCo06NQuGcb
Z0Hpw/FE+7hlO1BIjhK7k8lk13tOGKA+z3iRVwnxjqFKerh7OVuLWG4ps4OyvNt2fSVSr7doLgeO
DnD+y7s4DLOPDeJIldIrD08wyw+lLeUMZ1Ds7o50JVxLb+BCVQrwCPIA57KBK9SyZ/A046aLjNDA
4w2UDPadBjlsenlkQsofhTOJl9zjzRi1qCC43UbzUqT+wgUw27W7jbbf2BkDgkIcCJWu96lLyGkM
JtHHQs+LMH61AdJLUsMPbPu9BV6BjdDtA5yPFoNdVHzscNY9HEAHGhphIkgdM19aJXzrF+7AWaoj
bm0XN4vLd7kmbXXnjSPKg10H0tDZ+oUKGaCnVjCZPtav5J0nAAPYVkGi8t1R2uGxEgxGkdYlklx9
Qw67iF/E7zX7sPnxZi8Qfm1MRbjW5vE6SMS3bRNV5XbK4cd+0dtUOH99ttU4bXJaq5LAghv/otyn
KbdONcBkIwVnWK9CewqV1CNkEMqJS90vqiCDqywbRuk5RZlW2ccFLZWN0xsO0yiPcVYGNhUVPh+6
6B6K4p18OPNQEDSDJBCDXHeqAcDUpe0FaJsn9P2rgd9mmXjjBhugqEV+ECDe6lpLetVQHlQNzdaB
g//4gYkY6IKd6Gd/lzs4pOi+mCH9rCZ7nMtTlMLkFJ/JHpdIuOUAv/HghEjMRE3PLtHn+R3c/J1R
Wy2L1CcNqaRnmLnsNDyPMiYTaOV95/uiTyXSzFID6P2KCmGPiuL9aERTkE0WrOvtcsYmdTBh4e2v
8yuOXieXKxL617H1ncOSjZIiXgsRSkBC3JII5npOxlwq4SQeHmGZV/P8EkgOJOnxqxf6HEyWfhCC
8bDLk45DeogdqdxLjcs7YcmLPABUuS8X7nP4AX8DorMlAEc5PpsUfITO2/7wujxPFLasV2eN4tOC
xX1kpkH7ZXBQfNbx1f5kEuZaQLMSFZlp8Y/ShYzELkAyIy38uAQSDPG70CPiZ/DIB0W0rEFPwJKA
kROMcgCrBiFg+qEVOmVlounusRqUCsw30MIrfxQtnxpyOGT8LG1jvcG6cBo2uWTJbZ31Wye1LsSS
nhjC0jWsYaKNP2yMN42h9DBt/HTqEJOJtHYJvy0wWm9ZeyXYZarO/RQvf2HdcEOtPotsm1i/KZNK
xHDrvXN7CfldmRPy56r9zwsw3NXvejJHA21UFJoC+ipFxVT3BMKk1D1ZKX/sL0T+tjsl8MhxKdEI
ku1Nk9WaiwMXojIyHHTTOprcI0obcLNtAFcCxxUABv9/yrOgEX0xhJtt7OHW0kkpuDLiRSRRs1E0
nZRW5Q9AM4I3Luj0GmzL/cQijC64PY9dsFVAI0uWxve3UWvV5nsKC1dvmqLQQ5ZbCH48Ss0MIkjM
GSltCk/F69CHka7KX4Or2psNToLPvSFvpvc6+OzvLdzV02imggEJwpHfHdFKdhUSzoK/zHJ/KFse
q24/davmGCEy/kytF0QvwV+XRcmbLDHo8lZZbuoywXHpX39CsIoK/ZzZG3sChmC6T9IsxcW8ZqNf
izH9upi/dUQ29rbcudV3SB44eBLD5UE/KJUX81/nYmB21L+jABSv8VB/oAKF6WgXnxNPeXX6nynF
1hKtKgB0i9GnuOvmEvInPT+PzufEGYkxJW57S4rfB/enpBWoDu4OHBaLeFq9nXVMT+m6NIPam4s+
hSLdEzwy13/jGEDcZeRKEYeNZ56/qeNlTRSWOUdu36usxLTr9NvZyOOdWhhfmFwhwGRbgTywK/dO
22ck4LSDn7tFKgVXeS6I+fumDAtzrHQtS1Z5ktGLFD1K+CY4TIV7C1HPuZLw3qARWkDT0FJfKr3J
s8l/6g9rSC17d5kxBqSFSwNftCZkri45lXWD8k5oS9x3qLBy8Khe1k24hRwimE8D/a0hHdPOBZYd
CLswRtfa1jI4FZB6uignnYGdpeDZE0NUC+Te14vciZf7MoVhCkV9Xo3ZTIJobuNM5WHgJoWmOMyk
UQ8xMyv7d+U/4mEVtSG8N4CnunoLVpXfIZ3j3yL40SPKmS6wpb8gEolWCl6nVK/CvZSRHrDd8Zpv
btPZQ2l8LDcK4srRu8mXXNHRrIEZQdbS9qyqlgOdpSStKftA8QrGEhS96WWY5UtM30QVY14JOdgI
impxWY0mF5ZyCgHFKBhKM3wx4+d4hVCC4SzrckEQ552UgSNftm8wCQ5Y9Of/9UI9xllibNPoLLkA
Zh5pqZ8myNA2SrRWp9UHAEX2THx388/vV1k0bTmiBTgltLfJWp7YIMJih7BjmGp8uowUJYN/paU6
P8sw4buXNRNRFw8laku+Nb/h+KYCvRMGNsxxDIoknQFwtNzbCoaONTw9PT7285/1m7LYkWHfhgME
GLfAsMympN7B9AgTc8dXgPw+ABXdakjxDAfbsAiU1emiJtFclCN+oaeyKgYDefxeq58uMoNLKsm9
CX8kzZmQq5oWUnzQYecR8ocGG7zfyhluCaRBsk+fcPDpCBQY0CkYj389vgv6sLTUWIJDPraYMxEx
6b4ls++5DySSO5pfftZ5q/pro7BFSgQcLFHnkBe6iEzbvyAFAd9nKceo30b5CHnEYTkPceHm6PrK
ekdU7bQFEnZaaZVQEnPqrmtFBMRuvsuf2P9dbbL0kehx+pn1um6o/zSZJHlP5PoZ9KqQdMtxec8r
j219l7vrM6vmcEzAJiCTCD7VHIUuKO3TrqpjQmO80RSDa6CjvnWHYkGkSbggfxli/DK0SfAgByuV
Kd8wvxD5VUZABjMfMET+5LHpVz0d9n40UahOle0Tfk/EMp4s/vCTenCD+AUw+wqPMw1cVt/vTMfL
4x8R1htfiPpNUsmCU0k3AReSlIYZiBn1VlzPwfVmMYFGBlusxohzz2NrH7AuqHG4ZVEgmlakElMC
b5zhgSuAYk6idmgxivGrp8gHHYWb7d7niQkeobPAe/7bJsQVTK7+eDkZANzvILVMDnkwAvJbVGYR
W07P1RnyF+uLfFcrefC12cA8YGmybD0Enrzb1B4W2S007Jdbl830Lj3Q45/0EXf/zAmmPdn+65on
Xh4jOYIcRdvkbPQ8imKaynyY7gxmNrNhrKTJFeJxNyJGmuFo03M9deH0zJqjsyvpDvoE5chBglpK
o5Tlouk799AmyzzkujFTDLRbmhwAE4oXa0NkLw74R04h0TNqEqKs1GAQ2G3L55vH2AuqYx2D3njB
ztVaEYqJIruigYldYYjQPf9miPO6QRueMAezwo6e3gXPcSc1iwkTEJJGkBBT9jc6w/yguyecMRbx
bEeAEDxcPtxhB0WZqHoEKX20S3ufQAsXOVovioQZq8EyVOdbRgMP1gRmIw5y26b9FlMSc3f5PEd5
17mp/Ugp2n8d9VbqoQOimzW1v85ZHSxB/V0ueff5PRcmEQ2nOt4/WuQcEt8YuHwd0IHO/EvBr8Jg
vJt6vgOd4e/WGN+13zLcVCSkX5qHf2PYCiCTAaQiwnlU+e79ZyOwGPJqy7uU9sMgDofecdOrpsV9
wfUWxgeQsJ6Kg2J/I/fyfNX2cmbZwIyYy5qzwg/qUSvWbyhDmL9Dj9RDP1rhqbNctrtjtoZRZQ1z
dLma4ClXQwhn482DvaSB0gToqRAFtiFZWyfC5ionHNdcWZgWORmps+fLW/clgMfemhMq+5GbGbVZ
wLmFvPARliQ3QTvpxGjhXcyLIJKaF2aCFLav/7pzXZ+8gPZ9cQ/GRCKYIUb6GB+aEzOe3OYnC/rl
VlCR5KeCuV8RlAaly6Com6HRgMz+dIkynFFWZU726Su+6SwD5pFKcFtvO64f2nnRCLZdVRDF8wYB
OHDMkNE/p+p/zKy/2BmwIeOnFDQgqhkzOGttzoz1ZDPI3rKQkDByqpbkAGHW3nlbegqUABihpcFD
/rKdA2TtpSDx1pZDcFEy4OvtmNBwEB9hr9OU9XqumovtJ1/+sc5UYrrN+Zxj7j2Rco4GtcyM7V6V
UI+G+R5Ob4P+vYx9baPlS7attxEuY4YFWSDy69rWPVvIhIoPY6tyTEOqjURQZ9TQxpfCBDADrS14
3VGNu/zRNj0TrSyi+Kz0ihtFZRHIIGlsgfW21KKsfmHSGdqMMQiqaXnkDiKNnkX3jEb8/mYRuR+q
sM8++E6qGc7jFDEEM+1UTVFkzDMlh3WPco3z5pFHHdAzHyLDTTxRI9bUhZGAXefq87QS0rhS1qf/
nEaTsYRpLbRZkCIbsxvd+kR6r7A3xkUcJ7IpMI/hkYs+xPJgeSSxnohWCUsrJJhEAQ8o8PzOmVk8
lgKaiHL9FBdGEb+i8vWbxhJDCLY0/TLCDYOUNK5WSUIwBxo4FO1THgJhYSTBziZhHV+bymznnmy9
jZY5pwDzRbG8DiSCD09ivm/dmzWsjfLbiap/fzRIMdWWSMmPVlsY1j9R/H4CKLKxnM5/7jxgiFVB
/DA3xSlJMWktNdgpo2C1Udgu3l20glD1lJLkvG+SyWiCERC3S4hiY67J4weuiI1q3G5F1QSHR1F5
k0hN8vlq6t3TejViRx9zNHDeR2NfSon2CRAKPeFrBlSxWhoV1X0qDOT21EkePFRQmvRZ27Senf6o
Z9yPV2g2mXTqvZIvRrxJJfOVtSMc8BJdevkEnGgkEbTR02fjqnjJncDPVor7DyLX+f/Oed3p76F9
/ZISKsyoHrfZ+cqB8hLdxJwEdVW60rHI3EXWYEACwVtd9uLREW3HdSXy5GzvISrxVTyBddjPoHjJ
Hqz+zfw+Jl7oyPOHJ3Z9jEaxf7KDiVuSTDIOd7zpMTTv9pxlcXfzUZQb8ecm94WKIdqGYMbpMgXO
Zq9Ut9huIvrtAaoiB0bpPj+eRA18qxN243FUpAJKaDq6wCblqROMANQQ9p9opi1ZX1r0puPI1GFT
lV5WrbI7YIFV1XtmsSwzHNg71IUpaVa+ZG8EsQI490utVgNI5MyXqiFSohNqr331FTiMr7KIniiy
j1V5IKx6SRWqp0P+9QpXz7LuLkvd5I66FBSgJwDWrQlWRKVTR+T4wpYTNzml5ChD8toiAfkAoEvC
S2D09voz195mH9fw2g2Qy6AjGw3K3Cop+HGYtsSWyicVS090m3tEnu62reYELCrXQbcoeD8KhhLj
0qTS1m/rR8zkgHIrqu+Cn/btbWnoqTmwKKrZQZ6L+QLZ+StqJS60ZZ4V35EUndl0EhXxY2V8C5Rs
p69QcAu3Y+Rv9azcHKqH9K4bS402Vy8zT8zlQx2Q0tThLdUSY/a3pmfTl6RDDobroiZXXRpildxM
bpiAArvH5Q+zkiZgSP1bxxMtr3PAR2xDRC41Sfzyh4TsXkXss0gH+peQBqhK812dZ7A/jaDr2kcj
4ewmAFxmfqYexVpfe5SSsoFFR8QFv2OpAcjBKSFy9lZAsKsbsf+TSRyXeSt/k/AlBlgbDREEBCww
FqgK+A9gN9OOszBYeqmPQucXFAaU3v/BkY3GxSNi807r9KsAm/oVCT6+ffg8f9zPcQNZabE98xV9
JY5bFX9eOaXPPitJlXLzZSz5LT82lRC269Kh9BdmrKSUzvrkUA8gwoPgLL9FjKOgjGq3bzrnc2Sb
FxnTUdTV77NtbMHDSX1g+ob1BGG77V3Z4VHuuvbX5Wk7GcgBp8dPtG7uOH7gE8t0OG26haAMtDeV
PTVztjDCmGUpf4XN6fhetmFTh/2EQ3qrjm4eYvCluidGF9jPMfopZ73XBlwiZV0UXDzXLJ+q3Gmq
B94XQW1iu0bzZ0VFoNVobemel2KTLM1TeTjHUNusqB95KT4/o/t3y78Z2YWQcHlKr5mr7Gl3GlUV
5SmLsJpMBadUkpJsXHFw2TRiKS1V5wNv8W2ONRAlU2odZc/zN37tbbHiVNqzp7Y1E//NwP0FHyUg
YJAfeg+u68jEDSormEgBe+Chf6wvC3LBKZcZJST+yIcjKsGSCS8lBH9vv5CjWNg3zRFqKCijBZip
KMMHdoTDm8JPXpS62vopvq8Dl1jsc2bGaUcQFEDt2gxo4dD4oI/U+dWnfh+bPve8JZHFXfjbwxHM
uAdUHZ7Bt3x+Sxa3VVA3ggb75ZG1n6V7Da66Fm8Y5DwPiBYOhDLbaLbwAT3HzFHUdhHBbI9O5OyN
BfAdQXvhoNsvO3Rmn65my+SoxyRZhrd9kR0j9sJQQ8MCm/g+ZVVRlig5cQQqzwMKacPffmpRY3mO
A5uL5rku7+5WQtD4rrAqT5PlEkVZkjAdLq1pzox2b1s3v9jGZdyhDVumzhW6bCYJ9YdHSWwOS0wu
JS6SCz9W0mihImXCSV7O4UGoRx05NmYHsiYfc8oQlWWETv+Dm6BbNMnfaLaA95JY+REb/tO1ErK9
ZC17ggC8d1c7qNuutjANgiJCbZ8AjBdT7/4ArOGiK3+YG8RMEl43xFv2xYrBLeqege84L/O6FASV
MNC/93iATssPy6BBfSAllpa1f8M1U74rMdpTP68PTgKon7deqcw5sFw3xVL+XiDjS8orIqs1kMv3
MEfexLu4TV4QawjzWMkPoU61rYd9pzFAPJtmySqFLBfg0EirP2IDIFU51kjV6zm17HKa8Y4JRMHb
6gOFKbrBtQnPnyu/w6WxPoORkykVmTqAaqyN4At4TmYk7UcREMj4Nw0Qyj1Fj558XeGIGzu+Wqkn
HJa60e0sHks4gojmbcMfeuLW+XA+UTXrpqcg9lbSucRcITPUqYAFnWuZ3o+j2O7KenJ4/+IYjFib
33ZMTVTl9UaE+g4l4SwWKTJUdDfGHMebu2Ba2NeY22Rd+5snEgDG85TwSk2SWCq9Yg9YlNYHoS2K
VT3YOryAJ7MgJNquvp9yd7hJ1pkvNx1BnPdrpSdZUwEdb1p6gCyDDkYvRLEPtjPvt3QudiRV08fo
3w5EZTMaX+f6ceaccQu2xW5w7YuBS6FlWLv1alcgg28DXU6GuKY6QU7p8I9eV2Fey0GHEtKPxSFB
xYOprE5M+cOG3On5r79ybD78/WvFwqIr5rcXYAxCueRH5dfBMtSQYnta1hh5jrCpV8zNa6V54ISi
1PFqCX1nmoTFEI0pUgcaaIxV9v7c1hwCX0HzCE7djP6HM/rhXG4tgppxENrNtpfTut9+OBa4ZUAQ
CUQsOEzBVJy0y5uMGaZGmeY2eCoMfx/1PglFAg9MQ1uVHcf4Nh2RNgQN/5GNujoFinohcLcuj8yo
xb7FY2VysAiLOdqr1MQmLbKzmXu0E37Y+hHm7bVEoE/zBjN0Iz54GMOLz4fFmzgCtAXgOLs5ICCU
AJqVdIhJS5CUrspk627MZDj7j90f+X639CJMHYo+KMJu1MdSMSGN0rjkOZOmbva5Nh4YtKSCyfKj
uINHaWJo191xye3sgHpbV/RdjaH2NWRuYM1zQiBJZbLMseSvLOrzT63H4B+xum7S2Hmhq52GUEII
RM09RXMeTlZXu3HdEQmOpFErl6FFmaysJlaqQlSr3gCmCF2PiSbvsRAbQ8+o/IkXkD3fBr+3G3rl
8mvOAbrQUcHCRXg1WWi5+vB1aufq2IsgB+hEY7rJyoGdFhJo34MICZ3awno/4BmlHFK7bb5AJY2d
1U5XfF5b6Ul4NONYHsyUa1doGlo+lAVK0uSXd6LeIW20NLm/DlDAazWJxCJZucyh37ga2bGqCkxn
HKaLN1aGrBgWRAFokxClckcHMiogIopIGRTyL6/LW23JJQz7Jw28V48O0mSftRArkBytYozDemfC
TXI64Um55NNAkwocH3oaMJ2SlmsNAhxsOSaCGVCxtieCQhL6OnVAx0Mv7mhUjpkTq4+7sRW20Jmk
kp2h/heX8N4ZL+NFM0EAu3UoafKBKnYcGO7vN5Utk0PwAVxXSvR/R9x+Zjxxjv9XPdfrnGCcRiCn
P1A87So+Bc+NtiZQ4Y1cuWOPAxWxjlh8HjCPJiTuH87NhozNJb1wtB2HzX0TxOCWJlxls5Hxtz62
43Pjtr0EOBhBnyKLypNVEGVh2huA0lYyuWhloFbUPoqVEZg+jhQbiFgQoqtl94/TXlm83YfccJKR
r9dUcFJ8KpzcEtjoZw4vR3k/iMeGcwefc42LSaOxkmHljVV5uwezxtH+5lxmrzLaLkM4W89Fya3C
1K49JlEzYry9C9x+2NyyliyiQeUJD1dpHwdI9FeDY0gSAfDU0jePezUr/DTyznmmjpXA/zWzCYfg
e3jLmInNPy3zXlSpEn48kS1RUo7Ar7FqTzc/61K5FC2EMS5oy45GcjqvsJ2NE08pOGC+3PewGP4H
zfveodR5T3+/wwSu9BbgwP7WkUgr0RDrgTfrKKdRM8W5t6cGcdac+9I73ExeDbExe8xOrr3EVDlj
W088j2AG0T+T/JiRtgId1lHg+U5qC5bF/klPE0JYvBhoWR/OQuOpbeRGh4eGbQD+aF084zgymkRB
o8VG3mr9EaL7L0yu602QlqoFuLGiUEy/urjrvBUAfbP92y8IL2F2K4VRtf+Mr32GDjt2iDy3kdQf
2NNbqDh2SttKhfJfktXpGieTfn/0CUjq2I1hHe1uG2oupUWajl25I/nys/kghRjWMGJQrceehBVM
bc5xRc34M+qssDWFZqo5RTWYBM2xcFrs2w/p6HPNX/tJJvCw8GWLi6Nu4aff2a1KjnjNym4WpDXh
KemIWHlC0/p+0iz4J04xI2TdoG5MNGZ0V6qhccZ5sBw68MduQtsjahbU5JgRSEM1gdFwFtM7/ph3
i51UjQNn3O3hh8q9zEOoencMPIGb+HHLOefq+ewzz8zP4M5sy1PzFTztwwhp4XNxOoDwffu2IOH5
2B+Gk+2VuPwsFrZE1a4MYdXt4nThD0iusRRfpWUIWnw2UlZAnbq4FGQFh5jzyAXo49IwIdjwCNlu
y79S3E1q2LeJnZ/Px0FFHk+Vj1nyFwsa9s8bPkkWTagskOuo9E6xkwb8bZGpgOCWrahBeqaAWLD2
ZIjN6Y/XqYn/3HXczFOXpC+EkwwddolaclAYQZuk+Nz3qahumeT1emn67FMBH01MlsBHFjERLkql
77hkBHPK5wUk1Gd7cDt0Tm5VAuiNvtCUXpFViG/97ezNRdNGNnYm3W5meiouKA/s5dft0rtneIjZ
mHSVxQpSAa/GbvAi16XXqh9A0lKb8vm55xHV0cz+I+NIO3W/7XxUZa8vBvnIhvjfpqA3sirlhBV4
+IdSDE8Y4QwtFRrkJ2ffzxtJsM2QqcK2Dh5njDS0xFvdgiQcrK+Ruq+199z58eS2eWEJVvxZTWbX
sDKELZNVeNmxBu+VNkDqlJts196CDi5xe9tY1Yv+QCHDyynRL7xoN+0lAmGb5CTnj0x+sAkCiAsn
wVs5I07myae3CRxBH6HjpyOuPuCeXVp9i4yvJL7N+ioIPn3tv3UUXLjdIJgRh8ze1tXOI3a8hlyj
HxRuBHdxjsnG/ItiubWMUzMAU35fkpa9oLXoDX0WViFLLxj0LcVktFsQvLORetKd6tgJBtmyvifR
d9SLHoGtMiEFSTd8M/lIznlPBcWybeNIUV3Wc7kptkYuGVHOyb+Ac/PS79LrstjoAmMK8SKQHTdV
bPV7KvLc0Qn5KsLufTYtF9CNs4n7sAQFUGEdVmOyHkLRx9cGotaR3VIyOWbo4wP33PL3v7fj1ZAb
+QRqPKg+uvNPMEOtCHQYRCqsLTSVpC3C0V2Yy6kVUSZrBGiDVkoLwlJK/+Kq37se16edg6OKg/Ua
iSjEAzIUM2jrlJSX1+JJxd+CqexJ9QZ1X9lkOD9VwVKDZnAgEP8d8dU16rKkY/HIF9PMTkB1/L9n
Hk11TwTdffCfxEHS1DiwgC4q6WZIJL2PUleQauZ+snfy3zp3Bpih7j4Jhi6CCiqsaLnc4AKNnr3K
rqrTHPvWVr2KvY6dnLhDSGL34NmRpchY84dAHzOXDbTq6Yco6nXDz1JnE9romaCP7geFyOhBBcYL
ipCxGxB1R3Zd8KvpV0/UtPjIVE8GTzbFJwAjOmcucgmcJARmZ+BaIOs9Gx9PJMUDSuGMnpVmeQtd
aHHOKEnrIY1EZjajrmWEbf0MGanYStzDUtRVDfh/YrjOzRqPPTKiBiieuF3cowOB71Nw9UOwKhnH
ulCCfm/KbY/kxxfBH6dsW/uN20M7SAlQGVjTHVyEALRVVoUt+WQtsvFl5kyRF3XTTfAeE7VwNhc8
bjq3l24HzVwOV8addk6cTPnL8P+JoATd/OltyFPkyBv8nNYXRsxYHNWna5F3mCb8N3HpNQCUIQX1
jsFgGYGAqLWdzI4+pdY3Q35Lfv4Dy0ADk4a55cQDPtVj13BYAg9mN6+v5GX0jhnus5I/JjC+rg/w
CBpphlNCqE/06CKu9EG/ToEfmQQ7hKdmMCboACZzTTgqgrlAOpAS9MnG6rVMrJbPlhuavBh0DIWS
yGyCPu9tyJT5aXqMg2tKD5y098To7z4MzrXyNTMpuMyOhFcX9d4fA2DsdAfeAL1nkAtxf1YU5gn8
1loeBuQ4HUzicuR6bh124eU0FTjYJULpx/t8J22lHtkWkuYRtNntB21uxtLZNcpwI6J97yeHjkSV
96FgP8n6egRn4+9FRHViKWJ7qkEIMGO5ihVmAIWAV18M7CaYgkkNwjwXDrQqMkTjwf8bTfbIz56O
jhZvf+sqWbngV+exst16SQilB0cL271jc5FhZlgzPbK40+1ziABFOCZxHCJk+GWfaoE08TMMYgjV
Y/xjU3simCYZv1pgTNqVXlrMrRdKQJthnQqB6cuGjmE5BzodOEjezXfB19grFXBaHxlyuR2JvpJw
7pfa/SsRhbOwI1OSJ4ltd4JwEOXlb9unc7LQmUK7w9J1IhkJsPnOJVhs9SRJKXFnGQVOUd/3aN6T
np5IW/dv1gL4+v6O6cwL3ISJHNqjoJ53gkYPROoa2rRuVlJeI1e+YOPRS1I0f7PGY2if3t/KoxlB
WzC8y2ouPBqvV9pcyeMElnCnqZDlByXHpLQwKQRDlBwyE59qW4WFRzlNt6nLeJsEMFHnI3VlqE/D
OJ6JcHxXNQrLJpWYoMmqZTXa6aelh1BilgVmikzujm67HmihL3JyCJfUcKGPQZYpZK6zhwuRWm5F
j83dU+6UL0uWa/vUj+jlE31gjgkQLtVK9JUcaWw2kJyLwXPQN1lltSs2vtk/Wye5B3MJA4cema5l
Z3om5JdZzA1fTKYhLIK93NfCE2xDRpEYiW0PDbXhvtDZwadw2YuNQ+UOh2cMf71mTsZX/8Q1f9Bx
JMgVPlTD2aqR44mWyBKp0qkUxsRg4mDgXV3CJbNisBa4KGoYPL5rU6/vYBFP1N4uoty4uc+1yMwV
fJ8SA63u6XavkiV0eHc3tqY7VQhgPfm27Q+qPHq+7w7BRZL/F9VV2859Ks3qypPQ6WxQ7xkEisuG
HkD5f0qhSl56+zzPpPAaCBZ+wx4FMtNFmYrjlY9WzIFn1xh+ef8ln34gwiVSTXyN8bif2XAO3zDw
Rfx6/GX8M8bKAK58QFvMdayTZk/pLFzUvk7yNVq8y4MiOce+LQjGxFFJSv+l+82DjjPz6KqzH3oh
KnUauMx50J32BD33ES/suv/sRdK0Gx4T1ozWUqfDN+9LFm/LzTFMOIk6QOeHBXnToLQfyjn3d7Qs
ozcwaK92IFIa+5Xqnl3tt/cA/DXCMBoa+2KMzMYeawyRQS++9e5Yu2sonIbSGKmqep82v5l8RrTF
t4R7iTSEK/xeQzonls4n3wNEg4b1r9EIVRJHHYxRhqCjyFyhAmAs4qRYu0DwGEl16O4J4h1j5BN4
ZjEQudK3V2ttJ6slkj4fPYBpv0cyPb7U+JgqXhS2aa/YigD7tgE/Z2uEEAfU/3iij0hAobbPF4r/
UjVNBYNNIVrkPo89Kib9DQTIFp4zpUKChclGsOcXE8fGa4TFxOaPEBvBclleLvRhBK0eHbX+YgTS
XLVwMf+VL05xuyDYIK6sZK11+qQEUQ1p2mtZYIyeVdgscE0EzkIx4QFni0fmfl+DmDERbenFpB3u
K0XydAmI09uHLOvnyrbY+hjTkekkxeknJNnEpqNE978TXPl6GB0PukxgnmUiuMJmF975HeJoUWeq
JcyR8oUy+nhvWtASD51JuRwHe7oJyMW7yrligMUIescmznFEgddo9NuKHLyrGLkJ9vBiQBu5l/j6
ZmcUpLdXcgIdwzo+VVh57gfa4rqE+0QFP0cN3SsE8zjw6bziD21B/o6gpZgapc6zAp0jlJzom+ZF
PHFHOAwN9PNU4GSNgPSSDiCzjx1djiTOfIIihFxW/dVpAJGXdScuEylIjtEjPgFJE1FQ10RByirq
v6GvdOgeMLE+wMJNsRuyjd1bxZ0GvoFH0j9015tnFxldYJBsR9ZM29nLSU6xjIjBGvWAIHqvZ8cp
hIp2KVbCYZQvzCFLT1NbSs+h/wt/qhIbXJA4c2guS+obPG2TQcwebnsKtrnBpumeIK2WivePYJ1Y
Fz/JHp+o6YI53Sg//5VfQufhy7GuXtAUTBwxysQ0C27OTsKrdjl1mh553YMVGyMmrO+1v1dk7Lcm
44ktxRX7KL5vWwrqdmfcDoXAlF6TsmPdtERhkyAmRW9rBTDDveUANr0caIJV+dDZYDngTNJz0kqQ
dRuNsA8q4kKPrTRuabJuYGG6E7DZgxgN31+FxguOLmuFGTQ1epcZlCAHIYNyKiu5uCJZDxn8ntJD
WWgfVlGzLPXYo+lYUN0SyXBMF1ZRnyvdDn2PME63yOgwg9pptvbODdeAi3eK5EJbs/b9wMhYxPfy
TdC4T+eVl6x3FLk3vjmPmybX09HH/m2qRf6sJ8MOMT6pizhjejAzvD4Fy75L8NiUQ7q5fvLZQPJ9
nqw2t/qdcAO6BZ0xL8jAM7wYQkq0BtR3HygsGiNP1LgtSSNr11PF87X3ksTEmSYUGsC/ZY+nR+iK
z2+Iv5ySiuP96jkn3PqOljah6p8Bnf2mECdtbUgSQJmWTVIr3UC4R3KzLnrvvVdxAp5zgAKSeF/S
Lqfw3SB5cmzL524/68vBT+g7EABbV2vSVXAkJtsTAR6CsWClFh0Hr34NjoxxRcyt5qnBS4zfmCW2
vabNWaDqNYcZBPddma9zR/WfVeGWUx4ygpi66wdXYSnv8Bsbv8arNLqxBWok7y+WeCXNQrzQRVOB
00HKjtYpQnV8K/C2HzWyRN09QoUO8pUz15m30jKmnCfR2mAKvThHO6MDmNrMXN+Zck/2Jj55niil
KYQF5nS+RPPinhOk9R6mHTAOWMLEd1KlISVzyp9xtcb5YDo5NN0wfcw6ZsK9h2fY7TeMKlDFpzM9
empF0kTQauPUJrF80nM+aeXDasoXIPvMFK2Hfvq3+oChjfRHdYG3GSTmPHODYDCFF97XB5IQ1gyo
76eziz+QFqF2WfCj9eGafbuwgYDMWfsEoMRiEXD/wz0xBhtdR/oypdT2fwyAcSvTStYo49XJf8GL
Y4nS+eW37Rw2artIQc1mxNx4N5lBD1hKTiYgK6VN5HBKMqMqMVPzRwYmjX4NWRNQOJrXrNMwgPpf
uZKLYQYUmuJkf8SLKRxAjzjg265ltfA5yOmUx6dWpqpSaWNnpHVAIEAiKl6a5zv7iG6dWRwBqWW+
jVjtuIoDTc2HK/mP3Kd4MUlYf8B0SyJYdLWlxFT1l9DR/XeA4DUERua63Tml0faDsL1gwSnLENoc
G18/UVw4OTha+5WIMQbt1qdGz87zHKwqa3UOM8KUVY4/jFjMUnC9j2D+L6dhkauv4XcVhBl94uvU
hG5mgIZFcM/sk3vR13K+HJbiO4DNT8c9x9Pd3zrzVfJr1RKPfb4Npnmpxx11QAkXXfJShXsUWyTu
EY2MyBuBdR3JO58aZ7onEizHLzkd1UYW7ChB2xlfzxZNn+MODO1XPqfDhgdhzy5aqFzI567P+8eC
lHtvnNOWZ/8dd6PTfp0UKphApYSMyXu3hcrcjD5wNW1ut38bECOonc6B7wf4rSMN7lkE2rnOkWme
3INdKwh5QYVb4FnQOsoZrTZ4sYHGIxwRj78/5167l148e+AFghZcFx1UnWhOXGc59MW/IHYXcMAn
qKlPp8z9LGrf4kvG6n8dEp5CXXvVbsKvoCwX2fJc9m9b1zA+K0uKaj/H+RO17gsZkazTpEfHQIic
35rHdioaU0Y9K3F1NgTD0oKqbsweFPqNay55I8j53bglzLzWmMviJIO/ProF1gTeJdSyajqm3sge
xXE+jEGOQy2WOELgqMH/e6LvmcUpkXpOp89SNkGtR2Cs+c6IvC6axld/8LZntuYmQuueR8q5o8T7
AYnJ+chj405Mhnh9N1Iq1pUo/Qo/tB7gk5POerG3glEggRlplMSzCohuiac7GPwprMJOseISc3sN
WswUMm/6ey3kMJCwQDEF3e9NcR7lWNhlFXWFsy8MGdgeJOsBISM7yRPzDOFiaLJtZqe21hYKLSBc
TaCOKWeBnzd7ZM2lxF6aspbUIIVsmddGxn66AVMPlK57r8ulC3hnC2Q9FfvUVQ1yXYC/v3wNhi1v
G8c39qyDxCmknqI+097re/X5f95MUIr+DmwGDeK5EFrlU/mpDnKQIdW7aI9yEOCr5eBzI2MaIlPO
9xv/ckBixJyvls8+F+AcOLSBjE/iu0MPuPzq1GPiMCR2Y0uwTA/9K/OPt9vqc6MaxNknp/bG3OtQ
ZKCzeCkhL4uo7kpzjhD8DFr8yAduKrrAdCI7U98WrEDu3diSubfxQ2a3GN/OFLW/cvGj8EFPgO14
m3/kDGvomPoYceYo1WXKroRKVGqMFdFaDPIgca4HpUMGZ9imJhp7txl/olY1YIIK34ChDq6yLQpa
aRyJNKaQBPmplmxBj/9A7cNVf8XCMUvA4CkMTgLImtgbr1C6HiQ+OScbZIXKXDEFWaE3cLs4ADnY
qZXp6Ilm6QTziSphAy+11Rv2Kpc7TXFMsRS7KIqIGaLCbznlsXRgIJ9dyDCh2KFBsm0vTCMgeenI
FqI1iVBprPt9/MTkObxMdXj48p7NOpGe62WUwrEGxL2kL7A0+istHGIzThoMeLA5mur4f1JLC3qu
+WjScUgdllePh2T7N9GP5AfylOGDHrfNh+iPxP0if/jz5FZKxliH6pz1k+KmYvlLOYj0WXNn9nsX
pD/1ZYoHNoPu/eT/dsF+GvjKcFWhjUTnpxF9Tr6opZwaitRKvnY1atQgYOOCgoF5U/RgTXf0hCRL
ZJouRXv0KPuvq+5BtHkKmyPjA7yyn+EJurOKx6InDq7m4R/K1Yx9nsVlaoAOiQET+lMyXD8hquJx
2Z6lSr94e4AtPGylfNkikYp866G30cc0fEw3Y3WV5HwdRfXL1xnEdhe5D74hzzXQSV36J6iNRXgs
FopL9Vy6g0ODKp2L6wqv0qmln4Y3R2BYMdTrAjvJs0gpZmHNYS+lWNzPDy3SDLkkDCtvOcupefpx
QRz5fz1ELFkSoz5wNlVURqJswWx8DfdTxl/5oAqZFRwWw3VsTAS05LomHP7Uvqruo0T9ZC4x+Ykk
T9T3CdHHdJOa2aw3LhfYTXNRo/jX1i57l8Mno20L5OivCwoZv5tGrYo72UBR/X4wzo4Sg4A1OsZu
RAuGU+mKfZn0bA5XJEjFcV3Ui8NT/Ofp3gNiKTwR1BNG1HZ7+VbQQgLLQNObpTLsUpKXDmot1Bf0
/jiRa3mQhjAgB4oEYvCgcazVUEwb0FGbOUNfIulxMTSRl4AyrweNWJrhZgs7T4pnDGSHvJSpkJ93
Dq8lGBYaQZkvT+S9ZJQ29eRZKzYQ+y+D8dE/u6l7uE2ELjr1qkJQq7CDpC1oAO2d80avDBEBlo7L
BFdtZuii0/aHCSF/JEbcF7tVuqWOs0PjK5PsZiTMXam4cnAeRlNmwmqenoGaY8KXFyZIWm+Gd7zi
lMqNOEvK9xXZv+VuaMOgR3ApX88GuX6MFOcJ8m0Fi/iInv64IJeg+3GE+cbbRana9lBXOBcggb7a
UQzy+bcfTJ6qNggLz663ZxBsHiAqzw5ozBuVP1G5eUB4ieF/Yp1N1KXgmhL3nVYJu47kT6RZfrMm
LOa8TLJMesaJdSwbq+OJzzt9hNY3glVgDcP41yzp8uOREOFAAD7D5zSoBg2jzfLnswK485ll6Nj5
b0A8l2EtW/7v75JJ2IaqT6VGkFAsMcwfE0aRWgVqH5FyBJ8DKO7FV+j5TPnDEkRvF76P7p28E9Zd
zWpHoil/gLAET/llCK4uhiZXsdmbUz15fFqZLWIkvm9aDkTIm8iq2AzYDi0yOyYPBBGJlIvJz5q0
97l2Ce+cST8RiXpwVEWoTEt7g6JWkXRjcwHAdOogmuZL6mWzCN3voaavcQhpMdEmf2oooFo/l1a+
XUgHQeNRbihCBauOnc69AFklg0Z2B+2iglbjgO6G9pii67tEQanTP6KXN5ErXtZ2YG/HcIM1VvLC
5bg+x5chG/wuy2G7E2Kj0Sv3VjuRwtWJmxBg1eNYrhYtD2SbcQ4fJzYzHQgQ3VRHdFySO3SJ2PM4
NoFNEbh/Furl1EylbUIgWKFSYt7jCJcex53EtLvWn7cqNK3NGv8b1i9GXhFt6q0y9k47qYr3mpB4
Wtf3sx3YBp4FXRjSbjTYYqqUVVhjczsGZSb2Rp9ec1fnFUXAHW1kOqwhqVTE+vDIHDdoSlS4LNCC
Il8ky4n78Ty4dAVtMEVC7Uh8lMh0lDRFHbTOddzUvkXBrcHGVcmocBOVxgGJ5043SZP/B8vurCcp
C/W4XNqgiRarJwWyApBehR73YoJ5xsh9qnCaqvoC2O7MSb/Hgmbt52MP9M/0uONBw3qHZsoIj8zE
Yc6sRBw5whzhnW6k+Gea9V3aQ/8Vg4dhrtGqeLjZM1nLx6Mvl+RYrfYBKam9HoHlp1DGaki4zKKD
Y8dg5sqgsHyp9WmXw/2kkW3ehzJEqTfC6DU2ZDGXEYR4btiAd7TaHQci4O6onlSi82ip5NKYiDxT
ttAm7PnfilQHXg5pUpaxPXHGNp06Lg3B1jeJzScvIJ/1eLoAS99A7GbbYWYoQwp4XVzYr/DWQ0LD
Mz+WDzQVVW4ad6gN3d9iU8YtWwuzamGKYHUv+tRAbi7RHhTZjlzPM3k5Btz/vZRqxxVHS7szScGu
0dHiE/6MPfo7ZgIJwZ0WUz1kkA3hlNGHtGke3ySACUpqsY1kWxr13mq61gSmU7GrP1C1h55X+Mok
eBxtzHSCM3e+xey61GyGLZopvwAkiuvxZ60cn5fc+9y7gY0gACcEEcXGz6/xNRTkw8Kyc4XmfMXY
+xAyro7OQ1utyGBfgzVmunwtdUjOry6KQ5m/Ig6J6KYIe6bc/uD7QuoQMZXS4WSZutyXXFd8a9An
TI6GmBif4VZ1D42Sp87xNAkU4hXz8ETS1hukCzXUneozd2Ezk3dUiai9ZAh/KVJqASR3qqNY3bn6
aqk0Uq4bgqNPzF6SjUBjyJsuup48s99heXaMHAv81Gx5zTluDxTAT8IF09IQyqIh4uF+tlPzIysP
d6bkyVetY3nHCcozOBL54N+nqjuykgknigalInz1YxwIdJbPg/+EcFGIsIkSN3kBZ4DlIX/EPtj4
9sMbNeuuqe4mYOXEfgVM+Vi/SKlOXZb5rHFCpyT+dX0eorIK9z3VILZvMJhhvzS11dbherdJO1oX
SUYuqc+L7BduI4wuC5C+PR1PvU/Jsue842QCOD1teKjUIBKOuAxaYM8ktB381cUTYdw8pPOMWMVo
qQbGPWy28nlJ06jYtbQBhegcHKyEVc3/iqYSTr4AgyaNcuKTfkAi8gB1pBGwI1KIEFhh54OQMeRl
JUMkMzbfpzxqgYepnaOdXRkrGX/35HHCGt9aCL4rObP5GUq7gGGbZLtZD83um9sanPnZefgEEiXz
KJcn7o4LDZaVPgzHWX2s4tgf44HhJoKJDngQW220/hwCQ0nZkfnrEg7sbAZGKamnHdtXJYpNMbk2
GVl5FHOGEBqfTkPtStX7NTPoRhBKxShMttlx5cFIKlIGB7Pb/JSzjMIfXA8TJp3WWG1EOV+kMvLt
eHGHJK96bmTZ76iR7f4ASRIkkMA2PxqBCZi5iPHgzszgAw/oNs3qAOQ5Rf+vpGPr2M4FIomFQ1LS
ZjnHGu/SHFrnYDZqfHC247hgqZXJJd/eIQ9Khwqgx4xixQO03ZMV+AE9C2a0AQToBTWFuhq8YHyE
2UXmige//raq3igeR3HgjXYSsw4Xf+yoHliqo5z9gH4GALlAB5Aym/Vr+woSx6ODifWaQWS71rVO
Dzi5faMlWftnaChXNukZ4FzXUU2TfBNj7S5U6ajSb1rgPhNh7DCQwjEuVJ6n4FLy3KgI2BhTR0yB
btZid/zrbL6B8cWcQsnldQpfl3JG5vKlXszBIwtsmsbUA7zFmP7HoNRW79miiUhJaIicGQnFWksp
ybHnqwOvzuINm8twszlrFKlrhs7J8bss2BoJ9k3CmRAzr/Jvyptw4EHhwXBZHK+YWJlB8jjXH2VR
GCvcp8KRSa9ndx7xDD1JLxnlsPOBtT3IAe63tje6FQRti9rj0KbVSW/ZMPqvet524ugctXLleVz6
PV7U9lxzIJSFR6MJ9yk20VYRNVmmt+DQvoiqrkQCOfUsfn9gGfc6bYdKLAE317wACMfuhbhvTd/Q
hRIL7AIduNccOZVM0Ab3o3oJMIKRQMtxu6ztcvilm7Gp+y39j7o/pdUDGG8J9CPisXYu3+wYAaKt
i89gtkrGBwJMh7FgN8yAzJWKLsPOgjWf1I6fw7oymv1jtRE6QIPMck2udvxF9FjEP6raTToG5l45
eaZFEG9DaKBZ5f2v4l4KT9XBlPiIxqYjKeda4o51n607uMkGq+WSJosS8LWQ83GMu4ME7AEjpJEn
2U+XHvM0u3Clcg0EUSgooDn7G4/kDaQSwReU/5wS7amWF/cr/+435dsr1QroAvo5TWjIFsvfOOrB
8BoDq+NpALwEnna/+qbM8fPVNuwfkQGzfwnDxX764V5jqj00tgHyY+VCwCGULV/r6lNT3wn8MgiB
Q2m8Wwl7kCLbO5FWAZsZQP0FgX9NY4aoEfjMQEM8JUT8aJ2mG6JCYDJ/nlQ0wxYeiZS2wN5UVLE0
AyS892dPRsLE0oE01/VUUSWSwYCXox+gElMZE6OCOuLQ2je0hvKsCBRUiyH2Vo4XG+VmAJZmON7D
WBJagZ7kVrJ4uJS69TLGjpckoc05JZ2KWpNUYFUPZxwUKusTnsyrJoM/6blei95WWBORpVqdnF7G
A0d2o3VFl7PQoXm79+bqIcPxcxjm1HeGZxfn23gQ/mp7EXijnfzpM0nxA/BqzH6pSpi4Wx4vI+zd
NqdJRDyUX2MqPSGrPquzGk3GkpvEuaaqeWPQDsiSeZcDTm8Oz8IRue6zUYABDHG7Rw5fxL8rhqTv
mR0XcsZkKnuoXLXxHv+uC0tM5cLuxx4T/OJv9bLyaAWdqcGZiWKZScPidwYTiduHPeA+JUVPh5X4
BWlrH3cjoV3WmSTe83riseYzbIajhjmgyxHDGBa22hHOGD1f3sqz8/VYPQyDg6A2oGsP8neNc1Zn
QRh6qMed1ApTA3R9t4bzJyak+mTdOhZF0er28cnbmOzD1T79Vmxu1/SsSdU93/N505hi3b7ab0tn
kd5GLNPJcNjBrmJaIUo3mq/8dCI502ef0BBerfG8Xh0X+oV6LT9JZsfaC5Wm/9ZBEu+A7GoyjR3e
yzRV76fK0bNEuKF37NysAb7Svmrbe/Nj3UWg6kEllxZZ11gVQbjKPakU4famGXX6g2PmZCX9yjzS
1H8COYHdk6Tl8wbwJimtyZ0mnQcfV6h4eTyFBOAu6K1eLWuHAKiNLOqvYSt2hqZuBh4giYng35cC
nv0L+EL6SlPrs67XBh6Im+PdJ+PR/S/i+nccsgHAp6bydUDiOcqQWNZeaTJwdyietZLV6lLzsjGg
JNEJSXqokYrZIJVPyaV03zQ9ed8IjL47QGDkPwVc//OyngSXlaU5Y1exuSsDmZg6yvoBU2KU/9er
3SPTXKxSRKvpS1Q3c5dCjdmFgs5D8rCNWf+yai3HhDSLzk6MJ6m3TDNpT7hdQdSLbcoiQ7lT1l6S
rq6w75GoW5ljjCxwNLH6y2fo3MLk7JAaukWZfqOImoK+uquQX55Kl7F8964cN0oLNHDoJP/dLVAS
HBxZBcjNqbbfIfbuBEjdc3Hm6EmfeZeOZ1FWzHYRla+bQzBZ1bioy48fIec1hEpA/s7udqEfg4Tm
QCBhJC5uAXkws6ja080j9sWsxABa3/VOok/Sl2wP6fk14YCAAK2S7hJaGgEb2Qj8plHnXPE5R0Ii
ePfyZavdHEdJjn3mXaMaCjPAD475frG5vryuNZwuMNWd5I69j+M429rYP9T17zf7T50Y7R2RfvDw
1NkOD69wSIgSA/2XaErMIpF9SIQy/0elStHfuQHIsROoNh5Zusn+j/csXYTnkhQGF3ijXWKTEOSP
LPUHbcNesaAdoyAPZxt2Dv8xu2pg9Fnr1ZKXqgg1Z8J9Ass1Kf5egRUaDVzR7jjoyq5vt96Hu0xm
gXoLYBMum6ORYHDdt9wa+nov6n7rvuLWn/ankbzN90wDrG3dPtlINjuwuL1wPf7Cma92CLMiCIji
x3ZQyRUQKH7Ee+0JAz99pyy6U03lz6iH+7OscYG43NjQgBSOAJJxJi+DfceNHebBS9i4u5ZfSQFr
vPTeBoadvIPaXsRYtTFjBrdNJ/E8OcFXFKzpZxQdJqqkIHv2tTgSE1E0N2Bi5b7ZXDY2lAjmOVvZ
WetHj/55WK20g/YTFNTgxXkAbyNTwj4qKeDgS31rsF/9U3PeD0dXv110llBpv6dDM+35wBevMKZw
lsYWvObf4IHV6a1nKYDU2HLJW6lfDWX+2zXluugMRT1gkdqbQPn75kQsQ2IwNmOasGr7YhXC1FNN
wt+h6gwvcbHW9D1gwUX7DxG0wnHERqSwChRqismVEMRNpzLsZGYsn9WrhG7FDy1flX8lvu2c5Wcg
klPRxnqHgHJP9YTCjEDdOYAo6DqoCDipXYp6Eg61p5CgBJOvz0CxAiUX9/+kNoRQCkEiEwmeMUbW
8vPAuCWTJsEUkjokw+AInBLfHPEQeoL5w1hdneNNuSQAwyvgLTBjBWsQ3Biumif9uizvYam5HwG4
I811+KeIGEzxYKvBLAgmTsG4qBcqRznzx69r0HwXQO6yAswz4062FY4bV3OEaG1Vn7A6ckIYRu/T
8AVq9Tv+6Ayz98zb8h6jYXxQRenmw+cHz22NKFfoCb90dWmDn9lGvNggjqc/6iNVnyYESp/p9jSv
G5Y98B4OhPhFe45rZl6F//B8m4U9qYuzB9dnu5qPIhtDMZZOD2Pi+hjowd13D8hyFOz857mEYG1e
lQD0uaWooo2f6DiI5toW/967BYL1uiRBf28hdaldTwUK5i/EwIqzrIblJvJy6fpzosTWSk07sRO4
ypnlqanSlLhKjP6lEaaHQH1Mj0QXVbN4MZpGRRWqMNiakaw3er1oXolwYSBdmCgGdrvH3EWXc95T
cKZ3f1LRyLboVwjIBBbRiW4NLbXF8dthzf7+SmiPjvVVr2QfloTJl/epFzrLqKmj8nCBmyLVTIw+
PU4/u8FM4MCuQzsFe5vX5xR/xu+8gEnJYVlgWdxlzVZSZifLcnx1gZFGO4VcDeR8bjHgzZchk8Bd
bsbM8dteH4GptZd8q5ny+f9ip1q3qatX1qRk5odA8jhco/GnnfYF5JZyc1kzWPQ821WbF8gNfu4O
fxPL68qcUbbW0flOT5qTHz7AhB6fORMjKAuqIc2KUjqIRxEilxZNTL6CHQD6OjSVaxcOxty5BcTD
UdBjY+ibGv5NqtiGixT8SfTOxdnrorSW69YWvhSvW9/Xg0k/NEWpOpkEQ+DX+uQHj/OlygghR5s9
uXQ1jPagbhQySbBBsIidNEYh865urEJWQ81C8wiV4MbjGofJOE6Kuc7gLAQJPNAfXcXxuDPWF34W
WvVZsDdJYdLsKA55VOYXBYgUqYLRLTfTybXal7h2J7cszju1qGr+OX8U77gcCq1nae/gZdNyWAa2
1jjyZdAalEBYS9wQYTEaf0N6ecx0sty5Re31fZ7cr5zoLXz9qzUubmgCbM7YjU2ZJZMlHOP4EicY
h5zb1aQaLZeMTvuFj8Bkv3C3DJfSmVcUYExuPzUqdUFiH4DHW7aj/ZtVWE8WoImpbYVQhveXETs/
lIstxCGP+t4S9exEZ0KReeqYjl70KIEHbuG55ZfMMGzga1Ai36dy7hgWaw6+CweWVrght29/F22Y
ODQ7Fag3/oRg0RhEUIs+9lxx++KoDN4tQWyfaaFCqKxDniQ6ccn3WpQLSN+Ke+m2BX8NXGWYDmn5
PC9Xi1lu4H6+intD4pP0hMmT90Vb5gE7oAygBMLdytUbfwoXOy65QUrVGqwyfnWRQ86oOPDu60+Y
hhQgfSXdDi5VSgbbXlffaPDRBGWCg/fBf/UusLPsG8AtDe5y/yv/h3PSh/X+zZ86i6vn7sfcOFl2
M96/XwixTmv17eLcwLMFpbutVUdurMOWBCogQ4XDeanmjbOU+J/qBA+nrEqygzKoKGR1tfbIBDWZ
4YPecmgqUUePzbHpjBnvaehDucw2ekRKY2CAsxTd0GHRJsw5JPvvALOLufi77CkCJ9ARCTxWfJ4u
zV3PWSm6PpAncmB4pjnjfd97KcBn1ISN3bWFdgIlllRZiDkxpJnuq8rm7O9upPpcntgs5D/QSa1T
6/H/PS4nHeyFbh/5KciuO440+0xr6cAqQjOhlQ4ARnlQmeEm9Sll6D7EcQ3rWKVEG38uk3Oiup1O
ZbvzkLfQv3BjpUklOBX6QvpehTEzh5xs2N9LF/u8ws+bkAs3b2hcbxZ1/9Ro3bgjsN9S54GO5MLL
MK97CtAd5LArBKuaNBkHAEej/YW1CJy7XA4qIuYoiH5byaYhOpRlCOXz8qRlwa1dZpvH7xZxFVH2
VN5lhOEpNZ6Xj68EIx9Q+gnoK5LVtKmdz2bDhzAAK9MI1Z9XKyrD5bRHdENUGEX4ORj+3jXnGcTp
nKd0edc0OXsMQfqK0fOXiHnxqc+DH1+jiBpBxalFbZxmb1+GFdlg4PPJPGDpz/9JxpQSnvrOMWjz
e1wheBXbZM5Fzahob90W4ugjUon3x/vB2r1txcqBuDBJBxvtgdJeV+Si/QbbjEASYxS8S/zmF55S
hheTvBMaUurfvCH+odR3RnDYCXpAGbWhqH86rFIh18mCkjXGrOsLFJYGszYX4l1+4mizAlI6khxH
Gg9IFrLbzbW3ZG1ev9pnI6NEIky3hQppwwOO8EMNnc1JZ1egx1D69BiEOYpHdi4d/M6MvSDWM3vT
rmkAYkMKvTH5COKWVQXwPeX70O2Z8yKbllp+vA3w0tSb7ViBIVNmcrPr3zN18cczmmRJq9jEb8dq
xhXEEsVg/jOwAqZ/QTfXWhxk2vMCXglJciKj4+yiJZKAiSOOctYeKk3Imi4AtKKZ7vUE70jkX60u
Usx57lMEaYZTjFOoGKEC9jlcRt7CtRPwrNeBwWs4VyLMY78XCLR9/I6RnxKx9WO+uwzTekj/xhTa
3/fXeTOnKNAUIRD3w8l0dkQzRy7bclafOKv0e4ExsdGTW6Mf0TOeX2jYx/iGNdusZi3DaBaBW83j
YnXpP0Ot4qTqtZiQRRq9Eu1LH+Rih15/3ohxH7aDq2Ue0jkeMS3Rd1Zbl8k7eoSEpu0c16go2y2u
kS+sIdFCzIMK17NVk6GCGHg0l8e7URKWqLZwE8xH13i12Xqzm9wwNFKvby4owFDPXukY7dwgLWOi
zEEHDcESHpsDYjf61VlUI0BzhN4/DBQXBO7wOerkSDziutwA25ykYMfLPeyCoqtSRbN99kbXZLTo
lL0J7BSz3/5NIAnPPwYFBZppttsT/hC7v6aR3GQ9wfVmNkZ1EI0h3cf1WTJobVb5CD/feNbVqYYF
nGYeBy0I70MYxfzfNNvP+XSJ6iEPcG7wduL20T1iELFsgD7atbNRL6La+POZ+qb5W3AMQGGRefwU
5KCv1duU8Pruerhf0q+DEBwc505DRGkF86d3yu+G0qcLcPchPvnEjR+PXZZ2KvGd4kZ3GQ8Fkwwt
r9BCJkLqBjoqxqAwk/TamHpgrft9gP4oM4uvbQFAIKTwB65E+YWYXIUAxBY52rgtzLU+SwbhBcoQ
5FycTieBupbrIJtshNGlq6VRR0U+rCdjH+ZCmO8SrW2I7UA2HkMkTb4O6eS776PfnhcGhP3piojC
Qsv67hIlHmIWDt58ELcdbl+uzwWOAm3mURWWrcmsGFBJdkj2/DJIKKH5Sf0eu33nsJKZb5VIWJEn
GW8LFPVpe006voyNZOkApM1kQGNWiRwSK2fofRCmLTX3pqlqeq6sYvT/svzXVPek7yVeI4axBD6e
1C3pM+AxVcizX/vKGytCGotB+wHVJUPHE8dbreCE340Wx8uiNHtO1xiZOz5doA919yb1LBxZmyta
n0NcsPaVlkiykiuDduxuLvRNhNzoBjE6pgVU5cx0AQGDxu4vna9UjjRfdtV+NT3oGOkT/oppE34S
TXgYnkI7+qBDz/nVFpPpNNivkLozpzYedFwVDR7XTHXM+tbMQk9wpLHOIXheznmTdP0UWVywKU47
rsPYTnl2OQ5IrnDXepThbDCh2ofQD2EaKe6F3uhjU8Jjq2qkBy78Kjd1yNHL2HOWtGWSs7KAN707
F4NUDk57aiQP9p+kEf5qv6RiUyUpp4t4O/t3mfQ60gnzLACpuzHzhQ4NuolqD9I/LTAvSyn2reWf
9ou4m+FZGhIe4VErvMkDNaiMgwMDyP7hYEben4b2//AA4lQv9mfE9kDT8p6U2hx1ldE25U75/ev+
OxeECOH0dgSB3KvzO2IW/18Qn6/eag24qWujiFDYjDfk5291QPLIEOtWfmNgy6rUgDNjFrJ7bN/H
ha5yIZzcmibE9IvzfOSPYMhw0McNXaMypns9ik9kFgcymslvCXnWE8Vi2P6z3tUUbUUc+w2oGndQ
zMT9ZJU+aZSHGUcaM7O98E/jPIB4kkvGosfXxDDpFZaMhW6DKnibUKURlqhayniKNWNEfwGiNy/4
wxQVETpnPNMh36CB2snlYF4SBEvFYwZ6AK2t3EsBgu+MbIa0MdWvi58bZ0mt1+wjpSksY/2kNsVT
GJJ3dOGPqX6I2ha1vSJ3jfNqk4Qa+PBfol0B6xWcv3Cln58b3gUkIo7UM3yLZgGyKp7uWWFORzGq
RQyYxWQYM8aQFu4U5Wy0qiCcq+Bqra07tdZLek1/9inA8PFhsRE+pT8by37Ie6BE/Ds2XxLtvhdP
hw/NfRJNFi8jqJscl1cstyIsl0mTJuN+aLMUSsHQkZAugSB2kj9VuRpFgLZQnEeCLbsGRgu6sKw8
5wGRIIthmriLs0lyipWqhwhMhGiTzudPKsDWgj+ncO/WU4RbLYIdPe9Zih6DIcG4kFkzxubu8IfS
TfNmbKDQsaju9ej9+BNqv+buch+pXGyAUlaNOEpBVD9EvfsujDRB5qlaIj9cCRX57ac3QJcJl5qK
Izs9VU4WPIYQIYI+HwKeYbXXz47PT2LsgttiEyFKdPr1mHtXyAJqSRYzNYZcD9Mx9TUXSfYVW/8C
qLL62YBNHgmij6wOXzOsOh/xEIUR2HOgd6ppbfe5m5gt6/P+Uog///lSjsrSzMOiL8w8B4eibhzt
zXQC02hkHs56XsmZ7j6mJqotA2cTF2ENrVMrYmVKmCbLSklYGSK5ct/R+p0iIRpAy7KbH7zl3Dtu
4elf68k0BTaVUrD37TnW+zZtmVowaDuHzulHKbnJ0BnaeGgNrYItvytC2/ZQa+HX3PscRo0b1Luc
WPlbek/G4W+sr0+ZrWHIzz66AnmrMbSZCI7kboSmURKlQrYBjgFMJW6+HLZO/jgMYMcK5DN+vd9V
mn9MuUSpz1tWdnlB6JHVwiP6c3Q0HoBE7eTI1Uc6oimIBEdZGldpIajM8TA4UIoXMlEfPX5gLgPh
iAGJ1DOX26mrX5auX8osjq8XivWqfnjwkdgli4NDRcsgFdEXznufk2Q/36PnrSwf1xbmh9Qavv9b
dD5YeNSubFGTUUheYtuOfESLSUA/HEZA4k4MLnQpG4VmJjzbYZAvGsZcnNwyQI4sugncIj6rGz2b
2UcfzoeQPRJz6uFUWeEE7WSa9B8JwNUuhA5f2TUHmPEbPT1tjVltNvHYdgo9cZ+x6kx/4mk1bfZW
96ZMkrqNRvq+98BnokUttdE/g2DGZ8epi3fFNZqPnErKi6QSqGdmg5g7er2yzTSkutehVcHn1EDD
c3IckOLssMTdj1t4f/iH1Q2eXQTYkJmAbeTDFzFdowtlqqqWH7UgQIh5ICggcQdca1MuhR0NDRmD
Ho9pG7qZuOD5T4kR9VCHgWoJ82WjVXsJiRcmx5zKBaZGXidkP50ZG+KwDmY3IVJzhappeueTuCay
L7QICouUs7zLBHW0TxclExmld0oYkuvxzhYs4jU/cFCE5H6TQjB5LhjUCWvr95PpwKhOQLsYMIBp
y2j9hHGWW0GfebJ92r3S+VereE9+zQ7CwzkmbCP5X5qrUPoz4qTWO+9zyEySF3BdYF+OEksXroaG
ZaLZmjA8U/po/oqSz4vL0sYv08XhSMxWzxCdd/SBmatoaFy7nWneHU2aiWBgS4VOK1CJmnMVJM2G
N6KRNKcus6Nyp05k0XZq69w0f7CcTBTtoVMiEDoprcu6IGLH4280f4Dt7Y91DkNWn5g3TMce2oWj
kViKHQJdyp6wa5d2N4I5kvQVlwFM94wgph3d581ntHZz787norDyElpkKQRp3enRpZiiMHSqyNQw
LVf4Ul7u785paaoW+e5G/KU/p1fiUE6k0lJjjq7lrywKQm252beLlPOxfkVAYcbEHHaRs89nX0tU
0NwRD8lukQx3LrN37s4/s88U6IHzlL/owcEaJY3Z1sZWuw7p8B+n+AwSfaBIA9HVnkHYnhh04+A1
NLmT41/GwAoZ5BPwfDDRiau9wiqftirmSpeAV38xKI1iho2jEZONlKYtE46hu2G0gdCAEmBTwC5D
3zTKsYIUAY3Sbrc/g99RsNA1WnudFTNKvzFoXVqzRfTe+O9X2BK2dlZ+HKQkVHvegl+/YMfzVpd0
U4OQa6+S8+Itc9lrEL/w0P6JjmwVf64z01R6/NAIQyfD92DiSn01h81fB0a/uU05Kv0s+jyzdeqH
Hndrb7fhmn/M5eBBXVQYagE6oSe58VoCUnvpce/3B7ADCfcim8dPZFosvCU19+MfN3JgayCd9NH0
kmR84Qa5XErzBJ4JKgoxKGVElmk4xNeqgmECoaBoIYWESD27HiBxpvcj67F3zQyZNSLwGJ81EtZE
Uo0rZvB0aLyacJ95FtJ0GmJQJvdyRsZHcA59p9kThgw4igOERXMu/qmZ30mOWot77sO8sfXFZHyv
8+0QLlYJ7tNR6QinuN7L/tXNj8TfB0iDBXql3sa7w3y+BsHlcF5nZ/EJNQ2t0W67voP4GekQGLWA
uCrePlQhRdRE0zOa89sYg1PY3Mx/U/KaQDGmi8a6fxBrTcnhRyBGNfIiUwuWuKzdSbITo2PBpk0k
OXS1bdLTdf5W3hfSQ7nk3C8zVeAxsb25SFnLQn/jhV0p7rIcAnTy7tK4h+4Yr7PceFf+wdAVavqz
Hh28YilCI0U2mzn9McupY2ERYEg28rb10XcjYe4IvNTpSG7hdVgjmsuweDOebweVYeRHWNDLKohh
QeB8VEfukwE2RLskpzM2aDAI7JejSe7NFkL8fIlUR5A6Jn1CygS1vrH8oeyy3RIYnsebw1LWe6Gb
P7dfK/rScG3Y3vwbjwYDuhZx+vsfWQmdsQvYDctb8v/U7t9riALWYtSh0ogZOfesytcFd9u8Ixp+
KTUVrEgLvk549ryfJFI4etY3mNfOahuIuEOOt56a2nHY5Mqg0QTPdXYovkdFGDALI5nbUGp3t0Gn
1ZyzJ4PNqM3loYjlWZC6RKMPvauXxJoSZrGwOzsKn9AYQOiBQMYA51ZUCXOSzCiMOAgZYVUQOiXe
/rw6pLLp/SQ66NMh/zu7/S0SnRntOkRAgEd+FYvgNprFRhK5pOz1a/48oVzP+s39fIev+L1M+a0w
PkVtq5alNGC2rzQnN4PyFpcdGTFIynw0+JvEzFu3ogZCkvrEaGUFbxWviSS0EqtMHC/cfrPGEmof
xtsGcTGZyybZGShckxaxa5givcBZtVhVZmTMg0mPE0qpsLg5ye3fT6LY9mGg5UC8Atf8f3QGLBVL
n/iC4F+ptWtZDUyorYXk3ZdbTcCJL8VKzgEwntYX+a51JF54iI6aDXJUp0wDa5qZTPp8j9L+GOI1
9YCbDX9CwcqXsCncHUTb9JqUO2ADUfCXvrUjHFhxfwbMt2pOkHDHf9Ez9cMGgLFmg3O7sAEsxEeK
4cv9bInrjr3JKmTRXJ8MYIXfMF0HOzOcRIth0fpl+600TgJP49IdOE89pjKw3K/cmLCP/hH2Uwfr
sMCLqCqGX7zYnf65joCRUF5Of0M0pAzvs1/FCQIb59jW5heqL98ZzoFvBY5sGolDRed/ZQn9itPT
jSfJM5loVRfezTRVKvgZ5KhQZySc4GNocGzqGzfu6ra+O7YUG7/A54PVMSh23xaUbS9OM+h51h4t
Bey7cw2t8cF+f7bdW+RVhoigNLbHcn26Xf5XPNOx/sNwIOLGdGTLxrEtBgIK4K1MEZxv16FjNL/P
DE6GTDmBbCvirl5E76U9PD6rF2TL73p5jdsgxXAz0txTcTLHoJm20un51vWaZrQ7+VYZkvv7EzSo
dDVFbYGYVs9yG+j/P2Xf2UL2zrCqGmACM8F4d/w4eDUBVSmdGpKyKCLZPouD/ms/M/OfVIoHGmEl
arRuHjZUvY0Omyvaitx829xDSjLGflogwi4sqLUANXS7AH/hF6+5px2dZi/nn9ADPiDQ2GMDQBe0
cqyrIm98cxGhVuJz/E7eazYlPz3GPuSWxLrITiXhnZCgL7huVcrFjRBKarmUpJ2zuk6ic2KdA6O4
FkN1tUYsTw9d3+iIwitakdsTBDxG1QRfSnEXbNBgrmGJz4NWBN5XLFHjERpgmKJi4ESy7+lQfb79
Z4cdjRszywek6K47jeUEoHXIHZ0IZDO4HkZFUd/nz1XrbIJgmIVbojGvwWygUiARweaZUASp9FVB
mGcG+tMjlE1BPqUdQOkQ4nMmHVnduRbh8oUU2732rPnvyG8PG5IihE8DZidsBNLYLjym2Qv84VRt
kkM1+R/TlcnrO+q7MF135dZBL+C8lU7qao8fPZSUFy9b7+5yeR/BWRs00lKudDIEpVHRN/zw10j6
tWSF/NvItePKV37rKBklTlvTZwBbhzp0Wn1KcXUY4pKevP4yA3DmGZJqyx5LuC4k5UkgL5z9mk4m
2uMZwasYrArLVMt443+5pXJQ4O8kux4Vuc16VmdUiwpJlcUSQydgwKiAqrJu+FZ14Wh+fnK0hEDH
V7sliGJ1LvjI/5WiqR5XeNIia9MVR9kMvYIfjUTQbBygdpgSrlrUvx279gfBM/8FZGd6TU6TEGNp
x9Jm2g/pn+krQ8AV+9SybCepmIQ7cghu5R0h/I+l8Ru6/XqcUn3huY1BO/7VzWTzHXwIKrZqUBM0
inuRPpVxt/mXKzWeTjS+A+dVdGYLk6+PyFQZxKL3iw26+Rp+ruRVkjWFembi6yZxAYVEagFqg0BB
+GtgKxid2adTB7WYfvGBFXjZyQRyvpQaEidzu6U50uT4IBx/Qyud8JbSVrpCwNbIvqoY+5og8vEB
GKwyss/QzkepvwzNdTTM5Vfncb859P5MrYFJVY7F7SC1OieI+uwb0G4eSfHtj2S+O7IQp0TBG0Tn
CBA3yhj2ZStvtYTaxhyvVf3zMiuddtyol1URHyr0KI5w1oY+96YitLIC8yZq2BdOo1K61v3vgYhe
ltz7uZRKVYfFZ5ErfKBstZvgkBM2ae+Oqsf8p+77EJ/VjxDTdYh/EJZOR4EXwFH7UMG2MBjDPgDG
Ihct2jJUU10DaiS92UsLq8XlpPa5abGpFrFcn+gYP/pW5ynp2+Eusa/5Jtts7ct6pKdt+XlBgn+L
COyUCO4PSiCGidEBaOKafzOoGgz+QY6Lw4V+DEjZ5ny6zPUSFqIYOUpn+UJBiA+fWHVh943HgP1F
CR19lCi5/TCKyDdLtZgixqlhopS9V9CLmALA7q7Pn7LYkMnyCuFEV8SkMF6PXnRLdPNV204z3X9z
ROTYd5+LLhln9oTKUdIFHQT4bLFejQPH5JA5FCEBf4tWKz5V8GDhFWqMJkW1hgi9CnWKNWLy8caO
eM8dN2eNJSU/K6ZDVLZ9K7ImSZ0qWX9kVY6hKcgwcT5Q/SY8jqiTICSE9QmqmWEXCMUVhtu3qtBZ
hW4zECIkwBlAzUS5d2shBtw/JnP7z8Wlre0j/wTgwc7qcEheTENvqsNp9AHIsNt2km0XjzLGESBA
zOAWFB1GkT5jvrPUKTZaxUjU6zQJUp19pKadhcZwSr7VwZBm3tCcRTVVyoHdatUGV2Hz1HbD/uvZ
cVx0ANlk8U60uRSKx/u89Gpoc+SjQEQrDrztz93zgfC9AtUVJ69/mWNUPXclQ2EV2R/FCoPwhLCx
6fVYotlZBKQSM72rJx/g+s+T0WsbgnXyk186zm1XnWkJzzGgV6f6pMkIRM6Vc4ZJaFAiCRnj1XCi
kaf6rzuR0UhelxGMBMSjAQ6hruVxiHkDCaC+mbTYPp+qlyDeqbQw/bTg97XAzWIND660BnBQZtTw
HaYrLt+tiYownCw4zdTDng3xmOSO22KKimKaN5dOjVfB4NSlwkCnZSqYUh3RMjRBNSTND/988+Zq
s/B2TC2JcB3ISdWo9B8sTpF5xikYAClQElBw4SgD1STrJ3FMjVh6bujeiP3O+DODdJnQtDd9BWug
v5UmR0YFHkCsEDzWwChorqedV2/r3k9LcmRKmf6tQbP1rNuQn5AvS6B+j1vd4i8Mfw1ltdbpbRME
rfWPis1BBdP8BCA/457zLUlbXg8Hmww3SHxKdp7d1U9dK4LVpsRa4dhhF4z+3EruDKKtCJ7NOK9S
rwM3b6huVmkaTD+NMAgG0CpPef3pkTG9NqjNfuiTIyXT2H1sUVf234X0vo2d0psSPahZBRcG5L3b
rgTfDjx/WuhfdG3skBsUDxlojp8ehsmeJ19rdeHE2gvYZ6LnlzS+x4QgxprWi+Kv33ndUhhEWu88
Tv6ndMDB9AAlVPB2RVACPZRQXUITK6qdbeSqPDOej8SVa77ZoI2hRLemK+Xr0vddGptIlzXapBIm
WuqlsGiZFT4hysln4GqktmK7NtX7xXfFVsLnGlo2y/6VPutB9UiGVp2IyoH1Yvy9A76aU+cDnnwI
OBqXnKRHutwxbfOTUMAiJpaVrqgSZyJFUzhypSiYAivVR0UaEX2XezfrgyO2Jya0UjndAJzLG4oG
FiaxcQ+yP2hsN+TIJCM7lBTxxE8yPvW74CvmnxdU68iNmyheBBStECwKGSvO+/X6U+rKpbW56wOI
Q9H4GelmK0cOU4SO4a84FztaZgfL7XCKr5iTvtWwjrXqpxqP1MzgA22yqVp9XD8m/70lRQlQfdQY
iQzS6Yhn6wfaOXStXdPNzIdHnW+jODOynO3vprhj3TcfUEfTBYsLUtutfLSP5G1AHHIzXl83KgKD
Qy5vjPPPK9xbKtAV+rp+xtQaNHwnhw0XMIK9hD8c9S3iQNc2/Ds0zirC4K5BBA9+z+4yh8RL1Lja
CiFKbmw8Q1Va8yptexkIonCfCsymafP+gLUrQCx7pUMZu6MetRTiLPDHhimUDEpcFDpRU/005LwB
kLeo8xsaJjAwr+pIkoyccebQ1F7M0nDISt+oxoxnExFweqiMkyiXRCdsjVxKgauMSB3//TSI7WEZ
YHj2t6pi5CPUGpPHuCvV3BMfSq3pY7mU/tEiT41UavBBM4tMzFWCu5U6ywY2tS855+Sqf/YG1XeE
Q8rFt0XCOpQ86pRMmdbYB5IVni4ym3ouU13dgD15367wynuXbAKkkys2EPQ8T9ZkvqoTtoChx/Zl
Rn9hLuG9VpuB2hVOuQfv6aSGz3FYViFnxoujk0zWLJA4xrC3awGgylbpeujtsN5eRLIO/BZYGJ3x
bMk2xrw3LKqH+BBRKFxhk7B4i3jvgnIJH+aR8mtaJSZg7YoBzKDlJ1dygZZJqDUzg1y86TItIG6r
SGrJu68vLyjeGFeuaHAnghcV2h/tKynbodf0k6V0hM9s7bczNS9BU05V8PsCJ5jsZWn8BoRFg94h
L+9ZEDZjgBWuFEqiEInveimdZxs+drUXHAjR1v949TGrLUiYLG/Xg72WR1pi1LQ2Xi5uExw4TO2O
Lc9ubxWTDVJHNmOUupZschyAtyh+7JhJpchznxRIbVWGKgGun/iVL1pbwHqvXd1CDXU7Mf2mhXWB
aNrmph87+iRIKSCqhXYky8Fc963+94AwVXlo7UEM1EiHDmLss7KtuXjc8xn3tTKhB9VeLkCOvAsW
W4y0oCvQJIgDXE07AsrAFbmHWMfF8iZtmyl9pKDkxTR4cEeojUfwMNvjMJ02DemUIiRrFJvqK/SZ
Rslrt19HD6wr/DSnSH62GSsJWOSQgnqzfUknjN6xIF18jnD+dZHPDQp2RuHNvriIxv6FBr0x++6y
/Unna6H6aIaz7qxaXJcz938Gqzmkcdueo3A1RhgMZHledi2RektUuaEOkUdSqBo9VWKnb9DVN8LP
vQT8x5aaWHuvFiUfFqEzGKdq573I6i0mNYZ/4oRsTOGUl2uuvtitXMq/YSChd07lFMq0v7CYWXdI
4CSAt/tt03HiTfi57rSO/5VKeqo5cyshGSPwJjRqPkfbiz6mwTdiyy1yL57cSAjsiZnv/YPOC7Ii
U9KqGBArio2rla42xeKy7vP7uaHHhtFSbFEzi0F4ZntrHMJSqOtGWPyrujvW4XrlRBfEFTPtNdJY
CLR4Qw7IdNw74zFZNT59LTgrT0SUxrDH2R+qxPYkmuJ8OWYi0D921zYjHY072hpjclWH7/gaUNHS
FQd2zZCVHLZsUqLRvJganaJlFVr3pyA2sO2mGaTpCpjp3KWP9vhLhQwTwNKxpEkTZ/+tFEkn7vgh
CO6yU3eW67PMYfYAlir9MldrW8gpu9cE3Z0eZ3DNoYVn2MeZn6DRrMr1A257nhCp9drhQDl85qUA
kFXpOVPHgR8F7fzPOrWX92f06G6wR19c6feWOiyB4XLaDnLZ2hnMNGGr1XqYSUesc1NkS+ta4Wes
OLvk9o530ch04TlOqE7Gae0j20olU9mGDwuUppzllMLlfL9KESS+wzpI1BuMlSuUFUQOBZk0Vdlu
WtyrGGxWA3cJvAYRosVwX4EWLuFOKsgwP/EQzPk16Avyz2WjNFIg08nVhaPL7Uj0C+xxYHdSSGib
U6XUCbGuM1Xunm8XTMEz1Jbe4e336ddr34jLRFQ1AIoV+dUjrNRZP2h0kKxguJw4mWIEPOVCjopy
9O9FMVDUf9fKerGfdOqP/q8O+JIKe/SuU1FHVIE+SW/oyl0Lsit/0dM1pzIEDcN208tFK4QqnMYc
SdMQ+UZjhgEfuInkSsy+yEUhqlA0SVNUpCe9uksHhTTDvgM8Wqy3pyolNQ81ng60/U/CrJ/Kyit0
Orleesqt2ZjU3H+DqYHBTsSNciJE/MgSSUiPMepNNwACJY4gcDm1JK0SjbwOfkwNCPyC345u62q0
I9+pY8IHdvNF2MgqQ7V4wlApq3DXP40z+DhWfBOxkNwoXyZKFrOFJMhtpmxooRQ5oS7tDY8gS3U5
acrjXfeeIi4QUC4N7QLPxERB79vSS2GvJGNvzbGxg1Ef6uc5kiXWwVV3tnCRNEjJQVPdiY+3xoQE
8j4KYklYJ+utuw43siVRtK8gsUAKFEgso0kVzIC5WKF4FzZUaccgFlsSwF4XURTKG0FZHD4W5Nb0
gTD5bnesP9Sm+zEJUmzDVIgsGdLOJCWG8LbPhhPt8bci1+0CYlUdoe/5R4EshOsiNcEkKtCDIbzL
1MhwlJco+q1dokJdLHBr4VrpFQLwjRfS7ll/hJXDo+56j8M0UkOGcNyYGp84AdhjT/hHivrC3K3Z
1Uag38+yrkGiHUUYevMdxyOqcf5gaXbMVAMIIpOB8IuNkqnN32Y2IZyt6J4Is9Ig9kGLF5CEeChv
4+FmzoxotZIvnY1N52/rpVIlAhB/ZyVi/eB4qALvhhAL3NvZTUluMd6pxtNNaUoK6WMmqrOgUXlv
VsmJQ8wf2MFo0VdjoJM3qkCz7ikxrE5z9mWiuwv+K8hyS1qPMOZdx32JxfWH6200lTE2g2EVW8tT
A8z7zRcnyxsaC9x5oTm0ARyL3DN2fwl6JvhqBUsW8P63x6FJj6t4ZteB0lSFnHeC986Yqp95ruNw
xaKiC1QujACeixeRUlzsC6ZxkL7hhcF6vZvpwy+c6E+i0oqb/V6VHi458z6P1mxR8DN7A644IOPn
+ps4FJ/KY5DOnpDE6r0FM7rJLvCuoO333RS5ZFTeRhzvddq345Stu0a6rutoC1CKtmUWo/kC/SEy
h1Q1FZoIV8LVFl3eRt4aVo1pAps6kOyIypNfED8q7xo1XfZMEqIVfBSAhbbNyysCe4WMLrAt41da
lBbiE3U9LgdohTAJyley2+PK5eYJ1cjOVD81Sw+o7Z/d/jZ+BguSC7jGMRev5iwR8k+u7tIk0qCj
4tEdN6bPBYMjx4UCNdIWvSOcyGIL0eOYrELuDpaiyEkATMYXxNLOpFaY1xSIo5i73cuZcEGn98ix
I4EIkqDeiY265y7wDJajl3gwhP/GpxcUnW89hpqO9eqFNvxbE59uv8hKKR14fr5RtDjYLYOhOmfI
cywUHazVOQQsV/78d0rip0SrP0Jz6Euqkfsup1mvLbpDonL44Q17sIpxBz7gd96RW3/WB8BuyUFY
4xrolzQwL8rRULyk+dQsL6XZ4p5hf6gEEVpgIhjk+XpI1ep81ky3nGZrqNvuHGQUKxHd/sm9T1Eu
pM4uRcRVfgGPn//hlvxo2QjsztTa8Vm2El8NIq6OvvDSdqPBQ9FBznOuTTt/XC3xPSvwGrizJBQM
3JYo+NnL9cBckh5GOjIUriNADEiyT3rmPeHHDvWBmQpHDH0Buepc6mQqqOuOcXTElR4PmmzmkEYy
3YRYRZFBVxOnkS8HPW4P9EDM6r6rKhsSfinNMndPTxTUYIFA3nXVRPDa6cXihaO4jw8vdQlLApKu
qrc1paBF8mNbPginRHLo4zXJm6LrINas27AxJwXZhpYUGe9HneYdfZTvl6iL2iNI19eFyf+Bqveo
KRxjmfvqgfUDiHczmhUksOeVW4R/q8J0F53VmZ63Er9ybtate80iHBMiOYNu6BvdllkrhRnHAZGe
HtWT4qFCokyVhtkG8WvK8Bobcs2eWy1dS6Bnm09pr8sq7VOaF7Ucipl1bRdWXBMNhYOxRceByh09
luAY5/qlWaoPpqvcvtIUamfZhyKhG4xoy+43zO7xFGijmR9v/3fMnH33WC6tIbj3SixJRvx3kjt+
ifI87NmiytJxnwIVdESIhiLWmxhspdEzg3E4A0vQC5PKyjZEBlHZAJNYzpfBdHwsNZe9AAbH+86R
jAh4+Hv0NQjeePudgO9YKI4xsrMGKvO82OLW1CupbPjMTyGTJ9Jci2OdiCFIhl9N8pemkplpJKq1
t1iJnDeXISfPz6NrZ0KneDwAKVmonthBWOlG3oNn1nY/uj0A8H/d8rKRYiVXyAaqKnbPbX3ZpWM5
cg42iQnlKBggtG62CTYpdvuDqmDZ++bRYw9XhpPmMnQqzMynC3lJQpRzRjDvbzv+WXBLNGLTgYy2
PlHj+eruVqjpYmyRwBiq2JZQaz/Gi+80/tmDab77tO1ZY08Nde5Dhif55U6bEkVgDfZ/17ldvK29
fwSMsZ45zzRB36EEqICkBHrWdL/GOEkFnXDVKSaQy9kJ3rdtmzSq26KUDqxNgjNqhhg1VfLDfHbu
8muroBkIRaLc6dQ/v/zbgxCZ1KPsxpCPwDij+KWeSR/joK/bB5mheOiS81MHmACAnMb7f2gfu4xj
eBR8OCu8LxuCxVf7ZTOyPhAwYfbFLQrYE69s8fanr9Bn2av5hjrGbHXl5FRuZ7/BgX5LOPo4usow
S6efOSEGn/DgPYy99c93bvrtLMwqsClz4UM93jKdXTQ57sx7CCJRegg2nY7uVRPJCtcax3HtGnn4
BOHugYzkW57kfYZF+gSgrLiJr3U1OdIDVUgYVZkUsYbesmhSkjAo211R6z7whBMFweT/Odh5FYXj
FnmhCXf3htchPFBuI5nGWI0suSqEbjQ3Is19KtK8FVKxIigtW/gbXt+WhfjTUUTnAa+ufMzVySeS
GIBc6Q3kt3O2gX2/NFeL2xIGm5+QCoa+dVrN6PTURfYNr0yyWaa32j63lldNul3/MoSR/BhdhK5y
epyVRZWErjpBeLH82iS/gGZpRNRl+QVBgj1bMA2aQq+iTuydsEaiRvnOonV8leoPf06ejvCDYq64
/ckhRZZ8jjo5rfdHd7K75XbxC1KT8gIpZnrkba4ROxVVqv9G/fYoMT8dY1oCahq8ABV8i9vaBpux
mbGEQvKt+40n9VambpVB9VroYF+zz/CPOsVIzcUVO9psE8eFctz7xnNgbk3Q5u+X8Eweb+/2n/3h
RP855QR88jo76QVJBnD81AiY4Sak5g+/H9qtuAy48dXQmtdybUMJzdlvkSJUcE/D0/YjMaQqV663
F6JHHLO5SskC95UNkiUAk1JnKCdTlnNtid3ZSpVOtTzJAjmhsfotrhkgsTPbZxx+ApMTDYNTpb2a
VYpsA9CKrjc5XEw5FGhCLLGRqxLanUKvsVDNjrsyE/fVXycwsk3hHEhIGdKLPJqkYtv9FoDZ+2Zd
5Fhsg0rUwbf8e9ZPJocEKDz1myhcFM692EKpWHus1+mzf6AVkDxV2kZzGYRT/ZXBDQooovZ0R1eT
RuEe8fFO4U0iapsax9VjbL+wVUCTHSBeBhsqRMHSIQALQaOQe7B3G/UP1ifA1q2ZTz8LCmJA8g8D
1Brz3hrRsMmQ1lh8JRUoQdX8Mo1U2IIrD/4xBwHzhb5PLq4d/lwak0RdtJtZ4E0bJRdf81vdXois
LhpV3tzaFNHJJVFgQK4D+cY92m+QoGf1mxzc8B+nsfmOfuHncs+OIPocCr+hO3qyDteFgnKCAR4x
/S+KHrefbQjAb0lQOzWKN410n8csjNtvbN0as+UlB5amy+zxDFazaw1IVgZbBmIvMqYAiUXEAFFJ
H5EDqUVd9O7+zwkVf0Lc75kIaVgGokpg2scdgtGrMROUpgiAzxPAtXjhJi986tXp9Q3q0oQ8If84
u9vuq6b3Pufc1jmF/qd6eBDIJs3z6puaY3w0uNIRDrZ5OivI8NGHcy0t8lvIVSYW+xCdM3+xYl9d
rdU0hJbUGrSnG8hpDvzplECkA+yi0NT0I+7BEnjIl3OqIYjjnidMTffs2J62tQzJPoovFyDlN0av
UqHV1dFXqQPT9fZXwggtnZe84cHL/jIdE/tF6+FU1EoTBM8+660F+dwGTajq7lqNDjMWP+UMFZsm
onyaFp9KWdQxillb0HK/qSZVXb+AGrKHSmb2/geEyCDeydyeyeinRO8tOUXHyx3gkWgeiiH6JEeq
ko2gepyOSF1v10H5a5Iw76TqY6uPfxF5DXpHHmUoVeabJYwbH/rVSCO9gpbmuP05qzdakwhUCIYt
X3aiT2jXkZrdbkDQlgJFO4wvxs6QF6YBZy42eacO55RWBvsby96KmCVW7StyaGX44hdszHDCSAGa
QChTxp5qRl/uwNVZMPGvG0NHUODS6kNpbAMnjxjdmmzfLdvCB8M2XgBTpclAIWDmt8F7rhSUXj9H
fU3jnEDOQWVF7Ap6MykMrDaqDl/BBdiGtzIyJcvp6tvN17+NN1Co7zJcB/KLIxPk87jcewKYab2i
/yz8tQ31IpRhjCNeAbaySmy9Hk4uQg3fwjlRuAu6GUl2YvVRkWeVfHTJiHQe9xxUcZLshzrR2VMg
syTFrBdFUAYyIh4EhAICVl40MWetB47zxVvFEGtX4JQNcGuZnX09rA472mtDhHMfnuyPYQ2txId5
69uX/rO/orLOUWBXW6yfmG4gQggoR2LWc+k91KWQ3tw2tLwMyuJH4IDBzCBrvkyJRrULZntS70vy
XbstT4Vv0ATkTnKyqptqETb6lAEtfDrkLCgZZGVfHO1l92MyhzDpWZ6EA1TxP7+FA2zGxPw1R2ZC
m4UN9B33yj6GXzlZRH4QXBPZEfSD8w8aaK2FGKoRrWU6ELw97ipRo+xYGXz5WHan9xb5Zbsvifhs
d4cxDfZkiPJSDMAmo+cSmW2Suhg0s8cHLD4A9eGCFjPCZ7I060R41pi4KGDR/Axmsogmx/VzwucT
BcR3HsK4JZ30inm0PNhLO9jk9KpT4JfAQYHRPEUVcQzhoZ2RK5qZCkWNaPWL2HhFfYZ3tTgoqiKM
Gl3cis4u3/ivbZIu4kylf9P25nhafppi2RXZV4gqGcgRggN1CDLJaAt6Wgnbzpd8aNGoWK/EUK8o
0wUIoOHyJK0V+HpnUn5EmtSUbGBYR05xVryhdyeOO5PnIPNz7HvOp87Mqm4OKvJ3iksW36gssBQm
pTtFIUbQJ5ubGjap515lKNemrsEpfiiOEraxi2UduWYNiYYKHBkBQZqNsvbx5h9Wu2MML3He3qsx
f0cA+NNzxBMXSpii4PrtdcjceDc1AkDoh8PSj0wib3zWJBgsnbj2cu/jpDvlChDWCWaCFWljBLTk
VdU2UeveuTH1gfAwe14B7MmtgrbN5GICz5IUdAnvRtZkKlSRS0X9Qb162/OHSpZ9dQ5S52ndJmrV
DWh3Kv2FanUikruUzfmgPT9v8g3Bh10X45BqOWeoGLCgYybqbI2JIVjz4UouVAO14bN9+Zrx/+y9
xiFW22Lp3mX6RmC+MUbYGXPMv8rOXGJo9QrlseDruSjJF7bwmClJWpMXxkltMQLfTytj8H6ICupV
ePApHxgMvXXTJ3DJSkyIUIXzWjb/RsvKYY53LWRCtw0AWub7y5btMdITcdfsER3oRMSVBCWBB3mb
j6Ko9ZYZdHELKtuX6Car6QtulHgrC4IWtZKiVRvP6MEIlDzlhHWuGdw0RXflCMUBKysk81m8QNJw
6j8pkpJK7n82f0y835IDhFPXWJpXRlCJHNlfzlvN4wBlti2iTcFL+rGimA/sRQLzvBzUiwRbq0N+
Ey7Mlam2SqiTQQjwQl6zd5eHq1nrdrCeDFCd9cjLsPyHnqeeUkVITMd5t0tmFfqt+qheEfqNpyCQ
Wp4ZpeK6QR0qpF1bSQTA4aHiUtxk7+w+eEqgtcbeacCWzDm3uuPcQjxXFoP2yDFSMixGew20Lqbc
LZ+tsSjSLEh4yUBfc2S6WIXj0BiXEs9GOI0Ac+WHfmcJNOfTC01hpdXzr9fO/g2iV711UEwQGtJ7
7NPFWZ0upvXGZnRtnqB+vG6Ud7wGXspKn06LFDN2BOrbLRJACQwNA3Qi+rW7aQ8o3Y7UnE1RcckQ
UD0NJj+dK/3DmxdNSVHqp1NTQuRm9L2RUMpnewBExIQ4EwF+tjm/XH+9kbKCmvRUdCFWd7OWMk8g
sMJYvgkVgoLzfytxV6u3x+zG4X5aXuiRGmt/rIDf29r+3Sa+yTRiTcZeqBYl5wxKwATzFLjaTXuq
l7CBJmNlpfY/HRyccHJ1MO5R2onR2jo7n9d3E9FVH+wj+39UaPVzMbJxDMVsJ/5v4294C/wL6Afo
E5JzLXyZvMgGZAi0YGqzMnDLGd8GUPXiJ2r88novgxKWuTrMz2MZFYkVZIBRCpjanljPmMBX4frL
IrQDU6PeSe9s+9H2hUDyz7cSr7CwMr5K8nSahw/y7ddQHYNHj9ZPAA8WJnP3anO/elYnhgdHY9H9
X6bWRAWPbkzbkrThIyyMWTjJVQF0ZVJcVS9d0P+1oy81oy2rlsfTARmfu1SggVpCTRClLZ5x2ckY
sqyvrRkha18R2nvKDdkFFrJEzGZR2c9c1f3iDWpu5LRy1WEV6bLKRaWMQf2MVICoY2PenVYDmecJ
bhGJaPsSdhG0MKSCDxJdMVFmggLunPbV60ip2k1RbjP8/FmCX5CnWylS1ajvBrxVJXn5ws9Ru32w
oeygjHEwUnPVFdhPRLoNOLtwVP4odeiQr7NjANzTCCS1/NyLFtukyOlBrwWISre08TaSZri1f/Mn
xPEBSbit40jT2SHl+UdcSzo8NGnGUcdVL4aDZc73Jpu28Ue2E0kFwrhG++R5wjcmey16bZU9XKwC
pbv71jbRtemQp6Uu7IyoRU9UJm4QfKJKZVpsCXf1iU3A7mAbrf27FBFGhlgrTtt62YBknLapcCH6
DPJAqj/ZkP0lpy1u2S/pp4ct8L38Q4FU31mg9msKBpNMSRQJGtxqjo3kkOgCO4jKqhDVHuxYzlwh
62yj/4cHcOIAMmmSjP2awocrsSQZ6XInKEdoGN1EwLoIdJM4Z8lEB1faYatw2jV2o6sMA+EAd3Ou
6khZVMv/pD+JkwzWgSBgNB6DkpOexvV87C1g6+WeFdo8Y0mWBWGqXdYzgQxidJpXd0QP8k+7NJ65
74oS87fcRQejG2Nq0wyJxQY2kjyy0ZKY/vngNiT7WKH8/UFLa5bz5tJW98hNMZGUSyoSKMUhORH/
+cWNnq08Fo7plWOLzlwo/BZIu21PbzmJhO/3NkcKoH+s4YtUSkh+aNCMoSGEFdgedoZF7f5tJam4
f4kPBTNw22IN6d5Tp2wfdyE322rvewG4W/XF7yQQETsTJryap2wwClnMTm/Gk63ZfYePd7Oy2OYH
9gcJjJ/+Us3mhVbrUPsEDsHf0G5nuo6nwxYSB5e/fv6AJrR/AOe1s5/JLKtXQXOsxToQuB1Ib8CH
s5qYarFVdcNqsa7+mU82pIlX3YRKFwFnpRObthX7KYhRUfvOFawgHy9yBahhBWmFH3LhYv9iIDxL
HhzIXBCamQdMFAH3pRNq/xXc2OJq2ZloXnHM/sGaaEw1Jq82YxZbEjYMtoURbQ7jPzxS/81klBy4
nKMxbysgzTU4Xt97sFoWTwzkU989bnKrLYzLXgwEM4Tohjf+sakj8oN9++KiaFuYmmcErIfA//HC
bpK8NWZyZE60VNpjtGBRhAjHACkDWCL8yz0VxG3k4yuyoxzocCKLKzdUZQE01q4dIo4aM+MuvEvU
4vW2G5pT7kk4/AKxQdxq/zDS6RSe8FQlMOwJqvGY8/uBHZy/cYGZw5azarEK3q8pNQPF2DcKn+Ny
mViLMdGOr0MlAc08eb4RYSCrSO67HAu/SVUtxcSAPAED+Ffb80lSLucpreAqV2IDpMl8XlmRVMc/
rJUz/VSGTtmRlkPxmZkQb4uH3s/g6ZLBSDZRYqovjl2c3nlOlMdKSIAeh7fwf5fI2hwsHHhndPxk
bfjHEFilGn2gXRLkFJQFfIneULSl9C8ob/8Wbc3iDp1RNXrqgZpalHM80Be4v438CEaBb6qN+uPQ
uq6pTlRrhoH9SxkqamI4/lDzoDkae1uj9kXC7BDIxaioFoPnGaa6AbwPbtTx0JSdkoKpEqSF3F+D
TosCxafuwRYk3AiYMWwJ770nQJH53QyqZ+3LSvRDH6njTj6LRLy4t93yFcz2d7WQcwpRj87PssJF
0HPzDwd7/XFwFMrSY/Jw04dgZZ6yqEq0pf5j/8r3pppbYzOk2r/ud19dKaILuaxca+IdN4RQto7P
XvqGXMCZt59OKIEyxS87B8nVE+hxFE7HckSqfptE1Fp4G+YMMT5I6qnuI4nijaz/19QDIav/a1nm
iJFPW0g0EzbUdRI0ClPcbWq4Glot+39gLa1onSji/mbYKwmHq3nMR4BEPHYV0q7qLf8iQ6gf++g4
ipEURaCGTWGrU6nnHQyYO9q1VEpFlSTqzMDgz9PUfgaHzTPJYzRT+X0sAkZc7p7aIl1XsB+lDKXR
vqoliR15kKo7FoWJxqmJsGeXGvI84wsADiskz1lvBXh820y0689yjaD90uhI206DNmnECJgMBd0W
57udKhNM0Qgmmo1ILcgzLH5A2INvZOPwXvM597tKHDk1OeIheHV3keHuP2JBAA/6htl8vZDnUcdJ
X+KmN4WFL2kve/9bIfR4zF4snh8vs2ovstIXXraQ9rXJnQvNr9BHzivlKnQ9hQiMvIxq9DOtHLPP
DlGhw9iTmBcPi/xGcpoCz45TynbPMXqzdilvOM8E6Zytj9GaKbXMCtVSL0ZlJoDtclDhsqe2/xAW
tkhOnRn37x4ku3jNkHni5F0Ew6121r5VZ1Ib3u3YQGlyHi9boTAjOHIXbGWDqaArJ0DXzwfxBfYG
5Bk8P6EfnsCKGE3KLj3EmLFNNCLoCmxlk/rKjjSgfutiJOC+2F2qOFN7jg7A6LeOMg/xv5QHSxRQ
cL4229ngxmu9wT+ODFkkWeGbwzEJsypbzhom0LcBNPORuch/qszB6oj97akO9GmdXdJZl2pUeqZ1
FRL1OZK2GXhD41Cx3gFHW5kVDUln7FdXaONyqw2okv2+N5KruX1RVfLChK7arYHcMuzt3XiXe5b6
rGhVexfjATTZG1JsJYHnPTV/PNinFm14BXJ/IoSKo3iYA4OXM1jt6bqbzSgmXGWb9OnQvuk1B5Fz
9GIUpi77s8E6RiYmouGDWUKGr6Pikxl2ibAzZAUYZF+pc2CWhwQxKvAdElo7B54OIW40rD8YUXiw
MbHHoTuTB93Vj2D5hylC2mRdujMfYulEer1zc8YqRA/ZE1moxCImwWqr70fymalQ7leJuR7az+fX
/hnE6z69PVN1xpGuUgksGcqOxR5WnQkwgdAWucHrSrrRHlbhAGujwFts/3yLE0hiXcLCZiblmOLW
K8Ikf0lNU1jU1gAcAnzkwTZhSOr8vRbLeCSXyfPrN66LzlFCQS4ZRb2aSD/rmXBoMhWtUfanIIxs
jqsO48lZLZfc4pVOW9a/KiAUHf6mSJ7sJSDRelz/O2XriyAzvPwEe9gQyGwSPBt3cjl7Fn9fZ3Sc
DmlGzNbE2hpFarkwb6wXMbiZU2Wtx0DYj5yeDOgAe/hes051+NmDutHkmCGbspbBoJ90HAHpIxb1
nmXDebDgE6DB4thRbmwg4Cc6PzesKElhOaGljqXqf8uW13OCJlm8qVUC5Z36Bk2DojulCiuDqdcQ
ngvXHMoq2t0ATIiEfr6MlRmXkDc1ekLOxZQzqGZYHbnokUfi5PQzv0bstXsPOqT42F7z7rIIHsZn
nmBuGrWC0LXsX/lYDNsMHstvN82StTDnc7RvLoBIzd6vfVRmNck0P4fu+cbV31oMheG/tnkBfDkU
a7MskaiDKk/pc56mwMGlKIoz5sfqCvmJ1Vfx0UG5alP11PJXnnUSvwROrvNIvWqhTFUiPGO9AVwj
GOD3uDggFiyIuj3Kg8PoOFmP6vMuWEvyNbE7dH167pwi/TjIpaxmJlVTRMj3LGwEZJv/qxi1emNp
lohqzMsyzivms7n0mY3yeUt6U9L8WHQRZYNz7tGM1KLzEU0zEZ1TP5BP3bRz8DhyqUv4aGyGCYHM
vtkMCaTwtSh0oM6XKaam2tObYcmvbUJnqni9/hD82uPdStJ9dMUkawRfGKCck+DtfbSrs/NnzA7w
Juv/OX8FyCdfhZ4gW6cytOdmGCW6W51bFcoWQQ5t+HW1wT8yUjt1zt8swWujC8MfGnTzaCoYilos
VfRPiQoZDK06ykwn7iaMq3YPyYjMf8dYaREi5tBpLpptyaSw90XIYsHgkGlfn1bs0WEyQF1BbA8l
kNCCNYTDggnCo9/iR2vFCfTvDJxSSdiLkGGATIbM23MF2EfRDX2be275kV3s+bdEQvnxfNcpqeD7
Hzk0zTlSPqRbg3Kp8SSAm5WvrWhbEwkABf5watx7xRmOCmwSplBXvGjW2uK97APif7ReGr8+fsvy
XEZdpsannDidZpmg2o4e3OItACvq9URNUKze4X7EeJXaa4uQeKOAShoh/6gcveDarnEItwfLE2HT
+cVzBN29vxt9xEL7zND3Grg0LnaoFtU2Tf0hVeuD00WCTuvkUmN1PPz2DeW+B43HubbkFNn9+q5O
ZJlLY5CNPvh+1o5b/mBspwyo2ipDCn7PSzTxtKu351/d/B1wAFEqPO84tKDCaSxcHHxu6uRNonSn
Z1Oo1U0EvAhSr7psDvZY+nh9GzNJDb0wBkmIpJb5vSBroQmVNNLVY1EN2988oKKl1eGnW9iKFOe+
jWNnO9li8H9hGMr1XobMUYlH3MvArsqlQOLRXoXKX/TQltR4n8QAAskl7pyx5mFyzzQp1myNkfNT
gLiRjOmdilU4aYNQinewAC67yxfSdr5LMarWEynsTKz459VB0dSwD42aCL75vPcEM2pfjjV8+jZI
5JKYP1m9mGc2m3sEx40wWkVZKvuL65S0sjVEptuX/31M80hys+N9+/cWgdIABiLTYf2sfLh3TD2v
nn3pembx8WK/5sXj1abXlH6h5hAe3rM3ey11CzyGNQNm3QA+eTzQf6ODpzYYaYAM/pS1tB3MkLXo
mlPZxEXuqtRfBq3dZPTWysyt39Ni0Yg+zRKkYb5uqO8LuhtVA+rLlKElGn8LHeDtFf+jYr8KF0Rk
pjDOqg/hPtYchyJhJHI+g93uVEZkOy82RXXbssIj2kycH3B1gFMthQVi2gg+VC5Kh57ubNNn9QqM
KKT2LqiBCbsudyjK1T2zuW7lITHVIUVqG84ItgC526bYxPpue8YBihP6JyTZ/pMRhLB/mIOJvqkm
beerJbeRaS09JRN4pKN8ubM2IQeezPv/xi3OL9apLd+RvBnuKnqKjC/nbiDjLwzTyrbn/HrT2nYP
oPzMfqD0K8igU+8AJVzUlcaWd5gdmckTu8xj2Vy2/CiGDuYA02p+NERlH3aCfhrU+iwkVSsJ9Zxb
uxVEFQtOLR9Xtosk/hy6gRXmqSkHsZeyyh1QFcFqdL0c4pEfEICZj08m0ubA2kh3jbs/SxWUpvlD
V5V+9YUJA1Clhl7INEGS0fddWQWRs+FH5STbmOL5wGcVhldjm7dp2t1eg5lRRKLfy3Td1Q76dGbF
7/dRsGTljTjrOAqStbG+6iCoI4Mu9b0/zTNcgSCxGPuwZz9u4CYcKPlmQrkOvcuwExYoG1pCA03v
281OYgm5YN2Q3YdSY3tQDY8fTbdzJVpeSqyMx3l2iaVKV0Z/sKSP0Fq6OVkFkXGXE2I4xKx10PVe
bE44+ExcI9pOgnwuB2VxoSAH/4YVufTGiLAENP25BsWPtL94uzTvNDEuLXyWiLOdsWivKr6lD9vs
sIomosflGz0xZAxoMouvt5MbE06b7lXxZkl1It4+O2UTnloooFHZ8d3JXWVn96mZxg4TMd9SSSY8
FbmBwTxHb8mX8Zb6VT8wJmarpFjpwZ9CX9Fhc7KCEOJoinsPHRksAybdCDgvPNKiWhJ408Vs5NLM
ONgn8tKQ4Cd5PS2T3kQtKvm8Ec4xW3q/MXk3mZ9i57tR5lrv2z5EDyYawCWrTYQu5zWknEnX/Irk
3e/TRouq+PCXMocZd1rwmPxp1UvJZnexeGvd4SlboMNwVgg+YCWb8xIaYhDjmqauBb+2TN8ALh5f
akVczhSTv9P2CmP3PXibXBalpwhOe6UQAtNL3oJ/zyf5Fxfba85OBeE3kFwYYicUarW9P2h8JfGo
POZHZfIPV7Qx4+a1zkv1tWMKkD7775r8XRP1Ahzawx0UVFdCTyXkheAulwWK3Gc+5Yz1y2Od2jBk
LWW5iOkPPcg8GAfZsp7uuApGh0VpAxV507UCBxz0xs6BpuC0cNKSbVthExUri+zg3xsvSrpUIEvK
MIxLYBkSwFnAwWQuXCRqew92p8M2pZub0nR8xqLYQK8eLBtFrviiexmoFuiwpPUCeCoPizsuVtI4
Qwdkp7ES+A985pz3kiTauEl/qtVwFz8x4U6W7PcZjEW3UdQjg9ou/i3ze0CTqbAICNfZ/hq26IpR
yE/RPYcxcCPAhcJPc7rD5UdM5nPLAJ5OQyWzf/WSQ0bN9SEHvPtMnUTpIwYtZCSkajLAjcy3yAIb
78AjhDWlQNorWCN+1e579JX6QfoSxQPPf6C3OWx+MLBMnwyc6FPpbYmeGeamC8wwD+oeWEPNiuXa
mPoEtiGFENEskQdr/RS3jHwtQXyihArh1tmyf1QzT2r/lHqCVO8pMRR4KNmI3owHQo57SVqJyAKd
EsDLo1rO1Bw/gzyC6cMxm8fH6kK06WiJRgZjHSkExXmSHUzwKAG70w3DQPltEkI2R5NCX1i68inf
qOfG9QCb6/qzO+omh8+WS7CPD6GPz9fbrcrWrQdQ8HjSPdOTz7bZbX1qs+Gh+Du/ZXWG5mbTPYAB
woAviEH+cDPhSGB7dB0ecBpM1oWg1li4sZbupM+UNe7KwCXb7T00MsFcuGOyLaqyXNvKdsv1JT9o
soZ6vpu1VTMX8QLVGt9efGhAe7l5KoEzLh93lq/5PEWmKEPNSCaEdM3exTNoOU5t84pWfKzzrUyD
Eh4zJ0l2N/3xn3DKSHlEq1PvJvqa48cDp/EDfNvY3XJQIwxDfGFjryu4PNfJQDNDkmdsDhI3ypYO
F98U5lBhgfH0DL1V3U5AFPlT8J6SKlmi4ak1IS/r/u5o1svluoapK2fMBaUprlki305pL2iTFGTD
MGlwtJu6wQ5GY9TPh7Q1Dgn95+tC1x0O/5YqMFlUnkHyu908if1kXYpz+L6gHrkkp90HwB7ywSND
exDZnPgw5WFnk61Xy0Z3nJyNuVN9qnVLO+t9g+rKeNrnhahU1HERXBiZLbRZBmolvdH4uhCuxjEK
dZ+MHjEAPFG4+CHjk9H3L18n+26ejyfpuBNpRQE+3abAWFFeo/sBfg3mSGvxZ9p+1gqotOcd2NAI
f6V/QhNoEBEjBtBlM3r9jbDy4IMRrSHhyO61Vo8D0udCQwXqUSfkhswqlkUGRYyFvWkL0G6kUrFK
+Du/xZEMGZXdCTsi57okrKPDbIlrfZMIQ/CUr5+l4ErtAClFLu0glWbnhgr31Wnw2AKXhkcTmlTc
8n4bobVoWJSZSOmDJira5xoLGnno0F3F4iohnfZP5X7Z7U5EDd8BNw9wMndacAJxEC6i+VwCE60X
KgQf4MkHquKbGhCSRnNLIIfhkpAdM4rEvjxQwSS5SxWOWrYWR3g0mJJ+GJfrRYFNWKvl5HiZyaFb
xx2vrU/wWh5K0a/isC2FDvbrk4xcBf5UjkfpUB2xH2RKwPAffXy8v9oa8Sci+sEwVpGv5CHj/6vg
nYeci70ucF8NqmUJpJTrzN2pPC7n9lLgpsw0rzqUPUv9btPlTa8bE3cEb+AsvqFIeyDG5kgC39fd
XtqkIruOCE9TambI05/l0meDqMfnYZVLtykyuaTsUQrKiO8697BqyEzpr5IUtD6c6c8UWjXbiBLj
CoDU+aVkmGcXLkEuq42hTswcXnzFRjAdclMios5ttQvlCOxAUEfP8iKWfxGdi3PRdhCUaOgLOPYz
eJIPS+U0mlSxQuuM+yKHaHhBhgXTonBRrXu6AOFYrTCpiqZaLJ6SNNgARLkiLJKsnDfd12rsSlV3
jw8KpDw/fuvpZNgC7dc3uF5jy8O3p2QZDO4aqFs6jmLFVEe7IUWONEpKKpgHlRbRM7f/Uoej+c7/
81pR+YGNN0Ye1+5/dHnnILzXap0VYKO2hbe7askhsPpy6bndui+GTTee6YNro9NbAhZpa8jlUGqd
W5mr7fq2Feu2R5I2DkIh/FjHLLln+KwK7eOC3ZNdyLaQK32YWw+zSUjTyR5GuB3JltIHASOE2Umw
QUVPfZWaG6ihTfTMV/qIZLSwUHi5J+09r2vlbl8xA73LCOf+UhJ3sRmphscMnPlXYauYzNlgsA/6
Vl/CP8EWYr+j+1R+CFOlvvyeVVK9p2VOITFKL5ku2NWzydKQZYCh6D2L3w9xp/4gZCTzvBRbfCOM
Vb9nhfdA16gTDTnRjtBBKLD2LHv7/C0fQLbUlT4cjgpHaM46OJglbve5sBw5E2ApCkQ+uU34aOXT
Gyx38JuU2AIavA29V2TQff804zz9I3l/Vc6hD60dKPuM7MVePE3d0C+JnFwuwQfhXwkNmXxMcdz9
dwp10/oeKuXMdNqsJSWXQ7b0DN4fo4dLJBhqJ0K1yE1zkbZWKRqcBfhEYdQF06MnpxIldUD9BMpH
DOO6vUbNGZIkB64BbllQdkCvVWm5JkAedstgJ8wUQDtMgPGeaBPY6LgGlWXmRIk96Tb7lf5UuEg1
NhiGAuv5zZKHEKnRrU40qYnIoLKQqCHIm8fpAorD+Xo4ZxOnXzWHikCB7uGHzUwCppcoqsLP6zOf
g8eMzg3060KPsB2+jeVNOPlPIFo5qHrSk0ZBwRA6+qJp6MbUw/QYpyS1O7ReSSvaTd6H+86WQV/x
hiLH9bOBWawElbiGA35flBrn+5aX1YJpneVK44rZppcaCXffA5cr1qscuytAvycE6s4wcP5le5kh
SWUknwsefC8sM5YdjjvRejFczAAZrPZwY/bZMF0CLDn/sunN/JhrTRMRztarMqVrBj4zNj8OrTHf
5Smth3g7RUXMQ9lzBvGDJxy/O4NkTMaVtkInkFdA91NsMY/pFIYJ4joG8QM2CBRpvMSPZAtJGU5Z
zwvgrJG+/0BGX+4TBx4Rqcfbf9J6mTBRbKoShyhxKpeMSckH67xAg8jH5SniO+r8FOCZiDBf8deI
nKTc6Tfl7Vx/2a2TKyOoPrjo+K41BF2eetHFAsQQmcjDjObEgJWe83CnUzIB4RttR9U10R7xy+0D
IX79eEFpLXmXVaAX1MWWKBw6t/JRSotfHINvj/Frm8ztnKK5sPkQLqFVkfYinHPIyMucXxQnGhTM
iDL/fZooIUCJoifyjySZlRFEMDb8ZgdWqL+djOm9cSmxWze+OF7DcCBOhZSplywFsIHw1WnnkgZu
43C8m9SYTaAP+RZQ5kZmTMyEHXPLbmAvwZP3m+c5ay7k4ekiiiYmceXqPBvXjx0dRv0Gef2DvokC
MM0kfyJHchqp6EWsglIokphOl0Cqb8+q12Gcv7o75O+wvvd8ZNr64XhMLk18abzOS10iPa0mqsKo
DMNUXfaK3kwZ1Rk+Fjgdsd3R8dizHg0ngdx/nwPej14o0WIX7ZstpAr0NX/0lCXdkjPnj4R5Df5Q
9aMtphUs89Bc8pjgovXS3K3OJBsIGVwoIjQy7SMLcDtcjI2yBE7sVNl2GDdJmANVBMyqz42ggATO
GI1F+OItYjDj7mAlkTn+H/o/yJL0Y/kPq5hy2QPR/mCclMipkNpoyHGwyo0DqeEpCYEPyop9o+L1
vx0q83jfQaHxF3+nS+eMYJFNqeqsH9toHwkUXtaKzUqs6oJ2XgKSOJ2J0C0FTb7hNbIjUW4t/D2p
3yP5+yPpCGU/HcnVIGMXqJxT+fiYK6NALQkByy+EqXDa5xMFOe9VWjzwhcVotlOT8qVIcVt/PIEz
ceeb41X+C+/+2yRDcpfHNzjqDbCR6tLyMFkFmL0hyKicH/3UJ/LOJHq/dyOz5PfeP/2xVIF4u0pO
5VT3hrV/kJDGfs8c3E+pw3ZpvNCoEW9uZhNfTev51SyFECfZ1aYKo9N13+jbZC0CcdolFTvLCAuw
jl85salBtvPJ774yYDw3q/F5sXdg6b4HY1g+ITvJAPjyvYNKccndGjC+9W0aG/t5UlLVlsjIqqXz
ZUJs4acsooomrb1aZ4P4H9papyeFnifvn/mnxv6LIP+w/gisp3vXxdTcfs10t0lwJ7y83zw9rEFE
9xtc9gF0GoodaFbg6o57JKDbhKcRHM5EHyN/Lu1/dCcYH2kKDgFjj6UebrpP2SStfIJx5i/2JzLc
BA/6lcF+rEwcCvJ/IhfCOcnHn2NrScib54B8LfvT9oaVG8DI34i+NryudpzSV+z1719dJIp3R69O
X4mov5EpuF06N2PoFpNfncP32rMJ3clrYw0S+iFy+vTUxGUC104otw7kJTclXkkXA4MgYvSGhgU8
5UC2YiUM1sVOJQpXsXFqWhnYOM+Wi8ZeiQU+dS6gR8pKtZpFeo6mU4K1xsxVPjqIUyxrIiKctt16
bMJ4QgmyAi99UfAsXVMiedao2NKH/FDlg/S3bCMrvqlNTPdUlBs8iH+34xV1hBj7ifqGvCu7Tdg8
xK6WTL6pCx9qk/vYrclOmwPNTcrY/7pwPX1ENgSYbVXGlyBPRcI8L5EguNuQEnPUeqUpPZ3bDPDb
yVgK45cXalBpKirA6f9o8JcfAot4w647W2fN57fiPZf4G98Jhd/W7YBQ12mBmciB4oA17Jb9R58l
14tvFAx6dXgC2hAHDt1sNEKo3JnaLu67WS70tuEhiYXz9tAa+60yraVgGKrxf4VudJxodD8NszK1
wMYGZH2TxHSXvHjV/W0F62hhGIamBSesTxYlmSxQew1iooAlzpmZ1AgbY6iP4pHuvHdHpVDP8Yr7
XINR5iiyOQmOah/XwSvnYktPDVEG8j9Qz7vzFiSJBbpp1FFDXclnWO9sbL3mp+yiz8N1OHb+ioMb
i3R0G1AWJA8Iaq6nDTx4owAd1PAYwpN0fMQemvnEELN+BUn9fHmIMkNsLyR9tCKGAyminM1Vfr/3
1VtfAa7Lxj9B2CFqkxpOIXhg7Nf690VJ7gMu8jxqkT712YaNiDo9TPBTp7B7X4+KKtMi/TaGF/zd
he2t6fcARHP2GRXWMAy21mJNQN3qfZ2oGtzD6mYY7VrEMKaxBUdMA1N0dvzI4gk1fO+nVAWtb70R
NL+w9fV5zV97I8UUvPmPAVgDMFoh6IdM/wsdCNdOCBp8fDHCTDCuB437MouwvrYjmMw1kKWO4urX
n8JC3Dvv+bNRJPy/c56HjH3Z8N54SH5zBI3jqdLGZ4Nl0JUAj0VFQb0S1UslDxOixbbJCo1vfBuS
k+Y+9aDaj+kPS4MfK7jAfyRGvHNmpljaojU8VnMkr6HU9oMjJ/2wGChK7qzbwkQnpVLxHD0lp+vv
PEne/mdy04DNYeqiG0FGWPFbYD9vyPFl2cx+L7YiYf62j/7vhNCVySzhoLVj2hdXjAB5k92EdlqL
QnvOm45YYitlr2vkzIgyfJ9XifpetVuEhdQ/B34f5EeeSikXjnPf3/EzS5VGPp092hMglFCexUJb
Jrkyi+SiafRlTMFxUdOFS/fSTrg/LJx2K+lyS+4zO6+EQwF+nJ/gbIkcYmjGLdrrYRUfxetoTsiH
3l/zIhvgdHqxZGKrM3cMkeqRcyWKKK1m3EGfMjP1ltBNv9tsmUbi5wJ2z9Jk6qB28llGw1ts7XDu
WQwzNlal7SgnRMakaqdmNUf98r7huOn8Nlq0+82USDlLKCY9QBDBJcM6WNaUwE+j5ks21xMsXQZN
FChA1z8h9xVAH2NbkD9RLCLIhmdVXGIT6GXjXt4hVOlSnYw+I6X19WSIjhaJ/kK3h6VfRA7BHiam
Srj11exaGXcz6kzLZJed1vmXX8diPYfZ5WgJicwYDT8L6qwwm1tSzc+b4WBF2+3xvBYPB7sQcAik
/eHbS3DglItF1HhVT8/jQ04uSzwXdSP2ZL5qzVUxMN1JadHPW8S2a1kurnOAD7fFpcik/zG8kmfY
YpLOV0LdWs4PPFV7tPFwzbUQxyfbBBwFOOecjHHM5gASDLT7B8ZqUQgPlUYR4OPnjIQvrHtjSyLz
pXOhq8ZSga6NMn89K9cLNbLpIksVBgC343NcrFiccKZdd7y3FKlQRxfjHVDq9QX03J81ugFmwoM2
0OqTWtrcBQJj8ifjHFrs9yAbh+fwLrNHZP38Fh/WwA3eh36Wn70lvAXlEUUdWBjLi9oE8V5q+TZ1
VKzVbgLkCKgixFX39F4iZ8F6zBs+BORfV3N4nTl70oPTo0rxwZaTt5iY546GDoOW2keU23uo6lSE
t4JoMf2MY12p/KNG7xX4JecEfKxzX083rs4pSRSIwaPBZwig/zOhZ8uDbtJv9A+S9eKZw7pA+WWI
W66dohETVe4DOiBqqb95jmENKVPZ7Xck+9NdGxl4AqoOw1zHMMQT76VRWzubj+XIIFeTP6LWX8T+
eJH5AXstJx5K23If2nAFEMU8xtaRnAjWvTI+EzQ/UPq1IZXF9kF8mX9b9cDgH2B3XyEniEtJ+mqw
VQr/vZnEcIr4reYkNCeqMFkOjRZgGlSpq9CrFEcWNxAkEbixyjwNsKafF+9I6rqxCEJE/yAB8lID
NHfnWEz3DHT82dwMV7lVO1+ttphnKyTXjBU7hNGqtgz27ng8wYlGMahHsL3q4im8YV5EmlNiOuBz
Iq8ZZ3hElL6NZh+/X/W+oZLma4kIkoXJVHWnla+koKnq1VsWTnB7LwK9IMrVJ8mbKO8tCHIpzhYc
NNpgSjRQNofHewYOP0ygcZM+3u603RcNbDG+ITtfHZU5hgT/W2Gl50MnlcuSclSJeun5XtzFZKAm
IZzJ/iV9IFaYoeAvc0xYO1BnWMxeEuraBPu6q760oRSdthqIcTbw2q+pguS6fw1LJY45BxKSHymB
kXTPeH6VUsmok8gwUanJRaVzj7dCyxx391936i7hiyN0j3p/oKdDXR2o29PkjjehLKz8LSxXTK58
nUtSyzay7bC6wvcDn87cooXNHfH19j1RCZYKRl7NWJql0siLpfh6XgJDIcDGXgvxHkwtxIy/gYLx
5/J7KuCwwtlzBUA8KCt/1qzRAJIRyu/+APd+cOLjKjbiPh27ZqzLFnbCBMadFjCO4XBFDe4bK5QO
wgNuDWm4yMfYHlgBsPLVwLaHwwRn5p5PP0VQuURiZSsV7My0I1rvqcrtbs0iL0gRv5nls3TW+jCF
LN9SbwragliEW7CnfjPZW2H29TuNEEJJdbmEZfPDdmyP+6LHPlCxjjCrM9uWvgxOvYIIOZ7/k8OZ
xGKpoCe65N6f1MOwDsV5OT7d7LwtyzjwRb8AUqfEur4Wl+SoCijigRBeOjuFgJ0tpNafFQnnLUym
at8FYHUxpK+7buNqZe+iEkP8XmwqtFR40WbOgr+75aRinK/zA0H/rLdDb3IuRCQn5OP/mqtmYfE2
J/SkuYCyAVqCC9bJFmZHkIuWtU3dJ0SQtEIcjH0qorJ8iulV4TMzQ5+70gW+1cbRCDBcKFauVxgQ
eHtbDTre+EOVKPivvnVQCeT0hH+xeB6J361CDkafR3G6wOI8WezSLoGkFzHSfs4ZMgEYGCw2QJDe
24JP/K+4lBinosl+TQKKlPNx7qegVnsF4ksARvGmbQpJ/8CwVmLcwF2MmmQaisgI7axaiSJVAOk0
qW6/uA3beT5Flui4aHpABAWvgtcqc2gYQxtNyV5qXHYUAo23O7jp+CSf4watKhN3nEEFXG+o4cR5
+GN1J4DRlI9QGWDWVqrnjueE8Ni9oG5t3XnxdBngXXF5dH8I67OgIHuzUFwqCSeZMZATnlVPVmTg
C237PLdg3oMe+JIadih3uhuFes5uWBRK8VPANlJOOWvga08QDkQJHjZKKYZ1/mf1ajG6FAFXRLp2
l1Ia7UGr8w2yQjLgvUEUKcPrZTgfoezv46M5i7gbeFqoyU0yYxkj9Wj9TiqURMgXB75SpwkXXI6W
4ZLqfH1HW5iHtoOU0Tmg+I/OafDk1jivxLp2JbssYO5jk0HE/itSlzzGPzbO0WC3SJ8Z5KYLOrtT
52kcbVnTgP20moOja3qJN/Dbi8jT3BwOvAwVv0dx1pcHEOTjPErsmTwsYNCXGfe1Jf5YoG895T+L
/uJKsP+m04jTctq8+I6xvPhlPHpFEydXH4mi4hW68hgvNCtdOXg6WORjkccUZ6/a1uNe6m+GXOTu
EvBhvUsmPiyCH18jND3Kyn/akoNwpOfplDI+FkxSLSe6kMdrBVbE+N8xVRKKDtbKdbLF5BO9hY/z
a12363WhJM2/B9/ZimIkUb/GuCnkEonOjFS1XrKsPnEqwq0LE55tpCs0+6TvyzfKly74ofMrwFNd
XeBQHgiu0qjZnpXN7svkpOf9YDlL9vOsXPbRJ26T18DM/Vc9P/I/AV2Cj7UVNtJDaEPCnscd0fop
b6LSaHOWM7Xdvvs23/YG7i+cvMXmURxTYLwfzuBn8Fx5rl9so6hS03d1C7rV5wuIYuOSlSrYIZbw
uQcPhoP1PBa4lwgAIS4ETYJCzo237tcVh7eA76A/D/rLJCT1gX2F8e+ipOOwykrWI5qeFPfA6s1f
SekvFwIcLmlc4Cp61cFG2g6hDPyzk7sO6HSCZfbg4qeSDOTiQ140b2Qmjh89NDOZZV9wJJu8q668
yn9DIJ6ZLNR9FxNY7WGo9OVQ2aNojFpMgsJww0GNmYK2hmGeC15rtfBIZ1BhlEWzJu/87L25zC72
ot0QudG9z8k9g00KJASvdkFRTzt5vYOnA+J2KjzVWoBfZwXtQ3eu0Y8F8IWi/7o7OVN/3h/sRcZP
jwQaMuN2SU/OwJGIR6ugotcLKT4eOIq+IVxhgPFN38yyIxDnt3UI6lFpblBvwk62K/LVmY/B0Gf7
zJr/UHIhL2lH4hT9fFZC/E6lXuZlwEZE8iYNgcpYzstKccVD7m8JOqEIbY+ELBZfS6VgzHk5Qd0G
86HInoYdNgtKdU63tCYRXjImwKQbmCy8xaQwU3QkSWeiPhVGwWMephsXj90LXQ9JAK0INEhbmb+z
1xXkkQmPI2VySLkHoLQQbGvcyn/Z5SlSnzBjbqJmWDsfLudrS0HfjLHvPv8HvKwDNZ/YnKc0BINs
6ck7wNEPe2No9waFWxX8ebLCilN3xHLwNd2GKDIAalsVSAbCGWAtWPK9hhH7pU1kOUvJ6VePxK3q
8QO2spW7kQohefHBfXapU9rzsJtzAbHY44HBvL9M2hcXl9XrTBgFhdHE5Lv+HUmJT2ZqWwMg7wk9
oBhO+r1NQGw217QEL1XsHKr9I628AcSIfqFRmzeIwz1K9yc+PEgWjPN4PM/N3b98XKqR2ODGHvPf
pvOL/pzD6rXoHoedy9trQopZimtD3PbAbUEeG8qV2DMOyoFzmfHpUX2gG51OMEl1KF7OJvFxxfng
IW31FJi9d9S5VTSQzvcySEWeiatiVvaQP5jGq1QCzA4rAX7VeR/g5OgFajq6mLv/Sp8EUOcKjfgR
RjPVKgwHwunxRFztjPG3aNTlIDSPFycXS4j+RAqmT/Azh1NoAqDunA1Qnnsy+cPT2iqnP6kN5npv
8AcPBRkOkxf+bsIt9XW6P9MpHKEEpF1S3SgP7u478JEfUohxWg+sk0DmCh1fEtZbKQUSycBITOqL
qC0Ni8KfzcmjR9h5EMHC85UxP2jREmgiwovcwQ2BHqO3zWtulbVCS9fY2CdJdzLVTwrOZJh76+Z8
zpusvtTAVvpGQFf/reePvS4ZYHlE9VcVJanRlcJnYdhl4ZPfxpQ2XlgU1dj3FCpA0G6wxnixK+7L
O/Cu/CGR3HjOwKjStfx3qg87NP3HohNYUP5cQn/+a5xs30YCUU01Eb5aeDnjNq7dKrkC/WmZs7Xq
Wy7L3qi+SVaX48I/LN6zoCpH3AptPMuTvVBRBrK3vBDoXwTgu4htpzgTMIEhNMZHKiGpD6y+VABC
+Roq3Gmbc9NL0vVxaUPsxPanUmGCWljx4HQZG6F2uNqiY009xgv8LIYWW+Mt7+kQV8qxrqfBITW7
yWoKSUdL1xE0eYgm8AYUyx9wUILLy2twOJ/v6afcQiFSsI1D5gWjnZgu7X+CZTFz/ap3vbvd6kbx
UMaUtReqAzHqnfCF9CNp7pxmPLTTaD06aOXq7dbWb9wn/GSHnmTznuuASylYm7L9d3a55qtbfZVJ
/l9+gpVp/x5bYnV4jU3Mchnoo9FULIisINSujSRId7zSNOIuEpKxDUWRdDki/Rql12GNEbaYd+du
WS7edVU3YJjrK1jQx0pMbeOMrn+rrD6hFkK449XBKnTRze5DyBNIYt8e7CEGgC1y9G5B2OAvbWmh
T6j1NPFdEeREgY5N+ERSwvBcATFN2W41j33T0Q4g9rea4ubtjwU6ScJl6dtsQuKfZr8/6XDzCpQY
59YGE9hR9f+52M5+qejPfi4AvSFL81Xo5pYShyTWGJpIqUpDiyPkpQoEgieE5ghU6LCYX+pa3eV0
rBkr6A/K/0ogwxm6/xlGXj7nYHBHZlenB3INh1aMv2AhAvZopQx5MsXZWlY69Sr43DGUkUEbLASs
4zzKlzD29mEfkEeJiNvNlM31EOAL7MxN1Fj4ofq9ZoMavZKh/KMCI226dcVdgJEfV4hxTE2QJINl
r41aof4pIQsMA/w1KZoT4uiFOpRXYMroobOSh9vgZy8u7LawB3Ghzr4TLq5vkwXcQLOraM2mgChV
kxp0JaaeiBeitQSMT81dxANUv1Wkv9M8Axa1CchfFaFNL6aUxTUEEqpcByQewY13o3PLyRISrVNz
wRbstpgh6EropXlYkMuOaA2XLJLWNss1vQytsJ24hQRvAWI7SYNxoyjP7HFMBYMQ4lDjBfIOp6X2
Xod7eYte//NPpXFT3xVMunSo3vsw4r1tKtrGd3DX3K9j0LUKhyVVZxNxHfANXzkK3PY4kfGpjcvo
B87ch6/Easy9vsUgqQc5JdA6COEbXL1Ox8qgqNSguAzMvSClGwMfI6gyBvPVJcESPxy5g4ZnMSpf
9iFBdmsFOq5yDxoplxXyH1ycNSbCjG2vEKRxuFjPN40sy7DH6RoHcjhwW/csFDV8T6e3gxEBERx1
5FygkXmpiqK7Y2VVq/qDJEqsppnZo6cKhSN8gP9WyWK6i9gwR/FSANIqjiJ7LCRJleq607a8YuIr
owJI8EtCrZ/5n78PiDKW3fHy9p2odDv2Ybz8gfmue1JOPFbCgl5hYr0g2lIU++Wigw9faABEkDKj
ojqmwIB2+B7COhPIFxbGxMt20mt1t2FMJkk3egn17pk5sRVrI4ENqkLedfNbftAjJQwt0Tt/F8jZ
/0gGGiSTczGDLwxOI7NfRBGrExDanmd1CPFIeS4KP1nmOIuCCob/aGVEgepCVZ72vf3u8+9Nq6bl
NCXJ/7t5zQakUqXBPvkAR9q91TLOE+Csui4m8MU2CzFwL6ET04kW8pBkbdk2sV7qohJhu2wqsQay
pDTS7t3wXCVkWPWUHo+VvpPA9jJvVhc3dWT++MYxr5qpMRMPo+mEYt3jaNOP5tRk9/lyOVXHRPWd
JUzBkJ80ExTtE+Ni6neZlCwzvpsnWp0q85n+itLW4s1NOT+62gsCar5MVIZKoejBpUjF2kPBzO7m
Q3h9mXjILj9G526odxfslNYwb99nV5cdaonPfagMS8nv8OKM3wm6J+Q1+Z4rAGjExr2XD7nDoRs+
KFWfHKHI2fnGAFp4TcWRn/JaNGqI3AHT0E+/vO7W/XPRu4xg1ZCR7UCfUsXYpM7kAe8ENKmlmDg+
mHP8f0ZAdUq/s1LKPLyYNsO+wgnr5DFQo7IoFQHwUbSLT2rn4S0fRCEbuRjdJvT4kSF935A7g0l5
a9fRwfeX2uxs5yF5EE4N/8XuSIbVBqiHgWxUjwrSEgMBVCW+vCBHMJhTSHrSaBEK8AnDOI8iQO8s
7AxQeU9FOl251uOpzj/SauXRd/OD3Dly63An0bun35+zdyc9tCcRrftBVU+SJLv/lP0s4unirdMA
huYTOKpxqi8B9R/GUey1lmScnZQ4Ong3uOMUjIwhOZ0rrLodDgTUbxJ1bWbSQe6JPPq41miIPdnL
MCD6s8+n0hbljU8jBbAPHJuHr17EfzqW8v2hJOGM4WdlKq7yp9GTNrlIUazsGbeq1cqcVPuijWke
f7djVMFAPT55WMkG6bws2E6N+Npyy39pOcPE+j4d9ap/cmWTlWtREEHoX8LtlrGsYTfCbZEGdO1e
hajaOSe1eGB9UYEHlornEAVUF7qwu1ewtLqtlmbLXh6la6hm8EleC05/WNiM6L2jDq2msHm3C/gT
qEMUlRHojipisVvRlHGghoI+e3yel1JW6x11aVc/twhiZ9Hand0S8XRv2OzILfXBS8jzH4dNsdVH
htxfCfMEeldEP+FcW2uhY7+IYex74PXWkQx78stC7r5blha+ioOH4jGdz+rJDer1ozADZ+ZF5TAd
lyZxZJTM7pWbSE4hHuw+WSEiwtFLLuEXg1KvWTzbUHYdeGd0yJN75q6cJOfNJSWawaB5jZK/RPPu
XHk1DJKrrWsTBSn3xRJ5Sr2a7oxL9M8BRXuvsxtEpqKikuGF91Ljxw7oweC0K5F6i+BDzPX7mCoD
Ug0XYhsIhRlMq+O4REE4FKnLJ0FIlvlpW7DqzO4AS31J5IhVkDpuKAl3ksU01I94DtmsPmbE754B
BGd4a9Ja7MNw/Z6fS2wGYAyVdE9Oj1zDk1lk6AOnwf8aYSL91YaXklTIvxURKaqSeqquLa+1p9cl
l46h/9Wp7PX7CgJCJwFX6DqFchn/INNgsEY1u3zxn8s+BAnj1+lbYrupHbZh87w9aUvoyAKcbfAg
OuYrEhGCPoY/k9+zkQOedAAnzApv4foC2aU7GYo9xgvZ+ymodCPy4DJs5kFehOswtW36TPEz62U2
aoazEAVjhmTj0Qx47niOmdR34eAZ1mcVGQWikRJX1BHqhgT7w06ZWemYcd3IX4BXFTgN/Lc4nUuE
JGdJtw/6voKfzmz0aUNHnuphM3Q1k77iolP/YBAyxrEGBtLtL7hsIIvD991bQ499Gay3uN9GKGoW
RbDWjX2LeQrFL0lanbPjT3KfDcEK+mlBhleR1W5LB6BpxHOkhX6BCIRWdxHPf40sRu2HUGo3OoGz
evqZpkQ+cdrBAfp/tk9ou81bXy/nIskDOraSvk3wHgmu6pRWX5nbkxBN03wCL1qlHSZyhaDDtp/V
b88sE6SzDlB13rsSnIMDRQtG9Y5v19tOMokyiXJaXzS1PmyDhdiOPCb7rc3jex2Xyxz8kU8jA42+
etGZQcZPeXJo4pAODGJZJecJuKUR6vXjl9/x3iKhhvN5SW/t5rOOj1l43B9OhcQAZV+iApWcbL7s
X26u8K00GtjHquMPuQaSYbaI1y8hix7I9QyuBHyW9GRgDUBYDxahyeYiMzasNTMgB8xoIoiU/6IP
KBVQRXAYg/bWbS9TFq/Zr/BbefQ3k21m0ODkvsdQuHmS6iQf7roNXjDwcmAl+n6FYw4oIUIWXQBM
KyVTR2SbGEJCZSC2LWjmYsaV86saNdUY0SzMlsyJJmcZrWbeT7L0JS1EGF1XK4mxRym42SoZ9yX7
4WeTuA56wRK0oiZMKKJrNeFSHeqRjgTWhfqWg1YjJwZ12hd2YMes/PA6zawN6Lyo/t9Hn+X8ZCKc
V3WN6Qks67/XMFwObP0r4YBVQu6lCsUN5iDMchjWXCUaKOAvL2XfqOf3HbeM2a1rlzHEcm3G+Vx4
uaz/lP1IcAxGPGKrIu4eCZVyTdq09Xbsf1aB3tN9jNG5o6ziw9Pu8CujmljQpopBFKxfB2IJ5yqZ
kfc/QL7MYtEQtquD3B04Mx0Kz6g8mCQTHpDo172BnofSPlzEoNwlXTXMug3UE+VluGJiJDoGwDMH
WPA3hNF27hsirZ0n/z5HagUvJuGkc5wBTMofH7wVW1bNus56So1SIi8RG1+VeWjamgcabx0Nu2kR
93kjYLoFbSu1LY/mfMms3uMB9+LLWGC7q/1z7hrqxaAZCxB3VH9w8PVkluw9G70pSXO2Ns5oHB74
zVqDwziVyYWkavIrhQVdD9/p6HIJi3tQpzoDgLg5kcD4W9Y59joLa4NQjK6ES8UQVkRw3YqE+Cdn
ChDwXsvFd4sGe+YqKT8LGmfFUP+IzPQUh01lBccq2lz5N2rrYO1w7izX+GjYbCqX1gmqLYkble3n
Nnf/hYlGpc8VC6ehgavxbj0yKsE0rue/T3nN0MwxvrRCFM+coqgAflDPL2ER3/fd8o+10FEtGwre
uuNfqHV7Eyh6UlqyVsbE7W7KitR8H+H6hZj2shLYI/3guWqWMjcDtGUILL+w+r5dYbJ2rL0DQ2+U
4DnJxnrva8ZoAjYGG5dROnABRP50xyZG7vAdMgOoMW/zM6hScDwkiL+MoKC1kzBAzAQAzozoozAe
Q5cIb6ddNUqXED5RGR2PPAClUOCCgJRmY+zvJGUjPhnXp141xu7JIakOcD3H5JfDeHYBWjmkbzGn
uUlFUmqa/6tn0u/CwGbtNf9nXvI4BoRvK+1gwD8dG4JEjcHE91tVzpaNSIjOZWBzEbqgwdDpv5oV
J2/KawCpoyImvNBgkhyno2aQ8Azw7+f/wLQLxFsZfxxMYtw/QAsiKp7v28mZOkl96C9kPN96kgkT
yXhScQysfbgjedjPGfVJJD67VJsgpDBDLxVXpfjRzBJFMDhX6z+ix2osrB0gwN0O86vM5bdi2sI8
BIC4OHAcDh4SnKoGZkP2swa3kOL17dPSXF/lk/Xr1mjP0qDuwiTc9BMhtvMsmSiWklL15Ds/9aO2
t87ibeWevFT0ZqWiW81mjNdt6RY7Mpt8e3khCWjqc2yhr8eQNk2MOW53ilGkcFYVV+xUzoiliZwL
qb0AK4mTo5I7FJyD0I64yPWT8Hkbbp/wfvbuMO5gKlqXfIfp+wabBKHWtheHwN50ZsQJryQ6u30R
EgfF9Ppz7MnHuvZG6T2a2szjOUNhE51v5HrAhiz/ZQILqYS9ynP1+VytXXgjudK62690tgffg43Z
SuOFHxLjjrLyw9/H1C9aEdBoaZi1gYP0A2EIGCvmJ0B5nLygwth9zKg01IfcridVkyOlK9pYb5vu
gYRf1KC0P01raTuZPu4FvDj5U73V/U7YM/pN/gy4sfkRc/tuRkDECO7W4HjmSDlwOjn5IlrBH6Rt
6m5OOL4PzPM+0kXIBBBu95xD3v/R2k6hEa9kebbECJULD22LWxgNqeWEU/CEfDQD3+0eD1K6mdET
FTq2lxt6HyrCPU/Sh5Cs1ISchzh9Xj6h99NwbEjkId3Zd0y8LZhkvGE1h6rsMCV+TqxbMvYA9lqY
UdEcKnzPccnWfLv4rBvnBviSVutv4niaIw1DgRY4mY/s3lUqcgnRTW2dzaAnZzL2ca5GeHiiSLS2
jP25O0rEAUl2qpEwpjPdARPRmwekRaOuY5ukPoG2dEZCYfLrax05dt4ZojwGdLz4IDIk8AXRd/Wk
mObFTC7tWDGFT9tevJZwWSQi32j3CnBlDcK0DsjOh+2lt2ozCnIvizWGp/nC2du3yoekU60AprZb
Q0T3/+HK7qXyEkSdLOroS7R8D2LUM6jbVGYbJWHHqgA2OhZodralz3x38J8C4OdSrVoNIFlIkP7O
A5S1vixlwUtovZ4mjXqC1rlRtxx+WTSZcoq1NYGeYwx1884ehX6TJp2sAZttSOK917PpRVvS2Q/T
nKCm6aHE9CniDKTOqIFBKrprLcrmZ7mArTfCVMiB7lEag05Du/L5bvAR3ddkamhtqLbhkV0iot+4
uct6nHguhTUEX9CIwlvs5NYQMJdr36I0CFJvPh4EOl20ij3dbAI1EUu/mVtaEvB5+v1Aeu7KR0iz
v4SfvZRluCEuGuKVAd9RuLX2Dm9Bq4gi1OL2HB6Do4RVueoGLaBT1MEsNXWMzU7Bkgb3zXqDBKFw
8u4Kj64E1L+iy61ilVD/6k45zsjO8RKZzPvRg83V39qRUQepYqbAbg71S/a92Ti4IlPxeSp1c9Lu
fSbbCSIutraJU4xj6Sp0HrItNZ11Ske6+ooJq3ENQmbGPwtjUsSqM3o/u+X+AVJkA9irqg/XfAsL
imkEKXNmP9kLYYA6ZthNk3piY2FQP/jAhLHNBGiTq4QDoDlOQ5NyebHwAgLwJxquPWdURA+QnjvC
4WWppHjmBVluzpXXVTQpvH0P/iWc5LKWU/AXLz3EtuMK946fP8Z8/khVp7rl+areM48s0ah5y5cR
N0gjz9t5Azxkr1W3qefY/XFiva21UCOBaOb2CmA31VIRknWn592FCM2bgRvxVSegP1252coz6cuy
Rw3yK4QPg1Kc8Kh/UBZDSHNPWvFtqIIsa3AwRNGNjsadbEoNfab00HIYhjlKJLt/RHALvhk8f60B
os/lC7ed0BSo48AssQAPvNqOE94HpsUsX4zvq8z07RH7QI6bP874Wq+7THqg38MglBs/BG22ArLC
qYuiCO2MP+ZvLAbOxe7SxsZWqIF5azPI0BCk+QO+ZmpxedaQfMXKKLuA117wgEtRxeTBKTA6CrQR
nLSODKJITccCeV1cj7I2qE6+kY86Ti4QBzrGijIzIehIHqRv/VLEHRmnpXji+Daqsw3UeLNVDnz+
lSjdMECTbkGY+twqtpNhG7x6aha01PxgGQtl5RsoeF2ZEaqH+PTCW6wbUnKRHFSeoFMg3VBv2zKc
5I0+yuW94dnG06Gb0UD56LLCneMgHqqnshz/HPMA/bhEPEuv0dYxUJksnH4p5TPuB1M87LA109Y0
bmQT+ezKXO/xybu7d3W5Vm+1A6MzfO8yk1+IqiNJoIZtjLlbStklFWWWlU/FWF+ehYYveLh5knUy
+rGCz6sGYY96J00yoZr36DGdWgexcRWyuW9aPl72uGRHg5atBcefoa7dPnXcqG8eCh92wv78xP1N
M/+YJqXhDUcN+v1vscDBaypeNVkJlv/AZtttMNq0PU1jvDPuTyAsCFuOVncCPYwOhFgtIZjb0ATS
718mqFCFzfaZttpNS+2hTB6YE9rWApDGymGAZ1b5fbxBkB8e/nImtTZ8vLaK0bj3S6cRRl5F2bB1
pHFZAPqa+jMZK/L+WN+PPJorbKpnQ/2WnOSjbJa5Odoxt5BzcZI7iFchKXGHqfGtjAkOhAlBvxKy
PlIdHK7zwG8ToLeS4giL2FCiBxXyGbJ7GFyqX2ByxWZkQPNElqig6GWd5udKQTO9TnX44b5yyop6
Ivm9posheQbC3fsBaLr2JHhBkrQ9UTqtdUdooN1pfDRKKDzHAIg1P4VXix48ekZHRiuMlamJhES9
Nmo/s130HtpnM8MV1cU3aWTIPNBig5LseALHrUp/WOuKC0mPxz3TtKIawdDc9dSFrbKAIMIpKjWO
/wYvAZK0KTxiZanSfsnkipixC8VpjH7L1L81G2V79KkNKFfV6zg+dj11+QvyeLzOttVBbXfaas70
9FP/nv5nXhfSgAh7d71B0Kg4LdWx8V15PQZoORabk0qPCgTy2qL2BlAg31edClbYENaCzAHwLt7V
Ay5z3WVML3p1jvc0tIwYCgoZtf26a8D0kPyKmP6ZniqKBcA1yK7V8JPpEYD8kxhNPKCyusLKn+LU
LvQWiW3KOCwZatWjRDMdeEYkt/VBKsAGPbjqBvfIfF7PVwxzZSt7fwF3eN5u9yjsCmzq5sm9TC4e
9xCauyROnqQDrw5hAY5RWuX8962yCzasnn4XOTNQbyMcJJBImLcwE5458l7j40dkIJsBhBlit/ju
2hiNNnP86X12XkfO2aR7SyTZm41A+dBPMO22pESX6SjjbAjP4W6DxmUAXhSaipfR/aOeIhGOAzbs
dQsNahjAFYU2KpcIrsiw+RaUyXdQ9nYAOBwbI0hQHUkeCQFX+A2GVaOfS+vGMgTR5B/JTerKTZ2z
Xg5ThZpUwK2YOPu9WcCRIb7KhkN+Q7q7Qmy4SELHUVbs6vaPKCbUpE6ZOwuBKY62minSEMFaOLXG
Li1NM2VJMvfZME9Zk21NNS40fcbcJzptloSDXtnyMU4631egpfqWbO72ixX7RSwWWAc/zNUZzZdo
ZAPksvfOf7hvOkX1x1fJfCZe/SyPed2qiB36r7sknB3waApQiFuXOxFAmwI3Pfe9JeCkUn/ZTx4z
4SCq26G6C1rbmq0SB2O3fdxDkecB6xG/bkpN6KIkkZ62b+SnbDyJOeXt/jdjIxsCwCRjxXEocnSj
fplZhKZdyiStVoWFGY/M6q1xEmU9T4me3hrpcjEvRYrB5OvZ3hwGM+PR5BCryb2QgIaM3F6JTnlA
0amlMwjB5hekvkfoSshKnBxo4e0C3rLDvm+x4z2ONRic9w7xXcID4GmVThNMbu4Uv/IK+i5VVPFE
PZ0I+aBVecmO6ZDWAsvt3PP6t59wQh5vYTGEQmvyo8zcb8geMC1ZoKw/D06LedY0jeGJE15Wfm0Y
OS6FPVKy5SqIDlq8oLZrSbAfWVUbkJRg9+Fzd1ldHCyX80It5MKAJ+TMPna/tYLMzHoVAyJ8dg8x
GU/LFxYtBY2cj8QIDcf/Xx8/29THlhkJDKbTwuvJYqI5qbH7ht7beJ4ADvnEBeGnTUl2phEQqFyE
7Svx25I1a65bWO4U0hBqQEp+YRPPtZeSOGvIC+c4ufxSQojNWBu3sd82YpoaGviCYvNphWn+JXcA
YN/68Jf0k/sy7UXkVb6wb1dRtXjmFQVN6uxth5GOknAugE6wsWBvGV/yzh71BSKbLgTtt3i6TP2k
n1OLcasi/3SyxOFxJkj+x9RkR7/IoIdQ71zpB6IyADLzXtnj3ELS8zSFXhCtTaMaUWdtwO2TeVZb
sSIoyhgaTqFceCW7hpl2kBXiP+eW7RL9E9ubveJ0gYg7dVhT5nBtgQecsTtXRbROl6lfZGhha0aj
p6REaYWbLakB748s4+TTyKGbBNhRvd/zyaSyZPiyWx465Er9ynbScYr2yLlzxFIGyPEmydwp2q3Z
AnJO6gzcdWid4sh70+YO0wWJJboAI6cXdoSbUJauPdV973a4OmaXblYSVFpHwBMMdoyKUpMSBNVS
CrlGM8ceXvrNsUi4JN9Sr7n7/ufxYUmrjOQggPyZEWQ6Wa2+alevelKTB4dJTb7f72AESUHVOAWE
/lgnmd3GHJhyVTwO89BCnHgBaiGikhi2dDlVx/7mi8irRvMFtNfm8H5srSgxQXd13jB+ywWp2za2
0+KF/bm4wXEcUzMqE43qLOtPsWLpRd6LAYc1BUpof6frGDnlx0F3HKVQeiqMSbq9yXfJ6Qj80XiA
J4G/T0+qXn8nkkU+6kTy0AV8741qbqc3mgRjbWz5v6Wre+tSV+Y6mtvSWKMMAvOIbZxmu+HucpDJ
q05FVqhw5vC9XBH9d/Lh2Ro6uhxYA7+8KE5y5r1yrJhP21hVH1EGbHpofVSAFPsOR8iCoN+56ein
xX1Bs6H2DQv5t9fTwCRqQHa0bgKNLSjfG3YgqvFXO+E78SfCPtUjAqvJaBPxQ2tIQpstQL10ZzpA
+FMGeb0VW5vjDkdLSm0LknDLg2S5C3MufyqM3q4XgvEoLKRmaGbrILmxtR2MUgRJSy2nqGkWuj55
Ex5mgAa+aNAeAQeA5LWIkd0MGDBoVZtg26X1+XFZ2ze883t+ZXtx28HivQU5fppbVGbbNofkLNxr
NS2RdhgapefI4OBT6OEPS8HNOkHXgGPb07bnFdODSsDnwSbJqt4zof9MCx1cHAIo1VNknJdnkCx1
mPkwHWP5a7juyFDZ6MiKQr5o+B+d4jGy+WThfKBLqZN2wyCnB2i13nnDvQOM2Ec2fzqwd0yf7fN2
8jxJkqxpwpYZ54S902PYjgoi174tnImV4THIpn4UwX1q+uBN2yLucQN9F1CUVV5eGzRLSxz3AlAK
5miJ7iEy+ua35iHF9w8BNqcb/tyEVlkennmcxA7vA1dh9+u9ZA+KhaPluVa0slyaySo4m3gzuYfc
BEhK1ax4BcFJU2LLXwq87jJYt5QtYcNJgVrDYI5+kXWiMt/iJpjyrIIFKAOZcAx1bLkjlWnCpYxD
Bjek27wMWJw+QwK7TBjcXT2z/OvZSQqr7dPIDxega/J98WWOF5P6pp+BJqQzIZa75aLyGDJF0Iaa
8W9W9GxFCtyHeLmlhyP7S+06kVLJl4uHm6DrLHjkGV0mwIcldKUMbv8mBIke6LSGNsd5CfjY1qoo
7uuwjByKSzxJn9ggPdU46yGf0t59Wg7/BuUTK5AhIxUNbNIgpERdr8OvmeD5PgESLCQsS/zOMmfv
Q5Jnp/1hD4uVBAR1jyqFZs5EdgLMhGpAX7dUhNUsM3KUqn38I1J6FNFIw7cLd9JZqLLToeb912dF
3KekaPEWe/BtmrqCsIRnRohDgmYlkMh70SyDwynEJTG22W4yl2bkXyEPnlJX1w6jFHJYgQbLkdil
cSDovjaF/zFNfqNBA7ksqTmaaKH15j56Oxf4R9Dw4imDqznFAhgxFtDXAmL4n0dJrG/UCpu0XWbu
OUMmJyh3HmH8bwmE6m6xzdfTMZXcOQNF4LiLyJzn1ouPZplAoRKq8kID6vVurG63tQ/9kIdvla9Y
CxjKeo87SR6YCQZHKjNtzI5SJbVki95xZfOCUvrLqLfZKT+Hyt+iOKCaxIeAXs7L6La/V7WPRxip
gJX2GZB590P7D7kM7CRk2i8jwM10wR++dWoNVTrCDsq3clMiFV1z0Lg0aX4eH6HXEWVSU6wUGB7g
3PoAllpG93CmRKJfxN50h7O1AevScEYbyWSF5dr6fZGRypN6Gdr61QagIfpStALu1mnpdBRbMhtI
1UzYQFLsFkZ7oX2M9ACENrAzV/ho4GgK36lxx9GRYnrLpS9It45TqMYE3M8Cn4VullLmjId+1W5J
Ys351pa/UZJRLz9Husr52bMr1+xGl0omdg0V64CEst8fk76dTl/dZ0FYGEJnFyqoM3I94HjLx5Gd
vh+QSNYG7VKtt8bEoAoAvjcRg0wiZBzVRvK02dpAReP//WYLDeo6C6GpBvkqhaFTh/WgLMJwyb/t
cE7RanYqG7saWBsKmZOO8rec7pXiImR1Gbx+kpl6B8Rzdv5VgYdjdwc5ZT39/pc1MHGJZmCNfZTQ
73TrPdBgkcqwWbhWy4KdS+cam02OXL+pF9SFA0Q9VBCYtt+TPewnN5vdZgT/B2w53B77JlUXhD0O
3Cb3lPcfNJ6zrg3QTC2IbzAc51pLqxueS/lvJa/xjg/ImmE4ZawylVVZq/QAHCXD6aFpSzlIQp9D
JWecynct+9K5q+cq6Oa8QCyr22HOsAn7KJOxFe0r25jWSi5PabnjtDlhfZh6QWuqqUmXFGPyzJky
nQB4Ikov0V1ddXFlm3XDPLEyj69rDveFiwGu/k741Qhx7mFFgvslOEN8sYgBIM8Gyn5QeIyK94nl
2bqMrunZg7PKkxRRb67Q5gBr8UeCRUnOjLcVWQyXPH54zENf6GszzZ0MHfVtxPyN6GYnqRuUmM72
kfPzpHNh+GwlNpLS/pg/MbZX2l3H/VCPcRcGTwPWNI4kmIgvIbRqgqSKmPwnqUZlZ9yYilTBVoKd
jcbiBWxqfhJPamnGrvHzPH8tiFBKqmWZW2FumQHJidyKgnPNBRZ7WYxwYM2m/hZXaA5GM2v+L9WB
xxAqOp4pRzuB785wUas5bvrAAtQkY5xmhsPO2SM70iMn0himCQ6Y/9Oa+SK1JXhfUum9GUoXQ0Od
20ETEdsiZwnOC8DBWKC8RIaKE3BdZ09OP92bt42Qn+uUF58yWAnRTByWrVEddWVdqzv+w73nQkH0
hTL7QMdpolgxZ7BpwTjzKLKav1kYYLnfJDO+C7hkcA7HIR7SNG3/5sAg6uLU/tlQ35KCluMws3hR
uJIMSd8hNvQK/db87RhAP0NMLJGQzLFcGeHn5gQhEQAtiawcqhPKXUaqgZ90xX9lBaNH0xQbQ9oq
aq73RW+6Fe5qtnXzzCxbAbGRo2I5aQC5EhydYxPExw5k5WTb+8HOXMb78D/h88LFqlWXDnHRX/W3
j8BfSgNT0XTx5nsFMcz+hAcTcUrmBIZXMA13ER1QJobKYRlSs9yKM0aX9+5nP942l2FsxAiQp+ck
Hq2G7Vr+Z+JhPs6VEsU2dK0R9IydAbvPp2VnyRP0gWaoJ6d4p1jAbSjpQv+YrLEVqhtcillF4bUb
vmHrAqFBqDAMF9vWqWhq5pUvztw5BBh1iFm9HpwzmD3RBYucSy7KfdMAjdTcmKFNJ8WCGYIQUYCI
9oCc9lmTwA1yyyGrvzBvtyx3XuSYybd1waWjQqXj67GKWl/Qn/2ea728MKDRSkMj0gmJSfRSikF0
AU4oc9O5BJcSz1++uoP4piEB1Qzb0TcItcT1vFDrBiBqQsiJ4AU5RdcwHr2GiPI+FfWc33p9Gm8Z
L+QrZpJjKGs5PfHGUHMCKI0uN11rfsl/yU7m76oUjZPYOPDtQiKov9J6d/GxVSk5AMzkCrXJ+j76
ExbCJ3j3qOgbQkaBEg4Gmik6zb7//AuHCVOZlMyEXTz/ayR5zexEnUiSNPYqm2CvY00xSjz+Me3t
3G+ENanAevNOlU/pG9MaXTBREHwyWkobN/4SRUltZwQ7vyljbihbS6JRASTV4ploYbxRKVzz6liO
s8pknUZPDYFbn+FmMrWgKRdlXbLYjyNyNKpz05U/8QFvHUk32CtO62uIlFdHHymgDTNsXK3+eMzM
jg7WdUzTFfbW1lVC3dY7u1p2vYT6vJrWi9RG+5Ip2BJj5GWxyJgbGXYahuMsd50k+YSLDfbOyE9N
3G9N5B716e09Xng+e91gkccHPGiF/aQpLsN4kiwAW7L5Z9GPXH9x79VYqAJKxhq5UGW3kxYzDift
HSkzAgqKIfiiBhzaGfjbfEmhE1RO2WEnTxfH4aOwe4hX9X7vWfNOGcN2dgZ8Vv4h71jJnAO0VM+k
ziZQMhKkEmHafs7xeovCRZeWMolPfZChFagfxXhZoxfr8ZeH+TABOwHAHlg2OlgbySHmeaQhPFIK
IkdEFv4wfOdCwWw5QMkXHuXbBwkPnIBdx7s3uPgbFiqyqwUj0cQF6jnXsyQFdToztkiGHQxoyBLS
leNTJxkk4HK3GyA/yxqj2OMRfyH7Y5DY50xEUMzo5vx2ZXyKq1W6epqXPi0CqoH7r4bqspnG6LpM
iCr3d7zymm+bMq2G3gqNV0WXEYOf4yv9OlP6+fBkeNOuJ2eug8zdSOpRmEzg/C/ejLj52Z/x+A8e
Xu+7n7V187KBsIyV5VIsexz7eoZIh0vSFsWynLS3emPGBIOKteZ8k5EI+QLn17MzOZzBNtbLn3ij
rTxJeY2noznV6PhpkXlRorHqum0GRuDxI1OtGwfPbSxl0ykaHNn97JhT5jxxGCMYDZyOTuFqnL7W
fHRVkXgVYlxwLLk66noYUBP9oy0CQCztwpmS6emR3yXaFAtnL+JO03WNAPxNvEdGMuyK+pnYRFBN
BQ+rgQcb2rEZtTvusiRPx2fvfAMlip62rWfTMWT0Ud+SXJmc8vqCj+5Qa2LfJuCaCrfVNHjWPAUt
sPlRE392BJtPYfs/rmol+QSeotYDvFnBAkei7qS6haxLQiMrwDBecRLmQaBWcCtbdsF9UWYNDr/k
Z8kQropKsU3iyqvQGmWxQjFRdGoNyTdiaom/PjPwVEvT/fUYyLRzvuI38kqch+w3XxyC7ZsCK13Q
a/5pOxm1Jp82RAe1Y9AyAs/MoDiORvvKmThivbiMCC9PmQBze2ivR1K3h4M8bGUKWWLQ9J3Qk8Xq
OEbGqcY9Llrf3T8r2VdbsY0fyrYCJ6twFXo3kAEw+a1e3aiwAf25w8NR1UDxDiqP4ff0oXWSDk0X
wVLqV66hhOQl5TVJ/UAgWR1JLGU+r0/Uri/NLnsrplNvvPffCYaa+szze3aZ5usA2Mcj1XVPKNri
jx3mkapeKK8bKOgiWWLSiDjd9Uym/NNhAzGvWfuovdJ/DCsIG480F6yMgvmP7p0um03qV+Tnh4Q+
XqY5n7Hphbw7JNWXUlxfglBlQ4Uq9oARlnmseJsU/eWXSh4Pe8iXBXr6dhBGLKIDH9aOVhcW34L0
E6l6ELXNRSOuYzGxip1j6Picvhvo/HkFPl6n67FstfqIreT8WbYU0JdBWfO6LWvCUgQbs9yONanp
+VZBehBeapmRL4QLlOtz8/+WhEvpj/2UdILSJVeCZ6ucjT2X5gHG6RmRW5mZ1y86p/8QrBRE7oaf
XDAefFeCnUNraXLyU1/YbyjOdCQQ+ChCZG95fuGROsbWeZxaRKYq1OXGsBfS40SZjpILq2KcYalU
PxbK5+pMujnDI0UR2zhEfLcg/gdxyKR0/Z86q/e+V8K0l7sirQm+qBrFu7MS+p3BZojzZzxLNKX2
7gLKkSf8vG69RHB3PCgN5P4LeWuJbQAfOr836etIhgyO36pLiVmZPRJqOwvs99zVDjGoef5F4P1L
KBZuQhe8hqhDT6wrzSuIypaZLC1MTZKAdgI8RMkAFoPilz4yeUfx++fT94CSC02TCcLS4Vv5vlg4
BTlNZns7UhkJaerxEwYd2w7xiYYtAEpzOq85rqo/nQfDN2qnHH/sRKmTgzTe30NTgR8P2tyJ+79a
O73qtWgctZ+Ri8XXefUyFuFoYVzvAtX9R+sso12EJYaWSvzfOU/DNwURGSTvIxibP48q1oHFS4Os
mNVKzsJdaHdR2pbwOBLPTCsMlCNKXsDX2b0tYK7543hxaxXGSuuT90ua3cw4i5DNl++mmSbrtpBp
BdJI35x/HYyma4WEHNue2+dCxROhiAy90SaRg/+62f8p8hIXMwqItNcpLxbXWNmaFnsJDuaHt1gm
TRrid9I4v2YLllxYo2aa78Vkr54iTlgLyiR/cAGYdVBp5JfSZLYgjQEA5B8hFoShmWtvaut30MwL
qtPKn6YMbuK6pRWlx9fQdwROXbelx+GkTq4T1nc5L6BWMo/NHV64EkHzwWF28GdFEIBqtQch5FuY
xcgLsPb5prNlt3qn1ASMCKUn0zOTQRZuYEus51zEYxxAS911wbWMXa7unSeDFsEeSqdxDUeq4jZv
QRJpFvmtrSJ9ITHT256JUQYaDXiercd1MN2KMebGiW+MPC3AvzMCU6cyhc8CZGlwyxCW4EgtB3LP
/hVAd0aFgXLh2MyqyIfvBsso/CSltauOgzE2m/4hFUG9IovAJGXKoIqzDqPLVS5U9Hz24BUyLUBt
+cJEwTFk8rI26z+BLEaGZIeUIPGfSBn4i2x2xLOtgJM6G+74s6JN1qPI+zyxVGtnTT4zmhDusR2u
os/Cy1i5B6G9WAanzUSueB0z/R/zrDD/cONOK6znor5ILeo6UYyWQtikuCXpBD4kUkEEG7WYJ5+i
DHw0FY9efhHu9YmkLIzhuxuSW+AGPpQKIIIXPcD/t2VLV9wjLCmLS3ddUVSV4Lmwu4iCbInvGMap
HH8NjpNb0A2SZmmn8Z1HDfgRhaXXrb6nZdAfb0CUran3fE2jn6aI4wJ/x6a+Wcjrif3I11rW6vg7
r7JurmINOyw3v+e6kEb17Lq+nb21TRC99ST6qjs7hiOifxanxubLQBJIFbXgCHsnOaRZ0k38sWRt
RlNH3GrSKCnjdL/TgkETGBTJyWWgqjwdgfyNpookbikQIVzAE/CMwsOjmGk8cJGSk5rPtiZpCSqg
do3FkTFAwY06gbXtuN2aDropMbA5o0nYftz2MBVyTCslC9v0hbvR8y1jGfSBgmnRda63rrR9AL23
3M7fdBUGQGmx6F0L+WDYYh0FM8lApSwhJEpMXEJK3DgpSga5iBG2PGSGj2jIPKQkL5Le7+PdFy8w
wslRFLI0WjIpC6fWg+Lbe6nDY+7E13NMW9I2Bbz8GhIn6oAsN6lkUyVrBj6EzWLOGPoVDBsiQnzZ
hpUbvY2cCDrxMXG8+oQ0C7jhOLCGJ9NtGMLgkx6jjiEW59R62Vv2RgQa58J6vwpjDvahRta1RDXa
q5VH1w+r5fnPpy1JELEkeeng7WDZ8A69VSB3MrmDDHDsetuAPNY7Ys4mHb6dxMtIM4wfK4V1pxTw
RmoL5Dlbdgc8ie6MpQ6ekyyW3xPqqfusxgJS1gaOA2JRI9uub9+CIIxTm1Um/DnYpmI7mKex1FG5
XqjpOzzmkNOh/YvynZboKKidhaB1w6sCM+wrGVHpnCrXY+mR6uFTh+Ts8almNPBJDARwxEIFH/SI
OPcIM+KLY7vonVgYER0OaCqplqWs773X+pErmZvbHGRm9KdZi2mld2S3ktCwEaHkpuVIoXjzC2rg
kqv5D1Q4WTHsjmozR50xRayOSnEo1MUo4HGG88xTdChhO/1JCPfd3ltZK6onXGPX0QfYGCPU+A6k
obFYWzS2tg/TKCbCMR6pOzY4DLWg+287x188oxL4E20WkTb6jRat7hQtI8QByANEGYOdoFR3UnUS
fb+6G1IiViqMEdOkSQI/wefAMaTAixb721+5Ao1SpKH95meGIzYpbqrOJR5lbyjCR8cyBxE/KQ5w
UFPoB/3WoGn30evA5pfvasHAYN9Bw+A320sEWEIDWMynwONEkKZxJP88JA6TwtKXJB7yiBt3xlG7
SDNjfwjBX+2MjDJeBPXq86ChyPn89YjmZ3ANTzrNLXgAOsLHgLxvkntnAT8cT56+BLE1FI5/zxAU
zoTS+ScXuYFcmPa6GYWBbj8xsJovqjpVHW0hgzExY1P3UMTbWlyhiAZ2XhiC8gVpTuJxDnal0jrP
5YcYUR/FdHMlx0xpb4vtKTtAb4TPR+kA814z7h/W1jNHZdxIJYYLWrLio/moxpBQFKrhso4OC77+
3I6c7DkDK5kOaLkN2JtYO1txD8KBDF9t5UYzQbtoS6fBLXLyUiu9ECoBScCfjqHBLRfK1X2IttDI
aFxM9+5AFwbPPzgLZOfl6AegWiA2ASrOOyNjDwy3ipyjqetHNtR0ZLiNvzBZsvWcY6PwN8KGo1/D
MINUJKDKfbpLPcS6fLHwS89b/icqMG22UJPYaxHnraClfxflGOxfhopvb7idZyDAHHqJ8grw5Jlj
fSySdL/XsC9VxJ4T8o0u9Cow5Ke3M/W46rxVNAnCbyxFXRMbKKdIKG3xBCRym64nl/cJJawclIXn
LWS3UxG/+1yz27oGHBnLOAn/LfPPgCINTOpxhklgN+m4N3W/GxQvmW4bZSQYc6t13yDsbtRmqS7C
xcB6lALQTd5OSIR1sKQfess+VsLR+Z3wZGfXucbTLug5lXG+4OI8mOKa1Jx/Tvv/sOzHT6qA8HVv
oQfTIgL+hQAOK6smbIm98O9MkMmBhULxT8EsqHOY8L0Oq3m0p8ku1SBVwJrfj4Gcmf3k8qihwEl1
F/aWTSh2q34jUbTRJd8cjP3XxfF881GEOuXxvJ/RnArHlJwLRUNsdgkL/mcX/VKIWWifFCr2HEqb
I6gE91+AyjRKHZ8gd5W9fb1G2AMRxdPujLt1y+5zefdkwFzH7m2NT1J2F6Y1pH9BXcmhtf+3WpZm
2YXotjp2YhKJBsGdZqR0OqQQ3oWmhfgj+L5Sc8D2jIZe8mlI+eymv+piPbupUiCRI/L2U0o3sHfd
j+ileURV0MAcKSHbXIjkC2I1OeQCWClGvgiUw/VMjScm2oMZ+hKSMg8BBTxohghUyS3V+WrgvRxP
KVlL1wxtX9iIcRnq/GdsRvonP/avildjJBAtPHO1lXVgoH0iY5s7V6IeMcdGwRlPomrz8TrV9V69
XXEJ2qW2OaIsdXeM9unAfB9srf/c5ySxzMJq/xwrqWVwVULrilRBF4fzXgS7uxVeH/G3/riIUzzu
7YRXTMG7WwvwNZhQGX9AfClWSlOJ+b4CtQElVagzliQNaAvpiib7Tr9RHl05r55KBgo5tJwUu/fN
pYo5oljP99eJbhzxn6xU7+fwbspTcRh8Rq9FLVKYbffHYZ3RaGhNDDu5exzwWHMwAjf5iLZjleCK
o1xd5Ci84Jg50GITeDb7Mnwrr9JNyVhthhOVINd8jPxEV5qF7bZxG+64i+D+NAlFcH8in4my+fXE
+3E5xuPkrNquSYirjX5krAyFLQ9hiJYzWzo9cktjaElakdvvdY/epaqXtPbe0461BuAKogyzMMFF
hYesXDCYH/+bsbrC6QJf861qpVkttx6hCHZ8CZlh5REDmS3NNrar5cbIO209Def+ooPCpA7VUq7Y
FxlbobP6+/qrHOcop+hMaG80JLsWDFGTu4Q+ary9iDNrukx2oRFkq+ISLxtJ0ikwNNXsj1zRcjAo
y71t0b7kg0k2uLUy9GpzTGn8+z7hjFamI6MORwtq13pYBv+j7Q70Hrs8gx/XsCzdoOjWLhqgCyU7
OXBSTxU7dVNPxHOTjnnMx9/s5huXKamzQ8waqpW2vMCVjQ9AhBdQ7HNKQSVeGo2VNvvOaEYWWt+S
jVW3FCnHP7r/B5a2+LkCtFhe8u9MBUY6wZXb6KvxFGCbvUO9l/nX0N6dEawgDlaxggVH6a0sewjO
afePx522KVvXf7zJ7MuL40esYOLWFkKSBxMipptxe+Ghv+OK2qekmiljiKnRBoc5aoSb8TmZoRrT
pM5wCWkTPZc2NlJZ+yeKY14Tq0U+rDxXIkSnKXPPUwEp3J+3UaK/9GzzJZEXC0znG5dh0JYJhQQ7
Eqtz7hnqoWefw0rDgLFG7Wytkne1Nloa2zqxIqjHZZbCFsWJHVowD5EmHdDSByObLrw+/OF+PJ4E
GltOw04NYOQC2wYr+zxe0YBoSEW2H5lTGrBVgcYfjsHhWght+sy3PbvrHdX7Hbj0MGeT5U0S5tKC
qzue0ouiKJwoNrjM7UV21NZXzLF6t9NL//TLkWe1SHHEk8b5dqN18Qjlji2vVChFZoGD2DJF9YCU
bZf0L9m56d1FTutQwhN+AcFsNiVXftpbGYvZh4qP/LxBNnLExOUCpS6ZakODSCIHLcKHxNYxx1OV
OToc42SYbMZtqWfhK5oIly3JjHXZmNbOr6q061OhiMnHTUGoOeH6SEgFA99tZHF99RKBIQskVr3p
5HQNAvunebXC5adPtD9VySH69I1gixRM2OySo/rjsMmLHtWkDija401dzCxcTEsUKpgOjvRqfDWw
HHDf6dSzs6Qi8/NM9smfGsJPzEgnQtBiQYHCv9AevaDm9bQMhMs4nAN4/H1HkNfU74SOcXaEhqkl
Vq8gVjszPf1GBH8j+C3OmXex8El1MUP+KmPyk7PWK8AH1Xy9OhHRHzx6EliCLwgyJkek5HX/8Zgd
o42N0FSW4DN2+K/ckY0L7KNaW5QdwCirQl0Dzex9syPXI00v4DHAsl2WmHf6O8GJdIw4IZg1+0hq
hiCDDxWE36oF25eNLpb307EYZs7j288LlspYNkFZlEcApmQFWry5kLhaxuGkI0dVm/9eDJMYfq6T
vxG/TvQU+U9YZXowJiX6lbd7l9rHR+cqBLj9sSTG6ejJCDmf5R21SeZbASOnFrfdDFu1NGDv5O2i
fHTyXiDZ67PNuBgO8sCDbWciXOHx3zyPeLFjrNRJ60uvWbe62CPEC6bC3eRw4RNkFmkgPF5aljDc
ZCvbhbR7J3BL0cdIxr0Recb2AhcKI3OVz+o2V+p4mhymnSTdZBVdlOEnCWxavvt8m2085GQTg3GK
qkxd5Ao/IL5bUXs9eNiDtYJckbFMCRwZsh2mGvG8W7NY9fpD8he/SoH0uQr/tN1WARoDohFBsS1G
pbZRHBGeN5paIkN/V7S89KPVIXhrNzAl0sLBhWcXeQdiZgVj94Wkx91xqHqRTp6T8uYZiHkFqxCx
wovVW8ihLJzbz1Ya4HKtCRyK4RCrNflpvc6tX+BNDiSFI6Jw+lGSRjYuMTDlkiqeNdMoR8MsjGEj
XeKU88ZfpC+vfI3K6usxZuBfb16t78kMjE4vrW47f1dFkT/T3iMuQXrrwWXcRtBFTJShAzUe1/5R
QEb30NidJlNHe0/LxfI396HDmxHM+u0EEJeW8kVN9aDiZmq8CdaUFd6zg7xkwkBbTf/cFGoFrwal
a9NzIzYwlmLYN3GLF4F0IaEMqDzOXl6OXh6Ts1AFeeXafCGaOy19jBqQPOHxNDEptTojWjCGOE6j
EEHQe4hshaWGEbtV0zXjY4wnEje3huaOHRnv5BX0Uh7vXA5QLwZPFPZvA0BVF3OlrGC2fLP3jNbw
j5Rx7CP6Ze5UbtaSE3yaatgHJTlZYBds+9xDddwa+VQk4ZDBCqiQNYUh6tdH6n8K2khH2N0MaXOw
ZsN9PRwWg2yIyEVjzbpAo5eQF3cTuhHJNWLwTjl0MIj26At6AwiVAi8Vnfpfn8NJRUvRuG2AO2nR
p14cb/AA/TmKARCpGI1TzmrF474dePxelWa6ukt0Ohp1HVexko6tD8UC08eDlE/dWCjcObjRFDJK
j2kPlS2ky5VciJmB1EogMkn7eaXp3EzdvqhURsOZhLCsLnqALyUHd26edlKMCkIW3Xvtcl4594Kh
lNyJK5KN9639eKrYY57ZZo2T6qigy4pXh74xADRx0kv74M6vdn1PVcueGAVEIe06T2kNsv7+t3rk
Jc7DDz5kcirPgSeuz4tRLNaO/VzGw2uAaz5Kr77B7klw4ElFP6iGnTjSAtoa1N+iU5Xc/3T2P9mA
H48+MLW3L0Xoiet8YYqLQ7U8h+dFmEtKVBiEVsGRI/VbIBO+gsJKFAfJq9HJ7Snre+6AcKuyd4PK
eBbiU2vxC9PVUMMgRsnEj7XK/HBRQ0p2yxDGciZEtThJdRXuaoCnrx3ECKEdySnYYQjnwkifIQOR
9gfWdEMxAQPk/+9JpnIzY6qPJaH7v6LANpTMjyMSff6Rg3EWjI0s/C4UcbiZd9kIpUtZtf9Ims5m
cFNjG++7uJGTYPg3Me+CzDBUrJmRVHMe5WQRRMBrGxfhasMLYV6kzvSK49ksTwczoe4CrLm6R/rI
VhD/msRm9bIj6tz9Xut+s6FjH0Rbqm12iSMrcfBH2FQVP+OWXwAGmv/rL03aPv/cozE2AFM3N3RI
bRl0RDsVxsxAI+j16XAkaHvCV0YgF9GtDDaNbMKXmp13vCSVH/UwdHocZD6Ze7DfkLhJqvrW52hR
dyhEPrzk6rDyB07DuYZ5ihz38xG2jMX6lE+uMwADD7xG2AthloitsUHiNN/hBCYTw30vMBnI5RFc
LXRVl1S7PuMc6rW/COHObrfiIcIzsULvz73Q1NomNeic7YwUxSXkpFxlJbiiAPo261I0qSuyFY/Z
BYQWjb+ogdemiPFag9O7MRx6CqZMDKSzuNM99+Yf2+PuZm8PtBfniWySQoBfcm1iJBssc1nxx0/w
g+Qu+dh6dV5zceW/tn3TILSGW8yWdbYUEpQwc/jDdKfg1FPtgUr6lnClhQ/IkJhJ1t65iozYxeeV
GzNyMLyLhHZq2z0YhtdKtTUBdAGDpZ/MFucf4hjaHjG56YXow00/PzYS/n/a9ksHGsDeHKRN+NLA
/xMFv2LTWZvY5ri7zd7nfRLJ/6Vg9JGDboSj2CoQAP/U/A6jl5mywe1NtUqqaZ6A+ms/+F3deKqS
fO7cy1zM/0vkBwq9E+BiDSfYd7xXk3cKFnz9qTkrFnCk7bAoG5XgBXGq/ZGG9tT5iOr5FvCWit/r
1kdnl53NdYh69/Wb44herCSifNv8dpuzTj+SEpra6DO3cGXHgDIV95wrCBJFLUk1CZdyvxQufwnm
ivYgMeckaWjOUFHeD8kAeYP37foXcU8EY6o1vvu+plNSC7l9vBGBWvtETWIC9TKjHdHvJ9teSjwI
g1kNc1zo0QCw46IF22khKgqB72HrgQn9x2MitUGCiSZ7LEe4QSZrhldLXHrPEEaz3WtxodoVmnc1
WQu+skZM269GD7xWJtYUq3RNCNqsS5Dt3CRvtLgSzJTpeWn5Ld4EG1IQnzB2oU9TyWgYgx0KHF6V
a4ryfC7Z7lBbHzOantOxY3ex7Armj0bmyrqKoujKd8zFcdb9M7l9BHUSN7RCWkR/AvY58Hy/UZIb
a8wVsJE4Ra+pUszKJ2XegeBHAiKGpaiA2Bj5Hl4EWrqMB9jIotqc6hGt4znThJ/9ZBzWL6WVRoJ5
SjNkpmeX2xIXzg9Jse7klCGR/nTaVRy3pKkCsevTQGVaHOcMBeATGSGjYABJLgk52JcYi9JhUDnH
RBZ97gKLRv4YiKoTOyKk5W2OA0plaheUOP5zkJQhbttpyWgRX4fxrb/sM2SyOfqVTGA9zQhhoEYy
m1rysV4Vpb0dG4vYy9yqFuftTRrnmfXxvE6B8K7vS63C+9DuFWLvGFvlYbAmwZtIaM/hJZpUwJSn
5kU/VqgNJ1EQFwp9jbNRaUDfu0aGJVzjEt1+cASmb4pOEHEZwVW+wz7YFk3NI1HlKYJ64UeH0Y9X
NjEV0EuFsXjDKaBM879cn3S8MM2AvAaBQSxW+F2F82CKrTNsnmIjkWWFLdaK+G35RaJSEEV956i+
askR1WDs5wtEAVdavmdEkj9zpgrIeB9hVWztzecojIKorUGFcJs07pmB6Q0i448vwOiekF3u8NY1
uzDA7VYCyJETk+9N5DYVyDZWLVmVwWX7aPq/qRTuY4iw0RbtpDZB3T2TjTdmOGTkvSlzh1mXNAuX
FcAwoqDHOiyLnyEYjZas537kwfV1nvgtcZY09knN8RDkzXcAkRFchjIdlpq9kaJX+ULlwug/IElo
EKaXKsxdEyUo1T96EveCt9pqO3Fi9eXJ7vktIUpCBDQXfJu7V7Zd4NXsDVSezQDCCda2yroJiIb5
qPjtY/NgHm2d8ApFNhKKYrTOq9bGQccVrKNxTqb5RA1XaBIjBdHZpKWH2VQ/LeVaFiWAkA+g7X7w
GDVCEHPSvJZ2mkC6CBkoJdiHFiNDsRanjKj0xaiAvmhpxIzz2SctRbZ927DFOJT1d1qWiiBJvz4G
fDuugMzUroAx1y8KaSMZvFPpKtD2K9wekFckrD3cou7OxQ/U2lcbooSPruO/D5335Y0wYqDWhnjO
9xnxojRoQBhNKVKlG2YhSe3sfvF+DpMijWAtTRLASEihWhL4aWln0i446wamlVniMZli8qk1byuj
mR9L1hqS+Bias86iqRSYCSgehK9WkH1HFZ3Dgd60foYHDd5H8BArd5+RGzz4yGH4oPt7nWRJGku2
3hvx+mD65PxMAm/n9AaKOuVv5Q/auIe2Z+Li9FHJ3mnxYbjDAo0EtNZkXqDIh3mHoAgXH5Ds/GE4
+fc7lEU/DCPeFxUPEKuwbi90zwes6zN5ncFAs50KWlMl/0gmQsEdEdrLuNfMMnxcQLG4yCZGq4/f
Wg64YFaA7N4MLfFEGX6WQIQmBHO7v8tbB6dIymgO/583Ed0BpwrWnlB4PZ5rauzuZoaU1pFfbPq+
2JD+eThUZ834/iJoauIvPn3ppm3OUmJHRM0tIcvxB7Nl8MwOSPVeFI3AXDPOPeTcgksyn64/mJim
clHnCBR+9l21Z0avhL4sN+i4FAJ6cPWrrejI14djZ5bu9FznmDDkRcp6zlX+xAmFo2MqIF/8g86i
7uOApsyCk4f6v9LSBDQjuF3g+oOATIX0Hapv9tGbXCe6BzoePeZfZTV7qm1jN38VtNf3AAMz5cgh
KFaD0pjbxgbED6kDHFMcd7D8507N687ZF+cCJ1cCca7kY0z7DaO6TBKL2gHLLEFkcWwoeZ0pciFF
08TYs2NFMaHQeTutxj/4KU7KjdUeoKuoiSYEA8zhHX6OJdCOkgHuCba3JTkkMvbT4nnPkzQsWsp0
kCSNpVArvuj6uDHP+w+tE8wCH1wfgU4/lUDSBK89bJU6rl+tZFRWEKPSuJOkdShDS+NJcHZms21D
F0cLdN6nR15l+flac5MybQmj3YAHxVjy/3jRRSaHq7lybui5jlX12fPg/W/31tiLH8Yjh0Zxvxtf
ed9UulQ5D2wIgXi1jKbG/UqQMpT+WQO+rtMDr+e9YAL5nr6oP9cR3DVnfe7tvChtMal6Ue2nc442
ArAa0q77JBey3FPaC8siOwBNu+Oxfl7UEFM/X0tc8FhlMV6RMU5Kusx2O1AQvLXkAQCE1bMyyH63
Ag+7zcN2rAgv4D6JVvkjhDb9PjCQSCSo6TfWpdNSQUbxfF12i1kdcTaQX9nMuLE8XnG6vWjleYdq
7FhIvvVkaC0JxCIL6j6nA+fpYAQ2akhx1TOezxfQpOYVP9uH1J7il+QO7opJ672u9kajXdmo/N5h
nK0q87gLQ0XLwwxlId+F/BFDtiKwHEOPVTtQPSkg51XgJeKeJWJJHz+e8/b6Zh+p8/WAHCK8u1Nv
NFHNLTYkibs4IaVPPTnEMBKHXKFagUGnrIgnjGvMzkL250DDCmOKUz10ziBbI8EKQ9Q+ZW1k82Fr
kmthXTdtCIOIRsAGBM57cEV5iKXso7erNGobS3SwrN2KhqVy/gj+kOu3vh7kmCar31VIaNu/4qKC
/Vur7JqTh52w5m/s/feomoTaZQ421wfSJ0y3v6tDNN0Xt7s89C526Qj5j9BBlI59V6qp2NV3PDE9
8fX1Fg/24EPCE3fQrtC0GZtTFyx914k9mbpEeJoVUBTSTo8L+yBTf1ifO3FKlJsgoRx18E/hejqO
Usfg0hV7vnx84CCKvCG8whahWEKo9qQuP2bJYgbdPRGlgE6IdwhHt8W2A40Y2gX7XPqpTeOzjHTw
Xd3Y24HFuDmkf8eb1okeq8zz4elNAeRXf/tDSv8ROq1x6DeoCvtpXeocPtJgmSE2Km48/khHc0Ae
93iTea3MCkBPc+pwzdB2gsu0LKTb5lBcuenn5E1joUJqOepTP8nBNBfLOQ9VNdcFtEc+Qndwud9e
/9qP6CI3fgiJ/cvdONcWK8Smbjf0QIv7n5j8+gtFaXlZNVO2ryA79dSLunhtakhzcj5GzOsKDDKb
QzUALxhpFLSXvdcTOfqQO+5IM2ycnZQ50UFh8Ac3OmEU0l1YX7MisKtiiMzbXVKuQjvIr+KSAiHX
72vBx//Zdqo0kSl+0YxTpt/iMa2v8W/1RO9+lgE8BJHtxNfmU4wSRdeVwHkFY92ZSiQP2IfSX2Yc
90bFstDis0X/FTzIaafNCpquTLHqgio0DJe84QX7Uk8c1TfUTxeDtLl7M71cDjBc4NZOlLJLg7vT
DK9m0zf4zxKTS4efg7g7MY9bt+elOCnoh414ATgciFWnFXHFXQCr+QjzsJuCJtB+ts96W50hMUDg
w6cTDRm+/D6On5E8Oh+AhIbO9RAry8MA7TkguHM/gBokGVf0d5ywd4aqBbaxdBO3EUrd2slMX/cE
YbWY2KkseFnPD7QdoC0pQgTU2GvaB0NFSLQW7zfMZUo74Z6i958tFUFZZ5Sx37ybtN7EMMimEfyj
5uqDVmATkPlK8c+ioiFb1gblGn2xFvX0aL7shZnyzPxvusWuyf+w/Bkb2RpGahH0puFPTQ8n9BML
017Ofuwb/PKy4QjoKoBY72CE75+MjSaXlKfThNkI5rv4VdtRhZ29gravG3rWAUMgQVbLtXUsCZnR
oO9uRocNsoqVKqlej/vTd9cj0RmONjH4jasYty77A4WfUyn3Mq88Dr7eBbfQ0OJVcdkyiOyx6Cf+
wa2NkIlZDHsn9DPfxhR+ATPtZ4cjcGOna+69aU2xBoOy2NKrZ4kxMYzxHnh7dgbpnxKXJ6OgqYH4
XVLL7R3eLr4qdgQjQ2oQtIhzmjiYe9VAwfHZcOOvnm6sqa8fUPmMu87DWca62fQpOCYGBQR83RJQ
zaxMelcemme+20L2qGF4gz3AyIAD7DiUH++9xDIOBgo2MLCwmgb8osJp6BIoBDLA76g/ycqMHHaI
A9//xsjznwIoMM+Ws1P+0b+GbHn3qJq45EOUsysgETvPfNxCihHaMGneTywHKF3/m1iCxYkC7akF
bfJRJ37HSqojYIZJSC/ldvG/u6MSUqEc8/pW9IHrCRTHv2olfxYobCCQOQOSMXnqm3lJmrqKyVZo
r6sTZR6EF6EBcfGcJi6EA4W097vw4kd/rzCuZ8X2Yw09rcmrXob6q71oWZk5+sCYjd1mo1C32TZh
Lr+IXsl7VjbQZ/zm6eoxBHxWtcakavKOzsmoGwzdZ3alIdK9d+PVatv5e4UhJJPtowFt7nWG624a
WkC5varAB+wrb5rTpufZiTe3432erT4v6egZMWKN53O4hmTAZqv9ndkN6seVvZeGTonqa5aEilKV
pt8a8sitJzdYLsTeEpVcIzyc2dUPC00S/YBZH1f/pImiIckiSneYq9RUy1Szlck57841jGkP6Nyz
snAl4aM/l9DYuphqU3homfq7c8RD3RNV/5Ma33aX74tFxfdWYvc/hZiIppCR3SzBSoKLHKVvGKFC
wfEs+oujbxyZiew4dt4JT4ICvVtCeBkLH+ugxn3DUVyLk7H1aZF4eKCeOYPTpiRjYyxiklthPORL
BettJvdudYxp7sTsAsWzHv/pFytql/fhaQvaaC8qlRW3HG6Jzi20wQqxV2Po+vNV366AtLuJ2ioK
LpdVmjNfcl61HyW5knpty1JTcy2Fgd6f8S33L27wEraVaMpj3DNdv/y9VxGe+QUMwxwmuAdM7c1f
V3JaghnwIu7kZw470/mC0uQm0XAgeeh4uMvjfuOzqeQMTykteqY4M0SEqd3JfJTPuUYwvXoMznFo
zmdXlG05LTpoNeA/K7Sl9XhfJJu03PTMf7yjbGnIKIUcik34WQg7BMk3VLUvq9x7QLszT2ngK1Wu
IrkXMwnArT59eJfEUvhP96GpPLPlmLuDjTrOFdXbz5iHqM+xllcldSTJACKRy7Evv/kAHegRAdFU
yqGJRUmbVJYSoU7Xy71Oeqkv0uT2NCCKmXA9LFpeQbTzdrb52YGHZrXoNAyIwNKU++YWai0YkKax
1tGWyuhcTp7lBXNZbf3YgJJ47CiOp0OFGN7weIQaZoEikLJJn0PrsCLeDKBWMms2VEzg5dwqkHxF
mlPsFhoZifxgM8rfe8baZEe7/kihuBpMIO7ezjg2gk80Q1OtU8LPAXD6Syue5ndqnhOOEQ58dlEJ
s9eDvltIIQRjBNpsySmaSJFGcshH+lrF363C8WhmoACdMDqxQy/FPA6WffipddqHTvXXdh1wnVhF
bsWAWr4839W5qZR8dn7DT1YmHL2hxMohdeqqBWnpwkz9178yieCZjjmXT0aa6YpuZgM9zwZil3uV
d9SkAC9hyHV3j9OiqpLZ1XYjZzCAyUeiAEzniUODqWAfZud57rjpBvzntknVXXY1fHje/b1ulgc6
/AjS6X9ew4saMss+5F6eBIhwPVFmdCgAjddMnrnjWhsLTxqCoKekwaTwJAueUF0h/FLPwQws8guT
cOSLZLeVZ+gIiBCO71CUcn8tApWBJld6TGxDtz7c3k+UBRxfCGp5wX2vVxz1CYwYDPMfQIpJAlP8
CZsIxKXtaDHcMXEy3zQyOlyEomEi6DyAkTz8aSgzgYQ3SwGVkvifnmCVWSXoj63obPRX99kgQVyd
HPT90HDYanC8MRoVqLF9LsI7MmPmTn906HYpv5UN0EA4lmS55B+eTETVJFPCkzNk+DFTZ/MFP1/o
07SHJ9Q06Qio+hXrksOIAPJC9+VKMijy43xXs9BcIVbURv40EYcwzJd1eITKklhrivQNUV9dg7Le
92RZEN1tGD+dgbevqNX1tp1JW6YJJ3Gj9q7uYfn2H2duLfzR3VBZurb+1bX/8VvMGZYVCz0F2jAI
XxTWfgB85uH56u052sFs5iX7o8R5PYqimHQmHNr3dSrkwt+iUtXDThJQoTbig59712MeaTXgu1eY
mllDm11TYxz5lOdtpStmLvfXphOq8UdiUoYmQbf1Y0svRXn3ARqgHOUQAgHf15JsXA5vcVShJ6Yd
g4fdRRdhoSpN8c76o3XTTlgA36RmTezcChQlpRUcY5yCtEBUKCnGz1cA30JMzSVsWf6dYaMQX5kw
iUFRRwvBuz0LEWPD+EnsucjiyQa9j76J62MCTMETq/9hP+UxUJy6z8rmIM7vVPEcMRNytDyJjVts
9AQvPuDpPSufzHQdEI8f4wl1ibws9XUC/kTKEfiHsc/kMEJr0+QUxHY+NUhQGjxg9H55MNJk/zX7
64FXGz6Uw9S9QFv31bjagvKRDSdBjMKDGEzkc62s6X6n9xqnYwqgc9Ynye1MO+VvcQrXHmh/cw9x
Jl/sJDzi43VWLaRY6nlH2yYXoERY9FoiZC4xwwjsgGK7M2BvJuIcvBp9cJK6mBX3y/I6cO7+xn81
LB34wITQU03mHeG+2G+SYdPEUAV9KRFMkee7+mOvsmXdR8Y3nVhkvr6Gg232KiRDezCRxAqrWJTJ
9zPOrnL39Mol6l6SD6Q6LazJ9tLOcSokWlpwdNZl60RX0Nc4uNR9HnLfnDKbMXxlZ4VcrlDxQtqn
C9H3DWoJ/9fYirAAbyqRansPsZScETvodLxn5Z9PJbD7DhdVbW3xuVUjo7BaKE7smDjVnfLJRWUa
+GJelouShXXCcbA4G2EI7Nfauq4BOTz1UMblmJVljoKw1QowTu5GcBNNicZ2fEcsCgOuZOMZTdK5
7WxTgeeJdL8UoEivt0c0QAT0BzTH5FrMHVXho4R7Hd7v1b2sq7WllHQKYlxbpBHaNlxZE58VNDaT
jcTzHsaf9NXbvGqzFQBlY/nq0LpxpvAnU1NUzh47LYDN6LKxasw0e2zsfg6LzJSMln84feb1ecY0
uJw7ziY3z1xvehZbGJ+OL080k4g4HRYL9JAyoizKlwkAPOIz18USlFimp09H5IoeDk/kf+YugRnn
vLULcmYLLDlnUyziP3JB0qD7BL99nQ65VBt+xQVuoE75ABzU/+QOEpLE7Vf4nbJNAxt4NPh7x1JV
FXqUcT6NAiuaErLwKD03ZUX9+hNyDwq4U0tBN7EoggIFRSAj8lpFe9ltNZLpmLuhXmvcrkzeQjhA
ZdZj+qHC2Lxf7XFRbYOmQMK//al5bcLUB54jPgMBeqEAAHRVBC0fTOWTTNtu2YMvmqXwnrWK65g4
43NUxKMK6Uf5LAIdznCyh8DgAZm1WXVKLPntG+EUBBENbDGMKEskJgCIW3/3KdFWC9BUpJe0QqVv
IUo+b9ofxTLuYMVN9JOOyAVr4khX9Rj4P+JnMXK6T07sKzFEpIlgH8RafSxP9Wwxzqg6kj7+7zim
BoG9xPW22dA6FVVCZnthJA1qaZE2+n50CUS7SRHbUHOxyzsrVywu0qCsSwamEQqV/8Hp2V646ScV
zvK3rilppIaQh4RlyED5EVZ8xdVaObhgthDXRLiVU1ha3II9zsogq0Jum9eVIc/ARSLKx94hLhX/
jqqvcPoGUN7YNVD/WmthQSEG5i/364ILWOQq4BvjE6PTrip0SJpKODObYomHRMenTrSzqtKWFmOz
kdkoAVCCMc+y8wa6jrqK+Os35wH8VA/Ny0V2EZjCHCi4A0p4B56Bsc6a0BT+50mP7fpvLGUQncTM
N8kOTuOjHSmnvFXWNR2UjQbYDmsXpsEUNfOtSjhntj3RfIWEZZf3oe7tjZmM86p9/+icIeAngwQF
GPI/kjFWD+ZplmKK01eYjIHNeet5FzdIMUPfPqkicVb6ljGxbC1Fk9tU1UykeVsZkevfOmjz0g0S
271m7gJlmnEYe12So6oBnze4QFyrn/I/CPoo5FLJOUq1SqQx+UA8g6OEU2wBhguYYQmecHytgi+L
kVuDtjgbaTP6BejPYW8IKqG433bBBtZQxCtTA3nZJwlfVFB3jyCCGOVd5O6W+Qm4CFSzs0hTx4j0
cry1PlgNaUBlROoyG+C06gzTiyXYO6A1SCUae4xe7IOEMcHRGxKkiIvS6VZqXudU6YllxP7HK0Xa
SAJwRR2vuipksrY0zNjyjdMXHTADMNTz/jWKkZdeo11GrIEKkQk3OBiqz+Oo5TmC/+/QsurFUczn
zJsB7XTa/tBuavKO0RqGCKPgXEBeQE7ZDnycVCVoEDJ+XG8LX1PCvDt+Kl0kv1MeyXvjtt07WOpd
nqDcvm577hjnkw5SjoBe7BKHg0gCUZRv95N1WPVvGgFS4JOUotH/ezSciGU0zdnWsuHRD9xOEZH+
IYVMf3tIal/CsslLUVg0/Ma3SdmBnsWAIMon8vNnsBcDF0k0e+7/oC62Q9AUQEzAHxY0bS261hBz
ppZGktJEtzLwOsTh5kmC0vaAJ+3+CzOCJ35+i0toCwXNRBdQR7YnD/4M0gZnGfoYL+j2N8QASGrx
KTovlfKIx6TxhMIBYPFvevVQJYjNN9uCeJXfZixEeZfmtKC06mTuVS+qN6WpDqTLjldFRw9ok+fq
bJ2lmF1uBMpYS8BoHnLf/uq8fN2caPlUVKg30k8bHDNR2ZjciVVp4M155IkgcmD0v1zURuhiQcCG
erQpJkZWkTgM1XnmdYzJyoI7wmCXYg19NsgE1rvnf9GIhdTOc3Q4JndgIhfB343V0wX1/Utw68gW
QTbKRr27x6mNG+0c/eOMZvoMEdt73Yv5UBuuLk4BtHWzMLGN8VVlaRmosdEOv1X/zSQaQrUkTSRc
wcHFyaYRjPAsMQRXl0DAV0PaT1HQo7wgjMK2Gu3HWneBbxlSLI6AJeJFZrlafoVjS+sVsy1gHRc6
S9BZHK2W5OuKx93XNIS0yHYvjH/gOGOjbSQKYvesyEuLPyfVIdpWQN4ktqhUYmCJmprfv4X1Kyhk
NME9ApoDnoNGRSvK7Wae//PG4gjOh8gTiJUStkGwquZ3rYBA9rnpVBi4Dfl9Ryui3lWl64UqCBpb
NP3S7T0huZCpVEYcv7I9Z+OQyod282OQzExYlbMxtromG46FJwPSM4B4a0u1E/bqzjXSQkwqy/7V
imO9BQQqxAQPC4a1+97BvYZm/3xBKjMRtiBVTooNs16Fb71yIUGQ/5GbdCYRE/VRw3vT4GzSqmJk
H6TWW7geXoMywUueTeJrCioZBS8zOgsOBpO919Ht//x4N1G3bwGhXK+3xYKamq4N6UhhYj3nPbAY
PTI0i1DVoSD7pEuKAqjuq9lJJTRTQFPlEGs7JJQy4Nx2sqICvpKAfJ4wFTYXrKjwlEdFhEmCPZig
Vvozq12MvK2pna96dv6P0AdD3Aj6s2ozdsBZb4A5qMYx9WX4tAREyrLlAf7gNj3CXgJoWCeJrdtQ
+xWLopRWFKG3U+OM883V8K6oHVdvhoG9VwchwdWOsEL171NS/cQJrjkSl+uro5faGDHLOO38+YBi
gPLtqlxwLhyaxEBiFsbWUzIzfhubGNur+RtC5tGMss1kRFH3hXshR3CBlyHH8v4nK9T71JOq7LE0
nNuvypANnVZg48Jx+uoiC4FaKfMyHoZldZFDwNa3Wg/+JQGDte07w2J+lHXjq9p5kOOfATjMWiiF
8iAKMCEmDFb2CeSo/uZQF7ez0E3deNGgV3W1l/P4nv9OYbjjBBiMQYa3+ZhXkgx1CVJBvGjWf+JS
Qd9FpmUac5wRXBELOoPelWpYnxJCUSMCzKv3JF+ukFdzZ19VQezOS4HIpOTCqr+3l1ppnXoGrOtx
KQiTIJJa85nV8pf5KK8h1UFDzw5NXU6+5LUxYZX119uXtXxy8DMHMJN9b91wppZYjm60qDAzrwzd
/sz+DDHrNOp2eRT4kavxccwrWnlMUiEjS6rxbm7Un65UMueCQjCFurwpBhIPl84RxIPPkdNaAyG3
nULl+d21VuoHlUGcE98vsA2MSbauGgm5yb4qwMxqFWgj/o0oWnMpoJndCzUyvcCqeYmP1zu2jAH5
6xcoL85JAg7zjrEg/oA/s2oerjWPjRk5tgBMRZxKL4PiFnz6oBbkOTHOlGXbGR09bUrWSH9136UA
y5Q5MKTt3hWUVJLImVInC8m5hdN36Urze+TNXYqBvaWppw/2CwheDP6Q+HQg5XtBnk/HkCq00mrt
2cGCat2peDh3FgGbRKAq3CfgPgA+kB+z2SWizRV9g+/2ooXpvFosQv6S4rvQ2HoSoPQV+n+LG9ob
pscR2IJqxNOfc2nIP22gtDgv+3es0aB9eZyyur7mcsh+RGK4KLXQme0Hc5h6XzKCaCZLqYvlF+bp
GgCv8NecnUzUoSjUWPACdO3KPOzSg45xGy1TR2XLY0JFCCxoWtSA85G7l3n4gufG5lNxYzyHGYBO
xVOv14kerGXwE3DPrUOXhev+h+D5kZmdzXb7GIBJhplke3bTa3Wgdmqm2vMvNO+B6ybHapG5mInl
WqMbS4HGlMmQ/oE9tvos8//qhZPnFMJkiXJnYbfPIa6+Fp6x4nT5QmWKSrunrdHVvTbPZ5RHvUSC
ugMjOlzw/oHYEg44Ka68S5Fy2NI9YzvCCXhmZHEM2FuIbNwpAbpczUmoiVnTTAlrFCb2FD4tGAPj
Pkfuj4GJjvdQs5yWyaG2KNaIShi8BjZ5iYlsBuXaQ05SBa+WOPptR0uONMT7pgogPbUB/olhge2Y
JP/xnauA+CtpSKUWmbQ659eWhZwA+FrqyfqmjRJSQGFY8tDozKHEdHmI8snU2xJrQQGpBF4zHgwx
pvVK1SbDzBMADpO3Mq5bQMcNsoV9UgvC/PH+sKJrP3noOxwLf4HXV3+d3noPLFXVEmAxEA51aeq2
CPDK/x8ajuifc/W3mzsYvV/YCQd5m+4H8C0iaDTyiwe7xMW1F/FPzuNo7oJipJ9fvWjMrA70QKPO
iTJtmteXKdhtKkzV2z4VyhtGroGeXgfbLOMzcTu33o3fwYjXVj/MFYG6uPRSswMEkvlcEfNbvzSx
aLJ60COQZfupDLjDHY6kOYdkKxjAKX5fOD7dkszNju9ZOhX3anzfYa+GJZpwsprB2Mz8iIN7vJSC
5xZAxL/pJq4UiSb0V9gPLkAqVqO2kvWQaRIrIRjBYUAYLAkADY+2RZbTD63292nQueVmEMIjbabP
U88pzt0S3acYRNM+h5qHp8F4mS7JFG1t5Cuda9RN6toQBJ49Q25HPoR6oDELlOrlfQkKeJZF35Dd
DSDSwUN0XYrvK/ndupi2RGZrKQf7HnYk4WNTt1B42pWfq5e2JufcJtzzVXqeKa6Wb1Qm/AgKZFW8
+P9V0kTL3vhwviK+fEbYuU3wpJJs3a7aW8VxjbHzFOOyI5jr8hXwQ10SQWu6kcMitO5u9qB9OQN+
ZlhlVELFIyxKVrp6Ef1/cSNd3F5C30htxgMPMi6x36a97JIecznA5bu29go+hAAlQbgb2GoUAoWq
qiLhSdUmIXffD0Xks+G4DmzQxE/HfkzgY4b2Ept6LGIF44CTTFOV4zluhATVijY9jOyARyirmuje
Q3DrSNkmeiX3riidzqWrHz0M/RBB+/xSg9sgHscgbsq+yBksZx3U1MU4Z3vfIzluqR4vhbDBh2O3
pPNwx0krScSZwmp36qbHV/mpR/6r+R/qrfjdXUdkbW6iY/J3lUPge7AQXvO4d/9it/6SlsyLz8Q+
ViqauCE6ytJky/ZPkZuF9A+D3TPNqLbHZEpOpWAzkkv2wcWTfuAR5rqiRAQvnmuRGdyuCDTxwXcQ
mqB29QaNxPBNLqn1bqGYvpdDuiRB0cUs3o0sS3Lh1UsbqTxvYNi+iimhq0e07U+orX634pSKnFYV
giaOh9TJ6sULggOCUirvQq66AEBK98PLiYVXnsNJc5EiQ31Y3Sw4Sp+y0Tg4KHkp6/bSeMVLyt0V
sjf6c8Ug6aHkbtbUFituc1jYr+5X35JxXYhGSe3K3jR0GBAcEuVr74r2yEs5JCCGTDip2yAdq41H
GDntI5h93kWaJuV544QqKRL114wx6AAynd4y5ZaFU+Nrt8syUZ0G83K9uk3gKgYNn2SgBhUs6sJk
6NOcQLlSa5i0Tht7emTacHGV6hhgGhwzXmHUASzMuQOaGEIt2UKnc4uHyMoBaPcFQuPZE1j6BY9v
oWfB2uUddEtDrGEYG4eDZm6mbqXs1BGrbJiMQd1YCkwpeKEZxvnnpNM8EIzBKSg2s509VDZ1fQa8
hrHmVwCr+bvMiEVzlkqZ/L9fuDe2/UpmLJO0/eN2MXshmaLSwswI/noLtsK52GZ9IyEmj3qvfx4R
LM0gAArzDMeJ10jFZDI0zaFiy+RIAAEh/EkunZZo98N9znVsFUp8ir1iRKxbaCny8fgOiaLBQxv9
xUgrvvERZCxPdUy4ccRhJ2IsIrcTJx8mq7h/5D0Cr/x63N3em3pwpQK4cQYm9jegrP2OgkHc80w9
WFkRrO2hCpe8ewyaDGG5Uewylavt6ezBB0igtI3zGS1ZnqR4P5SFvKJ7DZKDe4UuNw0/N+KL/GqN
CJ3PqvSHJTfjgmUE9QiVMbQxGfoPzN7QeuTGfOoOOoqF6nUeQyHJLQw+AgqhrYWLOHYBixzJRGZy
erV/LdF+JFCMRPPDlv/d4eNHRi2EOUNxSW48eCBQ3jO1hILXdRfvY4kIUuirl017uu3Ej244CISv
2C8a2Kd12yfW3zb0Lfe/S75XwH530S+ea+hlYQ43BgN7e5vHhd3VrR+czPtWoHfl8qatrywtJQoG
XQR9IgRoGyKxh23JcjsBL7JUdoa5bi+GYF2eiXnpMDnLR74piSVX0U3/WTajjWLTxGER0vvNBRi4
r/zqEzyaEJv/uyJeCnxZADrvwBRkrWBLlFn5rxPZNWvZ59m5A3IpxHPTtrwO2YledwH/SKWrOThx
D+Hleb4IffzqM0utwefyyE7kaN88OnBECNKZ5trJlQq2ThpCqVBz9z2R0JayxH+v1SeWP7IG953c
cfvMnQ5O1CHr8SCud0+3wM8Uv9IW17rkyvcLXVMMgTQEHyxCkof/KpHtyoLP07RThJdg00aKJXsX
XymzY2YKclK5AFp7S5ghCgZ6KdZseOhUNxgMFFUiUCzsd07/mKvwH+02PinGbfWfNsrcCAyvV2NR
w6B/3DI2wekznQZD9RCPbq2BpKInbJBLHhqn9x1EjeWi+fmuCNyZoPQ78ncRDbndNQqOHxukhcS5
Jozo/itmgiWE3wjoanpTUlaEWq4MQH2wjXiCo2j4XaagMMX2ufb+tcVo/kabV0RqRUI+Gu5NCftp
gO2+fmsbAMUo6y+TmrfXSHOQ2ELt2GBwdbihH+76kujm0hj+xbxSv5JVRBoO2ZWBjdK9fUfh0dDe
+TwnHgQAje0g8bNdmny5TM1CNaLUUJ3+ZZdB6u2ctHtYKRplIp5EumWACDoP2f7sMmWyOB2WGfwF
/eop8JI2kq45wRAKlZO8JBcZDZpZkiQeRgydYanPqjFjPBayGA0jte2iYdq4cwN+G1yIV+i3r1Cb
LkAtlMtumpR2LhsUMW5pHzOMf3Gf7n07Uthe1ATism7u54LErBMjBLjVFLg71VJ/2YloSh0b3qqK
CeEwcTXEnOOy0RA/baXhnpNdzjoaRJQtXURbFaO8Xs8VYWPD0yeCzWP5uvsy67xUHkQOduj9vdlV
EU5Kvn4PZBR8yu7z04R71/ZF4VbyHXVUAUbq78sonfCFeg709xcZ/MERa7BojA9NXHYdjGol2vuH
FqVkBmZF2Be7Nm7gCILKS1RaHTstbfPi7Un6WMMmaRA6+HusB0WDQna8I0wXBPe9lAp+b4D+Kt4e
BdPBeJAT5XqrMzMPn2EKqa/oyp1faMUr0yF/xfSaXGaJe3hqbHxFqtw6pbiLp/e/ym6K5K7Jg3qr
dSVZcLfsArGeAoWv2HiR3QDU8BkwFNPVp+e445MBBsOjOn3cx0eSgLWdNtAKhzK730CBLcpoUdYs
nW86J0ex9yHbOcLOipq0ZbCgaXciz7IC8x8KnRCH1+CocLE5NSVTTGnXrZxUweWKKNFBt2EZjO1h
ZLgi1Qt/5l1ioywyg8vfSj7EpcuiHWJRAvLKpalq3tusi0FrhyeKtS4XXS2mDdTcQHyoUekd+NZW
FGJJEX9r2rBfmZYtXozTlROxd1RDnMW+YOpkCunFyHO+l0v+kS0pOX2PYQpBefFEjKarKkrRSsKn
Laxx94z8q9gWS36KiCLFQrjI4+Gy47ZF49+sflykX2lf1URPQG2hF0uqbdB9cLXLzPcc9GgvCcVr
VgxaZOcDjYQNke1q4eptM+3DnSQpV0AZxAoOMBZaN2cQRvknYE8JqeTgB2FjPJZe918vEqAk6OVL
gjb1xUOF3SSjYqw9GcMVpf3BphYutKeD+hw7iZS+X7cUKy6uvup7io3Oj+KWv3GmdzqpUqEZ4u5M
R0gfJ8X5I/Hmb9stkRnSgJURY0AjaZMUdDM1Bq1/nRE0JmHeRqv/Q8Gg0CqayaneYmn1SeUCm6Ey
lkll3uNz4nenu6B57z9+a7KErA2DhBdqfIVYPi/BBRUF6+/ji+llJic0vYZgPBq9M1m3hKkPKrAA
MX2TjmlsCsucIOGj53adGZkbby0ut2037xH+L+RPdyMHbxk60QT+YzBYOEkugu1Zabul1aVtmkM1
uNThYPqYOBwdrTXIfMrICKscER2smGeuw27cHcwTBzEK/EbloYdN9bng3QiEGlkAoIgaD/0eqjfp
P4zS0xbJud1hh9xIQH9nM82NMxbvHOKANAfbmyEqG2S+Zs8AAD955d+fbVJm28TiUUVeySBd2S0P
Y6WxhgUBwmNDTgZwzjRADLq3GZCtCV1gHHQ+pHLEI2QaH9bnUxCMlwmNy7DiQIPmY+ONoyvme5Gw
uxAxW8q2D0L7uGDIWwuWDuTxrdF8/rQ4EE6XY+w2H9Jog+ZIEhdr5DObg2MN/q50YJUyvwaneGWn
yOkJXPuNFRTAoSyzOXUpyOBGW458h3e/24EVd6vB5V5FYnmLbgB9eAzKZUqPU+VTwAC+8XIv//lm
HEc5y7PHYzUjfzd6ZuMVpIc1K4P5wCX7vj7QldHPfkbZ7WhSGZIoOfK9S5XCZJm1RzI7OcpP6WjE
+nxsl/tAgo77ZUYAKI1vTB8/oeF5FiNpNym4TpqLzmbsL/WYDqkmY6iNw7wt98abR1I2JIi+VNMV
2dZi+wO6fesLVTfKDcfz8VDFQJ9Butt9GScAK1t8YuPdlPcEAHDdMnnWRKcsdoT+VUBXzZDXl6T2
AQACml3Wo9pu8Ej9TQW6AwjBOXcn517EWz7TBeHnjJrLyxFFTDzd76tQfU0eJqWjFum3EpMC24MC
HjmoCoAo4TuRDqp6O36pA9jjoRH5q9eycZp65OMFeV9VYX5KzPzleu1ckBlXnzE91OhtAs4snhZx
sqKhiSvToeCxmbJYHU3LJ134IDAVZrCxWw19waQAASz+Mk7j7+rPy8NVdWXA0JhzENbJm4MN/Peg
NQYSmd1z0lCjQcenUgsGoQFi0OxesVxd91lGq0tIQRTGh6uaO7ummCTs4IzhHSFbFpXtVtMGHn6s
H4VjNAFfQBKmYVAM0y/nISofoX0Q7BP9j4SyCodDJNHhMcZ100r39YSZJEjVc5mm6o5xTmiVJCA1
6CXNL2/mIZzS2CF2yr807s1IbgjX29j/VgjFSm+nXmTFbmny3iuobSP27vCDaUxz0WEQgnGA7AZv
3gpKix7F/AHW+MBpmGOi2CgQj4qzAx1ubO4jm2hkN7AkavfQRLrbTWCaDNeeIjW0XhTMs8WXM0GV
Ssm1wwvlgCsxAEYxRCu05f8yeN9hdMqxmzJHk9QmSdw0lmq/OQglvRGY8lT5hfjGLkNTDTmp0Jg4
MpseyuWSVZKObnkqM03ZplnUW92fTk04msV7AGapLYTkndgIQbbRpCzAIAPamBSABN2W15ibZJpe
ihMe5qBEZQ+w2udX+TC/Xma3Ajr9NrtUM872w/xzFY57EobdmZGp4cLeXj+bi4wr17LIiYGjCEyx
DjKLDB5Xh1uouf40sFEAn7bAvXULb/95SKI0ptlDH+9TW0KRbydkqW04ie7V3l1MeWSViHbCELiZ
R0Mq8yasmYnhhTXmTFmbeOFXZy6Wmf+3Fy8k7kNGZcKzxlVw8p7yx6G/XUBy2DjJAcesqEW+RL2j
HQlL/foGfdTY/u0eqw+G9Q6ausRdl9bXPiet0Jt40NLX/8rWBZGd/J4wwfOn9UTkBqvLkawNfZyu
KFqiJM4mXkHHx0T4k3YBl+YLBqzwtigKmfGevSQXUkSuGnDxBWLPDopWGnbluwV5XRNNX8Xw+lok
ueKol1M6GIVDt4/WvZnX09o3MofsG0ido7etl7bzQhPwnJHKpoLsqg4hO2uAnIc8j9HL6J+rkNLb
VBEa9wQ1x5dsOChxA8d+qsF/V3AidddMVy80woc8W6MPATZ8EK0y3tqw54PTb6MEvDBFFa8ECaJQ
p1fEAG9vSQ5A3wwh0qiXZiDJ14aUAjrGhmrdECY4wHE6YA8eBs2JnEegx6kunSP7OsqUjqT/5d9i
uTdpzDIvUufVzRzWbr5t2Iv7Vm03BorGYfccsx89+PoxMlvTw7o9eQGgQ5IrG+hXFmZ5sVcghYnQ
w4PiikYopgO4y0ZRuyR1lIYyoKdAnkluROutH60S3uWlY/E3Kronlx+QO302P4NvrkwJcHaaxclg
BCmCmeAkTt+7Sx8Js81rkb4GuCDW6omQS5/6ouGDgXhcqZZTIsY5e9ga+slgaVdcw0WY7LZaBo1n
4sFLdgREeiBwc5S0kPy3rzBR1Gj6rIRxZlp5SVXwYzM/IIeEXfP+ESy7V6SKBxdBjKfyXPGP6kC4
Eg1gq5/22Yb1Ypi60RgnWe2o9MYtLMgdJpqKMBDOuVkN+huvsNo1MXiQrGS88vlgR/SGeHD5eFMr
uVm+U/OpcAiytsdwPw4aJI+RQg34TTacCAXFf0sU1fbrrVdVeWLpVNC9ZdQWtOzJLGjrKYQYKKLt
O7q/sZRX3Q6CeiomZ2f9Ofn7jqN3nuhsXJMyHI5k1M3eFdPmJhQ2DBZuDiP9FG5QN51m20sY89QY
IvAkcD13UVXFvqviYOfp4dp1c7lqd4bHWN4yuTUvYTL0m/BYgtKhRe3gHfUP3IG3ASMhTqrehn87
aRLHorODuYwvxFWX3woH9f+zWTA4TsX2bUwe/R8RpWeJlaMEuo/k+fOkuqmF9RIL0RW4OJJtfyTb
alK2rsDVGHrxpo23qb4yUaPwNxScCeljtjC5++ounqYkyhg4XhPc8slp6TvEDsM9Ut59fHRxDbyC
quJbxNobzDPrfrYdTZuDCcGvnLcLkFtPFbVJFcskBcJbFUx2KUqf1tmkEGAtQW42mgLBB5MT+YN2
JZwyhT5/5irC2WKAUMcHuDn1OT7h6er4ofAcew8hCOGBOGNv6L+brrpxnORCLGsyQYw6lZjiWus0
7QmnxUcCMeCBxyJT0q5bjw396JUr5943jNVYzELnhvsjIKaEomogz6OlLiv7FL9txM+evev3Tdtn
sHZArVzNkkjtfk+uu9JlyVNjwxagx11Qvjysn6BpJMCTSA1wtHfRSpdrWYYY6KEfDWPvJqO6u0O5
AQ4uQr3rp4bcsRRAVdq5mOW0qMPoAW4RMW1ahBELKR3Oxn/JglWw3HCoycwB2OuwwqDNZPb4e8vY
MGVaPVi5oOt7SWPlDXkXFknPXwHx/UrlpXXiFLehX9hDj+amb0XnEm8ETL9InR6Exj60wqjOTwKq
w7gKUiF4cO6COPj17RpJIUHJZvaozRBS34x4tliz2JXKQXppSx2BkmlMUR2b6jGJEU3kBl6Ux8Tz
L3CpBSwoIEi0uhoXr6K/fHI1+85/33FZsRy1GEO6V4T8DoT3zNzwOyCmnt1i9elj/p2Mcqu9By0h
Gr/t8tCbI/165nLePXmEcZ8wtzw9j/x79Tikd9dFY4bMBrYrmiqMNpLeBlr88f9S3y+oWdzxj0Tm
l0aDsxl1oxRZXJRo9CysjgGdJWV3d4o8n1xmYTIDA4NT4smjDKNMUnruNIlVRb9hmL54R/04/llV
rOHg/zSm0AKqmuaK20FOPknO/XhzlXdZ+Nz1r9sAsW5z9VBxlb8kDFj8THvLcIZjfOryz0DkYC4I
7hsNqK9KUIHFbHwdfi9HyW7sdAKb9zcSfbnxYnsLFrS5HNagiW8APWPuqVrl6kF0Ba8SA+OHLm0+
8PmLsoseH8CZKuXRRh9AQN8L9+GcRt4qQlECD4kA5abnOOA7NgEvi3PIcM1KwK/9UAoJegxCxCvw
NfmBTxcc0eiXXztA4Cm/QaRflEXHUukP9gONk2L/WNvkxVdXcSPkeP+p0kNpn5wWHmB2sHtxhj7s
1oFmcpJYJm42lQPph7UYihfQ81XSSS4AzgyEer+urhehbxiWgIoFm2IszQjZITpBpDasCSb7BvBw
g2QhpwKta5fGT/pjpaCBhVAqL7WOoOqWEosMqZnayScddiJK+UY2FqHh/5iuI8ZIhiZAQeBOWOZb
y5fDstFMFyrJyRIF0iVrCUDnZwbawshQPmYlkQw1sfeYzsGrp80f04BRuoLPhd9fekAqv0BL+SId
D7yNWc7/iSWZBhpL+u8ZH8PnJZzpbo2JvvLi48Ju8ZUtYEmxbG+DQnnJ3Nm/xvbx0CVpLSuz8Ytq
O/dP1w+m84n6+QWxGkEFQwVXTf/ax2HL9vpLHQFMAb/4fjuiIe4s8UHuHIxtcGp7NTHNsoGGK2u9
pWlNjJx0B8a3foqOLs+IXL8GxTINQT3WWSCkqyiA92Wt9trTaN3NlXsJpcaOW1IqZhHzU60aOwd/
3TQDEmQqeOdIW5WqBc6BlththRNp5SLh3H7vcBkffNdcWyN/KH00dpQNuLuHLk2FQZ3C/TQGChEj
Mbra+cq02DOQTcBMe1tLtYqPSZFAPGb4mv1phjO6F7SOeILWEw59JaluN2S2Xi1/HTjXSVPcqoiG
MeWr4Q9kbXyNJXl3Zxrx7RmxOK3W3L3qEHjS/YPX7ixZB8CNJRpwd3JMHHJIsjrXInEaF7SSwFrA
0fb0yKIo956sIU/dwsm56SkYf6+m/zJEhKwV8uAZXWT3JkHepc7HkMpic8JzlWtFzdn8blhBpjVk
u0dgNdCPFIeO3YI+hyztm0O2eWqmptJUllbz2mQD49pfsFhY8WJADK9yY5kVFrPD8KPw5SlbBVPI
vRrpDXqZFe0invmtyKb2qK7E1yUAQS98fQEqUzkTim95zj032avqssF1/cFYDEbt+bFqlVZ3rJPn
8Zhj52MpkxtUdZ1rknpSw/nPNafTW0/H/+WcTiN1UkR9H9g3T2+CK0nPc21KbKLxQMLVjOcP1peL
Sbk5hjTPuuaCFRh7btiWtuFfZTZn2D/9RvMdsod1OyoeZYUPagMDN/rw4b+pkfTcxQvIQ7AHqnuF
Br9hUq7XOccNYFyzDM2ezP0+yhsQ4qiu6G1wrIjjsSHARoouICrR4DtqSvb1ZzjxoP0wcKXyOV11
8IiM11+vQPLYlWg5wfYOoSZxr5iIcVS94ZxQegAHm7S7W5x3ks0qyXx7sgfcEQjwvaYtjeuWzy7n
Pj0W0eKzlCnDA0WhuOtMoI4AU7aAtVb/m0aTqYnS57o9NC0alMA+KEGaWFPHNZdxY0vCQXtGyC0L
4IwMpr6uANKjU61zRjrPkR62dCmRavm3b6YF1476aKGgduKpVDzilR+cbv8MF8yeX+n6skFVpA4z
WS2rvpJAjf/BaKBXAuDaqUcnbW7z13+JgCCLPzOb9zjqdPx3FnTkJ2NAzrVD7JdrO48Ysrah7V8o
1mxwV3i14C8/Js7L7JJ7rEiHJNN1GKOLwG4scR66A87+56ThOaoB0IykMP4D5pUnm9+DmAgdGnj9
NlnemoI7albMIY4BgD11tUhv9yh1g0ORRXWnHuQdsuQCLCGEDI13GtJejm9F3+u75NtrfaMems5F
1AD/jioaSTRhXrZN7SBM/gc0QH/60GT5+KYJwvKot5yFPr5QG2UjyJvWxCqegbqFPSht9FMvJuRu
SPmNXjf02yvbPaeuFlhuk3Oi3ESj/FDzhyqfeUGWwjexL+vRRSUMt+6wuO3VJiflS2ORuhSdlxxa
9AdXOfaCk0s4M07zQjtlitZKB3eeQBUgaeA/qRhQkqRWLxKdbxhVAVgrSVg7iyW+4VjpzizEsaU2
xSg4MVfzbcgAkFFu/R/Hr/LZDNpo4V4BaNedlaQ4PBQEhfZpF4Zm5pXNJfpWKrds50tIw6aQkHYe
JjPi44igmxYqtgShs8cmUcfxGod0q00jza4UnlNjE8dCWHPM6hev/IjvAdk+A/lhwGbvRX9TP7x3
T/YbIULtvV7YyHu/Qfz8zpzykGVpMffgpBfUdbtBcoz9UGDX4ONI/LeCjcbTvK8oPhcC0c3sdjXI
2hM9T2bcHlz4HzgIa8MMpOghPCQpw43Q+zxydOKvqhKmgu66R6kYU6N+WXwcdF9NzwVqUieqcXO9
UuXokKjNafNynWjrxAvsJIh/LROSRdkSyOIVdEduGu4nOKEZrfgxJ6BjP3K0JZy73TzV68wB4dqT
JlrOjC+RzVjgSlR/kH1WWh/kJx2gp2RjwI0z70AFGd7nmtPxjJDLLl1zmDhxzyShc7nuW+IsdSL5
HmSZ8tmAVuxROk2UTGipXL1/tV2fqovVOmSRBoLVnnH7rLDwIond6IfeQw8pT/0VvqCEJfc2h0j8
DzTnBEPtD47oRdHaJAUhMNbJ1iviqAeTMe75LLJnp3A0qp8oii4apkH42Q1Rt6g/w4d3T/LtKW9r
1m1EQeiHAoh83Rr0cg8nCXbRhDwK0Lh1ovZzYIwwBCkJH3nkOBzMLYZ9kzNko213SDwwbMt9lhS8
5+68aVrHqUR1HkncR4kopcPA4gXdCunwFlewmCf3wbvABVhyVX2UpkcoQBtRHIbItA856eWqbAII
WvpTSwvdrzMsZj5cDjDMcPyVah3CoIRTSb/Sasutxee95TyaUBfDx+OnlizNn2+ZSfmGCqvRlJKh
Q2+99tHNG5udV7A4nxzYwWewP1saXCUGX9n+9xjX157UvvX9BpYUn1D9+SlIIO/cgIsljxRKlCe7
VSk6j2c0HtuRhuxde0YQ0+MQG4mUZEmDL8GYEjpnt70eQ0bV7v25vI8CTZNS7xmMhy0uiOGC1FDx
wZbBtp+mw36UGF4vNHq3MsFcpp+d+3K5visiK+UOX8w8ueOhw+VtiAmNjY8tqehFM9jLaqq3othj
Sh7WweXxn46RG3+xjGXukcADq2ZuM3KnFjSBvdJz0fI1NK19X6ds3DGOmEycHNwNqx+MvoQOzHvv
+12uRH2l6ah91rKpcd7Z5lwu49bX5BmDHVl+5VqHOW81iYsYKPbeQQqcinTItIK4699LlPQOOS6y
cirfOpTou8RbwYGFcSd+b/hql80rqlnG8x1MkPQnqBoIsAikDADR3uXyzePpy16ljfPIsnXZBx28
VS5XF0/1e7Eq06m1CqwVo0GUg9RhwBGnX+dS+GMTCbawUIbHuYRIysKGW4Z9kN/IaPGkOriDsdTQ
arW2NK7xSiM/dWUPOI6uOSZX7AWKQifpWHHLPLuBmTaAYoB7dy1CCWnFvpZSKzBHJhx2Ch/xH65u
Fqu9EBdaPsy6KCCbjAaQ7dpKvJT5eZUSAk+JmeNYeGEL+xIuhZPtq6vvvF1YpUS1CCwaPyfFreAF
Wp/I06O8PGd1MJlyDjd9uoCeHx9RhpOqMxjI7c/hh82nIHZnOEClzAqQ+2MjmrkbNDGIDX+w1WM0
cfjvZTtqN3vC1LaLX+VYrT0YAG8VbTvTGmtQ/58s0L1g5gEIm15h838e5n2DkFRg+9fnMzi/g8lp
X6rhIJDoi+wrOR6SM9Sns0w9dpw07U2+uGX7Ukg+1UzqDq8C/YSrdu7kUn7KVMCIXXuQvGu/xiBa
5n+WAuVlNuuz9kN/jQ+TkUnEl+PsftN3Y2nVygeWqV6QvD4ONLrkWuAMxdCbkd4cg+dSNMr8JzmA
9ZBz3CiMb+I6BpWGpDr0iR7xE6FJSgDcV/fOQBt+RLcd687sph60IiqHQs7fYBUOqvrEL4u4jvTL
OQ0GfhM1SJr0U9oMFBk4CCq81mDWLZidlDVtW+UuJX9dQJuOvEOiCKODoderkFVehcRCEm/4TOlQ
Ny0SICunxbfpNVd3ezqpt28A/4Hl0TfHsXmwvMN7R5iEiy+nuVmMA6Uk7cHph+3hMTFAOa6Mvi1u
yQURIoYIA2fojYcjCokI5KYXBIezmWTlgEqqXa25JB8Rs4hFxM/uPAxO445RK6XgqqDiLX0BNnlr
I35xQLf0IB7KVOEvCIMNlBZpf3EC8CV6V67HOaUrJaqoOK4XuTyeRbKzGT3QP9dAWuqewEPWoi6y
o45DhKU5DWk3jMeiHNrR8Mg0OznvZKWcp6IuIRnkIn/ymAnmPB04W4+voCdXcSveXcnpCewn9StE
VRFYaEiVZOZjIziF3s7SucwPpU6xd0nkennhbEImJeN1uJ8RrPukJYNt88S0alRGv9magmBoEgHv
f4J+gvQsSeiKbW8hLmSP/Ih+0Hx1N/+iCISHXk/awBXJbhG8J3YR4Y731cHGtyv9Zs9wkFx9VQAM
8xHgI9H0k2OEysNsE8pfsVxPjaJiXlojHMXD+e8mZQgp+0uTjZoDCKJb0386/iGvg7ovh0Salpp1
HsyIgVOu+BAi85IxbwJXzylkoRTnypAweljmU85bQXXgjfUgii4CuRetF3TSfu2kU/J0Pa4Kl5Ls
cNEzHSv2pXxZOXohu95HKtYqWAxWhF2Ir21AtZV4tvNcEsEtCDbv47gdvBIAykdPAGrU/+VMM/+P
XV/h/NMWrS0dL4h+G9r0LkCsEeFnjhV8GWhCshA63sFqBS2ZHL5fHJ60iuoM9HNKAQEP7QlUQwv/
Q4soW4Ej74bnOMClSMWpy4gJJwFmyGfSJyp6MKV3hl8QYWvpjDAw/dD0CDDbWyhoGzylJoFg/uuv
+8ioturmOztVRbJLlyLQV9vrpkpM46LCBac3gNmhTXRXl1KpHBMTHUgOLO84mfOF4743vXQaZWRM
jZ7KU9jToPsWfi13zmeLSa4nIYf/6acpgw6Y0CKOQDRqzNTIq9iWFc+AUKG2cWYVoLaqjWBBkvDr
filJ7RHFtphsPx6kF6hFiSRnufEkmUZV25DIl679PVcoLrqHkt4TDZvN0sczAuAUBeDTZYCJv350
DoeKkmgS0hvVvnkd9/zrMXr0qrXQz1qvNp7ndcvqM4r470w+1Kh90zBl/zzYpW9EQkVHDj9sjcQ6
jzT4w7Ejw4daxSsJ1/+2206kMwhmOakaZFTb8OWLscE59UigCAY1d1G4rok0MyZBJjB1BKjhgMvY
0cRbygDOMqAmKN3T7tghkbHP+uegPkrQxce9E8znyGtKUtmqvHyCqMANYi3PGXgla+f0D7R2dUvj
4blCo8Jc5JLmitdGgi6ugiajQ+K15CCkaUDdof+MQTWpg0tif6Y5dg8q1JM/hOQrsEWhFa1KcRVU
OzAHdXAvnItgQAbcY+6LNfqSa3RCsVW0nSAucQDwu1p47z64c2BBqaFGnXDcB3DjJ/C5deT8XmI7
Cm2Jz+2fTfBW4WoK4qnQiwnAkqiRBEcKOrhlboam4o2KufpvNIkMWoayNZJ9cH6RH3y8KI9PLrd1
14nR8qstpTw/M9dcVVUQCfaMwb0w/Lu50nTTeNrZD96F5LBhDkn8db9h8e7Erpda54JLPCQ2F1l5
pXwXM6jjtQU9waCw3a8xLSsb6GJaDxCM5u4PcupqNcV7dECK63pCOT+Wxhg6qTrYEdVVBlN63LgK
ckCvM92IchO88cgNlngt2dsT5EfBa4j9ThMZ34Iz4rE+PKztFpxp9ndqhSs9IsoKPDXvNYN/pDQa
9NErLFHW0S92DG0063wVd85vrfFzF0hiXpuDd+zhFALzo0qMhjwWzWQTRnpamm669Y0wc/ccm0iw
DUAWF2N5tgSYAr7QP0jxNVZiLdZP9th1pp0SorSdJcArVJAgiUoi8lrtxKAgDAEAEjp3Xndux1rA
xB7Clfi5phxpKLOCmCHmiKwMix1/Y5rnnKrDVJHWhnksI+5Rpwvzffc6NciVh54F+0AQATi2KGZP
pWUIpdaS++N3ji0bmnnGZaE7/WEblK65fFuTHAQSx3qTAYfFNdMK+9a98MeohY0WH5A8mSJisbsC
bcEbhM//MaZ+nzCFcfSUpEbCBWaR2fZxck12X1sGYsJTlGp9penze6YJ5jLQStOhLkp2fxTNbr0t
zsVXM/mGBVSCD8NoF9HsD24HDJlxCumir6pZlYHzGprtEzq2EaegIlns5JOF/HoJbbufoc7hJ7+4
WYlC3R26QKQ3DEeDppkNdiJDeng9SFQN5jZ+u+GFcHeY3voxPfweqrvXofFrgewsYM+9FD5PqYhW
G0ySDhswoL8elOPn4X6eWttEeneV4sgDrsty5W5iD3sCEKwtEzNng1J9waxWY3+Hyfz6uhXEk9Vc
9bSAQoKUH67gt1lKYFfz4os8lq7Ewlx/BuLJ/zusVx2fYeLDeC8TdVcjlxha8N1nTg9y2YyBNPek
TRPL/m45LmcO+Ic++1CJQpsKnwWJazhlqpLg1lW96avDL9gFiZki3oibSYVTpxpzT3SfwUXnD8g3
TJGDy7qFrkcmYY9DE0pH6LjT37MQmrzY+3VgiKRu9VNM2NRCTMp4vctzQR08wh3e1Nu5JsbJo1eA
m5GFDq867S3tWXzfj5I84vG1CvP4vWYpHEfIreI1HjwDmjTsJZCLrSVp0/ihkDkZ8qhdjaEBXaH+
qZPxue0UGK0orCvbwvwLrVokT4UQEdrg1+1hxJU6OXNyYwB2DwRk4neZVSk/NN99MBF3t94Nz/Ul
oVg/n6tGM98HLtE2f+zL9JYRGh1c1NJxCZ1Fd3C03Rs2OjFTg3uZN/5GOzYofBapkIFydGf6RpIe
WrGEboKInZkgkXDB5Gyn3u1Ew5s+VwRV+g8Qp5dPTOZPLJF6ItH3nSgzRoQS5JD7U9J+rH/4XOMA
+MXGijLBGuTuZ2QC4JEK3ZNOno40Q6iaRPgsUyDaavOG5fA8WL4KQa5Lvrk0P8R98rOqbuKiE9Fl
E3VnlUZf7nfW3wPFKyOMIO7xxQnJZiV+7soHBk4Ssn1PVALv7uug7gYLD/y1eGQI99iHzCwzdQBy
Mspy36uXbCajfHQTORVrOXyxczCnB5HcyfndHbKKVLEFSKHncC1J1PLQgCG1Z4LJ6A88YaNe0Uuf
ZJWpEsh8Y72bhaZFD+akr88CNWJQH8ateyT2uw6Kli4gcHsJz9uBIhomiVbR59jZqi6b1tokLNOr
CtrgNQwUQQmdGob0bOYKL8LQnFG+FSkMM/MTTOXcf/zM5cVLFF/nXENNOM5hP0MSjaRdYJMcoy3D
yYrC5+c8Ck3r1lF9kDyeDsVoRku0GouPuBL4oy/ybnfZk8rp3TqhYUFPHSiQjF80GN3QVlBIGtCT
lwLH4utE7S3a+bUdcEtwdbYpEKsOKcrDu8riFw/JY0AuJ76S2UWsjPW/jsksb+j3WjXhDbbA3PsE
w2WUlqOJw+HpeJo63o1362qQzUG2fd682cMCU40ig+wz8KvtiqHBuRkupqQghxqPpJ1NtUjeXZWj
DjOEUzzodB937jd7zvliCPJ9rfoZLETyuMNUrWHB8wKnQkVBn0C3qb8MPsDGNDi+xCDgcv2OxWNW
SgFb0RERFjO2OfQeOTLD+5SRREWClXHZ5XriQaXsWw7HWTjlP9/i9BUWJ5KXqyzKhD/B1As2KU0Q
mJmQHch6ChxpRonQ+jDiyzdXYJLXVwx/8yWFcy+YQKpctlI8e9cWZkNQ7i8WSTFiwLvTMrEclk0d
IcpkRIjZWHastqQX9rFhkg4nwO53OMz7O1cCDQsJfIVDATfKD9jEHyXOLhWMB5h4xGJUEaDxsiN6
CrKvDszWo+waJUTcS5SaR6VVQQQGrvz15jxG6SSPgIp+NMYO8AGpjziR5k30dHBYK2r+nvO3ogvx
M8eMtOUEs/IX2iSeSNLzKRKuUmPIJDGnP/XE8SkkNP/l4Hen5oWLqCjxe3Hfi8SgTVsmKdnLivDL
NLEXs828q1PUAIM2bxPMutiJttOk1DhOK7QtMR/5NNux8roZaKIcTjDNEoHbIV5RVfp5LTnQOqmP
F255cNRxcT4nP9+yA3hLm/j7YaIo8a2vni+lg3rGB5vhDaHQdUmIn8UC1WwufME/gWTM1CxgpR3X
MeHzak/eor7Pe0EO0xJV0vGNGntApNNQFO3tsacWeYucBtVIMfgNmyzhDUBL6IL/OcJq8kbnZZVy
h6h/qLQE563XOxThg2f3w54ubAKr5RUirTKFBCrfebPWHG6bA1EriSVrhyL1TiqGdfP1HyFU3t1f
uXWmICOiN34rGJWEmGs+X67CxBaVy2BZTKQX/WKQ2JBCQcCwrkJ0K8E1KJgYH2zcgKgIS0GkUzK2
0+5/RpkW8vh5x7v2+ALIYgU8gxQyLwoQS3i2TscyILR6qufvVYWAQ3Q80P651cuhiiPdoCDj4EYo
+WqXKJmtw1ZMrV/Fim2rtFZA5OzAQTH0mW3Jr+2bakbJrADwyjxJunMMNOnp85Uu5wpwyYpW9iPJ
BecgakRr81hqAXjAac+7grfDdBNjTXmOI59XO9YkK33kfLN/3p4gLOPA3+QgHgckQrhAExQKs2mk
al6ZfgDhH1rhJ7WcLlYdOW1q9biBqTRzXxvkFyBsf0FC5/3spkPHt09F/V/7POsIjwsSU3SWfWy1
d9hr/rECmQOm6JvR8qsvQ8wKTIf+9JV6TjBMcidO9lSeoGncO2plRSpmZFMLnHbH5J9EpYfKKOW4
TNf0jZiCTiwKlArxCil4LcOMKR+9vkeIz0520114f1deG/p8hNeo2Czbv1ZZpcLzppqRPxVLC1BP
p0yYwc7OuhLvD+Jz4qY/IzivOvXeityxZO41mM/R160IWF9oNxyz1DO3reI1hEgEB+KhuimX+29U
cMZ2OTdnmR+nSlUAcVCwjpwxpalbU3NyLpWoCYne/8vYtNQ/JTm0eV4tZzXqLy+/hYuLcYySX7Rh
K9dR0VCrGpnrsfma27ZTi3AkV6+hdrdDBvV4BPoqsk3Uqm89xp6HxRcKDTaj6XudH3x29QyL6GBh
W4kFDZjxeQj8wF8bXxrAlcE/Mjw9qDBaDPra6WlyVOIAnV0+Yibe+K77Yz3msBZZ6dwL7ZmdNM9o
JjbmisseRJ9y8jdJPV1GSByGyY9veQC7wq4tg9iEJaXFUq/UCzdjNh+xmX4eo5gU54O0jp/uNdH1
nnQytl1sf706RIa/M7Rffy9NLnB7Nyi2i94h1+VXu+yTdBVA+DG73Ltb3/W2ZLu5VkCa4USG4VKj
EWi1p+6/CO08rqFKiQUfle3hxweA3AIku32HM/6YfU0aA8nM7FxXlIXSTk6xDpbbvHfNjYzDH26j
iHWLFPYPnMZXjavNrcXEEDGJnYs2nJRo+rRt1VaqnYAbUDcYrEdRTXPQsnZvUbi2cGkgVwi2I6/8
22KSGv9vpBi6ZxXT+x/V4/dIL8Z6ubWNagb71waqCsaRda6dyzhKrioBUoNUgXl3o+q4yNXARk0C
PjK0xd9Uc7fBpRKskUvfU6Qm9V60rwN6lA7l/al1Q8R1xYcNGuF/HwNNInvmWle70xw+WZ/SeyMo
j3eDDq+8zOGzOaJoofkRIVcCU6inXWaYjVqHls8M0P1M1pb5lxXW91UNLM9PpJCOYMdTGlMrgahM
iN3pWpV1AhWOdO9NyuPYLO4JZNfGBgyFT/RZIvzc4ugQ+U9gwxz2dogT3wO/hU7T/FUsFJO+Z/Yh
3EYWNbZmgMtZF2iUDjpVJaewirwknrQ4m3Y3pQdattOxkHbZ45gfk7lIU1h9UJNYt2FWECqbPC3G
xL/gmFsmbFmsh79AWJ2fNJSof+jo5Jdfqm7c3kxWvVDPC7ZpmcYq65lrZKLC6R65muCrkiUlBr2w
xmIm8BwfvUkAEjeBo8XdHDJRok52yXD9SBkRNVC1EFsDMXYJshBoVf59U+7DrZSsSLMPDkhJbmAU
C3MYbvs7J4KsQ2s+3+1uMY44b7eWEORcGZZ7o/sc3MSCtygpdhsOs96RpavrYLRuWh0k12P2/oyn
wkrKmdO39jk6AM6r6pHTQno8aviBTFQgaTnqb+YB9sTqhlnq/V8wFt8VwMvIjl6xjkzTtKzxLAqi
8xskB38CpdDXTXkPWS077QQuONjfGiCP3lakb2d10IJx8PcEXyr8tomH9hHLwHjeq12aYMsXTeuO
+fPSua/+MzQiA9t/Mbz2PxCQO0KKFXT21d7V3uVWaeLexFAs+W7UsDv9Js39cYOCsc3CjABnUzmW
CXwvn7FF4VztMpSIMqa/mEzcDfDFKaYcOkAgKlNeJ/DZkQdFll7McFuS4HnumX7h5riIFRuVOzve
Oe+RfvSsaypkHLfDMflheQVS15XE/iblUq80U3xz1i00tcqr0EpSF+T4vDp4D4dYjWHF3fodqgYc
672ETKVZUH7KonDF8efur0noDxf+ftWp7bSYtvzefbYSPaTMb3Nqq0kJPvYbr4rFXmP7IIFe/R0N
ZSXz7H8kJslasKMuUzCJtbatfYBsZLzv5IKe6MIVju1pjV1RCcyuPsIYgIbXsv8+MQnAGNThH+OC
m7RGsb4FjhHUP25QB6uso7DYilhpsZVSSomiv7ATQHMabO8m9rLERNgGr56wMtCIyZghVwQfYdm5
PGo/SePdKolxIz8gsuF/TAr4ESryD2TEyR0ZrPKa5Fd92ff5lXtY7vcJGMkYc2Tp6frvgxIdQYZw
AsAJdNO+wxWFUFEEB5WTHeHkqy42zZo34w3/Vsx2qhbzR5lxPjRmtIq1ct5D8ntGK3/xnOLVb/O9
jEqXG9vx5KK8W7BRaspl/cyc8oD0ZH79ADOTJ9n/TlXJ0rjqcXt0eQTSS4n70BuJDBxdXDrjiJVJ
l2hvJ860FrN+nybrsCAWBgerpfEL8v+UC/XOvDnhLASB2Amw3iGf8R8UuDZxkHMBB+OX/vqbMp4e
nFkrvokvMQ/oAhjjGOImDT4v277B80lDAHBHlviDOyGc3/G1rhIYH0Tpu1rML2gIynrpcV7uwuXZ
YGrqeCIVhmuMw4N4/e42DDOaKVGTng6B3+Zlhfldo13On3439ubwnmViyK7JnosooBr1cdL2CEtn
owvacTOmrqKK8i2XF6lLUz27fRei3tDyz7V3zz6wR4KkX5hbfyu+Xp4rLDgGD2fCEpZuZMfAiwjk
QC/hE7x+U1AySLugQGB+L7WhXbxM8xWqZR36CF1Ts4CrCezbfOYvfkidakCaIDe0pDkbSUmVv2QL
oHn9x1OANUFpHBGS05G80XwS+mgAEkyh1ea9fgzSpSak2IIFRM34T4+3+ePVSoUxO/o5Nfc1Y+KU
JFxxdsDS6OFo3C5SbQaxKi39A5DK1nxjy+iGuuAobtB68kSpY22ceBy1hkugnsihiefUzcYHhx5M
whG0Mj0RSr4VMiCGlk/Pr9uytoDCbdKuXpGUDEBbHQ/34oWcFDqu2+iKdQB6j4h2ZC7g+7/YSIEi
4/u1zklzcFRe9/iTjcCucUKIEk3DBQ0Y3rL6pbLYB+LD5zaRmyWrWcTeiPxi7zFQhr0vv8x/N/3A
PY5p2zvOIhzwrw28CQpz9ydEtCHr4T9dLEhVF88tfgkndUcjJ0V2EWZavZ8le/+UMYQk4dgagRCw
+waYILf9LGxNjxND37AQS4UoYOTcZ8KmfvenrUiU2TWJj4YrDnJlGofZAEmFlSxibQd4mhMGQ5eF
YKzkq6C9e1ve1prgWRlJxZfC14lDxHn7oyncyrq9WDGv0dW7+DFqh7Y2GnaHEdmzgJwLiYQl8SC1
KD1Jm2vLU9ZwfEr/IOLQyhhUzL5rPX7xvQoQDnb82C1X/huAB6Jy8dJYsjcLEPkQxkV2N8Q5NCwZ
/B35M0/ncsXexKnAatIOzUFeg0MZEdVeTpDIEEJi8ZpNydVRkDlPOmgC8WNuod1Ey2ukJ/Beq01R
DVoxaJsJd6Na1Cdw6SWqaVkXHTIZCqO31XlDzYsgyQMi7RkoGbUvEm66fjKvcRFKlKhTX1R4Kk5x
AyUvEtfSgpPZeYYZ5tP0HO8ajX3Qmx57mBuAwi3lKRE2RKfp3ZWiy7sHGpBOiW3B2dc+0MCpunrU
wEJcgfe2HX5/I/6IRAOD5FBEAx0ow0SAShzXVROgQYXUrKf4wc3r7T9KsccEGedCFUR4yOE25ibc
lw+YYWc1gBxpfB8xAnNZXstXFLYgB1efJCtnaK/Dvm+Pb4UnGKdSrVuSGxG4h/LuiLQnN65coR+z
xpaRTe6vwQ2gfR3GJlSabc1wCn5FfDhLwN0yeQuT73lK+mBQo0WSrzW+8yZhpENvJNG8MLN7dpfW
6CmLlVTlOAtnN3VV1behfl3DYjD7SULtRkpISubFpqJndDMLKGyobcQ4pDCV6X2W3wjZfDxPstO+
VR5pXiaXAcf6pgVyYhxYjTcxPRJ6RAZl6ejUCoRBcNuIJAv+ouOTJ1sP2TmlabCTI4N2q0g3wyXY
94iTwLBuYpl/qAVEiiXQ1+nUHrDHBYjBm7Yw7LfFpso6KL6ocpUWqcME3benRE5RG+sSDr2z9xY7
7sBpR5Eb9AykFdfmiLwi7zZNrKY3zCq1cuQWvujx0EZ5bRX1qtBxz01N2qObxHAFl4CxNDXz/51E
XJ+t+st/qssDg4esesPZybT8soXH3nfoaOaoN6hxaIM1BZjQ6GTT8ff6cFCQVstHtk8ce+glSw6A
uYdA//HXm01vd+z5dlLbCeNVMJM+QFhViEjgN4JHeUcFUMKv+wOMEneuifbARZDeLcWpqCtxWQaL
kkDblbYgQ8ouA5G9Ae3Wl5WO6Cq7y+o3SCXzo0ORylEwNIYABpNp5rGoGVtlm9o6Ro8IIJrokTWA
nbVUBew8BmfLS8WIZvnhWgvt130a32wenVHW3mZBtZAyeyGVjoF5wAwv6xWRqrd3avFOkpA/yvaC
gMAhWDhghN4DbzzbeYEkrWgwqKddUg1kfELNvuBtmma+S4TQoCDBETGhdyO0WqftHn7zQ7TW6FH5
Jr/AhOIP+x/N8SLt1pa3gDxIupp56pVYpmx03NpiyL6LoNJoW/j0ALvaKKcJNbhvrYOzMCfX450j
56P48tNJZnBEWoYOSsUNwKtunIkXs28rf3RVkHKEwKaSXxTx6jcWwGsTVpuN3t315bwCoaYn/PgZ
8GnT30AfCgkrnLSb6FdCK+IMR4fm022FJDf84ivmz43wNlB8XG+i2YCrhPqGlX1dMV7jx7cDyU6Q
PrekezGS9GEjGonNB4iVRz4XiCygZKJF60Bv/ln2IG6JgJQZq3/1DL5MiCK/eoWzMIzZNeux5+qI
M1ETcyLu5v9IzYy0ZDGA5PueehjmNoQc6+c//Te426n5Ph9hTuxv9pnuhFNPoeSML0UP01HbKYUh
zEUNBDOMCuGGJaxTRqEEEWr0o9P0Rb4V+vTCDtD9y8PXtNbloDuNrc0FnkAB467lWibfOAMecyXv
OzYmufp25t0iu7dygvybskWuRQLm1h+JCTXsHKG0tn7t/ZXxNIYbD9jQE/DkxA35WCiW289/xJLU
vr/waQE52uaN8SIQh94KqXuWBBIs+m4trhV+fyIxRHxGl7gN1hibGkMuuXALShfhna/ozdXD+91l
ZlszYmhPZkkq9Ob0ddH3uXpsA6yyL7lwc0sF2KoTZmQHtqKnCmZpJxdRza0j7vErJVneFAka7fDQ
VHLFDMEd9JUmbqcEE7SR8wVvntJVLYTghywGSLhJTxQMKcT/OBjnIIUx1YRRjHK5377huWWVXj23
3ClghMJDyAlnWjNtqfogcIETtwrKc3qyTrGWEVcLkolk68yA0UOiU42Bwy1fL1iXVIhuHkrh12Lg
XaGEIXpJu8tU9hDrBze7H93vE78VXrPEe1EVRyanvW8ZbfTvQHhc0zGSHauVj4qwA//Nl6sfCsqx
6fz5vQR/7aZeynk5ym10n2kTgq2XuTRUwnf/AVx1lURxFRBhoQUhlHbTbFRVWjR/D1Gt6MDLYaLL
5iGhi9lMtxodDG6ud/eb3lsjcSoFH5cpgQSH4o71ANnNqfFsgVZOTpFxF3RbR84SGcLHlIYKQPV0
k1xoxvUpjsBA1ytEer1FeoFECARqf25uedKnvV175Tysit9Szu1yeVEGXHCf7h70z94MuxgdE+cj
radKJFR2dtHZmv6InFGmKs3qT+Ud7kfjVBtMLBmSXc8eLWSptTalJqitn6wQaJ/BuHCsGCbQtBGv
FfkO14nw993M3eWBSlNQwZVl7UCN7QbYU5/ws+LhNViit/yDF2PC1hDIWHZ/tn2jsMjjEKgBz6PE
R7XI7MAnRobvzLECwBSni3YaIMkGgqu5x9bosbdUwWbBTdBmAB/SUtnIz2dYXri3iIdQwfDDAz5j
PdbEx5bBbwtUObvdeCCx2PKLHalL4kXTlevFyA1AfbjcChd22x68iaowAyoUgN0CabKuPZ97DrVW
UtmUNl4BsHbvOqcCrgBDMCcIA77IgUXdIt1GL09yp2US1vm14NplAnCHEXr3ehnQ+xFM+g5aWf5p
f6K6fXiLDQTi6a+ZVmMT1x2TpWCh8vnOeBWJwVutxGx7krGLC5Kcyk5B/hbRX3q7UDxTqRIs0yEG
sSSCmJl0r6KBkBXl22ndYmMGGwV2r5KyG4P29ybm6ccceI1OT4+G4mbWecCbwQyLDBSNF94nTsia
/7SsW7xri0lJfzlEE8i3TDMCbjlLmtrJXf8WNYb5GLHL3qEbx+nPzOa/ipck5VZ2EmWhhiFq7YJ4
TQdZ/GpCHd/hD5j3qSlSpGeNymJfdfWnyuAlU/1ytfyLetE67C440W1SMPHSxggYFiw6/tPCBP7O
vVhLr9sBGDogQcbNUxP5NcikV3ntIbap6z8ihV3NEJpmxuwv4VsPWvIlhxAz0o2N3QJPPUn+wY3m
UM2b3otXybjLtZEKRbJFdNT/K/MfcXOp73TuEVWtyAkHqBp3BU8/mSl5FvuLOqPCZvGnn0Pb1drx
Xie1rrMpEBWV/emi9m0P3MfRw7Dzze3JbuPYSby/h6gNOes0dE9Pxpao3kf9yyXP1FPe4YnEIi4c
eba38QUIs/43DzLK4Ruk9f59YR/gOml00Kz8sZsuUKmUBZR/rUSJhaGez27m6kH+WVB2wpQ/pI9d
+KvWX98XFKW2KEQIRwIf4R1nlA5b3CSDnSeHHa96OM5SY/YuEVX1SIxQLEnXyEhHHOpF3Akb2o5v
VUXsCd1IddMWACW0ZUJMC1tarAA7v6OCGQi+T8LM9s17RQFHKZDmGxKzWrPO7Mu4riPAFioFVmKm
+xJg6jDqFnURmhxCAnjtaTj88DifZZ4IjMgADOqNcOeZgcHpEsXZFgMVx/oxcRNqd8B0QPY7avQF
yenMAA5H/wHjr9AaLF0ysTy4ZMxMEAL/XxFQEKhq0Cim0PBpIO/0svJ7xP3CWYGbEFgQCcA3QmPV
c3GcOh5SMNbrkuDRbqNUpN3BySKVK3Rqtkp6q0snpj9EiZOLSRCiMIhnIdLYX3NrOhAujmlRYaWp
2bIqwJptkSlQuTOvmI650UAYfc31Adfh4sihxz5LkeuEn11pnHgUKPBNlS2kHkYsT1FJ6G7es9eV
aHb64h+d6aoPFbRjTBaXWiokvr4tveJJvLb8Q+ex9Lbq2chNIBXysRd+8lwfJ3z6S5U/0OkOF8Ed
1Ihnu6iOK4d9E6k2dteeigsGJHbbzPoXjZL6a7jNyNluuXKFTM/9w6wZd4pN9ARTsXaMCnCkGu+D
hNsIIAMJ96kjqtUKtzF5dqbNANolbGybRo9dNofm1WhGjv6bR4nvxXstff1GRlEHGMJwTEJAFlfG
cMoln7usdM+TLXSn7GIJ3BB/dp9xe0QoIOu31qwEYR8Z15GXXHUI/L19iF4JrzElnzAQA07QxNSc
VRF7fSzj1TzLW/zRa2CV24kjmytIiK9HWgqGH8DpPAujSEuc3iPeyyEJlIgL9uYaWhIyh5wCrny9
8VEMtdDO4DNpvv++MPNzXJfgIvcFlxb8+YqhQr/6Zu6AJsa0Oa1+vzihXLnk9ZCTjdn3M6hwXF7d
UAnvyVBbBxeWXafioDk+58hYcVvllNLG8B2e9GfTS3bQwguM0fkzz7a2h/ZLU6h97fxcRaXFfjS2
rzBxYNTULOZytUqrBMv8d6Ah6psSeo+uElsnKTsOJpQE2DflJBMyO7I1wztfalLj5NZZ1X5a8a9w
oe+dvK2a6WcNeAfqU0YwNLRGob4tFaG6CaBZvxkirOhM+7D56FFQCHnWYIVaZmiwUouMUKFwAEel
yry6t8zbS8iphwLeZiVN+6FZGRQyTCekPTVG+plo80hTxBTSzuhmEuSdLXP8wFHBt6mUVGoXBRKJ
vVVZxoWiw45knUDdl8Ed/N/IwRzOu963xVvZXaFp/pCCBK0L/PinarUi2zPnfTczIRSCsvkFlJo7
ACOl3i61J4VWOSxcKtl1OP5bEJnaSNcafEvxev78rstOreKmozJUBbC8SB/pHyqWaiI1OeuAJ3l/
sHhST0FM+/D54kM9Ti8Z+uJCEhbD2nDY1efKotGlGULgm75sfpjQ02dkUAsEncm+s6k7q4bfX9+T
SW55ERfCLe28b12O6QVf1tu2GjeBtYu1QjTbjA7SqaVNr1OigmhAN5nhKzf0p9eI0AzdhOlUYDJk
aEtqY5SszvVBpTbn7DYv3fMv94tMthsxbw/vbJZztmkrjjAfE1lIgAaid3IeNxNQ/xIq2f+Ly41W
rjs8AhTBDcLdRKPlVzpk8/IRTOMXXvSJFri+qj+bYifLK591RurFcHcGrCKjFOBOZARCdPtkMXcD
8FCWxrMvbMf4Mpoctc1bQ7zX/VBPEyLxu2UuEYRXS+2WownuSwgvGv/ilA9NGlZKhkP4Ko5+uC0t
nr+5V/mw1dib374ZESc4i1GUnVkE0TxK68sXOLw+vPFVITtO7Fflyir2U2UutXp3wg/Ma/57xPjk
e5OdLWlWUt5S0Ljv98KZhXFVmj+FVtFCCHgXD6XcNinG7i3VDMBS1wQDpRgFGGJ8eDbYdTNihG0z
mPxpWo6qhY79CCjKUKgXY5At9G2y09fM2aeb5wL3vPSrHpb5FxkkAvSd00Y1wv+JpNBR+IqSeuXL
MuboY5uEX7zVqd5Z2FYDVlra1ua/lEl2CfpM+iFZqCmlpL4B36PhBCEVdLvtrhUyXzxA7Zs4Zs05
9YMyFphp7B/Tkj11HdByZddmtuKb/3YegpKwyjnaXXxhyl5Wug9kxrJ8vjfmfkXYe9ikyLBHROex
LrvfH53+vSd8wZYJIAjeGD/oGhLOKWrwqJ0D/fok9GwCdKQVVUwzMgF68zpTpTV9Xyg+Y55Qx7dd
14/YPOF/P1FOExm9XGWUfn9FQk5dEG8rsktcx+zWnOrybypmIRMFw2z0ZPEV05ijBn2bhuHyv4y1
Kk9SK6wJEmidiSHIbx4dceUEFY9NwwRNwRUYHkALT/LDS7YE5IJEMhSFJdgkwGF/HxXZaPKPIa/h
ZS64C1LSOT7XsdEpqRWnGjwN0WesupVMN5m8xPdHJxCy83g6TpAsLal9rVqOQmjvCW6JuzlqoF1i
XWFahUWcF024Id9hSMA/glZFtMD2b9Qr/EnXZ9REO7Im1/Jzhc/A8CJYxdXyyrbxl7dKfzAxCshV
AOYYJidffVKYNODpxXTIw+YUbRQhE3S1u/0pzeb7t7xSc8mjssX9xxavPar156V2qkR+c5LGXpzF
0VeY7m1dKLYKJxuHKjw9LLBUQyA1szeKgAblvod+EbqTQjHWArSn+cXN5bbxB5RUDE738SFuBv81
0yr4MARgsUckK3V8435yBXvcgelXxCGys3ecJH1Q8DcXefWljKj4e9uA6ShYhAlA1OjflbOui/cr
HmIjbxjJa7ByvwXlnYmimYmT81OK69RsM8MS560l1kPAn/j72pOqwoZvWLb1krGARy9PKCZC1Jtb
ZeDRHxpSJxpIQayKXIskK3S7oRmFmz9e2MGz1zKFnm8iJVK9nyJx0/Zc2NTSXqA/rAZFg/Ms6aTu
Wlb/X9xNkLCczH+C/NAymZrsO2KBId8v0v9sKJizsq82vkqKFYNkofXCUCsWQfepRvxZZ/vdfEdE
ZBae5LSVUDXCU/nk7fnlQnJ4OGrrll+LhaamUpKY4FeMbk+2umk+rgx/3j+R88AI9sr4D3Ty43YD
6dv/k9X/obZ/suAnxT8gPfPy5Q5XFYXaYJXSJfw8x3N1GOOAJaUtBdawOZCjrMnMYacXrTaGiHJs
4ouz89K80l3H2gAvEHXy31oyJwCGF19x+VJ8fO/DbmINJS7tjplhd43gZGnOJETvHcCGymAEqMwU
iEM1xN1R3a5pZIOyi/t3xQ5+qtPtjfqfFozYWAicNO+IS+eRv0DeuzxHeWdBqApi2K6qg0AXJ3ei
ES1eO4uSLR4N7WBh6dz/UewfrQ+B6k7J2HpakCFZBn17Ocs9hP8VqovM57qHm1uvwnAEdn5wt4gi
5qM7eGV4PBLqVMLXzUfdr5dt4LpdhOt2qB/Xd61W0NhADIYL9MAv/9ye35ZwMA/MA05wMJRdDvi1
XsD91wCI8KHCH7snKq4EnYmwaC1AUlmdXWp7Qz5V24ECEVewaOhzgLDV5Qpqe5snBFCf4j+512fG
Cqy1Ak7a84m18di3aQMklr/DGM196fJeRDosbMUwSLw0mHmusK0/Jl+gQIstau7V51LmLp8g7LC1
0GuJ5Kse2Ox02TzPPsDNK7ftbvNt3j4whWzaajXUdLyohjyOPhFkiCWaTDBubhdlP5K7yZ3i3ZZt
WilRy/4mHbZ5PvxGLBUhkEC+gUi4vo3KSwXehmJRBjmYYAFjoPY2fZew+aGtK4BGo36RJ4V4UJMx
o+4dqeCtGy9ItVPmwga9Cfdi6yJztIAUUoWT3SUcJ0XaoBq2Y6p05+zSl/RZ26J0cH3oLodbFXo4
0E3CAvjjJsb6UKgWuH9kvIxI0krTDDR1bkW6FfXCTxCDM5M/HY9gIDyUpNqp1kH8S1w0NvuDzGJb
lxxdn1fyGuJOTF6itvKIWgM2lKvIrTVvtKYEVC2846GE6/jN3/AYYW24aKFc9rTQwjX+VJhlhcmD
3acmpQucmnua8Oemxi9exY2eZRywqzqpUoVDSwQZNot1xMqrvkkBF2EVk7zyHl9MzRDK3N3Vc0tU
ts4AuX74F3UZ0DS3dzgUIwOaPaziK9GW22Q5ViO4D7BaI/xsxc2N5avETIQ5OEZjs9ahdIQZVluN
27jDDqS36rwZMe24hTg6pAP0dQhprJsPSrSDujM4xemJrcOcrGQPYiapHIbu8Z6X3gezF6QIQUQJ
STvujzibG+O9D/FCCajEqGIt+wfUKL1iTTW6Ca5dUIEwb6KyHZc0I0MxmRX4WcJBIzyAQo5c+ENd
OPrWUob0QFKMCuDyRDuG5ZoIsba8gEnopuLc2dDu37gr8q8vqFuHmgR1BUJ/qUk3kXNa8GEjNf15
MJZhfIz5f8UfwV/9+DZswAinNZMF6ji9COvPSvkmK4MY6pcRY/Hqq4gqlKta0ZdvAgq4wHVz1gTt
JXwC2qKSN/LnpoWJUhtZWtpfEQlsEz/SRER2o8DNgisERss3gmm0P+98R/eitgxKZPrHn1edhvPy
H1ImYvpFBNraNeh7nqjI4HpiJyEyBJnvjJZx7cj470FxwvdBjwfOymSlpBkn8hUDnJHTQMI5HOO9
DlPgMstvZ6xGtxu9xCtARXSwNsylh2MjnLo9LVqeBd+wiItjEh2MAhhCgpSO7/LQKdlOn6XRtqav
qg3oRmL/VuBAN0GZ1WvdocSB7xRxGuPMECyVxwFABc8yMUEvRxUzMPQO+qcsnG9ZwUa2WYTlHLwK
/j4nU1EdYMq/NXJU0eBppG+RB3Ci6crAV6WX5gS5rP0qZM1IMY2MIPFil9ATy3o04yTOz7YGcYO6
4wa9JGkrcm3c5v8dNg0y97bT9WEEkdTB8CJCCPGPt05l21qhgcDhQhvt+MCxnWtg9P4h3/KX0Sp5
5RAZKOdcx5BclAmiqUiK3dap5w5Oj7V5FBKbPnrMBvqMI3z9YujSnGz2RGTG1LctsLXJ4W85yMM4
2AHAp28vBkkyJkwVkj7tKONfX+AgYPIEPiySuN9dRrXTJF55T2/Gj3DRuZA587ntJGor9MYuW4GQ
cRtdq2KqKqJ2+7mcDTkFHMV4yOQLEvEj6CCjekxed+H5M3oGj6PyZvGZan1kXP14rJCOj65HHNof
RoD7a8xX5zo+5SBP6212MgG80zlF6vkU4zy19OcLZM2/3wcfWqdq88gTeBTM+lBogISN0YCpl5aG
Q6JeovyMuqqjizh8wFnqpA1YRikzpF5NT1CyCChU87AxO30XBEAYhkKp0rhTie+2e3LGUqIwM/EI
Bx9hH4joj8EqJAJruEqB+jQsNhtzlMLR+K6dxjpXsM2z3CA5liD7wUaGMLp7WHUUezwCIrV4jqkg
A+a4imBcNH4L5b2pjm3pCUzxsisHkOBhvEO8z2Su8PzkDmc78b4Fd/IcogH3Z0XP9Kxi9Lc2skvK
DS9B944Tykv3l29ul46zKfBc8ZCb6Ry0DNMtHw9LHUlieDgOwzicbnDLsIw9PRTXmYLE/GNKRBaU
OYifJXYhCMPRJGccaeBbb0rHYBqwblpo0mZ1FR9aThn8eYUi6TM0xFpRdlyRq3f50/TdYF9h/MxV
73RMhnBrsRKd9q+MjooRe5S9VtslC+yufaEBNdND6KTH9zoD0Fa181Gd0AWVrOXd4e8T6XVMgQz9
IbdGUF00bFeQtcj4isZq60c0WzyB3SEmzQHmj1hdkQCCj4wIADLBgNjR9PMbGbxEWovVRqFDj0uT
D1V5kuQ21ud+2q5JxHInJ33ZhY3Qc8y7SnPgWRe4jwDMrFHdOyTWJuYD30mt3NpCboS80g+sbsDS
2W4S9Lu6TMmM1R1iB7bME9V74pmlm53vI6ED5/6f6ZnM0oduBNreKaXKZSE/asBMCGuZPML33kPz
uJKwpwLWj3KluSEpq9W8IwXO5XeZdoElp+kffxJlo4qmRDvsxSnIMqr+mpTDpDD7hvdFWtbBRSrO
2laW1OmKFeZbdajcbzjr2HzRYv+NRqy3go4f7ENJfLBLb0IQrpa1HIf8DZv0hm9CyoCP9NIhsb5J
9e7dNA+M3y0eGd48Kky74hvHUZZbPDp+ouHKMdScpUb/6G8bS/k1wh1x+JwZeCgT7DIMdLFQbILY

From xen-devel-bounces@lists.xenproject.org Thu Oct 29 08:47:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 08:47:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14054.34978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3aa-0003WN-Ra; Thu, 29 Oct 2020 08:47:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14054.34978; Thu, 29 Oct 2020 08:47:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3aa-0003WG-Oa; Thu, 29 Oct 2020 08:47:20 +0000
Received: by outflank-mailman (input) for mailman id 14054;
 Thu, 29 Oct 2020 08:47:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY3aZ-0003WB-0w
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 08:47:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8c8c41d6-e7fa-4633-a6d4-6f4158091005;
 Thu, 29 Oct 2020 08:47:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DF42FB95B;
 Thu, 29 Oct 2020 08:47:13 +0000 (UTC)
X-Inumbo-ID: 8c8c41d6-e7fa-4633-a6d4-6f4158091005
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603961234;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5tvZJ4AQ6Vz2ud0aD2NJmoVWEaQSCMw/rCvCUHffpkI=;
	b=CrzEdll8O35dafx9oqaovKQJDz8TtlgCgY3k2sxiCPvsTMBp2IgNqoFQybaM4J7D29AysO
	x+z8lGWkVi8SKKZ0qbgT6DVGEJaf5oI5H4s5VI1km+eQ9YexCCMyP2e6Gbbh2NwAPut9x4
	rpwh+J37ubxN9IuZxk3cqdwzV7kH6Jo=
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Olaf Hering <olaf@aepfle.de>
References: <20201026204151.23459-1-olaf@aepfle.de>
 <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
 <20201027112703.24d55a50.olaf@aepfle.de>
 <bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
 <24025dd2-2c61-7e92-a9b1-2433eea2e909@citrix.com>
 <3880bcbd-9281-10a5-7de5-f73bcf74557a@suse.com>
 <20201028181344.GA273696@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4597e596-822e-041a-83de-9fcbfbd03933@suse.com>
Date: Thu, 29 Oct 2020 09:47:09 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201028181344.GA273696@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 28.10.2020 19:13, Anthony PERARD wrote:
> On Tue, Oct 27, 2020 at 12:06:56PM +0100, Jan Beulich wrote:
>> On 27.10.2020 11:57, Andrew Cooper wrote:
>>> On 27/10/2020 10:37, Jan Beulich wrote:
>>>> On 27.10.2020 11:27, Olaf Hering wrote:
>>>>> Am Tue, 27 Oct 2020 11:16:04 +0100
>>>>> schrieb Jan Beulich <jbeulich@suse.com>:
>>>>>
>>>>>> This pattern is used when a rule consists of multiple commands
>>>>>> having their output appended to one another's.
>>>>> My understanding is: a rule is satisfied as soon as the file exists.
>>>> No - once make has found that a rule's commands need running, it'll
>>>> run the full set and only check again afterwards.
>>>
>>> It stops at the first command which fails.
>>>
>>> Olaf is correct, but the problem here is an incremental build issue, not
>>> a parallel build issue.
>>>
>>> Intermediate files must not use the name of the target, or a failure and
>>> re-build will use the (bogus) intermediate state rather than rebuilding it.
>>
>> But there's no intermediate file here - the file gets created in one
>> go. Furthermore, doesn't make delete the target file(s) when a rule
>> fails? (One may not want to rely on this, and hence indeed have
>> multi-part rules update intermediate files of different names.)
> 
> What if something went badly enough that sed didn't manage to write the
> whole file and make never had a chance to remove the bogus file?

How's this different from an object file getting only partly written
and not deleted? We'd have to use the temporary file approach in
literally every rule if we wanted to cater for this.

Jan
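[Archive annotation: a minimal sketch of the temporary-file idiom the thread debates, written as plain shell rather than a Makefile recipe. The file name `ssdt_generated.h` and the `failing_generator` helper are invented for illustration and do not come from the xen tree.]

```shell
set -u
target=ssdt_generated.h   # invented name, stands in for a generated header

# Simulate a generator (e.g. a sed pipeline) that dies halfway through
# writing its output, leaving a truncated file behind.
failing_generator() {
    printf '/* partial' > "$1"
    return 1
}

# Fragile form: writing the target directly leaves a bogus partial file
# that a later incremental build may mistake for an up-to-date target.
failing_generator "$target" || true
[ -f "$target" ] && echo "direct write: stale partial target exists"
rm -f "$target"

# Robust form: write into a temporary file and rename into place only on
# success; on failure the temporary is discarded and no target exists.
if failing_generator "$target.tmp"; then
    mv -f "$target.tmp" "$target"
else
    rm -f "$target.tmp"
fi
[ -f "$target" ] || echo "temp-file write: no stale target left"
```

In a Makefile the robust form is typically spelled `cmd > $@.tmp && mv -f $@.tmp $@`. Note that by default GNU make deletes a target only when its recipe is interrupted by a signal, not on an ordinary recipe failure; deletion on any failure requires the `.DELETE_ON_ERROR:` special target, which is one reason the explicit temporary-file idiom persists.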


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 09:04:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 09:04:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14061.34993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3qd-0005JL-A5; Thu, 29 Oct 2020 09:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14061.34993; Thu, 29 Oct 2020 09:03:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3qd-0005JE-7C; Thu, 29 Oct 2020 09:03:55 +0000
Received: by outflank-mailman (input) for mailman id 14061;
 Thu, 29 Oct 2020 09:03:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY3qb-0005J9-8j
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 09:03:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1cd48e6-3838-4636-a1fa-03e120b04660;
 Thu, 29 Oct 2020 09:03:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A02A6B9AF;
 Thu, 29 Oct 2020 09:03:50 +0000 (UTC)
X-Inumbo-ID: e1cd48e6-3838-4636-a1fa-03e120b04660
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603962231;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ulVUtW4BNF2IuUgaF7SbMFhoXSDut/HECFdtzjrjyk0=;
	b=ShIAg2DTClkNGej+6vqRmHf8tPA+Yayz8VWduYlfzs9ZP5MrewRT5Cn71IDwKoogyeetCN
	uhzQWwjIx3Ti3i9ayblRpvtRzJyltmEr93PhlyN0jlWWAZaWZz4YLEfQQsu604b2Jp1EEP
	PHaUQbHal1JU0GZaLPYO+W4ukLFmTxc=
Subject: Re: [PATCH v3] x86/pv: inject #UD for entirely missing SYSCALL
 callbacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <0e76675b-c549-128e-449f-0c7a4df64f11@suse.com>
 <0ac0f006-c529-2437-4286-62158c2c491b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1fca003e-72d4-2f56-3180-6c39ba123a99@suse.com>
Date: Thu, 29 Oct 2020 09:53:41 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <0ac0f006-c529-2437-4286-62158c2c491b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 28.10.2020 22:31, Andrew Cooper wrote:
> On 26/10/2020 09:40, Jan Beulich wrote:
>> In the case that no 64-bit SYSCALL callback is registered, the guest
>> will be crashed when 64-bit userspace executes a SYSCALL instruction,
>> which would be a userspace => kernel DoS.  Similarly for 32-bit
>> userspace when no 32-bit SYSCALL callback was registered either.
>>
>> This has been the case ever since the introduction of 64bit PV support,
>> but behaves unlike all other SYSCALL/SYSENTER callbacks in Xen, which
>> yield #GP/#UD in userspace before the callback is registered, and are
>> therefore safe by default.
>>
>> This change does constitute a change in the PV ABI, for the corner case
>> of a PV guest kernel not registering a 64-bit callback (which has to be
>> considered a defacto requirement of the unwritten PV ABI, considering
>> there is no PV equivalent of EFER.SCE).
>>
>> It brings the behaviour in line with PV32 SYSCALL/SYSENTER, and PV64
>> SYSENTER (safe by default, until explicitly enabled).
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <JBeulich@suse.com>
>> ---
>> v3:
>>  * Split this change off of "x86/pv: Inject #UD for missing SYSCALL
>>    callbacks", to allow the uncontroversial part of that change to go
>>    in. Add conditional "rex64" for UREGS_rip adjustment. (Is branching
>>    over just the REX prefix too much trickery even for an unlikely to be
>>    taken code path?)
> 
> I find this submission confusing, seeing as my v3 is already suitably
> acked and ready to commit.  I just haven't yet had enough free time to
> do so.

My objection to the other half stands, and hence, I'm afraid, stands
in the way of your patch getting committed. Aiui Roger's ack doesn't
invalidate my objection, sorry.

>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -33,11 +33,27 @@ ENTRY(switch_to_kernel)
>>          cmoveq VCPU_syscall32_addr(%rbx),%rax
>>          testq %rax,%rax
>>          cmovzq VCPU_syscall_addr(%rbx),%rax
>> -        movq  %rax,TRAPBOUNCE_eip(%rdx)
>>          /* TB_flags = VGCF_syscall_disables_events ? TBF_INTERRUPT : 0 */
>>          btl   $_VGCF_syscall_disables_events,VCPU_guest_context_flags(%rbx)
>>          setc  %cl
>>          leal  (,%rcx,TBF_INTERRUPT),%ecx
>> +
>> +        test  %rax, %rax
>> +UNLIKELY_START(z, syscall_no_callback) /* TB_eip == 0 => #UD */
>> +        mov   VCPU_trap_ctxt(%rbx), %rdi
>> +        movl  $X86_EXC_UD, UREGS_entry_vector(%rsp)
>> +        cmpw  $FLAT_USER_CS32, UREGS_cs(%rsp)
>> +        je    0f
>> +        rex64                           # subl => subq
>> +0:
>> +        subl  $2, UREGS_rip(%rsp)
> 
> There was deliberately not a 32bit sub here (see below).

Funny you should say this, when what I took as input (v2; I
don't think I had seen a v3, or else I would have called this
one v4) had "subl", not "subq". Roger's comment was regarding
a "mov" with a 32-bit register destination, not this "sub".
Even if you had also noticed and fixed this one, I couldn't
have known, since v3 either wasn't posted or I didn't see the
posting.

> As to the construct, I'm having a hard time deciding whether this is an
> excellent idea, or far too clever for its own good.
> 
> Some basic perf testing shows that there is a visible difference in
> execution time of these two paths, which means there is some
> optimisation being missed in the frontend for the 32bit case.  However,
> the difference is tiny in the grand scheme of things (about 0.4%
> difference in aggregate time to execute a loop of this pattern with a
> fixed number of iterations.)
> 
> However, the 32bit case isn't actually interesting here.  A guest can't
> execute a SYSCALL instruction on/across the 4G->0 boundary because the
> M2P is mapped NX up to the 4G boundary, so we can never reach this point
> with %eip < 2.
> 
> Therefore, the 64bit-only form is the appropriate one to use, which
> solves any question of cleverness, or potential decode stalls it causes.

Good point.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 09:10:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 09:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14066.35004 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3xO-0006Ba-2L; Thu, 29 Oct 2020 09:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14066.35004; Thu, 29 Oct 2020 09:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3xN-0006BT-Vj; Thu, 29 Oct 2020 09:10:53 +0000
Received: by outflank-mailman (input) for mailman id 14066;
 Thu, 29 Oct 2020 09:10:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY3xN-0006BO-Ee
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 09:10:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7650592-46c0-43a7-bb04-a60d66bafc0e;
 Thu, 29 Oct 2020 09:10:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5E30FAC65;
 Thu, 29 Oct 2020 09:10:51 +0000 (UTC)
X-Inumbo-ID: d7650592-46c0-43a7-bb04-a60d66bafc0e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603962651;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SzTkLgkH7xmXURvo+vTHjDlyEfUi6v22gqhA1NQ1zMc=;
	b=HQTSStvH5NdjybFs8hoswDeDtUcMY44kW2ag5Wa02TvSqISb6LxWhEJBKYrwxLKfAMUccm
	AhrhdHBoDPDRKLpMnHP3TRA93vMGtakI45WMj3fX+YoVzHbSrY7jIoAb/N5iZqGnf0MDLR
	KwuEuKthofqddirRZXkRRIqoZBZTtic=
Subject: Re: call traces in xl dmesg during boot
To: Olaf Hering <olaf@aepfle.de>
References: <20201029092237.50b8a6f6.olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1172852a-3428-be59-a1a0-9e264664bf81@suse.com>
Date: Thu, 29 Oct 2020 10:10:51 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201029092237.50b8a6f6.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.10.2020 09:22, Olaf Hering wrote:
> During boot of xen.git#staging, Xen seems to think something pressed debug keys very early, which shows various call traces in 'xl dmesg'. A reboot may "fix" it, and no traces are printed.

I'm seeing the same every now and then, albeit only with a single
'A' so far, iirc.

> Any idea what may cause this behavior? I do not see it on a slightly smaller box.

I've been assuming this is stuff left in the serial port's input
latch or FIFO. So far I haven't had a good way to tell left-over
garbage from actual input sent very early, so I have no good idea
how to work around it. A command line option triggering the
discarding of initially present input doesn't look very attractive
to me ...

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 09:13:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 09:13:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14070.35017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3zr-0006M7-H0; Thu, 29 Oct 2020 09:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14070.35017; Thu, 29 Oct 2020 09:13:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY3zr-0006M0-Do; Thu, 29 Oct 2020 09:13:27 +0000
Received: by outflank-mailman (input) for mailman id 14070;
 Thu, 29 Oct 2020 09:13:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hwhy=EE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kY3zq-0006Lv-GD
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 09:13:26 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 91c3bf7f-8559-4444-ba46-7dd7c995a9fd;
 Thu, 29 Oct 2020 09:13:21 +0000 (UTC)
X-Inumbo-ID: 91c3bf7f-8559-4444-ba46-7dd7c995a9fd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603962801;
  h=subject:to:references:from:message-id:date:mime-version:
   in-reply-to;
  bh=njeWV/EhVeAwdJZy8iBFwE5E+DJt7jd8mdfipFHfhdM=;
  b=NT01YFeDqz5LAxxetQyF5jyU0hy4dRtsr39TGk+pPQ4KjoMpouRvMq0g
   p72gfW8pLSY195OLo0POhtnTH32N6FCufAgw+3q5XYDIZF5HEPNWTvWe4
   Wau2V4aD4xMosINAO+YpAFYfDn20PiAH7RW0zgyt9RUAI6Djk/9PZIb3W
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 1oI8sBvQvKbsoecTC6ptYy5NGOnI1E5p9u6brqy1o7+v2L1LoBUeAVablw+uWdQdiWliBHNij1
 qNN4GJfs6NJm2xCfOV28Gv8DgEsH8rCdkXyvmOEWZb90lGtbIe/N4gtTynsLFJhxI0KFJ19iPc
 YRvCK3uux0c6cKrJr4RvaYz6lhRtn4G8peMPXefRzPBQ3mp0qn7gbSrv2ab91SOnGpFIWszJNN
 tX4czHOoJswuK1UEDbTC495vDo/BCoFYslcSQy/WPmJcOT7xD1MQsuBLyi/QUxyeDvHywsqXIY
 xEQ=
X-SBRS: None
X-MesageID: 30379730
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,429,1596513600"; 
   d="asc'?scan'208";a="30379730"
Subject: Re: call traces in xl dmesg during boot
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
References: <20201029092237.50b8a6f6.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0831520a-609f-de69-7a07-1b86fe137699@citrix.com>
Date: Thu, 29 Oct 2020 09:13:08 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <20201029092237.50b8a6f6.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature";
	boundary="QhF6ym3qSJ5TBsC7AQlvmNYiiDpEja59q"
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

--QhF6ym3qSJ5TBsC7AQlvmNYiiDpEja59q
Content-Type: multipart/mixed; boundary="fsO1gEleEDsGEEguxrHRzKed47sJ5Yk9Q"

--fsO1gEleEDsGEEguxrHRzKed47sJ5Yk9Q
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB

On 29/10/2020 08:22, Olaf Hering wrote:
> During boot of xen.git#staging, Xen seems to think something pressed debug
> keys very early, which shows various call traces in 'xl dmesg'. A reboot
> may "fix" it, and no traces are printed.
>
> Any idea what may cause this behavior? I do not see it on a slightly
> smaller box.

That means Xen is receiving real keystrokes on the UART.

I've seen it before on hardware with floating wires in place of a
properly connected UART, but I've also seen it on systems with an
incorrectly configured baud rate.  Looking at your command line, you
probably want com1=115200,8n1, although this should also be the default.



Totally unrelated to the problem at hand, but an observation.

(XEN) [000000d6b68c0ecc] WARNING: NR_CPUS limit of 256 reached -
ignoring further processors

You'll need NR_CPUS configured to 512 to boot properly on this system.
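For reference, the limit in question is the build-time NR_CPUS Kconfig
option of the hypervisor; a sketch of the change (the exact menu
location and default are assumptions about this particular tree, not
checked against it):

```
# In xen/.config (or via `make -C xen menuconfig`, under the
# architecture options), raise the CPU limit from the 256 default:
CONFIG_NR_CPUS=512
```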

~Andrew


--fsO1gEleEDsGEEguxrHRzKed47sJ5Yk9Q--

--QhF6ym3qSJ5TBsC7AQlvmNYiiDpEja59q
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEEzzVJW36m9w6nfSF2ZcP5BqXXn6AFAl+ah6QACgkQZcP5BqXX
n6AclxAAubBpOm2VoMtiRvVgehGi8zG1emDjR1mdOcVDUJOrn9sizO5OFtyN1Oet
QP3aiKuN1eA3BqRu8qrXai5VlXlXYCZSFFnU7MNqEea2U4ROW4AqfgojkS2NQYMR
6Py4ppMeQ8fK7pudGQ+kqkKfbHbuYPLHjHVtDpRVPNxZLSBsALH7XcsFOCnxnMxt
GyFJHuErBMsROh6sFHRBoOC3YnEN+wx7LcI426cqVfzkqPYuzqqAZN7+ITUG4rKq
s7cNkXVzJHceuszPFe3/bs7hHLscE4Yn6Oamtco2Fq92hB94sLSlbWdkpeEflmbn
6cOeuBYur6k1RpJ1E0+vK6rNn4TpBU5UL7H/r4BoNWkMIdOcbqwJ/CJH9EuOgc2x
vTY2sAWN5J/CM1o4BMGpjTTkB/ev3CtZeBbWG62LKReuDGKG/lFzXYe1ffsjHyFs
qOzoHo1qU5STobxbBZIQJwCfub0Q7lCjw9qgwXvsPAmmBu3+YMv0L8pUnFZhHT4w
QaF8x3BFQrD3cOJi238h6BDVw8CUQGq6U8bpL4+wZw9ujS7igHPJEMpOOMu7ao0+
a1DNVHgit27ED3O1yCXb0NV6yu6Zr+FxfPbxxn2+OV1TsOg6LiH7dzVHYfHdZxYm
T/d4gr9I6dfW3QZOcraROJEwO2/KnaffA96p7wHrbp9yqnV9A0s=
=Bfn3
-----END PGP SIGNATURE-----

--QhF6ym3qSJ5TBsC7AQlvmNYiiDpEja59q--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 09:19:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 09:19:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14076.35029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY45h-0006Z9-7X; Thu, 29 Oct 2020 09:19:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14076.35029; Thu, 29 Oct 2020 09:19:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY45h-0006Z2-3S; Thu, 29 Oct 2020 09:19:29 +0000
Received: by outflank-mailman (input) for mailman id 14076;
 Thu, 29 Oct 2020 09:19:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kY45f-0006Yx-Hh
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 09:19:27 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2777765c-b1d5-4e40-8202-e07cf5be8a41;
 Thu, 29 Oct 2020 09:19:25 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9T9JN15e
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 10:19:23 +0100 (CET)
X-Inumbo-ID: 2777765c-b1d5-4e40-8202-e07cf5be8a41
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603963164;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=SC8SmVv0c/5jonRmYiSBZn1dYb/iyICnBaU1DZlYyvk=;
	b=RIOqKxtZiHGnYkqHEtrcD5Ghd+PifK4dLTJ6a8YiA80/7hOvP27VW/SrYYMHr2zbHr
	47n2vBlXhOLP5tHl6z/D8+jKDvBg38mjlxVToCvRnWwZxPB95hvz10uwGruwp3Zo4FjA
	bcrxs4ScKEmdDpZCjr/e1r1yKhtMB3uFxqIpXXodtywtBH7Eaa9X+32oxhivn5dhpXFW
	IU19nLQ+5dA0XY3aCXHEnuzVrCrHB/896cpMQk2fXE+eBR5U5E12IGEFuVg3XAYcFHrv
	8BPEbzbHuP8t0byUZT0YsBAuL3uJOT9roc5IrZRW7FHvJdfHkngTzPCZGAHNSKfBl6os
	D0BA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Date: Thu, 29 Oct 2020 10:19:16 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: call traces in xl dmesg during boot
Message-ID: <20201029101916.46854068.olaf@aepfle.de>
In-Reply-To: <0831520a-609f-de69-7a07-1b86fe137699@citrix.com>
References: <20201029092237.50b8a6f6.olaf@aepfle.de>
	<0831520a-609f-de69-7a07-1b86fe137699@citrix.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/Dp6PCMwbvN+8upAg5.v.Afa"; protocol="application/pgp-signature"

--Sig_/Dp6PCMwbvN+8upAg5.v.Afa
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 29 Oct 2020 09:13:08 +0000
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> You'll need NR_CPUS configured to 512 to boot properly on this system.

Ah, thanks. My builds use the built-in defaults. I will adjust my future
builds.

Olaf

--Sig_/Dp6PCMwbvN+8upAg5.v.Afa
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+aiRQACgkQ86SN7mm1
DoDusw//WtGwHiY1nnQQZcw2IRrd9VI8SOHnCnYlwMwKyVmxSTn2smxctU0zNNRb
x12mLuHicVyjgGZizsj1YbzPBBLLY/LCM4LPHrdM4NEXMrkjlML4VFnbehAmoVcI
Uy4T8d8nbmXlY1eKw1gdCEzmHdYSavmxhdtp5k128v8YOqBfuYm8VPwGysTRo/gO
PnqPi4FmgOs5mMB1+kN2+SRUG6X5gQFbIgaWe4SUKk0MSnH5BUjpjzy2pginE0wd
qU7sXDPIjJxeBoynrd5slnS+Ivlc72YoyDlTY0eBubuafSXAeSHMWULrsltZFNXz
OHOtpR+2flbm3kiuMYxEK9Cb6vULxveNH0mRMn8M+BREphBwetzGZrKcKUJYE8nt
yD0onmG32JWzNmEMOfw6QiYz034B3rTFTcWJbF8mE+pJhJDXIla2bFBrU+EtpctT
YgtJoZ8VOo/Om4NEI2Fh1frkHIX559CMHSiZo5/CAjV8P8R+HZyXQqvhc846Bwxs
C/xhyNfrRfSvLl2UNVn4ggHZzK8w8eWTJ+TpDSROr4DizRjpIiWbSLaCbtlyR95M
d8fAEDmB2dXLtwSxdRbTSbRbiIUotpdCXTHflT0Pbo8U4yIJZxBHQjLnkm6OiCxm
Ilp16ESIMuMBFbj4OE3wBxfrRt+SdEgicNarezkYh2sNoHnfjJw=
=v+Dx
-----END PGP SIGNATURE-----

--Sig_/Dp6PCMwbvN+8upAg5.v.Afa--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 09:20:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 09:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14080.35041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY46m-0007KZ-H9; Thu, 29 Oct 2020 09:20:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14080.35041; Thu, 29 Oct 2020 09:20:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY46m-0007KS-De; Thu, 29 Oct 2020 09:20:36 +0000
Received: by outflank-mailman (input) for mailman id 14080;
 Thu, 29 Oct 2020 09:20:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kY46l-0007KN-Bk
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 09:20:35 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.219])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a100300-c334-46b4-b805-2c3296e067a7;
 Thu, 29 Oct 2020 09:20:33 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9T9KW16C
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 10:20:32 +0100 (CET)
X-Inumbo-ID: 4a100300-c334-46b4-b805-2c3296e067a7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603963233;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=jfdgUILIVnX0r6rpu2YL1X5K0LZjxjv3EjQIIikHW6k=;
	b=oqA4UyCGoZxipQhCrWpaHpGD7dYfwvd9+Ztv0nuTIIIbxR9Dxv2h95jaq9tGCHhRTG
	Ru2rl0ZnPZg96y+Ytudh7Cqfim6N0bN08L5HEg9B/kQPvhOhA/TTbBglXx6xM8UoHaMD
	3SXl/uUFlURyEjkuQ9Duz36CitB+YItsLtC/AlSHCXZ2/0HeGoFQuCxDJssq3MBtlMov
	ap9flUhTIfrCUJAxsBg78Azk5u/Zyz+0OKD4sAVNAutHGMda2/ya0Xlcz8CGQZWeQxrG
	KEdOuLCu2L6Q+e0J/OXJn7dYaNtCm6HI4bx2Tx+CnWTicjG8d46FcsD02E9J7rxPNw/h
	Ljmw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Date: Thu, 29 Oct 2020 10:20:28 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: call traces in xl dmesg during boot
Message-ID: <20201029102028.08c85230.olaf@aepfle.de>
In-Reply-To: <1172852a-3428-be59-a1a0-9e264664bf81@suse.com>
References: <20201029092237.50b8a6f6.olaf@aepfle.de>
	<1172852a-3428-be59-a1a0-9e264664bf81@suse.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/NoJyVm/.FBGIada81nYL12i"; protocol="application/pgp-signature"

--Sig_/NoJyVm/.FBGIada81nYL12i
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 29 Oct 2020 10:10:51 +0100
schrieb Jan Beulich <jbeulich@suse.com>:

> I've been assuming this is stuff left in the serial port's input
> latch or FIFO.

Weird, there is no `ipmitool` attached to this host.

Maybe the firmware does funny things...


Olaf

--Sig_/NoJyVm/.FBGIada81nYL12i
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+aiVwACgkQ86SN7mm1
DoCfyg/8CFamjskrgUNabLft9FSPSqz4RuLGZp7q3QhU0jF6hjCoiS9/hXSjjueL
CGNB+TPF/MnmxCFtA6nUNaR7Jzmxhh44mjzadTrL8mqGD0IFJ+ATs8ULhwXqAWPK
rQMJpLdY45TRhL72zR8vJ7bw/rQ1ImBt2ImVJcgvJYcJStF6fXBX2VX5jLcLN/7v
8CsMxDLKcl7HJNhN7h/waOWqqJkOdya5p/SJVum4jQXODYRFOzYxMBK3944XNxp/
jHIYDg1G00FTTJ6XPY69B8G79VzAvviifMNZxX8QLW3rxba1+kzuWHyZs73fEmA/
hZIcIpl6u+r0WmEjjLIq9I2ZNXJL/tIHwzCfx4yrYM4eOPdR/OyoWZLt85p7ij5+
aahB/px4+qwDHVhNtwlWXG2UpO/tyADR7MGiSG6CWTdvLcNeNqE9zkN3eElYnwWV
BVQDpSpSEJRR/bD2iB/OkHcNAIFThe6+HGOGwSRhIz6bmPLng5caqGMqYfXlnNxZ
16hjJIEC47gboTJouYUBistiAPwCuoL7zhwhXqIRbhlAvPSt6s84N8yYWJxt9PvY
e9QUAcdOvt62pIFh2alTDWv3JUy9+nODktiYf2zb1nbzpov3YGp/nONw1OOPvvf+
wSE9F/iNqoleRs2SF9x2UxqXr1eVP4hfELz2h9zk/fNoXzMC4lI=
=b5oW
-----END PGP SIGNATURE-----

--Sig_/NoJyVm/.FBGIada81nYL12i--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 09:51:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 09:51:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14089.35053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY4ar-0001Ys-Ss; Thu, 29 Oct 2020 09:51:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14089.35053; Thu, 29 Oct 2020 09:51:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY4ar-0001Yl-Pv; Thu, 29 Oct 2020 09:51:41 +0000
Received: by outflank-mailman (input) for mailman id 14089;
 Thu, 29 Oct 2020 09:51:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vaed=EE=kernel.org=arnd@srs-us1.protection.inumbo.net>)
 id 1kY4aq-0001Yg-4v
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 09:51:40 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d10c9f9c-25f2-4fef-8023-716e105234c2;
 Thu, 29 Oct 2020 09:51:39 +0000 (UTC)
Received: from mail-qv1-f45.google.com (mail-qv1-f45.google.com
 [209.85.219.45])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 72FA121655
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 09:51:38 +0000 (UTC)
Received: by mail-qv1-f45.google.com with SMTP id t20so1069902qvv.8
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 02:51:38 -0700 (PDT)
X-Inumbo-ID: d10c9f9c-25f2-4fef-8023-716e105234c2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603965098;
	bh=9kcfotFFJ47rYdLC6ZxzdCDBa1k7PeFD878Gh2iRMYw=;
	h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
	b=vMd1WTw46NmbdjDfGERZXE49CkIxTu62Lo98SE3sftOXo2ak1wjE9ZykA5QSCJ4f6
	 RDyWIZLY0FUMnquHbcpT2wuFgnwMWGJcElPYN/dCVDOBDKjwtw6POjOBjpvi7Bxxzs
	 IPD7rWkW0qA5Zp3lwpn5CoP9SOsl9SQ1SBIW0R2s=
X-Gm-Message-State: AOAM5339D4LE5ypYI3aIYXXGK8fbutN+VYoeeu9JSR4XLBJ5JNULwXxo
	NlH496EvW4+JgvF4X+7cFvR8ImnDMS7e3jSVljQ=
X-Google-Smtp-Source: ABdhPJxaZEntygitIVQOXS85Gb8nGw1osAMMJulsU6ZeD1uTGkc5sotJk4c97dNCIKL6ORL8v2BjlIfOh3rRL6d2/zw=
X-Received: by 2002:ad4:4203:: with SMTP id k3mr2986180qvp.8.1603965097361;
 Thu, 29 Oct 2020 02:51:37 -0700 (PDT)
MIME-Version: 1.0
References: <20201028212417.3715575-1-arnd@kernel.org> <ea34f1d3-ed54-a2de-79d9-5cc8decc0ab3@redhat.com>
In-Reply-To: <ea34f1d3-ed54-a2de-79d9-5cc8decc0ab3@redhat.com>
From: Arnd Bergmann <arnd@kernel.org>
Date: Thu, 29 Oct 2020 10:51:21 +0100
X-Gmail-Original-Message-ID: <CAK8P3a0e0YAkh_9S1ZG5FW3QozZnp1CwXUfWx9VHWkY=h+FVxw@mail.gmail.com>
Message-ID: <CAK8P3a0e0YAkh_9S1ZG5FW3QozZnp1CwXUfWx9VHWkY=h+FVxw@mail.gmail.com>
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, 
	"the arch/x86 maintainers" <x86@kernel.org>, Arnd Bergmann <arnd@arndb.de>, "K. Y. Srinivasan" <kys@microsoft.com>, 
	Haiyang Zhang <haiyangz@microsoft.com>, Stephen Hemminger <sthemmin@microsoft.com>, 
	"H. Peter Anvin" <hpa@zytor.com>, "Rafael J. Wysocki" <rjw@rjwysocki.net>, 
	Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, 
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>, linux-hyperv@vger.kernel.org, 
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, kvm list <kvm@vger.kernel.org>, 
	Platform Driver <platform-driver-x86@vger.kernel.org>, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	"open list:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 29, 2020 at 8:04 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 28/10/20 22:20, Arnd Bergmann wrote:
> > Avoid this by renaming the global 'apic' variable to the more descriptive
> > 'x86_system_apic'. It was originally called 'genapic', but both that
> > and the current 'apic' seem to be a little overly generic for a global
> > variable.
>
> The 'apic' affects only the current CPU, so one of 'x86_local_apic',
> 'x86_lapic' or 'x86_apic' is probably preferable.

Ok, I'll change it to x86_local_apic then, unless someone else has
a preference between them.

> I don't have huge objections to renaming 'apic' variables and arguments
> in KVM to 'lapic'.  I do agree with Sean however that it's going to
> break again very soon.

I think ideally there would be no global variable, with all accesses
encapsulated in function calls, possibly using static_call() optimizations
if any of them are performance-critical.

It doesn't seem hard to do, but I'd rather leave that change to
an x86 person ;-)

      Arnd


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 09:55:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 09:55:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14096.35065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY4ek-0001m0-JQ; Thu, 29 Oct 2020 09:55:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14096.35065; Thu, 29 Oct 2020 09:55:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY4ek-0001lt-GQ; Thu, 29 Oct 2020 09:55:42 +0000
Received: by outflank-mailman (input) for mailman id 14096;
 Thu, 29 Oct 2020 09:55:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sh/s=EE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kY4ej-0001lo-01
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 09:55:41 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.53]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eea2b81a-53e7-4399-b9ea-a3bc850ad0b4;
 Thu, 29 Oct 2020 09:55:40 +0000 (UTC)
Received: from DB6PR0402CA0009.eurprd04.prod.outlook.com (2603:10a6:4:91::19)
 by AM9PR08MB6274.eurprd08.prod.outlook.com (2603:10a6:20b:2d5::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Thu, 29 Oct
 2020 09:55:32 +0000
Received: from DB5EUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:91:cafe::37) by DB6PR0402CA0009.outlook.office365.com
 (2603:10a6:4:91::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Thu, 29 Oct 2020 09:55:32 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT013.mail.protection.outlook.com (10.152.20.105) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Thu, 29 Oct 2020 09:55:32 +0000
Received: ("Tessian outbound c189680f801b:v64");
 Thu, 29 Oct 2020 09:55:31 +0000
Received: from 2af84ba5829e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9B6D1991-ED96-42F0-8898-5CBF1A03D10E.1; 
 Thu, 29 Oct 2020 09:55:16 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2af84ba5829e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 29 Oct 2020 09:55:16 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3034.eurprd08.prod.outlook.com (2603:10a6:5:24::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Thu, 29 Oct
 2020 09:55:15 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Thu, 29 Oct 2020
 09:55:15 +0000
X-Inumbo-ID: eea2b81a-53e7-4399-b9ea-a3bc850ad0b4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/Utgxc1gYS05fnkTtMDjRM647ko6/E1RfiZUyMuo/lg=;
 b=Ma03aF2jF+HuKomvsjrBvO7i9FjszEfVcMqqZdhQRKjYjYx1YBUSvjSK4M2gjfyCmewpCwQZIjADKrde1EZRl1sllGTfku/7UNttAoaT+2TgW7yjslXr2vfDeLufMEq/LOGDB5fm54nWUdVz/PmgmjeuQFVoPME2FozAFdMz5l8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: d935657349f6c042
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CTqxJ5A17c1AppxU8Ty9usC6Cul2ng6OaePjnTKQfTqYuVuO5kH+/8UcU9nuHrP68RV9DgEPAZbGPRgtm7xzhIuwKkOlsZPFKMdt5N4wQH+RLz1Cm+bzfhhG1OCKMTEFFp0QIyTEIjdSzcEk1u6zBIRz1hqroN+QQ+xmoz6wcLhVWho8wpAYuhoY9c+UwOdRCitPXIUq0KVCr9n8sEYxLSUqNY8A3a1uW11SkzEcxWMmnPigh+2MAA5JPY03OxeA5GEVqTJBVwAq227j3DChPxRVE7HDMql5pf9/3LKlBkL0IuwjGKgPeuYencMWaWf1jVnE+/E/OIA4OGUjClAzyw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/Utgxc1gYS05fnkTtMDjRM647ko6/E1RfiZUyMuo/lg=;
 b=YVTd1bxdYtUm65d8XGDRa5ROusDy2sowtOPvV2bx/TBWxHwE2qevOTqjMsw2HNVG4vup0oZgJR/eIiX3Yc3+AQXTCOdzXBLZ1SmijqE39bKg0tOOHl7kCgakkNIEkhxR64Jl3QgmFjsn1Qpdn1vbjLz7CwW8orrzdJRdW8iKfSIxeuYx9Fc1q9Zna5xwZRrOwYnDIrPi86YZqVQHVQTDfZKWL/7Oi3Z5ox/PM6fGNpuSKVXyCQpzeifWSk86CnpS4VI1ru7H/6Q/lVIwZwjp3UzfErDKTp0x3TMOWCzSbzMIEEVTH/psuThIO7A6g/9zn09bF0RX15qGDAcmpCtqbA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: "open list:X86" <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Index: AQHWq7RKM3lRt4XRXUWYGPT9/eCkOKmtW9sAgAD/7gA=
Date: Thu, 29 Oct 2020 09:55:15 +0000
Message-ID: <DFD994CA-C456-468C-8442-0F63CE661E78@arm.com>
References:
 <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
 <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
 <c6790d34-2893-78c4-d49f-7ef4acfceb96@xen.org>
In-Reply-To: <c6790d34-2893-78c4-d49f-7ef4acfceb96@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 4b7103a0-f8a7-427a-589d-08d87bf0c630
x-ms-traffictypediagnostic: DB7PR08MB3034:|AM9PR08MB6274:
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB6274149F99E61B6DF8E2C2729D140@AM9PR08MB6274.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 xUBKE3jFKPKzayjOzFiSgOxrMh+ePEb3076dYwiCQzOESVNWFO5U3tPyPwWFpCi6hSb9L+EpuzrIoqhuzD70mzXfzOWIe1PkDTv8ccP2WLgssBbPw5wkhJZMcZE9tkznVE0AR+wvKCpCen8JtPrpEUwl0bH+KrD8ZLrtLS4gTJzyVkrGZizXNLlBvR9WQaYy+9KkitKsZk0ECVllDiD76jLUxS59RRh9JFYYL7j52rY5h1Q5TeoO6A6Vu3ySLsMOnSBGzUQ3UuQTVHNT+ma/rYyN9RQ/YBFUplOMDp0bOCuugL3QZEgndNwD3S8iTl5VWZ9GIWtpXsN+A4iCXZn0+w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(366004)(396003)(39860400002)(346002)(6512007)(8936002)(66446008)(5660300002)(4326008)(71200400001)(91956017)(66946007)(76116006)(66476007)(64756008)(66556008)(86362001)(36756003)(478600001)(83380400001)(316002)(2616005)(6916009)(54906003)(26005)(8676002)(53546011)(6486002)(186003)(6506007)(2906002)(33656002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 PGMGO8a9OFTdCXUCA6e2ZD2zqSUcLkBX0A4l7h53iKx3cFUH6rWpvJg2+kB8ePwjF9GIEOzDU0ZjP44HS4UpB5ABOsYwkeRTexENDBz8w1uufQvnTeuWWLzWmWYDXwgGOoVBSxH9V/b48bQxm9XMpYa2Tx3O/p4fV4CtJ0VNHg5xE86voAwVANQI2CUnR7bl+pi0GUd7y4+WBcNcy/2RW4EN7Fcxi4menKuhK7SeyDxRy8YwRtbZxwgwZ+MZs3deM/6iCfJgNNn400pAXkpsoBeFfhnl6aoGVpnC/SWAYCs8qwBRfPZ39acFvNylWFV4+jeg8jhX38nPHI7VxqwOTAAO9wcEKBZVHO6BapgQT/tb1Go2vfCp68/mF0qPXyn66ct2qKvrt8tTUdFbhB1pJufPrKShRL4bz0e3EfxdT/3dybMT57vZz7RU1J1n54fx9OCWRetGZl0Sg1/sroc/zHRyEwt96jgsN7mhbwvur27IkWJ38ZCViYrWMWkSlC2FG9SrsK7tkdZayfpkUsHZAEA1v7dDDT5pattAQNOoJCkc5dy0G2jqtIYm/4yEyonWX00I/h8xHJP5Ho4XYhWUJg8AvdSLLB11ROVXtjAHXcLtcyQr2trHmHkwHWFpze6iqxvmnceoN63K7ym/ae+wpg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <1FAF3F81E7FDC3499FD908A83E58482E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3034
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e832afca-666c-469b-6d79-08d87bf0bc30
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gs3JeG89J3Ryfmq8or/WLVbuA+OZ7CqsprQbQ7edIEDEqRQl9XBEDUmXAvaPDuCIqkEy3Qo/SBMtnxxR8QRChvc8xybfbOHwIfNf8GfWXcg9Spa+ZnZeyYwAyc+cPsqMQ0pFm1PV5N0p3Uk67qhwFdxQ2aeqQJ8gqZR/w0pNRLf4jdfYyW+M41MDCFf/h1+1gW3ehLEhra4iGH17QRHYzBOwjnhh0rKXCom0j8j6Z8rQ2oPsS2w0kaejrodtnVaUROaw50A0ebKrmtClv/12mtZLPXqbHa+B+m7e5nQMWy0/OdH1Ig8wKHHNbSk9uu0faoWkqmOMhZsUDMHxyS2sEaddVw++kND08xzFFq2dJy8FNdX6YgFlWezHFCY+llcge5jhHPvtN/Q6Fptauasezw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(39860400002)(376002)(396003)(46966005)(70206006)(2906002)(8936002)(53546011)(6862004)(54906003)(83380400001)(81166007)(82740400003)(6506007)(107886003)(4326008)(70586007)(478600001)(356005)(6512007)(8676002)(47076004)(36756003)(6486002)(2616005)(316002)(26005)(86362001)(336012)(186003)(33656002)(82310400003)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Oct 2020 09:55:32.0209
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4b7103a0-f8a7-427a-589d-08d87bf0c630
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6274

Hi Julien,

> On 28 Oct 2020, at 18:39, Julien Grall <julien@xen.org> wrote:
>
> Hi Bertrand,
>
> On 26/10/2020 16:21, Bertrand Marquis wrote:
>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>> not implementing the workaround for it could deadlock the system.
>> Add a warning during boot informing the user that only trusted guests
>> should be executed on the system.
>> An equivalent warning is already given to the user by KVM on cores
>> affected by this errata.
>> Also taint the hypervisor as unsecure when this errata applies and
>> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks

>
> If you don't need to resend the series, then I would be happy to fix the
> typo pointed out by George on commit.

There is only the condensing from Stefano.
If you can handle that on commit too, great, but if you need me to send a v3
to make your life easier, do not hesitate to tell me.

Cheers
Bertrand

>
> Cheers,
>
> --
> Julien Grall
>



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 10:01:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 10:01:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14110.35077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY4kP-0002kA-87; Thu, 29 Oct 2020 10:01:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14110.35077; Thu, 29 Oct 2020 10:01:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY4kP-0002k3-59; Thu, 29 Oct 2020 10:01:33 +0000
Received: by outflank-mailman (input) for mailman id 14110;
 Thu, 29 Oct 2020 10:01:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rF0B=EE=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kY4kN-0002ju-RO
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 10:01:31 +0000
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 654e3420-3d3c-40e6-9649-47812afaa44d;
 Thu, 29 Oct 2020 10:01:30 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id k10so722028wrw.13
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 03:01:30 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id m8sm4010179wrw.17.2020.10.29.03.01.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 29 Oct 2020 03:01:28 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id CD4601FF7E;
 Thu, 29 Oct 2020 10:01:27 +0000 (GMT)
X-Inumbo-ID: 654e3420-3d3c-40e6-9649-47812afaa44d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:in-reply-to:date
         :message-id:mime-version:content-transfer-encoding;
        bh=NgsgEF3cxh4a2QJyJ3yz8CSoN5GhmG6ImfMK4QDU38Q=;
        b=eGHuWm7a8YHGoQQJkRVDw9QsDr2F43cQVL2euxj7XBnjYPtt7Ib7g0Fn20LpIHzM47
         AeRkPIcP3DbwaJ6NXZg2r6ecLsBsD6o91apVoD6gYryRRyOKYestNhleAmHqSuM1+qyz
         CYCR7QIFKJXHbeCTDBP3r+bQ1E3vz1F42SOctC7E1xzUbbuFAjnYA/ZXCEOi8hdrXMFo
         j9SIq0z07Vcs/z9RucelrLvYmvli/8Kkkqepvn/93566Ch38fBldRiJk3vo+rjc1jZiU
         WsVXVdM7YuZmZ4BHH+ILPVAmBYZw8oz2yEtuSmn16ftgAavQQNSm3t/moMEUu/ymqq+f
         E1tQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject
         :in-reply-to:date:message-id:mime-version:content-transfer-encoding;
        bh=NgsgEF3cxh4a2QJyJ3yz8CSoN5GhmG6ImfMK4QDU38Q=;
        b=pz80ILo7qJbyT3qx0c5WrIk2cXJsrtIAqQTIzfpbjtPJemIbVdxrB7Ql+vrKLJSaPx
         w7usuRmLjHeWOyV1+Uz3f7B0b27eXYyTMYCG7iF/aLFb/FCvi5oQihGKYFdOL6zmrGzk
         RUmbsHelBaooIHuDBoQa4ZNRbZZEqk18RC/S1MTpBP45qiktLIHLFnYDGyXV8vEDqnJf
         944osi+Pb/tDMcH/LJoWfeGElG6muJBOY0M5L2h24nVJovaGV7h/uPGSotS0EbWYFwkA
         LpOdO9YJEfyHyACCfqjuorX216XXHXLKF8P19o4R5dyYy2NJ6g9D/1UFomebooYmETOz
         +mLg==
X-Gm-Message-State: AOAM532DQ8DXvZXAP234i/3NWe2cMzwdEBs9YI5JhQwBGLoFmw+D2x0c
	cydr68Rk5RHsQiSklpID5UL19g==
X-Google-Smtp-Source: ABdhPJzWpStp+pVp+vl7p4xVCS8J5Ty0ezaMWiGhF7GJHh+IQlKU+PIE3IZJSNl53jpoBI09DxLgvA==
X-Received: by 2002:a5d:448b:: with SMTP id j11mr4433017wrq.129.1603965689861;
        Thu, 29 Oct 2020 03:01:29 -0700 (PDT)
References: <20201028174406.23424-1-alex.bennee@linaro.org>
 <alpine.DEB.2.21.2010281406080.12247@sstabellini-ThinkPad-T480s>
User-agent: mu4e 1.5.6; emacs 28.0.50
From: Alex Bennée <alex.bennee@linaro.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: qemu-devel@nongnu.org, xen-devel@lists.xenproject.org, Masami Hiramatsu
 <masami.hiramatsu@linaro.org>, Anthony Perard <anthony.perard@citrix.com>,
 Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] meson.build: fix building of Xen support for aarch64
In-reply-to: <alpine.DEB.2.21.2010281406080.12247@sstabellini-ThinkPad-T480s>
Date: Thu, 29 Oct 2020 10:01:27 +0000
Message-ID: <87d011mjuw.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Stefano Stabellini <sstabellini@kernel.org> writes:

> On Wed, 28 Oct 2020, Alex Bennée wrote:
>> Xen is supported on aarch64 although weirdly using the i386-softmmu
>> model. Checking based on the host CPU meant we never enabled Xen
>> support. It would be nice to enable CONFIG_XEN for aarch64-softmmu to
>> make it not seem weird but that will require further build surgery.
>>
>> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>> Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>
>> Cc: Stefano Stabellini <sstabellini@kernel.org>
>> Cc: Anthony Perard <anthony.perard@citrix.com>
>> Cc: Paul Durrant <paul@xen.org>
>> Fixes: 8a19980e3f ("configure: move accelerator logic to meson")
>> ---
>>  meson.build | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/meson.build b/meson.build
>> index 835424999d..f1fcbfed4c 100644
>> --- a/meson.build
>> +++ b/meson.build
>> @@ -81,6 +81,8 @@ if cpu in ['x86', 'x86_64']
>>      'CONFIG_HVF': ['x86_64-softmmu'],
>>      'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
>>    }
>> +elif cpu in [ 'arm', 'aarch64' ]
>> +  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
>>  endif
>
> This looks very reasonable -- the patch makes sense.
>
>
> However I have two questions, mostly for my own understanding. I tried
> to repro the aarch64 build problem but it works at my end, even without
> this patch.

Building on an x86 host, or with a cross compiler?

> I wonder why. I suspect it works thanks to these lines in
> meson.build:
>
>   if not get_option('xen').disabled() and 'CONFIG_XEN_BACKEND' in config_host
>     accelerators += 'CONFIG_XEN'
>     have_xen_pci_passthrough = not get_option('xen_pci_passthrough').disabled() and targetos == 'linux'
>   else
>     have_xen_pci_passthrough = false
>   endif
>
>
> But I am not entirely sure who is adding CONFIG_XEN_BACKEND to
> config_host.

This is part of the top-level configure check, which basically checks
for --enable-xen or autodetects the presence of the userspace libraries.
I'm not sure if we have a slight over-proliferation of symbols for Xen
support (although I'm about to add more).

> The other question is: does it make sense to print the value of
> CONFIG_XEN as part of the summary? Something like:
>
> diff --git a/meson.build b/meson.build
> index 47e32e1fcb..c6e7832dc9 100644
> --- a/meson.build
> +++ b/meson.build
> @@ -2070,6 +2070,7 @@ summary_info += {'KVM support':       config_all.has_key('CONFIG_KVM')}
>  summary_info += {'HAX support':       config_all.has_key('CONFIG_HAX')}
>  summary_info += {'HVF support':       config_all.has_key('CONFIG_HVF')}
>  summary_info += {'WHPX support':      config_all.has_key('CONFIG_WHPX')}
> +summary_info += {'XEN support':      config_all.has_key('CONFIG_XEN')}
>  summary_info += {'TCG support':       config_all.has_key('CONFIG_TCG')}
>  if config_all.has_key('CONFIG_TCG')
>    summary_info += {'TCG debug enabled': config_host.has_key('CONFIG_DEBUG_TCG')}
>
>
> But I realize there is already:
>
> summary_info += {'xen support':       config_host.has_key('CONFIG_XEN_BACKEND')}
>
> so it would be a bit of a duplicate

Hmm so what we have is:

  CONFIG_XEN_BACKEND
    - ensures that appropriate compiler flags are added
    - pegs RAM_ADDR_MAX at UINT64_MAX (instead of UINTPTR_MAX)
  CONFIG_XEN
    - which controls a bunch of build objects, some of which may be i386 only?
    ./accel/meson.build:15:specific_ss.add_all(when: ['CONFIG_XEN'], if_true: dummy_ss)
    ./accel/stubs/meson.build:2:specific_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
    ./accel/xen/meson.build:1:specific_ss.add(when: 'CONFIG_XEN', if_true: files('xen-all.c'))
    ./hw/9pfs/meson.build:17:fs_ss.add(when: 'CONFIG_XEN', if_true: files('xen-9p-backend.c'))
    ./hw/block/dataplane/meson.build:2:specific_ss.add(when: 'CONFIG_XEN', if_true: files('xen-block.c'))
    ./hw/block/meson.build:14:softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xen-block.c'))
    ./hw/char/meson.build:23:softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xen_console.c'))
    ./hw/display/meson.build:18:softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xenfb.c'))
    ./hw/i386/xen/meson.build:1:i386_ss.add(when: 'CONFIG_XEN', if_true: files('xen-hvm.c',
                                                                               'xen-mapcache.c',
                                                                               'xen_apic.c',
                                                                               'xen_platform.c',
                                                                               'xen_pvdevice.c')
    ./hw/net/meson.build:2:softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xen_nic.c'))
    ./hw/usb/meson.build:76:softmmu_ss.add(when: ['CONFIG_USB', 'CONFIG_XEN', libusb], if_true: files('xen-usb.c'))
    ./hw/xen/meson.build:1:softmmu_ss.add(when: ['CONFIG_XEN', xen], if_true: files(
    ./hw/xen/meson.build:20:specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
    ./hw/xenpv/meson.build:3:xenpv_ss.add(when: 'CONFIG_XEN', if_true: files('xen_machine_pv.c'))
    - there are also some stubbed inline functions controlled by it
  CONFIG_XEN_IGD_PASSTHROUGH
    - specific x86 PC only feature via Kconfig rule
  CONFIG_XEN_PCI_PASSTHROUGH
    - controls a Linux-specific feature (via meson rule)

First obvious question: is everything in hw/i386/xen actually i386
only? APIC seems pretty PC orientated, but what about xen_platform and
pvdevice? There seem to be some dependencies on xen-mapcache across the
code.

--
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 10:57:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 10:57:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14128.35089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY5cQ-000796-HR; Thu, 29 Oct 2020 10:57:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14128.35089; Thu, 29 Oct 2020 10:57:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY5cQ-00078z-EB; Thu, 29 Oct 2020 10:57:22 +0000
Received: by outflank-mailman (input) for mailman id 14128;
 Thu, 29 Oct 2020 10:57:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UAOD=EE=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1kY5cO-00078u-UT
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 10:57:20 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40eb2c34-5b67-46ad-95fc-b59a5684fa06;
 Thu, 29 Oct 2020 10:57:19 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=UAOD=EE=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
	id 1kY5cO-00078u-UT
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 10:57:20 +0000
X-Inumbo-ID: 40eb2c34-5b67-46ad-95fc-b59a5684fa06
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 40eb2c34-5b67-46ad-95fc-b59a5684fa06;
	Thu, 29 Oct 2020 10:57:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603969039;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=tNjUvnugyUPqKcRpSygwhUcjNKEdo+iL0Ets3TLEYcg=;
  b=QOg1Ppj97I812SAuPQwSNHaV0cRo3QksQBiK2QJSUBrucfjXVkN4X0LB
   gs+2KzjUAqAAu1bFG8lVCkfyq/NBnOB/V2G0o43D/AWrwFA0N5M9dNoZn
   f/fgwwEqQWacTsnAvtbmb2+MjmNvRIkJOQMkIBcQmTlcUaQK7jIQUZZa4
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Vf9RLH2f0eDGqmSQISgS67B2/3ErBoNa90E5UUQJn44P8+fnmMS7WtmVRWAQ06ZahosTUtEl45
 KxiKEmto9P8zZFBxNCD8qnUgR4t+0OViP5Ztd5hvAETaapiVamC+vIcfZjedBF31oBd4kAtA7I
 ELyYoA11opINXzIqwAsTkgp+K4IEAsywk/VVBhm1oyBTo+ra6z5rQSBwA4NYRogjnI+78luQyY
 hjre/nT8hQv75KEOqbbVeuAsTeF6cjF3dg1UMKFhrsyoN6XdgQk4NW6wsL1yr0sgvOPCeVoi1w
 wHI=
X-SBRS: None
X-MesageID: 30295470
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,429,1596513600"; 
   d="scan'208";a="30295470"
Date: Thu, 29 Oct 2020 10:57:15 +0000
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, <xen-devel@lists.xenproject.org>,
	Olaf Hering <olaf@aepfle.de>
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
Message-ID: <20201029105715.GG2214@perard.uk.xensource.com>
References: <20201026204151.23459-1-olaf@aepfle.de>
 <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
 <20201027112703.24d55a50.olaf@aepfle.de>
 <bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
 <24025dd2-2c61-7e92-a9b1-2433eea2e909@citrix.com>
 <3880bcbd-9281-10a5-7de5-f73bcf74557a@suse.com>
 <20201028181344.GA273696@perard.uk.xensource.com>
 <4597e596-822e-041a-83de-9fcbfbd03933@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <4597e596-822e-041a-83de-9fcbfbd03933@suse.com>

On Thu, Oct 29, 2020 at 09:47:09AM +0100, Jan Beulich wrote:
> On 28.10.2020 19:13, Anthony PERARD wrote:
> > On Tue, Oct 27, 2020 at 12:06:56PM +0100, Jan Beulich wrote:
> >> On 27.10.2020 11:57, Andrew Cooper wrote:
> >>> On 27/10/2020 10:37, Jan Beulich wrote:
> >>>> On 27.10.2020 11:27, Olaf Hering wrote:
> >>>>> Am Tue, 27 Oct 2020 11:16:04 +0100
> >>>>> schrieb Jan Beulich <jbeulich@suse.com>:
> >>>>>
> >>>>>> This pattern is used when a rule consists of multiple commands
> >>>>>> having their output appended to one another's.
> >>>>> My understanding is: a rule is satisfied as soon as the file exists.
> >>>> No - once make has found that a rule's commands need running, it'll
> >>>> run the full set and only check again afterwards.
> >>>
> >>> It stops at the first command which fails.
> >>>
> >>> Olaf is correct, but the problem here is an incremental build issue, not
> >>> a parallel build issue.
> >>>
> >>> Intermediate files must not use the name of the target, or a failure and
> >>> re-build will use the (bogus) intermediate state rather than rebuilding it.
> >>
> >> But there's no intermediate file here - the file gets created in one
> >> go. Furthermore, doesn't make delete the target file(s) when a rule
> >> fails? (One may not want to rely on this, and hence indeed have multi-
> >> part rules update intermediate files of different names.)
> > 
> > What if something went badly enough that sed didn't manage to write the
> > whole file and make never had a chance to remove the bogus file?
> 
> How's this different from an object file getting only partly written
> and not deleted? We'd have to use the temporary file approach in
> literally every rule if we wanted to cater for this.

I thought that things like `gcc' would write the final object to a
temporary place and then rename it to the final destination, but that
doesn't seem to be the case.

I tried to see what happens if the `sed' command fails: the target is
created, empty, and doesn't get deleted by make. So an incremental
build uses the broken file without trying to rebuild it.

If we want `make' to delete the target when a rule fails, I think we need
to add '.DELETE_ON_ERROR:' somewhere. Or avoid creating files before the
command is likely to succeed.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 11:07:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 11:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14133.35100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY5mE-00087g-Gl; Thu, 29 Oct 2020 11:07:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14133.35100; Thu, 29 Oct 2020 11:07:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY5mE-00087W-Dc; Thu, 29 Oct 2020 11:07:30 +0000
Received: by outflank-mailman (input) for mailman id 14133;
 Thu, 29 Oct 2020 11:07:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kY5mD-00087R-0D
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 11:07:29 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9b4f07b-0764-428c-91da-afb4e77150ea;
 Thu, 29 Oct 2020 11:07:27 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9TB7P1pP
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 12:07:25 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kY5mD-00087R-0D
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 11:07:29 +0000
X-Inumbo-ID: d9b4f07b-0764-428c-91da-afb4e77150ea
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d9b4f07b-0764-428c-91da-afb4e77150ea;
	Thu, 29 Oct 2020 11:07:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603969646;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=OkdInhY8ISYKDz1eHHErpkH+spNmrkzvpZ+j9etzVro=;
	b=U5eShJ7osS0cMMMK17kqtmwgQpTvKdI/F9yCl+bCVejQZREY18UiBiYrEmKzwaCCCk
	iR4vSqlV8A5kqUzeEi9jzmI8jLc2XAa+1MKyJujprKUlfhS33nt6OEn33q09pfJZDhiq
	kmCkdzd0Dv/0Lz8TzQtglnAI9G9zAXjVtDKYicJW9uZVEf7YopCKtShdQu5ahJWnZiOk
	Z7wtUpLGofs2KT6J3swFpuT0Nf7arUPvxBmTuwEy2PJ4S/D4XVdrS+LP3zHFywPYZ15x
	gl9QWi/2f3V8FGDra+IYnga7HzLi417yvJNEKVM8qlTH3b9uN+ZeR8+WACMBa0u6ZoQh
	UPIQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDXdoX8l8pYAcz5OTW+r+/A=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9TB7P1pP
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 12:07:25 +0100 (CET)
Date: Thu, 29 Oct 2020 12:07:10 +0100
From: Olaf Hering <olaf@aepfle.de>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
 <wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
Message-ID: <20201029120710.382b48b8.olaf@aepfle.de>
In-Reply-To: <20201029105715.GG2214@perard.uk.xensource.com>
References: <20201026204151.23459-1-olaf@aepfle.de>
	<68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
	<20201027112703.24d55a50.olaf@aepfle.de>
	<bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
	<24025dd2-2c61-7e92-a9b1-2433eea2e909@citrix.com>
	<3880bcbd-9281-10a5-7de5-f73bcf74557a@suse.com>
	<20201028181344.GA273696@perard.uk.xensource.com>
	<4597e596-822e-041a-83de-9fcbfbd03933@suse.com>
	<20201029105715.GG2214@perard.uk.xensource.com>
X-Mailer: Claws Mail 2020.08.19 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 boundary="Sig_/t1lFA3iyqOAZ4XapZ9mVPva"; protocol="application/pgp-signature"

--Sig_/t1lFA3iyqOAZ4XapZ9mVPva
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 29 Oct 2020 10:57:15 +0000
schrieb Anthony PERARD <anthony.perard@citrix.com>:

> we need to add '.DELETE_ON_ERROR:'

The last paragraph at https://www.gnu.org/software/make/manual/html_node/Errors.html#Errors suggests that this is a good thing to have.


Olaf

--Sig_/t1lFA3iyqOAZ4XapZ9mVPva
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAl+aol4ACgkQ86SN7mm1
DoBOlA/+KB6IsoSYAk250IUqUJyBirmv8C5IGkopUrEfY3o90wWOJDQFSeQ/azeI
cEjENVLl2oh/txAM0D7fzdBzAerOSN9/W1rVDVl3bWccEjV9CkSUuiAI6AyCbrZz
uLnpjU+QxCfPbmvFJKWDoSJXpFqdmObDWOaLs9rvA9BrUrbeWCsnv7gsXkD5jfFq
hH41MYfraFMLvLe1ux2Zf92lUvKFbyyEUuISDCEhAasrxvMCS4GTh7q9tqq7OQ+u
OKwsWS/WcoNmfN8eTY+YV6D3xlBAIMqkuSN6HboZNJqxztBuUkQeC9knF+/dnGkF
e1Fm3FxkK8yNQytb07ymgq/EMh6+Dqc7555xvk6yymxytN/uqOfTCjTQAKrXQynR
vO6R47rj5w/tw81mUJoUYJnZww6Ivfdkw89U4cZcyFwiF3sJpyzhFsz7hU3GvsiC
g8kHi6qd0mxYt/uLIkarIhC4v/gla8iSZv1e/kqVir6paW+KdipLXFt3+nNpVp2Q
lxULfXkFujl2E6Bj8C9K3WHpwhX2vMZ+NG6ImbNqp1SL0Lu4TuJ8hxQZdKSa09zn
kJtjS/jGmkqnIbtzHZ5QTR6N8E8iS0N21AfQWnbW+oRX+jtukIxpJmAAbBEYnU/a
91WiMiKoSfq7hLj2XYkEsPdtfDLor4oFpZ4fymvm/e2NM0Tb2dA=
=60w4
-----END PGP SIGNATURE-----

--Sig_/t1lFA3iyqOAZ4XapZ9mVPva--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 11:25:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 11:25:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14141.35116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY63N-0001Tt-4q; Thu, 29 Oct 2020 11:25:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14141.35116; Thu, 29 Oct 2020 11:25:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY63N-0001Tm-0q; Thu, 29 Oct 2020 11:25:13 +0000
Received: by outflank-mailman (input) for mailman id 14141;
 Thu, 29 Oct 2020 11:25:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kY63L-0001TE-Td
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 11:25:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b9b3b50-41cb-431c-8d28-515955dd2648;
 Thu, 29 Oct 2020 11:25:05 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY63E-0002SJ-Nj; Thu, 29 Oct 2020 11:25:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY63E-0001gh-FC; Thu, 29 Oct 2020 11:25:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kY63E-00089O-EJ; Thu, 29 Oct 2020 11:25:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kY63L-0001TE-Td
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 11:25:11 +0000
X-Inumbo-ID: 7b9b3b50-41cb-431c-8d28-515955dd2648
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7b9b3b50-41cb-431c-8d28-515955dd2648;
	Thu, 29 Oct 2020 11:25:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qjio7EDTPqNvGlaW6CX2DaFSwVFZJC4vL8ma3J9oEu0=; b=ESV+yiqmypuo/aQh4YQUZMJjFo
	Our5YLYZvOTCozVKigZJqgLo3v99GeXVgGW395moxvNNHvwJUUp0mBLdu4ZWl/zsicCMJkeMc1ZXj
	xkh9HtCi8rMbo7TKVm8NW6oqCfXuFzrV+l3Xy5o0G6Lk1Un9roBziEq6sEuOZwzdmVMw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY63E-0002SJ-Nj; Thu, 29 Oct 2020 11:25:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY63E-0001gh-FC; Thu, 29 Oct 2020 11:25:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kY63E-00089O-EJ; Thu, 29 Oct 2020 11:25:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156270-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156270: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3b87d728742fe58f427f4b775b11250e29d75cc6
X-Osstest-Versions-That:
    ovmf=eb520b93d279e901a593c57e30649fb08f4290c5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 11:25:04 +0000

flight 156270 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156270/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3b87d728742fe58f427f4b775b11250e29d75cc6
baseline version:
 ovmf                 eb520b93d279e901a593c57e30649fb08f4290c5

Last test of basis   156255  2020-10-27 09:04:36 Z    2 days
Testing same since   156270  2020-10-28 03:11:01 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   eb520b93d2..3b87d72874  3b87d728742fe58f427f4b775b11250e29d75cc6 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 12:09:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 12:09:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14149.35128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY6jr-00055n-JV; Thu, 29 Oct 2020 12:09:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14149.35128; Thu, 29 Oct 2020 12:09:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY6jr-00055g-GC; Thu, 29 Oct 2020 12:09:07 +0000
Received: by outflank-mailman (input) for mailman id 14149;
 Thu, 29 Oct 2020 12:09:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY6jp-00055b-9a
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 12:09:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b15f3a07-ee21-4339-abf3-410d790a9ebd;
 Thu, 29 Oct 2020 12:09:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0EA9CAF16;
 Thu, 29 Oct 2020 12:09:03 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kY6jp-00055b-9a
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 12:09:05 +0000
X-Inumbo-ID: b15f3a07-ee21-4339-abf3-410d790a9ebd
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b15f3a07-ee21-4339-abf3-410d790a9ebd;
	Thu, 29 Oct 2020 12:09:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603973343;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zii9mTPE1zW7eEBBIVzvf2Bp7xuyhsskf/aoQB8pkGE=;
	b=PsNw63nHwExSB8uoKjP6nop1Jaqr0AZ61bjYOBQQdCBv4Z4lNSoqf+mXV/ULPOuWXydEt3
	/OZTnlZ6wweXbt23gev7Dc8AdkPM1WDm78STdso4UyLzMZbD7GlQTCusE6O6wLlGzD5fSQ
	yHmhNvFixuofHO35qtJ67OsYd06yf18=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 0EA9CAF16;
	Thu, 29 Oct 2020 12:09:03 +0000 (UTC)
Subject: Re: [PATCH v1] libacpi: use temporary files for generated files
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Olaf Hering <olaf@aepfle.de>
References: <20201026204151.23459-1-olaf@aepfle.de>
 <68312718-c8ad-040b-be45-192d2c91ba8f@suse.com>
 <20201027112703.24d55a50.olaf@aepfle.de>
 <bc7a5e73-af27-45ae-5d82-f53176cd43a9@suse.com>
 <24025dd2-2c61-7e92-a9b1-2433eea2e909@citrix.com>
 <3880bcbd-9281-10a5-7de5-f73bcf74557a@suse.com>
 <20201028181344.GA273696@perard.uk.xensource.com>
 <4597e596-822e-041a-83de-9fcbfbd03933@suse.com>
 <20201029105715.GG2214@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <13929147-40b5-fd67-5587-7b44eb8b12cd@suse.com>
Date: Thu, 29 Oct 2020 13:09:03 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201029105715.GG2214@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.10.2020 11:57, Anthony PERARD wrote:
> On Thu, Oct 29, 2020 at 09:47:09AM +0100, Jan Beulich wrote:
>> On 28.10.2020 19:13, Anthony PERARD wrote:
>>> On Tue, Oct 27, 2020 at 12:06:56PM +0100, Jan Beulich wrote:
>>>> On 27.10.2020 11:57, Andrew Cooper wrote:
>>>>> On 27/10/2020 10:37, Jan Beulich wrote:
>>>>>> On 27.10.2020 11:27, Olaf Hering wrote:
>>>>>>> Am Tue, 27 Oct 2020 11:16:04 +0100
>>>>>>> schrieb Jan Beulich <jbeulich@suse.com>:
>>>>>>>
>>>>>>>> This pattern is used when a rule consists of multiple commands
>>>>>>>> having their output appended to one another's.
>>>>>>> My understanding is: a rule is satisfied as soon as the file exists.
>>>>>> No - once make has found that a rule's commands need running, it'll
>>>>>> run the full set and only check again afterwards.
>>>>>
>>>>> It stops at the first command which fails.
>>>>>
>>>>> Olaf is correct, but the problem here is an incremental build issue, not
>>>>> a parallel build issue.
>>>>>
>>>>> Intermediate files must not use the name of the target, or a failure and
>>>>> re-build will use the (bogus) intermediate state rather than rebuilding it.
>>>>
>>>> But there's no intermediate file here - the file gets created in one
>>>> go. Furthermore, doesn't make delete the target file(s) when a rule
>>>> fails? (One may not want to rely on this, and hence indeed have multi-
>>>> part rules update intermediate files of different names.)
>>>
>>> What if something went badly enough that sed didn't manage to write the
>>> whole file and make never had a chance to remove the bogus file?
>>
>> How's this different from an object file getting only partly written
>> and not deleted? We'd have to use the temporary file approach in
>> literally every rule if we wanted to cater for this.
> 
> I thought that things like `gcc' would write the final object to a
> temporary place and then rename it to the final destination, but that
> doesn't seem to be the case.
> 
> I tried to see what happens if the `sed' command fails: the target is
> created, empty, and doesn't get deleted by make. So an incremental
> build uses the broken file without trying to rebuild it.

IOW it's rather a courtesy of the compiler / assembler / linker
to delete their output files on error.

> If we want `make' to delete the target when a rule fails, I think we need to
> add '.DELETE_ON_ERROR:' somewhere.

Ah, indeed. I thought this was the default nowadays, but the doc
says it isn't. I think this would be preferable over touching
individual rules to go through temporary files.
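The per-rule alternative being weighed here, having each rule write to a
differently named intermediate and only rename it on success, might look
like this (hypothetical tool and file names, not from the actual patch):

```make
# Write to a temporary name first; mv within the same filesystem is
# atomic, so a failure in the generator never leaves a partial
# 'generated.h' for an incremental build to pick up.
generated.h: template.in
	gen-tool $< > $@.tmp
	mv -f $@.tmp $@
```

The '.DELETE_ON_ERROR:' approach covers every rule in one place, while
this pattern has to be repeated in each rule individually.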

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 13:05:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 13:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14159.35146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY7cQ-0001m4-Ug; Thu, 29 Oct 2020 13:05:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14159.35146; Thu, 29 Oct 2020 13:05:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY7cQ-0001lx-Qr; Thu, 29 Oct 2020 13:05:30 +0000
Received: by outflank-mailman (input) for mailman id 14159;
 Thu, 29 Oct 2020 13:05:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY7cQ-0001ls-1o
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 13:05:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfa4114d-9e52-4e67-92ab-7e99f77221a3;
 Thu, 29 Oct 2020 13:05:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB72EAE63;
 Thu, 29 Oct 2020 13:05:27 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kY7cQ-0001ls-1o
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 13:05:30 +0000
X-Inumbo-ID: dfa4114d-9e52-4e67-92ab-7e99f77221a3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dfa4114d-9e52-4e67-92ab-7e99f77221a3;
	Thu, 29 Oct 2020 13:05:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603976728;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WZG7lWK1DTImLm3namvGtYDcL9nQhEHL5orYTN2wH3g=;
	b=QxISMQeJp9E/9+HaCR1VwRZJZt2c02rkSf4g9kqmX0Sry9MolT7xM1n0XoTnMwJGRqlozg
	459zVLF/onrlYAiuYYGS8GjS0/C8HQYIinUNYT9hFv091PL7XtQqhY/mOWbFj68kBMBhXI
	+sn0sfHYje6/hgw1be1x8Lex7hgRNpM=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id EB72EAE63;
	Thu, 29 Oct 2020 13:05:27 +0000 (UTC)
Subject: Re: [PATCH v3] x86/pv: inject #UD for entirely missing SYSCALL
 callbacks
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <0e76675b-c549-128e-449f-0c7a4df64f11@suse.com>
 <0ac0f006-c529-2437-4286-62158c2c491b@citrix.com>
 <1fca003e-72d4-2f56-3180-6c39ba123a99@suse.com>
Message-ID: <5f1e9751-c7b5-3bd8-a01e-3ef4f86ae79e@suse.com>
Date: Thu, 29 Oct 2020 14:05:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1fca003e-72d4-2f56-3180-6c39ba123a99@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.10.2020 09:53, Jan Beulich wrote:
> On 28.10.2020 22:31, Andrew Cooper wrote:
>> On 26/10/2020 09:40, Jan Beulich wrote:
>>> In the case that no 64-bit SYSCALL callback is registered, the guest
>>> will be crashed when 64-bit userspace executes a SYSCALL instruction,
>>> which would be a userspace => kernel DoS.  Similarly for 32-bit
>>> userspace when no 32-bit SYSCALL callback was registered either.
>>>
>>> This has been the case ever since the introduction of 64bit PV support,
>>> but behaves unlike all other SYSCALL/SYSENTER callbacks in Xen, which
>>> yield #GP/#UD in userspace before the callback is registered, and are
>>> therefore safe by default.
>>>
>>> This change does constitute a change in the PV ABI, for the corner case
>>> of a PV guest kernel not registering a 64-bit callback (which has to be
>>> considered a defacto requirement of the unwritten PV ABI, considering
>>> there is no PV equivalent of EFER.SCE).
>>>
>>> It brings the behaviour in line with PV32 SYSCALL/SYSENTER, and PV64
>>> SYSENTER (safe by default, until explicitly enabled).
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Signed-off-by: Jan Beulich <JBeulich@suse.com>
>>> ---
>>> v3:
>>>  * Split this change off of "x86/pv: Inject #UD for missing SYSCALL
>>>    callbacks", to allow the uncontroversial part of that change to go
>>>    in. Add conditional "rex64" for UREGS_rip adjustment. (Is branching
>>>    over just the REX prefix too much trickery even for an unlikely to be
>>>    taken code path?)
>>
>> I find this submission confusing, seeing as my v3 is already suitably
>> acked and ready to commit. It's just that I haven't yet had enough free
>> time to do so.
> 
> My objection to the other half stands and hence, I'm afraid, stands
> in the way of your patch getting committed. AIUI Roger's ack doesn't
> invalidate my objection, sorry.

Actually, I realize now that I said earlier I'd be okay with this going
in if Roger acked it. I take it that Roger was aware of the controversy
when providing his ack. Therefore I'd like to take back what I said
above, and your v3 ought to be okay to get committed.

As to backporting - I'd surely be taking the split-off part here, but
I have to admit I'm hesitant to take the full change. IOW, splitting
the two changes might still be helpful.

Jan
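
[Editor's note: for context, the 64-bit SYSCALL callback under discussion
is the one a PV guest kernel registers at boot through Xen's callback
hypercall. A sketch of that registration, modelled loosely on Linux's
arch/x86/xen/setup.c and the names in Xen's public/callback.h; the entry
symbol and error handling here are illustrative assumptions, not the
actual patch under review:

/* Sketch only: how a 64-bit PV kernel registers its SYSCALL entry
 * point with Xen.  CALLBACKTYPE_syscall, CALLBACKOP_register and
 * struct callback_register come from Xen's public callback interface;
 * entry_SYSCALL_64 stands in for the kernel's real entry symbol. */
#include <xen/interface/callback.h>

extern void entry_SYSCALL_64(void);  /* kernel's 64-bit syscall entry */

static int register_syscall_callback(void)
{
    struct callback_register cb = {
        .type    = CALLBACKTYPE_syscall,
        .address = (unsigned long)entry_SYSCALL_64,
        .flags   = CALLBACKF_mask_events,
    };

    /* Before this registration, a guest-userspace SYSCALL crashed the
     * domain; the change discussed above makes it raise #UD instead,
     * matching the other SYSCALL/SYSENTER callbacks. */
    return HYPERVISOR_callback_op(CALLBACKOP_register, &cb);
}
]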


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 13:38:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 13:38:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14164.35158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY87e-0004R4-DE; Thu, 29 Oct 2020 13:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14164.35158; Thu, 29 Oct 2020 13:37:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY87e-0004Qx-9z; Thu, 29 Oct 2020 13:37:46 +0000
Received: by outflank-mailman (input) for mailman id 14164;
 Thu, 29 Oct 2020 13:37:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/pEi=EE=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kY87c-0004Qs-Gy
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 13:37:44 +0000
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eee399a0-b9f9-4d71-a471-c91918496340;
 Thu, 29 Oct 2020 13:37:42 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id a9so3331724lfc.7
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 06:37:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=YfRB8FQmwaiiY6xcRF4WpyWAS6wFDwuQyJf55DguVeI=;
        b=Fn544HyQ56FmED4VrrCTbRHrewY2uEkVSqzaOy/OKq87AbdPUFNUyYuUf/nm08B18F
         iio61xHYwW+0NlEBF+CzvBvhgYlXgOWybJHVIZoxTGgPyVXLGTrW9laQpieRFWS1efHN
         98NTlo3NyEXCCSY2B2kvZHKdjFr1y+vIRntqxydoyBb4SFvm+DZgT+SDoHafT9QMrEGd
         +330fhVygOyqlkdKj+YZ9sWNUgIICw5gLXA4G9I1yhnE8kRphXrZ9CgJHUH7AKeN9WW2
         QOXuca2P4poUhCZ/oNkEDcMIaO3ERXDygL++4LhCSvNvwcF+Fff7OeRQdXA5+MQIVC/9
         yr0g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=YfRB8FQmwaiiY6xcRF4WpyWAS6wFDwuQyJf55DguVeI=;
        b=SxkMltpS5JPPpXm/TlfMaRhhYzFyHpC8LK+Eivlco2BpYqJifnj+VQOWhtdVQosL0N
         r69zQJRdLAwhWlLPzgc2wkKI31IwdHl1g9tMaoeUVBF3GjaXNK/D6IQYj4U0dKcXAMll
         YjeO/KgOOxZkiU/7/B5mMiMwXkoePwMOX2Ed6jmUORKvTKf21Qm6lv4a7qPEt6GVw8C+
         E00X5FXbmNeYAnp71L9hkCC0ZEZYvqUKEfmbrgO7jt15PN3+Ea5JKSdnxQCTEdlWCaLv
         MByZ9w7I/z73xqxwiA1sBLRQJZT7ZfSuKEfDPkLEVYR4JIHLRaAb1apUqWLiROhaiQuX
         0hSw==
X-Gm-Message-State: AOAM530rIY6gPoWHiQ/lbMymYcMGGsEpJ1K1ZmDI+jlT2Au9lWwx5z+z
	Xv8hZpvcEmJrq8vlzbEEl3yHNhvXcYjcyYbqO0c=
X-Google-Smtp-Source: ABdhPJx8m++592/7jOeOjKwbpWqCnQXCFYegDVTkjfLC3umw/45ChrDJ7JUU2KiPV7DSMa6Ey9rVMi+ytX8h8FVXoMk=
X-Received: by 2002:ac2:455c:: with SMTP id j28mr1530509lfm.320.1603978661669;
 Thu, 29 Oct 2020 06:37:41 -0700 (PDT)
MIME-Version: 1.0
References: <20201028174406.23424-1-alex.bennee@linaro.org>
 <alpine.DEB.2.21.2010281406080.12247@sstabellini-ThinkPad-T480s> <87d011mjuw.fsf@linaro.org>
In-Reply-To: <87d011mjuw.fsf@linaro.org>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 29 Oct 2020 09:37:29 -0400
Message-ID: <CAKf6xpsYorMRYpuPcb8B1zVWz3GHgZZwF+NPVA=nFL2Tr13hqQ@mail.gmail.com>
Subject: Re: [PATCH] meson.build: fix building of Xen support for aarch64
To: =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, QEMU <qemu-devel@nongnu.org>, 
	xen-devel <xen-devel@lists.xenproject.org>, 
	Masami Hiramatsu <masami.hiramatsu@linaro.org>, Anthony Perard <anthony.perard@citrix.com>, 
	Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Oct 29, 2020 at 6:01 AM Alex Bennée <alex.bennee@linaro.org> wrote:
>
>
> Stefano Stabellini <sstabellini@kernel.org> writes:
>
> > On Wed, 28 Oct 2020, Alex Bennée wrote:
> >> Xen is supported on aarch64 although weirdly using the i386-softmmu
> >> model. Checking based on the host CPU meant we never enabled Xen
> >> support. It would be nice to enable CONFIG_XEN for aarch64-softmmu to
> >> make it not seem weird but that will require further build surgery.
> >>
> >> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> >> Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>
> >> Cc: Stefano Stabellini <sstabellini@kernel.org>
> >> Cc: Anthony Perard <anthony.perard@citrix.com>
> >> Cc: Paul Durrant <paul@xen.org>
> >> Fixes: 8a19980e3f ("configure: move accelerator logic to meson")
> >> ---
> >>  meson.build | 2 ++
> >>  1 file changed, 2 insertions(+)
> >>
> >> diff --git a/meson.build b/meson.build
> >> index 835424999d..f1fcbfed4c 100644
> >> --- a/meson.build
> >> +++ b/meson.build
> >> @@ -81,6 +81,8 @@ if cpu in ['x86', 'x86_64']
> >>      'CONFIG_HVF': ['x86_64-softmmu'],
> >>      'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
> >>    }
> >> +elif cpu in [ 'arm', 'aarch64' ]
> >> +  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
> >>  endif
> >
> > This looks very reasonable -- the patch makes sense.

A comment would be useful to explain that Arm needs i386-softmmu for
the xenpv machine.
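
[Editor's note: a sketch of what such a commented hunk might look like
in meson.build; the comment wording is hypothetical, the code mirrors
the patch above:

elif cpu in ['arm', 'aarch64']
  # Xen on Arm is driven through the i386-softmmu binary (the xenpv
  # machine), so enable CONFIG_XEN for that target rather than keying
  # off the host CPU family.
  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
endif
]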

> >
> > However I have two questions, mostly for my own understanding. I tried
> > to repro the aarch64 build problem but it works at my end, even without
> > this patch.
>
> Building on a x86 host or with cross compiler?
>
> > I wonder why. I suspect it works thanks to these lines in
> > meson.build:

I think it's a runtime problem rather than a build problem.  In osstest,
Xen support was detected and configured, but CONFIG_XEN wasn't set for
Arm.  So at runtime xen_available() returns 0, and QEMU doesn't start,
failing with "qemu-system-i386: -xen-domid 1: Option not supported for
this target".

I posted my investigation here:
https://lore.kernel.org/xen-devel/CAKf6xpss8KpGOvZrKiTPz63bhBVbjxRTYWdHEkzUo2q1KEMjhg@mail.gmail.com/

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 13:40:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 13:40:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14168.35170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8A5-0005GN-RG; Thu, 29 Oct 2020 13:40:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14168.35170; Thu, 29 Oct 2020 13:40:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8A5-0005GG-O9; Thu, 29 Oct 2020 13:40:17 +0000
Received: by outflank-mailman (input) for mailman id 14168;
 Thu, 29 Oct 2020 13:40:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY8A4-0005GA-Kj
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 13:40:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e56244f9-637f-432b-9926-a2ae2319acb5;
 Thu, 29 Oct 2020 13:40:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F2333AD5F;
 Thu, 29 Oct 2020 13:40:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603978815;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=q1Bpm8q+ni/LHNqViyLzcib+SJg2pmfkMPF/cyhDaY0=;
	b=LvC1tcQQM3Qi52PjN8fRYqtdvymULGqB22kJgLpoggxRZvpR00J8JnoSk6s+PyBV/23awh
	wSn4Y9nigJ1Ld7fjHqnyeDkax8kCQPxYppZM1kQDAK80/Hmt0RbTHo26NWjYVPCiLTiS21
	tHT0bf4EruNg6EdqP5EkLuRVbmfY1d0=
Subject: Ping: [PATCH v3 0/3] x86: shim building adjustments (plus shadow
 follow-on)
From: Jan Beulich <jbeulich@suse.com>
To: Tim Deegan <tim@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Message-ID: <73ec8762-d7c4-c46f-b0bf-f40b89377312@suse.com>
Date: Thu, 29 Oct 2020 14:40:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Tim,

On 19.10.2020 10:42, Jan Beulich wrote:
> The change addressing the shadow-related build issue noticed by
> Andrew went in already. The build breakage goes beyond this
> specific combination though - PV_SHIM_EXCLUSIVE plus HVM is
> similarly an issue. This is what the 1st patch tries to take care
> of, in a shape already noticed on IRC to be controversial. I'm
> submitting the change nevertheless, because for the moment there
> appears to be a majority in favor of going this route. One argument
> not voiced there yet: what good does it do to allow a user to
> enable HVM when they still can't run HVM guests on the resulting
> hypervisor (it still being a dedicated PV shim)? On top of this,
> the alternative approach is likely going to get ugly.
> 
> The shadow-related adjustments are here merely because the desire
> to make them was noticed in the context of the patch which has
> already gone in.
> 
> 1: don't permit HVM and PV_SHIM_EXCLUSIVE at the same time
> 2: refactor shadow_vram_{get,put}_l1e()
> 3: sh_{make,destroy}_monitor_table() are "even more" HVM-only

unless you tell me otherwise, I intend to commit the latter two
with Roger's acks some time next week. Of course it would still
be nice to know your view on the first of the TBDs in patch 3
(which may result in further changes, in a follow-up).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 13:41:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 13:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14173.35182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8Be-0005Ot-6n; Thu, 29 Oct 2020 13:41:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14173.35182; Thu, 29 Oct 2020 13:41:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8Be-0005Om-3m; Thu, 29 Oct 2020 13:41:54 +0000
Received: by outflank-mailman (input) for mailman id 14173;
 Thu, 29 Oct 2020 13:41:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kY8Bc-0005Oh-9W
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 13:41:52 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ef154d6-a335-4d8a-9987-4a115bb7ed14;
 Thu, 29 Oct 2020 13:41:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY8BZ-0005GG-Pz; Thu, 29 Oct 2020 13:41:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kY8BZ-0000ks-Ga; Thu, 29 Oct 2020 13:41:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kY8BZ-0007ut-DS; Thu, 29 Oct 2020 13:41:49 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WNTAaKMVh2YSTwe/5Ch1tpbi0BabpfLFZJEigGn6jhs=; b=bYKKXrsscKkmDr/1Eb2fr+5ppO
	8I0t3g/b/Go9UKU8o/flJhoAE8u1Odk8yeZD3QuHS+GngjMb1t54cDPoetZOaBSPwXkhvNc+tSIQp
	vMsikdq9zRt65tjvqiqE3RP6g24srWfHDN7lRBR/IQTFXY35p1+grVgwXtquBKFeo8qA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156269-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156269: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ed8780e3f2ecc82645342d070c6b4e530532e680
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 13:41:49 +0000

flight 156269 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156269/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10   fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ed8780e3f2ecc82645342d070c6b4e530532e680
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   89 days
Failing since        152366  2020-08-01 20:49:34 Z   88 days  150 attempts
Testing same since   156269  2020-10-28 01:11:59 Z    1 days    1 attempts

------------------------------------------------------------
3380 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 641685 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 13:47:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 13:47:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14180.35197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8HE-0005e6-0u; Thu, 29 Oct 2020 13:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14180.35197; Thu, 29 Oct 2020 13:47:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8HD-0005dz-Tj; Thu, 29 Oct 2020 13:47:39 +0000
Received: by outflank-mailman (input) for mailman id 14180;
 Thu, 29 Oct 2020 13:47:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY8HD-0005du-2q
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 13:47:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a29eaeca-5f5d-4913-9ec7-207c03daf668;
 Thu, 29 Oct 2020 13:47:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 439DAAC12;
 Thu, 29 Oct 2020 13:47:37 +0000 (UTC)
X-Inumbo-ID: a29eaeca-5f5d-4913-9ec7-207c03daf668
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603979257;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=LQIujlwoAnCz3OyXR+oavnADLbcJ+AWZBHdl8qVnZIw=;
	b=twlFJEfSy56lqJhjWE0r7bvN28nioCUfMRInHGSE0v9nYPLldryfvRKCL6HoMF6Hv7VGPA
	szsnYerJUWEEjiizYpI5SaPsV8LXmsK375gwpF/g+fVrQ0gp0Mu1I6ZeQ6CwSBTyWkdHVi
	6d8iWZIKttJuqRlttoBUy6h/I1otvNE=
Subject: Ping: [PATCH 2/2] tools/libs: fix uninstall rule for header files
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
 <74c629db-0f63-aba0-f294-9668c29b8f70@suse.com>
Message-ID: <952aa284-9112-a128-db88-4a2e89d8c8ef@suse.com>
Date: Thu, 29 Oct 2020 14:47:37 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <74c629db-0f63-aba0-f294-9668c29b8f70@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.10.2020 09:21, Jan Beulich wrote:
> This again was working right only as long as $(LIBHEADER) consisted of
> just one entry.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

While patch 1 has become irrelevant with Juergen's more complete
change, this one is still applicable afaict.

Jan

> ---
> An alternative would be to use $(addprefix ) without any shell loop.
> 
> --- a/tools/libs/libs.mk
> +++ b/tools/libs/libs.mk
> @@ -107,7 +107,7 @@ install: build
>  .PHONY: uninstall
>  uninstall:
>  	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/$(LIB_FILE_NAME).pc
> -	for i in $(LIBHEADER); do rm -f $(DESTDIR)$(includedir)/$(LIBHEADER); done
> +	for i in $(LIBHEADER); do rm -f $(DESTDIR)$(includedir)/$$i; done
>  	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so
>  	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so.$(MAJOR)
>  	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)
> 
> 
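The one-character fix above can be emulated outside of make. This is a hypothetical Python sketch (header names invented, not the real $(LIBHEADER) contents) of why expanding $(LIBHEADER) inside the loop body removed the wrong paths on every iteration, while $$i reaches the shell as $i:

```python
# Hypothetical emulation of the make-expansion bug; header names invented.
libheader = ["xenctrl.h", "xenguest.h"]

# Broken rule: make substituted the whole $(LIBHEADER) list into every
# iteration, so only the first entry got the include-dir prefix and the
# rest were removed relative to the working directory - on each pass.
broken = ["rm -f include/" + " ".join(libheader) for _ in libheader]

# Fixed rule: $$i survives make expansion as the shell variable $i and
# names exactly one header per iteration.
fixed = ["rm -f include/" + i for i in libheader]

print(broken)
print(fixed)
```

The same effect is why Jan's noted alternative, $(addprefix ) with no shell loop at all, would also work: the prefix would then be applied to each list entry at make time.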



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 13:48:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 13:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14183.35209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8Hr-0005m5-9w; Thu, 29 Oct 2020 13:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14183.35209; Thu, 29 Oct 2020 13:48:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8Hr-0005ly-6n; Thu, 29 Oct 2020 13:48:19 +0000
Received: by outflank-mailman (input) for mailman id 14183;
 Thu, 29 Oct 2020 13:48:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY8Hp-0005lr-LG
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 13:48:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54443f21-8ce5-473f-baeb-f5eae923dbde;
 Thu, 29 Oct 2020 13:48:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 18097AC12;
 Thu, 29 Oct 2020 13:48:16 +0000 (UTC)
X-Inumbo-ID: 54443f21-8ce5-473f-baeb-f5eae923dbde
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603979296;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b+W3OIBwFQnB56lPFhfRRpxuzw3xbNI8cjlGnjdVkuM=;
	b=qn3EYK6oPq6k0BdN7UTWfQ49fcE9w+m1mdkOs++0ONj9VXb5nkbbA/OZ6cM6XZ/xid6UXm
	PMhJ4TdxpDCK1bP0afIcQpXr3bPSoUDyN1v3qe3psWdPqRgETQDs39S8NO4JVG0mdjUJeC
	M8ueGSA1LyEMjuKUKr4hZPF/ZkH/YTU=
Subject: Ping: [PATCH] tools/python: pass more -rpath-link options to ld
From: Jan Beulich <jbeulich@suse.com>
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d10bb94f-c572-6977-40a4-57a61da4094b@suse.com>
Message-ID: <f449285a-aa96-85a7-59d2-0a9dc6cc50b4@suse.com>
Date: Thu, 29 Oct 2020 14:48:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <d10bb94f-c572-6977-40a4-57a61da4094b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.10.2020 10:31, Jan Beulich wrote:
> With the split of libraries, I've observed a number of warnings from
> (old?) ld.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Marek?

> ---
> It's unclear to me whether this is ld version dependent - the pattern
> of where I've seen such warnings doesn't suggest a clear version
> dependency.
> 
> --- a/tools/python/setup.py
> +++ b/tools/python/setup.py
> @@ -7,10 +7,15 @@ XEN_ROOT = "../.."
>  extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
>  
>  PATH_XEN      = XEN_ROOT + "/tools/include"
> +PATH_LIBXENTOOLCORE = XEN_ROOT + "/tools/libs/toolcore"
>  PATH_LIBXENTOOLLOG = XEN_ROOT + "/tools/libs/toollog"
> +PATH_LIBXENCALL = XEN_ROOT + "/tools/libs/call"
>  PATH_LIBXENEVTCHN = XEN_ROOT + "/tools/libs/evtchn"
> +PATH_LIBXENGNTTAB = XEN_ROOT + "/tools/libs/gnttab"
>  PATH_LIBXENCTRL = XEN_ROOT + "/tools/libs/ctrl"
>  PATH_LIBXENGUEST = XEN_ROOT + "/tools/libs/guest"
> +PATH_LIBXENDEVICEMODEL = XEN_ROOT + "/tools/libs/devicemodel"
> +PATH_LIBXENFOREIGNMEMORY = XEN_ROOT + "/tools/libs/foreignmemory"
>  PATH_XENSTORE = XEN_ROOT + "/tools/libs/store"
>  
>  xc = Extension("xc",
> @@ -24,7 +29,13 @@ xc = Extension("xc",
>                 library_dirs       = [ PATH_LIBXENCTRL, PATH_LIBXENGUEST ],
>                 libraries          = [ "xenctrl", "xenguest" ],
>                 depends            = [ PATH_LIBXENCTRL + "/libxenctrl.so", PATH_LIBXENGUEST + "/libxenguest.so" ],
> -               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],
> +               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENCALL,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENDEVICEMODEL,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENEVTCHN,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENFOREIGNMEMORY,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENGNTTAB,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENTOOLCORE,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],
>                 sources            = [ "xen/lowlevel/xc/xc.c" ])
>  
>  xs = Extension("xs",
> @@ -33,6 +44,7 @@ xs = Extension("xs",
>                 library_dirs       = [ PATH_XENSTORE ],
>                 libraries          = [ "xenstore" ],
>                 depends            = [ PATH_XENSTORE + "/libxenstore.so" ],
> +               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLCORE ],
>                 sources            = [ "xen/lowlevel/xs/xs.c" ])
>  
>  plat = os.uname()[0]
> 
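As a sketch only (directory names taken from the patch above, following setup.py's XEN_ROOT convention), the repetitive -Wl,-rpath-link flags could equally be generated rather than spelled out one per line:

```python
# Sketch, not the actual patch: build one -Wl,-rpath-link flag per
# split in-tree library directory. Names are copied from the patch.
XEN_ROOT = "../.."

SPLIT_LIBS = ["call", "devicemodel", "evtchn", "foreignmemory",
              "gnttab", "toolcore", "toollog"]

extra_link_args = ["-Wl,-rpath-link=" + XEN_ROOT + "/tools/libs/" + d
                   for d in SPLIT_LIBS]

print(extra_link_args[0])
```

Note that -rpath-link only tells ld where to resolve transitive shared-object dependencies at link time; unlike -rpath it embeds nothing in the resulting binary, which is why pointing it at build-tree paths is harmless.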



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 13:54:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 13:54:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14189.35221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8NC-0006gK-Uj; Thu, 29 Oct 2020 13:53:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14189.35221; Thu, 29 Oct 2020 13:53:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8NC-0006gD-Rh; Thu, 29 Oct 2020 13:53:50 +0000
Received: by outflank-mailman (input) for mailman id 14189;
 Thu, 29 Oct 2020 13:53:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY8NB-0006fU-J3
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 13:53:49 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c8941dbe-b15e-406d-abae-65774beb4612;
 Thu, 29 Oct 2020 13:53:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 962FDAFB1;
 Thu, 29 Oct 2020 13:53:47 +0000 (UTC)
X-Inumbo-ID: c8941dbe-b15e-406d-abae-65774beb4612
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603979627;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qcmxnvRUtgd5pwtZDJYRsRYYJtUit4eLtWDDXvSIsZA=;
	b=TyGlzbCkfrF50ZvWzg7t9ja1xNxsHc92k1r9iD/dDoAeBYlmV054eEvVjnpLSs2Dx+RHV3
	HsPqYAayMtG2EpBnDEIfMwVncrlWTM8iduIJnGBM0xDtA+yv7qzbi4AHiO70SpJ9JRhCop
	iUZb6ZBXTFvbTOhsDFu5op0IEeEQKvU=
From: Jan Beulich <jbeulich@suse.com>
Subject: =?UTF-8?Q?Ping=c2=b2=3a_=5bPATCH=5d_x86/PV=3a_make_post-migration_p?=
 =?UTF-8?Q?age_state_consistent?=
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
References: <f7ed53c1-768c-cc71-a432-553b56f7f0a7@suse.com>
Message-ID: <d1e9190e-a3be-8500-41f7-406af18458fc@suse.com>
Date: Thu, 29 Oct 2020 14:53:48 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <f7ed53c1-768c-cc71-a432-553b56f7f0a7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 11.09.2020 12:34, Jan Beulich wrote:
> When a page table page gets de-validated, its type reference count drops
> to zero (and PGT_validated gets cleared), but its type remains intact.
> XEN_DOMCTL_getpageframeinfo3 has therefore, so far, reported the prior
> usage for such pages. An intermediate write to such a page via e.g.
> MMU_NORMAL_PT_UPDATE, however, would transition the page's type to
> PGT_writable_page, thus altering what XEN_DOMCTL_getpageframeinfo3 would
> return. In libxc the decision of which pages to normalize / localize
> depends solely on the type returned from the domctl. As a result, without
> further precautions the guest won't be able to tell whether such a page
> has had its (apparent) PTE entries transitioned to the new MFNs.
> 
> Add a check of PGT_validated, thus consistently avoiding normalization /
> localization in the tool stack.
> 
> Alongside using XEN_DOMCTL_PFINFO_NOTAB instead of plain zero for the
> change at hand, also change the variable's initializer to use this
> constant, too. Take the opportunity and also adjust its type.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I think I did address all questions here.

Jan

> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -215,7 +215,8 @@ long arch_do_domctl(
>  
>          for ( i = 0; i < num; ++i )
>          {
> -            unsigned long gfn = 0, type = 0;
> +            unsigned long gfn = 0;
> +            unsigned int type = XEN_DOMCTL_PFINFO_NOTAB;
>              struct page_info *page;
>              p2m_type_t t;
>  
> @@ -255,6 +256,8 @@ long arch_do_domctl(
>  
>                  if ( page->u.inuse.type_info & PGT_pinned )
>                      type |= XEN_DOMCTL_PFINFO_LPINTAB;
> +                else if ( !(page->u.inuse.type_info & PGT_validated) )
> +                    type = XEN_DOMCTL_PFINFO_NOTAB;
>  
>                  if ( page->count_info & PGC_broken )
>                      type = XEN_DOMCTL_PFINFO_BROKEN;
> 
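The reporting decision after this change can be modeled in isolation. This is a hypothetical standalone sketch - flag values and names are invented for illustration, not Xen's real PGT_*/PGC_*/XEN_DOMCTL_PFINFO_* constants - of how the patched branch masks out a de-validated page table's stale type:

```python
# Hypothetical model of the patched arch_do_domctl reporting chain.
# All flag values below are illustrative, not Xen's real constants.
PGT_pinned = 1 << 0
PGT_validated = 1 << 1
PGC_broken = 1 << 2

PFINFO_NOTAB = 0
PFINFO_L2TAB = 2 << 4
PFINFO_LPINTAB = 1 << 8
PFINFO_BROKEN = 0xF << 4

def report_type(page_type, type_info, count_info):
    """Mirror of the decision chain in the hunk above."""
    rtype = page_type
    if type_info & PGT_pinned:
        rtype |= PFINFO_LPINTAB
    elif not (type_info & PGT_validated):
        # De-validated page tables keep their type bits internally, but
        # are now reported as NOTAB so libxc skips normalization.
        rtype = PFINFO_NOTAB
    if count_info & PGC_broken:
        rtype = PFINFO_BROKEN
    return rtype

# A de-validated L2 table is now reported as NOTAB:
print(report_type(PFINFO_L2TAB, 0, 0))
# A validated one keeps its type:
print(report_type(PFINFO_L2TAB, PGT_validated, 0))
```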



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:01:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:01:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14195.35241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8Uf-0007g1-Qw; Thu, 29 Oct 2020 14:01:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14195.35241; Thu, 29 Oct 2020 14:01:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8Uf-0007fu-Nv; Thu, 29 Oct 2020 14:01:33 +0000
Received: by outflank-mailman (input) for mailman id 14195;
 Thu, 29 Oct 2020 14:01:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hwhy=EE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kY8Ue-0007fp-KE
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:01:32 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b261967-038a-452e-a746-fb7780b2e5e9;
 Thu, 29 Oct 2020 14:01:30 +0000 (UTC)
X-Inumbo-ID: 6b261967-038a-452e-a746-fb7780b2e5e9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603980090;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=0C2T7H5utLXNgmzYD5sBnHV5mV4tc2r1W12ZbIfWW4U=;
  b=Tkicl1PwrkkbCOOJSqTj6ycb00e64Ez3vhN0k9sZAVpQDTIL3BHH3kUl
   YlEzHrt8lw1AcKtSkAEhca3ndvymFK99QcQCYltHGG0DxXJYSyZNBBmRJ
   YfS+l3B92y10o7mTsccX4roSGoJ9g3hSPHM+62GwlgjlzwJv7mYOggvVQ
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: u1wlRsvtVKk7DFfUXO06V3DGPG2mjL+Sawr5OB3JUIpJCRSSOXvUbWc2GjiGog3WStqg+Poslj
 0etr4Z1TuZXgSgGvCU0tpqM8a9II9yE0sKuJV0yJRTpAZlXuVG0a9hOQyt6ekuM4HsAu4Iwr1l
 7/kVipOaE+mEyljWIkdNhRumdLb2RVZ+TPiwPeerS8W7NI3tClAwfZduRuccDT458svBRizfYG
 ifxOZc8WqrfhoVtKAwD/LZhMlc7vmyISpU1xyhJJviScqR11B2gUASKL8xDKkf99Ii33bIyOrx
 bKs=
X-SBRS: None
X-MesageID: 30067431
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,430,1596513600"; 
   d="scan'208";a="30067431"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/pv: Drop stale comment in dom0_construct_pv()
Date: Thu, 29 Oct 2020 14:00:41 +0000
Message-ID: <20201029140041.18343-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This comment has been around since c/s 1372bca0615 in 2004.  It is stale, as
it predates the introduction of struct vcpu.

It is not obvious that it was even correct at the time.  Where a vcpu (domain
at the time) has been configured to run is unrelated to constructing the
domain's initial pagetables, etc.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

Almost...  I'm not entirely sure NUMA memory allocation is plumbed through
correctly, but even that still has nothing to do with v->processor
---
 xen/arch/x86/pv/dom0_build.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index d79503d6a9..f7165309a2 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -616,7 +616,6 @@ int __init dom0_construct_pv(struct domain *d,
         v->arch.pv.event_callback_cs    = FLAT_COMPAT_KERNEL_CS;
     }
 
-    /* WARNING: The new domain must have its 'processor' field filled in! */
     if ( !is_pv_32bit_domain(d) )
     {
         maddr_to_page(mpt_alloc)->u.inuse.type_info = PGT_l4_page_table;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:22:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:22:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14204.35255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8oz-00014Z-Jb; Thu, 29 Oct 2020 14:22:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14204.35255; Thu, 29 Oct 2020 14:22:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8oz-00014S-Gm; Thu, 29 Oct 2020 14:22:33 +0000
Received: by outflank-mailman (input) for mailman id 14204;
 Thu, 29 Oct 2020 14:22:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY8oy-00014N-NW
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:22:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50c411da-509c-4194-af80-5dd487e357ed;
 Thu, 29 Oct 2020 14:22:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 28BAFB92B;
 Thu, 29 Oct 2020 14:22:30 +0000 (UTC)
X-Inumbo-ID: 50c411da-509c-4194-af80-5dd487e357ed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603981350;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1Eg2OQdAP9+U2/kwKuC4NxzO8O+jhIpACCQZnbvM87k=;
	b=hcBnDtyxEIULA2qIuFZp+BZH2H/JTRGgo4vS1EN0zWR8gnVkJUVfHL0CaFzCQwVjxyzMEs
	TCrOCMD49bcOcKo1sqKp/4nin5L6Mp0OfnEdTH0XvlCxZIKgz7dVHR4GZSHqKOC4tdGxJR
	2FfJrkvt2VBUqPjYm3En85sBXOZ/Ys4=
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201022143905.11032-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
Date: Thu, 29 Oct 2020 15:22:30 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201022143905.11032-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 22.10.2020 16:39, Juergen Gross wrote:
> When the host crashes it would sometimes be nice to have additional
> debug data available which could be produced via debug keys, but
> halting the server for manual intervention might be impossible due to
> the need to reboot/kexec rather sooner than later.
> 
> Add support for automatic debug key actions in case of crashes which
> can be activated via boot- or runtime-parameter.

While I can certainly see this being potentially useful in
certain situations, I'm uncertain it's going to be helpful in
at least a fair share of cases. What output to request very much
depends on the nature of the problem one is running into, and
the more keys one adds "just in case", the longer the reboot
latency, and the higher the risk (see also below) of the output
generation itself causing further problems.

IOW I'm neither fully convinced that we want this, nor fully
opposed.

> Depending on the type of crash the desired data might be different, so
> support different settings for the possible types of crashes.
> 
> The parameter is "crash-debug" with the following syntax:
> 
>   crash-debug-<type>=<string>
> 
> with <type> being one of:
> 
>   panic, hwdom, watchdog, kexeccmd, debugkey
> 
> and <string> a sequence of debug key characters with '.' having the
> special semantics of a 1 second pause.

1 second is a whole lot of time. To get two successive sets
of data, a much shorter delay (if any) would normally suffice.

Also, while '.' may seem like a good choice right now, with the
shortage of characters we may want to put a real handler behind
it at some point. The one character that clearly won't make much
sense to use in this context is 'h', but that's awful as a (kind
of) separator. Could we perhaps replace 'h' by '?', freeing up
'h' and allowing '?' to be used for this purpose here?

> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -574,6 +574,29 @@ reduction of features at Xen's disposal to manage guests.
>  ### cpuinfo (x86)
>  > `= <boolean>`
>  
> +### crash-debug-debugkey
> +### crash-debug-hwdom
> +### crash-debug-kexeccmd
> +### crash-debug-panic
> +### crash-debug-watchdog
> +> `= <string>`
> +
> +> Can be modified at runtime
> +
> +Specify debug-key actions in cases of crashes. Each of the parameters applies
> +to a different crash reason. The `<string>` is a sequence of debug key
> +characters, with `.` having the special meaning of a 1 second pause.
> +
> +So e.g. `crash-debug-watchdog=0.0r` would dump dom0 state twice with a
> +second between the two state dumps, followed by the run queues of the
> +hypervisor, if the system crashes due to a watchdog timeout.
> +
> +These parameters should be used carefully, as e.g. specifying
> +`crash-debug-debugkey=C` would result in an endless loop. Depending on the
> +reason of the system crash it might happen that triggering some debug key
> +action will result in a hang instead of dumping data and then doing a
> +reboot or crash dump.

I think it would be useful if the flavors were (briefly)
explained: at the very least "debugkey" doesn't directly fit "in
cases of crashes", as there's no crash involved; kexec_crash()
gets invoked without there having been any crash.

You may also want to point out that this is a best-effort thing
only - system state at the point of a crash may be such that the
attempt to handle one of the debug keys would have further bad
effects on the system, including that the actual kexec may then
never occur.

> @@ -507,6 +509,41 @@ void __init initialize_keytable(void)
>      }
>  }
>  
> +#define CRASHACTION_SIZE  32
> +static char crash_debug_panic[CRASHACTION_SIZE];
> +static char crash_debug_hwdom[CRASHACTION_SIZE];
> +static char crash_debug_watchdog[CRASHACTION_SIZE];
> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
> +static char crash_debug_debugkey[CRASHACTION_SIZE];
> +
> +static char *crash_action[CRASHREASON_N] = {
> +    [CRASHREASON_PANIC] = crash_debug_panic,
> +    [CRASHREASON_HWDOM] = crash_debug_hwdom,
> +    [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
> +    [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
> +    [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
> +};
> +
> +string_runtime_param("crash-debug-panic", crash_debug_panic);
> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);

In general I'm not in favor of this (or anything similar)
needing five new command line options instead of just one. The
problem with e.g.

crash-debug=panic:rq,watchdog:0d

is that ',' (or any other separator chosen) could in principle
also be a debug key. It would still be possible to use

crash-debug=panic:rq crash-debug=watchdog:0d

though. Thoughts?

> +void keyhandler_crash_action(enum crash_reason reason)
> +{
> +    const char *action = crash_action[reason];
> +    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
> +
> +    while ( *action ) {

Misplaced brace.

> +        if ( *action == '.' )
> +            mdelay(1000);
> +        else
> +            handle_keypress(*action, regs);
> +        action++;
> +    }
> +}

I think only diagnostic keys should be permitted here.

> --- a/xen/include/xen/kexec.h
> +++ b/xen/include/xen/kexec.h
> @@ -1,6 +1,8 @@
>  #ifndef __XEN_KEXEC_H__
>  #define __XEN_KEXEC_H__
>  
> +#include <xen/keyhandler.h>

Could we go without this, utilizing the gcc extension of
forward-declared enums? Otoh ...

> @@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
>  #define kexecing 0
>  
>  static inline void kexec_early_calculations(void) {}
> -static inline void kexec_crash(void) {}
> +static inline void kexec_crash(enum crash_reason reason)
> +{
> +    keyhandler_crash_action(reason);
> +}

... if this is to be an inline function and not just a #define,
it'll need the declaration of the function to have been seen.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:23:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:23:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14208.35268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8pw-0001Da-Ur; Thu, 29 Oct 2020 14:23:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14208.35268; Thu, 29 Oct 2020 14:23:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY8pw-0001DT-Re; Thu, 29 Oct 2020 14:23:32 +0000
Received: by outflank-mailman (input) for mailman id 14208;
 Thu, 29 Oct 2020 14:23:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sh/s=EE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kY8pv-0001DO-LS
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:23:31 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0d::60b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d221d285-cb95-43ae-830f-47312e1848ad;
 Thu, 29 Oct 2020 14:23:28 +0000 (UTC)
Received: from AM6PR10CA0076.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:209:8c::17)
 by PA4PR08MB5885.eurprd08.prod.outlook.com (2603:10a6:102:e6::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Thu, 29 Oct
 2020 14:23:26 +0000
Received: from VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8c:cafe::a7) by AM6PR10CA0076.outlook.office365.com
 (2603:10a6:209:8c::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Thu, 29 Oct 2020 14:23:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT037.mail.protection.outlook.com (10.152.19.70) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Thu, 29 Oct 2020 14:23:26 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Thu, 29 Oct 2020 14:23:25 +0000
Received: from 981b874fd139.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 133E8170-7A5E-48D8-BFCC-9438021616E0.1; 
 Thu, 29 Oct 2020 14:22:48 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 981b874fd139.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 29 Oct 2020 14:22:48 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6075.eurprd08.prod.outlook.com (2603:10a6:10:207::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Thu, 29 Oct
 2020 14:22:44 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Thu, 29 Oct 2020
 14:22:44 +0000
X-Inumbo-ID: d221d285-cb95-43ae-830f-47312e1848ad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E61SZG2vUV+Pu0oXHvXKigcaIzgjPeXzQD8dTqxzB9I=;
 b=GCYLMy9bIhrWHZxO5P7FVcCB98cWoNezMAUElQxalqTU5HDv2KQ4Q6BHUwRcTeMVPfGFws8oCfOTXo0UhS5WdfZM3PXBG7fi1X5zr9Hk6obcPgxNu4HmCVlh2VMUtpv+qETBwknIE+YFTxk9btQVrRmq53vXIT9sTe4DQiu+riA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7b66fb0a56fe10a4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BNJTbMm51qS3xPLWKGb6K0+1L6o4n2ZRgdAFjferw+Hq4A34Fh6dFB4cMo6rFkmqfXCPwr8zxHNwoLSQeoK5h2f9+R7n1qQrIXBT4iJNpnC7uIpdRcNutfCvAi9o+b8MehJ1cnPBbsaINUI6h4LJdP8e4VkztYcDwAO1uG5aS7q9wTKtVyFpJNrvmdIdn7HBt3K2ykY0sY90SA+PtgKi9a6Ixx8nYw3nqPOa2TRRWjjscBPfFJhnNu2zHSzMrW2gjrD7LZOCoKNOrdvPS64kQwO/00WQOrB0zfpiQzkmKk62mr6smnFaTYqxHXbg4aindbhMI6ZYw5JH3ed+85Wrug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=E61SZG2vUV+Pu0oXHvXKigcaIzgjPeXzQD8dTqxzB9I=;
 b=dlGueqsXYNLNRMHcy/FXVw5qJH0pedYsIy0FVimW3lbbzk8NP8fYxqmpvJgymP14oy5D13YbSiF/YMjn6SFATI+KLfWIKgK3ANDqn+QaLGoaWlxmNdicuTn/TIbQtUrATzWPl8KQQkOgFyWO4iWxnM5JnVy6qPo94ex8G99wsd/rknCnqjCulwmF+5uSQ2dTJ+i+E0VCfTKAQaWul/ZQPmHIeMYeDGPQlLWuhJnLNppiqwo+Y7eLayZOZKiQX7poRJQ+mT2hqDlHJgBXj0Sso9uFzxqB2KwJ5YfOigD67bWnSEs1VRJ06rw4ui2OmwV7S84/V5Ety4Y4Tj6xDQkAng==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Rahul Singh <Rahul.Singh@arm.com>, Stefano Stabellini
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvyGh0jWzCHpk6w/U7XN+08x6mgw0AAgAEzsQCAAWItgIABA+GAgADBXQCAABfAAIAAGI8AgAAORwCABG97AIADrU2AgAFBfQA=
Date: Thu, 29 Oct 2020 14:22:44 +0000
Message-ID: <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <bc697327-2750-9a78-042d-d45690d27928@xen.org>
In-Reply-To: <bc697327-2750-9a78-042d-d45690d27928@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 16d50ad4-be41-4781-0de0-08d87c163346
x-ms-traffictypediagnostic: DBBPR08MB6075:|PA4PR08MB5885:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<PA4PR08MB5885D29AF5C7025CA0352A649D140@PA4PR08MB5885.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 rXqex44ZzM4ktf+P3axY2pwoNjoYfIFRC0Sv59Or9+8JUjutgYydIYc7+MBaH6//ZNMrTuEBpzx/+v1rPE3Z46M5Vv1s/Feyepne7RHb3q0mNFl4CAMfjNGUPWgNpMdtcorjbGS5jyuVHfRa+Yo7eBvkjg603SIAsaGpv0EZyNKriGC+38Fc1i4wdxCbR80qg65Ax3mytYu0iFs8rxS3Iix5T/Qg1zS1fL/p6hnCmErzT/75j4injrIfNLXhrALu2kLJdBXCpeqnAN+n34pSXqeQIjJg+KdYEp2CR9gL99LCsDk5/XsKqkoUmCJ2LV1z29/XAZdasR7GR1Qbr+1C1w==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(376002)(39850400004)(396003)(346002)(66556008)(66946007)(8936002)(26005)(316002)(5660300002)(8676002)(66476007)(2616005)(91956017)(54906003)(66446008)(6486002)(33656002)(6916009)(86362001)(64756008)(36756003)(186003)(76116006)(6512007)(83380400001)(71200400001)(53546011)(4326008)(6506007)(2906002)(478600001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 LAd3RTHOSgc2pyTp9MX0A2Q7aFuLssqB+68wmeJGAM7M9/giVVvHEDYO4YJTjHUjQI7ujAZPHtZUexzex38SsEKLN4u3RroYLMSNvUXb8NbQIFT3+hoU1A7BlRq4ro3KxUeSIqyYJ7PFIJQ9d7Byqf6+PrPcvbEhsXeTVbNWu9+1hMo1yHJqQ9Q+4XpeNFCBSmG02r6qxeDvyF/1eEwQbO/tS3s4LRw7QElc3+vESz4y43dI00aoeQJIxtyL7v/kxXD9UsDoKXAUGUOVhpQmmoBrAzU6GpGWFbeOlhyq846IVRS8jN6lsZLOyuRu7ehFfYFjQBjWs0LFz14jv9VNMwKQvop5Sezl/QimB1DstrCDNc1rSikYM/amn9e17P0nq2yuY8o59nhW/+U1EzvKMkQvogc9C4YytDdoP0vvLM6o/nNH1Kku5JPAi7o+dnfojIDQr7umqkC6zsQb5nf/orKD/cLTcS5Y0MFLhSf1EXKfH2JO4XVSq7yfykUNUKQOZ8eBL/uH0rQycn3Cu96OFetsTYo06zuwq4nJBF4miVYVTkN/5j3qzEqH7GzO4l9niPDUwy9nDghEbv1FFETSKokxRdtE0mxvuC9yPLRwjGDmrB7Za+DvYYzg7+fzxjFPtokV+8QJzciW1nqaADkMEQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <F268206986925A418AB085BF34BC14A1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6075
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f678471c-d644-4141-3a39-08d87c161a53
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	gPgzupfL4Z6c9gFoajwfi9Z/MvDmrY0HjWNnrreSqJeveSn1IdfUhLbk/XExp7lMHxUx4Iz+5In6hpLdG2KeBBWD51g+clBqGTp+Ps5xA40cq7PXgWn7mOzt8r35EQWlxvr6t0tUHLWtaWZrrt5KJO1AMHpFgyd3bFvwH9qgyftMJGbfItlbTdm84XU5hBLW6HcDsyVQYls0A0mTiBFDduCVBhrGyfDLKkzxsGuntuy9mxNJZhSr0mYPwrZvG1Ktcreu4dkMCi+4x6ATXsEPEcQ9/Z06I+0lzmujZRO5/W20l5iQryEvXSlXJzmvaJkdh5PLfGEVJXFWx+0CaymKKEdaY+IVatZo9WJ6h3D2p0uk7HSinxagfWNF50ofRZvEzvac19zyoCWmFnAp/L5AYA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39850400004)(346002)(136003)(376002)(46966005)(6486002)(356005)(70586007)(70206006)(8676002)(53546011)(36756003)(8936002)(2616005)(478600001)(4326008)(5660300002)(36906005)(316002)(6512007)(54906003)(47076004)(81166007)(33656002)(6506007)(86362001)(336012)(26005)(82740400003)(186003)(83380400001)(107886003)(82310400003)(2906002)(6862004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Oct 2020 14:23:26.2309
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 16d50ad4-be41-4781-0de0-08d87c163346
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB5885

SGkgSnVsaWVuLA0KDQo+IE9uIDI4IE9jdCAyMDIwLCBhdCAxOToxMiwgSnVsaWVuIEdyYWxsIDxq
dWxpZW5AeGVuLm9yZz4gd3JvdGU6DQo+IA0KPiANCj4gDQo+IE9uIDI2LzEwLzIwMjAgMTE6MDMs
IFJhaHVsIFNpbmdoIHdyb3RlOg0KPj4gSGVsbG8gSnVsaWVuLA0KPiANCj4gSGkgUmFodWwsDQo+
IA0KPj4+IE9uIDIzIE9jdCAyMDIwLCBhdCA0OjE5IHBtLCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4
ZW4ub3JnPiB3cm90ZToNCj4+PiANCj4+PiANCj4+PiANCj4+PiBPbiAyMy8xMC8yMDIwIDE1OjI3
LCBSYWh1bCBTaW5naCB3cm90ZToNCj4+Pj4gSGVsbG8gSnVsaWVuLA0KPj4+Pj4gT24gMjMgT2N0
IDIwMjAsIGF0IDI6MDAgcG0sIEp1bGllbiBHcmFsbCA8anVsaWVuQHhlbi5vcmc+IHdyb3RlOg0K
Pj4+Pj4gDQo+Pj4+PiANCj4+Pj4+IA0KPj4+Pj4gT24gMjMvMTAvMjAyMCAxMjozNSwgUmFodWwg
U2luZ2ggd3JvdGU6DQo+Pj4+Pj4gSGVsbG8sDQo+Pj4+Pj4+IE9uIDIzIE9jdCAyMDIwLCBhdCAx
OjAyIGFtLCBTdGVmYW5vIFN0YWJlbGxpbmkgPHNzdGFiZWxsaW5pQGtlcm5lbC5vcmc+IHdyb3Rl
Og0KPj4+Pj4+PiANCj4+Pj4+Pj4gT24gVGh1LCAyMiBPY3QgMjAyMCwgSnVsaWVuIEdyYWxsIHdy
b3RlOg0KPj4+Pj4+Pj4+PiBPbiAyMC8xMC8yMDIwIDE2OjI1LCBSYWh1bCBTaW5naCB3cm90ZToN
Cj4+Pj4+Pj4+Pj4+IEFkZCBzdXBwb3J0IGZvciBBUk0gYXJjaGl0ZWN0ZWQgU01NVXYzIGltcGxl
bWVudGF0aW9ucy4gSXQgaXMgYmFzZWQgb24NCj4+Pj4+Pj4+Pj4+IHRoZSBMaW51eCBTTU1VdjMg
ZHJpdmVyLg0KPj4+Pj4+Pj4+Pj4gTWFqb3IgZGlmZmVyZW5jZXMgYmV0d2VlbiB0aGUgTGludXgg
ZHJpdmVyIGFyZSBhcyBmb2xsb3dzOg0KPj4+Pj4+Pj4+Pj4gMS4gT25seSBTdGFnZS0yIHRyYW5z
bGF0aW9uIGlzIHN1cHBvcnRlZCBhcyBjb21wYXJlZCB0byB0aGUgTGludXggZHJpdmVyDQo+Pj4+
Pj4+Pj4+PiAgICB0aGF0IHN1cHBvcnRzIGJvdGggU3RhZ2UtMSBhbmQgU3RhZ2UtMiB0cmFuc2xh
dGlvbnMuDQo+Pj4+Pj4+Pj4+PiAyLiBVc2UgUDJNICBwYWdlIHRhYmxlIGluc3RlYWQgb2YgY3Jl
YXRpbmcgb25lIGFzIFNNTVV2MyBoYXMgdGhlDQo+Pj4+Pj4+Pj4+PiAgICBjYXBhYmlsaXR5IHRv
IHNoYXJlIHRoZSBwYWdlIHRhYmxlcyB3aXRoIHRoZSBDUFUuDQo+Pj4+Pj4+Pj4+PiAzLiBUYXNr
bGV0cyBpcyB1c2VkIGluIHBsYWNlIG9mIHRocmVhZGVkIElSUSdzIGluIExpbnV4IGZvciBldmVu
dCBxdWV1ZQ0KPj4+Pj4+Pj4+Pj4gICAgYW5kIHByaW9yaXR5IHF1ZXVlIElSUSBoYW5kbGluZy4N
Cj4+Pj4+Pj4+Pj4gDQo+Pj4+Pj4+Pj4+IFRhc2tsZXRzIGFyZSBub3QgYSByZXBsYWNlbWVudCBm
b3IgdGhyZWFkZWQgSVJRLiBJbiBwYXJ0aWN1bGFyLCB0aGV5IHdpbGwNCj4+Pj4+Pj4+Pj4gaGF2
ZSBwcmlvcml0eSBvdmVyIGFueXRoaW5nIGVsc2UgKElPVyBub3RoaW5nIHdpbGwgcnVuIG9uIHRo
ZSBwQ1BVIHVudGlsDQo+Pj4+Pj4+Pj4+IHRoZXkgYXJlIGRvbmUpLg0KPj4+Pj4+Pj4+PiANCj4+
Pj4+Pj4+Pj4gRG8geW91IGtub3cgd2h5IExpbnV4IGlzIHVzaW5nIHRocmVhZC4gSXMgaXQgYmVj
YXVzZSBvZiBsb25nIHJ1bm5pbmcNCj4+Pj4+Pj4+Pj4gb3BlcmF0aW9ucz8NCj4+Pj4+Pj4+PiAN
Cj4+Pj4+Pj4+PiBZZXMgeW91IGFyZSByaWdodCBiZWNhdXNlIG9mIGxvbmcgcnVubmluZyBvcGVy
YXRpb25zIExpbnV4IGlzIHVzaW5nIHRoZQ0KPj4+Pj4+Pj4+IHRocmVhZGVkIElSUXMuDQo+Pj4+
Pj4+Pj4gDQo+Pj4+Pj4+Pj4gU01NVXYzIHJlcG9ydHMgZmF1bHQvZXZlbnRzIGJhc2VzIG9uIG1l
bW9yeS1iYXNlZCBjaXJjdWxhciBidWZmZXIgcXVldWVzIG5vdA0KPj4+Pj4+Pj4+IGJhc2VkIG9u
IHRoZSByZWdpc3Rlci4gQXMgcGVyIG15IHVuZGVyc3RhbmRpbmcsIGl0IGlzIHRpbWUtY29uc3Vt
aW5nIHRvDQo+Pj4+Pj4+Pj4gcHJvY2VzcyB0aGUgbWVtb3J5IGJhc2VkIHF1ZXVlcyBpbiBpbnRl
cnJ1cHQgY29udGV4dCBiZWNhdXNlIG9mIHRoYXQgTGludXgNCj4+Pj4+Pj4+PiBpcyB1c2luZyB0
aHJlYWRlZCBJUlEgdG8gcHJvY2VzcyB0aGUgZmF1bHRzL2V2ZW50cyBmcm9tIFNNTVUuDQo+Pj4+
Pj4+Pj4gDQo+Pj4+Pj4+Pj4gSSBkaWRu4oCZdCBmaW5kIGFueSBvdGhlciBzb2x1dGlvbiBpbiBY
RU4gaW4gcGxhY2Ugb2YgdGFza2xldCB0byBkZWZlciB0aGUNCj4+Pj4+Pj4+PiB3b3JrLCB0aGF0
4oCZcyB3aHkgSSB1c2VkIHRhc2tsZXQgaW4gWEVOIGluIHJlcGxhY2VtZW50IG9mIHRocmVhZGVk
IElSUXMuIElmDQo+Pj4+Pj4+Pj4gd2UgZG8gYWxsIHdvcmsgaW4gaW50ZXJydXB0IGNvbnRleHQg
d2Ugd2lsbCBtYWtlIFhFTiBsZXNzIHJlc3BvbnNpdmUuDQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IFNv
IHdlIG5lZWQgdG8gbWFrZSBzdXJlIHRoYXQgWGVuIGNvbnRpbnVlIHRvIHJlY2VpdmVzIGludGVy
cnVwdHMsIGJ1dCB3ZSBhbHNvDQo+Pj4+Pj4+PiBuZWVkIHRvIG1ha2Ugc3VyZSB0aGF0IGEgdkNQ
VSBib3VuZCB0byB0aGUgcENQVSBpcyBhbHNvIHJlc3BvbnNpdmUuDQo+Pj4+Pj4+PiANCj4+Pj4+
Pj4+PiANCj4+Pj4+Pj4+PiBJZiB5b3Uga25vdyBhbm90aGVyIHNvbHV0aW9uIGluIFhFTiB0aGF0
IHdpbGwgYmUgdXNlZCB0byBkZWZlciB0aGUgd29yayBpbg0KPj4+Pj4+Pj4+IHRoZSBpbnRlcnJ1
cHQgcGxlYXNlIGxldCBtZSBrbm93IEkgd2lsbCB0cnkgdG8gdXNlIHRoYXQuDQo+Pj4+Pj4+PiAN
Cj4+Pj4+Pj4+IE9uZSBvZiBteSB3b3JrIGNvbGxlYWd1ZSBlbmNvdW50ZXJlZCBhIHNpbWlsYXIg
cHJvYmxlbSByZWNlbnRseS4gSGUgaGFkIGEgbG9uZw0KPj4+Pj4+Pj4gcnVubmluZyB0YXNrbGV0
IGFuZCB3YW50ZWQgdG8gYmUgYnJva2VuIGRvd24gaW4gc21hbGxlciBjaHVuay4NCj4+Pj4+Pj4+
IA0KPj4+Pj4+Pj4gV2UgZGVjaWRlZCB0byB1c2UgYSB0aW1lciB0byByZXNjaGVkdWxlIHRoZSB0
YXNsa2V0IGluIHRoZSBmdXR1cmUuIFRoaXMgYWxsb3dzDQo+Pj4+Pj4+PiB0aGUgc2NoZWR1bGVy
IHRvIHJ1biBvdGhlciBsb2FkcyAoZS5nLiB2Q1BVKSBmb3Igc29tZSB0aW1lLg0KPj4+Pj4+Pj4g
DQo+Pj4+Pj4+PiBUaGlzIGlzIHByZXR0eSBoYWNraXNoIGJ1dCBJIGNvdWxkbid0IGZpbmQgYSBi
ZXR0ZXIgc29sdXRpb24gYXMgdGFza2xldCBoYXZlDQo+Pj4+Pj4+PiBoaWdoIHByaW9yaXR5Lg0K
Pj4+Pj4+Pj4gDQo+Pj4+Pj4+PiBNYXliZSB0aGUgb3RoZXIgd2lsbCBoYXZlIGEgYmV0dGVyIGlk
ZWEuDQo+Pj4+Pj4+IA0KPj4+Pj4+PiBKdWxpZW4ncyBzdWdnZXN0aW9uIGlzIGEgZ29vZCBvbmUu
DQo+Pj4+Pj4+IA0KPj4+Pj4+PiBCdXQgSSB0aGluayB0YXNrbGV0cyBjYW4gYmUgY29uZmlndXJl
ZCB0byBiZSBjYWxsZWQgZnJvbSB0aGUgaWRsZV9sb29wLA0KPj4+Pj4+PiBpbiB3aGljaCBjYXNl
IHRoZXkgYXJlIG5vdCBydW4gaW4gaW50ZXJydXB0IGNvbnRleHQ/DQo+Pj4+Pj4+IA0KPj4+Pj4+
ICBZZXMgeW91IGFyZSByaWdodCB0YXNrbGV0IHdpbGwgYmUgc2NoZWR1bGVkIGZyb20gdGhlIGlk
bGVfbG9vcCB0aGF0IGlzIG5vdCBpbnRlcnJ1cHQgY29uZXh0Lg0KPj4+Pj4gDQo+Pj4+PiBUaGlz
IGRlcGVuZHMgb24geW91ciB0YXNrbGV0LiBTb21lIHdpbGwgcnVuIGZyb20gdGhlIHNvZnRpcnEg
Y29udGV4dCB3aGljaCBpcyB1c3VhbGx5IChmb3IgQXJtKSBvbiB0aGUgcmV0dXJuIG9mIGFuIGV4
Y2VwdGlvbi4NCj4+Pj4+IA0KPj4+PiBUaGFua3MgZm9yIHRoZSBpbmZvLiBJIHdpbGwgY2hlY2sg
YW5kIHdpbGwgZ2V0IGJldHRlciB1bmRlcnN0YW5kaW5nIG9mIHRoZSB0YXNrbGV0IGhvdyBpdCB3
aWxsIHJ1biBpbiBYRU4uDQo+Pj4+Pj4+IA0KPj4+Pj4+Pj4+Pj4gNC4gTGF0ZXN0IHZlcnNpb24g
b2YgdGhlIExpbnV4IFNNTVV2MyBjb2RlIGltcGxlbWVudHMgdGhlIGNvbW1hbmRzIHF1ZXVlDQo+
Pj4+Pj4+Pj4+PiAgICBhY2Nlc3MgZnVuY3Rpb25zIGJhc2VkIG9uIGF0b21pYyBvcGVyYXRpb25z
IGltcGxlbWVudGVkIGluIExpbnV4Lg0KPj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+Pj4gQ2FuIHlvdSBw
cm92aWRlIG1vcmUgZGV0YWlscz8NCj4+Pj4+Pj4+PiANCj4+Pj4+Pj4+PiBJIHRyaWVkIHRvIHBv
cnQgdGhlIGxhdGVzdCB2ZXJzaW9uIG9mIHRoZSBTTU1VdjMgY29kZSB0aGFuIEkgb2JzZXJ2ZWQg
dGhhdA0KPj4+Pj4+Pj4+IGluIG9yZGVyIHRvIHBvcnQgdGhhdCBjb2RlIEkgaGF2ZSB0byBhbHNv
IHBvcnQgYXRvbWljIG9wZXJhdGlvbiBpbXBsZW1lbnRlZA0KPj4+Pj4+Pj4+IGluIExpbnV4IHRv
IFhFTi4gQXMgbGF0ZXN0IExpbnV4IGNvZGUgdXNlcyBhdG9taWMgb3BlcmF0aW9uIHRvIHByb2Nl
c3MgdGhlDQo+Pj4+Pj4+Pj4gY29tbWFuZCBxdWV1ZXMgKGF0b21pY19jb25kX3JlYWRfcmVsYXhl
ZCgpLGF0b21pY19sb25nX2NvbmRfcmVhZF9yZWxheGVkKCkgLA0KPj4+Pj4+Pj4+IGF0b21pY19m
ZXRjaF9hbmRub3RfcmVsYXhlZCgpKSAuDQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IFRoYW5rIHlvdSBm
b3IgdGhlIGV4cGxhbmF0aW9uLiBJIHRoaW5rIGl0IHdvdWxkIGJlIGJlc3QgdG8gaW1wb3J0IHRo
ZSBhdG9taWMNCj4+Pj4+Pj4+IGhlbHBlcnMgYW5kIHVzZSB0aGUgbGF0ZXN0IGNvZGUuDQo+Pj4+
Pj4+PiANCj4+Pj4+Pj4+IFRoaXMgd2lsbCBlbnN1cmUgdGhhdCB3ZSBkb24ndCByZS1pbnRyb2R1
Y2UgYnVncyBhbmQgYWxzbyBidXkgdXMgc29tZSB0aW1lDQo+Pj4+Pj4+PiBiZWZvcmUgdGhlIExp
bnV4IGFuZCBYZW4gZHJpdmVyIGRpdmVyZ2UgYWdhaW4gdG9vIG11Y2guDQo+Pj4+Pj4+PiANCj4+
Pj4+Pj4+IFN0ZWZhbm8sIHdoYXQgZG8geW91IHRoaW5rPw0KPj4+Pj4+PiANCj4+Pj4+Pj4gSSB0
aGluayB5b3UgYXJlIHJpZ2h0Lg0KPj4+Pj4+IFllcywgSSBhZ3JlZSB3aXRoIHlvdSB0byBoYXZl
IFhFTiBjb2RlIGluIHN5bmMgd2l0aCBMaW51eCBjb2RlIHRoYXQncyB3aHkgSSBzdGFydGVkIHdp
dGggdG8gcG9ydCB0aGUgTGludXggYXRvbWljIG9wZXJhdGlvbnMgdG8gWEVOICB0aGVuIEkgcmVh
bGlzZWQgdGhhdCBpdCBpcyBub3Qgc3RyYWlnaHRmb3J3YXJkIHRvIHBvcnQgYXRvbWljIG9wZXJh
dGlvbnMgYW5kIGl0IHJlcXVpcmVzIGxvdHMgb2YgZWZmb3J0IGFuZCB0ZXN0aW5nLiBUaGVyZWZv
cmUgSSBkZWNpZGVkIHRvIHBvcnQgdGhlIGNvZGUgYmVmb3JlIHRoZSBhdG9taWMgb3BlcmF0aW9u
IGlzIGludHJvZHVjZWQgaW4gTGludXguDQo+Pj4+PiANCj4+Pj4+IEhtbW0uLi4gSSB3b3VsZCBu
b3QgaGF2ZSBleHBlY3RlZCBhIGxvdCBvZiBlZmZvcnQgcmVxdWlyZWQgdG8gYWRkIHRoZSAzIGF0
b21pY3Mgb3BlcmF0aW9ucyBhYm92ZS4gQXJlIHlvdSB0cnlpbmcgdG8gYWxzbyBwb3J0IHRoZSBM
U0Ugc3VwcG9ydCBhdCB0aGUgc2FtZSB0aW1lPw0KPj4+PiBUaGVyZSBhcmUgb3RoZXIgYXRvbWlj
IG9wZXJhdGlvbnMgdXNlZCBpbiB0aGUgU01NVXYzIGNvZGUgYXBhcnQgZnJvbSB0aGUgMyBhdG9t
ic operation I mention. I just mention 3 operations as an example.
>>> 
>>> Ok. Do you have a list you could share?
>>> 
>> Yes. Please find the list that we have to port to the XEN in order to merge the latest SMMUv3 code.
> 
> Thanks!
> 
>> If we start to port the below list we might have to port another atomic operation based on which below atomic operations are implemented. I have not spent time on how these atomic operations are implemented in detail but as per my understanding, it required an effort to port them to XEN and required a lot of testing.
> 
> For the beginning, I think it is fine to implement them with a stronger memory barrier than necessary or in a less efficient way. This can be refined afterwards.
> 
> Those helpers can directly be defined in the SMMUv3 drivers so we know they are not for general purpose :).
> 
>> 1. atomic_set_release
> 
> This could be implemented as:
> 
> smp_mb();
> atomic_set();
> 
>> 2. atomic_fetch_andnot_relaxed
> 
> This would need to be imported.
> 
>> 3. atomic_cond_read_relaxed
> 
> This would need to be imported. The simplest version seems to be the generic version provided by include/asm-generic/barrier.h (see smp_cond_load_relaxed).
> 
>> 4. atomic_long_cond_read_relaxed
>> 5. atomic_long_xor
> 
> The two would require the implementation of atomic64. Volodymyr also required a version. I offered my help, however I didn't find enough time to do it yet :(.
> 
> For Arm64, it would be possible to do a copy/paste of the existing helpers and replace anything related to a 32-bit register with a 64-bit one.
> 
> For Arm32, they are a bit more complex because you now need to work with 2 registers.
> 
> However, for your purpose, you would be using atomic_long_t. So the Arm64 implementation should be sufficient.
> 
>> 6. atomic_set_release
> 
> Same as 1.
> 
>> 7. atomic_cmpxchg_relaxed might be we can use atomic_cmpxchg that is implemented in XEN, need to check.
> atomic_cmpxchg() is strongly ordered. So it would be fine to use it for implementing the helper. Although, more inefficient :).
> 
>> 8. atomic_dec_return_release
> 
> Xen implements a stronger version, atomic_dec_return(). You can re-use it here.
> 
>> 9. atomic_fetch_inc_relaxed
> 
> This would need to be imported.

We do agree that this would be the work required, but some of the things to be imported have dependencies, and this is not a simple change on the patch done by Rahul; it would require to almost restart the implementation and testing from the very beginning.

In the meantime, could we proceed with the review of this SMMUv3 driver and include it in Xen as is?

We can set me and Rahul as maintainers and flag the driver as experimental in Support.md (it is already protected by the EXPERT configuration in Kconfig).


Cheers
Bertrand
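[Editorial note: the helper semantics discussed above can be sketched in portable C11. This is only an illustration of the barrier relationships (a full barrier before a plain store over-approximates release semantics, as suggested in the thread), not the actual Xen or Linux implementation; the `my_`-prefixed names are hypothetical.]

```c
#include <stdatomic.h>

/*
 * Hedged sketch (not Xen code): C11 approximations of two of the
 * helpers listed in the thread.
 */

/* atomic_set_release: a full fence followed by a relaxed store is
 * stronger than release semantics, matching the "stronger than
 * necessary is fine for a start" approach suggested above. */
static inline void my_atomic_set_release(atomic_int *v, int i)
{
    atomic_thread_fence(memory_order_seq_cst);   /* smp_mb() analogue */
    atomic_store_explicit(v, i, memory_order_relaxed);
}

/* atomic_fetch_andnot_relaxed(v, m) clears the bits in m and returns
 * the old value; it is fetch_and(v, ~m) with relaxed ordering. */
static inline int my_atomic_fetch_andnot_relaxed(atomic_int *v, int mask)
{
    return atomic_fetch_and_explicit(v, ~mask, memory_order_relaxed);
}
```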


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:37:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:37:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14228.35282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY93H-0002HR-Ci; Thu, 29 Oct 2020 14:37:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14228.35282; Thu, 29 Oct 2020 14:37:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY93H-0002HK-9W; Thu, 29 Oct 2020 14:37:19 +0000
Received: by outflank-mailman (input) for mailman id 14228;
 Thu, 29 Oct 2020 14:37:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY93F-0002HF-Hc
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:37:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f4c647d-9863-49aa-86e9-957b00e0878c;
 Thu, 29 Oct 2020 14:37:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 00EDCACB0;
 Thu, 29 Oct 2020 14:37:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603982228;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QVOvTTn7VFXREsK9GoEmgxr+I3Pq780l7aXY5XhDApI=;
	b=mhY5Ibf0rScD0pYrQBrvbkLII08xLpLlRYyXFzROMb6GebZBVhcaIhV/2qLaZOi1/9eLFe
	0fv7hieILxvq/b0aOAzK6/ll7/yUuuEZr7yrlO/Zu+tUHDFFtA7uEdro3AYmv6apOWi0Ti
	1md+hRXPy4aFhS5/uKhpoVqOyNPGiy0=
Subject: Re: [PATCH] x86/pv: Drop stale comment in dom0_construct_pv()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201029140041.18343-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2425da57-9c04-215e-297e-844fb11fac9a@suse.com>
Date: Thu, 29 Oct 2020 15:37:08 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201029140041.18343-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.10.2020 15:00, Andrew Cooper wrote:
> This comment has been around since c/s 1372bca0615 in 2004.  It is stale, as
> it predates the introduction of struct vcpu.

That commit only moved it around; it's 22a857bde9b8 afaics from
early 2003 where it first appeared, where it had a reason:

    /*
     * WARNING: The new domain must have its 'processor' field
     * filled in by now !!
     */
    phys_l2tab = ALLOC_FRAME_FROM_DOMAIN();
    l2tab = map_domain_mem(phys_l2tab);
    memcpy(l2tab, idle_pg_table[p->processor], PAGE_SIZE);

But yes, the comment has been stale for a long time, and I've
been wondering a number of times what it was supposed to tell
me. (I think it was already stale at the point the comment
first got altered, in 3072fef54df8.)

> It is not obvious that it was even correct at the time.  Where a vcpu (domain
> at the time) has been configured to run is unrelated to constructing the domain's
> initial pagetables, etc.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:40:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14232.35295 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY96H-000385-RW; Thu, 29 Oct 2020 14:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14232.35295; Thu, 29 Oct 2020 14:40:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY96H-00037y-OR; Thu, 29 Oct 2020 14:40:25 +0000
Received: by outflank-mailman (input) for mailman id 14232;
 Thu, 29 Oct 2020 14:40:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FUbw=EE=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kY96G-00037t-Q5
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:40:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66158acc-b120-46d6-b67e-69b4b48a8fe4;
 Thu, 29 Oct 2020 14:40:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8F2E2AFCD;
 Thu, 29 Oct 2020 14:40:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603982422;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=sP8ejWNFAyIlGFRSqM3r7s+NiwIY3F4P5hAFngx5fos=;
	b=M8qODgZP1r+vaS8+oZDWcHtrv6lpFq/U0/NHNz7kHvjtzJk8wTGaqlOAPWL+iGLBvkdiL4
	wpwiLhbZK9xR9Vru2g3RDYrC7xNkkrC98xkl5vmqwqXM1WWut85+vphTTol3ADr91ftXgi
	w8S/jrkJ4YUZDsRD6mXkdXY6pifshdA=
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201022143905.11032-1-jgross@suse.com>
 <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
Date: Thu, 29 Oct 2020 15:40:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.10.20 15:22, Jan Beulich wrote:
> On 22.10.2020 16:39, Juergen Gross wrote:
>> When the host crashes it would sometimes be nice to have additional
>> debug data available which could be produced via debug keys, but
>> halting the server for manual intervention might be impossible due to
>> the need to reboot/kexec rather sooner than later.
>>
>> Add support for automatic debug key actions in case of crashes which
>> can be activated via boot- or runtime-parameter.
> 
> While I can certainly see this possibly being a useful thing in
> certain situations, I'm uncertain it's going to be helpful in at
> least a fair set of cases. What output to request very much
> depends on the nature of the problem one is running into, and
> the more keys one adds "just in case", the longer the reboot
> latency, and the higher the risk (see also below) of the output
> generation actually causing further problems.

The obvious case is a watchdog induced crash: at least 2 sets of dom0
state will help in many cases.

> IOW I'm neither fully convinced that we want this, nor fully
> opposed.
> 
>> Depending on the type of crash the desired data might be different, so
>> support different settings for the possible types of crashes.
>>
>> The parameter is "crash-debug" with the following syntax:
>>
>>    crash-debug-<type>=<string>
>>
>> with <type> being one of:
>>
>>    panic, hwdom, watchdog, kexeccmd, debugkey
>>
>> and <string> a sequence of debug key characters with '.' having the
>> special semantics of a 1 second pause.
> 
> 1 second is a whole lot of time. To get two successive sets
> of data, a much shorter delay (if any) would normally suffice.

Yes, I'd be fine to trade that for a shorter period of time.

> Also, while '.' may seem like a good choice right now, with the
> shortage of characters we may want to put a real handler behind
> it at some point. The one character that clearly won't make much
> sense to use in this context is 'h', but that's awful as a (kind
> of) separator. Could we perhaps replace 'h' by '?', freeing up
> 'h' and allowing '?' to be used for this purpose here?

Fine with me. Another possibility would be to add '\' as an escape
character with '\.' meaning "debug-key .".
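[Editorial note: the escape idea can be sketched as follows. This is a hypothetical illustration of the proposed syntax, not code from the patch; `expand_crash_actions` and the 'P' pause marker are invented for the example.]

```c
#include <stddef.h>

/*
 * Hypothetical sketch of the proposed '\' escape: an unescaped '.'
 * means "pause", while "\." sends the literal '.' debug key.
 * Events are recorded into out[]: 'P' stands for a pause, any other
 * character is a key press. Returns the number of events recorded.
 */
static size_t expand_crash_actions(const char *action, char *out, size_t outlen)
{
    size_t n = 0;

    while ( *action && n < outlen )
    {
        char c = *action++;

        if ( c == '\\' && *action )  /* escaped: take next char literally */
            c = *action++;
        else if ( c == '.' )         /* unescaped '.': 1 second pause */
            c = 'P';
        out[n++] = c;
    }

    return n;
}
```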

>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -574,6 +574,29 @@ reduction of features at Xen's disposal to manage guests.
>>   ### cpuinfo (x86)
>>   > `= <boolean>`
>>   
>> +### crash-debug-debugkey
>> +### crash-debug-hwdom
>> +### crash-debug-kexeccmd
>> +### crash-debug-panic
>> +### crash-debug-watchdog
>> +> `= <string>`
>> +
>> +> Can be modified at runtime
>> +
>> +Specify debug-key actions in cases of crashes. Each of the parameters applies
>> +to a different crash reason. The `<string>` is a sequence of debug key
>> +characters, with `.` having the special meaning of a 1 second pause.
>> +
>> +So e.g. `crash-debug-watchdog=0.0r` would dump dom0 state twice with a
>> +second between the two state dumps, followed by the run queues of the
>> +hypervisor, if the system crashes due to a watchdog timeout.
>> +
>> +These parameters should be used carefully, as e.g. specifying
>> +`crash-debug-debugkey=C` would result in an endless loop. Depending on the
>> +reason of the system crash it might happen that triggering some debug key
>> +action will result in a hang instead of dumping data and then doing a
>> +reboot or crash dump.
> 
> I think it would be useful if the flavors were (briefly)
> explained: At the very least "debugkey" doesn't directly fit "in
> cases of crashes", as there's no crash involved. kexec_crash()
> instead gets invoked without there having been any crash.

Yes, and having some additional state generated for this case might
help diagnosis.

> 
> You may also want to point out that this is a best effort thing
> only - system state at the point of a crash may be such that the
> attempt of handling one of the debug keys would have further bad
> effects on the system, including that the actual kexec may then
> never occur.

True.

> 
>> @@ -507,6 +509,41 @@ void __init initialize_keytable(void)
>>       }
>>   }
>>   
>> +#define CRASHACTION_SIZE  32
>> +static char crash_debug_panic[CRASHACTION_SIZE];
>> +static char crash_debug_hwdom[CRASHACTION_SIZE];
>> +static char crash_debug_watchdog[CRASHACTION_SIZE];
>> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
>> +static char crash_debug_debugkey[CRASHACTION_SIZE];
>> +
>> +static char *crash_action[CRASHREASON_N] = {
>> +    [CRASHREASON_PANIC] = crash_debug_panic,
>> +    [CRASHREASON_HWDOM] = crash_debug_hwdom,
>> +    [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
>> +    [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
>> +    [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
>> +};
>> +
>> +string_runtime_param("crash-debug-panic", crash_debug_panic);
>> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
>> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
>> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
>> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
> 
> In general I'm not in favor of this (or similar) needing
> five new command line options, instead of just one. The problem
> with e.g.
> 
> crash-debug=panic:rq,watchdog:0d
> 
> is that ',' (or any other separator chosen) could in principle
> also be a debug key. It would still be possible to use
> 
> crash-debug=panic:rq crash-debug=watchdog:0d
> 
> though. Thoughts?

OTOH the runtime parameters are much more easily addressable that way.

> 
>> +void keyhandler_crash_action(enum crash_reason reason)
>> +{
>> +    const char *action = crash_action[reason];
>> +    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
>> +
>> +    while ( *action ) {
> 
> Misplaced brace.

Will fix.

> 
>> +        if ( *action == '.' )
>> +            mdelay(1000);
>> +        else
>> +            handle_keypress(*action, regs);
>> +        action++;
>> +    }
>> +}
> 
> I think only diagnostic keys should be permitted here.

While I understand that other keys could produce nonsense or do harm,
I'm not sure we should really prohibit them. Allowing them would e.g.
make it possible to do just a reboot without kdump for one type of crash.

> 
>> --- a/xen/include/xen/kexec.h
>> +++ b/xen/include/xen/kexec.h
>> @@ -1,6 +1,8 @@
>>   #ifndef __XEN_KEXEC_H__
>>   #define __XEN_KEXEC_H__
>>   
>> +#include <xen/keyhandler.h>
> 
> Could we go without this, utilizing the gcc extension of forward
> declared enums? Otoh ...
> 
>> @@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
>>   #define kexecing 0
>>   
>>   static inline void kexec_early_calculations(void) {}
>> -static inline void kexec_crash(void) {}
>> +static inline void kexec_crash(enum crash_reason reason)
>> +{
>> +    keyhandler_crash_action(reason);
>> +}
> 
> ... if this is to be an inline function and not just a #define,
> it'll need the declaration of the function to have been seen.

And even being a #define all users of kexec_crash() would need to
#include keyhandler.h (and I'm not sure there are any source files
including kexec.h which don't use kexec_crash()).
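[Editorial note: the header dependency being discussed can be shown with a minimal standalone sketch. The enum values and the counter below are hypothetical stand-ins, not the actual patch: the point is that the inline kexec_crash() stub only compiles once both the enum and the keyhandler_crash_action() prototype are in scope, which is why kexec.h would need keyhandler.h.]

```c
/* Normally provided by keyhandler.h (values here are illustrative). */
enum crash_reason {
    CRASHREASON_PANIC,
    CRASHREASON_WATCHDOG,
    CRASHREASON_N,
};

static int actions_run;  /* stand-in for the real handler's work */

/* Must be declared before the inline stub below can use it. */
static void keyhandler_crash_action(enum crash_reason reason)
{
    (void)reason;
    actions_run++;
}

/* The !CONFIG_KEXEC stub from the patch, reproduced in spirit. */
static inline void kexec_crash(enum crash_reason reason)
{
    keyhandler_crash_action(reason);
}
```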


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:50:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14240.35307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9FW-0003RK-TA; Thu, 29 Oct 2020 14:49:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14240.35307; Thu, 29 Oct 2020 14:49:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9FW-0003RD-Q0; Thu, 29 Oct 2020 14:49:58 +0000
Received: by outflank-mailman (input) for mailman id 14240;
 Thu, 29 Oct 2020 14:49:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY9FV-0003R8-1q
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:49:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce6ff7da-f66b-4421-9bef-39a5746a9978;
 Thu, 29 Oct 2020 14:49:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BD03AAC1F;
 Thu, 29 Oct 2020 14:49:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603982994;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RAnW0bO+7Nb+T4ud5VCzhOmmIF4MQq+4zdr6XVq40h4=;
	b=bTjCw+V+pYWGjR1F2ulUrPtzQ38jccb6BwlnxdNhi4JBEQOcFg5ToF2/ZThywC4LzdgViF
	oMxDS/QlK4ND2ypnHubRZ7FrOqpWVPa/vSDfPaKgSjFY3aXRQKrwNp4BHkk8SqMhEuMq9V
	Fn4u7s8g26mRQ8osBbVrG6kK+77GqNU=
Subject: Re: [PATCH] xen: add support for automatic debug key actions in case
 of crash
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201022143905.11032-1-jgross@suse.com>
 <977bab69-892c-d94d-d952-1a748f69d0b6@suse.com>
 <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ed2f73e7-04cc-f568-f0b7-19c843a8d31b@suse.com>
Date: Thu, 29 Oct 2020 15:49:55 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <53732f8f-fe6d-91bd-4100-4b4d904a4073@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.10.2020 15:40, Jürgen Groß wrote:
> On 29.10.20 15:22, Jan Beulich wrote:
>> On 22.10.2020 16:39, Juergen Gross wrote:
>>> @@ -507,6 +509,41 @@ void __init initialize_keytable(void)
>>>       }
>>>   }
>>>   
>>> +#define CRASHACTION_SIZE  32
>>> +static char crash_debug_panic[CRASHACTION_SIZE];
>>> +static char crash_debug_hwdom[CRASHACTION_SIZE];
>>> +static char crash_debug_watchdog[CRASHACTION_SIZE];
>>> +static char crash_debug_kexeccmd[CRASHACTION_SIZE];
>>> +static char crash_debug_debugkey[CRASHACTION_SIZE];
>>> +
>>> +static char *crash_action[CRASHREASON_N] = {
>>> +    [CRASHREASON_PANIC] = crash_debug_panic,
>>> +    [CRASHREASON_HWDOM] = crash_debug_hwdom,
>>> +    [CRASHREASON_WATCHDOG] = crash_debug_watchdog,
>>> +    [CRASHREASON_KEXECCMD] = crash_debug_kexeccmd,
>>> +    [CRASHREASON_DEBUGKEY] = crash_debug_debugkey,
>>> +};
>>> +
>>> +string_runtime_param("crash-debug-panic", crash_debug_panic);
>>> +string_runtime_param("crash-debug-hwdom", crash_debug_hwdom);
>>> +string_runtime_param("crash-debug-watchdog", crash_debug_watchdog);
>>> +string_runtime_param("crash-debug-kexeccmd", crash_debug_kexeccmd);
>>> +string_runtime_param("crash-debug-debugkey", crash_debug_debugkey);
>>
>> In general I'm not in favor of this (or similar) needing
>> five new command line options, instead of just one. The problem
>> with e.g.
>>
>> crash-debug=panic:rq,watchdog:0d
>>
>> is that ',' (or any other separator chosen) could in principle
>> also be a debug key. It would still be possible to use
>>
>> crash-debug=panic:rq crash-debug=watchdog:0d
>>
>> though. Thoughts?
> 
> OTOH the runtime parameters are much more easily addressable that way.

Ah yes, I can see this as a reason. It makes me wonder whether
command-line and runtime handling may want disconnecting in this
specific case then. (But I can also see the argument that this
would be too much overhead.)
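[Editorial note: the single-option form under discussion could be parsed along these lines. `parse_crash_debug` is a hypothetical sketch of the suggested `crash-debug=<type>:<keys>` syntax, given once per crash type so that no extra separator character needs to be reserved; it is not code from the patch.]

```c
#include <string.h>

/*
 * Sketch: split "panic:rq" into type ("panic") and the debug-key
 * string ("rq"). Returns 0 on success, -1 on a malformed value.
 */
static int parse_crash_debug(const char *val,
                             char *type, size_t tlen,
                             char *keys, size_t klen)
{
    const char *colon = strchr(val, ':');

    if ( !colon || colon == val )
        return -1;

    if ( (size_t)(colon - val) >= tlen || strlen(colon + 1) >= klen )
        return -1;

    memcpy(type, val, colon - val);
    type[colon - val] = '\0';
    strcpy(keys, colon + 1);

    return 0;
}
```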

>>> +void keyhandler_crash_action(enum crash_reason reason)
>>> +{
>>> +    const char *action = crash_action[reason];
>>> +    struct cpu_user_regs *regs = get_irq_regs() ? : guest_cpu_user_regs();
>>> +
>>> +    while ( *action ) {
>>> +        if ( *action == '.' )
>>> +            mdelay(1000);
>>> +        else
>>> +            handle_keypress(*action, regs);
>>> +        action++;
>>> +    }
>>> +}
>>
>> I think only diagnostic keys should be permitted here.
> 
> While I understand that other keys could produce nonsense or do harm,
> I'm not sure we should really prohibit them. Allowing them would e.g.
> make it possible to do just a reboot without kdump for one type of crash.

Ah yes, that's a fair point.

>>> --- a/xen/include/xen/kexec.h
>>> +++ b/xen/include/xen/kexec.h
>>> @@ -1,6 +1,8 @@
>>>   #ifndef __XEN_KEXEC_H__
>>>   #define __XEN_KEXEC_H__
>>>   
>>> +#include <xen/keyhandler.h>
>>
>> Could we go without this, utilizing the gcc extension of forward
>> declared enums? Otoh ...
>>
>>> @@ -82,7 +84,11 @@ void vmcoreinfo_append_str(const char *fmt, ...)
>>>   #define kexecing 0
>>>   
>>>   static inline void kexec_early_calculations(void) {}
>>> -static inline void kexec_crash(void) {}
>>> +static inline void kexec_crash(enum crash_reason reason)
>>> +{
>>> +    keyhandler_crash_action(reason);
>>> +}
>>
>> ... if this is to be an inline function and not just a #define,
>> it'll need the declaration of the function to have been seen.
> 
> And even being a #define all users of kexec_crash() would need to
> #include keyhandler.h (and I'm not sure there are any source files
> including kexec.h which don't use kexec_crash()).

About as many which do as ones which don't. But there's no
generally accessible header which includes xen/kexec.h, so perhaps
the extra dependency indeed isn't all this problematic.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:52:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:52:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14238.35319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9He-0004EX-9f; Thu, 29 Oct 2020 14:52:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14238.35319; Thu, 29 Oct 2020 14:52:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9He-0004EQ-6e; Thu, 29 Oct 2020 14:52:10 +0000
Received: by outflank-mailman (input) for mailman id 14238;
 Thu, 29 Oct 2020 14:49:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=waTM=EE=kernel.org=jic23@srs-us1.protection.inumbo.net>)
 id 1kY9F4-0003Py-HQ
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:49:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01b6ca57-2a07-43f2-badf-e1784f66c6f1;
 Thu, 29 Oct 2020 14:49:29 +0000 (UTC)
Received: from archlinux (cpc149474-cmbg20-2-0-cust94.5-4.cable.virginm.net
 [82.4.196.95])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 029F1206E3;
 Thu, 29 Oct 2020 14:49:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1603982968;
	bh=1M8zHKAHm8tfB8AOV5kFGjQ5L8HBiwt2lqvdGfEDgxQ=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=VwKBRazZTJNx5ayevPifAIJt6BcELHVSflPv1Pz3hhkIRLQu/hpqVx9zq6UOPHYwi
	 1Cr/eBqf+UTruLweRPs1uluGBSsUp8XS5oIZwMoDI9YYtGpTUHifWPeFkCnRe78Q3+
	 9acepxXtiAj1lspr5f5lKQxsgHU3+k3h2anx8nNM=
Date: Thu, 29 Oct 2020 14:49:12 +0000
From: Jonathan Cameron <jic23@kernel.org>
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
Cc: Linux Doc Mailing List <linux-doc@vger.kernel.org>, Greg Kroah-Hartman
 <gregkh@linuxfoundation.org>, Mauro Carvalho Chehab
 <mchehab+samsung@kernel.org>, "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>, Javier =?UTF-8?B?R29uesOhbGV6?=
 <javier@javigon.com>, "Jonathan Corbet" <corbet@lwn.net>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "Rafael J. Wysocki"
 <rjw@rjwysocki.net>, Alexander Shishkin
 <alexander.shishkin@linux.intel.com>, Alexandre Belloni
 <alexandre.belloni@bootlin.com>, Alexandre Torgue
 <alexandre.torgue@st.com>, Andrew Donnellan <ajd@linux.ibm.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Baolin Wang
 <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Bruno Meneguele
 <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy
 <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>, Enric Balletbo i
 Serra <enric.balletbo@collabora.com>, Fabrice Gasnier
 <fabrice.gasnier@st.com>, Felipe Balbi <balbi@kernel.org>, Frederic Barrat
 <fbarrat@linux.ibm.com>, Guenter Roeck <groeck@chromium.org>, Hanjun Guo
 <guohanjun@huawei.com>, Heikki Krogerus <heikki.krogerus@linux.intel.com>,
 Jens Axboe <axboe@kernel.dk>, Johannes Thumshirn
 <johannes.thumshirn@wdc.com>, Juergen Gross <jgross@suse.com>, Konstantin
 Khlebnikov <koct9i@gmail.com>, Kranthi Kuntala <kranthi.kuntala@intel.com>,
 Lakshmi Ramasubramanian <nramas@linux.microsoft.com>, Lars-Peter Clausen
 <lars@metafoo.de>, Len Brown <lenb@kernel.org>, Leonid Maksymchuk
 <leonmaxx@gmail.com>, Ludovic Desroches <ludovic.desroches@microchip.com>,
 Mario Limonciello <mario.limonciello@dell.com>, Maxime Coquelin
 <mcoquelin.stm32@gmail.com>, Michael Ellerman <mpe@ellerman.id.au>, Mika
 Westerberg <mika.westerberg@linux.intel.com>, Mike Kravetz
 <mike.kravetz@oracle.com>, Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain
 <nayna@linux.ibm.com>, Nicolas Ferre <nicolas.ferre@microchip.com>, Niklas
 Cassel <niklas.cassel@wdc.com>, Oleh Kravchenko <oleg@kaa.org.ua>, Orson
 Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>, Pawan Gupta
 <pawan.kumar.gupta@linux.intel.com>, Peter Meerwald-Stadler
 <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>, Petr Mladek
 <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>, Richard
 Cochran <richardcochran@gmail.com>, Sebastian Reichel <sre@kernel.org>,
 Sergey Senozhatsky <sergey.senozhatsky@gmail.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Thomas
 Gleixner <tglx@linutronix.de>, Vineela Tummalapalli
 <vineela.tummalapalli@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH 20/33] docs: ABI: testing: make the files compatible
 with ReST output
Message-ID: <20201029144912.3c0a239b@archlinux>
In-Reply-To: <4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
References: <cover.1603893146.git.mchehab+huawei@kernel.org>
	<4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
X-Mailer: Claws Mail 3.17.7 (GTK+ 2.24.32; x86_64-pc-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Wed, 28 Oct 2020 15:23:18 +0100
Mauro Carvalho Chehab <mchehab+huawei@kernel.org> wrote:

> From: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
> 
> Some files over there won't parse well with Sphinx.
> 
> Fix them.
> 
> Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
> Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>

Query below...  I'm going to guess a rebase issue?

Other than that
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> # for IIO


> diff --git a/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
> index b7259234ad70..a10a4de3e5fe 100644
> --- a/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
> +++ b/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
> @@ -3,67 +3,85 @@ KernelVersion:	4.11
>  Contact:	benjamin.gaignard@st.com
>  Description:
>  		Reading returns the list possible master modes which are:
> -		- "reset"     :	The UG bit from the TIMx_EGR register is
> +
> +
> +		- "reset"
> +				The UG bit from the TIMx_EGR register is
>  				used as trigger output (TRGO).
> -		- "enable"    : The Counter Enable signal CNT_EN is used
> +		- "enable"
> +				The Counter Enable signal CNT_EN is used
>  				as trigger output.
> -		- "update"    : The update event is selected as trigger output.
> +		- "update"
> +				The update event is selected as trigger output.
>  				For instance a master timer can then be used
>  				as a prescaler for a slave timer.
> -		- "compare_pulse" : The trigger output send a positive pulse
> -				    when the CC1IF flag is to be set.
> -		- "OC1REF"    : OC1REF signal is used as trigger output.
> -		- "OC2REF"    : OC2REF signal is used as trigger output.
> -		- "OC3REF"    : OC3REF signal is used as trigger output.
> -		- "OC4REF"    : OC4REF signal is used as trigger output.
> +		- "compare_pulse"
> +				The trigger output sends a positive pulse
> +				when the CC1IF flag is to be set.
> +		- "OC1REF"
> +				OC1REF signal is used as trigger output.
> +		- "OC2REF"
> +				OC2REF signal is used as trigger output.
> +		- "OC3REF"
> +				OC3REF signal is used as trigger output.
> +		- "OC4REF"
> +				OC4REF signal is used as trigger output.
> +
>  		Additional modes (on TRGO2 only):
> -		- "OC5REF"    : OC5REF signal is used as trigger output.
> -		- "OC6REF"    : OC6REF signal is used as trigger output.
> +
> +		- "OC5REF"
> +				OC5REF signal is used as trigger output.
> +		- "OC6REF"
> +				OC6REF signal is used as trigger output.
>  		- "compare_pulse_OC4REF":
> -		  OC4REF rising or falling edges generate pulses.
> +				OC4REF rising or falling edges generate pulses.
>  		- "compare_pulse_OC6REF":
> -		  OC6REF rising or falling edges generate pulses.
> +				OC6REF rising or falling edges generate pulses.
>  		- "compare_pulse_OC4REF_r_or_OC6REF_r":
> -		  OC4REF or OC6REF rising edges generate pulses.
> +				OC4REF or OC6REF rising edges generate pulses.
>  		- "compare_pulse_OC4REF_r_or_OC6REF_f":
> -		  OC4REF rising or OC6REF falling edges generate pulses.
> +				OC4REF rising or OC6REF falling edges generate
> +				pulses.
>  		- "compare_pulse_OC5REF_r_or_OC6REF_r":
> -		  OC5REF or OC6REF rising edges generate pulses.
> +				OC5REF or OC6REF rising edges generate pulses.
>  		- "compare_pulse_OC5REF_r_or_OC6REF_f":
> -		  OC5REF rising or OC6REF falling edges generate pulses.
> +				OC5REF rising or OC6REF falling edges generate
> +				pulses.
>  
> -		+-----------+   +-------------+            +---------+
> -		| Prescaler +-> | Counter     |        +-> | Master  | TRGO(2)
> -		+-----------+   +--+--------+-+        |-> | Control +-->
> -		                   |        |          ||  +---------+
> -		                +--v--------+-+ OCxREF ||  +---------+
> -		                | Chx compare +----------> | Output  | ChX
> -		                +-----------+-+         |  | Control +-->
> -		                      .     |           |  +---------+
> -		                      .     |           |    .
> -		                +-----------v-+ OC6REF  |    .
> -		                | Ch6 compare +---------+>
> -		                +-------------+
> +		::
>  
> -		Example with: "compare_pulse_OC4REF_r_or_OC6REF_r":
> +		  +-----------+   +-------------+            +---------+
> +		  | Prescaler +-> | Counter     |        +-> | Master  | TRGO(2)
> +		  +-----------+   +--+--------+-+        |-> | Control +-->
> +		                     |        |          ||  +---------+
> +		                  +--v--------+-+ OCxREF ||  +---------+
> +		                  | Chx compare +----------> | Output  | ChX
> +		                  +-----------+-+         |  | Control +-->
> +		                        .     |           |  +---------+
> +		                        .     |           |    .
> +		                  +-----------v-+ OC6REF  |    .
> +		                  | Ch6 compare +---------+>
> +		                  +-------------+
>  
> -		                X
> -		              X   X
> -		            X .   . X
> -		          X   .   .   X
> -		        X     .   .     X
> -		count X .     .   .     . X
> -		        .     .   .     .
> -		        .     .   .     .
> -		        +---------------+
> -		OC4REF  |     .   .     |
> -		      +-+     .   .     +-+
> -		        .     +---+     .
> -		OC6REF  .     |   |     .
> -		      +-------+   +-------+
> -		        +-+   +-+
> -		TRGO2   | |   | |
> -		      +-+ +---+ +---------+
> +		Example with: "compare_pulse_OC4REF_r_or_OC6REF_r"::
> +
> +		                  X
> +		                X   X
> +		              X .   . X
> +		            X   .   .   X
> +		          X     .   .     X
> +		  count X .     .   .     . X
> +		          .     .   .     .
> +		          .     .   .     .
> +		          +---------------+
> +		  OC4REF  |     .   .     |
> +		        +-+     .   .     +-+
> +		          .     +---+     .
> +		  OC6REF  .     |   |     .
> +		        +-------+   +-------+
> +		          +-+   +-+
> +		  TRGO2   | |   | |
> +		        +-+ +---+ +---------+
>  
>  What:		/sys/bus/iio/devices/triggerX/master_mode
>  KernelVersion:	4.11
> @@ -91,6 +109,30 @@ Description:
>  		When counting down the counter start from preset value
>  		and fire event when reach 0.
>  

Where did these come from?

> +What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> +KernelVersion:	4.12
> +Contact:	benjamin.gaignard@st.com
> +Description:
> +		Reading returns the list of possible quadrature modes.
> +
> +What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
> +KernelVersion:	4.12
> +Contact:	benjamin.gaignard@st.com
> +Description:
> +		Configure the device counter quadrature modes:
> +
> +		channel_A:
> +			Encoder A input serves as the count input and B as
> +			the UP/DOWN direction control input.
> +
> +		channel_B:
> +			Encoder B input serves as the count input and A as
> +			the UP/DOWN direction control input.
> +
> +		quadrature:
> +			Encoder A and B inputs are mixed to get direction
> +			and count with a scale of 0.25.
> +
>  What:		/sys/bus/iio/devices/iio:deviceX/in_count_enable_mode_available
>  KernelVersion:	4.12
>  Contact:	benjamin.gaignard@st.com
> @@ -104,6 +146,7 @@ Description:
>  		Configure the device counter enable modes, in all case
>  		counting direction is set by in_count0_count_direction
>  		attribute and the counter is clocked by the internal clock.
> +
>  		always:
>  			Counter is always ON.
>  


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:58:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:58:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14248.35331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9NO-0004Va-4L; Thu, 29 Oct 2020 14:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14248.35331; Thu, 29 Oct 2020 14:58:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9NO-0004VT-1P; Thu, 29 Oct 2020 14:58:06 +0000
Received: by outflank-mailman (input) for mailman id 14248;
 Thu, 29 Oct 2020 14:58:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY9NN-0004VO-K2
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:58:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d658816-b8f8-4ab3-9c98-c373480a6ad4;
 Thu, 29 Oct 2020 14:58:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 03EB5AC1F;
 Thu, 29 Oct 2020 14:58:04 +0000 (UTC)
X-Inumbo-ID: 3d658816-b8f8-4ab3-9c98-c373480a6ad4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603983484;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=L3YiefnC/CQma9gjKaC1uS383M6pQmnClZZjwUeKzuc=;
	b=kUx9JMth1tmSQ0eWpKUEhCRwD6Pdt+GaHONo2BxZWFd7qXfl742bo/PttazrVaK5LzJ3vl
	q99JVwIP4eiMrFToSP6lzPM/fMFuNhs07aMcXPi4H3z0Rml7ZvB2VwaPvhWaUh7FS5LoY6
	m4/8E8YuiM9jMvtOnO71/B2KcPPDch4=
Subject: Re: [PATCH 12/12] xen/cpupool: make per-cpupool sched-gran hypfs node
 writable
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-13-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <18acae1b-eebc-26db-e11c-1847e4221e69@suse.com>
Date: Thu, 29 Oct 2020 15:58:04 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201026091316.25680-13-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.10.2020 10:13, Juergen Gross wrote:
> @@ -1088,13 +1098,58 @@ static int cpupool_gran_read(const struct hypfs_entry *entry,
>      return copy_to_guest(uaddr, name, strlen(name) + 1) ? -EFAULT : 0;
>  }
>  
> +static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
> +                              XEN_GUEST_HANDLE_PARAM(void) uaddr,
> +                              unsigned int ulen)
> +{
> +    const struct hypfs_dyndir_id *data;
> +    struct cpupool *cpupool;
> +    enum sched_gran gran;
> +    unsigned int sched_gran;
> +    char name[SCHED_GRAN_NAME_LEN];
> +    int ret = 0;
> +
> +    if ( ulen > SCHED_GRAN_NAME_LEN )
> +        return -ENOSPC;
> +
> +    if ( copy_from_guest(name, uaddr, ulen) )
> +        return -EFAULT;
> +
> +    sched_gran = sched_gran_get(name, &gran) ? 0
> +                                             : cpupool_check_granularity(gran);
> +    if ( memchr(name, 0, ulen) != (name + ulen - 1) || sched_gran == 0 )
> +        return -EINVAL;

I guess the memchr() check wants to happen before the call to
sched_gran_get()?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 14:59:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 14:59:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14253.35345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9Os-0004dB-GW; Thu, 29 Oct 2020 14:59:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14253.35345; Thu, 29 Oct 2020 14:59:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9Os-0004d4-DH; Thu, 29 Oct 2020 14:59:38 +0000
Received: by outflank-mailman (input) for mailman id 14253;
 Thu, 29 Oct 2020 14:59:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FUbw=EE=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kY9Or-0004cz-Ng
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 14:59:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55834bec-813f-4c16-b3c8-dcd05ba059fa;
 Thu, 29 Oct 2020 14:59:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 44104AF0C;
 Thu, 29 Oct 2020 14:59:36 +0000 (UTC)
X-Inumbo-ID: 55834bec-813f-4c16-b3c8-dcd05ba059fa
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603983576;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=muZq2HFe/2X2/+Rl35meEsnwYY1hrkMD6PECAcGQ1ws=;
	b=nwg2bDHZPvU1YdtTm+cJCgOmhQOjBvI+2RLm3dkfr5R5ptpD4uRKqDOhWquf3sS2UuJPoJ
	Fq/K1uQotcsjppVL4IgKqckt3JQDWcySxXACArsReZh7wkGbH0tN+tRNi8eolp5oIM49WK
	PsTl6byEt8kQG35dkZkQ5HsYShDnmBE=
Subject: Re: [PATCH 12/12] xen/cpupool: make per-cpupool sched-gran hypfs node
 writable
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Dario Faggioli <dfaggioli@suse.com>,
 xen-devel@lists.xenproject.org
References: <20201026091316.25680-1-jgross@suse.com>
 <20201026091316.25680-13-jgross@suse.com>
 <18acae1b-eebc-26db-e11c-1847e4221e69@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <dd943d50-372d-d7ad-19cd-1870631befe2@suse.com>
Date: Thu, 29 Oct 2020 15:59:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <18acae1b-eebc-26db-e11c-1847e4221e69@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.10.20 15:58, Jan Beulich wrote:
> On 26.10.2020 10:13, Juergen Gross wrote:
>> @@ -1088,13 +1098,58 @@ static int cpupool_gran_read(const struct hypfs_entry *entry,
>>       return copy_to_guest(uaddr, name, strlen(name) + 1) ? -EFAULT : 0;
>>   }
>>   
>> +static int cpupool_gran_write(struct hypfs_entry_leaf *leaf,
>> +                              XEN_GUEST_HANDLE_PARAM(void) uaddr,
>> +                              unsigned int ulen)
>> +{
>> +    const struct hypfs_dyndir_id *data;
>> +    struct cpupool *cpupool;
>> +    enum sched_gran gran;
>> +    unsigned int sched_gran;
>> +    char name[SCHED_GRAN_NAME_LEN];
>> +    int ret = 0;
>> +
>> +    if ( ulen > SCHED_GRAN_NAME_LEN )
>> +        return -ENOSPC;
>> +
>> +    if ( copy_from_guest(name, uaddr, ulen) )
>> +        return -EFAULT;
>> +
>> +    sched_gran = sched_gran_get(name, &gran) ? 0
>> +                                             : cpupool_check_granularity(gran);
>> +    if ( memchr(name, 0, ulen) != (name + ulen - 1) || sched_gran == 0 )
>> +        return -EINVAL;
> 
> I guess the memchr() check wants to happen before the call to
> sched_gran_get()?

Yes.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 15:01:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 15:01:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14257.35358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9QF-0005SH-Rr; Thu, 29 Oct 2020 15:01:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14257.35358; Thu, 29 Oct 2020 15:01:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9QF-0005SA-OH; Thu, 29 Oct 2020 15:01:03 +0000
Received: by outflank-mailman (input) for mailman id 14257;
 Thu, 29 Oct 2020 15:01:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY9QE-0005S5-Fd
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 15:01:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ceffbaf-7701-4ff5-83d0-f44b26206e05;
 Thu, 29 Oct 2020 15:01:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 89EA1AC1F;
 Thu, 29 Oct 2020 15:01:00 +0000 (UTC)
X-Inumbo-ID: 8ceffbaf-7701-4ff5-83d0-f44b26206e05
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603983660;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=/9tOE9r1xCAU2Id5vfHT7XxFcVG5d5GbizM7ksXMvXY=;
	b=obHuQUNPLF3dJ29k2UzRStvrA4m6LqRlzDqo55taWgeLdR9gShvvx1ozC0wnijxR/pZdzh
	hp907bNEfdzvfED7AuUr3o/JA1L5IPZ+5oreRimQlsVUC5njTyBS0ecjzoqrwgyAjG7IqG
	WJe+ZaJ0LLuegvjng1AjHMMYtFWRlNU=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: support AVX-VNNI
Message-ID: <bead640a-e75e-8352-cd3d-5986386cab3a@suse.com>
Date: Thu, 29 Oct 2020 16:01:01 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

These are VEX-encoded equivalents of the EVEX-encoded AVX512-VNNI ISA
extension.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
SDE: -spr

--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -226,6 +226,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"core-caps",    0x00000007,  0, CPUID_REG_EDX, 30,  1},
         {"ssbd",         0x00000007,  0, CPUID_REG_EDX, 31,  1},
 
+        {"avx-vnni",     0x00000007,  1, CPUID_REG_EAX,  4,  1},
         {"avx512-bf16",  0x00000007,  1, CPUID_REG_EAX,  5,  1},
 
         {"lahfsahf",     0x80000001, NA, CPUID_REG_ECX,  0,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -175,7 +175,7 @@ static const char *const str_7d0[32] =
 
 static const char *const str_7a1[32] =
 {
-    /* 4 */                 [ 5] = "avx512-bf16",
+    [ 4] = "avx-vnni",      [ 5] = "avx512-bf16",
 };
 
 static const struct {
--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -1335,6 +1335,10 @@ static const struct vex {
     { { 0x45 }, 2, T, R, pfx_66, Wn, Ln }, /* vpsrlv{d,q} */
     { { 0x46 }, 2, T, R, pfx_66, W0, Ln }, /* vpsravd */
     { { 0x47 }, 2, T, R, pfx_66, Wn, Ln }, /* vpsllv{d,q} */
+    { { 0x50 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpbusd */
+    { { 0x51 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpbusds */
+    { { 0x52 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpwssd */
+    { { 0x53 }, 2, T, R, pfx_66, W0, Ln }, /* vpdpwssds */
     { { 0x58 }, 2, T, R, pfx_66, W0, Ln }, /* vpbroadcastd */
     { { 0x59 }, 2, T, R, pfx_66, W0, Ln }, /* vpbroadcastq */
     { { 0x5a }, 2, F, R, pfx_66, W0, L1 }, /* vbroadcasti128 */
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -5028,6 +5028,61 @@ int main(int argc, char **argv)
         printf("okay\n");
     }
 
+    printf("%-40s", "Testing vpdpwssd (%ecx),%{y,z}mmA,%{y,z}mmB...");
+    if ( stack_exec && cpu_has_avx512_vnni && cpu_has_avx_vnni )
+    {
+        /* Do the same operation two ways and compare the results. */
+        decl_insn(vpdpwssd_vex1);
+        decl_insn(vpdpwssd_vex2);
+        decl_insn(vpdpwssd_evex);
+
+        for ( i = 0; i < 24; ++i )
+            res[i] = i | (~i << 16);
+
+        asm volatile ( "vmovdqu32 32(%0), %%zmm1\n\t"
+                       "vextracti64x4 $1, %%zmm1, %%ymm2\n\t"
+                       "vpxor %%xmm0, %%xmm0, %%xmm3\n\t"
+                       "vpxor %%xmm0, %%xmm0, %%xmm4\n\t"
+                       "vpxor %%xmm0, %%xmm0, %%xmm5\n"
+                       put_insn(vpdpwssd_vex1,
+                                /* %{vex%} vpdpwssd (%1), %%ymm1, %%ymm3" */
+                                ".byte 0xc4, 0xe2, 0x75, 0x52, 0x19") "\n"
+                       put_insn(vpdpwssd_vex2,
+                                /* "%{vex%} vpdpwssd 32(%1), %%ymm2, %%ymm4" */
+                                ".byte 0xc4, 0xe2, 0x6d, 0x52, 0x61, 0x20") "\n"
+                       put_insn(vpdpwssd_evex,
+                                /* "vpdpwssd (%1), %%zmm1, %%zmm5" */
+                                ".byte 0x62, 0xf2, 0x75, 0x48, 0x52, 0x29")
+                       :: "r" (res), "c" (NULL) );
+
+        set_insn(vpdpwssd_vex1);
+        regs.ecx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(vpdpwssd_vex1) )
+            goto fail;
+
+        set_insn(vpdpwssd_vex2);
+        regs.ecx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(vpdpwssd_vex2) )
+            goto fail;
+
+        set_insn(vpdpwssd_evex);
+        regs.ecx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( rc != X86EMUL_OKAY || !check_eip(vpdpwssd_evex) )
+            goto fail;
+
+        asm ( "vinserti64x4 $1, %%ymm4, %%zmm3, %%zmm0\n\t"
+              "vpcmpeqd %%zmm0, %%zmm5, %%k0\n\t"
+              "kmovw %%k0, %0" : "=g" (rc) );
+        if ( rc != 0xffff )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing invpcid 16(%ecx),%%edx...");
     if ( stack_exec )
     {
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -170,6 +170,7 @@ static inline bool xcr0_mask(uint64_t ma
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
 #define cpu_has_avx512_vp2intersect (cp.feat.avx512_vp2intersect && xcr0_mask(0xe6))
 #define cpu_has_serialize  cp.feat.serialize
+#define cpu_has_avx_vnni   (cp.feat.avx_vnni && xcr0_mask(6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 
 #define cpu_has_xgetbv1   (cpu_has_xsave && cp.xstate.xgetbv1)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2008,6 +2008,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
 #define vcpu_has_avx512_vp2intersect() (ctxt->cpuid->feat.avx512_vp2intersect)
 #define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
+#define vcpu_has_avx_vnni()    (ctxt->cpuid->feat.avx_vnni)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
 
 #define vcpu_must_have(feat) \
@@ -9453,6 +9454,14 @@ x86_emulate(
         generate_exception_if(vex.l, EXC_UD);
         goto simd_0f_avx;
 
+    case X86EMUL_OPC_VEX_66(0x0f38, 0x50): /* vpdpbusd [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX_66(0x0f38, 0x51): /* vpdpbusds [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX_66(0x0f38, 0x52): /* vpdpwssd [xy]mm/mem,[xy]mm,[xy]mm */
+    case X86EMUL_OPC_VEX_66(0x0f38, 0x53): /* vpdpwssds [xy]mm/mem,[xy]mm,[xy]mm */
+        host_and_vcpu_must_have(avx_vnni);
+        generate_exception_if(vex.w, EXC_UD);
+        goto simd_0f_avx;
+
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x50): /* vpdpbusd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x51): /* vpdpbusds [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x52): /* vpdpwssd [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -133,6 +133,7 @@
 #define cpu_has_serialize       boot_cpu_has(X86_FEATURE_SERIALIZE)
 
 /* CPUID level 0x00000007:1.eax */
+#define cpu_has_avx_vnni        boot_cpu_has(X86_FEATURE_AVX_VNNI)
 #define cpu_has_avx512_bf16     boot_cpu_has(X86_FEATURE_AVX512_BF16)
 
 /* Synthesized. */
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -273,6 +273,7 @@ XEN_CPUFEATURE(CORE_CAPS,     9*32+30) /
 XEN_CPUFEATURE(SSBD,          9*32+31) /*A  MSR_SPEC_CTRL.SSBD available */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.eax, word 10 */
+XEN_CPUFEATURE(AVX_VNNI,     10*32+ 4) /*A  AVX-VNNI Instructions */
 XEN_CPUFEATURE(AVX512_BF16,  10*32+ 5) /*A  AVX512 BFloat16 Instructions */
 
 #endif /* XEN_CPUFEATURE */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -252,7 +252,7 @@ def crunch_numbers(state):
         # feature flags.  If want to use AVX512, AVX2 must be supported and
         # enabled.  Certain later extensions, acting on 256-bit vectors of
         # integers, better depend on AVX2 than AVX.
-        AVX2: [AVX512F, VAES, VPCLMULQDQ],
+        AVX2: [AVX512F, VAES, VPCLMULQDQ, AVX_VNNI],
 
         # AVX512F is taken to mean hardware support for 512bit registers
         # (which in practice depends on the EVEX prefix to encode) as well


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 15:05:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 15:05:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14271.35373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9Uh-0005hA-Er; Thu, 29 Oct 2020 15:05:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14271.35373; Thu, 29 Oct 2020 15:05:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9Uh-0005h3-Bm; Thu, 29 Oct 2020 15:05:39 +0000
Received: by outflank-mailman (input) for mailman id 14271;
 Thu, 29 Oct 2020 15:05:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qEoS=EE=aculab.com=david.laight@srs-us1.protection.inumbo.net>)
 id 1kY9Uf-0005gy-QU
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 15:05:37 +0000
Received: from eu-smtp-delivery-151.mimecast.com (unknown [207.82.80.151])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 2b0dfeee-94b7-4c43-8e36-105c0466feeb;
 Thu, 29 Oct 2020 15:05:36 +0000 (UTC)
Received: from AcuMS.aculab.com (156.67.243.126 [156.67.243.126]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 uk-mta-56-P6lCnI4YNLSev_eoOtq5HQ-1; Thu, 29 Oct 2020 15:05:32 +0000
Received: from AcuMS.Aculab.com (fd9f:af1c:a25b:0:43c:695e:880f:8750) by
 AcuMS.aculab.com (fd9f:af1c:a25b:0:43c:695e:880f:8750) with Microsoft SMTP
 Server (TLS) id 15.0.1347.2; Thu, 29 Oct 2020 15:05:31 +0000
Received: from AcuMS.Aculab.com ([fe80::43c:695e:880f:8750]) by
 AcuMS.aculab.com ([fe80::43c:695e:880f:8750%12]) with mapi id 15.00.1347.000; 
 Thu, 29 Oct 2020 15:05:31 +0000
X-Inumbo-ID: 2b0dfeee-94b7-4c43-8e36-105c0466feeb
X-MC-Unique: P6lCnI4YNLSev_eoOtq5HQ-1
From: David Laight <David.Laight@ACULAB.COM>
To: 'Arnd Bergmann' <arnd@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"x86@kernel.org" <x86@kernel.org>
CC: Arnd Bergmann <arnd@arndb.de>, "K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>, Stephen Hemminger
	<sthemmin@microsoft.com>, "H. Peter Anvin" <hpa@zytor.com>, "Rafael J.
 Wysocki" <rjw@rjwysocki.net>, Paolo Bonzini <pbonzini@redhat.com>, "Vitaly
 Kuznetsov" <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, "Jim
 Mattson" <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"platform-driver-x86@vger.kernel.org" <platform-driver-x86@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>
Subject: RE: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Thread-Topic: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Thread-Index: AQHWrZenJpzBwTRfbE+Uihb7XQWTqKmurjkg
Date: Thu, 29 Oct 2020 15:05:31 +0000
Message-ID: <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com>
References: <20201028212417.3715575-1-arnd@kernel.org>
In-Reply-To: <20201028212417.3715575-1-arnd@kernel.org>
Accept-Language: en-GB, en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.202.205.107]
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=C51A453 smtp.mailfrom=david.laight@aculab.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: aculab.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann
> Sent: 28 October 2020 21:21
>
> From: Arnd Bergmann <arnd@arndb.de>
>
> There are hundreds of warnings in a W=2 build about a local
> variable shadowing the global 'apic' definition:
>
> arch/x86/kvm/lapic.h:149:65: warning: declaration of 'apic' shadows a global declaration [-Wshadow]
>
> Avoid this by renaming the global 'apic' variable to the more descriptive
> 'x86_system_apic'. It was originally called 'genapic', but both that
> and the current 'apic' seem to be a little overly generic for a global
> variable.
>
> Fixes: c48f14966cc4 ("KVM: inline kvm_apic_present() and kvm_lapic_enabled()")
> Fixes: c8d46cf06dc2 ("x86: rename 'genapic' to 'apic'")
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
> v2: rename the global instead of the local variable in the header
...
> diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
> index 284e73661a18..33e2dc78ca11 100644
> --- a/arch/x86/hyperv/hv_apic.c
> +++ b/arch/x86/hyperv/hv_apic.c
> @@ -259,14 +259,14 @@ void __init hv_apic_init(void)
>  		/*
>  		 * Set the IPI entry points.
>  		 */
> -		orig_apic = *apic;
> -
> -		apic->send_IPI = hv_send_ipi;
> -		apic->send_IPI_mask = hv_send_ipi_mask;
> -		apic->send_IPI_mask_allbutself = hv_send_ipi_mask_allbutself;
> -		apic->send_IPI_allbutself = hv_send_ipi_allbutself;
> -		apic->send_IPI_all = hv_send_ipi_all;
> -		apic->send_IPI_self = hv_send_ipi_self;
> +		orig_apic = *x86_system_apic;
> +
> +		x86_system_apic->send_IPI = hv_send_ipi;
> +		x86_system_apic->send_IPI_mask = hv_send_ipi_mask;
> +		x86_system_apic->send_IPI_mask_allbutself = hv_send_ipi_mask_allbutself;
> +		x86_system_apic->send_IPI_allbutself = hv_send_ipi_allbutself;
> +		x86_system_apic->send_IPI_all = hv_send_ipi_all;
> +		x86_system_apic->send_IPI_self = hv_send_ipi_self;
>  	}
>
>  	if (ms_hyperv.hints & HV_X64_APIC_ACCESS_RECOMMENDED) {
> @@ -285,10 +285,10 @@ void __init hv_apic_init(void)
>  		 */
>  		apic_set_eoi_write(hv_apic_eoi_write);
>  		if (!x2apic_enabled()) {
> -			apic->read      = hv_apic_read;
> -			apic->write     = hv_apic_write;
> -			apic->icr_write = hv_apic_icr_write;
> -			apic->icr_read  = hv_apic_icr_read;
> +			x86_system_apic->read      = hv_apic_read;
> +			x86_system_apic->write     = hv_apic_write;
> +			x86_system_apic->icr_write = hv_apic_icr_write;
> +			x86_system_apic->icr_read  = hv_apic_icr_read;
>  		}

For those two just add:
	struct apic *apic = x86_system_apic;
before all the assignments.
Less churn and much better code.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 15:14:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 15:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14277.35384 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9dR-0006eK-Ad; Thu, 29 Oct 2020 15:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14277.35384; Thu, 29 Oct 2020 15:14:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9dR-0006eD-7g; Thu, 29 Oct 2020 15:14:41 +0000
Received: by outflank-mailman (input) for mailman id 14277;
 Thu, 29 Oct 2020 15:14:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kY9dP-0006e8-Gt
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 15:14:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78dbd365-d764-406e-bec0-2c668edd3c62;
 Thu, 29 Oct 2020 15:14:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AAD13AC91;
 Thu, 29 Oct 2020 15:14:37 +0000 (UTC)
X-Inumbo-ID: 78dbd365-d764-406e-bec0-2c668edd3c62
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603984477;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=oMm5zPzSCOpinbtJseRPVozVV46YNC4lZVuO5Wv53SE=;
	b=qwlKE0y67hLWNKXrJGKqtVR99n1tU6yiChiyezvSLufQJOeW6emmrYRk5AH/v3OHzsDRbJ
	i3vRUl046RD61nYf+d0FduPGTH8xM4NMXcXLyemLAVfqUqKzKyQpfxqRX5l6kikAKU+4Oj
	ECNYMkTO9QXHtKoQe2TZXUOjhQnZVgY=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/HVM: send mapcache invalidation request to qemu
 regardless of preemption
Message-ID: <d33721a8-af91-7efc-b954-1d775bd4e35c@suse.com>
Date: Thu, 29 Oct 2020 16:14:38 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Even if only part of a hypercall completed before getting preempted,
invalidation ought to occur. Therefore fold the two return statements.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Split off from "x86/HVM: refine when to send mapcache invalidation
request to qemu".

--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -326,14 +326,11 @@ int hvm_hypercall(struct cpu_user_regs *
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "hcall%lu -> %lx", eax, regs->rax);
 
-    if ( curr->hcall_preempted )
-        return HVM_HCALL_preempted;
-
     if ( unlikely(currd->arch.hvm.qemu_mapcache_invalidate) &&
          test_and_clear_bool(currd->arch.hvm.qemu_mapcache_invalidate) )
         send_invalidate_req();
 
-    return HVM_HCALL_completed;
+    return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
 }
 
 /*


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 15:21:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 15:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14282.35397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9jt-0007WT-3P; Thu, 29 Oct 2020 15:21:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14282.35397; Thu, 29 Oct 2020 15:21:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9js-0007WM-Vx; Thu, 29 Oct 2020 15:21:20 +0000
Received: by outflank-mailman (input) for mailman id 14282;
 Thu, 29 Oct 2020 15:21:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uFDJ=EE=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kY9js-0007WH-38
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 15:21:20 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c4db4a33-9efe-4c02-965a-7cda75d6d75f;
 Thu, 29 Oct 2020 15:21:18 +0000 (UTC)
X-Inumbo-ID: c4db4a33-9efe-4c02-965a-7cda75d6d75f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603984878;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=7PjLjndO3ks+GSswA7Y3requtaj2/y5zuqaKPnZN2cI=;
  b=Ju41AET8xVZpNc8eGVDA6gmwn0jM5bGT/FYrPi2kWMOKBBR/keqiajV4
   fYdnUhjetqSNLU4ng2oLSm1faydhZjUWsditBUUd8RtTz1frG1QIiUNg4
   j6M36YYA8eRiXbUY1LwuysF+by+N2oiENzyUY522dFY9Hu914j0lGJVId
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ZbmUzLXINQEI188lCzrT/6a0tiCE64K9cvEcdtWAah3G9RGMttHzMYz1oCzmW6/DakExeL/aCi
 VX9wJ/tElUhaV6wwsvdUIPrGMlrz7S23dWcSp2lK8baWj8Img/2bSL8sOvuxF1gw5NQfO20ySI
 tZJLR0HXy0/JGSiQ4piM+Grl1YCgG3R+tbYGNUrMnin/g86TBq3CHmEeBccCnVvf+rBYeKw4hQ
 C8LnvgaeJBGXgJ3eST2P0aiDxcgY4aecHbvoNIUph0h7DQkR9/0hweOuTr2hpAdeJ9Y7f+r5Sw
 53Q=
X-SBRS: None
X-MesageID: 30409310
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,430,1596513600"; 
   d="scan'208";a="30409310"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EojKTFJ3CNlSn4NWzvvQ+gnJF8cHi+O8Xvnfh4wSa2XPv4x9Ei4fZ1Zuwrt1R88VCRa2GqzJRtMbzfizgLlqO4clusj6bQeLtZF5MSs3BBj0PN65jH9iyX8iHJtYDydKbAHBKbyFxG646fS2UxxaV2fyyUIMLGDnBJ33gHQXOWdUlGrSySr05xguEW1wvTqlTayYxYs3sR5V8fOMK1mZ7c9FBWiWLQjEvxwgILZlD49dQWt7UjZvmp/BSd0ijHRtujxHHu9GJ0N+ynbeK9nx1zUF9nVXTKKNmXBGPJJKRGANMupCBXV0dxZ+17TNDqlDS+WgWQ/jAVwe+IjFG5g7nQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k/DG2aYNv+hxwx2bL0bpNn0FsUDBwHEnHdbgGz6/Gy4=;
 b=N/TyU+y7pW7l4XQFJSbMS+NeOs3l32HlP5pPFSnnnZi6t9UKDlsMVL5OkJ/GuSWff2mC76KmjTV1Xuu7YyeLG35aUXSfzdMxGKwzJL/jx21tkSKPNB6O39yyZQQK9xRk0+O1wwVDah/9qLzv7bCRhoMvwvRGYYWGcSezGqCJ0Dh/rfNkm5oumbSyrYZOLsSt8iKkHtoXrNoX9uaNpcFsx29yw3HVfLv0QBnq6xFIIqfvavbI4q4KP2vsePBlS2XbjqMlaOIRMPFu6dmz97sjRure3GoHfdVj1ffGMtQe8MH+mtVN+yZxEeeH3dzGKn39PiQBkZJzdUOHG1LtXt/oyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=k/DG2aYNv+hxwx2bL0bpNn0FsUDBwHEnHdbgGz6/Gy4=;
 b=vVAxugVQPNh1+A3AqmSYqlys729763uXAJw98MAXv1OfZVneJWdo6iliMpKs0qvxA9eeUKlFbMy9OIXzlahsILUg42hXPGIdDm4l66wwjxuf7LPXc6adQIyQM7RJxGr7afXcBCrlyUXxjP/NL+cN34nVDQU9Lj0XkW+zHDrIXWg=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/hvm: process softirq while saving/loading entries
Date: Thu, 29 Oct 2020 16:20:54 +0100
Message-ID: <20201029152054.28635-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.29.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: LO3P123CA0018.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:ba::23) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d8b18530-9309-47b8-c529-08d87c1e465e
X-MS-TrafficTypeDiagnostic: SN6PR03MB3744:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SN6PR03MB3744E1DA8BFF5B83E21206F48F140@SN6PR03MB3744.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1060;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Ri01FN6M9UVPwXsZdEUmFFWWKq8R7BGMD4/jEGD7tJxOTkC7qoLLfS/kaF9ZJeSOCNhZ0ZQKIFWMnPIP96l7QaWChkfY205+ALy4zIO6c0wVtSjDwwA+w7yjUwTpNH5q9adC6IczYXW4lSyd5zvF27hrPx/rVlT+/j2HPcf1EbQ/NylVDKUv4y/yy3/iL5J7j1oeHsU2/6rMTfz2GVOsm3vmH4V3xXVvAH5/CCZKQbaosQFEjUO86sBtgaMbvjuMisXuErACJmoRVk3f/Xgji2BzHYBXhTWvoxTyrIs2ShQ94byPoTiAmvR6cqdFd31k
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(39860400002)(136003)(376002)(366004)(36756003)(66556008)(186003)(5660300002)(2906002)(2616005)(1076003)(4326008)(16526019)(26005)(6916009)(956004)(86362001)(6486002)(478600001)(66946007)(83380400001)(66476007)(316002)(8676002)(8936002)(54906003)(6666004)(6496006);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: l0t23ttVpE3dzF5+NO84NqkOnz6/fJ3bHTfcD+MBpbRPb9hMrGtq6lK3M44dqcYd2zECrNoRBQm04FecOagr0BrZkoLUBDuoCHvfJxhKL1hh3AshkMLOEPQziDZG5SbqdjPMquMiVs94oacQpkskTcMqjuaImmVMsk9CgfjWCtLh35DpRPq3tw1I/KkMxONykUqRon//tTkT7UTvxVxtMRr4ur0vzEmlu/0+ZJUVFJKfeCoAML0HsnhNJGhrnUnRbod10h4keZzorkXpvzmHdi6bmhbicFpd8pKSjNPw5OquHRyIzAjru96zHIWUJwk69ECLh4HdViH6qejKIQf/dknche94GVzXU2WifdQ1QjGUa5bR2mX4DVGWwvHgNhnwHeCK3Aw8p/2GYyqG87iLygfWb2hKto4kujHhDx8o4tS4M3ZoabEcwevXKPoiXtF4BIkKQH6AxIuLcAiQvABLDvakTd6sa5oVYmm7g3Q28bDJG1js0aoj7AK/eN3n+qJK4hmBZU2snRQ/fKdsX0OfZL+CsCFeewyiZ4nIof9VXnBhIbzQm52rqZ+Ha4i08AxXysqScO3dtSedCfRDsMVNKLwSbpkF8zYGgTzxG3YeHCQpPRDaurqmX9rcGPnQ3EnZCzmWKCgyrc3Nq2YW2qk5Eg==
X-MS-Exchange-CrossTenant-Network-Message-Id: d8b18530-9309-47b8-c529-08d87c1e465e
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Oct 2020 15:21:14.6893
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: j0z0E1ipD0kanQyEAI01zEGVHAOGjvo3nMQbh0cHWVmPDFuXc+r+YDXAtKqD1SDe0DAlGrYnY1Ckc1LZ+Cw+Dw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN6PR03MB3744
X-OriginatorOrg: citrix.com

On slow systems with sync_console, saving or loading the context of big
guests can cause the watchdog to trigger. Fix this by adding a couple
of process_pending_softirqs() calls.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/save.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/x86/hvm/save.c b/xen/arch/x86/hvm/save.c
index a2c56fbc1e..584620985b 100644
--- a/xen/arch/x86/hvm/save.c
+++ b/xen/arch/x86/hvm/save.c
@@ -21,6 +21,7 @@
  */
 
 #include <xen/guest_access.h>
+#include <xen/softirq.h>
 #include <xen/version.h>
 
 #include <asm/hvm/support.h>
@@ -255,6 +256,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
                            v, i);
                     return -ENODATA;
                 }
+                process_pending_softirqs();
             }
         }
         else
@@ -268,6 +270,7 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
                        d->domain_id, i);
                 return -ENODATA;
             }
+            process_pending_softirqs();
         }
     }
 
@@ -341,6 +344,7 @@ int hvm_load(struct domain *d, hvm_domain_context_t *h)
                    d->domain_id, desc->typecode, desc->instance);
             return -1;
         }
+        process_pending_softirqs();
     }
 
     /* Not reached */
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 15:24:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 15:24:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14289.35409 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9nE-0007jw-O7; Thu, 29 Oct 2020 15:24:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14289.35409; Thu, 29 Oct 2020 15:24:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kY9nE-0007jp-KH; Thu, 29 Oct 2020 15:24:48 +0000
Received: by outflank-mailman (input) for mailman id 14289;
 Thu, 29 Oct 2020 15:24:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sh/s=EE=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kY9nD-0007jj-Cc
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 15:24:47 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0d::620])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18d7b006-2ffe-4887-82c9-ac10440d2560;
 Thu, 29 Oct 2020 15:24:45 +0000 (UTC)
Received: from MRXP264CA0007.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:15::19)
 by VI1PR08MB3552.eurprd08.prod.outlook.com (2603:10a6:803:81::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19; Thu, 29 Oct
 2020 15:24:43 +0000
Received: from VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:15:cafe::1f) by MRXP264CA0007.outlook.office365.com
 (2603:10a6:500:15::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Thu, 29 Oct 2020 15:24:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT063.mail.protection.outlook.com (10.152.18.236) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Thu, 29 Oct 2020 15:24:42 +0000
Received: ("Tessian outbound 7c188528bfe0:v64");
 Thu, 29 Oct 2020 15:24:42 +0000
Received: from c89f5e344ddd.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 EA7DA308-6565-48E5-AC33-791B81806C2B.1; 
 Thu, 29 Oct 2020 15:24:14 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c89f5e344ddd.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 29 Oct 2020 15:24:14 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6090.eurprd08.prod.outlook.com (2603:10a6:10:208::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Thu, 29 Oct
 2020 15:24:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Thu, 29 Oct 2020
 15:24:10 +0000
X-Inumbo-ID: 18d7b006-2ffe-4887-82c9-ac10440d2560
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qa/pCWZFXD+sFV8cZRKbKmtTkBEMrqltU9AtHXXs1fs=;
 b=OCONyekbFsNnbaLl59gvtzRnZO4A7pF38lQ8ol+bBquRHkyy8RcESjPKmZHb0HEr2SEBw12g+z7rMjjKw661DLuMmrjyjVJ+8L0J3Xi3JBH/0R/xiWIjwbsgHDbNr4GvbfE0BjXcPgSARKZZXfDlACFaJnHKl0n/OV51uqqPjoA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8cc10f7d42b108fa
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kOMGVFiPMiaBeVPts20aQ9oPgSIDgg5LOJtJFMbBNBeoshi5RvycfpphHMkmgw4ip6sj/2MN4C1TnYQiMt3jJKOZMzkT1GqOqlBo8E8dZ/oVetlgqb0c8VTw4mF+cfz9cw1m4UZIv0/kguHI1hemmHUXKD0tjCM3578/f/sVm9XLLtg66iIrn90vZUO6jKYgwJoupJ80fMKX/IOR1cedatOzJLPF1M/0WEch9T3ZWyMlUOXlsLxiFJNPKnpN2BqxFU8ouMs1K2eiaxVX8ImzDAhN3WoJDlBXsKIwrWf/Re6b3+SjN6cNO/24NU8Hsem97YKntZN9tIjPTHwncW5YBA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qa/pCWZFXD+sFV8cZRKbKmtTkBEMrqltU9AtHXXs1fs=;
 b=Av0QU7imBh8AfIvhHSjL74wYHswb4rDZs6Brfa7ERzNlNGuKP8gJP5s8VuwYu7EkA6NhxseX2yuKkm1qG0LVuY38PcK88JkclZKcvA6di9ca3cw2b4BV24ZCTNiUD3Mv53AY8NoBDwXslqTq83XruGUrYMdVjJ2PGuAKbw/Ol274/NVT/P9Y2w+m1PSgQ/pHz0FbwSWmvS8PV9698QZlopx946iUbTWVWzLl6xTwBWSmpx6hOb9Dm8sIi6FHt6awO5jZwiGP30XQMVusQAX8t71JzOEG5GDHS2imEPctTURAE11gV+AjYK+5nWe+S6Kt4wWDTjhjr/m2zvreo143xg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DBBPR08MB6090.eurprd08.prod.outlook.com (2603:10a6:10:208::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Thu, 29 Oct
 2020 15:24:10 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3477.028; Thu, 29 Oct 2020
 15:24:10 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/2] tools/libs: fix uninstall rule for header files
Thread-Topic: [PATCH 2/2] tools/libs: fix uninstall rule for header files
Thread-Index: AQHWpeifIAHoNnN0LEeTc6ujCHuJAqmuw0aA
Date: Thu, 29 Oct 2020 15:24:10 +0000
Message-ID: <5495896C-2AD6-413E-A1A6-D9994F10D391@arm.com>
References: <2c9a0407-1bd1-6898-d1e3-9be4c869684b@suse.com>
 <74c629db-0f63-aba0-f294-9668c29b8f70@suse.com>
In-Reply-To: <74c629db-0f63-aba0-f294-9668c29b8f70@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: b4238e84-25e6-499f-1d8d-08d87c1ec285
x-ms-traffictypediagnostic: DBBPR08MB6090:|VI1PR08MB3552:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB3552207606DDF1900A5485669D140@VI1PR08MB3552.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 LZCfhW35j3xvFfKObpgK6f7+EjBvdyz4fkrLv069lHO48VYb+SKDkIBfrhjtoqIZOSoxrDh8GzBr1KubN10veOUIJbT24JMt8AltrrbUfPdXb6uT3KcqrtNhBrUb/MqHlzAvs4sqksWQkD7WIrOCxMxikP8hU8zVn7qRgIsmVgyK2AfkFsFonr5ZIH4jPoxibL4LchRasxSNxCSjhWqpLsAGjEskxk1MHGpvbyCzGBXxYlNmzmvyp93Kkf9zFT7tV/4HKa8jZQS2kvESLbVrQx0dAMTu7MQFKqfoEzOEFmtcfDvlZQYy+tuU+OuhOZ2RKjdnKC7eVZmgxRul3syBew==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(346002)(376002)(366004)(39860400002)(5660300002)(33656002)(2616005)(4744005)(54906003)(316002)(53546011)(6506007)(36756003)(8676002)(91956017)(66446008)(64756008)(66476007)(8936002)(66556008)(66946007)(71200400001)(76116006)(26005)(186003)(86362001)(2906002)(4326008)(478600001)(6916009)(6512007)(6486002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 y2sQMnQJVXh3PxF5Y9DHl2RR2v1o+t4Jycko4s1O+0Lf3qzTBtQs5bYJEHIei+nUuafS/k5kZxAfSITONjWbp2eIwngaHOILw+okvtbY6VMyzCf/Yeu/h/hiD1D2Mkve2wDRvkbJYMDx53NTWpzhcG3NhIP6amLwMaoH9T47vkjEPCwHE3RW1axcZ+BMnrywt09+bO8gWOPT/i7D96jUjBgFDOnbSun/+sQlSc130aiML8TnGz3MxJZ7RudoFJDyTGQBMOlSTdXs/NUsXVUVSHbTJp7EQbqWI4pu0GnYYI7Jl3meEduNjVUFykdnYoHbMfdMkYeMG9qDJRzL/Yo6vlEhLabFqvRc2vZ+NvyzTAZfnK4Q9tZ4JLeFCBHss41jB0hHdOmH6hctXHsqNn1IKnCE9ZeNw9W+o61d26ZBoLCbyTzhnNJfpsXT0P3QbYocCTfqpPEl25yg6WCrllSH+vYr/yE0uJDvu+0JP1yl5/FMSJu1nvnJisAl3Dw44KuYAJoAvf7gHVMMb4KXa9XpWUcwpa7bMib8ySd9eBPM3aKbmh5Eblod+k9T0nxJ+vw8SLUcXCzgIB+AJqLHfuETk/0bx8BMM10C5nqmr8AtrpEWd06twgJKuLfRnX/sADE396Zzy4CNd1PEG+fhbQct9g==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4DA6A219E03CFF49B741FC175285678B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6090
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	95490a88-963b-482f-472b-08d87c1eaf43
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	wz4huWq7fwpM79T/g89hID0L21toDMYUB1D2XzRDIaSHp+Aytcf/c5SOp8DN5NmWEESz8prToBt1Y4qNquhEug9VWiiJnOpTimM7gRYghyeAv/0tcPttXD2IrXeflcOmqaitaq86pBxOA4bC6Lgd8/i5W5cyOnLJuE+53F/7wjihYE1sEkW3XGewm/OB3jHIq2NxB9DwryEdSj0dSnslGY/LRytuyLzs+SjgkOpDs++n/a6cCQnt9vNnM7YpGc+tBVUe2UTLYYtlxAoyGYyKHvXMb/YIYDu7ssD/qNQdVRTm6w2EEb3QL70Zqzrp+kfAXJMl1Z5dnxxopuJXZSmfMelM0l2Hw7hqfIjLACMOkXGgSBooC+LWVyjfUBOKArH0KjSwcp+C3i+y/PnqjptnwA==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39860400002)(346002)(376002)(136003)(46966005)(82310400003)(81166007)(4326008)(6486002)(356005)(6512007)(478600001)(2616005)(6862004)(4744005)(6506007)(33656002)(8676002)(53546011)(5660300002)(8936002)(82740400003)(70586007)(70206006)(86362001)(54906003)(2906002)(36756003)(26005)(36906005)(316002)(336012)(47076004)(186003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Oct 2020 15:24:42.6232
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b4238e84-25e6-499f-1d8d-08d87c1ec285
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3552



> On 19 Oct 2020, at 08:21, Jan Beulich <jbeulich@suse.com> wrote:
>
> This again was working right only as long as $(LIBHEADER) consisted of
> just one entry.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

The change is obviously fixing a bug :-) and the double $ is required to
protect the shell variable from make expansion.
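As an aside, the double-$ behaviour can be demonstrated with a throwaway
makefile; the file path and header names (a.h, b.h, c.h) are invented for
illustration and echo stands in for the real rm:

```shell
# Recipe lines must start with a TAB, so printf is used to embed \t
# explicitly rather than rely on literal tabs surviving copy/paste.
printf 'LIBHEADER := a.h b.h c.h\n\nbroken:\n\t@for i in $(LIBHEADER); do echo "rm $i"; done\n\nfixed:\n\t@for i in $(LIBHEADER); do echo "rm $$i"; done\n' > /tmp/demo.mk
# make expands single-$ references before the shell runs: $i is the
# (empty) make variable "i", so the "broken" target loses the loop variable.
make -f /tmp/demo.mk broken   # prints "rm " three times
# $$i reaches the shell as $i, so each header name survives.
make -f /tmp/demo.mk fixed    # prints "rm a.h", "rm b.h", "rm c.h"
```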

Cheers
Bertrand


> ---
> An alternative would be to use $(addprefix ) without any shell loop.
>
> --- a/tools/libs/libs.mk
> +++ b/tools/libs/libs.mk
> @@ -107,7 +107,7 @@ install: build
> .PHONY: uninstall
> uninstall:
> 	rm -f $(DESTDIR)$(PKG_INSTALLDIR)/$(LIB_FILE_NAME).pc
> -	for i in $(LIBHEADER); do rm -f $(DESTDIR)$(includedir)/$(LIBHEADER); done
> +	for i in $(LIBHEADER); do rm -f $(DESTDIR)$(includedir)/$$i; done
> 	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so
> 	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so.$(MAJOR)
> 	rm -f $(DESTDIR)$(libdir)/lib$(LIB_FILE_NAME).so.$(MAJOR).$(MINOR)
>
>
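The $(addprefix ) alternative Jan mentions can be sketched the same way;
header names and the includedir value are invented, and echo stands in for
rm -f:

```shell
# make builds the full list of install paths itself, so the recipe
# needs no shell loop (and no $$ escaping) at all.
printf 'LIBHEADER := a.h b.h\nincludedir := /usr/include\n\nuninstall:\n\t@echo rm -f $(addprefix $(DESTDIR)$(includedir)/,$(LIBHEADER))\n' > /tmp/alt.mk
make -f /tmp/alt.mk uninstall   # prints: rm -f /usr/include/a.h /usr/include/b.h
```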



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 16:15:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 16:15:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14325.35424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYAZq-0004HO-KG; Thu, 29 Oct 2020 16:15:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14325.35424; Thu, 29 Oct 2020 16:15:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYAZq-0004HH-HE; Thu, 29 Oct 2020 16:15:02 +0000
Received: by outflank-mailman (input) for mailman id 14325;
 Thu, 29 Oct 2020 16:15:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYAZp-0004HC-Kg
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:15:02 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19df6871-3009-4950-8d40-72c03d805cf5;
 Thu, 29 Oct 2020 16:15:00 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9TGEp3PQ
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 17:14:51 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYAZp-0004HC-Kg
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:15:02 +0000
X-Inumbo-ID: 19df6871-3009-4950-8d40-72c03d805cf5
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 19df6871-3009-4950-8d40-72c03d805cf5;
	Thu, 29 Oct 2020 16:15:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603988099;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=9OTNbIyCoQ7WfPyq64qoCNVmm6++9PsjVAwM8+F1lss=;
	b=CKDGh3fnezBIGmNlaXn9hsXY9CMDlJXeCrbkyP1nhbmChUmxIVu/qBHrEukMYu88CV
	eHzXq9dF9OUvC834ly/XHrqy4uuLOaBueTyY7kteGlFJ4bkibuAPOnO+lrQwrjzmFwBA
	I9qp6bpWLmq+ySdHYoqpOE7K75vfPZ6cbWLwK9Hj2J9685jwketbd3IsMvjEoNhF6L4X
	PVJIj38WfIzJf11QOsnENRBVSRNSx63Ey8h2tgxP5TkebCvmE/f85SwdmGe61LYO0hV2
	GATWOMD/TwayUF7q8nqU7O2MFGOpJ5Rw0HshoOQ2GQb+JYu+Cgf4P/CZpEEEtZHMjwx6
	4VVw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9TGEp3PQ
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 17:14:51 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1] xl: fix description of migrate --debug
Date: Thu, 29 Oct 2020 17:14:48 +0100
Message-Id: <20201029161448.11385-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xl migrate --debug used to track every pfn in every batch of pages.
But those times are gone. Adjust the help text to describe what --debug
does today.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 docs/man/xl.1.pod.in   | 4 +++-
 tools/xl/xl_cmdtable.c | 2 +-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 4bde0672fa..d0f50f0b4a 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -488,7 +488,9 @@ Include timestamps in output.
 
 =item B<--debug>
 
-Display huge (!) amount of debug information during the migration process.
+Verify transferred domU page data. All memory will be transferred one more
+time to the destination host while the domU is paused, and compared with
+the result of the initial transfer made while the domU was still running.
 
 =item B<-p>
 
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index fd2dc0aef2..af160dde42 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -168,7 +168,7 @@ struct cmd_spec cmd_table[] = {
       "-e              Do not wait in the background (on <host>) for the death\n"
       "                of the domain.\n"
       "-T              Show timestamps during the migration process.\n"
-      "--debug         Print huge (!) amount of debug during the migration process.\n"
+      "--debug         Verify transferred domU page data.\n"
       "-p              Do not unpause domain after migrating it.\n"
       "-D              Preserve the domain id"
     },
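The page-data verification the new help text describes can be illustrated
with a stand-alone sketch; this is not the actual migration code, and the
snapshot files, page count, and 4 KiB page size are all invented here:

```shell
# Compare a second-pass memory snapshot against the first, page by page,
# and report any "pfn" whose contents changed between the two passes.
page=4096
dd if=/dev/zero of=/tmp/pass1.img bs=$page count=8 2>/dev/null
cp /tmp/pass1.img /tmp/pass2.img
# Dirty one page in the second pass so the check has something to find.
printf 'dirty' | dd of=/tmp/pass2.img bs=1 seek=$((3 * page)) conv=notrunc 2>/dev/null
mismatches=0
i=0
while [ $i -lt 8 ]; do
    dd if=/tmp/pass1.img of=/tmp/p1 bs=$page skip=$i count=1 2>/dev/null
    dd if=/tmp/pass2.img of=/tmp/p2 bs=$page skip=$i count=1 2>/dev/null
    if ! cmp -s /tmp/p1 /tmp/p2; then
        echo "pfn $i differs"
        mismatches=$((mismatches + 1))
    fi
    i=$((i + 1))
done
echo "$mismatches mismatching page(s)"
```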


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 16:30:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 16:30:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14333.35442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYAok-00060U-WF; Thu, 29 Oct 2020 16:30:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14333.35442; Thu, 29 Oct 2020 16:30:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYAok-00060N-TB; Thu, 29 Oct 2020 16:30:26 +0000
Received: by outflank-mailman (input) for mailman id 14333;
 Thu, 29 Oct 2020 16:30:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qEoS=EE=aculab.com=david.laight@srs-us1.protection.inumbo.net>)
 id 1kYAok-00060I-0q
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:30:26 +0000
Received: from eu-smtp-delivery-151.mimecast.com (unknown [185.58.86.151])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8d2e50c4-db4a-48de-ae1e-01113de221e0;
 Thu, 29 Oct 2020 16:30:25 +0000 (UTC)
Received: from AcuMS.aculab.com (156.67.243.126 [156.67.243.126]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 uk-mta-97-MKHylt3TN2qOdBALyCARbQ-1; Thu, 29 Oct 2020 16:30:20 +0000
Received: from AcuMS.Aculab.com (fd9f:af1c:a25b:0:43c:695e:880f:8750) by
 AcuMS.aculab.com (fd9f:af1c:a25b:0:43c:695e:880f:8750) with Microsoft SMTP
 Server (TLS) id 15.0.1347.2; Thu, 29 Oct 2020 16:30:19 +0000
Received: from AcuMS.Aculab.com ([fe80::43c:695e:880f:8750]) by
 AcuMS.aculab.com ([fe80::43c:695e:880f:8750%12]) with mapi id 15.00.1347.000; 
 Thu, 29 Oct 2020 16:30:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=qEoS=EE=aculab.com=david.laight@srs-us1.protection.inumbo.net>)
	id 1kYAok-00060I-0q
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:30:26 +0000
X-Inumbo-ID: 8d2e50c4-db4a-48de-ae1e-01113de221e0
Received: from eu-smtp-delivery-151.mimecast.com (unknown [185.58.86.151])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTP
	id 8d2e50c4-db4a-48de-ae1e-01113de221e0;
	Thu, 29 Oct 2020 16:30:25 +0000 (UTC)
Received: from AcuMS.aculab.com (156.67.243.126 [156.67.243.126]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 uk-mta-97-MKHylt3TN2qOdBALyCARbQ-1; Thu, 29 Oct 2020 16:30:20 +0000
X-MC-Unique: MKHylt3TN2qOdBALyCARbQ-1
Received: from AcuMS.Aculab.com (fd9f:af1c:a25b:0:43c:695e:880f:8750) by
 AcuMS.aculab.com (fd9f:af1c:a25b:0:43c:695e:880f:8750) with Microsoft SMTP
 Server (TLS) id 15.0.1347.2; Thu, 29 Oct 2020 16:30:19 +0000
Received: from AcuMS.Aculab.com ([fe80::43c:695e:880f:8750]) by
 AcuMS.aculab.com ([fe80::43c:695e:880f:8750%12]) with mapi id 15.00.1347.000;
 Thu, 29 Oct 2020 16:30:19 +0000
From: David Laight <David.Laight@ACULAB.COM>
To: 'Arnd Bergmann' <arnd@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>
CC: Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, the arch/x86 maintainers <x86@kernel.org>,
	Arnd Bergmann <arnd@arndb.de>, "K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>, Stephen Hemminger
	<sthemmin@microsoft.com>, "H. Peter Anvin" <hpa@zytor.com>, "Rafael J.
 Wysocki" <rjw@rjwysocki.net>, Vitaly Kuznetsov <vkuznets@redhat.com>,
	"Wanpeng Li" <wanpengli@tencent.com>, Jim Mattson <jmattson@google.com>,
	Joerg Roedel <joro@8bytes.org>, "linux-hyperv@vger.kernel.org"
	<linux-hyperv@vger.kernel.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, kvm list <kvm@vger.kernel.org>, "Platform
 Driver" <platform-driver-x86@vger.kernel.org>, xen-devel
	<xen-devel@lists.xenproject.org>, "open list:IOMMU DRIVERS"
	<iommu@lists.linux-foundation.org>
Subject: RE: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Thread-Topic: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Thread-Index: AQHWrdqfIpyThHXm90WmqPrnIVTaQ6muxUWQ
Date: Thu, 29 Oct 2020 16:30:19 +0000
Message-ID: <2a85eaf7d2e54a278493588bae41b06a@AcuMS.aculab.com>
References: <20201028212417.3715575-1-arnd@kernel.org>
 <ea34f1d3-ed54-a2de-79d9-5cc8decc0ab3@redhat.com>
 <CAK8P3a0e0YAkh_9S1ZG5FW3QozZnp1CwXUfWx9VHWkY=h+FVxw@mail.gmail.com>
In-Reply-To: <CAK8P3a0e0YAkh_9S1ZG5FW3QozZnp1CwXUfWx9VHWkY=h+FVxw@mail.gmail.com>
Accept-Language: en-GB, en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.202.205.107]
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=C51A453 smtp.mailfrom=david.laight@aculab.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: aculab.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann
> Sent: 29 October 2020 09:51
...
> I think ideally there would be no global variable, with all accesses
> encapsulated in function calls, possibly using static_call() optimizations
> if any of them are performance critical.

There isn't really a massive difference between global variables
and global access functions.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 16:38:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 16:38:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14339.35454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYAwO-0006IC-Rh; Thu, 29 Oct 2020 16:38:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14339.35454; Thu, 29 Oct 2020 16:38:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYAwO-0006I5-Oh; Thu, 29 Oct 2020 16:38:20 +0000
Received: by outflank-mailman (input) for mailman id 14339;
 Thu, 29 Oct 2020 16:38:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYAwN-0006I0-3Z
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:38:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b31470ea-f733-4d8d-95d2-6e258bb94a65;
 Thu, 29 Oct 2020 16:38:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B5D74AB98;
 Thu, 29 Oct 2020 16:38:16 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kYAwN-0006I0-3Z
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:38:19 +0000
X-Inumbo-ID: b31470ea-f733-4d8d-95d2-6e258bb94a65
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b31470ea-f733-4d8d-95d2-6e258bb94a65;
	Thu, 29 Oct 2020 16:38:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1603989496;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5Cfzj/gXh0YLmbTsJHNVVlTFt8gFMdLegExytjwGpww=;
	b=CFmrUdSdEyAq6gRgMTIdkbmmvT/dci+kp5+aBrRVrzYmzEpSvuLIbla5PZe4HFkI6hnNWD
	MBZHYly2x/3/ITq8RMQLXO+c1Oji9ZSIl3bK/21mIPTfqhTtwAii0Tn8IPTorXvleN2psV
	BM1Kl/sjOrh0cvICNdF8JONipuAawic=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id B5D74AB98;
	Thu, 29 Oct 2020 16:38:16 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: process softirq while saving/loading entries
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20201029152054.28635-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ab7f64a2-2ea8-0445-7266-c60e58a49a85@suse.com>
Date: Thu, 29 Oct 2020 17:38:17 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201029152054.28635-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.10.2020 16:20, Roger Pau Monne wrote:
> On slow systems with sync_console, saving or loading the context of big
> guests can cause the watchdog to trigger. Fix this by adding a couple
> of process_pending_softirqs() calls.

Which raises the question of how far this is then also a problem for
the caller of the underlying hypercall. IOW I wonder whether we
instead need to make use of continuations here. Nevertheless
...

> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 16:42:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 16:42:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14343.35465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYB08-000774-C8; Thu, 29 Oct 2020 16:42:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14343.35465; Thu, 29 Oct 2020 16:42:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYB08-00076x-8s; Thu, 29 Oct 2020 16:42:12 +0000
Received: by outflank-mailman (input) for mailman id 14343;
 Thu, 29 Oct 2020 16:42:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q+Bh=EE=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
 id 1kYB06-00076r-Co
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:42:10 +0000
Received: from mga11.intel.com (unknown [192.55.52.93])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f72ce307-175a-4c16-afd7-e9503d4241ee;
 Thu, 29 Oct 2020 16:42:08 +0000 (UTC)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 29 Oct 2020 09:42:06 -0700
Received: from ichao-mobl.amr.corp.intel.com (HELO ubuntu.localdomain)
 ([10.212.87.139])
 by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 29 Oct 2020 09:42:05 -0700
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=q+Bh=EE=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
	id 1kYB06-00076r-Co
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:42:10 +0000
X-Inumbo-ID: f72ce307-175a-4c16-afd7-e9503d4241ee
Received: from mga11.intel.com (unknown [192.55.52.93])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id f72ce307-175a-4c16-afd7-e9503d4241ee;
	Thu, 29 Oct 2020 16:42:08 +0000 (UTC)
IronPort-SDR: L/PoWEqABikXI8HwTgEh/tF4phOl5T23F/LcHrVvR+sUYmlbJYmPY2F8AERZDQc7e3Ty3BjzFi
 GJL/5qn18QVw==
X-IronPort-AV: E=McAfee;i="6000,8403,9789"; a="164967052"
X-IronPort-AV: E=Sophos;i="5.77,430,1596524400"; 
   d="scan'208";a="164967052"
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga001.jf.intel.com ([10.7.209.18])
  by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Oct 2020 09:42:06 -0700
IronPort-SDR: wCHGdkfs93RJGDBx7WxBU9CfqgrEGLy6+Jd1HVSXgF54pEZt0VA6cVG9Uz0cTs1hl28hkweL2c
 PWWVTjDNjm/g==
X-IronPort-AV: E=Sophos;i="5.77,430,1596524400"; 
   d="scan'208";a="395219071"
Received: from ichao-mobl.amr.corp.intel.com (HELO ubuntu.localdomain) ([10.212.87.139])
  by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 29 Oct 2020 09:42:05 -0700
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel <tamas.lengyel@intel.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH] tools: add noidentpt domain config option
Date: Thu, 29 Oct 2020 09:41:51 -0700
Message-Id: <93aec8d6e90c5b1c571297a9d4822d1868417be7.1603989586.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Identity Pagetable is currently created for all HVM VMs during setup.
It was only necessary for running HVM guests on Intel CPUs where EPT was
available but unrestricted guest mode was not.

Add option to skip the creation of the Identity Pagetable via the "noidentpt"
xl config option. This allows a system administrator to skip this step when
the hardware is known to have the required features.
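A hypothetical xl domain config snippet exercising the proposed option
(domain name, memory, and vcpu values are invented):

```shell
# Guest config that opts out of the identity-map pagetable; only sensible
# on hardware with unrestricted guest support.
cat > /tmp/fork-parent.cfg <<'EOF'
name = "fork-parent"
type = "hvm"
memory = 2048
vcpus = 2
noidentpt = 1
EOF
grep -c '^noidentpt' /tmp/fork-parent.cfg   # prints 1
```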

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
Further context: while running VM forks, a significant bottleneck was
identified when HVM_PARAM_IDENT_PT is copied from the parent VM. Setting
this parameter requires taking a Xen-wide lock (domctl_lock_acquire), so
when several VM forks run in parallel during fuzzing, the fork reset
hypercall can fail because the lock is held by another fork being reset
at the same time. This situation can be avoided entirely if there is no
identity-map pagetable to begin with, as on modern CPUs it is not
actually required.
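The page-directory entries that the (now optional) hunk builds can be
checked numerically; the flag values below are the standard x86 page-table
bits the code ORs together, and only the first three entries are shown:

```shell
# Each entry i maps the 4 MiB superpage at (i << 22) and ORs in
# PRESENT|RW|USER|ACCESSED|DIRTY|PSE = 0x1|0x2|0x4|0x20|0x40|0x80 = 0xe7.
flags=$(( 0x1 | 0x2 | 0x4 | 0x20 | 0x40 | 0x80 ))
for i in 0 1 2; do
    printf 'ident_pt[%d] = 0x%08x\n' "$i" $(( (i << 22) | flags ))
done
# prints 0x000000e7, 0x004000e7, 0x008000e7
```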
---
 docs/man/xl.cfg.5.pod.in         |  5 +++++
 tools/include/xenguest.h         |  1 +
 tools/libs/guest/xg_dom_x86.c    | 31 +++++++++++++++++--------------
 tools/libs/light/libxl_create.c  |  2 ++
 tools/libs/light/libxl_dom.c     |  2 ++
 tools/libs/light/libxl_types.idl |  1 +
 tools/xl/xl_parse.c              |  2 ++
 7 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0532739c1f..4d992fe346 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -587,6 +587,11 @@ which are incompatible with migration. Currently this is limited to
 enabling the invariant TSC feature flag in CPUID results when TSC is
 not emulated.
 
+=item B<noidentpt=BOOLEAN>
+
+Disable the creation of the identity-map pagetable that was required to run
+HVM guests on Intel CPUs with EPT without unrestricted guest mode support.
+
 =item B<driver_domain=BOOLEAN>
 
 Specify that this domain is a driver domain. This enables certain
diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index a9984dbea5..998a8c57ba 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -26,6 +26,7 @@
 
 #define XCFLAGS_LIVE      (1 << 0)
 #define XCFLAGS_DEBUG     (1 << 1)
+#define XCFLAGS_NOIDENTPT (1 << 2)
 
 #define X86_64_B_SIZE   64 
 #define X86_32_B_SIZE   32
diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c
index 2953aeb90b..827bea7eb7 100644
--- a/tools/libs/guest/xg_dom_x86.c
+++ b/tools/libs/guest/xg_dom_x86.c
@@ -718,20 +718,23 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
         goto out;
     }
 
-    /*
-     * Identity-map page table is required for running with CR0.PG=0 when
-     * using Intel EPT. Create a 32-bit non-PAE page directory of superpages.
-     */
-    if ( (ident_pt = xc_map_foreign_range(
-              xch, domid, PAGE_SIZE, PROT_READ | PROT_WRITE,
-              special_pfn(SPECIALPAGE_IDENT_PT))) == NULL )
-        goto error_out;
-    for ( i = 0; i < PAGE_SIZE / sizeof(*ident_pt); i++ )
-        ident_pt[i] = ((i << 22) | _PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
-                       _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
-    munmap(ident_pt, PAGE_SIZE);
-    xc_hvm_param_set(xch, domid, HVM_PARAM_IDENT_PT,
-                     special_pfn(SPECIALPAGE_IDENT_PT) << PAGE_SHIFT);
+    if ( !(dom->flags & XCFLAGS_NOIDENTPT) )
+    {
+        /*
+         * Identity-map page table is required for running with CR0.PG=0 when
+         * using Intel EPT. Create a 32-bit non-PAE page directory of superpages.
+         */
+        if ( (ident_pt = xc_map_foreign_range(
+                  xch, domid, PAGE_SIZE, PROT_READ | PROT_WRITE,
+                  special_pfn(SPECIALPAGE_IDENT_PT))) == NULL )
+            goto error_out;
+        for ( i = 0; i < PAGE_SIZE / sizeof(*ident_pt); i++ )
+            ident_pt[i] = ((i << 22) | _PAGE_PRESENT | _PAGE_RW | _PAGE_USER |
+                           _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_PSE);
+        munmap(ident_pt, PAGE_SIZE);
+        xc_hvm_param_set(xch, domid, HVM_PARAM_IDENT_PT,
+                         special_pfn(SPECIALPAGE_IDENT_PT) << PAGE_SHIFT);
+    }
 
     dom->console_pfn = special_pfn(SPECIALPAGE_CONSOLE);
     xc_clear_domain_page(dom->xch, dom->guest_domid, dom->console_pfn);
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 321a13e519..62b06b359c 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -256,6 +256,8 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
 
     libxl_defbool_setdefault(&b_info->disable_migrate, false);
 
+    libxl_defbool_setdefault(&b_info->disable_identpt, false);
+
     for (i = 0 ; i < b_info->num_iomem; i++)
         if (b_info->iomem[i].gfn == LIBXL_INVALID_GFN)
             b_info->iomem[i].gfn = b_info->iomem[i].start;
diff --git a/tools/libs/light/libxl_dom.c b/tools/libs/light/libxl_dom.c
index 01d989a976..a4b3fd808c 100644
--- a/tools/libs/light/libxl_dom.c
+++ b/tools/libs/light/libxl_dom.c
@@ -1126,6 +1126,8 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     dom->console_domid = state->console_domid;
     dom->xenstore_domid = state->store_domid;
 
+    dom->flags |= libxl_defbool_val(info->disable_identpt) ? XCFLAGS_NOIDENTPT : 0;
+
     rc = libxl__domain_device_construct_rdm(gc, d_config,
                                             info->u.hvm.rdm_mem_boundary_memkb*1024,
                                             dom);
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 9d3f05f399..02eb6a0b40 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -508,6 +508,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     ("exec_ssid_label", string),
     ("localtime",       libxl_defbool),
     ("disable_migrate", libxl_defbool),
+    ("disable_identpt", libxl_defbool),
     ("cpuid",           libxl_cpuid_policy_list),
     ("blkdev_start",    string),
 
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index cae8eb679c..ac4a6f2124 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -1531,6 +1531,8 @@ void parse_config_data(const char *config_source,
 
     xlu_cfg_get_defbool(config, "nomigrate", &b_info->disable_migrate, 0);
 
+    xlu_cfg_get_defbool(config, "noidentpt", &b_info->disable_identpt, 0);
+
     if (!xlu_cfg_get_long(config, "tsc_mode", &l, 1)) {
         const char *s = libxl_tsc_mode_to_string(l);
         fprintf(stderr, "WARNING: specifying \"tsc_mode\" as an integer is deprecated. "
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 16:52:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 16:52:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14349.35478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBA4-00086J-EH; Thu, 29 Oct 2020 16:52:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14349.35478; Thu, 29 Oct 2020 16:52:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBA4-00086C-BB; Thu, 29 Oct 2020 16:52:28 +0000
Received: by outflank-mailman (input) for mailman id 14349;
 Thu, 29 Oct 2020 16:52:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l+xb=EE=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kYBA3-000867-6X
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:52:27 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d14fa99-2859-4e0d-8132-154096b25fb1;
 Thu, 29 Oct 2020 16:52:24 +0000 (UTC)
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1603990343;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to; bh=BwThNbQsIy31JmqgarQ4wUhEb+UzdV725F922IlA/F4=;
	b=2FkT5gzjoyr+ccyJHCtb4QbofbUucrdb26XfUpdE+rdknukTxAg72GjxOZ5n41jAnKkYB/
	nOpJkfACWzeDMF9Gf28Vi+9nUZMt6JCdJF214EEVoZ1yZcMxqCTQCc46rL/vBXNOjbaYLf
	f+eXkQpHbUsq3GQII9gW2a6gC/yzaVGRhIDCe7QC8qE2NXhN+gqydOhJhoyJkPdfEZ6A3H
	521AeFSdgq1ad66a8zBLLVOR40MhLsMZO/mD5DBZk0Zoq9X1/sGoa8LUpdsu3PsEqtSaUm
	fd5d4g1zK5WhugYvwd0KaIDhMulezjPfh4Q5hU2OhBCUeaAlgqLTDOppSiEmsg==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1603990343;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to; bh=BwThNbQsIy31JmqgarQ4wUhEb+UzdV725F922IlA/F4=;
	b=PgpLqCB/8RWNfhP0PSdbieEqqU4kgNwBJgx4lowNSAaj/HnR/wMahgJJLuQXXytVePuRe9
	Nfyl8b5dIoAH1xCQ==
To: Arnd Bergmann <arnd@kernel.org>, Paolo Bonzini <pbonzini@redhat.com>
Cc: Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, the arch/x86 maintainers <x86@kernel.org>, Arnd Bergmann <arnd@arndb.de>, "K. Y. Srinivasan" <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>, Stephen Hemminger <sthemmin@microsoft.com>, "H. Peter Anvin" <hpa@zytor.com>, "Rafael J. Wysocki" <rjw@rjwysocki.net>, Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>, linux-hyperv@vger.kernel.org, "linux-kernel\@vger.kernel.org" <linux-kernel@vger.kernel.org>, kvm list <kvm@vger.kernel.org>, Platform Driver <platform-driver-x86@vger.kernel.org>, xen-devel <xen-devel@lists.xenproject.org>, "open list\:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
In-Reply-To: <CAK8P3a0e0YAkh_9S1ZG5FW3QozZnp1CwXUfWx9VHWkY=h+FVxw@mail.gmail.com>
Date: Thu, 29 Oct 2020 17:52:22 +0100
Message-ID: <871rhhotyx.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

Arnd,

On Thu, Oct 29 2020 at 10:51, Arnd Bergmann wrote:
> On Thu, Oct 29, 2020 at 8:04 AM Paolo Bonzini <pbonzini@redhat.com> wrote:
>> On 28/10/20 22:20, Arnd Bergmann wrote:
>> > Avoid this by renaming the global 'apic' variable to the more descriptive
>> > 'x86_system_apic'. It was originally called 'genapic', but both that
>> > and the current 'apic' seem to be a little overly generic for a global
>> > variable.
>>
>> The 'apic' affects only the current CPU, so one of 'x86_local_apic',
>> 'x86_lapic' or 'x86_apic' is probably preferable.
>
> Ok, I'll change it to x86_local_apic then, unless someone else has
> a preference between them.

x86_local_apic is fine with me.

> I think ideally there would be no global variable, with all accesses
> encapsulated in function calls, possibly using static_call() optimizations
> if any of them are performance critical.

There are a bunch, yes.

> It doesn't seem hard to do, but I'd rather leave that change to
> an x86 person ;-)

It's not hard, but I'm not really sure whether it buys much.

Can you please redo that against tip x86/apic? Much of what you are
touching got a major overhaul.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 16:58:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 16:58:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14357.35493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBFS-0008Nc-77; Thu, 29 Oct 2020 16:58:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14357.35493; Thu, 29 Oct 2020 16:58:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBFS-0008NV-3V; Thu, 29 Oct 2020 16:58:02 +0000
Received: by outflank-mailman (input) for mailman id 14357;
 Thu, 29 Oct 2020 16:58:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYBFR-0008NP-GI
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:58:01 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e284285-160f-409e-85d1-222b36185b23;
 Thu, 29 Oct 2020 16:58:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYBFP-0001a8-TY; Thu, 29 Oct 2020 16:57:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYBFP-0003Wg-Lz; Thu, 29 Oct 2020 16:57:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYBFP-0006QW-LV; Thu, 29 Oct 2020 16:57:59 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p5QXr4YKX+XUr7uXySSJNGvolq1COPSDV0IK0QEamNI=; b=ioKXyNV+kRbxRTqTCyl99HsTOo
	NqRaMuvCYkBk1wZcBsX5A0HZpuqSeapeq9CndBLs1NvxrgMO+Ey79C2YSm6wvjqZoA0gn0Pnb8hbL
	ny4rWocgUt0eFnaV4e0DRsWIGo/ccGvO+IcGtGIXq4j8MAqDxUAcyfPZjchvgABkwTdI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156297-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156297: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1fd1d4bafdf6f9f8fe5ca9b947f016a7aae92a74
X-Osstest-Versions-That:
    xen=16a20963b3209788f2c0d3a3eebb7d92f03f5883
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 16:57:59 +0000

flight 156297 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156297/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1fd1d4bafdf6f9f8fe5ca9b947f016a7aae92a74
baseline version:
 xen                  16a20963b3209788f2c0d3a3eebb7d92f03f5883

Last test of basis   156260  2020-10-27 18:01:29 Z    1 days
Testing same since   156297  2020-10-29 14:01:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   16a20963b3..1fd1d4bafd  1fd1d4bafdf6f9f8fe5ca9b947f016a7aae92a74 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 16:58:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 16:58:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14362.35508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBG9-0008Ts-Gv; Thu, 29 Oct 2020 16:58:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14362.35508; Thu, 29 Oct 2020 16:58:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBG9-0008Tl-Dg; Thu, 29 Oct 2020 16:58:45 +0000
Received: by outflank-mailman (input) for mailman id 14362;
 Thu, 29 Oct 2020 16:58:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tRsB=EE=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kYBG8-0008TU-5H
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:58:44 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::620])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1735df1-2f6f-4213-9d17-b528081c2819;
 Thu, 29 Oct 2020 16:58:38 +0000 (UTC)
Received: from MR2P264CA0009.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:1::21) by
 DBAPR08MB5845.eurprd08.prod.outlook.com (2603:10a6:10:1a5::6) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.19; Thu, 29 Oct 2020 16:58:36 +0000
Received: from VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:1:cafe::5c) by MR2P264CA0009.outlook.office365.com
 (2603:10a6:500:1::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Thu, 29 Oct 2020 16:58:36 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT054.mail.protection.outlook.com (10.152.19.64) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Thu, 29 Oct 2020 16:58:34 +0000
Received: ("Tessian outbound 68da730eaaba:v64");
 Thu, 29 Oct 2020 16:58:33 +0000
Received: from f9b6f271de3f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 186C52E8-2B1B-43CA-BD3E-A284BD05D655.1; 
 Thu, 29 Oct 2020 16:58:20 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f9b6f271de3f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 29 Oct 2020 16:58:20 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com (2603:10a6:20b:4e::31)
 by AM7PR08MB5463.eurprd08.prod.outlook.com (2603:10a6:20b:106::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Thu, 29 Oct
 2020 16:58:16 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a]) by AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a%4]) with mapi id 15.20.3499.027; Thu, 29 Oct 2020
 16:58:16 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wwmvJxEH7BLwjJAT9lkSUzA36bu5zsN2kgvd1zbkEBQ=;
 b=ySE0p2rM2plDooVE12svvQ2bEvGgyX7N2KGnfDggaIJAmGCeUBBezglev26zKe2ayPEtXJJzGkOklR0uavIfTXVVSHXaGHPaXXyV8pvvJF3YnhN7I2qa9lSpVm47tkEJeIcVMkzWydk3w7HddIer85NPcGy52hd8ziSrC5UUm+I=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: ff0a1e33f986aa30
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f6i/oJHPzCMS7eQB1Zu4IVsw/xWXhf/25q0IXuscezF+IQdXzJfyOgYZol4c/pcC/rTffaujT1BgT8URASX37pxWD9qBIXA9QGE1o7zCQRhQQ2sP9gY89LHqoCu22efpX9ILNrL9PZWJH997ZbUfXxCIj0h+0wJnBB1u0bTv7qMFB9Wde6YJR+gSBcqV/EtCQx7M6o9+DVTECJzJf8qctYOSgvhHAfcKIhL/KCiimmBO/OICqzxm5VUuE/CzXUNR+cbK7MiemC9gU5BLxHP3NVe+aYF78o86GaoZfg60KrQdC6I+NZjAQXS3mScv2iY2cO9YtHZGU/Fa83MrqxULFQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wwmvJxEH7BLwjJAT9lkSUzA36bu5zsN2kgvd1zbkEBQ=;
 b=AFiDhUFlMftqxHysEAXW7tPO8ViEerayQQ7UFm23dhXChkpvM9/FDaN8+pFKJRU1JnmAvcmFeTTatICi9x+KjGogPTeaVuUnXmt+X4Ubb02NpRDr2EutYxbv1hGG06WgM6k6Z8u+31t91FVZGBpvheVJuULJot/pXM5dY3g17ZUa0Gp7071GDG+CwFhNhv3F0LoR3OXzOVNMsnLf0rvbN3AOhvYk5h4mRELbNmlXyT5CPDmm8FykqZJQ6uR2POxZQeWXnMT8GBgIfRZlEzs+i6380g+GxAdjXm5bgUTvIIX6DHDBm8i0NRZL8Tw3lBrGJ+25PrFCkWuy5IGfly2fIA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v1 4/4] xen/pci: solve compilation error when memory
 paging is not enabled.
Thread-Topic: [PATCH v1 4/4] xen/pci: solve compilation error when memory
 paging is not enabled.
Thread-Index: AQHWq7wn1E1TPrUt3EyifJVqFrVmIams60gAgAA3GICAAa+LAA==
Date: Thu, 29 Oct 2020 16:58:16 +0000
Message-ID: <77495A6D-D1D8-4E62-B481-6AE59593CFAD@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <dc85bb73ca4b6ab8b4a2370f2db7700445fbc5f8.1603731279.git.rahul.singh@arm.com>
 <b345b0d4-8045-1d5d-b3c9-498311cfb1ac@suse.com>
 <14328157-D9C5-428E-BD1C-F4A841359185@arm.com>
In-Reply-To: <14328157-D9C5-428E-BD1C-F4A841359185@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 5c65c6ef-ee3a-4aa8-cf83-08d87c2bdf78
x-ms-traffictypediagnostic: AM7PR08MB5463:|DBAPR08MB5845:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBAPR08MB58455095B03B37C26D0F0D62FC140@DBAPR08MB5845.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:6108;OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 ft6gD+56MQFFsD/h/sGNj7HArlatslp7DpYoQ/T7QtRXh/p+lF5uegneaNzYLowPufIW/uNp6R2gQwA+ry1V6TyqQGeN7qWZLTUSiCH0MzuME4mSexvqVy+0XCEbSWRz8cBnz6jLy1aaowAVBWKDbZdhXKYv9rfLhYspTO8hZVpMKMaSJ9Jwhgp2qr6SkkzeXqVZPgglqPG8S8EyuggPdKGRpSO0TNfkrGJsxM0L6VJlKvxZs9gLOSltPA0IP1Sdlh6OdKolDur9yhb8WKgiQn+gMpKB3zM3MooB6WCBEy+Ksw/UxwkwN17q3W9K3coCoEcaZTdA+KAzmZI9liv9XA==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3496.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(376002)(366004)(346002)(39850400004)(316002)(478600001)(4326008)(55236004)(64756008)(66446008)(91956017)(66946007)(66476007)(76116006)(8936002)(66556008)(36756003)(8676002)(5660300002)(2906002)(86362001)(54906003)(2616005)(26005)(186003)(33656002)(71200400001)(6506007)(53546011)(6512007)(6916009)(6486002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 x9EO5SzVqYUb32D5v84ieUWT77/guDWkVFutNv+YEzNjD+PzJk8jDZxs20pkoZn7nRHRWsknjx7a+J+iCIHO3PFaZvsB5CklEMlk63G1QNehZ7gK7NIF/qN4bJJyQd9+Fv5RS80JsevhU3c6SK4TDrYSumqDqSJPftdbwlBpIRC1eIWHibeZMDBoqpKm4WVYNQkUBREnHdmedmMVKopwtsSJismnjdlTygixPGXsfhWWSbzQlsQgKv/xj1UTTMYUh3Z8afqb+QSCmpEThH0mOuqyHnu2l4sIsWHvYt0ptMf/ZIYDvhKWA/yCUGgQNF36S0uvjHu1RMdReGE/lYNz1CF9QsNPhdIX8Y8sOpy1XKoVOgniF7Cohc+9Uq2pT3ebORZDxaM9mIXiffh8Thzj8/rPa5L89SQ7UjKabHMWLzGncU7pAMwc/bm+C/QjsL+jSAz54CQsPEiPDbfUcfzdc1Ndraa0nsFCrEiXeBtx/iHg8mCrbHv8bgbUls5Pvo1mG/EZySsFLrlk2vOZohll5upoAx1hHYf6KzTnm/mu7oYv/X2HPo9DGvINO171WCz8tMPNe40WHbaoqIHqxHPOUg0t5cYNQLGyQp6UuJaaakkdy1/0nwcH03nCx1zQtl05b4yhKkTeNwbO/G/v6/wmFQ==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <59FEF0793B0AAD4DB7077E5EB4B6FA6F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM7PR08MB5463
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7f92f996-1562-455b-d55f-08d87c2bd4b2
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	EqD9Hf2aoT6nyIMLUb7oS37iyTAMSG1zPzv2OqxjTtU9JUp26H5upMQ4ZfH+FKPt/kUXIrqlzcDNWoAbYJa4fCMwNA7ONHcyVvE1vqmf+8qMlB8pNBkg03aDjv9Wdz0FM7WyKxjK83EgNrp6SOvTRzl+lvjn6jXMT8hfyxqvxAtS3E3xK/5WiOyyDzVyLrxYgp/wUQvaS718SSyn0AM8TTO2FeiZCJWEoyasLqTphAm5JXB6wS7DPxb+V4TQcU2gxO2mnsCo86UktOuF8WP02n9Q91W4FRz2nqhDipmcXXLTmVV1QXys3DJ1+m+yVZbFouy+0ZMnRbVUtCg6uUExrbiwpaGgREtbkB3zaSI4Y0bDU5qmJR76UjOSlofUjgEdRKwwQM4V9BOMis3GJhUyKw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(39860400002)(376002)(396003)(46966005)(54906003)(316002)(55236004)(8936002)(6486002)(478600001)(6862004)(356005)(70206006)(70586007)(36756003)(86362001)(5660300002)(2906002)(33656002)(47076004)(82310400003)(336012)(26005)(4326008)(82740400003)(8676002)(6512007)(81166007)(186003)(6506007)(53546011)(2616005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Oct 2020 16:58:34.5460
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c65c6ef-ee3a-4aa8-cf83-08d87c2bdf78
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5845

Hello Jan,

> On 28 Oct 2020, at 3:13 pm, Rahul Singh <Rahul.Singh@arm.com> wrote:
>
> Hello Jan,
>
>> On 28 Oct 2020, at 11:56 am, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 26.10.2020 18:17, Rahul Singh wrote:
>>> --- a/xen/drivers/passthrough/pci.c
>>> +++ b/xen/drivers/passthrough/pci.c
>>> @@ -1419,13 +1419,15 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>>>    if ( !is_iommu_enabled(d) )
>>>        return 0;
>>>
>>> -    /* Prevent device assign if mem paging or mem sharing have been
>>> +#if defined(CONFIG_HAS_MEM_PAGING) || defined(CONFIG_MEM_SHARING)
>>> +    /* Prevent device assign if mem paging or mem sharing have been
>>>     * enabled for this domain */
>>>    if ( d != dom_io &&
>>>         unlikely(mem_sharing_enabled(d) ||
>>>                  vm_event_check_ring(d->vm_event_paging) ||
>>>                  p2m_get_hostp2m(d)->global_logdirty) )
>>>        return -EXDEV;
>>> +#endif
>>
>> Besides this also disabling mem-sharing and log-dirty related
>> logic, I don't think the change is correct: Each item being
>> checked needs individually disabling depending on its associated
>> CONFIG_*. For this, perhaps you want to introduce something
>> like mem_paging_enabled(d), to avoid the need for #ifdef here?
>>=20
>=20
> Ok I will fix that in next version.=20

I just check and found out that mem-sharing , men-paging and log-dirty is x=
86 specific and not implemented for ARM.
Is that will be ok if I move above code to x86 specific directory and intro=
duce new function arch_pcidev_is_assignable() that will test if pcidev is a=
ssignable or not ?

>
>> Jan

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:00:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14373.35520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBHO-0000Ke-Se; Thu, 29 Oct 2020 17:00:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14373.35520; Thu, 29 Oct 2020 17:00:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBHO-0000KA-OO; Thu, 29 Oct 2020 17:00:02 +0000
Received: by outflank-mailman (input) for mailman id 14373;
 Thu, 29 Oct 2020 17:00:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2HK/=EE=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kYBHN-0000Bc-Jb
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:00:01 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a4719388-7c24-45c1-a71b-5d60c346accc;
 Thu, 29 Oct 2020 17:00:00 +0000 (UTC)
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-18-QcIn_t-uP3268TnP8ugTDA-1; Thu, 29 Oct 2020 12:59:58 -0400
Received: by mail-wr1-f71.google.com with SMTP id 2so1530694wrd.14
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 09:59:58 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id f17sm6577560wrm.27.2020.10.29.09.59.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 29 Oct 2020 09:59:56 -0700 (PDT)
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
To: Arvind Sankar <nivedita@alum.mit.edu>,
 David Laight <David.Laight@ACULAB.COM>
Cc: 'Arnd Bergmann' <arnd@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
 Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
 "x86@kernel.org" <x86@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>,
 Stephen Hemminger <sthemmin@microsoft.com>, "H. Peter Anvin"
 <hpa@zytor.com>, "Rafael J. Wysocki" <rjw@rjwysocki.net>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
 "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
 "platform-driver-x86@vger.kernel.org" <platform-driver-x86@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>
References: <20201028212417.3715575-1-arnd@kernel.org>
 <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com>
 <20201029165611.GA2557691@rani.riverdale.lan>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <93180c2d-268c-3c33-7c54-4221dfe0d7ad@redhat.com>
Date: Thu, 29 Oct 2020 17:59:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201029165611.GA2557691@rani.riverdale.lan>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29/10/20 17:56, Arvind Sankar wrote:
>> For those two just add:
>> 	struct apic *apic = x86_system_apic;
>> before all the assignments.
>> Less churn and much better code.
>>
> Why would it be better code?
> 

I think he means the compiler produces better code, because it won't
read the global variable repeatedly.  Not sure if that's true,(*) but I
think I do prefer that version if Arnd wants to do that tweak.

Paolo

(*) if it is, it might also be due to Linux using -fno-strict-aliasing



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:08:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:08:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14386.35535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBP3-0001Cn-Mq; Thu, 29 Oct 2020 17:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14386.35535; Thu, 29 Oct 2020 17:07:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBP3-0001Cb-Jn; Thu, 29 Oct 2020 17:07:57 +0000
Received: by outflank-mailman (input) for mailman id 14386;
 Thu, 29 Oct 2020 17:07:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYBP1-0001Bl-Mr
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:07:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae50ef18-0294-4033-ae69-3f325dc3fbfb;
 Thu, 29 Oct 2020 17:07:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CDD10ACDF;
 Thu, 29 Oct 2020 17:07:53 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 0/2] x86: PV shim vs grant table
Message-ID: <94c18ebf-632d-8a2c-220c-c31d6e37eb55@suse.com>
Date: Thu, 29 Oct 2020 18:07:54 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While Andrew reported a randconfig build failure, I started wondering
why we'd ever build in a way different from what had failed to build.
Fix the build and then switch the shim config file accordingly.

1: fix build of PV shim when !GRANT_TABLE
2: PV shim doesn't need GRANT_TABLE

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:10:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:10:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14392.35546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBR3-0001MU-7b; Thu, 29 Oct 2020 17:10:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14392.35546; Thu, 29 Oct 2020 17:10:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBR3-0001MN-4W; Thu, 29 Oct 2020 17:10:01 +0000
Received: by outflank-mailman (input) for mailman id 14392;
 Thu, 29 Oct 2020 17:10:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYBR2-0001MI-8x
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:10:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0639829f-d219-4018-aa96-5a8fc86ef1cb;
 Thu, 29 Oct 2020 17:09:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A5290ACDF;
 Thu, 29 Oct 2020 17:09:58 +0000 (UTC)
Subject: [PATCH 1/2] x86: fix build of PV shim when !GRANT_TABLE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <94c18ebf-632d-8a2c-220c-c31d6e37eb55@suse.com>
Message-ID: <333de96e-5f60-dc87-7668-e8027dcfbcdd@suse.com>
Date: Thu, 29 Oct 2020 18:09:59 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <94c18ebf-632d-8a2c-220c-c31d6e37eb55@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

To do its compat translation, shim code needs to include the compat
header. For this to work, this header first of all needs to be
generated.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -24,6 +24,7 @@ headers-$(CONFIG_X86)     += compat/arch
 headers-$(CONFIG_ARGO)    += compat/argo.h
 headers-$(CONFIG_PV)      += compat/callback.h
 headers-$(CONFIG_GRANT_TABLE) += compat/grant_table.h
+headers-$(CONFIG_PV_SHIM) += compat/grant_table.h
 headers-$(CONFIG_HVM)     += compat/hvm/dm_op.h
 headers-$(CONFIG_HVM)     += compat/hvm/hvm_op.h
 headers-$(CONFIG_HVM)     += compat/hvm/hvm_vcpu.h



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:10:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:10:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14395.35559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBRR-00026o-Gu; Thu, 29 Oct 2020 17:10:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14395.35559; Thu, 29 Oct 2020 17:10:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBRR-00026h-Cs; Thu, 29 Oct 2020 17:10:25 +0000
Received: by outflank-mailman (input) for mailman id 14395;
 Thu, 29 Oct 2020 17:10:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYBRP-00026X-VF
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:10:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 847c457d-17cf-4b40-bc43-e1ed0460cb81;
 Thu, 29 Oct 2020 17:10:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7A389ACF6;
 Thu, 29 Oct 2020 17:10:22 +0000 (UTC)
Subject: [PATCH 2/2] x86: PV shim doesn't need GRANT_TABLE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <94c18ebf-632d-8a2c-220c-c31d6e37eb55@suse.com>
Message-ID: <17fb04fb-99b6-7e20-0583-421ebb666ddc@suse.com>
Date: Thu, 29 Oct 2020 18:10:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <94c18ebf-632d-8a2c-220c-c31d6e37eb55@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The only reference into the code controlled by this option is from the
hypercall table, and that hypercall table entry gets overwritten.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/configs/pvshim_defconfig
+++ b/xen/arch/x86/configs/pvshim_defconfig
@@ -9,6 +9,7 @@ CONFIG_EXPERT=y
 CONFIG_SCHED_NULL=y
 # Disable features not used by the PV shim
 # CONFIG_XEN_SHSTK is not set
+# CONFIG_GRANT_TABLE is not set
 # CONFIG_HYPFS is not set
 # CONFIG_BIGMEM is not set
 # CONFIG_KEXEC is not set



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:16:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14402.35570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBX4-0002OZ-4y; Thu, 29 Oct 2020 17:16:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14402.35570; Thu, 29 Oct 2020 17:16:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBX4-0002OS-1t; Thu, 29 Oct 2020 17:16:14 +0000
Received: by outflank-mailman (input) for mailman id 14402;
 Thu, 29 Oct 2020 17:16:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=o7xP=EE=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYBX2-0002ON-T7
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:16:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e2c4d17-6125-47ba-baac-8ed461a1ffc8;
 Thu, 29 Oct 2020 17:16:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A88E9B30C;
 Thu, 29 Oct 2020 17:16:10 +0000 (UTC)
Subject: Re: [PATCH v1 4/4] xen/pci: solve compilation error when memory
 paging is not enabled.
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <dc85bb73ca4b6ab8b4a2370f2db7700445fbc5f8.1603731279.git.rahul.singh@arm.com>
 <b345b0d4-8045-1d5d-b3c9-498311cfb1ac@suse.com>
 <14328157-D9C5-428E-BD1C-F4A841359185@arm.com>
 <77495A6D-D1D8-4E62-B481-6AE59593CFAD@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e6b24649-3621-6403-6fd7-de5814922197@suse.com>
Date: Thu, 29 Oct 2020 18:16:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <77495A6D-D1D8-4E62-B481-6AE59593CFAD@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.10.2020 17:58, Rahul Singh wrote:
>> On 28 Oct 2020, at 3:13 pm, Rahul Singh <Rahul.Singh@arm.com> wrote:
>>> On 28 Oct 2020, at 11:56 am, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 26.10.2020 18:17, Rahul Singh wrote:
>>>> --- a/xen/drivers/passthrough/pci.c
>>>> +++ b/xen/drivers/passthrough/pci.c
>>>> @@ -1419,13 +1419,15 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>>>>    if ( !is_iommu_enabled(d) )
>>>>        return 0;
>>>>
>>>> -    /* Prevent device assign if mem paging or mem sharing have been 
>>>> +#if defined(CONFIG_HAS_MEM_PAGING) || defined(CONFIG_MEM_SHARING)
>>>> +    /* Prevent device assign if mem paging or mem sharing have been
>>>>     * enabled for this domain */
>>>>    if ( d != dom_io &&
>>>>         unlikely(mem_sharing_enabled(d) ||
>>>>                  vm_event_check_ring(d->vm_event_paging) ||
>>>>                  p2m_get_hostp2m(d)->global_logdirty) )
>>>>        return -EXDEV;
>>>> +#endif
>>>
>>> Besides this also disabling mem-sharing and log-dirty related
>>> logic, I don't think the change is correct: Each item being
>>> checked needs individually disabling depending on its associated
>>> CONFIG_*. For this, perhaps you want to introduce something
>>> like mem_paging_enabled(d), to avoid the need for #ifdef here?
>>>
>>
>> Ok I will fix that in next version. 
> 
> I just checked and found that mem-sharing, mem-paging and log-dirty are x86 specific and not implemented for Arm.
> Would it be OK if I move the above code to an x86-specific directory and introduce a new function arch_pcidev_is_assignable() that tests whether a pcidev is assignable?

As an immediate workaround - perhaps (long term the individual
pieces should still be individually checked here, as they're
not inherently x86-specific). Since there's no device property
involved here, the suggested name looks misleading. Perhaps
arch_iommu_usable(d)?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:16:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:16:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14355.35583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBXM-0002Sn-Dl; Thu, 29 Oct 2020 17:16:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14355.35583; Thu, 29 Oct 2020 17:16:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBXM-0002Sf-AB; Thu, 29 Oct 2020 17:16:32 +0000
Received: by outflank-mailman (input) for mailman id 14355;
 Thu, 29 Oct 2020 16:56:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFbr=EE=gmail.com=niveditas98@srs-us1.protection.inumbo.net>)
 id 1kYBDl-0008Js-CV
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:56:17 +0000
Received: from mail-il1-x144.google.com (unknown [2607:f8b0:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e661ee0-febb-40c4-9359-9d5e833adb62;
 Thu, 29 Oct 2020 16:56:15 +0000 (UTC)
Received: by mail-il1-x144.google.com with SMTP id q1so3750861ilt.6
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 09:56:15 -0700 (PDT)
Received: from rani.riverdale.lan ([2001:470:1f07:5f3::b55f])
 by smtp.gmail.com with ESMTPSA id r12sm2584639ilm.28.2020.10.29.09.56.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 29 Oct 2020 09:56:14 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=iFbr=EE=gmail.com=niveditas98@srs-us1.protection.inumbo.net>)
	id 1kYBDl-0008Js-CV
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 16:56:17 +0000
X-Inumbo-ID: 7e661ee0-febb-40c4-9359-9d5e833adb62
Received: from mail-il1-x144.google.com (unknown [2607:f8b0:4864:20::144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 7e661ee0-febb-40c4-9359-9d5e833adb62;
	Thu, 29 Oct 2020 16:56:15 +0000 (UTC)
Received: by mail-il1-x144.google.com with SMTP id q1so3750861ilt.6
        for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 09:56:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:date:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=46u7k16lmtR7tMf5v+7fbJj/BQNg8vF1V9HFBp6hYFg=;
        b=TO+GpLM6LycPQoMwryo9xUQRLiEvWYgbGjKPlPQ6JQhAIVJlLxzOsJaEhiiNlylpPU
         rGt9Uq2jE226wycKVWSpG3npnQmZoU8s5A0MHGd4XSgMb8BejsiG/VnzNhIPy8QMD2Z2
         F7IuQvFLeFbghxG3CEhpvXe2WzK8pXgSdiSab7Hx7KATccJNuio5nWAfuBEIbjKsp7Jl
         57A4/cWqICjr/T8wosnkqI/KIZt/A6xx/E5Z8Ir2VJXwueycsYLO4vaFWXcEsKYYZcWb
         Jn/3mLxoII+jzekPaOQFJui7oYC1wE6ekbWbnMCmXPVPkacWmJDSEidjIcgzxSpyLhaA
         1ADA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:date:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=46u7k16lmtR7tMf5v+7fbJj/BQNg8vF1V9HFBp6hYFg=;
        b=snBJZKiUQ4dTvYdWXyl357GUfYGRIFPG7eST1O+WZ6D74eGEb3QSrcGVf+hvhjjoO6
         KF66OK3GF6URwK4vv2N06NJT2N00G0QRfKgcs5XL/brvfiYr93MoDEQsPgkp1Q1ClnVl
         IwrxKwBd5S9P3Y4XBrCdk9eIpIse8ZIOBVFrzMxtf70vcViHzgHUu5y11GH/CGsIPDtB
         GwV7PtcVZjS3XlLV+IHGeIn9xwLqKBe86u9710RWRQwpPLbfel51gKw3izBUoUFZYaN1
         13iEkgo2QFclP6TgqhFh3YjjqSOSaEfs8p2aFurcc6b8hRO31jxncEuC3UlzpX6+qXQS
         s23Q==
X-Gm-Message-State: AOAM533Yz0JaJpKcxS8VTzG0GqEXbOzeYTTFnFrOD0u8pk/yPmjCesJF
	J0ftWt7j5sDqDrPcbAMWznU=
X-Google-Smtp-Source: ABdhPJw9DVYgoIwqL5pROZm1JBXSItTI0Ey8t6jCLCnyLj0KmL714RaHercqOegkYMBCGEVvAtY/7g==
X-Received: by 2002:a05:6e02:2cc:: with SMTP id v12mr4099654ilr.115.1603990575376;
        Thu, 29 Oct 2020 09:56:15 -0700 (PDT)
Sender: Arvind Sankar <niveditas98@gmail.com>
From: Arvind Sankar <nivedita@alum.mit.edu>
X-Google-Original-From: Arvind Sankar <arvind@rani.riverdale.lan>
Date: Thu, 29 Oct 2020 12:56:11 -0400
To: David Laight <David.Laight@ACULAB.COM>
Cc: 'Arnd Bergmann' <arnd@kernel.org>, Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"x86@kernel.org" <x86@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"platform-driver-x86@vger.kernel.org" <platform-driver-x86@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Message-ID: <20201029165611.GA2557691@rani.riverdale.lan>
References: <20201028212417.3715575-1-arnd@kernel.org>
 <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com>

On Thu, Oct 29, 2020 at 03:05:31PM +0000, David Laight wrote:
> From: Arnd Bergmann
> > Sent: 28 October 2020 21:21
> > 
> > From: Arnd Bergmann <arnd@arndb.de>
> > 
> > There are hundreds of warnings in a W=2 build about a local
> > variable shadowing the global 'apic' definition:
> > 
> > arch/x86/kvm/lapic.h:149:65: warning: declaration of 'apic' shadows a global declaration [-Wshadow]
> > 
> > Avoid this by renaming the global 'apic' variable to the more descriptive
> > 'x86_system_apic'. It was originally called 'genapic', but both that
> > and the current 'apic' seem to be a little overly generic for a global
> > variable.
> > 
> > Fixes: c48f14966cc4 ("KVM: inline kvm_apic_present() and kvm_lapic_enabled()")
> > Fixes: c8d46cf06dc2 ("x86: rename 'genapic' to 'apic'")
> > Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> > ---
> > v2: rename the global instead of the local variable in the header
> ...
> > diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
> > index 284e73661a18..33e2dc78ca11 100644
> > --- a/arch/x86/hyperv/hv_apic.c
> > +++ b/arch/x86/hyperv/hv_apic.c
> > @@ -259,14 +259,14 @@ void __init hv_apic_init(void)
> >  		/*
> >  		 * Set the IPI entry points.
> >  		 */
> > -		orig_apic = *apic;
> > -
> > -		apic->send_IPI = hv_send_ipi;
> > -		apic->send_IPI_mask = hv_send_ipi_mask;
> > -		apic->send_IPI_mask_allbutself = hv_send_ipi_mask_allbutself;
> > -		apic->send_IPI_allbutself = hv_send_ipi_allbutself;
> > -		apic->send_IPI_all = hv_send_ipi_all;
> > -		apic->send_IPI_self = hv_send_ipi_self;
> > +		orig_apic = *x86_system_apic;
> > +
> > +		x86_system_apic->send_IPI = hv_send_ipi;
> > +		x86_system_apic->send_IPI_mask = hv_send_ipi_mask;
> > +		x86_system_apic->send_IPI_mask_allbutself = hv_send_ipi_mask_allbutself;
> > +		x86_system_apic->send_IPI_allbutself = hv_send_ipi_allbutself;
> > +		x86_system_apic->send_IPI_all = hv_send_ipi_all;
> > +		x86_system_apic->send_IPI_self = hv_send_ipi_self;
> >  	}
> > 
> >  	if (ms_hyperv.hints & HV_X64_APIC_ACCESS_RECOMMENDED) {
> > @@ -285,10 +285,10 @@ void __init hv_apic_init(void)
> >  		 */
> >  		apic_set_eoi_write(hv_apic_eoi_write);
> >  		if (!x2apic_enabled()) {
> > -			apic->read      = hv_apic_read;
> > -			apic->write     = hv_apic_write;
> > -			apic->icr_write = hv_apic_icr_write;
> > -			apic->icr_read  = hv_apic_icr_read;
> > +			x86_system_apic->read      = hv_apic_read;
> > +			x86_system_apic->write     = hv_apic_write;
> > +			x86_system_apic->icr_write = hv_apic_icr_write;
> > +			x86_system_apic->icr_read  = hv_apic_icr_read;
> >  		}
> 
> For those two just add:
> 	struct apic *apic = x86_system_apic;
> before all the assignments.
> Less churn and much better code.
> 

Why would it be better code?
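
For reference, the shape being suggested is just a local alias in front of
the assignments, roughly like this (a toy sketch with stand-in types, not
the actual arch/x86 definitions; note the local can only be named 'apic'
once the global has been renamed, which is the point of the patch):

```c
#include <assert.h>

/* Toy stand-ins for the kernel types; all names here are illustrative. */
struct apic_ops {
    void (*send_IPI)(int cpu);
    void (*send_IPI_self)(int vector);
};

static void hv_send_ipi(int cpu)         { (void)cpu; }
static void hv_send_ipi_self(int vector) { (void)vector; }

static struct apic_ops boot_apic;
static struct apic_ops *x86_system_apic = &boot_apic;

/* The suggested pattern: one local alias, then the short assignments;
 * only the declaration mentions the renamed global. */
static void hv_apic_init_sketch(void)
{
    struct apic_ops *apic = x86_system_apic;

    apic->send_IPI      = hv_send_ipi;
    apic->send_IPI_self = hv_send_ipi_self;
}
```

Whether that actually generates different code from spelling out
x86_system_apic in each assignment is the question; a recent compiler
ought to cache the load of the global either way.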


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14413.35600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBb0-0003Mu-8Z; Thu, 29 Oct 2020 17:20:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14413.35600; Thu, 29 Oct 2020 17:20:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBb0-0003Mk-3c; Thu, 29 Oct 2020 17:20:18 +0000
Received: by outflank-mailman (input) for mailman id 14413;
 Thu, 29 Oct 2020 17:20:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBay-0003MD-No
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:16 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.20])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aab968a0-fda7-4de8-b9f9-2d4afecf3e0d;
 Thu, 29 Oct 2020 17:20:15 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THK83f7
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:08 +0100 (CET)
X-Inumbo-ID: aab968a0-fda7-4de8-b9f9-2d4afecf3e0d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992014;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=8EOjWvQk239Wp/aWNifZAKC+Sjf+9ZihUrmLiKkJAAc=;
	b=ebHuVvb9I0JhokgnvvB108AaZ3uzxamS+m3vMJQB3RA5DsHUOfuauo7wxMi2aS4Mm+
	u8NrCVYpYNNl4wQBZQKlbC26m2VQom6iUaRc7o2x1peL2pOYkLjPiOYcXWPXcVSvm32P
	qbidfvNSbia/ylUA5ZHdKh/XREycotHAaCU0gU6e3Qldw205+cS9HV+H8gdUzyXflR3v
	lsFl48ckqEZFQJJxQcMpyKxztdmZksgu7flGhp29CMSn5dl70lZcaPn3IXQMPv2hRDE7
	ga0KDPFDWzMqZ/Ly56xc+N7lrn+eZr46INlYT+3+jZGrz6z/eaN6Yv3LvbY8rVgAXSKt
	qtHQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 02/23] tools: add xc_is_known_page_type to libxenctrl
Date: Thu, 29 Oct 2020 18:19:42 +0100
Message-Id: <20201029172004.17219-3-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Users of xc_get_pfn_type_batch may want to sanity-check the data
returned by Xen. Add a simple helper for this purpose.
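
A caller that just received a batch from xc_get_pfn_type_batch would filter
with the helper along these lines. The sketch below is self-contained: the
xen_pfn_t typedef and the PFINFO values are local assumptions mirroring
xen/include/public/domctl.h, not the authoritative definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t xen_pfn_t;   /* stand-in for the real typedef */

/* Assumed values, mirroring xen/include/public/domctl.h. */
#define XEN_DOMCTL_PFINFO_NOTAB   (0x0ULL << 28)
#define XEN_DOMCTL_PFINFO_L1TAB   (0x1ULL << 28)
#define XEN_DOMCTL_PFINFO_L2TAB   (0x2ULL << 28)
#define XEN_DOMCTL_PFINFO_L3TAB   (0x3ULL << 28)
#define XEN_DOMCTL_PFINFO_L4TAB   (0x4ULL << 28)
#define XEN_DOMCTL_PFINFO_LPINTAB (0x8ULL << 28)
#define XEN_DOMCTL_PFINFO_BROKEN  (0xdULL << 28)
#define XEN_DOMCTL_PFINFO_XALLOC  (0xeULL << 28)
#define XEN_DOMCTL_PFINFO_XTAB    (0xfULL << 28)

/* Same accept list as the patch: plain pages, L1..L4 tables
 * (optionally pinned), plus the three no-data types. */
static bool xc_is_known_page_type(xen_pfn_t type)
{
    switch ( type )
    {
    case XEN_DOMCTL_PFINFO_NOTAB:
    case XEN_DOMCTL_PFINFO_L1TAB:
    case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L2TAB:
    case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L3TAB:
    case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_L4TAB:
    case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
    case XEN_DOMCTL_PFINFO_XTAB:
    case XEN_DOMCTL_PFINFO_XALLOC:
    case XEN_DOMCTL_PFINFO_BROKEN:
        return true;
    default:
        return false;
    }
}
```

Anything outside the case list is rejected, including LPINTAB without a
table type and the formerly range-checked invalid types.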

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/ctrl/xc_private.h | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index 5d2c7274fb..afb08aafe1 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -421,6 +421,39 @@ void *xc_map_foreign_ranges(xc_interface *xch, uint32_t dom,
 int xc_get_pfn_type_batch(xc_interface *xch, uint32_t dom,
                           unsigned int num, xen_pfn_t *);
 
+/* Sanity check for types returned by Xen */
+static inline bool xc_is_known_page_type(xen_pfn_t type)
+{
+    bool ret;
+
+    switch ( type )
+    {
+    case XEN_DOMCTL_PFINFO_NOTAB:
+
+    case XEN_DOMCTL_PFINFO_L1TAB:
+    case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L2TAB:
+    case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L3TAB:
+    case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_L4TAB:
+    case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
+
+    case XEN_DOMCTL_PFINFO_XTAB:
+    case XEN_DOMCTL_PFINFO_XALLOC:
+    case XEN_DOMCTL_PFINFO_BROKEN:
+        ret = true;
+        break;
+    default:
+        ret = false;
+        break;
+    }
+    return ret;
+}
+
 void bitmap_64_to_byte(uint8_t *bp, const uint64_t *lp, int nbits);
 void bitmap_byte_to_64(uint64_t *lp, const uint8_t *bp, int nbits);
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14412.35595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBaz-0003MT-Us; Thu, 29 Oct 2020 17:20:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14412.35595; Thu, 29 Oct 2020 17:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBaz-0003MM-Ro; Thu, 29 Oct 2020 17:20:17 +0000
Received: by outflank-mailman (input) for mailman id 14412;
 Thu, 29 Oct 2020 17:20:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBay-0003MC-Md
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:16 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 548caff4-7b70-4619-80c3-5022562d6d8c;
 Thu, 29 Oct 2020 17:20:15 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THK83f6
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:08 +0100 (CET)
X-Inumbo-ID: 548caff4-7b70-4619-80c3-5022562d6d8c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992014;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=AJouOKw1M1RuOZDP/M7JpmShmV/UDPt9XmfG61mTqBM=;
	b=b9EoRL0cCnT0G6ceRd2tkuBc8C94KzJHGVcRjuv2TylBtex+NqiUNmZmxZeLdAmS+I
	HLxXRFQ+GIhEA3CuJh76BvDug7R4wEazub58uzPNLJ3XL6c1fAGEAhPeaIWak+KZXJPr
	1uruhvsDVj//YD1r/iOtsJIAhbV4cPcepqNLQmkS2yc6xGYohj+y4KtHYepfSyahTC8L
	9ACpw6en5GVPoWxjNvbAbcKlj/KZHo0IrF71LJomAgTD0Vt5YiqsMep/8XiWy45ys3pB
	Bx8PS9G0bWzx60q3HqhTMVwZmNbMyQJ/DzCtbhVntscZPQdHFBajXslo5LpFD1wJSMEQ
	iDfQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1 01/23] tools: add readv_exact to libxenctrl
Date: Thu, 29 Oct 2020 18:19:41 +0100
Message-Id: <20201029172004.17219-2-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Read a batch of iovecs.

In case of a short read, the remainder of the affected iov is finished
with read_exact.
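
The contract is the same as read_exact: either every iov is filled
completely or the call fails. A minimal self-contained sketch of the
MiniOS-style fallback (one read loop per iov, rather than the batched
readv() path the generic version uses):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Fill the buffer completely, retrying on EINTR; -1 on error or EOF. */
static int read_exact(int fd, void *data, size_t size)
{
    size_t off = 0;

    while ( off < size )
    {
        ssize_t len = read(fd, (char *)data + off, size - off);

        if ( len == -1 && errno == EINTR )
            continue;
        if ( len <= 0 )
            return -1;
        off += len;
    }
    return 0;
}

/* Simplified readv_exact: finish each iov in turn with read_exact. */
static int readv_exact(int fd, const struct iovec *iov, int iovcnt)
{
    int i;

    for ( i = 0; i < iovcnt; ++i )
        if ( read_exact(fd, iov[i].iov_base, iov[i].iov_len) )
            return -1;
    return 0;
}
```

A caller can then scatter one stream read into several buffers, e.g. a
fixed header iov followed by a payload iov, without caring where the
kernel happens to split the read.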

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/ctrl/xc_private.c | 54 +++++++++++++++++++++++++++++++++++-
 tools/libs/ctrl/xc_private.h |  1 +
 2 files changed, 54 insertions(+), 1 deletion(-)

diff --git a/tools/libs/ctrl/xc_private.c b/tools/libs/ctrl/xc_private.c
index d94f846686..a86ed213a5 100644
--- a/tools/libs/ctrl/xc_private.c
+++ b/tools/libs/ctrl/xc_private.c
@@ -659,8 +659,23 @@ int write_exact(int fd, const void *data, size_t size)
 
 #if defined(__MINIOS__)
 /*
- * MiniOS's libc doesn't know about writev(). Implement it as multiple write()s.
+ * MiniOS's libc doesn't know about readv/writev().
+ * Implement it as multiple read/write()s.
  */
+int readv_exact(int fd, const struct iovec *iov, int iovcnt)
+{
+    int rc, i;
+
+    for ( i = 0; i < iovcnt; ++i )
+    {
+        rc = read_exact(fd, iov[i].iov_base, iov[i].iov_len);
+        if ( rc )
+            return rc;
+    }
+
+    return 0;
+}
+
 int writev_exact(int fd, const struct iovec *iov, int iovcnt)
 {
     int rc, i;
@@ -675,6 +689,44 @@ int writev_exact(int fd, const struct iovec *iov, int iovcnt)
     return 0;
 }
 #else
+int readv_exact(int fd, const struct iovec *iov, int iovcnt)
+{
+    int rc = 0, idx = 0;
+    ssize_t len;
+
+    while ( idx < iovcnt )
+    {
+        len = readv(fd, &iov[idx], min(iovcnt - idx, IOV_MAX));
+        if ( len == -1 && errno == EINTR )
+            continue;
+        if ( len <= 0 )
+        {
+            rc = -1;
+            goto out;
+        }
+        while ( len > 0 && idx < iovcnt )
+        {
+            if ( len >= iov[idx].iov_len )
+            {
+                len -= iov[idx].iov_len;
+            }
+            else
+            {
+                void *p = iov[idx].iov_base + len;
+                size_t l = iov[idx].iov_len - len;
+
+                rc = read_exact(fd, p, l);
+                if ( rc )
+                    goto out;
+                len = 0;
+            }
+            idx++;
+        }
+    }
+out:
+    return rc;
+}
+
 int writev_exact(int fd, const struct iovec *iov, int iovcnt)
 {
     struct iovec *local_iov = NULL;
diff --git a/tools/libs/ctrl/xc_private.h b/tools/libs/ctrl/xc_private.h
index f0b5f83ac8..5d2c7274fb 100644
--- a/tools/libs/ctrl/xc_private.h
+++ b/tools/libs/ctrl/xc_private.h
@@ -441,6 +441,7 @@ int xc_flush_mmu_updates(xc_interface *xch, struct xc_mmu *mmu);
 
 /* Return 0 on success; -1 on error setting errno. */
 int read_exact(int fd, void *data, size_t size); /* EOF => -1, errno=0 */
+int readv_exact(int fd, const struct iovec *iov, int iovcnt);
 int write_exact(int fd, const void *data, size_t size);
 int writev_exact(int fd, const struct iovec *iov, int iovcnt);
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14414.35619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBb5-0003Qw-Gw; Thu, 29 Oct 2020 17:20:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14414.35619; Thu, 29 Oct 2020 17:20:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBb5-0003Qp-DC; Thu, 29 Oct 2020 17:20:23 +0000
Received: by outflank-mailman (input) for mailman id 14414;
 Thu, 29 Oct 2020 17:20:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBb3-0003MC-L3
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:21 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.167])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f2ded997-a0ba-499a-9121-66cb54d56275;
 Thu, 29 Oct 2020 17:20:16 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THK93f8
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:09 +0100 (CET)
X-Inumbo-ID: f2ded997-a0ba-499a-9121-66cb54d56275
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992015;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=nOFtdNXlk5HjYyxO1mkGMNqJ+Z1za/MxmXOScJ0HbPg=;
	b=K1on7XABhOD58zqU8Not3Jy8NtNKHxa7+wF/7CEvBGy0Xy50gVK77PKerWSLzU+So9
	mHzWJMuN/kN5RrPggT/TH/mZQ78+ern0VAvdGZWDY5IV2fvDrTAiXfwrrrdlvJlujlkQ
	eT8gpkyVPxNJNNRtQ+oMeJ36NHmS8EdXyNfwXvOMgR62w+fQza04ErpoIRIIAk1QJRsl
	pSXi1Id9jJK4NCWJkSJ6MXk+OpskfrVo8RhyfN0NObP4+mmVnQ1nxapFNmnWLcgw/FM4
	ehXSuTwR0ESHiKYrC3itFRjbE8b1pCFsQbUN1vauqXrs5pgQ1LUJTUosAD3b7q/8FUrR
	fr3A==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 03/23] tools: use xc_is_known_page_type
Date: Thu, 29 Oct 2020 18:19:43 +0100
Message-Id: <20201029172004.17219-4-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Verify the pfn type on the sending side, and also verify the incoming
batch of pfns on the receiving side.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_restore.c | 3 +--
 tools/libs/guest/xg_sr_save.c    | 6 ++++++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index b57a787519..f1c3169229 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -406,8 +406,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         }
 
         type = (pages->pfn[i] & PAGE_DATA_TYPE_MASK) >> 32;
-        if ( ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) >= 5) &&
-             ((type >> XEN_DOMCTL_PFINFO_LTAB_SHIFT) <= 8) )
+        if ( !xc_is_known_page_type(type) )
         {
             ERROR("Invalid type %#"PRIx32" for pfn %#"PRIpfn" (index %u)",
                   type, pfn, i);
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 2ba7c3200c..044d0ae3aa 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -147,6 +147,12 @@ static int write_batch(struct xc_sr_context *ctx)
 
     for ( i = 0; i < nr_pfns; ++i )
     {
+        if ( !xc_is_known_page_type(types[i]) )
+        {
+            ERROR("Unknown type %#"PRIpfn" for pfn %#"PRIpfn, types[i], mfns[i]);
+            goto err;
+        }
+
         switch ( types[i] )
         {
         case XEN_DOMCTL_PFINFO_BROKEN:


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14415.35627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBb6-0003Rm-1J; Thu, 29 Oct 2020 17:20:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14415.35627; Thu, 29 Oct 2020 17:20:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBb5-0003RS-Os; Thu, 29 Oct 2020 17:20:23 +0000
Received: by outflank-mailman (input) for mailman id 14415;
 Thu, 29 Oct 2020 17:20:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBb3-0003MD-MS
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:21 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2754cbc-b397-40ad-98e1-d30c24dfef73;
 Thu, 29 Oct 2020 17:20:17 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THK93f9
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:09 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYBb3-0003MD-MS
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:21 +0000
X-Inumbo-ID: a2754cbc-b397-40ad-98e1-d30c24dfef73
Received: from mo4-p01-ob.smtp.rzone.de (unknown [85.215.255.53])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a2754cbc-b397-40ad-98e1-d30c24dfef73;
	Thu, 29 Oct 2020 17:20:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992016;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=yv4/vNzbLjhhKaj2O8jZKB9GBNc7GvmmLKVV7dABzFU=;
	b=fd+jqgrBJT5aK88Qi2z21JZDs9LRityC/r22lmSVsVIZ6FbWPBDATcwEeIje6ECCll
	Vxq6lfCa64Vg2/JdeQEojiDDgKlADOQK/d9CZ8wtrX0vHclBRXwmXoQ8vFarwhKUdr1A
	QcvAoon4MJHY4kujwQlf7YdJmXVqpmeFnMTuiR5jKNInz1dT+JmafrMnGqOSN1Y01qrX
	YUefPoCd3tMTCAH95ngRFNgmmdDem0HeqZ0RvrPv7u6sMLEZ7/EBChhJcyiPVmgi8nhe
	dUcMGPMQxBFsd/xw6tWz3XA1a3PidbsFisvtkxhFn+UwjvL40reNj6JUXG4oOtzjLspa
	LwhQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THK93f9
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:09 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 04/23] tools: unify type checking for data pfns in migration stream
Date: Thu, 29 Oct 2020 18:19:44 +0100
Message-Id: <20201029172004.17219-5-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a helper which decides if a given pfn type has data
for the migration stream.

No change in behavior intended.
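
As a self-contained illustration of the predicate (constants defined
locally here; the values are assumptions mirroring
xen/include/public/domctl.h): only XTAB, XALLOC and BROKEN pages carry no
page data in the stream, every other known type does.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed values, mirroring xen/include/public/domctl.h. */
#define XEN_DOMCTL_PFINFO_NOTAB   (0x0U << 28)
#define XEN_DOMCTL_PFINFO_L1TAB   (0x1U << 28)
#define XEN_DOMCTL_PFINFO_BROKEN  (0xdU << 28)
#define XEN_DOMCTL_PFINFO_XALLOC  (0xeU << 28)
#define XEN_DOMCTL_PFINFO_XTAB    (0xfU << 28)

/* True when a page of data accompanies this pfn type in the stream. */
static bool page_type_has_stream_data(uint32_t type)
{
    switch ( type )
    {
    case XEN_DOMCTL_PFINFO_XTAB:
    case XEN_DOMCTL_PFINFO_XALLOC:
    case XEN_DOMCTL_PFINFO_BROKEN:
        return false;
    default:
        return true;
    }
}
```

Save and restore can then share this single answer instead of each
open-coding the same three-type switch.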

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  | 17 ++++++++++++++++
 tools/libs/guest/xg_sr_restore.c | 34 +++++---------------------------
 tools/libs/guest/xg_sr_save.c    | 14 ++-----------
 3 files changed, 24 insertions(+), 41 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index cc3ad1c394..70e328e951 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -455,6 +455,23 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 /* Handle a STATIC_DATA_END record. */
 int handle_static_data_end(struct xc_sr_context *ctx);
 
+static inline bool page_type_has_stream_data(uint32_t type)
+{
+    bool ret;
+
+    switch ( type )
+    {
+    case XEN_DOMCTL_PFINFO_XTAB:
+    case XEN_DOMCTL_PFINFO_XALLOC:
+    case XEN_DOMCTL_PFINFO_BROKEN:
+        ret = false;
+        break;
+    default:
+        ret = true;
+        break;
+    }
+    return ret;
+}
 #endif
 /*
  * Local variables:
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index f1c3169229..0332ae9f32 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -152,9 +152,8 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 
     for ( i = 0; i < count; ++i )
     {
-        if ( (!types || (types &&
-                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
-                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
+        if ( (!types ||
+              page_type_has_stream_data(types[i])) &&
              !pfn_is_populated(ctx, original_pfns[i]) )
         {
             rc = pfn_set_populated(ctx, original_pfns[i]);
@@ -233,25 +232,8 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
     {
         ctx->restore.ops.set_page_type(ctx, pfns[i], types[i]);
 
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_NOTAB:
-
-        case XEN_DOMCTL_PFINFO_L1TAB:
-        case XEN_DOMCTL_PFINFO_L1TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L2TAB:
-        case XEN_DOMCTL_PFINFO_L2TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L3TAB:
-        case XEN_DOMCTL_PFINFO_L3TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
-        case XEN_DOMCTL_PFINFO_L4TAB:
-        case XEN_DOMCTL_PFINFO_L4TAB | XEN_DOMCTL_PFINFO_LPINTAB:
-
+        if ( page_type_has_stream_data(types[i]) )
             mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
-            break;
-        }
     }
 
     /* Nothing to do? */
@@ -271,14 +253,8 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
 
     for ( i = 0, j = 0; i < count; ++i )
     {
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_XTAB:
-        case XEN_DOMCTL_PFINFO_BROKEN:
-        case XEN_DOMCTL_PFINFO_XALLOC:
-            /* No page data to deal with. */
+        if ( !page_type_has_stream_data(types[i]) )
             continue;
-        }
 
         if ( map_errs[j] )
         {
@@ -413,7 +389,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
             goto err;
         }
 
-        if ( type < XEN_DOMCTL_PFINFO_BROKEN )
+        if ( page_type_has_stream_data(type) )
             /* NOTAB and all L1 through L4 tables (including pinned) should
              * have a page worth of data in the record. */
             pages_of_data++;
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 044d0ae3aa..0546d3d9e6 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -153,13 +153,8 @@ static int write_batch(struct xc_sr_context *ctx)
             goto err;
         }
 
-        switch ( types[i] )
-        {
-        case XEN_DOMCTL_PFINFO_BROKEN:
-        case XEN_DOMCTL_PFINFO_XALLOC:
-        case XEN_DOMCTL_PFINFO_XTAB:
+        if ( !page_type_has_stream_data(types[i]) )
             continue;
-        }
 
         mfns[nr_pages++] = mfns[i];
     }
@@ -177,13 +172,8 @@ static int write_batch(struct xc_sr_context *ctx)
 
         for ( i = 0, p = 0; i < nr_pfns; ++i )
         {
-            switch ( types[i] )
-            {
-            case XEN_DOMCTL_PFINFO_BROKEN:
-            case XEN_DOMCTL_PFINFO_XALLOC:
-            case XEN_DOMCTL_PFINFO_XTAB:
+            if ( !page_type_has_stream_data(types[i]) )
                 continue;
-            }
 
             if ( errors[p] )
             {


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14416.35643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbA-0003YI-Ec; Thu, 29 Oct 2020 17:20:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14416.35643; Thu, 29 Oct 2020 17:20:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbA-0003Y9-AY; Thu, 29 Oct 2020 17:20:28 +0000
Received: by outflank-mailman (input) for mailman id 14416;
 Thu, 29 Oct 2020 17:20:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBb8-0003MC-LG
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:26 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.80])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 694b2120-ffda-4093-b495-670ba2163d55;
 Thu, 29 Oct 2020 17:20:18 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKA3fC
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:10 +0100 (CET)
X-Inumbo-ID: 694b2120-ffda-4093-b495-670ba2163d55
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992017;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=s6U8mXu6k+6wrCilP+jHEKI7OfSE2V+H39V5VeQDOaY=;
	b=qh2TZXRzZOzPKSr3D4EgM+qiL5MOJM9yXCQmI5+7JzmNbWXfcgswCnujKQLgtnRP4u
	ZiE6mkGd0FGFAk3zqxXi4njEhwNugU7ZJTjMlkWzARPSNshC3JNeKSRaSMv9zgthSVwv
	gW06sSBDWgTZtg9Wtc3f8+mnfk7tlX24tvrJ0uYqGMdmLnlT3aItA4WupjhEZAS45E+K
	8Ht7fktlshpTQp5FK6614uaCuNo1AlC3ZfaE1R/BYpSePbtL4BPOZ+zn9GXy7jXG4w9x
	FUvETc8roCyzp/F2GlWiUuINx+o60tlacAMmMQWP97zMtOPxJlkMzMth/+4oZ4FxJjdx
	xp6w==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 07/23] tools/guest: save: move batch_pfns
Date: Thu, 29 Oct 2020 18:19:47 +0100
Message-Id: <20201029172004.17219-8-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The batch_pfns array is already allocated up front in setup().
Move it into the preallocated xc_sr_save_arrays area to remove the
separate allocation and its error handling.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h |  2 +-
 tools/libs/guest/xg_sr_save.c   | 25 +++++++++++--------------
 2 files changed, 12 insertions(+), 15 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 62bc87b5f4..c78a07b8f8 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -212,6 +212,7 @@ static inline int update_blob(struct xc_sr_blob *blob,
 }
 
 struct xc_sr_save_arrays {
+    xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
@@ -249,7 +250,6 @@ struct xc_sr_context
 
             struct precopy_stats stats;
 
-            xen_pfn_t *batch_pfns;
             unsigned int nr_batch_pfns;
             unsigned long *deferred_pages;
             unsigned long nr_deferred_pages;
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 1e3c8eff2f..597e638c59 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -77,7 +77,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 
 /*
  * Writes a batch of memory as a PAGE_DATA record into the stream.  The batch
- * is constructed in ctx->save.batch_pfns.
+ * is constructed in ctx->save.m->batch_pfns.
  *
  * This function:
  * - gets the types for each pfn in the batch.
@@ -128,12 +128,12 @@ static int write_batch(struct xc_sr_context *ctx)
     for ( i = 0; i < nr_pfns; ++i )
     {
         types[i] = mfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
-                                                      ctx->save.batch_pfns[i]);
+                                                      ctx->save.m->batch_pfns[i]);
 
         /* Likely a ballooned page. */
         if ( mfns[i] == INVALID_MFN )
         {
-            set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
+            set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
             ++ctx->save.nr_deferred_pages;
         }
     }
@@ -179,7 +179,7 @@ static int write_batch(struct xc_sr_context *ctx)
             if ( errors[p] )
             {
                 ERROR("Mapping of pfn %#"PRIpfn" (mfn %#"PRIpfn") failed %d",
-                      ctx->save.batch_pfns[i], mfns[p], errors[p]);
+                      ctx->save.m->batch_pfns[i], mfns[p], errors[p]);
                 goto err;
             }
 
@@ -193,7 +193,7 @@ static int write_batch(struct xc_sr_context *ctx)
             {
                 if ( rc == -1 && errno == EAGAIN )
                 {
-                    set_bit(ctx->save.batch_pfns[i], ctx->save.deferred_pages);
+                    set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
                     ++ctx->save.nr_deferred_pages;
                     types[i] = XEN_DOMCTL_PFINFO_XTAB;
                     --nr_pages;
@@ -224,7 +224,7 @@ static int write_batch(struct xc_sr_context *ctx)
     rec.length += nr_pages * PAGE_SIZE;
 
     for ( i = 0; i < nr_pfns; ++i )
-        rec_pfns[i] = ((uint64_t)(types[i]) << 32) | ctx->save.batch_pfns[i];
+        rec_pfns[i] = ((uint64_t)(types[i]) << 32) | ctx->save.m->batch_pfns[i];
 
     iov[0].iov_base = &rec.type;
     iov[0].iov_len = sizeof(rec.type);
@@ -296,9 +296,9 @@ static int flush_batch(struct xc_sr_context *ctx)
 
     if ( !rc )
     {
-        VALGRIND_MAKE_MEM_UNDEFINED(ctx->save.batch_pfns,
+        VALGRIND_MAKE_MEM_UNDEFINED(ctx->save.m->batch_pfns,
                                     MAX_BATCH_SIZE *
-                                    sizeof(*ctx->save.batch_pfns));
+                                    sizeof(*ctx->save.m->batch_pfns));
     }
 
     return rc;
@@ -315,7 +315,7 @@ static int add_to_batch(struct xc_sr_context *ctx, xen_pfn_t pfn)
         rc = flush_batch(ctx);
 
     if ( rc == 0 )
-        ctx->save.batch_pfns[ctx->save.nr_batch_pfns++] = pfn;
+        ctx->save.m->batch_pfns[ctx->save.nr_batch_pfns++] = pfn;
 
     return rc;
 }
@@ -850,14 +850,12 @@ static int setup(struct xc_sr_context *ctx)
 
     dirty_bitmap = xc_hypercall_buffer_alloc_pages(
         xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
-    ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
-                                  sizeof(*ctx->save.batch_pfns));
     ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
     ctx->save.m = malloc(sizeof(*ctx->save.m));
 
-    if ( !ctx->save.m || !ctx->save.batch_pfns || !dirty_bitmap || !ctx->save.deferred_pages )
+    if ( !ctx->save.m || !dirty_bitmap || !ctx->save.deferred_pages )
     {
-        ERROR("Unable to allocate memory for dirty bitmaps, batch pfns and"
+        ERROR("Unable to allocate memory for dirty bitmaps and"
               " deferred pages");
         rc = -1;
         errno = ENOMEM;
@@ -886,7 +884,6 @@ static void cleanup(struct xc_sr_context *ctx)
     xc_hypercall_buffer_free_pages(xch, dirty_bitmap,
                                    NRPAGES(bitmap_size(ctx->save.p2m_size)));
     free(ctx->save.deferred_pages);
-    free(ctx->save.batch_pfns);
     free(ctx->save.m);
 }
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14417.35650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbB-0003ZV-1p; Thu, 29 Oct 2020 17:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14417.35650; Thu, 29 Oct 2020 17:20:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbA-0003Yy-Mp; Thu, 29 Oct 2020 17:20:28 +0000
Received: by outflank-mailman (input) for mailman id 14417;
 Thu, 29 Oct 2020 17:20:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBb8-0003MD-Mf
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:26 +0000
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.167])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40a0b57d-7dc9-4a68-8ead-9123909cb445;
 Thu, 29 Oct 2020 17:20:17 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKA3fA
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:10 +0100 (CET)
X-Inumbo-ID: 40a0b57d-7dc9-4a68-8ead-9123909cb445
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992016;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=aHhooxXlVtScSWow8fmwlUPLQUlpWTQjy90bGrDLuPg=;
	b=jfXAXmLLWiSnWD9wSRj3VcpPbKP02MIv9DSYJN1AR/tj6RQIJ7LMgiMz0Ul+tUkyqe
	IznA81YexYkgrNm+pToGKZEKf9WQbVlJbRdjewTT73yFfQC+dzNdrghhAWKFLQ803B4+
	3gyxAez5E1i9SJsJ2PTMslnvDg8WrupjzZ4k8SqusjS6F4HSo5Lfu2pBjr/8VOn6/+8K
	L0gh+09w3CriRYmKbgpC7RY6DcPnALGjEuwR5BMgRxnnDtg/ln9feJxu0+K2fTFgVhH5
	3i+/O49sc+0BRFtozeqMZc14brss4rXu7cro/llkMj1+enpGghxSd8lUrbnINkvfFW6q
	3Akw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 05/23] tools: show migration transfer rate in send_dirty_pages
Date: Thu, 29 Oct 2020 18:19:45 +0100
Message-Id: <20201029172004.17219-6-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Show how fast domU pages are transferred in each iteration of the live
migration loop.

The interesting number is how fast the page data itself travels, not
how much protocol overhead the stream adds. The reported MiB/sec
therefore covers only the page payload.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h |  2 ++
 tools/libs/guest/xg_sr_save.c   | 47 +++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 70e328e951..f3a7a29298 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -238,6 +238,8 @@ struct xc_sr_context
             bool debug;
 
             unsigned long p2m_size;
+            size_t pages_sent;
+            size_t overhead_sent;
 
             struct precopy_stats stats;
 
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 0546d3d9e6..f58a35ddde 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -1,5 +1,6 @@
 #include <assert.h>
 #include <arpa/inet.h>
+#include <time.h>
 
 #include "xg_sr_common.h"
 
@@ -238,6 +239,8 @@ static int write_batch(struct xc_sr_context *ctx)
     iov[3].iov_len = nr_pfns * sizeof(*rec_pfns);
 
     iovcnt = 4;
+    ctx->save.pages_sent += nr_pages;
+    ctx->save.overhead_sent += sizeof(rec) + sizeof(hdr) + nr_pfns * sizeof(*rec_pfns);
 
     if ( nr_pages )
     {
@@ -357,6 +360,43 @@ static int suspend_domain(struct xc_sr_context *ctx)
     return 0;
 }
 
+static void show_transfer_rate(struct xc_sr_context *ctx, struct timespec *start)
+{
+    xc_interface *xch = ctx->xch;
+    struct timespec end = {}, diff = {};
+    size_t ms, bytes = ctx->save.pages_sent * PAGE_SIZE, MiB_sec;
+
+    if ( !bytes )
+        return;
+
+    if ( clock_gettime(CLOCK_MONOTONIC, &end) )
+        PERROR("clock_gettime");
+
+    if ( (end.tv_nsec - start->tv_nsec) < 0 )
+    {
+        diff.tv_sec = end.tv_sec - start->tv_sec - 1;
+        diff.tv_nsec = end.tv_nsec - start->tv_nsec + (1000U * 1000U * 1000U);
+    }
+    else
+    {
+        diff.tv_sec = end.tv_sec - start->tv_sec;
+        diff.tv_nsec = end.tv_nsec - start->tv_nsec;
+    }
+
+    ms = diff.tv_nsec / (1000U * 1000U);
+    if ( !ms )
+        ms = 1; /* avoid division by zero below */
+    ms += diff.tv_sec * 1000U;
+
+    MiB_sec = bytes * 1000U;
+    MiB_sec /= ms;
+    MiB_sec /= 1024U * 1024U;
+
+    errno = 0;
+    ERROR("%s: %zu bytes + %zu pages in %ld.%09ld sec, %zu MiB/sec", __func__,
+          ctx->save.overhead_sent, ctx->save.pages_sent, diff.tv_sec, diff.tv_nsec, MiB_sec);
+}
+
 /*
  * Send a subset of pages in the guests p2m, according to the dirty bitmap.
  * Used for each subsequent iteration of the live migration loop.
@@ -370,9 +410,15 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
     xen_pfn_t p;
     unsigned long written;
     int rc;
+    struct timespec start = {};
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
+    ctx->save.pages_sent = 0;
+    ctx->save.overhead_sent = 0;
+    if ( clock_gettime(CLOCK_MONOTONIC, &start) )
+        PERROR("clock_gettime");
+
     for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
     {
         if ( !test_bit(p, dirty_bitmap) )
@@ -396,6 +442,7 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
     if ( written > entries )
         DPRINTF("Bitmap contained more entries than expected...");
 
+    show_transfer_rate(ctx, &start);
     xc_report_progress_step(xch, entries, entries);
 
     return ctx->save.ops.check_vm_state(ctx);


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14418.35667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbF-0003hR-DM; Thu, 29 Oct 2020 17:20:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14418.35667; Thu, 29 Oct 2020 17:20:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbF-0003hG-8P; Thu, 29 Oct 2020 17:20:33 +0000
Received: by outflank-mailman (input) for mailman id 14418;
 Thu, 29 Oct 2020 17:20:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbD-0003MC-LT
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:31 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16a7cb04-a854-45a3-a90c-5042f3c0a328;
 Thu, 29 Oct 2020 17:20:18 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKB3fE
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:11 +0100 (CET)
X-Inumbo-ID: 16a7cb04-a854-45a3-a90c-5042f3c0a328
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992018;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=kQfsNVsKk3NxmO3L5ZaWoX8Fyv7MojyLwaUWzvd0rkk=;
	b=JOaVgOC3U0ye2T8Mnibg30o5Jl8LZLeoDlyFpQLyZUKZFpI1iVDy7YlGuUBHw7k7pJ
	lEBMhciP8OGN8a5nfrNQk4oCPIgo5zi6pddlxJGdYLZWlDJaEi2Xa8HVMo1rIj9r34w6
	MKK6aDe3IpPqQipPLV1/ZVSyXj/+ypvK9Hx1nYvNdDEmaZTzFehzTUQagbUWpLfFIjRh
	rGmfWtMa4cuprrWipm6rfNzoRwQYolDgKdx76KZC22iwTfhDkWK5dcerSup7q1xAgAAq
	N0wtO9sckmXzuzqgbrSaKdEmzPeUv+JDqwpRBlsNLjN/LFOAZCOk1dlhf5EWaF3UhfIg
	tUsA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 09/23] tools/guest: save: move types array
Date: Thu, 29 Oct 2020 18:19:49 +0100
Message-Id: <20201029172004.17219-10-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove a per-batch allocation from the hot path by moving the types
array into the preallocated xc_sr_save_arrays area.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h | 2 ++
 tools/libs/guest/xg_sr_save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 0c2bef8f78..3cbadb607b 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -215,6 +215,8 @@ struct xc_sr_save_arrays {
     xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
     /* write_batch: Mfns of the batch pfns. */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
+    /* write_batch: Types of the batch pfns. */
+    xen_pfn_t types[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index cdab288c3f..6678df95b8 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -88,7 +88,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 static int write_batch(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->save.m->mfns, *types = NULL;
+    xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Types of the batch pfns. */
-    types = malloc(nr_pfns * sizeof(*types));
     /* Errors from attempting to map the gfns. */
     errors = malloc(nr_pfns * sizeof(*errors));
     /* Pointers to page data to send.  Mapped gfns or local allocations. */
@@ -116,7 +114,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !types || !errors || !guest_data || !local_pages || !iov )
+    if ( !errors || !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -274,7 +272,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(local_pages);
     free(guest_data);
     free(errors);
-    free(types);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14419.35673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbG-0003ic-08; Thu, 29 Oct 2020 17:20:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14419.35673; Thu, 29 Oct 2020 17:20:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbF-0003iE-KZ; Thu, 29 Oct 2020 17:20:33 +0000
Received: by outflank-mailman (input) for mailman id 14419;
 Thu, 29 Oct 2020 17:20:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbD-0003MD-My
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:31 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.83])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 127c0127-9c7d-4b89-ac04-0ef80a5d7e3d;
 Thu, 29 Oct 2020 17:20:18 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKB3fD
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:11 +0100 (CET)
X-Inumbo-ID: 127c0127-9c7d-4b89-ac04-0ef80a5d7e3d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992017;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=dcpi0RoPHtdIFCl+PLBz0qXMYWjsEXHpQRt3CZZau4g=;
	b=ijoHKnN8B92cG0ZgPu+XVZgFz9yoj6EfYeXfeeqipGOTC7DUSV/ZsxvS0dcUoQdIH/
	fSBwyISTh/Aot3bYwGit11QA9Pl/9Vp8ibtLeNxWx3L8zlc0317d/sQtFm/XdF94Dg9P
	NHSkcedeLi2U/fdgBKfsU5W8EQ2Ik/GGOOOi8/+8xHHXMgFATvfSxvgYvc1CPT2pd1sR
	JGLquTspeeNgz6TJWQG3sx0/bZL1G4OB5Or0/y/AWd8oSLRfyIm1tgc6GLlz0mj4g/vm
	kHLpuxTcAywspG3wZ7Z2oeqz0g+YWuXSmD2OuLzHOHmCVL9gRk+pbJkULyA17Q+c338t
	oL+A==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 08/23] tools/guest: save: move mfns array
Date: Thu, 29 Oct 2020 18:19:48 +0100
Message-Id: <20201029172004.17219-9-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove a per-batch allocation from the hot path by moving the mfns
array into the preallocated xc_sr_save_arrays area.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h | 2 ++
 tools/libs/guest/xg_sr_save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index c78a07b8f8..0c2bef8f78 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -213,6 +213,8 @@ static inline int update_blob(struct xc_sr_blob *blob,
 
 struct xc_sr_save_arrays {
     xen_pfn_t batch_pfns[MAX_BATCH_SIZE];
+    /* write_batch: Mfns of the batch pfns. */
+    xen_pfn_t mfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 597e638c59..cdab288c3f 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -88,7 +88,7 @@ static int write_checkpoint_record(struct xc_sr_context *ctx)
 static int write_batch(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = NULL, *types = NULL;
+    xen_pfn_t *mfns = ctx->save.m->mfns, *types = NULL;
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Mfns of the batch pfns. */
-    mfns = malloc(nr_pfns * sizeof(*mfns));
     /* Types of the batch pfns. */
     types = malloc(nr_pfns * sizeof(*types));
     /* Errors from attempting to map the gfns. */
@@ -118,7 +116,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !mfns || !types || !errors || !guest_data || !local_pages || !iov )
+    if ( !types || !errors || !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -277,7 +275,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(guest_data);
     free(errors);
     free(types);
-    free(mfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14422.35691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbJ-0003rH-Oo; Thu, 29 Oct 2020 17:20:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14422.35691; Thu, 29 Oct 2020 17:20:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbJ-0003r6-Ke; Thu, 29 Oct 2020 17:20:37 +0000
Received: by outflank-mailman (input) for mailman id 14422;
 Thu, 29 Oct 2020 17:20:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbI-0003MC-LV
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:36 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.83])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ebe0a0c-9949-4068-aaaa-4cbe193ab190;
 Thu, 29 Oct 2020 17:20:19 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKC3fG
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:12 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYBbI-0003MC-LV
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:36 +0000
X-Inumbo-ID: 7ebe0a0c-9949-4068-aaaa-4cbe193ab190
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.83])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 7ebe0a0c-9949-4068-aaaa-4cbe193ab190;
	Thu, 29 Oct 2020 17:20:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992018;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=B++RYJANjQEZHzQRb6RRsob3izlvX5G7HJ1Rv0O4H+A=;
	b=WIz+NE+7xF6vT1cRZSz/As+Y7vlWr2VKQ6uf3nNZG8WvVgAUUTZT7tN/CK97UqIslb
	dShQsjixmwuLu5ZMK/YCZeDfuEZtOXVgZC0QWtSvXW2PY/zWQ3GL1jQg7YHhBT+mFcA/
	f4pN6z29cyCwuxEpRk3Qqnhw9XshKX7KuRzSzU5S22JLyID9gmOHm1He3ku455uoCkV3
	jUZ7fAKBG0G8pAZkOQwxg+cAKl5KnbphW/+6/yk/YaNKY/ig5wyy0hof4QQu09GS8DPL
	HjF5XWtQfMyum2eozgsQjJjs/7dLgJIv2cyQeQ3BtxDzJHdemmzBF6QzIk3Phrj37LhR
	aCzQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THKC3fG
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:12 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 11/23] tools/guest: save: move iov array
Date: Thu, 29 Oct 2020 18:19:51 +0100
Message-Id: <20201029172004.17219-12-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path: move the iov array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h | 2 ++
 tools/libs/guest/xg_sr_save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 71b676c0e0..f315b4f526 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -219,6 +219,8 @@ struct xc_sr_save_arrays {
     xen_pfn_t types[MAX_BATCH_SIZE];
     /* write_batch: Errors from attempting to map the gfns. */
     int errors[MAX_BATCH_SIZE];
+    /* write_batch: iovec[] for writev(). */
+    struct iovec iov[MAX_BATCH_SIZE + 4];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index cdc27a9cde..385a591332 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -97,7 +97,7 @@ static int write_batch(struct xc_sr_context *ctx)
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
     uint64_t *rec_pfns = NULL;
-    struct iovec *iov = NULL; int iovcnt = 0;
+    struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
     struct xc_sr_record rec = {
         .type = REC_TYPE_PAGE_DATA,
@@ -109,10 +109,8 @@ static int write_batch(struct xc_sr_context *ctx)
     guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
     local_pages = calloc(nr_pfns, sizeof(*local_pages));
-    /* iovec[] for writev(). */
-    iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !guest_data || !local_pages || !iov )
+    if ( !guest_data || !local_pages )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -266,7 +264,6 @@ static int write_batch(struct xc_sr_context *ctx)
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
     for ( i = 0; local_pages && i < nr_pfns; ++i )
         free(local_pages[i]);
-    free(iov);
     free(local_pages);
     free(guest_data);
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14423.35697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbK-0003sh-HP; Thu, 29 Oct 2020 17:20:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14423.35697; Thu, 29 Oct 2020 17:20:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbK-0003sA-3H; Thu, 29 Oct 2020 17:20:38 +0000
Received: by outflank-mailman (input) for mailman id 14423;
 Thu, 29 Oct 2020 17:20:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbI-0003MD-ND
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:36 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da89125b-8752-45de-bff5-641a8ac91f59;
 Thu, 29 Oct 2020 17:20:18 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKA3fB
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:10 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYBbI-0003MD-ND
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:36 +0000
X-Inumbo-ID: da89125b-8752-45de-bff5-641a8ac91f59
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id da89125b-8752-45de-bff5-641a8ac91f59;
	Thu, 29 Oct 2020 17:20:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992017;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=LTn6Zyyvrx+7UIt94Limr5RoX+Df4LpsEj+n6P0mPjI=;
	b=cyiChVrN7bX7kbJ1XJcd0kh4utBwBfFXspgce7T2qyODlrBf9aVGxzZ9XMD+AK6/mv
	bkx0vDZIqyZrhkHQqd+NGt5i1M7IIUjM46XuSeQGy3CdJJCs/Nu5mlv1rT/eFY2GBJcZ
	Lzaz/QieYwv7V/uyX3dXLNs6GYHUJ5zOXG5Jkr+q8TGkE/JZuMbghh15jbeiqTV+bWfh
	mESHS2Wu+iVmkVZRb3Ez+jrlvw7JQodkJCPqh72/Vmn/J/FWQH6DRYZs1FuZG8AT4i9Q
	BjKQ/W+HuxmqsXvrnS1ue/YTAp13nb6tW0epcjq6L3LJTkeVS3r5R+MSZh0mpH69Zm8m
	vP5A==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THKA3fB
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:10 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 06/23] tools/guest: prepare to allocate arrays once
Date: Thu, 29 Oct 2020 18:19:46 +0100
Message-Id: <20201029172004.17219-7-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The hot path 'send_dirty_pages' is supposed to do just one thing: sending.
The other end, 'handle_page_data', is supposed to do just the receiving.

But instead both do other costly work, like memory allocations and data moving.
Do the allocations once; the array sizes are a compile-time constant.
Avoid unneeded copying of data by receiving data directly into mapped guest memory.

This patch is just preparation; subsequent changes will populate the arrays.
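
The pattern the series introduces can be sketched as below. This is an illustrative, simplified example, not the actual libxenguest code: the names (save_arrays, ctx, setup, write_batch, cleanup) mirror the patch, but the pfn-to-mfn translation is a stand-in.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define MAX_BATCH_SIZE 1024  /* compile-time constant, as in the patch */

/* All per-batch scratch arrays live in one struct, sized at compile time. */
struct save_arrays {
    uint64_t batch_pfns[MAX_BATCH_SIZE];
    uint64_t mfns[MAX_BATCH_SIZE];
};

struct ctx {
    struct save_arrays *m;  /* allocated once in setup() */
};

static int setup(struct ctx *c)
{
    /* One allocation at setup time, instead of one per batch. */
    c->m = malloc(sizeof(*c->m));
    return c->m ? 0 : -1;
}

static int write_batch(struct ctx *c, unsigned int nr_pfns)
{
    /* No malloc()/free() in the hot path: reuse the preallocated arrays. */
    uint64_t *mfns = c->m->mfns;

    for ( unsigned int i = 0; i < nr_pfns; ++i )
        mfns[i] = c->m->batch_pfns[i] + 100;  /* stand-in for pfn->mfn lookup */

    return 0;
}

static void cleanup(struct ctx *c)
{
    free(c->m);
}
```

The trade-off is a fixed memory footprint (MAX_BATCH_SIZE entries even for small batches) in exchange for removing allocator calls and their failure paths from every batch.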

Once all changes are applied, migration of a busy HVM domU improves as follows:

Without this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_testing):
2020-10-29 10:23:10.711+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 55.324905335 sec, 203 MiB/sec: Internal error
2020-10-29 10:23:35.115+0000: xc: show_transfer_rate: 16829632 bytes + 2097552 pages in 24.401179720 sec, 335 MiB/sec: Internal error
2020-10-29 10:23:59.436+0000: xc: show_transfer_rate: 16829032 bytes + 2097478 pages in 24.319025928 sec, 336 MiB/sec: Internal error
2020-10-29 10:24:23.844+0000: xc: show_transfer_rate: 16829024 bytes + 2097477 pages in 24.406992500 sec, 335 MiB/sec: Internal error
2020-10-29 10:24:48.292+0000: xc: show_transfer_rate: 16828912 bytes + 2097463 pages in 24.446489027 sec, 335 MiB/sec: Internal error
2020-10-29 10:25:01.816+0000: xc: show_transfer_rate: 16836080 bytes + 2098356 pages in 13.447091818 sec, 609 MiB/sec: Internal error

With this series, from sr650 to sr950 (xen-4.15.20201027T173911.16a20963b3 xen_unstable):
2020-10-28 21:26:05.074+0000: xc: show_transfer_rate: 23663128 bytes + 2879563 pages in 52.564054368 sec, 213 MiB/sec: Internal error
2020-10-28 21:26:23.527+0000: xc: show_transfer_rate: 16830040 bytes + 2097603 pages in 18.450592015 sec, 444 MiB/sec: Internal error
2020-10-28 21:26:41.926+0000: xc: show_transfer_rate: 16830944 bytes + 2097717 pages in 18.397862306 sec, 445 MiB/sec: Internal error
2020-10-28 21:27:00.339+0000: xc: show_transfer_rate: 16829176 bytes + 2097498 pages in 18.411973339 sec, 445 MiB/sec: Internal error
2020-10-28 21:27:18.643+0000: xc: show_transfer_rate: 16828592 bytes + 2097425 pages in 18.303326695 sec, 447 MiB/sec: Internal error
2020-10-28 21:27:26.289+0000: xc: show_transfer_rate: 16835952 bytes + 2098342 pages in 7.579846749 sec, 1081 MiB/sec: Internal error

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  | 8 ++++++++
 tools/libs/guest/xg_sr_restore.c | 8 ++++++++
 tools/libs/guest/xg_sr_save.c    | 4 +++-
 3 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index f3a7a29298..62bc87b5f4 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -211,6 +211,12 @@ static inline int update_blob(struct xc_sr_blob *blob,
     return 0;
 }
 
+struct xc_sr_save_arrays {
+};
+
+struct xc_sr_restore_arrays {
+};
+
 struct xc_sr_context
 {
     xc_interface *xch;
@@ -248,6 +254,7 @@ struct xc_sr_context
             unsigned long *deferred_pages;
             unsigned long nr_deferred_pages;
             xc_hypercall_buffer_t dirty_bitmap_hbuf;
+            struct xc_sr_save_arrays *m;
         } save;
 
         struct /* Restore data. */
@@ -299,6 +306,7 @@ struct xc_sr_context
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
+            struct xc_sr_restore_arrays *m;
         } restore;
     };
 
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 0332ae9f32..4a9ece9681 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -739,6 +739,13 @@ static int setup(struct xc_sr_context *ctx)
     }
     ctx->restore.allocated_rec_num = DEFAULT_BUF_RECORDS;
 
+    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
+    if ( !ctx->restore.m ) {
+        ERROR("Unable to allocate memory for arrays");
+        rc = -1;
+        goto err;
+    }
+
  err:
     return rc;
 }
@@ -757,6 +764,7 @@ static void cleanup(struct xc_sr_context *ctx)
         xc_hypercall_buffer_free_pages(
             xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->restore.p2m_size)));
 
+    free(ctx->restore.m);
     free(ctx->restore.buffered_records);
     free(ctx->restore.populated_pfns);
 
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index f58a35ddde..1e3c8eff2f 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -853,8 +853,9 @@ static int setup(struct xc_sr_context *ctx)
     ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
                                   sizeof(*ctx->save.batch_pfns));
     ctx->save.deferred_pages = bitmap_alloc(ctx->save.p2m_size);
+    ctx->save.m = malloc(sizeof(*ctx->save.m));
 
-    if ( !ctx->save.batch_pfns || !dirty_bitmap || !ctx->save.deferred_pages )
+    if ( !ctx->save.m || !ctx->save.batch_pfns || !dirty_bitmap || !ctx->save.deferred_pages )
     {
         ERROR("Unable to allocate memory for dirty bitmaps, batch pfns and"
               " deferred pages");
@@ -886,6 +887,7 @@ static void cleanup(struct xc_sr_context *ctx)
                                    NRPAGES(bitmap_size(ctx->save.p2m_size)));
     free(ctx->save.deferred_pages);
     free(ctx->save.batch_pfns);
+    free(ctx->save.m);
 }
 
 /*


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14427.35715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbO-000427-Tz; Thu, 29 Oct 2020 17:20:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14427.35715; Thu, 29 Oct 2020 17:20:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbO-00041n-OI; Thu, 29 Oct 2020 17:20:42 +0000
Received: by outflank-mailman (input) for mailman id 14427;
 Thu, 29 Oct 2020 17:20:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbN-0003MC-Li
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:41 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1211aba-36a0-4d14-9c12-d0ef84df3695;
 Thu, 29 Oct 2020 17:20:19 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKD3fH
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:13 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYBbN-0003MC-Li
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:41 +0000
X-Inumbo-ID: b1211aba-36a0-4d14-9c12-d0ef84df3695
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id b1211aba-36a0-4d14-9c12-d0ef84df3695;
	Thu, 29 Oct 2020 17:20:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992018;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=VVfW+oAH1Frox7ej6KApgNQS/Q90zBjaSWimvZM/Rz8=;
	b=TkkWMSXeFN2eROBRvYQywqH0q0qV5fFJK7u/XrU/lNwp2dPTpeaAsEx2FJNQzWfMC3
	NDTxq9YTAmUp6nQDyAHjWRJlTn1XaODMO0lhzm9VrpQjQIRTKfJZCWLEEeD8JXzXaRMn
	PTOkpnCh0HneHbEaPr0fXD3M1P3rF+rMwSa3EPUWjpv5kBbsu/2i+87lfSaYRGrBcY49
	+0wS65vNO5hVmAszhdW8WnhWK2GDZxJb7CmYSu6FgCH6AbHxVvUnhl3kkg+oVru0ibC0
	nGJrLNRSvHN2Y/hJ+6ctaCyIuZY/WuIiI+nrjkLctW/dX81kQV6CFfX2nGQLbnkLedsp
	oLxQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THKD3fH
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:13 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 12/23] tools/guest: save: move rec_pfns array
Date: Thu, 29 Oct 2020 18:19:52 +0100
Message-Id: <20201029172004.17219-13-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path: move the rec_pfns array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h |  2 ++
 tools/libs/guest/xg_sr_save.c   | 11 +----------
 2 files changed, 3 insertions(+), 10 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index f315b4f526..81158a4f4d 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -221,6 +221,8 @@ struct xc_sr_save_arrays {
     int errors[MAX_BATCH_SIZE];
     /* write_batch: iovec[] for writev(). */
     struct iovec iov[MAX_BATCH_SIZE + 4];
+    /* write_batch */
+    uint64_t rec_pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 385a591332..4d7fbe09ed 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -96,7 +96,7 @@ static int write_batch(struct xc_sr_context *ctx)
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
-    uint64_t *rec_pfns = NULL;
+    uint64_t *rec_pfns = ctx->save.m->rec_pfns;
     struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
     struct xc_sr_record rec = {
@@ -201,14 +201,6 @@ static int write_batch(struct xc_sr_context *ctx)
         }
     }
 
-    rec_pfns = malloc(nr_pfns * sizeof(*rec_pfns));
-    if ( !rec_pfns )
-    {
-        ERROR("Unable to allocate %zu bytes of memory for page data pfn list",
-              nr_pfns * sizeof(*rec_pfns));
-        goto err;
-    }
-
     hdr.count = nr_pfns;
 
     rec.length = sizeof(hdr);
@@ -259,7 +251,6 @@ static int write_batch(struct xc_sr_context *ctx)
     rc = ctx->save.nr_batch_pfns = 0;
 
  err:
-    free(rec_pfns);
     if ( guest_mapping )
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
     for ( i = 0; local_pages && i < nr_pfns; ++i )


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14428.35724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbP-00043f-Pd; Thu, 29 Oct 2020 17:20:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14428.35724; Thu, 29 Oct 2020 17:20:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbP-00042z-B3; Thu, 29 Oct 2020 17:20:43 +0000
Received: by outflank-mailman (input) for mailman id 14428;
 Thu, 29 Oct 2020 17:20:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbN-0003MD-NO
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:41 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.171])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1223d891-298a-419b-987a-b0e4d644f04c;
 Thu, 29 Oct 2020 17:20:20 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKD3fI
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:13 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYBbN-0003MD-NO
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:41 +0000
X-Inumbo-ID: 1223d891-298a-419b-987a-b0e4d644f04c
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.171])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1223d891-298a-419b-987a-b0e4d644f04c;
	Thu, 29 Oct 2020 17:20:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992019;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=lkN1X6kUoB//sxPm6COzYhXSiEgr2M3yddCJzpbrAss=;
	b=IHBp8bGT+CjzevJ5KObxjBTmwycrPlfQlA8m7+twU1iV2uRqoop2Pgp86SuPb6M/cm
	kSSc6cIZY/xBaOpru3unK+lnoGDj/kITm2y8d5m0RFzT9mhxW7OYZne+ho5+B3pq6NYC
	bjOhehhBO1yRDGmmET+BOmVJhifRB0L6+7E5bfset/Q47vIaIvevf7VcQ1SudJeTCrqk
	DlZ0NfrZOrb+DChRBUnjDC+id6rHlbZl/l9qcXpUPKqiFmwFpTJgIPlBaBKyxvJxmPhv
	5NcOZ+s4zN7JyXWCel/K14wpryXivD4jgQ7w+GpA53x6vc7H/KNfaTba6tQ/ORImRsAc
	N0Gg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THKD3fI
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:13 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 13/23] tools/guest: save: move guest_data array
Date: Thu, 29 Oct 2020 18:19:53 +0100
Message-Id: <20201029172004.17219-14-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove the allocation from the hot path: move the guest_data array into preallocated space.

Because this array was previously zero-initialised by calloc():
adjust the loop to clear unused entries as needed.
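
The calloc() subtlety can be sketched as follows. This is an illustrative example, not the libxenguest code: when a scratch array switches from a per-batch calloc() (which returned zeroed memory every time) to a preallocated buffer reused across batches, stale pointers from the previous batch must be NULLed explicitly for the entries the current batch skips.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_BATCH 4

/* Preallocated and reused across batches: NOT re-zeroed automatically. */
static void *guest_data[MAX_BATCH];

/* has_data[i] marks which entries this batch actually fills. */
static void fill_batch(const int *has_data, void *payload, unsigned int n)
{
    for ( unsigned int i = 0; i < n; ++i )
    {
        if ( !has_data[i] )
        {
            /* Was implicit when the array came fresh from calloc(). */
            guest_data[i] = NULL;
            continue;
        }
        guest_data[i] = payload;
    }
}
```

Without the explicit NULL assignment, a second batch that skips an entry would see the pointer left over from the first batch and could send or free stale data.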

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h |  2 ++
 tools/libs/guest/xg_sr_save.c   | 11 ++++++-----
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 81158a4f4d..33e66678c6 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -223,6 +223,8 @@ struct xc_sr_save_arrays {
     struct iovec iov[MAX_BATCH_SIZE + 4];
     /* write_batch */
     uint64_t rec_pfns[MAX_BATCH_SIZE];
+    /* write_batch: Pointers to page data to send. Mapped gfns or local allocations. */
+    void *guest_data[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 4d7fbe09ed..658f834ae8 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -90,7 +90,7 @@ static int write_batch(struct xc_sr_context *ctx)
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
-    void **guest_data = NULL;
+    void **guest_data = ctx->save.m->guest_data;
     void **local_pages = NULL;
     int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
@@ -105,12 +105,10 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Pointers to page data to send.  Mapped gfns or local allocations. */
-    guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
     local_pages = calloc(nr_pfns, sizeof(*local_pages));
 
-    if ( !guest_data || !local_pages )
+    if ( !local_pages )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -166,7 +164,10 @@ static int write_batch(struct xc_sr_context *ctx)
         for ( i = 0, p = 0; i < nr_pfns; ++i )
         {
             if ( page_type_has_stream_data(types[i]) == false )
+            {
+                guest_data[i] = NULL;
                 continue;
+            }
 
             if ( errors[p] )
             {
@@ -183,6 +184,7 @@ static int write_batch(struct xc_sr_context *ctx)
 
             if ( rc )
             {
+                guest_data[i] = NULL;
                 if ( rc == -1 && errno == EAGAIN )
                 {
                     set_bit(ctx->save.m->batch_pfns[i], ctx->save.deferred_pages);
@@ -256,7 +258,6 @@ static int write_batch(struct xc_sr_context *ctx)
     for ( i = 0; local_pages && i < nr_pfns; ++i )
         free(local_pages[i]);
     free(local_pages);
-    free(guest_data);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14431.35739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbU-0004Dt-7r; Thu, 29 Oct 2020 17:20:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14431.35739; Thu, 29 Oct 2020 17:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbU-0004Dd-1a; Thu, 29 Oct 2020 17:20:48 +0000
Received: by outflank-mailman (input) for mailman id 14431;
 Thu, 29 Oct 2020 17:20:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbS-0003MC-Lm
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:46 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.80])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b787c47-ec20-4ce7-921c-3bdab7e6f4a0;
 Thu, 29 Oct 2020 17:20:19 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKC3fF
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:12 +0100 (CET)
X-Inumbo-ID: 1b787c47-ec20-4ce7-921c-3bdab7e6f4a0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992019;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=zPzB2Ea/XkNgns/5zMr604KMV7uEET1uzQmhUGE8Zl4=;
	b=Zm9Fxmnh227rnoPh2z1h87SP3mDURpyMdE8IBX/J4SoKhSODhFYo5HRzA/FrCMbYsc
	6ghjx1ekd8+VgzU3ASVxKljdQfQWGHH76RCKJLsqq8FWPpPrpPrtNxucdE+1QbSotC+a
	u6zLoM5tQuXZs+rfR2ivsN6ESu/ViR2Vj4PokMlMEUE91BL8hoRGKKOrw+OraANLaMGS
	nfq5A0tZzB58TAsZ5KRyhjgRQxET7rs1li0pXSiQXHpffqUzK1lCkKqvEEQ6lfa9ZrAw
	FYqtoHipBWL7HJWaTVeVop0+vX7AlxlNFlAHr+XUpK+XOHUKsvzqC6LHx9aRoch730vT
	Zk6Q==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 10/23] tools/guest: save: move errors array
Date: Thu, 29 Oct 2020 18:19:50 +0100
Message-Id: <20201029172004.17219-11-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path by moving the errors array into
preallocated space.
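The idea behind this change can be sketched outside of Xen: rather than
malloc()ing and free()ing a fixed-maximum-size scratch array on every pass
through write_batch(), the array is embedded in a structure that is allocated
once at setup and reused for every batch. The names below (sr_context,
sr_arrays, sr_setup) are illustrative stand-ins, not the actual libxenguest
types:

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_BATCH_SIZE 1024

/* Stand-in for struct xc_sr_save_arrays: all per-batch scratch arrays
 * live in one struct that is allocated once at setup time. */
struct sr_arrays {
    int errors[MAX_BATCH_SIZE];
};

struct sr_context {
    struct sr_arrays *m;   /* preallocated at setup, freed at teardown */
};

static int sr_setup(struct sr_context *ctx)
{
    ctx->m = malloc(sizeof(*ctx->m));
    return ctx->m ? 0 : -1;
}

static void sr_teardown(struct sr_context *ctx)
{
    free(ctx->m);
    ctx->m = NULL;
}

/* Hot path: no malloc()/free() per batch any more; the preallocated
 * array is simply borrowed from the context. */
static int write_batch(struct sr_context *ctx, unsigned int nr_pfns)
{
    int *errors = ctx->m->errors;
    unsigned int i;

    assert(nr_pfns != 0 && nr_pfns <= MAX_BATCH_SIZE);
    for ( i = 0; i < nr_pfns; ++i )
        errors[i] = 0;   /* would be filled in by the map operation */

    return 0;
}
```

Because the array has a compile-time maximum size, a bounds check on the batch
size replaces the per-call allocation-failure handling.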

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h | 2 ++
 tools/libs/guest/xg_sr_save.c   | 7 ++-----
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 3cbadb607b..71b676c0e0 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -217,6 +217,8 @@ struct xc_sr_save_arrays {
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     /* write_batch: Types of the batch pfns. */
     xen_pfn_t types[MAX_BATCH_SIZE];
+    /* write_batch: Errors from attempting to map the gfns. */
+    int errors[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_restore_arrays {
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 6678df95b8..cdc27a9cde 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -92,7 +92,7 @@ static int write_batch(struct xc_sr_context *ctx)
     void *guest_mapping = NULL;
     void **guest_data = NULL;
     void **local_pages = NULL;
-    int *errors = NULL, rc = -1;
+    int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
     void *page, *orig_page;
@@ -105,8 +105,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Errors from attempting to map the gfns. */
-    errors = malloc(nr_pfns * sizeof(*errors));
     /* Pointers to page data to send.  Mapped gfns or local allocations. */
     guest_data = calloc(nr_pfns, sizeof(*guest_data));
     /* Pointers to locally allocated pages.  Need freeing. */
@@ -114,7 +112,7 @@ static int write_batch(struct xc_sr_context *ctx)
     /* iovec[] for writev(). */
     iov = malloc((nr_pfns + 4) * sizeof(*iov));
 
-    if ( !errors || !guest_data || !local_pages || !iov )
+    if ( !guest_data || !local_pages || !iov )
     {
         ERROR("Unable to allocate arrays for a batch of %u pages",
               nr_pfns);
@@ -271,7 +269,6 @@ static int write_batch(struct xc_sr_context *ctx)
     free(iov);
     free(local_pages);
     free(guest_data);
-    free(errors);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14432.35745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbU-0004FL-Ut; Thu, 29 Oct 2020 17:20:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14432.35745; Thu, 29 Oct 2020 17:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbU-0004Ey-HD; Thu, 29 Oct 2020 17:20:48 +0000
Received: by outflank-mailman (input) for mailman id 14432;
 Thu, 29 Oct 2020 17:20:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbS-0003MD-Nd
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:46 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.80])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61498030-9189-4465-82cd-c67210d36d26;
 Thu, 29 Oct 2020 17:20:20 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKD3fK
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:13 +0100 (CET)
X-Inumbo-ID: 61498030-9189-4465-82cd-c67210d36d26
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992020;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=N7Hr2VOfOon4RnvyU1vv7pvNigOvHS1qA+Cy9MD8+GQ=;
	b=DWa+kVf573alXYUQYv+wjcgQrPIuU7myRbzKwpdCoRWE9DRENK25z/1AaGfD0aCF85
	o8pdJfaRypVXbUERwx4iocR2ikDfsAsc38cr+nddBbATVIQ5sfsq6PGecpaPSzu6Gqng
	iBSyZjZnoBcmAsxcvym+7scgvaJ2u1eENHTPnuGRF3czC0GJrIfEz6eCrVTlu1jzD/Q6
	42LbjhLNBvIhxLniORBCyo4s5VOoN+CX4CaGHOYPPGS4UL5eoNbTt8ionwtiw10WeE5o
	AqiNkgWfc8Ua5PAZd1L9PH+wCOiJ8Ej9YxI7/gDUvFNLBCPJjyBmXMO9k8t50Cj+5nuK
	2jiQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 14/23] tools/guest: save: move local_pages array
Date: Thu, 29 Oct 2020 18:19:54 +0100
Message-Id: <20201029172004.17219-15-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path by moving the local_pages array
into preallocated space.

Adjust the code to use the src page as-is in the HVM case. In the PV
case the page may need to be normalised; use a private memory area for
this purpose.
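The reworked normalise_page() contract can be modelled generically: the callee
either hands the read-only source page back through the output pointer
unchanged (the HVM case), or fills a per-index slot of a buffer preallocated
at setup and returns that (the PV case). A simplified sketch under those
assumptions — memcpy() stands in for normalise_pagetable(), and the names are
illustrative, not the actual libxenguest API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE      4096
#define MAX_BATCH_SIZE 1024

struct norm_ctx {
    /* One scratch page per possible batch slot, allocated at setup. */
    unsigned char *normalised_pages;
};

static int norm_setup(struct norm_ctx *ctx)
{
    ctx->normalised_pages = malloc((size_t)MAX_BATCH_SIZE * PAGE_SIZE);
    return ctx->normalised_pages ? 0 : -1;
}

static void norm_teardown(struct norm_ctx *ctx)
{
    free(ctx->normalised_pages);
}

/*
 * If no transformation is needed, return 'src' through '*ptr' untouched;
 * otherwise write the transformed page into the slot for 'idx' and return
 * that.  'src' may be a read-only guest mapping and must not be modified.
 */
static int normalise_page(struct norm_ctx *ctx, int needs_transform,
                          const void *src, unsigned int idx, const void **ptr)
{
    unsigned char *dst;

    if ( !needs_transform )
    {
        *ptr = src;
        return 0;
    }

    if ( idx >= MAX_BATCH_SIZE )
        return -1;   /* batch index out of range for the scratch buffer */

    dst = ctx->normalised_pages + (size_t)idx * PAGE_SIZE;
    memcpy(dst, src, PAGE_SIZE);   /* stand-in for normalise_pagetable() */
    *ptr = dst;
    return 0;
}
```

The caller no longer frees the returned pointer: it is either the source
mapping itself or a slot in the context-owned buffer, which is released once
at teardown.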

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h       | 22 ++++++++++---------
 tools/libs/guest/xg_sr_save.c         | 25 +++------------------
 tools/libs/guest/xg_sr_save_x86_hvm.c |  5 +++--
 tools/libs/guest/xg_sr_save_x86_pv.c  | 31 ++++++++++++++++++---------
 4 files changed, 39 insertions(+), 44 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 33e66678c6..2a020fef5c 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -33,16 +33,12 @@ struct xc_sr_save_ops
      * Optionally transform the contents of a page from being specific to the
      * sending environment, to being generic for the stream.
      *
-     * The page of data at the end of 'page' may be a read-only mapping of a
-     * running guest; it must not be modified.  If no transformation is
-     * required, the callee should leave '*pages' untouched.
+     * The page of data '*src' may be a read-only mapping of a running guest;
+     * it must not be modified. If no transformation is required, the callee
+     * should leave '*src' untouched, and return it via '**ptr'.
      *
-     * If a transformation is required, the callee should allocate themselves
-     * a local page using malloc() and return it via '*page'.
-     *
-     * The caller shall free() '*page' in all cases.  In the case that the
-     * callee encounters an error, it should *NOT* free() the memory it
-     * allocated for '*page'.
+     * If a transformation is required, the callee should provide the
+     * transformed page in a private buffer and return it via '**ptr'.
      *
      * It is valid to fail with EAGAIN if the transformation is not able to be
      * completed at this point.  The page shall be retried later.
@@ -50,7 +46,7 @@ struct xc_sr_save_ops
      * @returns 0 for success, -1 for failure, with errno appropriately set.
      */
     int (*normalise_page)(struct xc_sr_context *ctx, xen_pfn_t type,
-                          void **page);
+                          void *src, unsigned int idx, void **ptr);
 
     /**
      * Set up local environment to save a domain. (Typically querying
@@ -371,6 +367,12 @@ struct xc_sr_context
 
                 union
                 {
+                    struct
+                    {
+                        /* Used by write_batch for modified pages. */
+                        void *normalised_pages;
+                    } save;
+
                     struct
                     {
                         /* State machine for the order of received records. */
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 658f834ae8..804e4ccb3a 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -91,11 +91,10 @@ static int write_batch(struct xc_sr_context *ctx)
     xen_pfn_t *mfns = ctx->save.m->mfns, *types = ctx->save.m->types;
     void *guest_mapping = NULL;
     void **guest_data = ctx->save.m->guest_data;
-    void **local_pages = NULL;
     int *errors = ctx->save.m->errors, rc = -1;
     unsigned int i, p, nr_pages = 0, nr_pages_mapped = 0;
     unsigned int nr_pfns = ctx->save.nr_batch_pfns;
-    void *page, *orig_page;
+    void *src;
     uint64_t *rec_pfns = ctx->save.m->rec_pfns;
     struct iovec *iov = ctx->save.m->iov; int iovcnt = 0;
     struct xc_sr_rec_page_data_header hdr = { 0 };
@@ -105,16 +104,6 @@ static int write_batch(struct xc_sr_context *ctx)
 
     assert(nr_pfns != 0);
 
-    /* Pointers to locally allocated pages.  Need freeing. */
-    local_pages = calloc(nr_pfns, sizeof(*local_pages));
-
-    if ( !local_pages )
-    {
-        ERROR("Unable to allocate arrays for a batch of %u pages",
-              nr_pfns);
-        goto err;
-    }
-
     for ( i = 0; i < nr_pfns; ++i )
     {
         types[i] = mfns[i] = ctx->save.ops.pfn_to_gfn(ctx,
@@ -176,11 +165,8 @@ static int write_batch(struct xc_sr_context *ctx)
                 goto err;
             }
 
-            orig_page = page = guest_mapping + (p * PAGE_SIZE);
-            rc = ctx->save.ops.normalise_page(ctx, types[i], &page);
-
-            if ( orig_page != page )
-                local_pages[i] = page;
+            src = guest_mapping + (p * PAGE_SIZE);
+            rc = ctx->save.ops.normalise_page(ctx, types[i], src, i, &guest_data[i]);
 
             if ( rc )
             {
@@ -195,8 +181,6 @@ static int write_batch(struct xc_sr_context *ctx)
                 else
                     goto err;
             }
-            else
-                guest_data[i] = page;
 
             rc = -1;
             ++p;
@@ -255,9 +239,6 @@ static int write_batch(struct xc_sr_context *ctx)
  err:
     if ( guest_mapping )
         xenforeignmemory_unmap(xch->fmem, guest_mapping, nr_pages_mapped);
-    for ( i = 0; local_pages && i < nr_pfns; ++i )
-        free(local_pages[i]);
-    free(local_pages);
 
     return rc;
 }
diff --git a/tools/libs/guest/xg_sr_save_x86_hvm.c b/tools/libs/guest/xg_sr_save_x86_hvm.c
index 1634a7bc43..11232b9f1d 100644
--- a/tools/libs/guest/xg_sr_save_x86_hvm.c
+++ b/tools/libs/guest/xg_sr_save_x86_hvm.c
@@ -129,9 +129,10 @@ static xen_pfn_t x86_hvm_pfn_to_gfn(const struct xc_sr_context *ctx,
     return pfn;
 }
 
-static int x86_hvm_normalise_page(struct xc_sr_context *ctx,
-                                  xen_pfn_t type, void **page)
+static int x86_hvm_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
+                                  void *src, unsigned int idx, void **ptr)
 {
+    *ptr = src;
     return 0;
 }
 
diff --git a/tools/libs/guest/xg_sr_save_x86_pv.c b/tools/libs/guest/xg_sr_save_x86_pv.c
index 4964f1f7b8..8165306354 100644
--- a/tools/libs/guest/xg_sr_save_x86_pv.c
+++ b/tools/libs/guest/xg_sr_save_x86_pv.c
@@ -999,29 +999,31 @@ static xen_pfn_t x86_pv_pfn_to_gfn(const struct xc_sr_context *ctx,
  * save_ops function.  Performs pagetable normalisation on appropriate pages.
  */
 static int x86_pv_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
-                                 void **page)
+                                  void *src, unsigned int idx, void **ptr)
 {
     xc_interface *xch = ctx->xch;
-    void *local_page;
     int rc;
+    void *dst;
 
     type &= XEN_DOMCTL_PFINFO_LTABTYPE_MASK;
 
     if ( type < XEN_DOMCTL_PFINFO_L1TAB || type > XEN_DOMCTL_PFINFO_L4TAB )
+    {
+        *ptr = src;
         return 0;
+    }
 
-    local_page = malloc(PAGE_SIZE);
-    if ( !local_page )
+    if ( idx >= MAX_BATCH_SIZE )
     {
-        ERROR("Unable to allocate scratch page");
-        rc = -1;
-        goto out;
+        ERROR("idx %u out of range", idx);
+        errno = ERANGE;
+        return -1;
     }
 
-    rc = normalise_pagetable(ctx, *page, local_page, type);
-    *page = local_page;
+    dst = ctx->x86.pv.save.normalised_pages + idx * PAGE_SIZE;
+    rc = normalise_pagetable(ctx, src, dst, type);
+    *ptr = dst;
 
- out:
     return rc;
 }
 
@@ -1031,8 +1033,16 @@ static int x86_pv_normalise_page(struct xc_sr_context *ctx, xen_pfn_t type,
  */
 static int x86_pv_setup(struct xc_sr_context *ctx)
 {
+    xc_interface *xch = ctx->xch;
     int rc;
 
+    ctx->x86.pv.save.normalised_pages = malloc(MAX_BATCH_SIZE * PAGE_SIZE);
+    if ( !ctx->x86.pv.save.normalised_pages )
+    {
+        PERROR("Failed to allocate normalised_pages");
+        return -1;
+    }
+
     rc = x86_pv_domain_info(ctx);
     if ( rc )
         return rc;
@@ -1118,6 +1128,7 @@ static int x86_pv_check_vm_state(struct xc_sr_context *ctx)
 
 static int x86_pv_cleanup(struct xc_sr_context *ctx)
 {
+    free(ctx->x86.pv.save.normalised_pages);
     free(ctx->x86.pv.p2m_pfns);
 
     if ( ctx->x86.pv.p2m )


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14438.35763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbY-0004P0-Tc; Thu, 29 Oct 2020 17:20:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14438.35763; Thu, 29 Oct 2020 17:20:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbY-0004Ol-NM; Thu, 29 Oct 2020 17:20:52 +0000
Received: by outflank-mailman (input) for mailman id 14438;
 Thu, 29 Oct 2020 17:20:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbX-0003MC-M4
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:51 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.170])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c87f4a73-4a17-4d15-a0fc-66c914d8e5f4;
 Thu, 29 Oct 2020 17:20:21 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKE3fM
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:14 +0100 (CET)
X-Inumbo-ID: c87f4a73-4a17-4d15-a0fc-66c914d8e5f4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992020;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=8HnEje+bi5L28ovh7GxYhMDV/iYGgk+hVnPKuTzQjbY=;
	b=Tz1p1IUBUd9B3S95popNAy44u2KWyhewJA6OQIaykQVWTfzCZMJK8k9C/YiGeJgmR8
	OPtpyy/H7A4kri4DKD/vQScMHpg75ec9ogZuVmsPz95l1LNBhkgBiA32wun1zylBdKrN
	1P504CQ3SdNpTjjqh4g8hE9i/FFhAKDTBJRMrII+g3FaNmOLR0rZJ9FF/b2PyC1yTRV9
	eoY4RLgSubMDPHbh5iiAWlruR4yuyz4Giz/0p/U4bpxnKdPQtaEDi78kJ3+FFePEhyxi
	RLd3Z1Fdp/P/ElRsqB3vHP2nqyFmbMa1BHF6hpCMl0EbywMUhgIS41+L8X9Taemqu8KW
	mVNQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 15/23] tools/guest: restore: move pfns array
Date: Thu, 29 Oct 2020 18:19:55 +0100
Message-Id: <20201029172004.17219-16-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path by moving the pfns array into
preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  | 2 ++
 tools/libs/guest/xg_sr_restore.c | 6 ++----
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 2a020fef5c..516d9b9fb5 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -224,6 +224,8 @@ struct xc_sr_save_arrays {
 };
 
 struct xc_sr_restore_arrays {
+    /* handle_page_data */
+    xen_pfn_t pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 4a9ece9681..7d1447e86c 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -315,7 +315,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     unsigned int i, pages_of_data = 0;
     int rc = -1;
 
-    xen_pfn_t *pfns = NULL, pfn;
+    xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
     uint32_t *types = NULL, type;
 
     /*
@@ -363,9 +363,8 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    pfns = malloc(pages->count * sizeof(*pfns));
     types = malloc(pages->count * sizeof(*types));
-    if ( !pfns || !types )
+    if ( !types )
     {
         ERROR("Unable to allocate enough memory for %u pfns",
               pages->count);
@@ -412,7 +411,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
                            &pages->pfn[pages->count]);
  err:
     free(types);
-    free(pfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14439.35769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbZ-0004Qg-Kb; Thu, 29 Oct 2020 17:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14439.35769; Thu, 29 Oct 2020 17:20:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbZ-0004Q5-8n; Thu, 29 Oct 2020 17:20:53 +0000
Received: by outflank-mailman (input) for mailman id 14439;
 Thu, 29 Oct 2020 17:20:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbX-0003MD-Nw
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:51 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.83])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8acf577a-2151-4411-8110-51fcf5b5d3c3;
 Thu, 29 Oct 2020 17:20:21 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKE3fO
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:14 +0100 (CET)
X-Inumbo-ID: 8acf577a-2151-4411-8110-51fcf5b5d3c3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992020;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=+/YrJrdWLFQBmx4Fjz9uQTzJVj46LYQgpP9ngHfsNNY=;
	b=KgAiCTG6MTTv88AsoblIMcLuAeIHAYJP7vdff65WN3i7lIBdymfh1dFxmBOfxnULbH
	9XrbAvJwsQqAwevBUO1V1rtPijyqQZDN1pvm9RCfmDunnbV7ZPjknZuyopbMi823g37e
	k0u0nDAAmE6c03KnaivQqqL4nvuD53P67c2LhoeoWqIko6wSE96tShAK2NrIWlMnEmY3
	5n+PNyTU60TPWdfAwkytJxmBevM+1vXpcJj48p08Rg0jKpaZGT6juwBBfnQivxhGMdUV
	2Pd7oGzcugqF7srBMib1aR2QPn/21M+FdvDSd/7Zba5/gXuYv+k+Aru3WjYLO/2NmtwC
	My3w==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 16/23] tools/guest: restore: move types array
Date: Thu, 29 Oct 2020 18:19:56 +0100
Message-Id: <20201029172004.17219-17-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path by moving the types array into
preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  |  1 +
 tools/libs/guest/xg_sr_restore.c | 12 +-----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 516d9b9fb5..0ceecb289d 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -226,6 +226,7 @@ struct xc_sr_save_arrays {
 struct xc_sr_restore_arrays {
     /* handle_page_data */
     xen_pfn_t pfns[MAX_BATCH_SIZE];
+    uint32_t types[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 7d1447e86c..7729071e41 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -316,7 +316,7 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     int rc = -1;
 
     xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
-    uint32_t *types = NULL, type;
+    uint32_t *types = ctx->restore.m->types, type;
 
     /*
      * v2 compatibility only exists for x86 streams.  This is a bit of a
@@ -363,14 +363,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    types = malloc(pages->count * sizeof(*types));
-    if ( !types )
-    {
-        ERROR("Unable to allocate enough memory for %u pfns",
-              pages->count);
-        goto err;
-    }
-
     for ( i = 0; i < pages->count; ++i )
     {
         pfn = pages->pfn[i] & PAGE_DATA_PFN_MASK;
@@ -410,8 +402,6 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     rc = process_page_data(ctx, pages->count, pfns, types,
                            &pages->pfn[pages->count]);
  err:
-    free(types);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14441.35787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbe-0004ba-21; Thu, 29 Oct 2020 17:20:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14441.35787; Thu, 29 Oct 2020 17:20:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbd-0004bN-Rn; Thu, 29 Oct 2020 17:20:57 +0000
Received: by outflank-mailman (input) for mailman id 14441;
 Thu, 29 Oct 2020 17:20:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbc-0003MC-MN
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:56 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [81.169.146.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c76ea11-9640-4cc7-aaec-5b32a0503c23;
 Thu, 29 Oct 2020 17:20:21 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKF3fP
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:15 +0100 (CET)
X-Inumbo-ID: 3c76ea11-9640-4cc7-aaec-5b32a0503c23
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992020;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=7yf5VgvfryEVAPtE1hdfDVCjjHFoZWgCpzJbwfomFp0=;
	b=F1zOdzX4p3t36MPCxlIyOJl3UjwfgkUMdjSGnCnrZM1vBjOziZWe4u7r0N1usI9uxI
	/wAvj56seYsFj6DLNdrq/GRUelq8pKby205Sa/hAl+tLzu5GRZDztj9HqBdh3n1roAja
	+B0EjX80SKUXiyshn72T84QMG1H8gBxT46K8WQaN534jyMgmID8f/ZMhI10Ur2n83aF9
	/xJP6sK3L0YLKjepBC/Nh7fCgdXOwB6PFwNM6ZBKnD3R8fliR+hjj8V3R2KyMATJlaDV
	Kwwp78xhCIdUKendWi+7nepwG/cW1uya5Lxqbc5BY+w2YiFHPR0suWeeJnonu/5htLRR
	Dx4A==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THKF3fP
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:15 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 17/23] tools/guest: restore: move mfns array
Date: Thu, 29 Oct 2020 18:19:57 +0100
Message-Id: <20201029172004.17219-18-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path: move the mfns array into preallocated space.
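The pattern used by this patch (and the following ones in the series) can be sketched as a standalone example. Names such as `struct batch_arrays`, `struct ctx` and `MAX_BATCH` are illustrative only, not the actual Xen identifiers:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative sketch: replace a per-call malloc()/free() pair with an
 * array that lives in a structure allocated once per context.  The
 * batch size is bounded, so the worst-case array can be preallocated. */
#define MAX_BATCH 1024

struct batch_arrays {
    unsigned long mfns[MAX_BATCH];   /* reused on every batch */
};

struct ctx {
    struct batch_arrays *m;          /* allocated once during setup */
};

/* Before: allocation and failure handling on every invocation. */
static int process_before(unsigned int count)
{
    unsigned long *mfns = malloc(count * sizeof(*mfns));

    if (!mfns)
        return -1;
    /* ... fill and use mfns ... */
    free(mfns);
    return 0;
}

/* After: no allocation in the hot path; only a bound check remains. */
static int process_after(struct ctx *c, unsigned int count)
{
    unsigned long *mfns = c->m->mfns;

    if (count > MAX_BATCH)
        return -1;                   /* batch larger than preallocation */
    /* ... fill and use mfns ... */
    (void)mfns;
    return 0;
}
```

The trade-off is a fixed MAX_BATCH-sized footprint per context in exchange for removing allocator calls (and their error paths) from the per-batch loop.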

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  | 2 ++
 tools/libs/guest/xg_sr_restore.c | 5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 0ceecb289d..5731a5c186 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -227,6 +227,8 @@ struct xc_sr_restore_arrays {
     /* handle_page_data */
     xen_pfn_t pfns[MAX_BATCH_SIZE];
     uint32_t types[MAX_BATCH_SIZE];
+    /* process_page_data */
+    xen_pfn_t mfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 7729071e41..3ba089f862 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -205,7 +205,7 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
                              xen_pfn_t *pfns, uint32_t *types, void *page_data)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = malloc(count * sizeof(*mfns));
+    xen_pfn_t *mfns = ctx->restore.m->mfns;
     int *map_errs = malloc(count * sizeof(*map_errs));
     int rc;
     void *mapping = NULL, *guest_page = NULL;
@@ -213,7 +213,7 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
         j,          /* j indexes the subset of pfns we decide to map. */
         nr_pages = 0;
 
-    if ( !mfns || !map_errs )
+    if ( !map_errs )
     {
         rc = -1;
         ERROR("Failed to allocate %zu bytes to process page data",
@@ -299,7 +299,6 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
         xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
 
     free(map_errs);
-    free(mfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:20:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14442.35795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbf-0004dW-1L; Thu, 29 Oct 2020 17:20:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14442.35795; Thu, 29 Oct 2020 17:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbe-0004ck-HN; Thu, 29 Oct 2020 17:20:58 +0000
Received: by outflank-mailman (input) for mailman id 14442;
 Thu, 29 Oct 2020 17:20:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbc-0003MD-OD
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:56 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d042921d-b001-4a70-b3e6-3b7021119e28;
 Thu, 29 Oct 2020 17:20:23 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKG3fS
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:16 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYBbc-0003MD-OD
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:20:56 +0000
X-Inumbo-ID: d042921d-b001-4a70-b3e6-3b7021119e28
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.102])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id d042921d-b001-4a70-b3e6-3b7021119e28;
	Thu, 29 Oct 2020 17:20:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992022;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=7l8yhNQOQPrO7fe+7mN7zL3gcTrejyHmlkzb9nVzYkU=;
	b=eeVgrae+rDElAgTheRJNRHdLvU+1Y9cxgOjEh/7sQmhVyGnKV+SkzqBExLqCJTUEN5
	m5Qt2FcBn+Z3f8HlDc3U0VdmAWyOPfn3oxnoZYxLY/aOVcEVyok6+ZbG9gdxACo3jHmC
	yqlxSbSrfSk5UGdRmXwQFnSJ0duUbycF0rDvtpJu5dds46iVZ4CoynWveUN76NNo7M3x
	YN1O4p1fypXQO5w22lrjhaRLP0qnUe8IyhVhKivzuBpiK6XnpetuHAmRcRp9i7jNGn9p
	1dX3SR6rc5Y53frY+5PKJQP/ism4dsT5C8ZYZmGVC/YwhewDYRJu1781hs/ZQn08+J6F
	1YMQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THKG3fS
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:16 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 20/23] tools/guest: restore: move pfns array in populate_pfns
Date: Thu, 29 Oct 2020 18:20:00 +0100
Message-Id: <20201029172004.17219-21-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path: move populate_pfns' pfns array into preallocated space.
Use a pp_ prefix to avoid a conflict with the pfns array used in handle_page_data.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  |  1 +
 tools/libs/guest/xg_sr_restore.c | 11 +----------
 2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 96a77b5969..3fe665b91d 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -232,6 +232,7 @@ struct xc_sr_restore_arrays {
     int map_errs[MAX_BATCH_SIZE];
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
+    xen_pfn_t pp_pfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 85a32aaed2..71b39612ee 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -139,17 +139,10 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 {
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
-        *pfns = malloc(count * sizeof(*pfns));
+        *pfns = ctx->restore.m->pp_pfns;
     unsigned int i, nr_pfns = 0;
     int rc = -1;
 
-    if ( !pfns )
-    {
-        ERROR("Failed to allocate %zu bytes for populating the physmap",
-              2 * count * sizeof(*mfns));
-        goto err;
-    }
-
     for ( i = 0; i < count; ++i )
     {
         if ( (!types ||
@@ -190,8 +183,6 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
     rc = 0;
 
  err:
-    free(pfns);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:21:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:21:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14448.35811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbj-0004pU-Q6; Thu, 29 Oct 2020 17:21:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14448.35811; Thu, 29 Oct 2020 17:21:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbj-0004p4-DL; Thu, 29 Oct 2020 17:21:03 +0000
Received: by outflank-mailman (input) for mailman id 14448;
 Thu, 29 Oct 2020 17:21:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbh-0003MC-MZ
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:21:01 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eafb4928-96a0-4e13-9bea-3740bdbff561;
 Thu, 29 Oct 2020 17:20:22 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKF3fQ
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:15 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYBbh-0003MC-MZ
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:21:01 +0000
X-Inumbo-ID: eafb4928-96a0-4e13-9bea-3740bdbff561
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.82])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id eafb4928-96a0-4e13-9bea-3740bdbff561;
	Thu, 29 Oct 2020 17:20:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992021;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=hpVNJcvCo1Nci9xkIIy9iyHUvD/MXxHiNddZBiksKp8=;
	b=swT6PklD29WD3en/xNHuc35eLb9dBXhbmumKEAYTOnpUMWQteAbTO/RScOoCLtkAsB
	n/WC1r2RqAabpPe8NXhmWa5jB1B6ovWajEA6u8nVdwLPDb6mvmcLBokWpB2UUcoOFLr6
	HtPOrTJbQcwyT3qA9BUyEpdOOmPG4Ep/QsRweYRfHSjmW2PuuD7nm68/rsdoi6GhCmDh
	pi2QPzqo4gfICjQpI2XDjMlMrChayjBV4GlvaOQu+h85iDz/CJVJYBQwSOSCFWGz3jeN
	l1IWCjpSqZMKtVd2bUqYYCVE0TYBoW9/m6aruohbX4r2NvtfozlCqLiSkg3J4nEx+Hzh
	ILEg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THKF3fQ
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:15 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 18/23] tools/guest: restore: move map_errs array
Date: Thu, 29 Oct 2020 18:19:58 +0100
Message-Id: <20201029172004.17219-19-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path: move the map_errs array into preallocated space.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  |  1 +
 tools/libs/guest/xg_sr_restore.c | 12 +-----------
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 5731a5c186..eba3a49877 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -229,6 +229,7 @@ struct xc_sr_restore_arrays {
     uint32_t types[MAX_BATCH_SIZE];
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
+    int map_errs[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 3ba089f862..94c329032f 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -206,21 +206,13 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
 {
     xc_interface *xch = ctx->xch;
     xen_pfn_t *mfns = ctx->restore.m->mfns;
-    int *map_errs = malloc(count * sizeof(*map_errs));
+    int *map_errs = ctx->restore.m->map_errs;
     int rc;
     void *mapping = NULL, *guest_page = NULL;
     unsigned int i, /* i indexes the pfns from the record. */
         j,          /* j indexes the subset of pfns we decide to map. */
         nr_pages = 0;
 
-    if ( !map_errs )
-    {
-        rc = -1;
-        ERROR("Failed to allocate %zu bytes to process page data",
-              count * (sizeof(*mfns) + sizeof(*map_errs)));
-        goto err;
-    }
-
     rc = populate_pfns(ctx, count, pfns, types);
     if ( rc )
     {
@@ -298,8 +290,6 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
     if ( mapping )
         xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
 
-    free(map_errs);
-
     return rc;
 }
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:21:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:21:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14449.35817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbk-0004rD-HC; Thu, 29 Oct 2020 17:21:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14449.35817; Thu, 29 Oct 2020 17:21:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBbk-0004qX-0Z; Thu, 29 Oct 2020 17:21:04 +0000
Received: by outflank-mailman (input) for mailman id 14449;
 Thu, 29 Oct 2020 17:21:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbh-0003MD-ON
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:21:01 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.101])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d291c9d-b52d-4ba4-b80c-2a309346f51a;
 Thu, 29 Oct 2020 17:20:26 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKH3fU
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:17 +0100 (CET)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYBbh-0003MD-ON
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:21:01 +0000
X-Inumbo-ID: 5d291c9d-b52d-4ba4-b80c-2a309346f51a
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.101])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5d291c9d-b52d-4ba4-b80c-2a309346f51a;
	Thu, 29 Oct 2020 17:20:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992025;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=JnSHsGBdE+hUonxO+Hv9VJUmNC+IMHKghPQX1ClFpdo=;
	b=YRBZ3/NAr7/qOHfTdKRX3rJwf7ZPgs1J+6+i+D+nPgqqX0hAook0mMrH2X8FcI8zoY
	I2DGeYvBxRHpNe9y2mJkqv8iYTyiPBrZq4SLo3pCctY/G8cTx/clzJX4xJp1g7L5v9tS
	xgkpZFJeyjIC7KYWtD4Lidl2AJXL7OjkA3HFtv8KzczmNYoONhwZ4exhEJoryapFwGze
	xTGnDewSHK/lIf+Wor46FA02VtulB2TE8O+SKa5a3F61b5V8fOMV6SigZpXK2NuMzmIX
	R0OR6Ke2Zrvs9slDpj9NbWo6UDmT/EtBoq8vl+8P/mQumkA8Lwb8QEfYUj5CX8cUAicK
	PCkA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THKH3fU
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate);
	Thu, 29 Oct 2020 18:20:17 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 22/23] tools/guest: restore: split handle_page_data
Date: Thu, 29 Oct 2020 18:20:02 +0100
Message-Id: <20201029172004.17219-23-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

handle_page_data must be able to read directly into mapped guest memory.
This will avoid unnecessary memcpy calls for data that can be consumed verbatim.

Split the various steps of record processing:
- move processing to handle_buffered_page_data
- adjust xenforeignmemory_map to set errno in case of failure
- adjust verify mode to set errno in case of failure

This change is preparation for future changes in handle_page_data;
no change in behavior is intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  |   9 +
 tools/libs/guest/xg_sr_restore.c | 343 ++++++++++++++++++++-----------
 2 files changed, 231 insertions(+), 121 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 66d1b0dfe6..7ec8867b88 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -230,9 +230,14 @@ struct xc_sr_restore_arrays {
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
+    void *guest_data[MAX_BATCH_SIZE];
+
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
     xen_pfn_t pp_pfns[MAX_BATCH_SIZE];
+
+    /* Must be the last member */
+    struct xc_sr_rec_page_data_header pages;
 };
 
 struct xc_sr_context
@@ -323,7 +328,11 @@ struct xc_sr_context
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
+            void *verify_buf;
+
             struct xc_sr_restore_arrays *m;
+            void *guest_mapping;
+            uint32_t nr_mapped_pages;
         } restore;
     };
 
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 93f69b9ba8..060f3d1f4e 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -186,123 +186,18 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
     return rc;
 }
 
-/*
- * Given a list of pfns, their types, and a block of page data from the
- * stream, populate and record their types, map the relevant subset and copy
- * the data into the guest.
- */
-static int process_page_data(struct xc_sr_context *ctx, unsigned int count,
-                             xen_pfn_t *pfns, uint32_t *types, void *page_data)
+static int handle_static_data_end_v2(struct xc_sr_context *ctx)
 {
-    xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = ctx->restore.m->mfns;
-    int *map_errs = ctx->restore.m->map_errs;
-    int rc;
-    void *mapping = NULL, *guest_page = NULL;
-    unsigned int i, /* i indexes the pfns from the record. */
-        j,          /* j indexes the subset of pfns we decide to map. */
-        nr_pages = 0;
-
-    rc = populate_pfns(ctx, count, pfns, types);
-    if ( rc )
-    {
-        ERROR("Failed to populate pfns for batch of %u pages", count);
-        goto err;
-    }
-
-    for ( i = 0; i < count; ++i )
-    {
-        ctx->restore.ops.set_page_type(ctx, pfns[i], types[i]);
-
-        if ( page_type_has_stream_data(types[i]) == true )
-            mfns[nr_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, pfns[i]);
-    }
-
-    /* Nothing to do? */
-    if ( nr_pages == 0 )
-        goto done;
-
-    mapping = guest_page = xenforeignmemory_map(
-        xch->fmem, ctx->domid, PROT_READ | PROT_WRITE,
-        nr_pages, mfns, map_errs);
-    if ( !mapping )
-    {
-        rc = -1;
-        PERROR("Unable to map %u mfns for %u pages of data",
-               nr_pages, count);
-        goto err;
-    }
-
-    for ( i = 0, j = 0; i < count; ++i )
-    {
-        if ( page_type_has_stream_data(types[i]) == false )
-            continue;
-
-        if ( map_errs[j] )
-        {
-            rc = -1;
-            ERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") failed with %d",
-                  pfns[i], mfns[j], types[i], map_errs[j]);
-            goto err;
-        }
-
-        /* Undo page normalisation done by the saver. */
-        rc = ctx->restore.ops.localise_page(ctx, types[i], page_data);
-        if ( rc )
-        {
-            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
-                  pfns[i], types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-            goto err;
-        }
-
-        if ( ctx->restore.verify )
-        {
-            /* Verify mode - compare incoming data to what we already have. */
-            if ( memcmp(guest_page, page_data, PAGE_SIZE) )
-                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
-                      pfns[i], types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
-        }
-        else
-        {
-            /* Regular mode - copy incoming data into place. */
-            memcpy(guest_page, page_data, PAGE_SIZE);
-        }
-
-        ++j;
-        guest_page += PAGE_SIZE;
-        page_data += PAGE_SIZE;
-    }
-
- done:
-    rc = 0;
-
- err:
-    if ( mapping )
-        xenforeignmemory_unmap(xch->fmem, mapping, nr_pages);
-
-    return rc;
-}
+    int rc = 0;
 
-/*
- * Validate a PAGE_DATA record from the stream, and pass the results to
- * process_page_data() to actually perform the legwork.
- */
-static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
-{
+#if defined(__i386__) || defined(__x86_64__)
     xc_interface *xch = ctx->xch;
-    struct xc_sr_rec_page_data_header *pages = rec->data;
-    unsigned int i, pages_of_data = 0;
-    int rc = -1;
-
-    xen_pfn_t *pfns = ctx->restore.m->pfns, pfn;
-    uint32_t *types = ctx->restore.m->types, type;
-
     /*
      * v2 compatibility only exists for x86 streams.  This is a bit of a
      * bodge, but it is less bad than duplicating handle_page_data() between
      * different architectures.
      */
-#if defined(__i386__) || defined(__x86_64__)
+
     /* v2 compat.  Infer the position of STATIC_DATA_END. */
     if ( ctx->restore.format_version < 3 && !ctx->restore.seen_static_data_end )
     {
@@ -320,12 +215,26 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         ERROR("No STATIC_DATA_END seen");
         goto err;
     }
+
+    rc = 0;
+err:
 #endif
 
-    if ( rec->length < sizeof(*pages) )
+    return rc;
+}
+
+static bool verify_rec_page_hdr(struct xc_sr_context *ctx, uint32_t rec_length,
+                                 struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    bool ret = false;
+
+    errno = EINVAL;
+
+    if ( rec_length < sizeof(*pages) )
     {
         ERROR("PAGE_DATA record truncated: length %u, min %zu",
-              rec->length, sizeof(*pages));
+              rec_length, sizeof(*pages));
         goto err;
     }
 
@@ -335,13 +244,35 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         goto err;
     }
 
-    if ( rec->length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
+    if ( pages->count > MAX_BATCH_SIZE )
+    {
+        ERROR("pfn count %u in PAGE_DATA record too large", pages->count);
+        errno = E2BIG;
+        goto err;
+    }
+
+    if ( rec_length < sizeof(*pages) + (pages->count * sizeof(uint64_t)) )
     {
         ERROR("PAGE_DATA record (length %u) too short to contain %u"
-              " pfns worth of information", rec->length, pages->count);
+              " pfns worth of information", rec_length, pages->count);
         goto err;
     }
 
+    ret = true;
+
+err:
+    return ret;
+}
+
+static bool verify_rec_page_pfns(struct xc_sr_context *ctx, uint32_t rec_length,
+                                 struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    uint32_t i, pages_of_data = 0;
+    xen_pfn_t pfn;
+    uint32_t type;
+    bool ret = false;
+
     for ( i = 0; i < pages->count; ++i )
     {
         pfn = pages->pfn[i] & PAGE_DATA_PFN_MASK;
@@ -364,23 +295,183 @@ static int handle_page_data(struct xc_sr_context *ctx, struct xc_sr_record *rec)
              * have a page worth of data in the record. */
             pages_of_data++;
 
-        pfns[i] = pfn;
-        types[i] = type;
+        ctx->restore.m->pfns[i] = pfn;
+        ctx->restore.m->types[i] = type;
     }
 
-    if ( rec->length != (sizeof(*pages) +
+    if ( rec_length != (sizeof(*pages) +
                          (sizeof(uint64_t) * pages->count) +
                          (PAGE_SIZE * pages_of_data)) )
     {
         ERROR("PAGE_DATA record wrong size: length %u, expected "
-              "%zu + %zu + %lu", rec->length, sizeof(*pages),
+              "%zu + %zu + %lu", rec_length, sizeof(*pages),
               (sizeof(uint64_t) * pages->count), (PAGE_SIZE * pages_of_data));
         goto err;
     }
 
-    rc = process_page_data(ctx, pages->count, pfns, types,
-                           &pages->pfn[pages->count]);
+    ret = true;
+
+err:
+    return ret;
+}
+
+/*
+ * Populate pfns, if required
+ * Fill m->guest_data with either mapped address or NULL
+ * The caller must unmap guest_mapping
+ */
+static int map_guest_pages(struct xc_sr_context *ctx,
+                           struct xc_sr_rec_page_data_header *pages)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_restore_arrays *m = ctx->restore.m;
+    uint32_t i, p;
+    int rc;
+
+    rc = populate_pfns(ctx, pages->count, m->pfns, m->types);
+    if ( rc )
+    {
+        ERROR("Failed to populate pfns for batch of %u pages", pages->count);
+        goto err;
+    }
+
+    ctx->restore.nr_mapped_pages = 0;
+
+    for ( i = 0; i < pages->count; i++ )
+    {
+        ctx->restore.ops.set_page_type(ctx, m->pfns[i], m->types[i]);
+
+        if ( page_type_has_stream_data(m->types[i]) == false )
+        {
+            m->guest_data[i] = NULL;
+            continue;
+        }
+
+        m->mfns[ctx->restore.nr_mapped_pages++] = ctx->restore.ops.pfn_to_gfn(ctx, m->pfns[i]);
+    }
+
+    /* Nothing to do? */
+    if ( ctx->restore.nr_mapped_pages == 0 )
+        goto done;
+
+    ctx->restore.guest_mapping = xenforeignmemory_map(xch->fmem, ctx->domid,
+            PROT_READ | PROT_WRITE, ctx->restore.nr_mapped_pages,
+            m->mfns, m->map_errs);
+    if ( !ctx->restore.guest_mapping )
+    {
+        rc = -1;
+        PERROR("Unable to map %u mfns for %u pages of data",
+               ctx->restore.nr_mapped_pages, pages->count);
+        goto err;
+    }
+
+    /* Verify mapping, and assign address to pfn data */
+    for ( i = 0, p = 0; i < pages->count; i++ )
+    {
+        if ( page_type_has_stream_data(m->types[i]) == false )
+            continue;
+
+        if ( m->map_errs[p] == 0 )
+        {
+            m->guest_data[i] = ctx->restore.guest_mapping + (p * PAGE_SIZE);
+            p++;
+            continue;
+        }
+
+        errno = m->map_errs[p];
+        rc = -1;
+        PERROR("Mapping pfn %#"PRIpfn" (mfn %#"PRIpfn", type %#"PRIx32") failed",
+              m->pfns[i], m->mfns[p], m->types[i]);
+        goto err;
+    }
+
+done:
+    rc = 0;
+
+err:
+    return rc;
+}
+
+/*
+ * Handle PAGE_DATA record from an existing buffer
+ * Given a list of pfns, their types, and a block of page data from the
+ * stream, populate and record their types, map the relevant subset and copy
+ * the data into the guest.
+ */
+static int handle_buffered_page_data(struct xc_sr_context *ctx,
+                                     struct xc_sr_record *rec)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_rec_page_data_header *pages = rec->data;
+    struct xc_sr_restore_arrays *m = ctx->restore.m;
+    void *p;
+    uint32_t i;
+    int rc = -1, idx;
+
+    rc = handle_static_data_end_v2(ctx);
+    if ( rc )
+        goto err;
+
+    /* First read and verify the header */
+    if ( verify_rec_page_hdr(ctx, rec->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Then read and verify the pfn numbers */
+    if ( verify_rec_page_pfns(ctx, rec->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Map the target pfn */
+    rc = map_guest_pages(ctx, pages);
+    if ( rc )
+        goto err;
+
+    for ( i = 0, idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        p = (void *)&pages->pfn[pages->count] + (idx * PAGE_SIZE);
+        rc = ctx->restore.ops.localise_page(ctx, m->types[i], p);
+        if ( rc )
+        {
+            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
+                  m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            goto err;
+
+        }
+
+        if ( ctx->restore.verify )
+        {
+            if ( memcmp(m->guest_data[i], p, PAGE_SIZE) )
+            {
+                errno = EIO;
+                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
+                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+                goto err;
+            }
+        }
+        else
+        {
+            memcpy(m->guest_data[i], p, PAGE_SIZE);
+        }
+
+        idx++;
+    }
+
+    rc = 0;
+
  err:
+    if ( ctx->restore.guest_mapping )
+    {
+        xenforeignmemory_unmap(xch->fmem, ctx->restore.guest_mapping, ctx->restore.nr_mapped_pages);
+        ctx->restore.guest_mapping = NULL;
+    }
     return rc;
 }
 
@@ -641,12 +732,21 @@ static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_recor
         break;
 
     case REC_TYPE_PAGE_DATA:
-        rc = handle_page_data(ctx, rec);
+        rc = handle_buffered_page_data(ctx, rec);
         break;
 
     case REC_TYPE_VERIFY:
         DPRINTF("Verify mode enabled");
         ctx->restore.verify = true;
+        if ( !ctx->restore.verify_buf )
+        {
+            ctx->restore.verify_buf = malloc(MAX_BATCH_SIZE * PAGE_SIZE);
+            if ( !ctx->restore.verify_buf )
+            {
+                rc = -1;
+                PERROR("Unable to allocate verify_buf");
+            }
+        }
         break;
 
     case REC_TYPE_CHECKPOINT:
@@ -725,7 +825,8 @@ static int setup(struct xc_sr_context *ctx)
     }
     ctx->restore.allocated_rec_num = DEFAULT_BUF_RECORDS;
 
-    ctx->restore.m = malloc(sizeof(*ctx->restore.m));
+    ctx->restore.m = malloc(sizeof(*ctx->restore.m) +
+            (sizeof(*ctx->restore.m->pages.pfn) * MAX_BATCH_SIZE));
     if ( !ctx->restore.m ) {
         ERROR("Unable to allocate memory for arrays");
         rc = -1;


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:24:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:24:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14496.35851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBea-0005rI-Rh; Thu, 29 Oct 2020 17:24:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14496.35851; Thu, 29 Oct 2020 17:24:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBea-0005r6-OB; Thu, 29 Oct 2020 17:24:00 +0000
Received: by outflank-mailman (input) for mailman id 14496;
 Thu, 29 Oct 2020 17:23:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbw-0003MC-ND
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:21:16 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [85.215.255.104])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d6ec06d-e896-4655-b970-ab122dacae12;
 Thu, 29 Oct 2020 17:20:25 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKH3fV
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:17 +0100 (CET)
X-Inumbo-ID: 8d6ec06d-e896-4655-b970-ab122dacae12
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992024;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=bBxHDLx7AeMPpHXep28OEFmN0H0Z0pHfYy9VAoYWQzc=;
	b=qDoMP6d0d7wEn447iURC/emTuLvga9PR96Ko4Iv7jidNwnAlMoBUAnRGmLKo4HZLGA
	GHOgirg4mmXFYlap6+tmFd5Ax0PCMpMPTMzUiK29J/IrzDDArLcwmsNHAELVYY7r1ZTE
	XeoSFGmaiu8nGMKG9l/us2OUAYNWKdnizZuFmnjh4vohK5OWUQ5Psk0IdJ/P++i2UZzL
	HIIx55HWepCb0R8+J2HVGu0gyxQYcOdu5Qf6HumxN1ZfjMbpg0FamnYRlSSnmskzTIvx
	gf+hZWMFtprVMiitlaZlIGLimIhY1iIum9npDeoa/wSMzhXEffG/CAI+MYNWt6VI+cRm
	syTQ==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 23/23] tools/guest: restore: write data directly into guest
Date: Thu, 29 Oct 2020 18:20:03 +0100
Message-Id: <20201029172004.17219-24-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Read the incoming migration stream directly into guest memory.
This avoids the intermediate buffer allocation and copying, and the
resulting performance penalty.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  |   1 +
 tools/libs/guest/xg_sr_restore.c | 132 ++++++++++++++++++++++++++++++-
 2 files changed, 129 insertions(+), 4 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 7ec8867b88..f76af23bcc 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -231,6 +231,7 @@ struct xc_sr_restore_arrays {
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
     void *guest_data[MAX_BATCH_SIZE];
+    struct iovec iov[MAX_BATCH_SIZE];
 
     /* populate_pfns */
     xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 060f3d1f4e..2f575d7dd9 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -392,6 +392,122 @@ err:
     return rc;
 }
 
+/*
+ * Handle PAGE_DATA record from the stream.
+ * Given a list of pfns, their types, and a block of page data from the
+ * stream, populate and record their types, map the relevant subset and copy
+ * the data into the guest.
+ */
+static int handle_incoming_page_data(struct xc_sr_context *ctx,
+                                     struct xc_sr_rhdr *rhdr)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_restore_arrays *m = ctx->restore.m;
+    struct xc_sr_rec_page_data_header *pages = &m->pages;
+    uint64_t *pfn_nums = m->pages.pfn;
+    uint32_t i;
+    int rc, iov_idx;
+
+    rc = handle_static_data_end_v2(ctx);
+    if ( rc )
+        goto err;
+
+    /* First read and verify the header */
+    rc = read_exact(ctx->fd, pages, sizeof(*pages));
+    if ( rc )
+    {
+        PERROR("Could not read rec_pfn header");
+        goto err;
+    }
+
+    if ( verify_rec_page_hdr(ctx, rhdr->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Then read and verify the incoming pfn numbers */
+    rc = read_exact(ctx->fd, pfn_nums, sizeof(*pfn_nums) * pages->count);
+    if ( rc )
+    {
+        PERROR("Could not read rec_pfn data");
+        goto err;
+    }
+
+    if ( verify_rec_page_pfns(ctx, rhdr->length, pages) == false )
+    {
+        rc = -1;
+        goto err;
+    }
+
+    /* Finally read and verify the incoming pfn data */
+    rc = map_guest_pages(ctx, pages);
+    if ( rc )
+        goto err;
+
+    /* Prepare read buffers, either guest memory or the verify buffer */
+    for ( i = 0, iov_idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        m->iov[iov_idx].iov_len = PAGE_SIZE;
+        if ( ctx->restore.verify )
+            m->iov[iov_idx].iov_base = ctx->restore.verify_buf + i * PAGE_SIZE;
+        else
+            m->iov[iov_idx].iov_base = m->guest_data[i];
+        iov_idx++;
+    }
+
+    if ( !iov_idx )
+        goto done;
+
+    rc = readv_exact(ctx->fd, m->iov, iov_idx);
+    if ( rc )
+    {
+        PERROR("read of %d pages failed", iov_idx);
+        goto err;
+    }
+
+    /* Post-processing of pfn data */
+    for ( i = 0, iov_idx = 0; i < pages->count; i++ )
+    {
+        if ( !m->guest_data[i] )
+            continue;
+
+        rc = ctx->restore.ops.localise_page(ctx, m->types[i], m->iov[iov_idx].iov_base);
+        if ( rc )
+        {
+            ERROR("Failed to localise pfn %#"PRIpfn" (type %#"PRIx32")",
+                  m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            goto err;
+
+        }
+
+        if ( ctx->restore.verify )
+        {
+            if ( memcmp(m->guest_data[i], m->iov[iov_idx].iov_base, PAGE_SIZE) )
+            {
+                ERROR("verify pfn %#"PRIpfn" failed (type %#"PRIx32")",
+                      m->pfns[i], m->types[i] >> XEN_DOMCTL_PFINFO_LTAB_SHIFT);
+            }
+        }
+
+        iov_idx++;
+    }
+
+done:
+    rc = 0;
+
+err:
+    if ( ctx->restore.guest_mapping )
+    {
+        xenforeignmemory_unmap(xch->fmem, ctx->restore.guest_mapping, ctx->restore.nr_mapped_pages);
+        ctx->restore.guest_mapping = NULL;
+    }
+    return rc;
+}
+
 /*
  * Handle PAGE_DATA record from an existing buffer
  * Given a list of pfns, their types, and a block of page data from the
@@ -773,11 +889,19 @@ static int process_incoming_record_header(struct xc_sr_context *ctx, struct xc_s
     struct xc_sr_record rec;
     int rc;
 
-    rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
-    if ( rc )
-        return rc;
+    switch ( rhdr->type )
+    {
+    case REC_TYPE_PAGE_DATA:
+        rc = handle_incoming_page_data(ctx, rhdr);
+        break;
+    default:
+        rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
+        if ( rc == 0 )
+            rc = process_buffered_record(ctx, &rec);
+        break;
+    }
 
-    return process_buffered_record(ctx, &rec);
+    return rc;
 }
 
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:24:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:24:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14495.35839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBeX-0005pA-IQ; Thu, 29 Oct 2020 17:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14495.35839; Thu, 29 Oct 2020 17:23:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBeX-0005p1-FN; Thu, 29 Oct 2020 17:23:57 +0000
Received: by outflank-mailman (input) for mailman id 14495;
 Thu, 29 Oct 2020 17:23:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbm-0003MC-Mt
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:21:06 +0000
Received: from mo4-p02-ob.smtp.rzone.de (unknown [85.215.255.84])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6679421a-6802-44fa-a291-31bfe98f6a5f;
 Thu, 29 Oct 2020 17:20:22 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKG3fR
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:16 +0100 (CET)
X-Inumbo-ID: 6679421a-6802-44fa-a291-31bfe98f6a5f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992021;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=fJgEN/Sdwhd/b4yPCS/UDhsI4V1GTM00uHRSqwAWm0w=;
	b=IjVk1PEgemRk3g/+LcWdQ9aonZPORdQ2b+s2uZPEM6Hx2TOAe2MuO7Ep+0wSbYj4Km
	pXjg8y/C7hpw78RY1DYFwKOPZDEdpt6IflhlkLyq/qxx9GcF8YKDoRpWzbaFIlgLV8qM
	4+oS9o6+fa7jxZ6IISDEYeBxeFt2tRHHg1/QTXgdpb6yeD5F4cgebDuzponeBSefrjDN
	rlgNz31WfZ4+fhIWxuT6vkGNkGXAzBA49rYadTHwTG49fmp9QRPnuAhaI2MlmoBSqnL4
	ODCYnoMyOLmInMCnUIzzuijTGPS9ph5l8H33XIJ7n6CG3yNnCMIYO0TCe1dxLJIK9FvP
	B0Yg==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 19/23] tools/guest: restore: move mfns array in populate_pfns
Date: Thu, 29 Oct 2020 18:19:59 +0100
Message-Id: <20201029172004.17219-20-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an allocation from the hot path: move the mfns array used by
populate_pfns into preallocated space.
Use a pp_ prefix to avoid a conflict with the mfns array used in
handle_page_data.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.h  | 2 ++
 tools/libs/guest/xg_sr_restore.c | 5 ++---
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index eba3a49877..96a77b5969 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -230,6 +230,8 @@ struct xc_sr_restore_arrays {
     /* process_page_data */
     xen_pfn_t mfns[MAX_BATCH_SIZE];
     int map_errs[MAX_BATCH_SIZE];
+    /* populate_pfns */
+    xen_pfn_t pp_mfns[MAX_BATCH_SIZE];
 };
 
 struct xc_sr_context
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 94c329032f..85a32aaed2 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -138,12 +138,12 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
                   const xen_pfn_t *original_pfns, const uint32_t *types)
 {
     xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = malloc(count * sizeof(*mfns)),
+    xen_pfn_t *mfns = ctx->restore.m->pp_mfns,
         *pfns = malloc(count * sizeof(*pfns));
     unsigned int i, nr_pfns = 0;
     int rc = -1;
 
-    if ( !mfns || !pfns )
+    if ( !pfns )
     {
         ERROR("Failed to allocate %zu bytes for populating the physmap",
               2 * count * sizeof(*mfns));
@@ -191,7 +191,6 @@ int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
 
  err:
     free(pfns);
-    free(mfns);
 
     return rc;
 }


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:24:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:24:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14507.35863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBfA-00063S-52; Thu, 29 Oct 2020 17:24:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14507.35863; Thu, 29 Oct 2020 17:24:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBfA-00063L-1u; Thu, 29 Oct 2020 17:24:36 +0000
Received: by outflank-mailman (input) for mailman id 14507;
 Thu, 29 Oct 2020 17:24:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFbr=EE=gmail.com=niveditas98@srs-us1.protection.inumbo.net>)
 id 1kYBc6-0003MD-P8
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:21:26 +0000
Received: from mail-qk1-x736.google.com (unknown [2607:f8b0:4864:20::736])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec327819-3ee6-4007-a9f6-2165b14effc9;
 Thu, 29 Oct 2020 17:20:51 +0000 (UTC)
Received: by mail-qk1-x736.google.com with SMTP id s14so2560047qkg.11
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 10:20:51 -0700 (PDT)
Received: from rani.riverdale.lan ([2001:470:1f07:5f3::b55f])
 by smtp.gmail.com with ESMTPSA id m33sm1432097qtb.65.2020.10.29.10.20.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 29 Oct 2020 10:20:50 -0700 (PDT)
X-Inumbo-ID: ec327819-3ee6-4007-a9f6-2165b14effc9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:date:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=bgET2vKXyCysvHtuVXHC8x2bsoHr9I9pVPGZuJ2I2ok=;
        b=kCwHtNb0aDqVERdhKoxZMhicLRZ2QR3PolGpflI38e1AZzBFPmngVCvkULn12HaIpd
         xNZsE0KTf8SFsTeciNeDfLB2b8eo7aWwi+oWAp/jGZANfyHEmSTDAiXW3x245FjthbK+
         cNyymvnYwa4RgDaOm832TsIoNGN5VteNRutW9xIM3LOXzwfrz3rFIbB2qhaahGSK3KWZ
         rcjSGv+Lrv3xXtp7IhxItj2ThU4R1TxkB1brB4zm0qayhFjR52RmdpupiWUa8/437Az6
         Z5hmy1fEbRMjrq5fMw2J1qv+hvc2qIZX/q2dYGHrv1hoTM8Kgy/UG1ajYtjFlzWenf9F
         /BdA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:date:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=bgET2vKXyCysvHtuVXHC8x2bsoHr9I9pVPGZuJ2I2ok=;
        b=IuxqC6fCv9b7mVDGfgQYwtFqtcaXDbsEhHrWITqCjE/kUmTRAbDIy1XaCSnKXUsyMt
         Kj64Jww9MzeLsPpzczPyhR7US+qtgxvwo1SgYxA+D7xOG1tlPMINwDPn3a0SNhaXJOjE
         urkXazEqrCl++mQPHwra57UuUD2uVzi7DdNOTZ62t13K2eZjAUf0nSfbLqQ6ZIOuy4FC
         qWzKmo0eF6E0CvUy1FPABrUCBm5HA9M7qZmpxAZcxuZHwfC+WkWECJQXRLOC5CcXDKLJ
         kX1MvM4BH0LXlP/TOvok31Nx53AB9J8oi9UKBPOj/j5++Fml8tr6CNWhPSPB/pKcNXOv
         GKUg==
X-Gm-Message-State: AOAM531Ji7tgk5jb6HDV5MuDsCNiEXnn+JPOoFS/iitQv26srSAxTQNP
	R/YHCz9r+FLSYXK88hqm1pc=
X-Google-Smtp-Source: ABdhPJxpj2lHqddfkMCdJsg7kjoxJdmwW/wIVKjCyqCU3W25AK0ciYoXUlV7HQMe/OgzwbBLEfxGlQ==
X-Received: by 2002:a05:620a:130a:: with SMTP id o10mr4612780qkj.63.1603992050890;
        Thu, 29 Oct 2020 10:20:50 -0700 (PDT)
Sender: Arvind Sankar <niveditas98@gmail.com>
From: Arvind Sankar <nivedita@alum.mit.edu>
X-Google-Original-From: Arvind Sankar <arvind@rani.riverdale.lan>
Date: Thu, 29 Oct 2020 13:20:48 -0400
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Arvind Sankar <nivedita@alum.mit.edu>,
	David Laight <David.Laight@ACULAB.COM>,
	'Arnd Bergmann' <arnd@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"x86@kernel.org" <x86@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"platform-driver-x86@vger.kernel.org" <platform-driver-x86@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Message-ID: <20201029172048.GA2571425@rani.riverdale.lan>
References: <20201028212417.3715575-1-arnd@kernel.org>
 <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com>
 <20201029165611.GA2557691@rani.riverdale.lan>
 <93180c2d-268c-3c33-7c54-4221dfe0d7ad@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <93180c2d-268c-3c33-7c54-4221dfe0d7ad@redhat.com>

On Thu, Oct 29, 2020 at 05:59:54PM +0100, Paolo Bonzini wrote:
> On 29/10/20 17:56, Arvind Sankar wrote:
> >> For those two just add:
> >> 	struct apic *apic = x86_system_apic;
> >> before all the assignments.
> >> Less churn and much better code.
> >>
> > Why would it be better code?
> > 
> 
> I think he means the compiler produces better code, because it won't
> read the global variable repeatedly.  Not sure if that's true,(*) but I
> think I do prefer that version if Arnd wants to do that tweak.
> 
> Paolo
> 
> (*) if it is, it might also be due to Linux using -fno-strict-aliasing
> 

Nope, it doesn't read it multiple times. I guess it still assumes that
apic isn't in the middle of what it points to: it would reload the
address if the first element of *apic was modified, but doesn't for
other elements. Interesting.


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:24:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:24:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14512.35875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBfJ-00068T-Dd; Thu, 29 Oct 2020 17:24:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14512.35875; Thu, 29 Oct 2020 17:24:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBfJ-00068L-AL; Thu, 29 Oct 2020 17:24:45 +0000
Received: by outflank-mailman (input) for mailman id 14512;
 Thu, 29 Oct 2020 17:24:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYBbr-0003MC-N9
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:21:11 +0000
Received: from mo4-p03-ob.smtp.rzone.de (unknown [81.169.146.174])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0bcea0f-e8b5-44b9-9aff-c228c6f43efc;
 Thu, 29 Oct 2020 17:20:24 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THKG3fT
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 29 Oct 2020 18:20:16 +0100 (CET)
X-Inumbo-ID: b0bcea0f-e8b5-44b9-9aff-c228c6f43efc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992023;
	s=strato-dkim-0002; d=aepfle.de;
	h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
	X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=vDSMlFXnzPOXkYBGrnj7LwkLFSBsc9LnEOe7WzFsods=;
	b=pYItbwo0qq88TgAn5HU4aLCh4QxzQogCseTejXzbnL0cbd20RgqH38jj2mJtia1c3v
	y+yCnu9G1+2TtlSGdgB49SL4/J6gxS1DRNN0byKMgN4bnbzK3+E0IsXVlmvxmoMRMjOQ
	uxNmi4/LYnJ1hSVdRZg1QtxE59bjusMXx5eye3qcRfuC3cUkRFCF+d3rer9FExlodscC
	cOx5bWAnf4x4q1jvkSlvEzRNVNfiHIKxigAwCyILxoycLHS3+2RW98Fokt8oTZxqjfga
	H8GFbybybV4huVmGx05OnrDJdu83Tm9YKgfNI//z4y0nMaaDrxTbmMrRmo74m8dJ4uux
	ufVw==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v1 21/23] tools/guest: restore: split record processing
Date: Thu, 29 Oct 2020 18:20:01 +0100
Message-Id: <20201029172004.17219-22-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201029172004.17219-1-olaf@aepfle.de>
References: <20201029172004.17219-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

handle_page_data must be able to read directly into mapped guest memory.
This will avoid unnecessary memcpy calls for data which can be consumed verbatim.

Rearrange the code to allow decisions based on the incoming record.

This change is preparation for future changes in handle_page_data;
no change in behavior is intended.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libs/guest/xg_sr_common.c  | 33 ++++++++++++---------
 tools/libs/guest/xg_sr_common.h  |  4 ++-
 tools/libs/guest/xg_sr_restore.c | 49 ++++++++++++++++++++++----------
 tools/libs/guest/xg_sr_save.c    |  7 ++++-
 4 files changed, 63 insertions(+), 30 deletions(-)

diff --git a/tools/libs/guest/xg_sr_common.c b/tools/libs/guest/xg_sr_common.c
index 17567ab133..cabde4ef74 100644
--- a/tools/libs/guest/xg_sr_common.c
+++ b/tools/libs/guest/xg_sr_common.c
@@ -91,26 +91,33 @@ int write_split_record(struct xc_sr_context *ctx, struct xc_sr_record *rec,
     return -1;
 }
 
-int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
+int read_record_header(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr)
 {
     xc_interface *xch = ctx->xch;
-    struct xc_sr_rhdr rhdr;
-    size_t datasz;
 
-    if ( read_exact(fd, &rhdr, sizeof(rhdr)) )
+    if ( read_exact(fd, rhdr, sizeof(*rhdr)) )
     {
         PERROR("Failed to read Record Header from stream");
         return -1;
     }
 
-    if ( rhdr.length > REC_LENGTH_MAX )
+    if ( rhdr->length > REC_LENGTH_MAX )
     {
-        ERROR("Record (0x%08x, %s) length %#x exceeds max (%#x)", rhdr.type,
-              rec_type_to_str(rhdr.type), rhdr.length, REC_LENGTH_MAX);
+        ERROR("Record (0x%08x, %s) length %#x exceeds max (%#x)", rhdr->type,
+              rec_type_to_str(rhdr->type), rhdr->length, REC_LENGTH_MAX);
         return -1;
     }
 
-    datasz = ROUNDUP(rhdr.length, REC_ALIGN_ORDER);
+    return 0;
+}
+
+int read_record_data(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr,
+                     struct xc_sr_record *rec)
+{
+    xc_interface *xch = ctx->xch;
+    size_t datasz;
+
+    datasz = ROUNDUP(rhdr->length, REC_ALIGN_ORDER);
 
     if ( datasz )
     {
@@ -119,7 +126,7 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
         if ( !rec->data )
         {
             ERROR("Unable to allocate %zu bytes for record data (0x%08x, %s)",
-                  datasz, rhdr.type, rec_type_to_str(rhdr.type));
+                  datasz, rhdr->type, rec_type_to_str(rhdr->type));
             return -1;
         }
 
@@ -128,18 +135,18 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
             free(rec->data);
             rec->data = NULL;
             PERROR("Failed to read %zu bytes of data for record (0x%08x, %s)",
-                   datasz, rhdr.type, rec_type_to_str(rhdr.type));
+                   datasz, rhdr->type, rec_type_to_str(rhdr->type));
             return -1;
         }
     }
     else
         rec->data = NULL;
 
-    rec->type   = rhdr.type;
-    rec->length = rhdr.length;
+    rec->type   = rhdr->type;
+    rec->length = rhdr->length;
 
     return 0;
-};
+}
 
 static void __attribute__((unused)) build_assertions(void)
 {
diff --git a/tools/libs/guest/xg_sr_common.h b/tools/libs/guest/xg_sr_common.h
index 3fe665b91d..66d1b0dfe6 100644
--- a/tools/libs/guest/xg_sr_common.h
+++ b/tools/libs/guest/xg_sr_common.h
@@ -475,7 +475,9 @@ static inline int write_record(struct xc_sr_context *ctx,
  *
  * On failure, the contents of the record structure are undefined.
  */
-int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
+int read_record_header(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr);
+int read_record_data(struct xc_sr_context *ctx, int fd, struct xc_sr_rhdr *rhdr,
+                     struct xc_sr_record *rec);
 
 /*
  * This would ideally be private in restore.c, but is needed by
diff --git a/tools/libs/guest/xg_sr_restore.c b/tools/libs/guest/xg_sr_restore.c
index 71b39612ee..93f69b9ba8 100644
--- a/tools/libs/guest/xg_sr_restore.c
+++ b/tools/libs/guest/xg_sr_restore.c
@@ -471,7 +471,7 @@ static int send_checkpoint_dirty_pfn_list(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec);
+static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_record *rec);
 static int handle_checkpoint(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
@@ -510,7 +510,7 @@ static int handle_checkpoint(struct xc_sr_context *ctx)
 
         for ( i = 0; i < ctx->restore.buffered_rec_num; i++ )
         {
-            rc = process_record(ctx, &ctx->restore.buffered_records[i]);
+            rc = process_buffered_record(ctx, &ctx->restore.buffered_records[i]);
             if ( rc )
                 goto err;
         }
@@ -571,10 +571,11 @@ static int handle_checkpoint(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
+static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_rhdr *rhdr)
 {
     xc_interface *xch = ctx->xch;
     unsigned int new_alloc_num;
+    struct xc_sr_record rec;
     struct xc_sr_record *p;
 
     if ( ctx->restore.buffered_rec_num >= ctx->restore.allocated_rec_num )
@@ -592,8 +593,13 @@ static int buffer_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
         ctx->restore.allocated_rec_num = new_alloc_num;
     }
 
+    if ( read_record_data(ctx, ctx->fd, rhdr, &rec) )
+    {
+        return -1;
+    }
+
     memcpy(&ctx->restore.buffered_records[ctx->restore.buffered_rec_num++],
-           rec, sizeof(*rec));
+           &rec, sizeof(rec));
 
     return 0;
 }
@@ -624,7 +630,7 @@ int handle_static_data_end(struct xc_sr_context *ctx)
     return rc;
 }
 
-static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
+static int process_buffered_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
 {
     xc_interface *xch = ctx->xch;
     int rc = 0;
@@ -662,6 +668,19 @@ static int process_record(struct xc_sr_context *ctx, struct xc_sr_record *rec)
     return rc;
 }
 
+static int process_incoming_record_header(struct xc_sr_context *ctx, struct xc_sr_rhdr *rhdr)
+{
+    struct xc_sr_record rec;
+    int rc;
+
+    rc = read_record_data(ctx, ctx->fd, rhdr, &rec);
+    if ( rc )
+        return rc;
+
+    return process_buffered_record(ctx, &rec);
+}
+
+
 static int setup(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
@@ -745,7 +764,7 @@ static void cleanup(struct xc_sr_context *ctx)
 static int restore(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
-    struct xc_sr_record rec;
+    struct xc_sr_rhdr rhdr;
     int rc, saved_rc = 0, saved_errno = 0;
 
     IPRINTF("Restoring domain");
@@ -756,7 +775,7 @@ static int restore(struct xc_sr_context *ctx)
 
     do
     {
-        rc = read_record(ctx, ctx->fd, &rec);
+        rc = read_record_header(ctx, ctx->fd, &rhdr);
         if ( rc )
         {
             if ( ctx->restore.buffer_all_records )
@@ -766,25 +785,25 @@ static int restore(struct xc_sr_context *ctx)
         }
 
         if ( ctx->restore.buffer_all_records &&
-             rec.type != REC_TYPE_END &&
-             rec.type != REC_TYPE_CHECKPOINT )
+             rhdr.type != REC_TYPE_END &&
+             rhdr.type != REC_TYPE_CHECKPOINT )
         {
-            rc = buffer_record(ctx, &rec);
+            rc = buffer_record(ctx, &rhdr);
             if ( rc )
                 goto err;
         }
         else
         {
-            rc = process_record(ctx, &rec);
+            rc = process_incoming_record_header(ctx, &rhdr);
             if ( rc == RECORD_NOT_PROCESSED )
             {
-                if ( rec.type & REC_TYPE_OPTIONAL )
+                if ( rhdr.type & REC_TYPE_OPTIONAL )
                     DPRINTF("Ignoring optional record %#x (%s)",
-                            rec.type, rec_type_to_str(rec.type));
+                            rhdr.type, rec_type_to_str(rhdr.type));
                 else
                 {
                     ERROR("Mandatory record %#x (%s) not handled",
-                          rec.type, rec_type_to_str(rec.type));
+                          rhdr.type, rec_type_to_str(rhdr.type));
                     rc = -1;
                     goto err;
                 }
@@ -795,7 +814,7 @@ static int restore(struct xc_sr_context *ctx)
                 goto err;
         }
 
-    } while ( rec.type != REC_TYPE_END );
+    } while ( rhdr.type != REC_TYPE_END );
 
  remus_failover:
     if ( ctx->stream_type == XC_STREAM_COLO )
diff --git a/tools/libs/guest/xg_sr_save.c b/tools/libs/guest/xg_sr_save.c
index 804e4ccb3a..9be5e8c52b 100644
--- a/tools/libs/guest/xg_sr_save.c
+++ b/tools/libs/guest/xg_sr_save.c
@@ -590,6 +590,7 @@ static int send_memory_live(struct xc_sr_context *ctx)
 static int colo_merge_secondary_dirty_bitmap(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
+    struct xc_sr_rhdr rhdr;
     struct xc_sr_record rec;
     uint64_t *pfns = NULL;
     uint64_t pfn;
@@ -598,7 +599,11 @@ static int colo_merge_secondary_dirty_bitmap(struct xc_sr_context *ctx)
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
-    rc = read_record(ctx, ctx->save.recv_fd, &rec);
+    rc = read_record_header(ctx, ctx->save.recv_fd, &rhdr);
+    if ( rc )
+        goto err;
+
+    rc = read_record_data(ctx, ctx->save.recv_fd, &rhdr, &rec);
     if ( rc )
         goto err;
 


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:31:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:31:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14528.35887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBlN-00079X-9M; Thu, 29 Oct 2020 17:31:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14528.35887; Thu, 29 Oct 2020 17:31:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBlN-00079Q-5U; Thu, 29 Oct 2020 17:31:01 +0000
Received: by outflank-mailman (input) for mailman id 14528;
 Thu, 29 Oct 2020 17:30:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hwhy=EE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kYBlL-00079K-Qd
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:30:59 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca475892-3348-4a95-9715-74c3628a907d;
 Thu, 29 Oct 2020 17:30:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603992658;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=UCywN/RX+8t1lgf/RGUgEx1VaXzugQ4kDhApv8r9oUs=;
  b=MIRXUvZ8UOH8VBJ8p1ITE/30LWC6GxthzAoqs5h79h9v5Bvmy2NVhRjM
   nBhxPzWXAc9HigAAmLip34Lu740XnWDG4BUkdh6N7VFqqp/ffdmNK7ym3
   bh4gumoV4LfXtAeZxdekabMuRB4APwRSsB+FBSj8N2aZAHfEltc8Gwes9
   w=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 6rp4OESqtVocJTDgZ8qwPP2DQJ00UTd4rQg7uSPQFt8F6Irkne/m+EVafVFuIAIc6LKC58Rxpb
 g+lBE0gEySgfGu+TmP0UA0r0aSQxhtJzlxC2FZmPOmuiY2IMYIVg8o6DAsnfQl8qccDi9WH9wz
 FleDqc36yzdYdDr4MwE0Wa9++jjx7/VgGb/+4GgPs9NOiKevWX4Bq0Ar2HlonBx0PdQoPSsHya
 yi9Wx6fHjUZvymJqfUMxWY3xTIpPZau/swssvX0evDtlKEVPVhLMbfzQWKI37LK8W1HNObUeN7
 R0E=
X-SBRS: None
X-MesageID: 31179790
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,430,1596513600"; 
   d="scan'208";a="31179790"
Subject: Re: [PATCH] x86emul: support AVX-VNNI
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <bead640a-e75e-8352-cd3d-5986386cab3a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7232319d-bc8b-8aec-d19e-e947f13c9e86@citrix.com>
Date: Thu, 29 Oct 2020 17:30:51 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <bead640a-e75e-8352-cd3d-5986386cab3a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 29/10/2020 15:01, Jan Beulich wrote:
> These are VEX-encoded equivalents of the EVEX-encoded AVX512-VNNI ISA
> extension.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:41:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:41:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14542.35915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBvF-0008E0-LQ; Thu, 29 Oct 2020 17:41:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14542.35915; Thu, 29 Oct 2020 17:41:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBvF-0008Dr-HL; Thu, 29 Oct 2020 17:41:13 +0000
Received: by outflank-mailman (input) for mailman id 14542;
 Thu, 29 Oct 2020 17:41:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hwhy=EE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kYBvF-0008CJ-37
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:41:13 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8aa08b42-43ad-4a44-8b0a-41b9318e4dc6;
 Thu, 29 Oct 2020 17:41:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603993270;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=XELrZl0g2ovrQi4NpNOv5cCb24Wat3i5Csq+gJWLytE=;
  b=eBNnR/rxmgR4VgBKK1PO7P9ORvhbzko0t11rfKR2VNHhwTCCW2aSFkS/
   6U7gGLYVN0l0hc2dxYFuwBYWiLqN0wbXjl+PGWoZVvwf/oNevUju+0Cex
   mOC2ARB1XsJdgbFhXLmL1Grn5ozP0lc8qWimbhMlzCGNDqXJWKI0qZ3Mr
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: TCsNK4BtfXYqtUMksYkQy4XF6iMzm/s384ySHhqZjFsbHl7HLF926UjH3GQ1qkRtnkEuT09pVn
 eBqIpXCc8h2FlYVTlZLKW3Wrcljk4+GmiFlNlcAiT+xclF+2hil/D/thliFCrNPUfPnlEFg8xj
 9ZoOIUuzVXM9PXr6t7wA86EmXJCqLT6bZe/Chz0SDjKgHT/FIIvaKFdAAHwyDBVgAD2cBOqrqN
 56AeT6FZlwU3WgQBFCtTFeIETK21kZpZECLvk23NX+TI75wLChaMu5TmRh6njotGZmjCHNw1y8
 YIU=
X-SBRS: None
X-MesageID: 30093419
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,430,1596513600"; 
   d="scan'208";a="30093419"
Subject: Re: [PATCH 2/2] x86: PV shim doesn't need GRANT_TABLE
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <94c18ebf-632d-8a2c-220c-c31d6e37eb55@suse.com>
 <17fb04fb-99b6-7e20-0583-421ebb666ddc@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <28c10a7b-a55a-faea-e4bc-e9e007c1140e@citrix.com>
Date: Thu, 29 Oct 2020 17:40:53 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <17fb04fb-99b6-7e20-0583-421ebb666ddc@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 29/10/2020 17:10, Jan Beulich wrote:
> The only reference into the code controlled by this option is from the
> hypercall table, and that hypercall table entry gets overwritten.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:41:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:41:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14541.35902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBvB-0008CW-Bv; Thu, 29 Oct 2020 17:41:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14541.35902; Thu, 29 Oct 2020 17:41:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBvB-0008CP-8n; Thu, 29 Oct 2020 17:41:09 +0000
Received: by outflank-mailman (input) for mailman id 14541;
 Thu, 29 Oct 2020 17:41:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hwhy=EE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kYBvA-0008CJ-6k
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:41:08 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8cc04f88-2ecd-4c9d-8be5-b8d61ad6589f;
 Thu, 29 Oct 2020 17:41:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603993266;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=XK9v4RmpfHkc84S2Lds000zmEHI8x7nt8fjLRmZIgtc=;
  b=fSxwJR5gOTw93GZgyTyGO4h/6uGpHV1fLkKoHtyUtGTU+RMtuBKW/19Y
   dXbQqJIFlx5nN9rTv5RR2pg1s7CgLYfiBaYreiLmIrziiNRULdDaco8Yu
   9EYTrqucgLqr+eXqVXnbcSPUk0U9du84+0qtfzNrC7tmUR6TOxEcPPWUd
   M=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: AVuIuWTOedBXcQ8yufK5HrQggcd+UAiQyaItvFNi7wWBcxRSrSvnow9S2BTqITy7pcliiTYOsh
 liSlvYp1XxlO1diJ9pYYEpHZjAzuFICpDwOO7X0zBMVLSth08QoXdccnZwxQpwrkCUZSGS1saN
 rQ5KBLX5lYyXCTfQaRffcHoCu1Q56FczqKhO4P03mX1SMj/vle8InZjV/8/di5gB0HixEEEKPU
 TCFdnRepIRk1NOyT99sBqhopWuq71g3mDyI4MQ3I7dSqK71ChN2DsgqhKbR9pU+58f+1Whrhds
 Hi0=
X-SBRS: None
X-MesageID: 30425211
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,430,1596513600"; 
   d="scan'208";a="30425211"
Subject: Re: [PATCH 1/2] x86: fix build of PV shim when !GRANT_TABLE
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <94c18ebf-632d-8a2c-220c-c31d6e37eb55@suse.com>
 <333de96e-5f60-dc87-7668-e8027dcfbcdd@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c8172ed7-cda1-a47f-b21c-8d996208723f@citrix.com>
Date: Thu, 29 Oct 2020 17:40:04 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <333de96e-5f60-dc87-7668-e8027dcfbcdd@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 29/10/2020 17:09, Jan Beulich wrote:
> To do its compat translation, shim code needs to include the compat
> header. For this to work, this header first of all needs to be
> generated.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 17:45:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 17:45:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14554.35930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBzX-00009g-8W; Thu, 29 Oct 2020 17:45:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14554.35930; Thu, 29 Oct 2020 17:45:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYBzX-00009Z-5X; Thu, 29 Oct 2020 17:45:39 +0000
Received: by outflank-mailman (input) for mailman id 14554;
 Thu, 29 Oct 2020 17:45:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hwhy=EE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kYBzV-00009U-RZ
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 17:45:37 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ad4ce39-f4db-4bed-aa96-f4ab9cc7af4e;
 Thu, 29 Oct 2020 17:45:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603993536;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=TEzmsFvs5V+BQzfUO38TjsJ2AjAGtch8NefQjs2tR1c=;
  b=SLiM+83+JA226t8WFbi9k38+q2HXcQRXbp6VKsilp3q3SEi60twO338Y
   HObQtS1TnRVbbgy2yc+yjcHWqkOlxtGMHyMhr2ArkwVnbw0RG3YzQ8jHV
   US6JivbGQsva8kygOO67+pLLEC0dLGBYEfd7AvTamPKPr/cgS2jHWIHed
   8=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: onNk+iUiQ2s5vvviSkBV8onCPxtCZrZrNegG6uE49Xn9rTber9OnZd3mF/kMfmH8EuaGmGnBDU
 4+vJcHfjTlUGfQPLnhdbi3kqZ9b6mryCcjnTTQF7vW3OYVDNm+osqA+itbgXRfOqsCUFIzjh5y
 gL908PsIUcUOJVlRcYqDEU2WE+Hsc2UQFLEoAw3s6OkNcygntqQvJ6taaAwWSZ6X2KomxJcxTa
 yUbdxAuCCgjIPewWErM4Y2pww6u3t14E/feW559IasfQi3IlTX7m21aoStwYTk78yDmZAJlZeL
 mWw=
X-SBRS: None
X-MesageID: 31181329
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,430,1596513600"; 
   d="scan'208";a="31181329"
Subject: Re: [PATCH] x86/HVM: send mapcache invalidation request to qemu
 regardless of preemption
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <d33721a8-af91-7efc-b954-1d775bd4e35c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ae68944c-ddbd-53b8-3725-376abc9013ae@citrix.com>
Date: Thu, 29 Oct 2020 17:45:31 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <d33721a8-af91-7efc-b954-1d775bd4e35c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 29/10/2020 15:14, Jan Beulich wrote:
> Even if only part of a hypercall completed before getting preempted,
> invalidation ought to occur. Therefore fold the two return statements.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 18:49:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 18:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14574.35973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYCyj-0005e4-9B; Thu, 29 Oct 2020 18:48:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14574.35973; Thu, 29 Oct 2020 18:48:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYCyj-0005dx-5Q; Thu, 29 Oct 2020 18:48:53 +0000
Received: by outflank-mailman (input) for mailman id 14574;
 Thu, 29 Oct 2020 18:48:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm+X=EE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kYCyh-0005ds-L2
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 18:48:51 +0000
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55e4c43a-eabd-457b-889b-0ad6f0345e5f;
 Thu, 29 Oct 2020 18:48:50 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id a72so784205wme.5
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 11:48:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=N4de3iPKUftnlFpf5FnF9kPaolR5meis5keMRhND+JY=;
        b=Pmc7WdVth87fJUeF9gvezBoM39x722fkjjp2rwBe4iDwf1FKYDFuDqevwYlvdDwg+1
         Uk/mQ8eAGsGNjIV5+g5lRDS1HxrCrklwdMUXkfHs5F8eGt28QKghvcWMpiYIm9tmT+GD
         kylGlJzWM10+5K3VLC6wR6Swnwub+VnL1NqrGrplOG7+/CbMsYzyVrEJJDnP4l5t9jP2
         APoyUa4NAYiEdONV/VgUZGdYHJ0Xdv0VigkimLvQkcQNPpnb5n3sTJ7CRpTAVqqapfmU
         K8lbl1NePf1qIENQ+4FgoUsxkl7k2edi1ReMcFwz9ZPJLkjunXJiDdQEE+fXWACGOdwP
         p7tg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=N4de3iPKUftnlFpf5FnF9kPaolR5meis5keMRhND+JY=;
        b=ge73Udl2FRYRaCfo0zGyvuztSQL6MJT7+YLaer1+dRArS+WvKUKugK+tNA6UOHNN1w
         w9zMjXLDn77/NNe0r8VCNsqo5y2OaRZ108z7JkEr23/dm5QThdEiz/cbsbj68+HFu33Q
         0rfvGyCDzDMIl9jwb3UOFx4SLgfEuf7zGLmXuiccLC3qBWl5Su/RT0d9McAfJ8Hd7gK+
         P2G6mHukq51VUwRPGCyGQ56TQShB5JzPcLPqYHo603GJuBfHtGvz86XBfkYK4j/bSRBN
         jPEbefgVSV8Eih7IorsEkzJ7tbdJWnWVhxMpAkZcRQbiEA35JOVLHFHgOU+/d2v5Z1s3
         ajFw==
X-Gm-Message-State: AOAM5337WE38lc2fGfHvHhwGiPSZt1cpAr7dxWJP8Z8WNLVyTrNhYkBc
	JdpLlquPwZjCGkans0uYZfVvkwxt5Os7mUvDk9Y=
X-Google-Smtp-Source: ABdhPJy//taBYNrhU1ensXWXY2Tv5+OM89IPdVdRGTvQl3nVRLm4AbABsDjwxZE2bwTmElAaJNjlgLCcO0lkBd8+Ex0=
X-Received: by 2002:a1c:acc1:: with SMTP id v184mr223201wme.63.1603997329636;
 Thu, 29 Oct 2020 11:48:49 -0700 (PDT)
MIME-Version: 1.0
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
In-Reply-To: <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Thu, 29 Oct 2020 20:48:38 +0200
Message-ID: <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Julien Grall <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tim Deegan <tim@xen.org>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Content-Type: multipart/alternative; boundary="0000000000000dc0df05b2d3bb4e"

--0000000000000dc0df05b2d3bb4e
Content-Type: text/plain; charset="UTF-8"

On Thu, Oct 29, 2020 at 9:42 AM Masami Hiramatsu <
masami.hiramatsu@linaro.org> wrote:

> Hi Oleksandr,
>
Hi Masami,

[sorry for the possible format issue]


>
> I would like to try this on my arm64 board.
>
Glad to hear you are interested in this topic.


>
> According to your comments in the patch, I made this config file.
> # cat debian.conf
> name = "debian"
> type = "pvh"
> vcpus = 8
> memory = 512
> kernel = "/opt/agl/vmlinuz-5.9.0-1-arm64"
> ramdisk = "/opt/agl/initrd.img-5.9.0-1-arm64"
> cmdline= "console=hvc0 earlyprintk=xen root=/dev/xvda1 rw"
> disk = [ '/opt/agl/debian.qcow2,qcow2,hda' ]
> vif = [ 'mac=00:16:3E:74:3d:76,bridge=xenbr0' ]
> virtio = 1
> vdisk = [ 'backend=Dom0, disks=ro:/dev/sda1' ]
>
> And tried to boot a DomU, but I got below error.
>
> # xl create -c debian.conf
> Parsing config from debian.conf
> libxl: error: libxl_create.c:1863:domcreate_attach_devices: Domain
> 1:unable to add virtio_disk devices
> libxl: error: libxl_domain.c:1218:destroy_domid_pci_done: Domain
> 1:xc_domain_pause failed
> libxl: error: libxl_dom.c:39:libxl__domain_type: unable to get domain
> type for domid=1
> libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
> 1:Unable to destroy guest
> libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
> 1:Destruction of domain failed
>
>
> Could you tell me how I can test it?
>

I assume this is due to the missing virtio-disk backend, which I haven't
shared yet because I focused on the IOREQ/DM support on Arm first.
Could you wait a little bit? I am going to share it soon.


-- 
Regards,

Oleksandr Tyshchenko

--0000000000000dc0df05b2d3bb4e--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 19:03:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 19:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14582.35993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYDDD-0007Ol-M8; Thu, 29 Oct 2020 19:03:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14582.35993; Thu, 29 Oct 2020 19:03:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYDDD-0007Oe-J5; Thu, 29 Oct 2020 19:03:51 +0000
Received: by outflank-mailman (input) for mailman id 14582;
 Thu, 29 Oct 2020 19:03:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/pEi=EE=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1kYDDC-0007OZ-9F
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 19:03:50 +0000
Received: from mail-io1-xd41.google.com (unknown [2607:f8b0:4864:20::d41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d77ef8ef-a576-42f2-bd4f-ebcbb3d3e1f8;
 Thu, 29 Oct 2020 19:03:49 +0000 (UTC)
Received: by mail-io1-xd41.google.com with SMTP id u62so4708732iod.8
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 12:03:49 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:fcfe:e213:efe5:2016])
 by smtp.gmail.com with ESMTPSA id g185sm2958818ilh.35.2020.10.29.12.03.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 29 Oct 2020 12:03:47 -0700 (PDT)
X-Inumbo-ID: d77ef8ef-a576-42f2-bd4f-ebcbb3d3e1f8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=PjoN1VtNrqCy9Sd+SGhiFGDyT8M6kw114LQQ5wrE1Lk=;
        b=D4vmX/hVLY7pNXdyr/ARgCUQIxYP/peLKolCkz11AlYWKAV6E9FliRldyvgKm2W5RJ
         uWy81iRgo1aX5zEBx0XYHgTYskhIC/fsuHpZ7JTm/H8/9cm8G9jkt2Hqjr3VYhh6D0Ps
         G8WFwGj3kxrucyjP0lhi4AimJgd839QVFVgZJTC9HeaLHjuKqOxRmRSgj28ByEnfht/5
         SAoVWBoKT+aNuKZLyclqBnm44XpLVadCcr216ap5SfflL8QMsZ6I+AOdV5xCpMsRqPF0
         OQ4iLkV5G2zIAUf6fKAgCG4cgvMosGZfKGXkz9pbjPyNIWuLCQq3P3qtFtukLKvCBbzn
         YK4g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=PjoN1VtNrqCy9Sd+SGhiFGDyT8M6kw114LQQ5wrE1Lk=;
        b=pfXwWHsMtSHS2u3hk9+cFFCAyl+0wqyeYiPeOwYphrSCGIxyOUByqL7DPiVcERM7lX
         e8Ww40TLUzVak9T+xERtABIAYDBBYXgV+5uYn0CjSeqY1u5JHl+uh0l1xQdRCNVCvFfq
         8OCoY+/InI8+hqEOononNU15LmOHSvPhdDUmyTJAuqizd+Q7NAH0JvAFzSwc0J/PjNZw
         27rdJ5b0oJfPHYYsB9tOy8JmpcCslncpZ5surf42zbuOdznLmT28i/xRMniaBfNThe8a
         3IQPDIJFT+UR23hTVzEEOuYEwaYGUD/z7rEPoRzRnIOP5yJfCiVBS1sXre6FVdtXmy+W
         eqWQ==
X-Gm-Message-State: AOAM5305oB/iXwsAprNSUMJ6tSfw+MqctH2shOivbnW6tKRROyO66zkQ
	pQYVwx2tuO/DjOgCPQm7eFWNiJpAkjQ=
X-Google-Smtp-Source: ABdhPJzOhp5U22ALopDDwWzUDzIt6Hf9u9tG4Ta9al7tHl/lc6geiR7/6WbaApxLlPxn4MzcDPMGGA==
X-Received: by 2002:a02:48:: with SMTP id 69mr4618562jaa.108.1603998228652;
        Thu, 29 Oct 2020 12:03:48 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Anthony PERARD <anthony.perard@citrix.com>,
	Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] libxl: Add suppress-vmdesc to QEMU machine
Date: Thu, 29 Oct 2020 15:03:32 -0400
Message-Id: <20201029190332.31161-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The device model state saved by QMP xen-save-devices-state doesn't
include the vmdesc json.  When restoring an HVM guest,
xen-load-devices-state always triggers "Expected vmdescription section,
but got 0".  This is not a problem when the restore comes from a file.
However, when QEMU runs in a Linux stubdom and the state comes over a
console, EOF is not received.  This delays the restore, though it does
eventually complete.

Setting suppress-vmdesc skips looking for the vmdesc during restore and
avoids the wait.

QEMU 5.2 enables suppress-vmdesc by default for xenfv, but this change
sets it manually, both for xenfv and for xen_platform_pci=0 when
-machine pc is used.

QEMU commit 9850c6047b8b "migration: Allow to suppress vmdesc
submission" added suppress-vmdesc in QEMU 2.3.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---
QEMU 2.3 came out in 2015, so setting suppress-vmdesc unconditionally
should be okay...  Is this okay?
---
 tools/libs/light/libxl_dm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index d1ff35dda3..3da83259c0 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1778,9 +1778,9 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             /* Switching here to the machine "pc" which does not add
              * the xen-platform device instead of the default "xenfv" machine.
              */
-            machinearg = libxl__strdup(gc, "pc,accel=xen");
+            machinearg = libxl__strdup(gc, "pc,accel=xen,suppress-vmdesc=on");
         } else {
-            machinearg = libxl__strdup(gc, "xenfv");
+            machinearg = libxl__strdup(gc, "xenfv,suppress-vmdesc=on");
         }
         if (b_info->u.hvm.mmio_hole_memkb) {
             uint64_t max_ram_below_4g = (1ULL << 32) -
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 19:15:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 19:15:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14589.36009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYDNw-0008Ni-PT; Thu, 29 Oct 2020 19:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14589.36009; Thu, 29 Oct 2020 19:14:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYDNw-0008Nb-MK; Thu, 29 Oct 2020 19:14:56 +0000
Received: by outflank-mailman (input) for mailman id 14589;
 Thu, 29 Oct 2020 19:14:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hwhy=EE=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kYDNv-0008NW-TH
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 19:14:55 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5552412-28e1-4ea7-aa5d-b143026030fe;
 Thu, 29 Oct 2020 19:14:54 +0000 (UTC)
X-Inumbo-ID: e5552412-28e1-4ea7-aa5d-b143026030fe
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1603998894;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=GJeVn2qDW5HK4YO9Jnmz/fILLqjy67QZMfCQwG2vVNo=;
  b=ci8f/+PsbX+K5LPYRZ4Ych5z/wZYqU9jpiY5fSwzjjKjCJPw0cjVStre
   UZ6FXjKpZUMyFtidQ71XBM25rJvGexjRYp5rHLoTJDHU6znS666mFwtuY
   LU8qnUD13gY8NxwsEcZnxsg4r6PKsnuG2kAPsXiMCueRZU7JQETVC9ha2
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ozP+qusxK5Fqqqpc8FMmZA7MW4bCpUKRNVRrXTodp3MvDfKivGZCatZ1uoa6KhOOLdRzOIHkIV
 7176IdnCU6+QCUvgR/n88QJao18LAF72zJZ4LPeZzJ04PwzmkbSLbB0h6TxL41DDE02Dh93Ibt
 kKwUKcOABU+72wV4i3p6TaTSZAXdz55UfH4vgb5+UfAgy3xszPFOtTe3kmTyH65IBTpSWEWI0X
 NccwZINrf1Y6FXZp5v4MB6Co+cFYPAdjXUE2cL5LkD979zZyFgd26iQRjdxknw7tU8LiORhyTj
 6Ks=
X-SBRS: None
X-MesageID: 30433939
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,430,1596513600"; 
   d="scan'208";a="30433939"
Subject: Re: [PATCH] x86/pv: Drop stale comment in dom0_construct_pv()
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20201029140041.18343-1-andrew.cooper3@citrix.com>
 <2425da57-9c04-215e-297e-844fb11fac9a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b46a9ce5-cb84-b119-5b63-9ab63cd39b6f@citrix.com>
Date: Thu, 29 Oct 2020 19:14:48 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <2425da57-9c04-215e-297e-844fb11fac9a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 29/10/2020 14:37, Jan Beulich wrote:
> On 29.10.2020 15:00, Andrew Cooper wrote:
>> This comment has been around since c/s 1372bca0615 in 2004.  It is stale, as
>> it predates the introduction of struct vcpu.
> That commit only moved it around; it's 22a857bde9b8 afaics from
> early 2003 where it first appeared, where it had a reason:
>
>     /*
>      * WARNING: The new domain must have its 'processor' field
>      * filled in by now !!
>      */
>     phys_l2tab = ALLOC_FRAME_FROM_DOMAIN();
>     l2tab = map_domain_mem(phys_l2tab);
>     memcpy(l2tab, idle_pg_table[p->processor], PAGE_SIZE);

Oh yes - my simple search didn't spot the reformat.

>
> But yes, the comment has been stale for a long time, and I've
> been wondering a number of times what it was supposed to tell
> me. (I think it was already stale at the point the comment
> first got altered, in 3072fef54df8.)

Looks like it became stale with 99db02d5097 "Remove CPU-dependent
page-directory entries." which drops the per-cpu idle_pg_table.

>
>> It is not obvious that it was even correct at the time.  Where a vcpu (domain
>> at the time) has been configured to run is unrelated to constructing the
>> domain's initial pagetables, etc.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.  I'll update the commit message.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 19:20:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 19:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14595.36021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYDTc-0000rF-KI; Thu, 29 Oct 2020 19:20:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14595.36021; Thu, 29 Oct 2020 19:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYDTc-0000r8-HA; Thu, 29 Oct 2020 19:20:48 +0000
Received: by outflank-mailman (input) for mailman id 14595;
 Thu, 29 Oct 2020 19:20:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYDTa-0000r3-FU
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 19:20:46 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dcbbdb21-2618-4862-81ac-32b8bbdab198;
 Thu, 29 Oct 2020 19:20:44 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYDTX-0004i7-TK; Thu, 29 Oct 2020 19:20:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYDTX-0005ap-JO; Thu, 29 Oct 2020 19:20:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYDTX-0000B1-Ip; Thu, 29 Oct 2020 19:20:43 +0000
X-Inumbo-ID: dcbbdb21-2618-4862-81ac-32b8bbdab198
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1LDgPJ1DyXZ5JMzDfHf70iIHiBVzZS3/92ToNlSgy6Y=; b=WOdQTiKN1ZbU8uGEJZgjGNgaRo
	HhjSGZ/NEStnVzUfoRX8wpeLD8zksIviXty0vbxRaSqIlqBfma1lFeYYBCg94b2eIKVFZwX8TM789
	BBv5RYCH1geYcLlSAFUYDi0RY/plGyj0WFvcsr4d4gFgxvWU1NGQudGJjy5qYT0RAsYw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156277-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 156277: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e274c8bdc12eb596e55233040e8b49da27150f31
X-Osstest-Versions-That:
    xen=63199dfd3a0418f1677c6ccd7fe05b123af4610a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 19:20:43 +0000

flight 156277 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156277/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156034
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 156034
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156034
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156034
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156034
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156034
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156034
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156034
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156034
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156034
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156034
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156034
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156034
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  e274c8bdc12eb596e55233040e8b49da27150f31
baseline version:
 xen                  63199dfd3a0418f1677c6ccd7fe05b123af4610a

Last test of basis   156034  2020-10-20 13:35:54 Z    9 days
Testing same since   156262  2020-10-27 18:36:52 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   63199dfd3a..e274c8bdc1  e274c8bdc12eb596e55233040e8b49da27150f31 -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 19:54:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 19:54:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14615.36092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYDzZ-0003q7-2z; Thu, 29 Oct 2020 19:53:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14615.36092; Thu, 29 Oct 2020 19:53:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYDzY-0003q0-W7; Thu, 29 Oct 2020 19:53:48 +0000
Received: by outflank-mailman (input) for mailman id 14615;
 Thu, 29 Oct 2020 19:53:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYDzY-0003pv-00
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 19:53:48 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15ee0e1c-4e07-47a8-a4d9-ea5bf67a6df0;
 Thu, 29 Oct 2020 19:53:46 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D4CA020825;
 Thu, 29 Oct 2020 19:53:43 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kYDzY-0003pv-00
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 19:53:48 +0000
X-Inumbo-ID: 15ee0e1c-4e07-47a8-a4d9-ea5bf67a6df0
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 15ee0e1c-4e07-47a8-a4d9-ea5bf67a6df0;
	Thu, 29 Oct 2020 19:53:46 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id D4CA020825;
	Thu, 29 Oct 2020 19:53:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604001225;
	bh=Zoi/yikzohcZPcA0qdIgH94+cmvGhtI4trijgDqXqus=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jGQA2PZ0fHR+NSCU1sU7NXi2M7F3AwDqPfidMVfEcdQ2Y7kbaGwHgCVvtbDZNpQ9p
	 LpssgOtZwGLXKltBdLc1ue+UV4P3BC7tU+8DMZszAYaMr7fJseyP+ygguk2dVJV+9T
	 DQk06oHuBG3eIcyYM4qekcj+Q0pyhzczMYy4XcME=
Date: Thu, 29 Oct 2020 12:53:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Tim Deegan <tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
In-Reply-To: <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com> <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1222637645-1604001225=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1222637645-1604001225=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 29 Oct 2020, Oleksandr Tyshchenko wrote:
> On Thu, Oct 29, 2020 at 9:42 AM Masami Hiramatsu <masami.hiramatsu@linaro.org> wrote:
>       Hi Oleksandr,
> 
> Hi Masami
> 
> [sorry for the possible format issue]
>  
> 
>       I would like to try this on my arm64 board.
> 
> Glad to hear you are interested in this topic. 
>  
> 
>       According to your comments in the patch, I made this config file.
>       # cat debian.conf
>       name = "debian"
>       type = "pvh"
>       vcpus = 8
>       memory = 512
>       kernel = "/opt/agl/vmlinuz-5.9.0-1-arm64"
>       ramdisk = "/opt/agl/initrd.img-5.9.0-1-arm64"
>       cmdline= "console=hvc0 earlyprintk=xen root=/dev/xvda1 rw"
>       disk = [ '/opt/agl/debian.qcow2,qcow2,hda' ]
>       vif = [ 'mac=00:16:3E:74:3d:76,bridge=xenbr0' ]
>       virtio = 1
>       vdisk = [ 'backend=Dom0, disks=ro:/dev/sda1' ]
> 
>       And tried to boot a DomU, but I got below error.
> 
>       # xl create -c debian.conf
>       Parsing config from debian.conf
>       libxl: error: libxl_create.c:1863:domcreate_attach_devices: Domain
>       1:unable to add virtio_disk devices
>       libxl: error: libxl_domain.c:1218:destroy_domid_pci_done: Domain
>       1:xc_domain_pause failed
>       libxl: error: libxl_dom.c:39:libxl__domain_type: unable to get domain
>       type for domid=1
>       libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
>       1:Unable to destroy guest
>       libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
>       1:Destruction of domain failed
> 
> 
>       Could you tell me how can I test it?
> 
> 
> I assume it is due to the lack of the virtio-disk backend (which I haven't shared yet as I focused on the IOREQ/DM support on Arm in the
> first place).
> Could you wait a little bit, I am going to share it soon. 

Do you have a quick-and-dirty hack you can share in the meantime? Even
just on github as a special branch? It would be very useful to be able
to have a test-driver for the new feature.
--8323329-1222637645-1604001225=:12247--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 19:58:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 19:58:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14630.36146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE3f-0004FM-7u; Thu, 29 Oct 2020 19:58:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14630.36146; Thu, 29 Oct 2020 19:58:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE3f-0004FF-4B; Thu, 29 Oct 2020 19:58:03 +0000
Received: by outflank-mailman (input) for mailman id 14630;
 Thu, 29 Oct 2020 19:58:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYE3d-0004FA-R0
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 19:58:01 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dcaac7b1-3a97-466a-bb5b-183204c6b214;
 Thu, 29 Oct 2020 19:58:01 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 57F0D20782;
 Thu, 29 Oct 2020 19:57:59 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kYE3d-0004FA-R0
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 19:58:01 +0000
X-Inumbo-ID: dcaac7b1-3a97-466a-bb5b-183204c6b214
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id dcaac7b1-3a97-466a-bb5b-183204c6b214;
	Thu, 29 Oct 2020 19:58:01 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 57F0D20782;
	Thu, 29 Oct 2020 19:57:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604001480;
	bh=mZUIQ64KWBXYfkAR73cGIAw+2e1BnuKeAvWx88nBK2U=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EUa64uqc5JbY3VUHnLDiWBCr7kl+xPw0XnupFOyBT7eInsGt3kbSk3oz1M71Fn7h1
	 /U8WykQuUYO3atDL5bC6qF8r+F/9VoHRCkXIrMQemJXx+avUuaUR0jGryyd37qtKB9
	 smo86dxndIwNhrwzNXKojPfP4fGZvJYVIZtuGGWY=
Date: Thu, 29 Oct 2020 12:57:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Elliott Mitchell <ehem+xen@m5p.com>, Julien Grall <julien@xen.org>, 
    roman@zededa.com, xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
In-Reply-To: <e885b2a9-f6ea-e224-b906-125936cfe550@suse.com>
Message-ID: <alpine.DEB.2.21.2010291255070.12247@sstabellini-ThinkPad-T480s>
References: <20201022021655.GA74011@mattapan.m5p.com> <alpine.DEB.2.21.2010221620230.12247@sstabellini-ThinkPad-T480s> <20201023005629.GA83870@mattapan.m5p.com> <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s> <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s> <20201024053540.GA97417@mattapan.m5p.com> <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org> <20201026160316.GA20589@mattapan.m5p.com> <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
 <20201028015423.GA33407@mattapan.m5p.com> <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s> <e885b2a9-f6ea-e224-b906-125936cfe550@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1916697122-1604001480=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1916697122-1604001480=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 29 Oct 2020, Jürgen Groß wrote:
> On 29.10.20 01:37, Stefano Stabellini wrote:
> > On Tue, 27 Oct 2020, Elliott Mitchell wrote:
> > > On Mon, Oct 26, 2020 at 06:44:27PM +0000, Julien Grall wrote:
> > > > On 26/10/2020 16:03, Elliott Mitchell wrote:
> > > > > On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
> > > > > > On 24/10/2020 06:35, Elliott Mitchell wrote:
> > > > > > > ACPI has a distinct
> > > > > > > means of specifying a limited DMA-width; the above fails, because
> > > > > > > it
> > > > > > > assumes a *device-tree*.
> > > > > > 
> > > > > > Do you know if it would be possible to infer from the ACPI static
> > > > > > table
> > > > > > the DMA-width?
> > > > > 
> > > > > Yes, and it is.  Due to not knowing much about ACPI tables I don't
> > > > > know
> > > > > what the C code would look like though (problem is which documentation
> > > > > should I be looking at first?).
> > > > 
> > > > What you provided below is an excerpt of the DSDT. AFAIK, DSDT content
> > > > is written in AML. So far the shortest implementation I have seen for
> > > > the AML parser is around 5000 lines (see [1]). It might be possible to
> > > > strip some of the code, although I think this will still probably be
> > > > too big for a single workaround.
> > > > 
> > > > What I meant by "static table" is a table that looks like a structure
> > > > and can be parsed in a few lines. If we can't find one containing the
> > > > DMA window, then the next best solution is to find a way to identify
> > > > the platform.
> > > > 
> > > > I don't know enough ACPI to know if this solution is possible. A good
> > > > starter would probably be the ACPI spec [2].
> > > 
> > > Okay, that is worse than I had thought (okay, ACPI is impressively
> > > complex for something nominally firmware-level).
> > > 
> > > There are strings in the present Tianocore implementation for Raspberry
> > > PI 4B which could be targeted.  Notably included in the output during
> > > boot listing the tables, "RPIFDN", "RPIFDN RPI" and "RPIFDN RPI4" (I'm
> > > unsure how kosher these are as this wasn't implemented or blessed by the
> > > Raspberry PI Foundation).
> > > 
> > > I strongly dislike this approach as you soon turn the Xen project into a
> > > database of hardware.  This is already occurring with
> > > xen/arch/arm/platforms and I would love to do something about this.  On
> > > that thought, how about utilizing Xen's command-line for this purpose?
> > 
> > I don't think that a command line option is a good idea: basically it is
> > punting to users the task of platform detection. Also, it means that
> > users will be necessarily forced to edit the uboot script or grub
> > configuration file to boot.
> > 
> > Note that even if we introduced a new command line, we wouldn't take
> > away the need for xen/arch/arm/platforms anyway.
> > 
> > It would be far better for Xen to autodetect the platform if we can.
> > 
> > 
> > > Have a procedure where, during installation/updates, the DMA limitation
> > > information is retrieved from the running OS, and on the following boot
> > > Xen will apply the appropriate setup.  By its nature, Domain 0 will have
> > > the information needed; it just becomes an issue of how hard that is to
> > > retrieve...
> > 
> > Historically that is what we used to do for many things related to ACPI,
> > but unfortunately it leads to a pretty bad architecture where Xen
> > depends on Dom0 for booting rather than the other way around. (Dom0
> > should be the one requiring Xen for booting, given that Xen is higher
> > privilege and boots first.)
> > 
> > 
> > I think the best compromise is still to use an ACPI string to detect the
> > platform. For instance, would it be possible to use the OEMID fields in
> > RSDT, XSDT, FADT?  Possibly even a combination of them?
> > 
> > Another option might be to get the platform name from UEFI somehow.
> 
> What about having a small domain that boots first, parses the ACPI, and
> uses that information for booting dom0?
> 
> That dom would be part of the Xen build and the hypervisor wouldn't need
> to gain all the ACPI AML logic.

That could work, but in practice we don't have such a domain today --
the infrastructure is missing. I wonder whether the bootloader (uboot or
grub) would know about the platform and might be able to pass that
information to Xen somehow.
--8323329-1916697122-1604001480=:12247--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:00:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:00:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14640.36173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE6G-0005DC-UJ; Thu, 29 Oct 2020 20:00:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14640.36173; Thu, 29 Oct 2020 20:00:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE6G-0005D5-RN; Thu, 29 Oct 2020 20:00:44 +0000
Received: by outflank-mailman (input) for mailman id 14640;
 Thu, 29 Oct 2020 20:00:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYE6F-0005Cs-P6
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:00:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 662eb8df-9a22-4d8b-b072-55dbf1eab504;
 Thu, 29 Oct 2020 20:00:43 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 649AE20782;
 Thu, 29 Oct 2020 20:00:40 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
	id 1kYE6F-0005Cs-P6
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:00:43 +0000
X-Inumbo-ID: 662eb8df-9a22-4d8b-b072-55dbf1eab504
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 662eb8df-9a22-4d8b-b072-55dbf1eab504;
	Thu, 29 Oct 2020 20:00:43 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 649AE20782;
	Thu, 29 Oct 2020 20:00:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604001640;
	bh=s3H1XCFF5/PhNwE1oJWzALManxl6p2C9gIwOpoLw4FM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=VDg1NIwJnqhNBblNgJvyzLzdu9tkOe8Yk3ymjeFq5egKWr66e6pdFSNYqrHreC1Rc
	 OavE6oeqU58PXu22yeKxSMD7OaPFYpu6JHL8+t/vxkfTCH/2FscisuGjEf7Z5ZjCxm
	 sR99zxoLxDgHNA2tCisgVv90SnKPbdmFkl6U/cu0=
Date: Thu, 29 Oct 2020 13:00:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jason Andryuk <jandryuk@gmail.com>
cc: =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, QEMU <qemu-devel@nongnu.org>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] meson.build: fix building of Xen support for aarch64
In-Reply-To: <CAKf6xpsYorMRYpuPcb8B1zVWz3GHgZZwF+NPVA=nFL2Tr13hqQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2010291300000.12247@sstabellini-ThinkPad-T480s>
References: <20201028174406.23424-1-alex.bennee@linaro.org> <alpine.DEB.2.21.2010281406080.12247@sstabellini-ThinkPad-T480s> <87d011mjuw.fsf@linaro.org> <CAKf6xpsYorMRYpuPcb8B1zVWz3GHgZZwF+NPVA=nFL2Tr13hqQ@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-510861080-1604001640=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-510861080-1604001640=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 29 Oct 2020, Jason Andryuk wrote:
> On Thu, Oct 29, 2020 at 6:01 AM Alex Bennée <alex.bennee@linaro.org> wrote:
> >
> >
> > Stefano Stabellini <sstabellini@kernel.org> writes:
> >
> > > On Wed, 28 Oct 2020, Alex Bennée wrote:
> > >> Xen is supported on aarch64 although weirdly using the i386-softmmu
> > >> model. Checking based on the host CPU meant we never enabled Xen
> > >> support. It would be nice to enable CONFIG_XEN for aarch64-softmmu to
> > >> make it not seem weird but that will require further build surgery.
> > >>
> > >> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> > >> Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>
> > >> Cc: Stefano Stabellini <sstabellini@kernel.org>
> > >> Cc: Anthony Perard <anthony.perard@citrix.com>
> > >> Cc: Paul Durrant <paul@xen.org>
> > >> Fixes: 8a19980e3f ("configure: move accelerator logic to meson")
> > >> ---
> > >>  meson.build | 2 ++
> > >>  1 file changed, 2 insertions(+)
> > >>
> > >> diff --git a/meson.build b/meson.build
> > >> index 835424999d..f1fcbfed4c 100644
> > >> --- a/meson.build
> > >> +++ b/meson.build
> > >> @@ -81,6 +81,8 @@ if cpu in ['x86', 'x86_64']
> > >>      'CONFIG_HVF': ['x86_64-softmmu'],
> > >>      'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
> > >>    }
> > >> +elif cpu in [ 'arm', 'aarch64' ]
> > >> +  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
> > >>  endif
> > >
> > > This looks very reasonable -- the patch makes sense.
> 
> A comment would be useful to explain that Arm needs i386-softmmu for
> the xenpv machine.
> 
> > >
> > > However I have two questions, mostly for my own understanding. I tried
> > > to repro the aarch64 build problem but it works at my end, even without
> > > this patch.
> >
> > Building on a x86 host or with cross compiler?
> >
> > > I wonder why. I suspect it works thanks to these lines in
> > > meson.build:
> 
> I think it's a runtime and not a build problem.  In osstest, Xen
> support was detected and configured, but CONFIG_XEN wasn't set for
> Arm.  So at runtime xen_available() returns 0, and QEMU fails to start
> with "qemu-system-i386: -xen-domid 1: Option not supported for this
> target".
> 
> I posted my investigation here:
> https://lore.kernel.org/xen-devel/CAKf6xpss8KpGOvZrKiTPz63bhBVbjxRTYWdHEkzUo2q1KEMjhg@mail.gmail.com/

Right, but strangely I cannot reproduce it here. I am natively building
(qemu-user) on aarch64 and for me CONFIG_XEN gets set.
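
For what it's worth, a version of the hunk carrying the comment Jason asks for might read as follows (an illustrative sketch, not the committed patch):

```meson
elif cpu in ['arm', 'aarch64']
  # Xen on Arm is driven through the i386-softmmu binary because the
  # xenpv machine lives there, so CONFIG_XEN is attached to that target
  # rather than to an Arm one.
  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
endif
```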
--8323329-510861080-1604001640=:12247--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:01:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:01:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14648.36201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE6u-0005Of-Es; Thu, 29 Oct 2020 20:01:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14648.36201; Thu, 29 Oct 2020 20:01:24 +0000
Resent-From: Olaf Hering <olaf@aepfle.de>
Resent-Date: Thu, 29 Oct 2020 21:01:16 +0100
Resent-Message-ID: <20201029200116.GA7461@aepfle.de>
Resent-To: xen-devel@lists.xenproject.org
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE6u-0005OY-Bw; Thu, 29 Oct 2020 20:01:24 +0000
Received: by outflank-mailman (input) for mailman id 14648;
 Thu, 29 Oct 2020 20:01:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1kYE6t-0005OK-Jc
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:01:23 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 986f202c-5778-4c9a-bb07-c7c601215df6;
 Thu, 29 Oct 2020 20:01:22 +0000 (UTC)
Received: from aepfle.de by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9TK1K4Sw
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Thu, 29 Oct 2020 21:01:20 +0100 (CET)
Received: from mo4-p00-ob.smtp.rzone.de ([81.169.146.216])
 by smtpin.rzone.de (RZmta 47.3.0 OK) with ESMTPS id R05ba5w9THK78YQ
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client CN "*.smtp.rzone.de",
 Issuer "TeleSec ServerPass Class 2 CA" (verified OK (+EmiG)))
 (Client hostname verified OK) for <olaf@aepfle.de>;
 Thu, 29 Oct 2020 18:20:07 +0100 (CET)
Received: from sender by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
 with ESMTPSA id j0b1afw9THK73f5
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate) for <olaf@aepfle.de>;
 Thu, 29 Oct 2020 18:20:07 +0100 (CET)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=u/HF=EE=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
	id 1kYE6t-0005OK-Jc
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:01:23 +0000
X-Inumbo-ID: 986f202c-5778-4c9a-bb07-c7c601215df6
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.24])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 986f202c-5778-4c9a-bb07-c7c601215df6;
	Thu, 29 Oct 2020 20:01:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1604001681;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:
	Received-SPF:X-RZG-CLASS-ID:Authentication-Results:
	Authentication-Results:Authentication-Results:Authentication-Results:
	Authentication-Results:X-RZG-CLASS-ID:X-RZG-AUTH:From:Subject:Sender;
	bh=fdmy2g4paPxwsmM3EZC6IDQ9fD2XxCAjar+up2NIJfk=;
	b=D6TEPCM5xWICo57a8YcogUGug/dtxj1SgAifJjKPWmPJvcbQaEvJMFy/ypi3Ly7pdz
	WO+zx+GOT/c20sO0qE/psls+ccYlWoEvhWJAOd0JxI5LwlS9NuCxaWCLN5vowfjO5OQV
	C2rsnIsEt0JEVaBYssdkGLjgusjIe86idMLtdWsXKta8QYMqjGJ0NPnashNaWop6f59r
	JRNGG6qm6mNdZbJ6Dr5wCMNi6P4FZSZjc1JOLbm7AVc7dAgwj8Vy0YKESgkcUGoDcJyu
	FL2ihthfcf5pHDPBq0AEYAmPGkTbS7xTVKqrUMzRlbtvE9PkilKhbk2A+oN96y8f0RlE
	SxDA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from aepfle.de
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9TK1K4Sw
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client did not present a certificate)
	for <xen-devel@lists.xenproject.org>;
	Thu, 29 Oct 2020 21:01:20 +0100 (CET)
X-Envelope-From: <olaf@aepfle.de>
X-Envelope-To: <olaf@aepfle.de>
X-Delivery-Time: 1603992008
X-UID: 548113
Authentication-Results: strato.com; dmarc=none header.from=aepfle.de
Authentication-Results: strato.com; arc=none
Authentication-Results: strato.com; dkim=pass header.d=aepfle.de
Authentication-Results: strato.com; dkim-adsp=pass header.from="olaf@aepfle.de"
Authentication-Results: strato.com; spf=none smtp.mailfrom="olaf@aepfle.de"
X-RZG-Expurgate: clean/normal
X-RZG-Expurgate-ID: 149500::1603992008-0000A3A5-B584449D/0/0
X-RZG-CLASS-ID: mi00
Received-SPF: none
	client-ip=81.169.146.216;
	helo="mo4-p00-ob.smtp.rzone.de";
	envelope-from="olaf@aepfle.de";
	receiver=smtpin.rzone.de;
	identity=mailfrom;
Received: from mo4-p00-ob.smtp.rzone.de ([81.169.146.216])
	by smtpin.rzone.de (RZmta 47.3.0 OK)
	with ESMTPS id R05ba5w9THK78YQ
	(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
	(Client CN "*.smtp.rzone.de", Issuer "TeleSec ServerPass Class 2 CA" (verified OK (+EmiG)))
	(Client hostname verified OK)
	for <olaf@aepfle.de>;
	Thu, 29 Oct 2020 18:20:07 +0100 (CET)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1603992007;
	s=strato-dkim-0002; d=aepfle.de;
	h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
	Subject:Sender;
	bh=fdmy2g4paPxwsmM3EZC6IDQ9fD2XxCAjar+up2NIJfk=;
	b=KGuOzy9UMIOA9+8H40XwG5ehFikbu+GO0fsfj7e7HRXYN2yLuQulkdpIcfL8GMQFDY
	s3ZpCmw+dkV2+laHIncV3rCJ3OSdsBT3W2UPz4Da16HpXlVHFXlwMuc1oN3TflooIkUg
	kB20FO1gCYR3cw9sXOSLwHWnL0dIZxi8AwFpXEnT05+jGWuPb9zWWfevmY0Y8Z3uxTGJ
	gXs3254RGdQR/XvxRZdYb71ihP/ChS7lTU3/YVmXsVlGfY6WnTbSxZZ3yJ0/5y9rFLH9
	v8nZV8+4ZI2latwmgKsBBXn9BofLuL3Li8JGWEO7RdafdzK7+u4j8dz9xTiWh1E+7vOr
	gLyA==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS3G1Jjw=="
X-RZG-CLASS-ID: mo00
Received: from sender
	by smtp.strato.de (RZmta 47.3.0 DYNA|AUTH)
	with ESMTPSA id j0b1afw9THK73f5
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
	(Client did not present a certificate)
	for <olaf@aepfle.de>;
	Thu, 29 Oct 2020 18:20:07 +0100 (CET)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>
Subject: [PATCH v1 00/23] reduce overhead during live migration
Date: Thu, 29 Oct 2020 18:19:40 +0100
Message-Id: <20201029172004.17219-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current live migration code can easily saturate a 1Gb link.
With faster network connections there is still room for improvement,
even with this series reviewed and applied.
See the description of patch #6 for details.

Olaf

Olaf Hering (23):
  tools: add readv_exact to libxenctrl
  tools: add xc_is_known_page_type to libxenctrl
  tools: use xc_is_known_page_type
  tools: unify type checking for data pfns in migration stream
  tools: show migration transfer rate in send_dirty_pages
  tools/guest: prepare to allocate arrays once
  tools/guest: save: move batch_pfns
  tools/guest: save: move mfns array
  tools/guest: save: move types array
  tools/guest: save: move errors array
  tools/guest: save: move iov array
  tools/guest: save: move rec_pfns array
  tools/guest: save: move guest_data array
  tools/guest: save: move local_pages array
  tools/guest: restore: move pfns array
  tools/guest: restore: move types array
  tools/guest: restore: move mfns array
  tools/guest: restore: move map_errs array
  tools/guest: restore: move mfns array in populate_pfns
  tools/guest: restore: move pfns array in populate_pfns
  tools/guest: restore: split record processing
  tools/guest: restore: split handle_page_data
  tools/guest: restore: write data directly into guest

 tools/libs/ctrl/xc_private.c          |  54 ++-
 tools/libs/ctrl/xc_private.h          |  34 ++
 tools/libs/guest/xg_sr_common.c       |  33 +-
 tools/libs/guest/xg_sr_common.h       |  86 +++-
 tools/libs/guest/xg_sr_restore.c      | 562 +++++++++++++++++---------
 tools/libs/guest/xg_sr_save.c         | 158 ++++----
 tools/libs/guest/xg_sr_save_x86_hvm.c |   5 +-
 tools/libs/guest/xg_sr_save_x86_pv.c  |  31 +-
 8 files changed, 666 insertions(+), 297 deletions(-)
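To illustrate the idea behind the first patch in the series, a `readv_exact`-style helper keeps calling `readv()` until every byte of every iovec has been filled, retrying on `EINTR` and treating early EOF as an error. This is a hedged sketch of the pattern only; the names and the exact layout of the real libxenctrl code are in the patches themselves.

```c
#include <errno.h>
#include <sys/uio.h>
#include <unistd.h>

/* Read exactly the bytes described by iov[0..iovcnt-1] from fd.
 * Note: advances/modifies the caller's iovec array as data arrives.
 * Returns 0 on success, -1 (with errno set) on error or early EOF. */
static int readv_exact(int fd, struct iovec *iov, int iovcnt)
{
    while (iovcnt > 0) {
        ssize_t n = readv(fd, iov, iovcnt);

        if (n < 0) {
            if (errno == EINTR)
                continue;           /* interrupted by a signal: retry */
            return -1;
        }
        if (n == 0) {
            errno = EIO;            /* unexpected EOF mid-transfer */
            return -1;
        }
        /* Consume n bytes from the front of the iovec array. */
        while (iovcnt > 0 && (size_t)n >= iov->iov_len) {
            n -= iov->iov_len;
            iov++;
            iovcnt--;
        }
        if (iovcnt > 0) {
            iov->iov_base = (char *)iov->iov_base + n;
            iov->iov_len -= n;
        }
    }
    return 0;
}
```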



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:02:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:02:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14653.36222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE80-0005Zg-Vb; Thu, 29 Oct 2020 20:02:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14653.36222; Thu, 29 Oct 2020 20:02:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE80-0005ZZ-SR; Thu, 29 Oct 2020 20:02:32 +0000
Received: by outflank-mailman (input) for mailman id 14653;
 Thu, 29 Oct 2020 20:02:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uFDJ=EE=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1kYE7z-0005ZS-Id
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:02:31 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3806ff7d-0309-4d56-a9b8-91c987145fa9;
 Thu, 29 Oct 2020 20:02:29 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=uFDJ=EE=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
	id 1kYE7z-0005ZS-Id
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:02:31 +0000
X-Inumbo-ID: 3806ff7d-0309-4d56-a9b8-91c987145fa9
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3806ff7d-0309-4d56-a9b8-91c987145fa9;
	Thu, 29 Oct 2020 20:02:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604001749;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=L1WRBMcR3/QEoUlSDas3ah65/j+icWZ9WixAhllcT58=;
  b=fY2NtHTl1Yaex5E9kQ9CsV0hhwVwDCd0frL3c6DLYPNaU8UD0wIzd0pY
   tXTipgTWRuUBDfknL0DxurMk5k77JVl4ByBWspcPXAOkn87mEvyhFscmV
   Zg1Hc9z1Ti2pNQJZEn7t/xR/e0iqAvVGZOgPLqasusikCnWy4EpFyeW+g
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: I+jgjfee8tjzNJPnwba4aSwNDI2MElLbKfjT9LafAkd7i/c84W/jacY7J0fZyPUIFaJH0rrPEO
 UqNxxS/3ncTSWfL03eA4z5pLf75jAnUpqAWPsE/8LTdbihMwv2eLgylv2ATCNUzqPxtmoqjTFx
 Ea6bc53sOzVT/fK74wvEGQxqyCyLKY8DnquOYr5M0dQS17iMjPLkPLkEP0zZ08R8isHVq2tCrl
 FO+rYsiocMLZyfGyzuEWqjz0gYKxdgOVM4rbWSyCHrFBqmF0pOA0J8Ppw1YO/Xn10d2kGoWx+m
 XdE=
X-SBRS: None
X-MesageID: 30437793
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,430,1596513600"; 
   d="scan'208";a="30437793"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dCnirZTtGBt0LpIk2IxWFqyH1F/GbBUxH7TepdHxYm42pemvoXA9vtOTLRYdbt93X3aO0gqzSaOmVo3S1A34U4A3VawcMYNBOQ1qcE+fgMjKDil7PZskjuJi/kXI0zQj5R02dXxcclzaiIugfTgo785AvwOX/M3eOKVDWxdePnC3dyV04GRknIQbA1IuxkCpw7SBsznUqGr/u9+/uaWBRt94AR1ulI7JVUCMKFQJt8LieTAHd7TVzDjufNW6ptOWW/7CoryZhlwDu52sYDcc8shwdpBvwr9h3IeWYUgN1Mtxkcpf9i/DUT929Q8ufkGuaALvG4rMU6+G/vAjIrIv7A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6VbRTD7h4MR94e3q3oI/2ulPLfER4fAOHAUSJ+KE9+0=;
 b=FGuc5gBajUZyGyC/YpoSd/BQtHstNF9FoN1GPpdRb1S9mZEb9odSeEkt805xtPXk55JsjT/tj5g4X4mICrZyEYsnqigv3Ech2chAox7Rk5vLyMfNKeubuj80pu7x/fdLCayQN2cAQWeV8abeK2Tu+CD51h3N4HU/y/eBI+F0HOYbgpWDoKlJAp7J9S8gLQJUEHKh/4prPztH7volT6lwksMVqUb1NLjutxod4UGWlaHS7hUUQHhCE7FvhLaqrPiG0yDNewYy7hkaUy+vWe08PXvNNtXTj+MFAN+06mvB2lKbPeQZFT2IOxlPTxQIB8eyd4SN17x8wtDWM2IUPkat5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6VbRTD7h4MR94e3q3oI/2ulPLfER4fAOHAUSJ+KE9+0=;
 b=D0ImmfFgcmuB8/ZP66w/jndcw+0+YdQzU0eSzFgLQjdE7DNnWKwyixV7u3PeEivKHQI+k1l/1YJMWP1KIGzqSOSBeLVc3hSuDY2XAX/ro0pu4GKK/AxFHisvbag30Bb1PQvvK82fUc3APrUcXcJbZryBqE5N7RNgXB4zB5rD2pQ=
Date: Thu, 29 Oct 2020 21:02:20 +0100
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] x86/hvm: process softirq while saving/loading entries
Message-ID: <20201029200220.aad6yk6x4xet63jh@Air-de-Roger>
References: <20201029152054.28635-1-roger.pau@citrix.com>
 <ab7f64a2-2ea8-0445-7266-c60e58a49a85@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ab7f64a2-2ea8-0445-7266-c60e58a49a85@suse.com>
X-ClientProxiedBy: LNXP123CA0020.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:d2::32) To SA0PR03MB5610.namprd03.prod.outlook.com
 (2603:10b6:806:b2::9)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2386b9a7-8292-4439-d41a-08d87c458e34
X-MS-TrafficTypeDiagnostic: SA0PR03MB5660:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SA0PR03MB56606A88181E9AB2604A808E8F140@SA0PR03MB5660.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: l57zALguzYLLwu+XFdGdUvwry8THVleyrEZ3bY8IC4W8KcGuiV5g9YQfsaFmnTKFkdwOph3SG3I5bcrqxuzwPLmGHZry4WysvZk31YseoNOTGte9EKvS+Q82L3qkjlChOtIHswLxCdvaj+gAjYoK1ONfTyrWYclsfVJOQo6T9w/Oifex5QM+uRroxJ6ecq9Ut0B+/fGF4z49PLJCI9dwD17pOMUvQ5/uMVsGOxD8yHNLfWis9Y7bB6SULyA0ilisur5bD0s5aOpZX39ycOPd2TaYFRI0Z3Y0ppqQf7xwd9M57Z+69xoD9pIBnjRHgA5a+hxEmgb7Diitmw4Kk5jThA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SA0PR03MB5610.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(136003)(396003)(376002)(366004)(39860400002)(4326008)(316002)(6496006)(6486002)(8936002)(478600001)(2906002)(54906003)(5660300002)(9686003)(26005)(33716001)(86362001)(4744005)(6916009)(956004)(53546011)(16526019)(186003)(66946007)(6666004)(8676002)(66476007)(1076003)(66556008)(85182001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: KCGesIU4jrHqxG7dMvYmwYIE3/yVReapqnnX1kSFSy4qLrPqQDBh95rNPo0qZQIkgyROFVOI3aYQsFHnxmbtnc2wWBhwS+Qh9kyhWrdVlG48uJtqF8eYeuS5xrk6mFsJjf7X7QxqJBNjtb3OIaiOYCjExLjP/DZUIwG6y9rLG6L9V6EC36dFaKy8TqjY+0k8o//7mIhZeDw0ZtacmMgZGk/PLN1LRb6myUklUDBkBacX5ph1MvWSUcYDsX0H/vION4FX3h9Rk8zDgEPAdxzmpBUt/I29cSHUkq/qXEIDcRqBGVPcGkeT9AqJ0J1L9gKf0lnltvBK3IAm2askh47TOkiZ++5ATW0K6KlV8lygk+RvUKzpcQxl6QrUFSdCkYu/qsWG/+g/ohrVwYFgZH6md2pOHJHtcFeoebayugD9dXLJXy86npp9Adnirfg8OUBVuF2NP4DBSfXtOsqGquC2m8rnxyFl4BzAdx0EsyWD3VeG2sgJu74ItKiDWefXhzUqdY7fdtXNWWn4TFFRzPFoVoePcN+5qyoJF8/59m4OF0K/O6rqCh6+OLzm2Re9TyqvGghqfujCJMaL0MyeFWy1UPZhaJm0mq2LbbrHKtLTq6E3wgXohfyi3qTWve6EJBRb37X9CExZmxHRRv3ey+DgXA==
X-MS-Exchange-CrossTenant-Network-Message-Id: 2386b9a7-8292-4439-d41a-08d87c458e34
X-MS-Exchange-CrossTenant-AuthSource: SA0PR03MB5610.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 Oct 2020 20:02:25.5072
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: CxmnpeSXkNNeFL+5Rbzz5HSPdRnQ/efsk7YXfDtM5hLEYD4V1YRu0HNubMdmWzT2FMowKIDT75XeVe4idnZ4jQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR03MB5660
X-OriginatorOrg: citrix.com

On Thu, Oct 29, 2020 at 05:38:17PM +0100, Jan Beulich wrote:
> On 29.10.2020 16:20, Roger Pau Monne wrote:
> > On slow systems with sync_console saving or loading the context of big
> > guests can cause the watchdog to trigger. Fix this by adding a couple
> > of process_pending_softirqs.
> 
> Which raises the question in how far this is then also a problem
> for the caller of the underlying hypercall. IOW I wonder whether
> instead we need to make use of continuations here. Nevertheless

FWIW, I've only hit this with debug builds on boxes that have slow
serial with sync_console enabled, due to the verbose printks.
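The pattern the patch adds can be sketched as follows; this is illustrative C with a stub standing in for Xen's real process_pending_softirqs(), and the period of 64 is an arbitrary choice here, not the value the patch uses:

```c
#include <stddef.h>

/* Stub standing in for Xen's process_pending_softirqs(); in the
 * hypervisor it runs pending softirq work (which keeps the watchdog
 * happy).  Here it only counts invocations. */
static unsigned int softirq_calls;
static void process_pending_softirqs(void)
{
    softirq_calls++;
}

/* While iterating over many per-guest context entries during save or
 * load, periodically yield to softirqs so a long-running loop cannot
 * trip the watchdog. */
static void save_context_entries(size_t nr_entries)
{
    for (size_t i = 0; i < nr_entries; i++) {
        if (i && !(i % 64))
            process_pending_softirqs();
        /* ... serialize entry i ... */
    }
}
```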

> 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:04:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:04:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14661.36234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE9X-0005m7-8D; Thu, 29 Oct 2020 20:04:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14661.36234; Thu, 29 Oct 2020 20:04:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYE9X-0005m0-5A; Thu, 29 Oct 2020 20:04:07 +0000
Received: by outflank-mailman (input) for mailman id 14661;
 Thu, 29 Oct 2020 20:04:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rF0B=EE=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1kYE9V-0005lv-8r
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:04:05 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ee4472f-387b-41ca-9cea-6b3ead444336;
 Thu, 29 Oct 2020 20:04:03 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id y12so4119077wrp.6
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 13:04:02 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id f20sm1466314wmc.26.2020.10.29.13.04.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 29 Oct 2020 13:04:00 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id BEAC41FF7E;
 Thu, 29 Oct 2020 20:03:59 +0000 (GMT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=rF0B=EE=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
	id 1kYE9V-0005lv-8r
	for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:04:05 +0000
X-Inumbo-ID: 9ee4472f-387b-41ca-9cea-6b3ead444336
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9ee4472f-387b-41ca-9cea-6b3ead444336;
	Thu, 29 Oct 2020 20:04:03 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id y12so4119077wrp.6
        for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 13:04:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:in-reply-to:date
         :message-id:mime-version:content-transfer-encoding;
        bh=GlgPpZHPFyGb6ykEqr5MjXdGXxy2SQgZWabctoNQvD4=;
        b=xgwV+hUK7lV4D03GrJJkVj6yoyPcU5evOBLuWo40ehpfriURrarYr/atO/JUQ5z2m3
         mpFJDnd51X93Vl8lQTU8DdR3hu+rPWlq2frXOdUyHo5htFzGMLT04yOD0DHX52+qaBgF
         j/WeQpXMevN1eBEnJcq8WGn22PkV2V6tdV+DZETDVOrSL1Qiefy7VmQXFTWz+/kam2Qr
         m9gnZ+F43iGuUg0Qo/cDBcqmh7voYNvsy3znx8TxgaTBncVlnYQRzMMw600qSnsQK88o
         g1zK8IXLLya3X5af4jvSbe+RDQl3R3QDOuSynvf/DyQmW2LcnLtO5ioSlKG/FHhLgVit
         PkxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject
         :in-reply-to:date:message-id:mime-version:content-transfer-encoding;
        bh=GlgPpZHPFyGb6ykEqr5MjXdGXxy2SQgZWabctoNQvD4=;
        b=GXXF0r8NIrqAMpbd9UF1wT8i8Qnv1jywcr6fpk/EyOTEfRCC8s/xK4y7GBiIGbhJ/6
         1rmbaGkToSkSIhKi7JQX1ALMQL3vzJKp5Bam2Gv1hy3cu8uGZdLic1Afu6U3FCaCLMRK
         dny9pBNqU6tk+5geavCPbAYOQN2dcnyc0+atFSVgK1A7+kgaiA5Lh7WCUyf/v2aFMK+c
         NvvJh9Z7iCxyui3Dv++uUIO2Hu6kq1Kps0aV/R8DTGu/Bt86iMCT796kkKWV9CEYrBZ2
         z3+5lC3eW3dyczCpHLtYinolLYdhrWwvIoSq2Rpf4QyyN6aPch2+NsPwQ6gWvAXwQAKW
         rouA==
X-Gm-Message-State: AOAM530rlo/n6YMFeRS5Uj3uBlDG3MGS9L/LJYPC2dy7FGxBRoIG0W/C
	cNX0pTAMzN9QOa4NqhkHc67LWA==
X-Google-Smtp-Source: ABdhPJzHtM27tE6YMeBH5RezYAbYcbqowgQsX3y1aonmHNKDBDY1JKY295X5ITmNMm4XDg9aRH0sRQ==
X-Received: by 2002:adf:e744:: with SMTP id c4mr7774206wrn.222.1604001841949;
        Thu, 29 Oct 2020 13:04:01 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
        by smtp.gmail.com with ESMTPSA id f20sm1466314wmc.26.2020.10.29.13.04.00
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 29 Oct 2020 13:04:00 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
	by zen.linaroharston (Postfix) with ESMTP id BEAC41FF7E;
	Thu, 29 Oct 2020 20:03:59 +0000 (GMT)
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
User-agent: mu4e 1.5.6; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>, Oleksandr Tyshchenko
 <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, Jan Beulich
 <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?utf-8?Q?Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, Julien
 Grall
 <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Tim Deegan <tim@xen.org>, Daniel De
 Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Jun Nakajima <jun.nakajima@intel.com>, Kevin
 Tian <kevin.tian@intel.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
In-reply-to: <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
Date: Thu, 29 Oct 2020 20:03:59 +0000
Message-ID: <871rhgn6j4.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Oleksandr Tyshchenko <olekstysh@gmail.com> writes:

> On Thu, Oct 29, 2020 at 9:42 AM Masami Hiramatsu <
> masami.hiramatsu@linaro.org> wrote:
>
>> Hi Oleksandr,
>>
> Hi Masami
>
> [sorry for the possible format issue]
>
>
>>
>> I would like to try this on my arm64 board.
>>
> Glad to hear you are interested in this topic.
>
>
>>
>> According to your comments in the patch, I made this config file.
>> # cat debian.conf
>> name = "debian"
>> type = "pvh"
>> vcpus = 8
>> memory = 512
>> kernel = "/opt/agl/vmlinuz-5.9.0-1-arm64"
>> ramdisk = "/opt/agl/initrd.img-5.9.0-1-arm64"
>> cmdline= "console=hvc0 earlyprintk=xen root=/dev/xvda1 rw"
>> disk = [ '/opt/agl/debian.qcow2,qcow2,hda' ]
>> vif = [ 'mac=00:16:3E:74:3d:76,bridge=xenbr0' ]
>> virtio = 1
>> vdisk = [ 'backend=Dom0, disks=ro:/dev/sda1' ]
>>
>> And tried to boot a DomU, but I got below error.
>>
>> # xl create -c debian.conf
>> Parsing config from debian.conf
>> libxl: error: libxl_create.c:1863:domcreate_attach_devices: Domain
>> 1:unable to add virtio_disk devices
>> libxl: error: libxl_domain.c:1218:destroy_domid_pci_done: Domain
>> 1:xc_domain_pause failed
>> libxl: error: libxl_dom.c:39:libxl__domain_type: unable to get domain
>> type for domid=1
>> libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
>> 1:Unable to destroy guest
>> libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
>> 1:Destruction of domain failed
>>
>>
>> Could you tell me how can I test it?
>>
>
> I assume it is due to the lack of the virtio-disk backend (which I haven't
> shared yet as I focused on the IOREQ/DM support on Arm in the first place).
> Could you wait a little bit, I am going to share it soon.

I assume this is wiring up the required bits of support to handle the
IOREQ requests in QEMU? We are putting together a PoC demo to show
a virtio enabled image (AGL) running on both KVM and Xen hypervisors so
we are keen to see your code as soon as you can share it.

I'm currently preparing a patch series for QEMU which fixes the recent
breakage caused by changes to the build system. As part of that I've
separated CONFIG_XEN and CONFIG_XEN_HVM so it's possible to build
i386-softmmu without the bits that are unneeded for ARM. Hopefully this
will allow me to create a qemu-aarch64-system binary with just the PV
related models in it.

Talking to Stefano it probably makes sense going forward to introduce a
new IOREQ aware machine type for QEMU that doesn't bring in the rest of
the x86 overhead. I was thinking maybe xenvirt?

You've tested with virtio-block but we'd also like to extend this to
other arbitrary virtio devices. I guess we will need some sort of
mechanism to inform the QEMU command line where each device sits in the
virtio-mmio bus so the FDT Xen delivers to the guest matches up with
what QEMU is ready to serve requests for?

-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:06:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:06:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14666.36246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEBk-0005v6-ME; Thu, 29 Oct 2020 20:06:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14666.36246; Thu, 29 Oct 2020 20:06:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEBk-0005uz-IG; Thu, 29 Oct 2020 20:06:24 +0000
Received: by outflank-mailman (input) for mailman id 14666;
 Thu, 29 Oct 2020 20:06:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYEBi-0005ut-LR
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:06:22 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea6105d8-8990-4d89-bfe6-4c6948464954;
 Thu, 29 Oct 2020 20:06:21 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7C45C20809;
 Thu, 29 Oct 2020 20:06:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604001980;
	bh=VOSYn4cuThyPXUrnBOG7B7CxuW7hIMMa8BtTwBj8j2c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=fK+1qk29v4U5OIyQXWTjnJ7dUEmsn48J0M2JBDuPAlyaAjCgzLt6wAtCW3wNS5O4B
	 vJHbCcmpM4P4SH2+Hf5KFm12yVCeb2AVRSuGEUoCdgbXhSqM2eRIKr1mVJk17oeium
	 UHachG/QUS/9OhpB2rHHLYLFQXdoEu4SFfzusFL4=
Date: Thu, 29 Oct 2020 13:06:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, qemu-devel@nongnu.org, 
    xen-devel@lists.xenproject.org, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    Anthony Perard <anthony.perard@citrix.com>, Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] meson.build: fix building of Xen support for aarch64
In-Reply-To: <87d011mjuw.fsf@linaro.org>
Message-ID: <alpine.DEB.2.21.2010291300440.12247@sstabellini-ThinkPad-T480s>
References: <20201028174406.23424-1-alex.bennee@linaro.org> <alpine.DEB.2.21.2010281406080.12247@sstabellini-ThinkPad-T480s> <87d011mjuw.fsf@linaro.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1821319659-1604001980=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1821319659-1604001980=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 29 Oct 2020, Alex Bennée wrote:
> > On Wed, 28 Oct 2020, Alex Bennée wrote:
> >> Xen is supported on aarch64 although weirdly using the i386-softmmu
> >> model. Checking based on the host CPU meant we never enabled Xen
> >> support. It would be nice to enable CONFIG_XEN for aarch64-softmmu to
> >> make it not seem weird but that will require further build surgery.
> >> 
> >> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> >> Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>
> >> Cc: Stefano Stabellini <sstabellini@kernel.org>
> >> Cc: Anthony Perard <anthony.perard@citrix.com>
> >> Cc: Paul Durrant <paul@xen.org>
> >> Fixes: 8a19980e3f ("configure: move accelerator logic to meson")
> >> ---
> >>  meson.build | 2 ++
> >>  1 file changed, 2 insertions(+)
> >> 
> >> diff --git a/meson.build b/meson.build
> >> index 835424999d..f1fcbfed4c 100644
> >> --- a/meson.build
> >> +++ b/meson.build
> >> @@ -81,6 +81,8 @@ if cpu in ['x86', 'x86_64']
> >>      'CONFIG_HVF': ['x86_64-softmmu'],
> >>      'CONFIG_WHPX': ['i386-softmmu', 'x86_64-softmmu'],
> >>    }
> >> +elif cpu in [ 'arm', 'aarch64' ]
> >> +  accelerator_targets += { 'CONFIG_XEN': ['i386-softmmu'] }
> >>  endif
> >
> > This looks very reasonable -- the patch makes sense.
> >
> >
> > However I have two questions, mostly for my own understanding. I tried
> > to repro the aarch64 build problem but it works at my end, even without
> > this patch.
> 
> Building on a x86 host or with cross compiler?

native build (qemu-user)


> > I wonder why. I suspect it works thanks to these lines in
> > meson.build:
> >
> >   if not get_option('xen').disabled() and 'CONFIG_XEN_BACKEND' in config_host
> >     accelerators += 'CONFIG_XEN'
> >     have_xen_pci_passthrough = not get_option('xen_pci_passthrough').disabled() and targetos == 'linux'
> >   else
> >     have_xen_pci_passthrough = false
> >   endif
> >
> > But I am not entirely sure who is adding CONFIG_XEN_BACKEND to
> > config_host.
> 
> This is part of the top-level configure check, which basically checks
> for --enable-xen or autodetects the presence of the userspace libraries.
> I'm not sure if we have a slight over-proliferation of symbols for Xen
> support (although I'm about to add more).
> 
> > The other question is: does it make sense to print the value of
> > CONFIG_XEN as part of the summary? Something like:
> >
> > diff --git a/meson.build b/meson.build
> > index 47e32e1fcb..c6e7832dc9 100644
> > --- a/meson.build
> > +++ b/meson.build
> > @@ -2070,6 +2070,7 @@ summary_info += {'KVM support':       config_all.has_key('CONFIG_KVM')}
> >  summary_info += {'HAX support':       config_all.has_key('CONFIG_HAX')}
> >  summary_info += {'HVF support':       config_all.has_key('CONFIG_HVF')}
> >  summary_info += {'WHPX support':      config_all.has_key('CONFIG_WHPX')}
> > +summary_info += {'XEN support':      config_all.has_key('CONFIG_XEN')}
> >  summary_info += {'TCG support':       config_all.has_key('CONFIG_TCG')}
> >  if config_all.has_key('CONFIG_TCG')
> >    summary_info += {'TCG debug enabled': config_host.has_key('CONFIG_DEBUG_TCG')}
> >
> >
> > But I realize there is already:
> >
> > summary_info += {'xen support':       config_host.has_key('CONFIG_XEN_BACKEND')}
> >
> > so it would be a bit of a duplicate
> 
> Hmm so what we have is:
> 
>   CONFIG_XEN_BACKEND
>     - ensures that appropriate compiler flags are added
>     - pegs RAM_ADDR_MAX at UINT64_MAX (instead of UINTPTR_MAX)
>   CONFIG_XEN
>     - which controls a bunch of build objects, some of which may be i386 only?
>     ./accel/meson.build:15:specific_ss.add_all(when: ['CONFIG_XEN'], if_true: dummy_ss)
>     ./accel/stubs/meson.build:2:specific_ss.add(when: 'CONFIG_XEN', if_false: files('xen-stub.c'))
>     ./accel/xen/meson.build:1:specific_ss.add(when: 'CONFIG_XEN', if_true: files('xen-all.c'))
>     ./hw/9pfs/meson.build:17:fs_ss.add(when: 'CONFIG_XEN', if_true: files('xen-9p-backend.c'))
>     ./hw/block/dataplane/meson.build:2:specific_ss.add(when: 'CONFIG_XEN', if_true: files('xen-block.c'))
>     ./hw/block/meson.build:14:softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xen-block.c'))
>     ./hw/char/meson.build:23:softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xen_console.c'))
>     ./hw/display/meson.build:18:softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xenfb.c'))
>     ./hw/i386/xen/meson.build:1:i386_ss.add(when: 'CONFIG_XEN', if_true: files('xen-hvm.c',
>                                                                                'xen-mapcache.c',
>                                                                                'xen_apic.c',
>                                                                                'xen_platform.c',
>                                                                                'xen_pvdevice.c')
>     ./hw/net/meson.build:2:softmmu_ss.add(when: 'CONFIG_XEN', if_true: files('xen_nic.c'))
>     ./hw/usb/meson.build:76:softmmu_ss.add(when: ['CONFIG_USB', 'CONFIG_XEN', libusb], if_true: files('xen-usb.c'))
>     ./hw/xen/meson.build:1:softmmu_ss.add(when: ['CONFIG_XEN', xen], if_true: files(
>     ./hw/xen/meson.build:20:specific_ss.add_all(when: ['CONFIG_XEN', xen], if_true: xen_specific_ss)
>     ./hw/xenpv/meson.build:3:xenpv_ss.add(when: 'CONFIG_XEN', if_true: files('xen_machine_pv.c'))
>     - there are also some stubbed inline functions controlled by it
>   CONFIG_XEN_IGD_PASSTHROUGH
>     - an x86 PC-only feature (via Kconfig rule)
>   CONFIG_XEN_PCI_PASSTHROUGH
>     - controls a Linux-specific feature (via meson rule)
> 
> 
> First obvious question - is everything in hw/i386/xen actually i386
> only? APIC seems pretty PC orientated but what about xen_platform and
> pvdevice? There seem to be some dependencies on xen-mapcache across the
> code.

All files under hw/i386/xen are used only on x86 today, because they are
either x86-specific or ioreq-specific (required to handle ioreqs). Since
ioreqs are not currently used on ARM, neither are these files. In the
near future, when we handle ioreqs on ARM, they will become useful.


xen_apic.c:
x86 specific

xen-hvm.c:
There are some x86 things there, but mostly it is about handling ioreqs,
which are going to be used on ARM soon as part of the virtio enablement.

xen-mapcache.c:
The mapcache is used whenever ioreqs are used, so it will become useful.

xen_platform.c:
x86 specific

xen_pvdevice.c:
x86 specific
--8323329-1821319659-1604001980=:12247--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:11:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14671.36258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEGC-0006n2-Cg; Thu, 29 Oct 2020 20:11:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14671.36258; Thu, 29 Oct 2020 20:11:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEGC-0006mv-9T; Thu, 29 Oct 2020 20:11:00 +0000
Received: by outflank-mailman (input) for mailman id 14671;
 Thu, 29 Oct 2020 20:10:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYEGB-0006mp-Sw
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:10:59 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9881c222-3bb1-4721-be8e-680c77871b82;
 Thu, 29 Oct 2020 20:10:58 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 754CA206DD;
 Thu, 29 Oct 2020 20:10:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604002257;
	bh=v0p2G582aOFIUiJyWgMHNkJQLhhJjy3AUdcS39pK404=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=uhrpxRJBPXzBlG5A5CQe2w998JoMLEkffnhErwax3+321woO4kl1E7t7gGB8W67Dk
	 ES0ESN0yUCxKwfLY82azdVoi65YLP36QUe7v6+Q8u2zRb1YK2xG324jXXN2B0b5QN7
	 ngDOqOIj3bmXFDPOJcFIi3REps1c3YCfXut8RitE=
Date: Thu, 29 Oct 2020 13:10:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?Alex_Benn=C3=A9e?= <alex.bennee@linaro.org>
cc: Oleksandr Tyshchenko <olekstysh@gmail.com>, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    Tim Deegan <tim@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
In-Reply-To: <871rhgn6j4.fsf@linaro.org>
Message-ID: <alpine.DEB.2.21.2010291308500.12247@sstabellini-ThinkPad-T480s>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com> <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com> <871rhgn6j4.fsf@linaro.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-451898909-1604002257=:12247"


--8323329-451898909-1604002257=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 29 Oct 2020, Alex Bennée wrote:
> Oleksandr Tyshchenko <olekstysh@gmail.com> writes:
> > On Thu, Oct 29, 2020 at 9:42 AM Masami Hiramatsu <
> > masami.hiramatsu@linaro.org> wrote:
> >
> >> Hi Oleksandr,
> >>
> > Hi Masami
> >
> > [sorry for the possible format issue]
> >
> >
> >>
> >> I would like to try this on my arm64 board.
> >>
> > Glad to hear you are interested in this topic.
> >
> >
> >>
> >> According to your comments in the patch, I made this config file.
> >> # cat debian.conf
> >> name = "debian"
> >> type = "pvh"
> >> vcpus = 8
> >> memory = 512
> >> kernel = "/opt/agl/vmlinuz-5.9.0-1-arm64"
> >> ramdisk = "/opt/agl/initrd.img-5.9.0-1-arm64"
> >> cmdline= "console=hvc0 earlyprintk=xen root=/dev/xvda1 rw"
> >> disk = [ '/opt/agl/debian.qcow2,qcow2,hda' ]
> >> vif = [ 'mac=00:16:3E:74:3d:76,bridge=xenbr0' ]
> >> virtio = 1
> >> vdisk = [ 'backend=Dom0, disks=ro:/dev/sda1' ]
> >>
> >> And tried to boot a DomU, but I got below error.
> >>
> >> # xl create -c debian.conf
> >> Parsing config from debian.conf
> >> libxl: error: libxl_create.c:1863:domcreate_attach_devices: Domain
> >> 1:unable to add virtio_disk devices
> >> libxl: error: libxl_domain.c:1218:destroy_domid_pci_done: Domain
> >> 1:xc_domain_pause failed
> >> libxl: error: libxl_dom.c:39:libxl__domain_type: unable to get domain
> >> type for domid=1
> >> libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
> >> 1:Unable to destroy guest
> >> libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
> >> 1:Destruction of domain failed
> >>
> >>
> >> Could you tell me how can I test it?
> >>
> >
> > I assume it is due to the lack of the virtio-disk backend (which I haven't
> > shared yet, as I focused on the IOREQ/DM support on Arm in the first place).
> > Could you wait a little bit? I am going to share it soon.
> 
> I assume this is wiring up the required bits of support to handle the
> IOREQ requests in QEMU? We are putting together a PoC demo to show
> a virtio-enabled image (AGL) running on both KVM and Xen hypervisors, so
> we are keen to see your code as soon as you can share it.
> 
> I'm currently preparing a patch series for QEMU which fixes the recent
> breakage caused by changes to the build system. As part of that I've
> separated CONFIG_XEN and CONFIG_XEN_HVM so it's possible to build
> i386-softmmu without the bits that ARM doesn't need. Hopefully this will
> allow me to create a qemu-aarch64-system binary with just the PV-related
> models in it.
> 
> Talking to Stefano, it probably makes sense going forward to introduce a
> new IOREQ-aware machine type for QEMU that doesn't bring in the rest of
> the x86 overhead. I was thinking maybe xenvirt?
> 
> You've tested with virtio-block but we'd also like to extend this to
> other arbitrary virtio devices. I guess we will need some sort of
> mechanism to inform the QEMU command line where each device sits in the
> virtio-mmio bus so the FDT Xen delivers to the guest matches up with
> what QEMU is ready to serve requests for?

That would be great. As a reference, given that the domU memory mapping
layout is fixed, on x86 we just made sure that QEMU's idea of where the
devices are is the same as Xen's. What you are suggesting would make it
much more flexible and clearer.
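[For illustration only, here is a rough sketch of the kind of shared layout the thread is discussing. The base address, register-window size, and IRQ numbers below are invented for the example; they are not Xen's actual guest memory map or QEMU's option syntax.]

```python
# Purely illustrative sketch: all constants here are made-up assumptions,
# not Xen's real guest layout.

VIRTIO_MMIO_BASE = 0x02000000  # hypothetical base of the virtio-mmio region
VIRTIO_MMIO_SIZE = 0x200       # one transport's register window
VIRTIO_MMIO_IRQ0 = 33          # hypothetical first interrupt number

def assign_slots(devices):
    """Give each device a fixed (address, irq) slot by list position.

    If both the toolstack's FDT generator and the QEMU command-line
    builder walk the same ordered list, the guest's device tree and
    QEMU's virtio-mmio transports automatically agree on where each
    device sits.
    """
    return [(name,
             VIRTIO_MMIO_BASE + i * VIRTIO_MMIO_SIZE,
             VIRTIO_MMIO_IRQ0 + i)
            for i, name in enumerate(devices)]

for name, addr, irq in assign_slots(["virtio-blk", "virtio-net"]):
    # Each slot would be rendered twice: once as a virtio_mmio@<addr>
    # FDT node, and once as the matching QEMU transport argument.
    print(f"{name}: addr={addr:#x} irq={irq}")
```

The design choice here is simply that one ordered list is the single source of truth for both sides, which is what keeps the FDT Xen delivers and QEMU's view of the bus in sync.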
--8323329-451898909-1604002257=:12247--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:15:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:15:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14676.36274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEKn-00070p-49; Thu, 29 Oct 2020 20:15:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14676.36274; Thu, 29 Oct 2020 20:15:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEKn-00070i-0r; Thu, 29 Oct 2020 20:15:45 +0000
Received: by outflank-mailman (input) for mailman id 14676;
 Thu, 29 Oct 2020 20:15:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1VJZ=EE=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kYEKl-00070d-9o
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:15:43 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 984761c0-ab78-4b12-ab96-1a693cbc1d06;
 Thu, 29 Oct 2020 20:15:42 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kYEKi-000LJw-Kc; Thu, 29 Oct 2020 20:15:40 +0000
Date: Thu, 29 Oct 2020 20:15:40 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH v3 2/3] x86/shadow: refactor shadow_vram_{get,put}_l1e()
Message-ID: <20201029201540.GA81685@deinos.phlegethon.org>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
 <bc686036-18c0-ba7b-b8e5-a20b914aac68@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <bc686036-18c0-ba7b-b8e5-a20b914aac68@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 10:44 +0200 on 19 Oct (1603104271), Jan Beulich wrote:
> By passing the functions an MFN and flags, only a single instance of
> each is needed; they were pretty large for being inline functions
> anyway.
> 
> While moving the code, also adjust coding style and add const where
> sensible / possible.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:17:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14681.36285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEMs-000799-Gl; Thu, 29 Oct 2020 20:17:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14681.36285; Thu, 29 Oct 2020 20:17:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEMs-000792-Du; Thu, 29 Oct 2020 20:17:54 +0000
Received: by outflank-mailman (input) for mailman id 14681;
 Thu, 29 Oct 2020 20:17:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYEMr-00078x-3F
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:17:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1ac74a9-13e7-4db1-9ec0-177c593647be;
 Thu, 29 Oct 2020 20:17:51 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 58A0220639;
 Thu, 29 Oct 2020 20:17:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604002670;
	bh=z1hQL2B8LbNft63IM5lmDeIRTvFcnbRTIi5NFyeo4cU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=F1MyV4YCvNg6iTFDlPit5CdQTNK4ZV2ady5XTxRhmSeQnTI/tR/i+Caa8fkWOXBRy
	 HArPUNUE4loEF/Jl+aJY7eMrjmZnmgGrXq761jkFCVEl498gkAqWW6mfEF2C0nAsQQ
	 Ey7nO3JWcM6VV8hwyOmXlH65jeZylY78KDmFwMw0=
Date: Thu, 29 Oct 2020 13:17:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Julien Grall <julien@xen.org>, Rahul Singh <Rahul.Singh@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
In-Reply-To: <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
Message-ID: <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com> <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org> <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com> <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s> <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com> <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org> <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com> <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com> <bc697327-2750-9a78-042d-d45690d27928@xen.org> <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-670353858-1604002670=:12247"


--8323329-670353858-1604002670=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 29 Oct 2020, Bertrand Marquis wrote:
> > On 28 Oct 2020, at 19:12, Julien Grall <julien@xen.org> wrote:
> > On 26/10/2020 11:03, Rahul Singh wrote:
> >> Hello Julien,
> >>> On 23 Oct 2020, at 4:19 pm, Julien Grall <julien@xen.org> wrote:
> >>> On 23/10/2020 15:27, Rahul Singh wrote:
> >>>> Hello Julien,
> >>>>> On 23 Oct 2020, at 2:00 pm, Julien Grall <julien@xen.org> wrote:
> >>>>> On 23/10/2020 12:35, Rahul Singh wrote:
> >>>>>> Hello,
> >>>>>>> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
> >>>>>>> 
> >>>>>>> On Thu, 22 Oct 2020, Julien Grall wrote:
> >>>>>>>>>> On 20/10/2020 16:25, Rahul Singh wrote:
> >>>>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
> >>>>>>>>>>> the Linux SMMUv3 driver.
> >>>>>>>>>>> Major differences from the Linux driver are as follows:
> >>>>>>>>>>> 1. Only Stage-2 translation is supported, as compared to the Linux driver,
> >>>>>>>>>>>    which supports both Stage-1 and Stage-2 translations.
> >>>>>>>>>>> 2. The P2M page table is used instead of creating one, as SMMUv3 has the
> >>>>>>>>>>>    capability to share the page tables with the CPU.
> >>>>>>>>>>> 3. Tasklets are used in place of Linux's threaded IRQs for event queue
> >>>>>>>>>>>    and priority queue IRQ handling.
> >>>>>>>>>> 
> >>>>>>>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
> >>>>>>>>>> have priority over anything else (IOW nothing will run on the pCPU until
> >>>>>>>>>> they are done).
> >>>>>>>>>> 
> >>>>>>>>>> Do you know why Linux is using thread. Is it because of long running
> >>>>>>>>>> operations?
> >>>>>>>>> 
> >>>>>>>>> Yes, you are right: Linux uses threaded IRQs because of the
> >>>>>>>>> long-running operations.
> >>>>>>>>> 
> >>>>>>>>> SMMUv3 reports faults/events via memory-based circular buffer queues,
> >>>>>>>>> not via registers. As per my understanding, it is time-consuming to
> >>>>>>>>> process the memory-based queues in interrupt context, which is why
> >>>>>>>>> Linux uses a threaded IRQ to process the faults/events from the SMMU.
> >>>>>>>>> 
> >>>>>>>>> I didn’t find any other solution in XEN in place of tasklets to defer
> >>>>>>>>> the work; that’s why I used tasklets in XEN as a replacement for
> >>>>>>>>> threaded IRQs. If we do all the work in interrupt context, we will
> >>>>>>>>> make XEN less responsive.
> >>>>>>>> 
> >>>>>>>> So we need to make sure that Xen continue to receives interrupts, but we also
> >>>>>>>> need to make sure that a vCPU bound to the pCPU is also responsive.
> >>>>>>>> 
> >>>>>>>>> 
> >>>>>>>>> If you know another solution in XEN that will be used to defer the work in
> >>>>>>>>> the interrupt please let me know I will try to use that.
> >>>>>>>> 
> >>>>>>>> One of my colleagues encountered a similar problem recently. He had a
> >>>>>>>> long-running tasklet and wanted it broken down into smaller chunks.
> >>>>>>>> 
> >>>>>>>> We decided to use a timer to reschedule the tasklet in the future. This
> >>>>>>>> allows the scheduler to run other loads (e.g. vCPUs) for some time.
> >>>>>>>> 
> >>>>>>>> This is pretty hackish, but I couldn't find a better solution, as
> >>>>>>>> tasklets have high priority.
> >>>>>>>> 
> >>>>>>>> Maybe the other will have a better idea.
> >>>>>>> 
> >>>>>>> Julien's suggestion is a good one.
> >>>>>>> 
> >>>>>>> But I think tasklets can be configured to be called from the idle_loop,
> >>>>>>> in which case they are not run in interrupt context?
> >>>>>>> 
> >>>>>>  Yes, you are right: the tasklet will be scheduled from the idle_loop, which is not interrupt context.
> >>>>> 
> >>>>> This depends on your tasklet. Some will run from the softirq context which is usually (for Arm) on the return of an exception.
> >>>>> 
> >>>> Thanks for the info. I will check and get a better understanding of how tasklets run in XEN.
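[As an aside, the "bound each tasklet run and re-arm a timer" idea discussed above can be sketched generically. This is plain Python with invented names (BUDGET, tasklet_fn, timer_armed), not Xen's actual tasklet/timer API.]

```python
from collections import deque

BUDGET = 4  # max queue entries handled per tasklet run (assumed bound)

class ChunkedEventHandler:
    """Drain a work queue in bounded chunks instead of one long run."""

    def __init__(self, events):
        self.queue = deque(events)
        self.handled = []
        self.timer_armed = False  # stands in for a set_timer() re-arm

    def tasklet_fn(self):
        """Handle at most BUDGET events, then yield the CPU.

        If work remains, arm a timer whose handler would schedule the
        tasklet again later, letting the scheduler run other loads
        (e.g. vCPUs) in between.
        """
        self.timer_armed = False
        for _ in range(BUDGET):
            if not self.queue:
                return
            self.handled.append(self.queue.popleft())
        if self.queue:
            self.timer_armed = True  # e.g. set_timer(&t, NOW() + delay)

h = ChunkedEventHandler(range(10))
h.tasklet_fn()  # handles 4 events, re-arms the timer
h.tasklet_fn()  # handles 4 more, re-arms again
h.tasklet_fn()  # handles the last 2; no re-arm needed
```

The trade-off is latency for responsiveness: faults are processed slightly later, but no single tasklet run monopolizes the pCPU.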
> >>>>>>> 
> >>>>>>>>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
> >>>>>>>>>>>    access functions based on atomic operations implemented in Linux.
> >>>>>>>>>> 
> >>>>>>>>>> Can you provide more details?
> >>>>>>>>> 
> >>>>>>>>> I tried to port the latest version of the SMMUv3 code, and I observed
> >>>>>>>>> that in order to port it I would also have to port atomic operations
> >>>>>>>>> implemented in Linux to XEN, as the latest Linux code uses atomic
> >>>>>>>>> operations to process the command queues (atomic_cond_read_relaxed(),
> >>>>>>>>> atomic_long_cond_read_relaxed(), atomic_fetch_andnot_relaxed()).
> >>>>>>>> 
> >>>>>>>> Thank you for the explanation. I think it would be best to import the atomic
> >>>>>>>> helpers and use the latest code.
> >>>>>>>> 
> >>>>>>>> This will ensure that we don't re-introduce bugs and also buy us some time
> >>>>>>>> before the Linux and Xen drivers diverge too much again.
> >>>>>>>> 
> >>>>>>>> Stefano, what do you think?
> >>>>>>> 
> >>>>>>> I think you are right.
> >>>>>> Yes, I agree with you that the Xen code should be kept in sync with the Linux code; that's why I started by porting the Linux atomic operations to Xen. Then I realised that porting the atomic operations is not straightforward and requires a lot of effort and testing. Therefore I decided to port the code from before the atomic operations were introduced in Linux.
> >>>>> 
> >>>>> Hmmm... I would not have expected a lot of effort to be required to add the 3 atomic operations above. Are you trying to also port the LSE support at the same time?
> >>>> There are other atomic operations used in the SMMUv3 code apart from the 3 atomic operations I mentioned. I just mentioned those 3 as an example.
> >>> 
> >>> Ok. Do you have a list you could share?
> >>> 
> >> Yes. Please find the list of operations that we would have to port to Xen in order to merge the latest SMMUv3 code.
> > 
> > Thanks!
> > 
> >> If we start to port the list below, we might also have to port further atomic operations on top of which the ones below are implemented. I have not looked in detail at how these atomic operations are implemented, but as per my understanding it would require significant effort to port them to Xen, along with a lot of testing.
> > 
> > To begin with, I think it is fine to implement them with a stronger memory barrier than necessary, or in a less efficient way. This can be refined afterwards.
> > 
> > Those helpers can be defined directly in the SMMUv3 driver so we know they are not for general-purpose use :).
> > 
> >> 1. atomic_set_release
> > 
> > This could be implemented as:
> > 
> > smp_mb();
> > atomic_set();
> > 
> >> 2. atomic_fetch_andnot_relaxed
> > 
> > This would need to be imported.
> > 
> >> 3. atomic_cond_read_relaxed
> > 
> > This would need to be imported. The simplest version seems to be the generic version provided by include/asm-generic/barrier.h (see smp_cond_load_relaxed).
> > 
> >> 4. atomic_long_cond_read_relaxed
> >> 5. atomic_long_xor
> > 
> > These two would require an implementation of atomic64. Volodymyr also needed a version. I offered my help; however, I haven't found enough time to do it yet :(.
> > 
> > For Arm64, it would be possible to do a copy/paste of the existing helpers and replace anything related to a 32-bit register with a 64-bit one.
> > 
> > For Arm32, they are a bit more complex because you now need to work with 2 registers.
> > 
> > However, for your purpose, you would be using atomic_long_t. So the Arm64 implementation should be sufficient.
> > 
> >> 6. atomic_set_release
> > 
> > Same as 1.
> > 
> >> 7. atomic_cmpxchg_relaxed - maybe we can use the atomic_cmpxchg that is already implemented in Xen; need to check.
> > atomic_cmpxchg() is strongly ordered, so it would be fine to use it for implementing the helper, although it is less efficient :).
> > 
> >> 8. atomic_dec_return_release
> > 
> > Xen implements a stronger version, atomic_dec_return(). You can re-use it here.
> > 
> >> 9. atomic_fetch_inc_relaxed
> > 
> > This would need to be imported.
> 
> We do agree that this is the work required, but some of the things to be imported have
> dependencies, so this is not a simple change to the patch done by Rahul; it would require
> almost restarting the implementation and testing from the very beginning.
> 
> In the meantime, could we proceed with the review of this SMMUv3 driver and include it in Xen as is?
>
> We can set Rahul and me as maintainers and flag the driver as experimental in SUPPORT.md (it is
> already protected by the EXPERT configuration in Kconfig).

I think that is OK as long as you make sure to go through the changelog
of the Linux driver to check that we are not missing any bug fixes and
errata fixes.


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:27:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:27:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14686.36302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEWB-00087T-JV; Thu, 29 Oct 2020 20:27:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14686.36302; Thu, 29 Oct 2020 20:27:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEWB-00087M-Ft; Thu, 29 Oct 2020 20:27:31 +0000
Received: by outflank-mailman (input) for mailman id 14686;
 Thu, 29 Oct 2020 20:27:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1VJZ=EE=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kYEWA-00087E-6e
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:27:30 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e08d0d0-cfec-40bb-b073-815e71a49c29;
 Thu, 29 Oct 2020 20:27:29 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kYEW7-000LLR-AI; Thu, 29 Oct 2020 20:27:27 +0000
X-Inumbo-ID: 4e08d0d0-cfec-40bb-b073-815e71a49c29
Date: Thu, 29 Oct 2020 20:27:27 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH v3 3/3] x86/shadow: sh_{make,destroy}_monitor_table() are
 "even more" HVM-only
Message-ID: <20201029202727.GB81685@deinos.phlegethon.org>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
 <cd39abe3-5a5c-6ebc-a11e-3d4ed1d74907@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <cd39abe3-5a5c-6ebc-a11e-3d4ed1d74907@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 10:45 +0200 on 19 Oct (1603104300), Jan Beulich wrote:
> With them depending on just the number of shadow levels, there's no need
> for more than one instance of them, and hence no need for any hook (IOW
> 452219e24648 ["x86/shadow: monitor table is HVM-only"] didn't go quite
> far enough). Move the functions to hvm.c while dropping the dead
> is_pv_32bit_domain() code paths.
>
> While moving the code, replace a stale comment reference to
> sh_install_xen_entries_in_l4(). Doing so made me notice the function
> also didn't have its prototype dropped in 8d7b633adab7 ("x86/mm:
> Consolidate all Xen L4 slot writing into init_xen_l4_slots()"), which
> gets done here as well.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>

> TBD: In principle both functions could have their first parameter
>      constified. In fact, "destroy" doesn't depend on the vCPU at all
>      and hence could be passed a struct domain *. Not sure whether such
>      an asymmetry would be acceptable.
>      In principle "make" would also not need passing of the number of
>      shadow levels (can be derived from v), which would result in yet
>      another asymmetry.
>      If these asymmetries were acceptable, "make" could then also update
>      v->arch.hvm.monitor_table, instead of doing so at both call sites.

Feel free to add consts, but please don't change the function
parameters any more than that.  I would rather keep them as a matched
pair, and leave the hvm.monitor_table updates in the caller, where
it's easier to see why they're not symmetrical.

Cheers

Tim.


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:36:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:36:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14711.36394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEfF-00013e-D0; Thu, 29 Oct 2020 20:36:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14711.36394; Thu, 29 Oct 2020 20:36:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEfF-000137-9z; Thu, 29 Oct 2020 20:36:53 +0000
Received: by outflank-mailman (input) for mailman id 14711;
 Thu, 29 Oct 2020 20:36:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1VJZ=EE=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kYEfE-000131-Ck
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:36:52 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a08deb1b-57c1-45d0-8f22-28d6ca92c564;
 Thu, 29 Oct 2020 20:36:51 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kYEfB-000LMl-8k; Thu, 29 Oct 2020 20:36:49 +0000
X-Inumbo-ID: a08deb1b-57c1-45d0-8f22-28d6ca92c564
Date: Thu, 29 Oct 2020 20:36:49 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH 2/5] x86/p2m: collapse the two ->write_p2m_entry() hooks
Message-ID: <20201029203649.GC81685@deinos.phlegethon.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <b26981d1-7a1a-2387-0640-574bdf11ceff@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <b26981d1-7a1a-2387-0640-574bdf11ceff@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 10:22 +0100 on 28 Oct (1603880578), Jan Beulich wrote:
> The struct paging_mode instances get set to the same functions
> regardless of mode by both HAP and shadow code, hence there's no point
> having this hook there. The hook also doesn't need moving elsewhere - we
> can directly use struct p2m_domain's. This merely requires (from a
> strictly formal pov; in practice this may not even be needed) making
> sure we don't end up using safe_write_pte() for nested P2Ms.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Tim Deegan <tim@xen.org>


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:41:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:41:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14716.36405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEjV-0001uV-UQ; Thu, 29 Oct 2020 20:41:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14716.36405; Thu, 29 Oct 2020 20:41:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEjV-0001uO-RN; Thu, 29 Oct 2020 20:41:17 +0000
Received: by outflank-mailman (input) for mailman id 14716;
 Thu, 29 Oct 2020 20:41:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l+xb=EE=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1kYEjV-0001uJ-05
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:41:17 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a434d79-33f8-4cc0-8048-6e6dde7f122e;
 Thu, 29 Oct 2020 20:41:15 +0000 (UTC)
X-Inumbo-ID: 8a434d79-33f8-4cc0-8048-6e6dde7f122e
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1604004074;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=atBWSsg54EdElpQmxIGprMythfayvCfZuL6C0LobpJM=;
	b=zq8OmPxEES1sHwpepW8vzEe0jYTTlYtXV12+hxY9jzb3tYWHhm/ZdTBMeg3suW0uXnI22o
	QRfyl9qohhkGLsu5YphvfKNBGVVDQawQCEUKIkYyS9kHIx+egw2JhKwPfjRd0CQcLbf0Ma
	0+PC2RmqmSccZCm0+EePmF3HtanM8DIlixTveKLP2V0Ksru8Qhe9ajxfSPkPPl0uWvBwkU
	CZwbIAvCZIttVS0GI97ynyZTPYQQIOv8hYusYoSjZ4dO50yoSQ3YCiIbnrq6RPIo7I7h9P
	itJ4XgJrsTvo9TskPmCZoEXzffgEzSJ6VbIxGy8NcIttwKKxU5XG9VypabrK6g==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1604004074;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=atBWSsg54EdElpQmxIGprMythfayvCfZuL6C0LobpJM=;
	b=0iX007DAMCgsp9VjWAMG/D89r7Kujao6BFenyKLGjjTNx6nxedksxtcAkjsrsCDNJr2Ram
	K5nPYTQvIUbyR9CA==
To: Paolo Bonzini <pbonzini@redhat.com>, Arvind Sankar <nivedita@alum.mit.edu>, David Laight <David.Laight@ACULAB.COM>
Cc: 'Arnd Bergmann' <arnd@kernel.org>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, "x86\@kernel.org" <x86@kernel.org>, Arnd Bergmann <arnd@arndb.de>, "K. Y. Srinivasan" <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>, Stephen Hemminger <sthemmin@microsoft.com>, "H. Peter Anvin" <hpa@zytor.com>, "Rafael J. Wysocki" <rjw@rjwysocki.net>, Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>, "linux-hyperv\@vger.kernel.org" <linux-hyperv@vger.kernel.org>, "linux-kernel\@vger.kernel.org" <linux-kernel@vger.kernel.org>, "kvm\@vger.kernel.org" <kvm@vger.kernel.org>, "platform-driver-x86\@vger.kernel.org" <platform-driver-x86@vger.kernel.org>, "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "iommu\@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
In-Reply-To: <93180c2d-268c-3c33-7c54-4221dfe0d7ad@redhat.com>
References: <20201028212417.3715575-1-arnd@kernel.org> <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com> <20201029165611.GA2557691@rani.riverdale.lan> <93180c2d-268c-3c33-7c54-4221dfe0d7ad@redhat.com>
Date: Thu, 29 Oct 2020 21:41:13 +0100
Message-ID: <87v9esojdi.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Thu, Oct 29 2020 at 17:59, Paolo Bonzini wrote:
> On 29/10/20 17:56, Arvind Sankar wrote:
>>> For those two just add:
>>> 	struct apic *apic = x86_system_apic;
>>> before all the assignments.
>>> Less churn and much better code.
>>>
>> Why would it be better code?
>> 
>
> I think he means the compiler produces better code, because it won't
> read the global variable repeatedly.  Not sure if that's true,(*) but I
> think I do prefer that version if Arnd wants to do that tweak.

It's not true.

     foo *p = bar;

     p->a = 1;
     p->b = 2;

The compiler is free to reload bar after accessing p->a, and with

    bar->a = 1;
    bar->b = 2;

it can either cache bar in a register or reread it after bar->a.

The generated code is the same as long as there is no reason to reload,
e.g. register pressure.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:46:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:46:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14737.36482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEos-0002Tl-BF; Thu, 29 Oct 2020 20:46:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14737.36482; Thu, 29 Oct 2020 20:46:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEos-0002Td-7u; Thu, 29 Oct 2020 20:46:50 +0000
Received: by outflank-mailman (input) for mailman id 14737;
 Thu, 29 Oct 2020 20:46:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1VJZ=EE=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kYEor-0002TX-0Q
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:46:49 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93f3cfd3-0c49-4615-969e-7806f7b833ab;
 Thu, 29 Oct 2020 20:46:48 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kYEon-000LO7-M6; Thu, 29 Oct 2020 20:46:45 +0000
X-Inumbo-ID: 93f3cfd3-0c49-4615-969e-7806f7b833ab
Date: Thu, 29 Oct 2020 20:46:45 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Subject: Re: [PATCH 5/5] x86/p2m: split write_p2m_entry() hook
Message-ID: <20201029204645.GD81685@deinos.phlegethon.org>
References: <29d30de1-2a8d-aee2-d3c3-331758766fc9@suse.com>
 <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <7b2b7cc9-8828-41bd-7949-764161bbe7ff@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 10:24 +0100 on 28 Oct (1603880693), Jan Beulich wrote:
> Fair parts of the present handlers are identical; in fact
> nestedp2m_write_p2m_entry() lacks a call to p2m_entry_modify(). Move
> common parts right into write_p2m_entry(), splitting the hooks into a
> "pre" one (needed just by shadow code) and a "post" one.
> 
> For the common parts moved I think that the p2m_flush_nestedp2m() is,
> at least from an abstract perspective, also applicable in the shadow
> case. Hence it doesn't get a 3rd hook put in place.
> 
> The initial comment that was in hap_write_p2m_entry() gets dropped: Its
> placement was bogus, and looking back at the commit introducing it
> (dd6de3ab9985 "Implement Nested-on-Nested") I can't see either what use
> of a p2m it was meant to be associated with.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

This seems like a good approach to me.  I'm happy with the shadow
parts but am not confident enough on nested p2m any more to have an
opinion on that side. 

Acked-by: Tim Deegan <tim@xen.org>



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:52:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:52:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14776.36621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEu4-00041C-N0; Thu, 29 Oct 2020 20:52:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14776.36621; Thu, 29 Oct 2020 20:52:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEu4-000415-Jw; Thu, 29 Oct 2020 20:52:12 +0000
Received: by outflank-mailman (input) for mailman id 14776;
 Thu, 29 Oct 2020 20:52:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1VJZ=EE=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kYEu3-00040x-8a
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:52:11 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43bd7d4b-06da-4db6-89cc-b590a21d6e53;
 Thu, 29 Oct 2020 20:52:10 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kYEu0-000LOw-TH; Thu, 29 Oct 2020 20:52:08 +0000
X-Inumbo-ID: 43bd7d4b-06da-4db6-89cc-b590a21d6e53
Date: Thu, 29 Oct 2020 20:52:08 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH] x86/shadow: correct GFN use by
 sh_unshadow_for_p2m_change()
Message-ID: <20201029205208.GE81685@deinos.phlegethon.org>
References: <888b8f2b-4368-6179-25c5-dc3eb6acbf3d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <888b8f2b-4368-6179-25c5-dc3eb6acbf3d@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 16:42 +0100 on 28 Oct (1603903365), Jan Beulich wrote:
> Luckily sh_remove_all_mappings()'s use of the parameter is limited to
> generation of log messages. Nevertheless we'd better pass correct GFNs
> around:
> - the incoming GFN, when replacing a large page, may not be large page
>   aligned,
> - incrementing by page-size-scaled values can't be right.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Tim Deegan <tim@xen.org>

Thanks!

Tim.


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 20:54:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 20:54:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14792.36674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEvk-0004Pq-IY; Thu, 29 Oct 2020 20:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14792.36674; Thu, 29 Oct 2020 20:53:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYEvk-0004Pj-De; Thu, 29 Oct 2020 20:53:56 +0000
Received: by outflank-mailman (input) for mailman id 14792;
 Thu, 29 Oct 2020 20:53:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1VJZ=EE=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1kYEvj-0004Pa-IP
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 20:53:55 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0dac4ed7-8798-413c-84db-ac32706fd07c;
 Thu, 29 Oct 2020 20:53:54 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1kYEvf-000LPS-Al; Thu, 29 Oct 2020 20:53:51 +0000
X-Inumbo-ID: 0dac4ed7-8798-413c-84db-ac32706fd07c
Date: Thu, 29 Oct 2020 20:53:51 +0000
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH v3 0/3] x86: shim building adjustments (plus shadow
 follow-on)
Message-ID: <20201029205351.GF81685@deinos.phlegethon.org>
References: <d09b0690-c5e0-a90b-b4c0-4396a5f62c59@suse.com>
 <73ec8762-d7c4-c46f-b0bf-f40b89377312@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <73ec8762-d7c4-c46f-b0bf-f40b89377312@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 14:40 +0100 on 29 Oct (1603982415), Jan Beulich wrote:
> Tim,
> 
> unless you tell me otherwise I'm intending to commit the latter
> two with Roger's acks some time next week. Of course it would
> still be nice to know your view on the first of the TBDs in
> patch 3 (which may result in further changes, in a follow-up).

Ack, sorry for the dropped patches, and thank you for the ping.  I've
now replied to everything that I think is waiting for my review.

Cheers,

Tim.


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 21:14:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 21:14:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.14917.37162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYFFG-0000Rf-NY; Thu, 29 Oct 2020 21:14:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 14917.37162; Thu, 29 Oct 2020 21:14:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYFFG-0000RY-Jq; Thu, 29 Oct 2020 21:14:06 +0000
Received: by outflank-mailman (input) for mailman id 14917;
 Thu, 29 Oct 2020 21:14:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm+X=EE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kYFFF-0000RM-9U
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 21:14:05 +0000
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a7bbdfb-12ad-4d87-9930-5f3852a3d78e;
 Thu, 29 Oct 2020 21:14:03 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id w23so1107662wmi.4
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 14:14:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=D8eSB78EFRLXythUPCXWl4F9Wn1YnT4u0iH1EGYuYfw=;
        b=iy3yWpN/lBOR/vjfFY3UbQ9P1W3O79OVJNRecMBc6P5FVsznAr7L6gjasOgAC0GH5P
         M5xTeKNeiZokJLh6kfCy9xWoGXvhm984woBtVu/PVVJiGEAttUEiPb77CcviTG+/ZO3v
         4BLVCsjzgkP7eRq3L5iazPLC7Qr33qRJtLEQg3xmnFiv9pRmypOwwZM/bc0757QxQxrv
         clEZIA/QcoQkfUakAuMPA6IunPgOiB3wZOrt8iQaHMoJdSGGFaGnnM1vC6q3jF7qiSIe
         NlN5dxUbf4py6dR9siuE+vEd2dVlYJeo4f/A/On9uYR4pwGp5QlWwmBKXHrVh1rzqDHG
         dqIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=D8eSB78EFRLXythUPCXWl4F9Wn1YnT4u0iH1EGYuYfw=;
        b=CDHvtP4RCAEif48IExYYn4mjySy83mhDnx0TqahAvYryEv1lEEYtlIJjxAfzom8mBk
         MKUMr2NC2lTp8MyEJgJm/aQQ8yqKYX+Ms1/04KwjHIhzkOitDcLNkOopA1xaBmafJOSP
         TFa5FOxHrYSZwbd3JwV3Eri4Og4y+StrnVsOqZ9i656PCt7KNXqT3+tTGJ+gduIltDSh
         7RgM3NzrCACMgDZnsGR7wYtBXk9QfajaFm7GMWqEpSE8YyEfzD5zxgWC+eJnZbG6EGhi
         r3U8E/674NnCfbkAp1Mh+18pHO8A6fdWpc3NmZRwwwbz8Rz8xLXv3xxtTcD1vcSbU0kg
         8Cfg==
X-Gm-Message-State: AOAM530dX7+K56+m0pxJM2JsaTqMdpYHcJDbNFxDlzyy9why6iSBBIKE
	o+hb+uvPdZeTLnOsk0rAkk4WaOiDUn6iYOv/gok=
X-Google-Smtp-Source: ABdhPJyNiCh2YgVlVzX55ffL1sEm/QYthO4nJNqfoowwh4t07Ek7ExLl31U3s1X09+qzexbbc398gr/bI5Pp1ABmXoM=
X-Received: by 2002:a1c:acc1:: with SMTP id v184mr808495wme.63.1604006043018;
 Thu, 29 Oct 2020 14:14:03 -0700 (PDT)
MIME-Version: 1.0
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com> <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Thu, 29 Oct 2020 23:13:51 +0200
Message-ID: <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Julien Grall <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Tim Deegan <tim@xen.org>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Content-Type: multipart/alternative; boundary="000000000000696d0f05b2d5c215"

--000000000000696d0f05b2d5c215
Content-Type: text/plain; charset="UTF-8"

Hi Stefano

[sorry for the possible format issue]

On Thu, Oct 29, 2020 at 9:53 PM Stefano Stabellini <sstabellini@kernel.org>
wrote:

> On Thu, 29 Oct 2020, Oleksandr Tyshchenko wrote:
> > On Thu, Oct 29, 2020 at 9:42 AM Masami Hiramatsu <masami.hiramatsu@linaro.org> wrote:
> >       Hi Oleksandr,
> >
> > Hi Masami
> >
> > [sorry for the possible format issue]
> >
> >
> >       I would like to try this on my arm64 board.
> >
> > Glad to hear you are interested in this topic.
> >
> >
> >       According to your comments in the patch, I made this config file.
> >       # cat debian.conf
> >       name = "debian"
> >       type = "pvh"
> >       vcpus = 8
> >       memory = 512
> >       kernel = "/opt/agl/vmlinuz-5.9.0-1-arm64"
> >       ramdisk = "/opt/agl/initrd.img-5.9.0-1-arm64"
> >       cmdline= "console=hvc0 earlyprintk=xen root=/dev/xvda1 rw"
> >       disk = [ '/opt/agl/debian.qcow2,qcow2,hda' ]
> >       vif = [ 'mac=00:16:3E:74:3d:76,bridge=xenbr0' ]
> >       virtio = 1
> >       vdisk = [ 'backend=Dom0, disks=ro:/dev/sda1' ]
> >
> >       And tried to boot a DomU, but I got below error.
> >
> >       # xl create -c debian.conf
> >       Parsing config from debian.conf
> >       libxl: error: libxl_create.c:1863:domcreate_attach_devices: Domain
> >       1:unable to add virtio_disk devices
> >       libxl: error: libxl_domain.c:1218:destroy_domid_pci_done: Domain
> >       1:xc_domain_pause failed
> >       libxl: error: libxl_dom.c:39:libxl__domain_type: unable to get domain
> >       type for domid=1
> >       libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
> >       1:Unable to destroy guest
> >       libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
> >       1:Destruction of domain failed
> >
> >
> >       Could you tell me how can I test it?
> >
> >
> > I assume it is due to the lack of the virtio-disk backend (which I haven't
> > shared yet as I focused on the IOREQ/DM support on Arm in the first place).
> > Could you wait a little bit, I am going to share it soon.
>
> Do you have a quick-and-dirty hack you can share in the meantime? Even
> just on github as a special branch? It would be very useful to be able
> to have a test-driver for the new feature.

Well, I will provide a branch on github with our PoC virtio-disk backend by
the end of this week. It will be possible to test this series with it.


-- 
Regards,

Oleksandr Tyshchenko

--000000000000696d0f05b2d5c215--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 21:30:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 21:30:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15022.37556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYFUn-0004IR-AU; Thu, 29 Oct 2020 21:30:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15022.37556; Thu, 29 Oct 2020 21:30:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYFUn-0004Hp-6h; Thu, 29 Oct 2020 21:30:09 +0000
Received: by outflank-mailman (input) for mailman id 15022;
 Thu, 29 Oct 2020 21:30:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nYkQ=EE=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1kYFUm-0004Es-7w
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 21:30:08 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f3d3471-aeb5-4326-92ce-f0fba5042a55;
 Thu, 29 Oct 2020 21:30:06 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 09TLTtIO053328
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 29 Oct 2020 17:30:01 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 09TLTsYY053327;
 Thu, 29 Oct 2020 14:29:54 -0700 (PDT) (envelope-from ehem)
Date: Thu, 29 Oct 2020 14:29:54 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, roman@zededa.com,
        xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
Message-ID: <20201029212954.GA50793@mattapan.m5p.com>
References: <20201023005629.GA83870@mattapan.m5p.com>
 <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s>
 <20201023211941.GA90171@mattapan.m5p.com>
 <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s>
 <20201024053540.GA97417@mattapan.m5p.com>
 <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org>
 <20201026160316.GA20589@mattapan.m5p.com>
 <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org>
 <20201028015423.GA33407@mattapan.m5p.com>
 <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com

On Wed, Oct 28, 2020 at 05:37:02PM -0700, Stefano Stabellini wrote:
> On Tue, 27 Oct 2020, Elliott Mitchell wrote:
> > On Mon, Oct 26, 2020 at 06:44:27PM +0000, Julien Grall wrote:
> > > On 26/10/2020 16:03, Elliott Mitchell wrote:
> > > > On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
> > > >> On 24/10/2020 06:35, Elliott Mitchell wrote:
> > > >>> ACPI has a distinct
> > > >>> means of specifying a limited DMA-width; the above fails, because it
> > > >>> assumes a *device-tree*.
> > > >>
> > > >> Do you know if it would be possible to infer from the ACPI static table
> > > >> the DMA-width?
> > > > 
> > > > Yes, and it is.  Due to not knowing much about ACPI tables I don't know
> > > > what the C code would look like though (problem is which documentation
> > > > should I be looking at first?).
> > > 
> > > What you provided below is an excerpt of the DSDT. AFAIK, DSDT content 
> > > is written in AML. So far the shortest implementation I have seen for 
> > > the AML parser is around 5000 lines (see [1]). It might be possible to 
> > > strip some of the code, although I think this will still probably be 
> > > too big for a single workaround.
> > > 
> > > What I meant by "static table" is a table that looks like a structure 
> > > and can be parsed in a few lines. If we can't find one containing the 
> > > DMA window, then the next best solution is to find a way to identify 
> > > the 
> > > platform.
> > > 
> > > I don't know enough ACPI to know if this solution is possible. A good 
> > > starter would probably be the ACPI spec [2].
> > 
> > Okay, that is worse than I had thought (okay, ACPI is impressively
> > complex for something nominally firmware-level).
> >
> > There are strings in the present Tianocore implementation for Raspberry
> > PI 4B which could be targeted.  Notably included in the output during
> > boot listing the tables, "RPIFDN", "RPIFDN RPI" and "RPIFDN RPI4" (I'm
> > unsure how kosher these are as this wasn't implemented nor blessed by the
> > Raspberry PI Foundation).
> > 
> > I strongly dislike this approach as you soon turn the Xen project into a
> > database of hardware.  This is already occurring with
> > xen/arch/arm/platforms and I would love to do something about this.  On
> > that thought, how about utilizing Xen's command-line for this purpose?
> 
> I don't think that a command line option is a good idea: basically it is
> punting to users the task of platform detection. Also, it means that
> users will be necessarily forced to edit the uboot script or grub
> configuration file to boot.

-EINVAL

On many Linux installations (near-universal on desktop/server, though
perhaps uncommon on ARM servers), Xen's command-line comes from grub.cfg.
grub.cfg is in turn created by a series of scripts with several places
for users to modify configuration without breaking things.

The scripts which create grub.cfg could add a "dma_mem=" option to Xen's
command-line based upon what the running kernel reports.  If the kernel
is running on top of Xen, it will still be able to retrieve this
information out of ACPI.
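
A minimal sketch of how Xen might parse such an option.  "dma_mem=" is
only the option name proposed in this thread, not an existing Xen
parameter, and real handling would also want the usual size suffixes
(K/M/G), which are omitted here for brevity:

```c
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical parser for a "dma_mem=" command-line option: return the
 * DMA-addressable limit in bytes, or 0 if the option is absent (meaning
 * all memory is assumed DMA-capable).
 */
static unsigned long long parse_dma_mem(const char *cmdline)
{
    const char *opt = strstr(cmdline, "dma_mem=");

    if (!opt)
        return 0;

    /* Base 0 accepts decimal or 0x-prefixed hex values. */
    return strtoull(opt + strlen("dma_mem="), NULL, 0);
}
```

The grub-side script would only need to emit the matching string, e.g.
appending `dma_mem=0x40000000` on a platform whose devices can address
just the low 1 GiB.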

This does mean distributions would need to modify scripts, but that is
doable.  This also means a dumb user could potentially jump in, modify
the value and thus cause unrecoverable breakage.  Yet on the flip side
this also allows for the short-term stop-gap of smart users modifying
the option as appropriate for new hardware.

Certainly it may not be the greatest approach, but it isn't as bad as
you're claiming.


> Note that even if we introduced a new command line, we wouldn't take
> away the need for xen/arch/arm/platforms anyway.

Perhaps, but it could allow for this setting at least to be moved to
somewhere outside of Xen.

I'm inclined to agree with Juergen Groß: this reads kind of like having
an extra domain run during Xen's initialization which can talk ACPI.


> > Have a procedure of during installation/updates retrieve DMA limitation
> > information from the running OS and the following boot Xen will follow
> > the appropriate setup.  By its nature, Domain 0 will have the information
> > needed, just becomes an issue of how hard that is to retrieve...
> 
> Historically that is what we used to do for many things related to ACPI,
> but unfortunately it leads to a pretty bad architecture where Xen
> depends on Dom0 for booting rather than the other way around. (Dom0
> should be the one requiring Xen for booting, given that Xen is higher
> privilege and boots first.)
> 
> 
> I think the best compromise is still to use an ACPI string to detect the
> platform. For instance, would it be possible to use the OEMID fields in
> RSDT, XSDT, FADT?  Possibly even a combination of them?
> 
> Another option might be to get the platform name from UEFI somehow. 

I included the appropriate strings in an earlier e-mail, and suitable
strings do appear in `dmesg`.

The problem is this feels like hard-coding a fixed list of platforms
Xen can run on.  Instead, values like these should be provided by
firmware.  ACPI includes a method for encoding DMA limitations;
device-tree really should have one added.  The only challenge for
device-tree is getting everyone to agree on names and parameters.
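
A minimal sketch of the platform-matching approach under discussion,
checking the OEMID fields in the common ACPI table header.  The struct
layout follows the ACPI spec's System Description Table header; the
exact strings ("RPIFDN", "RPI4") and their padding are taken from the
firmware output quoted in this thread and are assumptions:

```c
#include <stdbool.h>
#include <string.h>

/* Common header shared by RSDT/XSDT/FADT etc. (ACPI spec, "System
 * Description Table Header"). */
struct acpi_table_header {
    char          signature[4];
    unsigned int  length;
    unsigned char revision;
    unsigned char checksum;
    char          oem_id[6];        /* e.g. "RPIFDN" */
    char          oem_table_id[8];  /* e.g. "RPI4", space-padded */
    unsigned int  oem_revision;
    char          asl_compiler_id[4];
    unsigned int  asl_compiler_revision;
};

/* Hypothetical platform check: these fields are fixed-width and NOT
 * NUL-terminated, so compare with memcmp, not strcmp. */
static bool is_rpi4_acpi(const struct acpi_table_header *h)
{
    return memcmp(h->oem_id, "RPIFDN", 6) == 0 &&
           memcmp(h->oem_table_id, "RPI4    ", 8) == 0;
}
```

This is exactly the "database of hardware" pattern objected to above;
the sketch only illustrates how small the per-platform check itself is.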



Looking at it, there really are issues with the allocate_memory_11()
function in xen/arch/arm/domain_build.c.  Two tasks have been merged and
I'm unsure they were merged correctly.

I'm unaware of any Xen-capable platforms with such, but DMA can have
distinct restrictions outside of what allocate_memory_11() provides for.
ACPI allows for explicit address ranges and in the past many devices have
used addresses that didn't start at zero.

Additionally a platform might have several devices with restricted DMA
and these could have differing non-overlapping ranges.  Domain 0 might
need memory in several DMA ranges.  Luckily, in the past few years I
haven't read about any potentially Xen-capable devices where DMA-capable
memory had differing capabilities/performance from non-DMA-capable memory.

Meanwhile for a platform which does have DMA limitations, the kernel and
boot information for domain 0 shouldn't be placed in DMA-capable memory
if domain 0 has any memory which isn't DMA-capable.  Yet it appears
allocate_memory_11() will cause kernel/initrd/boot information to be
placed in DMA-capable memory.

If my understanding is correct (this is a BIG IF), as a last step
allocate_memory_11() should reverse the order of memory banks.  Another
trick is DMA-capable banks need to be subject to ballooning AFTER
non-DMA-capable banks (I'm unsure how often ballooning is used on ARM).
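
The last-step reversal suggested above could be as simple as the sketch
below.  The struct is a simplified stand-in for Xen's own bank
bookkeeping; the field names and the idea that reversal alone fixes the
placement are assumptions, not a review of the actual code:

```c
struct membank {
    unsigned long long start;
    unsigned long long size;
};

/*
 * Reverse the bank array in place, so that banks allocated lowest-first
 * (DMA-capable memory first) end up listed last, and the kernel/initrd/
 * boot information gets placed in the non-DMA-capable banks instead.
 */
static void reverse_banks(struct membank *bank, unsigned int nr)
{
    for (unsigned int i = 0; i < nr / 2; i++) {
        struct membank tmp = bank[i];

        bank[i] = bank[nr - 1 - i];
        bank[nr - 1 - i] = tmp;
    }
}
```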


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Thu Oct 29 21:34:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 21:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15054.37676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYFZ6-0005HK-2m; Thu, 29 Oct 2020 21:34:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15054.37676; Thu, 29 Oct 2020 21:34:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYFZ5-0005HD-Vu; Thu, 29 Oct 2020 21:34:35 +0000
Received: by outflank-mailman (input) for mailman id 15054;
 Thu, 29 Oct 2020 21:34:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYFZ5-0005H5-8z
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 21:34:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24934f57-594d-41e4-8cfc-adc5ae92920b;
 Thu, 29 Oct 2020 21:34:34 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 50D5620731;
 Thu, 29 Oct 2020 21:34:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604007273;
	bh=ZRwgpdl6pgaYDU1dnZVdmAY97Zm/vVutiyjWKLVCaxM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=2dLzgniFXG1XFt8xqj6eErMg/hWz++LxEEABpLTMJ3Km4DYr/8tYR1YNajKPq5XD9
	 Y3jfjEXTQvRUhZPTfNhmzbPDzblQ6hBOiSVR1nzz5HcgqLau6YmID5tIq9900H/ReL
	 b09Dz0WT1xdm5zWwwZchDCggZ9E9Kw+W6itBs+KU=
Date: Thu, 29 Oct 2020 14:34:31 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
    xen-devel <xen-devel@lists.xenproject.org>, 
    Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, 
    Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Wei Liu <wl@xen.org>, Julien Grall <julien.grall@arm.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Tim Deegan <tim@xen.org>, 
    Daniel De Graaf <dgdegra@tycho.nsa.gov>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
    Anthony PERARD <anthony.perard@citrix.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
In-Reply-To: <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2010291434220.12247@sstabellini-ThinkPad-T480s>
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com> <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com> <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com> <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
 <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-477224580-1604007273=:12247"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-477224580-1604007273=:12247
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 29 Oct 2020, Oleksandr Tyshchenko wrote:
> Hi Stefano
> 
> [sorry for the possible format issue]
> 
> On Thu, Oct 29, 2020 at 9:53 PM Stefano Stabellini <sstabellini@kernel.org> wrote:
>       On Thu, 29 Oct 2020, Oleksandr Tyshchenko wrote:
>       > On Thu, Oct 29, 2020 at 9:42 AM Masami Hiramatsu <masami.hiramatsu@linaro.org> wrote:
>       >       Hi Oleksandr,
>       >
>       > Hi Masami
>       >
>       > [sorry for the possible format issue]
>       >  
>       >
>       >       I would like to try this on my arm64 board.
>       >
>       > Glad to hear you are interested in this topic. 
>       >  
>       >
>       >       According to your comments in the patch, I made this config file.
>       >       # cat debian.conf
>       >       name = "debian"
>       >       type = "pvh"
>       >       vcpus = 8
>       >       memory = 512
>       >       kernel = "/opt/agl/vmlinuz-5.9.0-1-arm64"
>       >       ramdisk = "/opt/agl/initrd.img-5.9.0-1-arm64"
>       >       cmdline= "console=hvc0 earlyprintk=xen root=/dev/xvda1 rw"
>       >       disk = [ '/opt/agl/debian.qcow2,qcow2,hda' ]
>       >       vif = [ 'mac=00:16:3E:74:3d:76,bridge=xenbr0' ]
>       >       virtio = 1
>       >       vdisk = [ 'backend=Dom0, disks=ro:/dev/sda1' ]
>       >
>       >       And tried to boot a DomU, but I got below error.
>       >
>       >       # xl create -c debian.conf
>       >       Parsing config from debian.conf
>       >       libxl: error: libxl_create.c:1863:domcreate_attach_devices: Domain
>       >       1:unable to add virtio_disk devices
>       >       libxl: error: libxl_domain.c:1218:destroy_domid_pci_done: Domain
>       >       1:xc_domain_pause failed
>       >       libxl: error: libxl_dom.c:39:libxl__domain_type: unable to get domain
>       >       type for domid=1
>       >       libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
>       >       1:Unable to destroy guest
>       >       libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
>       >       1:Destruction of domain failed
>       >
>       >
>       >       Could you tell me how can I test it?
>       >
>       >
>       > I assume it is due to the lack of the virtio-disk backend (which I haven't shared yet, as I focused on the IOREQ/DM
>       > support on Arm in the first place).
>       > Could you wait a little bit? I am going to share it soon.
> 
>       Do you have a quick-and-dirty hack you can share in the meantime? Even
>       just on github as a special branch? It would be very useful to have a
>       test driver for the new feature.
> 
> Well, I will provide a branch on github with our PoC virtio-disk backend by the end of this week. It will be possible to test this series
> with it. 

Very good, thank you!
--8323329-477224580-1604007273=:12247--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 21:35:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 21:35:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15064.37707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYFZn-0005UO-M6; Thu, 29 Oct 2020 21:35:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15064.37707; Thu, 29 Oct 2020 21:35:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYFZn-0005UH-Ip; Thu, 29 Oct 2020 21:35:19 +0000
Received: by outflank-mailman (input) for mailman id 15064;
 Thu, 29 Oct 2020 21:35:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFbr=EE=gmail.com=niveditas98@srs-us1.protection.inumbo.net>)
 id 1kYFZl-0005Sn-US
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 21:35:18 +0000
Received: from mail-io1-xd2a.google.com (unknown [2607:f8b0:4864:20::d2a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae8c061d-e4c8-4757-95d7-5ce69e755c05;
 Thu, 29 Oct 2020 21:35:16 +0000 (UTC)
Received: by mail-io1-xd2a.google.com with SMTP id y20so5310965iod.5
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 14:35:16 -0700 (PDT)
Received: from rani.riverdale.lan ([2001:470:1f07:5f3::b55f])
 by smtp.gmail.com with ESMTPSA id r3sm3065346iog.55.2020.10.29.14.35.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 29 Oct 2020 14:35:15 -0700 (PDT)
X-Inumbo-ID: ae8c061d-e4c8-4757-95d7-5ce69e755c05
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=sender:from:date:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=o/fZRhurDRoh3x2KXwZGFT2Yi8JsbQLcBDCy2LAivAY=;
        b=gGuTqV/6UjMeGtDxjXYhwQPHTl/X7tzC8nyDkopIigNmkrO3mCPbuH/1MhlZEXs368
         ilsmPmjF8/8PcamJS4tzXcIssUUJKuowmvAaZpvnATyfAuLpyzByye2ob5V27GVE8qIT
         KZP17N/xFAWoIKkVe0RuMkFayeP8BWuJYRQfihoxN9IcITuyVnsNMyFFHot0k3ChkDxn
         nFZSLwwceJfQxsoqw1O2SZk7WDZNX5WeTFIk5dkyBtO3OtvmEbw8mG/eVrbi8eoLhfOn
         0GZOx0BOnNgBQieydv87h3zVPKI0C/CSIt6VEOi3C7rjEkzw5bkznPJoY5+aGh/Uu3xA
         /g8g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:sender:from:date:to:cc:subject:message-id
         :references:mime-version:content-disposition:in-reply-to;
        bh=o/fZRhurDRoh3x2KXwZGFT2Yi8JsbQLcBDCy2LAivAY=;
        b=tbSeA/sjjQRG8FMXsZnNz6xN23wihzTrxhUHz2gtIbzV3vG5cdAuglQoDGbD5h9T1x
         EiDvZcYVq3GCT24NOWXh2nhkrB0/sfVdURs1YTXpiZbIDtweCfbkibrmjGagNYnuQYYo
         DMtAfo+RI+xyN2nb+FqkGjsrB0UIrn0uLnyTycgqIhi+VkFUDBLdUeFQvEZ7nGWvd9W7
         HxY+FIjKtB1cEkoY4zKHalsdtW6Il7tPR+tbq0fFL6OEMyeyjiyJBi8Ii/GZA8d7PkF7
         64JqKUzKsiS/5F1MyhWPxc8X9jBBM5aNsFlJbA+FSh9TYeDfF4LjeBv1geFmKXJPPsP9
         n50g==
X-Gm-Message-State: AOAM531UCF/XMYJYG8fLvC1x7y7avh8qRxhwRbEYD0EBSEMtpQj8Oeea
	pOzXzQeLd6JaIjSJIrb7WmE=
X-Google-Smtp-Source: ABdhPJyTkgnuu9ypsiIO3xCkmwdOh4o3IIPvbsxF62gfZSdk4g2zCe1B1vFwKdXsPs4FzLAVBSR7HA==
X-Received: by 2002:a05:6602:22cf:: with SMTP id e15mr5087695ioe.1.1604007316212;
        Thu, 29 Oct 2020 14:35:16 -0700 (PDT)
Sender: Arvind Sankar <niveditas98@gmail.com>
From: Arvind Sankar <nivedita@alum.mit.edu>
X-Google-Original-From: Arvind Sankar <arvind@rani.riverdale.lan>
Date: Thu, 29 Oct 2020 17:35:12 -0400
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	Arvind Sankar <nivedita@alum.mit.edu>,
	David Laight <David.Laight@ACULAB.COM>,
	'Arnd Bergmann' <arnd@kernel.org>, Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>, "x86@kernel.org" <x86@kernel.org>,
	Arnd Bergmann <arnd@arndb.de>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	"H. Peter Anvin" <hpa@zytor.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Vitaly Kuznetsov <vkuznets@redhat.com>,
	Wanpeng Li <wanpengli@tencent.com>,
	Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"platform-driver-x86@vger.kernel.org" <platform-driver-x86@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Message-ID: <20201029213512.GA34524@rani.riverdale.lan>
References: <20201028212417.3715575-1-arnd@kernel.org>
 <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com>
 <20201029165611.GA2557691@rani.riverdale.lan>
 <93180c2d-268c-3c33-7c54-4221dfe0d7ad@redhat.com>
 <87v9esojdi.fsf@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <87v9esojdi.fsf@nanos.tec.linutronix.de>

On Thu, Oct 29, 2020 at 09:41:13PM +0100, Thomas Gleixner wrote:
> On Thu, Oct 29 2020 at 17:59, Paolo Bonzini wrote:
> > On 29/10/20 17:56, Arvind Sankar wrote:
> >>> For those two just add:
> >>> 	struct apic *apic = x86_system_apic;
> >>> before all the assignments.
> >>> Less churn and much better code.
> >>>
> >> Why would it be better code?
> >> 
> >
> > I think he means the compiler produces better code, because it won't
> > read the global variable repeatedly.  Not sure if that's true,(*) but I
> > think I do prefer that version if Arnd wants to do that tweak.
> 
> It's not true.
> 
>      foo *p = bar;
> 
>      p->a = 1;
>      p->b = 2;
> 
> The compiler is free to reload bar after accessing p->a and with
> 
>     bar->a = 1;
>     bar->b = 2;
> 
> it can either cache bar in a register or reread it after bar->a
> 
> The generated code is the same as long as there is no reason to reload,
> e.g. register pressure.
> 
> Thanks,
> 
>         tglx

It's not quite the same.

https://godbolt.org/z/4dzPbM

With -fno-strict-aliasing, the compiler reloads the pointer if you write
to the start of what it points to, but not if you write to later
elements.


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 22:03:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 22:03:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15177.38142 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYG0w-0002O3-MS; Thu, 29 Oct 2020 22:03:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15177.38142; Thu, 29 Oct 2020 22:03:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYG0w-0002Nw-JF; Thu, 29 Oct 2020 22:03:22 +0000
Received: by outflank-mailman (input) for mailman id 15177;
 Thu, 29 Oct 2020 22:03:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MP2Z=EE=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kYG0v-0002Nq-Fj
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 22:03:21 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [216.205.24.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0b923d13-7339-4191-9bbf-e356f1660d3f;
 Thu, 29 Oct 2020 22:03:19 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-161-e21b2LpUM7-lrnfJUBFn8A-1; Thu, 29 Oct 2020 18:03:15 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 75CEE81CAF8;
 Thu, 29 Oct 2020 22:03:13 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id EB3B455780;
 Thu, 29 Oct 2020 22:03:06 +0000 (UTC)
X-Inumbo-ID: 0b923d13-7339-4191-9bbf-e356f1660d3f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604008998;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=SnHfOwCs2tzlEiGVuOK1p3qk46w81VRMdurae76Skl8=;
	b=FT/VgYAUG3Rff4KfY5gl1d9vujWs86/VmnF2gdGVSSQhes8paUtAHZJV7EhiPyjqmxbtad
	ojjPRi7M40BvNLGHxX5DfxXF9MVjKvVt+GM1M834JpxwoAKYMNmP+/A47eeq9Mf3TdypLy
	bOELrsSgjxtIKXQFQFC82JpL9THM140=
X-MC-Unique: e21b2LpUM7-lrnfJUBFn8A-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: John Snow <jsnow@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Thomas Huth <thuth@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH 09/36] qdev: Make qdev_get_prop_ptr() get Object* arg
Date: Thu, 29 Oct 2020 18:02:19 -0400
Message-Id: <20201029220246.472693-10-ehabkost@redhat.com>
In-Reply-To: <20201029220246.472693-1-ehabkost@redhat.com>
References: <20201029220246.472693-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make the code more generic and not specific to TYPE_DEVICE.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/hw/qdev-properties.h     |  2 +-
 backends/tpm/tpm_util.c          |  8 ++--
 hw/block/xen-block.c             |  6 +--
 hw/core/qdev-properties-system.c | 57 +++++++++-------------
 hw/core/qdev-properties.c        | 82 +++++++++++++-------------------
 hw/s390x/css.c                   |  5 +-
 hw/s390x/s390-pci-bus.c          |  4 +-
 hw/vfio/pci-quirks.c             |  5 +-
 8 files changed, 68 insertions(+), 101 deletions(-)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 0ea822e6a7..0b92cfc761 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -302,7 +302,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
                            const uint8_t *value);
 void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
+void *qdev_get_prop_ptr(Object *obj, Property *prop);
 
 void qdev_prop_register_global(GlobalProperty *prop);
 const GlobalProperty *qdev_find_global_prop(DeviceState *dev,
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index b58d298c1a..e91c21dd4a 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,8 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
-    TPMBackend **be = qdev_get_prop_ptr(dev, opaque);
+    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -49,7 +48,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    TPMBackend *s, **be = qdev_get_prop_ptr(dev, prop);
+    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -73,9 +72,8 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 
 static void release_tpm(Object *obj, const char *name, void *opaque)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    TPMBackend **be = qdev_get_prop_ptr(dev, prop);
+    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 8a7a3f5452..1ba9981c08 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -335,9 +335,8 @@ static char *disk_to_vbd_name(unsigned int disk)
 static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
+    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     switch (vdev->type) {
@@ -396,9 +395,8 @@ static int vbd_name_to_disk(const char *name, const char **endp,
 static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
+    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index d0fb063a49..c8c73c371b 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -59,9 +59,8 @@ static bool check_prop_still_unset(DeviceState *dev, const char *name,
 static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
+    void **ptr = qdev_get_prop_ptr(obj, prop);
     const char *value;
     char *p;
 
@@ -87,7 +86,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(dev, prop);
+    void **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -185,7 +184,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    BlockBackend **ptr = qdev_get_prop_ptr(dev, prop);
+    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -218,8 +217,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
-    CharBackend *be = qdev_get_prop_ptr(dev, opaque);
+    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -232,7 +230,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(dev, prop);
+    CharBackend *be = qdev_get_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
@@ -272,9 +270,8 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 
 static void release_chr(Object *obj, const char *name, void *opaque)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(dev, prop);
+    CharBackend *be = qdev_get_prop_ptr(obj, prop);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -297,9 +294,8 @@ const PropertyInfo qdev_prop_chr = {
 static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
+    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -315,7 +311,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
+    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
@@ -381,9 +377,8 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
 static void get_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -395,7 +390,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
+    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -461,9 +456,8 @@ const PropertyInfo qdev_prop_netdev = {
 static void get_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
+    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -475,7 +469,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
+    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
@@ -582,7 +576,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -674,9 +668,8 @@ const PropertyInfo qdev_prop_multifd_compression = {
 static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
+    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -693,7 +686,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
+    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -761,7 +754,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
@@ -804,8 +797,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    DeviceState *dev = DEVICE(obj);
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -828,9 +820,8 @@ const PropertyInfo qdev_prop_pci_devfn = {
 static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
+    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -857,7 +848,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
+    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
     const char *e;
     unsigned long val;
@@ -950,9 +941,8 @@ const PropertyInfo qdev_prop_off_auto_pcibar = {
 static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
     switch (*p) {
@@ -981,7 +971,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
     if (dev->realized) {
@@ -1027,9 +1017,8 @@ const PropertyInfo qdev_prop_pcie_link_speed = {
 static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
     switch (*p) {
@@ -1067,7 +1056,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
+    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
     if (dev->realized) {
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 3a4638f4de..0a54a922c8 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -38,9 +38,9 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
     }
 }
 
-void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
+void *qdev_get_prop_ptr(Object *obj, Property *prop)
 {
-    void *ptr = dev;
+    void *ptr = obj;
     ptr += prop->offset;
     return ptr;
 }
@@ -48,9 +48,8 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
 void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(dev, prop);
+    int *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
 }
@@ -60,7 +59,7 @@ void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(dev, prop);
+    int *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -94,8 +93,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    DeviceState *dev = DEVICE(obj);
-    uint32_t *p = qdev_get_prop_ptr(dev, props);
+    uint32_t *p = qdev_get_prop_ptr(obj, props);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -107,9 +105,8 @@ static void bit_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *p = qdev_get_prop_ptr(dev, prop);
+    uint32_t *p = qdev_get_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -156,8 +153,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    DeviceState *dev = DEVICE(obj);
-    uint64_t *p = qdev_get_prop_ptr(dev, props);
+    uint64_t *p = qdev_get_prop_ptr(obj, props);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -169,9 +165,8 @@ static void bit64_prop_set(Object *obj, Property *props, bool val)
 static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *p = qdev_get_prop_ptr(dev, prop);
+    uint64_t *p = qdev_get_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -208,9 +203,8 @@ const PropertyInfo qdev_prop_bit64 = {
 static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(dev, prop);
+    bool *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -220,7 +214,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(dev, prop);
+    bool *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -242,9 +236,8 @@ const PropertyInfo qdev_prop_bool = {
 static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -254,7 +247,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -288,9 +281,8 @@ const PropertyInfo qdev_prop_uint8 = {
 void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -300,7 +292,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -322,9 +314,8 @@ const PropertyInfo qdev_prop_uint16 = {
 static void get_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -334,7 +325,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -347,9 +338,8 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
 void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
                              void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -359,7 +349,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -388,9 +378,8 @@ const PropertyInfo qdev_prop_int32 = {
 static void get_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -400,7 +389,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -413,9 +402,8 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
 static void get_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -425,7 +413,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -454,15 +442,14 @@ const PropertyInfo qdev_prop_int64 = {
 static void release_string(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    g_free(*(char **)qdev_get_prop_ptr(DEVICE(obj), prop));
+    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(dev, prop);
+    char **ptr = qdev_get_prop_ptr(obj, prop);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -477,7 +464,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(dev, prop);
+    char **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -515,9 +502,8 @@ const PropertyInfo qdev_prop_on_off_auto = {
 void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -528,7 +514,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
 
     if (dev->realized) {
@@ -563,9 +549,8 @@ const PropertyInfo qdev_prop_size32 = {
 static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
+    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -581,7 +566,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
+    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char *str;
 
     if (dev->realized) {
@@ -653,7 +638,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
      */
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *alenptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
     void **arrayptr = (void *)dev + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -699,7 +684,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
          * being inside the device struct.
          */
         arrayprop->prop.offset = eltptr - (void *)dev;
-        assert(qdev_get_prop_ptr(dev, &arrayprop->prop) == eltptr);
+        assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
         object_property_add(obj, propname,
                             arrayprop->prop.info->name,
                             arrayprop->prop.info->get,
@@ -893,9 +878,8 @@ void qdev_prop_set_globals(DeviceState *dev)
 static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -905,7 +889,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 9961cfe7bf..2b8f33fec2 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2343,9 +2343,8 @@ void css_reset(void)
 static void get_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
+    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2375,7 +2374,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
+    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index fb4cee87a4..b59cf0651a 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1248,7 +1248,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(DEVICE(obj), prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1259,7 +1259,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
     DeviceState *dev = DEVICE(obj);
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 57150913b7..53569925a2 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1488,9 +1488,8 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name, void *opaque,
                                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1501,7 +1500,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint8_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 22:03:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 22:03:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15178.38154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYG19-0002RG-4A; Thu, 29 Oct 2020 22:03:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15178.38154; Thu, 29 Oct 2020 22:03:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYG19-0002R9-0m; Thu, 29 Oct 2020 22:03:35 +0000
Received: by outflank-mailman (input) for mailman id 15178;
 Thu, 29 Oct 2020 22:03:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MP2Z=EE=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kYG17-0002Qb-Fe
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 22:03:33 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 09ba3a73-1cde-4de8-8799-bb8872e675da;
 Thu, 29 Oct 2020 22:03:31 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-196-mT6zWae2O6enDNs-xp8QhA-1; Thu, 29 Oct 2020 18:03:29 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 396F71018F78;
 Thu, 29 Oct 2020 22:03:27 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 47C225B4B5;
 Thu, 29 Oct 2020 22:03:23 +0000 (UTC)
X-Inumbo-ID: 09ba3a73-1cde-4de8-8799-bb8872e675da
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604009011;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BjFOMSTX6fdH1tpm8duv0qhQRUHgbBNeh3ANOAmFKgQ=;
	b=Jv7UKt4Ni8likMoLwpxkAS9RITv2L0KEZGcOqBYVMYAZDBwsLw4b7aBHgjX5vl/+Hn5vKB
	+mAntb3/H/yyYKpDBKOxhFfi7Y9k9jSS2grzwKrIvkmBzgLxMEccxGylTiAUfzpWdOdhD+
	ET1FiAIzYxCjHcoNgnef5fq3WYEJecs=
X-MC-Unique: mT6zWae2O6enDNs-xp8QhA-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: John Snow <jsnow@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	=?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Thomas Huth <thuth@redhat.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
	Artyom Tarasenko <atar4qemu@gmail.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH 14/36] qdev: Move dev->realized check to qdev_property_set()
Date: Thu, 29 Oct 2020 18:02:24 -0400
Message-Id: <20201029220246.472693-15-ehabkost@redhat.com>
In-Reply-To: <20201029220246.472693-1-ehabkost@redhat.com>
References: <20201029220246.472693-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Every single qdev property setter function manually checks
dev->realized.  We can just check dev->realized inside
qdev_property_set() instead.

The check is being added as a separate function
(qdev_prop_allow_set()) because it will become a callback later.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Cc: Artyom Tarasenko <atar4qemu@gmail.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 backends/tpm/tpm_util.c          |   6 --
 hw/block/xen-block.c             |   5 --
 hw/core/qdev-properties-system.c |  64 -------------------
 hw/core/qdev-properties.c        | 106 ++++++-------------------------
 hw/s390x/css.c                   |   6 --
 hw/s390x/s390-pci-bus.c          |   6 --
 hw/vfio/pci-quirks.c             |   6 --
 target/sparc/cpu.c               |   6 --
 8 files changed, 18 insertions(+), 187 deletions(-)

diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index e91c21dd4a..042cacfcca 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -46,16 +46,10 @@ static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 1ba9981c08..bd1aef63a7 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -400,11 +400,6 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
     char *str, *p;
     const char *end;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index fca1b694ca..60a45f5620 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -92,11 +92,6 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
     bool blk_created = false;
     int ret;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -228,17 +223,11 @@ static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     CharBackend *be = qdev_get_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -309,18 +298,12 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     MACAddr *mac = qdev_get_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -388,7 +371,6 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
 static void set_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
@@ -396,11 +378,6 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
     int queues, err = 0, i = 0;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -467,18 +444,12 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
 static void set_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -580,11 +551,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     uint64_t value;
     Error *local_err = NULL;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_size(v, name, &value, errp)) {
         return;
     }
@@ -684,7 +650,6 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
 static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
     Error *local_err = NULL;
@@ -692,11 +657,6 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
     char *str;
     int ret;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_str(v, name, &str, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
@@ -752,17 +712,11 @@ const PropertyInfo qdev_prop_reserved_region = {
 static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, NULL)) {
         if (!visit_type_int32(v, name, &value, errp)) {
             return;
@@ -846,7 +800,6 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
 static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
     char *str, *p;
@@ -855,11 +808,6 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
     unsigned long dom = 0, bus = 0;
     unsigned int slot = 0, func = 0;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -969,16 +917,10 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
     int speed;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_enum(v, prop->name, &speed, prop->info->enum_table,
                          errp)) {
         return;
@@ -1054,16 +996,10 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
 static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
     int width;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_enum(v, prop->name, &width, prop->info->enum_table,
                          errp)) {
         return;
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index aab9e65e97..195bfed6e1 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -25,6 +25,19 @@ void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
     }
 }
 
+/* returns: true if property is allowed to be set, false otherwise */
+static bool qdev_prop_allow_set(Object *obj, const char *name,
+                                Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+
+    if (dev->realized) {
+        qdev_prop_set_after_realize(dev, name, errp);
+        return false;
+    }
+    return true;
+}
+
 void qdev_prop_allow_set_link_before_realize(const Object *obj,
                                              const char *name,
                                              Object *val, Error **errp)
@@ -66,6 +79,11 @@ static void static_prop_set(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
     Property *prop = opaque;
+
+    if (!qdev_prop_allow_set(obj, name, errp)) {
+        return;
+    }
+
     return prop->info->set(obj, v, name, opaque, errp);
 }
 
@@ -91,15 +109,9 @@ void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
 void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
 }
 
@@ -149,15 +161,9 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
 static void prop_set_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_bool(v, name, &value, errp)) {
         return;
     }
@@ -209,15 +215,9 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
 static void prop_set_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_bool(v, name, &value, errp)) {
         return;
     }
@@ -246,15 +246,9 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     bool *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_bool(v, name, ptr, errp);
 }
 
@@ -279,15 +273,9 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint8(v, name, ptr, errp);
 }
 
@@ -324,15 +312,9 @@ void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
 static void set_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint16(v, name, ptr, errp);
 }
 
@@ -357,15 +339,9 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
 static void set_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint32(v, name, ptr, errp);
 }
 
@@ -381,15 +357,9 @@ void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
 static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_int32(v, name, ptr, errp);
 }
 
@@ -421,15 +391,9 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
 static void set_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_uint64(v, name, ptr, errp);
 }
 
@@ -445,15 +409,9 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
 static void set_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     int64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_int64(v, name, ptr, errp);
 }
 
@@ -496,16 +454,10 @@ static void get_string(Object *obj, Visitor *v, const char *name,
 static void set_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     char **ptr = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -546,16 +498,10 @@ void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
 static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
     uint64_t value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_size(v, name, &value, errp)) {
         return;
     }
@@ -598,16 +544,10 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
     char *str;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
@@ -678,10 +618,6 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
     const char *arrayname;
     int i;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
     if (*alenptr) {
         error_setg(errp, "array size property %s may not be set more than once",
                    name);
@@ -921,15 +857,9 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
 static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     visit_type_size(v, name, ptr, errp);
 }
 
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 38fd46b9a9..46cab94e2b 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2372,18 +2372,12 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
 static void set_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_str(v, name, &str, errp)) {
         return;
     }
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index b59cf0651a..d02e93a192 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1256,16 +1256,10 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
 static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
     uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
     }
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 53569925a2..802979635c 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1498,15 +1498,9 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        const char *name, void *opaque,
                                        Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
     uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
     }
diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
index 8ecb20e55f..cf21efd85f 100644
--- a/target/sparc/cpu.c
+++ b/target/sparc/cpu.c
@@ -798,17 +798,11 @@ static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
 static void sparc_set_nwindows(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
-    DeviceState *dev = DEVICE(obj);
     const int64_t min = MIN_NWINDOWS;
     const int64_t max = MAX_NWINDOWS;
     SPARCCPU *cpu = SPARC_CPU(obj);
     int64_t value;
 
-    if (dev->realized) {
-        qdev_prop_set_after_realize(dev, name, errp);
-        return;
-    }
-
     if (!visit_type_int(v, name, &value, errp)) {
         return;
     }
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 22:03:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 22:03:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15183.38166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYG1U-0002Yp-Ej; Thu, 29 Oct 2020 22:03:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15183.38166; Thu, 29 Oct 2020 22:03:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYG1U-0002Yi-Ay; Thu, 29 Oct 2020 22:03:56 +0000
Received: by outflank-mailman (input) for mailman id 15183;
 Thu, 29 Oct 2020 22:03:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MP2Z=EE=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kYG1S-0002YG-4Z
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 22:03:54 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 328c929a-1878-46ff-acfd-832c80e4a959;
 Thu, 29 Oct 2020 22:03:52 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-404-V-z-WGJaOwy6R88EMbiifg-1; Thu, 29 Oct 2020 18:03:50 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4722C1018F7A;
 Thu, 29 Oct 2020 22:03:48 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 8958D5B4A4;
 Thu, 29 Oct 2020 22:03:41 +0000 (UTC)
X-Inumbo-ID: 328c929a-1878-46ff-acfd-832c80e4a959
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604009031;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ghGtQGb+4l5vTeOq5EcJj/Yo0EHPLxl5VGbxEaF22Yo=;
	b=IXXiR5KXr56MuwZJW/5bMPZNiUiwXIhgMQStpzWI6eRVjvfAbA11pIjAfAuK73pWuZMvpS
	6DM+RfrN2veLeLwdfyZdtf43ulMZPnFCeZJpXsCy3QiU6hVaU9TgpnTJuaBojy0SK3yS1V
	hTrw+beUuHW+5kdHAndkCB+t5kgh57Y=
X-MC-Unique: V-z-WGJaOwy6R88EMbiifg-1
From: Eduardo Habkost <ehabkost@redhat.com>
To: qemu-devel@nongnu.org
Cc: John Snow <jsnow@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Eric Blake <eblake@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Philippe Mathieu-Daudé <philmd@redhat.com>,
	Igor Mammedov <imammedo@redhat.com>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony Perard <anthony.perard@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Kevin Wolf <kwolf@redhat.com>,
	Max Reitz <mreitz@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Richard Henderson <rth@twiddle.net>,
	David Hildenbrand <david@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	xen-devel@lists.xenproject.org,
	qemu-block@nongnu.org,
	qemu-s390x@nongnu.org
Subject: [PATCH 25/36] qdev: Rename qdev_get_prop_ptr() to object_static_prop_ptr()
Date: Thu, 29 Oct 2020 18:02:35 -0400
Message-Id: <20201029220246.472693-26-ehabkost@redhat.com>
In-Reply-To: <20201029220246.472693-1-ehabkost@redhat.com>
References: <20201029220246.472693-1-ehabkost@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The function will be moved to common QOM code, as it is not
specific to TYPE_DEVICE anymore.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: "Daniel P. Berrangé" <berrange@redhat.com>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Halil Pasic <pasic@linux.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: David Hildenbrand <david@redhat.com>
Cc: Thomas Huth <thuth@redhat.com>
Cc: Matthew Rosato <mjrosato@linux.ibm.com>
Cc: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-devel@nongnu.org
Cc: xen-devel@lists.xenproject.org
Cc: qemu-block@nongnu.org
Cc: qemu-s390x@nongnu.org
---
 include/hw/qdev-properties.h     |  2 +-
 backends/tpm/tpm_util.c          |  6 +--
 hw/block/xen-block.c             |  4 +-
 hw/core/qdev-properties-system.c | 46 +++++++++++------------
 hw/core/qdev-properties.c        | 64 ++++++++++++++++----------------
 hw/s390x/css.c                   |  4 +-
 hw/s390x/s390-pci-bus.c          |  4 +-
 hw/vfio/pci-quirks.c             |  4 +-
 8 files changed, 67 insertions(+), 67 deletions(-)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index 0acc92ae2b..4146dac281 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -332,7 +332,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
                            const uint8_t *value);
 void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
 
-void *qdev_get_prop_ptr(Object *obj, Property *prop);
+void *object_static_prop_ptr(Object *obj, Property *prop);
 
 void qdev_prop_register_global(GlobalProperty *prop);
 const GlobalProperty *qdev_find_global_prop(Object *obj,
diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
index 042cacfcca..2b5f788861 100644
--- a/backends/tpm/tpm_util.c
+++ b/backends/tpm/tpm_util.c
@@ -35,7 +35,7 @@
 static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
+    TPMBackend **be = object_static_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(*be ? (*be)->id : "");
@@ -47,7 +47,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
+    TPMBackend *s, **be = object_static_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -67,7 +67,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
 static void release_tpm(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
+    TPMBackend **be = object_static_prop_ptr(obj, prop);
 
     if (*be) {
         tpm_backend_reset(*be);
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index bd1aef63a7..20985c465a 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -336,7 +336,7 @@ static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = object_static_prop_ptr(obj, prop);
     char *str;
 
     switch (vdev->type) {
@@ -396,7 +396,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
                                void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
+    XenBlockVdev *vdev = object_static_prop_ptr(obj, prop);
     char *str, *p;
     const char *end;
 
diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
index d9355053d2..448d77ecab 100644
--- a/hw/core/qdev-properties-system.c
+++ b/hw/core/qdev-properties-system.c
@@ -60,7 +60,7 @@ static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(obj, prop);
+    void **ptr = object_static_prop_ptr(obj, prop);
     const char *value;
     char *p;
 
@@ -86,7 +86,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    void **ptr = qdev_get_prop_ptr(obj, prop);
+    void **ptr = object_static_prop_ptr(obj, prop);
     char *str;
     BlockBackend *blk;
     bool blk_created = false;
@@ -179,7 +179,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
+    BlockBackend **ptr = object_static_prop_ptr(obj, prop);
 
     if (*ptr) {
         AioContext *ctx = blk_get_aio_context(*ptr);
@@ -212,7 +212,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
 static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
-    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
+    CharBackend *be = object_static_prop_ptr(obj, opaque);
     char *p;
 
     p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
@@ -224,7 +224,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(obj, prop);
+    CharBackend *be = object_static_prop_ptr(obj, prop);
     Chardev *s;
     char *str;
 
@@ -260,7 +260,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
 static void release_chr(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    CharBackend *be = qdev_get_prop_ptr(obj, prop);
+    CharBackend *be = object_static_prop_ptr(obj, prop);
 
     qemu_chr_fe_deinit(be, false);
 }
@@ -284,7 +284,7 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
+    MACAddr *mac = object_static_prop_ptr(obj, prop);
     char buffer[2 * 6 + 5 + 1];
     char *p = buffer;
 
@@ -299,7 +299,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
+    MACAddr *mac = object_static_prop_ptr(obj, prop);
     int i, pos;
     char *str;
     const char *p;
@@ -361,7 +361,7 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = object_static_prop_ptr(obj, prop);
     char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
 
     visit_type_str(v, name, &p, errp);
@@ -372,7 +372,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
+    NICPeers *peers_ptr = object_static_prop_ptr(obj, prop);
     NetClientState **ncs = peers_ptr->ncs;
     NetClientState *peers[MAX_QUEUE_NUM];
     int queues, err = 0, i = 0;
@@ -434,7 +434,7 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
+    QEMUSoundCard *card = object_static_prop_ptr(obj, prop);
     char *p = g_strdup(audio_get_id(card));
 
     visit_type_str(v, name, &p, errp);
@@ -445,7 +445,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
+    QEMUSoundCard *card = object_static_prop_ptr(obj, prop);
     AudioState *state;
     int err = 0;
     char *str;
@@ -547,7 +547,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_static_prop_ptr(obj, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -635,7 +635,7 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
+    ReservedRegion *rr = object_static_prop_ptr(obj, prop);
     char buffer[64];
     char *p = buffer;
     int rc;
@@ -651,7 +651,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
                                 void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
+    ReservedRegion *rr = object_static_prop_ptr(obj, prop);
     Error *local_err = NULL;
     const char *endptr;
     char *str;
@@ -713,7 +713,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t value, *ptr = object_static_prop_ptr(obj, prop);
     unsigned int slot, fn, n;
     char *str;
 
@@ -751,7 +751,7 @@ invalid:
 static int print_pci_devfn(Object *obj, Property *prop, char *dest,
                            size_t len)
 {
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_static_prop_ptr(obj, prop);
 
     if (*ptr == -1) {
         return snprintf(dest, len, "<unset>");
@@ -775,7 +775,7 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = object_static_prop_ptr(obj, prop);
     char buffer[] = "ffff:ff:ff.f";
     char *p = buffer;
     int rc = 0;
@@ -801,7 +801,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
                                  void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
+    PCIHostDeviceAddress *addr = object_static_prop_ptr(obj, prop);
     char *str, *p;
     const char *e;
     unsigned long val;
@@ -890,7 +890,7 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = object_static_prop_ptr(obj, prop);
     int speed;
 
     switch (*p) {
@@ -918,7 +918,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkSpeed *p = object_static_prop_ptr(obj, prop);
     int speed;
 
     if (!visit_type_enum(v, prop->name, &speed, prop->info->enum_table,
@@ -960,7 +960,7 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = object_static_prop_ptr(obj, prop);
     int width;
 
     switch (*p) {
@@ -997,7 +997,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
                                    void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
+    PCIExpLinkWidth *p = object_static_prop_ptr(obj, prop);
     int width;
 
     if (!visit_type_enum(v, prop->name, &width, prop->info->enum_table,
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index e4aba2b237..0b53e5ba63 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -51,7 +51,7 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
     }
 }
 
-void *qdev_get_prop_ptr(Object *obj, Property *prop)
+void *object_static_prop_ptr(Object *obj, Property *prop)
 {
     void *ptr = obj;
     ptr += prop->offset;
@@ -97,7 +97,7 @@ void object_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(obj, prop);
+    int *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
 }
@@ -106,7 +106,7 @@ void object_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
                             void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int *ptr = qdev_get_prop_ptr(obj, prop);
+    int *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
 }
@@ -135,7 +135,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
 
 static void bit_prop_set(Object *obj, Property *props, bool val)
 {
-    uint32_t *p = qdev_get_prop_ptr(obj, props);
+    uint32_t *p = object_static_prop_ptr(obj, props);
     uint32_t mask = qdev_get_prop_mask(props);
     if (val) {
         *p |= mask;
@@ -148,7 +148,7 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *p = qdev_get_prop_ptr(obj, prop);
+    uint32_t *p = object_static_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -189,7 +189,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
 
 static void bit64_prop_set(Object *obj, Property *props, bool val)
 {
-    uint64_t *p = qdev_get_prop_ptr(obj, props);
+    uint64_t *p = object_static_prop_ptr(obj, props);
     uint64_t mask = qdev_get_prop_mask64(props);
     if (val) {
         *p |= mask;
@@ -202,7 +202,7 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
                            void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *p = qdev_get_prop_ptr(obj, prop);
+    uint64_t *p = object_static_prop_ptr(obj, prop);
     bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
 
     visit_type_bool(v, name, &value, errp);
@@ -234,7 +234,7 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(obj, prop);
+    bool *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -243,7 +243,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    bool *ptr = qdev_get_prop_ptr(obj, prop);
+    bool *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_bool(v, name, ptr, errp);
 }
@@ -261,7 +261,7 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -270,7 +270,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -300,7 +300,7 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint16_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -309,7 +309,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint16_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint16(v, name, ptr, errp);
 }
@@ -327,7 +327,7 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -336,7 +336,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -345,7 +345,7 @@ void object_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
                              void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -354,7 +354,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
                       Error **errp)
 {
     Property *prop = opaque;
-    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int32_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_int32(v, name, ptr, errp);
 }
@@ -379,7 +379,7 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -388,7 +388,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint64(v, name, ptr, errp);
 }
@@ -397,7 +397,7 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int64_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -406,7 +406,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
                       void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    int64_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_int64(v, name, ptr, errp);
 }
@@ -430,14 +430,14 @@ const PropertyInfo qdev_prop_int64 = {
 static void release_string(Object *obj, const char *name, void *opaque)
 {
     Property *prop = opaque;
-    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
+    g_free(*(char **)object_static_prop_ptr(obj, prop));
 }
 
 static void get_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(obj, prop);
+    char **ptr = object_static_prop_ptr(obj, prop);
 
     if (!*ptr) {
         char *str = (char *)"";
@@ -451,7 +451,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
                        void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    char **ptr = qdev_get_prop_ptr(obj, prop);
+    char **ptr = object_static_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -485,7 +485,7 @@ void object_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
                               void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_static_prop_ptr(obj, prop);
     uint64_t value = *ptr;
 
     visit_type_size(v, name, &value, errp);
@@ -495,7 +495,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_static_prop_ptr(obj, prop);
     uint64_t value;
 
     if (!visit_type_size(v, name, &value, errp)) {
@@ -526,7 +526,7 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
+    QemuUUID *uuid = object_static_prop_ptr(obj, prop);
     char buffer[UUID_FMT_LEN + 1];
     char *p = buffer;
 
@@ -541,7 +541,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
                     Error **errp)
 {
     Property *prop = opaque;
-    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
+    QemuUUID *uuid = object_static_prop_ptr(obj, prop);
     char *str;
 
     if (!visit_type_str(v, name, &str, errp)) {
@@ -605,7 +605,7 @@ static ArrayElementProperty *array_element_new(Object *obj,
      * being inside the device struct.
      */
     arrayprop->prop.offset = eltptr - (void *)obj;
-    assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
+    assert(object_static_prop_ptr(obj, &arrayprop->prop) == eltptr);
     return arrayprop;
 }
 
@@ -646,7 +646,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
      * array itself and dynamically add the corresponding properties.
      */
     Property *prop = opaque;
-    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *alenptr = object_static_prop_ptr(obj, prop);
     void **arrayptr = (void *)obj + prop->arrayoffset;
     void *eltptr;
     const char *arrayname;
@@ -867,7 +867,7 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
@@ -876,7 +876,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
                      Error **errp)
 {
     Property *prop = opaque;
-    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint64_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_size(v, name, ptr, errp);
 }
diff --git a/hw/s390x/css.c b/hw/s390x/css.c
index 46cab94e2b..c8e7ce232a 100644
--- a/hw/s390x/css.c
+++ b/hw/s390x/css.c
@@ -2344,7 +2344,7 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
+    CssDevId *dev_id = object_static_prop_ptr(obj, prop);
     char buffer[] = "xx.x.xxxx";
     char *p = buffer;
     int r;
@@ -2373,7 +2373,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
+    CssDevId *dev_id = object_static_prop_ptr(obj, prop);
     char *str;
     int num, n1, n2;
     unsigned int cssid, ssid, devid;
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index d02e93a192..74a469e91d 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -1248,7 +1248,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
                          void *opaque, Error **errp)
 {
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint32(v, name, ptr, errp);
 }
@@ -1258,7 +1258,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
 {
     S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
     Property *prop = opaque;
-    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint32_t *ptr = object_static_prop_ptr(obj, prop);
 
     if (!visit_type_uint32(v, name, ptr, errp)) {
         return;
diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
index 802979635c..37cb9ab1fa 100644
--- a/hw/vfio/pci-quirks.c
+++ b/hw/vfio/pci-quirks.c
@@ -1489,7 +1489,7 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        Error **errp)
 {
     Property *prop = opaque;
-    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t *ptr = object_static_prop_ptr(obj, prop);
 
     visit_type_uint8(v, name, ptr, errp);
 }
@@ -1499,7 +1499,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
                                        Error **errp)
 {
     Property *prop = opaque;
-    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
+    uint8_t value, *ptr = object_static_prop_ptr(obj, prop);
 
     if (!visit_type_uint8(v, name, &value, errp)) {
         return;
-- 
2.28.0



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 22:07:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 22:07:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15192.38178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYG4S-0002nW-26; Thu, 29 Oct 2020 22:07:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15192.38178; Thu, 29 Oct 2020 22:07:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYG4R-0002nP-VH; Thu, 29 Oct 2020 22:06:59 +0000
Received: by outflank-mailman (input) for mailman id 15192;
 Thu, 29 Oct 2020 22:06:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wm+X=EE=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kYG4Q-0002nK-Lc
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 22:06:58 +0000
Received: from mail-wm1-x32d.google.com (unknown [2a00:1450:4864:20::32d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1257bab-c4f6-4c3f-b9b5-7e4b672cec8a;
 Thu, 29 Oct 2020 22:06:57 +0000 (UTC)
Received: by mail-wm1-x32d.google.com with SMTP id a72so1201328wme.5
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 15:06:57 -0700 (PDT)
X-Inumbo-ID: c1257bab-c4f6-4c3f-b9b5-7e4b672cec8a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=l6NI7kaC3nQvvPszH00Re27/NRj1bjVvJ2D9vN+soQk=;
        b=Gj+ihPQLepNvdi5vjCkRgdNBkRk5+W9+kLux/YayXm6wTUyETKPptSkTiTHP5XrLdS
         tg8CgojEQMXGDbCrcK4roym516+PZu+qntmSPV+C3VkR1xi/Dk0rBV09D0mwT5lok5s9
         bQORyln6MAKLEKB3dtmcNrsocmRgJcwI9I9mnz2uBZsC+YTe/BJuBljU63pRk3h0q2hj
         rjUxLoNRJn2oCwQcZcTgsFh0cbjJFbb9HPxAKyqPtfsdJesp5pNHw2eSv7SZR4Ccmig9
         NXC64eVtPkpWgGJiF165lr/O+p0eMuZeEsv+PlvqJZuqjQ+OAiZvTvQhTn5yDNmCbxFn
         NBIg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=l6NI7kaC3nQvvPszH00Re27/NRj1bjVvJ2D9vN+soQk=;
        b=oOsfwcQs5kwGz9/XRSlpYDBGWb7IlTb8u0YUO4mCJudck65BfK7owB2JF1tO7uttCz
         Xj4LvSM0dYnZKbf8a9DvJ1IihtVBepi43mIVzgNYFieSF11gXsrkdPdtvsjLOEnYpDQI
         Zx7KQmM+JI/H/PtPTpFQjTs1Wt2RicJ4/sKXKnnUVsTZoQDEc5vm6tcl1X6e6uwWpr4C
         SRhsbztUi2HvboPyVslxK1VXnl678ovWpFtUZ9USNmHTeBZkuMQoidxgRSExFGJYOmhM
         ELboCRHfdrRSTllH8XjHAtTrPpg7jgRjxbPoFdiWuHBhGW1mfJcS/qWHL39oN0/2WEF6
         RVTw==
X-Gm-Message-State: AOAM5331g/UjOVRnwWsQ+l1MQoDJstTa9aq4P81khmP5nwWsXmYhAtjC
	5HL7HTtMK1oE6beZXioYn9CuoVhTU3/CsjtaLgs=
X-Google-Smtp-Source: ABdhPJwl4hqDlAIrif301IXHfdIGb0UQv4GoF7FPVWET3aTgf3iz6lYAE16EhvKvANBWW9qGGYK4SSBB1c7oiB+pBV8=
X-Received: by 2002:a1c:4646:: with SMTP id t67mr1133470wma.40.1604009216860;
 Thu, 29 Oct 2020 15:06:56 -0700 (PDT)
MIME-Version: 1.0
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com> <871rhgn6j4.fsf@linaro.org>
In-Reply-To: <871rhgn6j4.fsf@linaro.org>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Fri, 30 Oct 2020 00:06:45 +0200
Message-ID: <CAPD2p-mD-7=zZAPV8u4Ya67haJFbCuGoDVAT8ZqfVyJSW8GDJw@mail.gmail.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: =?UTF-8?B?QWxleCBCZW5uw6ll?= <alex.bennee@linaro.org>
Cc: Masami Hiramatsu <masami.hiramatsu@linaro.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Julien Grall <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Tim Deegan <tim@xen.org>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis <bertrand.marquis@arm.com>, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="000000000000966a0805b2d67f62"

--000000000000966a0805b2d67f62
Content-Type: text/plain; charset="UTF-8"

Hi Alex.

[sorry for the possible format issues]


> I assume this is wiring up the required bits of support to handle the
> IOREQ requests in QEMU?


No, it is not related to QEMU; we don't run QEMU in our system. To be
honest, I have never tried to run it.
The virtio-disk backend PoC is a completely standalone entity (an IOREQ
server) which emulates a virtio-mmio disk device.
It is based on code from DEMU (for the IOREQ server parts), some code
from kvmtool (to implement the virtio protocol and disk operations over
the underlying H/W), and Xenbus code so it can read its configuration
from Xenstore (it is configured via the domain config file). The last
patch in this series (marked as RFC) adds the required bits to the
libxl code.


> We are putting together a PoC demo to show
> a virtio enabled image (AGL) running on both KVM and Xen hypervisors so
> we are keen to see your code as soon as you can share it.


Thank you. Sure, I will provide a branch with code by the end of this week.



>
> I'm currently preparing a patch series for QEMU which fixes the recent
> breakage caused by changes to the build system. As part of that I've
> separated CONFIG_XEN and CONFIG_XEN_HVM so it's possible to build
> without the i386-softmmu bits that are unneeded for Arm. Hopefully this will allow me to
> create a qemu-aarch64-system binary with just the PV related models in
> it.
>
Does that mean it will be possible to use QEMU on Xen on Arm purely for
"backend provider" purposes?


>
> Talking to Stefano it probably makes sense going forward to introduce a
> new IOREQ aware machine type for QEMU that doesn't bring in the rest of
> the x86 overhead. I was thinking maybe xenvirt?
>
> You've tested with virtio-block but we'd also like to extend this to
> other arbitrary virtio devices. I guess we will need some sort of
> mechanism to inform the QEMU command line where each device sits in the
> virtio-mmio bus so the FDT Xen delivers to the guest matches up with
> what QEMU is ready to serve requests for?
>

I am sorry, I can't offer ideas here, as I am not familiar with QEMU.
But I completely agree that other virtio devices should be supported.

-- 
Regards,

Oleksandr Tyshchenko

--000000000000966a0805b2d67f62--


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 22:13:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 22:13:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15199.38189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYGAL-0003ss-Ot; Thu, 29 Oct 2020 22:13:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15199.38189; Thu, 29 Oct 2020 22:13:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYGAL-0003sl-Lr; Thu, 29 Oct 2020 22:13:05 +0000
Received: by outflank-mailman (input) for mailman id 15199;
 Thu, 29 Oct 2020 22:13:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qEoS=EE=aculab.com=david.laight@srs-us1.protection.inumbo.net>)
 id 1kYGAK-0003sg-4I
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 22:13:04 +0000
Received: from eu-smtp-delivery-151.mimecast.com (unknown [185.58.86.151])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8eebcadd-fc6c-4314-b280-61fd1bd9f38b;
 Thu, 29 Oct 2020 22:13:03 +0000 (UTC)
Received: from AcuMS.aculab.com (156.67.243.126 [156.67.243.126]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 uk-mta-27-qEwyl-1dPLWnmCDyyUncBw-1; Thu, 29 Oct 2020 22:13:00 +0000
Received: from AcuMS.Aculab.com (fd9f:af1c:a25b:0:43c:695e:880f:8750) by
 AcuMS.aculab.com (fd9f:af1c:a25b:0:43c:695e:880f:8750) with Microsoft SMTP
 Server (TLS) id 15.0.1347.2; Thu, 29 Oct 2020 22:12:59 +0000
Received: from AcuMS.Aculab.com ([fe80::43c:695e:880f:8750]) by
 AcuMS.aculab.com ([fe80::43c:695e:880f:8750%12]) with mapi id 15.00.1347.000; 
 Thu, 29 Oct 2020 22:12:59 +0000
X-Inumbo-ID: 8eebcadd-fc6c-4314-b280-61fd1bd9f38b
X-MC-Unique: qEwyl-1dPLWnmCDyyUncBw-1
From: David Laight <David.Laight@ACULAB.COM>
To: 'Arvind Sankar' <nivedita@alum.mit.edu>, Thomas Gleixner
	<tglx@linutronix.de>
CC: Paolo Bonzini <pbonzini@redhat.com>, 'Arnd Bergmann' <arnd@kernel.org>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"x86@kernel.org" <x86@kernel.org>, Arnd Bergmann <arnd@arndb.de>, "K. Y.
 Srinivasan" <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>, "H. Peter Anvin" <hpa@zytor.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>, Vitaly Kuznetsov
	<vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, Jim Mattson
	<jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
	"linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	"platform-driver-x86@vger.kernel.org" <platform-driver-x86@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>
Subject: RE: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Thread-Topic: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
Thread-Index: AQHWrZenJpzBwTRfbE+Uihb7XQWTqKmurjkggABtZPaAAAkikA==
Date: Thu, 29 Oct 2020 22:12:59 +0000
Message-ID: <ad73f56e79d249b1b3614bccc85e2ca5@AcuMS.aculab.com>
References: <20201028212417.3715575-1-arnd@kernel.org>
 <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com>
 <20201029165611.GA2557691@rani.riverdale.lan>
 <93180c2d-268c-3c33-7c54-4221dfe0d7ad@redhat.com>
 <87v9esojdi.fsf@nanos.tec.linutronix.de>
 <20201029213512.GA34524@rani.riverdale.lan>
In-Reply-To: <20201029213512.GA34524@rani.riverdale.lan>
Accept-Language: en-GB, en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.202.205.107]
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=C51A453 smtp.mailfrom=david.laight@aculab.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: aculab.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Arvind Sankar
> Sent: 29 October 2020 21:35
> 
> On Thu, Oct 29, 2020 at 09:41:13PM +0100, Thomas Gleixner wrote:
> > On Thu, Oct 29 2020 at 17:59, Paolo Bonzini wrote:
> > > On 29/10/20 17:56, Arvind Sankar wrote:
> > >>> For those two just add:
> > >>> 	struct apic *apic = x86_system_apic;
> > >>> before all the assignments.
> > >>> Less churn and much better code.
> > >>>
> > >> Why would it be better code?
> > >>
> > >
> > > I think he means the compiler produces better code, because it won't
> > > read the global variable repeatedly.  Not sure if that's true,(*) but I
> > > think I do prefer that version if Arnd wants to do that tweak.
> >
> > It's not true.
> >
> >      foo *p = bar;
> >
> >      p->a = 1;
> >      p->b = 2;
> >
> > The compiler is free to reload bar after accessing p->a and with
> >
> >     bar->a = 1;
> >     bar->b = 1;
> >
> > it can either cache bar in a register or reread it after bar->a
> >
> > The generated code is the same as long as there is no reason to reload,
> > e.g. register pressure.
> >
> > Thanks,
> >
> >         tglx
> 
> It's not quite the same.
> 
> https://godbolt.org/z/4dzPbM
> 
> With -fno-strict-aliasing, the compiler reloads the pointer if you write
> to the start of what it points to, but not if you write to later
> elements.

I guess it assumes that global data doesn't overlap.

But in general they are sort of opposites:

With the local variable it can reload if it knows the write
cannot have affected the global - but is unlikely to do so.

Using the global it must reload if it is possible the write
might have affected the global.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



From xen-devel-bounces@lists.xenproject.org Thu Oct 29 22:42:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 22:42:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15205.38202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYGcS-0006mQ-5P; Thu, 29 Oct 2020 22:42:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15205.38202; Thu, 29 Oct 2020 22:42:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYGcS-0006mJ-2G; Thu, 29 Oct 2020 22:42:08 +0000
Received: by outflank-mailman (input) for mailman id 15205;
 Thu, 29 Oct 2020 22:42:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l48A=EE=linux.ibm.com=stefanb@srs-us1.protection.inumbo.net>)
 id 1kYGcQ-0006mE-Es
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 22:42:06 +0000
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.158.5])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed9d5bff-4790-480d-9e68-87180b7518bb;
 Thu, 29 Oct 2020 22:42:04 +0000 (UTC)
Received: from pps.filterd (m0098416.ppops.net [127.0.0.1])
 by mx0b-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09TMVMf1182390; Thu, 29 Oct 2020 18:42:01 -0400
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0b-001b2d01.pphosted.com with ESMTP id 34g3kkdv3d-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 18:42:01 -0400
Received: from m0098416.ppops.net (m0098416.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 09TMg0K6022398;
 Thu, 29 Oct 2020 18:42:00 -0400
Received: from ppma03wdc.us.ibm.com (ba.79.3fa9.ip4.static.sl-reverse.com
 [169.63.121.186])
 by mx0b-001b2d01.pphosted.com with ESMTP id 34g3kkdv30-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 18:42:00 -0400
Received: from pps.filterd (ppma03wdc.us.ibm.com [127.0.0.1])
 by ppma03wdc.us.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 09TMajqd015970;
 Thu, 29 Oct 2020 22:42:00 GMT
Received: from b03cxnp08025.gho.boulder.ibm.com
 (b03cxnp08025.gho.boulder.ibm.com [9.17.130.17])
 by ppma03wdc.us.ibm.com with ESMTP id 34ernqtpaq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 22:41:59 +0000
Received: from b03ledav004.gho.boulder.ibm.com
 (b03ledav004.gho.boulder.ibm.com [9.17.130.235])
 by b03cxnp08025.gho.boulder.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 09TMfrDm62783912
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 29 Oct 2020 22:41:53 GMT
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 9E04478066;
 Thu, 29 Oct 2020 22:41:58 +0000 (GMT)
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 0EE5078064;
 Thu, 29 Oct 2020 22:41:57 +0000 (GMT)
Received: from sbct-3.pok.ibm.com (unknown [9.47.158.153])
 by b03ledav004.gho.boulder.ibm.com (Postfix) with ESMTP;
 Thu, 29 Oct 2020 22:41:56 +0000 (GMT)
X-Inumbo-ID: ed9d5bff-4790-480d-9e68-87180b7518bb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=pp1;
 bh=6v6B0uW4D+eVZCTdE+3+asMAqwy609OXzk6fBum/jkA=;
 b=ooYneL2mjZvBnJ08P6I0PZ3P4BOivHAF6IvGAqUlbT9q1pL/2j1CW9SrxDV926KGAs2d
 x+u+AsUKeM7bU6yTbEu2nlJhg8kg1R5XDvuFlcDlqWcv+IMTEhfdnSPzZcbiqVKwoCq3
 UmTUdORkLI9iv0oGSupm5/3nuHOmzSX5Wh5dKbDMS82Zjd20x/xTuI54uwqg0LWT2RLV
 cqJj7vjd99GlIylif5qoJxKX0V+6bHxAw5ZILwrZjyaynA8viuP5+btyZ8BjgSu6e641
 G295ci7ZVzXvyBPg8fYaaHyDiZ3zzxIPURkv8znUsWypH+PXhVwINiJOoy+SZz+s0d/8 FA== 
Subject: Re: [PATCH 25/36] qdev: Rename qdev_get_prop_ptr() to
 object_static_prop_ptr()
To: Eduardo Habkost <ehabkost@redhat.com>, qemu-devel@nongnu.org
Cc: Matthew Rosato <mjrosato@linux.ibm.com>, Paul Durrant <paul@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
        Stefan Berger <stefanb@linux.vnet.ibm.com>,
        David Hildenbrand <david@redhat.com>,
        Markus Armbruster <armbru@redhat.com>,
        Halil Pasic <pasic@linux.ibm.com>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Anthony Perard <anthony.perard@citrix.com>,
        xen-devel@lists.xenproject.org,
        =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
        Thomas Huth <thuth@redhat.com>,
        Alex Williamson
 <alex.williamson@redhat.com>,
        Paolo Bonzini <pbonzini@redhat.com>, John Snow <jsnow@redhat.com>,
        Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>,
        "Daniel P. Berrange" <berrange@redhat.com>,
        Cornelia Huck <cohuck@redhat.com>, qemu-s390x@nongnu.org,
        Max Reitz <mreitz@redhat.com>, Igor Mammedov <imammedo@redhat.com>
References: <20201029220246.472693-1-ehabkost@redhat.com>
 <20201029220246.472693-26-ehabkost@redhat.com>
From: Stefan Berger <stefanb@linux.ibm.com>
Message-ID: <7cee6c1e-5fc0-4f31-55b0-9c4211da256e@linux.ibm.com>
Date: Thu, 29 Oct 2020 18:41:56 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201029220246.472693-26-ehabkost@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-29_12:2020-10-29,2020-10-29 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1011
 priorityscore=1501 lowpriorityscore=0 mlxscore=0 mlxlogscore=999
 phishscore=0 bulkscore=0 malwarescore=0 impostorscore=0 spamscore=0
 adultscore=0 suspectscore=2 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2009150000 definitions=main-2010290150

On 10/29/20 6:02 PM, Eduardo Habkost wrote:
> The function will be moved to common QOM code, as it is not
> specific to TYPE_DEVICE anymore.
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>


> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>   include/hw/qdev-properties.h     |  2 +-
>   backends/tpm/tpm_util.c          |  6 +--
>   hw/block/xen-block.c             |  4 +-
>   hw/core/qdev-properties-system.c | 46 +++++++++++------------
>   hw/core/qdev-properties.c        | 64 ++++++++++++++++----------------
>   hw/s390x/css.c                   |  4 +-
>   hw/s390x/s390-pci-bus.c          |  4 +-
>   hw/vfio/pci-quirks.c             |  4 +-
>   8 files changed, 67 insertions(+), 67 deletions(-)
>
> diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
> index 0acc92ae2b..4146dac281 100644
> --- a/include/hw/qdev-properties.h
> +++ b/include/hw/qdev-properties.h
> @@ -332,7 +332,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
>                              const uint8_t *value);
>   void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
>
> -void *qdev_get_prop_ptr(Object *obj, Property *prop);
> +void *object_static_prop_ptr(Object *obj, Property *prop);
>
>   void qdev_prop_register_global(GlobalProperty *prop);
>   const GlobalProperty *qdev_find_global_prop(Object *obj,
> diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
> index 042cacfcca..2b5f788861 100644
> --- a/backends/tpm/tpm_util.c
> +++ b/backends/tpm/tpm_util.c
> @@ -35,7 +35,7 @@
>   static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
> +    TPMBackend **be = object_static_prop_ptr(obj, opaque);
>       char *p;
>
>       p = g_strdup(*be ? (*be)->id : "");
> @@ -47,7 +47,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
> +    TPMBackend *s, **be = object_static_prop_ptr(obj, prop);
>       char *str;
>
>       if (!visit_type_str(v, name, &str, errp)) {
> @@ -67,7 +67,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void release_tpm(Object *obj, const char *name, void *opaque)
>   {
>       Property *prop = opaque;
> -    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
> +    TPMBackend **be = object_static_prop_ptr(obj, prop);
>
>       if (*be) {
>           tpm_backend_reset(*be);
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index bd1aef63a7..20985c465a 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -336,7 +336,7 @@ static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
> +    XenBlockVdev *vdev = object_static_prop_ptr(obj, prop);
>       char *str;
>
>       switch (vdev->type) {
> @@ -396,7 +396,7 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
> +    XenBlockVdev *vdev = object_static_prop_ptr(obj, prop);
>       char *str, *p;
>       const char *end;
>
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
> index d9355053d2..448d77ecab 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -60,7 +60,7 @@ static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
>       Property *prop = opaque;
> -    void **ptr = qdev_get_prop_ptr(obj, prop);
> +    void **ptr = object_static_prop_ptr(obj, prop);
>       const char *value;
>       char *p;
>
> @@ -86,7 +86,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    void **ptr = qdev_get_prop_ptr(obj, prop);
> +    void **ptr = object_static_prop_ptr(obj, prop);
>       char *str;
>       BlockBackend *blk;
>       bool blk_created = false;
> @@ -179,7 +179,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
> +    BlockBackend **ptr = object_static_prop_ptr(obj, prop);
>
>       if (*ptr) {
>           AioContext *ctx = blk_get_aio_context(*ptr);
> @@ -212,7 +212,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
>   static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
> +    CharBackend *be = object_static_prop_ptr(obj, opaque);
>       char *p;
>
>       p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
> @@ -224,7 +224,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    CharBackend *be = qdev_get_prop_ptr(obj, prop);
> +    CharBackend *be = object_static_prop_ptr(obj, prop);
>       Chardev *s;
>       char *str;
>
> @@ -260,7 +260,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void release_chr(Object *obj, const char *name, void *opaque)
>   {
>       Property *prop = opaque;
> -    CharBackend *be = qdev_get_prop_ptr(obj, prop);
> +    CharBackend *be = object_static_prop_ptr(obj, prop);
>
>       qemu_chr_fe_deinit(be, false);
>   }
> @@ -284,7 +284,7 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
> +    MACAddr *mac = object_static_prop_ptr(obj, prop);
>       char buffer[2 * 6 + 5 + 1];
>       char *p = buffer;
>
> @@ -299,7 +299,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
> +    MACAddr *mac = object_static_prop_ptr(obj, prop);
>       int i, pos;
>       char *str;
>       const char *p;
> @@ -361,7 +361,7 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
> +    NICPeers *peers_ptr = object_static_prop_ptr(obj, prop);
>       char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
>
>       visit_type_str(v, name, &p, errp);
> @@ -372,7 +372,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
> +    NICPeers *peers_ptr = object_static_prop_ptr(obj, prop);
>       NetClientState **ncs = peers_ptr->ncs;
>       NetClientState *peers[MAX_QUEUE_NUM];
>       int queues, err = 0, i = 0;
> @@ -434,7 +434,7 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
> +    QEMUSoundCard *card = object_static_prop_ptr(obj, prop);
>       char *p = g_strdup(audio_get_id(card));
>
>       visit_type_str(v, name, &p, errp);
> @@ -445,7 +445,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
> +    QEMUSoundCard *card = object_static_prop_ptr(obj, prop);
>       AudioState *state;
>       int err = 0;
>       char *str;
> @@ -547,7 +547,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_static_prop_ptr(obj, prop);
>       uint64_t value;
>       Error *local_err = NULL;
>
> @@ -635,7 +635,7 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
> +    ReservedRegion *rr = object_static_prop_ptr(obj, prop);
>       char buffer[64];
>       char *p = buffer;
>       int rc;
> @@ -651,7 +651,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
> +    ReservedRegion *rr = object_static_prop_ptr(obj, prop);
>       Error *local_err = NULL;
>       const char *endptr;
>       char *str;
> @@ -713,7 +713,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t value, *ptr = object_static_prop_ptr(obj, prop);
>       unsigned int slot, fn, n;
>       char *str;
>
> @@ -751,7 +751,7 @@ invalid:
>   static int print_pci_devfn(Object *obj, Property *prop, char *dest,
>                              size_t len)
>   {
> -    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr = object_static_prop_ptr(obj, prop);
>
>       if (*ptr == -1) {
>           return snprintf(dest, len, "<unset>");
> @@ -775,7 +775,7 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                    void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
> +    PCIHostDeviceAddress *addr = object_static_prop_ptr(obj, prop);
>       char buffer[] = "ffff:ff:ff.f";
>       char *p = buffer;
>       int rc = 0;
> @@ -801,7 +801,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                    void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
> +    PCIHostDeviceAddress *addr = object_static_prop_ptr(obj, prop);
>       char *str, *p;
>       const char *e;
>       unsigned long val;
> @@ -890,7 +890,7 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkSpeed *p = object_static_prop_ptr(obj, prop);
>       int speed;
>
>       switch (*p) {
> @@ -918,7 +918,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkSpeed *p = object_static_prop_ptr(obj, prop);
>       int speed;
>
>       if (!visit_type_enum(v, prop->name, &speed, prop->info->enum_table,
> @@ -960,7 +960,7 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkWidth *p = object_static_prop_ptr(obj, prop);
>       int width;
>
>       switch (*p) {
> @@ -997,7 +997,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
> +    PCIExpLinkWidth *p = object_static_prop_ptr(obj, prop);
>       int width;
>
>       if (!visit_type_enum(v, prop->name, &width, prop->info->enum_table,
> diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
> index e4aba2b237..0b53e5ba63 100644
> --- a/hw/core/qdev-properties.c
> +++ b/hw/core/qdev-properties.c
> @@ -51,7 +51,7 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
>       }
>   }
>
> -void *qdev_get_prop_ptr(Object *obj, Property *prop)
> +void *object_static_prop_ptr(Object *obj, Property *prop)
>   {
>       void *ptr = obj;
>       ptr += prop->offset;
> @@ -97,7 +97,7 @@ void object_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
>                               void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int *ptr = qdev_get_prop_ptr(obj, prop);
> +    int *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
>   }
> @@ -106,7 +106,7 @@ void object_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
>                               void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int *ptr = qdev_get_prop_ptr(obj, prop);
> +    int *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
>   }
> @@ -135,7 +135,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
>
>   static void bit_prop_set(Object *obj, Property *props, bool val)
>   {
> -    uint32_t *p = qdev_get_prop_ptr(obj, props);
> +    uint32_t *p = object_static_prop_ptr(obj, props);
>       uint32_t mask = qdev_get_prop_mask(props);
>       if (val) {
>           *p |= mask;
> @@ -148,7 +148,7 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *p = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *p = object_static_prop_ptr(obj, prop);
>       bool value = (*p & qdev_get_prop_mask(prop)) != 0;
>
>       visit_type_bool(v, name, &value, errp);
> @@ -189,7 +189,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
>
>   static void bit64_prop_set(Object *obj, Property *props, bool val)
>   {
> -    uint64_t *p = qdev_get_prop_ptr(obj, props);
> +    uint64_t *p = object_static_prop_ptr(obj, props);
>       uint64_t mask = qdev_get_prop_mask64(props);
>       if (val) {
>           *p |= mask;
> @@ -202,7 +202,7 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
>                              void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *p = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *p = object_static_prop_ptr(obj, prop);
>       bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
>
>       visit_type_bool(v, name, &value, errp);
> @@ -234,7 +234,7 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    bool *ptr = qdev_get_prop_ptr(obj, prop);
> +    bool *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_bool(v, name, ptr, errp);
>   }
> @@ -243,7 +243,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    bool *ptr = qdev_get_prop_ptr(obj, prop);
> +    bool *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_bool(v, name, ptr, errp);
>   }
> @@ -261,7 +261,7 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint8(v, name, ptr, errp);
>   }
> @@ -270,7 +270,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint8(v, name, ptr, errp);
>   }
> @@ -300,7 +300,7 @@ static void get_uint16(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint16_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint16(v, name, ptr, errp);
>   }
> @@ -309,7 +309,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint16_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint16(v, name, ptr, errp);
>   }
> @@ -327,7 +327,7 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint32(v, name, ptr, errp);
>   }
> @@ -336,7 +336,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint32(v, name, ptr, errp);
>   }
> @@ -345,7 +345,7 @@ void object_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
>                                void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_int32(v, name, ptr, errp);
>   }
> @@ -354,7 +354,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
>       Property *prop = opaque;
> -    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int32_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_int32(v, name, ptr, errp);
>   }
> @@ -379,7 +379,7 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint64(v, name, ptr, errp);
>   }
> @@ -388,7 +388,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint64(v, name, ptr, errp);
>   }
> @@ -397,7 +397,7 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int64_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_int64(v, name, ptr, errp);
>   }
> @@ -406,7 +406,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    int64_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_int64(v, name, ptr, errp);
>   }
> @@ -430,14 +430,14 @@ const PropertyInfo qdev_prop_int64 = {
>   static void release_string(Object *obj, const char *name, void *opaque)
>   {
>       Property *prop = opaque;
> -    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
> +    g_free(*(char **)object_static_prop_ptr(obj, prop));
>   }
>
>   static void get_string(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    char **ptr = qdev_get_prop_ptr(obj, prop);
> +    char **ptr = object_static_prop_ptr(obj, prop);
>
>       if (!*ptr) {
>           char *str = (char *)"";
> @@ -451,7 +451,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    char **ptr = qdev_get_prop_ptr(obj, prop);
> +    char **ptr = object_static_prop_ptr(obj, prop);
>       char *str;
>
>       if (!visit_type_str(v, name, &str, errp)) {
> @@ -485,7 +485,7 @@ void object_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
>                                 void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_static_prop_ptr(obj, prop);
>       uint64_t value = *ptr;
>
>       visit_type_size(v, name, &value, errp);
> @@ -495,7 +495,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
>                          Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_static_prop_ptr(obj, prop);
>       uint64_t value;
>
>       if (!visit_type_size(v, name, &value, errp)) {
> @@ -526,7 +526,7 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
> +    QemuUUID *uuid = object_static_prop_ptr(obj, prop);
>       char buffer[UUID_FMT_LEN + 1];
>       char *p = buffer;
>
> @@ -541,7 +541,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
>       Property *prop = opaque;
> -    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
> +    QemuUUID *uuid = object_static_prop_ptr(obj, prop);
>       char *str;
>
>       if (!visit_type_str(v, name, &str, errp)) {
> @@ -605,7 +605,7 @@ static ArrayElementProperty *array_element_new(Object *obj,
>        * being inside the device struct.
>        */
>       arrayprop->prop.offset = eltptr - (void *)obj;
> -    assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
> +    assert(object_static_prop_ptr(obj, &arrayprop->prop) == eltptr);
>       return arrayprop;
>   }
>
> @@ -646,7 +646,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
>        * array itself and dynamically add the corresponding properties.
>        */
>       Property *prop = opaque;
> -    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *alenptr = object_static_prop_ptr(obj, prop);
>       void **arrayptr = (void *)obj + prop->arrayoffset;
>       void *eltptr;
>       const char *arrayname;
> @@ -867,7 +867,7 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_size(v, name, ptr, errp);
>   }
> @@ -876,7 +876,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint64_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_size(v, name, ptr, errp);
>   }
> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> index 46cab94e2b..c8e7ce232a 100644
> --- a/hw/s390x/css.c
> +++ b/hw/s390x/css.c
> @@ -2344,7 +2344,7 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
> +    CssDevId *dev_id = object_static_prop_ptr(obj, prop);
>       char buffer[] = "xx.x.xxxx";
>       char *p = buffer;
>       int r;
> @@ -2373,7 +2373,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
> +    CssDevId *dev_id = object_static_prop_ptr(obj, prop);
>       char *str;
>       int num, n1, n2;
>       unsigned int cssid, ssid, devid;
> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
> index d02e93a192..74a469e91d 100644
> --- a/hw/s390x/s390-pci-bus.c
> +++ b/hw/s390x/s390-pci-bus.c
> @@ -1248,7 +1248,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint32(v, name, ptr, errp);
>   }
> @@ -1258,7 +1258,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
>   {
>       S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint32_t *ptr = object_static_prop_ptr(obj, prop);
>
>       if (!visit_type_uint32(v, name, ptr, errp)) {
>           return;
> diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> index 802979635c..37cb9ab1fa 100644
> --- a/hw/vfio/pci-quirks.c
> +++ b/hw/vfio/pci-quirks.c
> @@ -1489,7 +1489,7 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>                                          Error **errp)
>   {
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint8_t *ptr = object_static_prop_ptr(obj, prop);
>
>       visit_type_uint8(v, name, ptr, errp);
>   }
> @@ -1499,7 +1499,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>                                          Error **errp)
>   {
>       Property *prop = opaque;
> -    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
> +    uint8_t value, *ptr = object_static_prop_ptr(obj, prop);
>
>       if (!visit_type_uint8(v, name, &value, errp)) {
>           return;




From xen-devel-bounces@lists.xenproject.org Thu Oct 29 22:44:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 22:44:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15226.38217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYGeS-00072s-Q7; Thu, 29 Oct 2020 22:44:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15226.38217; Thu, 29 Oct 2020 22:44:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYGeS-00072l-LZ; Thu, 29 Oct 2020 22:44:12 +0000
Received: by outflank-mailman (input) for mailman id 15226;
 Thu, 29 Oct 2020 22:44:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l48A=EE=linux.ibm.com=stefanb@srs-us1.protection.inumbo.net>)
 id 1kYGeS-00072f-1D
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 22:44:12 +0000
Received: from mx0b-001b2d01.pphosted.com (unknown [148.163.158.5])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa5dddcf-3114-49a4-b515-f46fb62b2cfb;
 Thu, 29 Oct 2020 22:44:10 +0000 (UTC)
Received: from pps.filterd (m0098421.ppops.net [127.0.0.1])
 by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09TMWoFR105701; Thu, 29 Oct 2020 18:43:59 -0400
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0a-001b2d01.pphosted.com with ESMTP id 34fyx962g9-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 18:43:59 -0400
Received: from m0098421.ppops.net (m0098421.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 09TMXnPI108232;
 Thu, 29 Oct 2020 18:43:59 -0400
Received: from ppma02dal.us.ibm.com (a.bd.3ea9.ip4.static.sl-reverse.com
 [169.62.189.10])
 by mx0a-001b2d01.pphosted.com with ESMTP id 34fyx962fx-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 18:43:59 -0400
Received: from pps.filterd (ppma02dal.us.ibm.com [127.0.0.1])
 by ppma02dal.us.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 09TMhSNR019179;
 Thu, 29 Oct 2020 22:43:58 GMT
Received: from b03cxnp08028.gho.boulder.ibm.com
 (b03cxnp08028.gho.boulder.ibm.com [9.17.130.20])
 by ppma02dal.us.ibm.com with ESMTP id 34e1gpax1f-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 22:43:58 +0000
Received: from b03ledav004.gho.boulder.ibm.com
 (b03ledav004.gho.boulder.ibm.com [9.17.130.235])
 by b03cxnp08028.gho.boulder.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 09TMhuBn42795512
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 29 Oct 2020 22:43:56 GMT
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 64B0B78063;
 Thu, 29 Oct 2020 22:43:56 +0000 (GMT)
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id C603578064;
 Thu, 29 Oct 2020 22:43:54 +0000 (GMT)
Received: from sbct-3.pok.ibm.com (unknown [9.47.158.153])
 by b03ledav004.gho.boulder.ibm.com (Postfix) with ESMTP;
 Thu, 29 Oct 2020 22:43:54 +0000 (GMT)
X-Inumbo-ID: fa5dddcf-3114-49a4-b515-f46fb62b2cfb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=pp1;
 bh=wIvAuR1n5JBSPhQ5ZuhQQ2WGTQ/wVeN+2RQNQwNlkcg=;
 b=oeGE8O/fIclNNn4JSLbiWEELlXBeaOEkppd9+xNSJs5lCN8o0DTYPykfZ3ripCOL5oep
 kTpaypQriNd1SIZjalT6wTu3rEP7Hu0AHvGXJCSfXk6WArW9hVCCBR4bK1BilUpnGjNO
 ObsozvBdYUFTb/IGO+NKXMuuVOOORI8LRbt8Ay/seQniRlPOLM0XzLZ/ozghPhRdWctY
 IcqyQJ1Wd7bc838+W8yH+c3OvXzX2FtlAGavlp+Mqe6IpXhBXKEuH1AfQVtykC4uFVy0
 IRZKhIvJJh9Z20+XIWB/D9De1SACwOkxyZ+FjFR8H95w0yVGqcuHig27uVaVwV6R5PHN TA== 
Subject: Re: [PATCH 14/36] qdev: Move dev->realized check to
 qdev_property_set()
To: Eduardo Habkost <ehabkost@redhat.com>, qemu-devel@nongnu.org
Cc: John Snow <jsnow@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
        Eric Blake <eblake@redhat.com>,
        "Daniel P. Berrange" <berrange@redhat.com>,
        Markus Armbruster <armbru@redhat.com>,
        =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
        Igor Mammedov <imammedo@redhat.com>,
        Stefan Berger <stefanb@linux.vnet.ibm.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Anthony Perard <anthony.perard@citrix.com>,
        Paul Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>,
        Max Reitz <mreitz@redhat.com>, Cornelia Huck <cohuck@redhat.com>,
        Halil Pasic <pasic@linux.ibm.com>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Thomas Huth <thuth@redhat.com>, Richard Henderson <rth@twiddle.net>,
        David Hildenbrand <david@redhat.com>,
        Matthew Rosato
 <mjrosato@linux.ibm.com>,
        Alex Williamson <alex.williamson@redhat.com>,
        Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
        Artyom Tarasenko <atar4qemu@gmail.com>, xen-devel@lists.xenproject.org,
        qemu-block@nongnu.org, qemu-s390x@nongnu.org
References: <20201029220246.472693-1-ehabkost@redhat.com>
 <20201029220246.472693-15-ehabkost@redhat.com>
From: Stefan Berger <stefanb@linux.ibm.com>
Message-ID: <4d1f56fe-c213-0967-2828-8f6a34e02ce5@linux.ibm.com>
Date: Thu, 29 Oct 2020 18:43:54 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201029220246.472693-15-ehabkost@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-29_12:2020-10-29,2020-10-29 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 adultscore=0
 priorityscore=1501 lowpriorityscore=0 bulkscore=0 phishscore=0
 clxscore=1015 suspectscore=0 impostorscore=0 spamscore=0 malwarescore=0
 mlxlogscore=999 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010290150

On 10/29/20 6:02 PM, Eduardo Habkost wrote:
> Every single qdev property setter function manually checks
> dev->realized.  We can just check dev->realized inside
> qdev_property_set() instead.
>
> The check is being added as a separate function
> (qdev_prop_allow_set()) because it will become a callback later.
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>

Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>


> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
> Cc: Artyom Tarasenko <atar4qemu@gmail.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>   backends/tpm/tpm_util.c          |   6 --
>   hw/block/xen-block.c             |   5 --
>   hw/core/qdev-properties-system.c |  64 -------------------
>   hw/core/qdev-properties.c        | 106 ++++++-------------------------
>   hw/s390x/css.c                   |   6 --
>   hw/s390x/s390-pci-bus.c          |   6 --
>   hw/vfio/pci-quirks.c             |   6 --
>   target/sparc/cpu.c               |   6 --
>   8 files changed, 18 insertions(+), 187 deletions(-)
>
> diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
> index e91c21dd4a..042cacfcca 100644
> --- a/backends/tpm/tpm_util.c
> +++ b/backends/tpm/tpm_util.c
> @@ -46,16 +46,10 @@ static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
>       char *str;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index 1ba9981c08..bd1aef63a7 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -400,11 +400,6 @@ static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
>       char *str, *p;
>       const char *end;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
> index fca1b694ca..60a45f5620 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -92,11 +92,6 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
>       bool blk_created = false;
>       int ret;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -228,17 +223,11 @@ static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       CharBackend *be = qdev_get_prop_ptr(obj, prop);
>       Chardev *s;
>       char *str;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -309,18 +298,12 @@ static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       MACAddr *mac = qdev_get_prop_ptr(obj, prop);
>       int i, pos;
>       char *str;
>       const char *p;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -388,7 +371,6 @@ static void get_netdev(Object *obj, Visitor *v, const char *name,
>   static void set_netdev(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
>       NetClientState **ncs = peers_ptr->ncs;
> @@ -396,11 +378,6 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
>       int queues, err = 0, i = 0;
>       char *str;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -467,18 +444,12 @@ static void get_audiodev(Object *obj, Visitor *v, const char* name,
>   static void set_audiodev(Object *obj, Visitor *v, const char* name,
>                            void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
>       AudioState *state;
>       int err = 0;
>       char *str;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -580,11 +551,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
>       uint64_t value;
>       Error *local_err = NULL;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_size(v, name, &value, errp)) {
>           return;
>       }
> @@ -684,7 +650,6 @@ static void get_reserved_region(Object *obj, Visitor *v, const char *name,
>   static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
>       Error *local_err = NULL;
> @@ -692,11 +657,6 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>       char *str;
>       int ret;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_str(v, name, &str, &local_err);
>       if (local_err) {
>           error_propagate(errp, local_err);
> @@ -752,17 +712,11 @@ const PropertyInfo qdev_prop_reserved_region = {
>   static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
>       unsigned int slot, fn, n;
>       char *str;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, NULL)) {
>           if (!visit_type_int32(v, name, &value, errp)) {
>               return;
> @@ -846,7 +800,6 @@ static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>   static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                    void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
>       char *str, *p;
> @@ -855,11 +808,6 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>       unsigned long dom = 0, bus = 0;
>       unsigned int slot = 0, func = 0;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -969,16 +917,10 @@ static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>   static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
>       int speed;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_enum(v, prop->name, &speed, prop->info->enum_table,
>                            errp)) {
>           return;
> @@ -1054,16 +996,10 @@ static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>   static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
>       int width;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_enum(v, prop->name, &width, prop->info->enum_table,
>                            errp)) {
>           return;
> diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
> index aab9e65e97..195bfed6e1 100644
> --- a/hw/core/qdev-properties.c
> +++ b/hw/core/qdev-properties.c
> @@ -25,6 +25,19 @@ void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
>       }
>   }
>
> +/* returns: true if property is allowed to be set, false otherwise */
> +static bool qdev_prop_allow_set(Object *obj, const char *name,
> +                                Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +
> +    if (dev->realized) {
> +        qdev_prop_set_after_realize(dev, name, errp);
> +        return false;
> +    }
> +    return true;
> +}
> +
>   void qdev_prop_allow_set_link_before_realize(const Object *obj,
>                                                const char *name,
>                                                Object *val, Error **errp)
> @@ -66,6 +79,11 @@ static void static_prop_set(Object *obj, Visitor *v, const char *name,
>                               void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> +
> +    if (!qdev_prop_allow_set(obj, name, errp)) {
> +        return;
> +    }
> +
>       return prop->info->set(obj, v, name, opaque, errp);
>   }
>
> @@ -91,15 +109,9 @@ void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
>   void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
>                               void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       int *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
>   }
>
> @@ -149,15 +161,9 @@ static void prop_get_bit(Object *obj, Visitor *v, const char *name,
>   static void prop_set_bit(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       bool value;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_bool(v, name, &value, errp)) {
>           return;
>       }
> @@ -209,15 +215,9 @@ static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
>   static void prop_set_bit64(Object *obj, Visitor *v, const char *name,
>                              void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       bool value;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_bool(v, name, &value, errp)) {
>           return;
>       }
> @@ -246,15 +246,9 @@ static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       bool *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_bool(v, name, ptr, errp);
>   }
>
> @@ -279,15 +273,9 @@ static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_uint8(v, name, ptr, errp);
>   }
>
> @@ -324,15 +312,9 @@ void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
>   static void set_uint16(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_uint16(v, name, ptr, errp);
>   }
>
> @@ -357,15 +339,9 @@ static void get_uint32(Object *obj, Visitor *v, const char *name,
>   static void set_uint32(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_uint32(v, name, ptr, errp);
>   }
>
> @@ -381,15 +357,9 @@ void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
>   static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       int32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_int32(v, name, ptr, errp);
>   }
>
> @@ -421,15 +391,9 @@ static void get_uint64(Object *obj, Visitor *v, const char *name,
>   static void set_uint64(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_uint64(v, name, ptr, errp);
>   }
>
> @@ -445,15 +409,9 @@ static void get_int64(Object *obj, Visitor *v, const char *name,
>   static void set_int64(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       int64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_int64(v, name, ptr, errp);
>   }
>
> @@ -496,16 +454,10 @@ static void get_string(Object *obj, Visitor *v, const char *name,
>   static void set_string(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       char **ptr = qdev_get_prop_ptr(obj, prop);
>       char *str;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -546,16 +498,10 @@ void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
>   static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
>                          Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>       uint64_t value;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_size(v, name, &value, errp)) {
>           return;
>       }
> @@ -598,16 +544,10 @@ static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
>       char *str;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> @@ -678,10 +618,6 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
>       const char *arrayname;
>       int i;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
>       if (*alenptr) {
>           error_setg(errp, "array size property %s may not be set more than once",
>                      name);
> @@ -921,15 +857,9 @@ static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
>   static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       visit_type_size(v, name, ptr, errp);
>   }
>
> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> index 38fd46b9a9..46cab94e2b 100644
> --- a/hw/s390x/css.c
> +++ b/hw/s390x/css.c
> @@ -2372,18 +2372,12 @@ static void get_css_devid(Object *obj, Visitor *v, const char *name,
>   static void set_css_devid(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
>       char *str;
>       int num, n1, n2;
>       unsigned int cssid, ssid, devid;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_str(v, name, &str, errp)) {
>           return;
>       }
> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
> index b59cf0651a..d02e93a192 100644
> --- a/hw/s390x/s390-pci-bus.c
> +++ b/hw/s390x/s390-pci-bus.c
> @@ -1256,16 +1256,10 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
>   static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
>       Property *prop = opaque;
>       uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_uint32(v, name, ptr, errp)) {
>           return;
>       }
> diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> index 53569925a2..802979635c 100644
> --- a/hw/vfio/pci-quirks.c
> +++ b/hw/vfio/pci-quirks.c
> @@ -1498,15 +1498,9 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>                                          const char *name, void *opaque,
>                                          Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
>       uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_uint8(v, name, &value, errp)) {
>           return;
>       }
> diff --git a/target/sparc/cpu.c b/target/sparc/cpu.c
> index 8ecb20e55f..cf21efd85f 100644
> --- a/target/sparc/cpu.c
> +++ b/target/sparc/cpu.c
> @@ -798,17 +798,11 @@ static void sparc_get_nwindows(Object *obj, Visitor *v, const char *name,
>   static void sparc_set_nwindows(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       const int64_t min = MIN_NWINDOWS;
>       const int64_t max = MAX_NWINDOWS;
>       SPARCCPU *cpu = SPARC_CPU(obj);
>       int64_t value;
>
> -    if (dev->realized) {
> -        qdev_prop_set_after_realize(dev, name, errp);
> -        return;
> -    }
> -
>       if (!visit_type_int(v, name, &value, errp)) {
>           return;
>       }




From xen-devel-bounces@lists.xenproject.org Thu Oct 29 22:46:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 22:46:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15255.38232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYGgh-0007Dp-6q; Thu, 29 Oct 2020 22:46:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15255.38232; Thu, 29 Oct 2020 22:46:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYGgh-0007Di-3a; Thu, 29 Oct 2020 22:46:31 +0000
Received: by outflank-mailman (input) for mailman id 15255;
 Thu, 29 Oct 2020 22:46:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l48A=EE=linux.ibm.com=stefanb@srs-us1.protection.inumbo.net>)
 id 1kYGgf-0007Da-Ig
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 22:46:29 +0000
Received: from mx0a-001b2d01.pphosted.com (unknown [148.163.156.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c93e8b1-06a9-473f-8714-dd551a9cd556;
 Thu, 29 Oct 2020 22:46:27 +0000 (UTC)
Received: from pps.filterd (m0098399.ppops.net [127.0.0.1])
 by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09TMXhhn102165; Thu, 29 Oct 2020 18:46:24 -0400
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0a-001b2d01.pphosted.com with ESMTP id 34g31cxyn8-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 18:46:23 -0400
Received: from m0098399.ppops.net (m0098399.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 09TMgPZK130876;
 Thu, 29 Oct 2020 18:46:23 -0400
Received: from ppma01dal.us.ibm.com (83.d6.3fa9.ip4.static.sl-reverse.com
 [169.63.214.131])
 by mx0a-001b2d01.pphosted.com with ESMTP id 34g31cxymp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 18:46:23 -0400
Received: from pps.filterd (ppma01dal.us.ibm.com [127.0.0.1])
 by ppma01dal.us.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 09TMfO2Z000644;
 Thu, 29 Oct 2020 22:46:22 GMT
Received: from b03cxnp08028.gho.boulder.ibm.com
 (b03cxnp08028.gho.boulder.ibm.com [9.17.130.20])
 by ppma01dal.us.ibm.com with ESMTP id 34cbw9r3as-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 29 Oct 2020 22:46:22 +0000
Received: from b03ledav004.gho.boulder.ibm.com
 (b03ledav004.gho.boulder.ibm.com [9.17.130.235])
 by b03cxnp08028.gho.boulder.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 09TMkKwF46793124
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 29 Oct 2020 22:46:20 GMT
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 496FC78064;
 Thu, 29 Oct 2020 22:46:20 +0000 (GMT)
Received: from b03ledav004.gho.boulder.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 2A66978067;
 Thu, 29 Oct 2020 22:46:18 +0000 (GMT)
Received: from sbct-3.pok.ibm.com (unknown [9.47.158.153])
 by b03ledav004.gho.boulder.ibm.com (Postfix) with ESMTP;
 Thu, 29 Oct 2020 22:46:18 +0000 (GMT)
X-Inumbo-ID: 0c93e8b1-06a9-473f-8714-dd551a9cd556
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=pp1;
 bh=H5v2xbpUrIS62JTvcWDgh+WcyuhKWQ3SLSJiiDe33FM=;
 b=qIcuHzaS5mt2x8uJWMdFgUQlgySIC2e2IAWmb1CgO7a6vq4UsMxDfzYUOl60IfMvx41H
 tsg9tQLF9TAdi5/0yMIy+NVhcOt7ncw6XzIAH3L26hOV8C3BFW6fNODICJMsKEe6XfLQ
 HhkhYRBgZHftHw9vGrdisxKprHeA4ARPmVwuCoZfMqtWWWs7jSnl1sPKrYxZq1UHmvIf
 ahSN9lqGf9XRash7FEIkhMhT6RByoTeNltRXpdvrpsaLANbd9BLVh3bzfTLyJQpEteHD
 CEfS3DfYOz4AzCkgrLs0RebbARB80CLiEDF1MSynrvx0aaT9UQ+s8cLUYgop/lG9a219 2g== 
Subject: Re: [PATCH 09/36] qdev: Make qdev_get_prop_ptr() get Object* arg
To: Eduardo Habkost <ehabkost@redhat.com>, qemu-devel@nongnu.org
Cc: John Snow <jsnow@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
        Eric Blake <eblake@redhat.com>,
        "Daniel P. Berrange" <berrange@redhat.com>,
        Markus Armbruster <armbru@redhat.com>,
        =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
        Igor Mammedov <imammedo@redhat.com>,
        Stefan Berger <stefanb@linux.vnet.ibm.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Anthony Perard <anthony.perard@citrix.com>,
        Paul Durrant <paul@xen.org>, Kevin Wolf <kwolf@redhat.com>,
        Max Reitz <mreitz@redhat.com>, Richard Henderson <rth@twiddle.net>,
        David Hildenbrand <david@redhat.com>,
        Cornelia Huck <cohuck@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
        Christian Borntraeger <borntraeger@de.ibm.com>,
        Thomas Huth <thuth@redhat.com>,
        Matthew Rosato <mjrosato@linux.ibm.com>,
        Alex Williamson <alex.williamson@redhat.com>,
        xen-devel@lists.xenproject.org, qemu-block@nongnu.org,
        qemu-s390x@nongnu.org
References: <20201029220246.472693-1-ehabkost@redhat.com>
 <20201029220246.472693-10-ehabkost@redhat.com>
From: Stefan Berger <stefanb@linux.ibm.com>
Message-ID: <620fbb37-fe1d-36d0-e216-b8cde61954cb@linux.ibm.com>
Date: Thu, 29 Oct 2020 18:46:17 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.11.0
MIME-Version: 1.0
In-Reply-To: <20201029220246.472693-10-ehabkost@redhat.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-29_12:2020-10-29,2020-10-29 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 bulkscore=0 clxscore=1015 suspectscore=2 adultscore=0 malwarescore=0
 mlxscore=0 mlxlogscore=999 lowpriorityscore=0 spamscore=0 impostorscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010290155

On 10/29/20 6:02 PM, Eduardo Habkost wrote:
> Make the code more generic and not specific to TYPE_DEVICE.
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
> ---
> Cc: Stefan Berger <stefanb@linux.vnet.ibm.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Kevin Wolf <kwolf@redhat.com>
> Cc: Max Reitz <mreitz@redhat.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: "Daniel P. Berrangé" <berrange@redhat.com>
> Cc: Eduardo Habkost <ehabkost@redhat.com>
> Cc: Richard Henderson <rth@twiddle.net>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Cornelia Huck <cohuck@redhat.com>
> Cc: Halil Pasic <pasic@linux.ibm.com>
> Cc: Christian Borntraeger <borntraeger@de.ibm.com>
> Cc: Thomas Huth <thuth@redhat.com>
> Cc: Matthew Rosato <mjrosato@linux.ibm.com>
> Cc: Alex Williamson <alex.williamson@redhat.com>
> Cc: qemu-devel@nongnu.org
> Cc: xen-devel@lists.xenproject.org
> Cc: qemu-block@nongnu.org
> Cc: qemu-s390x@nongnu.org
> ---
>   include/hw/qdev-properties.h     |  2 +-
>   backends/tpm/tpm_util.c          |  8 ++--
>   hw/block/xen-block.c             |  6 +--
>   hw/core/qdev-properties-system.c | 57 +++++++++-------------
>   hw/core/qdev-properties.c        | 82 +++++++++++++-------------------
>   hw/s390x/css.c                   |  5 +-
>   hw/s390x/s390-pci-bus.c          |  4 +-
>   hw/vfio/pci-quirks.c             |  5 +-
>   8 files changed, 68 insertions(+), 101 deletions(-)
>
> diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
> index 0ea822e6a7..0b92cfc761 100644
> --- a/include/hw/qdev-properties.h
> +++ b/include/hw/qdev-properties.h
> @@ -302,7 +302,7 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
>                              const uint8_t *value);
>   void qdev_prop_set_enum(DeviceState *dev, const char *name, int value);
>
> -void *qdev_get_prop_ptr(DeviceState *dev, Property *prop);
> +void *qdev_get_prop_ptr(Object *obj, Property *prop);
>
>   void qdev_prop_register_global(GlobalProperty *prop);
>   const GlobalProperty *qdev_find_global_prop(DeviceState *dev,
> diff --git a/backends/tpm/tpm_util.c b/backends/tpm/tpm_util.c
> index b58d298c1a..e91c21dd4a 100644
> --- a/backends/tpm/tpm_util.c
> +++ b/backends/tpm/tpm_util.c
> @@ -35,8 +35,7 @@
>   static void get_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
> -    TPMBackend **be = qdev_get_prop_ptr(dev, opaque);
> +    TPMBackend **be = qdev_get_prop_ptr(obj, opaque);
>       char *p;
>
>       p = g_strdup(*be ? (*be)->id : "");
> @@ -49,7 +48,7 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    TPMBackend *s, **be = qdev_get_prop_ptr(dev, prop);
> +    TPMBackend *s, **be = qdev_get_prop_ptr(obj, prop);
>       char *str;
>
>       if (dev->realized) {
> @@ -73,9 +72,8 @@ static void set_tpm(Object *obj, Visitor *v, const char *name, void *opaque,
>
>   static void release_tpm(Object *obj, const char *name, void *opaque)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    TPMBackend **be = qdev_get_prop_ptr(dev, prop);
> +    TPMBackend **be = qdev_get_prop_ptr(obj, prop);
>
>       if (*be) {
>           tpm_backend_reset(*be);
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index 8a7a3f5452..1ba9981c08 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -335,9 +335,8 @@ static char *disk_to_vbd_name(unsigned int disk)
>   static void xen_block_get_vdev(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
> +    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
>       char *str;
>
>       switch (vdev->type) {
> @@ -396,9 +395,8 @@ static int vbd_name_to_disk(const char *name, const char **endp,
>   static void xen_block_set_vdev(Object *obj, Visitor *v, const char *name,
>                                  void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    XenBlockVdev *vdev = qdev_get_prop_ptr(dev, prop);
> +    XenBlockVdev *vdev = qdev_get_prop_ptr(obj, prop);
>       char *str, *p;
>       const char *end;
>
> diff --git a/hw/core/qdev-properties-system.c b/hw/core/qdev-properties-system.c
> index d0fb063a49..c8c73c371b 100644
> --- a/hw/core/qdev-properties-system.c
> +++ b/hw/core/qdev-properties-system.c
> @@ -59,9 +59,8 @@ static bool check_prop_still_unset(DeviceState *dev, const char *name,
>   static void get_drive(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    void **ptr = qdev_get_prop_ptr(dev, prop);
> +    void **ptr = qdev_get_prop_ptr(obj, prop);
>       const char *value;
>       char *p;
>
> @@ -87,7 +86,7 @@ static void set_drive_helper(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    void **ptr = qdev_get_prop_ptr(dev, prop);
> +    void **ptr = qdev_get_prop_ptr(obj, prop);
>       char *str;
>       BlockBackend *blk;
>       bool blk_created = false;
> @@ -185,7 +184,7 @@ static void release_drive(Object *obj, const char *name, void *opaque)
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    BlockBackend **ptr = qdev_get_prop_ptr(dev, prop);
> +    BlockBackend **ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (*ptr) {
>           AioContext *ctx = blk_get_aio_context(*ptr);
> @@ -218,8 +217,7 @@ const PropertyInfo qdev_prop_drive_iothread = {
>   static void get_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
> -    CharBackend *be = qdev_get_prop_ptr(dev, opaque);
> +    CharBackend *be = qdev_get_prop_ptr(obj, opaque);
>       char *p;
>
>       p = g_strdup(be->chr && be->chr->label ? be->chr->label : "");
> @@ -232,7 +230,7 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    CharBackend *be = qdev_get_prop_ptr(dev, prop);
> +    CharBackend *be = qdev_get_prop_ptr(obj, prop);
>       Chardev *s;
>       char *str;
>
> @@ -272,9 +270,8 @@ static void set_chr(Object *obj, Visitor *v, const char *name, void *opaque,
>
>   static void release_chr(Object *obj, const char *name, void *opaque)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    CharBackend *be = qdev_get_prop_ptr(dev, prop);
> +    CharBackend *be = qdev_get_prop_ptr(obj, prop);
>
>       qemu_chr_fe_deinit(be, false);
>   }
> @@ -297,9 +294,8 @@ const PropertyInfo qdev_prop_chr = {
>   static void get_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>                       Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
> +    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
>       char buffer[2 * 6 + 5 + 1];
>       char *p = buffer;
>
> @@ -315,7 +311,7 @@ static void set_mac(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    MACAddr *mac = qdev_get_prop_ptr(dev, prop);
> +    MACAddr *mac = qdev_get_prop_ptr(obj, prop);
>       int i, pos;
>       char *str;
>       const char *p;
> @@ -381,9 +377,8 @@ void qdev_prop_set_macaddr(DeviceState *dev, const char *name,
>   static void get_netdev(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
> +    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
>       char *p = g_strdup(peers_ptr->ncs[0] ? peers_ptr->ncs[0]->name : "");
>
>       visit_type_str(v, name, &p, errp);
> @@ -395,7 +390,7 @@ static void set_netdev(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    NICPeers *peers_ptr = qdev_get_prop_ptr(dev, prop);
> +    NICPeers *peers_ptr = qdev_get_prop_ptr(obj, prop);
>       NetClientState **ncs = peers_ptr->ncs;
>       NetClientState *peers[MAX_QUEUE_NUM];
>       int queues, err = 0, i = 0;
> @@ -461,9 +456,8 @@ const PropertyInfo qdev_prop_netdev = {
>   static void get_audiodev(Object *obj, Visitor *v, const char* name,
>                            void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
> +    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
>       char *p = g_strdup(audio_get_id(card));
>
>       visit_type_str(v, name, &p, errp);
> @@ -475,7 +469,7 @@ static void set_audiodev(Object *obj, Visitor *v, const char* name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    QEMUSoundCard *card = qdev_get_prop_ptr(dev, prop);
> +    QEMUSoundCard *card = qdev_get_prop_ptr(obj, prop);
>       AudioState *state;
>       int err = 0;
>       char *str;
> @@ -582,7 +576,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>       uint64_t value;
>       Error *local_err = NULL;
>
> @@ -674,9 +668,8 @@ const PropertyInfo qdev_prop_multifd_compression = {
>   static void get_reserved_region(Object *obj, Visitor *v, const char *name,
>                                   void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
> +    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
>       char buffer[64];
>       char *p = buffer;
>       int rc;
> @@ -693,7 +686,7 @@ static void set_reserved_region(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    ReservedRegion *rr = qdev_get_prop_ptr(dev, prop);
> +    ReservedRegion *rr = qdev_get_prop_ptr(obj, prop);
>       Error *local_err = NULL;
>       const char *endptr;
>       char *str;
> @@ -761,7 +754,7 @@ static void set_pci_devfn(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    int32_t value, *ptr = qdev_get_prop_ptr(dev, prop);
> +    int32_t value, *ptr = qdev_get_prop_ptr(obj, prop);
>       unsigned int slot, fn, n;
>       char *str;
>
> @@ -804,8 +797,7 @@ invalid:
>   static int print_pci_devfn(Object *obj, Property *prop, char *dest,
>                              size_t len)
>   {
> -    DeviceState *dev = DEVICE(obj);
> -    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (*ptr == -1) {
>           return snprintf(dest, len, "<unset>");
> @@ -828,9 +820,8 @@ const PropertyInfo qdev_prop_pci_devfn = {
>   static void get_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>                                    void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
> +    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
>       char buffer[] = "ffff:ff:ff.f";
>       char *p = buffer;
>       int rc = 0;
> @@ -857,7 +848,7 @@ static void set_pci_host_devaddr(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(dev, prop);
> +    PCIHostDeviceAddress *addr = qdev_get_prop_ptr(obj, prop);
>       char *str, *p;
>       const char *e;
>       unsigned long val;
> @@ -950,9 +941,8 @@ const PropertyInfo qdev_prop_off_auto_pcibar = {
>   static void get_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
> +    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
>       int speed;
>
>       switch (*p) {
> @@ -981,7 +971,7 @@ static void set_prop_pcielinkspeed(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    PCIExpLinkSpeed *p = qdev_get_prop_ptr(dev, prop);
> +    PCIExpLinkSpeed *p = qdev_get_prop_ptr(obj, prop);
>       int speed;
>
>       if (dev->realized) {
> @@ -1027,9 +1017,8 @@ const PropertyInfo qdev_prop_pcie_link_speed = {
>   static void get_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>                                      void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
> +    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
>       int width;
>
>       switch (*p) {
> @@ -1067,7 +1056,7 @@ static void set_prop_pcielinkwidth(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    PCIExpLinkWidth *p = qdev_get_prop_ptr(dev, prop);
> +    PCIExpLinkWidth *p = qdev_get_prop_ptr(obj, prop);
>       int width;
>
>       if (dev->realized) {
> diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
> index 3a4638f4de..0a54a922c8 100644
> --- a/hw/core/qdev-properties.c
> +++ b/hw/core/qdev-properties.c
> @@ -38,9 +38,9 @@ void qdev_prop_allow_set_link_before_realize(const Object *obj,
>       }
>   }
>
> -void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
> +void *qdev_get_prop_ptr(Object *obj, Property *prop)
>   {
> -    void *ptr = dev;
> +    void *ptr = obj;
>       ptr += prop->offset;
>       return ptr;
>   }
> @@ -48,9 +48,8 @@ void *qdev_get_prop_ptr(DeviceState *dev, Property *prop)
>   void qdev_propinfo_get_enum(Object *obj, Visitor *v, const char *name,
>                               void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    int *ptr = qdev_get_prop_ptr(dev, prop);
> +    int *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_enum(v, prop->name, ptr, prop->info->enum_table, errp);
>   }
> @@ -60,7 +59,7 @@ void qdev_propinfo_set_enum(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    int *ptr = qdev_get_prop_ptr(dev, prop);
> +    int *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> @@ -94,8 +93,7 @@ static uint32_t qdev_get_prop_mask(Property *prop)
>
>   static void bit_prop_set(Object *obj, Property *props, bool val)
>   {
> -    DeviceState *dev = DEVICE(obj);
> -    uint32_t *p = qdev_get_prop_ptr(dev, props);
> +    uint32_t *p = qdev_get_prop_ptr(obj, props);
>       uint32_t mask = qdev_get_prop_mask(props);
>       if (val) {
>           *p |= mask;
> @@ -107,9 +105,8 @@ static void bit_prop_set(Object *obj, Property *props, bool val)
>   static void prop_get_bit(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *p = qdev_get_prop_ptr(dev, prop);
> +    uint32_t *p = qdev_get_prop_ptr(obj, prop);
>       bool value = (*p & qdev_get_prop_mask(prop)) != 0;
>
>       visit_type_bool(v, name, &value, errp);
> @@ -156,8 +153,7 @@ static uint64_t qdev_get_prop_mask64(Property *prop)
>
>   static void bit64_prop_set(Object *obj, Property *props, bool val)
>   {
> -    DeviceState *dev = DEVICE(obj);
> -    uint64_t *p = qdev_get_prop_ptr(dev, props);
> +    uint64_t *p = qdev_get_prop_ptr(obj, props);
>       uint64_t mask = qdev_get_prop_mask64(props);
>       if (val) {
>           *p |= mask;
> @@ -169,9 +165,8 @@ static void bit64_prop_set(Object *obj, Property *props, bool val)
>   static void prop_get_bit64(Object *obj, Visitor *v, const char *name,
>                              void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint64_t *p = qdev_get_prop_ptr(dev, prop);
> +    uint64_t *p = qdev_get_prop_ptr(obj, prop);
>       bool value = (*p & qdev_get_prop_mask64(prop)) != 0;
>
>       visit_type_bool(v, name, &value, errp);
> @@ -208,9 +203,8 @@ const PropertyInfo qdev_prop_bit64 = {
>   static void get_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    bool *ptr = qdev_get_prop_ptr(dev, prop);
> +    bool *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_bool(v, name, ptr, errp);
>   }
> @@ -220,7 +214,7 @@ static void set_bool(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    bool *ptr = qdev_get_prop_ptr(dev, prop);
> +    bool *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> @@ -242,9 +236,8 @@ const PropertyInfo qdev_prop_bool = {
>   static void get_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>                         Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_uint8(v, name, ptr, errp);
>   }
> @@ -254,7 +247,7 @@ static void set_uint8(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> @@ -288,9 +281,8 @@ const PropertyInfo qdev_prop_uint8 = {
>   void qdev_propinfo_get_uint16(Object *obj, Visitor *v, const char *name,
>                                 void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_uint16(v, name, ptr, errp);
>   }
> @@ -300,7 +292,7 @@ static void set_uint16(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint16_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> @@ -322,9 +314,8 @@ const PropertyInfo qdev_prop_uint16 = {
>   static void get_uint32(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_uint32(v, name, ptr, errp);
>   }
> @@ -334,7 +325,7 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> @@ -347,9 +338,8 @@ static void set_uint32(Object *obj, Visitor *v, const char *name,
>   void qdev_propinfo_get_int32(Object *obj, Visitor *v, const char *name,
>                                void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_int32(v, name, ptr, errp);
>   }
> @@ -359,7 +349,7 @@ static void set_int32(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    int32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    int32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> @@ -388,9 +378,8 @@ const PropertyInfo qdev_prop_int32 = {
>   static void get_uint64(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_uint64(v, name, ptr, errp);
>   }
> @@ -400,7 +389,7 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> @@ -413,9 +402,8 @@ static void set_uint64(Object *obj, Visitor *v, const char *name,
>   static void get_int64(Object *obj, Visitor *v, const char *name,
>                         void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_int64(v, name, ptr, errp);
>   }
> @@ -425,7 +413,7 @@ static void set_int64(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    int64_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    int64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> @@ -454,15 +442,14 @@ const PropertyInfo qdev_prop_int64 = {
>   static void release_string(Object *obj, const char *name, void *opaque)
>   {
>       Property *prop = opaque;
> -    g_free(*(char **)qdev_get_prop_ptr(DEVICE(obj), prop));
> +    g_free(*(char **)qdev_get_prop_ptr(obj, prop));
>   }
>
>   static void get_string(Object *obj, Visitor *v, const char *name,
>                          void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    char **ptr = qdev_get_prop_ptr(dev, prop);
> +    char **ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (!*ptr) {
>           char *str = (char *)"";
> @@ -477,7 +464,7 @@ static void set_string(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    char **ptr = qdev_get_prop_ptr(dev, prop);
> +    char **ptr = qdev_get_prop_ptr(obj, prop);
>       char *str;
>
>       if (dev->realized) {
> @@ -515,9 +502,8 @@ const PropertyInfo qdev_prop_on_off_auto = {
>   void qdev_propinfo_get_size32(Object *obj, Visitor *v, const char *name,
>                                 void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>       uint64_t value = *ptr;
>
>       visit_type_size(v, name, &value, errp);
> @@ -528,7 +514,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>       uint64_t value;
>
>       if (dev->realized) {
> @@ -563,9 +549,8 @@ const PropertyInfo qdev_prop_size32 = {
>   static void get_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
> +    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
>       char buffer[UUID_FMT_LEN + 1];
>       char *p = buffer;
>
> @@ -581,7 +566,7 @@ static void set_uuid(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    QemuUUID *uuid = qdev_get_prop_ptr(dev, prop);
> +    QemuUUID *uuid = qdev_get_prop_ptr(obj, prop);
>       char *str;
>
>       if (dev->realized) {
> @@ -653,7 +638,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
>        */
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *alenptr = qdev_get_prop_ptr(dev, prop);
> +    uint32_t *alenptr = qdev_get_prop_ptr(obj, prop);
>       void **arrayptr = (void *)dev + prop->arrayoffset;
>       void *eltptr;
>       const char *arrayname;
> @@ -699,7 +684,7 @@ static void set_prop_arraylen(Object *obj, Visitor *v, const char *name,
>            * being inside the device struct.
>            */
>           arrayprop->prop.offset = eltptr - (void *)dev;
> -        assert(qdev_get_prop_ptr(dev, &arrayprop->prop) == eltptr);
> +        assert(qdev_get_prop_ptr(obj, &arrayprop->prop) == eltptr);
>           object_property_add(obj, propname,
>                               arrayprop->prop.info->name,
>                               arrayprop->prop.info->get,
> @@ -893,9 +878,8 @@ void qdev_prop_set_globals(DeviceState *dev)
>   static void get_size(Object *obj, Visitor *v, const char *name, void *opaque,
>                        Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_size(v, name, ptr, errp);
>   }
> @@ -905,7 +889,7 @@ static void set_size(Object *obj, Visitor *v, const char *name, void *opaque,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint64_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint64_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> diff --git a/hw/s390x/css.c b/hw/s390x/css.c
> index 9961cfe7bf..2b8f33fec2 100644
> --- a/hw/s390x/css.c
> +++ b/hw/s390x/css.c
> @@ -2343,9 +2343,8 @@ void css_reset(void)
>   static void get_css_devid(Object *obj, Visitor *v, const char *name,
>                             void *opaque, Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
> +    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
>       char buffer[] = "xx.x.xxxx";
>       char *p = buffer;
>       int r;
> @@ -2375,7 +2374,7 @@ static void set_css_devid(Object *obj, Visitor *v, const char *name,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    CssDevId *dev_id = qdev_get_prop_ptr(dev, prop);
> +    CssDevId *dev_id = qdev_get_prop_ptr(obj, prop);
>       char *str;
>       int num, n1, n2;
>       unsigned int cssid, ssid, devid;
> diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
> index fb4cee87a4..b59cf0651a 100644
> --- a/hw/s390x/s390-pci-bus.c
> +++ b/hw/s390x/s390-pci-bus.c
> @@ -1248,7 +1248,7 @@ static void s390_pci_get_fid(Object *obj, Visitor *v, const char *name,
>                            void *opaque, Error **errp)
>   {
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(DEVICE(obj), prop);
> +    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_uint32(v, name, ptr, errp);
>   }
> @@ -1259,7 +1259,7 @@ static void s390_pci_set_fid(Object *obj, Visitor *v, const char *name,
>       DeviceState *dev = DEVICE(obj);
>       S390PCIBusDevice *zpci = S390_PCI_DEVICE(obj);
>       Property *prop = opaque;
> -    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint32_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);
> diff --git a/hw/vfio/pci-quirks.c b/hw/vfio/pci-quirks.c
> index 57150913b7..53569925a2 100644
> --- a/hw/vfio/pci-quirks.c
> +++ b/hw/vfio/pci-quirks.c
> @@ -1488,9 +1488,8 @@ static void get_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>                                          const char *name, void *opaque,
>                                          Error **errp)
>   {
> -    DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint8_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint8_t *ptr = qdev_get_prop_ptr(obj, prop);
>
>       visit_type_uint8(v, name, ptr, errp);
>   }
> @@ -1501,7 +1500,7 @@ static void set_nv_gpudirect_clique_id(Object *obj, Visitor *v,
>   {
>       DeviceState *dev = DEVICE(obj);
>       Property *prop = opaque;
> -    uint8_t value, *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint8_t value, *ptr = qdev_get_prop_ptr(obj, prop);
>
>       if (dev->realized) {
>           qdev_prop_set_after_realize(dev, name, errp);




From xen-devel-bounces@lists.xenproject.org Thu Oct 29 23:11:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 23:11:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15319.38243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYH4k-0001sn-CX; Thu, 29 Oct 2020 23:11:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15319.38243; Thu, 29 Oct 2020 23:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYH4k-0001sg-96; Thu, 29 Oct 2020 23:11:22 +0000
Received: by outflank-mailman (input) for mailman id 15319;
 Thu, 29 Oct 2020 23:11:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYH4i-0001sa-R5
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 23:11:20 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5b65e765-fdeb-4ffe-8b7b-d6229aa24f4d;
 Thu, 29 Oct 2020 23:11:18 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYH4f-00026v-OC; Thu, 29 Oct 2020 23:11:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYH4f-0001qu-G7; Thu, 29 Oct 2020 23:11:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYH4f-0008Lb-Fd; Thu, 29 Oct 2020 23:11:17 +0000
X-Inumbo-ID: 5b65e765-fdeb-4ffe-8b7b-d6229aa24f4d
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/SannIBDd+5Uu3IiMehQwGyOl0SXohts3IksHdqMZJU=; b=D6lMNd4tC8iYHP6ggUxMTqZFyO
	7Il7NmNgsZTDZgpOLE5OugGdN1GtScIe24xnPOxjkCcYbpCfKsbSYNBl4JDKSzO2tsEfDcAdMAmyd
	C6dkUAGQbfROVJwmTKKLeB8+6cUDbbDZkVX2IyGajjwLoKmVQXe7Mit7QfeZYyO13Kcg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156280-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156280: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4100d463dbdd95d85fabe387dd5676bed75f65f7
X-Osstest-Versions-That:
    xen=0108b011e133915a8ebd33636811d8c141b6e9f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 23:11:17 +0000

flight 156280 xen-4.12-testing real [real]
flight 156308 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156280/
http://logs.test-lab.xenproject.org/osstest/logs/156308/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2      fail REGR. vs. 156035

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156035
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156035
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156035
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156035
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156035
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4100d463dbdd95d85fabe387dd5676bed75f65f7
baseline version:
 xen                  0108b011e133915a8ebd33636811d8c141b6e9f3

Last test of basis   156035  2020-10-20 13:36:02 Z    9 days
Testing same since   156263  2020-10-27 18:36:53 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4100d463dbdd95d85fabe387dd5676bed75f65f7
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 19 15:51:22 2020 +0100

    x86/pv: Flush TLB in response to paging structure changes
    
    With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
    is safe from Xen's point of view (as the update only affects guest mappings),
    and the guest is required to flush (if necessary) after making updates.
    
    However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
    writeable pagetables, etc.) is an implementation detail outside of the
    API/ABI.
    
    Changes in the paging structure require invalidations in the linear pagetable
    range for subsequent accesses into the linear pagetables to access non-stale
    mappings.  Xen must provide suitable flushing to prevent intermixed guest
    actions from accidentally accessing/modifying the wrong pagetable.
    
    For all L2 and higher modifications, flush the TLB.  PV guests cannot create
    L2 or higher entries with the Global bit set, so no mappings established in
    the linear range can be global.  (This could in principle be an order 39 flush
    starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
    
    Express the necessary flushes as a set of booleans which accumulate across the
    operation.  Comment the flushing logic extensively.
    
    This is XSA-286.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 16a20963b3209788f2c0d3a3eebb7d92f03f5883)
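
    The accumulate-then-flush scheme described above can be sketched as follows.
    This is an illustrative model only, not actual Xen code; the names
    `process_mmu_updates`, `mmu_update_req`, and `PT_L2` are hypothetical:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: accumulate the need to flush across a batch of
 * MMU_UPDATE requests, issuing a single TLB flush at the end if any
 * L2-or-higher entry was modified, rather than flushing per entry. */
enum pt_level { PT_L1 = 1, PT_L2, PT_L3, PT_L4 };

struct mmu_update_req { enum pt_level level; };

/* Returns true if a TLB flush was required for this batch. */
bool process_mmu_updates(const struct mmu_update_req *reqs, size_t n)
{
    bool flush_tlb = false;        /* accumulates across the operation */

    for ( size_t i = 0; i < n; i++ )
    {
        /* ... apply reqs[i] to the pagetable here ... */
        if ( reqs[i].level >= PT_L2 )
            flush_tlb = true;      /* linear-PT mappings may now be stale */
    }

    if ( flush_tlb )
    {
        /* A non-global flush suffices: PV guests cannot create L2+
         * entries with the Global bit set, so no mapping in the linear
         * range is global.  Real code would flush the local TLB here. */
    }

    return flush_tlb;
}
```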

commit b1d6f37aa5aa9f3fc5a269b9dd21b7feb7444be0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 22 11:28:58 2020 +0100

    x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
    
    c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
    use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
    to the L4 path in do_mmu_update().
    
    However, this was unnecessary.
    
    It is the guest's responsibility to perform appropriate TLB flushing if the L4
    modification altered an established mapping in a flush-relevant way.  In this
    case, an MMUEXT_OP hypercall will follow.  The case which Xen needs to cover
    is when new mappings are created, and the resync on the exit-to-guest path
    covers this correctly.
    
    There is a corner case with multiple vCPUs in hypercalls at the same time,
    which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
    behaviour.
    
    Architecturally, established TLB entries can continue to be used until the
    broadcast flush has completed.  Therefore, even with concurrent hypercalls,
    the guest cannot depend on older mappings not being used until an MMUEXT_OP
    hypercall completes.  Xen's implementation of guest-initiated flushes will
    take correct effect on top of an in-progress hypercall, picking up new
    mapping settings before the other vCPU's MMUEXT_OP completes.
    
    Note: The correctness of this change is not impacted by whether XPTI uses
    global mappings or not.  Correctness there depends on the behaviour of Xen on
    the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
    
    This is (not really) XSA-286 (but necessary to simplify the logic).
    
    Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 055e1c3a3d95b1e753148369fbc4ba48782dd602)
(qemu changes not included)
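
The reasoning in the second commit message can be sketched as a flag choice.
This is a hedged illustration, not Xen's actual definitions; `FLUSH_TLB`,
`FLUSH_TLB_GLOBAL`, and `l4_update_flush_flags` are hypothetical names:

```c
#include <stdbool.h>

/* Hypothetical flush-flag model for the do_mmu_update() L4 path. */
#define FLUSH_TLB        (1u << 0)  /* flush non-global entries only */
#define FLUSH_TLB_GLOBAL (1u << 1)  /* flush all entries, incl. Global */

/* Flags needed after an L4 modification.  Per the reasoning above, a
 * non-global flush is always enough: newly created mappings are covered
 * by the resync on the exit-to-guest path, and flushing established
 * mappings is the guest's responsibility via a following MMUEXT_OP. */
unsigned int l4_update_flush_flags(bool xpti_active)
{
    (void)xpti_active;  /* correctness does not depend on whether XPTI
                           uses global mappings */
    return FLUSH_TLB;   /* FLUSH_TLB_GLOBAL is unnecessary here */
}
```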


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 23:23:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 23:23:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15326.38262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYHGs-00035K-KI; Thu, 29 Oct 2020 23:23:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15326.38262; Thu, 29 Oct 2020 23:23:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYHGs-00035D-HL; Thu, 29 Oct 2020 23:23:54 +0000
Received: by outflank-mailman (input) for mailman id 15326;
 Thu, 29 Oct 2020 23:23:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m9Pg=EE=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYHGr-000358-Gp
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 23:23:53 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6579e11-f112-4c13-99db-9d901749ad73;
 Thu, 29 Oct 2020 23:23:52 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYHGp-0002Nk-QP; Thu, 29 Oct 2020 23:23:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYHGp-0002Fu-Hw; Thu, 29 Oct 2020 23:23:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYHGp-0007LJ-HP; Thu, 29 Oct 2020 23:23:51 +0000
X-Inumbo-ID: c6579e11-f112-4c13-99db-9d901749ad73
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3hpYVFaL/uBAiN1sPhin0tPao7xldNcifXXjUSCKBSw=; b=IdfS0bDfwwA3SqYscYqMiX0VvS
	YLqcsZ5kne8jvNtl9CnIUudLKB7VEUJfT/piqmOhJolc/T03u77c/fj44sx/xaltH+kHlHBTACzEx
	i8mx6bbCV3GBw8LLNQMJzCVXOMbazmLRp4h5YetLXQL5xIFlR2jxNPO7qKBba9jqjMDs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156304-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156304: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=26a8fa494f2f323622b6928bd15921b41818f180
X-Osstest-Versions-That:
    xen=1fd1d4bafdf6f9f8fe5ca9b947f016a7aae92a74
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 29 Oct 2020 23:23:51 +0000

flight 156304 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156304/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  26a8fa494f2f323622b6928bd15921b41818f180
baseline version:
 xen                  1fd1d4bafdf6f9f8fe5ca9b947f016a7aae92a74

Last test of basis   156297  2020-10-29 14:01:23 Z    0 days
Testing same since   156304  2020-10-29 20:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1fd1d4bafd..26a8fa494f  26a8fa494f2f323622b6928bd15921b41818f180 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Oct 29 23:32:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 29 Oct 2020 23:32:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15337.38277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYHPP-000406-Gt; Thu, 29 Oct 2020 23:32:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15337.38277; Thu, 29 Oct 2020 23:32:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYHPP-0003zz-CT; Thu, 29 Oct 2020 23:32:43 +0000
Received: by outflank-mailman (input) for mailman id 15337;
 Thu, 29 Oct 2020 23:32:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IgEN=EE=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYHPO-0003zu-GR
 for xen-devel@lists.xenproject.org; Thu, 29 Oct 2020 23:32:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 646fc09c-371c-48be-ae42-72376af5c5c3;
 Thu, 29 Oct 2020 23:32:41 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DA0D720739;
 Thu, 29 Oct 2020 23:32:39 +0000 (UTC)
X-Inumbo-ID: 646fc09c-371c-48be-ae42-72376af5c5c3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604014360;
	bh=UmFbTMDHbkNkOi+5Y3xlBOkHjjr+XUT+ze9jmPRI6r4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=KYfTGz0GMgkZNtE5eBNAabccZQtmN4E4dj6MggqoB3DizcuBKFfNQIXqiaWhLW00L
	 00VXze7QlMcX+wLQv+gnQLT+MkXnQl1AhtSf/X7v6AC2uBQYU7D7CTEt7GQr0pFtLd
	 llXouzwByJnj3ZKftWlYZRL2XjHDNPdwqDWk3GX8=
Date: Thu, 29 Oct 2020 16:32:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
cc: Julien Grall <julien@xen.org>, 
    "open list:X86" <xen-devel@lists.xenproject.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
In-Reply-To: <DFD994CA-C456-468C-8442-0F63CE661E78@arm.com>
Message-ID: <alpine.DEB.2.21.2010291632130.12247@sstabellini-ThinkPad-T480s>
References: <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com> <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com> <c6790d34-2893-78c4-d49f-7ef4acfceb96@xen.org>
 <DFD994CA-C456-468C-8442-0F63CE661E78@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 29 Oct 2020, Bertrand Marquis wrote:
> Hi Julien,
> 
> > On 28 Oct 2020, at 18:39, Julien Grall <julien@xen.org> wrote:
> > 
> > Hi Bertrand,
> > 
> > On 26/10/2020 16:21, Bertrand Marquis wrote:
> >> When a Cortex A57 processor is affected by CPU errata 832075, a guest
> >> not implementing the workaround for it could deadlock the system.
> >> Add a warning during boot informing the user that only trusted guests
> >> should be executed on the system.
> >> An equivalent warning is already given to the user by KVM on cores
> >> affected by this erratum.
> >> Also taint the hypervisor as insecure when this erratum applies, and
> >> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md.
> >> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> > 
> > Reviewed-by: Julien Grall <jgrall@amazon.com>
> 
> Thanks
> 
> > 
> > If you don't need to resend the series, then I would be happy to fix the typo pointed out by George on commit.
> 
> There is only the condensing from Stefano.
> If you can handle that on commit too, great, but if you need me to send a v3 to make your life easier, do not hesitate to tell me.

I have just committed it.


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 00:25:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 00:25:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15350.38309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYIEK-0000oP-DY; Fri, 30 Oct 2020 00:25:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15350.38309; Fri, 30 Oct 2020 00:25:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYIEK-0000oI-AK; Fri, 30 Oct 2020 00:25:20 +0000
Received: by outflank-mailman (input) for mailman id 15350;
 Fri, 30 Oct 2020 00:25:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYIEJ-0000nk-9h
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 00:25:19 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2d5ae5d9-aa0c-4784-b3de-9fa4613089da;
 Fri, 30 Oct 2020 00:25:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYIEC-0004GG-Ck; Fri, 30 Oct 2020 00:25:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYIEB-0004mf-Va; Fri, 30 Oct 2020 00:25:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYIEB-0005Qe-V4; Fri, 30 Oct 2020 00:25:11 +0000
X-Inumbo-ID: 2d5ae5d9-aa0c-4784-b3de-9fa4613089da
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aThNRt/OZhyI3sKS5E57SMd3T4f4qeXEeBkAhVsVwd4=; b=cynHa2T0k29K4qS1AJFOannUdU
	8sIaT9zhH9ohLzsa5Pz5Q5b24QWYo2oYX70/yNcue8xDaM4YccRGwP4Jt5epMgVZQ9QW34XxhqxhQ
	JOng5bgChgr+nrxW/sUpD7itbeJooex+vb5s/TOn55Z0UL/2uvkwmAH6BqAMkCiyHS7s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156285-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 156285: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    seabios=94f0510dc75e910400aad6c169048d672c8c7193
X-Osstest-Versions-That:
    seabios=58a44be024f69d2e4d2b58553529230abdd3935e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 00:25:11 +0000

flight 156285 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156285/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155839
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155839
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155839
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155839
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155839
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 seabios              94f0510dc75e910400aad6c169048d672c8c7193
baseline version:
 seabios              58a44be024f69d2e4d2b58553529230abdd3935e

Last test of basis   155839  2020-10-15 09:39:29 Z   14 days
Testing same since   156285  2020-10-28 19:42:01 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Graf <graf@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   58a44be..94f0510  94f0510dc75e910400aad6c169048d672c8c7193 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 01:15:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 01:15:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15359.38324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYJ0N-0001Mt-BL; Fri, 30 Oct 2020 01:14:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15359.38324; Fri, 30 Oct 2020 01:14:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYJ0N-0001Mm-8U; Fri, 30 Oct 2020 01:14:59 +0000
Received: by outflank-mailman (input) for mailman id 15359;
 Fri, 30 Oct 2020 01:14:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYJ0L-0001ME-Rv
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 01:14:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9d64c3b-42ac-495a-9ec2-24d61f9e80e9;
 Fri, 30 Oct 2020 01:14:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYJ0E-00010q-JR; Fri, 30 Oct 2020 01:14:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYJ0E-0006un-BP; Fri, 30 Oct 2020 01:14:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYJ0E-0002l5-As; Fri, 30 Oct 2020 01:14:50 +0000
X-Inumbo-ID: a9d64c3b-42ac-495a-9ec2-24d61f9e80e9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pLC7KCAO13JuPynkjAaBxl2Mg5o2xbPnJfinnQAfl00=; b=su/BuThWqFUaYRtFdZN3HEcX72
	F7EeIXLpqGbPrxk9czd0QKoAUydTWH/+LE+9V4i8T3/YBQmLxcxaWqY6R0DgvStVVY2s12k7M4cjl
	LEbiC0izaby9gn6QDNyf7buasRdBGN5ZKFlbUeD5wXsVo9AE1ws2TWatMr9CQeJkJ1SY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156290-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156290: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=1f807631f402210d036ec4803e7adfefa222f786
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 01:14:50 +0000

flight 156290 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156290/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              1f807631f402210d036ec4803e7adfefa222f786
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  111 days
Failing since        151818  2020-07-11 04:18:52 Z  110 days  105 attempts
Testing same since   156273  2020-10-28 04:19:15 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23384 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 02:03:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 02:03:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15365.38340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYJlO-00064Q-Qt; Fri, 30 Oct 2020 02:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15365.38340; Fri, 30 Oct 2020 02:03:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYJlO-00064J-MW; Fri, 30 Oct 2020 02:03:34 +0000
Received: by outflank-mailman (input) for mailman id 15365;
 Fri, 30 Oct 2020 02:03:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAhc=EF=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1kYJlM-00064E-L0
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 02:03:32 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com (unknown
 [40.107.100.50]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9897ae22-8e30-4c8c-b37d-bed0f9262847;
 Fri, 30 Oct 2020 02:03:31 +0000 (UTC)
Received: from DM5PR15CA0067.namprd15.prod.outlook.com (2603:10b6:3:ae::29) by
 MN2PR02MB6621.namprd02.prod.outlook.com (2603:10b6:208:1db::7) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3499.27; Fri, 30 Oct 2020 02:03:30 +0000
Received: from CY1NAM02FT026.eop-nam02.prod.protection.outlook.com
 (2603:10b6:3:ae:cafe::d2) by DM5PR15CA0067.outlook.office365.com
 (2603:10b6:3:ae::29) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 30 Oct 2020 02:03:30 +0000
Received: from xsj-pvapexch02.xlnx.xilinx.com (149.199.62.198) by
 CY1NAM02FT026.mail.protection.outlook.com (10.152.75.157) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 02:03:29 +0000
Received: from xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) by
 xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1913.5; Thu, 29 Oct 2020 19:03:29 -0700
Received: from smtp.xilinx.com (172.19.127.96) by
 xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server id
 15.1.1913.5 via Frontend Transport; Thu, 29 Oct 2020 19:03:29 -0700
Received: from [10.23.121.44] (port=52670 helo=localhost)
 by smtp.xilinx.com with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kYJlJ-0000Fa-9k; Thu, 29 Oct 2020 19:03:29 -0700
X-Inumbo-ID: 9897ae22-8e30-4c8c-b37d-bed0f9262847
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Xy1vxGeLtwFWqXLgPxuPexD2Ewy/oGlDFA39pNgaecaKe/Rqf+byUZeoZGPjvej0R6tLkiRlkwKNjeqQ7tqMzUAw3rJjKWQ2HNgY/81PAvKnYtJtXLXY0iRwWdeXWGmJUwNbfZUhy7vsLoUy2ZTLOsUI9KU1jdnQFR2u6srvpX1xuKN3pUP2AiUQsIlX7suqSQASmlLvhAtMT6AUHmmYs/VCW1JASapIFaXx+Wv78KVXxgEohRJz2ds05r6OZ7THjXED6aNZenS/zOLhitLMKeCMq0+tWlWw8rfUA8359h8xpucTUBgt5P1yofkxa6ORGTBjtAv1/7kNl07BHU65cQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gEwLjDXhqthcRC8OE0t4vbwooUOBz3m3pMD02xKVBHE=;
 b=lGfELAjqab4ZiAVO/4x/mQ2MwU0p+moPn42nTMwlg/hgwgCuvkEfcrI1onGtPUiSB3TJB1SfwdSNdIEv6A2ZltyJ9qOt2yLp5mMbg9hapzqZdWI2PDsZy/p0N+9OWeKKoP2fnwArrKXIskSCVQzBTbE0FrTxP6nRvQu+DHBoNRawlxD3c+3LGrb5cCOSCl2nCxzdD6cblueOHaZG+56/0x2HiBQRmCMuQbohfW7wac/exzt2L4RWgjBSOUW3K4/PTDZAD3lpGRm15i/PcutQaja5gO4ZCTVC/1WYmSHBJ6n3b1X5kd4EIWXPHUgky2JyaHt4d1SfwCfiXZ8/RrFHEg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.62.198) smtp.rcpttodomain=linaro.org smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gEwLjDXhqthcRC8OE0t4vbwooUOBz3m3pMD02xKVBHE=;
 b=QtHoVDeNxWKlGvWo8tOlH29fF92TpkIQZefbXrOg6dwuReHQwZuk8iO2qLwFtH/ZruyQMstwV84wyHkLLC1krac4uOSk5247EJR2a8dYaZ/NNEJ4l2mTvc5ZrxEXFppk2Ma7DE1aDEnJgrMSmPrZwt3A2l0qT09Bii87xwvW3b4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198)
 smtp.mailfrom=xilinx.com; linaro.org; dkim=none (message not signed)
 header.d=none;linaro.org; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.62.198 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.62.198; helo=xsj-pvapexch02.xlnx.xilinx.com;
Date: Thu, 29 Oct 2020 19:03:28 -0700
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Takahiro Akashi <takahiro.akashi@linaro.org>
CC: Stefano Stabellini <stefano.stabellini@xilinx.com>, Alex Bennée
	<alex.bennee@linaro.org>, Masami Hiramatsu <masami.hiramatsu@linaro.org>,
	<ian.jackson@eu.citrix.com>, <wl@xen.org>, <anthony.perard@citrix.com>,
	<xen-devel@lists.xenproject.org>
Subject: BUG: libxl vuart build order
In-Reply-To: <20201029114705.GA291577@laputa>
Message-ID: <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s>
References: <CAB5YjtCwbvYMVg-9YXjSFtC8KvjkJuYhJFSCHrJaRUKfg4NHYA@mail.gmail.com> <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s> <20201027000214.GA14449@laputa> <20201028014105.GA11856@laputa> <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s>
 <20201029114705.GA291577@laputa>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="8323329-1085550638-1604016260=:12247"
Content-ID: <alpine.DEB.2.21.2010291704210.12247@sstabellini-ThinkPad-T480s>
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 57497799-52af-4a52-71da-08d87c77ff60
X-MS-TrafficTypeDiagnostic: MN2PR02MB6621:
X-Microsoft-Antispam-PRVS:
	<MN2PR02MB6621BCE34063850120841BBAA0150@MN2PR02MB6621.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	vkxJvW8v2I4fckR2tgVVqkNrd99NkDVBaaDB+ZsWYBY1R5b+4Z2ScBm/h/0STschP+EIcY4Gzreq1sisjAmU6Do4tBLEQAVjxHKycpmMWrrCcunL3bk4kLEXuxKak/s6GI9KVHGnUc4iH7fVIBCOzouwPunfz3aKGYBWUdgl4c84mH5WXuK7yfmqo8CSguUcy099jJOg7EJ7PL5PpI3qtSAw6zzWFGDM2R3ep5GdGwJylA8stcEOWXno6mTFV972TxAK8fQNrfcUN7zzDBVLkE4QWP9yiKIbLY1laPA01Us41zp8Xn1ixSqBYJ79cZH6bJE9djWE6+GcLc3keEb55S51LTauQAfeyWmJFxGiXsCWKwOJTUwUTvsNM2iMA2hr0FleeNNbA8AYtYJrddiI8Q==
X-Forefront-Antispam-Report:
	CIP:149.199.62.198;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapexch02.xlnx.xilinx.com;PTR:unknown-62-198.xilinx.com;CAT:NONE;SFS:(4636009)(7916004)(376002)(396003)(136003)(346002)(39860400002)(46966005)(426003)(7636003)(33964004)(44832011)(26005)(82740400003)(186003)(47076004)(478600001)(70206006)(70586007)(336012)(83380400001)(5660300002)(54906003)(36906005)(82310400003)(316002)(8676002)(8936002)(2906002)(4326008)(356005)(6916009)(9786002)(9686003)(33716001);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 02:03:29.9749
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 57497799-52af-4a52-71da-08d87c77ff60
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.62.198];Helo=[xsj-pvapexch02.xlnx.xilinx.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY1NAM02FT026.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR02MB6621

--8323329-1085550638-1604016260=:12247
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2010291704211.12247@sstabellini-ThinkPad-T480s>

+ xen-devel and libxl maintainers

In short, there is a regression in libxl with the ARM vuart, introduced
by moving ARM guests to the PVH build.


On Thu, 29 Oct 2020, Takahiro Akashi wrote:
> On Wed, Oct 28, 2020 at 02:44:16PM -0700, Stefano Stabellini wrote:
> > On Wed, 28 Oct 2020, Takahiro Akashi wrote:
> > > On Tue, Oct 27, 2020 at 09:02:14AM +0900, Takahiro Akashi wrote:
> > > > On Mon, Oct 26, 2020 at 04:37:30PM -0700, Stefano Stabellini wrote:
> > > > > 
> > > > > On Mon, 26 Oct 2020, Takahiro Akashi wrote:
> > > > > > Stefano,
> > > > > > 
> > > > > > # I'm afraid that I have already bothered you with a lot of questions.
> > > > > > 
> > > > > > When I looked at Xen's vpl011 implementation, I found that the
> > > > > > CR (and LCHR) registers are not supported (a trap may cause a data abort).
> > > > > > On the other hand, Linux's pl011 driver, for example, does
> > > > > > access the CR (and LCHR) registers.
> > > > > > So I guess that Linux won't be able to use the pl011 in a Xen guest VM
> > > > > > if vuart = "sbsa_uart".
> > > > > > 
> > > > > > Is this a known issue or do I miss anything?
> > > > > 
> > > > > Linux should definitely be able to use it, and in fact, I am using it
> > > > > with Linux in my test environment.
> > > > > 
> > > > > I think the confusion comes from the name "vpl011": it is in fact not a
> > > > > full PL011 UART, but an SBSA UART.
> > > > 
> > > > Yeah, I have noticed it.
> > > > 
> > > > > SBSA UART only implements a subset of
> > > > > the PL011 registers. The compatible string is "arm,sbsa-uart", also see
> > > > > drivers/tty/serial/amba-pl011.c:sbsa_uart_probe.
> > > > 
> > > > Looking closely at the details of the implementation, I found
> > > > that all accesses to unimplemented registers, including
> > > > CR, are deliberately avoided in the SBSA part of the Linux driver.
> > > 
> > > So I'm now trying to implement an "sbsa-uart" driver in U-Boot
> > > by modifying the existing pl011 driver.
> > > (Please note that the current Xen-enabled U-Boot uses a para-virtualized
> > > console, i.e. HVM_PARAM_CONSOLE_PFN.)
> > > 
> > > So far all my attempts have failed.
> > > 
> > > There are a couple of problems, and one of them is how we can
> > > access the vpl011 port (from dom0).
> > > What I did is:
> > > - modify U-Boot's pl011 driver
> > >   (I'm sure that the driver correctly handle a vpl011 device
> > >   with regard of accessing a proper set of registers.)
> > > - start U-Boot guest with "vuart=sbsa_uart" by
> > >     xl create uboot.cfg -c
> > > 
> > > Then I saw almost nothing on the screen.
> > > Digging into the vpl011 implementation, I found that all the characters
> > > written to the DR register are directed to a "backend domain" if a guest
> > > VM is launched by the xl command.
> > > (In the dom0less case, the backend seems to be Xen itself.)
> > > 
> > > As a silly experiment, I modified domain_vpl011_init() to always create
> > > a vpl011 device with "backend_in_domain == false".
> > > Then I could see more boot messages from U-Boot, but it still fails
> > > to work as a console; I mean, all output is lost
> > > after a certain point and no keys can be typed (at a command prompt).
> > > (This will be another problem on the U-Boot side.)
> > > 
> > > My first question here is: how can we configure and connect a console
> > > in this case?
> > > Should "xl create -c" or "xl console -t vuart" simply work?
> > 
> > "xl create -c" creates a guest and connects to the primary console, which
> > is the PV console (i.e. HVM_PARAM_CONSOLE_PFN).
> 
> So in the vuart case, it (the console) doesn't work?
> (Apparently, "xl create" doesn't take a '-t' option.)
> 
> > To connect to the emulated sbsa uart you need to pass -t vuart. So yes,
> > "xl console -t vuart domain_name" should get you access to the emulated
> > sbsa uart. The sbsa uart can also be exposed to dom0less guests; you get
> > their output by using CTRL-AAA to switch to the right domU console.
> > 
> > You can add printks to xen/arch/arm/vpl011.c in Xen to see what's
> > happening on the Xen side. vpl011.c is the emulator.
> 
> I'm sure that a write to the "REG_DR" register is caught by Xen.
> What I don't understand is:
> if backend_in_domain -> no output
> if !backend_in_domain -> output is visible
> 
> (As you know, if a guest is created by the xl command, backend_in_domain
> is forcibly set to true.)
> 
> I looked into xenstore and found that "vuart/0/tty" does not exist,
> but "console/tty" does.
> How can this happen for vuart?
> (I clearly specified vuart = "sbsa_uart" in the Xen config.)

It looks like we have a bug :-(

I managed to reproduce the issue. The problem is a missing assignment in
libxl.

tools/libxc/xc_dom_arm.c:alloc_magic_pages is called first, setting
dom->vuart_gfn.  Then libxl__build_hvm should set state->vuart_gfn to
dom->vuart_gfn (as libxl__build_pv does), but it doesn't.


---

libxl: set vuart_gfn in libxl__build_hvm

Setting vuart_gfn was missed when switching ARM guests to the PVH build.
Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
dom->vuart_gfn.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index f8661e90d4..36fe8915e7 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -1184,6 +1184,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
         LOG(ERROR, "hvm build set params failed");
         goto out;
     }
+    state->vuart_gfn = dom->vuart_gfn;
 
     rc = hvm_build_set_xs_values(gc, domid, dom, info);
     if (rc != 0) {
--8323329-1085550638-1604016260=:12247--


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 02:11:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 02:11:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15375.38351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYJt2-0006zv-O3; Fri, 30 Oct 2020 02:11:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15375.38351; Fri, 30 Oct 2020 02:11:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYJt2-0006zo-Ki; Fri, 30 Oct 2020 02:11:28 +0000
Received: by outflank-mailman (input) for mailman id 15375;
 Fri, 30 Oct 2020 02:11:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYJt1-0006zj-Bc
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 02:11:27 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c1bc3e6-a747-4d4d-b379-0c381c7a1a0b;
 Fri, 30 Oct 2020 02:11:26 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYJsz-0002ai-L8; Fri, 30 Oct 2020 02:11:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYJsz-0002jq-Ba; Fri, 30 Oct 2020 02:11:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYJsz-0003Tr-AA; Fri, 30 Oct 2020 02:11:25 +0000
X-Inumbo-ID: 1c1bc3e6-a747-4d4d-b379-0c381c7a1a0b
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CFjsfB+EBBad1Hq7300KoL78GeKijXKt6uRTspNvVO8=; b=o90yJytScPyUwUu+AO/GNnsjzD
	EUasA56+s+gaY75/ET3ELAARvT/pfhnhJ/mgiea6CL6SPfzz38e7Z2+vMnj3jlx5Lt1U6Qm6wtFLJ
	kDI5COx/mmR32524UCqoOeCEUoTnRkZPmlvpg6/oJYX5JW6Ma/s9kxvGmeHI6q1JjGSQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156310-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156310: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6e2ee3dfd660d9fde96243da7d565244b4d2f164
X-Osstest-Versions-That:
    xen=26a8fa494f2f323622b6928bd15921b41818f180
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 02:11:25 +0000

flight 156310 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156310/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6e2ee3dfd660d9fde96243da7d565244b4d2f164
baseline version:
 xen                  26a8fa494f2f323622b6928bd15921b41818f180

Last test of basis   156304  2020-10-29 20:00:26 Z    0 days
Testing same since   156310  2020-10-30 00:01:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   26a8fa494f..6e2ee3dfd6  6e2ee3dfd660d9fde96243da7d565244b4d2f164 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 03:28:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 03:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15386.38367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYL5l-0004gM-F6; Fri, 30 Oct 2020 03:28:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15386.38367; Fri, 30 Oct 2020 03:28:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYL5l-0004gF-Bh; Fri, 30 Oct 2020 03:28:41 +0000
Received: by outflank-mailman (input) for mailman id 15386;
 Fri, 30 Oct 2020 03:28:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1N+=EF=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kYL5j-0004gA-TA
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 03:28:40 +0000
Received: from out2-smtp.messagingengine.com (unknown [66.111.4.26])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d408ed9-47a9-4720-b2f9-0c90b863b4ad;
 Fri, 30 Oct 2020 03:28:38 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id BDE9D5C0131;
 Thu, 29 Oct 2020 23:28:38 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Thu, 29 Oct 2020 23:28:38 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 0861A3280063;
 Thu, 29 Oct 2020 23:28:37 -0400 (EDT)
X-Inumbo-ID: 1d408ed9-47a9-4720-b2f9-0c90b863b4ad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=nldD6o
	fiqaXXlTDlB3bpDhEfk1kq2/CjjYuqgH3H7W4=; b=kxQO6WTrIWD++n8RmjVjQS
	pLDxSA3uO4ztQjA0DiMBI/AGWZ4cIdpzm5We7GacS468E5UazX2MpWsYOlbbsthf
	VMhDJf+TBrfTKSTJR958JnXcAo3+jMGxEQPRQvDja3H6jKMlewPiHB0ZvSdAikHA
	ZMTvEQobWwTVFFOM/HyHm3FGjnctk/PXkP2M4JzK/9K5k87vVJrFa/QvTniYCzFe
	ZxcqoyCMEntTbdtaf4j9NiZqkZ3616KXDh++5hJpo3+/9qyo2uSjX+vyYuRnuXLc
	/8kcUvI8WMd3ati3qTrB8i3x2YbPhBi9DbQXgxNWLe+K4CR6i0g6jhHkXaHSP2UQ
	==
X-ME-Sender: <xms:ZoibX8-xRz5Ympyuea7dt22rIAKJVH50qPQ5PDQ35sXKi1XttfHdVw>
    <xme:ZoibX0vLEKOtz2nb3_mNixSlsOXz93LEfDuKuKhEdsTIjzABS2Jc-kVwuyz89dUB6
    _65V-lvVHZj0w>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrleeggdehkecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiuceomhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhih
    hnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgefhtdfhuddtjefgjeevhffh
    jeejtefgjeevgeeijeduhfdtteehieffvdettddvnecukfhppeeluddrieegrddujedtrd
    ekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehm
    rghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:ZoibXyBJUO9x3O-71CIPPEPRjAAdCPs5IKwhnaKuG-upjD6FjjpunA>
    <xmx:ZoibX8dORXVHQT8GtpCTkYLiEsh5SOOBMWJ5Dl1WLO-OoeFap93UGw>
    <xmx:ZoibXxMZAaGs4KGDwRKReRyzWpEYdQuuUAnmmIbEOZWc5OhlaEXiQQ>
    <xmx:ZoibX3bn6F7bit2hMvNer2h-XtJxnr9cLpI5wtb1DonjFT-aTTaBTw>
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de [91.64.170.89])
	by mail.messagingengine.com (Postfix) with ESMTPA id 0861A3280063;
	Thu, 29 Oct 2020 23:28:37 -0400 (EDT)
Date: Fri, 30 Oct 2020 04:28:34 +0100
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] tools/python: pass more -rpath-link options to ld
Message-ID: <20201030032834.GC1447@mail-itl>
References: <d10bb94f-c572-6977-40a4-57a61da4094b@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="F4Dl6XKrV7PH8SJF"
Content-Disposition: inline
In-Reply-To: <d10bb94f-c572-6977-40a4-57a61da4094b@suse.com>


--F4Dl6XKrV7PH8SJF
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH] tools/python: pass more -rpath-link options to ld

On Mon, Oct 19, 2020 at 10:31:37AM +0200, Jan Beulich wrote:
> With the split of libraries, I've observed a number of warnings from
> (old?) ld.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> It's unclear to me whether this is ld version dependent - the pattern
> of where I've seen such warnings doesn't suggest a clear version
> dependency.
> 
> --- a/tools/python/setup.py
> +++ b/tools/python/setup.py
> @@ -7,10 +7,15 @@ XEN_ROOT = "../.."
>  extra_compile_args  = [ "-fno-strict-aliasing", "-Werror" ]
> 
>  PATH_XEN      = XEN_ROOT + "/tools/include"
> +PATH_LIBXENTOOLCORE = XEN_ROOT + "/tools/libs/toolcore"
>  PATH_LIBXENTOOLLOG = XEN_ROOT + "/tools/libs/toollog"
> +PATH_LIBXENCALL = XEN_ROOT + "/tools/libs/call"
>  PATH_LIBXENEVTCHN = XEN_ROOT + "/tools/libs/evtchn"
> +PATH_LIBXENGNTTAB = XEN_ROOT + "/tools/libs/gnttab"
>  PATH_LIBXENCTRL = XEN_ROOT + "/tools/libs/ctrl"
>  PATH_LIBXENGUEST = XEN_ROOT + "/tools/libs/guest"
> +PATH_LIBXENDEVICEMODEL = XEN_ROOT + "/tools/libs/devicemodel"
> +PATH_LIBXENFOREIGNMEMORY = XEN_ROOT + "/tools/libs/foreignmemory"
>  PATH_XENSTORE = XEN_ROOT + "/tools/libs/store"
> 
>  xc = Extension("xc",
> @@ -24,7 +29,13 @@ xc = Extension("xc",
>                 library_dirs       = [ PATH_LIBXENCTRL, PATH_LIBXENGUEST ],
>                 libraries          = [ "xenctrl", "xenguest" ],
>                 depends            = [ PATH_LIBXENCTRL + "/libxenctrl.so", PATH_LIBXENGUEST + "/libxenguest.so" ],
> -               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],
> +               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENCALL,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENDEVICEMODEL,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENEVTCHN,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENFOREIGNMEMORY,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENGNTTAB,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENTOOLCORE,
> +                                      "-Wl,-rpath-link="+PATH_LIBXENTOOLLOG ],

This basically open-codes SHLIB_libxenctrl + SHLIB_libxenguest. Wouldn't
it be better to pass this in from make, which already has all the
dependencies resolved?

>                 sources            = [ "xen/lowlevel/xc/xc.c" ])
> 
>  xs = Extension("xs",
> @@ -33,6 +44,7 @@ xs = Extension("xs",
>                 library_dirs       = [ PATH_XENSTORE ],
>                 libraries          = [ "xenstore" ],
>                 depends            = [ PATH_XENSTORE + "/libxenstore.so" ],
> +               extra_link_args    = [ "-Wl,-rpath-link="+PATH_LIBXENTOOLCORE ],
>                 sources            = [ "xen/lowlevel/xs/xs.c" ])
> 
>  plat = os.uname()[0]
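As a purely illustrative sketch of the suggestion to let make supply the already-resolved library directories instead of open-coding each path in setup.py: a hypothetical helper that turns a make-exported, colon-separated directory list into `-Wl,-rpath-link=` options. The `SHLIB_RPATH_DIRS` variable name is an assumption for this sketch, not part of the patch or the Xen build system.

```python
# Hypothetical sketch: derive -Wl,-rpath-link flags from a make-provided
# environment variable rather than listing every library directory by hand.
# SHLIB_RPATH_DIRS is an assumed name; make would populate it from its own
# dependency variables (e.g. the information behind SHLIB_libxenctrl).
import os


def rpath_link_args(env_var="SHLIB_RPATH_DIRS"):
    """Turn a colon-separated directory list into ld rpath-link options."""
    dirs = os.environ.get(env_var, "")
    # Skip empty components so a trailing/leading ":" produces no flag.
    return ["-Wl,-rpath-link=" + d for d in dirs.split(":") if d]
```

setup.py could then pass `extra_link_args=rpath_link_args()` to each Extension, keeping the dependency knowledge in one place (the makefiles).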

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--F4Dl6XKrV7PH8SJF
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl+biGMACgkQ24/THMrX
1ywLpAf+JOAKhIphCQaat0Kg8bFiR/QoCzGlgTAabln1b5D+Gbl8qPO9HW6XuBo1
9wGKElizC/lgIZhFAMw0ulxCrNKn+fpCYQ16H2v/6JaSbXtsSBTEukiRw47AIjBC
crcgkdSYDd26bniXZFTW/kY7kTpeVCYZYTwSHpzwmVnk3KP/BluAq4hER2fPYLna
SUh7Moyh0RJNlGyUjV4D9+00bgjPUF2ppGc6McFjeKukctSV7U3uzhHHLjF/X/ko
NecKn+pdSTe8I1K6ga78JB/yhHygUYqg3EY0GmahgbJTCT2IcIDUcLIwY2ym3cSB
kpI/3E9Ggr5akcNxGhSYTEISPcQbCA==
=2fCX
-----END PGP SIGNATURE-----

--F4Dl6XKrV7PH8SJF--


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 03:58:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 03:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15392.38379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYLYO-0007HM-PJ; Fri, 30 Oct 2020 03:58:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15392.38379; Fri, 30 Oct 2020 03:58:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYLYO-0007HF-Lj; Fri, 30 Oct 2020 03:58:16 +0000
Received: by outflank-mailman (input) for mailman id 15392;
 Fri, 30 Oct 2020 03:58:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYLYN-0007HA-2v
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 03:58:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a122c64f-bed5-40d4-b834-9188d35a01eb;
 Fri, 30 Oct 2020 03:58:11 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYLYJ-0004pA-HC; Fri, 30 Oct 2020 03:58:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYLYJ-0001OF-8E; Fri, 30 Oct 2020 03:58:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYLYJ-0007tt-7O; Fri, 30 Oct 2020 03:58:11 +0000
X-Inumbo-ID: a122c64f-bed5-40d4-b834-9188d35a01eb
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bx6CwzIpIc8l1CQy7eH2CrNDk2ZnNRks80HdsXUR1c4=; b=EQy3SprT1eyA4PTwvYwelzS8Xx
	sqOL38Z7Jgp4Oa9W7SzlfCFxt9D92BoDGFSPC4KTdCjMOUkxEh8WP9CDcAqCAU9KNHPmgit2FCgrN
	5xFgHL/UdNKv02I1tVqgo0Xhr2dD6pIgE6CLcvoJvyfTKoFgj6qdrxQd5ZBHd+InQK9I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156287-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156287: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=bbc48d2bcb9711614fbe751c2c5ae13e172fbca8
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 03:58:11 +0000

flight 156287 qemu-mainline real [real]
flight 156312 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156287/
http://logs.test-lab.xenproject.org/osstest/logs/156312/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 12 windows-install   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                bbc48d2bcb9711614fbe751c2c5ae13e172fbca8
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   70 days
Failing since        152659  2020-08-21 14:07:39 Z   69 days  159 attempts
Testing same since   156287  2020-10-29 05:06:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 53380 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 05:14:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 05:14:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15383.38394 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYMjU-00062h-HY; Fri, 30 Oct 2020 05:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15383.38394; Fri, 30 Oct 2020 05:13:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYMjU-00062a-Dq; Fri, 30 Oct 2020 05:13:48 +0000
Received: by outflank-mailman (input) for mailman id 15383;
 Fri, 30 Oct 2020 02:52:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8EZK=EF=linaro.org=takahiro.akashi@srs-us1.protection.inumbo.net>)
 id 1kYKWK-00020q-Fg
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 02:52:04 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52b729cf-5ef6-411a-8d57-5c07cc8941a1;
 Fri, 30 Oct 2020 02:52:03 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id r186so3991047pgr.0
 for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 19:52:03 -0700 (PDT)
Received: from laputa (p784a66b9.tkyea130.ap.so-net.ne.jp. [120.74.102.185])
 by smtp.gmail.com with ESMTPSA id n2sm1333790pja.41.2020.10.29.19.51.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 29 Oct 2020 19:52:01 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=8EZK=EF=linaro.org=takahiro.akashi@srs-us1.protection.inumbo.net>)
	id 1kYKWK-00020q-Fg
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 02:52:04 +0000
X-Inumbo-ID: 52b729cf-5ef6-411a-8d57-5c07cc8941a1
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 52b729cf-5ef6-411a-8d57-5c07cc8941a1;
	Fri, 30 Oct 2020 02:52:03 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id r186so3991047pgr.0
        for <xen-devel@lists.xenproject.org>; Thu, 29 Oct 2020 19:52:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=LJD6otIgNEYI/ougx2MRjDDBkrRANlT+jo+gqX6ZWwE=;
        b=hIBPkU1qqwn9e03MzkVkCSRKYVXL6Y70XXOnMFtYggnPvybAJw8P7HI/tS8JocGMKq
         evJrVrwx6SpHudhSXE6sOSzWB0+spgHp0rrliagRLSM2k25D8HVUIjcrt2sQ0rlC2VAX
         2swE9krS+kOVI8XaXt6qudhREuY1rDYvbcrGsAyWHy0WUJHtVHtDDoblpV/eZ5Ok+gdk
         rtr9EOtBod2KHFTw4R7L2mqHUick8xZLwAmRdw/FKZKrzspTXkXL1NcGgWsR30NKqrMI
         pCVmZIFv7oSgazPCMNACiDOeJYlm6ftmT90MnKBPBbe3Jf8CPGMGdp6fJS9PbcnoN5UT
         0x7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=LJD6otIgNEYI/ougx2MRjDDBkrRANlT+jo+gqX6ZWwE=;
        b=aESqdwU/LWyn7O+8O73519/cy7QE//C/TETvhzli09hfidIH5NHZa+QhiLE9riuMj5
         B802TtWflvQDIK9rkEMLQIH/E79ZB1yLR/2WQP8eWiTFzTTF6K8RAkLqrRx0xdwUT9M8
         qxfbbzl0jW8LKDgVHTfHaQOPDHbqcfnpx8V92h5TCCQCUCSr+p6QT+8KFiEBJ59RK6H+
         1mUMBJA+oWnQ7vv03qZHQYLfTVmKMwVw2OgzjSbAM1Qn6u4sUdLK/gqXABb44a7NXVbC
         zqFea0Lt0qI4VNBNvPaHicV764rAEnJZXSUlQ3r2fL5heyXxVRoH3qhFFNFyLp39ol9e
         cUYA==
X-Gm-Message-State: AOAM533ydSSbjwLgvnqBi7mq6sLJ6lEL92lxpZVt2cJWpLV/6jZ18Peo
	NKU1igbki8GS60Jojqp/zI9al1d42OEDmA==
X-Google-Smtp-Source: ABdhPJyA/d7MixJvadILEmXf4T7H+WkTx7m1mAzWq6MBL0vd7zLS5b6LB8Q5WRm4Lf+kj7NkM44kfA==
X-Received: by 2002:a63:609:: with SMTP id 9mr271011pgg.227.1604026322402;
        Thu, 29 Oct 2020 19:52:02 -0700 (PDT)
Received: from laputa (p784a66b9.tkyea130.ap.so-net.ne.jp. [120.74.102.185])
        by smtp.gmail.com with ESMTPSA id n2sm1333790pja.41.2020.10.29.19.51.59
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Thu, 29 Oct 2020 19:52:01 -0700 (PDT)
Date: Fri, 30 Oct 2020 11:51:57 +0900
From: Takahiro Akashi <takahiro.akashi@linaro.org>
To: Stefano Stabellini <stefano.stabellini@xilinx.com>
Cc: Alex Bennée <alex.bennee@linaro.org>,
	Masami Hiramatsu <masami.hiramatsu@linaro.org>,
	ian.jackson@eu.citrix.com, wl@xen.org, anthony.perard@citrix.com,
	xen-devel@lists.xenproject.org
Subject: Re: BUG: libxl vuart build order
Message-ID: <20201030025157.GA18567@laputa>
References: <CAB5YjtCwbvYMVg-9YXjSFtC8KvjkJuYhJFSCHrJaRUKfg4NHYA@mail.gmail.com>
 <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s>
 <20201027000214.GA14449@laputa>
 <20201028014105.GA11856@laputa>
 <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s>
 <20201029114705.GA291577@laputa>
 <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s>

Hi Stefano,

On Thu, Oct 29, 2020 at 07:03:28PM -0700, Stefano Stabellini wrote:
> + xen-devel and libxl maintainers
> 
> In short, there is a regression in libxl with the ARM vuart introduced
> by moving ARM guests to the PVH build.
> 
> 
> On Thu, 29 Oct 2020, Takahiro Akashi wrote:
> > On Wed, Oct 28, 2020 at 02:44:16PM -0700, Stefano Stabellini wrote:
> > > On Wed, 28 Oct 2020, Takahiro Akashi wrote:
> > > > On Tue, Oct 27, 2020 at 09:02:14AM +0900, Takahiro Akashi wrote:
> > > > > On Mon, Oct 26, 2020 at 04:37:30PM -0700, Stefano Stabellini wrote:
> > > > > > 
> > > > > > On Mon, 26 Oct 2020, Takahiro Akashi wrote:
> > > > > > > Stefano,
> > > > > > > 
> > > > > > > # I'm afraid that I have already bothered you with a lot of questions.
> > > > > > > 
> > > > > > > When I looked at Xen's vpl011 implementation, I found that the
> > > > > > > CR (and LCR_H) registers are not supported (a trap may cause a data abort).
> > > > > > > On the other hand, Linux's pl011 driver, for example, surely
> > > > > > > accesses the CR (and LCR_H) registers.
> > > > > > > So I guess that Linux won't be able to use the pl011 on a Xen guest VM
> > > > > > > if vuart = "sbsa_uart".
> > > > > > > 
> > > > > > > Is this a known issue or do I miss anything?
> > > > > > 
> > > > > > Linux should definitely be able to use it, and in fact, I am using it
> > > > > > with Linux in my test environment.
> > > > > > 
> > > > > > I think the confusion comes from the name "vpl011": it is in fact not a
> > > > > > full PL011 UART, but an SBSA UART.
> > > > > 
> > > > > Yeah, I have noticed it.
> > > > > 
> > > > > > SBSA UART only implements a subset of
> > > > > > the PL011 registers. The compatible string is "arm,sbsa-uart", also see
> > > > > > drivers/tty/serial/amba-pl011.c:sbsa_uart_probe.
> > > > > 
> > > > > Looking closely into the details of the implementation, I found
> > > > > that all accesses to unimplemented registers, including
> > > > > CR, are deliberately avoided in the SBSA part of the Linux driver.
> > > > 
> > > > So I'm now trying to implement an "sbsa-uart" driver in U-Boot
> > > > by modifying the existing pl011 driver.
> > > > (Please note that the current Xen-ized U-Boot uses a para-virtualized
> > > > console, i.e. with HVM_PARAM_CONSOLE_PFN.)
> > > > 
> > > > So far, all my attempts have failed.
> > > > 
> > > > There are a couple of problems, and one of them is how we can
> > > > access the vpl011 port (from dom0).
> > > > What I did is:
> > > > - modify U-Boot's pl011 driver
> > > >   (I'm sure that the driver correctly handles a vpl011 device
> > > >   with regard to accessing the proper set of registers.)
> > > > - start the U-Boot guest with "vuart=sbsa_uart" via
> > > >     xl create uboot.cfg -c
> > > > 
> > > > Then I saw almost nothing on the screen.
> > > > Digging into the vpl011 implementation, I found that all characters
> > > > written to the DR register are directed to a "backend domain" if a guest
> > > > VM is launched by the xl command.
> > > > (In the dom0less case, the backend seems to be Xen itself.)
> > > > 
> > > > As a silly experiment, I modified domain_vpl011_init() to always create
> > > > a vpl011 device with "backend_in_domain == false".
> > > > Then I could see more boot messages from U-Boot, but it still fails
> > > > to work as a console; I mean, we lose all output at some point
> > > > and cannot type any keys (at a command prompt).
> > > > (This may be another problem on the U-Boot side.)
> > > > 
> > > > My first question here is: how can we configure and connect a console
> > > > in this case?
> > > > Should "xl create -c" or "xl console -t vuart" simply work?
> > > 
> > > "xl create -c" creates a guest and connects to the primary console, which
> > > is the PV console (i.e. HVM_PARAM_CONSOLE_PFN).
> > 
> > So in the case of vuart, it (the console) doesn't work?
> > (Apparently, "xl create" doesn't take a '-t' option.)
> > 
> > > To connect to the emulated sbsa uart you need to pass -t vuart. So yes,
> > > "xl console -t vuart domain_name" should get you access to the emulated
> > > sbsa uart. The sbsa uart can also be exposed to dom0less guests; you get
> > > their output by using CTRL-AAA to switch to the right domU console.
> > > 
> > > You can add printks to xen/arch/arm/vpl011.c in Xen to see what's
> > > happening on the Xen side. vpl011.c is the emulator.
> > 
> > I'm sure that writes to the "REG_DR" register are caught by Xen.
> > What I don't understand is:
> > if backend_in_domain -> no output
> > if !backend_in_domain -> output is visible
> > 
> > (As you know, if a guest is created by the xl command, backend_in_domain
> > is forcibly set to true.)
> > 
> > I looked into xenstore and found that "vuart/0/tty" does not exist,
> > but "console/tty" does.
> > How can this happen for vuart?
> > (I clearly specified vuart = "sbsa_uart" in the Xen config.)
> 
> It looks like we have a bug :-(
> 
> I managed to reproduce the issue. The problem is a race in libxl.
> 
> tools/libxc/xc_dom_arm.c:alloc_magic_pages is called first, setting
> dom->vuart_gfn.  Then, libxl__build_hvm should be setting
> state->vuart_gfn to dom->vuart_gfn (like libxl__build_pv does) but it
> doesn't.

Thank you for the patch.
I confirmed that the sbsa-uart driver in U-Boot now works.

=== after "xl console -t vuart" ===
U-Boot 2020.10-00777-g10cf956a26ba (Oct 29 2020 - 19:31:29 +0900) xenguest

Xen virtual CPU
Model: XENVM-4.15
DRAM:  128 MiB

In:    sbsa-pl011
Out:   sbsa-pl011
Err:   sbsa-pl011
xenguest# dm tree
 Class     Index  Probed  Driver                Name
-----------------------------------------------------------
 root          0  [ + ]   root_driver           root_driver
 firmware      0  [   ]   psci                  |-- psci
 serial        0  [ + ]   serial_pl01x          |-- sbsa-pl011
 pvblock       0  [   ]   pvblock               `-- pvblock
===

If possible, I hope that the "xl create -c" command will accept a "-t vuart"
option (or automatically select the uart type from the config).

-Takahiro Akashi


> 
> ---
> 
> libxl: set vuart_gfn in libxl__build_hvm
> 
> Setting vuart_gfn was missed when switching ARM guests to the PVH build.
> Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
> dom->vuart_gfn.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> index f8661e90d4..36fe8915e7 100644
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -1184,6 +1184,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
>          LOG(ERROR, "hvm build set params failed");
>          goto out;
>      }
> +    state->vuart_gfn = dom->vuart_gfn;
>  
>      rc = hvm_build_set_xs_values(gc, domid, dom, info);
>      if (rc != 0) {
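[Editor's note] The thread above hinges on the SBSA Generic UART being a strict subset of the PL011 register map: a guest driver must touch only the registers the emulator implements, because (as noted earlier in the thread) a trap on an unimplemented register such as CR or LCR_H can cause a data abort. As a rough illustration, not Xen's actual vpl011.c logic, here is a sketch of the kind of predicate a ported driver could use to gate accesses. The register offsets are the standard PL011 ones; the exact subset shown is an assumption for illustration and may differ from what Xen's vpl011.c actually handles, so check vpl011_mmio_read/write there before relying on it:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* PL011 register offsets. Only some of these are part of the SBSA
 * Generic UART; CR and LCR_H in particular are not, which is why the
 * Linux sbsa_uart path mentioned above avoids them entirely. */
#define UART_DR    0x000u  /* data               - in SBSA subset */
#define UART_FR    0x018u  /* flags              - in SBSA subset */
#define UART_LCR_H 0x02cu  /* line control       - NOT in SBSA subset */
#define UART_CR    0x030u  /* control            - NOT in SBSA subset */
#define UART_IMSC  0x038u  /* irq mask           - in SBSA subset */
#define UART_RIS   0x03cu  /* raw irq status     - in SBSA subset */
#define UART_MIS   0x040u  /* masked irq status  - in SBSA subset */
#define UART_ICR   0x044u  /* irq clear          - in SBSA subset */

/* Hypothetical helper: returns true if a register offset belongs to the
 * minimal SBSA subset assumed here. A driver ported from a full PL011
 * would guard its baud/format programming (CR, LCR_H) with a check like
 * this instead of writing those registers unconditionally. */
static bool sbsa_uart_reg_implemented(uint32_t offset)
{
    switch (offset) {
    case UART_DR:
    case UART_FR:
    case UART_IMSC:
    case UART_RIS:
    case UART_MIS:
    case UART_ICR:
        return true;
    default:
        return false;   /* e.g. UART_CR, UART_LCR_H: skip, don't trap */
    }
}
```

This mirrors the behaviour described above for Linux's sbsa-uart probe path: the driver skips baud-rate and line-format programming and touches only the data, flag, and interrupt registers.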



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 07:11:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 07:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15417.38406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYOZR-0007kI-N0; Fri, 30 Oct 2020 07:11:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15417.38406; Fri, 30 Oct 2020 07:11:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYOZR-0007kB-K0; Fri, 30 Oct 2020 07:11:33 +0000
Received: by outflank-mailman (input) for mailman id 15417;
 Fri, 30 Oct 2020 07:11:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fAey=EF=kernel.org=mchehab+huawei@srs-us1.protection.inumbo.net>)
 id 1kYOZP-0007k2-Pf
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 07:11:31 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bfbac23-7c3c-4ceb-bd93-490a678f20ed;
 Fri, 30 Oct 2020 07:11:30 +0000 (UTC)
Received: from coco.lan (ip5f5ad5bb.dynamic.kabel-deutschland.de
 [95.90.213.187])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A5A6620729;
 Fri, 30 Oct 2020 07:11:12 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=fAey=EF=kernel.org=mchehab+huawei@srs-us1.protection.inumbo.net>)
	id 1kYOZP-0007k2-Pf
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 07:11:31 +0000
X-Inumbo-ID: 6bfbac23-7c3c-4ceb-bd93-490a678f20ed
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 6bfbac23-7c3c-4ceb-bd93-490a678f20ed;
	Fri, 30 Oct 2020 07:11:30 +0000 (UTC)
Received: from coco.lan (ip5f5ad5bb.dynamic.kabel-deutschland.de [95.90.213.187])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id A5A6620729;
	Fri, 30 Oct 2020 07:11:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604041889;
	bh=HMfalVRoys07ml7VPXGsddZqQtyxcaotzZe94CKTB5U=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=R7y6Bkg5GrGcYmjgIayZshuogEXNtD+N43HIj2ydphSqWHTpEKjdj+tkINZ9lO4pX
	 +EzN06y8xtiLvvAwAlP7B2jMtZP7Xz5TMkR6xCd/tfAA7bm7KHEvTD9Bw/cC31SHKp
	 JuVmtl721avx2j0rGBNlzZnqYSENceoWWYTIE5Wc=
Date: Fri, 30 Oct 2020 08:11:09 +0100
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Jonathan Cameron <jic23@kernel.org>
Cc: Linux Doc Mailing List <linux-doc@vger.kernel.org>, Greg Kroah-Hartman
 <gregkh@linuxfoundation.org>, Mauro Carvalho Chehab
 <mchehab+samsung@kernel.org>, "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
 "Jason A. Donenfeld" <Jason@zx2c4.com>, Javier =?UTF-8?B?R29uesOhbGV6?=
 <javier@javigon.com>, "Jonathan Corbet" <corbet@lwn.net>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "Rafael J. Wysocki"
 <rjw@rjwysocki.net>, Alexander Shishkin
 <alexander.shishkin@linux.intel.com>, Alexandre Belloni
 <alexandre.belloni@bootlin.com>, Alexandre Torgue
 <alexandre.torgue@st.com>, Andrew Donnellan <ajd@linux.ibm.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Baolin Wang
 <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Bruno Meneguele
 <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy
 <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>, Enric Balletbo i
 Serra <enric.balletbo@collabora.com>, Fabrice Gasnier
 <fabrice.gasnier@st.com>, Felipe Balbi <balbi@kernel.org>, Frederic Barrat
 <fbarrat@linux.ibm.com>, Guenter Roeck <groeck@chromium.org>, Hanjun Guo
 <guohanjun@huawei.com>, Heikki Krogerus <heikki.krogerus@linux.intel.com>,
 Jens Axboe <axboe@kernel.dk>, Johannes Thumshirn
 <johannes.thumshirn@wdc.com>, Juergen Gross <jgross@suse.com>, Konstantin
 Khlebnikov <koct9i@gmail.com>, Kranthi Kuntala <kranthi.kuntala@intel.com>,
 Lakshmi Ramasubramanian <nramas@linux.microsoft.com>, Lars-Peter Clausen
 <lars@metafoo.de>, Len Brown <lenb@kernel.org>, Leonid Maksymchuk
 <leonmaxx@gmail.com>, Ludovic Desroches <ludovic.desroches@microchip.com>,
 Mario Limonciello <mario.limonciello@dell.com>, Maxime Coquelin
 <mcoquelin.stm32@gmail.com>, Michael Ellerman <mpe@ellerman.id.au>, Mika
 Westerberg <mika.westerberg@linux.intel.com>, Mike Kravetz
 <mike.kravetz@oracle.com>, Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain
 <nayna@linux.ibm.com>, Nicolas Ferre <nicolas.ferre@microchip.com>, Niklas
 Cassel <niklas.cassel@wdc.com>, Oleh Kravchenko <oleg@kaa.org.ua>, Orson
 Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>, Pawan Gupta
 <pawan.kumar.gupta@linux.intel.com>, Peter Meerwald-Stadler
 <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>, Petr Mladek
 <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>, Richard
 Cochran <richardcochran@gmail.com>, Sebastian Reichel <sre@kernel.org>,
 Sergey Senozhatsky <sergey.senozhatsky@gmail.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Thomas
 Gleixner <tglx@linutronix.de>, Vineela Tummalapalli
 <vineela.tummalapalli@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 linux-acpi@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
 linux-iio@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-mm@kvack.org, linux-pm@vger.kernel.org,
 linux-stm32@st-md-mailman.stormreply.com, linux-usb@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, netdev@vger.kernel.org,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH 20/33] docs: ABI: testing: make the files compatible
 with ReST output
Message-ID: <20201030081109.5f7bbdaf@coco.lan>
In-Reply-To: <20201029144912.3c0a239b@archlinux>
References: <cover.1603893146.git.mchehab+huawei@kernel.org>
	<4ebaaa0320101479e392ce2db4b62e24fdf15ef1.1603893146.git.mchehab+huawei@kernel.org>
	<20201029144912.3c0a239b@archlinux>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.32; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Thu, 29 Oct 2020 14:49:12 +0000,
Jonathan Cameron <jic23@kernel.org> wrote:

> On Wed, 28 Oct 2020 15:23:18 +0100
> Mauro Carvalho Chehab <mchehab+huawei@kernel.org> wrote:
> 
> > From: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
> > 
> > Some files over there won't parse well by Sphinx.
> > 
> > Fix them.
> > 
> > Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
> > Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>  
> 
> Query below...  I'm going to guess a rebase issue?

Yes. I sent this series about 1.5 years ago. At that time, it
ended up not being merged, as there were too many docs patches
floating around.

The second SoB is not there in my tree. It was added by
git send-email ;-)

Anyway, fixed.

Thanks,
Mauro


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 07:29:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 07:29:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15423.38418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYOqy-0000Om-8n; Fri, 30 Oct 2020 07:29:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15423.38418; Fri, 30 Oct 2020 07:29:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYOqy-0000Of-56; Fri, 30 Oct 2020 07:29:40 +0000
Received: by outflank-mailman (input) for mailman id 15423;
 Fri, 30 Oct 2020 07:29:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AXzT=EF=gmail.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1kYOqx-0000Oa-Cg
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 07:29:39 +0000
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35735104-b085-42c1-b633-aa28adafaa88;
 Fri, 30 Oct 2020 07:29:38 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id dg9so5534092edb.12
 for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 00:29:38 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=AXzT=EF=gmail.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
	id 1kYOqx-0000Oa-Cg
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 07:29:39 +0000
X-Inumbo-ID: 35735104-b085-42c1-b633-aa28adafaa88
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 35735104-b085-42c1-b633-aa28adafaa88;
	Fri, 30 Oct 2020 07:29:38 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id dg9so5534092edb.12
        for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 00:29:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=G5Y2ybGQFFg/Q1IWxTbgDWrthNCdA5cUZUHcaXIADVo=;
        b=juU6YdlGOZQXGNSlK6wUIxSw0U6XVMJQsmW5ibCAI622UsHp5E6SjA0yRhdlz6lAIP
         evU517XRKe2S5Fpk9P4VTTju9LQenjJZCPfj3qXLMcjWjYOJTEqL1xZ6oQPcG38NwdHc
         Dh5K7GdPUEmgCh9cOtdLqayGtqbpm5xHi1sS6WWttMqRk+PBBF2s3rw2tQEZw16vsGCs
         PtSBs1A4BvvkqG+sdoFmuTaTPoAchM4twXNwVJQsrRmNuIFvnCTPViRnYIK4Dv8HbhPN
         fmBS+HcBbGvyYgBJNzao06S9VzmPgMFMyTJ7FkhjmEmwWtArTT+zlkYGm5wS8/qIPiCp
         lF1g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=G5Y2ybGQFFg/Q1IWxTbgDWrthNCdA5cUZUHcaXIADVo=;
        b=FrcY8tPTKIAYY94NT/upTfHXPhwYbNZSsyaBjC4P0/8te24Hcy85vtGcGinFLhfc+W
         F/BsL8tWfSe3xmT/3sBi7Ou949+17YLuJbYnnNluhCwn8XbfurX+I2aG11dCRQhqAffv
         VAlknHmRQHZ6lZVCtA6qwCvxPqlKeJLALBGBQ/RrhaomlrVaAf9TUqL8yaLxRj1Gz2sB
         /ocxm1Cm7gdwph5MSLF06Hnxwsb3/bPrrl2BgG8ia2L6lBCm3VdUW+csUcX7P26U3wRG
         e202E6JiIQwIckVFtJZV+XmGaog8C/KlHUi7gg+UIfXdSZOX8t/RuuS5wN7VqSBr4Jtp
         DMOA==
X-Gm-Message-State: AOAM5314CqCxlfQd0TgsxhfRAOrIJeMUMjNbERTlaqz81HWCYvb7rSv/
	faPOy0I8V6iNJkLM5gJyrhxhNXC3ochLewlcfdA=
X-Google-Smtp-Source: ABdhPJzgtC+eSdWPUt4jhEkUAj9Oql/sHSTe5c6CKejoOmRHDQyaVOsgc0U8MOpXF2K9JdY6i3DC7Y1XTKxw/FXy4f4=
X-Received: by 2002:a50:a441:: with SMTP id v1mr905994edb.30.1604042977710;
 Fri, 30 Oct 2020 00:29:37 -0700 (PDT)
MIME-Version: 1.0
References: <20201029220246.472693-1-ehabkost@redhat.com> <20201029220246.472693-10-ehabkost@redhat.com>
In-Reply-To: <20201029220246.472693-10-ehabkost@redhat.com>
From: Marc-André Lureau <marcandre.lureau@gmail.com>
Date: Fri, 30 Oct 2020 11:29:25 +0400
Message-ID: <CAJ+F1CKqo3D20=qSAovVKWCGz4otctaWnGC0O5p-Z1ZG9Pj_Mw@mail.gmail.com>
Subject: Re: [PATCH 09/36] qdev: Make qdev_get_prop_ptr() get Object* arg
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: QEMU <qemu-devel@nongnu.org>, Matthew Rosato <mjrosato@linux.ibm.com>, 
	Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	"open list:Block layer core" <qemu-block@nongnu.org>, Stefan Berger <stefanb@linux.vnet.ibm.com>, 
	David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, Philippe Mathieu-Daudé <philmd@redhat.com>, 
	Thomas Huth <thuth@redhat.com>, Alex Williamson <alex.williamson@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, John Snow <jsnow@redhat.com>, 
	Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>, 
	"Daniel P. Berrange" <berrange@redhat.com>, Cornelia Huck <cohuck@redhat.com>, 
	Qemu-s390x list <qemu-s390x@nongnu.org>, Max Reitz <mreitz@redhat.com>, 
	Igor Mammedov <imammedo@redhat.com>
Content-Type: multipart/alternative; boundary="000000000000e41a3005b2de5b99"

--000000000000e41a3005b2de5b99
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Fri, Oct 30, 2020 at 2:07 AM Eduardo Habkost <ehabkost@redhat.com> wrote:

> Make the code more generic and not specific to TYPE_DEVICE.
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
>

Nice cleanup! But it fails to build atm:

../hw/block/xen-block.c:403:9: error: ‘dev’ undeclared (first use in this
function); did you mean ‘vdev’?
  403 |     if (dev->realized) {

-- 
Marc-André Lureau


--000000000000e41a3005b2de5b99--


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 07:34:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 07:34:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15428.38430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYOvY-0001Gm-SC; Fri, 30 Oct 2020 07:34:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15428.38430; Fri, 30 Oct 2020 07:34:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYOvY-0001Gf-OL; Fri, 30 Oct 2020 07:34:24 +0000
Received: by outflank-mailman (input) for mailman id 15428;
 Fri, 30 Oct 2020 07:34:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AXzT=EF=gmail.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1kYOvX-0001Ga-0W
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 07:34:23 +0000
Received: from mail-ed1-x543.google.com (unknown [2a00:1450:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 879efc71-4952-4112-8859-2501a4f17b6d;
 Fri, 30 Oct 2020 07:34:22 +0000 (UTC)
Received: by mail-ed1-x543.google.com with SMTP id k9so5614658edo.5
 for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 00:34:22 -0700 (PDT)
X-Inumbo-ID: 879efc71-4952-4112-8859-2501a4f17b6d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=QecOoWQ/HZb8eLncR1EKgGqUdTzbd3QYBazcsejlCn8=;
        b=ujrjISZg77YgswczqnGmqLCmD8a6h4MPRyt1NHmMmnkpJXhbeT0O+rgJX3GZi8Si+G
         rDQQy2PQOnNsF3XnHMVYDOXCJuPB6/0XoU81UkR6yMofcbFZcrJX5FvkALRY0hrLJkDh
         248wTeLxRhp6GMHvTgkum4wjs1iTw4rKjQ8zRRIsL07x7XxJF5bZomRWcCdRhSyzh4Eh
         OUGvXipVheIUlmdsYhB8w29X+83G04YAkrQuHd7PpDPMbls4tSr6AF4Eb8Uvebwf1Rtu
         xcFq8d7jGjV/OA7eeY0MnLVz/t3jEs42qWSICjUZYIAK8btREG2Qj3lmlqizcKYz4yDb
         fdJg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=QecOoWQ/HZb8eLncR1EKgGqUdTzbd3QYBazcsejlCn8=;
        b=mFMrKMXrAu1b8vARL7vveGhLeH+Sw+YAtFUoAnijs7lBahYv4Rs42f0QkCNLxs5l0l
         T+rNA10G/GMv5Q/XZ0zcCvYpQ8ithW1KD8qP7u94UfjQU4E4MNABb/um1Dbxb8GccwaP
         657V8u/DgYbGCmfhvrk7hkkhM9aGKiR8UHSDqJChLYYgKStKBpY2s+noWjbzwi31sKJ8
         gJ8edkECiD+sS7YwnNUAszOGsSmkyN4fKGTs/TIqitAmXXUCcEDCc9/kx33qfgFLsHjq
         j3/GkSIJhTNgewCSyK47x+jaht72uzJyA9ETGuxkWqt8XOQeOvvnbyDBCmdSyur8FRpJ
         UCbQ==
X-Gm-Message-State: AOAM531nJ5WMzfMxe69vgNQqH4hFurD9hYfPI0svHyQg3SM/ue0JcIrw
	wM20S+7sILHQ2hzhaM6SkzWS+sENb96rbzl9rSA=
X-Google-Smtp-Source: ABdhPJwHdlUXIcvql7BaxkEQ5qq2IjRF1QtV14hIqZo+aMFSkt+speVDDuk/H/QhE4pezbuAlNycJUmN/TXFmcdeYIA=
X-Received: by 2002:a05:6402:6ca:: with SMTP id n10mr927720edy.314.1604043261375;
 Fri, 30 Oct 2020 00:34:21 -0700 (PDT)
MIME-Version: 1.0
References: <20201029220246.472693-1-ehabkost@redhat.com> <20201029220246.472693-10-ehabkost@redhat.com>
 <CAJ+F1CKqo3D20=qSAovVKWCGz4otctaWnGC0O5p-Z1ZG9Pj_Mw@mail.gmail.com>
In-Reply-To: <CAJ+F1CKqo3D20=qSAovVKWCGz4otctaWnGC0O5p-Z1ZG9Pj_Mw@mail.gmail.com>
From: Marc-André Lureau <marcandre.lureau@gmail.com>
Date: Fri, 30 Oct 2020 11:34:09 +0400
Message-ID: <CAJ+F1CKLtKeoP43OV5dpbHLFNO8OnMdsjD=atDo=pjqvFkX2fg@mail.gmail.com>
Subject: Re: [PATCH 09/36] qdev: Make qdev_get_prop_ptr() get Object* arg
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: QEMU <qemu-devel@nongnu.org>, Matthew Rosato <mjrosato@linux.ibm.com>, 
	Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	"open list:Block layer core" <qemu-block@nongnu.org>, Stefan Berger <stefanb@linux.vnet.ibm.com>, 
	David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, Philippe Mathieu-Daudé <philmd@redhat.com>, 
	Thomas Huth <thuth@redhat.com>, Alex Williamson <alex.williamson@redhat.com>, 
	Paolo Bonzini <pbonzini@redhat.com>, John Snow <jsnow@redhat.com>, 
	Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>, 
	"Daniel P. Berrange" <berrange@redhat.com>, Cornelia Huck <cohuck@redhat.com>, 
	Qemu-s390x list <qemu-s390x@nongnu.org>, Max Reitz <mreitz@redhat.com>, 
	Igor Mammedov <imammedo@redhat.com>
Content-Type: multipart/alternative; boundary="000000000000cc7dc705b2de6c6f"

--000000000000cc7dc705b2de6c6f
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Fri, Oct 30, 2020 at 11:29 AM Marc-André Lureau <
marcandre.lureau@gmail.com> wrote:

>
>
> On Fri, Oct 30, 2020 at 2:07 AM Eduardo Habkost <ehabkost@redhat.com>
> wrote:
>
>> Make the code more generic and not specific to TYPE_DEVICE.
>>
>> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
>>
>
> Nice cleanup! But it fails to build atm:
>
> ../hw/block/xen-block.c:403:9: error: ‘dev’ undeclared (first use in this
> function); did you mean ‘vdev’?
>   403 |     if (dev->realized) {
>
>
That seems to be the only issue though, so with that fixed:
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>

-- 
Marc-André Lureau


--000000000000cc7dc705b2de6c6f--


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 07:41:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 07:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15435.38454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYP26-0002Bc-5F; Fri, 30 Oct 2020 07:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15435.38454; Fri, 30 Oct 2020 07:41:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYP26-0002BT-0V; Fri, 30 Oct 2020 07:41:10 +0000
Received: by outflank-mailman (input) for mailman id 15435;
 Fri, 30 Oct 2020 07:41:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fq9/=EF=kernel.org=mchehab@srs-us1.protection.inumbo.net>)
 id 1kYP24-0002B2-O1
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 07:41:08 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac250ae5-1603-4cc5-b164-0bbf0667e983;
 Fri, 30 Oct 2020 07:41:07 +0000 (UTC)
Received: from mail.kernel.org (ip5f5ad5bb.dynamic.kabel-deutschland.de
 [95.90.213.187])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9622A223C6;
 Fri, 30 Oct 2020 07:41:04 +0000 (UTC)
Received: from mchehab by mail.kernel.org with local (Exim 4.94)
 (envelope-from <mchehab@kernel.org>)
 id 1kYP1w-004OgA-VZ; Fri, 30 Oct 2020 08:41:01 +0100
X-Inumbo-ID: ac250ae5-1603-4cc5-b164-0bbf0667e983
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604043665;
	bh=7pXZkTx4NQUNb+rBYGAAAZxDl0vIpOQRcftBUxab/t4=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=2W3casn4acTL+duXr/V2NF+/Ei4f9TGxFjqsunwyOhdcMayisExJRv93gkBJ4jluf
	 t0NPNUZ3Kn68Yg5X16FPV0Rzun6CEnefnbPIgj4Je+KrGGb8uUmjZY8Nc+eJxiBjIe
	 oD7u562tIs9sWSJziBnXSMRz1ySSc/oJV3BbO+Mk=
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Linux Doc Mailing List <linux-doc@vger.kernel.org>
Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>,
	"Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
	"Jason A. Donenfeld" <Jason@zx2c4.com>,
	Javier González <javier@javigon.com>,
	"Jonathan Corbet" <corbet@lwn.net>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Alexandre Belloni <alexandre.belloni@bootlin.com>,
	Alexandre Torgue <alexandre.torgue@st.com>,
	Andrew Donnellan <ajd@linux.ibm.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Baolin Wang <baolin.wang7@gmail.com>,
	Benson Leung <bleung@chromium.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Bruno Meneguele <bmeneg@redhat.com>,
	Chunyan Zhang <zhang.lyra@gmail.com>,
	Dan Murphy <dmurphy@ti.com>,
	Dan Williams <dan.j.williams@intel.com>,
	Enric Balletbo i Serra <enric.balletbo@collabora.com>,
	Fabrice Gasnier <fabrice.gasnier@st.com>,
	Felipe Balbi <balbi@kernel.org>,
	Frederic Barrat <fbarrat@linux.ibm.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Guenter Roeck <groeck@chromium.org>,
	Hanjun Guo <guohanjun@huawei.com>,
	Heikki Krogerus <heikki.krogerus@linux.intel.com>,
	Jens Axboe <axboe@kernel.dk>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>,
	Jonathan Cameron <jic23@kernel.org>,
	Juergen Gross <jgross@suse.com>,
	Konstantin Khlebnikov <koct9i@gmail.com>,
	Kranthi Kuntala <kranthi.kuntala@intel.com>,
	Lakshmi Ramasubramanian <nramas@linux.microsoft.com>,
	Lars-Peter Clausen <lars@metafoo.de>,
	Len Brown <lenb@kernel.org>,
	Leonid Maksymchuk <leonmaxx@gmail.com>,
	Ludovic Desroches <ludovic.desroches@microchip.com>,
	Mario Limonciello <mario.limonciello@dell.com>,
	Mark Gross <mgross@linux.intel.com>,
	Maxime Coquelin <mcoquelin.stm32@gmail.com>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Mika Westerberg <mika.westerberg@linux.intel.com>,
	Mike Kravetz <mike.kravetz@oracle.com>,
	Mimi Zohar <zohar@linux.ibm.com>,
	Nayna Jain <nayna@linux.ibm.com>,
	Nicolas Ferre <nicolas.ferre@microchip.com>,
	Niklas Cassel <niklas.cassel@wdc.com>,
	Oded Gabbay <oded.gabbay@gmail.com>,
	Oleh Kravchenko <oleg@kaa.org.ua>,
	Orson Zhai <orsonzhai@gmail.com>,
	Pavel Machek <pavel@ucw.cz>,
	Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
	Peter Meerwald-Stadler <pmeerw@pmeerw.net>,
	Peter Rosin <peda@axentia.se>,
	Petr Mladek <pmladek@suse.com>,
	Philippe Bergheaud <felix@linux.ibm.com>,
	Richard Cochran <richardcochran@gmail.com>,
	Sebastian Reichel <sre@kernel.org>,
	Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Tom Rix <trix@redhat.com>,
	Vaibhav Jain <vaibhav@linux.ibm.com>,
	Vineela Tummalapalli <vineela.tummalapalli@intel.com>,
	Vishal Verma <vishal.l.verma@intel.com>,
	linux-acpi@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-iio@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-mm@kvack.org,
	linux-pm@vger.kernel.org,
	linux-stm32@st-md-mailman.stormreply.com,
	linux-usb@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	netdev@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>
Subject: [PATCH v2 20/39] docs: ABI: testing: make the files compatible with ReST output
Date: Fri, 30 Oct 2020 08:40:39 +0100
Message-Id: <58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <cover.1604042072.git.mchehab+huawei@kernel.org>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Sender: Mauro Carvalho Chehab <mchehab@kernel.org>

Some of the files there won't parse well with Sphinx.

Fix them.

Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> # for IIO
Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
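[Editor's note] For readers unfamiliar with the ReST conventions this patch applies throughout: a literal block is introduced by a trailing `::` and indented, and a "simple table" is framed by rows of `=` signs aligned with the column widths. An illustrative fragment (not taken from any real ABI file; the attribute names are made up):

```rst
To mount configfs in /config use::

  # mount -t configfs none /config/

The attributes:

========  ======================================
attr_a    what writing attr_a does
attr_b    what reading attr_b returns
========  ======================================
```

The `=` border rows are what turn the ad-hoc `name - description` lists in the old files into tables that Sphinx can render.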
---
 .../ABI/testing/configfs-spear-pcie-gadget    |  36 +--
 Documentation/ABI/testing/configfs-usb-gadget |  83 +++---
 .../ABI/testing/configfs-usb-gadget-hid       |  10 +-
 .../ABI/testing/configfs-usb-gadget-rndis     |  16 +-
 .../ABI/testing/configfs-usb-gadget-uac1      |  18 +-
 .../ABI/testing/configfs-usb-gadget-uvc       | 220 +++++++++-------
 Documentation/ABI/testing/debugfs-ec          |  11 +-
 Documentation/ABI/testing/debugfs-pktcdvd     |  11 +-
 Documentation/ABI/testing/dev-kmsg            |  27 +-
 Documentation/ABI/testing/evm                 |  17 +-
 Documentation/ABI/testing/ima_policy          |  30 ++-
 Documentation/ABI/testing/procfs-diskstats    |  40 +--
 Documentation/ABI/testing/sysfs-block         |  38 +--
 Documentation/ABI/testing/sysfs-block-device  |   2 +
 Documentation/ABI/testing/sysfs-bus-acpi      |  18 +-
 .../sysfs-bus-event_source-devices-format     |   3 +-
 .../ABI/testing/sysfs-bus-i2c-devices-pca954x |  27 +-
 Documentation/ABI/testing/sysfs-bus-iio       |  11 +
 .../sysfs-bus-iio-adc-envelope-detector       |   5 +-
 .../ABI/testing/sysfs-bus-iio-cros-ec         |   2 +-
 .../ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32 |   8 +-
 .../ABI/testing/sysfs-bus-iio-lptimer-stm32   |  29 ++-
 .../sysfs-bus-iio-magnetometer-hmc5843        |  19 +-
 .../sysfs-bus-iio-temperature-max31856        |  19 +-
 .../ABI/testing/sysfs-bus-iio-timer-stm32     | 137 ++++++----
 .../testing/sysfs-bus-intel_th-devices-msc    |   4 +
 Documentation/ABI/testing/sysfs-bus-nfit      |   2 +-
 .../testing/sysfs-bus-pci-devices-aer_stats   | 119 +++++----
 Documentation/ABI/testing/sysfs-bus-rapidio   |  23 +-
 .../ABI/testing/sysfs-bus-thunderbolt         |  40 +--
 Documentation/ABI/testing/sysfs-bus-usb       |  30 ++-
 .../testing/sysfs-bus-usb-devices-usbsevseg   |   7 +-
 Documentation/ABI/testing/sysfs-bus-vfio-mdev |  10 +-
 Documentation/ABI/testing/sysfs-class-cxl     |  15 +-
 Documentation/ABI/testing/sysfs-class-led     |   2 +-
 .../testing/sysfs-class-led-driver-el15203000 | 229 ++++++++---------
 .../ABI/testing/sysfs-class-led-driver-sc27xx |   4 +-
 Documentation/ABI/testing/sysfs-class-mic     |  52 ++--
 Documentation/ABI/testing/sysfs-class-ocxl    |   3 +
 Documentation/ABI/testing/sysfs-class-power   |  73 +++++-
 .../ABI/testing/sysfs-class-power-twl4030     |  33 +--
 Documentation/ABI/testing/sysfs-class-rc      |  30 ++-
 .../ABI/testing/sysfs-class-scsi_host         |   7 +-
 Documentation/ABI/testing/sysfs-class-typec   |  12 +-
 .../testing/sysfs-devices-platform-ACPI-TAD   |   4 +
 .../ABI/testing/sysfs-devices-platform-docg3  |  10 +-
 .../sysfs-devices-platform-sh_mobile_lcdc_fb  |   8 +-
 .../ABI/testing/sysfs-devices-system-cpu      |  99 +++++---
 .../ABI/testing/sysfs-devices-system-ibm-rtl  |   6 +-
 .../testing/sysfs-driver-bd9571mwv-regulator  |   4 +
 Documentation/ABI/testing/sysfs-driver-genwqe |  11 +-
 .../testing/sysfs-driver-hid-logitech-lg4ff   |  18 +-
 .../ABI/testing/sysfs-driver-hid-wiimote      |  11 +-
 .../ABI/testing/sysfs-driver-samsung-laptop   |  13 +-
 .../ABI/testing/sysfs-driver-toshiba_acpi     |  26 ++
 .../ABI/testing/sysfs-driver-toshiba_haps     |   2 +
 Documentation/ABI/testing/sysfs-driver-wacom  |   4 +-
 Documentation/ABI/testing/sysfs-firmware-acpi | 237 +++++++++---------
 .../ABI/testing/sysfs-firmware-dmi-entries    |  50 ++--
 Documentation/ABI/testing/sysfs-firmware-gsmi |   2 +-
 .../ABI/testing/sysfs-firmware-memmap         |  16 +-
 Documentation/ABI/testing/sysfs-fs-ext4       |   4 +-
 .../ABI/testing/sysfs-hypervisor-xen          |  13 +-
 .../ABI/testing/sysfs-kernel-boot_params      |  23 +-
 .../ABI/testing/sysfs-kernel-mm-hugepages     |  12 +-
 .../ABI/testing/sysfs-platform-asus-laptop    |  21 +-
 .../ABI/testing/sysfs-platform-asus-wmi       |   1 +
 Documentation/ABI/testing/sysfs-platform-at91 |  10 +-
 .../ABI/testing/sysfs-platform-eeepc-laptop   |  14 +-
 .../ABI/testing/sysfs-platform-ideapad-laptop |   9 +-
 .../sysfs-platform-intel-wmi-thunderbolt      |   1 +
 .../ABI/testing/sysfs-platform-sst-atom       |  13 +-
 .../ABI/testing/sysfs-platform-usbip-vudc     |  11 +-
 Documentation/ABI/testing/sysfs-ptp           |   2 +-
 74 files changed, 1322 insertions(+), 865 deletions(-)

diff --git a/Documentation/ABI/testing/configfs-spear-pcie-gadget b/Documentation/ABI/testing/configfs-spear-pcie-gadget
index 840c324ef34d..cf877bd341df 100644
--- a/Documentation/ABI/testing/configfs-spear-pcie-gadget
+++ b/Documentation/ABI/testing/configfs-spear-pcie-gadget
@@ -10,22 +10,24 @@ Description:
 	This interfaces can be used to show spear's PCIe device capability.
 
 	Nodes are only visible when configfs is mounted. To mount configfs
-	in /config directory use:
-	# mount -t configfs none /config/
+	in /config directory use::
 
-	For nth PCIe Device Controller
-	/config/pcie-gadget.n/
-		link ... used to enable ltssm and read its status.
-		int_type ...used to configure and read type of supported
-			interrupt
-		no_of_msi ... used to configure number of MSI vector needed and
+	  # mount -t configfs none /config/
+
+	For nth PCIe Device Controller /config/pcie-gadget.n/:
+
+	=============== ======================================================
+	link		used to enable ltssm and read its status.
+	int_type	used to configure and read type of supported interrupt
+	no_of_msi	used to configure number of MSI vector needed and
 			to read no of MSI granted.
-		inta ... write 1 to assert INTA and 0 to de-assert.
-		send_msi ... write MSI vector to be sent.
-		vendor_id ... used to write and read vendor id (hex)
-		device_id ... used to write and read device id (hex)
-		bar0_size ... used to write and read bar0_size
-		bar0_address ... used to write and read bar0 mapped area in hex.
-		bar0_rw_offset ... used to write and read offset of bar0 where
-			bar0_data will be written or read.
-		bar0_data ... used to write and read data at bar0_rw_offset.
+	inta		write 1 to assert INTA and 0 to de-assert.
+	send_msi	write MSI vector to be sent.
+	vendor_id	used to write and read vendor id (hex)
+	device_id	used to write and read device id (hex)
+	bar0_size	used to write and read bar0_size
+	bar0_address	used to write and read bar0 mapped area in hex.
+	bar0_rw_offset	used to write and read offset of bar0 where bar0_data
+			will be written or read.
+	bar0_data	used to write and read data at bar0_rw_offset.
+	=============== ======================================================
diff --git a/Documentation/ABI/testing/configfs-usb-gadget b/Documentation/ABI/testing/configfs-usb-gadget
index 4594cc2435e8..dc351e9af80a 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget
+++ b/Documentation/ABI/testing/configfs-usb-gadget
@@ -12,22 +12,24 @@ Description:
 
 		The attributes of a gadget:
 
-		UDC		- bind a gadget to UDC/unbind a gadget;
-				write UDC's name found in /sys/class/udc/*
-				to bind a gadget, empty string "" to unbind.
+		================  ============================================
+		UDC		  bind a gadget to UDC/unbind a gadget;
+				  write UDC's name found in /sys/class/udc/*
+				  to bind a gadget, empty string "" to unbind.
 
-		max_speed	- maximum speed the driver supports. Valid
-				names are super-speed-plus, super-speed,
-				high-speed, full-speed, and low-speed.
+		max_speed	  maximum speed the driver supports. Valid
+				  names are super-speed-plus, super-speed,
+				  high-speed, full-speed, and low-speed.
 
-		bDeviceClass	- USB device class code
-		bDeviceSubClass	- USB device subclass code
-		bDeviceProtocol	- USB device protocol code
-		bMaxPacketSize0	- maximum endpoint 0 packet size
-		bcdDevice	- bcd device release number
-		bcdUSB		- bcd USB specification version number
-		idProduct	- product ID
-		idVendor	- vendor ID
+		bDeviceClass	  USB device class code
+		bDeviceSubClass	  USB device subclass code
+		bDeviceProtocol	  USB device protocol code
+		bMaxPacketSize0	  maximum endpoint 0 packet size
+		bcdDevice	  bcd device release number
+		bcdUSB		  bcd USB specification version number
+		idProduct	  product ID
+		idVendor	  vendor ID
+		================  ============================================
 
 What:		/config/usb-gadget/gadget/configs
 Date:		Jun 2013
@@ -41,8 +43,10 @@ KernelVersion:	3.11
 Description:
 		The attributes of a configuration:
 
-		bmAttributes	- configuration characteristics
-		MaxPower	- maximum power consumption from the bus
+		================  ======================================
+		bmAttributes	  configuration characteristics
+		MaxPower	  maximum power consumption from the bus
+		================  ======================================
 
 What:		/config/usb-gadget/gadget/configs/config/strings
 Date:		Jun 2013
@@ -57,7 +61,9 @@ KernelVersion:	3.11
 Description:
 		The attributes:
 
-		configuration	- configuration description
+		================  =========================
+		configuration	  configuration description
+		================  =========================
 
 
 What:		/config/usb-gadget/gadget/functions
@@ -76,8 +82,10 @@ Description:
 
 		The attributes:
 
-		compatible_id		- 8-byte string for "Compatible ID"
-		sub_compatible_id	- 8-byte string for "Sub Compatible ID"
+		=================	=====================================
+		compatible_id		8-byte string for "Compatible ID"
+		sub_compatible_id	8-byte string for "Sub Compatible ID"
+		=================	=====================================
 
 What:		/config/usb-gadget/gadget/functions/<func>.<inst>/interface.<n>/<property>
 Date:		May 2014
@@ -89,16 +97,19 @@ Description:
 
 		The attributes:
 
-		type		- value 1..7 for interpreting the data
-				1: unicode string
-				2: unicode string with environment variable
-				3: binary
-				4: little-endian 32-bit
-				5: big-endian 32-bit
-				6: unicode string with a symbolic link
-				7: multiple unicode strings
-		data		- blob of data to be interpreted depending on
+		=====		===============================================
+		type		value 1..7 for interpreting the data
+
+				- 1: unicode string
+				- 2: unicode string with environment variable
+				- 3: binary
+				- 4: little-endian 32-bit
+				- 5: big-endian 32-bit
+				- 6: unicode string with a symbolic link
+				- 7: multiple unicode strings
+		data		blob of data to be interpreted depending on
 				type
+		=====		===============================================
 
 What:		/config/usb-gadget/gadget/strings
 Date:		Jun 2013
@@ -113,9 +124,11 @@ KernelVersion:	3.11
 Description:
 		The attributes:
 
-		serialnumber	- gadget's serial number (string)
-		product		- gadget's product description
-		manufacturer	- gadget's manufacturer description
+		============	=================================
+		serialnumber	gadget's serial number (string)
+		product		gadget's product description
+		manufacturer	gadget's manufacturer description
+		============	=================================
 
 What:		/config/usb-gadget/gadget/os_desc
 Date:		May 2014
@@ -123,8 +136,10 @@ KernelVersion:	3.16
 Description:
 		This group contains "OS String" extension handling attributes.
 
-		use		- flag turning "OS Desctiptors" support on/off
-		b_vendor_code	- one-byte value used for custom per-device and
+		=============	===============================================
+		use		flag turning "OS Descriptors" support on/off
+		b_vendor_code	one-byte value used for custom per-device and
 				per-interface requests
-		qw_sign		- an identifier to be reported as "OS String"
+		qw_sign		an identifier to be reported as "OS String"
 				proper
+		=============	===============================================
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-hid b/Documentation/ABI/testing/configfs-usb-gadget-hid
index f12e00e6baa3..748705c4cb58 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-hid
+++ b/Documentation/ABI/testing/configfs-usb-gadget-hid
@@ -4,8 +4,10 @@ KernelVersion:	3.19
 Description:
 		The attributes:
 
-		protocol	- HID protocol to use
-		report_desc	- blob corresponding to HID report descriptors
+		=============	============================================
+		protocol	HID protocol to use
+		report_desc	blob corresponding to HID report descriptors
 				except the data passed through /dev/hidg<N>
-		report_length	- HID report length
-		subclass	- HID device subclass to use
+		report_length	HID report length
+		subclass	HID device subclass to use
+		=============	============================================
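As a sketch of how these attributes are typically written from a shell (the configfs mount point and the `hid.usb0` instance name are assumptions for illustration, not mandated by this ABI):

```shell
# setup_hid <function-dir>: write the HID attributes described above.
# The directory would normally be something like
# /sys/kernel/config/usb_gadget/<gadget>/functions/hid.usb0 (hypothetical
# gadget name); a plain directory stands in for it here.
setup_hid() {
    dir=$1
    mkdir -p "$dir"
    echo 1 > "$dir/protocol"       # HID protocol (1 = keyboard)
    echo 1 > "$dir/subclass"       # HID subclass (1 = boot interface)
    echo 8 > "$dir/report_length"  # 8-byte boot keyboard report
    # report_desc takes a binary report-descriptor blob, e.g.:
    # cat kbd-report-desc.bin > "$dir/report_desc"
}
```

The function is then bound to a configuration with a symlink, as usual for configfs gadgets.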
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-rndis b/Documentation/ABI/testing/configfs-usb-gadget-rndis
index 137399095d74..9416eda7fe93 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-rndis
+++ b/Documentation/ABI/testing/configfs-usb-gadget-rndis
@@ -4,14 +4,16 @@ KernelVersion:	3.11
 Description:
 		The attributes:
 
-		ifname		- network device interface name associated with
+		=========	=============================================
+		ifname		network device interface name associated with
 				this function instance
-		qmult		- queue length multiplier for high and
+		qmult		queue length multiplier for high and
 				super speed
-		host_addr	- MAC address of host's end of this
+		host_addr	MAC address of host's end of this
 				Ethernet over USB link
-		dev_addr	- MAC address of device's end of this
+		dev_addr	MAC address of device's end of this
 				Ethernet over USB link
-		class		- USB interface class, default is 02 (hex)
-		subclass	- USB interface subclass, default is 06 (hex)
-		protocol	- USB interface protocol, default is 00 (hex)
+		class		USB interface class, default is 02 (hex)
+		subclass	USB interface subclass, default is 06 (hex)
+		protocol	USB interface protocol, default is 00 (hex)
+		=========	=============================================
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-uac1 b/Documentation/ABI/testing/configfs-usb-gadget-uac1
index abfe447c848f..dc23fd776943 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-uac1
+++ b/Documentation/ABI/testing/configfs-usb-gadget-uac1
@@ -4,11 +4,13 @@ KernelVersion:	4.14
 Description:
 		The attributes:
 
-		c_chmask - capture channel mask
-		c_srate - capture sampling rate
-		c_ssize - capture sample size (bytes)
-		p_chmask - playback channel mask
-		p_srate - playback sampling rate
-		p_ssize - playback sample size (bytes)
-		req_number - the number of pre-allocated request
-			for both capture and playback
+		==========	===================================
+		c_chmask	capture channel mask
+		c_srate		capture sampling rate
+		c_ssize		capture sample size (bytes)
+		p_chmask	playback channel mask
+		p_srate		playback sampling rate
+		p_ssize		playback sample size (bytes)
+		req_number	the number of pre-allocated requests
+				for both capture and playback
+		==========	===================================
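A minimal sketch of writing the playback attributes above (the path and the 48 kHz stereo 16-bit values are illustrative assumptions, not defaults of the driver):

```shell
# setup_uac1 <function-dir>: set playback parameters from the table above.
# The directory would normally be a uac1.<inst> function directory under
# configfs; a plain directory works for illustration.
setup_uac1() {
    dir=$1
    mkdir -p "$dir"
    echo 3     > "$dir/p_chmask"  # channel mask: bits 0 and 1 = stereo
    echo 48000 > "$dir/p_srate"   # playback sampling rate in Hz
    echo 2     > "$dir/p_ssize"   # sample size in bytes (16-bit)
}
```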
diff --git a/Documentation/ABI/testing/configfs-usb-gadget-uvc b/Documentation/ABI/testing/configfs-usb-gadget-uvc
index 809765bd9573..cee81b0347bb 100644
--- a/Documentation/ABI/testing/configfs-usb-gadget-uvc
+++ b/Documentation/ABI/testing/configfs-usb-gadget-uvc
@@ -3,9 +3,11 @@ Date:		Dec 2014
 KernelVersion:	4.0
 Description:	UVC function directory
 
-		streaming_maxburst	- 0..15 (ss only)
-		streaming_maxpacket	- 1..1023 (fs), 1..3072 (hs/ss)
-		streaming_interval	- 1..16
+		===================	=============================
+		streaming_maxburst	0..15 (ss only)
+		streaming_maxpacket	1..1023 (fs), 1..3072 (hs/ss)
+		streaming_interval	1..16
+		===================	=============================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control
 Date:		Dec 2014
@@ -13,8 +15,11 @@ KernelVersion:	4.0
 Description:	Control descriptors
 
 		All attributes read only:
-		bInterfaceNumber	- USB interface number for this
-					  streaming interface
+
+		================	=============================
+		bInterfaceNumber	USB interface number for this
+					streaming interface
+		================	=============================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control/class
 Date:		Dec 2014
@@ -47,13 +52,16 @@ KernelVersion:	4.0
 Description:	Default output terminal descriptors
 
 		All attributes read only:
-		iTerminal	- index of string descriptor
-		bSourceID 	- id of the terminal to which this terminal
+
+		==============	=============================================
+		iTerminal	index of string descriptor
+		bSourceID 	id of the terminal to which this terminal
 				is connected
-		bAssocTerminal	- id of the input terminal to which this output
+		bAssocTerminal	id of the input terminal to which this output
 				terminal is associated
-		wTerminalType	- terminal type
-		bTerminalID	- a non-zero id of this terminal
+		wTerminalType	terminal type
+		bTerminalID	a non-zero id of this terminal
+		==============	=============================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control/terminal/camera
 Date:		Dec 2014
@@ -66,16 +74,19 @@ KernelVersion:	4.0
 Description:	Default camera terminal descriptors
 
 		All attributes read only:
-		bmControls		- bitmap specifying which controls are
-					supported for the video stream
-		wOcularFocalLength	- the value of Locular
-		wObjectiveFocalLengthMax- the value of Lmin
-		wObjectiveFocalLengthMin- the value of Lmax
-		iTerminal		- index of string descriptor
-		bAssocTerminal		- id of the output terminal to which
-					this terminal is connected
-		wTerminalType		- terminal type
-		bTerminalID		- a non-zero id of this terminal
+
+		========================  ====================================
+		bmControls		  bitmap specifying which controls are
+					  supported for the video stream
+		wOcularFocalLength	  the value of Locular
+		wObjectiveFocalLengthMax  the value of Lmax
+		wObjectiveFocalLengthMin  the value of Lmin
+		iTerminal		  index of string descriptor
+		bAssocTerminal		  id of the output terminal to which
+					  this terminal is connected
+		wTerminalType		  terminal type
+		bTerminalID		  a non-zero id of this terminal
+		========================  ====================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control/processing
 Date:		Dec 2014
@@ -88,13 +99,16 @@ KernelVersion:	4.0
 Description:	Default processing unit descriptors
 
 		All attributes read only:
-		iProcessing	- index of string descriptor
-		bmControls	- bitmap specifying which controls are
+
+		===============	========================================
+		iProcessing	index of string descriptor
+		bmControls	bitmap specifying which controls are
 				supported for the video stream
-		wMaxMultiplier	- maximum digital magnification x100
-		bSourceID	- id of the terminal to which this unit is
+		wMaxMultiplier	maximum digital magnification x100
+		bSourceID	id of the terminal to which this unit is
 				connected
-		bUnitID		- a non-zero id of this unit
+		bUnitID		a non-zero id of this unit
+		===============	========================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/control/header
 Date:		Dec 2014
@@ -114,8 +128,11 @@ KernelVersion:	4.0
 Description:	Streaming descriptors
 
 		All attributes read only:
-		bInterfaceNumber	- USB interface number for this
-					  streaming interface
+
+		================	=============================
+		bInterfaceNumber	USB interface number for this
+					streaming interface
+		================	=============================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/class
 Date:		Dec 2014
@@ -148,13 +165,16 @@ KernelVersion:	4.0
 Description:	Default color matching descriptors
 
 		All attributes read only:
-		bMatrixCoefficients	- matrix used to compute luma and
-					chroma values from the color primaries
-		bTransferCharacteristics- optoelectronic transfer
-					characteristic of the source picutre,
-					also called the gamma function
-		bColorPrimaries		- color primaries and the reference
-					white
+
+		========================  ======================================
+		bMatrixCoefficients	  matrix used to compute luma and
+					  chroma values from the color primaries
+		bTransferCharacteristics  optoelectronic transfer
+					  characteristic of the source picture,
+					  also called the gamma function
+		bColorPrimaries		  color primaries and the reference
+					  white
+		========================  ======================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/mjpeg
 Date:		Dec 2014
@@ -168,47 +188,52 @@ Description:	Specific MJPEG format descriptors
 
 		All attributes read only,
 		except bmaControls and bDefaultFrameIndex:
-		bFormatIndex		- unique id for this format descriptor;
+
+		===================	=====================================
+		bFormatIndex		unique id for this format descriptor;
 					only defined after parent header is
 					linked into the streaming class;
 					read-only
-		bmaControls		- this format's data for bmaControls in
+		bmaControls		this format's data for bmaControls in
 					the streaming header
-		bmInterfaceFlags	- specifies interlace information,
+		bmInterfaceFlags	specifies interlace information,
 					read-only
-		bAspectRatioY		- the X dimension of the picture aspect
+		bAspectRatioY		the Y dimension of the picture aspect
 					ratio, read-only
-		bAspectRatioX		- the Y dimension of the picture aspect
+		bAspectRatioX		the X dimension of the picture aspect
 					ratio, read-only
-		bmFlags			- characteristics of this format,
+		bmFlags			characteristics of this format,
 					read-only
-		bDefaultFrameIndex	- optimum frame index for this stream
+		bDefaultFrameIndex	optimum frame index for this stream
+		===================	=====================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/mjpeg/name/name
 Date:		Dec 2014
 KernelVersion:	4.0
 Description:	Specific MJPEG frame descriptors
 
-		bFrameIndex		- unique id for this framedescriptor;
-					only defined after parent format is
-					linked into the streaming header;
-					read-only
-		dwFrameInterval		- indicates how frame interval can be
-					programmed; a number of values
-					separated by newline can be specified
-		dwDefaultFrameInterval	- the frame interval the device would
-					like to use as default
-		dwMaxVideoFrameBufferSize- the maximum number of bytes the
-					compressor will produce for a video
-					frame or still image
-		dwMaxBitRate		- the maximum bit rate at the shortest
-					frame interval in bps
-		dwMinBitRate		- the minimum bit rate at the longest
-					frame interval in bps
-		wHeight			- height of decoded bitmap frame in px
-		wWidth			- width of decoded bitmam frame in px
-		bmCapabilities		- still image support, fixed frame-rate
-					support
+		=========================  =====================================
+		bFrameIndex		   unique id for this frame descriptor;
+					   only defined after parent format is
+					   linked into the streaming header;
+					   read-only
+		dwFrameInterval		   indicates how frame interval can be
+					   programmed; a number of values
+					   separated by newline can be specified
+		dwDefaultFrameInterval	   the frame interval the device would
+					   like to use as default
+		dwMaxVideoFrameBufferSize  the maximum number of bytes the
+					   compressor will produce for a video
+					   frame or still image
+		dwMaxBitRate		   the maximum bit rate at the shortest
+					   frame interval in bps
+		dwMinBitRate		   the minimum bit rate at the longest
+					   frame interval in bps
+		wHeight			   height of decoded bitmap frame in px
+		wWidth			   width of decoded bitmap frame in px
+		bmCapabilities		   still image support, fixed frame-rate
+					   support
+		=========================  =====================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/uncompressed
 Date:		Dec 2014
@@ -220,50 +245,54 @@ Date:		Dec 2014
 KernelVersion:	4.0
 Description:	Specific uncompressed format descriptors
 
-		bFormatIndex		- unique id for this format descriptor;
+		==================	=======================================
+		bFormatIndex		unique id for this format descriptor;
 					only defined after parent header is
 					linked into the streaming class;
 					read-only
-		bmaControls		- this format's data for bmaControls in
+		bmaControls		this format's data for bmaControls in
 					the streaming header
-		bmInterfaceFlags	- specifies interlace information,
+		bmInterfaceFlags	specifies interlace information,
 					read-only
-		bAspectRatioY		- the X dimension of the picture aspect
+		bAspectRatioY		the Y dimension of the picture aspect
 					ratio, read-only
-		bAspectRatioX		- the Y dimension of the picture aspect
+		bAspectRatioX		the X dimension of the picture aspect
 					ratio, read-only
-		bDefaultFrameIndex	- optimum frame index for this stream
-		bBitsPerPixel		- number of bits per pixel used to
+		bDefaultFrameIndex	optimum frame index for this stream
+		bBitsPerPixel		number of bits per pixel used to
 					specify color in the decoded video
 					frame
-		guidFormat		- globally unique id used to identify
+		guidFormat		globally unique id used to identify
 					stream-encoding format
+		==================	=======================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/uncompressed/name/name
 Date:		Dec 2014
 KernelVersion:	4.0
 Description:	Specific uncompressed frame descriptors
 
-		bFrameIndex		- unique id for this framedescriptor;
-					only defined after parent format is
-					linked into the streaming header;
-					read-only
-		dwFrameInterval		- indicates how frame interval can be
-					programmed; a number of values
-					separated by newline can be specified
-		dwDefaultFrameInterval	- the frame interval the device would
-					like to use as default
-		dwMaxVideoFrameBufferSize- the maximum number of bytes the
-					compressor will produce for a video
-					frame or still image
-		dwMaxBitRate		- the maximum bit rate at the shortest
-					frame interval in bps
-		dwMinBitRate		- the minimum bit rate at the longest
-					frame interval in bps
-		wHeight			- height of decoded bitmap frame in px
-		wWidth			- width of decoded bitmam frame in px
-		bmCapabilities		- still image support, fixed frame-rate
-					support
+		=========================  =====================================
+		bFrameIndex		   unique id for this frame descriptor;
+					   only defined after parent format is
+					   linked into the streaming header;
+					   read-only
+		dwFrameInterval		   indicates how frame interval can be
+					   programmed; a number of values
+					   separated by newline can be specified
+		dwDefaultFrameInterval	   the frame interval the device would
+					   like to use as default
+		dwMaxVideoFrameBufferSize  the maximum number of bytes the
+					   compressor will produce for a video
+					   frame or still image
+		dwMaxBitRate		   the maximum bit rate at the shortest
+					   frame interval in bps
+		dwMinBitRate		   the minimum bit rate at the longest
+					   frame interval in bps
+		wHeight			   height of decoded bitmap frame in px
+		wWidth			   width of decoded bitmap frame in px
+		bmCapabilities		   still image support, fixed frame-rate
+					   support
+		=========================  =====================================
 
 What:		/config/usb-gadget/gadget/functions/uvc.name/streaming/header
 Date:		Dec 2014
@@ -276,17 +305,20 @@ KernelVersion:	4.0
 Description:	Specific streaming header descriptors
 
 		All attributes read only:
-		bTriggerUsage		- how the host software will respond to
+
+		====================	=====================================
+		bTriggerUsage		how the host software will respond to
 					a hardware trigger interrupt event
-		bTriggerSupport		- flag specifying if hardware
+		bTriggerSupport		flag specifying if hardware
 					triggering is supported
-		bStillCaptureMethod	- method of still image caputre
+		bStillCaptureMethod	method of still image capture
 					supported
-		bTerminalLink		- id of the output terminal to which
+		bTerminalLink		id of the output terminal to which
 					the video endpoint of this interface
 					is connected
-		bmInfo			- capabilities of this video streaming
+		bmInfo			capabilities of this video streaming
 					interface
+		====================	=====================================
 
 What:		/sys/class/udc/udc.name/device/gadget/video4linux/video.name/function_name
 Date:		May 2018
diff --git a/Documentation/ABI/testing/debugfs-ec b/Documentation/ABI/testing/debugfs-ec
index 6546115a94da..ab6099daa8f5 100644
--- a/Documentation/ABI/testing/debugfs-ec
+++ b/Documentation/ABI/testing/debugfs-ec
@@ -6,7 +6,7 @@ Description:
 General information like which GPE is assigned to the EC and whether
 the global lock should get used.
 Knowing the EC GPE one can watch the amount of HW events related to
-the EC here (XY -> GPE number from /sys/kernel/debug/ec/*/gpe):
+the EC here (XY -> GPE number from `/sys/kernel/debug/ec/*/gpe`):
 /sys/firmware/acpi/interrupts/gpeXY
 
 The io file is binary and a userspace tool located here:
@@ -14,7 +14,8 @@ ftp://ftp.suse.com/pub/people/trenn/sources/ec/
 should get used to read out the 256 Embedded Controller registers
 or writing to them.
 
-CAUTION: Do not write to the Embedded Controller if you don't know
-what you are doing! Rebooting afterwards also is a good idea.
-This can influence the way your machine is cooled and fans may
-not get switched on again after you did a wrong write.
+CAUTION:
+  Do not write to the Embedded Controller if you don't know
+  what you are doing! Rebooting afterwards also is a good idea.
+  This can influence the way your machine is cooled and fans may
+  not get switched on again after you did a wrong write.
diff --git a/Documentation/ABI/testing/debugfs-pktcdvd b/Documentation/ABI/testing/debugfs-pktcdvd
index cf11736acb76..787907d70462 100644
--- a/Documentation/ABI/testing/debugfs-pktcdvd
+++ b/Documentation/ABI/testing/debugfs-pktcdvd
@@ -4,16 +4,15 @@ KernelVersion:  2.6.20
 Contact:        Thomas Maier <balagi@justmail.de>
 Description:
 
-debugfs interface
------------------
-
 The pktcdvd module (packet writing driver) creates
 these files in debugfs:
 
 /sys/kernel/debug/pktcdvd/pktcdvd[0-7]/
+
+    ====            ====== ====================================
     info            (0444) Lots of driver statistics and infos.
+    ====            ====== ====================================
 
-Example:
--------
+Example::
 
-cat /sys/kernel/debug/pktcdvd/pktcdvd0/info
+    cat /sys/kernel/debug/pktcdvd/pktcdvd0/info
diff --git a/Documentation/ABI/testing/dev-kmsg b/Documentation/ABI/testing/dev-kmsg
index 3c0bb76e3417..a377b6c093c9 100644
--- a/Documentation/ABI/testing/dev-kmsg
+++ b/Documentation/ABI/testing/dev-kmsg
@@ -6,6 +6,7 @@ Description:	The /dev/kmsg character device node provides userspace access
 		to the kernel's printk buffer.
 
 		Injecting messages:
+
 		Every write() to the opened device node places a log entry in
 		the kernel's printk buffer.
 
@@ -21,6 +22,7 @@ Description:	The /dev/kmsg character device node provides userspace access
 		the messages can always be reliably determined.
 
 		Accessing the buffer:
+
 		Every read() from the opened device node receives one record
 		of the kernel's printk buffer.
 
@@ -48,6 +50,7 @@ Description:	The /dev/kmsg character device node provides userspace access
 		if needed, without limiting the interface to a single reader.
 
 		The device supports seek with the following parameters:
+
 		SEEK_SET, 0
 		  seek to the first entry in the buffer
 		SEEK_END, 0
@@ -87,18 +90,22 @@ Description:	The /dev/kmsg character device node provides userspace access
 		readable context of the message, for reliable processing in
 		userspace.
 
-		Example:
-		7,160,424069,-;pci_root PNP0A03:00: host bridge window [io  0x0000-0x0cf7] (ignored)
-		 SUBSYSTEM=acpi
-		 DEVICE=+acpi:PNP0A03:00
-		6,339,5140900,-;NET: Registered protocol family 10
-		30,340,5690716,-;udevd[80]: starting version 181
+		Example::
+
+		  7,160,424069,-;pci_root PNP0A03:00: host bridge window [io  0x0000-0x0cf7] (ignored)
+		   SUBSYSTEM=acpi
+		   DEVICE=+acpi:PNP0A03:00
+		  6,339,5140900,-;NET: Registered protocol family 10
+		  30,340,5690716,-;udevd[80]: starting version 181
 
 		The DEVICE= key uniquely identifies devices the following way:
-		  b12:8        - block dev_t
-		  c127:3       - char dev_t
-		  n8           - netdev ifindex
-		  +sound:card0 - subsystem:devname
+
+		  ============  =================
+		  b12:8         block dev_t
+		  c127:3        char dev_t
+		  n8            netdev ifindex
+		  +sound:card0  subsystem:devname
+		  ============  =================
 
 		The flags field carries '-' by default. A 'c' indicates a
 		fragment of a line. Note, that these hints about continuation
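The prefix syntax can be unpacked with plain shell arithmetic: the syslog level is the low three bits of the first comma-separated field, and the facility is the remaining high bits. A small sketch:

```shell
# kmsg_level <record>: print the syslog level encoded in the first
# prefix field of one /dev/kmsg record.
kmsg_level() {
    pri=${1%%,*}         # "6,339,5140900,-;NET: ..." -> "6"
    echo $(( pri & 7 ))  # level; the facility would be pri >> 3
}
kmsg_level '30,340,5690716,-;udevd[80]: starting version 181'  # prints 6
```

Here 30 decodes to facility 3 (daemon) and level 6 (info).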
diff --git a/Documentation/ABI/testing/evm b/Documentation/ABI/testing/evm
index 201d10319fa1..3c477ba48a31 100644
--- a/Documentation/ABI/testing/evm
+++ b/Documentation/ABI/testing/evm
@@ -17,26 +17,33 @@ Description:
 		echoing a value to <securityfs>/evm made up of the
 		following bits:
 
+		===	  ==================================================
 		Bit	  Effect
+		===	  ==================================================
 		0	  Enable HMAC validation and creation
 		1	  Enable digital signature validation
 		2	  Permit modification of EVM-protected metadata at
 			  runtime. Not supported if HMAC validation and
 			  creation is enabled.
 		31	  Disable further runtime modification of EVM policy
+		===	  ==================================================
 
-		For example:
+		For example::
 
-		echo 1 ><securityfs>/evm
+		  echo 1 ><securityfs>/evm
 
 		will enable HMAC validation and creation
 
-		echo 0x80000003 ><securityfs>/evm
+		::
+
+		  echo 0x80000003 ><securityfs>/evm
 
 		will enable HMAC and digital signature validation and
 		HMAC creation and disable all further modification of policy.
 
-		echo 0x80000006 ><securityfs>/evm
+		::
+
+		  echo 0x80000006 ><securityfs>/evm
 
 		will enable digital signature validation, permit
 		modification of EVM-protected metadata and
@@ -65,7 +72,7 @@ Description:
 		Shows the set of extended attributes used to calculate or
 		validate the EVM signature, and allows additional attributes
 		to be added at runtime. Any signatures generated after
-		additional attributes are added (and on files posessing those
+		additional attributes are added (and on files possessing those
 		additional attributes) will only be valid if the same
 		additional attributes are configured on system boot. Writing
 		a single period (.) will lock the xattr list from any further
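The example values earlier in this file follow directly from the bit table; the arithmetic can be sketched as:

```shell
# evm_value <bit>...: OR together the listed bit positions from the EVM
# bit table above into the value that would be echoed to <securityfs>/evm.
evm_value() {
    v=0
    for b in "$@"; do
        v=$(( v | (1 << b) ))
    done
    printf '0x%x\n' "$v"
}
evm_value 0 1 31   # HMAC + signature validation, lock policy
```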
diff --git a/Documentation/ABI/testing/ima_policy b/Documentation/ABI/testing/ima_policy
index cd572912c593..e35263f97fc1 100644
--- a/Documentation/ABI/testing/ima_policy
+++ b/Documentation/ABI/testing/ima_policy
@@ -15,19 +15,22 @@ Description:
 		IMA appraisal, if configured, uses these file measurements
 		for local measurement appraisal.
 
-		rule format: action [condition ...]
+		::
 
-		action: measure | dont_measure | appraise | dont_appraise |
-			audit | hash | dont_hash
-		condition:= base | lsm  [option]
+		  rule format: action [condition ...]
+
+		  action: measure | dont_measure | appraise | dont_appraise |
+			  audit | hash | dont_hash
+		  condition:= base | lsm  [option]
 			base:	[[func=] [mask=] [fsmagic=] [fsuuid=] [uid=]
 				[euid=] [fowner=] [fsname=]]
 			lsm:	[[subj_user=] [subj_role=] [subj_type=]
 				 [obj_user=] [obj_role=] [obj_type=]]
 			option:	[[appraise_type=]] [template=] [permit_directio]
 				[appraise_flag=] [keyrings=]
-		base: 	func:= [BPRM_CHECK][MMAP_CHECK][CREDS_CHECK][FILE_CHECK][MODULE_CHECK]
-				[FIRMWARE_CHECK]
+		  base:
+			func:= [BPRM_CHECK][MMAP_CHECK][CREDS_CHECK][FILE_CHECK][MODULE_CHECK]
+			        [FIRMWARE_CHECK]
 				[KEXEC_KERNEL_CHECK] [KEXEC_INITRAMFS_CHECK]
 				[KEXEC_CMDLINE] [KEY_CHECK]
 			mask:= [[^]MAY_READ] [[^]MAY_WRITE] [[^]MAY_APPEND]
@@ -37,8 +40,9 @@ Description:
 			uid:= decimal value
 			euid:= decimal value
 			fowner:= decimal value
-		lsm:  	are LSM specific
-		option:	appraise_type:= [imasig] [imasig|modsig]
+		  lsm:  are LSM specific
+		  option:
+			appraise_type:= [imasig] [imasig|modsig]
 			appraise_flag:= [check_blacklist]
 			Currently, blacklist check is only for files signed with appended
 			signature.
@@ -49,7 +53,7 @@ Description:
 			(eg, ima-ng). Only valid when action is "measure".
 			pcr:= decimal value
 
-		default policy:
+		  default policy:
 			# PROC_SUPER_MAGIC
 			dont_measure fsmagic=0x9fa0
 			dont_appraise fsmagic=0x9fa0
@@ -97,7 +101,8 @@ Description:
 
 		Examples of LSM specific definitions:
 
-		SELinux:
+		SELinux::
+
 			dont_measure obj_type=var_log_t
 			dont_appraise obj_type=var_log_t
 			dont_measure obj_type=auditd_log_t
@@ -105,10 +110,11 @@ Description:
 			measure subj_user=system_u func=FILE_CHECK mask=MAY_READ
 			measure subj_role=system_r func=FILE_CHECK mask=MAY_READ
 
-		Smack:
+		Smack::
+
 			measure subj_user=_ func=FILE_CHECK mask=MAY_READ
 
-		Example of measure rules using alternate PCRs:
+		Example of measure rules using alternate PCRs::
 
 			measure func=KEXEC_KERNEL_CHECK pcr=4
 			measure func=KEXEC_INITRAMFS_CHECK pcr=5
diff --git a/Documentation/ABI/testing/procfs-diskstats b/Documentation/ABI/testing/procfs-diskstats
index 70dcaf2481f4..df5a3a8c1edf 100644
--- a/Documentation/ABI/testing/procfs-diskstats
+++ b/Documentation/ABI/testing/procfs-diskstats
@@ -6,28 +6,32 @@ Description:
 		of block devices. Each line contains the following 14
 		fields:
 
-		 1 - major number
-		 2 - minor mumber
-		 3 - device name
-		 4 - reads completed successfully
-		 5 - reads merged
-		 6 - sectors read
-		 7 - time spent reading (ms)
-		 8 - writes completed
-		 9 - writes merged
-		10 - sectors written
-		11 - time spent writing (ms)
-		12 - I/Os currently in progress
-		13 - time spent doing I/Os (ms)
-		14 - weighted time spent doing I/Os (ms)
+		==  ===================================
+		 1  major number
+		 2  minor number
+		 3  device name
+		 4  reads completed successfully
+		 5  reads merged
+		 6  sectors read
+		 7  time spent reading (ms)
+		 8  writes completed
+		 9  writes merged
+		10  sectors written
+		11  time spent writing (ms)
+		12  I/Os currently in progress
+		13  time spent doing I/Os (ms)
+		14  weighted time spent doing I/Os (ms)
+		==  ===================================
 
 		Kernel 4.18+ appends four more fields for discard
 		tracking putting the total at 18:
 
-		15 - discards completed successfully
-		16 - discards merged
-		17 - sectors discarded
-		18 - time spent discarding
+		==  ===================================
+		15  discards completed successfully
+		16  discards merged
+		17  sectors discarded
+		18  time spent discarding
+		==  ===================================
 
 		Kernel 5.5+ appends two more fields for flush requests:
 
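Given the field numbering above, a line can be picked apart with awk; a sketch (the sample line is made up, not real device data):

```shell
# diskstats_field <n> <line>: print field <n> of one /proc/diskstats line,
# numbered as in the table above.
diskstats_field() {
    echo "$2" | awk -v n="$1" '{ print $n }'
}
sample='8 0 sda 4579 102 311550 2214 573 166 25772 1044 0 2912 3168'
diskstats_field 4 "$sample"   # field 4: reads completed successfully
```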
diff --git a/Documentation/ABI/testing/sysfs-block b/Documentation/ABI/testing/sysfs-block
index 2322eb748b38..e34cdeeeb9d4 100644
--- a/Documentation/ABI/testing/sysfs-block
+++ b/Documentation/ABI/testing/sysfs-block
@@ -4,23 +4,27 @@ Contact:	Jerome Marchand <jmarchan@redhat.com>
 Description:
 		The /sys/block/<disk>/stat files displays the I/O
 		statistics of disk <disk>. They contain 11 fields:
-		 1 - reads completed successfully
-		 2 - reads merged
-		 3 - sectors read
-		 4 - time spent reading (ms)
-		 5 - writes completed
-		 6 - writes merged
-		 7 - sectors written
-		 8 - time spent writing (ms)
-		 9 - I/Os currently in progress
-		10 - time spent doing I/Os (ms)
-		11 - weighted time spent doing I/Os (ms)
-		12 - discards completed
-		13 - discards merged
-		14 - sectors discarded
-		15 - time spent discarding (ms)
-		16 - flush requests completed
-		17 - time spent flushing (ms)
+
+		==  ==============================================
+		 1  reads completed successfully
+		 2  reads merged
+		 3  sectors read
+		 4  time spent reading (ms)
+		 5  writes completed
+		 6  writes merged
+		 7  sectors written
+		 8  time spent writing (ms)
+		 9  I/Os currently in progress
+		10  time spent doing I/Os (ms)
+		11  weighted time spent doing I/Os (ms)
+		12  discards completed
+		13  discards merged
+		14  sectors discarded
+		15  time spent discarding (ms)
+		16  flush requests completed
+		17  time spent flushing (ms)
+		==  ==============================================
+
 		For more details refer Documentation/admin-guide/iostats.rst
 
 
diff --git a/Documentation/ABI/testing/sysfs-block-device b/Documentation/ABI/testing/sysfs-block-device
index 17f2bc7dd261..aa0fb500e3c9 100644
--- a/Documentation/ABI/testing/sysfs-block-device
+++ b/Documentation/ABI/testing/sysfs-block-device
@@ -8,11 +8,13 @@ Description:
 
 		It has the following valid values:
 
+		==	========================================================
 		0	OFF - the LED is not activated on activity
 		1	BLINK_ON - the LED blinks on every 10ms when activity is
 			detected.
 		2	BLINK_OFF - the LED is on when idle, and blinks off
 			every 10ms when activity is detected.
+		==	========================================================
 
 		Note that the user must turn sw_activity OFF it they wish to
 		control the activity LED via the em_message file.
diff --git a/Documentation/ABI/testing/sysfs-bus-acpi b/Documentation/ABI/testing/sysfs-bus-acpi
index e7898cfe5fb1..c78603497b97 100644
--- a/Documentation/ABI/testing/sysfs-bus-acpi
+++ b/Documentation/ABI/testing/sysfs-bus-acpi
@@ -67,14 +67,16 @@ Description:
 		The return value is a decimal integer representing the device's
 		status bitmap:
 
-		Bit [0] –  Set if the device is present.
-		Bit [1] –  Set if the device is enabled and decoding its
-		           resources.
-		Bit [2] –  Set if the device should be shown in the UI.
-		Bit [3] –  Set if the device is functioning properly (cleared if
-		           device failed its diagnostics).
-		Bit [4] –  Set if the battery is present.
-		Bits [31:5] –  Reserved (must be cleared)
+		===========  ==================================================
+		Bit [0]      Set if the device is present.
+		Bit [1]      Set if the device is enabled and decoding its
+		             resources.
+		Bit [2]      Set if the device should be shown in the UI.
+		Bit [3]      Set if the device is functioning properly (cleared
+			     if device failed its diagnostics).
+		Bit [4]      Set if the battery is present.
+		Bits [31:5]  Reserved (must be cleared)
+		===========  ==================================================
 
 		If bit [0] is clear, then bit 1 must also be clear (a device
 		that is not present cannot be enabled).
diff --git a/Documentation/ABI/testing/sysfs-bus-event_source-devices-format b/Documentation/ABI/testing/sysfs-bus-event_source-devices-format
index 5bb793ec926c..df7ccc1b2fba 100644
--- a/Documentation/ABI/testing/sysfs-bus-event_source-devices-format
+++ b/Documentation/ABI/testing/sysfs-bus-event_source-devices-format
@@ -10,7 +10,8 @@ Description:
 		name/value pairs.
 
 		Userspace must be prepared for the possibility that attributes
-		define overlapping bit ranges. For example:
+		define overlapping bit ranges. For example::
+
 			attr1 = 'config:0-23'
 			attr2 = 'config:0-7'
 			attr3 = 'config:12-35'
diff --git a/Documentation/ABI/testing/sysfs-bus-i2c-devices-pca954x b/Documentation/ABI/testing/sysfs-bus-i2c-devices-pca954x
index 0b0de8cd0d13..b6c69eb80ca4 100644
--- a/Documentation/ABI/testing/sysfs-bus-i2c-devices-pca954x
+++ b/Documentation/ABI/testing/sysfs-bus-i2c-devices-pca954x
@@ -6,15 +6,18 @@ Description:
 		Value that exists only for mux devices that can be
 		written to control the behaviour of the multiplexer on
 		idle. Possible values:
-		-2 - disconnect on idle, i.e. deselect the last used
-		     channel, which is useful when there is a device
-		     with an address that conflicts with another
-		     device on another mux on the same parent bus.
-		-1 - leave the mux as-is, which is the most optimal
-		     setting in terms of I2C operations and is the
-		     default mode.
-		0..<nchans> - set the mux to a predetermined channel,
-		     which is useful if there is one channel that is
-		     used almost always, and you want to reduce the
-		     latency for normal operations after rare
-		     transactions on other channels
+
+		===========  ===============================================
+		-2	     disconnect on idle, i.e. deselect the last used
+			     channel, which is useful when there is a device
+			     with an address that conflicts with another
+			     device on another mux on the same parent bus.
+		-1	     leave the mux as-is, which is the most optimal
+			     setting in terms of I2C operations and is the
+			     default mode.
+		0..<nchans>  set the mux to a predetermined channel,
+			     which is useful if there is one channel that is
+			     used almost always, and you want to reduce the
+			     latency for normal operations after rare
+			     transactions on other channels
+		===========  ===============================================
diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
index a9d51810a3ba..e3df71987eff 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio
+++ b/Documentation/ABI/testing/sysfs-bus-iio
@@ -65,6 +65,7 @@ Contact:	linux-iio@vger.kernel.org
 Description:
 		When the internal sampling clock can only take a specific set of
 		frequencies, we can specify the available values with:
+
 		- a small discrete set of values like "0 2 4 6 8"
 		- a range with minimum, step and maximum frequencies like
 		  "[min step max]"
@@ -1627,17 +1628,21 @@ Description:
 		Mounting matrix for IIO sensors. This is a rotation matrix which
 		informs userspace about sensor chip's placement relative to the
 		main hardware it is mounted on.
+
 		Main hardware placement is defined according to the local
 		reference frame related to the physical quantity the sensor
 		measures.
+
 		Given that the rotation matrix is defined in a board specific
 		way (platform data and / or device-tree), the main hardware
 		reference frame definition is left to the implementor's choice
 		(see below for a magnetometer example).
+
 		Applications should apply this rotation matrix to samples so
 		that when main hardware reference frame is aligned onto local
 		reference frame, then sensor chip reference frame is also
 		perfectly aligned with it.
+
 		Matrix is a 3x3 unitary matrix and typically looks like
 		[0, 1, 0; 1, 0, 0; 0, 0, -1]. Identity matrix
 		[1, 0, 0; 0, 1, 0; 0, 0, 1] means sensor chip and main hardware
@@ -1646,8 +1651,10 @@ Description:
 		For example, a mounting matrix for a magnetometer sensor informs
 		userspace about sensor chip's ORIENTATION relative to the main
 		hardware.
+
 		More specifically, main hardware orientation is defined with
 		respect to the LOCAL EARTH GEOMAGNETIC REFERENCE FRAME where :
+
 		* Y is in the ground plane and positive towards magnetic North ;
 		* X is in the ground plane, perpendicular to the North axis and
 		  positive towards the East ;
@@ -1656,13 +1663,16 @@ Description:
 		An implementor might consider that for a hand-held device, a
 		'natural' orientation would be 'front facing camera at the top'.
 		The main hardware reference frame could then be described as :
+
 		* Y is in the plane of the screen and is positive towards the
 		  top of the screen ;
 		* X is in the plane of the screen, perpendicular to Y axis, and
 		  positive towards the right hand side of the screen ;
 		* Z is perpendicular to the screen plane and positive out of the
 		  screen.
+
 		Another example for a quadrotor UAV might be :
+
 		* Y is in the plane of the propellers and positive towards the
 		  front-view camera;
 		* X is in the plane of the propellers, perpendicular to Y axis,
@@ -1704,6 +1714,7 @@ Description:
 		This interface is deprecated; please use the Counter subsystem.
 
 		A list of possible counting directions which are:
+
 		- "up"	: counter device is increasing.
 		- "down": counter device is decreasing.
 
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector b/Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector
index 2071f9bcfaa5..1c2a07f7a75e 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector
+++ b/Documentation/ABI/testing/sysfs-bus-iio-adc-envelope-detector
@@ -5,7 +5,8 @@ Contact:	Peter Rosin <peda@axentia.se>
 Description:
 		The DAC is used to find the peak level of an alternating
 		voltage input signal by a binary search using the output
-		of a comparator wired to an interrupt pin. Like so:
+		of a comparator wired to an interrupt pin. Like so::
+
 		                           _
 		                          | \
 		     input +------>-------|+ \
@@ -19,10 +20,12 @@ Description:
 		            |    irq|------<-------'
 		            |       |
 		            '-------'
+
 		The boolean invert attribute (0/1) should be set when the
 		input signal is centered around the maximum value of the
 		dac instead of zero. The envelope detector will search
 		from below in this case and will also invert the result.
+
 		The edge/level of the interrupt is also switched to its
 		opposite value.
 
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-cros-ec b/Documentation/ABI/testing/sysfs-bus-iio-cros-ec
index 6158f831c761..adf24c40126f 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-cros-ec
+++ b/Documentation/ABI/testing/sysfs-bus-iio-cros-ec
@@ -4,7 +4,7 @@ KernelVersion:	4.7
 Contact:	linux-iio@vger.kernel.org
 Description:
 		Writing '1' will perform a FOC (Fast Online Calibration). The
-                corresponding calibration offsets can be read from *_calibbias
+                corresponding calibration offsets can be read from `*_calibbias`
                 entries.
 
 What:		/sys/bus/iio/devices/iio:deviceX/location
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32
index 0e66ae9b0071..91439d6d60b5 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32
+++ b/Documentation/ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32
@@ -3,14 +3,20 @@ KernelVersion:	4.14
 Contact:	arnaud.pouliquen@st.com
 Description:
 		For audio purpose only.
+
 		Used by audio driver to set/get the spi input frequency.
+
 		This is mandatory if DFSDM is slave on SPI bus, to
 		provide information on the SPI clock frequency during runtime
 		Notice that the SPI frequency should be a multiple of sample
 		frequency to ensure the precision.
-		if DFSDM input is SPI master
+
+		If DFSDM input is SPI master:
+
 			Reading  SPI clkout frequency,
 			error on writing
+
 		If DFSDM input is SPI Slave:
+
 			Reading returns value previously set.
 			Writing value before starting conversions.
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
index ad2cc63e4bf8..73498ff666bd 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
+++ b/Documentation/ABI/testing/sysfs-bus-iio-lptimer-stm32
@@ -17,9 +17,11 @@ KernelVersion:	4.13
 Contact:	fabrice.gasnier@st.com
 Description:
 		Configure the device counter quadrature modes:
+
 		- non-quadrature:
 			Encoder IN1 input servers as the count input (up
 			direction).
+
 		- quadrature:
 			Encoder IN1 and IN2 inputs are mixed to get direction
 			and count.
@@ -35,23 +37,26 @@ KernelVersion:	4.13
 Contact:	fabrice.gasnier@st.com
 Description:
 		Configure the device encoder/counter active edge:
+
 		- rising-edge
 		- falling-edge
 		- both-edges
 
 		In non-quadrature mode, device counts up on active edge.
+
 		In quadrature mode, encoder counting scenarios are as follows:
-		----------------------------------------------------------------
+
+		+---------+----------+--------------------+--------------------+
 		| Active  | Level on |      IN1 signal    |     IN2 signal     |
-		| edge    | opposite |------------------------------------------
+		| edge    | opposite +----------+---------+----------+---------+
 		|         | signal   |  Rising  | Falling |  Rising  | Falling |
-		----------------------------------------------------------------
-		| Rising  | High ->  |   Down   |    -    |    Up    |    -    |
-		| edge    | Low  ->  |    Up    |    -    |   Down   |    -    |
-		----------------------------------------------------------------
-		| Falling | High ->  |    -     |    Up   |    -     |   Down  |
-		| edge    | Low  ->  |    -     |   Down  |    -     |    Up   |
-		----------------------------------------------------------------
-		| Both    | High ->  |   Down   |    Up   |    Up    |   Down  |
-		| edges   | Low  ->  |    Up    |   Down  |   Down   |    Up   |
-		----------------------------------------------------------------
+		+---------+----------+----------+---------+----------+---------+
+		| Rising  | High ->  |   Down   |    -    |   Up     |    -    |
+		| edge    | Low  ->  |   Up     |    -    |   Down   |    -    |
+		+---------+----------+----------+---------+----------+---------+
+		| Falling | High ->  |    -     |   Up    |    -     |   Down  |
+		| edge    | Low  ->  |    -     |   Down  |    -     |   Up    |
+		+---------+----------+----------+---------+----------+---------+
+		| Both    | High ->  |   Down   |   Up    |   Up     |   Down  |
+		| edges   | Low  ->  |   Up     |   Down  |   Down   |   Up    |
+		+---------+----------+----------+---------+----------+---------+
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-magnetometer-hmc5843 b/Documentation/ABI/testing/sysfs-bus-iio-magnetometer-hmc5843
index 6275e9f56e6c..13f099ef6a95 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-magnetometer-hmc5843
+++ b/Documentation/ABI/testing/sysfs-bus-iio-magnetometer-hmc5843
@@ -5,11 +5,16 @@ Contact:        linux-iio@vger.kernel.org
 Description:
                 Current configuration and available configurations
 		for the bias current.
-		normal		- Normal measurement configurations (default)
-		positivebias	- Positive bias configuration
-		negativebias	- Negative bias configuration
-		disabled	- Only available on HMC5983. Disables magnetic
+
+		============	  ============================================
+		normal		  Normal measurement configurations (default)
+		positivebias	  Positive bias configuration
+		negativebias	  Negative bias configuration
+		disabled	  Only available on HMC5983. Disables magnetic
 				  sensor and enables temperature sensor.
-		Note: The effect of this configuration may vary
-		according to the device. For exact documentation
-		check the device's datasheet.
+		============	  ============================================
+
+		Note:
+		  The effect of this configuration may vary
+		  according to the device. For exact documentation
+		  check the device's datasheet.
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-temperature-max31856 b/Documentation/ABI/testing/sysfs-bus-iio-temperature-max31856
index 3b3509a3ef2f..e5ef6d8e5da1 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-temperature-max31856
+++ b/Documentation/ABI/testing/sysfs-bus-iio-temperature-max31856
@@ -5,9 +5,12 @@ Description:
 		Open-circuit fault. The detection of open-circuit faults,
 		such as those caused by broken thermocouple wires.
 		Reading returns either '1' or '0'.
-		'1' = An open circuit such as broken thermocouple wires
-		      has been detected.
-		'0' = No open circuit or broken thermocouple wires are detected
+
+		===  =======================================================
+		'1'  An open circuit such as broken thermocouple wires
+		     has been detected.
+		'0'  No open circuit or broken thermocouple wires are detected.
+		===  =======================================================
 
 What:		/sys/bus/iio/devices/iio:deviceX/fault_ovuv
 KernelVersion:	5.1
@@ -18,7 +21,11 @@ Description:
 		cables by integrated MOSFETs at the T+ and T- inputs, and the
 		BIAS output. These MOSFETs turn off when the input voltage is
 		negative or greater than VDD.
+
 		Reading returns either '1' or '0'.
-		'1' = The input voltage is negative or greater than VDD.
-		'0' = The input voltage is positive and less than VDD (normal
-		state).
+
+		===  =======================================================
+		'1'  The input voltage is negative or greater than VDD.
+		'0'  The input voltage is positive and less than VDD (normal
+		     state).
+		===  =======================================================
diff --git a/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32 b/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
index b7259234ad70..a10a4de3e5fe 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
+++ b/Documentation/ABI/testing/sysfs-bus-iio-timer-stm32
@@ -3,67 +3,85 @@ KernelVersion:	4.11
 Contact:	benjamin.gaignard@st.com
 Description:
 		Reading returns the list possible master modes which are:
-		- "reset"     :	The UG bit from the TIMx_EGR register is
+
+
+		- "reset"
+				The UG bit from the TIMx_EGR register is
 				used as trigger output (TRGO).
-		- "enable"    : The Counter Enable signal CNT_EN is used
+		- "enable"
+				The Counter Enable signal CNT_EN is used
 				as trigger output.
-		- "update"    : The update event is selected as trigger output.
+		- "update"
+				The update event is selected as trigger output.
 				For instance a master timer can then be used
 				as a prescaler for a slave timer.
-		- "compare_pulse" : The trigger output send a positive pulse
-				    when the CC1IF flag is to be set.
-		- "OC1REF"    : OC1REF signal is used as trigger output.
-		- "OC2REF"    : OC2REF signal is used as trigger output.
-		- "OC3REF"    : OC3REF signal is used as trigger output.
-		- "OC4REF"    : OC4REF signal is used as trigger output.
+		- "compare_pulse"
+				The trigger output sends a positive pulse
+				when the CC1IF flag is to be set.
+		- "OC1REF"
+				OC1REF signal is used as trigger output.
+		- "OC2REF"
+				OC2REF signal is used as trigger output.
+		- "OC3REF"
+				OC3REF signal is used as trigger output.
+		- "OC4REF"
+				OC4REF signal is used as trigger output.
+
 		Additional modes (on TRGO2 only):
-		- "OC5REF"    : OC5REF signal is used as trigger output.
-		- "OC6REF"    : OC6REF signal is used as trigger output.
+
+		- "OC5REF"
+				OC5REF signal is used as trigger output.
+		- "OC6REF"
+				OC6REF signal is used as trigger output.
 		- "compare_pulse_OC4REF":
-		  OC4REF rising or falling edges generate pulses.
+				OC4REF rising or falling edges generate pulses.
 		- "compare_pulse_OC6REF":
-		  OC6REF rising or falling edges generate pulses.
+				OC6REF rising or falling edges generate pulses.
 		- "compare_pulse_OC4REF_r_or_OC6REF_r":
-		  OC4REF or OC6REF rising edges generate pulses.
+				OC4REF or OC6REF rising edges generate pulses.
 		- "compare_pulse_OC4REF_r_or_OC6REF_f":
-		  OC4REF rising or OC6REF falling edges generate pulses.
+				OC4REF rising or OC6REF falling edges generate
+				pulses.
 		- "compare_pulse_OC5REF_r_or_OC6REF_r":
-		  OC5REF or OC6REF rising edges generate pulses.
+				OC5REF or OC6REF rising edges generate pulses.
 		- "compare_pulse_OC5REF_r_or_OC6REF_f":
-		  OC5REF rising or OC6REF falling edges generate pulses.
+				OC5REF rising or OC6REF falling edges generate
+				pulses.
 
-		+-----------+   +-------------+            +---------+
-		| Prescaler +-> | Counter     |        +-> | Master  | TRGO(2)
-		+-----------+   +--+--------+-+        |-> | Control +-->
-		                   |        |          ||  +---------+
-		                +--v--------+-+ OCxREF ||  +---------+
-		                | Chx compare +----------> | Output  | ChX
-		                +-----------+-+         |  | Control +-->
-		                      .     |           |  +---------+
-		                      .     |           |    .
-		                +-----------v-+ OC6REF  |    .
-		                | Ch6 compare +---------+>
-		                +-------------+
+		::
 
-		Example with: "compare_pulse_OC4REF_r_or_OC6REF_r":
+		  +-----------+   +-------------+            +---------+
+		  | Prescaler +-> | Counter     |        +-> | Master  | TRGO(2)
+		  +-----------+   +--+--------+-+        |-> | Control +-->
+		                     |        |          ||  +---------+
+		                  +--v--------+-+ OCxREF ||  +---------+
+		                  | Chx compare +----------> | Output  | ChX
+		                  +-----------+-+         |  | Control +-->
+		                        .     |           |  +---------+
+		                        .     |           |    .
+		                  +-----------v-+ OC6REF  |    .
+		                  | Ch6 compare +---------+>
+		                  +-------------+
 
-		                X
-		              X   X
-		            X .   . X
-		          X   .   .   X
-		        X     .   .     X
-		count X .     .   .     . X
-		        .     .   .     .
-		        .     .   .     .
-		        +---------------+
-		OC4REF  |     .   .     |
-		      +-+     .   .     +-+
-		        .     +---+     .
-		OC6REF  .     |   |     .
-		      +-------+   +-------+
-		        +-+   +-+
-		TRGO2   | |   | |
-		      +-+ +---+ +---------+
+		Example with: "compare_pulse_OC4REF_r_or_OC6REF_r"::
+
+		                  X
+		                X   X
+		              X .   . X
+		            X   .   .   X
+		          X     .   .     X
+		  count X .     .   .     . X
+		          .     .   .     .
+		          .     .   .     .
+		          +---------------+
+		  OC4REF  |     .   .     |
+		        +-+     .   .     +-+
+		          .     +---+     .
+		  OC6REF  .     |   |     .
+		        +-------+   +-------+
+		          +-+   +-+
+		  TRGO2   | |   | |
+		        +-+ +---+ +---------+
 
 What:		/sys/bus/iio/devices/triggerX/master_mode
 KernelVersion:	4.11
@@ -91,6 +109,30 @@ Description:
 		When counting down the counter start from preset value
 		and fire event when reach 0.
 
+What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
+KernelVersion:	4.12
+Contact:	benjamin.gaignard@st.com
+Description:
+		Reading returns the list of possible quadrature modes.
+
+What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
+KernelVersion:	4.12
+Contact:	benjamin.gaignard@st.com
+Description:
+		Configure the device counter quadrature modes:
+
+		channel_A:
+			Encoder A input serves as the count input and B as
+			the UP/DOWN direction control input.
+
+		channel_B:
+			Encoder B input serves as the count input and A as
+			the UP/DOWN direction control input.
+
+		quadrature:
+			Encoder A and B inputs are mixed to get direction
+			and count with a scale of 0.25.
+
 What:		/sys/bus/iio/devices/iio:deviceX/in_count_enable_mode_available
 KernelVersion:	4.12
 Contact:	benjamin.gaignard@st.com
@@ -104,6 +146,7 @@ Description:
 		Configure the device counter enable modes, in all case
 		counting direction is set by in_count0_count_direction
 		attribute and the counter is clocked by the internal clock.
+
 		always:
 			Counter is always ON.
 
diff --git a/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc b/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
index 7fd2601c2831..a74252e580a5 100644
--- a/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
+++ b/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
@@ -9,11 +9,13 @@ Date:		June 2015
 KernelVersion:	4.3
 Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
 Description:	(RW) Configure MSC operating mode:
+
 		  - "single", for contiguous buffer mode (high-order alloc);
 		  - "multi", for multiblock mode;
 		  - "ExI", for DCI handler mode;
 		  - "debug", for debug mode;
 		  - any of the currently loaded buffer sinks.
+
 		If operating mode changes, existing buffer is deallocated,
 		provided there are no active users and tracing is not enabled,
 		otherwise the write will fail.
@@ -23,10 +25,12 @@ Date:		June 2015
 KernelVersion:	4.3
 Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
 Description:	(RW) Configure MSC buffer size for "single" or "multi" modes.
+
 		In single mode, this is a single number of pages, has to be
 		power of 2. In multiblock mode, this is a comma-separated list
 		of numbers of pages for each window to be allocated. Number of
 		windows is not limited.
+
 		Writing to this file deallocates existing buffer (provided
 		there are no active users and tracing is not enabled) and then
 		allocates a new one.
diff --git a/Documentation/ABI/testing/sysfs-bus-nfit b/Documentation/ABI/testing/sysfs-bus-nfit
index e4f76e7eab93..63ef0b9ecce7 100644
--- a/Documentation/ABI/testing/sysfs-bus-nfit
+++ b/Documentation/ABI/testing/sysfs-bus-nfit
@@ -1,4 +1,4 @@
-For all of the nmem device attributes under nfit/*, see the 'NVDIMM Firmware
+For all of the nmem device attributes under ``nfit/*``, see the 'NVDIMM Firmware
 Interface Table (NFIT)' section in the ACPI specification
 (http://www.uefi.org/specifications) for more details.
 
diff --git a/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats b/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
index 3c9a8c4a25eb..860db53037a5 100644
--- a/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
+++ b/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
@@ -1,6 +1,6 @@
-==========================
 PCIe Device AER statistics
-==========================
+--------------------------
+
 These attributes show up under all the devices that are AER capable. These
 statistical counters indicate the errors "as seen/reported by the device".
 Note that this may mean that if an endpoint is causing problems, the AER
@@ -17,19 +17,18 @@ Description:	List of correctable errors seen and reported by this
 		PCI device using ERR_COR. Note that since multiple errors may
 		be reported using a single ERR_COR message, thus
 		TOTAL_ERR_COR at the end of the file may not match the actual
-		total of all the errors in the file. Sample output:
--------------------------------------------------------------------------
-localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_correctable
-Receiver Error 2
-Bad TLP 0
-Bad DLLP 0
-RELAY_NUM Rollover 0
-Replay Timer Timeout 0
-Advisory Non-Fatal 0
-Corrected Internal Error 0
-Header Log Overflow 0
-TOTAL_ERR_COR 2
--------------------------------------------------------------------------
+		total of all the errors in the file. Sample output::
+
+		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_correctable
+		    Receiver Error 2
+		    Bad TLP 0
+		    Bad DLLP 0
+		    RELAY_NUM Rollover 0
+		    Replay Timer Timeout 0
+		    Advisory Non-Fatal 0
+		    Corrected Internal Error 0
+		    Header Log Overflow 0
+		    TOTAL_ERR_COR 2
 
 What:		/sys/bus/pci/devices/<dev>/aer_dev_fatal
 Date:		July 2018
@@ -39,28 +38,27 @@ Description:	List of uncorrectable fatal errors seen and reported by this
 		PCI device using ERR_FATAL. Note that since multiple errors may
 		be reported using a single ERR_FATAL message, thus
 		TOTAL_ERR_FATAL at the end of the file may not match the actual
-		total of all the errors in the file. Sample output:
--------------------------------------------------------------------------
-localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_fatal
-Undefined 0
-Data Link Protocol 0
-Surprise Down Error 0
-Poisoned TLP 0
-Flow Control Protocol 0
-Completion Timeout 0
-Completer Abort 0
-Unexpected Completion 0
-Receiver Overflow 0
-Malformed TLP 0
-ECRC 0
-Unsupported Request 0
-ACS Violation 0
-Uncorrectable Internal Error 0
-MC Blocked TLP 0
-AtomicOp Egress Blocked 0
-TLP Prefix Blocked Error 0
-TOTAL_ERR_FATAL 0
--------------------------------------------------------------------------
+		total of all the errors in the file. Sample output::
+
+		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_fatal
+		    Undefined 0
+		    Data Link Protocol 0
+		    Surprise Down Error 0
+		    Poisoned TLP 0
+		    Flow Control Protocol 0
+		    Completion Timeout 0
+		    Completer Abort 0
+		    Unexpected Completion 0
+		    Receiver Overflow 0
+		    Malformed TLP 0
+		    ECRC 0
+		    Unsupported Request 0
+		    ACS Violation 0
+		    Uncorrectable Internal Error 0
+		    MC Blocked TLP 0
+		    AtomicOp Egress Blocked 0
+		    TLP Prefix Blocked Error 0
+		    TOTAL_ERR_FATAL 0
 
 What:		/sys/bus/pci/devices/<dev>/aer_dev_nonfatal
 Date:		July 2018
@@ -70,32 +68,31 @@ Description:	List of uncorrectable nonfatal errors seen and reported by this
 		PCI device using ERR_NONFATAL. Note that since multiple errors
 		may be reported using a single ERR_FATAL message, thus
 		TOTAL_ERR_NONFATAL at the end of the file may not match the
-		actual total of all the errors in the file. Sample output:
--------------------------------------------------------------------------
-localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_nonfatal
-Undefined 0
-Data Link Protocol 0
-Surprise Down Error 0
-Poisoned TLP 0
-Flow Control Protocol 0
-Completion Timeout 0
-Completer Abort 0
-Unexpected Completion 0
-Receiver Overflow 0
-Malformed TLP 0
-ECRC 0
-Unsupported Request 0
-ACS Violation 0
-Uncorrectable Internal Error 0
-MC Blocked TLP 0
-AtomicOp Egress Blocked 0
-TLP Prefix Blocked Error 0
-TOTAL_ERR_NONFATAL 0
--------------------------------------------------------------------------
+		actual total of all the errors in the file. Sample output::
+
+		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_nonfatal
+		    Undefined 0
+		    Data Link Protocol 0
+		    Surprise Down Error 0
+		    Poisoned TLP 0
+		    Flow Control Protocol 0
+		    Completion Timeout 0
+		    Completer Abort 0
+		    Unexpected Completion 0
+		    Receiver Overflow 0
+		    Malformed TLP 0
+		    ECRC 0
+		    Unsupported Request 0
+		    ACS Violation 0
+		    Uncorrectable Internal Error 0
+		    MC Blocked TLP 0
+		    AtomicOp Egress Blocked 0
+		    TLP Prefix Blocked Error 0
+		    TOTAL_ERR_NONFATAL 0
 
-============================
 PCIe Rootport AER statistics
-============================
+----------------------------
+
 These attributes show up under only the rootports (or root complex event
 collectors) that are AER capable. These indicate the number of error messages as
 "reported to" the rootport. Please note that the rootports also transmit
diff --git a/Documentation/ABI/testing/sysfs-bus-rapidio b/Documentation/ABI/testing/sysfs-bus-rapidio
index 13208b27dd87..634ea207a50a 100644
--- a/Documentation/ABI/testing/sysfs-bus-rapidio
+++ b/Documentation/ABI/testing/sysfs-bus-rapidio
@@ -4,24 +4,27 @@ Description:
 		an individual subdirectory with the following name format of
 		device_name "nn:d:iiii", where:
 
-		nn   - two-digit hexadecimal ID of RapidIO network where the
+		====   ========================================================
+		nn     two-digit hexadecimal ID of RapidIO network where the
 		       device resides
-		d    - device type: 'e' - for endpoint or 's' - for switch
-		iiii - four-digit device destID for endpoints, or switchID for
+		d      device type: 'e' - for endpoint or 's' - for switch
+		iiii   four-digit device destID for endpoints, or switchID for
 		       switches
+		====   ========================================================
 
 		For example, below is a list of device directories that
 		represents a typical RapidIO network with one switch, one host,
 		and two agent endpoints, as it is seen by the enumerating host
-		(with destID = 1):
+		(with destID = 1)::
 
-		/sys/bus/rapidio/devices/00:e:0000
-		/sys/bus/rapidio/devices/00:e:0002
-		/sys/bus/rapidio/devices/00:s:0001
+		  /sys/bus/rapidio/devices/00:e:0000
+		  /sys/bus/rapidio/devices/00:e:0002
+		  /sys/bus/rapidio/devices/00:s:0001
 
-		NOTE: An enumerating or discovering endpoint does not create a
-		sysfs entry for itself, this is why an endpoint with destID=1 is
-		not shown in the list.
+		NOTE:
+		  An enumerating or discovering endpoint does not create a
+		  sysfs entry for itself, this is why an endpoint with destID=1
+		  is not shown in the list.
 
 Attributes Common for All RapidIO Devices
 -----------------------------------------
diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
index dd565c378b40..171127294674 100644
--- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
@@ -37,16 +37,18 @@ Contact:	thunderbolt-software@lists.01.org
 Description:	This attribute holds current Thunderbolt security level
 		set by the system BIOS. Possible values are:
 
-		none: All devices are automatically authorized
-		user: Devices are only authorized based on writing
-		      appropriate value to the authorized attribute
-		secure: Require devices that support secure connect at
-			minimum. User needs to authorize each device.
-		dponly: Automatically tunnel Display port (and USB). No
-			PCIe tunnels are created.
-		usbonly: Automatically tunnel USB controller of the
+		=======  ==================================================
+		none     All devices are automatically authorized
+		user     Devices are only authorized based on writing
+		         appropriate value to the authorized attribute
+		secure   Require devices that support secure connect at
+			 minimum. User needs to authorize each device.
+		dponly   Automatically tunnel Display port (and USB). No
+			 PCIe tunnels are created.
+		usbonly  Automatically tunnel USB controller of the
 			 connected Thunderbolt dock (and Display Port). All
 			 PCIe links downstream of the dock are removed.
+		=======  ==================================================
 
 What: /sys/bus/thunderbolt/devices/.../authorized
 Date:		Sep 2017
@@ -61,17 +63,23 @@ Description:	This attribute is used to authorize Thunderbolt devices
 		yet authorized.
 
 		Possible values are supported:
-		1: The device will be authorized and connected
+
+		==  ===========================================
+		1   The device will be authorized and connected
+		==  ===========================================
 
 		When key attribute contains 32 byte hex string the possible
 		values are:
-		1: The 32 byte hex string is added to the device NVM and
-		   the device is authorized.
-		2: Send a challenge based on the 32 byte hex string. If the
-		   challenge response from device is valid, the device is
-		   authorized. In case of failure errno will be ENOKEY if
-		   the device did not contain a key at all, and
-		   EKEYREJECTED if the challenge response did not match.
+
+		==  ========================================================
+		1   The 32 byte hex string is added to the device NVM and
+		    the device is authorized.
+		2   Send a challenge based on the 32 byte hex string. If the
+		    challenge response from device is valid, the device is
+		    authorized. In case of failure errno will be ENOKEY if
+		    the device did not contain a key at all, and
+		    EKEYREJECTED if the challenge response did not match.
+		==  ========================================================
 
 What: /sys/bus/thunderbolt/devices/.../boot
 Date:		Jun 2018
diff --git a/Documentation/ABI/testing/sysfs-bus-usb b/Documentation/ABI/testing/sysfs-bus-usb
index 614d216dff1d..e449b8374f6a 100644
--- a/Documentation/ABI/testing/sysfs-bus-usb
+++ b/Documentation/ABI/testing/sysfs-bus-usb
@@ -72,24 +72,27 @@ Description:
 		table at compile time. The format for the device ID is:
 		idVendor idProduct bInterfaceClass RefIdVendor RefIdProduct
 		The vendor ID and device ID fields are required, the
-		rest is optional. The Ref* tuple can be used to tell the
+		rest is optional. The `Ref*` tuple can be used to tell the
 		driver to use the same driver_data for the new device as
 		it is used for the reference device.
 		Upon successfully adding an ID, the driver will probe
-		for the device and attempt to bind to it.  For example:
-		# echo "8086 10f5" > /sys/bus/usb/drivers/foo/new_id
+		for the device and attempt to bind to it.  For example::
+
+		  # echo "8086 10f5" > /sys/bus/usb/drivers/foo/new_id
 
 		Here add a new device (0458:7045) using driver_data from
-		an already supported device (0458:704c):
-		# echo "0458 7045 0 0458 704c" > /sys/bus/usb/drivers/foo/new_id
+		an already supported device (0458:704c)::
+
+		  # echo "0458 7045 0 0458 704c" > /sys/bus/usb/drivers/foo/new_id
 
 		Reading from this file will list all dynamically added
 		device IDs in the same format, with one entry per
-		line. For example:
-		# cat /sys/bus/usb/drivers/foo/new_id
-		8086 10f5
-		dead beef 06
-		f00d cafe
+		line. For example::
+
+		  # cat /sys/bus/usb/drivers/foo/new_id
+		  8086 10f5
+		  dead beef 06
+		  f00d cafe
 
 		The list will be truncated at PAGE_SIZE bytes due to
 		sysfs restrictions.
@@ -209,6 +212,7 @@ Description:
 		advance, and behaves well according to the specification.
 		This attribute is a bit-field that controls the behavior of
 		a specific port:
+
 		 - Bit 0 of this field selects the "old" enumeration scheme,
 		   as it is considerably faster (it only causes one USB reset
 		   instead of 2).
@@ -233,10 +237,10 @@ Description:
 		poll() for monitoring changes to this value in user space.
 
 		Any time this value changes the corresponding hub device will send a
-		udev event with the following attributes:
+		udev event with the following attributes::
 
-		OVER_CURRENT_PORT=/sys/bus/usb/devices/.../(hub interface)/portX
-		OVER_CURRENT_COUNT=[current value of this sysfs attribute]
+		  OVER_CURRENT_PORT=/sys/bus/usb/devices/.../(hub interface)/portX
+		  OVER_CURRENT_COUNT=[current value of this sysfs attribute]
 
 What:		/sys/bus/usb/devices/.../(hub interface)/portX/usb3_lpm_permit
 Date:		November 2015
diff --git a/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg b/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
index 9ade80f81f96..2f86e4223bfc 100644
--- a/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
+++ b/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
@@ -12,8 +12,11 @@ KernelVersion:	2.6.26
 Contact:	Harrison Metzger <harrisonmetz@gmail.com>
 Description:	Controls the devices display mode.
 		For a 6 character display the values are
+
 			MSB 0x06; LSB 0x3F, and
+
 		for an 8 character display the values are
+
 			MSB 0x08; LSB 0xFF.
 
 What:		/sys/bus/usb/.../textmode
@@ -37,7 +40,7 @@ KernelVersion:	2.6.26
 Contact:	Harrison Metzger <harrisonmetz@gmail.com>
 Description:	Controls the decimal places on the device.
 		To set the nth decimal place, give this field
-		the value of 10 ** n. Assume this field has
+		the value of ``10 ** n``. Assume this field has
 		the value k and has 1 or more decimal places set,
 		to set the mth place (where m is not already set),
-		change this fields value to k + 10 ** m.
+		change this field's value to ``k + 10 ** m``.
diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
index 452dbe39270e..59fc804265db 100644
--- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
+++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
@@ -28,8 +28,9 @@ Description:
 		Writing UUID to this file will create mediated device of
 		type <type-id> for parent device <device>. This is a
 		write-only file.
-		For example:
-		# echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001" >	\
+		For example::
+
+		  # echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001" >	\
 		       /sys/devices/foo/mdev_supported_types/foo-1/create
 
 What:           /sys/.../mdev_supported_types/<type-id>/devices/
@@ -107,5 +108,6 @@ Description:
 		Writing '1' to this file destroys the mediated device. The
 		vendor driver can fail the remove() callback if that device
 		is active and the vendor driver doesn't support hot unplug.
-		Example:
-		# echo 1 > /sys/bus/mdev/devices/<UUID>/remove
+		Example::
+
+		  # echo 1 > /sys/bus/mdev/devices/<UUID>/remove
diff --git a/Documentation/ABI/testing/sysfs-class-cxl b/Documentation/ABI/testing/sysfs-class-cxl
index 7970e3713e70..a6f51a104c44 100644
--- a/Documentation/ABI/testing/sysfs-class-cxl
+++ b/Documentation/ABI/testing/sysfs-class-cxl
@@ -72,11 +72,16 @@ Description:    read/write
                 when performing the START_WORK ioctl. Only applicable when
                 running under hashed page table mmu.
                 Possible values:
-                        none: No prefaulting (default)
-                        work_element_descriptor: Treat the work element
-                                 descriptor as an effective address and
-                                 prefault what it points to.
-                        all: all segments process calling START_WORK maps.
+
+                =======================  ======================================
+		none			 No prefaulting (default)
+		work_element_descriptor  Treat the work element
+					 descriptor as an effective address and
+					 prefault what it points to.
+		all			 All segments that the process
+					 calling START_WORK maps.
+                =======================  ======================================
+
 Users:		https://github.com/ibm-capi/libcxl
 
 What:           /sys/class/cxl/<afu>/reset
diff --git a/Documentation/ABI/testing/sysfs-class-led b/Documentation/ABI/testing/sysfs-class-led
index 5f67f7ab277b..65e040978f73 100644
--- a/Documentation/ABI/testing/sysfs-class-led
+++ b/Documentation/ABI/testing/sysfs-class-led
@@ -50,7 +50,7 @@ Description:
 		You can change triggers in a similar manner to the way an IO
 		scheduler is chosen. Trigger specific parameters can appear in
 		/sys/class/leds/<led> once a given trigger is selected. For
-		their documentation see sysfs-class-led-trigger-*.
+		their documentation see `sysfs-class-led-trigger-*`.
 
 What:		/sys/class/leds/<led>/inverted
 Date:		January 2011
diff --git a/Documentation/ABI/testing/sysfs-class-led-driver-el15203000 b/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
index f520ece9b64c..69befe947d7e 100644
--- a/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
+++ b/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
@@ -6,127 +6,132 @@ Description:
 		The LEDs board supports only predefined patterns by firmware
 		for specific LEDs.
 
-		Breathing mode for Screen frame light tube:
-		"0 4000 1 4000"
+		Breathing mode for Screen frame light tube::
 
-		    ^
-		    |
-		Max-|     ---
-		    |    /   \
-		    |   /     \
-		    |  /       \     /
-		    | /         \   /
-		Min-|-           ---
-		    |
-		    0------4------8--> time (sec)
+		    "0 4000 1 4000"
 
-		Cascade mode for Pipe LED:
-		"1 800 2 800 4 800 8 800 16 800"
+			^
+			|
+		    Max-|     ---
+			|    /   \
+			|   /     \
+			|  /       \     /
+			| /         \   /
+		    Min-|-           ---
+			|
+			0------4------8--> time (sec)
 
-		      ^
-		      |
-		0 On -|----+                   +----+                   +---
-		      |    |                   |    |                   |
-		  Off-|    +-------------------+    +-------------------+
-		      |
-		1 On -|    +----+                   +----+
-		      |    |    |                   |    |
-		  Off |----+    +-------------------+    +------------------
-		      |
-		2 On -|         +----+                   +----+
-		      |         |    |                   |    |
-		  Off-|---------+    +-------------------+    +-------------
-		      |
-		3 On -|              +----+                   +----+
-		      |              |    |                   |    |
-		  Off-|--------------+    +-------------------+    +--------
-		      |
-		4 On -|                   +----+                   +----+
-		      |                   |    |                   |    |
-		  Off-|-------------------+    +-------------------+    +---
-		      |
-		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+		Cascade mode for Pipe LED::
 
-		Inverted cascade mode for Pipe LED:
-		"30 800 29 800 27 800 23 800 15 800"
+		    "1 800 2 800 4 800 8 800 16 800"
 
-		      ^
-		      |
-		0 On -|    +-------------------+    +-------------------+
-		      |    |                   |    |                   |
-		  Off-|----+                   +----+                   +---
-		      |
-		1 On -|----+    +-------------------+    +------------------
-		      |    |    |                   |    |
-		  Off |    +----+                   +----+
-		      |
-		2 On -|---------+    +-------------------+    +-------------
-		      |         |    |                   |    |
-		  Off-|         +----+                   +----+
-		      |
-		3 On -|--------------+    +-------------------+    +--------
-		      |              |    |                   |    |
-		  Off-|              +----+                   +----+
-		      |
-		4 On -|-------------------+    +-------------------+    +---
-		      |                   |    |                   |    |
-		  Off-|                   +----+                   +----+
-		      |
-		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+			^
+			|
+		  0 On -|----+                   +----+                   +---
+			|    |                   |    |                   |
+		    Off-|    +-------------------+    +-------------------+
+			|
+		  1 On -|    +----+                   +----+
+			|    |    |                   |    |
+		    Off |----+    +-------------------+    +------------------
+			|
+		  2 On -|         +----+                   +----+
+			|         |    |                   |    |
+		    Off-|---------+    +-------------------+    +-------------
+			|
+		  3 On -|              +----+                   +----+
+			|              |    |                   |    |
+		    Off-|--------------+    +-------------------+    +--------
+			|
+		  4 On -|                   +----+                   +----+
+			|                   |    |                   |    |
+		    Off-|-------------------+    +-------------------+    +---
+			|
+			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
 
-		Bounce mode for Pipe LED:
-		"1 800 2 800 4 800 8 800 16 800 16 800 8 800 4 800 2 800 1 800"
+		Inverted cascade mode for Pipe LED::
 
-		      ^
-		      |
-		0 On -|----+                                       +--------
-		      |    |                                       |
-		  Off-|    +---------------------------------------+
-		      |
-		1 On -|    +----+                             +----+
-		      |    |    |                             |    |
-		  Off |----+    +-----------------------------+    +--------
-		      |
-		2 On -|         +----+                   +----+
-		      |         |    |                   |    |
-		  Off-|---------+    +-------------------+    +-------------
-		      |
-		3 On -|              +----+         +----+
-		      |              |    |         |    |
-		  Off-|--------------+    +---------+    +------------------
-		      |
-		4 On -|                   +---------+
-		      |                   |         |
-		  Off-|-------------------+         +-----------------------
-		      |
-		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+		    "30 800 29 800 27 800 23 800 15 800"
 
-		Inverted bounce mode for Pipe LED:
-		"30 800 29 800 27 800 23 800 15 800 15 800 23 800 27 800 29 800 30 800"
+			^
+			|
+		  0 On -|    +-------------------+    +-------------------+
+			|    |                   |    |                   |
+		    Off-|----+                   +----+                   +---
+			|
+		  1 On -|----+    +-------------------+    +------------------
+			|    |    |                   |    |
+		    Off |    +----+                   +----+
+			|
+		  2 On -|---------+    +-------------------+    +-------------
+			|         |    |                   |    |
+		    Off-|         +----+                   +----+
+			|
+		  3 On -|--------------+    +-------------------+    +--------
+			|              |    |                   |    |
+		    Off-|              +----+                   +----+
+			|
+		  4 On -|-------------------+    +-------------------+    +---
+			|                   |    |                   |    |
+		    Off-|                   +----+                   +----+
+			|
+			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
 
-		      ^
-		      |
-		0 On -|    +---------------------------------------+
-		      |    |                                       |
-		  Off-|----+                                       +--------
-		      |
-		1 On -|----+    +-----------------------------+    +--------
-		      |    |    |                             |    |
-		  Off |    +----+                             +----+
-		      |
-		2 On -|---------+    +-------------------+    +-------------
-		      |         |    |                   |    |
-		  Off-|         +----+                   +----+
-		      |
-		3 On -|--------------+    +---------+    +------------------
-		      |              |    |         |    |
-		  Off-|              +----+         +----+
-		      |
-		4 On -|-------------------+         +-----------------------
-		      |                   |         |
-		  Off-|                   +---------+
-		      |
-		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+		Bounce mode for Pipe LED::
+
+		    "1 800 2 800 4 800 8 800 16 800 16 800 8 800 4 800 2 800 1 800"
+
+			^
+			|
+		  0 On -|----+                                       +--------
+			|    |                                       |
+		    Off-|    +---------------------------------------+
+			|
+		  1 On -|    +----+                             +----+
+			|    |    |                             |    |
+		    Off |----+    +-----------------------------+    +--------
+			|
+		  2 On -|         +----+                   +----+
+			|         |    |                   |    |
+		    Off-|---------+    +-------------------+    +-------------
+			|
+		  3 On -|              +----+         +----+
+			|              |    |         |    |
+		    Off-|--------------+    +---------+    +------------------
+			|
+		  4 On -|                   +---------+
+			|                   |         |
+		    Off-|-------------------+         +-----------------------
+			|
+			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
+
+		Inverted bounce mode for Pipe LED::
+
+		    "30 800 29 800 27 800 23 800 15 800 15 800 23 800 27 800 29 800 30 800"
+
+			^
+			|
+		  0 On -|    +---------------------------------------+
+			|    |                                       |
+		    Off-|----+                                       +--------
+			|
+		  1 On -|----+    +-----------------------------+    +--------
+			|    |    |                             |    |
+		    Off |    +----+                             +----+
+			|
+		  2 On -|---------+    +-------------------+    +-------------
+			|         |    |                   |    |
+		    Off-|         +----+                   +----+
+			|
+		  3 On -|--------------+    +---------+    +------------------
+			|              |    |         |    |
+		    Off-|              +----+         +----+
+			|
+		  4 On -|-------------------+         +-----------------------
+			|                   |         |
+		    Off-|                   +---------+
+			|
+			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
 
 What:		/sys/class/leds/<led>/repeat
 Date:		September 2019
diff --git a/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx b/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
index 45b1e605d355..215482379580 100644
--- a/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
+++ b/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
@@ -12,8 +12,8 @@ Description:
 		format, we should set brightness as 0 for rise stage, fall
 		stage and low stage.
 
-		Min stage duration: 125 ms
-		Max stage duration: 31875 ms
+		- Min stage duration: 125 ms
+		- Max stage duration: 31875 ms
 
 		Since the stage duration step is 125 ms, the duration should be
 		a multiplier of 125, like 125ms, 250ms, 375ms, 500ms ... 31875ms.
diff --git a/Documentation/ABI/testing/sysfs-class-mic b/Documentation/ABI/testing/sysfs-class-mic
index 6ef682603179..bd0e780c3760 100644
--- a/Documentation/ABI/testing/sysfs-class-mic
+++ b/Documentation/ABI/testing/sysfs-class-mic
@@ -41,24 +41,33 @@ Description:
 		When read, this entry provides the current state of an Intel
 		MIC device in the context of the card OS. Possible values that
 		will be read are:
-		"ready" - The MIC device is ready to boot the card OS. On
-		reading this entry after an OSPM resume, a "boot" has to be
-		written to this entry if the card was previously shutdown
-		during OSPM suspend.
-		"booting" - The MIC device has initiated booting a card OS.
-		"online" - The MIC device has completed boot and is online
-		"shutting_down" - The card OS is shutting down.
-		"resetting" - A reset has been initiated for the MIC device
-		"reset_failed" - The MIC device has failed to reset.
+
+
+		===============  ===============================================
+		"ready"		 The MIC device is ready to boot the card OS.
+				 On reading this entry after an OSPM resume,
+				 a "boot" has to be written to this entry if
+				 the card was previously shutdown during OSPM
+				 suspend.
+		"booting"	 The MIC device has initiated booting a card OS.
+		"online"	 The MIC device has completed boot and is online.
+		"shutting_down"	 The card OS is shutting down.
+		"resetting"	 A reset has been initiated for the MIC device.
+		"reset_failed"	 The MIC device has failed to reset.
+		===============  ===============================================
 
 		When written, this sysfs entry triggers different state change
 		operations depending upon the current state of the card OS.
 		Acceptable values are:
-		"boot" - Boot the card OS image specified by the combination
-			 of firmware, ramdisk, cmdline and bootmode
-			sysfs entries.
-		"reset" - Initiates device reset.
-		"shutdown" - Initiates card OS shutdown.
+
+
+		==========  ===================================================
+		"boot"      Boot the card OS image specified by the combination
+			    of firmware, ramdisk, cmdline and bootmode
+			    sysfs entries.
+		"reset"     Initiates device reset.
+		"shutdown"  Initiates card OS shutdown.
+		==========  ===================================================
 
 What:		/sys/class/mic/mic(x)/shutdown_status
 Date:		October 2013
@@ -69,12 +78,15 @@ Description:
 		OS can shutdown because of various reasons. When read, this
 		entry provides the status on why the card OS was shutdown.
 		Possible values are:
-		"nop" -  shutdown status is not applicable, when the card OS is
-			"online"
-		"crashed" - Shutdown because of a HW or SW crash.
-		"halted" - Shutdown because of a halt command.
-		"poweroff" - Shutdown because of a poweroff command.
-		"restart" - Shutdown because of a restart command.
+
+		==========  ===================================================
+		"nop"       Shutdown status is not applicable when the
+			    card OS is "online".
+		"crashed"   Shutdown because of a HW or SW crash.
+		"halted"    Shutdown because of a halt command.
+		"poweroff"  Shutdown because of a poweroff command.
+		"restart"   Shutdown because of a restart command.
+		==========  ===================================================
 
 What:		/sys/class/mic/mic(x)/cmdline
 Date:		October 2013
diff --git a/Documentation/ABI/testing/sysfs-class-ocxl b/Documentation/ABI/testing/sysfs-class-ocxl
index ae1276efa45a..bf33f4fda58f 100644
--- a/Documentation/ABI/testing/sysfs-class-ocxl
+++ b/Documentation/ABI/testing/sysfs-class-ocxl
@@ -11,8 +11,11 @@ Contact:	linuxppc-dev@lists.ozlabs.org
 Description:	read only
 		Number of contexts for the AFU, in the format <n>/<max>
 		where:
+
+			====	===============================================
 			n:	number of currently active contexts, for debug
 			max:	maximum number of contexts supported by the AFU
+			====	===============================================
 
 What:		/sys/class/ocxl/<afu name>/pp_mmio_size
 Date:		January 2018
diff --git a/Documentation/ABI/testing/sysfs-class-power b/Documentation/ABI/testing/sysfs-class-power
index dbccb2fcd7ce..d4319a04c302 100644
--- a/Documentation/ABI/testing/sysfs-class-power
+++ b/Documentation/ABI/testing/sysfs-class-power
@@ -1,4 +1,4 @@
-===== General Properties =====
+**General Properties**
 
 What:		/sys/class/power_supply/<supply_name>/manufacturer
 Date:		May 2007
@@ -72,6 +72,7 @@ Description:
 		critically low).
 
 		Access: Read, Write
+
 		Valid values: 0 - 100 (percent)
 
 What:		/sys/class/power_supply/<supply_name>/capacity_error_margin
@@ -96,7 +97,9 @@ Description:
 		Coarse representation of battery capacity.
 
 		Access: Read
-		Valid values: "Unknown", "Critical", "Low", "Normal", "High",
+
+		Valid values:
+			      "Unknown", "Critical", "Low", "Normal", "High",
 			      "Full"
 
 What:		/sys/class/power_supply/<supply_name>/current_avg
@@ -139,6 +142,7 @@ Description:
 		throttling for thermal cooling or improving battery health.
 
 		Access: Read, Write
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/charge_control_limit_max
@@ -148,6 +152,7 @@ Description:
 		Maximum legal value for the charge_control_limit property.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/charge_control_start_threshold
@@ -168,6 +173,7 @@ Description:
 		stop.
 
 		Access: Read, Write
+
 		Valid values: 0 - 100 (percent)
 
 What:		/sys/class/power_supply/<supply_name>/charge_type
@@ -183,7 +189,9 @@ Description:
 		different algorithm.
 
 		Access: Read, Write
-		Valid values: "Unknown", "N/A", "Trickle", "Fast", "Standard",
+
+		Valid values:
+			      "Unknown", "N/A", "Trickle", "Fast", "Standard",
 			      "Adaptive", "Custom"
 
 What:		/sys/class/power_supply/<supply_name>/charge_term_current
@@ -194,6 +202,7 @@ Description:
 		when the battery is considered full and charging should end.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/health
@@ -204,7 +213,9 @@ Description:
 		functionality.
 
 		Access: Read
-		Valid values: "Unknown", "Good", "Overheat", "Dead",
+
+		Valid values:
+			      "Unknown", "Good", "Overheat", "Dead",
 			      "Over voltage", "Unspecified failure", "Cold",
 			      "Watchdog timer expire", "Safety timer expire",
 			      "Over current", "Calibration required", "Warm",
@@ -218,6 +229,7 @@ Description:
 		for a battery charge cycle.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/present
@@ -227,9 +239,13 @@ Description:
 		Reports whether a battery is present or not in the system.
 
 		Access: Read
+
 		Valid values:
+
+			== =======
 			0: Absent
 			1: Present
+			== =======
 
 What:		/sys/class/power_supply/<supply_name>/status
 Date:		May 2007
@@ -240,7 +256,9 @@ Description:
 		used to enable/disable charging to the battery.
 
 		Access: Read, Write
-		Valid values: "Unknown", "Charging", "Discharging",
+
+		Valid values:
+			      "Unknown", "Charging", "Discharging",
 			      "Not charging", "Full"
 
 What:		/sys/class/power_supply/<supply_name>/technology
@@ -250,7 +268,9 @@ Description:
 		Describes the battery technology supported by the supply.
 
 		Access: Read
-		Valid values: "Unknown", "NiMH", "Li-ion", "Li-poly", "LiFe",
+
+		Valid values:
+			      "Unknown", "NiMH", "Li-ion", "Li-poly", "LiFe",
 			      "NiCd", "LiMn"
 
 What:		/sys/class/power_supply/<supply_name>/temp
@@ -260,6 +280,7 @@ Description:
 		Reports the current TBAT battery temperature reading.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_alert_max
@@ -274,6 +295,7 @@ Description:
 		critically high, and charging has stopped).
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_alert_min
@@ -289,6 +311,7 @@ Description:
 		remedy the situation).
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_max
@@ -299,6 +322,7 @@ Description:
 		charging.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_min
@@ -309,6 +333,7 @@ Description:
 		charging.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/voltage_avg,
@@ -320,6 +345,7 @@ Description:
 		which they average readings to smooth out the reported value.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What:		/sys/class/power_supply/<supply_name>/voltage_max,
@@ -330,6 +356,7 @@ Description:
 		during charging.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What:		/sys/class/power_supply/<supply_name>/voltage_min,
@@ -340,6 +367,7 @@ Description:
 		during discharging.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What:		/sys/class/power_supply/<supply_name>/voltage_now,
@@ -350,9 +378,10 @@ Description:
 		This value is not averaged/smoothed.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
-===== USB Properties =====
+**USB Properties**
 
 What: 		/sys/class/power_supply/<supply_name>/current_avg
 Date:		May 2007
@@ -363,6 +392,7 @@ Description:
 		average readings to smooth out the reported value.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 
@@ -373,6 +403,7 @@ Description:
 		Reports the maximum IBUS current the supply can support.
 
 		Access: Read
+
 		Valid values: Represented in microamps
 
 What: 		/sys/class/power_supply/<supply_name>/current_now
@@ -385,6 +416,7 @@ Description:
 		within the reported min/max range.
 
 		Access: Read, Write
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/input_current_limit
@@ -399,6 +431,7 @@ Description:
 		solved using power limit use input_current_limit.
 
 		Access: Read, Write
+
 		Valid values: Represented in microamps
 
 What:		/sys/class/power_supply/<supply_name>/input_voltage_limit
@@ -441,10 +474,14 @@ Description:
 		USB supply so voltage and current can be controlled).
 
 		Access: Read, Write
+
 		Valid values:
+
+			== ==================================================
 			0: Offline
 			1: Online Fixed - Fixed Voltage Supply
 			2: Online Programmable - Programmable Voltage Supply
+			== ==================================================
 
 What:		/sys/class/power_supply/<supply_name>/temp
 Date:		May 2007
@@ -455,6 +492,7 @@ Description:
 		TJUNC temperature of an IC)
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_alert_max
@@ -470,6 +508,7 @@ Description:
 		remedy the situation).
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_alert_min
@@ -485,6 +524,7 @@ Description:
 		accordingly to remedy the situation).
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_max
@@ -494,6 +534,7 @@ Description:
 		Reports the maximum allowed supply temperature for operation.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What:		/sys/class/power_supply/<supply_name>/temp_min
@@ -503,6 +544,7 @@ Description:
 		Reports the mainimum allowed supply temperature for operation.
 
 		Access: Read
+
 		Valid values: Represented in 1/10 Degrees Celsius
 
 What: 		/sys/class/power_supply/<supply_name>/usb_type
@@ -514,7 +556,9 @@ Description:
 		is attached.
 
 		Access: Read-Only
-		Valid values: "Unknown", "SDP", "DCP", "CDP", "ACA", "C", "PD",
+
+		Valid values:
+			      "Unknown", "SDP", "DCP", "CDP", "ACA", "C", "PD",
 			      "PD_DRP", "PD_PPS", "BrickID"
 
 What: 		/sys/class/power_supply/<supply_name>/voltage_max
@@ -524,6 +568,7 @@ Description:
 		Reports the maximum VBUS voltage the supply can support.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What: 		/sys/class/power_supply/<supply_name>/voltage_min
@@ -533,6 +578,7 @@ Description:
 		Reports the minimum VBUS voltage the supply can support.
 
 		Access: Read
+
 		Valid values: Represented in microvolts
 
 What: 		/sys/class/power_supply/<supply_name>/voltage_now
@@ -545,9 +591,10 @@ Description:
 		within the reported min/max range.
 
 		Access: Read, Write
+
 		Valid values: Represented in microvolts
 
-===== Device Specific Properties =====
+**Device Specific Properties**
 
 What:		/sys/class/power/ds2760-battery.*/charge_now
 Date:		May 2010
@@ -581,6 +628,7 @@ Description:
 		will drop to 0 A) and will trigger interrupt.
 
 		Valid values:
+
 		- 5, 6 or 7 (hours),
 		- 0: disabled.
 
@@ -595,6 +643,7 @@ Description:
 		will drop to 0 A) and will trigger interrupt.
 
 		Valid values:
+
 		- 4 - 16 (hours), step by 2 (rounded down)
 		- 0: disabled.
 
@@ -609,6 +658,7 @@ Description:
 		interrupt and start top-off charging mode.
 
 		Valid values:
+
 		- 100000 - 200000 (microamps), step by 25000 (rounded down)
 		- 200000 - 350000 (microamps), step by 50000 (rounded down)
 		- 0: disabled.
@@ -624,6 +674,7 @@ Description:
 		will drop to 0 A) and will trigger interrupt.
 
 		Valid values:
+
 		- 0 - 70 (minutes), step by 10 (rounded down)
 
 What:		/sys/class/power_supply/bq24257-charger/ovp_voltage
@@ -637,6 +688,7 @@ Description:
 		device datasheet for details.
 
 		Valid values:
+
 		- 6000000, 6500000, 7000000, 8000000, 9000000, 9500000, 10000000,
 		  10500000 (all uV)
 
@@ -652,6 +704,7 @@ Description:
 		lower than the set value. See device datasheet for details.
 
 		Valid values:
+
 		- 4200000, 4280000, 4360000, 4440000, 4520000, 4600000, 4680000,
 		  4760000 (all uV)
 
@@ -666,6 +719,7 @@ Description:
 		the charger operates normally. See device datasheet for details.
 
 		Valid values:
+
 		- 1: enabled
 		- 0: disabled
 
@@ -681,6 +735,7 @@ Description:
 		from the system. See device datasheet for details.
 
 		Valid values:
+
 		- 1: enabled
 		- 0: disabled
 
diff --git a/Documentation/ABI/testing/sysfs-class-power-twl4030 b/Documentation/ABI/testing/sysfs-class-power-twl4030
index b4fd32d210c5..7ac36dba87bc 100644
--- a/Documentation/ABI/testing/sysfs-class-power-twl4030
+++ b/Documentation/ABI/testing/sysfs-class-power-twl4030
@@ -4,18 +4,20 @@ Description:
 	Writing to this can disable charging.
 
 	Possible values are:
-		"auto" - draw power as appropriate for detected
-			 power source and battery status.
-		"off"  - do not draw any power.
-		"continuous"
-		       - activate mode described as "linear" in
-		         TWL data sheets.  This uses whatever
-			 current is available and doesn't switch off
-			 when voltage drops.
 
-			 This is useful for unstable power sources
-			 such as bicycle dynamo, but care should
-			 be taken that battery is not over-charged.
+		=============	===========================================
+		"auto" 		draw power as appropriate for detected
+				power source and battery status.
+		"off"  		do not draw any power.
+		"continuous"	activate mode described as "linear" in
+				TWL data sheets.  This uses whatever
+				current is available and doesn't switch off
+				when voltage drops.
+
+				This is useful for unstable power sources
+				such as bicycle dynamo, but care should
+				be taken that battery is not over-charged.
+		=============	===========================================
 
 What: /sys/class/power_supply/twl4030_ac/mode
 Description:
@@ -23,6 +25,9 @@ Description:
 	Writing to this can disable charging.
 
 	Possible values are:
-		"auto" - draw power as appropriate for detected
-			 power source and battery status.
-		"off"  - do not draw any power.
+
+		======	===========================================
+		"auto"	draw power as appropriate for detected
+			power source and battery status.
+		"off"	do not draw any power.
+		======	===========================================
diff --git a/Documentation/ABI/testing/sysfs-class-rc b/Documentation/ABI/testing/sysfs-class-rc
index 6c0d6c8cb911..9c8ff7910858 100644
--- a/Documentation/ABI/testing/sysfs-class-rc
+++ b/Documentation/ABI/testing/sysfs-class-rc
@@ -21,15 +21,22 @@ KernelVersion:	2.6.36
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Reading this file returns a list of available protocols,
-		something like:
+		something like::
+
 		    "rc5 [rc6] nec jvc [sony]"
+
 		Enabled protocols are shown in [] brackets.
+
 		Writing "+proto" will add a protocol to the list of enabled
 		protocols.
+
 		Writing "-proto" will remove a protocol from the list of enabled
 		protocols.
+
 		Writing "proto" will enable only "proto".
+
 		Writing "none" will disable all protocols.
+
 		Write fails with EINVAL if an invalid protocol combination or
 		unknown protocol name is used.
 
@@ -39,11 +46,13 @@ KernelVersion:	3.15
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Sets the scancode filter expected value.
+
 		Use in combination with /sys/class/rc/rcN/filter_mask to set the
 		expected value of the bits set in the filter mask.
 		If the hardware supports it then scancodes which do not match
 		the filter will be ignored. Otherwise the write will fail with
 		an error.
+
 		This value may be reset to 0 if the current protocol is altered.
 
 What:		/sys/class/rc/rcN/filter_mask
@@ -56,9 +65,11 @@ Description:
 		of the scancode which should be compared against the expected
 		value. A value of 0 disables the filter to allow all valid
 		scancodes to be processed.
+
 		If the hardware supports it then scancodes which do not match
 		the filter will be ignored. Otherwise the write will fail with
 		an error.
+
 		This value may be reset to 0 if the current protocol is altered.
 
 What:		/sys/class/rc/rcN/wakeup_protocols
@@ -67,15 +78,22 @@ KernelVersion:	4.11
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Reading this file returns a list of available protocols to use
-		for the wakeup filter, something like:
+		for the wakeup filter, something like::
+
 		    "rc-5 nec nec-x rc-6-0 rc-6-6a-24 [rc-6-6a-32] rc-6-mce"
+
 		Note that protocol variants are listed, so "nec", "sony",
 		"rc-5", "rc-6" have their different bit length encodings
 		listed if available.
+
 		The enabled wakeup protocol is shown in [] brackets.
+
 		Only one protocol can be selected at a time.
+
 		Writing "proto" will use "proto" for wakeup events.
+
 		Writing "none" will disable wakeup.
+
 		Write fails with EINVAL if an invalid protocol combination or
 		unknown protocol name is used, or if wakeup is not supported by
 		the hardware.
@@ -86,13 +104,17 @@ KernelVersion:	3.15
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Sets the scancode wakeup filter expected value.
+
 		Use in combination with /sys/class/rc/rcN/wakeup_filter_mask to
 		set the expected value of the bits set in the wakeup filter mask
 		to trigger a system wake event.
+
 		If the hardware supports it and wakeup_filter_mask is not 0 then
 		scancodes which match the filter will wake the system from e.g.
 		suspend to RAM or power off.
+
 		Otherwise the write will fail with an error.
+
 		This value may be reset to 0 if the wakeup protocol is altered.
 
 What:		/sys/class/rc/rcN/wakeup_filter_mask
@@ -101,11 +123,15 @@ KernelVersion:	3.15
 Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
 Description:
 		Sets the scancode wakeup filter mask of bits to compare.
+
 		Use in combination with /sys/class/rc/rcN/wakeup_filter to set
 		the bits of the scancode which should be compared against the
 		expected value to trigger a system wake event.
+
 		If the hardware supports it and wakeup_filter_mask is not 0 then
 		scancodes which match the filter will wake the system from e.g.
 		suspend to RAM or power off.
+
 		Otherwise the write will fail with an error.
+
 		This value may be reset to 0 if the wakeup protocol is altered.
diff --git a/Documentation/ABI/testing/sysfs-class-scsi_host b/Documentation/ABI/testing/sysfs-class-scsi_host
index bafc59fd7b69..7c98d8f43c45 100644
--- a/Documentation/ABI/testing/sysfs-class-scsi_host
+++ b/Documentation/ABI/testing/sysfs-class-scsi_host
@@ -56,8 +56,9 @@ Description:
 		management) on top, which makes it match the Windows IRST (Intel
 		Rapid Storage Technology) driver settings. This setting is also
 		close to min_power, except that:
+
 		a) It does not use host-initiated slumber mode, but it does
-		allow device-initiated slumber
+		   allow device-initiated slumber
 		b) It does not enable low power device sleep mode (DevSlp).
 
 What:		/sys/class/scsi_host/hostX/em_message
@@ -70,8 +71,8 @@ Description:
 		protocol, writes and reads correspond to the LED message format
 		as defined in the AHCI spec.
 
-		The user must turn sw_activity (under /sys/block/*/device/) OFF
-		it they wish to control the activity LED via the em_message
+		The user must turn sw_activity (under `/sys/block/*/device/`)
+		OFF if they wish to control the activity LED via the em_message
 		file.
 
 		em_message_type: (RO) Displays the current enclosure management
diff --git a/Documentation/ABI/testing/sysfs-class-typec b/Documentation/ABI/testing/sysfs-class-typec
index b834671522d6..b7794e02ad20 100644
--- a/Documentation/ABI/testing/sysfs-class-typec
+++ b/Documentation/ABI/testing/sysfs-class-typec
@@ -40,10 +40,13 @@ Description:
 		attribute will not return until the operation has finished.
 
 		Valid values:
-		- source (The port will behave as source only DFP port)
-		- sink (The port will behave as sink only UFP port)
-		- dual (The port will behave as dual-role-data and
+
+		======  ==============================================
+		source  (The port will behave as source only DFP port)
+		sink    (The port will behave as sink only UFP port)
+		dual    (The port will behave as dual-role-data and
 			dual-role-power port)
+		======  ==============================================
 
 What:		/sys/class/typec/<port>/vconn_source
 Date:		April 2017
@@ -59,6 +62,7 @@ Description:
 		generates uevent KOBJ_CHANGE.
 
 		Valid values:
+
 		- "no" when the port is not the VCONN Source
 		- "yes" when the port is the VCONN Source
 
@@ -72,6 +76,7 @@ Description:
 		power operation mode should show "usb_power_delivery".
 
 		Valid values:
+
 		- default
 		- 1.5A
 		- 3.0A
@@ -191,6 +196,7 @@ Date:		April 2017
 Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
 Description:
 		Shows type of the plug on the cable:
+
 		- type-a - Standard A
 		- type-b - Standard B
 		- type-c
diff --git a/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD b/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
index 7e43cdce9a52..f7b360a61b21 100644
--- a/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
+++ b/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
@@ -7,6 +7,7 @@ Description:
 		(RO) Hexadecimal bitmask of the TAD attributes are reported by
 		the platform firmware (see ACPI 6.2, section 9.18.2):
 
+		======= ======================================================
 		BIT(0): AC wakeup implemented if set
 		BIT(1): DC wakeup implemented if set
 		BIT(2): Get/set real time features implemented if set
@@ -16,6 +17,7 @@ Description:
 		BIT(6): The AC timer wakes up from S5 if set
 		BIT(7): The DC timer wakes up from S4 if set
 		BIT(8): The DC timer wakes up from S5 if set
+		======= ======================================================
 
 		The other bits are reserved.
 
@@ -62,9 +64,11 @@ Description:
 		timer status with the following meaning of bits (see ACPI 6.2,
 		Section 9.18.5):
 
+		======= ======================================================
 		Bit(0): The timer has expired if set.
 		Bit(1): The timer has woken up the system from a sleep state
 		        (S3 or S4/S5 if supported) if set.
+		======= ======================================================
 
 		The other bits are reserved.
 
diff --git a/Documentation/ABI/testing/sysfs-devices-platform-docg3 b/Documentation/ABI/testing/sysfs-devices-platform-docg3
index 8aa36716882f..378c42694bfb 100644
--- a/Documentation/ABI/testing/sysfs-devices-platform-docg3
+++ b/Documentation/ABI/testing/sysfs-devices-platform-docg3
@@ -9,8 +9,10 @@ Description:
 		The protection has information embedded whether it blocks reads,
 		writes or both.
 		The result is:
-		0 -> the DPS is not keylocked
-		1 -> the DPS is keylocked
+
+		- 0 -> the DPS is not keylocked
+		- 1 -> the DPS is keylocked
+
 Users:		None identified so far.
 
 What:		/sys/devices/platform/docg3/f[0-3]_dps[01]_protection_key
@@ -27,8 +29,12 @@ Description:
 		Entering the correct value toggle the lock, and can be observed
 		through f[0-3]_dps[01]_is_keylocked.
 		Possible values are:
+
 			- 8 bytes
+
 		Typical values are:
+
 			- "00000000"
 			- "12345678"
+
 Users:		None identified so far.
diff --git a/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb b/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
index 2107082426da..e45ac2e865d5 100644
--- a/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
+++ b/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
@@ -17,10 +17,10 @@ Description:
 		to overlay planes.
 
 		Selects the composition mode for the overlay. Possible values
-		are
+		are:
 
-		0 - Alpha Blending
-		1 - ROP3
+		- 0 - Alpha Blending
+		- 1 - ROP3
 
 What:		/sys/devices/platform/sh_mobile_lcdc_fb.[0-3]/graphics/fb[0-9]/ovl_position
 Date:		May 2012
@@ -30,7 +30,7 @@ Description:
 		to overlay planes.
 
 		Stores the x,y overlay position on the display in pixels. The
-		position format is `[0-9]+,[0-9]+'.
+		position format is `[0-9]+,[0-9]+`.
 
 What:		/sys/devices/platform/sh_mobile_lcdc_fb.[0-3]/graphics/fb[0-9]/ovl_rop3
 Date:		May 2012
diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
index b555df825447..274c337ec6a9 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
+++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
@@ -151,23 +151,28 @@ Description:
 		The processor idle states which are available for use have the
 		following attributes:
 
-		name: (RO) Name of the idle state (string).
+		======== ==== =================================================
+		name:	 (RO) Name of the idle state (string).
 
 		latency: (RO) The latency to exit out of this idle state (in
-		microseconds).
+			      microseconds).
 
-		power: (RO) The power consumed while in this idle state (in
-		milliwatts).
+		power:   (RO) The power consumed while in this idle state (in
+			      milliwatts).
 
-		time: (RO) The total time spent in this idle state (in microseconds).
+		time:    (RO) The total time spent in this idle state
+			      (in microseconds).
 
-		usage: (RO) Number of times this state was entered (a count).
+		usage:	 (RO) Number of times this state was entered (a count).
 
-		above: (RO) Number of times this state was entered, but the
-		       observed CPU idle duration was too short for it (a count).
+		above:	 (RO) Number of times this state was entered, but the
+			      observed CPU idle duration was too short for it
+			      (a count).
 
-		below: (RO) Number of times this state was entered, but the
-		       observed CPU idle duration was too long for it (a count).
+		below: 	 (RO) Number of times this state was entered, but the
+			      observed CPU idle duration was too long for it
+			      (a count).
+		======== ==== =================================================
 
 What:		/sys/devices/system/cpu/cpuX/cpuidle/stateN/desc
 Date:		February 2008
@@ -290,6 +295,7 @@ Description:	Processor frequency boosting control
 		This switch controls the boost setting for the whole system.
 		Boosting allows the CPU and the firmware to run at a frequency
 		beyound it's nominal limit.
+
 		More details can be found in
 		Documentation/admin-guide/pm/cpufreq.rst
 
@@ -337,43 +343,57 @@ Contact:	Sudeep Holla <sudeep.holla@arm.com>
 Description:	Parameters for the CPU cache attributes
 
 		allocation_policy:
-			- WriteAllocate: allocate a memory location to a cache line
-					 on a cache miss because of a write
-			- ReadAllocate: allocate a memory location to a cache line
+			- WriteAllocate:
+					allocate a memory location to a cache line
+					on a cache miss because of a write
+			- ReadAllocate:
+					allocate a memory location to a cache line
 					on a cache miss because of a read
-			- ReadWriteAllocate: both writeallocate and readallocate
+			- ReadWriteAllocate:
+					both writeallocate and readallocate
 
-		attributes: LEGACY used only on IA64 and is same as write_policy
+		attributes:
+			    LEGACY used only on IA64 and is same as write_policy
 
-		coherency_line_size: the minimum amount of data in bytes that gets
+		coherency_line_size:
+				     the minimum amount of data in bytes that gets
 				     transferred from memory to cache
 
-		level: the cache hierarchy in the multi-level cache configuration
+		level:
+			the cache hierarchy in the multi-level cache configuration
 
-		number_of_sets: total number of sets in the cache, a set is a
+		number_of_sets:
+				total number of sets in the cache, a set is a
 				collection of cache lines with the same cache index
 
-		physical_line_partition: number of physical cache line per cache tag
+		physical_line_partition:
+				number of physical cache line per cache tag
 
-		shared_cpu_list: the list of logical cpus sharing the cache
+		shared_cpu_list:
+				the list of logical cpus sharing the cache
 
-		shared_cpu_map: logical cpu mask containing the list of cpus sharing
+		shared_cpu_map:
+				logical cpu mask containing the list of cpus sharing
 				the cache
 
-		size: the total cache size in kB
+		size:
+			the total cache size in kB
 
 		type:
 			- Instruction: cache that only holds instructions
 			- Data: cache that only caches data
 			- Unified: cache that holds both data and instructions
 
-		ways_of_associativity: degree of freedom in placing a particular block
-					of memory in the cache
+		ways_of_associativity:
+			degree of freedom in placing a particular block
+			of memory in the cache
 
 		write_policy:
-			- WriteThrough: data is written to both the cache line
+			- WriteThrough:
+					data is written to both the cache line
 					and to the block in the lower-level memory
-			- WriteBack: data is written only to the cache line and
+			- WriteBack:
+				     data is written only to the cache line and
 				     the modified cache line is written to main
 				     memory only when it is replaced
 
@@ -414,30 +434,30 @@ Description:	POWERNV CPUFreq driver's frequency throttle stats directory and
 		throttle attributes exported in the 'throttle_stats' directory:
 
 		- turbo_stat : This file gives the total number of times the max
-		frequency is throttled to lower frequency in turbo (at and above
-		nominal frequency) range of frequencies.
+		  frequency is throttled to lower frequency in turbo (at and above
+		  nominal frequency) range of frequencies.
 
 		- sub_turbo_stat : This file gives the total number of times the
-		max frequency is throttled to lower frequency in sub-turbo(below
-		nominal frequency) range of frequencies.
+		  max frequency is throttled to lower frequency in sub-turbo(below
+		  nominal frequency) range of frequencies.
 
 		- unthrottle : This file gives the total number of times the max
-		frequency is unthrottled after being throttled.
+		  frequency is unthrottled after being throttled.
 
 		- powercap : This file gives the total number of times the max
-		frequency is throttled due to 'Power Capping'.
+		  frequency is throttled due to 'Power Capping'.
 
 		- overtemp : This file gives the total number of times the max
-		frequency is throttled due to 'CPU Over Temperature'.
+		  frequency is throttled due to 'CPU Over Temperature'.
 
 		- supply_fault : This file gives the total number of times the
-		max frequency is throttled due to 'Power Supply Failure'.
+		  max frequency is throttled due to 'Power Supply Failure'.
 
 		- overcurrent : This file gives the total number of times the
-		max frequency is throttled due to 'Overcurrent'.
+		  max frequency is throttled due to 'Overcurrent'.
 
 		- occ_reset : This file gives the total number of times the max
-		frequency is throttled due to 'OCC Reset'.
+		  frequency is throttled due to 'OCC Reset'.
 
 		The sysfs attributes representing different throttle reasons like
 		powercap, overtemp, supply_fault, overcurrent and occ_reset map to
@@ -469,8 +489,9 @@ What:		/sys/devices/system/cpu/cpuX/regs/
 Date:		June 2016
 Contact:	Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>
 Description:	AArch64 CPU registers
+
 		'identification' directory exposes the CPU ID registers for
-		 identifying model and revision of the CPU.
+		identifying model and revision of the CPU.
 
 What:		/sys/devices/system/cpu/cpu#/cpu_capacity
 Date:		December 2016
@@ -497,9 +518,11 @@ Description:	Information about CPU vulnerabilities
 		vulnerabilities. The output of those files reflects the
 		state of the CPUs in the system. Possible output values:
 
+		================  ==============================================
 		"Not affected"	  CPU is not affected by the vulnerability
 		"Vulnerable"	  CPU is affected and no mitigation in effect
 		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
+		================  ==============================================
 
 		See also: Documentation/admin-guide/hw-vuln/index.rst
 
@@ -515,12 +538,14 @@ Description:	Control Symetric Multi Threading (SMT)
 		control: Read/write interface to control SMT. Possible
 			 values:
 
+			 ================ =========================================
 			 "on"		  SMT is enabled
 			 "off"		  SMT is disabled
 			 "forceoff"	  SMT is force disabled. Cannot be changed.
 			 "notsupported"   SMT is not supported by the CPU
 			 "notimplemented" SMT runtime toggling is not
 					  implemented for the architecture
+			 ================ =========================================
 
 			 If control status is "forceoff" or "notsupported" writes
 			 are rejected.
diff --git a/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl b/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
index 470def06ab0a..1a8ee26e92ae 100644
--- a/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
+++ b/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
@@ -5,8 +5,10 @@ Contact:        Vernon Mauery <vernux@us.ibm.com>
 Description:    The state file allows a means by which to change in and
                 out of Premium Real-Time Mode (PRTM), as well as the
                 ability to query the current state.
-                    0 => PRTM off
-                    1 => PRTM enabled
+
+                    - 0 => PRTM off
+                    - 1 => PRTM enabled
+
 Users:          The ibm-prtm userspace daemon uses this interface.
 
 
diff --git a/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator b/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
index 4d63a7904b94..42214b4ff14a 100644
--- a/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
+++ b/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
@@ -6,11 +6,13 @@ Description:	Read/write the current state of DDR Backup Mode, which controls
 		if DDR power rails will be kept powered during system suspend.
 		("on"/"1" = enabled, "off"/"0" = disabled).
 		Two types of power switches (or control signals) can be used:
+
 		  A. With a momentary power switch (or pulse signal), DDR
 		     Backup Mode is enabled by default when available, as the
 		     PMIC will be configured only during system suspend.
 		  B. With a toggle power switch (or level signal), the
 		     following steps must be followed exactly:
+
 		       1. Configure PMIC for backup mode, to change the role of
 			  the accessory power switch from a power switch to a
 			  wake-up switch,
@@ -20,8 +22,10 @@ Description:	Read/write the current state of DDR Backup Mode, which controls
 		       3. Suspend system,
 		       4. Switch accessory power switch on, to resume the
 			  system.
+
 		     DDR Backup Mode must be explicitly enabled by the user,
 		     to invoke step 1.
+
 		See also Documentation/devicetree/bindings/mfd/bd9571mwv.txt.
 Users:		User space applications for embedded boards equipped with a
 		BD9571MWV PMIC.
diff --git a/Documentation/ABI/testing/sysfs-driver-genwqe b/Documentation/ABI/testing/sysfs-driver-genwqe
index 64ac6d567c4b..69d855dc4c47 100644
--- a/Documentation/ABI/testing/sysfs-driver-genwqe
+++ b/Documentation/ABI/testing/sysfs-driver-genwqe
@@ -29,8 +29,12 @@ What:           /sys/class/genwqe/genwqe<n>_card/reload_bitstream
 Date:           May 2014
 Contact:        klebers@linux.vnet.ibm.com
 Description:    Interface to trigger a PCIe card reset to reload the bitstream.
+
+		::
+
                   sudo sh -c 'echo 1 > \
                     /sys/class/genwqe/genwqe0_card/reload_bitstream'
+
                 If successfully, the card will come back with the bitstream set
                 on 'next_bitstream'.
 
@@ -64,8 +68,11 @@ Description:    Base clock frequency of the card.
 What:           /sys/class/genwqe/genwqe<n>_card/device/sriov_numvfs
 Date:           Oct 2013
 Contact:        haver@linux.vnet.ibm.com
-Description:    Enable VFs (1..15):
+Description:    Enable VFs (1..15)::
+
                   sudo sh -c 'echo 15 > \
                     /sys/bus/pci/devices/0000\:1b\:00.0/sriov_numvfs'
-                Disable VFs:
+
+                Disable VFs::
+
                   Write a 0 into the same sysfs entry.
diff --git a/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff b/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
index 305dffd229a8..de07be314efc 100644
--- a/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
+++ b/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
@@ -12,7 +12,9 @@ KernelVersion:	4.1
 Contact:	Michal Malý <madcatxster@devoid-pointer.net>
 Description:	Displays a set of alternate modes supported by a wheel. Each
 		mode is listed as follows:
+
 		  Tag: Mode Name
+
 		Currently active mode is marked with an asterisk. List also
 		contains an abstract item "native" which always denotes the
 		native mode of the wheel. Echoing the mode tag switches the
@@ -24,24 +26,30 @@ Description:	Displays a set of alternate modes supported by a wheel. Each
 		This entry is not created for devices that have only one mode.
 
 		Currently supported mode switches:
-		Driving Force Pro:
+
+		Driving Force Pro::
+
 		  DF-EX --> DFP
 
-		G25:
+		G25::
+
 		  DF-EX --> DFP --> G25
 
-		G27:
+		G27::
+
 		  DF-EX <*> DFP <-> G25 <-> G27
 		  DF-EX <*--------> G25 <-> G27
 		  DF-EX <*----------------> G27
 
-		G29:
+		G29::
+
 		  DF-EX <*> DFP <-> G25 <-> G27 <-> G29
 		  DF-EX <*--------> G25 <-> G27 <-> G29
 		  DF-EX <*----------------> G27 <-> G29
 		  DF-EX <*------------------------> G29
 
-		DFGT:
+		DFGT::
+
 		  DF-EX <*> DFP <-> DFGT
 		  DF-EX <*--------> DFGT
 
diff --git a/Documentation/ABI/testing/sysfs-driver-hid-wiimote b/Documentation/ABI/testing/sysfs-driver-hid-wiimote
index 39dfa5cb1cc5..cd7b82a5c27d 100644
--- a/Documentation/ABI/testing/sysfs-driver-hid-wiimote
+++ b/Documentation/ABI/testing/sysfs-driver-hid-wiimote
@@ -39,9 +39,13 @@ Description:	While a device is initialized by the wiimote driver, we perform
 		Other strings for each device-type are available and may be
 		added if new device-specific detections are added.
 		Currently supported are:
-			gen10: First Wii Remote generation
-			gen20: Second Wii Remote Plus generation (builtin MP)
+
+			============= =======================================
+			gen10:        First Wii Remote generation
+			gen20:        Second Wii Remote Plus generation
+				      (builtin MP)
 			balanceboard: Wii Balance Board
+			============= =======================================
 
 What:		/sys/bus/hid/drivers/wiimote/<dev>/bboard_calib
 Date:		May 2013
@@ -54,6 +58,7 @@ Description:	This attribute is only provided if the device was detected as a
 		First, 0kg values for all 4 sensors are written, followed by the
 		17kg values for all 4 sensors and last the 34kg values for all 4
 		sensors.
+
 		Calibration data is already applied by the kernel to all input
 		values but may be used by user-space to perform other
 		transformations.
@@ -68,9 +73,11 @@ Description:	This attribute is only provided if the device was detected as a
 		is prefixed with a +/-. Each value is a signed 16bit number.
 		Data is encoded as decimal numbers and specifies the offsets of
 		the analog sticks of the pro-controller.
+
 		Calibration data is already applied by the kernel to all input
 		values but may be used by user-space to perform other
 		transformations.
+
 		Calibration data is detected by the kernel during device setup.
 		You can write "scan\n" into this file to re-trigger calibration.
 		You can also write data directly in the form "x1:y1 x2:y2" to
diff --git a/Documentation/ABI/testing/sysfs-driver-samsung-laptop b/Documentation/ABI/testing/sysfs-driver-samsung-laptop
index 34d3a3359cf4..28c9c040de5d 100644
--- a/Documentation/ABI/testing/sysfs-driver-samsung-laptop
+++ b/Documentation/ABI/testing/sysfs-driver-samsung-laptop
@@ -9,10 +9,12 @@ Description:	Some Samsung laptops have different "performance levels"
 		their fans quiet at all costs.  Reading from this file
 		will show the current performance level.  Writing to the
 		file can change this value.
+
 			Valid options:
-				"silent"
-				"normal"
-				"overclock"
+				- "silent"
+				- "normal"
+				- "overclock"
+
 		Note that not all laptops support all of these options.
 		Specifically, not all support the "overclock" option,
 		and it's still unknown if this value even changes
@@ -25,8 +27,9 @@ Contact:	Corentin Chary <corentin.chary@gmail.com>
 Description:	Max battery charge level can be modified, battery cycle
 		life can be extended by reducing the max battery charge
 		level.
-		0 means normal battery mode (100% charge)
-		1 means battery life extender mode (80% charge)
+
+		- 0 means normal battery mode (100% charge)
+		- 1 means battery life extender mode (80% charge)
 
 What:		/sys/devices/platform/samsung/usb_charge
 Date:		December 1, 2011
diff --git a/Documentation/ABI/testing/sysfs-driver-toshiba_acpi b/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
index f34221b52b14..e5a438d84e1f 100644
--- a/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
+++ b/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
@@ -4,10 +4,12 @@ KernelVersion:	3.15
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the keyboard backlight operation mode, valid
 		values are:
+
 			* 0x1  -> FN-Z
 			* 0x2  -> AUTO (also called TIMER)
 			* 0x8  -> ON
 			* 0x10 -> OFF
+
 		Note that from kernel 3.16 onwards this file accepts all listed
 		parameters, kernel 3.15 only accepts the first two (FN-Z and
 		AUTO).
@@ -41,8 +43,10 @@ KernelVersion:	3.15
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This files controls the status of the touchpad and pointing
 		stick (if available), valid values are:
+
 			* 0 -> OFF
 			* 1 -> ON
+
 Users:		KToshiba
 
 What:		/sys/devices/LNXSYSTM:00/LNXSYBUS:00/TOS{1900,620{0,7,8}}:00/available_kbd_modes
@@ -51,10 +55,12 @@ KernelVersion:	3.16
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file shows the supported keyboard backlight modes
 		the system supports, which can be:
+
 			* 0x1  -> FN-Z
 			* 0x2  -> AUTO (also called TIMER)
 			* 0x8  -> ON
 			* 0x10 -> OFF
+
 		Note that not all keyboard types support the listed modes.
 		See the entry named "available_kbd_modes"
 Users:		KToshiba
@@ -65,6 +71,7 @@ KernelVersion:	3.16
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file shows the current keyboard backlight type,
 		which can be:
+
 			* 1 -> Type 1, supporting modes FN-Z and AUTO
 			* 2 -> Type 2, supporting modes TIMER, ON and OFF
 Users:		KToshiba
@@ -75,10 +82,12 @@ KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the USB Sleep & Charge charging mode, which
 		can be:
+
 			* 0 -> Disabled		(0x00)
 			* 1 -> Alternate	(0x09)
 			* 2 -> Auto		(0x21)
 			* 3 -> Typical		(0x11)
+
 		Note that from kernel 4.1 onwards this file accepts all listed
 		values, kernel 4.0 only supports the first three.
 		Note that this feature only works when connected to power, if
@@ -93,8 +102,10 @@ Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the USB Sleep Functions under battery, and
 		set the level at which point they will be disabled, accepted
 		values can be:
+
 			* 0	-> Disabled
 			* 1-100	-> Battery level to disable sleep functions
+
 		Currently it prints two values, the first one indicates if the
 		feature is enabled or disabled, while the second one shows the
 		current battery level set.
@@ -107,8 +118,10 @@ Date:		January 23, 2015
 KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the USB Rapid Charge state, which can be:
+
 			* 0 -> Disabled
 			* 1 -> Enabled
+
 		Note that toggling this value requires a reboot for changes to
 		take effect.
 Users:		KToshiba
@@ -118,8 +131,10 @@ Date:		January 23, 2015
 KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the Sleep & Music state, which values can be:
+
 			* 0 -> Disabled
 			* 1 -> Enabled
+
 		Note that this feature only works when connected to power, if
 		you want to use it under battery, see the entry named
 		"sleep_functions_on_battery"
@@ -138,6 +153,7 @@ KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the state of the internal fan, valid
 		values are:
+
 			* 0 -> OFF
 			* 1 -> ON
 
@@ -147,8 +163,10 @@ KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the Special Functions (hotkeys) operation
 		mode, valid values are:
+
 			* 0 -> Normal Operation
 			* 1 -> Special Functions
+
 		In the "Normal Operation" mode, the F{1-12} keys are as usual
 		and the hotkeys are accessed via FN-F{1-12}.
 		In the "Special Functions" mode, the F{1-12} keys trigger the
@@ -163,8 +181,10 @@ KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls whether the laptop should turn ON whenever
 		the LID is opened, valid values are:
+
 			* 0 -> Disabled
 			* 1 -> Enabled
+
 		Note that toggling this value requires a reboot for changes to
 		take effect.
 Users:		KToshiba
@@ -174,8 +194,10 @@ Date:		February 12, 2015
 KernelVersion:	4.0
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the USB 3 functionality, valid values are:
+
 			* 0 -> Disabled (Acts as a regular USB 2)
 			* 1 -> Enabled (Full USB 3 functionality)
+
 		Note that toggling this value requires a reboot for changes to
 		take effect.
 Users:		KToshiba
@@ -188,10 +210,14 @@ Description:	This file controls the Cooling Method feature.
 		Reading this file prints two values, the first is the actual cooling method
 		and the second is the maximum cooling method supported.
 		When the maximum cooling method is ONE, valid values are:
+
 			* 0 -> Maximum Performance
 			* 1 -> Battery Optimized
+
 		When the maximum cooling method is TWO, valid values are:
+
 			* 0 -> Maximum Performance
 			* 1 -> Performance
 			* 2 -> Battery Optimized
+
 Users:		KToshiba
diff --git a/Documentation/ABI/testing/sysfs-driver-toshiba_haps b/Documentation/ABI/testing/sysfs-driver-toshiba_haps
index a662370b4dbf..c938690ce10d 100644
--- a/Documentation/ABI/testing/sysfs-driver-toshiba_haps
+++ b/Documentation/ABI/testing/sysfs-driver-toshiba_haps
@@ -4,10 +4,12 @@ KernelVersion:	3.17
 Contact:	Azael Avalos <coproscefalo@gmail.com>
 Description:	This file controls the built-in accelerometer protection level,
 		valid values are:
+
 			* 0 -> Disabled
 			* 1 -> Low
 			* 2 -> Medium
 			* 3 -> High
+
 		The default protection value is set to 2 (Medium).
 Users:		KToshiba
 
diff --git a/Documentation/ABI/testing/sysfs-driver-wacom b/Documentation/ABI/testing/sysfs-driver-wacom
index afc48fc163b5..16acaa5712ec 100644
--- a/Documentation/ABI/testing/sysfs-driver-wacom
+++ b/Documentation/ABI/testing/sysfs-driver-wacom
@@ -79,7 +79,9 @@ Description:
 		When the Wacom Intuos 4 is connected over Bluetooth, the
 		image has to contain 256 bytes (64x32 px 1 bit colour).
 		The format is also scrambled, like in the USB mode, and it can
-		be summarized by converting 76543210 into GECA6420.
+		be summarized by converting::
+
+					    76543210 into GECA6420.
 					    HGFEDCBA      HFDB7531
 
 What:		/sys/bus/hid/devices/<bus>:<vid>:<pid>.<n>/wacom_remote/unpair_remote
diff --git a/Documentation/ABI/testing/sysfs-firmware-acpi b/Documentation/ABI/testing/sysfs-firmware-acpi
index 613f42a9d5cd..e4afc2538210 100644
--- a/Documentation/ABI/testing/sysfs-firmware-acpi
+++ b/Documentation/ABI/testing/sysfs-firmware-acpi
@@ -12,11 +12,14 @@ Description:
 		image: The image bitmap. Currently a 32-bit BMP.
 		status: 1 if the image is valid, 0 if firmware invalidated it.
 		type: 0 indicates image is in BMP format.
+
+		======== ===================================================
 		version: The version of the BGRT. Currently 1.
 		xoffset: The number of pixels between the left of the screen
 			 and the left edge of the image.
 		yoffset: The number of pixels between the top of the screen
 			 and the top edge of the image.
+		======== ===================================================
 
 What:		/sys/firmware/acpi/hotplug/
 Date:		February 2013
@@ -33,12 +36,14 @@ Description:
 		The following setting is available to user space for each
 		hotplug profile:
 
+		======== =======================================================
 		enabled: If set, the ACPI core will handle notifications of
-			hotplug events associated with the given class of
-			devices and will allow those devices to be ejected with
-			the help of the _EJ0 control method.  Unsetting it
-			effectively disables hotplug for the correspoinding
-			class of devices.
+			 hotplug events associated with the given class of
+			 devices and will allow those devices to be ejected with
+			 the help of the _EJ0 control method.  Unsetting it
+			 effectively disables hotplug for the corresponding
+			 class of devices.
+		======== =======================================================
 
 		The value of the above attribute is an integer number: 1 (set)
 		or 0 (unset).  Attempts to write any other values to it will
@@ -71,86 +76,90 @@ Description:
 		To figure out where all the SCI's are coming from,
 		/sys/firmware/acpi/interrupts contains a file listing
 		every possible source, and the count of how many
-		times it has triggered.
-
-		$ cd /sys/firmware/acpi/interrupts
-		$ grep . *
-		error:	     0
-		ff_gbl_lock:	   0   enable
-		ff_pmtimer:	  0  invalid
-		ff_pwr_btn:	  0   enable
-		ff_rt_clk:	 2  disable
-		ff_slp_btn:	  0  invalid
-		gpe00:	     0	invalid
-		gpe01:	     0	 enable
-		gpe02:	   108	 enable
-		gpe03:	     0	invalid
-		gpe04:	     0	invalid
-		gpe05:	     0	invalid
-		gpe06:	     0	 enable
-		gpe07:	     0	 enable
-		gpe08:	     0	invalid
-		gpe09:	     0	invalid
-		gpe0A:	     0	invalid
-		gpe0B:	     0	invalid
-		gpe0C:	     0	invalid
-		gpe0D:	     0	invalid
-		gpe0E:	     0	invalid
-		gpe0F:	     0	invalid
-		gpe10:	     0	invalid
-		gpe11:	     0	invalid
-		gpe12:	     0	invalid
-		gpe13:	     0	invalid
-		gpe14:	     0	invalid
-		gpe15:	     0	invalid
-		gpe16:	     0	invalid
-		gpe17:	  1084	 enable
-		gpe18:	     0	 enable
-		gpe19:	     0	invalid
-		gpe1A:	     0	invalid
-		gpe1B:	     0	invalid
-		gpe1C:	     0	invalid
-		gpe1D:	     0	invalid
-		gpe1E:	     0	invalid
-		gpe1F:	     0	invalid
-		gpe_all:    1192
-		sci:	1194
-		sci_not:     0	
-
-		sci - The number of times the ACPI SCI
-		has been called and claimed an interrupt.
-
-		sci_not - The number of times the ACPI SCI
-		has been called and NOT claimed an interrupt.
-
-		gpe_all - count of SCI caused by GPEs.
-
-		gpeXX - count for individual GPE source
-
-		ff_gbl_lock - Global Lock
-
-		ff_pmtimer - PM Timer
-
-		ff_pwr_btn - Power Button
-
-		ff_rt_clk - Real Time Clock
-
-		ff_slp_btn - Sleep Button
-
-		error - an interrupt that can't be accounted for above.
-
-		invalid: it's either a GPE or a Fixed Event that
-			doesn't have an event handler.
-
-		disable: the GPE/Fixed Event is valid but disabled.
-
-		enable: the GPE/Fixed Event is valid and enabled.
-
-		Root has permission to clear any of these counters.  Eg.
-		# echo 0 > gpe11
-
-		All counters can be cleared by clearing the total "sci":
-		# echo 0 > sci
+		times it has triggered::
+
+		  $ cd /sys/firmware/acpi/interrupts
+		  $ grep . *
+		  error:	     0
+		  ff_gbl_lock:	   0   enable
+		  ff_pmtimer:	  0  invalid
+		  ff_pwr_btn:	  0   enable
+		  ff_rt_clk:	 2  disable
+		  ff_slp_btn:	  0  invalid
+		  gpe00:	     0	invalid
+		  gpe01:	     0	 enable
+		  gpe02:	   108	 enable
+		  gpe03:	     0	invalid
+		  gpe04:	     0	invalid
+		  gpe05:	     0	invalid
+		  gpe06:	     0	 enable
+		  gpe07:	     0	 enable
+		  gpe08:	     0	invalid
+		  gpe09:	     0	invalid
+		  gpe0A:	     0	invalid
+		  gpe0B:	     0	invalid
+		  gpe0C:	     0	invalid
+		  gpe0D:	     0	invalid
+		  gpe0E:	     0	invalid
+		  gpe0F:	     0	invalid
+		  gpe10:	     0	invalid
+		  gpe11:	     0	invalid
+		  gpe12:	     0	invalid
+		  gpe13:	     0	invalid
+		  gpe14:	     0	invalid
+		  gpe15:	     0	invalid
+		  gpe16:	     0	invalid
+		  gpe17:	  1084	 enable
+		  gpe18:	     0	 enable
+		  gpe19:	     0	invalid
+		  gpe1A:	     0	invalid
+		  gpe1B:	     0	invalid
+		  gpe1C:	     0	invalid
+		  gpe1D:	     0	invalid
+		  gpe1E:	     0	invalid
+		  gpe1F:	     0	invalid
+		  gpe_all:    1192
+		  sci:	1194
+		  sci_not:     0
+
+		===========  ==================================================
+		sci	     The number of times the ACPI SCI
+			     has been called and claimed an interrupt.
+
+		sci_not	     The number of times the ACPI SCI
+			     has been called and NOT claimed an interrupt.
+
+		gpe_all	     count of SCI caused by GPEs.
+
+		gpeXX	     count for individual GPE source
+
+		ff_gbl_lock  Global Lock
+
+		ff_pmtimer   PM Timer
+
+		ff_pwr_btn   Power Button
+
+		ff_rt_clk    Real Time Clock
+
+		ff_slp_btn   Sleep Button
+
+		error	     an interrupt that can't be accounted for above.
+
+		invalid      it's either a GPE or a Fixed Event that
+			     doesn't have an event handler.
+
+		disable	     the GPE/Fixed Event is valid but disabled.
+
+		enable       the GPE/Fixed Event is valid and enabled.
+		===========  ==================================================
+
+		Root has permission to clear any of these counters.  Eg.::
+
+		  # echo 0 > gpe11
+
+		All counters can be cleared by clearing the total "sci"::
+
+		  # echo 0 > sci
 
 		None of these counters has an effect on the function
 		of the system, they are simply statistics.
@@ -165,32 +174,34 @@ Description:
 
 		Let's take power button fixed event for example, please kill acpid
 		and other user space applications so that the machine won't shutdown
-		when pressing the power button.
-		# cat ff_pwr_btn
-		0	enabled
-		# press the power button for 3 times;
-		# cat ff_pwr_btn
-		3	enabled
-		# echo disable > ff_pwr_btn
-		# cat ff_pwr_btn
-		3	disabled
-		# press the power button for 3 times;
-		# cat ff_pwr_btn
-		3	disabled
-		# echo enable > ff_pwr_btn
-		# cat ff_pwr_btn
-		4	enabled
-		/*
-		 * this is because the status bit is set even if the enable bit is cleared,
-		 * and it triggers an ACPI fixed event when the enable bit is set again
-		 */
-		# press the power button for 3 times;
-		# cat ff_pwr_btn
-		7	enabled
-		# echo disable > ff_pwr_btn
-		# press the power button for 3 times;
-		# echo clear > ff_pwr_btn	/* clear the status bit */
-		# echo disable > ff_pwr_btn
-		# cat ff_pwr_btn
-		7	enabled
+		when pressing the power button::
+
+		  # cat ff_pwr_btn
+		  0	enabled
+		  # press the power button for 3 times;
+		  # cat ff_pwr_btn
+		  3	enabled
+		  # echo disable > ff_pwr_btn
+		  # cat ff_pwr_btn
+		  3	disabled
+		  # press the power button for 3 times;
+		  # cat ff_pwr_btn
+		  3	disabled
+		  # echo enable > ff_pwr_btn
+		  # cat ff_pwr_btn
+		  4	enabled
+		  /*
+		   * this is because the status bit is set even if the enable
+		   * bit is cleared, and it triggers an ACPI fixed event when
+		   * the enable bit is set again
+		   */
+		  # press the power button for 3 times;
+		  # cat ff_pwr_btn
+		  7	enabled
+		  # echo disable > ff_pwr_btn
+		  # press the power button for 3 times;
+		  # echo clear > ff_pwr_btn	/* clear the status bit */
+		  # echo disable > ff_pwr_btn
+		  # cat ff_pwr_btn
+		  7	enabled
 
diff --git a/Documentation/ABI/testing/sysfs-firmware-dmi-entries b/Documentation/ABI/testing/sysfs-firmware-dmi-entries
index 210ad44b95a5..fe0289c87768 100644
--- a/Documentation/ABI/testing/sysfs-firmware-dmi-entries
+++ b/Documentation/ABI/testing/sysfs-firmware-dmi-entries
@@ -33,7 +33,7 @@ Description:
 		doesn't matter), they will be represented in sysfs as
 		entries "T-0" through "T-(N-1)":
 
-		Example entry directories:
+		Example entry directories::
 
 			/sys/firmware/dmi/entries/17-0
 			/sys/firmware/dmi/entries/17-1
@@ -50,61 +50,65 @@ Description:
 		Each DMI entry in sysfs has the common header values
 		exported as attributes:
 
-		handle	: The 16bit 'handle' that is assigned to this
+		========  =================================================
+		handle	  The 16bit 'handle' that is assigned to this
 			  entry by the firmware.  This handle may be
 			  referred to by other entries.
-		length	: The length of the entry, as presented in the
+		length	  The length of the entry, as presented in the
 			  entry itself.  Note that this is _not the
 			  total count of bytes associated with the
-			  entry_.  This value represents the length of
+			  entry.  This value represents the length of
 			  the "formatted" portion of the entry.  This
 			  "formatted" region is sometimes followed by
 			  the "unformatted" region composed of nul
 			  terminated strings, with termination signalled
 			  by a two nul characters in series.
-		raw	: The raw bytes of the entry. This includes the
+		raw	  The raw bytes of the entry. This includes the
 			  "formatted" portion of the entry, the
 			  "unformatted" strings portion of the entry,
 			  and the two terminating nul characters.
-		type	: The type of the entry.  This value is the same
+		type	  The type of the entry.  This value is the same
 			  as found in the directory name.  It indicates
 			  how the rest of the entry should be interpreted.
-		instance: The instance ordinal of the entry for the
+		instance  The instance ordinal of the entry for the
 			  given type.  This value is the same as found
 			  in the parent directory name.
-		position: The ordinal position (zero-based) of the entry
+		position  The ordinal position (zero-based) of the entry
 			  within the entirety of the DMI entry table.
+		========  =================================================
 
-		=== Entry Specialization ===
+		**Entry Specialization**
 
 		Some entry types may have other information available in
 		sysfs.  Not all types are specialized.
 
-		--- Type 15 - System Event Log ---
+		**Type 15 - System Event Log**
 
 		This entry allows the firmware to export a log of
 		events the system has taken.  This information is
 		typically backed by nvram, but the implementation
 		details are abstracted by this table.  This entry's data
-		is exported in the directory:
+		is exported in the directory::
 
-		/sys/firmware/dmi/entries/15-0/system_event_log
+		  /sys/firmware/dmi/entries/15-0/system_event_log
 
 		and has the following attributes (documented in the
 		SMBIOS / DMI specification under "System Event Log (Type 15)":
 
-		area_length
-		header_start_offset
-		data_start_offset
-		access_method
-		status
-		change_token
-		access_method_address
-		header_format
-		per_log_type_descriptor_length
-		type_descriptors_supported_count
+		- area_length
+		- header_start_offset
+		- data_start_offset
+		- access_method
+		- status
+		- change_token
+		- access_method_address
+		- header_format
+		- per_log_type_descriptor_length
+		- type_descriptors_supported_count
 
 		As well, the kernel exports the binary attribute:
 
-		raw_event_log	: The raw binary bits of the event log
+		=============	  ====================================
+		raw_event_log	  The raw binary bits of the event log
 				  as described by the DMI entry.
+		=============	  ====================================
diff --git a/Documentation/ABI/testing/sysfs-firmware-gsmi b/Documentation/ABI/testing/sysfs-firmware-gsmi
index 0faa0aaf4b6a..7a558354c1ee 100644
--- a/Documentation/ABI/testing/sysfs-firmware-gsmi
+++ b/Documentation/ABI/testing/sysfs-firmware-gsmi
@@ -20,7 +20,7 @@ Description:
 
 			This directory has the same layout (and
 			underlying implementation as /sys/firmware/efi/vars.
-			See Documentation/ABI/*/sysfs-firmware-efi-vars
+			See `Documentation/ABI/*/sysfs-firmware-efi-vars`
 			for more information on how to interact with
 			this structure.
 
diff --git a/Documentation/ABI/testing/sysfs-firmware-memmap b/Documentation/ABI/testing/sysfs-firmware-memmap
index eca0d65087dc..1f6f4d3a32c0 100644
--- a/Documentation/ABI/testing/sysfs-firmware-memmap
+++ b/Documentation/ABI/testing/sysfs-firmware-memmap
@@ -20,7 +20,7 @@ Description:
 		the raw memory map to userspace.
 
 		The structure is as follows: Under /sys/firmware/memmap there
-		are subdirectories with the number of the entry as their name:
+		are subdirectories with the number of the entry as their name::
 
 			/sys/firmware/memmap/0
 			/sys/firmware/memmap/1
@@ -34,14 +34,16 @@ Description:
 
 		Each directory contains three files:
 
-		start	: The start address (as hexadecimal number with the
+		========  =====================================================
+		start	  The start address (as hexadecimal number with the
 			  '0x' prefix).
-		end	: The end address, inclusive (regardless whether the
+		end	  The end address, inclusive (regardless whether the
 			  firmware provides inclusive or exclusive ranges).
-		type	: Type of the entry as string. See below for a list of
+		type	  Type of the entry as string. See below for a list of
 			  valid types.
+		========  =====================================================
 
-		So, for example:
+		So, for example::
 
 			/sys/firmware/memmap/0/start
 			/sys/firmware/memmap/0/end
@@ -57,9 +59,8 @@ Description:
 		  - reserved
 
 		Following shell snippet can be used to display that memory
-		map in a human-readable format:
+		map in a human-readable format::
 
-		-------------------- 8< ----------------------------------------
 		  #!/bin/bash
 		  cd /sys/firmware/memmap
 		  for dir in * ; do
@@ -68,4 +69,3 @@ Description:
 		      type=$(cat $dir/type)
 		      printf "%016x-%016x (%s)\n" $start $[ $end +1] "$type"
 		  done
-		-------------------- >8 ----------------------------------------
diff --git a/Documentation/ABI/testing/sysfs-fs-ext4 b/Documentation/ABI/testing/sysfs-fs-ext4
index 78604db56279..99e3d92f8299 100644
--- a/Documentation/ABI/testing/sysfs-fs-ext4
+++ b/Documentation/ABI/testing/sysfs-fs-ext4
@@ -45,8 +45,8 @@ Description:
 		parameter will have their blocks allocated out of a
 		block group specific preallocation pool, so that small
 		files are packed closely together.  Each large file
-		 will have its blocks allocated out of its own unique
-		 preallocation pool.
+		will have its blocks allocated out of its own unique
+		preallocation pool.
 
 What:		/sys/fs/ext4/<disk>/inode_readahead_blks
 Date:		March 2008
diff --git a/Documentation/ABI/testing/sysfs-hypervisor-xen b/Documentation/ABI/testing/sysfs-hypervisor-xen
index 53b7b2ea7515..4dbe0c49b393 100644
--- a/Documentation/ABI/testing/sysfs-hypervisor-xen
+++ b/Documentation/ABI/testing/sysfs-hypervisor-xen
@@ -15,14 +15,17 @@ KernelVersion:	4.3
 Contact:	Boris Ostrovsky <boris.ostrovsky@oracle.com>
 Description:	If running under Xen:
 		Describes mode that Xen's performance-monitoring unit (PMU)
-		uses. Accepted values are
-			"off"  -- PMU is disabled
-			"self" -- The guest can profile itself
-			"hv"   -- The guest can profile itself and, if it is
+		uses. Accepted values are:
+
+			======    ============================================
+			"off"     PMU is disabled
+			"self"    The guest can profile itself
+			"hv"      The guest can profile itself and, if it is
 				  privileged (e.g. dom0), the hypervisor
-			"all" --  The guest can profile itself, the hypervisor
+			"all"     The guest can profile itself, the hypervisor
 				  and all other guests. Only available to
 				  privileged guests.
+			======    ============================================
 
 What:           /sys/hypervisor/pmu/pmu_features
 Date:           August 2015
diff --git a/Documentation/ABI/testing/sysfs-kernel-boot_params b/Documentation/ABI/testing/sysfs-kernel-boot_params
index eca38ce2852d..7f9bda453c4d 100644
--- a/Documentation/ABI/testing/sysfs-kernel-boot_params
+++ b/Documentation/ABI/testing/sysfs-kernel-boot_params
@@ -23,16 +23,17 @@ Description:	The /sys/kernel/boot_params directory contains two
 		representation of setup_data type. "data" file is the binary
 		representation of setup_data payload.
 
-		The whole boot_params directory structure is like below:
-		/sys/kernel/boot_params
-		|__ data
-		|__ setup_data
-		|   |__ 0
-		|   |   |__ data
-		|   |   |__ type
-		|   |__ 1
-		|       |__ data
-		|       |__ type
-		|__ version
+		The whole boot_params directory structure is like below::
+
+		  /sys/kernel/boot_params
+		  |__ data
+		  |__ setup_data
+		  |   |__ 0
+		  |   |   |__ data
+		  |   |   |__ type
+		  |   |__ 1
+		  |       |__ data
+		  |       |__ type
+		  |__ version
 
 Users:		Kexec
diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-hugepages b/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
index fdaa2162fae1..294387e2c7fb 100644
--- a/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
+++ b/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
@@ -7,9 +7,11 @@ Description:
 		of the hugepages supported by the kernel/CPU combination.
 
 		Under these directories are a number of files:
-			nr_hugepages
-			nr_overcommit_hugepages
-			free_hugepages
-			surplus_hugepages
-			resv_hugepages
+
+			- nr_hugepages
+			- nr_overcommit_hugepages
+			- free_hugepages
+			- surplus_hugepages
+			- resv_hugepages
+
 		See Documentation/admin-guide/mm/hugetlbpage.rst for details.
diff --git a/Documentation/ABI/testing/sysfs-platform-asus-laptop b/Documentation/ABI/testing/sysfs-platform-asus-laptop
index 8b0e8205a6a2..c78d358dbdbe 100644
--- a/Documentation/ABI/testing/sysfs-platform-asus-laptop
+++ b/Documentation/ABI/testing/sysfs-platform-asus-laptop
@@ -4,13 +4,16 @@ KernelVersion:	2.6.20
 Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		This file allows display switching. The value
-		is composed by 4 bits and defined as follow:
-		4321
-		|||`- LCD
-		||`-- CRT
-		|`--- TV
-		`---- DVI
-		Ex: - 0 (0000b) means no display
+		is composed of 4 bits and defined as follows::
+
+		  4321
+		  |||`- LCD
+		  ||`-- CRT
+		  |`--- TV
+		  `---- DVI
+
+		Ex:
+		    - 0 (0000b) means no display
 		    - 3 (0011b) CRT+LCD.
 
 What:		/sys/devices/platform/asus_laptop/gps
@@ -28,8 +31,10 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		Some models like the W1N have a LED display that can be
 		used to display several items of information.
-		To control the LED display, use the following :
+		To control the LED display, use the following::
+
 		    echo 0x0T000DDD > /sys/devices/platform/asus_laptop/
+
 		where T control the 3 letters display, and DDD the 3 digits display.
 		The DDD table can be found in Documentation/admin-guide/laptops/asus-laptop.rst
 
diff --git a/Documentation/ABI/testing/sysfs-platform-asus-wmi b/Documentation/ABI/testing/sysfs-platform-asus-wmi
index 1efac0ddb417..04885738cf15 100644
--- a/Documentation/ABI/testing/sysfs-platform-asus-wmi
+++ b/Documentation/ABI/testing/sysfs-platform-asus-wmi
@@ -5,6 +5,7 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		Change CPU clock configuration (write-only).
 		There are three available clock configuration:
+
 		    * 0 -> Super Performance Mode
 		    * 1 -> High Performance Mode
 		    * 2 -> Power Saving Mode
diff --git a/Documentation/ABI/testing/sysfs-platform-at91 b/Documentation/ABI/testing/sysfs-platform-at91
index 4cc6a865ae66..b146be74b8e0 100644
--- a/Documentation/ABI/testing/sysfs-platform-at91
+++ b/Documentation/ABI/testing/sysfs-platform-at91
@@ -18,8 +18,10 @@ Description:
 		In order to use an extended can_id add the
 		CAN_EFF_FLAG (0x80000000U) to the can_id. Example:
 
-		- standard id 0x7ff:
-		echo 0x7ff      > /sys/class/net/can0/mb0_id
+		- standard id 0x7ff::
 
-		- extended id 0x1fffffff:
-		echo 0x9fffffff > /sys/class/net/can0/mb0_id
+		    echo 0x7ff      > /sys/class/net/can0/mb0_id
+
+		- extended id 0x1fffffff::
+
+		    echo 0x9fffffff > /sys/class/net/can0/mb0_id
diff --git a/Documentation/ABI/testing/sysfs-platform-eeepc-laptop b/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
index 5b026c69587a..70dbe0733cf6 100644
--- a/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
+++ b/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
@@ -4,9 +4,11 @@ KernelVersion:	2.6.26
 Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		This file allows display switching.
+
 		- 1 = LCD
 		- 2 = CRT
 		- 3 = LCD+CRT
+
 		If you run X11, you should use xrandr instead.
 
 What:		/sys/devices/platform/eeepc/camera
@@ -30,16 +32,20 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
 Description:
 		Change CPU clock configuration.
 		On the Eee PC 1000H there are three available clock configuration:
+
 		    * 0 -> Super Performance Mode
 		    * 1 -> High Performance Mode
 		    * 2 -> Power Saving Mode
+
 		On Eee PC 701 there is only 2 available clock configurations.
 		Available configuration are listed in available_cpufv file.
 		Reading this file will show the raw hexadecimal value which
-		is defined as follow:
-		| 8 bit | 8 bit |
-		    |       `---- Current mode
-		    `------------ Availables modes
+		is defined as follows::
+
+		  | 8 bit | 8 bit |
+		      |       `---- Current mode
+		      `------------ Available modes
+
 		For example, 0x301 means: mode 1 selected, 3 available modes.
 
 What:		/sys/devices/platform/eeepc/available_cpufv
diff --git a/Documentation/ABI/testing/sysfs-platform-ideapad-laptop b/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
index 1b31be3f996a..fd2ac02bc5bd 100644
--- a/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
+++ b/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
@@ -12,6 +12,7 @@ Contact:	"Maxim Mikityanskiy <maxtram95@gmail.com>"
 Description:
 		Change fan mode
 		There are four available modes:
+
 			* 0 -> Super Silent Mode
 			* 1 -> Standard Mode
 			* 2 -> Dust Cleaning
@@ -32,9 +33,11 @@ KernelVersion:	4.18
 Contact:	"Oleg Keri <ezhi99@gmail.com>"
 Description:
 		Control fn-lock mode.
+
 			* 1 -> Switched On
 			* 0 -> Switched Off
 
-		For example:
-		# echo "0" >	\
-		/sys/bus/pci/devices/0000:00:1f.0/PNP0C09:00/VPC2004:00/fn_lock
+		For example::
+
+		  # echo "0" >	\
+		  /sys/bus/pci/devices/0000:00:1f.0/PNP0C09:00/VPC2004:00/fn_lock
diff --git a/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt b/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
index 8af65059d519..e19144fd5d86 100644
--- a/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
+++ b/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
@@ -7,5 +7,6 @@ Description:
 		Thunderbolt controllers to turn on or off when no
 		devices are connected (write-only)
 		There are two available states:
+
 		    * 0 -> Force power disabled
 		    * 1 -> Force power enabled
diff --git a/Documentation/ABI/testing/sysfs-platform-sst-atom b/Documentation/ABI/testing/sysfs-platform-sst-atom
index 0d07c0395660..d5f6e21f0e42 100644
--- a/Documentation/ABI/testing/sysfs-platform-sst-atom
+++ b/Documentation/ABI/testing/sysfs-platform-sst-atom
@@ -5,13 +5,22 @@ Contact:	"Sebastien Guiriec" <sebastien.guiriec@intel.com>
 Description:
 		LPE Firmware version for SST driver on all atom
 		plaforms (BYT/CHT/Merrifield/BSW).
-		If the FW has never been loaded it will display:
+		If the FW has never been loaded, it will display::
+
 			"FW not yet loaded"
-		If FW has been loaded it will display:
+
+		If the FW has been loaded, it will display::
+
 			"v01.aa.bb.cc"
+
 		aa: Major version is reflecting SoC version:
+
+			=== =============
 			0d: BYT FW
 			0b: BSW FW
 			07: Merrifield FW
+			=== =============
+
 		bb: Minor version
+
 		cc: Build version
diff --git a/Documentation/ABI/testing/sysfs-platform-usbip-vudc b/Documentation/ABI/testing/sysfs-platform-usbip-vudc
index 81fcfb454913..53622d3ba27c 100644
--- a/Documentation/ABI/testing/sysfs-platform-usbip-vudc
+++ b/Documentation/ABI/testing/sysfs-platform-usbip-vudc
@@ -16,10 +16,13 @@ Contact:	Krzysztof Opasiak <k.opasiak@samsung.com>
 Description:
 		Current status of the device.
 		Allowed values:
-		1 - Device is available and can be exported
-		2 - Device is currently exported
-		3 - Fatal error occurred during communication
-		  with peer
+
+		==  ==========================================
+		1   Device is available and can be exported
+		2   Device is currently exported
+		3   Fatal error occurred during communication
+		    with peer
+		==  ==========================================
 
 What:		/sys/devices/platform/usbip-vudc.%d/usbip_sockfd
 Date:		April 2016
diff --git a/Documentation/ABI/testing/sysfs-ptp b/Documentation/ABI/testing/sysfs-ptp
index a17f817a9309..2363ad810ddb 100644
--- a/Documentation/ABI/testing/sysfs-ptp
+++ b/Documentation/ABI/testing/sysfs-ptp
@@ -69,7 +69,7 @@ Description:
 		pin offered by the PTP hardware clock. The file name
 		is the hardware dependent pin name. Reading from this
 		file produces two numbers, the assigned function (see
-		the PTP_PF_ enumeration values in linux/ptp_clock.h)
+		the `PTP_PF_` enumeration values in linux/ptp_clock.h)
 		and the channel number. The function and channel
 		assignment may be changed by two writing numbers into
 		the file.
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 07:41:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 07:41:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15434.38441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYP23-0002A1-MS; Fri, 30 Oct 2020 07:41:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15434.38441; Fri, 30 Oct 2020 07:41:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYP23-00029u-J7; Fri, 30 Oct 2020 07:41:07 +0000
Received: by outflank-mailman (input) for mailman id 15434;
 Fri, 30 Oct 2020 07:41:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Fq9/=EF=kernel.org=mchehab@srs-us1.protection.inumbo.net>)
 id 1kYP21-00029p-MH
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 07:41:05 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a43a2ac4-534e-4757-9e04-06d72ecba5df;
 Fri, 30 Oct 2020 07:41:04 +0000 (UTC)
Received: from mail.kernel.org (ip5f5ad5bb.dynamic.kabel-deutschland.de
 [95.90.213.187])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 40B9D2231B;
 Fri, 30 Oct 2020 07:41:03 +0000 (UTC)
Received: from mchehab by mail.kernel.org with local (Exim 4.94)
 (envelope-from <mchehab@kernel.org>)
 id 1kYP1w-004Og7-SV; Fri, 30 Oct 2020 08:41:00 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Fq9/=EF=kernel.org=mchehab@srs-us1.protection.inumbo.net>)
	id 1kYP21-00029p-MH
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 07:41:05 +0000
X-Inumbo-ID: a43a2ac4-534e-4757-9e04-06d72ecba5df
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id a43a2ac4-534e-4757-9e04-06d72ecba5df;
	Fri, 30 Oct 2020 07:41:04 +0000 (UTC)
Received: from mail.kernel.org (ip5f5ad5bb.dynamic.kabel-deutschland.de [95.90.213.187])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 40B9D2231B;
	Fri, 30 Oct 2020 07:41:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604043663;
	bh=BKQkqzYZD/ZlL1u0LI3C4YBTTW/2Hv8CbtS1hljnhLs=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=0SSXGMm8TsRteVJ0LhE/q6EmO3EbpSH5+LGiJufHReWUFyufWeqNqH9dc0JR+uFFG
	 zgP1iGJTXIK+ozzHjNMlSZ0SRnBeGOBo9+LXaQSq7Kx/83IEqvvjckvUhV9l2AliT5
	 6wuBH8E3BG9hsai/gFxIauDMVrl/4uzfaKhkKERM=
Received: from mchehab by mail.kernel.org with local (Exim 4.94)
	(envelope-from <mchehab@kernel.org>)
	id 1kYP1w-004Og7-SV; Fri, 30 Oct 2020 08:41:00 +0100
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Linux Doc Mailing List <linux-doc@vger.kernel.org>
Cc: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>,
	"Jonathan Corbet" <corbet@lwn.net>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Daniel Thompson <daniel.thompson@linaro.org>,
	Jarkko Sakkinen <jarkko@kernel.org>,
	Jason Gunthorpe <jgg@ziepe.ca>,
	Jerry Snitselaar <jsnitsel@redhat.com>,
	Jingoo Han <jingoohan1@gmail.com>,
	Johannes Berg <johannes@sipsolutions.net>,
	Jonathan Cameron <Jonathan.Cameron@huawei.com>,
	Juergen Gross <jgross@suse.com>,
	Lee Jones <lee.jones@linaro.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Oded Gabbay <oded.gabbay@gmail.com>,
	Paul Mackerras <paulus@samba.org>,
	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Tom Rix <trix@redhat.com>,
	Vaibhav Jain <vaibhav@linux.ibm.com>,
	dri-devel@lists.freedesktop.org,
	linux-kernel@vger.kernel.org,
	linux-wireless@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 19/39] docs: ABI: stable: make files ReST compatible
Date: Fri, 30 Oct 2020 08:40:38 +0100
Message-Id: <467a0dfbcdf00db710a629d3fe4a2563750339d8.1604042072.git.mchehab+huawei@kernel.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <cover.1604042072.git.mchehab+huawei@kernel.org>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Sender: Mauro Carvalho Chehab <mchehab@kernel.org>

Several entries in the stable ABI files won't parse if we pass
them directly to the ReST output.

Adjust them in order to allow adding their contents as-is to
the stable ABI book.
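Most of the conversions below wrap example output in literal blocks
(``::`` plus indentation) and turn value lists into ReST "simple tables"
bounded by ``==`` rows. As a rough stdlib-only sketch of the table layout
being produced (the helper name is hypothetical, not part of this patch):

```python
def to_simple_table(pairs):
    """Render (value, meaning) pairs as a ReST simple table,
    in the style this patch adds to sysfs-acpi-pmprofile."""
    key_w = max(len(k) for k, _ in pairs)  # width of the first column
    val_w = max(len(v) for _, v in pairs)  # width of the second column
    border = "=" * key_w + " " + "=" * val_w
    rows = [f"{k.ljust(key_w)} {v}" for k, v in pairs]
    # Simple tables are delimited by a border line above and below the rows.
    return "\n".join([border, *rows, border])

print(to_simple_table([("0", "Unspecified"),
                       ("1", "Desktop"),
                       (">7", "Reserved")]))
```

Each column's border must be at least as wide as its widest cell, which is
why the patch pads entries like ``0  Unspecified`` with extra spaces.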

Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
---
 Documentation/ABI/stable/firewire-cdev        |  4 +
 Documentation/ABI/stable/sysfs-acpi-pmprofile | 22 +++--
 Documentation/ABI/stable/sysfs-bus-firewire   |  3 +
 Documentation/ABI/stable/sysfs-bus-nvmem      | 19 ++--
 Documentation/ABI/stable/sysfs-bus-usb        |  6 +-
 .../ABI/stable/sysfs-class-backlight          |  1 +
 .../ABI/stable/sysfs-class-infiniband         | 93 +++++++++++++------
 Documentation/ABI/stable/sysfs-class-rfkill   | 13 ++-
 Documentation/ABI/stable/sysfs-class-tpm      | 90 +++++++++---------
 Documentation/ABI/stable/sysfs-devices        |  5 +-
 Documentation/ABI/stable/sysfs-driver-ib_srp  |  1 +
 .../ABI/stable/sysfs-firmware-efi-vars        |  4 +
 .../ABI/stable/sysfs-firmware-opal-dump       |  5 +
 .../ABI/stable/sysfs-firmware-opal-elog       |  2 +
 Documentation/ABI/stable/sysfs-hypervisor-xen |  3 +
 Documentation/ABI/stable/vdso                 |  5 +-
 16 files changed, 176 insertions(+), 100 deletions(-)

diff --git a/Documentation/ABI/stable/firewire-cdev b/Documentation/ABI/stable/firewire-cdev
index f72ed653878a..c9e8ff026154 100644
--- a/Documentation/ABI/stable/firewire-cdev
+++ b/Documentation/ABI/stable/firewire-cdev
@@ -14,12 +14,14 @@ Description:
 		Each /dev/fw* is associated with one IEEE 1394 node, which can
 		be remote or local nodes.  Operations on a /dev/fw* file have
 		different scope:
+
 		  - The 1394 node which is associated with the file:
 			  - Asynchronous request transmission
 			  - Get the Configuration ROM
 			  - Query node ID
 			  - Query maximum speed of the path between this node
 			    and local node
+
 		  - The 1394 bus (i.e. "card") to which the node is attached to:
 			  - Isochronous stream transmission and reception
 			  - Asynchronous stream transmission and reception
@@ -31,6 +33,7 @@ Description:
 			    manager
 			  - Query cycle time
 			  - Bus reset initiation, bus reset event reception
+
 		  - All 1394 buses:
 			  - Allocation of IEEE 1212 address ranges on the local
 			    link layers, reception of inbound requests to such
@@ -43,6 +46,7 @@ Description:
 		userland implement different access permission models, some
 		operations are restricted to /dev/fw* files that are associated
 		with a local node:
+
 			  - Addition of descriptors or directories to the local
 			    nodes' Configuration ROM
 			  - PHY packet transmission and reception
diff --git a/Documentation/ABI/stable/sysfs-acpi-pmprofile b/Documentation/ABI/stable/sysfs-acpi-pmprofile
index 964c7a8afb26..fd97d22b677f 100644
--- a/Documentation/ABI/stable/sysfs-acpi-pmprofile
+++ b/Documentation/ABI/stable/sysfs-acpi-pmprofile
@@ -6,17 +6,21 @@ Description: 	The ACPI pm_profile sysfs interface exports the platform
 		power management (and performance) requirement expectations
 		as provided by BIOS. The integer value is directly passed as
 		retrieved from the FADT ACPI table.
-Values:         For possible values see ACPI specification:
+
+Values:	        For possible values see ACPI specification:
 		5.2.9 Fixed ACPI Description Table (FADT)
 		Field: Preferred_PM_Profile
 
 		Currently these values are defined by spec:
-		0 Unspecified
-		1 Desktop
-		2 Mobile
-		3 Workstation
-		4 Enterprise Server
-		5 SOHO Server
-		6 Appliance PC
-		7 Performance Server
+
+		== =================
+		0  Unspecified
+		1  Desktop
+		2  Mobile
+		3  Workstation
+		4  Enterprise Server
+		5  SOHO Server
+		6  Appliance PC
+		7  Performance Server
 		>7 Reserved
+		== =================
diff --git a/Documentation/ABI/stable/sysfs-bus-firewire b/Documentation/ABI/stable/sysfs-bus-firewire
index 41e5a0cd1e3e..9ac9eddb82ef 100644
--- a/Documentation/ABI/stable/sysfs-bus-firewire
+++ b/Documentation/ABI/stable/sysfs-bus-firewire
@@ -47,6 +47,7 @@ Description:
 		IEEE 1394 node device attribute.
 		Read-only and immutable.
 Values:		1: The sysfs entry represents a local node (a controller card).
+
 		0: The sysfs entry represents a remote node.
 
 
@@ -125,7 +126,9 @@ Description:
 		Read-only attribute, immutable during the target's lifetime.
 		Format, as exposed by firewire-sbp2 since 2.6.22, May 2007:
 		Colon-separated hexadecimal string representations of
+
 			u64 EUI-64 : u24 directory_ID : u16 LUN
+
 		without 0x prefixes, without whitespace.  The former sbp2 driver
 		(removed in 2.6.37 after being superseded by firewire-sbp2) used
 		a somewhat shorter format which was not as close to SAM.
diff --git a/Documentation/ABI/stable/sysfs-bus-nvmem b/Documentation/ABI/stable/sysfs-bus-nvmem
index 9ffba8576f7b..c399323f37de 100644
--- a/Documentation/ABI/stable/sysfs-bus-nvmem
+++ b/Documentation/ABI/stable/sysfs-bus-nvmem
@@ -9,13 +9,14 @@ Description:
 		Note: This file is only present if CONFIG_NVMEM_SYSFS
 		is enabled
 
-		ex:
-		hexdump /sys/bus/nvmem/devices/qfprom0/nvmem
+		ex::
 
-		0000000 0000 0000 0000 0000 0000 0000 0000 0000
-		*
-		00000a0 db10 2240 0000 e000 0c00 0c00 0000 0c00
-		0000000 0000 0000 0000 0000 0000 0000 0000 0000
-		...
-		*
-		0001000
+		  hexdump /sys/bus/nvmem/devices/qfprom0/nvmem
+
+		  0000000 0000 0000 0000 0000 0000 0000 0000 0000
+		  *
+		  00000a0 db10 2240 0000 e000 0c00 0c00 0000 0c00
+		  0000000 0000 0000 0000 0000 0000 0000 0000 0000
+		  ...
+		  *
+		  0001000
diff --git a/Documentation/ABI/stable/sysfs-bus-usb b/Documentation/ABI/stable/sysfs-bus-usb
index b832eeff9999..cad4bc232520 100644
--- a/Documentation/ABI/stable/sysfs-bus-usb
+++ b/Documentation/ABI/stable/sysfs-bus-usb
@@ -50,8 +50,10 @@ Description:
 
 		Tools can use this file and the connected_duration file to
 		compute the percentage of time that a device has been active.
-		For example,
-		echo $((100 * `cat active_duration` / `cat connected_duration`))
+		For example::
+
+		  echo $((100 * `cat active_duration` / `cat connected_duration`))
+
 		will give an integer percentage.  Note that this does not
 		account for counter wrap.
 Users:
diff --git a/Documentation/ABI/stable/sysfs-class-backlight b/Documentation/ABI/stable/sysfs-class-backlight
index 70302f370e7e..023fb52645f8 100644
--- a/Documentation/ABI/stable/sysfs-class-backlight
+++ b/Documentation/ABI/stable/sysfs-class-backlight
@@ -4,6 +4,7 @@ KernelVersion:	2.6.12
 Contact:	Richard Purdie <rpurdie@rpsys.net>
 Description:
 		Control BACKLIGHT power, values are FB_BLANK_* from fb.h
+
 		 - FB_BLANK_UNBLANK (0)   : power on.
 		 - FB_BLANK_POWERDOWN (4) : power off
 Users:		HAL
diff --git a/Documentation/ABI/stable/sysfs-class-infiniband b/Documentation/ABI/stable/sysfs-class-infiniband
index 87b11f91b425..348c4ac803ad 100644
--- a/Documentation/ABI/stable/sysfs-class-infiniband
+++ b/Documentation/ABI/stable/sysfs-class-infiniband
@@ -8,12 +8,14 @@ Date:		Apr, 2005
 KernelVersion:	v2.6.12
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ===========================================
 		node_type:	(RO) Node type (CA, RNIC, usNIC, usNIC UDP,
 				switch or router)
 
 		node_guid:	(RO) Node GUID
 
 		sys_image_guid:	(RO) System image GUID
+		=============== ===========================================
 
 
 What:		/sys/class/infiniband/<device>/node_desc
@@ -47,6 +49,7 @@ KernelVersion:	v2.6.12
 Contact:	linux-rdma@vger.kernel.org
 Description:
 
+		=============== ===============================================
 		lid:		(RO) Port LID
 
 		rate:		(RO) Port data rate (active width * active
@@ -66,8 +69,9 @@ Description:
 
 		cap_mask:	(RO) Port capability mask. 2 bits here are
 				settable- IsCommunicationManagementSupported
-				(set when CM module is loaded) and IsSM (set via
-				open of issmN file).
+				(set when CM module is loaded) and IsSM (set
+				via open of issmN file).
+		=============== ===============================================
 
 
 What:		/sys/class/infiniband/<device>/ports/<port-num>/link_layer
@@ -103,8 +107,7 @@ Date:		Apr, 2005
 KernelVersion:	v2.6.12
 Contact:	linux-rdma@vger.kernel.org
 Description:
-		Errors info:
-		-----------
+		**Errors info**:
 
 		symbol_error: (RO) Total number of minor link errors detected on
 		one or more physical lanes.
@@ -142,8 +145,7 @@ Description:
 		intervention. It can also indicate hardware issues or extremely
 		poor link signal integrity
 
-		Data info:
-		---------
+		**Data info**:
 
 		port_xmit_data: (RO) Total number of data octets, divided by 4
 		(lanes), transmitted on all VLs. This is 64 bit counter
@@ -176,8 +178,7 @@ Description:
 		transmitted on all VLs from the port. This may include multicast
 		packets with errors.
 
-		Misc info:
-		---------
+		**Misc info**:
 
 		port_xmit_discards: (RO) Total number of outbound packets
 		discarded by the port because the port is down or congested.
@@ -244,9 +245,11 @@ Description:
 		two umad devices and two issm devices, while a switch will have
 		one device of each type (for switch port 0).
 
+		======= =====================================
 		ibdev:	(RO) Show Infiniband (IB) device name
 
 		port:	(RO) Display port number
+		======= =====================================
 
 
 What:		/sys/class/infiniband_mad/abi_version
@@ -264,10 +267,12 @@ Date:		Sept, 2005
 KernelVersion:	v2.6.14
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ===========================================
 		ibdev:		(RO) Display Infiniband (IB) device name
 
 		abi_version:	(RO) Show ABI version of IB device specific
 				interfaces.
+		=============== ===========================================
 
 
 What:		/sys/class/infiniband_verbs/abi_version
@@ -289,12 +294,14 @@ Date:		Apr, 2005
 KernelVersion:	v2.6.12
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ================================================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Host Channel Adapter type: MT23108, MT25208
 				(MT23108 compat mode), MT25208 or MT25204
 
 		board_id:	(RO) Manufacturing board ID
+		=============== ================================================
 
 
 sysfs interface for Mellanox ConnectX HCA IB driver (mlx4)
@@ -307,11 +314,13 @@ Date:		Sep, 2007
 KernelVersion:	v2.6.24
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ===============================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Host channel adapter type
 
 		board_id:	(RO) Manufacturing board ID
+		=============== ===============================
 
 
 What:		/sys/class/infiniband/mlx4_X/iov/ports/<port-num>/gids/<n>
@@ -337,6 +346,7 @@ Description:
 		example, ports/1/pkeys/10 contains the value at index 10 in port
 		1's P_Key table.
 
+		======================= ==========================================
 		gids/<n>:		(RO) The physical port gids n = 0..127
 
 		admin_guids/<n>:	(RW) Allows examining or changing the
@@ -365,6 +375,7 @@ Description:
 					guest, whenever it uses its pkey index
 					1, will actually be using the real pkey
 					index 10.
+		======================= ==========================================
 
 
 What:		/sys/class/infiniband/mlx4_X/iov/<pci-slot-num>/ports/<m>/smi_enabled
@@ -376,12 +387,14 @@ Description:
 		Enabling QP0 on VFs for selected VF/port. By default, no VFs are
 		enabled for QP0 operation.
 
-		smi_enabled:	(RO) Indicates whether smi is currently enabled
-				for the indicated VF/port
+		================= ==== ===========================================
+		smi_enabled:	  (RO) Indicates whether smi is currently enabled
+				       for the indicated VF/port
 
-		enable_smi_admin:(RW) Used by the admin to request that smi
-				capability be enabled or disabled for the
-				indicated VF/port. 0 = disable, 1 = enable.
+		enable_smi_admin: (RW) Used by the admin to request that smi
+				       capability be enabled or disabled for the
+				       indicated VF/port. 0 = disable, 1 = enable.
+		================= ==== ===========================================
 
 		The requested enablement will occur at the next reset of the VF
 		(e.g. driver restart on the VM which owns the VF).
@@ -398,6 +411,7 @@ KernelVersion:	v2.6.35
 Contact:	linux-rdma@vger.kernel.org
 Description:
 
+		=============== =============================================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Driver short name. Should normally match
@@ -406,6 +420,7 @@ Description:
 
 		board_id:	(RO) Manufacturing board id. (Vendor + device
 				information)
+		=============== =============================================
 
 
 sysfs interface for Intel IB driver qib
@@ -426,6 +441,7 @@ Date:		May, 2010
 KernelVersion:	v2.6.35
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ======================================================
 		version:	(RO) Display version information of installed software
 				and drivers.
 
@@ -452,6 +468,7 @@ Description:
 		chip_reset:	(WO) Reset the chip if possible by writing
 				"reset" to this file. Only allowed if no user
 				contexts are open that use chip resources.
+		=============== ======================================================
 
 
 What:		/sys/class/infiniband/qibX/ports/N/sl2vl/[0-15]
@@ -471,14 +488,16 @@ Contact:	linux-rdma@vger.kernel.org
 Description:
 		Per-port congestion control. Both are binary attributes.
 
-		cc_table_bin:	(RO) Congestion control table size followed by
+		=============== ================================================
+		cc_table_bin	(RO) Congestion control table size followed by
 				table entries.
 
-		cc_settings_bin:(RO) Congestion settings: port control, control
+		cc_settings_bin (RO) Congestion settings: port control, control
 				map and an array of 16 entries for the
 				congestion entries - increase, timer, event log
 				trigger threshold and the minimum injection rate
 				delay.
+		=============== ================================================
 
 What:		/sys/class/infiniband/qibX/ports/N/linkstate/loopback
 What:		/sys/class/infiniband/qibX/ports/N/linkstate/led_override
@@ -491,6 +510,7 @@ Contact:	linux-rdma@vger.kernel.org
 Description:
 		[to be documented]
 
+		=============== ===============================================
 		loopback:	(WO)
 		led_override:	(WO)
 		hrtbt_enable:	(RW)
@@ -501,6 +521,7 @@ Description:
 				errors. Possible states are- "Initted",
 				"Present", "IB_link_up", "IB_configured" or
 				"Fatal_Hardware_Error".
+		=============== ===============================================
 
 What:		/sys/class/infiniband/qibX/ports/N/diag_counters/rc_resends
 What:		/sys/class/infiniband/qibX/ports/N/diag_counters/seq_naks
@@ -549,6 +570,7 @@ Contact:	Christian Benvenuti <benve@cisco.com>,
 		linux-rdma@vger.kernel.org
 Description:
 
+		=============== ===============================================
 		board_id:	(RO) Manufacturing board id
 
 		config:		(RO) Report the configuration for this PF
@@ -561,6 +583,7 @@ Description:
 
 		iface:		(RO) Shows which network interface this usNIC
 				entry is associated to (visible with ifconfig).
+		=============== ===============================================
 
 What:		/sys/class/infiniband/usnic_X/qpn/summary
 What:		/sys/class/infiniband/usnic_X/qpn/context
@@ -605,6 +628,7 @@ Date:		May, 2016
 KernelVersion:	v4.6
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== =============================================
 		hw_rev:		(RO) Hardware revision number
 
 		board_id:	(RO) Manufacturing board id
@@ -623,6 +647,7 @@ Description:
 				available.
 
 		tempsense:	(RO) Thermal sense information
+		=============== =============================================
 
 
 What:		/sys/class/infiniband/hfi1_X/ports/N/CCMgtA/cc_settings_bin
@@ -634,19 +659,21 @@ Contact:	linux-rdma@vger.kernel.org
 Description:
 		Per-port congestion control.
 
-		cc_table_bin:	(RO) CCA tables used by PSM2 Congestion control
+		=============== ================================================
+		cc_table_bin	(RO) CCA tables used by PSM2 Congestion control
 				table size followed by table entries. Binary
 				attribute.
 
-		cc_settings_bin:(RO) Congestion settings: port control, control
+		cc_settings_bin (RO) Congestion settings: port control, control
 				map and an array of 16 entries for the
 				congestion entries - increase, timer, event log
 				trigger threshold and the minimum injection rate
 				delay. Binary attribute.
 
-		cc_prescan:	(RW) enable prescanning for faster BECN
+		cc_prescan	(RW) enable prescanning for faster BECN
 				response. Write "on" to enable and "off" to
 				disable.
+		=============== ================================================
 
 What:		/sys/class/infiniband/hfi1_X/ports/N/sc2vl/[0-31]
 What:		/sys/class/infiniband/hfi1_X/ports/N/sl2sc/[0-31]
@@ -655,11 +682,13 @@ Date:		May, 2016
 KernelVersion:	v4.6
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ===================================================
 		sc2vl/:		(RO) 32 files (0 - 31) used to translate sl->vl
 
 		sl2sc/:		(RO) 32 files (0 - 31) used to translate sl->sc
 
 		vl2mtu/:	(RO) 16 files (0 - 15) used to determine MTU for vl
+		=============== ===================================================
 
 
 What:		/sys/class/infiniband/hfi1_X/sdma_N/cpu_list
@@ -670,26 +699,28 @@ Contact:	linux-rdma@vger.kernel.org
 Description:
 		sdma<N>/ contains one directory per sdma engine (0 - 15)
 
+		=============== ==============================================
 		cpu_list:	(RW) List of cpus for user-process to sdma
 				engine assignment.
 
 		vl:		(RO) Displays the virtual lane (vl) the sdma
 				engine maps to.
+		=============== ==============================================
 
 		This interface gives the user control on the affinity settings
 		for the device. As an example, to set an sdma engine irq
 		affinity and thread affinity of a user processes to use the
 		sdma engine, which is "near" in terms of NUMA configuration, or
-		physical cpu location, the user will do:
+		physical cpu location, the user will do::
 
-		echo "3" > /proc/irq/<N>/smp_affinity_list
-		echo "4-7" > /sys/devices/.../sdma3/cpu_list
-		cat /sys/devices/.../sdma3/vl
-		0
-		echo "8" > /proc/irq/<M>/smp_affinity_list
-		echo "9-12" > /sys/devices/.../sdma4/cpu_list
-		cat /sys/devices/.../sdma4/vl
-		1
+		  echo "3" > /proc/irq/<N>/smp_affinity_list
+		  echo "4-7" > /sys/devices/.../sdma3/cpu_list
+		  cat /sys/devices/.../sdma3/vl
+		  0
+		  echo "8" > /proc/irq/<M>/smp_affinity_list
+		  echo "9-12" > /sys/devices/.../sdma4/cpu_list
+		  cat /sys/devices/.../sdma4/vl
+		  1
 
 		to make sure that when a process runs on cpus 4,5,6, or 7, and
 		uses vl=0, then sdma engine 3 is selected by the driver, and
@@ -711,11 +742,13 @@ Date:		Jan, 2016
 KernelVersion:	v4.10
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ==== ========================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Show HCA type (I40IW)
 
 		board_id:	(RO) I40IW board ID
+		=============== ==== ========================
 
 
 sysfs interface for QLogic qedr NIC Driver
@@ -728,9 +761,11 @@ KernelVersion:	v4.10
 Contact:	linux-rdma@vger.kernel.org
 Description:
 
+		=============== ==== ========================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Display HCA type
+		=============== ==== ========================
 
 
 sysfs interface for VMware Paravirtual RDMA driver
@@ -744,11 +779,13 @@ KernelVersion:	v4.10
 Contact:	linux-rdma@vger.kernel.org
 Description:
 
+		=============== ==== =====================================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Host channel adapter type
 
 		board_id:	(RO) Display PVRDMA manufacturing board ID
+		=============== ==== =====================================
 
 
 sysfs interface for Broadcom NetXtreme-E RoCE driver
@@ -760,6 +797,8 @@ Date:		Feb, 2017
 KernelVersion:	v4.11
 Contact:	linux-rdma@vger.kernel.org
 Description:
+		=============== ==== =========================
 		hw_rev:		(RO) Hardware revision number
 
 		hca_type:	(RO) Host channel adapter type
+		=============== ==== =========================
diff --git a/Documentation/ABI/stable/sysfs-class-rfkill b/Documentation/ABI/stable/sysfs-class-rfkill
index 5b154f922643..037979f7dc4b 100644
--- a/Documentation/ABI/stable/sysfs-class-rfkill
+++ b/Documentation/ABI/stable/sysfs-class-rfkill
@@ -2,7 +2,7 @@ rfkill - radio frequency (RF) connector kill switch support
 
 For details to this subsystem look at Documentation/driver-api/rfkill.rst.
 
-For the deprecated /sys/class/rfkill/*/claim knobs of this interface look in
+For the deprecated ``/sys/class/rfkill/*/claim`` knobs of this interface look in
 Documentation/ABI/removed/sysfs-class-rfkill.
 
 What: 		/sys/class/rfkill
@@ -36,9 +36,10 @@ KernelVersion	v2.6.22
 Contact:	linux-wireless@vger.kernel.org
 Description: 	Whether the soft blocked state is initialised from non-volatile
 		storage at startup.
-Values: 	A numeric value.
-		0: false
-		1: true
+Values: 	A numeric value:
+
+		- 0: false
+		- 1: true
 
 
 What:		/sys/class/rfkill/rfkill[0-9]+/state
@@ -54,6 +55,7 @@ Description: 	Current state of the transmitter.
 		through this interface. There will likely be another attempt to
 		remove it in the future.
 Values: 	A numeric value.
+
 		0: RFKILL_STATE_SOFT_BLOCKED
 			transmitter is turned off by software
 		1: RFKILL_STATE_UNBLOCKED
@@ -69,6 +71,7 @@ KernelVersion	v2.6.34
 Contact:	linux-wireless@vger.kernel.org
 Description: 	Current hardblock state. This file is read only.
 Values: 	A numeric value.
+
 		0: inactive
 			The transmitter is (potentially) active.
 		1: active
@@ -82,7 +85,9 @@ KernelVersion	v2.6.34
 Contact:	linux-wireless@vger.kernel.org
 Description:	Current softblock state. This file is read and write.
 Values: 	A numeric value.
+
 		0: inactive
 			The transmitter is (potentially) active.
+
 		1: active
 			The transmitter is turned off by software.
diff --git a/Documentation/ABI/stable/sysfs-class-tpm b/Documentation/ABI/stable/sysfs-class-tpm
index 58e94e7d55be..ec464cf7861a 100644
--- a/Documentation/ABI/stable/sysfs-class-tpm
+++ b/Documentation/ABI/stable/sysfs-class-tpm
@@ -32,11 +32,11 @@ KernelVersion:	2.6.12
 Contact:	linux-integrity@vger.kernel.org
 Description:	The "caps" property contains TPM manufacturer and version info.
 
-		Example output:
+		Example output::
 
-		Manufacturer: 0x53544d20
-		TCG version: 1.2
-		Firmware version: 8.16
+		  Manufacturer: 0x53544d20
+		  TCG version: 1.2
+		  Firmware version: 8.16
 
 		Manufacturer is a hex dump of the 4 byte manufacturer info
 		space in a TPM. TCG version shows the TCG TPM spec level that
@@ -54,9 +54,9 @@ Description:	The "durations" property shows the 3 vendor-specific values
 		any longer than necessary before starting to poll for a
 		result.
 
-		Example output:
+		Example output::
 
-		3015000 4508000 180995000 [original]
+		  3015000 4508000 180995000 [original]
 
 		Here the short, medium and long durations are displayed in
 		usecs. "[original]" indicates that the values are displayed
@@ -92,14 +92,14 @@ Description:	The "pcrs" property will dump the current value of all Platform
 		values may be constantly changing, the output is only valid
 		for a snapshot in time.
 
-		Example output:
+		Example output::
 
-		PCR-00: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		PCR-01: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		PCR-02: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		PCR-03: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		PCR-04: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
-		...
+		  PCR-00: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  PCR-01: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  PCR-02: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  PCR-03: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  PCR-04: 3A 3F 78 0F 11 A4 B4 99 69 FC AA 80 CD 6E 39 57 C3 3B 22 75
+		  ...
 
 		The number of PCRs and hex bytes needed to represent a PCR
 		value will vary depending on TPM chip version. For TPM 1.1 and
@@ -119,44 +119,44 @@ Description:	The "pubek" property will return the TPM's public endorsement
 		ated at TPM manufacture time and exists for the life of the
 		chip.
 
-		Example output:
+		Example output::
 
-		Algorithm: 00 00 00 01
-		Encscheme: 00 03
-		Sigscheme: 00 01
-		Parameters: 00 00 08 00 00 00 00 02 00 00 00 00
-		Modulus length: 256
-		Modulus:
-		B4 76 41 82 C9 20 2C 10 18 40 BC 8B E5 44 4C 6C
-		3A B2 92 0C A4 9B 2A 83 EB 5C 12 85 04 48 A0 B6
-		1E E4 81 84 CE B2 F2 45 1C F0 85 99 61 02 4D EB
-		86 C4 F7 F3 29 60 52 93 6B B2 E5 AB 8B A9 09 E3
-		D7 0E 7D CA 41 BF 43 07 65 86 3C 8C 13 7A D0 8B
-		82 5E 96 0B F8 1F 5F 34 06 DA A2 52 C1 A9 D5 26
-		0F F4 04 4B D9 3F 2D F2 AC 2F 74 64 1F 8B CD 3E
-		1E 30 38 6C 70 63 69 AB E2 50 DF 49 05 2E E1 8D
-		6F 78 44 DA 57 43 69 EE 76 6C 38 8A E9 8E A3 F0
-		A7 1F 3C A8 D0 12 15 3E CA 0E BD FA 24 CD 33 C6
-		47 AE A4 18 83 8E 22 39 75 93 86 E6 FD 66 48 B6
-		10 AD 94 14 65 F9 6A 17 78 BD 16 53 84 30 BF 70
-		E0 DC 65 FD 3C C6 B0 1E BF B9 C1 B5 6C EF B1 3A
-		F8 28 05 83 62 26 11 DC B4 6B 5A 97 FF 32 26 B6
-		F7 02 71 CF 15 AE 16 DD D1 C1 8E A8 CF 9B 50 7B
-		C3 91 FF 44 1E CF 7C 39 FE 17 77 21 20 BD CE 9B
+		  Algorithm: 00 00 00 01
+		  Encscheme: 00 03
+		  Sigscheme: 00 01
+		  Parameters: 00 00 08 00 00 00 00 02 00 00 00 00
+		  Modulus length: 256
+		  Modulus:
+		  B4 76 41 82 C9 20 2C 10 18 40 BC 8B E5 44 4C 6C
+		  3A B2 92 0C A4 9B 2A 83 EB 5C 12 85 04 48 A0 B6
+		  1E E4 81 84 CE B2 F2 45 1C F0 85 99 61 02 4D EB
+		  86 C4 F7 F3 29 60 52 93 6B B2 E5 AB 8B A9 09 E3
+		  D7 0E 7D CA 41 BF 43 07 65 86 3C 8C 13 7A D0 8B
+		  82 5E 96 0B F8 1F 5F 34 06 DA A2 52 C1 A9 D5 26
+		  0F F4 04 4B D9 3F 2D F2 AC 2F 74 64 1F 8B CD 3E
+		  1E 30 38 6C 70 63 69 AB E2 50 DF 49 05 2E E1 8D
+		  6F 78 44 DA 57 43 69 EE 76 6C 38 8A E9 8E A3 F0
+		  A7 1F 3C A8 D0 12 15 3E CA 0E BD FA 24 CD 33 C6
+		  47 AE A4 18 83 8E 22 39 75 93 86 E6 FD 66 48 B6
+		  10 AD 94 14 65 F9 6A 17 78 BD 16 53 84 30 BF 70
+		  E0 DC 65 FD 3C C6 B0 1E BF B9 C1 B5 6C EF B1 3A
+		  F8 28 05 83 62 26 11 DC B4 6B 5A 97 FF 32 26 B6
+		  F7 02 71 CF 15 AE 16 DD D1 C1 8E A8 CF 9B 50 7B
+		  C3 91 FF 44 1E CF 7C 39 FE 17 77 21 20 BD CE 9B
 
-		Possible values:
+		Possible values::
 
-		Algorithm:	TPM_ALG_RSA			(1)
-		Encscheme:	TPM_ES_RSAESPKCSv15		(2)
+		  Algorithm:	TPM_ALG_RSA			(1)
+		  Encscheme:	TPM_ES_RSAESPKCSv15		(2)
 				TPM_ES_RSAESOAEP_SHA1_MGF1	(3)
-		Sigscheme:	TPM_SS_NONE			(1)
-		Parameters, a byte string of 3 u32 values:
+		  Sigscheme:	TPM_SS_NONE			(1)
+		  Parameters, a byte string of 3 u32 values:
 			Key Length (bits):	00 00 08 00	(2048)
 			Num primes:		00 00 00 02	(2)
 			Exponent Size:		00 00 00 00	(0 means the
 								 default exp)
-		Modulus Length: 256 (bytes)
-		Modulus:	The 256 byte Endorsement Key modulus
+		  Modulus Length: 256 (bytes)
+		  Modulus:	The 256 byte Endorsement Key modulus
 
 What:		/sys/class/tpm/tpmX/device/temp_deactivated
 Date:		April 2006
@@ -176,9 +176,9 @@ Description:	The "timeouts" property shows the 4 vendor-specific values
 		timeouts is defined by the TPM interface spec that the chip
 		conforms to.
 
-		Example output:
+		Example output::
 
-		750000 750000 750000 750000 [original]
+		  750000 750000 750000 750000 [original]
 
 		The four timeout values are shown in usecs, with a trailing
 		"[original]" or "[adjusted]" depending on whether the values
diff --git a/Documentation/ABI/stable/sysfs-devices b/Documentation/ABI/stable/sysfs-devices
index 4404bd9b96c1..42bf1eab5677 100644
--- a/Documentation/ABI/stable/sysfs-devices
+++ b/Documentation/ABI/stable/sysfs-devices
@@ -1,5 +1,6 @@
-# Note: This documents additional properties of any device beyond what
-# is documented in Documentation/admin-guide/sysfs-rules.rst
+Note:
+  This documents additional properties of any device beyond what
+  is documented in Documentation/admin-guide/sysfs-rules.rst
 
 What:		/sys/devices/*/of_node
 Date:		February 2015
diff --git a/Documentation/ABI/stable/sysfs-driver-ib_srp b/Documentation/ABI/stable/sysfs-driver-ib_srp
index 84972a57caae..bada15a329f7 100644
--- a/Documentation/ABI/stable/sysfs-driver-ib_srp
+++ b/Documentation/ABI/stable/sysfs-driver-ib_srp
@@ -6,6 +6,7 @@ Description:	Interface for making ib_srp connect to a new target.
 		One can request ib_srp to connect to a new target by writing
 		a comma-separated list of login parameters to this sysfs
 		attribute. The supported parameters are:
+
 		* id_ext, a 16-digit hexadecimal number specifying the eight
 		  byte identifier extension in the 16-byte SRP target port
 		  identifier. The target port identifier is sent by ib_srp
diff --git a/Documentation/ABI/stable/sysfs-firmware-efi-vars b/Documentation/ABI/stable/sysfs-firmware-efi-vars
index 5def20b9019e..46ccd233e359 100644
--- a/Documentation/ABI/stable/sysfs-firmware-efi-vars
+++ b/Documentation/ABI/stable/sysfs-firmware-efi-vars
@@ -17,6 +17,7 @@ Description:
 		directory has a name of the form "<key>-<vendor guid>"
 		and contains the following files:
 
+		=============== ========================================
 		attributes:	A read-only text file enumerating the
 				EFI variable flags.  Potential values
 				include:
@@ -59,12 +60,14 @@ Description:
 
 		size:		As ASCII representation of the size of
 				the variable's value.
+		=============== ========================================
 
 
 		In addition, two other magic binary files are provided
 		in the top-level directory and are used for adding and
 		removing variables:
 
+		=============== ========================================
 		new_var:	Takes a "struct efi_variable" and
 				instructs the EFI firmware to create a
 				new variable.
@@ -73,3 +76,4 @@ Description:
 				instructs the EFI firmware to remove any
 				variable that has a matching vendor GUID
 				and variable key name.
+		=============== ========================================
diff --git a/Documentation/ABI/stable/sysfs-firmware-opal-dump b/Documentation/ABI/stable/sysfs-firmware-opal-dump
index 32fe7f5c4880..1f74f45327ba 100644
--- a/Documentation/ABI/stable/sysfs-firmware-opal-dump
+++ b/Documentation/ABI/stable/sysfs-firmware-opal-dump
@@ -7,6 +7,7 @@ Description:
 
 		This is only for the powerpc/powernv platform.
 
+		=============== ===============================================
 		initiate_dump:	When '1' is written to it,
 				we will initiate a dump.
 				Read this file for supported commands.
@@ -19,8 +20,11 @@ Description:
 				and ID of the dump, use the id and type files.
 				Do not rely on any particular size of dump
 				type or dump id.
+		=============== ===============================================
 
 		Each dump has the following files:
+
+		=============== ===============================================
 		id:		An ASCII representation of the dump ID
 				in hex (e.g. '0x01')
 		type:		An ASCII representation of the type of
@@ -39,3 +43,4 @@ Description:
 				inaccessible.
 				Reading this file will get a list of
 				supported actions.
+		=============== ===============================================
diff --git a/Documentation/ABI/stable/sysfs-firmware-opal-elog b/Documentation/ABI/stable/sysfs-firmware-opal-elog
index 2536434d49d0..7c8a61a2d005 100644
--- a/Documentation/ABI/stable/sysfs-firmware-opal-elog
+++ b/Documentation/ABI/stable/sysfs-firmware-opal-elog
@@ -38,6 +38,7 @@ Description:
 		For each log entry (directory), there are the following
 		files:
 
+		==============  ================================================
 		id:		An ASCII representation of the ID of the
 				error log, in hex - e.g. "0x01".
 
@@ -58,3 +59,4 @@ Description:
 				entry will be removed from sysfs.
 				Reading this file will list the supported
 				operations (currently just acknowledge).
+		==============  ================================================
diff --git a/Documentation/ABI/stable/sysfs-hypervisor-xen b/Documentation/ABI/stable/sysfs-hypervisor-xen
index 3cf5cdfcd9a8..748593c64568 100644
--- a/Documentation/ABI/stable/sysfs-hypervisor-xen
+++ b/Documentation/ABI/stable/sysfs-hypervisor-xen
@@ -33,6 +33,8 @@ Description:	If running under Xen:
 		Space separated list of supported guest system types. Each type
 		is in the format: <class>-<major>.<minor>-<arch>
 		With:
+
+			======== ============================================
 			<class>: "xen" -- x86: paravirtualized, arm: standard
 				 "hvm" -- x86 only: fully virtualized
 			<major>: major guest interface version
@@ -43,6 +45,7 @@ Description:	If running under Xen:
 				 "x86_64": 64 bit x86 guest
 				 "armv7l": 32 bit arm guest
 				 "aarch64": 64 bit arm guest
+			======== ============================================
 
 What:		/sys/hypervisor/properties/changeset
 Date:		March 2009
diff --git a/Documentation/ABI/stable/vdso b/Documentation/ABI/stable/vdso
index 55406ec8a35a..73ed1240a5c0 100644
--- a/Documentation/ABI/stable/vdso
+++ b/Documentation/ABI/stable/vdso
@@ -23,6 +23,7 @@ Unless otherwise noted, the set of symbols with any given version and the
 ABI of those symbols is considered stable.  It may vary across architectures,
 though.
 
-(As of this writing, this ABI documentation as been confirmed for x86_64.
+Note:
+ As of this writing, this ABI documentation has been confirmed for x86_64.
  The maintainers of the other vDSO-using architectures should confirm
- that it is correct for their architecture.)
+ that it is correct for their architecture.
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 08:05:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 08:05:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15456.38466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYPPm-0004jP-Nt; Fri, 30 Oct 2020 08:05:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15456.38466; Fri, 30 Oct 2020 08:05:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYPPm-0004jI-Jn; Fri, 30 Oct 2020 08:05:38 +0000
Received: by outflank-mailman (input) for mailman id 15456;
 Fri, 30 Oct 2020 08:05:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AXzT=EF=gmail.com=marcandre.lureau@srs-us1.protection.inumbo.net>)
 id 1kYPPk-0004jD-H8
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 08:05:36 +0000
Received: from mail-ed1-x52e.google.com (unknown [2a00:1450:4864:20::52e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e054684-22f0-4b32-b28d-76acc7bf1673;
 Fri, 30 Oct 2020 08:05:35 +0000 (UTC)
Received: by mail-ed1-x52e.google.com with SMTP id dg9so5617000edb.12
 for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 01:05:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=+W8Syh9vDrLxoDxx3JbwPAyYgekK9oE44RncrvpEX1E=;
        b=H7Kvgh4A71Iou9ViSAqjmbvaEh2Xohe1FbjT8oYVIfSr+/2jhwib63AROFHMJVi36w
         H1e7hfNXNmvLqKP0FLoU0nXpKcbP58eKa6f+H+q/VHJi1SgMqyEbTEQXu9db9zlx49Xw
         3QYWg4zRtHnp0JXW6512Byo71YUBZMxZQb7SpdZZtoX2QhC0F/fx4jvGO6JZtbn/tdb8
         UOa2y5jmBV9zi6nmpMAT/8h1/DRWFtAcTcArGS1zob97KEbwC7uoc4j2q0uhTwADbT3w
         Kd2fVAfoS/KtDBRJvhHBvTjspS5Kd63/0RMsb+dCCtkRlD3e98V4CSm5wpr5YJKoNbAh
         u/EQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=+W8Syh9vDrLxoDxx3JbwPAyYgekK9oE44RncrvpEX1E=;
        b=Ull6y97MGKr2T/k1WdUDdnmVae0lB6z7NO2rdjh1Aa5yoR/ArJMHUiCYFcg7DxuN2F
         7AfZulfBhi3OTtzMSEhSvXkah0H6OAOE6U1H79s3odM+EYtQZ5YOj0S6X1v3kPee0Wmm
         cu2Zq3z7A54CX0q83BbVbdquKJ+TYu1vQK8ztwin/HHLqNu3Wg7yuukBB6paLXaCUvn4
         x3mlWp1jt2PJEC2h9FhUo2KkA+2LQc0sjJ107CvZFNA1WGbDbW0Jjmf3hQ0jUvU3UZFC
         Q2M6HjbU8nBDINwJTVmVpNKbC1yoG8CqkC+dgBmdcBfEpl8C0Amw7MV70+0JoNJsvPvf
         AeSQ==
X-Gm-Message-State: AOAM533m6NGCURsbxUvSC7n9KGVjGW/lQg3nvlVw8mNK4OyLjfCx1UTR
	fiy0nJcP2HP3ycN8upTLWo15Y7XYS5b+TZv1nvc=
X-Google-Smtp-Source: ABdhPJwAzHJL/B15MxhX4Scz95CdufOBBB/pgX/hI7IUh5HrGvbAVvyFTs9OP8Zyv5YfMRK7pRXf/vq+wy4ae/8OsKY=
X-Received: by 2002:aa7:dd42:: with SMTP id o2mr998428edw.53.1604045134553;
 Fri, 30 Oct 2020 01:05:34 -0700 (PDT)
MIME-Version: 1.0
References: <20201029220246.472693-1-ehabkost@redhat.com> <20201029220246.472693-15-ehabkost@redhat.com>
In-Reply-To: <20201029220246.472693-15-ehabkost@redhat.com>
From: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@gmail.com>
Date: Fri, 30 Oct 2020 12:05:23 +0400
Message-ID: <CAJ+F1CKZD5V8BJ1GHvt_GZ53Ytdtz-6Ja4h+L_wWZ_YW_x6dcQ@mail.gmail.com>
Subject: Re: [PATCH 14/36] qdev: Move dev->realized check to qdev_property_set()
To: Eduardo Habkost <ehabkost@redhat.com>
Cc: QEMU <qemu-devel@nongnu.org>, Matthew Rosato <mjrosato@linux.ibm.com>, 
	Paul Durrant <paul@xen.org>, Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, 
	Stefano Stabellini <sstabellini@kernel.org>, 
	"open list:Block layer core" <qemu-block@nongnu.org>, Stefan Berger <stefanb@linux.vnet.ibm.com>, 
	David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>, Halil Pasic <pasic@linux.ibm.com>, 
	Christian Borntraeger <borntraeger@de.ibm.com>, Anthony Perard <anthony.perard@citrix.com>, 
	xen-devel@lists.xenproject.org, =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <philmd@redhat.com>, 
	Artyom Tarasenko <atar4qemu@gmail.com>, Thomas Huth <thuth@redhat.com>, 
	Alex Williamson <alex.williamson@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>, 
	John Snow <jsnow@redhat.com>, Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>, 
	"Daniel P. Berrange" <berrange@redhat.com>, Cornelia Huck <cohuck@redhat.com>, 
	Qemu-s390x list <qemu-s390x@nongnu.org>, Max Reitz <mreitz@redhat.com>, 
	Igor Mammedov <imammedo@redhat.com>
Content-Type: multipart/alternative; boundary="00000000000072e65e05b2dedc08"

--00000000000072e65e05b2dedc08
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, Oct 30, 2020 at 2:10 AM Eduardo Habkost <ehabkost@redhat.com> wrote:

> Every single qdev property setter function manually checks
> dev->realized.  We can just check dev->realized inside
> qdev_property_set() instead.
>
> The check is being added as a separate function
> (qdev_prop_allow_set()) because it will become a callback later.
>
> Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
>

nice
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>

-- 
Marc-André Lureau

--00000000000072e65e05b2dedc08--


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 08:40:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 08:40:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15462.38478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYPxW-00085g-HL; Fri, 30 Oct 2020 08:40:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15462.38478; Fri, 30 Oct 2020 08:40:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYPxW-00085Z-Dp; Fri, 30 Oct 2020 08:40:30 +0000
Received: by outflank-mailman (input) for mailman id 15462;
 Fri, 30 Oct 2020 08:40:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CFCO=EF=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kYPxU-00085U-Tw
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 08:40:29 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.52]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30063103-a2f2-4a20-9ed7-fac822e2202a;
 Fri, 30 Oct 2020 08:40:27 +0000 (UTC)
Received: from MRXP264CA0014.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500:15::26)
 by DB7PR08MB4618.eurprd08.prod.outlook.com (2603:10a6:10:78::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 08:40:25 +0000
Received: from VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:15:cafe::cc) by MRXP264CA0014.outlook.office365.com
 (2603:10a6:500:15::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Fri, 30 Oct 2020 08:40:25 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT036.mail.protection.outlook.com (10.152.19.204) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 08:40:23 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64");
 Fri, 30 Oct 2020 08:40:22 +0000
Received: from 84d36ca888bb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AD963B37-A3EE-4B5A-BC5F-18EE5C75027D.1; 
 Fri, 30 Oct 2020 08:40:16 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 84d36ca888bb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 30 Oct 2020 08:40:16 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1989.eurprd08.prod.outlook.com (2603:10a6:4:75::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 08:40:11 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3499.029; Fri, 30 Oct 2020
 08:40:11 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CmKpdbIhxEmml/pXgr/joDp7FIh++K/fjT2Nx/u2Oig=;
 b=XeMFQvLU+Rg4DKuuhux3t8LckkInS/95vuD6sBEQRRffGorSrJRTZoMRMjScNO/UNZNYc6S1PH+yNpvlWeebF/uXck8OGM39NlmRgVG/EFOQHeSeLmrQgF8a1RUK9r1ACUcWRSI7L4mieIoniRM6c0yT5NuT3oyZh3mP9wUEA/8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5ac2bc20b691bf81
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BSSJ+jIYq7m9Sm4bXrVCkYh7gCXOpgA1k7VwCWvUkhoRUBY182Stezyhp2i2TG4DA+wgxXcP1vZre9L1lChoVRoxHHSLsFjaPJt1e3Ccx+BrdFMK6Tg8FMLhyOiQF/T5ci6O7JBjXk37MFmLEgKZ0DpxsleykmygssxiRbhIyYmr3ArJmhAKO2aUaRDnFxWJcZHQUx1miXiF5vKL4jkvswBI6OIwKf6M0ROBNzq4q5n1jjT9JEgIXZOJI7Im4EC1KEl/xjdgctiAKcd9E/oVvAbehPQvRSYkiDhBmfjmEIRO1+efeW/JUZ84PA0LsJoC5MLe4CdUufN6PptHsuy4uA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CmKpdbIhxEmml/pXgr/joDp7FIh++K/fjT2Nx/u2Oig=;
 b=N5CTm7sw8T47ZtUcTZvfkghYvrzWwhH5Xz6uPGzle2iuSAbbxfryYinZmSLLB2wB+Nx7d+MuMac7+Rx3oRLlmnrnF+9p22E9BL3GlpkUGBt3TzuPCgJT9f4z81y9DKgNw64YB7WNJkCJXTOgwfpBc+s4GzVlShL3ZYSWdsRLNZcNg0+GOK5uvsn5N0MIdD0UyddCKsAd/RH07maWWdg51ONo9WjyKtgZdJ3fAiJRAcP1IgaPHOnto1Mu5V7Oi2rHh/56QnvphdgsYbU6vSxbknge+q5xFyyUx4h67Nn/qd//ZPGnYXF0G3/eM7IkmACB0u1cwmL9e1EhyneKXbmCuQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, "open list:X86"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Topic: [PATCH v2 3/3] xen/arm: Warn user on cpu errata 832075
Thread-Index: AQHWq7RKM3lRt4XRXUWYGPT9/eCkOKmtW9sAgAD/7gCAAORhAIAAmPsA
Date: Fri, 30 Oct 2020 08:40:11 +0000
Message-ID: <10730DC0-C52B-453D-B5C3-044CE8997825@arm.com>
References:
 <a6fc6cfd71d6d53cf89bf533a348bda799b25d7d.1603728729.git.bertrand.marquis@arm.com>
 <4d62bc0844576b80e00ea48e318be238a4d73eae.1603728729.git.bertrand.marquis@arm.com>
 <c6790d34-2893-78c4-d49f-7ef4acfceb96@xen.org>
 <DFD994CA-C456-468C-8442-0F63CE661E78@arm.com>
 <alpine.DEB.2.21.2010291632130.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010291632130.12247@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: af993d2c-b309-4b1d-bd5b-08d87caf711b
x-ms-traffictypediagnostic: DB6PR0801MB1989:|DB7PR08MB4618:
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB461874B4CBA54BA33AD5981D9D150@DB7PR08MB4618.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 3faNI77TWyTbRwxtXdtJjr2C1972BrWP8MY8mLciVDSylfQnK6k5kq5Df3kij5GwFSthbEu9MN2xUoHjRSGuguIrphDoOqvhQvNq265my7IIIcfuWIHcAnj4q2OeyQu21SkLAlkSep5oZv4xAf+efSLYGB/vmEQWfb/Braa7yqtdsNkOnJtU+BK4oW3F0uxuS1UJcxGRtOiqmrUgn1mm44ceN6uIL2GHmk3/TXkv9ASf445AQMv6pP3YdhMov+k1+0HJlXE47iTuqMzC8kJpnqyWd/kY7pqsk6MFVV49wVjdCGEAMsC+MkyeYJxX1mrAuuqbjPjcTBqzLi41Ru/0Zw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39840400004)(346002)(396003)(376002)(136003)(366004)(316002)(26005)(86362001)(71200400001)(6916009)(36756003)(186003)(6486002)(33656002)(54906003)(5660300002)(4326008)(478600001)(91956017)(6512007)(2616005)(76116006)(8676002)(64756008)(66946007)(53546011)(6506007)(8936002)(66556008)(66476007)(66446008)(2906002)(83380400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 HG3eh85OewByATTai2lak5/jHgCEv1lt89kK4R7zqRgRZSIMiXLSn6v7frc9jIfI0glPFCxC5DzUOT/uHGc1xrlBI+s+c9o6/4J8+LcWyv1e/G+eKYwWliKCvhmqVY09vF5HBwxhttc1GNW1jhUU4yFQ56+Lk1V4HToQdl7wOhyrqFx0jhdnxc4Rz5Uupf+I2YcspLcWn4R2QyyzRXfW/uM3tsGcfGwx+IXKwQEDM1IljJX81QoTGGnSLPBjdyC/xtd7RIOQ9FmJ2z84eS1DDP/WbPZ9sMSZslN+dJ5TEJdDg/Q0NnHhnGQ9H2yG9HiJNFNaBSDQpWI1P9MmKWs9NvDMMnIGiEtUvV1X8zTxG+PQlbRbZdiYSEp/l2eNV1oCvRH8Dfg4q/6mMpp02eKdLpiSjZMWJz/NbeRVZ56TqLsIBhfZc/sGXdzu7XhVMZzemOuek27XnDeOX+8SeHu+Unx6j4KYDsdQ6cPIpCR0JDeTBTaUWTtA2H7/5A7K4eyh0vsQFB0AYoSh+04SUQ8KFfVvHxU3NreJQgyYmFmTJ8h7XbZvP9oWaAj1SJ2un7UrdN2GfeD5EcKq/Lg0DlgtlsFOajGv6maGWALATa8cfsPEAmKeSTg5PmqE+CO1qlEHxy/Sy4k8SlQgeboqJtX3hg==
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <04A1A4CF9F5C1E4081E8BE9DC11F24EB@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1989
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bdcfd7ed-e5fb-43a7-f8ac-08d87caf6a0a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	yn3JceVKXVsDHbCRAMOrBgnpDgxHva7UZOsYsWnZamt2porgodG0yEIptsoaLABecw1+Ed4PBn5WhwvPnzmLHG7rRgxUCgKj9rB7Uoaycr2VayZx0riYwZSNbhUWq1ksL3QE0y14SlW8/BeBGaz/L70H5/HTgpRmPUr6+AmIm8xZj/zMMKjsnhgivqfOkv7dydbM/7BJCMW9NcQtDCwnB++pIPKuv7m3tGb3lDeyvXaPkENpOBAqRr0H8WZ6Y6ttlomeRYlHiQ1yyXIV0h33bcEypGatPxI1vfdB7ddVTf9pFyIK7PBSh+kbPrrI8vEwDtCbNnLs+ZVkvq0+inD0Jwz7atwGGpWi1mI0lZXUecAF7W1sYcrgQHvjgwVGoWRZdqH1ythlbToKlxCLNFxCAg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39840400004)(136003)(376002)(346002)(46966005)(82310400003)(81166007)(8676002)(54906003)(6486002)(36906005)(47076004)(36756003)(26005)(186003)(4326008)(6512007)(2906002)(33656002)(83380400001)(356005)(5660300002)(316002)(70206006)(6862004)(86362001)(336012)(2616005)(53546011)(478600001)(8936002)(6506007)(107886003)(70586007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 08:40:23.0640
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: af993d2c-b309-4b1d-bd5b-08d87caf711b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB4618



> On 29 Oct 2020, at 23:32, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 29 Oct 2020, Bertrand Marquis wrote:
>> Hi Julien,
>>
>>> On 28 Oct 2020, at 18:39, Julien Grall <julien@xen.org> wrote:
>>>
>>> Hi Bertrand,
>>>
>>> On 26/10/2020 16:21, Bertrand Marquis wrote:
>>>> When a Cortex A57 processor is affected by CPU errata 832075, a guest
>>>> not implementing the workaround for it could deadlock the system.
>>>> Add a warning during boot informing the user that only trusted guests
>>>> should be executed on the system.
>>>> An equivalent warning is already given to the user by KVM on cores
>>>> affected by this errata.
>>>> Also taint the hypervisor as unsecure when this errata applies and
>>>> mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>
>>> Reviewed-by: Julien Grall <jgrall@amazon.com>
>>
>> Thanks
>>
>>>
>>> If you don't need to resend the series, then I would be happy to fix the typo pointed out by George on commit.
>>
>> There is only the condensing from Stefano.
>> If you can handle that on commit then great, but if you need me to send a v3 to make your life easier do not hesitate to tell me.
>
> I have just done the committing

Thanks a lot :-)

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 08:44:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 08:44:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15475.38490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQ1F-0008Hc-2Q; Fri, 30 Oct 2020 08:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15475.38490; Fri, 30 Oct 2020 08:44:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQ1E-0008HV-VM; Fri, 30 Oct 2020 08:44:20 +0000
Received: by outflank-mailman (input) for mailman id 15475;
 Fri, 30 Oct 2020 08:44:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CFCO=EF=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kYQ1D-0008HQ-Qg
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 08:44:19 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.53]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c92260a-4cfa-4e30-9567-d10abbf531a1;
 Fri, 30 Oct 2020 08:44:18 +0000 (UTC)
Received: from DB6PR0801CA0054.eurprd08.prod.outlook.com (2603:10a6:4:2b::22)
 by AM0PR08MB3044.eurprd08.prod.outlook.com (2603:10a6:208:5a::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 08:44:15 +0000
Received: from DB5EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::1) by DB6PR0801CA0054.outlook.office365.com
 (2603:10a6:4:2b::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 30 Oct 2020 08:44:14 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT053.mail.protection.outlook.com (10.152.21.119) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 08:44:12 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64");
 Fri, 30 Oct 2020 08:44:11 +0000
Received: from ed8889a32a9f.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E08EB2E7-DC79-40A5-9944-3F32D11E163D.1; 
 Fri, 30 Oct 2020 08:44:06 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ed8889a32a9f.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 30 Oct 2020 08:44:06 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1654.eurprd08.prod.outlook.com (2603:10a6:4:3a::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Fri, 30 Oct
 2020 08:44:04 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3499.029; Fri, 30 Oct 2020
 08:44:04 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sPm3xJApa4DEiwPt3dqAlFlecCW5Av9u0DG5Xocvwf0=;
 b=8HeN+Ddnt6WbeOo6wqW1i9vhxsY46AV+ZFdKqYSQNEhdDAdFDWhnL6Hl7bR3W/yidQgsZPhkgdLUhYBuabCcgBg0vYh1gyUOE9EHLMQWM3j3NzdXRNazNa3Akb6SUMUhQiv6j44zgxAortClALC91gmPqf56mm/gfTgK7QDiESE=
Received: from DB6PR0801CA0054.eurprd08.prod.outlook.com (2603:10a6:4:2b::22)
 by AM0PR08MB3044.eurprd08.prod.outlook.com (2603:10a6:208:5a::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 08:44:15 +0000
Received: from DB5EUR03FT053.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::1) by DB6PR0801CA0054.outlook.office365.com
 (2603:10a6:4:2b::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 30 Oct 2020 08:44:14 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT053.mail.protection.outlook.com (10.152.21.119) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 08:44:12 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4495b43c3a323f95
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GzXMFATeJRP+heR7wHHFQUKS1V0mwDgrPcCXXZQMoU2y6yHuEv1FAAVvKdRHQ4B/+oFfHA9GkcN0aCWVoo+BELwuVobJNEvTV27sWvv7+ecfrbFepcVE6JKgOQ4faLbBUFiRGYPj+q14QJJfpq6xtD2Y77DDjlARsn2E+ujLYqn/XL5q4UjxqR+vFuWkz3a+y5LNwAVUuOH20qez8wOoteOEUb0/l+hVMhiLkb0lhV1vC+AmZhXxceACDof3Et0kqy5BwjCpAJJWicuA9/FPq7ZAJAnDk2cEH4ZKGmLDRvF8paG7CriCoKH45OjxfdbWUgMOkL3zMs0scxRsAHMu+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sPm3xJApa4DEiwPt3dqAlFlecCW5Av9u0DG5Xocvwf0=;
 b=VgiyHzTHXr63QN8NLtg8BLvRmoKUacDglRFoDqcssJBzcZt6VqLjVR5/qR4Vh45meAZB2HqbGDIiRgv80O3GvyecXLbBzjGT1dpot+P2SWQWJotVcK+drxum7gLIJ6AehxuXS2KiBvBX6iNOlbyfYA9+MJCARs09sjVdomcn0o1Tlkhhs/YqCp9JtE6cVimpadWKyjLUpojg1m9ZMoW8XcPaTAXJc8O6GU5duVl4mdZP3f1InK4m0c4bOwpJbO+JGmBN5/r7rjPdvSntK1GOPbYjf6v31kqknjAxzHU/pf9BaVMlrmk5gMxAK7inVpQuSaoKLI4fDtdIJhGVpVvL7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>, Rahul Singh <Rahul.Singh@arm.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvyGh0jWzCHpk6w/U7XN+08x6mgw0AAgAEzsQCAAWItgIABA+GAgADBXQCAABfAAIAAGI8AgAAORwCABG97AIADrU2AgAFBfQCAAGM1gIAA0IAA
Date: Fri, 30 Oct 2020 08:44:04 +0000
Message-ID: <5A60DDEA-5A39-4D50-8CCD-B41B14EE2AA0@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <bc697327-2750-9a78-042d-d45690d27928@xen.org>
 <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
 <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 19a8063b-26ee-4a94-df69-08d87caffa61
x-ms-traffictypediagnostic: DB6PR0801MB1654:|AM0PR08MB3044:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB304430BF7BDAE3F472C260B39D150@AM0PR08MB3044.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 7s+1FG5HE/rEqCOWDpEk9lEW2CAhpaI+WMGmOIDCXVQK1TlxjpudOjAxL7wCy3fBLYUU5TyqoktEnp+tqP2hNPDQUbeOVeM0Yzc1A4+xSYgD4MvWIAZ/SCmhZDiiqsyU6a8tVz76P9yaU85KrRm8rgeOCsO0DJ1rC0jnHf1pqASHBHUF0+jBGZOfR53t8WCdSJsyNEgpEbfetJXkwT6uK0aDBc9NUuFAeWuY1DaRcYqHU86M8HEVbV8TY7dacmnGLNF+jSuIiFwDlc4CHloGCHSVrXMqf2v4UvyxbxbFBf8Y8vC2E+wokVOH5BNsiBiYkdHl4EHMrGRQh+g6srCrjg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(346002)(136003)(376002)(396003)(39840400004)(4326008)(54906003)(33656002)(316002)(66946007)(91956017)(76116006)(6506007)(53546011)(86362001)(8676002)(36756003)(5660300002)(6916009)(8936002)(6512007)(6486002)(478600001)(83380400001)(2616005)(66476007)(66556008)(64756008)(66446008)(71200400001)(186003)(26005)(2906002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 bAZJs2STdzS/p9YQ+aNKj/zl2xDJG6lbO7jHt5dt06fMHeUC+gWFyrEl/jVGNe/g86/INHRvBtajdHD4I6CozF9EUMCsbJHnbIbHlBTKLVBk7vc9ouuB/cy+4TEDiVvKWcrRRCzAUw5jXm4EDh0FMCv4LdOHPC0OahL0+Ub18iOWzRlAhxzyWlGcD5AYTw/+nMzVwhuTiVhqStJg/opsphLlWP+s5vlL5lLjW4rXeZn2092X39h0D9tt3lf9vDZX/VDMnGrwI1qgDn9wG3rrsiKNdcyb+uPVboMaxraa5GbX7RYXZ41wRt578btmE6q4cw/z3fdJQMM8rbczOprEeEslYpGE5A4bEwt6ip3R6i9AHb10k5MWxVlZea6h0tbKzhOegMwD3HDL6+7GJxAJxVY42aawA3mCrnhXgrND+RxVv7fzn5qKrNjU8G2O5L5GAvXpGyNWA88y4+HSmqjPclPYNuAOuf/ZOM+OYnEJtxFPJDOnGTmBy6pB4kaHODisPvLxt0Ogu/VieawfSat6YwcDRh4+Wq8gpe+0qrHygVtQdtSNtXChF4TYReWG4WpcTDjfPRRzmwrbV4o7YAmY5UFOGGaL3Y5VS4t9xiYyGkjwOFLFm1MduMlybwzWIcig+zs5x6fPRvaXJWsdSRDdsQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <50CFE705E976E2428033914DB10DA091@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1654
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2208cd73-09b3-4398-b2e8-08d87caff516
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QaNczFuzr/1JVL+VpijJmVsbDPs3T0Q3btku/OuZ0sr2IsS7mQZenBGotUAz2lDZu1PlsSQRfGKNvewRaAFnB4L5/jS3IeS5gExJwP22KxAZlEf/nRf0MlcXSPMP0NNLqEk7Nrd3hIR5DOgYFmRYEJ2e3NVLuwf4yYqHFKdBNgkgZ6NmOKzqLClg7QL2crjy/mm6uDk+mxR0693urAiGWHk5tbh6fWL/87jkXn7cF4herYsGQf5S2+Tc9+pbxM6Bk7Y4tA5KvHZgOWwRtRkRPmcoGEd3FH/ODQj8dJDwLfEVmWqXsby2bGKrTG6gQnk1YzUqcINqP2AqvjvgkQlpAfArUQEa1yL7mcnqkLHjby/iTicPuZ1zJ3y7Euh61qjljqDXJyaFLG/9/7tITqRkAg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(39840400004)(346002)(396003)(136003)(46966005)(70586007)(6512007)(5660300002)(81166007)(8676002)(36756003)(33656002)(6862004)(54906003)(356005)(70206006)(6506007)(8936002)(86362001)(186003)(316002)(26005)(53546011)(107886003)(6486002)(478600001)(2906002)(336012)(2616005)(82310400003)(83380400001)(4326008)(47076004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 08:44:12.2315
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 19a8063b-26ee-4a94-df69-08d87caffa61
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT053.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3044

HI Stefano,

> On 29 Oct 2020, at 20:17, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 29 Oct 2020, Bertrand Marquis wrote:
>>> On 28 Oct 2020, at 19:12, Julien Grall <julien@xen.org> wrote:
>>> On 26/10/2020 11:03, Rahul Singh wrote:
>>>> Hello Julien,
>>>>> On 23 Oct 2020, at 4:19 pm, Julien Grall <julien@xen.org> wrote:
>>>>> On 23/10/2020 15:27, Rahul Singh wrote:
>>>>>> Hello Julien,
>>>>>>> On 23 Oct 2020, at 2:00 pm, Julien Grall <julien@xen.org> wrote:
>>>>>>> On 23/10/2020 12:35, Rahul Singh wrote:
>>>>>>>> Hello,
>>>>>>>>> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>> 
>>>>>>>>> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>>>>>>>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>>>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>>>>>>>    that supports both Stage-1 and Stage-2 translations.
>>>>>>>>>>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>>>>>>>>>>    capability to share the page tables with the CPU.
>>>>>>>>>>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>>>>>>>>>>    and priority queue IRQ handling.
>>>>>>>>>>>> 
>>>>>>>>>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>>>>>>>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>>>>>>>>>> they are done).
>>>>>>>>>>>> 
>>>>>>>>>>>> Do you know why Linux is using thread. Is it because of long running
>>>>>>>>>>>> operations?
>>>>>>>>>>> 
>>>>>>>>>>> Yes you are right because of long running operations Linux is using the
>>>>>>>>>>> threaded IRQs.
>>>>>>>>>>> 
>>>>>>>>>>> SMMUv3 reports fault/events bases on memory-based circular buffer queues not
>>>>>>>>>>> based on the register. As per my understanding, it is time-consuming to
>>>>>>>>>>> process the memory based queues in interrupt context because of that Linux
>>>>>>>>>>> is using threaded IRQ to process the faults/events from SMMU.
>>>>>>>>>>> 
>>>>>>>>>>> I didn’t find any other solution in XEN in place of tasklet to defer the
>>>>>>>>>>> work, that’s why I used tasklet in XEN in replacement of threaded IRQs. If
>>>>>>>>>>> we do all work in interrupt context we will make XEN less responsive.
>>>>>>>>>> 
>>>>>>>>>> So we need to make sure that Xen continue to receives interrupts, but we also
>>>>>>>>>> need to make sure that a vCPU bound to the pCPU is also responsive.
>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> If you know another solution in XEN that will be used to defer the work in
>>>>>>>>>>> the interrupt please let me know I will try to use that.
>>>>>>>>>> 
>>>>>>>>>> One of my work colleague encountered a similar problem recently. He had a long
>>>>>>>>>> running tasklet and wanted to be broken down in smaller chunk.
>>>>>>>>>> 
>>>>>>>>>> We decided to use a timer to reschedule the taslket in the future. This allows
>>>>>>>>>> the scheduler to run other loads (e.g. vCPU) for some time.
>>>>>>>>>> 
>>>>>>>>>> This is pretty hackish but I couldn't find a better solution as tasklet have
>>>>>>>>>> high priority.
>>>>>>>>>> 
>>>>>>>>>> Maybe the other will have a better idea.
>>>>>>>>> 
>>>>>>>>> Julien's suggestion is a good one.
>>>>>>>>> 
>>>>>>>>> But I think tasklets can be configured to be called from the idle_loop,
>>>>>>>>> in which case they are not run in interrupt context?
>>>>>>>>> 
>>>>>>>> Yes you are right tasklet will be scheduled from the idle_loop that is not interrupt conext.
>>>>>>> 
>>>>>>> This depends on your tasklet. Some will run from the softirq context which is usually (for Arm) on the return of an exception.
>>>>>>> 
>>>>>> Thanks for the info. I will check and will get better understanding of the tasklet how it will run in XEN.
>>>>>>>>> 
>>>>>>>>>>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>>>>>>>>>>>    access functions based on atomic operations implemented in Linux.
>>>>>>>>>>>> 
>>>>>>>>>>>> Can you provide more details?
>>>>>>>>>>> 
>>>>>>>>>>> I tried to port the latest version of the SMMUv3 code than I observed that
>>>>>>>>>>> in order to port that code I have to also port atomic operation implemented
>>>>>>>>>>> in Linux to XEN. As latest Linux code uses atomic operation to process the
>>>>>>>>>>> command queues (atomic_cond_read_relaxed(),atomic_long_cond_read_relaxed() ,
>>>>>>>>>>> atomic_fetch_andnot_relaxed()) .
>>>>>>>>>> 
>>>>>>>>>> Thank you for the explanation. I think it would be best to import the atomic
>>>>>>>>>> helpers and use the latest code.
>>>>>>>>>> 
>>>>>>>>>> This will ensure that we don't re-introduce bugs and also buy us some time
>>>>>>>>>> before the Linux and Xen driver diverge again too much.
>>>>>>>>>> 
>>>>>>>>>> Stefano, what do you think?
>>>>>>>>> 
>>>>>>>>> I think you are right.
>>>>>>>> Yes, I agree with you to have XEN code in sync with Linux code that's why I started with to port the Linux atomic operations to XEN  then I realised that it is not straightforward to port atomic operations and it requires lots of effort and testing. Therefore I decided to port the code before the atomic operation is introduced in Linux.
>>>>>>> 
>>>>>>> Hmmm... I would not have expected a lot of effort required to add the 3 atomics operations above. Are you trying to also port the LSE support at the same time?
>>>>>> There are other atomic operations used in the SMMUv3 code apart from the 3 atomic operation I mention. I just mention 3 operation as an example.
>>>>> 
>>>>> Ok. Do you have a list you could share?
>>>>> 
>>>> Yes. Please find the list that we have to port to the XEN in order to merge the latest SMMUv3 code.
>>> 
>>> Thanks!
>>> 
>>>> If we start to port the below list we might have to port another atomic operation based on which below atomic operations are implemented. I have not spent time on how these atomic operations are implemented in detail but as per my understanding, it required an effort to port them to XEN and required a lot of testing.
>>> 
>>> For the beginning, I think it is fine to implement them with a stronger memory barrier than necessary or in a less efficient. This can be refined afterwards.
>>> 
>>> Those helpers can directly be defined in the SMMUv3 drivers so we know they are not for general purpose :).
>>> 
>>>> 1. atomic_set_release
>>> 
>>> This could be implemented as:
>>> 
>>> smp_mb();
>>> atomic_set();
>>> 
>>>> 2. atomic_fetch_andnot_relaxed
>>> 
>>> This would need to be imported.
>>> 
>>>> 3. atomic_cond_read_relaxed
>>> 
>>> This would need to be imported. The simplest version seems to be the generic version provided by include/asm-generic/barrier.h (see smp_cond_load_relaxed).
>>> 
>>>> 4. atomic_long_cond_read_relaxed
>>>> 5. atomic_long_xor
>>> 
>>> The two would require the implementation of atomic64. Volodymyr also required a version. I offered my help, however I didn't find enough time to do it yet :(.
>>> 
>>> For Arm64, it would be possible to do a copy/paste of the existing helpers and replace anything related to a 32-bit register with a 64-bit one.
>>> 
>>> For Arm32, they are a bit more complex because you now need to work with 2 registers.
>>> 
>>> However, for your purpose, you would be using atomic_long_t. So the the Arm64 implementation should be sufficient.
>>> 
>>>> 6. atomic_set_release
>>> 
>>> Same as 1.
>>> 
>>>> 7. atomic_cmpxchg_relaxed might be we can use atomic_cmpxchg that is implemented in XEN need to check.
>>> atomic_cmpxchg() is strongly ordered. So it would be fine to use it for implementing the helper. Although, more inefficient :).
>>> 
>>>> 8. atomic_dec_return_release
>>> 
>>> Xen implements a stronger version atomic_dec_return(). You can re-use it here.
>>> 
>>>> 9. atomic_fetch_inc_relaxed
>>> 
>>> This would need to be imported.
>> 
>> We do agree that this would be the work required but some of the things to be imported have dependencies and this is not
>> a simple change on the patch done by Rahul and it would require to almost restart the implementation and testing from the
>> very beginning.
>> 
>> In the meantime could we process with the review of this SMMUv3 driver and include it in Xen as is ?
>> 
>> We can set me and Rahul as maintainers and flag the driver as experimental in Support.md (it is already
>> protected by the EXPERT configuration in Kconfig).
> 
> I think that is OK as long as you make sure to go through the changelog
> of the Linux driver to make sure we are not missing any bug fixes and
> errata fixes.

We will check the driver history from the point we used to the current status
to check for possible bug or errata fixes needed to be backported.

We will also flag the driver as unsupported and put Rahul and myself as
maintainers for it.

Cheers
Bertrand


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 08:47:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 08:47:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15490.38501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQ3r-0008Ro-Jj; Fri, 30 Oct 2020 08:47:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15490.38501; Fri, 30 Oct 2020 08:47:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQ3r-0008Rh-Gl; Fri, 30 Oct 2020 08:47:03 +0000
Received: by outflank-mailman (input) for mailman id 15490;
 Fri, 30 Oct 2020 08:47:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2dz=EF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kYQ3p-0008Rc-MG
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 08:47:01 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.81]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88047914-df7a-4b5f-9669-e92f07e5f778;
 Fri, 30 Oct 2020 08:46:58 +0000 (UTC)
Received: from AM5PR1001CA0024.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:206:2::37)
 by AM9PR08MB6147.eurprd08.prod.outlook.com (2603:10a6:20b:2da::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 08:46:57 +0000
Received: from VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:2:cafe::8d) by AM5PR1001CA0024.outlook.office365.com
 (2603:10a6:206:2::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Fri, 30 Oct 2020 08:46:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT054.mail.protection.outlook.com (10.152.19.64) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 08:46:55 +0000
Received: ("Tessian outbound c189680f801b:v64");
 Fri, 30 Oct 2020 08:46:52 +0000
Received: from d37468c9f843.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 3694CE0F-DD8B-4DDA-8D3D-775410EED42F.1; 
 Fri, 30 Oct 2020 08:46:15 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d37468c9f843.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 30 Oct 2020 08:46:15 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com (2603:10a6:20b:4e::31)
 by AM6PR08MB4772.eurprd08.prod.outlook.com (2603:10a6:20b:cf::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 08:46:11 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a]) by AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a%4]) with mapi id 15.20.3499.027; Fri, 30 Oct 2020
 08:46:11 +0000
X-Inumbo-ID: 88047914-df7a-4b5f-9669-e92f07e5f778
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LDV+57uX0TZe2aFVIlALMBZ/xjsan6sdHjyjibqPVQE=;
 b=5xbW/jUHkQS2ZtAbYMjckdwZV4kTPVODtnNEZKa6m19npbT3FIfBzDCqqS0pDfJMt+CbMmsXrq21Jgjvy3lYNY5Szemea1p0bOu3HSyBHnfOOpPcAkYLyzIMitmyFy5lM+tK1kVKhqgtizIqEdWXxem21kgW0VmtIf8TIly7tTE=
X-MS-Exchange-Authentication-Results: spf=temperror (sender IP is
 63.35.35.123) smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass
 (signature was verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=temperror action=none header.from=arm.com;
Received-SPF: TempError (protection.outlook.com: error in processing during
 lookup of arm.com: DNS Timeout)
X-CheckRecipientChecked: true
X-CR-MTA-CID: a78b3e467c7fac17
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ex5gMdECafIVmcYAvDpWp1pL11gBlOAsDWpGObhHLiuDGXfRcsx1cm1X1CcJh7iUgbvN8tRr2nYckQHcuORLOKWbPl4tNU5t+hvO6Wm/4Xs5725Q86JBsCjCudqPkX6K+BOfjXejABpXTIqxIa9guTIDI73EWg7mtIyy5ix1Znnj58TIyyTDw40g+SoqLQuEpSbOo+/b/rpc7JyUZVMSMxSO9FRBMMPSlpIEfyp/V1UbqLzN2fFDXf7KMenP6qfScs1YTIBE26I90sdQwYR9wbad6tk92mclHOdpYW1y5jHUXRISbPoHZ/WfjXvKLDUpPVqZYHMsV6awgNXUj5vO0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LDV+57uX0TZe2aFVIlALMBZ/xjsan6sdHjyjibqPVQE=;
 b=KFId2AoQTI5Sq6VlltSu9LfofQPjHkOLSI+wV1z/1pYrwW2LcYNek0qzTOtTvdJMT/Kr/44/GkLoOYPXEuZSZ/R3GmkV/+7+xp2crpsc3C181OqhghCHmSLKFjqYNqBvZxskhzWwx6IEuPyupt8Lw3/q1+u/EbHj7kG86+6BjiDzuHyo7+6BqvxpXF97o4FXqfDppYicWysNddTrjmXRu+S52/ntnpHepJuzlaPu4XocXToOP/zIYwmgJU8IpaAYmZGNxodO1Sr7vKI2jJgmKaa9n6HAJFVjwWOSdFgLYAyOd1kjrVuHnOBl/4Vj7BjYVv0ylaZII67LiW1xewNBqw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall
	<julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamgw0AAgAEzsQCAAWIugIABA+CAgADBXICAABfBAIAAGI6AgAAOSACABG94AIADrVCAgAFBfQCAAGM1gIAA0RiA
Date: Fri, 30 Oct 2020 08:46:11 +0000
Message-ID: <1BE06E0F-26CF-453A-BB06-808CC0F3E09B@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <bc697327-2750-9a78-042d-d45690d27928@xen.org>
 <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
 <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d9d69c16-dcca-429e-1812-08d87cb05b31
x-ms-traffictypediagnostic: AM6PR08MB4772:|AM9PR08MB6147:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB61473B14CF89184DDB65D9FCFC150@AM9PR08MB6147.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 Oh6kbzjo3wennyP3Yfjbp/8aXfgeR6f1CEvMuRZAvjxisJ0PpUoC9TSkRSpDhMznrRaOcHuCWodHypTRgTFAEW+22Ol3rh/D4RGOFpERrxrcki04aKuZo864P+UBfe8foOCSigbCdQ/n3rdM4HD2x5nb4iXk9DBI4Xfs5NK9Eoko2vkUKrzAZo06fD1BKsc1W7bFRBx1ecmP6gSR5MGL6fDGBRJbmGtHaMwDCIKKNBd0JXz76oSk7dQW3iYM4LMd2TrHu4ldRoL1LOkLR+X2lftHkWJLTXhMdP10qD45gPjVvd4ZD7gFFFHjq/GKTq5P06t8Xee57LnFw4IQIqCBLw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3496.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(346002)(136003)(396003)(39840400004)(2906002)(6916009)(5660300002)(86362001)(33656002)(36756003)(66476007)(91956017)(76116006)(6512007)(64756008)(66446008)(66946007)(66556008)(2616005)(6506007)(26005)(478600001)(53546011)(4326008)(83380400001)(8936002)(55236004)(316002)(6486002)(8676002)(186003)(71200400001)(54906003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 bIiDH4mD4wyJipZn1mavgOEHfR3htbfoMZXXe2lCrdQNQQ/+LkHsCRmjuo44gdCL3gSMrUTnmpn2jw1herw/NOKzQo1lE0r9xixbG/9sMZk4nNOdujDAvW+c03ChomuY6hkvASOkj7MMC6rsuEBv29RbOkdMkZCraCYJwNl42oepCtWSZhedRhTEB4OWnhJhlLPMWwEFWrKhy1WpfeYoDxJr598fHnyCnOVzl9Ym5KGhFl8trULFhkjn3RW2XRW3woyE6CdPpbnP3DFGCgYJXcJMzMRU7mzU/LkKJjIKLy5ov6EHhL8qp1H7AHe5xKJR6XOaJEqGfECKuG56tAS5cb0+nqFiIIAjYeT8d1f3yi9BhddrSml9AuCUPS0d9ykkVH6t27hElN9jOys9mXwLdzjFPNtVTYoU0NU1FZs1ETkFKaZqxehEWXZo76f2ZfORJ/hpAuCKyQz0W2jmHd0N5d6TxXWNjaC6j3zbG9Dlua/8VkVwevfMBlAt9l2ld5SzQ57E70BBUSmpxzfYKaQus2MzjGfPco8E7DV4yW6XOSu5Fc+h9koyI18nfhdwORcpHNjkMaDzDzRSqIUQ+xBU9BOzZV+GuXGqzmcfiE7wmgrE98TH+2nyuq8DdL8IpPsnAAF5XbZv3ZmfCoZicsgkYQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <624795650C63604D822C950C2BF2C547@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4772
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	116366eb-c935-4789-4446-08d87cb040cb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	9lPj7r7B357mWjWLYIZdcG1dZH0EkmR6rjnE0bQU3KFpgAyKypJmrqHZIaYEp9ENJZuZaS682o0QkqE73vwy0BTeIMAdvHOdsekrEEkFsj//CTns30JsYTKydmXbROe9cWo0o5Uf5krdwR2Tcf/un1InP++9JVUNDjH7zhPnJ+Wv19XXHG4SK6br9i5K3TcncUOA7UNlNbu4TLiXc2u2qPCVtNr9zM7kArQoqLcWVhvLfrNslYTUYV7iumZm/q1nghlw9tsQCLPdsExno3Ozt1cb+RkYotS61xg9EYXmsbBWNAfm0YOH1iGVlBkp38IEeOaixm29VWGemk0jG6JW7i87segDQBrZ/jCjBOK8ZVEIlfzYl8gZB2o6+kikT4y2la5XjS0FCEYPY+UP9evhLw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(376002)(136003)(39840400004)(346002)(46966005)(336012)(5660300002)(70206006)(83380400001)(47076004)(70586007)(81166007)(36756003)(86362001)(2906002)(356005)(186003)(478600001)(8676002)(26005)(6862004)(2616005)(6486002)(107886003)(55236004)(63370400001)(6512007)(63350400001)(8936002)(54906003)(36906005)(316002)(82310400003)(53546011)(4326008)(33656002)(6506007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 08:46:55.6968
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d9d69c16-dcca-429e-1812-08d87cb05b31
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB6147

Hello Stefano,

> On 29 Oct 2020, at 8:17 pm, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 29 Oct 2020, Bertrand Marquis wrote:
>>> On 28 Oct 2020, at 19:12, Julien Grall <julien@xen.org> wrote:
>>> On 26/10/2020 11:03, Rahul Singh wrote:
>>>> Hello Julien,
>>>>> On 23 Oct 2020, at 4:19 pm, Julien Grall <julien@xen.org> wrote:
>>>>> On 23/10/2020 15:27, Rahul Singh wrote:
>>>>>> Hello Julien,
>>>>>>> On 23 Oct 2020, at 2:00 pm, Julien Grall <julien@xen.org> wrote:
>>>>>>> On 23/10/2020 12:35, Rahul Singh wrote:
>>>>>>>> Hello,
>>>>>>>>> On 23 Oct 2020, at 1:02 am, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>>>>>>> 
>>>>>>>>> On Thu, 22 Oct 2020, Julien Grall wrote:
>>>>>>>>>>>> On 20/10/2020 16:25, Rahul Singh wrote:
>>>>>>>>>>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>>>>>>>>>>> the Linux SMMUv3 driver.
>>>>>>>>>>>>> Major differences between the Linux driver are as follows:
>>>>>>>>>>>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>>>>>>>>>>>    that supports both Stage-1 and Stage-2 translations.
>>>>>>>>>>>>> 2. Use P2M  page table instead of creating one as SMMUv3 has the
>>>>>>>>>>>>>   capability to share the page tables with the CPU.
>>>>>>>>>>>>> 3. Tasklets is used in place of threaded IRQ's in Linux for event queue
>>>>>>>>>>>>>   and priority queue IRQ handling.
>>>>>>>>>>>> 
>>>>>>>>>>>> Tasklets are not a replacement for threaded IRQ. In particular, they will
>>>>>>>>>>>> have priority over anything else (IOW nothing will run on the pCPU until
>>>>>>>>>>>> they are done).
>>>>>>>>>>>> 
>>>>>>>>>>>> Do you know why Linux is using thread. Is it because of long running
>>>>>>>>>>>> operations?
>>>>>>>>>>> 
>>>>>>>>>>> Yes you are right because of long running operations Linux is using the
>>>>>>>>>>> threaded IRQs.
>>>>>>>>>>> 
>>>>>>>>>>> SMMUv3 reports fault/events bases on memory-based circular buffer queues not
>>>>>>>>>>> based on the register. As per my understanding, it is time-consuming to
>>>>>>>>>>> process the memory based queues in interrupt context because of that Linux
>>>>>>>>>>> is using threaded IRQ to process the faults/events from SMMU.
>>>>>>>>>>> 
>>>>>>>>>>> I didn’t find any other solution in XEN in place of tasklet to defer the
>>>>>>>>>>> work, that’s why I used tasklet in XEN in replacement of threaded IRQs. If
>>>>>>>>>>> we do all work in interrupt context we will make XEN less responsive.
>>>>>>>>>> 
>>>>>>>>>> So we need to make sure that Xen continue to receives interrupts, but we also
>>>>>>>>>> need to make sure that a vCPU bound to the pCPU is also responsive.
>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> If you know another solution in XEN that will be used to defer the work in
>>>>>>>>>>> the interrupt please let me know I will try to use that.
>>>>>>>>>> 
>>>>>>>>>> One of my work colleague encountered a similar problem recently. He had a long
>>>>>>>>>> running tasklet and wanted to be broken down in smaller chunk.
>>>>>>>>>> 
>>>>>>>>>> We decided to use a timer to reschedule the taslket in the future. This allows
>>>>>>>>>> the scheduler to run other loads (e.g. vCPU) for some time.
>>>>>>>>>> 
>>>>>>>>>> This is pretty hackish but I couldn't find a better solution as tasklet have
>>>>>>>>>> high priority.
>>>>>>>>>> 
>>>>>>>>>> Maybe the other will have a better idea.
>>>>>>>>> 
>>>>>>>>> Julien's suggestion is a good one.
>>>>>>>>> 
>>>>>>>>> But I think tasklets can be configured to be called from the idle_loop,
>>>>>>>>> in which case they are not run in interrupt context?
>>>>>>>>> 
>>>>>>>> Yes you are right tasklet will be scheduled from the idle_loop that is not interrupt conext.
>>>>>>> 
>>>>>>> This depends on your tasklet. Some will run from the softirq context which is usually (for Arm) on the return of an exception.
>>>>>>> 
>>>>>> Thanks for the info. I will check and will get better understanding of the tasklet how it will run in XEN.
>>>>>>>>> 
>>>>>>>>>>>>> 4. Latest version of the Linux SMMUv3 code implements the commands queue
>>>>>>>>>>>>>   access functions based on atomic operations implemented in Linux.
>>>>>>>>>>>> 
>>>>>>>>>>>> Can you provide more details?
>>>>>>>>>>> 
>>>>>>>>>>> I tried to port the latest version of the SMMUv3 code than I observed that
>>>>>>>>>>> in order to port that code I have to also port atomic operation implemented
>>>>>>>>>>> in Linux to XEN. As latest Linux code uses atomic operation to process the
>>>>>>>>>>> command queues (atomic_cond_read_relaxed(),atomic_long_cond_read_relaxed() ,
>>>>>>>>>>> atomic_fetch_andnot_relaxed()) .
>>>>>>>>>> 
>>>>>>>>>> Thank you for the explanation. I think it would be best to import the atomic
>>>>>>>>>> helpers and use the latest code.
>>>>>>>>>> 
>>>>>>>>>> This will ensure that we don't re-introduce bugs and also buy us some time
>>>>>>>>>> before the Linux and Xen driver diverge again too much.
>>>>>>>>>> 
>>>>>>>>>> Stefano, what do you think?
>>>>>>>>> 
>>>>>>>>> I think you are right.
>>>>>>>> Yes, I agree with you to have XEN code in sync with Linux code that's why I started with to port the Linux atomic operations to XEN  then I realised that it is not straightforward to port atomic operations and it requires lots of effort and testing. Therefore I decided to port the code before the atomic operation is introduced in Linux.
>>>>>>> 
>>>>>>> Hmmm... I would not have expected a lot of effort required to add the 3 atomics operations above. Are you trying to also port the LSE support at the same time?
>>>>>> There are other atomic operations used in the SMMUv3 code apart from the 3 atomic operation I mention. I just mention 3 operation as an example.
>>>>> 
>>>>> Ok. Do you have a list you could share?
>>>>> 
>>>> Yes. Please find the list that we have to port to the XEN in order to merge the latest SMMUv3 code.
>>> 
>>> Thanks!
>>> 
>>>> If we start to port the below list we might have to port another atomic operation based on which below atomic operations are implemented. I have not spent time on how these atomic operations are implemented in detail but as per my understanding, it required an effort to port them to XEN and required a lot of testing.
>>> 
>>> For the beginning, I think it is fine to implement them with a stronger memory barrier than necessary or in a less efficient. This can be refined afterwards.
>>> 
>>> Those helpers can directly be defined in the SMMUv3 drivers so we know they are not for general purpose :).
>>> 
>>>> 1. atomic_set_release
>>> 
>>> This could be implemented as:
>>> 
>>> smp_mb();
>>> atomic_set();
>>> 
>>>> 2. atomic_fetch_andnot_relaxed
>>> 
>>> This would need to be imported.
>>> 
>>>> 3. atomic_cond_read_relaxed
>>> 
>>> This would need to be imported. The simplest version seems to be the generic version provided by include/asm-generic/barrier.h (see smp_cond_load_relaxed).
>>> 
>>>> 4. atomic_long_cond_read_relaxed
>>>> 5. atomic_long_xor
>>> 
>>> The two would require the implementation of atomic64. Volodymyr also required a version. I offered my help, however I didn't find enough time to do it yet :(.
>>> 
>>> For Arm64, it would be possible to do a copy/paste of the existing helpers and replace anything related to a 32-bit register with a 64-bit one.
>>> 
>>> For Arm32, they are a bit more complex because you now need to work with 2 registers.
>>> 
>>> However, for your purpose, you would be using atomic_long_t. So the the Arm64 implementation should be sufficient.
>>> 
>>>> 6. atomic_set_release
>>> 
>>> Same as 1.
>>> 
>>>> 7. atomic_cmpxchg_relaxed might be we can use atomic_cmpxchg that is implemented in XEN need to check.
>>> atomic_cmpxchg() is strongly ordered. So it would be fine to use it for implementing the helper. Although, more inefficient :).
>>> 
>>>> 8. atomic_dec_return_release
>>> 
>>> Xen implements a stronger version atomic_dec_return(). You can re-use it here.
>>> 
>>>> 9. atomic_fetch_inc_relaxed
>>> 
>>> This would need to be imported.
>> 
>> We do agree that this would be the work required but some of the things to be imported have dependencies and this is not
>> a simple change on the patch done by Rahul and it would require to almost restart the implementation and testing from the
>> very beginning.
>> 
>> In the meantime could we process with the review of this SMMUv3 driver and include it in Xen as is ?
>> 
>> We can set me and Rahul as maintainers and flag the driver as experimental in Support.md (it is already
>> protected by the EXPERT configuration in Kconfig).
> 
> I think that is OK as long as you make sure to go through the changelog
> of the Linux driver to make sure we are not missing any bug fixes and
> errata fixes.

Ok Yes when I ported the driver I port the command queue operation from the previous commit where atomic operations is not used and rest all the code is from the latest code. I will again make sure that any bug that is fixed in Linux should be fixed in XEN also.


Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 09:22:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 09:22:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15540.38518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQbe-0003Rg-Cp; Fri, 30 Oct 2020 09:21:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15540.38518; Fri, 30 Oct 2020 09:21:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQbe-0003RZ-9o; Fri, 30 Oct 2020 09:21:58 +0000
Received: by outflank-mailman (input) for mailman id 15540;
 Fri, 30 Oct 2020 09:21:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYQbc-0003RU-PP
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 09:21:56 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a2aa8e4-f54a-4d44-b8d8-9afcc6ec9b9f;
 Fri, 30 Oct 2020 09:21:56 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYQba-00046U-LQ; Fri, 30 Oct 2020 09:21:54 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYQba-0001sV-9C; Fri, 30 Oct 2020 09:21:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kYQbc-0003RU-PP
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 09:21:56 +0000
X-Inumbo-ID: 2a2aa8e4-f54a-4d44-b8d8-9afcc6ec9b9f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 2a2aa8e4-f54a-4d44-b8d8-9afcc6ec9b9f;
	Fri, 30 Oct 2020 09:21:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=C6Qn0UOp1nMlwhAK5tIHIGIr907LmAWq6teNp7OFnvk=; b=BFpSZQw0ufIbCXkYfnb3dOWj+a
	yAq7bRX5S+BI6Ipz4yy28MAPvlxS26VXxAhCHvPNgrpi529GvA/t7WkUCLQPInfCSQJ8Vq+6WNu4R
	3ousHKCvrgF8tIuFXhdGzVLlUPs5Fr+WxrJBLaH03tGAY1JRjyOghNr9hKT3nZmbuVZ0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYQba-00046U-LQ; Fri, 30 Oct 2020 09:21:54 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYQba-0001sV-9C; Fri, 30 Oct 2020 09:21:54 +0000
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <bc697327-2750-9a78-042d-d45690d27928@xen.org>
 <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
 <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
 <1BE06E0F-26CF-453A-BB06-808CC0F3E09B@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <aae5892a-2532-04f8-02af-84c4d4c4f3fd@xen.org>
Date: Fri, 30 Oct 2020 09:21:52 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <1BE06E0F-26CF-453A-BB06-808CC0F3E09B@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 30/10/2020 08:46, Rahul Singh wrote:
> Ok Yes when I ported the driver I port the command queue operation from the previous commit where atomic operations is not used and rest all the code is from the latest code. I will again make sure that any bug that is fixed in Linux should be fixed in XEN also.

I would like to seek some clarification on the code, because there seems 
to be conflicting information in this thread.

The patch (the baseline commit is provided) and the discussion with 
Bertrand suggest that you took a snapshot of the code last year and 
adapted it for Xen.

However, here you suggest that you took a hybrid approach, where part of 
the code is based on last year's snapshot and the other part on the 
latest code (I assume v5.9).

So can you please clarify?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 09:24:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 09:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15538.38530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQdv-0003bY-QZ; Fri, 30 Oct 2020 09:24:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15538.38530; Fri, 30 Oct 2020 09:24:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQdv-0003bR-NL; Fri, 30 Oct 2020 09:24:19 +0000
Received: by outflank-mailman (input) for mailman id 15538;
 Fri, 30 Oct 2020 09:19:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GSW1=EF=st.com=fabrice.gasnier@srs-us1.protection.inumbo.net>)
 id 1kYQZL-0002kg-EY
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 09:19:36 +0000
Received: from mx07-00178001.pphosted.com (unknown [185.132.182.106])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d500efa-764b-4ec6-a616-d248bd240458;
 Fri, 30 Oct 2020 09:19:26 +0000 (UTC)
Received: from pps.filterd (m0046668.ppops.net [127.0.0.1])
 by mx07-00178001.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09U9DAjR017023; Fri, 30 Oct 2020 10:19:25 +0100
Received: from beta.dmz-eu.st.com (beta.dmz-eu.st.com [164.129.1.35])
 by mx07-00178001.pphosted.com with ESMTP id 34ccf45tvx-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 30 Oct 2020 10:19:25 +0100
Received: from euls16034.sgp.st.com (euls16034.sgp.st.com [10.75.44.20])
 by beta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 6C5E110003A;
 Fri, 30 Oct 2020 10:19:22 +0100 (CET)
Received: from Webmail-eu.st.com (sfhdag1node3.st.com [10.75.127.3])
 by euls16034.sgp.st.com (STMicroelectronics) with ESMTP id A4AC4268F39;
 Fri, 30 Oct 2020 10:19:21 +0100 (CET)
Received: from [10.211.1.243] (10.75.127.47) by SFHDAG1NODE3.st.com
 (10.75.127.3) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Fri, 30 Oct
 2020 10:19:14 +0100
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=GSW1=EF=st.com=fabrice.gasnier@srs-us1.protection.inumbo.net>)
	id 1kYQZL-0002kg-EY
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 09:19:36 +0000
X-Inumbo-ID: 8d500efa-764b-4ec6-a616-d248bd240458
Received: from mx07-00178001.pphosted.com (unknown [185.132.182.106])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 8d500efa-764b-4ec6-a616-d248bd240458;
	Fri, 30 Oct 2020 09:19:26 +0000 (UTC)
Received: from pps.filterd (m0046668.ppops.net [127.0.0.1])
	by mx07-00178001.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 09U9DAjR017023;
	Fri, 30 Oct 2020 10:19:25 +0100
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=st.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=STMicroelectronics;
 bh=6yLhdk8lZrEmjVYL4ZEHo2AZzFRZ0s76Zo8thVPKwgo=;
 b=eksgd6bAkZsLAoZxeFKTctoNryOCFfSGI7P9karJsQIvu+M+IWBi2MHBk+mO2EZm5qEM
 OzcE51AKz7YWzGIToUREuz3ldBYfIpysnZr+ipysmTf3AyI2BZkIFrtUZBD+KVDYmQol
 UvOF5LbLJf9HQTQlKkLHtrmJ3QZCaFMcCfQty/gVhgyVPO3GwK6Jx65Lc9DADazZwOrK
 JhsMBzjf9YfuGdMzHAW/JReBBDYzLTORd3dR8o5IkYoLptb0sWEwg6Rw208SYU9sIjgj
 /LYxlwRmf91S0kghIj1BFmRh9GB+QA/MmIX+NuF5+IuWRxwgxGjd/qW2w0nf7iMvw29P zw== 
Received: from beta.dmz-eu.st.com (beta.dmz-eu.st.com [164.129.1.35])
	by mx07-00178001.pphosted.com with ESMTP id 34ccf45tvx-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 30 Oct 2020 10:19:25 +0100
Received: from euls16034.sgp.st.com (euls16034.sgp.st.com [10.75.44.20])
	by beta.dmz-eu.st.com (STMicroelectronics) with ESMTP id 6C5E110003A;
	Fri, 30 Oct 2020 10:19:22 +0100 (CET)
Received: from Webmail-eu.st.com (sfhdag1node3.st.com [10.75.127.3])
	by euls16034.sgp.st.com (STMicroelectronics) with ESMTP id A4AC4268F39;
	Fri, 30 Oct 2020 10:19:21 +0100 (CET)
Received: from [10.211.1.243] (10.75.127.47) by SFHDAG1NODE3.st.com
 (10.75.127.3) with Microsoft SMTP Server (TLS) id 15.0.1473.3; Fri, 30 Oct
 2020 10:19:14 +0100
Subject: Re: [PATCH v2 20/39] docs: ABI: testing: make the files compatible
 with ReST output
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
CC: Linux Doc Mailing List <linux-doc@vger.kernel.org>,
        "Gautham R. Shenoy"
	<ego@linux.vnet.ibm.com>,
        "Jason A. Donenfeld" <Jason@zx2c4.com>,
        =?UTF-8?Q?Javier_Gonz=c3=a1lez?= <javier@javigon.com>,
        Jonathan Corbet
	<corbet@lwn.net>,
        "Martin K. Petersen" <martin.petersen@oracle.com>,
        "Rafael
 J. Wysocki" <rjw@rjwysocki.net>,
        Alexander Shishkin
	<alexander.shishkin@linux.intel.com>,
        Alexandre Belloni
	<alexandre.belloni@bootlin.com>,
        Alexandre Torgue <alexandre.torgue@st.com>,
        Andrew Donnellan <ajd@linux.ibm.com>,
        Andy Shevchenko
	<andriy.shevchenko@linux.intel.com>,
        Baolin Wang <baolin.wang7@gmail.com>,
        Benson Leung <bleung@chromium.org>,
        Boris Ostrovsky
	<boris.ostrovsky@oracle.com>,
        Bruno Meneguele <bmeneg@redhat.com>,
        Chunyan
 Zhang <zhang.lyra@gmail.com>, Dan Murphy <dmurphy@ti.com>,
        Dan Williams
	<dan.j.williams@intel.com>,
        Enric Balletbo i Serra
	<enric.balletbo@collabora.com>,
        Felipe Balbi <balbi@kernel.org>,
        Frederic
 Barrat <fbarrat@linux.ibm.com>,
        Greg Kroah-Hartman
	<gregkh@linuxfoundation.org>,
        Guenter Roeck <groeck@chromium.org>,
        Hanjun Guo
	<guohanjun@huawei.com>,
        Heikki Krogerus <heikki.krogerus@linux.intel.com>,
        Jens Axboe <axboe@kernel.dk>,
        Johannes Thumshirn
	<johannes.thumshirn@wdc.com>,
        Jonathan Cameron <jic23@kernel.org>, Juergen
 Gross <jgross@suse.com>,
        Konstantin Khlebnikov <koct9i@gmail.com>,
        Kranthi
 Kuntala <kranthi.kuntala@intel.com>,
        Lakshmi Ramasubramanian
	<nramas@linux.microsoft.com>,
        Lars-Peter Clausen <lars@metafoo.de>, Len Brown
	<lenb@kernel.org>,
        Leonid Maksymchuk <leonmaxx@gmail.com>,
        Ludovic Desroches
	<ludovic.desroches@microchip.com>,
        Mario Limonciello
	<mario.limonciello@dell.com>,
        Mark Gross <mgross@linux.intel.com>,
        Maxime
 Coquelin <mcoquelin.stm32@gmail.com>,
        Michael Ellerman <mpe@ellerman.id.au>,
        Mika Westerberg <mika.westerberg@linux.intel.com>,
        Mike Kravetz
	<mike.kravetz@oracle.com>,
        Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain
	<nayna@linux.ibm.com>,
        Nicolas Ferre <nicolas.ferre@microchip.com>,
        Niklas
 Cassel <niklas.cassel@wdc.com>,
        Oded Gabbay <oded.gabbay@gmail.com>,
        Oleh
 Kravchenko <oleg@kaa.org.ua>, Orson Zhai <orsonzhai@gmail.com>,
        Pavel Machek
	<pavel@ucw.cz>,
        Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
        Peter
 Meerwald-Stadler <pmeerw@pmeerw.net>,
        Peter Rosin <peda@axentia.se>, Petr
 Mladek <pmladek@suse.com>,
        Philippe Bergheaud <felix@linux.ibm.com>,
        Richard
 Cochran <richardcochran@gmail.com>,
        Sebastian Reichel <sre@kernel.org>,
        Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
        Thomas
 Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>,
        Vaibhav Jain
	<vaibhav@linux.ibm.com>,
        Vineela Tummalapalli
	<vineela.tummalapalli@intel.com>,
        Vishal Verma <vishal.l.verma@intel.com>, <linux-acpi@vger.kernel.org>,
        <linux-arm-kernel@lists.infradead.org>, <linux-iio@vger.kernel.org>,
        <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
        <linux-pm@vger.kernel.org>, <linux-stm32@st-md-mailman.stormreply.com>,
        <linux-usb@vger.kernel.org>, <linuxppc-dev@lists.ozlabs.org>,
        <netdev@vger.kernel.org>, <xen-devel@lists.xenproject.org>,
        Jonathan Cameron
	<Jonathan.Cameron@huawei.com>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
 <58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
From: Fabrice Gasnier <fabrice.gasnier@st.com>
Message-ID: <5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
Date: Fri, 30 Oct 2020 10:19:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Originating-IP: [10.75.127.47]
X-ClientProxiedBy: SFHDAG3NODE1.st.com (10.75.127.7) To SFHDAG1NODE3.st.com
 (10.75.127.3)
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-29_12:2020-10-29,2020-10-29 signatures=0

On 10/30/20 8:40 AM, Mauro Carvalho Chehab wrote:
> Some files over there won't parse well by Sphinx.
> 
> Fix them.
> 
> Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> # for IIO
> Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
> ---
>  .../ABI/testing/configfs-spear-pcie-gadget    |  36 +--
>  Documentation/ABI/testing/configfs-usb-gadget |  83 +++---
>  .../ABI/testing/configfs-usb-gadget-hid       |  10 +-
>  .../ABI/testing/configfs-usb-gadget-rndis     |  16 +-
>  .../ABI/testing/configfs-usb-gadget-uac1      |  18 +-
>  .../ABI/testing/configfs-usb-gadget-uvc       | 220 +++++++++-------
>  Documentation/ABI/testing/debugfs-ec          |  11 +-
>  Documentation/ABI/testing/debugfs-pktcdvd     |  11 +-
>  Documentation/ABI/testing/dev-kmsg            |  27 +-
>  Documentation/ABI/testing/evm                 |  17 +-
>  Documentation/ABI/testing/ima_policy          |  30 ++-
>  Documentation/ABI/testing/procfs-diskstats    |  40 +--
>  Documentation/ABI/testing/sysfs-block         |  38 +--
>  Documentation/ABI/testing/sysfs-block-device  |   2 +
>  Documentation/ABI/testing/sysfs-bus-acpi      |  18 +-
>  .../sysfs-bus-event_source-devices-format     |   3 +-
>  .../ABI/testing/sysfs-bus-i2c-devices-pca954x |  27 +-
>  Documentation/ABI/testing/sysfs-bus-iio       |  11 +
>  .../sysfs-bus-iio-adc-envelope-detector       |   5 +-
>  .../ABI/testing/sysfs-bus-iio-cros-ec         |   2 +-
>  .../ABI/testing/sysfs-bus-iio-dfsdm-adc-stm32 |   8 +-
>  .../ABI/testing/sysfs-bus-iio-lptimer-stm32   |  29 ++-
>  .../sysfs-bus-iio-magnetometer-hmc5843        |  19 +-
>  .../sysfs-bus-iio-temperature-max31856        |  19 +-
>  .../ABI/testing/sysfs-bus-iio-timer-stm32     | 137 ++++++----
>  .../testing/sysfs-bus-intel_th-devices-msc    |   4 +
>  Documentation/ABI/testing/sysfs-bus-nfit      |   2 +-
>  .../testing/sysfs-bus-pci-devices-aer_stats   | 119 +++++----
>  Documentation/ABI/testing/sysfs-bus-rapidio   |  23 +-
>  .../ABI/testing/sysfs-bus-thunderbolt         |  40 +--
>  Documentation/ABI/testing/sysfs-bus-usb       |  30 ++-
>  .../testing/sysfs-bus-usb-devices-usbsevseg   |   7 +-
>  Documentation/ABI/testing/sysfs-bus-vfio-mdev |  10 +-
>  Documentation/ABI/testing/sysfs-class-cxl     |  15 +-
>  Documentation/ABI/testing/sysfs-class-led     |   2 +-
>  .../testing/sysfs-class-led-driver-el15203000 | 229 ++++++++---------
>  .../ABI/testing/sysfs-class-led-driver-sc27xx |   4 +-
>  Documentation/ABI/testing/sysfs-class-mic     |  52 ++--
>  Documentation/ABI/testing/sysfs-class-ocxl    |   3 +
>  Documentation/ABI/testing/sysfs-class-power   |  73 +++++-
>  .../ABI/testing/sysfs-class-power-twl4030     |  33 +--
>  Documentation/ABI/testing/sysfs-class-rc      |  30 ++-
>  .../ABI/testing/sysfs-class-scsi_host         |   7 +-
>  Documentation/ABI/testing/sysfs-class-typec   |  12 +-
>  .../testing/sysfs-devices-platform-ACPI-TAD   |   4 +
>  .../ABI/testing/sysfs-devices-platform-docg3  |  10 +-
>  .../sysfs-devices-platform-sh_mobile_lcdc_fb  |   8 +-
>  .../ABI/testing/sysfs-devices-system-cpu      |  99 +++++---
>  .../ABI/testing/sysfs-devices-system-ibm-rtl  |   6 +-
>  .../testing/sysfs-driver-bd9571mwv-regulator  |   4 +
>  Documentation/ABI/testing/sysfs-driver-genwqe |  11 +-
>  .../testing/sysfs-driver-hid-logitech-lg4ff   |  18 +-
>  .../ABI/testing/sysfs-driver-hid-wiimote      |  11 +-
>  .../ABI/testing/sysfs-driver-samsung-laptop   |  13 +-
>  .../ABI/testing/sysfs-driver-toshiba_acpi     |  26 ++
>  .../ABI/testing/sysfs-driver-toshiba_haps     |   2 +
>  Documentation/ABI/testing/sysfs-driver-wacom  |   4 +-
>  Documentation/ABI/testing/sysfs-firmware-acpi | 237 +++++++++---------
>  .../ABI/testing/sysfs-firmware-dmi-entries    |  50 ++--
>  Documentation/ABI/testing/sysfs-firmware-gsmi |   2 +-
>  .../ABI/testing/sysfs-firmware-memmap         |  16 +-
>  Documentation/ABI/testing/sysfs-fs-ext4       |   4 +-
>  .../ABI/testing/sysfs-hypervisor-xen          |  13 +-
>  .../ABI/testing/sysfs-kernel-boot_params      |  23 +-
>  .../ABI/testing/sysfs-kernel-mm-hugepages     |  12 +-
>  .../ABI/testing/sysfs-platform-asus-laptop    |  21 +-
>  .../ABI/testing/sysfs-platform-asus-wmi       |   1 +
>  Documentation/ABI/testing/sysfs-platform-at91 |  10 +-
>  .../ABI/testing/sysfs-platform-eeepc-laptop   |  14 +-
>  .../ABI/testing/sysfs-platform-ideapad-laptop |   9 +-
>  .../sysfs-platform-intel-wmi-thunderbolt      |   1 +
>  .../ABI/testing/sysfs-platform-sst-atom       |  13 +-
>  .../ABI/testing/sysfs-platform-usbip-vudc     |  11 +-
>  Documentation/ABI/testing/sysfs-ptp           |   2 +-
>  74 files changed, 1322 insertions(+), 865 deletions(-)
> 

Hi Mauro,

[...]

>  
> +What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> +KernelVersion:	4.12
> +Contact:	benjamin.gaignard@st.com
> +Description:
> +		Reading returns the list of possible quadrature modes.
> +
> +What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
> +KernelVersion:	4.12
> +Contact:	benjamin.gaignard@st.com
> +Description:
> +		Configure the device counter quadrature modes:
> +
> +		channel_A:
> +			Encoder A input serves as the count input and B as
> +			the UP/DOWN direction control input.
> +
> +		channel_B:
> +			Encoder B input serves as the count input and A as
> +			the UP/DOWN direction control input.
> +
> +		quadrature:
> +			Encoder A and B inputs are mixed to get direction
> +			and count with a scale of 0.25.
> +

I just noticed this, following Jonathan's question on v1.

The above ABI was moved in the past, as discussed in [1]. You can take a
look at:
b299d00 IIO: stm32: Remove quadrature related functions from trigger driver

Could you please remove the above chunk?

With that, for the stm32 part:
Acked-by: Fabrice Gasnier <fabrice.gasnier@st.com>

[1] https://lkml.org/lkml/2019/5/7/698

Best Regards,
Fabrice

>  What:		/sys/bus/iio/devices/iio:deviceX/in_count_enable_mode_available
>  KernelVersion:	4.12
>  Contact:	benjamin.gaignard@st.com
> @@ -104,6 +146,7 @@ Description:
>  		Configure the device counter enable modes, in all case
>  		counting direction is set by in_count0_count_direction
>  		attribute and the counter is clocked by the internal clock.
> +
>  		always:
>  			Counter is always ON.
>  
> diff --git a/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc b/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
> index 7fd2601c2831..a74252e580a5 100644
> --- a/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
> +++ b/Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
> @@ -9,11 +9,13 @@ Date:		June 2015
>  KernelVersion:	4.3
>  Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
>  Description:	(RW) Configure MSC operating mode:
> +
>  		  - "single", for contiguous buffer mode (high-order alloc);
>  		  - "multi", for multiblock mode;
>  		  - "ExI", for DCI handler mode;
>  		  - "debug", for debug mode;
>  		  - any of the currently loaded buffer sinks.
> +
>  		If operating mode changes, existing buffer is deallocated,
>  		provided there are no active users and tracing is not enabled,
>  		otherwise the write will fail.
> @@ -23,10 +25,12 @@ Date:		June 2015
>  KernelVersion:	4.3
>  Contact:	Alexander Shishkin <alexander.shishkin@linux.intel.com>
>  Description:	(RW) Configure MSC buffer size for "single" or "multi" modes.
> +
>  		In single mode, this is a single number of pages, has to be
>  		power of 2. In multiblock mode, this is a comma-separated list
>  		of numbers of pages for each window to be allocated. Number of
>  		windows is not limited.
> +
>  		Writing to this file deallocates existing buffer (provided
>  		there are no active users and tracing is not enabled) and then
>  		allocates a new one.
> diff --git a/Documentation/ABI/testing/sysfs-bus-nfit b/Documentation/ABI/testing/sysfs-bus-nfit
> index e4f76e7eab93..63ef0b9ecce7 100644
> --- a/Documentation/ABI/testing/sysfs-bus-nfit
> +++ b/Documentation/ABI/testing/sysfs-bus-nfit
> @@ -1,4 +1,4 @@
> -For all of the nmem device attributes under nfit/*, see the 'NVDIMM Firmware
> +For all of the nmem device attributes under ``nfit/*``, see the 'NVDIMM Firmware
>  Interface Table (NFIT)' section in the ACPI specification
>  (http://www.uefi.org/specifications) for more details.
>  
> diff --git a/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats b/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
> index 3c9a8c4a25eb..860db53037a5 100644
> --- a/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
> +++ b/Documentation/ABI/testing/sysfs-bus-pci-devices-aer_stats
> @@ -1,6 +1,6 @@
> -==========================
>  PCIe Device AER statistics
> -==========================
> +--------------------------
> +
>  These attributes show up under all the devices that are AER capable. These
>  statistical counters indicate the errors "as seen/reported by the device".
>  Note that this may mean that if an endpoint is causing problems, the AER
> @@ -17,19 +17,18 @@ Description:	List of correctable errors seen and reported by this
>  		PCI device using ERR_COR. Note that since multiple errors may
>  		be reported using a single ERR_COR message, thus
>  		TOTAL_ERR_COR at the end of the file may not match the actual
> -		total of all the errors in the file. Sample output:
> --------------------------------------------------------------------------
> -localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_correctable
> -Receiver Error 2
> -Bad TLP 0
> -Bad DLLP 0
> -RELAY_NUM Rollover 0
> -Replay Timer Timeout 0
> -Advisory Non-Fatal 0
> -Corrected Internal Error 0
> -Header Log Overflow 0
> -TOTAL_ERR_COR 2
> --------------------------------------------------------------------------
> +		total of all the errors in the file. Sample output::
> +
> +		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_correctable
> +		    Receiver Error 2
> +		    Bad TLP 0
> +		    Bad DLLP 0
> +		    RELAY_NUM Rollover 0
> +		    Replay Timer Timeout 0
> +		    Advisory Non-Fatal 0
> +		    Corrected Internal Error 0
> +		    Header Log Overflow 0
> +		    TOTAL_ERR_COR 2
>  
>  What:		/sys/bus/pci/devices/<dev>/aer_dev_fatal
>  Date:		July 2018
> @@ -39,28 +38,27 @@ Description:	List of uncorrectable fatal errors seen and reported by this
>  		PCI device using ERR_FATAL. Note that since multiple errors may
>  		be reported using a single ERR_FATAL message, thus
>  		TOTAL_ERR_FATAL at the end of the file may not match the actual
> -		total of all the errors in the file. Sample output:
> --------------------------------------------------------------------------
> -localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_fatal
> -Undefined 0
> -Data Link Protocol 0
> -Surprise Down Error 0
> -Poisoned TLP 0
> -Flow Control Protocol 0
> -Completion Timeout 0
> -Completer Abort 0
> -Unexpected Completion 0
> -Receiver Overflow 0
> -Malformed TLP 0
> -ECRC 0
> -Unsupported Request 0
> -ACS Violation 0
> -Uncorrectable Internal Error 0
> -MC Blocked TLP 0
> -AtomicOp Egress Blocked 0
> -TLP Prefix Blocked Error 0
> -TOTAL_ERR_FATAL 0
> --------------------------------------------------------------------------
> +		total of all the errors in the file. Sample output::
> +
> +		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_fatal
> +		    Undefined 0
> +		    Data Link Protocol 0
> +		    Surprise Down Error 0
> +		    Poisoned TLP 0
> +		    Flow Control Protocol 0
> +		    Completion Timeout 0
> +		    Completer Abort 0
> +		    Unexpected Completion 0
> +		    Receiver Overflow 0
> +		    Malformed TLP 0
> +		    ECRC 0
> +		    Unsupported Request 0
> +		    ACS Violation 0
> +		    Uncorrectable Internal Error 0
> +		    MC Blocked TLP 0
> +		    AtomicOp Egress Blocked 0
> +		    TLP Prefix Blocked Error 0
> +		    TOTAL_ERR_FATAL 0
>  
>  What:		/sys/bus/pci/devices/<dev>/aer_dev_nonfatal
>  Date:		July 2018
> @@ -70,32 +68,31 @@ Description:	List of uncorrectable nonfatal errors seen and reported by this
>  		PCI device using ERR_NONFATAL. Note that since multiple errors
>  		may be reported using a single ERR_FATAL message, thus
>  		TOTAL_ERR_NONFATAL at the end of the file may not match the
> -		actual total of all the errors in the file. Sample output:
> --------------------------------------------------------------------------
> -localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_nonfatal
> -Undefined 0
> -Data Link Protocol 0
> -Surprise Down Error 0
> -Poisoned TLP 0
> -Flow Control Protocol 0
> -Completion Timeout 0
> -Completer Abort 0
> -Unexpected Completion 0
> -Receiver Overflow 0
> -Malformed TLP 0
> -ECRC 0
> -Unsupported Request 0
> -ACS Violation 0
> -Uncorrectable Internal Error 0
> -MC Blocked TLP 0
> -AtomicOp Egress Blocked 0
> -TLP Prefix Blocked Error 0
> -TOTAL_ERR_NONFATAL 0
> --------------------------------------------------------------------------
> +		actual total of all the errors in the file. Sample output::
> +
> +		    localhost /sys/devices/pci0000:00/0000:00:1c.0 # cat aer_dev_nonfatal
> +		    Undefined 0
> +		    Data Link Protocol 0
> +		    Surprise Down Error 0
> +		    Poisoned TLP 0
> +		    Flow Control Protocol 0
> +		    Completion Timeout 0
> +		    Completer Abort 0
> +		    Unexpected Completion 0
> +		    Receiver Overflow 0
> +		    Malformed TLP 0
> +		    ECRC 0
> +		    Unsupported Request 0
> +		    ACS Violation 0
> +		    Uncorrectable Internal Error 0
> +		    MC Blocked TLP 0
> +		    AtomicOp Egress Blocked 0
> +		    TLP Prefix Blocked Error 0
> +		    TOTAL_ERR_NONFATAL 0
>  
> -============================
>  PCIe Rootport AER statistics
> -============================
> +----------------------------
> +
>  These attributes show up under only the rootports (or root complex event
>  collectors) that are AER capable. These indicate the number of error messages as
>  "reported to" the rootport. Please note that the rootports also transmit
> diff --git a/Documentation/ABI/testing/sysfs-bus-rapidio b/Documentation/ABI/testing/sysfs-bus-rapidio
> index 13208b27dd87..634ea207a50a 100644
> --- a/Documentation/ABI/testing/sysfs-bus-rapidio
> +++ b/Documentation/ABI/testing/sysfs-bus-rapidio
> @@ -4,24 +4,27 @@ Description:
>  		an individual subdirectory with the following name format of
>  		device_name "nn:d:iiii", where:
>  
> -		nn   - two-digit hexadecimal ID of RapidIO network where the
> +		====   ========================================================
> +		nn     two-digit hexadecimal ID of RapidIO network where the
>  		       device resides
> -		d    - device type: 'e' - for endpoint or 's' - for switch
> -		iiii - four-digit device destID for endpoints, or switchID for
> +		d      device type: 'e' - for endpoint or 's' - for switch
> +		iiii   four-digit device destID for endpoints, or switchID for
>  		       switches
> +		====   ========================================================
>  
>  		For example, below is a list of device directories that
>  		represents a typical RapidIO network with one switch, one host,
>  		and two agent endpoints, as it is seen by the enumerating host
> -		(with destID = 1):
> +		(with destID = 1)::
>  
> -		/sys/bus/rapidio/devices/00:e:0000
> -		/sys/bus/rapidio/devices/00:e:0002
> -		/sys/bus/rapidio/devices/00:s:0001
> +		  /sys/bus/rapidio/devices/00:e:0000
> +		  /sys/bus/rapidio/devices/00:e:0002
> +		  /sys/bus/rapidio/devices/00:s:0001
>  
> -		NOTE: An enumerating or discovering endpoint does not create a
> -		sysfs entry for itself, this is why an endpoint with destID=1 is
> -		not shown in the list.
> +		NOTE:
> +		  An enumerating or discovering endpoint does not create a
> +		  sysfs entry for itself, this is why an endpoint with destID=1
> +		  is not shown in the list.
>  
>  Attributes Common for All RapidIO Devices
>  -----------------------------------------
> diff --git a/Documentation/ABI/testing/sysfs-bus-thunderbolt b/Documentation/ABI/testing/sysfs-bus-thunderbolt
> index dd565c378b40..171127294674 100644
> --- a/Documentation/ABI/testing/sysfs-bus-thunderbolt
> +++ b/Documentation/ABI/testing/sysfs-bus-thunderbolt
> @@ -37,16 +37,18 @@ Contact:	thunderbolt-software@lists.01.org
>  Description:	This attribute holds current Thunderbolt security level
>  		set by the system BIOS. Possible values are:
>  
> -		none: All devices are automatically authorized
> -		user: Devices are only authorized based on writing
> -		      appropriate value to the authorized attribute
> -		secure: Require devices that support secure connect at
> -			minimum. User needs to authorize each device.
> -		dponly: Automatically tunnel Display port (and USB). No
> -			PCIe tunnels are created.
> -		usbonly: Automatically tunnel USB controller of the
> +		=======  ==================================================
> +		none     All devices are automatically authorized
> +		user     Devices are only authorized based on writing
> +		         appropriate value to the authorized attribute
> +		secure   Require devices that support secure connect at
> +			 minimum. User needs to authorize each device.
> +		dponly   Automatically tunnel Display port (and USB). No
> +			 PCIe tunnels are created.
> +		usbonly  Automatically tunnel USB controller of the
>  			 connected Thunderbolt dock (and Display Port). All
>  			 PCIe links downstream of the dock are removed.
> +		=======  ==================================================
>  
>  What: /sys/bus/thunderbolt/devices/.../authorized
>  Date:		Sep 2017
> @@ -61,17 +63,23 @@ Description:	This attribute is used to authorize Thunderbolt devices
>  		yet authorized.
>  
>  		Possible values are supported:
> -		1: The device will be authorized and connected
> +
> +		==  ===========================================
> +		1   The device will be authorized and connected
> +		==  ===========================================
>  
>  		When key attribute contains 32 byte hex string the possible
>  		values are:
> -		1: The 32 byte hex string is added to the device NVM and
> -		   the device is authorized.
> -		2: Send a challenge based on the 32 byte hex string. If the
> -		   challenge response from device is valid, the device is
> -		   authorized. In case of failure errno will be ENOKEY if
> -		   the device did not contain a key at all, and
> -		   EKEYREJECTED if the challenge response did not match.
> +
> +		==  ========================================================
> +		1   The 32 byte hex string is added to the device NVM and
> +		    the device is authorized.
> +		2   Send a challenge based on the 32 byte hex string. If the
> +		    challenge response from device is valid, the device is
> +		    authorized. In case of failure errno will be ENOKEY if
> +		    the device did not contain a key at all, and
> +		    EKEYREJECTED if the challenge response did not match.
> +		==  ========================================================
>  
>  What: /sys/bus/thunderbolt/devices/.../boot
>  Date:		Jun 2018
> diff --git a/Documentation/ABI/testing/sysfs-bus-usb b/Documentation/ABI/testing/sysfs-bus-usb
> index 614d216dff1d..e449b8374f6a 100644
> --- a/Documentation/ABI/testing/sysfs-bus-usb
> +++ b/Documentation/ABI/testing/sysfs-bus-usb
> @@ -72,24 +72,27 @@ Description:
>  		table at compile time. The format for the device ID is:
>  		idVendor idProduct bInterfaceClass RefIdVendor RefIdProduct
>  		The vendor ID and device ID fields are required, the
> -		rest is optional. The Ref* tuple can be used to tell the
> +		rest is optional. The `Ref*` tuple can be used to tell the
>  		driver to use the same driver_data for the new device as
>  		it is used for the reference device.
>  		Upon successfully adding an ID, the driver will probe
> -		for the device and attempt to bind to it.  For example:
> -		# echo "8086 10f5" > /sys/bus/usb/drivers/foo/new_id
> +		for the device and attempt to bind to it.  For example::
> +
> +		  # echo "8086 10f5" > /sys/bus/usb/drivers/foo/new_id
>  
>  		Here add a new device (0458:7045) using driver_data from
> -		an already supported device (0458:704c):
> -		# echo "0458 7045 0 0458 704c" > /sys/bus/usb/drivers/foo/new_id
> +		an already supported device (0458:704c)::
> +
> +		  # echo "0458 7045 0 0458 704c" > /sys/bus/usb/drivers/foo/new_id
>  
>  		Reading from this file will list all dynamically added
>  		device IDs in the same format, with one entry per
> -		line. For example:
> -		# cat /sys/bus/usb/drivers/foo/new_id
> -		8086 10f5
> -		dead beef 06
> -		f00d cafe
> +		line. For example::
> +
> +		  # cat /sys/bus/usb/drivers/foo/new_id
> +		  8086 10f5
> +		  dead beef 06
> +		  f00d cafe
>  
>  		The list will be truncated at PAGE_SIZE bytes due to
>  		sysfs restrictions.
> @@ -209,6 +212,7 @@ Description:
>  		advance, and behaves well according to the specification.
>  		This attribute is a bit-field that controls the behavior of
>  		a specific port:
> +
>  		 - Bit 0 of this field selects the "old" enumeration scheme,
>  		   as it is considerably faster (it only causes one USB reset
>  		   instead of 2).
> @@ -233,10 +237,10 @@ Description:
>  		poll() for monitoring changes to this value in user space.
>  
>  		Any time this value changes the corresponding hub device will send a
> -		udev event with the following attributes:
> +		udev event with the following attributes::
>  
> -		OVER_CURRENT_PORT=/sys/bus/usb/devices/.../(hub interface)/portX
> -		OVER_CURRENT_COUNT=[current value of this sysfs attribute]
> +		  OVER_CURRENT_PORT=/sys/bus/usb/devices/.../(hub interface)/portX
> +		  OVER_CURRENT_COUNT=[current value of this sysfs attribute]
>  
>  What:		/sys/bus/usb/devices/.../(hub interface)/portX/usb3_lpm_permit
>  Date:		November 2015
> diff --git a/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg b/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
> index 9ade80f81f96..2f86e4223bfc 100644
> --- a/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
> +++ b/Documentation/ABI/testing/sysfs-bus-usb-devices-usbsevseg
> @@ -12,8 +12,11 @@ KernelVersion:	2.6.26
>  Contact:	Harrison Metzger <harrisonmetz@gmail.com>
>  Description:	Controls the devices display mode.
>  		For a 6 character display the values are
> +
>  			MSB 0x06; LSB 0x3F, and
> +
>  		for an 8 character display the values are
> +
>  			MSB 0x08; LSB 0xFF.
>  
>  What:		/sys/bus/usb/.../textmode
> @@ -37,7 +40,7 @@ KernelVersion:	2.6.26
>  Contact:	Harrison Metzger <harrisonmetz@gmail.com>
>  Description:	Controls the decimal places on the device.
>  		To set the nth decimal place, give this field
> -		the value of 10 ** n. Assume this field has
> +		the value of ``10 ** n``. Assume this field has
>  		the value k and has 1 or more decimal places set,
>  		to set the mth place (where m is not already set),
> -		change this fields value to k + 10 ** m.
> +		change this fields value to ``k + 10 ** m``.
> diff --git a/Documentation/ABI/testing/sysfs-bus-vfio-mdev b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> index 452dbe39270e..59fc804265db 100644
> --- a/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> +++ b/Documentation/ABI/testing/sysfs-bus-vfio-mdev
> @@ -28,8 +28,9 @@ Description:
>  		Writing UUID to this file will create mediated device of
>  		type <type-id> for parent device <device>. This is a
>  		write-only file.
> -		For example:
> -		# echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001" >	\
> +		For example::
> +
> +		  # echo "83b8f4f2-509f-382f-3c1e-e6bfe0fa1001" >	\
>  		       /sys/devices/foo/mdev_supported_types/foo-1/create
>  
>  What:           /sys/.../mdev_supported_types/<type-id>/devices/
> @@ -107,5 +108,6 @@ Description:
>  		Writing '1' to this file destroys the mediated device. The
>  		vendor driver can fail the remove() callback if that device
>  		is active and the vendor driver doesn't support hot unplug.
> -		Example:
> -		# echo 1 > /sys/bus/mdev/devices/<UUID>/remove
> +		Example::
> +
> +		  # echo 1 > /sys/bus/mdev/devices/<UUID>/remove
> diff --git a/Documentation/ABI/testing/sysfs-class-cxl b/Documentation/ABI/testing/sysfs-class-cxl
> index 7970e3713e70..a6f51a104c44 100644
> --- a/Documentation/ABI/testing/sysfs-class-cxl
> +++ b/Documentation/ABI/testing/sysfs-class-cxl
> @@ -72,11 +72,16 @@ Description:    read/write
>                  when performing the START_WORK ioctl. Only applicable when
>                  running under hashed page table mmu.
>                  Possible values:
> -                        none: No prefaulting (default)
> -                        work_element_descriptor: Treat the work element
> -                                 descriptor as an effective address and
> -                                 prefault what it points to.
> -                        all: all segments process calling START_WORK maps.
> +
> +                =======================  ======================================
> +		none			 No prefaulting (default)
> +		work_element_descriptor  Treat the work element
> +					 descriptor as an effective address and
> +					 prefault what it points to.
> +                all			 all segments process calling
> +					 START_WORK maps.
> +                =======================  ======================================
> +
>  Users:		https://github.com/ibm-capi/libcxl
>  
>  What:           /sys/class/cxl/<afu>/reset
> diff --git a/Documentation/ABI/testing/sysfs-class-led b/Documentation/ABI/testing/sysfs-class-led
> index 5f67f7ab277b..65e040978f73 100644
> --- a/Documentation/ABI/testing/sysfs-class-led
> +++ b/Documentation/ABI/testing/sysfs-class-led
> @@ -50,7 +50,7 @@ Description:
>  		You can change triggers in a similar manner to the way an IO
>  		scheduler is chosen. Trigger specific parameters can appear in
>  		/sys/class/leds/<led> once a given trigger is selected. For
> -		their documentation see sysfs-class-led-trigger-*.
> +		their documentation see `sysfs-class-led-trigger-*`.
>  
>  What:		/sys/class/leds/<led>/inverted
>  Date:		January 2011
> diff --git a/Documentation/ABI/testing/sysfs-class-led-driver-el15203000 b/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
> index f520ece9b64c..69befe947d7e 100644
> --- a/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
> +++ b/Documentation/ABI/testing/sysfs-class-led-driver-el15203000
> @@ -6,127 +6,132 @@ Description:
>  		The LEDs board supports only predefined patterns by firmware
>  		for specific LEDs.
>  
> -		Breathing mode for Screen frame light tube:
> -		"0 4000 1 4000"
> +		Breathing mode for Screen frame light tube::
>  
> -		    ^
> -		    |
> -		Max-|     ---
> -		    |    /   \
> -		    |   /     \
> -		    |  /       \     /
> -		    | /         \   /
> -		Min-|-           ---
> -		    |
> -		    0------4------8--> time (sec)
> +		    "0 4000 1 4000"
>  
> -		Cascade mode for Pipe LED:
> -		"1 800 2 800 4 800 8 800 16 800"
> +			^
> +			|
> +		    Max-|     ---
> +			|    /   \
> +			|   /     \
> +			|  /       \     /
> +			| /         \   /
> +		    Min-|-           ---
> +			|
> +			0------4------8--> time (sec)
>  
> -		      ^
> -		      |
> -		0 On -|----+                   +----+                   +---
> -		      |    |                   |    |                   |
> -		  Off-|    +-------------------+    +-------------------+
> -		      |
> -		1 On -|    +----+                   +----+
> -		      |    |    |                   |    |
> -		  Off |----+    +-------------------+    +------------------
> -		      |
> -		2 On -|         +----+                   +----+
> -		      |         |    |                   |    |
> -		  Off-|---------+    +-------------------+    +-------------
> -		      |
> -		3 On -|              +----+                   +----+
> -		      |              |    |                   |    |
> -		  Off-|--------------+    +-------------------+    +--------
> -		      |
> -		4 On -|                   +----+                   +----+
> -		      |                   |    |                   |    |
> -		  Off-|-------------------+    +-------------------+    +---
> -		      |
> -		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
> +		Cascade mode for Pipe LED::
>  
> -		Inverted cascade mode for Pipe LED:
> -		"30 800 29 800 27 800 23 800 15 800"
> +		    "1 800 2 800 4 800 8 800 16 800"
>  
> -		      ^
> -		      |
> -		0 On -|    +-------------------+    +-------------------+
> -		      |    |                   |    |                   |
> -		  Off-|----+                   +----+                   +---
> -		      |
> -		1 On -|----+    +-------------------+    +------------------
> -		      |    |    |                   |    |
> -		  Off |    +----+                   +----+
> -		      |
> -		2 On -|---------+    +-------------------+    +-------------
> -		      |         |    |                   |    |
> -		  Off-|         +----+                   +----+
> -		      |
> -		3 On -|--------------+    +-------------------+    +--------
> -		      |              |    |                   |    |
> -		  Off-|              +----+                   +----+
> -		      |
> -		4 On -|-------------------+    +-------------------+    +---
> -		      |                   |    |                   |    |
> -		  Off-|                   +----+                   +----+
> -		      |
> -		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
> +			^
> +			|
> +		    0 On -|----+                   +----+                   +---
> +			|    |                   |    |                   |
> +		    Off-|    +-------------------+    +-------------------+
> +			|
> +		    1 On -|    +----+                   +----+
> +			|    |    |                   |    |
> +		    Off |----+    +-------------------+    +------------------
> +			|
> +		    2 On -|         +----+                   +----+
> +			|         |    |                   |    |
> +		    Off-|---------+    +-------------------+    +-------------
> +			|
> +		    3 On -|              +----+                   +----+
> +			|              |    |                   |    |
> +		    Off-|--------------+    +-------------------+    +--------
> +			|
> +		    4 On -|                   +----+                   +----+
> +			|                   |    |                   |    |
> +		    Off-|-------------------+    +-------------------+    +---
> +			|
> +			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
>  
> -		Bounce mode for Pipe LED:
> -		"1 800 2 800 4 800 8 800 16 800 16 800 8 800 4 800 2 800 1 800"
> +		Inverted cascade mode for Pipe LED::
>  
> -		      ^
> -		      |
> -		0 On -|----+                                       +--------
> -		      |    |                                       |
> -		  Off-|    +---------------------------------------+
> -		      |
> -		1 On -|    +----+                             +----+
> -		      |    |    |                             |    |
> -		  Off |----+    +-----------------------------+    +--------
> -		      |
> -		2 On -|         +----+                   +----+
> -		      |         |    |                   |    |
> -		  Off-|---------+    +-------------------+    +-------------
> -		      |
> -		3 On -|              +----+         +----+
> -		      |              |    |         |    |
> -		  Off-|--------------+    +---------+    +------------------
> -		      |
> -		4 On -|                   +---------+
> -		      |                   |         |
> -		  Off-|-------------------+         +-----------------------
> -		      |
> -		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
> +		    "30 800 29 800 27 800 23 800 15 800"
>  
> -		Inverted bounce mode for Pipe LED:
> -		"30 800 29 800 27 800 23 800 15 800 15 800 23 800 27 800 29 800 30 800"
> +			^
> +			|
> +		    0 On -|    +-------------------+    +-------------------+
> +			|    |                   |    |                   |
> +		    Off-|----+                   +----+                   +---
> +			|
> +		    1 On -|----+    +-------------------+    +------------------
> +			|    |    |                   |    |
> +		    Off |    +----+                   +----+
> +			|
> +		    2 On -|---------+    +-------------------+    +-------------
> +			|         |    |                   |    |
> +		    Off-|         +----+                   +----+
> +			|
> +		    3 On -|--------------+    +-------------------+    +--------
> +			|              |    |                   |    |
> +		    Off-|              +----+                   +----+
> +			|
> +		    4 On -|-------------------+    +-------------------+    +---
> +			|                   |    |                   |    |
> +		    Off-|                   +----+                   +----+
> +			|
> +			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
>  
> -		      ^
> -		      |
> -		0 On -|    +---------------------------------------+
> -		      |    |                                       |
> -		  Off-|----+                                       +--------
> -		      |
> -		1 On -|----+    +-----------------------------+    +--------
> -		      |    |    |                             |    |
> -		  Off |    +----+                             +----+
> -		      |
> -		2 On -|---------+    +-------------------+    +-------------
> -		      |         |    |                   |    |
> -		  Off-|         +----+                   +----+
> -		      |
> -		3 On -|--------------+    +---------+    +------------------
> -		      |              |    |         |    |
> -		  Off-|              +----+         +----+
> -		      |
> -		4 On -|-------------------+         +-----------------------
> -		      |                   |         |
> -		  Off-|                   +---------+
> -		      |
> -		      0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
> +		Bounce mode for Pipe LED::
> +
> +		    "1 800 2 800 4 800 8 800 16 800 16 800 8 800 4 800 2 800 1 800"
> +
> +			^
> +			|
> +		    0 On -|----+                                       +--------
> +			|    |                                       |
> +		    Off-|    +---------------------------------------+
> +			|
> +		    1 On -|    +----+                             +----+
> +			|    |    |                             |    |
> +		    Off |----+    +-----------------------------+    +--------
> +			|
> +		    2 On -|         +----+                   +----+
> +			|         |    |                   |    |
> +		    Off-|---------+    +-------------------+    +-------------
> +			|
> +		    3 On -|              +----+         +----+
> +			|              |    |         |    |
> +		    Off-|--------------+    +---------+    +------------------
> +			|
> +		    4 On -|                   +---------+
> +			|                   |         |
> +		    Off-|-------------------+         +-----------------------
> +			|
> +			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
> +
> +		Inverted bounce mode for Pipe LED::
> +
> +		    "30 800 29 800 27 800 23 800 15 800 15 800 23 800 27 800 29 800 30 800"
> +
> +			^
> +			|
> +		    0 On -|    +---------------------------------------+
> +			|    |                                       |
> +		    Off-|----+                                       +--------
> +			|
> +		    1 On -|----+    +-----------------------------+    +--------
> +			|    |    |                             |    |
> +		    Off |    +----+                             +----+
> +			|
> +		    2 On -|---------+    +-------------------+    +-------------
> +			|         |    |                   |    |
> +		    Off-|         +----+                   +----+
> +			|
> +		    3 On -|--------------+    +---------+    +------------------
> +			|              |    |         |    |
> +		    Off-|              +----+         +----+
> +			|
> +		    4 On -|-------------------+         +-----------------------
> +			|                   |         |
> +		    Off-|                   +---------+
> +			|
> +			0---0.8--1.6--2.4--3.2---4---4.8--5.6--6.4--7.2---8--> time (sec)
>  
>  What:		/sys/class/leds/<led>/repeat
>  Date:		September 2019
> diff --git a/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx b/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
> index 45b1e605d355..215482379580 100644
> --- a/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
> +++ b/Documentation/ABI/testing/sysfs-class-led-driver-sc27xx
> @@ -12,8 +12,8 @@ Description:
>  		format, we should set brightness as 0 for rise stage, fall
>  		stage and low stage.
>  
> -		Min stage duration: 125 ms
> -		Max stage duration: 31875 ms
> +		- Min stage duration: 125 ms
> +		- Max stage duration: 31875 ms
>  
>  		Since the stage duration step is 125 ms, the duration should be
>  		a multiplier of 125, like 125ms, 250ms, 375ms, 500ms ... 31875ms.
> diff --git a/Documentation/ABI/testing/sysfs-class-mic b/Documentation/ABI/testing/sysfs-class-mic
> index 6ef682603179..bd0e780c3760 100644
> --- a/Documentation/ABI/testing/sysfs-class-mic
> +++ b/Documentation/ABI/testing/sysfs-class-mic
> @@ -41,24 +41,33 @@ Description:
>  		When read, this entry provides the current state of an Intel
>  		MIC device in the context of the card OS. Possible values that
>  		will be read are:
> -		"ready" - The MIC device is ready to boot the card OS. On
> -		reading this entry after an OSPM resume, a "boot" has to be
> -		written to this entry if the card was previously shutdown
> -		during OSPM suspend.
> -		"booting" - The MIC device has initiated booting a card OS.
> -		"online" - The MIC device has completed boot and is online
> -		"shutting_down" - The card OS is shutting down.
> -		"resetting" - A reset has been initiated for the MIC device
> -		"reset_failed" - The MIC device has failed to reset.
> +
> +
> +		===============  ===============================================
> +		"ready"		 The MIC device is ready to boot the card OS.
> +				 On reading this entry after an OSPM resume,
> +				 a "boot" has to be written to this entry if
> +				 the card was previously shutdown during OSPM
> +				 suspend.
> +		"booting"	 The MIC device has initiated booting a card OS.
> +		"online"	 The MIC device has completed boot and is online
> +		"shutting_down"	 The card OS is shutting down.
> +		"resetting"	 A reset has been initiated for the MIC device
> +		"reset_failed"	 The MIC device has failed to reset.
> +		===============  ===============================================
>  
>  		When written, this sysfs entry triggers different state change
>  		operations depending upon the current state of the card OS.
>  		Acceptable values are:
> -		"boot" - Boot the card OS image specified by the combination
> -			 of firmware, ramdisk, cmdline and bootmode
> -			sysfs entries.
> -		"reset" - Initiates device reset.
> -		"shutdown" - Initiates card OS shutdown.
> +
> +
> +		==========  ===================================================
> +		"boot"      Boot the card OS image specified by the combination
> +			    of firmware, ramdisk, cmdline and bootmode
> +			    sysfs entries.
> +		"reset"     Initiates device reset.
> +		"shutdown"  Initiates card OS shutdown.
> +		==========  ===================================================
>  
>  What:		/sys/class/mic/mic(x)/shutdown_status
>  Date:		October 2013
> @@ -69,12 +78,15 @@ Description:
>  		OS can shutdown because of various reasons. When read, this
>  		entry provides the status on why the card OS was shutdown.
>  		Possible values are:
> -		"nop" -  shutdown status is not applicable, when the card OS is
> -			"online"
> -		"crashed" - Shutdown because of a HW or SW crash.
> -		"halted" - Shutdown because of a halt command.
> -		"poweroff" - Shutdown because of a poweroff command.
> -		"restart" - Shutdown because of a restart command.
> +
> +		==========  ===================================================
> +		"nop"       shutdown status is not applicable, when the card OS
> +			    is "online"
> +		"crashed"   Shutdown because of a HW or SW crash.
> +		"halted"    Shutdown because of a halt command.
> +		"poweroff"  Shutdown because of a poweroff command.
> +		"restart"   Shutdown because of a restart command.
> +		==========  ===================================================
>  
>  What:		/sys/class/mic/mic(x)/cmdline
>  Date:		October 2013
> diff --git a/Documentation/ABI/testing/sysfs-class-ocxl b/Documentation/ABI/testing/sysfs-class-ocxl
> index ae1276efa45a..bf33f4fda58f 100644
> --- a/Documentation/ABI/testing/sysfs-class-ocxl
> +++ b/Documentation/ABI/testing/sysfs-class-ocxl
> @@ -11,8 +11,11 @@ Contact:	linuxppc-dev@lists.ozlabs.org
>  Description:	read only
>  		Number of contexts for the AFU, in the format <n>/<max>
>  		where:
> +
> +			====	===============================================
>  			n:	number of currently active contexts, for debug
>  			max:	maximum number of contexts supported by the AFU
> +			====	===============================================
>  
>  What:		/sys/class/ocxl/<afu name>/pp_mmio_size
>  Date:		January 2018
> diff --git a/Documentation/ABI/testing/sysfs-class-power b/Documentation/ABI/testing/sysfs-class-power
> index dbccb2fcd7ce..d4319a04c302 100644
> --- a/Documentation/ABI/testing/sysfs-class-power
> +++ b/Documentation/ABI/testing/sysfs-class-power
> @@ -1,4 +1,4 @@
> -===== General Properties =====
> +**General Properties**
>  
>  What:		/sys/class/power_supply/<supply_name>/manufacturer
>  Date:		May 2007
> @@ -72,6 +72,7 @@ Description:
>  		critically low).
>  
>  		Access: Read, Write
> +
>  		Valid values: 0 - 100 (percent)
>  
>  What:		/sys/class/power_supply/<supply_name>/capacity_error_margin
> @@ -96,7 +97,9 @@ Description:
>  		Coarse representation of battery capacity.
>  
>  		Access: Read
> -		Valid values: "Unknown", "Critical", "Low", "Normal", "High",
> +
> +		Valid values:
> +			      "Unknown", "Critical", "Low", "Normal", "High",
>  			      "Full"
>  
>  What:		/sys/class/power_supply/<supply_name>/current_avg
> @@ -139,6 +142,7 @@ Description:
>  		throttling for thermal cooling or improving battery health.
>  
>  		Access: Read, Write
> +
>  		Valid values: Represented in microamps
>  
>  What:		/sys/class/power_supply/<supply_name>/charge_control_limit_max
> @@ -148,6 +152,7 @@ Description:
>  		Maximum legal value for the charge_control_limit property.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microamps
>  
>  What:		/sys/class/power_supply/<supply_name>/charge_control_start_threshold
> @@ -168,6 +173,7 @@ Description:
>  		stop.
>  
>  		Access: Read, Write
> +
>  		Valid values: 0 - 100 (percent)
>  
>  What:		/sys/class/power_supply/<supply_name>/charge_type
> @@ -183,7 +189,9 @@ Description:
>  		different algorithm.
>  
>  		Access: Read, Write
> -		Valid values: "Unknown", "N/A", "Trickle", "Fast", "Standard",
> +
> +		Valid values:
> +			      "Unknown", "N/A", "Trickle", "Fast", "Standard",
>  			      "Adaptive", "Custom"
>  
>  What:		/sys/class/power_supply/<supply_name>/charge_term_current
> @@ -194,6 +202,7 @@ Description:
>  		when the battery is considered full and charging should end.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microamps
>  
>  What:		/sys/class/power_supply/<supply_name>/health
> @@ -204,7 +213,9 @@ Description:
>  		functionality.
>  
>  		Access: Read
> -		Valid values: "Unknown", "Good", "Overheat", "Dead",
> +
> +		Valid values:
> +			      "Unknown", "Good", "Overheat", "Dead",
>  			      "Over voltage", "Unspecified failure", "Cold",
>  			      "Watchdog timer expire", "Safety timer expire",
>  			      "Over current", "Calibration required", "Warm",
> @@ -218,6 +229,7 @@ Description:
>  		for a battery charge cycle.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microamps
>  
>  What:		/sys/class/power_supply/<supply_name>/present
> @@ -227,9 +239,13 @@ Description:
>  		Reports whether a battery is present or not in the system.
>  
>  		Access: Read
> +
>  		Valid values:
> +
> +			== =======
>  			0: Absent
>  			1: Present
> +			== =======
>  
>  What:		/sys/class/power_supply/<supply_name>/status
>  Date:		May 2007
> @@ -240,7 +256,9 @@ Description:
>  		used to enable/disable charging to the battery.
>  
>  		Access: Read, Write
> -		Valid values: "Unknown", "Charging", "Discharging",
> +
> +		Valid values:
> +			      "Unknown", "Charging", "Discharging",
>  			      "Not charging", "Full"
>  
>  What:		/sys/class/power_supply/<supply_name>/technology
> @@ -250,7 +268,9 @@ Description:
>  		Describes the battery technology supported by the supply.
>  
>  		Access: Read
> -		Valid values: "Unknown", "NiMH", "Li-ion", "Li-poly", "LiFe",
> +
> +		Valid values:
> +			      "Unknown", "NiMH", "Li-ion", "Li-poly", "LiFe",
>  			      "NiCd", "LiMn"
>  
>  What:		/sys/class/power_supply/<supply_name>/temp
> @@ -260,6 +280,7 @@ Description:
>  		Reports the current TBAT battery temperature reading.
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/temp_alert_max
> @@ -274,6 +295,7 @@ Description:
>  		critically high, and charging has stopped).
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/temp_alert_min
> @@ -289,6 +311,7 @@ Description:
>  		remedy the situation).
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/temp_max
> @@ -299,6 +322,7 @@ Description:
>  		charging.
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/temp_min
> @@ -309,6 +333,7 @@ Description:
>  		charging.
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/voltage_avg,
> @@ -320,6 +345,7 @@ Description:
>  		which they average readings to smooth out the reported value.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microvolts
>  
>  What:		/sys/class/power_supply/<supply_name>/voltage_max,
> @@ -330,6 +356,7 @@ Description:
>  		during charging.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microvolts
>  
>  What:		/sys/class/power_supply/<supply_name>/voltage_min,
> @@ -340,6 +367,7 @@ Description:
>  		during discharging.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microvolts
>  
>  What:		/sys/class/power_supply/<supply_name>/voltage_now,
> @@ -350,9 +378,10 @@ Description:
>  		This value is not averaged/smoothed.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microvolts
>  
> -===== USB Properties =====
> +**USB Properties**
>  
>  What: 		/sys/class/power_supply/<supply_name>/current_avg
>  Date:		May 2007
> @@ -363,6 +392,7 @@ Description:
>  		average readings to smooth out the reported value.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microamps
>  
>  
> @@ -373,6 +403,7 @@ Description:
>  		Reports the maximum IBUS current the supply can support.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microamps
>  
>  What: 		/sys/class/power_supply/<supply_name>/current_now
> @@ -385,6 +416,7 @@ Description:
>  		within the reported min/max range.
>  
>  		Access: Read, Write
> +
>  		Valid values: Represented in microamps
>  
>  What:		/sys/class/power_supply/<supply_name>/input_current_limit
> @@ -399,6 +431,7 @@ Description:
>  		solved using power limit use input_current_limit.
>  
>  		Access: Read, Write
> +
>  		Valid values: Represented in microamps
>  
>  What:		/sys/class/power_supply/<supply_name>/input_voltage_limit
> @@ -441,10 +474,14 @@ Description:
>  		USB supply so voltage and current can be controlled).
>  
>  		Access: Read, Write
> +
>  		Valid values:
> +
> +			== ==================================================
>  			0: Offline
>  			1: Online Fixed - Fixed Voltage Supply
>  			2: Online Programmable - Programmable Voltage Supply
> +			== ==================================================
>  
>  What:		/sys/class/power_supply/<supply_name>/temp
>  Date:		May 2007
> @@ -455,6 +492,7 @@ Description:
>  		TJUNC temperature of an IC)
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/temp_alert_max
> @@ -470,6 +508,7 @@ Description:
>  		remedy the situation).
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/temp_alert_min
> @@ -485,6 +524,7 @@ Description:
>  		accordingly to remedy the situation).
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/temp_max
> @@ -494,6 +534,7 @@ Description:
>  		Reports the maximum allowed supply temperature for operation.
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What:		/sys/class/power_supply/<supply_name>/temp_min
> @@ -503,6 +544,7 @@ Description:
>  		Reports the mainimum allowed supply temperature for operation.
>  
>  		Access: Read
> +
>  		Valid values: Represented in 1/10 Degrees Celsius
>  
>  What: 		/sys/class/power_supply/<supply_name>/usb_type
> @@ -514,7 +556,9 @@ Description:
>  		is attached.
>  
>  		Access: Read-Only
> -		Valid values: "Unknown", "SDP", "DCP", "CDP", "ACA", "C", "PD",
> +
> +		Valid values:
> +			      "Unknown", "SDP", "DCP", "CDP", "ACA", "C", "PD",
>  			      "PD_DRP", "PD_PPS", "BrickID"
>  
>  What: 		/sys/class/power_supply/<supply_name>/voltage_max
> @@ -524,6 +568,7 @@ Description:
>  		Reports the maximum VBUS voltage the supply can support.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microvolts
>  
>  What: 		/sys/class/power_supply/<supply_name>/voltage_min
> @@ -533,6 +578,7 @@ Description:
>  		Reports the minimum VBUS voltage the supply can support.
>  
>  		Access: Read
> +
>  		Valid values: Represented in microvolts
>  
>  What: 		/sys/class/power_supply/<supply_name>/voltage_now
> @@ -545,9 +591,10 @@ Description:
>  		within the reported min/max range.
>  
>  		Access: Read, Write
> +
>  		Valid values: Represented in microvolts
>  
> -===== Device Specific Properties =====
> +**Device Specific Properties**
>  
>  What:		/sys/class/power/ds2760-battery.*/charge_now
>  Date:		May 2010
> @@ -581,6 +628,7 @@ Description:
>  		will drop to 0 A) and will trigger interrupt.
>  
>  		Valid values:
> +
>  		- 5, 6 or 7 (hours),
>  		- 0: disabled.
>  
> @@ -595,6 +643,7 @@ Description:
>  		will drop to 0 A) and will trigger interrupt.
>  
>  		Valid values:
> +
>  		- 4 - 16 (hours), step by 2 (rounded down)
>  		- 0: disabled.
>  
> @@ -609,6 +658,7 @@ Description:
>  		interrupt and start top-off charging mode.
>  
>  		Valid values:
> +
>  		- 100000 - 200000 (microamps), step by 25000 (rounded down)
>  		- 200000 - 350000 (microamps), step by 50000 (rounded down)
>  		- 0: disabled.
> @@ -624,6 +674,7 @@ Description:
>  		will drop to 0 A) and will trigger interrupt.
>  
>  		Valid values:
> +
>  		- 0 - 70 (minutes), step by 10 (rounded down)
>  
>  What:		/sys/class/power_supply/bq24257-charger/ovp_voltage
> @@ -637,6 +688,7 @@ Description:
>  		device datasheet for details.
>  
>  		Valid values:
> +
>  		- 6000000, 6500000, 7000000, 8000000, 9000000, 9500000, 10000000,
>  		  10500000 (all uV)
>  
> @@ -652,6 +704,7 @@ Description:
>  		lower than the set value. See device datasheet for details.
>  
>  		Valid values:
> +
>  		- 4200000, 4280000, 4360000, 4440000, 4520000, 4600000, 4680000,
>  		  4760000 (all uV)
>  
> @@ -666,6 +719,7 @@ Description:
>  		the charger operates normally. See device datasheet for details.
>  
>  		Valid values:
> +
>  		- 1: enabled
>  		- 0: disabled
>  
> @@ -681,6 +735,7 @@ Description:
>  		from the system. See device datasheet for details.
>  
>  		Valid values:
> +
>  		- 1: enabled
>  		- 0: disabled
>  
> diff --git a/Documentation/ABI/testing/sysfs-class-power-twl4030 b/Documentation/ABI/testing/sysfs-class-power-twl4030
> index b4fd32d210c5..7ac36dba87bc 100644
> --- a/Documentation/ABI/testing/sysfs-class-power-twl4030
> +++ b/Documentation/ABI/testing/sysfs-class-power-twl4030
> @@ -4,18 +4,20 @@ Description:
>  	Writing to this can disable charging.
>  
>  	Possible values are:
> -		"auto" - draw power as appropriate for detected
> -			 power source and battery status.
> -		"off"  - do not draw any power.
> -		"continuous"
> -		       - activate mode described as "linear" in
> -		         TWL data sheets.  This uses whatever
> -			 current is available and doesn't switch off
> -			 when voltage drops.
>  
> -			 This is useful for unstable power sources
> -			 such as bicycle dynamo, but care should
> -			 be taken that battery is not over-charged.
> +		=============	===========================================
> +		"auto" 		draw power as appropriate for detected
> +				power source and battery status.
> +		"off"  		do not draw any power.
> +		"continuous"	activate mode described as "linear" in
> +				TWL data sheets.  This uses whatever
> +				current is available and doesn't switch off
> +				when voltage drops.
> +
> +				This is useful for unstable power sources
> +				such as bicycle dynamo, but care should
> +				be taken that battery is not over-charged.
> +		=============	===========================================
>  
>  What: /sys/class/power_supply/twl4030_ac/mode
>  Description:
> @@ -23,6 +25,9 @@ Description:
>  	Writing to this can disable charging.
>  
>  	Possible values are:
> -		"auto" - draw power as appropriate for detected
> -			 power source and battery status.
> -		"off"  - do not draw any power.
> +
> +		======	===========================================
> +		"auto"	draw power as appropriate for detected
> +			power source and battery status.
> +		"off"	do not draw any power.
> +		======	===========================================
> diff --git a/Documentation/ABI/testing/sysfs-class-rc b/Documentation/ABI/testing/sysfs-class-rc
> index 6c0d6c8cb911..9c8ff7910858 100644
> --- a/Documentation/ABI/testing/sysfs-class-rc
> +++ b/Documentation/ABI/testing/sysfs-class-rc
> @@ -21,15 +21,22 @@ KernelVersion:	2.6.36
>  Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
>  Description:
>  		Reading this file returns a list of available protocols,
> -		something like:
> +		something like::
> +
>  		    "rc5 [rc6] nec jvc [sony]"
> +
>  		Enabled protocols are shown in [] brackets.
> +
>  		Writing "+proto" will add a protocol to the list of enabled
>  		protocols.
> +
>  		Writing "-proto" will remove a protocol from the list of enabled
>  		protocols.
> +
>  		Writing "proto" will enable only "proto".
> +
>  		Writing "none" will disable all protocols.
> +
>  		Write fails with EINVAL if an invalid protocol combination or
>  		unknown protocol name is used.
>  
> @@ -39,11 +46,13 @@ KernelVersion:	3.15
>  Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
>  Description:
>  		Sets the scancode filter expected value.
> +
>  		Use in combination with /sys/class/rc/rcN/filter_mask to set the
>  		expected value of the bits set in the filter mask.
>  		If the hardware supports it then scancodes which do not match
>  		the filter will be ignored. Otherwise the write will fail with
>  		an error.
> +
>  		This value may be reset to 0 if the current protocol is altered.
>  
>  What:		/sys/class/rc/rcN/filter_mask
> @@ -56,9 +65,11 @@ Description:
>  		of the scancode which should be compared against the expected
>  		value. A value of 0 disables the filter to allow all valid
>  		scancodes to be processed.
> +
>  		If the hardware supports it then scancodes which do not match
>  		the filter will be ignored. Otherwise the write will fail with
>  		an error.
> +
>  		This value may be reset to 0 if the current protocol is altered.
>  
>  What:		/sys/class/rc/rcN/wakeup_protocols
> @@ -67,15 +78,22 @@ KernelVersion:	4.11
>  Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
>  Description:
>  		Reading this file returns a list of available protocols to use
> -		for the wakeup filter, something like:
> +		for the wakeup filter, something like::
> +
>  		    "rc-5 nec nec-x rc-6-0 rc-6-6a-24 [rc-6-6a-32] rc-6-mce"
> +
>  		Note that protocol variants are listed, so "nec", "sony",
>  		"rc-5", "rc-6" have their different bit length encodings
>  		listed if available.
> +
>  		The enabled wakeup protocol is shown in [] brackets.
> +
>  		Only one protocol can be selected at a time.
> +
>  		Writing "proto" will use "proto" for wakeup events.
> +
>  		Writing "none" will disable wakeup.
> +
>  		Write fails with EINVAL if an invalid protocol combination or
>  		unknown protocol name is used, or if wakeup is not supported by
>  		the hardware.
> @@ -86,13 +104,17 @@ KernelVersion:	3.15
>  Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
>  Description:
>  		Sets the scancode wakeup filter expected value.
> +
>  		Use in combination with /sys/class/rc/rcN/wakeup_filter_mask to
>  		set the expected value of the bits set in the wakeup filter mask
>  		to trigger a system wake event.
> +
>  		If the hardware supports it and wakeup_filter_mask is not 0 then
>  		scancodes which match the filter will wake the system from e.g.
>  		suspend to RAM or power off.
> +
>  		Otherwise the write will fail with an error.
> +
>  		This value may be reset to 0 if the wakeup protocol is altered.
>  
>  What:		/sys/class/rc/rcN/wakeup_filter_mask
> @@ -101,11 +123,15 @@ KernelVersion:	3.15
>  Contact:	Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
>  Description:
>  		Sets the scancode wakeup filter mask of bits to compare.
> +
>  		Use in combination with /sys/class/rc/rcN/wakeup_filter to set
>  		the bits of the scancode which should be compared against the
>  		expected value to trigger a system wake event.
> +
>  		If the hardware supports it and wakeup_filter_mask is not 0 then
>  		scancodes which match the filter will wake the system from e.g.
>  		suspend to RAM or power off.
> +
>  		Otherwise the write will fail with an error.
> +
>  		This value may be reset to 0 if the wakeup protocol is altered.
> diff --git a/Documentation/ABI/testing/sysfs-class-scsi_host b/Documentation/ABI/testing/sysfs-class-scsi_host
> index bafc59fd7b69..7c98d8f43c45 100644
> --- a/Documentation/ABI/testing/sysfs-class-scsi_host
> +++ b/Documentation/ABI/testing/sysfs-class-scsi_host
> @@ -56,8 +56,9 @@ Description:
>  		management) on top, which makes it match the Windows IRST (Intel
>  		Rapid Storage Technology) driver settings. This setting is also
>  		close to min_power, except that:
> +
>  		a) It does not use host-initiated slumber mode, but it does
> -		allow device-initiated slumber
> +		   allow device-initiated slumber
>  		b) It does not enable low power device sleep mode (DevSlp).
>  
>  What:		/sys/class/scsi_host/hostX/em_message
> @@ -70,8 +71,8 @@ Description:
>  		protocol, writes and reads correspond to the LED message format
>  		as defined in the AHCI spec.
>  
> -		The user must turn sw_activity (under /sys/block/*/device/) OFF
> -		it they wish to control the activity LED via the em_message
> +		The user must turn sw_activity (under `/sys/block/*/device/`)
> +		OFF if they wish to control the activity LED via the em_message
>  		file.
>  
>  		em_message_type: (RO) Displays the current enclosure management
> diff --git a/Documentation/ABI/testing/sysfs-class-typec b/Documentation/ABI/testing/sysfs-class-typec
> index b834671522d6..b7794e02ad20 100644
> --- a/Documentation/ABI/testing/sysfs-class-typec
> +++ b/Documentation/ABI/testing/sysfs-class-typec
> @@ -40,10 +40,13 @@ Description:
>  		attribute will not return until the operation has finished.
>  
>  		Valid values:
> -		- source (The port will behave as source only DFP port)
> -		- sink (The port will behave as sink only UFP port)
> -		- dual (The port will behave as dual-role-data and
> +
> +		======  ==============================================
> +		source  (The port will behave as source only DFP port)
> +		sink    (The port will behave as sink only UFP port)
> +		dual    (The port will behave as dual-role-data and
>  			dual-role-power port)
> +		======  ==============================================
>  
>  What:		/sys/class/typec/<port>/vconn_source
>  Date:		April 2017
> @@ -59,6 +62,7 @@ Description:
>  		generates uevent KOBJ_CHANGE.
>  
>  		Valid values:
> +
>  		- "no" when the port is not the VCONN Source
>  		- "yes" when the port is the VCONN Source
>  
> @@ -72,6 +76,7 @@ Description:
>  		power operation mode should show "usb_power_delivery".
>  
>  		Valid values:
> +
>  		- default
>  		- 1.5A
>  		- 3.0A
> @@ -191,6 +196,7 @@ Date:		April 2017
>  Contact:	Heikki Krogerus <heikki.krogerus@linux.intel.com>
>  Description:
>  		Shows type of the plug on the cable:
> +
>  		- type-a - Standard A
>  		- type-b - Standard B
>  		- type-c
> diff --git a/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD b/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
> index 7e43cdce9a52..f7b360a61b21 100644
> --- a/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
> +++ b/Documentation/ABI/testing/sysfs-devices-platform-ACPI-TAD
> @@ -7,6 +7,7 @@ Description:
>  		(RO) Hexadecimal bitmask of the TAD attributes are reported by
>  		the platform firmware (see ACPI 6.2, section 9.18.2):
>  
> +		======= ======================================================
>  		BIT(0): AC wakeup implemented if set
>  		BIT(1): DC wakeup implemented if set
>  		BIT(2): Get/set real time features implemented if set
> @@ -16,6 +17,7 @@ Description:
>  		BIT(6): The AC timer wakes up from S5 if set
>  		BIT(7): The DC timer wakes up from S4 if set
>  		BIT(8): The DC timer wakes up from S5 if set
> +		======= ======================================================
>  
>  		The other bits are reserved.
>  
> @@ -62,9 +64,11 @@ Description:
>  		timer status with the following meaning of bits (see ACPI 6.2,
>  		Section 9.18.5):
>  
> +		======= ======================================================
>  		Bit(0): The timer has expired if set.
>  		Bit(1): The timer has woken up the system from a sleep state
>  		        (S3 or S4/S5 if supported) if set.
> +		======= ======================================================
>  
>  		The other bits are reserved.
>  
> diff --git a/Documentation/ABI/testing/sysfs-devices-platform-docg3 b/Documentation/ABI/testing/sysfs-devices-platform-docg3
> index 8aa36716882f..378c42694bfb 100644
> --- a/Documentation/ABI/testing/sysfs-devices-platform-docg3
> +++ b/Documentation/ABI/testing/sysfs-devices-platform-docg3
> @@ -9,8 +9,10 @@ Description:
>  		The protection has information embedded whether it blocks reads,
>  		writes or both.
>  		The result is:
> -		0 -> the DPS is not keylocked
> -		1 -> the DPS is keylocked
> +
> +		- 0 -> the DPS is not keylocked
> +		- 1 -> the DPS is keylocked
> +
>  Users:		None identified so far.
>  
>  What:		/sys/devices/platform/docg3/f[0-3]_dps[01]_protection_key
> @@ -27,8 +29,12 @@ Description:
>  		Entering the correct value toggle the lock, and can be observed
>  		through f[0-3]_dps[01]_is_keylocked.
>  		Possible values are:
> +
>  			- 8 bytes
> +
>  		Typical values are:
> +
>  			- "00000000"
>  			- "12345678"
> +
>  Users:		None identified so far.
> diff --git a/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb b/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
> index 2107082426da..e45ac2e865d5 100644
> --- a/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
> +++ b/Documentation/ABI/testing/sysfs-devices-platform-sh_mobile_lcdc_fb
> @@ -17,10 +17,10 @@ Description:
>  		to overlay planes.
>  
>  		Selects the composition mode for the overlay. Possible values
> -		are
> +		are:
>  
> -		0 - Alpha Blending
> -		1 - ROP3
> +		- 0 - Alpha Blending
> +		- 1 - ROP3
>  
>  What:		/sys/devices/platform/sh_mobile_lcdc_fb.[0-3]/graphics/fb[0-9]/ovl_position
>  Date:		May 2012
> @@ -30,7 +30,7 @@ Description:
>  		to overlay planes.
>  
>  		Stores the x,y overlay position on the display in pixels. The
> -		position format is `[0-9]+,[0-9]+'.
> +		position format is `[0-9]+,[0-9]+`.
>  
>  What:		/sys/devices/platform/sh_mobile_lcdc_fb.[0-3]/graphics/fb[0-9]/ovl_rop3
>  Date:		May 2012
> diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
> index b555df825447..274c337ec6a9 100644
> --- a/Documentation/ABI/testing/sysfs-devices-system-cpu
> +++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
> @@ -151,23 +151,28 @@ Description:
>  		The processor idle states which are available for use have the
>  		following attributes:
>  
> -		name: (RO) Name of the idle state (string).
> +		======== ==== =================================================
> +		name:	 (RO) Name of the idle state (string).
>  
>  		latency: (RO) The latency to exit out of this idle state (in
> -		microseconds).
> +			      microseconds).
>  
> -		power: (RO) The power consumed while in this idle state (in
> -		milliwatts).
> +		power:   (RO) The power consumed while in this idle state (in
> +			      milliwatts).
>  
> -		time: (RO) The total time spent in this idle state (in microseconds).
> +		time:    (RO) The total time spent in this idle state
> +			      (in microseconds).
>  
> -		usage: (RO) Number of times this state was entered (a count).
> +		usage:	 (RO) Number of times this state was entered (a count).
>  
> -		above: (RO) Number of times this state was entered, but the
> -		       observed CPU idle duration was too short for it (a count).
> +		above:	 (RO) Number of times this state was entered, but the
> +			      observed CPU idle duration was too short for it
> +			      (a count).
>  
> -		below: (RO) Number of times this state was entered, but the
> -		       observed CPU idle duration was too long for it (a count).
> +		below: 	 (RO) Number of times this state was entered, but the
> +			      observed CPU idle duration was too long for it
> +			      (a count).
> +		======== ==== =================================================
>  
>  What:		/sys/devices/system/cpu/cpuX/cpuidle/stateN/desc
>  Date:		February 2008
> @@ -290,6 +295,7 @@ Description:	Processor frequency boosting control
>  		This switch controls the boost setting for the whole system.
>  		Boosting allows the CPU and the firmware to run at a frequency
>  		beyound it's nominal limit.
> +
>  		More details can be found in
>  		Documentation/admin-guide/pm/cpufreq.rst
>  
> @@ -337,43 +343,57 @@ Contact:	Sudeep Holla <sudeep.holla@arm.com>
>  Description:	Parameters for the CPU cache attributes
>  
>  		allocation_policy:
> -			- WriteAllocate: allocate a memory location to a cache line
> -					 on a cache miss because of a write
> -			- ReadAllocate: allocate a memory location to a cache line
> +			- WriteAllocate:
> +					allocate a memory location to a cache line
> +					on a cache miss because of a write
> +			- ReadAllocate:
> +					allocate a memory location to a cache line
>  					on a cache miss because of a read
> -			- ReadWriteAllocate: both writeallocate and readallocate
> +			- ReadWriteAllocate:
> +					both writeallocate and readallocate
>  
> -		attributes: LEGACY used only on IA64 and is same as write_policy
> +		attributes:
> +			    LEGACY used only on IA64 and is same as write_policy
>  
> -		coherency_line_size: the minimum amount of data in bytes that gets
> +		coherency_line_size:
> +				     the minimum amount of data in bytes that gets
>  				     transferred from memory to cache
>  
> -		level: the cache hierarchy in the multi-level cache configuration
> +		level:
> +			the cache hierarchy in the multi-level cache configuration
>  
> -		number_of_sets: total number of sets in the cache, a set is a
> +		number_of_sets:
> +				total number of sets in the cache, a set is a
>  				collection of cache lines with the same cache index
>  
> -		physical_line_partition: number of physical cache line per cache tag
> +		physical_line_partition:
> +				number of physical cache lines per cache tag
>  
> -		shared_cpu_list: the list of logical cpus sharing the cache
> +		shared_cpu_list:
> +				the list of logical cpus sharing the cache
>  
> -		shared_cpu_map: logical cpu mask containing the list of cpus sharing
> +		shared_cpu_map:
> +				logical cpu mask containing the list of cpus sharing
>  				the cache
>  
> -		size: the total cache size in kB
> +		size:
> +			the total cache size in kB
>  
>  		type:
>  			- Instruction: cache that only holds instructions
>  			- Data: cache that only caches data
>  			- Unified: cache that holds both data and instructions
>  
> -		ways_of_associativity: degree of freedom in placing a particular block
> -					of memory in the cache
> +		ways_of_associativity:
> +			degree of freedom in placing a particular block
> +			of memory in the cache
>  
>  		write_policy:
> -			- WriteThrough: data is written to both the cache line
> +			- WriteThrough:
> +					data is written to both the cache line
>  					and to the block in the lower-level memory
> -			- WriteBack: data is written only to the cache line and
> +			- WriteBack:
> +				     data is written only to the cache line and
>  				     the modified cache line is written to main
>  				     memory only when it is replaced
>  
> @@ -414,30 +434,30 @@ Description:	POWERNV CPUFreq driver's frequency throttle stats directory and
>  		throttle attributes exported in the 'throttle_stats' directory:
>  
>  		- turbo_stat : This file gives the total number of times the max
> -		frequency is throttled to lower frequency in turbo (at and above
> -		nominal frequency) range of frequencies.
> +		  frequency is throttled to lower frequency in turbo (at and above
> +		  nominal frequency) range of frequencies.
>  
>  		- sub_turbo_stat : This file gives the total number of times the
> -		max frequency is throttled to lower frequency in sub-turbo(below
> -		nominal frequency) range of frequencies.
> +		  max frequency is throttled to lower frequency in sub-turbo (below
> +		  nominal frequency) range of frequencies.
>  
>  		- unthrottle : This file gives the total number of times the max
> -		frequency is unthrottled after being throttled.
> +		  frequency is unthrottled after being throttled.
>  
>  		- powercap : This file gives the total number of times the max
> -		frequency is throttled due to 'Power Capping'.
> +		  frequency is throttled due to 'Power Capping'.
>  
>  		- overtemp : This file gives the total number of times the max
> -		frequency is throttled due to 'CPU Over Temperature'.
> +		  frequency is throttled due to 'CPU Over Temperature'.
>  
>  		- supply_fault : This file gives the total number of times the
> -		max frequency is throttled due to 'Power Supply Failure'.
> +		  max frequency is throttled due to 'Power Supply Failure'.
>  
>  		- overcurrent : This file gives the total number of times the
> -		max frequency is throttled due to 'Overcurrent'.
> +		  max frequency is throttled due to 'Overcurrent'.
>  
>  		- occ_reset : This file gives the total number of times the max
> -		frequency is throttled due to 'OCC Reset'.
> +		  frequency is throttled due to 'OCC Reset'.
>  
>  		The sysfs attributes representing different throttle reasons like
>  		powercap, overtemp, supply_fault, overcurrent and occ_reset map to
> @@ -469,8 +489,9 @@ What:		/sys/devices/system/cpu/cpuX/regs/
>  Date:		June 2016
>  Contact:	Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>
>  Description:	AArch64 CPU registers
> +
>  		'identification' directory exposes the CPU ID registers for
> -		 identifying model and revision of the CPU.
> +		identifying model and revision of the CPU.
>  
>  What:		/sys/devices/system/cpu/cpu#/cpu_capacity
>  Date:		December 2016
> @@ -497,9 +518,11 @@ Description:	Information about CPU vulnerabilities
>  		vulnerabilities. The output of those files reflects the
>  		state of the CPUs in the system. Possible output values:
>  
> +		================  ==============================================
>  		"Not affected"	  CPU is not affected by the vulnerability
>  		"Vulnerable"	  CPU is affected and no mitigation in effect
>  		"Mitigation: $M"  CPU is affected and mitigation $M is in effect
> +		================  ==============================================
>  
>  		See also: Documentation/admin-guide/hw-vuln/index.rst
>  
> @@ -515,12 +538,14 @@ Description:	Control Symetric Multi Threading (SMT)
>  		control: Read/write interface to control SMT. Possible
>  			 values:
>  
> +			 ================ =========================================
>  			 "on"		  SMT is enabled
>  			 "off"		  SMT is disabled
>  			 "forceoff"	  SMT is force disabled. Cannot be changed.
>  			 "notsupported"   SMT is not supported by the CPU
>  			 "notimplemented" SMT runtime toggling is not
>  					  implemented for the architecture
> +			 ================ =========================================
>  
>  			 If control status is "forceoff" or "notsupported" writes
>  			 are rejected.
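
The control values above can be read back with a plain `cat`. A minimal sketch, assuming the sysfs path from the documented ABI (the `notsupported` fallback is my assumption for kernels or sandboxes without the entry):

```shell
# Query the system-wide SMT control state described in the table above.
# The sysfs path comes from the ABI text; the fallback value is an
# assumption for systems where the file is absent.
ctrl=/sys/devices/system/cpu/smt/control
if [ -r "$ctrl" ]; then
    state=$(cat "$ctrl")
else
    state=notsupported
fi
echo "SMT control: $state"
```

Writes of `on`/`off` to the same file toggle SMT, and per the text above they are rejected when the state is `forceoff` or `notsupported`.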
> diff --git a/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl b/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
> index 470def06ab0a..1a8ee26e92ae 100644
> --- a/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
> +++ b/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl
> @@ -5,8 +5,10 @@ Contact:        Vernon Mauery <vernux@us.ibm.com>
>  Description:    The state file allows a means by which to change in and
>                  out of Premium Real-Time Mode (PRTM), as well as the
>                  ability to query the current state.
> -                    0 => PRTM off
> -                    1 => PRTM enabled
> +
> +                    - 0 => PRTM off
> +                    - 1 => PRTM enabled
> +
>  Users:          The ibm-prtm userspace daemon uses this interface.
>  
>  
> diff --git a/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator b/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
> index 4d63a7904b94..42214b4ff14a 100644
> --- a/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
> +++ b/Documentation/ABI/testing/sysfs-driver-bd9571mwv-regulator
> @@ -6,11 +6,13 @@ Description:	Read/write the current state of DDR Backup Mode, which controls
>  		if DDR power rails will be kept powered during system suspend.
>  		("on"/"1" = enabled, "off"/"0" = disabled).
>  		Two types of power switches (or control signals) can be used:
> +
>  		  A. With a momentary power switch (or pulse signal), DDR
>  		     Backup Mode is enabled by default when available, as the
>  		     PMIC will be configured only during system suspend.
>  		  B. With a toggle power switch (or level signal), the
>  		     following steps must be followed exactly:
> +
>  		       1. Configure PMIC for backup mode, to change the role of
>  			  the accessory power switch from a power switch to a
>  			  wake-up switch,
> @@ -20,8 +22,10 @@ Description:	Read/write the current state of DDR Backup Mode, which controls
>  		       3. Suspend system,
>  		       4. Switch accessory power switch on, to resume the
>  			  system.
> +
>  		     DDR Backup Mode must be explicitly enabled by the user,
>  		     to invoke step 1.
> +
>  		See also Documentation/devicetree/bindings/mfd/bd9571mwv.txt.
>  Users:		User space applications for embedded boards equipped with a
>  		BD9571MWV PMIC.
> diff --git a/Documentation/ABI/testing/sysfs-driver-genwqe b/Documentation/ABI/testing/sysfs-driver-genwqe
> index 64ac6d567c4b..69d855dc4c47 100644
> --- a/Documentation/ABI/testing/sysfs-driver-genwqe
> +++ b/Documentation/ABI/testing/sysfs-driver-genwqe
> @@ -29,8 +29,12 @@ What:           /sys/class/genwqe/genwqe<n>_card/reload_bitstream
>  Date:           May 2014
>  Contact:        klebers@linux.vnet.ibm.com
>  Description:    Interface to trigger a PCIe card reset to reload the bitstream.
> +
> +		::
> +
>                    sudo sh -c 'echo 1 > \
>                      /sys/class/genwqe/genwqe0_card/reload_bitstream'
> +
>                  If successfully, the card will come back with the bitstream set
>                  on 'next_bitstream'.
>  
> @@ -64,8 +68,11 @@ Description:    Base clock frequency of the card.
>  What:           /sys/class/genwqe/genwqe<n>_card/device/sriov_numvfs
>  Date:           Oct 2013
>  Contact:        haver@linux.vnet.ibm.com
> -Description:    Enable VFs (1..15):
> +Description:    Enable VFs (1..15)::
> +
>                    sudo sh -c 'echo 15 > \
>                      /sys/bus/pci/devices/0000\:1b\:00.0/sriov_numvfs'
> -                Disable VFs:
> +
> +                Disable VFs::
> +
>                    Write a 0 into the same sysfs entry.
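
The enable/disable pair above can be sketched as a single guarded script; the PCI address is the example one from the ABI text and will differ per system, and the guard is my addition so the sketch is safe to run where the device is absent:

```shell
# Sketch of the documented sriov_numvfs interface (genwqe card).
# The device address is the hypothetical example from the ABI text.
dev=/sys/bus/pci/devices/0000:1b:00.0/sriov_numvfs
if [ -w "$dev" ]; then
    echo 15 > "$dev"   # enable 15 VFs (valid range per the text: 1..15)
    echo 0  > "$dev"   # writing 0 disables all VFs again
    result=done
else
    result=unavailable
fi
echo "sriov_numvfs: $result"
```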
> diff --git a/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff b/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
> index 305dffd229a8..de07be314efc 100644
> --- a/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
> +++ b/Documentation/ABI/testing/sysfs-driver-hid-logitech-lg4ff
> @@ -12,7 +12,9 @@ KernelVersion:	4.1
>  Contact:	Michal Malý <madcatxster@devoid-pointer.net>
>  Description:	Displays a set of alternate modes supported by a wheel. Each
>  		mode is listed as follows:
> +
>  		  Tag: Mode Name
> +
>  		Currently active mode is marked with an asterisk. List also
>  		contains an abstract item "native" which always denotes the
>  		native mode of the wheel. Echoing the mode tag switches the
> @@ -24,24 +26,30 @@ Description:	Displays a set of alternate modes supported by a wheel. Each
>  		This entry is not created for devices that have only one mode.
>  
>  		Currently supported mode switches:
> -		Driving Force Pro:
> +
> +		Driving Force Pro::
> +
>  		  DF-EX --> DFP
>  
> -		G25:
> +		G25::
> +
>  		  DF-EX --> DFP --> G25
>  
> -		G27:
> +		G27::
> +
>  		  DF-EX <*> DFP <-> G25 <-> G27
>  		  DF-EX <*--------> G25 <-> G27
>  		  DF-EX <*----------------> G27
>  
> -		G29:
> +		G29::
> +
>  		  DF-EX <*> DFP <-> G25 <-> G27 <-> G29
>  		  DF-EX <*--------> G25 <-> G27 <-> G29
>  		  DF-EX <*----------------> G27 <-> G29
>  		  DF-EX <*------------------------> G29
>  
> -		DFGT:
> +		DFGT::
> +
>  		  DF-EX <*> DFP <-> DFGT
>  		  DF-EX <*--------> DFGT
>  
> diff --git a/Documentation/ABI/testing/sysfs-driver-hid-wiimote b/Documentation/ABI/testing/sysfs-driver-hid-wiimote
> index 39dfa5cb1cc5..cd7b82a5c27d 100644
> --- a/Documentation/ABI/testing/sysfs-driver-hid-wiimote
> +++ b/Documentation/ABI/testing/sysfs-driver-hid-wiimote
> @@ -39,9 +39,13 @@ Description:	While a device is initialized by the wiimote driver, we perform
>  		Other strings for each device-type are available and may be
>  		added if new device-specific detections are added.
>  		Currently supported are:
> -			gen10: First Wii Remote generation
> -			gen20: Second Wii Remote Plus generation (builtin MP)
> +
> +			============= =======================================
> +			gen10:        First Wii Remote generation
> +			gen20:        Second Wii Remote Plus generation
> +				      (builtin MP)
>  			balanceboard: Wii Balance Board
> +			============= =======================================
>  
>  What:		/sys/bus/hid/drivers/wiimote/<dev>/bboard_calib
>  Date:		May 2013
> @@ -54,6 +58,7 @@ Description:	This attribute is only provided if the device was detected as a
>  		First, 0kg values for all 4 sensors are written, followed by the
>  		17kg values for all 4 sensors and last the 34kg values for all 4
>  		sensors.
> +
>  		Calibration data is already applied by the kernel to all input
>  		values but may be used by user-space to perform other
>  		transformations.
> @@ -68,9 +73,11 @@ Description:	This attribute is only provided if the device was detected as a
>  		is prefixed with a +/-. Each value is a signed 16bit number.
>  		Data is encoded as decimal numbers and specifies the offsets of
>  		the analog sticks of the pro-controller.
> +
>  		Calibration data is already applied by the kernel to all input
>  		values but may be used by user-space to perform other
>  		transformations.
> +
>  		Calibration data is detected by the kernel during device setup.
>  		You can write "scan\n" into this file to re-trigger calibration.
>  		You can also write data directly in the form "x1:y1 x2:y2" to
> diff --git a/Documentation/ABI/testing/sysfs-driver-samsung-laptop b/Documentation/ABI/testing/sysfs-driver-samsung-laptop
> index 34d3a3359cf4..28c9c040de5d 100644
> --- a/Documentation/ABI/testing/sysfs-driver-samsung-laptop
> +++ b/Documentation/ABI/testing/sysfs-driver-samsung-laptop
> @@ -9,10 +9,12 @@ Description:	Some Samsung laptops have different "performance levels"
>  		their fans quiet at all costs.  Reading from this file
>  		will show the current performance level.  Writing to the
>  		file can change this value.
> +
>  			Valid options:
> -				"silent"
> -				"normal"
> -				"overclock"
> +				- "silent"
> +				- "normal"
> +				- "overclock"
> +
>  		Note that not all laptops support all of these options.
>  		Specifically, not all support the "overclock" option,
>  		and it's still unknown if this value even changes
> @@ -25,8 +27,9 @@ Contact:	Corentin Chary <corentin.chary@gmail.com>
>  Description:	Max battery charge level can be modified, battery cycle
>  		life can be extended by reducing the max battery charge
>  		level.
> -		0 means normal battery mode (100% charge)
> -		1 means battery life extender mode (80% charge)
> +
> +		- 0 means normal battery mode (100% charge)
> +		- 1 means battery life extender mode (80% charge)
>  
>  What:		/sys/devices/platform/samsung/usb_charge
>  Date:		December 1, 2011
> diff --git a/Documentation/ABI/testing/sysfs-driver-toshiba_acpi b/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
> index f34221b52b14..e5a438d84e1f 100644
> --- a/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
> +++ b/Documentation/ABI/testing/sysfs-driver-toshiba_acpi
> @@ -4,10 +4,12 @@ KernelVersion:	3.15
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the keyboard backlight operation mode, valid
>  		values are:
> +
>  			* 0x1  -> FN-Z
>  			* 0x2  -> AUTO (also called TIMER)
>  			* 0x8  -> ON
>  			* 0x10 -> OFF
> +
>  		Note that from kernel 3.16 onwards this file accepts all listed
>  		parameters, kernel 3.15 only accepts the first two (FN-Z and
>  		AUTO).
> @@ -41,8 +43,10 @@ KernelVersion:	3.15
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This files controls the status of the touchpad and pointing
>  		stick (if available), valid values are:
> +
>  			* 0 -> OFF
>  			* 1 -> ON
> +
>  Users:		KToshiba
>  
>  What:		/sys/devices/LNXSYSTM:00/LNXSYBUS:00/TOS{1900,620{0,7,8}}:00/available_kbd_modes
> @@ -51,10 +55,12 @@ KernelVersion:	3.16
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file shows the supported keyboard backlight modes
>  		the system supports, which can be:
> +
>  			* 0x1  -> FN-Z
>  			* 0x2  -> AUTO (also called TIMER)
>  			* 0x8  -> ON
>  			* 0x10 -> OFF
> +
>  		Note that not all keyboard types support the listed modes.
>  		See the entry named "available_kbd_modes"
>  Users:		KToshiba
> @@ -65,6 +71,7 @@ KernelVersion:	3.16
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file shows the current keyboard backlight type,
>  		which can be:
> +
>  			* 1 -> Type 1, supporting modes FN-Z and AUTO
>  			* 2 -> Type 2, supporting modes TIMER, ON and OFF
>  Users:		KToshiba
> @@ -75,10 +82,12 @@ KernelVersion:	4.0
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the USB Sleep & Charge charging mode, which
>  		can be:
> +
>  			* 0 -> Disabled		(0x00)
>  			* 1 -> Alternate	(0x09)
>  			* 2 -> Auto		(0x21)
>  			* 3 -> Typical		(0x11)
> +
>  		Note that from kernel 4.1 onwards this file accepts all listed
>  		values, kernel 4.0 only supports the first three.
>  		Note that this feature only works when connected to power, if
> @@ -93,8 +102,10 @@ Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the USB Sleep Functions under battery, and
>  		set the level at which point they will be disabled, accepted
>  		values can be:
> +
>  			* 0	-> Disabled
>  			* 1-100	-> Battery level to disable sleep functions
> +
>  		Currently it prints two values, the first one indicates if the
>  		feature is enabled or disabled, while the second one shows the
>  		current battery level set.
> @@ -107,8 +118,10 @@ Date:		January 23, 2015
>  KernelVersion:	4.0
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the USB Rapid Charge state, which can be:
> +
>  			* 0 -> Disabled
>  			* 1 -> Enabled
> +
>  		Note that toggling this value requires a reboot for changes to
>  		take effect.
>  Users:		KToshiba
> @@ -118,8 +131,10 @@ Date:		January 23, 2015
>  KernelVersion:	4.0
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the Sleep & Music state, which values can be:
> +
>  			* 0 -> Disabled
>  			* 1 -> Enabled
> +
>  		Note that this feature only works when connected to power, if
>  		you want to use it under battery, see the entry named
>  		"sleep_functions_on_battery"
> @@ -138,6 +153,7 @@ KernelVersion:	4.0
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the state of the internal fan, valid
>  		values are:
> +
>  			* 0 -> OFF
>  			* 1 -> ON
>  
> @@ -147,8 +163,10 @@ KernelVersion:	4.0
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the Special Functions (hotkeys) operation
>  		mode, valid values are:
> +
>  			* 0 -> Normal Operation
>  			* 1 -> Special Functions
> +
>  		In the "Normal Operation" mode, the F{1-12} keys are as usual
>  		and the hotkeys are accessed via FN-F{1-12}.
>  		In the "Special Functions" mode, the F{1-12} keys trigger the
> @@ -163,8 +181,10 @@ KernelVersion:	4.0
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls whether the laptop should turn ON whenever
>  		the LID is opened, valid values are:
> +
>  			* 0 -> Disabled
>  			* 1 -> Enabled
> +
>  		Note that toggling this value requires a reboot for changes to
>  		take effect.
>  Users:		KToshiba
> @@ -174,8 +194,10 @@ Date:		February 12, 2015
>  KernelVersion:	4.0
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the USB 3 functionality, valid values are:
> +
>  			* 0 -> Disabled (Acts as a regular USB 2)
>  			* 1 -> Enabled (Full USB 3 functionality)
> +
>  		Note that toggling this value requires a reboot for changes to
>  		take effect.
>  Users:		KToshiba
> @@ -188,10 +210,14 @@ Description:	This file controls the Cooling Method feature.
>  		Reading this file prints two values, the first is the actual cooling method
>  		and the second is the maximum cooling method supported.
>  		When the maximum cooling method is ONE, valid values are:
> +
>  			* 0 -> Maximum Performance
>  			* 1 -> Battery Optimized
> +
>  		When the maximum cooling method is TWO, valid values are:
> +
>  			* 0 -> Maximum Performance
>  			* 1 -> Performance
>  			* 2 -> Battery Optimized
> +
>  Users:		KToshiba
> diff --git a/Documentation/ABI/testing/sysfs-driver-toshiba_haps b/Documentation/ABI/testing/sysfs-driver-toshiba_haps
> index a662370b4dbf..c938690ce10d 100644
> --- a/Documentation/ABI/testing/sysfs-driver-toshiba_haps
> +++ b/Documentation/ABI/testing/sysfs-driver-toshiba_haps
> @@ -4,10 +4,12 @@ KernelVersion:	3.17
>  Contact:	Azael Avalos <coproscefalo@gmail.com>
>  Description:	This file controls the built-in accelerometer protection level,
>  		valid values are:
> +
>  			* 0 -> Disabled
>  			* 1 -> Low
>  			* 2 -> Medium
>  			* 3 -> High
> +
>  		The default potection value is set to 2 (Medium).
>  Users:		KToshiba
>  
> diff --git a/Documentation/ABI/testing/sysfs-driver-wacom b/Documentation/ABI/testing/sysfs-driver-wacom
> index afc48fc163b5..16acaa5712ec 100644
> --- a/Documentation/ABI/testing/sysfs-driver-wacom
> +++ b/Documentation/ABI/testing/sysfs-driver-wacom
> @@ -79,7 +79,9 @@ Description:
>  		When the Wacom Intuos 4 is connected over Bluetooth, the
>  		image has to contain 256 bytes (64x32 px 1 bit colour).
>  		The format is also scrambled, like in the USB mode, and it can
> -		be summarized by converting 76543210 into GECA6420.
> +		be summarized by converting::
> +
> +					    76543210 into GECA6420.
>  					    HGFEDCBA      HFDB7531
>  
>  What:		/sys/bus/hid/devices/<bus>:<vid>:<pid>.<n>/wacom_remote/unpair_remote
> diff --git a/Documentation/ABI/testing/sysfs-firmware-acpi b/Documentation/ABI/testing/sysfs-firmware-acpi
> index 613f42a9d5cd..e4afc2538210 100644
> --- a/Documentation/ABI/testing/sysfs-firmware-acpi
> +++ b/Documentation/ABI/testing/sysfs-firmware-acpi
> @@ -12,11 +12,14 @@ Description:
>  		image: The image bitmap. Currently a 32-bit BMP.
>  		status: 1 if the image is valid, 0 if firmware invalidated it.
>  		type: 0 indicates image is in BMP format.
> +
> +		======== ===================================================
>  		version: The version of the BGRT. Currently 1.
>  		xoffset: The number of pixels between the left of the screen
>  			 and the left edge of the image.
>  		yoffset: The number of pixels between the top of the screen
>  			 and the top edge of the image.
> +		======== ===================================================
>  
>  What:		/sys/firmware/acpi/hotplug/
>  Date:		February 2013
> @@ -33,12 +36,14 @@ Description:
>  		The following setting is available to user space for each
>  		hotplug profile:
>  
> +		======== =======================================================
>  		enabled: If set, the ACPI core will handle notifications of
> -			hotplug events associated with the given class of
> -			devices and will allow those devices to be ejected with
> -			the help of the _EJ0 control method.  Unsetting it
> -			effectively disables hotplug for the correspoinding
> -			class of devices.
> +			 hotplug events associated with the given class of
> +			 devices and will allow those devices to be ejected with
> +			 the help of the _EJ0 control method.  Unsetting it
> +			 effectively disables hotplug for the corresponding
> +			 class of devices.
> +		======== =======================================================
>  
>  		The value of the above attribute is an integer number: 1 (set)
>  		or 0 (unset).  Attempts to write any other values to it will
> @@ -71,86 +76,90 @@ Description:
>  		To figure out where all the SCI's are coming from,
>  		/sys/firmware/acpi/interrupts contains a file listing
>  		every possible source, and the count of how many
> -		times it has triggered.
> -
> -		$ cd /sys/firmware/acpi/interrupts
> -		$ grep . *
> -		error:	     0
> -		ff_gbl_lock:	   0   enable
> -		ff_pmtimer:	  0  invalid
> -		ff_pwr_btn:	  0   enable
> -		ff_rt_clk:	 2  disable
> -		ff_slp_btn:	  0  invalid
> -		gpe00:	     0	invalid
> -		gpe01:	     0	 enable
> -		gpe02:	   108	 enable
> -		gpe03:	     0	invalid
> -		gpe04:	     0	invalid
> -		gpe05:	     0	invalid
> -		gpe06:	     0	 enable
> -		gpe07:	     0	 enable
> -		gpe08:	     0	invalid
> -		gpe09:	     0	invalid
> -		gpe0A:	     0	invalid
> -		gpe0B:	     0	invalid
> -		gpe0C:	     0	invalid
> -		gpe0D:	     0	invalid
> -		gpe0E:	     0	invalid
> -		gpe0F:	     0	invalid
> -		gpe10:	     0	invalid
> -		gpe11:	     0	invalid
> -		gpe12:	     0	invalid
> -		gpe13:	     0	invalid
> -		gpe14:	     0	invalid
> -		gpe15:	     0	invalid
> -		gpe16:	     0	invalid
> -		gpe17:	  1084	 enable
> -		gpe18:	     0	 enable
> -		gpe19:	     0	invalid
> -		gpe1A:	     0	invalid
> -		gpe1B:	     0	invalid
> -		gpe1C:	     0	invalid
> -		gpe1D:	     0	invalid
> -		gpe1E:	     0	invalid
> -		gpe1F:	     0	invalid
> -		gpe_all:    1192
> -		sci:	1194
> -		sci_not:     0	
> -
> -		sci - The number of times the ACPI SCI
> -		has been called and claimed an interrupt.
> -
> -		sci_not - The number of times the ACPI SCI
> -		has been called and NOT claimed an interrupt.
> -
> -		gpe_all - count of SCI caused by GPEs.
> -
> -		gpeXX - count for individual GPE source
> -
> -		ff_gbl_lock - Global Lock
> -
> -		ff_pmtimer - PM Timer
> -
> -		ff_pwr_btn - Power Button
> -
> -		ff_rt_clk - Real Time Clock
> -
> -		ff_slp_btn - Sleep Button
> -
> -		error - an interrupt that can't be accounted for above.
> -
> -		invalid: it's either a GPE or a Fixed Event that
> -			doesn't have an event handler.
> -
> -		disable: the GPE/Fixed Event is valid but disabled.
> -
> -		enable: the GPE/Fixed Event is valid and enabled.
> -
> -		Root has permission to clear any of these counters.  Eg.
> -		# echo 0 > gpe11
> -
> -		All counters can be cleared by clearing the total "sci":
> -		# echo 0 > sci
> +		times it has triggered::
> +
> +		  $ cd /sys/firmware/acpi/interrupts
> +		  $ grep . *
> +		  error:	     0
> +		  ff_gbl_lock:	   0   enable
> +		  ff_pmtimer:	  0  invalid
> +		  ff_pwr_btn:	  0   enable
> +		  ff_rt_clk:	 2  disable
> +		  ff_slp_btn:	  0  invalid
> +		  gpe00:	     0	invalid
> +		  gpe01:	     0	 enable
> +		  gpe02:	   108	 enable
> +		  gpe03:	     0	invalid
> +		  gpe04:	     0	invalid
> +		  gpe05:	     0	invalid
> +		  gpe06:	     0	 enable
> +		  gpe07:	     0	 enable
> +		  gpe08:	     0	invalid
> +		  gpe09:	     0	invalid
> +		  gpe0A:	     0	invalid
> +		  gpe0B:	     0	invalid
> +		  gpe0C:	     0	invalid
> +		  gpe0D:	     0	invalid
> +		  gpe0E:	     0	invalid
> +		  gpe0F:	     0	invalid
> +		  gpe10:	     0	invalid
> +		  gpe11:	     0	invalid
> +		  gpe12:	     0	invalid
> +		  gpe13:	     0	invalid
> +		  gpe14:	     0	invalid
> +		  gpe15:	     0	invalid
> +		  gpe16:	     0	invalid
> +		  gpe17:	  1084	 enable
> +		  gpe18:	     0	 enable
> +		  gpe19:	     0	invalid
> +		  gpe1A:	     0	invalid
> +		  gpe1B:	     0	invalid
> +		  gpe1C:	     0	invalid
> +		  gpe1D:	     0	invalid
> +		  gpe1E:	     0	invalid
> +		  gpe1F:	     0	invalid
> +		  gpe_all:    1192
> +		  sci:	1194
> +		  sci_not:     0
> +
> +		===========  ==================================================
> +		sci	     The number of times the ACPI SCI
> +			     has been called and claimed an interrupt.
> +
> +		sci_not	     The number of times the ACPI SCI
> +			     has been called and NOT claimed an interrupt.
> +
> +		gpe_all	     count of SCI caused by GPEs.
> +
> +		gpeXX	     count for individual GPE source
> +
> +		ff_gbl_lock  Global Lock
> +
> +		ff_pmtimer   PM Timer
> +
> +		ff_pwr_btn   Power Button
> +
> +		ff_rt_clk    Real Time Clock
> +
> +		ff_slp_btn   Sleep Button
> +
> +		error	     an interrupt that can't be accounted for above.
> +
> +		invalid      it's either a GPE or a Fixed Event that
> +			     doesn't have an event handler.
> +
> +		disable	     the GPE/Fixed Event is valid but disabled.
> +
> +		enable       the GPE/Fixed Event is valid and enabled.
> +		===========  ==================================================
> +
> +		Root has permission to clear any of these counters.  Eg.::
> +
> +		  # echo 0 > gpe11
> +
> +		All counters can be cleared by clearing the total "sci"::
> +
> +		  # echo 0 > sci
>  
>  		None of these counters has an effect on the function
>  		of the system, they are simply statistics.
> @@ -165,32 +174,34 @@ Description:
>  
>  		Let's take power button fixed event for example, please kill acpid
>  		and other user space applications so that the machine won't shutdown
> -		when pressing the power button.
> -		# cat ff_pwr_btn
> -		0	enabled
> -		# press the power button for 3 times;
> -		# cat ff_pwr_btn
> -		3	enabled
> -		# echo disable > ff_pwr_btn
> -		# cat ff_pwr_btn
> -		3	disabled
> -		# press the power button for 3 times;
> -		# cat ff_pwr_btn
> -		3	disabled
> -		# echo enable > ff_pwr_btn
> -		# cat ff_pwr_btn
> -		4	enabled
> -		/*
> -		 * this is because the status bit is set even if the enable bit is cleared,
> -		 * and it triggers an ACPI fixed event when the enable bit is set again
> -		 */
> -		# press the power button for 3 times;
> -		# cat ff_pwr_btn
> -		7	enabled
> -		# echo disable > ff_pwr_btn
> -		# press the power button for 3 times;
> -		# echo clear > ff_pwr_btn	/* clear the status bit */
> -		# echo disable > ff_pwr_btn
> -		# cat ff_pwr_btn
> -		7	enabled
> +		when pressing the power button::
> +
> +		  # cat ff_pwr_btn
> +		  0	enabled
> +		  # press the power button for 3 times;
> +		  # cat ff_pwr_btn
> +		  3	enabled
> +		  # echo disable > ff_pwr_btn
> +		  # cat ff_pwr_btn
> +		  3	disabled
> +		  # press the power button for 3 times;
> +		  # cat ff_pwr_btn
> +		  3	disabled
> +		  # echo enable > ff_pwr_btn
> +		  # cat ff_pwr_btn
> +		  4	enabled
> +		  /*
> +		   * this is because the status bit is set even if the enable
> +		   * bit is cleared, and it triggers an ACPI fixed event when
> +		   * the enable bit is set again
> +		   */
> +		  # press the power button for 3 times;
> +		  # cat ff_pwr_btn
> +		  7	enabled
> +		  # echo disable > ff_pwr_btn
> +		  # press the power button for 3 times;
> +		  # echo clear > ff_pwr_btn	/* clear the status bit */
> +		  # echo disable > ff_pwr_btn
> +		  # cat ff_pwr_btn
> +		  7	enabled
>  
> diff --git a/Documentation/ABI/testing/sysfs-firmware-dmi-entries b/Documentation/ABI/testing/sysfs-firmware-dmi-entries
> index 210ad44b95a5..fe0289c87768 100644
> --- a/Documentation/ABI/testing/sysfs-firmware-dmi-entries
> +++ b/Documentation/ABI/testing/sysfs-firmware-dmi-entries
> @@ -33,7 +33,7 @@ Description:
>  		doesn't matter), they will be represented in sysfs as
>  		entries "T-0" through "T-(N-1)":
>  
> -		Example entry directories:
> +		Example entry directories::
>  
>  			/sys/firmware/dmi/entries/17-0
>  			/sys/firmware/dmi/entries/17-1
> @@ -50,61 +50,65 @@ Description:
>  		Each DMI entry in sysfs has the common header values
>  		exported as attributes:
>  
> -		handle	: The 16bit 'handle' that is assigned to this
> +		========  =================================================
> +		handle	  The 16bit 'handle' that is assigned to this
>  			  entry by the firmware.  This handle may be
>  			  referred to by other entries.
> -		length	: The length of the entry, as presented in the
> +		length	  The length of the entry, as presented in the
>  			  entry itself.  Note that this is _not the
>  			  total count of bytes associated with the
> -			  entry_.  This value represents the length of
> +			  entry.  This value represents the length of
>  			  the "formatted" portion of the entry.  This
>  			  "formatted" region is sometimes followed by
>  			  the "unformatted" region composed of nul
>  			  terminated strings, with termination signalled
>  			  by a two nul characters in series.
> -		raw	: The raw bytes of the entry. This includes the
> +		raw	  The raw bytes of the entry. This includes the
>  			  "formatted" portion of the entry, the
>  			  "unformatted" strings portion of the entry,
>  			  and the two terminating nul characters.
> -		type	: The type of the entry.  This value is the same
> +		type	  The type of the entry.  This value is the same
>  			  as found in the directory name.  It indicates
>  			  how the rest of the entry should be interpreted.
> -		instance: The instance ordinal of the entry for the
> +		instance  The instance ordinal of the entry for the
>  			  given type.  This value is the same as found
>  			  in the parent directory name.
> -		position: The ordinal position (zero-based) of the entry
> +		position  The ordinal position (zero-based) of the entry
>  			  within the entirety of the DMI entry table.
> +		========  =================================================
>  
> -		=== Entry Specialization ===
> +		**Entry Specialization**
>  
>  		Some entry types may have other information available in
>  		sysfs.  Not all types are specialized.
>  
> -		--- Type 15 - System Event Log ---
> +		**Type 15 - System Event Log**
>  
>  		This entry allows the firmware to export a log of
>  		events the system has taken.  This information is
>  		typically backed by nvram, but the implementation
>  		details are abstracted by this table.  This entry's data
> -		is exported in the directory:
> +		is exported in the directory::
>  
> -		/sys/firmware/dmi/entries/15-0/system_event_log
> +		  /sys/firmware/dmi/entries/15-0/system_event_log
>  
>  		and has the following attributes (documented in the
>  		SMBIOS / DMI specification under "System Event Log (Type 15)":
>  
> -		area_length
> -		header_start_offset
> -		data_start_offset
> -		access_method
> -		status
> -		change_token
> -		access_method_address
> -		header_format
> -		per_log_type_descriptor_length
> -		type_descriptors_supported_count
> +		- area_length
> +		- header_start_offset
> +		- data_start_offset
> +		- access_method
> +		- status
> +		- change_token
> +		- access_method_address
> +		- header_format
> +		- per_log_type_descriptor_length
> +		- type_descriptors_supported_count
>  
>  		As well, the kernel exports the binary attribute:
>  
> -		raw_event_log	: The raw binary bits of the event log
> +		=============	  ====================================
> +		raw_event_log	  The raw binary bits of the event log
>  				  as described by the DMI entry.
> +		=============	  ====================================
> diff --git a/Documentation/ABI/testing/sysfs-firmware-gsmi b/Documentation/ABI/testing/sysfs-firmware-gsmi
> index 0faa0aaf4b6a..7a558354c1ee 100644
> --- a/Documentation/ABI/testing/sysfs-firmware-gsmi
> +++ b/Documentation/ABI/testing/sysfs-firmware-gsmi
> @@ -20,7 +20,7 @@ Description:
>  
>  			This directory has the same layout (and
>  			underlying implementation as /sys/firmware/efi/vars.
> -			See Documentation/ABI/*/sysfs-firmware-efi-vars
> +			See `Documentation/ABI/*/sysfs-firmware-efi-vars`
>  			for more information on how to interact with
>  			this structure.
>  
> diff --git a/Documentation/ABI/testing/sysfs-firmware-memmap b/Documentation/ABI/testing/sysfs-firmware-memmap
> index eca0d65087dc..1f6f4d3a32c0 100644
> --- a/Documentation/ABI/testing/sysfs-firmware-memmap
> +++ b/Documentation/ABI/testing/sysfs-firmware-memmap
> @@ -20,7 +20,7 @@ Description:
>  		the raw memory map to userspace.
>  
>  		The structure is as follows: Under /sys/firmware/memmap there
> -		are subdirectories with the number of the entry as their name:
> +		are subdirectories with the number of the entry as their name::
>  
>  			/sys/firmware/memmap/0
>  			/sys/firmware/memmap/1
> @@ -34,14 +34,16 @@ Description:
>  
>  		Each directory contains three files:
>  
> -		start	: The start address (as hexadecimal number with the
> +		========  =====================================================
> +		start	  The start address (as hexadecimal number with the
>  			  '0x' prefix).
> -		end	: The end address, inclusive (regardless whether the
> +		end	  The end address, inclusive (regardless whether the
>  			  firmware provides inclusive or exclusive ranges).
> -		type	: Type of the entry as string. See below for a list of
> +		type	  Type of the entry as string. See below for a list of
>  			  valid types.
> +		========  =====================================================
>  
> -		So, for example:
> +		So, for example::
>  
>  			/sys/firmware/memmap/0/start
>  			/sys/firmware/memmap/0/end
> @@ -57,9 +59,8 @@ Description:
>  		  - reserved
>  
>  		Following shell snippet can be used to display that memory
> -		map in a human-readable format:
> +		map in a human-readable format::
>  
> -		-------------------- 8< ----------------------------------------
>  		  #!/bin/bash
>  		  cd /sys/firmware/memmap
>  		  for dir in * ; do
> @@ -68,4 +69,3 @@ Description:
>  		      type=$(cat $dir/type)
>  		      printf "%016x-%016x (%s)\n" $start $[ $end +1] "$type"
>  		  done
> -		-------------------- >8 ----------------------------------------
> diff --git a/Documentation/ABI/testing/sysfs-fs-ext4 b/Documentation/ABI/testing/sysfs-fs-ext4
> index 78604db56279..99e3d92f8299 100644
> --- a/Documentation/ABI/testing/sysfs-fs-ext4
> +++ b/Documentation/ABI/testing/sysfs-fs-ext4
> @@ -45,8 +45,8 @@ Description:
>  		parameter will have their blocks allocated out of a
>  		block group specific preallocation pool, so that small
>  		files are packed closely together.  Each large file
> -		 will have its blocks allocated out of its own unique
> -		 preallocation pool.
> +		will have its blocks allocated out of its own unique
> +		preallocation pool.
>  
>  What:		/sys/fs/ext4/<disk>/inode_readahead_blks
>  Date:		March 2008
> diff --git a/Documentation/ABI/testing/sysfs-hypervisor-xen b/Documentation/ABI/testing/sysfs-hypervisor-xen
> index 53b7b2ea7515..4dbe0c49b393 100644
> --- a/Documentation/ABI/testing/sysfs-hypervisor-xen
> +++ b/Documentation/ABI/testing/sysfs-hypervisor-xen
> @@ -15,14 +15,17 @@ KernelVersion:	4.3
>  Contact:	Boris Ostrovsky <boris.ostrovsky@oracle.com>
>  Description:	If running under Xen:
>  		Describes mode that Xen's performance-monitoring unit (PMU)
> -		uses. Accepted values are
> -			"off"  -- PMU is disabled
> -			"self" -- The guest can profile itself
> -			"hv"   -- The guest can profile itself and, if it is
> +		uses. Accepted values are:
> +
> +			======    ============================================
> +			"off"     PMU is disabled
> +			"self"    The guest can profile itself
> +			"hv"      The guest can profile itself and, if it is
>  				  privileged (e.g. dom0), the hypervisor
> -			"all" --  The guest can profile itself, the hypervisor
> +			"all"     The guest can profile itself, the hypervisor
>  				  and all other guests. Only available to
>  				  privileged guests.
> +			======    ============================================
>  
>  What:           /sys/hypervisor/pmu/pmu_features
>  Date:           August 2015
> diff --git a/Documentation/ABI/testing/sysfs-kernel-boot_params b/Documentation/ABI/testing/sysfs-kernel-boot_params
> index eca38ce2852d..7f9bda453c4d 100644
> --- a/Documentation/ABI/testing/sysfs-kernel-boot_params
> +++ b/Documentation/ABI/testing/sysfs-kernel-boot_params
> @@ -23,16 +23,17 @@ Description:	The /sys/kernel/boot_params directory contains two
>  		representation of setup_data type. "data" file is the binary
>  		representation of setup_data payload.
>  
> -		The whole boot_params directory structure is like below:
> -		/sys/kernel/boot_params
> -		|__ data
> -		|__ setup_data
> -		|   |__ 0
> -		|   |   |__ data
> -		|   |   |__ type
> -		|   |__ 1
> -		|       |__ data
> -		|       |__ type
> -		|__ version
> +		The whole boot_params directory structure is like below::
> +
> +		  /sys/kernel/boot_params
> +		  |__ data
> +		  |__ setup_data
> +		  |   |__ 0
> +		  |   |   |__ data
> +		  |   |   |__ type
> +		  |   |__ 1
> +		  |       |__ data
> +		  |       |__ type
> +		  |__ version
>  
>  Users:		Kexec
> diff --git a/Documentation/ABI/testing/sysfs-kernel-mm-hugepages b/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
> index fdaa2162fae1..294387e2c7fb 100644
> --- a/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
> +++ b/Documentation/ABI/testing/sysfs-kernel-mm-hugepages
> @@ -7,9 +7,11 @@ Description:
>  		of the hugepages supported by the kernel/CPU combination.
>  
>  		Under these directories are a number of files:
> -			nr_hugepages
> -			nr_overcommit_hugepages
> -			free_hugepages
> -			surplus_hugepages
> -			resv_hugepages
> +
> +			- nr_hugepages
> +			- nr_overcommit_hugepages
> +			- free_hugepages
> +			- surplus_hugepages
> +			- resv_hugepages
> +
>  		See Documentation/admin-guide/mm/hugetlbpage.rst for details.
> diff --git a/Documentation/ABI/testing/sysfs-platform-asus-laptop b/Documentation/ABI/testing/sysfs-platform-asus-laptop
> index 8b0e8205a6a2..c78d358dbdbe 100644
> --- a/Documentation/ABI/testing/sysfs-platform-asus-laptop
> +++ b/Documentation/ABI/testing/sysfs-platform-asus-laptop
> @@ -4,13 +4,16 @@ KernelVersion:	2.6.20
>  Contact:	"Corentin Chary" <corentincj@iksaif.net>
>  Description:
>  		This file allows display switching. The value
> -		is composed by 4 bits and defined as follow:
> -		4321
> -		|||`- LCD
> -		||`-- CRT
> -		|`--- TV
> -		`---- DVI
> -		Ex: - 0 (0000b) means no display
> +		is composed by 4 bits and defined as follow::
> +
> +		  4321
> +		  |||`- LCD
> +		  ||`-- CRT
> +		  |`--- TV
> +		  `---- DVI
> +
> +		Ex:
> +		    - 0 (0000b) means no display
>  		    - 3 (0011b) CRT+LCD.
>  
>  What:		/sys/devices/platform/asus_laptop/gps
> @@ -28,8 +31,10 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
>  Description:
>  		Some models like the W1N have a LED display that can be
>  		used to display several items of information.
> -		To control the LED display, use the following :
> +		To control the LED display, use the following::
> +
>  		    echo 0x0T000DDD > /sys/devices/platform/asus_laptop/
> +
>  		where T control the 3 letters display, and DDD the 3 digits display.
>  		The DDD table can be found in Documentation/admin-guide/laptops/asus-laptop.rst
>  
> diff --git a/Documentation/ABI/testing/sysfs-platform-asus-wmi b/Documentation/ABI/testing/sysfs-platform-asus-wmi
> index 1efac0ddb417..04885738cf15 100644
> --- a/Documentation/ABI/testing/sysfs-platform-asus-wmi
> +++ b/Documentation/ABI/testing/sysfs-platform-asus-wmi
> @@ -5,6 +5,7 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
>  Description:
>  		Change CPU clock configuration (write-only).
>  		There are three available clock configuration:
> +
>  		    * 0 -> Super Performance Mode
>  		    * 1 -> High Performance Mode
>  		    * 2 -> Power Saving Mode
> diff --git a/Documentation/ABI/testing/sysfs-platform-at91 b/Documentation/ABI/testing/sysfs-platform-at91
> index 4cc6a865ae66..b146be74b8e0 100644
> --- a/Documentation/ABI/testing/sysfs-platform-at91
> +++ b/Documentation/ABI/testing/sysfs-platform-at91
> @@ -18,8 +18,10 @@ Description:
>  		In order to use an extended can_id add the
>  		CAN_EFF_FLAG (0x80000000U) to the can_id. Example:
>  
> -		- standard id 0x7ff:
> -		echo 0x7ff      > /sys/class/net/can0/mb0_id
> +		- standard id 0x7ff::
>  
> -		- extended id 0x1fffffff:
> -		echo 0x9fffffff > /sys/class/net/can0/mb0_id
> +		    echo 0x7ff      > /sys/class/net/can0/mb0_id
> +
> +		- extended id 0x1fffffff::
> +
> +		    echo 0x9fffffff > /sys/class/net/can0/mb0_id
> diff --git a/Documentation/ABI/testing/sysfs-platform-eeepc-laptop b/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
> index 5b026c69587a..70dbe0733cf6 100644
> --- a/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
> +++ b/Documentation/ABI/testing/sysfs-platform-eeepc-laptop
> @@ -4,9 +4,11 @@ KernelVersion:	2.6.26
>  Contact:	"Corentin Chary" <corentincj@iksaif.net>
>  Description:
>  		This file allows display switching.
> +
>  		- 1 = LCD
>  		- 2 = CRT
>  		- 3 = LCD+CRT
> +
>  		If you run X11, you should use xrandr instead.
>  
>  What:		/sys/devices/platform/eeepc/camera
> @@ -30,16 +32,20 @@ Contact:	"Corentin Chary" <corentincj@iksaif.net>
>  Description:
>  		Change CPU clock configuration.
>  		On the Eee PC 1000H there are three available clock configuration:
> +
>  		    * 0 -> Super Performance Mode
>  		    * 1 -> High Performance Mode
>  		    * 2 -> Power Saving Mode
> +
>  		On Eee PC 701 there is only 2 available clock configurations.
>  		Available configuration are listed in available_cpufv file.
>  		Reading this file will show the raw hexadecimal value which
> -		is defined as follow:
> -		| 8 bit | 8 bit |
> -		    |       `---- Current mode
> -		    `------------ Availables modes
> +		is defined as follow::
> +
> +		  | 8 bit | 8 bit |
> +		      |       `---- Current mode
> +		      `------------ Availables modes
> +
>  		For example, 0x301 means: mode 1 selected, 3 available modes.
>  
>  What:		/sys/devices/platform/eeepc/available_cpufv
> diff --git a/Documentation/ABI/testing/sysfs-platform-ideapad-laptop b/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
> index 1b31be3f996a..fd2ac02bc5bd 100644
> --- a/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
> +++ b/Documentation/ABI/testing/sysfs-platform-ideapad-laptop
> @@ -12,6 +12,7 @@ Contact:	"Maxim Mikityanskiy <maxtram95@gmail.com>"
>  Description:
>  		Change fan mode
>  		There are four available modes:
> +
>  			* 0 -> Super Silent Mode
>  			* 1 -> Standard Mode
>  			* 2 -> Dust Cleaning
> @@ -32,9 +33,11 @@ KernelVersion:	4.18
>  Contact:	"Oleg Keri <ezhi99@gmail.com>"
>  Description:
>  		Control fn-lock mode.
> +
>  			* 1 -> Switched On
>  			* 0 -> Switched Off
>  
> -		For example:
> -		# echo "0" >	\
> -		/sys/bus/pci/devices/0000:00:1f.0/PNP0C09:00/VPC2004:00/fn_lock
> +		For example::
> +
> +		  # echo "0" >	\
> +		  /sys/bus/pci/devices/0000:00:1f.0/PNP0C09:00/VPC2004:00/fn_lock
> diff --git a/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt b/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
> index 8af65059d519..e19144fd5d86 100644
> --- a/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
> +++ b/Documentation/ABI/testing/sysfs-platform-intel-wmi-thunderbolt
> @@ -7,5 +7,6 @@ Description:
>  		Thunderbolt controllers to turn on or off when no
>  		devices are connected (write-only)
>  		There are two available states:
> +
>  		    * 0 -> Force power disabled
>  		    * 1 -> Force power enabled
> diff --git a/Documentation/ABI/testing/sysfs-platform-sst-atom b/Documentation/ABI/testing/sysfs-platform-sst-atom
> index 0d07c0395660..d5f6e21f0e42 100644
> --- a/Documentation/ABI/testing/sysfs-platform-sst-atom
> +++ b/Documentation/ABI/testing/sysfs-platform-sst-atom
> @@ -5,13 +5,22 @@ Contact:	"Sebastien Guiriec" <sebastien.guiriec@intel.com>
>  Description:
>  		LPE Firmware version for SST driver on all atom
>  		plaforms (BYT/CHT/Merrifield/BSW).
> -		If the FW has never been loaded it will display:
> +		If the FW has never been loaded it will display::
> +
>  			"FW not yet loaded"
> -		If FW has been loaded it will display:
> +
> +		If FW has been loaded it will display::
> +
>  			"v01.aa.bb.cc"
> +
>  		aa: Major version is reflecting SoC version:
> +
> +			=== =============
>  			0d: BYT FW
>  			0b: BSW FW
>  			07: Merrifield FW
> +			=== =============
> +
>  		bb: Minor version
> +
>  		cc: Build version
> diff --git a/Documentation/ABI/testing/sysfs-platform-usbip-vudc b/Documentation/ABI/testing/sysfs-platform-usbip-vudc
> index 81fcfb454913..53622d3ba27c 100644
> --- a/Documentation/ABI/testing/sysfs-platform-usbip-vudc
> +++ b/Documentation/ABI/testing/sysfs-platform-usbip-vudc
> @@ -16,10 +16,13 @@ Contact:	Krzysztof Opasiak <k.opasiak@samsung.com>
>  Description:
>  		Current status of the device.
>  		Allowed values:
> -		1 - Device is available and can be exported
> -		2 - Device is currently exported
> -		3 - Fatal error occurred during communication
> -		  with peer
> +
> +		==  ==========================================
> +		1   Device is available and can be exported
> +		2   Device is currently exported
> +		3   Fatal error occurred during communication
> +		    with peer
> +		==  ==========================================
>  
>  What:		/sys/devices/platform/usbip-vudc.%d/usbip_sockfd
>  Date:		April 2016
> diff --git a/Documentation/ABI/testing/sysfs-ptp b/Documentation/ABI/testing/sysfs-ptp
> index a17f817a9309..2363ad810ddb 100644
> --- a/Documentation/ABI/testing/sysfs-ptp
> +++ b/Documentation/ABI/testing/sysfs-ptp
> @@ -69,7 +69,7 @@ Description:
>  		pin offered by the PTP hardware clock. The file name
>  		is the hardware dependent pin name. Reading from this
>  		file produces two numbers, the assigned function (see
> -		the PTP_PF_ enumeration values in linux/ptp_clock.h)
> +		the `PTP_PF_` enumeration values in linux/ptp_clock.h)
>  		and the channel number. The function and channel
>  		assignment may be changed by two writing numbers into
>  		the file.
> 


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 09:43:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 09:43:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15558.38545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQwJ-0005P5-Mq; Fri, 30 Oct 2020 09:43:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15558.38545; Fri, 30 Oct 2020 09:43:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQwJ-0005Oy-Hp; Fri, 30 Oct 2020 09:43:19 +0000
Received: by outflank-mailman (input) for mailman id 15558;
 Fri, 30 Oct 2020 09:43:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYQwI-0005OM-4y
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 09:43:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f4f772cd-20fb-47c9-9b43-b59fa079f334;
 Fri, 30 Oct 2020 09:43:10 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYQw9-0004Yy-I9; Fri, 30 Oct 2020 09:43:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYQw9-0006VF-4Y; Fri, 30 Oct 2020 09:43:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYQw9-00022x-48; Fri, 30 Oct 2020 09:43:09 +0000
X-Inumbo-ID: f4f772cd-20fb-47c9-9b43-b59fa079f334
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BEp5Z1U+FvFP2fQwQKDku5VJ64fxWAkC5rYlbIKuHUA=; b=vzJp9wQzCWOW6I5bhunMSGoM0n
	OBIMo1ykfBiOhrOjG+ZxwBPKIjPrOiVOvaAMwfwhD04uvWugdmFTIEGrGN8VMXfegd+eFjXPf74Op
	BE6TXz/rjIXNSZKhuPiESwWusiF6Idjp1e7RmOpbskf4sd8/17w6K4p0HkjNPQZdkreE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156291-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156291: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=16a20963b3209788f2c0d3a3eebb7d92f03f5883
X-Osstest-Versions-That:
    xen=964781c6f162893677c50a779b7d562a299727ba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 09:43:09 +0000

flight 156291 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156291/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156254
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156254
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156254
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156254
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156254
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156254
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156254
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156254
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156254
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156254
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156254
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  16a20963b3209788f2c0d3a3eebb7d92f03f5883
baseline version:
 xen                  964781c6f162893677c50a779b7d562a299727ba

Last test of basis   156254  2020-10-27 06:38:30 Z    3 days
Testing same since   156268  2020-10-27 22:07:28 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   964781c6f1..16a20963b3  16a20963b3209788f2c0d3a3eebb7d92f03f5883 -> master


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 09:45:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 09:45:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15565.38565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQys-0005aA-6d; Fri, 30 Oct 2020 09:45:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15565.38565; Fri, 30 Oct 2020 09:45:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYQys-0005a3-1k; Fri, 30 Oct 2020 09:45:58 +0000
Received: by outflank-mailman (input) for mailman id 15565;
 Fri, 30 Oct 2020 09:45:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2dz=EF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kYQyq-0005Zy-At
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 09:45:56 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.48]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79a78a1f-639c-4cae-a3dd-2034078f0f2b;
 Fri, 30 Oct 2020 09:45:53 +0000 (UTC)
Received: from AM0PR01CA0121.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:168::26) by AM6PR08MB5173.eurprd08.prod.outlook.com
 (2603:10a6:20b:e5::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 30 Oct
 2020 09:45:51 +0000
Received: from VE1EUR03FT052.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:168:cafe::38) by AM0PR01CA0121.outlook.office365.com
 (2603:10a6:208:168::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 30 Oct 2020 09:45:51 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT052.mail.protection.outlook.com (10.152.19.173) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 09:45:51 +0000
Received: ("Tessian outbound c579d876a324:v64");
 Fri, 30 Oct 2020 09:45:50 +0000
Received: from 6a88fe7a7c9c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 418A6E8F-F37D-4EAC-ACFD-BA7A6A16FC84.1; 
 Fri, 30 Oct 2020 09:45:45 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6a88fe7a7c9c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 30 Oct 2020 09:45:45 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com (2603:10a6:20b:4e::31)
 by AS8PR08MB6056.eurprd08.prod.outlook.com (2603:10a6:20b:299::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 30 Oct
 2020 09:45:44 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a]) by AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a%4]) with mapi id 15.20.3499.027; Fri, 30 Oct 2020
 09:45:44 +0000
X-Inumbo-ID: 79a78a1f-639c-4cae-a3dd-2034078f0f2b
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8a93da2af9c724ba
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nK2BnITnKhWaFyE8dumR2qt+EmGj0JMEm7w8HQlRSMM497ETeh6p07Fs93Mpo1r5gRlws3Vd78cz1HDs6qnQyJKtKRi3HUVcRH9kXfl3oIp1PGvmwZHYC6AuXOBqKCl0Cwy2Zl8tSlxA3dcK9TOkKeHAhVNd5Lv4vj7KlwwnveFBqEyUBSpX65IGc5+cCfkojAFyW6fUdFc3qb2IqwieqWILcLKusq3ip4np8pnq4N/GZpRwzqxWC1zQPhmI3TxKKsa/xa1m1pZlxhky6ZQlHUNXdDEksqOnt4kxiqRxK3vg6vp/xlfDs+KLDQVlZQPCSjqwk+y3tASxOKVSk8uiEA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B+XAE+Tgudorm68WDbKIYoFOWYX6Cb5xc02saLZogKQ=;
 b=Wey6TZjhFUuxK55BBcDA8/ntkZUlCL3Iwt7lFRuYfYceODWgjqa7PGrehpr1ITFKo92Y+nu3ivnJJk6UaFQvWeA8FKYBTwwDQTuRWmAyftPC9tpp0qZNu/HWtTroQbxP2AyeUQbhQnbqU4HAlv+WbHsc76OgUpAfTOeZ2ZUBvx8p0Vdgf9NdL7aqtrXOSzsaIqB+cAFJNWJ1yekOOKuFyFBVVs/VatTnIxDOtsdExSS2jJlkQOUvzSDtUYuUpgcglN+xl2S1qWrQPbL27R9q5TM1HuI+QzwrWoQk9vrlYD0924oEf+HC4Stnm/lNBIlU34ZTCQn0yuS/EdjLXtVOXQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=B+XAE+Tgudorm68WDbKIYoFOWYX6Cb5xc02saLZogKQ=;
 b=KBmMSp/V45G243AjurihWjDvOWGUym2M9ItnF9LKXefM+DxI9IP38sbTnwt/NPkV9vmjh/6Bxp5peCEejKDjJA7Mgf3re0fgSujX0nkEJeli70wfpbYABgGS511wv9dYQrag9vpCu21EyfZG6Ns2RIpmkTy77g8jf/QJJXvs2A4=
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com (2603:10a6:20b:4e::31)
 by AS8PR08MB6056.eurprd08.prod.outlook.com (2603:10a6:20b:299::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 30 Oct
 2020 09:45:44 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a]) by AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a%4]) with mapi id 15.20.3499.027; Fri, 30 Oct 2020
 09:45:44 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamgw0AAgAEzsQCAAWIugIABA+CAgADBXICAABfBAIAAGI6AgAAOSACABG94AIADrVCAgAFBfQCAAGM1gIAA0RiAgAAJ+ACAAAarAA==
Date: Fri, 30 Oct 2020 09:45:44 +0000
Message-ID: <226DA6DB-D03C-41A7-A68C-53000DFA70F6@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <bc697327-2750-9a78-042d-d45690d27928@xen.org>
 <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
 <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
 <1BE06E0F-26CF-453A-BB06-808CC0F3E09B@arm.com>
 <aae5892a-2532-04f8-02af-84c4d4c4f3fd@xen.org>
In-Reply-To: <aae5892a-2532-04f8-02af-84c4d4c4f3fd@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 707e313b-0cd1-483b-df05-08d87cb89674
x-ms-traffictypediagnostic: AS8PR08MB6056:|AM6PR08MB5173:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB51737DB2F01FDD46F6B32254FC150@AM6PR08MB5173.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 J+UW7QAWr5Ecc1XSNqeSEOBun6t3gmOUmBqMDit0NEdIACqq24ktoZwL3uz+aXLW6MiqSiDlNNPVCxGvelhPduLUsalvwyFtbAyuknSnaR/5o5pahUM1K+caCNySuKtNc9x4tvhUBY/Elf8qK3ZMc+gs90TC0gaeP3X9QxXnig0g5LrUg/NufVz69vtkFOnd1e53P0KLlJ0JtCxdnQhYAjBxHa+WM0P4zTUkzfQwBi/Srwi+0RbCZxJvA8vWmnL+IXrAAK/p9SvNkra7Nwii6QQJtp0mWV64fbMkKqWUaNWUb+pdP8uOV1ycQr182VYMcO9AK/z8p/FoOn8ouH7ryg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3496.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39850400004)(376002)(396003)(136003)(346002)(366004)(53546011)(6506007)(6916009)(71200400001)(36756003)(26005)(54906003)(186003)(8936002)(316002)(55236004)(2616005)(64756008)(66556008)(66946007)(66476007)(66446008)(76116006)(6486002)(91956017)(86362001)(6512007)(2906002)(4326008)(5660300002)(8676002)(478600001)(33656002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 a8FY6RjSgPdEvD2pwTogH9uF//e8S44/vmlIu9UHsop6cd1GbwUgUfnhCziW+Uv6QT6U56S/yh6Vd82KaOVrllnwBzyUDxQnSx5MhRpnkOzhAfQzj5e+TaURrJ1ejgOTCDuBHZ7HErSUs5EwpxPgv0E/iqk5ICqGKg5bDFIdcbsu+uqDJB8Odh1/pKE76BhpLiWUARt9GhvtmKXMby0HElMBOouSCZr0oDwnbaMl8Nhq/kU9omEgn8i5bpv5s7LTdNum4fjUHO2cfw0MELDp/8Io3vq0QjdW9cKIkIX/GXCGKE6dzAPnwcDe2XfTZs2BskHSOmuJolDOffQHHNPxhbdOfgyqp+u3vlN4DYrU2lQRNDOIqYB38nZBEZ6VPJKMIjoVXMteMpNA2vGjrPE323dRBJmGkFd161B3ArEzWiVvogC5mg3A8JZj44pgWZvDfS/xIer1J5i9DFkmIqJW4PHnfiNu0Q9X3LYYv88bwT8VtCesOGZjR0o6poiixeTRsX7ynRNir4fjF32v7N4QtCmXFVW2uK2/j4YHz5y14/irl1WaGGeczRKD6PV0lpYtg71MXgdB/oS/xYyFu1i8Isaq9SVoKg6VQUlDAVVqXDW+96Xc7ljwShAQ9BcOEV2b2VkYJrAaRl9ysnEk3RkyuQ==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <360B5411BA1936419D1AF71F622A44CD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6056
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8db2b812-602b-4b4d-97c2-08d87cb8923f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	uvVqBhikS7/xgrAymhYppctkVqUf3aGQQTTg81dkaOlhsDCX/AqMqFiaXy7xm/vs5Izz0L4G2Ptpwf7qVjAzcrJWX4hDr36Iww5anOUD4RlXbDTudyAMkGLUW5ohhpYYqDIpvauy4adroHjWIJSxWi8Qv+LPtE4KtkTuefc2S8c53wRkCH6pfcGvjBG+lTHCSxhEesYMLtHE3XCIdoFTehzLSjvIbjxVn59wBRROkZ2/J9cWeCY0O8jnDbJtgdLSqC9YYJs1Fv53EqLbf3A1GtsWdy2V+sYReCDhoDtJniHDz6VbR2k/5+VrpI2wqgf/cjVIpbwpF1WHUG3boEHpQZl8XlCWawOmf/eW3WNunRzJ2P6HRZCEESUDMvz8GldHaMvu8rHkJNIkM2UDFRI10g==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(376002)(346002)(39860400002)(136003)(46966005)(5660300002)(81166007)(478600001)(86362001)(70206006)(8936002)(107886003)(36906005)(2906002)(356005)(36756003)(6506007)(55236004)(6512007)(33656002)(2616005)(82740400003)(53546011)(8676002)(336012)(47076004)(316002)(26005)(70586007)(6486002)(82310400003)(6862004)(186003)(54906003)(4326008);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 09:45:51.1904
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 707e313b-0cd1-483b-df05-08d87cb89674
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT052.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5173

Hello Julien,

> On 30 Oct 2020, at 9:21 am, Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 30/10/2020 08:46, Rahul Singh wrote:
>> Ok Yes when I ported the driver I port the command queue operation from the previous commit where atomic operations is not used and rest all the code is from the latest code. I will again make sure that any bug that is fixed in Linux should be fixed in XEN also.
>
> I would like to seek some clarifications on the code because there seem to be conflicting information provided in this thread.
>
> The patch (the baseline commit is provided) and the discussion with Bertrand suggests that you took a snapshot of the code last year and adapted for Xen.
>
> However, here you suggest that you took an hybrid approach where part of the code is based from last year and other part is based from the latest code (I assume v5.9).
>
> So can you please clarify?
>
> Cheers,

The approach I took was to first merge the code from commit 7c288a5b27934281d9ea8b5807bc727268b7001a (Jul 2, 2019), the snapshot taken before atomic operations were introduced for the command queue operations in the SMMUv3 code.

After that I fixed the other code (not related to command queue operations) against the latest code, so that no bug is introduced in Xen because of using the year-old commit.

>
> --
> Julien Grall

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:05:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:05:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15593.38576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRHR-0007TZ-V0; Fri, 30 Oct 2020 10:05:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15593.38576; Fri, 30 Oct 2020 10:05:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRHR-0007TS-Rt; Fri, 30 Oct 2020 10:05:09 +0000
Received: by outflank-mailman (input) for mailman id 15593;
 Fri, 30 Oct 2020 10:05:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYRHQ-0007TN-MR
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:05:08 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2efaa4c0-8a99-4132-8093-c2e3b73fe7c6;
 Fri, 30 Oct 2020 10:05:07 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRHO-00055Q-2O; Fri, 30 Oct 2020 10:05:06 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRHN-0004nw-PK; Fri, 30 Oct 2020 10:05:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kYRHQ-0007TN-MR
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:05:08 +0000
X-Inumbo-ID: 2efaa4c0-8a99-4132-8093-c2e3b73fe7c6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2efaa4c0-8a99-4132-8093-c2e3b73fe7c6;
	Fri, 30 Oct 2020 10:05:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=pw/Gs5Q2MwaWU+ogIPs3aeTqukdsUtygpuCBjVn3MQ0=; b=YVZh5Gl6LdXUeqObSxo1nGNHfq
	FaT5sN5QWc+Sy6ly2nOtyj1AUZm7GI+b2YVP26AxBfmYT5FoHYDk8+W9NsrR8syk521pQD5YosJiz
	x/HjvBzY/hGlw5LJ3S587zy9vr9LNkF5v5XJEz+Y0j+WhKbSrw1dh2jYdfRZgBiy6sZI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRHO-00055Q-2O; Fri, 30 Oct 2020 10:05:06 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRHN-0004nw-PK; Fri, 30 Oct 2020 10:05:05 +0000
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <bc697327-2750-9a78-042d-d45690d27928@xen.org>
 <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
 <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
 <1BE06E0F-26CF-453A-BB06-808CC0F3E09B@arm.com>
 <aae5892a-2532-04f8-02af-84c4d4c4f3fd@xen.org>
 <226DA6DB-D03C-41A7-A68C-53000DFA70F6@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e5ce30c5-e0e0-90c8-962d-c86b65a82ccd@xen.org>
Date: Fri, 30 Oct 2020 10:05:02 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <226DA6DB-D03C-41A7-A68C-53000DFA70F6@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 30/10/2020 09:45, Rahul Singh wrote:
> Hello Julien,
> 
>> On 30 Oct 2020, at 9:21 am, Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> On 30/10/2020 08:46, Rahul Singh wrote:
>>> Ok. Yes, when I ported the driver I took the command queue operations from the earlier commit, where atomic operations are not used, and the rest of the code from the latest code. I will again make sure that any bug fixed in Linux is also fixed in Xen.
>>
>> I would like to seek some clarification on the code because there seems to be conflicting information in this thread.
>>
>> The patch (the baseline commit is provided) and the discussion with Bertrand suggest that you took a snapshot of the code last year and adapted it for Xen.
>>
>> However, here you suggest that you took a hybrid approach where part of the code is based on last year's snapshot and another part is based on the latest code (I assume v5.9).
>>
>> So can you please clarify?
>>
>> Cheers,
> 
> The approach I took was to first merge the code from commit 7c288a5b27934281d9ea8b5807bc727268b7001a (Jul 2, 2019), the snapshot before atomic operations were used for the SMMUv3 command queue operations.
> 
> After that I fixed the other code (not related to command queue operations) based on the latest code, so that no bug is introduced in Xen by using last year's commit.

Ok. That was definitely not clear from the commit message. Please make 
this clearer in the commit message.

Anyway, it means we need to do a full review of the code (rather than a 
light one) because of the hybrid model.

I am still a bit puzzled as to why it would require almost a restart of 
the implementation in order to sync with the latest code. Does this 
imply that you are mostly using the old code?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:09:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:09:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15601.38589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRLz-0007ft-IO; Fri, 30 Oct 2020 10:09:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15601.38589; Fri, 30 Oct 2020 10:09:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRLz-0007fm-F1; Fri, 30 Oct 2020 10:09:51 +0000
Received: by outflank-mailman (input) for mailman id 15601;
 Fri, 30 Oct 2020 10:09:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fAey=EF=kernel.org=mchehab+huawei@srs-us1.protection.inumbo.net>)
 id 1kYRLx-0007fh-U4
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:09:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cc7b6fe-9be9-460c-b18a-20b410598e60;
 Fri, 30 Oct 2020 10:09:48 +0000 (UTC)
Received: from coco.lan (unknown [95.90.213.187])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D913322210;
 Fri, 30 Oct 2020 10:09:31 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=fAey=EF=kernel.org=mchehab+huawei@srs-us1.protection.inumbo.net>)
	id 1kYRLx-0007fh-U4
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:09:49 +0000
X-Inumbo-ID: 2cc7b6fe-9be9-460c-b18a-20b410598e60
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 2cc7b6fe-9be9-460c-b18a-20b410598e60;
	Fri, 30 Oct 2020 10:09:48 +0000 (UTC)
Received: from coco.lan (unknown [95.90.213.187])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id D913322210;
	Fri, 30 Oct 2020 10:09:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604052587;
	bh=nY70Min4E7MUGYz9J7zA0EKOEj/k9pNAzXLi2/4ZMkk=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=UNBtt1Tx2c8WK/B9Q0efliEgvZY5yYooQin4TVgBsznntS8ZFwkniWxmzY7RzT79m
	 oK9xtt1aioR8FC0aS3pTXQ0Xn3bEUZCk6NmIqh/YY+rYfGes9W0TlZNagZzinCUs9H
	 0mgCmN5om/0DWDTaU4Jmu0IJpLLZxtrFVkEu873I=
Date: Fri, 30 Oct 2020 11:09:25 +0100
From: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
To: Fabrice Gasnier <fabrice.gasnier@st.com>
Cc: Linux Doc Mailing List <linux-doc@vger.kernel.org>, "Gautham R. Shenoy"
 <ego@linux.vnet.ibm.com>, "Jason A. Donenfeld" <Jason@zx2c4.com>, Javier
 =?UTF-8?B?R29uesOhbGV6?= <javier@javigon.com>, Jonathan Corbet
 <corbet@lwn.net>, "Martin K. Petersen" <martin.petersen@oracle.com>,
 "Rafael J. Wysocki" <rjw@rjwysocki.net>, Alexander Shishkin
 <alexander.shishkin@linux.intel.com>, Alexandre Belloni
 <alexandre.belloni@bootlin.com>, Alexandre Torgue
 <alexandre.torgue@st.com>, Andrew Donnellan <ajd@linux.ibm.com>, Andy
 Shevchenko <andriy.shevchenko@linux.intel.com>, Baolin Wang
 <baolin.wang7@gmail.com>, Benson Leung <bleung@chromium.org>, Boris
 Ostrovsky <boris.ostrovsky@oracle.com>, Bruno Meneguele
 <bmeneg@redhat.com>, Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy
 <dmurphy@ti.com>, Dan Williams <dan.j.williams@intel.com>, Enric Balletbo i
 Serra <enric.balletbo@collabora.com>, Felipe Balbi <balbi@kernel.org>,
 Frederic Barrat <fbarrat@linux.ibm.com>, Greg Kroah-Hartman
 <gregkh@linuxfoundation.org>, Guenter Roeck <groeck@chromium.org>, Hanjun
 Guo <guohanjun@huawei.com>, Heikki Krogerus
 <heikki.krogerus@linux.intel.com>, Jens Axboe <axboe@kernel.dk>, Johannes
 Thumshirn <johannes.thumshirn@wdc.com>, Jonathan Cameron
 <jic23@kernel.org>, Juergen Gross <jgross@suse.com>, Konstantin Khlebnikov
 <koct9i@gmail.com>, Kranthi Kuntala <kranthi.kuntala@intel.com>, Lakshmi
 Ramasubramanian <nramas@linux.microsoft.com>, Lars-Peter Clausen
 <lars@metafoo.de>, Len Brown <lenb@kernel.org>, Leonid Maksymchuk
 <leonmaxx@gmail.com>, Ludovic Desroches <ludovic.desroches@microchip.com>,
 Mario Limonciello <mario.limonciello@dell.com>, Mark Gross
 <mgross@linux.intel.com>, Maxime Coquelin <mcoquelin.stm32@gmail.com>,
 Michael Ellerman <mpe@ellerman.id.au>, Mika Westerberg
 <mika.westerberg@linux.intel.com>, Mike Kravetz <mike.kravetz@oracle.com>,
 Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain <nayna@linux.ibm.com>, Nicolas
 Ferre <nicolas.ferre@microchip.com>, Niklas Cassel <niklas.cassel@wdc.com>,
 Oded Gabbay <oded.gabbay@gmail.com>, Oleh Kravchenko <oleg@kaa.org.ua>,
 Orson Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>, Pawan Gupta
 <pawan.kumar.gupta@linux.intel.com>, Peter Meerwald-Stadler
 <pmeerw@pmeerw.net>, Peter Rosin <peda@axentia.se>, Petr Mladek
 <pmladek@suse.com>, Philippe Bergheaud <felix@linux.ibm.com>, Richard
 Cochran <richardcochran@gmail.com>, Sebastian Reichel <sre@kernel.org>,
 Sergey Senozhatsky <sergey.senozhatsky@gmail.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Thinh Nguyen <Thinh.Nguyen@synopsys.com>, Thomas
 Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>, Vaibhav Jain
 <vaibhav@linux.ibm.com>, Vineela Tummalapalli
 <vineela.tummalapalli@intel.com>, Vishal Verma <vishal.l.verma@intel.com>,
 <linux-acpi@vger.kernel.org>, <linux-arm-kernel@lists.infradead.org>,
 <linux-iio@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <linux-mm@kvack.org>, <linux-pm@vger.kernel.org>,
 <linux-stm32@st-md-mailman.stormreply.com>, <linux-usb@vger.kernel.org>,
 <linuxppc-dev@lists.ozlabs.org>, <netdev@vger.kernel.org>,
 <xen-devel@lists.xenproject.org>, Jonathan Cameron
 <Jonathan.Cameron@huawei.com>
Subject: Re: [PATCH v2 20/39] docs: ABI: testing: make the files compatible
 with ReST output
Message-ID: <20201030110925.3e09d59e@coco.lan>
In-Reply-To: <5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
	<58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
	<5326488b-4185-9d67-fc09-79b911fbb3b8@st.com>
X-Mailer: Claws Mail 3.17.8 (GTK+ 2.24.32; x86_64-redhat-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Fri, 30 Oct 2020 10:19:12 +0100
Fabrice Gasnier <fabrice.gasnier@st.com> wrote:

> Hi Mauro,
> 
> [...]
> 
> >  
> > +What:		/sys/bus/iio/devices/iio:deviceX/in_count_quadrature_mode_available
> > +KernelVersion:	4.12
> > +Contact:	benjamin.gaignard@st.com
> > +Description:
> > +		Reading returns the list of possible quadrature modes.
> > +
> > +What:		/sys/bus/iio/devices/iio:deviceX/in_count0_quadrature_mode
> > +KernelVersion:	4.12
> > +Contact:	benjamin.gaignard@st.com
> > +Description:
> > +		Configure the device counter quadrature modes:
> > +
> > +		channel_A:
> > +			Encoder A input serves as the count input and B as
> > +			the UP/DOWN direction control input.
> > +
> > +		channel_B:
> > +			Encoder B input serves as the count input and A as
> > +			the UP/DOWN direction control input.
> > +
> > +		quadrature:
> > +			Encoder A and B inputs are mixed to get direction
> > +			and count with a scale of 0.25.
> > +  
> 

Hi Fabrice,

> I just noticed that since Jonathan question in v1.
> 
> Above ABI has been moved in the past as discussed in [1]. You can take a
> look at:
> b299d00 IIO: stm32: Remove quadrature related functions from trigger driver
> 
> Could you please remove the above chunk ?
> 
> With that, for the stm32 part:
> Acked-by: Fabrice Gasnier <fabrice.gasnier@st.com>


Hmm... probably those were re-introduced during a rebase. This
series was originally written about 1.5 years ago.

I'll drop those hunks.

Thanks!
Mauro


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:16:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:16:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15607.38600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRS2-00006S-A5; Fri, 30 Oct 2020 10:16:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15607.38600; Fri, 30 Oct 2020 10:16:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRS2-00006L-6V; Fri, 30 Oct 2020 10:16:06 +0000
Received: by outflank-mailman (input) for mailman id 15607;
 Fri, 30 Oct 2020 10:16:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYRS0-00006G-Df
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:16:04 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5715d58d-7577-48f3-b5d1-72c57d17e4b6;
 Fri, 30 Oct 2020 10:16:02 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRRw-0005JZ-34; Fri, 30 Oct 2020 10:16:00 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRRv-0005Zh-QY; Fri, 30 Oct 2020 10:15:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kYRS0-00006G-Df
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:16:04 +0000
X-Inumbo-ID: 5715d58d-7577-48f3-b5d1-72c57d17e4b6
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 5715d58d-7577-48f3-b5d1-72c57d17e4b6;
	Fri, 30 Oct 2020 10:16:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=MOBA07TmFYDEnYKOgbL2LdhdBhbwVYFi4gsjIRISL3k=; b=tAfUCq4RTpq1sndTPox5scQitC
	TzT2TsWIZSLZGkv5LVOLeHsXCY8mdHRhBiBv/mqzhM5ltL66noA1UjgCT68/I49JvZGktXPsQSNOz
	KUmuJg4vE76NBHPH2wAijDtthVDzQrYcezt+3Ka4kgCdzU4ud50pFjhJBEGOCAX0m86o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRRw-0005JZ-34; Fri, 30 Oct 2020 10:16:00 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRRv-0005Zh-QY; Fri, 30 Oct 2020 10:15:59 +0000
Subject: Re: [PATCH v2 1/8] evtchn: avoid race in get_xen_consumer()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <95627ffb-74e5-25f8-08d9-b7420d50b6dd@xen.org>
Date: Fri, 30 Oct 2020 10:15:57 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <9ecafa4d-db5b-20a2-3a9d-6a6cda91252c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 20/10/2020 15:08, Jan Beulich wrote:
> There's no global lock around the updating of this global piece of data.
> Make use of cmpxchgptr() to avoid two entities racing with their
> updates.
> 
> While touching the functionality, mark xen_consumers[] read-mostly (or
> else the if() condition could use the result of cmpxchgptr(), writing to
> the slot unconditionally).
> 
> The use of cmpxchgptr() here points out (by way of clang warning about
> it) that its original use of const was slightly wrong. Adjust the
> placement, or else undefined behavior of const qualifying a function
> type will result.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
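
The race and the fix described above follow the classic lock-free 
"claim a free slot" pattern: each registrant compare-and-swaps its 
pointer into the first empty entry, so two racing callers can never 
both win the same slot. A minimal user-space sketch, with illustrative 
names and GCC `__atomic` builtins standing in for Xen's cmpxchgptr() 
(this is not the actual Xen code):

```c
#include <stddef.h>

#define NR_SLOTS 8

typedef void (*consumer_fn)(unsigned int port);

/* Global table; marked __read_mostly in the real patch. */
static consumer_fn slots[NR_SLOTS];

/*
 * Return the slot index registered for fn, claiming a free slot if fn
 * is not registered yet; -1 when the table is full.
 */
static int claim_slot(consumer_fn fn)
{
    for (int i = 0; i < NR_SLOTS; i++) {
        consumer_fn expected = NULL;

        if (slots[i] == fn)
            return i;               /* already registered */

        /*
         * Atomically install fn only if the slot is still empty.
         * This mirrors the cmpxchgptr() use: on failure, "expected"
         * is updated to the value another CPU installed first.
         */
        if (__atomic_compare_exchange_n(&slots[i], &expected, fn,
                                        0 /* strong */,
                                        __ATOMIC_SEQ_CST,
                                        __ATOMIC_SEQ_CST))
            return i;               /* we won the race for this slot */

        if (expected == fn)
            return i;               /* another CPU registered fn first */
    }
    return -1;
}
```

Without the compare-and-swap, two CPUs could both observe the same 
empty slot and write to it, with the second silently overwriting the 
first; the CAS makes exactly one writer succeed.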

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:21:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:21:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15612.38613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRX2-0000yq-UP; Fri, 30 Oct 2020 10:21:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15612.38613; Fri, 30 Oct 2020 10:21:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRX2-0000yj-Q5; Fri, 30 Oct 2020 10:21:16 +0000
Received: by outflank-mailman (input) for mailman id 15612;
 Fri, 30 Oct 2020 10:21:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYRX0-0000yd-RQ
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:21:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2b54742-748d-44ab-b478-b1543c195c5f;
 Fri, 30 Oct 2020 10:21:13 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRWx-0005Py-4m; Fri, 30 Oct 2020 10:21:11 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRWw-0005n7-Rt; Fri, 30 Oct 2020 10:21:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kYRX0-0000yd-RQ
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:21:14 +0000
X-Inumbo-ID: a2b54742-748d-44ab-b478-b1543c195c5f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a2b54742-748d-44ab-b478-b1543c195c5f;
	Fri, 30 Oct 2020 10:21:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DpagRUoaWOiZtLNNsCwYXCTH2dY4fG6an5igMXxemW0=; b=lhW4cJ1cvYWJn/8qBC4mSCBJAq
	TQMxKDqB+mZmbOBVb1hD8BciacFx9dj9bunCl/5/ljTDg61p5MiPoB3UY/y7ipcn+FgREUzHYBmw4
	+saTN3EBD7ObFQI03gM8TueVjng/6Y/PqyYcNY8HWMp6HKbC8R65du9YiAOp0OwaOzFI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRWx-0005Py-4m; Fri, 30 Oct 2020 10:21:11 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRWw-0005n7-Rt; Fri, 30 Oct 2020 10:21:10 +0000
Subject: Re: [PATCH v2 2/8] evtchn: replace FIFO-specific header by generic
 private one
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <3fea358e-d6d1-21d4-2d83-d9bd457ba3b5@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c330d6e3-04da-9bf2-5634-b6b2961c18db@xen.org>
Date: Fri, 30 Oct 2020 10:21:09 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <3fea358e-d6d1-21d4-2d83-d9bd457ba3b5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 20/10/2020 15:08, Jan Beulich wrote:
> Having a FIFO specific header is not (or at least no longer) warranted
> with just three function declarations left there. Introduce a private
> header instead, moving there some further items from xen/event.h.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

> ---
> v2: New.
> ---
> TBD: If - considering the layering violation that's there anyway - we
>       allowed PV shim code to make use of this header, a few more items
>       could be moved out of "public sight".

Are you referring to the PV shim calling functions such as evtchn_bind_vcpu()?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:21:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15591.38625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRXB-00011i-7F; Fri, 30 Oct 2020 10:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15591.38625; Fri, 30 Oct 2020 10:21:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRXB-00011b-30; Fri, 30 Oct 2020 10:21:25 +0000
Received: by outflank-mailman (input) for mailman id 15591;
 Fri, 30 Oct 2020 10:04:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dn1q=EF=linaro.org=srinivas.kandagatla@srs-us1.protection.inumbo.net>)
 id 1kYRGc-0007SB-H0
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:04:18 +0000
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67b18d79-1c8a-4e79-9e70-07f2f4091bd2;
 Fri, 30 Oct 2020 10:04:17 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id t9so5737967wrq.11
 for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 03:04:17 -0700 (PDT)
Received: from [192.168.86.34]
 (cpc86377-aztw32-2-0-cust226.18-1.cable.virginm.net. [92.233.226.227])
 by smtp.googlemail.com with ESMTPSA id h8sm7699531wro.14.2020.10.30.03.04.14
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 30 Oct 2020 03:04:15 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Dn1q=EF=linaro.org=srinivas.kandagatla@srs-us1.protection.inumbo.net>)
	id 1kYRGc-0007SB-H0
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:04:18 +0000
X-Inumbo-ID: 67b18d79-1c8a-4e79-9e70-07f2f4091bd2
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 67b18d79-1c8a-4e79-9e70-07f2f4091bd2;
	Fri, 30 Oct 2020 10:04:17 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id t9so5737967wrq.11
        for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 03:04:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=gQH8tfnS5gABqR+rI2vYNKDqWdjjH0cmP3e9KTBgftU=;
        b=B9rJTdT06fBjkoaC4E33GsRL6XklEaCtgE6RSNGKr1R6NIN6tshFsvlrm73+qDotTs
         TgtMR2VLSrECST5bfnSTgKL7Wgi9dLvx0b+wOp98ZwTdU4Ui8lhbwxRwAvAFaXlLlBUv
         kIF79xh7rAnLeAmFUb7OH8iS2qmajJB6f90j1t67g5Zqi8ZvFcvZqslE0Nw79PSToAvD
         N9OdQRnS+ec6wQ18zcvo4xYOB+MrwDVlSeHj5ODLObBC4OqIgDsKYG4VpZ7Zwf1s+0XG
         E8uv8pTzD1YjF2GLDWWylcwx+GDZ++SmMtTJoKDLvlekOLEjOvRKbLbn1NSmNnF0g7r5
         ocZw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=gQH8tfnS5gABqR+rI2vYNKDqWdjjH0cmP3e9KTBgftU=;
        b=oUhSup3NYXi5nvqmRLfl9GEzUZgcDAzfK5FPd3YumNOQkqdxMquh0lrycpSBPLkMmH
         uUJ0RRe+AyZGi0zdcOufkbGBhqNg/6HeE+5Evk9OSZRUjw809LsxihdZJTXREcvzM9LO
         DvSu1j2HK6RSCj3QIwZWjop8zHNZX1ERmBHPWqX//3dvlGRqUPM0VGLgaS1mZIZJOZB7
         QNDeEo1lFqU2iFKg8lkvy8FhUjaXa6dhinf/XUjoHfpDvt2kUpyp6BMkHB+FOV/swxWr
         2yWXQAFL1Wr0qENlnIxl08Qa9n2+WlzN2/QYRISMShRziwAlLIA49kiIcaLP/WiesvgS
         1APg==
X-Gm-Message-State: AOAM533TVwWRhnAIk2WasG0V41pLxyhfUriCC7VlX+IjbQkg+2rnUXVV
	U8g6fI2q2AVulx5adAnIuripTUMyKKQIPg==
X-Google-Smtp-Source: ABdhPJyiqwsuWF8oCOJ2p5mgrL1UOBEF3qjVNKqZ9hb80Q/uxO4LXY5WvIi3i2M6psl12Cn+2MTVIQ==
X-Received: by 2002:a5d:40c3:: with SMTP id b3mr2187416wrq.157.1604052256376;
        Fri, 30 Oct 2020 03:04:16 -0700 (PDT)
Received: from [192.168.86.34] (cpc86377-aztw32-2-0-cust226.18-1.cable.virginm.net. [92.233.226.227])
        by smtp.googlemail.com with ESMTPSA id h8sm7699531wro.14.2020.10.30.03.04.14
        (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
        Fri, 30 Oct 2020 03:04:15 -0700 (PDT)
Subject: Re: [PATCH v2 19/39] docs: ABI: stable: make files ReST compatible
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>,
 Linux Doc Mailing List <linux-doc@vger.kernel.org>
Cc: Jonathan Corbet <corbet@lwn.net>,
 Benjamin Herrenschmidt <benh@kernel.crashing.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Daniel Thompson <daniel.thompson@linaro.org>,
 Jarkko Sakkinen <jarkko@kernel.org>, Jason Gunthorpe <jgg@ziepe.ca>,
 Jerry Snitselaar <jsnitsel@redhat.com>, Jingoo Han <jingoohan1@gmail.com>,
 Johannes Berg <johannes@sipsolutions.net>,
 Jonathan Cameron <Jonathan.Cameron@huawei.com>,
 Juergen Gross <jgross@suse.com>, Lee Jones <lee.jones@linaro.org>,
 Michael Ellerman <mpe@ellerman.id.au>, Oded Gabbay <oded.gabbay@gmail.com>,
 Paul Mackerras <paulus@samba.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Tom Rix <trix@redhat.com>,
 Vaibhav Jain <vaibhav@linux.ibm.com>, dri-devel@lists.freedesktop.org,
 linux-kernel@vger.kernel.org, linux-wireless@vger.kernel.org,
 linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
 <467a0dfbcdf00db710a629d3fe4a2563750339d8.1604042072.git.mchehab+huawei@kernel.org>
From: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Message-ID: <bc393307-d7dc-1666-f25c-6d756ebf5993@linaro.org>
Date: Fri, 30 Oct 2020 10:04:13 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.8.0
MIME-Version: 1.0
In-Reply-To: <467a0dfbcdf00db710a629d3fe4a2563750339d8.1604042072.git.mchehab+huawei@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 30/10/2020 07:40, Mauro Carvalho Chehab wrote:
> Several entries in the stable ABI files won't parse if we pass
> them directly to the ReST output.
> 
> Adjust them in order to allow adding their contents as-is to
> the stable ABI book.
> 
> Signed-off-by: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
> ---
>   Documentation/ABI/stable/firewire-cdev        |  4 +
>   Documentation/ABI/stable/sysfs-acpi-pmprofile | 22 +++--
>   Documentation/ABI/stable/sysfs-bus-firewire   |  3 +
>   Documentation/ABI/stable/sysfs-bus-nvmem      | 19 ++--

for nvmem parts:

Acked-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>

--srini

>   Documentation/ABI/stable/sysfs-bus-usb        |  6 +-
>   .../ABI/stable/sysfs-class-backlight          |  1 +
>   .../ABI/stable/sysfs-class-infiniband         | 93 +++++++++++++------
>   Documentation/ABI/stable/sysfs-class-rfkill   | 13 ++-
>   Documentation/ABI/stable/sysfs-class-tpm      | 90 +++++++++---------
>   Documentation/ABI/stable/sysfs-devices        |  5 +-
>   Documentation/ABI/stable/sysfs-driver-ib_srp  |  1 +
>   .../ABI/stable/sysfs-firmware-efi-vars        |  4 +
>   .../ABI/stable/sysfs-firmware-opal-dump       |  5 +
>   .../ABI/stable/sysfs-firmware-opal-elog       |  2 +
>   Documentation/ABI/stable/sysfs-hypervisor-xen |  3 +
>   Documentation/ABI/stable/vdso                 |  5 +-
>   16 files changed, 176 insertions(+), 100 deletions(-)
> 


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:39:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:39:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15623.38637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRo7-0002CY-Ok; Fri, 30 Oct 2020 10:38:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15623.38637; Fri, 30 Oct 2020 10:38:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRo7-0002CR-LV; Fri, 30 Oct 2020 10:38:55 +0000
Received: by outflank-mailman (input) for mailman id 15623;
 Fri, 30 Oct 2020 10:38:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYRo5-0002CM-M8
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:38:54 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 043ca3fa-10f5-4f2f-8056-0a673f111ccb;
 Fri, 30 Oct 2020 10:38:52 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRo1-0005mX-Bs; Fri, 30 Oct 2020 10:38:49 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRo1-0006si-3o; Fri, 30 Oct 2020 10:38:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kYRo5-0002CM-M8
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:38:54 +0000
X-Inumbo-ID: 043ca3fa-10f5-4f2f-8056-0a673f111ccb
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 043ca3fa-10f5-4f2f-8056-0a673f111ccb;
	Fri, 30 Oct 2020 10:38:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=QPM77mtT2Mx+UDN84JkKmfxgmqVv4FugrhbPhn5XY4Q=; b=rieRRL/kJmuDfMsDLlFAypoNJ3
	MjVho9TLjwwhmHqvi26BEYr5hOpbIEjFk/JdHxhch9Yfq6f8+Ck7T9+ASYkR8If7cjNoC3yDPR+4J
	mYk+qWQY9AViPctASittakzpz12MRR8WN+CyJg7qIDApWLWs4ipbT5rYrNBZQo1t22xo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRo1-0005mX-Bs; Fri, 30 Oct 2020 10:38:49 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRo1-0006si-3o; Fri, 30 Oct 2020 10:38:49 +0000
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
Date: Fri, 30 Oct 2020 10:38:47 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 22/10/2020 17:17, Jan Beulich wrote:
> On 22.10.2020 18:00, Roger Pau Monné wrote:
>> On Tue, Oct 20, 2020 at 04:10:09PM +0200, Jan Beulich wrote:
>>> The per-vCPU virq_lock, which is being held anyway, together with there
>>> not being any call to evtchn_port_set_pending() when v->virq_to_evtchn[]
>>> is zero, provide sufficient guarantees.
>>
>> This is also fine because closing the event channel will also be done
>> with the virq lock held, AFAICT.
> 
> Right, I'm not going into these details (or else binding would also
> need mentioning) here, as the code comment update should sufficiently
> cover it. Hence just saying "sufficient guarantees".
> 
>>> --- a/xen/include/xen/event.h
>>> +++ b/xen/include/xen/event.h
>>> @@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
>>>    * Low-level event channel port ops.
>>>    *
>>>    * All hooks have to be called with a lock held which prevents the channel
>>> - * from changing state. This may be the domain event lock, the per-channel
>>> - * lock, or in the case of sending interdomain events also the other side's
>>> - * per-channel lock. Exceptions apply in certain cases for the PV shim.
>>> + * from changing state. This may be
>>> + * - the domain event lock,
>>> + * - the per-channel lock,
>>> + * - in the case of sending interdomain events the other side's per-channel
>>> + *   lock,
>>> + * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
>>> + *   combination with the ordering enforced through how the vCPU's
>>> + *   virq_to_evtchn[] gets updated),
>>> + * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
>>> + * Exceptions apply in certain cases for the PV shim.
>>
>> Having such a wide locking discipline looks dangerous to me, it's easy
>> to get things wrong without notice IMO.
> 
> It is effectively only describing how things are (or were before
> XSA-343, getting restored here).

I agree with Roger here: the new/old locking discipline is dangerous, and 
it is only a matter of time before it bites us again.

I think we should consider Juergen's series, because it makes the locking 
for the event channel easier to understand.

With his series in place, this patch will become unnecessary.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:40:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15628.38649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRpQ-0002zR-8J; Fri, 30 Oct 2020 10:40:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15628.38649; Fri, 30 Oct 2020 10:40:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRpQ-0002zK-4v; Fri, 30 Oct 2020 10:40:16 +0000
Received: by outflank-mailman (input) for mailman id 15628;
 Fri, 30 Oct 2020 10:40:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYRpP-0002zD-05
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:40:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3313538a-dad6-4f6c-9f82-048862176b4a;
 Fri, 30 Oct 2020 10:40:13 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYRpM-0005nv-Mt; Fri, 30 Oct 2020 10:40:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYRpM-0000hz-FE; Fri, 30 Oct 2020 10:40:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYRpM-0000Y6-Ek; Fri, 30 Oct 2020 10:40:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kYRpP-0002zD-05
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:40:15 +0000
X-Inumbo-ID: 3313538a-dad6-4f6c-9f82-048862176b4a
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3313538a-dad6-4f6c-9f82-048862176b4a;
	Fri, 30 Oct 2020 10:40:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LnMp38M1K7qF9Zz4fSkjsSW8Kz7GnyTaes32PK9GFgg=; b=eMG2FY6JHxv97mPZmTp6Dqqzji
	7c1We4y+kr0XoJfY1e/65BIIlMeq2IH0wgL07HNLJYpbzpk+lyElB4IwbFPK1xpNu73JCqUPlAPvx
	Dn/SL/VltErtmIzWre55BCwJJxbLqKDbSLoOvduZLEv9VoXqr47xBlkbVb+03Uhygsg4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYRpM-0005nv-Mt; Fri, 30 Oct 2020 10:40:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYRpM-0000hz-FE; Fri, 30 Oct 2020 10:40:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYRpM-0000Y6-Ek; Fri, 30 Oct 2020 10:40:12 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156294-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156294: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c26e291375d1808a0ec5af9002dd0ebca5959020
X-Osstest-Versions-That:
    ovmf=3b87d728742fe58f427f4b775b11250e29d75cc6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 10:40:12 +0000

flight 156294 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156294/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c26e291375d1808a0ec5af9002dd0ebca5959020
baseline version:
 ovmf                 3b87d728742fe58f427f4b775b11250e29d75cc6

Last test of basis   156270  2020-10-28 03:11:01 Z    2 days
Testing same since   156294  2020-10-29 11:26:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Liming Gao <gaoliming@byosoft.com.cn>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Siyuan Fu <siyuan.fu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3b87d72874..c26e291375  c26e291375d1808a0ec5af9002dd0ebca5959020 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:43:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:43:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15635.38664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRsA-0003Bm-OZ; Fri, 30 Oct 2020 10:43:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15635.38664; Fri, 30 Oct 2020 10:43:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRsA-0003Bf-Ka; Fri, 30 Oct 2020 10:43:06 +0000
Received: by outflank-mailman (input) for mailman id 15635;
 Fri, 30 Oct 2020 10:43:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYRs9-0003Ba-HW
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:43:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47e85cac-885d-43bf-a877-71b8481b88d9;
 Fri, 30 Oct 2020 10:43:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A4424AB0E;
 Fri, 30 Oct 2020 10:43:03 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kYRs9-0003Ba-HW
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:43:05 +0000
X-Inumbo-ID: 47e85cac-885d-43bf-a877-71b8481b88d9
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 47e85cac-885d-43bf-a877-71b8481b88d9;
	Fri, 30 Oct 2020 10:43:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604054583;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ms46VkNFTYsaIEUeaYqWz9shxarDl+uRmlHR8UyRFuY=;
	b=sjgL6gWPOZMWM2lLTtfwALSMPsZzu5r1WdUtK21l7CCNpBNfaRD4bxxYcSk7iLRzxYvm4/
	Q4Q8ufGNAch2k4YViLmHRSuB61V8hmP2nQIx7gHCZgMYpV7gSGFV4scn4w+Gejk8upm0i7
	M0SYi5DrELWSm2vyMB4pnZL83fB8gA8=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id A4424AB0E;
	Fri, 30 Oct 2020 10:43:03 +0000 (UTC)
Subject: Re: [PATCH v2 2/8] evtchn: replace FIFO-specific header by generic
 private one
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <3fea358e-d6d1-21d4-2d83-d9bd457ba3b5@suse.com>
 <c330d6e3-04da-9bf2-5634-b6b2961c18db@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <34db0b62-8442-5d3e-3e55-799f8683c8a5@suse.com>
Date: Fri, 30 Oct 2020 11:42:58 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <c330d6e3-04da-9bf2-5634-b6b2961c18db@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.10.2020 11:21, Julien Grall wrote:
> On 20/10/2020 15:08, Jan Beulich wrote:
>> Having a FIFO-specific header is not (or at least no longer) warranted
>> with just three function declarations left there. Introduce a private
>> header instead, moving there some further items from xen/event.h.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Thanks; perhaps you didn't notice this went in already?

>> ---
>> v2: New.
>> ---
>> TBD: If - considering the layering violation that's there anyway - we
>>       allowed PV shim code to make use of this header, a few more items
>>       could be moved out of "public sight".
> 
> Are you referring to PV shim calling function such as evtchn_bind_vcpu()?

Yes.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:44:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:44:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15639.38676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRtg-0003KF-3R; Fri, 30 Oct 2020 10:44:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15639.38676; Fri, 30 Oct 2020 10:44:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRtf-0003K8-Va; Fri, 30 Oct 2020 10:44:39 +0000
Received: by outflank-mailman (input) for mailman id 15639;
 Fri, 30 Oct 2020 10:44:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYRtf-0003K3-3o
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:44:39 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95b8aa91-3dec-46fe-9751-695b402f1854;
 Fri, 30 Oct 2020 10:44:33 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRtY-0005uJ-3u; Fri, 30 Oct 2020 10:44:32 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYRtX-0007II-R0; Fri, 30 Oct 2020 10:44:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kYRtf-0003K3-3o
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:44:39 +0000
X-Inumbo-ID: 95b8aa91-3dec-46fe-9751-695b402f1854
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 95b8aa91-3dec-46fe-9751-695b402f1854;
	Fri, 30 Oct 2020 10:44:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=VFQ8n6PEjMlDDaF1eOqtlImYkqVhedajZ7D7tUCU5H0=; b=RQbImd2cnmVCk7iMlLNGWKcAHY
	YYt3j5swk6/XYMEWDPgu31bARhaWTvMYBCFhxMsFEyO0QgQNjqaTivpaLI3awYmldvQwcG7ej6hPb
	wtp1zV185mj/IR4cG+oh8Zc5SvOSq+E1dWIs6bUac1mRXT7pu7+/gQFJG2hrDM/fZ6ig=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRtY-0005uJ-3u; Fri, 30 Oct 2020 10:44:32 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYRtX-0007II-R0; Fri, 30 Oct 2020 10:44:31 +0000
Subject: Re: [PATCH v2 2/8] evtchn: replace FIFO-specific header by generic
 private one
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <3fea358e-d6d1-21d4-2d83-d9bd457ba3b5@suse.com>
 <c330d6e3-04da-9bf2-5634-b6b2961c18db@xen.org>
 <34db0b62-8442-5d3e-3e55-799f8683c8a5@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <30e7270b-713f-33bc-4030-cc084a06185b@xen.org>
Date: Fri, 30 Oct 2020 10:44:30 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <34db0b62-8442-5d3e-3e55-799f8683c8a5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 30/10/2020 10:42, Jan Beulich wrote:
> On 30.10.2020 11:21, Julien Grall wrote:
>> On 20/10/2020 15:08, Jan Beulich wrote:
>>> Having a FIFO-specific header is not (or at least no longer) warranted
>>> with just three function declarations left there. Introduce a private
>>> header instead, moving there some further items from xen/event.h.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Acked-by: Julien Grall <jgrall@amazon.com>
> 
> Thanks; perhaps you didn't notice this went in already?

I only noticed it afterwards. I find it useful when committers send a quick 
message mentioning that part of a series has been committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:44:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:44:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15640.38688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRtl-0003M8-AH; Fri, 30 Oct 2020 10:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15640.38688; Fri, 30 Oct 2020 10:44:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRtl-0003M0-79; Fri, 30 Oct 2020 10:44:45 +0000
Received: by outflank-mailman (input) for mailman id 15640;
 Fri, 30 Oct 2020 10:44:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CBev=EF=epam.com=prvs=95721c7f7b=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kYRtk-0003K3-2C
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:44:44 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b75f53c-cde9-4bb6-acd0-92f799744e53;
 Fri, 30 Oct 2020 10:44:33 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09UAfl5d031029; Fri, 30 Oct 2020 10:44:25 GMT
Received: from eur01-ve1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2053.outbound.protection.outlook.com [104.47.1.53])
 by mx0a-0039f301.pphosted.com with ESMTP id 34fq7dc67m-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 30 Oct 2020 10:44:25 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2195.eurprd03.prod.outlook.com (2603:10a6:200:4f::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 30 Oct
 2020 10:44:20 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3499.029; Fri, 30 Oct 2020
 10:44:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CBev=EF=epam.com=prvs=95721c7f7b=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
	id 1kYRtk-0003K3-2C
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:44:44 +0000
X-Inumbo-ID: 0b75f53c-cde9-4bb6-acd0-92f799744e53
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 0b75f53c-cde9-4bb6-acd0-92f799744e53;
	Fri, 30 Oct 2020 10:44:33 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
	by mx0a-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 09UAfl5d031029;
	Fri, 30 Oct 2020 10:44:25 GMT
Received: from eur01-ve1-obe.outbound.protection.outlook.com (mail-ve1eur01lp2053.outbound.protection.outlook.com [104.47.1.53])
	by mx0a-0039f301.pphosted.com with ESMTP id 34fq7dc67m-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 30 Oct 2020 10:44:25 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HCmtxJflP5+9G5A98x2q7p+bAfpmwsD8dSVk0T4VouBdr3uqCn9p1zkbt5RZrlEmC4u61SIYs3pC2nWj6Zs21hYVqaFSGgR9Yy7mfLnmqJkQ9G92sz41J5uTW1BIuiL2mEH7E5gr/GPoMM5NFhNIewOwkpDN6k1wUDD7c4fz4mODqrQauFdwkGAQKXGWLHoK6AOLwQQZJcT2Eolz5yc+XggmbKZP60Kl3iTsEERa93w+F0skBBGk53KxgkFuQH8K623FjeliQpqkR23cQHHFWbj6ruSK0lFgOBKTyrgFMV+cfhtUycZBzF50DneD85D2RJyBccjl9m4/UYv62azJvA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2pz5dJh1rAxRDvqVzFIxUhHYsVolUz0ych1nLMqlMy8=;
 b=XckW6um94k0vhFxRCV8Oi4cs/snz1AjAest/ODZzcNVEhUfYMceERdoWLE+miExPUD5boIsdFBfHMwSd6gG1N/fjiU2yWU2JEt+w5sSqPsD1nSlucbrUispWcBX5pTOIwT/9bmVYJCV2OuEvRonBAB8JMdA029ctlQrk+QVf42YU8jrHjUhW3IbkiaTu1qNyEgL8Wot8p1IRLhA0Ybz+roJNTvvj76QHkylQaMW8p2QC45wu82Hb1ianIh4kk8CG2JLU6sxHyEUqJp0B0YY9LVqbSxg+5eSBdsx84sW9ZVYjRcvyhN5hvF0evdYOyYO3e+87272IV0p3ZIiDDl9x/A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2pz5dJh1rAxRDvqVzFIxUhHYsVolUz0ych1nLMqlMy8=;
 b=GjjLkJ57d496UUAGQr2ZQRmWJT6VO6rsrLY9QC/0Hp1nSA6JhzPN6lDp5MZ6DuHMW2pi9DzJNGWHN2pSMjYeGCn48tRabvvvoFYikSQlMKkXUYBAaAbSMfU82WzhzyinnCJQ/19PjQhG1zbDg1xziRLGOjZrTnUawSUyMihRV99YVQ/DBvB4bbz7JHgimUKXE89h1KyiOQSsSwB63QIrSeQltZOJxzNFn8E6r3VdyUgi2c5mCo+2oLKNwYrz08jO7PeHlOA5d4/iJoDcwJKh5qMvLiqV/iQ0eTVCHpQqkfAU+pklDAkINBeoMV6m/AevaALWyYRxkoufJTQ+TmM8Hw==
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0301MB2195.eurprd03.prod.outlook.com (2603:10a6:200:4f::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 30 Oct
 2020 10:44:20 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3499.029; Fri, 30 Oct 2020
 10:44:19 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Rahul Singh <rahul.singh@arm.com>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "bertrand.marquis@arm.com" <bertrand.marquis@arm.com>,
        Jan Beulich
	<jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWrqmeCV4SGlhADU2GY14D47cn7w==
Date: Fri, 30 Oct 2020 10:44:19 +0000
Message-ID: <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
References: 
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
In-Reply-To: 
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: cd663ee2-53a6-46ba-57c6-08d87cc0c1c7
x-ms-traffictypediagnostic: AM4PR0301MB2195:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM4PR0301MB2195F5905746D7FE2BAB2DF2E7150@AM4PR0301MB2195.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 jGzPjxBfWza0OJNxrWoI11sh31sJ3JpqMUjHXj6/VsgHyM9Fo7MDF4sBRMF0JmgLGw5yH7ErPrJe3EBZOg1cCCT/6F/aLtxk13yZTkAGjbWpeGEiFs2DAwx5qkAh3QQQLAUmm1+W4apURklUtIRB8CwKSFBZzZZBMAWeCN30SHK+GADmL4V5E3RJggsdfIfIAT7r7sFS1ZpHNOxpU9YeLQf7g9WO5H076dSsFCM+/IgM6s1sGPwjxvaCL69gkVCFPVCiSF7nyeH2fK8JoqVCYgddepcZPjEp22KR1YHzhGilH7jmv7GPzP4ANbwyhiNb2hfPl5qVeEazOcH8A2s5+0wHu7cFnJyuWAzW+NYiZEqGWa9q3uDSq+rtHzsKO5qHGS2so1uzhIlvAxz7fcwuNw==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(39860400002)(376002)(366004)(316002)(54906003)(2616005)(5660300002)(6506007)(53546011)(36756003)(8676002)(66476007)(66946007)(76116006)(31696002)(8936002)(71200400001)(186003)(66556008)(66446008)(64756008)(26005)(86362001)(110136005)(2906002)(478600001)(107886003)(966005)(4326008)(6486002)(31686004)(6512007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 iP6lkFrIrjxCTYfS26bDlLTTTwC+B9IoeTBBBI2mub/TJDoGSZo4IxriHGSUMdKtQ439jJbjViQlY654ntilHjADinlIxAVShsrhtNLRs4xBhNWm4VDFVRS2n4IOO3IWLwyq6cFhw7N8jL+1qjxI5qwtPJ0PboVXDtqNMYMPnQCAbBX47dJoKObCGAwRs+3FowoSdwk+lt34T06hjjD+SLEG6p4oQ/HF31bfWDi4gAMmQqSdlIFladfNtjiU1Gko+e9pgkFAEes3AlAVcwz47FaICJnBn5oTDMgrym50Hu/sB3LLut72wgq5jj0zO83msR2pGO/WGJzYuBn8Pog1Zh2dg9LwYhLU/3SXjDRMye8apQSewS9j1Sl/IpXXQ3QQ3WB+NJwvMZxlvgKAWg+fvqJ4HsC63JPw6jQZbCkvtkmCRfKEsCm8LbHnZVgdNMSegeG2y0BU5qyl5MC/b3vMLYkRJrmx4wyBNGPBMMXk2B7d1COeOwEMH9+E8ri1cTqu91ZTEqXEN12y7emIrYUs/p6qMa1pJkyfLQAGDHTzCNReU6sTNu9ZZ9qeOtOMT4MihBHflk70PIv+y7LD5GuJQLWnGYrBUPZUpZV3qHfbeH5IeODtu3xkppcostWcMtDzfLb0X97UDy9VZwwiFDKWfA==
Content-Type: text/plain; charset="utf-8"
Content-ID: <0781EE8B79B81A4284DFF649F584F4AA@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cd663ee2-53a6-46ba-57c6-08d87cc0c1c7
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Oct 2020 10:44:19.7635
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: kzp0gYnchlVwTSEsXCtLrs1LJbgYSjHpxRAidyNZOOU6/KsPv6CZXL9qQdHwL0IYDK7VB4ra8wXubo0UA9v5b26AqkwS7A99xGpw2ueFlbpwazVB7CgpikORQS6MU55W
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0301MB2195
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-30_02:2020-10-30,2020-10-30 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0
 mlxlogscore=999 adultscore=0 malwarescore=0 bulkscore=0 lowpriorityscore=0
 mlxscore=0 phishscore=0 clxscore=1015 spamscore=0 priorityscore=1501
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010300081

Hi, Rahul!

On 10/20/20 6:25 PM, Rahul Singh wrote:
> Add support for ARM architected SMMUv3 implementations. It is based on
> the Linux SMMUv3 driver.
>
> Major differences between the Linux driver are as follows:
> 1. Only Stage-2 translation is supported as compared to the Linux driver
>     that supports both Stage-1 and Stage-2 translations.

First of all, thank you for the efforts!

I tried the patch with QEMU and would like to know if my understanding is
correct that this combination will not work as of now:

(XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
(XEN) Data Abort Trap. Syndrome=0x1940010
(XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
(XEN) 0TH[0x0] = 0x00000000b8468f7f

[snip]

If this is expected, then is there any plan to make QEMU work as well?

I see [1] says that "Only stage 1 and AArch64 PTW are supported." on the
QEMU side.

We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
implementation, as it would allow testing different setups and
configurations with QEMU.

Thank you in advance,

Oleksandr

[1] https://patchwork.ozlabs.org/project/qemu-devel/cover/1524665762-31355-1-git-send-email-eric.auger@redhat.com/


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:49:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15652.38700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRyH-0003de-TL; Fri, 30 Oct 2020 10:49:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15652.38700; Fri, 30 Oct 2020 10:49:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYRyH-0003dX-Ph; Fri, 30 Oct 2020 10:49:25 +0000
Received: by outflank-mailman (input) for mailman id 15652;
 Fri, 30 Oct 2020 10:49:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYRyG-0003dS-GZ
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:49:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 90f427b3-43dc-44f7-a16d-c169e3bf6adf;
 Fri, 30 Oct 2020 10:49:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B5699ACA3;
 Fri, 30 Oct 2020 10:49:22 +0000 (UTC)
X-Inumbo-ID: 90f427b3-43dc-44f7-a16d-c169e3bf6adf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604054962;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UWZMhaL5EMy8WOHwrNRwIWiRPHI5NUJ6uno6NfsPVlc=;
	b=o0ZAt1hxnsdy+atjREs2Xq4fg0ghDTlk2GtkuB/hQi6zJHtEmGawEVKRz8CRmk1HH0+8X0
	H8jsWOoS5KyL6CvYPSzkcKM/vmWchsR56xwEfCG8JTgAMuA1W/6w1UED9IbF84DRbGz6cA
	MzWTLe1uNCJnrqHBv1YD+id2bjbLk58=
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
Date: Fri, 30 Oct 2020 11:49:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.2020 11:38, Julien Grall wrote:
> On 22/10/2020 17:17, Jan Beulich wrote:
>> On 22.10.2020 18:00, Roger Pau Monné wrote:
>>> On Tue, Oct 20, 2020 at 04:10:09PM +0200, Jan Beulich wrote:
>>>> --- a/xen/include/xen/event.h
>>>> +++ b/xen/include/xen/event.h
>>>> @@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
>>>>    * Low-level event channel port ops.
>>>>    *
>>>>    * All hooks have to be called with a lock held which prevents the channel
>>>> - * from changing state. This may be the domain event lock, the per-channel
>>>> - * lock, or in the case of sending interdomain events also the other side's
>>>> - * per-channel lock. Exceptions apply in certain cases for the PV shim.
>>>> + * from changing state. This may be
>>>> + * - the domain event lock,
>>>> + * - the per-channel lock,
>>>> + * - in the case of sending interdomain events the other side's per-channel
>>>> + *   lock,
>>>> + * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
>>>> + *   combination with the ordering enforced through how the vCPU's
>>>> + *   virq_to_evtchn[] gets updated),
>>>> + * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
>>>> + * Exceptions apply in certain cases for the PV shim.
>>>
>>> Having such a wide locking discipline looks dangerous to me, it's easy
>>> to get things wrong without notice IMO.
>>
>> It is effectively only describing how things are (or were before
>> XSA-343, getting restored here).
> 
> I agree with Roger here, the new/old locking discipline is dangerous and 
> it is only a matter of time before it will bite us again.
> 
> I think we should consider Juergen's series because the locking for the 
> event channel is easier to understand.

We should, yes. The one thing I'm a little uneasy with is the
new lock "variant" that gets introduced. Custom locking methods
also are a common source of problems (which isn't to say I see
any here).

> With his series in place, this patch will become unnecessary.

It'll become less important, but not pointless - any unnecessary
locking would better be removed imo.

I'd also like to note that the non-straightforward locking rules
wouldn't really change with his series; the benefit there really
is the dropping of the need for IRQ-safe locking.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15659.38712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYS5k-0004XH-NP; Fri, 30 Oct 2020 10:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15659.38712; Fri, 30 Oct 2020 10:57:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYS5k-0004XA-In; Fri, 30 Oct 2020 10:57:08 +0000
Received: by outflank-mailman (input) for mailman id 15659;
 Fri, 30 Oct 2020 10:57:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYS5i-0004X5-P1
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:57:06 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aeaf27e4-7b84-4ea7-b25f-205ed55418b6;
 Fri, 30 Oct 2020 10:57:05 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYS5e-0006A2-Jq; Fri, 30 Oct 2020 10:57:02 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYS5e-0007xW-Ap; Fri, 30 Oct 2020 10:57:02 +0000
X-Inumbo-ID: aeaf27e4-7b84-4ea7-b25f-205ed55418b6
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ZiR472IQ1PLz+AsWtLHwX2LVqz8kOiyYyPIwL6kd2lU=; b=rHKNuCwQ6TWOjCWWo3H7SiDXpi
	gxjh7BfMhJfKJtS2lb125KXOWrlHvguskyda09q8wT/S4ooWq2DSZZhgNBDDLm1p+L1kM+lygCHbZ
	dZo0an1VnGreIPYwaZO0PJqcLBykuaJaJldMPu/dXH54EueelMHFxEsDgc1sp3yv72p4=;
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
 <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
Date: Fri, 30 Oct 2020 10:57:00 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 30/10/2020 10:49, Jan Beulich wrote:
> On 30.10.2020 11:38, Julien Grall wrote:
>> On 22/10/2020 17:17, Jan Beulich wrote:
>>> On 22.10.2020 18:00, Roger Pau Monné wrote:
>>>> On Tue, Oct 20, 2020 at 04:10:09PM +0200, Jan Beulich wrote:
>>>>> --- a/xen/include/xen/event.h
>>>>> +++ b/xen/include/xen/event.h
>>>>> @@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
>>>>>     * Low-level event channel port ops.
>>>>>     *
>>>>>     * All hooks have to be called with a lock held which prevents the channel
>>>>> - * from changing state. This may be the domain event lock, the per-channel
>>>>> - * lock, or in the case of sending interdomain events also the other side's
>>>>> - * per-channel lock. Exceptions apply in certain cases for the PV shim.
>>>>> + * from changing state. This may be
>>>>> + * - the domain event lock,
>>>>> + * - the per-channel lock,
>>>>> + * - in the case of sending interdomain events the other side's per-channel
>>>>> + *   lock,
>>>>> + * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
>>>>> + *   combination with the ordering enforced through how the vCPU's
>>>>> + *   virq_to_evtchn[] gets updated),
>>>>> + * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
>>>>> + * Exceptions apply in certain cases for the PV shim.
>>>>
>>>> Having such a wide locking discipline looks dangerous to me, it's easy
>>>> to get things wrong without notice IMO.
>>>
>>> It is effectively only describing how things are (or were before
>>> XSA-343, getting restored here).
>>
>> I agree with Roger here, the new/old locking discipline is dangerous and
>> it is only a matter of time before it will bite us again.
>>
>> I think we should consider Juergen's series because the locking for the
>> event channel is easier to understand.
> 
> We should, yes. The one thing I'm a little uneasy with is the
> new lock "variant" that gets introduced. Custom locking methods
> also are a common source of problems (which isn't to say I see
> any here).

I am also uneasy with a new lock "variant". However, this is the best 
proposal I have seen so far to unblock the issue.

I am open to other suggestions with a simple locking discipline.

> 
>> With his series in place, this patch will become unnecessary.
> 
> It'll become less important, but not pointless - any unnecessary
> locking would better be removed imo.

They may be unnecessary today, but if tomorrow someone decides to rework 
the other lock, then you are just re-opening a security hole.

IMHO, having a sane locking system is far more important than removing 
locking that looks "unnecessary".

> 
> I'd also like to note that the non-straightforward locking rules
> wouldn't really change with his series; the benefit there really
> is the dropping of the need for IRQ-safe locking.

Well, it is at least going towards that...

Cheers,

> 
> Jan
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 10:57:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 10:57:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15664.38724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYS6Q-0004eU-3s; Fri, 30 Oct 2020 10:57:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15664.38724; Fri, 30 Oct 2020 10:57:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYS6Q-0004eN-0l; Fri, 30 Oct 2020 10:57:50 +0000
Received: by outflank-mailman (input) for mailman id 15664;
 Fri, 30 Oct 2020 10:57:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYS6O-0004eG-Ng
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 10:57:48 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a7c5135-9d74-4c12-ab68-dba026070e57;
 Fri, 30 Oct 2020 10:57:47 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYS6L-0006Cb-6z; Fri, 30 Oct 2020 10:57:45 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYS6K-00080N-Vd; Fri, 30 Oct 2020 10:57:45 +0000
X-Inumbo-ID: 4a7c5135-9d74-4c12-ab68-dba026070e57
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=3WRNYdd2rXXQ/VLt51xZpzNQFRLGTVLdg0Gqequr51Q=; b=ByzGBSxj0E9zsO700sm64JuJ0b
	hOb7B9k3SwXLf32UEUNBIjiFekycp+WCBMvP7GQGOvgczA/fYO9TreWj2gu4H4iAoV2IQIOdECXoJ
	VJQNs2oAtottdWoSgfv6jIyzDdvuZDqqaK6XB2w2Y+ZdVYddT4EKjkytJWpQP97ixDr0=;
Subject: Re: [PATCH v2 6/8] evtchn: convert vIRQ lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53a2fc39-1bf1-38ce-bbdf-b512c5279b4f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6dec1d48-b8c8-6122-087c-38f36f30596e@xen.org>
Date: Fri, 30 Oct 2020 10:57:42 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <53a2fc39-1bf1-38ce-bbdf-b512c5279b4f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 20/10/2020 15:10, Jan Beulich wrote:
> There's no need to serialize all sending of vIRQ-s; all that's needed
> is serialization against the closing of the respective event channels
> (so far by means of a barrier). To facilitate the conversion, switch to
> an ordinary write locked region in evtchn_close().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Don't introduce/use rw_barrier() here. Add comment to
>      evtchn_bind_virq(). Re-base.
> 
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -160,7 +160,7 @@ struct vcpu *vcpu_create(struct domain *
>       v->vcpu_id = vcpu_id;
>       v->dirty_cpu = VCPU_CPU_CLEAN;
>   
> -    spin_lock_init(&v->virq_lock);
> +    rwlock_init(&v->virq_lock);
>   
>       tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
>   
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -449,6 +449,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t
>   
>       spin_unlock_irqrestore(&chn->lock, flags);
>   
> +    /*
> +     * If by any, the update of virq_to_evtchn[] would need guarding by
> +     * virq_lock, but since this is the last action here, there's no strict
> +     * need to acquire the lock. Hnece holding event_lock isn't helpful

s/Hnece/Hence/

> +     * anymore at this point, but utilize that its unlocking acts as the
> +     * otherwise necessary smp_wmb() here.
> +     */
>       v->virq_to_evtchn[virq] = bind->port = port;

I think all access to v->virq_to_evtchn[virq] should use ACCESS_ONCE() 
or {read, write}_atomic() to avoid any store/load tearing.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 11:15:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 11:15:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15676.38740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYSNh-0006T2-MM; Fri, 30 Oct 2020 11:15:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15676.38740; Fri, 30 Oct 2020 11:15:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYSNh-0006Sv-J0; Fri, 30 Oct 2020 11:15:41 +0000
Received: by outflank-mailman (input) for mailman id 15676;
 Fri, 30 Oct 2020 11:15:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYSNg-0006Sq-En
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 11:15:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 56f29214-1a0c-4b54-ae36-6013f6f44126;
 Fri, 30 Oct 2020 11:15:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7776BB0F2;
 Fri, 30 Oct 2020 11:15:38 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kYSNg-0006Sq-En
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 11:15:40 +0000
X-Inumbo-ID: 56f29214-1a0c-4b54-ae36-6013f6f44126
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 56f29214-1a0c-4b54-ae36-6013f6f44126;
	Fri, 30 Oct 2020 11:15:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604056538;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fD3f5AJYmd8ppqRzHwDpJ8bn+jSd9jluVgA9JqTkT9Y=;
	b=BLGpfc0q1KXnjeujxDCK9YkYCyoMgTI3aDpvasmaTUWwUOjcdV3mFNz3DFw1eXf2qEgXuN
	pw6RcfVIpSp+zSgFqTkocnQyBvKjoO+hFBGynpC2ZTvLqbvWb+f5W1rdM9MpCW5lua+OJt
	QzGKdw/Et6jcWQdJE4mZYel4GE8DBhU=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 7776BB0F2;
	Fri, 30 Oct 2020 11:15:38 +0000 (UTC)
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
 <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
 <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <693bb422-ed13-9327-5f22-12bd6f192916@suse.com>
Date: Fri, 30 Oct 2020 12:15:37 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.20 11:57, Julien Grall wrote:
> 
> 
> On 30/10/2020 10:49, Jan Beulich wrote:
>> On 30.10.2020 11:38, Julien Grall wrote:
>>> On 22/10/2020 17:17, Jan Beulich wrote:
>>>> On 22.10.2020 18:00, Roger Pau Monné wrote:
>>>>> On Tue, Oct 20, 2020 at 04:10:09PM +0200, Jan Beulich wrote:
>>>>>> --- a/xen/include/xen/event.h
>>>>>> +++ b/xen/include/xen/event.h
>>>>>> @@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
>>>>>>     * Low-level event channel port ops.
>>>>>>     *
>>>>>>     * All hooks have to be called with a lock held which prevents the channel
>>>>>> - * from changing state. This may be the domain event lock, the per-channel
>>>>>> - * lock, or in the case of sending interdomain events also the other side's
>>>>>> - * per-channel lock. Exceptions apply in certain cases for the PV shim.
>>>>>> + * from changing state. This may be
>>>>>> + * - the domain event lock,
>>>>>> + * - the per-channel lock,
>>>>>> + * - in the case of sending interdomain events the other side's per-channel
>>>>>> + *   lock,
>>>>>> + * - in the case of sending non-global vIRQ-s the per-vCPU virq_lock (in
>>>>>> + *   combination with the ordering enforced through how the vCPU's
>>>>>> + *   virq_to_evtchn[] gets updated),
>>>>>> + * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
>>>>>> + * Exceptions apply in certain cases for the PV shim.
>>>>>
>>>>> Having such a wide locking discipline looks dangerous to me; it's easy
>>>>> to get things wrong without notice, IMO.
>>>>
>>>> It is effectively only describing how things are (or were before
>>>> XSA-343, getting restored here).
>>>
>>> I agree with Roger here, the new/old locking discipline is dangerous and
>>> it is only a matter of time before it will bite us again.
>>>
>>> I think we should consider Juergen's series because the locking for the
>>> event channel is easier to understand.
>>
>> We should, yes. The one thing I'm a little uneasy with is the
>> new lock "variant" that gets introduced. Custom locking methods
>> also are a common source of problems (which isn't to say I see
>> any here).
> 
> I am also uneasy with a new lock "variant". However, this is the best 
> proposal I have seen so far to unblock the issue.
> 
> I am open to other suggestion with simple locking discipline.

In theory my new lock variant could easily be replaced by an rwlock,
using the try-variant for the readers only. The disadvantage of that
approach would be a growth of struct evtchn.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 11:33:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 11:33:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15683.38756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYSfA-0008Gd-9s; Fri, 30 Oct 2020 11:33:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15683.38756; Fri, 30 Oct 2020 11:33:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYSfA-0008GW-5S; Fri, 30 Oct 2020 11:33:44 +0000
Received: by outflank-mailman (input) for mailman id 15683;
 Fri, 30 Oct 2020 11:33:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2dz=EF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kYSf8-0008GR-My
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 11:33:42 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.72]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 787a24a7-1e8b-46c3-83ec-badb7adef874;
 Fri, 30 Oct 2020 11:33:39 +0000 (UTC)
Received: from AM6P191CA0072.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:7f::49)
 by DB6PR0801MB1910.eurprd08.prod.outlook.com (2603:10a6:4:75::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 30 Oct
 2020 11:33:36 +0000
Received: from VE1EUR03FT030.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:7f:cafe::31) by AM6P191CA0072.outlook.office365.com
 (2603:10a6:209:7f::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Fri, 30 Oct 2020 11:33:36 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT030.mail.protection.outlook.com (10.152.18.66) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 11:33:36 +0000
Received: ("Tessian outbound e6c55a0b9ba9:v64");
 Fri, 30 Oct 2020 11:33:35 +0000
Received: from 2805b5e93a1c.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F59DCC22-F060-40FE-8114-77FF9587D29A.1; 
 Fri, 30 Oct 2020 11:33:30 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 2805b5e93a1c.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 30 Oct 2020 11:33:30 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com (2603:10a6:20b:4e::31)
 by AM6PR08MB3911.eurprd08.prod.outlook.com (2603:10a6:20b:80::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 11:33:29 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a]) by AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a%4]) with mapi id 15.20.3499.027; Fri, 30 Oct 2020
 11:33:29 +0000
X-Inumbo-ID: 787a24a7-1e8b-46c3-83ec-badb7adef874
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h2qMybRCDFhgG6MNeBGmhEN/2We65xIKDLyNqtdOX5g=;
 b=vxMrH4BY9s4Kds4h7FH1kUwkLzlM3Q7ka74BPYEq+bqQXQNx/1RDKWuyiK+PqKhOnD9lf5JOwID+Sgap1DlDNDaFoVGtaJgDuBDHLx57/9pVgsrvRC4oDy2gUSXNmLZa+TahXJQlREelxgGwRWLGJr3vFetkxTyxgLGeF6XTOQU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0092c98987175636
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XUv1EJVUJ6kZqLgnlNAR7JsARWTteCCCLX9ifXoYZRAaOVYRexFFNZCvDTCi0mOYRBb/34kHMn/Zf+Ybj0Rh+HrvqCu5urObIZwc5QpAFW0EbdU/hC63nPK6+Eyv7EDcITHqug9g49NiYdbdfOOn6r9vkSRLbtonXXXbNQtPrPXBO5tiLHi1BFe6W0Qv+aFSny0ZpFkMGUR63Mx97mKvY+UtDY2/gaLRMschobu568jEd0D3hDhIUts8QpdawcsTE+VN/toEIx3cRCnmLx5c9uN9o4H9HX6+H1gG0PeRPLIB1hygH/nDxYTzBCkTaZYF5SPjbqu5MAKCAp9XGo8Zrw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=h2qMybRCDFhgG6MNeBGmhEN/2We65xIKDLyNqtdOX5g=;
 b=fSTa9GSTLD78iG00KF+E1W33cwtWAvrUcChF4EHOc7I6bXgMQJRGfxKDJge+pQIhuiBaO1g05glkT2CI1YR4w75U3i+lfPFHNtclao1RDziimlTcXDYTIFu9sKxXOh5HmKDNi9iRyo2+5tCGjJ93V0GIDOu7vQxniLpssvRnBlRpaIW7U1dcUV+JqwLNmxXCck0Z8BqMKDRL/Fs7xmqhfoQbS/vgtB3HOd913fJLWzntJCI544H+SrmGw9yNF8snQOfwFhilG5giIdtgTp1LY8uoh1iRLN+SP2EhX29+JIb8c2bLXTrWR6kXf+4Rk9V3Wg/YqR6sDd64cgogNUrRaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index:
 AQHWoVvrkmJwOYERdUOadvid1OghFamgw0AAgAEzsQCAAWIugIABA+CAgADBXICAABfBAIAAGI6AgAAOSACABG94AIADrVCAgAFBfQCAAGM1gIAA0RiAgAAJ+ACAAAarAIAABWQAgAAYtgA=
Date: Fri, 30 Oct 2020 11:33:29 +0000
Message-ID: <E52CE228-0D19-491E-BA47-04ED7599DDCE@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <bc697327-2750-9a78-042d-d45690d27928@xen.org>
 <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
 <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
 <1BE06E0F-26CF-453A-BB06-808CC0F3E09B@arm.com>
 <aae5892a-2532-04f8-02af-84c4d4c4f3fd@xen.org>
 <226DA6DB-D03C-41A7-A68C-53000DFA70F6@arm.com>
 <e5ce30c5-e0e0-90c8-962d-c86b65a82ccd@xen.org>
In-Reply-To: <e5ce30c5-e0e0-90c8-962d-c86b65a82ccd@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 40b8455a-5586-44a9-8f90-08d87cc7a3fe
x-ms-traffictypediagnostic: AM6PR08MB3911:|DB6PR0801MB1910:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB1910864F262630D08D12BB0EFC150@DB6PR0801MB1910.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 nFNxemlv7JUNCkpuPjTFYRMXpwIkNKh6bJ8uIzCew/JPFsXGZ6zFqidiwB6bqRGVDiP0EtnJ8hCHRfj+hDGKxWPHMaS/ftTpSSgXTtaGzFVFfYZCBMUrIlY5IPhaJ2slzAiMIW4Q8g3xaqqkPypo97Btw/VZkkTAQpxW2yADBrfYR5Nupv5UeKvlokSm+MwqUCAq1GExdg7VXX0kI4geeBBKfsXtmaNyOb7Nw9nOx4IFtDUl8LHux4n9UcQ68OxcKv04EojHr58OTz6kt5yaGJQWY34frpq94oMWx4XEdcWvnpdGAOcZdch/93msjMkzPayH7m30BHGbWorApYCbMvY1u0VEf8wg7uktbQbTfXr1EBUvXI+bSHrsEiwVL8Zrl0HRPXKp09jz3lJ9J96yFw==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3496.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(376002)(346002)(136003)(366004)(33656002)(36756003)(83380400001)(55236004)(26005)(6506007)(186003)(53546011)(6916009)(316002)(4326008)(76116006)(91956017)(478600001)(66446008)(6512007)(54906003)(2616005)(6486002)(8936002)(66476007)(8676002)(64756008)(2906002)(66556008)(86362001)(966005)(66946007)(71200400001)(5660300002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 TdSMMA6EAAya4WmiJwCSAHuuNWG7IvpDdNpPCivp+g5h4t+qhtuWT4iYUJ/WLQ64nwVMDFzEUxs3BqjcOWLf4iNulrTJ0kWjphQodruSSKHsCcxVKyoDt3wEm59GAQ9FmmlR+V1wNzJRQ/O84Iz+Vt/9ahUH2uxNOK7VWidhtMYg6QuFCgBUfyVgR6ifarBuX8GmnqSx1xghbqIMRoc39+iafsiXf8mkxY0PBEnvFTAvkXHm+0XsQpy3KaGvCxSYelYQ+GMXGvUWMqHYSbED76XuhApUB5zDlThUayHTq3gB480UcilvrAxfplwgqr8J4/tI341EvZJ3xmAhRir5S0IuQWSHWZLTEwBCQnBqSZIMJ8Jopjn93Cf6MLvgGHu/DNrnVdoTFeYRRne3LnDdZxnHffcZ+aTt3hIhpwlfT+QVCbn97hPg5ubI3PPSLOoU9kZrPg0TgFrO704GikLOkTDc6Ik5IDS3/WNlsmHXqmANqS8AqmUO7h2VNoLQ13cxKoLTdsk4gbXYP+0VupxRVGQqFn/VcRCkuLNAtxfgrkiO6cDwjZk9w4rqrgP7xL48XsJW8RIG/Vz0pDGG5RXNa2rKohW5SFuFxayJXm1M0cvCaEMsWp3NrYp5lIds0okcYMjRHjYoxUKa4Brqc1YA0Q==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <65395984F5DED74A919BFF61F4F68EBC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3911
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	bae2347a-cc23-47fd-0dd4-08d87cc79f9c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PyHIxHDJOHU5TaW0+o6zFNoWiuqhTO6FqbjN1nzVQxY10l48FjViUJ8c8BcRdv1NivbVPcGQHj6M4Fbya/iseCkttKCYGoxi9ZykoayxaSSFNi0iTgeZ1hzzYaLtpBkJKijpT2BeHLqikCk8NYenXBBYoASW5gbTpmhPWsD2D823zR4DFZ7Z5yxbFMzbheiRnxlGr5RCL2KttcL6AwCGxhO7NAK98u2O5sK5oqFyvWOipMEYsOSg6tRU+f1Nv10IRcWB7cHiaDTgedaG3C8H8VDZnpcfQWjkytxac9p8uFD1YHvytJkQguzp63gu0+gYeOJluNtIiFOvSaNo5E4T2KD6cGtbswS4RUvqF0YvkJ0V2OoAyS7LUT3t9nymbn15kEtH0XSRpnEPJFLOdv+HV4WUzoOZF+8C9Iu8kUTWzc2tgBMQ7F7TxZT9cpW2IIcqqBtk8vXJfhinD3FwgqSyp7gG8wyaY4egtUcXx+UuA7o=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(346002)(376002)(136003)(396003)(46966005)(336012)(8676002)(36906005)(8936002)(82310400003)(316002)(478600001)(2616005)(6512007)(86362001)(26005)(356005)(54906003)(83380400001)(33656002)(70206006)(6506007)(6486002)(70586007)(55236004)(53546011)(966005)(107886003)(5660300002)(36756003)(186003)(2906002)(6862004)(47076004)(4326008)(82740400003)(81166007);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 11:33:36.2631
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 40b8455a-5586-44a9-8f90-08d87cc7a3fe
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT030.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1910

Hello Julien,

> On 30 Oct 2020, at 10:05 am, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 30/10/2020 09:45, Rahul Singh wrote:
>> Hello Julien,
>>> On 30 Oct 2020, at 9:21 am, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi,
>>> 
>>> On 30/10/2020 08:46, Rahul Singh wrote:
>>>> Ok, yes - when I ported the driver I ported the command queue operations
>>>> from the previous commit, where atomic operations are not used, and the
>>>> rest of the code is from the latest code. I will again make sure that any
>>>> bug that is fixed in Linux is also fixed in Xen.
>>> 
>>> I would like to seek some clarification on the code because there seems
>>> to be conflicting information provided in this thread.
>>> 
>>> The patch (the baseline commit is provided) and the discussion with
>>> Bertrand suggest that you took a snapshot of the code last year and
>>> adapted it for Xen.
>>> 
>>> However, here you suggest that you took a hybrid approach where part of
>>> the code is based on last year's snapshot and another part is based on
>>> the latest code (I assume v5.9).
>>> 
>>> So can you please clarify?
>>> 
>>> Cheers,
>> The approach I took is to first merge the code from the commit (Jul 2,
>> 2019, 7c288a5b27934281d9ea8b5807bc727268b7001a), the snapshot before
>> atomic operations are used in the SMMUv3 command queue code.
>> After that I fixed the other code (not related to command queue
>> operations) from the latest code, so that no bug is introduced in Xen
>> because of using last year's commit.
> 
> Ok. That was definitely not clear from the commit message. Please make
> this clearer in the commit message.
> 

Ok. I will make this clearer in the commit message.

> Anyway, it means we need to do a full review of the code (rather than a
> light one) because of the hybrid model.
> 
> I am still a bit puzzled as to why it would require almost a restart of
> the implementation in order to sync with the latest code. Does it imply
> that you are mostly using the old code?
> 

The SMMUv3 code is divided into the parts below:

1. Low-level/high-level queue manipulation functions.
2. Context descriptor manipulation functions.
3. Stream table manipulation functions.
4. Interrupt handling.
5. Linux IOMMU API functions.
6. Driver initialisation functions (probe/reset).

The low-level/high-level queue manipulation functions are from the old
code; the rest is the new code wherever that was possible.

I started by porting the latest code, but there are many dependencies in
the queue manipulation functions, so we decided to use the old queue
manipulation functions.
As the queue manipulation functions are a big part of the code, it will
require a lot of effort and testing to sync with the latest code once
atomic operations are in place to use.

Once atomic operations are available in Xen, we have to merge the commit
below from Linux into Xen to bring Xen in sync with the Linux code:

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/iommu/arm-smmu-v3.c?h=v5.8&id=587e6c10a7ce89a5924fdbeff2ec524fbd6a124b

> Cheers,
> 
> -- 
> Julien Grall

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 11:34:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 11:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15693.38767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYSfu-0008NI-JM; Fri, 30 Oct 2020 11:34:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15693.38767; Fri, 30 Oct 2020 11:34:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYSfu-0008NB-GQ; Fri, 30 Oct 2020 11:34:30 +0000
Received: by outflank-mailman (input) for mailman id 15693;
 Fri, 30 Oct 2020 11:34:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CbTf=EF=linaro.org=masami.hiramatsu@srs-us1.protection.inumbo.net>)
 id 1kYSfs-0008KU-UF
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 11:34:28 +0000
Received: from mail-yb1-xb41.google.com (unknown [2607:f8b0:4864:20::b41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb630440-e620-4892-b95c-be87cc13a26c;
 Fri, 30 Oct 2020 11:34:22 +0000 (UTC)
Received: by mail-yb1-xb41.google.com with SMTP id n142so4832136ybf.7
 for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 04:34:22 -0700 (PDT)
X-Inumbo-ID: fb630440-e620-4892-b95c-be87cc13a26c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=tEVPYRxN4eP0njqUdgvQzP1KCxEliVxIDNVsR4aplJA=;
        b=zpVc9gGdOLEOrCPSJjgIOkMzw+k12NzOe3P7qqBULMOQ+erHcuwvb/+Fxl2JM8vj3x
         rSEoY+i0hBVMrfw1ncSegWuomLAeEXOkPOrUpIjEFSXn+kEgdCLWGHlQJiIq+8QKtfr7
         VA7nzm7rpnCpK32Tqtmblx5aHAvIb4otQoK6Wjb0OxM68kpKoHeVEQiblv0uPnOhQsd8
         6syUnmcCxN+vAjO7yYzMBwv4WynPKdRRIyxM8YAWNnvO4V61glc1RewJOy1cZ7lLEcst
         8+3L9cIqkZsTS+Yj8UnMaisMtRSmKWCr3FGAD+eHOe07EOmgYrEnyZORJ5SfJdudyPmK
         sUIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=tEVPYRxN4eP0njqUdgvQzP1KCxEliVxIDNVsR4aplJA=;
        b=Ih1pNLP+A6jStJ1PE5P3uDHbQBFjyLr+m02KjvUL7FQc9pCIYONTyR539Xo5HyU1KA
         N8FMYpGcbauA2T99VbVCcaXkkfX1My6+wxdAp6h2SFVQs6pWJ58rddiLo9bDAQewqZ36
         uXAvYibQ4RrCiYwvlxC+WV8GkL8X+LQUbu2EV0mxqJt0S8UT596g8xgMVxYYvj4zJxEG
         o+MeQVf+XERdLcG8Y0xvdSGTatBldkpfrOC5+KWx6zhuhLp12wyzD52RvTeeGugiS3x1
         FLQ+UV/bh320HgWKfeu0cBE6/D58hVWLTwGjhhZqV+SX4ycO2fuKuVkXeo1iuVPjImS+
         h2wg==
X-Gm-Message-State: AOAM532iVAjjzGCLQagrbfUSyIhadji8k28Bzva1uKGBcnF1DzJqvlww
	gIeuWSvd+ayxjydlZv0Oowpa31N+EIPxY5IvSC/Zog==
X-Google-Smtp-Source: ABdhPJyCQOG1RR4L8URruwKEwmpIplJdpAFn1czstD87XYBxGQ3hoV/Yo68snqZ3Dp0/cqXWj816iSIOtxOBbujh5nk=
X-Received: by 2002:a25:4e46:: with SMTP id c67mr2599406ybb.87.1604057661743;
 Fri, 30 Oct 2020 04:34:21 -0700 (PDT)
MIME-Version: 1.0
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
 <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s> <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
In-Reply-To: <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com>
From: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Date: Fri, 30 Oct 2020 20:34:10 +0900
Message-ID: <CAA93ih3Z-zxQ33gvr2C43i0J5XP3OBgUhTyMcwhe9zVj-uOONA@mail.gmail.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Oleksandr Tyshchenko <olekstysh@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Julien Grall <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Tim Deegan <tim@xen.org>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Oleksandr,

On Fri, 30 Oct 2020 at 6:14, Oleksandr Tyshchenko <olekstysh@gmail.com> wrote:
>
> Hi Stefano
>
> [sorry for the possible format issue]
>
> On Thu, Oct 29, 2020 at 9:53 PM Stefano Stabellini <sstabellini@kernel.org> wrote:
>>
>> On Thu, 29 Oct 2020, Oleksandr Tyshchenko wrote:
>> > On Thu, Oct 29, 2020 at 9:42 AM Masami Hiramatsu <masami.hiramatsu@linaro.org> wrote:
>> >       Hi Oleksandr,
>> >
>> > Hi Masami
>> >
>> > [sorry for the possible format issue]
>> >
>> >
>> >       I would like to try this on my arm64 board.
>> >
>> > Glad to hear you are interested in this topic.
>> >
>> >
>> >       According to your comments in the patch, I made this config file.
>> >       # cat debian.conf
>> >       name = "debian"
>> >       type = "pvh"
>> >       vcpus = 8
>> >       memory = 512
>> >       kernel = "/opt/agl/vmlinuz-5.9.0-1-arm64"
>> >       ramdisk = "/opt/agl/initrd.img-5.9.0-1-arm64"
>> >       cmdline = "console=hvc0 earlyprintk=xen root=/dev/xvda1 rw"
>> >       disk = [ '/opt/agl/debian.qcow2,qcow2,hda' ]
>> >       vif = [ 'mac=00:16:3E:74:3d:76,bridge=xenbr0' ]
>> >       virtio = 1
>> >       vdisk = [ 'backend=Dom0, disks=ro:/dev/sda1' ]
>> >
>> >       And I tried to boot a DomU, but I got the error below.
>> >
>> >       # xl create -c debian.conf
>> >       Parsing config from debian.conf
>> >       libxl: error: libxl_create.c:1863:domcreate_attach_devices: Domain
>> >       1:unable to add virtio_disk devices
>> >       libxl: error: libxl_domain.c:1218:destroy_domid_pci_done: Domain
>> >       1:xc_domain_pause failed
>> >       libxl: error: libxl_dom.c:39:libxl__domain_type: unable to get domain
>> >       type for domid=1
>> >       libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
>> >       1:Unable to destroy guest
>> >       libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
>> >       1:Destruction of domain failed
>> >
>> >
>> >       Could you tell me how I can test it?
>> >
>> >
>> > I assume it is due to the lack of the virtio-disk backend (which I
>> > haven't shared yet, as I focused on the IOREQ/DM support on Arm in the
>> > first place).
>> > Could you wait a little bit, I am going to share it soon.
>>
>> Do you have a quick-and-dirty hack you can share in the meantime? Even
>> just on github as a special branch? It would be very useful to be able
>> to have a test-driver for the new feature.
>
> Well, I will provide a branch on GitHub with our PoC virtio-disk backend
> by the end of this week. It will be possible to test this series with it.

Great! OK, I'll be waiting for the PoC backend.

Thank you!
-- 
Masami Hiramatsu


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 11:35:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 11:35:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15698.38780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYSgx-0008WO-4I; Fri, 30 Oct 2020 11:35:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15698.38780; Fri, 30 Oct 2020 11:35:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYSgx-0008WH-0x; Fri, 30 Oct 2020 11:35:35 +0000
Received: by outflank-mailman (input) for mailman id 15698;
 Fri, 30 Oct 2020 11:35:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ToGC=EF=redhat.com=ehabkost@srs-us1.protection.inumbo.net>)
 id 1kYSgv-0008W6-Gv
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 11:35:33 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 9572f6e5-3ad5-4e26-8678-060d743e8873;
 Fri, 30 Oct 2020 11:35:32 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-468-sRWFeep6MQO_UT9sgJT2kA-1; Fri, 30 Oct 2020 07:35:31 -0400
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id E7781805F02;
 Fri, 30 Oct 2020 11:35:28 +0000 (UTC)
Received: from localhost (ovpn-114-68.rdu2.redhat.com [10.10.114.68])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 1AD9555785;
 Fri, 30 Oct 2020 11:35:17 +0000 (UTC)
X-Inumbo-ID: 9572f6e5-3ad5-4e26-8678-060d743e8873
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604057732;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RXJ5yBCLZhaWw7FjwvS2z0av/pFVm25MD3/1mjWwAwA=;
	b=Gj4+MWXv4yuKsbSqs1PX3WtK5B6bR2BeHeRmlCIb1+jhgBjdI+El/HhDLdQ9dHv49XpiIX
	chelCa+XfdkRrO8+jjMlqE1Tf3FFAdW+E/RoR2Jy7FQQZvXtKJv1rD4OLpr9xa4O+2UpVZ
	QwnEp5N8YSweHYdtQySPT5ms9FrFMxk=
X-MC-Unique: sRWFeep6MQO_UT9sgJT2kA-1
Date: Fri, 30 Oct 2020 07:35:16 -0400
From: Eduardo Habkost <ehabkost@redhat.com>
To: =?utf-8?Q?Marc-Andr=C3=A9?= Lureau <marcandre.lureau@gmail.com>,
	Thomas Huth <thuth@redhat.com>,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
Cc: Wainer dos Santos Moschetta <wainersm@redhat.com>,
	QEMU <qemu-devel@nongnu.org>,
	Matthew Rosato <mjrosato@linux.ibm.com>,
	Paul Durrant <paul@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"open list:Block layer core" <qemu-block@nongnu.org>,
	Stefan Berger <stefanb@linux.vnet.ibm.com>,
	David Hildenbrand <david@redhat.com>,
	Markus Armbruster <armbru@redhat.com>,
	Halil Pasic <pasic@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	Anthony Perard <anthony.perard@citrix.com>,
	xen-devel@lists.xenproject.org,
	Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
	Thomas Huth <thuth@redhat.com>,
	Alex Williamson <alex.williamson@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>, John Snow <jsnow@redhat.com>,
	Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>,
	"Daniel P. Berrange" <berrange@redhat.com>,
	Cornelia Huck <cohuck@redhat.com>,
	Qemu-s390x list <qemu-s390x@nongnu.org>,
	Max Reitz <mreitz@redhat.com>, Igor Mammedov <imammedo@redhat.com>
Subject: --enable-xen on gitlab CI? (was Re: [PATCH 09/36] qdev: Make
 qdev_get_prop_ptr() get Object* arg)
Message-ID: <20201030113516.GP5733@habkost.net>
References: <20201029220246.472693-1-ehabkost@redhat.com>
 <20201029220246.472693-10-ehabkost@redhat.com>
 <CAJ+F1CKqo3D20=qSAovVKWCGz4otctaWnGC0O5p-Z1ZG9Pj_Mw@mail.gmail.com>
MIME-Version: 1.0
In-Reply-To: <CAJ+F1CKqo3D20=qSAovVKWCGz4otctaWnGC0O5p-Z1ZG9Pj_Mw@mail.gmail.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=ehabkost@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Fri, Oct 30, 2020 at 11:29:25AM +0400, Marc-André Lureau wrote:
> On Fri, Oct 30, 2020 at 2:07 AM Eduardo Habkost <ehabkost@redhat.com> wrote:
> 
> > Make the code more generic and not specific to TYPE_DEVICE.
> >
> > Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
> >
> 
> Nice cleanup! But it fails to build at the moment:
> 
> ../hw/block/xen-block.c:403:9: error: ‘dev’ undeclared (first use in this
> function); did you mean ‘vdev’?
>   403 |     if (dev->realized) {

Thanks for catching it!

What is necessary to make sure we have a CONFIG_XEN=y job in
gitlab CI?  Maybe just including xen-devel in some of the
container images is enough?

-- 
Eduardo



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 11:55:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 11:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15714.38796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT0A-0001uO-Pu; Fri, 30 Oct 2020 11:55:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15714.38796; Fri, 30 Oct 2020 11:55:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT0A-0001uH-MT; Fri, 30 Oct 2020 11:55:26 +0000
Received: by outflank-mailman (input) for mailman id 15714;
 Fri, 30 Oct 2020 11:55:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYT08-0001u7-TQ
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 11:55:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e19e9d48-e24c-4f41-8803-612f4a50ca6b;
 Fri, 30 Oct 2020 11:55:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F0C49AC65;
 Fri, 30 Oct 2020 11:55:22 +0000 (UTC)
X-Inumbo-ID: e19e9d48-e24c-4f41-8803-612f4a50ca6b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604058923;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PVYKfbprZc9tQbMTNw+ObjaU0eW6IiywFqtHh4qY80I=;
	b=EHQARCdBo5+9wHj2fq95QYQqzowbZ4n07d2IjH+1QbAO1HL9TYTmAcuNwZ5whVfekpFMBq
	gtr6Cj4cvmt0WIE7BSISAIWOx/Xvu1H5YeMH9ucA/IdhhEB155yPz12hCK2t1zoEunB1YX
	u38D9jFqCL0kH10Y9SGm705MQt0Re/w=
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
 <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
 <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
 <693bb422-ed13-9327-5f22-12bd6f192916@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <46d5f9cf-c01d-c0c2-777a-c97736633120@suse.com>
Date: Fri, 30 Oct 2020 12:55:22 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <693bb422-ed13-9327-5f22-12bd6f192916@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.2020 12:15, Jürgen Groß wrote:
> On 30.10.20 11:57, Julien Grall wrote:
>>
>>
>> On 30/10/2020 10:49, Jan Beulich wrote:
>>> On 30.10.2020 11:38, Julien Grall wrote:
>>>> On 22/10/2020 17:17, Jan Beulich wrote:
>>>>> On 22.10.2020 18:00, Roger Pau Monné wrote:
>>>>>> On Tue, Oct 20, 2020 at 04:10:09PM +0200, Jan Beulich wrote:
>>>>>>> --- a/xen/include/xen/event.h
>>>>>>> +++ b/xen/include/xen/event.h
>>>>>>> @@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
>>>>>>>     * Low-level event channel port ops.
>>>>>>>     *
>>>>>>>     * All hooks have to be called with a lock held which prevents 
>>>>>>> the channel
>>>>>>> - * from changing state. This may be the domain event lock, the 
>>>>>>> per-channel
>>>>>>> - * lock, or in the case of sending interdomain events also the 
>>>>>>> other side's
>>>>>>> - * per-channel lock. Exceptions apply in certain cases for the PV 
>>>>>>> shim.
>>>>>>> + * from changing state. This may be
>>>>>>> + * - the domain event lock,
>>>>>>> + * - the per-channel lock,
>>>>>>> + * - in the case of sending interdomain events the other side's 
>>>>>>> per-channel
>>>>>>> + *   lock,
>>>>>>> + * - in the case of sending non-global vIRQ-s the per-vCPU 
>>>>>>> virq_lock (in
>>>>>>> + *   combination with the ordering enforced through how the vCPU's
>>>>>>> + *   virq_to_evtchn[] gets updated),
>>>>>>> + * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
>>>>>>> + * Exceptions apply in certain cases for the PV shim.
>>>>>>
>>>>>> Having such a wide locking discipline looks dangerous to me, it's easy
>>>>>> to get things wrong without notice IMO.
>>>>>
>>>>> It is effectively only describing how things are (or were before
>>>>> XSA-343, getting restored here).
>>>>
>>>> I agree with Roger here, the new/old locking discipline is dangerous and
>>>> it is only a matter of time before it will bite us again.
>>>>
>>>> I think we should consider Juergen's series because the locking for the
>>>> event channel is easier to understand.
>>>
>>> We should, yes. The one thing I'm a little uneasy with is the
>>> new lock "variant" that gets introduced. Custom locking methods
>>> also are a common source of problems (which isn't to say I see
>>> any here).
>>
>> I am also uneasy with a new lock "variant". However, this is the best 
>> proposal I have seen so far to unblock the issue.
>>
>> I am open to other suggestion with simple locking discipline.
> 
> In theory my new lock variant could easily be replaced by a rwlock and
> using the try-variant for the readers only.

Well, only until we add check_lock() there, which I think
we should really have (not just on the slow paths, thanks to
the use of spin_lock() there). The read-vs-write properties
you're utilizing aren't applicable in the general case afaict,
and hence such checking would get in the way.

> The disadvantage of that approach would be a growth of struct evtchn.

Wasn't it you who had pointed out to me the aligned(64) attribute
on the struct (in a different context), which afaict would subsume
any possible growth?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:00:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15719.38808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT4t-0002ni-IC; Fri, 30 Oct 2020 12:00:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15719.38808; Fri, 30 Oct 2020 12:00:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT4t-0002nb-EL; Fri, 30 Oct 2020 12:00:19 +0000
Received: by outflank-mailman (input) for mailman id 15719;
 Fri, 30 Oct 2020 12:00:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYT4s-0002nV-5d
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:00:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7946771c-ec0b-41b9-9dd2-ea30ef44ff6f;
 Fri, 30 Oct 2020 12:00:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 922E1B1D8;
 Fri, 30 Oct 2020 12:00:16 +0000 (UTC)
X-Inumbo-ID: 7946771c-ec0b-41b9-9dd2-ea30ef44ff6f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604059216;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=a6v5bSldP6LxUk4SPvkNoZRxRbEP6y7Xg4TcgakFXtw=;
	b=s4ApBVOkh33pr8QoSOFCHAyWyTnTFQdUtAORI7kJtnxYKR/SWcfDq4gVpFjzhb/SNBb3lo
	NVCX81m+/5VQoQo9A+S1icMdLqoT2ur/TzHJnhyiK0M7Jroxt/e0xGhnjFBXN0Opwj5i5h
	Sza/Z8jy/AlIvXi6hS5H9mwS33hg0Qk=
Subject: Re: [PATCH v2 6/8] evtchn: convert vIRQ lock to an r/w one
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53a2fc39-1bf1-38ce-bbdf-b512c5279b4f@suse.com>
 <6dec1d48-b8c8-6122-087c-38f36f30596e@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <33ff45e9-d869-9262-29e0-fa66e3ffb726@suse.com>
Date: Fri, 30 Oct 2020 13:00:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <6dec1d48-b8c8-6122-087c-38f36f30596e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.10.2020 11:57, Julien Grall wrote:
> On 20/10/2020 15:10, Jan Beulich wrote:
>> --- a/xen/common/event_channel.c
>> +++ b/xen/common/event_channel.c
>> @@ -449,6 +449,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t
>>   
>>       spin_unlock_irqrestore(&chn->lock, flags);
>>   
>> +    /*
>> +     * If by any, the update of virq_to_evtchn[] would need guarding by
>> +     * virq_lock, but since this is the last action here, there's no strict
>> +     * need to acquire the lock. Hnece holding event_lock isn't helpful
> 
> s/Hnece/Hence/
> 
>> +     * anymore at this point, but utilize that its unlocking acts as the
>> +     * otherwise necessary smp_wmb() here.
>> +     */
>>       v->virq_to_evtchn[virq] = bind->port = port;
> 
> I think all access to v->virq_to_evtchn[virq] should use ACCESS_ONCE() 
> or {read, write}_atomic() to avoid any store/load tearing.

IOW you're suggesting this to be the subject of a separate patch?
I don't think such a conversion belongs here (nor even in this
series, seeing the much wider applicability of such a change
throughout the code base). Or are you seeing anything here which
would require such a conversion to be done as a prereq?

Jan
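
For context, the store/load tearing Julien is worried about can be illustrated
with a self-contained sketch of an ACCESS_ONCE-style macro (modelled on the
Linux/Xen one). The `virq_to_evtchn` array and helper names here are
hypothetical stand-ins, not the real per-vCPU code:

```c
#include <assert.h>

/* Minimal ACCESS_ONCE in the style of the Linux/Xen macro: the volatile
 * cast forces the compiler to emit exactly one load or store for the
 * access, so it cannot tear it into multiple smaller accesses, re-read
 * it, or cache it in a register across the statement. */
#define ACCESS_ONCE(x) (*(volatile __typeof__(x) *)&(x))

/* Hypothetical stand-in for v->virq_to_evtchn[virq]. */
static int virq_to_evtchn[8];

static void bind_virq(int virq, int port)
{
    /* Single untorn store: a concurrent lockless reader observes either
     * the old value or the new one, never a mix of bytes from both. */
    ACCESS_ONCE(virq_to_evtchn[virq]) = port;
}

static int lookup_virq(int virq)
{
    /* Single untorn load, for the same reason. */
    return ACCESS_ONCE(virq_to_evtchn[virq]);
}
```

This only constrains the compiler; it provides no ordering by itself, which is
why the quoted patch comment still relies on the unlock acting as smp_wmb().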


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:00:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:00:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15721.38820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT59-0002tG-Pr; Fri, 30 Oct 2020 12:00:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15721.38820; Fri, 30 Oct 2020 12:00:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT59-0002t7-Mf; Fri, 30 Oct 2020 12:00:35 +0000
Received: by outflank-mailman (input) for mailman id 15721;
 Fri, 30 Oct 2020 12:00:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYT58-0002sp-IW
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:00:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7265c6af-9123-44cf-97a5-2c46e8ca89a1;
 Fri, 30 Oct 2020 12:00:29 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYT52-0007XA-OG; Fri, 30 Oct 2020 12:00:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYT52-0004Cd-Cs; Fri, 30 Oct 2020 12:00:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYT52-0002Vx-CN; Fri, 30 Oct 2020 12:00:28 +0000
X-Inumbo-ID: 7265c6af-9123-44cf-97a5-2c46e8ca89a1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w4W5Lw1qlXLoWPVScd/8LaIjHAEv69QuHI7uct3PaTQ=; b=y7XTnDIe9ovvrYRmFeOUawmCpG
	7EY/FDBZFvvww0FrLqjCPFG6A6gsb/2RiC793EOffCHUF5a3YcnFlS7yMiu4pQYQ1AOqBeF9bPrw6
	Itfns+jxATIEAuZoHNa6ALpueF+fv9S4TfYK85tjEI+3ECxaKSBLSnoTnPEgeMflYYz4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156293-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 156293: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=bde3f94035b0e5a724853544d65d00536e1889b2
X-Osstest-Versions-That:
    linux=52f6ded2a377ac4f191c84182488e454b1386239
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 12:00:28 +0000

flight 156293 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156293/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 155963
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 155963
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 155963
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 155963
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 155963
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 155963
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 155963
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 155963
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 155963
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 155963
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 155963
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                bde3f94035b0e5a724853544d65d00536e1889b2
baseline version:
 linux                52f6ded2a377ac4f191c84182488e454b1386239

Last test of basis   155963  2020-10-18 16:39:51 Z   11 days
Testing same since   156293  2020-10-29 09:13:51 Z    1 days    1 attempts

------------------------------------------------------------
386 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   52f6ded2a377..bde3f94035b0  bde3f94035b0e5a724853544d65d00536e1889b2 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:04:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:04:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15738.38835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT8b-00037t-BG; Fri, 30 Oct 2020 12:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15738.38835; Fri, 30 Oct 2020 12:04:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT8b-00037m-7Y; Fri, 30 Oct 2020 12:04:09 +0000
Received: by outflank-mailman (input) for mailman id 15738;
 Fri, 30 Oct 2020 12:04:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+yJr=EF=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kYT8Z-00037h-Lh
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:04:07 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88fb0a63-9771-4a42-a02c-f4ab0f034460;
 Fri, 30 Oct 2020 12:04:06 +0000 (UTC)
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604059442132732.8991634265685;
 Fri, 30 Oct 2020 05:04:02 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+yJr=EF=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
	id 1kYT8Z-00037h-Lh
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:04:07 +0000
X-Inumbo-ID: 88fb0a63-9771-4a42-a02c-f4ab0f034460
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 88fb0a63-9771-4a42-a02c-f4ab0f034460;
	Fri, 30 Oct 2020 12:04:06 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1604059443; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=Uv5wG1prpfxm3KYu0+ROQ4jEv0f0V99LjrCOYMi4Gzuy8QdSjOrx7JWZ/cRBvldV4d+/uYlMadmdrump/vPfbqVSW46XgUp1R1skU8XNewf1JoXfnrSCxTQG7bPlIDR/9Iu1Gfy5KRkVBuTzyZMha8vmrHTlWFSS8rFcuJOuHWg=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604059443; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=tfhoj+r0ujv6sRMVEonbmGPvMviX5L0vNpMhsrP8zmE=; 
	b=GzaYGmut+Mz0IoUBEWJ0LFoZNklQ4X3sq/5Y2A5+SWODH/irtRWA3EG5uTt5e7XuAjl41tzllDAmTnmrZdcWDk5CNj/k4y0GrloI3T9f/QbedMAJ8349jCXgPhQ7h4iKmoIy8jzUo6wwkqGr1vR2L8YWYxuPF49KZghs3IBWkeY=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604059443;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=From:To:Cc:Message-ID:Subject:Date:MIME-Version:Content-Type:Content-Transfer-Encoding;
	bh=tfhoj+r0ujv6sRMVEonbmGPvMviX5L0vNpMhsrP8zmE=;
	b=Baju2x19M0lxQNVACgkDVIhleh0Tik8UgAYWpa5RIV+RJ1yitpU4GO50zufZbddo
	L/kEfreSZVjY1Yjnydt0TJq4NPdelmA527oww7LEhM2W75GKzO/CDBB28rw8d0xsrRB
	RtXwcA10sZQxcc53vRC34GAQz6mPNjBTz0FzDvBw=
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by mx.zohomail.com
	with SMTPS id 1604059442132732.8991634265685; Fri, 30 Oct 2020 05:04:02 -0700 (PDT)
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <cover.1603725003.git.frederic.pierret@qubes-os.org>
Subject: [PATCH v1 0/2] Improve reproducible builds
Date: Fri, 30 Oct 2020 13:03:49 +0100
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External

These two fixes improve the reproducibility of the resulting Xen binaries.

Frédéric Pierret (fepitre) (2):
  No insert of the build timestamp into the x86 xen efi binary
  xen/common/makefile: remove gzip timestamp

 xen/arch/x86/Makefile | 1 +
 xen/common/Makefile   | 2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:04:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:04:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15739.38846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT8o-0003BK-Jc; Fri, 30 Oct 2020 12:04:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15739.38846; Fri, 30 Oct 2020 12:04:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT8o-0003BD-G5; Fri, 30 Oct 2020 12:04:22 +0000
Received: by outflank-mailman (input) for mailman id 15739;
 Fri, 30 Oct 2020 12:04:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+yJr=EF=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kYT8n-0003At-AV
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:04:21 +0000
Received: from sender4-of-o53.zoho.com (unknown [136.143.188.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 811e4464-7ffd-47c4-917d-48109bc8779a;
 Fri, 30 Oct 2020 12:04:20 +0000 (UTC)
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604059448395617.7980217961564;
 Fri, 30 Oct 2020 05:04:08 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+yJr=EF=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
	id 1kYT8n-0003At-AV
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:04:21 +0000
X-Inumbo-ID: 811e4464-7ffd-47c4-917d-48109bc8779a
Received: from sender4-of-o53.zoho.com (unknown [136.143.188.53])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 811e4464-7ffd-47c4-917d-48109bc8779a;
	Fri, 30 Oct 2020 12:04:20 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1604059449; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=REVGGL4++4PneHfwBVedPiZUkD7hCTO7jzyeWeS4fzbtA502l+xodIUB9TvARH2AbNszD5vMSi8+KoYYdurWJ8V6v3z95FMgly0pHxoqGJ0bS5lQQpUFTymezaIjEFC/2ziIS+ggtAwASOJRKSDR/aC+hKV2L5kqV49iA1iNOVU=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604059449; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=dilez5HdGh46lx7AxEt9ZLygHthnDA5O3R/mUATC4fA=; 
	b=jjxrA5RxZcAI+IrFT1rAseh+F6GPuLT1uJBeQrNe/WQEqnGQLj311xIugxWPI3dMCjAf3OVWXTkuHHBZ1kMsMpkAnTa3XgZbpwT4+biQSTXy08QGwDDoItC1dT7uVr9LVLGpmPVXFcyX7AnniuOwLagFNKqDJe3zdmj2QGmMddA=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604059449;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=From:To:Cc:Message-ID:Subject:Date:In-Reply-To:References:MIME-Version:Content-Type:Content-Transfer-Encoding;
	bh=dilez5HdGh46lx7AxEt9ZLygHthnDA5O3R/mUATC4fA=;
	b=hTINhEBThGzORRa7QzQ2WqFzH6/fns152PJVtp6bHWAQ3K8m52XVCAqQYBlv2bNv
	avKRKuib1VT6cw+YCIYBju1nwX0crIMX5Iy9vE4J/U9MC14iTA8UVfCngJ6YwrGPxxu
	9AfFmUhAc0soMNZjx3D8S2N4z/bmT2UNCtiB6P04=
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by mx.zohomail.com
	with SMTPS id 1604059448395617.7980217961564; Fri, 30 Oct 2020 05:04:08 -0700 (PDT)
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Message-ID: <64fc67bc2227d6cf92e079228c9f8d2d6404b001.1603725003.git.frederic.pierret@qubes-os.org>
Subject: [PATCH v1 1/2] No insert of the build timestamp into the x86 xen efi binary
Date: Fri, 30 Oct 2020 13:03:50 +0100
X-Mailer: git-send-email 2.26.2
In-Reply-To: <cover.1603725003.git.frederic.pierret@qubes-os.org>
References: <cover.1603725003.git.frederic.pierret@qubes-os.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External

This is for improving reproducible builds.

Signed-off-by: Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
---
 xen/arch/x86/Makefile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index b388861679..f5a529afd5 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -170,6 +170,7 @@ EFI_LDFLAGS += --major-image-version=$(XEN_VERSION)
 EFI_LDFLAGS += --minor-image-version=$(XEN_SUBVERSION)
 EFI_LDFLAGS += --major-os-version=2 --minor-os-version=0
 EFI_LDFLAGS += --major-subsystem-version=2 --minor-subsystem-version=0
+EFI_LDFLAGS += --no-insert-timestamp
 
 # Check if the compiler supports the MS ABI.
 export XEN_BUILD_EFI := $(shell $(CC) $(XEN_CFLAGS) -c efi/check.c -o efi/check.o 2>/dev/null && echo y)
-- 
2.26.2
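For readers unfamiliar with the field this patch targets: `--no-insert-timestamp` tells the linker to write zero into the 4-byte TimeDateStamp field of the PE/COFF file header, which sits 8 bytes past the "PE\0\0" signature pointed to by `e_lfanew` at offset 0x3C. A minimal sketch of reading that field — the `fake_pe` helper below is a made-up stand-in for a real xen.efi image, not part of the patch:

```python
import struct

def pe_timestamp(data: bytes) -> int:
    """Return the COFF TimeDateStamp of a PE image (0 under --no-insert-timestamp)."""
    # e_lfanew at offset 0x3C points at the "PE\0\0" signature.
    (pe_off,) = struct.unpack_from("<I", data, 0x3C)
    assert data[pe_off:pe_off + 4] == b"PE\0\0", "not a PE image"
    # The COFF header follows the signature; TimeDateStamp is its third field.
    (ts,) = struct.unpack_from("<I", data, pe_off + 8)
    return ts

def fake_pe(ts: int) -> bytes:
    """Hypothetical header-only image, just enough bytes for the field layout."""
    hdr = bytearray(0x60)
    hdr[0:2] = b"MZ"
    struct.pack_into("<I", hdr, 0x3C, 0x40)        # e_lfanew
    hdr[0x40:0x44] = b"PE\0\0"
    struct.pack_into("<HH", hdr, 0x44, 0x8664, 1)  # Machine, NumberOfSections
    struct.pack_into("<I", hdr, 0x48, ts)          # TimeDateStamp
    return bytes(hdr)

print(pe_timestamp(fake_pe(0)))           # reproducible build: 0
print(pe_timestamp(fake_pe(1604059443)))  # timestamped build: 1604059443
```

The same layout is what tools like `objdump -p` report as the "Time/Date" of a PE binary.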




From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:04:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:04:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15741.38859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT92-0003I4-1P; Fri, 30 Oct 2020 12:04:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15741.38859; Fri, 30 Oct 2020 12:04:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYT91-0003Hu-UR; Fri, 30 Oct 2020 12:04:35 +0000
Received: by outflank-mailman (input) for mailman id 15741;
 Fri, 30 Oct 2020 12:04:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+yJr=EF=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kYT90-0003HV-Cb
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:04:34 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b2810f1-8cf9-481b-b5fc-5821381392f9;
 Fri, 30 Oct 2020 12:04:33 +0000 (UTC)
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604059452261779.7132315246392;
 Fri, 30 Oct 2020 05:04:12 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=+yJr=EF=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
	id 1kYT90-0003HV-Cb
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:04:34 +0000
X-Inumbo-ID: 9b2810f1-8cf9-481b-b5fc-5821381392f9
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9b2810f1-8cf9-481b-b5fc-5821381392f9;
	Fri, 30 Oct 2020 12:04:33 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1604059453; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=h2W/bm1nQCWlg1ACXWIM/W7jYwE1/M8+cvSaYI4R6hlLHfbg3mLNQ0qED6cn16J08ByKX0q+h/pOi+FUMTr+vBQmTDELmQuN5FZ45IWhm3TUNsxeLu0QZJcvjeRhrpDpbXcxzYAQAZstas6ubUUEFstJDxE+6UR+d+91ykqs43s=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604059453; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=Y6WrjRh0df8S3a0pQK8pS6X5+OtEoeqlIRQoXLZLNJA=; 
	b=U0DE2aI/bQpNX0gmyCOwRirI+uW3ElMkcPum6l2F2IoffHAUvty8LZcvMlcdAOYxmblSZjgVRmhtN9thnarFqaw0mTThNeFQwMk+2rrvJ+0S8qO4MCMZdcNLXiIx7z2pw+5uQVWb1dgGlkRLwSI3aXjgEpkEYy9HXOGrn4tp+Tk=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604059453;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=From:To:Cc:Message-ID:Subject:Date:In-Reply-To:References:MIME-Version:Content-Type:Content-Transfer-Encoding;
	bh=Y6WrjRh0df8S3a0pQK8pS6X5+OtEoeqlIRQoXLZLNJA=;
	b=Vmq3Ol+rLzBTjjG3Pd/J4NGWmWdwsU0fCcxHHKkDlE+xgZFT3SiAC2utmW+D4Cov
	aVZxc0D69HbX+tbI4T1xJqmk8WhQh4Rs41K0Hk2I2aCLf108mpKW+zjJV+st8/VCTny
	A4zB8KgMaan1vr6l/rv+aizIhcCsSFrrXEyLuNWw=
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by mx.zohomail.com
	with SMTPS id 1604059452261779.7132315246392; Fri, 30 Oct 2020 05:04:12 -0700 (PDT)
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Message-ID: <29b0632e30aba9bc2e071f572fb1067108bcae8c.1603725003.git.frederic.pierret@qubes-os.org>
Subject: [PATCH v1 2/2] xen/common/makefile: remove gzip timestamp
Date: Fri, 30 Oct 2020 13:03:51 +0100
X-Mailer: git-send-email 2.26.2
In-Reply-To: <cover.1603725003.git.frederic.pierret@qubes-os.org>
References: <cover.1603725003.git.frederic.pierret@qubes-os.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External

This is for improving reproducible builds.

Signed-off-by: Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
---
 xen/common/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/Makefile b/xen/common/Makefile
index 06881d023c..32cd650ba8 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -77,7 +77,7 @@ obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
 
 CONF_FILE := $(if $(patsubst /%,,$(KCONFIG_CONFIG)),$(XEN_ROOT)/xen/)$(KCONFIG_CONFIG)
 config.gz: $(CONF_FILE)
-	gzip -c $< >$@
+	gzip -n -c $< >$@
 
 config_data.o: config.gz
 
-- 
2.26.2
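The `-n` flag matters because the gzip file format embeds the source file's modification time in a 4-byte MTIME header field; `gzip -n` stores zero there (and omits the original name), so identical input always compresses to identical bytes. The effect can be sketched with Python's `gzip` module, whose `mtime` parameter maps onto the same header field:

```python
import gzip
import io

data = b"CONFIG_XSM=y\nCONFIG_HAS_DEVICE_TREE=y\n"  # stand-in for a .config

def gz(payload: bytes, mtime: int) -> bytes:
    """Compress payload with a fixed MTIME header field."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
        f.write(payload)
    return buf.getvalue()

# With MTIME pinned to 0 (what `gzip -n` gives you), output is deterministic.
assert gz(data, 0) == gz(data, 0)
# With the real mtime embedded (plain `gzip -c`), two builds of identical
# content differ in the header even though the compressed payload is the same.
assert gz(data, 1604059443) != gz(data, 1604059449)
```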




From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:08:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15754.38871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTD5-0003ZM-JO; Fri, 30 Oct 2020 12:08:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15754.38871; Fri, 30 Oct 2020 12:08:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTD5-0003ZF-Fb; Fri, 30 Oct 2020 12:08:47 +0000
Received: by outflank-mailman (input) for mailman id 15754;
 Fri, 30 Oct 2020 12:08:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYTD4-0003ZA-CY
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:08:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54d39326-7a1e-4829-88de-1f68dd07a060;
 Fri, 30 Oct 2020 12:08:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C1B3FB1A6;
 Fri, 30 Oct 2020 12:08:44 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kYTD4-0003ZA-CY
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:08:46 +0000
X-Inumbo-ID: 54d39326-7a1e-4829-88de-1f68dd07a060
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 54d39326-7a1e-4829-88de-1f68dd07a060;
	Fri, 30 Oct 2020 12:08:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604059724;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4W7X/iL6SHyoV2HFBiU0f1MfNU1Gk1c3QN/6fMVpXSE=;
	b=tD1/BUDQqXHqmqqQmO+yafkIBK/Y527xSM3bBFdSJ+P51JE5tuHdAyNAqOx7SX7SuuQ/QO
	Pt3nX3AbiExBtMCF4DexZt/DE4GBIy4nnBHcAz5OcevBAukdrozTWeCMUsjToUF8ufUSFu
	2spSZQsGHT4VVkINyRSXuzfBhR5SGPw=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C1B3FB1A6;
	Fri, 30 Oct 2020 12:08:44 +0000 (UTC)
Subject: Re: [PATCH v1 1/2] No insert of the build timestamp into the x86 xen
 efi binary
To: =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
 <frederic.pierret@qubes-os.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1603725003.git.frederic.pierret@qubes-os.org>
 <64fc67bc2227d6cf92e079228c9f8d2d6404b001.1603725003.git.frederic.pierret@qubes-os.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <93b0b06e-cb73-66eb-3535-e7ab2ca60bf8@suse.com>
Date: Fri, 30 Oct 2020 13:08:44 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <64fc67bc2227d6cf92e079228c9f8d2d6404b001.1603725003.git.frederic.pierret@qubes-os.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.2020 13:03, Frédéric Pierret (fepitre) wrote:

> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -170,6 +170,7 @@ EFI_LDFLAGS += --major-image-version=$(XEN_VERSION)
>  EFI_LDFLAGS += --minor-image-version=$(XEN_SUBVERSION)
>  EFI_LDFLAGS += --major-os-version=2 --minor-os-version=0
>  EFI_LDFLAGS += --major-subsystem-version=2 --minor-subsystem-version=0
> +EFI_LDFLAGS += --no-insert-timestamp

Generally I prefer binaries to carry timestamps, when they are
intended to do so (i.e. when they have a respective field). So
I think if no timestamp is wanted, that should be an option
(not sure about the default).

This said, I didn't think time stamps got meaningfully in the
way of reproducible builds - ignoring the minor differences
caused by them, especially when they sit at well-known offsets
in the binaries, shouldn't be a big deal.
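(For illustration of the "well-known offsets" point - this is generic
PE/COFF layout, not Xen-specific code: the TimeDateStamp field can be
located from header fields alone, so a comparison tool can find and mask
it without any per-build knowledge.)

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Little-endian 32-bit read, independent of host endianness. */
static uint32_t rd32(const uint8_t *p)
{
    return p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Return the COFF TimeDateStamp of a PE image held in memory.  The
 * offset of the "PE\0\0" signature is stored at file offset 0x3c;
 * TimeDateStamp is the third COFF header field (after Machine and
 * NumberOfSections), i.e. 8 bytes past the signature. */
uint32_t pe_timestamp(const uint8_t *image)
{
    uint32_t pe_off = rd32(image + 0x3c);

    assert(memcmp(image + pe_off, "PE\0\0", 4) == 0);
    return rd32(image + pe_off + 8);
}
```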

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:08:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:08:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15755.38882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTDA-0003bN-QY; Fri, 30 Oct 2020 12:08:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15755.38882; Fri, 30 Oct 2020 12:08:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTDA-0003bG-Nb; Fri, 30 Oct 2020 12:08:52 +0000
Received: by outflank-mailman (input) for mailman id 15755;
 Fri, 30 Oct 2020 12:08:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYTD9-0003ao-II
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:08:51 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2200e26-f295-475b-a37e-4fc2f6127583;
 Fri, 30 Oct 2020 12:08:50 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYTD6-0007if-UM; Fri, 30 Oct 2020 12:08:48 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYTD6-0004ip-ML; Fri, 30 Oct 2020 12:08:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
	id 1kYTD9-0003ao-II
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:08:51 +0000
X-Inumbo-ID: d2200e26-f295-475b-a37e-4fc2f6127583
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id d2200e26-f295-475b-a37e-4fc2f6127583;
	Fri, 30 Oct 2020 12:08:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Gm3nxBLZ1PiF/ODv6qA2U89RYld7CiI8AA/Tzobt7u4=; b=GBqgaBFiCJWZaxYR/oHDWuXiTY
	QQj9ulZqb2TQ8LV9IJsc6SmUB+NRXFujbveSWe7UtKUQTssdp9rNBcUSPs4TOhtAKb76dhhTNIg3n
	9gGnyU0tPe5S6tP+GcjbF5XDX/mPB3ZsSxBysR5Tkw+frtQX+UuXErXBsvBDsR35DmN8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYTD6-0007if-UM; Fri, 30 Oct 2020 12:08:48 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
	by xenbits.xenproject.org with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
	(Exim 4.92)
	(envelope-from <julien@xen.org>)
	id 1kYTD6-0004ip-ML; Fri, 30 Oct 2020 12:08:48 +0000
Subject: Re: [PATCH v2 6/8] evtchn: convert vIRQ lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53a2fc39-1bf1-38ce-bbdf-b512c5279b4f@suse.com>
 <6dec1d48-b8c8-6122-087c-38f36f30596e@xen.org>
 <33ff45e9-d869-9262-29e0-fa66e3ffb726@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <60559ccf-f8f5-4c54-0867-b8a893df3f0c@xen.org>
Date: Fri, 30 Oct 2020 12:08:45 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <33ff45e9-d869-9262-29e0-fa66e3ffb726@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 30/10/2020 12:00, Jan Beulich wrote:
> On 30.10.2020 11:57, Julien Grall wrote:
>> On 20/10/2020 15:10, Jan Beulich wrote:
>>> --- a/xen/common/event_channel.c
>>> +++ b/xen/common/event_channel.c
>>> @@ -449,6 +449,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t
>>>    
>>>        spin_unlock_irqrestore(&chn->lock, flags);
>>>    
>>> +    /*
>>> +     * If by any, the update of virq_to_evtchn[] would need guarding by
>>> +     * virq_lock, but since this is the last action here, there's no strict
>>> +     * need to acquire the lock. Hnece holding event_lock isn't helpful
>>
>> s/Hnece/Hence/
>>
>>> +     * anymore at this point, but utilize that its unlocking acts as the
>>> +     * otherwise necessary smp_wmb() here.
>>> +     */
>>>        v->virq_to_evtchn[virq] = bind->port = port;
>>
>> I think all access to v->virq_to_evtchn[virq] should use ACCESS_ONCE()
>> or {read, write}_atomic() to avoid any store/load tearing.
> 
> IOW you're suggesting this to be the subject of a separate patch?
> I don't think such a conversion belongs here (nor even in this
> series, seeing the much wider applicability of such a change
> throughout the code base).
> Or are you seeing anything here which
> would require such a conversion to be done as a prereq?

Yes, your comment implies that it is fine to write to virq_to_evtchn[]
without the lock taken. However, this is *only* valid if the compiler
doesn't tear the load/store.

So this is a prerequisite for your comment to be valid.
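The tearing concern can be sketched in plain stand-alone C (this is an
illustration, not Xen's actual ACCESS_ONCE()/write_atomic() machinery;
the variable and function names are made up for the example):

```c
#include <stdint.h>

/* A plain store may be split by the compiler into several narrower
 * stores ("store tearing"), so a lock-free reader could observe a
 * half-updated value.  Casting the access through volatile, as
 * Linux-style READ_ONCE()/WRITE_ONCE() do, forces a single
 * full-width access. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))

/* Hypothetical stand-in for v->virq_to_evtchn[virq]. */
static uint32_t virq_to_evtchn_slot;

void publish_port(uint32_t port)
{
    WRITE_ONCE(virq_to_evtchn_slot, port);  /* one untearable store */
}

uint32_t lookup_port(void)
{
    return READ_ONCE(virq_to_evtchn_slot);  /* one untearable load */
}
```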

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:11:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:11:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15764.38895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTFa-0004Uk-8Z; Fri, 30 Oct 2020 12:11:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15764.38895; Fri, 30 Oct 2020 12:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTFa-0004Ud-52; Fri, 30 Oct 2020 12:11:22 +0000
Received: by outflank-mailman (input) for mailman id 15764;
 Fri, 30 Oct 2020 12:11:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYTFZ-0004UY-6P
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:11:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03f43499-f522-4ef7-9a0a-3f0e2c9e7cb3;
 Fri, 30 Oct 2020 12:11:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 37D43AAB2;
 Fri, 30 Oct 2020 12:11:19 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kYTFZ-0004UY-6P
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:11:21 +0000
X-Inumbo-ID: 03f43499-f522-4ef7-9a0a-3f0e2c9e7cb3
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 03f43499-f522-4ef7-9a0a-3f0e2c9e7cb3;
	Fri, 30 Oct 2020 12:11:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604059879;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Vo7f5wZB1XrJCx2b+xFImObTT4SECdsEuIeC/WuWGxg=;
	b=jYSgHg9XaQ0Y5yfGAot4kVSuVdaEqvetdCNILzUbr0K31jYiDqrvpogw+hzRxoCigRToL0
	A+AZNLUJUf+uMjSK7zBPrHe87FZ44BGt5BtbDPe1SBEQzXjHonScJl26bM/JAROsPXnJcr
	CpEZfVX9QaFfUKzZKZusPMouTUXxIuQ=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 37D43AAB2;
	Fri, 30 Oct 2020 12:11:19 +0000 (UTC)
Subject: Re: [PATCH v1 2/2] xen/common/makefile: remove gzip timestamp
To: =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
 <frederic.pierret@qubes-os.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1603725003.git.frederic.pierret@qubes-os.org>
 <29b0632e30aba9bc2e071f572fb1067108bcae8c.1603725003.git.frederic.pierret@qubes-os.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a9608920-981f-cfdb-f6c6-fca7e9b68be6@suse.com>
Date: Fri, 30 Oct 2020 13:11:18 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <29b0632e30aba9bc2e071f572fb1067108bcae8c.1603725003.git.frederic.pierret@qubes-os.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.2020 13:03, Frédéric Pierret (fepitre) wrote:
> This is for improving reproducible builds.
> 
> Signed-off-by: Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>

Acked-by: Jan Beulich <jbeulich@suse.com>

However, I'd like to ask for the title to mention whose gzip
time stamp it is that gets squashed. Perhaps "xen: don't
have timestamp inserted in config.gz"?
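(Background, per RFC 1952: the gzip member header stores the input's
modification time, MTIME, in bytes 4-7; "gzip -n" writes zeroes there,
which is what makes config.gz reproducible.  Squashing it after the
fact would be a 4-byte in-place edit - an illustrative sketch, not
code from the patch:)

```c
#include <stddef.h>
#include <stdint.h>

/* Zero the MTIME field of a gzip stream in place.  Bytes 0-1 are the
 * magic (0x1f 0x8b), byte 2 the compression method, byte 3 the flags,
 * and bytes 4-7 the little-endian MTIME. */
void gzip_squash_mtime(uint8_t *gz, size_t len)
{
    if (len >= 8 && gz[0] == 0x1f && gz[1] == 0x8b)
        gz[4] = gz[5] = gz[6] = gz[7] = 0;
}
```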

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:23:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:23:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15771.38907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTRH-0005Uf-B7; Fri, 30 Oct 2020 12:23:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15771.38907; Fri, 30 Oct 2020 12:23:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTRH-0005UY-7t; Fri, 30 Oct 2020 12:23:27 +0000
Received: by outflank-mailman (input) for mailman id 15771;
 Fri, 30 Oct 2020 12:23:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1N+=EF=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kYTRF-0005UT-9A
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:23:25 +0000
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ec2294a-fc3b-4917-821d-2e6810213d68;
 Fri, 30 Oct 2020 12:23:24 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 686D45C00F8;
 Fri, 30 Oct 2020 08:23:24 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 30 Oct 2020 08:23:24 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 309943064680;
 Fri, 30 Oct 2020 08:23:23 -0400 (EDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=e1N+=EF=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
	id 1kYTRF-0005UT-9A
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:23:25 +0000
X-Inumbo-ID: 3ec2294a-fc3b-4917-821d-2e6810213d68
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 3ec2294a-fc3b-4917-821d-2e6810213d68;
	Fri, 30 Oct 2020 12:23:24 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
	by mailout.nyi.internal (Postfix) with ESMTP id 686D45C00F8;
	Fri, 30 Oct 2020 08:23:24 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
  by compute3.internal (MEProxy); Fri, 30 Oct 2020 08:23:24 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=pYlWoI
	HBkx76rcoqE72ZS3cHrDn/k/jdYN8WkyOhYOg=; b=C4AJSjmLgNRYmLCN3koEyK
	JIJ4P1WtJBvOrV+jCFnxYIjC6wf4kB76ht7kwHSQSZYfm/8v16IW3zdZV2JnvBfm
	u+hWpSme6giyWIKxmDY9vA0hHjUNoqJQMvlfzcMN0b3JhbGCEIF1oAlMVwSJIqa1
	Et0j9QJKKfdsl7oKMi0foZupWwZpXSSnmCWH2i9dcXuV/a2UqzFJ8pCceV/gFAqA
	fMBG+L1RKkYU+BLNyf3ObzGiTWoaH9GbNVh5inzEp0DqHgQ8HB0Ky5AHAOhNuX+Q
	5NVVVmqMKn3eK1e7DQR5S6e5c0fllw8YSi2K3OgOZ1Agph9Uc4Gag1RMgLF0o95w
	==
X-ME-Sender: <xms:vAWcXxchoVnHUzTrsxsZaW3zMddJq411tkCdK0gryC3MO05-cPPLyA>
    <xme:vAWcX_Pcf6gLSIPSkCryJ5nOvdX9T52Es5nSm_e6Iltdf-6cSvMfcfNK39TP5Wcpq
    MA5H0fwDlz4ZQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrleehgdefjecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgeetvefh
    ueefueevkeduvdfftefhueetuddvkeehtdelvdevudffvefhteeluefgnecuffhomhgrih
    hnpehrvghprhhoughutghisghlvgdqsghuihhlughsrdhorhhgnecukfhppeeluddrieeg
    rddujedtrdekleenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfh
    hrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:vAWcX6i9VwAk9Baqn3zHD-uPgN5PktAkGcMMWedWp76JeYnKVRhf-w>
    <xmx:vAWcX68chMXxVS6UBu18UzDgiMCAWRLQpEQq1P4VaM1XJWLjT6a1_g>
    <xmx:vAWcX9s0nTp-YK-4T5kbC84UaWASG-PbV4bxBlW9rOVwxGjPxQP3Bw>
    <xmx:vAWcX27iiZeQpMaAz8T5O3nr8qzZuAmbySGnIT7fKp7oZpoNGc95qg>
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de [91.64.170.89])
	by mail.messagingengine.com (Postfix) with ESMTPA id 309943064680;
	Fri, 30 Oct 2020 08:23:23 -0400 (EDT)
Date: Fri, 30 Oct 2020 13:23:19 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: =?utf-8?Q?Fr=C3=A9d=C3=A9ric_Pierret_=28fepitre=29?= <frederic.pierret@qubes-os.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v1 1/2] No insert of the build timestamp into the x86 xen
 efi binary
Message-ID: <20201030122319.GA16953@mail-itl>
References: <cover.1603725003.git.frederic.pierret@qubes-os.org>
 <64fc67bc2227d6cf92e079228c9f8d2d6404b001.1603725003.git.frederic.pierret@qubes-os.org>
 <93b0b06e-cb73-66eb-3535-e7ab2ca60bf8@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="wRRV7LY7NUeQGEoC"
Content-Disposition: inline
In-Reply-To: <93b0b06e-cb73-66eb-3535-e7ab2ca60bf8@suse.com>


--wRRV7LY7NUeQGEoC
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH v1 1/2] No insert of the build timestamp into the x86 xen
 efi binary

On Fri, Oct 30, 2020 at 01:08:44PM +0100, Jan Beulich wrote:
> On 30.10.2020 13:03, Fr=C3=A9d=C3=A9ric Pierret (fepitre) wrote:
>=20
> > --- a/xen/arch/x86/Makefile
> > +++ b/xen/arch/x86/Makefile
> > @@ -170,6 +170,7 @@ EFI_LDFLAGS +=3D --major-image-version=3D$(XEN_VERS=
ION)
> >  EFI_LDFLAGS +=3D --minor-image-version=3D$(XEN_SUBVERSION)
> >  EFI_LDFLAGS +=3D --major-os-version=3D2 --minor-os-version=3D0
> >  EFI_LDFLAGS +=3D --major-subsystem-version=3D2 --minor-subsystem-versi=
on=3D0
> > +EFI_LDFLAGS +=3D --no-insert-timestamp
>=20
> Generally I prefer binaries to carry timestamps, when they are
> intended to do so (i.e. when they have a respective field). So
> I think if no timestamp is wanted, it should be as an option
> (not sure about the default).

What about setting it to the SOURCE_DATE_EPOCH[1] variable value, if
present? Of course, this assumes there is an option to set an explicit
timestamp value.

[1] https://reproducible-builds.org/docs/source-date-epoch/
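(The convention that page describes is simple - an illustrative sketch,
not an existing Xen helper: honour SOURCE_DATE_EPOCH when set, fall
back to the current time otherwise.)

```c
#include <stdlib.h>
#include <time.h>

/* Timestamp to embed in a build artifact: use SOURCE_DATE_EPOCH when
 * set (so rebuilding the same source yields the same value), and only
 * fall back to the wall clock otherwise. */
time_t build_timestamp(void)
{
    const char *sde = getenv("SOURCE_DATE_EPOCH");

    return sde ? (time_t)strtoll(sde, NULL, 10) : time(NULL);
}
```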

> This said, I didn't think time stamps got meaningfully in the
> way of reproducible builds - ignoring the minor differences
> cause by them, especially when they sit at well known offsets
> in the binaries, shouldn't be a big deal.

It is a big deal. There is a huge difference between running sha256sum
(or your other favorite hash) on two build artifacts, and using a
specialized tool/script to compare each file separately. Note that
xen.efi may be buried very deep in the thing you compare, for example
inside a deb/rpm and then an ISO (installation) image, at which point
it's far from "they sit at well known offsets".

--=20
Best Regards,
Marek Marczykowski-G=C3=B3recki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--wRRV7LY7NUeQGEoC
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl+cBbYACgkQ24/THMrX
1yy9QAf+MMtxtyqczE/lrrkRz58C6qrJyls5kWd5ZSkOjaSkkHkEgyAphd99EX9v
iSdMjCvpXWlvRnA1YzZ74fBMgnPEOZd88bKM05MohotmoAsMbFUnHAFPDKb2H+P5
nvdpd6hYinqVZ+OJdW8covmH/gEu1dXV3ZH+2C0WM2OVpKtkpv2yxm6WoBpHN5Lq
xjB9uxsXjiT+a+IsEe6GpGRCjU2jvCO5bjA3OapyWLbMTr/hPIhXbG6pcACexAfr
i/LNx2FWoew2a6LcrBFLt3982b7rfHxz6uk162MGzHdQ3QD+86RIJwSYqkYDewCd
cpZcrR+RJ/FMGe9Ol43uMv7T69BvxA==
=JH4j
-----END PGP SIGNATURE-----

--wRRV7LY7NUeQGEoC--


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:25:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:25:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15776.38919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTT0-0005cK-NW; Fri, 30 Oct 2020 12:25:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15776.38919; Fri, 30 Oct 2020 12:25:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTT0-0005cD-KQ; Fri, 30 Oct 2020 12:25:14 +0000
Received: by outflank-mailman (input) for mailman id 15776;
 Fri, 30 Oct 2020 12:25:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYTT0-0005c1-36
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:25:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0986b4f2-a403-4778-9033-1e188fc9905d;
 Fri, 30 Oct 2020 12:25:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 783E7ACAC;
 Fri, 30 Oct 2020 12:25:12 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
	id 1kYTT0-0005c1-36
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:25:14 +0000
X-Inumbo-ID: 0986b4f2-a403-4778-9033-1e188fc9905d
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 0986b4f2-a403-4778-9033-1e188fc9905d;
	Fri, 30 Oct 2020 12:25:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604060712;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fY5MJPEehqjsdGHD3mqEvqZ1bxezrK0uVUdXe0GmyF8=;
	b=OB0sht1NH9jT7eTWqtPO7ySIMgoxBDuspfJ+SNL+xEbDPxb0VfGo6Nzyt3xIjNBnr6otUF
	VsCNAwhfBD4kZ5kdlGWGLddxmLpi3EtWDtl0S6l135Hcu2GO5Au9Fg8KxlqAK61UdvmwpI
	XgrrWnODbhd7Fnj9nkkRz28IrBARaik=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 783E7ACAC;
	Fri, 30 Oct 2020 12:25:12 +0000 (UTC)
Subject: Re: [PATCH v2 6/8] evtchn: convert vIRQ lock to an r/w one
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53a2fc39-1bf1-38ce-bbdf-b512c5279b4f@suse.com>
 <6dec1d48-b8c8-6122-087c-38f36f30596e@xen.org>
 <33ff45e9-d869-9262-29e0-fa66e3ffb726@suse.com>
 <60559ccf-f8f5-4c54-0867-b8a893df3f0c@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <478e8353-e8f9-4cee-20d6-50e1619ac680@suse.com>
Date: Fri, 30 Oct 2020 13:25:11 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <60559ccf-f8f5-4c54-0867-b8a893df3f0c@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.10.2020 13:08, Julien Grall wrote:
> On 30/10/2020 12:00, Jan Beulich wrote:
>> On 30.10.2020 11:57, Julien Grall wrote:
>>> On 20/10/2020 15:10, Jan Beulich wrote:
>>>> --- a/xen/common/event_channel.c
>>>> +++ b/xen/common/event_channel.c
>>>> @@ -449,6 +449,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t
>>>>    
>>>>        spin_unlock_irqrestore(&chn->lock, flags);
>>>>    
>>>> +    /*
>>>> +     * If by any, the update of virq_to_evtchn[] would need guarding by
>>>> +     * virq_lock, but since this is the last action here, there's no strict
>>>> +     * need to acquire the lock. Hnece holding event_lock isn't helpful
>>>
>>> s/Hnece/Hence/
>>>
>>>> +     * anymore at this point, but utilize that its unlocking acts as the
>>>> +     * otherwise necessary smp_wmb() here.
>>>> +     */
>>>>        v->virq_to_evtchn[virq] = bind->port = port;
>>>
>>> I think all access to v->virq_to_evtchn[virq] should use ACCESS_ONCE()
>>> or {read, write}_atomic() to avoid any store/load tearing.
>>
>> IOW you're suggesting this to be the subject of a separate patch?
>> I don't think such a conversion belongs here (nor even in this
>> series, seeing the much wider applicability of such a change
>> throughout the code base).
>> Or are you seeing anything here which
>> would require such a conversion to be done as a prereq?
> 
> Yes, your comment implies that it is fine to write to virq_to_evtchn[]
> without the lock taken. However, this is *only* valid if the compiler
> doesn't tear the load/store.
> 
> So this is a prerequisite for your comment to be valid.

That's an interesting way to view it. I'm only adding the comment,
without changing the code here. Iirc it was you who asked me to
add the comment. And now this is leading to you wanting me to make
entirely unrelated changes, when the code base is full of similar
assumptions that accesses don't get torn? (Yes, I certainly can add
such a patch, but no, I don't think this should be a requirement
here.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:28:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15781.38931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTVh-0005nV-6S; Fri, 30 Oct 2020 12:28:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15781.38931; Fri, 30 Oct 2020 12:28:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTVh-0005nO-2W; Fri, 30 Oct 2020 12:28:01 +0000
Received: by outflank-mailman (input) for mailman id 15781;
 Fri, 30 Oct 2020 12:27:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYTVf-0005nH-QP
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:27:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a82035e7-73a2-44c5-9a61-b02d666b113f;
 Fri, 30 Oct 2020 12:27:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 73551ABAE;
 Fri, 30 Oct 2020 12:27:56 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kYTVf-0005nH-QP
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:27:59 +0000
X-Inumbo-ID: a82035e7-73a2-44c5-9a61-b02d666b113f
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id a82035e7-73a2-44c5-9a61-b02d666b113f;
	Fri, 30 Oct 2020 12:27:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604060876;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=dm2S2biGevh7s8lupXbDeV+SYEhjoOI+2lL0w3PGooc=;
	b=RgjQquqLaCX7fj2+ZSJPw2CMxxPrvORRdYKAnOq4uuOy6egqMmXpRtU4kKjBBnUQ6uVckL
	PRV6r87MdfTPBzgFMBZEoMDqtsjXt0mVx+1E2ZpbCRDl8LMHeLrbSQPXpJ1SG2+99Kstax
	6Jurhd1rZ2WhTMn3HoONjIaG/oko9f0=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 73551ABAE;
	Fri, 30 Oct 2020 12:27:56 +0000 (UTC)
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
 <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
 <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
 <693bb422-ed13-9327-5f22-12bd6f192916@suse.com>
 <46d5f9cf-c01d-c0c2-777a-c97736633120@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <dc870eaa-6021-1c3e-c2ef-99e3cdb4fcc5@suse.com>
Date: Fri, 30 Oct 2020 13:27:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <46d5f9cf-c01d-c0c2-777a-c97736633120@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.20 12:55, Jan Beulich wrote:
> On 30.10.2020 12:15, Jürgen Groß wrote:
>> On 30.10.20 11:57, Julien Grall wrote:
>>>
>>>
>>> On 30/10/2020 10:49, Jan Beulich wrote:
>>>> On 30.10.2020 11:38, Julien Grall wrote:
>>>>> On 22/10/2020 17:17, Jan Beulich wrote:
>>>>>> On 22.10.2020 18:00, Roger Pau Monné wrote:
>>>>>>> On Tue, Oct 20, 2020 at 04:10:09PM +0200, Jan Beulich wrote:
>>>>>>>> --- a/xen/include/xen/event.h
>>>>>>>> +++ b/xen/include/xen/event.h
>>>>>>>> @@ -177,9 +177,16 @@ int evtchn_reset(struct domain *d, bool
>>>>>>>>      * Low-level event channel port ops.
>>>>>>>>      *
>>>>>>>>      * All hooks have to be called with a lock held which prevents
>>>>>>>> the channel
>>>>>>>> - * from changing state. This may be the domain event lock, the
>>>>>>>> per-channel
>>>>>>>> - * lock, or in the case of sending interdomain events also the
>>>>>>>> other side's
>>>>>>>> - * per-channel lock. Exceptions apply in certain cases for the PV
>>>>>>>> shim.
>>>>>>>> + * from changing state. This may be
>>>>>>>> + * - the domain event lock,
>>>>>>>> + * - the per-channel lock,
>>>>>>>> + * - in the case of sending interdomain events the other side's
>>>>>>>> per-channel
>>>>>>>> + *   lock,
>>>>>>>> + * - in the case of sending non-global vIRQ-s the per-vCPU
>>>>>>>> virq_lock (in
>>>>>>>> + *   combination with the ordering enforced through how the vCPU's
>>>>>>>> + *   virq_to_evtchn[] gets updated),
>>>>>>>> + * - in the case of sending global vIRQ-s vCPU 0's virq_lock.
>>>>>>>> + * Exceptions apply in certain cases for the PV shim.
>>>>>>>
>>>>>>> Having such a wide locking discipline looks dangerous to me, it's easy
>>>>>>> to get things wrong without notice IMO.
>>>>>>
>>>>>> It is effectively only describing how things are (or were before
>>>>>> XSA-343, getting restored here).
>>>>>
>>>>> I agree with Roger here, the new/old locking discipline is dangerous and
>>>>> it is only a matter of time before it will bite us again.
>>>>>
>>>>> I think we should consider Juergen's series because the locking for the
>>>>> event channel is easier to understand.
>>>>
>>>> We should, yes. The one thing I'm a little uneasy with is the
>>>> new lock "variant" that gets introduced. Custom locking methods
>>>> also are a common source of problems (which isn't to say I see
>>>> any here).
>>>
>>> I am also uneasy with a new lock "variant". However, this is the best
>>> proposal I have seen so far to unblock the issue.
>>>
>>> I am open to other suggestions with a simple locking discipline.
>>
>> In theory my new lock variant could easily be replaced by a rwlock and
>> using the try-variant for the readers only.
> 
> Well, only until we would add check_lock() there, which I think
> we should really have (not just on the slow paths, thanks to
> the use of spin_lock() there). The read-vs-write properties
> you're utilizing aren't applicable in the general case afaict,
> and hence such checking would get in the way.

No, I don't think so.

As long as there is no read_lock() user with interrupts off we should be
fine. read_trylock() is no problem as it can't block.

> 
>> The disadvantage of that approach would be a growth of struct evtchn.
> 
> Wasn't it you who had pointed out to me the aligned(64) attribute
> on the struct (in a different context), which afaict would subsume
> any possible growth?

Oh, indeed.

The growth would be 8 bytes, leading to a max of 56 bytes then.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:46:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15791.38943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTnP-0007Xe-T1; Fri, 30 Oct 2020 12:46:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15791.38943; Fri, 30 Oct 2020 12:46:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTnP-0007XX-Q6; Fri, 30 Oct 2020 12:46:19 +0000
Received: by outflank-mailman (input) for mailman id 15791;
 Fri, 30 Oct 2020 12:46:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYTnN-0007XS-Uw
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:46:18 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f2575f12-8560-49e2-a2d4-b84852c67bc9;
 Fri, 30 Oct 2020 12:46:16 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYTnK-0008So-V8; Fri, 30 Oct 2020 12:46:14 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYTnK-0007SR-L4; Fri, 30 Oct 2020 12:46:14 +0000
X-Inumbo-ID: f2575f12-8560-49e2-a2d4-b84852c67bc9
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Xzp12ysCBigmTq9LkdMqByaOIIb2u7L0l96IThSm3EM=; b=I8422qnQLLIpTrCUijUCGx6/hG
	G0psuvRhZ0ZktqbuW9xqgQiGP8Ll0BB6lgxziXV+objqgtIw2JyaNpRAwMUQcqMwA/5pkSlaLIn9t
	hfCeeygNmZeR7tSAUTVQ+cNCfLQPOhgvT9yaNbagwudpG+2M6rtLHkgF1ouRMKKiMLEA=;
Subject: Re: [PATCH v2 6/8] evtchn: convert vIRQ lock to an r/w one
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53a2fc39-1bf1-38ce-bbdf-b512c5279b4f@suse.com>
 <6dec1d48-b8c8-6122-087c-38f36f30596e@xen.org>
 <33ff45e9-d869-9262-29e0-fa66e3ffb726@suse.com>
 <60559ccf-f8f5-4c54-0867-b8a893df3f0c@xen.org>
 <478e8353-e8f9-4cee-20d6-50e1619ac680@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <24caaeea-9c31-23fb-f643-bec75d2f5362@xen.org>
Date: Fri, 30 Oct 2020 12:46:12 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <478e8353-e8f9-4cee-20d6-50e1619ac680@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 30/10/2020 12:25, Jan Beulich wrote:
> On 30.10.2020 13:08, Julien Grall wrote:
>> On 30/10/2020 12:00, Jan Beulich wrote:
>>> On 30.10.2020 11:57, Julien Grall wrote:
>>>> On 20/10/2020 15:10, Jan Beulich wrote:
>>>>> --- a/xen/common/event_channel.c
>>>>> +++ b/xen/common/event_channel.c
>>>>> @@ -449,6 +449,13 @@ int evtchn_bind_virq(evtchn_bind_virq_t
>>>>>     
>>>>>         spin_unlock_irqrestore(&chn->lock, flags);
>>>>>     
>>>>> +    /*
>>>>> +     * If by any, the update of virq_to_evtchn[] would need guarding by
>>>>> +     * virq_lock, but since this is the last action here, there's no strict
>>>>> +     * need to acquire the lock. Hnece holding event_lock isn't helpful
>>>>
>>>> s/Hnece/Hence/
>>>>
>>>>> +     * anymore at this point, but utilize that its unlocking acts as the
>>>>> +     * otherwise necessary smp_wmb() here.
>>>>> +     */
>>>>>         v->virq_to_evtchn[virq] = bind->port = port;
>>>>
>>>> I think all access to v->virq_to_evtchn[virq] should use ACCESS_ONCE()
>>>> or {read, write}_atomic() to avoid any store/load tearing.
>>>
>>> IOW you're suggesting this to be the subject of a separate patch?
>>> I don't think such a conversion belongs here (nor even in this
>>> series, seeing the much wider applicability of such a change
>>> throughout the code base).
>>> Or are you seeing anything here which
>>> would require such a conversion to be done as a prereq?
>>
>> Yes, your comment implies that it is fine to write to virq_to_evtchn[]
>> without the lock taken. However, this is *only* valid if the compiler
>> doesn't tear down load/store.
>>
>> So this is a pre-req to get your comment valid.
> 
> That's an interesting way to view it. I'm only adding the comment,
> without changing the code here. Iirc it was you who asked me to
> add the comment. 

Yes, I asked for this comment because it makes it easier for the reader 
to figure out how your code works. But that doesn't mean I wanted a 
misleading comment.

If you don't plan to handle the ACCESS_ONCE(), then it is best to say 
so openly in the comment.

> And now this is leading to you wanting me to do
> entirely unrelated changes, when the code base is full of similar
> assumptions towards no torn accesses? 

I was expecting you to write that :). Well, small steps are always 
easier when working towards a consistent code base.

To me, this is similar to reviewers pushing contributors to update the 
coding style of the area they touch. The difference here is that, 
without following a memory model, you are at the mercy of the compiler 
and of XSAs.

> (Yes, I certainly can add
> such a patch, but no, I don't think this should be a requirement
> here.)

That's ok. But please update the comment to avoid misleading the reader.

An alternative would be to use the write_lock() here. This shouldn't 
impact performance much and would nicely fix the issue.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:49:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:49:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15797.38955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTq1-0007ic-Aw; Fri, 30 Oct 2020 12:49:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15797.38955; Fri, 30 Oct 2020 12:49:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTq1-0007iV-7p; Fri, 30 Oct 2020 12:49:01 +0000
Received: by outflank-mailman (input) for mailman id 15797;
 Fri, 30 Oct 2020 12:48:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYTpz-0007iN-TZ
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:48:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb1da06d-fac9-4217-8190-94c2c1452d5f;
 Fri, 30 Oct 2020 12:48:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8F576ACF1;
 Fri, 30 Oct 2020 12:48:53 +0000 (UTC)
X-Inumbo-ID: bb1da06d-fac9-4217-8190-94c2c1452d5f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604062133;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=stJsz8pcnbLkb/mKwROM89PQ6yuqery6EnfKrVSpJHY=;
	b=o1Wb9dFJ33ANRTZ6QLIwhoEOchP8mrEMEnDVi+N4C4ng7GOogMfiX2FOAxLInsRz5XVTDq
	eK50immDJSzmcHBhxfZRD4BHdWnVMYFOQ0BVB0MUScxHfjftz4pBFN2s8UnZVexVrsuI0R
	8sFeM+LU0YgJTE1csYjj7DcjqIxPxX8=
Subject: Re: [PATCH v1 1/2] No insert of the build timestamp into the x86 xen
 efi binary
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
 <frederic.pierret@qubes-os.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1603725003.git.frederic.pierret@qubes-os.org>
 <64fc67bc2227d6cf92e079228c9f8d2d6404b001.1603725003.git.frederic.pierret@qubes-os.org>
 <93b0b06e-cb73-66eb-3535-e7ab2ca60bf8@suse.com>
 <20201030122319.GA16953@mail-itl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9b278993-08bb-7ad2-2dfd-743987a9fd6f@suse.com>
Date: Fri, 30 Oct 2020 13:48:52 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201030122319.GA16953@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.2020 13:23, Marek Marczykowski-Górecki wrote:
> On Fri, Oct 30, 2020 at 01:08:44PM +0100, Jan Beulich wrote:
>> On 30.10.2020 13:03, Frédéric Pierret (fepitre) wrote:
>>
>>> --- a/xen/arch/x86/Makefile
>>> +++ b/xen/arch/x86/Makefile
>>> @@ -170,6 +170,7 @@ EFI_LDFLAGS += --major-image-version=$(XEN_VERSION)
>>>  EFI_LDFLAGS += --minor-image-version=$(XEN_SUBVERSION)
>>>  EFI_LDFLAGS += --major-os-version=2 --minor-os-version=0
>>>  EFI_LDFLAGS += --major-subsystem-version=2 --minor-subsystem-version=0
>>> +EFI_LDFLAGS += --no-insert-timestamp
>>
>> Generally I prefer binaries to carry timestamps, when they are
>> intended to do so (i.e. when they have a respective field). So
>> I think if no timestamp is wanted, it should be as an option
>> (not sure about the default).
> 
> What about setting it to the SOURCE_DATE_EPOCH[1] variable value, if
> present? Of course if there is an option to set explicit timestamp
> value.
> 
> [1] https://reproducible-builds.org/docs/source-date-epoch/

Why not.

>> This said, I didn't think time stamps got meaningfully in the
>> way of reproducible builds - ignoring the minor differences
>> caused by them, especially when they sit at well known offsets
>> in the binaries, shouldn't be a big deal.
> 
> It is a big deal. There is a huge difference between running sha256sum
> (or your other favorite hash) on two build artifacts, and using a
> specialized tool/script to compare each file separately. Note the
> xen.efi may be buried very deep in the thing you compare, for example
> inside deb/rpm and then ISO image (installation image), at which point
> it's far from "they sit at well known offsets".

If you care about checking images / blobs where binaries with time
stamps are merely constituent parts, why not strip the time stamps
at the time of creation of those images / blobs (or as a minor
intermediate step, in case you want to e.g. record the hashes for
later reference)?
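The SOURCE_DATE_EPOCH convention referenced above amounts to: if that environment variable is set, a build tool uses its value as the embedded timestamp instead of the current time. A minimal sketch (the function name is hypothetical, not from the Xen build system):

```c
/* Honour SOURCE_DATE_EPOCH per the reproducible-builds.org convention:
 * a decimal UNIX timestamp overriding the wall clock for build output. */
#include <stdlib.h>
#include <time.h>

static time_t build_timestamp(void)
{
    const char *s = getenv("SOURCE_DATE_EPOCH");

    if (s != NULL) {
        char *end;
        unsigned long long v = strtoull(s, &end, 10);

        if (end != s && *end == '\0')
            return (time_t)v;    /* reproducible: fixed epoch */
    }

    return time(NULL);           /* fall back to the current time */
}
```

With such a scheme a linker option like the proposed --no-insert-timestamp becomes one of two ways to reproducibility: either omit the timestamp entirely, or pin it to a caller-chosen value.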

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 12:52:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 12:52:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15802.38967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTtb-00005m-RS; Fri, 30 Oct 2020 12:52:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15802.38967; Fri, 30 Oct 2020 12:52:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYTtb-00005e-OR; Fri, 30 Oct 2020 12:52:43 +0000
Received: by outflank-mailman (input) for mailman id 15802;
 Fri, 30 Oct 2020 12:52:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYTta-00005Z-LK
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 12:52:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 274e5507-51bd-44ef-a98e-7dc8a99ba54c;
 Fri, 30 Oct 2020 12:52:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 20A4EACBA;
 Fri, 30 Oct 2020 12:52:40 +0000 (UTC)
X-Inumbo-ID: 274e5507-51bd-44ef-a98e-7dc8a99ba54c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604062360;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kY+htJ7ZIvm2ExTh/+OUon/djBnApE9KdzAOCIsOhgM=;
	b=p9PeA3sE5wHJPjwNeU6KgHgUdSgGhwbm/Qew1ZG/I9eteZyem/RPowkbT1HOGLk2gblHxP
	3nQpABnp2YiA5s3pnlOFg//uy0d6CDdwyjcHkkJn21xD3e+431DZhhu2/8zRt25V+pKOQN
	gqcz+dbFMVY3Gm8SMPA6+aXrIrJSNeM=
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
 <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
 <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
 <693bb422-ed13-9327-5f22-12bd6f192916@suse.com>
 <46d5f9cf-c01d-c0c2-777a-c97736633120@suse.com>
 <dc870eaa-6021-1c3e-c2ef-99e3cdb4fcc5@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <53b9685c-92ed-15a5-2ade-d17fd5d398c2@suse.com>
Date: Fri, 30 Oct 2020 13:52:39 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <dc870eaa-6021-1c3e-c2ef-99e3cdb4fcc5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.2020 13:27, Jürgen Groß wrote:
> On 30.10.20 12:55, Jan Beulich wrote:
>> On 30.10.2020 12:15, Jürgen Groß wrote:
>>> On 30.10.20 11:57, Julien Grall wrote:
>>>> On 30/10/2020 10:49, Jan Beulich wrote:
>>>>> On 30.10.2020 11:38, Julien Grall wrote:
>>>>>> I think we should consider Juergen's series because the locking for the
>>>>>> event channel is easier to understand.
>>>>>
>>>>> We should, yes. The one thing I'm a little uneasy with is the
>>>>> new lock "variant" that gets introduced. Custom locking methods
>>>>> also are a common source of problems (which isn't to say I see
>>>>> any here).
>>>>
>>>> I am also uneasy with a new lock "variant". However, this is the best
>>>> proposal I have seen so far to unblock the issue.
>>>>
>>>> I am open to other suggestions with a simple locking discipline.
>>>
>>> In theory my new lock variant could easily be replaced by a rwlock and
>>> using the try-variant for the readers only.
>>
>> Well, only until we would add check_lock() there, which I think
>> we should really have (not just on the slow paths, thanks to
>> the use of spin_lock() there). The read-vs-write properties
>> you're utilizing aren't applicable in the general case afaict,
>> and hence such checking would get in the way.
> 
> No, I don't think so.
> 
> As long as there is no read_lock() user with interrupts off we should be
> fine. read_trylock() is no problem as it can't block.

How would check_lock() notice the difference? It would be all the
same for read and write acquires of the lock, I think, and hence
it would still get unhappy about uses from IRQ paths afaict.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 13:02:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 13:02:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15807.38979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYU3G-00015V-Rt; Fri, 30 Oct 2020 13:02:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15807.38979; Fri, 30 Oct 2020 13:02:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYU3G-00015O-Oe; Fri, 30 Oct 2020 13:02:42 +0000
Received: by outflank-mailman (input) for mailman id 15807;
 Fri, 30 Oct 2020 13:02:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYU3F-00015J-NE
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 13:02:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab15d3a3-c083-4a22-99ec-a9b9d638c634;
 Fri, 30 Oct 2020 13:02:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 50BF6AD8C;
 Fri, 30 Oct 2020 13:02:39 +0000 (UTC)
X-Inumbo-ID: ab15d3a3-c083-4a22-99ec-a9b9d638c634
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604062959;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iV3Cb9zvHsPX7b6ab2O9GBxeolEMTUCe1B+5yU2dZtY=;
	b=a7NWJF6KiyuccoOgXK9+mfdbJOqNA5NI/9QfIImWCuh++AxblODJEg70Qkuj0OrFuXWhBO
	7GzRjRBgFQEYvgaYDT5bWwIgklmdQkDfHzJIgQ/cjAWCsA19llM8ridn7G+GwSHNK/g7bq
	TadAw3SvaF2HNFkGA6bTriqk9t+L5Fg=
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
 <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
 <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
 <693bb422-ed13-9327-5f22-12bd6f192916@suse.com>
 <46d5f9cf-c01d-c0c2-777a-c97736633120@suse.com>
 <dc870eaa-6021-1c3e-c2ef-99e3cdb4fcc5@suse.com>
 <53b9685c-92ed-15a5-2ade-d17fd5d398c2@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <c3c41eb8-de3d-3331-1db1-f86706544d82@suse.com>
Date: Fri, 30 Oct 2020 14:02:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <53b9685c-92ed-15a5-2ade-d17fd5d398c2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.20 13:52, Jan Beulich wrote:
> On 30.10.2020 13:27, Jürgen Groß wrote:
>> On 30.10.20 12:55, Jan Beulich wrote:
>>> On 30.10.2020 12:15, Jürgen Groß wrote:
>>>> On 30.10.20 11:57, Julien Grall wrote:
>>>>> On 30/10/2020 10:49, Jan Beulich wrote:
>>>>>> On 30.10.2020 11:38, Julien Grall wrote:
>>>>>>> I think we should consider Juergen's series because the locking for the
>>>>>>> event channel is easier to understand.
>>>>>>
>>>>>> We should, yes. The one thing I'm a little uneasy with is the
>>>>>> new lock "variant" that gets introduced. Custom locking methods
>>>>>> also are a common source of problems (which isn't to say I see
>>>>>> any here).
>>>>>
>>>>> I am also uneasy with a new lock "variant". However, this is the best
>>>>> proposal I have seen so far to unblock the issue.
>>>>>
>>>>> I am open to other suggestions with a simple locking discipline.
>>>>
>>>> In theory my new lock variant could easily be replaced by a rwlock and
>>>> using the try-variant for the readers only.
>>>
>>> Well, only until we would add check_lock() there, which I think
>>> we should really have (not just on the slow paths, thanks to
>>> the use of spin_lock() there). The read-vs-write properties
>>> you're utilizing aren't applicable in the general case afaict,
>>> and hence such checking would get in the way.
>>
>> No, I don't think so.
>>
>> As long as there is no read_lock() user with interrupts off we should be
>> fine. read_trylock() is no problem as it can't block.
> 
> How would check_lock() notice the difference? It would be all the
> same for read and write acquires of the lock, I think, and hence
> it would still get unhappy about uses from IRQ paths afaict.

check_lock() isn't applicable here, at least not without modification.

BTW, I think our spinlock implementation is wrong in this regard when a
lock is entered via spin_trylock(). Using spin_trylock() with interrupts
off for a lock normally taken with interrupts enabled is perfectly fine
IMO.
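The distinction drawn here can be modelled in a few lines: a blocking acquire must match the IRQ discipline the lock was first used with, while a try-acquire can always be permitted because it cannot block. This is only an illustrative model of the argument, not Xen's actual check_lock():

```c
/* Toy model of per-lock IRQ-discipline checking: the first blocking
 * acquire records whether IRQs were disabled, later blocking acquires
 * must match, and try-acquires are exempt since they cannot deadlock. */
#include <stdbool.h>

struct lock_model {
    int irq_safe;  /* -1: unknown, 0: taken with IRQs on, 1: IRQs off */
};

/* Returns false when a (blocking) acquire would violate the discipline. */
static bool check_lock(struct lock_model *l, bool irqs_disabled, bool trylock)
{
    if (trylock)
        return true;                      /* cannot block: always fine */

    if (l->irq_safe < 0)
        l->irq_safe = irqs_disabled;      /* first use fixes discipline */

    return l->irq_safe == (int)irqs_disabled;
}
```

The "trylock" exemption is precisely the modification being argued for: without it, a checker flags trylock-with-IRQs-off uses that are in fact harmless.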


Juergen



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 13:30:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 13:30:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15815.38991 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUTx-0003e4-3P; Fri, 30 Oct 2020 13:30:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15815.38991; Fri, 30 Oct 2020 13:30:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUTw-0003dx-Vw; Fri, 30 Oct 2020 13:30:16 +0000
Received: by outflank-mailman (input) for mailman id 15815;
 Fri, 30 Oct 2020 13:30:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0CY=EF=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1kYUTv-0003ds-Mo
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 13:30:15 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a260a30-446a-4556-a8e7-d03bea3f9037;
 Fri, 30 Oct 2020 13:30:14 +0000 (UTC)
X-Inumbo-ID: 5a260a30-446a-4556-a8e7-d03bea3f9037
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604064614;
  h=subject:to:cc:references:from:message-id:date:
   mime-version:in-reply-to:content-transfer-encoding;
  bh=T97K5otZiBwgjVIL04COIXu0KdDLu6Ny4bmQhdICI8U=;
  b=BUkmd8kM5VYHdPIjIJgBARpLzZROxOXazXFGsuan7+RN7whrEjofGcLy
   5eJchoF/nw7NwYbDX+xMlUsvPlGED9iHBPYcmswix6rq85un26hqS8VkO
   jx+OKGPSwrbHbj7CWEoMICOUTNKvJxixgANzSmbjKsQ5OQti87grgkw9a
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: KJItFsT+tGtT+dw1i9LURL+kpcrRMZxbgIoKvogl2pUPNr/beKRD7bDhphypesP1YPutZKUSzX
 8BLCfKnDDeGwwuNolwKw9SJrIvXmxT1qJypYYV/F4yW3TG892acOih7xay9wahK1CK5Gnf7nL1
 yySJUGFdsGdblvZVas7SFWU6HUxFW7pTp5P2a5IXoKmJ9qeRNZ9SFb2H3jrT+VLuall8gpqhGI
 p2rU1TZdVliKZmuiiIXjKkLp2veXfmYeA68Rs3sknHMCWECB1hjAuhYk6Koi572YTk+lidSGBI
 3fM=
X-SBRS: None
X-MesageID: 31247007
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,433,1596513600"; 
   d="scan'208";a="31247007"
Subject: Re: [PATCH v1 1/2] No insert of the build timestamp into the x86 xen
 efi binary
To: Jan Beulich <jbeulich@suse.com>,
	=?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
CC: =?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Pierret_=28fepitre=29?=
	<frederic.pierret@qubes-os.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	<xen-devel@lists.xenproject.org>
References: <cover.1603725003.git.frederic.pierret@qubes-os.org>
 <64fc67bc2227d6cf92e079228c9f8d2d6404b001.1603725003.git.frederic.pierret@qubes-os.org>
 <93b0b06e-cb73-66eb-3535-e7ab2ca60bf8@suse.com>
 <20201030122319.GA16953@mail-itl>
 <9b278993-08bb-7ad2-2dfd-743987a9fd6f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3ee68210-a0d4-a906-c502-4988d996e61c@citrix.com>
Date: Fri, 30 Oct 2020 13:30:08 +0000
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <9b278993-08bb-7ad2-2dfd-743987a9fd6f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 FTLPEX02CL04.citrite.net (10.13.108.177)

On 30/10/2020 12:48, Jan Beulich wrote:
> On 30.10.2020 13:23, Marek Marczykowski-Górecki wrote:
>> On Fri, Oct 30, 2020 at 01:08:44PM +0100, Jan Beulich wrote:
>>> On 30.10.2020 13:03, Frédéric Pierret (fepitre) wrote:
>>>
>>>> --- a/xen/arch/x86/Makefile
>>>> +++ b/xen/arch/x86/Makefile
>>>> @@ -170,6 +170,7 @@ EFI_LDFLAGS += --major-image-version=$(XEN_VERSION)
>>>>  EFI_LDFLAGS += --minor-image-version=$(XEN_SUBVERSION)
>>>>  EFI_LDFLAGS += --major-os-version=2 --minor-os-version=0
>>>>  EFI_LDFLAGS += --major-subsystem-version=2 --minor-subsystem-version=0
>>>> +EFI_LDFLAGS += --no-insert-timestamp
>>> Generally I prefer binaries to carry timestamps, when they are
>>> intended to do so (i.e. when they have a respective field). So
>>> I think if no timestamp is wanted, it should be an option
>>> (not sure about the default).
>> What about setting it to the SOURCE_DATE_EPOCH[1] variable value, if
>> present? Of course, only if there is an option to set an explicit
>> timestamp value.
>>
>> [1] https://reproducible-builds.org/docs/source-date-epoch/
> Why not.

SOURCE_DATE_EPOCH is the right way to fix this.

It probably wants to default to something sane in the root Makefile, so
it covers tools as well.
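Roughly along the lines of the reproducible-builds.org spec, i.e.
something like this (a hypothetical sketch, not code from the Xen tree;
build_timestamp is a made-up name):

```c
#include <assert.h>
#include <stdlib.h>
#include <time.h>

/*
 * Illustrative sketch of the SOURCE_DATE_EPOCH convention: a build tool
 * uses the pinned timestamp when the variable is set and well-formed,
 * and only falls back to the current (unreproducible) time otherwise.
 */
static time_t build_timestamp(void)
{
    const char *epoch = getenv("SOURCE_DATE_EPOCH");

    if (epoch && *epoch) {
        char *end;
        unsigned long long v = strtoull(epoch, &end, 10);

        if (*end == '\0')
            return (time_t)v;   /* reproducible: pinned timestamp */
    }

    return time(NULL);          /* unreproducible fallback */
}
```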

>>> This said, I didn't think time stamps got meaningfully in the
>>> way of reproducible builds - ignoring the minor differences
>>> caused by them, especially when they sit at well known offsets
>>> in the binaries, shouldn't be a big deal.
>> It is a big deal. There is a huge difference between running sha256sum
>> (or your other favorite hash) on two build artifacts, and using a
>> specialized tool/script to compare each file separately. Note the
>> xen.efi may be buried very deep in the thing you compare, for example
>> inside deb/rpm and then ISO image (installation image), at which point
>> it's far from "they sit at well known offsets".
> If you care about checking images / blobs where binaries with time
> stamps are merely constituent parts, why not strip the time stamps
> at the time of creation of those images / blobs (or as a minor
> intermediate step, in case you want to e.g. record the hashes for
> later reference)?

Because that is a disaster to maintain.  A critical part of reproducible
builds is not needing custom comparison logic for every binary artefact.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 13:38:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 13:38:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15829.39035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUbz-00045j-CM; Fri, 30 Oct 2020 13:38:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15829.39035; Fri, 30 Oct 2020 13:38:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUbz-00045c-8w; Fri, 30 Oct 2020 13:38:35 +0000
Received: by outflank-mailman (input) for mailman id 15829;
 Fri, 30 Oct 2020 13:38:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYUby-00045X-1V
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 13:38:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0786d6d5-85f1-406e-808b-a9a1685f6020;
 Fri, 30 Oct 2020 13:38:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DDE2AB934;
 Fri, 30 Oct 2020 13:38:31 +0000 (UTC)
X-Inumbo-ID: 0786d6d5-85f1-406e-808b-a9a1685f6020
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604065112;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CKj6V5floIGcrcmplI/6V41HCV/XZWSZHqvda5Hiav0=;
	b=hJZ1k7/sH1GrC2jU/LKEbizi6K3V3esYyFs0X+fzgHTgHcLl6zsdw9k3FjSlc/cG0lqTeN
	28JuiFZzw6OY07ivCdzFssQSFKUdY8RwUnDZtFVYigIZcVFY63S8J2NTJBXdMYSsho9gqZ
	halLgp4U0AyFfcuedftr7R9gg15W3Qg=
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
 <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
 <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
 <693bb422-ed13-9327-5f22-12bd6f192916@suse.com>
 <46d5f9cf-c01d-c0c2-777a-c97736633120@suse.com>
 <dc870eaa-6021-1c3e-c2ef-99e3cdb4fcc5@suse.com>
 <53b9685c-92ed-15a5-2ade-d17fd5d398c2@suse.com>
 <c3c41eb8-de3d-3331-1db1-f86706544d82@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <23b3eb86-02c6-650c-d9a3-a3b40cb646c2@suse.com>
Date: Fri, 30 Oct 2020 14:38:31 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <c3c41eb8-de3d-3331-1db1-f86706544d82@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.2020 14:02, Jürgen Groß wrote:
> On 30.10.20 13:52, Jan Beulich wrote:
>> On 30.10.2020 13:27, Jürgen Groß wrote:
>>> On 30.10.20 12:55, Jan Beulich wrote:
>>>> On 30.10.2020 12:15, Jürgen Groß wrote:
>>>>> On 30.10.20 11:57, Julien Grall wrote:
>>>>>> On 30/10/2020 10:49, Jan Beulich wrote:
>>>>>>> On 30.10.2020 11:38, Julien Grall wrote:
>>>>>>>> I think we should consider Juergen's series because the locking for the
>>>>>>>> event channel is easier to understand.
>>>>>>>
>>>>>>> We should, yes. The one thing I'm a little uneasy with is the
>>>>>>> new lock "variant" that gets introduced. Custom locking methods
>>>>>>> also are a common source of problems (which isn't to say I see
>>>>>>> any here).
>>>>>>
>>>>>> I am also uneasy with a new lock "variant". However, this is the best
>>>>>> proposal I have seen so far to unblock the issue.
>>>>>>
>>>>>> I am open to other suggestions with a simple locking discipline.
>>>>>
>>>>> In theory my new lock variant could easily be replaced by a rwlock and
>>>>> using the try-variant for the readers only.
>>>>
>>>> Well, only until we would add check_lock() there, which I think
>>>> we should really have (not just on the slow paths, thanks to
>>>> the use of spin_lock() there). The read-vs-write properties
>>>> you're utilizing aren't applicable in the general case afaict,
>>>> and hence such checking would get in the way.
>>>
>>> No, I don't think so.
>>>
>>> As long as there is no read_lock() user with interrupts off we should be
>>> fine. read_trylock() is no problem as it can't block.
>>
>> How would check_lock() notice the difference? It would be all the
>> same for read and write acquires of the lock, I think, and hence
>> it would still get unhappy about uses from IRQ paths afaict.
> 
> check_lock() isn't applicable here, at least not without modification.
> 
> I think our spinlock implementation is wrong in this regard in case a
> lock is entered via spin_trylock(), BTW. Using spin_trylock() with
> interrupts off for a lock normally taken with interrupts enabled is
> perfectly fine IMO.

Hmm, I think you're right, in which case I guess we ought to relax
this.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 13:40:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 13:40:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15835.39046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUe4-0004tt-UL; Fri, 30 Oct 2020 13:40:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15835.39046; Fri, 30 Oct 2020 13:40:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUe4-0004tm-RU; Fri, 30 Oct 2020 13:40:44 +0000
Received: by outflank-mailman (input) for mailman id 15835;
 Fri, 30 Oct 2020 13:40:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYUe3-0004th-Ts
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 13:40:43 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 885044e7-172f-4859-8d9d-ce181d6eb7e1;
 Fri, 30 Oct 2020 13:40:43 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYUe1-0001BY-Hd; Fri, 30 Oct 2020 13:40:41 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYUe1-0002k7-9W; Fri, 30 Oct 2020 13:40:41 +0000
X-Inumbo-ID: 885044e7-172f-4859-8d9d-ce181d6eb7e1
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ZjkrVIv5JXeXyE9ersuoQrXhVPfZD3kNV20TshAAkkg=; b=w0qfUBf2BzE28tYsKKpsax/P5R
	bJgOWuf0qr8owj6RqvPMY+7LPtaNWUY1NvBGa2+sh+vYYyOad3WuWIon8G17wm9kjezPfgMzIE0Rb
	SIuA7O608uKeqZXpeztXqQBT5DSVxoTl+XqXSGPwMx4esZhFrr3uGUTmDflyqX4LfMvY=;
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Rahul Singh <Rahul.Singh@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <cd433f0a-ed0b-ce82-c356-d6deaa053a30@xen.org>
 <BBF09ABE-29A6-4990-8DA2-B44086E9C88C@arm.com>
 <1082f30e-0ce8-00b1-e120-194ff874a9ba@xen.org>
 <alpine.DEB.2.21.2010221631440.12247@sstabellini-ThinkPad-T480s>
 <D8EF4B06-B64D-4264-8C86-DA1B5A1146D2@arm.com>
 <7314936f-6c1e-5ca6-a33b-973c8e61ba3b@xen.org>
 <D9F93137-412F-47E5-A55C-85D1F3745618@arm.com>
 <2813ea2b-bfc4-0590-47ef-86089ad65a5d@xen.org>
 <0E2548E0-0504-43B6-8DD7-D5B7BACCEB6E@arm.com>
 <bc697327-2750-9a78-042d-d45690d27928@xen.org>
 <92A7B6FF-A2CE-4BB1-831A-8F12FB5290B8@arm.com>
 <alpine.DEB.2.21.2010291316290.12247@sstabellini-ThinkPad-T480s>
 <1BE06E0F-26CF-453A-BB06-808CC0F3E09B@arm.com>
 <aae5892a-2532-04f8-02af-84c4d4c4f3fd@xen.org>
 <226DA6DB-D03C-41A7-A68C-53000DFA70F6@arm.com>
 <e5ce30c5-e0e0-90c8-962d-c86b65a82ccd@xen.org>
 <E52CE228-0D19-491E-BA47-04ED7599DDCE@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c820556e-7ba6-c8f8-9d99-b3a6a29348b2@xen.org>
Date: Fri, 30 Oct 2020 13:40:39 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <E52CE228-0D19-491E-BA47-04ED7599DDCE@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 30/10/2020 11:33, Rahul Singh wrote:
> Hello Julien,

Hi,

> 
>> On 30 Oct 2020, at 10:05 am, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 30/10/2020 09:45, Rahul Singh wrote:
>>> Hello Julien,
>>>> On 30 Oct 2020, at 9:21 am, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 30/10/2020 08:46, Rahul Singh wrote:
>>>>> OK, yes: when I ported the driver, I took the command queue operations from the earlier commit, where atomic operations are not used, and the rest of the code from the latest code. I will again make sure that any bug fixed in Linux is also fixed in Xen.
>>>>
>>>> I would like to seek some clarifications on the code because there seem to be conflicting information provided in this thread.
>>>>
>>>> The patch (the baseline commit is provided) and the discussion with Bertrand suggests that you took a snapshot of the code last year and adapted for Xen.
>>>>
>>>> However, here you suggest that you took an hybrid approach where part of the code is based from last year and other part is based from the latest code (I assume v5.9).
>>>>
>>>> So can you please clarify?
>>>>
>>>> Cheers,
>>> The approach I took was to first merge the code from commit 7c288a5b27934281d9ea8b5807bc727268b7001a (Jul 2, 2019), the snapshot before atomic operations were used for the SMMUv3 command queue operations.
>>> After that I fixed the other code (not related to command queue operations) from the latest code, so that no bug is introduced in Xen by using last year's commit.
>>
>> Ok. That was definitely not clear from the commit message. Please make this clearer in the commit message.
>>
> 
> Ok. I will make this clearer in the commit message.
> 
>> Anyway, it means we need to do a full review of the code (rather than a light one) because of the hybrid model.
>>
>> I am still a bit puzzled as to why it would require almost a restart of the implementation in order to sync with the latest code. Does it imply that you are mostly using the old code?
>>
> 
> SMMUv3 code is divided into the parts below:
> 
> 1. Low-level/High level queue manipulation functions.
> 2. Context descriptor manipulation functions.
> 3. Stream table manipulation functions.
> 4. Interrupt handling.
> 5. Linux IOMMU API functions.
> 6. Driver initialisation functions (probe/reset).
> 
> The low-level/high-level queue manipulation functions are from the old code; the rest is the new code wherever possible.

Thanks for the details! I think it would be useful to mention that in 
the commit message.

> I started with porting the latest code, but there are many dependencies for the queue manipulation functions, so we decided to use the old ones.

In general, I would recommend involving the community as soon as 
possible in the development process. This is quite important for big 
features because it allows all parties to agree on the approach without 
investing significant effort.

> As the queue manipulation functions are a big part of the code, it will require a lot of effort and testing to sync with the latest code once the atomic operations are in place.

Sure, everything has a cost (including maintaining the code). This has 
to be a trade-off.

My main concern was the maintenance of the code long term. However, as 
you and Bertrand stepped up for maintaining the code, then this should 
be less of a concern...

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 13:43:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 13:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15854.39078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUh1-0005C9-Ky; Fri, 30 Oct 2020 13:43:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15854.39078; Fri, 30 Oct 2020 13:43:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUh1-0005C2-Hn; Fri, 30 Oct 2020 13:43:47 +0000
Received: by outflank-mailman (input) for mailman id 15854;
 Fri, 30 Oct 2020 13:43:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYUgz-0005Bw-HX
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 13:43:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62486a45-8024-43b2-9861-68027085f5c7;
 Fri, 30 Oct 2020 13:43:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3E445AAF1;
 Fri, 30 Oct 2020 13:43:43 +0000 (UTC)
X-Inumbo-ID: 62486a45-8024-43b2-9861-68027085f5c7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604065423;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DXtAgFQzIvI5ClTZh25/Z9WClYy0ovTKlyRRONKx7BU=;
	b=lhABL5E7assUmyekHZo0VGZdlkliCDe7RTpccrr8qTcQcg7UmtupGjAJx/cVIDg/rPafE9
	ew8JAr/dSb6aNcvWag0z872fZs7XGjnTVWb79+g+LUrUcyXwVaz7wwLplyDqqsMkb0H+Ox
	bmrpFhMNBuT+sBhY6mvWZD6TcmYEP7M=
Subject: Re: [PATCH v2 5/8] evtchn: drop acquiring of per-channel lock from
 send_guest_{global,vcpu}_virq()
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Julien Grall <julien@xen.org>
References: <19babf20-3649-5c63-44a9-7edfa81835aa@suse.com>
 <53eb30ca-9b3f-0ef4-bc90-e1c196b716b3@suse.com>
 <20201022160055.nlucvj2bsxolxd5o@Air-de-Roger>
 <dc7de861-a94c-3ef9-8dbd-ee7a5ba293c4@suse.com>
 <dbb776ad-5b0c-c0a7-8f01-66e60fd7fad9@xen.org>
 <2cfcda4c-4115-e057-f401-5103f5b5b8e8@suse.com>
 <08108cd3-530f-3fe9-e1b2-41c7da9f98b7@xen.org>
 <693bb422-ed13-9327-5f22-12bd6f192916@suse.com>
 <46d5f9cf-c01d-c0c2-777a-c97736633120@suse.com>
 <dc870eaa-6021-1c3e-c2ef-99e3cdb4fcc5@suse.com>
 <53b9685c-92ed-15a5-2ade-d17fd5d398c2@suse.com>
 <c3c41eb8-de3d-3331-1db1-f86706544d82@suse.com>
 <23b3eb86-02c6-650c-d9a3-a3b40cb646c2@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e1794e2f-27c2-2776-6509-84ea98128695@suse.com>
Date: Fri, 30 Oct 2020 14:43:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <23b3eb86-02c6-650c-d9a3-a3b40cb646c2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.20 14:38, Jan Beulich wrote:
> On 30.10.2020 14:02, Jürgen Groß wrote:
>> On 30.10.20 13:52, Jan Beulich wrote:
>>> On 30.10.2020 13:27, Jürgen Groß wrote:
>>>> On 30.10.20 12:55, Jan Beulich wrote:
>>>>> On 30.10.2020 12:15, Jürgen Groß wrote:
>>>>>> On 30.10.20 11:57, Julien Grall wrote:
>>>>>>> On 30/10/2020 10:49, Jan Beulich wrote:
>>>>>>>> On 30.10.2020 11:38, Julien Grall wrote:
>>>>>>>>> I think we should consider Juergen's series because the locking for the
>>>>>>>>> event channel is easier to understand.
>>>>>>>>
>>>>>>>> We should, yes. The one thing I'm a little uneasy with is the
>>>>>>>> new lock "variant" that gets introduced. Custom locking methods
>>>>>>>> also are a common source of problems (which isn't to say I see
>>>>>>>> any here).
>>>>>>>
>>>>>>> I am also uneasy with a new lock "variant". However, this is the best
>>>>>>> proposal I have seen so far to unblock the issue.
>>>>>>>
>>>>>>> I am open to other suggestions with a simple locking discipline.
>>>>>>
>>>>>> In theory my new lock variant could easily be replaced by a rwlock and
>>>>>> using the try-variant for the readers only.
>>>>>
>>>>> Well, only until we would add check_lock() there, which I think
>>>>> we should really have (not just on the slow paths, thanks to
>>>>> the use of spin_lock() there). The read-vs-write properties
>>>>> you're utilizing aren't applicable in the general case afaict,
>>>>> and hence such checking would get in the way.
>>>>
>>>> No, I don't think so.
>>>>
>>>> As long as there is no read_lock() user with interrupts off we should be
>>>> fine. read_trylock() is no problem as it can't block.
>>>
>>> How would check_lock() notice the difference? It would be all the
>>> same for read and write acquires of the lock, I think, and hence
>>> it would still get unhappy about uses from IRQ paths afaict.
>>
>> check_lock() isn't applicable here, at least not without modification.
>>
>> I think our spinlock implementation is wrong in this regard in case a
>> lock is entered via spin_trylock(), BTW. Using spin_trylock() with
>> interrupts off for a lock normally taken with interrupts enabled is
>> perfectly fine IMO.
> 
> Hmm, I think you're right, in which case I guess we ought to relax
> this.

Just writing a patch. :-)

And one for adding check_lock() to rwlocks, too.
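The discipline the rwlock variant would enforce can be sketched like
this (a single-threaded toy illustration with made-up names, nothing
like Xen's real atomic rwlock implementation):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of the proposed rwlock discipline: writers take the lock
 * exclusively on the interrupts-enabled slow paths, while IRQ-context
 * paths only ever use the try-variant of the read lock, which fails
 * instead of blocking and therefore can never deadlock.
 */
static int lockval;   /* 0 = free, >0 = number of readers, -1 = writer */

static bool write_trylock(void) { if (lockval) return false; lockval = -1; return true; }
static void write_unlock(void)  { lockval = 0; }

/* IRQ-safe reader entry point: never spins, never blocks. */
static bool read_trylock(void)  { if (lockval < 0) return false; lockval++; return true; }
static void read_unlock(void)   { lockval--; }
```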


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 13:43:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 13:43:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15855.39091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUh8-0005Eu-UB; Fri, 30 Oct 2020 13:43:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15855.39091; Fri, 30 Oct 2020 13:43:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUh8-0005Em-Pl; Fri, 30 Oct 2020 13:43:54 +0000
Received: by outflank-mailman (input) for mailman id 15855;
 Fri, 30 Oct 2020 13:43:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1N+=EF=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kYUh8-0005ET-0I
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 13:43:54 +0000
Received: from out1-smtp.messagingengine.com (unknown [66.111.4.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54634419-eeba-4ce8-ac73-f5aad179b2e2;
 Fri, 30 Oct 2020 13:43:53 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 15BF85C00F5;
 Fri, 30 Oct 2020 09:43:53 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 30 Oct 2020 09:43:53 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 0F5033064680;
 Fri, 30 Oct 2020 09:43:51 -0400 (EDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=JOOb/y
	iDDC1k50S9KOzfOAE43VJEtHLYbXHvm8rsAFE=; b=NU2J62bNArcboKjNRkB6HI
	jMBZyBwiLWJeopLZfKsDFF/dWdr9zTMLRAmzj/z8ZtcV1vZjAzt7HAo4XjIkjD2u
	KvS+B/AiYSUZ+9PN6u4/Q+M6mcv1tdwJwo+kdnLYQ3d7msJZB7z1Ic0a8lezR/Cy
	RxANojjAxWN5yFjqVVy2Hg9FiiS7cddKSZx0CAuNqwElhkgXYkWcmblzzwf/gtr1
	pnEx+HlQRdzR5K5g5yiHI+pVjkOwjDALKfcbr/9StD2cIF6Q1Y8bE7BlLkcu23Q3
	13T/I59DsqyiXzCbn6edau+giQHE8xkwwZd9rarjoLDnGKhICzlOYwToQJUOtb4Q
	==
Date: Fri, 30 Oct 2020 14:43:47 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	=?utf-8?Q?Fr=C3=A9d=C3=A9ric_Pierret_=28fepitre=29?= <frederic.pierret@qubes-os.org>,
	Wei Liu <wl@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v1 1/2] No insert of the build timestamp into the x86 xen
 efi binary
Message-ID: <20201030134347.GB2532@mail-itl>
References: <cover.1603725003.git.frederic.pierret@qubes-os.org>
 <64fc67bc2227d6cf92e079228c9f8d2d6404b001.1603725003.git.frederic.pierret@qubes-os.org>
 <93b0b06e-cb73-66eb-3535-e7ab2ca60bf8@suse.com>
 <20201030122319.GA16953@mail-itl>
 <9b278993-08bb-7ad2-2dfd-743987a9fd6f@suse.com>
 <3ee68210-a0d4-a906-c502-4988d996e61c@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="U+BazGySraz5kW0T"
Content-Disposition: inline
In-Reply-To: <3ee68210-a0d4-a906-c502-4988d996e61c@citrix.com>


--U+BazGySraz5kW0T
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH v1 1/2] No insert of the build timestamp into the x86 xen
 efi binary

On Fri, Oct 30, 2020 at 01:30:08PM +0000, Andrew Cooper wrote:
> On 30/10/2020 12:48, Jan Beulich wrote:
> > On 30.10.2020 13:23, Marek Marczykowski-Górecki wrote:
> >> On Fri, Oct 30, 2020 at 01:08:44PM +0100, Jan Beulich wrote:
> >>> On 30.10.2020 13:03, Frédéric Pierret (fepitre) wrote:
> >>>
> >>>> --- a/xen/arch/x86/Makefile
> >>>> +++ b/xen/arch/x86/Makefile
> >>>> @@ -170,6 +170,7 @@ EFI_LDFLAGS += --major-image-version=$(XEN_VERSION)
> >>>>  EFI_LDFLAGS += --minor-image-version=$(XEN_SUBVERSION)
> >>>>  EFI_LDFLAGS += --major-os-version=2 --minor-os-version=0
> >>>>  EFI_LDFLAGS += --major-subsystem-version=2 --minor-subsystem-version=0
> >>>> +EFI_LDFLAGS += --no-insert-timestamp
> >>> Generally I prefer binaries to carry timestamps, when they are
> >>> intended to do so (i.e. when they have a respective field). So
> >>> I think that if no timestamp is wanted, it should be an option
> >>> (not sure about the default).
> >> What about setting it to the SOURCE_DATE_EPOCH[1] variable value, if
> >> present? Of course, only if there is an option to set an explicit
> >> timestamp value.
> >>
> >> [1] https://reproducible-builds.org/docs/source-date-epoch/
> > Why not.
>=20
> SOURCE_DATE_EPOCH is the right way to fix this.

Hmm, reading the 'ld' man page, I don't see an option to set an explicit
value, only --insert-timestamp / --no-insert-timestamp.

> It probably wants to default to something sane in the root Makefile, so
> it covers tools as well.

In practice, package build systems (deb, rpm, etc.) do set it based on
the last package changelog entry, so _for the package build case_ it isn't
needed. But it would probably be a nice addition.
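For reference, the SOURCE_DATE_EPOCH convention mentioned above amounts to: when the variable is set, treat its value as seconds since the Unix epoch and use that as the build timestamp; otherwise fall back to the current time. A minimal C sketch of that rule (build_timestamp() is a hypothetical helper for illustration, not part of Xen's or ld's code):

```c
#include <assert.h>
#include <stdlib.h>
#include <time.h>

/*
 * Honour SOURCE_DATE_EPOCH per the reproducible-builds.org convention:
 * a decimal count of seconds since the Unix epoch. Fall back to the
 * real (non-reproducible) current time when unset or malformed.
 */
static time_t build_timestamp(void)
{
    const char *s = getenv("SOURCE_DATE_EPOCH");

    if (s != NULL && *s != '\0') {
        char *end = NULL;
        long long v = strtoll(s, &end, 10);

        /* Accept only a fully numeric, non-negative value. */
        if (end != NULL && *end == '\0' && v >= 0)
            return (time_t)v;
    }
    return time(NULL);   /* non-reproducible fallback */
}
```

Two builds run with the same SOURCE_DATE_EPOCH then embed the same timestamp, which is the whole point of the convention.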

--=20
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--U+BazGySraz5kW0T
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl+cGJMACgkQ24/THMrX
1yzjNgf+Pa+cW0aa+ADNqXcCGsea/rQvNnd8r16/gt/9OQtjojMOGUxatsZQGHIK
fHNtX29BAV5beBvYPhvtc2KCnxnNSuI+1czJKNtgxPYEPaZy/gDL5RPPirHKbL5S
eWO9xlwR71/XJKN2VodBtIsE+re5jj+QKaHHz0VFbF416buIFAir52vhBRpT5whh
rBWIQnj+bh5TilP6hyL8yeVPq3+Nu9hOaB1rNSs29nN3T7cd5Z8grx5fS7t/c52K
Zp+iqTvFlJVXdEL3Kv1re8eA7ZBysq3rb8DE68T31QONohfbRoC+lAQs6CbVK035
CsMVUmb2jSeyAFaOFtf2IeCoEPl+Jw==
=zo7W
-----END PGP SIGNATURE-----

--U+BazGySraz5kW0T--


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:00:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15870.39103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUxQ-00077y-C5; Fri, 30 Oct 2020 14:00:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15870.39103; Fri, 30 Oct 2020 14:00:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYUxQ-00077r-8E; Fri, 30 Oct 2020 14:00:44 +0000
Received: by outflank-mailman (input) for mailman id 15870;
 Fri, 30 Oct 2020 14:00:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2dz=EF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kYUxP-00077m-CH
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:00:43 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe09::60c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c720f0b-2cf7-4d61-b69c-b9891112b1b1;
 Fri, 30 Oct 2020 14:00:40 +0000 (UTC)
Received: from AM5PR0701CA0065.eurprd07.prod.outlook.com (2603:10a6:203:2::27)
 by VI1PR08MB3471.eurprd08.prod.outlook.com (2603:10a6:803:7d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Fri, 30 Oct
 2020 14:00:30 +0000
Received: from VE1EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::9) by AM5PR0701CA0065.outlook.office365.com
 (2603:10a6:203:2::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.11 via Frontend
 Transport; Fri, 30 Oct 2020 14:00:30 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT027.mail.protection.outlook.com (10.152.18.154) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 14:00:30 +0000
Received: ("Tessian outbound ba2270a55485:v64");
 Fri, 30 Oct 2020 14:00:28 +0000
Received: from ebfd1f4ccf4a.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 627368C9-2C74-419F-9A90-7250CC7D6124.1; 
 Fri, 30 Oct 2020 14:00:03 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ebfd1f4ccf4a.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 30 Oct 2020 14:00:03 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com (2603:10a6:20b:4e::31)
 by AM6PR08MB3079.eurprd08.prod.outlook.com (2603:10a6:209:45::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 13:59:59 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a]) by AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a%4]) with mapi id 15.20.3499.027; Fri, 30 Oct 2020
 13:59:59 +0000
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SBR1mSaVq1AGjPqPcYHqftLodwa4oX7xnw31LeL3eWI=;
 b=W+DBKV/VfnNPUsh22RZOyLiReM/+gsOzO/Pvxc3UfmmKU4RNbjxASi5lOehGer095GXOqn0AGSygVAouZtCBheKK9SQmilipnsJyS81HVws0eylmUi69BHywYxT1UwnYL43NeMSAbwMR6vh1146eA+GPDT+p25jo/HW9fpgtXQM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7fbeee8f48a57f05
X-CR-MTA-TID: 64aa7808
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v1 4/4] xen/pci: solve compilation error when memory
 paging is not enabled.
Thread-Topic: [PATCH v1 4/4] xen/pci: solve compilation error when memory
 paging is not enabled.
Thread-Index: AQHWq7wn1E1TPrUt3EyifJVqFrVmIams60gAgAA3GICAAa+LAIAABQGAgAFbgwA=
Date: Fri, 30 Oct 2020 13:59:59 +0000
Message-ID: <497FB9DE-F4AF-4110-9B48-5CA2CAA26F6C@arm.com>
References: <cover.1603731279.git.rahul.singh@arm.com>
 <dc85bb73ca4b6ab8b4a2370f2db7700445fbc5f8.1603731279.git.rahul.singh@arm.com>
 <b345b0d4-8045-1d5d-b3c9-498311cfb1ac@suse.com>
 <14328157-D9C5-428E-BD1C-F4A841359185@arm.com>
 <77495A6D-D1D8-4E62-B481-6AE59593CFAD@arm.com>
 <e6b24649-3621-6403-6fd7-de5814922197@suse.com>
In-Reply-To: <e6b24649-3621-6403-6fd7-de5814922197@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 26be8884-9ebc-4eaa-de09-08d87cdc2978
x-ms-traffictypediagnostic: AM6PR08MB3079:|VI1PR08MB3471:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB3471EE0E203104D5EA962CEFFC150@VI1PR08MB3471.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7691;OLM:7691;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D03F42363DC2B04CBD8561C2DD0E186F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3079
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	776cb35a-f964-41bd-ddc3-08d87cdc16fc
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 14:00:30.2348
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 26be8884-9ebc-4eaa-de09-08d87cdc2978
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3471

Hello Jan,

> On 29 Oct 2020, at 5:16 pm, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 29.10.2020 17:58, Rahul Singh wrote:
>>> On 28 Oct 2020, at 3:13 pm, Rahul Singh <Rahul.Singh@arm.com> wrote:
>>>> On 28 Oct 2020, at 11:56 am, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 26.10.2020 18:17, Rahul Singh wrote:
>>>>> --- a/xen/drivers/passthrough/pci.c
>>>>> +++ b/xen/drivers/passthrough/pci.c
>>>>> @@ -1419,13 +1419,15 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
>>>>>   if ( !is_iommu_enabled(d) )
>>>>>       return 0;
>>>>>
>>>>> -    /* Prevent device assign if mem paging or mem sharing have been
>>>>> +#if defined(CONFIG_HAS_MEM_PAGING) || defined(CONFIG_MEM_SHARING)
>>>>> +    /* Prevent device assign if mem paging or mem sharing have been
>>>>>    * enabled for this domain */
>>>>>   if ( d != dom_io &&
>>>>>        unlikely(mem_sharing_enabled(d) ||
>>>>>                 vm_event_check_ring(d->vm_event_paging) ||
>>>>>                 p2m_get_hostp2m(d)->global_logdirty) )
>>>>>       return -EXDEV;
>>>>> +#endif
>>>>
>>>> Besides this also disabling the mem-sharing and log-dirty related
>>>> logic, I don't think the change is correct: each item being
>>>> checked needs disabling individually, depending on its associated
>>>> CONFIG_*. For this, perhaps you want to introduce something
>>>> like mem_paging_enabled(d), to avoid the need for an #ifdef here?
>>>>
>>>
>>> Ok, I will fix that in the next version.
>>
>> I just checked and found out that mem-sharing, mem-paging and log-dirty are
>> x86-specific and not implemented for ARM.
>> Would it be OK if I move the above code to the x86-specific directory and
>> introduce a new function arch_pcidev_is_assignable() that tests whether a
>> pcidev is assignable or not?
>
> As an immediate workaround - perhaps (long term the individual
> pieces should still be individually checked here, as they're
> not inherently x86-specific). Since there's no device property
> involved here, the suggested name looks misleading. Perhaps
> arch_iommu_usable(d)?

Thanks. I will modify the code as per the suggestion and will share
version v2 for review.
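The mem_paging_enabled(d) predicate Jan suggests would hide the #ifdef in one place, so callers such as assign_device() compile unchanged whether or not the feature is configured in. A sketch under stated assumptions: the struct domain below is a stand-in with only the one field, and vm_event_check_ring() is reduced to a NULL check; only the names CONFIG_HAS_MEM_PAGING, vm_event_paging and mem_paging_enabled() come from the thread.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct vm_event_domain;                 /* opaque ring descriptor (stand-in) */

struct domain {
    struct vm_event_domain *vm_event_paging;
};

#ifdef CONFIG_HAS_MEM_PAGING
/* Stand-in for the real ring check. */
static bool vm_event_check_ring(const struct vm_event_domain *ved)
{
    return ved != NULL;
}
#define mem_paging_enabled(d) vm_event_check_ring((d)->vm_event_paging)
#else
/* Compiles to a constant, so the caller's whole condition folds away. */
#define mem_paging_enabled(d) ((void)(d), false)
#endif
```

With this pattern the `d != dom_io && unlikely(... || ...)` condition in assign_device() needs no #ifdef: each sub-check collapses to `false` when its CONFIG_* option is off.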

>
> Jan

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:09:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:09:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15889.39115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYV5D-0007Pg-DA; Fri, 30 Oct 2020 14:08:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15889.39115; Fri, 30 Oct 2020 14:08:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYV5D-0007PZ-9L; Fri, 30 Oct 2020 14:08:47 +0000
Received: by outflank-mailman (input) for mailman id 15889;
 Fri, 30 Oct 2020 14:08:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Bh1j=EF=kernel.org=arnd@srs-us1.protection.inumbo.net>)
 id 1kYV5B-0007PU-Cu
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:08:45 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ceeb4067-3576-4783-af68-fb9bfa3b4202;
 Fri, 30 Oct 2020 14:08:44 +0000 (UTC)
Received: from localhost.localdomain
 (HSI-KBW-46-223-126-90.hsi.kabel-badenwuerttemberg.de [46.223.126.90])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 51DEC206F7;
 Fri, 30 Oct 2020 14:08:40 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=Bh1j=EF=kernel.org=arnd@srs-us1.protection.inumbo.net>)
	id 1kYV5B-0007PU-Cu
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:08:45 +0000
X-Inumbo-ID: ceeb4067-3576-4783-af68-fb9bfa3b4202
Received: from mail.kernel.org (unknown [198.145.29.99])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ceeb4067-3576-4783-af68-fb9bfa3b4202;
	Fri, 30 Oct 2020 14:08:44 +0000 (UTC)
Received: from localhost.localdomain (HSI-KBW-46-223-126-90.hsi.kabel-badenwuerttemberg.de [46.223.126.90])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by mail.kernel.org (Postfix) with ESMTPSA id 51DEC206F7;
	Fri, 30 Oct 2020 14:08:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604066923;
	bh=gVwo9EBasTd5741/ywKxMppouneV4DWFezQFj6Bex30=;
	h=From:To:Cc:Subject:Date:From;
	b=amwa4WWGtNqEMtFTjOayAnVMIGT2+0I8ryTpdaUUhYIGwxkBnRB3hQNLL/O0J2hrA
	 Ov42nwGE4vasvA5nJfmNug6iqtaCKWSU/qS6PzumOb3Vzo5YfSXkGZDlcyzT5pPUeK
	 s0r1NgHksnHH+J262ok0biKeJJvDQrYHt/MsamYE=
From: Arnd Bergmann <arnd@kernel.org>
To: Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	x86@kernel.org
Cc: Arnd Bergmann <arnd@arndb.de>,
	Paolo Bonzini <pbonzini@redhat.com>,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-pm@vger.kernel.org,
	linux-edac@vger.kernel.org,
	kvm@vger.kernel.org,
	platform-driver-x86@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	iommu@lists.linux-foundation.org
Subject: [PATCH] [v3] x86: apic: avoid -Wshadow warning in header
Date: Fri, 30 Oct 2020 15:06:36 +0100
Message-Id: <20201030140834.852488-1-arnd@kernel.org>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Arnd Bergmann <arnd@arndb.de>

There are hundreds of warnings in a W=2 build about a local
variable shadowing the global 'apic' definition:

arch/x86/kvm/lapic.h:149:65: warning: declaration of 'apic' shadows a global declaration [-Wshadow]

Avoid this by renaming the global 'apic' variable to the more descriptive
'x86_local_apic'. It was originally called 'genapic', but both that
and the current 'apic' seem to be a little overly generic for a global
variable.
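As a minimal illustration (not taken from the patch), the class of warning being silenced looks like this: any parameter or local named like the global draws a -Wshadow diagnostic, which renaming the global avoids at every such site.

```c
/* Global named 'apic', like the old 'struct apic *apic'. */
int apic;

/* With -Wshadow: "declaration of 'apic' shadows a global declaration". */
int read_reg(int apic)
{
    /* Inside the function, 'apic' refers to the parameter, not the global. */
    return apic + 1;
}
```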

Fixes: c48f14966cc4 ("KVM: inline kvm_apic_present() and kvm_lapic_enabled()")
Fixes: c8d46cf06dc2 ("x86: rename 'genapic' to 'apic'")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
v3: rename the global from x86_system_apic to x86_local_apic

v2: rename the global instead of the local variable in the header

This is only tested in an allmodconfig build, after fixing up a
few mistakes in the original search&replace. It's likely that I
missed a few others, but this version should be sufficient to
decide whether this is a good idea in the first place, as well
as if there are better ideas for the new name.
---
 arch/x86/hyperv/hv_apic.c             |  2 ++
 arch/x86/hyperv/hv_spinlock.c         |  4 ++--
 arch/x86/include/asm/apic.h           | 18 +++++++++---------
 arch/x86/kernel/acpi/boot.c           |  2 +-
 arch/x86/kernel/apic/apic.c           | 18 +++++++++---------
 arch/x86/kernel/apic/apic_flat_64.c   |  8 ++++----
 arch/x86/kernel/apic/apic_numachip.c  |  4 ++--
 arch/x86/kernel/apic/bigsmp_32.c      |  2 +-
 arch/x86/kernel/apic/hw_nmi.c         |  2 +-
 arch/x86/kernel/apic/io_apic.c        | 19 ++++++++++---------
 arch/x86/kernel/apic/ipi.c            | 22 +++++++++++-----------
 arch/x86/kernel/apic/msi.c            |  2 +-
 arch/x86/kernel/apic/probe_32.c       | 20 ++++++++++----------
 arch/x86/kernel/apic/probe_64.c       | 12 ++++++------
 arch/x86/kernel/apic/vector.c         |  8 ++++----
 arch/x86/kernel/apic/x2apic_cluster.c |  3 ++-
 arch/x86/kernel/apic/x2apic_phys.c    |  2 +-
 arch/x86/kernel/apic/x2apic_uv_x.c    |  2 +-
 arch/x86/kernel/cpu/common.c          | 14 ++++++++------
 arch/x86/kernel/cpu/mce/inject.c      |  4 ++--
 arch/x86/kernel/cpu/topology.c        |  8 ++++----
 arch/x86/kernel/irq_work.c            |  2 +-
 arch/x86/kernel/kvm.c                 |  6 +++---
 arch/x86/kernel/nmi_selftest.c        |  2 +-
 arch/x86/kernel/smpboot.c             | 20 +++++++++++---------
 arch/x86/kernel/vsmp_64.c             |  2 +-
 arch/x86/kvm/vmx/vmx.c                |  2 +-
 arch/x86/mm/srat.c                    |  2 +-
 arch/x86/platform/uv/uv_irq.c         |  4 ++--
 arch/x86/platform/uv/uv_nmi.c         |  2 +-
 arch/x86/xen/apic.c                   |  8 ++++----
 drivers/iommu/amd/iommu.c             | 10 ++++++----
 drivers/iommu/intel/irq_remapping.c   |  4 ++--
 33 files changed, 125 insertions(+), 115 deletions(-)

diff --git a/arch/x86/hyperv/hv_apic.c b/arch/x86/hyperv/hv_apic.c
index 284e73661a18..9df6ed521048 100644
--- a/arch/x86/hyperv/hv_apic.c
+++ b/arch/x86/hyperv/hv_apic.c
@@ -254,6 +254,8 @@ static void hv_send_ipi_self(int vector)
 
 void __init hv_apic_init(void)
 {
+	struct apic *apic = x86_local_apic;
+
 	if (ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) {
 		pr_info("Hyper-V: Using IPI hypercalls\n");
 		/*
diff --git a/arch/x86/hyperv/hv_spinlock.c b/arch/x86/hyperv/hv_spinlock.c
index f3270c1fc48c..01576e14460e 100644
--- a/arch/x86/hyperv/hv_spinlock.c
+++ b/arch/x86/hyperv/hv_spinlock.c
@@ -20,7 +20,7 @@ static bool __initdata hv_pvspin = true;
 
 static void hv_qlock_kick(int cpu)
 {
-	apic->send_IPI(cpu, X86_PLATFORM_IPI_VECTOR);
+	x86_local_apic->send_IPI(cpu, X86_PLATFORM_IPI_VECTOR);
 }
 
 static void hv_qlock_wait(u8 *byte, u8 val)
@@ -64,7 +64,7 @@ PV_CALLEE_SAVE_REGS_THUNK(hv_vcpu_is_preempted);
 
 void __init hv_init_spinlocks(void)
 {
-	if (!hv_pvspin || !apic ||
+	if (!hv_pvspin || !x86_local_apic ||
 	    !(ms_hyperv.hints & HV_X64_CLUSTER_IPI_RECOMMENDED) ||
 	    !(ms_hyperv.features & HV_MSR_GUEST_IDLE_AVAILABLE)) {
 		pr_info("PV spinlocks disabled\n");
diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
index 4e3099d9ae62..34294fc8f2da 100644
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -361,7 +361,7 @@ struct apic {
  * always just one such driver in use - the kernel decides via an
  * early probing process which one it picks - and then sticks to it):
  */
-extern struct apic *apic;
+extern struct apic *x86_local_apic;
 
 /*
  * APIC drivers are probed based on how they are listed in the .apicdrivers
@@ -395,37 +395,37 @@ extern int lapic_can_unplug_cpu(void);
 
 static inline u32 apic_read(u32 reg)
 {
-	return apic->read(reg);
+	return x86_local_apic->read(reg);
 }
 
 static inline void apic_write(u32 reg, u32 val)
 {
-	apic->write(reg, val);
+	x86_local_apic->write(reg, val);
 }
 
 static inline void apic_eoi(void)
 {
-	apic->eoi_write(APIC_EOI, APIC_EOI_ACK);
+	x86_local_apic->eoi_write(APIC_EOI, APIC_EOI_ACK);
 }
 
 static inline u64 apic_icr_read(void)
 {
-	return apic->icr_read();
+	return x86_local_apic->icr_read();
 }
 
 static inline void apic_icr_write(u32 low, u32 high)
 {
-	apic->icr_write(low, high);
+	x86_local_apic->icr_write(low, high);
 }
 
 static inline void apic_wait_icr_idle(void)
 {
-	apic->wait_icr_idle();
+	x86_local_apic->wait_icr_idle();
 }
 
 static inline u32 safe_apic_wait_icr_idle(void)
 {
-	return apic->safe_wait_icr_idle();
+	return x86_local_apic->safe_wait_icr_idle();
 }
 
 extern void __init apic_set_eoi_write(void (*eoi_write)(u32 reg, u32 v));
@@ -494,7 +494,7 @@ static inline unsigned int read_apic_id(void)
 {
 	unsigned int reg = apic_read(APIC_ID);
 
-	return apic->get_apic_id(reg);
+	return x86_local_apic->get_apic_id(reg);
 }
 
 extern int default_apic_id_valid(u32 apicid);
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 7bdc0239a943..8b7fbfbd86b7 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -211,7 +211,7 @@ acpi_parse_x2apic(union acpi_subtable_headers *header, const unsigned long end)
 	 * to not preallocating memory for all NR_CPUS
 	 * when we use CPU hotplug.
 	 */
-	if (!apic->apic_id_valid(apic_id)) {
+	if (!x86_local_apic->apic_id_valid(apic_id)) {
 		if (enabled)
 			pr_warn(PREFIX "x2apic entry ignored\n");
 		return 0;
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index b3eef1d5c903..8490810451e6 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -241,7 +241,7 @@ static int modern_apic(void)
 static void __init apic_disable(void)
 {
 	pr_info("APIC: switched to apic NOOP\n");
-	apic = &apic_noop;
+	x86_local_apic = &apic_noop;
 }
 
 void native_apic_wait_icr_idle(void)
@@ -519,7 +519,7 @@ static int lapic_timer_set_oneshot(struct clock_event_device *evt)
 static void lapic_timer_broadcast(const struct cpumask *mask)
 {
 #ifdef CONFIG_SMP
-	apic->send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
+	x86_local_apic->send_IPI_mask(mask, LOCAL_TIMER_VECTOR);
 #endif
 }
 
@@ -1444,7 +1444,7 @@ static void lapic_setup_esr(void)
 		return;
 	}
 
-	if (apic->disable_esr) {
+	if (x86_local_apic->disable_esr) {
 		/*
 		 * Something untraceable is creating bad interrupts on
 		 * secondary quads ... for the moment, just leave the
@@ -1570,7 +1570,7 @@ static void setup_local_APIC(void)
 
 #ifdef CONFIG_X86_32
 	/* Pound the ESR really hard over the head with a big hammer - mbligh */
-	if (lapic_is_integrated() && apic->disable_esr) {
+	if (lapic_is_integrated() && x86_local_apic->disable_esr) {
 		apic_write(APIC_ESR, 0);
 		apic_write(APIC_ESR, 0);
 		apic_write(APIC_ESR, 0);
@@ -1581,17 +1581,17 @@ static void setup_local_APIC(void)
 	 * Double-check whether this APIC is really registered.
 	 * This is meaningless in clustered apic mode, so we skip it.
 	 */
-	BUG_ON(!apic->apic_id_registered());
+	BUG_ON(!x86_local_apic->apic_id_registered());
 
 	/*
 	 * Intel recommends to set DFR, LDR and TPR before enabling
 	 * an APIC.  See e.g. "AP-388 82489DX User's Manual" (Intel
 	 * document number 292116).  So here it goes...
 	 */
-	apic->init_apic_ldr();
+	x86_local_apic->init_apic_ldr();
 
 #ifdef CONFIG_X86_32
-	if (apic->dest_logical) {
+	if (x86_local_apic->dest_logical) {
 		int logical_apicid, ldr_apicid;
 
 		/*
@@ -2463,7 +2463,7 @@ int generic_processor_info(int apicid, int version)
 #endif
 #ifdef CONFIG_X86_32
 	early_per_cpu(x86_cpu_to_logical_apicid, cpu) =
-		apic->x86_32_early_logical_apicid(cpu);
+		x86_local_apic->x86_32_early_logical_apicid(cpu);
 #endif
 	set_cpu_possible(cpu, true);
 	physid_set(apicid, phys_cpu_present_map);
@@ -2499,7 +2499,7 @@ void __init apic_set_eoi_write(void (*eoi_write)(u32 reg, u32 v))
 static void __init apic_bsp_up_setup(void)
 {
 #ifdef CONFIG_X86_64
-	apic_write(APIC_ID, apic->set_apic_id(boot_cpu_physical_apicid));
+	apic_write(APIC_ID, x86_local_apic->set_apic_id(boot_cpu_physical_apicid));
 #else
 	/*
 	 * Hack: In case of kdump, after a crash, kernel might be booting
diff --git a/arch/x86/kernel/apic/apic_flat_64.c b/arch/x86/kernel/apic/apic_flat_64.c
index 7862b152a052..8b9eb2285b88 100644
--- a/arch/x86/kernel/apic/apic_flat_64.c
+++ b/arch/x86/kernel/apic/apic_flat_64.c
@@ -20,8 +20,8 @@
 static struct apic apic_physflat;
 static struct apic apic_flat;
 
-struct apic *apic __ro_after_init = &apic_flat;
-EXPORT_SYMBOL_GPL(apic);
+struct apic *x86_local_apic __ro_after_init = &apic_flat;
+EXPORT_SYMBOL_GPL(x86_local_apic);
 
 static int flat_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 {
@@ -53,7 +53,7 @@ static void _flat_send_IPI_mask(unsigned long mask, int vector)
 	unsigned long flags;
 
 	local_irq_save(flags);
-	__default_send_IPI_dest_field(mask, vector, apic->dest_logical);
+	__default_send_IPI_dest_field(mask, vector, x86_local_apic->dest_logical);
 	local_irq_restore(flags);
 }
 
@@ -191,7 +191,7 @@ static void physflat_init_apic_ldr(void)
 
 static int physflat_probe(void)
 {
-	if (apic == &apic_physflat || num_possible_cpus() > 8 ||
+	if (x86_local_apic == &apic_physflat || num_possible_cpus() > 8 ||
 	    jailhouse_paravirt())
 		return 1;
 
diff --git a/arch/x86/kernel/apic/apic_numachip.c b/arch/x86/kernel/apic/apic_numachip.c
index 35edd57f064a..2292b1614c9f 100644
--- a/arch/x86/kernel/apic/apic_numachip.c
+++ b/arch/x86/kernel/apic/apic_numachip.c
@@ -159,12 +159,12 @@ static void numachip_send_IPI_self(int vector)
 
 static int __init numachip1_probe(void)
 {
-	return apic == &apic_numachip1;
+	return x86_local_apic == &apic_numachip1;
 }
 
 static int __init numachip2_probe(void)
 {
-	return apic == &apic_numachip2;
+	return x86_local_apic == &apic_numachip2;
 }
 
 static void fixup_cpu_id(struct cpuinfo_x86 *c, int node)
diff --git a/arch/x86/kernel/apic/bigsmp_32.c b/arch/x86/kernel/apic/bigsmp_32.c
index 98d015a4405a..4832dd7e453c 100644
--- a/arch/x86/kernel/apic/bigsmp_32.c
+++ b/arch/x86/kernel/apic/bigsmp_32.c
@@ -176,7 +176,7 @@ void __init generic_bigsmp_probe(void)
 	if (!probe_bigsmp())
 		return;
 
-	apic = &apic_bigsmp;
+	x86_local_apic = &apic_bigsmp;
 
 	for_each_possible_cpu(cpu) {
 		if (early_per_cpu(x86_cpu_to_logical_apicid,
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 34a992e275ef..0ced5a1bf602 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -31,7 +31,7 @@ u64 hw_nmi_get_sample_period(int watchdog_thresh)
 #ifdef arch_trigger_cpumask_backtrace
 static void nmi_raise_cpu_backtrace(cpumask_t *mask)
 {
-	apic->send_IPI_mask(mask, NMI_VECTOR);
+	x86_local_apic->send_IPI_mask(mask, NMI_VECTOR);
 }
 
 void arch_trigger_cpumask_backtrace(const cpumask_t *mask, bool exclude_self)
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index 7b3c7e0d4a09..7bd1c09e6c94 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1457,7 +1457,7 @@ void __init setup_ioapic_ids_from_mpc_nocheck(void)
 	 * This is broken; anything with a real cpu count has to
 	 * circumvent this idiocy regardless.
 	 */
-	apic->ioapic_phys_id_map(&phys_cpu_present_map, &phys_id_present_map);
+	x86_local_apic->ioapic_phys_id_map(&phys_cpu_present_map, &phys_id_present_map);
 
 	/*
 	 * Set the IOAPIC ID to the value stored in the MPC table.
@@ -1483,7 +1483,7 @@ void __init setup_ioapic_ids_from_mpc_nocheck(void)
 		 * system must have a unique ID or we get lots of nice
 		 * 'stuck on smp_invalidate_needed IPI wait' messages.
 		 */
-		if (apic->check_apicid_used(&phys_id_present_map,
+		if (x86_local_apic->check_apicid_used(&phys_id_present_map,
 					    mpc_ioapic_id(ioapic_idx))) {
 			printk(KERN_ERR "BIOS bug, IO-APIC#%d ID %d is already used!...\n",
 				ioapic_idx, mpc_ioapic_id(ioapic_idx));
@@ -1498,7 +1498,7 @@ void __init setup_ioapic_ids_from_mpc_nocheck(void)
 			ioapics[ioapic_idx].mp_config.apicid = i;
 		} else {
 			physid_mask_t tmp;
-			apic->apicid_to_cpu_present(mpc_ioapic_id(ioapic_idx),
+			x86_local_apic->apicid_to_cpu_present(mpc_ioapic_id(ioapic_idx),
 						    &tmp);
 			apic_printk(APIC_VERBOSE, "Setting %d in the "
 					"phys_id_present_map\n",
@@ -2462,7 +2462,8 @@ static int io_apic_get_unique_id(int ioapic, int apic_id)
 	 */
 
 	if (physids_empty(apic_id_map))
-		apic->ioapic_phys_id_map(&phys_cpu_present_map, &apic_id_map);
+		x86_local_apic->ioapic_phys_id_map(&phys_cpu_present_map,
+						    &apic_id_map);
 
 	raw_spin_lock_irqsave(&ioapic_lock, flags);
 	reg_00.raw = io_apic_read(ioapic, 0);
@@ -2478,10 +2479,10 @@ static int io_apic_get_unique_id(int ioapic, int apic_id)
 	 * Every APIC in a system must have a unique ID or we get lots of nice
 	 * 'stuck on smp_invalidate_needed IPI wait' messages.
 	 */
-	if (apic->check_apicid_used(&apic_id_map, apic_id)) {
+	if (x86_local_apic->check_apicid_used(&apic_id_map, apic_id)) {
 
 		for (i = 0; i < get_physical_broadcast(); i++) {
-			if (!apic->check_apicid_used(&apic_id_map, i))
+			if (!x86_local_apic->check_apicid_used(&apic_id_map, i))
 				break;
 		}
 
@@ -2494,7 +2495,7 @@ static int io_apic_get_unique_id(int ioapic, int apic_id)
 		apic_id = i;
 	}
 
-	apic->apicid_to_cpu_present(apic_id, &tmp);
+	x86_local_apic->apicid_to_cpu_present(apic_id, &tmp);
 	physids_or(apic_id_map, apic_id_map, tmp);
 
 	if (reg_00.bits.ID != apic_id) {
@@ -2948,8 +2949,8 @@ static void mp_setup_entry(struct irq_cfg *cfg, struct mp_chip_data *data,
 			   struct IO_APIC_route_entry *entry)
 {
 	memset(entry, 0, sizeof(*entry));
-	entry->delivery_mode = apic->irq_delivery_mode;
-	entry->dest_mode     = apic->irq_dest_mode;
+	entry->delivery_mode = x86_local_apic->irq_delivery_mode;
+	entry->dest_mode     = x86_local_apic->irq_dest_mode;
 	entry->dest	     = cfg->dest_apicid;
 	entry->vector	     = cfg->vector;
 	entry->trigger	     = data->trigger;
diff --git a/arch/x86/kernel/apic/ipi.c b/arch/x86/kernel/apic/ipi.c
index 387154e39e08..6532ef8629c5 100644
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -52,9 +52,9 @@ void apic_send_IPI_allbutself(unsigned int vector)
 		return;
 
 	if (static_branch_likely(&apic_use_ipi_shorthand))
-		apic->send_IPI_allbutself(vector);
+		x86_local_apic->send_IPI_allbutself(vector);
 	else
-		apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
+		x86_local_apic->send_IPI_mask_allbutself(cpu_online_mask, vector);
 }
 
 /*
@@ -68,12 +68,12 @@ void native_smp_send_reschedule(int cpu)
 		WARN(1, "sched: Unexpected reschedule of offline CPU#%d!\n", cpu);
 		return;
 	}
-	apic->send_IPI(cpu, RESCHEDULE_VECTOR);
+	x86_local_apic->send_IPI(cpu, RESCHEDULE_VECTOR);
 }
 
 void native_send_call_func_single_ipi(int cpu)
 {
-	apic->send_IPI(cpu, CALL_FUNCTION_SINGLE_VECTOR);
+	x86_local_apic->send_IPI(cpu, CALL_FUNCTION_SINGLE_VECTOR);
 }
 
 void native_send_call_func_ipi(const struct cpumask *mask)
@@ -85,14 +85,14 @@ void native_send_call_func_ipi(const struct cpumask *mask)
 			goto sendmask;
 
 		if (cpumask_test_cpu(cpu, mask))
-			apic->send_IPI_all(CALL_FUNCTION_VECTOR);
+			x86_local_apic->send_IPI_all(CALL_FUNCTION_VECTOR);
 		else if (num_online_cpus() > 1)
-			apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
+			x86_local_apic->send_IPI_allbutself(CALL_FUNCTION_VECTOR);
 		return;
 	}
 
 sendmask:
-	apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
+	x86_local_apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
 }
 
 #endif /* CONFIG_SMP */
@@ -224,7 +224,7 @@ void default_send_IPI_mask_allbutself_phys(const struct cpumask *mask,
  */
 void default_send_IPI_single(int cpu, int vector)
 {
-	apic->send_IPI_mask(cpumask_of(cpu), vector);
+	x86_local_apic->send_IPI_mask(cpumask_of(cpu), vector);
 }
 
 void default_send_IPI_allbutself(int vector)
@@ -260,7 +260,7 @@ void default_send_IPI_mask_sequence_logical(const struct cpumask *mask,
 	for_each_cpu(query_cpu, mask)
 		__default_send_IPI_dest_field(
 			early_per_cpu(x86_cpu_to_logical_apicid, query_cpu),
-			vector, apic->dest_logical);
+			vector, x86_local_apic->dest_logical);
 	local_irq_restore(flags);
 }
 
@@ -279,7 +279,7 @@ void default_send_IPI_mask_allbutself_logical(const struct cpumask *mask,
 			continue;
 		__default_send_IPI_dest_field(
 			early_per_cpu(x86_cpu_to_logical_apicid, query_cpu),
-			vector, apic->dest_logical);
+			vector, x86_local_apic->dest_logical);
 		}
 	local_irq_restore(flags);
 }
@@ -297,7 +297,7 @@ void default_send_IPI_mask_logical(const struct cpumask *cpumask, int vector)
 
 	local_irq_save(flags);
 	WARN_ON(mask & ~cpumask_bits(cpu_online_mask)[0]);
-	__default_send_IPI_dest_field(mask, vector, apic->dest_logical);
+	__default_send_IPI_dest_field(mask, vector, x86_local_apic->dest_logical);
 	local_irq_restore(flags);
 }
 
diff --git a/arch/x86/kernel/apic/msi.c b/arch/x86/kernel/apic/msi.c
index 6313f0a05db7..942da5023c72 100644
--- a/arch/x86/kernel/apic/msi.c
+++ b/arch/x86/kernel/apic/msi.c
@@ -32,7 +32,7 @@ static void __irq_msi_compose_msg(struct irq_cfg *cfg, struct msi_msg *msg)
 
 	msg->address_lo =
 		MSI_ADDR_BASE_LO |
-		((apic->irq_dest_mode == 0) ?
+		((x86_local_apic->irq_dest_mode == 0) ?
 			MSI_ADDR_DEST_MODE_PHYSICAL :
 			MSI_ADDR_DEST_MODE_LOGICAL) |
 		MSI_ADDR_REDIRECTION_CPU |
diff --git a/arch/x86/kernel/apic/probe_32.c b/arch/x86/kernel/apic/probe_32.c
index 67b6f7c049ec..14bd89df18c2 100644
--- a/arch/x86/kernel/apic/probe_32.c
+++ b/arch/x86/kernel/apic/probe_32.c
@@ -113,8 +113,8 @@ static struct apic apic_default __ro_after_init = {
 
 apic_driver(apic_default);
 
-struct apic *apic __ro_after_init = &apic_default;
-EXPORT_SYMBOL_GPL(apic);
+struct apic *x86_local_apic __ro_after_init = &apic_default;
+EXPORT_SYMBOL_GPL(x86_local_apic);
 
 static int cmdline_apic __initdata;
 static int __init parse_apic(char *arg)
@@ -126,7 +126,7 @@ static int __init parse_apic(char *arg)
 
 	for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
 		if (!strcmp((*drv)->name, arg)) {
-			apic = *drv;
+			x86_local_apic = *drv;
 			cmdline_apic = 1;
 			return 0;
 		}
@@ -164,12 +164,12 @@ void __init default_setup_apic_routing(void)
 	 * - we find more than 8 CPUs in acpi LAPIC listing with xAPIC support
 	 */
 
-	if (!cmdline_apic && apic == &apic_default)
+	if (!cmdline_apic && x86_local_apic == &apic_default)
 		generic_bigsmp_probe();
 #endif
 
-	if (apic->setup_apic_routing)
-		apic->setup_apic_routing();
+	if (x86_local_apic->setup_apic_routing)
+		x86_local_apic->setup_apic_routing();
 }
 
 void __init generic_apic_probe(void)
@@ -179,7 +179,7 @@ void __init generic_apic_probe(void)
 
 		for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
 			if ((*drv)->probe()) {
-				apic = *drv;
+				x86_local_apic = *drv;
 				break;
 			}
 		}
@@ -187,7 +187,7 @@ void __init generic_apic_probe(void)
 		if (drv == __apicdrivers_end)
 			panic("Didn't find an APIC driver");
 	}
-	printk(KERN_INFO "Using APIC driver %s\n", apic->name);
+	printk(KERN_INFO "Using APIC driver %s\n", x86_local_apic->name);
 }
 
 /* This function can switch the APIC even after the initial ->probe() */
@@ -202,9 +202,9 @@ int __init default_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 			continue;
 
 		if (!cmdline_apic) {
-			apic = *drv;
+			x86_local_apic = *drv;
 			printk(KERN_INFO "Switched to APIC driver `%s'.\n",
-			       apic->name);
+			       x86_local_apic->name);
 		}
 		return 1;
 	}
diff --git a/arch/x86/kernel/apic/probe_64.c b/arch/x86/kernel/apic/probe_64.c
index c46720f185c0..1d11b21c2ef2 100644
--- a/arch/x86/kernel/apic/probe_64.c
+++ b/arch/x86/kernel/apic/probe_64.c
@@ -24,10 +24,10 @@ void __init default_setup_apic_routing(void)
 
 	for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
 		if ((*drv)->probe && (*drv)->probe()) {
-			if (apic != *drv) {
-				apic = *drv;
+			if (x86_local_apic != *drv) {
+				x86_local_apic = *drv;
 				pr_info("Switched APIC routing to %s.\n",
-					apic->name);
+					x86_local_apic->name);
 			}
 			break;
 		}
@@ -40,10 +40,10 @@ int __init default_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
 
 	for (drv = __apicdrivers; drv < __apicdrivers_end; drv++) {
 		if ((*drv)->acpi_madt_oem_check(oem_id, oem_table_id)) {
-			if (apic != *drv) {
-				apic = *drv;
+			if (x86_local_apic != *drv) {
+				x86_local_apic = *drv;
 				pr_info("Setting APIC routing to %s.\n",
-					apic->name);
+					x86_local_apic->name);
 			}
 			return 1;
 		}
diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index 1eac53632786..1722dce7da69 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -122,7 +122,7 @@ static void apic_update_irq_cfg(struct irq_data *irqd, unsigned int vector,
 	lockdep_assert_held(&vector_lock);
 
 	apicd->hw_irq_cfg.vector = vector;
-	apicd->hw_irq_cfg.dest_apicid = apic->calc_dest_apicid(cpu);
+	apicd->hw_irq_cfg.dest_apicid = x86_local_apic->calc_dest_apicid(cpu);
 	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));
 	trace_vector_config(irqd->irq, vector, cpu,
 			    apicd->hw_irq_cfg.dest_apicid);
@@ -800,7 +800,7 @@ static int apic_retrigger_irq(struct irq_data *irqd)
 	unsigned long flags;
 
 	raw_spin_lock_irqsave(&vector_lock, flags);
-	apic->send_IPI(apicd->cpu, apicd->vector);
+	x86_local_apic->send_IPI(apicd->cpu, apicd->vector);
 	raw_spin_unlock_irqrestore(&vector_lock, flags);
 
 	return 1;
@@ -876,7 +876,7 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_irq_move_cleanup)
 		 */
 		irr = apic_read(APIC_IRR + (vector / 32 * 0x10));
 		if (irr & (1U << (vector % 32))) {
-			apic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
+			x86_local_apic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
 			continue;
 		}
 		free_moved_vector(apicd);
@@ -894,7 +894,7 @@ static void __send_cleanup_vector(struct apic_chip_data *apicd)
 	cpu = apicd->prev_cpu;
 	if (cpu_online(cpu)) {
 		hlist_add_head(&apicd->clist, per_cpu_ptr(&cleanup_list, cpu));
-		apic->send_IPI(cpu, IRQ_MOVE_CLEANUP_VECTOR);
+		x86_local_apic->send_IPI(cpu, IRQ_MOVE_CLEANUP_VECTOR);
 	} else {
 		apicd->prev_vector = 0;
 	}
diff --git a/arch/x86/kernel/apic/x2apic_cluster.c b/arch/x86/kernel/apic/x2apic_cluster.c
index b0889c48a2ac..1ac445cc5c2a 100644
--- a/arch/x86/kernel/apic/x2apic_cluster.c
+++ b/arch/x86/kernel/apic/x2apic_cluster.c
@@ -61,7 +61,8 @@ __x2apic_send_IPI_mask(const struct cpumask *mask, int vector, int apic_dest)
 		if (!dest)
 			continue;
 
-		__x2apic_send_IPI_dest(dest, vector, apic->dest_logical);
+		__x2apic_send_IPI_dest(dest, vector,
+				       x86_local_apic->dest_logical);
 		/* Remove cluster CPUs from tmpmask */
 		cpumask_andnot(tmpmsk, tmpmsk, &cmsk->mask);
 	}
diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c
index bc9693841353..b4a37751dd54 100644
--- a/arch/x86/kernel/apic/x2apic_phys.c
+++ b/arch/x86/kernel/apic/x2apic_phys.c
@@ -92,7 +92,7 @@ static int x2apic_phys_probe(void)
 	if (x2apic_mode && (x2apic_phys || x2apic_fadt_phys()))
 		return 1;
 
-	return apic == &apic_x2apic_phys;
+	return x86_local_apic == &apic_x2apic_phys;
 }
 
 /* Common x2apic functions, also used by x2apic_cluster */
diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
index 714233cee0b5..80354a565a7b 100644
--- a/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -796,7 +796,7 @@ static void uv_send_IPI_self(int vector)
 
 static int uv_probe(void)
 {
-	return apic == &apic_x2apic_uv_x;
+	return x86_local_apic == &apic_x2apic_uv_x;
 }
 
 static struct apic apic_x2apic_uv_x __ro_after_init = {
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 35ad8480c464..69c12cc86879 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -780,7 +780,8 @@ void detect_ht(struct cpuinfo_x86 *c)
 		return;
 
 	index_msb = get_count_order(smp_num_siblings);
-	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid, index_msb);
+	c->phys_proc_id = x86_local_apic->phys_pkg_id(c->initial_apicid,
+						       index_msb);
 
 	smp_num_siblings = smp_num_siblings / c->x86_max_cores;
 
@@ -788,8 +789,9 @@ void detect_ht(struct cpuinfo_x86 *c)
 
 	core_bits = get_count_order(c->x86_max_cores);
 
-	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid, index_msb) &
-				       ((1 << core_bits) - 1);
+	c->cpu_core_id = x86_local_apic->phys_pkg_id(c->initial_apicid,
+						      index_msb) &
+					       ((1 << core_bits) - 1);
 #endif
 }
 
@@ -1442,7 +1444,7 @@ static void generic_identify(struct cpuinfo_x86 *c)
 		c->initial_apicid = (cpuid_ebx(1) >> 24) & 0xFF;
 #ifdef CONFIG_X86_32
 # ifdef CONFIG_SMP
-		c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+		c->apicid = x86_local_apic->phys_pkg_id(c->initial_apicid, 0);
 # else
 		c->apicid = c->initial_apicid;
 # endif
@@ -1481,7 +1483,7 @@ static void validate_apic_and_package_id(struct cpuinfo_x86 *c)
 #ifdef CONFIG_SMP
 	unsigned int apicid, cpu = smp_processor_id();
 
-	apicid = apic->cpu_present_to_apicid(cpu);
+	apicid = x86_local_apic->cpu_present_to_apicid(cpu);
 
 	if (apicid != c->apicid) {
 		pr_err(FW_BUG "CPU%u: APIC id mismatch. Firmware: %x APIC: %x\n",
@@ -1535,7 +1537,7 @@ static void identify_cpu(struct cpuinfo_x86 *c)
 	apply_forced_caps(c);
 
 #ifdef CONFIG_X86_64
-	c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+	c->apicid = x86_local_apic->phys_pkg_id(c->initial_apicid, 0);
 #endif
 
 	/*
diff --git a/arch/x86/kernel/cpu/mce/inject.c b/arch/x86/kernel/cpu/mce/inject.c
index 3a44346f2276..e241746b0cd0 100644
--- a/arch/x86/kernel/cpu/mce/inject.c
+++ b/arch/x86/kernel/cpu/mce/inject.c
@@ -252,8 +252,8 @@ static void __maybe_unused raise_mce(struct mce *m)
 					mce_irq_ipi, NULL, 0);
 				preempt_enable();
 			} else if (m->inject_flags & MCJ_NMI_BROADCAST)
-				apic->send_IPI_mask(mce_inject_cpumask,
-						NMI_VECTOR);
+				x86_local_apic->send_IPI_mask(mce_inject_cpumask,
+							       NMI_VECTOR);
 		}
 		start = jiffies;
 		while (!cpumask_empty(mce_inject_cpumask)) {
diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
index d3a0791bc052..a3f9a2e6c0d1 100644
--- a/arch/x86/kernel/cpu/topology.c
+++ b/arch/x86/kernel/cpu/topology.c
@@ -137,16 +137,16 @@ int detect_extended_topology(struct cpuinfo_x86 *c)
 	die_select_mask = (~(-1 << die_plus_mask_width)) >>
 				core_plus_mask_width;
 
-	c->cpu_core_id = apic->phys_pkg_id(c->initial_apicid,
+	c->cpu_core_id = x86_local_apic->phys_pkg_id(c->initial_apicid,
 				ht_mask_width) & core_select_mask;
-	c->cpu_die_id = apic->phys_pkg_id(c->initial_apicid,
+	c->cpu_die_id = x86_local_apic->phys_pkg_id(c->initial_apicid,
 				core_plus_mask_width) & die_select_mask;
-	c->phys_proc_id = apic->phys_pkg_id(c->initial_apicid,
+	c->phys_proc_id = x86_local_apic->phys_pkg_id(c->initial_apicid,
 				die_plus_mask_width);
 	/*
 	 * Reinit the apicid, now that we have extended initial_apicid.
 	 */
-	c->apicid = apic->phys_pkg_id(c->initial_apicid, 0);
+	c->apicid = x86_local_apic->phys_pkg_id(c->initial_apicid, 0);
 
 	c->x86_max_cores = (core_level_siblings / smp_num_siblings);
 	__max_die_per_package = (die_level_siblings / core_level_siblings);
diff --git a/arch/x86/kernel/irq_work.c b/arch/x86/kernel/irq_work.c
index 890d4778cd35..950bd4d6de4a 100644
--- a/arch/x86/kernel/irq_work.c
+++ b/arch/x86/kernel/irq_work.c
@@ -28,7 +28,7 @@ void arch_irq_work_raise(void)
 	if (!arch_irq_work_has_interrupt())
 		return;
 
-	apic->send_IPI_self(IRQ_WORK_VECTOR);
+	x86_local_apic->send_IPI_self(IRQ_WORK_VECTOR);
 	apic_wait_icr_idle();
 }
 #endif
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 7f57ede3cb8e..ef2bf5ea3354 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -325,7 +325,7 @@ static notrace void kvm_guest_apic_eoi_write(u32 reg, u32 val)
 	 */
 	if (__test_and_clear_bit(KVM_PV_EOI_BIT, this_cpu_ptr(&kvm_apic_eoi)))
 		return;
-	apic->native_eoi_write(APIC_EOI, APIC_EOI_ACK);
+	x86_local_apic->native_eoi_write(APIC_EOI, APIC_EOI_ACK);
 }
 
 static void kvm_guest_cpu_init(void)
@@ -554,8 +554,8 @@ static void kvm_send_ipi_mask_allbutself(const struct cpumask *mask, int vector)
  */
 static void kvm_setup_pv_ipi(void)
 {
-	apic->send_IPI_mask = kvm_send_ipi_mask;
-	apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
+	x86_local_apic->send_IPI_mask = kvm_send_ipi_mask;
+	x86_local_apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
 	pr_info("setup PV IPIs\n");
 }
 
diff --git a/arch/x86/kernel/nmi_selftest.c b/arch/x86/kernel/nmi_selftest.c
index a1a96df3dff1..7f4c6db7abae 100644
--- a/arch/x86/kernel/nmi_selftest.c
+++ b/arch/x86/kernel/nmi_selftest.c
@@ -75,7 +75,7 @@ static void __init test_nmi_ipi(struct cpumask *mask)
 	/* sync above data before sending NMI */
 	wmb();
 
-	apic->send_IPI_mask(mask, NMI_VECTOR);
+	x86_local_apic->send_IPI_mask(mask, NMI_VECTOR);
 
 	/* Don't wait longer than a second */
 	timeout = USEC_PER_SEC;
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 9a94934fae5f..3ffece2d269e 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -756,7 +756,7 @@ wakeup_secondary_cpu_via_nmi(int apicid, unsigned long start_eip)
 	/* Target chip */
 	/* Boot on the stack */
 	/* Kick the second */
-	apic_icr_write(APIC_DM_NMI | apic->dest_logical, apicid);
+	apic_icr_write(APIC_DM_NMI | x86_local_apic->dest_logical, apicid);
 
 	pr_debug("Waiting for send to finish...\n");
 	send_status = safe_apic_wait_icr_idle();
@@ -983,7 +983,7 @@ wakeup_cpu_via_init_nmi(int cpu, unsigned long start_ip, int apicid,
 	if (!boot_error) {
 		enable_start_cpu0 = 1;
 		*cpu0_nmi_registered = 1;
-		if (apic->dest_logical == APIC_DEST_LOGICAL)
+		if (x86_local_apic->dest_logical == APIC_DEST_LOGICAL)
 			id = cpu0_logical_apicid;
 		else
 			id = apicid;
@@ -1080,8 +1080,9 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
 	 * Otherwise,
 	 * - Use an INIT boot APIC message for APs or NMI for BSP.
 	 */
-	if (apic->wakeup_secondary_cpu)
-		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
+	if (x86_local_apic->wakeup_secondary_cpu)
+		boot_error = x86_local_apic->wakeup_secondary_cpu(apicid,
+								   start_ip);
 	else
 		boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
 						     cpu0_nmi_registered);
@@ -1132,7 +1133,7 @@ static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
 
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
-	int apicid = apic->cpu_present_to_apicid(cpu);
+	int apicid = x86_local_apic->cpu_present_to_apicid(cpu);
 	int cpu0_nmi_registered = 0;
 	unsigned long flags;
 	int err, ret = 0;
@@ -1143,7 +1144,7 @@ int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 
 	if (apicid == BAD_APICID ||
 	    !physid_isset(apicid, phys_cpu_present_map) ||
-	    !apic->apic_id_valid(apicid)) {
+	    !x86_local_apic->apic_id_valid(apicid)) {
 		pr_err("%s: bad cpu %d\n", __func__, cpu);
 		return -EINVAL;
 	}
@@ -1280,7 +1281,7 @@ static void __init smp_sanity_check(void)
 	 * Should not be necessary because the MP table should list the boot
 	 * CPU too, but we do it for the sake of robustness anyway.
 	 */
-	if (!apic->check_phys_apicid_present(boot_cpu_physical_apicid)) {
+	if (!x86_local_apic->check_phys_apicid_present(boot_cpu_physical_apicid)) {
 		pr_notice("weird, boot CPU (#%d) not listed by the BIOS\n",
 			  boot_cpu_physical_apicid);
 		physid_set(hard_smp_processor_id(), phys_cpu_present_map);
@@ -1467,8 +1468,9 @@ __init void prefill_possible_map(void)
 			pr_warn("Boot CPU (id %d) not listed by BIOS\n", cpu);
 
 			/* Make sure boot cpu is enumerated */
-			if (apic->cpu_present_to_apicid(0) == BAD_APICID &&
-			    apic->apic_id_valid(apicid))
+			if (x86_local_apic->cpu_present_to_apicid(0) ==
+					BAD_APICID &&
+			    x86_local_apic->apic_id_valid(apicid))
 				generic_processor_info(apicid, boot_cpu_apic_version);
 		}
 
diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel/vsmp_64.c
index 796cfaa46bfa..03c0050f1f29 100644
--- a/arch/x86/kernel/vsmp_64.c
+++ b/arch/x86/kernel/vsmp_64.c
@@ -135,7 +135,7 @@ static int apicid_phys_pkg_id(int initial_apic_id, int index_msb)
 static void vsmp_apic_post_init(void)
 {
 	/* need to update phys_pkg_id */
-	apic->phys_pkg_id = apicid_phys_pkg_id;
+	x86_local_apic->phys_pkg_id = apicid_phys_pkg_id;
 }
 
 void __init vsmp_init(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index d14c94d0aff1..3ffbe5b339a4 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -3943,7 +3943,7 @@ static inline bool kvm_vcpu_trigger_posted_interrupt(struct kvm_vcpu *vcpu,
 		 * which has no effect is safe here.
 		 */
 
-		apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
+		x86_local_apic->send_IPI_mask(get_cpu_mask(vcpu->cpu), pi_vec);
 		return true;
 	}
 #endif
diff --git a/arch/x86/mm/srat.c b/arch/x86/mm/srat.c
index dac07e4f5834..9e02fd79dc7e 100644
--- a/arch/x86/mm/srat.c
+++ b/arch/x86/mm/srat.c
@@ -40,7 +40,7 @@ acpi_numa_x2apic_affinity_init(struct acpi_srat_x2apic_cpu_affinity *pa)
 		return;
 	pxm = pa->proximity_domain;
 	apic_id = pa->apic_id;
-	if (!apic->apic_id_valid(apic_id)) {
+	if (!x86_local_apic->apic_id_valid(apic_id)) {
 		printk(KERN_INFO "SRAT: PXM %u -> X2APIC 0x%04x ignored\n",
 			 pxm, apic_id);
 		return;
diff --git a/arch/x86/platform/uv/uv_irq.c b/arch/x86/platform/uv/uv_irq.c
index 18ca2261cc9a..33895ba927f2 100644
--- a/arch/x86/platform/uv/uv_irq.c
+++ b/arch/x86/platform/uv/uv_irq.c
@@ -35,8 +35,8 @@ static void uv_program_mmr(struct irq_cfg *cfg, struct uv_irq_2_mmr_pnode *info)
 	mmr_value = 0;
 	entry = (struct uv_IO_APIC_route_entry *)&mmr_value;
 	entry->vector		= cfg->vector;
-	entry->delivery_mode	= apic->irq_delivery_mode;
-	entry->dest_mode	= apic->irq_dest_mode;
+	entry->delivery_mode	= x86_local_apic->irq_delivery_mode;
+	entry->dest_mode	= x86_local_apic->irq_dest_mode;
 	entry->polarity		= 0;
 	entry->trigger		= 0;
 	entry->mask		= 0;
diff --git a/arch/x86/platform/uv/uv_nmi.c b/arch/x86/platform/uv/uv_nmi.c
index eafc530c8767..1acb85b0f806 100644
--- a/arch/x86/platform/uv/uv_nmi.c
+++ b/arch/x86/platform/uv/uv_nmi.c
@@ -597,7 +597,7 @@ static void uv_nmi_nr_cpus_ping(void)
 	for_each_cpu(cpu, uv_nmi_cpu_mask)
 		uv_cpu_nmi_per(cpu).pinging = 1;
 
-	apic->send_IPI_mask(uv_nmi_cpu_mask, APIC_DM_NMI);
+	x86_local_apic->send_IPI_mask(uv_nmi_cpu_mask, APIC_DM_NMI);
 }
 
 /* Clean up flags for CPU's that ignored both NMI and ping */
diff --git a/arch/x86/xen/apic.c b/arch/x86/xen/apic.c
index e82fd1910dae..a7c3f35ce32b 100644
--- a/arch/x86/xen/apic.c
+++ b/arch/x86/xen/apic.c
@@ -191,12 +191,12 @@ static struct apic xen_pv_apic = {
 
 static void __init xen_apic_check(void)
 {
-	if (apic == &xen_pv_apic)
+	if (x86_local_apic == &xen_pv_apic)
 		return;
 
-	pr_info("Switched APIC routing from %s to %s.\n", apic->name,
+	pr_info("Switched APIC routing from %s to %s.\n", x86_local_apic->name,
 		xen_pv_apic.name);
-	apic = &xen_pv_apic;
+	x86_local_apic = &xen_pv_apic;
 }
 void __init xen_init_apic(void)
 {
@@ -204,7 +204,7 @@ void __init xen_init_apic(void)
 	/* On PV guests the APIC CPUID bit is disabled so none of the
 	 * routines end up executing. */
 	if (!xen_initial_domain())
-		apic = &xen_pv_apic;
+		x86_local_apic = &xen_pv_apic;
 
 	x86_platform.apic_post_init = xen_apic_check;
 }
diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index b9cf59443843..01a8d1b7fc74 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -3671,8 +3671,10 @@ static void irq_remapping_prepare_irte(struct amd_ir_data *data,
 
 	data->irq_2_irte.devid = devid;
 	data->irq_2_irte.index = index + sub_handle;
-	iommu->irte_ops->prepare(data->entry, apic->irq_delivery_mode,
-				 apic->irq_dest_mode, irq_cfg->vector,
+	iommu->irte_ops->prepare(data->entry,
+				 x86_local_apic->irq_delivery_mode,
+				 x86_local_apic->irq_dest_mode,
+				 irq_cfg->vector,
 				 irq_cfg->dest_apicid, devid);
 
 	switch (info->type) {
@@ -3943,8 +3945,8 @@ int amd_iommu_deactivate_guest_mode(void *data)
 	entry->hi.val = 0;
 
 	entry->lo.fields_remap.valid       = valid;
-	entry->lo.fields_remap.dm          = apic->irq_dest_mode;
-	entry->lo.fields_remap.int_type    = apic->irq_delivery_mode;
+	entry->lo.fields_remap.dm          = x86_local_apic->irq_dest_mode;
+	entry->lo.fields_remap.int_type    = x86_local_apic->irq_delivery_mode;
 	entry->hi.fields.vector            = cfg->vector;
 	entry->lo.fields_remap.destination =
 				APICID_TO_IRTE_DEST_LO(cfg->dest_apicid);
diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
index 0cfce1d3b7bb..01ae7e674974 100644
--- a/drivers/iommu/intel/irq_remapping.c
+++ b/drivers/iommu/intel/irq_remapping.c
@@ -1113,7 +1113,7 @@ static void prepare_irte(struct irte *irte, int vector, unsigned int dest)
 	memset(irte, 0, sizeof(*irte));
 
 	irte->present = 1;
-	irte->dst_mode = apic->irq_dest_mode;
+	irte->dst_mode = x86_local_apic->irq_dest_mode;
 	/*
 	 * Trigger mode in the IRTE will always be edge, and for IO-APIC, the
 	 * actual level or edge trigger will be setup in the IO-APIC
@@ -1122,7 +1122,7 @@ static void prepare_irte(struct irte *irte, int vector, unsigned int dest)
 	 * irq migration in the presence of interrupt-remapping.
 	*/
 	irte->trigger_mode = 0;
-	irte->dlvry_mode = apic->irq_delivery_mode;
+	irte->dlvry_mode = x86_local_apic->irq_delivery_mode;
 	irte->vector = vector;
 	irte->dest_id = IRTE_DEST(dest);
 	irte->redir_hint = 1;
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:25:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:25:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15899.39139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVL1-0000iM-6T; Fri, 30 Oct 2020 14:25:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15899.39139; Fri, 30 Oct 2020 14:25:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVL1-0000iF-3O; Fri, 30 Oct 2020 14:25:07 +0000
Received: by outflank-mailman (input) for mailman id 15899;
 Fri, 30 Oct 2020 14:25:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYVKz-0000i7-IQ
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:25:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 48b2fdfd-684a-4f63-a173-17789522b443;
 Fri, 30 Oct 2020 14:25:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C2C50AE1A;
 Fri, 30 Oct 2020 14:25:02 +0000 (UTC)
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kYVKz-0000i7-IQ
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:25:05 +0000
X-Inumbo-ID: 48b2fdfd-684a-4f63-a173-17789522b443
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id 48b2fdfd-684a-4f63-a173-17789522b443;
	Fri, 30 Oct 2020 14:25:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604067902;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZRO626gkCC58/+xHp5IDaonkexTsr38HffOaqWn+6/w=;
	b=d6qra57GhXRqkpVwBrQobeF5Y0zM4Ap5B7NJ7WU3E02GRIpTPb1IuJwNvOim9rda2EYzhd
	VksYmMAgpp51M6dXHfdIZ5vQlPEjv56wkYUf4YsYMdcrAetywYWxe+qZ7R6wLWYgv1gB7M
	HljXnBlOoSrUKHQN1Ojc3PnUs9B8J7k=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id C2C50AE1A;
	Fri, 30 Oct 2020 14:25:02 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 2/2] xen/rwlock: add check_lock() handling to rwlocks
Date: Fri, 30 Oct 2020 15:25:00 +0100
Message-Id: <20201030142500.5464-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201030142500.5464-1-jgross@suse.com>
References: <20201030142500.5464-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Checking whether a lock is consistently used regarding interrupts on
or off is beneficial for rwlocks, too.

So add check_lock() calls to rwlock functions. For this purpose make
check_lock() globally accessible.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/spinlock.c      |  3 +--
 xen/include/xen/rwlock.h   | 14 ++++++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 54f0c55dc2..acb3f86339 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug, bool try)
+void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -108,7 +108,6 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index 427664037a..c302644705 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -65,7 +65,11 @@ static inline int _read_trylock(rwlock_t *lock)
          * arch_lock_acquire_barrier().
          */
         if ( likely(_can_read_lock(cnts)) )
+        {
+            check_lock(&lock->lock.debug, true);
             return 1;
+        }
+
         atomic_sub(_QR_BIAS, &lock->cnts);
     }
     preempt_enable();
@@ -87,7 +91,10 @@ static inline void _read_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( likely(_can_read_lock(cnts)) )
+    {
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     /* The slowpath will decrement the reader count, if necessary. */
     queue_read_lock_slowpath(lock);
@@ -162,7 +169,10 @@ static inline void _write_lock(rwlock_t *lock)
      * arch_lock_acquire_barrier().
      */
     if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
+    {
+        check_lock(&lock->lock.debug, false);
         return;
+    }
 
     queue_write_lock_slowpath(lock);
     /*
@@ -205,6 +215,8 @@ static inline int _write_trylock(rwlock_t *lock)
         return 0;
     }
 
+    check_lock(&lock->lock.debug, true);
+
     /*
      * atomic_cmpxchg() is a full barrier so no need for an
      * arch_lock_acquire_barrier().
@@ -328,6 +340,8 @@ static inline void _percpu_read_lock(percpu_rwlock_t **per_cpudata,
         /* Drop the read lock because we don't need it anymore. */
         read_unlock(&percpu_rwlock->rwlock);
     }
+    else
+        check_lock(&percpu_rwlock->rwlock.lock.debug, false);
 }
 
 static inline void _percpu_read_unlock(percpu_rwlock_t **per_cpudata,
diff --git a/xen/include/xen/spinlock.h b/xen/include/xen/spinlock.h
index ca13b600a0..9fa4e600c1 100644
--- a/xen/include/xen/spinlock.h
+++ b/xen/include/xen/spinlock.h
@@ -21,11 +21,13 @@ union lock_debug {
     };
 };
 #define _LOCK_DEBUG { LOCK_DEBUG_INITVAL }
+void check_lock(union lock_debug *debug, bool try);
 void spin_debug_enable(void);
 void spin_debug_disable(void);
 #else
 union lock_debug { };
 #define _LOCK_DEBUG { }
+#define check_lock(l, t) ((void)0)
 #define spin_debug_enable() ((void)0)
 #define spin_debug_disable() ((void)0)
 #endif
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:25:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:25:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15900.39151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVL4-0000km-FG; Fri, 30 Oct 2020 14:25:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15900.39151; Fri, 30 Oct 2020 14:25:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVL4-0000kc-CC; Fri, 30 Oct 2020 14:25:10 +0000
Received: by outflank-mailman (input) for mailman id 15900;
 Fri, 30 Oct 2020 14:25:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYVL2-0000hB-Vc
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:25:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24413e0e-f2b8-46c8-9fe3-74a28d1bb41c;
 Fri, 30 Oct 2020 14:25:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8B54DABD1;
 Fri, 30 Oct 2020 14:25:02 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kYVL2-0000hB-Vc
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:25:09 +0000
X-Inumbo-ID: 24413e0e-f2b8-46c8-9fe3-74a28d1bb41c
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 24413e0e-f2b8-46c8-9fe3-74a28d1bb41c;
	Fri, 30 Oct 2020 14:25:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604067902;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5gKiiKos/dApKc2beKCGTAc1Rx9gih7YAhJwI1zCh3E=;
	b=d/9UWwuOw44S/0Dnua9hZLPFL7oLqVNAb6BTZa6QwkYJ0D1sZjhA5BcwZuVfRjyVjmB+H4
	tfVeMSdkw8Lb6za77huWQFmYYTigKHXzx6l6Q1MbhWdBUn9kth+zK6IT39V9tzca7ZRbgD
	W0HV+jHxh7DD1JXdWX98sKXFxX36Kck=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 8B54DABD1;
	Fri, 30 Oct 2020 14:25:02 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/2] xen/spinlocks: spin_trylock with interrupts off is always fine
Date: Fri, 30 Oct 2020 15:24:59 +0100
Message-Id: <20201030142500.5464-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20201030142500.5464-1-jgross@suse.com>
References: <20201030142500.5464-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Even if a spinlock was taken with interrupts on before calling
spin_trylock() with interrupts off is fine, as it can't block.

Add a bool parameter "try" to check_lock() for handling this case.

Remove the call of check_lock() from _spin_is_locked(), as it really
serves no purpose and it can even lead to false crashes, e.g. when
a lock was taken correctly with interrupts enabled and the call of
_spin_is_locked() happened with interrupts off. In case the lock is
taken with wrong interrupt flags this will be catched when taking
the lock.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/spinlock.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index ce3106e2d3..54f0c55dc2 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -13,7 +13,7 @@
 
 static atomic_t spin_debug __read_mostly = ATOMIC_INIT(0);
 
-static void check_lock(union lock_debug *debug)
+static void check_lock(union lock_debug *debug, bool try)
 {
     bool irq_safe = !local_irq_is_enabled();
 
@@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
      * 
      * To guard against this subtle bug we latch the IRQ safety of every
      * spinlock in the system, on first use.
+     *
+     * A spin_trylock() or spin_is_locked() with interrupts off is always
+     * fine, as those can't block and above deadlock scenario doesn't apply.
      */
+    if ( try && irq_safe )
+        return;
+
     if ( unlikely(debug->irq_safe != irq_safe) )
     {
         union lock_debug seen, new = { 0 };
@@ -102,7 +108,7 @@ void spin_debug_disable(void)
 
 #else /* CONFIG_DEBUG_LOCKS */
 
-#define check_lock(l) ((void)0)
+#define check_lock(l, t) ((void)0)
 #define check_barrier(l) ((void)0)
 #define got_lock(l) ((void)0)
 #define rel_lock(l) ((void)0)
@@ -159,7 +165,7 @@ void inline _spin_lock_cb(spinlock_t *lock, void (*cb)(void *), void *data)
     spinlock_tickets_t tickets = SPINLOCK_TICKET_INC;
     LOCK_PROFILE_VAR;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, false);
     preempt_disable();
     tickets.head_tail = arch_fetch_and_add(&lock->tickets.head_tail,
                                            tickets.head_tail);
@@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
 
 int _spin_is_locked(spinlock_t *lock)
 {
-    check_lock(&lock->debug);
-
     /*
      * Recursive locks may be locked by another CPU, yet we return
      * "false" here, making this function suitable only for use in
@@ -236,7 +240,7 @@ int _spin_trylock(spinlock_t *lock)
 {
     spinlock_tickets_t old, new;
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
     old = observe_lock(&lock->tickets);
     if ( old.head != old.tail )
         return 0;
@@ -294,7 +298,7 @@ int _spin_trylock_recursive(spinlock_t *lock)
     BUILD_BUG_ON(NR_CPUS > SPINLOCK_NO_CPU);
     BUILD_BUG_ON(SPINLOCK_RECURSE_BITS < 3);
 
-    check_lock(&lock->debug);
+    check_lock(&lock->debug, true);
 
     if ( likely(lock->recurse_cpu != cpu) )
     {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:25:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:25:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15898.39127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVKy-0000hN-V0; Fri, 30 Oct 2020 14:25:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15898.39127; Fri, 30 Oct 2020 14:25:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVKy-0000hG-Ri; Fri, 30 Oct 2020 14:25:04 +0000
Received: by outflank-mailman (input) for mailman id 15898;
 Fri, 30 Oct 2020 14:25:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYVKy-0000hB-4P
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:25:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5c82dc8-eeee-41ff-9be8-890cc336cf59;
 Fri, 30 Oct 2020 14:25:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 62FFFAE78;
 Fri, 30 Oct 2020 14:25:02 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
	id 1kYVKy-0000hB-4P
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:25:04 +0000
X-Inumbo-ID: b5c82dc8-eeee-41ff-9be8-890cc336cf59
Received: from mx2.suse.de (unknown [195.135.220.15])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id b5c82dc8-eeee-41ff-9be8-890cc336cf59;
	Fri, 30 Oct 2020 14:25:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604067902;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding;
	bh=79oOEVUMfaSGePfVVjQlCwA8HSZkhczazYUwIzRljN4=;
	b=mRVMAhOhFQy7OjjyW1euBEyXVzH1h8VS3qmnoKp5KuxIVqT0r7X7kHNHz7nRcIfj0mn4QG
	o8jF34/zz7dghuCbI3qC80gUCXhu+CEWvNji8bsqLBmpTkFebkrcu7V+hiuXcORs9RIl+Q
	NB09pyGkpf6r0OrkWBkoFfOoQ0gMzS4=
Received: from relay2.suse.de (unknown [195.135.221.27])
	by mx2.suse.de (Postfix) with ESMTP id 62FFFAE78;
	Fri, 30 Oct 2020 14:25:02 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 0/2] xen/locking: fix and enhance lock debugging
Date: Fri, 30 Oct 2020 15:24:58 +0100
Message-Id: <20201030142500.5464-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This small series fixes two issues with spinlock debug code and adds
lock debug code to rwlocks in order to catch IRQ violations.

Juergen Gross (2):
  xen/spinlocks: spin_trylock with interrupts off is always fine
  xen/rwlock: add check_lock() handling to rwlocks

 xen/common/spinlock.c      | 17 ++++++++++-------
 xen/include/xen/rwlock.h   | 14 ++++++++++++++
 xen/include/xen/spinlock.h |  2 ++
 3 files changed, 26 insertions(+), 7 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:47:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:47:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15925.39163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVgd-0002jf-Bp; Fri, 30 Oct 2020 14:47:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15925.39163; Fri, 30 Oct 2020 14:47:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVgd-0002jY-8M; Fri, 30 Oct 2020 14:47:27 +0000
Received: by outflank-mailman (input) for mailman id 15925;
 Fri, 30 Oct 2020 14:47:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q/70=EF=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1kYVgb-0002jS-Lm
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:47:25 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ddd8f756-6e23-43a8-b918-45850768653f;
 Fri, 30 Oct 2020 14:47:24 +0000 (UTC)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=q/70=EF=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
	id 1kYVgb-0002jS-Lm
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:47:25 +0000
X-Inumbo-ID: ddd8f756-6e23-43a8-b918-45850768653f
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id ddd8f756-6e23-43a8-b918-45850768653f;
	Fri, 30 Oct 2020 14:47:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1604069244;
  h=from:to:cc:subject:date:message-id:content-id:
   content-transfer-encoding:mime-version;
  bh=LRMNgs/eXbppdKHgQhDOZFDGxQUAb7X6YI6ArOMqyuo=;
  b=RSrHLWSSVe0R4h1BtcSv63++vn4uJBrvMd27mSLXvrtp/NRHZ33gpXEx
   E2zvuVSsd1Gow7+Z/+e8tvxv01RkjA2LNZhPYaAj2bXkGIPCxZZLAkhdm
   Sk+IK5IRHfpoZdRcA1oFgr4tYiZZZWFR6+pva8EnsysOfJByCzfH0rjNg
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: MqvrxFv9HQgo/ktAZQGydIerdTH0Scwf/UXHqINhLKSEW3tPl9A5dAiLCWUqMDgkSvfgB8fuGN
 Yo6FTgOXf71IE9P0sinTvzZdppH0U8bHdg3ZkSx6Hq3wMXVmv/laEuByvWyz39OyOjYRNmjyZ9
 FTuBEbIeVflhM/BnSoOfjM2wGdIWdkvAEzyWhTTJZNnxq6qfh47ttoj2BYQFgmt1iY1h0dGOsa
 KUR/KuddjTZ/YGyVauFFqlyVqh4RdbFQMT2FQm2rIwv13sP9ahG1NofCjbX2CFlByrGHIMsaZJ
 5Bo=
X-SBRS: None
X-MesageID: 31256108
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.77,433,1596513600"; 
   d="scan'208";a="31256108"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DRNuzlwrOINxiJ0+HDT18KgnYtKjGM8DAvaFstbE5Dc8VM/l06iveBBL9dmhdOjRXnq5OjOjUxNavf0B6qfsHRZyz7gOFjYs593z1squeL6KK523dgXqf3ImzCdn59uZ+30PxTIRFuzkbKtgwnvlPwAMLm73Lk1YXZhAlT0MIRyyIpFRtpNLqg+AXQA+W6e3LLOOOsGMsAj6gU8dawhKM/3UsmsXQWJzgsWLA78+WSwjYth/B3waodHyXZlmJWrfre6gLu+FtLgmFLc4z2UFSAei7/WMpedTsg9/ifsv3BlU+24HRSJdTUgdMHSp2zRF7OOsz269u+snADTLBmmTiA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LRMNgs/eXbppdKHgQhDOZFDGxQUAb7X6YI6ArOMqyuo=;
 b=BzxKGZzQ002bQqCEvrF1ojaSN1YDhH9uY9i/5UBb5BXavi0cjSq/ujDmaJC+uFBb7AqiulRhfF0PL5IV47Ahl3qx8qOywiZ22P5A9RNVXqgdal3IA0tskfdSsoujb2MsgQM7Q1JbQCDvwVut58wW27D77gegJNFz5Csera294UqxvSKTuuY2rRV+RSc3i5mHovG1EoAy4gMOMf2AClpz0E9ZzF4OEXI1JYCxN7sQxK+InB09foqAufW+JdEMjqi5DKtjPs1NqUZwT4rKc3lkvX3+oZwNCZHQm4aZjAuhLQmUlCMYlX8YUQOPSb4we8TGRC2tSUq8uNdxfRTVpiWn3Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LRMNgs/eXbppdKHgQhDOZFDGxQUAb7X6YI6ArOMqyuo=;
 b=cwyvX0PhtBzWC8JOQ0+MekpW/x2vdx5lXdzQGNbha/TzwSz39K/TdMXlDaJvItrvP0Ds382CKCQPLKqbK3nBmNhmfRShTVuFRjd5RVcOgQ1+H5eOwX7mblNRBCLu24OOxLMXJYE9foFx8HzAWdjpR+KEuoXMabvL8OSvOCF3shQ=
From: George Dunlap <George.Dunlap@citrix.com>
To: "open list:X86" <xen-devel@lists.xenproject.org>
CC: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, "intel-xen@intel.com"
	<intel-xen@intel.com>, "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>,
	Roger Pau Monne <roger.pau@citrix.com>, Sergey Dyasli
	<sergey.dyasli@citrix.com>, Christopher Clark
	<christopher.w.clark@gmail.com>, Rich Persaud <persaur@gmail.com>, "Kevin
 Pearson" <kevin.pearson@ortmanconsulting.com>, Juergen Gross
	<jgross@suse.com>, =?utf-8?B?UGF1bCBEdXJyYW50wqA=?= <pdurrant@amazon.com>,
	"Ji, John" <john.ji@intel.com>, "edgar.iglesias@xilinx.com"
	<edgar.iglesias@xilinx.com>, "robin.randhawa@arm.com"
	<robin.randhawa@arm.com>, Artem Mygaiev <Artem_Mygaiev@epam.com>, "Matt
 Spencer" <Matt.Spencer@arm.com>, Stewart Hildebrand
	<Stewart.Hildebrand@dornerworks.com>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, "mirela.simonovic@aggios.com"
	<mirela.simonovic@aggios.com>, Jarvis Roach <Jarvis.Roach@dornerworks.com>,
	Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Ian Jackson
	<Ian.Jackson@citrix.com>, Rian Quinn <rianquinn@gmail.com>, "Daniel P. Smith"
	<dpsmith@apertussolutions.com>,
	=?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLRG91ZyBHb2xkc3RlaW4=?=
	<cardoe@cardoe.com>, George Dunlap <George.Dunlap@citrix.com>, "David
 Woodhouse" <dwmw@amazon.co.uk>,
	=?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQW1pdCBTaGFo?= <amit@infradead.org>,
	=?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLVmFyYWQgR2F1dGFt?=
	<varadgautam@gmail.com>, Brian Woods <brian.woods@xilinx.com>, Robert Townley
	<rob.townley@gmail.com>, Bobby Eshleman <bobby.eshleman@gmail.com>,
	=?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQ29yZXkgTWlueWFyZA==?=
	<cminyard@mvista.com>, Olivier Lambert <olivier.lambert@vates.fr>, "Andrew
 Cooper" <Andrew.Cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: [ANNOUNCE] Call for agenda items for 5 November 2020 Community Call @
 16:00 UTC 
Thread-Topic: [ANNOUNCE] Call for agenda items for 5 November 2020 Community
 Call @ 16:00 UTC 
Thread-Index: AQHWrsuQhaTNtF6CbkCG4+SMRWU1EQ==
Date: Fri, 30 Oct 2020 14:47:17 +0000
Message-ID: <948CC2D7-B53D-48CD-879B-6C0DDE0B1EE2@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3608.120.23.2.4)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 8604ef73-571d-4ed6-e389-08d87ce2b2d1
x-ms-traffictypediagnostic: BY5PR03MB5187:
x-ld-processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BY5PR03MB51878C436EA19F7627B316DD99150@BY5PR03MB5187.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: bkldxx+QYEdJcmc+OeuIFhhOp0PfCjInG09KI8zdxv1TbMRkbfUGSe4s8gpBxuasJTnQ+fZayKqk7fRHCili7hRSsm6bPxKPYnyMiIQfVDfX/K4y/rXbZ9BUe6/sh1YeYfRmFr0E7nvk4Q7xLyyL+4KIduMfTOH2tA7HHcNZ/fPnnvTv09YtbsgMEC7DPdOzglRLQ71Z5BDedzf9kb6muutLF+kwHmF6DUJLtxHGMtt8tU3/iQDthtJgF04dWuKYJjCVuFK7h8w6BFsB0cWiq87LwrUCst4ec2EujjNhVlH2TLmF3l3yPLjNpLbM4Xjfn0fUpnahn0VhRCrk15PwRWfK/Vt3ubnKWDmvLz0qQ2iqvU/uW2DaSg3/u62EFtg6iapWAWrWO1IR8mrQJ3a8WQ==
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB4229.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(136003)(366004)(346002)(39860400002)(396003)(5660300002)(26005)(7416002)(86362001)(54906003)(36756003)(7406005)(83380400001)(316002)(66476007)(66556008)(64756008)(66446008)(76116006)(91956017)(71200400001)(66946007)(33656002)(6486002)(4743002)(8676002)(2616005)(55236004)(966005)(186003)(6916009)(6512007)(4326008)(478600001)(6506007)(2906002)(8936002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: DCn116mn4ZHc6sWwPv30ed5UVcEH+BJeajaxxf59p1gTXZVI5dU6MqlKdx/UQejpfGOvwaY+bXPwOPg6wuCQJ0JD+kN46g57tXAjWqw9rzP+0k9LYVW3QhcI90EFPmd+KCbimn/wxvE7SA8QdVfW3QEAC0MOsgJk1teSPk1EURne0+MvDw8zRpqdzOGdPkdRn5a67ATKjym59hWHavKXZyXXa3ob8BoXwzTVWCxR8oymEChzDAmyyFz6cNt9n1V+X7zEII+IMFpe0EAs3XTZVcyRkuHZ7sx25+bGiPv7GUvEfyciqgJMXuGA3GlfGam3nkpLx0On52SesDlvo8Esg8cJ+SzR5TX6J1TohOPnB6sM+mRr9XpSBL2o90wLYFJlLVCoN9zIHL11ptgkcw0FHU8IPBaB8yuXUGxbgVdqMs2gsX9Iior2mWPerP4zSNVVDhcmgDVrDPXP4o8KqqniK3AsVGYP2DIavXmQcMG/wkV5QctuTBkQtrOWTaPoS1twrLwcOqdZ01RofHO0n8Ts2lsfzBibn0K0oUngBNu8kLNQ4ShhOAcwPHlp+HV771G3ugN8hXdqSf3csJnEFswOgqlmCTbjKqniXq5cB0C/1CiNnQRDRcuEmEDmY6Q99sUztUBCVB+et/C5NBqE0EpVvg==
Content-Type: text/plain; charset="utf-8"
Content-ID: <804A991DAC219848AFDD510AD6646DF7@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB4229.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8604ef73-571d-4ed6-e389-08d87ce2b2d1
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Oct 2020 14:47:17.6158
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 2yHHHfUEvJT5wPig/j9Ho+iFrf0FSvdUUW2oNDjEgsOmhmMWDXbId2GW2PPz2Ou6G2rr/t35KdRsoBlZAF/2ka2C/IexvxqvxNshWbSNAvc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5187
X-OriginatorOrg: citrix.com

Hi all,

The proposed agenda is in https://cryptpad.fr/pad/#/2/pad/edit/k-0Aj+Sxb5SliLWrFRBwx49V/ and you can edit it to add items.  Alternatively, you can reply to this mail directly.

Agenda items are appreciated a few days before the call: please put your name beside items if you edit the document.

Note the following administrative conventions for the call:
* Unless agreed otherwise in the previous meeting, the call is on the 1st Thursday of each month at 1600 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a provisional agenda

* To allow time to switch between meetings, we'll plan on starting the agenda at 16:05 sharp.  Aim to join by 16:03 if possible to allocate time to sort out technical difficulties &c

* If you want to be CC'ed, please add or remove yourself from the sign-up sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George



== Dial-in Information ==
## Meeting time
16:00 - 17:00 UTC
Further international meeting times: https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=11&day=5&hour=16&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179


## Dial in details
Web: https://www.gotomeet.me/GeorgeDunlap

You can also dial in using your phone.
Access Code: 168-682-109

China (Toll Free): 4008 811084
Germany: +49 692 5736 7317
Poland (Toll Free): 00 800 1124759
Ukraine (Toll Free): 0 800 50 1733
United Kingdom: +44 330 221 0088
United States: +1 (571) 317-3129
Spain: +34 932 75 2004


More phone numbers
Australia: +61 2 9087 3604
Austria: +43 7 2081 5427
Argentina (Toll Free): 0 800 444 3375
Bahrain (Toll Free): 800 81 111
Belarus (Toll Free): 8 820 0011 0400
Belgium: +32 28 93 7018
Brazil (Toll Free): 0 800 047 4906
Bulgaria (Toll Free): 00800 120 4417
Canada: +1 (647) 497-9391
Chile (Toll Free): 800 395 150
Colombia (Toll Free): 01 800 518 4483
Czech Republic (Toll Free): 800 500448
Denmark: +45 32 72 03 82
Finland: +358 923 17 0568
France: +33 170 950 594
Greece (Toll Free): 00 800 4414 3838
Hong Kong (Toll Free): 30713169906-886-965
Hungary (Toll Free): (06) 80 986 255
Iceland (Toll Free): 800 7204
India (Toll Free): 18002669272
Indonesia (Toll Free): 007 803 020 5375
Ireland: +353 15 360 728
Israel (Toll Free): 1 809 454 830
Italy: +39 0 247 92 13 01
Japan (Toll Free): 0 120 663 800
Korea, Republic of (Toll Free): 00798 14 207 4914
Luxembourg (Toll Free): 800 85158
Malaysia (Toll Free): 1 800 81 6854
Mexico (Toll Free): 01 800 522 1133
Netherlands: +31 207 941 377
New Zealand: +64 9 280 6302
Norway: +47 21 93 37 51
Panama (Toll Free): 00 800 226 7928
Peru (Toll Free): 0 800 77023
Philippines (Toll Free): 1 800 1110 1661
Portugal (Toll Free): 800 819 575
Romania (Toll Free): 0 800 410 029
Russian Federation (Toll Free): 8 800 100 6203
Saudi Arabia (Toll Free): 800 844 3633
Singapore (Toll Free): 18007231323
South Africa (Toll Free): 0 800 555 447
Sweden: +46 853 527 827
Switzerland: +41 225 4599 78
Taiwan (Toll Free): 0 800 666 854
Thailand (Toll Free): 001 800 011 023
Turkey (Toll Free): 00 800 4488 23683
United Arab Emirates (Toll Free): 800 044 40439
Uruguay (Toll Free): 0004 019 1018
Viet Nam (Toll Free): 122 80 481

First GoToMeeting? Let's do a quick system check:

https://link.gotomeeting.com/system-check


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:47:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:47:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15926.39175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVgk-0002lt-KT; Fri, 30 Oct 2020 14:47:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15926.39175; Fri, 30 Oct 2020 14:47:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVgk-0002lm-HA; Fri, 30 Oct 2020 14:47:34 +0000
Received: by outflank-mailman (input) for mailman id 15926;
 Fri, 30 Oct 2020 14:47:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g2dz=EF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1kYVgj-0002lW-Aj
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:47:33 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.84]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9bc1255-dac4-4970-98e0-124dc979b4b1;
 Fri, 30 Oct 2020 14:47:30 +0000 (UTC)
Received: from AM6PR05CA0031.eurprd05.prod.outlook.com (2603:10a6:20b:2e::44)
 by AM0PR08MB4467.eurprd08.prod.outlook.com (2603:10a6:208:138::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.22; Fri, 30 Oct
 2020 14:47:28 +0000
Received: from VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::79) by AM6PR05CA0031.outlook.office365.com
 (2603:10a6:20b:2e::44) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.19 via Frontend
 Transport; Fri, 30 Oct 2020 14:47:28 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT033.mail.protection.outlook.com (10.152.18.147) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 14:47:27 +0000
Received: ("Tessian outbound a64c3afb6fc9:v64");
 Fri, 30 Oct 2020 14:47:27 +0000
Received: from ddbd8eb41c98.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5850C7D8-B858-40E0-91DD-35641FB6C47B.1; 
 Fri, 30 Oct 2020 14:47:22 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id ddbd8eb41c98.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 30 Oct 2020 14:47:22 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com (2603:10a6:20b:4e::31)
 by AM6PR08MB3797.eurprd08.prod.outlook.com (2603:10a6:20b:88::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 14:47:21 +0000
Received: from AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a]) by AM6PR08MB3496.eurprd08.prod.outlook.com
 ([fe80::dc5:9a53:a6b1:6a5a%4]) with mapi id 15.20.3499.027; Fri, 30 Oct 2020
 14:47:20 +0000
X-Inumbo-ID: d9bc1255-dac4-4970-98e0-124dc979b4b1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=n9mY1Zav2n5R2cchI3Vx35ehTUmsYD9/sV52agvUnWM=;
 b=nSj3uQJnhlKZuCjnaqqZl12eiaVhaTBjs1SYcDRq/8fgJaIPdE2qv5SoXBnWPj+p3RevvQQ9qo5krvftByf35IxOAGPQqAbJ06CJw0Dy5clhWCFa+wjk7E2lQMqcsUspl9RvBVZPrp+VdOkdexLs6vyBwS1VlKrAFgarjHKLK6k=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 22ea3b50ea3e25fe
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RErhpO5DJN+jrYiho6VSpqb+6FlbgknKzaVp7DVIsqpkqtcmYRDnXJC1fGdd9Lrv+50AXlK91HjBYH8/WBEWV/mDb+C1LcSBpn5qhHaXa3sDLLoSvCRnZFqRmY72zFL2e/ShbxyIdGE+yXWF6kd5lOA9HoIcunjX2u55ELNYzDk4r9BsjyI8HGpyGWY8T+3WcIIbpI+7Iy5y46dSy8ngOMFbUwLTPgqCtqsgV9Yi3XgscPncMSxhyz/8lZQdXhVyciPGgfNiIeTfx/K3CZR5cHGiBFFDmKlcyP/vtVDGIodYael+Kj/ghZylzvnlBN9wdoT/npwxY6IQrllV5FcwQw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=n9mY1Zav2n5R2cchI3Vx35ehTUmsYD9/sV52agvUnWM=;
 b=oHNJFpCd5K1yPsp4FyRPDRXRMPppmxsH3ljNLjKIMp72TUGRWSH+FY14kRfDhxvM0fYCYA4+mbbBNaX+7VIuBskT7eY32nLLMOzZe+6q8MWqT3IHf2PtoMQhEs1IYf8tqslQTF7lD0oLBM/+txgFJ32ezi98ahXAKx7RCO+ll9PEnnIeyszxBvaXzAbOb8Xogl+BU4Djvt92l0sAomWiz5p5lyf5nRVj07dwFFYnuFXiweAXsIx0iI3ynjvZ2PaqVl5G1ZMYw8l2+VrkzpgEKmo+9K3AwHIIvzLPPJsJwckDB74/V57J+QvgXli6i9y6dRDMPGC6tRKt0b87YKUFFw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWoVvrkmJwOYERdUOadvid1OghFamwEIWAgABD5gA=
Date: Fri, 30 Oct 2020 14:47:20 +0000
Message-ID: <2AB3A125-D530-4627-A877-EC2BCDCD63DC@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
In-Reply-To: <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [86.26.38.125]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: e063c534-46d2-4a5e-a7d7-08d87ce2b8f0
x-ms-traffictypediagnostic: AM6PR08MB3797:|AM0PR08MB4467:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB44678934164A84672E557AE9FC150@AM0PR08MB4467.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 xJTPK382FQOaGmowDY1eAMSnl+bYG/jz7q1Kk/LauA52g1HCQr0v8cjPLR6pkb/gBkTQ9+F+VfCFVKbP32dBtZRNZJeSbhgCAbii4BFozoOksQwhRufpfHDyrTUTClR0Ntfa+SHYagbiLQ97Snbfq/ZJdSk+Oe9YO1OdAb79z8ho72J2g/MYnwuk9IQeCTBnSyBJU9h/Cax+umcm9GqE8+AfgD12wocCu9OKRXzLbF61nJgnmcsH3u29D0DZpv87DFu2PJUBDh1jgl31R4j+wkwAHM9171OqE3IyMKPZPveYqxqg5CG2dqeq5ety/ujwqkWPg1sjTEPo62+bG/e1FZlnRRY8Fc24oM9rq+y8oGRiJGBvhhZz4w3dxBfUUADOuVSbRYmPQSE5UZY5aL6MrQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3496.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(346002)(396003)(136003)(366004)(376002)(83380400001)(33656002)(86362001)(55236004)(478600001)(91956017)(53546011)(4326008)(6512007)(8676002)(6506007)(76116006)(8936002)(66446008)(2906002)(66946007)(36756003)(5660300002)(64756008)(66556008)(66476007)(26005)(6486002)(2616005)(6916009)(316002)(966005)(71200400001)(54906003)(186003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 5vgF3TN4+zvBpUi/9zh1CVraF2TPUv/EIREdzuGsRC7AlYNtRzDTYmSgfWGmK5s3rN2RkjvBtHvb81DJnUswR84wpx27IafTaQp43zd2o2lHCG5kZ6MY1CaPRvKG20a1rmajEt73XX1dkUeK0JMA/2h7MDjSf9vIG29hrDNLWq4yXNNqJaf5ZWfq/FdCLeNjdkifdeYImEB6IifANrQRzgMbXWU357wNTyj9ghD4Vfdgw72euPsaBtKM+1qW/nQg7554qLAZuTRBzHckHSWPEKwnOvfqnn1TuY1fGfEvmaKkTU7Tzv8Vl/9CudvSQT9IucfWvBmYH1nwOFRQx2kAUOUF8wqxVKjz/xEkmxhcQ3lpsWos6GtFKKv5nJpps/z1QA3ii08vqKHZfDDXLaXoE2KFs9pPuui7Pg1T06jIQJkupdaq4hBK6M+c6c3g6hR+xc+upiAKWAsbbMfEflll1anEnkiw1fO3YBZyWYu1eTHBBNA3HEOM/jBcwQkOAT/9MwGvP7aQbeC3ySVKBAzZ/YK8StXwbQcv15+jES/ToVncSYsPkzDgRicevgSttIKe+WSoDdVMHRCppIcWA6D6rW297uiO1BvcMoDSYuJ4NZiX242DCP6DdoZ9uTXEemue54SoSBBXIVijSu407L9Aow==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4DFEDFCFC1C5084683CD17C33353DC3E@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3797
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0ad570fa-030b-459c-a29c-08d87ce2b4bc
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CldEWsYM/rGMSbrZIH21ja1z+R94JaLZ4zK0AX1v1r6xEYoF8Bpstupj/Q3rBT+O3cahMjBngAS0h6lRkAwCcZh01gW2Hef+eaJQpDdlb+/EZ9t0PNaxBF4hGF1TJadsWyerveQuIvsIxov0jVXaSB7NaBHo4b7gyuyUbfDMA0r7BZoUrlLuPJMi08ZDDlWa5Aw0fkcT33Eag0xfhhSWr89zedamsN8Z30vEkpUxUkpYd7SYs7M6xvNsZ2161y81pNKBPTR2o7ZzPdKV8OdyGqgFTVmKMQ0j/YYw7JfWdt2mTgoAday728JnMxS6gN6Tu11g+25uIRwAu63vkAxxoJhzy+EH7OwhfSrPwzrbwaXy50+cXrU/UpX96Tce4hfAkpyq8hbZhW/UW5GTW3LDFG/5c4mPspImhTUnNJKP3xe0Q+iRFhDjVGr8edanbJ/x/nkhpeK7fGBalRXMKL8T8SRsOpSPX2piiSLnHNgkcuE=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(376002)(346002)(396003)(39850400004)(46966005)(6486002)(53546011)(107886003)(5660300002)(55236004)(47076004)(356005)(6862004)(81166007)(6512007)(82740400003)(4326008)(36906005)(478600001)(336012)(186003)(54906003)(6506007)(966005)(33656002)(316002)(36756003)(8676002)(86362001)(2616005)(83380400001)(70206006)(8936002)(82310400003)(70586007)(2906002)(26005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 14:47:27.8498
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e063c534-46d2-4a5e-a7d7-08d87ce2b8f0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4467

Hello Oleksandr,

> On 30 Oct 2020, at 10:44 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>
> Hi, Rahul!
>
> On 10/20/20 6:25 PM, Rahul Singh wrote:
>> Add support for ARM architected SMMUv3 implementations. It is based on
>> the Linux SMMUv3 driver.
>>
>> Major differences from the Linux driver are as follows:
>> 1. Only Stage-2 translation is supported, as compared to the Linux driver
>>    that supports both Stage-1 and Stage-2 translations.
>
> First of all, thank you for the efforts!
>
> I tried the patch with QEMU and would like to know if my understanding is correct
>
> that this combination will not work as of now:
>
> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq

I have limited knowledge of QEMU internals. From what I see in the logs, the fault occurs during early driver initialisation, while the SMMU driver is probing the hardware.

> (XEN) Data Abort Trap. Syndrome=0x1940010
> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>
> [snip]
>
> If this is expected, then is there any plan to make QEMU work as well?
>
> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on the QEMU side.

Yes, as of now only Stage-2 is supported in Xen. If there is a requirement or use case that depends on Stage-1 translation, we can support that in Xen as well.

>
>
> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>
> implementation, so it could allow testing different setups and configurations with QEMU.
>
>
> Thank you in advance,
>
> Oleksandr
>
> [1] https://patchwork.ozlabs.org/project/qemu-devel/cover/1524665762-31355-1-git-send-email-eric.auger@redhat.com/

Regards,
Rahul



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:53:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:53:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15946.39187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVmK-0003mD-GP; Fri, 30 Oct 2020 14:53:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15946.39187; Fri, 30 Oct 2020 14:53:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVmK-0003m6-CL; Fri, 30 Oct 2020 14:53:20 +0000
Received: by outflank-mailman (input) for mailman id 15946;
 Fri, 30 Oct 2020 14:53:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYVmI-0003m1-RP
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:53:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0cbbf2e3-1795-4eba-9953-ae5dce683375;
 Fri, 30 Oct 2020 14:53:17 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7ABC1AD6B;
 Fri, 30 Oct 2020 14:53:16 +0000 (UTC)
X-Inumbo-ID: 0cbbf2e3-1795-4eba-9953-ae5dce683375
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604069596;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=4aZb7EZH5CwFk8f/3sl0xrlR4Q0NZ5M+QnPzMN9u0QE=;
	b=Aau8lFNgzKIEZ0qHH0uUKT1RBhwzEURwaTEKyRVkMhEFxM3mYRB+iMO1zA25bcp9mJGitw
	4QYiOIrNgQMuNA2aenWUdvca9EXQcUGrI5uWjBq6B4dpwOG1j2gQtCWCwISO5wbk5FGSc1
	187XvoIxaUyGCKtvL/GbBJ1zuRIr1gg=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: support RDPKRU/WRPKRU
Message-ID: <c657975c-0483-118a-c86d-8e731aca98ae@suse.com>
Date: Fri, 30 Oct 2020 15:53:15 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Since we support PKU for HVM guests, the respective insns should also be
recognized by the emulator.

In emul_test_read_cr(), rather than further extending the comment to
explain the hex numbers, switch to using X86_CR4_* values.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
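
As a reader's aid, the architectural contract enforced by the new 0xee/0xef
cases below can be modelled in plain C: both insns raise #UD unless CR4.PKE
is set; RDPKRU requires ECX to be zero (#GP(0) otherwise) and returns PKRU
in EAX with EDX cleared; WRPKRU additionally requires EDX to be zero and
takes the new value from EAX. This is only a sketch; the helper and type
names here are illustrative, not the emulator's:

```c
#include <assert.h>
#include <stdint.h>

#define X86_CR4_PKE (1u << 22)

enum rc { OKAY, EXC_UD, EXC_GP };

struct regs { uint32_t eax, ecx, edx; };

static uint32_t pkru;                    /* modelled PKRU register */

/* RDPKRU: #UD without CR4.PKE, #GP(0) unless ECX == 0,
 * else EAX = PKRU and EDX = 0. */
static enum rc emul_rdpkru(struct regs *r, uint32_t cr4)
{
    if ( !(cr4 & X86_CR4_PKE) )
        return EXC_UD;
    if ( r->ecx )
        return EXC_GP;
    r->eax = pkru;
    r->edx = 0;
    return OKAY;
}

/* WRPKRU: #UD without CR4.PKE, #GP(0) unless ECX == EDX == 0,
 * else PKRU = EAX. */
static enum rc emul_wrpkru(const struct regs *r, uint32_t cr4)
{
    if ( !(cr4 & X86_CR4_PKE) )
        return EXC_UD;
    if ( r->ecx | r->edx )
        return EXC_GP;
    pkru = r->eax;
    return OKAY;
}
```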

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -2399,6 +2399,23 @@ int main(int argc, char **argv)
         goto fail;
     printf("okay\n");
 
+    printf("%-40s", "Testing rdpkru / wrpkru...");
+    instr[0] = 0x0f; instr[1] = 0x01;
+    regs.ecx = 0;
+    for ( i = 0, j = (uint32_t)-__LINE__; i < 3; ++i )
+    {
+        instr[2] = 0xee | (i & 1);
+        regs.eax = i < 2 ? j : 0;
+        regs.edx = i & 1 ? 0 : j;
+        regs.eip = (unsigned long)&instr[0];
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (!(i & 1) && (regs.eax != (i ? j : 0) || regs.edx)) ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+    }
+    printf("okay\n");
+
     printf("%-40s", "Testing movdiri %edx,(%ecx)...");
     if ( stack_exec && cpu_has_movdiri )
     {
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -184,8 +184,8 @@ int emul_test_read_cr(
         return X86EMUL_OKAY;
 
     case 4:
-        /* OSFXSR, OSXMMEXCPT, and maybe OSXSAVE */
-        *val = 0x00000600 | (cpu_has_xsave ? 0x00040000 : 0);
+        *val = X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT | X86_CR4_PKE |
+               (cpu_has_xsave ? X86_CR4_OSXSAVE : 0);
         return X86EMUL_OKAY;
     }
 
@@ -256,4 +256,16 @@ void emul_test_put_fpu(
     /* TBD */
 }
 
+static uint32_t pkru;
+
+static unsigned int read_pkru(void)
+{
+    return pkru;
+}
+
+static void write_pkru(unsigned int val)
+{
+    pkru = val;
+}
+
 #include "x86_emulate/x86_emulate.c"
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -5815,6 +5815,39 @@ x86_emulate(
             }
             break;
 
+        case 0xee:
+            switch ( vex.pfx )
+            {
+            case vex_none: /* rdpkru */
+                if ( !ops->read_cr ||
+                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                    cr4 = 0;
+                generate_exception_if(!(cr4 & X86_CR4_PKE), EXC_UD);
+                generate_exception_if(_regs.ecx, EXC_GP, 0);
+                _regs.r(ax) = read_pkru();
+                _regs.r(dx) = 0;
+                break;
+            default:
+                goto unimplemented_insn;
+            }
+            break;
+
+        case 0xef:
+            switch ( vex.pfx )
+            {
+            case vex_none: /* wrpkru */
+                if ( !ops->read_cr ||
+                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                    cr4 = 0;
+                generate_exception_if(!(cr4 & X86_CR4_PKE), EXC_UD);
+                generate_exception_if(_regs.ecx | _regs.edx, EXC_GP, 0);
+                write_pkru(_regs.eax);
+                break;
+            default:
+                goto unimplemented_insn;
+            }
+            break;
+
         case 0xf8: /* swapgs */
             generate_exception_if(!mode_64bit(), EXC_UD);
             generate_exception_if(!mode_ring0(), EXC_GP, 0);
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -385,6 +385,17 @@ static inline unsigned int read_pkru(voi
     return pkru;
 }
 
+static inline void write_pkru(unsigned int pkru)
+{
+    unsigned long cr4 = read_cr4();
+
+    /* See read_pkru() */
+    write_cr4(cr4 | X86_CR4_PKE);
+    asm volatile ( ".byte 0x0f, 0x01, 0xef"
+                   :: "a" (pkru), "d" (0), "c" (0) );
+    write_cr4(cr4);
+}
+
 /* Macros for PKRU domain */
 #define PKRU_READ  (0)
 #define PKRU_WRITE (1)
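
For context on the PKRU_READ/PKRU_WRITE macros above: architecturally, PKRU
carries two attribute bits per protection key, with access-disable at bit
2*key + PKRU_READ and write-disable at bit 2*key + PKRU_WRITE. A minimal
sketch of that lookup (the helper names here are illustrative, not Xen's):

```c
#include <assert.h>
#include <stdint.h>

/* PKRU holds 16 keys x 2 bits: for key i, bit 2*i is access-disable
 * (AD) and bit 2*i + 1 is write-disable (WD). */

static unsigned int pkru_ad(uint32_t pkru, unsigned int key)
{
    return (pkru >> (key * 2)) & 1;      /* 1 => all data accesses disabled */
}

static unsigned int pkru_wd(uint32_t pkru, unsigned int key)
{
    return (pkru >> (key * 2 + 1)) & 1;  /* 1 => writes disabled */
}
```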


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 14:59:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 14:59:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15955.39199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVsL-00040M-8d; Fri, 30 Oct 2020 14:59:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15955.39199; Fri, 30 Oct 2020 14:59:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVsL-00040F-5R; Fri, 30 Oct 2020 14:59:33 +0000
Received: by outflank-mailman (input) for mailman id 15955;
 Fri, 30 Oct 2020 14:59:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYVsJ-00040A-77
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 14:59:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75e7c940-c471-4c61-9eea-750cca740724;
 Fri, 30 Oct 2020 14:59:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 743E1AC12;
 Fri, 30 Oct 2020 14:59:29 +0000 (UTC)
X-Inumbo-ID: 75e7c940-c471-4c61-9eea-750cca740724
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604069969;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kPeJnNrZ1raiHYYF/ZUmjakmrWYlcPvFjGTBg4ACVyI=;
	b=AVe9Nd2pH4zFutlRgzRlwBq3rrpcU9tZI34XlEsMSzVvKYOe4OufAkXTwbyf4m6jaekKZW
	avo+vEjX1kMfK07GNpnJzADvd23Shdbq9+312uotN/YiGEgbCnjShFRf7Zik/85V7GU5Pl
	05rG8H76swjv6MadLTSUNyvYbw90FpU=
Subject: Re: [PATCH 1/2] xen/spinlocks: spin_trylock with interrupts off is
 always fine
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201030142500.5464-1-jgross@suse.com>
 <20201030142500.5464-2-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <52ae01f4-b887-15da-aecb-b29b7c55a057@suse.com>
Date: Fri, 30 Oct 2020 15:59:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201030142500.5464-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.10.2020 15:24, Juergen Gross wrote:
> Even if a spinlock was taken with interrupts on before, calling
> spin_trylock() with interrupts off is fine, as it can't block.
> 
> Add a bool parameter "try" to check_lock() for handling this case.
> 
> Remove the call of check_lock() from _spin_is_locked(), as it really
> serves no purpose and can even lead to false crashes, e.g. when
> a lock was taken correctly with interrupts enabled and the call of
> _spin_is_locked() happened with interrupts off. In case the lock is
> taken with the wrong interrupt flags, this will be caught when taking
> the lock.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit I guess ...

> @@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
>       * 
>       * To guard against this subtle bug we latch the IRQ safety of every
>       * spinlock in the system, on first use.
> +     *
> +     * A spin_trylock() or spin_is_locked() with interrupts off is always
> +     * fine, as those can't block and above deadlock scenario doesn't apply.
>       */
> +    if ( try && irq_safe )
> +        return;

... the reference to spin_is_locked() here wants dropping,
since ...

> @@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
>  
>  int _spin_is_locked(spinlock_t *lock)
>  {
> -    check_lock(&lock->debug);

... you drop the call here?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 15:01:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 15:01:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15959.39211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVuc-0004pe-Mu; Fri, 30 Oct 2020 15:01:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15959.39211; Fri, 30 Oct 2020 15:01:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVuc-0004pX-IZ; Fri, 30 Oct 2020 15:01:54 +0000
Received: by outflank-mailman (input) for mailman id 15959;
 Fri, 30 Oct 2020 15:01:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYVub-0004pS-Gu
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 15:01:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5103f6e1-7492-40b9-a3a9-734403c0e18e;
 Fri, 30 Oct 2020 15:01:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 05117B1FC;
 Fri, 30 Oct 2020 15:01:52 +0000 (UTC)
X-Inumbo-ID: 5103f6e1-7492-40b9-a3a9-734403c0e18e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604070112;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zwLndfXZ1d4blZ0iQixz0yi/UIKpYmHe+FgEQr+3CQQ=;
	b=bCW1MLjWgS6LUsUvaTjyD7pTrpHi/M5YJT/Gw/6+cdjxUsxFdayhOFoHuuS3taME5dqmr9
	rewrzRJWWZrvoIQs5Y3tTTZ2jVbF/elOIiZ78UqCBWXv8SfffD9kPQqZelqUD2l0URVuCh
	ppM/vy2AM1mMFqdLv5+cmGV9Pu6Yt80=
Subject: Re: [PATCH 1/2] xen/spinlocks: spin_trylock with interrupts off is
 always fine
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201030142500.5464-1-jgross@suse.com>
 <20201030142500.5464-2-jgross@suse.com>
 <52ae01f4-b887-15da-aecb-b29b7c55a057@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <88f1989d-7df3-fc86-b6d3-87bdd1f4573b@suse.com>
Date: Fri, 30 Oct 2020 16:01:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <52ae01f4-b887-15da-aecb-b29b7c55a057@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.10.20 15:59, Jan Beulich wrote:
> On 30.10.2020 15:24, Juergen Gross wrote:
>> Even if a spinlock was taken with interrupts on before, calling
>> spin_trylock() with interrupts off is fine, as it can't block.
>>
>> Add a bool parameter "try" to check_lock() for handling this case.
>>
>> Remove the call of check_lock() from _spin_is_locked(), as it really
>> serves no purpose and can even lead to false crashes, e.g. when
>> a lock was taken correctly with interrupts enabled and the call of
>> _spin_is_locked() happened with interrupts off. In case the lock is
>> taken with the wrong interrupt flags, this will be caught when taking
>> the lock.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> albeit I guess ...
> 
>> @@ -42,7 +42,13 @@ static void check_lock(union lock_debug *debug)
>>        *
>>        * To guard against this subtle bug we latch the IRQ safety of every
>>        * spinlock in the system, on first use.
>> +     *
>> +     * A spin_trylock() or spin_is_locked() with interrupts off is always
>> +     * fine, as those can't block and above deadlock scenario doesn't apply.
>>        */
>> +    if ( try && irq_safe )
>> +        return;
> 
> ... the reference to spin_is_locked() here wants dropping,
> since ...
> 
>> @@ -220,8 +226,6 @@ void _spin_unlock_irqrestore(spinlock_t *lock, unsigned long flags)
>>   
>>   int _spin_is_locked(spinlock_t *lock)
>>   {
>> -    check_lock(&lock->debug);
> 
> ... you drop the call here?

Oh yes, this was a late modification and I didn't adapt the comment
accordingly. Thanks for spotting it.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 15:02:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 15:02:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15965.39223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVvP-0004we-VP; Fri, 30 Oct 2020 15:02:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15965.39223; Fri, 30 Oct 2020 15:02:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYVvP-0004wX-SK; Fri, 30 Oct 2020 15:02:43 +0000
Received: by outflank-mailman (input) for mailman id 15965;
 Fri, 30 Oct 2020 15:02:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CBev=EF=epam.com=prvs=95721c7f7b=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1kYVvO-0004wO-Hc
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 15:02:42 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a1da9e7-0062-4f43-9bd5-61ca165b7215;
 Fri, 30 Oct 2020 15:02:41 +0000 (UTC)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09UEv5qT022520; Fri, 30 Oct 2020 15:02:33 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2173.outbound.protection.outlook.com [104.47.17.173])
 by mx0b-0039f301.pphosted.com with ESMTP id 34gf4nh8qn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 30 Oct 2020 15:02:33 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM4PR0302MB2801.eurprd03.prod.outlook.com (2603:10a6:200:99::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.27; Fri, 30 Oct
 2020 15:02:31 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::501:a686:7515:465e%9]) with mapi id 15.20.3499.029; Fri, 30 Oct 2020
 15:02:31 +0000
X-Inumbo-ID: 1a1da9e7-0062-4f43-9bd5-61ca165b7215
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cHHnaKppuuCX5qYeU6JtYUrq9onNIUC/oum3i8BLrwqHXQXu/g71Muigg/h4m0/OVIrO4wGiQ0OLdTFaXthBQM51PPpihtuAFyGUciay9qpdCFOqsSNmSwCY6zyN43lBaDV97QrHcWySuQKNpc++X9m4ju1nJpexKrxne6z2jDR2/Ze75L3bjNs1ZzRm5pb+1LFXQNWJvT1F+7c18JVcNh4vyA2SFheMamHONpQ+xwATAkEQA01DEjsaK1RtyhsK959Uyzc83ypXQPJ+Q7xmsCAhKh7YE7j4cWax3T7Q1A6O5m91uYVGl2Dk9+uZStVQfGse3fEblM/UCj0P3pxoNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VLlrb76xIQet8Y6fn0HO1ccId/+9T+kzIKATnE77fZ4=;
 b=IYUrH80p3tlHNj70RqvSfDEbNUHpayRjGTfjDK2vL5va1GaIUiVohjrQsbWaML1W+csI9Lj8kGgq0wa9TEGJNq0ygb65gdFgN/3Dm317dY8zP+2jiJ8EYEEp4q0PXaSnIzheK+GQSc9ZlH05cc7HQLr5gG2iUg2210n8je86W50fqe+fCLZ0uZeHbmmjKJxnlHn6v3ThDfkFdNq2W7upry1l/NndVh3LSL76q8idgL8R1eKS1Wb1mIxQgJBUfeAJP0Cr1F7Yb2KucfDbfGAdhdmsWv+QtYMSebwtFgyp8rflzhCdgEWm/F7azm04BLZvPbuzKghSjmC+SqoUdOp1UA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VLlrb76xIQet8Y6fn0HO1ccId/+9T+kzIKATnE77fZ4=;
 b=xUgWSQUEJQe1B4HOBRUynG2RfhbtVKrmfOeWtYrpB4lggC9Kk/hKN6sQ++UuurygQ/OUSDAo4RBNbdsXTKnt2M0UF7B5NCH988IMQYUWUFAmyFkT+zso6jC1aA2l/Qm1DlZoZhbvKRJE1Fpkl51jbryc2kAH7THFXr4bnMsncLEvAyYgwGS53QTsZGxmRtYEcUDpA7e44jvOa0DY8Z76WKwNg0YvnDkHrTXIlUGrmHAouCxGEqeyHgChTUcHa3ixDDh/7gFWaEW4L6nb5fXMPIKu39Y+AOlx8YjPFUPc/9rbtmim2QNwpRXG+F10ZZ7TASlk4p4goOw7xTfkoH/hbg==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Rahul Singh <Rahul.Singh@arm.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Bertrand Marquis <Bertrand.Marquis@arm.com>,
        Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWrqmeCV4SGlhADU2GY14D47cn76mwOdAAgAAEO4A=
Date: Fri, 30 Oct 2020 15:02:30 +0000
Message-ID: <da9d0192-7431-83ab-be1a-cc107ee1ac4c@epam.com>
References: 
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <2AB3A125-D530-4627-A877-EC2BCDCD63DC@arm.com>
In-Reply-To: <2AB3A125-D530-4627-A877-EC2BCDCD63DC@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: arm.com; dkim=none (message not signed)
 header.d=none;arm.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5ce3663d-359f-4957-aacc-08d87ce4d331
x-ms-traffictypediagnostic: AM4PR0302MB2801:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM4PR0302MB28019D6154E2EAF22973B4A6E7150@AM4PR0302MB2801.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 5Ag6u9abNcl4EVQC7AVIPKtRkTIA+P0+G8IJq6fK60+Guy+tQ4tufZ4ovIhhrDPVf0BpAmbDBCV8tvx7Xe1766Q+pJd0lIbciuCA/k4F6keAO2j4TCIvGTHhKCSx2pMo4OtYUQbdPCd20oF3frxbrcy6DPHnkaWBszo3h66qR/PgPVt6140Smy6e0akQtjKLf7FGB+ZvSIfgkYL6ka955qEViInH7URmDXu7Am3KTi/jvXiSXtLgCJY8b6p9/16gMvJZP7VjmsnLPVROgNK5fMr7CzmCnJryYOmzhgBCgkTorX5/xwtv4iL245f2gO625RWgi2bkRQUlUZR3yYHWFI1caH4jOAON3MMfDXqih0jpNbqkVtqmtQx4OtE/oEsSfZFkSikn7pXgCcxcFyyNaA==
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB6324.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(396003)(376002)(366004)(346002)(136003)(31686004)(71200400001)(6512007)(4326008)(186003)(53546011)(6506007)(2906002)(83380400001)(54906003)(26005)(316002)(36756003)(6486002)(966005)(8936002)(76116006)(6916009)(66946007)(31696002)(66476007)(66556008)(64756008)(66446008)(478600001)(2616005)(5660300002)(107886003)(86362001)(8676002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 mHWlM/2uB4c2p+1i0h9BwtkMoFJlB5anv2il8MDssBGXb+mu2QL6wTIzaV7gqWAA06Cn2aZy9cAEhklyQsZALL1Ecitsp2k9x/dOx5EuCQr6tbKecvBVx/sR2zI1KXzORIkHEUnTWFwyxk3+bTFGOoOGLDCKE+BoaOGq8BE1scKnrew1Bsla+yW98pCgxBE81GPtRfvd4JxT9vta5aoT4zVZ7zhQAMHu4Nccs8rbgH8aln4FnIEAPZuqURGcwIv+/1j5X8UD0s+p/EogCtP2WTmwddrXKPlE0GAjPD9JwfGd3DUdVc7opFlZUFvaDeIiM0vEucQV7FpZaUgAf6iP1f8hkGagJWO50nuj7AEpBH6EEBMFtAU6j7D6uwfiUU4LLNrOtF5tpqZxVwmGBVmAiq1SKw0rTz6/HbWrQwn/NVAGjUEOE/o+vaoYY/qM+28uiiUj6fFJVxvM4YR/PUobJke2VedqVEzlmjnTgL9OdVZ2CdtHYvOQJ4XLBA/oOVz1l0fvYNUVj1doDb01IgMr6n42Q1mVoKkwXF/Xo82jkAIRgWuIBbRm07HNyTQVwQjGBEn+H+IUL+Gui6GiJTPEKbVJF37i1NIPTRGXgQo10q1C33+xrTcB/tr+qJaSFeNQxSczoZJYejOUiqThi4HfOQ==
Content-Type: text/plain; charset="utf-8"
Content-ID: <D7B12ABB49D52D4CA05F988FFCAE1C4A@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5ce3663d-359f-4957-aacc-08d87ce4d331
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Oct 2020 15:02:30.9325
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: hAQrk18rEb2o54dc/+Q4Zat3275WSQHKRd/fFqPSUpgoSSEzQQcbnMy6vRhZsJD/4waPuQ7ps+CHTghLl6Vj64jyB18aSqlsmsnO8J1RaiCtJfvnLjBEH94Oqi33OQVP
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR0302MB2801
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-30_07:2020-10-30,2020-10-30 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 phishscore=0
 mlxlogscore=838 impostorscore=0 priorityscore=1501 bulkscore=0
 lowpriorityscore=0 malwarescore=0 suspectscore=0 spamscore=0 adultscore=0
 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010300114

Hi,

On 10/30/20 4:47 PM, Rahul Singh wrote:
> Hello Oleksandr,
>
>> On 30 Oct 2020, at 10:44 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>
>> Hi, Rahul!
>>
>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>> the Linux SMMUv3 driver.
>>>
>>> Major differences between the Linux driver are as follows:
>>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>>     that supports both Stage-1 and Stage-2 translations.
>> First of all, thank you for the efforts!
>>
>> I tried the patch with QEMU and would like to know if my understanding
>> is correct that this combination will not work as of now:
>>
>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
> I have limited knowledge about QEMU internals. From what I see in the
> logs, the fault occurred at early driver initialisation when the SMMU
> driver is trying to probe the HW.
>
>> (XEN) Data Abort Trap. Syndrome=0x1940010
>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>
>> [snip]
>>
>> If this is expected, then is there any plan to make QEMU work as well?
>>
>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on the
>> QEMU side.
> Yes, as of now only Stage-2 is supported in Xen. If we have any
> requirement or use case that depends on Stage-1 translation, we can
> support that also in Xen.
The use case is below: PCI passthrough and various configurations
including SR-IOV, which is possible with QEMU...
>
>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI
>> passthrough implementation, so it could allow testing different setups
>> and configurations with QEMU.
>>
>> Thank you in advance,
>>
>> Oleksandr
>>
>> [1] https://urldefense.com/v3/__https://patchwork.ozlabs.org/project/qemu-devel/cover/1524665762-31355-1-git-send-email-eric.auger@redhat.com/__;!!GF_29dbcQIUBPA!h-EaE0OnSbXtLBSwIS311alDl7pn8sH7sihgIYqilM5-r-8kCH6USNNlLB3xhbzc6eczUOrhcw$ [patchwork[.]ozlabs[.]org]
> Regards,
> Rahul

Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 15:11:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 15:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15980.39235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYW3H-0005tS-Q3; Fri, 30 Oct 2020 15:10:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15980.39235; Fri, 30 Oct 2020 15:10:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYW3H-0005tL-Mw; Fri, 30 Oct 2020 15:10:51 +0000
Received: by outflank-mailman (input) for mailman id 15980;
 Fri, 30 Oct 2020 15:10:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYW3G-0005tG-4Z
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 15:10:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b4a1de54-4467-4379-991a-01175b0c6f8f;
 Fri, 30 Oct 2020 15:10:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 10C36AD32;
 Fri, 30 Oct 2020 15:10:48 +0000 (UTC)
X-Inumbo-ID: b4a1de54-4467-4379-991a-01175b0c6f8f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604070648;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=q+BpbjQ2SfuyFbxZlQdiPsxUobbrMhp7QTbdGzmKK5w=;
	b=aOZH7Se4XOhv7gSUOwhIGZxNVhnsGwW24dvi6vyYYPka04kJs3MaRSfIqF/NpWnpJyemcg
	td/Cg0VRASmAMZXZ/7dFtpq/HCpvoUzsSwOHk89XJfN6e3XiYKGaelQdrHf0Mf/wmt0vnH
	Rmm8ltSnHKqfx6xp9lAncFZ4KCz+z3E=
Subject: Re: [PATCH 2/2] xen/rwlock: add check_lock() handling to rwlocks
To: Juergen Gross <jgross@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201030142500.5464-1-jgross@suse.com>
 <20201030142500.5464-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <239c8495-46fb-122a-be69-6aee98a3ea82@suse.com>
Date: Fri, 30 Oct 2020 16:10:46 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201030142500.5464-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.10.2020 15:25, Juergen Gross wrote:
> --- a/xen/include/xen/rwlock.h
> +++ b/xen/include/xen/rwlock.h
> @@ -65,7 +65,11 @@ static inline int _read_trylock(rwlock_t *lock)
>           * arch_lock_acquire_barrier().
>           */
>          if ( likely(_can_read_lock(cnts)) )
> +        {
> +            check_lock(&lock->lock.debug, true);
>              return 1;
> +        }

Why not unconditionally earlier in the function?

> @@ -87,7 +91,10 @@ static inline void _read_lock(rwlock_t *lock)
>       * arch_lock_acquire_barrier().
>       */
>      if ( likely(_can_read_lock(cnts)) )
> +    {
> +        check_lock(&lock->lock.debug, false);
>          return;
> +    }
>  
>      /* The slowpath will decrement the reader count, if necessary. */
>      queue_read_lock_slowpath(lock);

I guess doing so here and ...

> @@ -162,7 +169,10 @@ static inline void _write_lock(rwlock_t *lock)
>       * arch_lock_acquire_barrier().
>       */
>      if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
> +    {
> +        check_lock(&lock->lock.debug, false);
>          return;
> +    }
>  
>      queue_write_lock_slowpath(lock);

... here is okay, as the slow paths have checks anyway.

> @@ -205,6 +215,8 @@ static inline int _write_trylock(rwlock_t *lock)
>          return 0;
>      }
>  
> +    check_lock(&lock->lock.debug, true);

But here I again think it wants moving up.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 15:14:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 15:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15987.39247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYW6K-000657-Cw; Fri, 30 Oct 2020 15:14:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15987.39247; Fri, 30 Oct 2020 15:14:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYW6K-000650-9s; Fri, 30 Oct 2020 15:14:00 +0000
Received: by outflank-mailman (input) for mailman id 15987;
 Fri, 30 Oct 2020 15:13:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYW6I-00064v-Re
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 15:13:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7451893d-7cce-4fdc-8b17-e096a89eae22;
 Fri, 30 Oct 2020 15:13:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DCC3BAE2C;
 Fri, 30 Oct 2020 15:13:56 +0000 (UTC)
X-Inumbo-ID: 7451893d-7cce-4fdc-8b17-e096a89eae22
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604070837;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=w3ISSV7hw/KQpGaHdwFoHUv2Um5RuLledvqNKpe7ImA=;
	b=fSpHNikT4s47Gw06qdYsFdmDj/n5qjQAWgyW53fZ2J0VVqfNI+Ca9/vkwev7oBfU5FW6uK
	2H6p9+yhus99/aKtPSuc4MQxjemMP/ClodtBjfHtxlDSMnEtSPcWh7XrlJzcrBCH/sz6Z3
	ma+92PX/oatenMlENFn+qXsAhxb3Hvc=
Subject: Re: [PATCH 2/2] xen/rwlock: add check_lock() handling to rwlocks
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201030142500.5464-1-jgross@suse.com>
 <20201030142500.5464-3-jgross@suse.com>
 <239c8495-46fb-122a-be69-6aee98a3ea82@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <4cea5bfd-5a91-1a77-9c49-989fa191f8d6@suse.com>
Date: Fri, 30 Oct 2020 16:13:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <239c8495-46fb-122a-be69-6aee98a3ea82@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.10.20 16:10, Jan Beulich wrote:
> On 30.10.2020 15:25, Juergen Gross wrote:
>> --- a/xen/include/xen/rwlock.h
>> +++ b/xen/include/xen/rwlock.h
>> @@ -65,7 +65,11 @@ static inline int _read_trylock(rwlock_t *lock)
>>            * arch_lock_acquire_barrier().
>>            */
>>           if ( likely(_can_read_lock(cnts)) )
>> +        {
>> +            check_lock(&lock->lock.debug, true);
>>               return 1;
>> +        }
> 
> Why not unconditionally earlier in the function?

It's a trylock, so we don't want to call check_lock() without having
got the lock.

> 
>> @@ -87,7 +91,10 @@ static inline void _read_lock(rwlock_t *lock)
>>        * arch_lock_acquire_barrier().
>>        */
>>       if ( likely(_can_read_lock(cnts)) )
>> +    {
>> +        check_lock(&lock->lock.debug, false);
>>           return;
>> +    }
>>   
>>       /* The slowpath will decrement the reader count, if necessary. */
>>       queue_read_lock_slowpath(lock);
> 
> I guess doing so here and ...
> 
>> @@ -162,7 +169,10 @@ static inline void _write_lock(rwlock_t *lock)
>>        * arch_lock_acquire_barrier().
>>        */
>>       if ( atomic_cmpxchg(&lock->cnts, 0, _write_lock_val()) == 0 )
>> +    {
>> +        check_lock(&lock->lock.debug, false);
>>           return;
>> +    }
>>   
>>       queue_write_lock_slowpath(lock);
> 
> ... here is okay, as the slow paths have checks anyway.
> 
>> @@ -205,6 +215,8 @@ static inline int _write_trylock(rwlock_t *lock)
>>           return 0;
>>       }
>>   
>> +    check_lock(&lock->lock.debug, true);
> 
> But here I again think it wants moving up.

No, another trylock.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 15:20:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 15:20:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.15993.39260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYWCH-0006uF-5l; Fri, 30 Oct 2020 15:20:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 15993.39260; Fri, 30 Oct 2020 15:20:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYWCH-0006u8-03; Fri, 30 Oct 2020 15:20:09 +0000
Received: by outflank-mailman (input) for mailman id 15993;
 Fri, 30 Oct 2020 15:20:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=y1I6=EF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1kYWCG-0006u3-6y
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 15:20:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8586bb31-0833-4606-96aa-ff504cdea0b2;
 Fri, 30 Oct 2020 15:20:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0E372B244;
 Fri, 30 Oct 2020 15:20:06 +0000 (UTC)
X-Inumbo-ID: 8586bb31-0833-4606-96aa-ff504cdea0b2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604071206;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XkTMwdHT+Hhj9foVuFHCZGD8dyJ/KCsO+t+XPA2294g=;
	b=eDKWL/38CisnvUcOLVO0kzf0Fx5q4s/0JCnqfGZVOAY8qQGYzeeb7Vg/OmLP6zAvpXBvWz
	51aKDVAY1+H1wpPpOLf6MzyoWLlZloZxVCTy/RSCJyT+Z9FFfuAy4i/DdkxuNjiZJmNVqQ
	njlrnInOqYUPSh0nWhHVzqKf/lo68w0=
Subject: Re: [PATCH 2/2] xen/rwlock: add check_lock() handling to rwlocks
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201030142500.5464-1-jgross@suse.com>
 <20201030142500.5464-3-jgross@suse.com>
 <239c8495-46fb-122a-be69-6aee98a3ea82@suse.com>
 <4cea5bfd-5a91-1a77-9c49-989fa191f8d6@suse.com>
Message-ID: <5f6f94cb-79fa-209a-b48a-b386d2ecb2a1@suse.com>
Date: Fri, 30 Oct 2020 16:20:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.12.0
MIME-Version: 1.0
In-Reply-To: <4cea5bfd-5a91-1a77-9c49-989fa191f8d6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.10.20 16:13, Jürgen Groß wrote:
> On 30.10.20 16:10, Jan Beulich wrote:
>> On 30.10.2020 15:25, Juergen Gross wrote:
>>> --- a/xen/include/xen/rwlock.h
>>> +++ b/xen/include/xen/rwlock.h
>>> @@ -65,7 +65,11 @@ static inline int _read_trylock(rwlock_t *lock)
>>>            * arch_lock_acquire_barrier().
>>>            */
>>>           if ( likely(_can_read_lock(cnts)) )
>>> +        {
>>> +            check_lock(&lock->lock.debug, true);
>>>               return 1;
>>> +        }
>>
>> Why not unconditionally earlier in the function?
> 
> It's a trylock, so we don't want to call check_lock() without having
> got the lock.

Hmm, OTOH we do so for spinlocks, too.

So maybe it's really better to move it up.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 16:08:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 16:08:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16016.39271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYWxC-0002g4-Vv; Fri, 30 Oct 2020 16:08:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16016.39271; Fri, 30 Oct 2020 16:08:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYWxC-0002fx-SO; Fri, 30 Oct 2020 16:08:38 +0000
Received: by outflank-mailman (input) for mailman id 16016;
 Fri, 30 Oct 2020 16:08:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CFCO=EF=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1kYWxC-0002fs-0m
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 16:08:38 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.70]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8fb4790f-0582-4d36-a2f4-0a3e539b4685;
 Fri, 30 Oct 2020 16:08:36 +0000 (UTC)
Received: from AM6P191CA0004.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8b::17)
 by VI1PR0802MB2493.eurprd08.prod.outlook.com (2603:10a6:800:b3::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.28; Fri, 30 Oct
 2020 16:08:34 +0000
Received: from AM5EUR03FT009.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8b:cafe::b4) by AM6P191CA0004.outlook.office365.com
 (2603:10a6:209:8b::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 30 Oct 2020 16:08:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT009.mail.protection.outlook.com (10.152.16.110) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 16:08:34 +0000
Received: ("Tessian outbound c189680f801b:v64");
 Fri, 30 Oct 2020 16:08:33 +0000
Received: from 740202d60fa8.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DAAFDF4A-1FEC-487A-B75D-235418EF3387.1; 
 Fri, 30 Oct 2020 16:08:28 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 740202d60fa8.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 30 Oct 2020 16:08:28 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB6PR0801MB1989.eurprd08.prod.outlook.com (2603:10a6:4:75::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18; Fri, 30 Oct
 2020 16:08:27 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::cf6:86:f034:aec4%6]) with mapi id 15.20.3499.029; Fri, 30 Oct 2020
 16:08:27 +0000
X-Inumbo-ID: 8fb4790f-0582-4d36-a2f4-0a3e539b4685
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RjezaDM00I+46WbgQ4e7+hY68YD3206+DCBkgdUK+3o=;
 b=laXtd6NORx0DdsU7goHswxbi4EMS4OGeazfJlMNiH+/zPMnaAhncPEfwmNdSnV0XY4XJ+auq48QN67F1HL3uhm0dDtcUqA3PKtfvE8zE9tU5jziNAO23HYb4je90OHaIrPyW5XqupGqwjPLSFhJl2pTlCDratcm3XWfDdsLBKUM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: ae9f11cd77450740
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cbPlOoEOMMS30jZZA6DlNBSt1sIK0kN8DfYLd2wZ/3R9WUTq36UGI4R5Q/2HHnnQ9Ne7gXRAFv3wOUzvRgcP0xCHd5DmooZylHPg5KovA4qh8PygmABgXzeKaG3d+2XGGrGgcdy9vByp3Eh/W6dS0A8mu9Ceg2Sc8PAomjL2zgM/Uo4fLcBD1AsveDy8jmzpYlBqbGLvBg3apG8PzyzlvnBEFn2zmn8TEP8Apon2a/8r0rp4OtV+iCzkT17HJuYtHH3FqMaZN4pt+yITeNcjqjfTPHVlhir3kEeF0eIszqW0v+YrchyAjsblKDkdQhAkX4R73FLQABEjbqI514uA3A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RjezaDM00I+46WbgQ4e7+hY68YD3206+DCBkgdUK+3o=;
 b=SRhiPAojAQDWCNhcDP6gX3txoL6PtWdSuq2TKGc1hddwQ7gKgHWewv8L117i2Y+DgNYuAfvJj+vt1IbPGHjs1SL1taIfIi82N+6VGKgf41zYFVPgphzPMoDR0aCM5aWkRfK+sbaV5iqTfIa+ac31W0Ij5AZqyYqB/w4LXcl42+fjQXkOHLuSuxwGZpzW3GYMyo5mVdMYNQP92xruYm/iqyGS9QX6/fqprHubUdttjKPEgYKI1WmECPdEfuR+bq9KvEL0E2YVXKSsLlip8+K5K8DbJVkqgZXJYDogQ8udg8bB6lLCZ6xCQJRVr7bs3BBIreVXEWcdLJUbe6tJ9yga7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
CC: Rahul Singh <Rahul.Singh@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Jan Beulich <jbeulich@suse.com>, Paul
 Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Topic: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
Thread-Index: AQHWoVvyGh0jWzCHpk6w/U7XN+08x6mwEIWAgABD5gCAAAQ9AIAAEmwA
Date: Fri, 30 Oct 2020 16:08:26 +0000
Message-ID: <E1137D39-EDF2-4663-A990-7628B7057B45@arm.com>
References:
 <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
 <2AB3A125-D530-4627-A877-EC2BCDCD63DC@arm.com>
 <da9d0192-7431-83ab-be1a-cc107ee1ac4c@epam.com>
In-Reply-To: <da9d0192-7431-83ab-be1a-cc107ee1ac4c@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 6e4480c3-d6cd-447f-b6fc-08d87cee0d55
x-ms-traffictypediagnostic: DB6PR0801MB1989:|VI1PR0802MB2493:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR0802MB2493FD6E5F8C32B16FD083999D150@VI1PR0802MB2493.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 nmTWQNbHkNwi1WmQTWvmsh4gBgwLK7eCU6sNm14hlp2qcXRHwKt2a00m9Vt2Rch/jYNqsWJlMtIHh/NP/6r6V84V+xV82Jf7f1te7MWggW5QIUpyG+VpOVmwh/RjJnCcUKEd6lDbsHEvO6tos3/n41DjdWTJYmq4+HeYIldxBXBzw0J+86kpMMe2Wm2cT1xJuqjRsI5Ccs0WfOIP4euYPT1DRgYCJVroEqoj4nRp41+4Ih1h7Sl5OhEm8JTnnqAG294AUzsx8OO/1vUiBQFW+7A//7+XEjHNlpwi7OMT4z4+4kGewJhTvXW98C4p59gc8HiTgPM8BSOpAJqp0ZzF6eQ7baB+siU4hUN5Sjd7fhVV87oCDR0iQVlu6GBCDPEu+DseUJRjv1WomJhSGGXS4A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB7PR08MB3689.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(346002)(396003)(136003)(366004)(376002)(316002)(26005)(86362001)(71200400001)(6916009)(186003)(6486002)(33656002)(54906003)(5660300002)(4326008)(36756003)(478600001)(91956017)(966005)(2616005)(6512007)(76116006)(8676002)(64756008)(66946007)(53546011)(6506007)(8936002)(66556008)(66476007)(66446008)(2906002)(83380400001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 uniEXVqtrgGx0OEQHAugj14uNPjUxEJuVT/8XzEbjvwnuz+t23KOuSiDNv6QMgPt7IbAYksVlYwZICPSmK0Zw/mYYgPTuthof72sqE9sQgQmFcKkTqW5jBH6HTJOd7RKY5gvpWBw3Wqs18+7iNeTe92MpKYQiAMj0PxZJ5qsECbUkuLp3pZIgqsC+R/pat3C2CORQH9H/lkkZ1H3GaiqO0+6GNVb99IvE0+VhSoVkL7jr+tef6qe/Cng/8Y1IQ7/Cm5a5sERocJEqfnq5zJIQotYH8HOIsOzwTxHcJ7j1lmSWkn7yRP5RTK70j9P2WfZ26nDqTlXslXGMgsJx3iVTGwkEGmg4eyW1Hf3p/WLQ094AvUtjuDGR+CYJPXrH9bxTJMTFBwxyuKV59bYOJxMzffJ8OQ8gDuxLPdAtV+30i6NiFrbqvNfANuHP/PdeVrsfJI1eQHbsl+K2JNa1U928nychFcMhVqddS9Ot08fZr95Npy127Hhz0M+DitifZ8bHZUIyd71l3PcV3u6tuoK1VdG5lmiLikvV8EOf87pJszOZuExU9IjOF1PFjBozk3ib0foVlg6mHpNP4XZm3oXQieSIBv4jxyh6K/Gmvc26P0smik+Fs5oXV/C8XMYsSr21/We3+L4kfd1qxM1HH9t8A==
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D00FED731FBE094FACB8197851601B03@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1989
Original-Authentication-Results: epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d472815f-232c-4ff7-7789-08d87cee0932
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	FHUgliiUiK31HgGGC2h/SqBTZgRHOMx5wUhkUWEEKl6sWIs71JCDhcrUtkPmKZTFbsuJ7fAKVBrqICawaRyf4VEZeQu3xUY9jq4ySEAmyCNIFjz0hM7k5bMDrZLqfyM+M33LUV4sIOjlQQcn11SqwNYxF8Ee6KbNDou3cX5qzZweNLQU7TzswoP7DpaGioiVk+od6V0vxZBgRYSGgxHr0wYoBINH094ILk1M61en5Q09MOAOFPEr+GrlpuW8lThmN/1rVmgrMi4K8BsTxbrrSkWMjI7qOFRCNCgw+L0nhZV8fVJ1WFVH5kCJRT54uCGohcCFeaM8WP7tPIharRjhJXzmZSl1+rhI9ZEBE+HR2eVfT127Hu4yz1W+8asz41+P8KkAP4/eKSBPG+glguw2g9IhihzjHZ8iv5VBO46D4mubWx39Fzbm8IKvWex4vX3mDNbJMJZSx427n5MXJxQILXGxXfyfb269PyPdJOOG09Y=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(136003)(396003)(346002)(376002)(46966005)(8676002)(26005)(5660300002)(82310400003)(8936002)(47076004)(2616005)(2906002)(336012)(966005)(83380400001)(33656002)(478600001)(4326008)(107886003)(6862004)(36906005)(186003)(316002)(81166007)(36756003)(356005)(86362001)(70586007)(6486002)(70206006)(6512007)(6506007)(53546011)(82740400003)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 16:08:34.0137
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e4480c3-d6cd-447f-b6fc-08d87cee0d55
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT009.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0802MB2493

Hi Oleksandr,

> On 30 Oct 2020, at 15:02, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> 
> Hi,
> 
> On 10/30/20 4:47 PM, Rahul Singh wrote:
>> Hello Oleksandr,
>> 
>>> On 30 Oct 2020, at 10:44 am, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>>> 
>>> Hi, Rahul!
>>> 
>>> On 10/20/20 6:25 PM, Rahul Singh wrote:
>>>> Add support for ARM architected SMMUv3 implementations. It is based on
>>>> the Linux SMMUv3 driver.
>>>> 
>>>> Major differences between the Linux driver are as follows:
>>>> 1. Only Stage-2 translation is supported, as compared to the Linux driver
>>>>    that supports both Stage-1 and Stage-2 translations.
>>> First of all, thank you for the efforts!
>>> 
>>> I tried the patch with QEMU and would like to know if my understanding is correct
>>> 
>>> that this combination will not work as of now:
>>> 
>>> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
>> I have limited knowledge about QEMU internals. As far as I can see from the logs, the fault occurs at early driver initialisation, when the SMMU driver is trying to probe the HW.
>> 
>>> (XEN) Data Abort Trap. Syndrome=0x1940010
>>> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
>>> (XEN) 0TH[0x0] = 0x00000000b8468f7f
>>> 
>>> [snip]
>>> 
>>> If this is expected, then is there any plan to make QEMU work as well?
>>> 
>>> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on the QEMU side.
>> Yes, as of now only Stage-2 is supported in Xen. If we have any requirement or use case that depends on Stage-1 translation, we can support that also in Xen.
> The use case is below: PCI passthrough and various configurations, including SR-IOV, which is possible with QEMU...

This is currently not in the list of configurations we support, or that we have planned to support on our side.

But we would be more than happy to review any changes to enable this :-)

Regards
Bertrand

>> 
>>> 
>>> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
>>> 
>>> implementation, so it could allow testing different setups and configurations with QEMU.
>>> 
>>> 
>>> Thank you in advance,
>>> 
>>> Oleksandr
>>> 
>>> [1] https://patchwork.ozlabs.org/project/qemu-devel/cover/1524665762-31355-1-git-send-email-eric.auger@redhat.com/
>> Regards,
>> Rahul
> 
> Thank you,
> 
> Oleksandr



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 16:11:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 16:11:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16027.39282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYWzw-0003Tp-Dl; Fri, 30 Oct 2020 16:11:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16027.39282; Fri, 30 Oct 2020 16:11:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYWzw-0003Ti-Am; Fri, 30 Oct 2020 16:11:28 +0000
Received: by outflank-mailman (input) for mailman id 16027;
 Fri, 30 Oct 2020 16:11:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYWzu-0003Tc-HM
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 16:11:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6a436ede-ba2e-4948-a8a2-8e0e6f7e1a15;
 Fri, 30 Oct 2020 16:11:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 86243AC1F;
 Fri, 30 Oct 2020 16:11:24 +0000 (UTC)
X-Inumbo-ID: 6a436ede-ba2e-4948-a8a2-8e0e6f7e1a15
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604074284;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Fwc3PBndyk5a5z4eQZLAdZbQCP/HMzFvnVlmD3MlYmw=;
	b=cf4f+MqHwUb7XitBNLK69yXUbcxecp2d46FYj3dOvJl98L7HhLkPP+hq+aq0eyl7Ixeqgs
	VeHkmKYz2bHaki5a/o8BHqJASmLwLuc2vQ5oi6AXgbR+9x2PmnhLWGNt3kEOcgSqmaxbpo
	GaRfjJ2b5WJNgQgLXyrEI2rg6pThMRY=
Subject: Re: [PATCH 4/5] iommu: set 'hap_pt_share' and 'need_sync' flags
 earlier in iommu_domain_init()
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, xen-devel@lists.xenproject.org
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-5-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f9fd4db4-3bb6-e5dc-877b-17486a8fe635@suse.com>
Date: Fri, 30 Oct 2020 17:11:23 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201005094905.2929-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.10.2020 11:49, Paul Durrant wrote:
> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -174,15 +174,6 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
>      hd->node = NUMA_NO_NODE;
>  #endif
>  
> -    ret = arch_iommu_domain_init(d);
> -    if ( ret )
> -        return ret;
> -
> -    hd->platform_ops = iommu_get_ops();
> -    ret = hd->platform_ops->init(d);
> -    if ( ret || is_system_domain(d) )
> -        return ret;

Are you suggesting the is_system_domain() check here has become
unnecessary? If so, it would be nice if you could say when
or why. Otherwise I would assume it's needed to avoid
setting one or both of the fields.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 16:17:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 16:17:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16035.39298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYX5N-0003hr-4o; Fri, 30 Oct 2020 16:17:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16035.39298; Fri, 30 Oct 2020 16:17:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYX5N-0003hk-1d; Fri, 30 Oct 2020 16:17:05 +0000
Received: by outflank-mailman (input) for mailman id 16035;
 Fri, 30 Oct 2020 16:17:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYX5L-0003h6-Bk
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 16:17:03 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65609465-1a07-4127-9315-3e0661fc7407;
 Fri, 30 Oct 2020 16:16:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYX5D-00052A-Lk; Fri, 30 Oct 2020 16:16:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYX5D-0000ii-Cx; Fri, 30 Oct 2020 16:16:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYX5D-0007AU-CI; Fri, 30 Oct 2020 16:16:55 +0000
X-Inumbo-ID: 65609465-1a07-4127-9315-3e0661fc7407
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YFgweZzv1aDy7ZDWcK9gjhaF2vTqHudrPKAy5gf8iGk=; b=p77r9CkdvXJt4fW864PJILC7mu
	dc4XjhrC5YSTe2wggSZjRIMNWTertomR2WBEwLuKPoagm2cXPnXn7/OFLnr72Vq2JxTAwwKCAE37s
	81MlO5ZUejGdW+9aYHaIX08UqRSUDccyk8+7cgIccVqHpDFFl9urAGfufGNETicoJvu8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156296-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156296: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:host-ping-check-xen:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-xl:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=23859ae44402f4d935b9ee548135dd1e65e2cbf4
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 16:16:55 +0000

flight 156296 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156296/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 10 host-ping-check-xen      fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-amd64-amd64-dom0pvh-xl-intel 20 guest-localmigrate/x10 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl          11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                23859ae44402f4d935b9ee548135dd1e65e2cbf4
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   90 days
Failing since        152366  2020-08-01 20:49:34 Z   89 days  151 attempts
Testing same since   156296  2020-10-29 13:44:28 Z    1 days    1 attempts

------------------------------------------------------------
3380 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            fail    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 641769 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 16:38:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 16:38:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16055.39313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYXQR-0005Z7-3a; Fri, 30 Oct 2020 16:38:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16055.39313; Fri, 30 Oct 2020 16:38:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYXQR-0005Z0-0K; Fri, 30 Oct 2020 16:38:51 +0000
Received: by outflank-mailman (input) for mailman id 16055;
 Fri, 30 Oct 2020 16:38:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYXQQ-0005YS-2t
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 16:38:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79eff082-3d0f-4fcb-b135-455fa62458c7;
 Fri, 30 Oct 2020 16:38:43 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYXQI-0005Ta-KE; Fri, 30 Oct 2020 16:38:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYXQI-0001ih-Ad; Fri, 30 Oct 2020 16:38:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYXQI-00039b-A8; Fri, 30 Oct 2020 16:38:42 +0000
X-Inumbo-ID: 79eff082-3d0f-4fcb-b135-455fa62458c7
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OFqShuo/caHVvqAqgyN4eB3Mvog/uld8kPsvWT3iaeo=; b=Kwb12axDChh65You/1KQy1pVsT
	z+EUzHFrk18ynPql+f1b/rMIg6EnXeSdNV2V1/r7Uz+yYeNowzex6Kc29diScgXMJpBmTMH/VOaqu
	hGzYC1rWVKH459j0AoyGRMHzwGKeWONSKrg9gYo6hrBRV2EhFthc2ttVUJg2369meuJg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156319-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156319: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ca56b06043bb4241eeb0a41a60daffb1408a08d5
X-Osstest-Versions-That:
    xen=6e2ee3dfd660d9fde96243da7d565244b4d2f164
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 16:38:42 +0000

flight 156319 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156319/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ca56b06043bb4241eeb0a41a60daffb1408a08d5
baseline version:
 xen                  6e2ee3dfd660d9fde96243da7d565244b4d2f164

Last test of basis   156310  2020-10-30 00:01:28 Z    0 days
Testing same since   156319  2020-10-30 14:02:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6e2ee3dfd6..ca56b06043  ca56b06043bb4241eeb0a41a60daffb1408a08d5 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 16:43:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 16:43:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16061.39325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYXUv-0006Px-MG; Fri, 30 Oct 2020 16:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16061.39325; Fri, 30 Oct 2020 16:43:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYXUv-0006Pq-IO; Fri, 30 Oct 2020 16:43:29 +0000
Received: by outflank-mailman (input) for mailman id 16061;
 Fri, 30 Oct 2020 16:43:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYXUu-0006Pl-BF
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 16:43:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8df15e29-fa15-4355-a356-17c001ddae72;
 Fri, 30 Oct 2020 16:43:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8A019B8FA;
 Fri, 30 Oct 2020 16:43:26 +0000 (UTC)
X-Inumbo-ID: 8df15e29-fa15-4355-a356-17c001ddae72
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604076206;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XHbYctlhnOI+j8EwYwcg+4LJQJ2Xt2wV9ITG9UwtcoM=;
	b=jNRlhrywE+vhLbU52w+miGLN2dhFXfyOCtx6k0djPC3Jdj1AyhayEZEyZhSuxR8ex4796K
	0xi86yaCXjtWfvVVt8BwWzOS1KYCgISG6Gdc9P7pFYIFTeMeSFJJIDYJuwgpbNq+bbFEjI
	lIEbJP6U4KiTJyN/q7cnuATi0+N8iAo=
Subject: Re: [PATCH 5/5] x86 / iommu: create a dedicated pool of page-table
 pages
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-6-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0729b2b0-cd72-e16c-3ba6-89a86d2db8ac@suse.com>
Date: Fri, 30 Oct 2020 17:43:25 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201005094905.2929-6-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.10.2020 11:49, Paul Durrant wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2304,7 +2304,9 @@ int domain_relinquish_resources(struct domain *d)
>  
>      PROGRESS(iommu_pagetables):
>  
> -        ret = iommu_free_pgtables(d);
> +        iommu_free_pgtables(d);
> +
> +        ret = iommu_set_allocation(d, 0);
>          if ( ret )
>              return ret;

There doesn't appear to be a need to call iommu_free_pgtables()
more than once - how about moving it immediately ahead of the
(extended) case label?

> +static int set_allocation(struct domain *d, unsigned int nr_pages,
> +                          bool allow_preempt)

Why the allow_preempt parameter when the sole caller passes
"true"?

> +/*
> + * Some IOMMU mappings are set up during domain_create() before the tool-
> + * stack has a chance to calculate and set the appropriate page-table
> + * allocation. A hard-coded initial allocation covers this gap.
> + */
> +#define INITIAL_ALLOCATION 256

How did you arrive at this number? IOW how many pages do we
need in reality, and how much leeway have you added in?

As to the tool stack - why would it "have a chance" to do the
necessary calculations only pretty late? I wonder whether the
intended allocation wouldn't better be part of struct
xen_domctl_createdomain, without the need for a new sub-op.
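The quota scheme under discussion can be illustrated outside Xen with a
small standalone model: the pool's size is set up front, and the
allocator only ever draws from it. Everything below, names included, is
an illustrative sketch rather than the patch's actual code; a 4096-byte
malloc() stands in for a page:

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Toy model of a per-domain page-table pool with a quota:
 * set_allocation() grows or shrinks the free list to exactly nr_pages,
 * and alloc_pgtable() only ever draws from that list, so the "domain"
 * can never exceed its quota.
 */
struct pool {
    void **free_pages;
    unsigned int nr_free;
};

static int set_allocation(struct pool *p, unsigned int nr_pages)
{
    while (p->nr_free < nr_pages) {
        void **grown = realloc(p->free_pages,
                               (p->nr_free + 1) * sizeof(*grown));

        if (!grown)
            return -1;
        p->free_pages = grown;
        p->free_pages[p->nr_free] = malloc(4096);
        if (!p->free_pages[p->nr_free])
            return -1;
        p->nr_free++;
    }
    while (p->nr_free > nr_pages)            /* e.g. nr_pages == 0 on */
        free(p->free_pages[--p->nr_free]);   /* domain destruction    */
    return 0;
}

static void *alloc_pgtable(struct pool *p)
{
    /* NULL here means the quota is exhausted, not an OOM condition. */
    return p->nr_free ? p->free_pages[--p->nr_free] : NULL;
}
```

In the series, XEN_DOMCTL_IOMMU_SET_ALLOCATION plays the role of
set_allocation() above, with INITIAL_ALLOCATION covering the window
before the toolstack sets the real quota.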

> @@ -265,38 +350,45 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
>          return;
>  }
>  
> -int iommu_free_pgtables(struct domain *d)
> +void iommu_free_pgtables(struct domain *d)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> -    struct page_info *pg;
> -    unsigned int done = 0;
>  
> -    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
> -    {
> -        free_domheap_page(pg);
> +    spin_lock(&hd->arch.pgtables.lock);
>  
> -        if ( !(++done & 0xff) && general_preempt_check() )
> -            return -ERESTART;
> -    }
> +    page_list_splice(&hd->arch.pgtables.list, &hd->arch.pgtables.free_list);
> +    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
>  
> -    return 0;
> +    spin_unlock(&hd->arch.pgtables.lock);
>  }
>  
>  struct page_info *iommu_alloc_pgtable(struct domain *d)
>  {
>      struct domain_iommu *hd = dom_iommu(d);
> -    unsigned int memflags = 0;
>      struct page_info *pg;
>      void *p;
>  
> -#ifdef CONFIG_NUMA
> -    if ( hd->node != NUMA_NO_NODE )
> -        memflags = MEMF_node(hd->node);
> -#endif
> +    spin_lock(&hd->arch.pgtables.lock);
>  
> -    pg = alloc_domheap_page(NULL, memflags);
> + again:
> +    pg = page_list_remove_head(&hd->arch.pgtables.free_list);
>      if ( !pg )
> +    {
> +        /*
> +         * The hardware and quarantine domains are not subject to a quota
> +         * so create page-table pages on demand.
> +         */
> +        if ( is_hardware_domain(d) || d == dom_io )
> +        {
> +            int rc = create_pgtable(d);
> +
> +            if ( !rc )
> +                goto again;

This gives the appearance of a potentially infinite loop. It is
not one, because the lock is being held, but I still wonder
whether that impression couldn't be avoided by a slightly
different code structure.

Also, a downside of this is that the number of pages used by
hwdom can now never shrink.
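One possible restructuring that keeps the single replenish attempt but
avoids the backwards goto might look like this. It is a standalone toy
with stand-in helpers and data structures (a fixed-size array for the
free list, malloc() for a page), not the patch itself:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* A minimal free list standing in for hd->arch.pgtables.free_list. */
static void *free_list[16];
static unsigned int nr_free;
static bool unlimited = true;  /* hardware/quarantine domain: no quota */

static void *pop_free_page(void)           /* page_list_remove_head() */
{
    return nr_free ? free_list[--nr_free] : NULL;
}

static int create_pgtable(void)            /* push one fresh page */
{
    void *pg;

    if (nr_free == 16 || !(pg = malloc(4096)))
        return -1;
    free_list[nr_free++] = pg;
    return 0;
}

/*
 * At most one replenish attempt, written without a backwards goto:
 * either create_pgtable() succeeds and the retry pop finds the page it
 * just added, or we return NULL.  This shape makes the bounded number
 * of retries explicit.
 */
static void *get_pgtable_page(void)
{
    void *pg = pop_free_page();

    if (!pg && unlimited && !create_pgtable())
        pg = pop_free_page();

    return pg;
}
```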

> @@ -306,7 +398,6 @@ struct page_info *iommu_alloc_pgtable(struct domain *d)
>  
>      unmap_domain_page(p);
>  
> -    spin_lock(&hd->arch.pgtables.lock);
>      page_list_add(pg, &hd->arch.pgtables.list);
>      spin_unlock(&hd->arch.pgtables.lock);

You want to drop the lock before the map/clear/unmap, and then
re-acquire it. Or, on the assumption that putting it on the
list earlier is fine (which I think it is), move the other two
lines here up as well.
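The second option, restated as a standalone toy (a pthread mutex stands
in for Xen's spinlock, a memset() for the map/clear/unmap, and all names
are illustrative): the page is published on the tracking list while the
lock is still held, and the clearing happens outside the critical
section. This assumes, as the suggestion does, that it is fine for the
page to appear on the list before it has been cleared:

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

static pthread_mutex_t pgtables_lock = PTHREAD_MUTEX_INITIALIZER;
static void *pgtables_list[16];
static unsigned int nr_pgtables;

/*
 * Publish the freshly allocated "page" on the tracking list under the
 * lock, then do the (comparatively slow) clearing outside the critical
 * section, so the lock is never held across the map/clear/unmap.
 */
static void *alloc_pgtable(void)
{
    void *pg = malloc(4096);

    if (!pg)
        return NULL;

    pthread_mutex_lock(&pgtables_lock);
    if (nr_pgtables == 16) {
        pthread_mutex_unlock(&pgtables_lock);
        free(pg);
        return NULL;
    }
    pgtables_list[nr_pgtables++] = pg;     /* on the list early */
    pthread_mutex_unlock(&pgtables_lock);

    memset(pg, 0, 4096);                   /* map/clear/unmap stand-in */
    return pg;
}
```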

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 16:45:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 16:45:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16065.39337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYXX6-0006Ye-2I; Fri, 30 Oct 2020 16:45:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16065.39337; Fri, 30 Oct 2020 16:45:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYXX5-0006YX-VW; Fri, 30 Oct 2020 16:45:43 +0000
Received: by outflank-mailman (input) for mailman id 16065;
 Fri, 30 Oct 2020 16:45:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2BB6=EF=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1kYXX4-0006YS-WC
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 16:45:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a62748b3-0937-4bfb-ad0b-5d504495e643;
 Fri, 30 Oct 2020 16:45:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1D331AFC8;
 Fri, 30 Oct 2020 16:45:41 +0000 (UTC)
X-Inumbo-ID: a62748b3-0937-4bfb-ad0b-5d504495e643
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604076341;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IVwYz1mmi+3+Dv477cF609gV4yrRxlQ26ujOAA6fyEk=;
	b=uyiMOurHJDmGckxgMKbtxr1aUEyaD0vFisIaYWsS99LKxuZacgv+MWarI0USXqS+iMARqx
	xgukliPAmsKsqkBi3YSwWN8ZZDtk62CYYt5u69mGPWsvQ//pVH6N5nPBRsnhQzwN8CwulD
	mDZW/v5Ga/tJU+LIRNri/WLAE0GtU0M=
Subject: Re: [PATCH 3/5] libxl / iommu / domctl: introduce
 XEN_DOMCTL_IOMMU_SET_ALLOCATION...
To: Paul Durrant <paul@xen.org>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <20201005094905.2929-1-paul@xen.org>
 <20201005094905.2929-4-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d4e9b1ff-cd0e-c90c-ff00-09f68c358617@suse.com>
Date: Fri, 30 Oct 2020 17:45:40 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <20201005094905.2929-4-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.10.2020 11:49, Paul Durrant wrote:

Just two nits, in case the op is really needed:

> --- a/xen/drivers/passthrough/iommu.c
> +++ b/xen/drivers/passthrough/iommu.c
> @@ -515,6 +515,14 @@ static int iommu_ctl(
>  
>      switch ( ctl->op )
>      {
> +    case XEN_DOMCTL_IOMMU_SET_ALLOCATION:
> +    {
> +        struct xen_domctl_iommu_set_allocation *set_allocation =
> +            &ctl->u.set_allocation;

const please, or drop the local variable.

> +        rc = iommu_set_allocation(d, set_allocation->nr_pages);
> +        break;
> +    }
>      default:

Blank line above here please.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 17:13:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 17:13:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16074.39349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYXxs-0000jp-5V; Fri, 30 Oct 2020 17:13:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16074.39349; Fri, 30 Oct 2020 17:13:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYXxs-0000ji-21; Fri, 30 Oct 2020 17:13:24 +0000
Received: by outflank-mailman (input) for mailman id 16074;
 Fri, 30 Oct 2020 17:13:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fiM=EF=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kYXxq-0000jd-JK
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 17:13:22 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ff9ca925-3ee7-401a-97f1-7e1125e2dc50;
 Fri, 30 Oct 2020 17:13:21 +0000 (UTC)
Received: from mail-ed1-f71.google.com (mail-ed1-f71.google.com
 [209.85.208.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-554-mFkj9phpOayQbYNxPQng0A-1; Fri, 30 Oct 2020 13:13:16 -0400
Received: by mail-ed1-f71.google.com with SMTP id bc27so2879577edb.18
 for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 10:13:15 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id rn28sm2612035ejb.22.2020.10.30.10.13.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 30 Oct 2020 10:13:13 -0700 (PDT)
X-Inumbo-ID: ff9ca925-3ee7-401a-97f1-7e1125e2dc50
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604078001;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DuNzb5Kn5X9pUmStEj+JKnJyNB4NfLrBdeM6NHGeIbs=;
	b=MSeG/W5BFgaAPjelMz4G385FsavPPQo+gqZuFo+/AR0O2gOUPn0plCcsO6Ysduc6xRWusw
	LoC1jVp0X/sChwyuxPSCMeKb6u0HmdROEnxl3EyJHIw5Yibp+8a3ZrlHnU5BSshrWMJ2W8
	N+WlGRYMRS9S1oqx/Q5rZedueRjLVQ0=
X-MC-Unique: mFkj9phpOayQbYNxPQng0A-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=DuNzb5Kn5X9pUmStEj+JKnJyNB4NfLrBdeM6NHGeIbs=;
        b=Iwpv+mIsQHBrIH3sTTLqNVD6Lu2YTtbvPuCf29zATxxSccYsdBIXJZV6PidTe5QGhw
         BWmOLPBp5UWbSFzlCLdm4/3APd0aHMONtYuHvB+kscrXRlpJBdqoIoa1psJXFs9f//6n
         jv5IV0J/amGj2UqIx7rsP/lElmvkoWh46TBOju+PTEJJVcbZ2G5v0ho/7X1L/Va824Qh
         xaH8dfl+uZ/GB+3MjihejgypqcZQpmoE0E/zOvWP3gtLGAXNIDJYYgUMbvpkb2X9bsus
         CbWYP62YBiLHYoFc7wb7tZKWeI+/bHxw15rszBegNbwwL6UN+rJiwLnriNJiajFMk2Wn
         02TA==
X-Gm-Message-State: AOAM533fN7uf0r7pyhUQo+WdFiGX/6quwtil7gjv2WOI5LL/BPcUf5m3
	mEq8v1L+t7x0LwstMSd8T/vtlCxcuvCiAdZ4hDpHyA9+OHd/H67wPhCoP/WO6pkx7C/IGIt59c7
	OcUQ0s4/rVuLochk7jFuPSVOaFKM=
X-Received: by 2002:a17:906:3ed0:: with SMTP id d16mr3611019ejj.477.1604077994632;
        Fri, 30 Oct 2020 10:13:14 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxNrqw+IPZ72SgiHrvzWBvPiDF1WXxJhZYdZEXFTddoFS+f0M1aGXH70Zz6DFO5MzIbmGeOKA==
X-Received: by 2002:a17:906:3ed0:: with SMTP id d16mr3610991ejj.477.1604077994450;
        Fri, 30 Oct 2020 10:13:14 -0700 (PDT)
Subject: Re: --enable-xen on gitlab CI? (was Re: [PATCH 09/36] qdev: Make
 qdev_get_prop_ptr() get Object* arg)
To: Eduardo Habkost <ehabkost@redhat.com>,
 =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?= <marcandre.lureau@gmail.com>,
 Thomas Huth <thuth@redhat.com>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
Cc: Wainer dos Santos Moschetta <wainersm@redhat.com>,
 QEMU <qemu-devel@nongnu.org>, Matthew Rosato <mjrosato@linux.ibm.com>,
 Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 "open list:Block layer core" <qemu-block@nongnu.org>,
 Stefan Berger <stefanb@linux.vnet.ibm.com>,
 David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Alex Williamson <alex.williamson@redhat.com>, John Snow <jsnow@redhat.com>,
 Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>,
 "Daniel P. Berrange" <berrange@redhat.com>, Cornelia Huck
 <cohuck@redhat.com>, Qemu-s390x list <qemu-s390x@nongnu.org>,
 Max Reitz <mreitz@redhat.com>, Igor Mammedov <imammedo@redhat.com>
References: <20201029220246.472693-1-ehabkost@redhat.com>
 <20201029220246.472693-10-ehabkost@redhat.com>
 <CAJ+F1CKqo3D20=qSAovVKWCGz4otctaWnGC0O5p-Z1ZG9Pj_Mw@mail.gmail.com>
 <20201030113516.GP5733@habkost.net>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <7645972e-5cad-6511-b057-bd595b91c4aa@redhat.com>
Date: Fri, 30 Oct 2020 18:13:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201030113516.GP5733@habkost.net>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30/10/20 12:35, Eduardo Habkost wrote:
> 
> What is necessary to make sure we have a CONFIG_XEN=y job in
> gitlab CI?  Maybe just including xen-devel in some of the
> container images is enough?

Fedora already has it, but build-system-fedora does not include
x86_64-softmmu.

Paolo



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 17:19:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 17:19:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16080.39361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYY3J-0000wh-Q2; Fri, 30 Oct 2020 17:19:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16080.39361; Fri, 30 Oct 2020 17:19:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYY3J-0000wa-N9; Fri, 30 Oct 2020 17:19:01 +0000
Received: by outflank-mailman (input) for mailman id 16080;
 Fri, 30 Oct 2020 17:19:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYY3I-0000wS-3C
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 17:19:00 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2aa8fcd0-177f-414e-8ecd-b505d8cd9a3e;
 Fri, 30 Oct 2020 17:18:58 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYY3F-0006JC-3U; Fri, 30 Oct 2020 17:18:57 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYY3E-0002zs-RT; Fri, 30 Oct 2020 17:18:56 +0000
X-Inumbo-ID: 2aa8fcd0-177f-414e-8ecd-b505d8cd9a3e
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vv4qDEHAvxSnK2RpQGc3twd2Vr4OyqTc61RY7eNV17M=; b=gSfNXVI5IbHQbi4y3oIM3Z+9gQ
	c8UN7rwwcW4UZEUA0swi4Nea54TV9wnzoPrnM9+bZYakadfzanXmmolObTlXrQSBH8qRCIkdDzm/O
	2aoHN1To2g2Sme4DJiFg+yntw7fS9umxGZ0mqvhqchRioXl/lI1Leep7fJOM6UFHzaqg=;
Subject: Re: [XEN PATCH v1] xen/arm : Add support for SMMUv3 driver
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Rahul Singh <rahul.singh@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: "bertrand.marquis@arm.com" <bertrand.marquis@arm.com>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <b085e894773842ac320b818aa6f84289d0a128ed.1602591365.git.rahul.singh@arm.com>
 <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <76593217-c7e2-2963-9cbe-d6cc38830710@xen.org>
Date: Fri, 30 Oct 2020 17:18:54 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <09cfc160-3490-0aeb-f872-04fb4ce04364@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Oleksandr,

On 30/10/2020 10:44, Oleksandr Andrushchenko wrote:
> On 10/20/20 6:25 PM, Rahul Singh wrote:
>> Add support for ARM architected SMMUv3 implementations. It is based on
>> the Linux SMMUv3 driver.
>>
>> Major differences from the Linux driver are as follows:
>> 1. Only Stage-2 translation is supported as compared to the Linux driver
>>      that supports both Stage-1 and Stage-2 translations.
> 
> First of all thank you for the efforts!
> 
> I tried the patch with QEMU and would like to know if my understanding is correct
> 
> that this combination will not work as of now:
> 
> (XEN) SMMUv3: /smmuv3@9050000: SMMUv3: DT value = eventq
> (XEN) Data Abort Trap. Syndrome=0x1940010
> (XEN) Walking Hypervisor VA 0x40031000 on CPU0 via TTBR 0x00000000b8469000
> (XEN) 0TH[0x0] = 0x00000000b8468f7f
> 
> [snip]
> 
> If this is expected then is there any plan to make QEMU work as well?
> 
> I see [1] says that "Only stage 1 and AArch64 PTW are supported." on QEMU side.

Just for clarification, you are trying to boot Xen on QEMU, right?

You might be able to use the stage-1 page-tables to isolate each device 
in Xen. However, I don't think you will be able to share the P2M, because 
the page-table layout differs between stage-1 and stage-2.

> 
> 
> We are interested in QEMU/SMMUv3 as a flexible platform for PCI passthrough
> 
> implementation, so it could allow testing different setups and configurations with QEMU.

I would recommend getting an SMMU that supports stage-2 page-tables.

Regardless of that, I think Xen should be able to report that the SMMU 
is not supported, rather than crashing.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 17:46:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 17:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16092.39373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYYU8-0003Uj-4A; Fri, 30 Oct 2020 17:46:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16092.39373; Fri, 30 Oct 2020 17:46:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYYU8-0003Uc-0U; Fri, 30 Oct 2020 17:46:44 +0000
Received: by outflank-mailman (input) for mailman id 16092;
 Fri, 30 Oct 2020 17:46:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAhc=EF=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1kYYU6-0003UX-T7
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 17:46:43 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (unknown
 [40.107.94.59]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ddd00798-58a8-4010-ae04-2bda79ae93fa;
 Fri, 30 Oct 2020 17:46:41 +0000 (UTC)
Received: from SN4PR0801CA0001.namprd08.prod.outlook.com
 (2603:10b6:803:29::11) by CY4PR02MB2181.namprd02.prod.outlook.com
 (2603:10b6:903:e::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Fri, 30 Oct
 2020 17:46:39 +0000
Received: from SN1NAM02FT005.eop-nam02.prod.protection.outlook.com
 (2603:10b6:803:29:cafe::e2) by SN4PR0801CA0001.outlook.office365.com
 (2603:10b6:803:29::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 30 Oct 2020 17:46:39 +0000
Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by
 SN1NAM02FT005.mail.protection.outlook.com (10.152.72.117) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 17:46:38 +0000
Received: from xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) by
 xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1913.5; Fri, 30 Oct 2020 10:46:38 -0700
Received: from smtp.xilinx.com (172.19.127.96) by
 xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server id
 15.1.1913.5 via Frontend Transport; Fri, 30 Oct 2020 10:46:38 -0700
Received: from [10.23.121.44] (port=49713 helo=localhost)
 by smtp.xilinx.com with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1kYYU2-0003DE-8m; Fri, 30 Oct 2020 10:46:38 -0700
X-Inumbo-ID: ddd00798-58a8-4010-ae04-2bda79ae93fa
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MRKbVBZJHnzzV+1EKop2FCGsodcGW8aJtBc/W8yJq11ay25EVzxRpt0RBskUzxmWWjqXNGLJNUNVwIfrbbXrQSu9zO9CD64ADWOygKaYAEi2hbCQbYtf2p1FAIjKxpw2sDK9VNh8nW1kspfIwbf1DmaqaT7FDjjH/21itXBQONGMkG/qFyW5H8cdCtKxVhwSc4G9ZLz40maUPBOODgpI+SjxbLVgCyrQsDvLkXSh3jmL6TyI57w1CVmur2YLzN4daSKRxO5IA6GRWj0vgdyNsAWMB4tet1kDYIz4J8KPXFDg7CbrUHakUrx94osR2h1sSma/jTE2oI7hRp7val6H3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4Y5rIkKO3U+ca+MQTaMeN8Vh2Q1LZv3NMpqdJBybCrk=;
 b=YEWTAbGrs+1DzZEPrBuBV1wnFVfVgOY3/6f8Gp80y115sZWTAYagl3A6h+f9Wjqt+iKlAoNaAHW4tfuPF/qslP2/1AZcrLYVNTwKcYi2nPLZOgb0F0xK4fyHjkMXJkDPfb6GxrN9YmAOyFof4pg/dtnEn4blIewhHs8unLVPxjfWGzrCWZjba6Ef0FaATy/9cUFN71SQNqKiGgBcXLSEk6opJVyyxxTBFBd2T6JGLH2Fp4AWf4iGzDvBVBvzmw1+UiYSZLwhQC2UHIfUC2PdDKmGpAjKgNpXm39mbKxOqruNpr1PFEio1Ualp92q/DkR89alvDiqIe87IiBxo0cMVQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.62.198) smtp.rcpttodomain=linaro.org smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=4Y5rIkKO3U+ca+MQTaMeN8Vh2Q1LZv3NMpqdJBybCrk=;
 b=FSHUQjCMASOELaGJLgzGGLQHpVqucSGvIqR8rJLdwZptucZXHE0DcTyuzxamPNCAPVOIJlY6cQaMpTvvNniLNM9lRgWWLT21rmz3j/7tqP/CcBNresEbPzdMO0s3p839J4VJozJFVA+tIcRvU2rQxwZTjxpotheQ/YHpu/OMaSo=
Received: from SN4PR0801CA0001.namprd08.prod.outlook.com
 (2603:10b6:803:29::11) by CY4PR02MB2181.namprd02.prod.outlook.com
 (2603:10b6:903:e::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.21; Fri, 30 Oct
 2020 17:46:39 +0000
Received: from SN1NAM02FT005.eop-nam02.prod.protection.outlook.com
 (2603:10b6:803:29:cafe::e2) by SN4PR0801CA0001.outlook.office365.com
 (2603:10b6:803:29::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3499.18 via Frontend
 Transport; Fri, 30 Oct 2020 17:46:39 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198)
 smtp.mailfrom=xilinx.com; linaro.org; dkim=none (message not signed)
 header.d=none;linaro.org; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.62.198 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.62.198; helo=xsj-pvapexch01.xlnx.xilinx.com;
Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by
 SN1NAM02FT005.mail.protection.outlook.com (10.152.72.117) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.3520.15 via Frontend Transport; Fri, 30 Oct 2020 17:46:38 +0000
Received: from xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) by
 xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1913.5; Fri, 30 Oct 2020 10:46:38 -0700
Received: from smtp.xilinx.com (172.19.127.96) by
 xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server id
 15.1.1913.5 via Frontend Transport; Fri, 30 Oct 2020 10:46:38 -0700
Received: from [10.23.121.44] (port=49713 helo=localhost)
	by smtp.xilinx.com with esmtp (Exim 4.90)
	(envelope-from <stefano.stabellini@xilinx.com>)
	id 1kYYU2-0003DE-8m; Fri, 30 Oct 2020 10:46:38 -0700
Date: Fri, 30 Oct 2020 10:46:37 -0700
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Takahiro Akashi <takahiro.akashi@linaro.org>
CC: Stefano Stabellini <stefano.stabellini@xilinx.com>, Alex Bennée
	<alex.bennee@linaro.org>, Masami Hiramatsu <masami.hiramatsu@linaro.org>,
	<ian.jackson@eu.citrix.com>, <wl@xen.org>, <anthony.perard@citrix.com>,
	<xen-devel@lists.xenproject.org>
Subject: Re: BUG: libxl vuart build order
In-Reply-To: <20201030025157.GA18567@laputa>
Message-ID: <alpine.DEB.2.21.2010301045250.12247@sstabellini-ThinkPad-T480s>
References: <CAB5YjtCwbvYMVg-9YXjSFtC8KvjkJuYhJFSCHrJaRUKfg4NHYA@mail.gmail.com> <alpine.DEB.2.21.2010261634000.12247@sstabellini-ThinkPad-T480s> <20201027000214.GA14449@laputa> <20201028014105.GA11856@laputa> <alpine.DEB.2.21.2010281437010.12247@sstabellini-ThinkPad-T480s>
 <20201029114705.GA291577@laputa> <alpine.DEB.2.21.2010291704180.12247@sstabellini-ThinkPad-T480s> <20201030025157.GA18567@laputa>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed;
	boundary="8323329-903762955-1604079993=:12247"
Content-ID: <alpine.DEB.2.21.2010301046340.12247@sstabellini-ThinkPad-T480s>
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fbea643a-e3e1-4fc6-8f35-08d87cfbc10b
X-MS-TrafficTypeDiagnostic: CY4PR02MB2181:
X-Microsoft-Antispam-PRVS:
	<CY4PR02MB2181702DDA8E8EF9ECEC27D5A0150@CY4PR02MB2181.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	MsY5NurbGIguzYwh4+77ieSNB/xM29WEGLG/oIWQUWb7DzsnWdIL6IbTJPSvcpg/8T1oPiiuaEgEl5bF+sKYkq22BzAnVr6jC8WUDfsUKHbd+P8UDePX+D2L7hxM87Y9nxQoMv58CVhPFIJIOaYFm0t439L7oIZNO6YsvITRa+jAz3OWGzeqVYP35e4d6Ff4eGWuBDWyA2Hbx9x5zqWyvDRW6JIboVKRBGNozf7cGRTyXodNKgJAuSP+UA6uYEzsimzaHqm92rjAcOdZr2mBdZ0kL+JIj0L0XLVVjwR+K8aMKgyvrOTBartTYWiwKGGrqALgOQdyOocxyjfWRDqM3U0s/vQN3TEsE514GtpR1m8oguYbjYf69A/fMT3pAeteO7vT0Ep0xY6wE3Zs4uoNCw==
X-Forefront-Antispam-Report:
	CIP:149.199.62.198;CTRY:US;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:xsj-pvapexch01.xlnx.xilinx.com;PTR:unknown-62-198.xilinx.com;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(376002)(346002)(136003)(396003)(46966005)(47076004)(7636003)(70206006)(426003)(5660300002)(33964004)(9686003)(2906002)(44832011)(33716001)(26005)(4326008)(54906003)(478600001)(356005)(186003)(9786002)(36906005)(70586007)(336012)(8676002)(6916009)(82740400003)(8936002)(316002)(82310400003)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Oct 2020 17:46:38.9521
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fbea643a-e3e1-4fc6-8f35-08d87cfbc10b
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c;Ip=[149.199.62.198];Helo=[xsj-pvapexch01.xlnx.xilinx.com]
X-MS-Exchange-CrossTenant-AuthSource:
	SN1NAM02FT005.eop-nam02.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY4PR02MB2181

--8323329-903762955-1604079993=:12247
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2010301046341.12247@sstabellini-ThinkPad-T480s>

On Fri, 30 Oct 2020, Takahiro Akashi wrote:
> Hi Stefano,
> 
> On Thu, Oct 29, 2020 at 07:03:28PM -0700, Stefano Stabellini wrote:
> > + xen-devel and libxl maintainers
> > 
> > In short, there is a regression in libxl with the ARM vuart introduced
> > by moving ARM guests to the PVH build.
> > 
> > 
> > On Thu, 29 Oct 2020, Takahiro Akashi wrote:
> > > On Wed, Oct 28, 2020 at 02:44:16PM -0700, Stefano Stabellini wrote:
> > > > On Wed, 28 Oct 2020, Takahiro Akashi wrote:
> > > > > On Tue, Oct 27, 2020 at 09:02:14AM +0900, Takahiro Akashi wrote:
> > > > > > On Mon, Oct 26, 2020 at 04:37:30PM -0700, Stefano Stabellini wrote:
> > > > > > > 
> > > > > > > On Mon, 26 Oct 2020, Takahiro Akashi wrote:
> > > > > > > > Stefano,
> > > > > > > > 
> > > > > > > > # I'm afraid that I have already bothered you with a lot of questions.
> > > > > > > > 
> > > > > > > > When I looked at Xen's vpl011 implementation, I found that the
> > > > > > > > CR (and LCR_H) registers are not supported (a trap may cause a data abort).
> > > > > > > > On the other hand, Linux's pl011 driver, for example, does
> > > > > > > > access the CR (and LCR_H) registers.
> > > > > > > > So I guess that Linux won't be able to use the pl011 on a Xen guest VM
> > > > > > > > if vuart = "sbsa_uart".
> > > > > > > > 
> > > > > > > > Is this a known issue or do I miss anything?
> > > > > > > 
> > > > > > > Linux should definitely be able to use it, and in fact, I am using it
> > > > > > > with Linux in my test environment.
> > > > > > > 
> > > > > > > I think the confusion comes from the name "vpl011": it is in fact not a
> > > > > > > full PL011 UART, but an SBSA UART.
> > > > > > 
> > > > > > Yeah, I have noticed it.
> > > > > > 
> > > > > > > SBSA UART only implements a subset of
> > > > > > > the PL011 registers. The compatible string is "arm,sbsa-uart", also see
> > > > > > > drivers/tty/serial/amba-pl011.c:sbsa_uart_probe.
> > > > > > 
> > > > > > Looking closely into the details of the implementation, I found
> > > > > > that all accesses to unimplemented registers, including
> > > > > > CR, are deliberately avoided in the SBSA part of the Linux driver.
> > > > > 
> > > > > So I'm now trying to implement an "sbsa-uart" driver for U-Boot
> > > > > by modifying the existing pl011 driver.
> > > > > (Please note that the current Xen-ized U-Boot utilises a para-virtualized
> > > > > console, i.e. one based on HVM_PARAM_CONSOLE_PFN.)
> > > > > 
> > > > > So far all my attempts have failed.
> > > > > 
> > > > > There are a couple of problems, and one of them is how we can
> > > > > access the vpl011 port (from dom0).
> > > > > What I did is:
> > > > > - modify U-Boot's pl011 driver
> > > > >   (I'm sure that the driver correctly handles a vpl011 device
> > > > >   with regard to accessing the proper set of registers.)
> > > > > - start U-Boot guest with "vuart=sbsa_uart" by
> > > > >     xl create uboot.cfg -c
> > > > > 
> > > > > Then I have seen almost nothing on the screen.
> > > > > Digging into the vpl011 implementation, I found that all the characters
> > > > > written to the DR register will be directed to a "backend domain" if a guest
> > > > > VM is launched by the xl command.
> > > > > (In the dom0less case, the backend seems to be Xen itself.)
> > > > > 
> > > > > As a silly experiment, I modified domain_vpl011_init() to always create
> > > > > a vpl011 device with "backend_in_domain == false".
> > > > > Then I could see more boot messages from U-Boot, but it still fails
> > > > > to work as a console; that is, we lose all output at some point
> > > > > and cannot type any keys (at a command prompt).
> > > > > (That is likely a separate problem on the U-Boot side.)
> > > > > 
> > > > > My first question here is: how can we configure and connect a console
> > > > > in this case?
> > > > > Should "xl create -c" or "xl console -t vuart" simply work?
> > > > 
> > > > "xl create -c" creates a guest and connect to the primary console which
> > > > is the PV console (i.e. HVM_PARAM_CONSOLE_PFN.)
> > > 
> > > So in the case of vuart, it (the console) doesn't work?
> > > (Apparently, "xl create" doesn't take a '-t' option.)
> > > 
> > > > To connect to the emulated sbsa uart you need to pass -t vuart. So yes,
> > > > "xl console -t vuart domain_name" should get you access to the emulated
> > > > sbsa uart. The sbsa uart can also be exposed to dom0less guests; you get
> > > > their output by using CTRL-AAA to switch to the right domU console.
> > > > 
> > > > You can add printks to xen/arch/arm/vpl011.c in Xen to see what's
> > > > happening on the Xen side. vpl011.c is the emulator.
> > > 
> > > I'm sure that write to "REG_DR" register is caught by Xen.
> > > What I don't understand is
> > > if back_in_domain -> no outputs
> > > if !back_in_domain -> can see outputs
> > > 
> > > (As you know, if a guest is created by the xl command, backend_in_domain
> > > is forcibly set to true.)
> > > 
> > > I looked into xenstore and found that "vuart/0/tty" does not exist,
> > > but "console/tty" does.
> > > How can this happen for vuart?
> > > (I clearly specified vuart = "sbsa_uart" in the Xen config.)
> > 
> > It looks like we have a bug :-(
> > 
> > I managed to reproduce the issue. The problem is a race in libxl.
> > 
> > tools/libxc/xc_dom_arm.c:alloc_magic_pages is called first, setting
> > dom->vuart_gfn.  Then, libxl__build_hvm should be setting
> > state->vuart_gfn to dom->vuart_gfn (like libxl__build_pv does) but it
> > doesn't.
> 
> Thank you for the patch.
> I confirmed that the sbsa-uart driver on U-Boot now works.

Excellent!


> === after "xl console -t vuart" ===
> U-Boot 2020.10-00777-g10cf956a26ba (Oct 29 2020 - 19:31:29 +0900) xenguest
> 
> Xen virtual CPU
> Model: XENVM-4.15
> DRAM:  128 MiB
> 
> In:    sbsa-pl011
> Out:   sbsa-pl011
> Err:   sbsa-pl011
> xenguest# dm tree
>  Class     Index  Probed  Driver                Name
> -----------------------------------------------------------
>  root          0  [ + ]   root_driver           root_driver
>  firmware      0  [   ]   psci                  |-- psci
>  serial        0  [ + ]   serial_pl01x          |-- sbsa-pl011
>  pvblock       0  [   ]   pvblock               `-- pvblock
> ===
> 
> If possible, I hope that the "xl create -c" command would accept the "-t vuart"
> option (or automatically select the UART type from the config).

I think a patch to add the "-t" option to "xl create" would be
acceptable, right Anthony?
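And for anyone trying to reproduce this in the meantime, a minimal guest config exercising the emulated UART could look like the sketch below (the name and kernel path are made up; only the vuart line is the relevant one):

```
# uboot.cfg -- hypothetical example config
name   = "uboot-guest"
kernel = "/path/to/u-boot.bin"
memory = 128
vcpus  = 1
vuart  = "sbsa_uart"
```

With that, "xl create uboot.cfg" followed by "xl console -t vuart uboot-guest" attaches to the emulated UART, while "xl create -c" (or plain "xl console") attaches to the PV console.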


> > 
> > ---
> > 
> > libxl: set vuart_gfn in libxl__build_hvm
> > 
> > Setting vuart_gfn was missed when switching ARM guests to the PVH build.
> > Like libxl__build_pv, libxl__build_hvm should set state->vuart_gfn to
> > dom->vuart_gfn.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > 
> > diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> > index f8661e90d4..36fe8915e7 100644
> > --- a/tools/libxl/libxl_dom.c
> > +++ b/tools/libxl/libxl_dom.c
> > @@ -1184,6 +1184,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
> >          LOG(ERROR, "hvm build set params failed");
> >          goto out;
> >      }
> > +    state->vuart_gfn = dom->vuart_gfn;
> >  
> >      rc = hvm_build_set_xs_values(gc, domid, dom, info);
> >      if (rc != 0) {
> 
--8323329-903762955-1604079993=:12247--


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:01:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:01:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16116.39406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYYim-0005TP-1W; Fri, 30 Oct 2020 18:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16116.39406; Fri, 30 Oct 2020 18:01:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYYil-0005TI-U2; Fri, 30 Oct 2020 18:01:51 +0000
Received: by outflank-mailman (input) for mailman id 16116;
 Fri, 30 Oct 2020 18:01:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYYik-0005Sc-NX
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:01:50 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac0646f2-d59d-4140-8722-eee5bf69468f;
 Fri, 30 Oct 2020 18:01:42 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYYic-0007FM-19; Fri, 30 Oct 2020 18:01:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYYib-0005qz-Jc; Fri, 30 Oct 2020 18:01:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYYib-0004Lc-J7; Fri, 30 Oct 2020 18:01:41 +0000
X-Inumbo-ID: ac0646f2-d59d-4140-8722-eee5bf69468f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RLlrdfD7O5humMKuRuBesLRsbeT+NY6VMBQ7UsbANNA=; b=FAaID6pX1Yt9WEurpmpjEo9hS0
	oJNIG5vLrFd8qrVejo+PJSNrpR2earyRI5lwmB4E/wOdNQPDn4DMd1r1L5bDEJgr/sX4X5h/Hoa0W
	shhRy4JI9DCT9sApPbvmMnx9I7vwVkKEkYTVIE6Jj/ytsJZ7ZVqF/E+pXo9xodit0L4A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156314-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156314: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=e9cfbd36c50c3df4ef1db3b3c56c5f8706a710ee
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 18:01:41 +0000

flight 156314 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156314/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              e9cfbd36c50c3df4ef1db3b3c56c5f8706a710ee
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  112 days
Failing since        151818  2020-07-11 04:18:52 Z  111 days  106 attempts
Testing same since   156314  2020-10-30 04:19:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23479 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:10:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:10:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16131.39420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYYrI-0006V6-Ux; Fri, 30 Oct 2020 18:10:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16131.39420; Fri, 30 Oct 2020 18:10:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYYrI-0006Uz-Rj; Fri, 30 Oct 2020 18:10:40 +0000
Received: by outflank-mailman (input) for mailman id 16131;
 Fri, 30 Oct 2020 18:10:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYYrI-0006UL-9d
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:10:40 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b65b3d5-5880-4f65-b712-26da4d6c7de8;
 Fri, 30 Oct 2020 18:10:32 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYYrA-0007Qa-JR; Fri, 30 Oct 2020 18:10:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYYrA-00067Y-6x; Fri, 30 Oct 2020 18:10:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYYrA-0006uj-6C; Fri, 30 Oct 2020 18:10:32 +0000
X-Inumbo-ID: 3b65b3d5-5880-4f65-b712-26da4d6c7de8
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=C3OcmAbQq44PsfVoWNymYk8QEJIs/AIrIfUByOzdjt8=; b=CrOH0VXf0gmOzaEjMgEB41gkXo
	7psCqivk17HQXHwBkbwzr9BnnVOPlRAok1IHU42+wD9EK+wTPiiQg/qd7BawTOUiReC9oEetTd/tX
	MJaReXb2vsuJ/FjI/BIlkrgyyADoKmSBPBKErlk8DYpE95FiIs3t2UvgJXaeabH7nyIw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156301-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-upstream-unstable test] 156301: tolerable FAIL - PUSHED
X-Osstest-Failures:
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-upstream-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=677cbe1324c29294bb1d1b8454b3f214725e40fd
X-Osstest-Versions-That:
    qemuu=ea6d3cd1ed79d824e605a70c3626bc437c386260
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 18:10:32 +0000

flight 156301 qemu-upstream-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156301/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 151544
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 151544
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 151544
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 151544
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 151544
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 151544
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 151544
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                677cbe1324c29294bb1d1b8454b3f214725e40fd
baseline version:
 qemuu                ea6d3cd1ed79d824e605a70c3626bc437c386260

Last test of basis   151544  2020-07-02 15:39:12 Z  120 days
Testing same since   156301  2020-10-29 17:39:09 Z    1 days    1 attempts

------------------------------------------------------------
307 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   ea6d3cd1ed..677cbe1324  677cbe1324c29294bb1d1b8454b3f214725e40fd -> master


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:17:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:17:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16136.39436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYYxP-0006lQ-Qt; Fri, 30 Oct 2020 18:16:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16136.39436; Fri, 30 Oct 2020 18:16:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYYxP-0006lJ-Ny; Fri, 30 Oct 2020 18:16:59 +0000
Received: by outflank-mailman (input) for mailman id 16136;
 Fri, 30 Oct 2020 18:16:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0fiM=EF=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1kYYxN-0006lD-Ta
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:16:58 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 51f61f03-0fa2-4443-bbef-2e9e24278cb2;
 Fri, 30 Oct 2020 18:16:57 +0000 (UTC)
Received: from mail-ed1-f71.google.com (mail-ed1-f71.google.com
 [209.85.208.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-68-nDSVqSHLM5Gz9gxYPRzgGA-1; Fri, 30 Oct 2020 14:16:54 -0400
Received: by mail-ed1-f71.google.com with SMTP id cb27so2960397edb.11
 for <xen-devel@lists.xenproject.org>; Fri, 30 Oct 2020 11:16:54 -0700 (PDT)
Received: from ?IPv6:2001:b07:6468:f312:c8dd:75d4:99ab:290a?
 ([2001:b07:6468:f312:c8dd:75d4:99ab:290a])
 by smtp.gmail.com with ESMTPSA id r24sm3338060eds.67.2020.10.30.11.16.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 30 Oct 2020 11:16:51 -0700 (PDT)
X-Inumbo-ID: 51f61f03-0fa2-4443-bbef-2e9e24278cb2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604081816;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hQ9JbkxvNa0VjLQd06HFpKY/mjumkrtDAKSURNuwMaY=;
	b=RqKX+DP0RS54Yr1lQ/F8Bed33mIAGibN1Yno7dIuYutZz0mrrpoUGmG9qdNfJauA/cPyYt
	uULqyRlrpEPkFas8N65+mTu4LqFsVq8Z7Vp5LWZmJdMiEilYpMF9zUI/UyQubuPeHe2SIy
	49Syu43b6rmf5+egDoxmfA1AERFWD6U=
X-MC-Unique: nDSVqSHLM5Gz9gxYPRzgGA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=hQ9JbkxvNa0VjLQd06HFpKY/mjumkrtDAKSURNuwMaY=;
        b=H/suE1U4/HqGdMIzfxjHwofp33nJXYBM06+8isPvqBz8uzC33RdDFa3/BYolWBKYFc
         hMMS6nzEMDD37pvGbBElulYN+16vk3cRs72rxxDS5Z64KOb1/XwpQahHj7wHt02xvr/F
         y2wU06BaZhZDLyluPhXU/U8yJHaGlmatiQzGPDR0bTeQgHlcVgMP6RKk5bNj3W1FFMCZ
         OmvJe+7ig6Wx0V0wlMdZT9PSGK8VkIQmc/4hA32+xW7T0OKNJv5KEkCnm0RVhkqRxDDe
         JDvvZB7B/18PK8nzRgnvPxM0KsswHHGWwdKuSbg21622Sj7lHnbJAV+wTMlFX9DtT7Nr
         L7nA==
X-Gm-Message-State: AOAM531RgR+9mhPRh6fpddCyOeKS6Dfm8ZAAqjbuBFF/Xis/R/w/6vMj
	/XYgHOjX7id/NxhsAxL7/exBj+XWqja8xc53jRvYvXa1jUYImvnkmbKSwKpbzc8frMeMCNMfUib
	AEqYKGbQwaA1f1V2ye8tEoL9j3JA=
X-Received: by 2002:a17:906:c20f:: with SMTP id d15mr3678297ejz.341.1604081812910;
        Fri, 30 Oct 2020 11:16:52 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJyxORp9yhzCHXkXQ+SV0R3eIJai4qo4teWFQf7D18bUb6YmyTx02WLXvoVEuul2E5KtdAUsMg==
X-Received: by 2002:a17:906:c20f:: with SMTP id d15mr3678275ejz.341.1604081812714;
        Fri, 30 Oct 2020 11:16:52 -0700 (PDT)
Subject: Re: [PATCH] [v2] x86: apic: avoid -Wshadow warning in header
To: David Laight <David.Laight@ACULAB.COM>,
 'Arvind Sankar' <nivedita@alum.mit.edu>, Thomas Gleixner <tglx@linutronix.de>
Cc: 'Arnd Bergmann' <arnd@kernel.org>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "x86@kernel.org" <x86@kernel.org>,
 Arnd Bergmann <arnd@arndb.de>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>,
 Stephen Hemminger <sthemmin@microsoft.com>, "H. Peter Anvin"
 <hpa@zytor.com>, "Rafael J. Wysocki" <rjw@rjwysocki.net>,
 Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>,
 Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>,
 "linux-hyperv@vger.kernel.org" <linux-hyperv@vger.kernel.org>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
 "platform-driver-x86@vger.kernel.org" <platform-driver-x86@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "iommu@lists.linux-foundation.org" <iommu@lists.linux-foundation.org>
References: <20201028212417.3715575-1-arnd@kernel.org>
 <38b11ed3fec64ebd82d6a92834a4bebe@AcuMS.aculab.com>
 <20201029165611.GA2557691@rani.riverdale.lan>
 <93180c2d-268c-3c33-7c54-4221dfe0d7ad@redhat.com>
 <87v9esojdi.fsf@nanos.tec.linutronix.de>
 <20201029213512.GA34524@rani.riverdale.lan>
 <ad73f56e79d249b1b3614bccc85e2ca5@AcuMS.aculab.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <070f590f-b702-35f0-0b6c-c6455f08e9d5@redhat.com>
Date: Fri, 30 Oct 2020 19:16:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <ad73f56e79d249b1b3614bccc85e2ca5@AcuMS.aculab.com>
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=pbonzini@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29/10/20 23:12, David Laight wrote:
>> https://godbolt.org/z/4dzPbM
>>
>> With -fno-strict-aliasing, the compiler reloads the pointer if you write
>> to the start of what it points to, but not if you write to later
>> elements.
> I guess it assumes that global data doesn't overlap.

Yeah, setting

	p = (struct s *) ((char *)&p - 8);

invokes undefined behavior _for a different reason than strict aliasing_
(it's a pointer that is based on "p" but points before its start or
after one byte past its end).  So the compiler assumes that only the
first few bytes of a global can overlap it.

If you change the size of the fields from long to char in the compiler
explorer link above, every field forces a reload of the global.

Paolo



From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:22:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:22:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16144.39449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZ2U-0007cV-G7; Fri, 30 Oct 2020 18:22:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16144.39449; Fri, 30 Oct 2020 18:22:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZ2U-0007cO-Bx; Fri, 30 Oct 2020 18:22:14 +0000
Received: by outflank-mailman (input) for mailman id 16144;
 Fri, 30 Oct 2020 18:22:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYZ2S-0007cJ-MU
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:22:12 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74303daa-b9f6-4838-a41d-8a79d7eed617;
 Fri, 30 Oct 2020 18:22:11 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYZ2D-0007eb-GT; Fri, 30 Oct 2020 18:21:57 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYZ2D-0001Hk-6J; Fri, 30 Oct 2020 18:21:57 +0000
X-Inumbo-ID: 74303daa-b9f6-4838-a41d-8a79d7eed617
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=eGreG2lvXzvJMpkXHnPENV4rs64oUHIRpQ132Y2vEYg=; b=3wNEEJNaaEbnAVSmCZ6UrK37XA
	xFCTc8UrMGeR4xOxVPYmhrYIc8gV3+d8FuJQH5+Pmp7lxvik50gbdXwvvPJirMIan83RnovO1B3Ml
	L9tA2xA1/NasjbQRd6DqYKoxASccLm89cvA+OWZBsvSgesjBF8kg4pa3EpagpyyVvoXY=;
Subject: Re: [PATCH v2 2/7] xen/arm: acpi: The fixmap area should always be
 cleared during failure/unmap
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Rahul.Singh@arm.com, Julien Grall
 <jgrall@amazon.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Wei Xu <xuwei5@hisilicon.com>
References: <20201023154156.6593-1-julien@xen.org>
 <20201023154156.6593-3-julien@xen.org>
 <alpine.DEB.2.21.2010231714510.12247@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <6038f41f-594c-e573-9126-f31291af9c38@xen.org>
Date: Fri, 30 Oct 2020 18:21:53 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010231714510.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 24/10/2020 01:16, Stefano Stabellini wrote:
> On Fri, 23 Oct 2020, Julien Grall wrote:
>>   bool __acpi_unmap_table(const void *ptr, unsigned long size)
>>   {
>>       vaddr_t vaddr = (vaddr_t)ptr;
>> +    unsigned int idx;
>> +
>> +    /* We are only handling fixmap address in the arch code */
>> +    if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
>> +         (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END)) )
> 
> Is it missing "+ PAGE_SIZE"?
> 
>     if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
>          (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) )

Yes it should be + PAGE_SIZE. Do you mind if I fix it on commit?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:28:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:28:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16153.39464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZ89-0007t5-7R; Fri, 30 Oct 2020 18:28:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16153.39464; Fri, 30 Oct 2020 18:28:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZ89-0007sy-4U; Fri, 30 Oct 2020 18:28:05 +0000
Received: by outflank-mailman (input) for mailman id 16153;
 Fri, 30 Oct 2020 18:28:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0MNT=EF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYZ88-0007st-9N
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:28:04 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9725f7b8-f10f-45de-867f-ad036ded4a4f;
 Fri, 30 Oct 2020 18:28:03 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 124EB20702;
 Fri, 30 Oct 2020 18:28:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604082482;
	bh=njaLhnwcnoISGj90YGg4hfeOnhw6M8jJBygy8V9B+oM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=jRVGOKI4+A9p2dlAK0o5iCU+WZ0XYptQFM6ElWutFm+1HEAdwNOrLjFKMHSIAPgXL
	 zd9XxQiMe+PdHyGI78Mzn14+sqfrnL+xa+m9/+9pd+dcKvKFNQTbIi6Xzs+V8pyj8m
	 cXCAQ5o6f93ycRHsJ86rBPRCY4tZnwITNRUVRV6U=
Date: Fri, 30 Oct 2020 11:28:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Rahul.Singh@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Wei Xu <xuwei5@hisilicon.com>
Subject: Re: [PATCH v2 2/7] xen/arm: acpi: The fixmap area should always be
 cleared during failure/unmap
In-Reply-To: <6038f41f-594c-e573-9126-f31291af9c38@xen.org>
Message-ID: <alpine.DEB.2.21.2010301127470.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-3-julien@xen.org> <alpine.DEB.2.21.2010231714510.12247@sstabellini-ThinkPad-T480s> <6038f41f-594c-e573-9126-f31291af9c38@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 30 Oct 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 24/10/2020 01:16, Stefano Stabellini wrote:
> > On Fri, 23 Oct 2020, Julien Grall wrote:
> > >   bool __acpi_unmap_table(const void *ptr, unsigned long size)
> > >   {
> > >       vaddr_t vaddr = (vaddr_t)ptr;
> > > +    unsigned int idx;
> > > +
> > > +    /* We are only handling fixmap address in the arch code */
> > > +    if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
> > > +         (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END)) )
> > 
> > Is it missing "+ PAGE_SIZE"?
> > 
> >     if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
> >          (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) )
> 
> Yes it should be + PAGE_SIZE. Do you mind if I fix it on commit?

No, I don't mind. Go ahead.


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:29:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:29:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16157.39477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZ9V-00080J-IT; Fri, 30 Oct 2020 18:29:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16157.39477; Fri, 30 Oct 2020 18:29:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZ9V-00080C-Fd; Fri, 30 Oct 2020 18:29:29 +0000
Received: by outflank-mailman (input) for mailman id 16157;
 Fri, 30 Oct 2020 18:29:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYZ9U-000807-NF
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:29:28 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa7a890d-dcbe-472b-a00f-024c4cac5286;
 Fri, 30 Oct 2020 18:29:27 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYZ9K-0007ot-70; Fri, 30 Oct 2020 18:29:18 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYZ9J-0004ZA-UP; Fri, 30 Oct 2020 18:29:18 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=m1iLd6OVDkA18gJijDSiOaMYJIVKnYJ2qasVX+GACqo=; b=vRgl4VIS+gVfuK2bX54gKVPyGk
	aiEyoLnMRagugtsvaviLN5dZWNnimPLfmc5dOU6v27OfJGIh2j9TGuIrNj37McNnuJQHTiZDoMhuU
	ftKqZx/Knk601P1sc3NegCzrp60GgKWNE1zVCZ19vlFWGl+EtIxDBm5pu1GgzeNGsiI8=;
Subject: Re: [PATCH v2 2/7] xen/arm: acpi: The fixmap area should always be
 cleared during failure/unmap
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Rahul.Singh@arm.com, Julien Grall
 <jgrall@amazon.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Wei Xu <xuwei5@hisilicon.com>
References: <20201023154156.6593-1-julien@xen.org>
 <20201023154156.6593-3-julien@xen.org>
 <alpine.DEB.2.21.2010231714510.12247@sstabellini-ThinkPad-T480s>
 <6038f41f-594c-e573-9126-f31291af9c38@xen.org>
 <alpine.DEB.2.21.2010301127470.12247@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <5a8afe57-1be5-6a0b-2ed5-e668690fc246@xen.org>
Date: Fri, 30 Oct 2020 18:29:15 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010301127470.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 30/10/2020 18:28, Stefano Stabellini wrote:
> On Fri, 30 Oct 2020, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 24/10/2020 01:16, Stefano Stabellini wrote:
>>> On Fri, 23 Oct 2020, Julien Grall wrote:
>>>>    bool __acpi_unmap_table(const void *ptr, unsigned long size)
>>>>    {
>>>>        vaddr_t vaddr = (vaddr_t)ptr;
>>>> +    unsigned int idx;
>>>> +
>>>> +    /* We are only handling fixmap address in the arch code */
>>>> +    if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
>>>> +         (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END)) )
>>>
>>> Is it missing "+ PAGE_SIZE"?
>>>
>>>      if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
>>>           (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) )
>>
>> Yes it should be + PAGE_SIZE. Do you mind if I fix it on commit?
> 
> No, I don't mind. Go ahead.

I technically don't have an ack for this patch. Can I consider this as an 
Acked-by? :)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:35:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:35:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16181.39517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZEo-0000cX-H0; Fri, 30 Oct 2020 18:34:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16181.39517; Fri, 30 Oct 2020 18:34:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZEo-0000cQ-Dp; Fri, 30 Oct 2020 18:34:58 +0000
Received: by outflank-mailman (input) for mailman id 16181;
 Fri, 30 Oct 2020 18:34:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0MNT=EF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYZEn-0000cL-74
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:34:57 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d49b34e9-650e-4a92-b7ef-3743d1ebe73f;
 Fri, 30 Oct 2020 18:34:56 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 00B5E20729;
 Fri, 30 Oct 2020 18:34:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604082895;
	bh=Ksm8ZRwC5xIEq+rvokHzDCtcSUrr9kI6LtAgdYmzoJA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Ap6vKCg6A6x4jtLJi01e8Tx9avEXg6iczqIuHjrdYREASK4yAdg5JrNwXOxpLEg6o
	 6qkAIoG+LlxDiOToUmTgkmZouUhxI9gs7G522IYzDiFp1SDGWNIFN9T3AUlW7Cxj/S
	 Go3vD2KS8xpYdDg6Nxuv41QYKiAlRnhZfC2aN3O8=
Date: Fri, 30 Oct 2020 11:34:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, alex.bennee@linaro.org, 
    masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com, 
    andre.przywara@arm.com, Rahul.Singh@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Wei Xu <xuwei5@hisilicon.com>
Subject: Re: [PATCH v2 2/7] xen/arm: acpi: The fixmap area should always be
 cleared during failure/unmap
In-Reply-To: <5a8afe57-1be5-6a0b-2ed5-e668690fc246@xen.org>
Message-ID: <alpine.DEB.2.21.2010301134430.12247@sstabellini-ThinkPad-T480s>
References: <20201023154156.6593-1-julien@xen.org> <20201023154156.6593-3-julien@xen.org> <alpine.DEB.2.21.2010231714510.12247@sstabellini-ThinkPad-T480s> <6038f41f-594c-e573-9126-f31291af9c38@xen.org> <alpine.DEB.2.21.2010301127470.12247@sstabellini-ThinkPad-T480s>
 <5a8afe57-1be5-6a0b-2ed5-e668690fc246@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 30 Oct 2020, Julien Grall wrote:
> Hi,
> 
> On 30/10/2020 18:28, Stefano Stabellini wrote:
> > On Fri, 30 Oct 2020, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 24/10/2020 01:16, Stefano Stabellini wrote:
> > > > On Fri, 23 Oct 2020, Julien Grall wrote:
> > > > >    bool __acpi_unmap_table(const void *ptr, unsigned long size)
> > > > >    {
> > > > >        vaddr_t vaddr = (vaddr_t)ptr;
> > > > > +    unsigned int idx;
> > > > > +
> > > > > +    /* We are only handling fixmap address in the arch code */
> > > > > +    if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
> > > > > +         (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END)) )
> > > > 
> > > > Is it missing "+ PAGE_SIZE"?
> > > > 
> > > >      if ( (vaddr < FIXMAP_ADDR(FIXMAP_ACPI_BEGIN)) ||
> > > >           (vaddr >= FIXMAP_ADDR(FIXMAP_ACPI_END) + PAGE_SIZE) )
> > > 
> > > Yes it should be + PAGE_SIZE. Do you mind if I fix it on commit?
> > 
> > No, I don't mind. Go ahead.
> 
> I technically don't have an ack for this patch. Can I consider this as an
> Acked-by? :)

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:47:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:47:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16188.39533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZQe-0001bM-MA; Fri, 30 Oct 2020 18:47:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16188.39533; Fri, 30 Oct 2020 18:47:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZQe-0001bF-Il; Fri, 30 Oct 2020 18:47:12 +0000
Received: by outflank-mailman (input) for mailman id 16188;
 Fri, 30 Oct 2020 18:47:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYZQd-0001bA-Ir
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:47:11 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de568e22-f50f-4d23-9478-5042f57bfce2;
 Fri, 30 Oct 2020 18:47:09 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYZQV-0008BB-Gq; Fri, 30 Oct 2020 18:47:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYZQV-00033H-8x; Fri, 30 Oct 2020 18:47:03 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ONFc4ifL9v9NgzURhV26JtY/MkFHc4W6DkTg3g3G33c=; b=L01Gb5syuT257UFKsDN9i9zGoS
	CuKWmL3/Epq7f/1HPdsOr458HbOmFVI0hi/5yEdg2OT7ahSf22fGv0rer8/TghRrdAy6wWf5xA3A3
	I4bOwK0j6Ki0TWm949eQfYTDGB4rTflksoy4tjauY2s34+9qmeypHOwLgtZF+6/l4mpo=;
Subject: Re: [PATCH v2 5/7] xen/arm: acpi: add BAD_MADT_GICC_ENTRY() macro
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Rahul.Singh@arm.com,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>
References: <20201023154156.6593-1-julien@xen.org>
 <20201023154156.6593-6-julien@xen.org>
 <alpine.DEB.2.21.2010231719520.12247@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <1bdbd2bd-5bcf-cd9a-f9bc-6239c050b595@xen.org>
Date: Fri, 30 Oct 2020 18:46:59 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010231719520.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 24/10/2020 01:32, Stefano Stabellini wrote:
> On Fri, 23 Oct 2020, Julien Grall wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> Imported from Linux commit b6cfb277378ef831c0fa84bcff5049307294adc6:
>>
>>      The BAD_MADT_ENTRY() macro is designed to work for all of the subtables
>>      of the MADT.  In the ACPI 5.1 version of the spec, the struct for the
>>      GICC subtable (struct acpi_madt_generic_interrupt) is 76 bytes long; in
>>      ACPI 6.0, the struct is 80 bytes long.  But, there is only one definition
>>      in ACPICA for this struct -- and that is the 6.0 version.  Hence, when
>>      BAD_MADT_ENTRY() compares the struct size to the length in the GICC
>>      subtable, it fails if 5.1 structs are in use, and there are systems in
>>      the wild that have them.
>>
>>      This patch adds the BAD_MADT_GICC_ENTRY() that checks the GICC subtable
>>      only, accounting for the difference in specification versions that are
>>      possible.  The BAD_MADT_ENTRY() will continue to work as is for all other
>>      MADT subtables.
>>
>>      This code is being added to an arm64 header file since that is currently
>>      the only architecture using the GICC subtable of the MADT.  As a GIC is
>>      specific to ARM, it is also unlikely the subtable will be used elsewhere.
>>
>>      Fixes: aeb823bbacc2 ("ACPICA: ACPI 6.0: Add changes for FADT table.")
>>      Signed-off-by: Al Stone <al.stone@linaro.org>
>>      Acked-by: Will Deacon <will.deacon@arm.com>
>>      Acked-by: "Rafael J. Wysocki" <rjw@rjwysocki.net>
>>      [catalin.marinas@arm.com: extra brackets around macro arguments]
>>      Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Thanks!

>> ---
>>
>> Changes in v2:
>>      - Patch added
>>
>> We may want to consider to also import:
>>
>> commit 9eb1c92b47c73249465d388eaa394fe436a3b489
>> Author: Jeremy Linton <jeremy.linton@arm.com>
>> Date:   Tue Nov 27 17:59:12 2018 +0000
> 
> Sure

I will add it to my todo list of further improvements.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 18:51:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 18:51:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16194.39545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZUN-0002Rw-6b; Fri, 30 Oct 2020 18:51:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16194.39545; Fri, 30 Oct 2020 18:51:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZUN-0002Rp-3Q; Fri, 30 Oct 2020 18:51:03 +0000
Received: by outflank-mailman (input) for mailman id 16194;
 Fri, 30 Oct 2020 18:51:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q/BW=EF=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kYZUL-0002Rk-Kd
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 18:51:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c49c0763-86d7-4c12-b03f-bbc42b29928d;
 Fri, 30 Oct 2020 18:51:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6BE95AF33;
 Fri, 30 Oct 2020 18:50:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604083859;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=6NnPIfBbWDw6ItzueazDTk0HN2NbGuH1JzTyEv12Z6k=;
	b=sKzz3CX338j047C+qjbjJPQdNsEO3zm4BQ4+l9FoipqDQMGx3/MJR79+GVvIb+kziuQ80h
	ZjlwO6veBGEUR7AKMbgFGB5A/ePo/pWM6CJ8Qb6rwO63HPUys0DjpDc4kIAAuy2i5KlatT
	eRlXe2ZwFDaVZOwL48jURwOM4ygFQoI=
Message-ID: <6e4834fc544b89f723e6a80b7c6f50b9bfcd1fa6.camel@suse.com>
Subject: Re: BUG: credit=sched2 machine hang when using DRAKVUF
From: Dario Faggioli <dfaggioli@suse.com>
To: Michał Leszczyński <michal.leszczynski@cert.pl>, 
	Jürgen Groß <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, George Dunlap <George.Dunlap@citrix.com>
Date: Fri, 30 Oct 2020 19:50:57 +0100
In-Reply-To: <1747162107.4472424.1603850652584.JavaMail.zimbra@nask.pl>
References: <157653679.6164.1603407559737.JavaMail.zimbra@nask.pl>
	 <a80f05ac-bd18-563e-12f7-1a0f9f0d4f6b@suse.com>
	 <1747162107.4472424.1603850652584.JavaMail.zimbra@nask.pl>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-bbIzAI4rls+5QjaSPH2W"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-bbIzAI4rls+5QjaSPH2W
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

[Cc-ing George as it's often useful having him in the loop when doing
 all this math on credits]

On Wed, 2020-10-28 at 03:04 +0100, Michał Leszczyński wrote:
> ----- Oct 23, 2020 at 6:47, Jürgen Groß jgross@suse.com wrote:
> As I've said before, I'm using RELEASE-4.14.0, this is DELL PowerEdge
> R640 with 14 PCPUs.
> 
> I have the following additional pieces of log (enclosed below). As
> you could see, the issue is about particular vCPUs of Dom0 not being
> scheduled for a long time, which really decreases stability of the
> host system.
> 
> ---
> 
> [  313.730969] rcu: INFO: rcu_sched self-detected stall on CPU
> [  313.731154] rcu:     5-....: (5249 ticks this GP)
> idle=c6e/1/0x4000000000000002 softirq=4625/4625 fqs=2624
> [  313.731474] rcu:      (t=5250 jiffies g=10309 q=220)
> [  338.968676] watchdog: BUG: soft lockup - CPU#5 stuck for 22s!
> [sshd:5991]
> [  346.963959] watchdog: BUG: soft lockup - CPU#2 stuck for 23s!
> [xenconsoled:2747]
> (XEN) *** Serial input to Xen (type 'CTRL-a' three times to switch
> input)
> (XEN) sched_smt_power_savings: disabled
> (XEN) NOW=384307105230
> (XEN) Online Cpus: 0,2,4,6,8,10,12,14,16,18,20,22,24,26
> (XEN) Cpupool 0:
> (XEN) Cpus: 0,2,4,6,8,10,12,14,16,18,20,22,24,26
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) Scheduler: SMP Credit Scheduler rev2 (credit2)
> (XEN) Active queues: 2
> (XEN)   default-weight     = 256
> (XEN) Runqueue 0:
> (XEN)   ncpus              = 7
> (XEN)   cpus               = 0,2,4,6,8,10,12
> (XEN)   max_weight         = 256
> (XEN)   pick_bias          = 10
> (XEN)   instload           = 3
> (XEN)   aveload            = 805194 (~307%)
> [...]
> (XEN) Runqueue 1:
> (XEN)   ncpus              = 7
> (XEN)   cpus               = 14,16,18,20,22,24,26
> (XEN)   max_weight         = 256
> (XEN)   pick_bias          = 22
> (XEN)   instload           = 0
> (XEN)   aveload            = 51211 (~19%)
> (XEN)   idlers:
>
That's quite imbalanced... Is there any pinning involved, by any
chance?

> [...]
> (XEN) Domain info:
> (XEN)   Domain: 0 w 256 c 0 v 14
> (XEN)     1: [0.0] flags=20 cpu=0 credit=-10000000 [w=256] load=4594
> (~1%)
> (XEN)     2: [0.1] flags=20 cpu=2 credit=9134904 [w=256] load=262144
> (~100%)
> (XEN)     3: [0.2] flags=22 cpu=4 credit=-10000000 [w=256]
> load=262144 (~100%)
> (XEN)     4: [0.3] flags=20 cpu=6 credit=-10000000 [w=256] load=4299
> (~1%)
> (XEN)     5: [0.4] flags=20 cpu=8 credit=-10000000 [w=256] load=4537
> (~1%)
> (XEN)     6: [0.5] flags=22 cpu=10 credit=-10000000 [w=256]
> load=262144 (~100%)
> (XEN)     7: [0.6] flags=20 cpu=12 credit=-10000000 [w=256] load=5158
>
All these credit =3D -10000000 are a bit weird. The scheduler only allow
a vCPU to run until it has no less than -500000. As soon as a vCPU
reaches that point (actually, as soon as it reaches credit < 0), it
should be preempted. And if there is no other one with positive amount
of credits, credits themselves are reset (by adding 10000000 to them).

So, in an ideal world, we'd see no vCPU with credit < 0 here. We know
that we're not in a super-ideal world, and that's why we allow credits
to go a bit lower, down to -500000.

And we're not even in an ideal world, so values smaller than -500000
are actually possible, due to timer/IRQ latency, etc. But they
shouldn't be much lower than that.

If we see -10000000, it means a vCPU managed to run for more than
20ms, i.e., the 10ms it was given at the last reset event plus 10ms
more, which brought it to -10ms. We can't know exactly how much more
than 20ms, because even if it ran for 40ms, -10000000 is the lower
limit for the credits, and we clip "more negative" values at that
level.

The point, though, is that the scheduler is indeed setting timers to
make sure that the vCPU is interrupted when it reaches -500000. It
looks like, here, quite a few vCPUs are able to ignore such timer
interrupts, or the timer interrupts are not being delivered, or they
are, but with some giant latency/delay.

> (XEN) Runqueue 0:
> (XEN) CPU[00] runq=0, sibling={0}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) CPU[02] runq=0, sibling={2}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) CPU[04] runq=0, sibling={4}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN)   run: [0.2] flags=22 cpu=4 credit=-10000000 [w=256] load=262144 (~100%)
>
And here we see this again. So, they're reporting being stuck, but it's
not that they're stuck as in "not being run by the hypervisor". On the
contrary, they're running all the time!

And... well, I'd like to think a bit more about this, but I'd say that
the vCPU is running and the functions that do the credit accounting
(like burn_credits()-->t2c_update()) are being called; otherwise we
would not see -10000000.

Also...

> (XEN) CPU[06] runq=0, sibling={6}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) CPU[08] runq=0, sibling={8}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) CPU[10] runq=0, sibling={10}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN)   run: [0.5] flags=22 cpu=10 credit=-10000000 [w=256] load=262144 (~100%)
> (XEN) CPU[12] runq=0, sibling={12}, core={0,2,4,6,8,10,12,14,16,18,20,22,24,26}
> (XEN) RUNQ:
> (XEN)     0: [0.1] flags=20 cpu=2 credit=9134904 [w=256] load=262144 (~100%)
>
... The actual scheduler (csched2_schedule()) is not being invoked. In
fact, vCPU d0v1 is sitting here in the runqueue.

If the scheduler ran on either CPU 4 or 12, it would notice that vCPUs
2 and 5 have -10000000 credits while vCPU 1 is waiting with 9134904
credits, and it would pick it up and preempt.

Or it is running, but is not doing what I just described, because of
some bug we have there that I am not currently seeing.

Also, all the other idle vCPUs are not picking d0v1 up either, which is
another thing that should not happen.

So, can I see:

# xl info -n
# xl vcpu-list

 ?

In the meantime, I'll continue looking at the code, and maybe come up
with a debug patch, e.g., for figuring out whether csched2_schedule()
is either running but not doing its job, or not running at all.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)




From xen-devel-bounces@lists.xenproject.org Fri Oct 30 19:13:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 19:13:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16207.39572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZqC-0004QI-GT; Fri, 30 Oct 2020 19:13:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16207.39572; Fri, 30 Oct 2020 19:13:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYZqC-0004QB-D6; Fri, 30 Oct 2020 19:13:36 +0000
Received: by outflank-mailman (input) for mailman id 16207;
 Fri, 30 Oct 2020 19:13:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pDD0=EF=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1kYZqA-0004Pe-Eg
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 19:13:34 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c725507a-9803-4a90-8777-543cf2ec58fd;
 Fri, 30 Oct 2020 19:13:33 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYZq3-0000M0-R6; Fri, 30 Oct 2020 19:13:27 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1kYZq3-0002z7-Ke; Fri, 30 Oct 2020 19:13:27 +0000
X-Inumbo-ID: c725507a-9803-4a90-8777-543cf2ec58fd
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=3XwIDC/zNO15TsWPqBEwPTczQU30EIZ+3OMOktjsMoM=; b=a4uQwp9kjOUmPjaHLk0mTjYOLi
	fw6VzpowXbS0ZWtCVWFISunXWvRlmH0Bk9wXUFKgE/bokHTY3N6xeC84fVXuiVIX6WcT7ak3mQI4U
	Yn1DbTtAzwK/08QWVdeAdhnR4t/Xrzlw5w073lfxm5vB5KcRgvC8CV3Bw70MWijufX2A=;
Subject: Re: [PATCH v2 6/7] xen/arm: gic-v2: acpi: Use the correct length for
 the GICC structure
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, alex.bennee@linaro.org,
 masami.hiramatsu@linaro.org, ehem+xen@m5p.com, bertrand.marquis@arm.com,
 andre.przywara@arm.com, Rahul.Singh@arm.com,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>
References: <20201023154156.6593-1-julien@xen.org>
 <20201023154156.6593-7-julien@xen.org>
 <alpine.DEB.2.21.2010231731010.12247@sstabellini-ThinkPad-T480s>
 <alpine.DEB.2.21.2010231735000.12247@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <1d3b1643-e2cd-ede4-21e5-3a9bb23db0c3@xen.org>
Date: Fri, 30 Oct 2020 19:13:25 +0000
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.4.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2010231735000.12247@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

I just realized the title says "gic-v2" when I also modified "gic-v3". I 
will update the title on the next version.

On 24/10/2020 01:45, Stefano Stabellini wrote:
> On Fri, 23 Oct 2020, Stefano Stabellini wrote:
>> On Fri, 23 Oct 2020, Julien Grall wrote:
>>> From: Julien Grall <julien.grall@arm.com>
>>>
>>> The length of the GICC structure in the MADT ACPI table differs between
>>> version 5.1 and 6.0, although there are no other relevant differences.
>>>
>>> Use the BAD_MADT_GICC_ENTRY macro, which was specifically designed to
>>> overcome this issue.
>>>
>>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>>> Signed-off-by: Andre Przywara <andre.przywara@arm.com>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Actually it looks we need to do substitutions in a couple of other places:
> 
> - xen/arch/arm/gic-v3.c:gicv3_make_hwdom_madt
> - xen/arch/arm/gic-v3.c:gic_acpi_get_madt_cpu_num
> - xen/arch/arm/gic.c:gic_get_hwdom_madt_size
I will update those three and resend the series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 20:11:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 20:11:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16227.39633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYajW-0001Mn-IW; Fri, 30 Oct 2020 20:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16227.39633; Fri, 30 Oct 2020 20:10:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYajW-0001Mg-FX; Fri, 30 Oct 2020 20:10:46 +0000
Received: by outflank-mailman (input) for mailman id 16227;
 Fri, 30 Oct 2020 20:10:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0MNT=EF=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYajU-0001Mb-GJ
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 20:10:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 036b5113-f08c-4b91-a6c0-822d2347b923;
 Fri, 30 Oct 2020 20:10:43 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s (c-24-130-65-46.hsd1.ca.comcast.net
 [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 2A822206F7;
 Fri, 30 Oct 2020 20:10:42 +0000 (UTC)
X-Inumbo-ID: 036b5113-f08c-4b91-a6c0-822d2347b923
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604088642;
	bh=vZmVPDf0ZmBYwG1bGkRj6FBKNv/zlfneUzVXUfgk4Us=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rywqS1PbGROA0VWKhQv80KY+8yqS4NZpb22p/QYbvRBhwneC/BiCDttR4m/vFGCgt
	 PXu9qR023fYo3nKMKEfKGHBFv0ztI8aTr4x9KQS0diXdqrSVwDQge7bqrmUKjdJE6t
	 Q8TLWgUCB26oD1zm3i7lop40M/cJerJAUrnhHXm0=
Date: Fri, 30 Oct 2020 13:10:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Elliott Mitchell <ehem+xen@m5p.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    roman@zededa.com, xen-devel@lists.xenproject.org
Subject: Re: Xen on RP4
In-Reply-To: <20201029212954.GA50793@mattapan.m5p.com>
Message-ID: <alpine.DEB.2.21.2010301240450.12247@sstabellini-ThinkPad-T480s>
References: <20201023005629.GA83870@mattapan.m5p.com> <alpine.DEB.2.21.2010221801490.12247@sstabellini-ThinkPad-T480s> <20201023211941.GA90171@mattapan.m5p.com> <alpine.DEB.2.21.2010231647290.12247@sstabellini-ThinkPad-T480s> <20201024053540.GA97417@mattapan.m5p.com>
 <4fcf4832-9266-443f-54d0-fa1fff4b6e14@xen.org> <20201026160316.GA20589@mattapan.m5p.com> <7a904044-8206-b45d-8ec2-d4e48b07ea83@xen.org> <20201028015423.GA33407@mattapan.m5p.com> <alpine.DEB.2.21.2010281704250.12247@sstabellini-ThinkPad-T480s>
 <20201029212954.GA50793@mattapan.m5p.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 29 Oct 2020, Elliott Mitchell wrote:
> On Wed, Oct 28, 2020 at 05:37:02PM -0700, Stefano Stabellini wrote:
> > On Tue, 27 Oct 2020, Elliott Mitchell wrote:
> > > On Mon, Oct 26, 2020 at 06:44:27PM +0000, Julien Grall wrote:
> > > > On 26/10/2020 16:03, Elliott Mitchell wrote:
> > > > > On Mon, Oct 26, 2020 at 01:31:42PM +0000, Julien Grall wrote:
> > > > >> On 24/10/2020 06:35, Elliott Mitchell wrote:
> > > > >>> ACPI has a distinct
> > > > >>> means of specifying a limited DMA-width; the above fails, because it
> > > > >>> assumes a *device-tree*.
> > > > >>
> > > > >> Do you know if it would be possible to infer from the ACPI static table
> > > > >> the DMA-width?
> > > > > 
> > > > > Yes, and it is.  Due to not knowing much about ACPI tables I don't know
> > > > > what the C code would look like though (problem is which documentation
> > > > > should I be looking at first?).
> > > > 
> > > > What you provided below is an excerpt of the DSDT. AFAIK, DSDT content 
> > > > is written in AML. So far the shortest implementation I have seen of an 
> > > > AML parser is around 5000 lines (see [1]). It might be possible to 
> > > > strip some of the code, although I think this will still probably be 
> > > > too big for a single workaround.
> > > > 
> > > > What I meant by "static table" is a table that looks like a structure 
> > > > and can be parsed in a few lines. If we can't find one containing the 
> > > > DMA window, then the next best solution is to find a way to identify 
> > > > the platform.
> > > > 
> > > > I don't know enough ACPI to know if this solution is possible. A good 
> > > > starter would probably be the ACPI spec [2].
> > > 
> > > Okay, that is worse than I had thought (okay, ACPI is impressively
> > > complex for something nominally firmware-level).
> > >
> > > There are strings in the present Tianocore implementation for Raspberry
> > > PI 4B which could be targeted.  Notably included in the output during
> > > boot listing the tables, "RPIFDN", "RPIFDN RPI" and "RPIFDN RPI4" (I'm
> > > unsure how kosher these are, as this wasn't implemented nor blessed by the
> > > Raspberry PI Foundation).
> > > 
> > > I strongly dislike this approach as you soon turn the Xen project into a
> > > database of hardware.  This is already occurring with
> > > xen/arch/arm/platforms and I would love to do something about this.  On
> > > that thought, how about utilizing Xen's command-line for this purpose?
> > 
> > I don't think that a command line option is a good idea: basically it is
> > punting to users the task of platform detection. Also, it means that
> > users will be necessarily forced to edit the uboot script or grub
> > configuration file to boot.
> 
> -EINVAL
> 
> On many Linux installations (nearly universal on desktop/server, though
> maybe uncommon on ARM servers), Xen's command line comes from grub.cfg.
> grub.cfg is in turn created by a series of scripts, with several places
> for users to modify configuration without breaking things.
> 
> The scripts which create grub.cfg could add a "dma_mem=" option to Xen's
> command-line based upon what the running kernel reports.  If the kernel
> is running on top of Xen, it will still be able to retrieve this
> information out of ACPI.
> 
> This does mean distributions would need to modify scripts, but that is
> doable.  This also means a dumb user could potentially jump in, modify
> the value and thus cause unrecoverable breakage.  Yet on the flip side
> this also allows for the short-term stop-gap of smart users modifying
> the option as appropriate for new hardware.
> 
> Certainly it may not be the greatest approach, but it isn't as bad as
> you're claiming.

From what I have seen in previous years, getting all the distros to
update their scripts is actually a lot of work. It would take a long
time to see it through: a few years. I think it is likely we would solve
the problem more quickly with a different solution.

 
> > Note that even if we introduced a new command line, we wouldn't take
> > away the need for xen/arch/arm/platforms anyway.
> 
> Perhaps, but it could allow for this setting at least to be moved to
> somewhere outside of Xen.

I was trying to say that it is not just about dma_mem=. There are other
things under xen/arch/arm/platforms that are sometimes required.  In the
specific case of the RPi4, if we added a general dma_mem= parameter, we
would still need to detect the platform to be able to do a platform
reboot, unless it is guaranteed that when ACPI is present ATF is also
present?

More on this below.


> I'm inclined to agree with Juergen Groß, this reads kind of like having
> an extra domain run during Xen's initialization which can talk ACPI.
> 
> > > Have a procedure of during installation/updates retrieve DMA limitation
> > > information from the running OS and the following boot Xen will follow
> > > the appropriate setup.  By its nature, Domain 0 will have the information
> > > needed, just becomes an issue of how hard that is to retrieve...
> > 
> > Historically that is what we used to do for many things related to ACPI,
> > but unfortunately it leads to a pretty bad architecture where Xen
> > depends on Dom0 for booting rather than the other way around. (Dom0
> > should be the one requiring Xen for booting, given that Xen is higher
> > privilege and boots first.)
> > 
> > 
> > I think the best compromise is still to use an ACPI string to detect the
> > platform. For instance, would it be possible to use the OEMID fields in
> > RSDT, XSDT, FADT?  Possibly even a combination of them?
> > 
> > Another option might be to get the platform name from UEFI somehow. 
> 
> I included appropriate strings in e-mail.  Suitable strings do appear in
> `dmesg`.

Do you mean the ones mentioned in the previous email or did you forget
an attachment here? I don't think you added the value of OEMID anywhere?


> Problem is this feels like you're hard-coding a fixed list of platforms
> Xen can run on.  Instead values like these should be provided by
> firmware.  ACPI includes a method for encoding DMA limitations,

I see where you are coming from: DMA limitations is a general problem,
so it looks like there should be a general solution.

Yes, I agree that it would be good to have a general solution to
automatically detect DMA limitations, no matter if we do ACPI or device
tree.

However, xen/arch/arm/platforms is not just for DMA limitations. There
are per-platform quirks and workarounds that sometimes are required for
booting. Even if we solved the DMA limitations problem generically
(which would be good if we find a way), we are still likely to have to
find another way to detect the platform name.


> device-tree really should have one added.  Only challenge for
> device-tree is getting everyone to agree on names and parameters.

Device tree has ways to describe dma limitations, but they are not
really taken into account when allocating dom0 memory today
unfortunately :-/


> Looking at it, there are really issues with the allocate_memory_11()
> function in xen/arch/arm/domain_build.c.  Two tasks have been merged and
> I'm unsure they were merged correctly.
> 
> I'm unaware of any Xen-capable platforms with such, but DMA can have
> distinct restrictions outside of what allocate_memory_11() provides for.
> ACPI allows for explicit address ranges and in the past many devices have
> used addresses that didn't start at zero.
> 
> Additionally a device might have several devices with restricted DMA and
> these could have differing non-overlapping ranges.  Domain 0 might need
> memory in several DMA ranges.  Luckily, in the past few years I haven't
> read about any potentially Xen-capable devices where DMA-capable memory
> had differing capabilities/performance from non-DMA-capable memory.

What you described is possible, at least theoretically. Any patches to
improve allocate_memory_11() in that respect would be great to have.


> Meanwhile for a platform which does have DMA limitations, the kernel and
> boot information for domain 0 shouldn't be placed in DMA-capable memory
> if domain 0 has any memory which isn't DMA-capable.  Yet it appears
> allocate_memory_11() will cause kernel/initrd/boot information to be
> placed in DMA-capable memory.

The dom0 kernel (if zImage) needs to be loaded in the first 128Mb of
RAM. In any case, the kernel will relocate those modules in high memory
and free the original memory during boot, so it shouldn't make a lot of
difference?


> If my understanding is correct (this is a BIG IF), as a last step
> allocate_memory_11() should reverse the order of memory banks.  Another
> trick is DMA-capable banks need to be subject to ballooning AFTER
> non-DMA-capable banks (I'm unsure how often ballooning is used on ARM).

Why reverse the order?


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 23:03:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 23:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16705.41482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYdQk-0000s9-9u; Fri, 30 Oct 2020 23:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16705.41482; Fri, 30 Oct 2020 23:03:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYdQk-0000s2-6a; Fri, 30 Oct 2020 23:03:34 +0000
Received: by outflank-mailman (input) for mailman id 16705;
 Fri, 30 Oct 2020 23:03:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYdQj-0000rx-G0
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 23:03:33 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4340bf6b-c258-4afa-816e-e5e8c8a0c2b3;
 Fri, 30 Oct 2020 23:03:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYdQg-000643-2y; Fri, 30 Oct 2020 23:03:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYdQf-0004RW-O6; Fri, 30 Oct 2020 23:03:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYdQf-0005S1-NC; Fri, 30 Oct 2020 23:03:29 +0000
X-Inumbo-ID: 4340bf6b-c258-4afa-816e-e5e8c8a0c2b3
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YT7kEjc1q0vbf9IgwdWpJ6RTzr6hNf10WWn6OOpUk6s=; b=KazF7+v6ixvL6k6VZd+ORZLwPv
	YNi41VwjXzlsgH5xImVdMcS+GYJeIKs/Ij45lVyp6o8EiDniTuftT13Q5CzslZIrA1RNpmMeQx75e
	E6cxNIpn+d3tkvJh0ta2IdHqRUzfhqlElbCdh/DSl5433MpKkaBJh97WrmK0WUSprrbY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156309-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156309: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4100d463dbdd95d85fabe387dd5676bed75f65f7
X-Osstest-Versions-That:
    xen=0108b011e133915a8ebd33636811d8c141b6e9f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 23:03:29 +0000

flight 156309 xen-4.12-testing real [real]
flight 156323 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156309/
http://logs.test-lab.xenproject.org/osstest/logs/156323/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2      fail REGR. vs. 156035

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156035
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156035
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156035
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156035
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156035
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4100d463dbdd95d85fabe387dd5676bed75f65f7
baseline version:
 xen                  0108b011e133915a8ebd33636811d8c141b6e9f3

Last test of basis   156035  2020-10-20 13:36:02 Z   10 days
Testing same since   156263  2020-10-27 18:36:53 Z    3 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4100d463dbdd95d85fabe387dd5676bed75f65f7
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 19 15:51:22 2020 +0100

    x86/pv: Flush TLB in response to paging structure changes
    
    With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
    is safe from Xen's point of view (as the update only affects guest mappings),
    and the guest is required to flush (if necessary) after making updates.
    
    However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
    writeable pagetables, etc.) is an implementation detail outside of the
    API/ABI.
    
    Changes in the paging structure require invalidations in the linear pagetable
    range for subsequent accesses into the linear pagetables to access non-stale
    mappings.  Xen must provide suitable flushing to prevent intermixed guest
    actions from accidentally accessing/modifying the wrong pagetable.
    
    For all L2 and higher modifications, flush the TLB.  PV guests cannot create
    L2 or higher entries with the Global bit set, so no mappings established in
    the linear range can be global.  (This could in principle be an order 39 flush
    starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
    
    Express the necessary flushes as a set of booleans which accumulate across the
    operation.  Comment the flushing logic extensively.
    
    This is XSA-286.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 16a20963b3209788f2c0d3a3eebb7d92f03f5883)

commit b1d6f37aa5aa9f3fc5a269b9dd21b7feb7444be0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 22 11:28:58 2020 +0100

    x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
    
    c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
    use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
    to the L4 path in do_mmu_update().
    
    However, this was unnecessary.
    
    It is the guest's responsibility to perform appropriate TLB flushing if the L4
    modification altered an established mapping in a flush-relevant way.  In this
    case, an MMUEXT_OP hypercall will follow.  The case which Xen needs to cover
    is when new mappings are created, and the resync on the exit-to-guest path
    covers this correctly.
    
    There is a corner case with multiple vCPUs in hypercalls at the same time,
    which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
    behaviour.
    
    Architecturally, established TLB entries can continue to be used until the
    broadcast flush has completed.  Therefore, even with concurrent hypercalls,
    the guest cannot depend on older mappings not being used until an MMUEXT_OP
    hypercall completes.  Xen's implementation of guest-initiated flushes will
    take correct effect on top of an in-progress hypercall, picking up the new
    mapping settings before the other vCPU's MMUEXT_OP completes.
    
    Note: The correctness of this change is not impacted by whether XPTI uses
    global mappings or not.  Correctness there depends on the behaviour of Xen on
    the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
    
    This is (not really) XSA-286 (but necessary to simplify the logic).
    
    Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 055e1c3a3d95b1e753148369fbc4ba48782dd602)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri Oct 30 23:22:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 30 Oct 2020 23:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16711.41500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYdiZ-0002f8-04; Fri, 30 Oct 2020 23:21:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16711.41500; Fri, 30 Oct 2020 23:21:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYdiY-0002f1-T6; Fri, 30 Oct 2020 23:21:58 +0000
Received: by outflank-mailman (input) for mailman id 16711;
 Fri, 30 Oct 2020 23:21:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mU6k=EF=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYdiX-0002eN-DG
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 23:21:57 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 764c7570-f00a-45aa-89b8-da0585c1006c;
 Fri, 30 Oct 2020 23:21:50 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYdiP-0006PX-LP; Fri, 30 Oct 2020 23:21:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYdiP-00056v-DA; Fri, 30 Oct 2020 23:21:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYdiP-0006S7-Cd; Fri, 30 Oct 2020 23:21:49 +0000
X-Inumbo-ID: 764c7570-f00a-45aa-89b8-da0585c1006c
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=077PcwWKKu4uOBs58+hF/l5C7or207xi6f3cDQmU0fY=; b=1URAFbjxNmuzGy1FUR6Eks4V1z
	5n0ZxFAMO2KFhR+G9BxMXpXkB3GGmgogLUzLruy1ZeJdcyYEIQ0tFojfqhoHwRFgK0T9n6A60NDqA
	zfDV7+2n7fXnjhzOW4no+ovr4NeseF0YSMPeqE0YGsB7NuAnwayXTJr1jmRm/iEcvktQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156322-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 156322: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7056f2f89f03f2f804ac7e776c7b2b000cd716cd
X-Osstest-Versions-That:
    xen=ca56b06043bb4241eeb0a41a60daffb1408a08d5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 30 Oct 2020 23:21:49 +0000

flight 156322 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156322/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7056f2f89f03f2f804ac7e776c7b2b000cd716cd
baseline version:
 xen                  ca56b06043bb4241eeb0a41a60daffb1408a08d5

Last test of basis   156319  2020-10-30 14:02:33 Z    0 days
Testing same since   156322  2020-10-30 20:02:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andre Przywara <andre.przywara@arm.com>
  Elliott Mitchell <ehem+xen@m5p.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien.grall@arm.com>
  Rahul Singh <rahul.singh@arm.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ca56b06043..7056f2f89f  7056f2f89f03f2f804ac7e776c7b2b000cd716cd -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 00:24:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 00:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16717.41511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYegk-0000Ih-JD; Sat, 31 Oct 2020 00:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16717.41511; Sat, 31 Oct 2020 00:24:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYegk-0000IZ-GJ; Sat, 31 Oct 2020 00:24:10 +0000
Received: by outflank-mailman (input) for mailman id 16717;
 Sat, 31 Oct 2020 00:24:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YIuu=EG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1kYegj-0000IU-4s
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 00:24:09 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d7922a4-6dc5-4d6b-a134-186ea18f9170;
 Sat, 31 Oct 2020 00:24:08 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-24-130-65-46.hsd1.ca.comcast.net [24.130.65.46])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id CED9E206E5;
 Sat, 31 Oct 2020 00:24:06 +0000 (UTC)
X-Inumbo-ID: 4d7922a4-6dc5-4d6b-a134-186ea18f9170
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=default; t=1604103847;
	bh=kOMKGMEqKOWTR2tX5PKtfxx/R7DyiG1cHABWKb9Usyc=;
	h=From:To:Cc:Subject:Date:From;
	b=cadDWJDumStYMN8G1tGMxBd5anO5yPbE3GpezViiyjxvZvmPV+HQ2I9A9/4yw9P3Q
	 5F5Tw5QniU67DFA2HXww1L3Tqd/kDJPM6al8MuNkI4BC7ZQbwiLjVPO3QmvVL/H5c6
	 DkpWJHu0bLsN9tsAxYSDoiNOiBLj94jn/Meyba34=
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	julien@xen.org,
	wl@xen.org
Subject: [RFC PATCH] xen: EXPERT clean-up
Date: Fri, 30 Oct 2020 17:24:05 -0700
Message-Id: <20201031002405.4545-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

A recent thread [1] has exposed a couple of issues with our current way
of handling EXPERT.

1) It is not obvious that "Configure standard Xen features (expert
users)" is actually the famous EXPERT we keep talking about on
xen-devel.

2) It is not obvious when we need to enable EXPERT to get a specific
feature.

In particular, if you want to enable ACPI support so that you can boot
Xen on an ACPI platform, you have to enable EXPERT first. But this is
really not clear from searching the kconfig menu (type '/' and "ACPI"):
nothing in the description tells you that you need to enable EXPERT to
get the option.

So this patch makes things easier by doing two things:

- rename the EXPERT description to clarify the option, making sure to
include the word "EXPERT" in the one-liner

- instead of using "if EXPERT", add EXPERT as a dependency, so that when
searching for a feature in the menu you are told that you need to enable
EXPERT to get the option
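
[Editorial note: the two kconfig styles being compared can be illustrated with a hypothetical option FOO (invented name; the behaviour described is standard kconfig prompt-visibility vs. dependency semantics):]

```
# Style 1: "if EXPERT" only hides the prompt.  A kconfig search for FOO
# does not report that EXPERT is what unlocks it.
config FOO
	bool "FOO support" if EXPERT

# Style 2: EXPERT is a real dependency, so the search result shows
# something like "Depends on: EXPERT [=n]", telling the user what to
# enable first.
config FOO
	bool "FOO support"
	depends on EXPERT
```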

[1] https://marc.info/?l=xen-devel&m=160333101228981

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
CC: andrew.cooper3@citrix.com
CC: george.dunlap@citrix.com
CC: iwj@xenproject.org
CC: jbeulich@suse.com
CC: julien@xen.org
CC: wl@xen.org
---
 xen/Kconfig              | 13 ++++++-------
 xen/arch/arm/Kconfig     | 18 ++++++++++--------
 xen/arch/x86/Kconfig     | 11 ++++++-----
 xen/common/Kconfig       | 21 ++++++++++++++-------
 xen/common/sched/Kconfig |  2 +-
 5 files changed, 37 insertions(+), 28 deletions(-)

diff --git a/xen/Kconfig b/xen/Kconfig
index 34c318bfa2..5fa2716e2d 100644
--- a/xen/Kconfig
+++ b/xen/Kconfig
@@ -35,14 +35,13 @@ config DEFCONFIG_LIST
 	default ARCH_DEFCONFIG
 
 config EXPERT
-	bool "Configure standard Xen features (expert users)"
+	bool "Configure EXPERT features"
 	help
-	  This option allows certain base Xen options and settings
-	  to be disabled or tweaked. This is for specialized environments
-	  which can tolerate a "non-standard" Xen.
-	  Only use this if you really know what you are doing.
-	  Xen binaries built with this option enabled are not security
-	  supported.
+	  This option allows certain experimental (see SUPPORT.md) Xen
+	  options and settings to be enabled/disabled. This is for
+	  specialized environments which can tolerate a "non-standard" Xen.
+	  Only use this if you really know what you are doing.  Xen binaries
+	  built with this option enabled are not security supported.
 	default n
 
 config LTO
diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
index 2777388265..0223cf11c6 100644
--- a/xen/arch/arm/Kconfig
+++ b/xen/arch/arm/Kconfig
@@ -32,8 +32,8 @@ menu "Architecture Features"
 source "arch/Kconfig"
 
 config ACPI
-	bool "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
-	depends on ARM_64
+	bool "ACPI (Advanced Configuration and Power Interface) Support"
+	depends on ARM_64 && EXPERT
 	---help---
 
 	  Advanced Configuration and Power Interface (ACPI) support for Xen is
@@ -49,8 +49,8 @@ config GICV3
 	  If unsure, say Y
 
 config HAS_ITS
-        bool "GICv3 ITS MSI controller support" if EXPERT
-        depends on GICV3 && !NEW_VGIC
+        bool "GICv3 ITS MSI controller support"
+        depends on GICV3 && !NEW_VGIC && EXPERT
 
 config HVM
         def_bool y
@@ -79,8 +79,8 @@ config SBSA_VUART_CONSOLE
 	  SBSA Generic UART implements a subset of ARM PL011 UART.
 
 config ARM_SSBD
-	bool "Speculative Store Bypass Disable" if EXPERT
-	depends on HAS_ALTERNATIVE
+	bool "Speculative Store Bypass Disable"
+	depends on HAS_ALTERNATIVE && EXPERT
 	default y
 	help
 	  This enables mitigation of bypassing of previous stores by speculative
@@ -89,7 +89,8 @@ config ARM_SSBD
 	  If unsure, say Y.
 
 config HARDEN_BRANCH_PREDICTOR
-	bool "Harden the branch predictor against aliasing attacks" if EXPERT
+	bool "Harden the branch predictor against aliasing attacks"
+	depends on EXPERT
 	default y
 	help
 	  Speculation attacks against some high-performance processors rely on
@@ -106,7 +107,8 @@ config HARDEN_BRANCH_PREDICTOR
 	  If unsure, say Y.
 
 config TEE
-	bool "Enable TEE mediators support" if EXPERT
+	bool "Enable TEE mediators support"
+	depends on EXPERT
 	default n
 	help
 	  This option enables generic TEE mediators support. It allows guests
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 24868aa6ad..071bfbbc40 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -146,9 +146,9 @@ config BIGMEM
 	  If unsure, say N.
 
 config HVM_FEP
-	bool "HVM Forced Emulation Prefix support" if EXPERT
+	bool "HVM Forced Emulation Prefix support"
 	default DEBUG
-	depends on HVM
+	depends on HVM && EXPERT
 	---help---
 
 	  Compiles in a feature that allows HVM guest to arbitrarily
@@ -165,8 +165,9 @@ config HVM_FEP
 	  If unsure, say N.
 
 config TBOOT
-	bool "Xen tboot support" if EXPERT
+	bool "Xen tboot support"
 	default y if !PV_SHIM_EXCLUSIVE
+	depends on EXPERT
 	select CRYPTO
 	---help---
 	  Allows support for Trusted Boot using the Intel(R) Trusted Execution
@@ -251,8 +252,8 @@ config HYPERV_GUEST
 endif
 
 config MEM_SHARING
-	bool "Xen memory sharing support" if EXPERT
-	depends on HVM
+	bool "Xen memory sharing support"
+	depends on HVM && EXPERT
 
 endmenu
 
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 3e2cf25088..7a8c54e66c 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -12,7 +12,8 @@ config CORE_PARKING
 	bool
 
 config GRANT_TABLE
-	bool "Grant table support" if EXPERT
+	bool "Grant table support"
+	depends on EXPERT
 	default y
 	---help---
 	  Grant table provides a generic mechanism to memory sharing
@@ -151,7 +152,8 @@ config KEXEC
 	  If unsure, say Y.
 
 config EFI_SET_VIRTUAL_ADDRESS_MAP
-    bool "EFI: call SetVirtualAddressMap()" if EXPERT
+    bool "EFI: call SetVirtualAddressMap()"
+    depends on EXPERT
     ---help---
       Call EFI SetVirtualAddressMap() runtime service to setup memory map for
       further runtime services. According to UEFI spec, it isn't strictly
@@ -162,7 +164,8 @@ config EFI_SET_VIRTUAL_ADDRESS_MAP
 
 config XENOPROF
 	def_bool y
-	prompt "Xen Oprofile Support" if EXPERT
+	prompt "Xen Oprofile Support"
+	depends on EXPERT
 	depends on X86
 	---help---
 	  Xen OProfile (Xenoprof) is a system-wide profiler for Xen virtual
@@ -199,7 +202,8 @@ config XSM_FLASK
 
 config XSM_FLASK_AVC_STATS
 	def_bool y
-	prompt "Maintain statistics on the FLASK access vector cache" if EXPERT
+	prompt "Maintain statistics on the FLASK access vector cache"
+	depends on EXPERT
 	depends on XSM_FLASK
 	---help---
 	  Maintain counters on the access vector cache that can be viewed using
@@ -272,7 +276,8 @@ config LATE_HWDOM
 	  If unsure, say N.
 
 config ARGO
-	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT
+	bool "Argo: hypervisor-mediated interdomain communication"
+	depends on EXPERT
 	---help---
 	  Enables a hypercall for domains to ask the hypervisor to perform
 	  data transfer of messages between domains.
@@ -344,7 +349,8 @@ config SUPPRESS_DUPLICATE_SYMBOL_WARNINGS
 	  build becoming overly verbose.
 
 config CMDLINE
-	string "Built-in hypervisor command string" if EXPERT
+	string "Built-in hypervisor command string"
+	depends on EXPERT
 	default ""
 	---help---
 	  Enter arguments here that should be compiled into the hypervisor
@@ -377,7 +383,8 @@ config DOM0_MEM
 	  Leave empty if you are not sure what to specify.
 
 config TRACEBUFFER
-	bool "Enable tracing infrastructure" if EXPERT
+	bool "Enable tracing infrastructure"
+	depends on EXPERT
 	default y
 	---help---
 	  Enable tracing infrastructure and pre-defined tracepoints within Xen.
diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig
index 61231aacaa..ec0385cd07 100644
--- a/xen/common/sched/Kconfig
+++ b/xen/common/sched/Kconfig
@@ -1,5 +1,5 @@
 menu "Schedulers"
-	visible if EXPERT
+	depends on EXPERT
 
 config SCHED_CREDIT
 	bool "Credit scheduler support"
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat Oct 31 00:42:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 00:42:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16727.41523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYeyB-00021R-3L; Sat, 31 Oct 2020 00:42:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16727.41523; Sat, 31 Oct 2020 00:42:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYeyB-00021K-0D; Sat, 31 Oct 2020 00:42:11 +0000
Received: by outflank-mailman (input) for mailman id 16727;
 Sat, 31 Oct 2020 00:42:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q0JK=EG=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kYeyA-00021F-Gn
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 00:42:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 87f1276a-f99f-4ff8-98f9-f93e534152e1;
 Sat, 31 Oct 2020 00:42:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A37C0B07D;
 Sat, 31 Oct 2020 00:42:07 +0000 (UTC)
X-Inumbo-ID: 87f1276a-f99f-4ff8-98f9-f93e534152e1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604104927;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=2EHFWQ8CihojlaDJ65dYND77qyv6QB1qxRCf5IyZwKA=;
	b=stQFBsYqbwD9xB527PI9aiX+GapOFIz3OszEpXfLLox0PIOYU6ZiDg86EOnescteMYE+Lq
	peCfgmS+ZT+wsVahVxv8+ShyURi9+pvdpMpp2J8HnKKUkA1PHGwqvg/2PIn+O7an95uXT3
	aBjG3bR+C5yAXp1E8rNw9xSHXcCcY9E=
Message-ID: <cb67b4beb9d9065a61071bf7236f40802ace4203.camel@suse.com>
Subject: Re: BUG: credit=sched2 machine hang when using DRAKVUF
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
Cc: xen-devel@lists.xenproject.org, =?ISO-8859-1?Q?J=FCrgen_Gro=DF?=
	 <jgross@suse.com>
Date: Sat, 31 Oct 2020 01:42:06 +0100
In-Reply-To: <66f4b628-970c-9990-118a-572f971d6ed2@suse.com>
References: <157653679.6164.1603407559737.JavaMail.zimbra@nask.pl>
	 <a80f05ac-bd18-563e-12f7-1a0f9f0d4f6b@suse.com>
	 <1747162107.4472424.1603850652584.JavaMail.zimbra@nask.pl>
	 <66f4b628-970c-9990-118a-572f971d6ed2@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-vSzTJZVszvG3C28vxN6U"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-vSzTJZVszvG3C28vxN6U
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-10-28 at 08:45 +0100, Jan Beulich wrote:
> On 28.10.2020 03:04, Michał Leszczyński wrote:
>
>
> I have to admit that the log makes me wonder whether this isn't a
> Dom0 internal issue:
>
> > [  338.968676] watchdog: BUG: soft lockup - CPU#5 stuck for 22s!
> > [sshd:5991]
> > [  346.963959] watchdog: BUG: soft lockup - CPU#2 stuck for 23s!
> > [xenconsoled:2747]
>
Yeah, weird.

> For these two vCPU-s we see ...
>
> > (XEN) Domain info:
> > (XEN)   Domain: 0 w 256 c 0 v 14
> > (XEN)     1: [0.0] flags=20 cpu=0 credit=-10000000 [w=256] load=4594 (~1%)
> > (XEN)     2: [0.1] flags=20 cpu=2 credit=9134904 [w=256] load=262144 (~100%)
> > (XEN)     3: [0.2] flags=22 cpu=4 credit=-10000000 [w=256] load=262144 (~100%)
> > (XEN)     4: [0.3] flags=20 cpu=6 credit=-10000000 [w=256] load=4299 (~1%)
> > (XEN)     5: [0.4] flags=20 cpu=8 credit=-10000000 [w=256] load=4537 (~1%)
> > (XEN)     6: [0.5] flags=22 cpu=10 credit=-10000000 [w=256] load=262144 (~100%)
>
> ... that both are fully loaded and ...
>
> > [...]
>
> ... they're actively running,
>
True indeed. But as I said in my other reply, it's weird that we have
so many vCPUs with the artificial value that we use to represent the
minimum value of credits we allow a vCPU to have.
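
As an aside, the mechanism behind such a repeated artificial value can
be sketched as follows. This is an illustrative model only, not Xen's
actual credit2 code; the floor constant is simply the value seen in
the dump:

```python
# Illustrative sketch: a credit scheduler burns credits as a vCPU
# runs, but clamps them at a floor. That is why many busy vCPUs can
# end up showing the exact same "artificial" value.
CREDIT_MIN = -10_000_000  # the floor value observed in the 'r' dump

def burn_credits(credit: int, consumed: int) -> int:
    """Subtract the credits consumed, never going below the floor."""
    return max(CREDIT_MIN, credit - consumed)

# A vCPU that ran far past its allocation gets clamped to the floor.
print(burn_credits(5_000_000, 20_000_000))  # -10000000 (clamped)
print(burn_credits(5_000_000, 1_000_000))   # 4000000
```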

And it's weird that, with some idle CPUs and two CPUs running vCPUs
with negative credits, we have a vCPU with positive credits sitting in
the runqueue.

Unless the debug-key captured a transient state. Like, d0v1 is in the
runqueue because it just woke up and the 'r' dump occurred between when
it was put in the runqueue and when a physical CPU (which is poked
during the wake-up itself) picked it up.

It seems unlikely, and this still would not explain nor justify the
-10000000. But, still, Michał, can you perhaps check whether, while the
issue manifests, poking at the 'r' key a few times always shows the
same (or a similar) situation?

> > (XEN) RUNQ:
> > (XEN) CPUs info:
> > (XEN) CPU[00] current=d[IDLE]v0, curr=d[IDLE]v0, prev=NULL
> > (XEN) CPU[02] current=d[IDLE]v2, curr=d[IDLE]v2, prev=NULL
> > (XEN) CPU[04] current=d0v2, curr=d0v2, prev=NULL
> > (XEN) CPU[06] current=d[IDLE]v6, curr=d[IDLE]v6, prev=NULL
> > (XEN) CPU[08] current=d[IDLE]v8, curr=d[IDLE]v8, prev=NULL
> > (XEN) CPU[10] current=d0v5, curr=d0v5, prev=NULL
>
> ... here. Hence an additional question is what exactly they're doing.
> '0' and possibly 'd' debug key output may shed some light on it, but
> to interpret that output the exact kernel and hypervisor binaries
> would need to be known / available.
>
Yes, I agree. Even considering all that I said (which seems to point
back at a Xen issue rather than a kernel one), knowing more about what
the vCPUs are doing could indeed be helpful!

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-vSzTJZVszvG3C28vxN6U
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+cst4ACgkQFkJ4iaW4
c+7I5w/7BnpDsKNKpgEfi0+Nf9z73RKRy83ITAAuI9caGsTUNMZtaf7Q+hZ/8d6K
vWoh5y1tlr+1ZRlpxKuzP+J9Z91G33mKL6oqtfRtg7kT+JHRZqeM4tdKwiO4jXJ3
Hgq0lJrSlFkCJ/Rsi/Fxk0CE9CSg/d28EiIqbnQQHvkl36vojIjYZpcU9ngzv42f
4JiwG6KCquXmrtmH2JQ3R8ek8vtGsu/qDEuFbgZytANEB4l6SpWHHa2WB+Wh3EA3
J+GVSnGgTersCg+9O3R9oEFuW8VlBWmXIOskfmEHwo3HmVCz2QXGvjHXGF8KT1RZ
3vRrEyMrQsB3G2s1T3RlIIrmq/5xc/gLYAdW62cf8zJI6WErrVmS3huS6l3OfwAC
FYGBqMRaYUv+lRUw22Qnt6ZHvRGs8oxYg3MuZu503CQsD/YkxuiPf61CEvteLQKW
cFN9WcrGKH/KDgB5w2eQvyEFLBlaE32oCxkdXIWNdcw8x+xwov/GQVnCE4ChK37A
dzxV+RxVmpiwZah7derhniJFdtQqI8/Ii2UX8RSLuoWxRNWnWeBrxNrREDxzxNgW
Cs/j0IVZC7WgO/Ydw31hTuDXg2n0PdH2wXmTy5RPm+viBkzAmIKCuyulpYSBe6yl
IFaN+Q791GqgZAFMQpV4bzGwqu6x2oowTLu3Gzti48mVOH2h8Zk=
=mhuD
-----END PGP SIGNATURE-----

--=-vSzTJZVszvG3C28vxN6U--



From xen-devel-bounces@lists.xenproject.org Sat Oct 31 02:34:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 02:34:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16739.41536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYgj4-0007oV-Cq; Sat, 31 Oct 2020 02:34:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16739.41536; Sat, 31 Oct 2020 02:34:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYgj4-0007oM-5V; Sat, 31 Oct 2020 02:34:42 +0000
Received: by outflank-mailman (input) for mailman id 16739;
 Sat, 31 Oct 2020 02:34:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q0JK=EG=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kYgj2-0007oH-Ee
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 02:34:40 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [62.140.7.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91e24b1c-09a5-427a-b256-30af99d418b6;
 Sat, 31 Oct 2020 02:34:38 +0000 (UTC)
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01lp2053.outbound.protection.outlook.com [104.47.1.53]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 de-mta-16-4-rJvLxhOiSzyqizTQO_dg-1; Sat, 31 Oct 2020 03:34:35 +0100
Received: from AM0PR04MB5826.eurprd04.prod.outlook.com (2603:10a6:208:134::22)
 by AM0PR04MB5011.eurprd04.prod.outlook.com (2603:10a6:208:c9::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3477.25; Sat, 31 Oct
 2020 02:34:32 +0000
Received: from AM0PR04MB5826.eurprd04.prod.outlook.com
 ([fe80::db0:41c3:aa05:d082]) by AM0PR04MB5826.eurprd04.prod.outlook.com
 ([fe80::db0:41c3:aa05:d082%6]) with mapi id 15.20.3499.028; Sat, 31 Oct 2020
 02:34:32 +0000
X-Inumbo-ID: 91e24b1c-09a5-427a-b256-30af99d418b6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1604111677;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=O8GQhi0UJ3LHYdAnrkP/toqkgeVbB7a547rgTaiTSe4=;
	b=bRR4khCvFv+hjSqCeOqFn+lKt1PSGIfHvP+GP4VzMbkOFJcjG+7rMk+b0/TCfxAg/m9GYw
	RFYmsCP2sWAWxRW3ZDUtylAKvKJjxagB1s2u08aTrTBG9OI5UR4kaiGc9qxFuXpwre8SrX
	8gd1zv4qI4w1Yj+xqNrjDCEyRoIA2/Q=
X-MC-Unique: 4-rJvLxhOiSzyqizTQO_dg-1
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ey/S6+iz7PKSfPM7M9hB9To23tgGQayqNm2SFauW4X8FWKbYHGLFY8QdnzkZrAuULvRKVeYWB8xeaDHctG46pwTx4I3x5qt4fqLdumXz7D3c5qs7QxPe+rzxGFg12FDyHrtghmethJR5iwL+E3lavHa3UCtoZpDXCoZ4JjKDH5Adn0C9ef4pO9vUNfBJLbmW45db1N+wMNDcjuFWkWcDL1M0a2BQMoajd1L7SjFYzNyxGoouGfKyVBFCwuptw/9vuQFqDp7cGQTkLFtP1tesLDYNf5UKJGbqYBpR4kPqqtgeC5+xxZ0xG7D/aAR4o6wtyKJ+5eGWH7rCd3d4FjrCZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=O8GQhi0UJ3LHYdAnrkP/toqkgeVbB7a547rgTaiTSe4=;
 b=V3DtZL4IgV6tjefX4n9cnBKFXvbKHJfwTVYeGT4WVjjKQFck6/mplScKT1EhMWZHaagNUjssnLbegGJc+KlnyE3fnGbVRcwqnMgbjw4F52Hl58MQrP0uAVhjT3PhPtyGjWscve9+7q56XrGDYOSEuZCTthxdrjeHZr8iZZFSV2yElgC04xK8nofG7/hE/eLsJFNHyVouNGc0oGrwMNEW0Fv84tf8aRLXlrcIx0z+LrhEz6vzNCM2wjyzj6anYy122DadbVlfmEnorKzqQORtg0+wMzxhrGoBRGHX5H+PL740x//PdQotTaKdnJYTkQptcerTDNOqTM+RzFyqmICXJw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <JGross@suse.com>, "frederic.pierret@qubes-os.org"
	<frederic.pierret@qubes-os.org>, "George.Dunlap@citrix.com"
	<George.Dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Thread-Topic: Recent upgrade of 4.13 -> 4.14 issue
Thread-Index: AQHWq6SD/uyafOnyXU2hV/eab0QUF6mq9TSAgADbCLP//87fAIAFZkwA
Date: Sat, 31 Oct 2020 02:34:32 +0000
Message-ID: <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
	 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
	 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
	 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
	 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
	 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
	 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
	 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
In-Reply-To: <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator:
user-agent: Evolution 3.38.1 (by Flathub.org) 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=suse.com;
x-originating-ip: [89.186.78.87]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: b3952509-3d36-4a23-b1f9-08d87d45800e
x-ms-traffictypediagnostic: AM0PR04MB5011:
x-ld-processed: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba,ExtFwd
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs:
 <AM0PR04MB50113C4B489E58A8617E8E40C5120@AM0PR04MB5011.eurprd04.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info:
 gUErc6qcZHWuzU9J+K4WqMtQPl7fHtvqv1Chl963zFLOoBdU2YFQhDv9GRh08BWuSxnKD1INfAtGsgPTjY3ufOeXEPpbjKeegfr/ooggsESispCbnVr0hfIXlp+v1sIZKvW5+Md9+0gKpSHQucrOXeQErs7uRA+5ZGH4RsR8qXfN1Q54I21+WZMwwrKzdNQmh+MGBpn8PvmvX+t4oLShMWx/iYr2g9esbBG4mPYeBTXoo2S1ep45WziqP4M888nkQIN3TaQz2le7goTE2hUochVqHvQEd4cLtCVaF/zcmEVzx/7HwuqiYvDUxjZEG5fZyDUPDg0eDQOjpaCybGh4DFWXlM6qYw6yFXofy0FTNzu70EuB7xeK3cny/0G77G3Fylg6/me6poIKOmwEB/y1Mg==
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR04MB5826.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(346002)(396003)(136003)(376002)(366004)(39860400002)(36756003)(8936002)(186003)(76116006)(66556008)(66446008)(66946007)(2906002)(66616009)(86362001)(6512007)(6486002)(71200400001)(2616005)(66476007)(54906003)(110136005)(26005)(478600001)(4326008)(5660300002)(6506007)(4001150100001)(8676002)(966005)(83380400001)(99936003)(316002)(66574015)(64756008);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 OoDB68efjwDpz+1RqgA062yzLCj5z/agdnILyMZEnQpyKTfX1UHJiNZyrywGhVi+YV8/6Fga8mEKrNyrYW5WwsTjw62fItp/a+H6QuGf2hIHsvYbAWx+0raNrLbMukSNDJZNXizcodPlOVv7BS5oPJfRjAxuuXbT3xLy8Wccwd4IbflcDLk/uJpltkvKnV3Dl94ctYAWbYp2ONCK2YpslYQClr7d99xYf+mmvqqpaju7Y05ycQyYmfQNo6i8ESjzh1DWtaI/y3vb1D0J1foX7X4RCGhPvRRn7gKYTqzX461m316j38y8LjeP//m/6nMS/wapwUZkOiUbI17nxXbE9GOMja+nN4o4x41AbnxO/TJXxazo0GUJj6+b3Sr06NElw9EyXXVuOZ3qjC7hf8f+34L+c3ABjDryGUs9ap7bYtS+RbI7nimQFeC0hxVcAwCop9+Iv8x71S5Gfx/PxftsAd4IGQrUKT98AlSizXERUDO6HcQY3FuUUq+Cd8fF/q9A9rElp3XgsQ+oLCuxwHrpNwnO1hziSwHhuwABiq2yK5U75SnkTP5axzAdLrcqcOnVvoyIkrJOW5bNj5XnA3iZDC9cJt/TWGwv4kRdph+GVWKMDw0vPtUYyE2B0+9s3lVrMo/J6XMXxm4zlpDfbtKbGA==
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-ZpZ3yaX97+GVLY1v8Nbt"
MIME-Version: 1.0
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR04MB5826.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b3952509-3d36-4a23-b1f9-08d87d45800e
X-MS-Exchange-CrossTenant-originalarrivaltime: 31 Oct 2020 02:34:32.5218
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Rh9z/uuTczZvSdLML+XA91tYbL0/mfLZaNdEecR5iWXcAg1ouhikf5J1SoYB+m7F/FTmUPeFZPf110/b/awp1A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR04MB5011

--=-ZpZ3yaX97+GVLY1v8Nbt
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2020-10-27 at 17:06 +0100, Frédéric Pierret wrote:
>
> Ok the server got frozen just few minutes after my mail and I got
> now:
> 'r': https://gist.github.com/fepitre/78541f555902275d906d627de2420571
>
From the scheduler point of view, things seem fine:

(XEN) sched_smt_power_savings: disabled
(XEN) NOW=770188952085
(XEN) Online Cpus: 0-15
(XEN) Cpupool 0:
(XEN) Cpus: 0-15
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Active queues: 2
(XEN) 	default-weight     = 256
(XEN) Runqueue 0:
(XEN) 	ncpus              = 8
(XEN) 	cpus               = 0-7
(XEN) 	max_weight         = 256
(XEN) 	pick_bias          = 1
(XEN) 	instload           = 7
(XEN) 	aveload            = 2021119 (~770%)
(XEN) 	idlers: 00000000,00000000
(XEN) 	tickled: 00000000,00000000
(XEN) 	fully idle cores: 00000000,00000000
(XEN) Runqueue 1:
(XEN) 	ncpus              = 8
(XEN) 	cpus               = 8-15
(XEN) 	max_weight         = 256
(XEN) 	pick_bias          = 9
(XEN) 	instload           = 8
(XEN) 	aveload            = 2097259 (~800%)
(XEN) 	idlers: 00000000,00000000
(XEN) 	tickled: 00000000,00000200
(XEN) 	fully idle cores: 00000000,00000000
The system is pretty busy, but not overloaded.
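
Incidentally, the load figures in these dumps look like fixed-point
values in which 1 << 18 (262144) corresponds to 100% of one CPU. This
is an inference from the numbers in the dump, not taken from the
source, but it reproduces the printed percentages exactly:

```python
# The dump's load values appear to be fixed-point, with 1 << 18
# (262144) representing 100% of one CPU.
LOAD_FULL = 1 << 18  # 262144 == ~100%

def load_percent(raw_load: int) -> int:
    """Convert a raw load value to the truncated percentage printed."""
    return raw_load * 100 // LOAD_FULL

# Sanity check against the dump: runqueue 0's aveload of 2021119 is
# reported as ~770%, runqueue 1's 2097259 as ~800%.
print(load_percent(2021119))  # 770
print(load_percent(2097259))  # 800
```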

Below we see that CPU 3 is running the idle vCPU, but it's marked as
neither idle nor tickled.

It may be running a tasklet (the one that dumps the debug key output, I
guess).

Credits look fine; I don't see any strange values that would indicate
an anomaly.

All the CPUs are executing a vCPU, and there should be nothing that
prevents them from making progress.

There is one vCPU that apparently wants to run at 100% in pretty much
every guest, and more than one in dom0.

And I think I saw some spin_lock() calls in the call stacks, in the
partial report from the '*' debug-key?

Maybe they're stuck in the kernel, not in Xen? Thoughts?
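
For wading through dumps like the one below, a small throwaway parser
can help spot the pegged vCPUs. This is just an illustrative helper,
not part of any Xen tooling:

```python
import re

# Pull each vCPU's id, credit, and approximate load out of the
# per-vCPU lines of the 'r' debug-key dump, so busy vCPUs stand out.
VCPU_RE = re.compile(
    r"\[(?P<vcpu>\d+\.\d+)\].*?credit=(?P<credit>-?\d+).*?\(~(?P<load>\d+)%\)"
)

def busy_vcpus(dump: str, threshold: int = 100):
    """Return (vcpu, credit) pairs whose reported load meets threshold."""
    return [
        (m["vcpu"], int(m["credit"]))
        for m in VCPU_RE.finditer(dump)
        if int(m["load"]) >= threshold
    ]

# Three lines taken from the dump below.
sample = """\
(XEN)    4: [0.3] flags=2 cpu=11 credit=9998469 [w=256] load=262144 (~100%)
(XEN)    5: [0.4] flags=0 cpu=0 credit=10533686 [w=256] load=13619 (~5%)
(XEN)   17: [1.0] flags=2 cpu=6 credit=4916769 [w=256] load=262144 (~100%)
"""
print(busy_vcpus(sample))  # [('0.3', 9998469), ('1.0', 4916769)]
```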


(XEN) Domain info:
(XEN) 	Domain: 0 w 256 c 0 v 16
(XEN) 	  1: [0.0] flags=0 cpu=5 credit=10553147 [w=256] load=17122 (~6%)
(XEN) 	  2: [0.1] flags=0 cpu=4 credit=10570606 [w=256] load=13569 (~5%)
(XEN) 	  3: [0.2] flags=0 cpu=7 credit=10605188 [w=256] load=13465 (~5%)
(XEN) 	  4: [0.3] flags=2 cpu=11 credit=9998469 [w=256] load=262144 (~100%)
(XEN) 	  5: [0.4] flags=0 cpu=0 credit=10533686 [w=256] load=13619 (~5%)
(XEN) 	  6: [0.5] flags=a cpu=9 credit=1101 [w=256] load=0 (~0%)
(XEN) 	  7: [0.6] flags=2 cpu=2 credit=10621802 [w=256] load=13526 (~5%)
(XEN) 	  8: [0.7] flags=2 cpu=1 credit=10670607 [w=256] load=13453 (~5%)
(XEN) 	  9: [0.8] flags=2 cpu=7 credit=10649858 [w=256] load=13502 (~5%)
(XEN) 	 10: [0.9] flags=0 cpu=3 credit=10550566 [w=256] load=13477 (~5%)
(XEN) 	 11: [0.10] flags=2 cpu=4 credit=10644321 [w=256] load=13539 (~5%)
(XEN) 	 12: [0.11] flags=2 cpu=1 credit=10602374 [w=256] load=13471 (~5%)
(XEN) 	 13: [0.12] flags=0 cpu=6 credit=10617262 [w=256] load=13801 (~5%)
(XEN) 	 14: [0.13] flags=2 cpu=8 credit=9998664 [w=256] load=262144 (~100%)
(XEN) 	 15: [0.14] flags=0 cpu=3 credit=10603305 [w=256] load=17020 (~6%)
(XEN) 	 16: [0.15] flags=0 cpu=5 credit=10591312 [w=256] load=13523 (~5%)
(XEN) 	Domain: 1 w 256 c 0 v 2
(XEN) 	 17: [1.0] flags=2 cpu=6 credit=4916769 [w=256] load=262144 (~100%)
(XEN) 	 18: [1.1] flags=0 cpu=13 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 2 w 256 c 0 v 2
(XEN) 	 19: [2.0] flags=2 cpu=5 credit=4982064 [w=256] load=262144 (~100%)
(XEN) 	 20: [2.1] flags=0 cpu=14 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 3 w 256 c 0 v 2
(XEN) 	 21: [3.0] flags=2 cpu=1 credit=5200781 [w=256] load=262144 (~100%)
(XEN) 	 22: [3.1] flags=0 cpu=5 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 4 w 256 c 0 v 2
(XEN) 	 23: [4.0] flags=12 cpu=0 credit=5395149 [w=256] load=262144 (~100%)
(XEN) 	 24: [4.1] flags=0 cpu=8 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 5 w 256 c 0 v 2
(XEN) 	 25: [5.0] flags=2 cpu=2 credit=5306461 [w=256] load=262144 (~100%)
(XEN) 	 26: [5.1] flags=0 cpu=15 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 6 w 256 c 0 v 8
(XEN) 	 27: [6.0] flags=12 cpu=10 credit=7915602 [w=256] load=262144 (~100%)
(XEN) 	 28: [6.1] flags=0 cpu=10 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 29: [6.2] flags=0 cpu=15 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 30: [6.3] flags=0 cpu=8 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 31: [6.4] flags=0 cpu=0 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 32: [6.5] flags=0 cpu=6 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 33: [6.6] flags=0 cpu=11 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 34: [6.7] flags=0 cpu=9 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 7 w 256 c 0 v 2
(XEN) 	 35: [7.0] flags=2 cpu=4 credit=5297013 [w=256] load=262144 (~100%)
(XEN) 	 36: [7.1] flags=0 cpu=11 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 8 w 256 c 0 v 4
(XEN) 	 37: [8.0] flags=0 cpu=14 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 38: [8.1] flags=2 cpu=7 credit=5240630 [w=256] load=262144 (~100%)
(XEN) 	 39: [8.2] flags=0 cpu=13 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 40: [8.3] flags=0 cpu=8 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 9 w 256 c 0 v 2
(XEN) 	 41: [9.0] flags=0 cpu=0 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 42: [9.1] flags=12 cpu=13 credit=7910266 [w=256] load=262144 (~100%)
(XEN) 	Domain: 10 w 256 c 0 v 2
(XEN) 	 43: [10.0] flags=12 cpu=12 credit=8045458 [w=256] load=262144 (~100%)
(XEN) 	 44: [10.1] flags=0 cpu=8 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 11 w 256 c 0 v 2
(XEN) 	 45: [11.0] flags=12 cpu=14 credit=7575284 [w=256] load=262144 (~100%)
(XEN) 	 46: [11.1] flags=0 cpu=12 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 12 w 256 c 0 v 2
(XEN) 	 47: [12.0] flags=2 cpu=15 credit=8014099 [w=256] load=262144 (~100%)
(XEN) 	 48: [12.1] flags=0 cpu=6 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	Domain: 13 w 256 c 0 v 2
(XEN) 	 49: [13.0] flags=0 cpu=7 credit=10500000 [w=256] load=0 (~0%)
(XEN) 	 50: [13.1] flags=0 cpu=15 credit=10500000 [w=256] load=0 (~0%)
(XEN) Runqueue 0:
(XEN) CPU[00] runq=3D0, sibling=3D{0}, core=3D{0-7}
(XEN) 	run: [4.0] flags=3D2 cpu=3D0 credit=3D5255200 [w=3D256] load=3D26214=
4
(~100%)
(XEN) CPU[01] runq=3D0, sibling=3D{1}, core=3D{0-7}
(XEN) 	run: [3.0] flags=3D2 cpu=3D1 credit=3D5057668 [w=3D256] load=3D26214=
4
(~100%)
(XEN) CPU[02] runq=3D0, sibling=3D{2}, core=3D{0-7}
(XEN) 	run: [5.0] flags=3D12 cpu=3D2 credit=3D5180785 [w=3D256] load=3D2621=
44
(~100%)
(XEN) CPU[03] runq=3D0, sibling=3D{3}, core=3D{0-7}
(XEN) CPU[04] runq=3D0, sibling=3D{4}, core=3D{0-7}
(XEN) 	run: [7.0] flags=3D2 cpu=3D4 credit=3D5215323 [w=3D256] load=3D26214=
4
(~100%)
(XEN) CPU[05] runq=3D0, sibling=3D{5}, core=3D{0-7}
(XEN) 	run: [2.0] flags=3D2 cpu=3D5 credit=3D4816142 [w=3D256] load=3D26214=
4
(~100%)
(XEN) CPU[06] runq=3D0, sibling=3D{6}, core=3D{0-7}
(XEN) 	run: [1.0] flags=3D2 cpu=3D6 credit=3D4755772 [w=3D256] load=3D26214=
4
(~100%)
(XEN) CPU[07] runq=3D0, sibling=3D{7}, core=3D{0-7}
(XEN) 	run: [8.1] flags=3D12 cpu=3D7 credit=3D5175342 [w=3D256] load=3D2621=
44
(~100%)
(XEN) RUNQ:
(XEN) Runqueue 1:
(XEN) CPU[08] runq=3D1, sibling=3D{8}, core=3D{8-15}
(XEN) 	run: [0.13] flags=3D2 cpu=3D8 credit=3D9998664 [w=3D256] load=3D2621=
44
(~100%)
(XEN) CPU[09] runq=3D1, sibling=3D{9}, core=3D{8-15}
(XEN) 	run: [0.5] flags=3Da cpu=3D9 credit=3D1101 [w=3D256] load=3D0 (~0%)
(XEN) CPU[10] runq=3D1, sibling=3D{10}, core=3D{8-15}
(XEN) 	run: [6.0] flags=3D2 cpu=3D10 credit=3D7764532 [w=3D256] load=3D2621=
44
(~100%)
(XEN) CPU[11] runq=3D1, sibling=3D{11}, core=3D{8-15}
(XEN) 	run: [0.3] flags=3D2 cpu=3D11 credit=3D9998469 [w=3D256] load=3D2621=
44
(~100%)
(XEN) CPU[12] runq=3D1, sibling=3D{12}, core=3D{8-15}
(XEN) 	run: [10.0] flags=3D2 cpu=3D12 credit=3D7967846 [w=3D256] load=3D262=
144
(~100%)
(XEN) CPU[13] runq=3D1, sibling=3D{13}, core=3D{8-15}
(XEN) 	run: [9.1] flags=3D12 cpu=3D13 credit=3D7832232 [w=3D256] load=3D262=
144
(~100%)
(XEN) CPU[14] runq=3D1, sibling=3D{14}, core=3D{8-15}
(XEN) 	run: [11.0] flags=3D2 cpu=3D14 credit=3D7509378 [w=3D256] load=3D262=
144
(~100%)
(XEN) CPU[15] runq=3D1, sibling=3D{15}, core=3D{8-15}
(XEN) 	run: [12.0] flags=3D2 cpu=3D15 credit=3D7971164 [w=3D256] load=3D262=
144
(~100%)
(XEN) RUNQ:
(XEN) CPUs info:
(XEN) CPU[00] current=3Dd4v0, curr=3Dd4v0, prev=3DNULL
(XEN) CPU[01] current=3Dd3v0, curr=3Dd3v0, prev=3DNULL
(XEN) CPU[02] current=3Dd5v0, curr=3Dd5v0, prev=3DNULL
(XEN) CPU[03] current=3Dd[IDLE]v3, curr=3Dd[IDLE]v3, prev=3DNULL
(XEN) CPU[04] current=3Dd7v0, curr=3Dd7v0, prev=3DNULL
(XEN) CPU[05] current=3Dd2v0, curr=3Dd2v0, prev=3DNULL
(XEN) CPU[06] current=3Dd1v0, curr=3Dd1v0, prev=3DNULL
(XEN) CPU[07] current=3Dd8v1, curr=3Dd8v1, prev=3DNULL
(XEN) CPU[08] current=3Dd0v13, curr=3Dd0v13, prev=3DNULL
(XEN) CPU[09] current=3Dd0v5, curr=3Dd0v5, prev=3DNULL
(XEN) CPU[10] current=3Dd6v0, curr=3Dd6v0, prev=3DNULL
(XEN) CPU[11] current=3Dd0v3, curr=3Dd0v3, prev=3DNULL
(XEN) CPU[12] current=3Dd10v0, curr=3Dd10v0, prev=3DNULL
(XEN) CPU[13] current=3Dd9v1, curr=3Dd9v1, prev=3DNULL
(XEN) CPU[14] current=3Dd11v0, curr=3Dd11v0, prev=3DNULL
(XEN) CPU[15] current=3Dd12v0, curr=3Dd12v0, prev=3DNULL

-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)




From xen-devel-bounces@lists.xenproject.org Sat Oct 31 02:45:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 02:45:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16745.41547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYgtf-0000Ll-Cj; Sat, 31 Oct 2020 02:45:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16745.41547; Sat, 31 Oct 2020 02:45:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYgtf-0000Le-9Z; Sat, 31 Oct 2020 02:45:39 +0000
Received: by outflank-mailman (input) for mailman id 16745;
 Sat, 31 Oct 2020 02:45:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYgte-0000LZ-Lr
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 02:45:38 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5fa9fcc-02d4-4584-8196-73a3b739c2d5;
 Sat, 31 Oct 2020 02:45:34 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYgtZ-0007JS-LH; Sat, 31 Oct 2020 02:45:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYgtZ-0000nN-6d; Sat, 31 Oct 2020 02:45:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYgtZ-0007Kp-5f; Sat, 31 Oct 2020 02:45:33 +0000
X-Inumbo-ID: d5fa9fcc-02d4-4584-8196-73a3b739c2d5
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aY2BptVyepmXz80Q3w5EUAeC5krkWrwh2pUok3JztCg=; b=L2a+gJpHQONW0v8FU8WJbGqc7L
	UC7Kx1oKMgexHG+uPRQIjEIWngS0Vy6zuyWt6PI+lJ3G2NSVsbqQbLjXuh3gqzkRe00bhW6525tzZ
	o2x6azudHybSA98Zg7HtjrzXoxKdN2eSNUhUqszvezffsAVK/QOc7A4cFHdAqKcORkk0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156313-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156313: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-start/debianhvm.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=802427bcdae1ad2eceea8a8877ecad835e3f8fde
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 02:45:33 +0000

flight 156313 qemu-mainline real [real]
flight 156325 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156313/
http://logs.test-lab.xenproject.org/osstest/logs/156325/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 18 guest-start/debianhvm.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                802427bcdae1ad2eceea8a8877ecad835e3f8fde
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   71 days
Failing since        152659  2020-08-21 14:07:39 Z   70 days  160 attempts
Testing same since   156313  2020-10-30 04:00:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 54564 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 02:54:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 02:54:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16751.41563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYh2Y-0001Ir-8V; Sat, 31 Oct 2020 02:54:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16751.41563; Sat, 31 Oct 2020 02:54:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYh2Y-0001Ik-5P; Sat, 31 Oct 2020 02:54:50 +0000
Received: by outflank-mailman (input) for mailman id 16751;
 Sat, 31 Oct 2020 02:54:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CThl=EG=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kYh2X-0001If-2c
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 02:54:49 +0000
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 211d6e46-31ec-40ec-9739-c90a03cc7173;
 Sat, 31 Oct 2020 02:54:47 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 271CC5C014E;
 Fri, 30 Oct 2020 22:54:47 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 30 Oct 2020 22:54:47 -0400
Received: from mail-itl (unknown [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id D83FF3064682;
 Fri, 30 Oct 2020 22:54:45 -0400 (EDT)
X-Inumbo-ID: 211d6e46-31ec-40ec-9739-c90a03cc7173
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=sZey/y
	FH4J68uGvZZ3Hx+//0gd9LeqAg0EVCMZzazww=; b=BvSizoDRwPfoX4y5W71Ksw
	BUR/9y9hbE4/YTjYrYDto+ZZ9bcm1hKYZ8FDiaRJCzt7Jj+4BxM0LwPZ/0hbyQ5B
	49PlaRWfiAlaGC75cP3lVTIkAH2aPmYxcu4P4hfKlr1itKjTikxgUXF6UrUwRXzi
	62BuIJMRHtHWgUFq4iKHVPiWwO1GtS1zis5YmHkGEjmwD+XMH3zN8KgrR56GvLc6
	gzUkrBU70sqAam17FGNGGw86ypU1pSplrmcj18WY4M3WtLK3FSMuzSsumDwd2fIL
	0eygD6us/KJjkSJRiQBgdMW7LfWgdlUFXyUWj7O820jxIZrlr6d4NwGEEJTRvtIw
	==
X-ME-Sender: <xms:9tGcXyUZujxTUlgxwoh-O2tM7AWVrpEHrp7iVbZ8XEtdVuj2WGDeqA>
    <xme:9tGcX-kO5pl76JBi0ALVjV-iCNSSk8ACDIwB9--fY5RN5iLKIjlvWSp3ypMY1cWGS
    ZkRHBwhBBuwKw>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrleeigdeglecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepfdhmrghrmhgr
    rhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmfdcuoehmrghrmhgrrh
    gvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgv
    rhhnpeehveetfefhfedvtdeuuedvtddtvdeijeeuueejffduvefgvdekledtleduveffve
    enucffohhmrghinhepghhithhhuhgsrdgtohhmnecukfhppeeluddrieegrddujedtrdek
    leenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrg
    hrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:9tGcX2YHhvxqpyNdNFENllv_xpxwks7KYO5i8-sB-K_hL0RoW3IyYg>
    <xmx:9tGcX5U2onrPTY5DCrYx37_kSVg_yFy54j9swITacjspWSxQTDUigA>
    <xmx:9tGcX8nfurcB4c5I9mzu9BXSjT3TnmZeKgm4UmUH-bqVuNrNQHizBQ>
    <xmx:99GcXxy97Kv-PqaQZ5BU63SGcbsSrlTrMU3CBxeL-3KQGGkEOGVD_Q>
Date: Sat, 31 Oct 2020 03:54:42 +0100
From: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>
To: Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <JGross@suse.com>,
	"frederic.pierret@qubes-os.org" <frederic.pierret@qubes-os.org>,
	"George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Message-ID: <20201031025442.GF1447@mail-itl>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="bwMunqOr7B7rzQrh"
Content-Disposition: inline
In-Reply-To: <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>


--bwMunqOr7B7rzQrh
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue

On Sat, Oct 31, 2020 at 02:34:32AM +0000, Dario Faggioli wrote:
> On Tue, 2020-10-27 at 17:06 +0100, Frédéric Pierret wrote:
> >
> > Ok the server got frozen just a few minutes after my mail and I got
> > now:
> > 'r': https://gist.github.com/fepitre/78541f555902275d906d627de2420571
> >
> From the scheduler point of view, things seem fine:
>
> (XEN) sched_smt_power_savings: disabled
> (XEN) NOW=770188952085
> (XEN) Online Cpus: 0-15
> (XEN) Cpupool 0:
> (XEN) Cpus: 0-15
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> (XEN) Scheduler: SMP Credit Scheduler rev2 (credit2)
> (XEN) Active queues: 2
> (XEN) 	default-weight     = 256
> (XEN) Runqueue 0:
> (XEN) 	ncpus              = 8
> (XEN) 	cpus               = 0-7
> (XEN) 	max_weight         = 256
> (XEN) 	pick_bias          = 1
> (XEN) 	instload           = 7
> (XEN) 	aveload            = 2021119 (~770%)
> (XEN) 	idlers: 00000000,00000000
> (XEN) 	tickled: 00000000,00000000
> (XEN) 	fully idle cores: 00000000,00000000
> (XEN) Runqueue 1:
> (XEN) 	ncpus              = 8
> (XEN) 	cpus               = 8-15
> (XEN) 	max_weight         = 256
> (XEN) 	pick_bias          = 9
> (XEN) 	instload           = 8
> (XEN) 	aveload            = 2097259 (~800%)
> (XEN) 	idlers: 00000000,00000000
> (XEN) 	tickled: 00000000,00000200
> (XEN) 	fully idle cores: 00000000,00000000
>
> The system is pretty busy, but not overloaded.
>
> Below we see that CPU 3 is running the idle vCPU, but it's marked as
> neither idle nor tickled.
>
> It may be running a tasklet (the one that dumps the debug key output, I
> guess).
>
> Credits are fine, I don't see any strange values that may indicate
> anomalies or something.
>
> All the CPUs are executing a vCPU, and there should be nothing that
> prevents them from making progress.
>
> There is one vCPU which apparently wants to run at 100% in pretty much
> all guests, and more than one in dom0.
>
> And I think I saw some spin_lock() in the call stacks, in the partial
> report of the '*' debug-key?

Yes, I see:

(XEN) *** Dumping CPU1 host state: ***
(...)
(XEN) Xen call trace:
(XEN)    [<ffff82d040223625>] R _spin_lock+0x35/0x40
(XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
(XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
(XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
(XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
(XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
(XEN)    [<ffff82d0402d64e6>] S x86_64/entry.S#process_softirqs+0x6/0x20
(...)
(XEN) *** Dumping CPU2 host state: ***
(XEN) Xen call trace:
(XEN)    [<ffff82d040223622>] R _spin_lock+0x32/0x40
(XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
(XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
(XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
(XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
(XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
(XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30

(XEN) *** Dumping CPU5 host state: ***
(XEN) Xen call trace:
(XEN)    [<ffff82d040223622>] R _spin_lock+0x32/0x40
(XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
(XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
(XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
(XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
(XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
(XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30

(XEN) *** Dumping CPU6 host state: ***
(XEN) Xen call trace:
(XEN)    [<ffff82d040223625>] R _spin_lock+0x35/0x40
(XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
(XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
(XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
(XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
(XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
(XEN)    [<ffff82d0402d64e6>] S x86_64/entry.S#process_softirqs+0x6/0x20

(XEN) *** Dumping CPU7 host state: ***
(XEN) Xen call trace:
(XEN)    [<ffff82d040223625>] R _spin_lock+0x35/0x40
(XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
(XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
(XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
(XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
(XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
(XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30

And so on, for (almost?) all CPUs.

Note the '*' output is (I think) from a different instance of the
freeze, so it cannot be correlated with the other outputs...

> Maybe they're stuck in the kernel, not in Xen? Thoughts?

Given the above spin locks, I don't think so. But also, even if they are
stuck in the kernel, it clearly happened after the 4.13 -> 4.14 upgrade...

> (XEN) Domain info:
> (XEN) 	Domain: 0 w 256 c 0 v 16
> (XEN) 	  1: [0.0] flags=0 cpu=5 credit=10553147 [w=256] load=17122 (~6%)
> (XEN) 	  2: [0.1] flags=0 cpu=4 credit=10570606 [w=256] load=13569 (~5%)
> (XEN) 	  3: [0.2] flags=0 cpu=7 credit=10605188 [w=256] load=13465 (~5%)
> (XEN) 	  4: [0.3] flags=2 cpu=11 credit=9998469 [w=256] load=262144 (~100%)
> (XEN) 	  5: [0.4] flags=0 cpu=0 credit=10533686 [w=256] load=13619 (~5%)
> (XEN) 	  6: [0.5] flags=a cpu=9 credit=1101 [w=256] load=0 (~0%)
> (XEN) 	  7: [0.6] flags=2 cpu=2 credit=10621802 [w=256] load=13526 (~5%)
> (XEN) 	  8: [0.7] flags=2 cpu=1 credit=10670607 [w=256] load=13453 (~5%)
> (XEN) 	  9: [0.8] flags=2 cpu=7 credit=10649858 [w=256] load=13502 (~5%)
> (XEN) 	 10: [0.9] flags=0 cpu=3 credit=10550566 [w=256] load=13477 (~5%)
> (XEN) 	 11: [0.10] flags=2 cpu=4 credit=10644321 [w=256] load=13539 (~5%)
> (XEN) 	 12: [0.11] flags=2 cpu=1 credit=10602374 [w=256] load=13471 (~5%)
> (XEN) 	 13: [0.12] flags=0 cpu=6 credit=10617262 [w=256] load=13801 (~5%)
> (XEN) 	 14: [0.13] flags=2 cpu=8 credit=9998664 [w=256] load=262144 (~100%)
> (XEN) 	 15: [0.14] flags=0 cpu=3 credit=10603305 [w=256] load=17020 (~6%)
> (XEN) 	 16: [0.15] flags=0 cpu=5 credit=10591312 [w=256] load=13523 (~5%)
> (XEN) 	Domain: 1 w 256 c 0 v 2
> (XEN) 	 17: [1.0] flags=2 cpu=6 credit=4916769 [w=256] load=262144 (~100%)
> (XEN) 	 18: [1.1] flags=0 cpu=13 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 2 w 256 c 0 v 2
> (XEN) 	 19: [2.0] flags=2 cpu=5 credit=4982064 [w=256] load=262144 (~100%)
> (XEN) 	 20: [2.1] flags=0 cpu=14 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 3 w 256 c 0 v 2
> (XEN) 	 21: [3.0] flags=2 cpu=1 credit=5200781 [w=256] load=262144 (~100%)
> (XEN) 	 22: [3.1] flags=0 cpu=5 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 4 w 256 c 0 v 2
> (XEN) 	 23: [4.0] flags=12 cpu=0 credit=5395149 [w=256] load=262144 (~100%)
> (XEN) 	 24: [4.1] flags=0 cpu=8 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 5 w 256 c 0 v 2
> (XEN) 	 25: [5.0] flags=2 cpu=2 credit=5306461 [w=256] load=262144 (~100%)
> (XEN) 	 26: [5.1] flags=0 cpu=15 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 6 w 256 c 0 v 8
> (XEN) 	 27: [6.0] flags=12 cpu=10 credit=7915602 [w=256] load=262144 (~100%)
> (XEN) 	 28: [6.1] flags=0 cpu=10 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 29: [6.2] flags=0 cpu=15 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 30: [6.3] flags=0 cpu=8 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 31: [6.4] flags=0 cpu=0 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 32: [6.5] flags=0 cpu=6 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 33: [6.6] flags=0 cpu=11 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 34: [6.7] flags=0 cpu=9 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 7 w 256 c 0 v 2
> (XEN) 	 35: [7.0] flags=2 cpu=4 credit=5297013 [w=256] load=262144 (~100%)
> (XEN) 	 36: [7.1] flags=0 cpu=11 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 8 w 256 c 0 v 4
> (XEN) 	 37: [8.0] flags=0 cpu=14 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 38: [8.1] flags=2 cpu=7 credit=5240630 [w=256] load=262144 (~100%)
> (XEN) 	 39: [8.2] flags=0 cpu=13 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 40: [8.3] flags=0 cpu=8 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 9 w 256 c 0 v 2
> (XEN) 	 41: [9.0] flags=0 cpu=0 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 42: [9.1] flags=12 cpu=13 credit=7910266 [w=256] load=262144 (~100%)
> (XEN) 	Domain: 10 w 256 c 0 v 2
> (XEN) 	 43: [10.0] flags=12 cpu=12 credit=8045458 [w=256] load=262144 (~100%)
> (XEN) 	 44: [10.1] flags=0 cpu=8 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 11 w 256 c 0 v 2
> (XEN) 	 45: [11.0] flags=12 cpu=14 credit=7575284 [w=256] load=262144 (~100%)
> (XEN) 	 46: [11.1] flags=0 cpu=12 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 12 w 256 c 0 v 2
> (XEN) 	 47: [12.0] flags=2 cpu=15 credit=8014099 [w=256] load=262144 (~100%)
> (XEN) 	 48: [12.1] flags=0 cpu=6 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	Domain: 13 w 256 c 0 v 2
> (XEN) 	 49: [13.0] flags=0 cpu=7 credit=10500000 [w=256] load=0 (~0%)
> (XEN) 	 50: [13.1] flags=0 cpu=15 credit=10500000 [w=256] load=0 (~0%)
> (XEN) Runqueue 0:
> (XEN) CPU[00] runq=0, sibling={0}, core={0-7}
> (XEN) 	run: [4.0] flags=2 cpu=0 credit=5255200 [w=256] load=262144 (~100%)
> (XEN) CPU[01] runq=0, sibling={1}, core={0-7}
> (XEN) 	run: [3.0] flags=2 cpu=1 credit=5057668 [w=256] load=262144 (~100%)
> (XEN) CPU[02] runq=0, sibling={2}, core={0-7}
> (XEN) 	run: [5.0] flags=12 cpu=2 credit=5180785 [w=256] load=262144 (~100%)
> (XEN) CPU[03] runq=0, sibling={3}, core={0-7}
> (XEN) CPU[04] runq=0, sibling={4}, core={0-7}
> (XEN) 	run: [7.0] flags=2 cpu=4 credit=5215323 [w=256] load=262144 (~100%)
> (XEN) CPU[05] runq=0, sibling={5}, core={0-7}
> (XEN) 	run: [2.0] flags=2 cpu=5 credit=4816142 [w=256] load=262144 (~100%)
> (XEN) CPU[06] runq=0, sibling={6}, core={0-7}
> (XEN) 	run: [1.0] flags=2 cpu=6 credit=4755772 [w=256] load=262144 (~100%)
> (XEN) CPU[07] runq=0, sibling={7}, core={0-7}
> (XEN) 	run: [8.1] flags=12 cpu=7 credit=5175342 [w=256] load=262144 (~100%)
> (XEN) RUNQ:
> (XEN) Runqueue 1:
> (XEN) CPU[08] runq=1, sibling={8}, core={8-15}
> (XEN) 	run: [0.13] flags=2 cpu=8 credit=9998664 [w=256] load=262144 (~100%)
> (XEN) CPU[09] runq=1, sibling={9}, core={8-15}
> (XEN) 	run: [0.5] flags=a cpu=9 credit=1101 [w=256] load=0 (~0%)
> (XEN) CPU[10] runq=1, sibling={10}, core={8-15}
> (XEN) 	run: [6.0] flags=2 cpu=10 credit=7764532 [w=256] load=262144 (~100%)
> (XEN) CPU[11] runq=1, sibling={11}, core={8-15}
> (XEN) 	run: [0.3] flags=2 cpu=11 credit=9998469 [w=256] load=262144 (~100%)
> (XEN) CPU[12] runq=1, sibling={12}, core={8-15}
> (XEN) 	run: [10.0] flags=2 cpu=12 credit=7967846 [w=256] load=262144 (~100%)
> (XEN) CPU[13] runq=1, sibling={13}, core={8-15}
> (XEN) 	run: [9.1] flags=12 cpu=13 credit=7832232 [w=256] load=262144 (~100%)
> (XEN) CPU[14] runq=1, sibling={14}, core={8-15}
> (XEN) 	run: [11.0] flags=2 cpu=14 credit=7509378 [w=256] load=262144 (~100%)
> (XEN) CPU[15] runq=1, sibling={15}, core={8-15}
> (XEN) 	run: [12.0] flags=2 cpu=15 credit=7971164 [w=256] load=262144 (~100%)
> (XEN) RUNQ:
> (XEN) CPUs info:
> (XEN) CPU[00] current=d4v0, curr=d4v0, prev=NULL
> (XEN) CPU[01] current=d3v0, curr=d3v0, prev=NULL
> (XEN) CPU[02] current=d5v0, curr=d5v0, prev=NULL
> (XEN) CPU[03] current=d[IDLE]v3, curr=d[IDLE]v3, prev=NULL
> (XEN) CPU[04] current=d7v0, curr=d7v0, prev=NULL
> (XEN) CPU[05] current=d2v0, curr=d2v0, prev=NULL
> (XEN) CPU[06] current=d1v0, curr=d1v0, prev=NULL
> (XEN) CPU[07] current=d8v1, curr=d8v1, prev=NULL
> (XEN) CPU[08] current=d0v13, curr=d0v13, prev=NULL
> (XEN) CPU[09] current=d0v5, curr=d0v5, prev=NULL
> (XEN) CPU[10] current=d6v0, curr=d6v0, prev=NULL
> (XEN) CPU[11] current=d0v3, curr=d0v3, prev=NULL
> (XEN) CPU[12] current=d10v0, curr=d10v0, prev=NULL
> (XEN) CPU[13] current=d9v1, curr=d9v1, prev=NULL
> (XEN) CPU[14] current=d11v0, curr=d11v0, prev=NULL
> (XEN) CPU[15] current=d12v0, curr=d12v0, prev=NULL
>



-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--bwMunqOr7B7rzQrh
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl+c0fMACgkQ24/THMrX
1ywgmQf/UhU+Md2Zt3o/m5Ux0c6dVpjFrh8NsDs9H2MnEi8kWhPyAb714yZU5Ha9
hz0jO2s8kO7iSvIOYQdf45RGorCL0vP971dqhPClrfEZ+MY88Yyb93CYdqUva5af
UJnGowutOanXQcuvJMRS1yaa9gfSPXRNPqjfg4Ky6L6CDBhb/ceK9jmq8W8Dn/2R
ZNS9XEuxEC3Q94V88eGiSUPPozsguokI5apg/amHMiBamjZ9fD9RiIZkuNNDoEDr
3Uh7cjc9sZlzGBc0+H7pxSmOPbD8sx8G2PLbJQ/EJivYZA1blzPLPUUB3HjdrTKh
spqO1IhyPrx4SxdAgVoMocDS9fqeHQ==
=NNx7
-----END PGP SIGNATURE-----

--bwMunqOr7B7rzQrh--


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 03:28:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 03:28:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16761.41577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYhYh-0003zy-Sk; Sat, 31 Oct 2020 03:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16761.41577; Sat, 31 Oct 2020 03:28:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYhYh-0003zr-Pw; Sat, 31 Oct 2020 03:28:03 +0000
Received: by outflank-mailman (input) for mailman id 16761;
 Sat, 31 Oct 2020 03:28:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q0JK=EG=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1kYhYg-0003zm-Bv
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 03:28:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31a9884b-cd54-4ded-976a-67aa5fe2e73a;
 Sat, 31 Oct 2020 03:28:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4531BACDF;
 Sat, 31 Oct 2020 03:28:00 +0000 (UTC)
X-Inumbo-ID: 31a9884b-cd54-4ded-976a-67aa5fe2e73a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1604114880;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=7rwCsRcWUPaeuo0k5aDo+78o6carpCekxQlbYiWQtWA=;
	b=PTh13T4XXvn4gXC/VXfyk8Ce3bPH7I5O2EZeU0L9+5PsA1reui5aHbhWZ0T92R4qS33tjH
	je1wD27H1R216LA800PvKimk7UDbZyxEcZr02Yi3ST8gHEp7JkXMXhiOnms2trZBNZ9TZF
	maMgXb9RzUOHDqpKnfXP/14uBaufrT8=
Message-ID: <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
From: Dario Faggioli <dfaggioli@suse.com>
To: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>
Cc: Juergen Gross <JGross@suse.com>, "frederic.pierret@qubes-os.org"
	 <frederic.pierret@qubes-os.org>, "George.Dunlap@citrix.com"
	 <George.Dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>, "andrew.cooper3@citrix.com"
	 <andrew.cooper3@citrix.com>
Date: Sat, 31 Oct 2020 04:27:58 +0100
In-Reply-To: <20201031025442.GF1447@mail-itl>
References: <a8e9113c-70ef-53fa-e340-be15eb3cba57@qubes-os.org>
	 <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
	 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
	 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
	 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
	 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
	 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
	 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
	 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
	 <20201031025442.GF1447@mail-itl>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-tKjOkQ94W1TVlSAcpjUA"
User-Agent: Evolution 3.38.1 (by Flathub.org) 
MIME-Version: 1.0


--=-tKjOkQ94W1TVlSAcpjUA
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Sat, 2020-10-31 at 03:54 +0100, marmarek@invisiblethingslab.com
wrote:
> On Sat, Oct 31, 2020 at 02:34:32AM +0000, Dario Faggioli wrote:
> (XEN) *** Dumping CPU7 host state: ***
> (XEN) Xen call trace:
> (XEN)    [<ffff82d040223625>] R _spin_lock+0x35/0x40
> (XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
> (XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
> (XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
> (XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
> (XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
> (XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30
>
> And so on, for (almost?) all CPUs.
>
Yes, you're right.

So, I indeed saw the spin_lock() calls, but I somehow thought I had
seen them in the guests' contexts (for which we probably don't even
print the call stack! :-O). Instead they're there in the host ones.

Sorry for the oversight.

> Note the '*' output is (I think) from a different instance of the
> freeze, so it cannot be correlated with the other outputs...
>
> > Maybe they're stuck in the kernel, not in Xen? Thoughts?
>
> Given the above spin locks, I don't think so. But also, even if they
> are
> stuck in the kernel, it clearly happened after the 4.13 -> 4.14
> upgrade...
>
Right. So, it seems like a livelock (I would say). It might happen on
some resource which is shared among domains. And it was introduced (the
livelock, not the resource or the sharing) in 4.14.

Just giving a quick look, I see that vmx_do_resume() calls
vmx_clear_vmcs() which calls on_selected_cpus() which takes the
call_lock spinlock.

And none of these seems to have received much attention recently.

But this is just a really basic analysis!

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-tKjOkQ94W1TVlSAcpjUA
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl+c2b8ACgkQFkJ4iaW4
c+6q5A/8D5ZEfxXTKlS8ZRFVRiFTmyTMgljXxR63FMsSA+RHyXIjbcSxLVfxio7E
xjcVK+57qlbSeQAoFgDdC03dv7P5WUKEpEmzu73AZjNV84U6AfmbwDANmtV0rFx0
DJLgjrQiiXXDQ8L8qeshHuoQPN9r1n83mAO0JSgxDQLtc+sqE10KIr/9pYieWLmg
VU0IkA14yX6DDQUv7Ok0U4P+NUOEpVdIjZCd1HKdXWtolyDeYmcAsKRQMGHpgTAd
xNe0EyAVbQa6/zTeIE+JbMQEWoZkygj1jwgAveByJ/KutFEE3WW9vfKgUyICRE0m
dX54OUQfXiBAkb5A5zmsKBZjNYmseodoY9bj4Rpqwlz14LVFuO+BQsp/Op9dUHmt
dqJIsrOGFIMpr1CT21QizPxi2NKGbqMSBHzrv2aRvgxLGdJdl/PjNrJL547BtDvR
VIQW+vteVQqumBxWo0fKZ95PXlWLiVMSECc1WL05MKxrF+q4wQWz7qvv7JQ2FP6T
tKwLFuCfvHIRnofoO32ProiK0Ybnff7mVd4caco3D9oY7hDTZpjm8bGvD1zsDOcG
SfeSEKh83mAPGUldxkliLE+8F8HsxLhx19tS2q18moqAoMKGANZtN/SklyW7yS3H
fR0gmbAgBSjYgwTIom6h5Cfmx8B/rLVOBZDS5/njrMhIyMlWAOY=
=xoxF
-----END PGP SIGNATURE-----

--=-tKjOkQ94W1TVlSAcpjUA--



From xen-devel-bounces@lists.xenproject.org Sat Oct 31 04:08:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 04:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16767.41590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYiBl-0007VC-0D; Sat, 31 Oct 2020 04:08:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16767.41590; Sat, 31 Oct 2020 04:08:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYiBk-0007V5-TW; Sat, 31 Oct 2020 04:08:24 +0000
Received: by outflank-mailman (input) for mailman id 16767;
 Sat, 31 Oct 2020 04:08:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CThl=EG=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kYiBj-0007V0-FH
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 04:08:23 +0000
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12f4fab8-4f74-4eff-a87f-b6b8f78a636d;
 Sat, 31 Oct 2020 04:08:22 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 13BFE5C00FB;
 Sat, 31 Oct 2020 00:08:22 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Sat, 31 Oct 2020 00:08:22 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id D1FB13280063;
 Sat, 31 Oct 2020 00:08:20 -0400 (EDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=CThl=EG=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
	id 1kYiBj-0007V0-FH
	for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 04:08:23 +0000
X-Inumbo-ID: 12f4fab8-4f74-4eff-a87f-b6b8f78a636d
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 12f4fab8-4f74-4eff-a87f-b6b8f78a636d;
	Sat, 31 Oct 2020 04:08:22 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
	by mailout.nyi.internal (Postfix) with ESMTP id 13BFE5C00FB;
	Sat, 31 Oct 2020 00:08:22 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
  by compute3.internal (MEProxy); Sat, 31 Oct 2020 00:08:22 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; bh=+PAx6r
	Uq+oxuTLVOrzZ1M1wnf5y1nOwQIKtU+KCWbVo=; b=CMQhY+hZ+Ve3M/nkA8c9nu
	fPpOXweVrLGdGXs+7ZzFzpLIg+EGmvrD7s9wB009rERu0I03bSRSNntFJQaWHneC
	ApjeOtJKea1sVqGU7sr9CoT/ywYtlQwAX7OcYzv+m8MztAFRvS7ogqVXbktkHCrT
	OCxfFljL+Fh9Q8rCwL/r9tu7vnbf99oVkqthqKs2RxE/N2T64Bb7KPJTv3OrIZdP
	F/qmsWy61UdKkzJ3Ou053TF9ou5gxEZPAZZzWoUBadqNETuzZAqElhhV1ikoMtlX
	G3USJ85V5adKWmPNhTM5k47zbTlNwDRH0qhmj+6cN2RdgHAdnigDSLKwZ0IXh6sA
	==
X-ME-Sender: <xms:NeOcX2y6Vtxg_ioPp6TWRuJ2C6oqbag1ufH284QH28eWRyGdNw4FdA>
    <xme:NeOcXySE8Iexr6uYQ6uZke198KwpqJ0AOK7pbJZarOn3NBlatLQCPDv3rZmWo8L5z
    o7OMIq4wjgOlA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedujedrleeigdeigecutefuodetggdotefrodftvf
    curfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfghnecu
    uegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmdenuc
    fjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepfdhmrghrmhgr
    rhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmfdcuoehmrghrmhgrrh
    gvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgv
    rhhnpeegieffhefgvedtteelueegtedtkedvkefhhfekgeduheefgfefheelveefiefgvd
    enucfkphepledurdeigedrudejtddrkeelnecuvehluhhsthgvrhfuihiivgeptdenucfr
    rghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhih
    hnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:NeOcX4XzsPgQY14pqTDEfKbhQFtWv5eaKprg0chvLhgvHSi4AmtBFg>
    <xmx:NeOcX8iZoPfppcQhod2eerL6dJbhYvo99ZgOtU1g_1IYeXLg1Vd9eg>
    <xmx:NeOcX4DuLiqdYFtdf4-W6IAwRzG_gHa8igsUWY78BLoEDTZWJC9E9A>
    <xmx:NuOcX7NLeUvTqzepxZKvpQd7PTVoM66amVUt4xKs8njUIjle9qOW7A>
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de [91.64.170.89])
	by mail.messagingengine.com (Postfix) with ESMTPA id D1FB13280063;
	Sat, 31 Oct 2020 00:08:20 -0400 (EDT)
Date: Sat, 31 Oct 2020 05:08:17 +0100
From: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>
To: Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <JGross@suse.com>,
	"frederic.pierret@qubes-os.org" <frederic.pierret@qubes-os.org>,
	"George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
Message-ID: <20201031040817.GG1447@mail-itl>
References: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
 <20201031025442.GF1447@mail-itl>
 <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="TXtjXAp+Y+VkYOQB"
Content-Disposition: inline
In-Reply-To: <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>


--TXtjXAp+Y+VkYOQB
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue

On Sat, Oct 31, 2020 at 04:27:58AM +0100, Dario Faggioli wrote:
> On Sat, 2020-10-31 at 03:54 +0100, marmarek@invisiblethingslab.com
> wrote:
> > On Sat, Oct 31, 2020 at 02:34:32AM +0000, Dario Faggioli wrote:
> > (XEN) *** Dumping CPU7 host state: ***
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82d040223625>] R _spin_lock+0x35/0x40
> > (XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
> > (XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
> > (XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
> > (XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
> > (XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
> > (XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30
> >
> > And so on, for (almost?) all CPUs.
>
> Right. So, it seems like a live (I would say) lock. It might happen on
> some resource which is shared among domains. And introduced (the
> livelock, not the resource or the sharing) in 4.14.
>
> Just giving a quick look, I see that vmx_do_resume() calls
> vmx_clear_vmcs() which calls on_selected_cpus() which takes the
> call_lock spinlock.
>
> And none of these seems to have received much attention recently.
>
> But this is just a really basic analysis!

I've looked at on_selected_cpus() and my understanding is this:
1. take call_lock spinlock
2. set function+args+what cpus to be called in a global "call_data" variable
3. ask CPUs to execute that function (smp_send_call_function_mask() call)
4. wait for all requested CPUs to execute the function, still holding
the spinlock
5. only then - release the spinlock

So, if any CPU does not execute requested function for any reason, it
will keep the call_lock locked forever.

I don't see any CPU waiting on step 4, but also I don't see call traces
from CPU3 and CPU8 in the log - that's because they are in guest (dom0
here) context, right? I do see "guest state" dumps from them.
The only three CPUs that logged Xen call traces and are not waiting on that
spin lock are:

CPU0:
(XEN) Xen call trace:
(XEN)    [<ffff82d040240f89>] R vcpu_unblock+0x9/0x50
(XEN)    [<ffff82d0402e0171>] S vcpu_kick+0x11/0x60
(XEN)    [<ffff82d0402259c8>] S tasklet.c#do_tasklet_work+0x68/0xc0
(XEN)    [<ffff82d040225a59>] S tasklet.c#tasklet_softirq_action+0x39/0x60
(XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
(XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30

CPU4:
(XEN) Xen call trace:
(XEN)    [<ffff82d040227043>] R set_timer+0x133/0x220
(XEN)    [<ffff82d040234e90>] S credit.c#csched_tick+0/0x3a0
(XEN)    [<ffff82d04022660f>] S timer.c#timer_softirq_action+0x9f/0x300
(XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
(XEN)    [<ffff82d0402d64e6>] S x86_64/entry.S#process_softirqs+0x6/0x20

CPU14:
(XEN) Xen call trace:
(XEN)    [<ffff82d040222dc0>] R do_softirq+0/0x10
(XEN)    [<ffff82d0402d64e6>] S x86_64/entry.S#process_softirqs+0x6/0x20

I'm not sure if any of those is related to that spin lock,
on_selected_cpus() call, or anything like that...

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--TXtjXAp+Y+VkYOQB
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl+c4zEACgkQ24/THMrX
1yxW9wf/ZPjzMYLiq0CsKNmHRuOGrKJyIwcynFReZ2Fe7UmppKsmw9DF7j15m/kQ
mHcS024GreWDMyNuNkgJTMcpaVSpxXa1khFDnBt3Dp3VA9mrdrTRrY8kio2cQkdJ
Wt2Vn/dHxAjFCmKsidEBij+3BzVDxkH6vOxT6+XPe4aLOMY4xGTSg8BI0YNi+IT6
C9srC8rHWqgfd4k2DdWX6iNbKlxl591Cshb8Sh0RfIjRFdEALM+PAmhmk6A8x7db
We/fh3cwI8UqeRqImVSvgPwdCCLPaTKt20p14roJ194DMtxrWLtH4zoDiWWAS2hi
9RmhiOviKbJQwpyEtkexAGWHPVlcAg==
=L++n
-----END PGP SIGNATURE-----

--TXtjXAp+Y+VkYOQB--


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 05:42:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 05:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16775.41605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYjeb-0007r3-1e; Sat, 31 Oct 2020 05:42:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16775.41605; Sat, 31 Oct 2020 05:42:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYjea-0007qw-Up; Sat, 31 Oct 2020 05:42:16 +0000
Received: by outflank-mailman (input) for mailman id 16775;
 Sat, 31 Oct 2020 05:42:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYjeZ-0007qM-Si
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 05:42:15 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ccee9a45-b7ce-4b75-8e89-010cfefd79f4;
 Sat, 31 Oct 2020 05:42:08 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYjeS-0002yc-HY; Sat, 31 Oct 2020 05:42:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYjeS-00048X-9y; Sat, 31 Oct 2020 05:42:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYjeS-0004O3-9D; Sat, 31 Oct 2020 05:42:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kYjeZ-0007qM-Si
	for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 05:42:15 +0000
X-Inumbo-ID: ccee9a45-b7ce-4b75-8e89-010cfefd79f4
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
	id ccee9a45-b7ce-4b75-8e89-010cfefd79f4;
	Sat, 31 Oct 2020 05:42:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=r3c7D9J/0w8ck/iTmQQNOZa6luWE8/MEGrWuJ5Kschs=; b=pVGTffk0mwAfr3ykmb14DJW3QD
	MY7oPhPGxAShBkg+/Q29jlP8UUVMqHL8CCTvuT37zPflxBZ/jDZ16h/kprBl4q9YxCaY+jE8tv1eP
	n86OQf+Njs+Rhwvu+9JJKHktsIzCMGARx3r85vWXf3hB4YdhBYNoIFJTZtozH1cjdVAU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYjeS-0002yc-HY; Sat, 31 Oct 2020 05:42:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYjeS-00048X-9y; Sat, 31 Oct 2020 05:42:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYjeS-0004O3-9D; Sat, 31 Oct 2020 05:42:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156316-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156316: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8cadcaa13d882816052ad4dec77faddd44a1c108
X-Osstest-Versions-That:
    ovmf=c26e291375d1808a0ec5af9002dd0ebca5959020
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 05:42:08 +0000

flight 156316 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156316/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8cadcaa13d882816052ad4dec77faddd44a1c108
baseline version:
 ovmf                 c26e291375d1808a0ec5af9002dd0ebca5959020

Last test of basis   156294  2020-10-29 11:26:30 Z    1 days
Testing same since   156316  2020-10-30 10:41:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c26e291375..8cadcaa13d  8cadcaa13d882816052ad4dec77faddd44a1c108 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 05:44:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 05:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16086.41616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYjgs-00080a-Ev; Sat, 31 Oct 2020 05:44:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16086.41616; Sat, 31 Oct 2020 05:44:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYjgs-00080T-Bj; Sat, 31 Oct 2020 05:44:38 +0000
Received: by outflank-mailman (input) for mailman id 16086;
 Fri, 30 Oct 2020 17:27:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YQHy=EF=linux.ibm.com=fbarrat@srs-us1.protection.inumbo.net>)
 id 1kYYBF-0001pg-DI
 for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 17:27:13 +0000
Received: from mx0b-001b2d01.pphosted.com (unknown [148.163.158.5])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50bac738-e9ad-4ea3-a885-34fd9a27fecf;
 Fri, 30 Oct 2020 17:27:12 +0000 (UTC)
Received: from pps.filterd (m0098417.ppops.net [127.0.0.1])
 by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id
 09UH4JbG086980; Fri, 30 Oct 2020 13:26:39 -0400
Received: from pps.reinject (localhost [127.0.0.1])
 by mx0a-001b2d01.pphosted.com with ESMTP id 34gm93xm0w-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 30 Oct 2020 13:26:38 -0400
Received: from m0098417.ppops.net (m0098417.ppops.net [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 09UH5286089727;
 Fri, 30 Oct 2020 13:26:37 -0400
Received: from ppma01fra.de.ibm.com (46.49.7a9f.ip4.static.sl-reverse.com
 [159.122.73.70])
 by mx0a-001b2d01.pphosted.com with ESMTP id 34gm93xm01-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 30 Oct 2020 13:26:37 -0400
Received: from pps.filterd (ppma01fra.de.ibm.com [127.0.0.1])
 by ppma01fra.de.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 09UHH2Ts031290;
 Fri, 30 Oct 2020 17:26:35 GMT
Received: from b06cxnps4074.portsmouth.uk.ibm.com
 (d06relay11.portsmouth.uk.ibm.com [9.149.109.196])
 by ppma01fra.de.ibm.com with ESMTP id 34dwh0jff3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 30 Oct 2020 17:26:35 +0000
Received: from d06av25.portsmouth.uk.ibm.com (d06av25.portsmouth.uk.ibm.com
 [9.149.105.61])
 by b06cxnps4074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id
 09UHQWf831130076
 (version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 30 Oct 2020 17:26:32 GMT
Received: from d06av25.portsmouth.uk.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 81EE111C05C;
 Fri, 30 Oct 2020 17:26:32 +0000 (GMT)
Received: from d06av25.portsmouth.uk.ibm.com (unknown [127.0.0.1])
 by IMSVA (Postfix) with ESMTP id 60F4211C050;
 Fri, 30 Oct 2020 17:26:30 +0000 (GMT)
Received: from localhost.localdomain (unknown [9.145.85.67])
 by d06av25.portsmouth.uk.ibm.com (Postfix) with ESMTP;
 Fri, 30 Oct 2020 17:26:30 +0000 (GMT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=YQHy=EF=linux.ibm.com=fbarrat@srs-us1.protection.inumbo.net>)
	id 1kYYBF-0001pg-DI
	for xen-devel@lists.xenproject.org; Fri, 30 Oct 2020 17:27:13 +0000
X-Inumbo-ID: 50bac738-e9ad-4ea3-a885-34fd9a27fecf
Received: from mx0b-001b2d01.pphosted.com (unknown [148.163.158.5])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 50bac738-e9ad-4ea3-a885-34fd9a27fecf;
	Fri, 30 Oct 2020 17:27:12 +0000 (UTC)
Received: from pps.filterd (m0098417.ppops.net [127.0.0.1])
	by mx0a-001b2d01.pphosted.com (8.16.0.42/8.16.0.42) with SMTP id 09UH4JbG086980;
	Fri, 30 Oct 2020 13:26:39 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ibm.com; h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=pp1;
 bh=kRv/mdN0gxrWM4bjmBGarbvn1q9ZLkvFoS6y5DzvQks=;
 b=UWWtqWmXJl3nSrq5VxzL3fA7gftqPT2l/7zujH5kX5UvHZQ2eMEv18xIFli+pDzv7TND
 NhYhOuFvYct9W6WTKfBbUytZ8/Cf+k1x8aeC7Oe5QD8YYyP+4VpoSpOUMvDWDIAhxnB6
 c5FhsgKRalTM6aUBHcNHDfpT9wtGJzgUJ5nY4j/m0uXK5U6stXBYO3eHSNoVzcotT/Xy
 9WS3NIpRpBYAz5DKkKDBBTsOJC2EE7S2Jsz0m9qB01bvBxEWMhFx70vhSmjAfeVZzs6/
 v05iYc3aUMt+dm/aSv5oejFCOaASxJm22fB+sZfWIK+97WfQXSV6rjYLkmgp0HqWCmHh bQ== 
Received: from pps.reinject (localhost [127.0.0.1])
	by mx0a-001b2d01.pphosted.com with ESMTP id 34gm93xm0w-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 30 Oct 2020 13:26:38 -0400
Received: from m0098417.ppops.net (m0098417.ppops.net [127.0.0.1])
	by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 09UH5286089727;
	Fri, 30 Oct 2020 13:26:37 -0400
Received: from ppma01fra.de.ibm.com (46.49.7a9f.ip4.static.sl-reverse.com [159.122.73.70])
	by mx0a-001b2d01.pphosted.com with ESMTP id 34gm93xm01-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 30 Oct 2020 13:26:37 -0400
Received: from pps.filterd (ppma01fra.de.ibm.com [127.0.0.1])
	by ppma01fra.de.ibm.com (8.16.0.42/8.16.0.42) with SMTP id 09UHH2Ts031290;
	Fri, 30 Oct 2020 17:26:35 GMT
Received: from b06cxnps4074.portsmouth.uk.ibm.com (d06relay11.portsmouth.uk.ibm.com [9.149.109.196])
	by ppma01fra.de.ibm.com with ESMTP id 34dwh0jff3-1
	(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
	Fri, 30 Oct 2020 17:26:35 +0000
Received: from d06av25.portsmouth.uk.ibm.com (d06av25.portsmouth.uk.ibm.com [9.149.105.61])
	by b06cxnps4074.portsmouth.uk.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id 09UHQWf831130076
	(version=TLSv1/SSLv3 cipher=DHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
	Fri, 30 Oct 2020 17:26:32 GMT
Received: from d06av25.portsmouth.uk.ibm.com (unknown [127.0.0.1])
	by IMSVA (Postfix) with ESMTP id 81EE111C05C;
	Fri, 30 Oct 2020 17:26:32 +0000 (GMT)
Received: from d06av25.portsmouth.uk.ibm.com (unknown [127.0.0.1])
	by IMSVA (Postfix) with ESMTP id 60F4211C050;
	Fri, 30 Oct 2020 17:26:30 +0000 (GMT)
Received: from localhost.localdomain (unknown [9.145.85.67])
	by d06av25.portsmouth.uk.ibm.com (Postfix) with ESMTP;
	Fri, 30 Oct 2020 17:26:30 +0000 (GMT)
Subject: Re: [PATCH v2 20/39] docs: ABI: testing: make the files compatible
 with ReST output
To: Mauro Carvalho Chehab <mchehab+huawei@kernel.org>,
        Linux Doc Mailing List <linux-doc@vger.kernel.org>
Cc: "Gautham R. Shenoy" <ego@linux.vnet.ibm.com>,
        "Jason A. Donenfeld" <Jason@zx2c4.com>,
        =?UTF-8?Q?Javier_Gonz=c3=a1lez?=
 <javier@javigon.com>,
        Jonathan Corbet <corbet@lwn.net>,
        "Martin K. Petersen" <martin.petersen@oracle.com>,
        "Rafael J. Wysocki" <rjw@rjwysocki.net>,
        Alexander Shishkin <alexander.shishkin@linux.intel.com>,
        Alexandre Belloni <alexandre.belloni@bootlin.com>,
        Alexandre Torgue <alexandre.torgue@st.com>,
        Andrew Donnellan <ajd@linux.ibm.com>,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Baolin Wang <baolin.wang7@gmail.com>,
        Benson Leung <bleung@chromium.org>,
        Boris Ostrovsky <boris.ostrovsky@oracle.com>,
        Bruno Meneguele <bmeneg@redhat.com>,
        Chunyan Zhang <zhang.lyra@gmail.com>, Dan Murphy <dmurphy@ti.com>,
        Dan Williams <dan.j.williams@intel.com>,
        Enric Balletbo i Serra <enric.balletbo@collabora.com>,
        Fabrice Gasnier <fabrice.gasnier@st.com>,
        Felipe Balbi <balbi@kernel.org>,
        Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
        Guenter Roeck <groeck@chromium.org>, Hanjun Guo <guohanjun@huawei.com>,
        Heikki Krogerus <heikki.krogerus@linux.intel.com>,
        Jens Axboe <axboe@kernel.dk>,
        Johannes Thumshirn
 <johannes.thumshirn@wdc.com>,
        Jonathan Cameron <jic23@kernel.org>, Juergen Gross <jgross@suse.com>,
        Konstantin Khlebnikov <koct9i@gmail.com>,
        Kranthi Kuntala <kranthi.kuntala@intel.com>,
        Lakshmi Ramasubramanian <nramas@linux.microsoft.com>,
        Lars-Peter Clausen <lars@metafoo.de>, Len Brown <lenb@kernel.org>,
        Leonid Maksymchuk <leonmaxx@gmail.com>,
        Ludovic Desroches <ludovic.desroches@microchip.com>,
        Mario Limonciello <mario.limonciello@dell.com>,
        Mark Gross <mgross@linux.intel.com>,
        Maxime Coquelin <mcoquelin.stm32@gmail.com>,
        Michael Ellerman <mpe@ellerman.id.au>,
        Mika Westerberg <mika.westerberg@linux.intel.com>,
        Mike Kravetz <mike.kravetz@oracle.com>,
        Mimi Zohar <zohar@linux.ibm.com>, Nayna Jain <nayna@linux.ibm.com>,
        Nicolas Ferre
 <nicolas.ferre@microchip.com>,
        Niklas Cassel <niklas.cassel@wdc.com>,
        Oded Gabbay <oded.gabbay@gmail.com>, Oleh Kravchenko <oleg@kaa.org.ua>,
        Orson Zhai <orsonzhai@gmail.com>, Pavel Machek <pavel@ucw.cz>,
        Pawan Gupta <pawan.kumar.gupta@linux.intel.com>,
        Peter Meerwald-Stadler <pmeerw@pmeerw.net>,
        Peter Rosin <peda@axentia.se>, Petr Mladek <pmladek@suse.com>,
        Philippe Bergheaud <felix@linux.ibm.com>,
        Richard Cochran <richardcochran@gmail.com>,
        Sebastian Reichel <sre@kernel.org>,
        Sergey Senozhatsky <sergey.senozhatsky@gmail.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Thinh Nguyen <Thinh.Nguyen@synopsys.com>,
        Thomas Gleixner <tglx@linutronix.de>, Tom Rix <trix@redhat.com>,
        Vaibhav Jain <vaibhav@linux.ibm.com>,
        Vineela Tummalapalli <vineela.tummalapalli@intel.com>,
        Vishal Verma <vishal.l.verma@intel.com>, linux-acpi@vger.kernel.org,
        linux-arm-kernel@lists.infradead.org, linux-iio@vger.kernel.org,
        linux-kernel@vger.kernel.org, linux-mm@kvack.org,
        linux-pm@vger.kernel.org, linux-stm32@st-md-mailman.stormreply.com,
        linux-usb@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
        netdev@vger.kernel.org, xen-devel@lists.xenproject.org,
        Jonathan Cameron <Jonathan.Cameron@huawei.com>
References: <cover.1604042072.git.mchehab+huawei@kernel.org>
 <58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
From: Frederic Barrat <fbarrat@linux.ibm.com>
Message-ID: <94520e35-6b73-c951-206e-0031d41ebf83@linux.ibm.com>
Date: Fri, 30 Oct 2020 18:26:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <58cf3c2d611e0197fb215652719ebd82ca2658db.1604042072.git.mchehab+huawei@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-TM-AS-GCONF: 00
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.312,18.0.737
 definitions=2020-10-30_07:2020-10-30,2020-10-30 signatures=0
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 adultscore=0
 priorityscore=1501 spamscore=0 mlxscore=0 malwarescore=0 bulkscore=0
 phishscore=0 clxscore=1011 mlxlogscore=999 impostorscore=0
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2009150000 definitions=main-2010300122



On 30/10/2020 at 08:40, Mauro Carvalho Chehab wrote:
> Some files over there won't parse well by Sphinx.
> 
> Fix them.
> 
> Acked-by: Jonathan Cameron<Jonathan.Cameron@huawei.com>  # for IIO
> Signed-off-by: Mauro Carvalho Chehab<mchehab+huawei@kernel.org>
> ---
...
>   Documentation/ABI/testing/sysfs-class-cxl     |  15 +-
...
>   Documentation/ABI/testing/sysfs-class-ocxl    |   3 +


Patches 20, 28 and 31 look good for cxl and ocxl.
Acked-by: Frederic Barrat <fbarrat@linux.ibm.com>

   Fred


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 07:45:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 07:45:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16819.41629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYlZe-0001Of-6F; Sat, 31 Oct 2020 07:45:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16819.41629; Sat, 31 Oct 2020 07:45:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYlZe-0001OY-2g; Sat, 31 Oct 2020 07:45:18 +0000
Received: by outflank-mailman (input) for mailman id 16819;
 Sat, 31 Oct 2020 07:45:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYlZd-0001OT-4I
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 07:45:17 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id db43df53-da86-402f-a4c0-8548b5554363;
 Sat, 31 Oct 2020 07:45:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYlZY-0005VT-BI; Sat, 31 Oct 2020 07:45:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYlZX-0002YZ-To; Sat, 31 Oct 2020 07:45:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYlZX-0004Z2-TI; Sat, 31 Oct 2020 07:45:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57] helo=us1-amaz-eas2.inumbo.com)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kYlZd-0001OT-4I
	for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 07:45:17 +0000
X-Inumbo-ID: db43df53-da86-402f-a4c0-8548b5554363
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OXkVKNE/RCUmx6kGr5lbyRwV7AHvE0CPBa/JYYfdI/8=; b=RuxiipHNWXdJp8Cv25JXTpS+FG
	YZJ93FqONiSjetv+evQaYugJIsHTDRk91jIME8O54/9+FkOUKsrSRNmDHh5yF325ILR1p7q2Dxk1o
	hkCcVzOrDh8Q/+V12Rth8vv+4V4ODvtj0QQg7qOhrgkH2cVkHs8kQbVJiUVs9b1ucAYI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156315-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 156315: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-multivcpu:debian-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6e2ee3dfd660d9fde96243da7d565244b4d2f164
X-Osstest-Versions-That:
    xen=16a20963b3209788f2c0d3a3eebb7d92f03f5883
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 07:45:11 +0000

flight 156315 xen-unstable real [real]
flight 156330 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156315/
http://logs.test-lab.xenproject.org/osstest/logs/156330/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu 12 debian-install          fail REGR. vs. 156291

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156291
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156291
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156291
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156291
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156291
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156291
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156291
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156291
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156291
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156291
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156291
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  6e2ee3dfd660d9fde96243da7d565244b4d2f164
baseline version:
 xen                  16a20963b3209788f2c0d3a3eebb7d92f03f5883

Last test of basis   156291  2020-10-29 08:15:53 Z    1 days
Testing same since   156315  2020-10-30 09:45:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Bertrand Marquis <bertrand.marquis@arm.com>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6e2ee3dfd660d9fde96243da7d565244b4d2f164
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Mon Oct 26 16:21:33 2020 +0000

    xen/arm: Warn user on cpu errata 832075
    
    When a Cortex A57 processor is affected by CPU errata 832075, a guest
    not implementing the workaround for it could deadlock the system.
    Add a warning during boot informing the user that only trusted guests
    should be executed on the system.
    An equivalent warning is already given to the user by KVM on cores
    affected by this errata.
    
    Also taint the hypervisor as unsecure when this errata applies and
    mention Cortex A57 r0p0 - r1p2 as not security supported in SUPPORT.md
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    [fix SUPPORT.md style, 3 printk lines instead of 4]
    Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

commit 82c0d3d491ccb183cf12c87775086b68531b8444
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Mon Oct 26 16:21:32 2020 +0000

    xen: Add an unsecure Taint type
    
    Define a new Unsecure taint type to be used to signal a system tainted
    due to an unsecure configuration or hardware feature/errata.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit f9179d21e864c1f3ae1834f727a8c9f9137ccbbf
Author: Bertrand Marquis <bertrand.marquis@arm.com>
Date:   Mon Oct 26 16:21:31 2020 +0000

    xen/arm: use printk_once for errata warning prints
    
    Replace usage of warning_add by printk_once with a **** prefix and
    suffix for errata related warnings.
    
    This prevents the need for the assert which is not secure enough to
    protect this print against wrong usage.
    
    Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

commit 26a8fa494f2f323622b6928bd15921b41818f180
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 29 12:03:43 2020 +0000

    x86/pv: Drop stale comment in dom0_construct_pv()
    
    This comment was introduced by c/s 22a857bde9b8 in 2003, and became stale with
    c/s 99db02d50976 also in 2003.  Both of these predate the introduction of
    struct vcpu, when the processor field moved object.
    
    17 years is long enough for this comment to be mis-informing people reading
    the code.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 1fd1d4bafdf6f9f8fe5ca9b947f016a7aae92a74
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:44:02 2020 +0100

    x86: don't open-code vmap_to_mfn()
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 33d2badc3a1f64d1c425f0758d4948d38a011511
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Oct 29 14:42:37 2020 +0100

    x86: don't open-code l<N>e_to_mfn()
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 10:26:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 10:26:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16872.41652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYo5P-000765-Ey; Sat, 31 Oct 2020 10:26:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16872.41652; Sat, 31 Oct 2020 10:26:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYo5P-00075y-BZ; Sat, 31 Oct 2020 10:26:15 +0000
Received: by outflank-mailman (input) for mailman id 16872;
 Sat, 31 Oct 2020 10:26:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Oje=EG=redhat.com=thuth@srs-us1.protection.inumbo.net>)
 id 1kYo5N-00075t-NC
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 10:26:13 +0000
Received: from us-smtp-delivery-124.mimecast.com (unknown [63.128.21.124])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id b255c8fa-b45d-42ee-b2cc-eb89d1d7b7b9;
 Sat, 31 Oct 2020 10:26:13 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-156-wXYXgx6kPluWjqn1NuvjQA-1; Sat, 31 Oct 2020 06:26:11 -0400
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 16178185A0D9;
 Sat, 31 Oct 2020 10:26:09 +0000 (UTC)
Received: from thuth.remote.csb (ovpn-112-43.ams2.redhat.com [10.36.112.43])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 4FD415D9CD;
 Sat, 31 Oct 2020 10:25:54 +0000 (UTC)
X-Inumbo-ID: b255c8fa-b45d-42ee-b2cc-eb89d1d7b7b9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1604139972;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kdHtLe8ZJwHLhY7rbWm1s6LRofKbM9RJROIfuVVS938=;
	b=edAb03zqLgiYU8PhTDB664oRZqdAsfR1y0PjyBKswfX4WQutQFlOSWlPGMVzWHIkW6wIOt
	86Nj0eZ9TWyjBwz38pt8d0NgZnNJ5WIi4CjkjXg1wkSBlkvfmoMao7zknnRUu1IEGSCeq2
	Nj9mE/NaIkKZuFDyotlhMDin6/NDNos=
X-MC-Unique: wXYXgx6kPluWjqn1NuvjQA-1
Subject: Re: --enable-xen on gitlab CI? (was Re: [PATCH 09/36] qdev: Make
 qdev_get_prop_ptr() get Object* arg)
To: Paolo Bonzini <pbonzini@redhat.com>, Eduardo Habkost
 <ehabkost@redhat.com>, =?UTF-8?Q?Marc-Andr=c3=a9_Lureau?=
 <marcandre.lureau@gmail.com>, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@redhat.com>, =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
Cc: Wainer dos Santos Moschetta <wainersm@redhat.com>,
 QEMU <qemu-devel@nongnu.org>, Matthew Rosato <mjrosato@linux.ibm.com>,
 Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 "open list:Block layer core" <qemu-block@nongnu.org>,
 Stefan Berger <stefanb@linux.vnet.ibm.com>,
 David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Alex Williamson <alex.williamson@redhat.com>, John Snow <jsnow@redhat.com>,
 Richard Henderson <rth@twiddle.net>, Kevin Wolf <kwolf@redhat.com>,
 "Daniel P. Berrange" <berrange@redhat.com>, Cornelia Huck
 <cohuck@redhat.com>, Qemu-s390x list <qemu-s390x@nongnu.org>,
 Max Reitz <mreitz@redhat.com>, Igor Mammedov <imammedo@redhat.com>
References: <20201029220246.472693-1-ehabkost@redhat.com>
 <20201029220246.472693-10-ehabkost@redhat.com>
 <CAJ+F1CKqo3D20=qSAovVKWCGz4otctaWnGC0O5p-Z1ZG9Pj_Mw@mail.gmail.com>
 <20201030113516.GP5733@habkost.net>
 <7645972e-5cad-6511-b057-bd595b91c4aa@redhat.com>
From: Thomas Huth <thuth@redhat.com>
Message-ID: <e35c50b6-e795-d901-61e4-4879c5eadd61@redhat.com>
Date: Sat, 31 Oct 2020 11:25:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <7645972e-5cad-6511-b057-bd595b91c4aa@redhat.com>
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=thuth@redhat.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30/10/2020 18.13, Paolo Bonzini wrote:
> On 30/10/20 12:35, Eduardo Habkost wrote:
>>
>> What is necessary to make sure we have a CONFIG_XEN=y job in
>> gitlab CI?  Maybe just including xen-devel in some of the
>> container images is enough?
> 
> Fedora already has it, but build-system-fedora does not include
> x86_64-softmmu.

Eduardo, could you try to add xen-devel to the centos8 container? If that
does not work, we can still move the x86_64-softmmu target to the fedora
pipeline instead.

 Thomas
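[Editorial note: the suggestion above amounts to installing the Xen development headers in the build container and building the x86_64-softmmu target with Xen enabled. A minimal sketch of what such a CI job would run, assuming a CentOS 8 based image; the package name (xen-devel) and repository availability are assumptions and may differ per distribution:]

```shell
# Hypothetical sketch: install Xen development packages in a CentOS 8
# build container, then build QEMU's x86_64 target with Xen support.
# Package name is assumed; on CentOS it may require an extra repo.
dnf install -y xen-devel

# --enable-xen makes configure fail loudly if Xen support cannot be
# built, rather than silently disabling it, which is what a CI job
# wants in order to actually guarantee a CONFIG_XEN=y build.
mkdir build && cd build
../configure --target-list=x86_64-softmmu --enable-xen
make -j"$(nproc)"
```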




From xen-devel-bounces@lists.xenproject.org Sat Oct 31 11:27:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 11:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16885.41667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYp2L-0003oK-1H; Sat, 31 Oct 2020 11:27:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16885.41667; Sat, 31 Oct 2020 11:27:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYp2K-0003oD-UW; Sat, 31 Oct 2020 11:27:08 +0000
Received: by outflank-mailman (input) for mailman id 16885;
 Sat, 31 Oct 2020 11:27:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYp2J-0003nZ-D3
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 11:27:07 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a9d307a-e893-40dc-9ba5-01480c5f1c1f;
 Sat, 31 Oct 2020 11:27:00 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYp2B-00027T-MO; Sat, 31 Oct 2020 11:26:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYp2B-00057q-97; Sat, 31 Oct 2020 11:26:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYp2B-0002vc-8c; Sat, 31 Oct 2020 11:26:59 +0000
X-Inumbo-ID: 1a9d307a-e893-40dc-9ba5-01480c5f1c1f
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mYduLL+4eE7++bo5BPpj6jj9SvR8QxT0OQ9SAlaY1nU=; b=UXBpHV0agHp/1cg2NezgjPw1/z
	cEuRyxoXKaF0npcETLAxq7OWzU9cX0qbldy/ciKGVqIs3Kyj6sWNvp2oKlLfMErM7wTYV870Q23vF
	sjNCtWDDEIo6iEqLh5cOI0fOSI9Us4i/JHlPgMuskP91rdQYOavqlYIA/5TaY7VO3/Pc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156317-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 156317: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0060ac29bcbdb76d49d2e248ddfcb7afa2345440
X-Osstest-Versions-That:
    xen=28b78171271dbbce88bbd4cb2de3d828a51fb169
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 11:26:59 +0000

flight 156317 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156317/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156265
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156265
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156265
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156265
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156265
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156265
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156265
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156265
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156265
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156265
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156265
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0060ac29bcbdb76d49d2e248ddfcb7afa2345440
baseline version:
 xen                  28b78171271dbbce88bbd4cb2de3d828a51fb169

Last test of basis   156265  2020-10-27 18:37:44 Z    3 days
Testing same since   156317  2020-10-30 11:37:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   28b7817127..0060ac29bc  0060ac29bcbdb76d49d2e248ddfcb7afa2345440 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 13:03:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 13:03:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16910.41687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYqXL-0003ud-M7; Sat, 31 Oct 2020 13:03:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16910.41687; Sat, 31 Oct 2020 13:03:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYqXL-0003uW-JC; Sat, 31 Oct 2020 13:03:15 +0000
Received: by outflank-mailman (input) for mailman id 16910;
 Sat, 31 Oct 2020 13:03:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYqXK-0003uN-41
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 13:03:14 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4b8b01a-cd32-49c8-aa4d-31bb5e3562fe;
 Sat, 31 Oct 2020 13:03:09 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYqXE-00045F-Vu; Sat, 31 Oct 2020 13:03:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYqXE-0003B5-J3; Sat, 31 Oct 2020 13:03:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYqXE-00069E-IZ; Sat, 31 Oct 2020 13:03:08 +0000
X-Inumbo-ID: c4b8b01a-cd32-49c8-aa4d-31bb5e3562fe
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BsDWfkO8hWne48kpnGeBOUqHrukqnJ561LBRevbPFRk=; b=M/bE6eFMrgbxgPeC/X35OqNKeR
	pVh3PG28UBFDVLD/ZfUnj27WomfV1hS95xNMf8xtfUfnuUYfkBI6Ycjfpu/oCaSB69otpT5eecp68
	w3tvXoSOXAM2Eh3ww5aekZqCXp05QCtSngLY1gpCnyi53gfisOkr3PcHg8pSObiIVKiQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156320-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 156320: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-install:fail:regression
    linux-linus:test-arm64-arm64-examine:examine-iommu:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-cubietruck:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-arm64-arm64-libvirt-xsm:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:leak-check/basis(11):fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:leak-check/basis(11):fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=07e0887302450a62f51dba72df6afb5fabb23d1c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 13:03:08 +0000

flight 156320 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156320/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      12 debian-install           fail REGR. vs. 152332
 test-arm64-arm64-examine     13 examine-iommu            fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 152332
 test-armhf-armhf-xl-cubietruck  8 xen-boot               fail REGR. vs. 152332
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 152332
 test-armhf-armhf-libvirt-raw  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd       8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit1  11 leak-check/basis(11)    fail blocked in 152332
 test-arm64-arm64-xl-credit2  11 leak-check/basis(11)    fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                07e0887302450a62f51dba72df6afb5fabb23d1c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z   91 days
Failing since        152366  2020-08-01 20:49:34 Z   90 days  152 attempts
Testing same since   156320  2020-10-30 16:19:25 Z    0 days    1 attempts

------------------------------------------------------------
3389 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 643744 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 14:41:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 14:41:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16940.41702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYs4H-0003xx-02; Sat, 31 Oct 2020 14:41:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16940.41702; Sat, 31 Oct 2020 14:41:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYs4G-0003xq-T9; Sat, 31 Oct 2020 14:41:20 +0000
Received: by outflank-mailman (input) for mailman id 16940;
 Sat, 31 Oct 2020 14:41:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CThl=EG=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1kYs4E-0003xl-Hp
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 14:41:18 +0000
Received: from wout5-smtp.messagingengine.com (unknown [64.147.123.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9788167-381b-41e5-b415-7554db7d6de4;
 Sat, 31 Oct 2020 14:41:17 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id A1F8E7BE;
 Sat, 31 Oct 2020 10:41:16 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Sat, 31 Oct 2020 10:41:16 -0400
Received: from mail-itl (ip5b40aa59.dynamic.kabel-deutschland.de
 [91.64.170.89])
 by mail.messagingengine.com (Postfix) with ESMTPA id 7F7F23064682;
 Sat, 31 Oct 2020 10:41:15 -0400 (EDT)
Date: Sat, 31 Oct 2020 15:41:12 +0100
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: Re: Xen Security Advisory 331 v2 - Race condition in Linux event
 handler may crash dom0
Message-ID: <20201031144112.GB16953@mail-itl>
References: <E1kUqJd-0001yN-2q@xenbits.xenproject.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="mxv5cy4qt+RJ9ypb"
Content-Disposition: inline
In-Reply-To: <E1kUqJd-0001yN-2q@xenbits.xenproject.org>


--mxv5cy4qt+RJ9ypb
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Oct 20, 2020 at 12:00:33PM +0000, Xen.org security team wrote:
>                     Xen Security Advisory XSA-331
>                               version 2
> 
>          Race condition in Linux event handler may crash dom0

(...)

> xsa331-linux.patch     Linux

Do you know when it will land in longterm Linux releases? I see it
is included in 5.10-rc1 (released on Oct 25), but not in any longterm
releases (latest published on Oct 29/30).

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?


--mxv5cy4qt+RJ9ypb--


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 15:04:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 15:04:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16946.41713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYsQm-0005ph-Tb; Sat, 31 Oct 2020 15:04:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16946.41713; Sat, 31 Oct 2020 15:04:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYsQm-0005pa-QZ; Sat, 31 Oct 2020 15:04:36 +0000
Received: by outflank-mailman (input) for mailman id 16946;
 Sat, 31 Oct 2020 15:04:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b0cl=EG=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kYsQl-0005pV-EW
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:04:35 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e717d84-40a9-4d59-8935-183297b075d8;
 Sat, 31 Oct 2020 15:04:33 +0000 (UTC)
Received: from [10.137.0.19] (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604156668136651.8131786794564;
 Sat, 31 Oct 2020 08:04:28 -0700 (PDT)
Subject: Re: Recent upgrade of 4.13 -> 4.14 issue
To: "marmarek@invisiblethingslab.com" <marmarek@invisiblethingslab.com>,
 Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <JGross@suse.com>,
 "George.Dunlap@citrix.com" <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
References: <30452e9c-bf27-fce2-cc20-4ce91018a15a@citrix.com>
 <deefd340-ec7a-bbb9-7471-d147da174f4a@suse.com>
 <a333ea82c12086874f705fc9ea9baa991235edd4.camel@suse.com>
 <533ce2f2-f268-a70b-fad7-d8f3f4033209@suse.com>
 <182a90a89cc02beec9760559799e74572e18ce49.camel@suse.com>
 <9632dc14-46d5-83c0-7e44-0c3bd4f5154a@qubes-os.org>
 <ce07254a-0775-d35c-559b-7d9ab642accf@qubes-os.org>
 <b1a18e6ed88db3c40a54c7ca15c3399bdc6f2b9c.camel@suse.com>
 <20201031025442.GF1447@mail-itl>
 <c17e7a152a7e1922bd9c729f70a96acf4ca5240b.camel@suse.com>
 <20201031040817.GG1447@mail-itl>
From: =?UTF-8?B?RnLDqWTDqXJpYyBQaWVycmV0?= <frederic.pierret@qubes-os.org>
Message-ID: <e2b9272b-7a81-b3d7-8eb2-0b82dc9f5465@qubes-os.org>
Date: Sat, 31 Oct 2020 16:04:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.3.1
MIME-Version: 1.0
In-Reply-To: <20201031040817.GG1447@mail-itl>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="cUtv7XcLI85My2kkWAsEIoc1OqDLXcHiW"
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--cUtv7XcLI85My2kkWAsEIoc1OqDLXcHiW
Content-Type: multipart/mixed; boundary="4Sly6D8Kbe7JBmhdlwnO6TS5iK4GpQ4YL";
 protected-headers="v1"

--4Sly6D8Kbe7JBmhdlwnO6TS5iK4GpQ4YL
Content-Type: multipart/mixed;
 boundary="------------07EB2AEF7489AC301F4F05AF"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------07EB2AEF7489AC301F4F05AF
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable



On 10/31/20 at 5:08 AM, marmarek@invisiblethingslab.com wrote:
> On Sat, Oct 31, 2020 at 04:27:58AM +0100, Dario Faggioli wrote:
>> On Sat, 2020-10-31 at 03:54 +0100, marmarek@invisiblethingslab.com
>> wrote:
>>> On Sat, Oct 31, 2020 at 02:34:32AM +0000, Dario Faggioli wrote:
>>> (XEN) *** Dumping CPU7 host state: ***
>>> (XEN) Xen call trace:
>>> (XEN)    [<ffff82d040223625>] R _spin_lock+0x35/0x40
>>> (XEN)    [<ffff82d0402233cd>] S on_selected_cpus+0x1d/0xc0
>>> (XEN)    [<ffff82d040284aba>] S vmx_do_resume+0xba/0x1b0
>>> (XEN)    [<ffff82d0402df160>] S context_switch+0x110/0xa60
>>> (XEN)    [<ffff82d04024310a>] S core.c#schedule+0x1aa/0x250
>>> (XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
>>> (XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30
>>>
>>> And so on, for (almost?) all CPUs.
>>
>> Right. So, it seems like a live (I would say) lock. It might happen on
>> some resource which is shared among domains. And introduced (the
>> livelock, not the resource or the sharing) in 4.14.
>>
>> Just giving a quick look, I see that vmx_do_resume() calls
>> vmx_clear_vmcs() which calls on_selected_cpus() which takes the
>> call_lock spinlock.
>>
>> And none of these seems to have received much attention recently.
>>
>> But this is just a really basic analysis!
>
> I've looked at on_selected_cpus() and my understanding is this:
> 1. take call_lock spinlock
> 2. set function+args+what cpus to be called in a global "call_data" variable
> 3. ask CPUs to execute that function (smp_send_call_function_mask() call)
> 4. wait for all requested CPUs to execute the function, still holding
> the spinlock
> 5. only then - release the spinlock
>
> So, if any CPU does not execute the requested function for any reason, it
> will keep the call_lock locked forever.
>
> I don't see any CPU waiting on step 4, but also I don't see call traces
> from CPU3 and CPU8 in the log - that's because they are in guest (dom0
> here) context, right? I do see "guest state" dumps from them.
> The only three CPUs that logged Xen call traces and are not waiting on that
> spin lock are:
>
> CPU0:
> (XEN) Xen call trace:
> (XEN)    [<ffff82d040240f89>] R vcpu_unblock+0x9/0x50
> (XEN)    [<ffff82d0402e0171>] S vcpu_kick+0x11/0x60
> (XEN)    [<ffff82d0402259c8>] S tasklet.c#do_tasklet_work+0x68/0xc0
> (XEN)    [<ffff82d040225a59>] S tasklet.c#tasklet_softirq_action+0x39/0x60
> (XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
> (XEN)    [<ffff82d040291b6b>] S vmx_asm_do_vmentry+0x2b/0x30
>
> CPU4:
> (XEN) Xen call trace:
> (XEN)    [<ffff82d040227043>] R set_timer+0x133/0x220
> (XEN)    [<ffff82d040234e90>] S credit.c#csched_tick+0/0x3a0
> (XEN)    [<ffff82d04022660f>] S timer.c#timer_softirq_action+0x9f/0x300
> (XEN)    [<ffff82d040222d4a>] S softirq.c#__do_softirq+0x5a/0xa0
> (XEN)    [<ffff82d0402d64e6>] S x86_64/entry.S#process_softirqs+0x6/0x20
>
> CPU14:
> (XEN) Xen call trace:
> (XEN)    [<ffff82d040222dc0>] R do_softirq+0/0x10
> (XEN)    [<ffff82d0402d64e6>] S x86_64/entry.S#process_softirqs+0x6/0x20
>=20
> I'm not sure whether any of those is related to that spin lock, the
> on_selected_cpus() call, or anything like that...
>

Hi,
Some newer logs here: https://gist.github.com/fepitre/5b2da8cf2ef976c0b885ce7bcfbf7313

The gist contains a piece of the serial console output at the moment of the hang/freeze, followed by the output of debug keys 'd' and '0', which appears blocked at one VCPU.

I hope that will help.

Regards,
Frédéric

--------------07EB2AEF7489AC301F4F05AF--

--4Sly6D8Kbe7JBmhdlwnO6TS5iK4GpQ4YL--


--cUtv7XcLI85My2kkWAsEIoc1OqDLXcHiW--


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 15:14:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 15:14:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16956.41725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYsaJ-0006o0-0l; Sat, 31 Oct 2020 15:14:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16956.41725; Sat, 31 Oct 2020 15:14:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYsaI-0006nt-Td; Sat, 31 Oct 2020 15:14:26 +0000
Received: by outflank-mailman (input) for mailman id 16956;
 Sat, 31 Oct 2020 15:14:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b0cl=EG=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kYsaH-0006no-5E
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:14:25 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a324a4d-5b17-4a0b-b0ff-bea87176a417;
 Sat, 31 Oct 2020 15:14:23 +0000 (UTC)
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604157256192744.0465096339267;
 Sat, 31 Oct 2020 08:14:16 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=b0cl=EG=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
	id 1kYsaH-0006no-5E
	for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:14:25 +0000
X-Inumbo-ID: 9a324a4d-5b17-4a0b-b0ff-bea87176a417
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9a324a4d-5b17-4a0b-b0ff-bea87176a417;
	Sat, 31 Oct 2020 15:14:23 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1604157257; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=WOoN9bqzI80lerK/Okit0zrBPXDiAhKwIluVIg+q1/mOhie/lnKyj3zt94PyDoaxZgactIVawXxTpB+63PJLbQpgIT85JhTuu/ArR1lO61H+KM3i8JCD8xuzcFbo4M8Bh6COwB3QpysHgoaCjmwmePWdPsj43E730h0oxQwDlv4=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604157257; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=r/9IfMBpU+kvBya/rT8LWqoWC929p0oXvKNbnF4Xtqc=; 
	b=NYHtRmWGqqjGE8G6L2IMnCieSpgPOLqx5vXozI3wo+yOFid2xGqlFe299ttmk+DspH2baxrAy4MANRlTofpgEpR0MJjPBgsVKtBIaOXZaRR6uUj90W7R2DVtqKlsf/R/VlbT05vLp9uSerefQKXwcGH444at3P8QYciKx9qhbJw=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604157257;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=From:To:Cc:Message-ID:Subject:Date:MIME-Version:Content-Type:Content-Transfer-Encoding;
	bh=r/9IfMBpU+kvBya/rT8LWqoWC929p0oXvKNbnF4Xtqc=;
	b=DnHeEaMshkBDmdURqjlP34WhQXgT4iHPgttTiZGbTRKp6hbs/0NOjQAji+oAbNDZ
	iRQjkakVA6I7Fow+5FRLcgfQjzi0g3IyASORcMtaraR0nz/cOBn4OKryJ71ew3uj2cp
	yY4B5tF1XIbFqkdkHXOrGzfuuDDWOLnbn3WjB5xQ=
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by mx.zohomail.com
	with SMTPS id 1604157256192744.0465096339267; Sat, 31 Oct 2020 08:14:16 -0700 (PDT)
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <cover.1604156731.git.frederic.pierret@qubes-os.org>
Subject: [PATCH v1 0/2] Reproducibility: use of SOURCE_DATE_EPOCH
Date: Sat, 31 Oct 2020 16:14:06 +0100
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External

This series takes into account feedback on the previous patch that removed
the timestamp from the Xen EFI binary. If SOURCE_DATE_EPOCH is defined, the
build date and time variables are set with respect to it. A default value is
derived when the sources come from a git repository; otherwise, it is up to
the builder to provide it.

Frédéric Pierret (fepitre) (2):
  Define build dates/time based on SOURCE_DATE_EPOCH
  Define SOURCE_DATE_EPOCH based on git log

 tools/firmware/hvmloader/Makefile | 4 ++++
 tools/firmware/vgabios/Makefile   | 4 ++++
 xen/Makefile                      | 7 +++++++
 3 files changed, 15 insertions(+)

-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Sat Oct 31 15:14:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 15:14:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16957.41738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYsaU-0006qt-9T; Sat, 31 Oct 2020 15:14:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16957.41738; Sat, 31 Oct 2020 15:14:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYsaU-0006qk-5W; Sat, 31 Oct 2020 15:14:38 +0000
Received: by outflank-mailman (input) for mailman id 16957;
 Sat, 31 Oct 2020 15:14:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b0cl=EG=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kYsaS-0006qP-8L
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:14:36 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fef08cc-86a5-415a-9c64-964aed3b21bb;
 Sat, 31 Oct 2020 15:14:35 +0000 (UTC)
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604157261060530.2165278307001;
 Sat, 31 Oct 2020 08:14:21 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=b0cl=EG=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
	id 1kYsaS-0006qP-8L
	for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:14:36 +0000
X-Inumbo-ID: 9fef08cc-86a5-415a-9c64-964aed3b21bb
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 9fef08cc-86a5-415a-9c64-964aed3b21bb;
	Sat, 31 Oct 2020 15:14:35 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1604157262; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=SJzQFa6rplIFNlFuqeX6Mn2B8ZGQZ2MoLnjSvnQ0EFgZh5FBAXvqJ+YAf1QP3bYeRQ4Le1Ls6ndaxHAj98uUQIMGJ7qmZ1+g+V0wggv1HZhJ5MZ+WcBIKsgjWSiX0JmU02nAkwZ/CtRJ1f++B6M+QcS2NP9o9h532RBlLi5LvoI=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604157262; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=9aLrfxdEKAfK4nVPDcKmv48q+Kbtqnpq+yQnWMhB+SE=; 
	b=Gmagl4JfeOcKxP1wPtLRBQwnluouqR74jLvOYhHEcAp+KGa3QSGxvFp8jtqi5nO9utvLA1DAlesL2H6iq84mPZ0yTxt4H4Nm7LFrUb40PofknnRWrV3lpZdelh4egvvcYo5Ml4TNfJP+hbE/b8XnxKcYxSyjKNbtKlw+jfbupts=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604157262;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=From:To:Cc:Message-ID:Subject:Date:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding:Content-Type;
	bh=9aLrfxdEKAfK4nVPDcKmv48q+Kbtqnpq+yQnWMhB+SE=;
	b=FcdcVBgkzFZnqlwgpGr1142jiaQxdjAaZWHK9eAx1TzzgEpwpu5hj4vlFaBhoAx3
	APEtfHXcRwR2bJvasLaDduK4OTvX0Fho8MaOVd//gBlEGHb2oB3DUFl6aBZ4f7Basli
	5P5MjbIwebpCu/QdaT1It7Upe5H0wy4b9cPspYHw=
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by mx.zohomail.com
	with SMTPS id 1604157261060530.2165278307001; Sat, 31 Oct 2020 08:14:21 -0700 (PDT)
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <57423c6627e00fbc3f41d3f6be6ba1e15abb96fc.1604156731.git.frederic.pierret@qubes-os.org>
Subject: [PATCH v1 1/2] Define build dates/time based on SOURCE_DATE_EPOCH
Date: Sat, 31 Oct 2020 16:14:07 +0100
X-Mailer: git-send-email 2.26.2
In-Reply-To: <cover.1604156731.git.frederic.pierret@qubes-os.org>
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External
Content-Type: text/plain; charset=utf8

This improves reproducibility when SOURCE_DATE_EPOCH is
defined while building the Xen binaries.
---
 tools/firmware/hvmloader/Makefile | 4 ++++
 tools/firmware/vgabios/Makefile   | 4 ++++
 xen/Makefile                      | 5 +++++
 3 files changed, 13 insertions(+)

diff --git a/tools/firmware/hvmloader/Makefile b/tools/firmware/hvmloader/Makefile
index e980ce7c5f..923e3c8b9a 100644
--- a/tools/firmware/hvmloader/Makefile
+++ b/tools/firmware/hvmloader/Makefile
@@ -21,7 +21,11 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/firmware/Rules.mk
 
 # SMBIOS spec requires format mm/dd/yyyy
+ifneq ($(SOURCE_DATE_EPOCH),)
+SMBIOS_REL_DATE ?= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "+%m/%d/%Y" 2>/dev/null)
+else
 SMBIOS_REL_DATE ?= $(shell date +%m/%d/%Y)
+endif
 
 CFLAGS += $(CFLAGS_xeninclude)
 
diff --git a/tools/firmware/vgabios/Makefile b/tools/firmware/vgabios/Makefile
index 3284812fde..9b8b687a73 100644
--- a/tools/firmware/vgabios/Makefile
+++ b/tools/firmware/vgabios/Makefile
@@ -5,7 +5,11 @@ BCC = bcc
 AS86 = as86
 
 RELEASE = `pwd | sed "s-.*/--"`
+ifneq ($(SOURCE_DATE_EPOCH),)
+VGABIOS_REL_DATE ?= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" "+%d %b %Y" 2>/dev/null)
+else
 VGABIOS_REL_DATE ?= `date '+%d %b %Y'`
+endif
 RELVERS = `pwd | sed "s-.*/--" | sed "s/vgabios//" | sed "s/-//"`
 
 VGABIOS_DATE = "-DVGABIOS_DATE=\"$(VGABIOS_REL_DATE)\""
diff --git a/xen/Makefile b/xen/Makefile
index bf0c804d43..30b1847515 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -8,8 +8,13 @@ export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
 
 export XEN_WHOAMI	?= $(USER)
 export XEN_DOMAIN	?= $(shell ([ -x /bin/dnsdomainname ] && /bin/dnsdomainname) || ([ -x /bin/domainname ] && /bin/domainname || echo [unknown]))
+ifneq ($(SOURCE_DATE_EPOCH),)
+export XEN_BUILD_DATE	?= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" 2>/dev/null)
+export XEN_BUILD_TIME	?= $(shell date -u -d "@$(SOURCE_DATE_EPOCH)" +%T 2>/dev/null)
+else
 export XEN_BUILD_DATE	?= $(shell LC_ALL=C date)
 export XEN_BUILD_TIME	?= $(shell LC_ALL=C date +%T)
+endif
 export XEN_BUILD_HOST	?= $(shell hostname)
 
 # Best effort attempt to find a python interpreter, defaulting to Python 3 if
-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Sat Oct 31 15:14:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 15:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16958.41750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYsae-0006vi-I2; Sat, 31 Oct 2020 15:14:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16958.41750; Sat, 31 Oct 2020 15:14:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYsae-0006vb-EV; Sat, 31 Oct 2020 15:14:48 +0000
Received: by outflank-mailman (input) for mailman id 16958;
 Sat, 31 Oct 2020 15:14:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b0cl=EG=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
 id 1kYsac-0006vA-Uf
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:14:46 +0000
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c9269b7-80f8-46b8-8a0b-7a8097915c17;
 Sat, 31 Oct 2020 15:14:46 +0000 (UTC)
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by
 mx.zohomail.com with SMTPS id 1604157264126690.3989801319432;
 Sat, 31 Oct 2020 08:14:24 -0700 (PDT)
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=b0cl=EG=qubes-os.org=frederic.pierret@srs-us1.protection.inumbo.net>)
	id 1kYsac-0006vA-Uf
	for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:14:46 +0000
X-Inumbo-ID: 1c9269b7-80f8-46b8-8a0b-7a8097915c17
Received: from sender4-of-o57.zoho.com (unknown [136.143.188.57])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 1c9269b7-80f8-46b8-8a0b-7a8097915c17;
	Sat, 31 Oct 2020 15:14:46 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1604157265; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=Y/N2YoseYxZvBfPtBP7ImOm5NEHe4Z/1XgWzf/DsiZqZUacNDP0EUK2mspwNDbHhb4tZyNiVuo1WKDOnmWeL8Cix2CPPPQk2L4jZhfR+66Wozt4NCVtSsycw5+PE6QS9cZMDcf4oI6ED+Mlj02ZoEEMmvgG/f38J2ekbSxwPWXw=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1604157265; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=93nOu4o8+RRpc4ALMBdxBUd5VTnxK5nhf8F1BIq1cSk=; 
	b=nRV3NAZfBED06WK+Dhd4ilXvzeZQK7/haNF6swf9D1Uxa4JZYr8Ut2348OdXC1UiEOKNI7yQZhOQU/oKVKADrkbs5EJ9Stp4XZJ+Ax3MO3NZqt/gx/naU1hpE1OXKyU31vnKqr/V4WcRyYrlU7jRxb+v+m8gP75tU6ASW0Zo9YY=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=qubes-os.org;
	spf=pass  smtp.mailfrom=frederic.pierret@qubes-os.org;
	dmarc=pass header.from=<frederic.pierret@qubes-os.org> header.from=<frederic.pierret@qubes-os.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1604157265;
	s=s; d=qubes-os.org; i=frederic.pierret@qubes-os.org;
	h=From:To:Cc:Message-ID:Subject:Date:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding:Content-Type;
	bh=93nOu4o8+RRpc4ALMBdxBUd5VTnxK5nhf8F1BIq1cSk=;
	b=cSUIb1e9btQ/p+5qDL1ri1Reoy66rgh4KsiT+qSfIuUReKoKbYH6F2vbvC+lktJ1
	uTA3SXtuVev7stb6ONhjfNqCs518hqLXQuMjeyvNEbYfqxKINOGsXIdT0JuIjP21Zq0
	N3tGFn9U7zhbzdKpBfb+qyD6MYsyGEzvHqoGgdkA=
Received: from localhost.localdomain (92.188.110.153 [92.188.110.153]) by mx.zohomail.com
	with SMTPS id 1604157264126690.3989801319432; Sat, 31 Oct 2020 08:14:24 -0700 (PDT)
From: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Fr=C3=A9d=C3=A9ric=20Pierret=20=28fepitre=29?= <frederic.pierret@qubes-os.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Message-ID: <8b0e8b8be9c77476ecc702a7c6216ba50659deec.1604156731.git.frederic.pierret@qubes-os.org>
Subject: [PATCH v1 2/2] Define SOURCE_DATE_EPOCH based on git log
Date: Sat, 31 Oct 2020 16:14:08 +0100
X-Mailer: git-send-email 2.26.2
In-Reply-To: <cover.1604156731.git.frederic.pierret@qubes-os.org>
References: <cover.1604156731.git.frederic.pierret@qubes-os.org>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-ZohoMailClient: External
Content-Type: text/plain; charset=utf8

---
 xen/Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/Makefile b/xen/Makefile
index 30b1847515..4cc35556ef 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -6,6 +6,8 @@ export XEN_EXTRAVERSION ?= -unstable$(XEN_VENDORVERSION)
 export XEN_FULLVERSION   = $(XEN_VERSION).$(XEN_SUBVERSION)$(XEN_EXTRAVERSION)
 -include xen-version
 
+export SOURCE_DATE_EPOCH	?= $(shell git log -1 --format=%ct 2>/dev/null)
+
 export XEN_WHOAMI	?= $(USER)
 export XEN_DOMAIN	?= $(shell ([ -x /bin/dnsdomainname ] && /bin/dnsdomainname) || ([ -x /bin/domainname ] && /bin/domainname || echo [unknown]))
 ifneq ($(SOURCE_DATE_EPOCH),)
-- 
2.26.2




From xen-devel-bounces@lists.xenproject.org Sat Oct 31 15:59:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 15:59:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.16973.41765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYtHl-00028P-Rp; Sat, 31 Oct 2020 15:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 16973.41765; Sat, 31 Oct 2020 15:59:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYtHl-00028I-Ns; Sat, 31 Oct 2020 15:59:21 +0000
Received: by outflank-mailman (input) for mailman id 16973;
 Sat, 31 Oct 2020 15:59:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYtHl-00027e-2E
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:59:21 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32a1e1e6-3222-4793-9526-1cac6df1c7d5;
 Sat, 31 Oct 2020 15:59:14 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYtHd-0007fF-U5; Sat, 31 Oct 2020 15:59:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYtHd-0002N3-Jw; Sat, 31 Oct 2020 15:59:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYtHd-0003ZL-JT; Sat, 31 Oct 2020 15:59:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kYtHl-00027e-2E
	for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 15:59:21 +0000
X-Inumbo-ID: 32a1e1e6-3222-4793-9526-1cac6df1c7d5
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 32a1e1e6-3222-4793-9526-1cac6df1c7d5;
	Sat, 31 Oct 2020 15:59:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CtBUr3+WvaKenWfaHveoNyInCb3vBzHroi3wwUM2eek=; b=3Q5nZSmRpOndD1rfrMa8NWB6fR
	qexKGYKN/h/6FOpfdfsxhoXtmbpmccv+0WZsVhOfMYa+2TeYYtyoi+NBbooLXc+lDO7XdrtMzG3GA
	/ivkFuJXHyV96pTWxIJtzd2tXuELDEYw+Ii5SB1nxvNNVYjGCeUIxH5ew5+Oj64cBslA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYtHd-0007fF-U5; Sat, 31 Oct 2020 15:59:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYtHd-0002N3-Jw; Sat, 31 Oct 2020 15:59:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYtHd-0003ZL-JT; Sat, 31 Oct 2020 15:59:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156328-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 156328: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3b7bb8f451977e81e252e23cbf817029fe40d494
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 15:59:13 +0000

flight 156328 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156328/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3b7bb8f451977e81e252e23cbf817029fe40d494
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  113 days
Failing since        151818  2020-07-11 04:18:52 Z  112 days  107 attempts
Testing same since   156328  2020-10-31 04:20:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Bastien Orivel <bastien.orivel@diateam.net>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Ian Wienand <iwienand@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Laine Stump <laine@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Neal Gompa <ngompa13@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wang Xin <wangxinxin.wang@huawei.com>
  Weblate <noreply@weblate.org>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 23514 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 17:40:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 17:40:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17007.41782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYurK-0003OD-6e; Sat, 31 Oct 2020 17:40:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17007.41782; Sat, 31 Oct 2020 17:40:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYurK-0003O6-3G; Sat, 31 Oct 2020 17:40:10 +0000
Received: by outflank-mailman (input) for mailman id 17007;
 Sat, 31 Oct 2020 17:40:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYurJ-0003Na-Ga
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 17:40:09 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77a58fb8-55db-46d8-8930-3e17f2c15c9f;
 Sat, 31 Oct 2020 17:40:06 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYurG-0001mB-D2; Sat, 31 Oct 2020 17:40:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYurG-0006Nf-3z; Sat, 31 Oct 2020 17:40:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYurG-000277-3U; Sat, 31 Oct 2020 17:40:06 +0000
X-Inumbo-ID: 77a58fb8-55db-46d8-8930-3e17f2c15c9f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 77a58fb8-55db-46d8-8930-3e17f2c15c9f;
	Sat, 31 Oct 2020 17:40:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FgddYQzErSGA2rXGqBcPh4psbY9W2FXqGINipaUeXIo=; b=lIBJeJXELGUfk7Zk2e0r5YQWMY
	X+4e/f4HBVsjHczYw8M5mhRnWCAbJd1LXz61BjJIaK+6N91LR+5DDpYfX2Y4rbwX20gBPSwaGIC3x
	nKghdtgzcabgVB3gtdkZk254tAd3SIo/eF8xUXjjSaYddiN+ZpMA+p4axPfcJus2qBy4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156324-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 156324: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:regression
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4100d463dbdd95d85fabe387dd5676bed75f65f7
X-Osstest-Versions-That:
    xen=0108b011e133915a8ebd33636811d8c141b6e9f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 17:40:06 +0000

flight 156324 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156324/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2      fail REGR. vs. 156035

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail pass in 156309

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 156035
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 156035
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 156035
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 156035
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 156035
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 156035
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 156035
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4100d463dbdd95d85fabe387dd5676bed75f65f7
baseline version:
 xen                  0108b011e133915a8ebd33636811d8c141b6e9f3

Last test of basis   156035  2020-10-20 13:36:02 Z   11 days
Testing same since   156263  2020-10-27 18:36:53 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 4100d463dbdd95d85fabe387dd5676bed75f65f7
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Oct 19 15:51:22 2020 +0100

    x86/pv: Flush TLB in response to paging structure changes
    
    With MMU_UPDATE, a PV guest can make changes to higher level pagetables.  This
    is safe from Xen's point of view (as the update only affects guest mappings),
    and the guest is required to flush (if necessary) after making updates.
    
    However, Xen's use of linear pagetables (UPDATE_VA_MAPPING, GNTTABOP_map,
    writeable pagetables, etc.) is an implementation detail outside of the
    API/ABI.
    
    Changes in the paging structure require invalidations in the linear pagetable
    range for subsequent accesses into the linear pagetables to access non-stale
    mappings.  Xen must provide suitable flushing to prevent intermixed guest
    actions from accidentally accessing/modifying the wrong pagetable.
    
    For all L2 and higher modifications, flush the TLB.  PV guests cannot create
    L2 or higher entries with the Global bit set, so no mappings established in
    the linear range can be global.  (This could in principle be an order 39 flush
    starting at LINEAR_PT_VIRT_START, but no such mechanism exists in practice.)
    
    Express the necessary flushes as a set of booleans which accumulate across the
    operation.  Comment the flushing logic extensively.
    
    This is XSA-286.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 16a20963b3209788f2c0d3a3eebb7d92f03f5883)

commit b1d6f37aa5aa9f3fc5a269b9dd21b7feb7444be0
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Oct 22 11:28:58 2020 +0100

    x86/pv: Drop FLUSH_TLB_GLOBAL in do_mmu_update() for XPTI
    
    c/s 9d1d31ad9498 "x86: slightly reduce Meltdown band-aid overhead" removed the
    use of Global TLB flushes on the Xen entry path, but added a FLUSH_TLB_GLOBAL
    to the L4 path in do_mmu_update().
    
    However, this was unnecessary.
    
    It is the guest's responsibility to perform appropriate TLB flushing if the L4
    modification altered an established mapping in a flush-relevant way.  In this
    case, an MMUEXT_OP hypercall will follow.  The case which Xen needs to cover
    is when new mappings are created, and the resync on the exit-to-guest path
    covers this correctly.
    
    There is a corner case with multiple vCPUs in hypercalls at the same time,
    which 9d1d31ad9498 changed, and this patch changes back to its original XPTI
    behaviour.
    
    Architecturally, established TLB entries can continue to be used until the
    broadcast flush has completed.  Therefore, even with concurrent hypercalls,
    the guest cannot depend on older mappings not being used until an MMUEXT_OP
    hypercall completes.  Xen's implementation of guest-initiated flushes will
    take correct effect on top of an in-progress hypercall, picking up new mapping
    settings before the other vCPU's MMUEXT_OP completes.
    
    Note: The correctness of this change is not impacted by whether XPTI uses
    global mappings or not.  Correctness there depends on the behaviour of Xen on
    the entry/exit paths when switching to/from the XPTI "shadow" pagetables.
    
    This is (not really) XSA-286 (but necessary to simplify the logic).
    
    Fixes: 9d1d31ad9498 ("x86: slightly reduce Meltdown band-aid overhead")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    (cherry picked from commit 055e1c3a3d95b1e753148369fbc4ba48782dd602)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 21:11:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 21:11:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17033.41800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYy9H-0004Ik-Cr; Sat, 31 Oct 2020 21:10:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17033.41800; Sat, 31 Oct 2020 21:10:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYy9H-0004Id-9r; Sat, 31 Oct 2020 21:10:55 +0000
Received: by outflank-mailman (input) for mailman id 17033;
 Sat, 31 Oct 2020 21:10:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TJr3=EG=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1kYy9G-0004IY-AW
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 21:10:54 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc467fe9-9d91-4afe-854f-1a8051b7c388;
 Sat, 31 Oct 2020 21:10:53 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id a9so10181426wrg.12
 for <xen-devel@lists.xenproject.org>; Sat, 31 Oct 2020 14:10:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=uLkdRm8luG3Y/TaJy2hbja3zVLCKJ9ijfgMi5tCNaGA=;
        b=ofbIPN4tff2jsWRZLQ/I5kVvTVlsl0yq7sasoRBjWLOkf/+q1Pd9mc2B61Cuu6otEm
         bECnyMx81yPgKwny9q9pSbK3F84SxDJ4xQsuiGcL6vz99cD6r6OLZU4nqV0CfTGRlYap
         vDw8RpQqUFp6l5imV6FwgUhqQs231GzXvnY/YJx4gVeTTzqZmD6xmmFG6tLN2SwLEZhb
         EW/hcDACuPYD0KgDpPnsg0fYVkJiX+bCtoESTZu9vw36v85jbBiT+zaUZGZDJvCHYNKR
         caQeXOe3CvNpetLqPuJaUfgXPSx8GyUPkwIQxrQa/gIchww4CJSPdLSKz4UdnHF3C+FB
         idaw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=uLkdRm8luG3Y/TaJy2hbja3zVLCKJ9ijfgMi5tCNaGA=;
        b=ZMyLjKM8B7g4brqxSMd2+wncXJ8aTaQbq+t9bqJHE+SbgaPWuj+VsOg2h95e8sSP5W
         pVZmo0FBW87pShYwCVC5XhcZ/oHW59yUicT6JRsXodECuT73yV8WCMEBjDuwM3VS0/5E
         H3KKQds5hOsRnZge0JpE2vDJsz6SFV4BBZ/nTo9ZBW9TbYp/6peiuJzynWjTsbGWXKHe
         jigSO+6RvBUB+cY5DVxzdS1KzyLBsnis51GJKAdCXxxVOXmJ6I1c34wRDP5pjvGvRlm6
         pERoxTxAx5WivQTS7NL0hEG5225SUvANZyvqwgSy8P2c0yDn6sk03CeWf1jbEgyx+2ge
         UwpA==
X-Gm-Message-State: AOAM5310hf/SCAHEMEO6aYplupbpU86VsslgI9AhIiihbbmUBDTQHDtm
	eMRtFZCdcjwzyhrKA47msbK0MLjc+IbZr3r4kD8=
X-Google-Smtp-Source: ABdhPJyk9AmXhk1tgcBqa9mfjrBGXwM3Cj1CVpcntjIaKgimQRpet4ZicGCzbgxEPBEJLe1Ab+VRDMonM5rEUI7ekdY=
X-Received: by 2002:adf:8b92:: with SMTP id o18mr11511603wra.54.1604178651915;
 Sat, 31 Oct 2020 14:10:51 -0700 (PDT)
MIME-Version: 1.0
References: <1602780274-29141-1-git-send-email-olekstysh@gmail.com>
 <CAA93ih0o3XmD9neBu1fAkP1iBETu1-4qaQaEsZfEWRfYo7VCZA@mail.gmail.com>
 <CAPD2p-npnQz+7NtMH81s2C3dsAt_6kxQ68n7LhwYbOuTFaUEvw@mail.gmail.com>
 <alpine.DEB.2.21.2010291252410.12247@sstabellini-ThinkPad-T480s>
 <CAPD2p-mH0Hi+JOUB-mt+aZR_gN86EZCpnMPTww0ErMESTwZ=AA@mail.gmail.com> <CAA93ih3Z-zxQ33gvr2C43i0J5XP3OBgUhTyMcwhe9zVj-uOONA@mail.gmail.com>
In-Reply-To: <CAA93ih3Z-zxQ33gvr2C43i0J5XP3OBgUhTyMcwhe9zVj-uOONA@mail.gmail.com>
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
Date: Sat, 31 Oct 2020 23:10:40 +0200
Message-ID: <CAPD2p-=2UimQy6VHKw1FgyVi2R94Ux_HFdPYk7=FR3KWSEqiHw@mail.gmail.com>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm
To: Masami Hiramatsu <masami.hiramatsu@linaro.org>, Alex Bennée <alex.bennee@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, xen-devel <xen-devel@lists.xenproject.org>, 
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Paul Durrant <paul@xen.org>, 
	Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, 
	Julien Grall <julien.grall@arm.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Tim Deegan <tim@xen.org>, 
	Daniel De Graaf <dgdegra@tycho.nsa.gov>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, 
	Anthony PERARD <anthony.perard@citrix.com>, Bertrand Marquis <bertrand.marquis@arm.com>
Content-Type: multipart/alternative; boundary="000000000000b42afe05b2fdf2e1"

--000000000000b42afe05b2fdf2e1
Content-Type: text/plain; charset="UTF-8"

On Fri, Oct 30, 2020 at 1:34 PM Masami Hiramatsu <masami.hiramatsu@linaro.org> wrote:

> Hi Oleksandr,
>

Hi Masami, all

[sorry for the possible format issue]


> >> >
> >> >       Could you tell me how can I test it?
> >> >
> >> >
> >> > I assume it is due to the lack of the virtio-disk backend (which I
> haven't shared yet as I focused on the IOREQ/DM support on Arm in the
> >> > first place).
> >> > Could you wait a little bit, I am going to share it soon.
> >>
> >> Do you have a quick-and-dirty hack you can share in the meantime? Even
> >> just on github as a special branch? It would be very useful to be able
> >> to have a test-driver for the new feature.
> >
> > Well, I will provide a branch on github with our PoC virtio-disk backend
> by the end of this week. It will be possible to test this series with it.
>
> Great! OK I'll be waiting for the PoC backend.
>
> Thank you!
>

You can find the virtio-disk backend PoC (shared as is) at [1].

Brief description...

The virtio-disk backend PoC is a completely standalone entity (an IOREQ
server) which emulates a virtio-mmio disk device.
It is based on code from DEMU [2] (for the IOREQ server parts) and on some
code from kvmtool [3] to implement the virtio protocol and disk operations
on the underlying H/W, plus Xenbus code to read its configuration from
Xenstore (it is configured via the domain config file). The last patch in
this series (marked as RFC) adds the required bits to the libxl code.

Some notes...

The backend can be used with the current V2 IOREQ series [4] without any
modifications; all you need to do is enable CONFIG_IOREQ_SERVER on Arm [5],
since it is disabled by default within this series.

Please note that in our system we run the backend in DomD (the driver
domain). I haven't tested it in Dom0, since in our system Dom0 is thin
(without any H/W) and is only used to launch VMs, so there is no underlying
block H/W there. But I expect it is possible to run it in Dom0 as well (at
least there is nothing specific to a particular domain in the backend
itself, nothing hardcoded).
If you are going to run the backend in a domain other than Dom0, you need
to write your own FLASK policy for the backend (running in that domain) to
be able to issue DM-related requests, etc. For test purposes only, you
could use this patch [6], which tweaks Xen's dummy policy (not for
upstream).

As I mentioned elsewhere, you don't need to modify the guest Linux (DomU);
just enable the VirtIO-related configs. If I remember correctly, the
following would be enough:
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
If I remember correctly, if your host Linux (Dom0 or DomD) is version >=
4.17, you don't need to modify it either.
Otherwise, you need to cherry-pick "xen/privcmd: add
IOCTL_PRIVCMD_MMAP_RESOURCE" from upstream to be able to use the acquire
interface for the resource mapping.

We usually build the backend as part of the Yocto build process and run it
as a systemd service, but you can also build and run it manually (it should
be launched before DomU creation).
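To illustrate the systemd setup, a minimal unit file could look like the
sketch below. This is a hedged illustration only: the unit name, binary
path and ordering target are my assumptions, not taken from the
repository (it is written to the current directory here; on a real system
it would go under /etc/systemd/system/):

```shell
# Write a hypothetical unit file for the backend service.
cat > virtio-disk.service <<'EOF'
[Unit]
Description=virtio-disk IOREQ backend (PoC)
# The backend must be up before any guest domain is created.
Before=xendomains.service

[Service]
ExecStart=/usr/bin/virtio-disk
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

# Show what was generated.
echo "wrote $(wc -l < virtio-disk.service) lines"
```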

There are no command-line options at all. Everything is configured via the
domain configuration file:
# This option is mandatory; it indicates that VirtIO is going to be used
# by the guest
virtio=1
# Example of a disk configuration (two disks are assigned to the guest,
# the latter in read-only mode):
vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
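
For completeness, a full guest config using the backend might look like the
following. Only the virtio and vdisk lines come from the PoC; the name,
kernel path, memory and vcpu values are illustrative placeholders:

```
name = "domu-virtio"
kernel = "/boot/Image"    # hypothetical guest kernel path
memory = 512
vcpus = 2
# Mandatory: VirtIO is going to be used by the guest
virtio = 1
# Two disks assigned to the guest; the second is read-only
vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]
```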

Hope that helps. Feel free to ask questions if any.

[1] https://github.com/xen-troops/virtio-disk/commits/ioreq_v3
[2] https://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git;a=summary
[3] https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/
[4] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml3
[5]
https://github.com/otyshchenko1/xen/commit/ee221102193f0422a240832edc41d73f6f3da923
[6]
https://github.com/otyshchenko1/xen/commit/be868a63014b7aa6c9731d5692200d7f2f57c611

-- 
Regards,

Oleksandr Tyshchenko

--000000000000b42afe05b2fdf2e1--


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 21:31:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 21:31:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17037.41813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYySf-00064i-4g; Sat, 31 Oct 2020 21:30:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17037.41813; Sat, 31 Oct 2020 21:30:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYySf-00064b-0S; Sat, 31 Oct 2020 21:30:57 +0000
Received: by outflank-mailman (input) for mailman id 17037;
 Sat, 31 Oct 2020 21:30:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYySd-00064W-5i
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 21:30:55 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61996966-b099-457c-921e-7ce295d3450f;
 Sat, 31 Oct 2020 21:30:51 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYySZ-0006Wp-C0; Sat, 31 Oct 2020 21:30:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYySZ-0004AE-3K; Sat, 31 Oct 2020 21:30:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYySZ-0006Wc-2o; Sat, 31 Oct 2020 21:30:51 +0000
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mgU1EIv4lGj0y0/1XTJeHmXu/DCJxgYWAtVzoeEzN+c=; b=rGXn+TZ8ciGlJDn7xJdZdXsahz
	Z25tM+6RYKi6xpvTSJkIcMFTRW0ChMiC7wFKx+43scYcmqFx7xaxp7BpYybY0sys435Z7YAwxxDMG
	tN9CPbXodHOo5xIWpfqAr6PPlZvSLGyYJLduEqfve1n80WAHHYDWepXA4tJAYABAt/qQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156326-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 156326: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-armhf-armhf-xl-vhd:debian-di-install:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt-raw:debian-di-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qcow2:guest-start/debian.repeat:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=9a2ea4f4a7230fe224dee91d9adf2ef872c3d226
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 21:30:51 +0000

flight 156326 qemu-mainline real [real]
flight 156337 qemu-mainline real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/156326/
http://logs.test-lab.xenproject.org/osstest/logs/156337/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-armhf-armhf-xl-vhd      12 debian-di-install        fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat fail REGR. vs. 152631
 test-armhf-armhf-libvirt-raw 12 debian-di-install        fail REGR. vs. 152631
 test-amd64-amd64-xl-qcow2   21 guest-start/debian.repeat fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                9a2ea4f4a7230fe224dee91d9adf2ef872c3d226
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z   72 days
Failing since        152659  2020-08-21 14:07:39 Z   71 days  161 attempts
Testing same since   156326  2020-10-31 02:47:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Lindsay <aaron@os.amperecomputing.com>
  Alberto Garcia <berto@igalia.com>
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Bulekov <alxndr@bu.edu>
  AlexChen <alex.chen@huawei.com>
  Alexey Kirillov <lekiravi@yandex-team.ru>
  Alistair Francis <alistair.francis@wdc.com>
  Alistair Francis <alistair.francis@xilinx.com>
  Amey Narkhede <ameynarkhede03@gmail.com>
  Ana Pazos <apazos@quicinc.com>
  Andrew Jones <drjones@redhat.com>
  Andrey Konovalov <andreyknvl@google.com>
  Andrey Shinkevich <andrey.shinkevich@virtuozzo.com>
  Ani Sinha <ani@anisinha.ca>
  Anthony PERARD <anthony.perard@citrix.com>
  Anton Blanchard <anton@ozlabs.org>
  Anup Patel <anup.patel@wdc.com>
  Artyom Tarasenko <atar4qemu@gmail.com>
  Babu Moger <babu.moger@amd.com>
  BALATON Zoltan <balaton@eik.bme.hu>
  Bastian Koppelmann <kbastian@mail.uni-paderborn.de>
  Ben Widawsky <ben.widawsky@intel.com>
  Bihong Yu <yubihong@huawei.com>
  Bin Meng <bin.meng@windriver.com>
  Bruce Rogers <brogers@suse.com>
  Carlo Marcelo Arenas Belón <carenas@gmail.com>
  Chen Gang <chengang@emindsoft.com.cn>
  Chen Qun <kuhn.chenqun@huawei.com>
  Chetan Pant <chetan4windows@gmail.com>
  Chih-Min Chao <chihmin.chao@sifive.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Chuan Zheng <zhengchuan@huawei.com>
  Claudio Fontana <cfontana@suse.de>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Cleber Rosa <crosa@redhat.com>
  Coiby Xu <coiby.xu@gmail.com>
  Colin Xu <colin.xu@intel.com>
  Collin Walling <walling@linux.ibm.com>
  Connor Kuehl <ckuehl@redhat.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  César Belley <cesar.belley@lse.epita.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  David Carlier <devnexen@gmail.com>
  David Edmondson <david.edmondson@oracle.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dima Stepanov <dimastep@yandex-team.ru>
  Dmitry Fomichev <dmitry.fomichev@wdc.com>
  Douglas Crosher <dtc-ubuntu@scieneer.com>
  Dov Murik <dovmurik@linux.vnet.ibm.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eduardo Habkost <ehabkost@redhat.com>
  Eduardo Otubo <otubo@redhat.com>
  Elena Afanasova <eafanasova@gmail.com>
  Emanuele Giuseppe Esposito <e.emanuelegiuseppe@gmail.com>
  Emmanuel Blot <eblot.ml@gmail.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Erik Kline <ek@google.com>
  Erik Smit <erik.lucas.smit@gmail.com>
  Fabiano Rosas <farosas@linux.ibm.com>
  Fam Zheng <fam@euphon.net>
  Fan Yang <Fan_Yang@sjtu.edu.cn>
  Filip Bozuta <Filip.Bozuta@syrmia.com>
  Finn Thain <fthain@telegraphics.com.au>
  Frajo <franz.haider@jolla.com>
  Frank Chang <frank.chang@sifive.com>
  Franz-Josef Haider <franz.haider@jolla.com>
  Frediano Ziglio <freddy77@gmail.com>
  Georg Kotheimer <georg.kotheimer@kernkonzept.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gonglei <arei.gonglei@huawei.com>
  Graeme Gregory <graeme@nuviainc.com>
  Green Wan <green.wan@sifive.com>
  Greg Kurz <groug@kaod.org>
  Guenter Roeck <linux@roeck-us.net>
  Guoqing Zhang <zhangguoqing.kernel@bytedance.com>
  Gustavo Romero <gromero@linux.ibm.com>
  haibinzhang(张海斌) <haibinzhang@tencent.com>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wu <wuhaotsh@google.com>
  Harry G. Coin <hgcoin@gmail.com>
  Havard Skinnemoen <hskinnemoen@google.com>
  Helge Deller <deller@gmx.de>
  Heyi Guo <guoheyi@huawei.com>
  Hongzheng-Li <Ethan.Lee.QNL@gmail.com>
  Hou Weiying <weiying_hou@outlook.com>
  Huacai Chen <chenhc@lemote.com>
  Huacai Chen <zltjiangshi@gmail.com>
  Igor Kononenko <i.kononenko@yadro.com>
  Igor Mammedov <imammedo@redhat.com>
  James Hogan <jhogan@kernel.org>
  Jan Charvat <charvj10@fel.cvut.cz>
  Jan Henrik Weinstock <jan.weinstock@rwth-aachen.de>
  Janosch Frank <frankja@linux.ibm.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Wang <jasowang@redhat.com>
  Jens Freimann <jfreimann@redhat.com>
  Jiachen Zhang <zhangjiachen.jaycee@bytedance.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Joel Stanley <joel@jms.id.au>
  John Snow <jsnow@redhat.com>
  Jon Doron <arilou@gmail.com>
  Josh DuBois <josh@joshdubois.com>
  Julia Suvorova <jusual@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kele Huang <kele.hwang@gmail.com>
  Kenta Ishiguro <kentaishiguro@slowstart.org>
  Kevin Wolf <kwolf@redhat.com>
  Kito Cheng <kito.cheng@sifive.com>
  Klaus Jensen <k.jensen@samsung.com>
  Klaus Jensen <klaus.jensen@cnexlabs.com>
  Laszlo Ersek <lersek@redhat.com>
  Laurent Vivier <laurent@vivier.eu>
  Laurent Vivier <lvivier@redhat.com>
  Lei YU <yulei.sh@bytedance.com>
  Leif Lindholm <leif@nuviainc.com>
  Li Feng <fengli@smartx.com>
  Li Qiang <liq3ea@163.com>
  Li Zhijian <lizhijian@cn.fujitsu.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lichang Zhao <zhaolichang@huawei.com>
  Lijun Pan <ljp@linux.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Longpeng(Mike) <longpeng2@huawei.com>
  Luc Michel <luc@lmichel.fr>
  Lukas Straub <lukasstraub2@web.de>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Markus Armbruster <armbru@redhat.com>
  Matthew Rosato <mjrosato@linux.ibm.com>
  Matthieu Bucchianeri <matthieu.bucchianeri@leostella.com>
  Matus Kysel <mkysel@tachyum.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Max Filippov <jcmvbkbc@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Michael Rolnik <mrolnik@gmail.com>
  Michael Roth <mdroth@linux.vnet.ibm.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Tokarev <mjt@tls.msk.ru>
  Michael Walle <michael@walle.cc>
  Michal Privoznik <mprivozn@redhat.com>
  Mike Gelfand <mikedld@mikedld.com>
  Misono Tomohiro <misono.tomohiro@jp.fujitsu.com>
  Myriad-Dreamin <camiyoru@gmail.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Niek Linnenbank <nieklinnenbank@gmail.com>
  Nikola Pavlica <pavlica.nikola@gmail.com>
  Nir Soffer <nirsof@gmail.com>
  Nir Soffer <nsoffer@redhat.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Pankaj Gupta <pankaj.gupta.linux@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Burton <paulburton@kernel.org>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Paul Zimmerman <pauldzim@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@gmail.com>
  Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
  Pavel Dovgalyuk <Pavel.Dovgalyuk@ispras.ru>
  Pavel Pisa <pisa@cmp.felk.cvut.cz>
  Peng Liang <liangpeng10@huawei.com>
  Peter Lieven <pl@kamp.de>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Xu <peterx@redhat.com>
  Philippe Mathieu-Daude <philmd@redhat.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Prasad J Pandit <pjp@fedoraproject.org>
  Richard Henderson <richard.henderson@linaro.org>
  Robert Hoo <robert.hu@linux.intel.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Sai Pavan Boddu <sai.pavan.boddu@xilinx.com>
  Samuel Thibault <samuel.thibault@ens-lyon.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Sergey Nizovtsev <snizovtsev@gmail.com>
  Sergio Lopez <slp@redhat.com>
  Shashi Mallela <shashi.mallela@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Reiter <s.reiter@proxmox.com>
  Stefan Weil <sw@weilnetz.de>
  Stefano Garzarella <sgarzare@redhat.com>
  Stephen Long <steplong@quicinc.com>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Sunil Muthuswamy <sunilmut@microsoft.com>
  Sven Schnelle <svens@stackframe.org>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Thiago Jung Bauermann <bauerman@linux.ibm.com>
  Thomas Huth <huth@tuxfamily.org>
  Thomas Huth <thuth@redhat.com>
  Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  Timothy Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Timothy E Baldwin <T.E.Baldwin99@members.leeds.ac.uk>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vitaly Cheptsov <vit9696@protonmail.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Volker Rümelin <vr_qemu@t-online.de>
  Xiaoyao Li <xiaoyao.li@intel.com>
  Xinyu Li <precinct@mail.ustc.edu.cn>
  Xu Zou <iwatchnima@gmail.com>
  Yan Jin <jinyan12@huawei.com>
  YanYing Zhuang <ann.zhuangyanying@huawei.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yifei Jiang <jiangyifei@huawei.com>
  Ying Fang <fangying1@huawei.com>
  Yipeng Yin <yinyipeng1@huawei.com>
  Yonggang Luo <luoyonggang@gmail.com>
  Yoshinori Sato <ysato@users.sourceforge.jp>
  Zenghui Yu <yuzenghui@huawei.com>
  zhaolichang <zhaolichang@huawei.com>
  Zhengui <lizhengui@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Zhenyu Ye <yezhenyu2@huawei.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 55351 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Oct 31 22:08:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 31 Oct 2020 22:08:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.17045.41830 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYz3G-0000Rn-6x; Sat, 31 Oct 2020 22:08:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 17045.41830; Sat, 31 Oct 2020 22:08:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1kYz3G-0000Rg-3a; Sat, 31 Oct 2020 22:08:46 +0000
Received: by outflank-mailman (input) for mailman id 17045;
 Sat, 31 Oct 2020 22:08:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1kYz3F-0000Qx-7e
 for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 22:08:45 +0000
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b62d2fe-9dd3-4c39-a649-d5431753a97f;
 Sat, 31 Oct 2020 22:08:38 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYz38-0007In-K6; Sat, 31 Oct 2020 22:08:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1kYz38-0005qn-9y; Sat, 31 Oct 2020 22:08:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1kYz38-0006Sj-9S; Sat, 31 Oct 2020 22:08:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <SRS0=nyv3=EG=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
	id 1kYz3F-0000Qx-7e
	for xen-devel@lists.xenproject.org; Sat, 31 Oct 2020 22:08:45 +0000
X-Inumbo-ID: 4b62d2fe-9dd3-4c39-a649-d5431753a97f
Received: from mail.xenproject.org (unknown [104.130.215.37])
	by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
	id 4b62d2fe-9dd3-4c39-a649-d5431753a97f;
	Sat, 31 Oct 2020 22:08:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pcdTIXh7EHTnnw+EAjCl921EJ34pXehO6f4CQxmBt+w=; b=O9ztZfkiahQ2x6arTnm3t3LbBX
	ipfCiVP4KjYnF16xJBNDELJbzdjTWu07hggd5ASPZNjXHvYgK8WkwF2wvkZqJPTFPJIFP25H9vz/H
	b7+3YEwjrMQpmyRK4Jk1kySI83TcVtiqFsZSc4qodm9z3QuX7N8HgHHuTTDD2EERrEk8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146] helo=infra.test-lab.xenproject.org)
	by mail.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYz38-0007In-K6; Sat, 31 Oct 2020 22:08:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
	by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYz38-0005qn-9y; Sat, 31 Oct 2020 22:08:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim 4.92)
	(envelope-from <osstest-admin@xenproject.org>)
	id 1kYz38-0006Sj-9S; Sat, 31 Oct 2020 22:08:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-156329-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 156329: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8ead7af22bc596de23cdcc46e1f1a8c4e721d6d0
X-Osstest-Versions-That:
    ovmf=8cadcaa13d882816052ad4dec77faddd44a1c108
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 31 Oct 2020 22:08:38 +0000

flight 156329 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/156329/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8ead7af22bc596de23cdcc46e1f1a8c4e721d6d0
baseline version:
 ovmf                 8cadcaa13d882816052ad4dec77faddd44a1c108

Last test of basis   156316  2020-10-30 10:41:41 Z    1 days
Testing same since   156329  2020-10-31 05:43:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ard.biesheuvel@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8cadcaa13d..8ead7af22b  8ead7af22bc596de23cdcc46e1f1a8c4e721d6d0 -> xen-tested-master


